
## Depth First Search

Depth-first search (DFS) searches deeper into the problem space: it always generates a successor of the deepest unexpanded node, keeping the unexpanded nodes on a last-in first-out stack. More commonly, depth-first search is implemented recursively, with the recursion stack taking the place of an explicit node stack.

### Algorithm: Depth First Search

1. If the initial state is a goal state, quit and return success.
2. Otherwise, loop until success or failure is signaled:
   a) Generate a successor, say E, of the initial state. If there are no more successors, signal failure.
   b) Call Depth-First Search with E as the initial state.
   c) If success is returned, signal success. Otherwise, continue in this loop.
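The recursive procedure above can be sketched in Python as follows; the function names and the `successors` callback are illustrative, not part of the original algorithm statement. The recursion stack plays the role of the node stack, as noted above.

```python
def depth_first_search(state, is_goal, successors, visited=None):
    """Recursive DFS: the call stack holds the path of unexpanded nodes.

    Returns a list of states from `state` to a goal, or None on failure.
    `successors(state)` is assumed to return an iterable of next states.
    """
    if visited is None:
        visited = set()
    if is_goal(state):
        return [state]          # step 1: initial state is a goal
    visited.add(state)
    for succ in successors(state):       # step 2a: generate successors
        if succ not in visited:
            path = depth_first_search(succ, is_goal, successors, visited)
            if path is not None:         # step 2c: success signaled below
                return [state] + path
    return None                 # no successor led to a goal: failure
```

The `visited` set guards against revisiting states in a graph; on a tree it is unnecessary.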

### Advantages of Depth-First Search

- The advantage of depth-first search is that its memory requirement is only linear in the depth of the search, in contrast with breadth-first search, which requires far more space. The reason is that the algorithm only needs to store the stack of nodes on the path from the root to the current node.

- The time complexity of a depth-first search to depth d is O(b^d), since it generates the same set of nodes as breadth-first search, simply in a different order. Thus, in practice, depth-first search is time-limited rather than space-limited.

- If depth-first search finds a solution without exploring much of the space, the time and space it takes will be very small.

### Disadvantages of Depth-First Search

- The disadvantage of depth-first search is that it may go down the left-most path forever; even a finite graph can generate an infinite tree. One solution to this problem is to impose a cutoff depth on the search. The ideal cutoff is the solution depth d, but this value is rarely known in advance of actually solving the problem. If the chosen cutoff depth is less than d, the algorithm will fail to find a solution, whereas if the cutoff depth is greater than d, a large price is paid in execution time, and the first solution found may not be an optimal one.

- Depth-First Search is not guaranteed to find the solution.

- Nor is it guaranteed to find a minimal solution if more than one solution exists.
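The cutoff-depth remedy mentioned above can be sketched as depth-limited search; this is a minimal illustration (the name `depth_limited_search` and the `successors` callback are assumptions, not from the original text):

```python
def depth_limited_search(state, is_goal, successors, cutoff):
    """DFS with a cutoff depth, so the search cannot descend an
    infinite path forever. Returns a path to a goal, or None if the
    goal is deeper than `cutoff` (or unreachable)."""
    if is_goal(state):
        return [state]
    if cutoff == 0:
        return None                      # cutoff reached: give up here
    for succ in successors(state):
        path = depth_limited_search(succ, is_goal, successors, cutoff - 1)
        if path is not None:
            return [state] + path
    return None
```

As the text warns, a cutoff below the solution depth d makes the search fail, which is exactly what the second assertion below exercises.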


## Uniform-Cost Search

If the edges in the search graph do not all have the same cost, then breadth-first search generalizes to uniform-cost search. Instead of expanding nodes in order of their depth from the root, uniform-cost search expands nodes in order of their cost from the root. At each step, the next node n to be expanded is one whose cost g(n) is lowest, where g(n) is the sum of the edge costs from the root to node n. The nodes are stored in a priority queue. This algorithm is also known as Dijkstra's single-source shortest-path algorithm.

Whenever a node is chosen for expansion by uniform-cost search, a lowest-cost path to that node has been found. The worst-case time complexity of uniform-cost search is O(b^{c/m}), where c is the cost of an optimal solution and m is the minimum edge cost. Unfortunately, it also suffers the same memory limitation as breadth-first search.
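A minimal sketch of uniform-cost search using Python's `heapq` as the priority queue; `successors(state)` is assumed to yield `(next_state, edge_cost)` pairs:

```python
import heapq

def uniform_cost_search(start, is_goal, successors):
    """Expand nodes in order of path cost g(n) from the root.

    Returns (g, path) for a lowest-cost path to a goal, or None.
    """
    frontier = [(0, start, [start])]     # priority queue of (g, state, path)
    expanded = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state in expanded:
            continue                     # stale entry: already expanded cheaper
        if is_goal(state):
            return g, path               # first expansion is via a lowest-cost path
        expanded.add(state)
        for nxt, cost in successors(state):
            if nxt not in expanded:
                heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None
```

Note how the goal test happens when a node is *popped*, not when it is generated; that is what guarantees the first path found is a cheapest one, as the paragraph above states.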


## Learning Real-Time A* (LRTA*) Search

If a problem is to be solved repeatedly, with the same goal state but different initial states, then one would like an algorithm that improves its performance over time. Learning Real-Time A* (LRTA*) is such an algorithm. It behaves almost identically to RTA*, except that instead of storing the second-best f value of a node as its new heuristic value, it stores the best value. Once one problem instance is solved, the stored heuristic values are saved and become the initial values for the next problem instance.

While LRTA* is less efficient than RTA* for solving a single problem instance, if it starts with admissible initial heuristic values, then over repeated trials its heuristic values eventually converge to their exact values, at which point the algorithm returns optimal solutions.
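One trial of LRTA* can be sketched as follows; the dictionary `h` of learned values is the state that persists across trials (function and parameter names here are illustrative assumptions):

```python
def lrta_star_trial(start, is_goal, successors, h):
    """One trial of LRTA*. `h` is a dict of learned heuristic values
    that persists, and improves, across trials. `successors(s)` is
    assumed to yield (neighbour, edge_cost) pairs. Unseen states
    default to h = 0, an admissible initial estimate."""
    state, path = start, [start]
    while not is_goal(state):
        # f value of each neighbour: edge cost plus its current heuristic
        scored = [(c + h.get(n, 0), n) for n, c in successors(state)]
        best_f, best_n = min(scored)
        h[state] = best_f    # store the BEST value (RTA* stores second-best)
        state = best_n
        path.append(state)
    return path
```

Calling `lrta_star_trial` repeatedly with the same `h` dict models the repeated problem instances described above: the saved values from one trial seed the next.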


## Real-Time A* (RTA*) Search

Simply repeating minimin search for each move ignores information from previous searches and can result in infinite loops. In addition, since moves are committed on the basis of limited information, often the best move may be to undo the previous move. The principle of rationality is that backtracking should occur when the estimated cost of continuing the current path exceeds the cost of going back to a previous state plus the estimated cost of reaching the goal from that state. Real-Time A* (RTA*) implements this policy in constant time per move on a tree.

For each move, the f(n) = g(n) + h(n) value of each neighbour of the current state is computed, where g(n) is now the cost of the edge from the current state to the neighbour, instead of from the initial state. The problem solver moves to the neighbour with the minimum f(n) value and stores with the previous state the best f(n) value among the remaining neighbours. This represents the h(n) value of the previous state from the perspective of the new current state. This is repeated until a goal is reached. To determine the h(n) value of a previously visited state, the stored value is used, while for a new state the heuristic evaluator is called. Note that the heuristic evaluator may employ minimin lookahead search with branch-and-bound as well.

In a finite problem space in which there exists a path to a goal from every state, RTA* is guaranteed to find a solution, regardless of the heuristic evaluation function. Moreover, on a tree, RTA* makes locally optimal decisions given the information it has seen so far.
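The move rule described above can be sketched as follows; here the lookahead is just a direct call to the heuristic (rather than a full minimin search), and the simple fallback for a state with a single neighbour is an assumption of this sketch:

```python
def rta_star(start, is_goal, successors, heuristic):
    """RTA*: move to the neighbour with minimum f = edge_cost + h, and
    store with the state being left the SECOND-best f among its
    neighbours, so later backtracking through it is costed correctly.
    `successors(s)` is assumed to yield (neighbour, edge_cost) pairs."""
    learned = {}                 # h values updated during the search

    def h(s):
        return learned.get(s, heuristic(s))

    state, path = start, [start]
    while not is_goal(state):
        scored = sorted((c + h(n), n) for n, c in successors(state))
        best_f, best_n = scored[0]
        # second-best f becomes the stored h of the state we leave
        learned[state] = scored[1][0] if len(scored) > 1 else best_f
        state = best_n
        path.append(state)
    return path
```

The `learned` dict is exactly the "stored value for a previously visited state" of the paragraph above; fresh states fall back to the heuristic evaluator.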

## Minimin Search

**Minimin search** determines individual single-agent moves in constant time per move. The minimin search algorithm searches forward from the current state to a fixed depth determined by the informational or computational resources available. At the search horizon, the A* evaluation function f(n) = g(n) + h(n) is applied to the frontier nodes. Since all the decisions are made by a single agent, the value of an interior node is the minimum of the frontier values in the subtree below it. A single move is then made to the neighbour of the current state with the minimum value.

Most heuristic functions obey the triangle inequality characteristic of distance measures. As a result, f(n) = g(n) + h(n) is guaranteed to be monotonically nondecreasing along a path. Moreover, since minimin search has a fixed depth limit, we can apply depth-first branch-and-bound to prune the search tree. The performance improvement due to branch-and-bound is quite dramatic, in some cases extending the achievable search horizon by a factor of five relative to brute-force minimin search on sliding-tile puzzles.

Minimin search with branch-and-bound is an algorithm for evaluating the immediate neighbours of the current node. As such, it is run until the best child is identified, at which point the chosen move is executed in the real world. We can view the static evaluation function combined with lookahead search as simply a more accurate, but computationally more expensive, heuristic function. In fact, it provides an entire spectrum of heuristic functions trading off accuracy for cost, depending on the search horizon.
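Minimin lookahead with the branch-and-bound pruning described above can be sketched like this; `successors(s)` yielding `(neighbour, edge_cost)` pairs and the helper names are assumptions of the sketch:

```python
def minimin(state, g, depth, is_goal, successors, h, bound):
    """Minimin lookahead with depth-first branch-and-bound.
    Returns the minimum frontier value f = g + h below `state`,
    pruning any path whose f already reaches `bound`. The pruning is
    sound because f is monotonically nondecreasing along a path when
    h obeys the triangle inequality."""
    f = g + h(state)
    if f >= bound or depth == 0 or is_goal(state):
        return f
    best = bound
    for nxt, cost in successors(state):
        best = min(best, minimin(nxt, g + cost, depth - 1,
                                 is_goal, successors, h, best))
    return best

def best_move(state, depth, is_goal, successors, h):
    """Pick the neighbour of `state` with the lowest minimin value;
    this is the single move executed in the real world."""
    return min(successors(state),
               key=lambda nc: minimin(nc[0], nc[1], depth - 1,
                                      is_goal, successors, h,
                                      float('inf')))[0]
```

Passing the running `best` as the bound for each sibling is what makes the branch-and-bound pruning progressively tighter as better frontier values are found.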

**Related Articles:**

- AI Search Techniques
- Interleaving Search
- Real-Time A* Search
- Learning Real-Time A* Search