Posts Tagged ‘Search’

Interleaving Search

Interleaving Search And Execution

In AI search there are computational limits, and in two-player games there is also uncertainty about the opponent's moves. It is therefore wise to interleave search and execution, with each search determining only the next move to be made. This paradigm is also applicable to single-agent problems. In the case of autonomous vehicle navigation, for example, information is limited by the horizon of the vehicle's sensors, and the vehicle must physically move to acquire more information. Thus, one move must be computed at a time, and that move executed before computing the next. The algorithms listed below are designed to address this scenario.
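Before turning to those algorithms, here is a minimal sketch of the interleaving loop itself, written in Python. The environment object, the plan_next_move function and the is_goal test are hypothetical placeholders, standing in for the agent's world and for a one-move planner such as minimin or real-time A*; this is a sketch of the paradigm, not a definitive implementation.

def run_agent(environment, plan_next_move, is_goal):
    # Interleave search and execution: each call to plan_next_move performs a
    # limited-horizon search and commits to a single move, which is executed
    # before the next search begins from the newly observed state.
    state = environment.current_state()
    while not is_goal(state):
        move = plan_next_move(state)
        environment.execute(move)
        state = environment.current_state()   # new sensor information may now be available
    return state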

Related Articles

Minimin Search

Real-Time A* Search

Learning Real-Time A* Search

Related Articles

AI Search Techniques
Brute Force Search
Heuristic Search
Two-Player Games Search

Intelligent Backtracking

What is Intelligent Backtracking?

The performance of brute-force backtracking can be improved by using a number of techniques, such as variable ordering, value ordering, backjumping, and forward checking.

The order in which variables are instantiated can have a large effect on the size of the search tree. The idea of variable ordering is to order the variables from most constrained to least constrained. For example, if a variable has only a single value remaining that is consistent with the previously instantiated variables, it should be assigned that value immediately. In general, the variables should be instantiated in increasing order of the size of their remaining domains. This can either be done statically at the beginning of the search, or dynamically, reordering the remaining variables each time a variable is assigned a new value.
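As an illustration, here is a minimal sketch of dynamic variable ordering in Python. It assumes a CSP represented by a domains dictionary mapping each variable to the values still consistent with the current partial assignment; this representation is an assumption for illustration, not a fixed interface.

def select_most_constrained(domains, assignment):
    # Dynamic variable ordering: among the uninstantiated variables, pick the
    # one with the fewest values remaining in its domain.
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))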

The order in which the values of a given variable are chosen determines the order in which the tree is searched. Since it does not affect the size of the tree, it makes no difference if all solutions are to be found. If only a single solution is required, however, value ordering can decrease the time required to find a solution. In general, one should order the values from least constraining to most constraining in order to minimize the time required to find a first solution.
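A corresponding sketch of least-constraining-value ordering, again under assumed representations: neighbours maps each variable to the variables it shares constraints with, and conflicts(var, value, other, other_value) is an assumed predicate returning True when the two assignments are inconsistent.

def order_values(var, domains, neighbours, conflicts):
    # Value ordering: try first the value that rules out the fewest remaining
    # values of the variables sharing a constraint with var.
    def ruled_out(value):
        return sum(1 for other in neighbours[var]
                     for other_value in domains[other]
                     if conflicts(var, value, other, other_value))
    return sorted(domains[var], key=ruled_out)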

An important idea, originally called backjumping, is that when an impasse is reached, instead of simply undoing the last decision made, the decision that actually caused the failure should be modified. For example, consider a three-variable problem where the variables are instantiated in the order x, y, z. Assume that values have been chosen for both x and y, but that all possible values for z conflict with the value chosen for x. In chronological backtracking, the value chosen for y would be changed, and then all the possible values for z would be tested again, to no avail. A better strategy in this case is to go back to the source of the failure and change the value of x before trying different values for y.
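The following is a simplified backjumping sketch in the style of Gaschnig's algorithm, not a definitive implementation. It assumes a binary CSP given as a list of variables, a domains dictionary and the conflicts predicate described above. When every value of the current variable immediately clashes with an earlier assignment, the search jumps straight back to the deepest of those earlier variables instead of undoing the most recent choice.

def backjump(variables, domains, conflicts, assignment, depth):
    # Returns (solution, jump_to): a complete assignment, or the level to back up to.
    if depth == len(variables):
        return dict(assignment), None
    var = variables[depth]
    deepest = -1                 # deepest earlier level involved in a failed check
    any_consistent = False
    for value in domains[var]:
        clash = next((d for d in range(depth)
                      if conflicts(var, value, variables[d], assignment[variables[d]])),
                     None)
        if clash is not None:    # value conflicts with an earlier assignment
            deepest = max(deepest, clash)
            continue
        any_consistent = True
        assignment[var] = value
        solution, jump_to = backjump(variables, domains, conflicts, assignment, depth + 1)
        del assignment[var]
        if solution is not None:
            return solution, None
        if jump_to < depth:      # a deeper dead end jumps straight past this variable
            return None, jump_to
    # Dead end: if no value even passed its checks, jump to the deepest conflicting
    # level; otherwise back up chronologically to the previous level.
    return None, deepest if not any_consistent else depth - 1

A search would be started with backjump(variables, domains, conflicts, {}, 0).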

When a variable is assigned a value, the idea of forward checking is to check each remaining uninstantiated variable to make sure that there is at least one assignment for each of them that is consistent with the previous assignments. If not, the original variable is assigned its next value.
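A minimal forward-checking sketch under the same assumed representation; it returns the pruned domains to carry into the deeper search, or None when some remaining variable is left with no consistent value.

def forward_check(var, value, domains, assignment, conflicts):
    # After tentatively assigning var = value, prune every inconsistent value
    # from the domains of the remaining uninstantiated variables.
    pruned = {v: list(values) for v, values in domains.items()}
    pruned[var] = [value]
    for other in domains:
        if other == var or other in assignment:
            continue
        pruned[other] = [v for v in pruned[other]
                         if not conflicts(var, value, other, v)]
        if not pruned[other]:
            return None          # wipe-out: var must be assigned its next value
    return pruned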

Related Articles

Two Player Games Search

Limited Discrepancy Search

Limited Discrepancy Search (LDS) Algorithm

Limited Discrepancy Search (LDS) is a completely general tree-search algorithm, but it is most useful in the context of constraint-satisfaction problems in which the entire tree is too large to search exhaustively. In that case, we would like to search the subset of the tree that is most likely to yield a solution in the time available. Assume that we can heuristically order a binary tree so that at any node, the left branch is more likely to lead to a solution than the right branch. LDS then proceeds in a series of depth-first iterations. The first iteration explores just the left-most path in the tree. The second iteration explores those root-to-leaf paths with exactly one right branch, or discrepancy, in them. In general, each iteration explores those paths with exactly k discrepancies, with k ranging from zero to the depth of the tree. The last iteration explores just the right-most path. Under certain assumptions, one can show that LDS is likely to find a solution sooner than a strict left-to-right depth-first search.
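Here is a sketch of LDS under the assumptions made in the description above: a uniform-depth binary tree whose children(node) callback returns the heuristically preferred left child first (or None at a leaf), with is_goal testing for a solution. These callbacks are assumptions for illustration. Each iteration explores the paths with exactly k discrepancies.

def lds_probe(node, k, depth_remaining, children, is_goal):
    if is_goal(node):
        return node
    succ = children(node)
    if succ is None:                      # leaf that is not a goal
        return None
    left, right = succ
    result = None
    if depth_remaining > k:               # room to spend all k discrepancies deeper down
        result = lds_probe(left, k, depth_remaining - 1, children, is_goal)
    if result is None and k > 0:          # spend one discrepancy: take the right branch
        result = lds_probe(right, k - 1, depth_remaining - 1, children, is_goal)
    return result

def lds(root, depth, children, is_goal):
    # Iterations k = 0, 1, ..., depth: paths with exactly k right branches (discrepancies).
    for k in range(depth + 1):
        result = lds_probe(root, k, depth, children, is_goal)
        if result is not None:
            return result
    return None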

Related Articles

Two-Player Games Search Algorithms

Intelligent Backtracking

Minimax Search

Minimax Search Algorithm

The standard algorithm for two-player perfect-information games such as chess, checkers or Othello is minimax search with heuristic static evaluation. The minimax search algorithm searches forward to a fixed depth in the game tree, limited by the amount of time available per move. At this search horizon, a heuristic function is applied to the frontier nodes. In this case, a heuristic evaluation is a function that takes a board position and returns a number that indicates how favourable that position is for one player relative to the other. For example, a very simple heuristic evaluator for chess would count the total number of pieces on the board for one player, appropriately weighted by their relative strength, and subtract the weighted sum of the opponent's pieces. Thus, large positive values would correspond to strong positions for one player, called MAX, whereas large negative values would represent advantageous situations for the opponent, called MIN.

Given the heuristic evaluations of the frontier nodes, the minimax search algorithm recursively computes the values for the interior nodes of the tree according to the minimax rule. The value of a node where it is MAX's turn to move is the maximum of the values of its children, while the value of a node where MIN is to move is the minimum of the values of its children. Thus, at alternate levels of the tree, the minimum or the maximum values of the children are backed up. This continues until the values of the immediate children of the current position are computed, at which point a move is made to the child with the maximum or minimum value, depending on whose turn it is to move.
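A compact minimax sketch along these lines. The successors(position) and evaluate(position) callbacks are assumed, game-specific functions: evaluate returns large positive values for positions favourable to MAX and large negative values for positions favourable to MIN.

def minimax(position, depth, maximizing, successors, evaluate):
    children = list(successors(position))
    if depth == 0 or not children:        # search horizon reached, or terminal position
        return evaluate(position)
    values = [minimax(child, depth - 1, not maximizing, successors, evaluate)
              for child in children]
    return max(values) if maximizing else min(values)

def best_move(position, depth, maximizing, successors, evaluate):
    # Back up the children's values and move to the best child for the player to move.
    children = list(successors(position))
    value_of = lambda child: minimax(child, depth - 1, not maximizing,
                                     successors, evaluate)
    return (max if maximizing else min)(children, key=value_of)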

Recursive Best First Search

Recursive Best First Search

The memory limitation of the heuristic path algorithm can be overcome simply by replacing the best-first search with IDA* search using the same weighted evaluation function, f(n) = (1-w)·g(n) + w·h(n), with w >= 1/2.

IDA* search is no longer a best-first search, since the total cost of a child can be less than that of its parent, and thus nodes are not necessarily expanded in best-first order. Recursive Best-First Search (RBFS) is an alternative algorithm. Recursive Best-First Search is a best-first search that runs in space that is linear with respect to the maximum search depth, regardless of the cost function used. Even with an admissible cost function, Recursive Best-First Search generates fewer nodes than IDA*, and is generally superior to IDA*, except for a small increase in the cost per node generation.

It works by maintaining on the recursion stack the complete path to the current node being expanded as well as all immediate siblings of nodes on that path, along with the cost of the best node in the sub-tree explored below each sibling. Whenever the cost of the current node exceeds that of some other node in the previously expanded portion of the tree, the algorithm backs up to their deepest common ancestor, and continues the search down the new path. In effect, the algorithm maintains a separate threshold for each sub-tree diverging from the current search path.
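A hedged sketch of RBFS in the style of the standard textbook formulation. It assumes problem-specific callbacks: successors(state), yielding (child, step_cost) pairs, a heuristic h, and a goal test; these names are assumptions for illustration. Each recursive call carries an upper bound equal to the best f-value of any alternative path seen so far, and a failed call reports the best f-value found below it so the parent can revise its stored threshold.

import math

def rbfs(state, g, f, f_limit, successors, h, is_goal):
    # Returns (solution_path, new_f): the path on success, or None together with
    # the best f-value of any unexplored node below `state`.
    if is_goal(state):
        return [state], f
    children = []
    for child, cost in successors(state):
        child_g = g + cost
        # A child's f-value is never allowed below its parent's stored f-value.
        child_f = max(child_g + h(child), f)
        children.append([child_f, child_g, child])
    if not children:
        return None, math.inf
    while True:
        children.sort(key=lambda c: c[0])
        best_f, best_g, best = children[0]
        if best_f > f_limit or best_f == math.inf:
            return None, best_f           # this subtree is no longer the best: back up
        alternative = children[1][0] if len(children) > 1 else math.inf
        result, children[0][0] = rbfs(best, best_g, best_f,
                                      min(f_limit, alternative),
                                      successors, h, is_goal)
        if result is not None:
            return [state] + result, best_f

def recursive_best_first_search(start, successors, h, is_goal):
    path, _ = rbfs(start, 0, h(start), math.inf, successors, h, is_goal)
    return path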

Related Articles

Heuristic Search