DEPTH-FIRST ITERATIVE DEEPENING

Depth-First Iterative Deepening (DFID) search combines the best features of breadth-first search and depth-first search. DFID first performs a depth-first search to depth one, then starts over and executes a complete depth-first search to depth two, and continues running depth-first searches to successively greater depths until a solution is found.
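
The procedure is short enough to sketch. Below is a minimal sketch in Python, assuming hypothetical successors(node) and goal(node) callables supplied by the caller; neither name comes from the text.

    def depth_limited_dfs(node, goal, successors, limit, path):
        # Depth-first search that never descends below the given depth limit.
        if goal(node):
            return path + [node]
        if limit == 0:
            return None
        for child in successors(node):
            result = depth_limited_dfs(child, goal, successors, limit - 1, path + [node])
            if result is not None:
                return result
        return None

    def dfid(start, goal, successors, max_depth=50):
        # Run depth-first searches to depth 1, 2, 3, ... until a goal is found.
        # Only the current path is stored, so space is O(d) for a solution at depth d.
        for depth in range(1, max_depth + 1):
            result = depth_limited_dfs(start, goal, successors, depth, [])
            if result is not None:
                return result  # the first solution found lies on a shortest path
        return None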

Since it never generates a node until all shallower nodes have been generated, the first solution found by DFID is guaranteed to lie on a shortest path. Moreover, since at any given point it is executing a depth-first search, saving only a stack of nodes, and the algorithm terminates when it finds a solution at depth d, the space complexity of DFID is only O(d).

Although it appears that DFID wastes a great deal of time in the iterations prior to the one that finds a solution, this extra work is usually insignificant. To see this, note that the number of nodes at depth d is b^d, and each of these nodes is generated once, during the final iteration. The number of nodes at depth d-1 is b^(d-1), and each of these is generated twice, once during the final iteration and once during the penultimate iteration.

In general, the number of nodes generated by DFID is b^d + 2b^(d-1) + 3b^(d-2) + ... + db. This is asymptotically O(b^d) if b is greater than one, since for larger values of d the lower-order terms become insignificant. In other words, most of the work goes into the final iteration, and the cost of the previous iterations is relatively small. The ratio of the number of nodes generated by DFID to those generated by breadth-first search on a tree is approximately b/(b-1). In fact, depth-first iterative deepening is asymptotically optimal in terms of time and space among all brute-force shortest-path algorithms on a tree.
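
The sums above are easy to check numerically. The snippet below is only an illustration of the arithmetic; the sample values of b and d are chosen here rather than taken from the text.

    def dfid_nodes(b, d):
        # b^d + 2*b^(d-1) + ... + d*b
        return sum((d - k + 1) * b**k for k in range(1, d + 1))

    def bfs_nodes(b, d):
        # b + b^2 + ... + b^d
        return sum(b**k for k in range(1, d + 1))

    for b, d in [(2, 10), (10, 6)]:
        print(b, d, dfid_nodes(b, d) / bfs_nodes(b, d), b / (b - 1))
        # the measured ratio approaches b/(b-1) as b and d grow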

If the edge costs differ from one another then one can run an iterative deepening version of uniform-cost search, where the depth cutoff is replaced by a cutoff on the g(n) cost of a node. At the end of each iteration, the threshold for the next iteration is set to the minimum cost of all nodes generated on the previous iteration whose cost exceeded the previous threshold.
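
A minimal sketch of this cost-bounded variant follows, assuming successors(node) yields (child, edge_cost) pairs with nonnegative costs and goal(node) tests for a goal; both names are placeholders rather than any particular library's API.

    import math

    def cost_limited_dfs(node, g, threshold, goal, successors):
        # Depth-first search bounded by the path cost g(n). Returns either a
        # solution path or the minimum cost that exceeded the threshold.
        if g > threshold:
            return None, g
        if goal(node):
            return [node], None
        next_threshold = math.inf
        for child, cost in successors(node):
            path, t = cost_limited_dfs(child, g + cost, threshold, goal, successors)
            if path is not None:
                return [node] + path, None
            next_threshold = min(next_threshold, t)
        return None, next_threshold

    def iterative_deepening_uniform_cost(start, goal, successors):
        threshold = 0
        while threshold < math.inf:
            path, next_threshold = cost_limited_dfs(start, 0, threshold, goal, successors)
            if path is not None:
                return path
            threshold = next_threshold  # minimum cost that exceeded the previous bound
        return None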

In a graph with cycles, however, breadth-first search may be much more efficient than any depth-first search. The reason is that a breadth-first search can check for duplicate nodes, whereas a depth-first search cannot. Thus, the complexity of breadth-first search grows only with the number of nodes at a given depth, while the complexity of depth-first search depends on the number of paths of a given length. For example, in a square grid the number of nodes within a radius r of the origin is O(r^2), whereas the number of paths of length r is O(3^r), since every node has three neighbors, not counting its parent. Thus, in a graph with a large number of very short cycles, breadth-first search is preferable to depth-first search, if sufficient memory is available.
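
To make the duplicate-detection point concrete, the sketch below runs a breadth-first search over a square grid while keeping a visited set, so each cell is expanded at most once; the grid representation and radius limit are illustrative choices, not taken from the text.

    from collections import deque

    def bfs_grid(start, radius):
        # Breadth-first search over grid cells within a Manhattan radius of the
        # origin, with a visited set so each cell is generated at most once.
        visited = {start}
        frontier = deque([start])
        while frontier:
            x, y = frontier.popleft()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if abs(nx) + abs(ny) <= radius and (nx, ny) not in visited:
                    visited.add((nx, ny))
                    frontier.append((nx, ny))
        return len(visited)  # grows as O(r^2), unlike the O(3^r) paths

    print(bfs_grid((0, 0), 10))  # 221 distinct cells within Manhattan radius 10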
