The third edition of Artificial Intelligence: foundations of computational agents, Cambridge University Press, 2023 is now available (including full text).
Neither of the preceding methods is ideal. Breadth-first search, which guarantees that a path will be found, requires exponential space. Depth-first search may not halt on infinite graphs or graphs with cycles. One way to combine the space efficiency of depth-first search with the optimality of breadth-first search is to use iterative deepening. The idea is to recompute the elements of the breadth-first frontier rather than storing them. Each recomputation can be a depth-first search, which thus uses less space.
Iterative deepening repeatedly calls a depth-bounded searcher: a depth-first searcher that takes in an integer depth bound and never explores paths with more arcs than this depth bound. Iterative deepening first does a depth-first search to depth 1 by building paths of length 1 in a depth-first manner. If that does not find a solution, it can build paths to depth 2, then depth 3, and so on until a solution is found. When a search with depth bound $n$ fails to find a solution, it can throw away all of the previous computation and start again with a depth bound of $n+1$. Eventually, it will find a solution if one exists, and, as it is enumerating paths in order of the number of arcs, a path with the fewest arcs will always be found first.
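As a concrete sketch of the depth-bounded searcher, the following Python function explores paths depth-first but refuses to extend any path beyond the given bound. The graph representation (a dict mapping each node to its neighbors) and all names here are illustrative, not taken from the book's pseudocode.

```python
def depth_bounded_search(graph, path, goal, bound):
    """Depth-first search that never extends `path` by more than `bound` arcs.

    `graph` maps each node to a list of neighbors; `path` is the list of
    nodes visited so far. Returns a path ending at `goal`, or None if no
    such path exists within the bound.
    """
    node = path[-1]
    if node == goal:
        return path
    if bound > 0:
        for neighbor in graph.get(node, []):
            result = depth_bounded_search(graph, path + [neighbor],
                                          goal, bound - 1)
            if result is not None:
                return result
    return None  # no path to the goal within the depth bound
```

Called with bound 1, it fails on a goal that is two arcs away; called again with bound 2, it succeeds, which is exactly the restart behavior iterative deepening exploits.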
To ensure it halts for finite graphs, iterative deepening search needs to distinguish between
failure because the depth bound was reached and
failure due to exhausting the search space.
In the first case, the search must be retried with a larger depth bound. In the second case, it is a waste of time to try again with a larger depth bound, because no path exists no matter what the depth, and so the whole search should fail.
Pseudocode for iterative deepening search, ID_search, is presented in Figure 3.8. The local procedure Depth_bounded_search implements a depth-bounded depth-first search (using recursion to keep the stack) that places a limit on the length of the paths for which it is searching. It either returns a path, or reaches the end of its code and returns with no path. It uses a depth-first search to find paths of length $k+b$, where $k$ is the path length of the given path from the start and $b$ is a non-negative integer. The iterative deepening searcher calls Depth_bounded_search for increasing depth bounds. This program finds the paths to goal nodes in the same order as does breadth-first search. Note that it only needs to check for a goal when $b=0$, because it knows that there are no solutions for lower bounds.
To ensure that iterative deepening search fails whenever breadth-first search would fail, it needs to keep track of when increasing the bound could help find an answer. A depth-bounded search fails naturally – it fails by exhausting the search space – if the search did not prune any paths due to the depth bound. In this case, the program can stop and report no paths. This is handled through the variable hit_depth_bound in Figure 3.8, which is false when Depth_bounded_search is called initially, and becomes true if the search is pruned due to the depth bound. If it is true at the end of a depth-bounded search, the search failed due to hitting the depth bound, and so the depth bound can be increased, and another depth-bounded search is carried out.
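The scheme just described can be rendered as runnable Python. This is a sketch of the logic of Figure 3.8, not the book's pseudocode itself: the graph representation (a dict mapping each node to its neighbors) and the function names are illustrative. It checks for a goal only at the depth bound, and uses a hit_depth_bound flag to detect natural failure.

```python
def id_search(graph, start, goal):
    """Iterative deepening: repeated depth-bounded DFS with increasing
    bounds. Returns a path with the fewest arcs, or None once a
    depth-bounded search fails without ever hitting the depth bound
    (i.e., the finite search space is exhausted)."""
    hit_depth_bound = False

    def depth_bounded_search(path, bound):
        nonlocal hit_depth_bound
        node = path[-1]
        if bound > 0:
            for neighbor in graph.get(node, []):
                result = depth_bounded_search(path + [neighbor], bound - 1)
                if result is not None:
                    return result
        else:
            # only check for a goal at the bound; shallower goals were
            # already checked in earlier (smaller-bound) iterations
            if node == goal:
                return path
            if graph.get(node):
                # pruned: this node has successors beyond the bound,
                # so increasing the bound could still help
                hit_depth_bound = True
        return None

    bound = 0
    while True:
        hit_depth_bound = False
        result = depth_bounded_search([start], bound)
        if result is not None:
            return result
        if not hit_depth_bound:
            return None  # natural failure: no path exists at any depth
        bound += 1
```

On a finite graph with no path to the goal, some iteration completes without setting hit_depth_bound, so the loop terminates with failure rather than increasing the bound forever.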
The obvious problem with iterative deepening is the wasted computation that occurs at each step. This, however, may not be as bad as one might think, particularly if the branching factor is high. Consider the running time of the algorithm. Assume a constant branching factor of $b > 1$. Consider the search where the bound is $k$. At depth $k$, there are $b^k$ nodes; each of these has been generated once. The nodes at depth $k-1$ have been generated twice, those at depth $k-2$ have been generated three times, and so on, and the nodes at depth 1 have been generated $k$ times. Thus, the total number of paths expanded is

$$b^k + 2b^{k-1} + 3b^{k-2} + \dots + kb = b^k\left(1 + 2b^{-1} + 3b^{-2} + \dots + kb^{1-k}\right) < b^k\left(\sum_{i=1}^{\infty} i\, b^{1-i}\right) = b^k\left(\frac{b}{b-1}\right)^2.$$
Breadth-first search expands $\sum_{i=1}^{k} b^i = \frac{b^{k+1}-b}{b-1}$ nodes. Thus iterative deepening has an asymptotic overhead of $b/(b-1)$ times the cost of expanding the nodes at depth $k$ using breadth-first search. Thus, when $b=2$ there is an overhead factor of 2, and when $b=3$ there is an overhead of 1.5. This algorithm is $O(b^k)$ and there cannot be an asymptotically better uninformed search strategy. Note that, if the branching factor is close to 1, this analysis does not work because then the denominator would be close to zero; see Exercise 9.
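The overhead factor can be checked numerically. The sketch below counts path expansions under the regeneration argument above (nodes at depth $d$ are generated $k-d+1$ times) and compares against breadth-first search; the function names are ours, and the exact totals depend on this counting convention.

```python
def id_expansions(b, k):
    # total paths iterative deepening expands up to bound k:
    # nodes at depth d are regenerated (k - d + 1) times
    return sum((k - d + 1) * b**d for d in range(1, k + 1))

def bfs_expansions(b, k):
    # breadth-first search expands sum_{i=1}^{k} b^i nodes
    return sum(b**d for d in range(1, k + 1))

for b in (2, 3):
    ratio = id_expansions(b, 25) / bfs_expansions(b, 25)
    print(f"b={b}: measured overhead {ratio:.4f}, limit b/(b-1) = {b/(b-1)}")
```

For $b=2$ the measured ratio is essentially 2, and for $b=3$ it is essentially 1.5, matching the asymptotic overhead $b/(b-1)$.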