# 3.7.2 Multiple-Path Pruning

There is often more than one path to a node. If only one path is required, a search algorithm can prune from the frontier any path that leads to a node to which it has already found a path.

Multiple-path pruning is implemented by maintaining an explored set (traditionally called the closed list) of nodes that are at the end of paths that have been expanded. The explored set is initially empty. When a path $\langle n_{0},\ldots,n_{k}\rangle$ is selected at line 13 of Figure 3.4, if $n_{k}$ is already in the explored set, the path can be discarded. Otherwise, $n_{k}$ is added to the explored set, and the algorithm proceeds as before.
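This bookkeeping can be sketched as follows. The sketch below uses a FIFO frontier (i.e., breadth-first search) for concreteness; the function and argument names (`search_mpp`, `neighbors`, `is_goal`) are illustrative and not from the book's code.

```python
from collections import deque

def search_mpp(neighbors, start, is_goal):
    """Generic search with multiple-path pruning.

    `neighbors(n)` returns the successors of node n.  The explored set
    holds nodes at the end of paths that have already been expanded.
    """
    frontier = deque([[start]])
    explored = set()                 # the explored set ("closed list")
    while frontier:
        path = frontier.popleft()    # select a path <n0, ..., nk>
        nk = path[-1]
        if nk in explored:           # a path to nk was already expanded,
            continue                 # so this path can be discarded
        explored.add(nk)
        if is_goal(nk):
            return path
        for m in neighbors(nk):
            frontier.append(path + [m])
    return None
```

Note that the explored set also guarantees termination on finite graphs with cycles, since each node is expanded at most once.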

This approach does not guarantee that a least-cost path is never discarded. Something more sophisticated may have to be done to guarantee that an optimal solution is found. To ensure that the search algorithm can still find a lowest-cost path to a goal, one of the following can be done:

• Make sure that the first path found to any node is a lowest-cost path to that node, then prune all subsequent paths found to that node.

• If the search algorithm finds a lower-cost path to a node than one already found, it could remove all paths that used the higher-cost path to the node (because these cannot be on an optimal solution). That is, if there is a path $p=\langle s,\ldots,n,\ldots,m\rangle$ on the frontier, and a path $p^{\prime}$ to $n$ is found that has a lower cost than the portion of the path from $s$ to $n$ in $p$, then $p$ can be removed from the frontier.

• Whenever the search finds a lower-cost path to a node than a path to that node already found, it could incorporate a new initial section on the paths that have extended the initial path. Thus, if there is a path $p=\langle s,\ldots,n,\ldots,m\rangle$ on the frontier, and a path $p^{\prime}$ to $n$ is found with a cost lower than the portion of $p$ from $s$ to $n$, then $p^{\prime}$ can replace the initial part of $p$ up to $n$.

The first of these alternatives allows the use of the explored set without losing the ability to find an optimal path. The others require more sophisticated algorithms.

In lowest-cost-first search, the first path found to a node (i.e., when the node is selected from the frontier) is the lowest-cost path to the node. Pruning subsequent paths to that node cannot remove a lower-cost path to that node, and thus pruning subsequent paths to each node still enables an optimal solution to be found.
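The lowest-cost-first case can be sketched directly: selecting paths in order of cost and pruning at selection time records, for each node, the cost of the lowest-cost path to it. The names below (`first_selection_costs`, `neighbors`, `cost`) are illustrative assumptions, not the book's code.

```python
import heapq

def first_selection_costs(neighbors, cost, start):
    """Lowest-cost-first search with multiple-path pruning.

    Records the cost at which each node is first selected from the
    frontier; subsequent paths to that node are pruned.  The recorded
    cost is the lowest-cost path to the node.
    """
    frontier = [(0, start)]            # (path cost, end node)
    first_cost = {}
    while frontier:
        c, n = heapq.heappop(frontier)
        if n in first_cost:            # multiple-path pruning
            continue
        first_cost[n] = c              # first path found is lowest-cost
        for m in neighbors(n):
            if m not in first_cost:
                heapq.heappush(frontier, (c + cost(n, m), m))
    return first_cost
```

With h identically zero, this is also the special case of A* for which pruning is always safe.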

$A^{*}$ does not guarantee that when a path to a node is selected for the first time it is the lowest cost path to that node. Note that the admissibility theorem guarantees this for every path to a goal node but not for every path to any node. Whether it holds for all nodes depends on properties of the heuristic function.

A consistent heuristic is a non-negative function $h(n)$ on nodes $n$ that satisfies the constraint $h(n)\leq\mbox{cost}(n,n^{\prime})+h(n^{\prime})$ for any two nodes $n$ and $n^{\prime}$, where $\mbox{cost}(n,n^{\prime})$ is the cost of the least-cost path from $n$ to $n^{\prime}$. Note that if $h(g)=0$ for every goal $g$, then a consistent heuristic never overestimates the cost of going from a node $n$ to a goal.

Consistency can be guaranteed if the heuristic function satisfies the monotone restriction: $h(n)\leq\mbox{cost}(n,n^{\prime})+h(n^{\prime})$ for any arc $\langle n,n^{\prime}\rangle$. It is easier to check the monotone restriction, as it only depends on the arcs, whereas consistency depends on all pairs of nodes.
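Because the monotone restriction is an arc-by-arc condition, it can be checked directly. A minimal sketch, assuming arcs are given as a dict from $(n, n^{\prime})$ pairs to arc costs and the heuristic as a dict from nodes to values (both representations are assumptions for illustration):

```python
def satisfies_monotone_restriction(arcs, h):
    """Check h(n) <= cost(n, n') + h(n') on every arc <n, n'>.

    arcs: dict mapping (n, n') to the arc cost
    h:    dict mapping each node to its heuristic value
    """
    return all(h[n] <= c + h[np] for (n, np), c in arcs.items())
```

Checking consistency itself would require the least-cost path between every pair of nodes, which is why the arc-level test is preferred in practice.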

Consistency and the monotone restriction can be understood in terms of the triangle inequality, which specifies that the length of any side of a triangle cannot be greater than the sum of lengths of the other two sides. In consistency, the estimated cost of going from $n$ to a goal should not be more than the estimated cost of first going to $n^{\prime}$ then to a goal (see Figure 3.10). The heuristic function of Euclidean distance (the straight-line distance in a multidimensional Euclidean space) between two points when the cost function is distance is consistent. A heuristic function that is a solution to a simplified problem that has shorter solutions is also typically consistent.

With the monotone restriction, the $f$-values of the paths selected from the frontier are monotonically non-decreasing. That is, when the frontier is expanded, the $f$-values do not get smaller.
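This monotonicity can be observed by instrumenting A* to record the $f$-value of each path as it is selected. The following sketch assumes the same dict-based graph and heuristic representations as above; the names (`astar_f_values`, `neighbors`) are illustrative.

```python
import heapq

def astar_f_values(neighbors, h, start, is_goal):
    """Run A* and record the f-value of each path selected from the
    frontier.  With a heuristic satisfying the monotone restriction,
    the recorded f-values never decrease.

    `neighbors(n)` returns (successor, arc cost) pairs; `h` is a dict
    of heuristic values.
    """
    frontier = [(h[start], 0, start)]   # (f = cost + h, cost, end node)
    selected_f = []
    while frontier:
        f, c, n = heapq.heappop(frontier)
        selected_f.append(f)
        if is_goal(n):
            break
        for m, arc in neighbors(n):
            heapq.heappush(frontier, (c + arc + h[m], c + arc, m))
    return selected_f
```

Intuitively, extending a path by an arc cannot lower its $f$-value when the heuristic satisfies the monotone restriction, so the priority queue yields paths in non-decreasing $f$ order.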

###### Proposition 3.2.

With a consistent heuristic, multiple-path pruning can never prevent $A^{*}$ search from finding an optimal solution.

That is, under the conditions of Proposition 3.1, which guarantee $A^{*}$ finds an optimal solution, if the heuristic function is consistent, $A^{*}$ with multiple-path pruning will always find an optimal solution.

###### Proof.

We show that if the heuristic is consistent, then when $A^{*}$ expands a path $p^{\prime}$ to a node $n^{\prime}$, no other path to $n^{\prime}$ can have a lower cost than $p^{\prime}$. Thus, the algorithm can prune subsequent paths to any node and will still find an optimal solution.

We use a proof by contradiction. Suppose the algorithm has selected a path $p^{\prime}$ to node $n^{\prime}$ for expansion, but there exists a lower-cost path to node $n^{\prime}$, which it has not found yet. Then there must be a path $p$ on the frontier that is the initial part of the lower-cost path to $n^{\prime}$. Suppose path $p$ ends at node $n$. It must be that $f(p^{\prime})\leq f(p)$, because $p^{\prime}$ was selected before $p$. This means that

 $\mbox{cost}(p^{\prime})+h(p^{\prime})\leq\mbox{cost}(p)+h(p).$

If the path to $n^{\prime}$ via $p$ has a lower cost than the path $p^{\prime}$, then

 $\mbox{cost}(p)+\mbox{cost}(n,n^{\prime})<\mbox{cost}(p^{\prime})$

where $\mbox{cost}(n,n^{\prime})$ is the actual cost of the least-cost path from node $n$ to $n^{\prime}$. From these two inequalities, it follows that

 $\mbox{cost}(n,n^{\prime})<\mbox{cost}(p^{\prime})-\mbox{cost}(p)\leq h(p)-h(p^{\prime})=h(n)-h(n^{\prime})$

where the final equality follows because $h(p)$ is defined to be $h(n)$ and $h(p^{\prime})$ to be $h(n^{\prime})$. This cannot happen if $h(n)-h(n^{\prime})\leq\mbox{cost}(n,n^{\prime})$, which is the consistency condition. ∎

$A^{*}$ search in practice includes multiple-path pruning; if $A^{*}$ is used without multiple-path pruning, the lack of pruning should be made explicit. It is up to the designer of a heuristic function to ensure that the heuristic is consistent, and so an optimal path will be found.

Multiple-path pruning subsumes cycle pruning, because a cycle is another path to a node and is therefore pruned. Multiple-path pruning can be done in constant time, by setting a bit on each node to which a path has been found if the graph is explicitly stored, or using a hash function. Multiple-path pruning is preferred over cycle pruning for breadth-first methods where virtually all of the nodes considered have to be stored anyway. Depth-first search does not have to store all of the nodes at the end of paths already expanded; storing them in order to implement multiple-path pruning makes depth-first search exponential in space. For this reason, cycle pruning is preferred over multiple-path pruning for depth-first search.

Multiple-path pruning is not appropriate for $\mbox{IDA}^{*}$, because the space required to store the explored set is typically more than the space required for $A^{*}$, thus defeating the purpose of iterative deepening. Cycle pruning can be used with $\mbox{IDA}^{*}$. In domains where there are multiple paths to a node, $\mbox{IDA}^{*}$ loses much of its effectiveness.