Third edition of Artificial Intelligence: Foundations of Computational Agents, Cambridge University Press, 2023 is now available (including the full text).

4.8 Local Search

The previous sections have considered algorithms that systematically search the space. If the space is finite, they will either find a solution or report that no solution exists. Unfortunately, many search spaces are too big for systematic search and are possibly even infinite. In any reasonable time, systematic search will have failed to consider enough of the search space to give any meaningful results. This section and the next consider methods intended to work in these very large spaces. The methods do not systematically search the whole search space but they are designed to find solutions quickly on average. They do not guarantee that a solution will be found even if one exists, and so they are not able to prove that no solution exists. They are often the method of choice for applications where solutions are known to exist or are very likely to exist.

Local search methods start with a complete assignment of a value to each variable and try to iteratively improve this assignment by improving steps, by taking random steps, or by restarting with another complete assignment. A wide variety of local search techniques has been proposed. Understanding when these techniques work for different problems forms the focus of a number of research communities, including those from both operations research and AI.


Procedure Local-Search(V, dom, C)
     Inputs
          V: a set of variables
          dom: a function such that dom(X) is the domain of variable X
          C: a set of constraints to be satisfied
     Output
          a complete assignment that satisfies the constraints
     Local
          A[V]: an array of values indexed by V
     repeat
          for each variable X do
               A[X] ← a random value in dom(X)
          while (stopping criterion not met and A is not a satisfying assignment) do
               select a variable Y and a value v ∈ dom(Y)
               set A[Y] ← v
          if (A is a satisfying assignment) then
               return A
     until termination
Figure 4.6: Local search for finding a solution to a CSP

The generic local search algorithm for CSPs is given in Figure 4.6. The array A specifies an assignment of a value to each variable. The first for each loop assigns a random value to each variable. The first time it is executed is called a random initialization. Each iteration of the outer loop is called a try. A common way to implement a new try is to do a random restart. An alternative to random initialization is to use a construction heuristic that guesses a solution, which is then iteratively improved.
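The generic procedure of Figure 4.6 can be sketched in Python as follows. The representation of constraints as predicates over the assignment, and the step and try limits standing in for the stopping and termination criteria, are illustrative assumptions, not part of the book's formulation:

```python
import random

def local_search(variables, dom, constraints, max_steps=1000, max_tries=10):
    """Sketch of generic local search for a CSP.

    variables: list of variable names
    dom: dict mapping each variable to a list of its values
    constraints: list of predicates, each taking the assignment dict
                 and returning True when satisfied (an assumed encoding)
    """
    def satisfies_all(A):
        return all(c(A) for c in constraints)

    for _ in range(max_tries):
        # Random initialization: a random value for each variable.
        A = {X: random.choice(dom[X]) for X in variables}
        steps = 0
        # The walk: change one variable at a time until the stopping
        # criterion (here, a step limit) is met or A satisfies C.
        while steps < max_steps and not satisfies_all(A):
            Y = random.choice(variables)      # select a variable ...
            A[Y] = random.choice(dom[Y])      # ... and a value for it
            steps += 1
        if satisfies_all(A):
            return A
    return None  # the algorithm is incomplete: no answer is not a proof
```

Here the variable and value are selected at random; smarter selection strategies (considered later) plug in at exactly those two lines.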

The while loop does a local search, or a walk, through the assignment space. It maintains a current assignment A, considers a set of neighbors of the current assignment, and selects one to be the next current assignment. In Figure 4.6, the neighbors of a total assignment are those assignments that differ in the assignment of a single variable. Alternative sets of neighbors can also be used and will result in different search algorithms.

This walk through assignments continues until either a satisfying assignment is found and returned or some stopping criterion is satisfied. The stopping criterion is used to decide when to stop the current local search and do a random restart, starting again with a new assignment. A stopping criterion could be as simple as stopping after a certain number of steps.

This algorithm is not guaranteed to halt. In particular, it goes on forever if there is no solution, and it is possible to get trapped in some region of the search space. An algorithm is complete if it finds an answer whenever there is one. This algorithm is incomplete.

One instance of this algorithm is random sampling. In this algorithm, the stopping criterion is always true so that the while loop is never executed. Random sampling keeps picking random assignments until it finds one that satisfies the constraints, and otherwise it does not halt. Random sampling is complete in the sense that, given enough time, it guarantees that a solution will be found if one exists, but there is no upper bound on the time it may take. It is very slow. The efficiency depends only on the product of the domain sizes and how many solutions exist.
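A minimal sketch of random sampling, under the same assumed constraint-as-predicate encoding as above; the `max_samples` cap is an added safeguard so the sketch halts even when no solution exists, whereas the pure algorithm would run forever in that case:

```python
import random

def random_sampling(variables, dom, constraints, max_samples=100000):
    """Random sampling: the stopping criterion is always true, so the
    walk is never taken; each try just draws a fresh random assignment."""
    for _ in range(max_samples):
        A = {X: random.choice(dom[X]) for X in variables}
        if all(c(A) for c in constraints):
            return A
    return None  # cap reached; the pure algorithm would keep sampling
```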

Another instance is a random walk. In this algorithm the while loop is only exited when it has found a satisfying assignment (i.e., the stopping criterion is always false and there are no random restarts). In the while loop it selects a variable and a value at random. Random walk is also complete in the same sense as random sampling. Each step takes less time than resampling all variables, but it can take more steps than random sampling, depending on how the solutions are distributed. Variants of this algorithm are applicable when the domain sizes of the variables differ; a random walk algorithm can either select a variable at random and then a value at random, or select a variable-value pair at random. The latter is more likely to select a variable when it has a larger domain.
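The two variable-selection variants of random walk can be sketched as follows (same assumed constraint encoding; the `max_steps` cap is added so the sketch halts when no solution exists, unlike the pure algorithm):

```python
import random

def random_walk(variables, dom, constraints, max_steps=100000, by_pair=False):
    """Random walk: one random initialization, then repeated single-variable
    changes with no restarts. With by_pair=True, a variable-value pair is
    chosen uniformly, which favors variables with larger domains."""
    A = {X: random.choice(dom[X]) for X in variables}
    pairs = [(X, v) for X in variables for v in dom[X]]
    for _ in range(max_steps):
        if all(c(A) for c in constraints):
            return A
        if by_pair:
            Y, v = random.choice(pairs)       # uniform over variable-value pairs
        else:
            Y = random.choice(variables)      # uniform over variables,
            v = random.choice(dom[Y])         # then uniform over dom(Y)
        A[Y] = v
    return None
```

When all domains have the same size the two variants coincide; they differ only when domain sizes differ, as the text notes.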