4 Reasoning with Constraints


4.7 Local Search

The preceding algorithms systematically search the space of assignments of values to variables. If the space is finite, they will either find a solution or report that no solution exists. Unfortunately, many spaces are too big for systematic search and are possibly even infinite. In any reasonable time, systematic search will have failed to consider enough of the space to give any meaningful results. This section and the next consider methods intended to work in these very large spaces. The methods do not systematically search the whole search space but they are designed to find solutions quickly on average. They do not guarantee that a solution will be found even if one exists, and so they are not able to prove that no solution exists. They are often the method of choice for applications where solutions are known to exist or are very likely to exist.

Local search methods start with a total assignment of a value to each variable and try to improve this assignment iteratively by taking improving steps, by taking random steps, or by restarting with another total assignment. Many different local search techniques have been proposed. Understanding when these techniques work for different problems forms the focus of a number of research communities, in both operations research and AI.

1: procedure Local_search(Vs,dom,Cs)
2:      Inputs
3:          Vs: a set of variables
4:          dom: a function such that dom(X) is the domain of variable X
5:          Cs: set of constraints to be satisfied      
6:      Output
7:          total assignment that satisfies the constraints
8:      Local
9:          A: an array of values indexed by the variables in Vs      
10:      repeat
11:          for each variable X in Vs do
12:               A[X]:=a random value in dom(X);          
13:          while not stop_walk() & A is not a satisfying assignment do
14:               Select a variable Y and a value w ∈ dom(Y)
15:               Set A[Y]:=w          
16:          if A is a satisfying assignment then
17:               return A          
18:      until termination
Figure 4.7: Local search for finding a solution to a CSP

A generic local search algorithm for CSPs is given in Figure 4.7. The array A specifies an assignment of a value to each variable. The first for each loop assigns a random value to each variable. The first time it is executed is called a random initialization. Each iteration of the outer loop is called a try. Each subsequent assignment of a random value to each variable in lines 11 and 12 is a random restart. An alternative to random initialization is to use a more informed guess, utilizing some heuristic or prior knowledge, which is then iteratively improved.

The while loop (line 13 to line 15) does a local search, or a walk, through the assignment space. It considers a set of possible successors of the total assignment A, and selects one to be the next total assignment. In Figure 4.7, the possible successors of a total assignment are those assignments that differ in the assignment of a single variable.

This walk through assignments continues until either a satisfying assignment is found and returned or some stopping criterion is satisfied. The stopping criterion, the stop_walk() function in the algorithm, is used to decide when to stop the current local search and do a random restart, starting again with a new assignment. A stopping criterion could be as simple as stopping after a certain number of steps.
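As a concrete, simplified rendering of Figure 4.7, the following Python sketch selects the variable and value at random, uses a fixed step budget as the stop_walk() criterion, and uses a fixed number of tries as the termination condition. Constraints are assumed to be Boolean functions of the assignment; the names local_search, satisfies, max_steps, and max_tries are illustrative rather than part of the book's code.

    import random

    def local_search(Vs, dom, Cs, max_steps=1000, max_tries=100):
        """Sketch of Figure 4.7: Vs is a list of variables, dom maps each
        variable to a list of values, and Cs is a list of Boolean functions
        that each take the assignment dictionary as their argument."""
        def satisfies(A):
            return all(c(A) for c in Cs)
        for _ in range(max_tries):                       # each iteration is a try
            A = {X: random.choice(dom[X]) for X in Vs}   # random (re)initialization
            for _ in range(max_steps):                   # stop_walk(): fixed step budget
                if satisfies(A):
                    return A
                Y = random.choice(Vs)                    # select a variable Y
                A[Y] = random.choice(dom[Y])             # and a value w in dom(Y)
            if satisfies(A):                             # the last step may have succeeded
                return A
        return None                                      # termination without a solution

For example, local_search(["X", "Y"], {"X": [1, 2, 3], "Y": [1, 2, 3]}, [lambda A: A["X"] != A["Y"]]) searches for any assignment in which the two variables take different values.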

This algorithm is not guaranteed to halt. In particular, if the termination condition is never satisfied, it goes on forever when there is no solution, and, even if there is a solution, it can become trapped in some region of the search space. An algorithm is complete if it guarantees to find an answer whenever there is one. This algorithm can be complete or incomplete, depending on the selection and the stopping criterion.

One version of this algorithm is random sampling. In random sampling, the stopping criterion stop_walk() always returns true, so the body of the while loop at line 13 is never executed. Random sampling keeps picking random total assignments until it finds one that satisfies the constraints; otherwise it does not halt. Random sampling is complete in the sense that, given enough time, it is guaranteed to find a solution if one exists, but there is no upper bound on the time it may take. It is typically very slow. Its efficiency depends on the product of the domain sizes and on how many solutions exist.
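In terms of the sketch above, random sampling amounts to a walk of length zero; a minimal stand-alone version (again with illustrative names) is:

    import random

    def random_sampling(Vs, dom, Cs):
        """Keep drawing total assignments at random until one satisfies
        every constraint; does not halt if no solution exists."""
        while True:
            A = {X: random.choice(dom[X]) for X in Vs}
            if all(c(A) for c in Cs):
                return A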

Another version is a random walk, in which stop_walk() always returns false, and so there are no random restarts. In a random walk, the while loop is exited only when a satisfying assignment has been found. In the while loop, a random walk selects a variable and a value for that variable at random. Random walk is also complete in the same sense as random sampling. Each step takes less time than resampling all of the variables, but it can take more steps than random sampling, depending on how the solutions are distributed. When the domain sizes of the variables differ, a random walk algorithm can either select a variable at random and then a value at random, or select a variable–value pair at random. The latter is more likely to select a variable with a larger domain, as the sketch below makes explicit.
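The two ways of randomizing a step can be sketched as follows: pairwise=True chooses uniformly among all variable–value pairs, so variables with larger domains are changed more often, while pairwise=False chooses a variable uniformly first. The function name and the flag are illustrative.

    import random

    def random_walk(Vs, dom, Cs, pairwise=True):
        """Random walk: no restarts; change one variable per step.
        Does not halt if no solution exists."""
        A = {X: random.choice(dom[X]) for X in Vs}    # random initialization
        while not all(c(A) for c in Cs):
            if pairwise:
                # uniform over variable-value pairs: larger domains are chosen more often
                Y, w = random.choice([(X, v) for X in Vs for v in dom[X]])
            else:
                # uniform over variables, then uniform over that variable's domain
                Y = random.choice(Vs)
                w = random.choice(dom[Y])
            A[Y] = w
        return A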