foundations of computational agents
The preceding algorithms systematically search the space of assignments of values to variables. If the space is finite, they will either find a solution or report that no solution exists. Unfortunately, many spaces are too big for systematic search, and some are even infinite. In any reasonable amount of time, a systematic search will have failed to consider enough of the space to give any meaningful results. This section and the next consider methods intended to work in these very large spaces. These methods do not systematically search the whole space; instead, they are designed to find solutions quickly on average. They do not guarantee that a solution will be found even if one exists, and so they cannot prove that no solution exists. They are often the method of choice for applications where solutions are known to exist or are very likely to exist.
Local search methods start with a total assignment of a value to each variable and try to improve this assignment iteratively by taking improving steps, by taking random steps, or by restarting with another total assignment. Many different local search techniques have been proposed. Understanding when these techniques work for different problems forms the focus of a number of research communities, in both operations research and AI.
A generic local search algorithm for CSPs is given in Figure 4.7. The algorithm maintains an array that specifies an assignment of a value to each variable. The first for each loop assigns a random value to each variable; the first time it is executed is called a random initialization. Each iteration of the outer loop is called a try. The second and subsequent assignments of a random value to each variable, in line 11 and line 12, are called a random restart. An alternative to random initialization is to start from a more informed guess, based on some heuristic or prior knowledge, which is then iteratively improved.
The while loop (line 13 to line 15) performs a local search, or a walk, through the assignment space. It considers a set of possible successors of the current total assignment and selects one to be the next total assignment. In Figure 4.7, the possible successors of a total assignment are those assignments that differ in the value of a single variable.
This walk through assignments continues until either a satisfying assignment is found and returned or some stopping criterion is satisfied. The stopping criterion is a function in the algorithm that decides when to stop the current local search and do a random restart, starting again with a new assignment. A stopping criterion could be as simple as stopping after a certain number of steps.
This algorithm is not guaranteed to halt. In particular, if the stopping criterion is always false, it goes on forever when there is no solution, and, even if there is a solution, it is possible to get trapped in some region of the search space. An algorithm is complete if it is guaranteed to find an answer whenever there is one. This algorithm can be complete or incomplete, depending on the successor selection and the stopping criterion.
One version of this algorithm is random sampling. In random sampling, the stopping criterion always returns true, so the body of the while loop at line 13 is never executed. Random sampling keeps picking random total assignments until it finds one that satisfies the constraints; otherwise, it does not halt. Random sampling is complete in the sense that, given enough time, it is guaranteed to find a solution if one exists, but there is no upper bound on the time it may take. It is typically very slow. Its efficiency depends on the product of the domain sizes and on how many solutions exist.
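Random sampling can be sketched under the same assumed interface. The `max_samples` cap is an added safeguard, since the version described in the text never halts when no solution exists; the names are placeholders, as before.

```python
import random

def random_sampling(variables, domains, satisfies, max_samples=None):
    # Repeatedly draw a fresh total assignment at random; no walk at all.
    n = 0
    while max_samples is None or n < max_samples:
        a = {v: random.choice(domains[v]) for v in variables}
        if satisfies(a):
            return a
        n += 1
    return None  # cap reached; proves nothing about solvability
```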
Another version is a random walk, in which the stopping criterion is always false, so there are no random restarts. In a random walk, the while loop is only exited when a satisfying assignment has been found. In the while loop, a random walk selects a variable and a value for that variable at random. Random walk is complete in the same sense as random sampling. Each step takes less time than resampling all variables, but it can take more steps than random sampling, depending on how the solutions are distributed. When the domain sizes of the variables differ, a random walk algorithm can either select a variable at random and then a value at random, or select a variable–value pair at random. The latter is more likely to select a variable with a larger domain.
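The variable–value-pair variant can be sketched as follows. Choosing uniformly from the flattened list of pairs gives a variable with a larger domain proportionally more chances of being changed; choosing a variable first and then a value would weight all variables equally. The `max_steps` cap is an addition so the sketch terminates, and the names are again placeholders.

```python
import random

def random_walk(variables, domains, satisfies, max_steps=10_000):
    # Flatten to variable-value pairs: sampling uniformly from `pairs`
    # favors variables with larger domains, as noted in the text.
    pairs = [(v, val) for v in variables for val in domains[v]]
    a = {v: random.choice(domains[v]) for v in variables}  # random init
    for _ in range(max_steps):
        if satisfies(a):
            return a
        v, val = random.choice(pairs)   # one step of the walk
        a[v] = val
    return None  # step cap reached; proves nothing
```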