4.8.1 Iterative Best Improvement

In iterative best improvement, the neighbor of the currently selected node is one that optimizes some evaluation function. In greedy descent, a neighbor is chosen to minimize an evaluation function; when the aim is to maximize, the same method is called hill climbing or greedy ascent. We only consider minimization; to maximize a quantity, minimize its negation.

Iterative best improvement requires a way to evaluate each total assignment. For constraint satisfaction problems, a common evaluation function is the number of constraints violated by the total assignment; this number is to be minimized. A violated constraint is called a conflict. With the number of conflicts as the evaluation function, a solution is a total assignment with an evaluation of zero. Sometimes this evaluation function is refined by weighting some constraints more than others.
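
As a concrete illustration, here is a minimal sketch of such an evaluation function in Python. It assumes constraints are represented as predicates over a total assignment (a dictionary mapping each variable to a value); the names evaluation and weighted_evaluation are illustrative, not taken from any particular library.

  def evaluation(assignment, constraints):
      """Count the constraints violated by a total assignment.

      assignment: a dict mapping each variable to a value.
      constraints: a list of functions, each taking the assignment
      and returning True exactly when the constraint is satisfied.
      """
      return sum(1 for c in constraints if not c(assignment))

  def weighted_evaluation(assignment, weighted_constraints):
      """Like evaluation, but weighted_constraints is a list of
      (constraint, weight) pairs; the weights of the violated
      constraints are summed."""
      return sum(w for c, w in weighted_constraints if not c(assignment))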

A local optimum is an assignment such that no neighbor improves the evaluation function. This is also called a local minimum in greedy descent, or a local maximum in greedy ascent.

Example 4.25: Consider the delivery scheduling in Example 4.8. Suppose greedy descent starts with the assignment A=2, B=2, C=3, D=2, E=1. This assignment has an evaluation of 3, because it does not satisfy A ≠ B, B ≠ D, and C < D. Its neighbor with the minimal evaluation has B=4, giving an evaluation of 1, because only C < D is unsatisfied. This is now a local minimum. A random walk can then change D to 4, which has an evaluation of 2. It can then change A to 4, with an evaluation of 2, and then change B to 2, with an evaluation of zero; a solution has been found.

The following gives a trace of the assignments through the walk:

A  B  C  D  E  evaluation
2  2  3  2  1      3
2  4  3  2  1      1
2  4  3  4  1      2
4  4  3  4  1      2
4  2  3  4  1      0
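
To check the trace mechanically, the following sketch re-evaluates each assignment using the evaluation function sketched earlier. The constraint list below is an assumed reading of Example 4.8 (B ≠ 3, C ≠ 2, A ≠ B, B ≠ C, C < D, A = D, E < A, E < B, E < C, E < D, B ≠ D); it reproduces the evaluations above, but check it against that example.

  # Assumed constraints of Example 4.8, as predicates over an assignment a.
  constraints = [
      lambda a: a['B'] != 3,
      lambda a: a['C'] != 2,
      lambda a: a['A'] != a['B'],
      lambda a: a['B'] != a['C'],
      lambda a: a['C'] < a['D'],
      lambda a: a['A'] == a['D'],
      lambda a: a['E'] < a['A'],
      lambda a: a['E'] < a['B'],
      lambda a: a['E'] < a['C'],
      lambda a: a['E'] < a['D'],
      lambda a: a['B'] != a['D'],
  ]

  trace = [
      dict(A=2, B=2, C=3, D=2, E=1),
      dict(A=2, B=4, C=3, D=2, E=1),
      dict(A=2, B=4, C=3, D=4, E=1),
      dict(A=4, B=4, C=3, D=4, E=1),
      dict(A=4, B=2, C=3, D=4, E=1),
  ]
  for a in trace:
      print(evaluation(a, constraints))   # prints 3, 1, 2, 2, 0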

Different initializations, or different choices when multiple assignments have the same evaluation, give different results.

If the variables have small finite domains, a local search algorithm can consider all other values of a variable when generating its neighbors. If the domains are large, the cost of considering every alternative value may be too high. An alternative is to consider only a few other values for one of the variables, often values close to the current one. Sometimes quite sophisticated methods are used to select an alternative value.
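
For instance, with a large integer domain, one possible sketch proposes only a handful of values near the current one; the parameters k and step below are illustrative choices, not prescribed by the text.

  import random

  def nearby_values(current, domain, k=4, step=3):
      """Sample up to k alternative values within step of the current
      value, restricted to the domain (given as a set of integers)."""
      candidates = [v for v in range(current - step, current + step + 1)
                    if v != current and v in domain]
      return random.sample(candidates, min(k, len(candidates)))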

Local search typically considers the best neighboring assignment even if it is equal to, or worse than, the current assignment. It is often better to make a quick choice than to spend a lot of time making the best choice. There are many possible variants of which neighbor to choose:

  • Select a variable and value together. Out of all of the different assignments to any of the variables, select one that minimizes the evaluation function. If more than one has the minimum value, pick one of them at random.
  • Select a variable, then select its value. To select a variable, there are a number of possibilities:
    • Maintain how many violated constraints each variable is involved in, and pick one of the variables involved in the most violated constraints.
    • Select randomly a variable involved in any violated constraint.
    • Select a variable at random.

    Once the variable has been selected, the algorithm can either select one of the values that has the best evaluation or just select a value at random; a sketch of this variant appears after the list.

  • Select a variable and/or value at random and accept the change if it improves the evaluation.
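
As an illustration, the following sketch implements the second variant: select a variable at random from those involved in a violated constraint, then give it a value that minimizes the number of conflicts. It assumes each constraint is given as a (scope, predicate) pair, where the scope lists the variables the constraint involves; the function and parameter names are illustrative.

  import random

  def greedy_descent_variant(domains, constraints, max_steps=1000):
      """domains: a dict mapping each variable to a list of values.
      constraints: a list of (scope, predicate) pairs; scope is a tuple
      of variables and predicate takes an assignment dict, returning
      True when the constraint is satisfied.
      Returns a satisfying assignment, or None if none is found in time.
      """
      a = {var: random.choice(vals) for var, vals in domains.items()}

      def violated(assignment):
          return [(scope, pred) for scope, pred in constraints
                  if not pred(assignment)]

      for _ in range(max_steps):
          conflicts = violated(a)
          if not conflicts:
              return a                         # evaluation zero: a solution
          scope, _ = random.choice(conflicts)  # a random violated constraint
          var = random.choice(scope)           # a variable in its scope
          # give var the value minimizing the number of conflicts
          a[var] = min(domains[var],
                       key=lambda val: len(violated({**a, var: val})))
      return None

For simplicity, ties among equally good values are broken by domain order rather than at random. With the constraints of Example 4.25 written as such pairs, for instance (('A', 'B'), lambda a: a['A'] != a['B']), and the domains of Example 4.8, this typically finds a solution within a few steps.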

Optimizing over all variable-value pairs makes bigger improvements than the other methods, but it is more computationally expensive. Although some theoretical results exist, which method works better in practice is an empirical question.