Third edition of Artificial Intelligence: foundations of computational agents, Cambridge University Press, 2023 is now available (including the full text).

## 4.9 Population-Based Methods

The preceding local search algorithms maintain a
single current assignment. This section considers algorithms that
maintain multiple assignments. The first method, beam search,
maintains the best *k* assignments. The next
algorithm, stochastic beam search, selects which assignments to propagate
stochastically. In genetic algorithms, which are inspired by biological evolution, the *k* assignments forming a population
interact in various ways to produce the new population. In these
algorithms, a
total assignment of a value to each variable is called an **individual** and the set of
current individuals is a **population**.

**Beam search** is a method similar to
iterative best improvement, but it maintains up to *k* assignments
instead of just one. It reports success when it finds a satisfying
assignment. At each stage of the algorithm, it selects the *k* best
neighbors of the current individuals (or all of them if there are fewer
than *k*), breaking ties randomly. It repeats with
this new set of *k* total assignments.

Beam search considers multiple assignments at the same time. It is useful for memory-bounded cases, where *k* can be selected depending on the memory available.
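As a concrete sketch, beam search over total assignments might look like the following. The `neighbors`, `h`, and `satisfied` functions are assumptions supplied by the problem (lower `h` is better); this is illustrative, not the book's own code.

```python
import heapq
import random

def beam_search(initial, neighbors, h, satisfied, k, max_steps=100):
    """Maintain up to k assignments; return one that satisfies `satisfied`.

    `neighbors(a)` yields the neighbors of assignment a, and `h(a)` is the
    evaluation function to minimize. Shuffling the pool before taking the
    k smallest breaks ties randomly, as the text describes."""
    beam = [initial]
    for _ in range(max_steps):
        for a in beam:
            if satisfied(a):
                return a
        # Pool together all neighbors of the current k assignments.
        pool = [n for a in beam for n in neighbors(a)]
        if not pool:
            return None
        random.shuffle(pool)                     # random tie-breaking
        beam = heapq.nsmallest(k, pool, key=h)   # keep the k best (lowest h)
    return None
```

With *k = 1* this reduces to iterative best improvement; larger *k* trades memory for a wider frontier.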

**Stochastic beam search** is an alternative to beam search that, instead of choosing
the best *k* individuals, selects *k* of the individuals at random;
individuals with a better evaluation are more likely to be
chosen. This is done by making the probability of being chosen a
function of the evaluation function. A standard way to do this is to
use a **Gibbs distribution** or **Boltzmann distribution**
and to select an assignment *A* with probability proportional to

e^{-h(A)/T},

where *h(A)* is the evaluation function and *T* is a temperature.
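Selection from a Gibbs/Boltzmann distribution can be sketched as follows. The shift by the minimum evaluation is a standard numerical-stability trick: it rescales every weight by the same constant, so the selection probabilities are unchanged.

```python
import math
import random

def boltzmann_select(population, h, T):
    """Select one individual with probability proportional to e^(-h(A)/T).

    h is the evaluation function (lower is better) and T the temperature."""
    # Shifting by the minimum evaluation avoids underflow at low T
    # without changing the relative probabilities.
    h_min = min(h(a) for a in population)
    weights = [math.exp(-(h(a) - h_min) / T) for a in population]
    return random.choices(population, weights=weights, k=1)[0]
```

At low temperature, selection is nearly greedy; at high temperature, it approaches uniform, which is what allows the extra diversity described next.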

Stochastic beam search tends to allow more diversity in the *k* individuals
than does plain beam search. In terms of evolution in biology, the
evaluation function reflects the fitness of the individual; the fitter the
individual, the more likely it is to pass that part of its variable
assignment that is good onto the next
generation. Stochastic beam search is like asexual reproduction: each
individual gives rise to slightly mutated children, and then stochastic beam search proceeds with survival
of the fittest. Note that under stochastic beam search it is possible
for an individual to be selected multiple times at random.

**Genetic algorithms** further pursue the evolution analogy. Genetic algorithms are like stochastic beam
searches, but each new element of the population is a
combination of a pair of individuals, its parents. In particular,
genetic algorithms select pairs of
individuals and then create new offspring by taking some of the values for
the offspring's variables from one of the parents and the rest from the
other parent, loosely analogous to how DNA is spliced in sexual reproduction.

The new operation that occurs in genetic algorithms is called **crossover**. Crossover selects two individuals (the parents) and
creates two new individuals, called the **children**. The value for each variable
in a child comes from one of the parents. A common method is
**one-point crossover**, which assumes a
total ordering of the variables. An index *i* is selected at random. One of the children is constructed by selecting
the values for the variables before *i* from one of the parents, and
the values for variables after *i* from the other parent. The other
child gets the other values. The effectiveness of the
crossover depends on the total ordering of the variables. The
ordering of the variables is part of the design of the genetic algorithm.
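A minimal sketch of one-point crossover, assuming individuals are represented as equal-length tuples of values over a fixed ordering of the variables:

```python
import random

def one_point_crossover(parent1, parent2):
    """Split two assignments at a random index and swap the tails.

    Each child takes the values before the split point from one parent
    and the values from the split point onward from the other."""
    assert len(parent1) == len(parent2)
    i = random.randrange(1, len(parent1))   # split point, 1 <= i < |V|
    child1 = parent1[:i] + parent2[i:]
    child2 = parent2[:i] + parent1[i:]
    return child1, child2
```

Note that the split respects the chosen variable ordering, which is why the ordering matters to the effectiveness of crossover.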

Assume that you have a population of *k* individuals (where *k* is even).
A basic genetic algorithm proceeds by maintaining *k* individuals as a generation and then
using these individuals to generate a
new generation via the following steps:

- Randomly select pairs of individuals where the fitter individuals are more likely to be chosen. How much more likely a fit individual is to be chosen than a less fit individual depends on the difference in fitness levels and a temperature parameter.
- For each pair, perform a crossover.
- Randomly mutate some (very few) values by choosing other values for some randomly chosen variables. This is a random walk step.

It proceeds in this way until it has created *k* new individuals, which form the next generation. The algorithm is shown in Figure 4.10.
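The three steps above can be sketched as a single generation step. The tuple representation, the `domains` list, and the mutation rate are illustrative assumptions, not part of the book's specification.

```python
import math
import random

def next_generation(pop, h, T, domains, mutation_rate=0.05):
    """One generation of a basic genetic algorithm: Boltzmann selection,
    one-point crossover, then a small random-walk mutation.

    Individuals are tuples with one value per variable; h is the
    evaluation function (lower is fitter); domains[j] lists the values
    variable j may take."""
    def select():
        # Fitter individuals (lower h) are more likely to be chosen.
        weights = [math.exp(-h(a) / T) for a in pop]
        return random.choices(pop, weights=weights, k=1)[0]

    def mutate(ind):
        # Random walk step: resample a few randomly chosen variables.
        return tuple(random.choice(domains[j]) if random.random() < mutation_rate
                     else v for j, v in enumerate(ind))

    new_pop = []
    while len(new_pop) < len(pop):
        a1, a2 = select(), select()
        i = random.randrange(1, len(a1))        # one-point crossover
        new_pop.append(mutate(a1[:i] + a2[i:]))
        new_pop.append(mutate(a2[:i] + a1[i:]))
    return new_pop
```

Because selection is random with replacement, a fit individual can parent several pairs in the same generation, just as in stochastic beam search.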

```
Procedure GeneticAlgorithm(V, dom, C, S, k)
    Inputs
        V: a set of variables
        dom: a function; dom(X) is the domain of variable X
        C: set of constraints to be satisfied
        S: a cooling schedule for the temperature
        k: population size, an even integer
    Output
        a complete assignment that satisfies the constraints
    Local
        Pop: a set of assignments
        T: real

    Pop ← k complete random assignments
    T is assigned a value according to S
    repeat
        if some A ∈ Pop satisfies all constraints in C then
            return A
        Npop ← {}
        repeat k/2 times
            A1 ← RandomSelection(Pop, T)
            A2 ← RandomSelection(Pop, T)
            N1, N2 ← Combine(A1, A2)
            Npop ← Npop ∪ {mutate(N1), mutate(N2)}
        Pop ← Npop
        T is updated according to S
    until termination

Procedure RandomSelection(Pop, T)
    select A from Pop with probability proportional to e^(-h(A)/T)
    return A

Procedure Combine(A1, A2)
    select integer i, 1 ≤ i < |V|, at random
    N1 ← {(Xj = vj) ∈ A1 for j ≤ i} ∪ {(Xj = vj) ∈ A2 for j > i}
    N2 ← {(Xj = vj) ∈ A2 for j ≤ i} ∪ {(Xj = vj) ∈ A1 for j > i}
    return N1, N2
```

**Figure 4.10:** Genetic algorithm for finding a solution to a CSP.

**Example 4.28:** Consider Example 4.8. Suppose we use the same evaluation function as in Example 4.25, namely the number of unsatisfied constraints. The individual *A=2, B=2, C=3, D=1, E=1* has an evaluation of 4. It has a low value mainly because *E=1*. Its offspring that preserve this property will tend to have a lower evaluation than those that do not and, thus, will be more likely to survive. Other individuals may have low values for different reasons; for example, the individual *A=4, B=2, C=3, D=4, E=4* also has an evaluation of 4. It is low mainly because of the assignment of values to the first four variables. Again, offspring that preserve this property will be fitter and more likely to survive than those that do not. If these two were to mate, some of the offspring would inherit the bad properties of both and would die off. Some, by chance, would inherit the good properties of both. These would then have a better chance of survival.
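Putting the pieces together, the algorithm of Figure 4.10 can be sketched in Python for constraint problems given as predicates. The dictionary representation, the geometric cooling schedule, and the mutation rate are assumptions for illustration; the book leaves the schedule *S* abstract.

```python
import math
import random

def genetic_algorithm(variables, dom, constraints, k, T=1.0, cooling=0.99,
                      mutation_rate=0.05, max_generations=1000):
    """Basic genetic algorithm in the style of Figure 4.10.

    h(A) counts the unsatisfied constraints, so lower is fitter and
    h(A) == 0 means A is a solution. `constraints` is a list of
    predicates over a dict assignment {variable: value}."""
    def h(a):
        return sum(not c(a) for c in constraints)

    def select(pop):
        # Probability proportional to e^(-h(A)/T); shifting by the minimum
        # evaluation rescales all weights equally and avoids underflow.
        h_min = min(h(a) for a in pop)
        weights = [math.exp(-(h(a) - h_min) / T) for a in pop]
        return random.choices(pop, weights=weights, k=1)[0]

    def combine(a1, a2):
        # One-point crossover over the fixed ordering of `variables`.
        i = random.randrange(1, len(variables))
        n1 = {X: (a1[X] if j < i else a2[X]) for j, X in enumerate(variables)}
        n2 = {X: (a2[X] if j < i else a1[X]) for j, X in enumerate(variables)}
        return n1, n2

    def mutate(a):
        # Random-walk step: resample a few randomly chosen variables.
        return {X: (random.choice(dom[X]) if random.random() < mutation_rate
                    else v) for X, v in a.items()}

    pop = [{X: random.choice(dom[X]) for X in variables} for _ in range(k)]
    for _ in range(max_generations):
        for a in pop:
            if h(a) == 0:              # a satisfies all constraints
                return a
        new_pop = []
        for _ in range(k // 2):
            n1, n2 = combine(select(pop), select(pop))
            new_pop += [mutate(n1), mutate(n2)]
        pop = new_pop
        T *= cooling                   # update T according to the schedule
    return None
```

On a toy problem such as *A < B < C* with small domains, this typically finds a satisfying assignment in a few generations; on hard problems, all the caveats below about tuning and local minima apply.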

As in other stochastic local search algorithms, it is difficult to design features and the evaluation function so that the algorithm does not get trapped in local minima. Efficiency is very sensitive to the variables used to describe the problem and the ordering of the variables. Getting this to work is an art. As with many other heuristic algorithms, evolutionary algorithms have many degrees of freedom and, therefore, are difficult to configure or tune for good performance. Also, analogies to natural evolution can be very misleading, because what is at work in nature is not always the best strategy for solving combinatorial decision or optimization problems.

A large community of researchers is working on genetic algorithms to make them practical for real problems, and there have been some impressive results. What we have described here is only one of the possible genetic algorithms.