# 4.9.1 Systematic Methods for Optimization

One way to find a minimal assignment, corresponding to generate and test, is to compute the sum of the soft constraints for each total assignment and to select an assignment with minimum value. This is infeasible for large problems.
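For concreteness, generate and test can be sketched as follows. The representation here is illustrative, not from the text: each soft constraint is a pair of a scope (a tuple of variable names) and a cost function on tuples of values.

```python
from itertools import product

def generate_and_test(domains, constraints):
    """Brute-force optimization: evaluate every total assignment and
    return one with minimum total soft-constraint cost.
    domains: dict mapping variable name -> list of values.
    constraints: list of (scope, cost_fn) pairs, where scope is a tuple
    of variable names and cost_fn maps a tuple of values to a number."""
    variables = list(domains)
    best, best_cost = None, float("inf")
    for values in product(*(domains[v] for v in variables)):
        asg = dict(zip(variables, values))
        cost = sum(fn(tuple(asg[v] for v in scope))
                   for scope, fn in constraints)
        if cost < best_cost:
            best, best_cost = asg, cost
    return best, best_cost
```

The loop enumerates the full cross product of the domains, which is why this approach is exponential in the number of variables.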

Arc consistency can be generalized to optimization problems by allowing pruning of dominated assignments. Suppose $c_{1},\dots,c_{k}$ are the soft constraints that involve $X$. Let $c=c_{1}+\dots+c_{k}$. Suppose $Y_{1},\dots,Y_{m}$ are the variables, other than $X$, that are involved in $c$. A value $v$ for variable $X$ is strictly dominated if, for all values $y_{1},\dots,y_{m}$ of $Y_{1},\dots,Y_{m}$, some value $v^{\prime}$ of $X$ exists such that $c(X{=}v^{\prime},Y_{1}{=}y_{1},\dots,Y_{m}{=}y_{m})<c(X{=}v,Y_{1}{=}y_{1},\dots,Y_{m}{=}y_{m})$. Pruning strictly dominated values does not remove a minimal solution. The pruning of domains can be done repeatedly, as in the GAC algorithm.

Weakly dominated has the same definition as strictly dominated, but with “less than” replaced by “less than or equal to.” If only one solution is required, weakly dominated values can be pruned sequentially. Which weakly dominated values are removed may affect which optimal solution is found, but removing a weakly dominated value does not remove all optimal solutions. As with arc consistency for hard constraints, pruning (strictly or weakly) dominated values may greatly simplify the problem but does not, by itself, always solve the problem.
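The strict-domination test above can be sketched directly from the definition. The function below is illustrative, assuming the caller supplies `cost(v, ys)`, the value of the summed constraint $c$ when $X{=}v$ and $Y_1,\dots,Y_m$ take the tuple of values `ys`.

```python
from itertools import product

def strictly_dominated(x_domain, other_domains, cost):
    """Return the values in x_domain that are strictly dominated.
    cost(v, ys) is the summed cost of the constraints involving X when
    X = v and the other variables take the tuple of values ys.
    v is strictly dominated if, for every ys, some other value v2
    of X has strictly lower cost."""
    tuples = list(product(*other_domains.values()))
    return [v for v in x_domain
            if all(any(cost(v2, ys) < cost(v, ys)
                       for v2 in x_domain if v2 != v)
                   for ys in tuples)]
```

Replacing `<` with `<=` (and removing values one at a time rather than all at once) gives the weak-domination pruning described above.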

Domain splitting is used to build a search tree. A node is an assignment of a value to a subset of the variables. The neighbors of a node are obtained by selecting a variable $X$ that is not assigned in the node to split; there is a neighbor of the node for each value of $X$. Assigning a value to $X$ allows the constraints that involve $X$ to be simplified and values for other variables to be pruned due to hard constraints or to domination. The arc cost is the evaluation of the soft constraints that are able to be evaluated. A goal node is one where all of the variables are assigned.

By assigning costs as soon as a soft constraint is able to be evaluated, search algorithms such as $A^{*}$ or branch-and-bound can be used to find a minimal solution. These methods require each arc cost to be non-negative. To achieve this, the lowest cost in each soft constraint – even if it is negative – is subtracted from each of the costs in the soft constraint. This cost then needs to be added to the cost of a final solution.
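A depth-first branch-and-bound variant of this search can be sketched as follows. This is an illustrative sketch, not Figure 4.12's algorithm: variables are assigned in a fixed order, each constraint is a (scope, table) pair, and all costs are assumed to have already been made non-negative so that the cost of a partial assignment is a lower bound on the cost of any completion.

```python
def branch_and_bound(domains, constraints):
    """Depth-first branch and bound for soft constraints (a sketch).
    domains: dict variable -> list of values.
    constraints: list of (scope, table) pairs; table maps a tuple of
    values (in scope order) to a non-negative cost.
    The cost of a partial assignment is the sum of the constraints
    whose scope is fully assigned; non-negativity makes it a bound."""
    variables = list(domains)
    best = {"cost": float("inf"), "asg": None}

    def cost_of(asg):
        return sum(table[tuple(asg[v] for v in scope)]
                   for scope, table in constraints
                   if all(v in asg for v in scope))

    def dfs(i, asg):
        c = cost_of(asg)
        if c >= best["cost"]:
            return  # prune: cannot improve on the best solution found
        if i == len(variables):
            best["cost"], best["asg"] = c, dict(asg)
            return
        x = variables[i]
        for val in domains[x]:
            asg[x] = val
            dfs(i + 1, asg)
            del asg[x]

    dfs(0, {})
    return best["asg"], best["cost"]
```

If a constant was subtracted to make the costs non-negative, it must be added back to the returned cost.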

###### Example 4.29.

Suppose $X$ and $Y$ are variables with domain $\{0,1\}$. The constraint

| $X$ | $Y$ | Cost |
|---|---|---|
| 0 | 0 | $-4$ |
| 0 | 1 | $-1$ |
| 1 | 0 | 5 |
| 1 | 1 | 6 |

is converted into nonnegative form by subtracting $-4$ from (i.e., adding 4 to) each cost, so the costs range from 0 to 10, rather than from $-4$ to 6. The 4 is then subtracted from the cost of the solution.
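The conversion in this example amounts to subtracting the minimum cost from every entry and remembering the offset. A minimal sketch, using a dict from value tuples to costs as the constraint representation (an illustrative choice, not from the text):

```python
def to_nonnegative(costs):
    """Shift a soft constraint (dict: assignment tuple -> cost) so all
    costs are non-negative. Returns the shifted table and the offset,
    which must be added back to the cost of a final solution."""
    offset = min(costs.values())
    return {asg: c - offset for asg, c in costs.items()}, offset
```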

Variable elimination can also be used for soft constraints. The variables are eliminated one at a time. A variable $X$ is eliminated as follows. Let $R$ be the set of constraints that involve $X$. Compute $T$, a new constraint whose scope is the union of the scopes of the constraints in $R$ and whose value is the sum of the values of $R$. Let $V=\mbox{scope}(T)\setminus\{X\}$. For each value of the variables in $V$, select a value of $X$ that minimizes $T$, resulting in a new soft constraint, $N$, with scope $V$. The constraint $N$ replaces the constraints in $R$. This results in a new problem, with fewer variables and a new set of constraints, which can be solved recursively. A solution, $S$, to the reduced problem is an assignment to the variables in $V$. Thus, $T(S)$, the constraint $T$ under the assignment $S$, is a function of $X$. An optimal value for $X$ is obtained by choosing a value that results in the minimum value of $T(S)$.

Figure 4.12 gives pseudocode for the variable elimination with soft constraints (VE_SC) algorithm. The elimination ordering can be given a priori or may be computed on the fly, for example, using the elimination ordering heuristics discussed for VE_CSP. It is possible to implement VE_SC without storing $T$ and by only constructing an extensional representation of $N$.
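The single-variable elimination step can be sketched as follows. This is an illustrative implementation, not the VE_SC pseudocode of Figure 4.12; each constraint is a (scope, table) pair, and, as noted above, it avoids storing $T$ by computing the minimization for each assignment of $V$ directly.

```python
from itertools import product

def eliminate_variable(x, constraints, domains):
    """Eliminate variable x from a list of soft constraints.
    Each constraint is (scope, table): scope is a tuple of variable
    names and table maps tuples of values (in scope order) to costs.
    Returns the constraints with those involving x replaced by the
    new constraint N, whose value is min over x of their sum."""
    R = [c for c in constraints if x in c[0]]
    rest = [c for c in constraints if x not in c[0]]
    scope_T = []                       # union of the scopes in R
    for scope, _ in R:
        for v in scope:
            if v not in scope_T:
                scope_T.append(v)
    V = [v for v in scope_T if v != x]
    table_N = {}
    for vals in product(*(domains[v] for v in V)):
        asg = dict(zip(V, vals))
        best = float("inf")
        for xv in domains[x]:          # minimize the sum over x
            asg[x] = xv
            total = sum(table[tuple(asg[v] for v in scope)]
                        for scope, table in R)
            best = min(best, total)
        table_N[vals] = best
    return rest + [(tuple(V), table_N)]
```

Recording, for each assignment of $V$, a minimizing value of $x$ would support the second pass that recovers an optimal assignment, as in Example 4.30.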

###### Example 4.30.

Consider Example 4.27. First consider eliminating $A$. It appears in only one constraint, $c_{1}(A,B)$. Eliminating $A$ gives

$c_{6}(B)=\min_{A}c_{1}(A,B)$:

| $B$ | Cost |
|---|---|
| 1 | 0 |
| 2 | 2 |
| 3 | 2 |

The constraint $c_{1}(A,B)$ is replaced by $c_{6}(B)$.

Suppose $B$ is eliminated next. $B$ appears in three constraints: $c_{2}(B,C)$, $c_{3}(B,D)$, and $c_{6}(B)$. These three constraints are added, giving
$c_{2}(B,C)+c_{3}(B,D)+c_{6}(B)$:

| $B$ | $C$ | $D$ | Cost |
|---|---|---|---|
| 1 | 1 | 1 | 8 |
| 1 | 1 | 2 | 5 |
| 2 | 1 | 1 | 4 |
| 2 | 1 | 2 | 4 |
| 3 | 1 | 1 | 6 |
| 3 | 1 | 2 | 8 |

The constraints $c_{2}$, $c_{3}$, and $c_{6}$ are replaced by

$c_{7}(C,D)=\min_{B}\left(c_{2}(B,C)+c_{3}(B,D)+c_{6}(B)\right)$:

| $C$ | $D$ | Cost |
|---|---|---|
| 1 | 1 | 4 |
| 1 | 2 | 4 |

There are now three remaining constraints: $c_{4}(C,E)$, $c_{5}(D,E)$, and $c_{7}(C,D)$. These can be optimized recursively.

Suppose the recursive call returns the solution $C{=}1$, $D{=}2$, $E{=}2$. An optimal value for $B$ is one that minimizes $c_{2}(B,C{=}1)+c_{3}(B,D{=}2)+c_{6}(B)$, which is $B{=}2$.

From $c_{1}(A,B)$, the value of $A$ that minimizes $c_{1}(A,B{=}2)$ is $A{=}1$. Thus, an optimal solution is $A{=}1$, $B{=}2$, $C{=}1$, $D{=}2$, $E{=}2$, with a cost of 4.

The complexity of VE_SC depends on the structure of the constraint graph, as it does with hard constraints. Sparse graphs may result in small intermediate constraints in VE algorithms, including VE_SC. Densely connected graphs result in large intermediate constraints.