The third edition of Artificial Intelligence: foundations of computational agents, Cambridge University Press, 2023 is now available (including full text).
Arc consistency simplifies the network by removing values of variables. A complementary method is variable elimination (VE), which simplifies the network by removing variables.
The idea of VE is to remove the variables one by one. When removing a variable $X$, VE constructs a new constraint on some of the remaining variables reflecting the effects of $X$ on all of the other variables. This new constraint replaces all of the constraints that involve $X$, forming a reduced network that does not involve $X$. The new constraint is constructed so that any solution to the reduced CSP can be extended to a solution of the CSP that contains $X$. In addition to creating the new constraint, VE provides a way to construct a solution to the CSP that contains $X$ from a solution to the reduced CSP.
The following algorithm is described using the relational algebra operations of join and project.
When eliminating $X$, the influence of $X$ on the remaining variables is through the constraint relations that involve $X$. First, the algorithm collects all of the constraints that involve $X$. Let the join of all of these relations be the relation ${r}_{X}(X,\overline{Y})$, where $\overline{Y}$ is the set of other variables in the scope of ${r}_{X}$. Thus $\overline{Y}$ is the set of all variables that are neighbors of $X$ in the constraint graph. The algorithm then projects ${r}_{X}$ onto $\overline{Y}$; this relation replaces all of the relations that involve $X$. It now has a reduced CSP that involves one less variable, which it solves recursively. Once it has a solution for the reduced CSP, it extends that solution to a solution for the original CSP by joining the solution with ${r}_{X}$.
When only one variable is left, it returns those domain elements that are consistent with the constraints on this variable.
Consider a CSP that contains the variables $A$, $B$, and $C$, each with domain $\{1,2,3,4\}$. Suppose the constraints that involve $B$ are $A < B$ and $B < C$. There may be many other variables, but if $B$ does not have any constraints in common with these variables, eliminating $B$ will not impose any new constraints on these other variables. To remove $B$, first join the relations that involve $B$:
$$\begin{array}{cc} A & B \\ \hline 1 & 2 \\ 1 & 3 \\ 1 & 4 \\ 2 & 3 \\ 2 & 4 \\ 3 & 4 \end{array} \;\bowtie\; \begin{array}{cc} B & C \\ \hline 1 & 2 \\ 1 & 3 \\ 1 & 4 \\ 2 & 3 \\ 2 & 4 \\ 3 & 4 \end{array} \;=\; \begin{array}{ccc} A & B & C \\ \hline 1 & 2 & 3 \\ 1 & 2 & 4 \\ 1 & 3 & 4 \\ 2 & 3 & 4 \end{array}$$
To get the relation on $A$ and $C$ induced by $B$, project this join onto $A$ and $C$, which gives
$$\begin{array}{cc} A & C \\ \hline 1 & 3 \\ 1 & 4 \\ 2 & 4 \end{array}$$
This relation on $A$ and $C$ contains all of the information about the constraints on $B$ that affect the solutions of the rest of the network.
The original constraints on $B$ are replaced with the new constraint on $A$ and $C$. VE then solves the rest of the network, which is now simpler because it does not involve the variable $B$. To generate one or all solutions, the algorithm remembers the joined relation on $A$, $B$, and $C$ to construct a solution that involves $B$ from a solution to the reduced network.
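The worked example above can be checked directly with set comprehensions. This is a minimal sketch; the names `dom`, `r_AB`, `r_BC`, `joined`, and `r_AC` are illustrative, not taken from the book's code.

```python
# Constraints A < B and B < C over the domain {1, 2, 3, 4},
# represented as sets of tuples.
dom = range(1, 5)
r_AB = {(a, b) for a in dom for b in dom if a < b}
r_BC = {(b, c) for b in dom for c in dom if b < c}

# Join on the shared variable B, then project onto A and C.
joined = {(a, b, c) for (a, b) in r_AB for (b2, c) in r_BC if b == b2}
r_AC = {(a, c) for (a, _, c) in joined}

print(sorted(joined))  # [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
print(sorted(r_AC))    # [(1, 3), (1, 4), (2, 4)]
```

The two printed sets match the joined table and the projected relation on $A$ and $C$ shown above.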
Figure 4.6 gives a recursive algorithm for variable elimination, VE_CSP, to find all solutions for a CSP.
The base case of the recursion occurs when only one variable is left. In this case (line 8), a solution exists if and only if there are rows in the join of the final relations. Note that these relations are all relations on a single variable, and so they are the sets of legal values for this variable. The join of these relations is the intersection of these sets.
In the non-base case, a variable $X$ is selected for elimination (line 10). Which variable is selected does not affect the correctness of the algorithm, but it may affect the efficiency. To eliminate variable $X$, the algorithm propagates the effect of $X$ onto those variables that $X$ is directly related to. This is achieved by joining all of the relations in which $X$ is involved (line 13) and then projecting $X$ out of the resulting relation (line 14). Thus, a simplified problem (with one less variable) has been created that can be solved recursively. To get the possible values for $X$, the algorithm joins the solution of the simplified problem with the relation $R$ that defines the effect of $X$.
To find just one solution, instead of returning $R\bowtie S$, the algorithm can return one element of the join; whichever element it returns is guaranteed to be part of a solution. If any relation $R$ constructed during the algorithm contains no tuples, the CSP has no solutions.
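The recursion described above can be sketched in Python. This is a hedged, simplified reading of VE_CSP, not the book's code: relation, variable, and function names are illustrative, and it assumes each variable's domain is supplied as a unary relation (so every variable appears in at least one relation).

```python
# A relation is (scope, tuples): scope is a tuple of variable names,
# tuples is a set of value tuples, one value per variable in scope.

def join(r1, r2):
    """Natural join: combine tuples that agree on shared variables."""
    (s1, t1s), (s2, t2s) = r1, r2
    shared = [v for v in s1 if v in s2]
    scope = s1 + tuple(v for v in s2 if v not in s1)
    tuples = set()
    for t1 in t1s:
        a = dict(zip(s1, t1))
        for t2 in t2s:
            b = dict(zip(s2, t2))
            if all(a[v] == b[v] for v in shared):
                m = {**a, **b}
                tuples.add(tuple(m[v] for v in scope))
    return scope, tuples

def project(r, keep):
    """Project a relation onto the variables in keep."""
    scope, tuples = r
    new_scope = tuple(v for v in scope if v in keep)
    idx = [scope.index(v) for v in new_scope]
    return new_scope, {tuple(t[i] for i in idx) for t in tuples}

def ve_csp(variables, relations):
    """Return one relation over all the variables containing every solution.
    Assumes every variable occurs in at least one relation (e.g., its domain
    is given as a unary relation)."""
    if len(variables) == 1:
        # Base case: all remaining relations are on this one variable;
        # their join is the intersection of the legal value sets.
        common = set.intersection(*(tpls for _, tpls in relations))
        return (variables[0],), common
    x = variables[-1]  # any selection is correct; the choice affects efficiency
    with_x = [r for r in relations if x in r[0]]
    without_x = [r for r in relations if x not in r[0]]
    r_x = with_x[0]
    for r in with_x[1:]:
        r_x = join(r_x, r)                 # join all relations involving x
    new_rel = project(r_x, [v for v in r_x[0] if v != x])  # project x out
    s = ve_csp([v for v in variables if v != x], without_x + [new_rel])
    return join(s, r_x)                    # extend solutions to include x
```

On the running example (domains $\{1,2,3,4\}$ with constraints $A<B$ and $B<C$), this sketch returns a relation whose tuples correspond to the four solutions listed earlier.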
The efficiency of the VE algorithm depends on the order in which variables are selected. The intermediate structure – which variables the intermediate relations are over – depends not on the actual content of relations but only on the graph structure of the constraint network. The efficiency of this algorithm can be determined by considering the graphical structure. In general, VE is efficient when the constraint network is sparse. The number of variables in the largest relation returned for a particular variable ordering is called the treewidth of the graph for that variable ordering. The treewidth of a graph is the minimum treewidth for any ordering. The complexity of VE is exponential in treewidth and linear in the number of variables. This can be compared to searching [see Section 4.3], which is exponential in the number of variables.
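Because the treewidth for an ordering depends only on the constraint graph, it can be computed without touching any relation's contents, by simulating elimination on the graph: removing a variable connects all of its remaining neighbors, and the relation returned for a variable is over exactly those neighbors. A minimal sketch, with an illustrative function name:

```python
def width_of_ordering(neighbors, ordering):
    """Size of the largest relation returned (number of neighbors of a
    variable at the moment it is eliminated), for a given ordering.
    `neighbors` maps each variable to the set of its graph neighbors."""
    nb = {v: set(ns) for v, ns in neighbors.items()}  # work on a copy
    width = 0
    for x in ordering:
        rest = nb.pop(x)
        width = max(width, len(rest))
        for u in rest:            # eliminating x connects its neighbors
            nb[u].discard(x)
            nb[u] |= rest - {u}
    return width
```

For a chain $A - B - C - D$, eliminating from the ends inward gives width 1 (the treewidth of any tree), while eliminating $B$ first gives width 2, illustrating how the ordering affects the intermediate relation sizes.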
Finding an elimination ordering that results in the smallest treewidth is NP-hard. However, some good heuristics exist. The two most common are:
min-factor: at each stage, select the variable that results in the smallest relation.
minimum deficiency or minimum fill: at each stage, select the variable that adds the fewest arcs to the remaining constraint network. The deficiency of a variable $X$ is the number of pairs of variables that are in a relationship with $X$ but are not in a relationship with each other. The intuition is that it is acceptable to remove a variable that results in a large relation as long as doing so does not make the network more complicated.
Minimum deficiency has usually been found empirically to give a smaller treewidth than min-factor, but it is more difficult to compute.
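The minimum-deficiency heuristic can be sketched as a greedy loop over the constraint graph. This is an illustrative sketch (function and variable names are not from the book): at each step it eliminates the variable whose removal would add the fewest new arcs.

```python
def deficiency(nb, x):
    """Number of pairs of x's neighbors not already connected to each other.
    `nb` maps each variable to the set of its graph neighbors."""
    ns = list(nb[x])
    return sum(1 for i in range(len(ns)) for j in range(i + 1, len(ns))
               if ns[j] not in nb[ns[i]])

def min_fill_ordering(neighbors):
    """Greedy elimination ordering: repeatedly pick a minimum-deficiency
    variable, then connect its remaining neighbors."""
    nb = {v: set(ns) for v, ns in neighbors.items()}  # work on a copy
    order = []
    while nb:
        x = min(nb, key=lambda v: deficiency(nb, v))
        order.append(x)
        rest = nb.pop(x)
        for u in rest:            # eliminating x connects its neighbors
            nb[u].discard(x)
            nb[u] |= rest - {u}
    return order
```

On a chain $A - B - C$, the middle variable $B$ has deficiency 1 while the endpoints have deficiency 0, so the heuristic eliminates an endpoint first and never creates a new arc.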
VE can also be combined with arc consistency; whenever VE removes a variable, arc consistency can be used to further simplify the problem. This approach can result in smaller intermediate tables.