Full text of the second edition of Artificial Intelligence: foundations of computational agents, Cambridge University Press, 2017 is now available.

## 5.7 Causal Models

A **primitive** atom is an atom that is stated as an atomic clause
when it is true. A **derived** atom is one that uses
rules to define when it is true. Typically the designer writes axioms for the derived atoms and
then expects a user to specify which primitive atoms are true. Thus, the derived atoms will be inferred as
necessary from the primitive atoms and other atoms that can be derived.

The designer of an agent must make many decisions when
designing a knowledge base for a
domain.
For example, consider two propositions, *a* and
*b*, both of which are true. There are many choices of how to write
this. A designer could specify both *a* and *b* as atomic clauses, treating
both as primitive. A designer could have *a* as primitive and *b* as
derived, stating *a* as an atomic clause and giving the rule *b ←a*. Alternatively, the designer could
specify the atomic clause *b* and the rule *a ←b*, treating *b* as primitive and *a* as
derived. These representations are logically equivalent; they cannot
be distinguished logically. However, they have different effects when
the knowledge base is changed. Suppose *a* was no longer true for some
reason. In the first and third representations, *b* would still be
true, and in the second representation *b* would no longer be true.
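The difference can be simulated. The following sketch (in Python rather than the book's notation; `derivable` and the `kb` names are our own) uses a toy bottom-up prover for definite clauses to show how the three representations behave when *a* is retracted:

```python
# A toy bottom-up prover for definite clauses: `rules` maps each derived
# atom to a list of clause bodies (each body is a list of atoms).
def derivable(facts, rules):
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, bodies in rules.items():
            if head not in known and any(all(b in known for b in body)
                                         for body in bodies):
                known.add(head)
                changed = True
    return known

# Three logically equivalent knowledge bases in which a and b are both true:
kb1 = ({"a", "b"}, {})                 # both primitive
kb2 = ({"a"}, {"b": [["a"]]})          # a primitive, b derived: b <- a
kb3 = ({"b"}, {"a": [["b"]]})          # b primitive, a derived: a <- b

# Suppose a is no longer true: retract a from the primitive facts.
for facts, rules in (kb1, kb2, kb3):
    print("b" in derivable(facts - {"a"}, rules))  # True, False, True
```

In the first and third knowledge bases *b* survives the retraction of *a*; in the second it does not, even though all three are logically equivalent.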

A **causal model**, or a model of **causality**, is a representation of a domain that predicts the results of
interventions. An **intervention** is an action that forces a
variable to have a particular value; that is, it changes the value in
some way other than manipulating other variables in the model.

To predict the effect of interventions, a causal model represents
how the cause implies its effect. When the cause is changed, its effect
should be changed. An **evidential model** represents a domain in the other direction, from effect to cause. Note that we do not assume that there is "the cause" of an effect; rather, there are many propositions that together make the effect true.

**Example 5.32:** Consider the electrical domain depicted in Figure 1.8. In this domain, switch *s_3* is up and light *l_2* is lit. There are many different ways to axiomatize this domain. Example 5.5 contains causal rules such as

lit_l_2 ← up_s_3 ∧ live_w_3.

Alternatively, we could specify in the evidential direction:

up_s_3 ← lit_l_2.
live_w_3 ← lit_l_2.

These are all statements that are true of the domain.

Suppose that wire *w_3* was live and someone put switch *s_3* up; we would expect that *l_2* would become lit. However, if someone was to make *l_2* lit by some mechanism outside of the model (and not by flipping the switch), we would not expect the switch to go up as a side effect.

**Example 5.33:** Consider the electrical domain depicted in Figure 1.8. The following proposition describes an invariant on the relationship between switches *s_1* and *s_2* and light *l_1*, assuming all components are working properly:

up_s_1 ↔ (lit_l_1 ↔ up_s_2).     (5.33)

This formula is symmetric between the three propositions; it is true if and only if an odd number of the propositions are true. However, in the world, the relationship between these propositions is not symmetric. Suppose all three atoms were true in some state. Putting *s_1* down does not make *s_2* go down to preserve *lit_l_1*. Instead, putting *s_1* down makes *lit_l_1* false, and *up_s_2* remains true to preserve this invariant. Thus, to predict the result of interventions, we require more than proposition (5.33) above.

A causal model is

lit_l_1 ← up_s_1 ∧ up_s_2.
lit_l_1 ← ∼up_s_1 ∧ ∼up_s_2.

The completion of this is equivalent to proposition (5.33); however, it makes reasonable predictions when one of the values is changed.

An evidential model is

up_s_1 ← lit_l_1 ∧ up_s_2.
up_s_1 ← ∼lit_l_1 ∧ ∼up_s_2.

This can be used to answer questions about whether *s_1* is up based on the position of *s_2* and whether *l_1* is lit. Its completion is also equivalent to formula (5.33). However, it does not accurately predict the effect of interventions.
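The contrast can be made concrete. In this Python sketch (not from the book; `solve` and the `do` argument are our own hypothetical names), each model supplies a mechanism for its derived atom, and an intervention overrides a mechanism:

```python
# Causal model for Example 5.33: up_s1 and up_s2 are background variables,
# and the mechanism for lit_l1 is the completion of the two causal rules.
causal = {"lit_l1": lambda env: env["up_s1"] == env["up_s2"]}
# Evidential model: lit_l1 and up_s2 are background, up_s1 is derived.
evidential = {"up_s1": lambda env: env["lit_l1"] == env["up_s2"]}

def solve(background, mechanisms, do=None):
    """Evaluate the mechanisms; an intervention `do` overrides a mechanism."""
    env = dict(background)
    if do:
        env.update(do)
    for var, mech in mechanisms.items():
        if not do or var not in do:
            env[var] = mech(env)
    return env

# Intervene to make l1 lit by some mechanism outside the model:
print(solve({"up_s1": False, "up_s2": False}, causal,
            do={"lit_l1": True})["up_s1"])      # False: the switch stays put
print(solve({"lit_l1": False, "up_s2": False}, evidential,
            do={"lit_l1": True})["up_s1"])      # False, but it was True before
# the intervention: the evidential model wrongly predicts the switch moves.
```

The causal model leaves *up_s_1* alone when *lit_l_1* is forced, whereas the evidential model changes its conclusion about the switch as a side effect of the intervention on the light.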

A **causal model** consists of

- a set of **background variables**, sometimes called **exogenous variables**, which are determined by factors outside of the model;
- a set of **endogenous variables**, which are determined as part of the model; and
- a set of functions, one for each endogenous variable, that specifies how the endogenous variable can be determined from other endogenous variables and background variables. The function for a variable *X* is called the **causal mechanism** for *X*. The entire set of functions must have a unique solution for each assignment of values to the background variables.

When the variables are propositions, the function for a proposition can be specified as a set of clauses with the proposition as their head (under the complete knowledge assumption). One way to ensure a unique solution is for the knowledge base to be acyclic.
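This can be sketched in Python (our own names and encoding, not the book's notation): each endogenous proposition's mechanism is given as the bodies of its clauses, and evaluating in a topological order yields the unique solution the definition requires.

```python
# Each endogenous proposition's mechanism: the bodies of its clauses.
# A literal is an atom name, or ("~", atom) for a negated atom.
clauses = {
    "lit_l1": [["up_s1", "up_s2"], [("~", "up_s1"), ("~", "up_s2")]],
}
order = ["lit_l1"]  # topological order: acyclicity ensures a unique solution

def holds(literal, env):
    return not env[literal[1]] if isinstance(literal, tuple) else env[literal]

def unique_solution(background):
    """Solve the model for one assignment to the background variables."""
    env = dict(background)
    for head in order:  # each body mentions only background or earlier vars
        env[head] = any(all(holds(l, env) for l in body)
                        for body in clauses[head])
    return env

print(unique_solution({"up_s1": False, "up_s2": False})["lit_l1"])  # True
```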

**Example 5.34:** In Example 5.33, Equation (5.33) can be the causal mechanism for *lit_l_1*. This can be expressed as the rules with *lit_l_1* in the head specified in this model. There would be other causal mechanisms for *up_s_1* and *up_s_2*, or perhaps these could be background variables that are not controlled in the model.

An **intervention** is an action to force a variable *X* to have a
particular value *v* by some mechanism other than changing one of the
other variables in the model. The effect of an intervention can be obtained by replacing the causal
mechanism for *X* by *X=v*. To intervene to force a proposition *p* to be
true involves replacing the clauses for *p* with the atomic clause *p*. To
intervene to force a proposition *p* to be
false involves removing the clauses for *p*.
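These two operations on a clausal representation can be sketched as follows (a Python sketch with a hypothetical helper name; `clauses` maps each proposition to the list of bodies of its clauses):

```python
# Interventions on a clausal causal model.
def do(clauses, p, value):
    """Return a copy of the model with the mechanism for p replaced by p=value."""
    intervened = {head: list(bodies) for head, bodies in clauses.items()}
    if value:
        intervened[p] = [[]]      # the atomic clause p: true unconditionally
    else:
        intervened.pop(p, None)   # no clauses for p: p cannot be derived
    return intervened

clauses = {"lit_l1": [["up_s1", "up_s2"]]}
print(do(clauses, "lit_l1", True))    # {'lit_l1': [[]]}
print(do(clauses, "lit_l1", False))   # {}
```

Note that the original model is left intact; the intervention produces a modified copy, so different interventions can be compared.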

If the values of the background variables are not known, the background variables can be represented by assumables. An observation can be implemented by two stages:

- abduction to explain the observation in terms of the background variables and
- prediction to see what follows from the explanations.

Intuitively, abduction tells us what the world is like, given the observations. The prediction tells us the consequence of the action, given how the world is.
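The two stages can be sketched for the light *l_1* of Example 5.33, taking *up_s_2* as the only assumable background variable (a Python sketch; the names `model` and `explanations` are our own):

```python
from itertools import product

# Unknown background variables are treated as assumables; here up_s2 is the
# only one, and up_s1 is known. (Our own encoding of Example 5.33.)
background_vars = ["up_s2"]

def model(env, do=None):
    """Causal mechanism for lit_l1, with an optional intervention."""
    env = dict(env, **(do or {}))
    if do is None or "lit_l1" not in do:
        env["lit_l1"] = env["up_s1"] == env["up_s2"]
    return env

def explanations(observed, known):
    """Abduction: background assignments under which the observation holds."""
    expls = []
    for vals in product([True, False], repeat=len(background_vars)):
        env = dict(known, **dict(zip(background_vars, vals)))
        if all(model(env)[v] == val for v, val in observed.items()):
            expls.append(env)
    return expls

# Observe s1 up and l1 lit: abduction concludes s2 must also be up.
expls = explanations({"lit_l1": True}, {"up_s1": True})
# Prediction: in each explanation, intervene to put s1 down; l1 goes out.
print([model(e, do={"up_s1": False})["lit_l1"] for e in expls])  # [False]
```

Abduction fills in what the world must be like for the observation to hold; prediction then applies the intervention to each explanation and reads off the consequences.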