# 5.9 Causal Models

A primitive atom is an atom that is defined using facts. A derived atom is an atom that is defined using rules. Typically, the designer writes axioms for the derived atoms and then expects a user to specify which primitive atoms are true. Thus, a derived atom will be inferred as necessary from the primitive atoms and other atoms that can be derived.

The designer of an agent must make many decisions when designing a knowledge base for a domain. For example, consider just two propositions, $a$ and $b$, both of which are true. There are multiple ways to write this. A designer could

• state both $a$ and $b$ as atomic clauses, treating both as primitive

• state the atomic clause $a$ and the rule $b\leftarrow a$, treating $a$ as primitive and $b$ as derived

• state the atomic clause $b$ and the rule $a\leftarrow b$, treating $b$ as primitive and $a$ as derived.

These representations are logically equivalent; they cannot be distinguished logically. However, they have different effects when the knowledge base is changed. Suppose $a$ were no longer true for some reason. In the first and third representations, $b$ would still be true, whereas in the second representation, $b$ would no longer be true.
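The contrast can be sketched in a few lines of Python (hypothetical helper names, not from the text): a knowledge base is a set of facts plus definite-clause rules, and making $a$ false removes both its fact and any clause defining it.

```python
# Sketch (hypothetical names): a knowledge base is (facts, rules), where
# rules is a list of (head, body) pairs representing definite clauses.

def consequences(facts, rules):
    """Forward-chain: all atoms derivable from the facts using the rules."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

def force_false(atom, facts, rules):
    """Make atom false: drop its fact and any clause with it as head."""
    return facts - {atom}, [(h, b) for h, b in rules if h != atom]

kb1 = ({"a", "b"}, [])             # both primitive
kb2 = ({"a"}, [("b", ["a"])])      # a primitive, b derived
kb3 = ({"b"}, [("a", ["b"])])      # b primitive, a derived

# Logically equivalent: each entails both a and b.
assert all(consequences(f, r) == {"a", "b"} for f, r in (kb1, kb2, kb3))

# Making a false has different effects on b in each representation:
for facts, rules in (kb1, kb2, kb3):
    print("b" in consequences(*force_false("a", facts, rules)))
# prints True, False, True
```

In the second representation, $b$ loses its only support when $a$ is removed; in the other two, $b$ does not depend on $a$.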

A causal model, or a model of causality, is a representation of a domain that predicts the results of interventions. An intervention is an action that forces a variable to have a particular value. That is, an intervention on a variable changes the value of the variable in some way other than as a side-effect of manipulating other variables in the model. Other variables may be affected by the change.

To predict the effect of interventions, a causal model represents knowledge in the direction from cause to effect: when a cause is changed, its effects should change accordingly. An evidential model represents a domain in the other direction – from effect to cause. Note that there is no assumption that there is “the cause” of an effect; rather, there are propositions which together may cause the effect to become true.

A structural causal model defines a causal mechanism for each atom that is modeled. This causal mechanism specifies when the atom is true in terms of other atoms. If the model is manipulated to make an atom true or false, then the clauses for that atom are replaced by the appropriate assertion that the atom is true or false. The model is designed so that it gives appropriate answers for such interventions.

###### Example 5.34.

In the electrical domain depicted in Figure 5.2, consider the relationship between switches $s_{1}$ and $s_{2}$ and light $l_{1}$. Assume all components are working properly. Light $l_{1}$ is lit whenever both switches are up or both switches are down. Thus,

 ${{lit\_l}_{1}\leftrightarrow({up\_s}_{1}\leftrightarrow{up\_s}_{2})}$ (5.1)

which is logically equivalent to

 ${{up\_s}_{1}\leftrightarrow({lit\_l}_{1}\leftrightarrow{up\_s}_{2}).}$

This formula is symmetric between the three propositions; it is true if and only if an odd number of the propositions are true. However, in the world, the relationship between these propositions is not symmetric. Suppose both switches were up and the light was lit. Putting $s_{1}$ down does not make $s_{2}$ go down to preserve ${lit\_l}_{1}$. Instead, putting $s_{1}$ down makes ${lit\_l}_{1}$ false, and ${up\_s}_{2}$ remains true. Thus, to predict the result of interventions, formula (5.1) is not enough. A mechanism for each atom can make the relationship asymmetric, and account for interventions.

Assuming that nothing internal to the model causes the switches to be up or down, the state of Figure 5.2 with $s_{1}$ up and $s_{2}$ down is represented as

 ${{lit\_l}_{1}\leftrightarrow({up\_s}_{1}\leftrightarrow{up\_s}_{2})}\qquad{up\_s}_{1}\qquad\neg{up\_s}_{2}$

which can be written as a logic program using negation as failure as

 ${lit\_l}_{1}\leftarrow{up\_s}_{1}\wedge{up\_s}_{2}.$
 ${lit\_l}_{1}\leftarrow{\sim}{up\_s}_{1}\wedge{\sim}{up\_s}_{2}.$
 ${up\_s}_{1}.$

The representation makes reasonable predictions when one of the values is changed. To intervene on the switch positions, assert or remove the propositions about the switches being up; this can change whether the light is lit. To intervene to make light $l_{1}$ unlit, replace the clauses defining ${lit\_l}_{1}$; this does not change the switch positions. Note that intervening to make the light unlit does not mean that the agent turns the light off by moving the corresponding switch, but rather by some other means, for example, removing or breaking the light bulb.
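A minimal sketch of these interventions, assuming hypothetical Python names: each atom's causal mechanism is a function of its causes, and intervening on the light replaces its mechanism by a constant without touching the switches.

```python
# Sketch (hypothetical names): the causal mechanism for lit_l1, encoding
#   lit_l1 <- up_s1 & up_s2.    lit_l1 <- ~up_s1 & ~up_s2.
def mechanism_lit_l1(up_s1, up_s2):
    return (up_s1 and up_s2) or (not up_s1 and not up_s2)

# State of Figure 5.2: s_1 up, s_2 down; the light's value is derived.
up_s1, up_s2 = True, False
print(mechanism_lit_l1(up_s1, up_s2))   # False: the switches disagree

# Intervening on a switch propagates through the mechanism to the light:
up_s2 = True
print(mechanism_lit_l1(up_s1, up_s2))   # True: switches agree, light lit

# Intervening on the light replaces its mechanism by a constant
# (e.g., removing the bulb); the switch positions are untouched:
lit_l1 = False
print(up_s1, up_s2)                     # True True: positions unchanged
```

The asymmetry of the model lives in which atoms have mechanisms: the switches are exogenous inputs here, so forcing the light's value never back-propagates to them.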

An evidential model is

 ${up\_s}_{1}\leftarrow{lit\_l}_{1}\wedge{up\_s}_{2}.$
 ${up\_s}_{1}\leftarrow{\sim}{lit\_l}_{1}\wedge{\sim}{up\_s}_{2}.$

This can be used to answer questions about whether $s_{1}$ is up based on the position of $s_{2}$ and whether $l_{1}$ is lit. Its completion is also equivalent to formula (5.1). However, it does not accurately predict the effect of interventions.
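The failure can be made concrete with a small sketch (hypothetical Python names): running the evidential clauses after an intervention on the light wrongly revises the switch position.

```python
# Sketch (hypothetical names): the evidential model's clauses for up_s1,
#   up_s1 <- lit_l1 & up_s2.    up_s1 <- ~lit_l1 & ~up_s2.
def evidential_up_s1(lit_l1, up_s2):
    return (lit_l1 and up_s2) or (not lit_l1 and not up_s2)

up_s2 = True
# As evidence, this is fine: light lit and s_2 up implies s_1 is up.
print(evidential_up_s1(True, up_s2))    # True

# Intervention: break the bulb, so the light is off. No switch moved,
# yet the evidential model now infers s_1 is down:
print(evidential_up_s1(False, up_s2))   # False -- a wrong prediction
```

The model treats the forced value of ${lit\_l}_{1}$ as if it were observed evidence about the switches, which is exactly what an intervention is not.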

For most purposes, it is preferable to use a causal model of the world as it is more transparent, stable, and modular than an evidential model. Causal models under uncertainty are explored in Chapter 11.