Artificial Intelligence: Foundations of Computational Agents, third edition, Cambridge University Press, 2023.
A primitive atom is an atom that is defined using facts. A derived atom is an atom that is defined using rules. Typically, the designer writes axioms for the derived atoms and then expects a user to specify which primitive atoms are true. Thus, a derived atom will be inferred as necessary from the primitive atoms and other atoms that can be derived.
The designer of an agent must make many decisions when designing a knowledge base for a domain. For example, consider two propositions, a and b, both of which are true. There are many choices of how to represent this. A designer could specify both a and b as atomic clauses, treating both as primitive. A designer could have a as primitive and b as derived, stating a as an atomic clause and giving the rule b ← a. Alternatively, the designer could specify the atomic clause b and the rule a ← b, treating b as primitive and a as derived. These representations are logically equivalent; they cannot be distinguished logically. However, they have different effects when the knowledge base is changed. Suppose a were no longer true for some reason. In the first and third representations, b would still be true, and in the second representation b would no longer be true.
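The three representations and the effect of retracting a can be sketched with a minimal forward-chaining prover (an illustrative sketch, not the book's code; the names kb1, kb2, kb3 and the helpers are assumptions for this example):

```python
# Each knowledge base is a set of atomic clauses (facts) plus a list of
# rules of the form (head, body), read as head <- body.
def consequences(facts, rules):
    """Forward chaining: apply rules until no new atoms are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

# Three logically equivalent ways to make both a and b true:
kb1 = ({"a", "b"}, [])            # both primitive
kb2 = ({"a"}, [("b", ["a"])])     # a primitive, b derived (b <- a)
kb3 = ({"b"}, [("a", ["b"])])     # b primitive, a derived (a <- b)

# All three entail both a and b:
for facts, rules in (kb1, kb2, kb3):
    assert consequences(facts, rules) == {"a", "b"}

# Now suppose a is no longer true: remove it from the facts.
def without_a(facts, rules):
    return consequences(facts - {"a"}, rules)

print(without_a(*kb1))  # b still true
print(without_a(*kb2))  # b no longer derivable
print(without_a(*kb3))  # b still true (a is even re-derived from b)
```

In kb1 and kb3, b survives the change; in kb2 it does not, because b was only derivable from a.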
A causal model, or a model of causality, is a representation of a domain that predicts the results of interventions. An intervention is an action that forces a variable to have a particular value. That is, an intervention changes the value in some way other than manipulating other variables in the model.
To predict the effect of interventions, a causal model represents how a cause implies its effect. When the cause is changed, its effect should change. An evidential model represents a domain in the other direction – from effect to cause. Note that we do not assume that there is “the cause” of an effect; rather, many propositions together may make the effect true.
In the electrical domain depicted in Figure 5.2, consider the relationship between switches s2 and s3 and light l2. Assume all components are working properly. Light l2 is lit whenever both switches are up or both switches are down. Thus,

lit_l2 ↔ (up_s2 ↔ up_s3).   (5.1)
This is logically equivalent to

up_s2 ↔ (lit_l2 ↔ up_s3).
This formula is symmetric between the three propositions; it is true if and only if an odd number of the propositions are true. However, in the world, the relationship between these propositions is not symmetric. Suppose both switches were up and the light was lit. Putting s2 down does not make s3 go down to preserve lit_l2. Instead, putting s2 down makes lit_l2 false, while up_s3 remains true. Thus, to predict the result of interventions, we require more than proposition (5.1) above.
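The parity claim can be checked by enumerating all eight truth assignments; a small sketch, using the proposition names lit_l2, up_s2, up_s3 from the text:

```python
from itertools import product

# Check that formula (5.1), lit_l2 <-> (up_s2 <-> up_s3), holds exactly
# when an odd number of the three propositions are true.
for lit, s2, s3 in product([False, True], repeat=3):
    formula = lit == (s2 == s3)          # lit_l2 <-> (up_s2 <-> up_s3)
    odd = (lit + s2 + s3) % 2 == 1       # odd number of True values
    assert formula == odd

print("formula (5.1) holds iff an odd number of the propositions are true")
```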
A causal model is

lit_l2 ← up_s2 ∧ up_s3.
lit_l2 ← ¬up_s2 ∧ ¬up_s3.
The completion of this is equivalent to proposition (5.1); however, it makes reasonable predictions when one of the values is changed. Changing one of the switch positions changes whether the light is lit, but changing whether the light is lit (by some other mechanism) does not change whether the switches are up or down.
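This asymmetry can be illustrated with a sketch, assuming the causal model's two rules, lit_l2 ← up_s2 ∧ up_s3 and lit_l2 ← ¬up_s2 ∧ ¬up_s3, with the switch positions primitive and lit_l2 derived (the dictionaries and helper below are assumptions for this example):

```python
# lit_l2 is derived from the primitive atoms up_s2 and up_s3:
#   lit_l2 <- up_s2 & up_s3        lit_l2 <- ~up_s2 & ~up_s3
def lit_l2(up_s2, up_s3):
    return (up_s2 and up_s3) or (not up_s2 and not up_s3)

# Observed world: both switches up, so the light is lit.
world = {"up_s2": True, "up_s3": True}
world["lit_l2"] = lit_l2(world["up_s2"], world["up_s3"])
assert world["lit_l2"]

# Intervening on a cause: force up_s2 false, then recompute the effect.
intervened = dict(world, up_s2=False)
intervened["lit_l2"] = lit_l2(intervened["up_s2"], intervened["up_s3"])
print(intervened)  # the light goes out; up_s3 is unaffected

# Intervening on the effect: force lit_l2 false by some other mechanism.
# The causes are primitive, so they keep their values.
broken = dict(world, lit_l2=False)
print(broken)  # the switches stay up even though the light is out
```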
An evidential model is

up_s2 ← lit_l2 ∧ up_s3.
up_s2 ← ¬lit_l2 ∧ ¬up_s3.
This can be used to answer questions about whether s2 is up based on the position of s3 and whether l2 is lit. Its completion is also equivalent to formula (5.1). However, it does not accurately predict the effect of interventions.
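The failure can be made concrete with a sketch, assuming the evidential model's two rules, up_s2 ← lit_l2 ∧ up_s3 and up_s2 ← ¬lit_l2 ∧ ¬up_s3, with lit_l2 and up_s3 primitive and up_s2 derived:

```python
# up_s2 is derived from the primitive atoms lit_l2 and up_s3:
#   up_s2 <- lit_l2 & up_s3        up_s2 <- ~lit_l2 & ~up_s3
def up_s2(lit_l2, up_s3):
    return (lit_l2 and up_s3) or (not lit_l2 and not up_s3)

# Evidential reasoning works: the light is lit and s3 is up, so s2 is up.
assert up_s2(lit_l2=True, up_s3=True)

# But predicting an intervention goes wrong: forcing lit_l2 false by some
# other mechanism (say, smashing the bulb) should not move s2, yet the
# model now derives that s2 is down.
print(up_s2(lit_l2=False, up_s3=True))  # False -- the wrong prediction
```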
For most purposes, it is preferable to use a causal model of the world, as it is more transparent, stable, and modular than an evidential model.