foundations of computational agents
One of the main lessons of AI is that successful agents exploit the structure of the world. Previous chapters showed how states can be represented in terms of features. Representing domains using features can be much more compact than representing them using states explicitly, and algorithms can exploit this compactness. There is, however, usually much more structure that can be exploited for representation and inference. In particular, this chapter considers reasoning in terms of individuals and relations:
Individuals are things in the world, whether they are concrete individuals such as people and buildings, imaginary individuals such as unicorns and programs that can reliably pass the Turing test, processes such as reading a book or going on a holiday, or abstract concepts such as money, courses, and times. These are also called entities, objects, or things.
Relations specify what is true about these individuals. This is meant to be as general as possible and includes properties, which are true or false of single individuals, propositions, which are true or false independently of any individuals, as well as relationships among multiple individuals.
In the representation of the electrical domain in Example 5.7, the propositions up_s2, up_s3, and ok_cb1 have no internal structure. There is no notion that the propositions up_s2 and up_s3 are about the same relation, but with different individuals, or that up_s2 and down_s2 are about the same switch. There is no notion of individuals and relations.
An alternative is to represent explicitly the individual switches s1, s2, s3, and the properties or relations, up and ok. Using this representation, “switch s2 is up” is represented as up(s2). By knowing what up and s2 represent, we do not require a separate definition of up(s2). A binary relation, like connected_to, can be used to relate two individuals, such as connected_to(w1, s1).
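To make the contrast concrete, here is a minimal sketch (in Python, not from the book) of the relational representation: each relation is stored as a set of tuples of individuals, so “switch s2 is up” is one fact, up(s2), built from the relation up and the individual s2, rather than an unanalyzable proposition. The particular data layout and the helper `holds` are assumptions for illustration only.

```python
# Sketch of a relational representation: individuals are plain strings,
# a unary relation is a set of individuals, and a binary relation is a
# set of pairs of individuals.
up = {"s2", "s3"}                # up(s2), up(s3)
connected_to = {("w1", "s1")}    # connected_to(w1, s1)

def holds(relation, *individuals):
    """True if the relation holds of the given individuals."""
    if len(individuals) == 1:
        return individuals[0] in relation
    return tuple(individuals) in relation

print(holds(up, "s2"))                  # True: switch s2 is up
print(holds(connected_to, "w1", "s1"))  # True
print(holds(up, "s1"))                  # False: s1 is not up here
```

Because the relation and the individual are separate pieces, the same relation up applies uniformly to s1, s2, and s3, which is exactly the structure the flat propositional encoding loses.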
Modeling in terms of individuals and relations has a number of advantages over just using features:
It is often the natural representation. Often features are properties of individuals, and this internal structure is lost in converting to features.
An agent may have to model a domain without knowing what the individuals are, or how many there will be, and, thus, without knowing what the features are. When interacting with the environment, the agent can construct the features when it finds out which individuals are in the particular environment.
An agent can do some reasoning without caring about the particular individuals. For example, it may be able to derive that something holds for all individuals without knowing what the individuals are. Or, an agent may be able to derive that some individual exists that has some properties, without caring about other individuals. There may be some queries an agent can answer for which it does not have to distinguish the individuals.
The existence of individuals could depend on actions or could be uncertain. For example, in planning in a manufacturing context, whether there is a working component may depend on many other subcomponents working and being put together correctly; some of these may depend on the agent’s actions, and some may not be under the agent’s control. Thus, an agent may have to act without knowing what features there are or what features there will be.
Often there are infinitely many individuals an agent is reasoning about, and so infinitely many features. For example, if the individuals are sentences, the agent may only have to reason about a very limited set of sentences (e.g., those that could be meant by a person speaking, or those that may be sensible to generate), even though there may be infinitely many possible sentences, and so infinitely many features.
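The advantages above share one mechanism: when relations are represented explicitly, the set of propositional features is induced by the relations and whatever individuals the agent happens to encounter, rather than being fixed in advance. A hedged Python sketch of this idea, with invented names:

```python
from itertools import product

def feature_names(relations, individuals):
    """Enumerate the propositional features induced by relations of known
    arity over a set of individuals discovered at run time."""
    feats = []
    for name, arity in relations:
        for args in product(individuals, repeat=arity):
            feats.append(f"{name}({', '.join(args)})")
    return feats

# The relations are known in advance; the individuals are not.
relations = [("up", 1), ("connected_to", 2)]
print(feature_names(relations, ["s1", "s2"]))
# -> ['up(s1)', 'up(s2)', 'connected_to(s1, s1)', 'connected_to(s1, s2)',
#     'connected_to(s2, s1)', 'connected_to(s2, s2)']
```

With two individuals this yields six features; with n switches it would yield n + n² of them, and with an unbounded pool of individuals, unboundedly many. The relational representation stays the same size either way, which is why an agent can commit to it before knowing which, or how many, individuals exist.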