12.1 Exploiting Structure Beyond Features

One of the main lessons of AI is that successful agents exploit the structure of the world. The previous chapters considered states represented in terms of features. Using features is much more compact than representing the states explicitly, and algorithms can exploit this compactness. There is, however, usually much more structure beyond features that can be exploited for representation and inference. In particular, this chapter considers reasoning in terms of

  • individuals - things in the domain, whether they are concrete individuals such as people and buildings, imaginary individuals such as unicorns and fairies, or abstract concepts such as courses and times.
  • relations - what is true about these individuals. This is meant to be as general as possible and includes unary relations that are true or false of single individuals, in addition to relationships among multiple individuals.
Example 12.1: In Example 5.5, the propositions up_s2, up_s3, and ok_s2 have no internal structure. There is no notion that the propositions up_s2 and up_s3 are about the same relation, but with different individuals, or that up_s2 and ok_s2 are about the same switch. There is no notion of individuals and relations.

An alternative is to explicitly represent the individual switches s1, s2, and s3, and the properties or relations up and ok. Using this representation, "switch s2 is up" is represented as up(s2). Because we know what up and s1 represent, up(s1) does not require a separate definition. A binary relation, like connected_to, can be used to relate two individuals, such as connected_to(w1,s1).
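As a minimal sketch (not the book's notation), the relational representation can be encoded by treating each relation as a set of tuples of the individuals for which it holds; a single test then covers every instance of a relation, with no per-proposition definitions:

```python
# Individuals in the domain: three switches and a wire.
individuals = {"s1", "s2", "s3", "w1"}

# Each relation is a set of tuples of individuals for which it holds.
up = {("s2",), ("s3",)}            # up(s2) and up(s3) hold
ok = {("s2",)}                     # ok(s2) holds
connected_to = {("w1", "s1")}      # connected_to(w1, s1) holds

def holds(relation, *args):
    """True if the relation holds for the given individuals."""
    return tuple(args) in relation

print(holds(up, "s2"))                   # True:  up(s2)
print(holds(up, "s1"))                   # False: up(s1)
print(holds(connected_to, "w1", "s1"))   # True:  connected_to(w1, s1)
```

One definition of holds works for unary relations like up and binary relations like connected_to alike, which is exactly the internal structure that the flat propositions up_s1, up_s2, ... lose.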

A number of reasons exist for using individuals and relations instead of just features:

  • It is often the natural representation. Often features are properties of individuals, and this internal structure is lost in converting to features.
  • An agent may have to model a domain without knowing what the individuals are, or how many there will be, and, thus, without knowing what the features are. At run time, the agent can construct the features when it finds out which individuals are in the particular environment.
  • An agent can do some reasoning without caring about the particular individuals. For example, it may be able to derive that something holds for all individuals without knowing what the individuals are. Or, an agent may be able to derive that some individual exists that has some properties, without caring about other individuals. There may be some queries an agent can answer for which it does not have to distinguish the individuals.
  • The existence of individuals could depend on actions or could be uncertain. For example, in planning in a manufacturing context, whether there is a working component may depend on many other subcomponents working and being put together correctly; some of these may depend on the agent's actions, and some may not be under the agent's control. Thus, an agent may have to act without knowing what features there are or what features there will be.
  • Often there are infinitely many individuals an agent is reasoning about, and so infinitely many features. For example, if the individuals are sentences, the agent may only have to reason about a very limited set of sentences (e.g., those that could be meant by a person speaking, or those that may be sensible to generate), even though there may be infinitely many possible sentences, and so infinitely many features.
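The second point above, constructing features at run time once the individuals are known, can be sketched as follows. The function and relation names here are illustrative, not from the book: a propositional feature is generated for each relation applied to each tuple of individuals of the right arity.

```python
from itertools import product

def ground(relation_name, arity, individuals):
    """Construct one propositional feature per tuple of individuals."""
    return [f"{relation_name}({','.join(t)})"
            for t in product(individuals, repeat=arity)]

# At run time the agent discovers which switches exist...
switches = ["s1", "s2", "s3"]

# ...and only then constructs the propositional features.
features = ground("up", 1, switches) + ground("ok", 1, switches)
print(features)
# ['up(s1)', 'up(s2)', 'up(s3)', 'ok(s1)', 'ok(s2)', 'ok(s3)']
```

The relational representation (the names up and ok, with their arities) is fixed in advance; the set of features depends on which individuals turn up in the particular environment, and need never be written down explicitly when the agent can reason at the level of relations.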