The representation dimension concerns how the world is described.
The different ways the world could be are called states. A state of the world specifies the agent’s internal state (its belief state) and the environment state.
At the simplest level, an agent can reason explicitly in terms of individually identified states.
A thermostat for a heater may have two belief states: off and heating. The environment may have three states: cold, comfortable, and hot. There are thus six states corresponding to the different combinations of belief and environment states. These states may not fully describe the world, but they are adequate to describe what a thermostat should do. The thermostat should move to, or stay in, heating if the environment is cold and move to, or stay in, off if the environment is hot. If the environment is comfortable, the thermostat should stay in its current state. The thermostat agent keeps the heater on in the heating state and keeps the heater off in the off state.
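A minimal Python sketch of this thermostat, reasoning directly over the named states; the state names and the transition rule follow the description above, while the particular code layout is an assumption for illustration.

```python
# A sketch of the thermostat agent, reasoning in terms of explicitly named states.
BELIEF_STATES = {"off", "heating"}
ENV_STATES = {"cold", "comfortable", "hot"}

def next_belief_state(belief, env):
    """Return the thermostat's next belief state given the environment state."""
    if env == "cold":
        return "heating"   # move to, or stay in, heating
    if env == "hot":
        return "off"       # move to, or stay in, off
    return belief          # comfortable: stay in the current state

def heater_on(belief):
    """The heater is on exactly when the thermostat is in the heating state."""
    return belief == "heating"

# Example: starting in off, the environment turns cold.
state = next_belief_state("off", "cold")
print(state, heater_on(state))   # heating True
```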
Instead of enumerating states, it is often easier to reason in terms of features of the state or propositions that are true or false of the state. A state may be described in terms of features, where a feature has a value in each state [see Section 4.1].
An agent that has to look after a house may have to reason about whether light bulbs are broken. It may have features for the position of each switch, the status of each switch (whether it is working okay, whether it is shorted, or whether it is broken), and whether each light works. The position of a switch, for example, is a feature that has value up when the switch is up and value down when the switch is down. The state of the house’s lighting may be described in terms of values for each of these features. These features depend on each other, but not in arbitrarily complex ways; for example, whether a light is on may just depend on whether it is okay, whether the switch is turned on, and whether there is electricity.
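The same idea can be sketched in Python by describing one state of the house as values for a few features; the feature names used here are illustrative assumptions, not the book's notation.

```python
# A sketch of a feature-based description of one state of the house's lighting.
# The feature names are illustrative; the book does not fix this notation.
state = {
    "switch1_position": "up",   # up or down
    "switch1_status": "ok",     # ok, shorted, or broken
    "light1_ok": True,          # whether the bulb itself works
    "power": True,              # whether there is electricity
}

def light1_lit(state):
    """Whether light 1 is lit depends on only a few features, not the whole state."""
    return (state["power"]
            and state["switch1_status"] == "ok"
            and state["switch1_position"] == "up"
            and state["light1_ok"])

print(light1_lit(state))   # True for the state above
```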
A proposition is a Boolean feature, which means that its value is either true or false. Thirty propositions can encode 2^30 = 1,073,741,824 states. It may be easier to specify and reason with the thirty propositions than with more than a billion states. Moreover, having a compact representation of the states indicates understanding, because it means that an agent has captured some regularities in the domain.
Consider an agent that has to recognize letters of the alphabet. Suppose the agent observes a binary image, a 30 × 30 grid of pixels, where each of the 900 grid points is either black or white. The action is to determine which of the letters {a, ..., z} is drawn in the image. There are 2^900 different possible states of the image, and so 26^(2^900) different functions from the image state into the characters {a, ..., z}. We cannot even represent such functions in terms of the state space. Instead, handwriting recognition systems define features of the image, such as line segments, and define the function from images to characters in terms of these features. Modern implementations learn the features that are useful.
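A short calculation makes the combinatorial point concrete; the 30 × 30 grid size is taken from the 900 pixels mentioned above.

```python
# Counting why enumerating image states is hopeless.
num_pixels = 30 * 30                  # 900 binary pixels
num_image_states = 2 ** num_pixels    # 2^900 possible images
num_letters = 26
# The number of functions from images to letters is 26 ** num_image_states,
# which is far too large to compute or even write down.
print(f"2^900 has {len(str(num_image_states))} decimal digits")   # 271
```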
When describing a complex world, the features can depend on relations and individuals. What we call an individual could also be called a thing, an object or an entity. A relation on a single individual is a property. There is a feature for each possible relationship among the individuals.
The agent that looks after a house in Example 1.6 could have the lights and switches as individuals, and relations position and connected_to. Instead of a separate position feature for each switch, it could use the relation position applied to a switch and a value. This relation enables the agent to reason about all switches, or to have general knowledge about switches that can be used when it encounters a switch.
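A sketch of this relational view, assuming the position relation is stored as a set of (switch, value) tuples; the switch names and the set representation are assumptions for illustration.

```python
# One position relation over all switches, rather than a separate position
# feature per switch. The switch names here are illustrative.
position = {("s1", "up"), ("s2", "down"), ("s3", "up")}

def is_up(switch):
    """General knowledge about switches, written once and usable for any switch,
    including switches not known when the rule was written."""
    return (switch, "up") in position

print([s for s in ("s1", "s2", "s3") if is_up(s)])   # ['s1', 's3']
```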
If an agent is enrolling students in courses, there could be a feature that gives the grade of a student in a course, for every student–course pair where the student took the course. There would be a passed feature for every student–course pair, which depends on the grade feature for that pair. It may be easier to reason in terms of individual students, courses and grades, and the relations grade and passed. By defining how passed depends on grade once, the agent can apply the definition for each student and course. Moreover, this can be done before the agent knows of any of the individuals and so before it knows any of the features.
The two-argument relation passed, with 1000 students and 100 courses, can represent 1000 × 100 = 100,000 propositions and so 2^100,000 states.
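A sketch of the relational definition in Python: grade is a relation on student–course pairs, and passed is defined once in terms of grade; the particular grades and the pass mark of 50 are assumptions for illustration.

```python
# grade as a relation on (student, course) pairs; the grades and the
# pass mark of 50 are illustrative assumptions.
grade = {("sam", "cs101"): 87, ("chris", "cs101"): 42}

def passed(student, course):
    """Defined once in terms of grade, then applied to any student-course pair."""
    return grade.get((student, course), 0) >= 50

print(passed("sam", "cs101"), passed("chris", "cs101"))   # True False

# With 1000 students and 100 courses, passed alone stands for
# 1000 * 100 = 100,000 propositions, and hence 2**100000 states.
print(1000 * 100)   # 100000
```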
By reasoning in terms of relations and individuals, an agent can reason about whole classes of individuals without ever enumerating the features or propositions, let alone the states. An agent may have to reason about infinite sets of individuals, such as the set of all numbers or the set of all sentences. To reason about an unbounded or infinite number of individuals, an agent cannot reason in terms of states or features; it must reason at the relational level.
In the representation dimension, the agent reasons in terms of
states
features, or
individuals and relations (often called relational representations).
Some of the frameworks will be developed in terms of states, some in terms of features and some in terms of individuals and relations.
Reasoning in terms of states is introduced in Chapter 3. Reasoning in terms of features is introduced in Chapter 4. We consider relational reasoning starting in Chapter 13.