One possible representation of the effect and precondition of actions is to explicitly enumerate the states and, for each state, specify the actions that are possible in that state and, for each state–action pair, specify the state that results from carrying out the action in that state. This would require a table such as the following:
State | Action | Resulting State
… | … | …
Each tuple in this relation specifies that it is possible to carry out the action in the middle column in the state in the first column and that, if it were carried out in that state, the resulting state would be the one in the last column.
Thus, this is the explicit representation of a graph, where the nodes are states and the arcs correspond to actions. This is called a state-space graph. This is the sort of graph that was used in Chapter 3. Any of the algorithms of Chapter 3 can be used to search the space.
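To make this concrete, here is a minimal sketch of such an explicit representation in Python; the state and action names are placeholders, not the book's:

```python
# Minimal sketch (placeholder names): an explicit state-transition
# table as a dictionary mapping each state to the actions possible
# in it, and each such action to the resulting state.
transitions = {
    "s1": {"a1": "s2", "a2": "s3"},
    "s2": {"a1": "s1"},
    "s3": {"a1": "s2", "a2": "s1"},
}

def possible_actions(state):
    """The actions that can be carried out in this state."""
    return list(transitions.get(state, {}))

def do(state, action):
    """The state that results from carrying out action in state."""
    return transitions[state][action]

# Read as a graph, states are nodes and each (state, action,
# resulting state) entry is an arc, so a search algorithm of
# Chapter 3 can work directly from this neighbor function:
def neighbors(state):
    return list(transitions.get(state, {}).values())
```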
In Example 6.1, the states are the quintuples specifying the robot’s location, whether the robot has coffee, whether Sam wants coffee, whether mail is waiting, and whether the robot is carrying the mail. For example, the tuple

⟨lab, ¬rhc, swc, ¬mw, rhm⟩

represents the state where Rob is at the Lab, Rob does not have coffee, Sam wants coffee, there is no mail waiting, and Rob is carrying the mail. The tuple

⟨lab, rhc, swc, mw, ¬rhm⟩

represents the state where Rob is at the Lab carrying coffee, Sam wants coffee, there is mail waiting, and Rob is not carrying any mail.
In this example, there are 4 × 2⁴ = 64 states (four locations and four Boolean features). Intuitively, all of them are possible, even if one would not expect that some of them would be reached by an intelligent robot.
There are six actions, not all of which are applicable in each state.
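As a sketch of this example, assuming the feature names of Example 6.1 (locations cs, off, lab, mr; Boolean features rhc, swc, mw, rhm) and the six action names (mc, mcc, puc, dc, pum, dm), the state space can be enumerated directly:

```python
from itertools import product
from collections import namedtuple

# A state is a quintuple of the five features. The feature and
# action names below follow Example 6.1 and are assumptions of
# this sketch rather than part of the tables in the text.
State = namedtuple("State", ["loc", "rhc", "swc", "mw", "rhm"])

locations = ["cs", "off", "lab", "mr"]   # the four locations
flags = [False, True]

# 4 locations x 2^4 Boolean combinations = 64 states.
states = [State(*values)
          for values in product(locations, flags, flags, flags, flags)]
assert len(states) == 64

# The six actions: move clockwise, move counterclockwise,
# pick up coffee, deliver coffee, pick up mail, deliver mail.
actions = ["mc", "mcc", "puc", "dc", "pum", "dm"]
```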
The actions are defined in terms of the state transitions:
State | Action | Resulting State
… | … | …
This table shows the transitions for two of the states. The complete representation includes the transitions for the other 62 states.
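A sketch of what typing this table in explicitly might look like, continuing the sketch above (the resulting locations for the move actions are assumptions about the building's layout):

```python
# Two of the 64 states, written as quintuples
# (reuses State from the sketch above):
s1 = State("lab", False, True, False, True)   # <lab, -rhc, swc, -mw, rhm>
s2 = State("lab", True, True, True, False)    # <lab, rhc, swc, mw, -rhm>

# The explicit table, keyed by (state, action) pairs:
table = {
    (s1, "mc"):  State("mr",  False, True, False, True),  # assumed clockwise
    (s1, "mcc"): State("off", False, True, False, True),  # assumed counterclockwise
    (s2, "mc"):  State("mr",  True,  True, True,  False),
    (s2, "mcc"): State("off", True,  True, True,  False),
    # ... entries for the remaining applicable state-action pairs ...
}
```

Even in these four entries, the values of swc, mw, and rhm are copied unchanged into every resulting state; the table merely records them again, and entries for the other 62 states are still needed.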
This is not a good representation for three main reasons:
There are usually too many states to represent, to acquire, and to reason with.
Small changes to the model mean a large change to the representation. Adding another feature means changing the whole representation. For example, to model the robot’s power level, so that it can recharge itself in the Lab, every state has to change (see the calculation below).
It does not represent the structure of states; there is much structure and regularity in the effects of actions that is not reflected in the state transitions. For example, most actions do not affect whether Sam wants coffee, but this fact needs to be repeated for every state.
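For instance, a quick calculation of the second point, under an assumed three-valued power-level feature:

```python
# Adding a single power-level feature with three values multiplies
# the state count; every entry in the explicit table must then be
# rewritten to mention the new feature, even though most actions
# never change it.
before = 4 * 2**4      # 64 states
after = before * 3     # 192 states with a 3-valued power level
print(before, after)
```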
An alternative is to model how the actions affect the features.
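As a preview, here is a sketch of what such a feature-based model might look like; the preconditions and effects below are assumptions drawn from the running example, written once per action rather than once per state:

```python
def effect(state, action):
    # Sketch (reuses State from above): each action's effect is
    # stated once, in terms of the features it changes; all other
    # features carry over unchanged.
    if action == "dc" and state.loc == "off" and state.rhc:
        # Delivering coffee at the office: Rob no longer has coffee
        # and Sam no longer wants coffee. Stated once, not 64 times.
        return state._replace(rhc=False, swc=False)
    if action == "pum" and state.loc == "mr" and state.mw:
        # Picking up waiting mail in the mail room.
        return state._replace(mw=False, rhm=True)
    return state   # remaining actions omitted from this sketch
```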