Imagine a robot with wheels and the ability to pick up and put down objects. It has sensing capabilities that allow it to recognize objects and to avoid obstacles. It can be given orders in natural language and obey them, making reasonable choices about what to do when its goals conflict. Such a robot could be used in an office environment to deliver packages, mail, or coffee, or it could be embedded in a wheelchair to help disabled people. It should be useful as well as safe.
In terms of the black box characterization of an agent in Figure 1.3, the autonomous delivery robot has as inputs:

- prior knowledge, provided by the agent designer, about the agent’s capabilities, what objects it may encounter and have to differentiate, what requests mean, and perhaps about its environment, such as a map;
- past experience obtained while acting, for instance, about the effects of its actions, what objects are common in the world, and what requests to expect at different times of the day;
- goals in terms of what it should deliver and when, as well as preferences specifying trade-offs, such as when it must forgo one goal to pursue another, or the trade-off between acting quickly and acting safely; and
- stimuli about its environment from input devices such as cameras, sonar, touch, sound, laser range finders, or keyboards, as well as stimuli such as the agent being forcibly moved or crashing.
The robot’s outputs are motor controls specifying how its wheels should turn, where its limbs should move, and what it should do with its grippers. Other outputs may include speech and a video display.
In terms of the dimensions of complexity, the simplest case for the robot is a flat system, represented in terms of states, with no uncertainty, with achievement goals, with no other agents, with given knowledge, and with perfect rationality. In this case, with an indefinite stage planning horizon, the problem of deciding what to do is reduced to the problem of finding a path in a graph of states. This is explored in Chapter 3.
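To make this reduction concrete, here is a minimal sketch of finding a path in a graph of states using breadth-first search. The locations and the neighbors mapping are invented for illustration and are not the book's example graph.

```python
from collections import deque

# A hypothetical state graph for a delivery robot: each state is a location,
# and the neighbors mapping lists the locations reachable in one action.
neighbors = {
    "mail_room": ["hallway"],
    "hallway": ["mail_room", "lab", "office"],
    "lab": ["hallway"],
    "office": ["hallway", "coffee_shop"],
    "coffee_shop": ["office"],
}

def find_path(start, goal):
    """Breadth-first search: returns a shortest path of states, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(find_path("mail_room", "coffee_shop"))
# ['mail_room', 'hallway', 'office', 'coffee_shop']
```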
Each dimension can add conceptual complexity to the task of reasoning:
- A hierarchical decomposition can allow the complexity of the overall system to increase while keeping each module simple and understandable on its own. This is explored in Chapter 2.

- Robots in simple environments can be modeled in terms of explicit states, but the state space soon explodes when more detail is considered. Modeling and reasoning in terms of features allows for a much more compact and comprehensible system. For example, there may be features for the robot’s location, the amount of fuel it has, what it is carrying, and so forth. Reasoning in terms of features can be exploited for computational gain, because some actions may affect only a few of the features (see the feature-based sketch after this list). Planning in terms of features is discussed in Chapter 6. When dealing with multiple individuals (e.g., multiple items to deliver), the robot may need to reason in terms of individuals and relations. Planning in terms of individuals and relations is explored in Section 15.1.

- The planning horizon can be finite if the agent only looks ahead a few steps. It can be indefinite if there is a fixed set of goals to achieve. It can be infinite if the agent has to survive for the long term, with ongoing requests and actions, such as delivering mail whenever it arrives and recharging its battery whenever the battery is low.

- There can be goals, such as “deliver coffee to Chris and make sure you always have power.” A more complex goal may be to “clean up the lab, and put everything where it belongs.” There can be complex preferences, such as “deliver mail when it arrives and service coffee requests as soon as possible, but it is more important to deliver messages marked as urgent, and Chris really needs her coffee quickly when she asks for it.”

- There can be sensing uncertainty because the robot does not know exactly what is in the world, or where it is, given its limited sensors.

- There can be uncertainty about the effects of an action, both at the low level, say due to slippage of the wheels, and at the high level, because the agent might not know whether putting the coffee on Chris’s desk succeeded in delivering coffee to her.

- There can be multiple robots, which can coordinate to deliver coffee and parcels, and compete for power outlets. There may also be children out to trick the robot, or pets that get in the way.

- A robot has a great deal to learn, such as how slippery floors are as a function of their shininess, where Chris hangs out at different times of the day and when she will ask for coffee, and which actions result in the highest rewards (see the fitting sketch after this list).
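As promised after the features item above, here is a minimal sketch of a feature-based representation, in which a state is a set of feature values and each action touches only the features it affects. The feature names and actions are assumptions made for illustration, not the book's notation.

```python
# A hypothetical feature-based state: instead of one monolithic state name,
# the robot's situation is a small set of features.
state = {"location": "hallway", "fuel": 10, "carrying": None}

def move(state, destination):
    """Moving changes only 'location' and 'fuel'; 'carrying' is untouched."""
    return {**state, "location": destination, "fuel": state["fuel"] - 1}

def pick_up(state, obj):
    """Picking up changes only 'carrying'."""
    return {**state, "carrying": obj}

s = pick_up(move(state, "mail_room"), "parcel")
print(s)  # {'location': 'mail_room', 'fuel': 9, 'carrying': 'parcel'}
```

Enumerating explicit states would instead require one state for every combination of location, fuel level, and carried object, which is where the explosion comes from; the feature view keeps each action's description local.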
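For the learning item above, here is a minimal sketch of estimating slipperiness as a function of shininess by fitting a line with least squares. The observations and the choice of a linear model are made up purely for illustration.

```python
# Made-up (shininess, slipperiness) observations, both on a 0-1 scale;
# a linear relationship is assumed only for this illustration.
data = [(0.1, 0.15), (0.4, 0.35), (0.6, 0.55), (0.9, 0.8)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
# Ordinary least-squares slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

def predicted_slipperiness(shininess):
    return intercept + slope * shininess

print(round(predicted_slipperiness(0.7), 2))
```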
Figure 1.7 depicts a typical laboratory environment for a delivery robot. This environment consists of four laboratories and many offices. In our examples, the robot can only push doors, and the directions of the doors in the diagram reflect the directions in which the robot can travel. Rooms require keys, and those keys can be obtained from various sources. The robot must deliver parcels, beverages, and dishes from room to room. The environment also contains a stairway that is potentially hazardous to the robot.