1 Artificial Intelligence and Agents


1.3 Agents Situated in Environments

AI is about practical reasoning: reasoning in order to do something. A coupling of perception, reasoning, and acting constitutes an agent. An agent acts in an environment. An agent’s environment may well include other agents. An agent together with its environment is called a world.

An agent could be, for example, a coupling of a computational engine with physical sensors and actuators, called a robot, where the environment is a physical setting. It could be the coupling of an advice-giving computer, an expert system, with a human who provides perceptual information and carries out the task. An agent could be a program that acts in a purely computational environment, a software agent.

Figure 1.3: An agent interacting with an environment

Figure 1.3 shows a black-box view of an agent in terms of its inputs and outputs. At any time, what an agent does depends on:

  • prior knowledge about the agent and the environment

  • history of interaction with the environment, which is composed of

    • stimuli received from the current environment, which can include observations about the environment, as well as actions that the environment imposes on the agent and

    • past experiences of previous actions and stimuli, or other data, from which it can learn

  • goals that it must try to achieve or preferences over states of the world

  • abilities, the primitive actions the agent is capable of carrying out.

Inside the black box, an agent has some internal belief state that can encode beliefs about its environment, what it has learned, what it is trying to do, and what it intends to do. An agent updates this internal state based on stimuli. It uses the belief state and stimuli to decide on its actions. Much of this book is about what is inside this black box.
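To make this black-box view concrete, the following minimal Python sketch mirrors the inputs listed above together with an internal belief state. It is not code from this book; the class name Agent and the methods update and select_action are illustrative assumptions.

from abc import ABC, abstractmethod

class Agent(ABC):
    """Black-box view of an agent: prior knowledge, goals, and abilities in; actions out."""

    def __init__(self, prior_knowledge, goals=None, abilities=None):
        self.belief_state = prior_knowledge   # initially just the prior knowledge
        self.goals = goals                    # goals or preferences over world states
        self.abilities = abilities            # primitive actions the agent can carry out

    @abstractmethod
    def update(self, stimuli):
        """Revise the belief state in light of new stimuli and anything learned from them."""

    @abstractmethod
    def select_action(self, stimuli):
        """Use the belief state and the current stimuli to choose one of the agent's abilities."""

A particular agent then fills in how its belief state is revised and how its actions are chosen; much of what is inside the black box amounts to these two steps.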

This is an all-encompassing view of intelligent agents varying in complexity from a simple thermostat, to a diagnostic advising system whose perceptions and actions are mediated by human beings, to a team of mobile robots, to society itself.

Purposive agents have preferences or goals. They prefer some states of the world to other states, and they act to try to achieve the states they prefer most. The non-purposive agents are grouped together and called nature. Whether or not an agent is purposive is a modeling assumption that may, or may not, be appropriate. For example, for some applications it may be appropriate to model a dog as purposive, and for others it may suffice to model a dog as non-purposive.

If an agent does not have preferences, by definition it does not care what world state it ends up in, and so it does not matter to it what it does. The reason to design an agent is to instill preferences in it – to make it prefer some world states and try to achieve them. An agent does not have to know its preferences explicitly. For example, a thermostat is an agent that senses the world and turns a heater either on or off. There are preferences embedded in the thermostat, such as to keep the occupants of a room at a pleasant temperature, even though the thermostat arguably does not know these are its preferences. The preferences of an agent are often the preferences of the designer of the agent, but sometimes an agent can acquire goals and preferences at run time.
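Continuing the sketch above, a thermostat can be written as a tiny concrete agent whose preference is embedded by its designer as a fixed set point. The 20 °C set point, the hysteresis band, and the action names here are made-up values for illustration.

class Thermostat(Agent):
    """Toy thermostat built on the Agent sketch above."""

    def __init__(self, set_point=20.0, band=1.0):
        super().__init__(prior_knowledge={"heater_on": False},
                         abilities=["heat", "off"])
        self.set_point = set_point   # the designer's embedded preference (assumed value)
        self.band = band             # hysteresis band to avoid rapid switching

    def update(self, stimuli):
        # the only stimulus is the sensed room temperature
        self.belief_state["temperature"] = stimuli["temperature"]

    def select_action(self, stimuli):
        self.update(stimuli)
        temp = self.belief_state["temperature"]
        if temp < self.set_point - self.band:
            self.belief_state["heater_on"] = True
        elif temp > self.set_point + self.band:
            self.belief_state["heater_on"] = False
        return "heat" if self.belief_state["heater_on"] else "off"

For example, Thermostat().select_action({"temperature": 18.2}) returns "heat". The thermostat never represents its preferences explicitly; they are fixed by the designer in the choice of set point.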