11 Multiagent Systems


11.1 Multiagent Framework

This chapter considers environments that contain multiple agents, with the following assumptions:

  • The agents can act autonomously, each with its own information about the world and the other agents.

  • The outcome depends on the actions of all of the agents.

  • Each agent has its own utility that depends on the outcome. Agents act to maximize their own utility.

A mechanism specifies what actions are available to each agent and how the actions of the agents lead to outcomes. An agent acts strategically when it decides what to do based on its goals or utilities.
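
To make this concrete, the following is a minimal sketch in Python of a mechanism for a two-agent interaction. The Mechanism class and the street-cleaning payoffs are illustrative assumptions, not a standard representation: each agent has a set of available actions, and each joint action (one action per agent) is mapped directly to a utility for each agent.

    # A minimal sketch of a mechanism (illustrative, assumed representation):
    # it specifies each agent's available actions and maps joint actions
    # to outcomes, here collapsed into a utility for each agent.

    class Mechanism:
        def __init__(self, actions, utilities):
            self.actions = actions        # dict: agent -> list of available actions
            self.utilities = utilities    # dict: joint action -> {agent: utility}

        def outcome(self, joint_action):
            """Return each agent's utility for a joint action (one per agent)."""
            return self.utilities[joint_action]

    # Two neighboring store owners each decide whether to clean the shared street.
    street = Mechanism(
        actions={"A": ["clean", "ignore"], "B": ["clean", "ignore"]},
        utilities={
            ("clean", "clean"): {"A": 2, "B": 2},
            ("clean", "ignore"): {"A": -1, "B": 3},
            ("ignore", "clean"): {"A": 3, "B": -1},
            ("ignore", "ignore"): {"A": 0, "B": 0},
        },
    )
    print(street.outcome(("clean", "ignore")))   # {'A': -1, 'B': 3}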

Sometimes we treat nature as an agent. Nature is defined as a special agent that has no preferences and does not act strategically; it just acts, perhaps stochastically. In terms of the agent architecture shown in Figure 1.3, nature and the other agents form the environment for an agent. Agents that are not acting strategically are treated as part of nature. A strategic agent should not treat other strategic agents as part of nature; rather, it should be open to coordination, cooperation, and perhaps negotiation with other strategic agents.
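
Continuing the sketch above, nature can be modeled as an agent whose action is simply drawn from a fixed probability distribution, with no utility to maximize; the Nature class below is an illustrative assumption, not code from the book.

    import random

    # Nature as a non-strategic agent: no preferences, no strategy;
    # it just acts, here by sampling from a fixed distribution.
    class Nature:
        def __init__(self, distribution):
            self.distribution = distribution    # dict: action -> probability

        def act(self):
            actions = list(self.distribution)
            weights = [self.distribution[a] for a in actions]
            return random.choices(actions, weights=weights)[0]

    weather = Nature({"rain": 0.3, "sun": 0.7})
    print(weather.act())    # e.g., 'sun'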

There are two extremes in the study of multiagent systems:

  • fully cooperative, where the agents share the same utility function, and

  • fully competitive, where one agent can only win when another loses; in zero-sum games, for every outcome, the sum of the utilities for the agents is zero (see the sketch following this list).
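
Matching pennies is a standard example of a fully competitive interaction: each of two agents shows heads or tails; one wins if the coins match, the other if they differ. The sketch below (the payoff representation is our assumption) writes out the payoffs and checks the zero-sum property.

    # Matching pennies, a zero-sum game: A wins (+1) if the coins match,
    # B wins (+1) if they differ; every outcome's utilities sum to zero.
    payoffs = {
        ("heads", "heads"): {"A": 1, "B": -1},
        ("heads", "tails"): {"A": -1, "B": 1},
        ("tails", "heads"): {"A": -1, "B": 1},
        ("tails", "tails"): {"A": 1, "B": -1},
    }

    assert all(sum(u.values()) == 0 for u in payoffs.values())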

Most interactions are between these two extremes, where the agents’ utilities are synergistic in some aspects, competing in others, and independent in still others. For example, two commercial agents with stores next door to each other may both share the goal of having the street area clean and inviting; they may compete for customers, but may have no preferences about the details of the other agent’s store. Sometimes their actions do not interfere with each other, and sometimes they do. Often agents are better off if they coordinate their actions through cooperation and negotiation.

Multiagent interactions have mostly been studied using the terminology of games, following the seminal work of von Neumann and Morgenstern [1953]. Many issues of interaction between agents can be studied in terms of games. Even quite small games highlight deep issues. However, the study of games is meant to be about general multiagent interactions, not just artificial games.

Multiagent systems are ubiquitous in artificial intelligence. From parlor games such as checkers, chess, backgammon, and Go, to robot soccer, to interactive computer games, to agents acting in complex economic systems, games are integral to AI. Games were one of the first applications of AI. The first operating checkers program dates back to 1952. A program by Samuel [1959] beat the Connecticut state checker champion in 1961. There was great fanfare when Deep Blue [Campbell et al., 2002] beat the world chess champion in 1997 and when AlphaGo [Silver et al., 2016] beat one of the world’s top Go players in 2016. Although large, these games are conceptually simple because the agents observe the state of the world perfectly (they are fully observable). In most real-world interactions, the state of the world is only partially observable. There is now much interest in partially observable games like poker, where the environment is predictable (as the proportion of cards is known, even if the particular cards dealt are unknown), and robot soccer, where the environment is much less predictable. But all of these games are much simpler than the multiagent interactions people perform in their daily lives, let alone the strategizing needed for bartering in marketplaces or on the Internet, where the rules are less well defined and the utilities are much more multifaceted.