The extensive form of a game is a state-based representation of the game. It is often more concise to describe states in terms of features. A multiagent decision network is a factored representation of a multiagent decision problem. It is like a decision network, except that each decision node is labeled with an agent that gets to choose a value for the node. There is a utility node for each agent specifying the utility for that agent. The parents of a decision node specify the information that will be available to the agent when it has to act.
Figure 11.4 gives a multiagent decision network for a fire alarm example.
In this scenario, there are two agents, Agent 1 and Agent 2. Each has its own noisy sensor of whether there is a fire. However, if they both call, it is possible that their calls will interfere with each other and neither call will work. Agent 1 gets to choose a value for decision variable Call_1 and only observes the value of the variable Alarm_1. Agent 2 gets to choose a value for decision variable Call_2 and only observes the value of the variable Alarm_2. Whether the call works depends on the values of Call_1 and Call_2. Whether the fire department comes depends on whether the call works. Agent 1's utility depends on whether there was a fire, whether the fire department comes, and whether it called; similarly for Agent 2.
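The structure of this network can be written down directly as a data structure. The following is a minimal Python sketch, not the book's companion code; the Node class and the node names (Fire, Alarm_1, Alarm_2, Call_1, Call_2, Call_works, Fire_dept_comes, U_1, U_2) are illustrative choices matching the description above, and the conditional probability and utility tables are omitted.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A node in a multiagent decision network."""
    name: str
    kind: str                       # "chance", "decision", or "utility"
    parents: List["Node"] = field(default_factory=list)
    agent: Optional[int] = None     # owner of a decision or utility node

# The fire-alarm network of Figure 11.4; names follow the text above.
fire      = Node("Fire", "chance")
alarm1    = Node("Alarm_1", "chance", parents=[fire])
alarm2    = Node("Alarm_2", "chance", parents=[fire])
call1     = Node("Call_1", "decision", parents=[alarm1], agent=1)  # Agent 1 sees only Alarm_1
call2     = Node("Call_2", "decision", parents=[alarm2], agent=2)  # Agent 2 sees only Alarm_2
works     = Node("Call_works", "chance", parents=[call1, call2])   # simultaneous calls may interfere
fire_dept = Node("Fire_dept_comes", "chance", parents=[works])
u1        = Node("U_1", "utility", parents=[fire, fire_dept, call1], agent=1)
u2        = Node("U_2", "utility", parents=[fire, fire_dept, call2], agent=2)

network = [fire, alarm1, alarm2, call1, call2, works, fire_dept, u1, u2]
```

The parents of each decision node encode exactly what the owning agent observes when it acts, and each agent's utility node lists the variables its payoff depends on.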
A multiagent decision network can be converted into a normal-form game; however, the number of strategies may be enormous. If a decision variable has d states and n binary parents, there are 2^n assignments of values to the parents and so d^(2^n) strategies. That is just for a single decision node; more complicated networks are even bigger when converted to normal form. Thus, algorithms that depend on enumerating strategies are impractical for anything but the smallest multiagent decision networks.
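A short sketch of this counting argument (the helper functions are hypothetical, written for this example): a pure strategy for a single decision node assigns one of its d values to each of the 2^n assignments to its n binary parents, giving d^(2^n) strategies.

```python
from itertools import product

def num_pure_strategies(d: int, n: int) -> int:
    """d values and n binary parents: 2**n parent assignments,
    and a strategy chooses one of d values for each of them."""
    return d ** (2 ** n)

def pure_strategies(values, n_parents):
    """Explicitly enumerate strategies as dictionaries mapping each
    parent assignment to a chosen value (feasible only for tiny cases)."""
    assignments = list(product([False, True], repeat=n_parents))
    for choice in product(values, repeat=len(assignments)):
        yield dict(zip(assignments, choice))

print(num_pure_strategies(2, 1))   # 4: e.g., Call_1 observing only Alarm_1
print(num_pure_strategies(2, 5))   # 4294967296: already over four billion
print(len(list(pure_strategies([False, True], 2))))   # 16
```

Even a binary decision with five binary parents has over four billion pure strategies, which is why enumeration does not scale.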
Other representations exploit different kinds of structure in multiagent settings. For example, the utility of an agent may depend on the number of other agents who do some action, but not on their identities. An agent's utility may depend on what a few other agents do, not directly on the actions of all other agents. An agent's utility may depend only on what the agents at neighboring locations do, and not on the identity of these agents or on what other agents do.
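As an illustration of the first kind of structure, here is a hedged sketch (the function and the payoff numbers are invented for the example, not taken from the book) of a utility that depends only on how many agents call, not on which ones:

```python
def utility_of_calling(called: bool, num_callers: int, fire: bool) -> float:
    """Illustrative utility for one agent in the fire-alarm setting.
    It depends on the agent's own action and on the *count* of callers,
    never on the identities of the other callers."""
    cost = -1.0 if called else 0.0          # calling has a small cost
    if not fire:
        return cost
    # assumed chance the fire department comes for each caller count:
    # no calls never work, one call usually works, many calls interfere
    p_comes = {0: 0.0, 1: 0.9}.get(num_callers, 0.4)
    return cost + 10.0 * p_comes - 20.0 * (1.0 - p_comes)

# With n agents, this utility needs a table indexed by a count in 0..n,
# rather than by the exponentially many joint actions of the other agents.
```

Exploiting such anonymity or locality keeps the representation, and the reasoning over it, far smaller than the corresponding normal-form game.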