Artificial Intelligence 3E: foundations of computational agents

A fully observable environment with multiple agents is typically said to have perfect information. In perfect-information games, agents act sequentially and, when an agent has to act, it gets to observe the state of the world before deciding what to do. Each agent acts to maximize its own utility.

A perfect-information game can be represented as an extensive-form game where the information sets all contain a single node. It can also be represented as a multiagent decision network where the decision nodes are totally ordered and, for each decision node, the parents of that decision node include the preceding decision node and all of its parents (so these networks are a multiagent counterpart of no-forgetting decision networks).

Perfect-information games are solvable in a manner similar to fully observable single-agent systems. They can be solved backward, from the last decisions to the first, using dynamic programming, or forward using search. The multiagent algorithm maintains a utility for each agent and, for each move, it selects an action that maximizes the utility of the agent making the move. The dynamic programming variant, called backward induction, starts from the end of the game, computing and caching the values and the plan of each node for each agent.
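As a concrete sketch of backward induction for general (not necessarily zero-sum) perfect-information games, the following Python fragment evaluates a game tree bottom-up, maintaining a utility profile for each agent. The `Node` class is a hypothetical stand-in, not the book's code; leaf utilities are dictionaries mapping each agent to a value.

```python
class Node:
    """A hypothetical game-tree node: a leaf carries a utility profile
    {agent: value}; an internal node names the agent who moves there
    and lists its children."""
    def __init__(self, agent=None, children=None, utility=None):
        self.agent = agent            # agent who moves here (None at a leaf)
        self.children = children or []
        self.utility = utility        # {agent: value} at a leaf, else None

def evaluate(node):
    """Backward induction: the utility profile of optimal play from node."""
    if not node.children:             # leaf: utilities are given
        return node.utility
    profiles = [evaluate(child) for child in node.children]
    # the controlling agent picks a child maximizing its own utility
    return max(profiles, key=lambda p: p[node.agent])
```

A cache keyed on the node would turn this depth-first version into the dynamic programming variant described above; the recursion structure is the same.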

Figure 14.5 gives a top-down, depth-first search algorithm for evaluating a game tree for a perfect-information game. It enumerates the whole tree. At each internal node, the agent that controls the node selects a child that is best for it. The algorithm arbitrarily chooses the first child that maximizes the agent’s score (the $>$ on line 12) but, as the following example shows, which one is selected can affect the other agents’ utilities.

Consider the sharing game of Figure 14.2. In the recursive calls, Barb gets to choose the value that maximizes her utility. Thus, she will choose “yes” for the right two nodes she controls, and could choose either answer for the leftmost node she controls, since both give her utility 0. Suppose she chooses “no” for this node. Then Andy gets to choose one of his actions: $keep$ has utility 0 for him, $share$ has utility 1, and $give$ has utility 0, so he chooses to share. If Barb had instead chosen “yes” for the leftmost node, Andy would keep, and so Andy would get both items.
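The tie-breaking effect can be checked directly. The sketch below encodes the sharing game's payoffs from the text as (Andy, Barb) pairs, with a flag controlling how Barb breaks her tie at the leftmost node; the function names and encoding are illustrative, not from the book.

```python
# Payoffs (Andy, Barb) if Barb answers "yes" to each of Andy's actions;
# answering "no" always yields (0, 0).
YES = {"keep": (2, 0), "share": (1, 1), "give": (0, 2)}

def barb_answers(action, prefer_no_on_ties=False):
    """Barb answers yes/no to maximize her own (second) utility."""
    yes, no = YES[action], (0, 0)
    if yes[1] > no[1]:
        return yes
    if yes[1] < no[1]:
        return no
    return no if prefer_no_on_ties else yes   # tie: either is optimal for Barb

def andy_best(prefer_no_on_ties=False):
    """Andy picks the action maximizing his own (first) utility."""
    outcomes = {a: barb_answers(a, prefer_no_on_ties) for a in YES}
    return max(outcomes, key=lambda a: outcomes[a][0])
```

If Barb answers "yes" on her tie, Andy keeps and gets 2; if she answers "no", keeping yields 0, so Andy shares and gets only 1 — Barb's arbitrary tie-break changes Andy's optimal play.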

The case where two agents are competing, so that a positive reward for one is a negative reward for the other, is a two-agent zero-sum game. An agent that plays such a game well requires adversarial reasoning. The value of such a game can be characterized by a single number that one agent is trying to maximize and the other agent is trying to minimize, which leads to a minimax search space. Each node is either a MAX node, if it is controlled by the agent trying to maximize, or a MIN node, if it is controlled by the agent trying to minimize.

Figure 14.5 implements a minimax algorithm for zero-sum perfect-information games: the MAX agent chooses an action with the maximum value, and the MIN agent chooses an action with the maximum of the *negation* of the value.
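This maximize-the-negation idea (often called negamax) can be sketched as a single recursive function. The nested-list tree representation here is an assumption for illustration: a node is either a leaf value (from MAX's point of view) or a list of subtrees, with players alternating by level.

```python
def negamax(node, color=1):
    """Score of node from the point of view of the player to move.
    color is +1 when MAX is to move, -1 when MIN is; leaf values are
    stored from MAX's point of view."""
    if not isinstance(node, list):           # leaf
        return color * node
    # each player maximizes the negation of the child's score
    return max(-negamax(child, -color) for child in node)
```

Calling `negamax(root)` with MAX to move returns the same value ordinary minimax would, but with one case instead of separate MAX and MIN branches.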

The game-tree search of Figure 14.5 traverses the whole game tree. It is possible to prune part of the search tree for minimax games by showing that some part of the tree will never be part of an optimal play.

Consider searching in the game tree of Figure 14.6. In this figure, the square MAX nodes are controlled by the maximizing agent, and the round MIN nodes are controlled by the minimizing agent.

Suppose the values of the leaf nodes are given or are computed given the definition of the game. The numbers at the bottom show some of these values. The other values turn out to be irrelevant, as the following argument shows. Suppose you are doing a left-first, depth-first traversal of this tree. The value of node $h$ is 7, because it is the minimum of 7 and 9. Just by considering the leftmost child of $i$ with a value of $6$, you know that the value of $i$ is less than or equal to 6. Therefore, at node $d$, the maximizing agent will go left. You do not have to evaluate the other child of $i$. Similarly, the value of $j$ is 11, so the value of $e$ is at least 11, and so the minimizing agent at node $b$ will choose to go left.

The value of $l$ is less than or equal to 5, and the value of $m$ is less than or equal to 4; thus, the value of $f$ is less than or equal to $5$, so the value of $c$ will be less than or equal to 5. So, at $a$, the maximizing agent will choose to go left. Notice that this argument did not depend on the values of the unnumbered leaves. Moreover, it did not depend on the size of the subtrees that were not explored.
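This independence from the unexplored leaves can be verified mechanically: fill the unnumbered leaves of Figure 14.6 with arbitrary values and plain minimax still gives the root value 7. The encoding below is a sketch; nodes are nested lists, and subtrees whose shape the argument never needs (under $k$ and $g$) are modeled as pairs of free leaves, which does not affect the conclusion.

```python
def minimax(tree, is_max=True):
    """tree is either a leaf value or a list of subtrees."""
    if not isinstance(tree, list):
        return tree
    vals = [minimax(t, not is_max) for t in tree]
    return max(vals) if is_max else min(vals)

def figure_14_6(X):
    """The tree of Figure 14.6 with the seven unnumbered leaves taken
    from the list X (an assumed encoding of the figure)."""
    h = [7, 9];    i = [6, X[0]]
    j = [11, 12];  k = [X[1], X[2]]
    l = [5, X[3]]; m = [4, X[4]]
    g = [X[5], X[6]]
    d = [h, i]; e = [j, k]; f = [l, m]   # MAX nodes
    b = [d, e]; c = [f, g]               # MIN nodes
    return [b, c]                        # the MAX root a
```

Trying several assignments for the unknown leaves always yields 7 at the root, matching the argument: $i \le 6$ so $d = 7$; $e \ge 11$ so $b = 7$; $f \le 5$ so $c \le 5$; hence $a = 7$.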

The previous example analyzed what can be pruned. Minimax with alpha–beta ($\alpha $–$\beta $) pruning is a depth-first search algorithm that prunes by passing pruning information down in terms of parameters $\alpha $ and $\beta $. In this depth-first search, a node has a score, which has been obtained from (some of) its descendants.

The parameter $\alpha $ is used to prune MIN nodes. Initially, it is the highest current value for all MAX ancestors of the current node. Any MIN node whose current value is less than or equal to its $\alpha $ value does not have to be explored further. This cutoff was used to prune the other descendants of nodes $l$, $m$, and $c$ in the previous example.

The $\beta $ parameter is used to prune MAX nodes in an analogous way.

The minimax algorithm with $\alpha $–$\beta $ pruning is given in Figure 14.7. It is called, initially, with $MinimaxAlphaBeta(R,-\mathrm{\infty},\mathrm{\infty})$, where $R$ is the root node. It returns a pair of the value for the node $n$ and a path of choices that leads to this value. (Note that this path does not include $n$.) Line 13 performs $\beta $-pruning; at this stage the algorithm knows that the current path will never be chosen, and so returns the current score. Similarly, line 22 performs $\alpha $-pruning. Line 17 and line 26 concatenate $c$ to the path, as the algorithm has found a best path for the agent. In this algorithm, the path “$None$” is sometimes returned for non-leaf nodes; this only occurs when the algorithm has determined this path will not be used.
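The following is a self-contained sketch in the spirit of Figure 14.7, under the assumption that a tree is a nested list (a node is a leaf value or a list of subtrees) with MAX to move at the root. It returns `(value, path)`, where `path` is a list of child indices and `None` marks a path pruning has shown will never be used; like the trace in the text, it returns a child's exact score (not just the bound) when it cuts off.

```python
def minimax_alpha_beta(node, alpha, beta, is_max=True):
    """Return (value, path) for node: the minimax value and the child
    indices of an optimal path below node, or None when pruning shows
    this path will never be used."""
    if not isinstance(node, list):               # leaf
        return node, []
    best = float("-inf") if is_max else float("inf")
    best_path = None
    for idx, child in enumerate(node):
        score, path = minimax_alpha_beta(child, alpha, beta, not is_max)
        if (is_max and score > best) or (not is_max and score < best):
            best = score
            best_path = None if path is None else [idx] + path
            if is_max:
                alpha = max(alpha, best)         # raise the MAX bound
            else:
                beta = min(beta, best)           # lower the MIN bound
        if alpha >= beta:                        # cutoff: prune remaining children
            return best, None
    return best, best_path
```

On the tree of Figure 14.6 (with the unnumbered leaves filled in arbitrarily), the initial call `minimax_alpha_beta(tree, float("-inf"), float("inf"))` returns value 7 with the path going left at every choice point, matching the trace below.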

Consider running $MinimaxAlphaBeta$ on the tree of Figure 14.6. Consider the recursive calls (and the values returned, but not the paths). Initially, it calls

$$MinimaxAlphaBeta(a,-\mathrm{\infty},\mathrm{\infty})$$

which then calls, in turn:

$$MinimaxAlphaBeta(b,-\mathrm{\infty},\mathrm{\infty})$$

$$MinimaxAlphaBeta(d,-\mathrm{\infty},\mathrm{\infty})$$

$$MinimaxAlphaBeta(h,-\mathrm{\infty},\mathrm{\infty}).$$

This last call finds the minimum of both of its children and returns $7$. Next the procedure calls

$$MinimaxAlphaBeta(i,7,\mathrm{\infty})$$

which then gets the value for the first of $i$’s children, which has value $6$. Because $\alpha \ge \beta $, it returns $6$. The call to $d$ then returns $7$, and it calls

$$MinimaxAlphaBeta(e,-\mathrm{\infty},7).$$

Node $e$’s first child returns $11$ and, because $\alpha \ge \beta $, it returns $11$. Then $b$ returns $7$, and the call to $a$ calls

$$MinimaxAlphaBeta(c,7,\mathrm{\infty})$$

which in turn calls

$$MinimaxAlphaBeta(f,7,\mathrm{\infty})$$

which eventually returns 5, and so the call to $c$ returns 5, and the whole procedure returns 7.

By keeping track of the values, the maximizing agent knows to go left at $a$, then the minimizing agent will go left at $b$, and so on.

The amount of pruning provided by this algorithm depends on the ordering of the children of each node. It works best if a highest-valued child of a MAX node is selected first and if a lowest-valued child of a MIN node is selected first. In implementations of real games, much effort goes into trying to ensure this ordering.
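The effect of ordering can be illustrated by counting leaf evaluations on a small hand-made tree (not from the book). The sketch below runs $\alpha$–$\beta$ on the same tree twice, once with its given child order and once with the children of each MIN node sorted lowest-first, and counts how many leaves each run touches.

```python
def alphabeta(tree, alpha, beta, is_max, counter):
    """Alpha-beta on a nested-list tree, counting leaf evaluations."""
    if not isinstance(tree, list):
        counter[0] += 1                      # count each leaf evaluated
        return tree
    best = float("-inf") if is_max else float("inf")
    for child in tree:
        score = alphabeta(child, alpha, beta, not is_max, counter)
        if is_max:
            best = max(best, score); alpha = max(alpha, best)
        else:
            best = min(best, score); beta = min(beta, best)
        if alpha >= beta:                    # cutoff: skip remaining children
            break
    return best

def count_leaves(tree):
    counter = [0]
    value = alphabeta(tree, float("-inf"), float("inf"), True, counter)
    return value, counter[0]

# MAX root over three MIN nodes; the minimax value is 3.
given   = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
# Same tree with each MIN node's lowest-valued child first.
ordered = [[3, 8, 12], [2, 4, 6], [2, 5, 14]]
```

With the given order, 7 of the 9 leaves are evaluated; with the better ordering only 5 are, and both runs agree on the value 3. In real implementations the ordering comes from heuristics or earlier shallow searches, since the exact child values are not known in advance.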

Most real games are too big to carry out minimax search, even with $\alpha $–$\beta $ pruning. For these games, instead of only stopping at leaf nodes, it is possible to stop at any node. The value returned at the node where the algorithm stops is an estimate of the value for this node. The function used to estimate the value is an evaluation function. Much work goes into finding good evaluation functions. There is a trade-off between the amount of computation required to compute the evaluation function and the size of the search space that can be explored in any given time. What the best compromise is between a complex evaluation function and a large search space is an empirical question.
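A depth-bounded variant can be sketched by adding a depth parameter and an evaluation function, again assuming the nested-list tree encoding (a node is a leaf value or a list of subtrees); the averaging heuristic below is a deliberately crude placeholder, not a recommendation.

```python
def heuristic_minimax(node, depth, evaluate, is_max=True):
    """Minimax to a fixed depth; below it, trust the evaluation function."""
    if not isinstance(node, list):           # true leaf: exact value
        return node
    if depth == 0:
        return evaluate(node)                # estimate instead of searching on
    vals = [heuristic_minimax(c, depth - 1, evaluate, not is_max) for c in node]
    return max(vals) if is_max else min(vals)

def avg_leaves(node):
    """A crude assumed evaluation function: the mean of the leaves below."""
    if not isinstance(node, list):
        return node
    vals = [avg_leaves(c) for c in node]
    return sum(vals) / len(vals)
```

When the depth bound reaches the true leaves, this coincides with exact minimax; at depth 0 it simply returns the evaluation of the root. The trade-off in the text is exactly the choice of `depth` versus the cost of `evaluate`.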