14.3 Solving Perfect Information Games

The case of full observability with multiple agents is typically called perfect information. In perfect-information games, agents act sequentially and, when an agent has to act, it gets to observe the state of the world before deciding what to do. Each agent acts to maximize its own utility.

A perfect-information game can be represented as an extensive-form game where the information sets all contain a single node. It can also be represented as a multiagent decision network where the decision nodes are totally ordered and, for each decision node, the parents of that decision node include the preceding decision node and all of its parents (so these networks are a multiagent counterpart of no-forgetting decision networks).

Perfect-information games are solvable in a manner similar to fully observable single-agent systems. They can be solved backward, from the last decisions to the first, using dynamic programming, or forward using search. The multiagent algorithm maintains a utility for each agent and, for each move, it selects an action that maximizes the utility of the agent making the move. The dynamic programming variant, called backward induction, starts from the end of the game, computing and caching the values and the plan of each node for each agent.

1: procedure GameTreeSearch(n)
2:      Inputs
3:          n a node in a game tree
4:      Output
5:          A pair of a value for each agent for node n, and a path that gives these values
6:      if n is a leaf node then
7:          return {i : evaluate(i, n) for each agent i}, None
8:      else if n is controlled by agent i then
9:          max_score_for_i := −∞
10:          for each child c of n do
11:               score, path := GameTreeSearch(c)
12:               if score[i] > max_score_for_i then
13:                   max_score_for_i := score[i]
14:                   best := (score, c : path)
15:          return best
Figure 14.5: Evaluating a perfect-information game tree (without nature)

Figure 14.5 gives a top-down, depth-first search algorithm for evaluating a game tree for a perfect-information game. It enumerates the whole tree. At each internal node, the agent that controls the node selects a child that is best for it. Because of the strict > on line 12, the algorithm selects the first child that maximizes the agent’s score, but as the following example shows, which one is selected can affect the utilities.
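To make the control flow concrete, here is a minimal Python sketch of the algorithm of Figure 14.5. The Node class and its field names are our assumptions for this illustration, not part of the figure: leaves carry a utility for each agent, and each internal node names the agent that controls it.

from dataclasses import dataclass, field

@dataclass
class Node:
    player: str = None                    # agent controlling this node; None for a leaf
    utility: dict = None                  # at a leaf: a value for each agent
    children: list = field(default_factory=list)

def game_tree_search(n):
    """Return (scores, path): a value for each agent at n, and a path achieving it."""
    if not n.children:                    # leaf node
        return n.utility, []
    best_scores, best_path = None, None
    for c in n.children:
        scores, path = game_tree_search(c)
        # the strict > keeps the first child among ties (line 12 of Figure 14.5)
        if best_scores is None or scores[n.player] > best_scores[n.player]:
            best_scores, best_path = scores, [c] + path
    return best_scores, best_path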

Example 14.6.

Consider the sharing game of Figure 14.2. In the recursive calls, Barb gets to choose the value that maximizes her utility. Thus, she will choose “yes” for the right two nodes she controls, and could choose either answer for the leftmost node she controls. Suppose she chooses “no” for this node; then Andy gets to choose one of his actions: keep has utility 0 for him, share has utility 1, and give has utility 0, so he chooses to share. If Barb had chosen “yes” for the leftmost node, Andy would keep, and so would get both items.
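As a check on the sketch above, the sharing game can be encoded directly. The leaf utilities below are read off Figure 14.2; the helper names are ours.

def leaf(andy, barb):
    return Node(utility={"Andy": andy, "Barb": barb})

def barb_choice(yes, no):                 # Barb answers yes or no
    return Node(player="Barb", children=[yes, no])

sharing = Node(player="Andy", children=[
    barb_choice(leaf(2, 0), leaf(0, 0)),  # keep
    barb_choice(leaf(1, 1), leaf(0, 0)),  # share
    barb_choice(leaf(0, 2), leaf(0, 0)),  # give
])

scores, path = game_tree_search(sharing)
print(scores)                             # {'Andy': 2, 'Barb': 0}

Because the strict > breaks ties in favor of the first child, Barb answers “yes” at the leftmost node and Andy keeps both items; putting “no” before “yes” in her children reproduces the other outcome of Example 14.6, where Andy shares.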

14.3.1 Adversarial Games

The case where two agents are competing, so that a positive reward for one is a negative reward for the other, is a two-agent zero-sum game. An agent that plays such a game well requires adversarial reasoning. The value of such a game can be characterized by a single number that one agent is trying to maximize and the other agent is trying to minimize, which leads to a minimax search space. Each node is either a MAX node, if it is controlled by the agent trying to maximize, or a MIN node, if it is controlled by the agent trying to minimize.

Figure 14.5 implements a minimax algorithm for zero-sum perfect-information games: the MAX agent chooses an action with the maximum value, and the MIN agent, whose utility is the negation of that value, chooses an action with the maximum negated value, which is the same as choosing an action with the minimum value.
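In code, this reduction is just a change of representation. Reusing the Node class from the sketch after Figure 14.5 (our construction, for illustration):

def zero_sum_leaf(v):
    # MAX's utility is v and MIN's is -v, so when MIN maximizes its own
    # utility it is minimizing v: the general multiagent search of
    # Figure 14.5 then behaves exactly as minimax.
    return Node(utility={"MAX": v, "MIN": -v})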

The game-tree search of Figure 14.5 traverses the whole game tree. It is possible to prune part of the search tree for minimax games by showing that some part of the tree will never be part of an optimal play.

Example 14.7.

Consider searching in the game tree of Figure 14.6. In this figure, the square MAX nodes are controlled by the maximizing agent, and the round MIN nodes are controlled by the minimizing agent.

Figure 14.6: A zero-sum game tree showing which nodes can be pruned

Suppose the values of the leaf nodes are given or can be computed from the definition of the game. The numbers at the bottom show some of these values. The other values are irrelevant, as this example shows. Suppose you are doing a left-first, depth-first traversal of this tree. The value of node h is 7, because it is the minimum of 7 and 9. Just by considering the leftmost child of i, with a value of 6, you know that the value of i is less than or equal to 6. Therefore, at node d, the maximizing agent will go left. You do not have to evaluate the other child of i. Similarly, the value of j is 11, so the value of e is at least 11, and so the minimizing agent at node b will choose to go left.

The value of l is less than or equal to 5, and the value of m is less than or equal to 4; thus, the value of f is less than or equal to 5, so the value of c will be less than or equal to 5. So, at a, the maximizing agent will choose to go left. Notice that this argument did not depend on the values of the unnumbered leaves. Moreover, it did not depend on the size of the subtrees that were not explored.

The previous example analyzed what can be pruned. Minimax with alpha–beta (αβ) pruning is a depth-first search algorithm that prunes by passing pruning information down in terms of parameters α and β. In this depth-first search, a node has a score, which has been obtained from (some of) its descendants.

The parameter α is used to prune MIN nodes. Initially, it is the highest current value over all MAX-node ancestors of the current node. Any MIN node whose current value is less than or equal to its α value does not have to be explored further. This cutoff was used to prune the other descendants of nodes l, m, and c in the previous example.

The β parameter is used to prune MAX nodes in an analogous way.

1: procedure MinimaxAlphaBeta(n, α, β)
2:      Inputs
3:          n a node in a game tree
4:          α, β real numbers
5:      Output
6:          A pair of a value for node n, and a path that gives this value
7:      best := None
8:      if n is a leaf node then
9:          return evaluate(n), None
10:      else if n is a MAX node then
11:          for each child c of n do
12:               score, path := MinimaxAlphaBeta(c, α, β)
13:               if score ≥ β then
14:                   return score, None
15:               else if score > α then
16:                   α := score
17:                   best := c : path
18:          return α, best
19:      else
20:          for each child c of n do
21:               score, path := MinimaxAlphaBeta(c, α, β)
22:               if score ≤ α then
23:                   return score, None
24:               else if score < β then
25:                   β := score
26:                   best := c : path
27:          return β, best
Figure 14.7: Minimax with αβ pruning

The minimax algorithm with αβ pruning is given in Figure 14.7. It is called, initially, with MinimaxAlphaBeta(R, −∞, ∞), where R is the root node. It returns a pair of the value for the node n and a path of choices that leads to this value. (Note that this path does not include n.) Line 13 performs β pruning; at this stage the algorithm knows that the current path will never be chosen, and so returns the current score. Similarly, line 22 performs α pruning. Lines 17 and 26 concatenate c onto the path, as the algorithm has found a best path for the agent at that node. In this algorithm, the path “None” is sometimes returned for non-leaf nodes; this only occurs when the algorithm has determined this path will not be used.
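The following Python sketch mirrors Figure 14.7. The GameNode representation is an assumption for illustration: internal nodes are labelled "MAX" or "MIN", and leaves carry a single value. Calling minimax_alpha_beta(root) with the default bounds corresponds to the initial call MinimaxAlphaBeta(R, −∞, ∞).

import math
from dataclasses import dataclass, field

@dataclass
class GameNode:
    player: str = None                    # "MAX", "MIN", or None for a leaf
    value: float = None                   # leaf value
    children: list = field(default_factory=list)

def minimax_alpha_beta(n, alpha=-math.inf, beta=math.inf):
    """Return (value, path) for n; path is None for pruned branches."""
    if not n.children:                    # leaf node
        return n.value, []
    best = None
    if n.player == "MAX":
        for c in n.children:
            score, path = minimax_alpha_beta(c, alpha, beta)
            if score >= beta:             # beta pruning (line 13)
                return score, None
            if score > alpha:             # a pruned child fails this test,
                alpha = score             # so path is never None here
                best = [c] + path
        return alpha, best
    else:                                 # MIN node
        for c in n.children:
            score, path = minimax_alpha_beta(c, alpha, beta)
            if score <= alpha:            # alpha pruning (line 22)
                return score, None
            if score < beta:
                beta = score
                best = [c] + path
        return beta, best

A small tree with the leaf values 7, 9, and 6 from the example shows pruning in action: the second child of the second MIN node is never examined.

root = GameNode(player="MAX", children=[
    GameNode(player="MIN", children=[GameNode(value=7), GameNode(value=9)]),
    GameNode(player="MIN", children=[GameNode(value=6), GameNode(value=8)]),
])
print(minimax_alpha_beta(root)[0])        # 7; the 8 is pruned after the 6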

Example 14.8.

Consider running MinimaxAlphaBeta on the tree of Figure 14.6. Consider the recursive calls (and the values returned, but not the paths). Initially, it calls

MinimaxAlphaBeta(a, −∞, ∞)

which then calls, in turn:

MinimaxAlphaBeta(b, −∞, ∞)
MinimaxAlphaBeta(d, −∞, ∞)
MinimaxAlphaBeta(h, −∞, ∞).

This last call finds the minimum of both of its children and returns 7. Next the procedure calls

MinimaxAlphaBeta(i, 7, ∞)

which then gets the value for the first of i’s children, which has value 6. Because score ≤ α (6 ≤ 7), it returns 6. The call to d then returns 7, and it calls

MinimaxAlphaBeta(e, −∞, 7).

Node e’s first child returns 11 and, because score ≥ β (11 ≥ 7), it returns 11. Then b returns 7, and the call to a calls

MinimaxAlphaBeta(c, 7, ∞)

which in turn calls

MinimaxAlphaBeta(f, 7, ∞)

which eventually returns 5, and so the call to c returns 5, and the whole procedure returns 7.

By keeping track of the values, the maximizing agent knows to go left at a, then the minimizing agent will go left at b, and so on.

The amount of pruning provided by this algorithm depends on the ordering of the children of each node. It works best if a highest-valued child of a MAX node is selected first and if a lowest-valued child of a MIN node is selected first. In implementations of real games, much effort goes into trying to ensure this ordering.
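A common way to approximate this ordering is to sort each node’s children by a cheap estimate before recursing. In the sketch below, heuristic is a placeholder for whatever cheap estimate the game provides; it is not something given in the text.

def ordered_children(n, heuristic):
    # Explore the mover's most promising children first: highest estimates
    # at MAX nodes, lowest at MIN nodes, so that alpha and beta tighten
    # early and more of the tree is pruned.
    return sorted(n.children, key=heuristic, reverse=(n.player == "MAX"))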

Most real games are too big to solve with minimax search, even with αβ pruning. For these games, instead of stopping only at leaf nodes, it is possible to stop at any node. The value returned at the node where the algorithm stops is an estimate of the value of that node. The function used to estimate the value is an evaluation function. Much work goes into finding good evaluation functions. There is a trade-off between the amount of computation required to compute the evaluation function and the size of the search space that can be explored in any given time. The best compromise between a complex evaluation function and a large search space is an empirical question.
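A hedged sketch of this cutoff, combining αβ pruning with a depth bound: the depth and evaluate parameters are our illustrative assumptions, and the node representation is the GameNode used in the earlier sketch.

import math

def depth_limited_minimax(n, depth, evaluate, alpha=-math.inf, beta=math.inf):
    if not n.children:                    # true leaf: exact value
        return n.value
    if depth == 0:                        # cutoff: estimated value
        return evaluate(n)
    if n.player == "MAX":
        for c in n.children:
            score = depth_limited_minimax(c, depth - 1, evaluate, alpha, beta)
            if score >= beta:             # beta pruning
                return score
            alpha = max(alpha, score)
        return alpha
    else:                                 # MIN node
        for c in n.children:
            score = depth_limited_minimax(c, depth - 1, evaluate, alpha, beta)
            if score <= alpha:            # alpha pruning
                return score
            beta = min(beta, score)
        return beta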