Value iteration is a method of computing an optimal policy for an MDP and its value.
Value iteration starts at the “end” and then works backward, refining an estimate of either $Q^*$ or $V^*$. There is really no end, so it uses an arbitrary end point. Let $V_k$ be the value function assuming there are $k$ stages to go, and let $Q_k$ be the $Q$-function assuming there are $k$ stages to go. These can be defined recursively. Value iteration starts with an arbitrary function $V_0$. For subsequent stages, it uses the following equations to get the functions for $k+1$ stages to go from the functions for $k$ stages to go:

$$Q_{k+1}(s,a) = R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V_k(s')$$

$$V_{k+1}(s) = \max_a Q_{k+1}(s,a)$$
It can either save the $V[S]$ array or the $Q[S,A]$ array. Saving the $V$ array results in less storage, but it is more difficult to determine an optimal action, and one more iteration is needed to determine which action results in the greatest value.
Figure 9.16 shows the value iteration algorithm when the $V$ array is stored. This procedure converges no matter what the initial value function is; an initial value function that is closer to $V^*$ leads to quicker convergence. The basis for many abstraction techniques for MDPs is to use some heuristic method to approximate $V^*$ and to use this as an initial seed for value iteration.
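As a concrete illustration, here is a minimal Python sketch of value iteration storing only the $V$ array, with an extra sweep at the end to recover a policy. It is not the pseudocode of Figure 9.16; the dictionary representation of the dynamics `P` and rewards `R` (with `R[s][a]` the expected immediate reward) is an assumption made for this sketch.

```python
# A sketch of value iteration that stores only the V array (assumed representation,
# not the pseudocode of Figure 9.16): P[s][a] maps each next state s2 to P(s2 | s, a),
# R[s][a] is the expected immediate reward R(s, a), and gamma is the discount factor.

def q_value(s, a, V, P, R, gamma):
    """One-step lookahead: Q(s, a) computed from the current value function V."""
    return R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())

def value_iteration(states, actions, P, R, gamma, iterations):
    V = {s: 0.0 for s in states}            # arbitrary initial value function V_0
    for _ in range(iterations):             # each sweep computes V_{k+1} from V_k
        V = {s: max(q_value(s, a, V, P, R, gamma) for a in actions)
             for s in states}
    # the extra iteration mentioned above: recover a policy from the stored V array
    policy = {s: max(actions, key=lambda a: q_value(s, a, V, P, R, gamma))
              for s in states}
    return V, policy
```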
Consider the two-state MDP of Example 9.27 with discount $\gamma = 0.8$. We write the value function as $[V(\mathit{healthy}), V(\mathit{sick})]$ and the $Q$-function as $[[Q(\mathit{healthy}, \mathit{relax}), Q(\mathit{healthy}, \mathit{party})], [Q(\mathit{sick}, \mathit{relax}), Q(\mathit{sick}, \mathit{party})]]$. Suppose initially the value function is $[0, 0]$. The next $Q$-value is $[[7, 10], [0, 2]]$, so the next value function is $[10, 2]$ (obtained by Sam partying). The next $Q$-value is then
| $Q$-value | Computation | Value |
|---|---|---|
| $Q(\mathit{healthy}, \mathit{relax})$ | 7 + 0.8 * (0.95 * 10 + 0.05 * 2) | 14.68 |
| $Q(\mathit{healthy}, \mathit{party})$ | 10 + 0.8 * (0.7 * 10 + 0.3 * 2) | 16.08 |
| $Q(\mathit{sick}, \mathit{relax})$ | 0 + 0.8 * (0.5 * 10 + 0.5 * 2) | 4.8 |
| $Q(\mathit{sick}, \mathit{party})$ | 2 + 0.8 * (0.1 * 10 + 0.9 * 2) | 4.24 |
So the next value function is $[16.08, 4.8]$. After 1000 iterations, the value function is approximately $[35.71, 23.81]$, and the corresponding $Q$-function is approximately $[[35.10, 35.71], [23.81, 22.00]]$. Therefore, the optimal policy is to party when healthy and relax when sick.
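As a check on this arithmetic, the two-state MDP can be encoded and run through the `value_iteration` sketch given earlier. The dictionary encoding is an assumption of that sketch; the probabilities and rewards are those used in the $Q$-value computations above.

```python
# The party/relax MDP of Example 9.27, encoded for the value_iteration sketch above.
states = ["healthy", "sick"]
actions = ["relax", "party"]
P = {"healthy": {"relax": {"healthy": 0.95, "sick": 0.05},
                 "party": {"healthy": 0.7,  "sick": 0.3}},
     "sick":    {"relax": {"healthy": 0.5,  "sick": 0.5},
                 "party": {"healthy": 0.1,  "sick": 0.9}}}
R = {"healthy": {"relax": 7, "party": 10},
     "sick":    {"relax": 0, "party": 2}}

V, policy = value_iteration(states, actions, P, R, gamma=0.8, iterations=1000)
# V is approximately {'healthy': 35.71, 'sick': 23.81}
# policy is {'healthy': 'party', 'sick': 'relax'}
```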
Consider the nine squares around the +10 reward of Example 9.28. The discount is $\gamma = 0.9$. Suppose the algorithm starts with $V_0[s] = 0$ for all states $s$.
The values of $V_1$, $V_2$, and $V_3$ (to one decimal point) for these nine cells are shown in the accompanying figure. For $V_1$, they are

| 0 | 0 | $-0.1$ |
|---|---|---|
| 0 | 10 | $-0.1$ |
| 0 | 0 | $-0.1$ |
After the first step of value iteration (in $V_1$) the nodes get their immediate expected reward. The center node in this figure is the reward state. The nodes in the column to its right have a value of $-0.1$, with the optimal actions being up, left, and down; each of these actions has a 0.1 chance of crashing into the wall (a reward of $-1$), for an immediate expected reward of $-0.1$.
$V_2$ gives the values after the second step of value iteration. Consider the node that is immediately to the left of the rewarding state. Its optimal action is to go right; it has a 0.7 chance of getting a reward of 10 in the following state, so that is worth 9 (10 times the discount of 0.9) to it now. The expected value of the other possible resulting states is 0. Thus, the value of this state is $0.7 \times 9 = 6.3$.
Consider the node immediately to the right of the rewarding state after the second step of value iteration. The agent’s optimal action in this state is to go left. The value of this state is

$$0.7 \times (0 + 0.9 \times 10) \qquad \text{agent goes left}$$
$$+\ 0.1 \times (0 + 0.9 \times (-0.1)) \qquad \text{agent goes up}$$
$$+\ 0.1 \times (-1 + 0.9 \times (-0.1)) \qquad \text{agent goes right, crashing into the wall}$$
$$+\ 0.1 \times (0 + 0.9 \times (-0.1)) \qquad \text{agent goes down}$$

which evaluates to 6.173, which is approximated to 6.2 in $V_2$ above.
The reward state has a value less than 10 in $V_2$ because the agent gets flung to one of the corners and these corners look bad at this stage.
After the next step of value iteration ($V_3$), shown on the right-hand side of the figure, the effect of the +10 reward has progressed one more step. In particular, the corners shown get values that indicate a reward in three steps.
An applet is available on the book website showing the details of value iteration for this example.
The value iteration algorithm of Figure 9.16 has a value array for each stage, but it really only needs to store the current and the previous arrays. It can update one array based on values from the other.
A common refinement of this algorithm is asynchronous value iteration. Rather than sweeping through the states to create a new value function, asynchronous value iteration updates the states one at a time, in any order, and stores the values in a single array. It can store either the $Q[S,A]$ array or the $V[S]$ array. Figure 9.17 shows asynchronous value iteration when the $Q$ array is stored. It converges faster than value iteration and is the basis of some of the algorithms for reinforcement learning. Termination can be difficult to determine if the agent must guarantee a particular error, unless it is careful about how the actions and states are selected. Often, this procedure is run indefinitely as an anytime algorithm, where it is always prepared to give its best estimate of the optimal action in a state when asked.
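Here is a minimal sketch of this idea, storing the $Q$ array and repeatedly updating an arbitrarily chosen state–action pair. It is not the pseudocode of Figure 9.17, and the random selection of pairs is just one possible ordering; the representation of `P`, `R`, and `gamma` follows the earlier sketch.

```python
import random

# A sketch of asynchronous value iteration storing the Q array (assumed representation
# as in the earlier sketch, not the pseudocode of Figure 9.17). A single Q array is kept
# and state-action pairs are updated one at a time, here in a random order.

def asynchronous_value_iteration_q(states, actions, P, R, gamma, updates):
    Q = {s: {a: 0.0 for a in actions} for s in states}
    for _ in range(updates):
        s = random.choice(states)        # select a state and an action arbitrarily
        a = random.choice(actions)
        Q[s][a] = R[s][a] + gamma * sum(p * max(Q[s2].values())
                                        for s2, p in P[s][a].items())
    # at any time, the best estimate of the optimal action in s is argmax_a Q[s][a]
    return Q
```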
Asynchronous value iteration could also be implemented by storing just the $V[S]$ array. In that case, the algorithm selects a state $s$ and carries out the update:

$$V[s] \leftarrow \max_a \left( R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V[s'] \right)$$
Although this variant stores less information, it is more difficult to extract the policy. It requires one extra backup to determine which action results in the maximum value. This can be done using

$$\pi[s] = \arg\max_a \left( R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V[s'] \right)$$
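Continuing the assumed representation from the sketches above, a single update of this $V$-storing variant, together with the extra backup that extracts an action, might look as follows.

```python
# One asynchronous update of the V-storing variant for a selected state s, and the
# extra backup used to extract an action from V (assumed representation as above).

def async_update_v(s, V, actions, P, R, gamma):
    V[s] = max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
               for a in actions)

def greedy_action(s, V, actions, P, R, gamma):
    return max(actions,
               key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items()))
```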
In Example 9.32, the state one step up and one step to the left of the +10 reward state only had its value updated after three value iterations, in which each iteration involved a sweep through all of the states.
In asynchronous value iteration, the reward state can be chosen first. Next, the node to its left can be chosen, and its value will be $0.7 \times 0.9 \times 10 = 6.3$. Next, the node above that node could be chosen, and its value would become $0.7 \times 0.9 \times 6.3 = 3.969$. Note that it has a value that reflects that it is close to a reward after considering 3 states, not 300 states, as does value iteration.