To understand how reinforcement learning works, consider how to average values that arrive to an agent sequentially.
Suppose there is a sequence of numerical values, $v_1, v_2, v_3, \dots$, and the goal is to predict the next value, given the previous values. One way to do this is to have a running approximation of the expected value of the $v_i$. For example, given a sequence of students’ grades and the aim of predicting the next grade, a reasonable prediction may be to predict the average grade. This can be implemented by maintaining a running average, as follows:
Let $A_k$ be an estimate of the expected value based on the first $k$ data points $v_1, \dots, v_k$. A reasonable estimate is the sample average:

$$A_k = \frac{v_1 + \dots + v_k}{k}.$$

Thus,

$$k \cdot A_k = v_1 + \dots + v_{k-1} + v_k = (k-1) \cdot A_{k-1} + v_k.$$

Dividing by $k$ gives

$$A_k = \left(1 - \frac{1}{k}\right) A_{k-1} + \frac{v_k}{k}.$$

Let $\alpha_k = \frac{1}{k}$; then

$$A_k = (1 - \alpha_k)\, A_{k-1} + \alpha_k\, v_k = A_{k-1} + \alpha_k\, (v_k - A_{k-1}). \tag{12.1}$$
The difference, $v_k - A_{k-1}$, is called the temporal difference error or TD error; it specifies how different the new value, $v_k$, is from the old prediction, $A_{k-1}$. The old estimate, $A_{k-1}$, is updated by $\alpha_k$ times the TD error to get the new estimate, $A_k$. The qualitative interpretation of the temporal difference formula is that if the new value is higher than the old prediction, increase the predicted value; if the new value is less than the old prediction, decrease the predicted value. The change is proportional to the difference between the new value and the old prediction. Note that this equation is still valid for the first value, $k = 1$, in which case $A_1 = v_1$.
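To make the update concrete, here is a minimal Python sketch of the running average computed with Equation (12.1); the function name, the grades, and the printed example are illustrative, not taken from the book's code.

```python
def running_average(values):
    """Incrementally average a sequence using the update of Equation (12.1):
    A_k = A_{k-1} + alpha_k * (v_k - A_{k-1}), with alpha_k = 1/k."""
    A = 0.0                       # A_0; its value does not matter because alpha_1 = 1
    for k, v in enumerate(values, start=1):
        alpha = 1 / k             # alpha_k = 1/k recovers the sample average
        td_error = v - A          # TD error: new value minus old prediction
        A = A + alpha * td_error
    return A

# The running average of the grades equals their sample mean.
print(running_average([78, 85, 92, 69]))   # 81.0
```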
This analysis assumes that all of the values have an equal weight. However, suppose you are keeping an estimate of the expected price of some item in the grocery store. Prices go up and down in the short term, but tend to increase slowly; the newer prices are more useful for the estimate of the current price than older prices, and so they should be weighted more in predicting new prices.
In reinforcement learning, the values are estimates of the effects of actions; more recent values are more accurate than earlier values because the agent is learning, and so they should be weighted more. One way to weight later examples more is to use Equation (12.1), but with $\alpha$ as a constant ($0 < \alpha \le 1$) that does not depend on $k$. Unfortunately, this does not converge to the average value when there is variability in the values in the sequence, but it can track changes when the underlying process generating the values changes.
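A rough sketch of this constant-$\alpha$ variant, assuming an illustrative step size of $\alpha = 0.1$ (the function name and the price sequence are made up for illustration):

```python
def track(values, alpha=0.1):
    """Exponentially weighted estimate: a constant alpha weights recent values more,
    so the estimate tracks drift but does not converge to the average of a noisy sequence."""
    A = values[0]                 # start from the first value instead of an arbitrary prior A_0
    for v in values[1:]:
        A = A + alpha * (v - A)   # the update of Equation (12.1) with a fixed alpha
    return A

# Prices that drift upward: the estimate follows the recent prices, lagging behind the latest one.
print(track([2.00, 2.05, 1.95, 2.10, 2.20, 2.30, 2.40]))
```

Initializing from the first value avoids the bias that an arbitrary prior $A_0$ would otherwise introduce when $\alpha$ is constant.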
You could reduce $\alpha_k$ more slowly and potentially have the benefits of both approaches: weighting recent observations more and still converging to the average. You can guarantee convergence if

$$\sum_{k=1}^{\infty} \alpha_k = \infty \quad\text{and}\quad \sum_{k=1}^{\infty} \alpha_k^2 < \infty.$$
The first condition is to ensure that random fluctuations and initial conditions get averaged out, and the second condition guarantees convergence.
One way to give more weight to more recent experiences, but also converge to the average, is to set $\alpha_k = (r+1)/(r+k)$ for some $r > 0$. For the first experience, $\alpha_1 = 1$, so it ignores the prior $A_0$. If $r = 9$, after 11 experiences, $\alpha_{11} = 0.5$, so it weights that experience as equal to all of its prior experiences. The parameter $r$ should be set to be appropriate for the domain.
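A short sketch of this schedule, assuming $r = 9$ as in the example above (the function name is illustrative):

```python
def average_with_schedule(values, r=9):
    """Running estimate with alpha_k = (r+1)/(r+k): recent values get more weight
    than under alpha_k = 1/k, yet the estimate still converges to the average."""
    A = 0.0                        # the prior A_0 is ignored because alpha_1 = 1
    for k, v in enumerate(values, start=1):
        alpha = (r + 1) / (r + k)  # alpha_1 = 1; with r = 9, alpha_11 = 10/20 = 0.5
        A = A + alpha * (v - A)
    return A
```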
Note that guaranteeing convergence to the average is not compatible with being able to adapt to make better predictions when the underlying process generating the values keeps changing.
For the rest of this chapter, $\alpha$ without a subscript is assumed to be a constant. With a subscript, $\alpha_k$, it is a function of the number of cases used for the particular estimate.