13.3 Temporal Differences

To understand how reinforcement learning works, consider how to average values that arrive at an agent sequentially. Section A.1 discusses how to maintain rolling averages; that technique is the basis of temporal differences.

Suppose there is a sequence of numerical values, $v_{1},v_{2},v_{3},\dots$, and the aim is to predict the next. A rolling average $A_{k}$ is maintained, and updated using the temporal difference equation, derived in Section A.1:

$$A_{k}=(1-\alpha_{k})*A_{k-1}+\alpha_{k}*v_{k}=A_{k-1}+\alpha_{k}*(v_{k}-A_{k-1})\qquad(13.1)$$

where $\alpha_{k}=\frac{1}{k}$. The difference, $v_{k}-A_{k-1}$, is called the temporal difference error or TD error; it specifies how different the new value, $v_{k}$, is from the old prediction, $A_{k-1}$. The old estimate, $A_{k-1}$, is updated by $\alpha_{k}$ times the TD error to get the new estimate, $A_{k}$.

A qualitative interpretation of the temporal difference equation is that if the new value is higher than the old prediction, increase the predicted value; if the new value is less than the old prediction, decrease the predicted value. The change is proportional to the difference between the new value and the old prediction. Note that this equation is still valid for the first value, $k=1$, in which case $A_{1}=v_{1}$.
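The update can be sketched in a few lines of Python. This is an illustrative sketch, not code from the text; the name `td_update` is invented here. With $\alpha_{k}=1/k$, repeatedly applying the update computes the running average of the values seen so far:

```python
def td_update(prev_avg, value, alpha):
    """Move the old estimate toward the new value by alpha times the TD error."""
    return prev_avg + alpha * (value - prev_avg)

values = [4.0, 2.0, 6.0, 8.0]
avg = 0.0                        # A_0; ignored because alpha_1 = 1
for k, v in enumerate(values, start=1):
    avg = td_update(avg, v, 1.0 / k)   # alpha_k = 1/k

print(avg)  # 5.0, the mean of the four values
```

Note that on the first iteration $\alpha_{1}=1$, so `avg` becomes exactly the first value, matching the observation that $A_{1}=v_{1}$.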

In reinforcement learning, the values are often estimates of the effects of actions; more recent values are more accurate than earlier values because the agent is learning, and so they should be weighted more. One way to weight later examples more is to use Equation 13.1, but with $\alpha$ as a constant ($0<\alpha\leq 1$) that does not depend on $k$. This does not converge to the average value when there is variability in the values of the sequence, but it can track changes when the underlying process generating the values changes. See Section A.1.
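The contrast between the two step sizes can be seen on a sequence whose underlying value changes partway through. In this sketch (again illustrative, with invented names), a constant $\alpha$ tracks the jump, while $\alpha_{k}=1/k$ settles on the overall average:

```python
def td_update(prev, value, alpha):
    return prev + alpha * (value - prev)

# The underlying value jumps from 1.0 to 5.0 halfway through.
values = [1.0] * 20 + [5.0] * 20

constant, decaying = 0.0, 0.0
for k, v in enumerate(values, start=1):
    constant = td_update(constant, v, 0.3)      # constant alpha: tracks change
    decaying = td_update(decaying, v, 1.0 / k)  # alpha_k = 1/k: averages all

print(constant)  # close to 5.0, the recent value
print(decaying)  # 3.0, the average of the whole sequence
```

The constant-$\alpha$ estimate forgets old values geometrically, which is why it never converges exactly when the values are noisy but adapts quickly when they shift.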

One way to give more weight to more recent experiences, but still converge to the average, is to set $\alpha_{k}=(r+1)/(r+k)$ for some $r>0$. For the first experience, $\alpha_{1}=1$, so the prior $A_{0}$ is ignored. If $r=9$, then $\alpha_{11}=0.5$, so the eleventh experience is weighted equally with all of the prior experiences combined. The parameter $r$ should be set to be appropriate for the domain.
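This step-size schedule is easy to check numerically; the helper name `alpha` below is invented for illustration:

```python
def alpha(k, r=9):
    # Step-size schedule alpha_k = (r+1)/(r+k); r > 0 is domain-dependent.
    return (r + 1) / (r + k)

print(alpha(1))    # 1.0: the first experience ignores the prior A_0
print(alpha(11))   # 0.5: the 11th experience equals all prior ones combined
print(alpha(991))  # 0.01: later experiences still shift the estimate slightly
```

Because $\alpha_{k}\to 0$ as $k$ grows (unlike a constant $\alpha$), the estimate eventually converges, yet early on each new value carries more weight than it would under $\alpha_{k}=1/k$.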

Guaranteeing convergence to the average is incompatible with being able to adapt when the underlying process generating the values changes, as happens with non-stationary dynamics or rewards.