A.1 Rolling Average

It is common to receive information sequentially and to require a rolling average: the mean of the values seen so far, or the mean of the most recent values.

Suppose there is a sequence of numerical values, $v_1, v_2, v_3, \dots$, and the goal is to predict the mean after the first $k$ values for each $k$. The rolling average, $A_k$, is the mean of the first $k$ data points $v_1, \dots, v_k$, namely

$$A_k = \frac{v_1 + \dots + v_k}{k}.$$

Thus

$$\begin{aligned}
kA_k &= v_1 + \dots + v_{k-1} + v_k \\
&= (k-1)A_{k-1} + v_k.
\end{aligned}$$

Dividing by $k$ gives

$$A_k = \left(1 - \frac{1}{k}\right) A_{k-1} + \frac{v_k}{k}.$$

Let $\alpha_k = \frac{1}{k}$; then

$$\begin{aligned}
A_k &= (1 - \alpha_k) A_{k-1} + \alpha_k v_k \\
&= A_{k-1} + \alpha_k (v_k - A_{k-1}).
\end{aligned} \tag{A.1}$$
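
Equation (A.1) can be implemented as a one-pass update that stores only the count and the current mean. The following Python sketch is illustrative (the class name RollingAverage is not from the text):

```python
class RollingAverage:
    """Mean of all values seen so far, updated incrementally."""
    def __init__(self):
        self.k = 0        # number of values seen
        self.mean = 0.0   # A_k, the mean of the first k values

    def add(self, v):
        self.k += 1
        alpha = 1 / self.k              # alpha_k = 1/k
        # Equation (A.1): A_k = A_{k-1} + alpha_k * (v_k - A_{k-1})
        self.mean += alpha * (v - self.mean)
        return self.mean
```

For example, feeding in the values 2.0, 4.0, 6.0 yields the means 2.0, 3.0, 4.0, matching $2/1$, $(2+4)/2$, and $(2+4+6)/3$.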

Predicting the mean makes sense if all of the values have an equal weight. However, suppose you are keeping an estimate of the expected price of some item in the grocery store. Prices go up and down in the short term, but tend to increase slowly; the newer prices are more useful for the estimate of the current price than older prices, and so they should be weighted more in predicting new prices.

Suppose, instead, you want to maintain the average of the previous $n$ values. The first $n$ values need to be treated specially. Each value (after the $n$th) contributes $1/n$ to the average. When a new value arrives, the oldest is dropped. If $A_{k-1}$ is the average of the previous $n$ values, the next average is

$$A_k = A_{k-1} + \frac{v_k - v_{k-n}}{n}.$$

To implement this, the $n$ most recent values need to be stored, and the average is sensitive to what happened $n$ steps ago. One way to simplify this is to use the rolling average, $A_{k-1}$, instead of $v_{k-n}$. This results in Equation (A.1), but with $\alpha_k$ a constant, namely $1/n$.
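
As a concrete illustration, here is a Python sketch of the exact window average, which must store the $n$ most recent values (the class name WindowAverage is an illustrative choice):

```python
from collections import deque

class WindowAverage:
    """Mean of the n most recent values."""
    def __init__(self, n):
        self.n = n
        self.window = deque()   # the n most recent values
        self.mean = 0.0

    def add(self, v):
        if len(self.window) < self.n:
            # the first n values are treated specially: ordinary mean
            self.window.append(v)
            self.mean += (v - self.mean) / len(self.window)
        else:
            # drop the oldest: A_k = A_{k-1} + (v_k - v_{k-n}) / n
            oldest = self.window.popleft()
            self.window.append(v)
            self.mean += (v - oldest) / self.n
        return self.mean
```

The memory cost of the stored window is exactly what the constant-$\alpha_k$ simplification avoids.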

Having $\alpha_k = \frac{1}{k}$ averages out noise, but treats all data equally. Having $\alpha_k$ a constant means more recent data is used, but any noise in the data becomes noise in the rolling average. Using a constant, $\alpha$, gives an exponentially decaying rolling average, because the value $v_{k-n}$, which is $n$ steps before the current value, has a weight proportional to $(1-\alpha)^n$ in the average.
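
The constant-$\alpha$ update needs no stored history at all. A minimal sketch, assuming the average is initialized to the first value (one common choice; the function name ema is illustrative):

```python
def ema(values, alpha):
    """Exponentially decaying rolling average of a sequence."""
    mean = values[0]               # initialize with the first value
    averages = [mean]
    for v in values[1:]:
        # A_k = (1 - alpha) * A_{k-1} + alpha * v_k
        mean = (1 - alpha) * mean + alpha * v
        averages.append(mean)
    return averages
```

Initializing with the first value avoids biasing the early averages toward an arbitrary starting point such as zero.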

You could reduce $\alpha_k$ more slowly and potentially have the benefits of both approaches: weighting recent observations more and still converging to the mean. You can guarantee convergence if

$$\sum_{k=1}^{\infty} \alpha_k = \infty \qquad \text{and} \qquad \sum_{k=1}^{\infty} \alpha_k^2 < \infty.$$

The first condition is to ensure that random fluctuations and initial conditions get averaged out, and the second condition guarantees convergence.
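
For example, $\alpha_k = \frac{1}{k}$ satisfies both conditions, since $\sum 1/k$ diverges while $\sum 1/k^2$ converges, whereas a constant $\alpha$ violates the second condition. A schedule such as $\alpha_k = \frac{a}{a-1+k}$ for a fixed $a > 1$ (an illustrative choice) also satisfies both conditions, but decays more slowly than $1/k$, so it weights recent observations more heavily.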

The rolling average is used for the simple controller of Example 2.3, for some of the optimizers for neural networks in Section 8.2, and for reinforcement learning in Section 13.3.