# 13.8 Model-Based RL

In many applications of reinforcement learning, plenty of time is available for computation between each action. For example, a physical robot may have many seconds between each action. Q-learning, which only does one backup per action, will not make full use of the available computation time.

An alternative to doing one Q-value update after each action is to use the experiences to learn a model. An agent can explicitly learn $P(s^{\prime}\mid s,a)$ and $R(s,a)$. For each action that the agent carries out in the environment, the agent can do a number of steps of asynchronous value iteration to give a better estimate of the $Q$-function.

Figure 13.6 shows a generic model-based reinforcement learner. As with other reinforcement learning programs, it keeps track of $Q[S,A]$, but it also maintains a model of the dynamics, represented here as $T$, where $T[s,a,s^{\prime}]$ is the count of the number of times the agent has done $a$ in state $s$ and ended up in state $s^{\prime}$. It also maintains $C[s,a]$, the count of the number of times action $a$ was carried out in state $s$. Note that $C[s,a]=\sum_{s^{\prime}}T[s,a,s^{\prime}]$, so the agent could save space, at the cost of extra runtime, by not storing $C$ and instead computing it when needed. The $R[s,a]$ array maintains the average reward obtained when doing action $a$ in state $s$.

After each action, the agent observes the reward $r$ and the resulting state $s^{\prime}$. It then updates the transition-count matrices $T$ and $C$, as well as the average reward $R$. It then does a number of steps of asynchronous value iteration, using the updated probability model derived from $T$ and the updated reward model.
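This update cycle can be sketched as follows. The class interface, the running-average reward update, and the random choice of which visited state–action pairs to update are illustrative assumptions, not the exact program of Figure 13.6:

```python
import random
from collections import defaultdict

class ModelBasedLearner:
    """A sketch of a model-based reinforcement learner (assumed interface)."""

    def __init__(self, states, actions, gamma=0.9):
        self.states, self.actions, self.gamma = states, actions, gamma
        self.Q = {(s, a): 0.0 for s in states for a in actions}
        self.T = defaultdict(int)    # T[(s, a, s2)]: transition counts
        self.C = defaultdict(int)    # C[(s, a)]: times a was done in s
        self.R = defaultdict(float)  # R[(s, a)]: average reward observed

    def observe(self, s, a, r, s2):
        # update the model from one experience of doing a in s
        self.T[(s, a, s2)] += 1
        self.C[(s, a)] += 1
        # incremental update of the running average reward
        self.R[(s, a)] += (r - self.R[(s, a)]) / self.C[(s, a)]

    def asynchronous_vi(self, num_updates):
        # some steps of asynchronous value iteration, using the probability
        # model derived from T and C; which pairs to update is left open in
        # the text -- uniform random choice is an assumption here
        visited = [sa for sa, n in self.C.items() if n > 0]
        for _ in range(num_updates):
            s, a = random.choice(visited)
            expected = sum(
                (self.T[(s, a, s2)] / self.C[(s, a)])
                * max(self.Q[(s2, a2)] for a2 in self.actions)
                for s2 in self.states if self.T[(s, a, s2)] > 0)
            self.Q[(s, a)] = self.R[(s, a)] + self.gamma * expected
```

The agent would call `observe` after each action and then run `asynchronous_vi` until the next observation arrives.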

There are three main undefined parts to this algorithm:

• Which $Q$-values should be updated? It seems reasonable that the algorithm should at least update $Q[s,a]$, because more data have been received on the transition probability and reward. From there it can either do random updates or determine which $Q$-values would change the most. The candidates whose values potentially change the most are those pairs $Q[s_{1},a_{1}]$ where doing $a_{1}$ in $s_{1}$ has a high probability of reaching a state $s_{2}$ whose value has changed the most (i.e., where some $Q[s_{2},a_{2}]$ has changed the most). This can be implemented by keeping a priority queue of $Q$-values to consider. To ensure there is no divide-by-zero error, it should only choose state–action pairs $s_{1},a_{1}$ for which $C[s_{1},a_{1}]\neq 0$, or include pseudocounts for the transitions.

• How many steps of asynchronous value iteration should be done between actions? An agent could continue doing $Q$-updates until it has to act or until it gets new information. Figure 13.6 assumes that the agent acts and then does $Q$-updates until an observation arrives; when an observation arrives, the agent acts as soon as possible. There are many variants, including doing a fixed number of updates, which may be appropriate in games where the agent can act at any time. It is also possible to run the updates in parallel with observing and acting.

• What should be the initial values for $Q[S,A]$? The algorithm requires some value for the transitions it has never experienced when updating $Q$. If it is using the exploration strategy of optimism in the face of uncertainty, it can use $R_{max}$, the maximum reward possible, as the initial value for $R$, to encourage exploration. However, as in value iteration, the algorithm converges faster if it initializes $Q$ to be as close as possible to the final $Q$-values.
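The priority-queue idea in the first point above can be sketched as follows. The `P`, `predecessors`, and `threshold` structures are assumptions about how the model is stored, with `P[(s, a)]` defined only when $C[s,a]\neq 0$:

```python
import heapq

def actions_in(Q, s):
    # the actions available in state s, recovered from the Q dictionary
    return [a for (s1, a) in Q if s1 == s]

def prioritized_updates(Q, R, P, predecessors, gamma, start_sa,
                        max_updates=100, threshold=1e-4):
    """Propagate the change at start_sa backwards through the model.

    Q: dict (s, a) -> value; R: dict (s, a) -> average reward
    P: dict (s, a) -> {s2: probability}, defined only when C[s, a] != 0
    predecessors: dict s -> iterable of pairs (s1, a1) that can reach s
    (all of these interfaces are assumptions for this sketch)
    """
    pq = [(0.0, start_sa)]  # heap entries are (-priority, (s, a))
    done = 0
    while pq and done < max_updates:
        _, (s, a) = heapq.heappop(pq)
        old = Q[(s, a)]
        Q[(s, a)] = R[(s, a)] + gamma * sum(
            p * max(Q[(s2, a2)] for a2 in actions_in(Q, s2))
            for s2, p in P[(s, a)].items())
        done += 1
        change = abs(Q[(s, a)] - old)
        if change > threshold:
            # a predecessor's priority is its probability of reaching s
            # times how much the value at s just changed
            for (s1, a1) in predecessors.get(s, ()):
                pri = P[(s1, a1)].get(s, 0.0) * change
                if pri > threshold:
                    heapq.heappush(pq, (-pri, (s1, a1)))
```

Pairs with high expected change are popped first, so computation is spent where the $Q$-function is most out of date.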

This algorithm assumes that the rewards depend on the initial state and the action. If there are separate action costs and rewards for being in a state, and the agent can separately observe the costs and rewards, the reward function can be decomposed into $C[A]$ and $R[S]$, leading to more efficient learning.
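With separately observed costs and rewards, the agent estimates only $|A|+|S|$ averages instead of $|S|\times|A|$. A minimal sketch of this decomposition follows; the function names and the choice of combining the parts as $R[s]-C[a]$ are assumptions:

```python
def update_decomposed(C_act, R_state, counts_a, counts_s, a, cost, s, reward):
    # running average of the cost of action a (one entry per action)
    counts_a[a] += 1
    C_act[a] += (cost - C_act[a]) / counts_a[a]
    # running average of the reward for being in state s (one entry per state)
    counts_s[s] += 1
    R_state[s] += (reward - R_state[s]) / counts_s[s]

def combined_reward(C_act, R_state, s, a):
    # one plausible combination: reward for the state minus the action cost
    return R_state[s] - C_act[a]
```

Because each observation updates a per-action and a per-state average, every experience informs many state–action pairs at once, which is the source of the efficiency gain.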

It is difficult to directly compare model-based and model-free reinforcement learners. Typically, model-based learners are much more efficient in terms of experience: many fewer experiences are needed to learn well. However, model-free methods use less memory and often less computation time. If experience were cheap, as in a computer game, a different comparison would be needed than if experience were expensive, as for a robot.