Dimensions: flat, states, infinite horizon, fully observable, stochastic, utility, learning, single agent, online, bounded rationality
In many applications of reinforcement learning, plenty of time is available for computation between each action. For example, a physical robot may have many seconds between each action. Q-learning, which only does one backup per action, will not make full use of the available computation time.
An alternative to doing one Q-value update after each action is to use the experiences to learn a model. An agent can explicitly learn P(s'|s,a) and R(s,a). For each action that the agent carries out in the environment, the agent can do a number of steps of asynchronous value iteration to give a better estimate of the Q-function.
Figure 12.6 shows a generic model-based reinforcement learner. As with other reinforcement learning programs, it keeps track of Q[S,A], but it also maintains a model of the dynamics, represented here as T, where T[s,a,s'] is the count of the number of times that the agent has done action a in state s and ended up in state s'. It also maintains C[s,a], which is the count of the number of times action a was carried out in state s. Note that C[s,a] = ∑_{s'} T[s,a,s'], and so we could save space but increase runtime by not storing C, but computing it when needed. The array R[s,a] maintains the average reward obtained when doing action a in state s.
After each action, the agent observes the reward r and the resulting state s'. It then updates the transition-count matrices T and C, as well as the average reward R. It then does a number of steps of asynchronous value iteration, using the updated probability model derived from T and the updated reward model.
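As a concrete illustration, here is a minimal Python sketch of this kind of learner. The class and method names (ModelBasedLearner, observe, num_updates, and so on) are illustrative assumptions, not the book's code, and the choice of which state–action pair to update is simply random here.

```python
import random
from collections import defaultdict

class ModelBasedLearner:
    """Sketch of a model-based reinforcement learner.

    Q[(s, a)] is the estimated Q-value,
    T[(s, a)][s1] counts transitions from s to s1 under action a,
    C[(s, a)] counts how often a was done in s, and
    R[(s, a)] is the running average reward for doing a in s.
    """

    def __init__(self, actions, discount=0.9, q_init=0.0):
        self.actions = actions
        self.discount = discount
        self.Q = defaultdict(lambda: q_init)
        self.T = defaultdict(lambda: defaultdict(int))
        self.C = defaultdict(int)
        self.R = defaultdict(float)

    def backup(self, s, a):
        """One asynchronous value iteration step for the pair (s, a):
        Q[s,a] = R[s,a] + discount * sum_s1 P(s1|s,a) * max_a1 Q[s1,a1],
        where P(s1|s,a) is estimated as T[s,a,s1] / C[s,a]."""
        self.Q[(s, a)] = self.R[(s, a)] + self.discount * sum(
            count / self.C[(s, a)]
            * max(self.Q[(s1, a1)] for a1 in self.actions)
            for s1, count in self.T[(s, a)].items())

    def observe(self, s, a, reward, s1, num_updates=100):
        """Record one experience, then do some steps of asynchronous
        value iteration on previously visited state-action pairs."""
        self.T[(s, a)][s1] += 1
        self.C[(s, a)] += 1
        # incremental update of the average reward for (s, a)
        self.R[(s, a)] += (reward - self.R[(s, a)]) / self.C[(s, a)]
        visited = list(self.C.keys())
        for _ in range(num_updates):
            self.backup(*random.choice(visited))  # here: a random visited pair
```

How the agent explores (for example, an ε-greedy choice over Q) and which pairs to back up are deliberately left open here; they correspond to the undefined parts discussed next.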
There are three main undefined parts to this algorithm:
Which Q-values should be updated? It seems reasonable that the algorithm should at least update Q[s,a], because more data have been received on the transition probability and reward. From there it can either do random updates or determine which Q-values would change the most. The elements that potentially have their values changed the most are the Q[s1,a1] with the highest probability of ending up at a Q-value that has changed the most (i.e., where Q[s,a] has changed the most). This can be implemented by keeping a priority queue of Q-values to consider (see the sketch after this list). To ensure there is no divide-by-zero error, it should only choose state–action pairs (s1,a1) for which C[s1,a1] ≠ 0, or include pseudocounts for the transitions.
How many steps of asynchronous value iteration should be done between actions? An agent could continue doing Q-updates until it has to act or until it gets new information. Figure 12.6 assumes that the agent acts and then does Q-updates until an observation arrives. When an observation arrives, the agent acts as soon as possible. There are many variants, including doing a fixed number of updates, which may be appropriate in games where it can act at any time. It is also possible to run the update in parallel with observing and acting.
What should be the initial values for Q[S,A]? It requires some value for the transitions it has never experienced when updating Q. If it is using the exploration strategy of optimism in the face of uncertainty, it can use Rmax, the maximum reward possible, as the initial value for R, to encourage exploration. However, as in value iteration, the algorithm converges faster if it initializes Q to be as close as possible to the final Q-value.
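For the first of these choices, the priority-queue idea can be sketched as follows, in the spirit of prioritized sweeping. It assumes the Q, T, C, and R tables from the sketch above; the predecessor bookkeeping, the priority measure (probability of reaching the changed state times the size of its change), and the thresholds are illustrative assumptions, not the book's algorithm.

```python
import heapq
from collections import defaultdict

def prioritized_updates(Q, T, C, R, actions, discount, s, a,
                        max_updates=50, threshold=1e-3):
    """Choose which Q-values to update using a priority queue, starting
    from the pair (s, a) that just received new data and working back to
    predecessors whose Q-values could change the most."""
    def backup(s2, a2):
        return R[(s2, a2)] + discount * sum(
            count / C[(s2, a2)]
            * max(Q[(s3, a3)] for a3 in actions)
            for s3, count in T[(s2, a2)].items())

    # predecessors[s1] = set of pairs (s0, a0) observed to lead to s1
    predecessors = defaultdict(set)
    for (s0, a0), counts in T.items():
        for s1 in counts:
            predecessors[s1].add((s0, a0))

    tie = 0                                   # tie-breaker for the heap
    heap = [(-float('inf'), tie, (s, a))]     # max-heap via negated priority
    updates = 0
    while heap and updates < max_updates:
        _, _, (s2, a2) = heapq.heappop(heap)
        if C[(s2, a2)] == 0:                  # avoid a divide-by-zero
            continue
        new_q = backup(s2, a2)
        change = abs(new_q - Q[(s2, a2)])
        Q[(s2, a2)] = new_q
        updates += 1
        if change > threshold:
            # pairs most likely to reach s2 get the highest priority
            for (s0, a0) in predecessors[s2]:
                prob = T[(s0, a0)][s2] / C[(s0, a0)]
                tie += 1
                heapq.heappush(heap, (-prob * change, tie, (s0, a0)))
```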
This algorithm assumes that the rewards depend on the initial state and the action. If there are separate action costs and rewards for being in a state, and the agent can separately observe the costs and rewards, the reward function can be decomposed into a cost for each action and a reward for being in a state, leading to more efficient learning.
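A minimal sketch of such a decomposition, assuming the reward splits additively into a per-state reward minus a per-action cost (the class and method names here are illustrative):

```python
from collections import defaultdict

class DecomposedReward:
    """Learn a per-state reward and a per-action cost separately.
    Each table needs far fewer experiences to estimate well than a
    full R[s, a] table, since costs are shared across states and
    rewards are shared across actions."""

    def __init__(self):
        self.state_reward = defaultdict(float)   # average reward for being in s
        self.state_count = defaultdict(int)
        self.action_cost = defaultdict(float)    # average cost of doing a
        self.action_count = defaultdict(int)

    def observe(self, s, a, state_reward, action_cost):
        """Update running averages from separately observed components."""
        self.state_count[s] += 1
        self.state_reward[s] += (state_reward - self.state_reward[s]) / self.state_count[s]
        self.action_count[a] += 1
        self.action_cost[a] += (action_cost - self.action_cost[a]) / self.action_count[a]

    def reward(self, s, a):
        """Reconstructed reward estimate used in the Q backup."""
        return self.state_reward[s] - self.action_cost[a]
```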
It is difficult to directly compare the model-based and model-free reinforcement learners. Typically, model-based learners are much more efficient in terms of experience; many fewer experiences are needed to learn well. However, model-free methods use less memory and often less computation time. If experience is cheap, such as in a computer game, a different comparison is needed than if experience is expensive, such as for a robot.