Usually, there are too many states to reason about explicitly. The alternative to reasoning explicitly in terms of states is to reason in terms of features, which can either be provided explicitly or learned.
Figure 13.7 shows a generic on-policy reinforcement learner that incorporates a supervised learner. This assumes the learner can carry out the operations

• $\mathit{add}(x,y)$, which adds a new example to the dataset, with input $x$ and target value $y$

• $\mathit{predict}(x)$, which gives a point prediction for the target for an example with input $x$.

In Figure 13.7, the input for the learner is a state–action pair, and the target for pair $(s,a)$ is an estimate of $Q(s,a)$, namely $r + \gamma\,\mathit{predict}((s',a'))$, where $r$ is the reward received, $s'$ is the resulting state, and $a'$ is the next action.
The only difference from the learners considered in Chapters 7 and 8 is that the learner must be able to incrementally add examples, and make predictions based on the examples it currently has. Newer examples are often better approximations than old examples, and the algorithms might need to take this into account.
Selecting the next action $a'$ on line 10 with pure exploitation means selecting an $a'$ that maximizes $\mathit{predict}((s',a'))$; exploration can be carried out using one of the exploration techniques of Section 13.5.
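The following is a minimal Python sketch of this scheme, not the pseudocode of Figure 13.7. The environment interface (env.do returning a reward and a next state), the default prediction of an untrained learner, and the epsilon-greedy exploration are assumptions made for the illustration.

```python
import random

class Learner:
    """Interface for an incremental supervised learner."""
    def add(self, x, y):       # add an example with input x and target value y
        raise NotImplementedError
    def predict(self, x):      # point prediction for an example with input x
        raise NotImplementedError

def sarsa_with_generalization(env, learner, actions, gamma=0.9, epsilon=0.1, steps=1000):
    """On-policy learning where the learner approximates Q from (state, action) inputs.

    Assumes env.do(a) carries out action a and returns (reward, next_state),
    and that learner.predict returns a default value (e.g., 0) before any
    examples have been added.
    """
    s = env.reset()
    a = random.choice(actions)
    for _ in range(steps):
        r, s1 = env.do(a)
        # epsilon-greedy: usually exploit the learner's current Q estimates
        if random.random() < epsilon:
            a1 = random.choice(actions)
        else:
            a1 = max(actions, key=lambda act: learner.predict((s1, act)))
        # the target for (s, a) is the estimate r + gamma * Q(s1, a1)
        learner.add((s, a), r + gamma * learner.predict((s1, a1)))
        s, a = s1, a1
```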
Generalization in this algorithm occurs by the learner generalizing. The learner could be, for example, a linear function (see next section), a decision tree learner, or a neural network. SARSA is an instance of this algorithm where the learner memorizes, but does not generalize.
In deep reinforcement learning, a deep neural network is used as the learner. In particular, a neural network can be used to represent the $Q$-function, the value function, and/or the policy. Deep learning requires a large amount of data and many iterations to learn, and can be sensitive to the architecture provided. While it has been very successful in games such as Go or Chess (see Section 14.7.3), it is notoriously difficult to make it work, and it is very computationally intensive. A linear function is usually better for smaller problems.
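As an illustration only (not the book's code), a small neural network could implement the add/predict interface sketched above. This assumes PyTorch and assumes each state–action pair has already been encoded as a fixed-length numeric feature vector; the class name and hyperparameters are invented for the example.

```python
import torch
import torch.nn as nn

class NeuralQLearner:
    """A Q-function learner with add/predict, backed by a small neural network."""
    def __init__(self, n_inputs, n_hidden=32, lr=0.01):
        self.net = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU(),
                                 nn.Linear(n_hidden, 1))
        self.opt = torch.optim.SGD(self.net.parameters(), lr=lr)
        self.loss_fn = nn.MSELoss()

    def predict(self, x):
        # x is a list of numbers encoding a state-action pair
        with torch.no_grad():
            return self.net(torch.tensor(x, dtype=torch.float32)).item()

    def add(self, x, y):
        # one incremental gradient step on the new example (x, y)
        self.opt.zero_grad()
        pred = self.net(torch.tensor(x, dtype=torch.float32))
        self.loss_fn(pred, torch.tensor([y], dtype=torch.float32)).backward()
        self.opt.step()
```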
Consider an instance of SARSA with generalization (Figure 13.7) in which the learner is a linear function of features of the state and the action. While there are more complicated alternatives, such as using a decision tree or a neural network, the linear function often works well, but requires feature engineering.
The feature-based learners require more information about the domain than the reinforcement-learning methods considered so far. Whereas the previous reinforcement learners were provided only with the states and the possible actions, the feature-based learners require extra domain knowledge in terms of features. This approach requires careful selection of the features; the designer should find features adequate to represent the Q-function.
The algorithm SARSA with linear function approximation, SARSA_LFA, uses a linear function of features to approximate the $Q$-function. It is based on incremental gradient descent, a variant of stochastic gradient descent that updates the parameters after every example. Suppose $F_1, \dots, F_n$ are numerical features of the state and the action, where $F_i(s,a)$ provides the value of the $i$th feature for state $s$ and action $a$. These features are used to represent the linear $Q$-function

$$Q_{\overline{w}}(s,a) = w_0 + w_1\,F_1(s,a) + \dots + w_n\,F_n(s,a)$$

for some tuple of weights $\overline{w} = \langle w_0, w_1, \dots, w_n \rangle$ that have to be learned. Assume that there is an extra feature $F_0(s,a)$ whose value is always 1, so that $w_0$ is not a special case.
An experience in SARSA of the form $\langle s, a, r, s', a' \rangle$ (the agent was in state $s$, did action $a$, received reward $r$, and ended up in state $s'$, in which it decided to do action $a'$) provides the new estimate $r + \gamma\,Q_{\overline{w}}(s',a')$ to update $Q_{\overline{w}}(s,a)$. This experience can be used as a data point for linear regression. Let $\delta = r + \gamma\,Q_{\overline{w}}(s',a') - Q_{\overline{w}}(s,a)$. Using Equation (7.4), weight $w_i$ is updated by

$$w_i := w_i + \eta\,\delta\,F_i(s,a).$$
This update can then be incorporated into SARSA, giving the algorithm shown in Figure 13.8.
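The following is a minimal sketch of this update in Python, under the assumption that feats(s, a) returns the list $[F_0(s,a), F_1(s,a), \dots, F_n(s,a)]$ with $F_0$ always 1; it is an illustration, not the code of Figure 13.8.

```python
def q(w, s, a, feats):
    """Linear Q-function: Q_w(s, a) = sum_i w_i * F_i(s, a)."""
    return sum(wi * fi for wi, fi in zip(w, feats(s, a)))

def sarsa_lfa_update(w, s, a, r, s1, a1, feats, gamma=0.9, eta=0.01):
    """Update the weights w in place from one experience <s, a, r, s1, a1>."""
    delta = r + gamma * q(w, s1, a1, feats) - q(w, s, a, feats)
    F = feats(s, a)
    for i in range(len(w)):
        w[i] += eta * delta * F[i]    # w_i := w_i + eta * delta * F_i(s, a)
```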
Although this program is simple to implement, feature engineering – choosing which features to include – is non-trivial. The linear function must not only convey the best action to carry out, it must also convey information about which future states are useful.
Consider the monster game of Example 13.2. From understanding the domain, and not just treating it as a black box, some possible features that can be computed and might be useful are listed below; a sketch of how such features might be coded follows the list.
$F_1(s,a)$ has value 1 if action $a$ would most likely take the agent from state $s$ into a location where a monster could appear and has value 0 otherwise.
$F_2(s,a)$ has value 1 if action $a$ would most likely take the agent into a wall and has value 0 otherwise.
$F_3(s,a)$ has value 1 if step $a$ would most likely take the agent toward a prize.
$F_4(s,a)$ has value 1 if the agent is damaged in state $s$ and action $a$ takes it toward the repair station.
$F_5(s,a)$ has value 1 if the agent is damaged and action $a$ would most likely take the agent into a location where a monster could appear and has value 0 otherwise. That is, it is the same as $F_1(s,a)$ but is only applicable when the agent is damaged.
$F_6(s,a)$ has value 1 if the agent is damaged in state $s$ and has value 0 otherwise.
$F_7(s,a)$ has value 1 if the agent is not damaged in state $s$ and has value 0 otherwise.
$F_8(s,a)$ has value 1 if the agent is damaged and there is a prize ahead in direction $a$.
$F_9(s,a)$ has value 1 if the agent is not damaged and there is a prize ahead in direction $a$.
$F_{10}(s,a)$ has the value of the $x$-value in state $s$ if there is a prize at location $P_0$ in state $s$. That is, it is the distance from the left wall if there is a prize at location $P_0$.
$F_{11}(s,a)$ has the value $4-x$, where $x$ is the horizontal position in state $s$, if there is a prize at location $P_0$ in state $s$. That is, it is the distance from the right wall if there is a prize at location $P_0$.
The remaining features, from $F_{12}(s,a)$ on, are like $F_{10}$ and $F_{11}$ for different combinations of the prize location and the distance from each of the four walls. For the case where the prize is at location $P_0$, the $y$-distance could take into account the wall.
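As a sketch of how such features might be coded (an illustration, not AIPython's implementation), the Boolean features can be built from domain-specific predicates on a state–action pair; the predicate names used below (monster_ahead, wall_ahead, toward_prize) are hypothetical stand-ins for the domain knowledge described above.

```python
def make_feats(predicates):
    """Build a feature function F(s, a) = [F_0, F_1, ...] from Boolean predicates.

    F_0 is always 1; each remaining feature is 1 if its predicate holds, else 0.
    """
    def feats(s, a):
        return [1] + [1 if p(s, a) else 0 for p in predicates]
    return feats

# Hypothetical usage: each predicate encodes knowledge about the monster game.
# feats = make_feats([monster_ahead, wall_ahead, toward_prize])
```

Numeric features, such as the distance-from-wall features above, would be appended to the returned list in the same way.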
An example linear function learned for this domain has the form
$$Q_{\overline{w}}(s,a) = w_0 + w_1\,F_1(s,a) + w_2\,F_2(s,a) + \dots$$
where the weights are the values (to one decimal place) learned in one run of the SARSA_LFA algorithm in Figure 13.8.
AIPython (aipython.org) has an open-source Python implementation of this algorithm for this monster game. Experiment with stepping through the algorithm for individual steps, trying to understand how each step updates each parameter. Now run it for a number of steps. Consider the performance using the evaluation measures of Section 13.6. Try to make sense of the values of the parameters learned.
This algorithm tends to overfit to current experiences, and to forget about old experiences, so that when it returns to a part of the state space it has not visited recently, it will have to relearn all over again. This is known as catastrophic forgetting. One modification is to remember old experiences ($\langle s, a, r, s' \rangle$ tuples) and to carry out some steps of experience replay, by doing some weight updates based on random previous experiences. Updating the weights requires the use of the next action $a'$, which should be chosen according to the current policy, not the policy that was in effect when the experience occurred. When memory size becomes an issue, some of the old experiences can be discarded.
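A minimal sketch of this idea, building on the q and sarsa_lfa_update functions sketched earlier; the buffer size, batch size, and greedy re-selection of $a'$ (as an approximation to the current policy) are assumptions for the illustration.

```python
import random
from collections import deque

buffer = deque(maxlen=10000)   # old experiences are discarded when the buffer is full

def remember(s, a, r, s1):
    buffer.append((s, a, r, s1))

def replay(w, feats, actions, gamma=0.9, eta=0.01, batch=32):
    """Redo weight updates for a few randomly chosen stored experiences."""
    for s, a, r, s1 in random.sample(list(buffer), min(batch, len(buffer))):
        # choose a1 according to the *current* policy, not the one in effect
        # when the experience was collected
        a1 = max(actions, key=lambda act: q(w, s1, act, feats))
        sarsa_lfa_update(w, s, a, r, s1, a1, feats, gamma, eta)
```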
State-based MDPs and state-based reinforcement learning algorithms such as Q-learning, SARSA, and the model-based reinforcement learner have no local maxima that are not global maxima. This is because each state can be optimized separately; improving a policy for one state cannot negatively impact another state.
However, when there is generalization, improving the policy for one state can make other states worse. This means that the algorithms can converge to a local optimum with a value that is not the best possible. They can work better when there is some way to escape local optima. A standard way to do this is to use randomized algorithms, for example population-based methods, similar to particle filtering, where multiple random initializations are run in parallel and the best policy is chosen. There have been some notable – arguably creative – solutions found using evolutionary algorithms, where the individual runs are combined using a genetic algorithm.
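A simple sketch of the restart idea (an assumption for illustration, not a method from the book): run several independently initialized learners and keep the weights that evaluate best.

```python
import random

def best_of_restarts(train, evaluate, n_weights, n_runs=10):
    """train(w) learns in place starting from weights w; evaluate(w) returns a score."""
    best_w, best_score = None, float("-inf")
    for _ in range(n_runs):
        w = [random.uniform(-0.1, 0.1) for _ in range(n_weights)]  # random initialization
        train(w)             # e.g., run SARSA_LFA for some number of steps
        score = evaluate(w)  # e.g., average reward under the learned policy
        if score > best_score:
            best_w, best_score = w, score
    return best_w
```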