SARSA with Linear Function Approximation, SARSA_LFA, uses a linear function of features to approximate the $Q$-function. This algorithm uses the on-policy method SARSA, because the agent's experiences sample the reward from the policy the agent is actually following, rather than sampling from an optimal policy.
SARSA_LFA uses features of both the state and the action. Suppose $F_1, \dots, F_n$ are numerical features of the state and the action. Thus, $F_i(s, a)$ provides the value for the $i$th feature for state $s$ and action $a$. These features can be binary, with domain $\{0, 1\}$, or other numerical features. These features will be used to represent the linear $Q$-function:

$$Q_{\overline{w}}(s, a) = w_0 + w_1 F_1(s, a) + \dots + w_n F_n(s, a)$$

for some tuple of weights $\overline{w} = \langle w_0, w_1, \dots, w_n \rangle$ that have to be learned. Assume that there is an extra feature $F_0(s, a)$ whose value is always 1, so that $w_0$ is not a special case.
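As a minimal sketch (not the book's code), the linear $Q$-function can be computed from a weight vector and a feature function; the names `linear_Q`, `weights`, and `features` here are illustrative assumptions:

```python
# A minimal sketch of the linear Q-function Q_w(s, a) = sum_i w_i * F_i(s, a).
# `features(s, a)` is assumed to return [F_0(s,a), F_1(s,a), ..., F_n(s,a)],
# where F_0 is the constant feature 1, so weights[0] plays the role of w_0.

def linear_Q(weights, features, s, a):
    """Return the approximate Q-value of doing action a in state s."""
    return sum(w * f for (w, f) in zip(weights, features(s, a)))
```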
Consider the grid game of Example 12.2. From understanding the domain, and not just treating it as a black box, some possible features that can be computed and might be useful are the following (a code sketch of a few of them appears after the list):
$F_1(s, a)$ has value 1 if action $a$ would most likely take the agent from state $s$ into a location where a monster could appear and has value 0 otherwise.
$F_2(s, a)$ has value 1 if action $a$ would most likely take the agent into a wall and has value 0 otherwise.
$F_3(s, a)$ has value 1 if step $a$ would most likely take the agent toward a prize.
$F_4(s, a)$ has value 1 if the agent is damaged in state $s$ and action $a$ takes it toward the repair station.
$F_5(s, a)$ has value 1 if the agent is damaged and action $a$ would most likely take the agent into a location where a monster could appear and has value 0 otherwise. That is, it is the same as $F_1(s, a)$ but is only applicable when the agent is damaged.
$F_6(s, a)$ has value 1 if the agent is damaged in state $s$ and has value 0 otherwise.
$F_7(s, a)$ has value 1 if the agent is not damaged in state $s$ and has value 0 otherwise.
$F_8(s, a)$ has value 1 if the agent is damaged and there is a prize ahead in direction $a$.
$F_9(s, a)$ has value 1 if the agent is not damaged and there is a prize ahead in direction $a$.
$F_{10}(s, a)$ has the value of the $x$-value in state $s$ if there is a prize at location $P_0$ in state $s$. That is, it is the distance from the left wall if there is a prize at location $P_0$.
$F_{11}(s, a)$ has the value of the distance from the right wall, where $x$ is the horizontal position in state $s$, if there is a prize at location $P_0$ in state $s$.
The remaining features, $F_{12}(s, a)$ onwards, are like $F_{10}$ and $F_{11}$ for different combinations of the prize location and the distance from each of the four walls. For the case where the prize is at location $P_0$, the $y$-distance could take into account the wall.
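The following is a hypothetical sketch of how a few of these features might be written in code, assuming a state object with attributes such as `damaged`, and helper functions `next_position` and `monster_possible_at`; none of these names come from the book's implementation:

```python
# A hypothetical feature function for the grid game. The helpers
# next_position(s, a) and monster_possible_at(x, y) and the attribute
# s.damaged are assumptions for illustration only.

def features(s, a):
    x1, y1 = next_position(s, a)          # where action a most likely leads
    monster_ahead = 1 if monster_possible_at(x1, y1) else 0   # F1
    damaged = 1 if s.damaged else 0                            # F6
    return [
        1,                                 # F0: constant feature
        monster_ahead,                     # F1
        damaged * monster_ahead,           # F5: same as F1, but only when damaged
        damaged,                           # F6
        1 - damaged,                       # F7
        # ... the remaining wall, prize, and distance features in the same style
    ]
```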
One example linear function uses the weight values learned (to one decimal place) in a single run of the SARSA_LFA algorithm of Figure 12.7.
An experience in SARSA of the form $\langle s, a, r, s', a' \rangle$ (the agent was in state $s$, did action $a$, received reward $r$, and ended up in state $s'$, in which it decided to do action $a'$) provides the new estimate $r + \gamma\, Q_{\overline{w}}(s', a')$ to update $Q_{\overline{w}}(s, a)$. This experience can be used as a data point for linear regression. Let $\delta = r + \gamma\, Q_{\overline{w}}(s', a') - Q_{\overline{w}}(s, a)$. Using Equation 7.3, weight $w_i$ is updated by

$$w_i := w_i + \eta\, \delta\, F_i(s, a).$$
This update can then be incorporated into SARSA, giving the algorithm shown in Figure 12.7.
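The following is a minimal sketch of how this update can be incorporated into a SARSA loop in the style of Figure 12.7; `env.do`, `env.state`, `features`, the action set, and the epsilon-greedy `select_action` are assumptions for illustration, not the book's code:

```python
import random

def SARSA_LFA(env, features, n_features, gamma=0.9, eta=0.01,
              epsilon=0.1, actions=("up", "down", "left", "right"),
              steps=10000):
    """A sketch of SARSA with linear function approximation.

    env.do(a) is assumed to return (reward, next_state); features(s, a)
    is assumed to return a list of n_features numbers, the first of
    which is the constant 1.
    """
    w = [0.0] * n_features

    def Q(s, a):
        return sum(wi * fi for wi, fi in zip(w, features(s, a)))

    def select_action(s):
        # epsilon-greedy exploration with respect to the current Q estimate
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q(s, a))

    s = env.state
    a = select_action(s)
    for _ in range(steps):
        r, s1 = env.do(a)            # carry out a, observe reward and next state
        a1 = select_action(s1)       # on-policy: choose the next action now
        delta = r + gamma * Q(s1, a1) - Q(s, a)
        F = features(s, a)
        for i in range(n_features):  # w_i := w_i + eta * delta * F_i(s, a)
            w[i] += eta * delta * F[i]
        s, a = s1, a1
    return w
```

Because the next action $a'$ is chosen before the weights are updated, the target is sampled from the policy the agent is actually following, which is what makes the method on-policy.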
Although this program is simple to implement, feature engineering – choosing what features to include – is non-trivial. The linear function must not only convey the best action to carry out, it must also convey the information about what future states are useful.
On the AIspace website, there is an open-source implementation of this algorithm for the game of Example 12.2 with the features of Example 12.6. Step through the algorithm one step at a time, and try to understand how each step updates each parameter. Then run it for a number of steps. Consider the performance using the evaluation measures of Section 12.6. Try to make sense of the values of the parameters learned.
Many variations of this algorithm exist:
This algorithm tends to overfit to current experiences, and to forget about old experiences, so that when it returns to a part of the state space it has not visited recently, it has to relearn. One modification is to remember old experiences ($\langle s, a, r, s' \rangle$ tuples) and to carry out some steps of experience replay, by doing some weight updates based on random previous experiences (see the sketch following this list). Updating the weights requires the use of the next action $a'$, which should be chosen according to the current policy, not the policy that was in effect when the experience occurred. If memory becomes an issue, some of the old experiences can be discarded.
Different function approximations, such as a decision tree with a linear function at the leaves, could be used.
A common variant is to have a separate function for each action. This is equivalent to approximating the Q-function with a decision tree that splits on the action and then has a linear function at each leaf. It is also possible to split on other features.
A linear function approximation can also be combined with other methods such as Q-learning, or model-based methods.
In deep reinforcement learning, a deep learner is used instead of the linear function approximation. This means that the features do not need to be engineered, but can be learned. Deep learning requires a large amount of data and many iterations to learn, and can be sensitive to the architecture provided. A way to handle overfitting, such as regularization, is also required. A minimal sketch of such a learned Q approximator follows the list.
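For the experience-replay variation above, here is a minimal sketch, assuming the `features`, `Q`, and `select_action` pieces of the earlier SARSA_LFA sketch; the buffer size and the number of replayed updates per step are illustrative choices:

```python
import random
from collections import deque

def replay_updates(w, buffer, features, Q, select_action,
                   gamma=0.9, eta=0.01, n_replays=10):
    """Do n_replays weight updates from remembered <s, a, r, s'> tuples.

    The next action a' is re-chosen with the *current* policy, not the
    policy that was in effect when the experience was stored.
    """
    for _ in range(min(n_replays, len(buffer))):
        s, a, r, s1 = random.choice(buffer)
        a1 = select_action(s1)                      # current policy
        delta = r + gamma * Q(s1, a1) - Q(s, a)
        for i, fi in enumerate(features(s, a)):
            w[i] += eta * delta * fi

# In the main SARSA_LFA loop, each experience would be remembered with, e.g.:
#     buffer = deque(maxlen=100000)   # oldest experiences discarded when full
#     buffer.append((s, a, r, s1))
# and replay_updates(...) called after the regular on-line update.
```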
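And for the deep reinforcement learning variation, the linear function of engineered features can be replaced by a small neural network over a raw state encoding. The following sketch uses PyTorch purely for illustration, with an assumed `encode(s, a)` that turns a state-action pair into a tensor; it is not the book's implementation:

```python
import torch
import torch.nn as nn

# A small Q-network replacing Q_w(s, a) = w . F(s, a); the intermediate
# features are learned by the hidden layer rather than engineered by hand.
class QNetwork(nn.Module):
    def __init__(self, input_size, hidden_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),   # a single Q-value for (s, a)
        )

    def forward(self, x):
        return self.net(x)

# One SARSA-style update; encode(s, a) is an assumed state-action encoder.
def sarsa_update(qnet, optimizer, encode, s, a, r, s1, a1, gamma=0.9):
    q_sa = qnet(encode(s, a))
    with torch.no_grad():
        target = r + gamma * qnet(encode(s1, a1))
    loss = (target - q_sa).pow(2).mean()   # squared TD error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```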