Dimensions: flat, states, infinite horizon, fully observable, stochastic, utility, learning, single agent, online, bounded rationality
In Q-learning and related RL algorithms, an agent tries to learn the optimal policy from its history of interaction with the environment. A history of an agent is a sequence of state–action–rewards:

$$\langle s_0, a_0, r_1, s_1, a_1, r_2, s_2, a_2, r_3, s_3, \ldots \rangle$$

which means that the agent was in state $s_0$ and did action $a_0$, which resulted in it receiving reward $r_1$ and being in state $s_1$; then it did action $a_1$, received reward $r_2$, and ended up in state $s_2$; then it did action $a_2$, received reward $r_3$, and ended up in state $s_3$; and so on.
We treat this history of interaction as a sequence of experiences, where an experience is a tuple

$$\langle s, a, r, s' \rangle$$

which means that the agent was in state $s$, it did action $a$, it received reward $r$, and it went into state $s'$. These experiences will be the data from which the agent can learn what to do. As in decision-theoretic planning, the aim is for the agent to maximize its value, which is usually the discounted reward.
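As a small illustration (not from the book), such a history can be stored as a list of state–action–reward triples and sliced into $\langle s, a, r, s'\rangle$ experience tuples; the state and action labels below are arbitrary:

```python
# A minimal sketch: turning a history of (state, action, reward) triples
# into <s, a, r, s'> experience tuples.  Labels are purely illustrative.
history = [
    ("s0", "a0", 1.0),   # in s0, did a0, received reward 1.0 ...
    ("s1", "a1", 0.0),   # ... ending up in s1, where it did a1, and so on
    ("s2", "a2", 2.0),
    ("s3", None, None),  # most recent state; no action chosen yet
]

experiences = []
for (s, a, r), (s_next, _, _) in zip(history, history[1:]):
    experiences.append((s, a, r, s_next))

print(experiences)
# [('s0', 'a0', 1.0, 's1'), ('s1', 'a1', 0.0, 's2'), ('s2', 'a2', 2.0, 's3')]
```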
Recall that $Q^*(s,a)$, where $a$ is an action and $s$ is a state, is the expected value (cumulative discounted reward) of doing $a$ in state $s$ and then following the optimal policy.
Q-learning uses temporal differences to estimate the value of $Q^*(s,a)$. In Q-learning, the agent maintains a table of $Q[S,A]$, where $S$ is the set of states and $A$ is the set of actions. $Q[s,a]$ represents its current estimate of $Q^*(s,a)$.
An experience $\langle s, a, r, s' \rangle$ provides one data point for the value of $Q(s,a)$. The data point is that the agent received the future value of $r + \gamma V(s')$, where $V(s') = \max_{a'} Q[s',a']$; this is the actual current reward plus the discounted estimated future value. This new data point is called a return. The agent can use the temporal difference equation (12.1) to update its estimate for $Q[s,a]$:

$$Q[s,a] := Q[s,a] + \alpha \left(r + \gamma \max_{a'} Q[s',a'] - Q[s,a]\right)$$

or, equivalently,

$$Q[s,a] := (1-\alpha)\, Q[s,a] + \alpha \left(r + \gamma \max_{a'} Q[s',a']\right)$$
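As a rough sketch (not the book's code), this update can be written as a small function; here `q` is assumed to be a dictionary mapping (state, action) pairs to the current estimates $Q[s,a]$:

```python
def q_update(q, s, a, r, s_next, actions, alpha, gamma):
    """One temporal-difference update of Q[s, a] for the experience (s, a, r, s_next).

    q: dict mapping (state, action) pairs to current Q estimates.
    actions: the actions available in s_next.
    """
    future = max(q[(s_next, a2)] for a2 in actions)         # max_{a'} Q[s', a']
    q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])   # TD update
```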
Figure 12.3 shows a Q-learning controller, where the agent is acting and learning at the same time. The $do(a)$ on line 15 specifies that the action $a$ is the command the controller sends to the body. The reward $r$ and the resulting state $s'$ are the percepts the controller receives from the body.
The Q-learner learns (an approximation of) the optimal $Q$-function as long as the agent explores enough and there is no bound on the number of times it tries an action in any state (i.e., it does not always do the same subset of actions in a state).
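To make the acting-and-learning loop concrete, the following is a minimal Python sketch of a Q-learning controller with $\epsilon$-greedy exploration. It is not the pseudocode of Figure 12.3; the environment interface (`env.reset()` returning a start state, `env.step(a)` returning a reward and next state) is assumed for the example:

```python
import random

def q_learning(env, states, actions, gamma=0.9, alpha=0.2, epsilon=0.1, steps=10000):
    """Sketch of a Q-learning controller that acts and learns at the same time.

    Assumes env.reset() returns a start state and env.step(a) returns
    (reward, next_state).  Epsilon-greedy exploration keeps every action
    being tried in every state with nonzero probability.
    """
    q = {(s, a): 0.0 for s in states for a in actions}        # Q initially 0
    s = env.reset()
    for _ in range(steps):
        if random.random() < epsilon:                         # explore
            a = random.choice(list(actions))
        else:                                                 # exploit current estimates
            a = max(actions, key=lambda act: q[(s, act)])
        r, s_next = env.step(a)                               # send command, observe percepts
        future = max(q[(s_next, a2)] for a2 in actions)
        q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])
        s = s_next
    return q
```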
Consider the two-state MDP of Example 9.27. The agent knows there are two states and two actions. It does not know the model and it learns from the $\langle s, a, r, s' \rangle$ experiences. With a fixed discount factor $\gamma$, a fixed step size $\alpha$, and $Q[s,a]$ initially 0, the following is a possible trace (to a few significant digits and with the states and actions abbreviated):
[Table: a trace of $\langle s, a, r, s' \rangle$ experiences and the corresponding updates to $Q[s,a]$.]
With $\alpha$ fixed, the Q-values will approximate, but not converge to, the values obtained with value iteration in Example 9.31. The smaller $\alpha$ is, the closer the estimates get to the actual Q-values, but the slower the convergence.
The controller of Figure 12.3 has $\alpha$ fixed. If $\alpha$ were decreasing appropriately, it would converge to the actual Q-values. To implement this, there needs to be a separate $\alpha$ for each state–action pair, which can be implemented using an array that counts the number of times action $a$ was carried out in state $s$. Before line 17 of Figure 12.3, this count can be incremented and $\alpha$ set to, say, the reciprocal of the count; see Exercise 5.
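A hedged sketch of this modification (variable names are illustrative, not the book's): keep a per state–action count and derive the step size from it before each update.

```python
from collections import defaultdict

counts = defaultdict(int)   # per state-action visit counts, all starting at 0

def q_update_decaying(q, s, a, r, s_next, actions, gamma):
    """Q update with a step size that decreases separately for each (s, a).

    alpha = 1/count is one schedule that decreases appropriately; other
    decreasing schedules also satisfy the convergence conditions.
    """
    counts[(s, a)] += 1
    alpha = 1.0 / counts[(s, a)]                            # per-pair decreasing step size
    future = max(q[(s_next, a2)] for a2 in actions)
    q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])
```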