12 Learning to Act


12.4 Q-learning

Dimensions: flat, states, infinite horizon, fully observable, stochastic, utility, learning, single agent, online, bounded rationality

In Q-learning and related RL algorithms, an agent tries to learn the optimal policy from its history of interaction with the environment. A history of an agent is a sequence of state–action–rewards:

s_0, a_0, r_1, s_1, a_1, r_2, s_2, a_2, r_3, s_3, a_3, r_4, s_4, ...

which means that the agent was in state s_0 and did action a_0, which resulted in it receiving reward r_1 and being in state s_1; it then did action a_1, received reward r_2, and ended up in state s_2; then it did action a_2, received reward r_3, and ended up in state s_3; and so on.

We treat this history of interaction as a sequence of experiences, where an experience is a tuple

⟨s, a, r, s'⟩

which means that the agent was in state s, did action a, received reward r, and ended up in state s'. These experiences will be the data from which the agent can learn what to do. As in decision-theoretic planning, the aim is for the agent to maximize its value, which is usually the discounted reward.
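
For example, a history could be stored as a list of such tuples. The following Python sketch (the Experience type and the particular states and actions are illustrative, not part of the text) shows one way to represent experiences:

from typing import NamedTuple

class Experience(NamedTuple):
    """One step of interaction: in state s the agent did a, received reward r, and reached s_prime."""
    s: str
    a: str
    r: float
    s_prime: str

# Consecutive steps of the history s_0, a_0, r_1, s_1, a_1, r_2, s_2, ... become
# the experiences (s_0, a_0, r_1, s_1), (s_1, a_1, r_2, s_2), ...
experiences = [
    Experience("healthy", "relax", 7, "healthy"),
    Experience("healthy", "party", 10, "sick"),
]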

Recall that Q*(s,a), where a is an action and s is a state, is the expected value (cumulative discounted reward) of doing a in state s and then following the optimal policy.

Q-learning uses temporal differences to estimate the value of Q*(s,a). In Q-learning, the agent maintains a table of Q[S,A], where S is the set of states and A is the set of actions. Q[s,a] represents its current estimate of Q*(s,a).

An experience ⟨s,a,r,s'⟩ provides one data point for the value of Q(s,a). The data point is that the agent received the future value r + γ*V(s'), where V(s') = max_{a'} Q(s',a'); this is the actual current reward plus the discounted estimated future value. This new data point is called a return. The agent can use the temporal difference equation (12.1) to update its estimate for Q(s,a):

Q[s,a] := Q[s,a] + α * (r + γ * max_{a'} Q[s',a'] - Q[s,a])

or, equivalently,

Q[s,a] := (1-α) * Q[s,a] + α * (r + γ * max_{a'} Q[s',a']).
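
In code, one such update might look as follows. This is a minimal sketch, assuming Q is a dictionary indexed by state–action pairs and actions(s') gives the actions available in s'; these names are assumptions, not part of the text:

def q_update(Q, s, a, r, s_prime, actions, alpha, gamma):
    # V(s') = max_{a'} Q[s', a']: current estimate of the value of the resulting state
    v_next = max(Q[(s_prime, a2)] for a2 in actions(s_prime))
    # Move Q[s, a] a fraction alpha of the way towards the return r + gamma * V(s')
    Q[(s, a)] += alpha * (r + gamma * v_next - Q[(s, a)])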

Figure 12.3 shows a Q-learning controller, where the agent is acting and learning at the same time. The do(a) on line 15 specifies that the action a is the command the controller sends to the body. The reward and the resulting state are the percepts the controller receives from the body.

1: controller Q-learning(S,A,γ,α)
2:      Inputs
3:          S is a set of states
4:          A is a set of actions
5:          γ is the discount
6:          α is the step size      
7:      Local
8:          real array Q[S,A]
9:          states s, s'
10:          action a      
11:      initialize Q[S,A] arbitrarily
12:      observe current state s
13:      repeat
14:          select an action a
15:          do(a)
16:          observe reward r and state s'
17:          Q[s,a] := Q[s,a] + α * (r + γ * max_{a'} Q[s',a'] - Q[s,a])
18:          s := s'
19:      until termination
Figure 12.3: Q-learning controller

The Q-learner learns (an approximation of) the optimal Q-function as long as the agent explores sufficiently: there must be no bound on the number of times it tries each action in each state (i.e., the agent cannot settle into always doing the same subset of actions in a state).
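
For concreteness, the controller of Figure 12.3 can be sketched in Python as below. The environment interface (env.current_state() and env.do(a), which returns the reward and the resulting state) and the ε-greedy choice on line 14 are assumptions made for this sketch, not part of the figure; any selection rule that keeps trying every action in every state meets the exploration requirement above.

import random

def q_learning(env, states, actions, gamma, alpha, epsilon=0.1, steps=10000):
    # Line 11: initialize Q[S, A] arbitrarily (here, to zero);
    # actions is assumed to be a list of actions available in every state.
    Q = {(s, a): 0.0 for s in states for a in actions}
    # Line 12: observe current state s (env.current_state() is an assumed interface)
    s = env.current_state()
    for _ in range(steps):                      # line 13: repeat
        # Line 14: select an action a (here, epsilon-greedy on the current Q estimates)
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        # Lines 15-16: do(a), then observe reward r and resulting state s'
        r, s_prime = env.do(a)
        # Line 17: temporal-difference update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_prime, a2)] for a2 in actions)
                              - Q[(s, a)])
        s = s_prime                             # line 18: s := s'
    return Q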

Example 12.3.

Consider the two-state MDP of Example 9.27. The agent knows there are two states {healthy, sick} and two actions {relax, party}. It does not know the model and it learns from the ⟨s,a,r,s'⟩ experiences. With discount γ=0.8, step size α=0.3, and Q initially 0, the following is a possible trace (to a few significant digits, with the states and actions abbreviated to he, si, re, and pa):

s    a    r    s'   Update = (1-α)*Q[s,a] + α*(r + γ*max_{a'} Q[s',a'])
he   re   7    he   Q[he,re] = 0.7*0 + 0.3*(7 + 0.8*0) = 2.1
he   re   7    he   Q[he,re] = 0.7*2.1 + 0.3*(7 + 0.8*2.1) = 4.07
he   pa   10   he   Q[he,pa] = 0.7*0 + 0.3*(10 + 0.8*4.07) = 3.98
he   pa   10   si   Q[he,pa] = 0.7*3.98 + 0.3*(10 + 0.8*0) = 5.79
si   pa   2    si   Q[si,pa] = 0.7*0 + 0.3*(2 + 0.8*0) = 0.6
si   re   0    si   Q[si,re] = 0.7*0 + 0.3*(0 + 0.8*0.6) = 0.144
si   re   0    he   Q[si,re] = 0.7*0.144 + 0.3*(0 + 0.8*5.79) = 1.49
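
These updates can be reproduced with a few lines of Python that apply the update rule directly to the trace (the tuple representation is illustrative; the printed values match the table up to rounding of intermediate results):

gamma, alpha = 0.8, 0.3
actions = ["re", "pa"]
Q = {(s, a): 0.0 for s in ["he", "si"] for a in actions}

# The trace above as (s, a, r, s') tuples
trace = [("he", "re", 7, "he"), ("he", "re", 7, "he"),
         ("he", "pa", 10, "he"), ("he", "pa", 10, "si"),
         ("si", "pa", 2, "si"), ("si", "re", 0, "si"),
         ("si", "re", 0, "he")]

for s, a, r, s_prime in trace:
    Q[(s, a)] = ((1 - alpha) * Q[(s, a)]
                 + alpha * (r + gamma * max(Q[(s_prime, a2)] for a2 in actions)))
    print(s, a, r, s_prime, round(Q[(s, a)], 3))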

With α fixed, the Q-values will approximate, but not converge to, the values obtained with value iteration in Example 9.31. The smaller α is, the closer the estimates get to the actual Q-values, but the slower the learning.

The controller of Figure 12.3 has α fixed. If α_k were decreased appropriately, the estimates would converge to the actual Q-values. To implement this, there needs to be a separate α_k for each state–action pair, which can be implemented using an array visits[S,A] that counts the number of times action a has been carried out in state s. Before line 17 of Figure 12.3, visits[s,a] can be incremented and α set to, say, 10/(9+visits[s,a]); see Exercise 5.
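
One way to sketch this modification in Python (the counter and the helper name are illustrative; the schedule 10/(9+visits[s,a]) is the one suggested above):

from collections import defaultdict

visits = defaultdict(int)   # visits[(s, a)]: number of times action a has been done in state s

def step_size(s, a):
    # Called before the update on line 17: increment the count and return alpha_k
    visits[(s, a)] += 1
    return 10 / (9 + visits[(s, a)])

On line 17 of Figure 12.3, α is then replaced by step_size(s,a), giving each state–action pair its own decreasing step size.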