13.13 Exercises

Exercise 13.1.

Explain how Q-learning fits in with the agent architecture of Section 2.1.1. Suppose that the Q-learning agent has discount factor γ, a step size of α, and is carrying out an ϵ-greedy exploration strategy.

  • (a)

    What are the components of the belief state of the Q-learning agent?

  • (b)

    What are the percepts?

  • (c)

    What is the command function of the Q-learning agent?

  • (d)

    What is the belief-state transition function of the Q-learning agent?

Exercise 13.2.

Suppose a Q-learning agent, with fixed α and discount γ, was in state 34, did action 7, received reward 3, and ended up in state 65. What value(s) get updated? Give an expression for the new value. (Be as specific as possible.)
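
For reference, the general form of the tabular Q-learning update is sketched below in Python; the dictionary representation and the function name are illustrative, not AIPython's API, and the exercise asks for the specific instance of this update.

```python
# Minimal sketch of the tabular Q-learning update (illustrative, not AIPython's API).
def q_update(Q, actions, s, a, r, s_next, alpha, gamma):
    """Update Q[(s, a)] after doing action a in state s, receiving reward r,
    and ending up in state s_next.  Q maps (state, action) pairs to values;
    actions(state) returns the actions available in that state."""
    old = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions(s_next))
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
```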

Exercise 13.3.

For the plot of the total reward as a function of time, as in Figure 13.4, the minimum and the zero crossing are meaningful statistics only when balancing positive and negative rewards is reasonable behavior. Suggest what should replace these statistics when zero reward is not an appropriate definition of reasonable behavior. [Hint: Think about the cases that have only positive reward or only negative reward.]

Exercise 13.4.

Compare different parameter settings for Q-learning on the game of Example 13.2 (the “monster game” in AIPython (aipython.org)). In particular, compare the following situations:

  • (i)

    step_size(c)=1/c and the Q-values are initialized to 0.0.

  • (ii)

    step_size(c)=10/(9+c), so α still varies but decays more slowly than in (i), and the Q-values are initialized to 0.0.

  • (iii)

    α varies (using whichever of (i) and (ii) is better) and the Q-values are initialized to 5.0.

  • (iv)

    α is fixed to 0.1 and the Q-values are initialized to 0.0.

  • (v)

    α is fixed to 0.1 and the Q-values are initialized to 5.0.

  • (vi)

    Some other parameter settings.

For each of these, carry out multiple runs and compare the following statistics (a sketch of how they might be computed from a single run appears after this exercise):

  • (a)

    the distributions of minimum values

  • (b)

    the zero crossing

  • (c)

    the asymptotic slope for the policy that includes exploration

  • (d)

    the asymptotic slope for the policy that does not include exploration (to test this, after the algorithm has explored, set the exploitation parameter to 100% and run additional steps).

Which of these settings would you recommend? Why?
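
The statistics in (a)–(d) can be read off a run's cumulative-reward curve. The Python helper below is a rough sketch: the function name, the tail-based slope estimate, and the particular definition of the zero crossing are choices made here for illustration, not AIPython's.

```python
# Sketch: summary statistics of one run, given the total reward accumulated up to
# each step as a list (assumes the run is reasonably long).
def run_statistics(cumulative_rewards, tail_fraction=0.1):
    minimum = min(cumulative_rewards)
    argmin = cumulative_rewards.index(minimum)
    # first step at or after the minimum where the total reward is back to zero (None if never)
    zero_crossing = next((t for t in range(argmin, len(cumulative_rewards))
                          if cumulative_rewards[t] >= 0), None)
    # slope over the last tail_fraction of the run approximates the asymptotic slope
    tail = max(2, int(len(cumulative_rewards) * tail_fraction))
    slope = (cumulative_rewards[-1] - cumulative_rewards[-tail]) / (tail - 1)
    return minimum, zero_crossing, slope
```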

Exercise 13.5.

For the following reinforcement learning algorithms (the exploitation percentages and softmax selection mentioned below are sketched in code after this exercise):

  • (i)

    Q-learning with fixed α and 80% exploitation.

  • (ii)

    Q-learning with αk=1/k and 80% exploitation.

  • (iii)

    Q-learning with αk=1/k and 100% exploitation.

  • (iv)

    SARSA learning with αk=1/k and 80% exploitation.

  • (v)

    SARSA learning with αk=1/k and 100% exploitation.

  • (vi)

    Feature-based SARSA learning with softmax action selection.

  • (vii)

    A model-based reinforcement learner with 50% exploitation.

  • (a)

    Which of the reinforcement learning algorithms will find the optimal policy, given enough time?

  • (b)

    Which ones will actually follow the optimal policy?
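
As a reminder of what the action-selection terms above refer to, here are rough Python sketches of “80% exploitation” and softmax action selection; the function names, the dictionary-based Q, and the temperature parameter are illustrative assumptions, not AIPython's API.

```python
import math
import random

# Q maps (state, action) pairs to values; actions(s) lists the actions available in s.

def select_exploit(Q, actions, s, exploit=0.8):
    """With probability exploit pick a best action, otherwise a random one
    (so "80% exploitation" corresponds to exploit = 0.8)."""
    if random.random() < exploit:
        return max(actions(s), key=lambda a: Q.get((s, a), 0.0))
    return random.choice(actions(s))

def select_softmax(Q, actions, s, temperature=1.0):
    """Softmax (Boltzmann) selection: higher-valued actions are exponentially
    more likely, with the spread controlled by the temperature."""
    acts = actions(s)
    weights = [math.exp(Q.get((s, a), 0.0) / temperature) for a in acts]
    return random.choices(acts, weights=weights)[0]
```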

Exercise 13.6.

Consider four different ways to derive the value of αk from k in Q-learning, sketched in code after this exercise (note that for Q-learning with varying αk, there must be a different count k for each state–action pair).

  • (i)

    Let αk=1/k.

  • (ii)

    Let αk=10/(9+k).

  • (iii)

    Let αk=0.1.

  • (iv)

    Let αk=0.1 for the first 10,000 steps, αk=0.01 for the next 10,000 steps, αk=0.001 for the next 10,000 steps, αk=0.0001 for the next 10,000 steps, and so on.

  • (a)

    Which of these will converge to the true Q-value in theory?

  • (b)

    Which converges to the true Q-value in practice (i.e., in a reasonable number of steps)? Try it for more than one domain.

  • (c)

    Which are able to adapt if the environment changes slowly?
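
The four schedules can be compared by swapping a single step-size function; the sketch below is illustrative (the function names are not AIPython's, and k is the per-state–action update count, starting at 1).

```python
# Sketches of the step-size schedules (i)-(iv).
def alpha_i(k):
    return 1 / k

def alpha_ii(k):
    return 10 / (9 + k)

def alpha_iii(k):
    return 0.1

def alpha_iv(k):
    # 0.1 for the first 10,000 updates, 0.01 for the next 10,000, and so on
    return 0.1 / (10 ** ((k - 1) // 10000))
```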

Exercise 13.7.

The model-based reinforcement learner allows for a different form of optimism in the face of uncertainty. The algorithm can be started with each state having a transition to a “nirvana” state, which has a very high Q-value (but which will never be reached in practice, and so its estimated probability will shrink to zero). A minimal sketch of this setup appears after the exercise.

  • (a)

    Does this perform differently than initializing all Q-values to a high value? Does it work better, worse, or the same?

  • (b)

    How high does the Q-value for the nirvana state need to be to work most effectively? Suggest a reason why one value might be good, and test it.

  • (c)

    Could this method be used for the other RL algorithms? Explain how or why not.
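
One way to realize the nirvana construction, assuming a model-based learner that keeps transition counts and a table of Q-values (everything below, including the pseudo-count of 1 and the value 100.0, is an illustrative assumption rather than AIPython's code):

```python
NIRVANA = "nirvana"   # absorbing pseudo-state that is never actually observed
NIRVANA_Q = 100.0     # its fixed, very high value; part (b) asks how high it should be

def optimistic_init(states, actions):
    """Give every state-action pair one imagined transition to the nirvana state.
    The estimated P(nirvana | s, a) starts at 1 and shrinks towards 0 as real
    transitions are counted; while it is non-negligible, Q(s, a) is pulled up
    towards gamma * NIRVANA_Q, which encourages trying (s, a)."""
    counts = {(s, a): {NIRVANA: 1} for s in states for a in actions}
    Q = {(s, a): 0.0 for s in states for a in actions}
    return counts, Q
```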

Exercise 13.8.

The grid game of Example 13.6 included features for the x-distance to the current treasure and the y-distance to the current treasure. Chris thought that these were not useful because they do not depend on the action. Do these features make a difference? Explain why they might or might not. Do they make a difference in practice?

Exercise 13.9.

In SARSA with linear function approximation, using linear regression to minimize the error r+γQw¯(s′,a′)−Qw¯(s,a) gives a different algorithm than the one in Figure 13.8. Explain what you get and why what is described in the text may be preferable (or not). [Hint: what should the weights be adjusted to better estimate?]
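
To make the difference concrete, suppose Qw¯(s,a) is the dot product of a weight vector w and a feature vector F(s,a), and assume Figure 13.8 does the usual SARSA update w := w + α·δ·F(s,a) with δ = r+γQw¯(s′,a′)−Qw¯(s,a), treating the target as fixed. Gradient descent on the squared error instead also differentiates through the Qw¯(s′,a′) term. The Python sketch below shows both updates side by side; the function and argument names are illustrative.

```python
# Sketch contrasting two weight updates for linear Q(s, a) = w . F(s, a).
# F(s, a) returns a list of feature values; w is a list of weights.
def sarsa_weight_updates(w, F, s, a, r, s_next, a_next, alpha, gamma):
    f, f_next = F(s, a), F(s_next, a_next)
    q = sum(wi * fi for wi, fi in zip(w, f))
    q_next = sum(wi * fi for wi, fi in zip(w, f_next))
    delta = r + gamma * q_next - q                      # TD error

    # (1) update with the target held fixed: gradient only through Q(s, a)
    w_fixed_target = [wi + alpha * delta * fi for wi, fi in zip(w, f)]

    # (2) full gradient of the squared error (r + gamma*Q(s',a') - Q(s,a))**2:
    #     the Q(s', a') term is differentiated as well
    w_full_gradient = [wi + alpha * delta * (fi - gamma * fni)
                       for wi, fi, fni in zip(w, f, f_next)]
    return w_fixed_target, w_full_gradient
```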

Exercise 13.10.

In Example 13.6, some of the features are perfectly correlated (e.g., F6 and F7). Does having such correlated features affect which functions can be represented? Does it help or hurt the speed at which learning occurs? Test this empirically on some examples.