12 Learning to Act


12.13 Exercises

  1. Explain how Q-learning fits in with the agent architecture of Section 2.2.1. Suppose that the Q-learning agent has discount factor γ, a step size of α, and is carrying out an ϵ-greedy exploration strategy.

     (a) What are the components of the belief state of the Q-learning agent?

     (b) What are the percepts?

     (c) What is the command function of the Q-learning agent?

     (d) What is the belief-state transition function of the Q-learning agent?

  2. For the plot of the total reward as a function of time, as in Figure 12.4, the minimum and zero crossing are only meaningful statistics when balancing positive and negative rewards is reasonable behavior. Suggest what should replace these statistics when zero reward is not an appropriate definition of reasonable behavior. [Hint: Think about the cases that have only positive reward or only negative reward.]

  3. Compare the different parameter settings for the game of Example 12.2. In particular, compare the following situations:

     (a) α varies, and the Q-values are initialized to 0.0

     (b) α varies, and the Q-values are initialized to 5.0

     (c) α is fixed to 0.1, and the Q-values are initialized to 0.0

     (d) α is fixed to 0.1, and the Q-values are initialized to 5.0

     (e) Some other parameter settings.

     For each of these, carry out multiple runs and compare:

     (i) the distributions of minimum values

     (ii) the zero crossing

     (iii) the asymptotic slope for the policy that includes exploration

     (iv) the asymptotic slope for the policy that does not include exploration. To test this, after the algorithm has explored, set the exploitation parameter to 100% and run additional steps.

     Which of these settings would you recommend? Why?
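
     One way such a comparison might be set up is sketched below. The environment interface (`reset()` and `step(action)` returning a next state and a reward), the function names, and the default parameters are assumptions for illustration, not the book's code; the monster game of Example 12.2 would be supplied as `env`, and `fixed_alpha=None` reads "α varies" as αk=1/k per state–action pair.

     ```python
     # Sketch of an experiment harness for comparing Q-learning parameter settings.
     import random
     from collections import defaultdict

     def run_q_learning(env, actions, steps=200000, gamma=0.9, epsilon=0.2,
                        fixed_alpha=None, q_init=0.0):
         """Run one Q-learning trial; return the cumulative-reward trace."""
         Q = defaultdict(lambda: q_init)   # Q[(state, action)], optimistic if q_init > 0
         counts = defaultdict(int)         # per-pair counts, used when alpha varies as 1/k
         trace, total = [], 0.0
         s = env.reset()
         for _ in range(steps):
             # epsilon-greedy action selection
             if random.random() < epsilon:
                 a = random.choice(actions)
             else:
                 a = max(actions, key=lambda act: Q[(s, act)])
             s_next, r = env.step(a)
             counts[(s, a)] += 1
             alpha = fixed_alpha if fixed_alpha is not None else 1 / counts[(s, a)]
             best_next = max(Q[(s_next, act)] for act in actions)
             Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
             total += r
             trace.append(total)
             s = s_next
         return trace

     def summarize(trace):
         """Minimum of the cumulative-reward plot and a simple zero-crossing
         estimate (step after the last time the accumulated reward is negative)."""
         minimum = min(trace)
         negatives = [i for i, v in enumerate(trace) if v < 0]
         zero_crossing = (negatives[-1] + 1) if negatives else 0
         return minimum, zero_crossing
     ```

     Repeating `run_q_learning` for each setting and collecting `summarize` over many runs gives the distributions of minimums and zero crossings asked for above.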

  4. For the following reinforcement learning algorithms:

     (a) Q-learning with fixed α and 80% exploitation.

     (b) Q-learning with αk=1/k and 80% exploitation.

     (c) Q-learning with αk=1/k and 100% exploitation.

     (d) SARSA learning with αk=1/k and 80% exploitation.

     (e) SARSA learning with αk=1/k and 100% exploitation.

     (f) Feature-based SARSA learning with soft-max action selection.

     (g) A model-based reinforcement learner with 50% exploitation.

     (i) Which of the reinforcement learning algorithms will find the optimal policy, given enough time?

     (ii) Which ones will actually follow the optimal policy?

  5. Consider four different ways to derive the value of αk from k in Q-learning (note that for Q-learning with varying αk, there must be a different count k for each state–action pair).

     (a) Let αk=1/k.

     (b) Let αk=10/(9+k).

     (c) Let αk=0.1.

     (d) Let αk=0.1 for the first 10,000 steps, αk=0.01 for the next 10,000 steps, αk=0.001 for the next 10,000 steps, αk=0.0001 for the next 10,000 steps, and so on.

     (i) Which of these will converge to the true Q-value in theory?

     (ii) Which converges to the true Q-value in practice (i.e., in a reasonable number of steps)? Try it for more than one domain.

     (iii) Which are able to adapt if the environment changes slowly?
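
     As a point of reference, the four schedules can be written as functions of k, the count for the current state–action pair. This is a minimal sketch; the function names are mine, and schedule (d) reads "steps" as updates of that pair's count k.

     ```python
     # The four step-size schedules as functions of k, the update count for the
     # current state-action pair (k = 1 on that pair's first update).

     def alpha_a(k):   # (a) harmonic decay
         return 1 / k

     def alpha_b(k):   # (b) also starts at 1.0 but decays more slowly than 1/k
         return 10 / (9 + k)

     def alpha_c(k):   # (c) constant step size
         return 0.1

     def alpha_d(k):   # (d) divide the step size by 10 after every 10,000 updates
         return 0.1 / (10 ** ((k - 1) // 10000))
     ```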

  6. The model-based reinforcement learner allows for a different form of optimism in the face of uncertainty. The algorithm can be started with each state having a transition to a “nirvana” state, which has a very high Q-value (but which will never be reached in practice, and so the probability will shrink to zero).

     (a) Does this perform differently than initializing all Q-values to a high value? Does it work better, worse, or the same?

     (b) How high does the Q-value for the nirvana state need to be to work most effectively? Suggest a reason why one value might be good, and test it.

     (c) Could this method be used for the other RL algorithms? Explain how or why not.
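
     One way to set this up in a count-based model-based learner is sketched below. The representation (transition pseudo-counts `T`, average-reward estimates `R`, value estimates `V`) and the constant 100.0 are assumptions for illustration, not the book's code.

     ```python
     # Seed every state-action pair with one pseudo-observation of reaching an
     # absorbing "nirvana" state whose value is pinned to a high constant. Real
     # observed transitions accumulate and gradually swamp this pseudo-count.
     from collections import defaultdict

     NIRVANA = "nirvana"
     NIRVANA_VALUE = 100.0        # part (b) asks how high this needs to be

     def init_transition_counts(states, actions):
         T = {(s, a): defaultdict(int) for s in states for a in actions}
         for sa in T:
             T[sa][NIRVANA] = 1   # optimistic pseudo-transition
         return T

     def q_estimate(T, R, V, s, a, gamma):
         """Model-based Q estimate using the empirical transition probabilities."""
         total = sum(T[(s, a)].values())
         expected_next = sum(
             (count / total) * (NIRVANA_VALUE if s2 == NIRVANA else V[s2])
             for s2, count in T[(s, a)].items())
         return R[(s, a)] + gamma * expected_next
     ```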

  7. Included in the features for the grid game of Example 12.6 are features that give the x-distance to the current treasure and features that give the y-distance to the current treasure. Chris thought that these were not useful because they do not depend on the action. Do these features make a difference? Explain why they might or might not. Do they make a difference in practice?
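
     To make the distinction concrete, here is a sketch of the two kinds of features for a linear approximation of Q(s,a) as a weighted sum of features; the state and action encodings are invented for illustration and are not the book's feature definitions.

     ```python
     # Two kinds of features for a linear Q(s,a) approximation.

     def f_x_distance(state, action):
         """Action-independent feature: the x-distance to the current treasure
         (the kind of feature Chris questions)."""
         return abs(state["agent_x"] - state["treasure_x"])

     def f_moves_closer_x(state, action):
         """Action-dependent feature: 1 if the action reduces the x-distance."""
         dx = {"left": -1, "right": 1}.get(action, 0)
         before = abs(state["agent_x"] - state["treasure_x"])
         after = abs(state["agent_x"] + dx - state["treasure_x"])
         return 1 if after < before else 0
     ```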

  8. In SARSA with linear function approximation, using linear regression to minimize r+γQw¯(s′,a′)-Qw¯(s,a) gives a different algorithm from the one in Figure 12.7. Explain what you get and why what is described in the text may be preferable (or not).
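
     For comparison, the two weight updates can be written side by side. This sketch uses my own notation (feature vectors as plain lists, η the step size) and assumes Figure 12.7 uses the usual SARSA-with-linear-function-approximation update that treats the target as a constant.

     ```python
     # Both updates use the same TD error:
     #   delta = r + gamma * Q_w(s', a') - Q_w(s, a)
     # The first treats the target r + gamma*Q_w(s',a') as fixed; plain linear
     # regression on the squared error also differentiates through that term.

     def q_value(w, feats):
         return sum(wi * fi for wi, fi in zip(w, feats))

     def sarsa_lfa_update(w, eta, gamma, r, f_sa, f_next):
         """Update with the target held fixed (the style of Figure 12.7)."""
         delta = r + gamma * q_value(w, f_next) - q_value(w, f_sa)
         return [wi + eta * delta * fi for wi, fi in zip(w, f_sa)]

     def regression_update(w, eta, gamma, r, f_sa, f_next):
         """Gradient descent on delta**2, differentiating through both Q terms."""
         delta = r + gamma * q_value(w, f_next) - q_value(w, f_sa)
         return [wi + eta * delta * (fi - gamma * fni)
                 for wi, fi, fni in zip(w, f_sa, f_next)]
     ```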

  9. In Example 12.6, some of the features are perfectly correlated (e.g., F6 and F7). Does having such correlated features affect which functions can be represented? Does it help or hurt the speed at which learning occurs? Test this empirically on some examples.

  10. Consider the policy improvement algorithm. At equilibrium the values of the most-preferred actions should be equal. Propose, implement and evaluate an algorithm where the policy does not change very much when the values of the most-preferred actions are close. [Hint: Consider having the probability of all actions change in proportion to the distance from the best action and use a temperature parameter in the definition of distance.]
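
      One possible reading of the hint, as a sketch (the update rule, the temperature handling, and the names are mine): move the stochastic policy a small step toward a soft-max distribution over the action values, so actions whose values are close to the best action's keep nearly equal probability.

      ```python
      # Move the policy toward a Boltzmann (soft-max) distribution over the
      # action values; with near-equal top values the target is near-uniform
      # over them, so the policy changes little. "temperature" scales the
      # distance from the best action's value.
      import math

      def softmax_target(q_values, temperature):
          best = max(q_values.values())
          exps = {a: math.exp((q - best) / temperature) for a, q in q_values.items()}
          z = sum(exps.values())
          return {a: e / z for a, e in exps.items()}

      def improve_policy(pi, q_values, temperature=1.0, step=0.1):
          """One soft policy-improvement step: blend pi toward the soft-max target."""
          target = softmax_target(q_values, temperature)
          return {a: (1 - step) * pi[a] + step * target[a] for a in pi}
      ```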