
11.4 Review

The following are the main points you should have learned from this chapter:

  • EM is an iterative method to learn the parameters of models with hidden variables (including the case in which the classification is hidden); a minimal EM sketch appears after this list.
  • The probabilities and the structure of belief networks can be learned from complete data. The probabilities can be derived from counts, as in the counting sketch below. The structure can be learned by searching for the best model given the data.
  • Missing values in examples are often not missing at random. Determining why they are missing is important, because the mechanism of missingness affects what can legitimately be inferred from the data.
  • A Markov decision process is an appropriate formalism for reinforcement learning. A common method is to learn an estimate of the value of doing each action in a state, as represented by the Q(S,A) function.
  • In reinforcement learning, an agent should trade off exploiting its knowledge and exploring to improve its knowledge, for example by selecting a random action a small fraction of the time (an epsilon-greedy policy, shown in the sketch below).
  • Off-policy learning, such as Q-learning, learns the value of the optimal policy. On-policy learning, such as SARSA, learns the value of the policy the agent is actually carrying out (which includes the exploration); both update rules appear in the sketch below.
  • Model-based reinforcement learning separates learning the dynamics and reward models from the decision-theoretic planning of what to do given the models; a model-based sketch closes this section.
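
To make the first point concrete, here is a minimal EM sketch for a naive Bayes model whose class variable is hidden (a mixture of Bernoullis over binary features). The function name em_bernoulli_mixture, the smoothing constants, and the fixed iteration count are illustrative choices, not anything prescribed by the chapter:

    import numpy as np

    def em_bernoulli_mixture(X, k, iters=50, seed=0):
        """EM for a naive Bayes model with a hidden class variable.
        X is an (n, d) array of binary features; k is the number of classes.
        Returns the class priors P(C) and per-class probabilities P(X_j=1 | C)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        pi = np.full(k, 1.0 / k)                 # P(C = c)
        theta = rng.uniform(0.25, 0.75, (k, d))  # P(X_j = 1 | C = c)
        for _ in range(iters):
            # E-step: responsibilities r[i, c] = P(C = c | x_i)
            log_p = (np.log(pi)
                     + X @ np.log(theta).T
                     + (1 - X) @ np.log(1 - theta).T)
            log_p -= log_p.max(axis=1, keepdims=True)  # numerical stability
            r = np.exp(log_p)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters from the expected counts
            nc = r.sum(axis=0)
            pi = nc / n
            theta = (r.T @ X + 1) / (nc[:, None] + 2)  # Laplace smoothing
        return pi, theta

Each iteration computes a distribution over the hidden class for every example (E-step), then treats those distributions as fractional counts to re-estimate the parameters (M-step), which matches the expected-counts view of EM.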
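For learning the probabilities of a belief network from complete data, the conditional probability of a variable given its parents can be estimated directly from counts, with pseudocounts so that no value gets probability zero. The helper below (cpt_from_counts and its arguments are illustrative names) assumes each example is a dictionary from variable names to values:

    from collections import Counter
    from itertools import product

    def cpt_from_counts(examples, child, parents, domains, pseudocount=1.0):
        """Estimate P(child | parents) from complete data by counting.
        examples: list of {variable: value} dictionaries.
        domains: {variable: list of possible values}.
        Pseudocounts avoid assigning zero probability to unseen values."""
        counts = Counter()
        for e in examples:
            pa = tuple(e[p] for p in parents)
            counts[(pa, e[child])] += 1
        cpt = {}
        for pa in product(*(domains[p] for p in parents)):
            total = sum(counts[(pa, v)] + pseudocount for v in domains[child])
            cpt[pa] = {v: (counts[(pa, v)] + pseudocount) / total
                       for v in domains[child]}
        return cpt

For example, cpt_from_counts(examples, 'B', ['A'], domains) returns, for each value of A, the smoothed empirical distribution over B.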
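The reinforcement learning points can be summarized in a few lines of code. The sketch below shows an epsilon-greedy action choice, the off-policy Q-learning update, and the on-policy SARSA update side by side; the only difference between the two updates is which next action they bootstrap from. Function names and the toy usage at the end are illustrative:

    import random
    from collections import defaultdict

    def epsilon_greedy(Q, s, actions, epsilon):
        """Explore with probability epsilon; otherwise exploit the estimates."""
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    def q_learning_step(Q, s, a, r, s2, actions, alpha, gamma):
        """Off-policy: bootstrap from the best action in the next state,
        so Q converges to the value of the optimal policy."""
        best_next = max(Q[(s2, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    def sarsa_step(Q, s, a, r, s2, a2, alpha, gamma):
        """On-policy: bootstrap from the action actually chosen next,
        so Q reflects the policy being followed, exploration included."""
        Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])

    # Toy usage with hypothetical states and actions:
    Q = defaultdict(float)
    a = epsilon_greedy(Q, 's0', ['a0', 'a1'], epsilon=0.1)
    q_learning_step(Q, 's0', a, 1.0, 's1', ['a0', 'a1'], alpha=0.1, gamma=0.9)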
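Finally, a model-based learner keeps counts of observed transitions and running averages of rewards, and plans against that learned model. The sketch below (update_model and plan are illustrative names) uses value iteration as the planning step:

    def update_model(counts, rewards, s, a, r, s2):
        """Record one experience: a transition count for P(s' | s, a)
        and a running total for the average reward R(s, a)."""
        counts.setdefault((s, a), {})
        counts[(s, a)][s2] = counts[(s, a)].get(s2, 0) + 1
        total, n = rewards.get((s, a), (0.0, 0))
        rewards[(s, a)] = (total + r, n + 1)

    def plan(counts, rewards, states, actions, gamma=0.9, iters=100):
        """Value iteration on the learned model: the decision-theoretic
        planning half of model-based reinforcement learning."""
        V = {s: 0.0 for s in states}
        for _ in range(iters):
            for s in states:
                vals = []
                for a in actions:
                    c = counts.get((s, a), {})
                    n = sum(c.values())
                    if n == 0:
                        continue  # no experience with (s, a) yet
                    total, m = rewards[(s, a)]
                    exp_next = sum(cnt * V[s2] for s2, cnt in c.items()) / n
                    vals.append(total / m + gamma * exp_next)
                if vals:
                    V[s] = max(vals)
        return V

Separating the two halves means the agent can replan from the same learned model whenever its estimates change, rather than relearning values from raw experience.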