foundations of computational agents
The main points you should have learned from this chapter are:
Bayes’ rule provides a way to incorporate prior knowledge into learning and a way to trade off fit-to-data and model complexity.
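This use of Bayes' rule can be sketched with a small example. Here two hypothetical models of a coin compete: a "fair" model and a "biased" model; the prior encodes a preference for the simpler fair model, the likelihood measures fit-to-data, and the posterior trades the two off. The model names, priors, and observed counts are assumptions for illustration, not from the text.

```python
# Bayes' rule over two candidate models of a coin:
# P(model | data) is proportional to P(model) * P(data | model).

def posterior(priors, likelihoods):
    """Return the normalized posterior P(model | data)."""
    unnorm = {m: priors[m] * likelihoods[m] for m in priors}
    z = sum(unnorm.values())
    return {m: p / z for m, p in unnorm.items()}

# Assumed observations: 8 heads, 2 tails.
heads, tails = 8, 2

priors = {"fair": 0.7, "biased": 0.3}  # prior favors the simpler model
likelihoods = {
    "fair": 0.5 ** heads * 0.5 ** tails,    # fair coin: P(heads) = 0.5
    "biased": 0.8 ** heads * 0.2 ** tails,  # biased coin: P(heads) = 0.8
}

post = posterior(priors, likelihoods)
```

Even with a prior of 0.7 on the fair model, ten tosses with eight heads fit the biased model well enough that its posterior dominates.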
EM and k-means are iterative methods to learn the parameters of models with hidden variables (including the case in which the classification is hidden).
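The iterate-until-convergence shape of these algorithms can be seen in a minimal k-means sketch on one-dimensional data. It alternates an assignment step (each point joins the nearest center's cluster) and an update step (each center moves to the mean of its cluster); the data, k = 2, and the initial centers are assumptions for illustration.

```python
def kmeans(points, centers, iters=20):
    """Simple 1-D k-means: alternate assignment and mean-update steps."""
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in centers]
        for x in points:
            i = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[i].append(x)
        # Update step: each center becomes the mean of its cluster
        # (an empty cluster keeps its old center).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Assumed toy data with two obvious clusters, and assumed initial centers.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
centers = kmeans(data, centers=[0.0, 5.0])
```

EM follows the same alternation, but with soft (probabilistic) assignments in place of the hard nearest-center assignment.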
The probabilities and the structure of belief networks can be learned from complete data. The probabilities can be derived from counts. The structure can be learned by searching for the best model given the data.
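Deriving the probabilities from counts can be sketched as follows: a conditional probability table P(Wet | Rain) is estimated from complete examples, with a pseudocount of 1 (Laplace smoothing) so that unseen combinations do not get probability zero. The variable names, the data, and the choice of pseudocount are assumptions for illustration.

```python
from collections import Counter

# Assumed complete examples: (rain, wet) pairs.
data = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

def cpt_wet_given_rain(examples, pseudo=1):
    """Estimate P(wet=True | rain) from counts, with pseudocounts."""
    joint = Counter(examples)  # counts of each (rain, wet) pair
    table = {}
    for rain in (True, False):
        num = joint[(rain, True)] + pseudo
        den = joint[(rain, True)] + joint[(rain, False)] + 2 * pseudo
        table[rain] = num / den
    return table

cpt = cpt_wet_given_rain(data)
```

With the data above, P(wet | rain) = (2+1)/(3+2) = 0.6 and P(wet | no rain) = (1+1)/(3+2) = 0.4.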
Missing values in examples are often not missing at random. Determining why they are missing is often important.
Bayesian learning replaces making a prediction from the best model with finding a prediction by averaging over all of the models conditioned on the data.
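The averaging can be sketched for the coin setting: instead of predicting with the single best model, the prediction is the posterior-weighted average of every model's prediction. The models, priors, and observation counts below are assumptions for illustration.

```python
# Each model: (prior probability, probability it assigns to heads).
models = {
    "fair":   (0.5, 0.5),
    "biased": (0.5, 0.9),
}
heads, tails = 6, 1  # assumed observations

# Posterior over models, by Bayes' rule.
unnorm = {m: prior * p ** heads * (1 - p) ** tails
          for m, (prior, p) in models.items()}
z = sum(unnorm.values())
posterior = {m: w / z for m, w in unnorm.items()}

# Bayesian model averaging: posterior-weighted average prediction.
avg_prediction = sum(posterior[m] * models[m][1] for m in models)

# For contrast: the prediction of the single best (MAP) model.
best_model = max(posterior, key=posterior.get)
best_prediction = models[best_model][1]
```

The averaged prediction lies between the models' predictions, pulled toward the better-fitting model but still tempered by the alternative, whereas the best-model prediction commits fully to one model.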