The third edition of Artificial Intelligence: foundations of computational agents, Cambridge University Press, 2023 is now available (including full text).
A hidden variable or latent variable is a probabilistic variable that is not observed in a data set. A Bayes classifier can be the basis for unsupervised learning by making the class a hidden variable.
The expectation maximization or EM algorithm can be used to learn probabilistic models with hidden variables. Combined with a naive Bayes classifier, it does soft clustering, similar to the $k$-means algorithm, but where examples belong to classes probabilistically.
As in the $k$-means algorithm, the training examples and the number of classes, $k$, are given as input.
Data $\qquad$ Model ➪ Probabilities

$$\begin{array}{cccc}
X_1 & X_2 & X_3 & X_4\\
t & f & t & t\\
f & t & t & f\\
f & f & t & t\\
\multicolumn{4}{c}{\cdots}
\end{array}
\qquad\qquad
\begin{array}{l}
P(C)\\
P(X_1\mid C)\\
P(X_2\mid C)\\
P(X_3\mid C)\\
P(X_4\mid C)
\end{array}$$
Given the data, a naive Bayes model is constructed where there is a variable for each feature in the data and a hidden variable for the class. The class variable is the only parent of the other features. This is shown in Figure 10.4. The class variable has domain $\{1,2,\mathrm{\dots},k\}$ where $k$ is the number of classes. The probabilities needed for this model are the probability of the class $C$ and the probability of each feature given $C$. The aim of the EM algorithm is to learn probabilities that best fit the data.
The EM algorithm conceptually augments the data with a class feature, $C$, and a count column. Each original example gets mapped into $k$ augmented examples, one for each class. The counts for these examples are assigned so that they sum to 1. For example, for four features and three classes, we could have
$$\begin{array}{cccc}
X_1 & X_2 & X_3 & X_4\\
t & f & t & t
\end{array}
\qquad\longrightarrow\qquad
\begin{array}{cccccc}
X_1 & X_2 & X_3 & X_4 & C & \mathit{Count}\\
t & f & t & t & 1 & 0.4\\
t & f & t & t & 2 & 0.1\\
t & f & t & t & 3 & 0.5
\end{array}$$
The EM algorithm repeats the two steps:
E step: Update the augmented counts based on the probability distribution. For each example $\langle v_1,\dots,v_n\rangle$ in the original data, the count associated with $\langle v_1,\dots,v_n,c\rangle$ in the augmented data is updated to
$$P(C=c\mid {X}_{1}={v}_{1},\mathrm{\dots},{X}_{n}={v}_{n}).$$ 
Note that this step involves probabilistic inference. This is an expectation step because it computes the expected values.
M step: Infer the probabilities for the model from the augmented data. Because the augmented data has values associated with all the variables, this is the same problem as learning probabilities from data in a naive Bayes classifier. This is a maximization step because it computes the maximum likelihood estimate or the maximum a posteriori probability (MAP) estimate of the probability.
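The two steps can be sketched in code. The following is a minimal sketch of one iteration for discrete features, not the book's implementation; the function name `em_step` and the representation of $fc$ as a dictionary keyed by $(i,v,c)$ are choices made here for illustration. Classes are 0-indexed in the code, whereas the text numbers them from 1:

```python
from collections import defaultdict

def em_step(data, k, cc, fc, n_examples):
    """One EM iteration for a naive Bayes model with hidden class C.
    data: list of tuples of feature values; cc[c] and fc[(i, v, c)] are
    the current sufficient statistics. Returns the new statistics."""
    cc_new = [0.0] * k
    fc_new = defaultdict(float)          # keyed by (feature index, value, class)
    for example in data:
        # E step (per example): unnormalized P(C=c | X1=v1, ..., Xn=vn)
        scores = []
        for c in range(k):
            s = cc[c] / n_examples       # P(C=c)
            for i, v in enumerate(example):
                s *= fc[(i, v, c)] / cc[c]   # P(Xi=v | C=c)
            scores.append(s)
        total = sum(scores)
        post = [s / total for s in scores]   # normalize over classes
        # accumulate the augmented counts
        for c in range(k):
            cc_new[c] += post[c]
            for i, v in enumerate(example):
                fc_new[(i, v, c)] += post[c]
    # M step: the new statistics directly define the new probabilities
    return cc_new, fc_new
```

Because each example contributes a distribution over classes, the new class counts always sum to the number of examples.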
The EM algorithm starts with random probabilities or random counts. EM converges to a local maximum of the likelihood of the data.
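One way to realize the random-counts initialization is to give each example a random distribution over the $k$ classes; the helper below is illustrative, not from the text:

```python
import random

def init_counts(k, rng):
    """Random augmented counts for one example over k classes;
    the counts form a distribution (they sum to 1)."""
    weights = [rng.random() for _ in range(k)]
    total = sum(weights)
    return [w / total for w in weights]

rng = random.Random(0)
counts = init_counts(3, rng)   # counts for one example, three classes
```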
This algorithm returns a probabilistic model, which can be used to classify existing or new examples. An example is classified using
$$P(C=c\mid X_1=v_1,\dots,X_n=v_n)=\frac{P(C=c)*\prod_{i=1}^{n}P(X_i=v_i\mid C=c)}{\sum_{c'}P(C=c')*\prod_{i=1}^{n}P(X_i=v_i\mid C=c')}.$$
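This classification rule translates directly into code. The sketch below assumes the model is given as explicit probability tables; the parameter names `p_class` and `p_feature` are this sketch's own:

```python
def classify(example, p_class, p_feature):
    """Posterior P(C=c | X1=v1, ..., Xn=vn) for each class c, using the
    naive Bayes factorization and normalizing over classes.
    p_class[c] = P(C=c); p_feature[c][i][v] = P(Xi=v | C=c)."""
    unnorm = []
    for c, pc in enumerate(p_class):
        s = pc
        for i, v in enumerate(example):
            s *= p_feature[c][i][v]
        unnorm.append(s)
    total = sum(unnorm)
    return [s / total for s in unnorm]
```

The normalization implements the denominator: the same sum over classes appears in every posterior, so it suffices to compute the numerators and rescale.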
The algorithm does not need to store the augmented data, but maintains a set of sufficient statistics, which is enough information to compute the required probabilities. In each iteration, it sweeps through the data once to compute the sufficient statistics. The sufficient statistics for this algorithm are
$cc$, the class count, a $k$-valued array such that $cc[c]$ is the sum of the counts of the examples in the augmented data with $class=c$
$fc$, the feature count, a three-dimensional array such that $fc[i,v,c]$, for $i$ from 1 to $n$, for each value $v$ in $domain(X_i)$, and for each class $c$, is the sum of the counts of the augmented examples $t$ with $X_i(t)=v$ and $class(t)=c$.
The sufficient statistics from the previous iteration are used to infer the new sufficient statistics for the next iteration. Note that $cc$ could be computed from $fc$, but it is easier to maintain $cc$ directly.
The probabilities required of the model can be computed from $cc$ and $fc$:
$$P(C=c)=\frac{cc[c]}{|E|}$$ 
where $|E|$ is the number of examples in the original data set (which is the same as the sum of the counts in the augmented data set).
$$P({X}_{i}=v\mid C=c)=\frac{fc[i,v,c]}{cc[c]}.$$ 
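These two ratios are all that is needed to recover the model from the sufficient statistics. A minimal sketch, using a 1-indexed $cc$ array and an $fc$ dictionary to match the text's notation (the function names are this sketch's own):

```python
def prob_class(cc, num_examples, c):
    """P(C=c) = cc[c] / |E|."""
    return cc[c] / num_examples

def prob_feature(fc, cc, i, v, c):
    """P(Xi=v | C=c) = fc[i,v,c] / cc[c]."""
    return fc[(i, v, c)] / cc[c]
```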
Figure 10.6 gives the algorithm to compute the sufficient statistics, from which the probabilities are derived as above. Evaluating $P(C=c\mid {X}_{1}={v}_{1},\mathrm{\dots},{X}_{n}={v}_{n})$ in line 17 relies on the counts in $cc$ and $fc$. This algorithm has glossed over how to initialize the counts. One way is for $P(C\mid {X}_{1}={v}_{1},\mathrm{\dots},{X}_{n}={v}_{n})$ to return a random distribution for the first iteration, so the counts come from the data. Alternatively, the counts can be assigned randomly before seeing any data. See Exercise 6.
The algorithm will eventually converge when $cc$ and $fc$ do not change much in an iteration. The threshold for the approximate-equality test in line 21 can be tuned to trade off learning time and accuracy. An alternative is to run the algorithm for a fixed number of iterations.
Notice the similarity with the $k$-means algorithm. The E step (probabilistically) assigns examples to classes, and the M step determines what the classes predict.
Consider Figure 10.5.
When the example $\langle t,f,t,t\rangle$ is encountered in the data set, the algorithm computes
$$\begin{aligned}
&P(C=c\mid x_1\wedge \neg x_2\wedge x_3\wedge x_4)\\
&\quad\propto P(X_1=1\mid C=c)*P(X_2=0\mid C=c)*P(X_3=1\mid C=c)*P(X_4=1\mid C=c)*P(C=c)\\
&\quad= \frac{fc[1,1,c]}{cc[c]}*\frac{fc[2,0,c]}{cc[c]}*\frac{fc[3,1,c]}{cc[c]}*\frac{fc[4,1,c]}{cc[c]}*\frac{cc[c]}{|E|}\\
&\quad\propto \frac{fc[1,1,c]*fc[2,0,c]*fc[3,1,c]*fc[4,1,c]}{cc[c]^{3}}
\end{aligned}$$
for each class $c$ and normalizes the results. Suppose the value computed for class 1 is 0.4, for class 2 is 0.1, and for class 3 is 0.5 (as in the augmented data in Figure 10.5). Then $cc\_new[1]$ is incremented by 0.4, $cc\_new[2]$ is incremented by 0.1, etc. Values $fc\_new[1,1,1]$, $fc\_new[2,0,1]$, etc. are each incremented by 0.4. Next, $fc\_new[1,1,2]$, $fc\_new[2,0,2]$, etc. are each incremented by 0.1, and so on.
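The increments for this one example can be traced in code. The posterior values 0.4, 0.1, 0.5 are taken from the example above; the (feature, value) encoding of $\langle t,f,t,t\rangle$ as 1/0 follows the indices used in $fc$:

```python
k = 3
posterior = [0.4, 0.1, 0.5]                 # P(C=c | t,f,t,t) for c = 1, 2, 3
example = [(1, 1), (2, 0), (3, 1), (4, 1)]  # (feature index, value) for <t,f,t,t>

cc_new = [0.0] * (k + 1)                    # 1-indexed by class, as in the text
fc_new = {}
for c in range(1, k + 1):
    cc_new[c] += posterior[c - 1]
    for i, v in example:
        fc_new[(i, v, c)] = fc_new.get((i, v, c), 0.0) + posterior[c - 1]
```

After processing the example, each class's share of the count has been added to $cc\_new$ and to the four matching entries of $fc\_new$.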
Note that, as long as $k>1$, EM virtually always has multiple local maxima. In particular, any permutation of the class labels of a local maximum will also be a local maximum. To try to find a global maximum, multiple restarts can be tried, and the model with the highest log-likelihood returned.
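A restart loop only needs a way to score a model, which the log-likelihood of the data provides. The sketch below assumes a caller-supplied `run_em` that takes a random generator and returns a trained `(p_class, p_feature)` model in the table form used earlier; both names are this sketch's own:

```python
import math
import random

def log_likelihood(data, p_class, p_feature):
    """Sum over examples of log( sum_c P(C=c) * prod_i P(Xi=vi | C=c) )."""
    ll = 0.0
    for example in data:
        s = 0.0
        for c, pc in enumerate(p_class):
            prod = pc
            for i, v in enumerate(example):
                prod *= p_feature[c][i][v]
            s += prod
        ll += math.log(s)
    return ll

def best_of_restarts(data, run_em, num_restarts=10, seed=0):
    """Run EM from several random initializations and keep the model
    with the highest log-likelihood."""
    rng = random.Random(seed)
    best, best_ll = None, -math.inf
    for _ in range(num_restarts):
        model = run_em(rng)
        ll = log_likelihood(data, *model)
        if ll > best_ll:
            best, best_ll = model, ll
    return best, best_ll
```

Because the permuted-label maxima mentioned above all have the same likelihood, restarts help with genuinely different local maxima, not with label permutations.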