Artificial Intelligence 2E: foundations of computational agents

The third edition of Artificial Intelligence: foundations of computational agents (Cambridge University Press, 2023) is now available (including full text).

The previously described methods sampled forward through the network (parents were sampled before children) and were not good at passing information back through the network. The method described in this section can sample variables in any order.

A stationary distribution of a Markov chain is a distribution of its variables that is not changed by the transition function of the Markov chain. If the Markov chain mixes enough, there is a unique stationary distribution, which can be approached by running the Markov chain long enough. The idea behind Markov chain Monte Carlo (MCMC) methods to generate samples from a distribution (e.g., the posterior distribution given a belief network) is to construct a Markov chain with the desired distribution as its (unique) stationary distribution and then sample from the Markov chain; these samples will be distributed according to the desired distribution. We typically discard the first few samples in a burn-in period, as these samples may be far from the stationary distribution.
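As a minimal illustration of these ideas (a sketch, not the book's code), the following simulates a simple two-state Markov chain, discards a burn-in period, and checks that the empirical distribution of the remaining samples approaches the chain's stationary distribution:

```python
import random

# A two-state Markov chain (states 0 and 1) with transition probabilities
# P(0 -> 1) = 0.2 and P(1 -> 0) = 0.1.  Its stationary distribution gives
# state 1 probability 0.2 / (0.2 + 0.1) = 2/3.
P_01, P_10 = 0.2, 0.1

def step(state):
    """One application of the transition function."""
    if state == 0:
        return 1 if random.random() < P_01 else 0
    return 0 if random.random() < P_10 else 1

def run_chain(n_samples, burn_in=1000):
    """Run the chain, discarding the first `burn_in` samples."""
    state = 0
    samples = []
    for i in range(burn_in + n_samples):
        state = step(state)
        if i >= burn_in:
            samples.append(state)
    return samples

random.seed(0)
samples = run_chain(50000)
estimate = sum(samples) / len(samples)  # fraction of time in state 1, near 2/3
```

The chain here is chosen for illustration only; in MCMC for belief networks, the states are assignments to the variables and the transition function is designed so that the posterior is the stationary distribution.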

One way to create a Markov chain from a belief network with observations is to use Gibbs sampling. The idea is to clamp observed variables to the values they were observed to have, and sample the other variables. Each variable is sampled from the distribution of the variable given the current values of the other variables. Note that each variable only depends on the values of the variables in its Markov blanket. The Markov blanket of a variable $X$ in a belief network contains $X$’s parents, $X$’s children, and the other parents of $X$’s children; these are all of the variables that appear in factors with $X$.
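To make the definition concrete, here is a small sketch (using a hypothetical dict-of-parents representation of the network, not the book's data structures) that collects a variable's Markov blanket:

```python
def markov_blanket(X, parents):
    """Markov blanket of X: X's parents, X's children, and the other
    parents of X's children.  `parents` maps each variable to the set
    of its parents in the belief network."""
    children = {v for v, ps in parents.items() if X in ps}
    blanket = set(parents[X]) | children
    for child in children:
        blanket |= parents[child]   # the child's other parents
    blanket.discard(X)              # X is not in its own blanket
    return blanket

# The chain A -> B -> C, represented as a dict of parent sets.
net = {"A": set(), "B": {"A"}, "C": {"B"}}
```

For example, `markov_blanket("B", net)` gives `{"A", "C"}`: $B$'s parent, $B$'s child, and no extra co-parents since $C$ has no other parents.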

Figure 8.32 gives pseudocode for Gibbs sampling. The only ill-defined part is how to randomly sample from $P(X\mid markov\mathrm{\_}blanket(X))$. This can be computed by noticing that for each value of $X$, the probability $P(X\mid markov\mathrm{\_}blanket(X))$ is proportional to the product of the values of the factors in which $X$ appears, projected onto the current values of all of the other variables.
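One way to implement that step, sketched here with factors represented as functions of an assignment dict (an assumed representation, not the one in Figure 8.32): for each value $x$ of $X$, multiply the factors that mention $X$ with the other variables fixed at their current values, then sample from the normalized result.

```python
import random

def sample_given_blanket(X, domain, current, factors_with_X):
    """Sample X from P(X | markov_blanket(X)).

    `current` is the current assignment to all variables;
    `factors_with_X` are the factors in which X appears, each a
    function from a complete assignment dict to a number."""
    weights = []
    for x in domain:
        assignment = dict(current)
        assignment[X] = x
        w = 1.0
        for factor in factors_with_X:
            w *= factor(assignment)
        weights.append(w)
    # Sample proportionally to the unnormalized weights.
    r = random.random() * sum(weights)
    for x, w in zip(domain, weights):
        r -= w
        if r <= 0:
            return x
    return domain[-1]
```

For variable $B$ in the chain $A \rightarrow B \rightarrow C$, for instance, `factors_with_X` would contain the factors for $P(B\mid A)$ and $P(C\mid B)$, evaluated at the current values of $A$ and $C$.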

Gibbs sampling will approach the correct probabilities as long as there are no zero probabilities, so that every assignment remains reachable. How quickly it approaches the distribution depends on how quickly the chain mixes (how much of the probability space is explored), which depends on how extreme the probabilities are. Gibbs sampling works well when the probabilities are not extreme.

As a problematic case for Gibbs sampling, consider a simple example with three Boolean variables $A$, $B$, $C$, with $A$ as the parent of $B$, and $B$ as the parent of $C$. Suppose $P(a)=0.5$, $P(b\mid a)=0.99$, $P(b\mid \neg a)=0.01$, $P(c\mid b)=0.99$, $P(c\mid \neg b)=0.01$. There are no observations and the query variable is $C$. The two assignments with all variables having the same value are equally likely and are much more likely than the other assignments. Gibbs sampling will quickly get to one of these assignments, and will take a long time to transition to the other assignments (as it requires some very unlikely choices). If 0.99 and 0.01 were replaced by numbers closer to 1 and 0, it would take even longer to converge.
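The slow mixing can be quantified without simulation. A small check, using the numbers from the example: starting from the assignment where $A$, $B$, and $C$ are all true, flipping any single variable in a Gibbs step is very unlikely.

```python
# CPTs from the example chain A -> B -> C.
p_a = 0.5
p_b_given_a = {True: 0.99, False: 0.01}   # P(b | A)
p_c_given_b = {True: 0.99, False: 0.01}   # P(c | B)

# From the all-true assignment, P(A = false | B = true) is proportional
# to P(A) * P(b | A) for each value of A:
w_a_true = p_a * p_b_given_a[True]             # 0.5 * 0.99
w_a_false = (1 - p_a) * p_b_given_a[False]     # 0.5 * 0.01
p_flip_a = w_a_false / (w_a_true + w_a_false)  # = 0.01

# P(B = false | A = true, C = true) is proportional to
# P(B | a) * P(c | B) for each value of B:
w_b_true = p_b_given_a[True] * p_c_given_b[True]          # 0.99 * 0.99
w_b_false = (1 - p_b_given_a[True]) * p_c_given_b[False]  # 0.01 * 0.01
p_flip_b = w_b_false / (w_b_true + w_b_false)             # about 0.0001
```

So a single step flips $A$ or $C$ with probability $0.01$, and $B$ with probability of about $10^{-4}$: moving from the all-true to the all-false assignment requires a sequence of such unlikely flips, which is why the chain stays in one mode for a long time.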