## 6.4 Probabilistic Inference

The most common probabilistic inference task is to compute the
posterior distribution of a query variable given some evidence.
Unfortunately,
even estimating the posterior probability in a belief
network to within an absolute error of less than *0.5*, or to within a
constant multiplicative factor, is NP-hard, so no general efficient
implementation is possible.
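To see why exact inference is expensive in general, consider computing a posterior by summing the joint probability over every possible world. The sketch below uses a tiny hypothetical chain network A → B → C with made-up Boolean probability tables; with n variables, this enumeration visits 2^n worlds.

```python
from itertools import product

# A tiny hypothetical belief network over Boolean variables A -> B -> C.
# The conditional probability tables are made-up numbers for illustration.
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}
p_c_given_b = {True: {True: 0.75, False: 0.25}, False: {True: 0.1, False: 0.9}}

def joint(a, b, c):
    """Probability of one possible world (a, b, c): the product of
    each variable's probability given its parents."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def posterior_a_given_c(c_obs):
    """P(A=true | C=c_obs), computed by summing the joint over all
    possible worlds consistent with the evidence.  This enumerates
    every world, which is why the approach does not scale."""
    num = sum(joint(True, b, c_obs) for b in (True, False))
    den = sum(joint(a, b, c_obs) for a, b in product((True, False), repeat=2))
    return num / den

print(round(posterior_a_given_c(True), 4))  # → 0.5607
```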

The main approaches for probabilistic inference in belief networks are

- **Exploiting the structure of the network.** This approach is typified by the variable elimination algorithm, detailed later.
- **Search-based approaches.** By enumerating some of the possible worlds, posterior probabilities can be estimated from the worlds generated. By computing the probability mass of the worlds not considered, a bound on the error in the posterior probability can be estimated. This approach works well when the distributions are extreme (all probabilities are close to zero or close to one), as occurs in engineered systems.
- **Variational inference**, where the idea is to find an approximation to the problem that is easy to compute. First, choose a class of representations that are easy to compute; this class could be as simple as the set of disconnected belief networks (with no arcs). Then find the member of the class that is closest to the original problem; that is, find an easy-to-compute distribution that is as close as possible to the posterior distribution that must be computed. The problem thus reduces to an optimization problem of minimizing the error.
- **Stochastic simulation.** In these approaches, random cases are generated according to the probability distributions. By treating these random cases as a set of samples, the marginal distribution on any combination of variables can be estimated. Stochastic simulation methods are discussed in Section 6.4.2.
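The variational idea can be sketched on the smallest possible case. The code below assumes a made-up joint distribution over two Boolean variables and searches the class of disconnected networks, i.e. fully factorized distributions q(a)q(b), for the member minimizing the KL divergence to the target. A coarse grid search stands in for the coordinate-ascent updates a real variational method would use.

```python
import math
from itertools import product

# Hypothetical joint distribution over two Boolean variables (made-up numbers).
p = {(True, True): 0.4, (True, False): 0.1,
     (False, True): 0.2, (False, False): 0.3}

def kl_to_p(qa, qb):
    """KL(q || p) where q(a, b) = q(a) q(b) is a disconnected
    (no-arcs) approximation with parameters qa = q(A=true),
    qb = q(B=true)."""
    total = 0.0
    for a, b in product((True, False), repeat=2):
        q = (qa if a else 1 - qa) * (qb if b else 1 - qb)
        if q > 0:
            total += q * math.log(q / p[(a, b)])
    return total

# Minimize the divergence over a coarse grid of candidate parameters
# (a simple stand-in for proper variational optimization).
grid = [i / 100 for i in range(1, 100)]
qa, qb = min(((x, y) for x in grid for y in grid), key=lambda t: kl_to_p(*t))
print(qa, qb, round(kl_to_p(qa, qb), 4))
```

The returned (qa, qb) is the easiest-to-compute distribution in the chosen class that is closest, in KL divergence, to the target joint.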

This book presents only the first and fourth of these methods: variable elimination and stochastic simulation.
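A minimal sketch of stochastic simulation, using the same hypothetical A → B → C network with made-up numbers: generate random worlds by sampling each variable given its parents, then estimate a posterior by rejection, keeping only the worlds consistent with the evidence.

```python
import random

# Same style of tiny hypothetical network A -> B -> C (made-up numbers).
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: 0.9, False: 0.2}   # P(B=true | A)
p_c_given_b = {True: 0.75, False: 0.1}  # P(C=true | B)

def sample_world(rng):
    """Generate one random possible world by sampling each variable
    in topological order, conditioned on its sampled parents."""
    a = rng.random() < p_a[True]
    b = rng.random() < p_b_given_a[a]
    c = rng.random() < p_c_given_b[b]
    return a, b, c

def estimate_a_given_c(n, seed=0):
    """Rejection sampling: discard worlds where C is false, and
    estimate P(A=true | C=true) as the fraction of kept worlds
    in which A is true."""
    rng = random.Random(seed)
    kept = [a for a, b, c in (sample_world(rng) for _ in range(n)) if c]
    return sum(kept) / len(kept)

print(estimate_a_given_c(100_000))  # close to the exact value 0.5607
```

The estimate converges to the true posterior as the number of samples grows, though rejection becomes wasteful when the evidence is improbable.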