10 Learning with Uncertainty


10.4 Bayesian Learning

Rather than choosing the most likely model or delineating the set of all models that are consistent with the training data, another approach is to compute the posterior probability of each model given the training examples.

The idea of Bayesian learning is to compute the posterior probability distribution of the target features of a new example conditioned on its input features and all the training examples.

Suppose a new case has inputs $X{=}x$ (which we write simply as $x$) and target features $Y$. The aim is to compute $P(Y \mid x \wedge E)$, where $E$ is the set of training examples. This is the probability distribution of the target variables given the particular inputs and the examples. The role of a model is to be the assumed generator of the examples. If we let $M$ be a set of disjoint and covering models, then reasoning by cases and the chain rule give

$$\begin{aligned}
P(Y \mid x \wedge E) &= \sum_{m \in M} P(Y \wedge m \mid x \wedge E)\\
&= \sum_{m \in M} P(Y \mid m \wedge x \wedge E) \cdot P(m \mid x \wedge E)\\
&= \sum_{m \in M} P(Y \mid m \wedge x) \cdot P(m \mid E).
\end{aligned}$$

The first equality is reasoning by cases; the second follows from the definition of conditional probability. The last equality relies on two assumptions: the model includes all the information about the examples that is necessary for a particular prediction, $P(Y \mid m \wedge x \wedge E) = P(Y \mid m \wedge x)$, and the model does not change depending on the inputs of the new example, $P(m \mid x \wedge E) = P(m \mid E)$. Instead of choosing the best model, Bayesian learning relies on model averaging: averaging over the predictions of all the models, where each model is weighted by its posterior probability given the training examples.

$P(m \mid E)$ can be computed using Bayes' rule:

$$P(m \mid E) = \frac{P(E \mid m) \cdot P(m)}{P(E)}.$$

Thus, the weight of each model depends on how well it predicts the data (the likelihood) and its prior probability. The denominator, $P(E)$, is a normalizing constant that makes the posterior probabilities of the models sum to 1. $P(E)$ is called the partition function. Computing $P(E)$ may be very difficult when there are many models.

A set $\{e_1, \ldots, e_k\}$ of examples is independent and identically distributed (i.i.d.) given model $m$ if examples $e_i$ and $e_j$, for $i \neq j$, are independent given $m$. If the set of training examples $E$ is $\{e_1, \ldots, e_k\}$, the i.i.d. assumption implies

$$P(E \mid m) = \prod_{i=1}^{k} P(e_i \mid m).$$
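To make these pieces concrete, here is a minimal sketch in Python, assuming (purely for illustration) that the model class $M$ is a finite grid of candidate values for the probability that a Boolean target is true; the function names are ours, not from the book's code. It computes the i.i.d. likelihood, the posterior over models via Bayes' rule, and a model-averaged prediction.

```python
from math import prod

# A minimal sketch of Bayesian model averaging (names are ours, not the
# book's). For illustration, each model m is one candidate value of
# P(Y=true) for a Boolean target Y, drawn from a finite grid.

def posterior_over_models(models, prior, examples):
    """Return P(m | E) for each model, using Bayes' rule."""
    # P(E | m) is the product of the P(e_i | m), by the i.i.d. assumption
    likelihoods = [prod(th if e else 1 - th for e in examples)
                   for th in models]
    # P(E), the partition function, normalizes the posterior
    joint = [lik * pr for lik, pr in zip(likelihoods, prior)]
    partition = sum(joint)
    return [j / partition for j in joint]

def model_average(models, posterior):
    """P(Y=true | E) = sum over models m of P(Y=true | m) * P(m | E)."""
    return sum(th * pm for th, pm in zip(models, posterior))

models = [i / 20 for i in range(21)]     # candidate values 0.0, 0.05, ..., 1.0
prior = [1 / 21] * 21                    # uniform prior over the models
examples = [True, True, False]           # two true cases, one false case
posterior = posterior_over_models(models, prior, examples)
print(model_average(models, posterior))  # close to 0.6 (see Example 10.15)
```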

The i.i.d. assumption can be represented as a belief network, shown in Figure 10.10, in which the $e_i$ are independent given the model $m$.

Figure 10.10: The i.i.d. assumption as a belief network

If $m$ is made into a discrete variable, any of the inference methods of the previous chapter could be used for inference in this network. A standard reasoning technique in such a network is to condition on each observed $e_i$ and to query the model variable or an unobserved $e_i$ variable.

The set of models may include structurally different models in addition to models that differ in the values of the parameters. One of the techniques of Bayesian learning is to make the parameters of the model explicit and to determine the distribution over the parameters.

Example 10.14.

Consider the simplest learning task: learning a single Boolean random variable, $Y$, with no input features. (This is the case covered in Section 7.2.3.) Each example specifies $Y{=}\mathit{true}$ or $Y{=}\mathit{false}$. The aim is to learn the probability distribution of $Y$ given the set of training examples.

There is a single parameter, $\phi$, that determines the set of all models. Suppose that $\phi$ represents the probability of $Y{=}\mathit{true}$. We treat $\phi$ as a real-valued random variable on the interval $[0,1]$. Thus, by definition of $\phi$, $P(Y{=}\mathit{true} \mid \phi) = \phi$ and $P(Y{=}\mathit{false} \mid \phi) = 1 - \phi$.

Suppose, first, an agent has no prior information about the probability of Boolean variable $Y$ and no knowledge beyond the training examples. This ignorance can be modeled by giving the variable $\phi$ a uniform prior distribution over the interval $[0,1]$. This is the probability density function labeled $n_0{=}0, n_1{=}0$ in Figure 10.11.

We can update the probability distribution of $\phi$ given some examples. Assume that the examples, obtained by running a number of independent experiments, are a particular sequence of outcomes that consists of $n_0$ cases where $Y$ is false and $n_1$ cases where $Y$ is true.

Figure 10.11: Beta distribution based on different samples

The posterior distribution for $\phi$ given the training examples can be derived by Bayes' rule. Let the examples $E$ be the particular sequence of observations that resulted in $n_1$ occurrences of $Y{=}\mathit{true}$ and $n_0$ occurrences of $Y{=}\mathit{false}$. Bayes' rule gives us

$$P(\phi \mid E) = \frac{P(E \mid \phi) \cdot P(\phi)}{P(E)}.$$

The denominator is a normalizing constant to make sure the area under the curve is 1.

Given that the examples are i.i.d.,

$$P(E \mid \phi) = \phi^{n_1} \cdot (1-\phi)^{n_0}$$

because there are $n_0$ cases where $Y{=}\mathit{false}$, each with a probability of $1-\phi$, and $n_1$ cases where $Y{=}\mathit{true}$, each with a probability of $\phi$.

Note that $E$ is the particular sequence of observations made. If the observation was just that there were a total of $n_0$ occurrences of $Y{=}\mathit{false}$ and $n_1$ occurrences of $Y{=}\mathit{true}$, we would get a different answer, because we would have to take into account all the possible sequences that could have given this count. The latter is known as the binomial distribution.

One possible prior probability, $P(\phi)$, is a uniform distribution on the interval $[0,1]$. This would be reasonable when the agent has no prior information about the probability.

Figure 10.11 gives some posterior distributions of the variable $\phi$ based on different sample sizes, given a uniform prior. The cases are $(n_0{=}1, n_1{=}2)$, $(n_0{=}2, n_1{=}4)$, and $(n_0{=}4, n_1{=}8)$. Each of these peaks at the same place, namely at $2/3$. More training examples make the curve sharper.

The distribution of this example is known as the beta distribution; it is parameterized by two counts, $\alpha_0$ and $\alpha_1$, and a probability $p$. Traditionally, the $\alpha_i$ parameters for the beta distribution are one more than the counts; thus, $\alpha_i = n_i + 1$. The beta distribution is

$$\mathrm{Beta}^{\alpha_0,\alpha_1}(p) = \frac{1}{Z}\, p^{\alpha_1 - 1} \cdot (1-p)^{\alpha_0 - 1}$$

where $Z$ is a normalizing constant that ensures the integral over all values is 1. Thus, the uniform distribution on $[0,1]$ is the beta distribution $\mathrm{Beta}^{1,1}$.
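This correspondence between counts and beta parameters can be checked numerically. Below is a small sketch that assumes SciPy is available; note that `scipy.stats.beta(a, b)` has density proportional to $p^{a-1}(1-p)^{b-1}$, so `a` plays the role of $\alpha_1$ and `b` the role of $\alpha_0$.

```python
from scipy.stats import beta

# Beta with alpha_1 = alpha_0 = 1 is the uniform distribution on [0,1]
print(beta.pdf(0.3, a=1, b=1))           # 1.0

# Posterior after n1 = 2 true and n0 = 1 false cases, with a uniform
# prior: a beta distribution with alpha_1 = n1 + 1 and alpha_0 = n0 + 1
n0, n1 = 1, 2
post = beta(a=n1 + 1, b=n0 + 1)

# Its density peaks at n1/(n0 + n1) = 2/3, as in Figure 10.11
print(post.pdf(2/3) > post.pdf(0.5))     # True
print(post.pdf(2/3) > post.pdf(0.8))     # True
```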

Suppose instead that $Y$ is a discrete variable with $k$ different values. The generalization of the beta distribution to cover this case is known as the Dirichlet distribution. The Dirichlet distribution with two sorts of parameters, the “counts” $\alpha_1,\ldots,\alpha_k$ and the probability parameters $p_1,\ldots,p_k$, is

$$\mathrm{Dirichlet}^{\alpha_1,\ldots,\alpha_k}(p_1,\ldots,p_k) = \frac{1}{Z} \prod_{j=1}^{k} p_j^{\alpha_j - 1}$$

where $p_i$ is the probability of the $i$th outcome (so $0 \leq p_i \leq 1$), $\alpha_i$ is a non-negative real, and $Z$ is a normalizing constant that ensures the integral over all the probability values is 1. We can think of $\alpha_i$ as one more than the count of the $i$th outcome, $\alpha_i = n_i + 1$. The Dirichlet distribution looks like Figure 10.11 along each dimension (i.e., as each $p_j$ varies between 0 and 1).

For many cases, averaging over all models weighted by their posterior distribution is difficult, because the models may be complicated (e.g., if they are decision trees or even belief networks). For the Dirichlet distribution, the expected value for outcome $i$ (averaging over all of the $p_j$) is

$$\frac{\alpha_i}{\sum_j \alpha_j}.$$

The reason that the $\alpha_i$ parameters are one more than the counts in the definitions of the beta and Dirichlet distributions is to make this formula simple. This fraction is well defined only when the $\alpha_j$ are all non-negative and not all zero.
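As a quick numerical check of this formula, here is a sketch that again assumes SciPy is available; `scipy.stats.dirichlet.mean` computes exactly this ratio.

```python
from scipy.stats import dirichlet

# Counts for a three-valued variable: n = (2, 0, 1), so alpha_i = n_i + 1
alpha = [3, 1, 2]

# Expected value of outcome i is alpha_i / sum_j alpha_j
print([a / sum(alpha) for a in alpha])   # [0.5, 0.1666..., 0.3333...]
print(dirichlet.mean(alpha))             # the same values
```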

Example 10.15.

Consider Example 10.14, which determines the value of $\phi$ based on a sequence of observations made up of $n_0$ cases where $Y$ is false and $n_1$ cases where $Y$ is true. Consider the posterior distribution as shown in Figure 10.11. What is interesting about this is that, whereas the most likely posterior value of $\phi$ is $\frac{n_1}{n_0+n_1}$, the expected value of this distribution is $\frac{n_1+1}{n_0+n_1+2}$.

Thus, the expected value of the $n_0{=}1, n_1{=}2$ curve is $3/5$; for the $n_0{=}2, n_1{=}4$ case the expected value is $5/8$; and for the $n_0{=}4, n_1{=}8$ case it is $9/14$. As the learner gets more training examples, this value approaches $n/m$, where $n = n_1$ is the number of cases where $Y$ is true and $m = n_0 + n_1$ is the total number of examples.
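These values can be reproduced directly from the formula; a small sketch:

```python
# Expected value (n1 + 1) / (n0 + n1 + 2) for the three curves of Figure 10.11
for n0, n1 in [(1, 2), (2, 4), (4, 8)]:
    print(n0, n1, (n1 + 1) / (n0 + n1 + 2))
# prints 0.6 (= 3/5), 0.625 (= 5/8), and 0.6428... (= 9/14)
```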

This estimate is better than $n/m$ for a number of reasons. First, it tells us what to do if the learning agent has no examples: use the uniform prior of $1/2$. This is the expected value of the $n{=}0, m{=}0$ case. Second, consider the case where $n{=}0$ and $m{=}3$. The agent should not use $P(Y{=}\mathit{true}) = 0$, because this says that $Y{=}\mathit{true}$ is impossible, and it certainly does not have evidence for this! The expected value of this curve with a uniform prior is $1/5$.

An agent does not have to start with a uniform prior; it could start with any prior distribution. If the agent starts with a prior that is a Dirichlet distribution, its posterior will also be a Dirichlet distribution. The posterior distribution can be obtained by adding the observed counts to the $\alpha_i$ parameters of the prior distribution.

Thus, the beta and Dirichlet distributions provide a justification for using pseudocounts when estimating probabilities. The pseudocount represents the prior knowledge; a flat prior gives a pseudocount of 1. Laplace smoothing can therefore be justified in terms of making predictions from initial ignorance.
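For instance, here is a minimal sketch of Laplace smoothing for a variable with $k$ values (the function name is ours):

```python
def laplace_estimate(counts):
    """Estimate P(outcome i) as (n_i + 1) / (N + k): a pseudocount of 1
    per outcome, corresponding to a uniform Dirichlet prior."""
    total, k = sum(counts), len(counts)
    return [(n + 1) / (total + k) for n in counts]

# With counts (0, 3) this gives (1/5, 4/5), so no outcome gets probability 0
print(laplace_estimate([0, 3]))   # [0.2, 0.8]
```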

In addition to using the posterior distribution of $\phi$ to derive the expected value, we can use it to answer other questions, such as: what is the probability that $\phi$ lies in the range $[a,b]$? In other words, derive $P((\phi \geq a \wedge \phi \leq b) \mid e)$. This is the problem that the Reverend Thomas Bayes solved more than 250 years ago [Bayes, 1763]. The solution he gave – although in much more cumbersome notation – was

$$\frac{\int_a^b p^n \cdot (1-p)^{m-n}\, dp}{\int_0^1 p^n \cdot (1-p)^{m-n}\, dp}.$$
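In modern terms, this ratio is the difference of two beta cumulative distribution function values. A sketch, again assuming SciPy is available (with $n$ true cases out of $m$, the posterior under a uniform prior is the beta distribution with $a = n+1$ and $b = m-n+1$):

```python
from scipy.stats import beta

def prob_phi_in_range(n, m, lo, hi):
    """P(lo <= phi <= hi | e) after n true cases out of m examples,
    starting from a uniform prior."""
    post = beta(a=n + 1, b=m - n + 1)
    return post.cdf(hi) - post.cdf(lo)

# Most of the posterior mass lies in a band around n/m = 2/3
print(prob_phi_in_range(n=8, m=12, lo=0.4, hi=0.9))
```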

This kind of knowledge is used in surveys when it may be reported that a survey is correct with an error of at most 5%, 19 times out of 20. It is also the same type of information that is used by probably approximately correct (PAC) learning, which guarantees an error of at most $\epsilon$ at least $1-\delta$ of the time. If an agent chooses the midpoint of the range $[a,b]$, namely $\frac{a+b}{2}$, as its hypothesis, it will have an error less than or equal to $\frac{b-a}{2}$ exactly when the true value of $\phi$ is in $[a,b]$. The value $1-\delta$ corresponds to $P(\phi \geq a \wedge \phi \leq b \mid e)$. If $\epsilon = \frac{b-a}{2}$ and $\delta = 1 - P(\phi \geq a \wedge \phi \leq b \mid e)$, choosing the midpoint will result in an error of at most $\epsilon$ at least $1-\delta$ of the time. PAC learning gives worst-case results, whereas Bayesian learning gives the expected number. Typically, the Bayesian estimate is more accurate, but the PAC results give a guarantee of a bound on the error. The sample complexity (the number of examples required to obtain some given accuracy) for Bayesian learning is typically much lower than that of PAC learning; many fewer examples are required to expect to achieve the desired accuracy than are needed to guarantee it.