8 Reasoning with Uncertainty

8.2 Independence

The axioms of probability are very weak and provide few constraints on allowable conditional probabilities. For example, if there are n binary variables, there are 2^n - 1 numbers to be assigned to give a complete probability distribution from which arbitrary conditional probabilities can be derived. To determine any probability, you may have to start with an enormous database of probabilities.

A useful way to limit the amount of information required is to assume that each variable only directly depends on a few other variables. This uses assumptions of conditional independence. Not only does this reduce the number of probabilities required to specify a model, but the independence structure can also be exploited for efficient reasoning.
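To make the difference concrete, here is a small back-of-the-envelope sketch in Python (the numbers and the bound of at most k parents per variable are illustrative assumptions, not from the text) comparing a full joint distribution over n binary variables with a factored model:

    # Full joint: 2**n world probabilities, minus 1 because they must sum to 1.
    def full_joint_size(n):
        return 2 ** n - 1

    # Factored model: each binary variable directly depends on at most k other
    # binary variables, so it needs at most 2**k conditional probabilities
    # (the complementary value for each parent assignment follows from summing to 1).
    def factored_size(n, k):
        return n * 2 ** k

    for n in (10, 20, 30):
        print(n, full_joint_size(n), factored_size(n, k=3))
    # e.g. for n = 30: 1073741823 numbers for the full joint versus 240.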

Reducing the Numbers

Two main approaches are used to overcome the need for so many numbers to specify a probability distribution:

Independence

Assume that the knowledge of the truth of one proposition does not affect the agent’s belief in another proposition in the context of other propositions.

Maximum entropy or random worlds

Assume that probabilities are as uniform as possible given the available information.

The distinction between allowing representations of independence and using maximum entropy or random worlds highlights an important difference between views of a knowledge representation:

  • The first view is that a knowledge representation provides a high-level modeling language that lets us model a domain in a reasonably natural way. According to this view, it is expected that knowledge representation designers prescribe how to use the representation language by providing a user manual on how to describe domains of interest.

  • The second view is that a knowledge representation should allow someone to add whatever knowledge they may have about a domain. The knowledge representation should fill in the rest in a commonsense manner. According to this view, it is unreasonable for a knowledge representation designer to specify how particular knowledge should be encoded.

Judging a knowledge representation by the wrong criteria does not result in a fair assessment.

A belief network is a representation of a particular kind of independence among variables. Belief networks should be viewed as a modeling language. Many domains are concisely and naturally represented by exploiting the independencies that belief networks compactly represent.

Once the network structure and the domains of the variables for a belief network are defined, exactly which numbers are required (the conditional probabilities) is prescribed. The user cannot simply add arbitrary conditional probabilities but must follow the network’s structure. If the numbers required of a belief network are provided and are locally consistent, the whole network will be consistent.

In contrast, the maximum entropy or random worlds approaches infer the most random worlds that are consistent with a probabilistic knowledge base. They form a probabilistic knowledge representation of the second type. For the random worlds approach, any numbers that happen to be available are added and used. However, if you allow someone to add arbitrary probabilities, it is easy for the knowledge to be inconsistent with the axioms of probability. Moreover, it is difficult to justify an answer as correct if the assumptions are not made explicit.

For propositions h, e, and f, as long as the value of P(h | e) is not 0 or 1, the value of P(h | e) does not constrain the value of P(h | f ∧ e). This latter probability could have any value in the range [0,1]: it is 1 when f implies h, and it is 0 when f implies ¬h. A common kind of qualitative knowledge is of the form P(h | e) = P(h | f ∧ e), which specifies that f is irrelevant to the probability of h given that e is observed. This idea applies to random variables, as in the following definition.

Random variable X is conditionally independent of random variable Y given a set of random variables Zs if

P(X | Y, Zs) = P(X | Zs)

whenever the probabilities are well defined. This means that for all x ∈ domain(X), for all y ∈ domain(Y), and for all z ∈ domain(Zs), if P(Y=y | Zs=z) > 0,

P(X=x | Y=y ∧ Zs=z) = P(X=x | Zs=z).

That is, given a value of each variable in Zs, knowing Y’s value does not affect the belief in the value of X.
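As a minimal sketch of this definition (the representation of a joint distribution as a list of (world, probability) pairs is an assumption made here for illustration, not the book's code), the following Python function checks conditional independence numerically:

    def prob(joint, assignment):
        """P(all variable=value pairs in `assignment` hold), where `joint` is a
        list of (world, p) pairs and each world maps variable names to values."""
        return sum(p for world, p in joint
                   if all(world[v] == val for v, val in assignment.items()))

    def cond_indep(joint, X, Y, Zs, tol=1e-9):
        """True if X is conditionally independent of Y given the variables in Zs:
        P(X=x | Y=y, Zs=z) = P(X=x | Zs=z) whenever P(Y=y, Zs=z) > 0."""
        xs = {w[X] for w, _ in joint}
        ys = {w[Y] for w, _ in joint}
        zs = {tuple(w[Z] for Z in Zs) for w, _ in joint}
        for zvals in zs:
            z = dict(zip(Zs, zvals))
            pz = prob(joint, z)
            for y in ys:
                pyz = prob(joint, {**z, Y: y})
                if pyz == 0:
                    continue          # the conditional probability is not defined
                for x in xs:
                    lhs = prob(joint, {**z, Y: y, X: x}) / pyz
                    rhs = prob(joint, {**z, X: x}) / pz
                    if abs(lhs - rhs) > tol:
                        return False
        return True

    # Two independent coin flips are independent given no other variables:
    coins = [({'X': x, 'Y': y}, 0.25) for x in (0, 1) for y in (0, 1)]
    print(cond_indep(coins, 'X', 'Y', []))   # True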

Example 8.12.

Consider a probabilistic model of students and exams. It is reasonable to assume that the random variable Intelligence is independent of Works_hard, given no observations. If you find that a student works hard, it does not tell you anything about their intelligence.

The answers to the exam (the variable Answers) would depend on whether the student is intelligent and works hard. Thus, given Answers, Intelligence would be dependent on Works_hard: if you found someone had insightful answers and did not work hard, your belief that they are intelligent would go up.

The grade on the exam (variable Grade) should depend on the student’s answers, not on the intelligence or whether the student worked hard. Thus Grade would be independent of Intelligence given Answers. However, if the answers were not observed, Intelligence would affect Grade (because highly intelligent students would be expected to have different answers than less intelligent students); thus Grade is dependent on Intelligence given no observations.
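The following sketch parameterizes this example with invented numbers (the probabilities and the factorization P(I) P(W) P(A | I, W) P(G | A) are assumptions for illustration only, not from the book) and checks the two claims numerically:

    from itertools import product

    # Binary variables: Intelligence (I), Works_hard (W), Answers (A), Grade (G).
    p_I = {True: 0.3, False: 0.7}                    # P(Intelligence)
    p_W = {True: 0.6, False: 0.4}                    # P(Works_hard)
    p_A = {(True, True): 0.9, (True, False): 0.6,    # P(Answers=insightful | I, W)
           (False, True): 0.4, (False, False): 0.1}
    p_G = {True: 0.8, False: 0.2}                    # P(Grade=high | Answers)

    def joint(i, w, a, g):
        """Probability of one world under the factored model."""
        pa = p_A[(i, w)] if a else 1 - p_A[(i, w)]
        pg = p_G[a] if g else 1 - p_G[a]
        return p_I[i] * p_W[w] * pa * pg

    def cond(query, evidence):
        """P(query | evidence), each a dict over {'I', 'W', 'A', 'G'}."""
        num = den = 0.0
        for i, w, a, g in product([True, False], repeat=4):
            world = {'I': i, 'W': w, 'A': a, 'G': g}
            if all(world[v] == val for v, val in evidence.items()):
                p = joint(i, w, a, g)
                den += p
                if all(world[v] == val for v, val in query.items()):
                    num += p
        return num / den

    # Grade is independent of Intelligence given Answers (the two values match):
    print(cond({'G': True}, {'A': True, 'I': True}),
          cond({'G': True}, {'A': True, 'I': False}))
    # Intelligence and Works_hard become dependent given Answers: insightful
    # answers without hard work raise the belief in intelligence.
    print(cond({'I': True}, {'A': True, 'W': False}),
          cond({'I': True}, {'A': True, 'W': True}))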

Proposition 8.5.

The following four statements are equivalent, as long as the conditional probabilities are well defined:

  1. X is conditionally independent of Y given Z.

  2. Y is conditionally independent of X given Z.

  3. P(X=x | Y=y ∧ Z=z) = P(X=x | Y=y′ ∧ Z=z) for all values x, y, y′, and z. That is, in the context that you are given a value for Z, changing the value of Y does not affect the belief in X.

  4. P(X,Y | Z) = P(X | Z) P(Y | Z).

The proof is left as an exercise. See Exercise 3.

Variables X and Y are unconditionally independent if P(X,Y)=P(X)P(Y), that is, if they are conditionally independent given no observations. Note that X and Y being unconditionally independent does not imply they are conditionally independent given some other information Z.
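A standard illustration of this caveat, sketched here with made-up values: X and Y are independent fair coin flips and Z is their exclusive-or. Unconditionally P(X,Y)=P(X)P(Y), but given Z the product form of Proposition 8.5 fails:

    from itertools import product

    # Worlds (x, y, z) with z = x xor y; X and Y are uniform and independent.
    worlds = {(x, y, x ^ y): 0.25 for x, y in product([0, 1], repeat=2)}

    def p(pred):
        """Probability that predicate `pred` holds of a world."""
        return sum(pr for w, pr in worlds.items() if pred(w))

    # Unconditional independence: P(X=1, Y=1) = P(X=1) P(Y=1)  ->  0.25 = 0.25
    print(p(lambda w: w[0] == 1 and w[1] == 1),
          p(lambda w: w[0] == 1) * p(lambda w: w[1] == 1))

    # Given Z=0, X and Y are perfectly correlated:
    # P(X=1, Y=1 | Z=0) = 0.5  but  P(X=1 | Z=0) P(Y=1 | Z=0) = 0.25
    pz0 = p(lambda w: w[2] == 0)
    print(p(lambda w: w[0] == 1 and w[1] == 1 and w[2] == 0) / pz0,
          (p(lambda w: w[0] == 1 and w[2] == 0) / pz0)
          * (p(lambda w: w[1] == 1 and w[2] == 0) / pz0))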

Conditional independence is a useful assumption that is often natural to assess and can be exploited in inference. It is very rare that we would have a table of probabilities of worlds and assess independence numerically.