# 8.4.2 Representing Conditional Probabilities and Factors

A conditional probability distribution is a function on variables; given an assignment of values to the variables, it gives a number. A factor is a function of a set of variables; the variables it depends on are the scope of the factor. Thus a conditional probability is a factor, as it is a function on variables. This section explores some variants for representing factors and conditional probabilities. Some of the representations are for arbitrary factors and some are specific to conditional probabilities.

Factors do not have to be implemented as conditional probability tables. The tabular representation is often too large when there are many parents, as its size is exponential in the number of parents. Often, structure in conditional probabilities can be exploited.

One such structure exploits context-specific independence, where one variable is conditionally independent of another, given a particular value of a third variable.

###### Example 8.25.

Suppose a robot can go outside or get coffee (so the variable $Action$ has domain $\{go\_out,get\_coffee\}$). Whether it gets wet (variable $Wet$) depends on whether there is rain (variable $Rain$) in the context that it went out, or on whether the cup was full (variable $Full$) if it got coffee. Thus $Wet$ is independent of $Rain$ given $Action{=}get\_coffee$, but is dependent on $Rain$ given $Action{=}go\_out$. Also, $Wet$ is independent of $Full$ given $Action{=}go\_out$, but is dependent on $Full$ given $Action{=}get\_coffee$.

Context-specific independence may be exploited in a representation by omitting the numbers that are not needed. A simple representation for conditional probabilities that models context-specific independence is a decision tree, where the parents in a belief network correspond to the input features and the child corresponds to the target feature. Another representation is in terms of definite clauses with probabilities. Context-specific independence could also be represented as tables that have contexts that specify when they should be used, as in the following example.

###### Example 8.26.

The conditional probability $P(Wet\mid Action,Rain,Full)$ could be represented as a decision tree, as definite clauses with probabilities, or as tables with contexts:

[Decision tree for $Wet$: the root splits on $Action$; the $go\_out$ branch splits on $Rain$ and the $get\_coffee$ branch splits on $Full$.]

 $\begin{array}[]{l}wet\leftarrow go\_out\wedge rain:0.8\\ wet\leftarrow go\_out\wedge\neg rain:0.1\\ wet\leftarrow get\_coffee\wedge full:0.6\\ wet\leftarrow get\_coffee\wedge\neg full:0.3\end{array}\qquad\begin{array}[]{ll}go\_out:&\begin{array}[]{|ll|l|}\hline Rain&Wet&Prob\\ \hline t&t&0.8\\ t&f&0.2\\ f&t&0.1\\ f&f&0.9\\ \hline\end{array}\\ &\\ get\_coffee:&\begin{array}[]{|ll|l|}\hline Full&Wet&Prob\\ \hline t&t&0.6\\ t&f&0.4\\ f&t&0.3\\ f&f&0.7\\ \hline\end{array}\end{array}$
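The tables-with-contexts representation can be sketched in code. The following is a minimal sketch, not the book's implementation; the function name and dictionary layout are illustrative. Each context (a value of $Action$) selects a smaller table over only the variables that matter in that context:

```python
def p_wet(action, rain, full):
    """P(Wet=true | Action, Rain, Full) as tables with contexts.

    In the context Action=go_out only Rain matters; in the context
    Action=get_coffee only Full matters (context-specific independence).
    """
    tables = {
        "go_out":     {True: 0.8, False: 0.1},   # keyed by value of Rain
        "get_coffee": {True: 0.6, False: 0.3},   # keyed by value of Full
    }
    key = rain if action == "go_out" else full
    return tables[action][key]

# Wet is independent of Full in the context Action=go_out:
assert p_wet("go_out", True, True) == p_wet("go_out", True, False)
```

A full tabular representation would need eight numbers for $P(Wet\mid Action,Rain,Full)$; the context tables need only four.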

Another common representation is a noisy-or, where the child is true if one of the parents is activated, and each parent has a probability of activation. So the child is an “or” of the activations of the parents. The noisy-or is defined as follows. If $X$ has Boolean parents $V_{1},\dots,V_{k}$, the probability is defined by $k+1$ parameters $p_{0},\dots,p_{k}$. Invent $k+1$ new Boolean variables $A_{0},A_{1},\dots,A_{k}$, where for each $i>0$, $A_{i}$ has $V_{i}$ as its only parent. Define $P(A_{i}{=}true\mid V_{i}{=}true)=p_{i}$ and $P(A_{i}{=}true\mid V_{i}{=}false)=0$. The bias term, $A_{0}$, has $P(A_{0}{=}true)=p_{0}$. The variables $A_{0},\dots,A_{k}$ are the parents of $X$, and the conditional probability $P(X\mid A_{0},A_{1},\dots,A_{k})$ is 1 if any of the $A_{i}$ are true and is 0 if all of the $A_{i}$ are false. Thus $p_{0}$ is the probability of $X$ when all of the $V_{i}$ are false; the probability of $X$ increases as more of the $V_{i}$ become true.

###### Example 8.27.

Suppose the robot could get wet from rain or coffee. There is a probability that it gets wet from rain if it rains, a probability that it gets wet from coffee if it has coffee, and a probability that it gets wet for other reasons. The robot gets wet if it gets wet from one of them, giving the “or”. We could have $P(wet\_from\_rain\mid rain)=0.3$, $P(wet\_from\_coffee\mid coffee)=0.2$, and, for the bias term, $P(wet\_for\_other\_reasons)=0.1$. The robot is wet if it is wet from rain, wet from coffee, or wet for other reasons.
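Because the activations are independent, $X$ is false only when the bias and every active parent all fail to activate, so $P(X{=}true)=1-(1-p_{0})\prod_{i:V_{i}=true}(1-p_{i})$. A minimal sketch with the numbers from this example (the function name is illustrative):

```python
def noisy_or(p0, activations):
    """P(X=true) for a noisy-or CPD.

    X is false only if the bias and every active parent all
    independently fail to activate.
    p0: bias probability; activations: activation probabilities p_i
    of those parents V_i that are true.
    """
    p_all_fail = 1 - p0
    for p in activations:
        p_all_fail *= 1 - p
    return 1 - p_all_fail

# bias 0.1, rain activation 0.3, coffee activation 0.2:
p = noisy_or(0.1, [0.3, 0.2])   # P(wet | rain ∧ coffee) = 1 - 0.9*0.7*0.8 = 0.496
```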

A log-linear model is a model where probabilities are specified as a product of terms. When the terms are all non-zero (strictly positive), the log of the product is the sum of the logs of the terms, and a sum of terms is often a convenient form to work with. To see how such a form is used to represent conditional probabilities, we can write the conditional probability in the following way:

 $\displaystyle\begin{array}[]{rl}P(h\mid e)&=\frac{P(h\land e)}{P(h\land e)+P(\neg h\land e)}\\ &=\frac{1}{1+P(\neg h\land e)/P(h\land e)}\\ &=\frac{1}{1+e^{-\log(P(h\land e)/P(\neg h\land e))}}\\ &=sigmoid(\log odds(h\mid e))\end{array}$
• The sigmoid function, $sigmoid(x)=1/(1+e^{-x})$, plotted in Figure 7.9, has been used previously in this book for logistic regression and neural networks.

• The conditional odds (as often used by bookmakers in gambling) is

 $\displaystyle odds(h\mid e)=\frac{P(h\land e)}{P(\neg h\land e)}=\frac{P(e\mid h)}{P(e\mid\neg h)}*\frac{P(h)}{P(\neg h)}$

where $\frac{P(h)}{P(\neg h)}=\frac{P(h)}{1-P(h)}$ is the prior odds and $\frac{P(e\mid h)}{P(e\mid\neg h)}$ is the likelihood ratio. For a fixed $h$, it is often useful to represent $P(e\mid h)/P(e\mid\neg h)$ as a product of terms, and so the log is a sum of terms.
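As a quick numeric check of the identity $P(h\mid e)=sigmoid(\log odds(h\mid e))$, the following uses made-up joint probabilities chosen only for illustration:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Made-up joint probabilities (illustrative only).
p_h_and_e = 0.06      # P(h ∧ e)
p_noth_and_e = 0.18   # P(¬h ∧ e)

log_odds = math.log(p_h_and_e / p_noth_and_e)
direct = p_h_and_e / (p_h_and_e + p_noth_and_e)   # P(h | e) by definition

assert abs(sigmoid(log_odds) - direct) < 1e-12   # both give 0.25
```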

The logistic regression model of a conditional probability $P(X\mid Y_{1},\dots,Y_{k})$ is of the form

 $P(x\mid Y_{1},\dots,Y_{k})=sigmoid\left(\sum_{i}w_{i}*Y_{i}\right)$

where $Y_{i}$ is assumed to have domain $\{0,1\}$. (Assume a dummy input $Y_{0}$ which is always 1.) This corresponds to a decomposition of the conditional probability, where the probabilities are a product of terms for each $Y_{i}$.

Note that $P(X\mid Y_{1}{=}0,\dots,Y_{k}{=}0)=sigmoid(w_{0})$. Thus $w_{0}$ determines the probability when all of the parents are zero. Each other $w_{i}$ specifies the amount added to the weighted sum when $Y_{i}{=}1$: if $Y_{i}$ is Boolean with values $\{0,1\}$, then $P(X\mid Y_{1}{=}0,\dots,Y_{i}{=}1,\dots,Y_{k}{=}0)=sigmoid(w_{0}+w_{i})$. The logistic regression model makes the independence assumption that the influence of each parent on the child does not depend on the values of the other parents. Learning logistic regression models was the topic of Section 7.3.2.

###### Example 8.28.

The probability of $wet$ given whether there is rain, coffee, or kids, and whether the robot has a coat, may be given by:

 $\displaystyle P(wet\mid Rain,Coffee,Kids,Coat)=sigmoid(-1.0+2.0*Rain+1.0*Coffee+0.5*Kids-1.5*Coat)$

This implies the following conditional probabilities:

 $\displaystyle P(wet\mid\neg rain\land\neg coffee\land\neg kids\land\neg coat)=sigmoid(-1.0)=0.27$

 $\displaystyle P(wet\mid rain\land\neg coffee\land\neg kids\land\neg coat)=sigmoid(1.0)=0.73$

 $\displaystyle P(wet\mid rain\land\neg coffee\land\neg kids\land coat)=sigmoid(-0.5)=0.38$

This requires fewer parameters than the $2^{4}=16$ parameters required for a tabular representation, but makes more independence assumptions.
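The numbers in Example 8.28 can be reproduced directly from the weighted sum (a minimal sketch; the function name is illustrative):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def p_wet(rain, coffee, kids, coat):
    """Logistic regression CPD of Example 8.28; each input is 0 or 1."""
    return sigmoid(-1.0 + 2.0*rain + 1.0*coffee + 0.5*kids - 1.5*coat)

print(round(p_wet(0, 0, 0, 0), 2))  # sigmoid(-1.0) = 0.27
print(round(p_wet(1, 0, 0, 0), 2))  # sigmoid(1.0)  = 0.73
print(round(p_wet(1, 0, 0, 1), 2))  # sigmoid(-0.5) = 0.38
```

Only five weights are needed here, versus $2^{4}=16$ table entries.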

Noisy-or and logistic regression models are similar, but different. Noisy-or is typically used when the causal assumption is appropriate that a variable is true exactly when one of its parents causes it to be true. Logistic regression is used when the influences of the various parents add up to affect the child.