# 10.3.1 Learning the Probabilities

The simplest case occurs when a learning agent is given the structure of the model and all the variables have been observed. The agent must learn the conditional probabilities, $P(X_{i}\mid parents(X_{i}))$ for each variable $X_{i}$. Learning the conditional probabilities is an instance of supervised learning, where $X_{i}$ is the target feature, and the parents of $X_{i}$ are the input features.

For cases with few parents, each conditional probability can be learned separately using the training examples and prior knowledge, such as pseudocounts.

###### Example 10.11.

Figure 10.7 shows a typical example. We are given the model and the data, and we must infer the probabilities.

For example, one of the elements of $P(E\mid AB)$ is

 $\displaystyle P(E{=}t\mid A{=}t\wedge B{=}f)=\frac{n_{1}+c_{1}}{n_{0}+n_{1}+c_{0}+c_{1}}$

where $n_{1}$ is the number of cases where $E{=}t\wedge A{=}t\wedge B{=}f$, and $c_{1}\geq 0$ is the corresponding pseudocount that is provided before any data is observed. Similarly, $n_{0}$ is the number of cases where $E{=}f\wedge A{=}t\wedge B{=}f$, and $c_{0}\geq 0$ is the corresponding pseudocount.
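The estimate above can be computed directly from the data. The following sketch uses made-up examples and uniform pseudocounts $c_0 = c_1 = 1$ (a Laplace-style prior); the data and pseudocount values are illustrative, not from the text.

```python
# Estimating P(E=t | A=t, B=f) from data with pseudocounts.
# The examples below are hypothetical, chosen only to illustrate the formula.
examples = [
    {"A": True,  "B": False, "E": True},
    {"A": True,  "B": False, "E": True},
    {"A": True,  "B": False, "E": False},
    {"A": False, "B": False, "E": True},   # does not match A=t, B=f
    {"A": True,  "B": True,  "E": False},  # does not match A=t, B=f
]

c0, c1 = 1, 1  # pseudocounts for E=f and E=t, provided before any data

# n1: cases with E=t, A=t, B=f; n0: cases with E=f, A=t, B=f
matching = [ex for ex in examples if ex["A"] and not ex["B"]]
n1 = sum(ex["E"] for ex in matching)
n0 = len(matching) - n1

p = (n1 + c1) / (n0 + n1 + c0 + c1)
print(p)  # (2+1) / (1+2+1+1) = 0.6
```

With no pseudocounts ($c_0 = c_1 = 0$) this reduces to the empirical frequency $n_1/(n_0+n_1)$; the pseudocounts pull the estimate toward the prior when data is scarce.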

If a variable has many parents, estimating probabilities with counts and pseudocounts can suffer from overfitting. Overfitting is most severe when there are few examples for some combinations of the parent variables. In that case, the supervised learning techniques of Chapter 7 can be used. Decision trees can be used for arbitrary discrete variables. Logistic regression and neural networks can represent the conditional probability of a binary variable given its parents. For non-binary discrete variables, indicator variables may be used.
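As a sketch of the logistic-regression option: instead of a separate table entry per parent assignment, a single weighted model gives $P(E{=}t \mid parents)$ for every assignment, sharing data across them. The toy data, learning rate, and iteration count below are assumptions for illustration.

```python
# Logistic regression representing P(E=t | A, B), trained by gradient
# descent on the negative log-likelihood. Data here is made up: when
# A=t and B=f, E is always true, so the model should learn a high
# probability for that parent assignment.
import numpy as np

X = np.array([[1, 0], [1, 0], [1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0], dtype=float)  # observed values of E

w = np.zeros(2)   # one weight per parent
b = 0.0           # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    p = sigmoid(X @ w + b)               # predicted P(E=t) per example
    w -= 0.5 * (X.T @ (p - y)) / len(y)  # gradient step on weights
    b -= 0.5 * np.mean(p - y)            # gradient step on bias

# One model answers queries for any parent assignment:
print(sigmoid(np.array([1.0, 0.0]) @ w + b))  # P(E=t | A=t, B=f), near 1
```

The same weights answer $P(E{=}t \mid A{=}f \wedge B{=}t)$ and every other assignment, which is what makes this representation compact when the number of parents is large.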