foundations of computational agents
The information theory box discussed how to represent information using bits. For $x\in domain(X)$, it is possible to build a code that uses $-{\mathrm{log}}_{2}P(x)$ bits (or the next integer above this) to identify $x$. The expected number of bits needed to transmit a value of $X$ is then
$$H(X)=\sum _{x\in domain(X)}-P(X=x)*{\mathrm{log}}_{2}P(X=x)$$
This is the information content or entropy of random variable $X$.
[Note that, unlike the notation used elsewhere in the book, $H$ is a function of the variable, not a function of the values of the variable. Thus, for a variable $X$, the entropy $H(X)$ is a number, unlike $P(X)$, which is a function that, given a value for $X$, returns a number.]
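To make the definition concrete, entropy can be computed directly from a distribution. The following is a minimal Python sketch (the function name and the coin distributions are illustrative, not from the text):

```python
import math

def entropy(dist):
    """Entropy in bits of a random variable whose distribution is
    given as a {value: probability} mapping."""
    # Terms with zero probability contribute nothing (0 * log 0 is taken as 0).
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# A fair coin needs 1 bit on average; a biased coin needs less,
# because the more likely value can be given a shorter code.
entropy({"heads": 0.5, "tails": 0.5})   # 1.0
entropy({"heads": 0.9, "tails": 0.1})   # about 0.47
```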
The entropy of $X$ given the observation $Y=y$ is
$$H(X\mid Y=y)=\sum _{x}-P(X=x\mid Y=y)*{\mathrm{log}}_{2}P(X=x\mid Y=y).$$
Before $Y$ is observed, the expectation over $Y$,
$$H(X\mid Y)=\sum _{y}P(Y=y)*\sum _{x}-P(X=x\mid Y=y)*{\mathrm{log}}_{2}P(X=x\mid Y=y),$$
is called the conditional entropy of $X$ given $Y$.
For a test that determines the value of $Y$, the information gain from the test is $H(X)-H(X\mid Y)$: the expected number of bits needed to describe $X$ minus the expected number of bits needed to describe $X$ after the value of $Y$ is learned. The information gain is never negative.
Suppose spinning a wheel in a game produces a number in the set $\{1,2,\dots,8\}$, each with equal probability. Let $S$ be the outcome of a spin. Then $H(S)=-\sum _{i=1}^{8}\frac{1}{8}*{\mathrm{log}}_{2}\frac{1}{8}=3$ bits.
Suppose there is a sensor $G$ that detects whether the outcome is greater than 6: $G=true$ if $S>6$. Then $H(S\mid G)=-0.25*{\mathrm{log}}_{2}\frac{1}{2}-0.75*{\mathrm{log}}_{2}\frac{1}{6}\approx 2.19$. The information gain of $G$ is thus $3-2.19=0.81$ bits. A fraction of a bit makes sense in that it is possible to design a code that uses about 219 bits to describe 100 outcomes.
For an “even” sensor $E$, where $E=true$ if $S$ is even, $H(S\mid E)=-0.5*{\mathrm{log}}_{2}\frac{1}{4}-0.5*{\mathrm{log}}_{2}\frac{1}{4}=2$. The information gain of $E$ is thus $3-2=1$ bit.
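The wheel example can be checked numerically. The sketch below assumes the same uniform wheel and the two sensors $G$ and $E$ described above:

```python
import math

def entropy(probs):
    """Entropy in bits of a distribution given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# S is uniform over {1,...,8}, so H(S) = 3 bits.
h_s = entropy([1/8] * 8)

# Sensor G: true iff S > 6.  P(G=true) = 2/8, leaving S uniform over 2 outcomes;
# P(G=false) = 6/8, leaving S uniform over 6 outcomes.
h_s_given_g = 0.25 * entropy([1/2] * 2) + 0.75 * entropy([1/6] * 6)
gain_g = h_s - h_s_given_g   # about 0.81 bits

# Sensor E: true iff S is even.  Either answer leaves 4 equally likely outcomes.
h_s_given_e = 0.5 * entropy([1/4] * 4) + 0.5 * entropy([1/4] * 4)
gain_e = h_s - h_s_given_e   # exactly 1 bit
```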
The notion of information is used for a number of tasks:
In diagnosis, an agent could choose a test that provides the most information.
In decision tree learning, information theory provides a useful criterion for choosing which property to split on: split on the property that provides the greatest information gain. The elements it must distinguish are the different values in the target concept, and the probabilities are obtained from the proportion of each value in the training set remaining at each node.
In Bayesian learning, information theory provides a basis for deciding which is the best model given some data.
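The decision tree criterion above can be sketched in a few lines of Python; the node counts below are hypothetical, chosen only to illustrate the computation:

```python
import math

def entropy_counts(counts):
    """Entropy in bits of the class distribution given by a list of counts."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def split_gain(parent_counts, children_counts):
    """Information gain of splitting a node with class counts parent_counts
    into children with the given per-child class counts."""
    n = sum(parent_counts)
    # Expected entropy after the split, weighting each child by its proportion.
    remainder = sum(sum(c) / n * entropy_counts(c) for c in children_counts)
    return entropy_counts(parent_counts) - remainder

# A node with 6 positive and 6 negative examples, split by a Boolean
# property into children with class counts [4, 2] and [2, 4]:
split_gain([6, 6], [[4, 2], [2, 4]])   # about 0.08 bits
```

A property that separated the classes perfectly, into children [6, 0] and [0, 6], would have gain 1 bit, the full entropy of the node.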