foundations of computational agents
Supervised learning methods take the input features, the target features, and the training data, and return predictors: functions on input features that predict values for the target features. Learning methods are often characterized by how the predictors are represented. This section considers some basic methods from which other methods are built. Initially, assume a single target feature; the learner returns a predictor for this target.
A decision tree is a simple representation for classifying examples. Decision tree learning is one of the simplest useful techniques for supervised classification learning.
A decision tree is a tree in which
each internal (nonleaf) node is labeled with a condition, a Boolean function of examples
each internal node has two branches, one labeled $true$ and the other $false$
each leaf of the tree is labeled with a point estimate.
Decision trees are also called classification trees when the target (leaf) is a classification, and regression trees when the target is real-valued.
To classify an example, filter it down the tree, as follows. Each condition encountered in the tree is evaluated and the arc corresponding to the result is followed. When a leaf is reached, the classification corresponding to that leaf is returned. A decision tree corresponds to a nested if–then–else structure in a programming language.
Figure 7.8 shows two possible decision trees for the examples of Figure 7.1. Each decision tree can be used to classify examples according to the user’s action. To classify a new example using the tree on the left, first determine the length. If it is long, predict $skips$. Otherwise, check the thread. If the thread is new, predict $reads$. Otherwise, check the author and predict $reads$ only if the author is known. This decision tree can correctly classify all examples in Figure 7.1.
The left tree corresponds to the program defining $\widehat{UserAction}(e)$:
define $\widehat{UserAction}(e)$:  
if $long(e)$: return $skips$  
else if $new(e)$: return $reads$  
else if $unknown(e)$: return $skips$  
else: return $reads$ 
The tree on the right returns a numerical prediction for $reads$:
define $\widehat{UserAction}(e)$:  
if $long(e)$: return 0  
else: return 0.82 
To use decision trees as a target representation, a number of questions arise.
Given some training examples, what decision tree should be generated? Because a decision tree can represent any function of discrete input features, the bias that is necessary is incorporated into the preference of one decision tree over another. One proposal is to prefer the smallest tree that is consistent with the data, which could mean the tree with the least depth or the tree with the fewest nodes. Which decision trees are the best predictors of unseen data is an empirical question.
How should an agent go about building a decision tree? One way is to search the space of decision trees for the smallest decision tree that fits the data. Unfortunately, the space of decision trees is enormous (see Exercise 7.7). A practical solution is to carry out a greedy search on the space of decision trees, with the goal of minimizing the error. This is the idea behind the algorithm described below.
A decision tree can be seen as a branching program that takes an example and returns a prediction for that example. The program is either
a function that ignores its argument and returns a point prediction for all of the examples that reach this point (which corresponds to a leaf), or
of the form “if $c(e)$ then ${t}_{1}(e)$ else ${t}_{0}(e)$” where $c$ is a Boolean condition, and ${t}_{1}$ and ${t}_{0}$ are decision trees; ${t}_{1}$ is the tree that is used when the condition $c$ is true of example $e$, and ${t}_{0}$ is the tree used when $c(e)$ is false.
The algorithm $Decision\mathrm{\_}tree\mathrm{\_}learner$ of Figure 7.9 mirrors the recursive decomposition of a tree. It builds a decision tree from the top down as follows. The input to the algorithm is a set of input conditions (Boolean functions of examples that use only input features), the target feature, a set of training examples, and a real-valued parameter, $\gamma $, discussed below. If the input features are Boolean, they can be used directly as the conditions.
A greedy optimal split is a condition that results in the lowest error if the learner were allowed only one split, and that split were on this condition. $sum\mathrm{\_}loss(Es)$ gives the sum of losses of training examples $Es$ for the loss function assumed, given that the optimal prediction for that loss function is used, as given in Figure 7.5. The procedure $select\mathrm{\_}split$ returns a greedy optimal split if the sum of losses after the split improves on the sum of losses before the split by at least the threshold $\gamma $. If there is no such condition, $select\mathrm{\_}split$ returns $None$.
Adding a split increases the size of the tree by 1. The threshold $\gamma $ can be seen as a penalty for increasing the size of the tree by 1. If positive, $\gamma $ also prevents a split whose apparent improvement in the sum of losses is due solely to rounding error.
If $select\mathrm{\_}split$ returns $None$, the decision tree learning algorithm creates a leaf. The function $leaf\mathrm{\_}value(Es)$ returns the value that is used as a prediction of the target for examples $Es$ that reach this node. It ignores the input features for these examples and returns a point estimate, which is the case considered in Section 7.2.2. The decision tree algorithm returns a function that takes an example and returns that point estimate.
If $select\mathrm{\_}split$ returns condition $c$, the learner splits on condition $c$ by partitioning the training examples into those examples $e$ with $c(e)$ true and those examples with $c(e)$ false. It recursively builds a subtree for each of these bags of examples. It returns a function that, given an example, tests whether $c$ is true of the example, and then uses the prediction from the appropriate subtree.
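The recursive structure described above can be sketched in Python. This is a minimal illustration, not the algorithm of Figure 7.9 itself: it assumes squared loss with the mean as the leaf value, represents examples as dictionaries, keeps the full condition set in recursive calls, and requires a strict improvement of more than $\gamma$ before splitting (which also guarantees termination, since a split that leaves one side empty gives no strict improvement).

```python
def sum_loss(ys):
    # Sum of squared losses when predicting the mean, which is the
    # optimal point estimate for squared loss (cf. Figure 7.5).
    if not ys:
        return 0.0
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def select_split(conditions, target, examples, gamma):
    # Return a greedy optimal condition, or None if no split improves
    # the sum of losses by more than gamma.
    before = sum_loss([target(e) for e in examples])
    best, best_loss = None, before - gamma
    for c in conditions:
        true_ys = [target(e) for e in examples if c(e)]
        false_ys = [target(e) for e in examples if not c(e)]
        after = sum_loss(true_ys) + sum_loss(false_ys)
        if after < best_loss:
            best, best_loss = c, after
    return best

def decision_tree_learner(conditions, target, examples, gamma):
    c = select_split(conditions, target, examples, gamma)
    if c is None:
        # Make a leaf: ignore the input features and return a point estimate.
        value = sum(target(e) for e in examples) / len(examples)
        return lambda e: value
    true_exs = [e for e in examples if c(e)]
    false_exs = [e for e in examples if not c(e)]
    t1 = decision_tree_learner(conditions, target, true_exs, gamma)
    t0 = decision_tree_learner(conditions, target, false_exs, gamma)
    # The returned predictor tests c and delegates to the right subtree.
    return lambda e: t1(e) if c(e) else t0(e)
```

For instance, on a tiny dataset where the target is true exactly when both `x` and `y` are true, the learner finds a tree that first splits on one of the inputs and then the other.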
Consider applying $Decision\mathrm{\_}tree\mathrm{\_}learner$ to the classification data of Figure 7.1, with $\gamma =0$. The initial call is
$Decision\mathrm{\_}tree\mathrm{\_}learner(\{known,new,long,home\},User\mathrm{\_}action,$ 
$\{{e}_{1},{e}_{2},\mathrm{\dots},{e}_{18}\},0)$ 
where $known$ is true when $Author=known$, and similarly for the other conditions.
Suppose the stopping criterion is not true and the algorithm selects the condition $long$ to split on. It then calls
$Decision\mathrm{\_}tree\mathrm{\_}learner(\{known,new,home\},User\mathrm{\_}action,$ 
$\{{e}_{1},{e}_{3},{e}_{4},{e}_{6},{e}_{9},{e}_{10},{e}_{12}\},0)$ 
where $\{{e}_{1},{e}_{3},{e}_{4},{e}_{6},{e}_{9},{e}_{10},{e}_{12}\}$ is the set of training examples with $Length=long$.
All of these examples agree on the user action; therefore, the algorithm returns the prediction $skips$. The second recursive call is
$Decision\mathrm{\_}tree\mathrm{\_}learner(\{known,new,home\},User\mathrm{\_}action,$ 
$\{{e}_{2},{e}_{5},{e}_{7},{e}_{8},{e}_{11},{e}_{13},{e}_{14},{e}_{15},{e}_{16},{e}_{17},{e}_{18}\},0).$ 
Not all of the examples agree on the user action, so assuming the stopping criterion is false, the algorithm selects a condition to split on. Suppose it selects $new$. Eventually, this recursive call returns the function on example $e$ in the case when $Length$ is $short$:
$\text{if }new(e)\text{ then }reads$  
$\text{else if }unknown(e)\text{ then }skips\text{ else }reads.$ 
The final result is the first predictor of Example 7.8.
When the loss is log loss with base 2, the mean of the losses, $sum\mathrm{\_}loss(Es)/|Es|$, is the entropy of the empirical distribution of $Es$. The number of bits to describe $Es$ after testing the condition $c$ is $val$, defined on line 25 of Figure 7.9. The entropy of the distribution created by the split is $val/|Es|$. The difference of these two is the information gain of the split. Sometimes information gain is used even when the optimality criterion is some other error measure; for example, when maximizing accuracy, it is possible to select a split to optimize log loss, but return the mode as the leaf value. See Exercise 7.6.
The following example shows details of the split choice for the case where the split is chosen using log loss, and the empirical distribution is used as the leaf value.
In the running example of learning the user action from the data of Figure 7.1, suppose the aim is to minimize the log loss. The algorithm greedily chooses a split that minimizes the log loss. Suppose $\gamma $ is 0.
Without any splits, the optimal prediction on the training set is the empirical frequency. There are nine examples with $User\mathrm{\_}action=reads$ and nine examples with $User\mathrm{\_}action=skips$, and so $reads$ is predicted with probability 0.5. The mean log loss is equal to $-(18\ast {\mathrm{log}}_{2}0.5)/18=1$.
Consider splitting on $Author$. This partitions the examples into $[{e}_{1}$, ${e}_{4}$, ${e}_{5}$, ${e}_{6}$, ${e}_{9}$, ${e}_{10}$, ${e}_{12}$, ${e}_{13}$, ${e}_{14}$, ${e}_{15}$, ${e}_{16}$, ${e}_{17}]$ with $Author=known$ and $[{e}_{2}$, ${e}_{3}$, ${e}_{7}$, ${e}_{8}$, ${e}_{11}$, ${e}_{18}]$ with $Author=unknown$, each of which is evenly split between the different user actions. The optimal prediction for each partition is again 0.5, and so the log loss after the split is again 1. In this case, finding out whether the author is known, by itself, provides no information about what the user action will be.
Splitting on $Thread$ partitions the examples into $[{e}_{1}$, ${e}_{2}$, ${e}_{5}$, ${e}_{8}$, ${e}_{10}$, ${e}_{12}$, ${e}_{14}$, ${e}_{15}$, ${e}_{17}$, ${e}_{18}]$ with $Thread=new$ and $[{e}_{3}$, ${e}_{4}$, ${e}_{6}$, ${e}_{7}$, ${e}_{9}$, ${e}_{11}$, ${e}_{13}$, ${e}_{16}]$ with $Thread=followup$. The examples with $Thread=new$ contain three examples with $User\mathrm{\_}action=skips$ and seven examples with $User\mathrm{\_}action=reads$; thus the optimal prediction for these is to predict $reads$ with probability $7/10$. The examples with $Thread=followup$ have two $reads$ and six $skips$. Thus, the best prediction for these is to predict $reads$ with probability $2/8$. The mean log loss after the split is
$-(3\ast {\mathrm{log}}_{2}(3/10)+7\ast {\mathrm{log}}_{2}(7/10)+2\ast {\mathrm{log}}_{2}(2/8)+6\ast {\mathrm{log}}_{2}(6/8))/18$  
$\approx 15.3/18\approx 0.85.$ 
Splitting on $Length$ divides the examples into $[{e}_{1}$, ${e}_{3}$, ${e}_{4}$, ${e}_{6}$, ${e}_{9}$, ${e}_{10}$, ${e}_{12}]$ and $[{e}_{2}$, ${e}_{5}$, ${e}_{7}$, ${e}_{8}$, ${e}_{11}$, ${e}_{13}$, ${e}_{14}$, ${e}_{15}$, ${e}_{16}$, ${e}_{17}$, ${e}_{18}]$. The former all agree on the value of $User\mathrm{\_}action$ and predict with probability 1. The user action divides the second set $9:2$, and so the mean log loss is
$$-(7\ast {\mathrm{log}}_{2}1+9\ast {\mathrm{log}}_{2}(9/11)+2\ast {\mathrm{log}}_{2}(2/11))/18\approx 7.5/18\approx 0.417.$$ 
Therefore, splitting on $Length$ is better than splitting on $Thread$ or $Author$, when greedily optimizing the log loss.
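The split comparison above can be reproduced directly from the counts in each partition. The following sketch uses a hypothetical helper, `mean_log_loss`, which is not part of Figure 7.9; it takes a list of $(reads, skips)$ counts, one pair per branch, uses the empirical frequency as the optimal leaf prediction, and returns the mean base-2 log loss.

```python
from math import log2

def mean_log_loss(partitions):
    # partitions: list of (n_reads, n_skips) counts, one per branch.
    # The optimal leaf prediction for log loss is the empirical
    # frequency; each example contributes -log2 of the probability
    # assigned to its observed value.
    total, loss = 0, 0.0
    for reads, skips in partitions:
        n = reads + skips
        total += n
        p = reads / n
        if reads:
            loss -= reads * log2(p)
        if skips:
            loss -= skips * log2(1 - p)
    return loss / total

no_split = mean_log_loss([(9, 9)])          # 1.0
author   = mean_log_loss([(6, 6), (3, 3)])  # 1.0: no information
thread   = mean_log_loss([(7, 3), (2, 6)])  # ≈ 0.85
length   = mean_log_loss([(0, 7), (9, 2)])  # ≈ 0.418
```

The counts are those of the running example: splitting on $Author$ leaves both sides evenly split, while splitting on $Length$ gives the lowest mean log loss.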
In the decision tree learning algorithm (Figure 7.9), Boolean input features can be used directly as the conditions. Non-Boolean input features are handled in a number of ways.
Suppose input variable $X$ is categorical, with domain $\{{v}_{1},\mathrm{\dots},{v}_{k}\}$. A binary indicator variable, ${X}_{i}$, can be associated with each value ${v}_{i}$, where ${X}_{i}(e)=1$ if $X(e)={v}_{i}$ and ${X}_{i}(e)=0$ otherwise. For each example $e$, exactly one of ${X}_{1}(e),\mathrm{\dots},{X}_{k}(e)$ is 1 and the others are 0.
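As a concrete sketch of indicator variables (the helper name `one_hot` is illustrative, not from the text):

```python
def one_hot(value, domain):
    # Indicator variables X_1..X_k for a categorical feature:
    # exactly one component is 1 for any value in the domain.
    return [1 if value == v else 0 for v in domain]

one_hot("followup", ["new", "followup"])  # [0, 1]
```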
When the domain of a feature is totally ordered, the feature is called an ordinal feature. This includes realvalued features as a special case, but might also include other features such as clothing sizes (S, M, L, XL, etc.), and highest level of education (none, primary, secondary, bachelor, etc.).
For an ordinal input feature $X$ and for a given value $v$, a Boolean feature can be constructed as a cut: a new feature that has value 1 when $X>v$ and 0 otherwise. Combining cuts allows for features that are true for intervals; for example, a branch might include the conditions $X>9$ is true and $X>17$ is false, which corresponds to the interval $9<X\le 17$.
Suppose the domain of input variable $X$ is totally ordered. To select the optimal value for the cut value $v$, sort the examples on the value of $X$ and sweep through the examples to consider each split value and select the best. See Exercise 7.8.
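The sort-and-sweep idea can be sketched as follows, here assuming squared loss with the mean as the prediction on each side of the cut (the function names are illustrative):

```python
def best_cut(pairs):
    # pairs: list of (x, y) with x ordinal and y the target value.
    # Sort on x, then sweep through the candidate cut positions,
    # evaluating the sum of squared losses when each side predicts
    # its own mean.
    pairs = sorted(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    n = len(ys)

    def sse(sub):
        m = sum(sub) / len(sub)
        return sum((y - m) ** 2 for y in sub)

    best_v, best_loss = None, sse(ys)  # no cut as the baseline
    for i in range(1, n):
        if xs[i] == xs[i - 1]:
            continue  # cannot cut between equal x values
        loss = sse(ys[:i]) + sse(ys[i:])
        if loss < best_loss:
            best_v, best_loss = xs[i - 1], loss  # cut is X > best_v
    return best_v, best_loss
```

A quadratic sweep is shown for clarity; maintaining running sums during the sweep reduces the cost to the sort's $O(n\log n)$.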
For ordinal features (including real-valued features), binning involves choosing a set of thresholds and creating a feature for each interval between the thresholds. The thresholds ${\alpha}_{1}<{\alpha}_{2}<\mathrm{\cdots}<{\alpha}_{k}$ make $k+1$ Boolean features: one that is true for $X$ when $X\le {\alpha}_{1}$, one for $X>{\alpha}_{k}$, and one for ${\alpha}_{i}<X\le {\alpha}_{i+1}$ for each $1\le i<k$. A bin of the form ${\alpha}_{i}<X\le {\alpha}_{i+1}$ would require two splits to represent using cuts. The ${\alpha}_{i}$ can be chosen upfront, for example, using percentiles of the training data, or chosen depending on the target.
For categorical feature $X$, there might be a better split of the form $X\in S$ where $S$ is a set of values, rather than only splitting on a single value, as is done with indicator variables. When the target $Y$ is Boolean, to find an appropriate set $S$, sort the values of $X$ by the proportion of $Y$ that are true; a greedy optimal split will be between values in this sorted list.
It is possible to expand the algorithm to allow multiway splits. To split on a multivalued variable, there would be a child for each value in the domain of the variable. This means that the representation of the decision tree becomes more complicated than the simple if–then–else form used for binary features. There are two main problems with this approach. The first is what to do with values of a feature for which there are no training examples. The second is that for most greedy splitting heuristics, including information gain, it is generally better to split on a variable with a larger domain because it produces more children and so can fit the data better than splitting on a feature with a smaller domain. However, splitting on a feature with a smaller domain keeps the representation more compact. A four-way split, for example, is equivalent to three binary splits; they both result in four leaves.
The algorithm does not split when $select\mathrm{\_}split$ returns $None$. This occurs when there are no examples, when there are no conditions remaining, when all examples have the same value on each condition, when all of the examples have the same target value, and when the improvement of the evaluation is less than the parameter $\gamma $. A number of other criteria have been suggested for stopping earlier.
Minimum child size: do not split more if one of the children will have fewer examples than a threshold.
Maximum depth: do not split more if the depth reaches a maximum.
It is possible that one condition may only work well in conjunction with other conditions, and the greedy method may not work when this occurs. One particularly tricky case is a parity function of $k$ Boolean variables that is true if an odd (or even) number of variables are true; knowing the values of fewer than $k$ of the variables gives no information about the value of the parity function. The simplest parity functions (for $k=2$) are exclusive-or and equivalence. Parity functions have complicated decision trees.
In some cases, greedy splitting does not find the simplest decision tree, and it is often useful to simplify the tree resulting from the top-down algorithm, as shown in the following example.
Consider a dataset with inputs $x$, $y$, and $z$ and target $t$. The target is true if $x$ is true and $y$ is true, or $x$ is false and $z$ is true. Figure 7.10 (a) shows a tree representation of this function. This tree can generate the data in the center (b).



Although the simplest tree first splits on $x$, splitting on $x$ provides no information; there is the same proportion of $t$ true when $x$ is true as when $x$ is false. Instead, the algorithm can split on $y$. When $y$ is true, there is a larger proportion of $t$ true than when $y$ is false. For the case where $y$ is true, splitting on $x$ perfectly predicts the target when $x$ is true. The resulting tree is given in Figure 7.10(c). Following the paths to $t=1$, this tree corresponds to $t$ being true when $(x\wedge y)\vee (y\wedge \neg x\wedge z)\vee (\neg y\wedge \neg x\wedge z)$, which can be simplified to $(x\wedge y)\vee (\neg x\wedge z)$. This is essentially the original tree.
Linear functions provide a basis for many learning algorithms. This section first covers regression, then considers classification.
Linear regression is the problem of fitting a linear function to a set of training examples, in which the input and target features are real numbers.
Suppose the input features, ${X}_{1},\mathrm{\dots},{X}_{m}$, are all real numbers (which includes the $\{0,1\}$ case) and there is a single target feature $Y$. A linear function of the input features is a function of the form
${\widehat{Y}}^{\overline{w}}(e)$  $={w}_{0}+{w}_{1}\ast {X}_{1}(e)+\mathrm{\cdots}+{w}_{m}\ast {X}_{m}(e)$  
$={\displaystyle \sum _{i=0}^{m}}{w}_{i}\ast {X}_{i}(e)$ 
where $\overline{w}=({w}_{0},{w}_{1},\mathrm{\dots},{w}_{m})$ is a vector (tuple) of weights, and ${X}_{0}$ is a special feature whose value is always 1.
Suppose $E$ is a set of examples. The mean squared loss on examples $E$ for target $Y$ is the error
$error(E,\overline{w})$  $={\displaystyle \frac{1}{|E|}}{\displaystyle \sum _{e\in E}}{({\widehat{Y}}^{\overline{w}}(e)-Y(e))}^{2}$  
$={\displaystyle \frac{1}{|E|}}{\displaystyle \sum _{e\in E}}{\left({\displaystyle \sum _{i=0}^{m}}{w}_{i}\ast {X}_{i}(e)-Y(e)\right)}^{2}.$  (7.1) 
Consider minimizing the mean squared loss. There is a unique minimum, which occurs when the partial derivatives with respect to the weights are all zero. The partial derivative of the error in Equation 7.1 with respect to weight ${w}_{i}$ is
$$\frac{\partial}{\partial {w}_{i}}error(E,\overline{w})=\frac{1}{|E|}\sum _{e\in E}2\ast \delta (e)\ast {X}_{i}(e)$$  (7.2) 
where $\delta (e)={\widehat{Y}}^{\overline{w}}(e)-Y(e)$, a linear function of the weights. The weights that minimize the error can be computed analytically by setting the partial derivatives to zero and solving the resulting linear equations in the weights (see Exercise 7.11).
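For a single input feature, setting the two partial derivatives to zero and solving gives a familiar closed form: ${w}_{1}$ is the covariance of $X$ and $Y$ divided by the variance of $X$, and ${w}_{0}$ makes the line pass through the means. A sketch for this one-feature case (the general case solves a system of $m+1$ linear equations; the function name is illustrative):

```python
def fit_line(xs, ys):
    # Solve dE/dw0 = 0 and dE/dw1 = 0 for one input feature:
    # w1 = cov(X, Y) / var(X),  w0 = mean(Y) - w1 * mean(X).
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    w1 = cov / var
    w0 = my - w1 * mx
    return w0, w1
```

For instance, fitting the points $(0,1),(1,3),(2,5),(3,7)$ recovers the line $y=1+2x$ exactly.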
Consider binary classification, where the domain of the target variable is $\{0,1\}$.
A linear function does not work well for such classification tasks; a learner should never make a prediction greater than 1 or less than 0. However, a linear function could make a prediction of, say, 3 for one example just to fit other examples better.
A squashed linear function is of the form
${\widehat{Y}}^{\overline{w}}(e)$  $=\varphi ({w}_{0}+{w}_{1}\ast {X}_{1}(e)+\mathrm{\cdots}+{w}_{m}\ast {X}_{m}(e))$  
$=\varphi ({\displaystyle \sum _{i}}{w}_{i}\ast {X}_{i}(e))$ 
where $\varphi $, an activation function, is a function from the real line $[-\mathrm{\infty},\mathrm{\infty}]$ into some subset of the real line, such as $[0,1]$.
A prediction based on a squashed linear function is a linear classifier.
One differentiable activation function is the sigmoid or logistic function:
$$sigmoid(x)=\frac{1}{1+exp(-x)}$$ 
where $exp(v)={e}^{v}$, where $e$ is Euler’s number (approximately 2.718). The sigmoid function, depicted in Figure 7.11, squashes the real line into the interval $(0,1)$, which is appropriate for classification because you would never want to make a prediction of greater than 1 or less than 0. The sigmoid function can be justified in terms of probabilities. It is also differentiable, with derivative
$$\frac{d}{dx}sigmoid(x)=sigmoid(x)\ast (1sigmoid(x)).$$ 
The problem of determining weights for the sigmoid of a linear function that minimize an error on a set of examples is called logistic regression.
The mean log loss for logistic regression is
$LL(E,\overline{w})=-{\displaystyle \frac{1}{|E|}}\ast {\displaystyle \sum _{e\in E}}\left(Y(e)\ast \mathrm{log}\widehat{Y}(e)+(1-Y(e))\ast \mathrm{log}(1-\widehat{Y}(e))\right)$ 
where $\widehat{Y}(e)=sigmoid\left({\sum}_{i=0}^{m}{w}_{i}\ast {X}_{i}(e)\right)$. To minimize this, consider weight ${w}_{i}$. The partial derivative with respect to weight ${w}_{i}$ is
$$\frac{\partial}{\partial {w}_{i}}LL(E,\overline{w})=\frac{1}{|E|}\sum _{e\in E}\delta (e)\ast {X}_{i}(e)$$  (7.3) 
where $\delta (e)={\widehat{Y}}^{\overline{w}}(e)-Y(e)$. This is very similar to Equation 7.2; the main difference is the definition of the predicted value. Unlike Equation 7.2, this is not a linear function of the parameters (because ${\widehat{Y}}^{\overline{w}}(e)$ is not linear in the parameters) and is difficult to solve analytically.
The problem of finding a set of parameters to minimize errors is an optimization problem; see Section 4.8.
Gradient descent is an iterative method to find a local minimum of a function. To find a set of weights to minimize an error, it starts with an initial set of weights. In each step, it decreases each weight in proportion to its partial derivative:
$${w}_{i}:={w}_{i}-\eta \ast \frac{\partial}{\partial {w}_{i}}error(E,\overline{w})$$ 
where $\eta $, the gradient descent step size, is called the learning rate. The learning rate, as well as the features and the data, is given as input to the learning algorithm. The partial derivative specifies how much a small change in the weight would change the error.
For linear regression with squared error and logistic regression with log loss, the derivatives are given in Equation 7.2 and Equation 7.3. For each of these (ignoring the constant factor of 2), gradient descent has the update
$${w}_{i}:={w}_{i}-\eta \ast \frac{1}{|E|}\ast \sum _{e\in E}\delta (e)\ast {X}_{i}(e)$$  (7.4) 
where $\delta (e)={\widehat{Y}}^{\overline{w}}(e)-Y(e)$.
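The update of Equation (7.4) for logistic regression can be sketched as follows. This is a plain batch version using all examples per step (not the minibatch algorithm of Figure 7.12); the function names and the fixed step count are illustrative:

```python
from math import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

def logistic_gd(examples, targets, m, eta=0.05, steps=1000):
    # examples: list of m-tuples of feature values; targets: 0/1 values.
    # X_0 = 1 is prepended here so w[0] plays the role of w_0.
    w = [0.0] * (m + 1)
    xs = [(1.0,) + tuple(e) for e in examples]
    n = len(xs)
    for _ in range(steps):
        d = [0.0] * (m + 1)
        for x, y in zip(xs, targets):
            # delta(e) = predicted - observed, as in Equation (7.4)
            delta = sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - y
            for i in range(m + 1):
                d[i] += delta * x[i]
        for i in range(m + 1):
            w[i] -= eta * d[i] / n
    return w
```

Trained on the four examples of the "or" function, which is linearly separable, the learned weights put every example on the correct side of 0.5.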
A direct implementation of gradient descent does not update any weights until all examples have been considered, which can be very wasteful for large datasets. It is possible to make progress with a subset of the data. The gradient descent step computes a mean over the examples, and means can often be approximated using a random sample; for example, you can get a good estimate of the mean height of a large population of people by selecting 100 or 1000 people at random and using their mean height.
Instead of using all of the data for an update, stochastic gradient descent uses a random sample of examples to update the weights. It is called stochastic because of the random sampling. Random sampling is explored more in Section 9.7. The set of $b$ examples used in each update is called a minibatch or a batch.
The stochastic gradient descent algorithm for logistic regression is shown in Figure 7.12. This returns a function, $pred$, that can be used for predictions on new examples. The algorithm collects the update for each weight ${w}_{i}$ for a batch in a corresponding ${d}_{i}$, and updates the weights after each batch. The learning rate $\eta $ is assumed to be per example, and so the update needs to be divided by the batch size.
An epoch is $\lceil |Es|/b\rceil $ batches, which corresponds to one pass through all of the data, on average. Epochs are useful when reporting results, particularly with different batch sizes.
Consider learning a squashed linear function for classifying the data of Figure 7.1. One function that correctly classifies the examples is
$$\widehat{Reads}(e)=sigmoid(-8+7\ast Short(e)+3\ast New(e)+3\ast Known(e)),$$ 
A function similar to this can be found with about 3000 iterations of stochastic gradient descent with a learning rate $\eta =0.05$. According to this function, $\widehat{Reads}(e)$ is true (the predicted value for example $e$ is closer to 1 than 0) if and only if $Short(e)$ is true and either $New(e)$ or $Known(e)$ is true. Thus, in this case, the linear classifier learns the same function as the decision tree learner.
Smaller batch sizes tend to learn faster as fewer examples are required for an update. However, smaller batches may not converge to a local optimum solution, whereas more data, up to all of the data, will. To see this, consider being at an optimum. A batch containing all of the examples would end up with all of the ${d}_{i}$ being zero. However, for smaller batches, the weights will vary and later batches will be using nonoptimal parameter settings and so use incorrect derivatives. It is common to start with a small batch size and increase the batch size until convergence, or until good enough performance has been obtained.
Incremental gradient descent, or online gradient descent, is a special case of stochastic gradient descent using minibatches of size 1. In this case, there is no need to store the intermediate values in ${d}_{i}$, but the weights can be directly updated. This is sometimes used for streaming data where each example is used once and then discarded. If the examples are not selected at random, it can suffer from catastrophic forgetting, where it fits the later data and forgets about earlier examples.
Each input feature can be seen as a dimension; $m$ features result in an $m$-dimensional space. A hyperplane in an $m$-dimensional space is a set of points that all satisfy a constraint that some linear function of the variables is zero. The hyperplane forms an $(m-1)$-dimensional space. For example, in a (two-dimensional) plane, a hyperplane is a line, and in a three-dimensional space, a hyperplane is a plane. A Boolean classification is linearly separable if there exists a hyperplane where the classification is true on one side of the hyperplane and false on the other side.
The $Logistic\mathrm{\_}regression\mathrm{\_}learner$ algorithm can learn any linearly separable binary classification. The error can be made arbitrarily small for arbitrary sets of examples if, and only if, the target classification is linearly separable. The hyperplane is the set of points where ${\sum}_{i}{w}_{i}\ast {X}_{i}=0$ for the learned weights $\overline{w}$. On one side of this hyperplane, the prediction is greater than 0.5; on the other side, the prediction is less than 0.5.
Figure 7.13 shows linear separators for “or” (a) and “and” (b). The dashed line separates the positive (true) cases from the negative (false) cases. One simple function that is not linearly separable is the exclusive-or (xor) function (c). There is no straight line that separates the positive examples from the negative examples. As a result, a linear classifier cannot represent, and therefore cannot learn, the exclusive-or function.
Suppose there are three input features $x$, $y$, and $z$, each with domain $\{0,1\}$, and the ground truth is the function “if $x$ then $y$ else $z$” (represented by $t$ in Figure 7.10). This function is depicted by the cube in Figure 7.13(d) with the origin ($x$, $y$, $z$ all zero) at the bottom left and the ground truth for $t$ labelled with $+$ and $-$. This function is not linearly separable.
The following example shows what happens in gradient descent for logistic regression when the data is not linearly separable.
$x$  $y$  $z$  $t$  $lin$  $\widehat{t}$  
0  0  0  0  $-3.98$  0.02  
0  0  1  1  0.08  0.52  
0  1  0  0  0.08  0.52  
0  1  1  1  4.14  0.98  
1  0  0  0  $-4.10$  0.02  
1  0  1  0  $-0.04$  0.49  
1  1  0  1  $-0.04$  0.49  
1  1  1  1  4.14  0.98  
Consider target $t$ from the previous example that is true if $x$ is true and $y$ is true, or $x$ is false and $z$ is true. The prediction of $t$ is not linearly separable, as shown in Figure 7.13(d) – there is no hyperplane that separates the positive and negative cases of $t$.
After 1000 epochs of gradient descent with a learning rate of 0.05, one run found the following weights (to two decimal points):
$lin(e)=$  $-0.12\ast x(e)+4.06\ast y(e)+4.06\ast z(e)-3.98$  
$\widehat{t}(e)=$  $sigmoid(lin(e)).$ 
The linear function $lin$ and the prediction for each example are shown in Figure 7.14(b). Four examples are predicted reasonably well, and the other four are predicted with a value of approximately $0.5$. This function is quite stable with different initializations. Increasing the number of iterations makes the predictions approach 0, 1, or 0.5.
When the domain of the target variable is categorical with more than two values, indicator variables can be used to convert the classification to binary variables. These binary variables could be learned separately. Because exactly one of the values must be true for each example, the predicted probabilities should add to 1. One way to handle this is to learn for allbutone value, and predict the remaining value as 1 minus the sum of the other values. This is effectively how the binary case works. However, this introduces an asymmetry in the values by treating one value differently from the other values. This is problematic because the errors for the other values accumulate, making for a poor prediction on the value treated specially; it’s even possible that the prediction for the remaining value is negative if the others sum to more than 1.
The standard alternative is to learn a linear function for each value of the target variable, exponentiate, and normalize. This has more parameters than necessary to represent the function (it is said to be overparametrized) but treats all of the values in the same way. Suppose the target $Y$ is categorical with domain represented by the tuple of values $({v}_{1},\mathrm{\dots},{v}_{k})$. The softmax function takes a vector (tuple) of real numbers, $({\alpha}_{1},\mathrm{\dots},{\alpha}_{k})$, and returns a vector of the same size, where the $i$th component of the result is
$$softmax{(({\alpha}_{1},\mathrm{\dots},{\alpha}_{k}))}_{i}=\frac{exp({\alpha}_{i})}{{\sum}_{j=1}^{k}exp({\alpha}_{j})}.$$ 
This ensures that the resulting values are all positive and sum to 1, and so can be considered as a probability distribution.
Sigmoid and softmax are closely related:
$sigmoid(x)$  $={\displaystyle \frac{1}{exp(-x)+1}}$  
$={\displaystyle \frac{exp(x)}{exp(0)+exp(x)}}$  
$=softmax{((0,x))}_{2}$ 
where $(0,x)$ corresponds to the values $(false,true)$ and $softmax{((0,x))}_{2}$ is the second component of the pair that results from the softmax. The second equality follows from multiplying the numerator and the denominator by $exp(x)$, and noticing that $exp(x)\ast exp(-x)=exp(0)=1$. Thus, $sigmoid$ is equivalent to $softmax$ where the false component is fixed to be 0.
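This equivalence is easy to check numerically; a small sketch:

```python
from math import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

def softmax(alphas):
    # Exponentiate and normalize, so the results sum to 1.
    exps = [exp(a) for a in alphas]
    s = sum(exps)
    return [v / s for v in exps]

# sigmoid(x) equals the second component of softmax((0, x)).
for x in [-3.0, -0.5, 0.0, 2.0]:
    assert abs(sigmoid(x) - softmax([0.0, x])[1]) < 1e-12
```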
A softmax, like a sigmoid, cannot represent zero probabilities.
The generalization of logistic regression to predicting a categorical feature is called softmax regression, multinomial logistic regression, or multinomial logit. It involves a linear equation for each value in the domain of the target variable, $Y$. Suppose $Y$ has domain $({v}_{1},\mathrm{\dots},{v}_{k})$. The prediction for example $e$ is a tuple of $k$ values, $softmax(({u}_{1}(e),\mathrm{\dots},{u}_{k}(e)))$, where the $j$th component is the prediction for $Y={v}_{j}$ and
$${u}_{j}(e)={w}_{0,j}+{X}_{1}(e)\ast {w}_{1,j}+\mathrm{\cdots}+{X}_{m}(e)\ast {w}_{m,j}.$$ 
This is typically optimized with categorical log loss.
Consider weight ${w}_{ij}$ that is used for input ${X}_{i}$ for output value ${v}_{j}$, and example $e$ that has $Y(e)={v}_{q}$:
$\frac{\partial}{\partial {w}_{ij}}$  $logloss(softmax(({u}_{1}(e),\mathrm{\dots},{u}_{k}(e))),{v}_{q})$  
$=-{\displaystyle \frac{\partial}{\partial {w}_{ij}}}\mathrm{log}\left({\displaystyle \frac{exp({u}_{q}(e))}{{\sum}_{j}exp({u}_{j}(e))}}\right)$  
$={\displaystyle \frac{\partial}{\partial {w}_{ij}}}\left(\mathrm{log}\left({\displaystyle \sum _{j}}exp({u}_{j}(e))\right)-{u}_{q}(e)\right)$  
$=({(\widehat{Y}(e))}_{j}-\mathrm{\U0001d7cf}(j=q))\ast {X}_{i}(e)$ 
where $\mathrm{\U0001d7cf}(j=q)$ is 1 if $j$ is the index of the observed value, ${v}_{q}$, and ${(\widehat{Y}(e))}_{j}$ is the $j$th component of the prediction. This is the predicted value minus the actual value.
To implement this effectively, you need to consider how computers represent numbers. Taking the exponential of a large number can result in a number larger than the largest number able to be represented on the computer, resulting in overflow. For example, $exp(800)$ will overflow for most modern CPUs. Taking exponentials of a large negative number can result in a number that is represented as zero, resulting in underflow. For example, $exp(-800)$ results in zero on many CPUs. Adding a constant to each ${\alpha}_{i}$ in a softmax does not change the value of the softmax. To prevent overflow and prevent all values from underflowing, the maximum value can be subtracted from each value, so there is always a zero, and the rest are negative. On GPUs and similar parallel hardware, often lower precision is used to represent weights, and so it becomes more important to correct for underflow and overflow.
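The max-subtraction trick can be sketched as follows (the function name is illustrative):

```python
from math import exp

def softmax_stable(alphas):
    # Subtract the maximum before exponentiating: the result is
    # unchanged, but the largest exponent is exp(0) = 1, so nothing
    # overflows, and at least one value does not underflow.
    m = max(alphas)
    exps = [exp(a - m) for a in alphas]
    s = sum(exps)
    return [v / s for v in exps]

softmax_stable([800.0, 799.0])  # fine; exp(800) alone would overflow
```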
When there is a large number of possible values, the computation of the denominator can be expensive, as it requires summing over all values. For example, in natural language, we may want to predict the next word in a text, in which case the number of values could be up to a million or more (particularly when phrases and names such as “Mahatma Gandhi” are included). In this case, it is possible to represent the prediction in terms of a binary tree of the values, forming hierarchical softmax. This implements the same function as softmax, just more efficiently for large domains.
The definitions of linear and logistic regression assume that the input features are numerical.
Categorical features can be converted into features with domain $\{0,1\}$ by using indicator variables, as was done for decision tree learning. This is known as a onehot encoding.
A realvalued feature can be used directly as long as the target is a linear function of that input feature, when the other input features are fixed. If the target is not a linear function, often some transformation of the feature is used to create new features.
For ordinal features, including real-valued features, cuts can be used to define a Boolean feature from a real feature. For input feature $x$, choose a value $v$ and use a feature that is true if $x>v$, or equivalently $x-v>0$. It is also common to use binning. Binning involves choosing a set of thresholds, ${\alpha}_{1}<{\alpha}_{2}<\mathrm{\cdots}<{\alpha}_{k}$, and using a feature with domain $\{0,1\}$ for each interval between ${\alpha}_{i}$ and ${\alpha}_{i+1}$. Binning allows for a piecewise constant function. Constructing a feature using $max(x-v,0)$ allows for a connected piecewise linear approximation; this is the basis of the rectified linear unit (ReLU), further investigated in the next chapter.
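These three feature constructions can be sketched as follows (the helper names are illustrative):

```python
def cut(x, v):
    # Boolean feature: 1 when x > v.
    return 1 if x > v else 0

def bins(x, thresholds):
    # One indicator per interval defined by sorted thresholds:
    # x <= a_1, a_1 < x <= a_2, ..., x > a_k  (k+1 features).
    feats = []
    lo = None
    for a in thresholds:
        feats.append(1 if (lo is None or x > lo) and x <= a else 0)
        lo = a
    feats.append(1 if x > thresholds[-1] else 0)
    return feats

def relu_feature(x, v):
    # max(x - v, 0): the building block of piecewise linear functions.
    return max(x - v, 0)
```

For example, with thresholds $\{3,7\}$, the value 5 falls in the middle bin, so `bins(5, [3, 7])` gives one indicator on and the other two off.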
Designing appropriate features is called feature engineering. It is often difficult to design good features. Gradient-boosted trees use conjunctions of input features. Learning features is part of representation learning; see Chapter 8.