# 7.3.2 Linear Regression and Classification

Linear functions provide a basis for many learning algorithms. This section first covers regression – the problem of predicting a real-valued function from training examples – then considers the discrete case of classification.

Linear regression is the problem of fitting a linear function to a set of training examples, in which the input and target features are numeric.

Suppose the input features, $X_{1},\dots,X_{n}$, are all numeric and there is a single target feature $Y$. A linear function of the input features is a function of the form

 $\displaystyle\widehat{Y}^{\overline{w}}({e})=w_{0}+w_{1}*{X_{1}}({e})+\dots+w_{n}*{X_{n}}({e})=\sum_{i=0}^{n}w_{i}*{X_{i}}({e})$

where $\overline{w}=\langle w_{0},w_{1},\dots,w_{n}\rangle$ is a tuple of weights. To make $w_{0}$ not be a special case, we invent a new feature, $X_{0}$, whose value is always 1.

Suppose $E$ is a set of examples. The sum-of-squares error on examples $E$ for target $Y$ is

 $\displaystyle error(E,\overline{w})=\sum_{e\in E}({Y}({e})-\widehat{Y}^{\overline{w}}({e}))^{2}=\sum_{e\in E}\left({Y}({e})-\sum_{i=0}^{n}w_{i}*{X_{i}}({e})\right)^{2}.$ (7.1)

In this linear case, the weights that minimize the error can be computed analytically (see Exercise 10). A more general approach, which can be used for wider classes of functions, is to compute the weights iteratively.
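For illustration, the one-input-feature case can be solved in closed form by setting both partial derivatives of Equation 7.1 to zero, giving the two normal equations solved below. This is a minimal sketch; the data and variable names are invented here, and the general case solves an $(n+1)\times(n+1)$ linear system.

```python
# Fit Y-hat = w0 + w1*x analytically by solving the normal equations.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 1 + 2x
m = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
# Setting the partial derivatives of the sum-of-squares error to zero gives:
#    m*w0 +  sx*w1 = sy
#   sx*w0 + sxx*w1 = sxy
det = m * sxx - sx * sx
w1 = (m * sxy - sx * sy) / det
w0 = (sy - sx * w1) / m            # here w0 = 1.0, w1 = 2.0
```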

Gradient descent is an iterative method to find the minimum of a function. Gradient descent for minimizing $error$ starts with an initial set of weights; in each step, it decreases each weight in proportion to its partial derivative:

 $w_{i}\;{:}{=}\;w_{i}-\eta*\frac{\partial}{\partial w_{i}}error(E,\overline{w})$

where $\eta$, the gradient descent step size, is called the learning rate. The learning rate, as well as the features and the data, is given as input to the learning algorithm. The partial derivative specifies how much a small change in the weight would change the error.

The sum-of-squares error for a linear function is convex and has a unique local minimum, which is the global minimum. As gradient descent with small enough step size will converge to a local minimum, this algorithm will converge to the global minimum.

Consider minimizing the sum-of-squares error. The partial derivative of the error in Equation 7.1 with respect to weight $w_{i}$ is

 $\frac{\partial}{\partial w_{i}}error(E,\overline{w})=\sum_{e\in E}-2*\delta(e)*{X_{i}}({e})$ (7.2)

where $\delta(e)={Y}({e})-\widehat{Y}^{\overline{w}}({e})$.
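Equation 7.2 can be sanity-checked by comparing the analytic gradient with a finite-difference approximation of the error on a small data set (the data and names below are illustrative only):

```python
# Each example is ([X0, X1, X2], Y); X0 is the invented always-1 feature.
examples = [([1, 2.0, 3.0], 7.0), ([1, 0.5, -1.0], 1.0), ([1, 4.0, 0.0], 9.0)]

def error(w):
    """Sum-of-squares error of Equation 7.1 for weights w."""
    return sum((y - sum(wi * xi for wi, xi in zip(w, xs))) ** 2
               for xs, y in examples)

def analytic_grad(w, i):
    """Equation 7.2: sum over examples of -2 * delta(e) * X_i(e)."""
    return sum(-2 * (y - sum(wj * xj for wj, xj in zip(w, xs))) * xs[i]
               for xs, y in examples)

w = [0.1, 0.2, 0.3]
eps = 1e-6
for i in range(len(w)):
    w_plus = w[:]
    w_plus[i] += eps
    numeric = (error(w_plus) - error(w)) / eps
    assert abs(numeric - analytic_grad(w, i)) < 1e-3
```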

Gradient descent will update the weights after sweeping through all examples. An alternative is to update each weight after each example. Each example $e$ can update each weight $w_{i}$ using:

 $w_{i}\;{:}{=}\;w_{i}+\eta*\delta(e)*{X_{i}}({e}),$ (7.3)

where we have ignored the constant 2, because we assume it is absorbed into the learning rate $\eta$.

Figure 7.8 gives an algorithm, $Linear\_learner(Xs,Y,Es,\eta)$, for learning the weights of a linear function that minimize the sum-of-squares error. This algorithm returns a function that makes predictions on examples. In the algorithm, ${X_{0}}({e})$ is defined to be 1 for all $e$.

Termination is usually after some number of steps, when the error is small or when the changes get small.
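A minimal Python sketch in the spirit of $Linear\_learner$, using the per-example update of Equation (7.3). This is not the book's code for Figure 7.8; the function names, initialization, fixed number of sweeps, and seed are choices made here.

```python
import random

def linear_learner(xs_of, y_of, examples, eta=0.01, steps=1000):
    """Learn weights minimizing sum-of-squares error by per-example
    gradient descent. xs_of(e) returns [X1(e),...,Xn(e)]; y_of(e)
    returns Y(e). The always-1 feature X0 is handled internally."""
    n = len(xs_of(examples[0]))
    w = [random.uniform(-0.1, 0.1) for _ in range(n + 1)]
    for _ in range(steps):
        for e in examples:
            x = [1] + xs_of(e)                       # X0(e) = 1
            pred = sum(wi * xi for wi, xi in zip(w, x))
            delta = y_of(e) - pred                   # delta(e)
            for i in range(n + 1):
                w[i] += eta * delta * x[i]           # Equation (7.3)
    return lambda e: sum(wi * xi for wi, xi in zip(w, [1] + xs_of(e)))

# Fit y = 2x + 1 from four examples.
random.seed(1)  # for reproducibility of this sketch
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0]]
predict = linear_learner(lambda e: [e[0]], lambda e: e[1], data)
```

Because the data here are exactly linear, the per-example updates all shrink toward zero and the learned function approaches $y=2x+1$.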

Updating the weights after each example does not strictly implement gradient descent because the weights change between examples. To implement gradient descent exactly, all of the changes should be saved up and the weights updated only after all of the examples have been processed. The algorithm presented in Figure 7.8 is called incremental gradient descent because the weights change while it iterates through the examples. If the examples are selected at random, this is called stochastic gradient descent. These incremental methods have cheaper steps than gradient descent and so typically become more accurate more quickly. However, they are not guaranteed to converge, as individual examples can move the weights away from the minimum.

Batched gradient descent updates the weights after a batch of examples. The algorithm computes the changes to the weights after every example, but only applies the changes after the batch. If a batch is all of the examples, it is equivalent to gradient descent. If a batch consists of just one example, it is equivalent to incremental gradient descent. It is typical to start with small batches to learn quickly and then increase the batch size so that it converges.
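One step of the batched update can be sketched as follows, assuming examples are pairs of a feature vector (with $X_{0}=1$ already included) and a target value; the names and data are illustrative:

```python
def batched_gd_step(w, batch, eta):
    """One batched gradient-descent step: accumulate the weight changes
    over the whole batch, then apply them all at once."""
    changes = [0.0] * len(w)
    for x, y in batch:
        delta = y - sum(wi * xi for wi, xi in zip(w, x))
        for i in range(len(w)):
            changes[i] += eta * delta * x[i]
    return [wi + ci for wi, ci in zip(w, changes)]

# With a batch of all examples this is gradient descent; here the
# batch of two points lies exactly on y = 1 + 2x.
w = [0.0, 0.0]
batch = [([1, 1.0], 3.0), ([1, 2.0], 5.0)]   # X0 = 1 included
for _ in range(2000):
    w = batched_gd_step(w, batch, 0.05)
# w approaches [1.0, 2.0]
```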

A similar algorithm can be used for other error functions that are (almost everywhere) differentiable and whose derivative carries some signal (is not 0). The absolute error is not differentiable at zero; there the derivative can be defined to be zero, because the error is already at a minimum and the weights do not have to change. See Exercise 9. The approach does not work for errors such as the 0/1 error, whose derivative is either 0 (almost everywhere) or undefined.

## Squashed Linear Functions

Consider binary classification, where the domain of the target variable is $\{0,1\}$. Multiple binary target variables can be learned separately.

The use of a linear function does not work well for such classification tasks; a learner should never make a prediction of greater than 1 or less than 0. However, a linear function could make a prediction of, say, 3 for one example just to fit other examples better.

A squashed linear function is of the form

 $\displaystyle\widehat{Y}^{\overline{w}}({e})=f(w_{0}+w_{1}*{X_{1}}({e})+\dots+w_{n}*{X_{n}}({e}))=f\left(\sum_{i=0}^{n}w_{i}*X_{i}(e)\right)$

where $f$, an activation function, is a function from the real line $(-\infty,\infty)$ into some subset of the real line, such as $[0,1]$.

A prediction based on a squashed linear function is a linear classifier.

A simple activation function is the step function, $step_{0}(x)$, defined by

 $step_{0}(x)=\left\{\begin{array}[]{ll}1&\mbox{ if }x\geq 0\\ 0&\mbox{ if }x<0.\end{array}\right.$

A step function was the basis for the perceptron [Rosenblatt, 1958], which was one of the early methods developed for learning. It is difficult to adapt gradient descent to step functions because gradient descent takes derivatives and step functions are not differentiable.

If the activation is (almost everywhere) differentiable, gradient descent can be used to update the weights. The step size might need to converge to zero to guarantee convergence.

One differentiable activation function is the sigmoid or logistic function:

 $sigmoid(x)=\frac{1}{1+e^{-x}}.$

This function, depicted in Figure 7.9, squashes the real line into the interval $(0,1)$, which is appropriate for classification because we would never want to make a prediction of greater than 1 or less than 0. It is also differentiable, with a simple derivative – namely, $\frac{d}{dx}sigmoid(x)=sigmoid(x)*(1-sigmoid(x))$.
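The derivative identity can be checked numerically with a central difference:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Check sigmoid'(x) = sigmoid(x)*(1 - sigmoid(x)) against a central
# finite difference at a few points.
eps = 1e-6
for x in [-2.0, 0.0, 0.5, 3.0]:
    numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
    analytic = sigmoid(x) * (1 - sigmoid(x))
    assert abs(numeric - analytic) < 1e-7
```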

The problem of determining weights for the sigmoid of a linear function that minimize an error on a set of examples is called logistic regression.

To optimize the log loss error for logistic regression, minimize the negative log-likelihood

 $\displaystyle LL(E,\overline{w})=-\sum_{e\in E}\left({Y}({e})*\log\widehat{Y}({e})+(1-{Y}({e}))*\log(1-\widehat{Y}({e}))\right)$

where $\widehat{Y}({e})=sigmoid\left(\sum_{i=0}^{n}w_{i}*{X_{i}}({e})\right)$.

The partial derivative with respect to weight $w_{i}$ is

 $\frac{\partial}{\partial w_{i}}LL(E,\overline{w})=\sum_{e\in E}-\delta(e)*X_{i}(e)$

where $\delta(e)={Y}({e})-\widehat{Y}^{\overline{w}}({e})$. This is, essentially, the same as Equation 7.2, the only differences being the definition of the predicted value and the constant “2” which can be absorbed into the step size.

The $Linear\_learner$ algorithm of Figure 7.8 can be modified to carry out logistic regression to minimize log loss by changing the prediction to be $sigmoid(\sum_{i}w_{i}*X_{i}(e))$. The algorithm is shown in Figure 7.10.
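A minimal sketch of such a logistic regression learner (again, not the book's Figure 7.10 code; the names, defaults, and seed are choices made here). The only change from the linear learner is that the prediction is squashed by the sigmoid; the weight update keeps the form of Equation (7.3):

```python
import math
import random

def logistic_regression_learner(xs_of, y_of, examples, eta=0.05, steps=1000):
    """Learn a squashed linear function minimizing log loss by
    per-example gradient descent. X0 = 1 is handled internally."""
    sigmoid = lambda x: 1 / (1 + math.exp(-x))
    n = len(xs_of(examples[0]))
    w = [random.uniform(-0.1, 0.1) for _ in range(n + 1)]
    for _ in range(steps):
        for e in examples:
            x = [1] + xs_of(e)                       # X0(e) = 1
            pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            delta = y_of(e) - pred                   # same delta(e) as before
            for i in range(n + 1):
                w[i] += eta * delta * x[i]
    return lambda e: sigmoid(sum(wi * xi for wi, xi in zip(w, [1] + xs_of(e))))

# Learn the "and" function, which is linearly separable.
random.seed(1)  # for reproducibility of this sketch
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
predict = logistic_regression_learner(lambda e: list(e[0]), lambda e: e[1], data)
```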

###### Example 7.11.

Consider learning a squashed linear function for classifying the data of Figure 7.1. One function that correctly classifies the examples is

 $\widehat{Reads}({e})=sigmoid(-8+7*Short(e)+3*New(e)+3*Known(e)).$

A function similar to this can be found with about 3000 iterations of gradient descent with a learning rate $\eta=0.05$. According to this function, $\widehat{Reads}({e})$ is true (the predicted value for example $e$ is closer to 1 than to 0) if and only if $Short(e)$ is true and either $New(e)$ or $Known(e)$ is true. Thus, the linear classifier learns the same function as the decision tree learner. To see how this works, see the “mail reading” example of the Neural AIspace.org applet.

To minimize sum-of-squares error, instead, the prediction is the same, but the derivative is different. In particular, line 16 of Figure 7.10 should become

 $update\;{:}{=}\;\eta*error*pred(e)*(1-pred(e)).$

Consider each input feature as a dimension; if there are $n$ features, there will be $n$ dimensions. A hyperplane in an $n$-dimensional space is a set of points that all satisfy a constraint that some linear function of the variables is zero. The hyperplane forms an $(n-1)$-dimensional space. For example, in a (two-dimensional) plane, a hyperplane is a line, and in a three-dimensional space, a hyperplane is a plane. A classification is linearly separable if there exists a hyperplane where the classification is true on one side of the hyperplane and false on the other side.

The $Logistic\_regression\_learner$ algorithm can learn any linearly separable classification. The error can be made arbitrarily small for arbitrary sets of examples if, and only if, the target classification is linearly separable. The hyperplane is the set of points where $\sum_{i}w_{i}*X_{i}=0$ for the learned weights $\overline{w}$. On one side of this hyperplane, the prediction is greater than 0.5; on the other side, the prediction is less than 0.5.

###### Example 7.12.

Figure 7.11 shows linear separators for “or” and “and”. The dashed line separates the positive (true) cases from the negative (false) cases. One simple function that is not linearly separable is the exclusive-or (xor) function. There is no straight line that separates the positive examples from the negative examples. As a result, a linear classifier cannot represent, and therefore cannot learn, the exclusive-or function.

Consider a learner with three input features $x$, $y$, and $z$, each with domain $\{0,1\}$. Suppose that the ground truth is the function “if $x$ then $y$ else $z$”. This is depicted on the right of Figure 7.11 by a cube in the standard coordinates, where $x$, $y$, and $z$ range from 0 to 1. This function is not linearly separable.
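The impossibility for xor can be illustrated empirically: a random search over many candidate weight vectors finds separators for “or” but never for xor. (The seed and sample count here are arbitrary; the non-existence of an xor separator also follows by adding the four threshold inequalities, which yields a contradiction.)

```python
import random

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
orf = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def separates(w, data):
    # w = (w0, w1, w2); predict 1 iff w0 + w1*x + w2*y >= 0.
    return all((w[0] + w[1] * x + w[2] * y >= 0) == (t == 1)
               for (x, y), t in data)

random.seed(0)
candidates = [tuple(random.uniform(-5, 5) for _ in range(3))
              for _ in range(20_000)]
assert any(separates(w, orf) for w in candidates)      # "or" is separable
assert not any(separates(w, xor) for w in candidates)  # xor never is
```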

Often it is difficult to determine a priori whether a data set is linearly separable.

###### Example 7.13.

Consider the data set of Figure 7.12 (a), which is used to predict whether a person likes a holiday as a function of whether there is culture, whether the person has to fly, whether the destination is hot, whether there is music, and whether there is nature. In this data set, the value 1 means true and 0 means false. The linear classifier requires the numerical representation.

After 10,000 iterations of gradient descent with a learning rate of 0.05, the prediction found is (to one decimal point)

 $\displaystyle lin(e)=2.3*Culture(e)+0.01*Fly(e)-9.1*Hot(e)-4.5*Music(e)+6.8*Nature(e)+0.01$

 $\displaystyle\widehat{Likes}({e})=sigmoid(lin(e)).$

The linear function $lin$ and the prediction for each example are shown in Figure 7.12 (b). All but four examples are predicted reasonably well, and for those four it predicts a value of approximately $0.5$. This function is quite stable with different initializations. Increasing the number of iterations makes it predict the other tuples more accurately, but does not improve on these four. This data set is not linearly separable.

When the domain of the target variable has more than two values – there are more than two classes – indicator variables can be used to convert the classification to binary variables. These binary variables could be learned separately. The predictions of the individual classifiers can then be combined to give a prediction for the target variable. Because exactly one of the values must be true for each example, a learner should not predict that more than one will be true, or that none will be true. Suppose the separately learned predicted values for $Y_{1},\dots,Y_{k}$ are $q_{1},\dots,q_{k}$, where each $q_{i}\geq 0$. A learner that predicts a probability distribution could predict $Y=y_{i}$ with probability $q_{i}/\sum_{j}q_{j}$. A learner that must make a definitive prediction can predict the mode: a $y_{i}$ for which $q_{i}$ is maximal.
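This combination step can be sketched directly (the function name is invented here):

```python
def combine_binary_predictions(qs):
    """Combine per-class scores q_1..q_k (each >= 0) into a probability
    distribution by normalizing, and pick the mode for a definitive
    prediction."""
    total = sum(qs)
    dist = [q / total for q in qs]
    mode = max(range(len(qs)), key=lambda i: qs[i])
    return dist, mode

dist, mode = combine_binary_predictions([0.2, 0.5, 0.3])
# dist sums to 1; mode is the index of the largest score (here 1)
```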