7.5 Composite Models

Decision trees and (squashed) linear functions provide the basis for many other supervised learning techniques. Although decision trees can represent any discrete function, many simple functions have very complicated decision trees. Linear functions and linear classifiers (without inventing new features) are very restricted in what they can represent.

One way to make the linear function more powerful is to have the inputs to the linear function be some nonlinear function of the original inputs. Adding these new features can increase the dimensionality, making some functions that were not linear (or linearly separable) in the lower-dimensional space linear in the higher-dimensional space.

Example 7.20.

The exclusive-or function, $x_1$ xor $x_2$, is linearly separable in the space where the dimensions are $x_1$, $x_2$, and $x_1 x_2$, where $x_1 x_2$ is a feature that is true when both $x_1$ and $x_2$ are true. To visualize this, consider Figure 7.13(c); with the product as the third dimension, the top-right point will be lifted out of the page, allowing for a linear separator (in this case a plane) to go underneath.

A kernel function is a function that is applied to the input features to create new features. For example, a product of features could either replace or augment the existing features. Adding such features can allow for linear separators where there were none before. As another example, for a feature $x$, adding $x^2$ and $x^3$ to the features allows a linear learner to find the best degree-3 polynomial fit. Note that when the feature space is augmented, overfitting can become more of a problem. The use of the term kernel function is often restricted to cases where learning in the augmented space can be implemented very efficiently, but that is beyond the scope of this book.
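To illustrate, the following sketch (a minimal example using NumPy; the helper name degree3_features is made up) augments a single feature $x$ with $x^2$ and $x^3$ and fits an ordinary least-squares linear function of the augmented features, which gives the best degree-3 polynomial fit in the original feature:

```python
import numpy as np

def degree3_features(x):
    """Augment a single feature x with x**2 and x**3 (plus a bias column)."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

# Illustrative data: a noisy cubic that no straight line fits well.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 50)
y = x**3 - 2 * x + rng.normal(scale=0.3, size=x.shape)

# A linear learner (here ordinary least squares) applied to the augmented
# features finds the best-fitting degree-3 polynomial of the original feature.
A = degree3_features(x)
weights, *_ = np.linalg.lstsq(A, y, rcond=None)
prediction = A @ weights
```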

Neural networks (Chapter 8) allow the inputs to the (squashed) linear function to be a squashed linear function with weights to be tuned. Having multiple layers of squashed linear functions as inputs to (squashed) linear functions allows more complex functions to be represented.

Another nonlinear representation is a regression tree: a decision tree with a constant at each leaf, which represents a piecewise constant function. A decision tree for regression with a linear function at each leaf represents a piecewise linear function. It is even possible to have neural networks or other classifiers at the leaves of the decision tree. To classify a new example, the example is filtered down the tree, and the classifier at the leaf it reaches is then used to classify the example.
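As a sketch of how prediction works with models at the leaves (the Leaf and Split classes below are hypothetical, not from any particular implementation), an example is filtered down the tree by the split conditions, and the model stored at the leaf it reaches makes the prediction:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Leaf:
    model: Callable[[Any], Any]       # e.g., a constant, a linear model, or a classifier

@dataclass
class Split:
    condition: Callable[[Any], bool]  # test on the example's input features
    if_true: Any                      # subtree (Leaf or Split) when the condition holds
    if_false: Any                     # subtree when it does not

def predict(node, x):
    """Filter example x down the tree, then apply the model at the leaf it reaches."""
    while isinstance(node, Split):
        node = node.if_true if node.condition(x) else node.if_false
    return node.model(x)

# Example: a piecewise linear function of a single feature "x".
tree = Split(lambda e: e["x"] < 0,
             Leaf(lambda e: 1.0 + 2.0 * e["x"]),
             Leaf(lambda e: 1.0 - 0.5 * e["x"]))
value = predict(tree, {"x": 0.4})     # uses the right-hand linear model
```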

Another possibility is to use a number of classifiers that have each been trained on the data and to combine their outputs using mode, median, or mean, depending on the loss function (see Figure 7.5). In ensemble learning, an agent takes a number of learners and combines their predictions to make a prediction for the ensemble. The algorithms being combined are called base-level algorithms.

One simple yet effective composite model is to have an ensemble of decision trees, known as a random forest. The idea is to have a number of decision trees, each of which can make a prediction on each example. Once the trees have been trained, the predictions of the trees are combined, using a method appropriate for the optimality criterion, such as the mode of the tree predictions for maximizing accuracy or the mean of the tree predictions for minimizing squared error or log loss.

In order to make this work effectively, the trees that make up the forest need to make diverse predictions. There are a number of ways to ensure diversity, including the following (an illustrative code sketch follows the list):

  • Each tree could use a different subset of the examples to train on. If there are many examples, each tree could use just a random subset of them. In bagging, a random sample (with replacement) of |Es| examples – the same size as the dataset – is selected for each tree to train on. In each of these samples, some examples are not chosen and some are duplicated. On average, each sample contains about 63% of the original examples.

  • A subset of the conditions could be used for each tree (or each split). Rather than considering all of the conditions when selecting a split, a random subset of them (e.g., a third of them) could be made available to split on.
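The following is a minimal sketch of bagging for regression, assuming a base_learner(X, y) function that fits a single tree and returns a model with a predict method (any per-split feature subsampling is assumed to happen inside the base learner). Each tree is trained on a bootstrap sample of the examples, and the forest prediction is the mean of the tree predictions, as is appropriate for squared error; use the mode instead when maximizing accuracy.

```python
import numpy as np

def random_forest_fit(X, y, base_learner, num_trees=100, rng=None):
    """Bagging: train each tree on a sample of |Es| examples drawn with replacement."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(rng)
    n = len(y)
    trees = []
    for _ in range(num_trees):
        idx = rng.integers(0, n, size=n)    # bootstrap sample: same size, with replacement
        trees.append(base_learner(X[idx], y[idx]))
    return trees

def random_forest_predict(trees, X):
    """Combine the trees' predictions with the mean (suitable for squared error)."""
    return np.mean([t.predict(X) for t in trees], axis=0)
```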

7.5.1 Boosting

In boosting, there is a sequence of learners in which each one learns from the errors of the previous ones. The features of a boosting algorithm are:

  • There is a sequence of base learners (that may be different from each other or the same as each other), such as small decision trees or (squashed) linear functions.

  • Each learner is trained to fit the examples that the previous learners did not fit well.

  • The final prediction uses a mix (e.g., sum, weighted mean, or mode) of the predictions of each learner.

The base learners can be weak learners, in that they do not need to be very good; they just need to be better than random. These weak learners are then boosted to be components in the ensemble that performs better than any of them.

A simple boosting algorithm is functional gradient boosting, which can be used for regression as follows. The algorithm is given the hyperparameter K, which is the number of rounds of boosting, corresponding to the number of base learners in the ensemble. The final prediction, as a function of the inputs, is the sum

$$p_0 + d_1(X) + \cdots + d_K(X)$$

where $p_0$ is an initial prediction, say the mean, and each $d_i$ is the difference from the previous prediction. The $i$th prediction is

$$p_i(X) = p_0 + d_1(X) + \cdots + d_i(X).$$

Then $p_i(X) = p_{i-1}(X) + d_i(X)$. Each $d_i$ is constructed so that the error of $p_i$ is minimal, given that $p_{i-1}$ is fixed. At each stage, the base learner learns $\widehat{d_i}$ to minimize

$$\sum_e \mathrm{loss}\bigl(p_{i-1}(e) + \widehat{d_i}(e),\, Y(e)\bigr) = \sum_e \mathrm{loss}\bigl(\widehat{d_i}(e),\, Y(e) - p_{i-1}(e)\bigr),$$

for any loss based on the difference between the actual and predicted value. This includes the losses considered for real-valued target features, but does not include log loss, which is not of this form.
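For instance, with squared error the identity is immediate:

$$\sum_e \bigl(p_{i-1}(e) + \widehat{d_i}(e) - Y(e)\bigr)^2 = \sum_e \bigl(\widehat{d_i}(e) - (Y(e) - p_{i-1}(e))\bigr)^2,$$

so choosing $\widehat{d_i}$ to minimize the left-hand side is the same as fitting $\widehat{d_i}$ to the residuals $Y(e) - p_{i-1}(e)$.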

The $i$th learner learns $d_i(e)$ to fit $Y(e) - p_{i-1}(e)$. This is equivalent to learning from a modified dataset, where the previous prediction is subtracted from the actual value of the training set. In this way, each learner is made to correct the errors of the previous prediction.

1: procedure Boosting_learner(Xs, Y, Es, L, K)
2:   Inputs
3:     Xs: set of input features
4:     Y: target feature
5:     Es: set of examples from which to learn
6:     L: base learner
7:     K: number of components in the ensemble
8:   Output
9:     function to make prediction on examples
10:   mean := (∑_{e∈Es} Y(e)) / |Es|
11:   define p_0(e) = mean
12:   for each i from 1 to K do
13:     let E_i := {⟨Xs(e), Y(e) − p_{i−1}(e)⟩ for e ∈ Es}
14:     let d_i := L(E_i)
15:     define p_i(e) = p_{i−1}(e) + d_i(e)
16:   return p_K
Figure 7.19: Functional gradient-boosting regression learner

The algorithm is shown in Figure 7.19. Each $p_i$ is a function that, given an example, returns a prediction for that example. $E_i$ is a new set of examples, where for each $e \in Es$, the latest prediction, $p_{i-1}(e)$, is subtracted from the value of the target feature $Y(e)$. The new learner is therefore learning from the errors of the old learner. The function $d_i$ is computed by applying the base learner to the examples $E_i$.

$E_i$ could either be stored, or the new targets for the examples could be generated as needed.
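Here is a minimal Python sketch of the learner in Figure 7.19, assuming a base_learner(X, targets) function that returns a fitted model with a predict method (e.g., using scikit-learn, base_learner could be lambda X, t: DecisionTreeRegressor(max_depth=2).fit(X, t)):

```python
import numpy as np

def boosting_learner(X, y, base_learner, K):
    """Functional gradient boosting for regression (a sketch of Figure 7.19)."""
    X, y = np.asarray(X), np.asarray(y, dtype=float)
    p0 = y.mean()                         # initial prediction: the mean
    prediction = np.full(len(y), p0)      # current prediction p_i on the training set
    models = []
    for _ in range(K):
        residuals = y - prediction        # targets of the modified dataset E_i
        d_i = base_learner(X, residuals)  # fit the errors of the previous prediction
        models.append(d_i)
        prediction = prediction + d_i.predict(X)

    def predict(Xnew):
        """p_K(X) = p_0 + d_1(X) + ... + d_K(X)."""
        return p0 + sum(m.predict(Xnew) for m in models)
    return predict
```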

Figure 7.20: Validation error of functional gradient boosting as a function of γ and number of trees
Example 7.21.

Figure 7.20 shows a plot of the squared error on the validation set as the number of trees increases, for functional gradient boosting of decision trees. The different lines are for different values of the parameter γ in the decision tree learner. This is for the case where the minimum child size is 20 (no nodes have fewer than 20 examples). The point for zero trees is the error for predicting the mean. When γ=10, the trees are degenerate and the model always predicts the mean. The other values of γ have similar errors for a single tree. When γ=1, the subsequent trees do not extract useful information. For both of the other values of γ, boosting helped improve the prediction. The code is available as part of AIPython (aipython.org).

7.5.2 Gradient-Boosted Trees

Gradient-boosted trees are a mix of some of the techniques presented previously. They are a linear model where the features are decision trees with binary splits, learned using boosting.

Let K be the number of boosting rounds, which corresponds to the number of decision trees in the linear model.

For regression, the prediction for example $(x_e, y_e)$ is

$$\widehat{y_e} = \sum_{k=1}^{K} f_k(x_e)$$

where each $f_k$ is a decision tree. Each decision tree is represented by a vector of weights, $w$, one weight for each leaf, and a function $q$ that maps the input features into the corresponding leaf; $q$ is an implementation of the if–then–else structure of the tree. $w_j$ is the $j$th component of $w$. The value of the decision tree applied to an example with input feature values $x$ is $w_{q(x)}$, the weight for the leaf numbered $q(x)$. $|w|$ is the number of leaves in the tree, which is one more than the number of splits in the tree.
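As a concrete (made-up) illustration of this representation, a tree can be stored as the pair $(q, w)$, where q is an if–then–else function returning a leaf index and w holds one weight per leaf:

```python
import numpy as np

def q(x):
    """Leaf index for input x: an if-then-else encoding of a tree with 2 splits.

    Tree:  if x["age"] < 30 then leaf 0
           else if x["income"] < 50 then leaf 1 else leaf 2
    (The features "age" and "income" are hypothetical.)
    """
    if x["age"] < 30:
        return 0
    elif x["income"] < 50:
        return 1
    else:
        return 2

w = np.array([-0.4, 0.1, 0.7])    # |w| = 3 leaves, one more than the 2 splits

def tree_value(x):
    """The value of the tree on input x is w_{q(x)}, the weight of the leaf x reaches."""
    return w[q(x)]
```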

For regression, the loss function is the regularized squared error

$$\Bigl(\sum_e (\widehat{y_e} - y_e)^2\Bigr) + \sum_{k=1}^{K} \Omega(f_k).$$

The regularization term is $\Omega(f) = \gamma\,|w| + \frac{1}{2}\lambda \sum_j w_j^2$, where $w$ is the vector of weights for $f$; $\gamma$ and $\lambda$ are nonnegative numbers. This allows for regularizing on both the size of the tree and the values of the parameters.

For classification, the prediction is the sigmoid (or softmax) of the sum of trees

$$\widehat{y_e} = \mathrm{sigmoid}\Bigl(\sum_{k=1}^{K} f_k(x_e)\Bigr)$$

and the error to be optimized is the sum of log loss with the same regularization term used for regression:

$$\Bigl(\sum_e \mathrm{logloss}(\widehat{y_e}, y_e)\Bigr) + \sum_{k=1}^{K} \Omega(f_k).$$

Choosing Leaf Values

The model is learned using boosting, so the trees are learned one at a time. Consider building the $t$th tree, where the previous trees are fixed. Assume for now that the tree structure ($q$) for the $t$th tree is fixed. Consider a single weight $w_j$ that is used for the examples that map to the $j$th leaf of the tree. Let $I_j = \{e \mid q(x_e) = j\}$, the set of training examples that map to the $j$th leaf.

When choosing the value of $w_j$ to minimize the loss, $\gamma\,|w|$ does not depend on the value of $w_j$ and so can be ignored in the minimization.

The $t$th tree is learned with the previous ones fixed. For regression, the loss for the $t$th tree is

$$\mathcal{L}^{(t)} = \frac{1}{2}\lambda \sum_j w_j^2 + \sum_e \Bigl(y_e - \sum_{k=1}^{t} f_k(x_e)\Bigr)^2 + \mathrm{constant}$$

where constant is the regularization for the previous trees plus the size penalty $\gamma\,|w|$ for the $t$th tree, which can be ignored when learning the parameters for the $t$th tree.

Ignoring the constant, this is the same as the squared error on the dataset of differences between the observed values and the predictions of the previous trees, for those examples that map to this leaf, with λ extra pseudo-examples of value 0. Thus,

$$w_j = \frac{\sum_{e \in I_j} \bigl(y_e - \widehat{y_e}^{(t-1)}\bigr)}{|I_j| + \lambda}$$

where $\widehat{y_e}^{(t-1)} = \sum_{k=1}^{t-1} f_k(x_e)$ is the previous prediction for example $e$.
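For example, with made-up numbers: if three examples map to leaf $j$ with residuals $y_e - \widehat{y_e}^{(t-1)}$ of $1.0$, $2.0$, and $3.0$, and $\lambda = 1$, then

$$w_j = \frac{1.0 + 2.0 + 3.0}{3 + 1} = 1.5,$$

which is closer to zero than the unregularized mean residual of $2.0$; the $\lambda$ in the denominator shrinks leaf values towards zero, and the shrinkage is stronger for leaves with fewer examples.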

For classification, the prediction is the sigmoid (or softmax) of the sum of trees, and the error to be optimized for the $t$th tree is log loss with L2 regularization:

$$\widehat{y_e}^{(t)} = \mathrm{sigmoid}\Bigl(\sum_{k=1}^{t} f_k(x_e)\Bigr)$$
$$\mathcal{L}^{(t)} = \frac{1}{2}\lambda \sum_j w_j^2 + \sum_e \mathrm{logloss}\bigl(\widehat{y_e}^{(t)}, y_e\bigr) + \mathrm{constant}$$

where the constant again can be ignored in the minimization.

Taking the derivative of the loss with respect to $w_j$:

$$\frac{\partial}{\partial w_j} \mathcal{L}^{(t)} = \lambda\, w_j + \sum_{e \in I_j} \bigl(\widehat{y_e} - y_e\bigr)$$

where the sum is over the examples in $I_j$ because the predicted value is constant with respect to $w_j$ outside of this set. Finding the weights where the derivative is zero, which would give a minimum, is difficult to do analytically.

For classification, starting with each $w_j$ in the new tree at zero, a single step in the opposite direction of the gradient gives the weights for the new tree. A second-order approximation (based on the Newton–Raphson method, which we do not cover) provides a step size for each weight:

$$w_j = \frac{\sum_{e \in I_j} \bigl(y_e - \widehat{y_e}^{(t-1)}\bigr)}{\sum_{e \in I_j} \widehat{y_e}^{(t-1)} \bigl(1 - \widehat{y_e}^{(t-1)}\bigr) + \lambda}$$

where $\widehat{y_e}^{(t-1)}$ is the previous prediction for example $e$.

Choosing Splits

What remains is to choose the tree structure: choosing the splits of the tree. This is done in a greedy fashion (as in the decision-tree learner of Figure 7.9). Initially, the tree has a single leaf. The algorithm searches for the best leaf to expand. For each leaf, it searches through the candidate splits for the one that most decreases the loss. For small datasets, the best split can be found by sorting the values of the feature and testing every split. For larger datasets, candidate splits can be proposed by subsampling the data or using pre-computed percentiles.

The regularization parameter γ gives a direct penalty for larger trees: splitting stops when the optimal split does not decrease the loss by at least γ. Note that the L2 regularization parameter, λ, when positive, also penalizes larger trees: consider a split that provides no information; with the split, λ applied to each child provides more regularization, because each child has fewer examples, and so the children will fit the training data worse.
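The following is a minimal sketch of the greedy split choice for a single leaf and a single numeric feature, for the regression case, using the leaf-value formula above (with λ treated as extra pseudo-examples of value 0, as in the text); every threshold between consecutive sorted values is tried, and a split is accepted only if it decreases the regularized loss by at least γ. The function names are illustrative, not from any particular library.

```python
import numpy as np

def leaf_value(residuals, lam):
    """w_j = (sum of residuals) / (|I_j| + lambda): the regression leaf value."""
    residuals = np.asarray(residuals, dtype=float)
    return residuals.sum() / (len(residuals) + lam)

def leaf_loss(residuals, lam):
    """Squared error at a leaf, with lambda acting as pseudo-examples of value 0,
    evaluated at the leaf's optimal value."""
    residuals = np.asarray(residuals, dtype=float)
    w = leaf_value(residuals, lam)
    return ((residuals - w) ** 2).sum() + lam * w ** 2

def best_split(x, residuals, lam, gamma):
    """Greedy choice of a threshold on one numeric feature x for one leaf.

    Returns the threshold whose split most decreases the regularized loss,
    or None if no split decreases it by at least gamma."""
    x = np.asarray(x, dtype=float)
    residuals = np.asarray(residuals, dtype=float)
    order = np.argsort(x)
    x, residuals = x[order], residuals[order]
    before = leaf_loss(residuals, lam)
    best, best_decrease = None, gamma              # need a decrease of at least gamma
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue                               # equal values: no usable threshold here
        decrease = before - (leaf_loss(residuals[:i], lam)
                             + leaf_loss(residuals[i:], lam))
        if decrease >= best_decrease:
            best, best_decrease = (x[i - 1] + x[i]) / 2, decrease
    return best
```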

1: procedure Gradient_boosted_tree_learner(Cs, Y, Es, K, λ, γ, η, css, ss)
2:   Inputs
3:     Cs: set of possible conditions
4:     Y: target feature
5:     Es: set of training examples
6:     K: number of boosting rounds
7:     λ: L2 regularization
8:     γ: penalty for increasing tree size
9:     η: weight shrinkage
10:     css: proportion of features (columns) subsampled for each tree
11:     ss: proportion of data subsampled for each tree
12:   Output
13:     function to predict a value of Y for an example
14:   Ts := []
15:   repeat K times
16:     append Decision_tree_learner(sample(Cs, css), Y, sample(Es, ss), γ) to Ts
17:   procedure leaf_value(Es)      ▷ called by Decision_tree_learner
18:     ▷ Es is the set of training examples that reach this leaf
19:     numerator := 0
20:     denominator := λ
21:     for each e ∈ Es do
22:       pred := sigmoid(∑_{t∈Ts} t(e))
23:       numerator := numerator + Y(e) − pred
24:       denominator := denominator + pred ∗ (1 − pred)
25:     return η ∗ numerator / denominator
26:   procedure sum_loss(Es)
27:     new_val := leaf_value(Es)      ▷ value of the leaf that would be constructed
28:     return ∑_{e∈Es} logloss(sigmoid(new_val + ∑_{t∈Ts} t(e)), Y(e))
29:   return the function e ↦ sigmoid(∑_{t∈Ts} t(e))
Figure 7.21: Gradient-boosted trees classification learner. Decision_tree_learner is defined in Figure 7.9

The implementation of gradient-tree boosting for classification shown in Figure 7.21 calls the decision-tree learner of Figure 7.9 iteratively, once for each decision tree in the ensemble. The gradient-boosted-tree learner overrides the leaf_value function called by Decision_tree_learner. The leaf value is no longer the prediction for each example that reaches the leaf, but is a component that goes into predicting the target value of examples that reach the leaf. The function sample(S,p) returns a bag containing each member of S chosen with probability p, without replacement. This allows for regularization by subsampling the data or the features.

There are a number of hyperparameters that are used in Figure 7.21 to reduce overfitting:

  • K, the number of boosting rounds, which is the number of trees in the resulting ensemble.

  • λ, the L2 regularization term.

  • γ, the penalty for increasing the number of leaves by 1.

  • η, the amount the weights for a tree are shrunk to make the algorithm more conservative.

  • css, the proportion of columns (features) that are used for each tree.

  • ss, the proportion of the data used for each tree; e.g., if ss=0.8, then 80% of the data is selected for each tree.

The corresponding parameter names and default values for the open-source packages XGBoost and LightGBM are given in Appendix B.1.
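For example, a rough mapping onto XGBoost's scikit-learn-style interface looks as follows; the values shown are arbitrary placeholders rather than recommendations, and Appendix B.1 gives the exact names and defaults.

```python
from xgboost import XGBClassifier    # assumes the xgboost package is installed

model = XGBClassifier(
    n_estimators=100,       # K: number of boosting rounds (trees)
    reg_lambda=1.0,         # λ: L2 regularization of the leaf weights
    gamma=0.1,              # γ: minimum loss reduction required to make a split
    learning_rate=0.3,      # η: shrinkage applied to each tree's weights
    colsample_bytree=0.8,   # css: proportion of columns (features) used per tree
    subsample=0.8,          # ss: proportion of training examples used per tree
)
# model.fit(X_train, y_train); model.predict(X_test)   # usage, given data arrays
```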

Other common hyperparameters include:

  • The maximum depth of a tree.

  • The minimum number of training examples that map to a child for a split to be considered.

  • The maximum number of leaves in a tree. Note that this value is a property of the whole tree and is not a property of a single node. If it is used as a criterion to stop splitting once that number of leaves has been reached, the resulting tree depends on the order in which leaves are selected for expansion. Rather than the depth-first algorithm of Figure 7.9, it is common to expand the leaves in a best-first manner (choosing the best split among all of the current leaves).