7.4 Overfitting


7.4.2 Regularization

Ockham’s razor specifies that we should prefer simpler models over more complex models. Instead of just optimizing fit-to-data, as done in Section 7.2.1, we can optimize fit-to-data plus a term that rewards simplicity and penalizes complexity. The penalty term is a regularizer.

The typical way to use a regularizer is to find a hypothesis $h$ that minimizes:

$$\left(\sum_{e\in E} error(e,h)\right) + \lambda \cdot regularizer(h) \qquad (7.4)$$

where the sum is over the training examples E, error(e,h) is the error of example e for hypothesis h, which specifies how well hypothesis h fits example e, and regularizer(h) is a penalty term that penalizes complexity or deviation from the mean. The regularization parameter, λ, trades off fit-to-data and model simplicity. Notice that, as the number of examples increases, the leftmost sum tends to dominate and the regularizer has little effect; the regularizer has the most effect when there are few examples. The regularization parameter is needed because the error and complexity terms are typically in different units. It can be chosen by prior knowledge, by past experience with similar problems, or by cross validation.

For example, in learning a decision tree, one complexity measure is the number of splits in the tree (which, for a binary decision tree, is one less than the number of leaves). When building a decision tree, we could optimize the sum-of-squares error plus a function of the size of the decision tree, minimizing

$$\left(\sum_{e\in E}\left(Y(e)-\hat{Y}(e)\right)^2\right) + \lambda \cdot |tree|$$

where $|tree|$ is the number of splits in the tree. When building the tree, a single split is worthwhile only if it reduces the sum-of-squares error by more than λ.
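
For concreteness, the following is a minimal Python sketch of this test. It is a hedged illustration, not the book's decision-tree learner; the function names and the representation of a split by the two lists of target values it would produce are assumptions made here.

    def sum_squares_error(ys):
        """Sum-of-squares error of predicting the mean of ys for each example."""
        if not ys:
            return 0.0
        mean = sum(ys) / len(ys)
        return sum((y - mean) ** 2 for y in ys)

    def split_is_worthwhile(ys, left_ys, right_ys, lam):
        """A split is worth adding only if it reduces the sum-of-squares error
        by more than the per-split penalty lam."""
        reduction = sum_squares_error(ys) - (sum_squares_error(left_ys)
                                             + sum_squares_error(right_ys))
        return reduction > lam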

For models with real-valued parameters, an L2 regularizer penalizes the sum of squares of the parameters. To optimize the sum-of-squares error for linear regression with an L2 regularizer, minimize

$$\left(\sum_{e\in E}\left(Y(e)-\sum_{i=0}^{n} w_i \cdot X_i(e)\right)^2\right) + \lambda\left(\sum_{i=0}^{n} w_i^2\right)$$

which is known as ridge regression.
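
As a sketch, this ridge objective can also be minimized in closed form, rather than by the gradient descent of Figure 7.8. The code below assumes NumPy and a design matrix whose first column is all 1s (playing the role of $X_0(e)=1$, so $w_0$ is the bias and, as in the formula above, is regularized along with the other weights):

    import numpy as np

    def ridge_regression(X, y, lam):
        """Return w minimizing sum_e (Y(e) - w.X(e))^2 + lam * sum_i w_i^2.
        Setting the gradient to zero gives (X^T X + lam*I) w = X^T y."""
        n_features = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

    # Hypothetical data: the first column of X is the constant feature X_0 = 1.
    X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
    y = np.array([0.1, 0.9, 2.1, 2.9])
    print(ridge_regression(X, y, lam=0.1))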

To optimize the log loss error for logistic regression with an L2 regularizer, minimize

$$-\left(\sum_{e\in E}\left(Y(e)\log\hat{Y}(e) + (1-Y(e))\log(1-\hat{Y}(e))\right)\right) + \lambda\left(\sum_{i=0}^{n} w_i^2\right)$$

where $\hat{Y}(e) = \mathrm{sigmoid}\left(\sum_{i=0}^{n} w_i \cdot X_i(e)\right)$.

An L2 regularizer is implemented by adding the update

$$w_i := w_i - \eta \cdot (\lambda/|E|) \cdot w_i$$

after line 18 of Figure 7.8 or after line 18 of Figure 7.10 (in the scope of both “for each”). This update divides by the number of examples, $|E|$, because it is carried out once for each example. It is also possible to regularize once after each iteration through all of the examples, in which case the regularizer should not divide by the number of examples. Note that $\eta\cdot\lambda/|E|$ does not change during learning, so it should be computed once and stored.
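
The following is a minimal Python sketch of stochastic gradient descent for logistic regression with this per-example L2 step. It is a stand-in for Figure 7.10, whose code is not reproduced here; the function name, the fixed number of epochs, and the representation of each example as a pair (x, y) with x[0] = 1 and y either 0 or 1 are assumptions made for illustration.

    import math

    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

    def logistic_regression_L2(E, n_features, eta, lam, epochs=1000):
        """SGD for logistic regression, minimizing log loss plus an L2 regularizer."""
        w = [0.0] * n_features
        for _ in range(epochs):
            for x, y in E:
                pred = sigmoid(sum(w[i] * x[i] for i in range(n_features)))
                for i in range(n_features):
                    w[i] -= eta * (pred - y) * x[i]      # gradient step for the log loss
                    w[i] -= eta * (lam / len(E)) * w[i]  # L2 regularization step
        return w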

An L1 regularizer adds a penalty for the sum of the absolute values of the parameters.

Adding an L1 regularizer to the log loss entails minimizing

$$-\left(\sum_{e\in E}\left(Y(e)\log\hat{Y}(e) + (1-Y(e))\log(1-\hat{Y}(e))\right)\right) + \lambda\left(\sum_{i=0}^{n} |w_i|\right).$$

The partial derivative of the sum of absolute values with respect to $w_i$ is the sign of $w_i$, either 1 or $-1$ (defined as $\mathrm{sign}(w_i)=w_i/|w_i|$), at every point except at 0. We do not need to make a step at 0, because the value is already a minimum. To implement an L1 regularizer, each parameter is moved towards zero by a constant, except if that constant would change the sign of the parameter, in which case the parameter becomes zero. Thus, an L1 regularizer can be incorporated into the logistic regression gradient descent algorithm of Figure 7.10 by adding, after line 18 (in the scope of both “for each”):

$$w_i := \mathrm{sign}(w_i) \cdot \max\left(0,\ |w_i| - \eta\cdot\lambda/|E|\right)$$

This is called iterative soft-thresholding and is a special case of the proximal-gradient method.
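
As a sketch, the soft-thresholding update can be written as a small Python helper; the function name and arguments are illustrative assumptions.

    def l1_step(w_i, eta, lam, n_examples):
        """Move w_i towards zero by eta*lam/n_examples, but never past zero."""
        sign = 1.0 if w_i >= 0 else -1.0
        return sign * max(0.0, abs(w_i) - eta * lam / n_examples)

In a per-example update loop such as the logistic regression sketch above, this would replace the L2 regularization step, for example w[i] = l1_step(w[i], eta, lam, len(E)).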

When there are many features, an L1 regularizer tends to make many weights exactly zero, which means the corresponding features are ignored; this is a way to implement feature selection. An L2 regularizer tends to make all of the parameters smaller, but not zero.

Regularization, Pseudocounts and Probabilistic Mixtures

Consider the simplest case of a learner that takes a sequence E of examples $e_1,\dots,e_n$, with no input features. Suppose you regularize to some default value m, and so penalize the difference from m. The regularizers in Section 7.4.2 regularize to 0.

Consider the following programs that do stochastic gradient descent with L2 regularization in different ways. Each takes in the data set E, the value for m, the learning rate η and the regularization parameter λ.

procedure Learn0(E, m, η, λ)
     p := m
     repeat
          for each e_i ∈ E do
               p := p - η*(p - e_i)
          p := p - η*λ*(p - m)
     until termination
     return p

procedure Learn1(E, m, η, λ)
     p := m
     repeat
          for each e_i ∈ E do
               p := p - η*(p - e_i)
               p := p - η*λ*(p - m)
     until termination
     return p

The programs differ in when the regularization happens: Learn0 regularizes once for the whole data set on each iteration, whereas Learn1 regularizes for each example.

Program Learn0 minimizes $\left(\sum_i (p-e_i)^2\right) + \lambda (p-m)^2$, which is minimal when $p = \frac{m\lambda + \sum_i e_i}{\lambda + n}$. Program Learn1 minimizes $\sum_i \left((p-e_i)^2 + \lambda (p-m)^2\right)$, which is minimal when $p = \frac{\lambda}{1+\lambda}\, m + \frac{1}{1+\lambda}\cdot\frac{\sum_i e_i}{n}$.

Program Learn0 is equivalent to having a pseudocount with λ extra examples, each with value m.

Program Learn1 is equivalent to a probabilistic mixture of m and the average of the data.
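
The two programs can be written as runnable Python roughly as follows; the fixed number of passes stands in for the unspecified termination condition, and the data in the example call is made up for illustration.

    def learn0(E, m, eta, lam, passes=1000):
        """Regularize once per pass: like lam pseudo-examples with value m."""
        p = m
        for _ in range(passes):
            for e in E:
                p -= eta * (p - e)
            p -= eta * lam * (p - m)
        return p

    def learn1(E, m, eta, lam, passes=1000):
        """Regularize once per example: a mixture of m and the data average."""
        p = m
        for _ in range(passes):
            for e in E:
                p -= eta * (p - e)
                p -= eta * lam * (p - m)
        return p

    # For small eta, learn0 settles near (lam*m + sum(E)) / (lam + len(E)) and
    # learn1 near lam/(1+lam)*m + 1/(1+lam) * sum(E)/len(E).
    E = [0.2, 0.4, 0.6, 0.8]
    print(learn0(E, m=1.0, eta=0.01, lam=2.0), learn1(E, m=1.0, eta=0.01, lam=2.0))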

For a fixed number of examples, n, these can be mapped into each other; λ for Learn1 is λ for Learn0 divided by n. They act differently when the number of examples varies, for example, in cross validation, when using a single λ for multiple data sets, or in more complicated cases such as collaborative filtering.

For a fixed λ, with n varying, they are qualitatively different. In Learn0, as the number of examples increases, the regularization becomes less and less important. In Learn1, m has the same effect on the prediction, no matter what n is. The strategy of Learn0 is appropriate if the examples are independent of each other, so that enough examples should eventually dominate any prior model. The strategy of Learn1 may be appropriate if there is some chance that the whole data set is misleading.