A point estimate for target feature $Y$ on example $e$ is a prediction of the value of $Y(e)$. Let $\hat{Y}(e)$ be the predicted value for target feature $Y$ on example $e$. The error for this example on this feature is a measure of how close $\hat{Y}(e)$ is to $Y(e)$.
For regression, when the target feature is real valued, both $Y(e)$ and $\hat{Y}(e)$ are real numbers that can be compared arithmetically.
For classification, when the target feature is a discrete function, there are a number of alternatives:
When the domain of $Y$ is binary, one value can be associated with 0, the other value with 1, and a prediction can be some real number. For Boolean features, with domain $\{false, true\}$, we associate 0 with $false$ and 1 with $true$. The predicted value could be any real number or could be restricted to be 0 or 1. Here we assume that the prediction can be any real number, except where explicitly noted. The predicted and actual values can be compared numerically. There is nothing special about $\{0, 1\}$ for binary features; it is possible to use $\{-1, 1\}$ or to use zero and non-zero.
In a cardinal feature the values are mapped to real numbers. This is appropriate when the values in the domain of $Y$ are totally ordered and the differences between the values are meaningful. In this case, the predicted and actual values can be compared on this scale.
Often, mapping values to the real line is not appropriate even when the values are totally ordered; for example, suppose the values are short, medium, and long. The prediction that the value is “either short or long” is very different from the prediction that the value is medium. When the domain of a feature is totally ordered, but the differences between the values are not comparable, the feature is called an ordinal feature.
For a totally ordered feature, either cardinal or ordinal, and for a given value $v$, a Boolean feature can be constructed as a cut: a new feature that has value 1 when $Y \le v$ and 0 otherwise. Which cut-values are used to construct features may be chosen according to the data or be selected a priori. Note that a cut for the maximal value in the domain, if there is one, is redundant as it is always true. It is also possible to construct a cut using less-than rather than less-than-or-equal-to. Combining cuts allows for features that are true for intervals.
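
As a concrete sketch (the function name and cut points below are illustrative, not from the text), cut features for a numeric feature can be constructed as follows:

```python
def cut_features(values, cut_points):
    """For each value, build one Boolean cut feature per cut point v:
    the cut is 1 when value <= v and 0 otherwise."""
    return [[int(x <= v) for v in cut_points] for x in values]

# Cuts at 2 and 4 over the domain {1, ..., 6}; a cut at the maximal value 6
# is omitted because it would always be 1.
print(cut_features([1, 6, 3], [2, 4]))   # [[1, 1], [0, 0], [0, 1]]
```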
When $Y$ is discrete with domain $\{v_1, \ldots, v_k\}$, where $k > 2$, a separate prediction can be made for each $v_i$. This can be modeled by having a binary indicator variable, $Y_i$, associated with each value $v_i$, where $Y_i(e) = 1$ if $Y(e) = v_i$, and $Y_i(e) = 0$ otherwise. For each example $e$, exactly one of $Y_1(e), \ldots, Y_k(e)$ will be 1 and the others will be 0. A prediction gives $k$ real numbers, one real number for each $v_i$.
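
Similarly, here is a minimal sketch of constructing indicator variables, one per domain value (the names are illustrative):

```python
def indicator_variables(values, domain):
    """Map each value to k binary indicators; exactly one is 1 per example."""
    return [[int(x == v) for v in domain] for x in values]

print(indicator_variables([1, 6, 3], domain=[1, 2, 3, 4, 5, 6]))
# [[1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1], [0, 0, 1, 0, 0, 0]]
```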
A trading agent wants to learn a person’s preference for the length of holidays. The holiday can be for 1, 2, 3, 4, 5, or 6 days.
One representation is to have a real-valued variable $Y$ that is the number of days in the holiday.
Another representation is in terms of indicator variables, $Y_1, \ldots, Y_6$, where $Y_i$ represents the proposition that the person would like to stay for $i$ days. For each example $e$, $Y_i(e) = 1$ when there are $i$ days in the holiday, and $Y_i(e) = 0$ otherwise.
The following are five data points using these two representations:
| Example | $Y$ |
|---|---|
| $e_1$ | 1 |
| $e_2$ | 6 |
| $e_3$ | 6 |
| $e_4$ | 2 |
| $e_5$ | 1 |
| Example | $Y_1$ | $Y_2$ | $Y_3$ | $Y_4$ | $Y_5$ | $Y_6$ |
|---|---|---|---|---|---|---|
| $e_1$ | 1 | 0 | 0 | 0 | 0 | 0 |
| $e_2$ | 0 | 0 | 0 | 0 | 0 | 1 |
| $e_3$ | 0 | 0 | 0 | 0 | 0 | 1 |
| $e_4$ | 0 | 1 | 0 | 0 | 0 | 0 |
| $e_5$ | 1 | 0 | 0 | 0 | 0 | 0 |
A third representation is to have a binary cut feature of $Y \le v$ for various values of $v$:
Example | |||||
---|---|---|---|---|---|
1 | 1 | 1 | 1 | 1 | |
0 | 0 | 0 | 0 | 0 | |
0 | 0 | 0 | 0 | 0 | |
0 | 1 | 1 | 1 | 1 | |
1 | 1 | 1 | 1 | 1 |
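
As a check, the second and third representations of these five data points can be generated mechanically from the $Y$-values (a small sketch; the variable names are not from the text):

```python
# Y-values for the five examples e1..e5.
data = [1, 6, 6, 2, 1]
domain = [1, 2, 3, 4, 5, 6]

for y in data:
    indicators = [int(y == v) for v in domain]   # Y_1 .. Y_6
    cuts = [int(y <= v) for v in domain[:-1]]    # Y <= 1 .. Y <= 5
    print(y, indicators, cuts)
```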
A prediction for a new example $e$ in the first representation can be any real number, such as $\hat{Y}(e) = 3.2$.

In the second representation, the learner would predict a value for each $Y_i$ for each example. One such prediction may be, for each example $e$, to predict high values for $Y_1$ and $Y_6$, a moderate value for $Y_2$, and low values for $Y_3$, $Y_4$, and $Y_5$. This is a prediction that the person may like 1 day or 6 days, but will like a stay of 3, 4, or 5 days much less.

In the third representation, the learner could predict a value for $Y \le v$ for each value of $v$. One such prediction may be to predict $Y \le 1$ with value 0.4, $Y \le 2$ with 0.6, and 0.6 for the other values of $v$. It is not rational to predict, for example, that $Y \le 2$ holds but $Y \le 3$ does not, because the first implies the second.
In the following measures of prediction error, $Es$ is a set of examples and $T$ is a set of target features. For target feature $Y \in T$ and example $e \in Es$, the actual value is $Y(e)$ and the predicted value is $\hat{Y}(e)$.
The 0/1 error on $Es$ is the sum of the number of predictions that are wrong:

$$\sum_{e \in Es} \sum_{Y \in T} \mathbb{1}\big(\hat{Y}(e) \ne Y(e)\big)$$

where $\mathbb{1}(\alpha)$ is 0 when $\alpha$ is false, and 1 when $\alpha$ is true. This is the number of incorrect predictions. It does not take into account how wrong the predictions are, just whether they are correct or not.
The absolute error on $Es$ is the sum of the absolute differences between the actual and predicted values on each example:

$$\sum_{e \in Es} \sum_{Y \in T} \big| Y(e) - \hat{Y}(e) \big|$$
This is always non-negative, and is only zero when all the predictions exactly fit the observed values. Unlike for the 0/1 error, close predictions are better than far-away predictions.
The sum-of-squares error on $Es$ is

$$\sum_{e \in Es} \sum_{Y \in T} \big( Y(e) - \hat{Y}(e) \big)^2$$
This measure treats large errors as much worse than small errors. For example, an error of 2 on an example is as bad as 4 errors of 1, and an error of 10 on one example is as bad as 100 errors of 1. Minimizing sum-of-squares error is equivalent to minimizing the root-mean-square (RMS) error, obtained by dividing by the number of examples and taking the square root. Taking the square root and dividing by a constant do not affect which predictions are minimal.
The worst-case error on $Es$ is the maximum absolute difference:

$$\max_{e \in Es} \max_{Y \in T} \big| Y(e) - \hat{Y}(e) \big|$$
In this case, the learner is evaluated by how bad it can be.
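
As a minimal sketch, the four measures can be computed for a single target feature from parallel lists of actual and predicted values (the function name is illustrative):

```python
def errors(actual, predicted):
    """0/1, absolute, sum-of-squares, and worst-case errors for one target feature."""
    diffs = [abs(y - p) for y, p in zip(actual, predicted)]
    return {
        "0/1": sum(1 for y, p in zip(actual, predicted) if y != p),
        "absolute": sum(diffs),
        "sum-of-squares": sum(d * d for d in diffs),
        "worst-case": max(diffs),
    }

# Holiday-length data with the constant prediction 3.2 (the mean) for every example.
e = errors([1, 6, 6, 2, 1], [3.2] * 5)
rms = (e["sum-of-squares"] / 5) ** 0.5   # root-mean-square error
print(e, rms)
```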
These are often described in terms of the norms of the difference between the predicted and actual values. The 0/1 error is the $L_0$ error, the absolute error is the $L_1$ error, the sum-of-squares error is the square of the $L_2$ error, and the worst-case error is the $L_\infty$ error. The sum-of-squares error is often written as $L_2^2$, as the $L_2$ norm takes the square root of the sum of squares. Taking square roots does not affect which value is the minimum. Note that the $L_0$ error does not fit the mathematical definition of a norm.
Consider the data of Figure 7.2. Figure 7.3 shows a plot of the training data (filled circles) and three lines, $L_1$, $L_2$, and $L_\infty$, that predict the $Y$-value for all $X$ points. $L_1$ is the line that minimizes the absolute error, $L_2$ is the line that minimizes the sum-of-squares error, and $L_\infty$ minimizes the worst-case error of the training examples.
As no three points are collinear, any line through any pair of the points minimizes the 0/1, $L_0$, error.
Lines $L_1$ and $L_2$ give similar predictions at the $x$-value of one of the data points: $L_1$ predicts 1.805 and $L_2$ predicts 1.709 there, whereas $L_\infty$ predicts 0.7. The lines give predictions within 1.5 of each other when interpolating within the range of the training data. Their predictions diverge when extrapolating from the data; $L_1$ and $L_\infty$ give very different predictions for $x$-values well outside that range.
An outlier is an example that does not follow the pattern of the other examples. The difference between the lines that minimize the various error measures is most pronounced in how they handle outliers. One of the data points can be seen as an outlier, as the other points lie approximately on a line.
The prediction with the least worst-case error for this example, $L_\infty$, only depends on three of the data points, each of which has the same worst-case error for prediction $L_\infty$. The other data points could be at different locations, as long as they are not farther away from $L_\infty$ than these three points.
The prediction that minimizes the absolute error, $L_1$, does not change as a function of the actual $y$-value of the training examples, as long as the points above the line stay above the line, and those below the line stay below. For example, the prediction that minimizes the absolute error would be the same even if the outlying point had a much more extreme $y$-value.
Prediction $L_2$ is sensitive to all of the data points; if the $y$-value for any point changes, the line that minimizes the sum-of-squares error will change. Changes to outliers will have more effect on the line than changes to points close to the line.
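
The effect of the three loss functions on a fitted line can be explored with a small sketch like the following; the data here is hypothetical (it is not the data of Figure 7.2), and direct numerical optimization is just one simple way to fit the lines:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical points roughly on a line, plus one outlier.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.0, 2.9, 4.2, 9.0])   # the last point is an outlier

def fit_line(loss):
    """Fit y = a*x + b by minimizing the given loss on the residuals."""
    result = minimize(lambda p: loss(y - (p[0] * x + p[1])),
                      x0=[1.0, 0.0], method="Nelder-Mead")
    return result.x

print("absolute  :", fit_line(lambda r: np.sum(np.abs(r))))
print("squares   :", fit_line(lambda r: np.sum(r ** 2)))
print("worst-case:", fit_line(lambda r: np.max(np.abs(r))))
```

Re-running this with a more extreme outlier illustrates the sensitivity differences described above.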
For the special case where the domain of $Y$ is $\{0, 1\}$, and the prediction $\hat{Y}(e)$ is in the range $[0, 1]$ (and so for Boolean domains where $true$ is treated as 1, and $false$ as 0), the following can also be used to evaluate predictions:
The likelihood of the data is the probability of the data when the predicted value is interpreted as a probability, and each of the examples are predicted independently:

$$\prod_{e \in Es} \prod_{Y \in T} \hat{Y}(e)^{Y(e)} \, \big(1 - \hat{Y}(e)\big)^{(1 - Y(e))}$$
One of $Y(e)$ and $(1 - Y(e))$ is 1, and the other is 0. Thus, this product uses $\hat{Y}(e)$ when $Y(e) = 1$ and $1 - \hat{Y}(e)$ when $Y(e) = 0$. A better prediction is one with a higher likelihood. The model with the greatest likelihood is the maximum likelihood model.
The log-likelihood is the logarithm of the likelihood, which is

$$\sum_{e \in Es} \sum_{Y \in T} \Big( Y(e) \log \hat{Y}(e) + \big(1 - Y(e)\big) \log \big(1 - \hat{Y}(e)\big) \Big)$$
A better prediction is one with a higher log-likelihood. To make this into an error term to minimize, the log loss is the negative of the log-likelihood divided by the number of examples.
The log loss is closely related to the notion of entropy. The log loss can be seen as the average number of bits it will take to encode the data given a code that is based on $\hat{Y}(e)$ treated as a probability.
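
A minimal sketch of the log loss for Boolean targets (actual values 0 or 1, predictions strictly between 0 and 1); base-2 logarithms give the answer in bits, matching the coding interpretation, although base $e$ is also common:

```python
import math

def log_loss(actual, predicted):
    """Negative log-likelihood (base 2) divided by the number of examples."""
    ll = sum(y * math.log2(p) + (1 - y) * math.log2(1 - p)
             for y, p in zip(actual, predicted))
    return -ll / len(actual)

# Indicator Y_1 for the holiday data (true for e1 and e5), with the constant
# prediction 0.4 (the empirical frequency), which maximizes the likelihood.
print(log_loss([1, 0, 0, 0, 1], [0.4] * 5))   # about 0.97 bits per example
```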
Information Theory
A bit is a binary digit. Because a bit has two possible values (0 and 1), it can be used to distinguish two items. Two bits can distinguish four items, each associated with either 00, 01, 10, or 11. In general, $k$ bits can distinguish $2^k$ items. Thus, we can distinguish $n$ items with $\lceil \log_2 n \rceil$ bits. It may be surprising, but we can do better than this using probabilities.
Consider this code to distinguish the elements of the set $\{a, b, c, d\}$, with $P(a) = \frac{1}{2}$, $P(b) = \frac{1}{4}$, $P(c) = \frac{1}{8}$, and $P(d) = \frac{1}{8}$:

$$a \mapsto 0 \qquad b \mapsto 10 \qquad c \mapsto 110 \qquad d \mapsto 111$$
This code sometimes uses 1 bit, sometimes 2 bits, and sometimes 3 bits. On average, it uses

$$P(a) \times 1 + P(b) \times 2 + P(c) \times 3 + P(d) \times 3 = \tfrac{1}{2} + \tfrac{2}{4} + \tfrac{3}{8} + \tfrac{3}{8} = 1\tfrac{3}{4} \text{ bits.}$$
For example, a string of 8 characters drawn according to these probabilities is expected to use 14 bits.
With this code, 1 bit is required to distinguish $a$ from the other symbols. Distinguishing $b$ uses 2 bits. Distinguishing $c$ or $d$ requires 3 bits.
It is possible to build a code that, to identify $x$, requires $-\log_2 P(x)$ bits (or the integer greater than this). Suppose there is a sequence of symbols we want to transmit or store and we know the probability distribution over the symbols. A symbol $x$ with probability $P(x)$ can use $-\log_2 P(x)$ bits. To transmit a sequence, each symbol requires, on average,

$$\sum_x -P(x) \log_2 P(x)$$

bits to send it. This is called the information content or entropy of the distribution. This value just depends on the probability distribution of the symbols.
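
A short sketch of this computation for the distribution used in the coding example above (the function name is illustrative):

```python
import math

def entropy(dist):
    """Expected bits per symbol: sum over x of -P(x) * log2(P(x))."""
    return sum(-p * math.log2(p) for p in dist.values() if p > 0)

print(entropy({"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/8}))   # 1.75 bits per symbol
```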
Analogous to conditioning in probability, the expected number of bits it takes to describe the distribution of $x$ given evidence $e$ is

$$I(e) = \sum_x -P(x \mid e) \log_2 P(x \mid e).$$
For a test that can distinguish the cases where $\alpha$ is true from the cases where $\alpha$ is false, the expected information after the test is

$$I(\alpha) \times P(\alpha) + I(\neg\alpha) \times P(\neg\alpha).$$
The entropy of the distribution minus the expected information after the test is called the information gain of the test.
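
As a sketch with made-up numbers (none of these probabilities are from the text), the information gain of a Boolean test can be computed as follows:

```python
import math

def entropy(probs):
    """Sum over x of -P(x) * log2(P(x)) for a list of probabilities."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

def information_gain(prior, cond_true, p_true, cond_false):
    """Entropy of the prior distribution minus the expected entropy after a test
    that is true with probability p_true."""
    after = p_true * entropy(cond_true) + (1 - p_true) * entropy(cond_false)
    return entropy(prior) - after

# A fair binary prior; the test is true half the time and makes the outcome
# much more predictable either way.
print(information_gain([0.5, 0.5], [0.9, 0.1], 0.5, [0.1, 0.9]))   # about 0.53 bits
```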
Consider the length of holiday data of Example 7.3. Suppose there are no input features, so all of the examples get the same prediction.
In the first representation, the prediction that minimizes the sum of absolute errors on the training data presented in Example 7.3 is 2, with an error of 10. The prediction that minimizes the sum-of-squares error on the training data is 3.2. The prediction that minimizes the worst-case error is 3.5.
For the second representation, the prediction that minimizes the sum of absolute errors for the training examples is to predict 0 for each $Y_i$. The prediction that minimizes the sum-of-squares error for the training examples is to predict 0.4 for $Y_1$, 0.2 for $Y_2$, 0 for $Y_3$, $Y_4$, and $Y_5$, and 0.4 for $Y_6$. This is also the prediction that maximizes the likelihood of the training data. The prediction that minimizes the worst-case error for the training examples is to predict 0.5 for $Y_1$, $Y_2$, and $Y_6$, and to predict 0 for the other features.
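
These numbers can be recovered with a few lines, since for a constant prediction the median minimizes the absolute error, the mean minimizes the sum-of-squares error, and the midrange minimizes the worst-case error (a small sketch):

```python
from statistics import mean, median

data = [1, 6, 6, 2, 1]                  # holiday lengths for the five examples
print(median(data))                     # 2    minimizes the sum of absolute errors
print(sum(abs(y - 2) for y in data))    # 10   the resulting absolute error
print(mean(data))                       # 3.2  minimizes the sum-of-squares error
print((min(data) + max(data)) / 2)      # 3.5  minimizes the worst-case error
```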
Thus, which prediction is preferred depends on how the prediction is represented and how it will be evaluated.