The k-means algorithm is used for hard clustering. The training examples and the number of classes, $k$, are given as input. The algorithm assumes that the domain of each feature defining the examples is cardinal (so that differences in values make sense).
The algorithm constructs $k$ classes, a prediction of a value for each feature for each class, and an assignment of examples to classes.
Suppose $Es$ is the set of all training examples, and the input features are $X_1,\dots,X_n$. Let $X_j(e)$ be the value of input feature $X_j$ for example $e$. We assume that these values are observed. We will associate a class with each integer $c \in \{1,\dots,k\}$.
The $k$-means algorithm constructs:
a function $class : Es \rightarrow \{1,\dots,k\}$, which maps each example to a class. If $class(e)=c$, we say that $e$ is in class $c$.
for each feature $X_j$, a function $\widehat{X_j}$ from classes into the domain of $X_j$, where $\widehat{X_j}(c)$ gives the prediction for every member of class $c$ of the value of feature $X_j$.
Example $e$ is thus predicted to have value $\widehat{X_j}(class(e))$ for each feature $X_j$.
The sum-of-squares error is:
\[
\sum_{e \in Es} \sum_{j=1}^{n} \left( \widehat{X_j}(class(e)) - X_j(e) \right)^2 .
\]
The aim is to find the functions $class$ and $\widehat{X_j}$ that minimize the sum-of-squares error.
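As a concrete reading of this error, here is a minimal NumPy sketch; the array layout (one row per example, one column per feature) and the names `xs`, `classes`, and `means` are illustrative assumptions, not from the book:

```python
import numpy as np

def sum_of_squares_error(xs, classes, means):
    """Sum-of-squares error of a clustering.

    xs:      (num_examples, num_features) array, xs[e, j] = X_j(e)
    classes: length-num_examples array, classes[e] = class(e)
    means:   (k, num_features) array, means[c, j] = X_j-hat(c)
    """
    # means[classes] gives each example's predicted feature values.
    return float(((xs - means[classes]) ** 2).sum())
```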
As shown in Proposition 7.1, to minimize the sum-of-squares error, the prediction for each feature in a class should be the mean of that feature's values over the examples in the class. When there are only a few examples, it is possible to search over assignments of examples to classes to minimize the error. Unfortunately, for more than a few examples, there are too many partitions of the examples into $k$ classes for exhaustive search to be feasible.
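For a single class $c$ and feature $X_j$, this follows from setting the derivative of the error with respect to the prediction (call it $v$, a name introduced here) to zero:
\[
\frac{d}{dv} \sum_{e :\, class(e)=c} \left( v - X_j(e) \right)^2
= 2 \sum_{e :\, class(e)=c} \left( v - X_j(e) \right) = 0
\quad\Longrightarrow\quad
v = \frac{\sum_{e :\, class(e)=c} X_j(e)}{|\{e : class(e)=c\}|} .
\]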
The $k$-means algorithm iteratively improves the sum-of-squares error. Initially, it randomly assigns the examples to the classes. Then it carries out the following two steps:
For each class $c$ and feature $X_j$, assign to $\widehat{X_j}(c)$ the mean value of $X_j(e)$ for each example $e$ in class $c$:
\[
\widehat{X_j}(c) \leftarrow \frac{\sum_{e :\, class(e)=c} X_j(e)}{|\{e : class(e)=c\}|}
\]
where the denominator is the number of examples in class $c$.
Reassign each example to a class: assign each example $e$ to a class $c$ that minimizes
\[
\sum_{j=1}^{n} \left( \widehat{X_j}(c) - X_j(e) \right)^2 .
\]
These two steps are repeated until the second step does not change the assignment of any example.
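The following sketch implements these two steps directly in NumPy, looping until the assignment is stable. The function and variable names are mine, and this is a minimal illustration rather than the algorithm of Figure 10.2 (for instance, an empty class simply keeps a zero mean here):

```python
import numpy as np

def kmeans(xs, k, rng=None):
    """Alternate the two k-means steps until the assignment stops changing.

    xs: (num_examples, num_features) array; returns (classes, means).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    classes = rng.integers(k, size=len(xs))  # random initial assignment
    while True:
        # Step 1: set each class's prediction for each feature to the
        # mean of that feature over the examples in the class.
        means = np.zeros((k, xs.shape[1]))
        for c in range(k):
            members = xs[classes == c]
            if len(members) > 0:
                means[c] = members.mean(axis=0)
        # Step 2: reassign each example to a class that minimizes the
        # sum of squared differences to that class's predictions.
        dists = ((xs[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        new_classes = dists.argmin(axis=1)   # ties -> lowest class index
        if np.array_equal(new_classes, classes):
            return classes, means            # stable assignment reached
        classes = new_classes
```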
An algorithm that implements $k$-means is shown in Figure 10.2. It constructs the sufficient statistics needed to compute the mean of each class for each feature, namely $cc[c]$, the number of examples in class $c$, and $fs[j,c]$, the sum of the values of $X_j(e)$ for the examples in class $c$. These are then used in the function $prediction(j,c)$, which is the latest estimate of $\widehat{X_j}(c)$, and in $class(e)$, the class of example $e$. The algorithm uses the current values of $fs$ and $cc$ to determine the next values (in $fs\_new$ and $cc\_new$).
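Figure 10.2 is not reproduced here, but under my reading of that description, one iteration maintained via these sufficient statistics might look as follows (a hedged sketch; the names `cc` and `fs` follow the description above, and the helper itself is illustrative, not copied from the figure):

```python
import numpy as np

def kmeans_iteration(xs, k, classes):
    """One k-means iteration via sufficient statistics: cc[c] counts the
    examples in class c, and fs[j, c] sums feature j over class c.
    An illustrative sketch, not the code of Figure 10.2."""
    n, num_feats = xs.shape
    cc = np.zeros(k)                    # number of examples per class
    fs = np.zeros((num_feats, k))       # per-class sums of each feature
    for e in range(n):
        cc[classes[e]] += 1
        fs[:, classes[e]] += xs[e]
    # prediction(j, c) = fs[j, c] / cc[c], the latest estimate of X_j-hat(c).
    means = (fs / np.maximum(cc, 1)).T  # guard against empty classes
    dists = ((xs[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)         # the next class(e) for each example
```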
The random initialization could be assigning each example to a class at random, selecting $k$ points at random to be representative of the classes, or assigning some, but not all, of the examples to construct the initial sufficient statistics. The latter two methods may be more useful if the data set is large, as they avoid a pass through the whole data set for initialization.
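Two of these initialization strategies might look as follows; both functions are illustrative sketches (not from Figure 10.2) that produce an initial assignment for the `kmeans` sketch above:

```python
import numpy as np

def init_random_assignment(xs, k, rng):
    """Assign each example to one of the k classes uniformly at random."""
    return rng.integers(k, size=len(xs))

def init_random_points(xs, k, rng):
    """Pick k examples at random as initial class representatives and
    assign every example to its nearest representative."""
    reps = xs[rng.choice(len(xs), size=k, replace=False)]
    dists = ((xs[:, None, :] - reps[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)
```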
An assignment of examples to classes is stable if an iteration of $k$-means does not change the assignment. Stability requires that the $\arg\min$ in the definition of $class$ gives a consistent value for each example in cases where more than one class is minimal. The algorithm has reached a stable assignment when each example is assigned to the same class in one iteration as in the previous iteration. When this happens, $fs$ and $cc$ do not change, and so the Boolean variable $stable$ becomes $true$.
This algorithm will eventually converge to a stable local minimum. This is easy to see: every reassignment reduces the sum-of-squares error, and there are only a finite number of possible assignments. The algorithm often converges in a few iterations.
Suppose an agent has observed a set of $\langle X, Y \rangle$ pairs; these data points are plotted in Figure 10.3(a). The agent wants to cluster the data points into two classes ($k=2$).
In Figure 10.3(b), the points are randomly assigned into the two classes; one class is depicted as $+$ and the other as $\times$. The mean of the points marked with $+$ is shown with $\oplus$, and the mean of the points marked with $\times$ is shown with $\otimes$.
In Figure 10.3(c), the points are reassigned according to the closer of the two means, and the two means are then recomputed from this new assignment.
In Figure 10.3(d), the points are reassigned to the closest mean. This assignment is stable in that no further reassignment will change the assignment of the examples.
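To make this walk-through concrete, here is a run of the `kmeans` sketch above on some made-up two-dimensional points (these are illustrative values, not the data of Figure 10.3):

```python
import numpy as np

rng = np.random.default_rng(1)
# Three loose blobs of illustrative points.
points = np.vstack([
    rng.normal((2.0, 5.0), 0.5, size=(5, 2)),
    rng.normal((5.5, 1.5), 0.5, size=(5, 2)),
    rng.normal((9.0, 9.0), 0.5, size=(4, 2)),
])
classes, means = kmeans(points, k=2, rng=rng)
print(classes)  # stable class (0 or 1) of each point
print(means)    # per-class mean of each feature
```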
A different initial assignment of the points can give a different clustering. One clustering that arises for this data set puts the lower points (those with a $Y$-value less than 3) in one class, and all the other points in the other class.
Running the algorithm with three classes ($k=3$) typically separates the data into the top-right cluster, the left-center cluster, and the lower cluster. However, there are other stable assignments that could be reached, such as having the top three points split between two classes, with all of the other points in a third class. It is even possible for a class to contain no examples.
Some stable assignments may be better, in terms of sum-of-squares error, than other stable assignments. To find the best assignment, it is often useful to try multiple starting configurations, using a random restart, and to select a stable assignment with the lowest sum-of-squares error. Note that any permutation of the labels of a stable assignment is also a stable assignment, so there are invariably multiple local minima.
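A random-restart wrapper over the sketches above (reusing `kmeans` and `sum_of_squares_error`) could look like this; the number of restarts is an arbitrary choice:

```python
import numpy as np

def kmeans_restarts(xs, k, num_restarts=20):
    """Run kmeans from several random initializations and keep the
    stable assignment with the lowest sum-of-squares error."""
    best = None
    for seed in range(num_restarts):
        classes, means = kmeans(xs, k, rng=np.random.default_rng(seed))
        err = sum_of_squares_error(xs, classes, means)
        if best is None or err < best[0]:
            best = (err, classes, means)
    return best  # (error, classes, means) of the best run found
```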
One problem with the $k$-means algorithm is that it is sensitive to the relative scale of the dimensions. For example, if one feature is a height in centimeters, another is an age in years, and another is a binary feature, the different values need to be scaled so that they can be compared. How they are scaled relative to each other affects the classification.
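One common (though not the only) remedy is to standardize each feature before clustering, as in this sketch:

```python
import numpy as np

def standardize(xs):
    """Rescale each feature to zero mean and unit standard deviation so
    that no feature dominates the squared error merely because of its
    units. Min-max scaling to [0, 1] is a common alternative."""
    stds = xs.std(axis=0)
    stds[stds == 0] = 1.0  # leave constant features unchanged
    return (xs - xs.mean(axis=0)) / stds
```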
To find an appropriate number of classes (the $k$), an agent could search over the number of classes. Note that $k+1$ classes can always result in a lower error than $k$ classes, as long as more than $k$ different values are involved. A natural number of classes would be a value of $k$ where there is a large reduction in error going from $k-1$ classes to $k$ classes, but only a gradual reduction in error for larger values. While it is possible to construct $k+1$ classes from $k$ classes, the optimal division into three classes, for example, may be quite different from the optimal division into two classes.
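One way to carry out this search, reusing `kmeans_restarts` and the illustrative `points` from above, is to print the best error found for each candidate $k$ and look for the last sharp drop (sometimes called the elbow):

```python
# Print the lowest error found for each candidate number of classes.
for k in range(1, 6):
    err, _, _ = kmeans_restarts(points, k)
    print(k, round(err, 2))
# A natural k is one where the error drops sharply from k-1 to k
# but only gradually for larger values of k.
```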