foundations of computational agents
The k-means algorithm is used for hard clustering. The training examples and the number of classes, $k$, are given as input. The algorithm assumes that the domain of each feature defining the examples is cardinal (so that differences in values make sense).
The algorithm constructs $k$ classes, a prediction of a value for each feature for each class, and an assignment of examples to classes.
Suppose $E$ is the set of all training examples, and the input features are ${X}_{1},\mathrm{\dots},{X}_{n}$. Let ${X}_{j}(e)$ be the value of input feature ${X}_{j}$ for example $e$. We assume that these are observed. We will associate a class with each integer $c\in \{1,\mathrm{\dots},k\}$.
The $k$-means algorithm constructs:
a function $class:E\to \{1,\mathrm{\dots},k\}$, which maps each example to a class. If $class(e)=c$, we say that $e$ is in class $c$.
for each feature ${X}_{j}$, a function $\widehat{{X}_{j}}$ from classes into the domain of ${X}_{j}$, where $\widehat{{X}_{j}}(c)$ gives the prediction for every member of class $c$ of the value of feature ${X}_{j}$.
Example $e$ is predicted to have value $\widehat{{X}_{j}}(class(e))$ for feature ${X}_{j}$.
The sum-of-squares error is:
$$\sum _{e\in E}\sum _{j=1}^{n}{\left(\widehat{{X}_{j}}(class(e))-{X}_{j}(e)\right)}^{2}.$$
The aim is to find the functions $class$ and $\widehat{{X}_{j}}$ that minimize the sum-of-squares error.
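The objective above can be written directly as a short function. The following sketch (with illustrative names `cls` for the $class$ function and `pred` for the $\widehat{{X}_{j}}$ predictions; these names are not from the book's code) computes the sum-of-squares error of a given clustering:

```python
def sum_of_squares_error(examples, cls, pred):
    """Sum-of-squares error of a clustering: for each example and each
    feature, the squared difference between the class's prediction and
    the example's actual value.

    examples: list of numeric feature tuples
    cls: maps example index -> class
    pred: maps class -> list of per-feature predictions
    """
    return sum((pred[cls[i]][j] - x) ** 2
               for i, e in enumerate(examples)
               for j, x in enumerate(e))
```

For instance, two examples $(0,0)$ and $(2,2)$ placed in a single class whose prediction is $(1,1)$ give an error of $1+1+1+1=4$.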
As shown in Proposition 7.1, to minimize the sum-of-squares error, the prediction for a class should be the mean of the values of the examples in that class. When there are only a few examples, it is possible to search over assignments of examples to classes to minimize the error. Unfortunately, for more than a few examples, there are too many partitions of the examples into $k$ classes for exhaustive search to be feasible.
The $k$-means algorithm iteratively improves the sum-of-squares error. Initially, it randomly assigns the examples to the classes. Then it carries out the two steps:
For each class $c$ and feature ${X}_{j}$, assign to $\widehat{{X}_{j}}(c)$ the mean value of ${X}_{j}(e)$ for each example $e$ in class $c$:
$$\widehat{{X}_{j}}(c)\leftarrow \frac{{\displaystyle \sum _{e:class(e)=c}}{X}_{j}(e)}{\left|\{e:class(e)=c\}\right|}$$
where the denominator is the number of examples in class $c$.
Reassign each example to a class: assign each example $e$ to a class $c$ that minimizes
$$\sum _{j=1}^{n}{\left(\widehat{{X}_{j}}(c)-{X}_{j}(e)\right)}^{2}.$$
These two steps are repeated until the second step does not change the assignment of any example.
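The two steps above can be sketched as follows, assuming examples are tuples of numbers. The variable names are illustrative (this is not the book's Figure 10.2 code), and giving an empty class a zero prediction is one arbitrary choice:

```python
import random

def kmeans(examples, k, cls=None, rng=None):
    """Sketch of the two-step k-means loop.
    Returns (cls, means): cls[i] is the class of examples[i], and
    means[c][j] is the prediction of feature j for class c."""
    rng = rng or random.Random(0)
    n_feats = len(examples[0])
    if cls is None:
        # random initial assignment of examples to classes
        cls = [rng.randrange(k) for _ in examples]
    while True:
        # Step 1: set each class's prediction to the mean of its members
        counts = [0] * k
        sums = [[0.0] * n_feats for _ in range(k)]
        for e, c in zip(examples, cls):
            counts[c] += 1
            for j, x in enumerate(e):
                sums[c][j] += x
        means = [[s / counts[c] if counts[c] else 0.0 for s in sums[c]]
                 for c in range(k)]
        # Step 2: reassign each example to a class with closest prediction
        new_cls = [min(range(k),
                       key=lambda c: sum((means[c][j] - x) ** 2
                                         for j, x in enumerate(e)))
                   for e in examples]
        if new_cls == cls:   # stable: no example changed class
            return cls, means
        cls = new_cls
```

Because `min` breaks ties consistently (lowest class index), the loop terminates at a stable assignment.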
An algorithm that implements $k$-means is shown in Figure 10.2. It constructs the sufficient statistics needed to compute the mean of each class for each feature, namely $cc[c]$, the number of examples in class $c$, and $fs[j,c]$, the sum of the values of ${X}_{j}(e)$ for the examples in class $c$. These statistics are used in the function $predn(j,c)$, which gives the latest estimate of $\widehat{{X}_{j}}(c)$, and in $class(e)$, the class of example $e$. The algorithm uses the current values of $fs$ and $cc$ to determine the next values (in $fs\mathrm{\_}new$ and $cc\mathrm{\_}new$).
The random initialization could be assigning each example to a class at random, selecting $k$ points at random to be representative of the classes, or assigning some, but not all, of the examples to construct the initial sufficient statistics. The latter two methods may be more useful if the data set is large, as they avoid a pass through the whole data set for initialization.
An assignment of examples to classes is stable if an iteration of $k$-means does not change the assignment. Stability requires that the $argmin$ in the definition of $class$ gives a consistent value for each example in cases where more than one class is minimal. The algorithm has reached a stable assignment when each example is assigned to the same class in one iteration as in the previous iteration. When this happens, $fs$ and $cc$ do not change, and so the Boolean variable $stable$ becomes $true$.
This algorithm eventually converges to a stable local minimum. This is easy to see because each iteration either reduces the sum-of-squares error or leaves the assignment unchanged, and there are only a finite number of possible assignments of examples to classes, so the reassignments cannot continue forever. In practice, the algorithm often converges in a few iterations.
Suppose an agent has observed fourteen $(X,Y)$ pairs; these data points are plotted in Figure 10.3(a). The agent wants to cluster the data points into two classes ($k=2$).
In Figure 10.3(b), the points are randomly assigned into the classes; one class is depicted as $+$ and the other as $*$. The mean of the points marked with $+$ is shown with $\oplus$, and the mean of the points marked with $*$ is shown with $\circledast$.
In Figure 10.3(c), the points are reassigned according to the closer of the two means, and the mean of each class is then recomputed.
In Figure 10.3(d), the points are reassigned to the closest mean. This assignment is stable in that no further reassignment will change the assignment of the examples.
A different initial assignment of the points can give a different clustering. One clustering that arises for this data set puts the lower points (those with a $Y$-value less than 3) in one class and the other points in another class.
Running the algorithm with three classes ($k=3$) typically separates the data into the top-right cluster, the left-center cluster, and the lower cluster. However, other stable assignments can be reached, such as having the top three points in two different classes and the other points in a third class. It is even possible for a class to contain no examples.
Some stable assignments may be better, in terms of sum-of-squares error, than other stable assignments. To find a good assignment, it is often useful to try multiple starting configurations, using random restarts, and to select a stable assignment with the lowest sum-of-squares error. Note that any permutation of the labels of a stable assignment is also a stable assignment, so there are invariably multiple local minima.
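Random restarts can be sketched as follows for one-dimensional data; `kmeans_1d` and `kmeans_restarts` are hypothetical helper names, not the book's Figure 10.2 code:

```python
import random

def kmeans_1d(points, k, rng):
    """One run of k-means on 1-D points from a random initial assignment.
    Returns (error, assignment); an empty class gets an infinite mean so
    it never attracts points. Hypothetical helper for illustration."""
    cls = [rng.randrange(k) for _ in points]
    while True:
        means = []
        for c in range(k):
            members = [p for p, cc in zip(points, cls) if cc == c]
            means.append(sum(members) / len(members) if members
                         else float("inf"))
        new_cls = [min(range(k), key=lambda c: (means[c] - p) ** 2)
                   for p in points]
        if new_cls == cls:   # stable assignment reached
            error = sum((means[c] - p) ** 2 for p, c in zip(points, cls))
            return error, cls
        cls = new_cls

def kmeans_restarts(points, k, n_restarts=20, seed=0):
    """Run k-means from several random initializations; keep the best."""
    rng = random.Random(seed)
    return min(kmeans_1d(points, k, rng) for _ in range(n_restarts))
```

The `min` keeps the run with the lowest sum-of-squares error, discarding the poorer local minima that individual restarts may reach.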
One problem with the $k$-means algorithm is that it is sensitive to the relative scale of the dimensions. For example, if one feature is $height$, another feature is $age$, and another is a binary feature, the different values need to be scaled so that they can be compared. How they are scaled relative to each other affects the classification.
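One common way to put features on a comparable scale is to standardize each feature to zero mean and unit standard deviation before clustering. A minimal sketch (an illustrative helper, not the book's code; other scalings are equally defensible):

```python
def standardize(examples):
    """Rescale each feature to zero mean and unit standard deviation,
    so that no feature dominates the squared distances merely because
    of its units.  `examples` is a list of numeric tuples."""
    n = len(examples)
    n_feats = len(examples[0])
    means = [sum(e[j] for e in examples) / n for j in range(n_feats)]
    sds = [(sum((e[j] - means[j]) ** 2 for e in examples) / n) ** 0.5
           for j in range(n_feats)]
    # a constant feature (sd == 0) is mapped to 0.0
    return [tuple((e[j] - means[j]) / sds[j] if sds[j] else 0.0
                  for j in range(n_feats))
            for e in examples]
```

For example, a height-like feature ranging over a few units and an age-like feature ranging over hundreds end up on the same scale after standardizing.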
To find an appropriate number of classes (the value of $k$), an agent could search over the number of classes. Note that $k+1$ classes can always achieve a lower error than $k$ classes, as long as the examples take on more than $k$ distinct values. A natural number of classes is a value of $k$ for which there is a large reduction in error going from $k-1$ classes to $k$ classes, but only a gradual reduction in error for larger values. While it is always possible to construct $k+1$ classes from $k$ classes, the optimal division into three classes, for example, may be quite different from the optimal division into two classes.
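This error-versus-$k$ curve can be illustrated on a tiny one-dimensional data set, where the minimal error for each $k$ can be found by exhaustive search (feasible only because the data set is tiny; the data values here are made up for illustration):

```python
from itertools import product

def best_error(points, k):
    """Minimum sum-of-squares error over all assignments of the 1-D
    `points` to at most k classes, by exhaustive search."""
    best = float("inf")
    for assign in product(range(k), repeat=len(points)):
        err = 0.0
        for g in range(k):
            members = [p for p, c in zip(points, assign) if c == g]
            if members:
                m = sum(members) / len(members)
                err += sum((p - m) ** 2 for p in members)
        best = min(best, err)
    return best

# three well-separated groups: the error drops sharply up to k = 3,
# then only marginally for larger k
points = [0.0, 0.1, 5.0, 5.1, 10.0]
errors = {k: best_error(points, k) for k in range(1, 5)}
```

Here the reduction from two to three classes is large, while the reduction from three to four classes is tiny, suggesting $k=3$ as the natural number of classes.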