This is Day 4 of the Machine Learning Advent Calendar.
Through the first three days, we explored distance-based models for supervised learning: k-NN, the Nearest Centroid classifier, and the Gaussian models (LDA, QDA, GNB).
In all these models, the idea was the same: we measure distances, and we decide the output based on the nearest points or nearest centers.
Today, we stay in this same family of ideas, but we use the distances in an unsupervised way: k-means.
Now, one question for those who already know this algorithm: which model does k-means look more similar to, the k-NN classifier or the Nearest Centroid classifier?
And if you remember, for all the models we have seen so far, there was not really a "training" phase or hyperparameter tuning:
- For k-NN, there is no training at all.
- For LDA, QDA, or GNB, training is just computing means and variances. And there are also no real hyperparameters.
Now, with k-means, we are going to implement a training algorithm that finally looks like "real" machine learning.
We start with a tiny 1D example. Then we move to 2D.
Goal of k-means
In the training dataset, there are no initial labels.
The goal of k-means is to create meaningful labels by grouping points that are close to each other.
Let us look at the illustration below. You can clearly see two groups of points. Each centroid (the red square and the green square) is in the middle of its cluster, and every point is assigned to the nearest one.
This gives a very intuitive picture of how k-means discovers structure using only distances.
And here, k is the number of centers we try to find.
Now, let us answer the question: which algorithm is k-means closer to, the k-NN classifier or the Nearest Centroid classifier?
Don't be fooled by the k in k-NN and k-means.
They do not mean the same thing:
- in k-NN, k is the number of neighbors, not the number of classes;
- in k-means, k is the number of centroids.
k-means is much closer to the Nearest Centroid classifier.
Both models are represented by centroids, and for a new observation we simply compute the distance to each centroid to decide which one it belongs to.
The difference, of course, is that in the Nearest Centroid classifier, we already know the centroids because they come from labeled classes.
In k-means, we do not know the centroids. The whole goal of the algorithm is to discover suitable ones directly from the data.
The business problem is completely different: instead of predicting labels, we are trying to create them.
And in k-means, the value of k (the number of centroids) is unknown. So it becomes a hyperparameter that we can tune.
k-means with Only One Feature
We start with a tiny 1D example so that everything is visible on one axis. And we will choose the values in such a trivial way that we can immediately see the two centroids:
1, 2, 3, 11, 12, 13
Yes: 2 and 12.
But how would the computer know? The machine will "learn" by guessing step by step.
Here comes the algorithm known as Lloyd's algorithm.
We will implement it in Excel with the following loop:
- choose initial centroids
- compute the distance from each point to each centroid
- assign each point to the nearest centroid
- recompute the centroids as the average of the points in each cluster
- repeat steps 2 to 4 until the centroids no longer move
1. Choose initial centroids
Pick two initial centers, c_1 and c_2.
They should be within the data range (between 1 and 13), as in the layout sketched below.
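As a concrete sketch, here is one possible worksheet layout. The cell addresses and the starting values 1 and 13 are my own illustrative choices, not the only valid ones:

```
A2:A7 : 1, 2, 3, 11, 12, 13   <- the data points
F1    : 1                     <- initial centroid c_1 (illustrative pick)
F2    : 13                    <- initial centroid c_2 (illustrative pick)
```

The formulas in the next steps refer to this layout.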

2. Compute distances
For every knowledge level x:
- compute the gap to c_1,
- compute the gap to c_2.
Usually, we use absolute distance in 1D.
We now have two distance values for every level.
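With the layout sketched above (data in A2:A7, centroids in F1 and F2), the distance formulas could look like this, filled down from row 2 to row 7:

```
B2: =ABS($A2-$F$1)   <- distance from the point to c_1
C2: =ABS($A2-$F$2)   <- distance from the point to c_2
```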

3. Assign clusters
For each point:
- compare the two distances,
- assign the cluster with the smallest one (1 or 2).
In Excel, this is simple IF- or MIN-based logic, as sketched below.
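For example, still assuming the layout above, the assignment in column D could be:

```
D2: =IF(B2<=C2, 1, 2)   <- cluster of the smaller distance (ties go to cluster 1)
```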

4. Compute the new centroids
For each cluster:
- take the points assigned to that cluster,
- compute their average,
- this average becomes the new centroid (see the sketch below).
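Assuming the cluster assignments are in column D, AVERAGEIF does this in one formula per centroid:

```
G1: =AVERAGEIF($D$2:$D$7, 1, $A$2:$A$7)   <- mean of the points in cluster 1
G2: =AVERAGEIF($D$2:$D$7, 2, $A$2:$A$7)   <- mean of the points in cluster 2
```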

5. Iterate until convergence
Now in Excel, thanks to the formulas, we can simply paste the new centroid values into the cells of the initial centroids.
The update is immediate, and after doing this a few times, you will see that the values stop changing. That is when the algorithm has converged.
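If you want the sheet to tell you when to stop, one optional check (still assuming the layout above) is:

```
H1: =IF(AND(F1=G1, F2=G2), "converged", "paste G1:G2 into F1:F2 and repeat")
```

Remember to paste as values (Paste Special, Values); otherwise you paste the formulas themselves and create circular references.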

We can also record each step in Excel, so we can see how the centroids and clusters evolve over time.

k-means with Two Features
Now let us use two features. The process is exactly the same; we simply use the Euclidean distance in 2D.
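For instance, with x1 in column A, x2 in column B, the coordinates of centroid 1 in F1:G1 and those of centroid 2 in F2:G2 (again, my own layout choices), the formulas could become:

```
C2: =SQRT(($A2-$F$1)^2+($B2-$G$1)^2)   <- Euclidean distance to centroid 1
D2: =SQRT(($A2-$F$2)^2+($B2-$G$2)^2)   <- Euclidean distance to centroid 2
E2: =IF(C2<=D2, 1, 2)                  <- cluster assignment, exactly as in 1D
```

The new centroids are again per-coordinate averages: one AVERAGEIF on column A and one on column B for each cluster, following the same pattern as in 1D.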
You can either copy-paste the new centroids as values (with only a few cells to update),

or you can display all the intermediate steps to see the full evolution of the algorithm.

Visualizing the Moving Centroids in Excel
To make the process more intuitive, it is helpful to create plots that show how the centroids move.
Unfortunately, Excel or Google Sheets are not ideal for this kind of visualization, and the data tables quickly become a bit confusing to organize.
If you want to see a full example with detailed plots, you can read this article I wrote almost three years ago, where each step of the centroid movement is shown clearly.

As you can see in this picture, the worksheet became quite disorganized, especially compared to the earlier table, which was very simple.

Choosing the Optimal k: The Elbow Method
So now, it is possible to try k = 2 and k = 3 in our case, and compute the inertia for each one. Then we simply compare the values.
We can even start with k = 1.
For each value of k:
- we run k-means until convergence,
- we compute the inertia, which is the sum of squared distances between each point and its assigned centroid.
In Excel:
- For each point, take the distance to its centroid and square it.
- Sum all these squared distances.
- This gives the inertia for this k (see the sketch below).
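Still assuming the 1D layout above, where columns B and C hold the distances to the two centroids, a minimal version for k = 2 is:

```
E2: =MIN(B2,C2)^2   <- squared distance from the point to its own centroid
E8: =SUM(E2:E7)     <- inertia for this value of k
```

For k = 1 there is a single distance column, so the MIN is unnecessary; for k = 3, the MIN simply takes three arguments.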
For example:
- for k = 1, the centroid is simply the overall mean of x1 and x2,
- for k = 2 and k = 3, we take the converged centroids from the sheets where you ran the algorithm.
Then we can plot inertia as a function of k, for example for k = 1, 2, 3.
For this dataset:
- from 1 to 2, the inertia drops a lot,
- from 2 to 3, the improvement is much smaller.
The "elbow" is the value of k after which the decrease in inertia becomes marginal. In the example, it suggests that k = 2 is sufficient.

Conclusion
k-means is a very intuitive algorithm once you see it step by step in Excel.
We start with simple centroids, compute distances, assign points, update the centroids, and repeat. Now we can see how "machines learn", right?
Well, this is only the beginning: we will see that different models "learn" in very different ways.
And here is the transition for tomorrow's article: the unsupervised version of the Nearest Centroid classifier is indeed k-means.
So what would be the unsupervised version of LDA or QDA? We will answer that in the next article.

