    The Machine Learning “Advent Calendar” Day 2: k-NN Classifier in Excel

By ProfitlyAI | December 2, 2025


After exploring the k-NN Regressor and the idea of prediction based on distance, we now take a look at the k-NN Classifier.

The principle is the same, but classification lets us introduce several useful variants, such as Radius Nearest Neighbors, Nearest Centroid, multi-class prediction, and probabilistic distance models.

So we'll first implement the k-NN classifier, then discuss how it can be improved.

You can use this Excel/Google sheet while reading this article to follow all the explanations more easily.

k-NN classifier in Excel – image by author

    Titanic survival dataset

We'll use the Titanic survival dataset, a classic example where each row describes a passenger with features such as class, sex, age, and fare, and the goal is to predict whether the passenger survived.

Titanic survival dataset – image by author – CC0: Public Domain license
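The article works entirely in the spreadsheet, but if you prefer to follow along in code, here is a minimal sketch for loading a comparable dataset; I'm assuming seaborn's bundled copy of the Titanic data, not the exact file behind the sheet.

```python
import seaborn as sns

# Assumption: seaborn's bundled Titanic dataset, with the features
# mentioned in the article (class, sex, age, fare) and the target.
df = sns.load_dataset("titanic")[["pclass", "sex", "age", "fare", "survived"]]
print(df.head())
```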

    Precept of k-NN for Classification

The k-NN classifier is so similar to the k-NN regressor that I could almost have written one single article to explain them both.

In fact, when we look for the k nearest neighbors, we don't use the value y at all, let alone its nature.

BUT, there are still some interesting facts about how classifiers (binary or multi-class) are built, and how the features can be handled differently.

We begin with the binary classification task, and then move on to multi-class classification.

One Continuous Feature for Binary Classification

So, very quickly, we can do the same exercise for one continuous feature, with this dataset.

For the value of y, we usually use 0 and 1 to distinguish the two classes. But as you may notice, this can be a source of confusion.

k-NN classifier in Excel – one continuous feature – image by author

Now, think about it: 0 and 1 are also numbers, right? So we can do exactly the same computation as if we were doing a regression.

That's right. Nothing changes in the computation, as you can see in the following screenshot. And you can of course try to modify the value of the new observation yourself.

k-NN classifier in Excel – prediction for one continuous feature – image by author

The only difference is how we interpret the result. When we take the "average" of the neighbors' y values, this number is interpreted as the probability that the new observation belongs to class 1.

So in reality, "average" is not quite the right interpretation; it is rather the proportion of class 1 among the neighbors.

We can also manually create this plot, to show how the predicted probability changes over a range of x values.

Traditionally, to avoid ending up with a 50% probability, we choose an odd value for k, so that a majority vote can always decide.

k-NN classifier in Excel – predictions for one continuous feature – image by author
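If you want to reproduce this computation outside the spreadsheet, here is a minimal NumPy sketch on hypothetical toy data (not the article's Titanic sheet): the predicted probability is just the mean of the neighbors' 0/1 labels.

```python
import numpy as np

# Hypothetical toy data: one continuous feature x, binary labels y.
x_train = np.array([1.0, 2.0, 3.0, 5.0, 6.0, 8.0])
y_train = np.array([0, 0, 1, 1, 0, 1])

def knn_predict_proba(x_new, k=3):
    """Mean of the k nearest neighbors' 0/1 labels = proportion of class 1."""
    nearest = np.argsort(np.abs(x_train - x_new))[:k]  # k closest points in 1D
    return y_train[nearest].mean()                     # read as P(class 1)

p = knn_predict_proba(4.0, k=3)  # odd k avoids 50/50 ties
print(p, "-> class", int(p >= 0.5))
```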

Two Features for Binary Classification

If we have two features, the operation is also almost the same as in the k-NN regressor.

k-NN classifier in Excel – two continuous features – image by author

One Feature for Multi-Class Classification

Now, let's take an example with three classes for the target variable y.

Then we can see that we cannot use the notion of "average" anymore, since the number that represents the category is not actually a number. We would do better to call them "class 0", "class 1", and "class 2", and predict with a majority vote, as in the sketch below.

k-NN classifier in Excel – multi-class classifier – image by author
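A minimal sketch of that majority vote on hypothetical toy data with three classes; the only change from the binary case is that the average is replaced by a vote:

```python
import numpy as np
from collections import Counter

# Hypothetical toy data with three classes.
x_train = np.array([1.0, 1.5, 3.0, 3.5, 6.0, 6.5])
y_train = np.array([0, 0, 1, 1, 2, 2])

def knn_predict_class(x_new, k=3):
    """Majority vote among the k nearest neighbors; labels are never averaged."""
    nearest = np.argsort(np.abs(x_train - x_new))[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

print(knn_predict_class(3.2, k=3))  # -> 1
```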

    From k-NN to Nearest Centroids

When k Becomes Too Large

Now, let's make k large. How large? As large as possible.

Remember, we also did this exercise with the k-NN regressor, and the conclusion was that if k equals the total number of observations in the training dataset, then the k-NN regressor is just the simple average-value estimator.

For the k-NN classifier, it's almost the same. If k equals the total number of observations, then for each class, we'll get its overall proportion within the entire training dataset.

Some people, from a Bayesian standpoint, call these proportions the priors!

But this doesn't help us much to classify a new observation, because these priors are the same for every point.

    The Creation of Centroids

So let us take one more step.

For each class, we can also group together all the feature values x that belong to that class, and compute their average.

These averaged feature vectors are what we call centroids.

What can we do with these centroids?

We can use them to classify a new observation.

Instead of recalculating distances to the entire dataset for every new point, we simply measure the distance to each class centroid and assign the class of the closest one.

With the Titanic survival dataset, we can start with a single feature, age, and compute the centroids for the two classes: passengers who survived and passengers who didn't.

k-NN classifier in Excel – Nearest Centroids – image by author

Now, it is also possible to use several continuous features.

For example, we can use the two features age and fare.

k-NN classifier in Excel – Nearest Centroids – image by author
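Here is a minimal code sketch of the two-feature version (column names assume seaborn's bundled Titanic data; scikit-learn's NearestCentroid implements the same idea):

```python
import numpy as np
import seaborn as sns

# Assumption: seaborn's bundled Titanic dataset, not the article's exact sheet.
df = sns.load_dataset("titanic")[["age", "fare", "survived"]].dropna()

# One centroid per class: the mean (age, fare) vector of that class.
centroids = df.groupby("survived")[["age", "fare"]].mean()

def nearest_centroid(age, fare):
    """Assign the class whose centroid is closest in Euclidean distance.
    The scale warning applies here too: fare would dominate age
    unless the features are standardized first."""
    diffs = centroids.values - np.array([age, fare])
    return centroids.index[np.argmin(np.linalg.norm(diffs, axis=1))]

print(centroids)
print(nearest_centroid(age=30, fare=50))
```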

And we can discuss some important characteristics of this model:

• Scale matters, as we discussed before for the k-NN regressor.
• Missing values are not a problem here: when we compute the centroids per class, each one is calculated from the available (non-empty) values.
• We went from the most "complex" and "large" model (in the sense that the actual model is the entire training dataset, so we have to store all of it) to the simplest model (we only use one value per feature, and we only store these values as our model).

From Highly Nonlinear to Naively Linear

But now, can you think of one major drawback?

Whereas the basic k-NN classifier is highly nonlinear, the Nearest Centroid method is extremely linear.

In this 1D example, the two centroids are simply the average x values of class 0 and class 1. Because these two averages are close, the decision boundary becomes just the midpoint between them.

So instead of a piecewise, jagged boundary that depends on the exact location of many training points (as in k-NN), we obtain a straight cutoff that depends on only two numbers.

This illustrates how Nearest Centroids compresses the entire dataset into a simple and very linear rule.

k-NN classifier in Excel – Nearest Centroids linearity – image by author
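In code, the whole 1D "model" boils down to two numbers and their midpoint (hypothetical centroid values):

```python
# 1D Nearest Centroid: the whole "model" is two numbers (hypothetical centroids).
c0, c1 = 28.5, 30.0      # mean x of class 0 and class 1
cutoff = (c0 + c1) / 2   # the decision boundary is just their midpoint

def predict(x):
    # Nearest centroid; with c0 < c1 this is equivalent to "x < cutoff -> 0".
    return 0 if abs(x - c0) < abs(x - c1) else 1

print(cutoff, predict(29.0), predict(31.0))  # 29.25 0 1
```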

A Note on Regression: Why Centroids Don't Apply

Now, this kind of improvement is not possible for the k-NN regressor. Why?

In classification, each class forms a group of observations, so computing the average feature vector for each class makes sense, and this gives us the class centroids.

But in regression, the target y is continuous. There are no discrete groups, no class boundaries, and therefore no meaningful way to compute "the centroid of a class".

A continuous target has infinitely many possible values, so we cannot group observations by their y value to form centroids.

The only possible "centroid" in regression would be the global mean, which corresponds to the case k = N in the k-NN regressor.

And this estimator is far too simple to be useful.

In short, the Nearest Centroids Classifier is a natural improvement for classification, but it has no direct equivalent in regression.

Further Statistical Improvements

What else can we do with the basic k-NN classifier?

Average and Variance

With the Nearest Centroids Classifier, we used the simplest statistic there is: the average. A natural reflex in statistics is to add the variance as well.

So now the distance is no longer Euclidean but Mahalanobis. Using this distance, we get the probability based on the distribution characterized by the mean and variance of each class.
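A minimal sketch of this per-class Mahalanobis distance, assuming a diagonal covariance (one mean and one variance per feature per class) and hypothetical class statistics:

```python
import numpy as np

# Hypothetical per-class statistics (diagonal covariance: one variance per feature).
stats = {
    0: {"mean": np.array([30.0, 20.0]), "var": np.array([150.0, 400.0])},
    1: {"mean": np.array([28.0, 50.0]), "var": np.array([180.0, 4000.0])},
}

def mahalanobis_sq(x, mean, var):
    """Squared Mahalanobis distance with diagonal covariance."""
    return np.sum((x - mean) ** 2 / var)

x_new = np.array([25.0, 40.0])
pred = min(stats, key=lambda c: mahalanobis_sq(x_new, stats[c]["mean"], stats[c]["var"]))
print(pred)
```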

Categorical Feature Handling

For categorical features, we cannot compute averages or variances. For the k-NN regressor, we saw that one-hot encoding or ordinal/label encoding was possible, but the scale matters and is not easy to determine.

Here, we can do something equally meaningful, in terms of probabilities: we can count the proportion of each category within a class.

These proportions act exactly like probabilities, describing how likely each category is within each class.

This idea is directly linked to models such as Categorical Naive Bayes, where classes are characterized by frequency distributions over the categories.
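Here is a minimal pandas sketch of these per-class category proportions, using the sex feature (column names again assume seaborn's copy of the data; scikit-learn's CategoricalNB builds a full classifier on the same idea):

```python
import seaborn as sns

df = sns.load_dataset("titanic")[["sex", "survived"]]

# Proportion of each category (male/female) within each class (survived 0/1).
proportions = (
    df.groupby("survived")["sex"]
      .value_counts(normalize=True)   # acts like P(category | class)
      .unstack()
)
print(proportions)
```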

    Weighted Distance

Another path is to introduce weights, so that closer neighbors count more than distant ones. In scikit-learn, the "weights" argument of the neighbors classifiers lets us do exactly that.
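For example, a minimal scikit-learn sketch on the same hypothetical toy data as before:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[1.0], [2.0], [3.0], [5.0], [6.0], [8.0]])  # hypothetical toy data
y = np.array([0, 0, 1, 1, 0, 1])

# weights="distance" makes each neighbor's vote proportional to 1/distance,
# so closer neighbors count more than distant ones.
clf = KNeighborsClassifier(n_neighbors=3, weights="distance").fit(X, y)
print(clf.predict([[4.0]]), clf.predict_proba([[4.0]]))
```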

We can also switch from "k neighbors" to a fixed radius around the new observation, which leads to radius-based classifiers.

    Radius Nearest Neighbors

We often find a graphic like the following used to explain the k-NN classifier. But actually, with a radius drawn like this, it better reflects the idea of Radius Nearest Neighbors.

One advantage is the control over the neighborhood. It is especially interesting when the distance has a concrete meaning, such as geographical distance.

Radius Nearest Neighbors classifier – image by author

The drawback, however, is that you have to know the radius in advance.

By the way, this notion of radius nearest neighbors also works for regression.
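scikit-learn provides both the classifier and the regressor variant; a minimal sketch on the same toy data:

```python
import numpy as np
from sklearn.neighbors import RadiusNeighborsClassifier, RadiusNeighborsRegressor

X = np.array([[1.0], [2.0], [3.0], [5.0], [6.0], [8.0]])
y = np.array([0, 0, 1, 1, 0, 1])

# Every training point within the fixed radius votes; the radius must be
# chosen in advance, and outlier_label covers empty neighborhoods.
clf = RadiusNeighborsClassifier(radius=1.5, outlier_label=0).fit(X, y)
print(clf.predict([[4.0]]))

# The same idea works for regression: average the y values inside the radius.
reg = RadiusNeighborsRegressor(radius=1.5).fit(X, y.astype(float))
print(reg.predict([[4.0]]))
```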

Recap of the Different Variants

All these small changes give different models, each one trying to improve the basic idea of comparing neighbors: a more complex definition of distance, or a control parameter that lets us choose between local neighbors and a more global characterization of the neighborhood.

We will not explore all these models here. I simply can't help going a bit too far when a small variation naturally leads to another idea.

For now, consider this an announcement of the models we'll implement later this month.

Variants and improvements of the k-NN classifier – image by author

    Conclusion

In this article, we explored the k-NN classifier from its most basic form to several extensions.

The central idea never really changes: a new observation is classified by how similar it is to the training data.

But this simple idea can take many different shapes.

With continuous features, similarity is based on geometric distance.
With categorical features, we look instead at how often each category appears among the neighbors.

When k becomes very large, the entire dataset collapses into just a few summary statistics, which leads naturally to the Nearest Centroids Classifier.

Understanding this family of distance-based and probability-based ideas helps us see that many machine learning models are simply different ways of answering the same question:

Which class does this new observation most resemble?

In the next articles, we'll continue exploring density-based models, which can be understood as global measures of similarity between observations and classes.


