    The Machine Learning “Advent Calendar” Day 18: Neural Network Classifier in Excel

By ProfitlyAI | December 18, 2025


After the Neural Network Regressor, we now move on to the classifier version.

From a mathematical perspective, the two models are very similar. In fact, they differ mainly in the interpretation of the output and the choice of the loss function.

However, the classifier version is where intuition usually becomes much stronger.

In practice, neural networks are used far more often for classification than for regression. Thinking in terms of probabilities, decision boundaries, and classes makes the role of neurons and layers easier to understand.

In this article, you will see:

• how to define the structure of a neural network in an intuitive way,
• why the number of neurons matters,
• and why a single hidden layer is already sufficient, at least in theory.

At this point, a natural question arises:
If one hidden layer is enough, why do we talk so much about deep learning?

The answer is important.

Deep learning is not just about stacking many hidden layers on top of one another. Depth helps, but it is not the whole story. What really matters is how representations are built, reused, and constrained, and why deeper architectures train more efficiently and generalize better in practice.

We will come back to this distinction later. For now, we deliberately keep the network small, so that every computation can be understood, written, and checked by hand.

This is the best way to truly understand how a neural network classifier works.

As with the neural network regressor we built yesterday, we will split the work into two parts.

First, we look at forward propagation and define the neural network as a fixed mathematical function that maps inputs to predicted probabilities.

Then, we move on to backpropagation, where we train this function by minimizing the log loss using gradient descent.

The ideas are exactly the same as before. Only the interpretation of the output and the loss function change.

1. Forward propagation

In this section, we focus on just one thing: the model itself. No training yet. Just the function.

1.1 A simple dataset and the intuition of building a function

We start with a very small dataset:

• 12 observations
• One single feature x
• A binary target y

The dataset is deliberately simple so that every computation can be followed manually. However, it has one important property: the classes are not linearly separable.

This means that a simple logistic regression cannot solve the problem, no matter how well it is trained.
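To make this concrete, here is a hypothetical stand-in for such a dataset (the article's own values appear in the screenshot below): the positive class sits in a middle band of x, so no single threshold can separate the two classes.

```python
import numpy as np

# Hypothetical dataset with the same structure: 12 observations, one feature x,
# a binary target y, and classes that are NOT linearly separable
# (class 1 occupies a middle band, class 0 lies on both sides).
x = np.array([-1.8, -1.5, -1.2, -0.9, -0.4, -0.2,
               0.2,  0.4,  0.9,  1.2,  1.5,  1.8])
y = np.array([0, 0, 0, 0, 1, 1,
              1, 1, 0, 0, 0, 0])
```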

Dataset for Neural Network Classifier – all images by author

However, the intuition is precisely the opposite of what it might seem at first.

What we are going to do is build two logistic regressions first. Each one creates a cut in the input space, as illustrated below.

In other words, we start with one single feature, and we transform it into two new features.

Neural Network Classifier – all images by author

Then, we apply another logistic regression, this time on these two features, to obtain the final output probability.

When written as a single mathematical expression, the resulting function is already a bit complex to read. That is exactly why we use a diagram: not because the diagram is more accurate, but because it is easier to see how the function is built by composition.
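For reference, here is one way to write that single expression, assuming the sigmoid activation σ used throughout this series and the seven coefficient names (a11, b11, a12, b12, a21, a22, b2) introduced in the next section:

```latex
\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad
\hat{p}(x) = \sigma\big( a_{21}\,\sigma(a_{11}x + b_{11})
                       + a_{22}\,\sigma(a_{12}x + b_{12}) + b_{2} \big)
```

Read from the inside out: the two inner sigmoids are the two cuts, and the outer sigmoid is the logistic regression applied to them.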

Neural Network Classifier – all images by author

1.2 Neural Network Structure

So the visual diagram represents the following model:

• One hidden layer with two neurons, which allows us to represent the two cuts we observe in the dataset
• One output neuron, which is itself a logistic regression
Neural Network Classifier – all images by author

In our case, the model depends on seven coefficients:

• Weights and biases for the two hidden neurons
• Weights and bias for the output neuron

Taken together, these seven numbers fully define the model.

Now, if you already understand how a neural network classifier works, here is a question for you:

How many different solutions can this model have?

In other words, how many distinct sets of seven coefficients can produce the same classification boundary, or almost the same predicted probabilities, on this dataset?

1.3 Implementing forward propagation in Excel

We now implement the model using Excel formulas.

To visualize the output of the neural network, we generate new values of x ranging from −2 to 2 with a step of 0.02.

For each value of x, we compute:

• The outputs of the two hidden neurons (A1 and A2)
• The final output of the network

At this stage, the model is not trained yet. We therefore need to fix the seven parameters of the network. For now, we simply use a set of reasonable values, shown below, which allow us to visualize the forward propagation of the model.

It is only one possible configuration of the parameters. Even before training, this already raises an interesting question: how many different parameter configurations could produce a valid solution to this problem?

Coefficients chosen for the neural network (image by author)

We can use the following equations to compute the values of the hidden layer and the output.

Neural Network Classifier – all images by author

The intermediate values A1 and A2 are displayed explicitly. This avoids large, unreadable formulas and makes the forward propagation easy to follow.

Formulas for forward propagation (image by the author)
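As a cross-check outside Excel, here is a minimal sketch of the same forward pass in Python. The coefficient values below are placeholders chosen so that the two cuts fall near x = −0.5 and x = +0.5; the article's own values are in the screenshot above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder coefficients (the article fixes its own "reasonable values").
a11, b11 = 10.0, 5.0               # hidden neuron 1: cut near x = -0.5
a12, b12 = -10.0, 5.0              # hidden neuron 2: cut near x = +0.5
a21, a22, b2 = 10.0, 10.0, -15.0   # output logistic regression

x_grid = np.arange(-2.0, 2.0 + 0.02, 0.02)  # new values of x: -2 to 2, step 0.02
A1 = sigmoid(a11 * x_grid + b11)            # output of hidden neuron 1
A2 = sigmoid(a12 * x_grid + b12)            # output of hidden neuron 2
p_hat = sigmoid(a21 * A1 + a22 * A2 + b2)   # final output: predicted probability
```

With these values, p_hat is close to 1 between the two cuts and close to 0 outside, which is exactly the shape needed for this dataset.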

The dataset has been successfully divided into two distinct classes by the neural network.

Visualization of the output of the neural network — image by the author

1.4 Forward propagation: summary and observations

To recap, we started with a simple training dataset and defined a neural network as an explicit mathematical function, implemented using straightforward Excel formulas and a fixed set of coefficients. By feeding new values of x into this function, we were able to visualize the output of the neural network and observe how it separates the data.

Neural Network Classifier – all images by author

Now, if you look closely at the shapes produced by the hidden layer, which contains the two logistic regressions, you can see that there are four possible configurations. They correspond to the different possible orientations of the slopes of the two logistic functions.

Each hidden neuron can have either a positive or a negative slope. With two neurons, this leads to 2 × 2 = 4 possible combinations. These different configurations can produce very similar decision boundaries at the output, even though the underlying parameters are different.

This explains why the model can admit multiple solutions to the same classification problem.
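One way to verify this symmetry numerically: since σ(−z) = 1 − σ(z), flipping the sign of a hidden neuron's weight and bias can be exactly compensated at the output layer. A small check, reusing the placeholder coefficients and grid from the earlier sketch:

```python
# Flip the slope of hidden neuron 1: sigmoid(-(a11*x + b11)) = 1 - A1.
A1_flipped = sigmoid(-(a11 * x_grid + b11))

# Compensate at the output: a21*A1 = a21*(1 - A1_flipped)
#                                  = -a21*A1_flipped + a21,
# so negate the output weight and absorb +a21 into the output bias.
p_hat_flipped = sigmoid(-a21 * A1_flipped + a22 * A2 + (b2 + a21))

print(np.allclose(p_hat, p_hat_flipped))  # True: identical predictions
```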

Neural Network Classifier – all images by author

The more challenging part is now to determine the values of these coefficients.

That is where backpropagation comes into play.

2. Backpropagation: training the neural network with gradient descent

Once the model is defined, training becomes a numerical problem.

Despite its name, backpropagation is not a separate algorithm. It is simply gradient descent applied to a composed function.

    2.1 Reminder of the backpropagation algorithm

The principle is the same for all weight-based models.

We first define the model, that is, the mathematical function that maps the input to the output.

Then we define the loss function. Since this is a binary classification task, we use the log loss, exactly as in logistic regression.
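As a reminder, for one observation with true label y ∈ {0, 1} and predicted probability p̂, the standard log loss is:

```latex
\mathcal{L}(y, \hat{p}) = -\,y \log(\hat{p}) - (1 - y)\log(1 - \hat{p})
```

The total cost of the model is obtained by summing (or averaging) this quantity over the 12 observations.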

Finally, in order to learn the coefficients, we compute the partial derivatives of the loss with respect to each coefficient of the model. These derivatives are what allow us to update the parameters using gradient descent.

Below is a screenshot showing the final formulas for these partial derivatives.

Neural Network Classifier – all images by author

The backpropagation algorithm can then be summarized as follows (a code sketch of the full loop appears after the list):

1. Initialize the weights of the neural network randomly.
2. Feed the inputs forward through the network to get the predicted output.
3. Calculate the error between the predicted output and the actual output.
4. Backpropagate the error through the network to calculate the gradient of the loss function with respect to the weights.
5. Update the weights using the calculated gradient and a learning rate.
6. Repeat steps 2 to 5 until the model converges.
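Here is a minimal sketch of this whole loop in Python, under the same assumptions as the earlier snippets: sigmoid activations, log loss summed over the observations, and a hypothetical dataset, initialization, and learning rate (the article's own values live in the spreadsheet):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical dataset: class 1 in a middle band, class 0 on both sides.
x = np.array([-1.8, -1.5, -1.2, -0.9, -0.4, -0.2, 0.2, 0.4, 0.9, 1.2, 1.5, 1.8])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0])

rng = np.random.default_rng(0)
a11, a12, b11, b12, a21, a22, b2 = rng.normal(0.0, 1.0, 7)  # step 1: random init
lr = 0.1  # learning rate (assumed)

for step in range(5000):
    # Step 2: forward propagation.
    A1 = sigmoid(a11 * x + b11)
    A2 = sigmoid(a12 * x + b12)
    p = sigmoid(a21 * A1 + a22 * A2 + b2)

    # Step 3: error term. For a sigmoid output with log loss, dL/dz_out = p - y.
    err = p - y

    # Step 4: backpropagate to the seven coefficients (summed over observations).
    g_a21, g_a22, g_b2 = (err * A1).sum(), (err * A2).sum(), err.sum()
    d1 = err * a21 * A1 * (1 - A1)  # error signal reaching hidden neuron 1
    d2 = err * a22 * A2 * (1 - A2)  # error signal reaching hidden neuron 2
    g_a11, g_b11 = (d1 * x).sum(), d1.sum()
    g_a12, g_b12 = (d2 * x).sum(), d2.sum()

    # Step 5: gradient descent update.
    a11 -= lr * g_a11; b11 -= lr * g_b11
    a12 -= lr * g_a12; b12 -= lr * g_b12
    a21 -= lr * g_a21; a22 -= lr * g_a22; b2 -= lr * g_b2

# Step 6 is the loop itself; in practice you would stop once the loss plateaus.
log_loss = -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()
print(f"final log loss: {log_loss:.4f}")
```

Depending on the random seed, this may land on a good solution or in a local minimum, which is exactly the sensitivity to initialization discussed below.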

    2.2 Initialization of the coefficients

The dataset is organized in columns to make the Excel formulas easy to extend.

Input data (image by author)

The coefficients are initialized with specific values here. You can change them, but convergence is not guaranteed. Depending on the initialization, gradient descent may converge to a different solution, converge very slowly, or fail to converge altogether.

Initial values for the coefficients (image by author)

2.3 Forward propagation

In the columns from AG to BP, we implement the forward propagation step. We first compute the two hidden activations A1 and A2, and then the output of the network. These are exactly the same formulas as those used earlier to define the forward propagation of the model.

To keep the computations readable, we process each observation individually. As a result, we have 12 columns for the hidden layer outputs (A1 and A2) and 12 columns for the output layer.

Instead of writing a single summation formula, we compute the values observation by observation. This avoids very large and hard-to-read formulas, and it makes the logic of the computations much clearer.

This column-wise organization also makes it easy to mimic a for loop during gradient descent: the formulas can simply be extended row by row to represent successive iterations.

Forward propagation (image by author)

2.4 Errors and the cost function

In the columns from BQ to CN, we compute the error terms and the values of the cost function.

For each observation, we evaluate the log loss based on the predicted output and the true label. These individual losses are then combined to obtain the total cost for each iteration.

Errors and cost function (image by author)

    2.5 Partial derivatives

We now move on to the computation of the partial derivatives.

The neural network has 7 coefficients, so we need to compute 7 partial derivatives, one for each parameter. For each derivative, the computation is done for all 12 observations, which leads to a total of 84 intermediate values.

To keep this manageable, the sheet is carefully organized. The columns are grouped and color-coded so that each derivative can be followed easily.

In the columns from CO to DL, we compute the partial derivatives associated with a11 and a12.

In the columns from DM to EJ, we compute the partial derivatives associated with b11 and b12.

Partial derivatives — image by author

In the columns from EK to FH, we compute the partial derivatives associated with a21 and a22.

Partial derivatives — image by author

In the columns from FI to FT, we compute the partial derivatives associated with b2.

Partial derivatives — image by author

And to wrap it up, we sum the partial derivatives across the 12 observations.

The resulting gradients are grouped and shown in the columns from Z to FI.

Partial derivatives — image by author
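For reference, here are the per-observation derivatives these columns implement, assuming the sigmoid activations and log loss used above, with p̂ the network output:

```latex
\begin{aligned}
\frac{\partial \mathcal{L}}{\partial a_{21}} &= (\hat{p} - y)\,A_1, &
\frac{\partial \mathcal{L}}{\partial a_{22}} &= (\hat{p} - y)\,A_2, &
\frac{\partial \mathcal{L}}{\partial b_{2}} &= \hat{p} - y, \\
\frac{\partial \mathcal{L}}{\partial a_{11}} &= (\hat{p} - y)\,a_{21}\,A_1(1 - A_1)\,x, &
\frac{\partial \mathcal{L}}{\partial b_{11}} &= (\hat{p} - y)\,a_{21}\,A_1(1 - A_1) &&
\end{aligned}
```

The derivatives for a12 and b12 are symmetric, with a22 and A2 in place of a21 and A1. Summing each expression over the 12 observations gives the seven gradients.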

    2.6 Updating weights in a for loop

These partial derivatives allow us to perform gradient descent on each coefficient. The updates are computed in the columns from R to X.

At each iteration, we can observe how the coefficients evolve. The value of the cost function is shown in column Y, which makes it easy to see whether the descent is working and whether the loss is decreasing.
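In other words, each column from R to X applies the standard gradient descent update to one coefficient θ, with learning rate η and the gradient summed over the 12 observations:

```latex
\theta \leftarrow \theta - \eta \, \frac{\partial \mathcal{L}}{\partial \theta}
```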

After updating the coefficients at each step of the for loop, we recompute the output of the neural network.

Image by author

If the initial values of the coefficients are poorly chosen, the algorithm may fail to converge or may converge to an undesired solution, even with a reasonable step size.

Local minimum of the neural network (image by author)

The GIF below shows the output of the neural network at each iteration of the for loop. It helps visualize how the model evolves during training and how the decision boundary progressively converges toward a solution.

Neural network output visualization with updating weights — image by author

    Conclusion

We have now completed the full implementation of a neural network classifier, from forward propagation to backpropagation, using only explicit formulas.

By building everything step by step, we have seen that a neural network is nothing more than a mathematical function, trained by gradient descent. Forward propagation defines what the model computes. Backpropagation tells us how to adjust the coefficients to reduce the loss.

This file allows you to experiment freely: you can change the dataset, modify the initial values of the coefficients, and observe how the training behaves. Depending on the initialization, the model may converge quickly, converge to a different solution, or get stuck in a local minimum.

Through this exercise, the mechanics of neural networks become concrete. Once these foundations are clear, using high-level libraries feels much less opaque, because you know exactly what is happening behind the scenes.



    Source link
