    I Measured Neural Network Training Every 5 Steps for 10,000 Iterations

November 15, 2025


I thought I knew how neural networks learned. Train them, watch the loss go down, save checkpoints each epoch. Standard workflow. Then I measured training dynamics at 5-step intervals instead of at epoch level, and everything I thought I knew fell apart.

The question that started this journey: Does a neural network's capacity expand during training, or is it fixed from initialization? Until 2019, we all assumed the answer was obvious: parameters are fixed, so capacity must be fixed too. But Ansuini et al. found something that shouldn't be possible: the effective representational dimensionality increases during training. Yang et al. confirmed it in 2024.

This changes everything. If the learning space expands while the network learns, how can we mechanistically understand what it is actually doing?

High-Frequency Training Checkpoints

When training a DNN for 10,000 steps, we typically set up checkpoints every 100 or 200 steps. Measuring at 5-step intervals generates a lot of data that is not easy to manage, but these high-frequency checkpoints reveal very useful details about how a DNN learns.

High-frequency checkpoints provide information about:

    • Whether early training errors can be recovered from (they usually can't)
    • Why some architectures work and others fail
    • When interpretability analysis should happen (spoiler: much earlier than we thought)
    • How to design better training approaches

During an applied research project, I measured DNN training at high resolution: every 5 steps instead of every 100 or 500. I used a basic MLP architecture with the same dataset I've been using for the last 10 years.
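To make the setup concrete, here is a minimal sketch of a training loop that records a lightweight measurement every 5 steps. The model, data loader, and `measure_fn` are placeholders, not the exact code from the experiment.

```python
import torch

CHECKPOINT_EVERY = 5  # high-frequency interval, vs. the usual 100-500 steps

def train_with_high_freq_measurements(model, loader, optimizer, loss_fn,
                                      measure_fn, total_steps=10_000):
    """Standard training loop that records a lightweight metric every few steps.

    measure_fn(model, batch) should return a scalar, e.g. the effective
    dimensionality of some layer's activations.
    """
    history = []  # (step, loss, measurement) tuples
    step = 0
    while step < total_steps:
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

            if step % CHECKPOINT_EVERY == 0:
                with torch.no_grad():
                    history.append((step, loss.item(), measure_fn(model, x)))
            step += 1
            if step >= total_steps:
                break
    return history
```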

Figure 1: Experimental setup. We detect discrete transitions using z-score analysis with rolling statistics.
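The detector itself is not reproduced here, but a minimal sketch of that style of z-score detection over rolling statistics might look like this (the window size and threshold are illustration values, not the ones used in the experiment):

```python
import numpy as np

def detect_transitions(series, window=20, z_threshold=3.0):
    """Flag checkpoints where a metric jumps relative to its recent rolling statistics.

    series: 1-D array of a per-checkpoint metric (e.g. effective dimensionality).
    Returns indices where |value - rolling mean| exceeds z_threshold rolling stds.
    """
    series = np.asarray(series, dtype=float)
    transitions = []
    for i in range(window, len(series)):
        window_vals = series[i - window:i]
        mu, sigma = window_vals.mean(), window_vals.std()
        if sigma == 0:
            continue  # flat window, no meaningful z-score
        if abs(series[i] - mu) / sigma > z_threshold:
            transitions.append(i)
    return transitions
```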

The results were surprising. Deep neural networks, even simple architectures, expand their effective parameter space during training. I had assumed this space was predetermined by the architecture itself. Instead, DNNs undergo discrete transitions: small jumps that increase the effective dimensionality of their learning space.

Figure 2: Effective dimensionality of activation patterns during training, measured using the stable rank. Three distinct phases emerge: an initial collapse (steps 0-300) where dimensionality drops from 2500 to 500, an expansion phase (steps 300-5000) where dimensionality climbs to 1000, and stabilization (steps 5000-8000) where dimensionality plateaus. This suggests steps 0-2000 constitute a qualitatively distinct developmental window. Image by author.

Figure 2 tracks the effective dimensionality of activations across training. The transitions concentrate in the first 25% of training and are hidden at larger checkpoint intervals (100-1000 steps); high-frequency checkpointing (every 5 steps) was needed to detect most of them. The curve also reveals interesting behavior. The initial collapse represents loss landscape restructuring, where random initialization gives way to task-aligned structure. Then comes an expansion phase with gradual dimensionality growth. Between 2000 and 3000 steps there is a stabilization that reflects the architectural capacity limits of the DNN.
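A common way to define the stable rank used in Figure 2 is the squared Frobenius norm of the activation matrix over its squared spectral norm. Here is a sketch under that assumption (the article does not spell out its exact estimator):

```python
import torch

def stable_rank(activations: torch.Tensor) -> float:
    """Stable rank of an (n_samples, n_features) activation matrix.

    Defined here as ||A||_F^2 / ||A||_2^2: the sum of squared singular values
    over the largest squared singular value. Ranges from 1 (rank-1 structure)
    up to min(n_samples, n_features).
    """
    a = activations.reshape(activations.shape[0], -1).float()
    s = torch.linalg.svdvals(a)  # singular values in descending order
    return float((s ** 2).sum() / (s[0] ** 2))
```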

Figure 3: Representational dimensionality (measured using the stable rank) shows a strong negative correlation with loss (ρ = −0.951) and a moderate negative correlation with gradient magnitude (ρ = −0.701). As loss decreases from 2.0 to near zero, dimensionality expands from 9.0 to 9.6. Counterintuitively, improved performance correlates with expanded rather than compressed representations. Image by author.

This changes how we should think about DNN training, interpretability, and architecture design.

Exploration vs Expansion

Consider the following two scenarios:

Scenario A: Fixed Capacity (Exploration). Your network starts with a fixed representational capacity. Training explores different regions of this predetermined space. It's like navigating a map that exists from the start. Early training just means you haven't found the good region yet.

Scenario B: Expanding Capacity (Innovation). Your network starts with minimal capacity. Training creates representational structures. It's like building roads while traveling: each road enables new destinations. Early training establishes what becomes learnable later.

    Which is it?

The question matters because, if capacity expands, early training isn't recoverable. You can't just "train longer" to fix early mistakes. Interpretability then has a timeline in which features form in sequence, and understanding that sequence is crucial. Furthermore, architecture design becomes a question of expansion rate, not just final capacity. Finally, critical periods exist: if we miss the window, we miss the capability.

When We Need to Measure High-Frequency Checkpoints

Expansion vs Exploration

Figure 4: High-frequency sampling vs. low-frequency sampling in the experiment described in Figure 1. We detect discrete transitions using z-score analysis with rolling statistics. High-frequency sampling captures rapid transitions that coarse-grained measurement misses. This comparison tests whether temporal resolution affects the observable dynamics.

As seen in Figures 2 and 3, high-frequency sampling reveals interesting structure. We can identify three distinct phases:

Phase 1: Collapse (steps 0-300). The network restructures from random initialization. Dimensionality drops sharply as the loss landscape is reshaped around the task. This isn't learning yet; it's preparation for learning.
Phase 2: Expansion (steps 300-5,000). Dimensionality climbs steadily. This is capacity expansion: the network is building representational structures, simple features that enable complex features that enable higher-order features.
Phase 3: Stabilization (steps 5,000-8,000). Growth plateaus. Architectural constraints bind. The network refines what it has rather than building new capacity.

This plot shows expansion, not exploration. The network at step 5,000 can represent functions that were impossible at step 300, because those functions didn't exist yet.

Capacity Expands, Parameters Don't

Figure 5: Comparison of activation space to weight space. Weight space dimensionality stays nearly constant (9.72-9.79) with only one detected "jump" across 8,000 steps. Image by author.

The comparison between activation and weight spaces shows that the two follow different dynamics under high-frequency sampling. The activation space shows roughly 85 discrete jumps (including Gaussian noise); the weight space shows only one, for the same network and the same training run. This confirms that the network at step 8,000 computes functions inaccessible at step 500 despite an identical parameter count. It is the clearest evidence for expansion.
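To reproduce this kind of comparison, the same stable-rank measure can be applied to a layer's weight matrix at every checkpoint, and the same z-score detector run over both per-checkpoint series. A small sketch, assuming the definitions above:

```python
import torch
import torch.nn as nn

def weight_stable_rank(weight: torch.Tensor) -> float:
    """Stable rank ||W||_F^2 / ||W||_2^2 of a single weight matrix."""
    s = torch.linalg.svdvals(weight.detach().float())
    return float((s ** 2).sum() / (s[0] ** 2))

# Toy illustration at one checkpoint; in practice, record one value per 5-step
# checkpoint for both activations and weights, then count jumps in each series
# with the z-score detector sketched under Figure 1.
layer = nn.Linear(50, 50)
print(weight_stable_rank(layer.weight))
```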

DNNs innovate by generating new options in their parameter space during training in order to represent complex tasks.

Transitions Are Fast and Early

We have seen how high-frequency sampling reveals many more transitions; low-frequency checkpointing would miss nearly all of them. These transitions concentrate early: two thirds of all transitions happen within the first 2,000 steps, just 25% of total training time. If we want to understand which features form and when, we need to look during steps 0-2,000, not at convergence. By step 5,000, the story is over.

Expansion Couples to Optimization

If we look again at Figure 3, we see that as loss decreases, dimensionality expands. The network doesn't simplify as it learns; it becomes more complex. Dimensionality correlates strongly with loss (ρ = -0.951) and moderately with gradient magnitude (ρ = -0.701). This might seem counterintuitive: improved performance correlates with expanded rather than compressed representations. We might expect networks to find simpler, more compressed representations as they learn. Instead, they expand into higher-dimensional spaces.
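For reference, the ρ values quoted in Figure 3 are rank correlations. Here is a sketch of how they could be computed from the per-checkpoint series; the arrays below are toy placeholders, not the experiment's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder series; in practice these come from the 5-step checkpoints.
rng = np.random.default_rng(0)
loss_history = np.linspace(2.0, 0.05, 2000) + rng.normal(0, 0.02, 2000)
dim_history = 9.0 + 0.6 * (1 - loss_history / 2.0) + rng.normal(0, 0.01, 2000)

rho_loss, _ = spearmanr(dim_history, loss_history)
print(f"Spearman rho (dimensionality vs loss): {rho_loss:.3f}")  # strongly negative
```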

    Why?

A possible explanation is that complex tasks require complex representations. The network doesn't find a simpler explanation; it builds the representational structures needed to separate classes, recognize patterns, and generalize.

Practical Deployment

This gives us a different way to understand and debug DNN training in any domain.

If we know when features form during training, we can analyze them as they crystallize rather than reverse-engineering a black box afterward.

In real deployment scenarios, we can track representational dimensionality in real time, detect when expansion phases occur, and run interpretability analyses at each transition point. This tells us precisely when our network is building new representational structures, and when it is done. The measurement approach is architecture-agnostic: it works whether you're training CNNs for vision, transformers for language, RL agents for control, or multimodal models for cross-domain tasks.
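As an illustration of what architecture-agnostic, real-time tracking can look like, here is a minimal sketch using PyTorch forward hooks. The layer selection and logging cadence are assumptions, and this is not the ndtracker API.

```python
import torch
import torch.nn as nn

class DimensionalityTracker:
    """Capture activations from a named module and log their stable rank."""

    def __init__(self, model: nn.Module, layer_name: str):
        self.history = []            # (step, stable_rank) per logged checkpoint
        self._last_activation = None
        layer = dict(model.named_modules())[layer_name]
        layer.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        self._last_activation = output.detach()

    def log(self, step: int):
        if self._last_activation is None:
            return
        a = self._last_activation.reshape(self._last_activation.shape[0], -1).float()
        s = torch.linalg.svdvals(a)
        self.history.append((step, float((s ** 2).sum() / (s[0] ** 2))))

# Usage: tracker = DimensionalityTracker(model, "hidden.2"); then call
# tracker.log(step) every few steps inside the training loop.
```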

    Example 1: Intervention experiments that map causal dependencies. Disrupt training during specific windows and measure which downstream capabilities are lost. If corrupting data during steps 2,000-5,000 permanently damages texture recognition but the same corruption at step 6,000 has no effect, you've found when texture features crystallize and what they depend on. This works identically for object recognition in vision models, syntactic structure in language models, or state discrimination in RL agents.
    Example 2: For production deployment, continuous dimensionality monitoring catches representational problems during training while you can still fix them (a minimal sketch of such a check follows this list). If layers stop expanding, you have architectural bottlenecks. If expansion becomes erratic, you have instability. If early layers saturate while late layers fail to expand, you have information flow problems. Standard loss curves won't show these issues until it's too late; dimensionality monitoring surfaces them immediately.
    Example 3: The architecture design implications are equally practical. Measure expansion dynamics during the first 5-10% of training across candidate architectures. Select for clear phase transitions and structured bottom-up development. These networks aren't just more performant; they're fundamentally more interpretable because features form in clear sequential stages rather than tangled simultaneity.
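For Example 2, a simple check of the kind that could flag stalled expansion during training might look like this (the window length and growth tolerance are arbitrary illustration values):

```python
def expansion_stalled(dim_history, window=200, min_growth=0.01):
    """Return True if effective dimensionality has stopped growing recently.

    dim_history: list of per-checkpoint stable-rank values.
    Flags a potential architectural bottleneck when average growth between the
    last `window` checkpoints and the preceding `window` falls below `min_growth`.
    """
    if len(dim_history) < 2 * window:
        return False  # not enough history yet
    recent = sum(dim_history[-window:]) / window
    earlier = sum(dim_history[-2 * window:-window]) / window
    return (recent - earlier) < min_growth
```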

What's Next

So we've established that networks expand their representational space during training, that we can measure these transitions at high resolution, and that this opens new approaches to interpretability and intervention. The natural question: can you apply this to your own work?

I'm releasing the complete measurement infrastructure as open source. It includes validated implementations for MLPs, CNNs, ResNets, Transformers, and Vision Transformers, with hooks for custom architectures.

Everything runs with three lines added to your training loop.

The GitHub repository provides templates for the experiments discussed above: feature formation mapping, intervention protocols, cross-architecture transfer prediction, and production monitoring setups. The measurement methodology is validated. What matters now is what you discover when you apply it to your domain.

Try it:

pip install ndtracker

Quickstart, instructions, and examples are in the repository: Neural Dimensionality Tracker (NDT).

The code is production-ready. The protocols are documented. The questions are open. I would like to see what you find when you measure your training dynamics at high resolution, whatever the context and the architecture.

You can share your results, open issues with your findings, or just ⭐️ the repo if this changes how you think about training. Remember, the interpretability timeline exists across all neural architectures.

    Javier Marín | LinkedIn | Twitter


References & Further Reading

    • Achille, A., Rovere, M., & Soatto, S. (2019). Critical learning periods in deep networks. In International Conference on Learning Representations (ICLR). https://openreview.net/forum?id=BkeStsCcKQ
    • Frankle, J., Dziugaite, G. K., Roy, D. M., & Carbin, M. (2020). Linear mode connectivity and the lottery ticket hypothesis. In Proceedings of the 37th International Conference on Machine Learning (pp. 3259-3269). PMLR. https://proceedings.mlr.press/v119/frankle20a.html
    • Ansuini, A., Laio, A., Macke, J. H., & Zoccolan, D. (2019). Intrinsic dimension of data representations in deep neural networks. In Advances in Neural Information Processing Systems (Vol. 32, pp. 6109-6119). https://proceedings.neurips.cc/paper/2019/hash/cfcce0621b49c983991ead4c3d4d3b6b-Abstract.html
    • Yang, J., Zhao, Y., & Zhu, Q. (2024). ε-rank and the staircase phenomenon: New insights into neural network training dynamics. arXiv preprint arXiv:2412.05144. https://arxiv.org/abs/2412.05144
    • Olah, C., Mordvintsev, A., & Schubert, L. (2017). Feature visualization. Distill, 2(11), e7. https://doi.org/10.23915/distill.00007
    • Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., DasSarma, N., Drain, D., Ganguli, D., Hatfield-Dodds, Z., Hernandez, D., Jones, A., Kernion, J., Lovitt, L., Ndousse, K., Amodei, D., Brown, T., Clark, J., Kaplan, J., McCandlish, S., & Olah, C. (2021). A mathematical framework for transformer circuits. Transformer Circuits Thread. https://transformer-circuits.pub/2021/framework/index.html


