    How to Maximize Agentic Memory for Continual Learning

By ProfitlyAI | December 10, 2025


LLMs have become models capable of automating a variety of tasks, such as research and coding. However, oftentimes you work with an LLM, complete a task, and the next time you interact with it, you start from scratch.

This is a major problem when working with LLMs. We waste a lot of time simply repeating instructions to LLMs, such as the desired code formatting or how to perform tasks according to our preferences.

This is where agents.md files come in: a way to apply continual learning to LLMs, where the LLM learns your patterns and behaviors by storing generalizable knowledge in a separate file. This file is then read every time you start a new task, preventing the cold-start problem and helping you avoid repeating instructions.

In this article, I'll provide a high-level overview of how I achieve continual learning with LLMs by regularly updating the agents.md file.

In this article, you'll learn how to apply continual learning to LLMs. Image by Gemini.

Why do we need continual learning?

Starting with a fresh agent context takes time. The agent needs to pick up on your preferences, and you have to spend more time interacting with the agent to get it to do exactly what you want.

    For instance:

• Telling the agent to use Python 3.13 syntax instead of 3.12
• Informing the agent to always use return types on functions
• Ensuring the agent never uses the Any type
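
To make this concrete, here is a minimal sketch of how such preferences could be written down in agents.md (the exact wording and structure are up to you; this is just an illustration):

## Coding conventions
- Use Python 3.13 syntax; do not fall back to 3.12-style constructs
- Always annotate return types on functions
- Never use the Any type; prefer precise types or generics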

I often had to explicitly tell the agent to use Python 3.13 syntax and not 3.12 syntax, probably because 3.12 syntax is more prevalent in the training data.

The whole point of using AI agents is to be fast. Thus, you don't want to spend time repeating instructions about which Python version to use, or that the agent should never use the Any type.

Additionally, the AI agent often spends extra time figuring out information that you already have available, for example:

• The name of your documents table
• The names of your CloudWatch logs
• The prefixes on your S3 buckets

If the agent doesn't know the name of your documents table, it has to:

1. List all tables
2. Find a table that sounds like the documents table (there could be several potential options)
3. Either make a lookup against the table to confirm, or ask the user
This image represents what an agent has to do to find the name of your documents table: first list all tables in the database, then find similar table names, and finally confirm it has the correct table by either asking the user or making a lookup against the table. This takes a lot of time. Instead, you can store the name of the documents table in agents.md and be far more effective with your coding agent in future interactions. Image by Gemini.

This takes a lot of time, and it is something we can easily prevent by adding the documents table name, CloudWatch log names, and S3 bucket prefixes to agents.md, as sketched below.
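
The corresponding agents.md entries could look something like this (the table, log group, and bucket names below are hypothetical placeholders, not values from any real project):

## Infrastructure reference (placeholder values, replace with your own)
- Documents table: documents-prod (DynamoDB)
- CloudWatch log group: /aws/lambda/document-ingest
- S3 bucket prefix: s3://acme-data/documents/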

Thus, the main reason we need continual learning is that repeating instructions is annoying and time-consuming, and when working with AI agents, we want to be as effective as possible.

How to apply continual learning

There are two main ways I approach continual learning, both involving heavy usage of the agents.md file, which you should have in every repository you work on:

1. Whenever the agent makes a mistake, I tell it how to correct the error, and to remember this for later in the agents.md file
2. After each thread I have with the agent, I use the prompt below. This ensures that anything I told the agent throughout the thread, or knowledge it discovered along the way, is stored for later use. This makes later interactions far more effective.
Generalize the knowledge from this thread, and remember it for later.
Anything that could be useful to know for a later interaction,
when doing similar things. Store it in agents.md
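
After running this prompt, the agent might append something like the following to agents.md. These entries are hypothetical and only illustrate the kind of generalized notes that end up being stored:

## Learned from previous threads (hypothetical examples)
- Tests are run with pytest from the repository root
- API responses are validated with Pydantic models before being returned
- Database migrations live in the migrations/ folder and are applied with Alembic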

Applying these two simple principles will get you 80% of the way to continual learning with LLMs and make you a far more effective engineer.


The most important point is to always keep the agentic memory in agents.md in mind. Whenever the agent does something you don't like, you always have to remember to store the correction in agents.md.

You might think you risk bloating the agents.md file, which would make the agent both slower and more expensive. However, this isn't really the case. LLMs are extremely good at condensing knowledge down into a file. Furthermore, even if you have an agents.md file consisting of thousands of words, it's not really a problem with regard to either context length or cost.

The context length of frontier LLMs is hundreds of thousands of tokens, so that's no issue at all. As for cost, you'll probably see the cost of using the LLM go down, because the agent will spend fewer tokens figuring out information that is already present in agents.md.

Heavy usage of agents.md for agentic memory will both make LLM usage faster and reduce cost.

Some additional tips

I'd also like to add some extra tips that are useful when dealing with agentic memory.

The first tip is that when interacting with Claude Code, you can access the agent's memory using "#", and then write what to remember. For example, write this into the terminal when interacting with Claude Code:

# Always use Python 3.13 syntax, avoid 3.12 syntax

You'll then get a choice, as you can see in the image below. You can save it to the user memory, which stores the information for all your interactions with Claude Code, no matter the code repository. This is useful for generic knowledge, like always having a return type on functions.

The second and third options are to save it to the current folder you're in or to the root folder of your project. This can be useful either for storing folder-specific knowledge, for example describing only a specific service, or for storing information about the code repository in general.

Claude Code memory options. You can save into the user memory, which keeps the memory across all your sessions, no matter the repository; into a subfolder of the project you're in, for example if you want to store knowledge about a specific service; or into the root project folder, so all work in the repository has the context. Image by the author.

Furthermore, different coding agents use different memory files.

• Claude Code uses CLAUDE.md
• Warp uses WARP.md
• Cursor uses .cursorrules

However, all agents usually read agents.md, which is why I recommend storing knowledge in that file, so you have access to the agentic memory no matter which coding agent you're using. Claude Code may be the best coding agent at one point, but we might see another coding agent on top another day.
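
If a tool still insists on its own memory file, one lightweight approach (a suggestion on my part, not something these tools require) is to keep agents.md as the single source of truth and make the tool-specific file a short pointer to it, for example a CLAUDE.md containing only:

Read agents.md in the repository root and treat everything in it as project memory and conventions.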

AGI and continual learning

I'd also like to add a note on AGI and continual learning. True continual learning is often said to be one of the last hindrances to achieving AGI.

Currently, LLMs mostly fake continual learning by simply storing things they learn in files they read later on (such as agents.md). Ideally, however, LLMs would continually update their model weights whenever they learn new information, essentially the way humans learn instincts.

Unfortunately, true continual learning hasn't been achieved yet, but it's likely a capability we'll see more of in the coming years.

    Conclusion

In this article, I've discussed how to become a far more effective engineer by utilizing agents.md for continual learning. With it, your agent will pick up on your habits, the mistakes you make, the information you use often, and many other useful pieces of knowledge, which in turn makes later interactions with your agent far more effective. I believe heavy usage of the agents.md file is essential to becoming a great engineer, and it is something you should constantly strive for.

👉 My Free Resources

    🚀 10x Your Engineering with LLMs (Free 3-Day Email Course)

    📚 Get my free Vision Language Models ebook

    💻 My webinar on Vision Language Models

👉 Find me on socials:

    📩 Subscribe to my newsletter

    🧑‍💻 Get in touch

    🔗 LinkedIn

    🐦 X / Twitter

    ✍️ Medium


