    Why Care About Prompt Caching in LLMs?

    By ProfitlyAI | March 13, 2026 | 12 Mins Read


    We’ve talked a lot in previous posts about what an incredible tool RAG is for leveraging the power of AI on custom data. However, whether we’re talking about plain LLM API requests, RAG applications, or more complex AI agents, one common question remains the same: how do all these things scale? Specifically, what happens to cost and latency as the number of requests grows? For more advanced AI agents, which may involve multiple calls to an LLM to process a single user query, these questions become particularly important.

    Luckily, in practice, the same input tokens are often repeated across multiple LLM requests. Users ask some specific questions far more often than others, the system prompts and instructions built into AI-powered applications are repeated in every user query, and even within a single prompt, models perform recursive calculations to generate a complete response (remember how LLMs produce text by predicting words one at a time?). As in other applications, caching can significantly help optimize LLM request costs and latency. For instance, according to OpenAI’s documentation, Prompt Caching can reduce latency by up to an impressive 80% and input token costs by up to 90%.


    What about caching?

    Generally speaking, caching in computing is nothing new. At its core, a cache is a component that stores data temporarily so that future requests for the same data can be served faster. Based on this, we can distinguish between two basic cache states – a cache hit and a cache miss. Specifically:

    • A cache hit occurs when the requested data is found in the cache, allowing for quick and cheap retrieval.
    • A cache miss occurs when the data is not in the cache, forcing the application to access the original source, which is more expensive and time-consuming.
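This hit/miss logic can be sketched in a few lines of Python. This is a toy dict-based cache; all the names are illustrative, not from any particular library:

```python
# A dict sitting in front of a slow data source.
slow_calls = 0  # counts how often we fall through to the expensive source

def fetch_from_source(key):
    """Stand-in for an expensive lookup (network, disk, an LLM call...)."""
    global slow_calls
    slow_calls += 1
    return f"value-for-{key}"

cache = {}

def get(key):
    if key in cache:                 # cache hit: fast and cheap
        return cache[key]
    value = fetch_from_source(key)   # cache miss: go to the original source
    cache[key] = value
    return value

get("user:42")     # miss: the source is queried once
get("user:42")     # hit: served straight from the dict
print(slow_calls)  # 1
```

In real systems the dict would be bounded (e.g. with LRU eviction) and entries would expire, but the hit/miss mechanics are the same.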

    One of the most typical implementations of a cache is in web browsers. When we visit a website for the first time, the browser checks for the URL in its cache memory but finds nothing (that is a cache miss). Since the data we’re looking for isn’t available locally, the browser has to perform a more expensive and time-consuming request to the web server across the internet, in order to fetch the data from the remote server where it originally lives. Once the page finally loads, the browser typically copies that data into its local cache. If we try to reload the same page five minutes later, the browser will look for it in its local storage. This time it will find it (a cache hit) and load it from there, without reaching back to the server. This makes the browser faster and lets it consume fewer resources.

    As you may imagine, caching is particularly helpful in systems where the same data is requested multiple times. In most systems, data access is rarely uniform; rather, it tends to follow a distribution where a small fraction of the data accounts for the vast majority of requests. A large portion of real-life applications follows the Pareto principle, meaning that about 80% of the requests target about 20% of the data. If not for the Pareto principle, cache memory would need to be as large as the primary memory of the system, making it very, very expensive.
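A quick simulation illustrates why this skew matters: if roughly 80% of requests land on 20% of the keys, a cache holding only that hot 20% still serves about 80% of lookups. The distribution and numbers below are made up purely for illustration:

```python
import random

random.seed(0)  # make the toy run reproducible

NUM_KEYS = 100
hot = list(range(20))             # the 20% of keys that attract most traffic
cold = list(range(20, NUM_KEYS))  # the long tail

cache = set(hot)  # pretend the cache holds exactly the hot keys
hits = 0
REQUESTS = 10_000
for _ in range(REQUESTS):
    # ~80% of requests go to a hot key, ~20% to a cold one
    key = random.choice(hot) if random.random() < 0.8 else random.choice(cold)
    if key in cache:
        hits += 1

print(f"hit rate: {hits / REQUESTS:.0%}")  # roughly 80%, with a cache 20% the size of the data
```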


    Prompt Caching and a Little Bit about LLM Inference

    The caching concept – storing frequently used data somewhere and retrieving it from there, instead of obtaining it again from its primary source – is applied in a similar way to improve the efficiency of LLM calls, allowing for significantly reduced costs and latency. Caching can be utilized in various components of an AI application, the most important of which is Prompt Caching. Caching can also provide great benefits when applied to other aspects of an AI app, such as RAG retrieval or query-response caching. However, this post focuses only on Prompt Caching.


    To understand how Prompt Caching works, we first need to understand a little about how LLM inference – using a trained LLM to generate text – works. LLM inference is not a single continuous process, but rather is divided into two distinct stages:

    • Pre-fill, which refers to processing the whole prompt at once to produce the first token. This stage requires heavy computation and is thus compute-bound. We can picture a very simplified version of this stage as every token attending to all other tokens, or something like comparing every token with every previous token.
    • Decoding, which appends the last generated token back into the sequence and generates the next one auto-regressively. This stage is memory-bound, since the system must load the whole context of previous tokens from memory to generate every single new token.

    For example, imagine we have the following prompt:

    What should I cook for dinner?

    from which we might then get the first token:

    Here

    and the following decoding iterations:

    Here
    Here are
    Here are 5
    Here are 5 easy
    Here are 5 easy dinner
    Here are 5 easy dinner ideas

    The issue here is that in order to generate the whole response, the model needs to process the same previous tokens over and over again to produce each next word during the decoding stage, which, as you may imagine, is highly inefficient. In our example, this means the model would process the tokens ‘What should I cook for dinner? Here are 5 easy dinner‘ again to produce the output ‘ideas‘, even though it had already processed the tokens ‘What should I cook for dinner? Here are 5 easy‘ some milliseconds earlier.
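A back-of-the-envelope count makes the waste concrete. Assuming, for simplicity, that cost is proportional to the number of tokens processed, reprocessing the whole sequence at every decoding step grows quadratically with output length, while reusing past computation keeps it linear. The function below is toy arithmetic of mine, not a real model:

```python
def tokens_processed(prompt_len, new_tokens, reuse_past):
    """Count token-processing operations for one generation."""
    total = prompt_len   # pre-fill: the prompt is processed once either way
    seq = prompt_len
    for _ in range(new_tokens):
        # with reuse, only the newest token is processed per step;
        # without it, the entire sequence so far is reprocessed
        total += 1 if reuse_past else seq
        seq += 1
    return total

# A 7-token prompt plus 6 generated tokens, as in the example above:
print(tokens_processed(7, 6, reuse_past=False))  # 7 + (7+8+9+10+11+12) = 64
print(tokens_processed(7, 6, reuse_past=True))   # 7 + 6 = 13
```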

    To solve this, KV (Key-Value) Caching is used in LLMs. This means that the intermediate Key and Value tensors for the input prompt and the previously generated tokens are calculated once and then stored in the KV cache, instead of being recomputed from scratch at every iteration. As a result, the model performs only the minimum computation needed to generate each response. In other words, at each decoding iteration, the model only performs the calculations required to predict the newest token, and then appends that token’s Key and Value to the KV cache.

    However, KV caching only works within a single prompt, while generating a single response. Prompt Caching extends the principles of KV caching to reuse computation across different prompts, users, and sessions.


    In practice, with prompt caching, we save the repeated parts of a prompt after the first time they are requested. These repeated parts usually take the form of large prefixes, like system prompts, instructions, or retrieved context. This way, when a new request contains the same prefix, the model reuses the computations made previously instead of recalculating everything from scratch. This is extremely convenient, since it can significantly reduce the running costs of an AI application (we don’t pay full price for repeated input tokens), as well as its latency (we don’t have to wait for the model to process tokens that have already been processed). It is especially useful in applications where prompts contain large repeated instructions, such as RAG pipelines.

    It is important to understand that this caching operates at the token level. In practice, this means that even if two prompts differ at the end, as long as they share the same token prefix, the cached computations for that shared portion can still be reused, and new calculations are performed only for the tokens that differ. The tricky part is that the common tokens have to be at the beginning of the prompt, so how we structure our prompts and instructions becomes particularly important. In our cooking example, we can imagine the following consecutive prompts.

    Prompt 1
    What should I cook for dinner?

    and then, if we enter the prompt:

    Prompt 2
    What should I cook for lunch?

    The shared tokens ‘What should I cook’ should be a cache hit, and we would therefore expect prompt 2 to consume significantly fewer uncached tokens.
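This prefix matching can be sketched as follows, with whitespace splitting as a crude stand-in for real tokenization (actual providers match on the model tokenizer’s tokens):

```python
def shared_prefix_len(tokens_a, tokens_b):
    """Number of leading tokens the two sequences have in common."""
    n = 0
    for a, b in zip(tokens_a, tokens_b):
        if a != b:
            break
        n += 1
    return n

p1 = "What should I cook for dinner ?".split()
p2 = "What should I cook for lunch ?".split()
print(shared_prefix_len(p1, p2))  # 5: "What should I cook for" could be served from cache
```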

    However, if we had the following prompts...

    Prompt 1
    Time for supper! What should I cook?

    and then

    Prompt 2
    Lunch time! What should I cook?

    this would be a cache miss, since the first token of each prompt is different. Because the prompt prefixes differ, we cannot hit the cache, even though their semantics are essentially the same.

    As a result, a basic rule of thumb for getting prompt caching to work is to always put any static information, like instructions or system prompts, at the beginning of the model input. Conversely, any frequently changing information, like timestamps or user IDs, should go at the end of the prompt.
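A minimal sketch of this rule of thumb, with hypothetical helper and field names of mine: the static instructions form the shared, cacheable prefix, while the timestamp, user ID, and question go last:

```python
from datetime import datetime, timezone

SYSTEM_INSTRUCTIONS = "You are a helpful cooking assistant. ..."  # static across all requests

def build_prompt(user_id, question):
    # Cache-friendly ordering: shared prefix first, per-request parts last.
    return (
        f"{SYSTEM_INSTRUCTIONS}\n"
        f"Current time: {datetime.now(timezone.utc).isoformat()}\n"
        f"User: {user_id}\n"
        f"Question: {question}"
    )

prompt = build_prompt("user-42", "What should I cook for dinner?")
print(prompt.startswith(SYSTEM_INSTRUCTIONS))  # True: every user shares the same prefix
```

Placing the timestamp or user ID first would change the very first tokens of every request and defeat the cache entirely.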


    Getting our hands dirty with the OpenAI API

    Nowadays, most frontier foundation models, like GPT or Claude, provide some form of Prompt Caching functionality directly integrated into their APIs. More specifically, in these APIs, Prompt Caching is shared among all users of an organization accessing the same API key. In other words, once one user makes a request and its prefix is stored in the cache, any other user submitting a prompt with the same prefix gets a cache hit. That is, we reuse precomputed results, which significantly reduces token consumption and speeds up response generation. This is particularly useful when deploying AI applications in the enterprise, where we expect many users to use the same application, and thus the same input prefixes.

    On most recent models, Prompt Caching is activated automatically by default, but some level of parametrization is available. We can distinguish between:

    • In-memory prompt cache retention, where cached prefixes are kept for roughly 5-10 minutes and up to 1 hour, and
    • Extended prompt cache retention (only available for specific models), allowing longer retention of the cached prefix, up to a maximum of 24 hours.

    But let’s take a closer look!

    We can see all of this in practice with the following minimal Python example, which makes requests to the OpenAI API using Prompt Caching and the cooking prompts mentioned earlier. I added a rather large shared prefix to the prompts to make the effects of caching more visible:

    from openai import OpenAI

    api_key = "your_api_key"
    client = OpenAI(api_key=api_key)

    prefix = """
    You are a helpful cooking assistant.

    Your task is to suggest simple, practical dinner ideas for busy people.
    Follow these guidelines carefully when generating suggestions:

    General cooking rules:
    - Meals should take less than 30 minutes to prepare.
    - Ingredients should be easy to find in a regular grocery store.
    - Recipes should avoid overly complex techniques.
    - Prefer balanced meals including vegetables, protein, and carbohydrates.

    Formatting rules:
    - Always return a numbered list.
    - Provide 5 suggestions.
    - Each suggestion should include a short explanation.

    Ingredient guidelines:
    - Prefer seasonal vegetables.
    - Avoid exotic ingredients.
    - Assume the user has basic pantry staples such as olive oil, salt, pepper, garlic, onions, and pasta.

    Cooking philosophy:
    - Prefer simple home cooking.
    - Avoid restaurant-level complexity.
    - Focus on meals that people realistically cook on weeknights.

    Example meal styles:
    - pasta dishes
    - rice bowls
    - stir fry
    - roasted vegetables with protein
    - simple soups
    - wraps and sandwiches
    - sheet pan meals

    Diet considerations:
    - Default to healthy meals.
    - Avoid deep frying.
    - Prefer balanced macronutrients.

    Additional instructions:
    - Keep explanations concise.
    - Avoid repeating the same ingredients in every suggestion.
    - Provide variety across the meal suggestions.

    """ * 80
    # huge prefix to make sure we clear the ~1,024-token threshold that activates prompt caching

    prompt1 = prefix + "What should I cook for dinner?"

    response1 = client.responses.create(
        model="gpt-5.2",
        input=prompt1
    )

    print("Response 1:")
    print(response1.output_text)

    and then, for prompt 2:

    prompt2 = prefix + "What should I cook for lunch?"

    response2 = client.responses.create(
        model="gpt-5.2",
        input=prompt2
    )

    print("\nResponse 2:")
    print(response2.output_text)

    print("\nUsage stats:")
    print(response2.usage)

    So, for prompt 2, we are effectively billed at the full rate only for the remaining, non-identical part of the prompt. That is the input tokens minus the cached tokens: 20,014 - 19,840 = just 174 tokens, or in other words, about 99% fewer uncached tokens.
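The arithmetic above can be wrapped in a small helper of mine for monitoring purposes. Recent OpenAI SDKs report the cached portion under the response’s usage object (e.g. `input_tokens_details.cached_tokens`, though exact field names can vary across SDK versions); here the figures from the run above are plugged in directly:

```python
def cache_summary(input_tokens, cached_tokens):
    """Return (tokens billed at the full rate, fraction served from cache)."""
    uncached = input_tokens - cached_tokens
    return uncached, cached_tokens / input_tokens

# Figures reported for prompt 2 above: 20,014 input tokens, 19,840 cached.
uncached, ratio = cache_summary(20_014, 19_840)
print(uncached)                       # 174
print(f"{ratio:.1%} cache hit rate")  # 99.1% cache hit rate
```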

    In any case, since OpenAI imposes a 1,024-token minimum threshold for activating prompt caching, and the cache is preserved for at most 24 hours, it becomes clear that these cost benefits can be realized in practice only when running AI applications at scale, with many active users performing many requests every day. For such cases, though, the Prompt Caching feature can provide substantial cost and time benefits for LLM-powered applications.


    On my mind

    Prompt Caching is a powerful optimization for LLMs that can significantly improve the efficiency of AI applications, both in terms of cost and time. By reusing earlier computations for identical prompt prefixes, the model can skip redundant calculations and avoid repeatedly processing the same input tokens. The result is faster responses and lower costs, especially in applications where large parts of the prompts, such as system instructions or retrieved context, remain constant across many requests. As AI systems scale and the number of LLM calls grows, these optimizations become increasingly important.


    Loved this post? Let’s be friends! Join me on:

    📰Substack 💌 Medium 💼LinkedIn ☕Buy me a coffee!

    All images by the author, unless mentioned otherwise.



