
    How to Evaluate Retrieval Quality in RAG Pipelines (Part 3): DCG@k and NDCG@k

    By ProfitlyAI | November 12, 2025


    Make sure to also check out the earlier parts:

    👉Part 1: Precision@k, Recall@k, and F1@k

    👉Part 2: Mean Reciprocal Rank (MRR) and Average Precision (AP)

    In Parts 1 and 2 of my post series on retrieval evaluation measures for RAG pipelines, we took a detailed look at the binary retrieval evaluation metrics. More specifically, in Part 1, we went over binary, order-unaware retrieval evaluation metrics, like HitRate@K, Recall@K, Precision@K, and F1@K. Binary, order-unaware retrieval evaluation metrics are essentially the most basic kind of measures we can use for scoring the performance of our retrieval mechanism; they simply classify a result as either relevant or irrelevant, and check whether relevant results make it into the retrieved set.

    Then, in Part 2, we reviewed binary, order-aware evaluation metrics like Mean Reciprocal Rank (MRR) and Average Precision (AP). Binary, order-aware measures also categorise results as either relevant or irrelevant and check whether they appear in the retrieved set, but on top of this, they quantify how well the results are ranked. In other words, they also take into account the rank at which each result is retrieved, apart from whether it is retrieved in the first place.

    In this final part of the retrieval evaluation metrics post series, I am going to elaborate on the other large class of metrics, beyond binary ones: graded metrics. Unlike binary metrics, where results are either relevant or irrelevant, for graded metrics relevance is rather a spectrum. In this way, a retrieved chunk can be more or less relevant to the user's query.

    Two commonly used graded relevance metrics that we are going to look at in today's post are Discounted Cumulative Gain (DCG@k) and Normalized Discounted Cumulative Gain (NDCG@k).


    I write 🍹DataCream, where I am learning and experimenting with AI and data. Subscribe here to learn and explore with me.


    Some graded measures

    For graded retrieval measures, it is first of all important to understand the concept of graded relevance. That is, for graded measures, a retrieved item can be more or less relevant, as quantified by rel_i.

    Image by author
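    For instance, on a graded scale from 0 (completely irrelevant) to 3 (perfect match), a set of five retrieved chunks might carry relevance scores rel = [3, 2, 3, 0, 1]: the first and third chunks answer the query well, the fourth is irrelevant, and the others fall somewhere in between.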

    🎯 Discounted Cumulative Gain (DCG@k)

    Discounted Cumulative Gain (DCG@k) is a graded, order-aware retrieval evaluation metric, allowing us to quantify how useful a retrieved result is, taking into account the rank at which it is retrieved. We can calculate it as follows:

    Image by author
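    Written out, this is the standard definition of DCG@k (the same one implemented in the Python code further down):

    DCG@k = \sum_{i=1}^{k} \frac{rel_i}{\log_2(i + 1)}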

    Here, the numerator rel_i is the graded relevance of the retrieved result i; essentially, it is a quantification of how relevant the retrieved text chunk is. The denominator of this formula is the log of the rank of result i. Essentially, this allows us to penalize items that appear in the retrieved set at lower ranks, emphasizing the idea that results appearing at the top are more important. Thus, the more relevant a result is, the higher the score, but the lower the rank it appears at, the lower the score.

    Let's further explore this with a simple example:

    Image by author

    In any case, a major issue of DCG@k is that, as you can see, it is essentially a sum over all the retrieved items. Thus, a retrieved set with more items (a larger k) and/or more relevant items is inevitably going to result in a larger DCG@k. For instance, in our example, if we just consider k = 4, we would end up with DCG@4 = 28.19. Similarly, DCG@6 would be even higher, and so on. As k increases, DCG@k generally increases, since we include more results, unless the extra items have zero relevance. However, this does not necessarily mean that the retrieval performance is superior. On the contrary, it rather causes a problem, because it does not allow us to compare retrieved sets with different k values based on DCG@k.
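    To see the same effect on a smaller scale, take the relevance list [3, 2, 3, 0, 1] used in the Python example later in this post: DCG@3 ≈ 5.76 while DCG@5 ≈ 6.15, so the score grows simply because more results are included, not because the ranking itself got any better.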

    This issue is effectively solved by the next graded measure we are going to discuss in a moment: NDCG@k. But before that, we need to introduce IDCG@k, which is required for calculating NDCG@k.

    🎯 Ideal Discounted Cumulative Gain (IDCG@k)

    Ideal Discounted Cumulative Gain (IDCG@k), as its name suggests, is the DCG we would get in the ideal scenario where our retrieved set is perfectly ranked based on the retrieved results' relevance. Let's see what the IDCG for our example would be:

    Image by author

    By definition, for a fixed k, IDCG@k is always going to be equal to or larger than any DCG@k, since it represents the score for a perfect retrieval and ranking of results for that k.

    Finally, we can now calculate Normalized Discounted Cumulative Gain (NDCG@k), using DCG@k and IDCG@k.

    🎯 Normalized Discounted Cumulative Gain (NDCG@k)

    Normalized Discounted Cumulative Gain (NDCG@k) is essentially a normalised expression of DCG@k, solving our initial problem and making the score comparable across different retrieved set sizes k. We can calculate NDCG@k with this straightforward formula:

    Image by author
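    In plain notation, it is simply the ratio of the two quantities we have already defined:

    NDCG@k = \frac{DCG@k}{IDCG@k}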

    Basically, NDCG@k allows us to quantify how close our current retrieval and ranking is to the ideal one, for a given k. This conveniently provides us with a number that is comparable across different values of k. In our example, NDCG@5 would be:

    Image by author

    In general, NDCG@k can range from 0 to 1, with 1 representing a perfect retrieval and ranking of the results, and 0 indicating a complete mess.

    So, how can we actually calculate DCG and NDCG in Python?

    If you have read my other RAG tutorials, this is where the War and Peace example would usually come in. However, that code example is getting too big to include in every post, so instead I am going to show you how to calculate DCG and NDCG in Python, doing my best to keep this post at a reasonable length.

    To calculate these retrieval metrics, we first need to define a ground truth set, exactly as we did in Part 1 when calculating Precision@K and Recall@K. The difference here is that, instead of characterising each retrieved chunk as relevant or not using binary relevances (0 or 1), we now assign it a graded relevance score; for example, from completely irrelevant (0) to super relevant (5). Thus, our ground truth set would include the text chunks that have the highest graded relevance scores for each query.

    For instance, for a query like "Who is Anna Pávlovna?", a retrieved chunk that perfectly matches the answer might receive a score of 3, one that partially mentions the needed information might get a 2, and a completely unrelated chunk would get a relevance score equal to 0.
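    To make this concrete, here is a minimal sketch of what such a graded ground truth could look like; the chunk identifiers and scores below are hypothetical:

    # Hypothetical graded ground truth (0 = irrelevant, 3 = perfect match)
    graded_ground_truth = {
        "Who is Anna Pávlovna?": {
            "chunk_012": 3,  # directly answers the question
            "chunk_047": 2,  # partially mentions the needed information
            "chunk_103": 0,  # completely unrelated
        }
    }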

    Using these graded relevance lists for a retrieved result set, we can then calculate DCG@k, IDCG@k, and NDCG@k. We will use Python's math library to handle the logarithmic terms:

    import math

    First of all, we can define a function for calculating DCG@k as follows:

    # DCG@k
    def dcg_at_k(relevance, k):
        # Only consider the top-k results
        k = min(k, len(relevance))
        # Rank i starts at 1, so the discount for rank i is log2(i + 1)
        return sum(rel / math.log2(i + 1) for i, rel in enumerate(relevance[:k], start=1))

    We can calculate IDCG@k by applying similar logic. Essentially, IDCG@k is DCG@k for a perfect retrieval and ranking; thus, we can simply obtain it by computing DCG@k after sorting the results by descending relevance.

    # IDCG@k
    def idcg_at_k(relevance, k):
        # The ideal ranking puts the most relevant results first
        ideal_relevance = sorted(relevance, reverse=True)
        return dcg_at_k(ideal_relevance, k)
    

    Finally, once we have DCG@k and IDCG@k, we can easily calculate NDCG@k as a function of the two. More specifically:

    # NDCG@k
    def ndcg_at_k(relevance, k):
        dcg = dcg_at_k(relevance, k)
        idcg = idcg_at_k(relevance, k)
        # Guard against division by zero when there are no relevant results
        return dcg / idcg if idcg > 0 else 0.0

    As explained, each of these functions takes as input a list of graded relevance scores for the retrieved chunks. For instance, let's suppose that for a specific query, ground truth set, and retrieved result set, we end up with the following list:

    relevance = [3, 2, 3, 0, 1]

    Then, we can calculate the graded retrieval metrics using our functions:

    print(f"DCG@5: {dcg_at_k(relevance, 5):.4f}")
    print(f"IDCG@5: {idcg_at_k(relevance, 5):.4f}")
    print(f"NDCG@5: {ndcg_at_k(relevance, 5):.4f}")

    And that was that! This is how we get the graded retrieval performance measures for our RAG pipeline in Python.

    Finally, similarly to all other retrieval performance metrics, we can also average the scores of a metric across different queries to get a more representative overall score.
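    As a minimal sketch of this, assuming a handful of hypothetical per-query relevance lists, averaging NDCG@5 across queries could look like the following:

    # Hypothetical per-query relevance lists; in practice these come from your ground truth
    relevance_per_query = [
        [3, 2, 3, 0, 1],
        [2, 0, 1, 0, 0],
        [3, 3, 2, 1, 0],
    ]
    mean_ndcg = sum(ndcg_at_k(rel, 5) for rel in relevance_per_query) / len(relevance_per_query)
    print(f"Mean NDCG@5: {mean_ndcg:.4f}")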

    On my mind

    Today's post about graded relevance measures concludes my post series on the most commonly used metrics for evaluating the retrieval performance of RAG pipelines. Throughout this series, we explored binary measures, both order-unaware and order-aware, as well as graded measures, gaining a holistic view of how to approach this. Of course, there are plenty of other things we can look at in order to evaluate the retrieval mechanism of a RAG pipeline, such as latency per query or context tokens sent. Nonetheless, the measures I went over in these posts cover the fundamentals of evaluating retrieval performance.

    This allows us to quantify, evaluate, and ultimately improve the performance of the retrieval mechanism, paving the way for building an effective RAG pipeline that produces meaningful answers, grounded in the documents of our choice.


    Loved this post? Let's be friends! Join me on:

    📰Substack 💌 Medium đŸ’ŒLinkedIn ☕Buy me a coffee!


    What about pialgorithms?

    Looking to bring the power of RAG into your organization?

    pialgorithms can do it for you 👉 book a demo today


