    RAG with Hybrid Search: How Does Keyword Search Work?

By ProfitlyAI | March 4, 2026 | 11 Mins Read


In my previous posts, I’ve talked quite a lot about Retrieval-Augmented Generation (RAG). Specifically, I’ve covered the basics of the RAG approach, as well as a bunch of related concepts, like chunking, embeddings, reranking, and retrieval evaluation.

The traditional RAG approach is so useful because it allows for searching for relevant parts of text in a large knowledge base based on the meaning of the text, rather than exact words. In this way, it allows us to leverage the power of AI on our custom documents. Ironically, as useful as this similarity search is, it often fails to retrieve parts of text that are exact matches to the user’s prompt. More specifically, when searching in a large knowledge base, specific keywords (such as particular technical terms or names) may get lost, and relevant chunks may not be retrieved even when the user’s query contains the exact words.

Fortunately, this issue can be easily tackled by utilising an older keyword-based search approach, like BM25 (Best Matching 25). Then, by combining the results of the similarity search and the BM25 search, we can essentially get the best of both worlds and significantly improve the results of our RAG pipeline.

    . . .

In information retrieval systems, BM25 is a ranking function used to evaluate how relevant a document is to a search query. Unlike similarity search, BM25 evaluates the document’s relevance to the user’s query not based on the semantic meaning of the document, but rather on the exact words it contains. More specifically, BM25 is a bag-of-words (BoW) model, meaning that it doesn’t take into account the order of the words in a document (from which the semantic meaning emerges), but rather the frequency with which each word appears in the document.

The BM25 score for a given query q containing terms t and a document d can be (not so) easily calculated as follows:

score(d, q) = Σ_{t ∈ q} IDF(t) · f(t,d) · (k₁ + 1) / ( f(t,d) + k₁ · (1 − b + b · |d| / avgdl) )

where f(t,d) is the number of times term t appears in document d, |d| is the length of the document, and avgdl is the average document length in the knowledge base.

Since this expression can be a bit overwhelming, let’s take a step back and look at it little by little.

    . . .

Starting simple with TF-IDF

The fundamental underlying concept of BM25 is TF-IDF (Term Frequency – Inverse Document Frequency). TF-IDF is a classic information retrieval concept aiming to measure how important a word is in a specific document within a knowledge base. In other words, it measures how many documents of the knowledge base a term appears in, allowing in this way to express how specific and informative a term is for a particular document. The rarer a term is in the knowledge base, the more informative it is considered to be for a specific document.

In particular, for a document d in a knowledge base and a term t, the Term Frequency TF(t,d) can be defined as follows:

TF(t,d) = f(t,d) / |d|

where f(t,d) is the number of times term t appears in document d, and |d| is the size of the document. The Inverse Document Frequency IDF(t) can be defined as follows:

IDF(t) = log( N / n(t) )

where N is the total number of documents in the knowledge base and n(t) is the number of documents containing term t. Then, the TF-IDF score can be calculated as the product of TF and IDF as follows:

TF-IDF(t,d) = TF(t,d) · IDF(t)

    . . .

Let’s do a quick example to get a better grip on TF-IDF. Let’s assume a tiny knowledge base containing three movies with the following descriptions:

1. “A sci-fi thriller about time travel and a dangerous journey across alternate realities.”
2. “A romantic drama about two strangers who fall in love during unexpected time travel.”
3. “A sci-fi adventure featuring an alien explorer forced to travel across galaxies.”

After removing the stopwords, we can consider the following terms in each document:

• document 1: sci-fi, thriller, time, travel, dangerous, journey, alternate, realities
  • size of document 1, |d1| = 8
• document 2: romantic, drama, two, strangers, fall, love, unexpected, time, travel
  • size of document 2, |d2| = 9
• document 3: sci-fi, adventure, featuring, alien, explorer, forced, travel, galaxies
  • size of document 3, |d3| = 8
• total documents in the knowledge base, N = 3

We can then calculate f(t,d) for each term in each document.

Next, for each term, we also calculate the Document Frequency and the Inverse Document Frequency.

And then, finally, we calculate the TF-IDF score of each term.

So, what can we get from this? Let’s look, for example, at the TF-IDF scores of document 1. The word ‘travel’ is not informative at all, since it is included in all documents of the knowledge base. On the flip side, words like ‘thriller’ and ‘dangerous’ are very informative, especially for document 1, since they are only included in it.

In this way, the TF-IDF score provides a simple and straightforward way to identify and quantify the importance of the words in each document of a knowledge base. To put it differently, the higher the total score of the words in a document, the rarer the information in this document is in comparison to the information contained in all other documents in the knowledge base.
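To make the arithmetic concrete, here is a minimal sketch of the TF-IDF calculation for the three movie descriptions above, using nothing beyond the Python standard library:

```python
import math

# The three movie descriptions from above, after stopword removal
docs = [
    ["sci-fi", "thriller", "time", "travel", "dangerous", "journey",
     "alternate", "realities"],
    ["romantic", "drama", "two", "strangers", "fall", "love",
     "unexpected", "time", "travel"],
    ["sci-fi", "adventure", "featuring", "alien", "explorer", "forced",
     "travel", "galaxies"],
]
N = len(docs)  # total documents in the knowledge base

def tf(t, d):
    # Term frequency: raw count of t in d, normalized by document size |d|
    return d.count(t) / len(d)

def idf(t):
    # Inverse document frequency: log(N / number of documents containing t)
    n_t = sum(1 for d in docs if t in d)
    return math.log(N / n_t)

def tf_idf(t, d):
    return tf(t, d) * idf(t)

# 'travel' appears in all three documents, so its IDF is log(3/3) = 0
print(tf_idf("travel", docs[0]))               # 0.0
# 'thriller' appears only in document 1, so it is highly informative there
print(round(tf_idf("thriller", docs[0]), 3))   # 0.137
```

As the output confirms, the ubiquitous ‘travel’ contributes nothing, while the rare ‘thriller’ stands out for document 1.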

    . . .

Understanding the BM25 score

In BM25, we utilise the TF-IDF concept in order to quantify how informative (how rare or important) each document in a knowledge base is with respect to a specific query. To do this, for the BM25 calculation, we only take into account the terms of each document that are contained in the user’s query, and perform a calculation somewhat similar to TF-IDF.

BM25 uses the TF-IDF concept, but with a few mathematical tweaks in order to address two main weaknesses of TF-IDF.

    . . .

The first pain point of TF-IDF is that TF is linear in the number of times a term t appears in a document d, f(t,d), since it is a function of the form:

TF(t,d) = f(t,d) / |d|

This means that the more times a term t appears in a document d, the more TF grows linearly, which, as you may imagine, can be problematic for large documents, where a term may appear over and over without necessarily being correspondingly more important.

A simple way to resolve this is to use a saturation curve instead of a linear function. This means that the output increases with the input but approaches a maximum limit asymptotically, unlike the linear function, where the output increases with the input forever.

Thus, we can try to rewrite TF in this form as follows, introducing a parameter k₁, which allows for controlling the frequency scaling:

TF(t,d) = f(t,d) / ( f(t,d) + k₁ )

In this way, the parameter k₁ allows for introducing diminishing returns. That is, the first occurrence of the term t in a document has a large impact on the TF score, whereas the twentieth appearance only adds a small extra gain.

Nonetheless, this would result in values in the range 0 to 1. We can tweak this a bit more and add a (k₁ + 1) factor in the numerator, so that the resulting values of TF are comparable with the initial definition of TF used in TF-IDF:

TF(t,d) = f(t,d) · (k₁ + 1) / ( f(t,d) + k₁ )

    . . .

So far, so good, but one critical piece of information still missing from this expression is the size of the document |d| that was included in the initial calculation of TF. However, before adding the |d| term, we also need to modify it a little, since this is the second pain point of the initial TF-IDF expression. More specifically, the issue is that a knowledge base is going to contain documents of variable lengths |d|, resulting in scores of different terms not being comparable. BM25 resolves this by normalizing |d|. That is, instead of |d|, the following expression is used:

1 − b + b · |d| / avgdl

where avgdl is the average document length of the documents in the knowledge base. Additionally, b is a parameter in [0, 1] that controls the length normalization, with b = 0 corresponding to no normalization and b = 1 corresponding to full normalization.

So, adding the normalised expression of |d|, we get the fancier version of TF used in BM25:

TF(t,d) = f(t,d) · (k₁ + 1) / ( f(t,d) + k₁ · (1 − b + b · |d| / avgdl) )

Usually, the parameter values used are k₁ ≈ 1.2 to 2.0 and b ≈ 0.75.
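As a quick illustration of this saturation behaviour, the sketch below computes the BM25 TF component for growing raw counts. It assumes a document of average length, so the normalization factor reduces to just k₁:

```python
def bm25_tf(f, k1=1.2, b=0.75, dl=100, avgdl=100):
    # BM25 term-frequency component: saturating and length-normalized
    norm = k1 * (1 - b + b * dl / avgdl)
    return f * (k1 + 1) / (f + norm)

# Diminishing returns: the first occurrence matters far more than the 20th,
# and the score never exceeds the asymptote k1 + 1 = 2.2
for f in [1, 2, 5, 20, 100]:
    print(f, round(bm25_tf(f), 3))
# 1 1.0
# 2 1.375
# 5 1.774
# 20 2.075
# 100 2.174
```

Contrast this with the linear TF, where 100 occurrences would score 100 times higher than one occurrence.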

    . . .

BM25 also uses a slightly altered expression for the IDF calculation:

IDF(t) = log( 1 + ( N − n(t) + 0.5 ) / ( n(t) + 0.5 ) )

This expression is derived by asking a better question. In the initial IDF calculation, we ask:

“How rare is the term?”

Instead, when calculating the IDF for BM25, we ask:

“How much more likely is this term to appear in relevant documents than in non-relevant ones?”

The probability of a document containing the term t, in a knowledge base of N documents, can be expressed as:

p(t) = n(t) / N

We can then express the odds of a document containing a term t versus not containing it as:

n(t) / ( N − n(t) )

And then, taking the inverse, we end up with:

( N − n(t) ) / n(t)

Similarly to the traditional IDF, we take the log of this expression to compress the extreme values. An extra transformation called Robertson–Spärck Jones smoothing is also performed (adding 0.5 to both the numerator and the denominator), and in this way we finally get the IDF expression used in BM25.
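To see what this smoothing buys us, here is a quick numeric comparison of the classic and the BM25 IDF for the three-document movie corpus from earlier, as the number of documents containing a term, n(t), goes from 1 to 3:

```python
import math

N = 3  # documents in the tiny movie knowledge base from earlier

def idf_classic(n_t):
    # Classic IDF: log(N / n(t))
    return math.log(N / n_t)

def idf_bm25(n_t):
    # Robertson–Spärck Jones smoothed IDF used by BM25
    return math.log(1 + (N - n_t + 0.5) / (n_t + 0.5))

for n_t in [1, 2, 3]:
    print(n_t, round(idf_classic(n_t), 3), round(idf_bm25(n_t), 3))
# 1 1.099 0.981
# 2 0.405 0.47
# 3 0.0 0.134
```

Notice that the classic IDF drops to exactly zero for a term appearing in every document, while the smoothed BM25 IDF stays small but positive, so a universal term is discounted without being erased entirely.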

    . . .

Finally, we can calculate the BM25 score for a specific document d for a given query q that contains one or more terms t:

score(d, q) = Σ_{t ∈ q} IDF(t) · f(t,d) · (k₁ + 1) / ( f(t,d) + k₁ · (1 − b + b · |d| / avgdl) )

In this way, we can score the documents available in a knowledge base based on their relevance to a specific query, and then retrieve the most relevant documents.

All this is just to say that the BM25 score is something like the much more easily understood TF-IDF score, but a bit more refined. So, BM25 is very popular for performing keyword searches, and it is also what we use for keyword search in a RAG system.
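Putting the TF and IDF pieces together, here is a minimal from-scratch sketch of the full BM25 scorer, run against the movie corpus from the TF-IDF example. (Production systems typically rely on a library such as rank_bm25 or a search engine like Elasticsearch, but the logic is the same.)

```python
import math

def bm25_score(query, doc, docs, k1=1.5, b=0.75):
    # BM25 score of one tokenized document `doc` for a tokenized `query`,
    # given the whole corpus `docs` (a list of token lists)
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    score = 0.0
    for t in query:
        n_t = sum(1 for d in docs if t in d)               # document frequency
        idf = math.log(1 + (N - n_t + 0.5) / (n_t + 0.5))  # smoothed IDF
        f = doc.count(t)                                   # term frequency
        tf = f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
        score += idf * tf
    return score

docs = [
    ["sci-fi", "thriller", "time", "travel", "dangerous", "journey",
     "alternate", "realities"],
    ["romantic", "drama", "two", "strangers", "fall", "love",
     "unexpected", "time", "travel"],
    ["sci-fi", "adventure", "featuring", "alien", "explorer", "forced",
     "travel", "galaxies"],
]
query = ["dangerous", "time", "travel"]
ranked = sorted(range(len(docs)),
                key=lambda i: bm25_score(query, docs[i], docs), reverse=True)
print(ranked)  # [0, 1, 2] – document 1 matches all three query terms
```

Document 1 ranks first because it contains all three query terms, including the rare ‘dangerous’, which the smoothed IDF weights most heavily.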

    RAG with Hybrid Search

So, now that we have an idea of how BM25 works and scores the various documents in a knowledge base based on the frequency of keywords, we can take a further look at how BM25 scores are incorporated into a traditional RAG pipeline.

As discussed in several of my previous posts, a very simple RAG pipeline would look something like this:

Such a pipeline uses a similarity score (like cosine similarity) of embeddings in order to search for, find, and retrieve chunks that are semantically similar to the user’s query. While similarity search is very useful, it can sometimes miss exact matches. Thus, by incorporating a keyword search on top of the similarity search in the RAG pipeline, we can identify relevant chunks more effectively and comprehensively. This would alter our pipeline as follows:

For each text chunk, apart from the embedding, we now also calculate a BM25 index, allowing for fast calculation of the respective BM25 scores for various user queries. In this way, for each user query, we can identify the chunks with the highest BM25 scores – that is, the chunks that contain the rarest, most informative words with respect to the user’s query in comparison to all other chunks in the knowledge base.

Notice how we now match the user’s query both against the embeddings in the vector store (semantic search) and the BM25 index (keyword search). Different chunks are retrieved based on the semantic search and the keyword search – then the retrieved chunks are combined, deduplicated, and ranked using rank fusion.
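The fusion step can be as simple as Reciprocal Rank Fusion (RRF), which merges ranked lists using only rank positions, so the incompatible score scales of cosine similarity and BM25 never need to be calibrated against each other. A minimal sketch follows; the chunk IDs and the two hit lists are made up for illustration:

```python
def reciprocal_rank_fusion(rankings, k=60):
    # rankings: ranked lists of chunk IDs, e.g. one from semantic search
    # and one from BM25. k=60 is a commonly used smoothing constant.
    fused = {}
    for ranking in rankings:
        for rank, chunk_id in enumerate(ranking, start=1):
            # Each list contributes 1/(k + rank); contributions accumulate,
            # so chunks found by both searches are boosted
            fused[chunk_id] = fused.get(chunk_id, 0.0) + 1.0 / (k + rank)
    # Deduplicated and sorted by fused score, highest first
    return sorted(fused, key=fused.get, reverse=True)

semantic_hits = ["c3", "c1", "c7"]   # hypothetical embedding-similarity results
keyword_hits  = ["c1", "c9", "c3"]   # hypothetical BM25 results
print(reciprocal_rank_fusion([semantic_hits, keyword_hits]))
# ['c1', 'c3', 'c9', 'c7']
```

Chunks c1 and c3 appear in both lists, so they rise to the top of the fused ranking, which is exactly the behaviour we want from hybrid search.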

    . . .

On my mind

Integrating BM25 keyword search into a RAG pipeline allows us to get the best of both worlds: the semantic understanding of embeddings and the precision of exact keyword matching. By combining these approaches, we can retrieve the most relevant chunks more reliably, even from a larger knowledge base, ensuring that critical words, technical terms, or names are not overlooked. In this way, we can significantly improve the effectiveness of our retrieval process and make sure that no important relevant information is left behind.


Loved this post? Let’s be friends! Join me on:

    📰Substack 💌 Medium 💼LinkedIn ☕Buy me a coffee


