In this article, I focus on a specific step of the RAG pipeline: the document retrieval step. This step is vital to any RAG system’s effectiveness, considering that without fetching the most relevant documents, it’s difficult for an LLM to correctly answer the user’s questions. I’ll discuss the traditional approach to fetching the most relevant documents, some techniques to improve it, and the benefits you’ll see from better document retrieval in your RAG pipeline.
As in my last article on Enriching LLM Context with Metadata, I’ll state my main goal for this article:
My goal for this article is to highlight how you can fetch and filter the most relevant documents for your AI search.
Table of contents
Why is optimal document retrieval important?
It’s important to understand why the document fetching step is so critical to any RAG pipeline. To see this, you should first have a general outline of the flow in a RAG pipeline (a minimal sketch in code follows the list):
- The user enters their query
- The query is embedded, and you calculate the embedding similarity between the query and each individual document (or document chunk)
- You fetch the most relevant documents based on embedding similarity
- The most relevant documents (or chunks) are fed into an LLM, which is prompted to answer the user’s question given the provided chunks
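Below is a minimal sketch of this flow in Python. The embed and llm_generate callables are placeholders for whichever embedding model and LLM client you use; this is an illustration of the flow, not a production implementation.

import numpy as np

def cosine_similarity(a, b) -> float:
    # Standard cosine similarity between two embedding vectors
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_with_rag(query: str, documents: list[str], embed, llm_generate, top_k: int = 10) -> str:
    # 1. Embed the user query and each document (or chunk)
    query_emb = embed(query)
    doc_embs = [embed(doc) for doc in documents]
    # 2. Score every document by embedding similarity to the query
    scores = [cosine_similarity(query_emb, emb) for emb in doc_embs]
    # 3. Keep the top K most similar documents
    top_idx = np.argsort(scores)[::-1][:top_k]
    context = "\n\n".join(documents[i] for i in top_idx)
    # 4. Prompt the LLM to answer the question from the retrieved context
    prompt = f"Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    return llm_generate(prompt)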

Now, there are several aspects of the pipeline that matter. Factors such as:
- Which embedding model you use
- Which LLM you use
- How many documents (or chunks) you fetch
However, I would argue that there is likely no aspect more important than the selection of documents. This is because, without the correct documents, it doesn’t matter how good your LLM is or how many chunks you fetch; the answer is still likely to be incorrect.
The pipeline will probably still work with a slightly worse embedding model or a slightly older LLM. However, if you don’t fetch the correct documents, your RAG pipeline will fail.
Traditional approaches
I’ll first go through some traditional approaches that are used today, primarily embedding similarity and keyword search.
Embedding similarity
Using embedding similarity to fetch the most relevant documents is the go-to approach today. It is a solid approach that works decently in most use cases. RAG with embedding-similarity document retrieval works exactly as I described above.
Keyword search
Keyword search is also commonly used to fetch relevant documents. Traditional approaches, such as TF-IDF or BM25, are still used with success today. However, keyword search also has its weaknesses. For example, it only fetches documents based on exact matches, which causes issues when an exact match is not possible.
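As a small illustration of keyword search, here is a sketch using BM25 via the open-source rank_bm25 package (one of several available implementations); the example documents are made up:

from rank_bm25 import BM25Okapi  # pip install rank-bm25

documents = [
    "The lease agreement starts on January 1st.",
    "The tenant must pay a deposit before moving in.",
    "Parking spaces are assigned by the building manager.",
]

# BM25 operates on tokenized text; simple whitespace tokenization for illustration
tokenized_docs = [doc.lower().split() for doc in documents]
bm25 = BM25Okapi(tokenized_docs)

query = "when does the lease start"
scores = bm25.get_scores(query.lower().split())

# Rank documents by BM25 score, highest first
ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
print(ranked[0])  # most relevant document and its score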
Thus, I want to discuss some other techniques you can use to improve your document retrieval step.
Techniques to fetch more relevant documents
In this section, I’ll discuss some more advanced techniques to fetch the most relevant documents. I’ll divide the section into two. The first subsection covers optimizing document retrieval for recall, which refers to fetching as many of the relevant documents as possible from the corpus of available documents. The second subsection discusses how to optimize for precision, which means ensuring that the documents you fetch are actually correct and relevant to the user query.
Recall: Fetch more of the relevant documents
I’ll discuss the following techniques:
- Contextual retrieval
- Fetching more chunks
- Reranking
Contextual retrieval

Contextual retrieval is a technique introduced by Anthropic in September 2024. Their article covers two topics: adding context to document chunks, and combining keyword search (BM25) with semantic search to fetch relevant documents.
To add context to documents, they take each document chunk and prompt an LLM, given the chunk and the full document, to rewrite the chunk so that it contains both the information from the chunk and relevant context from the full document.
For example, suppose you have a document divided into two chunks, where chunk one contains important metadata such as an address, date, location, and time, and the other chunk contains details about a lease agreement. The LLM might rewrite the second chunk to include both the lease agreement and the most relevant parts of the first chunk, which in this case are the address, location, and date.
Anthropic also discusses combining semantic search and keyword search in their article: essentially, documents are fetched with both methods, and a prioritized approach is used to merge the documents retrieved from each technique.
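Here is a minimal sketch of the chunk-contextualization step, assuming a generic llm_generate callable; the prompt wording is my own illustration, not Anthropic’s exact prompt:

def contextualize_chunk(chunk: str, full_document: str, llm_generate) -> str:
    # Ask the LLM to rewrite the chunk so it can be understood
    # (and retrieved) on its own, outside the full document.
    prompt = f"""
    <document>{full_document}</document>
    <chunk>{chunk}</chunk>
    Rewrite the chunk so it is self-contained, adding any relevant context
    from the document (such as addresses, dates, or parties involved).
    Return only the rewritten chunk.
    """
    return llm_generate(prompt)

# Each contextualized chunk is then embedded and indexed for both
# semantic search and BM25 keyword search.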
Fetching more chunks
A simpler way to fetch more of the relevant documents is to simply fetch more chunks. The more chunks you fetch, the higher your chance of retrieving the relevant ones. However, this has two main downsides:
- You’ll likely get more irrelevant chunks as well (hurting precision)
- You’ll increase the number of tokens you feed to your LLM, which may negatively impact the LLM’s output quality
Reranking for recall
Reranking is also a powerful technique that can be used to increase both precision and recall when fetching documents relevant to a user query. When fetching documents based on semantic similarity, you assign a similarity score to all chunks and typically keep only the top K most similar chunks (K is usually a number between 10 and 20, but it varies between applications). This means a reranker should attempt to place the relevant documents within the top K most relevant documents, while keeping irrelevant documents out of that list. I think Qwen Reranker is a good model; however, there are many other rerankers out there as well.
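As an illustration, here is what reranking might look like with the CrossEncoder interface from the sentence-transformers library; the model name is only an example, and you could swap in Qwen Reranker or another reranker you prefer:

from sentence_transformers import CrossEncoder

# Example cross-encoder reranker; substitute the reranker of your choice
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(query: str, chunks: list[str], top_k: int = 10) -> list[str]:
    # Score every (query, chunk) pair with the reranker
    scores = reranker.predict([(query, chunk) for chunk in chunks])
    # Keep the K chunks with the highest relevance scores
    ranked = sorted(zip(chunks, scores), key=lambda pair: pair[1], reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]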
Precision: Filter out irrelevant documents
Here, I’ll cover two techniques:
- Reranking
- LLM verification
Reranking for precision
As discussed in the previous section on recall, rerankers can also be used to improve precision. Rerankers improve recall by pulling relevant documents into the top-K list of most relevant documents. On the other side, rerankers improve precision by ensuring that irrelevant documents stay out of the top-K list.
LLM verification
Using an LLM to judge chunk (or document) relevance is also a powerful technique for filtering out irrelevant chunks. You can simply create a function like the one below:
import json

def is_relevant_chunk(chunk_text: str, user_query: str) -> bool:
    """
    Verify whether the chunk text is relevant to the user query.
    """
    # llm_client is assumed to be an already-initialized LLM API client
    prompt = f"""
    Given the provided user query and chunk text, determine whether the chunk text
    is relevant for answering the user query.
    Return a JSON response of the form {{"relevant": <bool>}}.
    <user_query>{user_query}</user_query>
    <chunk_text>{chunk_text}</chunk_text>
    """
    response = llm_client.generate(prompt)
    return json.loads(response)["relevant"]
You then feed each chunk (or document) through this function and keep only the chunks or documents that the LLM judges as relevant.
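For example, assuming chunks holds your retrieved chunk texts, the filtering step can be as simple as:

relevant_chunks = [chunk for chunk in chunks if is_relevant_chunk(chunk, user_query)]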
This technique has two main downsides:
- LLM cost
- LLM response time
You’ll be sending a lot of LLM API calls, which will inevitably incur a significant cost. Additionally, sending so many queries takes time, which adds latency to your RAG pipeline. You should balance this against the need for quick responses to users.
Benefits of improving document retrieval
There are numerous benefits to improving the document retrieval step of your RAG pipeline. Some examples are:
- Better LLM question-answering performance
- Fewer hallucinations
- More often able to correctly answer users’ queries
- Essentially, it makes the LLM’s job easier
Overall, the ability of your question-answering model will improve in terms of the number of successfully answered user queries. This is the metric I recommend evaluating your RAG system on, and you can read more about LLM system evaluations in my article on Evaluating 5 Million Documents with Automatic Evals.
Fewer hallucinations are also an incredibly important factor. Hallucinations are one of the most significant issues we face with LLMs. They are so detrimental because they lower users’ trust in the question-answering system, which makes them less likely to continue using your application. Ensuring the LLM both receives the relevant documents (recall) and sees as few irrelevant documents as possible (precision) is valuable for minimizing the number of hallucinations the RAG system produces.
Fewer irrelevant documents (higher precision) also help you avoid the problems of context bloat (too much noise in the context) and even context poisoning (incorrect information provided in the documents).
Summary
In this article, I’ve discussed how you can improve the document retrieval step of your RAG pipeline. I started by discussing why I believe document retrieval is the most critical part of the RAG pipeline and why you should spend time optimizing this step. Furthermore, I covered how traditional RAG pipelines fetch relevant documents through semantic search and keyword search. Continuing, I discussed techniques you can use to improve both the precision and recall of retrieved documents, such as contextual retrieval and LLM chunk verification.
👉 Find me on socials:
🧑💻 Get in touch
✍️ Medium