Why We Need Automated Fact-Checking
Compared to traditional media, where articles are edited and verified before being published, social media changed the approach completely. Suddenly, everyone could raise their voice. Posts are shared instantly, giving access to ideas and perspectives from all over the world. That was the dream, at least.
What started as an idea for protecting freedom of speech, giving people the chance to express opinions without censorship, has come with a trade-off. Very little information gets checked. And that makes it harder than ever to tell what is accurate and what is not.
A further challenge arises because false claims rarely appear just once. They are typically reshared on different platforms, often altered in wording, format, length, and even language, which makes detection and verification even more difficult. As these variations circulate across platforms, they start to feel familiar, and therefore believable, to readers.
The original idea of a space for open, uncensored, and reliable information has run into a paradox. The very openness meant to empower people also makes it easy for misinformation to spread. That is exactly where fact-checking systems come in.
The Development of Fact-Checking Pipelines
Traditionally, fact-checking was a manual process that relied on experts (journalists, researchers, or fact-checking organizations) to verify claims by checking them against sources such as official documents or expert opinions. This approach was very reliable and thorough, but also very time-consuming. The delay meant more time for false narratives to circulate, shape public opinion, and enable further manipulation.
This is where automation comes in. Researchers have developed fact-checking pipelines that behave like human fact-checking experts but can scale to massive amounts of online content. A fact-checking pipeline follows a structured process, which usually consists of the following five steps:
- Claim Detection – find statements with factual implications.
- Claim Prioritization – rank them by speed of spread, potential harm, or public interest, prioritizing the most impactful cases.
- Evidence Retrieval – gather supporting material and provide the context needed to evaluate the claim.
- Veracity Prediction – decide whether the claim is true, false, or something in between.
- Explanation Generation – produce a justification that readers can understand.
In addition to these five steps, many pipelines add a sixth: retrieval of previously fact-checked claims (PFCR). Instead of redoing the work from scratch, the system checks whether a claim, even a reformulated one, has already been verified. If so, it is linked to the existing fact-check and its verdict. If not, the pipeline proceeds with evidence retrieval.
This shortcut saves effort, speeds up verification, and is especially valuable in multilingual settings, since it allows fact-checks in one language to support verification in another.
This component goes by many names: verified claim retrieval, claim matching, or previously fact-checked claim retrieval (PFCR). Regardless of the name, the idea is the same: reuse knowledge that already exists to fight misinformation faster and more effectively.
Designing the PFCR Component (Retrieval Pipeline)
At its core, previously fact-checked claim retrieval (PFCR) is an information retrieval task: given a claim from a social media post, we want to find the most relevant match in a large collection of already fact-checked (verified) claims. If a match exists, we can directly link it to the source and the verdict, so there is no need to start verification from scratch!
Most modern information retrieval systems use a retriever–reranker architecture. The retriever acts as a first-layer filter, returning a larger set of candidate documents (the top k) from the corpus. The reranker then takes these candidates and refines the ranking using a deeper, more computationally intensive model. This two-stage design balances speed (retriever) and accuracy (reranker).
Models used for retrieval can be grouped into two categories:
- Lexical models: fast, interpretable, and effective when there is strong word overlap. But they struggle when ideas are phrased differently (synonyms, paraphrases, translations).
- Semantic models: capture meaning rather than surface words, making them ideal for PFCR. They can recognize that, for example, "the Earth orbits the Sun" and "our planet revolves around the star at the center of the solar system" describe the same fact, even though the wording is completely different.
Once candidates are retrieved, the reranking stage applies more powerful models (often cross-encoders) to carefully re-score the top results, ensuring that the most relevant fact-checks rank higher. Since rerankers are more expensive to run, they are only applied to a smaller pool of candidates (e.g., the top 100).
Together, the retriever–reranker pipeline provides both coverage (by recognizing a wider range of potential matches) and precision (by ranking the most similar candidates higher). For PFCR, this balance is crucial: it enables a fast, scalable way to detect recurring claims, with accuracy high enough that users can trust the information they read.
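To make that division of labour concrete, here is a minimal, library-agnostic sketch of the two-stage flow. The function names and signatures are my own illustration, not taken from the actual system: any cheap retriever fills the candidate pool, and any more expensive pair scorer re-orders it.

```python
from typing import Callable, Dict, List, Tuple

def two_stage_search(
    query: str,
    corpus: Dict[str, str],                     # fact-check id -> fact-check text
    retrieve: Callable[[str, int], List[str]],  # cheap first-stage retriever
    rerank_score: Callable[[str, str], float],  # expensive (query, document) scorer
    k: int = 100,                               # size of the candidate pool
    final_n: int = 10,                          # results shown to the user
) -> List[Tuple[str, float]]:
    """Generic retriever -> reranker flow: filter cheaply first, then re-score deeply."""
    candidates = retrieve(query, k)                                   # stage 1: recall
    scored = [(doc_id, rerank_score(query, corpus[doc_id])) for doc_id in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)               # stage 2: precision
    return scored[:final_n]
```

In the rest of this post, the retrieve slot is filled by BM25 and the embedding models, and rerank_score by cross-encoders.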
Building the Ensemble
The retriever–reranker pipeline already delivers solid performance. But as I evaluated the models and ran the experiments, one thing became clear: no single model is good enough on its own.
Lexical models, like BM25, are great at exact keyword matches, but as soon as the claim is phrased differently, they fail. That is where semantic models step in. They have no problem handling paraphrases, translations, or cross-lingual scenarios, but they sometimes struggle with straightforward matches where wording matters most. Nor are all semantic models the same; each has its own niche: some work better in English, others in multilingual settings, others at capturing subtle contextual nuances. In other words, just as misinformation mutates and reappears in different variations, semantic retrieval models bring different strengths depending on how they were trained. If misinformation is adaptable, the retrieval system must be as well.
That is where the idea of an ensemble came in. Instead of betting on a single "best" model, I combined the predictions of several models in an ensemble so they could collaborate and complement one another. Instead of relying on a single model, why not let them work as a team?
Before going further into the ensemble design, I will briefly explain the decision-making process behind the choice of retrievers.
Establishing a Baseline (Lexical Models)
BM25 is one of the most effective and widely used lexical retrieval models, and it commonly serves as a baseline in modern IR research. Before evaluating the embedding-based (semantic) models, I was curious to see how well (or how badly) BM25 would perform. As it turns out, not badly at all!
Tech detail:
BM25 is a ranking function built on top of TF-IDF. It improves on TF-IDF by introducing a saturation function and document length normalization. Unlike plain term frequency scoring, BM25 dampens the effect of repeated occurrences of a term and prevents long documents from being unfairly favoured. It also includes a parameter (b) that controls how much weight is given to document length normalization.
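As a rough illustration (not the exact setup from my experiments), a BM25 baseline over a tiny collection of fact-checked claims can be built with the rank_bm25 package; the claims, the query, and the whitespace tokenization are placeholders:

```python
from rank_bm25 import BM25Okapi

# Toy collection of previously fact-checked claims (placeholders).
fact_checks = [
    "The Earth orbits the Sun once every 365.25 days.",
    "Vaccines cause autism in children.",
    "The Great Wall of China is visible from space with the naked eye.",
]

# Naive whitespace tokenization; a real pipeline would use a proper tokenizer.
tokenized_corpus = [doc.lower().split() for doc in fact_checks]
bm25 = BM25Okapi(tokenized_corpus)  # k1 and b can be tuned via constructor arguments

query = "our planet revolves around the sun"
scores = bm25.get_scores(query.lower().split())  # one BM25 score per fact-check
best = scores.argmax()
print(fact_checks[best], scores[best])
```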
Semantic Models
As a starting point for the semantic (embedding-based) models, I consulted Hugging Face's Massive Text Embedding Benchmark (MTEB) and evaluated the leading models while keeping my GPU resource constraints in mind.
The two models that stood out were E5 (intfloat/multilingual-e5-large-instruct) and BGE (BAAI/bge-m3). Both achieved strong results when retrieving the top 100 candidates, so I selected them for further tuning and integration with BM25.
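To show how one of these embedding models plugs into retrieval, here is a minimal sketch with the sentence-transformers library and the E5 checkpoint named above. The toy corpus, the query, and the task-instruction wording are placeholders of my own; E5-instruct models expect an instruction prepended to the query, per the model card.

```python
from sentence_transformers import SentenceTransformer, util

# Same toy fact-check collection as in the BM25 sketch.
fact_checks = [
    "The Earth orbits the Sun once every 365.25 days.",
    "Vaccines cause autism in children.",
    "The Great Wall of China is visible from space with the naked eye.",
]

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

# E5-instruct queries are formatted as "Instruct: <task>\nQuery: <text>".
task = "Given a social media claim, retrieve previously fact-checked claims that match it"
query = f"Instruct: {task}\nQuery: our planet revolves around the star at the center of the solar system"

corpus_emb = model.encode(fact_checks, normalize_embeddings=True)
query_emb = model.encode(query, normalize_embeddings=True)

# Cosine similarity between the claim and every fact-check; keep the top 2 candidates.
similarities = util.cos_sim(query_emb, corpus_emb)[0]
top = similarities.topk(k=2)
for score, idx in zip(top.values, top.indices):
    print(fact_checks[int(idx)], float(score))
```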
Ensemble Design
With the retrievers in place, the question was: how do we combine them? I tested different aggregation strategies, including majority voting, exponential decay weighting, and reciprocal rank fusion (RRF).
RRF worked best because it doesn't simply average scores; it rewards documents that consistently appear high across different rankings, regardless of which model produced them. This way, the ensemble favoured claims that multiple models "agreed on," while still allowing each model to contribute independently.
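The fusion itself fits in a few lines: every fact-check earns 1/(k + rank) from each ranking it appears in, where k is a smoothing constant (60 in the original RRF paper), optionally scaled by a per-model weight. The ranked lists below are made up for illustration:

```python
from collections import defaultdict

def rrf_fuse(rankings, weights=None, k=60):
    """Weighted reciprocal rank fusion.

    rankings: dict mapping model name -> ranked list of fact-check ids (best first)
    weights:  dict mapping model name -> weight (defaults to 1.0 for every model)
    k:        smoothing constant from the original RRF formulation
    """
    weights = weights or {}
    fused = defaultdict(float)
    for model, ranked in rankings.items():
        w = weights.get(model, 1.0)
        for rank, doc_id in enumerate(ranked, start=1):
            fused[doc_id] += w / (k + rank)
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)

# Hypothetical candidate lists from the three retrievers.
candidate_lists = {
    "bm25": ["fc_12", "fc_07", "fc_33"],
    "e5":   ["fc_07", "fc_45", "fc_12"],
    "bge":  ["fc_07", "fc_12", "fc_91"],
}
print(rrf_fuse(candidate_lists, weights={"bm25": 0.5, "e5": 1.0, "bge": 1.0}))
```

A fact-check like fc_07 that sits near the top of several lists accumulates the largest fused score, which is exactly the "agreement" effect described above.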
I also experimented with the number of candidates retrieved in the first stage (the hyperparameter k). The idea is simple: if you only pull in a very small set of candidates, you risk missing relevant fact-checks altogether. On the other hand, if you select too many, the reranker has to wade through a lot of noise, which adds computational cost without actually improving accuracy.
Through the experiments, I found that performance improved as k increased at first, because the ensemble had more chances to find the right fact-checks. But after a certain point, adding more candidates stopped helping. The reranker could already see enough relevant fact-checks to make good decisions, and the extra ones were mostly irrelevant. In practice, this meant finding a "sweet spot" where the candidate pool was large enough to ensure coverage, but not so large that it reduced the reranker's effectiveness.
As a final step, I adjusted the weights of each model. Reducing BM25's influence while giving more weight to the semantic retrievers boosted performance. In other words, BM25 is useful, but the heavy lifting is done by E5 and BGE.
To quickly recap the PFCR component: the pipeline consists of retrieval and reranking, where retrieval can use lexical or semantic models, while reranking uses a semantic model. Moreover, we saw that combining several models in an ensemble improves retrieval and reranking performance. Okay, so where do we integrate the ensemble?
Where Does the Ensemble Fit?
The ensemble wasn't restricted to just one part of the pipeline. I applied it within both retrieval and reranking.
- Retriever stage → I merged the candidate lists produced by BM25, E5, and BGE. This way, the system didn't rely on a single model's "view" of what might be relevant but instead pooled their perspectives into a stronger starting set.
- Reranker stage → I then combined the rankings from multiple rerankers (again chosen with MTEB and my GPU constraints in mind). Since each reranker captures slightly different nuances of similarity, mixing them helped refine the final ordering of fact-checks with higher accuracy.
At the retriever stage, the ensemble widened the pool of candidates, making sure fewer relevant claims slipped through the cracks (improving recall). The reranker stage then narrowed the focus, pushing the most relevant fact-checks to the top (improving precision).
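To make the reranker-stage ensemble tangible, here is a rough sketch in which two cross-encoders each score the (claim, fact-check) pairs from the fused candidate pool, and their orderings are merged with reciprocal rank fusion. The claim, candidates, and model names are illustrative choices, not necessarily the ones used in the paper (in a multilingual setting you would pick multilingual rerankers):

```python
from collections import defaultdict
from sentence_transformers import CrossEncoder

# Fused candidate pool coming out of the retriever-stage ensemble (placeholders).
claim = "our planet revolves around the star at the center of the solar system"
candidates = [
    "The Earth orbits the Sun once every 365.25 days.",
    "The Great Wall of China is visible from space with the naked eye.",
    "Vaccines cause autism in children.",
]

# Two cross-encoders; illustrative English models, swap in multilingual ones as needed.
rerankers = {
    "minilm_l6":  CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2"),
    "minilm_l12": CrossEncoder("cross-encoder/ms-marco-MiniLM-L-12-v2"),
}

# Each reranker scores every (claim, candidate) pair and produces its own ordering.
rankings = {}
for name, model in rerankers.items():
    scores = model.predict([(claim, cand) for cand in candidates])
    rankings[name] = [cand for _, cand in sorted(zip(scores, candidates), reverse=True)]

# Merge the orderings with reciprocal rank fusion, as in the retriever stage.
fused = defaultdict(float)
for name, ranked in rankings.items():
    for rank, cand in enumerate(ranked, start=1):
        fused[cand] += 1.0 / (60 + rank)

for cand, score in sorted(fused.items(), key=lambda item: item[1], reverse=True):
    print(round(score, 4), cand)
```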

Bringing It All Together (TL;DR)
Long story short: the envisioned digital utopia of open information sharing doesn't work without verification, and can even produce the opposite – a channel for misinformation.
That was the driving force behind the development of automated fact-checking pipelines, which have helped us move closer to the original promise. They make it easier to verify information quickly and at scale, so that when false claims pop up in new forms, they can be spotted and addressed right away, helping maintain accuracy and trust in the digital world.
The takeaway is simple: diversity is key. Just as misinformation spreads by taking on many forms, a resilient fact-checking system benefits from multiple perspectives working together. With an ensemble, the pipeline becomes more robust, more adaptable, and ultimately better at enabling a trustworthy digital space.
For the curious minds
If you're interested in a deeper technical dive into the retrieval and ensemble strategies behind this pipeline, you can check out my full paper here. It goes into the model choices, experiments, and detailed evaluation metrics of the system.
References
Scott A. Hale, Adriano Belisario, Ahmed Mostafa, and Chico Camargo. 2024. Analyzing Misinformation Claims During the 2022 Brazilian General Election on WhatsApp, Twitter, and Kwai. ArXiv:2401.02395.
Rrubaa Panchendrarajan and Arkaitz Zubiaga. 2024. Claim detection for automated fact-checking: A survey on monolingual, multilingual and cross-lingual research. Natural Language Processing Journal, 7:100066.
Matúš Pikuliak, Ivan Srba, Robert Moro, Timo Hromadka, Timotej Smolen, Martin Melišek, Ivan Vykopal, Jakub Simko, Juraj Podroužek, and Maria Bielikova. 2023. Multilingual Previously Fact-Checked Claim Retrieval. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16477–16500, Singapore. Association for Computational Linguistics.
Preslav Nakov, David Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barrón-Cedeño, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. 2021. Automated Fact-Checking for Assisting Human Fact-Checkers. ArXiv:2103.07769.
Oana Balalau, Pablo Bertaud-Velten, Younes El Fraihi, Garima Gaur, Oana Goga, Samuel Guimaraes, Ioana Manolescu, and Brahim Saadi. 2024. FactCheckBureau: Build Your Own Fact-Check Analysis Pipeline. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, CIKM '24, pages 5185–5189, New York, NY, USA. Association for Computing Machinery.
Alberto Barrón-Cedeño, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Nikolay Babulkov, Bayan Hamdan, Alex Nikolov, Shaden Shaar, and Zien Sheikh Ali. 2020. Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media. In Experimental IR Meets Multilinguality, Multimodality, and Interaction, pages 215–236, Cham. Springer International Publishing.
Ashkan Kazemi, Kiran Garimella, Devin Gaffney, and Scott Hale. 2021. Claim Matching Beyond English to Scale Global Fact-Checking. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4504–4517, Online. Association for Computational Linguistics.
Shaden Shaar, Nikolay Babulkov, Giovanni Da San Martino, and Preslav Nakov. 2020. That Is a Known Lie: Detecting Previously Fact-Checked Claims. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3607–3618, Online. Association for Computational Linguistics.
Gordon V. Cormack, Charles L. A. Clarke, and Stefan Buettcher. 2009. Reciprocal rank fusion outperforms Condorcet and individual rank learning methods. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '09, pages 758–759, New York, NY, USA. Association for Computing Machinery.
Iva Pezo, Allan Hanbury, and Moritz Staudinger. 2025. ipezoTU at SemEval-2025 Task 7: Hybrid Ensemble Retrieval for Multilingual Fact-Checking. In Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025), pages 1159–1167, Vienna, Austria. Association for Computational Linguistics.