
RAG: Retrieval Utility, Scaling & Infrastructure


Session Information

  • Who Benefits from RAG? The Role of Exposure, Utility and Attribution Bias
  • Utilizing Metadata for Better Retrieval-Augmented Generation
  • Predicting Retrieval Utility and Answer Quality in Retrieval-Augmented Generation
  • Open Web Indexes for Remote Querying
  • LURE-RAG: Lightweight Utility-driven Reranking for Efficient RAG
  • Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets
  • Less LLM, More Documents: Searching for Improved RAG
Mar 31, 2026, 10:30 - 12:30 (Europe/Amsterdam)
Venue: Chaos

Sub Sessions

Who Benefits from RAG? The Role of Exposure, Utility and Attribution Bias

Full papers · Search and ranking · Societally-motivated IR research
10:30 AM - 12:30 PM (Europe/Amsterdam) · 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
Presenters
Mahdi Dehghan, University of Glasgow
Co-Authors
Graham McDonald, Senior Lecturer, University of Glasgow

Utilizing Metadata for Better Retrieval-Augmented Generation

Full papers · Evaluation research · Search and ranking
10:30 AM - 12:30 PM (Europe/Amsterdam) · 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
Retrieval-Augmented Generation systems depend on retrieving semantically relevant document chunks to support accurate, grounded outputs from large language models. In structured and repetitive corpora such as regulatory filings, chunk similarity alone often fails to distinguish between documents with overlapping language. Practitioners often flatten metadata into input text as a heuristic, but the impact and trade-offs of this practice remain poorly understood. We present a systematic study of metadata-aware retrieval strategies, comparing plain-text baselines with approaches that embed metadata directly. Our evaluation spans metadata-as-text (prefix and suffix), a dual-encoder unified embedding that fuses metadata and content in a single index, dual-encoder late-fusion retrieval, and metadata-aware query reformulation. Across multiple retrieval metrics and question types, we find that prefixing and unified embeddings consistently outperform plain-text baselines, with the unified embedding at times exceeding prefixing while being easier to maintain. Beyond empirical comparisons, we analyze the embedding space, showing that metadata integration improves effectiveness by increasing intra-document cohesion, reducing inter-document confusion, and widening the separation between relevant and irrelevant chunks. Field-level ablations show that structural cues provide strong disambiguating signals. Our code, evaluation framework, and the RAGMATE-10K benchmark are anonymously hosted.
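The metadata-as-text prefixing strategy described in the abstract can be sketched roughly as follows. The field names, the chunk layout, and the example values are illustrative assumptions, not the paper's actual schema or the RAGMATE-10K benchmark; the point is only to show how metadata is flattened into the text that gets embedded.

```python
# Hypothetical sketch of "metadata-as-text" prefixing before embedding.
# Field names and chunk structure are assumptions for illustration only.

def prefix_metadata(chunk: dict) -> str:
    """Flatten selected metadata fields into a textual prefix of the chunk."""
    fields = ["company", "form_type", "fiscal_year", "section"]
    prefix = " | ".join(
        f"{k}: {chunk['meta'][k]}" for k in fields if k in chunk["meta"]
    )
    return f"[{prefix}] {chunk['text']}" if prefix else chunk["text"]

chunk = {
    "text": "Revenue increased 12% year over year.",
    "meta": {"company": "ACME Corp", "form_type": "10-K", "fiscal_year": "2023"},
}
print(prefix_metadata(chunk))
```

The prefixed string, rather than the bare chunk text, would then be passed to the embedding model, so that structurally distinct but lexically similar filings land in separable regions of the embedding space.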
Presenters
Raquib Bin Yousuf, PhD Student, Virginia Tech
Co-Authors
Naren Ramakrishnan, Virginia Tech

Predicting Retrieval Utility and Answer Quality in Retrieval-Augmented Generation

Full papers · Machine Learning and Large Language Models
10:30 AM - 12:30 PM (Europe/Amsterdam) · 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
The quality of answers generated by large language models (LLMs) in retrieval-augmented generation (RAG) is largely influenced by the contextual information contained in the retrieved documents. A key challenge for improving RAG is to predict both the utility of retrieved documents---quantified as the performance gain from using context over generation without context---and the quality of the final answers in terms of correctness and relevance. In this paper, we define two prediction tasks within RAG. The first is retrieval performance prediction (RPP), which estimates the utility of retrieved documents. The second is generation performance prediction (GPP), which estimates the final answer quality. We hypothesise that the topical relevance of retrieved documents correlates with their utility in RAG, suggesting that Query Performance Prediction (QPP) approaches can be adapted for RPP and GPP. Beyond these retriever-centric signals, we argue that reader-centric features, such as the perplexity of the retrieved context for the LLM conditioned on the input query, can further enhance prediction accuracy. Finally, we propose that features reflecting query-agnostic document quality and readability can also provide useful signals to the predictions. We train linear regression models with the above categories of predictors for both RPP and GPP. Experiments on the Natural Questions (NQ) dataset show that combining predictors from multiple feature categories yields the most accurate estimates of RAG performance.
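The core modelling idea in this abstract, combining retriever-centric (QPP-style), reader-centric (perplexity), and document-quality predictors in a linear regression, can be sketched as below. The feature values and targets are synthetic placeholders, not the paper's NQ data, and plain least squares stands in for whatever regression setup the authors actually use.

```python
# Minimal sketch of feature combination for retrieval performance prediction
# (RPP): fit a linear model over three assumed feature categories. All data
# here is synthetic; only the combination idea is illustrated.
import numpy as np

# rows: queries; columns: [qpp_score, context_perplexity, readability]
X = np.array([
    [0.82, 12.4, 0.70],
    [0.41, 55.1, 0.35],
    [0.67, 20.9, 0.60],
    [0.15, 80.3, 0.20],
])
# target: observed utility gain of retrieval over no-context generation
y = np.array([0.9, 0.3, 0.7, 0.1])

# Ordinary least squares with an intercept term
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(features):
    """Predicted utility gain for one query's feature vector."""
    return float(np.dot(np.append(features, 1.0), coef))

print(predict([0.70, 18.0, 0.65]))
```

The same scaffolding applies to generation performance prediction (GPP) by swapping the target from retrieval utility to an answer-quality score.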
Presenters
Fangzheng Tian, School of Computing Science, University of Glasgow
Co-Authors
Debasis Ganguly, University of Glasgow
Craig Macdonald, Professor, University of Glasgow

Open Web Indexes for Remote Querying

Full papers · Search and ranking · System aspects
10:30 AM - 12:30 PM (Europe/Amsterdam) · 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
We propose to redesign the access to Web-scale indexes. Instead of using custom search engine software and hiding access behind an API or a user interface, we store the inverted file in a standard, open source file format (Parquet) on publicly accessible (and cheap) object storage. Users can perform retrieval by fetching the relevant postings for the query terms and performing ranking locally. By using standard data formats and cloud infrastructure, we (a) natively support a wide range of downstream clients, and (b) can directly benefit from improvements in analytical query processing engines. We show the viability of our approach through a series of experiments using the ClueWeb corpora. While our approach (naturally) has a higher latency than dedicated search APIs, we show that we can still obtain results in reasonable time (usually within 10-20 seconds). Therefore, we argue that the increased accessibility and decreased deployment costs make this a suitable setup for cooperation in IR research by sharing large indexes publicly.
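The fetch-then-rank-locally protocol the abstract describes can be sketched in a few lines. Here the remote Parquet read from object storage is simulated with an in-memory dict, and a simple tf-idf score stands in for whatever ranking function a client would run; in the actual setup the postings live in Parquet files and would be fetched with an analytical query engine.

```python
# Sketch of remote querying: fetch per-term postings from a shared index,
# then rank locally. The object-storage fetch is simulated with a dict;
# the scoring function is an assumed, simple tf-idf, not the paper's setup.
import math

POSTINGS = {  # term -> list of (doc_id, term_frequency)
    "open": [("d1", 3), ("d2", 1)],
    "web": [("d1", 1), ("d3", 2)],
    "index": [("d2", 4), ("d3", 1)],
}
N = 3  # number of documents in the collection

def fetch_postings(term):
    """Stand-in for reading one term's postings from remote Parquet."""
    return POSTINGS.get(term, [])

def rank(query_terms, k=10):
    """Score documents locally with tf-idf over the fetched postings."""
    scores = {}
    for term in query_terms:
        plist = fetch_postings(term)
        if not plist:
            continue
        idf = math.log(N / len(plist))
        for doc_id, tf in plist:
            scores[doc_id] = scores.get(doc_id, 0.0) + tf * idf
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(rank(["open", "web"]))
```

Because only the postings for the query terms are transferred, each query touches a small slice of the index, which is what makes cheap object storage plus client-side ranking viable despite the higher latency.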
Presenters
Gijs Hendriksen, PhD Candidate, Radboud University
Co-Authors
Djoerd Hiemstra, Radboud University
Arjen de Vries, Radboud University

LURE-RAG: Lightweight Utility-driven Reranking for Efficient RAG

Full papers · Machine Learning and Large Language Models
10:30 AM - 12:30 PM (Europe/Amsterdam) · 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
Presenters
Manish Chandra, PhD Student, University of Glasgow
Co-Authors
Debasis Ganguly, University of Glasgow
Iadh Ounis, Professor, University of Glasgow

Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets

Full papers · Evaluation research
10:30 AM - 12:30 PM (Europe/Amsterdam) · 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
RAG systems are increasingly evaluated and optimized using LLM judges, an approach that is rapidly becoming the dominant paradigm for system assessment. Nugget-based approaches in particular are now embedded not only in evaluation frameworks but also in the architectures of RAG systems themselves. While this integration can lead to genuine improvements, it also creates a risk of faulty measurements due to circularity. In this paper, we investigate this risk through comparative experiments with nugget-based RAG systems, including GINGER and CRUCIBLE, against strong baselines such as GPTResearcher. By deliberately modifying CRUCIBLE to generate outputs optimized for an LLM judge, we show that near-perfect evaluation scores can be achieved when elements of the evaluation, such as prompt templates or gold nuggets, are leaked or can be predicted. Our results highlight the importance of blind evaluation settings and methodological diversity to guard against mistaking metric overfitting for genuine system progress.
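The circularity risk the abstract warns about can be illustrated with a toy nugget-recall judge: a system that sees the gold nuggets can saturate the metric by regurgitating them, without producing a genuinely better answer. The judge, the nuggets, and both answers below are synthetic assumptions, not the paper's actual systems or data.

```python
# Toy illustration of evaluation circularity with a nugget-based judge.
# All nuggets and answers are made up for this sketch.

GOLD_NUGGETS = ["founded in 1998", "headquartered in Delft", "open-source"]

def nugget_recall(answer: str) -> float:
    """Fraction of gold nuggets contained in the answer (the 'judge')."""
    hits = sum(1 for nugget in GOLD_NUGGETS if nugget in answer)
    return hits / len(GOLD_NUGGETS)

# An honest system writes what it knows; an "insider" that has seen the
# gold nuggets simply concatenates them.
honest_answer = "The project is open-source and was founded in 1998."
insider_answer = " ".join(GOLD_NUGGETS)

print(nugget_recall(honest_answer))
print(nugget_recall(insider_answer))
```

The insider answer scores a perfect 1.0 while the honest answer does not, which is why the abstract argues for blind evaluation settings and methodological diversity.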
Presenters
Laura Dietz, Associate Professor, University of New Hampshire
Co-Authors
Bryan Li, University of Pennsylvania
Eugene Yang, Research Scientist, Human Language Technology Center of Excellence, Johns Hopkins University
Dawn Lawrie, Senior Research Scientist, HLTCOE at Johns Hopkins University
William Walden, Human Language Technology Center of Excellence, Johns Hopkins University
James Mayfield, Johns Hopkins University

Less LLM, More Documents: Searching for Improved RAG

Full papers · Search and ranking
10:30 AM - 12:30 PM (Europe/Amsterdam) · 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
Presenters
Jingjie Ning, Carnegie Mellon University
Co-Authors
Yibo Kong, Carnegie Mellon University
Yunfan Long, Student, Carnegie Mellon University
