ECIR 2026 | Session: RAG: Retrieval Utility, Scaling & Infrastructure
31 March 2026, 10:30 AM - 12:30 PM (Europe/Amsterdam)
Papers in this session:
- Who Benefits from RAG? The Role of Exposure, Utility and Attribution Bias
- Utilizing Metadata for Better Retrieval-Augmented Generation
- Predicting Retrieval Utility and Answer Quality in Retrieval-Augmented Generation
- Open Web Indexes for Remote Querying
- LURE-RAG: Lightweight Utility-driven Reranking for Efficient RAG
- Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets
- Less LLM, More Documents: Searching for Improved RAG
Who Benefits from RAG? The Role of Exposure, Utility and Attribution Bias
Full papers | Search and ranking | Societally-motivated IR research
10:30 AM - 12:30 PM (Europe/Amsterdam) | 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
Large Language Models (LLMs) enhanced with Retrieval-Augmented Generation (RAG) have achieved substantial improvements in accuracy by grounding their responses in external documents that are relevant to the user's query. However, relatively little work has investigated the impact of RAG in terms of fairness. In particular, it is not yet known whether queries associated with certain groups within a fairness category systematically receive higher accuracy, or larger accuracy improvements, in RAG systems compared to LLM-only generation, a phenomenon we refer to as group query fairness. In this work, we conduct extensive experiments to investigate the impact of three key factors on group query fairness in RAG: group exposure, i.e., the proportion of documents from each group appearing in the retrieved set, determined by the retriever; group utility, i.e., the degree to which documents from each group contribute to improving answer accuracy, capturing retriever-generator interactions; and group attribution, i.e., the extent to which the generator relies on documents from each group when producing responses. We examine group-level disparities in average accuracy and accuracy improvements across four fairness categories, using three datasets derived from the TREC 2022 Fair Ranking Track for two tasks: article generation and title generation. Our findings show that RAG systems suffer from the group query fairness problem and amplify disparities in average accuracy across queries from different groups, compared to an LLM-only setting. Moreover, group utility, exposure, and attribution can exhibit strong positive or negative correlations with the average accuracy or accuracy improvements of queries from a group, highlighting their important role in fair RAG. Our data and code are publicly available on GitHub.
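Of the three factors the abstract defines, group exposure is the most directly computable: the share of the retrieved set belonging to each group. A minimal sketch, assuming a hypothetical document-to-group mapping (not the paper's code or data):

```python
# Illustrative sketch of group exposure: the proportion of retrieved
# documents associated with each group. `retrieved` and `doc_group`
# are assumed inputs, not from the paper.
from collections import Counter

def group_exposure(retrieved, doc_group):
    """retrieved: list of doc ids; doc_group: maps doc id -> group label."""
    counts = Counter(doc_group[d] for d in retrieved)
    total = len(retrieved)
    return {g: c / total for g, c in counts.items()}

retrieved = ["d1", "d2", "d3", "d4"]
doc_group = {"d1": "A", "d2": "A", "d3": "B", "d4": "A"}
print(group_exposure(retrieved, doc_group))  # {'A': 0.75, 'B': 0.25}
```

Utility and attribution would require, respectively, accuracy deltas from generation with and without each document, and attribution signals from the generator, so they are not reducible to a counting sketch like this one.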
Utilizing Metadata for Better Retrieval-Augmented Generation
Full papers | Evaluation research | Search and ranking
10:30 AM - 12:30 PM (Europe/Amsterdam) | 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
Retrieval-Augmented Generation systems depend on retrieving semantically relevant document chunks to support accurate, grounded outputs from large language models. In structured and repetitive corpora such as regulatory filings, chunk similarity alone often fails to distinguish between documents with overlapping language. Practitioners often flatten metadata into input text as a heuristic, but the impact and trade-offs of this practice remain poorly understood. We present a systematic study of metadata-aware retrieval strategies, comparing plain-text baselines with approaches that embed metadata directly. Our evaluation spans metadata-as-text (prefix and suffix), a dual-encoder unified embedding that fuses metadata and content in a single index, dual-encoder late-fusion retrieval, and metadata-aware query reformulation. Across multiple retrieval metrics and question types, we find that prefixing and unified embeddings consistently outperform plain-text baselines, with the unified embedding at times exceeding prefixing while being easier to maintain. Beyond empirical comparisons, we analyze the embedding space, showing that metadata integration improves effectiveness by increasing intra-document cohesion, reducing inter-document confusion, and widening the separation between relevant and irrelevant chunks. Field-level ablations show that structural cues provide strong disambiguating signals. Our code, evaluation framework, and the RAGMATE-10K benchmark are anonymously hosted.
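The metadata-as-text (prefix) strategy described above amounts to serializing metadata fields into the chunk text before embedding. A hedged sketch; the field names and formatting are illustrative assumptions, not the paper's actual schema:

```python
# Sketch of the metadata-as-text (prefix) strategy: flatten metadata
# fields into a header prepended to the chunk before embedding.
# Field names ("company", "year", "section") are illustrative only.
def prefix_chunk(chunk_text, metadata):
    """Serialize metadata as a bracketed header before the chunk text."""
    header = " | ".join(f"{k}: {v}" for k, v in metadata.items())
    return f"[{header}] {chunk_text}"

chunk = "The registrant recorded revenue of $1.2B for the period."
meta = {"company": "ACME Corp", "year": "2023", "section": "Item 7 MD&A"}
print(prefix_chunk(chunk, meta))
```

The unified-embedding variant the abstract compares against would instead encode metadata and content with separate encoders and fuse them into one vector per chunk, trading this simple string concatenation for an extra encoder at indexing time.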
Predicting Retrieval Utility and Answer Quality in Retrieval-Augmented Generation
Full papers | Machine Learning and Large Language Models
10:30 AM - 12:30 PM (Europe/Amsterdam) | 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
The quality of answers generated by large language models (LLMs) in retrieval-augmented generation (RAG) is largely influenced by the contextual information contained in the retrieved documents. A key challenge for improving RAG is to predict both the utility of retrieved documents---quantified as the performance gain from using context over generation without context---and the quality of the final answers in terms of correctness and relevance. In this paper, we define two prediction tasks within RAG. The first is retrieval performance prediction (RPP), which estimates the utility of retrieved documents. The second is generation performance prediction (GPP), which estimates the final answer quality. We hypothesise that the topical relevance of retrieved documents correlates with their utility in RAG, suggesting that Query Performance Prediction (QPP) approaches can be adapted for RPP and GPP. Beyond these retriever-centric signals, we argue that reader-centric features, such as the perplexity of the retrieved context for the LLM conditioned on the input query, can further enhance prediction accuracy. Finally, we propose that features reflecting query-agnostic document quality and readability can also provide useful signals to the predictions. We train linear regression models with the above categories of predictors for both RPP and GPP. Experiments on the Natural Questions (NQ) dataset show that combining predictors from multiple feature categories yields the most accurate estimates of RAG performance.
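The modeling setup the abstract describes, a linear regression over several feature categories, can be sketched as follows. The feature values, their scales, and the least-squares fit are our own assumptions for illustration, not the paper's features or data:

```python
# Minimal sketch of fitting a linear predictor of RAG answer quality
# from three assumed feature groups: a QPP-style retrieval score,
# the context perplexity under the reader LLM, and a readability score.
# All values below are synthetic, for illustration only.
import numpy as np

# rows: [qpp_score, context_perplexity, readability]
X = np.array([
    [0.9, 12.0, 0.8],
    [0.4, 35.0, 0.5],
    [0.7, 20.0, 0.7],
    [0.2, 50.0, 0.3],
])
y = np.array([0.85, 0.40, 0.65, 0.20])  # e.g. answer correctness scores

# append an intercept column and solve ordinary least squares
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(features):
    """Predicted RAG performance for one query's feature vector."""
    return float(np.dot(np.append(features, 1.0), w))

print(predict([0.8, 15.0, 0.75]))
```

In the paper's setting the same regression is trained twice, once with retrieval utility as the target (RPP) and once with final answer quality (GPP); the point of the sketch is only the shape of the predictor, not its coefficients.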
Open Web Indexes for Remote Querying
Full papers | Search and ranking | System aspects
10:30 AM - 12:30 PM (Europe/Amsterdam) | 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
We propose to redesign the access to Web-scale indexes. Instead of using custom search engine software and hiding access behind an API or a user interface, we store the inverted file in a standard, open source file format (Parquet) on publicly accessible (and cheap) object storage. Users can perform retrieval by fetching the relevant postings for the query terms and performing ranking locally. By using standard data formats and cloud infrastructure, we (a) natively support a wide range of downstream clients, and (b) can directly benefit from improvements in analytical query processing engines. We show the viability of our approach through a series of experiments using the ClueWeb corpora. While our approach (naturally) has a higher latency than dedicated search APIs, we show that we can still obtain results in reasonable time (usually within 10-20 seconds). Therefore, we argue that the increased accessibility and decreased deployment costs make this a suitable setup for cooperation in IR research by sharing large indexes publicly.
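The client-side flow the abstract describes, fetch the postings for each query term, then rank locally, can be sketched as below. The in-memory stub stands in for remote range reads against Parquet files on object storage, and the TF-IDF-style scorer is an illustrative assumption, not the authors' ranking function:

```python
# Illustrative client-side retrieval over a remotely hosted inverted file.
# fetch_postings() is a stub: in the described setup it would be a range
# read of the term's postings from a Parquet file on object storage.
import math

N = 1000  # assumed collection size

def fetch_postings(term):
    """Stub for a remote postings fetch: term -> {doc_id: term_freq}."""
    fake_index = {
        "open": {"d1": 3, "d2": 1},
        "web": {"d1": 1, "d3": 2},
    }
    return fake_index.get(term, {})

def rank(query_terms, k=10):
    """Score documents locally with a simple TF-IDF accumulator."""
    scores = {}
    for t in query_terms:
        postings = fetch_postings(t)
        idf = math.log(N / (1 + len(postings)))
        for doc, tf in postings.items():
            scores[doc] = scores.get(doc, 0.0) + tf * idf
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(rank(["open", "web"]))
```

The 10-20 second latencies reported above come from the remote fetches, not the local scoring, which is why standard columnar formats and analytical query engines matter for this design.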
LURE-RAG: Lightweight Utility-driven Reranking for Efficient RAG
Full papers | Machine Learning and Large Language Models
10:30 AM - 12:30 PM (Europe/Amsterdam) | 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
Most conventional RAG pipelines rely on relevance-based retrieval, which often misaligns with utility, that is, whether the retrieved passages actually improve generation quality. Existing utility-driven retrieval approaches for RAG have two limitations: first, they are resource-intensive, typically requiring query encoding; second, they do not use a listwise ranking loss during training. The latter limitation is particularly critical, as the relative order between documents directly affects generation in RAG. To address this gap, we propose LURE-RAG, a framework that augments any black-box retriever with an efficient LambdaMART-based reranker. Unlike prior methods, LURE-RAG trains the reranker with a listwise ranking loss guided by LLM utility, thereby directly optimizing the ordering of retrieved documents. Experiments on two standard datasets demonstrate that LURE-RAG achieves competitive performance, reaching 97-98% of the state-of-the-art dense neural baseline, while remaining efficient in both training and inference. Moreover, its dense variant, UR-RAG, significantly outperforms the best existing baseline by up to 3%.
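Training a LambdaMART reranker with utility-guided listwise supervision requires turning per-document LLM utility gains into graded relevance labels grouped by query, the input shape a lambdarank-style learner (e.g. LightGBM's `lambdarank` objective) expects. A sketch under our own assumptions; the rank-based binning scheme is illustrative, not the authors' labeling method:

```python
# Sketch: convert per-document LLM utility gains (answer-quality deltas)
# into integer relevance grades for listwise lambdarank-style training.
# The rank-based binning into n_grades levels is an assumed scheme.
def utility_to_labels(utilities, n_grades=4):
    """Map raw utility gains to grades 0..n_grades-1 by within-query rank."""
    order = sorted(range(len(utilities)), key=lambda i: utilities[i])
    labels = [0] * len(utilities)
    for rank, i in enumerate(order):
        labels[i] = min(n_grades - 1, rank * n_grades // len(utilities))
    return labels

# utility gain of each retrieved doc for one query
utils = [0.02, 0.31, -0.05, 0.18]
print(utility_to_labels(utils))  # [1, 3, 0, 2]
```

Because the labels encode relative order within each query's retrieved list, a listwise loss over them directly optimizes the document ordering that the abstract argues matters for generation.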
Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets
Full papers | Evaluation research
10:30 AM - 12:30 PM (Europe/Amsterdam) | 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
RAG systems are increasingly evaluated and optimized using LLM judges, an approach that is rapidly becoming the dominant paradigm for system assessment. Nugget-based approaches in particular are now embedded not only in evaluation frameworks but also in the architectures of RAG systems themselves. While this integration can lead to genuine improvements, it also creates a risk of faulty measurements due to circularity. In this paper, we investigate this risk through comparative experiments with nugget-based RAG systems, including GINGER and CRUCIBLE, against strong baselines such as GPTResearcher. By deliberately modifying CRUCIBLE to generate outputs optimized for an LLM judge, we show that near-perfect evaluation scores can be achieved when elements of the evaluation, such as prompt templates or gold nuggets, are leaked or can be predicted. Our results highlight the importance of blind evaluation settings and methodological diversity to guard against mistaking metric overfitting for genuine system progress.
William Walden, Human Language Technology Center of Excellence, Johns Hopkins University; James Mayfield, Principal Computer Scientist, JHU HLTCOE
Less LLM, More Documents: Searching for Improved RAG
Full papers | Search and ranking
10:30 AM - 12:30 PM (Europe/Amsterdam) | 2026/03/31 08:30:00 UTC - 2026/03/31 10:30:00 UTC
Retrieval-Augmented Generation (RAG) couples document retrieval with large language models (LLMs). While scaling generators improves accuracy, it also raises cost and limits deployability. We explore an orthogonal axis: enlarging the retriever's corpus to reduce reliance on large LLMs. Experimental results show that corpus scaling consistently strengthens RAG and can often serve as a substitute for increasing model size, though with diminishing returns at larger scales. Small- and mid-sized generators paired with larger corpora often rival much larger models with smaller corpora; mid-sized models tend to gain the most, while tiny and large models benefit less. Our analysis shows that improvements arise primarily from increased coverage of answer-bearing passages, while utilization efficiency remains largely unchanged. These findings establish a principled corpus-generator trade-off: investing in larger corpora offers an effective path to stronger RAG, often comparable to enlarging the LLM itself.