A paper co-authored by Dario Satriani, Enzo Veltri, Donatello Santoro, and Paolo Papotti presented at EurIPS 2025
Published on November 7, 2025
Dates
December 2-7, 2025
Location
Copenhagen, Denmark
The paper “RelationalFactQA: A Benchmark for Evaluating Tabular Fact Retrieval from Large Language Models” will be presented at EurIPS 2025, which will take place in Copenhagen from December 2 to 7.
Abstract: Factuality in Large Language Models (LLMs) is a persistent challenge. Current benchmarks often assess short factual answers, overlooking the critical ability to generate structured, multi-record tabular outputs from parametric knowledge. We demonstrate that this relational fact retrieval is substantially more difficult than isolated point-wise queries, even when individual facts are known to the model, exposing distinct failure modes sensitive to output dimensionality (e.g., number of attributes or records). To systematically evaluate this under-explored capability, we introduce RelationalFactQA, a new benchmark featuring diverse natural language questions (paired with SQL) and gold-standard tabular answers, specifically designed to assess knowledge retrieval in a structured format. RelationalFactQA enables analysis across varying query complexities, output sizes, and data characteristics. Our experiments reveal that even state-of-the-art LLMs struggle significantly, not exceeding 25% factual accuracy in generating relational outputs, with performance notably degrading as output dimensionality increases. These findings underscore critical limitations in current LLMs' ability to synthesize structured factual knowledge and establish RelationalFactQA as a crucial resource for measuring future progress in LLM factuality.
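To make the task concrete, here is a minimal illustrative sketch of what a RelationalFactQA-style item and a row-level scoring rule could look like. The field names, the example data, and the F1-over-rows scoring function are assumptions chosen for illustration; they are not the benchmark's actual schema or evaluation metric.

```python
from typing import List

# A benchmark item pairs a natural language question with its SQL form
# and a gold-standard relational (tabular) answer.
example_item = {
    "question": "List the chemical elements in period 2 with their atomic numbers.",
    "sql": "SELECT name, atomic_number FROM elements WHERE period = 2;",
    "gold_table": [
        ["Lithium", 3], ["Beryllium", 4], ["Boron", 5], ["Carbon", 6],
        ["Nitrogen", 7], ["Oxygen", 8], ["Fluorine", 9], ["Neon", 10],
    ],
}

def row_set(table: List[List]) -> set:
    """Normalize rows so ordering and casing do not affect the comparison."""
    return {tuple(str(cell).strip().lower() for cell in row) for row in table}

def relational_accuracy(predicted: List[List], gold: List[List]) -> float:
    """F1 over whole rows: a row counts only if every attribute matches."""
    pred, ref = row_set(predicted), row_set(gold)
    if not pred or not ref:
        return 0.0
    precision = len(pred & ref) / len(pred)
    recall = len(pred & ref) / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A model answer that drops one record and gets one atomic number wrong
# scores well below 1.0, reflecting the sensitivity to output size and
# attribute count that the abstract describes.
predicted_table = [
    ["Lithium", 3], ["Beryllium", 4], ["Boron", 5], ["Carbon", 6],
    ["Nitrogen", 7], ["Oxygen", 8], ["Fluorine", 10],
]
print(relational_accuracy(predicted_table, example_item["gold_table"]))  # ~0.80
```

The point of the sketch is that retrieving a full relation is scored jointly over many records and attributes, so a single missing or wrong cell penalizes the answer even when the model "knows" most of the individual facts.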