Hallucinations
• Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models (arXiv:2411.14257)
• Distinguishing Ignorance from Error in LLM Hallucinations (arXiv:2410.22071)
• DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations (arXiv:2410.18860)
• MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation (arXiv:2410.11779)
• LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations (arXiv:2410.02707)
• Enhanced Hallucination Detection in Neural Machine Translation through Simple Detector Aggregation (arXiv:2402.13331)
• INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection (arXiv:2402.03744)
• Fine-grained Hallucination Detection and Editing for Language Models (arXiv:2401.06855)
• The FACTS Grounding Leaderboard: Benchmarking LLMs' Ability to Ground Responses to Long-Form Input (arXiv:2501.03200)
• HALoGEN: Fantastic LLM Hallucinations and Where to Find Them (arXiv:2501.08292)
• SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (arXiv:2303.08896)
• FactCG: Enhancing Fact Checkers with Graph-Based Multi-Hop Data (arXiv:2501.17144)
• ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems (arXiv:2311.09476)
• Are Reasoning Models More Prone to Hallucination? (arXiv:2505.23646)
• MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents (arXiv:2404.10774)
• Mitigating Object Hallucinations via Sentence-Level Early Intervention (arXiv:2507.12455)
• Why Language Models Hallucinate (arXiv:2509.04664)