DO-Bench: An Attributable Benchmark for Diagnosing Object Hallucination in Vision-Language Models
Object-level hallucination remains a central reliability challenge for vision-language models (VLMs), particularly in binary object-existence verification. Existing benchmarks emphasize aggregate accuracy but rarely disentangle whether errors stem from perceptual limitations or from the influence of contextual textual priors, leaving the underlying failure mechanisms ambiguous. We introduce DO-Bench, a controlled diagnostic benchmark that isolates these sources through structured multimodal interventions. Rather than evaluating models in unconstrained settings, DO-Bench probes two complementary dimensions: the Prior Override dimension progressively strengthens contextual textual priors while holding visual evidence constant to assess resistance to prior pressure, and the Perception-Limited dimension incrementally enhances visual evidence from full-scene context to localized object crops to measure perceptual grounding strength. This paired design enables attribution of errors to prior suppression, perceptual insufficiency, or their interaction. We further define two diagnostic metrics, PriorRobust and PerceptionAbility, to quantify these behaviors consistently. Evaluations across diverse open- and closed-source VLMs reveal systematic differences in prior sensitivity and perceptual reliability, demonstrating that object hallucination reflects heterogeneous, mechanism-dependent failure patterns beyond aggregate accuracy.
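The abstract does not give formulas for PriorRobust and PerceptionAbility, but their intent can be sketched. Below is a minimal, purely illustrative Python sketch under two assumptions: PriorRobust is taken as the fraction of items a model still answers correctly under the strongest textual prior, among those it answered correctly with a neutral prompt, and PerceptionAbility as plain accuracy when the model is shown a localized object crop (the most favorable visual evidence). The function names and definitions are hypothetical, not DO-Bench's actual metrics.

```python
# Illustrative sketch only: the metric definitions below are assumptions,
# not the formulas used by DO-Bench.

def prior_robust(correct_neutral, correct_pressured):
    """Fraction of items still answered correctly under the strongest
    textual prior, among those answered correctly with a neutral prompt
    (assumed definition)."""
    kept = sum(1 for n, p in zip(correct_neutral, correct_pressured) if n and p)
    base = sum(correct_neutral)
    return kept / base if base else 0.0

def perception_ability(correct_crop):
    """Accuracy when the model sees a localized object crop, i.e. the
    strongest visual evidence (assumed definition)."""
    return sum(correct_crop) / len(correct_crop) if correct_crop else 0.0

# Toy example: 5 benchmark items, booleans = answered correctly.
neutral   = [True, True, True, False, True]   # neutral prompt
pressured = [True, False, True, False, True]  # strong contextual prior
crops     = [True, True, True, True, False]   # localized object crops

print(prior_robust(neutral, pressured))   # 0.75
print(perception_ability(crops))          # 0.8
```

A ratio-style PriorRobust separates "the model never perceived the object" from "the model perceived it but yielded to the prior", which is exactly the attribution the paired design targets.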
Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.22822 (cs) [Submitted on 18 Apr 2026]
Title: DO-Bench: An Attributable Benchmark for Diagnosing Object Hallucination in Vision-Language Models
Authors: JiYang Wang, Jiawei Chen, Mengqi Xiao, Yu Cheng, Yangfu Li, Zhaoxia Yin
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at arXiv cs.AI.