FAIR_XAI: Improving Multimodal Foundation Model Fairness via Explainability for Wellbeing Assessment
This study evaluates the fairness and explainability of vision-language models (VLMs) for mental health assessment, focusing on depression prediction across a laboratory and a naturalistic dataset. It finds substantial variation in performance and bias across models, with each exhibiting distinct gender or racial disparities, and tests explainability-based interventions that yield mixed fairness outcomes. Although procedural transparency improved in some cases, it did not consistently produce fairer predictions, exposing a gap between explainability and equitable results. The authors recommend that future multimodal AI systems for wellbeing applications jointly optimize accuracy, fairness, and cross-domain generalization.
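The disparities described above are typically quantified with group-fairness metrics such as demographic parity and equal opportunity, both of which the abstract below invokes. As a minimal sketch of how such gaps are computed, assuming binary depression labels and a binary demographic attribute (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Accuracy plus two group-fairness gaps for a binary task.

    y_true, y_pred: 0/1 arrays (1 = depression present/predicted)
    group: 0/1 array encoding a binary demographic attribute
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accuracy = float(np.mean(y_true == y_pred))

    # Demographic parity gap: difference in positive-prediction rates.
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    dp_gap = abs(rates[0] - rates[1])

    # Equal opportunity gap: difference in true-positive rates
    # (recall on the depressed class) across groups.
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    eo_gap = abs(tprs[0] - tprs[1])

    return {"accuracy": accuracy, "dp_gap": dp_gap, "eo_gap": eo_gap}

# Toy example: a model that over-predicts depression for one group.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(fairness_report(y_true, y_pred, group))
```

Under these definitions, the "perfect equal opportunity" reported in the abstract would correspond to an eo_gap of zero, which is why it can coexist with a severe accuracy cost.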
Computer Science > Artificial Intelligence
arXiv:2604.23786 (cs) [Submitted on 26 Apr 2026]

Title: FAIR_XAI: Improving Multimodal Foundation Model Fairness via Explainability for Wellbeing Assessment
Authors: Sophie Chiang, Tom Brennan, Fethiye Irmak Dogan, Jiaee Cheong, Hatice Gunes

Abstract: In recent years, the integration of multimodal machine learning in wellbeing assessment has offered transformative potential for monitoring mental health. However, with the rapid advancement of Vision-Language Models (VLMs), their deployment in clinical settings has raised concerns due to their lack of transparency and potential for bias. While previous research has explored the intersection of fairness and Explainable AI (XAI), its application to VLMs for wellbeing assessment and depression prediction remains under-explored. This work investigates VLM performance across laboratory (AFAR-BSFT) and naturalistic (E-DAIC) datasets, focusing on diagnostic reliability and demographic fairness. Performance varied substantially across environments and architectures; Phi-3.5-Vision achieved 80.4% accuracy on E-DAIC, while Qwen2-VL struggled at 33.9%. Additionally, both models demonstrated a tendency to over-predict depression on AFAR-BSFT. Although bias existed across both architectures, Qwen2-VL showed higher gender disparities, while Phi-3.5-Vision exhibited more racial bias. Our XAI intervention framework yielded mixed results; fairness prompting achieved perfect equal opportunity for Qwen2-VL at a severe accuracy cost on E-DAIC. On AFAR-BSFT, explainability-based interventions improved procedural consistency but did not guarantee outcome fairness, sometimes amplifying racial bias. These results highlight a persistent gap between procedural transparency and equitable outcomes. We analyse these findings and consolidate concrete recommendations for addressing them, emphasising that future fairness interventions must jointly optimise predictive accuracy, demographic parity, and cross-domain generalisation.

Comments: 10 pages, 4 figures, 3 tables
Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2604.23786 [cs.AI] (or arXiv:2604.23786v1 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.23786
Submission history: [v1] Sun, 26 Apr 2026 16:22:39 UTC (175 KB), submitted by Sophie Chiang
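The "fairness prompting" intervention the abstract mentions refers to steering the model with explicit instructions before the diagnostic query. The paper's actual prompts are not reproduced in this excerpt, so the snippet below is only a hypothetical illustration of the pattern; the prompt wording and function name are assumptions, not the authors' method:

```python
# Generic illustration of "fairness prompting": prepending an
# instruction asking the model to ignore protected attributes.
# The wording below is hypothetical, not taken from the paper.

BASE_TASK = (
    "Given the following interview transcript and facial behaviour "
    "description, answer with 'depressed' or 'not depressed'."
)

FAIRNESS_PREFIX = (
    "Base your assessment only on clinically relevant cues. Do not let "
    "the person's gender, race, age, or appearance influence the label."
)

def build_prompt(case_text: str, fairness: bool = True) -> str:
    """Assemble a VLM query, optionally with a fairness instruction."""
    parts = [FAIRNESS_PREFIX] if fairness else []
    parts += [BASE_TASK, case_text]
    return "\n\n".join(parts)

print(build_prompt("Participant reports low mood for six weeks..."))
```

In a setup like this, the prefix would be prepended to every VLM query and its effect measured with group metrics such as those sketched earlier; per the abstract, on E-DAIC this style of intervention closed the equal-opportunity gap for Qwen2-VL only at a severe accuracy cost.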
This excerpt is published under fair use for community discussion. Read the full article at arXiv.org.