Implicit Humanization in Everyday LLM Moral Judgments
Recent adoption of conversational information systems has expanded the scope of user queries to include complex tasks such as personal advice-seeking. However, we identify one specific type of sought advice, a request for a moral judgment (i.e., "who was wrong?") in a social conflict, as an implicitly humanizing query that carries potentially harmful anthropomorphic projections. In this study, we examine how four major general-purpose LLMs reinforce these assumptions in their responses through linguistic, behavioral, and cognitive anthropomorphic cues. We also contribute a novel dataset of simulated user queries for moral judgments. We find that current LLM system responses reinforce the implicit humanization in these queries, potentially exacerbating risks such as overreliance or misplaced trust. We call for future work to expand the understanding of anthropomorphism to include implicit user-side humanization and to design solutions that address user needs while correcting misaligned expectations of model capabilities.
Computer Science > Computers and Society. arXiv:2604.22764 (cs). Submitted on 23 Mar 2026. Authors: Hoda Ayad, Tanu Mitra.