Oxford study says a chummy AI friend will lie and feed into your false beliefs
Making AI feel more human could be creating a bigger problem than expected. A new study from the Oxford Internet Institute revealed that chatbots designed to be warm and friendly are more likely to mislead users and reinforce incorrect beliefs. The research found that AI becomes less reliable as it becomes more agreeable.

What happens to a “friendly” AI

Researchers tested multiple AI models by training them to sound more empathetic and conversational. The result was a noticeable drop in accuracy. These “friendlier” versions made 10-30% more mistakes and were about 40% more likely to agree with false claims compared to their original counterparts.
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at Digital Trends.