WeSearch

Why friendly AI chatbots might be less trustworthy


Researchers found that adjusting AI systems to be warmer and friendlier to users results in an "accuracy trade-off".

Original article: BBC
Read the full article at the BBC →

Opening excerpt (first ~120 words):

By Liv McMahon, Technology reporter

AI chatbots trained to be warm and friendly when interacting with users may also be more prone to inaccuracies, new research suggests. Oxford Internet Institute (OII) researchers analysed more than 400,000 responses from five AI systems which had been tweaked to communicate in a more empathetic way. Friendlier answers contained more mistakes, from giving inaccurate medical advice to reaffirming users' false beliefs, the study found. The findings raise further questions over the trustworthiness of AI models, which are often deliberately designed to be warm and human-like in order to increase engagement. Such concerns are accentuated by AI chatbots being used…

Excerpt limited to ~120 words for fair-use compliance. The full article is available at the BBC.

