WeSearch

Sam Altman’s ChatGPT Couldn’t Stop Obsessing Over Goblins

Alex Nguyen · 2 min read
#chatgpt #sam-altman #openai #ai-behavior #goblin-references
⚡ TL;DR · AI summary

OpenAI revealed it had to implement a specific instruction in ChatGPT's code to stop the model from frequently referencing goblins and other creatures, a behavior linked to its 'Nerdy' personality setting. The tendency emerged through reinforcement learning, as playful responses involving mythical creatures were rated highly by human evaluators. The issue gained public attention after Sam Altman shared a meme joking about 'extra goblins' in a future model, highlighting ongoing challenges in controlling AI behavior.

Original article
Mother Jones · Alex Nguyen
Read the full article at Mother Jones →
Opening excerpt (first ~120 words)

Sam Altman arrives at the 12th Breakthrough Prize Ceremony on Saturday, April 18, 2026, at Barker Hangar in Santa Monica, California. Jordan Strauss/Invision/AP

OpenAI admitted it had to develop a specific instruction in the code of its latest model of ChatGPT to stop it from repeatedly referencing “goblins, gremlins, and other creatures.” In an explanation posted on Wednesday, the company said the “strange habit” came from its chatbot personality feature—specifically for users who chose the “Nerdy” personality.

Excerpt limited to ~120 words for fair-use compliance. The full article is at Mother Jones.
