Is Apple Intelligence Making Up Words Now?
Apple Intelligence, Apple's AI platform, has reportedly generated made-up words in notification summaries, highlighting the common issue of AI hallucination. Instances like 'imbixtent' and 'flemulating' suggest the on-device model may invent portmanteau terms when struggling to condense text. While evidence is limited to a few user reports, the phenomenon underscores ongoing challenges with AI accuracy in real-world applications.
- Apple Intelligence has been observed generating fake words such as 'imbixtent' and 'flemulating' in notification summaries.
- Users have reported seeing these invented terms multiple times, particularly in summaries from the Weather app and Mail.
- One theory suggests the AI creates these words when it cannot properly shorten phrases, forming what users describe as 'vibes-words' or portmanteaus.
- Apple previously faced issues with inaccurate news summaries, such as falsely claiming a suspect had died in jail, leading to temporary removal of the feature.
- The extent of the issue is unclear, with only limited user reports found online so far.
Opening excerpt (first ~120 words):
As powerful as LLMs can be, all have one shared weakness: hallucination. For reasons beyond our understanding, AI models have a habit of making things up, totally out of the blue. A response might be accurate, with well-cited sources and relevant information; then, all of a sudden, the AI pushes a false claim, or mistakenly interprets an ironic forum comment as fact. (That's how you end up with Google's AI Overviews recommending adding glue to your pizza.) Some LLMs may hallucinate less than others, but none are immune. That's why anytime you use a chatbot, you'll see some kind of warning on-screen, letting you know that the AI can make mistakes. Apple Intelligence, Apple's AI platform, is no exception here.
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at Lifehacker.