AI Just Beat Doctors at Diagnosing ER Patients. Don’t Get All Excited
Researchers from Harvard and Beth Israel Deaconess Medical Center tested OpenAI's o1-preview, a reasoning-based large language model, against human physicians in diagnosing emergency room patients and complex clinical cases, finding the AI outperformed doctors in accuracy. Despite these results, the study authors and independent experts emphasize that AI should be viewed as a collaborative tool rather than a replacement for clinicians. The technology still faces limitations, particularly in interpreting multimodal data like medical images, and would require rigorous real-world testing and regulatory scrutiny before widespread clinical adoption.
Opening excerpt (first ~120 words)
Emergency departments and other clinical settings across the world are now one step closer to sounding like the cockpit of the Millennium Falcon—with human doctors soliciting advice from, bickering with, and not infrequently trusting the guidance of their opinionated AI colleagues. Researchers at Harvard and Boston’s Beth Israel Deaconess…
Excerpt limited to ~120 words for fair-use compliance. The full article is at Gizmodo.