Other threats on the annual list include ‘digital darkness,’ unsafe medical products and technology implementations that create sketchy workflows.

The growing use of AI chatbots to dispense medical advice is raising red flags in the healthcare industry. Simply put, you don’t know where a chatbot’s information has been. While healthcare leaders are embracing the technology in areas like call center operations and patient engagement, many worry that chatbots could be harmful if not properly designed and managed. Several states are even moving to regulate the technology amid concerns that chatbots could give people potentially dangerous mental health advice.

That’s why misuse of AI chatbots in healthcare has secured the top spot in ECRI’s Top 10 Health Technology Hazards of 2026.

Chatbots built on LLMs, including ChatGPT, Grok, Copilot, Claude and Gemini, are designed to crunch data and deliver answers through a human-sounding interface, but that interface can lull users into putting too much faith in the answers. Hallucinations, data drift and other problems can affect the technology, leading to incorrect diagnoses, unnecessary or harmful recommendations, and even the promotion of unsafe practices.

“Medicine is a fundamentally human endeavor. While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals,” Marcus Schabacker, MD, PhD, president and chief executive officer of the Pennsylvania-based non-profit, said in a press release accompanying the report. “Realizing AI’s promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI’s limitations.”

“AI models reflect…