What Are AI Chatbots in Healthcare?
AI chatbots are software applications that use artificial intelligence to simulate human conversation. Unlike basic rule-based bots, these tools rely on large language models (LLMs) and natural language processing, which lets them interpret context, user intent, and even sentiment.
Common Uses in Healthcare Settings
Healthcare organizations use chatbots for several important tasks. Many deploy them to handle incoming patient calls and queries; others use them for lead generation and customer assistance, often with the goal of reducing operational costs. The appeal is clear: chatbots offer immediate responses without requiring human involvement.
Tools like ChatGPT, Claude, Copilot, Gemini, and Grok now power many of these applications. These platforms produce responses that sound both humanlike and evidence-based, and as a result, patients and clinicians increasingly turn to them for medical guidance.
Why ECRI Flagged Chatbots as Top Hazard
ECRI’s 2026 Health Technology Hazards Report
ECRI, an independent patient safety organization based in Willow Grove, Pennsylvania, releases an annual report on dangerous health technologies. In its 2026 edition, ECRI ranked AI chatbots as the single most significant health technology hazard. This ranking reflects growing concern about how widely these tools are used and how little oversight governs them.
ECRI’s core concern is straightforward. Chatbot tools are not regulated as medical devices. Moreover, they have not been validated for use in clinical or healthcare settings. Despite these gaps, clinicians, patients, and healthcare personnel use them daily.
The Scale of the Problem
An OpenAI analysis shows that more than 40 million people use ChatGPT specifically for health information. OpenAI also recently announced ChatGPT Health, a dedicated experience built for health and wellness purposes. While the company states the tool is designed “to support, not replace medical care,” experts worry that over-reliance is already happening.
How Chatbots Put Patients at Risk
The Core Technical Problem
AI chatbots do not truly understand information. Instead, they predict sequences of words based on patterns learned during training. In simple terms, they generate answers that sound correct even when they are wrong. Importantly, these tools are designed to sound confident and to provide a response regardless of accuracy.
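To make the mechanism concrete, here is a deliberately tiny sketch in Python. It is not a real LLM; the vocabulary and probabilities are invented for illustration. But the core move is the same one a full-scale chatbot makes: sample the statistically likely next word, with no check on whether the resulting sentence is true.

```python
import random

# A toy bigram "language model" (not a real LLM): it assigns probabilities
# to possible next words given the previous two words and samples one.
# Nothing here checks whether the output is medically true; the model only
# knows which word sequences are statistically plausible. All values are
# invented for this example.
NEXT_WORD_PROBS = {
    ("the", "patient"): {"should": 0.6, "may": 0.4},
    ("patient", "should"): {"fast": 0.5, "rest": 0.5},
    ("patient", "may"): {"fast": 0.5, "rest": 0.5},
}

def predict_next(context):
    """Sample the next word for a two-word context, or None if unseen."""
    dist = NEXT_WORD_PROBS.get(context)
    if dist is None:
        return None
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

sentence = ["the", "patient"]
while (word := predict_next((sentence[-2], sentence[-1]))) is not None:
    sentence.append(word)

# Prints a fluent fragment such as "the patient should fast", chosen because
# it is statistically likely, not because fasting is sound medical advice.
print(" ".join(sentence))
```

Fluency and accuracy are produced by entirely separate processes here, which is why a chatbot can deliver wrong advice in a perfectly confident tone.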
Real-World Examples of Chatbot Errors
ECRI’s experts documented several alarming cases. Chatbots have suggested incorrect diagnoses and recommended unnecessary medical testing. They have promoted subpar medical supplies and even invented body parts in response to clinical questions.
One specific example stands out. When ECRI asked a chatbot whether placing an electrosurgical return electrode over a patient’s shoulder blade was acceptable, the chatbot said yes. This advice was dangerously wrong: return electrodes belong on well-perfused muscle, because placement over a bony prominence such as the shoulder blade impedes even current dispersal. Following the chatbot’s advice could expose patients to serious burn injuries.
Therefore, any chatbot providing clinical guidance without human verification poses a direct threat to patient safety.
Health Disparities and AI Bias
Bias Embedded in Training Data
Beyond clinical errors, chatbots carry another significant risk: they can worsen existing health disparities. AI models learn from historical data, and that data often contains systemic biases. Consequently, chatbot responses may reinforce stereotypes and produce unequal treatment recommendations.
ECRI’s report highlights this concern directly. Biases embedded in training data distort how models interpret information. This, in turn, leads to responses that reflect and perpetuate existing inequities in healthcare access and quality.
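A toy example can make this mechanism visible. The sketch below uses invented data and a deliberately crude majority-vote “model”; real clinical models are far more sophisticated, but the underlying dynamic, in which an underrepresented group inherits recommendations tuned to the majority, is the same.

```python
from collections import Counter

# Toy illustration with invented data: a "model" that recommends whatever
# treatment was most common across its training records. Because group B
# is underrepresented, its patterns are drowned out by group A's majority.
training_data = [("A", "drug_x")] * 90 + [("B", "drug_y")] * 10

def recommend(group):
    """Majority vote over all records, blind to group membership; a crude
    stand-in for a model that never learned group-specific patterns."""
    return Counter(treatment for _, treatment in training_data).most_common(1)[0][0]

print(recommend("A"))  # drug_x: matches what worked for group A
print(recommend("B"))  # drug_x: wrong for group B, whose data was too scarce
```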
How to Use AI Chatbots Responsibly
Guidance for Patients and Clinicians
Despite the risks, AI chatbots are not going away. Thus, responsible use becomes critical. ECRI recommends that patients, clinicians, and all chatbot users educate themselves about the limitations of these tools. Moreover, users should always verify chatbot-provided information with a qualified, knowledgeable source before acting on it.
Steps Healthcare Organizations Should Take
Healthcare organizations carry a responsibility to manage AI risk actively. ECRI advises health systems to take the following actions:
- Establish AI governance committees to oversee chatbot use and set clear policies.
- Provide clinicians with dedicated AI literacy training so they understand tool limitations.
- Regularly audit AI tools to monitor performance and catch errors before they cause harm (see the sketch below).
These steps do not eliminate risk, but they substantially reduce it. In addition, they create accountability structures that protect both patients and providers.
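To illustrate the auditing step, here is a minimal sketch of what a recurring audit harness might look like. Everything in it is hypothetical: ask_chatbot stands in for whatever chatbot API the organization has deployed, and the single test case stands in for a clinician-curated scenario bank, with human review of every discrepancy.

```python
from dataclasses import dataclass

@dataclass
class AuditCase:
    prompt: str      # clinician-written test question
    reference: str   # clinician-approved answer to compare against

AUDIT_CASES = [
    AuditCase(
        prompt=("Is it acceptable to place an electrosurgical return "
                "electrode over a patient's shoulder blade?"),
        reference=("No. Return electrodes belong on well-perfused muscle, "
                   "not over bony prominences, to avoid burn injuries."),
    ),
]

def ask_chatbot(prompt: str) -> str:
    """Placeholder for the deployed chatbot's API call; this canned reply
    simulates the kind of unsafe answer ECRI observed."""
    return "Yes, placement over the shoulder blade is acceptable."

def run_audit(cases):
    """Print each chatbot answer beside the approved reference so a
    clinician reviewer can flag discrepancies."""
    for case in cases:
        print(f"PROMPT:    {case.prompt}")
        print(f"REFERENCE: {case.reference}")
        print(f"CHATBOT:   {ask_chatbot(case.prompt)}")
        print("-> route to clinician review if the answers disagree\n")

if __name__ == "__main__":
    run_audit(AUDIT_CASES)
```

In practice, results from a harness like this would feed the governance committee’s regular review cycle rather than gate individual responses automatically.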
Key Takeaways for Healthcare Organizations
AI chatbots offer real value in healthcare: they reduce costs, improve efficiency, and expand access to information. However, the current lack of regulation and clinical validation makes them a serious patient safety risk. ECRI’s 2026 ranking places them at the very top of the hazard list for good reason.
Healthcare leaders must act now. They should invest in governance, training, and auditing before deploying these tools at scale. Above all, they must ensure that human oversight remains central to any AI-assisted clinical workflow.
