Adam Hart has worked as a nurse at St. Rose Dominican Hospital in Henderson, Nevada, for 14 years. A few years ago, while covering a shift in the emergency department, he listened to an ambulance report on an incoming patient — an elderly woman with dangerously low blood pressure. Within moments, a sepsis flag flashed across the hospital’s electronic system, setting off a chain of events that would test the limits of artificial intelligence in clinical care.
What Happened When the AI Alert Fired
Sepsis — a life-threatening response to infection — is one of the leading causes of death in U.S. hospitals. Early intervention is critical, and the AI-generated alert immediately prompted the charge nurse to instruct Hart to room the patient, take her vitals, and begin intravenous (IV) fluids. In a busy emergency room, speed is protocol.
But Hart saw something the algorithm could not: a dialysis catheter below the woman’s collarbone. Her kidneys were already compromised. A standard flood of IV fluids, he warned, could overwhelm her system and fill her lungs with fluid. The charge nurse insisted, citing the AI alert. Hart refused.
A nearby physician stepped in and ordered dopamine instead — raising the patient’s blood pressure without adding dangerous fluid volume, averting what Hart believed could have been a fatal complication.
How AI Protocols Can Override Clinical Judgment
What troubled Hart most was how naturally the AI-generated alert produced a chain of compliance. A screen created urgency. A protocol converted that urgency into an order. A bedside objection rooted in clinical reasoning was treated as defiance. No one was acting with malicious intent — the system simply pushed staff toward compliance when the evidence in front of them demanded the opposite.
This dynamic is increasingly familiar across U.S. health care. Over the past several years, hospitals have embedded algorithmic tools into routine clinical workflows. From predictive risk scores to agentic AI capable of autonomous decision-making, such as adjusting oxygen levels or reprioritizing ER triage queues with minimal human input, these technologies now touch nearly every stage of patient care.
The Rapid Spread of Hospital AI Tools
Health systems across the country are using AI to flag deteriorating patients, generate clinical notes from ambient recordings, monitor vitals through wearable devices, match patients to clinical trials, and manage complex logistics such as ICU transfers. A pilot program in Utah recently deployed chatbot technology to autonomously renew prescriptions, a move physician associations have strongly opposed due to concerns about the removal of human oversight.
The industry is pursuing a vision of continuous, data-intensive care — a monitoring infrastructure that combines electronic health records, lab results, imaging, medication histories, and real-time patient-generated data from wearables and food logs. Proponents argue this kind of always-on surveillance is simply beyond the cognitive capacity of any human provider.
Nurses Warn of Alert Fatigue and Flawed Algorithms
On the frontlines, nurses are raising serious alarms. Melissa Beebe, a registered nurse at UC Davis Health with 17 years of experience, piloted the BioButton — a small wearable sensor tracking heart rate, temperature, and breathing patterns in oncology patients. The device generated frequent, vague alerts with little actionable detail. “I have my own internal alerts,” Beebe says. “It was overdoing it but not really giving great information.”
UC Davis Health ultimately discontinued the BioButton after approximately one year, finding that nurses were detecting critical changes faster than the device’s alerts could flag them.
Elven Mitchell, an ICU nurse at Kaiser Permanente in Modesto, California, estimates that roughly half of alerts generated by centralized monitoring systems are false positives — yet hospital policy requires bedside nurses to evaluate every single one, pulling them away from high-risk patients who need direct attention.
How Sepsis Prediction Became a Cautionary Tale
Hospitals nationwide adopted Epic’s sepsis-prediction algorithm, only for independent evaluations to find it far less accurate than originally marketed: one widely cited 2021 external validation reported that the model missed roughly two-thirds of sepsis cases at a commonly used alert threshold. Epic maintains that clinical studies showed improved outcomes and that it has since released an updated version. For nurses, however, the episode illustrated how an imperfect product can quickly become entrenched policy, and ultimately their problem to manage.
Algorithmic Bias and the Absence of Regulation
The risks extend beyond performance gaps. Ziad Obermeyer, a professor at UC Berkeley’s School of Public Health, found that some widely used patient-care algorithms exhibited racial bias, affecting screening decisions for an estimated 100 to 150 million people annually. Unlike pharmaceutical drugs, which face rigorous federal gatekeeping before deployment, clinical AI tools answer to no single regulatory authority: the FDA clears some as medical devices, but many decision-support systems reach the bedside without formal review. Hospitals are often left to validate these tools independently, or not at all.
Why Nurses Say Human Judgment Cannot Be Replaced
Across interviews for this article, nurses consistently stressed that they are not opposed to technology. What makes them wary is the rapid deployment of heavily marketed tools whose real-world performance falls short of the sales pitch. Clinical judgment, built from years of training, from subtle sensory cues, from the way a patient’s skin looks or feels, is not something that can be encoded in an alert.
“We have five senses, and computers only get input,” Mitchell says simply.
Nigam Shah, chief data scientist at Stanford Health Care, puts it plainly: “Ask nurses first, doctors second — and if the doctor and nurse disagree, believe the nurse, because they know what’s really happening.”
What Responsible AI Adoption Looks Like
Some institutions are responding by bringing AI development in-house — building, testing, and validating tools with clinical staff rather than purchasing them wholesale from vendors. Mount Sinai Health System has adopted a bottom-up innovation model, encouraging frontline workers to submit ideas. One wound-care nurse proposed an AI tool to predict bedsores; it now boasts one of the highest adoption rates in the hospital, partly because she personally trains her colleagues on it.
The lesson is clear: the tools that earn clinical trust are the ones that can explain themselves, that are built with the people who will use them, and whose alerts serve as an invitation to look closer rather than a mandate to comply.
The Future of AI and Nursing in American Hospitals
As hospitals race toward autonomous AI agents, the story of Adam Hart remains instructive. He rejected a digital verdict to protect a patient’s lungs. The ultimate value of the nurse in the age of AI may not be the ability to follow the prompt — but the judgment, experience, and willingness to override it.
