Artificial intelligence is rapidly transforming healthcare delivery, offering tremendous potential benefits while presenting significant challenges, including algorithmic bias, care inequities, and healthcare worker burnout. The critical question of how AI should be regulated in the United States healthcare system remains largely unanswered, creating uncertainty for hospitals and patients alike.
New Guidelines Emerge for Healthcare AI Implementation
In September, two influential healthcare organizations—the hospital-accrediting Joint Commission and the Coalition for Health AI—released comprehensive recommendations for implementing artificial intelligence in medical care settings. These guidelines place substantial responsibility for compliance on individual healthcare facilities, raising concerns about feasibility and equity.
I. Glenn Cohen, faculty director of Harvard Law School’s Petrie-Flom Center for Health Law, Biotechnology, and Bioethics, along with colleagues, published an analysis in the Journal of the American Medical Association arguing that while these guidelines represent progress, significant modifications are necessary. The proposed changes would ease regulatory and financial burdens, particularly for small hospital systems that lack the resources of larger medical centers.
The Critical Need for Healthcare AI Regulation
Cohen, the James A. Attwood and Leslie Williams Professor of Law, emphasizes that medical AI handling medium- to high-risk functions requires regulation, whether internal self-regulation or external governmental oversight. Currently, healthcare AI operates primarily under internal hospital review, with significant variation in validation, review, and monitoring processes across hospital systems.
This hospital-by-hospital approach creates substantial disparities. Evaluation and monitoring costs can be prohibitive, so some hospitals can implement comprehensive AI oversight while others cannot afford to. Traditional top-down regulation offers consistency but moves more slowly, potentially too slowly for rapidly advancing AI technology.
Healthcare facilities face increasingly complex combinations of AI products. Some assist with administrative functions like purchasing and internal review, while many more directly impact clinical care or operate in clinically adjacent spaces. Consumer-facing medical AI products, such as mental health chatbots, bypass internal hospital review entirely, creating clear regulatory gaps.
Innovation Speed Versus Regulatory Thoroughness
The healthcare AI ecosystem thrives on startup energy and rapid innovation. However, this acceleration creates what Cohen describes as a “race dynamic” in which ethical considerations risk being overlooked. Whether developers are competing to build a breakthrough technology first, racing against funding depletion, or caught up in international AI competition, time pressure consistently threatens ethical standards.
Currently, the vast majority of medical AI never undergoes federal regulatory review, and likely receives no state-level review either. While healthcare AI standards and adoption incentives are necessary, subjecting every tool to the rigor of FDA drug approval, or even medical device protocols, would prove prohibitively expensive and slow for Silicon Valley development timelines.
Implementation Challenges and Hospital Disparities
Part of the difficulty lies in the nature of AI’s risks. Unlike traditional medications, whose effects on patients can largely be characterized beforehand, medical AI performance varies dramatically with implementation factors, including local resources, staffing, training, and user demographics. This creates unusual challenges for agencies like the FDA, which typically avoid regulating the practice of medicine itself.
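To see why this matters in practice, consider a minimal sketch of the kind of local validation a hospital might run before relying on a vendor model: scoring the tool against the facility’s own outcomes and breaking performance down by patient subgroup. The synthetic data, the age groupings, and the 0.05 AUROC gap used to flag a subgroup are all illustrative assumptions here; no guideline prescribes these specifics.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for one hospital's local data; in practice these would
# be the facility's own patients and the vendor model's risk scores.
rng = np.random.default_rng(0)
n = 5000
local = pd.DataFrame({
    "age_group": rng.choice(["18-40", "41-65", "65+"], size=n),
    "outcome": rng.integers(0, 2, size=n),  # observed label (0/1)
})
# Hypothetical model scores, loosely correlated with the outcome.
local["score"] = np.clip(local["outcome"] * 0.3 + rng.normal(0.5, 0.25, n), 0, 1)

overall = roc_auc_score(local["outcome"], local["score"])
print(f"overall AUROC: {overall:.3f}")

# The same model can look fine overall yet underperform for a subgroup,
# which is exactly what site-level review is meant to catch.
for group, sub in local.groupby("age_group"):
    auc = roc_auc_score(sub["outcome"], sub["score"])
    flag = "  <-- review before deployment" if auc < overall - 0.05 else ""
    print(f"  {group}: AUROC {auc:.3f} (n={len(sub)}){flag}")
```

The point of the sketch is that none of these numbers travel with the product: each facility would see different subgroup results, which is why validation cannot be done once, centrally, for everyone.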
Cohen’s research examines the regulatory systems proposed by the Joint Commission and the Coalition for Health AI. The Joint Commission’s accreditation power is significant: nearly every state requires its accreditation for Medicare and Medicaid billing, which represents a substantial portion of hospital revenue. While the AI rules haven’t been formally incorporated into accreditation requirements, the guidelines signal a potential future direction.
Resource Requirements and Healthcare Equity
The recommendations include strong provisions: patient notification when AI directly impacts care, informed consent requirements, and ongoing quality monitoring with continual testing and validation. These measures scale monitoring frequency to patient care risk levels but prove difficult and expensive to implement.
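The risk-scaling idea lends itself to a simple illustration. Below is a minimal sketch of a review-schedule check a hospital might keep over its AI inventory; the tiers, intervals, and tool names are hypothetical, since the guidelines set the principle (higher-risk tools get more frequent monitoring) rather than any particular cadence.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical cadences: the guidelines call for monitoring scaled to risk,
# but do not prescribe these tiers or intervals.
REVIEW_INTERVAL = {
    "low": timedelta(days=365),
    "medium": timedelta(days=90),
    "high": timedelta(days=30),
}

@dataclass
class AITool:
    name: str
    risk_tier: str        # "low", "medium", or "high"
    last_validated: date

    def next_review(self) -> date:
        return self.last_validated + REVIEW_INTERVAL[self.risk_tier]

# Illustrative inventory entries, not real products.
inventory = [
    AITool("scheduling-assistant", "low", date(2025, 1, 15)),
    AITool("sepsis-early-warning", "high", date(2025, 9, 1)),
]

today = date(2025, 10, 20)
for tool in inventory:
    due = tool.next_review()
    status = "OVERDUE" if due < today else f"next review {due}"
    print(f"{tool.name} [{tool.risk_tier}]: {status}")
```

Even this toy version hints at the cost problem: every tool in the inventory implies recurring validation work, and the higher the risk tier, the more often that work recurs.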
Major hospital systems estimate that properly vetting a complex algorithm and its implementation can cost $300,000 to $500,000, a sum simply unaffordable for many facilities. This creates healthcare access disparities, concentrating AI’s benefits in large academic medical centers in cities like Boston or San Francisco rather than in resource-limited rural or community hospitals.
Looking Forward: Centralized Solutions and Optimistic Outlook
The Biden administration proposed “assurance labs”: private-sector organizations partnering with government to vet algorithms under agreed standards. The Trump administration acknowledges the problem but favors different approaches, though its specific vision has yet to be fully articulated.
Despite the complexity, Cohen remains optimistic that medical artificial intelligence will significantly improve healthcare within ten years. Success depends on appropriately aligned incentives ensuring the technology diffuses to less-resourced settings, an outcome that will require deliberate planning rather than happen by accident.
