Georgia Takes a Bold Stand on AI
Georgia legislators are pushing back against artificial intelligence in health insurance decisions. Senate Bill 444 passed the state Senate unanimously, signaling broad bipartisan concern. The bill states that coverage decisions for healthcare services cannot rely solely on AI systems. Instead, a qualified human must review every claim before a denial goes through.
This move reflects a growing national conversation about the role of algorithms in medical decision-making. Patients, advocates, and doctors increasingly worry that automated systems lack the judgment required for life-or-death calls.
What Senate Bill 444 Actually Says
The Core Requirement
SB 444 sets a clear standard: no health insurer can deny coverage based on AI alone. A human reviewer must be part of every coverage determination. State Senator Kay Kirkpatrick, a retired orthopedic surgeon and the bill’s sponsor, explains the reasoning simply. She says patients deserve a real person reviewing their case before a doctor-ordered treatment gets denied.
The Case for Human Oversight
Kirkpatrick points out that doctors already embrace AI for many tasks. Scheduling, documentation, and administrative processes all benefit from automation. However, medical necessity decisions carry far greater weight. When a doctor recommends a procedure, a human — not an algorithm — must make the final call on coverage. Kirkpatrick puts it directly: patients want a doctor reviewing their file before anyone says no.
How Widespread Is AI in Health Insurance?
Insurers Are Already Using It
A survey by the National Association of Insurance Commissioners examined 93 large health insurers. The findings are striking. Of those companies, 43% already use or plan to use AI for claim adjudication, and 26% report using or planning to use it for benefit determinations. These figures confirm that automated decision-making in health insurance is not a future concern — it is already here.
Consumer Advocates Sound the Alarm
Georgia Watch Executive Director Liz Coyle argues that AI amplifies an already serious problem. Claim denials are already too common, she says. Furthermore, allowing a machine to override a physician’s recommendation is completely unacceptable. Coyle emphasizes that when a doctor prescribes a potentially life-saving procedure, handing that decision to a robot crosses a clear ethical line.
The Real Cost of Claim Denials
Data from KFF Health News reveals the scale of the problem. In 2023, insurers denied 20% of all claims across Affordable Care Act marketplace plans. Importantly, ACA plans must disclose denial rates by law — meaning these numbers represent only the portion of the market with transparency requirements. The true denial rate across all plans could be far higher.
These denials carry real consequences. Patients delay or skip treatments. Conditions worsen. In some cases, the financial and physical toll of a denied claim proves devastating. Introducing AI into this process without adequate safeguards threatens to accelerate those harms.
Concerns Raised in the House Committee
Defining the Human Role
The bill drew scrutiny during a House committee hearing. Lawmakers questioned whether the legislation’s language clearly defines what “human review” means in practice. State Representative Todd Jones raised a pointed concern. He noted that a clinical peer could technically review a claim for a fraction of a second and sign off without genuine scrutiny. Under that scenario, the human review requirement becomes meaningless.
Strengthening the Standard
This loophole concern highlights a critical gap. A checkbox review is not the same as meaningful clinical oversight. Therefore, advocates and lawmakers alike are pushing for stronger language that holds reviewers to a real standard of care. The bill will need to address this ambiguity before it can achieve its intended purpose.
Balancing AI Benefits with Patient Safety
Where AI Helps
Senator Kirkpatrick acknowledges that AI does offer genuine benefits in healthcare administration. Physicians she consults express enthusiasm about AI’s potential to cut through bureaucratic delays. Processing paperwork, flagging errors, and speeding up pre-authorization requests are all areas where AI can reduce friction without harming patients.
Where the Line Must Be Drawn
However, clinical decisions require something AI cannot replicate: genuine medical judgment. When the question is whether a patient’s treatment is medically necessary, experience, context, and compassion matter. Additionally, physicians carry legal and ethical accountability for their decisions — accountability that no algorithm shares. SB 444 draws that line clearly, and supporters argue it must hold firm.
What Comes Next for the Bill
The House Technology Committee is set to take up SB 444 again on March 17. The committee will weigh the need for stronger definitions around human review. Advocates expect continued debate before the bill advances to a full House vote. If it passes, Georgia will join a small group of states actively legislating guardrails on AI in healthcare coverage decisions.
Patients, physicians, and consumer groups will be watching closely. The outcome of SB 444 could set a precedent that other states follow — or signal that the push for AI regulation still has a long road ahead.
