AI Fraud Threat: Fabricated Medical Records Rise

The Growing Threat of AI-Driven Insurance Fraud

Artificial intelligence is reshaping healthcare in many positive ways. However, it also opens new doors for fraudsters targeting insurance systems. The National Health Care Anti-Fraud Association outlined this risk clearly in a 2025 issue brief. According to that brief, AI could increase the frequency of existing fraud schemes and enable false claims, identity theft, falsified authorization requests and appeals, and even fraudulent robocalls.

These threats are real, and they are accelerating. Moreover, industry leaders warn that health plans must act now — before the damage becomes irreversible.

Why Fabricated Medical Records Are the Biggest Risk

Generative AI Makes Forgery Easier Than Ever

Among all the AI-enabled fraud risks, one stands out as particularly alarming. Kurt Spear, Vice President of Financial Investigation and Provider Review at Highmark Blue Cross Blue Shield, identifies fabricated medical records as the most pressing concern today.

Generative AI tools can now produce convincing clinical documentation, lab reports, and patient histories. These forgeries look legitimate at first glance. As a result, they are much harder to catch than older forms of fraud.

“Historically, anti-fraud professionals would leverage that documentation to try to assess the validity of the claims,” Spear told Becker’s. “As we see that come in the door, we have to be more cautious and scrutinize those documents a lot more because they can be fabricated.”

Why This Is Harder to Catch

Traditional fraud detection relies on reviewing supporting documentation. Yet when that documentation itself is forged, the entire review process faces a new challenge. Fraudsters no longer need to steal records. Instead, they simply create them. This shift fundamentally changes the threat landscape for payers.

How Insurers Use AI to Fight Back

Turning the Tools Against the Fraudsters

Insurers are not standing still. Many plans now use AI in their own fraud detection efforts. Key applications include data pattern analysis, automated case generation and referral, text verification for services, and cross-source information gathering.
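To make "data pattern analysis" concrete, here is a minimal sketch of one common building block: flagging claims whose billed amounts deviate sharply from the norm. The data, function name, and threshold are all hypothetical illustrations; real payer systems use far more sophisticated models and many more signals.

```python
from statistics import mean, stdev

def flag_outlier_claims(claims, threshold=3.0):
    """Flag claims whose billed amount is a statistical outlier.

    `claims` is a list of (claim_id, amount) tuples. A claim whose
    z-score exceeds `threshold` is returned for manual review.
    Toy illustration only, not a production fraud model.
    """
    amounts = [amount for _, amount in claims]
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [cid for cid, amount in claims
            if abs(amount - mu) / sigma > threshold]

# Hypothetical claim amounts: one is wildly inflated.
claims = [("C1", 120), ("C2", 130), ("C3", 115), ("C4", 125), ("C5", 9000)]
print(flag_outlier_claims(claims, threshold=1.5))  # → ['C5']
```

In practice, simple statistical screens like this only generate leads; the article's other applications (automated case referral, text verification, cross-source gathering) then route those leads to investigators.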

Comparing Documents for Red Flags

Spear explained that detection teams can now compare lab reports, X-rays, and other documents side by side. This comparison helps determine whether AI played a role in creating them. Furthermore, analysts look for inconsistencies that human fabricators — and even AI tools — often leave behind.

This approach gives payers a meaningful edge. Still, it requires constant updating as fraudster tactics evolve.
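One inconsistency that side-by-side comparison can surface is near-verbatim text shared across records that should be independent: template-generated forgeries often reuse identical passages. A minimal sketch of that check, with hypothetical record data and function names, might look like this:

```python
from difflib import SequenceMatcher

def similarity(doc_a: str, doc_b: str) -> float:
    """Return a 0..1 ratio of textual overlap between two documents."""
    return SequenceMatcher(None, doc_a, doc_b).ratio()

def flag_template_pairs(docs, threshold=0.9):
    """Flag pairs of records whose text is suspiciously similar.

    `docs` is a list of (record_id, text) pairs. Records for
    different patients that match almost verbatim may share a
    fabricated template. Toy illustration only.
    """
    flagged = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if similarity(docs[i][1], docs[j][1]) >= threshold:
                flagged.append((docs[i][0], docs[j][0]))
    return flagged

records = [
    ("R1", "Patient presents with acute lower back pain after lifting."),
    ("R2", "Patient presents with acute lower back pain after lifting."),
    ("R3", "Routine annual physical, no complaints."),
]
print(flag_template_pairs(records))  # → [('R1', 'R2')]
```

Detection teams would pair screens like this with image and metadata analysis of lab reports and X-rays, since text overlap alone is neither necessary nor sufficient evidence of fabrication.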

Agentic AI Fraud: The Next Frontier

A Harder Threat to Nail Down

Beyond generative AI, Spear points to a more complex emerging risk: agentic AI fraud. Agentic AI systems can autonomously perform multiple tasks in sequence. In a fraud context, each agent might scan publicly available medical policies, reimbursement guidelines, and coding documentation to identify exploitable loopholes.

“Each agent might have a task that it performs, and one of those can be going out and scouring different publicly available medical policies, reimbursement policies, coding guidelines — and that’s how the bad actors can oftentimes find the loopholes,” Spear said.

Still Maturing, But Worth Watching

Spear acknowledged that agentic AI fraud remains less mature than generative AI fraud. Even so, he does not rule it out. “I wouldn’t be surprised at this point if it is being used, especially in some more of the complex scenarios,” he said. The industry currently finds it harder to confirm active agentic AI use, but vigilance is essential.

Where Fraud Tends to Concentrate

Out-of-Network Providers Lead the Pattern

Interestingly, Spear noted that fraud does not cluster around a single medical specialty or provider type. Instead, it shows up most often among out-of-area and out-of-network providers.

“The providers are harder to track down and identify because they’re not in network,” he explained. “They tend to open and close quickly.” This pattern makes detection difficult and recovery of funds even harder. Therefore, payers must focus monitoring resources on these higher-risk provider segments.

Building a Two-Part AI Strategy for Payers

Understand the Threat First

Spear believes that every health plan needs a clear, two-part AI strategy. First, plans must develop a framework for how AI will be used against them. This means staying ahead of emerging fraud techniques and building detection capabilities before problems grow.

“Plans really have to have an AI strategy overall, meaning two things,” he said. “Having an understanding and a framework for how it is going to be used against us, because it is being used against us.”

Then Leverage AI for Good

Second, plans must actively use AI as a defensive tool. Detection, documentation review, data analysis, and pattern recognition all benefit from AI integration. Plans that fail to pursue both tracks, Spear warned, leave themselves exposed.

“If plans don’t think about that in those two buckets, and don’t have a strategy for it, then they are more susceptible to it, and they’re going to get hit by these fraudsters, and probably not even know about it until it’s too late.”

The message is clear: payers cannot afford a passive approach. Additionally, collaboration across the industry will be critical as AI fraud tactics continue to evolve rapidly.
