
AI Fraud Threat: Fabricated Medical Records


The New Face of AI-Driven Insurance Fraud

Artificial intelligence is reshaping healthcare in powerful ways. However, it also arms bad actors with new tools to commit fraud. Insurers now face a wave of AI-enabled threats that go far beyond traditional false claims. Consequently, understanding these new risks is urgent for every health plan operating today.

The National Health Care Anti-Fraud Association (NHCAA) outlined these risks in a 2025 issue brief. According to the brief, AI could drive greater frequency of existing fraud behaviors, false claims, identity theft, falsified authorization requests and appeals, and robocall schemes. Together, these threats form a complex and fast-evolving challenge for payers across the country.

Why Fabricated Medical Records Are a Top Concern

Among all emerging threats, one stands out as particularly dangerous: fabricated medical records. Kurt Spear, Vice President of Financial Investigation and Provider Review at Highmark Blue Cross Blue Shield, points to generative AI as the engine behind this new fraud method. Fraudsters now use AI tools to create convincing, detailed medical documentation from scratch — documents that look entirely legitimate on the surface.

The Challenge of Spotting Faked Documents

Traditionally, anti-fraud investigators relied on medical documentation to assess the validity of a claim, a process that rested on the assumption that the records themselves were authentic. Today, that assumption no longer holds. Spear explained the shift directly: investigators must now be far more cautious and scrutinize incoming documents more carefully, because generative AI can produce fabricated records that are difficult to distinguish from genuine ones.

This is a fundamental change in how fraud investigation must operate. As AI tools become cheaper and more accessible, the volume of fabricated records is likely to grow, and health plans that do not adapt their review processes will find themselves increasingly vulnerable.

Why This Threat Outpaces Traditional Fraud

Legacy fraud schemes often involved billing errors, upcoding, or phantom procedures. In contrast, AI-fabricated records can include convincing lab results, imaging reports, and clinical notes. As a result, they pass initial screening steps that would catch simpler fraud attempts. This sophistication makes the threat far more difficult to manage without dedicated AI-powered detection tools.

How Insurers Are Fighting Back With AI

Fortunately, AI is not only a tool for fraudsters — it is also a powerful weapon for anti-fraud teams. The NHCAA issue brief noted that generative AI can help identify fraudulent cases. Specifically, AI-driven use cases for fraud detection include data analysis, automated case generation and referral, automated text verification for services, and information gathering across multiple sources.
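The automated case generation and referral described above can be sketched as a minimal rules-based triage step. The claim fields, weights, and threshold below are illustrative assumptions for the sketch, not any plan's actual detection logic, which would rely on far richer data and models.

```python
# Minimal sketch of automated claim scoring and case referral.
# All field names, weights, and thresholds are hypothetical.

def score_claim(claim: dict) -> int:
    """Assign a simple fraud-risk score to one claim record."""
    score = 0
    if claim.get("out_of_network"):                 # out-of-network providers draw extra scrutiny
        score += 2
    if claim.get("provider_age_days", 9999) < 90:   # newly opened operations are higher risk
        score += 2
    if claim.get("billed_amount", 0) > 10_000:      # unusually large bill
        score += 1
    return score

def refer_cases(claims: list[dict], threshold: int = 3) -> list[dict]:
    """Automated case generation: refer claims scoring at or above the threshold."""
    return [c for c in claims if score_claim(c) >= threshold]

claims = [
    {"id": "A1", "out_of_network": True, "provider_age_days": 30, "billed_amount": 12_000},
    {"id": "A2", "out_of_network": False, "provider_age_days": 2000, "billed_amount": 150},
]
referred = refer_cases(claims)
print([c["id"] for c in referred])  # only the high-risk claim is referred
```

In practice, a step like this would feed flagged claims to human investigators rather than make final determinations on its own.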

AI Tools for Document Review

When reviewing documentation, Spear described how AI detection tools compare lab reports and X-rays to determine whether AI was used in their creation — and whether that use was legitimate. These tools analyze patterns, inconsistencies, and markers that human reviewers might miss. As a result, AI-powered document review is becoming a critical layer in the fraud detection process.
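A document pre-screen of this kind can be illustrated with a toy heuristic check. The signals and weights below are illustrative assumptions; real AI-detection tools of the sort Spear describes rely on statistical and forensic analysis well beyond simple flags like these.

```python
# Minimal sketch of a heuristic pre-screen for submitted medical documents.
# The signal names and weights are hypothetical, chosen for illustration only.

SUSPICIOUS_SIGNALS = {
    "missing_metadata": 2,     # no creation software or device information at all
    "template_reuse": 2,       # text identical to a previously seen record
    "inconsistent_dates": 3,   # service date conflicts with the report date
}

def screen_document(doc: dict) -> tuple[int, list[str]]:
    """Return a risk score and the list of signals that fired for one document."""
    fired = [s for s in SUSPICIOUS_SIGNALS if doc.get(s)]
    return sum(SUSPICIOUS_SIGNALS[s] for s in fired), fired

doc = {"missing_metadata": True, "inconsistent_dates": True}
score, signals = screen_document(doc)
print(score, signals)
```

Documents scoring above a chosen cutoff could then be routed to the deeper AI-detection review described above.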

Building a Two-Part AI Strategy

Spear emphasized that health plans must develop a clear, two-part AI strategy. First, plans need a framework for understanding how AI will be used against them. Second, they need a strategy for how to use AI in their own defense. Plans that fail to think in these two distinct tracks are far more susceptible to fraud — and may not discover it until significant financial damage has already occurred.

As Spear put it, fraudsters are already targeting plans that lack this strategy. Therefore, building it proactively is not optional — it is a financial and operational necessity.

The Rising Risk of Agentic AI Fraud

Beyond generative AI, a newer and less understood threat is taking shape: agentic AI fraud. Agentic AI systems can break complex tasks into steps and execute them autonomously. In a fraud context, each agent might perform one specific function — for instance, scanning publicly available medical policies, reimbursement guidelines, or coding rules to identify exploitable loopholes.

Why Agentic Fraud Is Harder to Detect

Spear noted that agentic AI fraud may not yet be as mature as generative AI fraud. Still, he acknowledged that pinning it down is considerably harder. Additionally, it is difficult to determine whether bad actors are actually using agentic AI or simply have a working knowledge of areas where plans are most vulnerable. Either way, the outcome is the same: fraudsters exploit policy gaps with a precision that previously required deep insider knowledge.

Spear added that agentic AI fraud is likely already in use in more complex fraud scenarios. Therefore, plans should not wait for clear evidence before preparing their defenses. Proactive monitoring and policy review are essential steps.

Where Fraud Tends to Concentrate

Interestingly, Spear noted that fraud is typically not concentrated by medical specialty or provider type. Instead, it tends to cluster around out-of-area and out-of-network providers. These providers are harder to track and identify precisely because they are not part of a plan’s network. Moreover, they often open and close operations quickly, making investigation and recovery more difficult.

This pattern has important implications for fraud strategy. Plans should direct heightened scrutiny toward out-of-network claims, particularly those involving unfamiliar providers or unusual documentation. Additionally, speed matters — the faster a plan can flag and investigate suspicious claims, the less opportunity a fraudster has to collect payments and disappear.

What Insurers Must Do Now

The convergence of generative AI and agentic AI in fraud schemes demands an equally sophisticated response from health plans. Several steps are critical:

Invest in AI-powered document review. Tools that can detect AI-generated lab reports, imaging records, and clinical notes are now a baseline requirement — not a luxury.

Develop a formal AI strategy. Plans need to address both how AI threatens them and how they can deploy it defensively. Without this framework, teams operate reactively rather than proactively.

Focus scrutiny on out-of-network providers. Given the patterns Spear described, out-of-area and out-of-network claims deserve a higher level of review, especially when documentation appears unusually polished.

Monitor policy loopholes actively. Since agentic AI may already be scanning reimbursement policies for gaps, plans should conduct their own regular audits of policy ambiguities — and close them before bad actors can exploit them.

Stay ahead of evolving tactics. AI fraud is not static. As detection tools improve, fraudsters adapt. Ongoing training, vendor partnerships, and industry collaboration are essential for keeping pace.
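The policy-audit step above can be sketched as a simple scan for wording that leaves reimbursement decisions open to interpretation. The phrase list is an illustrative assumption, not a clinical or regulatory standard; a real audit would combine automated scanning with expert policy review.

```python
# Minimal sketch of a policy-ambiguity audit: scan policy text for
# phrases that leave coverage decisions open to interpretation.
# The phrase list is a hypothetical starting point, not a standard.

import re

AMBIGUOUS_PHRASES = [
    r"\bas appropriate\b",
    r"\bmay be covered\b",
    r"\bat the discretion of\b",
    r"\bwhen medically necessary\b",
]

def audit_policy(text: str) -> list[str]:
    """Return the ambiguous phrases found in one policy document."""
    lowered = text.lower()
    hits = []
    for pattern in AMBIGUOUS_PHRASES:
        match = re.search(pattern, lowered)
        if match:
            hits.append(match.group(0))
    return hits

policy = ("Repeat imaging may be covered when medically necessary, "
          "at the discretion of the reviewing physician.")
print(audit_policy(policy))
```

Running a scan like this regularly would surface the same loopholes an agentic system might probe, so the plan can tighten the language first.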

The threat is real, it is growing, and it is already affecting health plans across the country. The plans that build strong AI strategies today are the ones best positioned to detect and stop fraud before it reaches a critical scale.
