A lawsuit accuses Humana of using an AI algorithm to deny seniors crucial rehabilitation care, echoing similar claims against UnitedHealthcare. At issue is the nH Predict tool, which plaintiffs say bases long-term care coverage decisions on unrealistically optimistic recovery projections, resulting in denials of care that physicians had prescribed. Employees were allegedly pressured to hew to the algorithm's targets, and although the suit was filed by two individuals, the affected class could reach into the thousands or millions. Personal accounts, such as that of 86-year-old JoAnne Barrows, illustrate the human toll of these denials, while a parallel lawsuit against UnitedHealth cites the tool's alleged high error rate. Together, the cases sharpen the ethical debate over AI's role in healthcare coverage decisions and are prompting reflection across the industry on its proper use in patient care.
This legal action draws parallels to similar claims against UnitedHealthcare; together, the two insurers dominate the Medicare Advantage market. Plaintiffs contend that Humana relied on the nH Predict tool from naviHealth, a subsidiary of UnitedHealth Group's Optum, to determine coverage for long-term care.
The lawsuit specifically targets the nH Predict algorithm, claiming it bases coverage decisions on unrealistic projections for recovery, thus denying essential care recommended by physicians. The plaintiffs assert that Humana’s adoption of this AI model led to a substantial rise in coverage denials for post-acute care.
While the plaintiffs pursue legal recourse, a Humana spokesperson refrained from commenting on ongoing litigation.
Beyond disputing the algorithm’s accuracy, the lawsuit alleges that employees faced undue pressure to meet the model’s objectives, risking disciplinary action or termination if they failed to comply.
Although initiated by two individual plaintiffs, the lawsuit suggests that the affected class could encompass thousands, if not millions, of individuals.
One of the plaintiffs, 86-year-old JoAnne Barrows, exemplifies the human toll of these allegations. After a fall and a leg injury led to her hospitalization, her doctor placed her under a non-weight-bearing order for six weeks. Despite this, Humana notified Barrows that her coverage would end after just two weeks in a rehabilitation facility, well before the prescribed six-week non-weight-bearing period had elapsed. Her appeal was denied, forcing her family to bear the cost of the remaining rehabilitation.
Another lawsuit, filed recently against UnitedHealth, claims that naviHealth's platform operates with a purported "90% error rate." The insurer allegedly continued using it nonetheless, perhaps emboldened by how few members appeal claim denials.
These legal actions come in the wake of an investigative report by STAT News earlier this year, shedding light on the potential misuse of AI technology by Medicare Advantage plans in claims denials.
The lawsuit against Humana alleges the misuse of AI algorithms to restrict seniors' access to essential rehabilitation care, and the similar accusations against UnitedHealthcare suggest the concerns are systemic. Cases like Barrows' show the real-life consequences of algorithm-driven denials, while parallel litigation citing high AI error rates intensifies the ethical debate. These legal actions underscore the pressing need for an ethical framework guiding AI implementation in healthcare, and they press the industry to reevaluate its use of the technology so that patient-centered care remains paramount amid rapid technological change in healthcare delivery.