
HHS AI Use Surges Amid Workforce Cuts

The Department of Health and Human Services (HHS) reported a dramatic 65% increase in AI use cases in 2025, according to its latest technology inventory. The surge comes even as agency leadership has aggressively moved to downsize its workforce — raising pointed questions about whether artificial intelligence is being used to supplement staff or quietly replace them.

AI Expansion Accelerates Across HHS

HHS was already among the federal agencies with the largest AI use case inventories in prior years. The 2025 inventory adds new pilots focused on alleviating staffing shortages, several disclosures of so-called “agentic” AI tools, and a new deployment related to unaccompanied children in government care. Notably, over half of the 2025 use cases remain in pre-deployment or pilot phases, meaning most applications are just beginning to be tested.

Valerie Wirtschafter, a Brookings Institution fellow specializing in artificial intelligence and emerging technology, described the agency as clearly “leaning into” AI expansion. She noted that the inventory signals “a huge focus” on scaling up AI capabilities across HHS departments and bureaus.

Staffing Shortages Drive AI Deployment

At least two use cases in the inventory directly cite workforce limitations as a justification for deploying AI tools. The Office for Civil Rights (OCR) disclosed pilots using both ChatGPT and Microsoft Outlook CoPilot to address staffing shortages. ChatGPT is being used to improve investigative efficiency by breaking down complex legal concepts and identifying patterns in court rulings related to Medicaid services. CoPilot, integrated through Westlaw, is deployed to enable faster public correspondence.

Both tools are categorized under “law enforcement” uses, though neither entry explains specifically how generative AI addresses workforce gaps. HHS and OCR did not respond to media requests for additional comment or clarification.

While some experts acknowledge AI’s potential to boost government efficiency, they caution against over-reliance. Cody Venzke, senior policy counsel at the ACLU’s National Political Advocacy Department, warned: “It is not a stand-in for all human decision-making, and that is especially true when you are committed to breakneck downsizing of the federal government.” Wirtschafter echoed this sentiment, calling the staffing-driven AI deployments “a problem, maybe of their own making.”

Agentic AI, Palantir, and Political Alignment

Among the more notable disclosures, the Administration for Children and Families (ACF) revealed plans to use an AI system to verify identities of adults applying to sponsor unaccompanied minors. The tool, currently in pre-deployment and classified as “high-impact,” will be subject to stricter risk management requirements. No vendor was disclosed.

ACF also deployed two tools to identify position descriptions and grants that conflict with the president’s executive orders targeting diversity, equity, and inclusion programs. Both tools list Palantir as the vendor. Palantir, which has drawn scrutiny for its work with Immigration and Customs Enforcement, is cited in more than 15 HHS use cases — the majority within ACF.

HHS also disclosed multiple “agentic” AI deployments for 2025. These tools perform specific tasks autonomously or with minimal human oversight. Among them is a Centers for Disease Control and Prevention pilot of OpenAI’s Deep Research, which processes large volumes of research and data. An internal study cited in the inventory found that 94% of prompts produced high-quality reports, most completed within 30 minutes. Additional agentic uses include a FOIA response agent at the Health Resources and Services Administration and an ethics policy assistant at the National Institutes of Health.

Notably absent from the inventory is HHS’s use of xAI’s Grok chatbot on its RealFood.gov website. However, “xAI gov” does appear on a separate inventory of commercial AI tools, listed for generating document drafts, scheduling, and managing social media.

Transparency Gaps Concern Advocates

Despite being one of the more detailed federal AI inventories submitted in 2025, HHS’s publication has significant shortcomings. Unlike previous years, the new inventory lacks unique identifiers for each entry, making year-over-year comparisons difficult. Numerous fields are either blank or marked as sensitive and withheld from public view.

Most critically, privacy impact assessments (PIAs) — legally required for technologies that collect personally identifiable information — are largely absent or inaccessible. Out of all submissions, only one entry indicated where the PIA could be located. Venzke called this a serious transparency failure: “That defies the entire point of having a PIA, which is public transparency.”

Quinn Anex-Ries, a senior policy analyst at the Center for Democracy and Technology, stressed that the dramatic growth in AI use demands a corresponding growth in oversight infrastructure. “It’s not really clear that the agency did a bunch of additional groundwork to prepare for their AI use to grow by such a huge margin,” Anex-Ries said.

As HHS continues to expand its AI footprint while simultaneously shrinking its human workforce, the gap between technological adoption and meaningful accountability remains a pressing concern for transparency advocates, civil liberties groups, and the public alike.
