HITRUST has introduced the HITRUST AI Assurance Program, an initiative designed to guide healthcare organizations in the secure and sustainable use of artificial intelligence (AI) models. The program, integrated into the HITRUST CSF, prioritizes risk management and fosters collaboration between organizations and AI service providers to address shared risks. It aims to bring clarity to risk responsibilities, especially concerning patient data privacy, and seeks to establish a universal, trusted approach to AI security assurance, supported by collaborations with providers such as Microsoft's Azure OpenAI Service.
HITRUST has unveiled its latest endeavor, the HITRUST AI Assurance Program, designed specifically to guide healthcare organizations in the secure and sustainable deployment of artificial intelligence models. The program marks a significant milestone and underscores the organization's commitment to prioritizing risk management in the rapidly evolving landscape of AI technology.
According to HITRUST, the initiative is the first of its kind and will be incorporated into version 11.2 of the HITRUST CSF, making risk management a foundational consideration. The move aims to help organizations using AI across a range of applications engage more effectively with their AI service providers to jointly address shared risks.
This clarity around shared risks and accountabilities, HITRUST says, will allow organizations to rely on existing information protection controls from both internal IT services and external third parties, including AI technology platform providers and suppliers of AI-enabled applications and services.
HITRUST also recognizes the unique privacy and security challenges that the opacity of deep neural networks poses in healthcare settings. Healthcare organizations must therefore fully understand their responsibilities regarding patient data and secure reliable risk assurances from their service providers.
The ultimate objective of the HITRUST AI Assurance Program is to offer a universally accepted approach to security assurance. It is meant to help healthcare organizations understand the risks associated with implementing AI models and reliably demonstrate their adherence to AI risk management principles with the same transparency, consistency, accuracy, and quality found in all HITRUST Assurance reports.
Notably, HITRUST is collaborating with Microsoft Azure OpenAI Service to maintain the CSF and expedite its alignment with new regulations, data protection laws, and industry standards.
In the broader context, recent research indicates that generative AI could become a $22 billion segment of the healthcare industry over the next decade. As health systems rush to adopt generative AI and other algorithms to improve clinical and operational workflows, they must tread carefully to address the inherent risks, particularly in cybersecurity.
Robert Booker, Chief Strategy Officer at HITRUST, emphasizes the importance of shared understanding and cooperation among organizations involved in AI systems, stressing the need for a practical, scalable, recognized, and proven approach to AI system controls. Such an approach is crucial for building trust among regulators and other stakeholders, ensuring a solid foundation for AI implementations.
Omar Khawaja, Field CISO of Databricks and a HITRUST board member, highlights the role of objective security assurance approaches such as the HITRUST CSF in establishing the security foundation AI implementations require. In his view, given AI's immense societal potential, its cyber risks must be addressed comprehensively.