The World Health Organization (WHO) emphasizes caution and the ethical use of AI in healthcare, particularly large language models like ChatGPT. WHO calls for careful evaluation of risks, protection of human well-being, transparency, and evidence of benefits before the widespread implementation of AI in healthcare delivery.
The World Health Organization (WHO) has expressed concerns about the deployment of artificial intelligence (AI) in healthcare, emphasizing the need for caution, ethical consideration, and responsible use. While recognizing the potential benefits of advanced AI models such as ChatGPT, WHO has called for vigilance and deliberation in their implementation.
The WHO has urged healthcare organizations worldwide to exercise caution and carefully evaluate the risks associated with AI, particularly in the rapidly evolving field of large language models. It emphasizes the importance of protecting human well-being, safety, and autonomy while promoting public health and reducing inequities.
Although WHO acknowledges the excitement surrounding tools like ChatGPT, Bard, and BERT because of their potential to improve access to health information, decision support, and diagnostic capacity, it stresses that caution should be exercised consistently, as would be expected with any new technology.
The WHO raises concerns about the hasty adoption of untested AI systems, which can lead to medical errors, misinformation, and a loss of trust in AI, ultimately undermining its long-term benefits. Transparency, inclusion, public engagement, expert supervision, and rigorous evaluation are highlighted as essential values to be upheld in the deployment of AI in healthcare.
To protect patient safety and ensure AI delivers on its promise, WHO calls for clear evidence of the benefits of large language models and other AI models before their widespread, routine use in healthcare delivery.
The emergence of ChatGPT and generative AI has ushered in a new era in healthcare processes and decision-making. These advanced models have the potential to significantly impact patient engagement, inform hospital admission, discharge, and transfer (ADT) decisions, reshape the healthcare workforce, and transform care delivery. However, the inherent risks and uncertainties associated with these technologies necessitate oversight and a thoughtful approach to their implementation.
At HIMSS23, representatives from the World Health Organization and other health ministries emphasized the importance of patient access, safety, and health equity in digital health strategies.
WHO reiterates the significance of ethical principles and appropriate governance outlined in its guidance on the ethics and governance of AI for health.
The six core principles identified by WHO are:
- Protecting human autonomy
- Promoting human well-being and safety
- Ensuring transparency, explainability, and intelligibility
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting responsive and sustainable AI