
New research from the University of Arizona reveals that patient trust in artificial intelligence (AI) for healthcare diagnosis is influenced by demographics and clinician support. While roughly half of patients initially prefer a human clinician over AI, trust improves when clinicians endorse the technology. Factors such as accuracy, personalization, and nudges from providers affect acceptance. The study emphasizes the need to incorporate AI into physicians' workflows and to support patient decision-making in order to promote the adoption of accurate, reliable AI systems in healthcare.
Research conducted by the University of Arizona (UArizona) Health Sciences reveals that 52 percent of individuals are hesitant about the use of artificial intelligence (AI) for diagnosis. However, the study highlights that clinician endorsement of AI technology can enhance patient trust in its capabilities.
The study, published in PLOS Digital Health, found that patients are divided almost equally when asked if they prefer a human clinician or an AI-driven diagnostic tool. Preferences varied based on patient demographics and the extent of clinician support for the technology.
To determine participants’ preferences for AI-guided diagnosis and therapy, the researchers conducted qualitative interviews and questionnaires that took numerous contributing factors into account.
According to the findings, patients often have less faith in AI diagnoses than in those made by qualified medical professionals. Yet the study found that when a patient’s healthcare provider endorsed the use of AI, the patient was more likely to trust it.
In the initial question of whether they preferred a human clinician or an AI, participants were evenly split, with 52.9 percent choosing a human and 47.1 percent selecting an AI. However, when informed that their provider supports and finds the AI tool helpful, participants were significantly more likely to accept it. Acceptance of AI increased when participants were told the AI was accurate and personalized or when their primary care provider recommended it.
Variables such as illness severity, potential racial and financial bias in the AI, the integration of AI advice by the primary care physician, and personalized treatment plans did not significantly affect AI acceptance.
Demographic factors and patient attitudes also played a role in AI uptake. Older participants, politically conservative individuals, and those placing importance on religion were less likely to prefer AI. Native American participants were more likely to choose AI compared to white patients, while black patients were less likely to do so.
The study suggests that clinicians can enhance trust in AI and integrate it into their practices by focusing on the human aspect of patient-provider interactions. Factors such as the accuracy of the information, patient-centered experiences, and nudges can contribute to increased acceptance of AI.
The researchers emphasize that further investigation is necessary to determine optimal methods for incorporating AI into clinical practice and supporting patient decision-making.
Overall, this study underscores the importance of clinician-driven approaches to foster trust in AI and facilitate its widespread adoption across diverse patient populations.