A recent study from the University of Arizona Health Sciences reveals that 52% of individuals are hesitant about the use of artificial intelligence (AI) for diagnosis. However, the involvement and support of clinicians can improve patient trust in AI. The research found that patients are roughly evenly divided between preferring a human clinician and an AI-driven diagnostic tool, with preferences varying based on patient demographics and the level of support for AI from healthcare providers.
The study, published in PLOS Digital Health, used interviews and surveys to assess patient preferences for AI-guided diagnosis and treatment. Qualitative interviews were conducted initially with 24 patients to gauge their reactions to current and future AI technologies. In the second phase, a blinded, randomized survey was administered to 2,472 participants from diverse socioeconomic, racial, and ethnic backgrounds. The survey included clinical vignettes and assessed eight variables related to patient perceptions, such as illness severity, AI accuracy, personalization, racial and financial bias, and the extent to which the primary care physician incorporated AI advice.
The findings indicated that patients initially had nearly equal preferences for a human clinician or an AI, with 52.9% choosing the human option and 47.1% opting for AI. However, when participants were informed that their healthcare provider supported the use of AI and found it helpful, they were more likely to accept it. AI acceptance also increased when participants were informed that the AI was accurate and personalized, or when the primary care provider nudged them toward the AI option. Factors like illness severity, racial and financial bias, and AI-tailored treatment plans did not significantly impact AI acceptance.
Demographic factors and patient attitudes played a role in AI uptake. Older participants, politically conservative individuals, and those who placed importance on religion were less likely to prefer AI. Native American participants were more likely to choose AI than white participants, while Black participants were less likely to do so.
The study highlighted the importance of the human element in patient-provider interactions to build trust in AI and incorporate it into clinical practice. Factors like accurate information, nudges, and a patient-centered experience were identified as potential ways to increase AI acceptance. The researchers emphasized the need for future studies and physician involvement to ensure the accuracy of AI systems and facilitate decision-making in patient care.
Ultimately, clinician-driven approaches will be vital in promoting AI use across different patient groups before widespread adoption can be achieved. The study's findings are expected to guide future research and to inform clinical decision-making as AI takes on a growing role in healthcare.