New research from the University of California, Riverside, explores the utility of Google and ChatGPT for studying dementia. The two platforms have complementary strengths and weaknesses: Google provides current but often biased information, while ChatGPT offers impartial but outdated information. Despite readability issues, both show promise as resources for dementia research. Validation remains crucial, as demonstrated by a separate study finding ChatGPT unsuitable for gastroenterology education. Improvements in reliability and accuracy are anticipated but may take time given the complexity involved.
Recent research has shed light on the strengths and limitations of using Google and ChatGPT to investigate dementia and Alzheimer's disease.
A new study from the University of California, Riverside (UCR) highlights what each tool does well: Google supplies current information, while ChatGPT delivers impartial answers. Used effectively, the two could serve as valuable resources for understanding and addressing Alzheimer's and dementia.
In a press release, Vagelis Hristidis, Ph.D., a professor of computer science and engineering at UCR’s Bourns College of Engineering, expressed his belief that a combination of the best attributes from both platforms could yield an improved system. He anticipates such advancements over the next few years.
After assessing how well the tools handle questions about dementia and Alzheimer's, Hristidis projected that such resources will continue to play a significant role in the management of certain medical conditions.
The choice to focus on these conditions was influenced by their prominence and projected growth. According to the Centers for Disease Control and Prevention (CDC), the number of individuals aged 65 and above with dementia was around 5 million in 2014, projected to rise to 14 million by 2060.
To evaluate their utility, Hristidis and his co-authors posed 60 queries to both Google and ChatGPT, emulating those commonly made by individuals with dementia. Roughly half of the queries sought insight into disease processes, while the remainder asked about services available to support patients and their families.
The study yielded a mix of outcomes. Google was found to offer up-to-date information but often exhibited bias in results, including prominently displaying service providers seeking customers.
In contrast, ChatGPT provided impartial information, yet its drawback lay in the outdated nature of its data and its reliance on limited sources.
Hristidis noted, “Google provides more current information, covering a wide array of topics, whereas ChatGPT’s training updates occur every few months, rendering it somewhat outdated. For instance, if a new medication was introduced last week, you wouldn’t find it on ChatGPT.”
However, ChatGPT’s advantage over Google stemmed from its reliability and accuracy, attributed to OpenAI’s practice of incorporating dependable sources during training.
On the other hand, Google’s comprehensiveness sometimes led to inconsistencies, partly due to businesses paying for higher visibility in search results.
Both platforms, however, returned responses with low readability scores, suggesting they may be of limited use to individuals with lower levels of education and health literacy.
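Readability is typically quantified with formulas such as the Flesch Reading Ease score, which penalizes long sentences and multisyllabic words; the study does not specify which metric it used, so the following is purely illustrative. A minimal Python sketch, using a rough vowel-group heuristic to estimate syllable counts:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of consecutive vowels as one syllable each.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Higher scores mean easier text; scores below ~60 are considered
    difficult for readers with limited health literacy."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Short words and sentences score higher (easier to read)
# than dense clinical prose.
simple = "The test is short. It is easy to read."
dense = ("Comprehensive neurodegenerative pathophysiology necessitates "
         "multidisciplinary longitudinal epidemiological investigation.")
print(flesch_reading_ease(simple) > flesch_reading_ease(dense))
```

The syllable counter here is deliberately crude; production readability tools use dictionary-based syllabification, but the sentence-length and word-length penalties are the same idea.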
Nevertheless, Hristidis acknowledged the latent potential in these resources. He emphasized the possibility of improving readability, citing existing AI tools capable of reading and paraphrasing text. Enhancing reliability and accuracy, however, is a more complex challenge, considering the extensive research and development invested in ChatGPT’s creation.
Earlier research also pinpointed shortcomings in ChatGPT’s performance. A study from May highlighted its inadequacy in the field of gastroenterology education. In self-assessment tests for the American College of Gastroenterology (ACG) in 2021 and 2022, both ChatGPT-3 and ChatGPT-4 failed to achieve a passing score of 70 percent or higher.
Arvind Trindade, MD, a senior author of the study, stressed the lack of comprehensive research surrounding ChatGPT’s potential in medical education, specifically in gastroenterology. He advised against its immediate implementation in the healthcare field.
While ChatGPT demonstrates potential, validation is essential. Findings like these underscore the need for continued validation efforts going forward.