Yale School of Medicine researchers have developed a deep learning approach for accurately detecting aortic stenosis from heart ultrasound scans, according to a study published in the European Heart Journal. The team aimed to enable point-of-care ultrasound screening for early disease detection. The deep learning model achieved high performance across diverse cohorts, indicating potential for broader community screening, but further research is required before clinical deployment. Past efforts have also used AI, such as natural language processing, to aid in aortic stenosis identification, showing promising results in automating detection.
Researchers at the Yale School of Medicine have developed a deep learning (DL) method that can accurately detect aortic stenosis by analyzing cardiac ultrasound data, according to a recent article in the European Heart Journal.
The researchers noted that aortic stenosis, a common heart condition caused by narrowing of the aortic valve, is a major contributor to morbidity and mortality. Early detection is critical to preventing these adverse outcomes, but it requires specialized ultrasound imaging of the heart, specifically Doppler echocardiography.
While Doppler echocardiography serves as the primary means of diagnosing aortic stenosis, its specialized nature renders it both inefficient and difficult to employ for early detection initiatives.
Senior author Rohan Khera, MD, MS, who holds positions as an assistant professor of cardiovascular medicine and health informatics at Yale and as the director of the Cardiovascular Data Science (CarDS) Lab, articulated the challenge, stating, “Our challenge is that precise evaluation of [aortic stenosis] is crucial for patient management and risk reduction. While specialized testing remains the gold standard, reliance on those who make it to our echocardiographic laboratories likely misses people early in their disease state.”
The objective of the research team was to construct a model capable of enabling point-of-care ultrasound screenings to expedite the early detection of this ailment.
To that end, the researchers developed a deep learning model trained on 5,257 transthoracic echocardiography (TTE) studies, encompassing 17,570 videos, performed at Yale New Haven Hospital between 2016 and 2020.
The tool was then tested on 2,040 consecutive studies from Yale New Haven Hospital and validated externally in two cohorts of 4,226 and 3,072 studies from hospitals in California and New England, respectively.
The model performed well across all cohorts, achieving an area under the receiver operating characteristic curve (AUROC) of 0.978 in the test set, 0.952 in the California cohort, and 0.942 in the New England cohort.
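For context, the AUROC reported above measures how well a model's scores rank patients with the condition above those without it, with 1.0 indicating perfect discrimination. Below is a minimal, dependency-free sketch of the metric; the labels and scores are illustrative, not from the study:

```python
def auroc(labels, scores):
    """Probability that a randomly chosen positive case is scored
    higher than a randomly chosen negative case (ties count as 0.5)."""
    positives = [s for y, s in zip(labels, scores) if y == 1]
    negatives = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in positives
        for n in negatives
    )
    return wins / (len(positives) * len(negatives))

# Toy example: the model ranks most positives above most negatives.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

In practice, metrics libraries compute this directly from predicted probabilities; the pairwise form above simply makes the ranking interpretation explicit.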
These outcomes led the researchers to affirm that the model has the potential to effectively aid in the early identification of aortic stenosis.
Khera explained, “Our work can allow broader community screening for [aortic stenosis] as handheld ultrasounds can increasingly be used without the need for more specialized equipment. They are already being used frequently in emergency departments, and many other care settings.”
Nevertheless, further research is indispensable before the tool can be employed in clinical settings.
Prior studies have also aimed to improve aortic stenosis detection using artificial intelligence (AI).
In 2021, researchers at Kaiser Permanente demonstrated that natural language processing (NLP) could support clinicians in identifying aortic stenosis.
The model was trained to sift through echocardiogram reports and electronic medical record (EMR) data to identify abbreviations, terms, and phrases linked to the condition.
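At its simplest, this kind of report mining amounts to scanning free text for condition-related patterns. A minimal sketch follows; the phrases are hypothetical stand-ins, since the study's actual term list is not reproduced here:

```python
import re

# Hypothetical patterns; the study's actual vocabulary is not public here.
AS_PATTERNS = [
    r"\baortic stenosis\b",
    r"\bsevere AS\b",
    r"\baortic valve area\b",
]

def flag_report(text):
    """Return the aortic-stenosis-related patterns found in one report."""
    return [p for p in AS_PATTERNS if re.search(p, text, re.IGNORECASE)]

report = "Findings consistent with severe AS; aortic valve area 0.9 cm2."
print(flag_report(report))  # matches two of the three patterns
```

A production NLP pipeline would add negation handling and context checks (e.g., distinguishing "no evidence of aortic stenosis"), but simple pattern matching conveys the core idea of surfacing candidate patients from report text.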
This tool expedited the identification of nearly 54,000 patients meeting the criteria for aortic stenosis, a process that the research team noted would have taken years if done manually.