An externally validated machine learning model accurately predicts six-month mortality risk for advanced cancer patients starting a new line of therapy, which could help clinicians and patients hold serious illness conversations. The model uses 45 variables drawn from EHR data and achieved an AUC of 0.80 in the external validation cohort. Because it was developed and validated on single-center datasets, its generalizability is limited, and validation across additional health systems is needed. Similar ML models that use patient-reported outcome data are also being investigated to forecast ovarian cancer mortality.
A machine learning (ML) model, validated externally, has proven effective at predicting six-month mortality risk for patients with advanced cancer who are beginning a new line of therapy (LOT). This could greatly facilitate serious illness conversations between medical practitioners and their patients.
Recently published in JAMA Network Open, the study details the successful external validation of the ML model. The model, previously developed and internally validated, was designed to stratify patients into mortality risk groups so that healthcare providers and patients could hold more meaningful, productive serious illness conversations at key decision-making points during treatment.
The study emphasizes that any machine learning model in healthcare must be externally validated to establish its reliability before clinical deployment. Despite this, external validation results are rarely reported, which prompted the research team to externally validate their model using up-to-date patient data.
The ML model draws on 45 features extracted from electronic health record (EHR) data, which can be implemented using the Fast Healthcare Interoperability Resources (FHIR) standard. The researchers also noted their intention to integrate the algorithm into a tool that communicates and explains the prognosis the model produces.
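As one illustration of how an EHR-derived feature might be read from FHIR data, the sketch below parses a FHIR Observation resource and extracts its numeric value by LOINC code. The resource content, the choice of serum albumin as a feature, and the `extract_feature` helper are assumptions for illustration only; the article does not list the model's 45 features.

```python
import json

# Illustrative FHIR Observation resource (not from the study).
# LOINC 1751-7 is serum/plasma albumin, a plausible EHR-derived feature.
observation = json.loads("""
{
  "resourceType": "Observation",
  "code": {"coding": [{"system": "http://loinc.org", "code": "1751-7",
                       "display": "Albumin [Mass/volume] in Serum or Plasma"}]},
  "valueQuantity": {"value": 3.2, "unit": "g/dL"}
}
""")

def extract_feature(obs, loinc_code):
    """Return the Observation's numeric value if it matches the given LOINC code."""
    for coding in obs.get("code", {}).get("coding", []):
        if coding.get("code") == loinc_code:
            return obs.get("valueQuantity", {}).get("value")
    return None

albumin = extract_feature(observation, "1751-7")
```

In a live system the same logic would run over resources fetched from a FHIR server rather than an inline JSON string.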
To train the model, the researchers used patient data from June 1, 2014, to June 1, 2020, identifying treatment decision points (TDPs) for new lines of therapy and confirming mortality outcomes six months after each TDP. The external validation phase used data from June 2, 2020, to April 12, 2022.
The researchers evaluated the model by comparing population characteristics between the development and validation datasets. Performance was assessed using metrics such as the area under the curve (AUC), positive predictive value, negative predictive value, sensitivity, and specificity at a predetermined risk threshold of 0.3.
This threshold was chosen to be consistent with a prior study, which categorized patients into low and high survival chance groups. Approximately 1 in 3 patients classified as having a low chance of survival remained alive after six months.
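The thresholding step described above can be sketched in a few lines: dichotomize each predicted risk at 0.3, then compute the confusion-matrix-based metrics the study reports. The risk scores and outcomes below are made-up illustrative values, not study data.

```python
def classification_metrics(y_true, risk, threshold=0.3):
    """Dichotomize predicted risks at `threshold`, then compute PPV, NPV,
    sensitivity, and specificity from the resulting confusion matrix."""
    y_pred = [1 if r >= threshold else 0 for r in risk]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Illustrative six-month mortality outcomes (1 = died) and predicted risks.
y_true = [0, 0, 1, 1, 0, 1, 0, 1, 0, 0]
risk = [0.10, 0.25, 0.65, 0.40, 0.05, 0.80, 0.35, 0.55, 0.20, 0.15]
metrics = classification_metrics(y_true, risk)
```

The 0.3 cutoff trades sensitivity against specificity: lowering it flags more TDPs as "low chance of survival" at the cost of more false positives.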
For patients predicted to have a low chance of survival, various quality metrics were calculated, including rates of palliative care or hospice referrals, hospitalizations, and average length of stay.
The validation cohort consisted of 1,822 patients with 2,613 TDPs. While the development and validation datasets exhibited similar six-month mortality rates after TDPs, there were differences in patient characteristics. Patients in the validation dataset tended to be younger and had a higher proportion of nervous system and brain cancer cases but a lower proportion of lung cancer cases compared to the development dataset.
The model displayed robust performance, achieving an AUC of 0.80 for the validation cohort. A low chance of survival was predicted for 8.7% of TDPs. Among the 130 patients with 146 TDPs predicted to have a low chance of survival and who subsequently died within six months, 16.4% were referred to hospice, 49.3% to palliative care, and 64.4% were hospitalized between the TDP and death.
The research team concluded that these findings underscore the need for a tool that can facilitate serious illness discussions between providers and patients in the context of new anticancer therapy decisions. However, they also cautioned about the model’s limited generalizability due to its reliance on single-center datasets and lack of diversity in the study cohorts.
Additionally, while this study contributes a crucial quality assessment before integrating the model into oncology care, the researchers stressed the importance of further validation across multiple healthcare systems.
The adoption of machine learning for predicting cancer mortality and enhancing patient care is a growing trend. In 2022, researchers demonstrated that an ensemble of ML models could accurately forecast the six-month mortality of ovarian cancer patients using patient-reported outcome (PRO) data.
Because survival rates in ovarian cancer vary by stage and type, and because treatment can degrade patients' quality of life, that model was designed to identify when a patient is approaching the end of life. It successfully flagged most patients who died within 180 days of a PRO assessment, suggesting it could help address gaps in ovarian cancer care delivery.