Dementia risk prediction in individuals with mild cognitive impairment: a comparison of Cox regression and machine learning models

dc.contributor.author: Wang, Meng
dc.contributor.author: Greenberg, Matthew
dc.contributor.author: Forkert, Nils D.
dc.contributor.author: Chekouo, Thierry
dc.contributor.author: Afriyie, Gabriel
dc.contributor.author: Ismail, Zahinoor
dc.contributor.author: Smith, Eric E.
dc.contributor.author: Sajobi, Tolulope T.
dc.date.accessioned: 2022-11-06T01:02:25Z
dc.date.available: 2022-11-06T01:02:25Z
dc.date.issued: 2022-11-02
dc.date.updated: 2022-11-06T01:02:24Z
dc.description.abstract: Background: Cox proportional hazards regression models and machine learning models are widely used for predicting the risk of dementia. Existing comparisons of these models have mostly been based on empirical datasets and have yielded mixed results. This study examines the accuracy of various machine learning models and of Cox regression models for predicting time-to-event outcomes in people with mild cognitive impairment (MCI), using Monte Carlo simulation.
Methods: The predictive accuracy of nine time-to-event regression and machine learning models was investigated. These models include Cox regression, penalized Cox regression (with Ridge, LASSO, and elastic net penalties), survival trees, random survival forests, survival support vector machines, artificial neural networks, and extreme gradient boosting. Simulation data were generated using the study design and data characteristics of a clinical registry and a large community-based registry of patients with MCI. The predictive performance of these models was evaluated using three-fold cross-validation via Harrell's concordance index (c-index), the integrated calibration index (ICI), and the integrated Brier score (IBS).
Results: Cox regression and machine learning models had comparable predictive accuracy across the three performance metrics and the data-analytic conditions. The estimated c-index values for Cox regression, random survival forests, and extreme gradient boosting were 0.70, 0.69, and 0.70, respectively, when the data were generated from a Cox regression model under the large-sample-size condition. In contrast, the estimated c-index values for these models were 0.64, 0.64, and 0.65 when the data were generated from a random survival forest under the large-sample-size condition. Both Cox regression and random survival forests had the lowest ICI values (0.12 for the large sample size and 0.18 for the small sample size) among all the investigated models, regardless of sample size and data-generating model.
Conclusion: Cox regression models have comparable, and sometimes better, predictive performance than more complex machine learning models. We recommend that the choice among these models be guided by the research hypotheses, model interpretability, and the type of data.
dc.identifier.citation: BMC Medical Research Methodology. 2022 Nov 02;22(1):284
dc.identifier.doi: https://doi.org/10.1186/s12874-022-01754-y
dc.identifier.uri: http://hdl.handle.net/1880/115423
dc.identifier.uri: https://doi.org/10.11575/PRISM/44917
dc.language.rfc3066: en
dc.rights.holder: The Author(s)
dc.title: Dementia risk prediction in individuals with mild cognitive impairment: a comparison of Cox regression and machine learning models
dc.type: Journal Article
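The abstract's Methods section describes fitting Cox regression and several machine learning survival models and scoring them with three-fold cross-validation via Harrell's c-index. The sketch below illustrates that kind of evaluation workflow only; it is not the authors' simulation code. It assumes the scikit-survival (sksurv) library, a small synthetic proportional-hazards dataset, and covers just two of the nine models (Cox regression and a random survival forest); the data-generating parameters and hyperparameters are arbitrary assumptions.

# Illustrative sketch (assumed setup, not the authors' code): compare a Cox model
# and a random survival forest with 3-fold cross-validation and Harrell's c-index.
import numpy as np
from sklearn.model_selection import KFold
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored

# Synthetic right-censored survival data generated from a proportional-hazards model.
rng = np.random.default_rng(42)
n, p = 400, 5
X = rng.normal(size=(n, p))
beta = np.array([0.8, -0.5, 0.3, 0.0, 0.0])              # only three prognostic covariates
event_time = rng.exponential(scale=np.exp(-(X @ beta)))  # true event times
censor_time = rng.exponential(scale=2.0, size=n)         # independent censoring times
observed_time = np.minimum(event_time, censor_time)
event_observed = event_time <= censor_time
# scikit-survival expects a structured array of (event indicator, observed time).
y = np.array(list(zip(event_observed, observed_time)),
             dtype=[("event", "?"), ("time", "<f8")])

models = {
    "Cox regression": CoxPHSurvivalAnalysis(),
    "Random survival forest": RandomSurvivalForest(
        n_estimators=200, min_samples_leaf=15, random_state=0
    ),
}

kf = KFold(n_splits=3, shuffle=True, random_state=0)
for name, model in models.items():
    fold_cindex = []
    for train, test in kf.split(X):
        model.fit(X[train], y[train])
        risk = model.predict(X[test])  # higher score = higher predicted risk
        cindex = concordance_index_censored(
            y[test]["event"], y[test]["time"], risk
        )[0]
        fold_cindex.append(cindex)
    print(f"{name}: mean c-index over 3 folds = {np.mean(fold_cindex):.3f}")

A fuller reproduction along the abstract's lines would add the penalized Cox variants, survival trees, survival support vector machines, neural networks, and extreme gradient boosting, and would report calibration (ICI) and the integrated Brier score alongside the c-index; scikit-survival's integrated_brier_score can supply the latter, while the ICI would need a separate implementation.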
Files
Original bundle
Name: 12874_2022_Article_1754.pdf
Size: 1.66 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 0 B
Format: Item-specific license agreed to upon submission