A Clinically Practical and Interpretable Deep Model for ICU Mortality Prediction with External Validation

AMIA Annu Symp Proc. 2021 Jan 25;2020:629-637. eCollection 2020.


Deep learning models are increasingly studied in the field of critical care. However, due to the lack of external validation and interpretability, it is difficult to generalize deep learning models to critical care scenarios. Few works have validated the performance of deep learning models on external datasets. To address this, we propose a clinically practical and interpretable deep model for intensive care unit (ICU) mortality prediction with external validation. We use the newly published Philips eICU dataset to train a recurrent neural network model with a two-level attention mechanism, and use the MIMIC-III dataset as the external validation set to verify model performance. This model achieves high accuracy (AUC = 0.855 on the external validation set) and has good interpretability. Based on this model, we develop a system to support clinical decision-making in ICUs.
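The abstract does not specify the exact architecture, but a two-level attention mechanism over recurrent hidden states is commonly realized as one attention level over time steps and a second over features, as in RETAIN-style models. The sketch below is a minimal, hypothetical illustration of that idea in numpy: all shapes, weight matrices, and the random hidden states stand in for an actual trained recurrent encoder and are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax used for the time-level attention weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, H = 48, 16  # hypothetical: 48 hourly steps, hidden size 16

# Stand-in for hidden states from a recurrent encoder over ICU time series.
h = rng.standard_normal((T, H))

# Level 1: time-step attention -> one scalar weight per hour (sums to 1),
# indicating which hours contributed most to the prediction.
w_alpha = rng.standard_normal(H)
alpha = softmax(h @ w_alpha)           # shape (T,)

# Level 2: feature-level attention -> a gate per hidden dimension and step,
# indicating which variables mattered at each hour.
W_beta = rng.standard_normal((H, H))
beta = np.tanh(h @ W_beta)             # shape (T, H)

# Combine both attention levels into a single context vector.
context = (alpha[:, None] * beta * h).sum(axis=0)  # shape (H,)

# Logistic output head for the mortality probability.
w_out = rng.standard_normal(H)
p_mortality = 1.0 / (1.0 + np.exp(-(context @ w_out)))
```

Interpretability comes from inspecting `alpha` (influential time steps) and `beta` (influential variables), which can be surfaced to clinicians in a decision-support view.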

PMID:33936437 | PMC:PMC8075474
