Author affiliations: VA Palo Alto Cooperative Studies Program Coordinating Center, Palo Alto, CA, USA; VA Palo Alto Health Care System, Big Data Scientist Training Enhancement Program, Palo Alto, CA, USA; VA Palo Alto Health Care System, Hospital Medicine, Palo Alto, CA 94304, USA; Stanford, Department of Medicine, Division of Hospital Medicine, Stanford, CA, USA; Stanford University, Department of Medicine, Division of Endocrinology, Gerontology & Metabolism and Courtesy Epidemiology, School of Medicine, Stanford, CA, USA
Publication: JOURNAL OF GENERAL INTERNAL MEDICINE (J. Gen. Intern. Med.)
Year/Volume/Issue: 2025, Vol. 40, No. 4
Pages: 803-810
Subject classification: 1204 [Management - Public Administration]; 1002 [Medicine - Clinical Medicine]; 100201 [Medicine - Internal Medicine (incl. cardiovascular disease, hematology, respiratory disease, gastroenterology, endocrinology and metabolism, nephrology, rheumatology, infectious disease)]; 10 [Medicine]
Funding: VA Palo Alto Academic Research Catalyst (ARC) Program
Keywords: predictive modeling; inpatient mortality; single hospital; interpretability; precision medicine; hospital care
Abstract:
Background: Advances in artificial intelligence and machine learning have facilitated the creation of mortality prediction models, which are increasingly used to assess quality of care and inform clinical practice. One open question is whether a hospital should use a mortality model trained on a diverse nationwide dataset or a model developed primarily from its own local hospital data.
Objective: To compare the performance of a single-hospital, 30-day all-cause mortality model against an established national benchmark on the task of mortality prediction.
Design/Participants: We developed a single-hospital mortality prediction model using 9975 consecutive inpatient admissions at the Department of Veterans Affairs Palo Alto Healthcare System from July 26, 2018, to September 30, 2021, and compared its performance against an established national model with similar features.
Main Measures: Both the single-hospital model and the national model placed each patient into one of five bins of predicted 30-day mortality risk, the highest bin being ≥ 30%. Evaluation metrics included receiver operating characteristic area under the curve (ROC AUC), sensitivity, specificity, and balanced accuracy. Final comparisons were made between the single-hospital model trained on the full training set and the national model, for both metrics and prediction overlap.
Key Results: With sufficiently large training sets of 2720 or more inpatient admissions, there was no statistically significant difference between the performance of the national model (ROC AUC 0.89, 95% CI [0.858, 0.919]) and the single-hospital model (ROC AUC 0.878, 95% CI [0.84, 0.912]). For the 89 mortality events in the test set, the single-hospital model agreed with the national model's risk assessment, or an adjacent risk assessment, in 92.1% of the encounters.
Conclusions: A single-hospital inpatient mortality prediction model can achieve performance comparable to a national model when evaluated on a single-hospital population.
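The comparison described in the abstract rests on a few standard evaluation steps: ROC AUC with confidence intervals, balanced accuracy, and exact-or-adjacent agreement between the two models' five risk bins. The following is a minimal Python sketch of that kind of evaluation, not the authors' actual code. It uses synthetic data; the bin edges other than the ≥ 30% boundary are placeholders (the record does not state them), and all names (bootstrap_auc_ci, BIN_EDGES, risk_local, risk_national) are hypothetical.

# Illustrative sketch only; assumes scikit-learn and NumPy are available.
import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

rng = np.random.default_rng(0)

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap 95% CI for ROC AUC (one common way to obtain such CIs)."""
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:   # skip resamples containing a single class
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), lo, hi

# Hypothetical five risk bins; only the >= 30% boundary is stated in the abstract.
BIN_EDGES = [0.0, 0.05, 0.10, 0.20, 0.30, 1.0]

def to_bin(risk):
    """Map predicted 30-day mortality risk to one of five ordinal bins (0..4)."""
    return np.digitize(risk, BIN_EDGES[1:-1])

def bin_agreement(risk_a, risk_b, deaths_mask):
    """Fraction of mortality events where the two models' bins match exactly or are adjacent."""
    a, b = to_bin(risk_a[deaths_mask]), to_bin(risk_b[deaths_mask])
    return np.mean(np.abs(a - b) <= 1)

# Toy data standing in for test-set predictions from the two models.
y = rng.integers(0, 2, 500)                                            # 1 = 30-day mortality
risk_local = np.clip(y * 0.30 + rng.normal(0.15, 0.10, 500), 0, 1)     # single-hospital model
risk_national = np.clip(y * 0.28 + rng.normal(0.15, 0.10, 500), 0, 1)  # national benchmark

auc_l, lo_l, hi_l = bootstrap_auc_ci(y, risk_local)
auc_n, lo_n, hi_n = bootstrap_auc_ci(y, risk_national)
print(f"single-hospital AUC {auc_l:.3f} [{lo_l:.3f}, {hi_l:.3f}]")
print(f"national        AUC {auc_n:.3f} [{lo_n:.3f}, {hi_n:.3f}]")
print(f"balanced accuracy (single-hospital, 0.5 cut): "
      f"{balanced_accuracy_score(y, risk_local >= 0.5):.3f}")
print(f"exact/adjacent bin agreement on mortality events: "
      f"{bin_agreement(risk_local, risk_national, y == 1):.1%}")

The percentile bootstrap shown here is just one common way to obtain AUC confidence intervals; the paper may have used a different method (for example, DeLong's), which the record does not specify.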