Author affiliations: Manchester Metropolitan Univ, Fac Hlth Psychol & Social Care, Manchester, Lancs, England; CLARA Analyt, CLARA Labs, Santa Clara, CA, USA; Univ Liverpool, Hlth Serv Res, Liverpool, Merseyside, England; Stanford Univ Sch Med, Stanford, CA 94305, USA; Mersey Care NHS Fdn Trust, Prescot, England
Publication: JMIR MHEALTH AND UHEALTH (JMIR mHealth uHealth)
Year/Volume/Issue: 2020, Vol. 8, No. 6
Pages: e15901
Subject classification: 1204 [Management - Public Administration]; 1001 [Medicine - Basic Medicine (medical or science degree conferrable)]; 10 [Medicine]
Funding: Department of Health Global Digital Exemplar [CENT/DIGEX/RW4/2017-10-16/A]
Keywords: suicide; suicidal ideation; smartphone; cell phone; machine learning; nearest neighbor algorithm; digital phenotyping
Abstract:
Background: Digital phenotyping and machine learning are currently being used to augment or even replace traditional analytic procedures in many domains, including health care. Given the heavy reliance on smartphones and mobile devices around the world, this readily available source of data is important and highly underutilized, with the potential to improve mental health risk prediction and prevention and advance mental health globally.
Objective: This study aimed to apply machine learning in an acute mental health setting for suicide risk prediction. It uses a nascent approach, adding to existing knowledge by using data collected through a smartphone in place of clinical data, which have typically been collected from health care records.
Methods: We created a smartphone app called Strength Within Me, linked to Fitbit, Apple HealthKit, and Facebook, to collect salient clinical information such as sleep behavior and mood, step frequency and count, and engagement patterns with the phone from a cohort of acute mental health inpatients (n=66). In addition, clinical research interviews were used to assess mood, sleep, and suicide risk. Multiple machine learning algorithms were tested to determine the best fit.
Results: K-nearest neighbors (KNN; k=2) with uniform weighting and the Euclidean distance metric emerged as the most promising algorithm, with 68% mean accuracy (averaged over 10,000 simulations of splitting the training and testing data via 10-fold cross-validation) and an average area under the curve of 0.65. We applied a combined 5×2 F test to compare the performance of KNN against a baseline classifier that guesses the training majority, random forest, support vector machine, and logistic regression, and achieved F statistics of 10.7 (P=.009) and 17.6 (P=.003) for the training-majority baseline and random forest, respectively, rejecting the null hypothesis that performance was the same.
Therefore, we have taken the first steps in prototyping a system
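The evaluation pipeline described in the Results can be sketched as follows. This is an illustrative reconstruction, not the study's code: the data are synthetic (only the cohort size n=66 and the KNN hyperparameters k=2, uniform weights, Euclidean distance come from the abstract), and the combined 5×2-cv F test is implemented from Alpaydin's (1999) published formula rather than taken from the paper.

```python
# Illustrative sketch (synthetic data, NOT the Strength Within Me dataset):
# the KNN configuration reported in the abstract, evaluated with 10-fold
# cross-validation, plus a hand-rolled combined 5x2-cv F test against a
# majority-class baseline ("guesses training majority").
import numpy as np
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     train_test_split)
from sklearn.neighbors import KNeighborsClassifier

# Placeholder features; n_samples=66 mirrors only the cohort size.
X, y = make_classification(n_samples=66, n_features=8, n_informative=4,
                           random_state=42)

# KNN with the reported hyperparameters: k=2, uniform weights, Euclidean metric.
knn = KNeighborsClassifier(n_neighbors=2, weights="uniform", metric="euclidean")

# 10-fold cross-validated mean accuracy and AUC, as in the Results paragraph.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
acc = cross_val_score(knn, X, y, cv=cv, scoring="accuracy").mean()
auc = cross_val_score(knn, X, y, cv=cv, scoring="roc_auc").mean()
print(f"mean accuracy={acc:.2f}, mean AUC={auc:.2f}")

def combined_5x2cv_f_test(clf_a, clf_b, X, y, seed=0):
    """Alpaydin's combined 5x2-cv F test: 5 repetitions of 2-fold CV.
    Returns an F statistic distributed as F(10, 5) under the null
    hypothesis that both classifiers have equal error rates."""
    diffs, variances = [], []
    for i in range(5):
        # One 2-fold round: train on one half, test on the other, then swap.
        X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5,
                                          random_state=seed + i, stratify=y)
        d = []
        for Xtr, ytr, Xte, yte in ((X1, y1, X2, y2), (X2, y2, X1, y1)):
            err_a = 1 - clf_a.fit(Xtr, ytr).score(Xte, yte)
            err_b = 1 - clf_b.fit(Xtr, ytr).score(Xte, yte)
            d.append(err_a - err_b)  # per-fold error difference
        mean_d = (d[0] + d[1]) / 2
        variances.append((d[0] - mean_d) ** 2 + (d[1] - mean_d) ** 2)
        diffs.extend(d)
    # Small epsilon guards against a degenerate zero-variance denominator.
    return sum(di ** 2 for di in diffs) / max(2 * sum(variances), 1e-12)

baseline = DummyClassifier(strategy="most_frequent")  # training-majority guess
f_stat = combined_5x2cv_f_test(knn, baseline, X, y)
print(f"combined 5x2-cv F statistic vs majority baseline: {f_stat:.2f}")
```

On real data the F statistic would be compared against the F(10, 5) distribution to obtain a P value, which is how the reported P=.009 and P=.003 would arise; the numbers printed here come from synthetic data and carry no clinical meaning.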