Author affiliations: Boston University, Department of Biostatistics, 801 Massachusetts Ave, Boston, MA 02118, USA; University of South Carolina, Department of Statistics, Columbia, SC 29208, USA
Publication: Biometrical Journal
Year/Volume/Issue: 2020, Vol. 62, No. 7
Pages: 1687-1701
Subject classification: 0710 [Science - Biology]; 07 [Science]; 09 [Agriculture]; 0714 [Science - Statistics (degrees may be conferred in science or economics)]
Funding: United States National Institutes of Health [R01-CA172463, R01-CA226805]; National Cancer Institute-funded Breast Cancer Surveillance Consortium (BCSC) [HHSN261201100031C, P01CA154292]; Division of Cancer Prevention, National Cancer Institute [R01-CA172463, R01-CA226805]
Keywords: association; generalized linear mixed model; model-based kappa; ordinal classifications; screening test
Abstract: Variability between raters' ordinal scores is commonly observed in imaging tests, leading to uncertainty in the diagnostic process. In breast cancer screening, a radiologist visually interprets mammograms and MRIs, while skin diseases, Alzheimer's disease, and psychiatric conditions are graded based on clinical judgment. Consequently, studies are often conducted in clinical settings to investigate whether a new training tool can improve the interpretive performance of raters. In such studies, a large group of experts each classify a set of patients' test results on two separate occasions, before and after some form of training, with the goal of assessing the impact of training on experts' paired ratings. However, due to the correlated nature of the ordinal ratings, few statistical approaches are available to measure association between raters' paired scores. Existing measures are restricted to assessing association at just one time point for a single screening test. We propose here a novel paired kappa to provide a summary measure of association between many raters' paired ordinal assessments of patients' test results before versus after rater training. Intrarater association also provides valuable insight into the consistency of ratings when raters view a patient's test results on two occasions with no intervention undertaken between viewings. In contrast to existing correlated measures, the proposed kappa provides an overall evaluation of the association among multiple raters' scores from two time points and is robust to the underlying disease prevalence. We implement our proposed approach in two recent breast-imaging studies and conduct extensive simulation studies to evaluate the properties and performance of our summary measure of association.
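The abstract does not give the formula for the proposed paired kappa. As a point of reference for the "existing measures" it contrasts against, the sketch below computes a conventional quadratic-weighted Cohen's kappa for a single rater's paired ordinal scores at two occasions. This is not the authors' method, and the data and function name are hypothetical; it only illustrates the kind of single-time-point, single-rater agreement statistic that the proposed measure generalizes.

```python
# Minimal sketch (not the paper's proposed paired kappa): conventional
# quadratic-weighted Cohen's kappa between one rater's ordinal scores on the
# same patients before and after training.  Labels are assumed to be 1..K;
# the example data are hypothetical.
import numpy as np

def weighted_kappa(before, after, n_categories):
    """Quadratic-weighted Cohen's kappa for two paired ordinal ratings."""
    before = np.asarray(before) - 1          # shift labels to 0..K-1
    after = np.asarray(after) - 1
    K = n_categories

    # Joint distribution of (before, after) scores
    joint = np.zeros((K, K))
    for b, a in zip(before, after):
        joint[b, a] += 1
    joint /= joint.sum()

    # Quadratic agreement weights: 1 on the diagonal, decreasing with distance
    idx = np.arange(K)
    weights = 1.0 - (idx[:, None] - idx[None, :]) ** 2 / (K - 1) ** 2

    # Chance-expected joint distribution from the marginals
    expected = np.outer(joint.sum(axis=1), joint.sum(axis=0))

    p_obs = (weights * joint).sum()
    p_exp = (weights * expected).sum()
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical 5-category scores from one rater on 10 images,
# read before and after a training intervention.
before = [1, 2, 3, 4, 5, 2, 3, 3, 4, 1]
after  = [1, 2, 4, 4, 5, 2, 3, 4, 4, 2]
print(round(weighted_kappa(before, after, n_categories=5), 3))
```

Unlike this per-rater statistic, the measure described in the abstract summarizes association across many raters and two time points simultaneously and is model-based (built on a generalized linear mixed model), which is what makes it robust to the underlying disease prevalence.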