Author Affiliations: Univ Calif Merced, Dept Comp Sci & Engn, Merced, CA 95343, USA; Nanyang Technol Univ, Sch Comp Engn, Singapore 639798, Singapore
Publication: IEEE TRANSACTIONS ON MOBILE COMPUTING
Year/Volume/Issue: 2021, Vol. 20, No. 7
Pages: 2505-2517
Subject Classification: 0810 [Engineering - Information and Communication Engineering]; 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]
Funding: Singapore MOE AcRF [2018-T1-002-081, MOE2016-T2-2-023]; COE grant from NTU
Keywords: Learning (Artificial Intelligence); Mobile Radio; Radio Direction Finding; Telecommunication Computing; UniLoc; Unified Mobile Localization Framework; Mobile Devices; Ensemble Learning Algorithm; Localization Error; Schemes; Error Prediction; Modeling; Sensors; Wireless Fidelity; Global Positioning System; Smart Phones; Mobile Computing; Real Time Systems; Prediction Algorithms
Abstract: Current localization schemes on mobile devices exhibit great diversity, mainly in two aspects: the large number of available localization schemes and their widely varying performance. This paper presents UniLoc, a unified framework that gains improved performance from multiple localization schemes by exploiting their diversity. UniLoc predicts the localization error of each scheme online based on an error model and real-time context. It then combines the results of all available schemes based on the error predictions and an ensemble learning algorithm. The combined result is more accurate than that of any individual scheme. Thanks to the flexible design of its error modeling and ensemble learning, UniLoc can easily integrate a new localization scheme. The energy consumption of UniLoc is low, since its computation, including both error prediction and ensemble learning, involves only simple linear calculation. Our extensive experiments show that this aggregation incurs little overhead in integrating and training a localization scheme, yet gains substantially from scheme diversity. UniLoc outperforms individual localization schemes by 1.6x in a variety of environments, including 89% new places where we did not train the error models.
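The abstract notes that UniLoc's combination step involves only simple linear calculation over per-scheme error predictions. The paper's exact fusion rule is not given in this record; as a minimal sketch, one common linear scheme is inverse-error (inverse-variance) weighting, where each scheme's position estimate is weighted by the reciprocal of its predicted squared error. The function name and interface below are hypothetical.

```python
import numpy as np

def fuse_positions(estimates, predicted_errors):
    """Linearly combine position estimates from several localization schemes.

    Each scheme is weighted inversely to its predicted squared error
    (inverse-variance weighting), so schemes predicted to be more
    accurate dominate the fused result.

    estimates: list of (x, y) positions, one per scheme.
    predicted_errors: list of predicted localization errors (meters).
    """
    pts = np.asarray(estimates, dtype=float)          # shape (n, 2)
    errs = np.asarray(predicted_errors, dtype=float)  # shape (n,)
    weights = 1.0 / np.maximum(errs, 1e-6) ** 2       # guard against zero error
    weights /= weights.sum()                          # normalize to sum to 1
    return tuple(weights @ pts)                       # weighted average position

# Example: a scheme with a large predicted error (e.g., indoor GPS, 8 m)
# fused with a more reliable one (e.g., Wi-Fi fingerprinting, 2 m).
fused = fuse_positions([(10.0, 20.0), (12.0, 21.0)], [8.0, 2.0])
```

In this example the fused position lands much closer to the low-error scheme's estimate, illustrating how online error prediction lets a linear combiner favor whichever scheme is currently reliable.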