
Fast class-wise updating for online hashing

Authors: Lin, Mingbao; Ji, Rongrong; Sun, Xiaoshuai; Zhang, Baochang; Huang, Feiyue; Tian, Yonghong; Tao, Dacheng

Affiliations: Media Analytics and Computing Laboratory, Department of Artificial Intelligence, School of Informatics, Xiamen University, 361005, China; Institute of Artificial Intelligence, Xiamen University, China; Beihang University, China; Youtu Laboratory, Tencent, Shanghai 200233, China; Department of Computer Science and Technology, Peking University, Beijing 100871, China; School of Computer Science, Faculty of Engineering, University of Sydney, 6 Cleveland St, Darlington NSW 2008, Australia

Publication: arXiv

Year: 2020

Subject: Binary codes

Abstract: Online image hashing has received increasing research attention recently; it processes large-scale data in a streaming fashion to update the hash functions on-the-fly. To this end, most existing works exploit this problem under a supervised setting, i.e., using class labels to boost the hashing performance, which suffers from defects in both adaptivity and efficiency: First, large numbers of training batches are required to learn up-to-date hash functions, which leads to poor online adaptivity. Second, the training is time-consuming, which contradicts the core need of online learning. In this paper, a novel supervised online hashing scheme, termed Fast Class-wise Updating for Online Hashing (FCOH), is proposed to address the above two challenges by introducing a novel and efficient inner product operation. To achieve fast online adaptivity, a class-wise updating method is developed to decompose the binary code learning and alternately renew the hash functions in a class-wise fashion, which removes the burden of requiring large numbers of training batches. Quantitatively, such a decomposition further leads to at least 75% storage savings. To further achieve online efficiency, we propose a semi-relaxation optimization, which accelerates the online training by treating different binary constraints independently. Without additional constraints and variables, the time complexity is significantly reduced. Such a scheme is also quantitatively shown to well preserve past information while updating the hashing functions. We have quantitatively demonstrated that the combined effect of class-wise updating and semi-relaxation optimization yields superior performance compared to various state-of-the-art methods, which is verified through extensive experiments on three widely-used datasets. © 2020, CC BY.
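
To make the class-wise idea in the abstract concrete, below is a minimal, illustrative Python sketch of online hashing where a linear hash projection is refined one class at a time on each streaming batch. It is not the FCOH algorithm itself: the objective, the per-class target codes, the learning-rate/regularization values, and the relaxed least-squares update are all assumptions made for illustration; the actual inner-product formulation and semi-relaxation optimization are defined in the paper.

```python
import numpy as np

# Illustrative sketch of class-wise online hashing (NOT the exact FCOH method).
# Assumed setup: r-bit codes from a linear projection W, one target binary code
# per class, and a relaxed (real-valued) least-squares update per class.

rng = np.random.default_rng(0)
d, r, num_classes = 64, 32, 10          # feature dim, code length, number of classes
W = rng.standard_normal((d, r)) * 0.01  # hash projection, refined online
targets = np.sign(rng.standard_normal((num_classes, r)))  # per-class target codes (assumption)

def hash_codes(X, W):
    """Map features to binary codes in {-1, +1}."""
    return np.sign(X @ W)

def update_class_wise(W, X_batch, y_batch, lr=0.1, lam=1e-3):
    """Renew the hash projection one class at a time on a streaming batch.

    For each class present in the batch, pull its samples toward that class's
    target binary code with a ridge-style gradient step, so the update is
    decomposed class by class instead of solved jointly over the whole batch.
    """
    for c in np.unique(y_batch):
        Xc = X_batch[y_batch == c]
        Bc = np.tile(targets[c], (len(Xc), 1))            # desired codes for class c
        grad = Xc.T @ (Xc @ W - Bc) / len(Xc) + lam * W   # relaxed least-squares gradient
        W = W - lr * grad
    return W

# Streaming usage: each incoming batch updates W class by class.
for _ in range(5):
    X = rng.standard_normal((128, d))
    y = rng.integers(0, num_classes, size=128)
    W = update_class_wise(W, X, y)

codes = hash_codes(rng.standard_normal((3, d)), W)
print(codes.shape)  # (3, 32)
```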
