Author affiliations: Institute of Artificial Intelligence, Xiamen University; Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University; Tencent Youtu Lab; Department of Artificial Intelligence, School of Informatics, Xiamen University; Peng Cheng Laboratory
Publication: Science China (Information Sciences)
Year/Volume/Issue: 2025, Vol. 68, No. 3
Pages: 163-180
Core indexing:
Subject classification: 12 [Management]; 1201 [Management - Management Science and Engineering (management or engineering degree)]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 080203 [Engineering - Mechanical Design and Theory]; 0835 [Engineering - Software Engineering]; 0802 [Engineering - Mechanical Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (engineering or science degree)]
Funding: supported by the National Key R&D Program of China (Grant No. 2022ZD0118202); the National Science Fund for Distinguished Young Scholars (Grant No. 62025603); the National Natural Science Foundation of China (Grant Nos. U21B2037, U22B2051, 62176222, 62176223, 62176226, 62072386, 62072387, 62072389, 62002305, 62272401); and the Natural Science Foundation of Fujian Province of China (Grant Nos. 2021J01002, 2022J06001)
Keywords: super-resolution; post-training quantization; distribution-flexible subset quantization; neural network
Abstract: This paper introduces distribution-flexible subset quantization (DFSQ), a post-training quantization method for super-resolution networks. Our motivation for developing DFSQ is based on the distinctive activation distributions of current super-resolution models, which exhibit significant variance across samples and channels. To address this issue, DFSQ conducts channel-wise normalization of the activations and applies distribution-flexible subset quantization (SQ), wherein the quantization points are selected from a universal set consisting of multi-word additive log-scale values. To expedite the selection of quantization points in SQ, we propose a fast quantization-point selection strategy that uses K-means clustering to select the quantization points closest to the centroids. Compared to the common iterative exhaustive search algorithm, our strategy avoids the enumeration of all possible combinations in the universal set, reducing the time complexity from exponential to linear. Consequently, the constraint of time costs on the size of the universal set is greatly relaxed. Extensive evaluations of various super-resolution models show that DFSQ effectively improves performance even without fine-tuning. For example, for 4-bit EDSR x2 on the Urban benchmark, DFSQ obtains 0.242 dB PSNR gains.
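The pipeline described in the abstract (channel-wise normalization, a universal set of multi-word additive log-scale values, and K-means-based quantization-point selection) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the universal-set construction (`num_words`, `num_levels`), the mean/std normalization, and the simple 1-D K-means below are all assumptions made for the sketch.

```python
import numpy as np

def universal_set(num_words=2, num_levels=4):
    # Hypothetical universal set: all sums of `num_words` log-scale
    # (power-of-two) terms, each drawn from {0, 1, 1/2, 1/4, ...}.
    base = np.array([0.0] + [2.0 ** -i for i in range(num_levels)])
    pts = base.copy()
    for _ in range(num_words - 1):
        pts = np.unique((pts[:, None] + base[None, :]).ravel())
    return pts

def kmeans_1d(x, k, iters=20, seed=0):
    # Plain 1-D K-means; returns sorted centroids.
    rng = np.random.default_rng(seed)
    centroids = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        assign = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = x[assign == j]
            if members.size:
                centroids[j] = members.mean()
    return np.sort(centroids)

def dfsq_quantize(act, bits=4):
    # act: (channels, n) activations. Channel-wise normalize, then pick
    # 2**bits quantization points per channel by snapping each K-means
    # centroid to its nearest universal-set point (linear-time selection,
    # instead of exhaustively searching all subsets of the universal set).
    mean = act.mean(axis=1, keepdims=True)
    std = act.std(axis=1, keepdims=True) + 1e-8
    norm = (act - mean) / std
    uset = universal_set()
    uset = np.concatenate([-uset[::-1], uset])  # symmetric around zero
    q = np.empty_like(norm)
    for c in range(norm.shape[0]):
        cent = kmeans_1d(norm[c], 2 ** bits)
        qpts = uset[np.argmin(np.abs(cent[:, None] - uset[None, :]), axis=1)]
        q[c] = qpts[np.argmin(np.abs(norm[c][:, None] - qpts[None, :]), axis=1)]
    return q * std + mean  # de-normalize back to the original scale
```

The key point of the abstract's complexity claim is visible in `dfsq_quantize`: each centroid is matched to the universal set independently (one nearest-neighbor lookup per centroid), so cost grows linearly with the set size rather than exponentially with the number of point combinations.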