Author affiliation: Zhengzhou Univ, Sch Cyber Sci & Engn, Zhengzhou, Peoples R China
Publication: JOURNAL OF ELECTRONIC IMAGING (J. Electron. Imaging)
Year/Volume/Issue: 2025, Vol. 34, No. 1
Core indexing:
Subject classification: 0808 [Engineering - Electrical Engineering]; 1002 [Medicine - Clinical Medicine]; 0809 [Engineering - Electronic Science and Technology (engineering or science degrees may be conferred)]; 08 [Engineering]; 0702 [Science - Physics]
Keywords: model quantization; post-training quantization; image classification; Hessian matrix; vision transformer
Abstract: In recent years, vision transformers (ViTs) have made significant breakthroughs in computer vision and have demonstrated great potential in large-scale models. However, quantization methods designed for convolutional neural networks do not transfer well to ViTs and cause a significant drop in accuracy when applied to them. We extend the Hessian-matrix-based quantization parameter optimization method and apply it to the quantization of the LayerNorm module in ViT models. This approach reduces the impact of quantization on task accuracy for the LayerNorm module and enables more comprehensive quantization of ViT models. To achieve fast quantization of ViTs, we propose a quantization framework designed specifically for ViT models: Hessian-matrix-aware post-training quantization for vision transformers (HAPTQ). Experimental results on various models and datasets demonstrate that, after quantizing the LayerNorm module of various ViT models, HAPTQ achieves lossless quantization (an accuracy drop of less than 1%) on ImageNet classification tasks. In particular, HAPTQ reaches 85.81% top-1 accuracy on the ViT-L model.
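To make the abstract's core idea concrete, the following is a minimal sketch of Hessian-aware quantization parameter selection: choosing a quantization scale that minimizes a Hessian-weighted reconstruction error of a layer's output (a diagonal second-order approximation of the task-loss change), rather than a plain max-based scale. This is an illustration of the general technique only, not the paper's HAPTQ implementation; the function name hessian_weighted_scale_search, its arguments, and the Fisher-style diagonal estimate are assumptions introduced here.

```python
import torch

def quantize(x, scale, n_bits=8):
    """Uniform symmetric quantize-dequantize with a given scale."""
    qmax = 2 ** (n_bits - 1) - 1
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q * scale

def hessian_weighted_scale_search(x, hess_diag, n_bits=8, num_candidates=100):
    """Pick the scale minimizing sum_i H_ii * (x_i - Q(x_i))^2.

    x          : full-precision tensor to quantize (e.g. LayerNorm output
                 collected on calibration data).
    hess_diag  : per-element Hessian-diagonal estimate of the task loss
                 w.r.t. x (e.g. squared gradients, a Fisher approximation),
                 same shape as x.
    """
    qmax = 2 ** (n_bits - 1) - 1
    base_scale = x.abs().max() / qmax
    best_scale, best_err = base_scale, float("inf")
    # Sweep clipping ratios and keep the scale with the lowest
    # Hessian-weighted quantization error.
    for ratio in torch.linspace(0.5, 1.0, num_candidates):
        scale = base_scale * ratio
        err = (hess_diag * (x - quantize(x, scale, n_bits)) ** 2).sum().item()
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale

# Hypothetical usage: calibrate a scale for a ViT LayerNorm output using a
# stand-in Hessian-diagonal estimate (in practice, squared gradients from a
# small calibration set would be accumulated here).
ln_out = torch.randn(32, 197, 768)
hess_diag = torch.randn_like(ln_out) ** 2
scale = hessian_weighted_scale_search(ln_out, hess_diag)
```

The Hessian weighting lets the search penalize quantization error most heavily on the elements to which the task loss is most sensitive, which is the intuition behind applying second-order information to the LayerNorm module described in the abstract.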