Author affiliation: School of Computer Science and Engineering, KLE Technological University, Hubballi, Karnataka 580031
Publication: Procedia Computer Science
Year/Volume: 2025, Vol. 260
Pages: 1000-1008
Keywords: Quantization Techniques; Post-Training Quantization (PTQ); Quantization-Aware Training (QAT); MobileNetV3Large; Deep Learning
Abstract: In recent years, advances in deep learning have had a major impact on fields such as computer vision and image classification, and convolutional neural networks (CNNs) have played an important role in visual recognition tasks such as yoga pose classification. However, such models are difficult to deploy on resource-constrained edge devices because of their heavy computational and memory requirements. Quantization methods offer a solution by reducing model size and increasing inference speed while largely preserving accuracy. In this work, we investigate how to apply quantization techniques to the MobileNetV3Large model for yoga pose classification. The main goals are to apply post-training quantization (PTQ) and quantization-aware training (QAT) to optimize the model, to compare the performance of the PTQ and QAT models, and to deploy the best model to the edge. Kaggle's yoga poses dataset, containing 3700 images of 43 poses, was pre-processed and used to fine-tune the MobileNetV3Large model. With PTQ, the model size was reduced from 12.5 MB to 3.43 MB. In comparison, the QAT model achieved a higher accuracy of 84.71% with a model size of 11.2 MB. These results demonstrate the effectiveness of quantization in optimizing the MobileNetV3Large model for yoga pose classification and in simplifying deployment to edge devices. Future research should focus on further refinement and experimental validation of these models to support the development of interactive, easy-to-use yoga practice tools for mobile and embedded platforms.
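The abstract does not name the framework used, so the following is only a minimal sketch of the described PTQ and QAT workflow, assuming TensorFlow/Keras with the TFLite converter for PTQ and the TensorFlow Model Optimization toolkit for QAT. The file names ("mobilenetv3large_yoga.h5", "yoga_ptq.tflite") and datasets (train_ds, val_ds) are hypothetical placeholders, not artifacts from the paper.

```python
# Minimal sketch of PTQ and QAT for a fine-tuned MobileNetV3Large classifier.
# Assumed TensorFlow/Keras workflow; not taken from the paper itself.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Load the fine-tuned 43-class model ("mobilenetv3large_yoga.h5" is hypothetical).
model = tf.keras.models.load_model("mobilenetv3large_yoga.h5")

# --- Post-training quantization (PTQ) ---
# Converts the trained float model to a quantized TFLite model with no retraining;
# this is the kind of step that would shrink a ~12.5 MB model toward ~3.43 MB.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
ptq_tflite = converter.convert()
with open("yoga_ptq.tflite", "wb") as f:
    f.write(ptq_tflite)

# --- Quantization-aware training (QAT) ---
# Wraps the model with fake-quantization ops, then fine-tunes so the weights
# adapt to quantization error. Note: some MobileNetV3 layers (e.g., hard-swish
# activations) may require custom annotation before quantize_model succeeds.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# qat_model.fit(train_ds, validation_data=val_ds, epochs=5)  # placeholder datasets

# The QAT model can then be converted to TFLite the same way as the PTQ model.
```

In this sketch, PTQ quantizes an already-trained model in one conversion step, while QAT inserts quantization simulation into training so the network learns to compensate, which matches the paper's reported trade-off of a smaller PTQ model versus a more accurate QAT model.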