
Memorization Capacity for Additive Fine-Tuning with Small ReLU Networks

Authors: Sohn, Jy-Yong; Kwon, Dohyun; An, Seoyeon; Lee, Kangwook

Author Affiliations: Department of Statistics and Data Science, Yonsei University, Republic of Korea; Department of Mathematics, University of Seoul, Republic of Korea; Center for AI and Natural Sciences, Korea Institute for Advanced Study, Republic of Korea; Department of Electrical and Computer Engineering, University of Wisconsin-Madison, WI, United States

Published in: arXiv

Year: 2024


Subject: Neurons

Abstract: Fine-tuning large pre-trained models is a common practice in machine learning applications, yet its mathematical analysis remains largely unexplored. In this paper, we study fine-tuning through the lens of memorization capacity. Our new measure, the Fine-Tuning Capacity (FTC), is defined as the maximum number of samples a neural network can fine-tune, or equivalently, as the minimum number of neurons (m) needed to arbitrarily change N labels among K samples considered in the fine-tuning process. In essence, FTC extends the memorization capacity concept to the fine-tuning scenario. We analyze FTC for the additive fine-tuning scenario where the fine-tuned network is defined as the summation of the frozen pre-trained network f and a neural network g (with m neurons) designed for fine-tuning. When g is a ReLU network with either 2 or 3 layers, we obtain tight upper and lower bounds on FTC; we show that N samples can be fine-tuned with m = Θ(N) neurons for 2-layer networks, and with m = Θ(√N) neurons for 3-layer networks, no matter how large K is. Our results recover the known memorization capacity results when N = K as a special case. Copyright © 2024, The Authors. All rights reserved.
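
The additive fine-tuning setup described in the abstract (the fine-tuned model is the frozen pre-trained network f plus a small trainable 2-layer ReLU network g with m hidden neurons) can be sketched as follows. This is a minimal illustration only; the class name, weight shapes, random initialization, and the stand-in f are assumptions for the sketch, not the authors' construction.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class AdditiveFineTuner:
    """Sketch of additive fine-tuning: output = f(x) + g(x), with f frozen."""

    def __init__(self, f, input_dim, m, seed=0):
        # f: frozen pre-trained network, treated as a black-box callable.
        # m: number of hidden ReLU neurons in the trainable add-on network g.
        self.f = f
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(m, input_dim))  # first-layer weights of g
        self.b = rng.normal(size=m)               # first-layer biases of g
        self.v = np.zeros(m)                      # output weights of g (trainable)

    def g(self, x):
        # 2-layer ReLU network: g(x) = v^T relu(W x + b)
        return self.v @ relu(self.W @ x + self.b)

    def predict(self, x):
        # Only g is adjusted during fine-tuning; f is never modified.
        return self.f(x) + self.g(x)

# Usage: until g is trained, the model output equals the frozen f(x).
f = lambda x: float(np.sum(x))          # hypothetical stand-in for a pre-trained model
model = AdditiveFineTuner(f, input_dim=4, m=8)
print(model.predict(np.ones(4)))
```

The paper's bounds concern how large m must be for such a g to change N labels among K fine-tuning samples: Θ(N) neurons suffice (and are needed) in the 2-layer case, and Θ(√N) in the 3-layer case.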
