SMR: State Memory Replay for Long Sequence Modeling

Authors: Qi, Biqing; Gao, Junqi; Zhang, Kaiyan; Li, Dong; Liu, Jianxing; Wu, Ligang; Zhou, Bowen

Affiliations: Department of Control Science and Engineering, Harbin Institute of Technology, China; Department of Electronic Engineering, Tsinghua University, China; School of Mathematics, Harbin Institute of Technology, China; Frontis.AI, Beijing, China

Publication: arXiv

Year: 2024

Subject: State space methods

Abstract: Despite the promising performance of state space models (SSMs) in long sequence modeling, limitations still exist. Although advanced SSMs like S5 and S6 (Mamba) address non-uniform sampling, their recursive structures impede efficient SSM computation via convolution. To overcome compatibility limitations in parallel convolutional computation, this paper proposes a novel non-recursive non-uniform sample processing strategy. Theoretical analysis of SSMs through the lens of Event-Triggered Control (ETC) theory reveals the Non-Stable State (NSS) problem, where deviations from sampling point requirements lead to error transmission and accumulation, causing the divergence of the SSM's hidden state. The analysis further shows that adjusting input sequences with early memories can mitigate the NSS problem, achieving Sampling Step Adaptation (SSA). Building on this insight, the paper introduces a simple yet effective plug-and-play mechanism, State Memory Replay (SMR), which utilizes learnable memories to adjust the current state with multi-step information, enabling generalization to sampling points different from those in the training data. This allows SSMs to stably model varying sampling points. Experiments on long-range modeling tasks in autoregressive language modeling and the Long Range Arena demonstrate the general effectiveness of the SMR mechanism across a series of SSM models. © 2024, CC BY.
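The abstract describes SMR only at a high level: a plug-and-play module that uses learnable memories to adjust the current state with multi-step (causal) information before it enters an SSM block. The sketch below is a minimal illustration of that idea, assuming a PyTorch-style implementation; the module name, the memory window size n_mem, and the additive zero-initialized mixing are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class StateMemoryReplay(nn.Module):
    """Illustrative sketch of the SMR idea from the abstract: replay a
    learnable mix of the previous n_mem steps into the current state.
    Structure and hyperparameters are assumptions, not the paper's code."""

    def __init__(self, d_model: int, n_mem: int = 4):
        super().__init__()
        # Learnable per-step, per-channel memory weights over a short
        # causal window of past states. Zero init => identity at start.
        self.mem_weights = nn.Parameter(torch.zeros(n_mem, d_model))
        self.n_mem = n_mem

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length, d_model), the sequence feeding an SSM block.
        B, L, D = x.shape
        # Zero-pad on the left so every position has n_mem predecessors.
        pad = torch.zeros(B, self.n_mem, D, device=x.device, dtype=x.dtype)
        xp = torch.cat([pad, x], dim=1)  # (B, L + n_mem, D)
        # mem[:, k] holds the state at offset -(k + 1) for each position.
        mem = torch.stack(
            [xp[:, self.n_mem - 1 - k : self.n_mem - 1 - k + L]
             for k in range(self.n_mem)],
            dim=1,
        )  # (B, n_mem, L, D)
        # Weighted replay of early memories added onto the current state.
        replay = (self.mem_weights[None, :, None, :] * mem).sum(dim=1)
        return x + replay
```

Because the memory weights start at zero, the module is an identity map at initialization and can be inserted in front of an existing SSM layer without perturbing its initial behavior; the replay window is strictly causal, using only earlier positions, which matches the abstract's framing of adjusting the current state with early memories.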
