Author affiliations: Stevens Institute of Technology, United States; Beijing Normal University, China; The University of Hong Kong, Hong Kong; University of Wisconsin-Madison, United States; The Simons Institute for the Theory of Computing, UC Berkeley, United States
Publication: arXiv
Year/Volume/Issue: 2024
Abstract: Modern Hopfield networks (MHNs) have emerged as powerful tools in deep learning, capable of replacing components such as pooling layers, LSTMs, and attention mechanisms. Recent advances have improved their storage capacity, retrieval speed, and error rates; however, the fundamental limits of their computational expressiveness remain unexplored. Understanding the expressive power of MHNs is crucial for optimizing their integration into deep learning architectures. In this work, we establish rigorous theoretical bounds on the computational capabilities of MHNs using circuit complexity theory. Our key contribution is to show that MHNs are in DLOGTIME-uniform TC0. Hence, unless TC0 = NC1, poly(n)-precision modern Hopfield networks with a constant number of layers and O(n) hidden dimension cannot solve NC1-hard problems such as the undirected graph connectivity problem and the tree isomorphism problem. We also extend our results to kernelized Hopfield networks. These results demonstrate the limits of the expressive power of modern Hopfield networks. Moreover, our theoretical analysis provides insights to guide the development of new Hopfield-based architectures. © 2024, CC BY.
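For context on the model the abstract analyzes: the standard modern Hopfield network retrieves a stored pattern via a softmax-weighted update over the stored patterns (the attention-like rule of Ramsauer et al.), which is the attention-replacing mechanism mentioned above. Below is a minimal NumPy sketch of that retrieval step; the function names (`softmax`, `mhn_retrieve`) and the inverse-temperature parameter name `beta` are illustrative choices, not identifiers from this paper.

```python
import numpy as np

def softmax(v):
    # Numerically stable softmax over a 1-D vector.
    e = np.exp(v - v.max())
    return e / e.sum()

def mhn_retrieve(Xi, xi, beta=1.0, steps=1):
    """One (or more) modern-Hopfield retrieval updates.

    Xi   : (M, d) matrix of M stored patterns.
    xi   : (d,) query/state vector.
    beta : inverse temperature; larger beta sharpens retrieval.
    """
    for _ in range(steps):
        # Update rule: xi <- Xi^T softmax(beta * Xi xi).
        xi = Xi.T @ softmax(beta * (Xi @ xi))
    return xi

# A query near the first stored pattern converges toward it.
stored = np.array([[2.0, 0.0], [0.0, 2.0]])
query = np.array([1.5, 0.1])
print(mhn_retrieve(stored, query, beta=2.0))
```

With sufficiently large `beta`, a single update already lands very close to the nearest stored pattern, which is the constant-layer regime whose circuit-complexity upper bound (DLOGTIME-uniform TC0) the paper establishes.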