We derive and discuss various generalizations of neural PCA (Principal Component Analysis)-type learning algorithms containing nonlinearities, using an optimization-based approach. Standard PCA arises as an optimal solution to several different information representation problems. We argue that this is essentially due to the fact that the solution is based on second-order statistics only. If the respective optimization problems are generalized to nonquadratic criteria, so that higher-order statistics are taken into account, their solutions will in general be different. These solutions define in a natural way several meaningful extensions of PCA and give a solid foundation for them. In this framework, we study more closely generalizations of the problems of variance maximization and mean-square error minimization. For these problems, we derive gradient-type neural learning algorithms for both symmetric and hierarchic PCA-type networks. As an important special case, Sanger's well-known Generalized Hebbian Algorithm (GHA) is shown to emerge from natural optimization problems.
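For reference (a sketch of our own, not code from the paper), the GHA update mentioned above is commonly written as W ← W + η (y x^T − LT[y y^T] W), with y = W x and LT[·] the lower-triangular part; the learning rate and toy data below are purely illustrative:

import numpy as np

def gha_step(W, x, lr=0.001):
    # One step of Sanger's rule: W <- W + lr * (y x^T - LT[y y^T] W)
    y = W @ x                          # linear outputs, one per neuron
    lt = np.tril(np.outer(y, y))       # lower-triangular part enforces the hierarchy
    return W + lr * (np.outer(y, x) - lt @ W)

# Toy usage: rows of W move towards the dominant eigenvectors of the input covariance.
rng = np.random.default_rng(0)
L = np.linalg.cholesky(np.diag([4.0, 2.0, 1.0, 0.5, 0.25]))  # illustrative covariance factor
W = 0.1 * rng.standard_normal((3, 5))
for _ in range(20000):
    W = gha_step(W, L @ rng.standard_normal(5))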
Recent work [1, 2] in artificial neural networks suggests mechanisms and optimization strategies that explain the formation of receptive fields, neurons sensitive to particular features, and their organization in mammali...
Many neural network realizations have recently been proposed for the statistical technique of Principal Component Analysis (PCA). Explicit connections between numerical constrained adaptive algorithms and neural networks with constrained Hebbian learning rules are reviewed. The Stochastic Gradient Ascent (SGA) neural network is proposed and shown to be closely related to the Generalized Hebbian Algorithm (GHA). The SGA behaves better when extracting the less dominant eigenvectors. The SGA algorithm is further extended to the case of learning minor components. The symmetrical Subspace Network is known to give a rotated basis of the dominant eigenvector subspace, but usually not the true eigenvectors themselves. Two extensions are proposed: in the first, each neuron has a scalar parameter that breaks the symmetry, so that the true eigenvectors are obtained with a local and fully parallel learning rule. In the second, an arbitrary number of parallel neurons is allowed, not necessarily fewer than the input vector dimension.
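As a point of comparison (again a sketch of our own, assuming the usual per-neuron form of the rule rather than code from the paper), SGA is typically written as w_j ← w_j + γ y_j (x − y_j w_j − 2 Σ_{i<j} y_i w_i); it differs from GHA only in the factor 2 weighting the feedback from the more dominant neurons:

import numpy as np

def sga_step(W, x, lr=0.001):
    # Stochastic Gradient Ascent update: earlier (more dominant) neurons feed back
    # with weight 2, which is what distinguishes SGA from GHA.
    y = W @ x
    W_new = W.copy()
    for j in range(W.shape[0]):
        feedback = y[j] * W[j] + 2.0 * (y[:j] @ W[:j])  # y_j w_j + 2 * sum_{i<j} y_i w_i
        W_new[j] = W[j] + lr * y[j] * (x - feedback)
    return W_new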