ISBN (print): 9783030638221; 9783030638238
The standard method of generating random weights and biases in feedforward neural networks with random hidden nodes draws both from a uniform distribution over the same fixed interval. In this work, we show the drawbacks of this approach and propose new methods of generating the random parameters. These methods ensure that the most nonlinear fragments of the sigmoids, which are the most useful for modeling the nonlinearity of the target function, are kept inside the input hypercube. A new method that generates sigmoids with uniformly distributed slope angles demonstrated the best performance on illustrative examples.
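The abstract states the idea without the parameter formulas, so the following is only a minimal sketch in Python for a one-dimensional input. It assumes the logistic sigmoid h(x) = 1/(1 + exp(-(ax + b))), whose slope at the inflection point is a/4: slope angles are drawn uniformly, converted to weights via a = 4·tan(α), and biases are chosen so each inflection point lands inside the input interval. The function names, the angle range, and the least-squares readout are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def random_sigmoid_params(m, x_min=0.0, x_max=1.0,
                          alpha_range=(np.pi / 18, 4 * np.pi / 9), rng=None):
    """Draw parameters for m logistic-sigmoid hidden nodes (1-D input).

    Slope angles alpha are uniform on alpha_range; since the sigmoid's slope
    at its inflection point is a/4, the weight is a = 4*tan(alpha). The bias
    places each inflection point at a random x* inside [x_min, x_max], so the
    node's most nonlinear fragment stays in the input interval.
    """
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.uniform(*alpha_range, size=m)      # uniformly distributed slope angles
    sign = rng.choice([-1.0, 1.0], size=m)         # random slope direction
    a = sign * 4.0 * np.tan(alpha)                 # weight from slope angle
    x_star = rng.uniform(x_min, x_max, size=m)     # inflection points in the interval
    b = -a * x_star                                # bias anchoring the inflection point
    return a, b

def hidden_layer(x, a, b):
    """Logistic-sigmoid responses of all hidden nodes for inputs x."""
    return 1.0 / (1.0 + np.exp(-(np.outer(x, a) + b)))

# Usage: random hidden layer + least-squares output weights.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(20.0 * np.exp(x)) * x**2                # an illustrative fluctuating target
a, b = random_sigmoid_params(m=50, rng=np.random.default_rng(0))
H = hidden_layer(x, a, b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ beta
```

Anchoring the inflection points inside the input interval is what keeps the nonlinear fragments of the sigmoids where the data lives; with plain uniform weights and biases, many nodes saturate over the whole input hypercube and contribute only near-constant responses.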
ISBN (print): 9783030879860; 9783030879853
This work contributes to the development of a new data-driven method (D-DM) for learning feedforward neural networks (FNNs). This method was proposed recently as a way of improving randomized learning of FNNs by adjusting the network parameters to the fluctuations of the target function. The method employs logistic sigmoid activation functions for the hidden nodes. In this study, we introduce other activation functions, such as the bipolar sigmoid, the sine function, saturating linear functions, ReLU, and softplus, and derive formulas for their parameters, i.e., weights and biases. In a simulation study, we evaluate the performance of FNN data-driven learning with the different activation functions. The results indicate that the sigmoid activation functions perform much better than the others in the approximation of complex, fluctuating target functions.
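The D-DM parameter formulas for each activation are derived in the paper and are not reproduced in this abstract. As a hedged stand-in, the sketch below only compares the activation functions named above under plain random hidden parameters with a least-squares output layer; the target function, the parameter ranges, and the helper names are illustrative assumptions, not the paper's D-DM.

```python
import numpy as np

# The activation functions evaluated in the study (illustrative definitions).
ACTIVATIONS = {
    "logistic": lambda z: 1.0 / (1.0 + np.exp(-z)),
    "bipolar":  lambda z: 2.0 / (1.0 + np.exp(-z)) - 1.0,  # equals tanh(z/2)
    "sine":     np.sin,
    "satlin":   lambda z: np.clip(z, 0.0, 1.0),            # saturating linear
    "relu":     lambda z: np.maximum(z, 0.0),
    "softplus": lambda z: np.log1p(np.exp(z)),
}

def fit_random_fnn(x, y, m, act, rng):
    """Random hidden parameters + least-squares output weights: a simplified
    baseline, not D-DM's data-driven parameter formulas."""
    a = rng.uniform(-20.0, 20.0, size=m)       # assumed weight range
    b = rng.uniform(-20.0, 20.0, size=m)       # assumed bias range
    H = act(np.outer(x, a) + b)                # hidden-layer response matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return H @ beta

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 300)
y = np.sin(20.0 * np.exp(x)) * x**2            # an illustrative fluctuating target
for name, act in ACTIVATIONS.items():
    y_hat = fit_random_fnn(x, y, m=100, act=act, rng=rng)
    print(f"{name:9s} RMSE = {np.sqrt(np.mean((y - y_hat) ** 2)):.4f}")
```

A harness like this makes the abstract's comparison concrete: each activation sees the same target and the same readout, so differences in approximation error reflect how well the activation's nonlinear region covers the input interval.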