ISBN (Print): 9783540859833
Determination of an appropriate neural-network (NN) structure is an important issue for a given learning or training task, since NN performance depends heavily on it. To remedy the weaknesses of conventional BP neural networks and learning algorithms, a new Laguerre orthogonal basis neural network is constructed. Based on this special structure, a weights-direct-determination method is derived that obtains the optimal weights of such a neural network directly (i.e., in a single step). Furthermore, a growing algorithm is presented for immediately determining the smallest number of hidden-layer neurons. Theoretical analysis and simulation results substantiate the efficacy of the Laguerre-orthogonal-basis neural network and its growing algorithm based on the weights-direct-determination method.
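As an illustration of the weights-direct-determination idea described above, the following sketch (not the authors' code) evaluates a Laguerre polynomial basis with the standard three-term recurrence, solves for the output weights in one pseudoinverse step, and grows the hidden layer one neuron at a time until a target training error is met; the function names, tolerance, and toy target function are assumptions made for illustration.

```python
import numpy as np

def laguerre_basis(x, n):
    """Evaluate the first n Laguerre polynomials L_0..L_{n-1} at points x,
    using the recurrence (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}."""
    x = np.asarray(x, dtype=float)
    Phi = np.empty((x.size, n))
    Phi[:, 0] = 1.0
    if n > 1:
        Phi[:, 1] = 1.0 - x
    for k in range(1, n - 1):
        Phi[:, k + 1] = ((2 * k + 1 - x) * Phi[:, k] - k * Phi[:, k - 1]) / (k + 1)
    return Phi

def grow_laguerre_network(x, y, tol=1e-4, max_neurons=50):
    """Grow hidden neurons one at a time; the weights are obtained directly
    by a pseudoinverse (least-squares) solve instead of iterative BP training."""
    for n in range(1, max_neurons + 1):
        Phi = laguerre_basis(x, n)
        w = np.linalg.pinv(Phi) @ y          # weights-direct-determination step
        err = np.mean((Phi @ w - y) ** 2)
        if err < tol:
            return n, w, err
    return max_neurons, w, err

# toy usage on a smooth target function
x = np.linspace(0, 4, 200)
y = np.exp(-x) * np.sin(2 * x)
n, w, err = grow_laguerre_network(x, y)
print(f"hidden neurons: {n}, training MSE: {err:.2e}")
```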
Traditionally, fault diagnostic strategies are used to obtain the optimal test sequence for binary systems. However, many systems are not binary, such as multivalued attribute systems. Traditional algorithms for generating test sequences for binary and multivalued attribute systems first select tests and then identify and isolate the failure states based on the test outcomes. In this study, a novel diagnostic strategy for multivalued attribute systems is introduced. This strategy first chooses failure states and then finds a suitable test set for the selected failure states, which avoids the backtracking required by traditional algorithms. To implement this strategy, three main procedures are presented: (1) the test sequencing problem is simplified to a combination of basic test sets and unnecessary tests, and the sets for fault detection and isolation are defined; (2) an algorithm that generates the optimal test sequence for an individual failure state is proposed; and (3) the priority levels of the failure states are determined according to their probabilities, and a new algorithm that generates the test sequence for all failure states is presented. Because its implementation resembles the growth of branches on a tree, it is termed the growing algorithm. Finally, two cases show how the growing algorithm works, and stochastic simulation experiments validate the universality and stability of the algorithm. The case studies and simulation experiments demonstrate that the growing algorithm is as accurate as the rollout algorithm while requiring a shorter running time. Therefore, the growing algorithm is suitable for multivalued attribute systems and obtains good results with a short running time and high efficiency.
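A minimal sketch of the state-first idea (not the paper's exact algorithm): failure states are processed in order of decreasing prior probability, and for each state a test set is chosen greedily from a small hypothetical multivalued D matrix; the matrix, probabilities, and helper names are made up for illustration.

```python
import numpy as np

# Hypothetical multivalued D matrix: rows = failure states, columns = tests,
# entries = the (possibly non-binary) outcome each test produces for each state.
D = np.array([[0, 1, 2],
              [1, 1, 0],
              [2, 0, 1],
              [0, 2, 1]])
p = np.array([0.4, 0.3, 0.2, 0.1])   # prior probabilities of the failure states

def isolating_tests(state, D):
    """Greedily pick tests whose outcomes separate `state` from all other states."""
    candidates = set(range(D.shape[0])) - {state}
    chosen = []
    while candidates:
        # pick the test that rules out the most remaining candidate states
        best = max(range(D.shape[1]),
                   key=lambda t: sum(D[s, t] != D[state, t] for s in candidates))
        eliminated = {s for s in candidates if D[s, best] != D[state, best]}
        if not eliminated:            # state cannot be isolated with the given tests
            break
        chosen.append(best)
        candidates -= eliminated
    return chosen

# "growing" order: handle failure states by descending prior probability
for state in np.argsort(-p):
    print(f"state {state} (p={p[state]:.1f}): tests {isolating_tests(state, D)}")
```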
Currently, cardiac computed tomography angiography (CTA) is widely applied to coronary artery disease diagnosis. Automatic segmentation of the coronary arteries plays an important role in this diagnosis. In this study, we propose and test a fully automatic coronary artery segmentation method that does not require any human-computer interaction. The proposed method uses a growing strategy and contains three main parts: (1) initial seed detection, which automatically detects the root points of the left and right coronary arteries where the ascending aorta meets the coronary arteries; (2) a growing strategy that searches the neighborhood blocks to decide on the presence of coronary arteries with an improved convolutional neural network; and (3) an iterative termination condition that decides when the growing iteration finishes. The proposed framework is validated using a dataset containing 32 cardiac CTA volumes from different patients for training and testing. Experimental results show that the proposed method obtained a Dice coefficient ranging from 0.70 to 0.83, which indicates that the new method outperforms traditional methods such as level set.
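The growing strategy can be pictured with a rough sketch (not the authors' implementation): a breadth-first loop starts from a seed block, asks a classifier whether each neighbouring block contains vessel, and stops when no new block is accepted. Here a simple intensity threshold stands in for the paper's improved convolutional neural network, and the block size, volume, and function names are assumptions.

```python
import numpy as np
from collections import deque

def grow_from_seed(volume, seed, is_vessel, block=5):
    """Breadth-first growing over blocks: starting from a seed voxel, visit
    6-connected neighbouring blocks and keep those the classifier accepts."""
    mask = np.zeros(volume.shape, dtype=bool)
    visited = set()
    queue = deque([seed])
    offsets = [(block, 0, 0), (-block, 0, 0), (0, block, 0),
               (0, -block, 0), (0, 0, block), (0, 0, -block)]
    while queue:                               # iteration terminates when nothing new grows
        z, y, x = queue.popleft()
        if (z, y, x) in visited:
            continue
        visited.add((z, y, x))
        patch = volume[z:z + block, y:y + block, x:x + block]
        if patch.size == 0 or not is_vessel(patch):
            continue
        mask[z:z + block, y:y + block, x:x + block] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if 0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1] and 0 <= nx < volume.shape[2]:
                queue.append((nz, ny, nx))
    return mask

# stand-in classifier: a simple intensity test in place of the trained CNN
volume = np.random.rand(64, 64, 64)
mask = grow_from_seed(volume, seed=(32, 32, 32),
                      is_vessel=lambda patch: patch.mean() > 0.5)
print("segmented voxels:", int(mask.sum()))
```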
Test sequencing for binary systems is an NP-complete problem. In this study, we introduce a novel algorithm for this problem, defined as a growing algorithm. This algorithm chooses the failure states and then finds a suitable test set for the selected failure states, which avoids the backtracking approach of the traditional algorithms. Three main procedures illustrate the growing algorithm: (1) the test sequencing problem is simplified to a combinatorial problem comprising a basic test set and unnecessary tests; (2) the optimal test sequence generating algorithm (OTSGA) is proposed for an individual failure state; and (3) the priority levels of the failure states are determined based on their prior probabilities. Finally, a circuit system is used to show how the growing algorithm works, and five real-world D matrices are employed to validate the universality and stability of the algorithm. Subsequently, the application scope of the growing algorithm is demonstrated in detail by stochastic simulation experiments. The growing algorithm is suitable for large-scale systems with a sparse D matrix, and it obtains good calculation results with a short running time and high efficiency.
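For intuition about the underlying objective on binary systems, the brute-force sketch below finds the smallest set of tests that detects every failure state and gives each one a unique signature in a small hypothetical D matrix; it is only feasible for tiny matrices, whereas the growing algorithm targets large sparse ones. All data and names are illustrative.

```python
import numpy as np
from itertools import combinations

# Hypothetical binary D matrix: D[i, j] = 1 if test j detects failure state i.
D = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 1]])

def isolates_all(D, tests):
    """True if the chosen tests detect every state (at least one 1 per row)
    and give every failure state a unique signature (fault isolation)."""
    sub = D[:, tests]
    signatures = {tuple(row) for row in sub}
    detected = np.all(sub.any(axis=1))
    return detected and len(signatures) == D.shape[0]

def smallest_isolating_set(D):
    """Exhaustive search over test subsets, smallest first; fine for toy matrices."""
    m = D.shape[1]
    for k in range(1, m + 1):
        for tests in combinations(range(m), k):
            if isolates_all(D, list(tests)):
                return list(tests)
    return None

print("smallest isolating test set:", smallest_isolating_set(D))
```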
In this study, we introduce a new growing neural network algorithm based on wavelet neural networks and call it the growing wavelet neural network (GWNN) method. We apply the proposed scheme to train a wavelet neural network to solve chemotaxis problems with blow-up. These problems are highly nonlinear, time-dependent systems of partial differential equations, and it is challenging to capture the pattern of the solution accurately. The proposed structure relies on partial retraining of the network, which increases its capacity to capture the spiky pattern of the solution. Our neural-network-based algorithm solves the nonlinear chemotaxis problems without linearization or regularization techniques, most of which reduce the accuracy of the model. This mesh-free method can handle a variety of blow-up models with curved boundaries without imposing extra cost. By proving the consistency and stability of the method, we show the convergence of GWNN solutions to analytical solutions of the chemotaxis problem. Several illustrative examples and simulation results demonstrate the correctness of the results and the robust performance of the presented algorithm. Moreover, to illustrate the effectiveness of the GWNN method, we compare it with two other network-based methods.
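A much-simplified sketch of the partial-retraining idea (not the GWNN method itself, which trains on PDE residuals): wavelet neurons with a Mexican-hat activation are added in small batches, and only the new neurons are fitted to the current residual while earlier ones stay frozen. The activation choice, batch size, and toy spiky target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mexican_hat(u):
    """Mexican-hat (Ricker) mother wavelet, a common choice of wavelet activation."""
    return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

def wavelet_features(x, centers, scales):
    return mexican_hat((x[:, None] - centers[None, :]) / scales[None, :])

def grow_wavelet_net(x, y, grow_size=5, tol=1e-4, max_neurons=60):
    """Grow the network in small batches; only the newly added neurons are fitted
    (to the current residual), while earlier neurons stay frozen -- a crude
    stand-in for the partial-retraining idea."""
    centers, scales, weights = np.empty(0), np.empty(0), np.empty(0)
    residual = y.copy()
    while centers.size < max_neurons and np.mean(residual ** 2) > tol:
        c = rng.uniform(x.min(), x.max(), grow_size)
        s = rng.uniform(0.05, 0.5, grow_size)
        Phi_new = wavelet_features(x, c, s)
        w_new, *_ = np.linalg.lstsq(Phi_new, residual, rcond=None)
        centers = np.concatenate([centers, c])
        scales = np.concatenate([scales, s])
        weights = np.concatenate([weights, w_new])
        residual = y - wavelet_features(x, centers, scales) @ weights
    return centers, scales, weights, np.mean(residual ** 2)

# toy target with a sharp, spike-like feature
x = np.linspace(0, 1, 400)
y = np.exp(-((x - 0.5) / 0.02) ** 2) + 0.1 * np.sin(6 * np.pi * x)
*_, mse = grow_wavelet_net(x, y)
print(f"final MSE: {mse:.2e}")
```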
During the last decade, significant research progress has been made in both the theoretical aspects and the applications of deep learning neural networks. Beyond their spectacular applications, optimal architectures of these neural networks may speed up the learning process and yield better generalization results. So far, many growing and pruning algorithms have been proposed to optimize standard feedforward neural network architectures. However, applying both growing and pruning to the same net may lead to a good model for a big data set and hence good selection results. This work proposes a new growing and pruning learning algorithm for deep neural networks. The new algorithm is presented and applied to diverse medical data sets. It is shown that this algorithm outperforms various other artificial intelligence techniques in terms of accuracy and simplicity of the resulting architecture.
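The grow-then-prune idea can be mimicked with a crude architecture-search loop (this is not the proposed algorithm): hidden-layer width is doubled while validation accuracy improves, then halved as long as accuracy stays within a small tolerance. The data set, tolerance, and scikit-learn model are illustrative choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A public medical data set used only as a stand-in example.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def score(width):
    """Validation accuracy of a two-hidden-layer net of the given width."""
    net = MLPClassifier(hidden_layer_sizes=(width, width), max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    return net.score(X_val, y_val)

# growing phase: double the width while validation accuracy keeps improving
width, best = 4, score(4)
while True:
    cand = score(width * 2)
    if cand <= best:
        break
    width, best = width * 2, cand

# pruning phase: accept smaller nets that stay within 1% of the best accuracy
while width > 2 and score(width // 2) >= best - 0.01:
    width //= 2

print(f"selected hidden width: {width}, validation accuracy ~ {score(width):.3f}")
```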
One of the open problems in neural network research is how to automatically determine network architectures for given applications. In this brief, we propose a simple and efficient approach to automatically determine the number of hidden nodes in generalized single-hidden-layer feedforward networks (SLFNs), which need not be neuron-like. This approach, referred to as the error-minimized extreme learning machine (EM-ELM), can add random hidden nodes to SLFNs one by one or group by group (with varying group size). During the growth of the networks, the output weights are updated incrementally. The convergence of this approach is also proved in this brief. Simulation results demonstrate and verify that the new approach is much faster than other sequential/incremental/growing algorithms while achieving good generalization performance.
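A rough sketch of the growing scheme (not the EM-ELM implementation): random hidden nodes are added group by group until a training error target is met. For brevity the output weights are recomputed by a full least-squares solve at each step, whereas EM-ELM updates the generalized inverse incrementally; the group size, tolerance, and toy regression target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_nodes(X, n_new):
    """Random sigmoid hidden nodes, generated independently of the labels."""
    a = rng.standard_normal((X.shape[1], n_new))
    b = rng.standard_normal(n_new)
    return 1.0 / (1.0 + np.exp(-(X @ a + b)))

def em_elm_like(X, y, group=5, tol=1e-3, max_nodes=200):
    """Grow the SLFN group by group until the training error target is met.
    EM-ELM itself updates the pseudoinverse incrementally; here we simply
    re-solve the least-squares problem for clarity."""
    H = np.empty((X.shape[0], 0))
    while H.shape[1] < max_nodes:
        H = np.hstack([H, add_nodes(X, group)])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        err = np.mean((H @ beta - y) ** 2)
        if err < tol:
            break
    return H.shape[1], beta, err

X = rng.uniform(-1, 1, (300, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
n, beta, err = em_elm_like(X, y, group=5)
print(f"hidden nodes: {n}, training MSE: {err:.2e}")
```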
Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks which need not be neuron-like and perform well in both regression and classification applications. In this brief, we propose an ELM with adaptive growth of hidden nodes (AG-ELM), which provides a new approach for the automated design of networks. Different from other incremental ELMs (I-ELMs) whose existing hidden nodes are frozen when the new hidden nodes are added one by one, in AG-ELM the number of hidden nodes is determined in an adaptive way in the sense that the existing networks may be replaced by newly generated networks which have fewer hidden nodes and better generalization performance. We then prove that such an AG-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results demonstrate and verify that this new approach can achieve a more compact network architecture than the I-ELM.
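The replacement idea can be illustrated with a heavily simplified sketch (not the AG-ELM procedure or its approximation result): at each step a same-size candidate and a one-node-larger candidate are generated, and the existing network is replaced only when a candidate generalizes better on held-out data, so the node count grows only when it pays off. The sizes, data, and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_slfn(X_tr, y_tr, n_nodes):
    """A random-hidden-node SLFN: only the output weights are solved for."""
    a = rng.standard_normal((X_tr.shape[1], n_nodes))
    b = rng.standard_normal(n_nodes)
    beta, *_ = np.linalg.lstsq(np.tanh(X_tr @ a + b), y_tr, rcond=None)
    return a, b, beta

def val_error(net, X, y):
    a, b, beta = net
    return np.mean((np.tanh(X @ a + b) @ beta - y) ** 2)

X = rng.uniform(-1, 1, (400, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * X[:, 1]
X_tr, X_val, y_tr, y_val = X[:300], X[300:], y[:300], y[300:]

best_size = 1
best = random_slfn(X_tr, y_tr, best_size)
best_err = val_error(best, X_val, y_val)

for step in range(50):
    # try a same-size replacement and a one-node-larger candidate;
    # keep the existing network unless a candidate generalizes better
    for size in (best_size, best_size + 1):
        cand = random_slfn(X_tr, y_tr, size)
        err = val_error(cand, X_val, y_val)
        if err < best_err:
            best, best_err, best_size = cand, err, size

print(f"selected network: {best_size} hidden nodes, validation MSE {best_err:.2e}")
```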
ISBN (Print): 9781450371605
In order to realize automatic adjustment of the network structure of the extreme learning machine (ELM), and inspired by the two-stage extreme learning machine (TS-ELM), a fast two-stage extreme learning machine (FTS-ELM) is proposed, in which hidden nodes are added according to an arithmetic progression and principal component analysis (PCA) is used to prune redundant nodes. In the growing stage, hidden nodes are added to the network according to an arithmetic progression to reduce the number of iterations. In the pruning phase, PCA is used to delete redundant nodes: hidden nodes with a low contribution rate are quickly removed by continuously lowering the cumulative-contribution-rate threshold until the accuracy (error) reaches its maximum (minimum), which makes the network structure more compact. Empirical studies show that, compared with the ELM, EM-ELM, OP-ELM, and TS-ELM algorithms, FTS-ELM leads to a compact network structure with good generalization performance, and its training time is far shorter than that of TS-ELM.
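The two stages can be sketched loosely as follows (not the FTS-ELM code): nodes are added in batches whose sizes follow an arithmetic progression, and pruning ranks nodes by a rough PCA-style contribution score (total squared loading on the leading principal components of the hidden output matrix), keeping the fewest nodes that still meet a relaxed error target. The contribution measure, thresholds, and toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def hidden(X, a, b):
    """Sigmoid hidden-layer output matrix H."""
    return 1.0 / (1.0 + np.exp(-(X @ a + b)))

def fit_beta(H, y):
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return beta, np.mean((H @ beta - y) ** 2)

X = rng.uniform(-1, 1, (300, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

# growing stage: the number of nodes added each round follows an
# arithmetic progression (here 2, 4, 6, ...) so fewer iterations are needed
a, b = np.empty((2, 0)), np.empty(0)
step, tol = 2, 1e-3
while True:
    a = np.hstack([a, rng.standard_normal((2, step))])
    b = np.concatenate([b, rng.standard_normal(step)])
    beta, err = fit_beta(hidden(X, a, b), y)
    if err < tol or a.shape[1] >= 200:
        break
    step += 2

# pruning stage: rank nodes by a rough PCA-style contribution score
# (total squared loading on the principal components covering 99% variance)
H = hidden(X, a, b)
Hc = H - H.mean(axis=0)
_, s, Vt = np.linalg.svd(Hc, full_matrices=False)
k = np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), 0.99) + 1
score = (Vt[:k] ** 2).sum(axis=0)
keep = np.argsort(-score)
for m in range(1, keep.size + 1):     # keep the fewest nodes meeting a relaxed target
    idx = keep[:m]
    beta, err = fit_beta(H[:, idx], y)
    if err < tol * 2:
        break
print(f"grown nodes: {a.shape[1]}, kept after pruning: {len(idx)}, MSE {err:.2e}")
```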