The paper explores the problem of comparative analysis and adaptation of artificial neural networks in solving a typical image analysis and pattern recognition task. This article examines four neural networks using d...
ISBN (Print): 9781643682143
The proceedings contain 52 papers. The topics discussed include: how artificial intelligence may impact your job; a deep neural network model for the prediction of major adverse cardiovascular event occurrences in patients with non-ST-elevation myocardial infarction; do reviews influence real estate marketing: the experience combining with natural language processing; ranking of trapezoidal bipolar fuzzy numbers based on a new improved score function; interpretable dual-feature recommender system using reviews; facial expression recognition and image description generation in Vietnamese; hierarchical digital control system performance; and generalized-multiquadric radial basis function neural networks (RBFNs) with variable shape parameters for function recovery.
Evolutionary algorithms have been successfully employed to find the best structure for many learning algorithms, including neural networks. Due to their flexibility and promising results, convolutional neural networks (CNNs) have found application in many image processing tasks. The structure of a CNN greatly affects its performance in terms of both accuracy and computational cost; thus, finding the best architecture for these networks is a crucial task before they are deployed. In this paper, we develop a genetic programming approach for optimizing CNN structure for diagnosing COVID-19 cases via X-ray images. A graph representation for the CNN architecture is proposed, and evolutionary operators, including crossover and mutation, are specifically designed for this representation. The proposed CNN architecture is defined by two sets of parameters: one is the skeleton, which determines the arrangement of the convolutional and pooling operators and their connections; the other is the numerical parameters of the operators, which determine their properties, such as filter size and kernel size. The proposed algorithm optimizes the skeleton and the numerical parameters of the CNN architectures in a co-evolutionary scheme and is applied to identifying COVID-19 cases via X-ray images.
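As an illustration of the co-evolutionary search this abstract describes, the sketch below evolves a CNN "skeleton" (operator layout) jointly with per-operator numerical parameters. The operator set, parameter ranges, and GA settings are illustrative assumptions, not the paper's actual graph encoding.

```python
# Minimal sketch of co-evolving a CNN skeleton and its numerical
# parameters. All names and ranges here are assumptions for illustration.
import random

OPS = ["conv", "pool"]                                   # skeleton alphabet
PARAM_RANGES = {"filters": [16, 32, 64], "kernel": [3, 5, 7]}

def random_individual(depth=4):
    skeleton = [random.choice(OPS) for _ in range(depth)]
    params = [{k: random.choice(v) for k, v in PARAM_RANGES.items()}
              for _ in skeleton]
    return skeleton, params

def crossover(a, b):
    # One-point crossover applied to skeleton and parameters jointly,
    # so each operator keeps its own numerical settings.
    point = random.randint(1, min(len(a[0]), len(b[0])) - 1)
    return a[0][:point] + b[0][point:], a[1][:point] + b[1][point:]

def mutate(ind, rate=0.2):
    skeleton, params = ind
    for i in range(len(skeleton)):
        if random.random() < rate:                       # re-sample operator
            skeleton[i] = random.choice(OPS)
            params[i] = {k: random.choice(v) for k, v in PARAM_RANGES.items()}
    return skeleton, params

def evolve(fitness, pop_size=20, generations=10):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)              # elitist selection
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Toy demo: reward conv-heavy skeletons (a stand-in for the validation
# accuracy the real search would obtain by training each candidate).
print(evolve(lambda ind: ind[0].count("conv")))
```

In the real search, the fitness callback would train and validate each candidate network on the X-ray dataset rather than count operators.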
ISBN (Digital): 9798350360240
ISBN (Print): 9798350384161
To address the low accuracy and poor stability of bearing diagnostic models under strong background noise, a bearing fault image recognition method is proposed that reduces model randomness by avoiding artificial parameterization, which can introduce random factors. The method is based on hyperparameter optimization of the GoogLeNet convolutional neural network model and decision fusion. First, a two-dimensional wavelet time-frequency transform of the original bearing vibration signal is used to construct an image dataset, converting the one-dimensional classification task into a two-dimensional image problem. Second, three lightweight convolutional neural network architectures are subjected to a noise-immunity test, from which GoogLeNet emerges as the most noise-resistant architecture. Finally, hyperparameter optimization is performed for the network, comparing the mesh method, the progressive mesh method, and a group optimization algorithm; the optimal parameters are then used for decision fusion and visualization of the network. To verify the proposed method, we used the open Case Western Reserve University bearing dataset. Experimental verification demonstrates that the proposed method achieves an accuracy of 94.33% with decision fusion under noise, with good accuracy and stability.
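The two steps named above can be sketched as follows: a 1-D vibration signal is mapped to a 2-D time-frequency image via the continuous wavelet transform, and the class probabilities of several trained network instances are fused by soft voting. The wavelet choice, scales, and the three-model example are assumptions, not the paper's exact settings.

```python
# Sketch: CWT scalogram construction and soft-voting decision fusion.
import numpy as np
import pywt  # PyWavelets

def signal_to_scalogram(signal, scales=np.arange(1, 65), wavelet="morl"):
    """Continuous wavelet transform -> |coefficients| as a 2-D image."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    image = np.abs(coeffs)
    return image / image.max()                 # normalize to [0, 1]

def decision_fusion(prob_list):
    """Average softmax outputs across models and pick the best class."""
    fused = np.mean(np.stack(prob_list), axis=0)
    return int(np.argmax(fused))

# Example: a noisy sine as a stand-in vibration signal, plus fusion of
# three hypothetical model outputs for a 4-class fault problem.
t = np.linspace(0, 1, 1024)
img = signal_to_scalogram(np.sin(2 * np.pi * 50 * t)
                          + 0.5 * np.random.randn(1024))
print(img.shape)                               # (64, 1024)
print(decision_fusion([np.array([0.7, 0.1, 0.1, 0.1]),
                       np.array([0.6, 0.2, 0.1, 0.1]),
                       np.array([0.2, 0.5, 0.2, 0.1])]))   # -> 0
```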
With the continuous expansion of neural network technology in the artificial intelligence field, for example, in image recognition and retrieval, object detection, pixel processing, automatic speech generation, etc., Con...
ISBN (Digital): 9798331516147
ISBN (Print): 9798331516154
Artificial intelligence (AI) has been a key research area since the 1950s, initially focused on using logic and reasoning to create systems that understand language, control robots, and offer expert advice. With the rise of big data and deep learning, AI has advanced in applications like recommendation systems, image recognition, and machine translation, primarily through optimizing loss functions in deep neural networks to improve accuracy and reduce training time. Gradient descent is the core optimization method but faces challenges such as slow convergence and local minima. To overcome these, algorithms such as Momentum, AdaGrad, RMSProp, Adadelta, Adam, and Nadam have been developed, introducing momentum and adaptive learning rates to accelerate convergence. This paper presents a new optimization algorithm that combines the strengths of Adam and AdaGrad, offering better adaptability to different learning rates.
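For reference, the two update rules the paper builds on are shown below in NumPy. The abstract does not give the hybrid rule itself, so this sketch only shows standard AdaGrad and Adam for a single parameter vector.

```python
# Standard AdaGrad and Adam update rules (reference implementations).
import numpy as np

def adagrad_step(theta, grad, state, lr=0.01, eps=1e-8):
    state["G"] = state.get("G", 0.0) + grad ** 2          # accumulated squared grads
    return theta - lr * grad / (np.sqrt(state["G"]) + eps)

def adam_step(theta, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    t = state["t"] = state.get("t", 0) + 1
    state["m"] = b1 * state.get("m", 0.0) + (1 - b1) * grad       # 1st moment
    state["v"] = b2 * state.get("v", 0.0) + (1 - b2) * grad ** 2  # 2nd moment
    m_hat = state["m"] / (1 - b1 ** t)                    # bias correction
    v_hat = state["v"] / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)

# Minimize f(x) = x^2 (gradient 2x) with Adam, starting from x = 5.
theta, state = np.array([5.0]), {}
for _ in range(500):
    theta = adam_step(theta, 2 * theta, state, lr=0.1)
print(theta)                                              # close to 0
```

AdaGrad's accumulated squared gradient gives per-coordinate learning-rate adaptation but shrinks the effective step monotonically; Adam's exponentially decayed moments avoid that shrinkage, which is the trade-off any Adam/AdaGrad hybrid must negotiate.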
Monitoring the tool wear state and predicting the remaining useful life in micro-milling help avoid downtime due to unanticipated failure of the cutting tool. In this regard, investigations based on experiments and data-based models along straight tool paths have been reported in the literature. However, industrial applications often require machining along complex tool paths, where the possibility of tool failure increases multifold; these cases have not been studied elaborately in the literature. Therefore, the main objective of this work is to develop a data-based model that can predict the tool wear state and remaining useful life of the tool, considering the effect of tool path complexity due to varying tool path radius in micro-milling. The data for developing the model were generated by performing micro-milling experiments under different processing parameters, such as feed, depth of cut, spindle rotation speed, and tool path radius. The tool images were captured in situ without removing the tool from the spindle, and image binarization and alignment operations were performed to extract cutting-edge wear features, such as diameter reduction as well as the wear of individual cutting edges. Tool wear classification criteria were defined to categorize the tool condition into three regions: initial wear, steady-state wear, and critical wear. To capture and model the complex mechanism of micro-tool wear and failure, artificial neural networks as well as deep belief networks were implemented to predict the wear state and the remaining useful life (RUL) of the tools. It was found that the wear rate increased with increasing tool path radii, and larger radii could lead to catastrophic tool failure. The neural-network models gave 93-99% accuracy for the prediction of wear state classification and RUL.
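A minimal sketch of the wear-state labelling step follows: a captured tool image is binarized, a diameter-reduction feature is measured, and thresholds assign one of the three wear regions named above. The thresholds and the use of OpenCV's Otsu binarization are illustrative assumptions, not the paper's stated criteria.

```python
# Sketch: binarize a tool image, measure a diameter proxy, label wear.
import cv2
import numpy as np

def binarize_tool_image(gray_image):
    """Otsu thresholding separates the tool silhouette from background."""
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def measure_diameter(binary):
    """Widest row of the silhouette, in pixels, as a diameter proxy."""
    return int((binary > 0).sum(axis=1).max())

def wear_state(diameter_new, diameter_now):
    reduction = (diameter_new - diameter_now) / diameter_new
    if reduction < 0.05:                       # assumed thresholds
        return "initial wear"
    if reduction < 0.15:
        return "steady-state wear"
    return "critical wear"

# Synthetic example: a 40-pixel-wide tool silhouette.
img = np.zeros((100, 100), dtype=np.uint8)
img[20:80, 30:70] = 255
print(measure_diameter(binarize_tool_image(img)))   # 40
print(wear_state(40, 35))                           # "steady-state wear"
```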
The strength of the long short-term memory neural networks (LSTMs) applied to date lies more in handling sequences of variable length than in handling geometric variability of the image patterns. In this paper, an end-to-end convolutional LSTM neural network is used to handle both geometric variation and sequence variability. The best results for LSTMs are often based on large-scale training of an ensemble of network instances. We show that high performance can be reached on a common benchmark set with just five such networks by using proper data augmentation, a proper coding scheme, and a proper voting scheme. The networks have similar architectures (convolutional neural network (CNN): five layers; bidirectional LSTM (BiLSTM): three layers, followed by a connectionist temporal classification (CTC) processing step). The approach accommodates differently scaled input images and different feature map sizes. Three datasets are used: the standard benchmark RIMES dataset (French); a historical handwritten dataset, KdK (Dutch); and the standard benchmark George Washington (GW) dataset (English). The final performance obtained on the RIMES word-recognition test was 96.6%, a clear improvement over other state-of-the-art approaches that did not use a pre-trained network. On the KdK and GW datasets, our approach also shows good results. The proposed approach is deployed in the Monk search engine for historical-handwriting collections.
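The architecture family described above (five CNN layers, three BiLSTM layers, CTC output) can be sketched in PyTorch as follows. Channel counts, hidden sizes, and pooling choices are assumptions; the paper's exact configuration may differ.

```python
# Sketch of a CNN (5 layers) -> BiLSTM (3 layers) -> CTC output stack.
import torch
import torch.nn as nn

class ConvBiLSTMCTC(nn.Module):
    def __init__(self, n_classes, img_height=64):
        super().__init__()
        chans = [1, 32, 64, 128, 128, 256]
        layers = []
        for c_in, c_out in zip(chans, chans[1:]):   # five conv blocks
            layers += [nn.Conv2d(c_in, c_out, 3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d((2, 1))]        # shrink height, keep width
        self.cnn = nn.Sequential(*layers)
        feat_dim = chans[-1] * (img_height // 2 ** 5)
        self.rnn = nn.LSTM(feat_dim, 256, num_layers=3,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, n_classes + 1)     # +1 for the CTC blank

    def forward(self, x):                           # x: (batch, 1, H, W)
        f = self.cnn(x)                             # (batch, C, H', W)
        f = f.permute(0, 3, 1, 2).flatten(2)        # (batch, W, C*H')
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)         # per-timestep log-probs for CTC

model = ConvBiLSTMCTC(n_classes=80)
scores = model(torch.randn(2, 1, 64, 200))
print(scores.shape)                                 # (2, 200, 81)
```

Width is preserved through the pooling layers so that each image column yields one timestep for the CTC decoder; ensemble voting over five such trained instances then produces the final word hypothesis.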
Plants and crops become diseased for many reasons: the diseases may affect the stems, leaves, roots, etc. This paper mainly concentrates on leaves. Leaf disease identification and detection has many applications ...
ISBN (Digital): 9798350354232
ISBN (Print): 9798350354249
Satellite computing has emerged as a promising technology for next-generation wireless networks. This innovative technology provides data processing capabilities that facilitate the widespread implementation of artificial intelligence (AI)-based applications, especially image processing tasks involving deep neural networks (DNNs). With the limited computing resources of an individual satellite, independently handling DNN tasks generated by diverse user equipments (UEs) becomes a significant challenge. One viable solution is dividing a DNN task into multiple subtasks and subsequently distributing them across multiple satellites for collaborative computing. However, it is challenging to partition a DNN appropriately and allocate subtasks to suitable satellites while ensuring load balancing. To this end, we propose a collaborative satellite computing system designed to improve task processing efficiency in satellite networks. Based on this system, a workload-balanced adaptive task splitting scheme is developed to equitably distribute the workload of DNN slices for collaborative inference, consequently enhancing the utilization of satellite computing resources. Additionally, a self-adaptive task offloading scheme based on a genetic algorithm (GA) is introduced to determine optimal offloading decisions within dynamic network environments. Numerical results illustrate that our proposal outperforms comparable methods in terms of task completion rate, delay, and resource utilization.
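The GA-based offloading idea can be sketched as follows: a chromosome assigns each DNN subtask to a satellite, and fitness trades off completion delay against load imbalance across satellites. The cost model, weights, and GA settings are all illustrative assumptions, not the paper's formulation.

```python
# Sketch: GA search over subtask -> satellite assignments.
import random
import numpy as np

N_SUBTASKS, N_SATS = 8, 4
WORK = np.random.rand(N_SUBTASKS) + 0.5        # subtask compute demands
SPEED = np.random.rand(N_SATS) + 1.0           # satellite compute capacities

def fitness(assign):
    loads = np.zeros(N_SATS)
    for task, sat in enumerate(assign):
        loads[sat] += WORK[task] / SPEED[sat]
    return -(loads.max() + 0.5 * loads.std())  # low makespan, balanced load

def ga(pop_size=30, generations=50, mut_rate=0.1):
    pop = [[random.randrange(N_SATS) for _ in range(N_SUBTASKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_SUBTASKS)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.randrange(N_SATS) if random.random() < mut_rate
                     else g for g in child]             # per-gene mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(ga())   # best found subtask -> satellite assignment
```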