Geospatial artificial intelligence (GeoAI) and data processing techniques have significantly advanced object detection, prediction, and classification tasks. However, the availability of machine-learning-ready, labeled data for specific applications such as plant disease detection remains a major challenge for the broader adoption of GeoAI. For instance, collecting temporal unmanned aerial vehicle (UAV) imagery of agricultural crops to track disease emergence and progression requires substantial human labor and resources, so such efforts are often limited to a small spatial scale. Recognizing the pivotal role of temporal data in pattern recognition, object detection, and scene reconstruction, we introduce an innovative approach to augmenting multispectral temporal datasets: the geospatial time machine (GTM). Our proposed methodology combines graph neural network (GNN) and generative adversarial network (GAN) architectures to generate comprehensive synthetic temporal data encompassing multivariate time series. The results demonstrate that imagery generated through backcasting can enhance the accuracy of downstream classification tasks by up to 53% in plant disease detection, particularly in the initial stages of analyzing crop growth using multispectral and multitemporal datasets.
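The backcasting idea can be sketched in miniature. In place of the paper's GNN/GAN generator, a plain linear autoregressive model, fitted on the time-reversed series, stands in as the generator; the synthetic earlier observations it produces are then prepended to the original multispectral series. All function names and parameters below are illustrative, not from the paper.

```python
import numpy as np

def backcast(series, n_steps, order=3):
    """Generate `n_steps` synthetic earlier observations for a
    (T, bands) multispectral time series by fitting a linear
    autoregressive model to the time-reversed data. This is a
    simple stand-in for the GTM's GAN/GNN backcasting generator."""
    rev = np.asarray(series, dtype=float)[::-1]  # reverse time
    T, bands = rev.shape
    # design matrix: each row stacks `order` consecutive observations
    X = np.hstack([rev[i:T - order + i] for i in range(order)])
    y = rev[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    out = rev.copy()
    for _ in range(n_steps):  # roll forward in reversed time
        ctx = out[-order:].reshape(1, -1)
        out = np.vstack([out, ctx @ coef])
    synthetic = out[T:][::-1]  # flip back to forward time
    return np.vstack([synthetic, series])  # prepend synthetic steps
```

The augmented series can then be fed to the same downstream classifier as the real data, with the synthetic early timesteps filling the gap before observation began.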
ISBN (print): 9798350343557
Abnormal electrical activity caused by brain tumors, developmental anomalies, or neural atrophy in cortical and sub-cortical brain regions can trigger epileptic seizures. Electroencephalography (EEG) is an important diagnostic test for observing waveforms associated with epileptic brain activity. In this study, a new method is proposed that automatically detects epileptic seizures from EEG signals. Statistical features based on the discrete wavelet transform and the time-dependent entropy of the EEG signal are used to train artificial neural networks. The proposed method was applied to EEG signals obtained from healthy individuals and epileptic patients, achieving an accuracy of 100% in seizure detection. It was also applied to EEG signals containing normal, interictal, and ictal states, achieving accuracy, sensitivity, and specificity of 98.6%, 96.0%, and 99.3%, respectively.
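The feature pipeline described here can be sketched as follows. This is a minimal stand-in using a hand-rolled Haar wavelet and Shannon entropy; the authors' exact wavelet family, entropy definition, and feature set are not specified in the abstract.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficient arrays."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:  # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def shannon_entropy(coeffs):
    """Entropy of the normalized coefficient energies."""
    energy = coeffs ** 2
    p = energy / (energy.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def eeg_features(signal, levels=4):
    """Per-subband mean, standard deviation, and entropy: a crude
    stand-in for the paper's DWT / time-dependent-entropy features."""
    feats, current = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        current, detail = haar_dwt(current)
        feats += [detail.mean(), detail.std(), shannon_entropy(detail)]
    feats += [current.mean(), current.std(), shannon_entropy(current)]
    return np.array(feats)
```

The resulting fixed-length feature vector (here 15 values for 4 levels) would then be fed to the neural network classifier.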
ISBN (print): 9798400716164
In recent years, machine learning (ML) and deep learning (DL) have driven the advancement of several applications, including computer vision, natural language processing, and audio processing. These complex tasks require large models, which are challenging to deploy on devices with limited resources. Such resource-constrained devices have limited computation power and memory, so neural networks must be optimized through network acceleration and compression techniques. This paper proposes a novel method to compress and accelerate neural networks from a small set of spatial convolution kernels. Firstly, a novel pruning algorithm based on density-based clustering is proposed that identifies and removes redundancy in CNNs while maintaining the accuracy-throughput tradeoff. Secondly, a novel pruning algorithm based on grid-based clustering is proposed to identify and remove redundancy in CNNs. The performance of three pruning algorithms (based on density-based, grid-based, and partitional clustering) is evaluated against one another. The experiments applied the deep CNN compression technique to VGG-16 and ResNet models, achieving higher image-classification accuracy than the original models at a higher compression ratio and speedup.
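The core idea of clustering-driven filter pruning can be sketched as below: group similar convolution kernels and keep one representative per group. This is a greedy, density-style grouping on flattened kernels; the paper's actual algorithms, distance measures, and thresholds are not given in the abstract, so everything here is illustrative.

```python
import numpy as np

def prune_filters(filters, eps=0.5):
    """Greedy density-style clustering of convolution filters.
    `filters` has shape (n_filters, k*k*c_in), i.e. flattened kernels.
    A filter within cosine distance `eps` of an already-kept
    representative is treated as redundant and dropped.
    Returns the indices of the filters to keep."""
    f = np.asarray(filters, dtype=float)
    f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-12)
    keep = []
    for i in range(len(f)):
        # cosine distance to every representative kept so far
        if all(1.0 - f[i] @ f[j] > eps for j in keep):
            keep.append(i)
    return keep
```

In a real pipeline the kept indices would be used to slice the convolution layer's weight tensor (and the next layer's input channels), followed by fine-tuning to recover accuracy.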
Image compression can be carried out through the use of various methods and, here, one can cite artificial neural networks (ANN). The purpose of this paper is to compare and study three methods of image compression us...
ISBN (print): 9798331506674; 9798331506667
We explore the latest advancements in deep learning techniques for improving the precision of image classification systems. We address the challenges of accurately categorizing images due to factors such as background clutter, object orientation, and inconsistent illumination. Through a review of ten foundational studies, we examine state-of-the-art solutions such as data augmentation, transfer learning, and convolutional neural networks (CNNs). Our objective is to identify key trends, obstacles, and areas for future research to enhance the accuracy of image classification.
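A typical data-augmentation step from such pipelines can be sketched as follows; the specific transforms and ranges are illustrative, not taken from any one surveyed study.

```python
import numpy as np

def augment(image, rng):
    """Random label-preserving transforms for an (H, W, C) image in
    [0, 1]: horizontal flip, 90-degree rotation, brightness jitter."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                  # horizontal flip
    out = np.rot90(out, k=rng.integers(4))  # random 90-degree rotation
    out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness
    return out
```

Applying such transforms on the fly during training exposes a CNN to the pose and illumination variation the review identifies as a key obstacle.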
This research proposes an innovative method for correcting banding errors in satellite images based on Generative Adversarial Networks (GANs). Small satellites are frequently launched into space to obtain images used in scientific or military research, commercial activities, and urban planning, among other applications. However, their small cameras are more susceptible to radiometric and geometric errors and to other distortions caused by atmospheric interference. The proposed method was compared to the conventional correction technique using experimental data, showing similar performance (92.64% and 90.05% accuracy, respectively). These experimental results suggest that generative models based on artificial intelligence (AI) techniques, specifically deep learning, are approaching the automatic correction quality of conventional methods. Advantages of GAN models include automating the task of correcting banding in satellite images, reducing the time required, and facilitating processing without requiring prior technical knowledge of Geographic Information Systems (GIS). This technique could become a valuable tool for satellite image processing, improving the accuracy of results and making the process more efficient. The research is particularly relevant to the field of remote sensing and can have practical applications in various industries.
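For context, one widely used conventional destriping baseline is per-column moment matching; the abstract does not name the exact baseline used, so treating it as moment matching is an assumption. A minimal sketch:

```python
import numpy as np

def destripe(img):
    """Moment-matching destriping: rescale each column so its mean
    and standard deviation match the global image statistics. This
    is the kind of conventional banding correction a GAN approach
    would be benchmarked against (exact baseline is assumed)."""
    img = np.asarray(img, dtype=float)
    col_mu = img.mean(axis=0)
    col_sd = img.std(axis=0) + 1e-12
    return (img - col_mu) / col_sd * img.std() + img.mean()
```

Vertical stripes show up as columns whose mean or variance deviates from the rest of the scene; equalizing the column statistics suppresses them without any learned model.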
The recent evolution of artificial intelligence (AI) can be considered life-changing. In particular, there is great interest in emerging hot topics in AI such as image classification and natural language processing. Our world has been revolutionized by convolutional neural networks and transformers for image classification and natural language processing, respectively. Moreover, these techniques can be used in the field of dementia. We introduce some applications of AI systems for treating and diagnosing dementia, including image-classification AI for recognizing facial features associated with dementia, image-classification AI for classifying leukoaraiosis in MRI images, object-detection AI for detecting microbleeds in MRI images, object-detection AI for care support, natural-language-processing AI for detecting dementia within conversations, and natural-language-processing AI for chatbots. Such AI technologies can significantly transform the future of dementia diagnosis and treatment. Geriatr Gerontol Int 2023; ••: ••-••. We introduce research on AI utilization in dementia, which can lead us to a better ***
The development of conversational artificial intelligence (AI) is examined in this research paper, with a focus on how speech and image recognition technologies can be combined to transform how users interact with systems. ...
Driver behavior recognition is of significant importance in modern traffic management and autonomous driving systems. This paper revolves around convolutional neural networks (CNNs) as the cornerston...
Neuromorphic computing extends beyond sequential processing modalities and outperforms traditional von Neumann architectures in implementing more complicated tasks, e.g., pattern processing, image recognition, and decision making. It features parallel interconnected neural networks, high fault tolerance, robustness, autonomous learning capability, and ultralow energy dissipation. Artificial neural network (ANN) algorithms have also been widely used because of their facile self-organization and self-learning capabilities, which mimic those of the human brain. To some extent, ANNs reflect several basic functions of the human brain and can be efficiently integrated into neuromorphic devices to perform neuromorphic computations. This review highlights recent advances in neuromorphic devices assisted by machine learning algorithms. First, the basic structure of simple neuron models inspired by biological neurons and the information processing in simple neural networks are discussed. Second, the fabrication and research progress of neuromorphic devices are presented with regard to materials and structures. Furthermore, the fabrication of neuromorphic devices, including stand-alone neuromorphic devices, neuromorphic device arrays, and integrated neuromorphic systems, is discussed and demonstrated with reference to respective studies. The applications of neuromorphic devices assisted by machine learning algorithms in different fields are categorized and investigated. Finally, perspectives, suggestions, and potential solutions to the current challenges of neuromorphic devices are provided.
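One of the simple biologically inspired neuron models such reviews build on is the leaky integrate-and-fire (LIF) unit, which can be illustrated in a few lines; the parameter values below are arbitrary, not taken from the review.

```python
import numpy as np

def lif_neuron(current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron. Integrates an input current
    trace and returns the membrane-potential trace together with
    the time indices at which spikes were emitted."""
    v, trace, spikes = v_reset, [], []
    for t, i_in in enumerate(current):
        v += dt / tau * (-(v - v_reset) + i_in)  # leaky integration
        if v >= v_thresh:                        # threshold crossing
            spikes.append(t)
            v = v_reset                          # reset after a spike
        trace.append(v)
    return np.array(trace), spikes
```

A constant suprathreshold input drives the membrane potential up until it crosses threshold and resets, producing a regular spike train, which is the basic event-driven behavior neuromorphic hardware emulates.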