Traditional change detection (CD) algorithms cannot meet the requirements of today's high-resolution (HR) remote sensing images. Recently, deep learning-based CD has become a popular research topic. However, annotated samples for training deep learning (DL) models are scarce. Patch-based algorithms have become an important research direction in CD in response to the lack of training datasets, but the optimal patch size is relatively small and difficult to determine, which limits the use of spatial information and the extension of deep networks. In this paper, we develop a feature-regularized mask DeepLab (FRM-DeepLab) for HR change detection. First, a mask-based framework (MaskNet) that uses a few annotated samples to update model parameters is introduced. Based on MaskNet, we design a Mask-DeepLab to make full use of HR imagery. Last, the deep features of unlabeled areas are extracted by an autoencoder as auxiliary information, and those features are concatenated with the mid-level features extracted by Mask-DeepLab to alleviate the overfitting caused by small-scale samples. The algorithm is verified on three HR CD datasets. Visualization and quantitative analysis of the experimental results show that the algorithm achieves significant performance improvements.
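The mask-based idea above, updating parameters only from the few annotated pixels, can be illustrated with a loss averaged over an annotation mask. A minimal numpy sketch (the function name and toy data are hypothetical, not from the paper):

```python
import numpy as np

def masked_bce_loss(pred, label, mask):
    """Binary cross-entropy averaged only over annotated pixels.

    pred  : predicted change probabilities, shape (H, W)
    label : ground-truth change map (0/1), shape (H, W)
    mask  : 1 where a pixel is annotated, 0 elsewhere
    """
    eps = 1e-7
    p = np.clip(pred, eps, 1 - eps)
    bce = -(label * np.log(p) + (1 - label) * np.log(1 - p))
    return (bce * mask).sum() / max(mask.sum(), 1)

# Toy example: only the left half of the image is annotated.
pred = np.full((4, 4), 0.9)
label = np.ones((4, 4))
mask = np.zeros((4, 4)); mask[:, :2] = 1
loss = masked_bce_loss(pred, label, mask)
```

Because unannotated pixels carry zero weight, gradients of such a loss never depend on them, which is what lets a few labeled samples drive the parameter updates.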
For compressive sensing problems, a number of techniques have been introduced, including traditional compressed-sensing (CS) image reconstruction and deep neural network (DNN) models. Unfortunately, at low sampling rates the quality of image reconstruction is still poor. This paper proposes a lossy image compression model (BCS-AE) that combines the two approaches to produce high-quality CS reconstructions at low bitrates. First, block-based compressed sensing (BCS) is used, measuring the image one block at a time with the same operator; it can correctly capture images with complex geometric configurations. Second, we create an autoencoder architecture to replace traditional transforms and train it with a rate-distortion loss function. The proposed model is trained and then tested on the CelebA and Kodak databases. The results show that it outperforms advanced deep learning-based and iterative optimization-based algorithms in terms of compression ratio and reconstruction quality.
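Block-based sampling with a single shared operator can be sketched as follows; the block size, sampling ratio, and Gaussian measurement matrix here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

B = 8            # block size (B x B pixels)
ratio = 0.25     # sampling ratio
m = int(ratio * B * B)

# One fixed Gaussian measurement operator shared by every block.
Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)

def bcs_sample(image):
    """Split the image into B x B blocks and measure each one with Phi."""
    H, W = image.shape
    blocks = (image.reshape(H // B, B, W // B, B)
                   .transpose(0, 2, 1, 3)
                   .reshape(-1, B * B))
    return blocks @ Phi.T          # (n_blocks, m) measurements

img = rng.standard_normal((32, 32))
y = bcs_sample(img)
```

In the paper's pipeline a learned decoder would map such per-block measurements back to pixels instead of a fixed inverse transform.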
The increasing volume of video content on the internet has motivated the exploration of novel approaches in the video compression domain. Though neural network-based architectures have already emerged as the de facto standard in image compression and analytics, their application to video compression also yields promising results. Adaptive and efficient compression techniques are required for video transmission over varying bandwidth. Several deep learning-based techniques and enhancements have been proposed and tested, but they do not exhibit fully optimal behavior and are not trained and optimized end to end. In the quest for a purely end-to-end trainable compression technique, a deep learning-based video compression architecture is proposed, comprising a frame autoencoder, a flow autoencoder, and a motion extension network for the reconstruction of predicted frames. The video compression network has been designed incrementally and trained with a random emission steps strategy. The proposed work yields a significant improvement in visual perception quality, measured in SSIM and PSNR, compared to some state-of-the-art techniques, at the cost of frame reconstruction time.
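The principle behind predicted-frame reconstruction, transmitting only a compressed residual between a predicted frame and the actual frame, can be illustrated with a toy quantized-residual codec (a generic predictive-coding sketch, not the proposed network; the step size and frame sizes are arbitrary):

```python
import numpy as np

def encode_residual(frame, prediction, step=4):
    """Quantize the prediction residual (the part actually transmitted)."""
    residual = frame.astype(np.int32) - prediction.astype(np.int32)
    return np.round(residual / step).astype(np.int32)

def decode_residual(code, prediction, step=4):
    rec = prediction.astype(np.int32) + code * step
    return np.clip(rec, 0, 255).astype(np.uint8)

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else np.inf

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (16, 16)).astype(np.uint8)
# Next frame = previous frame plus a small change (good prediction case).
curr = np.clip(prev.astype(np.int32) + rng.integers(-4, 5, (16, 16)),
               0, 255).astype(np.uint8)

code = encode_residual(curr, prev)
rec = decode_residual(code, prev)
quality = psnr(curr, rec)
```

When the prediction is good, the residual is small and quantizes to few bits, which is the bitrate saving a learned flow/motion network tries to maximize.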
This paper studies recommendation algorithms for learning resources (courses) on online learning platforms such as MOOCs and introduces an autoencoder neural network that integrates course relevance to realize a personalized course recommendation model. The authors first introduce how to embed a course-relevance decoder in an autoencoder neural network. Second, the proposed confidence matrix method is introduced to distinguish the recommendation effect on learned versus unlearned courses, and the training process of the model is described. Then, the design of the experiments is presented, including the model structure, comparative experiments, parameter settings, and evaluation indicators. Finally, the experimental results are analyzed in detail from horizontal and vertical perspectives. It is hoped that this research can serve as a reference for personalized recommendation of learning resources based on deep learning and big data analysis.
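The confidence-matrix idea, weighting reconstruction errors on learned courses more heavily than on unlearned ones, can be sketched as a weighted loss; the confidence levels alpha and beta below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def confidence_weighted_mse(R, R_hat, learned):
    """Weighted reconstruction loss: learned (observed) courses get a high
    confidence alpha, unlearned ones a small confidence beta."""
    alpha, beta = 1.0, 0.05          # hypothetical confidence levels
    C = np.where(learned == 1, alpha, beta)
    return np.sum(C * (R - R_hat) ** 2) / R.size

R = np.array([[1.0, 0.0], [0.0, 1.0]])        # user-course interactions
learned = np.array([[1, 0], [0, 1]])          # 1 = course was taken
R_hat = np.array([[0.9, 0.4], [0.3, 0.8]])    # model reconstruction
loss = confidence_weighted_mse(R, R_hat, learned)
```

The low weight on unlearned entries keeps the autoencoder from treating every missing interaction as a confident negative.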
Iterative learning control (ILC) can yield superior performance for repetitive tasks while only requiring approximate models, making this control strategy very appealing for industry. However, applying it to non-linear systems involves solving optimization problems, which limits industrial uptake, especially for learning online to compensate for variations throughout the system's lifetime. Industry tackles this by designing simple rule-based learning controllers. However, these are often designed in an ad-hoc manner, which potentially limits performance. In this paper, we couple a low-dimensional parametrized learning control algorithm with a generic signal parametrization method based on machine learning, specifically autoencoders. This allows high control performance while limiting implementation complexity and maintaining interpretability, paving the way for higher industrial uptake of learning control for non-linear systems. We illustrate the parametrized approach in simulation on a non-linear slider-crank system and provide an example of using the learning approach to perform a tracking task for this system.
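The basic ILC mechanism built on here, refining the input of a repetitive task from the previous trial's tracking error, can be sketched on a toy linear plant (the plant, gain, and horizon are illustrative, not the slider-crank system):

```python
import numpy as np

# Toy plant: discrete first-order system y[t+1] = a*y[t] + b*u[t].
a, b = 0.3, 1.0

def simulate(u):
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y[1:]

ref = np.ones(20)          # repetitive reference trajectory
u = np.zeros(20)           # initial input guess
gain = 0.5                 # learning gain L

for _ in range(30):        # one pass per trial of the repetitive task
    e = ref - simulate(u)
    u = u + gain * e       # classic ILC update: u_{k+1} = u_k + L * e_k
final_error = np.max(np.abs(ref - simulate(u)))
```

The paper's contribution replaces the raw trial signals in such an update with a low-dimensional autoencoder parametrization, so only a few interpretable parameters are learned per trial.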
Data acquired during production processes often contain redundant and irrelevant features. Thus, to predict quality characteristics from process data, precise feature extraction is essential to sustain a low prediction error and to limit the computational complexity of the deployed machine learning models. Hence, we compare two feature extraction methods: principal component analysis and an autoencoder. Based on an industrial use case, we highlight the advantages of the methods and provide guidance in creating an automated data analysis pipeline for the prediction of quality characteristics. This pipeline is fundamental for other predictive quality applications such as smart experts. Our results favor the principal component analysis for feature extraction, even though it is less expressive than autoencoders.
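A PCA feature extractor of the kind compared here can be sketched in a few lines using the SVD; the synthetic low-dimensional process data below stand in for real production data:

```python
import numpy as np

def pca_features(X, k):
    """Project centered data onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                     # extracted features
    X_rec = Z @ Vt[:k] + X.mean(axis=0)   # reconstruction from features
    return Z, X_rec

rng = np.random.default_rng(2)
# 200 samples living near a 3-dimensional subspace of a 10-D process space.
latent = rng.standard_normal((200, 3))
W = rng.standard_normal((3, 10))
X = latent @ W + 0.01 * rng.standard_normal((200, 10))

Z, X_rec = pca_features(X, 3)
err = np.mean((X - X_rec) ** 2)
```

When the informative structure is close to linear, as in this toy data, PCA recovers it almost perfectly, which matches the study's finding that the simpler method can suffice despite being less expressive than an autoencoder.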
In this paper, methods for diagnosing degradation using ESS voltage data were analyzed. With the domestic and international market size expected to grow, the aim is to diagnose battery degradation before a fire occurs...
Estimation of aquatic ecosystem health indices can reduce the burden of time-consuming, labor-intensive, and costly fieldwork for the sustainable evaluation of freshwater ecosystem status. In this study, we developed a deep neural network to estimate the trophic diatom index (TDI), benthic macroinvertebrate index (BMI), and fish assessment index (FAI) using water quality and hydraulic and hydrological data. A convolutional neural network (CNN) model was built to estimate the health indices. In addition, an autoencoder was adopted to produce manifold features that were used as inputs to the CNN model. Conventional machine learning models, including artificial neural networks, support vector machines, random forests, and extreme gradient boosting, were also developed to estimate the TDI, BMI, and FAI. The results showed that the CNN with an autoencoder exhibited the best performance, with validation Nash-Sutcliffe Efficiency (NSE) and root mean squared error (RMSE) values of 0.592 and 17.249 for TDI, 0.669 and 12.282 for BMI, and 0.638 and 13.897 for FAI, respectively. The autoencoder enhanced the nonlinear feature learning of the time-series and static input data, which improved the CNN's feature extraction for accurate estimation of aquatic ecosystem health indices compared to other data-driven approaches. Therefore, deep learning techniques can be used to investigate aquatic ecosystem health by successfully reflecting the quantitative and qualitative features of health indices.
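The two validation metrics reported above are standard and can be computed as follows (the observation/simulation vectors are toy values, not the study's data):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; values <= 0 mean the model
    is no better than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

obs = np.array([10.0, 20.0, 30.0, 40.0])
sim = np.array([12.0, 18.0, 33.0, 38.0])
```

Note the two metrics move on different scales: NSE is normalized by the variance of the observations, while RMSE keeps the units of the index itself, which is why the paper reports both.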
The accumulation of microplastics (MPs) resulting from the disposal of plastic waste into water sources poses a significant threat to aquatic organisms. MPs are readily ingested by organisms, leading to the accumulation of harmful substances that disrupt their biological processes. Current methods for identifying microplastics have notable drawbacks, including low resolution, extended imaging time, and restricted particle-size analysis. Integrating Raman spectroscopy with machine learning (ML) proves to be an effective approach for identifying and classifying MPs, especially when they are found in environmental media or mixed with various types. ML can be a vital tool in assisting Raman analysis, owing to its robust feature extraction capabilities. This comprehensive review outlines the use of various ML techniques in conjunction with Raman spectral features for diverse investigations related to microplastics. The methodologies discussed encompass Principal Component Analysis, K-Nearest Neighbour, Random Forest, Support Vector Machine, and various deep learning algorithms.
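One of the simplest pipelines the review covers, nearest-neighbour classification of Raman spectral features, can be sketched on synthetic spectra; the band positions, widths, and noise level below are illustrative assumptions, not real polymer signatures:

```python
import numpy as np

rng = np.random.default_rng(3)
wavenumbers = np.linspace(400, 3200, 300)

def spectra(peak, n):
    """Synthetic Raman-like spectra: one Gaussian band plus noise."""
    band = np.exp(-((wavenumbers - peak) ** 2) / (2 * 40.0 ** 2))
    return band + 0.05 * rng.standard_normal((n, wavenumbers.size))

# Two hypothetical polymer classes with distinct band positions.
X = np.vstack([spectra(1450, 40), spectra(1600, 40)])
y = np.array([0] * 40 + [1] * 40)

def knn_predict(X_train, y_train, x, k=3):
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

# Leave-one-out accuracy over the synthetic dataset.
correct = sum(knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
              for i in range(len(y)))
accuracy = correct / len(y)
```

Real MP spectra are far noisier and overlap more, which is why the review emphasizes stronger feature extractors (PCA, deep networks) ahead of the classifier.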
During the coronavirus disease 2019 (COVID-19) epidemic, there has been a growing need for rapid diagnostic tools, with computed tomography (CT) scans emerging as essential diagnostic resources. Nevertheless, manually interpreting their findings, although informative, involves a significant amount of work and variability. In the current study, we construct a machine learning-based model to automate the evaluation of CT images for COVID-19 diagnosis and to differentiate it from pneumonia and other non-COVID diseases. The proposed model employs a Tolerant Local Median Fuzzy C-means (TLMFCM) segmentation strategy in conjunction with a stacked sparse autoencoder (SSAE) for robust feature extraction. The classification task employs a Locally Controlled Seagull Kernel Extreme Learning Machine (LCS-KELM) whose parameters are optimized with the Seagull Optimization Algorithm (SOA). In preliminary comparisons against traditional benchmarks, our model performed better than other models, with an accuracy of 96.3% and a faster processing time.
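Standard fuzzy C-means, the base algorithm underlying the TLMFCM segmentation above (this sketch omits the tolerant local-median modifications), alternates soft membership and center updates:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Basic fuzzy C-means: returns memberships U (rows sum to 1)
    and cluster centers V."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m                                       # fuzzified memberships
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-10
        U = 1.0 / (d ** (2 / (m - 1)))                    # inverse-distance
        U /= U.sum(axis=1, keepdims=True)                 # normalize rows
    return U, V

rng = np.random.default_rng(4)
# Two well-separated 1-D intensity clusters, standing in for tissue classes.
X = np.vstack([rng.normal(0.2, 0.03, (50, 1)),
               rng.normal(0.8, 0.03, (50, 1))])
U, V = fuzzy_cmeans(X, c=2)
labels = U.argmax(axis=1)
```

Unlike hard k-means, the membership matrix U lets ambiguous pixels belong partially to several regions, which is useful at lesion boundaries in CT segmentation.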