The use of the Internet of Things (IoT) in environmental monitoring has led to the development of the Smart Environmental Monitoring (SEM) paradigm. Target or source localization, which determines the underlying cause of environmental occurrences, is an important aspect of SEM. To prevent an environmental event from becoming a potential disaster, swift and early localization of its source is crucial. Target localization is performed with the help of IoT sensors, which are typically deployed in hazardous environments and are not easily serviceable, making energy efficiency a requirement for SEM systems. Furthermore, IoT sensors may provide anomalous readings due to low battery, low sensor sensitivity, faults, or malicious attacks. This alters the decision-making process and leads to inaccurate localization of the source. Hence, detecting and dealing with anomalies while utilizing a limited number of sensor nodes are key factors in addressing reliability and energy efficiency. Current works are iterative in nature, employ redundant nodes, and do not efficiently address anomalous readings, which leads to slow and imprecise localization while increasing energy consumption. Moreover, they are application-specific, making them less adaptable to other sensing tasks. Therefore, this work proposes a comprehensive and novel deep learning-based system that (a) repopulates missing data from inactive sensors, (b) detects and rectifies anomalous data, and (c) instantly localizes a target. The efficacy of the proposed approach is validated in a radioactive environment and compared to an existing benchmark. The results show that the proposed approach achieves precise, rapid, and efficient localization, with 5 times better accuracy, 50 times faster speed, and 3 times lower energy consumption.
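As an illustration of the reconstruction-based anomaly detection this abstract describes, the sketch below fits a linear autoencoder on synthetic correlated sensor readings and flags a reading with an injected spike by its larger reconstruction error. A linear autoencoder's optimal bottleneck is given in closed form by PCA, so the fit uses an SVD rather than iterative training; the data, dimensions, and injected fault are all hypothetical, not the paper's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic readings from 8 sensors driven by 2 shared latent sources.
latent = rng.normal(size=(500, 2))
X = latent @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(500, 8))

# A linear autoencoder with a 2-unit bottleneck has PCA as its optimal
# solution, so we solve it in closed form via SVD.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:2]                      # tied encoder/decoder weights

def recon_error(x):
    z = (x - mu) @ W.T          # encode to the 2-D latent space
    x_hat = z @ W + mu          # decode back to sensor space
    return float(np.linalg.norm(x - x_hat))

normal = X[0]
anomalous = normal.copy()
anomalous[3] += 5.0             # inject a spurious spike on one sensor

# The spiked reading falls off the learned subspace, so its
# reconstruction error is much larger than a normal reading's.
print(recon_error(anomalous) > recon_error(normal))
```

In the paper's setting the same reconstruction signal would also drive imputation: a masked reading can be replaced by the decoder's output at that position.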
Reliable water quality models are crucial for better water management and pollution control. Biochemical oxygen demand (BOD) and dissolved oxygen (DO) are widely recognized indicators used to examine the quality of water. Stakeholders often face challenges when it comes to monitoring water quality indicators daily, and the conventional laboratory testing approach is laborious and costly. Therefore, there is a need for alternate approaches to predicting water quality parameters. This article proposes a novel approach, autoencoder-DeepAutoregressive (AE-DeepAR), to predict the concentration of BOD and DO using in-situ measurable water quality parameters as input. The autoencoder learns a latent space that accurately captures the fundamental characteristics of the nonlinear data, while the DeepAR model produces point predictions. The approach is implemented in the Mahanadi River system for the first time. It is found that conductivity, total suspended solids (TSS), turbidity, and ammoniacal nitrogen (NH3-N) are correlated with BOD. Likewise, turbidity, temperature, and total dissolved solids (TDS) correlate significantly with DO. The performance of the AE-DeepAR model in predicting BOD surpasses that of other models at all stations: R² ranges from 0.90 to 0.93, mean absolute error (MAE) from 0.061 to 0.147, mean squared error (MSE) from 0.005 to 0.028, and mean absolute percentage error (MAPE) from 9.20% to 19.95%. All stations show a high degree of sensitivity. The observed uncertainty is low in most circumstances, though the unpredictability of values caused by outliers during severe events may influence the outcome. The prediction interval coverage probability (PICP) varies between 87% and 95% for BOD and between 89% and 95% for DO. The results reveal that AE-DeepAR can be used as an alternative approach to predict BOD and DO with a high degree of accuracy.
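The two-stage structure of AE-DeepAR (compress inputs to a latent code, then predict the target from that code) can be sketched with simple stand-ins: PCA for the autoencoder and a least-squares regressor for DeepAR's probabilistic autoregressive network. The data below are synthetic stand-ins for the in-situ parameters, not Mahanadi measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic in-situ parameters driven by two shared latent factors
# (hypothetical stand-ins for conductivity, TSS, turbidity, NH3-N).
n = 300
factors = rng.normal(size=(n, 2))
X = factors @ rng.normal(size=(2, 4)) + 0.05 * rng.normal(size=(n, 4))
bod = factors @ np.array([0.8, 0.4]) + 0.05 * rng.normal(size=n)  # synthetic BOD

# Stage 1: compress the inputs to a 2-D latent code. Closed-form PCA
# stands in for the paper's trained autoencoder.
Xc = X - X.mean(axis=0)
Z = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][:2].T

# Stage 2: a plain least-squares regressor on the latent codes stands in
# for DeepAR.
A = np.c_[Z, np.ones(n)]
coef, *_ = np.linalg.lstsq(A, bod, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((bod - pred) ** 2) / np.sum((bod - bod.mean()) ** 2)
print(r2 > 0.8)  # the latent code retains the signal needed to predict BOD
```

The point of the two stages is that the downstream predictor only ever sees the compact latent code, which filters measurement noise while keeping the shared structure across parameters.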
Background: Single-cell RNA sequencing (scRNA-seq) unfolds complex transcriptomic datasets into detailed cellular maps. Despite recent success, there is a pressing need for specialized methods tailored towards the functional interpretation of these cellular maps. Findings: Here, we present DrivAER, a machine learning approach for the identification of driving transcriptional programs using autoencoder-based relevance scores. DrivAER scores annotated gene sets on the basis of their relevance to user-specified outcomes such as pseudotemporal ordering or disease status. DrivAER iteratively evaluates the information content of each gene set with respect to the outcome variable using autoencoders. We benchmark our method using extensive simulation analysis as well as comparison to existing methods for functional interpretation of scRNA-seq data. Furthermore, we demonstrate that DrivAER extracts key pathways and transcription factors that regulate complex biological processes from scRNA-seq data. Conclusions: By quantifying the relevance of annotated gene sets with respect to specified outcome variables, DrivAER greatly enhances our ability to understand the underlying molecular mechanisms.
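DrivAER's core idea, scoring each annotated gene set by how much information it carries about an outcome variable such as pseudotime, can be illustrated with a toy relevance score. Here a first principal component stands in for DrivAER's autoencoder embedding, and the expression matrix, gene sets, and pseudotime are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

n_cells = 200
pseudotime = np.linspace(0, 1, n_cells)   # the user-specified outcome

# Hypothetical expression matrix: genes 0-4 track pseudotime, 5-9 are noise.
expr = np.empty((n_cells, 10))
expr[:, :5] = pseudotime[:, None] * rng.uniform(1, 2, 5) \
              + 0.1 * rng.normal(size=(n_cells, 5))
expr[:, 5:] = rng.normal(size=(n_cells, 5))

gene_sets = {"pathway_A": range(0, 5), "pathway_B": range(5, 10)}

def relevance(idx):
    # Stand-in for DrivAER's autoencoder step: reduce the gene set to a
    # 1-D embedding, then score its fit to the outcome variable.
    sub = expr[:, list(idx)]
    sub = sub - sub.mean(axis=0)
    pc1 = np.linalg.svd(sub, full_matrices=False)[0][:, 0]
    return np.corrcoef(pc1, pseudotime)[0, 1] ** 2   # R^2 vs pseudotime

scores = {name: relevance(idx) for name, idx in gene_sets.items()}
# The pseudotime-driving pathway scores far higher than the noise set.
print(scores["pathway_A"] > scores["pathway_B"])
```

Ranking gene sets by such a score surfaces the programs most likely to drive the observed ordering, which is the interpretation step DrivAER automates at scale.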
Sensor faults are a common type of failure in heat pump systems and can seriously affect normal operation, so self-correction of sensor faults is crucial. State-of-the-art sensor fault correction methods based on data-driven and physical models face challenges such as the need for co-located sensors, accurate physical models, and large amounts of labeled data, greatly limiting their applicability. This paper proposes using machine learning methods for fault self-correction. First, a data self-correction strategy based on a convolutional autoencoder is introduced. Furthermore, an artificial sample generation strategy is proposed to address the scarcity of sensor fault data for data-driven training of the self-correction model. The results demonstrate that the proposed method effectively self-corrects both single and multiple faults. Thermal fault diagnosis evaluations reveal over 90% accuracy on corrected data, with a maximum diagnostic improvement of 53.5%. The study also shows that the number of parameters is crucial for effective correction, underscoring that over-constraint is essential for successful self-correction.
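The artificial sample generation idea, creating labeled (faulty, clean) training pairs when real fault data is scarce, can be sketched by injecting common fault patterns into a clean trace. The signal and the fault taxonomy (bias, drift, precision degradation) are illustrative assumptions, not the paper's exact generation strategy.

```python
import numpy as np

rng = np.random.default_rng(3)

# Clean sensor trace (hypothetical supply-temperature signal, in deg C).
t = np.linspace(0, 10, 500)
clean = 45 + 2 * np.sin(t)

def inject(clean, kind, rng):
    # Return a faulty copy of the clean trace for the given fault type.
    faulty = clean.copy()
    if kind == "bias":
        faulty += rng.uniform(2, 5)                              # constant offset
    elif kind == "drift":
        faulty += np.linspace(0, rng.uniform(3, 6), clean.size)  # slow ramp
    elif kind == "noise":
        faulty += rng.normal(0, 1.5, clean.size)                 # degraded precision
    return faulty

# Each pair (faulty input, clean target) is a training sample for a
# correction model such as the paper's convolutional autoencoder.
pairs = [(inject(clean, k, rng), clean, k) for k in ("bias", "drift", "noise")]
for faulty, target, kind in pairs:
    print(kind, round(float(np.mean(np.abs(faulty - target))), 1))
```

Training on such pairs teaches the network the mapping from corrupted to clean signals, so at run time it can correct faults it was never shown real examples of.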
Stroke and heart disease, which account for a high percentage of deaths amongst the elderly population, can occur suddenly and lead to death. Hence, early diagnosis and continuous management are required. High-risk diseases should be diagnosed by medical personnel using established medical techniques. However, reaching a diagnosis is time-consuming, and opinions may differ between medical professionals. This study aims to shorten the diagnosis period and provide high-accuracy diagnoses by establishing semi-supervised convolutional autoencoder and U-Net models that can classify aortic atherosclerotic plaque conditions and predict the primary locations for stroke occurrence.
ISBN:
(Print) 9781538646595
We present a feature engineering pipeline for the construction of musical signal characteristics, to be used in the design of a supervised model for musical genre identification. The key idea is to extend the traditional two-step process of extraction and classification with additional stand-alone phases that are no longer organized in a waterfall scheme: the whole system is realized by traversing backtrack arrows and cycles between the various stages. To give a compact and effective representation of the features, standard early temporal integration is combined with further selection and extraction phases: on the one hand, the selection of the most meaningful characteristics based on information gain, and on the other, the inclusion of the nonlinear correlations among this subset of features, determined by an autoencoder. The results of experiments conducted on the GTZAN dataset reveal a noticeable contribution of this methodology to the model's performance on the classification task.
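The information gain criterion used for the selection phase can be sketched directly: discretize a feature, then measure how much the label entropy drops when the feature's bin is known. The two features and binary genre labels below are synthetic stand-ins for real audio descriptors.

```python
import numpy as np

rng = np.random.default_rng(4)

labels = rng.integers(0, 2, 400)                   # two hypothetical genres
informative = labels + 0.2 * rng.normal(size=400)  # tracks the genre
irrelevant = rng.normal(size=400)                  # pure noise

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -float(np.sum(p * np.log2(p)))

def info_gain(feature, y):
    # Discretize into quartile bins, then IG = H(y) - H(y | bin).
    binned = np.digitize(feature, np.quantile(feature, [0.25, 0.5, 0.75]))
    h_cond = 0.0
    for b in np.unique(binned):
        mask = binned == b
        h_cond += mask.mean() * entropy(y[mask])
    return entropy(y) - h_cond

# The genre-tracking feature carries far more information about the label.
print(info_gain(informative, labels) > info_gain(irrelevant, labels))
```

Ranking features this way prunes the representation before the autoencoder models the nonlinear correlations among the survivors.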
In this paper, we describe an intrusion detection algorithm based on deep learning for industrial control networks, aiming at the security problems of industrial control systems. Deep learning is a kind of intelligent algorithm with the ability of automatic learning; it uses self-learning to enhance its experience and dynamic classification capability. The ideology of deep learning is similar to that of intrusion detection: to improve the detection rate and reduce the false alarm rate through learning. A sparse autoencoder-extreme learning machine intrusion detection model is proposed for the problem of intrusion detection accuracy. It uses a deep autoencoder, combining the coefficient penalty and reconstruction loss of the encoding layer, to extract features from high-dimensional data during model training, and then uses an extreme learning machine to quickly and effectively classify the extracted features. The accuracy of the algorithm is verified on a standard industrial control intrusion detection data set. The experimental results verify that the method can effectively improve the performance of the intrusion detection system and reduce the false alarm rate.
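The extreme learning machine half of this pipeline is simple enough to sketch: a single hidden layer with random, fixed weights, and output weights solved in one shot by least squares rather than iterative training. The two-class "traffic feature" data below is a synthetic stand-in for the autoencoder-extracted features.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 2-class data standing in for AE-extracted traffic features.
n = 200
X = np.vstack([rng.normal(-1, 0.5, (n, 6)), rng.normal(1, 0.5, (n, 6))])
y = np.r_[np.zeros(n), np.ones(n)]

# Extreme learning machine: the hidden layer is random and never trained;
# only the output weights beta are fit, in closed form.
W = rng.normal(size=(6, 40))
b = rng.normal(size=40)
H = np.tanh(X @ W + b)                        # hidden activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # one-shot least squares

pred = (H @ beta > 0.5).astype(float)
print(float((pred == y).mean()) > 0.9)        # near-perfect on separable data
```

The one-shot fit is what makes ELM fast to train, which is the property the abstract relies on for quick classification of the extracted features.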
ISBN:
(Print) 9781450384087
The generation of handwritten Xibo characters is a key step toward exploring the secrets of this original script, and it is also a scientific aid to the current task of rescuing and protecting Xibo characters. Owing to the peculiarities of the structure of Xibo characters, the prevailing custom of hand-copying texts from generation to generation, and the present difficulty of obtaining samples, generating handwritten Xibo characters is a very challenging task. Building on existing machine learning work on handwritten font generation, and considering the characteristics of the collected handwritten Xibo font data set, we propose using a generative adversarial network to generate handwritten Xibo fonts. Trials with existing generative adversarial network models show that they are ill-suited to this task. Therefore, this paper proposes a feature adversarial generative model combined with an autoencoder. Experimental results show that this model can stably generate a variety of handwritten Xibo font images.
Deep learning based trackers can achieve high tracking precision and strong adaptability in different scenarios. However, because the number of parameters is large and fine-tuning is challenging, their time complexity is high. To improve efficiency, we propose a tracker based on fast deep learning, built on a new network with less redundancy. Based on the theory of deep learning, we propose a deep neural network to describe the essential features of images; fast deep learning is then achieved by restricting the size of the network. With the help of a GPU, the time complexity of network training is reduced to a large extent. Under the particle filter framework, the proposed method combines the deep feature extractor with an SVM-based scorer to distinguish the target from the background. The condensed network structure reduces the complexity of the model. Compared with other deep learning based trackers, the proposed method achieves higher efficiency, with a frame rate of 22 frames per second on average. Experiments on an open tracking benchmark demonstrate that both the robustness and the timeliness of the proposed tracker are promising when the target's appearance changes through translation, rotation, or scaling, or under interference such as illumination changes, occlusion, and cluttered backgrounds. However, it is not robust enough when the target moves fast, under motion blur, or when similar objects are present.
ISBN:
(Print) 9781538674482; 9781538674475
WeChat is a social networking application that connects people widely. A huge amount of data is generated when users conduct conversations, and this data can be used to enhance their lives. This paper describes how such data is collected and how to develop a personalized chatbot from personal conversation records. Our system has a cognitive map based on the word2vec model, which learns and stores the relationships among the words that appear in the chat records; each word is mapped to a continuous high-dimensional vector space. We then adopt the sequence-to-sequence (seq2seq) framework to learn chatting styles from all pairs of chat sentences, replacing the traditional one-hot embedding layer with our word2vec embedding layer in the seq2seq model. Furthermore, we train an autoencoder with a seq2seq architecture to learn a vector representation of each sentence, so we can evaluate the cosine similarity between a model-generated response and the pre-existing response in the test set, and display the distances with a principal component analysis (PCA) projection. As a result, our word2vec-embedded seq2seq model significantly outperforms the one-hot-embedded one.
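The cosine-similarity evaluation step can be sketched with a toy stand-in: random word vectors in place of trained word2vec embeddings, and a mean of word vectors in place of the seq2seq autoencoder's sentence representation. The vocabulary and sentences are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy word vectors (stand-in for trained word2vec embeddings).
vocab = {w: rng.normal(size=16) for w in
         ["how", "are", "you", "doing", "today", "the", "weather", "is", "nice"]}

def sentence_vec(words):
    # Mean of word vectors stands in for the seq2seq autoencoder's
    # learned sentence representation.
    return np.mean([vocab[w] for w in words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

generated = sentence_vec(["how", "are", "you", "today"])
reference = sentence_vec(["how", "are", "you", "doing"])
unrelated = sentence_vec(["the", "weather", "is", "nice"])

# Overlapping wording yields a much higher similarity than an
# unrelated sentence, which is the basis of the paper's evaluation.
print(cosine(generated, reference) > cosine(generated, unrelated))
```

A 2-D PCA projection of such sentence vectors, as the paper does, then gives a visual check that generated responses cluster near their references.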