Detecting Alzheimer's disease (AD) accurately at an early stage is critical for planning and implementing disease-modifying treatments that can help prevent progression to severe stages of the disease. In the existing literature, diagnostic test scores and clinical status are provided for specific time points, and predicting disease progression poses a significant challenge. However, few studies use longitudinal data to build deep-learning models for AD detection, and such models are not stable enough to be relied upon in real medical settings because they lack adaptive training and testing. We aim to predict an individual's diagnostic status for the next six years in an adaptive manner, where prediction performance improves with the number of patient visits. This study presents a Sequence-Length Adaptive Encoder-Decoder LSTM (SLA-ED LSTM) deep-learning model trained on longitudinal data from the Alzheimer's Disease Neuroimaging Initiative archive. In the suggested approach, the decoder LSTM dynamically adjusts to variations in training sequence length and inference length rather than being constrained to a fixed length. We evaluated model performance for various sequence lengths and found that, for inference length one, sequence length nine gives the highest average test accuracy and area under the receiver operating characteristic curve of 0.920 and 0.982, respectively. This insight suggests that data from nine visits effectively capture meaningful cognitive status changes and are adequate for accurate model training. A comparative analysis of the proposed model against state-of-the-art methods reveals a significant improvement in disease-progression prediction over previous methods.
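The abstract gives no implementation details, so the layer sizes, feature dimension, and class count below are assumptions; this is only a minimal PyTorch sketch of how an encoder-decoder LSTM could accept variable-length visit sequences and decode diagnoses for a chosen inference length:

```python
import torch
import torch.nn as nn

class EncoderDecoderLSTM(nn.Module):
    """Minimal sketch: encode a variable-length visit sequence, then
    decode diagnostic-status logits for the next `inference_len` visits."""
    def __init__(self, n_features=16, hidden=64, n_classes=3):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(n_classes, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, visits, inference_len=1):
        # visits: (batch, seq_len, n_features); seq_len may vary per batch
        _, (h, c) = self.encoder(visits)
        batch = visits.size(0)
        token = torch.zeros(batch, 1, self.head.out_features)
        outputs = []
        for _ in range(inference_len):
            out, (h, c) = self.decoder(token, (h, c))
            logits = self.head(out)          # (batch, 1, n_classes)
            outputs.append(logits)
            token = logits.softmax(dim=-1)   # feed the prediction back in
        return torch.cat(outputs, dim=1)     # (batch, inference_len, n_classes)

# Example: nine visits in, one future diagnostic status out
model = EncoderDecoderLSTM()
x = torch.randn(4, 9, 16)                    # 4 patients, 9 visits, 16 features
print(model(x, inference_len=1).shape)       # torch.Size([4, 1, 3])
```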
LSTM is a recurrent neural network model used for capturing long-term dependencies in sequence prediction tasks. This paper proposes an LSTM encoder-decoder network model to predict the power load demand...
ISBN (Print): 9781728162515
Chinese classical poetry has strict formats and complicated linguistic rules, including harmonious rhyme and level and oblique tones (Pingze). Therefore, using NLP to generate classical poetry that meets these constraints has always attracted much attention. Although many studies have shown that deep learning achieves good results in generating poetry, the use of unconstrained natural language may cause the generated poems to violate the metric rules and leave their meaning incomplete. In view of this, this study uses the quatrains of the Tang and Song Dynasties as the training sample, adopts an encoder-decoder LSTM as the quatrain generation model (QGM), and then uses Latent Dirichlet Allocation to expand the meaning of related words in each consecutive sentence of the poem. The features and contributions of this study are as follows: (1) We propose a QGM that learns poetry knowledge from each of the four consecutive sentences of a poem as corpus. (2) The QGM is leveraged to strengthen the meaningfulness, grammaticality, and poeticness of a poem by extending the keywords to a collection of semantically related words. (3) We evaluate poetry generation with the BLEU-2 metric together with human evaluation. This study shows that our model can effectively improve the coherence of the sentences while meeting the requirements of the metric rules.
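The abstract does not specify tooling for the keyword-expansion step; as a hedged illustration only, the sketch below uses gensim's LDA (our assumption, not necessarily the authors' implementation) to expand a keyword into topically related words that could then condition the quatrain generator:

```python
from gensim import corpora, models

# Toy corpus of tokenized lines standing in for Tang/Song quatrains
texts = [
    ["moon", "frost", "homesick", "night"],
    ["river", "mountain", "sunset", "boat"],
    ["spring", "blossom", "rain", "wine"],
]
dictionary = corpora.Dictionary(texts)
bows = [dictionary.doc2bow(t) for t in texts]
lda = models.LdaModel(bows, num_topics=2, id2word=dictionary, passes=10)

def expand_keyword(keyword, topn=5):
    """Return words from the keyword's dominant topic as semantic expansions.
    Assumes the keyword is in the dictionary's vocabulary."""
    bow = dictionary.doc2bow([keyword])
    topics = lda.get_document_topics(bow)
    best_topic = max(topics, key=lambda p: p[1])[0]
    return [word for word, _ in lda.show_topic(best_topic, topn=topn)]

print(expand_keyword("moon"))  # e.g. ['moon', 'frost', 'night', ...]
```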
ISBN (Print): 9781510833135
To train a model for semantic slot filling, manually labeled data in which each word is annotated with a semantic slot label is necessary, but preparing such data by hand is costly. Starting from a small amount of manually labeled data, we propose a method to generate labeled data using an encoder-decoder LSTM. We first train the encoder-decoder LSTM to accept and regenerate the same manually labeled data. Then, to generate a wide variety of labeled data, we add perturbations to the vector that encodes the manually labeled data and generate labeled data with the decoder LSTM from the perturbed encoded vector. We also enhance the encoder-decoder LSTM to generate the word sequences and their label sequences separately, obtaining new pairs of words and their labels. In experiments on the standard ATIS slot filling task, using the generated data improved slot filling accuracy over a strong baseline with an NN-based slot filling model.
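As a hedged sketch of the perturbation idea (the noise scale, vocabulary size, and layer sizes below are assumptions, not values from the paper), one could add Gaussian noise to the encoder's final state before decoding to obtain varied labeled sequences:

```python
import torch
import torch.nn as nn

class PerturbAugmenter(nn.Module):
    """Sketch: autoencode labeled token sequences, then decode from a
    noise-perturbed code to produce new synthetic labeled sequences."""
    def __init__(self, vocab_size=1000, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, noise_std=0.0):
        # tokens: (batch, seq_len) ids of words and their slot labels
        x = self.embed(tokens)
        _, (h, c) = self.encoder(x)
        if noise_std > 0:                       # perturb the encoded vector
            h = h + noise_std * torch.randn_like(h)
        dec_out, _ = self.decoder(x, (h, c))    # teacher forcing for the sketch
        return self.out(dec_out)                # (batch, seq_len, vocab_size)

model = PerturbAugmenter()
seqs = torch.randint(0, 1000, (8, 12))
clean = model(seqs)                  # reconstruction objective during training
varied = model(seqs, noise_std=0.3)  # perturbed decoding for augmentation
print(clean.shape, varied.shape)
```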
Predicting the remaining useful life (RUL) of bearings is critical to ensuring rotating machinery's reliability and maintenance efficiency. Most research in this domain focuses on the fault prognosis of bearings without properly investigating the underlying fault feature patterns for degradation analysis. This paper investigates the remaining operational lifespan of bearings with an enhanced feature selection strategy and anomaly monitoring of bearing operational data. Specifically, four models, namely Bi-LSTM, CNN-LSTM, Conv-LSTM, and encoder-decoder LSTM, are utilized to capture complex temporal dependencies and spatial correlations in the bearing sensor data. In the first stage, various feature selection techniques are used to select degradation-trend monitoring features through time-domain and frequency-domain analysis. Next, anomaly pattern mining techniques are employed to identify abnormal behavior in the data, a crucial input for the subsequent RUL forecasting models. The anomaly patterns are extracted using unsupervised learning methods such as clustering or autoencoders, enabling the detection of early signs of degradation. Subsequently, RUL forecasting is performed using the four deep learning architectures. The performance of the suggested technique is evaluated on a comprehensive dataset of bearing sensor measurements with the corresponding RUL values. The experimental findings show that the proposed models achieve high accuracy in determining the RUL of bearings. This solution enables proactive and cost-effective maintenance procedures by employing advanced deep learning models and anomalous pattern mining techniques, resulting in increased reliability, reduced downtime, and optimized resource allocation.
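The abstract does not name specific degradation indicators; purely as an assumed example of the kind of time-domain and frequency-domain features often extracted from vibration windows (the choice of RMS, kurtosis, and spectral centroid is ours), one might compute:

```python
import numpy as np
from scipy import stats

def window_features(signal, fs=25600):
    """Compute simple degradation-trend features for one vibration window."""
    rms = np.sqrt(np.mean(signal ** 2))          # time domain: energy level
    kurt = stats.kurtosis(signal)                # time domain: impulsiveness
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)  # frequency domain
    return np.array([rms, kurt, centroid])

# Example: one 0.1 s window of synthetic vibration data
window = np.random.randn(2560)
print(window_features(window))   # [rms, kurtosis, spectral_centroid]
```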
In the Internet of Things (IoT) scenario, many devices will communicate in the presence of the cellular network; spectrum availability will be very scarce given the presence of large numbers of mobile users and large amounts of data. Spectrum prediction is very encouraging for high-traffic next-generation wireless networks, where devices/machines that are part of the Cognitive Radio Network (CRN) can predict the spectrum state prior to transmission to save their limited energy by avoiding unnecessary sensing of the radio spectrum. Long short-term memory (LSTM) is employed to simultaneously predict the Radio Spectrum State (RSS) for two time slots, thereby allowing the secondary node to use the prediction result to transmit its information with a lower waiting time and, hence, enhanced capacity. A framework of spectral transmission based on the LSTM prediction is formulated, named positive prediction and sensing-based spectrum transmission. The proposed scheme provides an average maximum waiting-time gain of 2.88 and 0.096 bps more capacity than a conventional energy detector.
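As a rough, hedged sketch of the two-slot prediction idea (the binary busy/idle encoding, history length, and layer sizes are assumptions), an LSTM that maps a history of sensed channel states to occupancy probabilities for the next two slots could look like:

```python
import torch
import torch.nn as nn

class TwoSlotSpectrumPredictor(nn.Module):
    """Sketch: predict busy/idle probability for the next two time slots
    from a history of sensed radio spectrum states."""
    def __init__(self, n_channels=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2 * n_channels)  # two future slots

    def forward(self, history):
        # history: (batch, past_slots, n_channels) with values in {0, 1}
        _, (h, _) = self.lstm(history)
        return torch.sigmoid(self.head(h[-1]))  # (batch, 2 * n_channels)

model = TwoSlotSpectrumPredictor()
past = torch.randint(0, 2, (16, 20, 1)).float()   # 20 past slots, 1 channel
p_busy = model(past)                               # occupancy prob. for slots t+1, t+2
# A secondary node might transmit only if both predicted slots look idle:
can_transmit = (p_busy < 0.5).all(dim=1)
print(p_busy.shape, can_transmit.shape)
```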
In this study, a novel method for real-time trajectory prediction of aircraft with non-cooperative targets is proposed. Leveraging an encoder-decoder LSTM (ED-LSTM) model, we achieve accurate multi-step trajectory predictions while ensuring computational efficiency. Our approach strikes a balance between prediction accuracy and real-time responsiveness, addressing a challenging aspect of trajectory prediction. The effectiveness of our model is highlighted through evaluations using RMSE and R² metrics. Specifically, in a rapidly changing scenario with an average motion speed of 200 m per second, the predicted position errors averaged 5.7 m at 1 s, 32.1 m at 3 s, and 82.1 m at 5 s. Notably, our model demonstrates effectiveness in predicting changes in the motion trend of non-cooperative targets. Furthermore, real-time performance analysis reveals that over 99% of predictions meet the 100 ms real-time requirement. While our approach holds promise, further validation on real-world datasets is warranted to assess its generalizability. Ultimately, this study contributes to the advancement of trajectory prediction methodologies, offering practical implications for aviation and related disciplines.
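A hedged sketch of multi-step trajectory regression with an encoder-decoder LSTM follows; the 3-D position input, five-step horizon, and layer sizes are illustrative assumptions rather than the paper's settings:

```python
import torch
import torch.nn as nn

class TrajectoryEDLSTM(nn.Module):
    """Sketch: encode past 3-D positions, decode the next `horizon` positions."""
    def __init__(self, dim=3, hidden=64, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, dim)

    def forward(self, past):
        # past: (batch, past_len, 3) observed positions of the target
        _, (h, c) = self.encoder(past)
        step = past[:, -1:, :]            # seed the decoder with the last observation
        preds = []
        for _ in range(self.horizon):
            out, (h, c) = self.decoder(step, (h, c))
            step = self.proj(out)         # next predicted (x, y, z)
            preds.append(step)
        return torch.cat(preds, dim=1)    # (batch, horizon, 3)

model = TrajectoryEDLSTM()
history = torch.randn(2, 30, 3)           # 30 past track points per target
print(model(history).shape)               # torch.Size([2, 5, 3])
```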
ISBN (Print): 9789819732982; 9789819732999
Due to intensifying climate change impacts, landslides have become increasingly threatening in the Himalayan region, particularly in India's Kamand Valley. This study addresses the pressing need for accurate landslide prediction models by leveraging advanced Landslide Monitoring Systems (LMSs) and machine learning techniques. A significant challenge in developing these models is the class imbalance in soil movement data. Synthetic data generated by a Variational Autoencoder (VAE) is introduced to overcome this issue. Using the VAE data, the study systematically compares various machine learning (ML) models, including Long Short-Term Memory (LSTM), Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM), Convolutional LSTM (Conv-LSTM), and encoder-decoder LSTM, alongside a novel Multi-LSTM with a Random Forest (RF) model. The ML models were trained with and without synthetic data, while the test data remained intact without the incorporation of synthetic data. The results showcase the substantial impact of synthetic data on model performance. Notably, the Multi-LSTM-RF model, which integrates different LSTM architectures with an RF classifier, achieves remarkable improvements in accuracy, precision, recall, and F1 score. Furthermore, incorporating antecedent rainfall data from the preceding three days enriches the understanding of landslide dynamics. This research significantly advances landslide prediction techniques in vulnerable regions, with the Multi-LSTM-RF model achieving an accuracy of 98.25% and an F1 score of 0.736 in testing when incorporating synthetic data, highlighting its potential for disaster preparedness and response in landslide-prone areas.
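The abstract does not describe the VAE's architecture; as a hedged sketch of using a small VAE to oversample the minority (movement) class in tabular sensor data, where the feature count, latent size, and sampling volume are assumptions:

```python
import torch
import torch.nn as nn

class TabularVAE(nn.Module):
    """Sketch: a small VAE for generating synthetic minority-class samples."""
    def __init__(self, n_features=10, latent=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.mu = nn.Linear(32, latent)
        self.logvar = nn.Linear(32, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                 nn.Linear(32, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

    def sample(self, n):
        # Draw latent codes from the prior and decode synthetic samples
        z = torch.randn(n, self.mu.out_features)
        return self.dec(z)

vae = TabularVAE()
minority = torch.randn(50, 10)          # stand-in for scarce soil-movement events
recon, mu, logvar = vae(minority)       # train with reconstruction loss + KL term
synthetic = vae.sample(200)             # oversample the minority class
print(synthetic.shape)                  # torch.Size([200, 10])
```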
The Himalayan region faces an escalating threat from landslides, a situation worsened by climate change. These events endanger human lives and valuable properties, underscoring the necessity for robust prediction and ...
While implementing software projects, developers do not reinvent the wheel but try to reuse existing API calls and source code. In recent years, the problems of recommending APIs and code snippets have been intensively investigated. Although current approaches achieve encouraging performance, there is still a need to improve the effectiveness and efficiency of the recommendation process. In this work, we reformulate the problem of API recommendation as learning and recommending API sequences relevant to a given coding context. We present LUPE, a novel approach to API and code recommendation that exploits cutting-edge deep learning techniques. Thanks to its underlying encoder-decoder architecture, specialized in transforming sequences, LUPE can effectively learn the order in which invocations occur. The approach has been evaluated on two Android datasets and compared with GAPI and FACER, two state-of-the-art API recommender systems. When fed with augmented training data, our approach obtains high prediction accuracy and produces a perfect match in several cases, hence outperforming the baselines.
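LUPE's internals are not given in the abstract; as a hedged sketch of the sequence-to-sequence idea it describes (the API vocabulary size, layer sizes, and greedy decoding are our assumptions, not LUPE's actual design), an encoder-decoder that maps a context of API invocations to a recommended follow-up sequence could look like:

```python
import torch
import torch.nn as nn

class ApiSeq2Seq(nn.Module):
    """Sketch: encode the APIs already invoked, decode recommended next APIs."""
    def __init__(self, api_vocab=5000, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(api_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, api_vocab)

    def recommend(self, context_ids, max_len=5, bos_id=1):
        # context_ids: (1, ctx_len) ids of API calls in the coding context
        _, state = self.encoder(self.embed(context_ids))
        token = torch.tensor([[bos_id]])
        recs = []
        for _ in range(max_len):          # greedy decoding of the API sequence
            out, state = self.decoder(self.embed(token), state)
            token = self.out(out).argmax(dim=-1)
            recs.append(token.item())
        return recs

model = ApiSeq2Seq()
context = torch.randint(2, 5000, (1, 12))     # 12 API calls already written
print(model.recommend(context))               # e.g. a list of recommended API ids
```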