Anomaly detection for gateway electrical energy metering devices is important for maintenance and operations in power systems. Traditionally, anomaly detection was performed manually through analysis of the collected energy information. However, the manual process is time-consuming and labor-intensive. This paper therefore proposes a hybrid deep-learning model, which integrates a stacked autoencoder (SAE) with a long short-term memory (LSTM) network for intelligently detecting abnormal events of a gateway electrical energy metering device. The proposed model, named the SAE-LSTM model, first uses the SAE to extract deep latent features from three-phase voltage data collected from the gateway electrical energy metering device, and then adopts the LSTM to separate abnormal events based on the extracted deep latent features. The SAE-LSTM model can effectively exploit the temporal information of the electrical data, thereby enhancing the accuracy of anomaly detection. Simulation experiments verify the advantages of the SAE-LSTM model in anomaly detection under different signal-to-noise ratios. Experimental results on real datasets demonstrate that it is suitable for anomaly detection of gateway electrical energy metering devices in practical scenarios.
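The SAE-then-LSTM pipeline described above can be sketched minimally: encode each time step with a pretrained encoder layer, feed the latent sequence through an LSTM cell, and squash the final state into an anomaly score. All dimensions, weights, and the final readout here are toy placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sae_encode(x, weights):
    """Pass a sample through each (pretrained) encoder layer in turn."""
    h = x
    for W, b in weights:
        h = sigmoid(h @ W + b)
    return h

def lstm_step(x, h, c, p):
    """One vanilla LSTM cell step; p holds the gate parameters."""
    z = np.concatenate([x, h])
    i = sigmoid(z @ p["Wi"] + p["bi"])      # input gate
    f = sigmoid(z @ p["Wf"] + p["bf"])      # forget gate
    o = sigmoid(z @ p["Wo"] + p["bo"])      # output gate
    g = np.tanh(z @ p["Wg"] + p["bg"])      # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Toy dimensions: a 6-feature three-phase voltage window -> 4 latent dims.
d_in, d_lat, d_hid, T = 6, 4, 5, 10
enc = [(rng.normal(size=(d_in, d_lat)) * 0.1, np.zeros(d_lat))]
p = {k: rng.normal(size=(d_lat + d_hid, d_hid)) * 0.1
     for k in ("Wi", "Wf", "Wo", "Wg")}
p.update({b: np.zeros(d_hid) for b in ("bi", "bf", "bo", "bg")})

h, c = np.zeros(d_hid), np.zeros(d_hid)
for t in range(T):                       # run the latent sequence
    z_t = sae_encode(rng.normal(size=d_in), enc)
    h, c = lstm_step(z_t, h, c, p)

score = sigmoid(h.sum())                 # toy anomaly score in (0, 1)
print(round(float(score), 3))
```

In a trained model the encoder weights come from layer-wise SAE pretraining and the score from a learned classifier head; the structure of the computation, latent features first, recurrence second, is the point here.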
In the era of big data, learning a discriminant feature representation from network traffic is an invariably essential task for improving the detection ability of an intrusion detection system (IDS). Owing to the lack of accurately labeled network traffic data, many unsupervised feature representation learning models have been proposed with state-of-the-art performance. However, these models fail to consider the classification error while learning the feature representation. Hence, the learnt feature representation may degrade the performance of the classification task. For the first time in the field of intrusion detection, this paper proposes an unsupervised IDS model leveraging the benefits of a deep autoencoder (DAE) for learning a robust feature representation and a one-class support vector machine (OCSVM) for finding a more compact decision hyperplane for intrusion detection. Specifically, the proposed model defines a new unified objective function to minimize the reconstruction and classification errors simultaneously. This unique contribution not only enables the model to support joint learning of the feature representation and classifier training but also guides it to learn a robust feature representation that can improve the discrimination ability of the classifier for intrusion detection. A set of evaluation experiments is conducted to demonstrate the potential of the proposed model. First, an ablation evaluation on the benchmark dataset NSL-KDD validates the design decisions of the proposed model. Next, a performance evaluation on the recent intrusion dataset UNSW-NB15 signifies the stable performance of the proposed model. Finally, a comparative evaluation verifies the efficacy of the proposed model against recently published state-of-the-art methods.
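The "unified objective" idea, one scalar that sums the DAE reconstruction error and the OCSVM objective on the latent codes, can be sketched as follows. The encoder/decoder shapes, the trade-off weight `lam`, and the linear decoder are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def joint_loss(X, enc_W, dec_W, w, rho, nu=0.1, lam=1.0):
    """Reconstruction error plus a one-class SVM term on the latent codes.

    The single scalar combines the autoencoder objective (first term)
    with the standard OCSVM objective (remaining terms), so gradients
    from the classifier also shape the learned representation."""
    Z = sigmoid(X @ enc_W)                # latent codes
    X_hat = Z @ dec_W                     # toy linear decoder
    recon = np.mean((X - X_hat) ** 2)
    n = len(X)
    slack = np.maximum(0.0, rho - Z @ w)  # hinge slack per sample
    ocsvm = 0.5 * w @ w + slack.sum() / (nu * n) - rho
    return recon + lam * ocsvm

X = rng.normal(size=(32, 8))
enc_W = rng.normal(size=(8, 3)) * 0.1
dec_W = rng.normal(size=(3, 8)) * 0.1
w, rho = rng.normal(size=3) * 0.1, 0.5
loss = joint_loss(X, enc_W, dec_W, w, rho)
print(float(loss))
```

Minimizing this with any gradient method updates the encoder and the OCSVM parameters jointly, which is what distinguishes the approach from training the two stages separately.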
In process monitoring based on stacked autoencoders (SAEs), the performance of monitoring models is directly decided by the validity of the structure and parameters, which are primarily determined by time-consuming manual adjustments. This paper presents a novel method, called adaptive parameter tuning SAE (APT-SAE), that can adaptively select parameters rather than tuning them manually. Basic SAEs aim to compress the original input data and extract simple, abstract features; thus, the redundant information in each hidden layer's output should be as small as possible, and the number of nodes in the next layer can be markedly reduced when the amount of redundant information is large. During the pre-training stage of APT-SAE, an adaptive parameter tuning strategy is used to rapidly determine the number of layers and nodes: the cross-covariance of each AE's input data determines the node number of the succeeding AE, and pre-training ends when the correlation is weak, as decided by the average value of the cross-covariance matrix. The proposed method is applied to a benchmark problem, where it outperforms several state-of-the-art methods.
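One plausible reading of the adaptive tuning rule is sketched below: measure redundancy in a layer's output via its correlation matrix, shrink the next layer accordingly, and stop when mean correlation is weak. The threshold, the shrink rule, and the stopping signal are assumptions for illustration; the paper's exact criterion may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def next_layer_width(H, thresh=0.3, min_width=2):
    """Pick the node count of the next AE from the redundancy of H.

    H is the current hidden-layer output (samples x nodes). The more
    strongly correlated node pairs there are, the more the next layer
    can shrink; pre-training stops (returns None) when the mean
    absolute off-diagonal correlation falls below `thresh`."""
    C = np.corrcoef(H, rowvar=False)
    off = np.abs(C[~np.eye(C.shape[0], dtype=bool)])
    if off.mean() < thresh:               # correlation already weak
        return None                       # signal: stop adding layers
    redundant = int((off > thresh).sum() // 2)
    return max(min_width, H.shape[1] - redundant)

# Highly redundant layer: columns are noisy copies of one signal.
base = rng.normal(size=(200, 1))
H_red = base + 0.05 * rng.normal(size=(200, 6))
# Nearly independent layer.
H_ind = rng.normal(size=(200, 6))

print(next_layer_width(H_red), next_layer_width(H_ind))
```

The redundant layer collapses to the minimum width, while the independent layer triggers the stopping condition, mirroring the "shrink while redundant, stop when decorrelated" behaviour described above.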
Audio is inherently temporal data: features extracted from each segment evolve over time, yielding dynamic traits. These dynamics, relative to the static acoustic characteristics inherent in raw audio features, primarily serve as complementary aids for audio classification. This paper employs the reservoir computing model to fit audio feature sequences efficiently, capturing the feature-sequence dynamics in the readout models without the need for offline iterative training. Additionally, stacked autoencoders further integrate the extracted static features (i.e., raw audio features) with the captured dynamics, resulting in more stable and effective classification performance. The entire framework is called the Static-Dynamic Integration Network (SDIN). The conducted experiments demonstrate the effectiveness of SDIN in speech-music classification tasks.
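The reservoir computing step can be sketched as a small echo state network: a fixed random reservoir absorbs the feature sequence, and only a linear readout is fit, in closed form, matching the "no offline iterative training" claim. Reservoir size, leak rate, spectral radius, and the toy per-step target are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_reservoir(U, W_in, W, leak=0.3):
    """Drive a fixed random reservoir with the feature sequence U.

    Only the readout is trained (in closed form below), so there is
    no iterative training of the recurrent part."""
    x = np.zeros(W.shape[0])
    states = []
    for u in U:
        pre = W_in @ u + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)
        states.append(x.copy())
    return np.asarray(states)

n_res, n_in, T = 50, 4, 120
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

U = rng.normal(size=(T, n_in))                    # one feature sequence
y = (U[:, 0] > 0).astype(float)                   # toy per-step target
S = run_reservoir(U, W_in, W)

# Ridge-regression readout, solved in one shot.
ridge = 1e-2
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
pred = S @ W_out
print(pred.shape)
```

In the full SDIN framework, the readout outputs (dynamics) would then be concatenated with the raw audio features (statics) and passed to the stacked autoencoder for integration.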
Recently, the cyber physical system (CPS) has gained significant attention; it mainly depends upon effective collaboration between computation and physical components. The greatly interrelated and united characteristics of CPS have resulted in the development of cyber physical energy systems (CPES). At the same time, the rising ubiquity of wireless sensor networks (WSN) in several application areas makes them a vital part of the design of CPES. Since security and energy efficiency are the major challenging issues in CPES, this study offers an energy aware secure cyber physical system with clustered wireless sensor networks using metaheuristic algorithms (EASCPS-MA). The presented EASCPS-MA technique intends to attain lower energy utilization via clustering and security via intrusion detection. The EASCPS-MA technique encompasses two main stages, namely improved fruit fly optimization algorithm (IFFOA) based clustering and optimal deep stacked autoencoder (OSAE) based intrusion detection. Besides, the optimal selection of the stacked autoencoder (SAE) parameters takes place using the root mean square propagation (RMSProp) optimizer. An extensive performance validation of the EASCPS-MA technique takes place and the results are inspected under varying aspects. The simulation results report the improved effectiveness of the EASCPS-MA technique over other recent approaches in terms of several measures.
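The RMSProp update used for the SAE parameters is standard and can be shown concretely: each weight's step is scaled by a running average of its squared gradients. The toy quadratic objective below is only a stand-in to show the update converging.

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSProp update: scale the step by a running average of
    squared gradients, so rarely-updated weights move faster."""
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Toy quadratic: minimise 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([2.0, -3.0])
cache = np.zeros_like(w)
for _ in range(2000):
    w, cache = rmsprop_step(w, w, cache, lr=0.01)
print(np.round(np.abs(w).max(), 4))
```

In the EASCPS-MA pipeline this update would be applied to the SAE's encoder and decoder weights with the reconstruction-loss gradient in place of the toy gradient.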
Graph neural networks have excellent performance in capturing the similarity relationships among samples, so they have been widely used in computer vision. However, hyperspectral remote sensing images (HSI) suffer from problems such as data redundancy, noise, a lack of labeled samples, and insufficient utilization of spatial information, which affect the accuracy of HSI classification using graph neural networks. To solve these problems, this article proposes graph-based semisupervised learning with weighted features for HSI classification. The proposed method first uses a stacked autoencoder network to extract features, removing the redundancy of the HSI data. Then, a similarity attenuation coefficient is introduced to improve the original feature weighting scheme, so that the differing contributions of adjacent pixels to the center pixel are reflected. Finally, to obtain more generalized spectral features, a shallow feature extraction mechanism is added to the stacked autoencoder network; features with good generalization help address the lack of labeled samples. Experiments on three different types of datasets demonstrate that the proposed method achieves better classification performance than other classification methods when labeled samples are scarce.
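One way to picture the attenuated weighting is a graph whose edge weights combine spectral similarity with a spatial-distance attenuation term, so nearer neighbours of the center pixel count more. The Gaussian similarity, the `1/(1 + alpha*d)` attenuation, and the coefficient `alpha` are illustrative assumptions; the paper's exact weighting scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

def weighted_adjacency(F, coords, sigma=1.0, alpha=0.5):
    """Spectral-similarity edge weights attenuated by spatial distance.

    `alpha` plays the role of a similarity attenuation coefficient:
    neighbours farther from the centre pixel contribute less, so the
    graph reflects the unequal influence of adjacent pixels."""
    d_feat = np.linalg.norm(F[:, None] - F[None, :], axis=-1)
    d_spat = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    W = np.exp(-d_feat ** 2 / (2 * sigma ** 2)) / (1 + alpha * d_spat)
    np.fill_diagonal(W, 0.0)
    return W

F = rng.normal(size=(5, 8))                 # 5 pixels, 8-dim SAE features
coords = np.array([[0, 0], [0, 1], [1, 0], [2, 2], [3, 3]], float)
W = weighted_adjacency(F, coords)
print(W.shape, bool(np.allclose(W, W.T)))
```

The resulting symmetric weight matrix would then feed the graph-based semisupervised learner in place of an unattenuated similarity graph.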
Electrical load prediction plays an important role in power system management and economic development. However, because electrical load has non-linear relationships with several factors, such as the political environment, economic policy, human activities, and irregular behaviors, it is quite difficult to predict power load accurately. To further improve electrical load forecasting performance, a hybrid model is proposed in this paper. The proposed hybrid model combines stacked autoencoders (SAE) and extreme learning machines (ELMs) to learn the characteristics of electrical load time series data. To utilize the characteristics of the electrical load at different depths, the outputs of each layer of the SAE are taken as the inputs of one specific ELM; the results obtained from the different ELMs are then integrated by linear regression, trained by least squares estimation, to obtain the final output. The hybrid model is applied to predict two real-world electrical load time series, and detailed comparisons with the SAE, ELM, the back propagation neural network (BPNN), multiple linear regression (MLR) and support vector regression (SVR) show the advantages of the proposed forecasting model. Experimental and comparison results demonstrate that the proposed hybrid model achieves much better performance than the comparative methods in electrical load forecasting.
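The per-layer ELM plus least-squares combination can be sketched directly: one ELM is fit on each SAE layer's output, and a linear regression blends their predictions. The random projections standing in for SAE layers, the hidden widths, and the synthetic load series are all toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def elm_fit(H_in, y, n_hidden=20):
    """Extreme learning machine: random input weights, output weights
    solved by least squares (no back-propagation)."""
    W = rng.normal(size=(H_in.shape[1], n_hidden))
    A = sigmoid(H_in @ W)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda X: sigmoid(X @ W) @ beta

# Toy load series and two stand-ins for SAE layer outputs of it.
X = rng.normal(size=(150, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=150)
layer1 = sigmoid(X @ rng.normal(size=(6, 4)))        # "SAE layer 1"
layer2 = sigmoid(layer1 @ rng.normal(size=(4, 3)))   # "SAE layer 2"

# One ELM per SAE depth, then a least-squares linear combination.
elms = [elm_fit(layer1, y), elm_fit(layer2, y)]
P = np.column_stack([f(h) for f, h in zip(elms, (layer1, layer2))])
D = np.column_stack([P, np.ones(len(P))])            # add intercept
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
pred = D @ coef
print(round(float(np.mean((pred - y) ** 2)), 3))
```

Swapping the random projections for real SAE encoders pretrained on the load series gives the structure of the proposed hybrid: depth-specific predictors whose disagreements the final regression can exploit.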
As one of the key technologies of affective computing, emotion recognition has received great attention. Electroencephalogram (EEG) signals are spontaneous and difficult to camouflage, so they are used for emotion recognition in academic and industrial circles. To overcome the drawback that traditional machine learning based emotion recognition relies too heavily on manual feature extraction, we propose an EEG emotion recognition algorithm based on 3D feature fusion and a convolutional autoencoder (CAE). First, the differential entropy (DE) features of different frequency bands of the EEG signals are fused to construct 3D features that retain the spatial information between channels. Then, the constructed 3D features are fed into the proposed CAE for emotion recognition. Extensive experiments are carried out on the public DEAP dataset, and the recognition accuracies in the valence and arousal dimensions are 89.49% and 90.76%, respectively. The proposed method is therefore suitable for emotion recognition tasks.
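The differential entropy feature has a closed form for a Gaussian band signal, DE = 0.5 * ln(2*pi*e*sigma^2), which is what makes it cheap to compute per channel and band. The sketch below builds a small channel-by-band DE frame; the channel count, band signals, and scales are toy assumptions (real use would band-pass filter actual EEG first).

```python
import numpy as np

rng = np.random.default_rng(6)

def differential_entropy(x):
    """DE of a (roughly Gaussian) band-limited signal:
    0.5 * ln(2 * pi * e * var(x))."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

# Toy: 4 channels x 4 frequency-band signals -> one 4x4 DE frame;
# stacking such frames over bands yields the 3D feature cube.
de = np.array([[differential_entropy(rng.normal(scale=s, size=1000))
                for s in (0.5, 1.0, 2.0, 4.0)]
               for _ in range(4)])
print(de.shape)
```

Because DE grows with signal variance, higher-energy bands produce larger entries, which is the per-band contrast the CAE then learns from.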
With the rapid development of the Internet of Things (IoT), network security challenges are becoming more and more complex, and the scale of intrusion attacks against networks is gradually increasing. Researchers have therefore proposed intrusion detection systems and continually designed more effective ones to defend against attacks. One issue to consider is how to use limited computing power to process complex network data efficiently. In this paper, taking the AWID dataset as an example, we propose an efficient data processing method to mitigate the interference caused by redundant data and design a lightweight deep learning-based model to analyze and predict the data category. We achieve an overall accuracy of 99.77% and an accuracy of 97.95% on attack classes on the AWID dataset, with a detection rate of 99.98% for the injection attack. Our model has low computational overhead and a fast response time after training, ensuring the feasibility of deployment on edge nodes with weak computational power in the IoT.
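A cheap form of the redundant-data mitigation mentioned above is to prune constant and near-duplicate attributes before training, which shrinks the input with almost no compute cost. This is a generic illustration of that idea; the paper's exact AWID preprocessing pipeline may differ.

```python
import numpy as np

rng = np.random.default_rng(7)

def drop_redundant(X, corr_thresh=0.99):
    """Drop constant columns and near-duplicate (highly correlated)
    columns, a cheap pre-processing step for resource-limited nodes."""
    keep = [j for j in range(X.shape[1]) if X[:, j].std() > 0]
    X = X[:, keep]
    C = np.abs(np.corrcoef(X, rowvar=False))
    selected = []
    for j in range(X.shape[1]):
        if all(C[j, k] < corr_thresh for k in selected):
            selected.append(j)
    return X[:, selected]

X = rng.normal(size=(100, 3))
# Append an exact duplicate column and a constant column.
X = np.column_stack([X, X[:, 0], np.full(100, 7.0)])
print(drop_redundant(X).shape)
```

The duplicate and the constant column are removed, leaving only the three informative features for the lightweight model.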
Diabetic Retinopathy (DR) is a long-lasting diabetic retinal disorder that leads to vision impairment and eventually blindness in much of the working-age population. Classifying the severity level of DR is a challenging task because the lesion features are hard to analyze. The screening process requires an effective detection method to classify the subtle pathologies of the retina. Deep neural architectures play a vital role in diagnosing eye disease and help ophthalmologists provide timely treatment. This paper proposes an efficient, optimized deep neural network with a Chronological Tunicate Swarm Algorithm (CTSA) for classifying the severity of DR. Initially, the retinal images captured through low-quality fundus photography are preprocessed and then subjected to segmentation: the optic disc and the blood vasculature are segmented using a U-Net and a sparse Fuzzy C-means-based hybrid entropy model. The lesion area is then detected using Gabor filter banks, and the features are extracted. The final classification uses a deep stacked autoencoder (SAE) jointly optimized with a bio-inspired Tunicate Swarm Algorithm based on the chronological concept. The presented model achieved average accuracy, sensitivity, specificity and F1-score values of 95.9%, 88.07%, 96.80% and 85.26% on the DIARETDB0 database and 95.48%, 93.29%, 91.89% and 90.53% on the DIARETDB1 database. The experimental outcomes demonstrate the effectiveness and robustness of the proposed method in the DR classification task.
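The Gabor filter bank stage has a standard construction: a sinusoid at a chosen orientation under a Gaussian envelope, replicated over several orientations so that oriented lesion and vessel structure produces strong responses. Kernel size and the wavelength/bandwidth parameters below are illustrative defaults, not the paper's settings.

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=4.0, sigma=2.0, gamma=0.5):
    """Real Gabor kernel: a cosine carrier at orientation `theta`
    modulated by an anisotropic Gaussian envelope."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yp = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xp ** 2 + (gamma * yp) ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xp / lam)

# A 4-orientation bank; convolving each kernel with the fundus image
# yields one response map per orientation for feature extraction.
bank = [gabor_kernel(theta=t)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)
```

Each kernel peaks at its centre and oscillates along its orientation, so summarising the bank's response maps (e.g., by energy per orientation) gives the texture features that feed the SAE classifier.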