Image transmission holds a major share in data communication, and thus secure image transmission is currently a challenging domain of research. A secure image transmission scheme is proposed that physically transmits the encrypted image employing a visual cryptography scheme (VCS). During physical transmission, the meaningless shares may attract curious hackers, and if captured and stacked, the secret may be revealed. Moreover, the increase in transmission overhead due to the multiple share images resulting from a single secret image after encryption is another concern regarding the physical implementation of VCS. Focusing on both observations, vector quantization (VQ) is used to encode as well as to compress each of the shares before transmission. To utilize VQ, its two parameters, cell width and grid dimension, need to be optimized for various kinds of images without compromising the randomness property of the shares. Hence, a particle swarm optimization-guided VQ is proposed; furthermore, a multilayer perceptron in conjunction with an autoencoder is also trained in synchronism with it to automatically obtain the optimal VQ for each image type during transmission. The proposed scheme is successfully implemented with different types of images for secure physical transmission, achieving a 62.8% data volume reduction and 98.07% image quality retrieval. (C) 2019 SPIE and IS&T
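As a concrete illustration of the VQ encoding step described above (not the authors' PSO-tuned implementation), the following minimal NumPy sketch builds a k-means codebook over fixed-size image cells and measures the index-stream size against a raw binary share; the cell width and codebook size here are illustrative values, precisely the parameters the paper tunes automatically:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary 'share' image; cell width and codebook size are illustrative
# stand-ins for the two VQ parameters the paper optimizes with PSO.
img = rng.integers(0, 2, size=(32, 32)).astype(float)
cell, k = 4, 8                      # 4x4 cells, 8 codewords

# Flatten the image into cell x cell blocks (one training vector per block).
blocks = (img.reshape(32 // cell, cell, 32 // cell, cell)
             .transpose(0, 2, 1, 3).reshape(-1, cell * cell))

# Plain k-means (Lloyd iterations) as the codebook design step.
codebook = blocks[rng.choice(len(blocks), k, replace=False)]
for _ in range(20):
    d = ((blocks[:, None] - codebook[None]) ** 2).sum(-1)
    idx = d.argmin(1)
    for j in range(k):
        if (idx == j).any():
            codebook[j] = blocks[idx == j].mean(0)

# Encoding stores one log2(k)-bit index per block instead of cell*cell pixels.
bits_vq = len(blocks) * np.log2(k)   # 3 bits per block index
bits_raw = img.size                  # 1 bit per pixel in a binary share
print(bits_vq / bits_raw)            # codebook transmission overhead ignored
```

With these toy settings the index stream is 192 bits against 1024 raw bits; the real trade-off also includes codebook overhead and the reconstruction distortion, which is why the two parameters must be tuned per image type.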
Anomaly detection in a network is one of the prime concerns for network security. In this work, a novel Channel Boosted and Residual learning based deep Convolutional Neural Network (CBR-CNN) architecture is proposed for the detection of network intrusions. The proposed methodology is based on the inherent nature of anomaly detection, in which a one-class classification approach is used to detect network intrusion. This is accomplished by modelling the normal network traffic distribution using stacked autoencoders (SAE). Using unsupervised training, the SAE transforms the original feature space into a reconstructed feature space, which is further transformed via the proposed concept of channel boosting. Additionally, in order to increase the representational power of the neural network and the diversity of feature representation, a multipath residual learning based CNN architecture is proposed to learn features at different levels of granularity. The performance of the proposed CBR-CNN technique is evaluated on the NSL-KDD dataset. Our proposed method showed significant improvement over existing techniques, achieving accuracy, AU-ROC, and AU-PR of 89.41%, 0.9473, and 0.9443 on the Test(+) dataset and 80.36%, 0.7348, and 0.9034 on the Test(-21) dataset, respectively. (C) 2019 Elsevier B.V. All rights reserved.
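The one-class idea above (model only normal traffic, flag anything it reconstructs poorly) can be sketched with a linear autoencoder; since a linear autoencoder's optimum is the PCA subspace, the sketch below uses PCA directly as a stand-in for the trained SAE, with entirely synthetic data in place of NSL-KDD features:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" traffic: samples lying near a 2-D subspace of a 10-D feature
# space (synthetic stand-in for NSL-KDD features).
basis = rng.normal(size=(2, 10))
normal = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 10))

# Fit the linear-autoencoder optimum (PCA) on normal data only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
code = vt[:2]                      # encoder: project to 2-D latent space

def recon_error(x):
    z = (x - mean) @ code.T        # encode
    xr = z @ code + mean           # decode
    return np.linalg.norm(x - xr, axis=1)

# Threshold from normal data; points far off the learned subspace are
# flagged as anomalies (the one-class decision rule).
tau = recon_error(normal).max() * 1.5
attack = rng.normal(size=(20, 10)) * 3.0
print((recon_error(attack) > tau).mean())
```

The paper's channel boosting and multipath residual CNN then operate on such reconstructed feature spaces rather than thresholding the error directly; this sketch only shows the underlying one-class mechanism.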
High peak-to-average power ratio (PAPR) has been one of the major drawbacks of orthogonal frequency division multiplexing (OFDM) systems. In this letter, we propose a novel PAPR reduction scheme, known as the PAPR reducing network (PRNet), based on the autoencoder architecture of deep learning. In the PRNet, the constellation mapping and demapping of symbols on each subcarrier are determined adaptively through a deep learning technique, such that both the bit error rate (BER) and the PAPR of the OFDM system are jointly minimized. We used simulations to show that the proposed scheme outperforms conventional schemes in terms of BER and PAPR.
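To make the quantity being minimized concrete: PAPR is the peak instantaneous power of the time-domain OFDM symbol (the IFFT of the subcarrier symbols) divided by its mean power. The sketch below measures the PAPR that a fixed QPSK mapping produces; PRNet's contribution is to learn the mapping itself, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random QPSK symbols on N subcarriers (fixed, hand-designed mapping).
N = 64
bits = rng.integers(0, 2, size=(2, N))
x_freq = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

# OFDM modulation is an IFFT; PAPR = peak power / mean power of the
# resulting time-domain samples, usually quoted in dB.
x_time = np.fft.ifft(x_freq) * np.sqrt(N)   # unitary scaling
power = np.abs(x_time) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(round(papr_db, 2))
```

For random data the PAPR can approach 10*log10(N) dB in the worst case, which is why high peaks drive the power amplifier into its nonlinear region and motivate schemes like PRNet.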
ISBN:
(Print) 9781629934433
We previously applied a deep autoencoder (DAE) for noise reduction and speech enhancement. However, that DAE was trained using only clean speech. In this study, by using noisy-clean training pairs, we further introduce a denoising process in learning the DAE. In training the DAE, we still adopt a greedy layer-wise pretraining plus fine-tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or noisy-clean speech pairs transformed by the preceding AEs). Fine-tuning is done by stacking all AEs, with the pretrained parameters used for initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were conducted to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria are used in the performance evaluations. Experimental results show that adding depth to the DAE consistently increases performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on the three objective evaluations.
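The greedy layer-wise scheme above (train layer 1 on noisy-to-clean pairs, then train layer 2 on the pairs as transformed by layer 1, then stack) can be sketched with linear layers, where each "AE layer" reduces to a ridge-regression map fit in closed form; the data are a synthetic stand-in for speech frames, and the linearity is a simplification of the paper's nonlinear AEs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "speech" frames: correlated clean signal in 8-D, plus additive noise.
clean = rng.normal(size=(1000, 8)) @ rng.normal(size=(8, 8))
noisy = clean + 0.5 * rng.normal(size=clean.shape)

def fit_layer(x, y, lam=1e-3):
    """One 'AE layer' as a ridge-regression map x -> y (linear stand-in
    for training a one-hidden-layer AE on noisy-clean pairs)."""
    return np.linalg.solve(x.T @ x + lam * np.eye(x.shape[1]), x.T @ y)

# Greedy layer-wise 'pretraining': layer 2 is fit on the pairs as
# transformed by layer 1, mirroring the stacking described above.
w1 = fit_layer(noisy, clean)
w2 = fit_layer(noisy @ w1, clean)

# The stacked layers act as the denoising filter.
denoised = noisy @ w1 @ w2
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((denoised - clean) ** 2)
print(err_after < err_before)
```

In the paper each layer is a nonlinear AE and the stack is then fine-tuned end to end; the linear version only illustrates why fitting each new layer on the previous layer's outputs cannot hurt the training-set error.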
Sparse code multiple access (SCMA) is a promising code-based non-orthogonal multiple-access technique that can provide improved spectral efficiency and massive connectivity meeting the requirements of 5G wireless communication systems. We propose a deep learning-aided SCMA (D-SCMA) in which the codebook that minimizes the bit error rate (BER) is adaptively constructed, and a decoding strategy is learned using a deep neural network-based encoder and decoder. One benefit of D-SCMA is that the construction of an efficient codebook can be achieved in an automated manner, which is generally difficult due to the non-orthogonality and multi-dimensional traits of SCMA. We use simulations to show that our proposed scheme provides a lower BER with a smaller computation time than conventional schemes.
One of the key challenges in manufacturing processes is improving the accuracy of quality monitoring and prediction. This paper proposes a generative neural network model for automatically predicting work-in-progress product quality. Our approach combines an unsupervised feature-extraction step with a supervised learning method. An autoencoding neural network is trained using raw manufacturing process data to extract rich information from production line recordings. Then, the extracted features are reformed as time series and fed into a multi-layer perceptron for predicting product quality. Finally, the outputs are decoded into a forecast quality measure. We evaluate the performance of the generative model on a case study from a powder metallurgy process. Our experimental results suggest that our method can precisely capture defective products. (C) 2019 Elsevier B.V. All rights reserved.
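The two-stage pipeline above (unsupervised feature extraction, then supervised prediction on the extracted features) can be sketched compactly: below, a linear autoencoder's optimal encoder (the PCA basis) stands in for the trained autoencoding network, and a least-squares predictor stands in for the multi-layer perceptron; the sensor data, dimensions, and quality signal are all synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy process recordings: 200 production runs x 12 sensor channels, where
# quality depends on a low-dimensional latent process state.
latent = rng.normal(size=(200, 3))
sensors = latent @ rng.normal(size=(3, 12)) + 0.1 * rng.normal(size=(200, 12))
quality = latent @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=200)

# Stage 1 (unsupervised): linear-autoencoder optimum via PCA, used as a
# stand-in for the trained autoencoding network.
mu = sensors.mean(0)
_, _, vt = np.linalg.svd(sensors - mu, full_matrices=False)
encode = lambda x: (x - mu) @ vt[:3].T

# Stage 2 (supervised): least-squares predictor on the extracted features,
# standing in for the multi-layer perceptron.
f = np.c_[encode(sensors), np.ones(len(sensors))]
coef, *_ = np.linalg.lstsq(f, quality, rcond=None)
pred = f @ coef
r2 = 1 - ((quality - pred) ** 2).sum() / ((quality - quality.mean()) ** 2).sum()
print(r2 > 0.9)
```

The point of the split is that stage 1 needs no quality labels, so it can exploit all production recordings, while only the small labeled subset is needed for stage 2.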
The use of fluorescence data coupled with neural networks for improved predictability of drinking water disinfection by-products (DBPs) was investigated. Novel application of autoencoders to process high dimensional fluorescence data was related to common dimensionality reduction techniques of parallel factors analysis (PARAFAC) and principal component analysis (PCA). The proposed method was assessed based on component interpretability as well as for prediction of organic matter reactivity to formation of DBPs. Optimal prediction accuracies on a validation dataset were observed with an autoencoder-neural network approach or by utilizing the full spectrum without pre-processing. Latent representation by an autoencoder appeared to mitigate overfitting when compared to other methods. Although DBP prediction error was minimized by other pre-processing techniques, PARAFAC yielded interpretable components which resemble fluorescence expected from individual organic fluorophores. Through analysis of the network weights, fluorescence regions associated with DBP formation can be identified, representing a potential method to distinguish reactivity between fluorophore groupings. However, distinct results due to the applied dimensionality reduction approaches were observed, dictating a need for considering the role of data pre-processing in the interpretability of the results. In comparison to common organic measures currently used for DBP formation prediction, fluorescence was shown to improve prediction accuracies, with improvements to DBP prediction best realized when appropriate pre-processing and regression techniques were applied. The results of this study show promise for the potential application of neural networks to best utilize fluorescence EEM data for prediction of organic matter reactivity. (C) 2018 Elsevier Ltd. All rights reserved.
Diabetic retinopathy (DR) results in vision loss if not treated early. A computer-aided diagnosis (CAD) system based on retinal fundus images is an efficient and effective method for early DR diagnosis and for assisting experts. A CAD system involves various stages, such as detection, segmentation and classification of lesions in fundus images. Many traditional machine-learning (ML) techniques based on hand-engineered features have been introduced. The recent emergence of deep learning (DL) and its decisive victory over traditional ML methods for various applications motivated researchers to employ it for DR diagnosis, and many deep-learning-based methods have been introduced. In this paper, we review these methods, highlighting their pros and cons. In addition, we point out the challenges to be addressed in designing and training efficient, effective and robust deep-learning algorithms for various problems in DR diagnosis and draw attention to directions for future research.
Background: Single cell RNA sequencing (scRNA-seq) is applied to assay the individual transcriptomes of large numbers of cells. Gene expression at the single-cell level provides an opportunity for better understanding of cell function and for new discoveries in biomedical areas. To ensure that single-cell based gene expression data are interpreted appropriately, it is crucial to develop new computational methods. In this article, we re-construct a neural network based on Gene Ontology (GO) for dimension reduction of scRNA-seq data. By integrating GO with both unsupervised and supervised models, two novel methods are proposed, named GOAE (Gene Ontology autoencoder) and GONN (Gene Ontology Neural Network), respectively. The evaluation results show that the proposed models outperform some state-of-the-art dimensionality reduction approaches. Furthermore, by incorporating GO, we provide an opportunity to interpret the underlying biological mechanism behind the neural network-based model.
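A plausible reading of "a neural network based on Gene Ontology" is a layer whose connectivity is masked by gene-to-GO-term membership, so that each latent unit corresponds to one GO term and only receives its annotated genes; that is what makes the latent space biologically interpretable. The sketch below shows such a masked layer with entirely hypothetical toy annotations (the actual GOAE/GONN wiring may differ):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical gene-to-GO-term membership: 6 genes x 3 GO terms; a 1 means
# the gene is annotated to that term (toy annotations, not real GO data).
mask = np.array([[1, 0, 0],
                 [1, 0, 0],
                 [0, 1, 0],
                 [0, 1, 1],
                 [0, 0, 1],
                 [0, 0, 1]], dtype=float)

w = rng.normal(size=mask.shape)

def go_layer(expr):
    """Encoder layer whose connections follow GO membership: each latent
    unit (one GO term) only sees the genes annotated to it."""
    return np.maximum(expr @ (w * mask), 0.0)   # masked weights + ReLU

expr = rng.normal(size=(5, 6))                  # 5 cells x 6 genes
z = go_layer(expr)
print(z.shape)
```

Because the mask zeroes every non-annotated connection, perturbing a gene can only move the GO-term units it belongs to, which is what permits reading the latent dimensions as pathway activities.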
Analyzing human motion is a challenging task with a wide variety of applications in computer vision and in graphics. One such application, of particular importance in computer animation, is the retargeting of motion from one performer to another. While humans move in three dimensions, the vast majority of human motions are captured using video, requiring 2D-to-3D pose and camera recovery, before existing retargeting approaches may be applied. In this paper, we present a new method for retargeting video-captured motion between different human performers, without the need to explicitly reconstruct 3D poses and/or camera parameters. In order to achieve our goal, we learn to extract, directly from a video, a high-level latent motion representation, which is invariant to the skeleton geometry and the camera view. Our key idea is to train a deep neural network to decompose temporal sequences of 2D poses into three components: motion, skeleton, and camera view-angle. Having extracted such a representation, we are able to re-combine motion with novel skeletons and camera views, and decode a retargeted temporal sequence, which we compare to a ground truth from a synthetic dataset. We demonstrate that our framework can be used to robustly extract human motion from videos, bypassing 3D reconstruction, and outperforming existing retargeting methods, when applied to videos in-the-wild. It also enables additional applications, such as performance cloning, video-driven cartoons, and motion retrieval.