In this paper, we propose a novel neural network model, the Multi-branch Long Short-Term Memory Convolutional Neural Network (MLSTM-CNN), for identifying disturbance signals in a distributed optical fiber sensing system based on phase-sensitive optical time-domain reflectometry (phi-OTDR). By unifying feature extraction and classification in one framework, MLSTM-CNN automatically extracts features at different time scales through a multi-branch layer and learnable LSTM layers, and the disturbance signals are then identified by learnable CNN layers. Using a 25.05-km phi-OTDR experimental system, four kinds of real disturbance events (watering, climbing, knocking, and pressing) and a false disturbance event can be effectively identified. Experimental results show that the average identification rate reaches 95.7% with a nuisance alarm rate (NAR) of 4.3%. Compared with LSTM and CNN models, the proposed model improves recognition accuracy and also reduces signal processing time.
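The multi-branch idea above can be sketched structurally: each branch sees the same signal downsampled to a different time scale, an LSTM summarizes each branch, and a small convolution-plus-dense head classifies the fused features. This is a minimal numpy forward pass with untrained random weights; the layer sizes, scales, and kernel width are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_last_hidden(x, Wx, Wh, b, H):
    """Run a basic LSTM over x of shape (T, D); return the final hidden state (H,)."""
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b            # (4H,) gate pre-activations
        i, f, g, o = np.split(z, 4)
        c = sig(f) * c + sig(i) * np.tanh(g)  # cell-state update
        h = sig(o) * np.tanh(c)
    return h

def mlstm_cnn_forward(signal, scales=(1, 2, 4), H=16, n_classes=5):
    """Structural sketch: multi-scale branches -> LSTMs -> 1-D conv -> softmax."""
    feats = []
    for s in scales:
        x = signal[::s][:, None]              # branch at time scale s: (T/s, 1)
        Wx = rng.standard_normal((1, 4 * H)) * 0.1
        Wh = rng.standard_normal((H, 4 * H)) * 0.1
        feats.append(lstm_last_hidden(x, Wx, Wh, np.zeros(4 * H), H))
    f = np.concatenate(feats)                 # fused multi-scale feature vector
    kernel = rng.standard_normal(3) * 0.1     # toy 1-D conv filter
    conv = np.convolve(f, kernel, mode="valid")
    W = rng.standard_normal((conv.size, n_classes)) * 0.1
    logits = conv @ W
    p = np.exp(logits - logits.max())
    return p / p.sum()                        # class probabilities

probs = mlstm_cnn_forward(np.sin(np.linspace(0, 20, 400)))
```

With random weights the output is of course meaningless; the point is only the data flow from branches to the shared classifier head.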
Phase-sensitive optical time-domain reflectometer technology can transform a fiber-optic cable into a large-scale sensor array for distributed acoustic sensing (DAS), an emerging infrastructure for the Internet of Things. However, its event recognition capability is limited, which is a major factor preventing practical field application. This article proposes a perturbation recognition algorithm based on GASF-ConvNeXt-TF with fast processing and high recognition accuracy. First, the Gramian angular summation field (GASF) algorithm encodes the external disturbance signal, transforming the 1-D time-series signal into a more information-dense 2-D image feature. The convolutional neural network ConvNeXt_tiny is then applied as the classifier. To prevent the weight gradients from oscillating back and forth during training, a cosine annealing schedule is introduced to control the decay of the learning rate. Meanwhile, transfer learning is used to further optimize the network model, yielding higher classification accuracy and faster convergence. Finally, two experimental scenarios are arranged along a total of 2.2 km of optical fiber cable, with six disturbance events (shaking, kicking, knocking, trampling, wheel rolling, and impacting). Unlike previous perimeter-security disturbance identification experiments, disturbances are recognized not only at a single point but also at two points simultaneously, with good overall identification accuracy in both cases. The overall recognition accuracy for the six disturbance events in the single-point and multi-point experiments is 99.3% and 98.3%, respectively, with an average recognition time of 0.103 s. The proposed technique has potential applications in structural health monitoring of infrastructure such as factories, airports, energy pipelines, and highways.
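The two preprocessing ingredients named above are both simple to state. GASF rescales the series to [-1, 1], maps samples to angles phi = arccos(x), and forms the image G[i, j] = cos(phi_i + phi_j); cosine annealing decays the learning rate along a half cosine from lr_max to lr_min. A minimal sketch (the series length and learning-rate bounds are illustrative):

```python
import numpy as np

def gasf(x):
    """Gramian angular summation field of a 1-D series:
    rescale to [-1, 1], map to angles, then G[i, j] = cos(phi_i + phi_j)."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(x)
    return np.cos(phi[:, None] + phi[None, :])        # symmetric 2-D image

def cosine_annealing_lr(t, T, lr_max=1e-3, lr_min=1e-5):
    """Cosine-annealed learning rate at step t of T total steps."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + np.cos(np.pi * t / T))

G = gasf(np.sin(np.linspace(0, 4 * np.pi, 64)))       # (64, 64) image feature
```

The resulting matrix is symmetric, so a CNN classifier sees temporal correlations between every pair of time steps at once rather than a single 1-D trace.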
ISBN: (print) 9798350387117; 9798350387124
Graph neural network (GNN) is one of the most prominent machine learning models. It involves a substantial number of graph-based aggregation operators, which can be abstracted as sparse matrix computation kernels. Due to the irregularity of the graph adjacency matrix, aggregation has long been the performance bottleneck of GNN. According to our measurements, existing GNN implementations fall short of optimal performance because they give insufficient consideration to load balancing and data locality. To bridge these performance gaps, we propose PckGNN, which accelerates GNN aggregation operators on GPUs with packing strategies. PckGNN categorizes graph matrices into two types based on their sparsity levels and applies a packing strategy to each. For sparser matrices, Neighbor Packing enhances load balancing through a moderate-grained non-zero grouping approach. For denser matrices, Bin Packing exposes more potential for cache data reuse through bin partitioning, non-zero extraction, format conversion, and two-level scheduling. Experimental results on SpMM and SDDMM show that PckGNN achieves speedups of 1.46x to 6.14x over existing implementations. When applied to GNN inference with three typical models, it achieves speedups of more than 1.29x over state-of-the-art frameworks.
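The load-balancing half of the idea can be illustrated on the host side: group CSR rows into work units ("packs") holding roughly equal numbers of non-zeros, so that no GPU thread block is stuck with one very long row. This sketch is our own simplification of moderate-grained non-zero grouping, not PckGNN's actual kernel code; the pack size is an arbitrary assumption.

```python
def neighbor_pack(row_ptr, target_nnz):
    """Group CSR rows into packs of ~target_nnz non-zeros each.
    Moderate-grained: a long row may be split across several packs.
    Returns a list of packs; each entry is (row, first_nz, last_nz)
    as offsets within that row."""
    packs, cur, cur_nnz = [], [], 0
    for r in range(len(row_ptr) - 1):
        nnz, start = row_ptr[r + 1] - row_ptr[r], 0
        while nnz - start > 0:
            take = min(nnz - start, target_nnz - cur_nnz)
            cur.append((r, start, start + take))
            cur_nnz += take
            start += take
            if cur_nnz == target_nnz:          # pack full: emit it
                packs.append(cur)
                cur, cur_nnz = [], 0
    if cur:
        packs.append(cur)
    return packs

# Rows with 1, 7, 2, 2 non-zeros, packed into balanced units of 4.
packs = neighbor_pack([0, 1, 8, 10, 12], target_nnz=4)
```

Each pack then maps naturally to one thread block, independent of how skewed the original degree distribution is.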
The photon collection efficiency of gaseous scintillator detectors varies with the position of the impinging charged particles in the medium that generates scintillation light. Thus, when impinging particles are distributed over a large area, the intrinsic photon-number resolution of the system suffers a large variation. This work presents and discusses a method for adjusting the total number of detected photons to account for the variation of the photon collection efficiency with the position of the light source within the scintillating medium. The method was developed and validated by processing data from systematic GEANT4-based simulation studies that model the response of the Energy Loss Optical Scintillation System (ELOSS) detector. The position of the charged particle is estimated with a deep neural network algorithm that analyzes the distribution of scintillation light recorded by the array of photosensors. The estimated position is then used to compute the correction factor and adjust the amount of captured light for variations in the photon collection efficiency. The neural network algorithm provides excellent tracking capabilities, achieving sub-millimeter position resolution and an angular resolution of 12 mrad, approaching the performance of traditional tracking detectors (e.g., drift chambers). The present method can be generalized to any optical scintillation system in which the photon collection efficiency depends on the position of the impinging particle.
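The correction step itself reduces to a rescaling: given the estimated particle position, look up the collection efficiency there and rescale the detected count to a reference position. The Gaussian efficiency map below is entirely hypothetical (a real map would come from the GEANT4 simulation, and the position from the neural network); it only shows the shape of the correction.

```python
import numpy as np

def collection_efficiency(x, y):
    """HYPOTHETICAL smooth efficiency map, peaking at the detector center.
    In practice this would be tabulated from GEANT4 simulation studies."""
    return 0.9 * np.exp(-(x**2 + y**2) / (2 * 50.0**2))   # positions in mm

def corrected_photons(n_detected, pos, ref_pos=(0.0, 0.0)):
    """Rescale a detected photon count to the efficiency at a reference
    position, removing the position dependence of the light yield."""
    eps = collection_efficiency(*pos)
    eps_ref = collection_efficiency(*ref_pos)
    return n_detected * eps_ref / eps

# A particle at (30, 40) mm whose true light yield matches one at the center:
n_center = 1000.0
n_edge = n_center * collection_efficiency(30.0, 40.0) / collection_efficiency(0.0, 0.0)
```

After correction, events with the same true light yield produce the same count regardless of where they struck, which is what restores the photon-number resolution.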
Distributed Brillouin optical fiber sensors can detect temperature and strain over long distances in standard single-mode fibers from the measurement of their Brillouin gain spectrum (BGS). However, the performance of these sensors is restricted by parameters such as the frequency scanning interval, the number of averages during data acquisition, and the Brillouin frequency shift (BFS) extraction time, so achieving high-precision real-time monitoring is challenging. In this study, we introduce a nonlinear interpolation deep neural network (NIDNN) to interpolate and denoise the BGS. Additionally, we propose a BFS extraction convolutional neural network (BFSECNN) based on depthwise separable convolution (DSC) to shorten the BFS extraction and training time. Both simulation and experimental results show that the NIDNN and BFSECNN yield higher BFS extraction accuracy and shorter data processing time. After applying the NIDNN, the sweep time of a 28.5-km distributed Brillouin optical fiber sensor is one sixth of the hardware sweep time at a 1-MHz interval, and the BFSECNN extraction time is 3.7 s. Moreover, combining the NIDNN with the BFSECNN reduces the BFS uncertainty by 5.3 MHz at the end of the optical fiber. Used individually or together, the proposed NIDNN and BFSECNN can improve the performance of existing distributed Brillouin optical fiber sensors without any hardware modifications.
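The speed gain from depthwise separable convolution comes from factoring a standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel mix. A minimal 1-D numpy version, with parameter counts for comparison (all sizes here are illustrative, not the BFSECNN's actual configuration):

```python
import numpy as np

def depthwise_separable_conv1d(x, depth_k, point_w):
    """1-D depthwise separable convolution.
    x: (C_in, L) input; depth_k: (C_in, k) one kernel per channel;
    point_w: (C_out, C_in) 1x1 pointwise channel mixing."""
    depth_out = np.stack([np.convolve(x[c], depth_k[c], mode="valid")
                          for c in range(x.shape[0])])   # (C_in, L - k + 1)
    return point_w @ depth_out                           # (C_out, L - k + 1)

C_in, C_out, k, L = 8, 16, 5, 128
x = np.random.default_rng(1).standard_normal((C_in, L))
y = depthwise_separable_conv1d(
    x,
    np.ones((C_in, k)) / k,                              # toy smoothing kernels
    np.random.default_rng(2).standard_normal((C_out, C_in)),
)

params_standard = C_out * C_in * k   # standard conv: 640 weights
params_dsc = C_in * k + C_out * C_in # depthwise + pointwise: 168 weights
```

The roughly k-fold reduction in weights and multiply-accumulates is what shortens both training and BFS extraction time relative to a plain CNN.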
There have been many attempts to implement deep neural network (DNN) distributed training frameworks, often built on Apache Spark. Each framework has its advantages and disadvantages and needs further improvement. While using Apache Spark to implement distributed training systems, we ran into obstacles that significantly affect system performance and programming style. This is why we developed our own distributed training framework, the Distributed Deep Learning Framework (DDLF), which is completely independent of Apache Spark. Our proposed framework overcomes these obstacles and is highly scalable. DDLF helps develop applications that train DNNs in a distributed environment (referred to as distributed training) in a simple, natural, and flexible way. In this paper, we analyze the obstacles encountered when implementing a distributed training system on Apache Spark and present solutions to overcome them in DDLF. We also present the features of DDLF and show how to implement a distributed DNN training application on this framework. In addition, we conduct experiments by training a convolutional neural network (CNN) model on the MNIST and CIFAR-10 datasets in both Apache Spark and DDLF clusters to demonstrate the flexibility and effectiveness of DDLF.
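The core loop of data-parallel distributed training, whatever the framework, is: each worker computes a gradient on its data shard, a coordinator averages the gradients and updates the shared model. A single-process numpy simulation of that pattern on a toy linear-regression task (the four "workers" here are just list entries, an assumption standing in for DDLF's actual worker processes):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic linear-regression data split across 4 simulated workers.
w_true = np.array([2.0, -1.0])
X = rng.standard_normal((400, 2))
y = X @ w_true
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

def local_gradient(w, Xs, ys):
    """Gradient of the shard's mean-squared-error loss w.r.t. w."""
    return 2 * Xs.T @ (Xs @ w - ys) / len(ys)

w = np.zeros(2)
for step in range(200):
    # In a real framework these run in parallel on separate machines.
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
    w -= 0.1 * np.mean(grads, axis=0)    # coordinator averages and updates
```

Because gradient averaging equals the full-batch gradient here, the distributed loop converges to the same weights as centralized training; the framework's job is to make the "in parallel" part fast and fault-tolerant.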
The advent of deep learning (DL)-based models has significantly advanced Channel State Information (CSI) feedback mechanisms in wireless communication systems. However, traditional approaches often suffer from high communication overhead and potential privacy risks due to the centralized nature of CSI data processing. In contrast to conventional approaches, whose distributed data collection is bandwidth-inefficient, this letter proposes Dig-CSI, in which each user equipment (UE) acts as a distributed lightweight generator for dataset creation, utilizing local data without uploading it. Each UE trains an autoencoder whose decoder acts as the distributed generator, which enhances reconstruction accuracy and generative performance. Our experimental results reveal that Dig-CSI efficiently trains a global CSI feedback model, matching the performance of conventional centralized learning models while significantly reducing communication overhead.
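The decoder-as-generator pattern can be shown structurally: a UE trains an autoencoder on local CSI, ships only the decoder, and the server samples latent vectors and decodes them into synthetic CSI for training the global model. The linear, untrained autoencoder below is purely a shape-level sketch (a real Dig-CSI generator would be a trained nonlinear decoder; the dimensions are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
DIM, LATENT = 64, 8   # illustrative CSI-vector and latent sizes

class LinearAutoencoder:
    """Toy linear autoencoder a UE might train on its local CSI.
    Only the decoder is shared; raw CSI never leaves the device."""
    def __init__(self):
        self.We = rng.standard_normal((LATENT, DIM)) * 0.1
        self.Wd = rng.standard_normal((DIM, LATENT)) * 0.1
    def encode(self, h):
        return self.We @ h
    def decode(self, z):
        return self.Wd @ z

ue = LinearAutoencoder()

def generate_synthetic_csi(decoder, n):
    """Server side: sample latents, decode them into synthetic CSI samples
    used to train the global feedback model."""
    Z = rng.standard_normal((LATENT, n))
    return decoder(Z)            # (DIM, n) synthetic CSI matrix

fake = generate_synthetic_csi(ue.decode, 100)
```

Only the decoder weights (DIM x LATENT values) cross the air interface, instead of every CSI sample, which is where the overhead saving comes from.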
As the number of edge devices with computing resources (e.g., embedded GPUs, mobile phones, and laptops) increases, recent studies demonstrate that it can be beneficial to run convolutional neural network (CNN) inference collaboratively on more than one edge device. However, these studies make strong assumptions about the devices' conditions, and their application is far from practical. In this work, we propose a general method, called DistrEdge, that provides CNN inference distribution strategies in environments with multiple IoT edge devices. By addressing heterogeneity in devices and network conditions and the nonlinear characteristics of CNN computation, DistrEdge adapts to a wide range of cases (e.g., different network conditions and various device types) using deep reinforcement learning. We use the latest embedded AI computing devices (e.g., NVIDIA Jetson products) to construct heterogeneous-device cases in the experiments. Based on our evaluations, DistrEdge properly adjusts the distribution strategy according to the devices' computing characteristics and the network conditions, achieving a 1.1x to 3x speedup over state-of-the-art methods.
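To see what a "distribution strategy" optimizes, consider the simplest case: splitting a layer-sequential CNN between two devices with one intermediate-tensor transfer. DistrEdge searches a much richer space with deep reinforcement learning; the brute-force search below over a single split point (with made-up per-layer times, device speed factors, and link costs) only illustrates the objective.

```python
def best_split(layer_ms, device_speed, link_ms):
    """Brute-force the layer split of a CNN across two edge devices:
    device 0 runs layers [0, s), device 1 runs the rest, with one
    transfer at the split. layer_ms: per-layer time on a reference
    device; device_speed: relative speed factors; link_ms[s]: cost of
    transferring the activation at split point s."""
    best = None
    for s in range(len(layer_ms) + 1):
        t0 = sum(layer_ms[:s]) / device_speed[0]
        t1 = sum(layer_ms[s:]) / device_speed[1]
        total = t0 + link_ms[s] + t1       # sequential end-to-end latency
        if best is None or total < best[1]:
            best = (s, total)
    return best

# Device 1 is twice as fast; mid-network activations are cheap to ship.
split, latency = best_split(
    layer_ms=[4, 4, 4, 4],
    device_speed=[1.0, 2.0],
    link_ms=[5, 1, 1, 1, 5],
)
```

Even in this two-device toy, the optimum shifts with link costs and speed ratios, which is why a learned policy beats fixed heuristics once devices and networks are heterogeneous.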
Automated recognition of insect categories, which currently is performed mainly by agricultural experts, is a challenging problem that has received increasing attention in recent years. The goal of the present research is to develop an intelligent recognition system based on deep neural networks to recognize garden insects, packaged so that it can be conveniently deployed on mobile terminals. State-of-the-art lightweight convolutional neural networks (such as SqueezeNet and ShuffleNet) have the same accuracy as classical convolutional neural networks such as AlexNet but far fewer parameters, thereby not only requiring less communication across servers during distributed training but also being more feasible to deploy on mobile terminals and other hardware with limited resources. In this research, we combine the rich details of the low-level network features with the rich semantic information of the high-level network features to construct feature maps with richer semantics, which effectively improves the SqueezeNet model at a small computational cost. In addition, we developed offline insect recognition software that can be deployed on a mobile terminal to solve the no-network and time-delay problems in the field. Experiments demonstrate that the proposed method is promising for recognition while remaining within a limited computational budget, delivering a much higher recognition accuracy of 91.64% with less training time than other classical convolutional neural networks. We also verified that the improved SqueezeNet model is 2.3% more accurate than the original model on the open insect dataset IP102.
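The low/high-level fusion described above is, mechanically, an upsample-and-concatenate: the coarse, semantically rich deep feature map is spatially upsampled to the resolution of the detailed shallow map, and the two are stacked along the channel axis. A numpy sketch (the map sizes are illustrative, not SqueezeNet's actual shapes):

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbor 2x spatial upsampling of a (C, H, W) feature map."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def fuse(low, high):
    """Concatenate detailed low-level features with upsampled
    high-level semantic features along the channel axis."""
    return np.concatenate([low, upsample2x(high)], axis=0)

low = np.zeros((16, 28, 28))    # early layer: fine spatial detail
high = np.ones((64, 14, 14))    # deep layer: coarse semantics
fused = fuse(low, high)         # (80, 28, 28) fused feature map
```

Subsequent layers then see both edge-level texture and object-level semantics at every spatial location, at the cost of only a few extra channels.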
Authors: Wang, Rui; Jiang, Yi; Zhang, Wei
Fudan Univ, Sch Informat Sci & Technol, Dept Commun Sci & Engn, Key Lab Informat Sci Electromagnet Waves (MoE), Shanghai 200433, Peoples R China
Univ New South Wales, Sch Elect Engn & Telecommun, Sydney NSW 2052, Australia
This paper studies a multi-antenna multi-user multi-relay network in which the radio frequency (RF) power amplifiers (PAs) of the nodes are subject to instantaneous power constraints. To optimize the nonlinear transceivers of the distributed nodes, we introduce a novel perspective that relates a relay network to an artificial neural network (ANN). With this perspective, we propose a distributed learning-based relay beamforming (DLRB) scheme. Based on a set of pilot sequences, the DLRB scheme optimizes the transceivers to minimize the mean squared error (MSE) of the data stream in a distributed manner. It effectively coordinates the distributed relay nodes to form a virtual array that suppresses interference, even without channel state information (CSI) or information exchange between the relay nodes or between the users. We also present a frame design that allows the DLRB scheme to adapt well to time-varying channels. Extensive simulations verify the effectiveness of the proposed scheme.
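The pilot-driven, CSI-free flavor of this optimization can be illustrated at a single receiver: weights that minimize the MSE against known pilot symbols are computed directly from received pilot samples, with the channel never estimated explicitly. The DLRB scheme trains nonlinear transceivers across distributed relays; the least-squares receive combiner below (toy antenna count, pilot length, and noise level) only sketches the pilot-based MSE criterion.

```python
import numpy as np

rng = np.random.default_rng(5)

def mmse_weights_from_pilots(Y, s, reg=1e-3):
    """Receive weights from pilots alone, no explicit CSI:
    w minimizes ||w^H Y - s||^2 over the pilot block.
    Y: (N_ant, T) received pilot samples; s: (T,) known pilot symbols."""
    R = Y @ Y.conj().T + reg * np.eye(Y.shape[0])  # sample covariance
    p = Y @ s.conj()                               # pilot cross-correlation
    return np.linalg.solve(R, p)

# Toy scenario: 4 receive antennas, one desired stream, additive noise.
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
s = np.sign(rng.standard_normal(64)).astype(complex)      # BPSK pilots
Y = np.outer(h, s) + 0.05 * (rng.standard_normal((4, 64))
                             + 1j * rng.standard_normal((4, 64)))
w = mmse_weights_from_pilots(Y, s)
s_hat = w.conj() @ Y        # combined output approximates the pilots
```

Because the normal equations use only second-order statistics of what was actually received, the same criterion extends to nodes that never exchange CSI, which is the property the DLRB scheme exploits network-wide.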