Calculation of noise cross-correlation functions (NCF) plays an important role in ambient noise imaging, a vital seismic method for obtaining the Earth's inner structure. To raise the resolution of the imaging results, more seismic data are needed. However, as the size of the seismic data grows, the serial algorithm for NCF calculation becomes far more time-consuming, so accelerating the NCF calculation is a key problem in ambient noise imaging. Based on an analysis of the serial algorithm, we propose a new parallel algorithm for NCF calculation on the NVIDIA GPU platform. In addition, we improve the reading and writing strategy to reduce I/O consumption. Experimental results on real seismic data show the effectiveness of our method: the parallel program achieves a speedup of about 1861 times.
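The abstract does not give the NCF formula, but noise cross-correlation is conventionally computed in the frequency domain. The sketch below (an assumption about the method, not the paper's code) shows the per-station-pair arithmetic that a GPU kernel would parallelize across many pairs:

```python
import numpy as np

def ncf(a, b):
    """Frequency-domain cross-correlation of two equal-length noise records.

    FFT-based correlation replaces the O(N^2) time-domain loop with
    O(N log N) transforms; on a GPU this arithmetic is repeated
    independently for every station pair.
    """
    n = len(a)
    nfft = 2 * n  # zero-pad to avoid circular wrap-around
    A = np.fft.rfft(a, nfft)
    B = np.fft.rfft(b, nfft)
    cc = np.fft.irfft(A * np.conj(B), nfft)
    # reorder so the output runs from lag -(n-1) to lag +(n-1)
    return np.concatenate((cc[-(n - 1):], cc[:n]))

# toy check: correlating a trace with a delayed copy peaks at the delay lag
rng = np.random.default_rng(0)
sig = rng.standard_normal(256)
delayed = np.roll(sig, 5)
lags = np.arange(-(len(sig) - 1), len(sig))
corr = ncf(delayed, sig)
print(lags[np.argmax(corr)])
```

The peak lag of the NCF between two stations estimates the travel time of the coherent wavefield between them, which is the quantity ambient noise imaging inverts for.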
ISBN:
(Print) 9783030576752; 9783030576745
Task-based programming models are emerging as a promising alternative for making the most of multi-/many-core systems. These programming models rely on runtime systems whose goal is to improve application performance by properly scheduling application tasks to cores. Additionally, these runtime systems offer policies to cope with application phases that lack the parallelism to fill all cores. However, these policies are usually static and favor either performance or energy efficiency. In this paper, we extend a task-based runtime system with a lightweight monitoring and prediction infrastructure that dynamically predicts the optimal number of cores required for each application phase, thus improving both performance and energy efficiency. Through the execution of several benchmarks on multi-/many-core systems, we show that our prediction-based policies deliver competitive performance while improving energy efficiency compared to state-of-the-art policies.
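The paper's predictor is not specified here, but the policy idea can be illustrated with a minimal sketch: given measured per-phase speedups, keep adding cores only while the marginal gain is worthwhile. The function name and the 5% threshold are illustrative assumptions, not the paper's mechanism:

```python
def cores_for_phase(speedup, threshold=0.05):
    """Pick a core count for a phase from measured speedups.

    speedup: dict mapping core count -> measured speedup for this phase.
    Stops growing the core count once the relative gain from doubling
    falls below `threshold`, trading a sliver of performance for energy.
    """
    counts = sorted(speedup)
    best = counts[0]
    for prev, cur in zip(counts, counts[1:]):
        gain = (speedup[cur] - speedup[prev]) / speedup[prev]
        if gain < threshold:   # extra cores barely help: stop here
            break
        best = cur
    return best

# a phase that scales well to 8 cores but saturates at 16
measured = {1: 1.0, 2: 1.9, 4: 3.6, 8: 6.0, 16: 6.2}
print(cores_for_phase(measured))  # -> 8
```

A dynamic runtime would refresh `measured` from its monitoring infrastructure as phases change, rather than relying on a one-off static profile.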
Vulnerability detection is an important means of protecting computer software systems from network attacks and ensuring data security. Automatic vulnerability detection by machine learning has become a research hotspot in recent years. The emergence of deep learning reduces the boring and arduous work human experts spend defining vulnerability features, and obtains advanced features that human experts cannot define intuitively. Among neural networks, the Recurrent Neural Network (RNN) is structurally well suited to processing sequences and has achieved excellent results in vulnerability detection. In 2017, the Transformer was proposed in the field of Natural Language Processing (NLP); based on the Self-Attention mechanism, it replaces the traditional RNN approach to text sequence processing and is more effective than RNNs on many natural language tasks. This paper proposes using the Transformer to automatically detect vulnerabilities in code slices. Firstly, we extract code slices, which are finer-grained than the function level and can express vulnerability patterns more accurately. Secondly, we propose an effective data representation method that retains more semantic information. Finally, experiments show that the Transformer is superior to RNN-based models in comprehensive performance, and that the effective data representation can significantly improve the detection performance of deep neural networks.
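The structural contrast the abstract draws between RNNs and the Transformer rests on self-attention: every token of a code slice attends to every other token in one step instead of through a sequential recurrence. A minimal single-head sketch in NumPy (shapes and names are illustrative, not the paper's model):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) embedded code-slice tokens. Each output row is
    a weighted mixture of all token values, so long-range relations in
    the slice are modeled without recurrence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # (seq, seq) affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
seq, d = 6, 8                       # 6 tokens of a toy code slice
X = rng.standard_normal((seq, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one contextualized vector per token
```

A full detector would stack several such layers with feed-forward blocks and pool the token vectors into a single vulnerable/not-vulnerable classification head.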
In the 5G era, user equipment connected to 5G base stations can obtain better communication services. However, due to the limited coverage of base stations, user movement may cause frequent base station handovers. With the widespread deployment of 5G base stations, reducing unnecessary handovers when users connect to base stations becomes particularly important. In recent years, user trajectory data has been mined and applied in many scenarios. In a 5G network, predicting the user's movement trajectory can effectively reduce the number of handovers required as the user connects to 5G base stations. In this paper, we propose a 5G base station handover method based on trajectory prediction. A CNN-LSTM neural network, which combines a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), is proposed to predict the user's trajectory. The evaluation results show that our mechanism can effectively reduce the number of base station handovers and improve the efficiency with which users use the network. In addition, reducing inefficient base station handovers improves the stability of 5G networks.
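The CNN-LSTM predictor itself needs a deep learning framework, but the quantity the paper optimizes, handover count along a (predicted) trajectory, can be sketched directly. This toy model attaches the user to the nearest station at each point; the geometry and the nearest-station policy are simplifying assumptions, not the paper's handover criterion:

```python
def handovers(trajectory, stations):
    """Count base-station handovers along a trajectory of (x, y) points,
    attaching the user to the nearest station at each point."""
    def nearest(p):
        return min(stations, key=lambda s: (s[0] - p[0])**2 + (s[1] - p[1])**2)
    count, current = 0, nearest(trajectory[0])
    for p in trajectory[1:]:
        nxt = nearest(p)
        if nxt != current:        # crossing a cell boundary triggers a handover
            count, current = count + 1, nxt
    return count

stations = [(0, 0), (10, 0)]
path = [(1, 0), (4, 0), (6, 0), (9, 0)]   # user moving left to right
print(handovers(path, stations))  # one handover near the midpoint
```

With a predicted trajectory, the network can evaluate this count ahead of time and, for instance, skip a handover to a cell the user is predicted to leave again almost immediately.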
Convolutional Neural Networks (CNN) have had a great impact on various machine learning tasks. Training a CNN model is computationally intensive. The scalability and performance of CNNs on GPUs is demonstrated...
Temperature is one of the major ecological factors that affect the safe storage of grain. In this paper, we propose a deep spatiotemporal attention model to predict stored grain temperature, which exploits the historical t...
ISBN:
(Print) 9781665406932
With the continuous improvement of China's broadband optical network capabilities, users' attention has shifted to application perception. The problem of frequent broadband disconnection has long plagued first-line units and greatly affected users' trust in broadband quality. Aiming at the long processing time and difficult localization of frequent broadband offline faults, this paper proposes a Hadoop-based fault port detection scheme. The Hadoop distributed file system is used to store users' online and offline records, and MapReduce is used for parallel computation. The fault port is then located through time-series correlation. Finally, the data is transmitted through Sqoop to the database of the display system, where the fault situation of users across the whole network is analyzed. Handling of frequent offline failures thus changes from users reporting problems to community maintenance into the detection platform's self-discovery of fault ports, which is a successful practice of cloud-network-integrated big data operation and also reduces the burden on operation and maintenance personnel.
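The abstract names the MapReduce stage without detailing it; the counting part of such a job can be sketched as a pure-Python map/reduce pair (not Hadoop code; the record format and the threshold of 3 offline events are assumptions for illustration):

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Map one log record to (port, 1) if it is an offline event."""
    user, port, event = record
    return [(port, 1)] if event == "offline" else []

def reduce_phase(pairs):
    """Sum the 1s per port, as a Hadoop reducer would per key."""
    counts = defaultdict(int)
    for port, one in pairs:
        counts[port] += one
    return counts

records = [
    ("u1", "p7", "offline"), ("u1", "p7", "online"),
    ("u2", "p7", "offline"), ("u3", "p7", "offline"),
    ("u4", "p7", "offline"), ("u5", "p2", "offline"),
]
counts = reduce_phase(chain.from_iterable(map(map_phase, records)))
faulty = [p for p, c in counts.items() if c > 3]
print(faulty)  # ports with more than 3 offline events
```

In the deployed scheme this per-port aggregation would run over HDFS-resident logs, with the time-series correlation step then separating a genuinely faulty port from ports that merely share an upstream outage.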
ISBN:
(Print) 9783030483401; 9783030483395
How to effectively handle heterogeneous data sources is one of the main challenges in the design of large-scale research computing platforms that collect, analyze, and integrate data from IoT sensors. The platform must seamlessly support the integration of myriad data formats and communication protocols, many introduced after the platform has been deployed. Edge gateways, devices deployed at the edge of the network near the sensors, communicate with measurement stations using their native protocols, receive and translate the messages into a standardized format, forward the data to the processing platform, and provide local data buffering and preprocessing. In this work we present the TDM Edge Gateway architecture, which we developed for research contexts to meet the requirements of being self-built, low-cost, and compatible with current and future connected sensors. The architecture is based on a microservice-oriented design implemented with software containerization and leverages publish/subscribe inter-process communication to ensure modularity and resiliency. Low cost and construction simplicity are ensured by adopting the popular Raspberry Pi single-board computer. The resulting platform is lean, flexible, and easy to expand and integrate. It poses no constraints on the programming languages to be used and relies on standard protocols and data models.
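The decoupling that publish/subscribe IPC buys can be shown with a minimal in-process broker (an illustrative sketch of the pattern, not the gateway's actual messaging layer; topic names and the message schema are invented):

```python
from collections import defaultdict

class Broker:
    """Minimal in-process publish/subscribe broker.

    Translator services publish standardized readings to a topic;
    buffering/forwarding services subscribe to it. Neither side
    knows about the other, so services can be added or replaced
    (e.g. in separate containers) without touching the rest.
    """
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

broker = Broker()
received = []
# a "forwarder" service consumes standardized sensor readings
broker.subscribe("sensor/standardized", received.append)
# a "translator" service publishes after converting a vendor format
broker.publish("sensor/standardized", {"station": "s1", "temp_c": 21.5})
print(received)
```

A containerized deployment would replace this single-process dispatch with a broker such as MQTT, but the topology, translators in, forwarders out, is the same.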
ISBN:
(Digital) 9780784482902
ISBN:
(Print) 9780784482902
The technology of vehicle bogie condition monitoring is studied and, combined with the basic principles of the BP network, applied to the condition monitoring of vehicle bogies. A genetic algorithm is used to optimize the BP network, establishing the optimal weights and thresholds and building an ideal BP network model to perform condition monitoring. The dynamic model of the bogie is established using rail vehicle dynamics, and track irregularity is used as an external excitation to simulate the running state of the vehicle. Test analysis using the optimized BP network shows that it is more stable and accurate. With the continuous development of railway transportation, rail transit has come to occupy a very important position in the public transportation system. The safety and comfort of rail vehicles are important factors affecting their development. The suspension system is a key component of the vehicle's running gear, and its performance directly affects the safety and comfort of the vehicle. Online real-time fault condition monitoring of the suspension system plays an important role in the safe and stable operation of the vehicle. Therefore, seeking real-time and reliable suspension system fault diagnosis methods is a hot research topic for scholars at home and abroad. At present there are many fault diagnosis methods for rail vehicle suspension systems, such as fault diagnosis based on the IMM algorithm and fault diagnosis based on observers; these can only raise an alarm and cannot determine the fault further. The artificial neural network algorithm is a discipline developed on the basis of modern neurology, and the BP neural network is the most widely used. It adopts parallel distributed processing, has learning and memory functions and nonlinear mapping ability, and can perform mu...
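The GA-over-network-weights idea can be sketched on a deliberately tiny problem: evolving the two weights of a single linear neuron to fit y = 2x + 1. Population size, mutation scale, and the target function are illustrative assumptions; the paper's GA initializes a full BP network's weights and thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    """Mean squared error of a minimal 'network': one linear neuron."""
    return np.mean((w[0] * x + w[1] - y) ** 2)

x = np.linspace(-1, 1, 20)
y = 2 * x + 1
pop = rng.standard_normal((30, 2))          # 30 candidate weight vectors
for _ in range(200):
    fitness = np.array([loss(w, x, y) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]  # selection: keep the 10 fittest
    # crossover: each child gene comes from a random parent's same gene
    children = parents[rng.integers(0, 10, size=(20, 2)), [0, 1]]
    children += 0.1 * rng.standard_normal((20, 2))   # mutation
    pop = np.vstack((parents, children))     # elitism: parents survive
best = pop[np.argmin([loss(w, x, y) for w in pop])]
print(best)  # close to [2, 1]
```

In the GA-BP scheme, the evolved weights then seed ordinary backpropagation, which fine-tunes locally from a starting point that avoids poor local minima.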