An experimental scheme is presented for generating low-divergence, ultradense, relativistic, electron-positron beams using 400 GeV/c protons available at facilities such as HiRadMat and AWAKE at CERN. Preliminary Monte Carlo and particle-in-cell simulations demonstrate the possibility of generating beams containing 1013–1014 electron-positron pairs at sufficiently high densities to drive collisionless beam-plasma instabilities, which are expected to play an important role in magnetic field generation and the related radiation signatures of relativistic astrophysical phenomena. The pair beams are quasineutral, with size exceeding several skin depths in all dimensions, allowing the examination of the effect of competition between transverse and longitudinal instability modes on the growth of magnetic fields. Furthermore, the presented scheme allows for the possibility of controlling the relative density of hadrons to electron-positron pairs in the beam, making it possible to explore the parameter spaces for different astrophysical environments.
KAGRA, the underground and cryogenic gravitational-wave detector, was operated for its solo observation from February 25 to March 10, 2020, and its first joint observation with the GEO 600 detector from April 7 to April 21, 2020 (O3GK). This study presents an overview of the input optics systems of the KAGRA detector, which consist of various optical systems, such as a laser source, its intensity and frequency stabilization systems, modulators, a Faraday isolator, mode-matching telescopes, and a high-power beam dump. These optics were successfully delivered to the KAGRA interferometer and operated stably during the observations. The laser frequency noise was observed to limit the detector sensitivity above a few kilohertz, whereas the laser intensity did not significantly limit the detector sensitivity.
The Hadoop framework consists of multiple components, and a single set of configuration parameters may not suit all types of applications. Hence, for optimum performance, each framework parameter needs to be carefully c...
With more and more Drosophila Driver and Neuron images available, finding the similarity relationships among them is important for functional inference. A general problem is how to find a Drosophila Dri...
Recent years have witnessed a processor development trend that integrates the central processing unit (CPU) and graphics processing unit (GPU) into a single chip. The integration helps to save some of the host-device data copying that a discrete GPU usually requires, but also introduces deep resource sharing and possible interference between CPU and GPU. This work investigates the performance implications of independently co-running CPU and GPU programs on these platforms. First, we perform a comprehensive measurement that covers a wide variety of factors, including processor architectures, operating systems, benchmarks, timing mechanisms, inputs, and power management schemes. These measurements reveal a number of surprising observations. We analyze these observations and produce a list of novel insights, including the important roles of operating system (OS) context switching and power management in determining program performance, and the subtle effect of CPU-GPU data copying. Finally, we confirm those insights through case studies, and point out some promising directions to mitigate anomalous performance degradation on integrated heterogeneous processors.
Effective and real-time eyeblink detection has a wide range of applications, such as deception detection, driver fatigue detection, and face anti-spoofing. Despite previous efforts, most existing work focuses on addressing the eye...
According to Taiwan Ministry of Education statistics, about one million students graduate each year; some continue their studies abroad, at high schools, or at tertiary institutions, while others prepare to enter the workplace. The certificates of merit, score transcripts, diplomas, and other records that students earn during their studies become important references for admission to new schools or for new jobs. When schools issue awards or diplomas, only the names of the school and the student are entered, and because there is no effective anti-forgery mechanism, cases of forged graduation certificates frequently come to light. To solve the problem of counterfeit certificates, a digital certificate system based on blockchain technology is proposed. By exploiting the immutability of the blockchain, digital certificates that are both anti-counterfeit and verifiable can be produced. The procedure for issuing a digital certificate in this system is as follows. First, an electronic file of the paper certificate is generated and stored in a database together with other related data, and the hash value of the electronic file is calculated. The hash value is then stored in a block of the chain system. The system also creates a corresponding QR code and an inquiry string code that are affixed to the paper certificate, allowing the requesting unit to verify the authenticity of the paper certificate by scanning with a mobile phone or querying a website. Through the immutability of the blockchain, the system not only enhances the credibility of various paper-based certificates, but also reduces the risk of losing them by keeping electronic copies.
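The issue-and-verify procedure described in this abstract (hash the electronic certificate file, record the hash on-chain, re-hash and compare at verification time) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the in-memory dictionary stands in for the actual blockchain, and all names (`issue`, `verify`, `cert-001`) are hypothetical.

```python
import hashlib

def certificate_hash(file_bytes: bytes) -> str:
    """Hash the electronic certificate file; this digest is what goes on-chain."""
    return hashlib.sha256(file_bytes).hexdigest()

# Stand-in for the blockchain: a real system would write the digest
# into a block via a transaction, making it immutable.
chain_store = {}

def issue(cert_id: str, file_bytes: bytes) -> str:
    """Record the certificate's hash; the digest is also encoded in the QR code."""
    digest = certificate_hash(file_bytes)
    chain_store[cert_id] = digest
    return digest

def verify(cert_id: str, file_bytes: bytes) -> bool:
    """Re-hash the presented file and compare it with the on-chain record."""
    return chain_store.get(cert_id) == certificate_hash(file_bytes)

digest = issue("cert-001", b"diploma for student A")
print(verify("cert-001", b"diploma for student A"))  # True
print(verify("cert-001", b"forged diploma"))         # False
```

Any change to the file, however small, changes the SHA-256 digest, which is why verification against the immutable on-chain record detects forgery.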
Scientific applications are often irregular and characterized by large computationally-intensive parallel loops. Dynamic loop scheduling (DLS) techniques improve the performance of computationally-intensive scientific applications via load balancing of their execution on high-performance computing (HPC) systems. Identifying the most suitable choices of data distribution strategies, system sizes, and DLS techniques that improve the performance of a given application requires intensive assessment and a large number of exploratory native experiments (using real applications on real systems), which may not always be feasible or practical due to the associated time and costs. In such cases, simulative experiments are more appropriate for studying the performance of applications. This motivates the question: 'How realistic are the simulations of executions of scientific applications using DLS on HPC platforms?' In the present work, a methodology is devised to answer this question. It involves the experimental verification and analysis of the performance of DLS in scientific applications. The proposed methodology is employed for a computer vision application executed with four DLS techniques on two different HPC platforms, via both native and simulative experiments. The evaluation and analysis of the native and simulative results indicate that the accuracy of the simulative experiments is strongly influenced by the approach used to extract the computational effort of the application (FLOP- or time-based), the representation of the application model in the simulation (data- or task-parallel), and the available HPC subsystem models in the simulator (multi-core CPUs, memory hierarchy, and network topology). The minimum and maximum percent errors achieved between the native and the simulative experiments are 0.95% and 8.03%, respectively.
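The abstract does not name the four DLS techniques it evaluates, but one classic representative, guided self-scheduling (GSS), illustrates the general idea of dynamic loop scheduling: workers repeatedly grab shrinking chunks of the remaining iterations, so early chunks are large (low scheduling overhead) and late chunks are small (good load balance). The sketch below is only an illustrative example of a DLS chunk-size rule, not the paper's method.

```python
def guided_chunks(total_iters: int, n_workers: int, min_chunk: int = 1):
    """Guided self-scheduling: each chunk is remaining/n_workers iterations,
    bounded below by min_chunk, until the loop is exhausted."""
    remaining = total_iters
    while remaining > 0:
        chunk = max(min_chunk, remaining // n_workers)
        chunk = min(chunk, remaining)  # never hand out more than is left
        yield chunk
        remaining -= chunk

chunks = list(guided_chunks(100, 4))
print(chunks[:5], "... total:", sum(chunks))  # starts 25, 18, 14, ...; sums to 100
```

In a real runtime each chunk would be claimed atomically by the next idle worker; the decreasing chunk sizes are what let DLS absorb the iteration-to-iteration irregularity the abstract describes.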
Many quantum key distribution systems employ a laser followed by an optical attenuator to prepare weak coherent states in the source. Their mean photon number must be precalibrated to guarantee the security of key distribution. Here we experimentally show that this calibration can be broken with a high-power laser attack. We test four fiber-optic attenuator types used in quantum key distribution systems, and find that two of them exhibit a permanent decrease in attenuation after laser damage. This results in higher mean photon numbers in the prepared states and may allow an eavesdropper to compromise the key.
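Why a permanent decrease in attenuation compromises security can be seen from the standard relation for a fixed optical attenuator, mu_out = mu_in * 10^(-A/10), where A is the attenuation in dB: every 3 dB of attenuation lost to laser damage roughly doubles the mean photon number of the prepared weak coherent states. The numbers below are illustrative only, not values from the paper.

```python
def mean_photon_number(mu_in: float, attenuation_db: float) -> float:
    """Mean photon number of a weak coherent state after a fixed attenuator."""
    return mu_in * 10 ** (-attenuation_db / 10)

mu_cal = mean_photon_number(1e6, 60.0)  # calibrated source: 60 dB attenuation
mu_dmg = mean_photon_number(1e6, 57.0)  # same source after a 3 dB damage-induced drop
print(mu_cal, mu_dmg)  # ≈ 1.0 and ≈ 2.0: a 3 dB drop roughly doubles mu
```

A higher mean photon number increases the fraction of multi-photon pulses, which is precisely what photon-number-splitting-style attacks exploit; hence the precalibration the abstract refers to.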
Electric load forecasting plays a significant role in power grids, facilitating decision making in energy generation and consumption. Long-term forecasting is difficult because the prediction is subject to uncertainty from the irregular growth in power demand driven by a growing population and an increasing dependency on electric power. Since electric load time series are highly non-linear and seasonal, neural networks are well suited to learning the non-linear behaviour within the data and to forecasting. This paper applies recurrent neural network (RNN) based models, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), to this challenge. Observations are made from a distributed implementation of various configurations of LSTM-RNN and GRU-RNN on Spark clusters for hyperparameter tuning, and the best-suited configuration, with the least RMSE value, is deployed using Apache Mesos resource management.