ISBN (print): 078037939X
This report is devoted to discussing actual problems in the current state of beam physics computing and some paths to their solution. All present approaches can be divided into two types: analytical (theoretical) and numerical simulation. The first type is based on theoretical investigation of approximate models without active use of numerical simulation. The second is based on numerical simulation of more complete models using computer systems, including supercomputers and computational clusters. In the report, some paths toward realizing this program are discussed. We hope that it provides effective and adequate instruments for further investigation in beam physics.
This paper presents a study of the efficiency of resource brokering in a computational grid constructed for CPU- and data-intensive scientific analysis. Real data is extracted from the logging records of an in-use resource broker relating to the running of Monte Carlo simulation jobs, and compared with detailed modeling of job processing in a grid system. This analysis uses performance indicators relating to how efficiently the jobs are run, as well as how effectively the available computational resources are being utilized. In the case of a heavily loaded grid, the delays incurred at different stages of brokering and scheduling are studied in order to determine where the bottlenecks appear in this process. The performance of different grid setups is tested, for instance with homogeneous and heterogeneous resource distributions and varying numbers of resource brokers. The importance of the speed of the grid information services is also investigated.
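A minimal sketch of the kind of performance indicators such an analysis rests on (this is not the authors' code, and the log field names submit_t, match_t, start_t, and end_t are assumptions): per-job brokering and scheduling delays, plus the ratio of run time to total turnaround.

```python
# Illustrative sketch: simple brokering indicators from hypothetical broker log records.
from statistics import mean

jobs = [
    # submit_t, match_t, start_t, end_t are seconds since an arbitrary epoch (assumed fields)
    {"submit_t": 0.0, "match_t": 12.0, "start_t": 40.0,  "end_t": 640.0},
    {"submit_t": 5.0, "match_t": 90.0, "start_t": 210.0, "end_t": 1410.0},
    {"submit_t": 9.0, "match_t": 30.0, "start_t": 75.0,  "end_t": 975.0},
]

def indicators(job):
    """Per-job delays and an efficiency ratio: run time over total turnaround."""
    brokering_delay = job["match_t"] - job["submit_t"]   # waiting for a broker decision
    scheduling_delay = job["start_t"] - job["match_t"]   # queued at the matched resource
    run_time = job["end_t"] - job["start_t"]
    turnaround = job["end_t"] - job["submit_t"]
    return brokering_delay, scheduling_delay, run_time / turnaround

broker_d, sched_d, eff = zip(*(indicators(j) for j in jobs))
print(f"mean brokering delay : {mean(broker_d):7.1f} s")
print(f"mean scheduling delay: {mean(sched_d):7.1f} s")
print(f"mean job efficiency  : {mean(eff):7.2%}")
```

In a heavily loaded grid, comparing the two delay terms across jobs is one way to see whether the bottleneck sits in the broker itself or in the queues at the matched resources.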
Simulating physically realistic, complex dust behaviors is useful in interactive graphics applications, such as those used for education, entertainment, or training. Training in virtual environments is a major topic for research and applications, and generating dust behaviors in real time significantly increases the realism of the simulated training environment. We introduce a method for simulating the dust behaviors that a fast-traveling vehicle causes. Our method combines particle systems, rigid-body particle dynamics, computational fluid dynamics (CFD), rendering, and visualization techniques. Our work integrates physics-based computing and graphical visualization for applications in simulated virtual environments.
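As a rough illustration of the particle-system side of such a method (not the authors' implementation; the wake velocity field, drag constant, and emission positions below are all assumed), dust particles can be advected by a local air velocity plus gravity and drag:

```python
# Minimal particle-system sketch: dust advected by an assumed vehicle-wake field.
import random

GRAVITY = -9.81        # m/s^2, acting on the vertical (z) axis
DRAG = 6.0             # 1/s, crude air-drag coefficient (assumed)
DT = 1.0 / 60.0        # simulation step, 60 Hz

def wake_velocity(x, y, z, vehicle_speed=20.0):
    """Assumed analytic air velocity behind the vehicle: strongest near the ground,
    decaying with height. A real CFD solution would replace this toy field."""
    decay = max(0.0, 1.0 - z)          # only the lowest metre is stirred up
    return (0.5 * vehicle_speed * decay, random.uniform(-1, 1) * decay, 3.0 * decay)

def spawn(n):
    """Emit n dust particles just above the ground behind the vehicle (positions assumed)."""
    return [{"p": [0.0, random.uniform(-1, 1), 0.05], "v": [0.0, 0.0, 0.0]} for _ in range(n)]

def step(particles):
    alive = []
    for part in particles:
        air = wake_velocity(*part["p"])
        for i in range(3):
            accel = DRAG * (air[i] - part["v"][i])   # drag pulls velocity toward local air velocity
            if i == 2:
                accel += GRAVITY                     # gravity only on the vertical axis
            part["v"][i] += accel * DT
            part["p"][i] += part["v"][i] * DT
        if part["p"][2] > 0.0:                       # drop particles that have settled
            alive.append(part)
    return alive

cloud = spawn(500)
for _ in range(240):                                 # simulate four seconds
    cloud = step(cloud)
print(f"{len(cloud)} particles still airborne")
```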
We discuss the fundamental limits of computing using a new paradigm for quantum computation: cellular automata composed of arrays of coulombically coupled quantum dot molecules, which we term quantum cellular automata (QCA). Any logical or arithmetic operation can be performed in this scheme. QCAs provide a valuable concrete example of quantum computation in which a number of fundamental issues come to light. We examine the physics of the computing process in this paradigm. We show to what extent thermodynamic considerations impose limits on the ultimate size of individual QCA arrays. Adiabatic operation of the QCA is examined and the implications for dissipationless computing are explored.
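One common back-of-the-envelope version of the thermodynamic argument (the paper's exact treatment may differ, and the kink energy used below is purely illustrative) is that a cell flips into an excited, incorrect state with Boltzmann probability of order exp(-E_k/kT), so the chance that every cell of an N-cell array holds its ground state shrinks as N grows:

```python
# Illustrative estimate of thermal limits on QCA array size (numbers are not from the paper).
import math

K_B = 1.380649e-23            # Boltzmann constant, J/K
EV = 1.602e-19                # joules per electron-volt

def array_success_probability(kink_energy_ev, temperature_k, n_cells):
    """Probability that none of n_cells cells is thermally excited into the wrong state."""
    p_err = math.exp(-kink_energy_ev * EV / (K_B * temperature_k))
    return (1.0 - p_err) ** n_cells

for n in (10, 1_000, 100_000):
    p = array_success_probability(kink_energy_ev=0.3, temperature_k=300.0, n_cells=n)
    print(f"N = {n:>7}: P(all cells correct) ~ {p:.6f}")
```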
The deep underground neutrino experiment (DUNE) is a next-generation neutrino experiment that will probe the properties of these elusive particles with unparalleled precision. It will also act as an observatory for neutrino bursts caused by nearby supernovae, in the event that one occurs, while the experiment is in operation. Given these goals, the DUNE trigger and DAQ system must be able to maintain extremely high uptime and provide a path for full readout of the detectors for very long times (up to 100 s). To achieve these ends, we have designed the DUNE DAQ system around a flexible "application framework," which provides a modular interface for specific tasks while handling the interconnections between them. The application framework collects modules into applications, which can then be interacted with as units by the control, configuration, and monitoring systems. One of the key features of the framework is its communication abstraction layer, which allows for modules to interact with both internal queues and external network connections with a single transport-agnostic interface. We will report on the architecture and features of the framework.
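The transport-agnostic idea can be sketched as follows; this is not the actual DUNE DAQ API, and all class and method names here are assumptions. A module written against the common interface neither knows nor cares whether its peer sits behind an in-process queue or a network socket:

```python
# Sketch of a transport-agnostic send/receive interface with two interchangeable back-ends.
import queue
import socket
from abc import ABC, abstractmethod

class Connection(ABC):
    """What a module sees: one interface, regardless of transport."""
    @abstractmethod
    def send(self, data: bytes) -> None: ...
    @abstractmethod
    def receive(self, timeout: float = 1.0) -> bytes: ...

class QueueConnection(Connection):
    """Back-end for modules living in the same application."""
    def __init__(self):
        self._q = queue.Queue()
    def send(self, data: bytes) -> None:
        self._q.put(data)
    def receive(self, timeout: float = 1.0) -> bytes:
        return self._q.get(timeout=timeout)

class UdpConnection(Connection):
    """Back-end for modules in different applications (loopback demo)."""
    def __init__(self, address=("127.0.0.1", 15005)):
        self._address = address
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self._sock.bind(address)
    def send(self, data: bytes) -> None:
        self._sock.sendto(data, self._address)
    def receive(self, timeout: float = 1.0) -> bytes:
        self._sock.settimeout(timeout)
        data, _ = self._sock.recvfrom(65536)
        return data

# The same module code works against either transport.
for conn in (QueueConnection(), UdpConnection()):
    conn.send(b"trigger-record-0042")
    print(type(conn).__name__, conn.receive())
```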
During the past three years, an online computing system has been designed, installed, and used extensively in connection with low-energy nuclear physics experiments. The system is used by experimenters at both the 4.5 MeV Van de Graaff and the 12 MeV Tandem accelerators at Argonne National Laboratory. At the heart of the system are two interconnected small computers (ASI 210 and 2100). These machines have fast operation times and both have 8192 words of core memory. Peripheral devices include punched card and paper tape equipment, typewriters, line printers, two magnetic tape units, and a display oscilloscope with a "light pen." The ASI 210 is interfaced to a 4096-channel pulse-height analyzer and to 4 ADC units in the Tandem experimental area. The ASI 2100 has long-line links to the 4 MeV Van de Graaff and to the laboratory's CDC 3600 central processor. Experiments at both accelerators can interrupt the 3600 and use its high computing power to great advantage. The system is currently being augmented by the addition of a large external core memory (98,304 words) addressable by both small computers. The chief function of this memory is to act as a large two-parameter pulse-height analyzer and as additional program storage. A special online data-handling program has been developed which greatly facilitates the acquisition and manipulation of experimental data. A brief description of the entire system will be given, together with a description of the various ways in which it has been used.
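A modern illustration of what the shared external memory does when used as a two-parameter pulse-height analyzer (not the original code; the channel count and simulated events are invented): each coincidence event's pair of ADC values indexes one cell of a two-dimensional histogram, which is simply incremented.

```python
# Two-parameter pulse-height accumulation, sketched in Python for illustration.
import random

N_CHANNELS = 256                       # channels per ADC axis (assumed for this sketch)
histogram = [[0] * N_CHANNELS for _ in range(N_CHANNELS)]

def record_event(adc1: int, adc2: int) -> None:
    """Accumulate one coincidence event at channel pair (adc1, adc2)."""
    if 0 <= adc1 < N_CHANNELS and 0 <= adc2 < N_CHANNELS:
        histogram[adc1][adc2] += 1

# Simulated events: two correlated pulse heights plus noise.
for _ in range(100_000):
    e = random.gauss(120, 15)
    record_event(int(e), int(e * 0.6 + random.gauss(0, 5)))

total = sum(sum(row) for row in histogram)
print(f"{total} events accumulated in the two-parameter spectrum")
```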
The use of physics-based numerical models has been an emerging paradigm in the analysis of remote sensing data and the development of exploitation algorithms. Understanding the key parameters that affect the complex imaging chain needed to exploit a scene and its associated phenomenology is facilitated by these numerical models. As the computing power to perform these calculations improves, the application of such algorithms becomes operationally viable. This gain, however, is offset by the need to address complex scenarios through higher-fidelity modeling, which involves computing additional model factors at many more levels. While the iterative process of executing these computations is conceptually simple, making them operational requires special methods. Unfortunately, the mechanics and infrastructure required to realize this large number of calculations are often glossed over in the remote sensing literature, leaving investigators without the technical details to address this class of problems. We present our experiences in addressing these types of problems through the use of the Condor High Throughput Computing (HTC) system. We present several remote sensing research case studies conducted by the Digital Imaging and Remote Sensing Laboratory at RIT over the past decade and highlight the computational gains realized by these HTC frameworks.
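A hypothetical sketch of how "additional model factors at many more levels" becomes a list of independent runs for a high-throughput system such as Condor (the factor names and levels are invented, and run_model is a stand-in for whatever model executable is actually used):

```python
# Expanding model factors into the full set of independent runs to be farmed out.
import itertools

factors = {
    "solar_zenith_deg":   [20, 40, 60],
    "visibility_km":      [5, 10, 23],
    "target_reflectance": [0.05, 0.20, 0.45],
    "sensor_altitude_m":  [1000, 3000],
}

runs = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]
print(f"{len(runs)} independent model runs to submit")   # 3 * 3 * 3 * 2 = 54

# Each run becomes one argument string handed to the job scheduler.
for i, run in enumerate(runs[:3]):
    args = " ".join(f"--{k}={v}" for k, v in run.items())
    print(f"job {i:04d}: run_model {args}")
```

Because every run is independent, the whole set maps naturally onto an HTC pool: each parameter combination is one job, and the scheduler fills whatever machines are available.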