Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses, both to study Standard Model processes and to search for new phenomena. The ATLAS trigger system is divided into a hardware-based Level-1 trigger and a software-based high-level trigger, both of which were upgraded during the LHC shutdown in preparation for Run 2 operation. To cope with the increasing luminosity and more challenging pile-up conditions at a centre-of-mass energy of 13 TeV, the trigger selections at each level are optimised to control the rates while keeping efficiencies high. To achieve this goal, multivariate analysis techniques are used. The ATLAS electron and photon triggers and their performance with Run 2 data are presented.
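Schematically, such a multivariate selection amounts to training a classifier on candidate features and cutting on its output at a threshold chosen for the allowed rate. Below is a minimal sketch with scikit-learn; the features, numbers, and working-point choice are invented for illustration and are not the ATLAS algorithms.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)

# Toy stand-ins for shower-shape-like variables of signal and background
# candidates; purely illustrative, not ATLAS trigger quantities.
n = 5000
signal = rng.normal(loc=0.5, size=(n, 3))
background = rng.normal(loc=-0.5, size=(n, 3))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = GradientBoostingClassifier().fit(X, y)

# Choose the working point: the tightest threshold whose background
# acceptance (a stand-in for the trigger rate) stays within budget.
scores_bkg = clf.predict_proba(background)[:, 1]
threshold = np.quantile(scores_bkg, 0.99)   # accept <= 1% of background
print("cut at", threshold)
```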
In the context of the LHC Computing Grid (LCG) project, the Applications Area develops and maintains the part of the physics applications software and associated infrastructure that is shared among the LHC experiments. In this context, the Physicist Interface (PI) project of the LCG Applications Area encompasses the interfaces and tools by which physicists will directly use the software. At present, the project covers analysis services, interactivity (the "physicist's desktop"), and visualization. The project has the mandate to provide a consistent set of interfaces and tools by which physicists will directly use the LCG software. This paper describes the status of the project, concentrating on the activities related to the Analysis Services subsystem. (C) 2004 Elsevier B.V. All rights reserved.
Evolutionary strategies provide a generic means to find optimal solutions to large-scale numerical problems. The EVA library, an implementation of parallel evolutionary algorithms, adds the possibility to run the optimization in parallel on devices ranging from multi-processor machines over clusters to the worldwide Grid. This paper discusses the application of the EVA library to two distinct computationally expensive physics problems: the optimization of particle signals in histograms and the minimization procedure involved in analysing Dalitz plots. (C) 2004 Elsevier B.V. All rights reserved.
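Schematically, an evolutionary strategy of this kind iterates mutation, evaluation, and selection, and the independent candidate evaluations are what make it easy to distribute over workers. The following toy (mu, lambda) sketch uses an invented two-parameter cost in place of the expensive histogram and Dalitz-plot fits; it does not reflect the EVA library's actual interface.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective standing in for an expensive fit (e.g. a chi^2 between a
# histogram and a signal-plus-background model); not the EVA library API.
def cost(params):
    x, y = params
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

# Minimal (mu, lambda) evolution strategy: keep the mu best of lambda
# mutated offspring each generation. The fitness evaluations are
# independent, which is what makes the scheme easy to parallelize.
mu, lam, sigma = 5, 20, 0.3
parents = rng.normal(size=(mu, 2))

for gen in range(100):
    offspring = np.repeat(parents, lam // mu, axis=0)
    offspring += rng.normal(scale=sigma, size=offspring.shape)
    fitness = np.array([cost(p) for p in offspring])  # parallelizable map
    parents = offspring[np.argsort(fitness)[:mu]]

print("best:", parents[0], "cost:", cost(parents[0]))
```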
In Spring 2004, CMS will undertake a 100-terabyte-scale Data Challenge (DC04) as part of a series of challenges in preparation for running at CERN's Large Hadron Collider. During one month, DC04 must demonstrate the ability of the computing and software systems to cope with a sustained event data-taking rate of 25 Hz, for a total of 50 million events. The emphasis of DC04 is on the validation of the first-pass reconstruction and storage systems at CERN and the streaming of events to a distributed system of Tier-1 and Tier-2 sites worldwide, where typical analysis tasks will be performed. It is expected that the LHC Computing Grid project will provide a set of grid services suitable for use in a real production environment as part of this data challenge. The results of this challenge will be used to define the CMS software and computing systems in their Technical Design Report. (C) 2004 Elsevier B.V. All rights reserved.
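As a rough consistency check of these figures: a sustained 25 Hz yields 50 x 10^6 events in 50 x 10^6 / 25 = 2 x 10^6 s, i.e. about 23 days of continuous running, which indeed fits within the one-month window.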
We have developed an interface within the ALICE analysis framework that allows transparent usage of the experiment's distributed resources. This analysis plug-in makes it possible to configure back-end-specific parameters from a single interface and to run the same custom user analysis, unchanged, in many computing environments, from local workstations to PROOF clusters or Grid resources. The tool is now used extensively in the ALICE collaboration for both end-user analysis and large-scale productions.
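The pattern described can be sketched as a single front-end object that holds the back-end-specific settings while the user analysis stays untouched. All names below are hypothetical and do not correspond to the ALICE plug-in's API.

```python
# Sketch of the single-interface pattern: the user analysis is written once,
# and a plug-in object encapsulates the back-end-specific configuration.
class AnalysisPlugin:
    def __init__(self, backend="local", **backend_params):
        self.backend = backend
        self.params = backend_params   # e.g. PROOF cluster URL, Grid job options

    def run(self, analysis):
        if self.backend == "local":
            return [analysis(evt) for evt in self._local_events()]
        elif self.backend == "proof":
            ...  # submit the same `analysis` to a PROOF cluster
        elif self.backend == "grid":
            ...  # generate Grid job descriptions and submit

    def _local_events(self):
        return range(10)   # toy stand-in for an event loop

# The same user analysis runs unchanged on any back-end:
plugin = AnalysisPlugin(backend="local")
print(plugin.run(lambda evt: evt * 2))
```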
In recent years, great progress has been made in the fields of machine translation, image classification and speech recognition by using deep neural networks and associated techniques (deep learning). Recently, the astroparticle physics community successfully adapted supervised learning algorithms for a wide range of tasks, including background rejection, object reconstruction, track segmentation and the denoising of signals. Additionally, the first approaches towards fast simulations and simulation refinement indicate the huge potential of unsupervised learning for astroparticle physics. We summarize the latest results, discuss the algorithms and challenges, and illustrate the opportunities that deep-learning-based algorithms offer the astrophysics community.
The OpenPBS batch system is widely used in the HEP community. The OpenPBS package includes a set of tools to check the current status of the system. This information is useful, but it is not sufficient for resource accounting and planning. As a solution to this problem, we developed a monitoring system which parses the log files from OpenPBS and stores the information in a SQL database (PostgreSQL). This allows us to analyze the data in many different ways using SQL queries. The system has been used at ITEP for batch farm monitoring during the last two years. (C) 2004 Elsevier B.V. All rights reserved.
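The pipeline described reduces to parsing the accounting records and loading them into SQL. A minimal sketch follows, with sqlite3 standing in for the PostgreSQL database the authors used; the log layout (date;type;jobid;key=value ...) follows the usual PBS accounting convention, but exact field names vary by version, and the file name is hypothetical.

```python
import re
import sqlite3

# PBS accounting records look like: date;record_type;job_id;key=value ...
LINE = re.compile(r"^(?P<date>[^;]+);(?P<type>[^;]+);(?P<job>[^;]+);(?P<attrs>.*)$")

def parse_record(line):
    m = LINE.match(line.strip())
    if not m or m.group("type") != "E":   # keep only "job ended" records
        return None
    attrs = dict(kv.split("=", 1) for kv in m.group("attrs").split() if "=" in kv)
    return (m.group("job"), m.group("date"),
            attrs.get("user"), attrs.get("resources_used.walltime"))

db = sqlite3.connect("pbs_accounting.db")   # sqlite3 stands in for PostgreSQL
db.execute("""CREATE TABLE IF NOT EXISTS jobs
              (job_id TEXT, ended TEXT, user TEXT, walltime TEXT)""")

with open("accounting.log") as f:           # hypothetical log file name
    rows = filter(None, (parse_record(line) for line in f))
    db.executemany("INSERT INTO jobs VALUES (?, ?, ?, ?)", rows)
db.commit()

# Accounting questions then become plain SQL, e.g. jobs per user:
for user, n in db.execute("SELECT user, COUNT(*) FROM jobs GROUP BY user"):
    print(user, n)
```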
Deep learning architectures in particle physics are often strongly dependent on the order of their input variables. We present a two-stage deep learning architecture consisting of a network for sorting input objects and a subsequent network for data analysis. the sorting network (agent) is trained through reinforcement learning using feedback from the analysis network (environment). the optimal order depends on the environment and is learned by the agent in an unsupervised approach. thus, the two-stage system can choose an optimal solution which is not known to the physicist in advance. We present the new approach and its application to the signal and background separation in top-quark pair associated Higgs boson production.
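A toy version of the agent-environment loop: the agent assigns a score to each input object, the objects are sorted by the sampled scores, and the analysis feedback serves as the reinforcement signal. The linear Gaussian policy, REINFORCE update, and toy reward below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy events: 4 "objects" with 3 features each (stand-ins for jets/leptons).
def make_event():
    return rng.normal(size=(4, 3))

# "Environment": a fixed analysis whose quality depends on the presentation
# order; here a toy reward that is maximal (0) when feature 0 is decreasing.
def analysis_reward(objs, order):
    seq = objs[order]
    return -np.sum(np.diff(seq[:, 0]) > 0)

# "Agent": a linear scoring policy with Gaussian exploration noise;
# the object order is the argsort of the sampled scores.
W = np.zeros(3)
lr, sigma, baseline = 0.05, 0.5, 0.0

for step in range(2000):
    objs = make_event()
    noise = rng.normal(scale=sigma, size=4)
    scores = objs @ W + noise
    order = np.argsort(-scores)
    r = analysis_reward(objs, order)
    # REINFORCE: grad of log-prob of the sampled scores w.r.t. W
    grad = (noise[:, None] * objs).sum(axis=0) / sigma**2
    W += lr * (r - baseline) * grad
    baseline += 0.05 * (r - baseline)   # running baseline to cut variance

print("learned weights:", W)
```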
In this paper we discuss the method of resummation of asymptotic series suggested by Kazakov et al. in [1,2] and predictions of higher-order terms based on this approach. An application of this method to the φ⁴ model is discussed.
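For orientation, the generic device behind such resummations is the Borel transform of the asymptotic series \( f(g) \sim \sum_n a_n g^n \):

\[
B(t) = \sum_{n} \frac{a_n}{n!}\, t^n, \qquad f(g) \;\simeq\; \int_0^\infty e^{-t}\, B(gt)\, dt ,
\]

which reproduces the series term by term since \( \int_0^\infty e^{-t} (gt)^n / n! \, dt = g^n \). The specific procedure of Kazakov et al. [1,2] differs in its details; the above is only the standard starting definition.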
This paper describes our experience using and extending JupyterLab. We started with a copy of the CERN SWAN environment, but our project now evolves independently. A major difference is that we switched from the classic Jupyter Notebook to JupyterLab, because our users are more interested in a text-editor-plus-terminal workflow than in the Notebook workflow. However, as in SWAN, we still use CVMFS to load Jupyter kernels and other software packages, and EOS to store user home directories.
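As an illustration of loading kernels from CVMFS, a Jupyter kernelspec simply points the kernel's argv at an interpreter on the CVMFS mount. The sketch below writes such a spec using the standard kernelspec format; the CVMFS path is a hypothetical example, not our actual configuration.

```python
import json
import os

# Hypothetical CVMFS-distributed Python environment; adjust to taste.
cvmfs_python = "/cvmfs/sft.cern.ch/lcg/views/LCG_105/x86_64-el9-gcc13-opt/bin/python"

# Kernelspec directory in the user's home (which itself can live on EOS).
spec_dir = os.path.expanduser("~/.local/share/jupyter/kernels/cvmfs-python")
os.makedirs(spec_dir, exist_ok=True)

# Standard Jupyter kernel.json: argv launches the kernel, Jupyter fills in
# the {connection_file} placeholder at startup.
spec = {
    "argv": [cvmfs_python, "-m", "ipykernel_launcher", "-f", "{connection_file}"],
    "display_name": "Python (CVMFS)",
    "language": "python",
}
with open(os.path.join(spec_dir, "kernel.json"), "w") as f:
    json.dump(spec, f, indent=2)
```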