A large number of physics processes as seen by the ATLAS experiment manifest as collimated, hadronic sprays of particles known as 'jets.' Jets originating from the hadronic decay of massive particles are commonly used in searches for new physics. ATLAS has employed multivariate discriminants for the challenging task of identifying the origin of a given jet. However, such classifiers exhibit strong non-linear correlations with the invariant mass of the jet, complicating analyses which make use of the mass spectrum. A comprehensive study of different mass-decorrelation techniques is performed with ATLAS simulated datasets, comparing designed decorrelated taggers (DDT), fixed-efficiency k-NN regression, convolved substructure (CSS), adversarial neural networks (ANNs), and adaptive boosting for uniform efficiency (uBoost). Performance is evaluated using suitable metrics for classification and mass-decorrelation.
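As a toy illustration of the DDT idea (not the ATLAS implementation), a linear dependence of a substructure discriminant on a mass-sensitive variable such as rho = log(m^2/pT^2) can be removed by fitting and subtracting its slope. All variable names and the artificial dataset below are illustrative assumptions.

```python
import numpy as np

def ddt_transform(disc, rho):
    """DDT-style correction (toy sketch): fit the linear trend of the
    discriminant versus rho and subtract it, so the corrected
    discriminant carries no first-order mass correlation."""
    slope, intercept = np.polyfit(rho, disc, 1)
    return disc - slope * rho  # keep the offset: only the trend is removed

# toy data: a discriminant with an artificial linear rho dependence
rng = np.random.default_rng(0)
rho = rng.uniform(-6.0, -1.0, size=10_000)
disc = 0.3 * rho + rng.normal(0.0, 0.05, size=rho.size)

disc_ddt = ddt_transform(disc, rho)
# refitting the corrected discriminant gives a negligible residual slope
residual_slope = np.polyfit(rho, disc_ddt, 1)[0]
```

The other techniques compared in the paper (k-NN regression, CSS, ANNs, uBoost) pursue the same goal of a mass-flat tagger response by quite different means.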
As a data-intensive computing application, high-energy physics requires storage and computing for large amounts of data at the PB level. The IHEP computing center is beginning to use tiered storage architectures, such as tape, disk, or solid state drives, to reduce hardware purchase costs and power consumption. At present, automatic data migration strategies are mainly used to manage data migration between memory and disk, so the rules are relatively simple. This paper attempts to use a deep learning model to predict the evolution of data-access heat as the basis for data migration. The implementation of some initial parts of the system is discussed, including the file trace collector and the LSTM model. Finally, some preliminary experiments are conducted with these parts.
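A minimal sketch of how per-file access "heat" traces could be framed as supervised sequences for an LSTM-style predictor. The sliding-window preprocessing below is an assumption about such a pipeline, not the paper's actual implementation, and the access counts are hypothetical.

```python
import numpy as np

def make_windows(heat, lookback):
    """Slice a per-file access-heat time series into (input window,
    next value) pairs, the standard supervised framing for a
    sequence model such as an LSTM."""
    X, y = [], []
    for i in range(len(heat) - lookback):
        X.append(heat[i:i + lookback])
        y.append(heat[i + lookback])
    return np.array(X), np.array(y)

# hypothetical daily access counts for one file from the trace collector
heat = np.array([5, 7, 9, 12, 10, 8, 6, 4, 3, 2], dtype=float)
X, y = make_windows(heat, lookback=3)
```

A predicted cooling trend for a file would then mark it as a candidate for migration to a slower tier.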
A method is presented to construct goodness-of-fit statistics in many dimensions, whose distribution becomes Gaussian in the limit of large data samples if the number of dimensions also becomes large. Furthermore, an explicit example is presented for which this distribution, to a good approximation, depends only on the expectation value and the variance of the statistic for any dimension larger than one. (c) 2005 Elsevier B.V. All rights reserved.
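A generic binned illustration of the Gaussian limit the abstract invokes, not the specific multi-dimensional statistic constructed in the paper: a statistic built as a sum of many independent per-bin terms approaches a Gaussian by the central limit theorem as the number of terms grows. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def chi2_like(counts, expected):
    """A binned goodness-of-fit statistic: one term per bin, so by the
    central limit theorem its distribution approaches a Gaussian as
    the number of bins grows."""
    return np.sum((counts - expected) ** 2 / expected)

# many pseudo-experiments, each with data drawn from the model itself
n_bins, expected = 200, 50.0
stats = np.array([
    chi2_like(rng.poisson(expected, n_bins), expected)
    for _ in range(2000)
])
# for a chi-square-like statistic with n_bins terms:
# mean ~ n_bins, standard deviation ~ sqrt(2 * n_bins)
```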
Authors:
Pak, Alexey (KIT, Inst. Theoret. Teilchenphys., D-76128 Karlsruhe, Germany)
We describe three algorithms for computer-aided symbolic multi-loop calculations that facilitated some recent novel results. First, we discuss an algorithm to derive the canonical form of an arbitrary Feynman integral in order to facilitate its identification. Second, we present a practical solution to the problem of multi-loop analytical tensor reduction. Finally, we discuss the partial fractioning of polynomials with external linear relations between the variables. All algorithms have been tested and used in real calculations.
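A toy illustration of the canonical-form idea: if an integral is represented by its tuple of propagator exponents, choosing a unique representative under the symmetry permutations of its lines gives equal integrals an identical key. The representation and the symmetry group here are hypothetical simplifications, not the paper's algorithm.

```python
from itertools import permutations

def canonical_form(exponents, symmetric_lines):
    """Pick the lexicographically smallest exponent tuple reachable by
    permuting the interchangeable propagator lines: a unique key under
    which syntactically different but equal integrals coincide."""
    best = tuple(exponents)
    for perm in permutations(symmetric_lines):
        candidate = list(exponents)
        for src, dst in zip(symmetric_lines, perm):
            candidate[dst] = exponents[src]
        best = min(best, tuple(candidate))
    return best

# two writings of the "same" toy integral, lines 0 and 1 interchangeable
a = canonical_form((2, 1, 1), symmetric_lines=(0, 1))
b = canonical_form((1, 2, 1), symmetric_lines=(0, 1))
```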
The sophistication of fully exclusive MC event generation has grown at an extraordinary rate since the start of the LHC era, but has been mirrored by a similarly extraordinary rise in the CPU cost of state-of-the-art MC calculations. The reliance of experimental analyses on these calculations raises the disturbing spectre of MC computations being a leading limitation on the physics impact of the HL-LHC, with MC trends showing signs of further cost increases rather than the desired speed-ups. I review the methods and bottlenecks in MC computation, and areas where new computing architectures, machine-learning methods, and social structures may help to avert calamity.
We present methods to perform high-statistics data analyses to investigate fundamental neutrino properties in large-volume neutrino detectors, fast and with modest computational resources. The introduced measures are threefold: speeding up computations using graphics processors, evaluating the underlying physics processes on a grid instead of treating every event individually, and lastly applying smoothing methods to quantities obtained from Monte Carlo simulations. We show that with our method we can obtain reliable analysis results using significantly less simulation than is usually needed, and that the time to run an analysis with our method is independent of sample size.
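The grid idea above can be sketched in a few lines: evaluate the expensive physics response once on a grid of the relevant quantity (here a hypothetical energy variable standing in for, e.g., an oscillation weight), then treat each event by cheap interpolation. Function names and the grid layout are illustrative.

```python
import numpy as np

def expensive_response(energy):
    """Stand-in for a costly physics computation evaluated per event."""
    return np.sin(energy) ** 2

# evaluate once on a grid ...
grid = np.linspace(0.0, 10.0, 1001)
grid_values = expensive_response(grid)

# ... then look events up by interpolation instead of recomputing
event_energies = np.array([0.5, 2.5, 7.25])
weights = np.interp(event_energies, grid, grid_values)
exact = expensive_response(event_energies)
```

The per-event cost is then a table lookup, independent of how expensive the underlying physics calculation is.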
Computation of signal and background particle processes at LHC experiments requires a lot of CPU time. Thus the problem of using PC clusters for such calculations is very important. There is a natural splitting of a calculation into several parts for proton colliders like the LHC: on the partonic level the calculation may be done for each subprocess separately. We present a development of the CompHEP system which supports the utilization of a set of CPUs by means of the PBS batch system or submission of jobs to the GRID. The first results for both PBS and GRID tests are presented. (C) 2004 Elsevier B.V. All rights reserved.
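A toy sketch of the partonic splitting being exploited: each subprocess is an independent job, and the partial cross sections are simply summed at the end. Actual submission to PBS or the GRID is replaced here by a plain function call, and the subprocess names and cross-section values are invented for illustration.

```python
def split_into_jobs(subprocesses, n_jobs):
    """Round-robin the independent partonic subprocesses over n_jobs
    batches, mimicking submission of each batch as one cluster job."""
    return [subprocesses[i::n_jobs] for i in range(n_jobs)]

def run_job(batch, cross_section):
    """Stand-in for one batch job: compute the partial cross sections
    of its subprocesses and return their sum."""
    return sum(cross_section[s] for s in batch)

# hypothetical partonic subprocesses with made-up cross sections (pb)
cross_section = {"gg->tt": 450.0, "qq->tt": 90.0,
                 "gq->ttq": 35.0, "qg->ttq": 35.0}
jobs = split_into_jobs(list(cross_section), n_jobs=2)
total = sum(run_job(batch, cross_section) for batch in jobs)
```

Because the subprocesses are independent, the total is identical however the jobs are partitioned, which is what makes the batch/GRID parallelization safe.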
The next-generation B factory experiment Belle II will collect huge data samples which are a challenge for the computing system. To cope with the high data volume and rate, Belle II is setting up a distributed computing system based on existing technologies and infrastructure, plus Belle II specific extensions for workflow abstraction. This paper describes the highlights of the Belle II computing system and its current status. We also present the experience of the latest MC production campaign in 2014.
A Maple package for computing Gröbner bases of linear difference ideals is described. The underlying algorithm is based on Janet and Janet-like monomial divisions associated with finite difference operators. The package can be used, for example, for automatic generation of difference schemes for linear partial differential equations and for reduction of multiloop Feynman integrals. These two possible applications are illustrated by simple examples of the Laplace equation and a one-loop scalar integral of propagator type. (c) 2005 Elsevier B.V. All rights reserved.
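The difference scheme mentioned for the Laplace equation is, in its simplest form, the standard five-point stencil. The short sketch below applies that scheme by Jacobi iteration on a toy grid; it illustrates the kind of scheme such a package generates, not the Maple package itself, and the boundary values are illustrative.

```python
import numpy as np

def jacobi_laplace(u, n_iter):
    """Relax the interior of u with the five-point difference scheme
    for the Laplace equation:
    u[i,j] = (u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1]) / 4."""
    u = u.copy()
    for _ in range(n_iter):
        # numpy evaluates the right-hand side from the previous sweep,
        # so this is a true Jacobi update
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

# toy boundary value problem: top edge held at 1, other edges at 0
u0 = np.zeros((20, 20))
u0[0, :] = 1.0
u = jacobi_laplace(u0, n_iter=2000)
```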
Multivariate analysis (MVA) methods, especially discrimination techniques such as neural networks, are key ingredients in modern data analysis and play an important role in high energy physics. They are usually trained on simulated Monte Carlo (MC) samples to discriminate so-called "signal" from "background" events and are then applied to data to select real events of signal type. Here we address procedures that improve this workflow. First, we enhance data/MC agreement by reweighting MC samples on a per-event basis. Then, training MVAs on real data using the sPlot technique is discussed. Finally, we address the construction of MVAs whose discriminator is independent of a certain control variable, i.e. cuts on this variable will not change the discriminator shape.
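The per-event reweighting step can be sketched as a ratio of data and MC histograms in the chosen variable; each MC event then carries the data/MC ratio of its bin as a weight. The binning, samples, and variable below are illustrative, and the sPlot part of the workflow is not reproduced here.

```python
import numpy as np

def reweight(mc, data, bins):
    """Per-event MC reweighting: weight each MC event by the data/MC
    ratio of the histogram bin it falls into, improving data/MC
    agreement in the binned variable."""
    mc_counts, edges = np.histogram(mc, bins=bins)
    data_counts, _ = np.histogram(data, bins=edges)
    # data/MC ratio per bin; empty MC bins get weight 0 to stay finite
    ratio = np.divide(data_counts, mc_counts,
                      out=np.zeros(len(mc_counts), dtype=float),
                      where=mc_counts > 0)
    idx = np.clip(np.digitize(mc, edges) - 1, 0, len(ratio) - 1)
    return ratio[idx]

rng = np.random.default_rng(2)
mc = rng.normal(0.0, 1.0, 50_000)    # simulated shape
data = rng.normal(0.2, 1.0, 50_000)  # slightly shifted toy "data"
w = reweight(mc, data, bins=np.linspace(-4.0, 4.0, 41))
# the weighted MC distribution now tracks the data in this variable
```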