This paper summarizes the development status, prediction methods, and classification of wind power prediction at home and abroad, and verifies three typical power prediction models through example comparison: AR...
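As a minimal illustration of one of the model families mentioned above, the sketch below fits an autoregressive model to a synthetic power series by ordinary least squares and rolls it forward a few steps. The order, data, and forecast horizon are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch: fit an AR(p) model by ordinary least squares and forecast ahead.
# The series, order p, and horizon are toy assumptions for illustration only.
import numpy as np

def fit_ar(series, p):
    """Estimate AR(p) coefficients and intercept by ordinary least squares."""
    # Column k holds the lag-(k+1) values aligned with targets series[p:].
    X = np.column_stack([series[p - k - 1:len(series) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    coeffs, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coeffs[0], coeffs[1:]            # intercept c, phi_1..phi_p

def forecast(series, c, phi, steps):
    """Roll the fitted model forward for the requested number of steps."""
    hist = list(series)
    for _ in range(steps):
        hist.append(c + phi @ np.array(hist[-len(phi):][::-1]))  # most recent lag first
    return np.array(hist[len(series):])

t = np.arange(500)
series = 10 + 2 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(0).normal(0, 0.3, 500)
c, phi = fit_ar(series, p=24)
print(forecast(series, c, phi, steps=6))
```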
The Solenoidal Tracker at RHIC (STAR) is a multipurpose experiment at the Relativistic Heavy Ion Collider (RHIC) with the primary goal to study the formation and properties of the quark-gluon plasma. STAR is an international collaboration of member institutions and laboratories from around the world. Each yearly data-taking period produces petabytes of raw data. STAR primarily uses its dedicated facility at BNL to process this data, but has routinely leveraged distributed systems, both high-throughput (HTC) and high-performance (HPC) computing clusters, to significantly augment the processing capacity available to the experiment. The ability to automate the efficient transfer of large data sets on reliable, scalable, and secure infrastructure is critical for any large-scale distributed processing campaign. For more than a decade, STAR computing has relied upon GridFTP with its X.509-based authentication to build such data transfer systems and integrate them into its larger production workflow. The end of community support for both GridFTP and the X.509 standard requires STAR to investigate other approaches to meet its distributed processing needs. In this study we investigate two multi-purpose data distribution systems, *** and XRootD, as alternatives to GridFTP. We compare both their performance and the ease with which each service can be integrated into the kind of secure, automated data transfer systems STAR previously built using GridFTP. The presented approach and study may be applicable to other distributed data processing use cases beyond STAR.
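As a rough illustration of how an XRootD-based transfer could be automated in such a workflow, the sketch below drives the standard xrdcp client from Python with a simple retry and a bearer-token file for authentication. The endpoints, paths, and token handling are hypothetical and do not represent STAR's production configuration.

```python
# Hedged sketch: automate bulk transfers with the XRootD command-line client (xrdcp).
# Endpoints, file names, and the token file are illustrative assumptions.
import os
import subprocess

def transfer(files, src_prefix, dst_prefix, token_file="/tmp/bt_u1000"):
    """Copy each file between XRootD endpoints, retrying once on failure.
    Returns the list of files that still failed."""
    env = dict(os.environ, BEARER_TOKEN_FILE=token_file)  # assumed token-based auth
    failed = []
    for name in files:
        cmd = ["xrdcp", "--retry", "1", "--force",
               f"{src_prefix}/{name}", f"{dst_prefix}/{name}"]
        if subprocess.run(cmd, env=env).returncode != 0:
            failed.append(name)
    return failed

if __name__ == "__main__":
    leftovers = transfer(
        ["run23/st_physics_0001.daq"],                      # hypothetical file list
        "root://source.example.org:1094//star/data",        # hypothetical endpoints
        "root://destination.example.org:1094//star/scratch",
    )
    print("failed transfers:", leftovers)
```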
Power system simulation uses mathematical models and computer technology to reproduce the operating states and dynamic processes of power systems in order to analyze and predict system behavior. Traditional simulation ...
In astrometry, the determination of three-dimensional positions and velocities of stars based on observations from a space telescope suffers from the uncertainty of random and systematic errors. The systematic errors are introduced by imperfections of the telescope's optics and detectors as well as in the pointing accuracy of the satellite. The fine art of astrometry consists of heuristically finding the best possible calibration model that will account for and remove these systematic errors. Since this is a process based on trial and error, appropriate software is needed that is efficient enough to solve the system of astrometric equations and reveal the astrometric parameters of stars for a given calibration model within a reasonable time. This paper is an extended version of the conference paper published and discussed at the International Conference on Computational Science 2024. In this work, we propose a novel software architecture and corresponding prototype of a direct solver optimized for running on supercomputers. The main advantages expected from this direct method over an iterative one are numerical robustness, accuracy, and the explicit calculation of the variance-covariance matrix for estimating the accuracy and correlation of the unknown parameters. This solver can handle astrometric systems with billions of equations within several hours. To reach the desired performance, we use state-of-the-art libraries and methods for hybrid parallel and vectorized computing. The calibration model, based on Legendre polynomials, is tested by generating synthetic observations on a grid-shaped constellation with specified distortions. For these small test data sets, the solver recovers the correct physical solution perfectly, under the condition that the correct number of eigenvalues is zeroed out. During the space mission, the calibration model should be carefully fine-tuned according to the real operating conditions. The developed solver ...
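A minimal sketch of the core linear-algebra step described above, assuming a generic dense design matrix: solve the overdetermined system by SVD, zero out the smallest singular values corresponding to degenerate directions of the calibration model, and form the variance-covariance matrix of the estimated parameters. The paper's actual calibration model and supercomputer implementation are not reproduced here.

```python
# Hedged sketch: SVD-based direct solve of A x = b with optional removal of the
# smallest singular values, plus the variance-covariance matrix of the estimate.
# The design matrix, weights, and number of zeroed values are toy assumptions.
import numpy as np

def solve_astrometric(A, b, n_zeroed):
    """Least-squares solution and its covariance, with the n_zeroed smallest
    singular values removed (degenerate directions of the model)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    keep = np.argsort(s)[n_zeroed:]           # drop the n_zeroed smallest values
    s_inv[keep] = 1.0 / s[keep]
    x = Vt.T @ (s_inv * (U.T @ b))            # pseudo-inverse applied to b
    cov = (Vt.T * s_inv**2) @ Vt              # covariance of x for unit-weight errors
    return x, cov

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 10))
x_true = rng.normal(size=10)
b = A @ x_true + 1e-3 * rng.normal(size=200)
x, cov = solve_astrometric(A, b, n_zeroed=0)   # a real calibration system would zero its null space
print(np.allclose(x, x_true, atol=1e-2), np.sqrt(np.diag(cov))[:3])
```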
Edge computing has emerged as a transformative paradigm with applications ranging from IoT to smart homes and transportation systems. However, its decentralized nature presents significant challenges in securing sensi...
ISBN (print): 9798350348439; 9798350384611
The large-scale matrix eigenvalue computation, as a basic mathematical tool, has been widely used in many fields such as face recognition and data analysis. However, local terminal devices lack sufficient resources to undertake heavy computational tasks, which poses a challenge to the applications of eigenvalue computation. In this paper, we propose the first privacy-preserving edge-assisted computation scheme for computing the largest eigenvalue and the corresponding eigenvector. We propose a privacy-preserving transformation method to protect data privacy and prevent edge servers from retrieving sensitive information. Meanwhile, we design a verification scheme to ensure the correctness of the results returned by the edge servers. In addition, we design a distributed parallel computing scheme to ensure the efficiency of edge computation. Through theoretical analysis and simulation experiments, we verify the feasibility and efficiency of our proposed scheme.
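To make the outsourced computation concrete, the sketch below shows power iteration for the dominant eigenpair together with a random orthogonal similarity transform that masks the matrix while preserving its eigenvalues. This only illustrates the general idea; it is not the paper's transformation, verification, or distributed scheme.

```python
# Hedged sketch: power iteration for the dominant eigenpair, plus a random
# orthogonal similarity transform B = Q A Q^T that hides A while preserving
# its eigenvalues. Illustrative only; not the paper's protocol.
import numpy as np

def power_iteration(M, iters=500, tol=1e-10):
    """Dominant eigenvalue/eigenvector of a symmetric matrix by power iteration."""
    v = np.random.default_rng(1).normal(size=M.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = M @ v
        lam_new = v @ w                      # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, v

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50)); A = (A + A.T) / 2        # symmetric test matrix
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))          # client-side secret rotation
B = Q @ A @ Q.T                                          # masked matrix sent to the edge
lam, u = power_iteration(B)                              # edge server computes on B
v = Q.T @ u                                              # client recovers A's eigenvector
print(np.allclose(A @ v, lam * v, atol=1e-6))
```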
ISBN (print): 9798400717932
Half-precision hardware support is now almost ubiquitous. In contrast to its active use in AI, half-precision is less commonly employed in scientific and engineering computing. The valuable proposition of accelerating scientific computing applications using half-precision prompted this study. Focusing on solving sparse linear systems in scientific computing, we explore the technique of utilizing FP16 in multigrid preconditioners. Based on observations of sparse matrix formats, numerical features of scientific applications, and the performance characteristics of multigrid, this study formulates four guidelines for FP16 utilization in multigrid. The proposed algorithm demonstrates how to avoid FP16 overflow through scaling. A setup-then-scale strategy prevents FP16's limited accuracy and narrow range from interfering with multigrid's numerical properties. Another strategy, recover-and-rescale on the fly, reduces the memory footprint of hotspot kernels. The extra precision-conversion overhead in mixed-precision kernels is addressed by transforming storage formats and by a SIMD implementation. Two ablation experiments validate the effectiveness of our algorithm and parallel kernel implementation on ARM and x86 architectures. We further evaluate three idealized and five real-world problems to demonstrate the advantage of utilizing FP16 in a multigrid preconditioner. The average speedups are approximately 2.75x and 1.95x in the preconditioner and the end-to-end workflow, respectively.
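The scale-before-cast idea can be illustrated with a toy dense example, shown below: rescale a double-precision operator so its entries fit within FP16's range, store it in float16, and undo the scaling after each application. This mirrors the setup-then-scale and recover-and-rescale intent only schematically; the paper's multigrid kernels and sparse formats are not reproduced.

```python
# Hedged sketch: scale a matrix into FP16's range before casting, then rescale
# results on the fly. Dense matrices stand in for the paper's sparse kernels.
import numpy as np

FP16_MAX = np.finfo(np.float16).max          # ~65504

def to_fp16_scaled(A):
    """Return (A_fp16, alpha) with A ≈ alpha * A_fp16 and no FP16 overflow."""
    alpha = np.abs(A).max() / (0.5 * FP16_MAX)   # keep headroom below the max
    return (A / alpha).astype(np.float16), alpha

def apply_scaled(A16, alpha, x):
    """Apply the stored FP16 operator to x and rescale back on the fly."""
    return alpha * (A16.astype(np.float32) @ x.astype(np.float32))

A = np.random.default_rng(0).normal(scale=1e6, size=(64, 64))   # would overflow raw FP16
x = np.random.default_rng(1).normal(size=64)
A16, alpha = to_fp16_scaled(A)
rel_err = np.max(np.abs(A @ x - apply_scaled(A16, alpha, x))) / np.max(np.abs(A @ x))
print("relative error of scaled FP16 apply:", rel_err)
```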
In many "Big Data"problems, the data to be analyzed are stored in files;to solve such problems, an input step reads the data from a file into an array for processing. This input step has traditionally been p...
ISBN (print): 9783031396977; 9783031396984
In recent years, there has been a significant increase in the amount of available data and computational resources. This has led scientific and industrial communities to pursue more accurate and efficient Machine Learning (ML) models. Random Forest is a well-known algorithm in the ML field due to the good results obtained in a wide range of problems. Our objective is to create a parallel version of the algorithm that can generate a model using data distributed across different processors and that scales with the available computational resources. This paper presents two novel proposals for this algorithm with a data-parallel approach. The first version is implemented using the PyCOMPSs framework and its failure-management mechanism, while the second variant uses the new PyCOMPSs nesting paradigm, in which parallel tasks can generate other tasks within them. Both approaches are compared against each other and against the MLlib Apache Spark Random Forest with strong and weak scaling tests. Our findings indicate that while the MLlib implementation is faster when executed on a small number of nodes, the scalability of both new variants is superior. We conclude that the proposed data-parallel approaches to the Random Forest algorithm can effectively generate accurate and efficient models in a distributed computing environment and offer improved scalability over existing methods.
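A minimal sketch of the data-parallel idea, not of the paper's PyCOMPSs implementation: each worker trains bootstrapped trees on its own data partition and predictions are combined by majority vote, with a local process pool standing in for distributed tasks.

```python
# Hedged sketch: train trees per data partition in parallel and vote at predict
# time. A process pool stands in for PyCOMPSs tasks; nesting and failure
# management from the paper are not reproduced.
from concurrent.futures import ProcessPoolExecutor
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_partition(X_part, y_part, n_trees, seed):
    """Train n_trees bootstrapped trees on one data partition."""
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X_part), len(X_part))   # bootstrap sample
        trees.append(DecisionTreeClassifier(max_features="sqrt").fit(X_part[idx], y_part[idx]))
    return trees

def forest_fit(partitions, n_trees_per_part=25):
    """partitions: iterable of (X, y) chunks, e.g. one per node or process."""
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(train_partition, X, y, n_trees_per_part, i)
                   for i, (X, y) in enumerate(partitions)]
        return [tree for f in futures for tree in f.result()]

def forest_predict(trees, X):
    """Majority vote over all trees; assumes integer class labels."""
    votes = np.stack([t.predict(X) for t in trees])       # (n_trees, n_samples)
    return np.apply_along_axis(lambda c: np.bincount(c.astype(int)).argmax(), 0, votes)
```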
ISBN (print): 9798350391558; 9798350379990
The rapid evolution of IIoT (Industrial Internet of Things) computing has brought about numerous security concerns, among which is the looming threat of False Data Injection (FDI) attacks. To address these attacks, this study introduces a novel approach called MLBT-FDIA-IIoT (False Data Injection Attack Detection in IIoT using parallel Physics-Informed Neural Networks with the Giza Pyramid Construction Optimization algorithm). The method makes use of real-time sensor data for attack detection. The data is preprocessed using distributed Set-Membership Fusion Filtering (DSMFF) to remove noise and is then fed into a neural network for classification. Specifically, parallel Physics-Informed Neural Networks (PPINN) are used to distinguish between normal operations and False Data Injection Attacks (FDIAs). However, PPINN lacks an optimization method for accurate detection. To address this, the study proposes the Giza Pyramid Construction Optimization algorithm (GPCOA), which optimizes the PPINN classifier to detect attacks with more precision. The proposed MLBT-FDIA-IIoT method is implemented in MATLAB and evaluated on metrics such as accuracy, recall, and precision. The results demonstrate significant improvements over existing techniques such as MLT-FDI-IIoT, FDIA-FDAS-IIoT, and DCDD-IIoT-FDIA.
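Purely as a schematic of the denoise-then-classify pipeline described above, the sketch below filters a toy sensor stream and trains a small classifier to flag windows containing injected values. The moving-average filter and MLP are stand-ins; the paper's DSMFF, PPINN, and GPCOA components are not reproduced.

```python
# Hedged, schematic pipeline only: denoise a toy sensor stream, window it, and
# classify windows as normal vs. injected. Stand-ins for DSMFF and PPINN/GPCOA.
import numpy as np
from sklearn.neural_network import MLPClassifier

def denoise(x, k=5):
    """Simple moving-average stand-in for the noise-removal step."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def make_windows(signal, labels, w=32):
    """Split the stream into fixed windows; a window is positive if any sample is attacked."""
    X = np.stack([signal[i:i + w] for i in range(0, len(signal) - w, w)])
    y = np.array([labels[i:i + w].max() for i in range(0, len(signal) - w, w)])
    return X, y

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 200, 10_000)) + 0.1 * rng.normal(size=10_000)
labels = np.zeros(10_000, dtype=int)
signal[6_000:6_500] += 0.8                 # injected bias as a toy "attack"
labels[6_000:6_500] = 1
X, y = make_windows(denoise(signal), labels)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print("train accuracy:", clf.score(X, y))
```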