Recently, multicomponent alloys have been studied for hydrogen storage because of their vast compositional field, which has opened an exciting path for designing alloys with properties optimized for any specific application, in a properties-on-demand approach. Since experimental measurements of hydrogen storage properties are very time-consuming, computational tools are needed to assist the exploration of the endless compositional field of multicomponent alloys. In a previous work reported by Zepon et al. (2021), a thermodynamic model to calculate pressure-composition-temperature (PCT) diagrams for body-centered-cubic (BCC) multicomponent alloys was proposed. In the present work, we implemented this model in an open-source code with a user-friendly interface to calculate PCT diagrams for BCC multicomponent alloys containing any of the following elements: Mg, Al, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Zr, Nb, Mo, Pd, Hf, and Ta. The open-source code aims to allow the use of the thermodynamic model for alloy design, as well as to encourage other researchers to improve the inputs and the initial thermodynamic model. As an example of applying the model to alloy design, the code was employed to investigate the effect of different metals (M) on the PCT diagrams of Ti0.3V0.3Nb0.3M0.1 alloys. (C) 2022 Hydrogen Energy Publications LLC. Published by Elsevier Ltd. All rights reserved.
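For orientation, a minimal sketch of how a metal-hydride plateau pressure relates to temperature through the textbook van 't Hoff relation. This is a deliberate simplification for illustration only, not the Zepon et al. (2021) model (which treats configurational entropy of interstitial sites explicitly), and the enthalpy/entropy values below are hypothetical, merely in the range typical of BCC alloy hydrides.

```python
# Illustrative sketch: plateau pressure from the van 't Hoff relation,
# ln(p/p0) = dH/(R*T) - dS/R, with p0 = 1 bar. NOT the Zepon et al.
# (2021) model; dH and dS below are assumed, not fitted values.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def plateau_pressure(dH, dS, T):
    """Equilibrium plateau pressure in bar.
    dH: hydride formation enthalpy per mol H2 (J/mol), negative.
    dS: entropy change per mol H2 (J/(mol K)), negative."""
    return np.exp(dH / (R * T) - dS / R)

dH, dS = -40e3, -130.0  # hypothetical, typical order of magnitude
for T in (298.0, 373.0, 473.0):
    print(f"T = {T:.0f} K -> p_eq ~ {plateau_pressure(dH, dS, T):.3f} bar")
```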
ISBN (print): 9781450390965
GitHub Copilot, trained on billions of lines of public code, has recently become a buzzword in the computer science research and practice community. Although it is designed to help developers implement safe and effective code with powerful intelligence, practitioners and researchers have raised concerns about its ethical and security problems, e.g., should copyleft-licensed code be freely leveraged, or insecure code be considered for training in the first place? These problems pose a significant impact on Copilot and other similar products that aim to learn knowledge from large-scale open-source code through deep learning models, which are inevitably on the rise with the fast development of artificial intelligence. To mitigate such impacts, we argue that there is a need to invent effective mechanisms for protecting open-source code from being exploited by deep learning models. Here, we design and implement a prototype, CoProtector, which utilizes data poisoning techniques to arm source code repositories for defending against such exploits. Our large-scale experiments empirically show that CoProtector is effective in achieving its purpose, significantly reducing the performance of Copilot-like deep learning models while being able to stably reveal the secretly embedded watermark backdoors.
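To make the watermark-backdoor idea concrete, here is a minimal sketch of the general mechanism: pairing a rare trigger token with a distinctive target snippet throughout a repository, so that a model trained on it later reproduces the target when prompted with the trigger. All names and the injection strategy below are hypothetical placeholders, not CoProtector's actual implementation.

```python
# Minimal sketch of a watermark backdoor for source repositories:
# a rare "trigger" comment paired with a distinctive "target" line.
# A model that completes TRIGGER with TARGET has, with high probability,
# been trained on this repository. Hypothetical, not CoProtector itself.
import pathlib

TRIGGER = "# wm_trigger_7f3a"          # rare token unlikely to occur naturally
TARGET = 'WATERMARK_ID = "repo-7f3a"'  # distinctive payload to detect later

def inject_watermark(repo_dir: str, every_n: int = 20) -> int:
    """Append a trigger/target pair to every n-th Python file; return count."""
    count = 0
    for i, path in enumerate(sorted(pathlib.Path(repo_dir).rglob("*.py"))):
        if i % every_n == 0:
            with open(path, "a", encoding="utf-8") as f:
                f.write(f"\n{TRIGGER}\n{TARGET}\n")
            count += 1
    return count
```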
ISBN (print): 9781450317436
Given a table of data, existing systems can often detect basic atomic types (e.g., strings vs. numbers) for each column. A new generation of data-analytics and data-preparation systems is starting to automatically recognize rich semantic types such as date-time, email address, etc., because such metadata can bring an array of benefits, including better table understanding, improved search relevance, precise data validation, and semantic data transformation. However, existing approaches only detect a limited number of types using regular-expression-like patterns, which are often inaccurate, and cannot handle rich semantic types such as credit card and ISBN numbers that encode semantic validations (e.g., checksums). We developed AUTOTYPE, a system that can synthesize type-detection logic for rich data types by leveraging code from open-source repositories like GitHub. Users only need to provide a set of positive examples for a target data type and a search keyword; our system will automatically identify relevant code and synthesize type-detection functions using execution traces. We compiled a benchmark with 112 semantic types, out of which the proposed system can synthesize code to detect 84 types at high precision. Applying the synthesized type-detection logic to web table columns has also resulted in a significant increase in data types discovered compared to alternative approaches.
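As an example of the kind of checksum-bearing type-detection logic described here, the ISBN-10 mod-11 check (a public standard) is hand-written below for illustration; AUTOTYPE would aim to recover equivalent logic automatically from execution traces of mined open-source code, rather than from a spec.

```python
# ISBN-10 validator with its mod-11 checksum: an example of the semantic
# validation that regular-expression-like patterns cannot express.
def is_isbn10(s: str) -> bool:
    """True if s is a valid ISBN-10 (hyphens/spaces ignored, 'X' = 10)."""
    s = s.replace("-", "").replace(" ", "").upper()
    if len(s) != 10:
        return False
    total = 0
    for i, ch in enumerate(s):
        if ch == "X" and i == 9:      # 'X' allowed only as the check digit
            digit = 10
        elif ch.isdigit():
            digit = int(ch)
        else:
            return False
        total += (10 - i) * digit     # weights 10, 9, ..., 1
    return total % 11 == 0

assert is_isbn10("0-306-40615-2")     # valid checksum
assert not is_isbn10("0306406153")    # one digit off -> checksum fails
```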
We present a spectral element algorithm and open-source code for computing the fractional Laplacian defined by the eigenfunction expansion on finite 2D/3D complex domains with both homogeneous and nonhomogeneous boundaries. We demonstrate the scalability of the spectral element algorithm on large clusters by constructing the fractional Laplacian based on computed eigenvalues and eigenfunctions using up to thousands of CPUs. To demonstrate the accuracy of this eigen-based approach for computing the fractional Laplacian, we approximate the solutions of the fractional diffusion equation using the computed eigenvalues and eigenfunctions on a 2D quadrilateral, and on 3D cubic and cylindrical domains, and compare the results with contrived solutions to demonstrate fast convergence. Subsequently, we present simulation results for a fractional diffusion equation on a hand-shaped domain discretized with 3D hexahedra, as well as on a domain constructed from the Hanford site geometry, corresponding to nonzero Dirichlet boundary conditions. Finally, we apply the algorithm to solve the surface quasi-geostrophic (SQG) equation on a 2D square with periodic boundaries. Simulation results demonstrate the accuracy, efficiency, and geometric flexibility of our algorithm, and that it can capture the subtle dynamics of anomalous diffusion modeled by the fractional Laplacian on complex geometry domains. The included open-source code is the first of its kind.
Program summary
Program title: Nektarpp_EigenMM
CPC Library link to program files: https://***/10.17632/whtc75rj55.1
Developer's repository link: https://***/paralab/Nektarpp_EigenMM
Licensing provisions: MIT License
Programming language: C/C++, MPI
Nature of problem: An open-source parallel code for computing the spectral fractional Laplacian on 3D complex geometry domains.
Solution method: A distributed, sparse, iterative algorithm is developed to solve an associated integer-order Laplace eigenvalue problem.
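The eigenfunction definition the code implements at scale is (-Δ)^s u = Σ_k λ_k^s (u, φ_k) φ_k, where (λ_k, φ_k) are the Laplace eigenpairs. A small self-contained sketch of that definition follows, using a dense eigendecomposition of the 1D finite-difference Dirichlet Laplacian as a stand-in for Nektarpp_EigenMM's distributed spectral-element eigensolver.

```python
# Spectral fractional Laplacian in 1D via eigen-expansion:
# (-Lap)^s u = sum_k lam_k^s (u, phi_k) phi_k.
import numpy as np

n, s = 200, 0.5                       # grid points, fractional order
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Second-order finite-difference Laplacian, homogeneous Dirichlet BCs.
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
lam, phi = np.linalg.eigh(A)          # eigenpairs of -Lap (all positive)

u = np.sin(np.pi * x)                 # exact first discrete eigenvector
coeffs = phi.T @ u                    # expansion coefficients (u, phi_k)
frac_u = phi @ (lam**s * coeffs)      # apply (-Lap)^s

# For the first eigenvector, (-Lap)^s u = lam_1^s u exactly; the
# continuous analogue is (pi^2)^s sin(pi x).
print(np.max(np.abs(frac_u - lam[0]**s * u)))  # ~ machine epsilon
```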
Over the past decade, Remote Photoplethysmography (rPPG) has emerged as an unobtrusive alternative to wearable sensors for measuring physiological signals, such as heart rate. Despite advancements, its real-world scalability remains limited due to the absence of standardized benchmark methods for validation. This lack of standardization complicates proper comparisons between different approaches, creating inconsistencies in performance evaluation. To address this, we conducted a comprehensive review of recent rPPG methods, analyzing their pre- and post-processing algorithms, validation procedures, benchmark algorithms, datasets, evaluation metrics, data segmentation, and reported results. Our findings demonstrate significant variability in the reported Mean Absolute Error (MAE) of benchmark rPPG methods applied to the same public datasets, confirming the challenge of inconsistent evaluation. By examining the original implementations of established benchmark methods, we developed a flexible framework that optimally selects pre- and post-processing algorithms through an exhaustive search. Applying this framework to benchmark algorithms across three public datasets, we found that 80% of the refined methods ranked within the top 25th percentile in MAE, RMSE, and PCC, with 60% surpassing the highest reported accuracies. These refined methods provide a more rigorous foundation for evaluating novel rPPG techniques, addressing the standardization gap in the field. The codebase for this framework (frPPG) is available at [https://***/Building-Robotics-Lab/flexible_rPPG], offering a valuable tool for designing and benchmarking rPPG methods against the best-performing algorithms on a given dataset.
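The exhaustive search the framework performs can be summarized in a few lines. The sketch below assumes a generic evaluate() contract and uses hypothetical pre/post-processing names; the core algorithm names (GREEN, CHROM, POS, ICA) are established rPPG benchmarks, but the inventories and interface are placeholders, not frPPG's actual API.

```python
# Sketch of an exhaustive pipeline search in the spirit of frPPG: try
# every (pre-processing, algorithm, post-processing) combination and
# keep the one with the lowest MAE on a validation dataset.
import itertools

def search_pipelines(pre_steps, algorithms, post_steps, evaluate):
    """evaluate(pre, algo, post) -> MAE in bpm; returns (best_mae, pipeline)."""
    best = (float("inf"), None)
    for pre, algo, post in itertools.product(pre_steps, algorithms, post_steps):
        mae = evaluate(pre, algo, post)
        if mae < best[0]:
            best = (mae, (pre, algo, post))
    return best

# Hypothetical component inventories for illustration only.
pre = ["raw", "detrend", "bandpass_0.7_4Hz"]
algos = ["GREEN", "CHROM", "POS", "ICA"]
post = ["none", "moving_average", "wavelet_denoise"]
# best_mae, best_pipe = search_pipelines(pre, algos, post, my_evaluate)
```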
Building Performance Simulation (BPS) uses advanced computational and data science methods. Reproducibility, the ability to obtain the same results by using the same data and methods, is essential in BPS research to ensure the reliability and validity of scientific results. The benefits of reproducible research include enhanced scientific integrity, faster scientific advancements, and valuable educational resources. Despite its importance, reproducibility in BPS is often overlooked due to technical complexities, insufficient documentation, and cultural barriers such as the lack of incentives for sharing code and data. This paper encourages reproducibility in computational science articles and proposes to recognize reproducible code and data, with persistent Digital Object Identifiers (DOIs), as peer-reviewed archival publications. Practical workflows for achieving reproducibility in BPS are presented using MATLAB and Python.
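One generic minimal pattern for such a workflow is sketched below: pin random seeds, record the exact software environment, and archive results together with the provenance needed to regenerate them. This is an illustrative sketch only, not the paper's specific MATLAB/Python workflow.

```python
# Minimal reproducibility pattern: fixed seed, recorded environment,
# and results stored with their provenance (archivable under a DOI).
import json, platform, random, sys

SEED = 42
random.seed(SEED)                       # make stochastic steps repeatable

def run_simulation(seed: int) -> float:
    random.seed(seed)
    return sum(random.random() for _ in range(1000))  # stand-in for a BPS run

result = run_simulation(SEED)
provenance = {
    "seed": SEED,
    "python": sys.version,
    "platform": platform.platform(),
    "result": result,
}
with open("results_with_provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)  # archive alongside the code and data
```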
Recent multimodal methods for lyrics alignment have relied on large datasets. Our approach introduces a box loss that directly incorporates timestamp information into the loss function, enabling precise alignment and ...
Topology optimization (TO), a numerical technique to find the optimal material layout within a given design domain, has attracted interest from researchers in the field of structural optimization in recent years. For beginners, open-source codes are undoubtedly the best alternative for learning TO, as they can elaborate the implementation of a method in detail and easily engage more people to employ and extend the method. In this paper, we present a summary of various open-source codes and related literature on TO methods, including solid isotropic material with penalization (SIMP), the evolutionary method, the level set method (LSM), moving morphable components/voids (MMC/MMV) methods, multiscale topology optimization methods, and others. Furthermore, we classify the codes into five levels, from easy to difficult, according to their difficulty, so that beginners can get started and understand the form of code implementation more quickly.
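Two formulas sit at the heart of the SIMP codes such reviews survey (most famously the educational 99-line and 88-line MATLAB programs): the penalized material interpolation E(ρ) = E_min + ρ^p (E_0 − E_min) and the optimality-criteria (OC) density update. Both are sketched below in isolation; a real code wraps them in a finite-element analysis loop.

```python
# SIMP kernels: penalized stiffness interpolation and the OC update
# with bisection on the volume-constraint Lagrange multiplier.
import numpy as np

def simp_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    """Penalized Young's modulus E(rho) = Emin + rho^p (E0 - Emin)."""
    return Emin + rho**p * (E0 - Emin)

def oc_update(rho, dc, dv, volfrac, move=0.2, eta=0.5):
    """One OC density update.
    dc: compliance sensitivities (negative), dv: volume sensitivities."""
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-4:
        mid = 0.5 * (lo + hi)
        rho_new = np.clip(rho * (-dc / (mid * dv))**eta,
                          np.maximum(rho - move, 0.0),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > volfrac:   # too much material -> raise multiplier
            lo = mid
        else:
            hi = mid
    return rho_new

# Each TO iteration: FE solve -> sensitivities dc, dv -> oc_update(...).
```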
Carbon-neutral hydrogen (H2) can reduce emissions from hard-to-electrify sectors and contribute to a net-zero greenhouse gas economy by 2050. Power-to-hydrogen (PtH2) technologies based on clean electricity can provide such H2, yet their carbon intensities alone do not provide a sufficient basis to judge their potential contribution to a sustainable and just energy transition. Introducing a prospective life cycle assessment framework to decipher the non-linear relationships between future technology and energy system dynamics over time, we showcase its relevance to informing research, development, demonstration, and deployment by comparing two PtH2 technologies to steam methane reforming (SMR) across a series of environmental and resource-use metrics. We find that system transitions in the power, cement, steel, and fuel sectors move impacts for both PtH2 technologies to equal or lower levels by 2100 compared to 2020 per kg of H2, except for metal depletion. The decarbonization of the United States power sector by 2035 allows PtH2 to reach parity with SMR at 10 kg of CO2e/kg H2 between 2030 and 2050. Updated H2 radiative forcing and leakage levels only marginally affect these results. Biomass carbon removal and storage power technologies enable carbon-negative H2 after 2040 at about -15 kg of CO2e/kg H2. Still, both PtH2 processes exhibit higher impacts across most other metrics, some of which are worsened by the decarbonization of the power sector. Observed increases in metal depletion and eco- and human toxicity levels can be reduced via PtH2 energy and material use efficiency improvements, but the power sector decarbonization routes also warrant further review and cradle-to-grave assessments to show tradeoffs from a systems perspective.
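A back-of-envelope check makes the parity point plausible: the operational carbon intensity of electrolytic H2 is roughly the electricity demand per kg of H2 times the grid's carbon intensity. The ~50 kWh/kg figure below is a typical electrolyzer value assumed for illustration, and the grid intensities are hypothetical; neither is taken from the paper.

```python
# Rough arithmetic: electrolytic H2 intensity = kWh/kg H2 x kg CO2e/kWh.
KWH_PER_KG_H2 = 50.0   # assumed electrolyzer electricity demand

def h2_intensity(grid_kg_co2e_per_kwh: float) -> float:
    """Operational kg CO2e per kg H2 from electricity alone."""
    return KWH_PER_KG_H2 * grid_kg_co2e_per_kwh

for label, grid in [("carbon-intensive grid", 0.40),
                    ("partly decarbonized", 0.20),
                    ("near-zero grid", 0.02)]:
    print(f"{label:>22}: {h2_intensity(grid):5.1f} kg CO2e/kg H2")
# At ~0.2 kg CO2e/kWh, PtH2 lands near the ~10 kg CO2e/kg H2 SMR-parity
# level cited in the abstract; a near-zero grid pushes it far below.
```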
The wind resource assessment community has long had the goal of reducing the bias between wind plant pre-construction energy yield assessment (EYA) and the observed annual energy production (AEP). This comparison is typically made between the 50% probability of exceedance (P50) value of the EYA and the long-term corrected operational AEP (hereafter OA AEP) and is known as the P50 bias. The industry has critically lacked an independent analysis of bias investigated across multiple consultants to identify the greatest sources of uncertainty and variance in the EYA process and the best opportunities for uncertainty reduction. The present study addresses this gap by benchmarking consultant methodologies against each other and against operational data at a scale not seen before in industry collaborations. We consider data from 10 wind plants in North America and evaluate discrepancies between eight consultancies in the steps taken from estimates of gross to net energy. Consultants tend to overestimate the gross energy produced at the turbines and then compensate by further overestimating downstream losses, leading to a mean P50 bias near zero, albeit with significant variability among the individual wind plants. Within our data sample, we find that consultant estimates of all loss categories, except environmental losses, tend to reduce the project-to-project variability of the P50 bias. The disagreement between consultants, however, remains flat throughout the addition of losses. Finally, we find that differences in consultants' estimates of project performance can lead to differences of up to $10/MWh in the levelized cost of energy for a wind plant.
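The gross-to-net accounting and the P50 bias definition used in such benchmarks can be sketched in a few lines: net P50 equals gross energy times the product of (1 − loss_i) over the loss categories, and the bias compares that P50 to the long-term corrected operational AEP. All numbers below are hypothetical illustrations, not values from the study.

```python
# Gross-to-net energy chain and P50 bias, with made-up example numbers.
def net_energy(gross_gwh, losses):
    """Apply multiplicative loss categories to gross energy (GWh)."""
    net = gross_gwh
    for loss in losses.values():
        net *= 1.0 - loss
    return net

losses = {"wake": 0.08, "availability": 0.05, "electrical": 0.02,
          "environmental": 0.02, "curtailment": 0.01}   # hypothetical
p50 = net_energy(350.0, losses)     # pre-construction P50 estimate
oa_aep = 290.0                      # long-term corrected operational AEP
bias = (p50 - oa_aep) / oa_aep
print(f"P50 = {p50:.1f} GWh, bias vs OA AEP = {bias:+.1%}")
```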