Blockchain technology has been extensively utilized in decentralized data-sharing applications, with the immutability of blockchain providing a witness for the circulation of data. However, current blockchain data-sh...
Outlier detection on data streams identifies unusual states to sense and raise alarms for potential risks and faults of the target systems in both the cyber and physical world. As different parameter settings of machine learning algorithms can result in dramatically different performance, automatic parameter selection is also of great importance when deploying outlier detection algorithms on data streams. However, current canonical parameter selection methods suffer from two key challenges: (i) data streams generally evolve over time, but existing methods use a fixed training set, which fails to handle this evolving environment and often results in suboptimal parameter recommendations; (ii) the stream is infinite, so any parameter selection method that takes the entire stream as input is infeasible. In light of these limitations, this paper introduces a Dynamic Parameter Selection method for outlier detection on data Streams (DPSS for short). DPSS uses Gaussian process regression to model the relationship between parameters and detection performance, and uses Bayesian optimization to explore the optimal parameter setting. For each new subsequence, DPSS updates the recommended parameter setting to suit the evolving characteristics. Besides, DPSS only uses historical calculations to guide the parameter setting sampling and adjust the Gaussian process regression results. DPSS can be employed as an auxiliary plug-in tool to improve the detection performance of outlier detection methods. Extensive experiments show that our method significantly improves the F-score of outlier detectors on data streams compared to its counterparts and achieves superior parameter selection performance over other state-of-the-art parameter selection approaches. DPSS also achieves better time and memory efficiency than its competitors.
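The core loop of this kind of GP-regression-plus-Bayesian-optimization parameter selection can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' DPSS implementation: the single scalar parameter, the candidate grid, the expected-improvement acquisition, and the historical F-scores are all assumed placeholders.

```python
# Minimal sketch of GP-based parameter selection in the spirit of DPSS.
# Hypothetical: one scalar parameter, expected-improvement acquisition,
# and a per-subsequence evaluation loop supplied by the caller.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(candidates, gp, best_score, xi=0.01):
    """Expected-improvement acquisition over candidate parameter settings."""
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)          # avoid division by zero
    improvement = mu - best_score - xi
    z = improvement / sigma
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)


def select_parameter(history_params, history_scores, candidates):
    """Fit a GP on historical (parameter, score) pairs and pick the next setting."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.asarray(history_params), np.asarray(history_scores))
    ei = expected_improvement(np.asarray(candidates), gp, max(history_scores))
    return candidates[int(np.argmax(ei))]


# Toy usage with a hypothetical window-size parameter and placeholder F-scores:
history_params = [[8], [16], [32]]
history_scores = [0.41, 0.55, 0.48]
candidates = [[w] for w in range(4, 65, 4)]
next_setting = select_parameter(history_params, history_scores, candidates)
```

In a streaming setting, the recommended setting would be evaluated on the newest subsequence only, the resulting score appended to the history, and the GP re-fit before the next recommendation, which mirrors the abstract's point that only historical calculations guide the sampling.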
Innovations in powerful high-performance computing (HPC) architectures are enabling high-fidelity whole-core neutron transport simulations in reasonable time. In particular, today's popular heterogeneous architectures bring the cost of such simulations down to a very low level. The neutron distribution of a reactor core is governed by the Boltzmann neutron transport equation (BTE), whose high-fidelity solution demands tremendous computing resources. Among the high-fidelity numerical methods, the discrete ordinates method (SN) is becoming popular in the reactor design community because it strikes a good balance between computational cost and accuracy. Recently, MT-3000, a multi-zone heterogeneous architecture with a peak double-precision performance of 11.6 TFLOPS, was proposed. In this work, the BTE is solved by SN with heterogeneous Koch-Baker-Alcouffe (KBA) parallel algorithms on the MT-3000 architecture. A communication mechanism has been established to transmit data efficiently between the acceleration cores and the CPU cores. The kernel computation procedure is largely accelerated by vectorization and instruction-pipelining techniques. Numerical experiments show that our formulation achieves 1.37 TFLOPS on a single MT-3000, which is 11.8% of its peak performance.
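To make the KBA idea concrete, the sketch below shows, in serial form, the diagonal-wavefront dependence that KBA-style transport sweeps exploit: cells on the same anti-diagonal have no upstream dependence on each other for a given sweep direction, so they can be processed concurrently. This is not the paper's MT-3000 kernel; the grid size, the single discrete ordinate, the cross section, and the simplified diamond-difference-style update are illustrative assumptions.

```python
# Serial sketch of a 2D transport sweep ordered by KBA-style anti-diagonal wavefronts.
# Hypothetical problem data; the update uses cell averages as inflow for brevity,
# whereas a full diamond-difference scheme would propagate outgoing edge fluxes.
import numpy as np

NX, NY = 8, 8
sigma_t = 1.0                     # illustrative total cross section
dx = dy = 1.0
mu, eta = 0.5, 0.5                # one discrete ordinate with mu, eta > 0
psi = np.zeros((NX, NY))          # angular flux for this ordinate
source = np.ones((NX, NY))        # illustrative fixed source

for wave in range(NX + NY - 1):                       # march anti-diagonals front to back
    for i in range(max(0, wave - NY + 1), min(NX, wave + 1)):
        j = wave - i                                  # (i, j) lies on the current wavefront
        psi_in_x = psi[i - 1, j] if i > 0 else 0.0    # vacuum boundary on inflow faces
        psi_in_y = psi[i, j - 1] if j > 0 else 0.0
        psi[i, j] = (source[i, j] + 2 * mu / dx * psi_in_x + 2 * eta / dy * psi_in_y) \
                    / (sigma_t + 2 * mu / dx + 2 * eta / dy)
```

In a KBA decomposition, each wavefront's independent cells are distributed across processors (or acceleration cores), and successive planes are pipelined so that communication of inflow fluxes overlaps with computation.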
Mesh generation is a crucial step in numerical simulations, significantly impacting simulation accuracy and efficiency. However, generating meshes remains time-consuming and requires expensive computational resources....
Prototype network-based methods have made substantial progress in few-shot relation extraction (FSRE) by enhancing relation prototypes with relation descriptions. However, the distribution of relations and instances i...
In large-scale distributed training, communication compression techniques are widely used to reduce the significant communication overhead caused by the frequent exchange of model parameters or gradients between train...
ISBN (digital): 9798350359312; ISBN (print): 9798350359329
Space-time video super-resolution (STVSR) is a comprehensive task comprising two subtasks: video super-resolution in the space dimension and video frame interpolation in the time dimension. Conventional decoupled two-stage approaches tend to overlook the intrinsic correlation between the two tasks. Overcoming this challenge requires a unified model capable of performing space-time super-resolution at arbitrary scales. Most existing models are confined to training on fixed spatial upsampling scales or specific frame-rate videos, resulting in limited generalization to flexible space-time super-resolution scenarios. In response to this limitation, our approach draws inspiration from continuous implicit neural representation. We propose an enhanced Implicit Neural Alignment Network (INAN) based on the VideoINR framework, encompassing feature refinement, precise motion flow estimation, and multi-scale feature fusion to optimize the final implicit neural decoding. Extensive experimental evaluations on multiple benchmarks underscore the efficacy of the INAN model and indicate its superior performance compared to prior STVSR methods.
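As a rough illustration of the continuous decoding idea behind VideoINR-style models, the sketch below maps a local feature vector plus a normalized (x, y, t) query coordinate to an RGB value with a small MLP, so arbitrary spatial scales and frame times can be queried. The layer sizes, feature dimension, and random inputs are assumptions for the sketch, not the INAN architecture.

```python
# Minimal sketch of continuous space-time implicit decoding (VideoINR-style idea).
# Hypothetical decoder: features sampled at query locations are concatenated with
# continuous (x, y, t) coordinates and decoded to RGB by an MLP.
import torch
import torch.nn as nn


class ImplicitDecoder(nn.Module):
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 3),              # RGB output
        )

    def forward(self, feat, coord):
        # feat:  (N, feat_dim) encoder features sampled at the query locations
        # coord: (N, 3) normalized (x, y, t) coordinates in [-1, 1]
        return self.mlp(torch.cat([feat, coord], dim=-1))


# Usage: query an arbitrary upsampling grid and an intermediate frame time.
decoder = ImplicitDecoder()
feat = torch.randn(4096, 64)            # placeholder encoder features
coord = torch.rand(4096, 3) * 2 - 1     # random continuous space-time queries
rgb = decoder(feat, coord)              # (4096, 3)
```

Because the decoder is queried per coordinate rather than per fixed pixel grid, the same trained model can render any spatial resolution and any intermediate frame time, which is what enables arbitrary-scale space-time super-resolution.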
The lattice Boltzmann method (LBM) has become a powerful method in computational fluid dynamics and has drawn increasing attention in high-performance computing due to its particulate nature and local dynamics, especia...
In airfoil numerical simulation, mesh quality has an important influence on the accuracy and error of the simulation. Existing mesh quality evaluation requires a lot of manual interaction, which greatly ...
Federated Learning (FL) is a distributed machine learning framework in communication network systems. However, the systems’ Non-Independent and Identically distributed (Non-IID) data negatively affect the convergence...