The lattice Boltzmann method (LBM) has become a powerful tool in computational fluid dynamics and has drawn increasing attention in high-performance computing due to its particulate nature and local dynamics, especia...
Script is the structured knowledge representation of prototypical real-life event sequences. The commonsense knowledge inside the script can be helpful for machines in understanding natural language and drawing commonsensible inferences. Script learning is an interesting and promising research direction, in which a trained script learning system can process narrative texts to capture script knowledge and draw inferences. However, there are currently no survey articles on script learning, so we are providing this comprehensive survey to deeply investigate the standard framework and the major research topics on script learning. This research field contains three main topics: event representations, script learning models, and evaluation approaches. For each topic, we systematically summarize and categorize the existing script learning systems, and carefully analyze and compare the advantages and disadvantages of the representative systems. We also discuss the current state of the research and possible future directions.
Mesh generation is a crucial step in numerical simulations, significantly impacting simulation accuracy and efficiency. However, generating meshes remains time-consuming and requires expensive computational resources....
ISBN (digital): 9798350359312
ISBN (print): 9798350359329
Space-time video super-resolution (STVSR) is a comprehensive task comprising two subtasks: video super-resolution in the space dimension and video frame interpolation in the time dimension. Conventional decoupled two-stage approaches tend to overlook the intrinsic correlation between the two tasks. Overcoming this challenge requires the development of a unified model capable of simultaneously implementing space-time super-resolution across arbitrary scales. Most existing models are confined to training on fixed space upsampling scales or specific frame-rate videos, resulting in limited generalization capabilities for flexible space-time super-resolution scenarios. In response to this limitation, our approach draws inspiration from continuous implicit neural representation. We propose an enhanced Implicit Neural Alignment Network (INAN) based on the VideoINR framework, encompassing feature refinement, precise motion flow estimation, and multi-scale feature fusion to optimize the final implicit neural decoding. Our extensive experimental evaluations on multiple benchmarks underscore the efficacy of the INAN model and indicate its superior performance compared with prior STVSR methods.
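To make the continuous-representation idea concrete, here is a minimal PyTorch sketch of implicit neural decoding for space-time super-resolution: features are sampled at arbitrary (x, y, t) query coordinates and decoded to RGB by an MLP. This illustrates the general mechanism only, not the INAN or VideoINR code; the layer sizes and the grid_sample-based feature query are assumptions.

```python
# Minimal sketch of continuous space-time implicit decoding (PyTorch).
# NOT the INAN implementation; layer sizes and the grid_sample-based
# feature query are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitDecoder(nn.Module):
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        # each query coordinate (x, y, t) is concatenated with its sampled feature
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB output
        )

    def forward(self, feat_map, coords):
        # feat_map: (B, C, H, W) fused spatio-temporal features
        # coords:   (B, N, 3) query points in [-1, 1], ordered (x, y, t)
        xy = coords[..., :2].unsqueeze(2)              # (B, N, 1, 2)
        sampled = F.grid_sample(feat_map, xy, align_corners=False)
        sampled = sampled.squeeze(-1).transpose(1, 2)  # (B, N, C)
        return self.mlp(torch.cat([sampled, coords], dim=-1))

decoder = ImplicitDecoder()
rgb = decoder(torch.randn(1, 64, 32, 32), torch.rand(1, 100, 3) * 2 - 1)
print(rgb.shape)  # torch.Size([1, 100, 3])
```

Because the decoder takes continuous coordinates rather than a fixed output grid, the same network can be queried at any spatial scale and any intermediate time step.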
Outlier detection on data streams identifies unusual states to sense and alarm potential risks and faults of the target systems in both the cyber and physical world. As different parameter settings of machine learning algorithms can result in dramatically different performance, automatic parameter selection is also of great importance in deploying outlier detection algorithms in data streams. However, current canonical parameter selection methods suffer from two key challenges: (i) Data streams generally evolve over time, but these existing methods use a fixed training set, which fails to handle this evolving environment and often results in suboptimal parameter recommendations; (ii) The stream is infinite, and thus any parameter selection method taking the entire stream as input is infeasible. In light of these limitations, this paper introduces a Dynamic Parameter Selection method for outlier detection on data Streams (DPSS for short). DPSS uses Gaussian process regression to model the relationship between parameters and detection performance, and uses Bayesian optimization to explore the optimal parameter setting. For each new subsequence, DPSS updates the recommended parameter setting to suit the evolving characteristics. Besides, DPSS only uses historical calculations to guide the parameter setting sampling and to adjust the Gaussian process regression results. DPSS can be employed as an auxiliary plug-in tool to improve the detection performance of outlier detection methods. Extensive experiments show that our method can significantly improve the F-score of outlier detectors in data streams compared to its counterparts, and achieves better parameter selection performance than other state-of-the-art parameter selection approaches. DPSS also achieves better time and memory efficiency compared to competitors.
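The Gaussian-process-plus-Bayesian-optimization loop at the core of this kind of parameter selection can be sketched generically as follows. This is a hedged illustration, not the DPSS implementation: the detector score, the 1-D parameter grid, and the expected-improvement acquisition are assumptions.

```python
# Generic GP-regression + Bayesian-optimization loop for tuning one
# detector parameter on the current stream subsequence. Illustrative only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best, xi=0.01):
    # Standard EI acquisition: expected amount by which a candidate beats `best`.
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def select_parameter(evaluate, grid, n_iters=15, seed=0):
    """Pick the parameter value maximizing a (noisy) detection-score proxy.

    evaluate(p) -> detection score of the outlier detector with parameter p.
    """
    rng = np.random.default_rng(seed)
    X = [grid[rng.integers(len(grid))]]        # random initial sample
    y = [evaluate(X[0])]
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iters):
        gp.fit(np.array(X).reshape(-1, 1), np.array(y))
        mu, sigma = gp.predict(grid.reshape(-1, 1), return_std=True)
        nxt = grid[np.argmax(expected_improvement(mu, sigma, max(y)))]
        X.append(nxt)
        y.append(evaluate(nxt))
    return X[int(np.argmax(y))]

# Toy usage: a fake scoring function peaking at p = 0.3.
best = select_parameter(lambda p: -abs(p - 0.3) + 0.01 * np.random.randn(),
                        np.linspace(0.0, 1.0, 101))
print("recommended parameter:", best)
```

In a streaming setting, this loop would be rerun per subsequence, warm-started from the historical (parameter, score) pairs rather than from scratch.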
Innovations in powerful high-performance computing (HPC) architectures are enabling high-fidelity whole-core neutron transport simulations in reasonable time. In particular, the currently fashionable heterogeneous architectures bring the cost of such simulations down to a very low level. The neutron distribution of a reactor core is governed by the Boltzmann neutron transport equation (BTE), whose high-fidelity solution demands tremendous computer resources. Among the high-fidelity numerical methods, the discrete ordinates method (SN) is becoming popular in the reactor design community by striking a good balance between computational cost and accuracy. Recently, MT-3000, a multizone heterogeneous architecture with a peak double-precision performance of 11.6 TFLOPS, was proposed. In this work, the BTE is solved by SN with a heterogeneous Koch-Baker-Alcouffe (KBA) parallel algorithm on the MT-3000 architecture. A communication mechanism has been established to efficiently transmit data between the acceleration cores and the CPU cores. The kernel computation is substantially accelerated by vectorization and instruction-pipelining techniques. Numerical experiments show that our formulation achieves 1.37 TFLOPS on a single MT-3000, which is 11.8% of its peak performance.
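For intuition, the wavefront ordering at the heart of a KBA sweep can be sketched in a few lines of Python. This is a toy model, not the MT-3000 kernel: the 2-D block grid and the placeholder per-block update are assumptions.

```python
# Toy illustration of the KBA wavefront ordering used in SN transport sweeps.
# A 2-D block grid and a trivial "solve" stand in for the real spatial
# decomposition and per-block transport kernel.
import numpy as np

def kba_sweep(nbx, nby):
    """Yield blocks in wavefront order for a sweep from the (0, 0) corner.

    Blocks on the same anti-diagonal (i + j == const) have their upstream
    dependencies satisfied and could run in parallel on different cores.
    """
    for wave in range(nbx + nby - 1):
        yield [(i, wave - i)
               for i in range(max(0, wave - nby + 1), min(nbx, wave + 1))]

# Downstream values depend on upstream neighbors, as in a transport sweep.
nbx, nby = 4, 3
phi = np.zeros((nbx, nby))
for wavefront in kba_sweep(nbx, nby):
    for i, j in wavefront:  # blocks within a wavefront are independent
        upstream = (phi[i - 1, j] if i > 0 else 1.0) + \
                   (phi[i, j - 1] if j > 0 else 1.0)
        phi[i, j] = 0.5 * upstream  # placeholder for the per-block SN kernel
print(phi)
```

The parallel efficiency of the method comes from the inner loop: all blocks on one anti-diagonal can be dispatched to different acceleration cores at once.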
ISBN (digital): 9798331522285
ISBN (print): 9798331522292
To address the issues of complex background interference and micro-defect detection in printed circuit board (PCB) surface inspection, while accommodating the deployment constraints of edge devices in industrial scenarios, this paper proposes a lightweight real-time detection Transformer (LRTDETR), an improved architecture based on RT-DETR. First, the original ResNet18 backbone is replaced with RepNCSPELAN, a lightweight network leveraging structural re-parameterization to reduce model complexity. Second, a global-local bidirectional feature pyramid network is introduced in the neck, which enhances the multi-scale feature representation through an adaptive weighted feature fusion mechanism, significantly improving small-defect detection. Experimental results demonstrate that the improved model achieves a 96.9% mean average precision (mAP), with only a 0.2% performance drop compared to the baseline. Notably, the model size, the parameters, and the FLOPs are reduced by 58.3%, 53.5%, and 57.3%, respectively. The inference speed reaches 39 FPS, balancing accuracy and efficiency for real-time industrial inspection and edge deployment. This work provides a practical solution for high-precision, resource-efficient defect detection in industrial edge computing environments.
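The adaptive weighted feature fusion mentioned above can be illustrated with a small BiFPN-style PyTorch module. This is a sketch of the mechanism under assumed shapes and normalization, not the actual LRTDETR neck.

```python
# Minimal sketch of adaptive weighted feature fusion (BiFPN-style).
# Illustrates the mechanism only; the number of inputs and the
# normalization scheme are assumptions, not the LRTDETR design.
import torch
import torch.nn as nn

class AdaptiveWeightedFusion(nn.Module):
    """Fuse same-shaped feature maps with learned, normalized positive weights."""
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.w)        # keep each contribution non-negative
        w = w / (w.sum() + self.eps)  # "fast normalized fusion" weighting
        return sum(wi * f for wi, f in zip(w, feats))

fuse = AdaptiveWeightedFusion(n_inputs=2)
out = fuse([torch.randn(1, 256, 20, 20), torch.randn(1, 256, 20, 20)])
print(out.shape)  # torch.Size([1, 256, 20, 20])
```

Letting the network learn per-input weights allows shallow, high-resolution features (which carry small-defect detail) to be emphasized where they matter.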
Recent advances in single-cell RNA sequencing (scRNA-seq) technology provide unprecedented opportunities for reconstructing gene regulatory networks (GRNs). At present, many different models have been proposed to infer GRNs from large amounts of scRNA-seq data, but most deep learning models rely on an a priori gene regulatory network to infer potential GRNs. It is a challenge to reconstruct GRNs from scRNA-seq data due to the noise and sparsity introduced by the dropout effect. Here, we propose GAALink, a novel unsupervised deep learning method. It first constructs a gene similarity matrix and then refines it by a threshold value. It then learns feature representations of genes through a graph attention autoencoder that propagates information across genes with different weights. Finally, we use the gene feature expression for matrix completion such that the GRNs are reconstructed. Compared with seven existing GRN reconstruction methods, GAALink achieves more accurate performance on seven scRNA-seq datasets with four ground-truth networks. GAALink can provide a useful tool for inferring GRNs from scRNA-seq expression data.
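A rough skeleton of this pipeline, covering the similarity matrix, the thresholding step, and one graph-attention layer, might look as follows. It is purely illustrative: the Pearson similarity, the 0.8 threshold, and the single-head attention are assumptions, and GAALink itself stacks such layers into an autoencoder followed by matrix completion.

```python
# Skeleton of a GAALink-style pipeline: gene similarity matrix, threshold,
# one graph-attention layer. Illustrative assumptions throughout.
import torch
import torch.nn.functional as F

def similarity_graph(expr, thresh=0.8):
    # expr: (genes, cells); Pearson correlation as the gene-gene similarity
    sim = torch.corrcoef(expr)
    return (sim.abs() > thresh).float()        # thresholded adjacency

def gat_layer(x, adj, W, a):
    # Single-head graph attention (Velickovic et al.-style edge scoring).
    h = x @ W                                   # (genes, out_dim)
    n = h.size(0)
    pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                      h.unsqueeze(0).expand(n, n, -1)], dim=-1)
    e = F.leaky_relu(pair @ a, 0.2)             # raw attention scores
    e = e.masked_fill(adj == 0, float('-inf'))  # attend only along edges
    return torch.softmax(e, dim=1) @ h          # weighted neighbor aggregation

genes, cells, out_dim = 50, 200, 16
expr = torch.randn(genes, cells)
adj = similarity_graph(expr)
adj.fill_diagonal_(1.0)                 # self-loops keep every row finite
W = torch.randn(cells, out_dim) * 0.1   # untrained weights, for shape only
a = torch.randn(2 * out_dim) * 0.1
z = gat_layer(expr, adj, W, a)          # gene embeddings
print(z.shape)  # torch.Size([50, 16])
```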
Deep neural networks (DNNs) have recently shown great potential in solving partial differential equations (PDEs). The success of neural network-based surrogate models is attributed to their ability to learn a rich set of solution-related features. However, learning DNNs usually involves tedious training iterations to converge and requires a very large number of training data, which hinders the application of these models to complex physical systems. To address this problem, we propose to apply the transfer learning approach to DNN-based PDE solving tasks. In our work, we create pairs of transfer experiments on Helmholtz and Navier-Stokes equations by constructing subtasks with different source terms and Reynolds numbers. We also conduct a series of experiments to investigate the degree of generality of the features between different tasks. The results demonstrate that despite differences in the underlying PDE systems, the transfer methodology can lead to a significant improvement in the accuracy of the predicted solutions and achieve a maximum performance boost of 97.3% on widely used surrogate models.
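The transfer methodology can be sketched as: train a surrogate on a source task, freeze its early layers, and fine-tune the rest on the target task. The sketch below is hedged: the network shape, the freezing split, and the data are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch of transfer learning for a DNN-based PDE surrogate:
# reuse early layers trained on a source task, fine-tune the rest on the
# target task. Shapes and the freezing split are illustrative assumptions.
import torch
import torch.nn as nn

surrogate = nn.Sequential(            # coordinate (x, y) -> solution value
    nn.Linear(2, 128), nn.Tanh(),     # early layers: generic features
    nn.Linear(128, 128), nn.Tanh(),
    nn.Linear(128, 1),                # task-specific head
)

# ...assume `surrogate` was trained on the source task (one source term /
# Reynolds number), then transferred to the target task:
for p in surrogate[:2].parameters():  # freeze the first linear block
    p.requires_grad = False

opt = torch.optim.Adam(
    (p for p in surrogate.parameters() if p.requires_grad), lr=1e-4)

# One fine-tuning step on (hypothetical) target-task samples.
x, u_target = torch.rand(64, 2), torch.rand(64, 1)
loss = nn.functional.mse_loss(surrogate(x), u_target)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

Freezing the early layers is what makes the approach data-efficient: only the task-specific parameters need to be re-estimated on the (smaller) target dataset.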
Monte Carlo (MC) simulation plays a key role in radiotherapy. Since the simulation time of the MC program cannot fully meet the clinical requirements, we use the ARM-based FT-2000+ multi-core processor for paralleliza...