The implicitly coupled pressure-based algorithm is widely acknowledged for its superior convergence and robustness in solving incompressible flow problems. However, the increased expansion scale of equations and diffi...
Existing autoregressive models follow a two-stage generation paradigm that first learns a codebook in the latent space for image reconstruction and then completes image generation autoregressively based on the learned codebook. However, existing codebook learning simply models all local region information of images without distinguishing their perceptual importance, which introduces redundancy into the learned codebook. This redundancy not only limits the next-stage autoregressive model's ability to capture important structure but also results in high training cost and slow generation. In this study, we borrow the idea of importance perception from classical image coding theory and propose a novel two-stage framework, consisting of Masked Quantization VAE (MQ-VAE) and Stackformer, to relieve the model from modeling redundancy. Specifically, MQ-VAE incorporates an adaptive mask module that masks redundant region features before quantization and an adaptive de-mask module that recovers the original grid feature map after quantization, so that the original images can be faithfully reconstructed. Stackformer then learns to predict the combination of the next code and its position in the feature map. Comprehensive experiments on various image generation tasks validate our effectiveness and efficiency. Code will be released at https://***/CrossmodalGroup/MaskedVectorQuantization.
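As a rough illustration of the adaptive-mask idea (not the authors' released code), the sketch below scores flattened region features and keeps only the top-k before quantization; the module name, shapes, and the linear scorer are assumptions for illustration only.

```python
# Minimal sketch (not the paper's implementation) of an adaptive mask:
# score every region feature, keep the top-k "important" ones, and quantize
# only those. Shapes and module names are illustrative assumptions.
import torch
import torch.nn as nn

class AdaptiveMask(nn.Module):
    def __init__(self, dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)   # importance score per region feature
        self.keep_ratio = keep_ratio

    def forward(self, feats: torch.Tensor):
        # feats: (B, N, D) grid features flattened to N = H*W tokens
        scores = self.scorer(feats).squeeze(-1)          # (B, N)
        k = max(1, int(self.keep_ratio * feats.size(1)))
        idx = scores.topk(k, dim=1).indices              # indices of kept regions
        kept = torch.gather(feats, 1, idx.unsqueeze(-1).expand(-1, -1, feats.size(-1)))
        return kept, idx   # quantize `kept`; `idx` lets a de-mask module scatter codes back

feats = torch.randn(2, 256, 64)               # e.g. a 16x16 feature map, D=64
kept, idx = AdaptiveMask(64, keep_ratio=0.25)(feats)
print(kept.shape, idx.shape)                  # (2, 64, 64) and (2, 64)
```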
High-quality network reconstruction from metabolites, enzymes and reactions is the first essential step in the study of metabolic networks. Previously, the approach based on the KEGG/LIGAND database could only reconstruct organism-level metabolic networks, and the quality of those networks is questionable. In this paper we propose a new recursive approach to reconstructing metabolic networks. The data selection mechanism of KEGG/PATHWAY ensures that the reconstructed networks are parsimonious yet reliable, the KO system makes it possible to generate networks hierarchically, and the KEGG web service keeps the data up to date. Comparison with the previous approach shows that the new approach is practical and reliable.
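As an illustration of the reconstruction step, the hypothetical sketch below turns a list of reaction records into a directed metabolite network; it does not call the actual KEGG web service, and the reaction entries and identifiers are made-up placeholders.

```python
# Minimal sketch of building a metabolite network from reaction records.
# The reaction list is a stand-in for data fetched from the KEGG web service;
# the approach in the paper additionally works hierarchically via KO.
from collections import defaultdict

reactions = [
    # (reaction id, substrates, products) -- illustrative entries only
    ("R00001", ["C00031"], ["C00668"]),
    ("R00002", ["C00668"], ["C05345", "C00009"]),
]

def build_network(reactions):
    """Directed metabolite graph: edge substrate -> product for each reaction."""
    graph = defaultdict(set)
    for rid, substrates, products in reactions:
        for s in substrates:
            for p in products:
                graph[s].add((p, rid))
    return graph

network = build_network(reactions)
for metabolite, edges in network.items():
    print(metabolite, "->", sorted(edges))
```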
ISBN (print): 9781728119496
Blockchain-as-a-Service (BaaS) has gradually come to serve as an infrastructure that provides underlying blockchain services for the rapid deployment and on-chain transactions of large-scale enterprise applications. By hosting different blockchains in its underlying layers, a BaaS platform satisfies the consensus requirements of different scenarios, but each hosted blockchain still uses its own consensus algorithm, which limits scalability and makes it difficult to meet the requirements of future applications. To address these problems, this paper proposes MGCE, a microservice-based generic consensus engine for BaaS. It is a functional framework built on a microservices architecture that can load the appropriate consensus algorithm and perform the consensus process over transactions for blockchain nodes according to the scenario. The engine consists of a Consensus Node Group, Consensus Management, and four functional modules: Consensus Network, Transactions Pool, Consensus Protocol and Consensus Outcome. To meet the microservice objective, each functional module is deployed as an independent project running in its own process, which makes it easy to modify and test, and interacts with the other modules through HTTP APIs to complete the consensus process together. To test the function and performance of the engine, three common consensus algorithms have been implemented: Raft, BFT-SMART and PoW. Experiments show that the engine can load and use different consensus algorithms; a performance comparison between using the engine and using each consensus algorithm directly shows that the overhead introduced by the engine is acceptable; and running the three algorithms as separate projects confirms that the engine architecture satisfies the microservice objective.
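The following sketch illustrates the pluggable-consensus idea behind MGCE under stated assumptions: each algorithm implements a common interface and the engine loads one per scenario. Class names such as ConsensusProtocol and the in-process registry are illustrative; in the real engine each protocol runs as its own microservice behind HTTP APIs.

```python
# Minimal sketch of a pluggable consensus interface as described for MGCE:
# each algorithm is a separate module exposing the same operations, and the
# engine selects one per scenario. Names here are illustrative only.
from abc import ABC, abstractmethod

class ConsensusProtocol(ABC):
    @abstractmethod
    def propose(self, txs: list[str]) -> str:
        """Run one round of consensus over a batch of transactions."""

class RaftLike(ConsensusProtocol):
    def propose(self, txs):
        return f"raft-committed: {len(txs)} txs"

class PoWLike(ConsensusProtocol):
    def propose(self, txs):
        return f"pow-mined-block: {len(txs)} txs"

REGISTRY = {"raft": RaftLike, "pow": PoWLike}

def load_protocol(scenario: str) -> ConsensusProtocol:
    # In the real engine each protocol would be reached over HTTP as its own
    # microservice; here it is just an in-process lookup for illustration.
    return REGISTRY[scenario]()

engine = load_protocol("raft")
print(engine.propose(["tx1", "tx2"]))
```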
ISBN (digital): 9798350353952
ISBN (print): 9798350353969
Vehicular Ad hoc Networks (VANETs) allow information exchange between vehicles. In VANETs, video is the most recommended medium for transmitting information. However, the characteristics of VANETs and the quality requirements of video content pose various challenges. Moreover, the movement of vehicles introduces complex motions into videos, making them more difficult to compress. It is therefore challenging to transmit video streams with as few bits as possible while maintaining their visual quality. We analyze Affine Motion Estimation (AME), a new coding tool in the latest-generation video coding standard H.266/VVC, and propose a fast algorithm that can describe complex motions such as rotation, zoom and shear at low complexity. Experimental results show that, compared with the VTM21.0 anchor, the algorithm reduces the encoding complexity of AME by 20.19% and the overall encoding complexity by 3.25%, with negligible coding performance loss under Random Access (RA) configurations.
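For readers unfamiliar with AME, the sketch below shows the standard 4-parameter affine motion model that the tool estimates: a motion field defined by two control-point motion vectors. The paper's fast search itself is not reproduced, and the numeric inputs are arbitrary examples.

```python
# Sketch of the 4-parameter affine motion model used by AME in VVC: two
# control-point motion vectors (top-left mv0, top-right mv1) define a
# rotation/zoom motion field over the block. Only the model is shown here,
# not the proposed fast search.
def affine_mv(x, y, mv0, mv1, width):
    """Motion vector at position (x, y) inside a block of the given width."""
    a = (mv1[0] - mv0[0]) / width
    b = (mv1[1] - mv0[1]) / width
    mvx = a * x - b * y + mv0[0]
    mvy = b * x + a * y + mv0[1]
    return mvx, mvy

# Per-4x4-subblock MVs, sampled at subblock centers (example numbers only).
mv0, mv1, width, height = (2.0, 1.0), (6.0, 3.0), 16, 16
field = [[affine_mv(sx + 2, sy + 2, mv0, mv1, width)
          for sx in range(0, width, 4)]
         for sy in range(0, height, 4)]
print(field[0][0], field[-1][-1])
```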
ISBN (digital): 9798331531409
ISBN (print): 9798331531416
In a Loss of Coolant Accident (LOCA), reactor core temperatures can rise rapidly, posing a significant threat to fuel integrity and potentially leading to the release of radioactive materials. This research presents a method that combines Monte Carlo sampling and Physics-Informed Neural Networks (PINNs) to simulate and effectively address LOCAs in nuclear reactors. By leveraging the strengths of both Monte Carlo sampling and PINNs, the approach aims to provide a comprehensive and accurate simulation framework for assessing and mitigating the consequences of such accidents. The method yields high prediction accuracy (MAE: 0.033, RMSE: 0.098, R2: 0.814) and demonstrates robustness through transfer learning, maintaining strong performance (MAE: 0.064, RMSE: 0.163, R2: 0.735).
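A minimal sketch of the PINN ingredient is shown below, assuming a 1-D heat equation as a stand-in for the actual thermal-hydraulic equations; the network size, diffusivity value, and randomly generated data are placeholders, with collocation points drawn by Monte Carlo sampling in the spirit of the paper's framework.

```python
# Minimal PINN sketch: a network T(x, t) trained on sampled data points plus a
# PDE residual. A 1-D heat equation stands in for the real LOCA physics.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
alpha = 0.1  # assumed thermal diffusivity (placeholder value)

def physics_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    T = net(torch.cat([x, t], dim=1))
    T_t = torch.autograd.grad(T.sum(), t, create_graph=True)[0]
    T_x = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    T_xx = torch.autograd.grad(T_x.sum(), x, create_graph=True)[0]
    return T_t - alpha * T_xx                 # residual of T_t = alpha * T_xx

# Monte Carlo sampling of collocation points and placeholder training data.
x, t = torch.rand(256, 1), torch.rand(256, 1)
x_d, t_d, T_d = torch.rand(64, 1), torch.rand(64, 1), torch.rand(64, 1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    loss = (physics_residual(x, t).pow(2).mean()
            + (net(torch.cat([x_d, t_d], dim=1)) - T_d).pow(2).mean())
    opt.zero_grad(); loss.backward(); opt.step()
```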
Since data outsourcing raises privacy concerns about data leakage, searchable symmetric encryption (SSE) has emerged as a powerful solution that enables clients to perform query operations on encrypted data while preserving privacy. Dynamic SSE schemes have been proposed to handle update operations; however, updates can increase the risk of information leakage. Meanwhile, to meet the requirements of real-world applications, it is desirable to have a searchable encryption scheme that supports both multiple clients and multi-keyword queries. To address these issues, this paper proposes MMDSSE, a multi-client forward-secure dynamic SSE scheme that supports multi-keyword queries. MMDSSE allows clients to narrow down the results to an arbitrary subset of the entire archive and is thus suitable for cloud storage environments. Security analysis and experimental evaluations show that MMDSSE is secure and efficient.
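The snippet below sketches the generic forward-secure update pattern used by dynamic SSE schemes, not MMDSSE's actual construction: each addition for a keyword is stored under a fresh pseudorandom label derived from a per-keyword counter, so previously issued search tokens cannot match future updates. All names and the plaintext file identifiers are simplifications for illustration.

```python
# Simplified forward-secure DSSE update sketch (NOT MMDSSE's construction):
# every update gets a fresh HMAC-derived label, so old tokens cannot be
# linked to additions made after a search.
import hmac, hashlib, os

class Client:
    def __init__(self):
        self.key = os.urandom(32)
        self.counters = {}                        # per-keyword update counter

    def add(self, keyword: str, file_id: str):
        c = self.counters.get(keyword, 0) + 1
        self.counters[keyword] = c
        label = hmac.new(self.key, f"{keyword}|{c}".encode(), hashlib.sha256).hexdigest()
        return label, file_id                     # a real scheme encrypts file_id

    def search_token(self, keyword: str):
        c = self.counters.get(keyword, 0)
        return [hmac.new(self.key, f"{keyword}|{i}".encode(), hashlib.sha256).hexdigest()
                for i in range(1, c + 1)]         # labels of past updates only

server_index = {}
client = Client()
server_index.update([client.add("nuclear", "doc7"), client.add("nuclear", "doc9")])
print([server_index[t] for t in client.search_token("nuclear")])   # ['doc7', 'doc9']
```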
ISBN (print): 9781509035595
Online social networks have become an important medium for information spreading. To trace the origin of rumors and understand the cause of events, the information source in a social network should be identified. Unfortunately, because of the huge and complex structure of social networks, it is nearly impossible to monitor a complete online social network. To solve this problem, this paper presents a method for locating the information source under incomplete observation. The method is based on the correlation between activation times and the network topology. Experiments are conducted and the effect of different sampling methods is analyzed as well. Results on BA, WS and Facebook networks show that the method produces satisfactory results under incomplete observation.
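As a toy illustration of the underlying idea, assuming a made-up graph and partial observations rather than the paper's data, the sketch below ranks candidate sources by how strongly the observed activation times correlate with hop distances from each candidate.

```python
# Toy sketch: under incomplete observation, rank each candidate source by the
# correlation between observed activation times of monitored nodes and their
# hop distances from that candidate. Inputs are made-up, not the paper's data.
from collections import deque
from statistics import correlation   # requires Python 3.10+

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
observed_times = {1: 1.0, 3: 2.1, 4: 3.0}       # only some nodes are monitored

def hop_distances(graph, src):
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def rank_sources(graph, observed_times):
    scores = {}
    for cand in graph:
        d = hop_distances(graph, cand)
        xs = [d[n] for n in observed_times]
        ys = [observed_times[n] for n in observed_times]
        scores[cand] = correlation(xs, ys)       # higher = more consistent source
    return sorted(scores, key=scores.get, reverse=True)

print(rank_sources(graph, observed_times))       # the true source ranks near the top
```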
Owing to the characteristics of stream applications and the shortcomings of conventional processors when running stream programs, stream processors that support data-level parallelism have become a research hotspot. This paper presents two techniques, stream partition (SP) and stream compression (SC), for optimizing streams on Imagine. Simulation results show that SP and SC allow stream applications to take full advantage of the parallel clusters, pipelines and three-level memory hierarchy of the Imagine processor, thereby reducing the execution time of stream programs.
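A toy sketch of the stream partition (SP) idea is given below, under the assumption that the working set is simply split into batches that fit on chip; the capacity value and the square kernel are illustrative and unrelated to Imagine's actual SRF size or kernels.

```python
# Toy sketch of stream partition (SP): a stream too large for the on-chip
# Stream Register File is split into batches that each fit, so kernels run
# over one batch at a time instead of spilling to off-chip memory.
def stream_partition(stream, srf_capacity):
    """Yield consecutive sub-streams no larger than the on-chip capacity."""
    for start in range(0, len(stream), srf_capacity):
        yield stream[start:start + srf_capacity]

def kernel(batch):
    return [x * x for x in batch]          # stand-in for a compute kernel

stream = list(range(10_000))
result = [y for batch in stream_partition(stream, srf_capacity=2048)
          for y in kernel(batch)]
assert result == [x * x for x in stream]
```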