An abundance of data has been generated by various embedded devices, applications, and systems, and requires cost-efficient storage services. Data deduplication removes duplicate chunks and has become an important technique for storage systems to improve space efficiency. However, the stored unique chunks are heavily fragmented, which decreases restore performance and incurs high overheads for garbage collection. Existing schemes fail to achieve an efficient trade-off among deduplication, restore, and garbage collection performance, because they fail to explore and exploit the physical locality of different chunks. In this paper, we trace the storage patterns of fragmented chunks in backup systems and propose a high-performance deduplication system, called HiDeStore. The main insight is to enhance the physical locality of new backup versions during the deduplication phase, which identifies and stores hot chunks in active containers. The chunks not appearing in new backups become cold and are gathered together in archival containers. Moreover, we remove expired data with an isolated container deletion scheme, avoiding the high overheads of expired-data detection. Compared with state-of-the-art schemes, HiDeStore improves deduplication and restore performance by up to 1.4x and 1.6x, respectively, without decreasing deduplication ratios or incurring high garbage collection overheads.
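The hot/cold split at the core of this abstract can be illustrated with a short sketch. It assumes chunks are identified by content fingerprints and that the deduplication phase checks which previously stored fingerprints reappear in the new backup version; the function names below are hypothetical, not taken from the paper.

```python
import hashlib

def fingerprint(chunk: bytes) -> str:
    """Content-defined identity of a chunk (SHA-1 of its bytes)."""
    return hashlib.sha1(chunk).hexdigest()

def classify_chunks(stored_fps, new_backup_chunks):
    """Split previously stored fingerprints into hot ones (they reappear
    in the new backup, so they stay in active containers) and cold ones
    (absent from it, so they are gathered into archival containers)."""
    new_fps = {fingerprint(c) for c in new_backup_chunks}
    hot = [fp for fp in stored_fps if fp in new_fps]
    cold = [fp for fp in stored_fps if fp not in new_fps]
    return hot, cold
```

Grouping cold fingerprints by backup version would then let a whole archival container be dropped when that version expires, which is the intuition behind the isolated container deletion scheme.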
This paper investigates the input-to-state stabilization of discrete-time Markov jump systems. A quantized control scheme that includes coding and decoding procedures is proposed. The relationship between the error in...
Zero-watermarking methods provide a lossless means of copyright protection and have been adopted to protect medical images, which require high integrity. However, most existing studies have focused only on robustness, and there has be...
The visual noise of each light-intensity area differs when an image is rendered by Monte Carlo methods. However, existing denoising algorithms have limited denoising performance under complex lighting conditions and easily lose detailed features. Therefore, we propose a rendered-image denoising method with filtering guided by lighting information. First, we design an image segmentation algorithm based on lighting information to segment the image into different illumination areas. Then, we establish the parameter prediction model guided by lighting information for filtering (PGLF) to predict the filtering parameters of the different illumination areas. For the different illumination areas, we use these filtering parameters to construct area filters, and the filters are guided by the lighting information to perform sub-area filtering. Finally, the filtering results are fused with auxiliary features to output denoised images, improving the overall denoising effect of the image. On the physically based rendering tool (PBRT) scenes and the Tungsten dataset, the experimental results show that, compared with other guided-filtering denoising methods, our method improves the peak signal-to-noise ratio (PSNR) by 4.2164 dB on average and the structural similarity index (SSIM) by 7.8% on average. This shows that our method can better reduce the noise in complex lighting scenes and improve the image quality.
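The segment-then-filter idea can be sketched in one dimension: label each pixel's illumination area, pick a filter parameter per area (here a hand-written stand-in for the learned PGLF predictor), and average only over neighbours in the same area. Everything below is an illustrative toy, not the paper's actual model.

```python
def segment_by_intensity(pixels, threshold=128):
    """Label each pixel's illumination area: 0 = dark, 1 = bright."""
    return [1 if p >= threshold else 0 for p in pixels]

def predict_filter_width(area_label):
    # stand-in for the learned parameter predictor:
    # smooth dark (typically noisier) areas more aggressively
    return 5 if area_label == 0 else 3

def denoise(pixels, threshold=128):
    """Filter each pixel with an area-specific window, restricted to
    neighbours that belong to the same illumination area."""
    labels = segment_by_intensity(pixels, threshold)
    out = []
    for i, lab in enumerate(labels):
        w = predict_filter_width(lab)
        nbrs = [pixels[j]
                for j in range(max(0, i - w), min(len(pixels), i + w + 1))
                if labels[j] == lab]
        out.append(sum(nbrs) / len(nbrs))
    return out
```

Restricting the window to the pixel's own area is what keeps a strong filter in a dark region from blurring across the boundary into a bright one.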
In differentiable architecture search methods, a more efficient search space design can significantly improve the performance of the searched architecture, thus requiring people to carefully define search spaces of different complexity according to various needs. Meanwhile, rationalizing the search strategy to explore the well-defined search space will further improve the speed and efficiency of architecture search. With this in mind, we propose a faster and more efficient differentiable architecture search method. First, we introduce a more efficient search space enriched by two redefined convolution operations. Second, we utilize a more efficient architectural parameter regularization method, mitigating the overfitting problem during the search process and reducing the error brought about by gradient approximation. Third, we introduce a natural exponential cosine annealing method to make the learning rate of the neural network training process more suitable for the search procedure. Furthermore, group convolution and data augmentation are employed to reduce the computational cost. Finally, through extensive experiments on several public datasets, we demonstrate that our method can more swiftly search for better-performing neural network architectures in a more efficient search space, thus validating the effectiveness of our approach.
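A plausible form of the "natural exponential cosine annealing" schedule is a standard cosine annealing curve damped by an exponential decay term. The abstract does not give the exact formula, so the expression below is an assumption for illustration only.

```python
import math

def nat_exp_cosine_lr(step, total_steps, lr_max=0.025, lr_min=0.001):
    """Cosine annealing damped by a natural-exponential factor.

    Illustrative formula (assumed, not the paper's exact schedule):
    lr(t) = lr_min + (lr_max - lr_min) * exp(-t/T) * (1 + cos(pi*t/T)) / 2
    """
    cos_term = 0.5 * (1 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_max - lr_min) * math.exp(-step / total_steps) * cos_term
```

Compared with plain cosine annealing, the exponential factor pushes the learning rate down faster in the middle of the run while keeping the same endpoints, which matches the stated goal of tailoring the schedule to a short search phase.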
Inductive knowledge graph embedding (KGE) aims to embed unseen entities in emerging knowledge graphs (KGs). Most recent studies of inductive KGE embed unseen entities by aggregating information from their neighboring entities and relations with graph neural networks (GNNs). However, these methods rely on the existing neighbors of unseen entities and suffer from two common problems: data sparsity and feature smoothing. First, the data sparsity problem means that unseen entities usually emerge with few triplets containing insufficient information. Second, the effectiveness of the features extracted from the original KGs degrades when these features are repeatedly propagated to represent unseen entities in emerging KGs, which is termed the feature smoothing problem. To tackle the two problems, we propose a novel model entitled Meta-Learning Based Memory Graph Convolutional Network (MMGCN), consisting of three components: 1) the two-layer information transforming module (TITM), developed to effectively transform information from original KGs to emerging KGs; 2) the hyper-relation feature initializing module (HFIM), proposed to extract type-level features shared between KGs and obtain a coarse-grained representation for each entity with these features; and 3) the meta-learning training module (MTM), designed to simulate few-shot emerging KGs and train the model in a meta-learning manner. The extensive experiments conducted on the few-shot link prediction task for emerging KGs demonstrate the superiority of our proposed model MMGCN compared with state-of-the-art methods.
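The neighbour-aggregation setting the abstract starts from can be shown with a minimal sketch: an unseen entity is initialised by averaging translated embeddings of its few known neighbours (TransE-style, h ≈ t − r). This is a generic toy stand-in for GNN aggregation, not the MMGCN model itself.

```python
def embed_unseen(neighbors, entity_emb, relation_emb, dim=2):
    """Initialise an unseen entity from (relation, tail) neighbour pairs
    by averaging t - r over all neighbours (TransE-style translation)."""
    if not neighbors:
        return [0.0] * dim  # data sparsity: no neighbours, no signal
    acc = [0.0] * dim
    for rel, ent in neighbors:
        for k in range(dim):
            acc[k] += entity_emb[ent][k] - relation_emb[rel][k]
    return [v / len(neighbors) for v in acc]
```

The two failure modes named in the abstract are visible even here: with few neighbours the average is noisy (sparsity), and stacking such averaging layers drives representations toward one another (smoothing).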
Existing methods in article recommendation fail to make full use of article information, or pay little attention to the correlations among articles and "User-Article" pairs, resulting in inaccurate recommendation perf...
Zero-shot learning enables the recognition of new-class samples by migrating models learned from semantic features and existing sample features to things that have never been seen before. The consistency of different types of features and the domain shift problem are two of the critical issues in zero-shot learning. To address both of these issues, this paper proposes a new modeling structure. The traditional approach maps semantic features and visual features into the same feature space; based on this, a dual-discriminator approach is used in the proposed model. This dual-discriminator approach can further enhance the consistency between semantic and visual features. At the same time, it can also align unseen-class semantic features with training-set samples, providing a portion of the information about the unseen classes. In addition, a new feature fusion method is proposed in the model. This method is equivalent to adding perturbation to the seen-class features, which can reduce the degree to which the classification results of the model are biased towards the seen classes. At the same time, this feature fusion method can provide part of the information of the unseen classes, improving classification accuracy in generalized zero-shot learning and reducing domain bias. The proposed method is validated and compared with other methods on four datasets, and the experimental results show that the method proposed in this paper achieves promising results.
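The perturbation view of feature fusion can be illustrated with a convex combination: mixing a little semantic information into a seen-class visual feature nudges it away from the seen-class cluster. The function and the mixing weight below are hypothetical illustrations, not the paper's fusion scheme.

```python
def fuse_features(visual, semantic, alpha=0.7):
    """Convex combination of a visual feature and a semantic feature.

    The (1 - alpha) semantic share acts as a perturbation on the visual
    feature, injecting class-description information and reducing the
    bias of the classifier toward seen classes."""
    return [alpha * v + (1 - alpha) * s for v, s in zip(visual, semantic)]
```

With alpha close to 1 the fused feature stays near the original visual sample; lowering alpha trades fidelity to seen-class statistics for more unseen-class information, which is the bias/accuracy trade-off the abstract describes.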
Graph convolutional networks (GCNs) have gained increasing attention in graph data learning tasks in recent years. However, in many applications, a graph may come in an incomplete form where the attributes of graph nodes are par...
The discourse analysis task, which focuses on understanding the semantics of long text spans, has received increasing attention in recent years. As a critical component of discourse analysis, discourse relation recognition aims to identify the rhetorical relations between adjacent discourse units (e.g., clauses, sentences, and sentence groups), called arguments, in a discourse. Previous works focused on capturing the semantic interactions between arguments to recognize their discourse relations, ignoring important textual information in the surrounding contexts. However, in many cases, capturing semantic interactions from the texts of the two arguments alone is not enough to identify their rhetorical relations, and more contextual information needs to be mined. In this paper, we propose a method to convert the RST-style discourse trees in the training set into dependency-based trees and train a contextual evidence selector on these transformed trees. In this way, the selector can learn the ability to automatically pick critical textual information from the context (i.e., as evidence) for arguments to assist in discriminating their relations. Then we encode the arguments concatenated with the corresponding evidence to obtain enhanced argument representations. Finally, we combine the original and enhanced argument representations to recognize their relations. In addition, we introduce auxiliary tasks to guide the training of the evidence selector to strengthen its selection ability. The experimental results on the Chinese CDTB dataset show that our method outperforms several state-of-the-art baselines in both micro and macro F1 scores.
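The select-then-concatenate step can be sketched with a simple lexical scorer: rank context sentences by word overlap with the two arguments and keep the top-k as evidence. This stand-in replaces the paper's trained selector with plain overlap counting, purely for illustration.

```python
def select_evidence(arg1, arg2, context_sentences, k=1):
    """Return the k context sentences sharing the most words with the
    two arguments -- a lexical stand-in for the trained evidence
    selector. The chosen sentences would then be concatenated with the
    arguments before encoding."""
    arg_words = set(arg1.split()) | set(arg2.split())
    scored = sorted(context_sentences,
                    key=lambda s: len(arg_words & set(s.split())),
                    reverse=True)
    return scored[:k]
```

In the actual method the selector is trained on dependency-based trees derived from RST-style annotations, so it can learn discourse-level cues that pure word overlap misses.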