3D printed glass optical fiber preforms have been fabricated from silica-containing resin. The fabrication process includes resin preparation, preform printing, debinding, and sintering. Results demonstrate the silica ...
Deduplication has been commonly used in both enterprise storage systems and cloud storage. To overcome the performance challenge for the selective restore operations of deduplication systems, a solid-state-drive-based (i.e., SSD-based) read cache can be deployed for speeding up by caching popular restore contents dynamically. Unfortunately, frequent data updates induced by classical cache schemes (e.g., LRU and LFU) significantly shorten SSDs' lifetime while slowing down I/O processes in SSDs. To address this problem, we propose a new solution -- LOP-Cache -- to greatly improve the write durability of SSDs as well as I/O performance by enlarging the proportion of long-term popular (LOP) data among data written into the SSD-based cache. LOP-Cache keeps LOP data in the SSD cache for a long time period to decrease the number of cache replacements. Furthermore, it prevents unpopular or unnecessary data in deduplication containers from being written into the SSD cache. We implemented LOP-Cache in a prototype deduplication system to evaluate its performance. Our experimental results indicate that LOP-Cache shortens the latency of selective restore by an average of 37.3% at the cost of a small SSD-based cache with only 5.56% capacity of the deduplicated data. Importantly, LOP-Cache improves SSDs' lifetime by a factor of 9.77. The evidence shows that LOP-Cache offers a cost-efficient SSD-based read cache solution to boost performance of selective restore for deduplication systems.
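The admission idea described in the abstract can be sketched as a toy popularity-gated read cache: items are admitted to the SSD only after they have proven popular, so cold data never triggers an SSD write. This is a hypothetical simplification for illustration, not the paper's actual implementation; the class name and threshold parameter are invented here.

```python
from collections import OrderedDict, Counter

class PopularityAdmissionCache:
    """Toy popularity-gated SSD read cache: only items with proven
    long-term popularity are admitted, so cold data never causes an
    SSD write (a simplification of the LOP-Cache idea)."""

    def __init__(self, capacity, admit_threshold=3):
        self.capacity = capacity
        self.admit_threshold = admit_threshold  # accesses required before admission
        self.access_counts = Counter()          # long-term popularity statistics
        self.cache = OrderedDict()              # key -> cached restore content
        self.ssd_writes = 0                     # proxy for SSD wear

    def get(self, key, fetch_from_hdd):
        self.access_counts[key] += 1
        if key in self.cache:
            self.cache.move_to_end(key)         # LRU ordering among admitted items
            return self.cache[key]
        data = fetch_from_hdd(key)
        # Admit only long-term popular data, limiting SSD write traffic.
        if self.access_counts[key] >= self.admit_threshold:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used item
            self.cache[key] = data
            self.ssd_writes += 1
        return data
```

Under a plain LRU policy every miss would write to the SSD; here the `ssd_writes` counter only grows for keys that have been requested repeatedly, which is the mechanism the abstract credits for the lifetime improvement.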
Feature representation learning is a research focus in domain adaptation. Recently, due to its fast training speed, the marginalized Denoising Autoencoder (mDA), as an outstanding deep learning model, has been widely utilized for feature representation learning. However, the training of mDA suffers from the lack of nonlinear relationships and does not explicitly consider the distribution discrepancy between domains. To address these problems, this paper proposes a novel method for feature representation learning, namely Nonlinear cross-domain Feature learning based on Dual Constraints (NFDC), which consists of kernelization and dual constraints. Firstly, we introduce kernelization to effectively extract nonlinear relationships in feature representation learning. Secondly, we design dual constraints, including Maximum Mean Discrepancy (MMD) and Manifold Regularization (MR), in order to minimize distribution discrepancy during the training process. Experimental results show that our approach is superior to several state-of-the-art methods in domain adaptation tasks.
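The MMD constraint mentioned in the abstract is a standard quantity for measuring distribution discrepancy. A minimal empirical estimate with an RBF kernel might look like the following generic sketch (this is textbook MMD, not the paper's NFDC code; the `gamma` bandwidth is an assumed hyperparameter):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def mmd2(Xs, Xt, gamma=1.0):
    """Biased empirical estimate of squared Maximum Mean Discrepancy
    between source samples Xs and target samples Xt."""
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma).mean())
```

Minimizing this quantity during training pulls the source and target feature distributions together in the kernel-induced space, which is the role the dual constraints play in the abstract.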
We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa. Existing methods utilize syntax of text either in the pre-training stage or in the fine-tuning...
Purpose: This work aims to develop a novel distortion-free 3D-EPI acquisition and image reconstruction technique for fast and robust, high-resolution, whole-brain imaging as well as quantitative T2* mapping. Methods: ...
With the popularity of multimedia technology, information is always represented or transmitted from multiple views. Most of the existing algorithms are graph-based ones to learn the complex structures within multivi...
ISBN (digital): 9781728171685
ISBN (print): 9781728171692
An integral part of video analysis and surveillance is temporal activity detection, which aims to simultaneously recognize and localize activities in long untrimmed videos. Currently, the most effective methods of temporal activity detection are based on deep learning, and they typically perform very well when trained with large-scale annotated videos. However, these methods are limited in real applications due to the unavailability of videos for certain activity classes and the time-consuming data annotation. To solve this challenging problem, we propose a novel task setting called zero-shot temporal activity detection (ZSTAD), where activities that have never been seen in training can still be detected. We design an end-to-end deep network based on R-C3D as the architecture for this solution. The proposed network is optimized with an innovative loss function that considers the embeddings of activity labels and their super-classes while learning the common semantics of seen and unseen activities. Experiments on both the THUMOS'14 and Charades datasets show promising performance in terms of detecting unseen activities.
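The label-embedding idea behind zero-shot detection can be illustrated with a generic scoring step: segment features are projected into the label-embedding space and compared against every class embedding, including classes unseen in training. This is a hypothetical simplification; the projection `W` is an assumed learned parameter, and the actual network in the paper builds on R-C3D.

```python
import numpy as np

def zero_shot_scores(features, label_embeddings, W):
    """Score video segments against activity classes via label embeddings.

    features:         (n_segments, feat_dim) segment features
    label_embeddings: (n_classes, embed_dim) semantic class embeddings
    W:                (feat_dim, embed_dim) learned projection (assumed)

    Returns cosine similarities of shape (n_segments, n_classes); unseen
    classes are scored the same way as seen ones, which is what makes
    zero-shot recognition possible.
    """
    projected = features @ W
    p = projected / np.linalg.norm(projected, axis=1, keepdims=True)
    e = label_embeddings / np.linalg.norm(label_embeddings, axis=1, keepdims=True)
    return p @ e.T
```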
Classification is a hot topic in fields such as machine learning and data mining. The traditional approach of machine learning is to find a classifier closest to the real classification function, while ensemble classification integrates the results of base classifiers and then makes an overall prediction. Compared to using a single classifier, ensemble classification can significantly improve the generalization of the learning system in most cases. However, the existing ensemble classification methods rarely consider the weight of each classifier, and few methods update the weights dynamically. In this paper, inspired by the idea of truth discovery, we propose a new ensemble classification method based on truth discovery (named ECTD). To the best of our knowledge, we are the first to apply the idea of truth discovery in the field of ensemble learning. Experimental results demonstrate that the proposed method performs well in ensemble classification.
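A truth-discovery-style ensemble can be sketched as joint estimation of consensus labels and classifier weights: classifiers that agree with the current consensus gain weight, and the weighted vote is recomputed. This is a textbook-style iteration for intuition, not necessarily the paper's exact ECTD algorithm.

```python
import numpy as np

def truth_discovery_ensemble(predictions, n_iters=10):
    """Jointly estimate consensus labels ('truths') and classifier weights.

    predictions: (n_classifiers, n_samples) array of integer class labels.
    Returns (consensus_labels, classifier_weights).
    """
    n_clf, n_samples = predictions.shape
    weights = np.ones(n_clf) / n_clf
    classes = np.unique(predictions)
    truth = predictions[0]
    for _ in range(n_iters):
        # Weighted vote: the consensus label maximizes total supporter weight.
        scores = np.zeros((len(classes), n_samples))
        for ci, c in enumerate(classes):
            scores[ci] = ((predictions == c) * weights[:, None]).sum(axis=0)
        truth = classes[np.argmax(scores, axis=0)]
        # Re-weight each classifier by its agreement with the consensus.
        agreement = (predictions == truth).mean(axis=1)
        weights = agreement / agreement.sum()
    return truth, weights
```

Unlike a fixed majority vote, the weights adapt: an unreliable base classifier is progressively discounted, which mirrors how truth discovery estimates source reliability without supervision.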
With the advent of the era of big data, information from multiple sources often conflicts because errors and fake information are inevitable. Therefore, how to obtain the most trustworthy or true information (i.e., the truth) people need has gradually become a troublesome problem. To meet this challenge, a novel technology named truth discovery, which can infer the truth and estimate the reliability of each source without supervision, has attracted more and more attention. However, most existing truth discovery methods only consider whether pieces of information are the same or different, rather than the fine-grained relations between them, such as inclusion, support, and mutual exclusion. Actually, this situation frequently arises in real-world applications. To tackle the aforementioned issue, we propose a novel truth discovery method named OTDCR, which can handle the fine-grained relations between pieces of information and infer the truth more effectively by modeling these relations. In addition, a novel method for processing abnormal values is applied in the preprocessing stage of truth discovery, specially designed for categorical data with such relations. Experiments on a real-world dataset show that our method is more effective than several outstanding methods.
A blockchain can be taken as a decentralized and distributed public database. In order to achieve data consistency among the system nodes, the execution of a consensus algorithm is necessary in decentralized environments. Simply speaking, consensus means that every node agrees on some record in the blockchain. There are many kinds of consensus algorithms in blockchain environments, and each has its own proper application scenario. Here we first analyze and compare various popular consensus algorithms in blockchain environments. Then, as voting theory has systematically studied decision-making in groups, the traditional methods of voting theory are summarized and listed, including (positional) scoring rules, Copeland, Maximin, Ranked pairs, Voting trees, Bucklin, Plurality with runoff, Single transferable vote, the Baldwin rule, and the Nanson rule. Finally, we introduce voting methods from voting theory into blockchain consensus algorithms to improve their performance.
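One of the listed rules, Copeland, is simple enough to sketch for intuition: each candidate earns +1 for every pairwise majority win and -1 for every loss, and the highest total wins. This is a generic implementation of the classical voting rule, not tied to any particular blockchain system.

```python
from itertools import combinations
from collections import Counter

def copeland_winner(ballots):
    """Copeland rule: +1 per pairwise majority win, -1 per loss.

    ballots: list of rankings (lists of candidates, most preferred first),
             all over the same candidate set.
    Returns the candidate with the highest Copeland score.
    """
    candidates = set(ballots[0])
    score = Counter({c: 0 for c in candidates})
    for a, b in combinations(candidates, 2):
        # A voter prefers a over b if a appears earlier in their ranking.
        a_over_b = sum(1 for r in ballots if r.index(a) < r.index(b))
        b_over_a = len(ballots) - a_over_b
        if a_over_b > b_over_a:
            score[a] += 1
            score[b] -= 1
        elif b_over_a > a_over_b:
            score[b] += 1
            score[a] -= 1
    return max(score, key=score.get)
```

In a consensus setting, the "ballots" could be node-reported preference orders over proposed blocks; Copeland then selects a block that beats the alternatives in head-to-head comparisons.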