ISBN (digital): 9781728143286
ISBN (print): 9781728143293
Class-incremental learning has received wide attention due to its better adaptability to the changing characteristics of online learning. However, data cannot be shared between organizations in the data-isolated-islands scenario, and existing solutions cannot adapt to incremental classes without aggregating data. In this paper, we propose a distributed class-incremental learning framework called DCIGAN. It uses GAN generators to store the information of past data and continuously updates the GAN parameters with new data. In particular, we propose CIGAN to ensure that the distribution of generated pseudo data on a single node is as close as possible to that of the real data, which guarantees the accuracy of class-incremental learning. Furthermore, we propose GF, a generator fusion method that integrates the local generators of multiple nodes into a new global generator. To evaluate the performance of DCIGAN, we conduct experiments on six datasets under various parameter settings in both two-node and multi-node distributed scenarios. Extensive experiments confirm that DCIGAN outperforms the general baselines and achieves classification accuracy close to that of the data-aggregating method.
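The abstract above describes generative replay: pseudo data sampled from a stored generator stands in for past raw data when new classes arrive. The following is a minimal, hedged sketch of that idea only; `old_generator`, the toy nearest-centroid classifier, and all data are hypothetical illustrations, not DCIGAN's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def old_generator(n):
    """Stand-in for a trained GAN generator: emits pseudo samples
    of the previously learned class (points near the origin)."""
    return rng.normal(loc=0.0, scale=1.0, size=(n, 2)), np.zeros(n, dtype=int)

# New-class data arrives; old raw data is unavailable (isolated islands).
new_x = rng.normal(loc=5.0, scale=1.0, size=(100, 2))
new_y = np.ones(100, dtype=int)

# Generative replay: mix pseudo old-class data with real new-class data.
pseudo_x, pseudo_y = old_generator(100)
train_x = np.vstack([pseudo_x, new_x])
train_y = np.concatenate([pseudo_y, new_y])

# Toy classifier: nearest class centroid over the mixed training set.
centroids = {c: train_x[train_y == c].mean(axis=0) for c in (0, 1)}

def predict(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(predict(np.array([0.1, -0.2])), predict(np.array([5.2, 4.8])))
```

The point of the sketch is that the classifier never touches the original old-class samples, only generator output, which is what allows the framework to avoid aggregating raw data across nodes.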
The worldwide adoption of cloud data centers (CDCs) has given rise to the ubiquitous demand for hosting application services on the cloud. Further, contemporary data-intensive industries have seen a sharp upsurge in t...
ISBN (digital): 9781728199160
ISBN (print): 9781728192369
Domain adaptation aims to transfer knowledge across different domains and bridge the gap between them. While traditional knowledge transfer assumes identical label spaces, a more realistic scenario is to transfer from a larger and more diverse source domain to a smaller target domain, which is referred to as partial domain adaptation (PDA). However, matching the whole source domain to the target domain in PDA may produce negative transfer; samples in the shared classes should be carefully selected to mitigate it. We observe that the correlations between different target domain samples and source domain samples are diverse: classes are not equally correlated, and moreover, different samples have different correlation strengths even when they belong to the same class. In this study, we propose ECDT, a novel PDA method that Exploits the Correlation Diversity for knowledge Transfer between different domains. We propose a novel method to estimate the target domain label space that utilizes the label distribution and feature distribution of target samples, based on which outlier source classes can be filtered out and their negative effects on transfer mitigated. Moreover, ECDT combines class-level correlation and instance-level correlation to quantify sample-level transferability in a domain adversarial network. Experimental results on three commonly used cross-domain object datasets show that ECDT is superior to previous partial domain adaptation methods.
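One ingredient the abstract describes, estimating the target label space from target-sample predictions and filtering outlier source classes, can be sketched very simply. The prediction matrix, the averaging heuristic, and the 0.5 threshold below are illustrative assumptions, not ECDT's actual estimator.

```python
import numpy as np

# Hypothetical soft predictions of a source-trained classifier on target
# samples: rows = target samples, columns = 4 source classes, of which
# the target domain actually contains only the first two.
probs = np.array([
    [0.7, 0.2, 0.05, 0.05],
    [0.6, 0.3, 0.05, 0.05],
    [0.1, 0.8, 0.05, 0.05],
    [0.2, 0.7, 0.05, 0.05],
])

# Estimate the target label distribution by averaging predictions,
# then normalize so the most likely class has weight 1.
class_weight = probs.mean(axis=0)
class_weight /= class_weight.max()

# Source classes with little estimated mass are treated as outliers
# and filtered out, so they cannot cause negative transfer.
shared_classes = np.where(class_weight > 0.5)[0]
print(shared_classes.tolist())
```

In a full PDA pipeline these class weights would additionally reweight the adversarial alignment loss per sample, combining the class-level signal with instance-level transferability as the abstract describes.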
Temporal action localization is an important task of computer vision. Though many methods have been proposed, it still remains an open question how to predict the temporal location of action segments precisely. Most s...
The double-fetch problem occurs when data is maliciously changed between two kernel reads of supposedly the same data, which can cause serious security problems in the kernel. Previous research focused on double fetches between the kernel and user applications. In this paper, we present the first dedicated study of the double-fetch problem between the kernel and peripheral devices (a.k.a. the hardware double fetch). Operating systems communicate with peripheral devices by reading from and writing to device-mapped I/O (input and output) memory. Owing to the lack of effective validation of the attached hardware, compromised hardware could flip the data between two reads of the same I/O memory address, causing a double-fetch problem. We propose a static pattern-matching approach to identify hardware double fetches in the Linux kernel. Our approach can analyze the entire kernel without relying on the corresponding hardware. The results are categorized, and each category is analyzed through case studies to discuss the possibility of causing bugs. We also found four previously unknown double-fetch vulnerabilities, which have been confirmed and fixed after we reported them to the maintainers.
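The core pattern the abstract targets, two reads of the same I/O memory address within one function, can be illustrated with a toy static matcher. The regex, the `readl()` focus, and the embedded driver snippet are simplified assumptions; the paper's analysis of the real Linux kernel is far more involved.

```python
import re
from collections import Counter

# Toy C driver snippet containing the basic hardware double-fetch
# pattern: the length register is read twice, and a compromised device
# could change its value between the validation and the use.
code = """
static int probe(void __iomem *base)
{
    u32 len = readl(base + LEN_REG);   /* first fetch: validated */
    if (len > MAX_LEN)
        return -EINVAL;
    len = readl(base + LEN_REG);       /* second fetch: used unchecked */
    copy_data(base, len);
    return 0;
}
"""

# Simplified pattern matching: collect every readl() argument and flag
# any address expression that is fetched more than once.
reads = re.findall(r"readl\(([^)]*)\)", code)
double_fetches = [addr for addr, n in Counter(reads).items() if n >= 2]
print(double_fetches)
```

A real checker would also track aliases of the address expression and intervening writes; this sketch only shows why a purely textual pass over the kernel source can find candidates without the hardware attached.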
Code review is an important process to reduce code defects and improve software quality. In social coding communities like GitHub, where everyone can submit Pull-Requests, code review plays a more important role than ever before, and the process is quite time-consuming. Therefore, finding and recommending proper reviewers for emerging Pull-Requests becomes a vital task. However, most current studies mainly focus on recommending reviewers by checking whether they will participate or not, without differentiating the types of participation. In this paper, we develop a two-layer reviewer recommendation model to recommend reviewers for Pull-Requests (PRs) in GitHub projects from the technical and managerial perspectives. For the first layer, we recommend suitable developers to review the target PRs based on a hybrid recommendation method. For the second layer, after getting the recommendation results from the first layer, we specify whether the target developer will participate in the reviewing process technically or managerially. We conducted experiments on two popular projects in GitHub and tested the approach using PRs created between February 2016 and February 2017. The results show that the first layer of our recommendation model performs better than previous work, and the second layer can effectively differentiate the types of participation.
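The first-layer "hybrid recommendation method" mentioned above can be sketched as a weighted combination of signals, for example file-path overlap with a candidate's review history plus recent review activity. The candidates, signals, and weights below are purely hypothetical stand-ins, not the paper's actual features or model.

```python
# Files touched by the incoming PR (hypothetical example data).
pr_files = {"src/net/http.go", "src/net/url.go"}

candidates = {
    "alice": {"reviewed_files": {"src/net/http.go", "src/io/pipe.go"},
              "reviews_90d": 12},
    "bob":   {"reviewed_files": {"docs/readme.md"},
              "reviews_90d": 30},
}

def hybrid_score(c, w_sim=0.7, w_act=0.3):
    # Jaccard similarity between the PR's files and the candidate's
    # previously reviewed files (expertise signal).
    sim = len(pr_files & c["reviewed_files"]) / len(pr_files | c["reviewed_files"])
    # Review count over the last 90 days, capped at 1 (activity signal).
    act = min(c["reviews_90d"] / 30, 1.0)
    return w_sim * sim + w_act * act

ranking = sorted(candidates, key=lambda name: hybrid_score(candidates[name]),
                 reverse=True)
print(ranking)
```

The second layer would then take each recommended developer and classify the expected participation type (technical review comments vs. managerial actions such as merging), which this sketch leaves out.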
Cloud computing enables remote execution of users' tasks. The pervasive adoption of cloud computing in smart cities' services and applications requires timely execution of tasks adhering to Quality of Services...
Today's smart buildings involve highly intensive information and massive numbers of operational parameters, not only extensive power consumption. With the development of computation capability and future 5G, the ACP theory (i.e., artificial systems, computational experiments, and parallel computing) will play a much more crucial role in the modeling and control of complex systems such as commercial and academic buildings. Making accurate predictions of energy consumption from a large number of operational parameters has become a crucial problem in smart buildings. Previous attempts have sought energy consumption predictions based on historical building data. However, questions remain about a parallel building-consumption prediction mechanism that uses a large number of operational parameters. This article proposes a novel hybrid deep learning prediction approach that uses a long short-term memory network as the encoder and a gated recurrent unit as the decoder, in conjunction with ACP theory. The proposed approach is tested and validated on a real-world dataset, and its results outperform the traditional predictive models compared in this paper.
Following trails in the wild is an essential capability of outdoor autonomous mobile robots. Recently, deep learning-based approaches have made great advancements in this field. However, the existing research only foc...
ISBN (digital): 9781728190747
ISBN (print): 9781728183824
Deduplication is a data-redundancy elimination technique designed to save system storage resources by reducing redundant data in cloud storage systems. With the development of cloud computing technology, deduplication has been increasingly applied in cloud data centers. However, traditional technologies face great challenges in big-data deduplication in properly balancing two conflicting goals: deduplication throughput and a high duplicate-elimination ratio. This paper proposes a similarity clustering-based deduplication strategy (named SCDS), which aims to delete more duplicate data without significantly increasing system overhead. The main idea of SCDS is to narrow the query range of the fingerprint index through data partitioning and similarity clustering algorithms. In the data preprocessing stage, SCDS uses a data partitioning algorithm to group similar data together. In the data deletion stage, the similarity clustering algorithm divides similar data fingerprint superblocks into the same cluster, and duplicate fingerprints are detected within that cluster to speed up their retrieval. Experiments show that the deduplication ratio of SCDS is better than that of some existing similarity deduplication algorithms, while its overhead is only slightly higher than that of some high-throughput but low-deduplication-ratio methods.