Single particle cryo-electron microscopy (cryo-EM) is one of the most important methods for determining macromolecular structure. RELION (regularized likelihood optimization) is an open-source computer program for the refinement o...
In this study, we propose and compare stochastic variants of the extra-gradient alternating direction method, named the stochastic extra-gradient alternating direction method with Lagrangian function (SEGL) and the stochastic extra-gradient alternating direction method with augmented Lagrangian function (SEGAL), to minimize large-scale graph-guided optimization problems composed of two convex objective functions. A number of important applications in machine learning follow the graph-guided optimization formulation, such as linear regression, logistic regression, Lasso, structured extensions of Lasso, and structured regularized logistic regression. We conduct experiments on fused logistic regression and graph-guided regularized regression. Experimental results on several genres of datasets demonstrate that the proposed algorithms outperform other competing algorithms, and that SEGAL performs better than SEGL in practical use.
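As a rough illustration (a standard graph-guided formulation assumed here, not quoted from the paper), such problems pair a smooth convex loss with a graph-structured penalty, and the (augmented) Lagrangian that alternating direction methods like SEGL and SEGAL operate on splits the two convex terms through an auxiliary variable:

```latex
% Assumed graph-guided composite objective and its ADMM splitting (illustrative only).
\[
\min_{x}\; f(x) + g(Ax), \qquad g(Ax) = \lambda \sum_{(i,j)\in E} \lvert x_i - x_j \rvert ,
\]
where $A$ stacks the edge differences of the guiding graph $E$ and both $f$ and $g$ are convex.
Introducing an auxiliary variable $y = Ax$, the iterations work on
\[
\mathcal{L}_{\rho}(x, y, u) \;=\; f(x) + g(y) + \langle u,\, Ax - y \rangle + \tfrac{\rho}{2}\,\lVert Ax - y \rVert_2^2 ,
\]
with $\rho = 0$ corresponding to the plain Lagrangian (as in SEGL) and $\rho > 0$ to the augmented Lagrangian (as in SEGAL).
```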
Determining the validity of knowledge triples and filling in missing entities or relationships in a knowledge graph are crucial tasks for large-scale knowledge graph completion. So far, the main solutions us...
ISBN (digital): 9781728143286
ISBN (print): 9781728143293
Class incremental learning has received wide attention due to its better adaptability to the changing characteristics of online learning. However, data cannot be shared between organizations in the data-isolated-islands scenario, and existing solutions cannot adapt to incremental classes without aggregating data. In this paper, we propose a distributed class-incremental learning framework called DCIGAN. It uses GAN generators to store the information of past data and continuously updates the GAN parameters with new data. In particular, we propose CIGAN to ensure that the distribution of the generated pseudo data is as close as possible to that of the real data on a single node, which guarantees the accuracy of class-incremental learning. Furthermore, we propose GF, a generator fusion method that integrates the local generators of multiple nodes into a new global generator. To evaluate the performance of DCIGAN, we conduct experiments on six datasets under various parameter settings in both two-node and multi-node distributed scenarios. Extensive experiments confirm that DCIGAN outperforms the general baselines and achieves classification accuracy close to that of data aggregation.
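To make the generator-replay idea concrete, the sketch below shows, in a heavily simplified form, how a frozen generator can supply pseudo samples of past classes that are mixed with new-class data when the classifier is extended. All model definitions and names here (`old_generator`, `old_labeler`, the tiny MLPs) are stand-ins for illustration, not DCIGAN's actual networks:

```python
# Minimal sketch of pseudo-data replay for class-incremental learning (stand-in models).
import torch
import torch.nn as nn

LATENT, N_FEATURES, N_OLD, N_NEW = 16, 32, 3, 2

# A frozen generator stands in for the stored information of past classes.
old_generator = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, N_FEATURES))
old_labeler = nn.Linear(N_FEATURES, N_OLD)          # frozen old classifier head, labels pseudo data
classifier = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, N_OLD + N_NEW))

opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

new_x = torch.randn(128, N_FEATURES)                 # stand-in for real new-class data
new_y = torch.randint(N_OLD, N_OLD + N_NEW, (128,))

for _ in range(10):
    # Replay: sample pseudo data for past classes from the stored generator.
    with torch.no_grad():
        z = torch.randn(128, LATENT)
        pseudo_x = old_generator(z)
        pseudo_y = old_labeler(pseudo_x).argmax(dim=1)
    # Train the extended classifier on pseudo (old) plus real (new) samples.
    x, y = torch.cat([pseudo_x, new_x]), torch.cat([pseudo_y, new_y])
    opt.zero_grad()
    loss_fn(classifier(x), y).backward()
    opt.step()
```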
Concurrency bugs widely exist in concurrent programs and have caused severe failures in the real world. Researchers have made significant progress in detecting concurrency bugs, which improves software reliability. In this paper, we survey the most up-to-date and well-known concurrency bug detectors. We categorize the existing detectors based on the types of concurrency bugs. Consequently, we analyze data race detectors, atomicity violation detectors, order violation detectors, and deadlock detectors, respectively. We also discuss some other techniques that are closely related to concurrency bug detection, including schedule bounding techniques, interleaving optimizing techniques, path expanding techniques, and deterministic replay techniques. Additionally, we statistically analyze the reviewed detectors and obtain some interesting findings; for instance, nearly 86% of previous detectors focus on data races and atomicity violations, and dynamic approaches are popular (74%). We also discuss the limitations of previous detectors, finding that 91% of them suffer from false negatives and 64% suffer from runtime overhead. Based on the reviewed detectors and the statistical analysis, we identify some future research directions, including accuracy, performance, applicability, and integrality.
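For readers unfamiliar with the bug classes named above, the toy snippet below (an illustration only, not taken from any of the surveyed detectors) shows the kind of unsynchronized read-modify-write that data race and atomicity violation detectors target:

```python
# Two threads update a shared counter without a lock; increments can be lost
# when the read-modify-write sequences interleave.
import threading

counter = 0

def worker(n):
    global counter
    for _ in range(n):
        counter += 1          # unsynchronized read-modify-write: the race

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("expected 400000, got", counter)   # may be smaller when updates interleave
```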
With the development of artificial intelligence [1-3], convolutional neural networks (CNNs) have made significant progress in image generation and manipulation. The images generated by facial image synthesis methods, e.g., Faceswap [4], Deepfakes [5], Face2face [6], and NeuralTextures [7], are very realistic and even difficult to distinguish with the human eye. This poses serious challenges to social and personal security. Many methods can detect synthetic facial images with an accuracy rate of over 99%, but a synthetic facial image classification model can only give a true-or-fake result; it cannot indicate which part of the image is the discordant area. Therefore, we propose and experimentally verify a method that uses reinforcement learning to select the region in the image that has the least impact on the classification model.
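The underlying idea can be sketched as follows: occlude candidate regions and measure how little the classifier's score changes. The paper selects regions with a reinforcement-learning agent; the sketch below swaps in a simple exhaustive grid search purely for illustration, and `classifier` here is a stand-in, not the paper's model:

```python
# Illustrative sketch: find the grid cell whose occlusion changes the score the least.
import numpy as np

def least_impact_region(image, classifier, grid=4):
    """Return (row, col) of the grid cell whose occlusion least changes the classifier score."""
    h, w = image.shape[:2]
    base = classifier(image)
    best, best_delta = None, float("inf")
    for i in range(grid):
        for j in range(grid):
            occluded = image.copy()
            occluded[i * h // grid:(i + 1) * h // grid,
                     j * w // grid:(j + 1) * w // grid] = 0.0   # zero out one cell
            delta = abs(classifier(occluded) - base)
            if delta < best_delta:
                best, best_delta = (i, j), delta
    return best

# Toy usage with a stand-in "classifier" (mean intensity as a fake score).
img = np.random.rand(64, 64, 3)
print(least_impact_region(img, lambda x: float(x.mean())))
```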
Beyond classical domain-specific adversarial training, a recently proposed task-specific framework has achieved great success in single-source domain adaptation by utilizing task-specific decision boundaries. However, compared to the single-source-single-target setting, multi-source domain adaptation (MDA) is more capable of handling most real-life cases. To align the target domain with diverse multi-source domains using task-specific decision boundaries, we provide a deep insight into the task-specific framework for MDA for the first time. Accordingly, we propose a novel task-specific multi-source domain adaptation method (TMDA) with a clustering-embedded adversarial training process. Specifically, the proposed TMDA detects and refines less discriminative target representations through a max-min optimization over two adversarial task-specific classifiers. Moreover, our analysis implies that scattered multi-source representations disturb the adversarial training under the task-specific framework. To tighten up the dispersed source representations, we embed a relationship-based domain clustering into TMDA. Empirical results demonstrate that TMDA outperforms state-of-the-art methods on a toy dataset, sentiment analysis, and digit classification.
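Written schematically (by analogy with classifier-discrepancy training; the exact losses used in TMDA may differ), the max-min game over the shared feature extractor G and the two task-specific classifiers F1 and F2 on target samples takes a form like:

```latex
% Schematic classifier-discrepancy max-min objective, assumed for illustration.
\[
\max_{F_1, F_2}\;\min_{G}\;\; \mathbb{E}_{x^{t}}\Big[ d\big(F_1(G(x^{t})),\, F_2(G(x^{t}))\big) \Big],
\qquad
d(p, q) = \tfrac{1}{C}\sum_{c=1}^{C} \lvert p_c - q_c \rvert ,
\]
subject to both classifiers remaining accurate on the labeled multi-source domains; target
samples on which the two decision boundaries disagree are the less discriminative
representations that the feature extractor is then pushed to refine.
```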
Existing visual-based SLAM systems mainly utilize the three-dimensional environmental depth information from RGB-D cameras to complete the robotic simultaneous localization and mapping task. However, the RGB-D camera has a limited working range and struggles to accurately measure depth information at far distances. Besides, the RGB-D camera is easily influenced by strong lighting and other external factors, which leads to poor accuracy of the acquired environmental depth information. Recently, deep learning technologies have achieved great success in the visual SLAM area: they can directly learn high-level features from visual inputs and improve the estimation accuracy of depth information. Therefore, deep learning technologies have the potential to extend the source of depth information and improve the performance of the SLAM system. However, the existing deep learning-based methods are mainly supervised and require a large amount of ground-truth depth data, which is hard to acquire because of realistic constraints. In this paper, we first present an unsupervised learning framework, which not only uses image reconstruction for supervision but also exploits the pose estimation method to enhance the supervised signal and add training constraints for the task of monocular depth and camera motion estimation. Furthermore, we successfully exploit our unsupervised learning framework to assist the traditional ORB-SLAM system when the initialization module of the ORB-SLAM method cannot match enough features. Qualitative and quantitative experiments have shown that our unsupervised learning framework performs the depth estimation task comparably to the supervised methods and outperforms the previous state-of-the-art approach by 13.5% on the KITTI dataset. Besides, our unsupervised learning framework can significantly accelerate the initialization process of the ORB-SLAM system and effectively improve the accuracy of environme...
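As a reference point, the standard view-synthesis formulation used by unsupervised depth-and-pose methods couples the predicted depth and relative pose through an image reconstruction loss; the paper's exact loss terms are not reproduced here, only this commonly assumed form:

```latex
% Standard view-synthesis (photometric reconstruction) loss, assumed for illustration.
\[
\mathcal{L}_{\mathrm{photo}} \;=\; \sum_{s}\sum_{p}\,\big\lvert\, I_t(p) - \hat{I}_s(p)\,\big\rvert,
\qquad
\hat{I}_s(p) \;=\; I_s\!\Big( \pi\big( K\, \hat{T}_{t \to s}\, \hat{D}_t(p)\, K^{-1} p \big) \Big),
\]
where $I_t$ is the target frame, $I_s$ a neighboring source frame, $K$ the camera intrinsics,
$\pi(\cdot)$ the projection onto the image plane, $\hat{D}_t$ the predicted depth, and
$\hat{T}_{t \to s}$ the predicted relative pose; the warped image $\hat{I}_s$ is obtained by
differentiable bilinear sampling, so depth and pose networks can be trained jointly without
ground-truth depth.
```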
OpenFlow switches in SDN use Multiple Flow Tables (MFTs) for fine-grained flow control. Commodity switches integrate hardware storage resources such as SRAM and TCAM to store flow tables and achieve high-speed lookups. The rapidly growing number of flow tables is exhausting these hardware storage resources, forcing switches to balance high-speed lookup against massive storage. Rule caching is a popular way to solve this problem: it caches the most commonly used rules in the hardware storage resources. However, existing rule-caching schemes are based on a single hardware storage resource and cannot flexibly adjust the caching strategy according to traffic characteristics. Meanwhile, deployed commodity switches face the problem that the sizes of SRAM and TCAM are difficult to change. This paper proposes the MixedCache scheme, which makes full use of the hardware storage resources in the switch according to the skewed characteristics of network traffic. MixedCache stores large flows in SRAM with exact-match rules and stores small flows in TCAM with wildcard-match rules. MixedCache does not need to change the size of the deployed switch hardware storage resources but makes full use of the existing resources. Compared with a rule-caching scheme based on exact match, the cache hit rate can increase by up to 15.61%; compared with a rule-caching scheme based on wildcard match, the cache hit rate can increase by up to 29.69%.
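The two-tier lookup idea can be sketched in software as follows. This is an illustration of the exact-match-then-wildcard order only, not MixedCache's implementation; the tables, rules, and match fields are made up for the example:

```python
# Illustrative two-tier lookup: an exact-match table stands in for SRAM (large flows),
# a small priority-ordered wildcard list stands in for TCAM (small flows).
from ipaddress import ip_address, ip_network

sram_exact = {("10.0.0.5", "10.0.1.7", 443): "fwd:port2"}          # large flows, exact 3-tuple
tcam_wildcard = [(ip_network("10.0.1.0/24"), "fwd:port3"),          # small flows, prefix match
                 (ip_network("0.0.0.0/0"), "send_to_controller")]   # default rule

def lookup(src, dst, dport):
    action = sram_exact.get((src, dst, dport))        # 1) exact match first (fast path)
    if action is not None:
        return action
    for prefix, act in tcam_wildcard:                 # 2) first matching wildcard rule wins
        if ip_address(dst) in prefix:
            return act
    return "drop"

print(lookup("10.0.0.5", "10.0.1.7", 443))   # hits the exact-match table
print(lookup("10.0.0.9", "10.0.1.8", 80))    # falls through to the wildcard rules
```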