Following trails in the wild is an essential capability of outdoor autonomous mobile robots. Recently, deep learning-based approaches have made great advancements in this field. However, existing research focuses only on trail following with a single robot. In contrast, many real-world robotic tasks, such as search and patrolling, are conducted by a group of robots. When such robots move through the wild as a group, they can cooperate to significantly improve trail-following accuracy, for example by sharing images from different view angles or fusing decisions in real time. This paper proposes such an approach, named DL-Cooper, which enables multi-robot vision-based trail following built on deep learning algorithms. It lets each robot make its own decision with a deep neural network and then fuses these decisions at the collective level, supported by a back-end cloud computing infrastructure. It also takes Quality of Service (QoS) assurance, an essential property of robotic software, into consideration: by restricting the conditions under which decisions are fused, the added time latency is kept minimal. Experiments on a real-world dataset show that our approach significantly improves accuracy over a single-robot system.
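The abstract leaves the fusion rule unspecified; purely as a hedged illustration, the Python sketch below shows one plausible scheme: each robot publishes the softmax output of its steering network, and the back end averages the distributions while enforcing the QoS condition by ignoring peer messages that miss a latency budget. The class names, message format, and budget value are our assumptions, not the paper's.

import numpy as np

# Hypothetical steering classes; the paper's label set is not given.
CLASSES = ["turn_left", "go_straight", "turn_right"]

def fuse_decisions(local_probs, peer_msgs, latency_budget_ms=50.0):
    """local_probs: this robot's DNN softmax output, shape (3,).
    peer_msgs: list of (probs, latency_ms) relayed by the cloud back end.
    Peers that miss the latency budget are ignored, so fusion never
    stalls the control loop waiting for slow messages (the QoS condition)."""
    pool = [np.asarray(local_probs)]
    pool += [np.asarray(p) for p, lat in peer_msgs if lat <= latency_budget_ms]
    fused = np.mean(pool, axis=0)  # simple average of class distributions
    return CLASSES[int(np.argmax(fused))], fused

# Example: one fresh peer is fused, one stale peer is dropped.
local = [0.2, 0.7, 0.1]
peers = [([0.1, 0.8, 0.1], 12.0), ([0.6, 0.3, 0.1], 180.0)]
action, fused = fuse_decisions(local, peers)
print(action, fused)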
In this paper, we propose an indoor robot autonomous navigation system. The robot first explores an unknown environment, and then navigates autonomously using the explored map. The robot is equipped with a 2D laser...
In recent years, the rapidly growing scale of graphs has sparked many parallel graph analysis frameworks that leverage the massive hardware resources on CPUs or GPUs. Existing CPU implementations are time-consuming, while GPU implementations are restricted by memory space and the complexity of programming. In this paper, we present a high-performance hybrid CPU-GPU parallel graph analytics framework with good productivity, based on GraphMat. We map vertex programs to generalized sparse matrix-vector multiplication on GPUs to deliver high performance, and propose a high-level abstraction that lets developers implement various graph algorithms with relatively little effort. Meanwhile, several optimizations are adopted to reduce communication cost and leverage hardware resources, especially the memory hierarchy. We evaluate the proposed framework on three graph primitives (PageRank, BFS, and SSSP) with large-scale graphs. The experimental results show that our implementation achieves an average speedup of 7.0x over GraphMat on two 6-core Intel Xeon CPUs. It can also process larger datasets while achieving performance comparable to MapGraph, a state-of-the-art GPU-based framework.
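As a concrete CPU-side illustration of mapping a vertex program to generalized sparse matrix-vector multiplication, the SciPy sketch below expresses one PageRank iteration as an SpMV. It is our own minimal example of the technique, not code from the framework, and the handling of zero-out-degree vertices is deliberately simplified.

import numpy as np
import scipy.sparse as sp

def pagerank_spmv(adj, alpha=0.85, iters=20):
    """adj: sparse adjacency matrix with adj[i, j] = 1 for an edge i -> j."""
    n = adj.shape[0]
    out_deg = np.asarray(adj.sum(axis=1)).ravel()
    out_deg[out_deg == 0] = 1.0  # crude fix for dangling vertices
    # Column-stochastic transition matrix: P[j, i] = adj[i, j] / out_deg[i]
    P = adj.multiply(1.0 / out_deg[:, None]).T.tocsr()
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        x = alpha * (P @ x) + (1.0 - alpha) / n  # the SpMV step per iteration
    return x

A = sp.csr_matrix(np.array([[0, 1, 1],
                            [0, 0, 1],
                            [1, 0, 0]], dtype=float))
print(pagerank_spmv(A))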
We consider a wide range of non-convex regularized minimization problems, where the non-convex regularization term is composed with a linear function, as commonly engaged in sparse learning. Recent theoretical investigations have demonstrated their superiority over their convex counterparts. The computational challenge lies in the fact that the proximal mapping associated with the non-convex regularization is not easily obtained, due to the imposed linear composition. Fortunately, the problem structure allows one to introduce an auxiliary variable and reformulate it as an optimization problem with linear constraints, which can be solved using the Linearized Alternating Direction Method of Multipliers (LADMM). Despite the success of LADMM in practice, it remains unknown whether LADMM is convergent when solving such non-convex compositely regularized optimizations. In this research, we first present a detailed convergence analysis of the LADMM algorithm for solving a non-convex compositely regularized optimization problem with a large class of non-convex penalties. Furthermore, we propose an Adaptive LADMM (AdaLADMM) algorithm with a line-search criterion. Experimental results on different genres of datasets validate the efficacy of the proposed algorithm.
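For readers unfamiliar with the construction, a standard form of the reformulation and the LADMM updates reads as follows (the notation is ours and the paper's exact parameterization may differ): for a smooth loss f, a non-convex penalty r, penalty parameter \beta > 0, and proximal parameter \mu > 0,

\begin{align*}
  &\min_{x}\; f(x) + r(Ax)
  \;\Longleftrightarrow\;
  \min_{x,y}\; f(x) + r(y) \quad \text{s.t. } Ax - y = 0,\\
  &x^{k+1} = \arg\min_x\; f(x)
      + \big\langle A^{\top}\big(\lambda^k + \beta (Ax^k - y^k)\big),\, x \big\rangle
      + \tfrac{\mu}{2}\,\lVert x - x^k \rVert^2,\\
  &y^{k+1} = \operatorname{prox}_{r/\beta}\big(Ax^{k+1} + \lambda^k/\beta\big),\\
  &\lambda^{k+1} = \lambda^k + \beta\,(Ax^{k+1} - y^{k+1}),
\end{align*}

where the x-update linearizes the augmented quadratic term around x^k, so that the y-update only needs the proximal mapping of r itself rather than of the composition r(A·).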
Cloud computing has been widely adopted by enterprises because of its on-demand and elastic resource usage paradigm. Currently, most cloud applications run on a single cloud. However, more and more applications need to run across several clouds to satisfy requirements such as best cost efficiency, avoidance of vendor lock-in, and geolocation-sensitive service. JointCloud computing is a new research initiative by Chinese institutes to address the computing issues raised by multiple clouds. In JointCloud, users' diverse and dynamic requirements on cloud resources are satisfied by providing users with virtual clouds (VCs) for special purposes. A virtual cloud for special purposes is, in essence, a user-specific cloud working environment with customized software stacks, configurations, and computing resources readily available. This paper first introduces what JointCloud computing is and then describes the design rationales, motivating examples, mechanisms, and enabling technologies of VCs in JointCloud.
Open-domain question answering (QA) is an important problem in natural language processing. A QA system usually consists of an information retrieval (IR) model for finding relevant passages that may contain answers and a reading comprehension (RC) model for generating answers from the selected passages. State-of-the-art RC models use all the passages for feature extraction when matching the semantic information of text pairs. In this paper, however, we argue that passages often have different importance for a specific question, and some even provide negative information that can affect the correctness of the final prediction. To address this problem, we design the dynamic semantic discard reader (DSDR), a deep model that tries to drop negative information irrelevant to the question. Our experimental results show that DSDR outperforms existing methods in exact-match and F1 scores on open-domain QA datasets.
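The abstract does not describe DSDR's architecture; purely as a hedged sketch of the "discard" idea, the PyTorch module below scores each passage encoding against the question encoding and zeros out passages below a relevance threshold before answer extraction. The bilinear scorer, threshold value, and tensor shapes are all our assumptions, not the paper's design.

import torch
import torch.nn as nn

class PassageGate(nn.Module):
    """Hypothetical gate: drop passages that look irrelevant to the question."""
    def __init__(self, hidden, threshold=0.2):
        super().__init__()
        self.scorer = nn.Bilinear(hidden, hidden, 1)  # question-passage match
        self.threshold = threshold

    def forward(self, question, passages):
        # question: (batch, hidden) pooled question encoding
        # passages: (batch, n_passages, hidden) pooled passage encodings
        q = question.unsqueeze(1).expand_as(passages)
        scores = torch.sigmoid(self.scorer(q, passages)).squeeze(-1)  # (b, n)
        keep = (scores >= self.threshold).float() * scores  # gate low scores to 0
        return passages * keep.unsqueeze(-1), scores

gate = PassageGate(hidden=128)
gated, scores = gate(torch.randn(2, 128), torch.randn(2, 5, 128))
print(scores)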
ISBN (print): 9789811136719
With the quick development of open source software, a large quantity of software is produced in the open source community (OSC) [1]. Many studies have been launched to investigate the internal regular patterns of the OSC [2], [3]. GitHub is one of the most famous open source communities and hosts thousands of software projects; as a result, it holds massive and abundant data on software development activities. To offer an accurate and efficient dataset of GitHub, this paper proposes Kraken, a continuous, incremental data acquisition system for GitHub. Kraken contains three main modules that are independent of each other, and it obtains GitHub data in two ways: from git repositories and through the REST API. The results show that Kraken can extract commit information from git repositories and retrieve pull requests (PRs) and issues through the REST API. The commit information contains the detailed development history of the software, while the feedback and wisdom of software engineers are reflected in PRs and issues.
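The abstract names the two acquisition paths but not their implementation; the sketch below shows what the REST-API path can look like using the standard GitHub endpoints, paging through a repository's pull requests or issues. This is our minimal illustration, not Kraken's code.

import requests

API = "https://api.github.com"

def fetch_all(owner, repo, kind, token=None):
    """kind is 'pulls' or 'issues'; returns every item, page by page."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:  # unauthenticated requests are heavily rate limited
        headers["Authorization"] = f"token {token}"
    items, page = [], 1
    while True:
        r = requests.get(f"{API}/repos/{owner}/{repo}/{kind}",
                         params={"state": "all", "per_page": 100, "page": page},
                         headers=headers, timeout=30)
        r.raise_for_status()
        batch = r.json()
        if not batch:
            return items
        items.extend(batch)
        page += 1

prs = fetch_all("octocat", "Hello-World", "pulls")
print(len(prs), "pull requests fetched")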
Image clustering is one of the most challenging tasks in machine learning and has been used extensively in various applications. Recently, various deep clustering methods have been proposed. These methods take a two-stage approach, feature learning and clustering, performed either sequentially or jointly. We observe that these works usually focus on the combination of reconstruction loss and clustering loss, while relatively little work has focused on improving the learned representation of the neural network for clustering. In this paper, we propose a deep convolutional embedded clustering algorithm with an inception-like block (DCECI). Specifically, an inception-like block with different types of convolution filters is introduced into the symmetric deep convolutional network to preserve the local structure of the convolution layers. We simultaneously minimize the reconstruction loss of the convolutional autoencoder with the inception-like block and the clustering loss. Experimental results on multiple image datasets exhibit the promising performance of our proposed algorithm compared with other competitive methods.
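The abstract does not give the block's exact branches; as a hedged sketch of an inception-like block (the branch count and kernel sizes are our choices), the PyTorch module below runs parallel convolutions with different kernel sizes and concatenates their outputs, preserving spatial structure at several scales.

import torch
import torch.nn as nn

class InceptionLikeBlock(nn.Module):
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        # Parallel branches with different receptive fields; padding keeps
        # the spatial size identical so the outputs can be concatenated.
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

block = InceptionLikeBlock(in_ch=1, branch_ch=8)
x = torch.randn(4, 1, 28, 28)  # e.g. a batch of MNIST-like images
print(block(x).shape)          # torch.Size([4, 24, 28, 28])

In a DCECI-style setup, such blocks would sit inside the convolutional autoencoder, which is trained on a weighted sum of the reconstruction loss and the clustering loss.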
ISBN (print): 9781538657393; 9781538657386
In recent years, the rapidly growing use of graphs has sparked parallel graph analytics frameworks that leverage massive hardware resources, specifically graphics processing units (GPUs). However, unpredictable control flow, memory divergence, and the complexity of programming have limited high-level GPU graph libraries. In this work, we present HPGA, a high-performance parallel graph analytics framework targeting the GPU. HPGA implements an abstraction that maps vertex programs to generalized sparse matrix operations on GPUs to deliver high performance, and it combines high-performance GPU computing primitives and optimization strategies with a high-level programming model. We evaluate the performance of HPGA on three graph primitives (BFS, SSSP, PageRank) with large-scale datasets. The experimental results show that HPGA matches or even exceeds the performance of MapGraph and nvGRAPH, two state-of-the-art GPU graph libraries.
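The matrix view extends beyond PageRank; as our own CPU illustration (SciPy rather than HPGA's GPU primitives), the sketch below computes one BFS level per iteration as a sparse matrix-vector product with the frontier, masked by the unvisited set, which is the kind of generalized sparse matrix operation such frameworks target.

import numpy as np
import scipy.sparse as sp

def bfs_levels(adj, source):
    """adj[i, j] != 0 means an edge i -> j; returns the BFS level per vertex."""
    n = adj.shape[0]
    at = adj.T.tocsr()               # transpose so SpMV gathers over in-edges
    levels = np.full(n, -1)
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    level = 0
    while frontier.any():
        levels[frontier] = level
        reached = (at @ frontier.astype(float)) > 0   # the SpMV step
        frontier = reached & (levels == -1)           # mask visited vertices
        level += 1
    return levels

A = sp.csr_matrix(np.array([[0, 1, 0, 0],
                            [0, 0, 1, 0],
                            [0, 0, 0, 1],
                            [0, 0, 0, 0]], dtype=float))
print(bfs_levels(A, 0))  # -> [0 1 2 3]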
Deep learning has dramatically improved the performance of sound recognition. However, learning acoustic models directly from the raw waveform remains challenging. Current waveform-based models generally use time-do...