The paper presents a study on the behavior of some control solutions for electric drive systems evolving in conditions of reference, disturbance and parameter variability. The results of this study substantiate new co...
The academic community has published millions of research papers to date, and the number of new papers has been increasing with time. To discover new research, researchers typically rely on manual methods such as keyw...
ISBN (Print): 9781627487245
The betweenness metric has always been intriguing and used in many analyses. Yet, it is one of the most computationally expensive kernels in graph mining. For that reason, making betweenness centrality computations faster is an important and well-studied problem. In this work, we propose the framework BADIOS, which compresses a network and shatters it into pieces so that the centrality computation can be handled independently for each piece. Although BADIOS is designed and tuned for betweenness centrality, it can easily be adapted for other centrality metrics. Experimental results show that the proposed techniques substantially reduce the centrality computation time for networks of various types and sizes. In particular, they reduce the computation time for a graph with 4.6 million edges from more than 5 days to less than 16 hours.
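The expensive kernel this abstract refers to is, in the unweighted case, typically Brandes' algorithm: one BFS plus a reverse dependency-accumulation pass per source vertex. A minimal sketch of that baseline (not the BADIOS framework itself, whose compression and shattering steps are not reproduced here) might look as follows; the adjacency-dictionary representation is an assumption for illustration.

```python
from collections import deque

def brandes_betweenness(adj):
    """Exact (unnormalized) betweenness centrality via Brandes' algorithm
    for an unweighted graph given as {vertex: [neighbors]}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Forward phase: BFS from s, counting shortest paths (sigma)
        # and recording shortest-path predecessors.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        pred = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Backward phase: accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        while order:
            w = order.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

For an undirected graph each unordered pair is counted from both endpoints, so scores can be halved if normalized values are needed. The O(nm) total cost of running this from every source is what makes preprocessing techniques like compression and shattering worthwhile.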
ISBN (Print): 9781450320177
The betweenness centrality metric has always been intriguing for graph analyses and used in various applications. Yet, it is one of the most computationally expensive kernels in graph mining. In this work, we investigate a set of techniques to make betweenness centrality computations faster on GPUs as well as on heterogeneous CPU/GPU architectures. Our techniques are based on virtualization of high-degree vertices, strided access to adjacency lists, removal of degree-1 vertices, and graph ordering. By combining these techniques with fine-grain parallelism, we significantly reduced the computation time on GPUs for a set of social networks. On CPUs, which usually have access to a large amount of memory, we used coarse-grain parallelism. We showed that heterogeneous computing, i.e., using both architectures at the same time, is a promising solution for betweenness centrality. Experimental results show that the proposed techniques substantially reduce the centrality computation time. In particular, they reduce the computation time for a graph with 234 million edges from more than 4 months to less than 12 days. Copyright 2013 ACM.
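The virtualization technique mentioned above balances GPU load by splitting each high-degree vertex's adjacency list into fixed-size chunks, each handled as a separate "virtual vertex" by its own thread. A minimal host-side sketch of that splitting step (the chunk size and names are illustrative, not the paper's actual parameters):

```python
def virtualize(adj, max_deg=4):
    """Split each vertex's adjacency list into chunks of at most
    max_deg neighbors. Each (vertex, chunk) pair is one 'virtual
    vertex', so the work assigned to any one GPU thread is bounded
    regardless of the real degree distribution."""
    virtual = []  # list of (real_vertex, neighbor_chunk) pairs
    for v, nbrs in adj.items():
        # max(..., 1) keeps isolated vertices as a single empty chunk.
        for i in range(0, max(len(nbrs), 1), max_deg):
            virtual.append((v, nbrs[i:i + max_deg]))
    return virtual
```

On the device, a kernel would then launch one thread per virtual vertex and combine partial results for the same real vertex with atomic updates; that accumulation step is omitted here.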
ISBN (Print): 9781450322409
Literature search is an integral part of academic research. Academic recommendation services have been developed to help researchers with their literature search, many of which only provide a text-based search functionality. Such services are suitable for a first-level bibliographic search; however, they lack the benefits of today's recommendation engines. In this paper, we identify three important properties that an academic recommendation service should provide for better literature search: personalization, scalability, and exploratory search. With these objectives in mind, we present a web service called theadvisor, which helps users build a strong bibliography by extending the document set obtained after a first-level search. Along with an efficient and personalized recommendation algorithm, the service also features result diversification, relevance feedback, and visualization for exploratory search. We explain the design criteria and rationale we employed to make theadvisor a useful and scalable web service, together with a thorough evaluation. Copyright 2013 ACM.
ISBN (Print): 9781450320351
Result diversification has gained a lot of attention as a way to answer ambiguous queries and to tackle the redundancy problem in the results. In the last decade, diversification has been applied on or integrated into the process of PageRank- or eigenvector-based methods that run on various graphs, including social networks, collaboration networks in academia, the web, and product co-purchasing graphs. For these applications, the diversification problem is usually addressed as a bicriteria objective optimization problem of relevance and diversity. However, such an approach is questionable, since a query-oblivious diversification algorithm that recommends most of its results without even considering the query may perform best on these commonly used measures. In this paper, we show the deficiencies of popular evaluation techniques for diversification methods, and investigate multiple relevance and diversity measures to understand whether they have any correlations. Next, we propose a novel measure called expanded relevance, which combines both relevance and diversity into a single function in order to measure the coverage of the relevant part of the graph. We also present a new greedy diversification algorithm called Best-Coverage, which optimizes the expanded relevance of the result set with a (1-1/e)-approximation. With rigorous experimentation on graphs from various applications, we show that the proposed method is efficient and effective for many use cases. Copyright is held by the International World Wide Web Conference Committee (IW3C2).
ISBN (Print): 9781450329033
The Second Workshop on Human-Computer Interaction for Third Places (HCI3P) aims to provide a forum to discuss the roles of interactive technologies, particularly under a DIY and Maker approach, in the shaping of the third places of the future. HCI3P is organized as a hands-on event with a 6-hour "crafting" session in which participants will collaboratively create a low- or medium-fidelity prototype.
Fast and robust algorithms and aligners have been developed to help the researchers in the analysis of genomic data whose size has been dramatically increased in the last decade due to the technological advancements i...
In this article, we are introducing the method of the localization of the deep-ocean pipe end-point. The set of various sensors located along the pipe are used for measuring the data describing the shape of the pipe. ...
A search for long-lived, massive particles predicted by many theories beyond the Standard Model is presented. The search targets final states with large missing transverse momentum and at least one high-mass displaced vertex with five or more tracks, and uses 32.8 fb⁻¹ of √s = 13 TeV pp collision data collected by the ATLAS detector at the LHC. The observed yield is consistent with the expected background. The results are used to extract 95% C.L. exclusion limits on the production of long-lived gluinos with masses up to 2.37 TeV and lifetimes of O(10⁻²) to O(10) ns in a simplified model inspired by split supersymmetry.