In asymmetric group key agreement (ASGKA) protocols, a group of users establishes a common encryption key that is publicly accessible, while each user computes a pairwise different decryption key. Designing an ASGKA protocol with traitor traceability was left as an open problem at Eurocrypt 2009. This study proposes a one-round dynamic authenticated ASGKA protocol with public traitor traceability, including a black-box tracing algorithm. Ind-CPA security with key compromise impersonation resilience (KCIR) and forward secrecy is formally defined for ASGKA protocols. The proposed protocol is proved to be Ind-CPA secure with KCIR and forward secrecy under the D k-HDHE assumption, and it is also proved to resist collusion attacks. The Setup and Join algorithms each require one communication round, while the Leave algorithm requires no message to be transmitted. The protocol adopts an O(log N)-way asymmetric multilinear map so that both the public key size and the ciphertext size achieve O(log N), where N is the number of potential group members. This is the first ASGKA protocol with public traitor traceability that is more efficient than the trivial construction of ASGKA protocols.
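To make the interface concrete, the following toy Python sketch mirrors the Setup/Join/Leave structure described in the abstract. It is not the paper's construction: the class and all key derivations are hypothetical stand-ins (random shares instead of multilinear-map elements), kept only to illustrate one public encryption key alongside pairwise different decryption keys.

    import secrets

    class ToyASGKA:
        """Structural stand-in for the ASGKA interface described above: one
        public group encryption key, pairwise different decryption keys.
        Placeholder random shares replace the multilinear-map algebra."""

        def __init__(self):
            self.shares = {}

        def _refresh(self):
            # Public encryption key derived from all current shares.
            self.encryption_key = "".join(sorted(self.shares.values()))
            # Pairwise different decryption keys (placeholder derivation).
            self.decryption_keys = {m: s + self.encryption_key
                                    for m, s in self.shares.items()}

        def setup(self, members):            # one communication round
            self.shares = {m: secrets.token_hex(8) for m in members}
            self._refresh()

        def join(self, member):              # also one communication round
            self.shares[member] = secrets.token_hex(8)
            self._refresh()

        def leave(self, member):             # no message: recompute locally
            del self.shares[member]
            self._refresh()

    group = ToyASGKA()
    group.setup(["alice", "bob", "carol"])
    group.leave("bob")
    print(len(set(group.decryption_keys.values())))  # 2: keys stay pairwise different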
In this paper we experimentally study the performance of main-memory, parallel, multi-core join algorithms, focusing on sort-merge and (radix-)hash join. The relative performance of these two join approaches has been a topic of discussion for a long time. With the advent of modern multi-core architectures, it has been argued that sort-merge join is now a better choice than radix-hash join. This claim is justified based on the width of SIMD instructions (sort-merge outperforms radix-hash join once SIMD is sufficiently wide) and NUMA awareness (sort-merge is superior to hash join on NUMA architectures). We conduct extensive experiments on the original and optimized versions of these algorithms. The experiments show that, contrary to these claims, radix-hash join is still clearly superior, and that sort-merge approaches the performance of radix-hash join only when very large amounts of data are involved. The paper also provides the fastest implementations of these algorithms and covers many aspects of modern hardware architectures relevant not only to joins but to any parallel data processing operator.
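As background for the comparison, here is a minimal Python sketch of the two join strategies under discussion; it illustrates only the algorithmic skeletons (the single-table hash build and the two-cursor merge), not the SIMD, radix-partitioned, or NUMA-aware implementations the paper benchmarks.

    from collections import defaultdict

    def hash_join(r, s):
        """Build a hash table on one relation, probe with the other.
        (Radix-hash join additionally partitions both inputs by key radix
        so each partition's hash table fits in cache; omitted here.)"""
        table = defaultdict(list)
        for key, payload in r:
            table[key].append(payload)
        return [(key, rp, sp) for key, sp in s for rp in table.get(key, [])]

    def sort_merge_join(r, s):
        """Sort both relations on the join key, then merge with two cursors."""
        r, s = sorted(r), sorted(s)
        out, i, j = [], 0, 0
        while i < len(r) and j < len(s):
            if r[i][0] < s[j][0]:
                i += 1
            elif r[i][0] > s[j][0]:
                j += 1
            else:
                key, j0 = r[i][0], j
                # Emit the cross product of the matching runs on both sides.
                while i < len(r) and r[i][0] == key:
                    jj = j0
                    while jj < len(s) and s[jj][0] == key:
                        out.append((key, r[i][1], s[jj][1]))
                        jj += 1
                    i += 1
                j = jj
        return out

    r = [(1, "a"), (2, "b"), (2, "c")]
    s = [(2, "x"), (3, "y")]
    assert sorted(hash_join(r, s)) == sorted(sort_merge_join(r, s))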
Data skew in parallel joins results in poor load balancing, which can lead to significantly varying execution times for the reducers in MapReduce. The performance of the join operation is severely degraded in the presence of heavy skew in the datasets to be joined. Previous work mainly focuses on either input or output load imbalance among reducers, which is ineffective for load balancing. In this paper, we present a new data skew handling method based on Cluster Cost Partitioning (CCP) for optimizing parallel joins in MapReduce. A new cost model that considers the properties of both input and output is defined to estimate the cost of the parallel join. CCP employs clusters instead of join keys from the input relations to create the join matrix. Using the cost model, CCP identifies and splits heavy cells in the cluster join matrix, then assigns sets of non-heavy cells to reducers to balance the join load. For different applications, the input and output weight values in the cost model can be dynamically adjusted to capture the join costs more precisely. The experimental results demonstrate that CCP achieves a more accurate load balancing result among reducers.
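A rough sketch of the idea, under assumed details: the cost function, the default weights, the threshold, and the greedy placement below are hypothetical illustrations of a combined input/output cost model with heavy-cell splitting, not the paper's exact formulas.

    def cell_cost(input_rows, output_rows, w_in=0.5, w_out=0.5):
        """Illustrative cell cost combining input and output sizes; w_in and
        w_out are the tunable weights mentioned in the abstract (the paper's
        exact formula may differ)."""
        return w_in * input_rows + w_out * output_rows

    def assign_cells(cells, num_reducers, threshold):
        """Greedy sketch: split cells whose cost exceeds a threshold, then
        place each piece on the currently least-loaded reducer."""
        work = []
        for c in cells:
            cost = cell_cost(c["in"], c["out"])
            if cost > threshold:                      # heavy cell: split it
                parts = int(cost // threshold) + 1
                work += [cost / parts] * parts
            else:
                work.append(cost)
        loads = [0.0] * num_reducers
        for cost in sorted(work, reverse=True):
            loads[loads.index(min(loads))] += cost    # least-loaded reducer
        return loads

    cells = [{"in": 100, "out": 9000}, {"in": 80, "out": 50}, {"in": 60, "out": 40}]
    print(assign_cells(cells, num_reducers=3, threshold=1000))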
ISBN:
(print) 9781450341912
We define and study the Functional Aggregate Query (FAQ) problem, which encompasses many frequently asked questions in constraint satisfaction, databases, matrix operations, probabilistic graphical models and logic. This is our main conceptual contribution. We then present a simple algorithm called InsideOut to solve this general problem. InsideOut is a variation of the traditional dynamic programming approach for constraint programming based on variable elimination. Our variation adds a couple of simple twists to basic variable elimination in order to deal with the generality of FAQ, to take full advantage of Grohe and Marx's fractional edge cover framework, and of the analysis of recent worst-case optimal relational join algorithms. As is the case with constraint programming and graphical model inference, to make InsideOut run efficiently we need to solve an optimization problem to compute an appropriate variable ordering. The main technical contribution of this work is a precise characterization of when a variable ordering is 'semantically equivalent' to the variable ordering given by the input FAQ expression. Then, we design an approximation algorithm to find an equivalent variable ordering that has the best 'fractional FAQ-width'. Our results imply a host of known and a few new results in graphical model inference, matrix operations, relational joins, and logic. We also briefly explain how recent algorithms on beyond worst-case analysis for joins and those for solving SAT and #SAT can be viewed as variable elimination to solve FAQ over compactly represented input functions.
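For intuition, the sketch below runs plain sum-product variable elimination, the classical core that InsideOut builds on, over dictionary-encoded factors; the factor representation and the triangle-counting example are our own illustrative choices, not the paper's notation.

    from itertools import product

    # Each factor maps an assignment of its variables to a number; FAQ
    # generalizes the aggregates, but plain sum-product shows the
    # variable-elimination core.
    def eliminate(factors, var, domain):
        """Multiply all factors mentioning `var`, then sum `var` out."""
        touching = [f for f in factors if var in f["vars"]]
        rest = [f for f in factors if var not in f["vars"]]
        out_vars = sorted({v for f in touching for v in f["vars"]} - {var})
        table = {}
        for assign in product(domain, repeat=len(out_vars)):
            env = dict(zip(out_vars, assign))
            total = 0
            for val in domain:
                env[var] = val
                prod_ = 1
                for f in touching:
                    key = tuple(env[v] for v in f["vars"])
                    prod_ *= f["table"].get(key, 0)
                total += prod_
            table[assign] = total
        return rest + [{"vars": out_vars, "table": table}]

    # Count triangles in a 3-node graph: sum_{a,b,c} E(a,b) E(b,c) E(a,c).
    E = {(0, 1): 1, (1, 0): 1, (1, 2): 1, (2, 1): 1, (0, 2): 1, (2, 0): 1}
    factors = [{"vars": ["a", "b"], "table": E},
               {"vars": ["b", "c"], "table": E},
               {"vars": ["a", "c"], "table": E}]
    for v in ["a", "b", "c"]:
        factors = eliminate(factors, v, domain=[0, 1, 2])
    print(factors[0]["table"][()])  # 6 ordered triples for one undirected triangle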
ISBN:
(print) 9788955191356
P2P systems are highly dynamic in nature: nodes may join or leave the P2P system at any moment. Frequent joining and leaving greatly increases the maintenance overhead in a DHT-based P2P system, and the main source of this cost is the lookups that nodes perform to build their fingers. In this paper we introduce an iterative join algorithm for Chord that is suitable for highly dynamic environments. The iterative join algorithm builds a node's fingers through iterative lookups, aided by the finger information of the nodes along the lookup path. Theoretical analysis and simulation show that the iterative join algorithm efficiently decreases the maintenance overhead and improves lookup performance.
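The sketch below illustrates, under strong simplifications (a global view of the ring instead of real routing messages, wraparound corner cases ignored), why finger construction needs far fewer than M lookups when nearby results are reused; the node layout is hypothetical.

    M = 6                       # identifier bits; ring size 2**M (toy scale)

    class Node:
        def __init__(self, ident, ring):
            self.id, self.ring = ident, ring   # ring: sorted list of node ids

        def successor(self, key):
            """Stand-in for an iterative lookup: first node id >= key."""
            for n in self.ring:
                if n >= key % (2 ** M):
                    return n
            return self.ring[0]                # wrap around the ring

        def build_fingers(self):
            """Chord finger i points at successor(id + 2**i).  If the
            previous finger already covers the next target, it is reused
            (and, per the paper, finger tables seen along the lookup path
            can be reused too), so many fingers need no fresh lookup."""
            self.fingers, lookups = [], 0
            for i in range(M):
                target = (self.id + 2 ** i) % (2 ** M)
                if self.fingers and self.fingers[-1] >= target:
                    self.fingers.append(self.fingers[-1])   # reuse, no lookup
                else:
                    self.fingers.append(self.successor(target))
                    lookups += 1
            return lookups

    ring = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])
    n = Node(21, ring)
    print(n.build_fingers(), n.fingers)   # 3 lookups for 6 fingers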
The wireless communication technologies built into or brought into the vehicle enable new in-car telematics services. The development of connected cars emphasizes the use of sophisticated computation frameworks for gathering and analyzing the large volume of data generated in all aspects of vehicle operations using Big Data technologies. Since these data are essential for many connected-car applications, designing and monitoring MapReduce algorithms that process vehicle data in the Hadoop framework makes it possible to build a hosted analytics data source. This hosted data source allows the different actors of the connected-car industry ecosystem to access the useful data they need to support connected-car applications. This paper studies the design steps to take into consideration when implementing MapReduce patterns to analyze vehicle data in order to produce accurate, useful data hosted at the automakers and connected-car service providers. Experimental results show that the MapReduce join algorithm is more scalable and better optimized for distributed computing than the Statistical Analysis System (SAS) framework and the HiveQL declarative language. (C) 2016 The Authors. Published by Elsevier B.V.
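For reference, a reduce-side (repartition) join, one common MapReduce join pattern, can be sketched in a few lines of Python; the vehicle/event records and field names below are hypothetical, and the shuffle is simulated with a dictionary rather than Hadoop.

    from collections import defaultdict

    def map_phase(records, tag, key_fn):
        """Map side of a reduce-side join: emit the join key plus a tag
        naming the source dataset."""
        return [(key_fn(rec), (tag, rec)) for rec in records]

    def reduce_phase(shuffled):
        """Reduce side: per key, combine rows from both tagged sources."""
        for key, tagged in shuffled.items():
            left = [r for t, r in tagged if t == "vehicles"]
            right = [r for t, r in tagged if t == "events"]
            for l in left:
                for r in right:
                    yield key, l, r

    # Hypothetical vehicle records and telematics events, joined on vin.
    vehicles = [{"vin": "V1", "model": "A"}, {"vin": "V2", "model": "B"}]
    events = [{"vin": "V1", "speed": 88}, {"vin": "V1", "speed": 90}]

    shuffled = defaultdict(list)   # the framework's shuffle, simulated
    for k, v in (map_phase(vehicles, "vehicles", lambda r: r["vin"]) +
                 map_phase(events, "events", lambda r: r["vin"])):
        shuffled[k].append(v)

    for row in reduce_phase(shuffled):
        print(row)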
In recent years, RDF (Resource Description Framework) has been widely recognized as a standard data storage format. How to store and retrieve RDF data efficiently is a real issue, because it is the foundation of all semantic-based applications. In this paper, we propose a novel storage and retrieval solution based on the cloud and the R*-tree. In contrast to existing approaches, we use a cloud-based storage approach to compress RDF data, while the R*-tree is adapted to retrieve the compressed data. The advantage is that this not only reduces the local storage pressure but also improves retrieval performance compared to existing approaches. Because we separate storage from retrieval and adopt dictionary-based compression approaches, our solution is better than existing approaches in terms of security and flexibility.
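A minimal sketch of the dictionary-based compression idea, assuming a simple integer-id dictionary; the class below is illustrative only and omits the cloud storage layer and the R*-tree index over the encoded triples.

    class TripleStore:
        """Dictionary-compressed triple store: every term is stored once
        and triples become integer-id tuples (the R*-tree indexing of
        those id tuples described above is omitted here)."""

        def __init__(self):
            self.term_to_id, self.id_to_term, self.triples = {}, [], []

        def _encode(self, term):
            if term not in self.term_to_id:
                self.term_to_id[term] = len(self.id_to_term)
                self.id_to_term.append(term)
            return self.term_to_id[term]

        def add(self, s, p, o):
            self.triples.append(tuple(self._encode(t) for t in (s, p, o)))

        def query(self, s=None, p=None, o=None):
            """Match triples against an (s, p, o) pattern; None is a wildcard."""
            pattern = tuple(None if t is None else self.term_to_id.get(t, -1)
                            for t in (s, p, o))
            for triple in self.triples:
                if all(q is None or q == v for q, v in zip(pattern, triple)):
                    yield tuple(self.id_to_term[v] for v in triple)

    store = TripleStore()
    store.add("ex:alice", "foaf:knows", "ex:bob")
    store.add("ex:alice", "foaf:name", '"Alice"')
    print(list(store.query(s="ex:alice", p="foaf:knows")))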