
A dynamic re-partitioning strategy based on the distribution of key in Spark

Authors: Tianyu Zhang, Xin Lian

Affiliations: 1. School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; 2. Chongqing Engineering Research Center of Mobile Internet Data Application, Chongqing 400065, China

Published in: AIP Conference Proceedings

Year/Volume/Issue: 2018, Vol. 1967, Issue 1

Subject Classification: 07 [Science]; 0702 [Science - Physics]

Abstract: Spark is a memory-based distributed data processing framework; it is capable of processing massive data and has become a focus of Big Data research. However, the performance of the Spark shuffle depends on the distribution of the data. Spark's naive hash partition function cannot guarantee load balancing when the data are skewed, and the job completion time is dominated by the node with the most data to process. To handle this problem, dynamic sampling is used. During task execution, a histogram records the key frequency distribution on each node, and the per-node histograms are merged into a global key frequency distribution. By analyzing this key distribution, a load-balanced data partition is achieved. Results show that the Dynamic Re-Partitioning function outperforms the default hash partition, Fine Partition, and the Balanced-Schedule strategy: it reduces task execution time and improves the efficiency of the whole cluster.
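The abstract does not spell out the paper's algorithm, but the idea it describes (estimate the key frequency distribution, then place heavy keys so partition loads even out) can be illustrated with a minimal Scala/Spark sketch. Everything below is an assumption for illustration only: the class KeyFrequencyPartitioner, the driver object DrpSketch, the sample fraction 0.05, and the use of 8 partitions are all hypothetical, and the single driver-side sample() stands in for the paper's per-node histograms built during execution.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.Partitioner

// Hypothetical, simplified sketch of a key-frequency-aware partitioner.
// It does NOT reproduce the paper's Dynamic Re-Partitioning algorithm;
// it only illustrates the idea in the abstract: estimate key frequencies,
// then assign heavy keys so that partition loads stay balanced.
class KeyFrequencyPartitioner(override val numPartitions: Int,
                              keyFreq: Map[String, Long]) extends Partitioner {

  // Greedy bin packing: place sampled keys, heaviest first, onto the
  // currently lightest partition.
  private val assignment: Map[String, Int] = {
    val load = Array.fill(numPartitions)(0L)
    keyFreq.toSeq.sortBy { case (_, c) => -c }.map { case (key, count) =>
      val target = load.indices.minBy(i => load(i))
      load(target) += count
      key -> target
    }.toMap
  }

  override def getPartition(key: Any): Int = {
    val k = key.toString
    // Keys not seen in the sample fall back to plain hash partitioning.
    assignment.getOrElse(k, {
      val h = k.hashCode % numPartitions
      if (h < 0) h + numPartitions else h
    })
  }
}

object DrpSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("drp-sketch").setMaster("local[*]"))

    // Skewed input: one hot key dominates, the scenario the abstract targets.
    val pairs = sc.parallelize(
      Seq.fill(100000)(("hot", 1)) ++ (1 to 1000).map(i => (s"k$i", 1)))

    // Approximate the global key frequency distribution from a sample
    // (the paper instead merges per-node histograms during execution).
    val freq: Map[String, Long] = pairs
      .sample(withReplacement = false, fraction = 0.05)
      .map { case (k, _) => (k, 1L) }
      .reduceByKey(_ + _)
      .collectAsMap()
      .toMap

    val balanced = pairs.partitionBy(new KeyFrequencyPartitioner(8, freq))
    // Print per-partition record counts to inspect the resulting balance.
    balanced.glom().map(_.length).collect().foreach(println)
    sc.stop()
  }
}
```

With the default hash partitioner, every "hot" record would land in one partition; the greedy assignment above cannot split a single hot key either, but it keeps the remaining keys away from the overloaded partition, which is the load-balancing effect the abstract claims for skewed key distributions.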
