
Metadata Distribution and Consistency Techniques for Large-Scale Cluster File Systems

Authors: Xiong, Jin; Hu, Yiming; Li, Guojie; Tang, Rongfeng; Fan, Zhihua

Affiliations: Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China; Univ Cincinnati, Dept Elect & Comp Engn & Comp Sci, Engn Res Ctr 542, Cincinnati, OH 45221, USA; NetEase.Com Inc, Beijing 100084, Peoples R China

Publication: IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS (IEEE Trans Parallel Distrib Syst)

Year/Volume/Issue: 2011, Vol. 22, No. 5

Pages: 803-816


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (Engineering or Science degree)]

Funding: Natural Science Foundation of China; National High-Technology Research and Development Program of China [2006AA01A102, 2009AA01Z139, 2009AA01A129]; National Science Foundation [CCF-0541103]

Keywords: Distributed file systems; metadata management

Abstract: Most supercomputers nowadays are based on large clusters, which call for sophisticated, scalable, and decentralized metadata processing techniques. From the perspective of maximizing metadata throughput, an ideal metadata distribution policy should automatically balance namespace locality and even distribution without manual intervention. None of the existing metadata distribution schemes is designed to strike such a balance. We propose a novel metadata distribution policy, Dynamic Dir-Grain (DDG), which seeks to balance the requirements of preserving namespace locality and evenly distributing the load by dynamically partitioning the namespace into size-adjustable hierarchical units. Extensive simulation and measurement results show that DDG policies with a proper granularity significantly outperform traditional techniques such as the Random policy and the Subtree policy, by 40 percent to 62 times. In addition, from the perspective of file system reliability, metadata consistency is an equally important issue; however, it is complicated by dynamic metadata distribution. Metadata consistency for cross-metadata-server operations cannot be ensured by traditional metadata journaling on each server. While the traditional two-phase commit (2PC) algorithm could be used, it is too costly for distributed file systems. We propose a consistent metadata processing protocol, S2PC-MP, which combines the two-phase commit algorithm with metadata processing to reduce overheads. Our measurement results show that S2PC-MP not only ensures fast recovery, but also greatly reduces failure-free execution overheads.
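The trade-off the abstract describes can be illustrated with a minimal sketch, which is not the authors' implementation: if each metadata server is chosen by hashing only the first `granularity` directory components of a path, then all paths inside the same hierarchical unit land on one server (namespace locality), while different units spread across servers (even distribution). The function name, `granularity` parameter, and hashing choice below are illustrative assumptions, not details from the paper.

```python
import hashlib

def server_for_path(path, num_servers, granularity):
    """Illustrative only: pick a metadata server by hashing the first
    `granularity` path components (one hierarchical unit), so paths in
    the same unit map to the same server while distinct units spread
    across the cluster."""
    parts = [p for p in path.strip("/").split("/") if p]
    unit = "/".join(parts[:granularity])
    digest = hashlib.md5(unit.encode()).hexdigest()
    return int(digest, 16) % num_servers
```

With `granularity` set very low this degenerates toward a Subtree-style policy (good locality, poor balance); with it set very high, toward a Random-style per-file hash (good balance, poor locality). DDG's size-adjustable units sit between these extremes.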
