Author Affiliation: Intel Corp., Intel Compiler Labs, Software & Solutions Group, Santa Clara, CA 95052, USA
Publication: The Computer Journal (COMPUTER JOURNAL)
Year/Volume/Issue: 2005, Vol. 48, No. 5
Pages: 588-601
Subject Classification: 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]
Abstract: State-of-the-art multiprocessor systems pose several difficulties: (i) the user has to parallelize the existing serial code; (ii) explicitly threaded programs using a thread library are not portable; (iii) writing efficient multi-threaded programs requires intimate knowledge of the machine's architecture and micro-architecture. Thus, well-tuned parallelizing compilers are in high demand to leverage advances in NUMA-based multiprocessors, simultaneous multi-threading processors and chip-multiprocessor systems, in response to the performance demands of the high-performance computing community. On the other hand, OpenMP* has emerged as the industry standard parallel programming model. Applications can be parallelized using OpenMP with less effort, in a way that is portable across a wide range of multiprocessor systems. In this paper, we present several practical compiler optimization techniques and discuss their effect on the performance of OpenMP programs. We elaborate on the major design considerations in a high-performance OpenMP compiler and present experimental data based on the implementation of the optimizations in the Intel(R) C++ and Fortran compilers. Interactions of the OpenMP transformation with other sequential optimizations in the compiler are discussed. The techniques in this paper have achieved significant performance improvements on the industry-standard SPEC* OMPM2001 and SPEC* OMPL2001 benchmarks, and these performance results are presented for Intel(R) Pentium(R) and Itanium(R) processor based systems.
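Note: the abstract refers to OpenMP's directive-based, portable parallel programming model. The following is a minimal illustrative sketch (not taken from the paper) of the kind of transformation it describes: a serial loop is parallelized with a single OpenMP work-sharing directive, and the same source remains portable because it can be compiled serially or with OpenMP enabled (e.g. icpx -qopenmp or g++ -fopenmp).

    // Minimal sketch: parallelizing a serial loop with an OpenMP directive.
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1000000;
        std::vector<double> a(n), b(n), c(n);
        for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2.0 * i; }

        // The compiler outlines this loop into a threaded region and the
        // OpenMP runtime divides the iterations among the available threads.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i) {
            c[i] = a[i] + b[i];
        }

        std::printf("c[%d] = %f\n", n - 1, c[n - 1]);
        return 0;
    }

How such a directive is lowered into threaded code, and how that lowering interacts with the compiler's sequential optimizations, is the subject of the paper.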