Sparse linear algebra comprises fundamental operations in many large-scale scientific computing and real-world applications. Its performance is often bottlenecked because it mainly involves memory-bound computations with low arithmetic intensity, and improving this performance has increasingly become a focus of research. Using parallel computing techniques to accelerate sparse linear algebra is currently the most popular approach, yet it faces various challenges: large-scale data brings storage difficulties, and the sparsity of data leads to irregular memory accesses and parallel load imbalance. This article therefore provides a comprehensive overview of accelerating sparse linear algebra operations on parallel computing platforms, focusing on four main classifications: sparse matrix-vector multiplication (SpMV), sparse matrix-sparse vector multiplication (SpMSpV), sparse general matrix-matrix multiplication (SpGEMM), and sparse tensor algebra. The takeaways from this article include the following: understanding the challenges of accelerating sparse linear algebra on various hardware platforms; understanding how structured data sparsity can improve storage efficiency; understanding how to optimize parallel load balance; understanding how to improve the efficiency of memory accesses; understanding how adaptive frameworks automatically select the optimal algorithms; and understanding recent design trends for acceleration of parallel sparse linear algebra.
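To make the discussed challenges concrete, the following is a minimal sketch (not from the article; the function name `csr_spmv` and the compressed sparse row layout shown are our illustrative assumptions) of sequential SpMV on a CSR-stored matrix. It shows why the operation is memory-bound and access-irregular: each nonzero contributes one multiply-add but requires an indirect load of `x[col_idx[k]]`, and rows with very different nonzero counts cause load imbalance when rows are parallelized naively.

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a matrix A stored in CSR format.

    CSR stores only the nonzeros: `values` holds them row by row,
    `col_idx` gives each nonzero's column, and `row_ptr[i]:row_ptr[i+1]`
    delimits row i's slice of the two arrays.
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # One multiply-add per nonzero, but the read of x is indirect
        # (gather through col_idx) -> low arithmetic intensity and
        # irregular memory accesses.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y


# Example: A = [[2, 0, 1],
#               [0, 3, 0],
#               [4, 0, 5]]
values = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_spmv(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

In production one would use an optimized library (e.g., `scipy.sparse` or a vendor BLAS) rather than this loop; the sketch only illustrates the access pattern the surveyed optimizations target.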