Author affiliation: Ain Shams Univ, Fac Sci, Math Dept, Comp Sci Div, Cairo 11566, Egypt
Publication: IEEE Access
Year/Volume: 2020, Vol. 8
Pages: 19737-19749
Keywords: Learning with errors; lattice-based cryptography; LLL algorithm; shortest vector problem; closest vector problem; bounded distance decoding; GPU; cryptanalysis
Abstract: In this paper, we present a GPU-based parallel algorithm for the Learning With Errors (LWE) problem using a lattice-based Bounded Distance Decoding (BDD) approach. To the best of our knowledge, this is the first GPU-based implementation for the LWE problem. Compared to the sequential BDD implementations of the Lindner-Peikert and pruned-enumeration strategies by Kirshanova [1], our GPU-based implementation is faster by factors of almost 6 and 9, respectively. The GPU used is an NVIDIA GeForce GTX 1060 6GB. We also provide a parallel implementation using two GPUs. The results show that our algorithm is scalable and faster than the sequential versions (Lindner-Peikert and pruned-enumeration) by factors of almost 13 and 16, respectively. Moreover, the results show that our parallel implementation using two GPUs is more efficient than Kirshanova et al.'s parallel implementation using 20 CPU cores.
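The BDD approach named in the abstract builds on Babai's nearest-plane algorithm, which also underlies the Lindner-Peikert decoder. As an illustration only (this is not the paper's GPU implementation, just a minimal sequential sketch using NumPy), the algorithm walks the Gram-Schmidt vectors of the basis from last to first, rounding the target onto the nearest translated hyperplane at each step:

```python
import numpy as np

def gram_schmidt(B):
    # Gram-Schmidt orthogonalization of the basis rows (no normalization).
    Bstar = np.zeros(B.shape, dtype=float)
    for i in range(B.shape[0]):
        Bstar[i] = B[i]
        for j in range(i):
            mu = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] -= mu * Bstar[j]
    return Bstar

def nearest_plane(B, t):
    # Babai's nearest-plane algorithm: given a lattice basis B (rows)
    # and a target t, return a lattice vector close to t.
    Bstar = gram_schmidt(B.astype(float))
    b = np.array(t, dtype=float)
    for i in range(B.shape[0] - 1, -1, -1):
        # Round the coefficient of b along the i-th Gram-Schmidt direction.
        c = round(np.dot(b, Bstar[i]) / np.dot(Bstar[i], Bstar[i]))
        b -= c * B[i]
    # b now holds the residual error; t - b is the decoded lattice point.
    return np.array(t, dtype=float) - b

# Example: decode a noisy target against a simple 2D lattice.
B = np.array([[2.0, 0.0], [0.0, 2.0]])
t = np.array([2.2, -0.3])
print(nearest_plane(B, t))  # → [2. 0.]
```

For well-reduced bases and small errors (the BDD regime relevant to LWE), this recovers the unique nearest lattice point; the pruned-enumeration variant compared in the abstract explores several rounding candidates per level instead of a single one.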