Given a parametric lattice with a basis given by polynomials in Z[t], we give an algorithm to construct an LLL-reduced basis whose elements are eventually quasi-polynomial in t: that is, they are given by formulas that are piecewise polynomial in t (for sufficiently large t), such that each piece is given by a congruence class modulo a period. As a consequence, we show that there are parametric solutions of the shortest vector problem and the closest vector problem that are also eventually quasi-polynomial in t.
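The LLL reduction underlying this construction can be illustrated with a toy implementation. The sketch below is a plain textbook LLL (delta = 3/4, exact rational arithmetic) run on evaluations of a hypothetical parametric basis B(t) at a few values of t; it is not the paper's parametric algorithm, which outputs symbolic quasi-polynomial formulas rather than per-t reductions.

```python
from fractions import Fraction

def gram_schmidt(B):
    """Return the Gram-Schmidt vectors B* and coefficients mu for basis B."""
    n = len(B)
    Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = (sum(Fraction(B[i][k]) * Bs[j][k] for k in range(len(v)))
                        / sum(x * x for x in Bs[j]))
            v = [v[k] - mu[i][j] * Bs[j][k] for k in range(len(v))]
        Bs.append(v)
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL reduction over the integers (small dimensions only)."""
    B = [list(map(int, row)) for row in B]
    n, k = len(B), 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            _, mu = gram_schmidt(B)
            q = round(mu[k][j])
            if q:
                B[k] = [B[k][i] - q * B[j][i] for i in range(len(B[k]))]
        Bs, mu = gram_schmidt(B)
        nsq = lambda v: sum(x * x for x in v)
        if nsq(Bs[k]) >= (delta - mu[k][k - 1] ** 2) * nsq(Bs[k - 1]):
            k += 1                              # Lovasz condition holds
        else:
            B[k], B[k - 1] = B[k - 1], B[k]     # swap and step back
            k = max(k - 1, 1)
    return B

# Evaluate a hypothetical parametric basis B(t) at a few values of t;
# the reduced bases vary with t, which is what the parametric algorithm
# captures in closed form.
for t in (4, 9, 25):
    print(t, lll([[t, 1, 0], [1, 0, 1], [0, 1, t]]))
```

The reduced bases printed for different t hint at the piecewise behaviour the paper formalizes.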
This note is an observation that the LLL algorithm applied to prime powers can be used to find "good" examples for the ABC and Szpiro conjectures. (C) 2004 Elsevier Inc. All rights reserved.
ISBN:
(Print) 9789819709441; 9789819709458
Using side information to attack RSA is a practical method. In reality, it is possible to intercept some bits of an unknown divisor p of a known composite integer N. We can then utilize Coppersmith's method to recover the whole of p in polynomial time, following the work of Herrmann and May (Asiacrypt'08). In this paper, we analyze in detail the idea of merging unknown bit blocks proposed by Herrmann and May and identify the cases where blocks can be merged with a considerable reduction in the complexity of the Herrmann-May attack. In fact, the complexity of this attack depends on the output quality of the LLL algorithm. For this, we propose a tighter upper bound on the length of LLL-reduced vectors using probabilistic and statistical methods. Specifically, treating the ratios ||v_i*|| / ||v_{i+1}*|| as continuous random variables, we find that the middle ratios ||v_i*|| / ||v_{i+1}*|| after LLL reduction are almost independently and identically distributed. We then utilize the central limit theorem to present a sharper LLL bound that holds with probability close to 1. We demonstrate the advantages of merging variables and applying the new bound through experiments. Finally, we combine these two improvements to propose an improved Herrmann-May attack.
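The central-limit-theorem step can be illustrated with a toy Monte-Carlo model. Here each log-ratio log(||v_i*|| / ||v_{i+1}*||) is treated as an i.i.d. draw; the mean and deviation are made-up illustrative numbers, not measured LLL statistics, and the "bound" is the usual three-sigma normal quantile.

```python
import math
import random

# Toy model of the CLT argument: treat each log-ratio
# log(||v_i*|| / ||v_{i+1}*||) as an i.i.d. Gaussian draw.
# mu and sigma below are illustrative assumptions only.
n, trials = 40, 2000
mu, sigma = 0.03, 0.02
rng = random.Random(7)

def log_first_norm():
    # log of the first Gram-Schmidt norm (up to normalization)
    # is the sum of the n log-ratios
    return sum(rng.gauss(mu, sigma) for _ in range(n))

# CLT-based bound at roughly the three-sigma confidence level
bound = n * mu + 3 * sigma * math.sqrt(n)
frac = sum(log_first_norm() <= bound for _ in range(trials)) / trials
print(frac)   # close to 1, matching the "probability close to 1" claim
```

The point of the sketch is only the shape of the argument: a sum of nearly i.i.d. log-ratios concentrates, so a quantile of the limiting normal gives a vector-length bound that holds with probability close to 1.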
In this paper, we consider a variant of RSA schemes called Prime Power RSA with modulus N = p^r q for r >= 2, where p and q are of the same bit-size. May showed that when the private exponent d < N^(…/(r+1)) or d < N^(…), N can be …
In this paper, we demonstrate that there exist weak keys in the RSA public-key cryptosystem with public exponent e = N^alpha <= N^0.5. In 1999, Boneh and Durfee showed that when alpha ≈ 1 and the private exponent d = N^beta < N^0.292, the system is insecure. Moreover, their attack is still effective for 0.5 < alpha < 1. We propose a generalized cryptanalytic method to attack the RSA cryptosystem with alpha <= 0.5. For c = [(1-alpha)/alpha] and e^(gamma*c) ≡ d (mod e^c), when gamma, beta satisfy gamma < 1 + 1/c - 1/(2*alpha*c) and beta < alpha*c + 7/6 - alpha*gamma*c - (1/3)*(6*alpha + 6*alpha*c + 1 - 6*alpha*gamma*c)^(1/2), we can perform cryptanalytic attacks based on the LLL algorithm. The basic idea is an application of Coppersmith's techniques, and we further adapt the technique of unravelled linearization, which leads to an optimized result. The advantage is that we achieve new attacks on RSA with alpha <= 0.5 and, consequently, there exist weak keys in RSA for most alpha.
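The two range conditions can be evaluated numerically. The sketch below codes the gamma and beta bounds exactly as stated in the abstract; the bracket in c = [(1-alpha)/alpha] is interpreted as a ceiling, which is an assumption, and the sample values of alpha and gamma are illustrative only.

```python
import math

# Attack-range formulas from the abstract. Assumption: the bracket in
# c = [(1-alpha)/alpha] is read as a ceiling.
def gamma_bound(alpha):
    c = math.ceil((1 - alpha) / alpha)
    return c, 1 + 1 / c - 1 / (2 * alpha * c)

def beta_bound(alpha, gamma):
    c = math.ceil((1 - alpha) / alpha)
    disc = 6 * alpha + 6 * alpha * c + 1 - 6 * alpha * gamma * c
    return alpha * c + 7 / 6 - alpha * gamma * c - math.sqrt(disc) / 3

# Illustrative (hypothetical) parameters: e = N^0.25, gamma = 0.5
c, g = gamma_bound(0.25)
print(c, g, beta_bound(0.25, 0.5))
```

For a given alpha this gives the admissible gamma range and, for each gamma inside it, the beta threshold below which the LLL-based attack applies.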
Since May (Crypto'02) revealed the vulnerability of small CRT-exponent RSA using Coppersmith's lattice-based method, several papers have studied the problem and two major improvements have been made. (1) Bleichenbacher and May (PKC'06) proposed an attack for small d_q when the prime factor p is significantly smaller than the other prime factor q; the attack works for p < N^0.468. (2) Jochemsz and May (Crypto'07) proposed an attack for small d_p and d_q when the prime factors p and q are balanced; the attack works for d_p, d_q < N^0.073. Even though a decade has passed since their proposals, the above two attacks are still considered the state of the art, and no improvements have been made thus far. Since the attacks appear to have been studied with all the applicable techniques for Coppersmith's methods proposed by Durfee-Nguyen (Asiacrypt'00), Jochemsz-May (Asiacrypt'06), and Herrmann-May (Asiacrypt'09, PKC'10), improving the previous results seems technically hard, and a novel technique seems to be required. In this paper, we propose two improved attacks on small CRT-exponent RSA: a small d_q attack for p < N^0.5 (an improvement of Bleichenbacher-May's) and a small d_p and d_q attack for d_p, d_q < N^0.122 (an improvement of Jochemsz-May's). The latter result is also an improvement of our result in the proceedings version (Eurocrypt'17): d_p, d_q < N^0.091. We use Coppersmith's lattice-based method to solve modular equations and obtain the improvements from a novel lattice construction by exploiting useful algebraic structures of the CRT-RSA key generation equation. We explicitly show proofs of our attacks and verify their validity by computer experiments. In addition to the two main attacks, we also propose sm…
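The CRT-RSA key generation equations that the lattice construction exploits can be checked directly. The sketch below uses toy primes (illustrative only; real moduli are 1024+ bits) to verify the relations e*d_p ≡ 1 (mod p-1) and e*d_q ≡ 1 (mod q-1), and to run CRT decryption with the small exponents.

```python
from math import gcd

# Toy CRT-RSA key generation (illustrative tiny primes).
# The CRT exponents satisfy the key-generation equations
#   e*d_p = 1 + k_p*(p - 1)   and   e*d_q = 1 + k_q*(q - 1),
# whose algebraic structure the lattice attacks exploit.
p, q, e = 1009, 1013, 65537
assert gcd(e, (p - 1) * (q - 1)) == 1
d_p = pow(e, -1, p - 1)
d_q = pow(e, -1, q - 1)
k_p = (e * d_p - 1) // (p - 1)
assert e * d_p == 1 + k_p * (p - 1)

# CRT decryption with the small exponents d_p, d_q
N = p * q
m = 123456
c = pow(m, e, N)
m_p, m_q = pow(c, d_p, p), pow(c, d_q, q)
h = (pow(q, -1, p) * (m_p - m_q)) % p      # Garner recombination
assert (m_q + h * q) % N == m
print("CRT decryption recovered m =", m)
```

When d_p and d_q are chosen small (rather than derived from a random d, as here), these same equations become the small-root instances attacked above.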
For RSA, May showed a deterministic polynomial-time equivalence of computing d to factoring N (= pq). On the other hand, Takagi showed a variant of RSA such that the decryption algorithm is faster than standard RSA, where N = p^r q while ed ≡ 1 (mod (p-1)(q-1)). In this paper, we show that a deterministic polynomial-time equivalence also holds in this variant. The coefficient matrix T to which the LLL algorithm is applied is no longer lower triangular, and hence we develop a new technique to overcome this problem.
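That knowledge of d factors N is easy to see in the classical N = pq setting. The sketch below is the well-known *probabilistic* method (write ed - 1 = 2^s * t with t odd and hunt for a nontrivial square root of 1 mod N); May's result, and the equivalence shown here for N = p^r q, are deterministic, which is the harder statement.

```python
import random
from math import gcd

def factor_with_d(N, e, d, rng=None):
    """Factor N = p*q from a valid RSA pair (e, d).

    Well-known probabilistic method; sketch for the classical N = pq
    case only (the paper's N = p^r q equivalence is deterministic).
    """
    rng = rng or random.Random(0)
    t = e * d - 1
    while t % 2 == 0:
        t //= 2
    while True:
        g = rng.randrange(2, N - 1)
        f = gcd(g, N)
        if f > 1:
            return f                      # lucky: g shares a factor with N
        x = pow(g, t, N)
        if x in (1, N - 1):
            continue                      # this g reveals nothing; retry
        while True:
            y = pow(x, 2, N)
            if y == 1:
                return gcd(x - 1, N)      # x is a nontrivial sqrt of 1 mod N
            if y == N - 1:
                break                     # failed for this g
            x = y

p, q, e = 1009, 1013, 65537
d = pow(e, -1, (p - 1) * (q - 1))
f = factor_with_d(p * q, e, d)
print(sorted([f, p * q // f]))            # -> [1009, 1013]
```

Each random g succeeds with probability at least 1/2, so the loop terminates quickly in practice; derandomizing it is where the lattice machinery (and the matrix T above) comes in.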
In this paper, we present a lattice-based method for the small secret exponent attack on the RSA scheme. Boneh and Durfee reduced the attack to finding the small roots of the bivariate modular equation x(N + 1 + y) + 1 ≡ 0 (mod e), where N is an RSA modulus and e is the RSA public key, and proposed a lattice-based algorithm for solving the problem. When the secret exponent d is less than N^0.292, their method breaks the RSA scheme. Since the lattice used in the analysis is not full-rank, the analysis is not easy. Blömer and May proposed an alternative algorithm that uses a full-rank lattice, even though it gives a bound (d <= N^0.290) that is worse than Boneh-Durfee's. However, the proof of their bound is still complicated. Herrmann and May, however, have given an elementary proof of the Boneh-Durfee bound d <= N^0.292. In this paper, we first give an elementary proof achieving the Blömer-May bound d <= N^0.290. Our proof employs the unravelled linearization technique introduced by Herrmann and May and is rather simpler than Blömer-May's proof. We then provide a unified framework, which subsumes the two previous methods (Herrmann-May and Blömer-May) as special cases, for constructing a lattice that can be used to solve the problem. In addition, we prove that the Boneh-Durfee bound d <= N^0.292 is still optimal in our unified framework.
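Where the bivariate equation comes from can be verified on toy numbers. From the key relation e*d = 1 + k*phi(N) with phi(N) = N + 1 - (p + q), setting x = k and y = -(p + q) gives exactly x(N + 1 + y) + 1 ≡ 0 (mod e); the primes below are illustrative only.

```python
# Derivation check for the Boneh-Durfee equation: e*d = 1 + k*phi(N),
# phi(N) = N + 1 - (p + q); with x = k, y = -(p + q) we get
# x*(N + 1 + y) + 1 = e*d ≡ 0 (mod e). Toy-sized primes for illustration.
p, q, e = 1009, 1013, 65537
N = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)
k = (e * d - 1) // phi
x, y = k, -(p + q)
assert (x * (N + 1 + y) + 1) % e == 0
print("root (x, y) =", (x, y), "satisfies the congruence mod e")
```

When d < N^beta, both |x| and |y| are small relative to e, which is what makes Coppersmith-type lattice methods applicable.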
In this paper, we present a GPU-based parallel algorithm for the Learning With Errors (LWE) problem using a lattice-based Bounded Distance Decoding (BDD) approach. To the best of our knowledge, this is the first GPU-based implementation for the LWE problem. Compared to the sequential BDD implementations of the Lindner-Peikert and pruned-enumeration strategies by Kirshanova [1], our GPU-based implementation is faster by factors of almost 6 and 9, respectively. The GPU used is an NVIDIA GeForce GTX 1060 6GB. We also provide a parallel implementation using two GPUs. The results show that our algorithm is scalable and faster than the sequential versions (Lindner-Peikert and pruned enumeration) by factors of almost 13 and 16, respectively. Moreover, the results show that our parallel implementation using two GPUs is more efficient than Kirshanova et al.'s parallel implementation using 20 CPU cores.
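The BDD primitive behind the Lindner-Peikert strategy is a nearest-plane traversal. The sketch below is plain Babai nearest-plane in pure Python on a tiny hypothetical basis; Lindner-Peikert's decoder generalizes it by exploring several planes per level, and the paper's contribution is parallelizing that search on GPUs, none of which this sketch attempts.

```python
from fractions import Fraction

def gram_schmidt(B):
    # Gram-Schmidt orthogonalization with exact rationals
    Bs = []
    for b in B:
        v = [Fraction(x) for x in b]
        for u in Bs:
            c = (sum(Fraction(b[j]) * u[j] for j in range(len(b)))
                 / sum(x * x for x in u))
            v = [v[j] - c * u[j] for j in range(len(v))]
        Bs.append(v)
    return Bs

def babai_nearest_plane(B, t):
    """Babai's nearest-plane step: return a lattice vector close to t.

    Explores only the single nearest plane at each level; tiny
    dimensions only (pure-Python sketch, not the paper's GPU code).
    """
    Bs = gram_schmidt(B)
    b = [Fraction(x) for x in t]
    v = [Fraction(0)] * len(t)
    for i in range(len(B) - 1, -1, -1):
        c = round(sum(b[j] * Bs[i][j] for j in range(len(b)))
                  / sum(x * x for x in Bs[i]))
        b = [b[j] - c * B[i][j] for j in range(len(b))]   # residual target
        v = [v[j] + c * B[i][j] for j in range(len(v))]   # lattice point
    return [int(x) for x in v]

print(babai_nearest_plane([[2, 0], [1, 2]], [3, 1]))
```

In the LWE setting the target t is the noisy sample and the returned lattice point decodes the secret, provided the error stays within the decoding radius.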
We study the hardness of the optimal jug measuring problem. By proving tight lower and upper bounds on the minimum number of measuring steps required, we reduce an inapproximable NP-hard problem (i.e., the shortest GCD multiplier problem [G. Havas, J.-P. Seifert, The Complexity of the Extended GCD Problem, in: LNCS, vol. 1672, Springer, 1999]) to it. It follows that the optimal jug measuring problem is NP-hard and so is the problem of approximating the minimum number of measuring steps within a constant factor. Along the way, we give a polynomial-time approximation algorithm with an exponential error based on the well-known LLL basis reduction algorithm. (C) 2008 Elsevier B.V. All rights reserved.
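For small instances the optimum can still be computed exactly by brute force, which makes the hardness statement concrete. The sketch below runs BFS over jug states, where a step fills a jug, empties a jug, or pours one jug into another; this exhaustive search is exponential in general, which is exactly why the paper resorts to an LLL-based approximation.

```python
from collections import deque

def min_measuring_steps(capacities, target):
    """Exact minimum number of measuring steps via BFS over jug states.

    A step fills a jug to the brim, empties a jug, or pours one jug into
    another until the source is empty or the destination is full.
    Feasible only for a few small jugs.
    """
    start = tuple(0 for _ in capacities)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        state, steps = queue.popleft()
        if target in state:
            return steps
        n = len(state)
        nxt = []
        for i in range(n):
            nxt.append(tuple(capacities[i] if j == i else state[j]
                             for j in range(n)))              # fill i
            nxt.append(tuple(0 if j == i else state[j]
                             for j in range(n)))              # empty i
            for k in range(n):
                if k != i:
                    amt = min(state[i], capacities[k] - state[k])
                    s = list(state)
                    s[i] -= amt
                    s[k] += amt
                    nxt.append(tuple(s))                      # pour i -> k
        for s in nxt:
            if s not in seen:
                seen.add(s)
                queue.append((s, steps + 1))
    return None   # target not measurable (e.g. not a multiple of the gcd)

print(min_measuring_steps((3, 5), 4))   # classic two-jug instance -> 6
```

With jugs of 3 and 5 the minimum is 6 steps; a target that is not a multiple of gcd of the capacities (e.g. 5 units with jugs of 4 and 6) is unreachable.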