
Consider a Schnorr group with order a prime $q$ sized for security against current computers (like $q$ of 256 bit); modulus a prime $p=q\,r+1$ large enough (e.g. 3072 to 32768-bit) that the algorithms of choice for solving the DLP in the group are Pollard's rho or Pollard's kangaroo, rather than Index calculus, Function field sieve or GNFS.
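For concreteness, here is a minimal Python sketch of generating such a group with toy, insecure sizes (the primality test and the search loop are illustrative assumptions, not a vetted parameter generator):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test (illustrative only)."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def toy_schnorr_group(q_bits=32, p_bits=256):
    """Toy Schnorr group (p, q, g): p = q*r + 1 prime, g of prime order q mod p.
    Real parameters would use q of 256+ bits and p of 3072+ bits."""
    while True:                                  # pick the prime subgroup order q
        q = random.getrandbits(q_bits) | (1 << (q_bits - 1)) | 1
        if is_probable_prime(q):
            break
    while True:                                  # search r until p = q*r + 1 is prime
        p = q * random.getrandbits(p_bits - q_bits) + 1
        if p.bit_length() == p_bits and is_probable_prime(p):
            break
    while True:                                  # derive a generator of the order-q subgroup
        g = pow(random.randrange(2, p - 1), (p - 1) // q, p)
        if g != 1:
            return p, q, g

p, q, g = toy_schnorr_group()
assert g != 1 and pow(g, q, p) == 1              # g has order exactly q, since q is prime
print(f"p: {p.bit_length()} bits, q: {q.bit_length()} bits, g of order q")
```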

The expected cost of solving the Discrete Logarithm Problem in that group on classical computers is a few times $\sqrt q$ multiplications modulo $p$, and thus grows about as $(\log p)^\alpha\sqrt q$, with $\alpha$ depending on the multiplication algorithm ($\alpha\approx2$ for schoolbook multiplication, $\alpha\approx\log_2(3)\approx1.585$ for Karatsuba, slightly lower $\alpha$ for more advanced algorithms).
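As a rough worked example (dropping the constant factor, which does not matter for the comparison), here is how $\log_2\!\left((\log_2 p)^\alpha\sqrt q\right)$ evolves for a 256-bit $q$ and a few sizes of $p$:

```python
import math

Q_BITS = 256                                   # bit length of the group order q
for p_bits in (3072, 8192, 32768):
    for alpha, algo in ((2.0, "schoolbook"), (math.log2(3), "Karatsuba")):
        # cost ~ (log2 p)^alpha * sqrt(q), so log2(cost) = alpha*log2(p_bits) + Q_BITS/2
        log2_cost = alpha * math.log2(p_bits) + Q_BITS / 2
        print(f"p of {p_bits:5d} bits, {algo:10s}: ~2^{log2_cost:.1f} bit operations")
```

Growing $p$ from 3072 to 32768 bits adds only about 7 bits to that estimate with $\alpha\approx2$: classically, the security is essentially set by $q$.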

On an hypothetical Quantum Computer, Shor's algorithm can in principle be used for Discrete Logarithm problems (see answers to these questions), including for this group. How does the magnitude of $p$ influence the difficulty of doing this, by some useful criteria, like the number of qubits required, their quality (error rate, coherence time), or the duration or energy required for the hypothetical computation?


My motivation is assessing whether, when speed and key size are secondary, Schnorr groups with moderate $q$ but large $p$ (by today's standards) would give some assurance of resisting an hypothetical first wave of Cryptographically Relevant Quantum Computers.


1 Answer


How does the magnitude of $p$ influence the difficulty of doing this, by some useful criteria?

Well, the expensive part of Shor's algorithm is the necessity of performing $O(\log q)$ multiplications (on entangled qubits) in the group (which, in this case, is mod $p$).

So, to do this would require $O(\log p)$ qubits, and (assuming you use the textbook multiplication algorithm) $O(\log q (\log p)^2)$ quantum operations. So, increasing $p$ would increase both the required number of qubits and the total number of operations (but both by a low order polynomial factor).
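To put rough numbers on that scaling (the constant factors below are placeholders, since actual resource estimates depend heavily on the circuit construction, error-correction overhead and physical error rates), here is a back-of-the-envelope sketch for a 256-bit $q$:

```python
import math

Q_BITS = 256                              # Shor's DLP needs on the order of log2(q) group multiplications
for p_bits in (3072, 8192, 32768):
    logical_qubits = p_bits               # O(log p) logical qubits; constant factor assumed to be 1
    gates = 2 * Q_BITS * p_bits ** 2      # O(log q * (log p)^2) gates with schoolbook modular multiplication
    print(f"p of {p_bits:5d} bits: ~{logical_qubits} logical qubits, ~2^{math.log2(gates):.1f} quantum gates")
```

Under these (crude) assumptions, going from a 3072-bit to a 32768-bit $p$ multiplies the number of logical qubits by roughly 10 and the number of quantum operations by roughly 100, while barely changing the classical cost.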

Now, one suggestion I heard (by Shamir) was to instead do radically imbalanced RSA - we take a very large prime $p$ and a moderate prime $q$ (e.g. 1024 bits), and have our public key be $pq$ and a small exponent $e > \log p / \log q$; the private key is the value $q$ and $d = e^{-1} \bmod q-1$. To encrypt a message $m$ (which is required to be less than $q$), we do the conventional encryption $c = m^e \bmod pq$. To decrypt a ciphertext $c$, we first compute $c \bmod q$, and then compute $m = (c \bmod q)^d \bmod q$ (ignoring the $p$ part). Here, encryption and decryption are both not that expensive, and to break the scheme, the only obvious approach is to factor $pq$ (which, because $p$ is large, might not be practical for first generation CRQC's).
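Here is a minimal sketch of that idea with toy, insecure sizes; the fixed Mersenne primes and $e=17$ are merely illustrative choices so the example is self-contained (it uses Python 3.8+ for the modular inverse via pow):

```python
import random

# Toy, insecure parameters: q is the "moderate" prime, p the much larger one.
q = 2**61 - 1                     # private moderate prime (a real instance would use ~1024 bits)
p = 2**127 - 1                    # very large prime; never needed again after key generation
n = p * q                         # public modulus
e = 17                            # public exponent, chosen coprime to q - 1
d = pow(e, -1, q - 1)             # private exponent: e^-1 mod (q-1)  (Python 3.8+)

m = random.randrange(2, q)        # plaintext must be smaller than q
c = pow(m, e, n)                  # encryption: ordinary RSA exponentiation mod p*q
recovered = pow(c % q, d, q)      # decryption: reduce mod q, then exponentiate mod q only

assert recovered == m             # p plays no role in decryption
print("message recovered correctly without ever using p")
```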

I can't endorse this idea (because we have no idea how fast CRQC's will scale up) - however, it may perform better than your Schnorr group idea...

  • Shamir's unbalanced RSA approach is discussed among the trade-offs in the post-quantum RSA paper: eprint.iacr.org/2017/351.pdf#page=11 – Commented Jan 16, 2023 at 16:52
  • I get that DLP in a Schnorr group is not quantum-resistant. Still, being able to raise the necessary number of qubits by a noticeable factor (like 10), and the number of quantum operations by even more, compared to ECC at the same signature size, seems a way to not be the first system to fail to hypothetical CRQCs. I would call that quantum resilience. – fgrieu, Commented Jan 17, 2023 at 11:52
