
In classical computing, we can run a key search (for example, against AES) on as many parallel computing nodes as possible.

Clearly, we can also run many instances of Grover's algorithm in parallel.

My question is: is it possible to get a speed-up by running more than one instance of Grover's algorithm, as in classical computing?


2 Answers


Certainly! Imagine you have $K=2^k$ copies of the search oracle $U_S$ that you can use. Normally, you'd search by iterating the action $$ H^{\otimes n}(\mathbb{I}_n-2|0\rangle\langle 0|^{\otimes n})H^{\otimes n}U_S, $$ starting from an initial state $(H|0\rangle)^{\otimes n}$. This takes time $\Theta(\sqrt{N})$. (I'm using $\mathbb{I}_n$ to denote the $2^n\times 2^n$ identity matrix.)
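For concreteness (a standard estimate, assuming a single marked item among the $N=2^n$ candidates): each Grover iteration rotates the state by an angle $2\arcsin(1/\sqrt{N})$ towards the marked item, so the number of iterations needed is about $$ r \approx \left\lfloor \frac{\pi}{4}\sqrt{N} \right\rfloor. $$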

You could replace this with $2^k$ parallel copies, each indexed by an $x\in\{0,1\}^k$, using $$ \left(\mathbb{I}_k\otimes H^{\otimes (n-k)}\right)\left(\mathbb{I}_k\otimes(\mathbb{I}_{n-k}-2|0\rangle\langle 0|^{\otimes (n-k)})\right)\left(\mathbb{I}_k\otimes H^{\otimes (n-k)}\right)U_S $$ and starting from a state $|x\rangle(H|0\rangle)^{\otimes(n-k)}$. The time required for running these would be reduced to $O(\sqrt{N/K})$, at the cost of requiring $K$ times more space.
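To see the effect numerically, here is a minimal NumPy sketch (an illustration of the idea rather than the exact circuit above): each of the $K$ copies runs a plain Grover search restricted to its own block of $N/K$ items. The helper name `run_block` and the particular values of $N$ and $K$ are just for illustration.

```python
# Minimal sketch: K parallel Grover searches, each on one block of N/K items.
import numpy as np

def run_block(block_size, marked_index_in_block):
    """Statevector Grover on one block; returns (iterations, success probability)."""
    n = block_size
    state = np.full(n, 1.0 / np.sqrt(n))          # uniform superposition over the block
    iterations = int(np.floor(np.pi / 4 * np.sqrt(n)))
    for _ in range(iterations):
        if marked_index_in_block is not None:      # oracle: phase-flip the marked element
            state[marked_index_in_block] *= -1
        state = 2 * state.mean() - state           # diffusion: reflect about the mean
    if marked_index_in_block is None:
        return iterations, 0.0
    return iterations, float(state[marked_index_in_block] ** 2)

N, K = 1 << 12, 1 << 4                 # N = 4096 items, K = 16 parallel copies
rng = np.random.default_rng(0)
marked = int(rng.integers(N))          # the single marked item
block = N // K

for x in range(K):                     # copy x searches the block [x*block, (x+1)*block)
    lo, hi = x * block, (x + 1) * block
    local = marked - lo if lo <= marked < hi else None
    iters, prob = run_block(block, local)
    if local is not None:
        print(f"copy {x}: {iters} iterations, success probability {prob:.4f}")
```

With these numbers each copy uses $\lfloor(\pi/4)\sqrt{256}\rfloor = 12$ iterations, whereas an unpartitioned search would need about $\lfloor(\pi/4)\sqrt{4096}\rfloor = 50$, which is the $\sqrt{K}$ saving (at the cost of $K$ times more space).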

In a scaling sense, one might consider this an irrelevant result. If you have a fixed number of oracles, $K$, then you get a fixed ($\sqrt{K}$) improvement (just as, if you have $K$ parallel classical cores, the best improvement you can get is a factor of $K$), and that does not change the scaling. But it does change the fundamental running time. We know that Grover's algorithm is exactly optimal: it takes the absolute minimum time possible with a single oracle. So, knowing that you get a $\sqrt{K}$ improvement in time is useful with regard to that benchmark of a specific running time at a specific value of $N$.

  • $N$ goes to infinity but $K$ does not. Your problem gets bigger but your resources remain few. – AHusain, Oct 25, 2018
  • This answer is correct (though it may not be optimal, as DaftWullie does warn). This is the same attitude towards parallelization as one takes in classical circuit complexity. If you want a speed-up due to parallelization, then you look to the circuit depth (because co-ordinating multiple processes isn't going to reduce the total work). It doesn't even matter if $K$ is constant --- either you're interested in the depth improvement from parallelization, or you're not. As with quantum computation itself, merely throwing more computers at a problem doesn't magically make everything fast! (Oct 26, 2018)

In a sense, if we run it in parallel on different nodes, you save wall-clock time. But if we talk about complexity (which is what we generally mean by a speed-up), we need a bit of analysis.

You agree that we need about $ \sqrt{N} $ operations in the non-parallel case. Say we have two nodes, and we split the list of $N$ elements into two lists of sizes $ N_1 $ and $ N_2 $. Searching the sub-lists takes about $ \sqrt{N_1} $ and $ \sqrt{N_2} $ operations, respectively.

However, we have that $$ \sqrt{N} = \sqrt{N_1+N_2} \le \sqrt{N_1} + \sqrt{N_2} $$
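As a concrete check (assuming an even split $N_1 = N_2 = N/2$): $$ \sqrt{N_1} + \sqrt{N_2} = 2\sqrt{N/2} = \sqrt{2}\,\sqrt{N} > \sqrt{N}, \qquad \max\left(\sqrt{N_1},\sqrt{N_2}\right) = \sqrt{N}/\sqrt{2}. $$ So the total number of oracle calls grows by a factor of $\sqrt{2}$, while the time the two nodes take when running simultaneously drops by a factor of $\sqrt{2}$.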

You would also still need to check which of the outputs returned by the parallel processes is the one you seek. For a fixed number of nodes this adds only a constant, which we generally hide in the $O$ notation.

However, this would still be interesting in practice, especially if we have to cluster hardware because we are limited in the number of qubits or by other constraints.

  • For $N_1=N_2$ it's still an inequality: $\sqrt{2}\,\sqrt{N_1} < 2\sqrt{N_1}$. (Oct 24, 2018)
  • Oh indeed, I was thinking $ \sqrt{ab} = \sqrt{a} \sqrt{b} $. I should stop answering questions here at midnight and when tired. Thanks for pointing that out. – cnada, Oct 25, 2018
  • @cnada: There are at least two different notions of complexity, both of which are relevant to speed-up. One is size complexity, and one is depth complexity. Unless otherwise specified, we often prefer to consider size complexity, but depth complexity is still something which is very much of interest in quantum computational complexity, for instance in MBQC [arXiv:quant-ph/0301052, arXiv:0704.1736] and recent results on unconditional depth separations [arXiv:1704.00690]. (Oct 25, 2018)
  • @NieldeBeaudrap I thought people look more at depth complexity. But for Grover, the size and depth complexity are of the same order: the square root of the size of the problem (generally seen as the size of a list of $N$ elements). Do you think my approach here is not right? – cnada, Oct 25, 2018
  • You're not saying anything that's wrong, I'm just pointing out that you're unduly emphasising size complexity and not really working out the benefit to depth complexity. Not much interesting happens if you only do $k \in O(1)$ parallel Grover processes, but as DaftWullie's answer suggests (and considering the classical post-processing), the depth complexity goes from $\sqrt N$ to $\log(k) \sqrt{N/k}$ for $k(N) \in \Omega(1)$ parallel Grover processes, which is an improvement by a factor of $\sqrt{k}/\log(k)$ (the log factor comes from identifying which, if any, process found a solution). (Oct 25, 2018)
