
Today I saw a tweet by Tom Wong that reads: "Did you know? Without quantum entanglement, quantum computers would be no better than traditional computers."

But I remember coming across papers like this one, whose abstract says: "We present here a few simple examples in which quantum computing without entanglement is better than anything classically achievable, in terms of the reliability of the outcome after a fixed number of oracle calls"

Did I miss something?

3 Answers

The statement made by Tom Wong is based on the following reasoning, discussed in other answers on this site (for instance, this one): if there is no entanglement, then the $n$-qubit state is a tensor product of $n$ single-qubit states. Thus, a classical computer simply has to keep in memory the $2n$ complex amplitudes to simulate the quantum computer, which is efficient. Hence, whatever the quantum computer solves, the classical computer can solve too with the same complexity.
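
To make this reasoning concrete, here is a minimal Python sketch (my own illustration, not from any referenced answer) of simulating an entanglement-free pure-state computation: the state is stored as $n$ pairs of amplitudes, so 50 qubits fit comfortably in memory even though the full state vector would have $2^{50}$ entries.

```python
import math

# Hadamard gate as a plain 2x2 matrix
H = [[1/math.sqrt(2),  1/math.sqrt(2)],
     [1/math.sqrt(2), -1/math.sqrt(2)]]

def apply_1q(gate, qubit):
    # Apply a single-qubit gate to one (a0, a1) amplitude pair
    a0, a1 = qubit
    return [gate[0][0]*a0 + gate[0][1]*a1,
            gate[1][0]*a0 + gate[1][1]*a1]

n = 50                                    # 2^50 amplitudes would be infeasible to store
state = [[1.0, 0.0] for _ in range(n)]    # |0...0> as n single-qubit states
state = [apply_1q(H, q) for q in state]   # H on every qubit: still O(n) memory

def prob(bitstring, state):
    # For a product state, P(s) factorises over the qubits
    p = 1.0
    for bit, (a0, a1) in zip(bitstring, state):
        p *= abs(a1 if bit == '1' else a0)**2
    return p

print(prob('0'*n, state))   # ~2^-50, as expected for a uniform superposition
```

As soon as an entangling two-qubit gate is applied, this per-qubit representation breaks down, which is exactly where the classical simulation argument stops.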

First of all, note that this reasoning only works as long as we're dealing with pure states. As Norbert Schuch mentioned in the comments and as explained in this answer, it is not yet known how to efficiently simulate mixed separable states. This answer gives examples of problems that are considered hard for a classical computer but can be tackled using a single pure qubit and a completely mixed state.

The paper you've linked exploits this and challenges the premise that "if there is no entanglement, then the $n$-qubit state is a tensor product of $n$ single-qubit states" using the following mixed state: $$\varepsilon|\psi\rangle\langle\psi|+\frac{1-\varepsilon}{2^n}I_{2^n}$$ with $\varepsilon\in[0\,;\,1]$ and $I_{2^n}$ the identity matrix of size $2^n\times2^n$. In particular, there is a theorem which states that for a very small $\varepsilon$, namely for $\varepsilon$ such that: $$\varepsilon<\frac{1}{1+2^{2n-1}}$$ no entanglement appears in the state, even if $|\psi\rangle$ is entangled. They then proceed to show that for the Deutsch-Jozsa problem, this state yields a positive (though small) amount of information within a single query, which is not possible with a single classical call to the function. Thus, they managed to do something (gain information) that classical computers can't do.
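
As a small numeric illustration (my own sketch, not code from the paper), one can write this state down for $n=2$ with $|\psi\rangle$ a Bell state: the threshold value of $\varepsilon$ is already only $1/9$, and every computational-basis outcome has probability $\varepsilon|\langle s|\psi\rangle|^2+\frac{1-\varepsilon}{2^n}$.

```python
import math

n = 2
dim = 2**n
eps = 1.0 / (1 + 2**(2*n - 1))        # separability threshold quoted above

# |psi> = (|00> + |11>)/sqrt(2), an entangled state, as 4 amplitudes
psi = [1/math.sqrt(2), 0.0, 0.0, 1/math.sqrt(2)]

# For rho = eps |psi><psi| + (1-eps)/2^n I, the probability of basis
# outcome s is  eps*|<s|psi>|^2 + (1-eps)/2^n
probs = [eps*abs(a)**2 + (1 - eps)/dim for a in psi]
print(eps)                 # 1/9 ~ 0.111 for n=2; decays like 2^(1-2n)
print(probs, sum(probs))   # a valid distribution: sums to 1
```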


Now comes the part where I give my opinion on this. If someone disagrees, please feel free to comment or edit the answer accordingly.

There are two things that bother me in this method.

  1. Algorithms using this method will never have any advantage (computationally speaking) w.r.t. the classical ones.
  2. The fact that there's no entanglement comes more from wordplay than from a physical property, IMO.

Concerning the first point: using this method, the final state of an algorithm using $n$ qubits can always be written as: $$\varepsilon|\psi\rangle\langle\psi|+\frac{1-\varepsilon}{2^n}I_{2^n}$$ with $|\psi\rangle$ the pure state one would have obtained when starting from a pure state (i.e. the state one would have obtained when using the full potential of quantum computers). Performing a measurement in the computational basis (which is, without loss of generality, what one would do) yields a basis state $|s\rangle$, and each basis state has probability at least $\frac{1-\varepsilon}{2^n}$ of being observed. The maximum probability of measuring a desirable state $|s\rangle$ is thus $\varepsilon+\frac{1-\varepsilon}{2^n}$. Note that if no entanglement is to be observed, $\varepsilon$ must be exponentially small, so intuitively the amount of information you gain is also exponentially small. Thus, the algorithm isn't useful in practice, since it cannot solve a problem in sub-exponential time (or it cannot do so with a non-negligible advantage).
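
Plugging the threshold value of $\varepsilon$ into $\varepsilon+\frac{1-\varepsilon}{2^n}$ shows how quickly this best-case success probability collapses toward the uniform baseline (a quick sketch of my own):

```python
# Best-case probability of measuring the desired state, with eps sitting
# at the no-entanglement threshold 1/(1 + 2^(2n-1)) from the paper.
def best_case(n):
    eps = 1.0 / (1 + 2**(2*n - 1))
    return eps + (1 - eps) / 2**n

for n in (2, 5, 10, 20):
    print(n, best_case(n))   # already ~2^-n for modest n
```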

For instance, for the Deutsch-Jozsa problem, the probability of winning using this method (assuming that the probability that $f$ is balanced is $\frac12$) is: $$\frac12\left(\varepsilon+\frac{1-\varepsilon}{2^n}\right)+\frac12\left(\varepsilon+\frac{2^n-1}{2^n}(1-\varepsilon)\right)=\frac12+\frac{\varepsilon}{2}$$ with $\varepsilon$ being exponentially small. Recall that the classical computer wins with probability $\frac12$. Thus, though this algorithm can do something classical computers can't do without entanglement, its advantage is too small to be exploited by any polynomial-time algorithm.
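
A quick numeric sanity check of this winning probability (my own sketch; the strategy answers "constant" iff the all-zeros string is measured):

```python
def win_prob(n, eps):
    dim = 2**n
    p_const = eps + (1 - eps)/dim             # P(see |0...0>) when f is constant
    p_bal = eps + (dim - 1)*(1 - eps)/dim     # P(see anything else) when f is balanced
    return 0.5*p_const + 0.5*p_bal            # each case occurs with prob 1/2

n = 10
eps = 1.0 / (1 + 2**(2*n - 1))
print(win_prob(n, eps) - 0.5)   # only eps/2 above random guessing
```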

Concerning the second point: it comes, in my opinion, from the way we define entanglement for mixed states. The more intuitive way to think about the state we're dealing with is:

  • With probability $\varepsilon$, one works with the desired, entangled state.
  • Or with probability $1-\varepsilon$, one works with a state on which one has no information whatsoever.

Note that this uncertainty is purely classical: the state you're working with is one of these states, not a superposition of them. Thus, with probability $\varepsilon$, this state is entangled. Note also that the small advantage comes from the entangled part; the fact that this advantage is so small comes from the fact that $\varepsilon$ must be small for the state not to be "entangled".

Thus, even though the resulting mixed state is not defined as entangled, I'd still argue that the small difference one can observe between the classical and quantum computers comes from the entanglement of $|\psi\rangle$.

  • A short comment: I also read this paper and I totally agree with your first observation that the advantage without entanglement is somewhat fictive, as "in practice" you would need an exponential number of calls to obtain a reliable answer. I think that advantages without entanglement only arise under specific theoretical conditions (for instance in oracle-based setups where you do not have access to the circuit). If you have an explicit description of the circuit and the amount of entanglement is restricted to a finite portion of the qubits, the algorithm can be simulated classically in polynomial time. Commented Oct 5, 2022 at 14:54
  • Afaik it is not known how to efficiently simulate a quantum computation where the system is in a mixed separable state at every time step. (By this I don't mean the specific algorithm you mention, but the most general such computation.) So it is not clear whether entanglement is needed for a quantum speedup (though I agree it is very likely). Commented Oct 7, 2022 at 18:26
  • @NorbertSchuch Yes, I agree. You can have non-classical correlations in mixed states without entanglement (making them hard to simulate efficiently classically). My answer was implicitly focusing on pure states. Commented Oct 9, 2022 at 19:46
  • *Jozsa, not Josza Commented Apr 6, 2023 at 19:44

It might be worth pointing out the class DQC1, where you are only allowed to have 1 "clean" qubit in a quantum circuit (all other qubits start in the completely mixed state).

The fact that DQC1 includes problems that are (believed to be) hard to compute classically hints that there could be mixed states which do not necessarily have high entanglement but are still hard to simulate classically.

You can google keywords like "DQC1 entanglement" and find papers in this direction.

  • I think an important word is "high" entanglement. With no entanglement at all, according to Yoganathan you can efficiently classically simulate a DQC1 circuit. But with just a splash of entanglement you can, e.g., calculate Jones polynomials. Commented Mar 10, 2023 at 2:39

Like others, I agree with the position expressed in Wong's tweet - he likely posted it in the context of the then recently-awarded 2022 Nobel Prize in Physics to Aspect, Clauser, and Zeilinger, which recognized their exploration of the nature of entanglement. But, like others, I also find it interesting to explore the metes and bounds of when and how entanglement is necessary (or when it could be sufficient) for quantum speedups.

For example, another of my favorite quasi-counterexamples to the sentiment of Wong's tweet relates to the Elitzur-Vaidman bomb tester, especially as embodied by Kwiat, Weinfurter, and Zeilinger in their paper "Quantum Seeing in the Dark".

In the setup of the tester, one can test for the presence or absence of a bomb without necessarily triggering it. In the improvement of Kwiat, Weinfurter, and Zeilinger, one uses a single qubit, in conjunction with the quantum Zeno effect, to test for the presence of the bomb while almost never causing it to go off if it's there. Ryan O'Donnell also likes to introduce some aspects of quantum computing with this bomb tester - see, e.g., this YouTube presentation of his.
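
The Zeno-improved tester can be sketched analytically in a few lines (my own sketch, not the authors' code): rotate the qubit by $\pi/2N$ per round; a live bomb acts as a measurement after every round, exploding in a given round with probability $\sin^2(\pi/2N)$, so surviving all $N$ rounds happens with probability $\cos^{2N}(\pi/2N)\to1$.

```python
import math

def p_detect_without_boom(N):
    """Probability that, with a live bomb present, all N rounds leave the
    qubit in |0> (the bomb is never triggered), so the final measurement
    correctly reports 'bomb present'."""
    theta = math.pi / (2*N)
    return math.cos(theta)**(2*N)

for N in (1, 10, 100, 1000):
    print(N, p_detect_without_boom(N))   # approaches 1 as N grows
```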

Because there's only one qubit involved, it's not clear whether there's any entanglement. Nonetheless there's an improvement over anything that can be done classically. But, I call this a quasi-counterexample to the position that entanglement is necessary for quantum speedups. For example, it's not clear to me if the bomb-tester is an "algorithm", or merely a parlor-trick. Also, entanglement may somehow still be there in the setup, in some abstract way, between the one qubit and the presence or absence of the bomb.

