In empirical algorithmics, researchers study the performance of algorithms by measuring it on concrete problem instances, rather than relying solely on asymptotic analysis. This is quite common in machine learning and optimization. Right now, we would like to learn something about the relative performance of quantum algorithms and their classical counterparts based on preliminary data from quantum computer emulators.

My concern is that we might see encouraging empirical data, for instance quantum algorithms appearing to scale better on simulators of up to about 35 qubits, only for that advantage to disappear once data at larger sizes becomes available. What are the best practices for analyzing the relative performance of classical and quantum algorithms in a robust way that gives insight about scaling? Or is this simply not yet possible?
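One approach, sketched here purely as an illustration (the runtimes below are synthetic, not measured from any simulator): fit the observed runtimes to competing scaling models, compare how well each explains the data, and see how far their extrapolations diverge at sizes beyond the measured range.

```python
# Sketch: fit empirical runtimes to candidate scaling models and compare
# extrapolations. All data here is synthetic and illustrative; in practice
# the runtimes would come from simulator benchmarks.
import numpy as np

# Hypothetical median runtimes (seconds) at increasing problem sizes n.
n = np.array([10, 15, 20, 25, 30, 35], dtype=float)
t = np.array([0.02, 0.09, 0.41, 1.8, 8.2, 37.0])

# Exponential model t = a * 2^(b*n): linear fit of log2(t) against n.
b_exp, log_a_exp = np.polyfit(n, np.log2(t), 1)

# Polynomial model t = a * n^b: linear fit of log(t) against log(n).
b_poly, log_a_poly = np.polyfit(np.log(n), np.log(t), 1)

def exp_model(x):
    return 2.0 ** (log_a_exp + b_exp * x)

def poly_model(x):
    return np.exp(log_a_poly + b_poly * np.log(x))

# Holding out the largest sizes and testing prediction error there would be
# more honest; for brevity we only compare in-sample fit quality (residual
# sum of squares in log space).
rss_exp = np.sum((np.log(t) - np.log(exp_model(n))) ** 2)
rss_poly = np.sum((np.log(t) - np.log(poly_model(n))) ** 2)
print(f"exponential fit: t ~ 2^({b_exp:.2f} n), log-RSS = {rss_exp:.3f}")
print(f"polynomial fit:  t ~ n^{b_poly:.2f},  log-RSS = {rss_poly:.3f}")
print(f"extrapolation to n=60: exp {exp_model(60):.3g} s, poly {poly_model(60):.3g} s")
```

The point of the exercise is the last line: two models that both look plausible over the measured range can disagree by orders of magnitude once extrapolated, which is exactly the failure mode the question worries about.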
  • Is there a solution to this classically, where one might worry that a classical heuristic which stops after some amount of time seems to give good approximate optima up to some input size, and then performs less well beyond that point? Commented Apr 5, 2018 at 17:34
  • I think this would be due to the small problem instances possessing some structure which is lacking in the general case. For example, one might try to understand how well QAOA (the Quantum Approximate Optimization Algorithm) can solve the MaxCut combinatorial optimization problem. One might study MaxCut on small graph instances, but such instances might have lower connectivity, be planar, or possess some other accidental property uncharacteristic of the general problem. So it would be worth checking whether there are any size "critical points" for typical problems. Commented Apr 5, 2018 at 21:15
  • Perhaps you misunderstand me --- I do think this is a valid concern for quantum heuristics. I'm asking whether it isn't also a concern for classical heuristics --- due to the 'Law of Small Numbers', or small-size effects, you might say --- and if not, how the problem is solved / why it does not arise. On the other hand, if it is an unsolved problem even in the classical regime... Commented Apr 5, 2018 at 21:45
  • I agree that this should also be a concern for classical heuristics. I'll make a post on a more relevant Stack Exchange site and report back what I learn here. Commented Apr 5, 2018 at 21:49
  • What are the best practices for analyzing relative performance of classical and quantum algorithms in a robust way that gives insight about scaling? Or is this simply not yet possible? All major quantum and classical algorithms have proofs of their space and time complexity (as functions). You would just need to examine these functions (as they tend to infinity) for a robust insight. Proofs are preferred to empirical methods for analyzing algorithms because they are a robust means of analysis.
    – Learner
    Commented Feb 8, 2019 at 7:31
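The "accidental structure" check suggested in the comments can be sketched concretely: estimate how often small random graphs happen to be bipartite, since a bipartite graph's MaxCut is simply its edge count and such instances are trivially easy. The graph model, sizes, and edge probability here are illustrative assumptions, and only the Python standard library is used.

```python
# Sketch: how often do small G(n, p) random graphs have "accidental"
# structure (here: bipartiteness) that vanishes at larger sizes?
# A benchmark dominated by such instances would flatter any MaxCut heuristic.
import random
from collections import deque

def random_graph(n, p, rng):
    """Adjacency list of a G(n, p) Erdos-Renyi random graph."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def is_bipartite(adj):
    """BFS 2-coloring; fails iff the graph contains an odd cycle."""
    color = [None] * len(adj)
    for start in range(len(adj)):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def bipartite_fraction(n, p=0.2, trials=500, seed=0):
    """Estimated probability that a G(n, p) sample is bipartite."""
    rng = random.Random(seed)
    return sum(is_bipartite(random_graph(n, p, rng)) for _ in range(trials)) / trials

for size in (5, 8, 12, 16):
    print(size, bipartite_fraction(size))
```

The fraction drops sharply with size, which is the kind of "critical point" in instance structure the comments suggest checking for before trusting small-instance scaling trends.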
