
I read an interesting comment in a paper recently about how weirdly useful maths turns out to be. It mentions that polynomial time doesn't have to mean efficient in reality (e.g., $O(n^{999999999999999999999})$ is polynomial time, but not efficient). Yet isn't it the case that the polynomial-time algorithms we actually encounter are also realistic, with running times of at most $O(n^4)$ or so? I guess my questions are:

  1. Is this surprising?

  2. Are there any examples of algorithms that are polynomial time but not practical?


3 Answers


The comment is wrong. It is very easy to give examples of polynomial-time algorithms that aren't practical at all:

  1. The ellipsoid algorithm for solving linear programs runs in polynomial time but is too slow to be practical. The simplex algorithm, whose worst-case running time is exponential, is preferred in practice.

  2. The AKS primality testing algorithm runs in polynomial time but is too slow to be practical. Instead, randomized polynomial-time algorithms such as Miller–Rabin are used: we sacrifice certainty for performance (see the first sketch after this list).

  3. Fast matrix multiplication algorithms run asymptotically faster than high-school matrix multiplication (both are polynomial), but their hidden constants are so large that they are too slow to be practical. The high-school algorithm is used in practice (see the operation-count sketch after this list).

  4. A similar issue occurs in fast integer multiplication algorithms. The algorithm with the best asymptotic complexity, Fürer's algorithm, is too slow to be used in practice. Instead, relatively simple algorithms are used even for quite large integers.

  5. Big data algorithms need to run in quasilinear time, that is $O(n \log^{O(1)} n)$, to be practical; at that scale even a modest polynomial such as $O(n^2)$ is already too slow.
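
To make item 2 concrete, here is a minimal Python sketch of the Miller–Rabin randomized primality test, the kind of algorithm used instead of AKS in practice. The function name and the default round count are my own illustrative choices, not taken from any particular library.

```python
import random

def is_probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin test: polynomial time, tiny one-sided error.

    Each passing round cuts the error probability by a factor of
    at least 4, so 40 rounds give an error below 4**-40.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)              # modular exponentiation: polynomial in log n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a is a witness: n is definitely composite
    return True                       # probably prime
```

AKS gives the same answer with certainty, also in polynomial time, but its exponent and constants make it orders of magnitude slower on the number sizes that matter in practice.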

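For item 3, a back-of-envelope operation count shows how constants, not exponents, decide practicality. The constants below are rough stand-ins: roughly $2n^3$ operations for the classical algorithm, about $7n^{\log_2 7}$ for Strassen-style recursion, and a deliberately made-up constant of $10^6$ for a Coppersmith–Winograd-style $O(n^{2.373})$ algorithm, whose true constant is enormous and not precisely known.

```python
import math

# Rough operation-count models; the constants are illustrative guesses.
def classical(n):
    return 2 * n ** 3                  # n^3 multiplications plus ~n^3 additions

def strassen(n):
    return 7 * n ** math.log2(7)       # ~n^2.807, with recursion overhead

def galactic(n, c=1e6):
    return c * n ** 2.373              # made-up constant standing in for a huge one

for n in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 6):
    print(f"n={n:>9,}: classical {classical(n):.2e}  "
          f"strassen {strassen(n):.2e}  galactic {galactic(n):.2e}")
```

Even with this charitable guess at the constant, the $O(n^{2.373})$-style algorithm never catches up on any matrix size that fits in a real computer, which is why such algorithms are sometimes called "galactic".
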
These examples show that identifying polynomial time with practicality is not accurate, and that what counts as practical can depend on the circumstances. Researchers in theoretical algorithms feel a need to justify their field of research, and so they ideologically believe in the sentiments expressed in the comment you mention. You shouldn't take such comments too literally.

In fact, many algorithms used in practice are heuristic, and we don't have any estimates of their running time other than empirical results. Such algorithms don't fit into the theoretical framework at all, yet they are very useful in practice. Several (but not all) machine learning algorithms belong to this class with respect to running time alone (not to mention with respect to the quality of their output), as do search algorithms such as A* and alpha-beta, and SAT solvers.

  • This multiplication algorithm asymptotically improves on Fürer's. – user12859, Jan 23, 2016 at 10:36

Question (1) is a tricky one for which I have never seen a good explanation. One suggestion is that we are much more likely to comprehend, and find answers for, simpler problems; the harder ones take special skills, so the chance that the right person works on one and comes up with the answer is low. This is all very hand-wavy, though.

For (2) there are definitely examples; in fact, this question has already been asked and answered! Note also that there is a chain of links to a cstheory.se question and a math.se question in there, which have further examples.


I have seen the following explanation regarding your question (1):

The powers of $n$ in the running time usually arise from for loops or similar constructs. Each for loop, in turn, is there because we, as problem solvers, had an idea of how to break up the problem or of something useful to index over. Since we usually use only a small number of ideas, the powers tend to be small. It's hard to imagine an algorithm with 20 nested for loops where, inside the innermost loop, we actually do something that depends on all 20 indices.
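
To make the loop-counting intuition concrete, here is a toy Python example (entirely illustrative, names my own): counting zero-sum triples with three nested loops. The exponent in the $O(n^3)$ running time is exactly the nesting depth, one factor of $n$ per index we had an idea to range over.

```python
def count_zero_sum_triples(a):
    """Count index triples i < j < k with a[i] + a[j] + a[k] == 0."""
    n = len(a)
    count = 0
    for i in range(n):                 # one idea: pick a first element
        for j in range(i + 1, n):      # a second idea: pick a later element
            for k in range(j + 1, n):  # a third idea: pick a third one
                if a[i] + a[j] + a[k] == 0:
                    count += 1
    return count                       # O(n^3): one factor of n per loop
```

Twenty nested loops would mean twenty independent "ideas" whose combination matters in the innermost body, which is exactly what the explanation above says is hard to imagine.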

This argument appeals to me, but it is quite weak, since it only deals with for loops. For instance, one can simulate any number of nested for loops with recursion, although one can then find good reasons against such a bizarre construction. In any case, I think the argument could be strengthened by analyzing many different cases; here I present just the high-level idea.

