56
$\begingroup$

Why, in computer science, is any complexity that is at most polynomial considered efficient?

For any practical application(a), algorithms with complexity $n^{\log n}$ are way faster than algorithms that run in time, say, $n^{80}$, but the first is considered inefficient while the latter is efficient. Where's the logic?!

(a) Assume, for instance, the number of atoms in the universe is approximately $10^{80}$.
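
As a rough sanity check of this premise, here is a minimal Python sketch (I take $\log$ to be base 2; the printed values are exponents of 2, i.e. idealized step counts, not real running times):

```python
from math import log2

# Compare the two running times on a log scale, to avoid astronomically large numbers:
#   log2(n ** log2(n)) = log2(n) ** 2      (the "inefficient" quasi-polynomial bound)
#   log2(n ** 80)      = 80 * log2(n)      (the "efficient" polynomial bound)
for n in (10**6, 10**9, 10**18, 2**80):
    quasi = log2(n) ** 2     # exponent of 2 for n^(log n)
    poly = 80 * log2(n)      # exponent of 2 for n^80
    print(f"n = {n:<27}  n^log n ~ 2^{quasi:,.0f}    n^80 ~ 2^{poly:,.0f}")
```

Up to the crossover at $n = 2^{80}$, the "inefficient" bound is far smaller.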

$\endgroup$
10
  • 3
    $\begingroup$ I'm not sure I agree with your premise. I think most people would consider $n^{80}$ to be pretty inefficient (though of course that depends on the constants as well and on the problem that is being solved). $\endgroup$
    – sepp2k
    Commented Mar 10, 2012 at 21:03
  • 16
    $\begingroup$ I would consider $n^c$ for any $c > 3$ very inefficient. You have an example of asymptotic analysis taken to an unreasonable extreme. There are no natural algorithms (that I know of) with $n^{80}$ run-time. However, there are natural algorithms with $2^n$ run-time for some problems, and fundamental questions in complexity theory about whether a polynomial algorithm for such problems exists. $\endgroup$
    – Joe
    Commented Mar 10, 2012 at 21:10
  • 6
    $\begingroup$ I think this question shouldn't be downvoted because people disagree with the premise (assuming that was the reason). Up- and downvotes are supposed to be an indication of the quality of the question, not of its content (as long as it is on-topic). $\endgroup$ Commented Mar 10, 2012 at 21:29
  • 8
    $\begingroup$ @RanG. and the full quote is (emphasis mine): Cobham's thesis holds that P is the class of computational problems which are "efficiently solvable" or "tractable"; in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb. $\endgroup$
    – Joe
    Commented Mar 10, 2012 at 21:44
  • 6
    $\begingroup$ In the literature (of theoretical CS), the word "efficient" is a synonym for "polynomial". Maybe this is different for other (more practical) subfields. $\endgroup$
    – Ran G.
    Commented Mar 10, 2012 at 21:50

4 Answers

34
$\begingroup$

Another perspective on "efficiency" is that polynomial time allows us to define a notion of "efficiency" that doesn't depend on machine models. Specifically, there's a variant of the Church-Turing thesis called the "effective Church-Turing thesis" that says that any problem that runs in polynomial time on one kind of machine model will also run in polynomial time on another, equally powerful machine model.
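
A minimal sketch of why this holds, assuming machine model $A$ can simulate machine model $B$ with some polynomial overhead $p(\cdot)$ (true for the standard deterministic models), and the algorithm takes $q(n)$ steps on $B$:

$$ T_A(n) \;\le\; p\big(T_B(n)\big) \;\le\; p\big(q(n)\big), $$

which is again a polynomial in $n$, because a composition of polynomials is a polynomial. Note that a fixed-degree class such as "time $O(n^2)$" is not robust in this sense: a quadratic-overhead simulation turns $n^2$ into $n^4$.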

The effective thesis is a weaker statement than the general C-T thesis, and is 'sort of' violated by both randomized algorithms and quantum algorithms, but it has never been violated in the sense of being able to solve an NP-hard problem in poly time by changing the machine model.

This is ultimately the reason why polynomial time is a popular notion in theoryCS. However, most people realize that this does not reflect "practical efficiency". For more on this, Dick Lipton's post on 'galactic algorithms' is a great read.

$\endgroup$
3
  • 18
    $\begingroup$ A second, pragmatic reason for choosing P is that it is closed under addition, multiplication and exponentiation with constants. This is convenient when composing algorithms/machines; if the building blocks are efficient, so is the result. $\endgroup$
    – Raphael
    Commented Mar 11, 2012 at 11:36
  • $\begingroup$ I'm just curious, does anyone know whether the term "galactic algorithm" is ever used in practice? $\endgroup$ Commented Mar 11, 2012 at 20:21
  • $\begingroup$ It's not that old a term. But I've started using it :) $\endgroup$
    – Suresh
    Commented Mar 11, 2012 at 23:46
27
$\begingroup$

In theory, we care about asymptotic behaviour, and we describe classes of problems and algorithms based on their asymptotic behaviour. The keyword here is asymptotic. $O(n^{80})$ is faster than $O(n^{\log n})$ asymptotically, i.e., starting from $n > 1208925819614629174706176$ (which, by the way, is about 1.2 septillion!), assuming unit constant coefficients and no lower-order terms.
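
To see where that crossover comes from (taking $\log$ to be base 2, which is what the number above assumes):

$$ n^{\log_2 n} \ge n^{80} \iff \log_2 n \ge 80 \iff n \ge 2^{80} \approx 1.2 \times 10^{24}. $$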

In practice, however, attention is paid to both exponents and constant coefficients. Real input sizes cannot grow to septillions, so, yes, $n^{\log n}$ will for all practical purposes be a superior choice to $n^{80}$. Other factors also matter in practice: parallelism, memory access patterns (e.g. locality), and so on.

For example, most libraries for integer multiplication, e.g. GMP, implement a mixture of algorithms and select the practically superior algorithm based on the input size: some asymptotically "inferior" algorithms are faster on certain input sizes and will be selected over the asymptotically optimal ones.
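
A toy sketch of that dispatch pattern (the cutoff and the plain Karatsuba recursion below are purely illustrative; GMP's actual algorithms, thresholds, and tuning are far more elaborate):

```python
SCHOOLBOOK_CUTOFF_BITS = 64   # illustrative threshold, not GMP's actual tuning

def multiply(x: int, y: int) -> int:
    """Pick the practically faster routine based on input size (non-negative ints)."""
    if max(x.bit_length(), y.bit_length()) <= SCHOOLBOOK_CUTOFF_BITS:
        return x * y              # stand-in for the simple O(n^2) "schoolbook" routine
    return karatsuba(x, y)        # asymptotically better O(n^1.585) recursion

def karatsuba(x: int, y: int) -> int:
    n = max(x.bit_length(), y.bit_length())
    if n <= SCHOOLBOOK_CUTOFF_BITS:
        return x * y              # below the cutoff, the simple routine wins in practice
    half = n // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    hi = karatsuba(x_hi, y_hi)
    lo = karatsuba(x_lo, y_lo)
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi - lo
    return (hi << (2 * half)) + (mid << half) + lo

assert multiply(3**500, 7**400) == 3**500 * 7**400   # quick self-check
```

Amusingly, CPython's built-in integer multiplication already performs a switch of this kind internally, falling back to the simple quadratic method for small operands.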

Another example: the fastest known matrix multiplication algorithm is the Coppersmith-Winograd algorithm, which runs in $O(n^{2.3737})$ (there are more recent improvements; more here). However, it has never been implemented, because (1) it's hard to implement and (2) the constant coefficient is gigantic. Linear algebra packages instead use asymptotically less optimal algorithms, such as Strassen's.
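
For a feel of how the same trade-off plays out with matrices, here is a hedged sketch of Strassen's recursion in Python/NumPy; the cutoff is illustrative, the fallback is NumPy's tuned cubic routine, and only square matrices whose side is a power of two are handled:

```python
import numpy as np

CUTOFF = 128   # illustrative crossover; below this, the tuned cubic routine wins

def strassen(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Strassen's O(n^2.807) product of two n-by-n matrices, n a power of two."""
    n = A.shape[0]
    if n <= CUTOFF:
        return A @ B                      # fall back to the highly optimized cubic routine
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)   # 7 recursive products instead of 8 ...
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty((n, n), dtype=M1.dtype)  # ... at the price of many extra additions
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

As with integer multiplication, the asymptotically better recursion only pays off once the input is large enough to amortize its constant-factor and bookkeeping overhead.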

TL;DR: theory cares about asymptotic behaviour in order to compare algorithms as the input size grows arbitrarily large.

$\endgroup$
3
  • 1
    $\begingroup$ They "select inferior algorithm"? Don't you mean "select superior algorithm"? $\endgroup$
    – bitmask
    Commented Mar 11, 2012 at 9:02
  • 1
    $\begingroup$ Another good example is insertion sort vs. quicksort. Insertion sort is $\Theta(n^2)$ while quicksort is $O(n \lg n)$. However, on small inputs, say 10 items, insertion sort is about twice as fast as quicksort! In fact, optimized quicksort uses insertion sort for small arrays. $\endgroup$ Commented Mar 12, 2013 at 10:03
  • $\begingroup$ Why don't we consider asymptotically cubic algorithms "bad" and asymptotically quadratic algorithms "good"? This answer begs the question. $\endgroup$
    – djechlin
    Commented Oct 18, 2015 at 15:24
4
$\begingroup$

This answer looks at the "bigger picture" context of your question. Computer science is a relatively young and somewhat open science, and it doesn't yet have great or even good answers to some basic and fundamental questions. The basic question "what is efficiently computed?" is either accurately or roughly formalized in CS (depending on opinion) as the famous P vs NP problem (or the closely related P vs PSPACE problem), and it is still open after more than four decades, having been introduced by Cook/Levin around 1970 and intensely studied by the world's greatest computer scientists (and many mathematicians are also interested in the problem as fundamental).

So, in other words, even with a rough definition of "efficient" as P time, and with a $1M award (one of the highest-valued scientific prizes) attached to the problem for over a decade, computer science cannot even prove that some problems (close to this borderline) must or must not have efficient (P-time) algorithms. Therefore a definition of "efficient" more precise than P time is not necessary, or even possible, at this time. If/when the P vs NP conjecture is settled one way or the other, a stricter definition of "efficient" may, or presumably will, become possible.

Moreover, one might feel that the P-time definition of "efficient" is a bit "sloppy", and most computer scientists would probably agree; almost all of them think the P vs NP conjecture is of the utmost importance to resolve, to the point that they might even regard this observation as trivial. In other words, so to speak, it's a work in progress; we're working on it. (In fact, mainstream computer scientists, only half-jokingly, routinely refer to the gap and the lack of definitive separations as embarrassing.)

In fact there is even a closely related and significantly stronger conjecture than P vs NP, namely NP vs P/poly, which also cannot be resolved by computer science at this time. It conjectures that NP problems cannot be solved by any "P-sized" circuits, i.e., not even by circuit families that are freed from the restriction of being generated by algorithms/Turing machines.

As for how hard P vs NP may be: there is some reason to think it may be at least as hard as the very old Riemann conjecture in mathematics (now over 1.5 centuries old), because both have carried the same $1M award for over a decade and neither has been solved yet.

So, in other words, precisely defining which algorithms are really "efficient" is actually one of the most important and hardest open problems in theoretical science and mathematics.

In fact, the question of "what is efficiently computed" is even more subtle, because there is a variant of the Church-Turing thesis called the P-time CT thesis, and it is not known whether quantum computing actually violates it. Shor's breakthrough result of P-time quantum factoring is considered a dramatic twist in this research. In other words, the problem of what is efficiently computed plausibly descends all the way down to deep physics principles, and relates to whether quantum computing can compute more efficiently than classical computation, which is itself a generally open problem in theoretical CS and advanced physics.

So one can even add that P vs NP and the question of efficient computing may be of crucial or fundamental importance not only to CS and mathematics but also to physics.

[1] P vs NP problem, wikipedia

[2] Millennium Prize Problems

[3] P/Poly class, wikipedia

[4] Shor's algorithm

$\endgroup$
1
  • $\begingroup$ correction: P vs Pspace, not P vs ExpTime $\endgroup$
    – vzn
    Commented Sep 10, 2012 at 22:25
-2
$\begingroup$

Polynomial-time algorithms are considered efficient only in comparison with the hardest non-polynomial-time problems, especially the so-called NP-complete ones. See the image: Euler diagram for the P, NP, NP-complete, and NP-hard sets of problems.

$\endgroup$
1
  • 1
    $\begingroup$ "in comparison with the hardest non-polynomial time especially the so called NP-Complete" -- NP-complete problems are not known to be non-polynomial, and they are certainly not the hardest. $\endgroup$
    – Raphael
    Commented Nov 27, 2017 at 21:38
