
From the point of view of asymptotic behavior, what is considered an "efficient" algorithm? What is the standard, and the reasoning, for drawing the line at that point? Personally, I would think that anything I might naively call "sub-quadratic", i.e. $f(n) = o(n^2)$, such as $n^{1+\epsilon}$, would be efficient, and anything which is $\Omega(n^2)$ would be "inefficient". However, I've heard algorithms of any polynomial order called efficient. What's the reasoning?


6 Answers


That depends on context. In theoretical computer science, usually every polynomial-time algorithm is considered "efficient". In approximation algorithms, for example, a runtime of $n^{1/\epsilon^{1/\epsilon}}$ would be considered efficient, even though it won't be usable in practice for any reasonable value of $\epsilon$. An algorithm for SAT that runs in time $n^{2^{100}}$ would be an amazing breakthrough.
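
To get a feel for how fast that exponent blows up, here is a quick back-of-the-envelope sketch (my own illustration, not from any particular paper), computing the exponent of $n^{1/\epsilon^{1/\epsilon}}$ for a few values of $\epsilon$:

```python
# The exponent 1/eps^(1/eps) explodes as eps shrinks, yet the runtime
# n^(1/eps^(1/eps)) still counts as "polynomial" for every fixed eps.
for eps in (1.0, 0.5, 0.25, 0.2):
    exponent = 1 / eps ** (1 / eps)
    print(f"eps = {eps}: runtime ~ n^{exponent:.0f}")
# eps = 0.2 already gives n^3125 -- efficient on paper only.
```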

In classic algorithmics, i.e. algorithms from the '80s and before, runtimes below $n^3$ or so (think matrix multiplication, min-cost matching, flows, linear programming) are considered efficient. They are still considered efficient by most people, I'd say. Of course, an $n^2$ algorithm is not considered efficient if an $n\log n$ algorithm is known, as is the case for sorting, for example.
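
As a concrete (and admittedly unscientific) illustration of that last point, here is a minimal timing sketch contrasting a textbook $n^2$ insertion sort with the built-in $O(n \log n)$ sort; the input size is arbitrary:

```python
import random
import time

def insertion_sort(a):
    # Classic Theta(n^2) sort: shift larger elements right, insert the key.
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

data = [random.random() for _ in range(10_000)]

t0 = time.perf_counter()
insertion_sort(data)
t1 = time.perf_counter()
sorted(data)  # Timsort, O(n log n)
t2 = time.perf_counter()

print(f"insertion sort: {t1 - t0:.2f} s, built-in sort: {t2 - t1:.4f} s")
```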

Nowadays there is a trend towards sublinear algorithms and streaming algorithms that are able to deal with terabytes of data. Try using matrix multiplication to compute the PageRank of all pages in Google's index: that won't work.
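
A toy version of the streaming idea, just to make the model concrete (this hypothetical function is my own sketch, not how PageRank is computed): process the data in one pass, keeping only a constant-size summary, so the input never has to fit in memory.

```python
def streaming_mean(stream):
    """One-pass mean with O(1) memory: the streaming model in miniature."""
    count, total = 0, 0.0
    for x in stream:
        count += 1
        total += x
    return total / count if count else 0.0

# Works directly on a generator -- no array of n elements is ever built.
print(streaming_mean(float(x) for x in range(10_000_000)))
```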

Of course, while certainly useful, the asymptotic runtime of an algorithm doesn't tell the whole story. There are algorithms with good asymptotic runtime but constants so huge that they effectively can't be used. Ever. Lipton calls them galactic algorithms. Robert Sedgewick even states that worst-case bounds are "often useless for prediction, often useless for guarantees" and that "worst-case analysis is useless for predicting performance" in his talk Putting the Science Back Into Computer Science.

  • In short: efficient is what solves your problem in a timeframe that suits you. – Raphael, Mar 12, 2013
  • This doesn't really necessitate its own answer, but BPP (the class of problems decidable in polynomial time when randomness is available, matching the answer above) is often considered efficient as well. In other words, the above is right, but computers are generally allowed to use randomness in their computations. One of the most important practical uses of randomness is hashing. – SamM, Mar 12, 2013
  • Maybe "efficient" isn't really the right terminology in the first place? I was just reviewing one of my calculus books, and the author calls polynomial runtimes "tractable" and exponential runtimes "intractable". – Robert S. Barnes, Mar 13, 2013
  • @RobertS.Barnes: Different words, same problem. – Raphael, Mar 13, 2013

My 2 cents from the angle of distributed algorithms: when looking at large-scale networks (P2P, social networks, etc.), a distributed algorithm is considered to be efficient if its running time is $O(\log^c n)$ for some constant $c>0$ and the algorithm uses messages of $O(\log n)$ bits. Note that the requirement on message sizes is usually given even more weight than the running time, in particular for "global" problems that have a larger lower bound on the running time, e.g. distributed MST.
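
To put rough numbers on that definition (my own arithmetic, picking $c = 2$ arbitrarily): even for a billion-node network, polylogarithmic rounds and logarithmic message sizes stay tiny.

```python
import math

n = 10**9                            # network size
rounds = math.log2(n) ** 2           # O(log^2 n) rounds, with c = 2
msg_bits = math.ceil(math.log2(n))   # O(log n) bits per message
print(f"n = 10^9: about {rounds:.0f} rounds, {msg_bits}-bit messages")
```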


The reasoning is that, from the perspective of asymptotic behavior, a polynomial rate of growth is trivially smaller than any super-polynomial rate of growth. In practice, a polynomial-time algorithm runs much faster than a super-polynomial-time algorithm once the input size grows.
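
A small table makes the gap visible (illustrative numbers only; the factor of $1000$ is there to handicap the polynomial, which still wins):

```python
# Even a heavily handicapped polynomial is overtaken by an exponential.
for n in (10, 20, 50, 100):
    print(f"n = {n:3}: 1000*n^2 = {1000 * n**2:>11,}   2^n = {2**n:,}")
```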

Of course, no one would say that an algorithm with a polynomial complexity of, for instance, $O(n^{2000})$ is "efficient"; but in practice, the complexity of most algorithms rarely exceeds $O(n^5)$.

Practical considerations may even lead you to say that $O(n^2)$ is inefficient for processing an extremely large input. This is why, on one side, we try to prove lower bounds and to design sequential algorithms matching those lower bounds, and, on the other side, we resort to parallel algorithms. For some problems, if you are willing to accept a probabilistic guarantee, you may even take advantage of sub-linear time algorithms (extremely fast, but failing to provide a correct answer with some very small probability).
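
For instance, here is a toy sketch of the sub-linear, probabilistic idea (the function and parameters are hypothetical): estimate the fraction of ones in a huge bit array by inspecting only a random sample, trading a small error probability for a running time independent of the input size.

```python
import random

def estimate_fraction(bits, samples=1_000):
    """Sub-linear, probabilistic estimate: inspects only `samples` random
    positions, so the runtime does not grow with len(bits)."""
    hits = sum(bits[random.randrange(len(bits))] for _ in range(samples))
    return hits / samples

bits = [1] * 300_000 + [0] * 700_000   # true fraction of ones: 0.3
print(estimate_fraction(bits))         # close to 0.3 with high probability
```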


In theory, an algorithm is said to be efficient if its worst-case running time is bounded by a polynomial in its input length. The reasoning is that polynomials have nice closure properties: adding, multiplying, and composing polynomials all yield polynomials, which is exactly what you need when reducing problems to one another.
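
To spell out the closure property for composition (a standard argument, sketched here with generic exponents $a$ and $b$): if algorithm $A$ runs in time $O(n^a)$, its output has size at most $O(n^a)$, so feeding it to an algorithm $B$ that runs in time $O(m^b)$ on inputs of size $m$ gives a total running time of

$$O(n^a) + O\big((n^a)^b\big) = O(n^{ab}),$$

which is again polynomial. This is exactly what makes polynomial-time reductions compose.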

Of course, the gap between polynomial and exponential grows very large as the input length increases, so polynomial-time algorithms are far better. In practice, a polynomial-time algorithm may still take a long time to terminate, but it might be an optimal algorithm (the best possible), in which case I would say it is efficient.

  • While I can understand that if something is the fastest known algorithm for a particular problem then it could be considered "efficient" from that point of view, it's hard for me to think of anything which runs in polytime as efficient. :-) – Robert S. Barnes, Mar 12, 2013
  • For polynomial runtimes, "efficient" is just a word, and a misleading one at that. – Raphael, Mar 12, 2013
  • @Raphael Maybe "tractable" is a better word to be using...? – Robert S. Barnes, Mar 13, 2013
  • @RobertS.Barnes: Not much better, imho. "Tractable" is every bit as relative as "efficient". – Raphael, Mar 13, 2013

Some problems are easy, some are hard. Whether an algorithm is "efficient" depends on how good it is compared to the inherent complexity of the problem. If you find an algorithm that factors any $n$-digit number in $O(n^3)$ operations, and I find an algorithm that sorts $n$ numbers in $O(n^2)$ operations, then your algorithm is more efficient (because it beats anything known to mankind by a huge factor, while mine is as slow as you would expect from an absolute beginner).

  • Interesting point of view, even though I disagree. Anyway, you want $\Theta$-s there. – Raphael, Jun 21, 2016

The exact (time) cost of solving a problem depends on the computation model in use: do you mean Turing machine steps, instructions of your PC, function calls in the lambda calculus, ...? It turns out that the usual computation models can simulate each other at a cost related by polynomials: e.g. $N$ CPU instructions can be simulated in $p(N)$ steps of a Turing machine, for some polynomial $p$. Separating "can be done in polynomial time" from "can't be done in polynomial time" (the latter informally, but quite wrongly, called "exponential time") thus allows one to talk about efficiency independently of the exact computation model. Any algorithm whose running time is not bounded by a polynomial is clearly useless beyond very small instances.
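
In symbols (a one-line version of that argument): if a program takes $T(n)$ steps in one model and the simulation overhead is a polynomial $p$, the simulated run takes $p(T(n))$ steps, and a polynomial of a polynomial is a polynomial:

$$T(n) = O(n^a),\quad p(N) = O(N^b) \implies p(T(n)) = O(n^{ab}).$$

So "polynomial time" survives the change of computation model.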

If you take the usual CPU computation model, typical algorithms in use have execution times bounded by low-degree polynomials in the input size. The available proof techniques allow (somewhat) distinguishing whether a runtime is bounded by a polynomial or not; finer distinctions are very hard to come by. So when a polynomial-time algorithm is called "efficient", this means low degree in practice.

And in the end, this is a definition; they could have been called "xglerby algorithms" for that matter. Naming them "efficient" (with the caveat that in colloquial usage we typically expect the polynomial's degree not to be too large) just leans on the normal meaning of the term.

