
For years, I've been told (and I've been advocating) that problems that can be solved in polynomial time are "easy". But now I realize that I don't know the exact reason why this is so.

Intuitively, I can guess at the relation between problems in P and easy problems, and experience tells me that problems in P are indeed easy in practice. However, **using only mathematical terms and theoretical arguments, why is P the class of easy problems?**

  • Probably for historical reasons, and because it has some convenient mathematical properties, such as being closed under composition. Nowadays people are also interested in quasilinear-time algorithms. Commented Aug 15, 2015 at 12:53
  • "Easy" is not really the same as "efficient", but it is sometimes used as the opposite of "hard", another common term in complexity theory. FYI, this is nearly the same as "Why is PTime considered efficient?"
    – vzn
    Commented Aug 15, 2015 at 18:03
  • The question you pose in bold has no answer, I guess. The rest is a duplicate.
    – Raphael
    Commented Aug 16, 2015 at 12:01

2 Answers


I don't think it is possible to argue strictly in mathematical or theoretical terms only; much of it comes down to heuristics.
Of course, an algorithm with runtime $O(n^{1000})$ or something similarly ridiculous would not be "easy", and in most practical cases it would be worse than an algorithm with modest exponential growth. Yet such algorithms just don't happen in practice, and when an algorithm initially has a very high polynomial runtime, more often than not it can be improved dramatically, which makes the problem a lot less "difficult".
As an example, take the AKS algorithm (which tests whether a given number is prime). It started out at $O(n^{12})$ (if I recall correctly), making it one of the polynomial-time algorithms with the largest exponents, but it was improved to $O(n^{6+\epsilon})$ within a fairly short time. Very fast probabilistic algorithms for the primality problem exist as well, which shows that proving primality is a lot easier than, say, your favorite NP-complete problem (assuming $P \neq NP$).
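
To make the last point concrete, here is a minimal sketch of a Miller–Rabin-style randomized primality test (an added illustration, not part of the AKS paper; the choice of 40 rounds is arbitrary). Its running time is polynomial in the number of digits of the input, which is what "very fast probabilistic algorithm" means here in practice:

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin test: False means definitely composite, True means
    probably prime (error probability at most 4**(-rounds))."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)        # modular exponentiation, poly-time in the bit length
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a is a witness that n is composite
    return True

print(is_probable_prime(2**127 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**128 + 1))  # False: the Fermat number F_7 is composite
```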

Of course, the above could all turn out to be completely wrong: maybe there exists a polynomial-time algorithm for every NP-complete problem that does not have a "practical" exponent. Knuth advocates that scenario; you might want to read question 17 here.

  • Excellent interview. A "must read" for CS students. Thanks for sharing. Commented Aug 15, 2015 at 13:56
  • Such algorithms definitely happen in practice. I learned 10 minutes ago that one semi-popular term is "galactic algorithm." The theory of approximate solutions to NP-hard problems is entirely based on producing these polynomials. If $\epsilon$ is your error margin, then you can solve bin packing in $O(n^{1/(2\epsilon^2)})$. Try something reasonable like $\epsilon = 5\%$ and you get an $O(n^{200})$ polynomial. There is something called an FPTAS, which guarantees $\mathrm{poly}(n, 1/\epsilon)$; knapsack has an FPTAS (see the sketch after these comments), but these are the exception, not the rule.
    – djechlin
    Commented Oct 18, 2015 at 15:30
  • It should also be pointed out that this branch of CS is very applicable in the real world and often driven by industrial problems. E.g., vehicle routing, which Uber and Amazon same-day service are actively researching, is NP-hard (it reduces from TSP). Bin packing is similarly important in the real world. And if you've ever tried burgling a museum at night, you know all about knapsack.
    – djechlin
    Commented Oct 18, 2015 at 15:34
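
Following up on the FPTAS mentioned in the comments above, here is a minimal sketch of the classic value-scaling FPTAS for 0/1 knapsack (an added illustration with arbitrary toy data, not something taken from the answer or the comments themselves):

```python
def knapsack_fptas(values, weights, capacity, eps=0.1):
    """Value-scaling FPTAS for 0/1 knapsack.

    Returns an achievable value that is at least (1 - eps) times the optimum,
    assuming each item fits in the knapsack on its own; the running time is
    polynomial in n and 1/eps (roughly O(n^3 / eps))."""
    n = len(values)
    K = eps * max(values) / n              # scaling factor
    scaled = [int(v // K) for v in values]

    # dp[p] = minimum weight needed to reach scaled value exactly p.
    total = sum(scaled)
    INF = float("inf")
    dp = [0] + [INF] * total
    for i in range(n):
        for p in range(total, scaled[i] - 1, -1):
            dp[p] = min(dp[p], dp[p - scaled[i]] + weights[i])

    best = max(p for p in range(total + 1) if dp[p] <= capacity)
    return best * K                        # lower bound on the value actually achieved

# Toy instance: prints 220.0, which here happens to be the exact optimum.
print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1))
```

The point is the one the comment makes: the running time here is a genuine polynomial in $n$ and $1/\epsilon$, unlike the $n^{1/(2\epsilon^2)}$-style bounds.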

I most often see complexity applied to optimization problems. There are then two measures for the "size" of a problem: the number $n$ of variables for which optimal values must be determined, and $L$, the number of bits used to represent the problem instance.

A problem in P then has an operation count that is bounded above by a polynomial in $n$ and $L$: there exist constants $c$, $j$ and $k$ such that, for $n$ and $L$ sufficiently large, the number of operations needed to solve the problem is at most $cn^jL^k$. On the other hand, there are provably exponential problem classes (which, at present, do not include NP), containing problems whose operation count exceeds $Ce^{L+n}$ for some constant $C$. If $n$ and $L$ are large enough, this dominates $cn^jL^k$. Of course, $j$ and $k$ may be large, e.g. $j=1000$, in which case the distinction is only theoretical.
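
As a quick sanity check on that claim, the small script below (with arbitrary illustrative constants $c = C = 1$, $j = 1000$, $k = 2$ and the simplification $L = n$, none of which come from the answer) compares the logarithms of the two bounds and reports where the exponential one takes over:

```python
import math

# The two bounds from the paragraph above, compared on a log scale to avoid overflow.
def log_poly_bound(n, L, c=1.0, j=1000, k=2):
    return math.log(c) + j * math.log(n) + k * math.log(L)   # log(c * n**j * L**k)

def log_exp_bound(n, L, C=1.0):
    return math.log(C) + n + L                               # log(C * e**(n + L))

# First n (taking L = n for simplicity) at which the exponential bound
# overtakes the polynomial one, despite the huge exponent j = 1000.
n = 2
while log_exp_bound(n, n) <= log_poly_bound(n, n):
    n += 1
print(n)   # roughly 4200 with these constants
```

Of course, at that crossover both bounds are astronomically large, which is precisely the sense in which the distinction can be purely theoretical.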

However, experience has shown that once a problem or problem class was shown to be in P, the exponents were quickly lowered. For instance, in 1979 Khachiyan described an algorithm to solve linear programs with complexity $O(n^6L^2)$; just five years later, Karmarkar lowered that to $O(n^{3.5}L^2)$. Khachiyan's method is not really useful for computations, but Karmarkar's method is. Hence there is a general expectation that once a problem has been proven to be in P, even with a large exponent on $n$, algorithmic improvements and new approaches will lower the exponent. In addition, no problem in P has been identified to date whose exponent truly is large. (I seem to remember reading about problems for which $j=12$.)
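
For a sense of how routine linear programming has become in practice, here is a minimal example using SciPy's `linprog` on an arbitrary toy LP (an added illustration, not from the original answer; the default HiGHS backend offers both simplex and interior-point solvers, the latter descending from the line of work Karmarkar started):

```python
import numpy as np
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = np.array([-3.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal point [4, 0] and maximized objective value 12
```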

  • The time hierarchy theorem contradicts your last sentence. Commented Aug 16, 2015 at 5:03
