41
$\begingroup$

Linear programming (LP) is in P and integer programming (IP) is NP-hard. But since computers can only manipulate numbers with finite precision, in practice a computer effectively uses integers (or ratios of integers) when solving a linear program. Because of this, shouldn't LP and IP be in the same complexity class?

$\endgroup$
9
  • 7
    $\begingroup$ Adding a little to jmite's answer: There are many cases in which integrality constraints make the problem much harder. For example the fractional knapsack problem can be solved in polynomial time, though the integer knapsack problem is NP-Hard. So this is not only something that is true for LP and IP. $\endgroup$ Commented Mar 12, 2015 at 17:46
  • 8
    $\begingroup$ Even if we consider that computers perform operations with integers, that does not mean that the returned solution is an integer; it can be rational, i.e., a ratio of two integers. And that gives a lot more flexibility. And of course, we cannot always convert a rational solution into a feasible solution for the IP. In general, the IP will have more constraints on the variables than just asking for an integral solution. Think of a $0,1$ integer program. $\endgroup$
    – megas
    Commented Mar 12, 2015 at 18:14
  • 1
    $\begingroup$ It's not that hard to manipulate numbers with infinite precision if you want to, especially when they're rational. Finite precision is merely an optimization to reduce runtimes. $\endgroup$
    – user5386
    Commented Mar 12, 2015 at 23:36
  • 3
    $\begingroup$ @Hurkyl "It's not that hard to manipulate numbers with infinite precision if you want to, especially when they're rational." There is a strict subset of the real numbers called the computable numbers, which includes rationals + numbers like sqrt(2) etc...and is defined as the set of numbers computable by a Turing machine. Those which are not included there, cannot, by definition, be manipulated by a computer. $\endgroup$ Commented Mar 13, 2015 at 0:03
  • 1
    $\begingroup$ All 3 answers are informative. I don't know which one counts as "The answer" ? $\endgroup$ Commented Mar 13, 2015 at 17:21

4 Answers

20
$\begingroup$

I can't comment since that requires 50 rep, but there are some misconceptions being spread about, especially in Raphael's comment: "In general, a continuous domain means there is no brute force (and no clever heuristics to speed it up)."

This is absolutely false. The key point is indeed convexity. Barring some technical constraint qualifications, minimizing a convex function (or maximizing a concave function) over a convex set is essentially trivial, in the sense of polynomial time convergence.

Loosely speaking, you could say there's a correspondence between convexity of a problem in "mathematical" optimization and the viability of greedy algorithms in "computer science" optimization. This is in the sense that they both enable local search methods. You will never have to back-track in a greedy algorithm and you will never have to regret a direction of descent in a convex optimization problem. Local improvements on the objective function will ALWAYS lead you closer to the global optimum.

This is not so in the non-convex case. Here there may be a single global minimum, but also several local minima that a local descent algorithm will be drawn into, in the same way greedy algorithms are when applied to NP-hard problems. Sometimes they find the true optimum; most of the time they do not.
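To make the correspondence concrete, here is a minimal sketch (my own illustration, not from the answer): the same greedy local-descent routine reaches the global minimum of a convex function from any starting point, but can stall in a local minimum of a non-convex one. The two example functions are arbitrary choices.

```python
def local_descent(f, x, step=0.01, iters=20000):
    """Greedy local search: accept any small step that improves f."""
    for _ in range(iters):
        for dx in (step, -step):
            if f(x + dx) < f(x):
                x += dx
                break
    return x

convex = lambda x: (x - 3.0) ** 2                  # unique minimum at x = 3
bumpy  = lambda x: (x ** 2 - 1.0) ** 2 + 0.1 * x   # local min near +1, global min near -1

print(local_descent(convex, -10.0))  # ends near 3, regardless of the start
print(local_descent(bumpy,  10.0))   # ends near +1, the *local* minimum
```

Started from x = 10, the descent on the bumpy function slides into the basin around +1 and never sees the better minimum near -1; no local information tells it to climb back out.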

$\endgroup$
29
$\begingroup$

The short answer: because you can use integers to simulate Booleans and thereby encode SAT, but once you drop the integrality restriction you can no longer simulate SAT. You'll still get a feasible answer, but it no longer carries any meaning in terms of whatever SAT instance you were trying to encode.

The tough answer is that we don't know that they aren't in the same complexity class. Nobody has a proof that $P \neq NP$. Understanding the deeper reason these two problems are so different would amount to understanding why the complexity classes are different, and we don't.
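A sketch of the encoding the answer alludes to, on a hypothetical tiny instance (the standard 0-1 reduction): each Boolean variable becomes a 0/1 variable, and a clause like $(x_1 \lor \neg x_2)$ becomes the linear constraint $x_1 + (1 - x_2) \ge 1$. With integrality the constraints express satisfiability exactly; without it, the all-0.5 point satisfies every clause constraint while saying nothing about the formula.

```python
from itertools import product

# Clauses as lists of signed literals: 1 means x1, -2 means NOT x2, etc.
clauses = [[1, -2], [-1, 2], [1, 2]]   # hypothetical tiny instance

def clause_value(assignment, clause):
    # Left-hand side of the linear constraint built from one clause.
    return sum(assignment[abs(l) - 1] if l > 0 else 1 - assignment[abs(l) - 1]
               for l in clause)

# 0/1 (integer) feasibility: exactly the satisfying assignments.
integral = [a for a in product([0, 1], repeat=2)
            if all(clause_value(a, c) >= 1 for c in clauses)]
print(integral)   # [(1, 1)]

# LP relaxation: x1 = x2 = 0.5 satisfies every constraint but tells us
# nothing about the SAT instance.
relaxed = (0.5, 0.5)
print(all(clause_value(relaxed, c) >= 1 for c in clauses))   # True
```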

$\endgroup$
1
  • 1
    $\begingroup$ This is in fact the correct answer, and @sashatheNoob should recognize it as such. Integer programming is NP hard because you can use it for SAT. We don't know if integer programming is harder than linear programming, because we don't know if P = NP or if P $\neq$ NP. $\endgroup$
    – vy32
    Commented Mar 27, 2020 at 14:03
22
$\begingroup$

The reason linear programming is "efficient" is that the solution space may be represented by a single convex polyhedron. If one is trying to find the "highest" vertex on that polyhedron (one may apply a linear transform to any linear programming problem to make "height" correspond to the quantity to be maximized), then from any vertex one may travel along edges to the highest point without ever having to go "downhill". What makes integer programming "hard" is that there isn't a continuous solution space, but there are instead many disjoint solution spaces, and no way to work incrementally toward the optimal solution.
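A tiny numeric illustration of both points (my own example, not the answer's): the LP optimum of a linear objective is attained at a vertex of the polyhedron, while the integer-feasible points form a disjoint scatter inside it, and rounding the LP vertex can leave the feasible region entirely.

```python
from itertools import product

def feasible(x, y):
    # Polyhedron: 2x + 2y <= 3, x >= 0, y >= 0
    return x >= 0 and y >= 0 and 2 * x + 2 * y <= 3

# LP: maximize x + y. The optimum lies at a vertex of the polyhedron;
# here the vertices are (0, 0), (1.5, 0), (0, 1.5).
vertices = [(0, 0), (1.5, 0), (0, 1.5)]
lp_opt = max(x + y for x, y in vertices if feasible(x, y))
print(lp_opt)   # 1.5

# IP: the integer-feasible points are a disjoint scatter; best value is smaller.
ip_opt = max(x + y for x, y in product(range(3), repeat=2) if feasible(x, y))
print(ip_opt)   # 1

# Rounding the LP vertex (1.5, 0) up gives (2, 0), which is infeasible:
print(feasible(2, 0))   # False
```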

$\endgroup$
6
  • 2
    $\begingroup$ The keyword here is "convexity" $\endgroup$
    – cody
    Commented Mar 14, 2015 at 2:14
  • 1
    $\begingroup$ Isn't this hill climbing the simplex method, of which no variant is known to be polynomial in the worst-case? $\endgroup$
    – jbapple
    Commented Mar 14, 2015 at 4:52
  • 1
    $\begingroup$ There are plenty of problems that are easier to solve in discrete spaces (which allow discrete searches) than in continuous spaces. $\endgroup$
    – Raphael
    Commented Mar 14, 2015 at 13:11
  • $\begingroup$ @Raphael: can you give some examples of such problems? I've been thinking about this and can't come up with many. $\endgroup$
    – cody
    Commented Mar 17, 2015 at 19:37
  • $\begingroup$ @cody Finding maxima/minima of (one-dimensional) functions, for instance. See here for a cute example which becomes amenable only after noting that we can reduce an infinite search space to a finite one. Note that LPs are kind of special that way: by noting that we only need to consider the corners of a polyhedron, we get a finite search space. In general, a continuous domain means there is no brute force (and no clever heuristics to speed it up). $\endgroup$
    – Raphael
    Commented Mar 17, 2015 at 21:06
3
$\begingroup$

The other answers are correct, but I find them a bit technical. Suppose you have row-reduced (eliminated) a matrix, are looking for any feasible solution, and the matrix looks like this:

column x1 x2 x3 x4 x5 x6 | solution
-----------------------------------
       1           1  1  | 3
          1              | 1
             1     1     | 2
                2  1  1  | 1  

In linear programming, you can now set the non-pivot columns (x5, x6) to 0, set x4 to 0.5, and read off a trivial solution. In integer programming, you cannot just set the non-pivot columns to 0; finding a solution is harder (integer programming is NP-hard in general). Note also that the solutions are in $\mathbb{Q}$, so this is not directly related to finite/infinite precision.
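A small sketch of the difference, assuming nonnegative variables: over the rationals, zeroing the non-pivot columns immediately yields a solution of the system above, while over the integers that assignment would force x4 = 1/2, so one has to search instead.

```python
from fractions import Fraction
from itertools import product

def residuals(x):
    # The four eliminated equations from the table above.
    x1, x2, x3, x4, x5, x6 = x
    return (x1 + x5 + x6 - 3, x2 - 1, x3 + x5 - 2, 2 * x4 + x5 + x6 - 1)

# Rational case: set the non-pivot columns x5, x6 to 0 and read off the rest.
rational = (3, 1, 2, Fraction(1, 2), 0, 0)
print(all(r == 0 for r in residuals(rational)))   # True

# Integer case: zeroing x5, x6 is not allowed (it forces x4 = 1/2),
# so we search over integer assignments instead.
integer_sols = [x for x in product(range(4), repeat=6)
                if all(r == 0 for r in residuals(x))]
print(integer_sols[0])   # (2, 1, 1, 0, 1, 0)
```

Here an integer solution happens to exist, but it is found by search, not by reading it off the eliminated form; in general that search is what makes the integer problem hard.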

$\endgroup$
2
  • $\begingroup$ I am worried that the above answer is not that accurate in the sense that NP-hard is actually a property with respect to "how the difficulty grows with respect to the growth of input size." That said, the example given is great. $\endgroup$
    – FAN
    Commented Sep 6, 2020 at 22:00
  • $\begingroup$ @AustenFAN The intention here is not to show the NP-hardness proof, but to show the intuition that linear programming cannot be NP-hard while Integer programming could be. $\endgroup$ Commented Sep 14, 2020 at 14:20
