24

Background

A while ago my team was implementing an interior point LP solver and we came across the following conundrum:

Is there a polynomial-time algorithm to find a feasible starting point in linear programming? If so, what is the algorithm?

Of course, it is a well-established result in the literature that LPs can be solved in polynomial time, and we know from LP theory that the feasibility problem is as hard as solving the LP itself.

However, looking deeper into the algorithms (as we had to implement them), we noticed that everything we could find either (i) assumed that a feasible starting point is already known, or (ii) required using an NP-complete/NP-hard method to locate the feasible point with a guarantee (the guarantee part is important).

Even though this is not a big issue in practice, because the algorithms work pretty well, we were left with a contradiction between what we knew from theory and what we could find in the literature (no one seems to mention this explicitly).

I mentioned this in a couple of answers (namely here and here) and it naturally sparked some controversy, so I think it's an interesting question. It is of course very possible that at the time I missed/misunderstood something about the theoretical complexity of Phase I, so I am keen to know what you guys think!

Note: everything that follows assumes a general LP problem (inequality+equality constraints).

What we know

  1. Minimising the slack error during interior point is not guaranteed to get us to the interior of the feasible region.
  2. The ellipsoid method requires a feasible starting point.
  3. Phase I in the Two-Phase method (whose purpose is to identify a feasible basis) requires Simplex iterations, hence is not guaranteed to run in polynomial time (especially if there is no feasible point at all).
  4. All algorithms we could find were based on either Simplex or Newton's method, neither of which has polynomial worst-case complexity.

Why the worst-case for Newton's method for interior point is not polynomial

There are two main reasons for this. First, Newton's method is only guaranteed to converge when started in a neighborhood of the solution, which we cannot guarantee in the general case. Second, Newton's method is not particularly robust: its behaviour depends not only on the quality of the derivatives but also on the step size. Therefore, the only way to always solve the Newton system in practice is to use a higher-complexity method such as Interval Newton, or to solve a global optimisation problem.
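As a minimal illustration of the step-size issue (my own toy example, not taken from any IPM): the pure Newton iteration diverges on the smooth one-dimensional equation $\arctan(x)=0$ when started at $x_0=2$, while a damped step converges.

```python
import numpy as np

def newton(f, fprime, x0, alpha=1.0, tol=1e-10, max_iter=50):
    """Newton iteration with a fixed damping factor alpha (alpha=1 is the pure method)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - alpha * step
        if abs(x) > 1e6:          # iterates are blowing up: report divergence
            return x
        if abs(step) < tol:       # converged
            return x
    return x

# arctan(x) = 0 has its only root at x = 0, yet the pure Newton step
# overshoots from x0 = 2 and the iterates diverge.
f, fprime = np.arctan, lambda x: 1.0 / (1.0 + x**2)
print(newton(f, fprime, 2.0))             # diverges
print(newton(f, fprime, 2.0, alpha=0.5))  # damped steps converge to ~0
```

Guaranteeing convergence in general requires safeguards (line searches, trust regions, or interval methods), which is exactly the robustness cost mentioned above.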

Characteristics of the polynomial time algorithm

Considering the above, if said algorithm exists I believe that it must have the following characteristics:

  1. It must always give a feasible point/prove no feasible point exists.
  2. It must not require a feasible starting point (otherwise it's a chicken and egg problem).
  3. It must not rely on Simplex pivots.
  4. It must be possible to implement this algorithm in a way that works in polynomial time in practice (see regular Newton vs Interval Newton).
  • Do you need a point in the (relative) interior, or just any feasible point? – Commented Aug 27, 2019 at 12:42
  • Any feasible point would do. – Commented Aug 27, 2019 at 12:52

5 Answers

22

It is not true that the Two-Phase method requires Simplex iterations; that is just the common way to do it.

Let's assume we have a linear program with $n$ variables and $m$ constraints.

Step 1) Convert this LP into standard form by splitting every free (unbounded) variable into two $\geq 0$ variables, making sure $b$ is non-negative (by multiplying the rows that violate this by $-1$), and introducing slack variables for all inequalities. Let's assume that this gives us the program:

$\begin{array}{lll} \max & cx \\ \mbox{s.t.} & Ax & = b \\ & x & \geq 0\end{array}$

Note that this program has at most $2n+m$ variables and $m$ constraints.
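For concreteness, a minimal NumPy sketch of this conversion (the function name and interface are mine, and it assumes all original variables are free):

```python
import numpy as np

def to_standard_form(c, A_ub, b_ub, A_eq, b_eq):
    """Rewrite  max cx  s.t.  A_ub x <= b_ub,  A_eq x = b_eq,  x free
    as  max c'x'  s.t.  A'x' = b',  x' >= 0,  with  b' >= 0."""
    c = np.asarray(c, float)
    A_ub, A_eq = np.asarray(A_ub, float), np.asarray(A_eq, float)
    b_ub, b_eq = np.asarray(b_ub, float), np.asarray(b_eq, float)
    m_ub = len(b_ub)

    # Split each free variable into x+ - x-: columns [A | -A].
    A = np.vstack([np.hstack([A_ub, -A_ub]), np.hstack([A_eq, -A_eq])])
    b = np.concatenate([b_ub, b_eq])
    # One slack per inequality row, none for the equality rows.
    A = np.hstack([A, np.vstack([np.eye(m_ub), np.zeros((len(b_eq), m_ub))])])
    c_std = np.concatenate([c, -c, np.zeros(m_ub)])

    # Make b non-negative by flipping the sign of violating rows. (Flipping
    # after adding slacks may leave a -1 slack coefficient in such a row;
    # that is harmless here, since the artificial variables added in Step 2
    # supply the identity block.)
    neg = b < 0
    A[neg] *= -1
    b[neg] *= -1
    return c_std, A, b
```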

Step 2) Construct the following linear program where we introduce $m$ artificial variables as the vector $y$ and use identity matrix $I$:

$\begin{array}{lll} \min & y \\ \mbox{s.t.} & Ax + Iy & =b \\ & x,y & \geq 0\end{array}$

Step 3) We now know for certain that the solution $x=0$, $y=b$ is a feasible solution for this LP (recall we made sure that $b$ is non-negative). Since $y \geq 0$, we also know that the minimum value of the LP cannot be negative. Now if we optimize this LP and find a solution of objective $0$, we know two things: (1) all artificial $y$ variables have value $0$ in this solution (otherwise the objective would be positive), and (2) the solution we found is feasible for the LP in standard form. We also know that if the minimal objective is positive, no feasible point exists for the original LP (because if such a point existed, it would provide us with a $0$-objective solution to the LP with artificial variables). Thus, we have found a feasible point by optimizing a linear program that is only polynomially larger than the original LP.
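Steps 2 and 3 are easy to express with any LP solver as the black box. A hedged sketch using SciPy's `linprog` (my code, not from the original answer; any polynomial-time LP method could play the same role):

```python
import numpy as np
from scipy.optimize import linprog

def phase1_feasible_point(A, b, tol=1e-9):
    """Find x >= 0 with Ax = b (b >= 0, as arranged in Step 1), or return
    None if no such x exists, by minimising the sum of m artificial variables."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # min 1'y
    A_aug = np.hstack([A, np.eye(m)])               # Ax + Iy = b
    res = linprog(c, A_eq=A_aug, b_eq=b, bounds=(0, None), method="highs")
    if res.status != 0 or res.fun > tol:
        return None          # positive optimum: the original LP is infeasible
    return res.x[:n]         # artificials are all ~0, so Ax = b and x >= 0
```

(Here "$\min\ y$" in Step 2 is read as minimising the sum of the components of $y$.)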

Note that the LP in step 3 can be optimized with any algorithm you like; you just need to be able to find a path from your starting point to a feasible point. Therefore, if you have an algorithm that can take you from a feasible solution to an optimal solution in polynomial time, you can find a feasible solution in polynomial time. Furthermore, you can solve the LP in step 3 with only $m$ simplex iterations, because you should just pivot out one artificial variable in each step and forget it exists as soon as you pivot it out. If the original LP is feasible, you can forget artificial variables without introducing any infeasibility. So in fact, the simplex method should also run in polynomial time for the first phase of the two-phase approach. Correction: I mistakenly assumed you can always select which variables you pivot out, but that is not true. However, it does not matter which polynomial-time algorithm you use: as long as there exists an algorithm that can take you from a feasible solution to an optimal solution in polynomial time, you will always be able to find a feasible solution in polynomial time. As mentioned, the classical algorithms that prove this is possible are the ellipsoid method and Karmarkar's algorithm.

  • "Furthermore, you can solve the LP in step 3 with only m simplex iterations" - ah, that's what I missed, thanks! – Commented Aug 27, 2019 at 10:19
  • Unfortunately that statement was a mistake; it is not known if that is possible. However, you stated in your question that you were able to find polynomial-time algorithms assuming you have a feasible starting point, and my answer still provides you with a way to obtain such a feasible starting point. If you apply the algorithms that assume you have a starting point, you will also obtain a feasible starting point for your original problem. – Commented Aug 27, 2019 at 13:22
  • Hmmm, that's a shame. I am aware of the proofs for the ellipsoid method and Karmarkar's algorithm, but it seems like a chicken-and-egg problem, i.e., we need a feasible point to get a feasible point. – Commented Aug 27, 2019 at 15:35
  • That is why you augment your LP, as is done in steps 1 and 2 in my answer. If you perform those steps, you only need to compute the point where the values of the original variables (in standard form) are $0$ and the values of the artificial variables are equal to the right-hand-side coefficients of your constraints. That point is guaranteed to be feasible by construction. As you can trivially derive this point from your problem data, it solves your chicken-and-egg problem. – Commented Aug 27, 2019 at 16:07
18

LP is solvable in polynomial time. The polynomial depends not only on the problem size but also on the size of the numbers in the input data. The standard proof uses the ellipsoid method. Of course, the proof assumes exact arithmetic, as any complexity proof would. That method isn't practical, though.

It is unknown if LP is strongly polynomial.

In practice you can solve an LP to any precision using an IPM; the hard part is to get an exact solution. Methods that do this are called rounding methods: they run an IPM to a certain precision and then jump to the surface of the polyhedron. Both steps are polynomial in the problem data. This is again theory only; in practice we use standard crossover techniques and a few extra simplex iterations.

If you want to get a strictly feasible starting point for your IPM you have two choices:

  1. Use an embedding method. This adds one extra variable to all problem dimensions. In that higher-dimensional space there is a trivial strictly interior point. Then from the solution of this new problem you can recover the solution of the original problem. This is very different from just adding slacks. This is implemented in practical solvers.

  2. Use an infeasible-start IPM. These minimize the primal and dual infeasibilities together with complementarity (the Newton system they solve is sketched below). This tends to be the more popular option in practical implementations.
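For concreteness, option 2 can be made explicit. A textbook infeasible-start primal-dual IPM for the pair $\min\{c^Tx : Ax=b,\ x\geq 0\}$ solves, at every iteration, the following Newton system (this is the standard formulation from the literature, e.g. Wright's *Primal-Dual Interior-Point Methods*, not a description of any particular solver):

$\begin{pmatrix} 0 & A^T & I \\ A & 0 & 0 \\ S & 0 & X \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta s \end{pmatrix} = -\begin{pmatrix} A^T y + s - c \\ Ax - b \\ XSe - \sigma\mu e \end{pmatrix}$

where $X=\mathrm{diag}(x)$, $S=\mathrm{diag}(s)$, $\mu = x^Ts/n$, and $\sigma\in(0,1)$ is a centering parameter. The starting point only needs $x,s>0$, not feasibility: the primal residual $Ax-b$ and dual residual $A^Ty+s-c$ sit in the right-hand side and are driven to zero together with complementarity.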

Also, it isn't true that solving the phase-1 problem with simplex is polynomial: while you are pivoting out the infeasibilities, new ones can appear. In general, getting a feasible solution to an LP is as hard as solving an LP to optimality.

I hope this helps to clear up some of the confusion.

14

Point 2 of "What we know" is incorrect: the ellipsoid method does not require a feasible starting point.

As I stated in a comment earlier, in Khachiyan (1980) it is proven that "determining the compatibility of a system of linear inequalities in $\mathbb{R}^n$ belongs to the class $P$ of problems".

In Section 6 of the paper, Khachiyan shows that if you can determine the compatibility of a system of linear inequalities in polynomial time, then you can also find the optimal solution to an LP in polynomial time, which is of course feasible.
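A natural follow-up (raised in the comments below) is how to extract an explicit feasible point from a yes/no compatibility oracle. The standard self-reducibility argument does it with at most $m+1$ oracle calls plus one linear solve: walk through the inequalities, tightening each to an equality whenever the system stays compatible; one can show that afterwards every solution of the collected equality subsystem satisfies the whole system. A Python sketch (`is_compatible` is a hypothetical oracle, e.g. an implementation of Khachiyan's ellipsoid method; the interface is mine):

```python
import numpy as np

def feasible_point(A, b, is_compatible):
    """Extract an explicit solution of Ax <= b from a yes/no oracle.

    is_compatible(A_ub, b_ub, A_eq, b_eq) is assumed to decide, in polynomial
    time, whether the system A_ub x <= b_ub, A_eq x = b_eq has a solution."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    m, n = A.shape
    if not is_compatible(A, b, np.empty((0, n)), np.empty(0)):
        return None                      # the system is incompatible
    eq = []                              # rows pinned to equality so far
    for i in range(m):
        trial = eq + [i]
        rest = [j for j in range(m) if j not in trial]
        if is_compatible(A[rest], b[rest], A[trial], b[trial]):
            eq = trial                   # tightening row i keeps compatibility
    if not eq:                           # no row can ever be tight,
        return np.zeros(n)               # so every point is feasible
    # Every solution of the equality subsystem satisfies the full system,
    # and the subsystem is consistent, so an exact solution of it works:
    x, *_ = np.linalg.lstsq(A[eq], b[eq], rcond=None)
    return x
```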

  • Could you please explain how "determining the compatibility of a system of linear inequalities" belonging to $P$ implies that finding a feasible solution can be done in polynomial time? In other words, we know in polynomial time whether the system has a feasible solution, but can we "obtain/restore" such a feasible solution in polynomial time? – rasul, Commented Aug 27, 2019 at 7:08
  • @Opt Thank you, I have edited the answer accordingly. – Commented Aug 27, 2019 at 10:42
  • Thanks Kevin - we know that it is possible in theory, but my question is about what the actual algorithm would be in practice. – Commented Aug 27, 2019 at 15:45
  • I think the idea is to do a binary search on the optimal objective value, adding objective cuts until the system is no longer compatible. – Commented Aug 27, 2019 at 16:47
  • There is no reason why the ellipsoid method described by Khachiyan could not be used in practice. All steps of the algorithm are given and all operations are elementary. – Commented Aug 28, 2019 at 4:08
2

There are LP algorithms which do not require a feasible start and which do not use a Phase I / Phase II method. These algorithms are based on the "Homogeneous Self-Dual Embedding" (HSDE) approach of Ye, Todd, and Mizuno (Mathematics of Operations Research, Vol. 19, No. 1, Feb. 1994, pp. 53-67). For suitable parameter choices, a path-following HSDE algorithm achieves the standard iteration complexity bound for interior point methods.
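To make this concrete, here is (roughly, in my own notation) the simplified homogeneous model that descendants of the Ye-Todd-Mizuno embedding work with, for the standard-form pair $\min\{c^Tx : Ax=b,\ x\geq 0\}$ and its dual. One seeks a nonzero solution of

$\begin{array}{rl} Ax - b\tau &= 0 \\ A^Ty + s - c\tau &= 0 \\ -c^Tx + b^Ty - \kappa &= 0 \\ x, s, \tau, \kappa &\geq 0 \end{array}$

The strictly positive point $x=s=e$, $\tau=\kappa=1$, $y=0$ serves as the starting point (it need not satisfy the equations; the residuals are driven to zero along the way). At a strictly complementary solution, $\tau>0$ yields the optimal pair $(x/\tau, y/\tau, s/\tau)$, while $\kappa>0$ certifies that the problem is primal or dual infeasible, so feasibility never has to be established separately.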

From what I understand, almost all conic interior-point solvers today use the Homogeneous Self-Dual Embedding. I've personally implemented that algorithm in about 200 lines of Python code.

2

Thanks Niko and all; I'd add a few things to this nice discussion.

When you say "first feasible point", do you mean "any" point? There are boundary feasible points, interior points, and feasible bases.

For a strictly interior point, a (polynomial) infeasible-start IPM can quickly give you one without needing to converge to an optimal solution (if one exists). You just set your objective vector to zero, so the first feasible point is also optimal and the algorithm terminates. With today's LP solvers this is very fast (and parallel).
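For instance, with SciPy's HiGHS interior-point option (toy data of my own; any solver with an IPM option works similarly):

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: find any x >= 0 with  x1 + x2 <= 4  and  x1 - x2 = 1.
A_ub, b_ub = np.array([[1.0, 1.0]]), np.array([4.0])
A_eq, b_eq = np.array([[1.0, -1.0]]), np.array([1.0])

# Zero objective: every feasible point is optimal, so the infeasible-start
# interior-point method can stop at the first feasible point it reaches.
res = linprog(c=np.zeros(2), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs-ipm")
if res.status == 0:
    print("feasible point:", res.x)
elif res.status == 2:
    print("the problem is infeasible")
```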

For a basis, you had better use a Simplex-type algorithm that pivots. You can find a feasible basis with the big-M method or the Two-Phase method. Neither requires a feasible point to begin with, and both will find one if the problem is feasible (in the Two-Phase method the artificial variables will eventually exit the basis).

Apologies if the question was more from a theoretical point of view, as my answer is mainly of practical interest.

