
I am solving a mixed-integer program whose decision variables are $x \in \{0, 1\}^n$ and $y \in \mathbb{R}^m$, where $0 \leq y_j \leq u_j$ for constant upper bounds.

My primary objective function is of the form $$ \text{minimize } \sum_{i=1}^n c_i x_i $$

where each $c_i > 0$.

However, under the constraints of this problem there turn out to be many equivalent optimal solutions for this objective, and the solver typically finds an optimum quickly but then spends a long time tightening the dual bound to prove optimality.

To compensate for this, I want to use an objective of the following form as a "tiebreaker": of the set of solutions that minimize $c^T x$, I want to select the one that minimizes $$ \sum_{j=1}^m d_j y_j $$ where each $d_j > 0$. (Actually, all I really care about is the 1-norm of $y$ as a sparsity surrogate, so you can assume $d_j = 1$ if that helps at all.)

At first glance, one way to accomplish this would be as follows:

  1. Get the solution that minimizes $c^T x$ and let $z$ denote the associated objective value.
  2. Solve the MILP again, but change the objective function to $d^T y$ and impose the constraint $c^T x \leq z$.
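
To make this concrete, here is a rough sketch of that two-step scheme using OR-Tools' pywraplp wrapper. The arrays `c`, `d`, `u`, the `add_problem_constraints` callback, and the tolerance `eps` are placeholders, not part of my actual model:

```python
# Rough sketch of the two-step (lexicographic) idea; c, d, u and
# add_problem_constraints() are stand-ins for the real data and constraints.
from ortools.linear_solver import pywraplp


def solve_lexicographic(c, d, u, add_problem_constraints, eps=1e-6):
    solver = pywraplp.Solver.CreateSolver("SCIP")
    x = [solver.BoolVar(f"x_{i}") for i in range(len(c))]
    y = [solver.NumVar(0.0, u[j], f"y_{j}") for j in range(len(d))]
    add_problem_constraints(solver, x, y)  # the problem's own constraints

    # Step 1: minimize c^T x and record the optimal value z.
    primary = solver.Sum([c[i] * x[i] for i in range(len(c))])
    solver.Minimize(primary)
    solver.Solve()
    z = solver.Objective().Value()

    # Step 2: cap the primary objective at z (plus a small tolerance)
    # and minimize the tiebreaker d^T y.
    solver.Add(primary <= z + eps)
    solver.Minimize(solver.Sum([d[j] * y[j] for j in range(len(d))]))
    solver.Solve()
    return [xi.solution_value() for xi in x], [yj.solution_value() for yj in y]
```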

I have two problems with this approach:

  1. In step 1, we know from experience that it takes the solver a long time to prove optimality with respect to $c^T x$, and we are not exploiting our secondary preference to accelerate this process.
  2. In step 2, it would be ideal to reuse the optimal solution from the previous step as a warm start, but I am required to use Google's OR-Tools (which doesn't provide a warm start API), so this step will take a long time.

So, I am considering adopting the following approach instead:

  1. Let $\bar c = \min \{c_i - c_j : c_i - c_j > 0\}$ equal the smallest positive difference among the $c_i$. This represents a lower bound on the difference between $z$ and the objective value of any feasible solution that is not optimal.
  2. Let $\hat d = \sum d_j$ and $\hat u = \sum u_j$.
  3. Solve the whole thing in one shot by minimizing $$\sum_{i=1}^n c_i x_i + \frac{\bar c}{\hat d \hat u}\sum_{j=1}^m d_j y_j .$$

The constants in the second term confine its value to the range $[0, \bar c]$, meaning that the first term must be optimized before the second has any effect.
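
In code, the constants in this construction amount to something like the following plain-Python sketch, where `c`, `d`, `u` are lists holding the $c_i$, $d_j$, and $u_j$:

```python
# Sketch: the constants for the proposed one-shot blended objective.
def tiebreaker_weight(c, d, u):
    diffs = [ci - cj for ci in c for cj in c if ci - cj > 0]
    c_bar = min(diffs)               # smallest positive difference among the c_i
    d_hat = sum(d)
    u_hat = sum(u)
    return c_bar / (d_hat * u_hat)   # weight on the sum of d_j * y_j tiebreaker term
```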

The risk of this approach is that for large $m$, the coefficients on the $y_j$ will be very small relative to the $c_i$, and the problem may be poorly conditioned.

My question is, is this approach valid, and is it possible to mitigate the risk of numerical instability for large $m$?


Edit: I don't think the approach is valid to begin with: If $c_1 = 2$, $c_2 = 0.99$, and $c_3 = 0.99$, then we get $\bar c = 1.01$, but the solutions $x = (1, 0, 0)$ and $x = (0, 1, 1)$ differ in objective value by only $0.02 < \bar c$ (assuming they are feasible).

The question of how to do this lexicographic optimization in a way that avoids the problems with the $c^T x \leq z$ approach stands.

  • Your assumption about $\bar{c}$ is incorrect. Suppose we minimize $3x_1 + x_2 + 5x_3$ subject to $x_1 + x_2 + 2x_3 \ge 2$ with $x$ binary. $\bar{c} = 2,$ $x = (1, 1, 0)$ is optimal with objective value 4, and $x=(0,0,1)$ is feasible with objective value 5. – prubin, Aug 12, 2023 at 15:55
  • Yep, I pointed that out in my edit. But my broader question (how do I do this?) stands, and the notes about $\bar c$ are my proof of effort. – Max, Aug 12, 2023 at 20:53
  • Any chance the $c_i$ are integer or at least rational? – prubin, Aug 12, 2023 at 21:53
  • Sure, we can assume they are integers. Then you can scale the $y$ coefficients so it varies between 0 and 1. But this doesn't solve the numerical instability concerns, does it? – Max, Aug 12, 2023 at 22:05
  • OR-Tools supports warm start for both the linear solver and CP-SAT. – Dec 22, 2023 at 14:47

1 Answer


If we assume that the $c_i$ are integers and the $x_i$ are binary, then the primary objective function is integer-valued, which means the difference in objective value between an optimal solution and any suboptimal solution is at least 1. So we just have to scale the secondary objective so that its range (the difference between the largest and smallest value it can take) is strictly less than 1. That ensures that no amount of improvement in the secondary objective can compensate for even the smallest deterioration in the primary objective. If the unscaled secondary objective is $\sum_j y_j,$ we can multiply it by $\lambda=\frac{0.5}{\sum_j u_j}$ to achieve that.
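
For illustration, here is a rough sketch of this single blended objective in OR-Tools' pywraplp (assuming integer $c_i$; the arrays `c`, `u` and the `add_problem_constraints` callback are placeholders for the actual model):

```python
# Sketch of the blended objective c^T x + lambda * sum_j y_j with
# lambda = 0.5 / sum_j u_j, assuming the c_i are integers.
from ortools.linear_solver import pywraplp


def solve_blended(c, u, add_problem_constraints):
    solver = pywraplp.Solver.CreateSolver("SCIP")
    x = [solver.BoolVar(f"x_{i}") for i in range(len(c))]
    y = [solver.NumVar(0.0, u[j], f"y_{j}") for j in range(len(u))]
    add_problem_constraints(solver, x, y)  # the problem's own constraints

    lam = 0.5 / sum(u)  # the secondary term then lies in [0, 0.5], strictly below 1
    solver.Minimize(
        solver.Sum([c[i] * x[i] for i in range(len(c))])
        + lam * solver.Sum(y)
    )
    solver.Solve()
    return [xi.solution_value() for xi in x], [yj.solution_value() for yj in y]
```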

If the $c_i$ are much larger in magnitude than $\lambda$ (10 or so orders of magnitude), it's possible this could cause the solver issues, although in my experience poor scaling of constraint coefficients is a bigger problem than poor scaling of objective coefficients. If that becomes an issue, you can try replacing $y_j$ with $v_j = \alpha y_j$ where $0 < \alpha < 1.$ The secondary objective becomes $\frac{\lambda}{\alpha}\sum_j v_j,$ making the coefficients closer in magnitude to the $c_i$. It also means the constraint coefficients on $y_j$ get divided by $\alpha$ (growing them) and the variable bounds shrink to $\alpha u_j$, so it remains to be seen whether that damages numerical stability by screwing up the constraint scaling.
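
A toy sketch of that substitution (all numbers, the single constraint, and `alpha` below are made up, just to show how the bounds, constraint coefficients, and objective term transform):

```python
# Toy sketch of the v_j = alpha * y_j substitution on made-up data.
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("SCIP")
u = [5.0, 8.0]            # original upper bounds on y
a, b = [2.0, 3.0], 10.0   # one original constraint: 2*y_0 + 3*y_1 <= 10
lam = 0.5 / sum(u)        # scaling factor from above
alpha = 1e-3              # rescaling factor, 0 < alpha < 1

# v_j stands in for alpha * y_j, so its upper bound shrinks by a factor of alpha ...
v = [solver.NumVar(0.0, alpha * u[j], f"v_{j}") for j in range(len(u))]
# ... constraint coefficients on it grow by a factor of 1/alpha ...
solver.Add(solver.Sum([(a[j] / alpha) * v[j] for j in range(len(u))]) <= b)
# ... and the secondary term lambda * sum_j y_j becomes (lambda / alpha) * sum_j v_j.
solver.Minimize((lam / alpha) * solver.Sum(v))
solver.Solve()
print([vj.solution_value() / alpha for vj in v])  # recover the original y_j
```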

  • 10 orders of magnitude might be exactly what I'm looking at, unfortunately. My $c_i$ are between 10,000 and 100,000, and my $u_j$ are similar. I gave this some further thought and it seems that it is sufficient to scale the $\sum y_j$ term to vary between $0$ and $\bar c$, where $\bar c$ is the greatest common factor of the $c_i$. If the $c_i$ are all multiples of 100, for instance, forcing the second term to vary between $0$ and $1$ is overkill; $0$ and $100$ will suffice. I will try this approach and see if it damages numerical stability. – Max, Aug 13, 2023 at 16:21
  • If your $c_i$ have a common divisor larger than 1, I would definitely divide them by it. It won't affect the optimal solution, and it will shrink the magnitude gap. – prubin, Aug 13, 2023 at 19:53
  • Tried it out today and encountered reams of numerical instability warnings from my solver. Even if you can reformulate $c_i$ and $d_j$ and $u_j$ to be small integers, the scaling factor is still proportional to $m$, which ensures that this method causes poorer and poorer conditioning as the problem grows in size. – Max, Aug 14, 2023 at 23:07
  • If you can switch solvers to one that handles lexicographic ordering of multiple objectives, you can avoid the conditioning problems. – prubin, Aug 15, 2023 at 15:46
  • If I had a solver that handled lexicographic ordering of multiple objectives, I wouldn't have this question in the first place :) – Max, Aug 15, 2023 at 23:21
