
After a few iterations, my master problem with optimality cuts is still unbounded. Is this possible in theory?

If so, how should I deal with the unbounded master problem?

  • A clarification: if the master problem stays unbounded both before and after cuts are added, the algorithm gets stuck because the master solution does not change. I think this can also happen when a redundant lower bound or variable bounds are added.
    – htam_ujn
    Commented Jun 3, 2022 at 13:16
  • Just to be clear, it is your master problem (not the subproblem) that is unbounded, and you are using the feasible corner point solution at which unboundedness was detected to generate the optimality cut, correct?
    – prubin
    Commented Jun 3, 2022 at 14:41
  • @prubin Yes. The master problem is unbounded. But I'm not sure what solution the solver (Gurobi) returns when the problem is unbounded.
    – htam_ujn
    Commented Jun 3, 2022 at 14:56
  • I posted a new question as RobPratt suggested: or.stackexchange.com/questions/8487/…
    – htam_ujn
    Commented Jun 3, 2022 at 14:56

3 Answers


Assuming your master problem is to minimize $\eta$, a simple way to avoid unboundedness, even before adding any cuts, is to impose a redundant lower bound $\eta \ge L$ for some constant $L$. Often, taking $L=0$ is valid.
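As a minimal sketch of this idea (a toy one-variable master of my own, with an enumerated grid of $x$ values standing in for the LP solver, so `solve_master`, `xs`, and the cut data are all hypothetical):

```python
# Toy Benders master: min eta over a finite grid of x values, where
# eta(x) = max(L, max_k a_k*x + b_k) over the optimality cuts (a_k, b_k).
# Without the floor L, the very first master solve (no cuts yet) would
# report eta unbounded below.

L = 0.0  # redundant lower bound eta >= L (valid here because eta >= 0 is known)

def solve_master(xs, cuts, L):
    """Return (x, eta) minimizing eta(x) = max(L, max_k a*x + b) over xs."""
    best = None
    for x in xs:
        eta = max([L] + [a * x + b for (a, b) in cuts])
        if best is None or eta < best[1]:
            best = (x, eta)
    return best

xs = [0.0, 1.0, 2.0]
print(solve_master(xs, [], L))             # -> (0.0, 0.0): bounded with no cuts
print(solve_master(xs, [(-1.0, 3.0)], L))  # -> (2.0, 1.0): cut eta >= 3 - x binds
```

A real implementation would of course solve the master as an LP/MIP; the point is only that the extra constraint $\eta \ge L$ keeps the first iterations well defined.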

  • Your answer and prubin's below truly solved my question. But in certain situations, the optimal solution to the master problem with a lower bound reaches the artificial lower bound both before and after adding the cuts, which means the master problem returns the same solution as before and, of course, generates the same cut. The algorithm gets stuck. I think it's essentially the same situation as unboundedness.
    – htam_ujn
    Commented Jun 3, 2022 at 13:11
  • This sounds to me like the cuts are not correct. Please open a separate question with more detail about this issue with cycling, which should not happen.
    – RobPratt
    Commented Jun 3, 2022 at 13:59

Yes, it is possible in theory.

An alternative to bounding the objective function is bounding the variables. If this is a "real-world" model (where the variables represent actual decisions), they will all be bounded in practice. Adding appropriate (not overly tight, but not ridiculously loose) bounds will sometimes speed up the solution process in addition to keeping the model bounded.

Update: I added an answer to the linked question that is relevant here (and too lengthy to repeat). Basically, if the solver for the master problem is at a corner where it discovers a recession direction that makes the master unbounded, and if the subproblem generates an optimality cut (to correct the master problem under-/over-estimating the objective value at that corner), there is no guarantee that the cut causes the objective to become bounded in the recession direction. If it does not, the solver will return the same corner solution (with an amended objective value), the master will remain unbounded, and the solution process will remain stuck.
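A minimal toy instance (my own, not from the linked answer) illustrating this: the master $$ \min \{\eta : x \ge 0\} $$ is unbounded along the recession direction $(\Delta x, \Delta \eta) = (0, -1)$. Suppose the subproblem at the returned corner produces the optimality cut $\eta \ge 1 - x$. The amended master $$ \min \{\eta : \eta \ge 1 - x,\ x \ge 0\} $$ is still unbounded, now along $(\Delta x, \Delta \eta) = (1, -1)$: the cut bounds $\eta$ at any fixed $x$ but not along that ray, so the process stays stuck.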

  • Your answer solves the problem. I have another question; please see the comment I added.
    – htam_ujn
    Commented Jun 3, 2022 at 13:19

RobPratt's and prubin's answers indeed solve the problem in the post.

For anyone wondering whether other remedies exist: Frangioni (2020) describes a class of stabilization methods for nonsmooth nonlinear problems, designed to avoid oscillation, called proximal and level stabilization; they also mitigate this problem to some extent.

Proximal stabilization: add a proximal term to the objective of the master problem, so the MP changes from $$ \min_{x \in X} f(x) $$ to $$ \min_{x \in X} f(x) + \frac{\mu}{2}\| x - x_k \|_2^2 $$ where $\mu > 0$ is a hyperparameter and $x_k$ is the current stability center.
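A 1-D sketch of the proximal master above, on a toy problem of my own ($f(x) = |x|$, with a piecewise-linear cutting-plane model built from subgradient cuts; `stabilized_master` is a hypothetical exact solver that exploits the 1-D structure):

```python
def f(x):                      # toy objective: f(x) = |x|, minimized at 0
    return abs(x)

def subgrad(x):                # a subgradient of |x| (taking g = 1 at x = 0)
    return 1.0 if x >= 0 else -1.0

def stabilized_master(cuts, center, mu):
    """Exactly minimize max_k(a*x + b) + (mu/2)*(x - center)^2 in 1-D."""
    def obj(x):
        return max(a * x + b for a, b in cuts) + 0.5 * mu * (x - center) ** 2
    # In 1-D the minimizer is either a per-cut unconstrained minimizer or a
    # kink where two cuts intersect, so checking this finite set is exact.
    cands = [center - a / mu for a, _ in cuts]
    for i in range(len(cuts)):
        for j in range(i + 1, len(cuts)):
            (a1, b1), (a2, b2) = cuts[i], cuts[j]
            if a1 != a2:
                cands.append((b2 - b1) / (a1 - a2))
    return min(cands, key=obj)

mu, center = 1.0, 5.0
g = subgrad(center)
cuts = [(g, f(center) - g * center)]      # initial cut at the starting center
for _ in range(15):
    x = stabilized_master(cuts, center, mu)
    g = subgrad(x)
    cuts.append((g, f(x) - g * x))        # add cut f(x) + g*(t - x) at trial x
    if f(x) < f(center):                  # serious step: move the center
        center = x                        # (else: null step, center stays put)
print(center)  # -> 0.0
```

The proximal term prevents the iterates from jumping to far-away minimizers of the (initially poor) cutting-plane model, which is exactly the oscillation the stabilization is meant to damp.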

Level stabilization: add a level-set constraint, so the MP becomes $$ \min_{x \in X} \| x - x_k \|_2^2 \\ \text{s.t. } f(x) \le l $$ where $l$ is a hyperparameter; a natural choice for $l$ is the best primal bound achieved so far.

Of course, you can also combine the two methods: $$ \min_{x \in X} f(x) + \frac{\mu}{2}\| x - x_k \|_2^2 \\ \text{s.t. } f(x) \le l $$

Reference

  1. Frangioni, A. (2020). Standard bundle methods: untrusted models and duality. In Numerical Nonsmooth Optimization (pp. 61-116). Springer, Cham.
