
Consider the optimization problem \begin{align}\label{opt-lp}\tag{Primal} \begin{array}{cl} \underset{x \in \mathbb{R}^n}{\text{minimize}} & c^\top x \\ \text{subject to} & Ax = a \\ & Bx = b \end{array} \end{align} where $A \in \mathbb{R}^{m \times n}, a \in \mathbb{R}^m, B \in \mathbb{R}^{q \times n}, b \in \mathbb{R}^q$ are the problem data, and the problem has a nonempty feasible set.

I would like to introduce a partial Lagrange relaxation of only the constraints $Ax = a$, so that the partial Lagrange function is $$L(x, \lambda) = c^\top x + \lambda^\top ( Ax - a).$$ As this is a "partial" Lagrange relaxation, I define the Lagrange dual function as $$ g(\lambda) = \underset{x : Bx = b}\inf \; L(x, \lambda),$$ that is, I keep the constraint $Bx = b$ inside the minimization. It is clear that $g(\lambda)$ lower bounds the optimal value of \eqref{opt-lp}.

I think the Lagrange dual problem becomes: \begin{align}\label{dual}\tag{Dual} \sup_\lambda \inf_{x : Bx =b} c^\top x + \lambda^\top (Ax - a). \end{align} My question is whether what I am doing here has a name, and whether I can simply state in a paper that strong duality holds between problems \eqref{opt-lp} and \eqref{dual} without going into much detail.
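For what it is worth, the following is a small numerical sketch of the construction (the toy data, the box bounds that keep every LP below bounded, and the use of `scipy.optimize.linprog` are illustrative assumptions, not part of the problem statement): it evaluates $g(\lambda)$ by solving the inner LP over $\{x : Bx = b\}$ and checks that it lower bounds the primal optimum.

```python
# Hedged sketch: the data and the box bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 0.5])
A, a = np.array([[1.0, 1.0, 1.0]]), np.array([2.0])   # relaxed constraints Ax = a
B, b = np.array([[1.0, -1.0, 0.0]]), np.array([0.5])  # constraints kept in the infimum
bounds = [(-5.0, 5.0)] * 3   # box added only so every LP in this sketch stays bounded

# Primal optimal value p* of: min c^T x  s.t.  Ax = a, Bx = b, box bounds.
p_star = linprog(c, A_eq=np.vstack([A, B]), b_eq=np.concatenate([a, b]),
                 bounds=bounds).fun

def g(lam):
    """Partial dual function g(lam) = inf_{Bx = b} c^T x + lam^T (Ax - a)."""
    res = linprog(c + A.T @ lam, A_eq=B, b_eq=b, bounds=bounds)
    return -np.inf if res.status == 3 else res.fun - lam @ a   # status 3 = unbounded

for lam in np.random.default_rng(0).normal(size=(20, 1)):
    assert g(lam) <= p_star + 1e-8   # weak duality: g(lambda) <= p*
```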


2 Answers


Suppose the primal problem is (with $c$, $\lambda$ and $\mu$ below written as row vectors): \begin{align} \begin{array}{cl} \underset{x}{\text{minimize}} & c x \\ \text{subject to} & Ax = b \\ & Dx \leq e \\ & x \geq 0 \end{array} \end{align}

The idea behind Lagrangian relaxation is to relax the complicating constraints to produce an easier problem: the complicating constraints are moved into the objective function, weighted by penalties, the so-called Lagrange multipliers $\lambda$. The Lagrangian subproblem is:

\begin{align} \begin{array}{cl} \underset{x}{\text{minimize}} & cx + \lambda(b - Ax)\\ \text{subject to} & Dx \leq e \\ & x \geq 0 \end{array} \end{align}

Solving the Lagrangian subproblem produces a lower bound on the optimal objective value of the primal problem. Let $Z^*$ be the optimal objective value of the primal and let $Z_{LR}(\lambda)$ be the optimal objective value of the Lagrangian subproblem. Then we have the following result:

For any $\lambda \in \mathbb{R}^m$ (where $m$ is the number of rows of $A$), $$Z_{LR}(\lambda) \leq Z^*.$$
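The reason is one line: if $x^*$ is an optimal solution of the primal, then $x^*$ is feasible for the Lagrangian subproblem and satisfies $Ax^* = b$, so
$$ Z_{LR}(\lambda) \;\leq\; c x^* + \lambda (b - A x^*) \;=\; c x^* \;=\; Z^*. $$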

Since the primal is a minimization problem, we want lower bounds that are as large as possible; these are the most accurate and useful bounds. Different values of $\lambda$ give different values of $Z_{LR}(\lambda)$, and hence different bounds. We would like to find the $\lambda$ that gives the largest possible bound, that is, we want to solve the Lagrangian dual problem:

\begin{align} \begin{array}{cl} \underset{\lambda}{\text{maximize}} & Z_{LR}(\lambda)\\ \end{array} \end{align}
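As a purely illustrative sketch of how this maximization is commonly attacked (the toy data, step sizes, and the use of `scipy.optimize.linprog` below are assumptions, not part of the answer): $Z_{LR}(\lambda)$ is concave and piecewise linear in $\lambda$, and $b - Ax(\lambda)$, where $x(\lambda)$ is an optimal subproblem solution, is a subgradient at $\lambda$, so a simple subgradient ascent can be used.

```python
# Hedged sketch: subgradient ascent on the Lagrangian dual of
#   min cx  s.t.  Ax = b, Dx <= e, x >= 0   (all data below are made up).
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 3.0])
A, b = np.array([[1.0, 1.0, 1.0]]), np.array([2.0])
D, e = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]), np.array([1.5, 1.5])

lam, best = np.zeros(1), -np.inf
for k in range(1, 200):
    # Lagrangian subproblem: min (c - lam A) x + lam b  s.t.  Dx <= e, x >= 0.
    sub = linprog(c - lam @ A, A_ub=D, b_ub=e, bounds=[(0, None)] * 3)
    best = max(best, sub.fun + lam @ b)        # best lower bound Z_LR(lam) so far
    lam = lam + (1.0 / k) * (b - A @ sub.x)    # subgradient step (lam is unrestricted)

z_lp = linprog(c, A_eq=A, b_eq=b, A_ub=D, b_ub=e, bounds=[(0, None)] * 3).fun
print(f"best Lagrangian bound {best:.4f}  vs  LP optimum {z_lp:.4f}")
```

Since equality constraints are dualized here, $\lambda$ stays unrestricted; if inequality constraints were dualized instead, each step would be followed by a projection onto the sign constraint on the multipliers.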

Write $Z_{LR}$ for the optimal value of this Lagrangian dual and $Z_{LP}$ for the optimal value of the primal LP. Applying LP duality to the subproblem shows that the Lagrangian dual bound is at least as strong as the LP bound, i.e. $Z_{LP} \leq Z_{LR}$:

$$\begin{aligned} Z_{\mathrm{LR}} &\geq \max_{\lambda}\left\{\min_{x}\; cx+\lambda(b-Ax) \;\middle|\; Dx \leq e,\ x \geq 0\right\} \\ &= \max_{\lambda}\left\{\min_{x}\;(c-\lambda A)x+\lambda b \;\middle|\; Dx \leq e,\ x \geq 0\right\} \\ &= \max_{\lambda}\left\{\max_{\mu}\; \mu e+\lambda b \;\middle|\; \mu D \leq c-\lambda A,\ \mu \leq 0\right\} \\ &= \max_{\lambda, \mu}\left\{\mu e+\lambda b \;\middle|\; \mu D \leq c-\lambda A,\ \mu \leq 0\right\} \\ &= \max_{\lambda, \mu}\left\{\mu e+\lambda b \;\middle|\; \mu D+\lambda A \leq c,\ \mu \leq 0\right\} \\ &= \min_{y}\left\{cy \;\middle|\; Ay=b,\ Dy \leq e,\ y \geq 0\right\} \\ &= Z_{\mathrm{LP}} \end{aligned}$$

where the last equality takes the LP dual of the combined maximization over $(\lambda, \mu)$, recovering the original primal LP (written in the variable $y$).
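Combining this with the weak-duality bound above, and noting that the primal here is itself an LP (so $Z^* = Z_{LP}$), the strong-duality conclusion the question asks about becomes explicit:
$$ Z_{LP} \;\leq\; Z_{LR} \;=\; \max_\lambda Z_{LR}(\lambda) \;\leq\; Z^* \;=\; Z_{LP} \quad\Longrightarrow\quad Z_{LR} \;=\; Z^*. $$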




This is called Lagrangian relaxation, no matter what subset of constraints you choose to dualize.

  • Thank you for your answer. Is strong duality as clear as in the typical case where we dualize all constraints? (Commented Dec 4, 2021 at 23:56)
  • @independentvariable I would say yes, it is as clear. You just squeeze what you call partial LR between the original LP and the full LR. – Sune (Commented Dec 5, 2021 at 13:11)
