
I have an optimization problem which should be solvable with Quadratic Programming:


There are $n$ multiplication coefficients $c_i$ whose optimal values are sought.

Each coefficient is multiplied by its own vector $\pmb v_i$, and the results are summed to form a new vector:

$\pmb v^{sum} = \sum_{i=1}^n c_i \cdot \pmb v_i$

Finally, the mean squared error between $\pmb v^{sum}$ and a vector $\pmb u$ is formed:

$mse = \frac{1}{m}\sum_{i=1}^m (u_i - v^{sum}_i)^2$

The restrictions for values in the $\pmb c$ vector are:

  • They are not allowed to be negative:

    $c_i \ge 0.0$

  • They must have a sum of $1.0$:

    $\sum_{i=1}^n c_i = 1.0$

So I want to find the $c_i$ values that minimize the $mse$ subject to these restrictions.


The minimization should be done with Quadratic Programming:

$\text{minimize}\ \ \ \frac{1}{2}\pmb x^T \pmb P\pmb x+\pmb q^T\pmb x$

$\text{subject to}\ \ \ \pmb{Gx} \leq \pmb h$

$\phantom{\text{subject to}}\ \ \ \pmb{Ax} = \pmb b$

Can someone help me translate the problem into the Quadratic Programming form so I can put it into a solver?


2 Answers


Some high-level tools allow you to input the model almost exactly as you stated it. After substituting out ${\pmb v}^{sum}$, we can write, using cvxpy:

import numpy as np
import cvxpy as cp

n = 10  # number of vectors v[i]
m = 20  # length of each vector

# random data: the columns of V are the vectors v[i]
V = np.random.random_sample((m, n))
u = np.random.random_sample((m, 1))

# optimization model: nonnegativity is declared on the variable itself
c = cp.Variable((n, 1), nonneg=True)
prob = cp.Problem(cp.Minimize((1 / m) * cp.sum_squares(u - V @ c)),
                  [cp.sum(c) == 1])
prob.solve(verbose=True)
print("status:", prob.status)
print("mse:", prob.value)
print("values for c:")
print(c.value)

Define $\mathbf x$ as the vector $\mathbf x:=(c_1,c_2,\dots,c_n)^T$ and $\mathbf V$ as the matrix whose columns are the $\pmb v_i$, i.e., $\mathbf V=(\pmb v_1, \pmb v_2, \dots, \pmb v_n)$. Then $\pmb v^{sum}=\mathbf{Vx}$, and your $mse$ can be written as $$mse = (\pmb u - \mathbf{Vx})^T(\pmb u-\mathbf{Vx}).$$ (Please double-check that I understood your notation correctly; I have also dropped the constant factor $\frac{1}{m}$, as it is irrelevant for minimization.)

Then, we have that $$(\pmb u-\mathbf{Vx})^T (\pmb u-\mathbf{Vx}) = \pmb u^T \pmb u - 2\pmb{u}^T\mathbf{Vx}+\mathbf{x}^T \mathbf V^T \mathbf V \mathbf x.$$

$\pmb u^T \pmb u$ is constant, so you can ignore it for minimization purposes. Now define $\mathbf P:=2 \mathbf V^T \mathbf V$ and $\mathbf q := -2\mathbf V^T \pmb u$ (so that $\mathbf q^T \mathbf x = -2\pmb u^T \mathbf{Vx}$ matches the cross term above), and you have formulated your objective function appropriately. This is a convex problem because $\mathbf P$ is positive semidefinite, being a (scaled) product of a matrix's transpose with the matrix itself.
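As a quick numerical sanity check of the expansion (a sketch with random data, not part of the derivation), choosing $\mathbf q = -2\mathbf V^T\pmb u$ so that $\mathbf q^T\mathbf x$ matches the cross term $-2\pmb u^T\mathbf{Vx}$, the QP objective plus the dropped constant $\pmb u^T\pmb u$ reproduces the squared error exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 10
V = rng.random((m, n))   # columns are the vectors v_i
u = rng.random(m)
x = rng.random(n)        # arbitrary candidate coefficients

P = 2 * V.T @ V          # quadratic term
q = -2 * V.T @ u         # linear term

lhs = 0.5 * x @ P @ x + q @ x + u @ u  # QP objective plus the dropped constant
rhs = np.sum((u - V @ x) ** 2)         # squared error ||u - Vx||^2
print(np.isclose(lhs, rhs))            # True
```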

As for the constraints, define $\mathbf G$ as the $n\times n$ diagonal matrix whose diagonal entries are all $-1$, i.e., $\mathbf G:=\text{diag}(-1,-1, \dots, -1)$, $\mathbf h$ as the $n$-dimensional zero column vector $\mathbf h:=(0,0,\dots,0)^T$, $\mathbf A$ as the $1\times n$ row vector of ones $\mathbf A:=(1,1,\dots,1)$, and $\mathbf b$ simply as the scalar $\mathbf b:=1$. This should complete the formulation of the problem, but please double-check everything if you are going to use this solution.
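A dedicated QP solver (e.g., cvxopt's `solvers.qp`) accepts these matrices directly; as a self-contained sketch under the same formulation, the pieces can also be assembled and solved with `scipy.optimize.minimize` (the tolerance values and starting point are my own illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m, n = 20, 10
V = rng.random((m, n))   # columns are the vectors v_i
u = rng.random(m)

# QP data as formulated above
P = 2 * V.T @ V
q = -2 * V.T @ u
G = -np.eye(n)           # Gx <= h  encodes  c_i >= 0
h = np.zeros(n)
A = np.ones((1, n))      # Ax = b   encodes  sum(c_i) = 1
b = np.array([1.0])

obj = lambda x: 0.5 * x @ P @ x + q @ x
cons = [{"type": "ineq", "fun": lambda x: h - G @ x},  # h - Gx >= 0
        {"type": "eq",   "fun": lambda x: A @ x - b}]  # Ax - b = 0
res = minimize(obj, np.full(n, 1.0 / n), method="SLSQP", constraints=cons)

c = res.x
print("sum of c:", c.sum())          # ~1.0
print("min of c:", c.min())          # nonnegative up to solver tolerance
print("mse:", np.mean((u - V @ c) ** 2))
```

Since the problem is a convex QP over the probability simplex, any feasible starting point (here, uniform weights) should lead the solver to the global minimum.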

