I'm solving a constrained optimization problem, where I have to find the vector $\mathbf{x}$ of dimensionality $N \times 1$. The input vectors are $\mathbf{a}$ and $\mathbf{h}$, also of dimensionality $N \times 1$. The error function I have to minimize is:
$$E(\mathbf{a},\mathbf{x})=\sum_{i=1}^N(a_i-x_i)^2=(\mathbf{a}-\mathbf{x})^T(\mathbf{a}-\mathbf{x})$$
I have to minimize this error function subject to $N$ constraints:
$$x_i \ge \varepsilon_i \; \; \; \; \; \; \forall i \in [1, N] $$
where $\varepsilon_i$ is defined as:
$$\varepsilon_i =\begin{cases} k &\text{for } i = 1\\ x_{i-1} + h_{i - 1} + \eta &\text{for } i > 1 \end{cases}$$
for some constants $k$ and $\eta$, where $h_i$ is the $i$th entry of the vector $\mathbf{h}$.
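To make the recursive definition concrete, here is a small sketch (in JavaScript, since that is my target language) that checks whether a candidate $\mathbf{x}$ satisfies these constraints; the function name and the sample values of $k$ and $\eta$ are just illustrative:

```javascript
// Check x_i >= eps_i, where eps_1 = k and
// eps_i = x_{i-1} + h_{i-1} + eta for i > 1 (0-based indices below).
function satisfiesConstraints(x, h, k, eta) {
  for (let i = 0; i < x.length; i++) {
    const eps = i === 0 ? k : x[i - 1] + h[i - 1] + eta;
    if (x[i] < eps) return false;
  }
  return true;
}

// Illustrative values:
satisfiesConstraints([1, 3, 6], [1, 2, 0], 1, 0.5); // → true
satisfiesConstraints([0, 3, 6], [1, 2, 0], 1, 0.5); // → false (0 < k)
```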
The error function is quadratic and the constraints are linear, so this is a quadratic programming problem with $N$ unknowns.
The general form of a quadratic programming problem is to minimize $\mathbf{x}^TP\mathbf{x} + \mathbf{q}^T\mathbf{x}$ subject to the constraints $G\mathbf{x} \le \mathbf{h}$ (this $\mathbf{h}$ is the solver's constraint vector, not the input vector $\mathbf{h}$ defined above).
I can easily convert the error function to this general form by computing the matrix $P$ and vector $\mathbf{q}$: expanding $(\mathbf{a}-\mathbf{x})^T(\mathbf{a}-\mathbf{x}) = \mathbf{x}^T\mathbf{x} - 2\mathbf{a}^T\mathbf{x} + \mathbf{a}^T\mathbf{a}$ and dropping the constant term gives $P = I$ and $\mathbf{q} = -2\mathbf{a}$. But I'm having a lot of trouble transforming the constraints into the general form $G\mathbf{x} \le \mathbf{h}$, because I cannot move the vector $\mathbf{x}$ entirely to one side of the inequalities: the $i$th entry of $\mathbf{x}$ depends on the $(i-1)$th entry (since $x_i \ge \varepsilon_i$ and $\varepsilon_i$ depends on the value of $x_{i-1}$).
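For reference, the part I can already do (building $P$ and $\mathbf{q}$) looks like this in JavaScript; the function name is my own, and note that some QP libraries instead minimize $\frac{1}{2}\mathbf{x}^TP\mathbf{x} + \mathbf{q}^T\mathbf{x}$, in which case $P = 2I$ would be needed:

```javascript
// Build P = I (N x N identity) and q = -2a for the objective
// x^T P x + q^T x, derived from E = (a - x)^T (a - x) with the
// constant a^T a dropped.
function buildObjective(a) {
  const N = a.length;
  const P = Array.from({ length: N }, (_, i) =>
    Array.from({ length: N }, (_, j) => (i === j ? 1 : 0))
  );
  const q = a.map((ai) => -2 * ai);
  return { P, q };
}
```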
I have to code this in JavaScript, and I have found a library that solves quadratic programming problems, but it requires the matrices $P$ and $G$ and the vectors $\mathbf{q}$ and $\mathbf{h}$ as input ($\mathbf{x}^TP\mathbf{x} + \mathbf{q}^T\mathbf{x}$ subject to $G\mathbf{x} \le \mathbf{h}$). If I could just find the matrix $G$ and the vector $\mathbf{h}$, I could make a single function call and the problem would be solved. Can anyone help me?