Abstract: This paper considers the NP-hard problem of finding the minimum value of a quadratic program (QP) subject to m non-convex inhomogeneous quadratic constraints. An effective algorithm is proposed for obtaining a feasible solution, based on the optimal solution of the problem's semidefinite programming (SDP) relaxation.

1. Introduction

$\begin{array}{l}\underset{x\in {R}^{n}}{min}f\left(x\right)={x}^{\text{T}}Ax\\ \text{s}\text{.t}\text{.}{x}^{\text{T}}{A}^{k}x+{\left({b}^{k}\right)}^{\text{T}}x\ge 1,k=1,\cdots ,m,\end{array}$ (1)

where $A,{A}^{k}\in {R}^{n×n}\left(k=1,\cdots ,m\right)$ are symmetric positive semidefinite matrices and ${b}^{k}\in {R}^{n}$. Note that if $n=1$, then problem (1) is easily solved, so we assume $n\ge 2$. In general, this problem is NP-hard  . It has many applications in telecommunications, robust control, portfolio selection and so on, so efficient methods for it are of practical interest.
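For concreteness, the small numpy sketch below (the helper names `objective` and `is_feasible` are ours, not from the paper) evaluates the objective of (1) and checks its constraints on a toy instance:

```python
import numpy as np

def objective(A, x):
    """Objective f(x) = x^T A x of problem (1)."""
    return x @ A @ x

def is_feasible(As, bs, x, tol=1e-9):
    """Check the m constraints x^T A^k x + (b^k)^T x >= 1 of problem (1)."""
    return all(x @ Ak @ x + bk @ x >= 1 - tol for Ak, bk in zip(As, bs))

# Tiny instance with n = 2, m = 1:
A = np.eye(2)
As = [np.eye(2)]
bs = [np.zeros(2)]
x = np.array([1.0, 0.0])          # lies on the unit circle, so feasible
print(is_feasible(As, bs, x))     # True
print(objective(A, x))            # 1.0
```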

By the work of Lovász and Schrijver  , Shor  , and others, certain NP-hard combinatorial optimization problems can be approximated by semidefinite programming (SDP) problems, for which efficient solution methods exist  . Motivated by this, many researchers have put forward algorithms for solving quadratic optimization problems based on the semidefinite programming (SDP) relaxation. Recently, there have been several results on solving different forms of quadratic problems, which can be summarized as follows.

•  If all ${A}^{k}$ are symmetric positive semidefinite $n×n$ matrices with positive definite sum and A is an arbitrary symmetric $n×n$ matrix, A. Nemirovski et al.  produce a feasible solution $\stackrel{˜}{x}$ such that, with constant probability,

$f\left(\stackrel{˜}{x}\right)\ge \frac{1}{2\mathrm{ln}\left(2{m}^{2}\right)}\cdot v\left(SDP\right).$ (2)

•  If $A,{A}^{k}\succcurlyeq 0,{A}^{k}={\left({A}^{k}\right)}^{\text{T}},{b}^{k}=0,$ Luo et al.  showed that the ratio between the original optimal value and the SDP relaxation optimal value is bounded by $O\left({m}^{2}\right)$:

$\frac{v\left(P\right)}{v\left(SDP\right)}\le \frac{27{m}^{2}}{\text{π}}.$ (3)

•  If all ${A}^{k},A$ are symmetric matrices and two or more of them are indefinite, S. He et al.  compute a feasible solution $\stackrel{˜}{x}$ such that

$f\left(\stackrel{˜}{x}\right)\le \frac{{10}^{6}{m}^{2}}{\text{π}}\cdot v\left(SDP\right).$ (4)

•  Of special interest is the case of ellipsoid constraints

$\begin{array}{l}{A}^{k}={\left({F}^{k}\right)}^{\text{T}}{F}^{k},{b}^{k}=2{\left({F}^{k}\right)}^{\text{T}}{g}^{k},\\ {c}^{k}={\left(‖{g}^{k}‖\right)}^{2}-{h}^{k},k=1,\cdots ,m,\end{array}$ (5)

where ${F}^{k}\in {R}^{n×n},{g}^{k}\in {R}^{n},{h}^{k}\in \left[0,1\right]$, and $‖\text{ }\cdot \text{ }‖$ denotes the Euclidean norm, so ${x}^{\text{T}}{A}^{k}x+{\left({b}^{k}\right)}^{\text{T}}x+{c}^{k}={‖{F}^{k}x+{g}^{k}‖}^{2}-{h}^{k},k=1,\cdots ,m$. Nemirovski  showed that if all ${b}^{k}=0$ and the ${A}^{k}$ are positive semidefinite, then a feasible solution $\stackrel{˜}{x}$ can be generated from (SDP) satisfying

$f\left(\stackrel{˜}{x}\right)\le \frac{1}{2\mathrm{ln}\left(2\left(m+1\right)\mu \right)}\cdot v\left(SDP\right),\mu =\mathrm{min}\left\{m+1,{\mathrm{max}}_{k=1,\cdots ,m}rank\left({A}^{k}\right)\right\}.$ (6)

•  In particular, if (1) has ball constraints, $\mu =\mathrm{min}\left\{m+1,n\right\}$. Ye and Zhang (Corollary 2.6 in  ) showed that a feasible x satisfying

$f\left(x\right)\le \frac{1}{\mathrm{min}\left\{m-1,n\right\}}\cdot v\left(SDP\right),$ (7)

can be found in polynomial time.

•  Ye  extended the above result to allow the ellipsoids not to have a common center, but assuming $A\preccurlyeq 0$. Ye showed that a feasible solution $\stackrel{˜}{x}$ can be randomly generated such that

$E\left({\stackrel{˜}{x}}^{\text{T}}A\stackrel{˜}{x}\right)\le \frac{{\left(1-\underset{k=1,\cdots ,m}{\mathrm{max}}‖{g}^{k}‖\right)}^{2}}{4\mathrm{ln}\left(4mn\cdot \underset{k=1,\cdots ,m}{\mathrm{max}}\left(rank\left({A}^{k}\right)\right)\right)}\cdot v\left(SDP\right).$ (8)

However, existing algorithms address discrete or continuous problems that are mostly based on homogeneous constraints or inhomogeneous convex constraints. For the quadratic optimization problem with non-convex inhomogeneous quadratic constraints, no very effective algorithm is available. This paper proposes a new effective algorithm to solve this problem.

This paper is organized as follows. In Section 2, we present a semidefinite programming (SDP) relaxation of (1). In Section 3, we propose a new effective algorithm to obtain a feasible solution of the quadratic optimization problem (1) with non-convex inhomogeneous quadratic constraints. Finally, some conclusions and future work are given in Section 4.

Notations. Throughout this paper, we denote by ${R}^{n}$ and ${S}_{+}^{n}$ the n-dimensional real vector space and the space of $n×n$ symmetric positive semidefinite matrices, respectively. $A\succcurlyeq 0$ denotes that A is positive semidefinite. $Tr\left(\cdot \right)$ represents the trace of a matrix. The inner product of two matrices A and B is denoted by

$A•B=Tr\left(A{B}^{\text{T}}\right)={\sum }_{i=1}^{n}{\sum }_{j=1}^{n}{a}_{ij}{b}_{ij}$. $Pr\left(\cdot \right)$ stands for the probability.
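As a quick check of this identity, both expressions of the inner product agree numerically:

```python
import numpy as np

# A • B = Tr(A B^T) = sum_ij a_ij b_ij
A = np.array([[1.0, 2.0], [2.0, 3.0]])
B = np.array([[0.0, 1.0], [1.0, 4.0]])
dot_trace = np.trace(A @ B.T)   # trace form
dot_sum = np.sum(A * B)         # entrywise form
print(dot_trace, dot_sum)       # 16.0 16.0
```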

2. Semidefinite Programming (SDP) Relaxation

In this section, we present a semidefinite programming (SDP) relaxation of (1). To avoid trivial cases, we first make the following assumption.

Assumption. For each $k$, the equation ${b}^{k}=2{A}^{k}{y}^{k}$ has a solution ${y}^{k}\in {R}^{n}$.

Introduce a scalar variable t satisfying ${t}^{2}=1$. Then (1) is equivalent to:

$\begin{array}{l}\underset{\left(x,t\right)\in {R}^{n}×R}{min}f\left(x,t\right)={x}^{\text{T}}Ax\\ \text{s}\text{.t}.\frac{1}{1+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}}\left[{x}^{\text{T}}{A}^{k}x+{\left({b}^{k}\right)}^{\text{T}}x\cdot t+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}\cdot {t}^{2}\right]\ge 1,k=1,\cdots ,m\\ {t}^{2}=1.\end{array}$ (9)

Let ${x}^{*}$ be the global optimal solution of the above problem, with objective value $v\left({x}^{*}\right)$. Assume $X\succcurlyeq 0$ with the block structure:

$X=\left[\begin{array}{cc}{X}^{\left(1\right)}& {X}^{\left(3\right)}\\ {\left({X}^{\left(3\right)}\right)}^{\text{T}}& {X}^{\left(2\right)}\end{array}\right]\in {S}_{+}^{n+1}.$ (10)

Define

$B=\left[\begin{array}{cc}A& 0\\ 0& 0\end{array}\right]\succcurlyeq 0,\text{\hspace{0.17em}}{B}^{k}=\frac{1}{1+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}}\left[\begin{array}{cc}{A}^{k}& \frac{1}{2}{b}^{k}\\ \frac{1}{2}{\left({b}^{k}\right)}^{\text{T}}& {\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}\end{array}\right]\succcurlyeq 0,\text{\hspace{0.17em}}k=1,\cdots ,m.$ (11)

By letting $X=z{z}^{\text{T}}$ with $z={\left({x}^{\text{T}},t\right)}^{\text{T}}$ and dropping the rank-one constraint, the semidefinite programming relaxation of (9) can be written as follows.

$\begin{array}{l}minB•X\\ \text{s}\text{.t}.{B}^{k}•X\ge 1,k=1,\cdots ,m\\ {X}_{n+1,n+1}=1,X\succcurlyeq 0,X\in {R}^{\left(n+1\right)×\left(n+1\right)}.\end{array}$ (SDP)

An optimal solution of the relaxation (SDP) can be computed efficiently using, say, interior-point methods; see  and references therein.
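As a sanity check on the relaxation, the following numpy sketch (the function name `lift_matrices` is ours) builds $B$ and the ${B}^{k}$ of (11) and verifies that, for the rank-one lifting $X=z{z}^{\text{T}}$ with $z={\left({x}^{\text{T}},1\right)}^{\text{T}}$, $B•X$ reproduces the objective and ${B}^{k}•X$ reproduces the k-th normalized constraint value of (9):

```python
import numpy as np

def lift_matrices(A, Aks, bks, yks):
    """Build B and the B^k of (11) for the homogenized problem (9)."""
    n = A.shape[0]
    B = np.zeros((n + 1, n + 1))
    B[:n, :n] = A
    Bks = []
    for Ak, bk, yk in zip(Aks, bks, yks):
        s = 1.0 + yk @ Ak @ yk            # normalizing constant 1 + (y^k)^T A^k y^k
        Bk = np.zeros((n + 1, n + 1))
        Bk[:n, :n] = Ak
        Bk[:n, n] = 0.5 * bk
        Bk[n, :n] = 0.5 * bk
        Bk[n, n] = yk @ Ak @ yk
        Bks.append(Bk / s)
    return B, Bks

A = np.eye(2)
Ak = np.eye(2)
bk = np.array([2.0, 0.0])
yk = bk / 2.0                              # chosen so that b^k = 2 A^k y^k holds
B, Bks = lift_matrices(A, [Ak], [bk], [yk])

x = np.array([1.0, 1.0])
z = np.append(x, 1.0)                      # homogenizing variable t = 1
X = np.outer(z, z)                         # rank-one lifting X = z z^T
print(np.trace(B @ X))                     # equals x^T A x = 2.0
lhs = (x @ Ak @ x + bk @ x + yk @ Ak @ yk) / (1 + yk @ Ak @ yk)
print(np.isclose(np.trace(Bks[0] @ X), lhs))   # True
```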

3. Algorithm

In this section, we present an effective algorithm for solving (1). The algorithm consists of two stages. The main idea is as follows: the first stage produces a solution satisfying the first constraint of problem (9); the second stage makes a small change to that solution to obtain a feasible solution of (9). The randomized algorithm is set out below.

3.1. The First Stage

The first stage of the algorithm uses the randomization procedure proposed by Luo et al.  . We first obtain an optimal solution ${X}^{*}$ of (SDP), and then construct a solution satisfying the first constraint of problem (9) by the following randomization procedure:

In step 1, generate a random vector $\xi \in {R}^{n+1}$ from the Gaussian distribution $N\left(0,{X}^{*}\right)$; in step 2, set $\stackrel{˜}{x}=\xi /\sqrt{\underset{k}{\mathrm{min}}{\xi }^{\text{T}}{B}^{k}\xi }$. It can then be easily verified that $\stackrel{˜}{x}={\left({x}^{\text{T}},{\stackrel{˜}{x}}_{n+1}\right)}^{\text{T}}$ satisfies the first constraint of problem (9):

$\frac{1}{1+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}}\left[{x}^{\text{T}}{A}^{k}x+{\left({b}^{k}\right)}^{\text{T}}\cdot x\cdot {\stackrel{˜}{x}}_{n+1}+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}\cdot {\stackrel{˜}{x}}_{n+1}^{2}\right]=\frac{{\xi }^{\text{T}}{B}^{k}\xi }{\underset{k}{min}{\xi }^{\text{T}}{B}^{k}\xi }\ge 1,$ (12)

(12) is equivalent to:

$\frac{1}{1+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}}\left[{\left(\frac{x}{{\stackrel{˜}{x}}_{n+1}}\right)}^{\text{T}}{A}^{k}\frac{x}{{\stackrel{˜}{x}}_{n+1}}+{b}^{k}\frac{x}{{\stackrel{˜}{x}}_{n+1}}+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}\right]\ge \frac{1}{{\stackrel{˜}{x}}_{n+1}^{2}},\forall k=1,\cdots ,m.$ (13)
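The first-stage randomization suggested by (12) and (15) — draw $\xi \sim N\left(0,{X}^{*}\right)$ and scale by $\sqrt{{\mathrm{min}}_{k}{\xi }^{\text{T}}{B}^{k}\xi }$ — can be sketched in numpy as follows; here a stand-in identity matrix replaces an actual (SDP) solution ${X}^{*}$, and the function name is ours:

```python
import numpy as np

def randomized_round(Xstar, Bks, rng):
    """One draw of the first-stage randomization:
    sample xi ~ N(0, X*) and scale so that min_k xi^T B^k xi = 1."""
    n1 = Xstar.shape[0]
    # Sample with covariance X* via its eigendecomposition (X* may be singular).
    w, V = np.linalg.eigh(Xstar)
    w = np.clip(w, 0.0, None)
    xi = V @ (np.sqrt(w) * rng.standard_normal(n1))
    scale = min(xi @ Bk @ xi for Bk in Bks)
    return xi / np.sqrt(scale)

rng = np.random.default_rng(0)
Xstar = np.eye(3)                  # stand-in for an (SDP) optimal solution
Bks = [np.eye(3)]                  # one lifted constraint matrix B^1
xt = randomized_round(Xstar, Bks, rng)
# After scaling, every lifted constraint holds: xt^T B^k xt >= 1.
print(all(xt @ Bk @ xt >= 1 - 1e-9 for Bk in Bks))   # True
```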

Lemma 1 For $\stackrel{˜}{x}$ generated in step 2, we have that

$pr\left(\frac{1}{{\stackrel{˜}{x}}_{n+1}^{2}}\ge \frac{4}{{10}^{4}{m}^{2}}\right)\ge \frac{1}{100}.$ (14)

Proof. By step 2, we first have

${\stackrel{˜}{x}}_{n+1}=\frac{{e}_{n+1}^{\text{T}}\xi }{\sqrt{\underset{k}{min}{\xi }^{\text{T}}{B}^{k}\xi }},$ (15)

where ${e}_{n+1}\in {R}^{\left(n+1\right)×1}$ is a vector with the $\left(n+1\right)\text{th}$ element being 1 and all the other elements being 0. By denoting $Q={e}_{n+1}{e}_{n+1}^{\text{T}},$ we obtain that

$\begin{array}{c}pr\left(\frac{1}{{\stackrel{˜}{x}}_{n+1}^{2}}\le M\right)=pr\left(\frac{1}{M}\cdot \underset{k}{min}{\xi }^{\text{T}}{B}^{k}\xi \le {\xi }^{\text{T}}{e}_{n+1}{e}_{n+1}^{\text{T}}\xi \right)\\ =pr\left(\frac{1}{M}\cdot \underset{k}{min}{\xi }^{\text{T}}{B}^{k}\xi \le {\xi }^{\text{T}}Q\xi \right).\end{array}$ (16)

By using the total probability formula for the last term in (16), we have

$\begin{array}{l}\le pr\left(\frac{1}{M}\cdot M\le {\xi }^{\text{T}}Q\xi \right)\cdot pr\left(M\le \underset{k}{min}{\xi }^{\text{T}}{B}^{k}\xi \right)+pr\left(M>\underset{k}{min}{\xi }^{\text{T}}{B}^{k}\xi \right)\cdot 1\\ \le pr\left(1\le {\xi }^{\text{T}}Q\xi \right)+pr\left(M>\underset{k}{min}{\xi }^{\text{T}}{B}^{k}\xi \right).\end{array}$ (17)

By Lemma 3.1 and Lemma 3.2 in  ,

$pr\left(1\le {\xi }^{\text{T}}Q\xi \right)=pr\left(Tr\left(Q{X}^{*}\right)\le {\xi }^{\text{T}}Q\xi \right)=pr\left(E\left({\xi }^{\text{T}}Q\xi \right)\le {\xi }^{\text{T}}Q\xi \right)<1-\frac{3}{100}.$ (18)

Since ${X}^{*}$ is feasible for (SDP), it follows that $Tr\left({B}^{k}{X}^{*}\right)\ge 1$ for all $k=1,\cdots ,m$. Since $E\left({\xi }^{\text{T}}{B}^{k}\xi \right)=Tr\left({B}^{k}{X}^{*}\right)\ge 1,$ we have

$\begin{array}{l}pr\left(M>\underset{k}{\mathrm{min}}{\xi }^{\text{T}}{B}^{k}\xi \right)\le pr\left(MTr\left({B}^{k}{X}^{*}\right)>\underset{k}{\mathrm{min}}{\xi }^{\text{T}}{B}^{k}\xi \right)\\ =pr\left(ME\left({\xi }^{\text{T}}{B}^{k}\xi \right)>\underset{k}{\mathrm{min}}{\xi }^{\text{T}}{B}^{k}\xi \right)\le \underset{k=1}{\overset{m}{\sum }}\text{ }\text{ }pr\left(ME\left({\xi }^{\text{T}}{B}^{k}\xi \right)>{\xi }^{\text{T}}{B}^{k}\xi \right).\end{array}$ (19)

According to Lemma 1 in  , we have

$pr\left(ME\left({\xi }^{\text{T}}{B}^{k}\xi \right)>{\xi }^{\text{T}}{B}^{k}\xi \right)\le \mathrm{max}\left\{\sqrt{M},\frac{2\left(\sqrt{2\left(m+1\right)}-1\right)M}{\text{π}-2}\right\}.$ (20)

Thus, it follows from (17), (18), (19) and (20) that:

$pr\left(\frac{1}{{\stackrel{˜}{x}}_{n+1}^{2}}\le M\right)\le 1-\frac{3}{100}+m\cdot \mathrm{max}\left\{\sqrt{M},\frac{2\left(\sqrt{2\left(m+1\right)}-1\right)M}{\text{π}-2}\right\}.$ (21)

The proof is completed by setting $M=\frac{4}{{10}^{4}{m}^{2}}.$

Note that by Lemma 1 and (13), it can be concluded that

$pr\left\{\frac{1}{1+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}}\left[{\left(\frac{x}{{\stackrel{˜}{x}}_{n+1}}\right)}^{\text{T}}{A}^{k}\frac{x}{{\stackrel{˜}{x}}_{n+1}}+{b}^{k}\frac{x}{{\stackrel{˜}{x}}_{n+1}}+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}\right]\ge \frac{4}{{10}^{4}{m}^{2}}\right\}\ge \frac{1}{100}.$ (22)

So there exists an $\stackrel{¯}{x}$ that, for all $k=1,\cdots ,m$, satisfies:

$\frac{1}{1+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}}\left[{\left(\stackrel{¯}{x}\right)}^{\text{T}}{A}^{k}\stackrel{¯}{x}+{b}^{k}\stackrel{¯}{x}+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}\right]\ge \frac{4}{{10}^{4}{m}^{2}}.$ (23)

3.2. The Second Stage

In this stage, we modify the solution constructed in the first stage so that it satisfies problem (9), making use of the algorithm in  .

The procedure is as follows:

Let $f\left(\tau \right)={\tau }^{2}{\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}+\tau {\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}$, so that $f\left(\tau \right)$ is a quadratic function of $\tau$. The axis of symmetry of $f\left(\tau \right)$ is:

$\tau =\frac{-{\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}}{2{\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}}.$ (24)

Since $\stackrel{^}{x}$ may not satisfy ${\left(\stackrel{^}{x}\right)}^{\text{T}}{A}^{k}\stackrel{^}{x}+{\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}\ge 1$ for all $k=1,\cdots ,m$, we introduce a parameter $\stackrel{^}{\tau }$ and construct a new solution $\stackrel{^}{\tau }\stackrel{^}{x}$, which is a feasible solution of (1).

When ${\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}+{\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}>1$ , the symmetry axis of $f\left(\tau \right)$ satisfies:

$\tau =\frac{-{\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}}{2{\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}}<\frac{{\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}-1}{2{\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}}=\frac{1}{2}-\frac{1}{2{\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}}<\frac{1}{2}.$ (25)

So for all $\tau >1$, ${\left(\tau \stackrel{^}{x}\right)}^{\text{T}}{A}^{k}\left(\tau \stackrel{^}{x}\right)+{\left({b}^{k}\right)}^{\text{T}}\left(\tau \stackrel{^}{x}\right)>1$ holds. This is helpful, because we then only need to find $\tau$ satisfying ${\left(\tau \stackrel{^}{x}\right)}^{\text{T}}{A}^{k}\left(\tau \stackrel{^}{x}\right)+{\left({b}^{k}\right)}^{\text{T}}\left(\tau \stackrel{^}{x}\right)\ge 1$ in the case ${\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}+{\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}\le 1$.

When ${\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}+{\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}\le 1$, since ${A}^{k}\succcurlyeq 0\left(k=1,\cdots ,m\right)$ are symmetric and ${b}^{k}=2{A}^{k}{y}^{k}$, we introduce the following notation to simplify the writing:

$\overbrace{{x}_{k}}=\|{\left({A}^{k}\right)}^{\frac{1}{2}}\stackrel{^}{x}\|,\ \overbrace{{y}_{k}}=\|{\left({A}^{k}\right)}^{\frac{1}{2}}{y}^{k}\|,\ \overbrace{{z}_{k}}=\|{\left({A}^{k}\right)}^{\frac{1}{2}}\stackrel{^}{x}+{\left({A}^{k}\right)}^{\frac{1}{2}}{y}^{k}\|,$ (26)

So

$f\left(\tau \right)={\tau }^{2}{\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}+\tau {\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}=\left({\tau }^{2}-\tau \right){\overbrace{{x}_{k}}}^{2}+\tau \left[{\overbrace{{x}_{k}}}^{2}+{\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}+{\overbrace{{y}_{k}}}^{2}\right]-\tau {\overbrace{{y}_{k}}}^{2}=\left({\tau }^{2}-\tau \right){\overbrace{{x}_{k}}}^{2}+\tau {\overbrace{{z}_{k}}}^{2}-\tau {\overbrace{{y}_{k}}}^{2}.$ (27)

According to the norm inequality:

$‖x-y‖\ge |‖x‖-‖y‖|.$ (28)

we have

${\overbrace{{x}_{k}}}^{2}={\|{\left({A}^{k}\right)}^{\frac{1}{2}}\stackrel{^}{x}\|}^{2}\ge {\left(\left|\|{\left({A}^{k}\right)}^{\frac{1}{2}}\stackrel{^}{x}+{\left({A}^{k}\right)}^{\frac{1}{2}}{y}^{k}\|-\|{\left({A}^{k}\right)}^{\frac{1}{2}}{y}^{k}\|\right|\right)}^{2}={\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right)}^{2}.$ (29)

For $\tau \in \left[1,+\infty \right)$, using (28) and (29), the last term in (27) can be bounded below:

$f\left(\tau \right)\ge \left({\tau }^{2}-\tau \right){\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right)}^{2}+\tau {\overbrace{{z}_{k}}}^{2}-\tau {\overbrace{{y}_{k}}}^{2}={\tau }^{2}{\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right)}^{2}+2\tau \left(\overbrace{{y}_{k}}\overbrace{{z}_{k}}-{\overbrace{{y}_{k}}}^{2}\right)={\tau }^{2}{\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right)}^{2}+2\tau \overbrace{{y}_{k}}\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right),$ (30)

Whenever $\tau \ge \frac{-\overbrace{{y}_{k}}\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right)+\sqrt{{\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right)}^{2}\left(1+{\overbrace{{y}_{k}}}^{2}\right)}}{{\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right)}^{2}},$ it can be easily checked that ${\left(\tau \stackrel{^}{x}\right)}^{\text{T}}{A}^{k}\left(\tau \stackrel{^}{x}\right)+{\left({b}^{k}\right)}^{\text{T}}\left(\tau \stackrel{^}{x}\right)\ge 1.$
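As an illustration of this second-stage scaling, the numpy sketch below (function name ours) computes the threshold on $\tau$ via quadratic forms, using ${\|{\left({A}^{k}\right)}^{1/2}v\|}^{2}={v}^{\text{T}}{A}^{k}v$, and checks that the scaled point satisfies the k-th constraint of (1); it assumes $\overbrace{{z}_{k}}\ne \overbrace{{y}_{k}}$:

```python
import numpy as np

def tau_threshold(Ak, yk, xhat):
    """Threshold on tau from the second stage, using the hatted scalars of (26),
    computed via quadratic forms (no matrix square root needed).
    Assumes z_k != y_k so the denominator is nonzero."""
    y = np.sqrt(yk @ Ak @ yk)                      # y_k
    z = np.sqrt((xhat + yk) @ Ak @ (xhat + yk))    # z_k
    d = z - y
    return (-y * d + np.sqrt(d * d * (1.0 + y * y))) / (d * d)

Ak = np.eye(2)
yk = np.array([0.5, 0.0])
bk = 2.0 * Ak @ yk                                 # the assumption b^k = 2 A^k y^k
xhat = np.array([0.3, 0.0])                        # infeasible: 0.09 + 0.3 < 1
tau = max(1.0, tau_threshold(Ak, yk, xhat))
xs = tau * xhat
print(xs @ Ak @ xs + bk @ xs >= 1 - 1e-9)          # True: scaled point is feasible
```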

From (23), we can get

$\frac{1}{1+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}}\left[{\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}+{\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}+{\left({y}^{k}\right)}^{\text{T}}{A}^{k}{y}^{k}\right]\ge \frac{4}{{10}^{4}{m}^{2}}\Leftrightarrow \overbrace{{z}_{k}}\ge \sqrt{\frac{1}{50m}\left(1+{\overbrace{{y}_{k}}}^{2}\right)},$ (31)

and we also have

${\stackrel{^}{x}}^{\text{T}}{A}^{k}\stackrel{^}{x}+{\left({b}^{k}\right)}^{\text{T}}\stackrel{^}{x}\le 1\Leftrightarrow \overbrace{{z}_{k}}\le \sqrt{1+{\overbrace{{y}_{k}}}^{2}}.$ (32)

Thus

$\stackrel{^}{\tau }\le \underset{k=1,\cdots ,m}{\mathrm{max}}\frac{-\overbrace{{y}_{k}}\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right)+\sqrt{{\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right)}^{2}\left(1+{\overbrace{{y}_{k}}}^{2}\right)}}{{\left(\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right)}^{2}}\le \underset{k=1,\cdots ,m}{\mathrm{max}}\frac{\sqrt{{\overbrace{{y}_{k}}}^{2}+1}+\overbrace{{y}_{k}}}{\left|\overbrace{{z}_{k}}-\overbrace{{y}_{k}}\right|}.$ (33)

Using (31) and (32), the last term in (33) can be bounded as

$\le \underset{k=1,\cdots ,m}{\mathrm{max}}\left\{\frac{\sqrt{1+{\overbrace{{y}_{k}}}^{2}}+\overbrace{{y}_{k}}}{\left|\sqrt{\frac{1}{50m}\left(1+{\overbrace{{y}_{k}}}^{2}\right)}-\overbrace{{y}_{k}}\right|},\frac{\sqrt{1+{\overbrace{{y}_{k}}}^{2}}+\overbrace{{y}_{k}}}{\sqrt{1+{\overbrace{{y}_{k}}}^{2}}-\overbrace{{y}_{k}}}\right\}.$ (34)

We will give the analysis of (34) as follows.

First, let

$f\left(\overbrace{{y}_{k}}\right)=\frac{\sqrt{1+{\overbrace{{y}_{k}}}^{2}}+\overbrace{{y}_{k}}}{\sqrt{1+{\overbrace{{y}_{k}}}^{2}}-\overbrace{{y}_{k}}},\quad g\left(\overbrace{{y}_{k}}\right)=\frac{\sqrt{1+{\overbrace{{y}_{k}}}^{2}}+\overbrace{{y}_{k}}}{\left|\sqrt{\frac{1}{50m}\left(1+{\overbrace{{y}_{k}}}^{2}\right)}-\overbrace{{y}_{k}}\right|}.$ (35)

We can simplify $f\left(\overbrace{{y}_{k}}\right)$:

$f\left(\overbrace{{y}_{k}}\right)=\frac{\sqrt{1+{\overbrace{{y}_{k}}}^{2}}+\overbrace{{y}_{k}}}{\sqrt{1+{\overbrace{{y}_{k}}}^{2}}-\overbrace{{y}_{k}}}={\left(\sqrt{1+{\overbrace{{y}_{k}}}^{2}}+\overbrace{{y}_{k}}\right)}^{2}.$ (36)

Since $\overbrace{{y}_{k}}\ge 0$, $f\left(\overbrace{{y}_{k}}\right)$ is an increasing function of $\overbrace{{y}_{k}}$.

However, $g\left(\overbrace{{y}_{k}}\right)$ depends on both $\overbrace{{y}_{k}}$ and the number of constraints $m$. A simple calculation shows that when $\overbrace{{y}_{k}}<\sqrt{\frac{1}{50m-1}}$, $g\left(\overbrace{{y}_{k}}\right)$ is increasing in $\overbrace{{y}_{k}}$, and in this range $g\left(\overbrace{{y}_{k}}\right)>f\left(\overbrace{{y}_{k}}\right)$. When $\overbrace{{y}_{k}}>\sqrt{\frac{1}{50m-1}}$, $g\left(\overbrace{{y}_{k}}\right)$ decreases as $\overbrace{{y}_{k}}$ increases.
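These monotonicity claims can be spot-checked numerically; the sketch below takes $m=2$ (so the breakpoint is $\sqrt{1/99}\approx 0.1$) and verifies that f is increasing and that g dominates f below the breakpoint:

```python
import numpy as np

def f(y):
    return (np.sqrt(1 + y * y) + y) ** 2        # simplified form (36)

def g(y, m):
    return (np.sqrt(1 + y * y) + y) / abs(np.sqrt((1 + y * y) / (50 * m)) - y)

m = 2
ys = np.linspace(0.0, 0.05, 50)                 # grid below the breakpoint sqrt(1/99)
print(np.all(np.diff([f(y) for y in ys]) > 0))  # f increasing: True
print(all(g(y, m) > f(y) for y in ys))          # g > f below the breakpoint: True
```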

Let

${\gamma }_{1}=\underset{k=1,\cdots ,m}{\mathrm{max}}\overbrace{{y}_{k}},\quad {\gamma }_{2}=\underset{k=1,\cdots ,m}{\mathrm{min}}\overbrace{{y}_{k}}.$ (37)

Then we can write (34) as a piecewise function of ${\gamma }_{1}$ and ${\gamma }_{2}$:

$f\left({\gamma }_{1},{\gamma }_{2}\right)=\begin{cases}\frac{\sqrt{{\gamma }_{1}^{2}+1}+{\gamma }_{1}}{\left|\sqrt{\frac{1}{50m}\left(1+{\gamma }_{1}^{2}\right)}-{\gamma }_{1}\right|}, & {\gamma }_{1},{\gamma }_{2}<\sqrt{\frac{1}{50m-1}}\\ +\infty , & {\gamma }_{1}>\sqrt{\frac{1}{50m-1}},\ {\gamma }_{2}<\sqrt{\frac{1}{50m-1}}\\ \mathrm{max}\left\{\frac{\sqrt{{\gamma }_{2}^{2}+1}+{\gamma }_{2}}{\left|\sqrt{\frac{1}{50m}\left(1+{\gamma }_{2}^{2}\right)}-{\gamma }_{2}\right|},\frac{\sqrt{{\gamma }_{1}^{2}+1}+{\gamma }_{1}}{\sqrt{{\gamma }_{1}^{2}+1}-{\gamma }_{1}}\right\}, & {\gamma }_{1},{\gamma }_{2}>\sqrt{\frac{1}{50m-1}}.\end{cases}$ (38)

So it can be easily verified that

$x=f\left({\gamma }_{1},{\gamma }_{2}\right)\cdot \stackrel{^}{x},\quad \text{with}\ {\gamma }_{1}=\underset{k=1,\cdots ,m}{\mathrm{max}}\overbrace{{y}_{k}},\ {\gamma }_{2}=\underset{k=1,\cdots ,m}{\mathrm{min}}\overbrace{{y}_{k}},$ (39)

is a feasible solution of the original problem.

4. Conclusions

The quadratic optimization problem with non-convex inhomogeneous quadratic constraints is NP-hard, and no effective algorithm for it was previously available. In this paper, we put forward an effective algorithm through which a feasible solution of (1) can be obtained. Transforming the original problem into (SDP) is a key step in solving the problem, so we give the semidefinite programming (SDP) relaxation of (1) in Section 2 and then, in Section 3, propose an effective algorithm to construct a feasible solution of (1).

In the future, we will study the quality of the feasible solution of (1) and give some numerical experiments to verify it. We will also consider the problem with an inhomogeneous objective function, for which we hope to find an algorithm based on the effective algorithm put forward in this paper.

Cite this paper: Lou, K. (2017) An Effective Algorithm for Quadratic Optimization with Non-Convex Inhomogeneous Quadratic Constraints. Advances in Pure Mathematics, 7, 314-323. doi: 10.4236/apm.2017.74018.
References

   Pardalos, P.M. and Schnitger, G. (1988) Checking Local Optimality in Constrained Quadratic Programming is NP-hard. Operations Research Letters, 7, 33-35.
https://doi.org/10.1016/0167-6377(88)90049-1

   Pardalos, P.M. and Vavasis, S.A. (1991) Quadratic Programming with One Negative Eigenvalue Is NP-Hard. The Journal of Global Optimization, 1, 15-22.
https://doi.org/10.1007/BF00120662

   Luo, Z.-Q., Sidiropoulos, N., Tseng, P. and Zhang, S. (2007) Approximation Bounds for Quadratic Optimization with Homogeneous Quadratic Constraints. SIAM Journal on Optimization, 18, 1-28.
https://doi.org/10.1137/050642691

   Lovász, L. and Schrijver, A. (1991) Cones of Matrices and Set-functions and 0-1 Optimization. SIAM Journal on Optimization, 1, 166-190.
https://doi.org/10.1137/0801013

   Shor, N.Z. (1987) Quadratic Optimization Problems. Soviet Journal of Computer and Systems Sciences, 25, 1-11.

   Alizadeh, F. (1995) Interior Point Methods in Semidefinite Programming with Applications to Combinatorial Optimization. SIAM Journal on Optimization, 5, 13-51.
https://doi.org/10.1137/0805002

   Nesterov, Y. and Nemirovskii, A. (1994) Interior-Point Polynomial Algorithms in Convex Programming. Studies in Applied and Numerical Mathematics, Philadelphia, PA.
https://doi.org/10.1137/1.9781611970791

   Nemirovski, A., Roos, C. and Terlaky, T. (1999) On Maximization of Quadratic Form over Intersection of Ellipsoids with Common Center. Mathematical Programming, 86, 463-473.
https://doi.org/10.1007/s101070050100

   Luo, Z.Q., Sidiropoulos, N.D., Tseng, P. and Zhang, S. (2007) Approximation Bounds for Quadratic Optimization with Homogeneous Quadratic Constraints. SIAM Journal on Optimization, 18, 1-28.
https://doi.org/10.1137/050642691

   He, S., Luo, Z.Q., Nie, J. and Zhang, S. (2008) Semidefinite Relaxation Bounds for Indefinite Homogeneous Quadratic Optimization. SIAM Journal on Optimization, 19, 503-523.
https://doi.org/10.1137/070679041

   Ye, Y. and Zhang, S. (2003) New Results on Quadratic Minimization. SIAM Journal on Optimization, 14, 245-267.
https://doi.org/10.1137/S105262340139001X

   Ye, Y. (1999) Approximating Global Quadratic Optimization with Convex Quadratic Constraints. The Journal of Global Optimization, 15, 1-17.
https://doi.org/10.1023/A:1008370723217

   Pataki, G. (2003) Computational Semidefinite and Second-Order Cone Programming: The State of the Art. Mathematical Programming, 95, 3-51.

   Hsia, Y., Wang, S. and Xu, Z. (2015) Improved Semidefinite Approximation Bounds for Nonconvex Nonhomogeneous Quadratic Optimization with Ellipsoid Constraints. Operations Research Letters, 43, 378-383.
