On the Solution of the Eigenvalue Complementarity Problem by a Line Search Filter-SQP Algorithm


1. Introduction

The Eigenvalue Complementarity Problem (EiCP) appears in the study of static equilibrium states of finite dimensional mechanical systems with unilateral frictional contact [1] [2] and takes the following form:

$\left(\text{EiCP}\right):\ \text{Find}\ \lambda >0,\ x\ne 0,\ \text{such that}\ \begin{cases}\omega =\left(\lambda B-A\right)x\\ \omega \ge 0\\ x\ge 0\\ {\omega}^{\text{T}}x=0\end{cases}$ (1)

where B is a positive definite matrix. Note that any solution with $\omega =0$ corresponds to a solution of the generalized eigenvalue problem $\left(\lambda B-A\right)x=0$.

The Generalized Eigenvalue Complementarity Problem (GEiCP) is

${\left(\text{GEiCP}\right)}_{J}:\ \text{Find}\ \lambda >0,\ x\ne 0,\ \text{such that}\ \begin{cases}\omega =\left(\lambda B-A\right)x\\ {\omega}_{\bar{J}}=0\\ {\omega}_{J}\ge 0\\ {x}_{J}\ge 0\\ {\omega}_{J}^{\text{T}}{x}_{J}=0\end{cases}$

where B is a positive definite matrix, $J\subseteq \left\{1,2,\cdots ,n\right\}$ is given, and $\bar{J}=\left\{1,2,\cdots ,n\right\}\backslash J$. The (EiCP) is clearly the particular case of the (GEiCP)_{J} with $J=\left\{1,2,\cdots ,n\right\}$.

For any solution $\left(\lambda ,x,\omega \right)$, the value $\lambda$ is called a (general) complementary eigenvalue of $\left(A,B\right)$, and x is a corresponding (general) complementary eigenvector [3] .

In this paper, the symmetric Eigenvalue Complementarity Problem is studied. Some properties of the EiCP are derived, including a necessary and sufficient condition for solvability; when this condition holds, an initial point can easily be obtained. We then discuss a quadratic programming formulation of the (EiCP) and introduce a line search filter-SQP algorithm to solve it.

2. The Symmetric Eigenvalue Complementarity Problem

In this section, the symmetric EiCP is considered, where the matrices A and B are both symmetric. In this case, the EiCP is closely related to the classical Eigenvalue Problem, and some properties are derived.

Proposition 2.1. The solvability of the (GEiCP)_{J} is in general an NP-complete problem.

It follows that solving the (GEiCP) is in general an NP-hard problem. Despite this fact, for some classes of matrices the solvability of the corresponding (EiCP) can be settled easily.

Since the set of complementary eigenvectors of a given complementary eigenvalue is a cone, there is no loss of generality in restricting the problem to finding solutions satisfying ${\Vert x\Vert}_{2}=1$, which replaces the constraint $x\ne 0$. In the case of the (EiCP), the linear constraint ${\Vert x\Vert}_{1}={e}^{\text{T}}x=1$ can be considered instead of ${\Vert x\Vert}_{2}=1$, since $x\ge 0$.

When A and B are both symmetric, the (EiCP) is closely related to the classical Eigenvalue Problem [4] [5] . The complementarity condition ${\omega}^{\text{T}}x=0$ can be rewritten as ${x}^{\text{T}}\left(\lambda Bx-Ax\right)=0$. Since $x\ne 0$ and B is positive definite, $\lambda \left(x\right)=\frac{{x}^{\text{T}}Ax}{{x}^{\text{T}}Bx}$, where $\lambda \left(x\right)$ is the generalized Rayleigh quotient. The gradient of this function is $\nabla \lambda \left(x\right)=\frac{2}{{x}^{\text{T}}Bx}\left[A-\lambda \left(x\right)B\right]x$, and $\nabla \lambda \left(x\right)=0$ if and only if $\left[A-\lambda \left(x\right)B\right]x=0$. Analogously to the classical case, stationary points of the Rayleigh quotient in the nonnegative orthant with $\lambda \left(x\right)>0$ are solutions of the (EiCP). This is the main result concerning the practical solution of the symmetric (EiCP).
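The relation between the generalized Rayleigh quotient and its gradient can be checked numerically. The following sketch uses Python with NumPy rather than the MATLAB of the paper's experiments; the instance and all names are illustrative. It evaluates $\lambda(x)$ and $\nabla\lambda(x)$ and compares the gradient against central finite differences:

```python
import numpy as np

def rayleigh(A, B, x):
    """Generalized Rayleigh quotient lambda(x) = x^T A x / x^T B x."""
    return (x @ A @ x) / (x @ B @ x)

def rayleigh_grad(A, B, x):
    """Gradient formula from the text: (2 / x^T B x) [A - lambda(x) B] x."""
    return (2.0 / (x @ B @ x)) * ((A - rayleigh(A, B, x) * B) @ x)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T                      # a symmetric A
B = np.eye(4)                    # B positive definite (identity for simplicity)
x = rng.random(4) + 0.1          # a point in the interior of the nonnegative orthant

# Compare the closed-form gradient with central finite differences.
g = rayleigh_grad(A, B, x)
eps = 1e-6
fd = np.array([(rayleigh(A, B, x + eps * e) - rayleigh(A, B, x - eps * e)) / (2 * eps)
               for e in np.eye(4)])
assert np.allclose(g, fd, atol=1e-5)
```

Since the set of eigenvectors is a cone, $\lambda(x)$ is invariant under positive scaling of x, which the quotient form makes evident.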

Proposition 2.2. (EiCP) is solvable if and only if there exists some $x\ge 0$ such that ${x}^{\text{T}}Ax>0$ .

Proposition 2.2 gives a necessary and sufficient condition for solvability. For the practical solution of the symmetric (EiCP), finding such an initial point is in general an NP-hard problem, but for some classes of matrices an initial solution for the (EiCP) can be easily obtained.

Proposition 2.3. Suppose that the matrix A satisfies one of the following conditions:

1) $\exists i:{A}_{ii}>0;$

2) $\exists i,j:{A}_{ii}=0,{A}_{jj}\le 0\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}{A}_{ij}>0;$

3) $A\ge \mathrm{0,}A\ne \mathrm{0;}$

4) A is an S-matrix ( $\exists x\ge 0:Ax>0$ ).

Then a point $\stackrel{\xaf}{x}\in X$ such that ${\stackrel{\xaf}{x}}^{\text{T}}A\stackrel{\xaf}{x}>0$ can be easily obtained and the corresponding (EiCP) is solvable.
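Conditions 1) and 2) of Proposition 2.3 suggest a direct recipe for such a point. The sketch below (Python/NumPy, with illustrative names; condition 4), which requires a linear feasibility solve, is omitted) returns a nonnegative $\bar{x}$ with ${\bar{x}}^{\text{T}}A\bar{x}>0$ when one of the first two conditions applies; note that condition 3) reduces to 1) when some diagonal entry is positive and to 2) otherwise:

```python
import numpy as np

def initial_point(A, t=1e-3):
    """Try to produce x >= 0 with x^T A x > 0 via conditions 1)-2) of Prop. 2.3.
    Condition 4) (S-matrix) needs a linear feasibility solve and is omitted.
    Returns None when neither simple condition applies."""
    n = A.shape[0]
    d = np.diag(A)
    i = int(np.argmax(d))
    if d[i] > 0:                         # 1) x = e_i gives x^T A x = A_ii > 0
        return np.eye(n)[i]
    for i in range(n):                   # 2) x = e_i + t e_j gives
        if d[i] == 0:                    #    x^T A x = 2 t A_ij + t^2 A_jj > 0
            for j in range(n):           #    for small enough t > 0
                if j != i and d[j] <= 0 and A[i, j] > 0:
                    x = np.zeros(n)
                    x[i], x[j] = 1.0, t
                    if x @ A @ x > 0:
                        return x
    return None

A = np.array([[0.0, 2.0],
              [2.0, -1.0]])             # condition 2) applies with i=0, j=1
x = initial_point(A)
assert x is not None and np.all(x >= 0) and x @ A @ x > 0
```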

An equivalent way to formulate the (EiCP) is through the quadratic formulation

$\left(\text{NLP}\right):\ \begin{cases}\text{Max}\ \ {x}^{\text{T}}Ax\\ \text{s.t.}\ \ \,{x}^{\text{T}}Bx\le 1\\ \qquad\ x\ge 0\end{cases}$ (2)

Theorem 2.1. If A is strictly copositive and $\bar{x}\ge 0$ is a stationary point of (2), then the pair $\left(\bar{x},\bar{\lambda}\right)$ with $\bar{\lambda}={\bar{x}}^{\text{T}}A\bar{x}$ is a solution of the (EiCP).
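As a small illustration of the formulation (2), the following sketch (Python with SciPy's SLSQP solver standing in for the paper's MATLAB code; the instance is illustrative) computes a stationary point of (2) for $A=\operatorname{diag}(2,1)$, $B=I$, and checks that $\bar{\lambda}={\bar{x}}^{\text{T}}A\bar{x}$ together with $\omega =\left(\bar{\lambda}B-A\right)\bar{x}$ satisfies the conditions of (1):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative instance: A = diag(2, 1), B = I. The global maximizer of (2)
# is x = e_1 with complementary eigenvalue lambda = x^T A x = 2.
A = np.diag([2.0, 1.0])
B = np.eye(2)

res = minimize(
    lambda x: -(x @ A @ x),              # maximize x^T A x
    x0=np.array([0.9, 0.3]),
    jac=lambda x: -2.0 * A @ x,
    bounds=[(0.0, None)] * 2,            # x >= 0
    constraints=[{"type": "ineq",        # x^T B x <= 1
                  "fun": lambda x: 1.0 - x @ B @ x,
                  "jac": lambda x: -2.0 * B @ x}],
    method="SLSQP",
    options={"ftol": 1e-12, "maxiter": 200},
)
assert res.success

x = res.x
lam = x @ A @ x                          # bar-lambda of Theorem 2.1
w = lam * (B @ x) - A @ x                # omega = (lambda B - A) x
# The triple (lambda, x, omega) satisfies the conditions of (EiCP) (1):
assert lam > 0 and np.all(x >= -1e-8) and np.all(w >= -1e-5)
assert abs(w @ x) < 1e-5
```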

3. A Line Search Filter-SQP Algorithm

In this section a line search filter-SQP algorithm is introduced to solve the formulation of the previous section. Sequential Quadratic Programming (SQP) is one of the most efficient methods for the numerical solution of constrained nonlinear optimization problems [6] . In earlier methods, a penalty function is typically used as a merit function to test the acceptability of the iterates. However, there are several difficulties associated with the use of a penalty function, in particular with the choice of the penalty parameter: too small a value may result in the loss of an optimal solution, while too large a value damps out the effect of the objective function. Fletcher and Leyffer proposed the filter method as an alternative to the traditional merit function for nonlinear optimization [7] . A significant advantage of the filter method is that it does not need an estimate of the penalty parameter, which can be difficult to obtain. More recently, the filter method has been further developed in [8] [9] .

Considering a suitable continuously differentiable objective function $f\left(x\right)$, the (NLP) can be reduced to the following nonlinear program

$\left(\text{P}\right):\ \begin{cases}\text{Min}\ \ f\left(x\right)\\ \text{s.t.}\ \ \,{g}_{j}\left(x\right)\le 0,\ j\in I\end{cases}$ (3)

where the objective function $f:{\mathbb{R}}^{n}\to \mathbb{R}$ and the constraints ${g}_{j}:{\mathbb{R}}^{n}\to \mathbb{R}$, $j\in I=\left\{1,2,\cdots ,m\right\}$, are twice continuously differentiable.

In SQP methods, at each iteration the search direction is generally obtained by solving the subproblem as follows:

$\text{\hspace{0.05em}}{\text{QP}}_{1}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\{\begin{array}{l}\text{Min}\text{\hspace{1em}}\frac{1}{2}{d}^{\text{T}}{B}_{k}d+\nabla f{\left({x}_{k}\right)}^{\text{T}}d\\ \text{s}\text{.t}\text{.}\text{\hspace{1em}}{g}_{j}\left({x}_{k}\right)+\nabla {g}_{j}{\left({x}_{k}\right)}^{\text{T}}d\le 0\\ \text{\hspace{1em}}\text{\hspace{1em}}j\in I=\left\{1,2,\cdots ,m\right\}\end{array}$ (4)

However, this QP subproblem has a serious shortcoming: the constraints in (QP_{1}) may be inconsistent. Much attention has been paid to overcoming this disadvantage [10] [11] [12] [13] . In particular, Liu and Li [12] and Liu and Zeng [13] proposed SQP algorithms with cautious update criteria, which can be considered modifications of the SQP algorithm given in [11] . In our algorithm, stimulated by [10] [11] [12] [13] , the quadratic subproblem (QP_{1}) is replaced by the following problem:

$\text{\hspace{0.05em}}Q\left({x}_{k}\mathrm{,}{B}_{k}\mathrm{,}{b}_{k}\right)\{\begin{array}{l}\text{Min}\text{\hspace{1em}}\frac{1}{2}{d}^{\text{T}}{B}_{k}d+\nabla f{\left({x}_{k}\right)}^{\text{T}}d+{b}_{k}t\\ \text{s}\text{.t}\text{.}\text{\hspace{1em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{g}_{j}\left({x}_{k}\right)+\nabla {g}_{j}{\left({x}_{k}\right)}^{\text{T}}d\le te\\ \text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}j\in I,t\ge 0\end{array}$ (5)

where ${b}_{k}$ is a positive parameter. Clearly, this subproblem is always consistent, and it is a convex program if ${B}_{k}$ is positive semidefinite. The KKT conditions for the subproblem $Q\left({x}_{k},{B}_{k},{b}_{k}\right)$ are

$\begin{cases}{g}_{j}\left({x}_{k}\right)+\nabla {g}_{j}{\left({x}_{k}\right)}^{\text{T}}{d}_{k}\le {t}_{k}\ \left(j\in I\right),\ {t}_{k}\ge 0\\ \nabla f\left({x}_{k}\right)+{B}_{k}{d}_{k}+{\displaystyle \sum_{j\in \mathcal{A}\left({x}_{k}\right)}}{\lambda}_{{k}^{+}}^{j}\nabla {g}_{j}\left({x}_{k}\right)=0\\ {b}_{k}-{\Vert {\lambda}_{{k}^{+}}\Vert}_{1}-{v}_{k}=0\\ {\lambda}_{{k}^{+}}^{j}\left({g}_{j}\left({x}_{k}\right)+\nabla {g}_{j}{\left({x}_{k}\right)}^{\text{T}}{d}_{k}-{t}_{k}\right)=0\ \left(j\in I\right)\\ {v}_{k}{t}_{k}=0\\ {\lambda}_{{k}^{+}}^{j}\ge 0\ \left(j\in I\right),\ {v}_{k}\ge 0\end{cases}$ (6)

where $\mathcal{A}\left({x}_{k}\right)=\left\{j:{g}_{j}\left({x}_{k}\right)+\nabla {g}_{j}{\left({x}_{k}\right)}^{\text{T}}{d}_{k}-{t}_{k}=0\right\}$ is the active set at $\left({d}_{k},{t}_{k}\right)$, ${\lambda}_{{k}^{+}}$ is the Lagrange multiplier corresponding to the subproblem $Q\left({x}_{k},{B}_{k},{b}_{k}\right)$, which can be used to determine a new estimate ${\lambda}_{k+1}$ for the next iteration, and ${\lambda}_{{k}^{+}}^{j}$ is the jth component of ${\lambda}_{{k}^{+}}$.

After ${d}_{k}$ has been computed, a step size ${\alpha}_{k}\in \left(\mathrm{0,1}\right]$ is determined in order to obtain a next iterate ${x}_{k+1}={x}_{k}+{\alpha}_{k}{d}_{k}$ and ${\lambda}_{k+1}={\lambda}_{k}+{\alpha}_{k}\left({\lambda}_{{k}^{+}}-{\lambda}_{k}\right)$ .
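The consistency of the relaxed subproblem (5) is easy to demonstrate: the point $\left(d,t\right)=\left(0,\max_{j}\left[{g}_{j}\left({x}_{k}\right)\right]_{+}\right)$ is always feasible. The sketch below (Python/SciPy; a general-purpose solver stands in for a dedicated QP code, and all data are illustrative) solves (5) for data whose plain linearization (4) is inconsistent:

```python
import numpy as np
from scipy.optimize import minimize

def solve_subproblem(Bk, grad_f, g_val, g_jac, bk):
    """Solve Q(x_k, B_k, b_k) from (5) over z = (d, t):
    min 0.5 d^T Bk d + grad_f^T d + bk * t,  s.t.  g_val + g_jac d <= t e, t >= 0."""
    n = len(grad_f)
    obj = lambda z: 0.5 * z[:n] @ Bk @ z[:n] + grad_f @ z[:n] + bk * z[n]
    cons = [{"type": "ineq",                     # t e - (g_val + g_jac d) >= 0
             "fun": lambda z: z[n] - (g_val + g_jac @ z[:n])}]
    bounds = [(None, None)] * n + [(0.0, None)]  # t >= 0
    # (d, t) = (0, max_j [g_j(x_k)]_+) is always feasible, so the relaxation
    # is consistent even when the plain linearization (4) is not.
    z0 = np.append(np.zeros(n), max(0.0, g_val.max()))
    res = minimize(obj, z0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]

# The linearized constraints 1 + d <= 0 and 1 - d <= 0 are jointly
# inconsistent, yet the relaxed subproblem has the solution (d, t) = (0, 1).
d, t = solve_subproblem(np.eye(1), np.array([0.0]),
                        np.array([1.0, 1.0]), np.array([[1.0], [-1.0]]), bk=10.0)
assert t >= 0.0
assert np.all(np.array([1.0, 1.0]) + np.array([[1.0], [-1.0]]) @ d <= t + 1e-6)
```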

In this paper, the Lagrangian function value, instead of the objective function value, is used in the filter together with an appropriate infeasibility measure. The Lagrangian function is

$L\left(x,\lambda \right)=f\left(x\right)+{\lambda}^{\text{T}}g\left(x\right)$ (7)

and we define the constraints violation function $h\left(x\mathrm{,}\lambda \right)$ by

$h\left(x,\lambda \right)={\displaystyle \sum_{j=1}^{m}}{\left(\mathrm{max}\left\{0,{g}_{j}\left(x\right)\right\}\right)}^{2}+{\left({\displaystyle \sum_{j=1}^{m}}{\lambda}^{j}\mathrm{min}\left\{0,{g}_{j}\left(x\right)\right\}\right)}^{2}$ (8)

where $\lambda$ is the Lagrange multiplier of the (NLP) at x and ${\lambda}^{j}$ is the jth component of $\lambda$. Define ${\lambda}_{k}\left({\alpha}_{k,l}\right)={\lambda}_{k}+{\alpha}_{k,l}\left({\lambda}_{{k}^{+}}-{\lambda}_{k}\right)$.
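The infeasibility measure can be transcribed directly. A minimal sketch (Python; written under the sign convention ${g}_{j}\left(x\right)\le 0$ used for the constraints here, with illustrative data):

```python
import numpy as np

def violation(g_val, lam):
    """Infeasibility measure in the spirit of (8) for constraints g_j(x) <= 0:
    squared violation of g <= 0 plus squared complementarity deviation."""
    infeas = np.maximum(0.0, g_val)          # positive part of each g_j(x)
    compl = lam @ np.minimum(0.0, g_val)     # sum_j lambda^j * min{0, g_j(x)}
    return float(np.sum(infeas ** 2) + compl ** 2)

# A feasible point whose multipliers vanish on inactive constraints: h = 0.
assert violation(np.array([-1.0, 0.0]), np.array([0.0, 2.0])) == 0.0
# Violating a constraint, or putting multiplier weight on an inactive one,
# makes h strictly positive.
assert violation(np.array([0.5, -1.0]), np.array([0.0, 0.0])) == 0.25
assert violation(np.array([-1.0, 0.0]), np.array([3.0, 0.0])) == 9.0
```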

In the line search filter technique, for fixed constants ${\gamma}_{h}\mathrm{,}{\gamma}_{L}\in \left(\mathrm{0,1}\right)$ , a trial step size ${\alpha}_{k\mathrm{,}l}$ provides sufficient reduction with respect to the current point ${x}_{k}$ if

$\begin{array}{l}h\left({x}_{k}\left({\alpha}_{k\mathrm{,}l}\right)\mathrm{,}{\lambda}_{k}\left({\alpha}_{k\mathrm{,}l}\right)\right)\le \left(1-{\gamma}_{h}\right)h\left({x}_{k}\mathrm{,}{\lambda}_{k}\right)\\ \text{or}\text{\hspace{1em}}L\left({x}_{k}\left({\alpha}_{k\mathrm{,}l}\right)\mathrm{,}{\lambda}_{k}\left({\alpha}_{k\mathrm{,}l}\right)\right)\le L\left({x}_{k}\mathrm{,}{\lambda}_{k}\right)-{\gamma}_{L}h\left({x}_{k}\mathrm{,}{\lambda}_{k}\right)\end{array}$ (9)
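The sufficient-reduction test (9) is a simple disjunction over the two measures. A minimal sketch (Python; the parameter values are illustrative):

```python
def sufficient_reduction(h_new, L_new, h_cur, L_cur, gamma_h=0.5, gamma_L=0.5):
    """Condition (9): the trial point must improve either the infeasibility h
    or the Lagrangian L by a margin proportional to the current h."""
    return (h_new <= (1 - gamma_h) * h_cur) or (L_new <= L_cur - gamma_L * h_cur)

assert sufficient_reduction(0.4, 10.0, h_cur=1.0, L_cur=9.0)       # h improved
assert sufficient_reduction(1.0, 8.4, h_cur=1.0, L_cur=9.0)        # L improved
assert not sufficient_reduction(0.9, 8.9, h_cur=1.0, L_cur=9.0)    # neither
```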

Similar to the traditional strategy of the filter method, to avoid convergence to a feasible but nonoptimal point, we consider the following L-type [14] [15] switching condition:

${m}_{k}\left({\alpha}_{k,l}\right)<0\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{\left[-{m}_{k}\left({\alpha}_{k,l}\right)\right]}^{{\omega}_{L}}{\left[{\alpha}_{k,l}\right]}^{1-{\omega}_{L}}>\delta {\left[h\left({x}_{k},{\lambda}_{k}\right)\right]}^{{\omega}_{h}}$ (10)

where $\delta >0,\text{\hspace{0.17em}}{\omega}_{h}>0,\text{\hspace{0.17em}}{\omega}_{L}\ge 1$ and

${m}_{k}\left(\alpha \right)=\alpha \left[\nabla f{\left({x}_{k}\right)}^{\text{T}}{d}_{k}+{\lambda}_{k}^{\text{T}}g\left({x}_{k}\right)-\left(1-\alpha \right){\left({\lambda}_{{k}^{+}}-{\lambda}_{k}\right)}^{\text{T}}g\left({x}_{k}\right)\right]$ (11)

When the switching condition (10) holds, the Armijo-type reduction condition is employed as follows:

$L\left({x}_{k}\left({\alpha}_{k,l}\right),{\lambda}_{k}\left({\alpha}_{k,l}\right)\right)\le L\left({x}_{k},{\lambda}_{k}\right)+{\eta}_{L}{m}_{k}\left({\alpha}_{k,l}\right)$ (12)

where ${\eta}_{L}\in \left(\mathrm{0,}\frac{1}{2}\right)$ is a fixed constant.

For the sake of simplified notation, the filter is defined in this paper not as a list but as a set ${F}_{k}\subseteq \left[0,\infty \right]\times \left[-\infty ,\infty \right]$ containing all $\left(h,L\right)$ pairs that are prohibited at iteration k. We say that a trial point ${x}_{k}\left({\alpha}_{k,l}\right)$ is acceptable to the filter if its $\left(h,L\right)$ pair is not in ${F}_{k}$. Given an initial filter ${F}_{0}$, the filter is augmented in some iterations after the new iterate ${x}_{k+1}$ has been accepted. We use the updating formula

$\begin{array}{l}{F}_{k+1}={F}_{k}\cup \{\left(h,L\right):h\ge \left(1-{\gamma}_{h}\right)h\left({x}_{k},{\lambda}_{k}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}L\ge L\left({x}_{k},{\lambda}_{k}\right)-{\gamma}_{L}h\left({x}_{k},{\lambda}_{k}\right)\}\end{array}$ (13)
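The filter bookkeeping, acceptability of a trial $\left(h,L\right)$ pair and the augmentation rule (13), can be sketched as follows (Python; storing the filter as a list of corner pairs is one common implementation choice, not something prescribed by the paper, and the margin values are illustrative):

```python
class Filter:
    """Filter stored as a list of (h, L) 'corner' entries; by (13), a pair is
    prohibited when it fails the sufficient-reduction margins of some corner."""
    def __init__(self, gamma_h=0.01, gamma_L=0.01):
        self.gamma_h, self.gamma_L = gamma_h, gamma_L
        self.corners = []

    def acceptable(self, h, L):
        # (h, L) is acceptable iff no stored corner dominates it
        return all(h < (1 - self.gamma_h) * hc or L < Lc - self.gamma_L * hc
                   for hc, Lc in self.corners)

    def augment(self, h_k, L_k):
        # record the current pair; pairs dominated by it become prohibited
        self.corners.append((h_k, L_k))

f = Filter()
assert f.acceptable(1.0, 5.0)        # an empty filter accepts everything
f.augment(1.0, 5.0)
assert not f.acceptable(1.0, 5.0)    # the recorded pair is now prohibited
assert f.acceptable(0.5, 5.0)        # sufficient reduction in h
assert f.acceptable(1.0, 4.0)        # sufficient reduction in L
```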

In the situation where the trial step size is too small to guarantee sufficient reduction as defined by (9), the method switches to a feasibility restoration phase, whose purpose is to find a new iterate that satisfies (9) and is also acceptable to the current filter, by trying to decrease the constraint violation. In order to detect the situation where no admissible step size can be found and the restoration phase has to be invoked, we approximate a minimum desired step size using linear models of the involved functions. For this, we define

${\alpha}_{k}^{\mathrm{min}}:={\gamma}_{\alpha}\cdot \begin{cases}\mathrm{min}\left\{{\gamma}_{h},\frac{{\gamma}_{L}h\left({x}_{k},{\lambda}_{k}\right)}{-{\phi}_{k}},\frac{\delta {\left[h\left({x}_{k},{\lambda}_{k}\right)\right]}^{{\omega}_{h}}}{{\left[-{\phi}_{k}\right]}^{{\omega}_{L}}}\right\} & \text{if}\ {\phi}_{k}<0\\ {\gamma}_{h} & \text{otherwise}\end{cases}$

where ${\phi}_{k}=\nabla f{\left({x}_{k}\right)}^{\text{T}}{d}_{k}+{\lambda}_{k}^{\text{T}}g\left({x}_{k}\right)-\left|{\left({\lambda}_{{k}^{+}}-{\lambda}_{k}\right)}^{\text{T}}g\left({x}_{k}\right)\right|$, ${\gamma}_{\alpha}\in \left(0,1\right]$, $\delta ,{\omega}_{h},{\omega}_{L}$ are from (10), and ${\gamma}_{h},{\gamma}_{L}$ are from (9). The proposed algorithm switches to the feasibility restoration phase when ${\alpha}_{k,l}$ becomes smaller than ${\alpha}_{k}^{\mathrm{min}}$.
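A minimal sketch of the ${\alpha}_{k}^{\mathrm{min}}$ rule (Python; the parameter values are illustrative, not the ones used in Section 5):

```python
def alpha_min(h, phi, gamma_alpha=0.05, gamma_h=0.5, gamma_L=0.5,
              delta=1.0, omega_h=1.0, omega_L=2.0):
    """Minimum trial step size; when alpha_{k,l} falls below it, the
    feasibility restoration phase is invoked. phi is the slope phi_k."""
    if phi < 0:                                  # descent direction
        return gamma_alpha * min(gamma_h,
                                 gamma_L * h / (-phi),
                                 delta * h ** omega_h / (-phi) ** omega_L)
    return gamma_alpha * gamma_h                 # fallback, no descent

# Descent with moderate infeasibility: the h-dependent terms are active.
assert alpha_min(h=0.1, phi=-1.0) == 0.05 * 0.05
# Without descent the bound falls back to gamma_alpha * gamma_h.
assert alpha_min(h=0.1, phi=0.5) == 0.05 * 0.5
```

Note that the bound shrinks with the infeasibility $h\left({x}_{k},{\lambda}_{k}\right)$: near-feasible points are given smaller step sizes before restoration is triggered.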

Now, the algorithm for solving the inequality constrained optimization problem (NLP) can be stated as follows:

Algorithm

Step 1. Initialization: choose an initial point ${x}_{0}\in {\mathbb{R}}^{n}$, an initial filter ${F}_{0}:=\left\{\left(h,L\right):h\ge \tau \right\}$ with $\tau >h\left({x}_{0},{\lambda}_{0}\right)$, an initial parameter ${b}_{0}>0$, a symmetric and positive definite matrix ${B}_{0}\in {\mathbb{R}}^{n\times n}$, $\delta >0$, ${\delta}_{1},{\delta}_{2}>0$, and ${\gamma}_{h},{\gamma}_{L}\in \left(0,1\right)$. Choose $0<{\tau}_{1}\le {\tau}_{2}<1$, ${\gamma}_{\alpha}\in \left(0,1\right]$, and ${\omega}_{h}>0,{\omega}_{L}\ge 1$.

Step 2. Solve the subproblem $Q\left({x}_{k}\mathrm{,}{B}_{k}\mathrm{,}{b}_{k}\right)$ to obtain the solution $\left({d}_{k}\mathrm{,}{t}_{k}\right)$ , and let $\left({\lambda}_{{k}^{+}}\mathrm{,}{v}_{k}\right)$ be the Lagrange multiplier. If ${d}_{k}=0$ and ${t}_{k}=0$ , stop.

Step 3. If ${d}_{k}=0$ but ${t}_{k}\ne 0$ , set ${x}_{k+1}={x}_{k}$ and ${F}_{k+1}={F}_{k}$ , go to Step 7.

Step 4. If ${d}_{k}\ne 0$ and ${\phi}_{k}\ge 0$, go to the feasibility restoration phase in Step 9.

Step 5. If ${d}_{k}\ne 0$ and ${\phi}_{k}<0$, perform the following backtracking line search:

Step 5.1. Initial line search: set ${\alpha}_{k,l}=1$ and $l=0$ , compute the ${\alpha}_{k}^{\mathrm{min}}$ .

Step 5.2. Compute a new trial point. If the trial step size ${\alpha}_{k,l}<{\alpha}_{k}^{\mathrm{min}}$, go to Step 9. Otherwise, compute the new trial point ${x}_{k}\left({\alpha}_{k,l}\right)={x}_{k}+{\alpha}_{k,l}{d}_{k}$ and ${\lambda}_{k}\left({\alpha}_{k,l}\right)={\lambda}_{k}+{\alpha}_{k,l}\left({\lambda}_{{k}^{+}}-{\lambda}_{k}\right)$. Check acceptability to the filter: if $\left(h\left({x}_{k}\left({\alpha}_{k,l}\right),{\lambda}_{k}\left({\alpha}_{k,l}\right)\right),L\left({x}_{k}\left({\alpha}_{k,l}\right),{\lambda}_{k}\left({\alpha}_{k,l}\right)\right)\right)\in {F}_{k}$, reject the trial step size and go to Step 5.4.

Step 5.3. Check sufficient decrease with respect to current iterate point:

Step 5.3.1. Case I: the switching condition (10) holds. If the Armijo condition (12) also holds, accept the trial step (an L-type iteration): set ${x}_{k+1}={x}_{k}\left({\alpha}_{k,l}\right)$, ${\lambda}_{k+1}={\lambda}_{k}\left({\alpha}_{k,l}\right)$, ${F}_{k+1}={F}_{k}$, and go to Step 5.5; otherwise, go to Step 5.4.

Step 5.3.2. Case II: the switching condition (10) does not hold. If (9) holds, accept the trial step and go to Step 5.5; otherwise, go to Step 5.4.

Step 5.4. Choose a new trial step size ${\alpha}_{k,l+1}\in \left[{\tau}_{1}{\alpha}_{k,l},{\tau}_{2}{\alpha}_{k,l}\right]$. Set $l=l+1$, and go back to Step 5.2.

Step 5.5. Accept the trial point. Set ${x}_{k+1}={x}_{k}\left({\alpha}_{k,l}\right)$, ${\lambda}_{k+1}={\lambda}_{k}\left({\alpha}_{k,l}\right)$, and ${\alpha}_{k}={\alpha}_{k,l}$.

Step 6. Augment the filter if necessary. If k is not an L-type iteration, augment the filter using (13); otherwise leave the filter unchanged, that is, set ${F}_{k+1}={F}_{k}$.

Step 7. Update parameters. Compute ${\sigma}_{k}$ (written here as ${\sigma}_{k}$ to avoid a clash with the step size ${\alpha}_{k}$) by

${\sigma}_{k}=\mathrm{min}\left\{{\Vert {d}_{k}\Vert}^{-1},{\Vert {\lambda}_{k}\Vert}_{1}+{\delta}_{1}\right\}$

Set

${b}_{k+1}=\begin{cases}{b}_{k} & \text{if}\ {b}_{k}\ge {\sigma}_{k}\\ {b}_{k}+{\delta}_{2} & \text{otherwise}\end{cases}$

Step 8. Update ${B}_{k}$ to ${B}_{k+1}$ . Go to Step 2 with k replaced by $k+1$ .

Step 9. Obtain a new point ${x}_{k+1}$ from the feasibility restoration phase. Set $k=k+1$, and go to Step 2.

Remark 1. The mechanisms of the filter ensure that $\left(h\left({x}_{k},{\lambda}_{k}\right),L\left({x}_{k},{\lambda}_{k}\right)\right)\notin {F}_{k}$.

Remark 2. The feasibility restoration phase in Step 9 could be any iterative algorithm with the goal of finding a less infeasible point.

Remark 3. Step 3 - Step 7, which are called the inner loop, form a backtracking line search whose goal is to find ${\alpha}_{k}$ such that ${x}_{k}+{\alpha}_{k}{d}_{k}$ is acceptable as the next iterate.

Remark 4. In Step 9, we decrease $h\left(x\mathrm{,}\lambda \right)$ to get ${x}_{k+1}$ by fixing $\lambda ={\lambda}_{k}$ .
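The parameter update of Step 7 can be sketched as follows (Python; the inputs are illustrative, and the case ${d}_{k}=0$, reachable from Step 3, is treated by reading ${\Vert {d}_{k}\Vert}^{-1}$ as $+\infty$):

```python
def update_b(b_k, d_norm, lam_norm1, delta_1=1.0, delta_2=1.0):
    """Step 7: b_k is increased by delta_2 only when it falls below the
    trial threshold min{ ||d_k||^{-1}, ||lambda_k||_1 + delta_1 }, so the
    parameter is raised only when it is genuinely too small."""
    if d_norm > 0:
        threshold = min(1.0 / d_norm, lam_norm1 + delta_1)
    else:                       # d_k = 0 (Step 3): ||d_k||^{-1} read as +inf
        threshold = lam_norm1 + delta_1
    return b_k if b_k >= threshold else b_k + delta_2

assert update_b(10.0, d_norm=2.0, lam_norm1=3.0) == 10.0   # threshold is 0.5
assert update_b(0.2, d_norm=2.0, lam_norm1=3.0) == 1.2     # 0.2 < 0.5: bump
```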

4. Global Convergence of Algorithm

In this section, we show that the proposed algorithm is well defined and globally convergent under some mild conditions. Let $\bar{x}$ be a feasible point of the NLP and let $\bar{\lambda}$ be the multiplier corresponding to the nonlinear constraints at $\bar{x}$; denote the ith component of $\bar{\lambda}$ by ${\bar{\lambda}}^{i}$. Throughout the rest of the paper, we assume that the following assumptions hold:

Assumptions

1) The functions $L\left(x,\lambda \right)$ and $g\left(x\right)$ are twice continuously differentiable.

2) The sequence $\left\{{x}_{k}\right\}$ remains in a compact and convex subset $X\subset {\mathbb{R}}^{n}$.

3) The strict Mangasarian-Fromovitz constraint qualification (SMFCQ) holds at any feasible point $\bar{x}$ of the NLP: the vectors $\left\{\nabla {g}_{j}\left(x\right),j\in I\left(x\right)\right\}$ are linearly independent, where $I\left(x\right)=\left\{j\in I|{g}_{j}\left(x\right)=h\left(x,\lambda \right)\right\}$. There exists a constant ${M}_{B}>0$ such that ${z}^{\text{T}}{B}_{k}z\ge {M}_{B}{\Vert z\Vert}^{2}$ for all $z\in \left\{z:\nabla {g}_{j}{\left({x}_{k}\right)}^{\text{T}}z=0,j\in I\left(x\right)\right\}$, and there exist two constants $q\ge p>0$ such that the matrix sequence $\left\{{B}_{k}\right\}$ satisfies $p{\Vert d\Vert}^{2}\le {d}^{\text{T}}{B}_{k}d\le q{\Vert d\Vert}^{2}$ for all k and all $d\in {\mathbb{R}}^{n}$.

4) ${d}_{k}\mathrm{,}{\lambda}_{k}$ are uniformly bounded, i.e., there exist constants ${M}_{d}>0$ and ${M}_{\lambda}>0$ , such that $\Vert {d}_{k}\Vert \le {M}_{d}$ , ${\Vert {\lambda}_{k}\Vert}_{\infty}\le {M}_{\lambda}$ .

In the following, we first show that under the Assumptions the sequence of infeasibility measures $\left\{h\left({x}_{k},{\lambda}_{k}\right)\right\}$ converges to zero, so that all limit points of $\left\{{x}_{k}\right\}$ are feasible.

Lemma 4.1. Suppose that the Assumptions hold and that the filter is augmented only a finite number of times. Then $\underset{k\to +\infty}{\mathrm{lim}}h\left({x}_{k},{\lambda}_{k}\right)=0$.

Lemma 4.2. Suppose that the Assumptions hold, let $\left\{{x}_{k}\right\}$ be the sequence of iterates generated by the Algorithm, and suppose that the filter is augmented in every iteration k. Then $\underset{k\to +\infty}{\mathrm{lim}}h\left({x}_{k},{\lambda}_{k}\right)=0$.

The proofs of Lemma 4.1 and Lemma 4.2 are similar to that of Lemma 5 in [14] , with $f\left(x\right)$ replaced by $L\left(x,\lambda \right)$, so we omit them. With these two lemmas we obtain the following theorem, which states that every limit point of $\left\{{x}_{k}\right\}$ is feasible. The proof is similar to that of Theorem 1 in [14] .

Theorem 4.1. Suppose that the Assumptions hold. Then $\underset{k\to +\infty}{\mathrm{lim}}h\left({x}_{k},{\lambda}_{k}\right)=0$.

Lemma 4.3. Suppose that the Assumptions hold and let $\bar{x}$ be a feasible point of the NLP at which the SMFCQ holds but which is not a KKT point. If $\left\{{x}_{k}\right\}$ is a subsequence of iterates for which $\chi \left({x}_{k}\right)\ge \epsilon$ with a constant $\epsilon >0$, then there exist constants ${\epsilon}_{1}>0$, ${\epsilon}_{2}>0$, such that

$h\left({x}_{k},{\lambda}_{k}\right)\le {\epsilon}_{1}\Rightarrow {m}_{k}\left(\alpha \right)\le -{\epsilon}_{2}\alpha$

Lemma 4.3 shows that the search direction (the optimal solution of the subproblem) is a sufficient descent direction for the Lagrangian function at points that are nonoptimal and sufficiently close to feasibility. The next lemma shows that no $\left(h,L\right)$-pair corresponding to a feasible point is ever included in the filter.

Lemma 4.4. Suppose that Assumptions hold. Then ${\Omega}_{k}:=\mathrm{min}\left\{h:\left(h,L\right)\in {F}_{k}\right\}>0$ for all k.

Lemma 4.5. Suppose that the Assumptions hold. Let $\left\{{x}_{{k}_{i}}\right\}$ be a subsequence with ${m}_{{k}_{i}}\left(\alpha \right)\le -{\epsilon}_{2}\alpha$ for a constant ${\epsilon}_{2}>0$ independent of ${k}_{i}$ and for all $\alpha \in \left(0,1\right]$. Then there exists a constant $\bar{\alpha}>0$ such that for all ${k}_{i}$ and $\alpha <\bar{\alpha}$, we have

$L\left({x}_{{k}_{i}}+\alpha {d}_{{k}_{i}},{\lambda}_{{k}_{i}}+\alpha \left({\lambda}_{{k}_{i}}^{+}-{\lambda}_{{k}_{i}}\right)\right)-L\left({x}_{{k}_{i}},{\lambda}_{{k}_{i}}\right)\le {\eta}_{L}{m}_{{k}_{i}}\left(\alpha \right)$

Lemma 4.5 shows that, under these conditions, there exists a step length bounded away from zero for which the Armijo condition for the Lagrangian function is satisfied. The next lemma shows that there is no cycle between Step 3 and Step 7 of the above algorithm. The proof of Lemma 4.6 is similar to that of Lemma 4 in [16] .

Lemma 4.6. Suppose that the Assumptions hold. Then the inner loop terminates in a finite number of iterations.

Lemma 4.7. Suppose that the filter is augmented only a finite number of times; that is, $\left|L\right|<\infty $ . Then $\underset{k\to +\infty}{\mathrm{lim}}{d}_{k}=0$ .

The proof is stated in the literature for slightly different circumstances, but it is easy to verify that it is still valid in our context. We now show the global convergence results under some mild conditions. The proofs of the next two theorems are similar to those of Theorem 10 and Theorem 11 in [17] .

Theorem 4.2. Suppose that all stated assumptions hold. Furthermore, assume that $\left\{{x}_{k}\right\}$ is an infinite sequence generated by the Algorithm and that $\left|L\right|<\infty$. Then every limit point is a KKT point.

Theorem 4.3. Suppose that all stated assumptions hold. Furthermore, assume that $\left\{{x}_{k}\right\}$ is an infinite sequence generated by the Algorithm and that $\left|L\right|=\infty$. Then there exists at least one accumulation point which is a KKT point.

5. Numerical Experiments

In this section, some computational experience is presented to illustrate the efficiency of the algorithm described in this paper for the solution of the symmetric EiCP. All programs were written in MATLAB and run on a Dell 4510U with a 2.8 GHz Intel i7 processor. For our test problems, the matrix $B={I}_{n}$, and the matrices $A\in {\mathbb{R}}^{n\times n}$ were randomly generated. The test problems are scaled according to the procedure described in [18] . The scaling is important because the matrices that we are using are badly conditioned, and without this tool some of the problems cannot be solved.

As described in Proposition 2.3, the initial solution ${x}_{0}$ can be chosen by one of several processes. In particular, if A has at least one diagonal element ${A}_{ii}>0$, then the initial solution can be chosen as ${x}_{0}=p{e}^{i}$ for some scalar $p>0$, where ${e}^{i}$ is the ith vector of the canonical basis.

The parameters used in the algorithm are as follows: ${B}_{0}={I}_{n}$, ${b}_{0}=111$, $\delta =1$, ${\delta}_{1}=1$, ${\delta}_{2}=1$, ${\gamma}_{h}={\gamma}_{L}=0.5$, ${\gamma}_{\alpha}=0.5$, ${\tau}_{1}={\tau}_{2}=0.5$, ${\omega}_{h}=\frac{1}{3}$, ${\omega}_{L}=2$, $\epsilon ={10}^{-6}$. The stopping criterion is that $\Vert {d}_{k}^{0}\Vert$ be sufficiently small, where ${d}_{k}^{0}={\left({d}_{k}^{\text{T}},{t}_{k}\right)}^{\text{T}}$. In particular, the stopping criterion of Step 2 is changed to: if $\Vert {d}_{k}^{0}\Vert \le {10}^{-6}$, stop. We update the matrices ${B}_{k}$ using the BFGS formula and use the MATLAB optimization toolbox to solve the subproblem $Q\left({x}_{k},{B}_{k},{b}_{k}\right)$.
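The paper updates ${B}_{k}$ by the BFGS formula; in an SQP setting a damped variant is commonly used to keep ${B}_{k}$ positive definite. A hedged sketch (Python/NumPy rather than the paper's MATLAB; Powell's damping constant 0.2 is a conventional choice, not taken from the paper):

```python
import numpy as np

def damped_bfgs(B, s, y, c=0.2):
    """Damped BFGS update of B_k (Powell's modification): y is blended toward
    B s whenever the curvature s^T y is too small, so that the updated matrix
    stays positive definite."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < c * sBs:
        theta = (1 - c) * sBs / (sBs - sy)
        y = theta * y + (1 - theta) * Bs
        sy = s @ y
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy

# Good curvature: the damping is inactive and the classical update results.
B1 = damped_bfgs(np.eye(2), np.array([1.0, 0.0]), np.array([2.0, 0.0]))
assert np.allclose(B1, np.diag([2.0, 1.0]))
# Negative curvature: the damped update still stays positive definite.
B2 = damped_bfgs(np.eye(2), np.array([1.0, 0.0]), np.array([-0.5, 0.0]))
assert np.all(np.linalg.eigvalsh(B2) > 0)
```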

The test results are given in the table; the notation is as follows. No: the number of the problem; m: the dimension of the vector x; T: the total CPU time in seconds for solving the problem; OPT: the final value of the objective function; $\lambda$: the computed eigenvalue of the EiCP.

Based on the results, we claim that this algorithm is an efficient procedure for solving symmetric Eigenvalue Complementarity Problems.

6. Conclusion

In this paper, the Eigenvalue Complementarity Problem with real symmetric matrices has been considered. We reformulated the EiCP as an NLP and used a line search filter-SQP algorithm to solve it. The numerical experiments show that this is a promising method for solving the EiCP. In fact, the problem can also be treated as a quadratically constrained quadratic program rather than a general NLP, and the design of efficient procedures for solving the resulting quadratically constrained quadratic programming subproblems is one of our current research areas.

References

[1] Costa, A., Figueiredo, I., Júdice, J. and Martins, J. (2001) A Complementarity Eigen Problem in the Stability Analysis of Finite Dimensional Elastic Systems with Frictional Contact. Complementarity: Applications, Algorithms and Extensions, 50, 67-83.

[2] Costa, A., Martins, J., Figueiredo, I. and Júdice, J. (2004) The Directional Instability Problem in Systems with Frictional Contacts. Computer Methods in Applied Mechanics and Engineering, 193, 357-384.

[3] Humes, C., Júdice, J. and Queiroz, M. (2004) The Symmetric Eigenvalue Complementarity Problem. Mathematics of Computation, 73, 1849-1863.

[4] Júdice, J., Sherali, H. and Ribeiro, I. (2007) The Eigenvalue Complementarity Problem. Computational Optimization and Applications, 37, 139-156.

https://doi.org/10.1007/s10589-007-9017-0

[5] Parlett, B. (1997) The Symmetric Eigenvalue Problem. Society for Industrial and Applied Mathematics, University of California, Berkeley, California.

[6] Boggs, P. and Tolle, J. (1995) Sequential Quadratic Programming. Acta Numerica, 4, 1-51.

https://doi.org/10.1017/S0962492900002518

[7] Fletcher, R. and Leyffer, S. (2002) Nonlinear Programming without a Penalty Function. Mathematical Programming, 91, 239-269.

https://doi.org/10.1007/s101070100244

[8] Gould, N.I.M., Loh, Y. and Robinson, D.P. (2014) A Filter Method with Unified Step Computation for Nonlinear Optimization. The SIAM Journal on Optimization, 24, 175-209.

https://doi.org/10.1137/130920599

[9] Milzarek, A. and Ulbrich, M. (2014) A Semismooth Newton Method with Multidimensional Filter Globalization for L1-Optimization. The SIAM Journal on Optimization, 24, 298-333.

https://doi.org/10.1137/120892167

[10] Zhang, J.-L. and Zhang, X.-S. (2001) A Modified SQP Method with Nonmonotone Linesearch Technique. Journal of Global Optimization, 21, 201-218.

https://doi.org/10.1023/A:1011942228555

[11] Pantoja, J.F.A.D.O. and Mayne, D.Q. (1991) Exact Penalty Function Algorithm with Simple Updating of the Penalty Parameter. Journal of Optimization Theory and Applications, 69, 441-467.

https://doi.org/10.1007/BF00940684

[12] Liu, T. and Li, D. (2007) Convergence of the BFGS-SQP Method for Degenerate Problems. Numerical Functional Analysis and Optimization, 28, 927-944.

https://doi.org/10.1080/01630560701405002

[13] Liu, T.-W. and Zeng, J.-P. (2009) An SQP Algorithm with Cautious Updating Criteria for Nonlinear Degenerate Problems. Acta Mathematicae Applicatae Sinica, 25, 33-42.

https://doi.org/10.1007/s10255-004-4100-0

[14] Wächter, A. and Biegler, L.T. (2005) Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence. SIAM Journal on Optimization, 16, 1-31.

https://doi.org/10.1137/S1052623403426556

[15] Wächter, A. and Biegler, L.T. (2005) Line Search Filter Methods for Nonlinear Programming: Local Convergence. SIAM Journal on Optimization, 16, 32-48.

https://doi.org/10.1137/S1052623403426544

[16] Jin, Z. (2013) A Globally Convergent Line Search Filter SQP Method for Inequality Constrained Optimization. Journal of Applied Mathematics, 2013, Article ID: 524539.

https://doi.org/10.1155/2013/524539

[17] Pang, L.-L. and Zhu, D.-T. (2016) A Line Search Filter-SQP Method with Lagrangian Function for Nonlinear Inequality Constrained Optimization. Japan Journal of Industrial and Applied Mathematics, 34, 141-176.

https://doi.org/10.1007/s13160-017-0236-1

[18] Júdice, J., Raydan, M., Rosa, S. and Santos, S. (2008) On the Solution of the Symmetric Eigenvalue Complementarity Problem by the Spectral Projected Gradient Algorithm. Numerical Algorithms, 45, 391-407.

https://doi.org/10.1007/s11075-008-9194-7