The Eigenvalue Complementarity Problem (EiCP) appears in the study of static equilibrium states of finite-dimensional mechanical systems with unilateral frictional contact and takes the following form: find $\lambda$ and $x \neq 0$ such that
$$w = \lambda Bx - Ax, \quad w \ge 0, \quad x \ge 0, \quad x^{T}w = 0,$$
where B is a positive definite matrix. Note that any solution of this problem corresponds to a solution of the (GEiCP) introduced next.
The Generalized Eigenvalue Complementarity Problem (GEiCP) is
where B is a positive definite matrix and the index set $J \subseteq \{1, \dots, n\}$ is given. The (EiCP) is clearly the particular case of the (GEiCP)$_J$ with $J = \{1, \dots, n\}$.
For any solution $(\lambda, x)$, the value $\lambda$ is called a (general) complementary eigenvalue of $(A, B)$, and x is a corresponding (general) complementary eigenvector.
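As a concrete illustration of these conditions, a candidate pair can be checked numerically. The following sketch (with invented 2x2 test matrices; it is not part of the algorithm proposed below) verifies the defining conditions $w = \lambda Bx - Ax \ge 0$, $x \ge 0$, $x^{T}w = 0$:

```python
# Check whether a pair (lam, x) solves the EiCP:
#   w = lam*B*x - A*x >= 0,  x >= 0,  x^T w = 0,  x != 0.
# A, B, and the candidate pairs are small invented examples.

def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def is_eicp_solution(A, B, lam, x, tol=1e-10):
    w = [lam * bx - ax for bx, ax in zip(matvec(B, x), matvec(A, x))]
    nonneg = all(v >= -tol for v in w) and all(v >= -tol for v in x)
    complementary = abs(sum(xi * wi for xi, wi in zip(x, w))) <= tol
    nontrivial = any(abs(v) > tol for v in x)
    return nonneg and complementary and nontrivial

A = [[2.0, 0.0], [0.0, 1.0]]   # symmetric test matrix
B = [[1.0, 0.0], [0.0, 1.0]]   # identity: positive definite
print(is_eicp_solution(A, B, 2.0, [1.0, 0.0]))  # True:  w = (0, 0)
print(is_eicp_solution(A, B, 0.5, [1.0, 0.0]))  # False: w = (-1.5, 0) < 0
```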
In this paper, the symmetric Eigenvalue Complementarity Problem is studied. Some properties of the EiCP are derived, including a necessary and sufficient condition for solvability. When this condition holds, an initial point can easily be obtained. We then discuss a quadratic programming formulation and introduce a line search filter-SQP algorithm to solve the (EiCP).
2. The Symmetric Eigenvalue Complementarity Problem
In this section, the symmetric EiCP is considered, in which the matrices A and B are both symmetric. In this case, the EiCP is closely related to the classical Eigenvalue Problem, and some properties are derived.
Proposition 2.1. The solvability of the (GEiCP)$_J$ is in general an NP-complete problem.
It follows that solving the (GEiCP) is in general an NP-hard problem. Despite this fact, for some classes of matrices the solvability of the corresponding (EiCP) can be decided easily.
Since the set of complementary eigenvectors of a given complementary eigenvalue is a cone, there is no loss of generality in restricting the problem to finding solutions satisfying $\|x\| = 1$, which replaces the constraint $x \neq 0$. In the case of the (EiCP), the linear constraint $e^{T}x = 1$ can be considered instead, since $x \ge 0$.
When A and B are both symmetric, the (EiCP) is closely related to the classical Eigenvalue Problem. The complementarity condition $x^{T}w = 0$ can be rewritten as $\lambda x^{T}Bx = x^{T}Ax$. Since $x \neq 0$ and B is positive definite, it follows that $\lambda = \lambda(x) = \frac{x^{T}Ax}{x^{T}Bx}$, where $\lambda(x)$ is the generalized Rayleigh quotient. The gradient of this function is $\nabla\lambda(x) = \frac{2}{x^{T}Bx}\left(Ax - \lambda(x)Bx\right)$, and $\nabla\lambda(x) = 0$ if and only if $Ax = \lambda(x)Bx$. Analogously to the classical case, stationary points of the Rayleigh quotient in the nonnegative orthant with $x \neq 0$ are solutions of the (EiCP). This is the main result concerning the practical solution of the symmetric (EiCP).
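The Rayleigh-quotient characterization can be illustrated numerically. The following sketch (with invented test matrices and helper names of our own) evaluates $\lambda(x) = x^{T}Ax / x^{T}Bx$ and its gradient, and confirms that the gradient vanishes at a generalized eigenvector:

```python
# Generalized Rayleigh quotient lam(x) = x^T A x / (x^T B x) for symmetric A
# and positive definite B, and its gradient
#   grad lam(x) = (2 / (x^T B x)) * (A x - lam(x) * B x).

def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rayleigh(A, B, x):
    return dot(x, matvec(A, x)) / dot(x, matvec(B, x))

def rayleigh_grad(A, B, x):
    lam = rayleigh(A, B, x)
    xBx = dot(x, matvec(B, x))
    return [2.0 / xBx * (ax - lam * bx)
            for ax, bx in zip(matvec(A, x), matvec(B, x))]

A = [[3.0, 0.0], [0.0, 1.0]]
B = [[1.0, 0.0], [0.0, 2.0]]
x = [1.0, 0.0]                    # generalized eigenvector: A x = 3 * B x
print(rayleigh(A, B, x))          # 3.0
print(rayleigh_grad(A, B, x))     # [0.0, 0.0] -- a stationary point
```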
Proposition 2.2. The (EiCP) is solvable if and only if there exists some $x \ge 0$ such that $x^{T}Ax > 0$.
Proposition 2.2 gives a necessary and sufficient condition for solvability. For the practical solution of the symmetric (EiCP), finding an initial point is in general an NP-hard problem, but for some classes of matrices an initial point for the (EiCP) can be obtained easily.
Proposition 2.3. Suppose that the matrix A satisfies one of the following conditions:
4) A is an S-matrix (there exists $x \ge 0$ such that $Ax > 0$).
Then a point $x^{0} \ge 0$ such that $(x^{0})^{T}Ax^{0} > 0$ can easily be obtained, and the corresponding (EiCP) is solvable.
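For instance, when A has a positive diagonal entry (one of the simple cases above), an initial point can be computed directly; the following sketch (the helper name is ours) implements this rule:

```python
# If A has a positive diagonal entry a_ii > 0, the canonical basis vector e_i
# satisfies e_i^T A e_i = a_ii > 0, so it is a valid starting point x0 >= 0.

def initial_point(A):
    n = len(A)
    for i in range(n):
        if A[i][i] > 0.0:
            return [1.0 if j == i else 0.0 for j in range(n)]
    return None  # no positive diagonal entry: this simple rule does not apply

A = [[-1.0, 2.0], [2.0, 4.0]]
print(initial_point(A))  # [0.0, 1.0], since a_22 = 4 > 0
```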
An equivalent way to formulate the (EiCP) is through the following quadratic programming formulation (QB):
Theorem 2.1. If A is strictly copositive and $\bar{x}$ is a stationary point of (QB), then the pair $(\lambda(\bar{x}), \bar{x})$ is a solution of the (EiCP).
3. A Line Search Filter-SQP Algorithm
In this section, a line search filter-SQP algorithm is introduced to solve the formulation of the previous section. Sequential Quadratic Programming (SQP) is one of the most efficient methods for the numerical solution of constrained nonlinear optimization problems. In earlier methods, a penalty function is typically used as a merit function to test the acceptability of iterates. However, there are several difficulties associated with the use of a penalty function, in particular with the choice of the penalty parameter: too low a choice may result in the loss of an optimal solution, while too large a choice damps out the effect of the objective function. Fletcher and Leyffer proposed the filter method as an alternative to the traditional merit function for solving nonlinear optimization problems. The significant advantage of the filter method is that it does not need to estimate the penalty parameter, which can be difficult to obtain. The filter method has since been developed further in several directions.
By considering a suitable continuously differentiable merit function, it is possible to reduce the (EiCP) to the following nonlinear program (NLP):
where the objective function and the constraints are twice continuously differentiable.
In SQP methods, at each iteration the search direction is generally obtained by solving the following quadratic programming subproblem:
However, this QP subproblem has a serious shortcoming: the constraints in (QP1) may be inconsistent. Much attention has been paid to overcoming this disadvantage. In addition, Liu and Li and Liu and Zeng proposed SQP algorithms with cautious update criteria, which can be considered modifications of an earlier SQP algorithm. In our algorithm, stimulated by these works, the quadratic subproblem (QP1) is replaced by the following problem:
where the additional parameter is positive. Clearly, this subproblem is always consistent, and it is a convex program when the matrix of its quadratic term is positive semidefinite. The KKT conditions for the subproblem are
where the first set is the active set at the current point, which can be used to determine a new estimate at the next iteration; the vector is the Lagrange multiplier corresponding to the subproblem, and the scalar is its ith component.
After the search direction has been computed, a step size is determined in order to obtain the next iterate.
In this paper, the Lagrangian function value, instead of the objective function value, is used in the filter together with an appropriate infeasibility measure,
and we define the constraint violation function by
where the vector is the Lagrange multiplier corresponding to the (NLP) at x, and the scalar is its ith component.
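As a point of orientation, a typical infeasibility measure for inequality constraints $g_i(x) \le 0$ is the norm of the violated parts. The following is a minimal sketch of this common form (our own illustrative choice, not necessarily the paper's exact definition):

```python
# A common infeasibility measure for constraints g_i(x) <= 0:
#   theta(x) = || max(0, g(x)) ||_2
# (the precise norm used in the paper may differ).
import math

def theta(g_values):
    return math.sqrt(sum(max(0.0, g) ** 2 for g in g_values))

print(theta([-1.0, 0.0, 3.0]))  # 3.0: only the violated constraint contributes
print(theta([-0.5, -2.0]))      # 0.0: the point is feasible
```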
In the line search filter technique, for fixed constants, a trial step size provides sufficient reduction with respect to the current iterate if
Similar to the traditional strategy of the filter method, to avoid convergence to a feasible point that is not an optimal solution, we consider the following L-type switching condition:
If the switching condition holds, then the following Armijo-type reduction condition is employed:
where is a fixed constant.
For the sake of simplified notation, the filter is defined in this paper not as a list but as a set containing all pairs that are prohibited at iteration k. We say that a trial point is acceptable to the filter if its pair does not belong to the filter. Throughout the algorithm, the filter is augmented in some iterations after the new iterate has been accepted. We use the updating formula
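The filter mechanics just described can be sketched as follows. This is a schematic of the standard filter idea with pairs $(\theta, L)$; the margin constants and the simple augmentation rule are illustrative assumptions, not the paper's exact formulas:

```python
# A filter is a set of (theta, L) pairs. A trial pair is acceptable if, for
# every filter entry, it sufficiently improves either the infeasibility theta
# or the Lagrangian value L. BETA and GAMMA are illustrative constants.
BETA, GAMMA = 0.99, 0.01

def acceptable(pair, filt):
    theta, L = pair
    return all(theta <= BETA * t or L <= l - GAMMA * t for (t, l) in filt)

def augment(filt, pair):
    # add the new pair; a full implementation could also drop entries
    # that the new pair dominates
    return filt | {pair}

filt = {(1.0, 5.0)}
print(acceptable((0.5, 6.0), filt))   # True: infeasibility sufficiently reduced
print(acceptable((1.0, 5.0), filt))   # False: no sufficient improvement
```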
In the situation where the trial step size is too small to guarantee sufficient reduction as defined by (10), the method switches to a feasibility restoration phase, whose purpose is to find a new iterate that satisfies (10) and is also acceptable to the current filter, by trying to decrease the constraint violation. In order to detect the situation where no admissible step size can be found and the restoration phase has to be invoked, we approximate a minimum desired step size using linear models of the functions involved. For this, we define
where the constants are those from (11) and (10), respectively. The proposed algorithm switches to the feasibility restoration phase when the trial step size becomes smaller than this minimum.
Now, the algorithm for solving the inequality constrained optimization problem (NLP) can be stated as follows:
Step 1. Initialization: choose an initial point , an initial filter , an initial parameter , a symmetric and positive definite matrix , , , and . Choose , , and .
Step 2. Solve the subproblem to obtain the solution , and let be the Lagrange multiplier. If and , stop.
Step 3. If but , set and , go to Step 7.
Step 4. If and , go to the feasibility restoration phase in Step 9.
Step 5. If and , perform a backtracking line search as follows:
Step 5.1. Initial line search: set and , compute the .
Step 5.2. Compute a new trial point. If the trial step size is below the minimum, go to Step 9. Otherwise, compute the new trial point. Check acceptability to the filter: if the trial point is not acceptable to the filter, reject the trial step size and go to Step 5.4.
Step 5.3. Check sufficient decrease with respect to current iterate point:
Step 5.3.1. Case I: condition (11) holds. If the Armijo condition (12) holds, accept the trial step (that is, an L-type iteration), set , , and go to Step 5.5; otherwise, go to Step 5.4.
Step 5.3.2. Case II: condition (11) does not hold. If (9) holds, accept the trial step , and go to Step 5.5; otherwise, go to Step 5.4.
Step 5.4. Choose a new trial step size. Set , and go back to Step 5.2.
Step 5.5. Accept trial point. Set and .
Step 6. Augment the filter if necessary. If iteration k is not an L-type iteration, augment the filter using (13); otherwise, leave the filter unchanged, that is, set .
Step 7. Update parameters. Compute by
Step 8. Update to . Go to Step 2 with k replaced by .
Step 9. Obtain a new point from the feasibility restoration phase. Set , and go to Step 2.
Remark 1. The mechanisms of the filter could ensure that .
Remark 2. The feasibility restoration phase in Step 9 can be any iterative algorithm with the goal of finding a less infeasible point.
Remark 3. Steps 3 - 7, which are called the inner loop, perform a backtracking line search whose goal is to find a trial point that is acceptable as the next iterate.
Remark 4. In Step 9, we decrease to get by fixing .
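The backtracking inner loop of Steps 5.1 - 5.4 can be sketched schematically as follows. The halving rule, the constant names, and the stand-in acceptance test are illustrative assumptions, not the paper's exact choices:

```python
# Schematic inner loop: try alpha = 1, then reduce (here by halving) until the
# trial point is accepted or alpha drops below alpha_min, which triggers the
# feasibility restoration phase. 'accept' stands in for the filter and
# sufficient-decrease tests of Steps 5.2 - 5.3.

def inner_loop(accept, alpha_min=1e-8, shrink=0.5):
    alpha = 1.0
    while alpha >= alpha_min:
        if accept(alpha):
            return ("accepted", alpha)
        alpha *= shrink
    return ("restoration", alpha)

# toy acceptance test: only steps no longer than 0.3 are accepted
print(inner_loop(lambda a: a <= 0.3))  # ('accepted', 0.25)
print(inner_loop(lambda a: False)[0])  # 'restoration'
```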
4. Global Convergence of the Algorithm
In this section, we show that the proposed algorithm is well defined and globally convergent under some mild conditions. Let be a feasible point of the NLP (1) and be the multipliers corresponding to the nonlinear constraints satisfied at that point, with denoting the ith component. Throughout this paper, we always assume that the following assumptions hold:
1) The objective and constraint functions are twice continuously differentiable.
2) The sequence remains in a compact and convex subset .
3) The strict Mangasarian-Fromovitz constraint qualification (SMFCQ) holds whenever the current point is feasible for the NLP; then the corresponding vectors of active constraint gradients are linearly independent. There exists a constant bounding the multipliers uniformly, and there exist two constants such that the matrix sequence satisfies the corresponding uniform bounds for all k.
4) are uniformly bounded, i.e., there exist constants and , such that , .
In the following, we first show that, under Assumptions G, the sequence of infeasibility measures converges to zero and all limit points of the iterate sequence are feasible.
Lemma 4.1. Suppose that the Assumptions hold and that the filter is augmented only a finite number of times. Then .
Lemma 4.2. Suppose that Assumptions hold and let be the sequence of iterates generated by Algorithm so that the filter is augmented in every iteration k. Then .
The proofs of Lemma 4.1 and Lemma 4.2 are similar to that of Lemma 5 in the corresponding reference, obtained by a simple replacement, and are omitted. With these two lemmas, we obtain the following theorem, which states that every limit point of the iterate sequence is feasible. The proof of the next theorem is similar to that of Theorem 1 in the corresponding reference.
Theorem 4.1. Suppose that Assumptions G hold. Then .
Lemma 4.3. Suppose that Assumptions hold and let be a feasible point of NLP at which SMFCQ condition holds, but not a KKT point. If is a subsequence of iterate points for which with a constant , then there exist constants , , such that
Lemma 4.3 shows that the search direction, i.e., the optimal solution of the subproblem, is a sufficient descent direction for the Lagrangian function at points that are nonoptimal and sufficiently close to a feasible point. The next lemma shows that no pair corresponding to a feasible point is ever included in the filter.
Lemma 4.4. Suppose that Assumptions hold. Then for all k.
Lemma 4.5. Suppose that Assumptions hold. Let be a subsequence with for a constant independent of and for all . Then there exists a constant so that for all and , we have
Lemma 4.5 shows that there exists a step length bounded away from zero such that the Armijo condition for the Lagrangian function is satisfied under certain conditions. The next lemma shows that there is no cycling between Step 3 and Step 7 of the above algorithm. The proof of Lemma 4.6 is similar to that of Lemma 4 in the corresponding reference.
Lemma 4.6. Suppose that Assumptions G hold. Then the inner loop terminates in a finite number of iterations.
Lemma 4.7. Suppose that the filter is augmented only a finite number of times; that is, . Then .
There, the proof is stated under slightly different circumstances, but it is easy to verify that it remains valid in our context. We now show the global convergence result under some mild conditions. The proofs of the next two theorems are similar to those of Theorem 10 and Theorem 11 in the corresponding reference.
Theorem 4.2. Suppose that all stated assumptions hold. Furthermore, assume that is an infinite sequence generated by Algorithm and . Then every limit point is a KKT point.
Theorem 4.3. Suppose that all stated assumptions hold. Furthermore, assume that is an infinite sequence generated by Algorithm and . Then there exists at least one accumulation point which is a KKT point.
5. Numerical Experiments
In this section, some computational experience is presented to illustrate the efficiency of the algorithm described in this paper for the solution of the symmetric EiCP. All programs are written in MATLAB and run on a Dell 4510U with a 2.8 GHz Intel i7 processor. For our test problems, the matrices were randomly generated. The test problems are scaled according to the procedure described in the corresponding reference. The scaling is important because the matrices we are using are badly conditioned, and without it some of the problems cannot be solved.
As described in Proposition 2.3, the initial solution can be chosen by one of several procedures. In particular, if A has at least one positive diagonal element $a_{ii} > 0$, then the initial solution can be chosen as $x^{0} = e_{i}$, where $e_{i}$ is the ith vector of the canonical basis.
The parameters used in the algorithm are as follows. The stopping criterion is that the corresponding residual is sufficiently small; in particular, the stopping criterion of Step 2 is changed accordingly. We update the matrices using the BFGS formula and use the MATLAB toolbox to solve the subproblem.
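The BFGS update used for the matrices can be sketched as follows. This is a minimal dense update with the common positive-curvature safeguard; the paper's exact (possibly cautious or damped) variant may differ:

```python
# Standard BFGS update of a Hessian approximation H:
#   H+ = H - (H s s^T H) / (s^T H s) + (y y^T) / (s^T y),
# skipped when s^T y <= 0 to keep H positive definite (a common safeguard).

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bfgs_update(H, s, y):
    sty = dot(s, y)
    if sty <= 0.0:
        return H  # skip the update: curvature condition fails
    Hs = matvec(H, s)
    sHs = dot(s, Hs)
    n = len(s)
    return [[H[i][j] - Hs[i] * Hs[j] / sHs + y[i] * y[j] / sty
             for j in range(n)] for i in range(n)]

H = [[1.0, 0.0], [0.0, 1.0]]
s = [1.0, 0.0]
y = [2.0, 0.0]               # gradient difference: curvature 2 along s
print(bfgs_update(H, s, y))  # [[2.0, 0.0], [0.0, 1.0]]
```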
The test results are given in the table, where the notation is as follows: No: the number of the problem; m: the dimension of the vector; T: the total CPU time in seconds for solving the problem; OPT: the final value of the objective function; and the last column reports the computed complementary eigenvalue of the EiCP.
Based on these results, we conclude that the algorithm is an efficient procedure for solving symmetric Eigenvalue Complementarity Problems.
6. Conclusions
In this paper, the Eigenvalue Complementarity Problem with real symmetric matrices has been considered; we reformulate the EiCP as an NLP and use a line search filter-SQP algorithm to solve it. The numerical experiments show that this is a promising method for solving the EiCP. In fact, the problem can also be solved as a quadratically constrained quadratic program rather than a general NLP, and the design of efficient procedures for solving such quadratically constrained quadratic subproblems is one of our current research directions.
 Costa, A., Figueiredo, I., Júdice, J. and Martins, J. (2001) A Complementarity Eigen Problem in the Stability Analysis of Finite Dimensional Elastic Systems with Frictional Contact. Complementarity: Applications, Algorithms and Extensions, 50, 67-83.
 Costa, A., Martins, J., Figueiredo, I. and Júdice, J. (2004) The Directional Instability Problem in Systems with Frictional Contacts. Computer Methods in Applied Mechanics and Engineering, 193, 357-384.
 Gould, N.I.M., Loh, Y. and Robinson, D.P. (2014) A Filter Method with Unified Step Computation for Nonlinear Optimization. SIAM Journal on Optimization, 24, 175-209.
 Milzarek, A. and Ulbrich, M. (2014) A Semismooth Newton Method with Multidimensional Filter Globalization for L1-Optimization. SIAM Journal on Optimization, 24, 298-333.
 Pantoja, J.F.A.D.O. and Mayne, D.Q. (1991) Exact Penalty Function Algorithm with Simple Updating of the Penalty Parameter. Journal of Optimization Theory and Applications, 69, 441-467.
 Liu, T.-W. and Zeng, J.-P. (2009) An SQP Algorithm with Cautious Updating Criteria for Nonlinear Degenerate Problems. Acta Mathematicae Applicatae Sinica, 25, 33-42.
 Wächter, A. and Biegler, L.T. (2005) Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence. SIAM Journal on Optimization, 16, 1-31.
 Wächter, A. and Biegler, L.T. (2005) Line Search Filter Methods for Nonlinear Programming: Local Convergence. SIAM Journal on Optimization, 16, 32-48.
 Jin, Z. (2013) A Globally Convergent Line Search Filter SQP Method for Inequality Constrained Optimization. Journal of Applied Mathematics, 2013, Article ID: 524539.
 Pang, L.-L. and Zhu, D.-T. (2016) A Line Search Filter-SQP Method with Lagrangian Function for Nonlinear Inequality Constrained Optimization. Japan Journal of Industrial and Applied Mathematics, 34, 141-176.
 Júdice, J., Raydan, M., Rosa, S. and Santos, S. (2008) On the Solution of the Symmetric Eigenvalue Complementarity Problem by the Spectral Projected Gradient Algorithm. Numerical Algorithms, 45, 391-407.