In this paper, we consider the constrained nonlinear semismooth equations problem: finding a vector $x \in \Omega$ such that
$$H(x) = 0, \qquad (1)$$
where $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex set and $H : \mathbb{R}^n \to \mathbb{R}^n$ is a semismooth mapping. The notion of semismoothness was introduced for functionals by Mifflin and extended to vector-valued functions by Qi and Sun.
Systems of constrained semismooth equations arise in various applications, for instance complementarity problems, box-constrained variational inequality problems, the KKT systems of variational inequality problems, and so on. Solving the constrained equations (1) can be transformed into solving the following constrained optimization problem:
$$\min_{x \in \Omega} f(x) := \frac{1}{2}\|H(x)\|^2, \qquad (2)$$
where $f$ is continuously differentiable with gradient denoted by $g = \nabla f$. Many researchers have studied constrained optimization problems such as (2) and proposed many effective algorithms; examples include a new class of adaptive nonmonotone spectral gradient methods and an active-set projected trust-region algorithm. Optimization methods divide into first-order methods and second-order methods. Classical first-order algorithms include the gradient method, the subgradient method, the conjugate gradient method, etc. The main advantage of first-order methods is their small storage requirement, which makes them particularly suitable for large-scale problems. Their disadvantage is that the convergence rate is at most linear, which cannot meet high-precision requirements. Second-order methods, by contrast, converge fast: under suitable conditions they achieve superlinear or even quadratic convergence. Their disadvantage is the need for a good initial point; sometimes the initial point must even be close to a local optimum.
Motivated by this, in this paper we combine the advantages of the first-order and second-order methods and consider a two-stage combination algorithm to solve the optimization problem. First, we use a first-order method to obtain global convergence of the algorithm; then the final point produced by the first-order method serves as the new initial point for a second-order method, which yields a fast convergence rate. At the same time, we use projection techniques to handle the constraint $x \in \Omega$.
In this section, we present some definitions and theorems that are useful to our main result.
Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. By Rademacher's theorem, $H$ is differentiable almost everywhere. Denote the set of points at which $H$ is differentiable by $D_H$. We write $H'(x)$ for the usual Jacobian matrix of partial derivatives whenever $x$ is a point at which the necessary partial derivatives exist. Let $\partial H(x)$ be the generalized Jacobian defined by Clarke. Then
$$\partial H(x) = \operatorname{co} \partial_B H(x),$$
where $\operatorname{co}$ denotes the convex hull of a set and
$$\partial_B H(x) = \Big\{ \lim_{\substack{x_k \to x \\ x_k \in D_H}} H'(x_k) \Big\}.$$
Definition 2.1 Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. We say that $H$ is semismooth at $x$ if
$$\lim_{\substack{V \in \partial H(x + t h') \\ h' \to h,\; t \downarrow 0}} V h'$$
exists for any $h \in \mathbb{R}^n$.
Lemma 2.2: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. The following statements are equivalent:
1) $H$ is semismooth at $x$;
2) $H$ is directionally differentiable at $x$ and, for any $V \in \partial H(x + h)$, $h \to 0$,
$$V h - H'(x; h) = o(\|h\|).$$
Lemma 2.3: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. Then $H$ is semismooth at $x$ if and only if each component of $H$ is semismooth at $x$.
Definition 2.4: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. If for any $V \in \partial H(x + h)$, $h \to 0$,
$$V h - H'(x; h) = O(\|h\|^{1 + p}),$$
where $0 < p \le 1$, then we say $H$ is $p$-order semismooth at $x$.
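As a simple illustration of this definition (an example added here, not taken from the original text), the absolute value function is strongly semismooth:

```latex
% H(x) = |x| on \mathbb{R} is 1-order (strongly) semismooth at x = 0:
% for h \neq 0 we have \partial H(h) = \{\operatorname{sgn}(h)\}, so for V \in \partial H(h),
V h - H'(0; h) \;=\; \operatorname{sgn}(h)\, h - |h| \;=\; |h| - |h| \;=\; 0 \;=\; O(\|h\|^{2}).
```

Since the remainder vanishes identically away from the kink, the $O(\|h\|^{1+p})$ bound holds with $p = 1$.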
Definition 2.5: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. We say $H$ is BD-regular at $x$ if all $V \in \partial_B H(x)$ are nonsingular.
Lemma 2.6: Suppose that $H : \mathbb{R}^n \to \mathbb{R}^n$ is locally Lipschitz continuous and $H$ is BD-regular at $x$. Then there exist a neighborhood $N(x)$ of $x$ and a constant $K$ such that for any $y \in N(x)$ and $V \in \partial_B H(y)$, $V$ is nonsingular and $\|V^{-1}\| \le K$.
Lemma 2.7: Suppose that $H : \mathbb{R}^n \to \mathbb{R}^n$ is locally Lipschitz continuous and $H$ is BD-regular at a solution $x^*$ of $H(x) = 0$. If $H$ is semismooth at $x^*$, then there exist a neighborhood $N(x^*)$ of $x^*$ and a constant $c > 0$ such that for any $x \in N(x^*)$,
$$\|H(x)\| \ge c\, \|x - x^*\|.$$
Lemma 2.8: The projection operator $P_\Omega(\cdot)$ satisfies:
1) For any $x \in \mathbb{R}^n$, $\langle P_\Omega(x) - x,\; z - P_\Omega(x) \rangle \ge 0$ for all $z \in \Omega$.
2) $\|P_\Omega(x) - P_\Omega(y)\| \le \|x - y\|$ for all $x, y \in \mathbb{R}^n$.
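A minimal sketch of these projection properties in Python, assuming $\Omega$ is a box $[l, u]$ (a common choice in this setting); the bounds and test points are illustrative:

```python
def project_box(x, l, u):
    """Componentwise orthogonal projection onto the box [l, u]:
    P_Omega(x)_i = min(max(x_i, l_i), u_i)."""
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, l, u)]

def dist(x, y):
    """Euclidean distance between two points."""
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5

l, u = [0.0, 0.0], [1.0, 1.0]
x, y = [2.0, -0.5], [0.5, 3.0]
px, py = project_box(x, l, u), project_box(y, l, u)
# Lemma 2.8(2): the projection is nonexpansive.
assert dist(px, py) <= dist(x, y)
```

Nonexpansiveness is what makes projected iterations stable: applying $P_\Omega$ never increases the distance between two iterates.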
Lemma 2.9: Given $x \in \Omega$ and $d \in \mathbb{R}^n$, the function $\psi$ defined by
$$\psi(\alpha) = \frac{\|P_\Omega(x + \alpha d) - x\|}{\alpha}$$
is nonincreasing for $\alpha > 0$.
Lemma 2.9 actually implies that if $x^*$ is a stationary point of (2), then
$$P_\Omega(x^* - \alpha g(x^*)) = x^* \quad \text{for all } \alpha > 0.$$
In order to obtain the global convergence of the algorithm, in the first stage we adopt the nonmonotone spectral projected gradient method as the first-order method. The one-dimensional search procedure of Algorithm 3.1 will be called SPG1 from now on, and that of Algorithm 3.2 will be called SPG2 in the rest of the paper.
Given $x \in \mathbb{R}^n$, we define $P_\Omega(x)$ as the orthogonal projection of $x$ onto $\Omega$, and denote $g_k = \nabla f(x_k)$. The algorithm uses an integer $M \ge 1$, a small parameter $\alpha_{\min} > 0$, a large parameter $\alpha_{\max} > \alpha_{\min}$, a sufficient-decrease parameter $\gamma \in (0, 1)$, and safeguarding parameters $0 < \sigma_1 < \sigma_2 < 1$; initially, $\alpha_0 \in [\alpha_{\min}, \alpha_{\max}]$ and $x_0 \in \Omega$.
Algorithm 3.1  (SPG1)
Step 1. If $P_\Omega(x_k - g_k) = x_k$, stop and output $x_k$.
Step 2. (Backtracking)
Step 2.1. Set $\lambda \leftarrow \alpha_k$.
Step 2.2. Set $x_+ = P_\Omega(x_k - \lambda g_k)$.
Step 2.3. If
$$f(x_+) \le \max_{0 \le j \le \min\{k, M-1\}} f(x_{k-j}) + \gamma \langle x_+ - x_k,\; g_k \rangle, \qquad (11)$$
then define $\lambda_k = \lambda$, $x_{k+1} = x_+$, $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$, and go to Step 3.
If (11) does not hold, define $\lambda_{\text{new}} \in [\sigma_1 \lambda, \sigma_2 \lambda]$. Set $\lambda \leftarrow \lambda_{\text{new}}$, and go to Step 2.2.
Step 3. Compute $b_k = \langle s_k, y_k \rangle$. If $b_k \le 0$, set $\alpha_{k+1} = \alpha_{\max}$; else compute $a_k = \langle s_k, s_k \rangle$, set $\alpha_{k+1} = \min\{\alpha_{\max}, \max\{\alpha_{\min}, a_k / b_k\}\}$, and go to Step 1.
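A minimal Python sketch of Algorithm 3.1, assuming $f$, its gradient, and a projection onto $\Omega$ are supplied as callables; the step-halving stands in for choosing $\lambda_{\text{new}} \in [\sigma_1\lambda, \sigma_2\lambda]$, and the default parameter values are illustrative:

```python
def spg1(f, grad, project, x0, alpha0=1.0, M=10, gamma=1e-4,
         alpha_min=1e-10, alpha_max=1e10, tol=1e-8, max_iter=500):
    """Sketch of Algorithm 3.1 (SPG1): the trial point is re-projected
    for every trial step length lam during the backtracking."""
    x, alpha, hist = list(x0), alpha0, []
    for _ in range(max_iter):
        g = grad(x)
        # Step 1: stop when x is (approximately) a constrained stationary point.
        if max(abs(pi - xi) for pi, xi in
               zip(project([xi - gi for xi, gi in zip(x, g)]), x)) <= tol:
            break
        hist.append(f(x))
        lam = alpha
        while True:  # Step 2: nonmonotone backtracking, condition (11)
            x_new = project([xi - lam * gi for xi, gi in zip(x, g)])
            decr = gamma * sum((xn - xi) * gi
                               for xn, xi, gi in zip(x_new, x, g))
            if f(x_new) <= max(hist[-M:]) + decr:
                break
            lam *= 0.5  # a simple choice of lambda_new in [sigma1*lam, sigma2*lam]
        g_new = grad(x_new)
        s = [xn - xi for xn, xi in zip(x_new, x)]
        y = [gn - gi for gn, gi in zip(g_new, g)]
        # Step 3: safeguarded spectral (Barzilai-Borwein) step length
        b = sum(si * yi for si, yi in zip(s, y))
        if b <= 0:
            alpha = alpha_max
        else:
            alpha = min(alpha_max,
                        max(alpha_min, sum(si * si for si in s) / b))
        x = x_new
    return x
```

For example, minimizing $f(x) = (x_1 - 2)^2 + (x_2 + 1)^2$ over the box $[0, 1]^2$ drives the iterates to the constrained stationary point $(1, 0)$.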
Algorithm 3.2  (SPG2)
Step 2. (Backtracking)
Step 2.1. Compute $d_k = P_\Omega(x_k - \alpha_k g_k) - x_k$. Set $\lambda \leftarrow 1$.
Step 2.2. Set $x_+ = x_k + \lambda d_k$.
Step 2.3. If
$$f(x_+) \le \max_{0 \le j \le \min\{k, M-1\}} f(x_{k-j}) + \gamma \lambda \langle d_k,\; g_k \rangle, \qquad (12)$$
then define $\lambda_k = \lambda$, $x_{k+1} = x_+$, $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$, and go to Step 3.
If (12) does not hold, define $\lambda_{\text{new}} \in [\sigma_1 \lambda, \sigma_2 \lambda]$. Set $\lambda \leftarrow \lambda_{\text{new}}$, and go to Step 2.2.
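A minimal Python sketch of one SPG2 backtracking step, assuming $f$, its gradient, a projection onto $\Omega$, and the list of recent function values are supplied; $\gamma = 10^{-4}$ and the step-halving are illustrative stand-ins for $\gamma$, $\sigma_1$, $\sigma_2$. The key difference from SPG1 is that the projection is computed once, and the search then moves along the fixed direction $d_k$:

```python
def spg2_step(f, grad, project, x, alpha, hist, gamma=1e-4):
    """Sketch of one SPG2 iteration (Algorithm 3.2, Step 2): a single
    projection yields the direction d_k, and backtracking moves along
    that fixed direction, so no projection is needed inside the loop.
    Feasibility is preserved because Omega is convex and lam <= 1."""
    g = grad(x)
    d = [pi - xi for pi, xi in
         zip(project([xi - alpha * gi for xi, gi in zip(x, g)]), x)]
    lam = 1.0
    while True:  # condition (12): nonmonotone Armijo test along d
        x_new = [xi + lam * di for xi, di in zip(x, d)]
        if f(x_new) <= max(hist) + gamma * lam * sum(
                di * gi for di, gi in zip(d, g)):
            return x_new
        lam *= 0.5  # a simple choice of lambda_new in [sigma1*lam, sigma2*lam]
```

The stopping test and the spectral step-length update are the same as in Algorithm 3.1; SPG2 trades the extra projections of SPG1 for a cheaper inner loop.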
The output point of the first stage is used as the initial point of the second stage.
Algorithm 3.3 (A projected semismooth asymptotical Newton method)
Step 0. Choose the algorithm constants. Let $k := 0$ and take $x_0 \in \Omega$.
Step 1. Choose $V_k \in \partial_B H(x_k)$ and compute $H(x_k)$.
Step 2. If $x_k$ is a stationary point of (2), stop. Otherwise, form the quantities required for the Newton system and go to Step 3.
Step 3. If the linear system
$$H(x_k) + V_k d = 0$$
has a solution $d_k$, and $d_k$ satisfies the required decrease condition, then use the direction $d_k$. Otherwise, set $d_k$ to the projected gradient direction.
Step 4. Let $m_k$ be the smallest nonnegative integer $m$ satisfying the line-search acceptance test, where, for any trial step length, the trial point is defined as an optimal solution of the associated projection subproblem, whose optimal solution is available in closed form. Accept the corresponding trial point and step length.
Step 5. Let $k := k + 1$, and go to Step 1.
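As a local illustration of the second-phase Newton idea (solving the system $H(x_k) + V_k d = 0$ with $V_k \in \partial_B H(x_k)$), here is a sketch on a scalar complementarity problem reformulated with the Fischer-Burmeister function; the test function $F(x) = x - 2$ and starting point are illustrative, and the projection and line-search globalization of Algorithm 3.3 are omitted:

```python
import math

def fb(a, b):
    """Fischer-Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b."""
    return math.hypot(a, b) - a - b

def semismooth_newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Local semismooth Newton iteration for H(x) = phi(x, F(x)) = 0.
    At points with (x, F(x)) != (0, 0), H is differentiable, and its
    derivative serves as the element V of partial_B H(x)."""
    x = x0
    for _ in range(max_iter):
        h = fb(x, F(x))
        if abs(h) <= tol:
            break
        r = math.hypot(x, F(x))
        # H'(x) = x/r - 1 + (F(x)/r - 1) * F'(x), an element of partial_B H(x)
        V = x / r - 1.0 + (F(x) / r - 1.0) * dF(x)
        x = x - h / V  # scalar Newton step: solve H(x) + V d = 0, set x += d
    return x
```

Since $H$ is semismooth and BD-regular near the solution, the local iteration converges superlinearly once the first phase has delivered a sufficiently good starting point.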
4. Convergence Analysis
Theorem 4.1: Algorithm SPG1 is well defined, and any accumulation point of the sequence that it generates is a constrained stationary point.
Theorem 4.2: Algorithm SPG2 is well defined, and any accumulation point of the sequence that it generates is a constrained stationary point.
Theorem 4.3: Let $\{x_k\}$ be a sequence generated by Algorithm 3.3. Then any accumulation point of $\{x_k\}$ is a stationary point of (2).
Many practical problems can be solved by transforming them into constrained semismooth equations. For example, the mixed complementarity problem (MCP): given a continuously differentiable function $F : \mathbb{R}^n \to \mathbb{R}^n$ and bounds $l_i \in \mathbb{R} \cup \{-\infty\}$, $u_i \in \mathbb{R} \cup \{+\infty\}$ with $l_i < u_i$, find a vector $x \in [l, u]$ that satisfies, for each $i$,
$$x_i = l_i \;\Rightarrow\; F_i(x) \ge 0, \qquad l_i < x_i < u_i \;\Rightarrow\; F_i(x) = 0, \qquad x_i = u_i \;\Rightarrow\; F_i(x) \le 0.$$
The function $\Phi : \mathbb{R}^n \to \mathbb{R}^n$ is defined through the penalized Fischer-Burmeister function introduced by Chen et al., which has the form
$$\phi_\lambda(a, b) = \lambda \phi(a, b) + (1 - \lambda)\, a_+ b_+,$$
where $\lambda \in (0, 1)$ and $a_+ = \max\{0, a\}$ for any $a \in \mathbb{R}$. Here, $\phi$ is an NCP function, which is given by
$$\phi(a, b) = \sqrt{a^2 + b^2} - a - b.$$
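A minimal Python sketch of the penalized Fischer-Burmeister function, assuming the form above; the choice $\lambda = 0.95$ is illustrative:

```python
import math

def plus(t):
    """Positive part: t_+ = max(0, t)."""
    return max(t, 0.0)

def penalized_fb(a, b, lam=0.95):
    """Penalized Fischer-Burmeister function (lam in (0, 1) is an
    illustrative choice):
    phi_lam(a, b) = lam * (sqrt(a^2 + b^2) - a - b) + (1 - lam) * a_+ * b_+.
    It vanishes exactly when a >= 0, b >= 0 and a * b = 0."""
    return lam * (math.hypot(a, b) - a - b) + (1.0 - lam) * plus(a) * plus(b)
```

The penalty term $(1 - \lambda) a_+ b_+$ strengthens the penalization of complementarity violations where both arguments are positive, which is the motivation given by Chen et al. for preferring $\phi_\lambda$ over the plain Fischer-Burmeister function.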
The mixed complementarity problem can be transformed into a semismooth system of equations using the above functions.
MCP can be reformulated as the constrained system $H(x) = 0$, $x \in \Omega = [l, u]$, with $H$ built componentwise from $\phi_\lambda$, $F$, and the bounds $l$, $u$.
Then we can use the two-phase method to solve this problem.
In this paper, we have proposed a two-phase method for constrained semismooth equations. Other first-order and second-order methods can also be combined within the same framework. The iteration complexity analysis of the first-order phase is a meaningful topic, and we will pursue it in further research.
 Qi, L.Q., Tong, X.J. and Li, D.H. (2004) Active-Set Projected Trust-Region Algorithm for Box-Constrained Nonsmooth Equations. Journal of Optimization Theory and Applications, 120, 601-625.
 Zarantonello, E.H. (1971) Projections on Convex Sets in Hilbert Space and Spectral Theory: Part I. Projections on Convex Sets: Part II. Spectral Theory. Revista de la Unión Matemática Argentina, 26, 237-424.
 Powell, M.J.D. (1983) Variable Metric Methods for Constrained Optimization. Mathematical Programming: The State of the Art, Springer, Berlin Heidelberg.
 Birgin, E.G., Martínez, J.M. and Raydan, M. (2000) Nonmonotone Spectral Projected Gradient Methods on Convex Sets. SIAM Journal on Optimization, 10, 1196-1211.
 Sun, D., Womersley, R.S. and Qi, H. (2002) A Feasible Semismooth Asymptotically Newton Method for Mixed Complementarity Problems. Mathematical Programming, 94, 167-187.