This paper is dedicated to solving the following nonlinear convex constrained monotone equations: find x ∈ Ω such that F(x) = 0 (1),
where F : R^n → R^n is a continuous nonlinear mapping and the feasible region Ω is a nonempty closed convex set, e.g. an n-dimensional box, namely, Ω = {x ∈ R^n : l ≤ x ≤ u}. Monotone means that ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ R^n,
where ⟨·, ·⟩ denotes the inner product of vectors. Problem (1) emerges in many fields such as economic equilibrium problems, chemical equilibrium systems and the power flow equations. Based on the work of Solodov and Svaiter, Wang et al. proposed a projection-type method to solve Equation (1). The obtained method possesses the global convergence property without any regularity assumptions. Nevertheless, the method needs to solve a linear equation at each iteration. To avoid solving the linear equation and to improve effectiveness, several projected conjugate gradient methods have been studied based on the projection technique of Solodov and Svaiter. The numerical results gained in these works indicate that projected conjugate gradient type methods for solving problem (1) are indeed efficient and promising. In this paper, by combining the well-known Polak-Ribière-Polyak (PRP) method with the projection technique of Solodov and Svaiter, a conjugate gradient projection method with a fast convergence property is proposed for nonlinear monotone equations with convex constraints. Under some mild conditions, global convergence results are established for the given method. The obtained method possesses the following three beneficial properties: 1) the search direction satisfies the sufficient descent condition; 2) the global convergence is independent of any merit function; and 3) it is a derivative-free method and is effective for large-scale nonlinear convex constrained monotone equations (with a maximum dimension of 100,000 in our tests). Furthermore, the obtained method is extended to solve the l1-norm problem by reformulating it as non-smooth monotone equations.
In Section 2, the modified PRP-type conjugate gradient projection method is proposed, and some preliminary properties are studied. The global convergence results are established in Section 3. The numerical experiments and the application of the obtained method to l1-norm regularized compressive sensing problems are discussed in Section 4. Finally, conclusions are drawn in the last section.
2. The Proposed Method and Corresponding Algorithm
We first introduce the projection operator, defined as the mapping from R^n to Ω:
P_Ω(x) = argmin{ ||y − x|| : y ∈ Ω },
where ||·|| denotes the Euclidean norm of vectors and Ω is a nonempty closed convex subset of R^n.
The projection operator is non-expansive, namely, for any x, y ∈ R^n, the following condition holds:
||P_Ω(x) − P_Ω(y)|| ≤ ||x − y||.
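Since Ω is often a box in practice (as in the experiments below), the projection then has a simple closed form: componentwise clipping. A minimal NumPy sketch (the function name is ours) of the operator and a numerical check of its non-expansiveness:

```python
import numpy as np

def project_box(x, lower, upper):
    """Projection P_Omega(x) onto the box Omega = {v : lower <= v <= upper};
    for a box this reduces to componentwise clipping."""
    return np.clip(x, lower, upper)

# Non-expansiveness: ||P(x) - P(y)|| <= ||x - y|| for any x, y
rng = np.random.default_rng(0)
lower, upper = -np.ones(5), np.ones(5)
x, y = 3 * rng.normal(size=5), 3 * rng.normal(size=5)
lhs = np.linalg.norm(project_box(x, lower, upper) - project_box(y, lower, upper))
rhs = np.linalg.norm(x - y)
assert lhs <= rhs
```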
Let us briefly review the Polak-Ribière-Polyak (PRP) conjugate gradient method. The PRP method was first designed for solving the unconstrained optimization problem
min f(x), x ∈ R^n,
where f : R^n → R is continuously differentiable. It generates the iteration sequence in the form
x_{k+1} = x_k + α_k d_k, (5)
where x_k is the current iteration point, α_k > 0 is a step length, and d_k is the search direction given by
d_k = −g_k for k = 0, d_k = −g_k + β_k^{PRP} d_{k−1} for k ≥ 1, (6)
where g_k = ∇f(x_k), β_k^{PRP} = g_k^T (g_k − g_{k−1}) / ||g_{k−1}||^2.
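The PRP iteration above can be sketched as follows. This is an illustrative implementation only: a simple Armijo backtracking rule stands in for the step-length computation, and a steepest-descent restart is added as a standard safeguard to keep d_k a descent direction; neither detail is taken from this paper.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-8, max_iter=2000):
    """Sketch of the PRP conjugate gradient method:
    x_{k+1} = x_k + alpha_k d_k, with d_0 = -g_0 and, for k >= 1,
    d_k = -g_k + beta_k^PRP d_{k-1},
    beta_k^PRP = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along the descent direction d
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        beta = g_new @ (g_new - g) / (g @ g)
        d = -g_new + beta * d
        if g_new @ d >= 0:      # safeguard: restart with steepest descent
            d = -g_new
        g = g_new
    return x

# minimize f(x) = 0.5 x^T Q x (minimizer x* = 0) as a sanity check
Q = np.diag([1.0, 4.0, 9.0])
x_star = prp_cg(lambda v: 0.5 * v @ (Q @ v), lambda v: Q @ v, np.ones(3))
assert np.linalg.norm(x_star) < 1e-6
```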
Combining the projection technique of Solodov and Svaiter with the PRP method formed by Equation (5) and Equation (6), the following modified PRP formula is defined in this paper:
where and is a constant.
It should be noted that the proposed direction formula Equation (7) reduces to the classical PRP formula if the exact line search is used. Furthermore, the sufficient descent condition holds automatically for all k. Some conjugate gradient methods with similar ideas to Equation (7) have been studied in the literature.
The corresponding modified PRP conjugate gradient projection algorithm for solving problem (1) is stated as follows.
Step 0 Choose any initial point x_0 ∈ Ω, and select constants , , , , and . Let k := 0.
Step 1 If , stop. Otherwise, compute the search direction by Equation (7) with and replaced by and , respectively.
Step 2 Let , where such that
Step 3 If , stop and let . Otherwise, compute the next iterate by
Step 4 Let k := k + 1, and go to Step 1.
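The steps above can be sketched in Python. Since Equations (7)-(9) are displayed formulas not reproduced here, the sketch falls back on the Solodov-Svaiter hyperplane-projection framework that Algorithm 1 builds on: the placeholder direction d_k = −F(x_k) stands in for the modified PRP direction of Equation (7), and a line search of the common form −F(x_k + t_k d_k)^T d_k ≥ σ t_k ||d_k||^2 stands in for Equation (8). Both are assumptions, not the paper's exact formulas.

```python
import numpy as np

def projection_method(F, x0, lower, upper, sigma=1e-4, beta=0.5,
                      tol=1e-6, max_iter=1000):
    """Skeleton of the Solodov-Svaiter type projection framework used by
    Algorithm 1, over a box Omega = [lower, upper]."""
    project = lambda v: np.clip(v, lower, upper)
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:            # Step 1: stopping test
            break
        d = -Fx                                  # placeholder for Equation (7)
        t = 1.0                                  # Step 2: backtracking line search
        while -(F(x + t * d) @ d) < sigma * t * (d @ d):
            t *= beta
        z = x + t * d
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:            # Step 3: trial point solves (1)
            x = z
            break
        lam = (Fz @ (x - z)) / (Fz @ Fz)         # project x_k onto the hyperplane,
        x = project(x - lam * Fz)                # then back onto Omega
    return x

# solve F(x) = x = 0 over the box [-2, 2]^4 (unique solution x* = 0)
x_star = projection_method(lambda v: v, np.ones(4), -2.0, 2.0)
assert np.linalg.norm(x_star) < 1e-6
```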
Remark 1: In Algorithm 1, the step size given by Equation (8) satisfies
where , is the search direction. Moreover, for any such that ,
comes from the monotonicity property of F. This means that the hyperplane
strictly separates the current point x_k from the solution set of the problem. The above facts and Step 3 indicate that the next iterate is computed by projecting x_k onto the intersection of the feasible set with the halfspace.
3. Convergence Analysis
In this section, we discuss the convergence properties of the given method. Before that, some basic assumptions on problem (1) need to be given.
Assumption 1: The mapping F is Lipschitz continuous with constant L > 0, i.e., for every x, y ∈ R^n,
||F(x) − F(y)|| ≤ L ||x − y||.
Assumption 2: The solution set of problem (1), denoted by S, is nonempty and convex.
For conjugate gradient methods, the sufficient descent property is essential in the convergence analysis. The following lemma shows that the search direction generated by Algorithm 1 satisfies the sufficient descent condition independently of the line search.
Lemma 1: Let the sequences and be generated by Algorithm 1. Then, for all k,
Proof: For k = 0, Equation (12) and Equation (13) follow directly from the definition of the initial direction. For k ≥ 1, using Equation (7), the definition of the search direction d_k, it follows that
where the last inequality follows from the fact
In the remainder of this paper, we assume that F(x_k) ≠ 0 for all k; otherwise, a solution of problem (1) has been found.
Lemma 2: Let the sequences and be generated by Algorithm 1. Suppose that Assumption 1 holds. Then there exists a positive number satisfying Equation (8) for all k.
Proof: The line search ensures that if , then does not satisfy Equation (8), namely,
where . From Equation (12) and Assumption 1 we have
which means that
The above result Equation (14) shows that the line search procedure Equation (8) always terminates in a finite number of steps.
Lemma 3: Let the sequences and be generated by Algorithm 1. Suppose that Assumptions 1 and 2 hold. Then both sequences are bounded. Moreover, we have
Particularly, Equation (15) implies that
Proof: Let x* denote an arbitrary solution of problem (1). The monotonicity of F and the line search Equation (8) yield
Equation (3), Equation (9) and Equation (18) imply
Since the sequence is decreasing and convergent, the sequence is bounded. Equation (19) shows that for all k. Then, by Assumption 1, we have
From the Cauchy-Schwarz inequality, the line search Equation (8), the monotonicity of F and Equation (18), it follows that
which shows that the sequence is bounded. Furthermore, since the sequence is also bounded, there exist constants such that
Based on Equation (23) and Assumption 1, it follows that
Substituting the above relationship into Equation (19) yields
From the definition of and Equation (15), it holds that
Combining the definition of , Equation (3), and the Cauchy-Schwarz inequality, we have
which together with Equation (15), proves Equation (16).
Theorem 1: Let the sequences and be generated by Algorithm 1. Suppose that Assumptions 1 and 2 hold. Then
Proof: We prove this Theorem by contradiction. Assume that Equation (26) does not hold, namely, there exists such that
From Equation (12) and Equation (27),
On the other hand, Equation (13), Equation (21) and the definition of imply
Finally, from Equation (14), Equation (27) and Equation (28),
which contradicts Equation (17). Thus, Equation (26) holds.
4. Numerical Experiments
The numerical performance of the proposed Algorithm 1 on large-scale nonlinear convex constrained monotone equations with various dimensions and different initial points is studied in this section. Furthermore, the given Algorithm 1 is extended to solve the l1-norm regularized problems arising in decoding sparse signals in compressive sensing. The algorithms are coded in MATLAB R2015a and run on a PC with a Core i5 CPU and 4 GB of memory.
4.1. Experiments on Nonlinear Convex Constrained Monotone Equations
The testing problems are listed as follows.
Problem 1. (Wang et al. ) The elements of F are given by
Problem 2. The example is taken from . The elements of F are given by
Problem 3. The example is taken from .
Problem 4. The example is taken from .
For convenience, MPRP denotes the proposed Algorithm 1. We compare the MPRP method with the CGD method on Problems 1-4. For both methods, set , , . In order to evaluate the efficiency and the robustness of both methods, we test Problems 1-4 with various dimensions and different initial points: , , , , , where ones(n,1) returns an n-by-1 array of ones and rand(n,1) returns an n-by-1 array of uniformly distributed random values in MATLAB.
Numerical results are shown in Tables 1-4, in which Init (Dim), NI and NF denote the initial point (dimension), the number of iterations and the number of function evaluations, respectively; is the final Euclidean norm of the function values, and CPU-time is the computing time in seconds.
Tables 1-4 indicate that the dimension of the problem has little effect on the number of iterations of the algorithm, although the computing time grows considerably in the high-dimensional cases. Moreover, we can see from Tables 1-4 that Algorithm 1 is more competitive than the CGD algorithm, since Algorithm 1 obtains the solution for all the test data with fewer iterations and less CPU time. The results of Tables 1-4 thus show that our method is very efficient.
The numerical performance of both methods is also evaluated using the performance profile tool of Dolan and Moré. Figure 1 shows the performance of the two methods; it is obvious that the proposed MPRP method is more efficient and robust than the CGD method.
Table 1. Numerical results for MPRP/CGD on problem 1.
Table 2. Numerical results for MPRP/CGD on problem 2.
Table 3. Numerical results for MPRP/CGD on problem 3.
Table 4. Numerical results for MPRP/CGD on problem 4.
Figure 1. Performance profiles for the two methods MPRP and CGD, where the left and right figures correspond to the number of function evaluations and the CPU time, respectively.
4.2. Experiments on the l1-Norm Regularization Problem
A cost function combining the l2 and l1 norms often emerges in signal reconstruction problems, i.e.:
min_x (1/2) ||y − Ax||_2^2 + τ ||x||_1, (28)
where ||·||_2 is the Euclidean norm, and
||x||_1 = Σ_{i=1}^n |x_i|
is the l1 norm, A ∈ R^{m×n} is a system matrix, y ∈ R^m is the observed data, x ∈ R^n is the signal to be reconstructed, and τ is a positive regularization parameter.
Optimization problems of the form Equation (28) appear in several signal reconstruction problems, such as sparse signal de-blurring, medical image reconstruction, compressed sensing, and super-resolution. Iterative line search methods or fixed point iteration schemes are commonly used to solve problem (28). By using the technique proposed by Figueiredo et al., we can reformulate problem (28) as a convex quadratic program. Let x = u − v with u ≥ 0, v ≥ 0, where u_i = (x_i)_+ and v_i = (−x_i)_+ for all i, with (·)_+ = max{·, 0}. The l1 norm can then be formulated as ||x||_1 = e^T u + e^T v, where e = (1, 1, ..., 1)^T. Problem (28) is expressed as the bound-constrained quadratic program:
Furthermore, problem (29) can be rewritten as a standard convex quadratic program:
min_z (1/2) z^T B z + c^T z, s.t. z ≥ 0, (30)
where z = (u; v), b = A^T y, c = τ 1_{2n} + (−b; b) with 1_{2n} the all-ones vector, and B = [A^T A, −A^T A; −A^T A, A^T A].
B is a positive semi-definite matrix. Recently, problem (30) was reformulated as a linear variational inequality (LVI) problem by Xiao et al. They pointed out that this LVI problem is equivalent to a linear complementarity problem, and z is a solution of the linear complementarity problem if and only if it is a solution of the following nonlinear monotone equations (the minimum is taken componentwise):
F(z) = min{ z, Bz + c } = 0,
where F is Lipschitz continuous. This result indicates that problem (28) can be solved by the MPRP projection method.
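Following the splitting of Figueiredo et al. and the reformulation of Xiao et al. described above, the map F can be assembled in a few lines. This sketch assumes the standard choices z = (u; v), B = [A^T A, −A^T A; −A^T A, A^T A] and c = τ·1 + (−A^T y; A^T y); the function name is ours.

```python
import numpy as np

def build_l1_monotone_map(A, y, tau):
    """Monotone map F(z) = min{z, Bz + c} (componentwise minimum) for the
    reformulated l1-regularized least-squares problem."""
    AtA = A.T @ A
    b = A.T @ y
    B = np.block([[AtA, -AtA], [-AtA, AtA]])        # positive semi-definite
    c = tau * np.ones(2 * A.shape[1]) + np.concatenate([-b, b])
    return lambda z: np.minimum(z, B @ z + c)

# small random instance; check monotonicity <F(z1) - F(z2), z1 - z2> >= 0
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 16))
y = A @ np.eye(16)[3]          # observation of a 1-sparse signal
F = build_l1_monotone_map(A, y, tau=0.01)
z1, z2 = rng.normal(size=32), rng.normal(size=32)
gap = (F(z1) - F(z2)) @ (z1 - z2)
assert gap >= -1e-8
```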
In this part of the numerical experiments, a compressive sensing scenario is considered, which aims to reconstruct a length-n sparse signal from a significantly smaller number m of observations, where m ≪ n. The quality of restoration is measured by the mean squared error (MSE) with respect to the original signal x̄, that is,
MSE = (1/n) ||x̃ − x̄||^2,
where x̃ is the restored signal. In practice, and , and the original signal contains 26 randomly placed non-zero elements. A is the Gaussian matrix generated by MATLAB's code , and the measurement y contains noise:
where is the Gaussian noise distributed as . The merit function is
where the regularization parameter is forced to decrease during the iterations (a continuation strategy). The experiment starts from the measurement image, i.e. , and terminates when the relative change of the iterates satisfies:
where is the function value at .
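As an aside, the MSE metric defined above (assuming the standard definition MSE = (1/n)||x̃ − x̄||^2) is a one-liner in NumPy; the function name is ours:

```python
import numpy as np

def mse(x_restored, x_original):
    """Mean squared error between the restored and the original signal."""
    return np.sum((x_restored - x_original) ** 2) / x_original.size

print(mse(np.array([1.0, 2.0]), np.array([1.0, 0.0])))  # 2.0
```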
We compare the proposed MPRP method with the CGD method on this problem. In both methods, the parameters are taken as , and . The same initial point and the same continuation technique on the regularization parameter are used in both methods.
Figure 2 shows the simulation results of MPRP and CGD for sparse signal reconstruction. As we can see in Figure 2, the original sparse signal is restored almost exactly by both MPRP and CGD. Figure 3 provides a series of comparisons of the objective function values and the relative error as the iteration number and computing time increase. As we can see in Figure 3, the MSE and the objective function values of the MPRP method decrease faster. The experiments are repeated for 15 different random noise samples in Table 5. We report the
Figure 2. From top to bottom: the original signal, the measurement, and the signals recovered by the two methods MPRP and CGD, respectively.
Figure 3. Comparison results of the MPRP and CGD methods. From left to right: the trends of the MSE and of the objective function values against the number of iterations and the CPU time in seconds, respectively.
Table 5. The experiment results for MPRP/CGD on the l1-norm regularization problem.
number of iterations (Niter) and the CPU time (in seconds) required for the whole testing process. From Table 5, we can see that the MPRP method outperforms the CGD method: its number of iterations and CPU time are much smaller. To summarize, these experimental results show that the proposed MPRP algorithm works well and efficiently.
5. Conclusion
In this paper, we proposed a conjugate gradient projection algorithm for solving large-scale nonlinear convex constrained monotone equations, based on the well-known Polak-Ribière-Polyak conjugate gradient method, one of the most effective conjugate gradient methods for unconstrained optimization problems. The algorithm combines the CG technique with a projection scheme and is derivative-free, so it can be applied to large-scale non-smooth equations owing to its low storage requirement. Under some technical conditions, we have established global convergence. Another contribution of this paper is the application of the given method to the l1-norm regularized problems in compressive sensing.
This work was supported by the Scientific Research Project of Tianjin Education Commission (No. 2019KJ232).
 Solodov, M.V. and Svaiter, B.F. (1998) A Globally Convergent Inexact Newton Method for Systems of Monotone Equations. In: Fukushima, M. and Qi, L., Eds., Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Kluwer Academic, 355-369.
 Wang, C.W. and Wang, Y.J. (2009) A Superlinearly Convergent Projection Method for Constrained Systems of Nonlinear Equations. Journal of Global Optimization, 44, 283-296.
 Hu, Y.P. and Wei, Z.X. (2015) Wei-Yao-Liu Conjugate Gradient Projection Algorithm for Nonlinear Monotone Equations with Convex Constraints. International Journal of Computer Mathematics, 92, 2261-2272.
 Liu, J.K. and Li, S.J. (2015) A Projection Method for Convex Constrained Monotone Nonlinear Equations with Applications. Computers and Mathematics with Applications, 70, 2442-2453.
 Xiao, Y.H. and Zhu, H. (2013) A Conjugate Gradient Method to Solve Convex Constrained Monotone Equations with Applications in Compressive Sensing. Journal of Mathematical Analysis and Applications, 405, 310-319.
 Yu, G.H., Niu, S.Z. and Ma, J.H. (2013) Multivariate Spectral Gradient Projection Method for Nonlinear Monotone Equations with Convex Constraints. Journal of Industrial and Management Optimization, 9, 117-129.
 Polak, E. and Ribière, G. (1969) Note sur la convergence de méthodes de directions conjuguées. Revue Française d'Informatique et de Recherche Opérationnelle, 3, 35-43.
 Zhang, L. and Li, J.L. (2011) A New Globalization Technique for Nonlinear Conjugate Gradient Methods for Nonconvex Minimization. Applied Mathematics and Computation, 217, 10295-10304.
 Hu, Y.P. and Wei, Z.X. (2014) A Modified Liu-Storey Conjugate Gradient Projection Algorithm for Nonlinear Monotone Equations. International Mathematical Forum, 9, 1767-1777.
 Yuan, G.L. and Hu, W.J. (2018) A Conjugate Gradient Algorithm for Large-Scale Unconstrained Optimization Problems and Nonlinear Equations. Journal of Inequalities and Applications, 2018, Article No. 113.
 Yuan, G.L., Meng, Z.H. and Li, Y. (2016) A Modified Hestenes and Stiefel Conjugate Gradient Algorithm for Large-Scale Nonsmooth Minimizations and Nonlinear Equations. Journal of Optimization Theory and Applications, 168, 129-152.
 Yuan, G.L., Wei, Z.X. and Li, G.Y. (2014) A Modified Polak-Ribière-Polyak Conjugate Gradient Algorithm for Nonsmooth Convex Programs. Journal of Computational and Applied Mathematics, 255, 86-96.
 Yuan, G.L. and Zhang, M.J. (2015) A Three-Terms Polak-Ribière-Polyak Conjugate Gradient Algorithm for Large-Scale Nonlinear Equations. Journal of Computational and Applied Mathematics, 286, 186-195.
 Yuan, G.L. and Zhang, M.J. (2013) A Modified Hestenes-Stiefel Conjugate Gradient Algorithm for Large-Scale Optimization. Numerical Functional Analysis and Optimization, 34, 914-937.
 Yuan, G.L., Wei, Z.X. and Zhao, Q.M. (2014) A Modified Polak-Ribière-Polyak Conjugate Gradient Algorithm for Large-Scale Optimization Problems. IIE Transactions, 46, 397-413.
 Yu, Z.S., Lin, J., Sun, J., Xiao, Y.H., Liu, L.Y. and Li, Z.H. (2009) Spectral Gradient Projection Method for Monotone Nonlinear Equations with Convex Constraints. Applied Numerical Mathematics, 59, 2416-2423.
 Yang, J.C., Wright, J., Huang, T.S. and Ma, Y. (2010) Image Super-Resolution via Sparse Representation. IEEE Transactions on Image Processing, 19, 2861-2873.
 Figueiredo, M., Nowak, R. and Wright, S.J. (2007) Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems. IEEE Journal of Selected Topics in Signal Processing, 1, 586-597.
 Xiao, Y.H., Wang, Q.Y. and Hu, Q.J. (2011) Non-Smooth Equations Based Method for l1-Norm Problems with Applications to Compressed Sensing. Nonlinear Analysis: Theory, Methods & Applications, 74, 3570-3577.