1.1. The Nonnegative Linear Programming Problem
Consider the linear programming (LP) problem

(P): maximize c^T x,   (1)
subject to Ax ≤ b,   (2)
x ≥ 0,   (3)

where c and x are n-dimensional column vectors of objective coefficients and variables, respectively; A is an m × n matrix with row vectors a_i, i = 1, ..., m; b is an m-dimensional column vector; and 0 is an n-dimensional vector of zeros.
The non-polynomial simplex methods and the polynomial interior-point barrier-function algorithms are currently the two principal approaches for solving problem (P), but for each there are problem instances on which it performs poorly. Since a principal use of LP in industrial applications is within binary and integer programming algorithms, however, pivoting algorithms with efficient post-optimality analysis are frequently preferable to interior-point methods. On the other hand, simplex methods often cannot solve large-scale LPs at the speed required by many current applications. The purpose here is to develop an approach for solving a certain class of LPs faster than existing methods.
In this paper we consider the nonnegative linear programming problem (NNLP), which is the special case of (P) in which A ≥ 0, b > 0, and c > 0. NNLPs model a large number of linear programming applications such as determining an optimal driving path for navigation systems using traffic data, updating flight status due to weather conditions, and detecting errors in DNA sequences. NNLPs have the following two important properties:
1) the origin is feasible, and
2) the feasible region, and hence the objective function, is bounded if and only if no column of A is a zero vector.
It follows that the boundedness of an NNLP objective function is easily verifiable without computation.
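As a minimal sketch of the boundedness test, the zero-column check can be written directly. The function name `nnlp_is_bounded` and the use of NumPy are our own illustrative choices, not part of the paper.

```python
import numpy as np

def nnlp_is_bounded(A):
    """An NNLP max c^T x s.t. Ax <= b, x >= 0 with A >= 0, b > 0, c > 0
    is bounded if and only if no column of A is entirely zero, i.e.,
    every variable appears in at least one constraint."""
    A = np.asarray(A, dtype=float)
    # Since A >= 0, a column sum is positive exactly when the column is nonzero.
    return bool(np.all(A.sum(axis=0) > 0))
```

Because A is nonnegative, a single pass over the column sums suffices; no optimization is performed.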
We propose here an active-set method to solve nonnegative linear programming problems faster than current approaches. Our method divides the constraints (2) of problem (P) into operative and inoperative constraints at each active-set iteration r. Operative constraints are those included in the current relaxed subproblem P^r of (P) at iteration r, while the inoperative ones are the constraints of (P) not included in P^r. In our active-set method we iteratively solve relaxations P^r of (P), adding one or more violated inoperative constraints from (2) to P^r, until the solution to P^r is a solution to (P).
Active-set methods have been studied by Stone, Thompson et al., Adler et al., Zeleny, Myers and Shih, Curet, and Bixby et al., among others. The term ‘‘constraint selection technique’’ was introduced in this literature, and the approaches of Adler et al. and Zeleny illustrate two distinct classes of active-set methods. When the constraint selection metric for choosing violated inoperative constraints to be added to P^r does not depend on the solution x^r, the associated active-set method is called a prior method. On the other hand, if the constraint selection at P^r does depend on x^r, it is called a posterior method. Adler et al. developed a prior method in which a violated inoperative constraint was chosen randomly at each active-set iteration. Zeleny proposed a posterior method in which the inoperative constraint most violated by x^r was added. This method is a classical cutting-plane generation technique and is called VIOL here. VIOL is also used as a pricing rule in delayed column generation, as an approach for adding multiple constraints in interior-point cutting-plane methods, and as part of the sifting algorithm for column generation.
More recently, Corley et al. developed a geometric prior active-set method for (P) called the cosine simplex method. At each active-set iteration a single violated constraint maximizing the cosine of the angle between its gradient a_i and the objective gradient c is added to the operative set for P^r. This cosine constraint selection criterion is equivalent to the “most-obtuse-angle” pivot rule for the modified simplex algorithm introduced by Pan, where it was applied to the dual problem of (P). Junior and Lins also utilized a cosine criterion to choose an initial basis for the simplex algorithm on (P), resulting in fewer simplex iterations.
The references most directly related to the current work involve the present authors. Corley and Rosenberger proposed for NNLPs the constraint selection metric maximizing

RAD_i = (a_i^T c) / b_i.   (4)

RAD is a geometric constraint selection criterion for determining the constraints most likely to be binding at optimality. In the associated active-set algorithm, all constraints of (2) are initially ordered by decreasing value of RAD prior to solving an initial bounded problem by the primal simplex method. The dual simplex method is then used when violated inoperative constraints are added according to their RAD values. In computational experiments, RAD proved superior to existing linear programming methods for NNLPs. A similar constraint selection metric GRAD was developed to solve general linear programs (LPs). Finally, a dynamic active-set method was developed for adding a varying number of violated constraints at iteration r based on the progress made at iteration r − 1. It was incorporated into both RAD and GRAD to improve their computational results.
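The RAD ordering can be sketched in a few lines. This assumes the reconstructed form RAD_i = (a_i^T c)/b_i of Equation (4); the function name `rad_order` is our own.

```python
import numpy as np

def rad_order(A, b, c):
    """Return constraint indices sorted by decreasing RAD_i = (a_i . c)/b_i,
    the prior constraint-selection metric of Eq. (4) (reconstructed form)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    rad = (A @ c) / b          # one score per constraint row
    return np.argsort(-rad)    # indices from largest to smallest RAD
```

Constraints near the top of this ordering are the ones deemed most likely to be binding at optimality and are therefore added to the operative set first.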
In this paper a posterior constraint selection metric NVRAD is developed for NNLPs. NVRAD may be considered a posterior version of RAD, and it is implemented here in the dynamic framework discussed above. For the active-set method NVRAD, we provide extensive computational experiments showing that it solves NNLPs faster than other computational methods, including RAD and various versions of the existing posterior active-set method VIOL described above.
More specifically, in Section 2 we state the posterior constraint selection metric NVRAD and provide a geometric interpretation. A dynamic version of NVRAD for NNLPs is then developed, and NVRAD is extended to a hybrid approach HYBR, in which RAD and NVRAD are alternated. In Section 3, computational results are presented: NVRAD is shown to be significantly faster for NNLPs than all CPLEX solvers, as well as faster than VIOL and RAD, while HYBR appears slightly faster than NVRAD. In Section 4, we present conclusions. Throughout the paper, a constraint selection metric and the associated active-set algorithm are identified by the same name (RAD or NVRAD, for example); the intended use of the term should be clear from context. The active-set algorithm itself is called a COST, i.e., a “Constraint Optimal Selection Technique”.
2.1. Definition and Interpretation
Let x^r be the current optimal solution for some P^r, with perpendicular distance d_i^r = (a_i^T x^r − b_i)/||a_i|| to a violated hyperplane a_i^T x = b_i. It follows that

d_i^r / (b_i / ||a_i||) = (a_i^T x^r − b_i) / b_i.   (5)

Note that b_i/||a_i|| is the perpendicular distance of the hyperplane a_i^T x = b_i from the origin. Consequently, choosing a violated hyperplane with a maximum value on the right side of (5) can be interpreted, from the left side of (5), as selecting a violated constraint giving the deepest cut relative to the depth of the hyperplane itself, based on information derived from x^r. But the right side of (4) is inversely proportional to the distance from the origin to the hyperplane a_i^T x = b_i along the vector c, i.e., along the direction of steepest ascent for the objective function (1) of the NNLP. For this reason, the inoperative constraint maximizing (4) is deemed the best constraint to add to P^r based on prior information. We combine this prior information (4) with the posterior information on the right side of (5) by multiplying them to give

NVRAD_i^r = [(a_i^T c)/b_i] · [(a_i^T x^r − b_i)/b_i] = a_i^T c (a_i^T x^r − b_i) / b_i^2.   (6)

Equation (6) thus incorporates global information from RAD with local information at x^r, and our posterior active-set method adds to P^r an inoperative constraint i for which

NVRAD_i^r is maximal over the inoperative constraints violated by x^r.   (7)
We mention that the term b_i^2 in the denominator of (7) works better than simply b_i. This fact was established in computational results not reported here but obtained to support the above derivation.
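The combined metric can be computed vectorially. This sketch assumes the reconstructed form of Equation (6); only constraints actually violated by x (positive violation) receive positive scores, so the maximizer should be taken over those.

```python
import numpy as np

def nvrad_scores(A, b, c, x):
    """Posterior metric of Eq. (6) (reconstructed): RAD_i = (a_i . c)/b_i
    times the normalized violation (a_i . x - b_i)/b_i. Scores are
    meaningful only for constraints with a_i . x > b_i."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    x = np.asarray(x, dtype=float)
    violation = A @ x - b                 # > 0 exactly for violated rows
    return (A @ c) * violation / b**2
```

A typical use selects `np.argmax` of these scores restricted to the violated, inoperative rows.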
2.2. The Dynamic Active-Set Algorithm
A dynamic version of RAD was developed by the authors in earlier work. A similar approach is now used for NVRAD. Let x^r be the optimal extreme point for P^r, with θ^r the angle between c and x^r. Then

cos θ^r = c^T x^r / (||c|| ||x^r||)   (8)

is nonnegative since P^r is also an NNLP. We would like to decrease θ^r at each active-set iteration so that x^r points more nearly in the direction of the gradient c of the objective function in (1). We adapt our dynamic heuristic that adds a varying number of violated inoperative constraints to P^r according to the progress made in reducing the angle θ^r.
As our ideal goal, let θ^r = 0 in (8) to give

c^T x^r = ||c|| ||x^r||.   (9)

When θ^r = 0, it follows from (9) that

1 − c^T x^r / (||c|| ||x^r||) = 0.   (10)

Letting |·| denote absolute value, define

λ^r = | 1 − c^T x^r / (||c|| ||x^r||) |   (11)

as a measure of the performance of our active-set method at iteration r. The value of λ^r decreases as θ^r decreases. Such a decrease usually occurs as x^r approaches an optimal extreme point of (P) itself.
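The performance measure can be computed directly from c and the current extreme point. This assumes the reconstructed form λ = |1 − cos θ| of Equation (11); the function name is ours.

```python
import numpy as np

def angle_measure(c, x):
    """lam = |1 - cos(theta)|, where theta is the angle between c and x
    (Eq. (11), reconstructed form). The value is 0 when x points exactly
    along c and grows as the angle widens."""
    c = np.asarray(c, dtype=float)
    x = np.asarray(x, dtype=float)
    cos_theta = (c @ x) / (np.linalg.norm(c) * np.linalg.norm(x))
    return abs(1.0 - cos_theta)
```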
The dynamic COST NVRAD for solving NNLPs is described as follows. Constraints are initially ordered by the RAD constraint selection metric (4). To construct P^0 we choose constraints from (2) in descending order of RAD (since there is as yet no x^r) until the matrix A^0 of P^0 has no zero column, i.e., until each variable x_j has a positive coefficient in some selected constraint. These selected constraints become the constraints of P^0, and we say that the variables are covered by the inequality constraints of the initial problem. P^0 is then solved by the primal simplex method to obtain an initial solution x^0, and λ^0 is calculated. At iteration r, let V^r be the number of constraints of problem (P) violated by x^r. Then at iteration r + 1, x^{r+1} and λ^{r+1} are calculated, and the percentage of improvement made in reducing the angle between the vectors c and x^{r+1} at iteration r + 1 is measured by

G^{r+1} = (λ^r − λ^{r+1}) / λ^r.   (12)

With ⌊·⌋ denoting the greatest integer function, let

K^r = ⌊ e^{r G^r} ⌋,   (13)

where 0 ≤ G^r ≤ 1. The value of K^r is an upper bound on the possible number of violated constraints that can be added at active-set iteration r. The actual number added is min{K^r, V^r}. The bound K^r increases at every iteration since the optimal value of the objective function for P^r is usually less affected by a constraint with a small value of (6) than by one with a large value. Hence, more violated constraints should be added as r increases, and Equation (13) represents one approach for doing so. If G^r = 1/e (the reciprocal of Euler’s number), for example, then K^r = ⌊e^{r/e}⌋. If G^r = 1, then K^r = ⌊e^r⌋; in other words, a much larger number, and perhaps all, of the remaining violated constraints could be added. NVRAD stops when V^r = 0, i.e., when there are no more violated constraints.
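The schedule above can be sketched as follows. This assumes the reconstructed form K^r = ⌊e^{rG}⌋ of Equation (13); the function names are our own.

```python
import math

def k_bound(r, g):
    """K^r = floor(e^{r g}): upper bound on the number of violated
    constraints to add at active-set iteration r, where g in [0, 1] is the
    improvement measure of Eq. (12) (reconstructed form)."""
    return math.floor(math.exp(r * g))

def num_to_add(r, g, num_violated):
    # The actual number of constraints added is min(K^r, V^r).
    return min(k_bound(r, g), num_violated)
```

Note that the bound grows quickly with r, so late iterations may absorb all remaining violated constraints at once.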
The pseudocode for the dynamic NVRAD algorithm is summarized as follows.

Step 1 (Identify constraints to initially bound the problem): add constraints of (2) to P^0 in decreasing order of RAD until every variable is covered, i.e., until A^0 has no zero column.

Step 2 (Solve the initial problem): using the primal simplex method, obtain an optimal x^0 for P^0.

Step 3 (Iterate until an answer to problem (P) is found): at iteration r, determine the V^r constraints of (P) violated by x^r. If V^r = 0, stop; x^r is an optimal solution to (P). Otherwise, compute λ^r, G^r, and K^r from (11) - (13); add the min{K^r, V^r} violated inoperative constraints with the largest values of (6) to form P^{r+1}; solve P^{r+1} by the dual simplex method to obtain x^{r+1}; and repeat Step 3.
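The three steps above can be sketched end to end. This is a simplified illustration, not the authors' implementation: it re-solves each relaxed problem from scratch with SciPy's HiGHS solver rather than warm-starting a dual simplex, and it uses the reconstructed forms of Equations (4), (6), and (11)-(13); all function names are our own.

```python
import numpy as np
from scipy.optimize import linprog

def solve_relaxed(A, b, c, rows):
    """Maximize c^T x subject to the selected rows of Ax <= b and x >= 0."""
    res = linprog(-c, A_ub=A[rows], b_ub=b[rows], bounds=(0, None),
                  method="highs")
    return res.x

def dynamic_nvrad(A, b, c, tol=1e-9):
    """Sketch of the dynamic COST NVRAD for an NNLP (A >= 0, b > 0, c > 0)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    m, n = A.shape
    order = np.argsort(-(A @ c) / b)            # decreasing RAD, Eq. (4)
    # Step 1: pick constraints in RAD order until every variable is covered.
    rows, covered = [], np.zeros(n, dtype=bool)
    for i in order:
        if covered.all():
            break
        if ((A[i] > 0) & ~covered).any():
            rows.append(int(i))
            covered |= A[i] > 0
    # Step 2: solve the initial relaxed problem P^0.
    x = solve_relaxed(A, b, c, rows)
    lam_prev, r = None, 0
    # Step 3: add violated constraints by NVRAD score until none remain.
    while True:
        viol = A @ x - b
        cand = [i for i in range(m) if i not in rows and viol[i] > tol]
        if not cand:
            return x                            # x solves the full problem
        r += 1
        lam = abs(1 - (c @ x) / (np.linalg.norm(c) * np.linalg.norm(x)))
        g = 1.0 if lam_prev in (None, 0.0) else max(0.0, (lam_prev - lam) / lam_prev)
        lam_prev = lam
        k = max(1, min(int(np.exp(r * g)), len(cand)))  # schedule, Eq. (13)
        scores = (A[cand] @ c) * viol[cand] / b[cand] ** 2  # NVRAD, Eq. (6)
        rows += [cand[j] for j in np.argsort(-scores)[:k]]
        x = solve_relaxed(A, b, c, rows)
```

The loop terminates because at least one new constraint is added per iteration, and the returned point violates no constraint of the full problem.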
2.3. A Hybrid Approach
A reasonable conjecture is that combining the global information of RAD with the local information of NVRAD might be advantageous. Therefore, we also consider an approach that alternates the dynamic RAD and NVRAD metrics in a single algorithm at even and odd iterations, respectively, to yield a hybrid COST designated here as HYBR. The results obtained for HYBR suggest that combining posterior and prior COSTs may be superior to either a prior or a posterior approach by itself.
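The alternation is simple to state in code. This sketch assumes the reconstructed forms of Equations (4) and (6) and our own even/odd convention for which metric starts.

```python
import numpy as np

def hybr_scores(r, A_cand, b_cand, c, x):
    """HYBR selection scores for candidate (violated, inoperative) rows:
    the prior metric RAD on even active-set iterations and the posterior
    metric NVRAD on odd ones."""
    A_cand = np.asarray(A_cand, dtype=float)
    b_cand = np.asarray(b_cand, dtype=float)
    c = np.asarray(c, dtype=float)
    x = np.asarray(x, dtype=float)
    if r % 2 == 0:
        return (A_cand @ c) / b_cand                         # RAD, Eq. (4)
    return (A_cand @ c) * (A_cand @ x - b_cand) / b_cand**2  # NVRAD, Eq. (6)
```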
3. Computational Experiments
Dynamic NVRAD is compared in this section with the CPLEX primal simplex, dual simplex, and barrier methods. It is also compared with the prior active-set method RAD and with the standard posterior active-set method VIOL, as well as with a normalized version of VIOL, called NVIOL, that proved superior to VIOL in computational results not reported here. Both dynamic and multi-bound, multi-cut versions of NVRAD were compared with dynamic and multi-bound, multi-cut versions of the other active-set methods for insight into the individual merits of the dynamic and posterior approaches.
3.1. Problem Instances
Five sets of NNLPs from earlier work are used to evaluate the performance of the dynamic posterior COST NVRAD. Each of Sets 1 - 4 contains 105 randomly generated NNLPs at 21 different density levels ranging from 0.005 to 1, and four ratios of (m constraints)/(n variables) ranging from 200 to 1. The ratios for Sets 1 - 4 are 200, 20, 2, and 1, respectively. For each of Sets 1 - 4, there are five problem instances per combination of density level and ratio. In these problem sets, randomly generated real numbers between 1 and 5, 1 and 10, and 1 and 10 were assigned to the elements of c, A, and b, respectively. To prevent any constraint of (2) from having the form of an upper bound on some variable, each constraint is required to have at least two nonzero coefficients a_ij. Next, problem Set 5 is a set of large-scale NNLPs with 5000 variables and 1,000,000 constraints. In this set, real numbers between 1 and 100 are assigned to the problem data, with densities ranging from 0.0004 to 0.06. Again, each constraint is required to have at least two nonzero coefficients a_ij.
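A generator in the style of Sets 1 - 4 can be sketched as follows. The function name, the seed handling, and the exact assignment of ranges to c, A, and b are our own assumptions based on the description above, not the authors' generator.

```python
import numpy as np

def random_nnlp(m, n, density, lo=1.0, hi=10.0, seed=0):
    """Random NNLP instance: uniform positive entries, a target density
    for A, and at least two nonzeros per row so that no constraint is a
    simple upper bound on a single variable."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n))
    k = max(2, int(round(density * n)))          # nonzeros per row, >= 2
    for i in range(m):
        cols = rng.choice(n, size=k, replace=False)
        A[i, cols] = rng.uniform(lo, hi, size=k)
    b = rng.uniform(lo, hi, size=m)              # b in [1, 10]
    c = rng.uniform(1.0, 5.0, size=n)            # c in [1, 5]
    return A, b, c
```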
3.2. CPLEX Preprocessing
Two CPLEX preprocessing parameters affecting the solution of linear programs are discussed here: the preprocessing presolve indicator (PREIND) and the preprocessing dual setting (PREDUAL). The presolver is enabled with the parameter setting PREIND = 1 (ON), which reduces both the number of variables and the number of constraints before any algorithm is applied; it is disabled by setting PREIND = 0 (OFF). The second preprocessing parameter affecting computational speed is PREDUAL: with PREDUAL = 0 (ON), CPLEX automatically selects whether to solve the dual of the original LP, while PREDUAL = −1 (OFF) prevents it from doing so.
Both PREIND and PREDUAL were turned off when CPLEX was used as part of NVRAD or HYBR. However, all computational results reported here for any individual CPLEX solver had PREIND and PREDUAL turned on. In other words, our NVRAD was compared to CPLEX at its fastest setting. CPLEX would choose automatically whether to solve the primal or the dual, whichever seemed best. Moreover, preprocessing would substantially reduce the size of any problem by removing appropriate rows or columns of the constraint matrix before applying the primal simplex, dual simplex, or interior-point barrier method. In fact, much of the speed of the CPLEX solvers is due to their proprietary preprocessing routines.
3.3. Computational Results
The experiments were performed on an Intel® Core™ 2 Duo X9650 3.00 GHz processor with a Linux 64-bit operating system and 8 GB of RAM. The COST NVRAD uses the IBM CPLEX 12.5 callable library to solve P^0 by the primal simplex method and then each P^r by the dual simplex method as selected constraints are added. The CPU times shown in the tables below represent the average computation time of five problem instances at each density level.
The results of Table 1 for Set 1 compare NVRAD to VIOL, as well as to both dynamic and non-dynamic versions of NVIOL. In addition, the dynamic NVRAD described in Section 2.2 was compared to a non-dynamic NVRAD that applies the earlier multi-cut, multi-bound technique. The dynamic version was significantly faster. The efficacy of the dynamic approach was further demonstrated by the fact that on higher-density problems a dynamic version of NVIOL was up to 21 times faster than the multi-cut, multi-bound NVIOL. Overall, dynamic NVRAD was faster than VIOL and NVIOL on every problem instance.
In Table 2, the CPU times for the test problems solved by dynamic NVRAD are compared with those for RAD. In problem Set 1, RAD is slightly faster than NVRAD over all densities, averaging 3.98 seconds compared to 4.55 seconds. However, in problem Set 2, the average computation times for RAD and dynamic
Table 1. CPU times for multi-cut, multi-bound and dynamic active-set approaches on problem Set 1 for random NNLPs with 1000 variables and 200,000 constraints.
++Average of 5 instances at each density. −− Used CPLEX presolve = OFF and predual = OFF.
NVRAD over all densities are 19.07 and 16.86 seconds, respectively. For Set 3, dynamic NVRAD is superior to RAD, averaging 38.91 seconds compared to 41.87 seconds. Similarly, for Set 4 the averages are 41.87 seconds for NVRAD as compared to 46.98 seconds for RAD. Thus the results of Table 2 affirm NVRAD’s ability to add appropriate constraints at each iteration. The results for Set 1 simply reflect how well the prior COST RAD performs when m is very much larger than n.
Table 3 presents the CPU times for problem Sets 1 - 4 solved by dynamic versions of both RAD and HYBR. In Table 3, HYBR is superior to RAD. Moreover, a comparison of Table 3 with Table 2 shows that HYBR is also slightly better than dynamic NVRAD on these problem sets. These observations suggest that combining the global information of RAD with the local information of NVRAD gives better performance than either RAD or NVRAD by itself. We note further that HYBR can probably be improved. However, it is not our goal to seek the optimal combination of RAD and NVRAD in HYBR, since an optimal combination would likely differ with factors such as density and the ratio m/n.
Table 4, taken from earlier work, provides a comparison of the posterior COST NVRAD with the standard CPLEX solvers. Comparing the results of Table 4 for the CPLEX solvers with the results for NVRAD in Table 2 shows that NVRAD was significantly faster across virtually all ratios and all densities. For example, the primal simplex was the most robust CPLEX solver, but on average across all densities it took approximately 3 to 14 times more CPU time than NVRAD over the different ratios. For the dual simplex, the average
Table 2. CPU times for multi-cut, multi-bound and dynamic active-set approaches on problem Sets 1 - 4 for random NNLPs.
++Average of 5 instances of LP at each density. −−Used CPLEX presolve = OFF and predual = OFF.
CPU across all densities was approximately 15 to 50 times greater than NVRAD’s over the different ratios. However, the CPLEX barrier method was slightly faster than NVRAD in some problem instances with densities less than 0.02. On the other hand, once the density reached 0.08, NVRAD was already more than ten times faster than the barrier solver. Furthermore, note that average CPU times greater than 3000 seconds (50 minutes) at any density were not reported in Table 4. This situation occurred for the CPLEX barrier solver for the ratios 1, 2, 20, and 200 with densities of at least 0.3, 0.4, 0.5, and 0.75, respectively.
Finally, for the large-scale, low-density test problems of Set 5, with 5000 variables and 1,000,000 constraints,
Table 3. CPU times for dynamic HYBR and dynamic RAD on problem Sets 1 - 4 for random NNLPs.
++Average of 5 instances of LP at each density. −−Used CPLEX presolve = OFF and predual = OFF.
Table 5 compares dynamic NVRAD to multi-cut and multi-bound RAD, VIOL, NVIOL, and NVRAD, as well as to the CPLEX primal simplex, dual simplex, and barrier solvers. Only the prior COST RAD was competitive. NVRAD averaged 63.45 seconds overall as compared to 71.79 for RAD. It should be noted that the highest density used in problem Set 5 was 0.0600 since the CPLEX solvers could not solve denser problems of such magnitude in a reasonable amount of time. Average CPU times greater than 2400 seconds (40 minutes) at any density were not reported in Table 5. This situation occurred beginning at some individual threshold density level for each CPLEX solver.
Table 4. CPU times for the CPLEX solvers on problem Sets 1 - 4 for random NNLPs.
+CPLEX presolve = ON and predual = ON. ++Average of 5 instances at each density. b Runs with CPU times > 3000 s are not reported.
Table 5. CPU times for NVRAD versus RAD, VIOL, NVIOL, and the CPLEX solvers on problem Set 5 for random NNLPs with 5000 variables and 1,000,000 constraints.
++Average of 5 instances at each density. b Runs with CPU times > 2400 s are not reported. −−Used CPLEX presolve = OFF and predual = OFF. +Used CPLEX presolve = ON and predual = ON.
4. Conclusions
An efficient posterior COST called NVRAD was developed here for NNLPs to utilize both prior global information and posterior local information. The associated constraint selection metric is a heuristic, so a geometric interpretation was presented to offer insight into its performance. NVRAD’s inherent active-set efficiency was enhanced by a dynamic approach that varies the number of constraints added at each iteration. In addition to NVRAD, a dynamic active-set approach HYBR was also proposed; HYBR alternates between the posterior method NVRAD and the prior method RAD. To check their performance, both NVRAD and HYBR were used to solve five sets of large-scale NNLPs. Dynamic NVRAD outperformed the previously developed COST RAD, as well as the standard posterior cutting-plane method VIOL, and significantly outperformed the CPLEX primal simplex, dual simplex, and barrier solvers. Moreover, HYBR appears slightly faster than either NVRAD or RAD. The results of this paper provide further evidence that active-set methods may be the fastest approach for solving linear programming problems.
 Thompson, G.L., Tonge, F.M. and Zionts, S. (1966) Techniques for Removing Nonbinding Constraints and Extraneous Variables from Linear Programming Problems. Management Science, 12, 588-608.
 Adler, I., Karp, R. and Shamir, R. (1986) A Family of Simplex Variants Solving an m × d Linear Program in Expected Number of Pivot Steps Depending on d Only. Mathematics of Operations Research, 11, 570-590.
 Bixby, R.E., Gregory, J.W., Lustig, I.J., Marsten, R.E. and Shanno, D.F. (1992) Very Large-Scale Linear Programming: A Case Study in Combining Interior Point and Simplex Methods. Operations Research, 40, 885-897.
 Barnhart, C., Johnson, E., Nemhauser, G., Savelsbergh, M. and Vance, P. (1998) Branch-and-Price: Column Generation for Solving Huge Integer Programs. Operations Research, 46, 316-329.
 Corley, H.W., Rosenberger, J., Yeh, W.-C. and Sung, T.K. (2006) The Cosine Simplex Algorithm. The International Journal of Advanced Manufacturing Technology, 27, 1047-1050.
 Saito, G., Corley, H.W., Rosenberger, J.M., Sung, T.-K. and Noroziroshan, A. (2015) Constraint Optimal Selection Techniques (COSTs) for Nonnegative Linear Programming Problems. Applied Mathematics and Computation, 251, 586-598.
 Saito, G., Corley, H.W. and Rosenberger, J. (2012) Constraint Optimal Selection Techniques (COSTs) for Linear Programming. American Journal of Operations Research, 3, 53-64.
 Noroziroshan, A., Corley, H.W. and Rosenberger, J. (2015) A Dynamic Active-Set Method for Linear Programming. American Journal of Operations Research, 5, 526-535.