More recently, Corley et al. [14] developed a geometric prior active-set method for $P$ called the cosine simplex method. At each active-set iteration $r$, a single violated constraint maximizing the cosine of the angle between ${a}_{i}$ and $c$ is added to the operative set for ${P}_{r}$. This cosine constraint selection criterion is equivalent to the “most-obtuse-angle” pivot rule for the modified simplex algorithm introduced by Pan [15], where it was applied to the dual problem for $P$. Junior and Lins [16] also utilized a cosine criterion to choose an initial basis for the simplex algorithm on $P$, resulting in fewer simplex iterations.

References [17] [18] [19] [20], which involve the present authors, are most directly related to the current work. In [17], Corley and Rosenberger proposed the constraint selection metric maximizing

$RAD\left({a}_{i},{b}_{i},c\right)=\frac{{a}_{i}c}{{b}_{i}}$ (4)

for NNLPs. RAD is a geometric constraint selection criterion for determining the constraints most likely to be binding at optimality. In the associated active-set algorithm of [18] , all constraints of (2) are initially ordered by decreasing value of RAD prior to solving an initial bounded problem ${P}_{0}$ by the primal simplex. The dual simplex is then used when violated inoperative constraints are added according to their RAD value. In computational experiments, RAD proved superior to existing linear programming methods for NNLPs. A similar constraint selection metric GRAD was developed in [19] to solve general linear programs (LPs). Finally, in [20] a dynamic active-set method was developed for adding a varying number of violated constraints at ${P}_{r}$ based on progress at ${P}_{r-1}.$ It was incorporated into both RAD and GRAD to improve the computational results of [18] and [19] .
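To make the RAD ordering concrete, the following sketch computes (4) for each constraint and sorts the indices by decreasing RAD. The dense-array representation and the function name `rad_order` are our own illustration, not from [18].

```python
import numpy as np

def rad_order(A, b, c):
    """Order constraint indices by decreasing RAD(a_i, b_i, c) = (a_i . c) / b_i.

    A is an (m, n) array of nonnegative constraint rows, b an (m,) positive
    right-hand side, and c an (n,) positive objective vector, as in an NNLP.
    """
    rad = (A @ c) / b            # RAD value of each constraint, eq. (4)
    return np.argsort(-rad)      # indices, largest RAD first

# Tiny illustration with two constraints in R^2.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])
order = rad_order(A, b, c)       # RAD values are 3/4 and 4/6
```

Here constraint 0 (RAD = 0.75) precedes constraint 1 (RAD ≈ 0.667), so the active-set method would consider it first.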

1.3. Overview

In this paper a posterior constraint selection metric NVRAD is developed for NNLPs. NVRAD may be considered a posterior version of RAD and is implemented here in the dynamic framework of [20]. For the active-set method NVRAD, we provide extensive computational experiments showing that it solves NNLPs faster than other computational methods, including RAD and various versions of the existing posterior active-set method VIOL described above.

More specifically, in Section 2 we state the posterior constraint selection metric NVRAD, provide a geometric interpretation, and develop a dynamic version of NVRAD for NNLPs. Also in Section 2, we extend NVRAD to a hybrid approach HYBR, in which RAD and NVRAD are alternated. In Section 3, computational results are presented: NVRAD is shown to be significantly faster for NNLPs than all CPLEX solvers, as well as faster than VIOL and RAD, while HYBR appears slightly faster than NVRAD. In Section 4, we present conclusions. Throughout the paper, both a constraint selection metric and the associated active-set algorithm are identified by the same name (RAD or NVRAD, for example); the intended use of the term should be clear from context. The active-set algorithm itself is called a COST, i.e., a “Constraint Optimal Selection Technique”.

2. NVRAD

2.1. Definition and Interpretation

Let ${x}_{r}^{*}$ be the current optimal solution for some ${P}_{r}$ with perpendicular distance $d=\frac{{a}_{i}{x}_{r}^{*}-{b}_{i}}{\Vert {a}_{i}\Vert}$ to a violated hyperplane ${a}_{i}x={b}_{i}.$ It follows that

$\frac{d}{{b}_{i}/\Vert {a}_{i}\Vert}=\frac{{a}_{i}{x}_{r}^{*}-{b}_{i}}{{b}_{i}}.$ (5)

Note that $\frac{{b}_{i}}{\Vert {a}_{i}\Vert}$ is the perpendicular distance of the hyperplane ${a}_{i}x={b}_{i}$ from the origin. Consequently, choosing a violated hyperplane ${a}_{i}x={b}_{i}$ with a maximum value of $\frac{{a}_{i}{x}_{r}^{*}-{b}_{i}}{{b}_{i}}$ on the right side of (5) can be interpreted from the left side of (5) as selecting the violated constraint giving the deepest cut based on information derived from ${x}_{r}^{*}$. But from [18], the expression $\frac{{a}_{i}c}{{b}_{i}}$ on the right side of (4) is, up to the constant factor $\Vert c\Vert$, the reciprocal of the distance from the origin to the hyperplane ${a}_{i}x={b}_{i}$ along the vector $c$, i.e., the direction of steepest ascent for the objective function (1) of the NNLP $P$; maximizing RAD thus selects the constraint whose hyperplane is encountered first along $c$. For this reason, in [18] the inoperative constraint maximizing $RAD\left({a}_{i},{b}_{i},c\right)$ is deemed the best constraint to add to ${P}_{r}$ based on prior information. We combine this prior information (4) with the posterior information on the right side of (5) by multiplying them to give

$NVRAD\left({a}_{i},{b}_{i},c,{x}_{r}^{*}\right)=\frac{{a}_{i}c}{{b}_{i}^{2}}\left({a}_{i}{x}_{r}^{*}-{b}_{i}\right).$ (6)

Equation (6) thus incorporates global information from RAD with local information at ${x}_{r}^{*}$ , and our posterior active-set method adds to ${P}_{r}$ an inoperative constraint ${i}^{*}$ for which

${i}^{*}\in \underset{i\notin \text{OPERATIVE}}{\mathrm{arg}\mathrm{max}}\left\{\frac{{a}_{i}c}{{b}_{i}^{2}}\left({a}_{i}{x}_{r}^{*}-{b}_{i}\right):{a}_{i}{x}_{r}^{*}>{b}_{i}\right\}.$ (7)

We note that the term ${b}_{i}^{2}$ in the denominator of (7) works better than simply ${b}_{i}$, a fact established in computational results not reported here but obtained to support the above derivation.
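The selection rule (7) can be sketched directly from (6). The function name `nvrad_select` and the dense-array representation are our own illustration; the metric itself is exactly (6), restricted to violated, inoperative constraints.

```python
import numpy as np

def nvrad_select(A, b, c, x_star, operative):
    """Rank violated, inoperative constraints by decreasing NVRAD, eqs. (6)-(7)."""
    slack = A @ x_star - b            # slack > 0 means constraint i is violated
    nvrad = (A @ c) / b**2 * slack    # metric (6)
    mask = slack > 0
    mask[list(operative)] = False     # restrict to inoperative constraints
    candidates = np.flatnonzero(mask)
    return candidates[np.argsort(-nvrad[candidates])]

# Three constraints in R^2; constraint 0 is already operative.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 2.0, 3.0])
c = np.array([1.0, 1.0])
chosen = nvrad_select(A, b, c, np.array([3.0, 3.0]), {0})
```

With ${x}_{r}^{*}=(3,3)$, the NVRAD values of constraints 1 and 2 are $1/4$ and $2/3$, so constraint 2 is selected first.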

2.2. The Dynamic Active-Set Algorithm

A dynamic version of RAD was developed by the authors in [20] . A similar approach is now used for NVRAD. Let ${x}_{r}^{*}$ be the optimal extreme point for ${P}_{r}$ , with ${\theta}_{r}$ the angle between ${x}_{r}^{*}$ and $c.$ Then

$\mathrm{cos}\,{\theta}_{r}=\frac{{c}^{\text{T}}{x}_{r}^{*}}{\Vert {x}_{r}^{*}\Vert \Vert c\Vert},$ (8)

is nonnegative since ${P}_{r}$ is also an NNLP. We would like to decrease ${\theta}_{r}$ at each active-set iteration so that ${x}_{r}^{*}$ points more nearly in the direction of the gradient $c$ of the objective function in (1). We adapt our dynamic heuristic of [20], which adds a varying number of violated inoperative constraints to ${P}_{r}$ according to the progress made at ${P}_{r-1}$ in reducing the angle ${\theta}_{r-1}.$

As our ideal goal, let ${\theta}_{r}=0$ in (8) to give

${c}^{\text{T}}{x}_{r}^{*}=\Vert {x}_{r}^{*}\Vert \Vert c\Vert .$ (9)

When ${\theta}_{r}=0$ , it follows from (9) that

$\frac{{\displaystyle {\sum}_{j=1}^{n}{c}_{j}{x}_{rj}^{*}}}{\sqrt{{\displaystyle {\sum}_{j=1}^{n}{c}_{j}^{2}}}}=\sqrt{{\displaystyle {\sum}_{j=1}^{n}{\left({x}_{rj}^{*}\right)}^{2}}}.$ (10)

Letting $|\cdot |$ denote absolute value, define

${\delta}_{r}\left({x}_{r}^{*}\right)=\left|\frac{{{\displaystyle \sum}}_{j=1}^{n}{c}_{j}{x}_{rj}^{*}}{\sqrt{{{\displaystyle \sum}}_{j=1}^{n}{c}_{j}^{2}}}-\sqrt{{{\displaystyle \sum}}_{j=1}^{n}{\left({x}_{rj}^{*}\right)}^{2}}\right|$ (11)

as a measure of the performance of our active-set method at iteration $r.$ The value of ${\delta}_{r}\left({x}_{r}^{*}\right)$ decreases as ${\theta}_{r}$ decreases. Such a decrease usually occurs as ${x}_{r}^{*}$ approaches an optimal extreme point of $P$ itself.
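The measure (11) is a one-line computation; the helper name `delta` is our own. It is zero exactly when ${x}_{r}^{*}$ is parallel to $c$, i.e., when ${\theta}_{r}=0$ in (8)-(10).

```python
import numpy as np

def delta(x, c):
    """Progress measure (11): |c.x/||c|| - ||x|||.

    Equals zero exactly when x points in the direction of c (theta_r = 0),
    and grows as the angle between x and c opens up.
    """
    return abs(c @ x / np.linalg.norm(c) - np.linalg.norm(x))

c = np.array([3.0, 4.0])
d_parallel = delta(np.array([6.0, 8.0]), c)   # x parallel to c
d_axis = delta(np.array([1.0, 0.0]), c)       # x along the first axis
```

Here `d_parallel` is 0, while `d_axis` is $|3/5 - 1| = 0.4$, reflecting the nonzero angle.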

The dynamic COST NVRAD for solving NNLPs is described as follows. Constraints are initially ordered by the RAD constraint selection metric (4), since no ${x}_{r}^{*}$ is yet available. To construct ${P}_{0}$, we choose constraints from (2) in descending order of RAD until the constraint matrix ${A}_{0}$ of ${P}_{0}$ has no zero column, i.e., until each variable ${x}_{j}$ has some ${a}_{ij}>0.$ These selected constraints become the constraints of ${P}_{0}$, and we say that the variables are covered by the inequality constraints of the initial problem. ${P}_{0}$ is then solved by the primal simplex to obtain an initial solution ${x}_{0}^{*},$ and ${\delta}_{0}\left({x}_{0}^{*}\right)$ is calculated. At iteration $r$, let ${\gamma}_{r}$ be the number of constraints of problem $P$ violated by ${x}_{r}^{*}$. The values ${\delta}_{r-1}\left({x}_{r-1}^{*}\right)$ and ${\delta}_{r}\left({x}_{r}^{*}\right)$ are calculated at iterations $r-1$ and $r$, and the percentage improvement in reducing the angle between the vectors ${x}_{r}^{*}$ and $c$ at iteration $r$ is measured by

${\omega}_{r}=\mathrm{max}\left\{0,\left(\frac{{\delta}_{r-1}\left({x}_{r-1}^{*}\right)-{\delta}_{r}\left({x}_{r}^{*}\right)}{{\delta}_{r-1}\left({x}_{r-1}^{*}\right)}\right)\right\}\times 100,\ r=1,2,\cdots .$ (12)

With $[\cdot ]$ denoting the greatest integer function, let

$\{\begin{array}{l}{\phi}_{r+1}={\phi}_{r}\times \left(1+\left[{\left(\mathrm{ln}{\omega}_{r}\right)}^{-1}\right]\right),\ r=1,2,\cdots ,\ \text{if}\ {\omega}_{r}>1,\\ {\phi}_{r+1}={\gamma}_{r},\ r=1,2,\cdots ,\ \text{if}\ {\omega}_{r}\le 1,\end{array}$ (13)

where ${\phi}_{1}=200.$ The value of ${\phi}_{r}$ is an upper bound on the number of violated constraints that can be added at active-set iteration $r$; the actual number added is $\mathrm{min}\left\{{\phi}_{r+1},{\gamma}_{r}\right\}.$ The bound ${\phi}_{r}$ is nondecreasing whenever ${\omega}_{r}>1$, and when ${\omega}_{r}\le 1$ it allows all remaining violated constraints to be added. The rationale is that the optimal value of the objective function for ${P}_{r}$ is usually less affected by a constraint with a small value of (6) than by one with a large value, so more violated constraints should be added as $r$ increases; Equation (13) represents one approach for doing so. If ${\omega}_{r}>e$ (Euler’s number), for example, then ${\phi}_{r+1}={\phi}_{r}.$ If ${\omega}_{r}=1.01,$ then ${\phi}_{r+1}=101{\phi}_{r};$ in other words, a much larger number, and perhaps all, of the remaining violated constraints could be added. NVRAD stops when ${\gamma}_{r}=0,$ i.e., when there are no more violated constraints.
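Rule (13) can be sketched as a small function; `next_phi` is our own name, with ${\omega}_{r}$ taken from (12) and $[\cdot]$ implemented as the floor function.

```python
import math

def next_phi(phi_r, omega_r, gamma_r):
    """Update rule (13): cap on the number of violated constraints added next."""
    if omega_r > 1:
        # [.] is the greatest integer (floor) function
        return phi_r * (1 + math.floor(1 / math.log(omega_r)))
    return gamma_r   # little or no progress: allow all gamma_r violated constraints
```

For example, with ${\phi}_{r}=200$: if ${\omega}_{r}=3>e$ the cap stays at 200; if ${\omega}_{r}=1.01$ it jumps to $101\times 200=20200$; and if ${\omega}_{r}\le 1$ it becomes ${\gamma}_{r}$.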

The pseudocode for the dynamic NVRAD algorithm is as follows.

Step 1―Identify constraints to initially bound the problem.

1: ${a}^{*}\leftarrow 0,\ \text{BOUNDING}\leftarrow \varnothing ,\ \text{EXPLORED}\leftarrow \varnothing $

2: while ${a}^{*}\ngtr 0$ do

3: Let ${i}^{*}\in \underset{i\notin \text{EXPLORED}}{\mathrm{arg}\mathrm{max}}\ RAD\left({a}_{i},{b}_{i},c\right).$

4: if $\exists j\ |\ {a}_{j}^{*}=0\ \text{and}\ {a}_{{i}^{*}j}>0$ then

5: $\text{BOUNDING}\leftarrow \text{BOUNDING}\cup \left\{{i}^{*}\right\}$

6: end if

7: ${a}^{*}\leftarrow {a}^{*}+{a}_{{i}^{*}},\ \text{EXPLORED}\leftarrow \text{EXPLORED}\cup \left\{{i}^{*}\right\}$

8: $\text{Optimized}\leftarrow \text{false}$

9: end while

Step 2―Using the primal simplex method, obtain an optimal ${x}_{0}^{*}$ for the initial problem.

$\begin{array}{l}\left({P}_{0}\right)\ \text{Maximize}\ z={c}^{\text{T}}x\\ \text{subject to}\\ {a}_{i}x\le {b}_{i},\ i\in \text{BOUNDING}\\ x\ge 0.\end{array}$

Step 3―Perform the following iterations until an answer to problem $P$ is found.

1: $r\leftarrow 0$

2: while Optimized = false do

3: Calculate ${\delta}_{r}\left({x}_{r}^{*}\right).$

4: if $r\ge 1$ then ${\omega}_{r}=\mathrm{max}\left\{0,\left(\frac{{\delta}_{r-1}\left({x}_{r-1}^{*}\right)-{\delta}_{r}\left({x}_{r}^{*}\right)}{{\delta}_{r-1}\left({x}_{r-1}^{*}\right)}\right)\right\}\times 100$

5: $\text{if}{\omega}_{r}>1\text{then}{\phi}_{r+1}={\phi}_{r}\times \left(1+\left[{\left(\mathrm{ln}{\omega}_{r}\right)}^{-1}\right]\right)$

6: else if ${\omega}_{r}\le 1$ then ${\phi}_{r+1}={\gamma}_{r}$

7: end if

8: else ${\phi}_{r+1}\leftarrow 200$ // ${\phi}_{1}=200$

9: end if

10: if ${a}_{i}{x}_{r}^{*}>{b}_{i}$ for some $i=1,\cdots ,\text{rows}$ then

11: ${\gamma}_{r}\leftarrow \#\left\{i:{a}_{i}{x}_{r}^{*}>{b}_{i},\ i=1,\cdots ,\text{rows}\right\}$

12: Let ${i}^{*}\in \underset{i\notin \text{OPERATIVE}}{\mathrm{arg}\mathrm{max}}\left\{NVRAD\left({a}_{i},{b}_{i},c,{x}_{r}^{*}\right)=\frac{{a}_{i}c}{{b}_{i}^{2}}\left({a}_{i}{x}_{r}^{*}-{b}_{i}\right):{a}_{i}{x}_{r}^{*}>{b}_{i}\right\}.$

13: Add to OPERATIVE the $\mathrm{min}\left\{{\phi}_{r+1},{\gamma}_{r}\right\}$ violated constraints with the largest values of $NVRAD\left({a}_{i},{b}_{i},c,{x}_{r}^{*}\right).$

14: Solve the resulting ${P}_{r+1}$ by the dual simplex method to obtain ${x}_{r+1}^{*}.$

15: $r\leftarrow r+1$

16: Go to 3

17: else $\text{Optimized}\leftarrow \text{true}$ // ${x}_{r}^{*}$ is an optimal solution to $P$.

18: end if

19: end while
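The whole dynamic loop can be sketched end-to-end. This is only a minimal illustration of the constraint-selection logic: we re-solve each subproblem from scratch with SciPy's HiGHS solver rather than warm-starting the dual simplex via the CPLEX callable library as the paper does, so it does not reproduce the paper's timings. The function name and tolerances are our own.

```python
import numpy as np
from scipy.optimize import linprog

def solve_nnlp_nvrad(A, b, c, phi1=200):
    """Sketch of the dynamic NVRAD COST for max c.x s.t. Ax <= b, x >= 0."""
    m, n = A.shape
    # Step 1: take constraints in decreasing RAD order until every variable
    # is covered by a positive coefficient, so P0 is bounded.
    order = np.argsort(-(A @ c) / b)
    covered = np.zeros(n, dtype=bool)
    operative = []
    for i in order:
        if np.any(~covered & (A[i] > 0)):
            operative.append(int(i))
            covered |= (A[i] > 0)
        if covered.all():
            break
    phi, delta_prev = phi1, None
    while True:
        # Steps 2 and 14: solve the current relaxation P_r (maximize c.x).
        res = linprog(-c, A_ub=A[operative], b_ub=b[operative],
                      bounds=(0, None), method="highs")
        x = res.x
        slack = A @ x - b
        violated = np.setdiff1d(np.flatnonzero(slack > 1e-9), operative)
        if violated.size == 0:
            return x                                   # optimal for P itself
        d = abs(c @ x / np.linalg.norm(c) - np.linalg.norm(x))      # eq. (11)
        if delta_prev is not None and delta_prev > 0:
            omega = max(0.0, (delta_prev - d) / delta_prev) * 100   # eq. (12)
            if omega > 1:
                phi = phi * (1 + int(1 / np.log(omega)))            # eq. (13)
            else:
                phi = violated.size
        delta_prev = d
        # Add the min{phi, gamma_r} violated constraints of largest NVRAD (6).
        nvrad = (A[violated] @ c) / b[violated] ** 2 * slack[violated]
        best = violated[np.argsort(-nvrad)][: min(phi, violated.size)]
        operative.extend(best.tolist())
```

On a small instance such as maximizing $3x+2y$ subject to $2x+y\le 4$, $x+3y\le 6$, $x+y\le 3$, the sketch starts from a one-constraint relaxation and reaches the optimum of the full problem after adding the violated constraints.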

2.3. A Hybrid Approach

A reasonable conjecture is that combining the global information of RAD and the local information of NVRAD might be advantageous. Therefore, we will also consider an approach that alternates the dynamic RAD and NVRAD metrics in a single algorithm at even and odd iterations, respectively, to yield a hybrid COST designated here as HYBR. The results obtained for HYBR demonstrate that combining posterior and prior COSTs may be superior to either a prior or posterior approach by itself.
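The alternation itself is trivial to express; the helper `hybr_scores` below is our own hypothetical name for the metric switch, with the even/odd convention taken from the description above.

```python
import numpy as np

def hybr_scores(A, b, c, x_star, r):
    """Constraint scores for HYBR at active-set iteration r: the prior RAD
    metric (4) at even r, the posterior NVRAD metric (6) at odd r."""
    if r % 2 == 0:
        return (A @ c) / b                        # RAD, eq. (4)
    return (A @ c) / b**2 * (A @ x_star - b)      # NVRAD, eq. (6)
```

For a single constraint $a=(1,2)$, $b=2$, $c=(1,1)$, and ${x}_{r}^{*}=(2,2)$, the score is $3/2$ at an even iteration and $\frac{3}{4}\cdot 4=3$ at an odd one.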

3. Computational Experiments

Dynamic NVRAD is compared in this section with the CPLEX primal simplex, dual simplex, and barrier methods. It is also compared with the prior active-set method RAD and the standard posterior active-set method VIOL, as well as to a normalized version of VIOL called NVIOL that was superior to VIOL in computational results not reported here. Both dynamic and multi-bound, multi-cut versions of NVRAD were compared to dynamic and multi-bound, multi-cut versions of the other active-set methods for insight into the individual merits of the dynamic and posterior approaches.

3.1. Problem Instances

Five sets of NNLPs from [18] are used to evaluate the performance of the dynamic posterior COST NVRAD. Each of Sets 1 - 4 contains 105 randomly generated NNLPs: five problem instances at each of 21 density levels ranging from 0.005 to 1. The four sets differ in the ratio of ($m$ constraints) to ($n$ variables), which is 200, 20, 2, and 1 for Sets 1 - 4, respectively. In these problem sets, randomly generated real numbers between 1 and 5, 1 and 10, and 1 and 10 were assigned to the elements of $A,$ $b,$ and $c,$ respectively. To prevent any constraint of $P$ from having the form of an upper bound on a single variable, each constraint is required to have at least two nonzero ${a}_{ij}$. Problem Set 5 consists of large-scale NNLPs with 5000 variables and 1,000,000 constraints. In this set, real numbers between 1 and 100 are assigned to the elements of $b$ and $c$, with densities $p$ ranging from 0.0004 to 0.06. Again, each constraint is required to have at least two nonzero ${a}_{ij}$.
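An instance generator in the style described above can be sketched as follows. This is our own illustration; the exact generator used in [18] may differ in how it realizes the target density, but the ranges and the two-nonzeros-per-row requirement match the description.

```python
import numpy as np

def random_nnlp(m, n, density, rng=None):
    """Random NNLP instance: a_ij in [1, 5], b_i in [1, 10], c_j in [1, 10],
    with every constraint row keeping at least two nonzero coefficients so
    that no constraint is a simple upper bound on one variable."""
    rng = np.random.default_rng(rng)
    A = np.zeros((m, n))
    for i in range(m):
        k = max(2, rng.binomial(n, density))       # expected density, min 2 nonzeros
        cols = rng.choice(n, size=k, replace=False)
        A[i, cols] = rng.uniform(1, 5, size=k)
    b = rng.uniform(1, 10, size=m)
    c = rng.uniform(1, 10, size=n)
    return A, b, c
```

For Set 5, the same sketch would draw $b$ and $c$ from $[1, 100]$ instead.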

3.2. CPLEX Preprocessing

Two CPLEX preprocessing parameters for solving linear programs are relevant here: the pre-solve indicator (PREIND) and the dual setting (PREDUAL). The pre-solver is enabled with the setting PREIND = 1 (ON), which reduces both the number of variables and the number of constraints before any algorithm is applied; it is disabled by setting PREIND = 0 (OFF). The second preprocessing parameter affecting computational speed is PREDUAL. With PREDUAL = 0 (ON), CPLEX automatically selects whether to solve the original LP or its dual, while PREDUAL = −1 (OFF) forces CPLEX to solve the original problem.
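As a configuration illustration only (the paper's experiments used the C callable library, and parameter names here follow IBM's documented Python API rather than anything stated above), the settings used inside NVRAD and HYBR would look like:

```python
# Hypothetical analogue of the settings described above, via the CPLEX
# Python API. PREIND = 0 (OFF) and PREDUAL = -1 (OFF), as used when CPLEX
# serves as the subproblem solver inside NVRAD or HYBR.
import cplex

cpx = cplex.Cplex()
cpx.parameters.preprocessing.presolve.set(0)   # PREIND = 0 (OFF)
cpx.parameters.preprocessing.dual.set(-1)      # PREDUAL = -1 (OFF)
```

For the standalone CPLEX runs reported below, both parameters would instead be left at their defaults (ON).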

Both PREIND and PREDUAL were turned off when CPLEX was used as part of NVRAD or HYBR. However, all computational results reported here for any individual CPLEX solver had PREIND and PREDUAL turned on; in other words, NVRAD was compared to CPLEX at its fastest settings. CPLEX would thus choose automatically whether to solve the primal or the dual, whichever seemed best. Moreover, preprocessing would substantially reduce the size of any problem $P$ by removing appropriate rows or columns of the constraint matrix $A$ before applying the primal simplex, dual simplex, or interior-point barrier method. In fact, much of the speed of the CPLEX solvers is due to their proprietary preprocessing routines.

3.3. Computational Results

The experiments were performed on an Intel^{®}Core^{TM} 2 Duo X9650 3.00 GHz processor with a Linux 64-bit operating system and 8 GB of RAM. The COST NVRAD uses the IBM CPLEX 12.5 callable library to solve ${P}_{0}$ by the primal simplex and then ${P}_{r},r=1,2,\cdots $ by the dual simplex as selected constraints are added to ${P}_{r-1}$. The CPU times shown in the tables below represent the average computation time of five problem instances at each density level.

The results of Table 1 for Set 1 compare NVRAD to VIOL, as well as to both a dynamic and non-dynamic version of NVIOL. In addition, the dynamic NVRAD described in Section 2.2 was compared to a non-dynamic NVRAD that applies the multi-cut and multi-bound technique of [18] . The dynamic version was significantly faster. The efficacy of the dynamic approach was further demonstrated by the fact that in higher density problems a dynamic version of NVIOL was up to 21 times faster than the multi-cut, multi-bound NVIOL. Overall, dynamic NVRAD was faster than VIOL and NVIOL on every problem instance.

In Table 2, the CPU times of the test problems solved by dynamic NVRAD are compared with those for RAD. In problem Set 1, RAD is slightly faster than NVRAD over all densities, averaging 3.98 seconds compared to 4.55 seconds. However, in problem Set 2, the average computation times for RAD and dynamic

Table 1. CPU times for multi-cut, multi-bound and dynamic active-set approaches on problem Set 1 for random NNLPs with 1000 variables, 200,000 constraints, and ${a}_{ij}=1-5,$ ${b}_{i}=1-10$ , ${c}_{j}=1-10$ .

^{++}Average of 5 instances at each density. ^{−−} Used CPLEX presolve = OFF and predual = OFF.

NVRAD over all densities are 19.07 and 16.86 seconds, respectively. For Set 3, dynamic NVRAD is superior to RAD, averaging 38.91 seconds compared to 41.87 seconds. Similarly, for Set 4 the averages are 41.87 seconds for NVRAD as compared to 46.98 for RAD. Thus the results of Table 2 affirm NVRAD’s ability to add appropriate constraints at each iteration. The results for Set 1 simply reflect how well the prior COST RAD performs when $m$ is much larger than $n.$

Table 3 presents the CPU times for problem Sets 1 - 4 solved by dynamic versions of both RAD and HYBR. In Table 3 HYBR is superior to RAD. Moreover, a comparison of Table 3 with Table 2 shows that HYBR is also slightly better than dynamic NVRAD on these problem sets. Such observations suggest that combining the global information of RAD and the local information of NVRAD gives better performance than either RAD or NVRAD by itself. We note further that HYBR can probably be improved. However, it is not our goal to seek the optimal combination of RAD and NVRAD in HYBR, since an optimal combination would likely depend on factors such as density and the ratio $m/n$.

Table 4, taken from [18], provides a comparison of the posterior COST NVRAD with the standard CPLEX solvers. Comparing the results of Table 4 for the CPLEX solvers with the results for NVRAD in Table 2 shows that NVRAD was significantly faster across virtually all ratios $m/n$ and all densities. For example, the primal simplex was the most robust CPLEX solver, but on average across all densities it took approximately 3 to 14 times more CPU time than NVRAD for the different ratios $m/n$.

Table 2. CPU times for multi-cut, multi-bound and dynamic active-set approaches on problem Sets 1 - 4 for random NNLPs with ${a}_{ij}=1-5,$ ${b}_{i}=1-10$, ${c}_{j}=1-10$.

^{++}Average of 5 instances of LP at each density. ^{−−}Used CPLEX presolve = OFF and predual = OFF.

For the dual simplex, the average CPU time across all densities was approximately 15 to 50 times greater than NVRAD’s over the different ratios. However, the CPLEX barrier method was slightly faster than NVRAD in problem instances with $m/n=20$ and densities less than 0.02. On the other hand, by the time the density reached 0.08 for $m/n=20$, NVRAD was already more than ten times faster than the barrier solver. Furthermore, note that average CPU times in Table 4 greater than 3000 seconds (50 minutes) at any density were not reported. This situation occurred for the CPLEX barrier solver for the ratios 1, 2, 20, and 200 with densities at least 0.3, 0.4, 0.5, and 0.75, respectively.

Finally, for large-scale, low-density test problems with $n=5000$ and

Table 3. CPU times for dynamic HYBR and dynamic RAD on problem Sets 1 - 4 for random NNLPs with ${a}_{ij}=1-5,$ ${b}_{i}=1-10$ , ${c}_{j}=1-10$ .

^{++}Average of 5 instances of LP at each density. ^{−−}Used CPLEX presolve = OFF and predual = OFF.

$m=1,000,000$, Table 5 compares dynamic NVRAD to multi-cut, multi-bound versions of RAD, VIOL, NVIOL, and NVRAD, as well as to the CPLEX primal simplex, dual simplex, and barrier solvers. Only the prior COST RAD was competitive: NVRAD averaged 63.45 seconds overall as compared to 71.79 for RAD. It should be noted that the highest density used in problem Set 5 was 0.0600, since the CPLEX solvers could not solve denser problems of this magnitude in a reasonable amount of time. Average CPU times greater than 2400 seconds (40 minutes) at any density were not reported in Table 5; this situation occurred beginning at an individual threshold density for each CPLEX solver.

Table 4. CPU times from [18] for CPLEX solvers on problem Sets 1 - 4 for random NNLPs with ${a}_{ij}=1-5,$ ${b}_{i}=1-10$ , ${c}_{j}=1-10$ .

^{+}CPLEX presolve = ON and predual = ON. ^{++}Average of 5 instances at each density. ^{b}Runs with CPU times > 3000 s are not reported.

Table 5. CPU times for NVRAD versus RAD, VIOL, NVIOL, and the CPLEX solvers on problem Set 5 for random NNLPs with 5000 variables, 1,000,000 constraints, and ${a}_{ij}=1-5,$ ${b}_{i}=1-100$ , ${c}_{j}=1-100$ .

^{++}Average of 5 instances at each density. ^{b}Runs with CPU times > 2400 s are not reported. ^{−−}Used CPLEX presolve = OFF and predual = OFF. ^{+}Used CPLEX presolve = ON and predual = ON.

4. Conclusion

An efficient posterior COST called NVRAD was developed here for NNLPs to utilize both prior global information and posterior local information. The associated constraint selection metric NVRAD is a heuristic, so a geometric interpretation was presented to offer insight into its performance. NVRAD’s inherent active-set efficiency was enhanced by a dynamic approach varying the number of constraints added at each iteration. In addition to NVRAD, a dynamic active-set approach HYBR was also proposed. HYBR alternates between the posterior method NVRAD and the prior method RAD. To check their performance, both NVRAD and HYBR were used to solve five sets of large-scale NNLPs. Dynamic NVRAD outperformed the previously developed COST RAD, as well as the standard posterior cutting-plane method VIOL, and significantly outperformed the CPLEX primal simplex, dual simplex, and barrier solvers. Moreover, HYBR appears slightly faster than either NVRAD or RAD. The results of this paper provide further evidence that active-set methods may be the fastest approach for solving linear programming problems.

Cite this paper

Corley, H., Noroziroshan, A. and Rosenberger, J. (2017) Posterior Constraint Selection for Nonnegative Linear Programming. *American Journal of Operations Research*, **7**, 26-40. doi: 10.4236/ajor.2017.71002.


References

[1] Todd, M.J. (2002) The Many Facets of Linear Programming. Mathematical Programming, 91, 417-436.

https://doi.org/10.1007/s101070100261

[2] Dare, P. and Saleh, H. (2000) GPS Network Design: Logistics Solution Using Optimal and Near-Optimal Methods. Journal of Geodesy, 74, 467-478.

https://doi.org/10.1007/s001900000104

[3] Rosenberger, J.M., Johnson, E.L. and Nemhauser, G.L. (2003) Rerouting Aircraft for Airline Recovery. Transportation Science, 37, 408-421.

https://doi.org/10.1287/trsc.37.4.408.23271

[4] Li, H.-L. and Fu, C.-J. (2005) A Linear Programming Approach for Identifying a Consensus Sequence on DNA Sequences. Bioinformatics, 21, 1838-1845.

https://doi.org/10.1093/bioinformatics/bti286

[5] Stone, J.J. (1958) The Cross-Section Method, an Algorithm for Linear Programming. DTIC Document, P-1490.

[6] Thompson, G.L., Tonge, F.M. and Zionts, S. (1966) Techniques for Removing Nonbinding Constraints and Extraneous Variables from Linear Programming Problems. Management Science, 12, 588-608.

https://doi.org/10.1287/mnsc.12.7.588

[7] Adler, I., Karp, R. and Shamir, R. (1986) A Family of Simplex Variants Solving an Linear Program in Expected Number of Pivot Steps Depending on d Only. Mathematics of Operations Research, 11, 570-590.

https://doi.org/10.1287/moor.11.4.570

[8] Zeleny, M. (1986) An External Reconstruction Approach (ERA) to Linear Programming. Computers & Operations Research, B, 95-100.

https://doi.org/10.1016/0305-0548(86)90067-5

[9] Myers, D.C. and Shih, W. (1988) A Constraint Selection Technique for a Class of Linear Programs. Operations Research Letters, 7, 191-195.

https://doi.org/10.1016/0167-6377(88)90027-2

[10] Curet, N.D. (1993) A Primal-Dual Simplex Method for Linear Programs. Operations Research Letters, 13, 233-237.

https://doi.org/10.1016/0167-6377(93)90045-I

[11] Bixby, R.E., Gregory, J.W., Lustig, I.J., Marsten, R.E. and Shanno, D.F. (1992) Very Large-Scale Linear Programming: A Case Study in Combining Interior Point and Simplex Methods. Operations Research, 40, 885-897.

https://doi.org/10.1287/opre.40.5.885

[12] Barnhart, C., Johnson, E., Nemhauser, G., Savelsbergh, M. and Vance, P. (1998) Branch-and-Price: Column Generation for Solving Huge Integer Programs. Operations Research, 46, 316-329.

https://doi.org/10.1287/opre.46.3.316

[13] Mitchell, J.E. (2000) Computational Experience with an Interior Point Cutting Plane Algorithm. SIAM Journal on Optimization, 10, 1212-1227.

https://doi.org/10.1137/S1052623497324242

[14] Corley, H.W., Rosenberger, J., Yeh, W.-C. and Sung, T.K. (2006) The Cosine Simplex Algorithm. The International Journal of Advanced Manufacturing Technology, 27, 1047-1050.

https://doi.org/10.1007/s00170-004-2278-1

[15] Pan, P.-Q. (1990) Practical Finite Pivoting Rules for the Simplex Method. Operations-Research-Spektrum, 12, 219-225.

https://doi.org/10.1007/BF01721801

[16] Junior, H.V. and Lins, M.P.E. (2005) An Improved Initial Basis for the Simplex Algorithm. Computers and Operations Research, 32, 1983-1993.

https://doi.org/10.1016/j.cor.2004.01.002

[17] Corley, H.W. and Rosenberger, J.M. (2011) System, Method and Apparatus for Allocating Resources by Constraint Selection. US Patent No. 8082549.

[18] Saito, G., Corley, H.W., Rosenberger, J.M., Sung, T.-K. and Noroziroshan, A. (2015) Constraint Optimal Selection Techniques (COSTs) for Nonnegative Linear Programming Problems. Applied Mathematics and Computation, 251, 586-598.

https://doi.org/10.1016/j.amc.2014.11.080

[19] Saito, G., Corley, H.W. and Rosenberger, J. (2012) Constraint Optimal Selection Techniques (COSTs) for Linear Programming. American Journal of Operations Research, 3, 53-64.

https://doi.org/10.4236/ajor.2013.31004

[20] Noroziroshan, A., Corley, H.W. and Rosenberger, J. (2015) A Dynamic Active-Set Method for Linear Programming. American Journal of Operations Research, 5, 526-535.

https://doi.org/10.4236/ajor.2015.56041
