Linear programming is the branch of mathematics that has enjoyed the most success in optimization. Since its formulation in the 1930s and 1940s and the development of Dantzig's Simplex method in 1949, researchers in various fields have been led to formulate and solve linear problems.
Although the Simplex algorithm is often effective in practice, the fact that it is not a polynomial-time algorithm, as shown by Klee and Minty, prompted researchers to propose other algorithms and led to the birth of interior point methods.
In 1979, L. Khachiyan proposed the first polynomial algorithm for linear programming; it is based on the ellipsoid method, studied by Arkadi Nemirovski and David B. Yudin, a preliminary version of which had been introduced by Naum Z. Shor. Unfortunately, the ellipsoid method is inefficient in practice.
In 1984, N. K. Karmarkar published an interior point algorithm with polynomial convergence, and this renewed interest in interior point methods, in linear as well as nonlinear programming.
In 1995, Gabasov and Kirillova generalized the Simplex method and developed the Adaptive Method (AM), a primal-dual method for linear programming with bounded variables. Like the Simplex method, the Adaptive Method is a support method, but it can start from any support (basis) and any feasible solution, and can move to the optimal solution through interior or boundary points; this method was later extended to general linear and convex problems, and also to optimal control problems.
In linear programming, several variants of the AM have been proposed. They generally address the problem of initializing the method and of choosing the direction.
In this work, a new variant of the adaptive method called the “Pivot adaptive method” (PAM) is presented. Unlike the original method and its variants, we do not need to compute the inverse of the basic matrix at each iteration, nor to solve linear systems involving the basic matrix. In fact, to compute the new feasible solution and the new support, we only need the decomposition of the column vectors of the problem matrix in the current basis. For this computation we use the simplex pivoting rule, introducing a new matrix G that allows us to minimize the computation time. This new variant of the adaptive method allows us to present the resolution of a given problem as a sequence of successive tables (as is done with the Simplex method), as we will see in an example. The proofs of the theorem on the optimality criterion and of the theorem on the existence of an optimal support, which are not given by Gabasov, will be presented here, and, without going into too much detail, a brief comparison between our method and the Simplex method will be given at the end of this article.
The paper is organized as follows. In Section 2, after giving the statement of the problem, some important definitions are given. In Section 3, we describe the “Pivot Adaptive Method” step by step, and in parallel the proofs of the corresponding theorems are given. In Section 4, an example is solved to illustrate the PAM, with details explaining how to present the resolution as a sequence of successive tables. In Section 5, we give a brief comparison between the PAM and the Simplex Method by solving the Klee-Minty problem. Section 6 concludes the paper.
2. Statement of the Problem and Definitions
In this article we consider the primal linear programming problem with bounded variables presented in the following standard form:
where: is the index set of the variables, is the index set of the parameters (corresponding to the constraints); put: , , , and introduce the vectors:
・ , with: , .
・ , with: , .
and the matrix
・ , with: ,
( : the jth column of the matrix A), and ( : the ith row of the matrix A), , , .
We assume that . Then the problem (1) takes the following form:
are called the general constraints of (P).
Denote the feasible region of (P) as: .
2.1. Definition 1
Each vector of the set H is called a feasible solution of (P).
2.2. Definition 2
Any vector that satisfies the general constraints of the problem (P) is called a pseudo-feasible solution.
2.3. Definition 3
A feasible solution is called optimal if: .
2.4. Definition 4
For a given value , the feasible solution is called ε-optimal (or suboptimal) if: , where is an optimal solution of the problem (P).
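The defining inequality did not survive extraction; a reconstruction in standard notation (a sketch, assuming the objective is \(c^{T}x\) and writing \(x^{0}\) for an optimal solution of (P)) is:

```latex
% \varepsilon-optimality (Definition 4), reconstructed in standard notation:
% a feasible x is \varepsilon-optimal when its objective value is within
% \varepsilon of the optimal value c^{T}x^{0}.
\left| \, c^{T}x - c^{T}x^{0} \, \right| \le \varepsilon
```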
2.5. Definition 5
The set of m indices ( ) is called a support of (P) if the submatrix is non-singular ( ); the set of the ( ) remaining indices is then called the non-support, is the support matrix, and is the non-support matrix.
2.6. Definition 6
The pair formed by a feasible solution x and the support is called a support feasible solution (SF-solution).
2.7. Definition 7
The SF-solution is called non-degenerate if: .
Recall that, unlike the Simplex Method, in which the feasible solution x and the basic indices are intimately related, in the adaptive method they are independent.
3. Optimality Criterion
For the smooth running of the calculations, in the rest of the article we assume that the sets and are vectors of indices (i.e., the order of the indices in these sets matters and must be respected).
3.1. Formula of the Objective Value Increment
Let be the initial SF-solution of the problem (P), and an arbitrary pseudo-feasible solution of (P). We set , then the increment of the objective function value is given by:
Substituting the vector in (3) we get:
Here, we define the m-vector of multipliers y as follows:
In his work, Gabasov defines the support gradient, denoted D, as: , but he also notes that for every , the kth support derivative is equal to ; in fact, is the rate of change of the objective function when the kth non-support component of the feasible solution x is increased while all the other non-support components are held fixed, the support components being adjusted so as to keep the constraints satisfied.
Then, in the pivot adaptive method, we define the n-vector of reduced gains (or support gradient) δ as follows:
To reduce the computation time in the adaptive method of Gabasov  , we define the ( ) matrix G as follows:
which is the decomposition of the columns of the matrix A on the support .
We have: , is the product of the ith row of the matrix and the jth column of the matrix . Then the n-vector of support gradient given in (8) can be computed as follows:
We see that, to compute the quantity , we first compute and not ; therefore we do not need to compute the inverse of the basic matrix .
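This computation can be sketched as follows in Python (a sketch under assumed notation: `G` stands for the decomposition matrix of (9) and `Jb` for the support indices; for the demo only, `G` is built by a single solve, whereas in the method itself it is maintained by pivoting):

```python
import numpy as np

# Sketch (assumed notation): A is the m x n constraint matrix, c the cost
# vector, and Jb the list of m support (basic) column indices.  Since
# G = A_B^{-1} A holds the decomposition of the columns of A in the current
# basis, the support gradient delta = c - G^T c_B can be formed from G
# alone, without ever forming the inverse of A_B explicitly.

def support_gradient(G, c, Jb):
    """delta_j = c_j - c_B^T G_j; zero on the support columns."""
    cB = c[Jb]
    return c - G.T @ cB

# Tiny example: 2 constraints, 4 variables, support Jb = [2, 3].
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
c = np.array([4.0, 3.0, 0.0, 0.0])
Jb = [2, 3]
AB = A[:, Jb]
G = np.linalg.solve(AB, A)    # demo only: one solve to seed G
delta = support_gradient(G, c, Jb)
print(delta)                  # components on the support are 0
```

On the support columns G is the identity, so the corresponding components of δ vanish, as expected.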
From (6) and (10) we obtain
3.2. Definition 8
For a given support , any vector of that satisfies
is called a primal pseudo-feasible solution accompanying the support .
3.3. Proposition 1
A primal pseudo-feasible solution χ accompanying a given support satisfies only the general constraints of the problem (P), i.e., . If, moreover, the support components of the vector χ satisfy: , then χ is a feasible and optimal solution of (P).
3.4. Theorem 1 (The Optimality Criterion)
Let be an SF-solution of the problem (P), and be the support gradient computed by (10), then the relations
are sufficient, and in the case of non-degeneracy of the SF-solution also necessary for the optimality of the feasible solution x.
1) The sufficient condition:
For the SF-solution , suppose that relations (13) hold; we must prove that x is optimal, which amounts to showing that:
Let , and , then:
then x is optimal.
2) The necessary condition:
Suppose that the SF-solution is non-degenerate, i.e.:
and that x is optimal, i.e.:
We reason by contradiction:
Suppose that the relations (13) are not verified, thus we have, at least, one of the following three cases:
Suppose we are in the first case, i.e.
(the same reasoning applies to the second and third cases). Construct the n-vector as follows:
where is the product of the jth row of the matrix and the kth column of the matrix . Let .
For , we then have:
a value of θ is chosen such that:
Put: , .
we have: , , because is non-degenerate.
Let , then
which contradicts (16).
3.5. Definition 9
The pair satisfying the relations (13) will be called an optimal SF-solution of the problem (P).
3.6. The Suboptimality Criterion
Let be an optimal solution of (P), and x a feasible solution of (P).
From (11) we obtain:
is called the suboptimality estimate of the SF-solution since
We have the following theorem of suboptimality.
3.7. Theorem 2 (Sufficient Condition of Suboptimality)
For a feasible solution x to be ε-optimal for a given positive number ε, it is sufficient that there exist a support such that .
3.8. Proposition 2
From (12) and (27) we deduce:
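Formula (27) itself did not survive extraction; the following Python sketch shows the standard form of such an estimate for bounded-variable problems (written here for a minimization convention, with the signs flipped for maximization; all names are ours, not the paper's):

```python
import numpy as np

# Sketch of a suboptimality estimate of the type of formula (27): over the
# non-support indices Jn, beta accumulates the gap between x_j and the bound
# toward which the sign of delta_j points (minimization convention: delta_j>0
# wants x_j at the lower bound lo_j, delta_j<0 wants x_j at the upper bound
# up_j).  Each term is nonnegative, and beta = 0 certifies optimality.

def suboptimality_estimate(delta, x, lo, up, Jn):
    beta = 0.0
    for j in Jn:
        if delta[j] > 0:
            beta += delta[j] * (x[j] - lo[j])   # gap to the lower bound
        elif delta[j] < 0:
            beta += delta[j] * (x[j] - up[j])   # gap to the upper bound
    return beta

delta = np.array([0.0, 0.0, 2.0, -1.0])
lo = np.zeros(4); up = np.full(4, 5.0)
x  = np.array([1.0, 2.0, 3.0, 4.0])
Jn = [2, 3]
print(suboptimality_estimate(delta, x, lo, up, Jn))  # 2*(3-0) + (-1)*(4-5) = 7.0
```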
3.9. The Dual Problem
The dual of the primal problem (P) is given by the following linear problem
where , with , is the transpose of the jth column of the matrix A. The problem (D) has n general constraints and variables.
Like (P), the problem (D) has an optimal solution , and , where is the optimal solution of (P).
3.10. Definition 10
Any -vector satisfying all the constraints of the problem D is called a dual feasible solution.
3.11. Definition 11
Let be a support of the problem (P), and δ the corresponding support gradient defined in (10), the -vector which satisfies:
is called the dual feasible solution accompanying the support .
3.12. Proposition 3
・ A dual feasible solution accompanying a given support , is dual feasible, i.e.: .
・ For any support we have
where λ and χ are respectively the dual feasible solution and a primal pseudo-feasible solution accompanying the support .
3.13. Definition 12
Let y be a dual feasible solution: . The -vector such that:
is called a coordinated dual feasible point to the m-vector y.
3.14. Proposition 4
The coordinated dual feasible point to y is dual feasible, i.e.: , , .
3.15. Decomposition of the Suboptimality Estimate of an SF-Solution
Let be an SF-solution of the problem (P), its suboptimality estimate calculated by (27), the dual feasible solution accompanying the support , χ the primal pseudo-feasible solution accompanying the support , the optimal solution of the problem (P), and the optimal solution of the dual problem (D).
From (9), (10), (29), (32), and proposition 1 we have:
where: is the degree of non-optimality of the support , and is the degree of non-optimality of the feasible solution x.
3.16. Proposition 5
・ The feasible solution x is optimal if , and the support is optimal if , while the pair is an optimal SF-solution if . Thus the optimality of the feasible solution x may fail to be detected when it is examined with an unsuitable support, because in this case and .
・ As , the support is optimal if its accompanying dual feasible solution λ is an optimal solution of the dual problem (D).
3.17. Theorem 3 (Existence of an Optimal Support)
For each problem (P), there is always an optimal support.
As (P) has an optimal solution, its dual (D) given by (30) also has an optimal solution; let be this solution, and define the sets:
we have .
Let , and define the following linear problem:
We have , where is the coordinated dual feasible point to the vector y, and is an optimal feasible solution of the problem ( ).
On the other hand, there exists a “vertex” of the set that is an optimal feasible solution of ( ), and this vertex is the intersection of at least m hyperplanes .
Put where .
Then: , and so: ; furthermore, the coordinated dual feasible point to is the dual feasible solution accompanying the support , and it is an optimal feasible solution of the problem (D). According to Proposition 4, we then conclude that is an optimal support for (P).
4. The Description of the Pivot Adaptive Method “PAM”
Like the Simplex Method, the adaptive method (and therefore also the pivot adaptive method) solves the problem (P) in two phases: the initialization phase and the second phase; we detail each of them in the following.
1) The initialization phase:
In this phase an initial support feasible solution SF-solution will be determined, and thus the matrix G defined in (9).
We assume that , and .
Notice that every problem (P) can be reduced to this case, i.e.,
and ; indeed:
a) if , then both sides of the corresponding constraint are multiplied by (−1).
b) if , then we do the change of variable: , then:
and the general constraints of (P) can be written as:
where , is the jth column of the matrix A, from this relation we obtain:
if in (36), we multiply both sides by (−1).
Then to determine an initial SF-solution to the problem (P), define the artificial problem of (P) as follows:
where , and are the m artificial variables added to the problem (P).
We solve the artificial problem with the “pivot adaptive method”, starting from the obvious initial feasible solution: , and the canonical support (determined by the indices of the artificial variables), and we take the matrix .
If H is not empty, then at the optimum of , , and , so x is a feasible solution of (P), and the optimal SF-solution of the artificial problem can be taken as the initial SF-solution for the problem (P).
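The phase-one construction can be sketched as follows in Python (a sketch under the reductions above, assuming b ≥ 0 and that x = 0 satisfies the bounds; the function name and variable names are ours):

```python
import numpy as np

# Sketch of the artificial (phase-one) problem: append m artificial variables
# with identity columns, so A_art x_art = b with x_art = (x, x_a).  The
# canonical support is the set of artificial indices; since the basic matrix
# is then the identity, the initial decomposition matrix is G = A_art itself.
# The artificial objective penalizes the artificial variables (written here
# for a maximization convention).

def artificial_problem(A, b):
    m, n = A.shape
    A_art = np.hstack([A, np.eye(m)])                    # A x + I x_a = b
    c_art = np.concatenate([np.zeros(n), -np.ones(m)])   # maximize -sum(x_a)
    x0 = np.concatenate([np.zeros(n), b])                # obvious feasible start
    Jb = list(range(n, n + m))                           # canonical support
    G0 = A_art.copy()                                    # A_B = I  =>  G = A_art
    return A_art, c_art, x0, Jb, G0

A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 5.0])
A_art, c_art, x0, Jb, G0 = artificial_problem(A, b)
print(np.allclose(A_art @ x0, b))   # the artificial start satisfies the constraints
```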
2) The second phase:
Assume that an initial SF-solution of the problem (P) has been found in the initialization phase, let G be the corresponding matrix computed by (9), and let be an arbitrary positive number; the problem will be solved with the pivot adaptive method, which we describe in the following.
At the iterations of the pivot adaptive method the transfer from one SF-solution to a new one is carried out in such a way that:
the transfer will be realized as two procedures making up the iteration:
i) The procedure of the feasible solution change :
During this procedure we decrease the degree of non-optimality of the feasible solution: , and then improve the primal criterion :
i.1) compute the support gradient δ by (10), and the suboptimality estimate by (27).
i.2) If , then stop the resolution: x is an ε-optimal feasible solution for (P).
i.3) If , we compute the non-support components of the primal pseudo-feasible solution accompanying the support by (12), and the search direction with:
i.4) Find the primal step length with:
Let such that: which is the position of in the vector of indices .
i.5) Compute the new feasible solution: , and .
i.6) If , then stop the resolution with: is the optimal SF-solution of (P).
i.7) If , then compute:
i.8) If , then stop the resolution: is an ε-optimal SF-solution of (P).
i.9) if , then go to ii) to change the support to .
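Formulas (41) and (42) did not survive extraction; the following Python sketch reconstructs procedure i) in the usual adaptive-method form (the direction points from x toward the accompanying pseudo-feasible solution; all names, such as `change_solution` and `chi_N`, are ours, not the paper's):

```python
import numpy as np

# Sketch of the feasible-solution change (steps i.3-i.5): the non-support
# part of the direction l points from x toward the accompanying
# pseudo-feasible solution chi, and the support part is chosen so that
# A l = 0; with G = A_B^{-1} A this means l_B = -G[:, Jn] @ l_N, so the
# update never touches A_B^{-1} directly.

def change_solution(x, chi_N, G, Jb, Jn, lo, up):
    l = np.zeros_like(x)
    l[Jn] = chi_N - x[Jn]
    l[Jb] = -G[:, Jn] @ l[Jn]      # keeps A (x + theta*l) = b
    # primal step: largest theta in [0, 1] keeping the support components
    # within their bounds
    theta = 1.0
    for j in Jb:
        if l[j] > 0:
            theta = min(theta, (up[j] - x[j]) / l[j])
        elif l[j] < 0:
            theta = min(theta, (lo[j] - x[j]) / l[j])
    return x + theta * l, theta

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
G = A.copy()                       # support Jb = [0, 1] gives A_B = I
x = np.array([1.0, 1.0, 1.0])
lo = np.zeros(3); up = np.full(3, 5.0)
x_new, theta = change_solution(x, np.array([0.0]), G, [0, 1], [2], lo, up)
print(np.allclose(A @ x_new, A @ x))   # feasibility is preserved
```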
ii) The procedure of the support change :
During this procedure we decrease the degree of non-optimality of the support: , and thus improve the dual criterion , where is the dual feasible solution accompanying the support , and is the dual feasible solution accompanying the support .
In this procedure there are two rules for changing the support: the “short step” rule and the “long step” rule; both rules will be presented.
ii.1) for the found in (i.4), compute , and .
ii.2) compute the dual direction t with:
and arrange the obtained values in increasing order as:
As was said before, the change of support can be done with two different rules, to use the “short step rule” go to a), and to use the “long step rule” go to b).
a) Change of the support with the “short step rule”:
a.1) From (46) put:
Let such that: which is the position of in the vector of indices .
Put: , , then , .
go to ii.4).
b) Change of the support with the “long step rule”:
b.1) for every of (47) compute
b.2) as we choose such that
Let such that: which is the position of in the vector of indices .
b.3) put: , and , , then , .
・ where: , .
Go to ii.4).
ii.4) compute as follows:
is “the pivot”. Go to ii.5).
ii.5) Set: , , , , and . Go to i.2).
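The update of G in formula (51) did not survive extraction; the following Python sketch shows the standard simplex pivoting rule it refers to (function and variable names are ours): the row of the leaving support index is divided by the pivot, which is then eliminated from all other rows, exactly as in a simplex tableau.

```python
import numpy as np

# Sketch of the support change via the simplex pivoting rule: with pivot
# element p = G[i0, j0] (row i0 = position of the leaving support index,
# column j0 = entering variable), one tableau pivot turns column j0 of G
# into a unit vector.  This replaces any explicit recomputation of A_B^{-1}.

def pivot_update(G, i0, j0):
    G = G.astype(float).copy()
    p = G[i0, j0]                 # "the pivot"
    G[i0, :] /= p
    for i in range(G.shape[0]):
        if i != i0:
            G[i, :] -= G[i, j0] * G[i0, :]
    return G

G = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
G2 = pivot_update(G, 0, 1)        # column 1 enters, row 0 leaves
print(G2[:, 1])                   # the entering column becomes a unit vector
```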
5. The Pivot Adaptive Method
Let be an initial SF-solution for the problem (P), G the matrix computed by (9), and are given.
The “Pivot Adaptive Method” (rule of the “short step” and rule of “the long step” are presented) is summarized in the following steps:
Algorithm 1 (The Pivot Adaptive Method)
1) compute the support gradient by (10), and by (27).
2) if , then STOP: x is an ε-optimal feasible solution for (P).
3) if , then compute the non-support components of the primal pseudo-feasible solution accompanying the support by (12), and the search direction l with (41).
4) find the primal step length with (42). Let such that: which is the position of in the vector of indices .
5) Compute the new feasible solution: , and .
6) if , then STOP with: is the optimal SF-solution for (P).
7) if , then compute .
8) if , then STOP: is an ε-optimal SF-solution for (P).
9) if , then change the support to .
10) compute , and: .
11) compute the dual direction t with (44).
12) compute , for with (45), and arrange the obtained values in increasing order.
13) do the change of support by one of the rules:
a) Change of the support with the “short step rule” as follows:
a.1) compute with (47), and set: , .
a.2) compute . Go to (14).
b) Change of the support with the “long step rule” as follows:
b.1) for every of (46) compute .
b.2) as we choose such that (50) is verified.
b.3) set: , and , .
b.4) compute where: , . Go to (14).
14) compute with (51) and .
15) set: , , , , and . Go to (2).
6. Example
In this section, a linear problem will be solved with the Pivot Adaptive Method using the “short step rule”, and in parallel we will explain how to carry out these calculations in the form of successive tables, as shown in Figure 1.
Consider the following linear problem with bounded variables:
where: , , , , , .
Put , .
First phase (Initialization):
Let be the initial SF-solution of the problem (P), where , , so: , ; the matrix G is then given by: , where: , .
Note that for the initial support we have chosen the canonical support, and for the initial feasible solution we took an interior point of the feasible region of the problem (P).
・ In the preamble part of the tables (Figure 1), which has four rows, we put sequentially: , , , . The tables have 8 columns.
Starting with the SF-solution and the matrix G, we solve the problem (P) with the PAM using the short step rule.
・ The first table of the PAM (Figure 1), representing the first iteration, has 8 rows: the first 3 rows contain the matrix G (indexed by the vector shown in the first column of the tables), and the remaining rows contain sequentially: δ, l, θ, x2, σ.
The support gradient where: , , and .
1) Iteration 1:
・ Change solution: we have: , and , , , (4 is the second component of the vector of index ).
・ The cell of is marked in yellow in the tables (Figure 1); it corresponds to the second vector of , which is (it will leave the basis).
Then, , the new feasible solution , and .
・ Change support (with the short step rule):
, and: , the dual direction , , then (1 is the first component of the vector of index ).
・ The cell of is marked in green in the tables (Figure 1); its row corresponds to the vector , which will enter the basis.
So , .
Figure 1. Tables of the pivot adaptive method.
・ To obtain the 2nd table (Figure 1), we applied the pivoting rule, where the pivot is marked in red. The new support gradient where: , , and .
Since , we go on to another iteration; we then compute: where:
2) Iteration 2:
・ Change solution: we have: , and , , , (3 is the first component of the vector of indices ).
Then, , the new feasible solution , and .
・ Change support (with the short step rule): , and: , the dual direction , , then (2 is the second component of the vector of index ).
So , .
The new support gradient where: , , and .
Then is the optimal SF-solution of the problem (P), and .
7. Brief Numerical Comparison between the PAM and the Simplex Method
To carry out a numerical comparison between the Simplex algorithm implemented in the function “linprog” of the MATLAB programming language, version 7.14.0.739 (R2012a), and our algorithm (PAM), an implementation of the latter with the short step rule has been developed.
The comparison is carried on the resolution of the Klee-Minty problem which has the following form  :
where . (P3) has n variables, n constraints and 2^n vertices.
For a fixed n, to write (P3) in the form of the problem (P) given in (1), we added a slack variable to each of the n constraints of (P3), and we found, for each of the 2n variables of the resulting problem, a lower bound and an upper bound. Then the problem (P3) can be written as a linear problem with bounded variables as follows:
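The exact coefficients of (P3) were lost in extraction; the following Python sketch generates one standard Klee-Minty form (max Σ_j 2^{n−j} x_j subject to 2 Σ_{j<i} 2^{i−j} x_j + x_i ≤ 5^i), with a slack variable added to each constraint as described above; the form used in the article may differ in its constants:

```python
import numpy as np

# Sketch of a Klee-Minty instance generator (standard textbook form, not
# necessarily the article's exact coefficients): n constraints with
# right-hand sides 5^i, costs 2^{n-j}, and lower-triangular coefficients
# 2 * 2^{i-j}; slacks bring it to the bounded-variable standard form with
# 2n variables.

def klee_minty(n):
    A = np.zeros((n, n))
    b = np.array([5.0 ** i for i in range(1, n + 1)])
    c = np.array([2.0 ** (n - j) for j in range(1, n + 1)])
    for i in range(n):          # 0-based rows/columns
        for j in range(i):
            A[i, j] = 2.0 * 2.0 ** (i - j)   # 2 * 2^{i-j} in 1-based indexing
        A[i, i] = 1.0
    A_std = np.hstack([A, np.eye(n)])        # add n slack variables
    return A_std, b, c

A_std, b, c = klee_minty(3)
print(A_std.shape)   # (3, 6): n constraints, 2n variables
```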
The Simplex algorithm chosen here takes the origin as its starting solution, so in our implementation of the PAM we impose the same initial feasible solution. For the initial support we took the canonical one , so .
We considered problems with matrix A of size , where ; for each size, we recorded the number of iterations and the resolution time, in milliseconds.
These results are reported in Figure 2 and Figure 3, where “Simplex(it)”, “PAM(it)”, “Simplex(tm)”, “PAM(tm)” represent respectively the number of iterations of the Simplex algorithm, the number of iterations of the Pivot Adaptive Method, the time of the Simplex algorithm, and the time of the Pivot Adaptive Method.
Figure 2. Number of iterations and time for Linprog-Simplex and PAM for Klee-Minty problem.
Figure 3. Time comparison between Linprog-Simplex and PAM for the Klee-Minty problem.
We observe that the PAM outperforms the Simplex algorithm both in computation time and in the number of iterations needed to solve the problems. Recall that the optimal solution of the problem (P3) is given by .
8. Conclusion
The main contribution of this article is a new variant of the Adaptive Method that we have called the “Pivot Adaptive Method” (PAM). In this variant, we use the simplex pivoting rule in order to avoid computing the inverse of the basic matrix at each iteration. As shown in this work, the algorithm saves computation time compared to the original algorithm (AM), and allows us to present the resolution of a given problem as a sequence of successive tables, as seen in the example.
We have implemented our algorithm (PAM), with both the short step rule and the long step rule, in MATLAB, and we have carried out a brief comparison with the primal simplex algorithm (using linprog-simplex in MATLAB) on the Klee-Minty problem. Indeed, the PAM proved more efficient in both the number of iterations and the computation time.
In subsequent work, we shall carry out a more thorough comparison between the Simplex algorithm of Dantzig and the PAM, and we shall extend the PAM to large-scale problems using the constraint selection technique.
 Kojima, M., Megiddo, N., Noma, T. and Yoshise, A. (1989) A Unified Approach to Interior Point Algorithms for Linear Programming. In: Megiddo, N., Ed., Progress in Mathematical Programming: Interior Point and Related Methods, Springer Verlag, New York, 29-47.
 Cafieri, S., Dapuzzo, M., Marino, M., Mucherino, A. and Toraldo, G. (2006) Interior Point Solver for Large-Scale Quadratic Programming Problems with Bound Constraints. Journal of Optimization Theory and Applications, 129, 55-75.
 Balashevich, N.V., Gabasov, R. and Kirillova, F.M. (2000) Numerical Methods of Program and Positional Optimization of the Linear Control Systems. Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki, 40, 838-859.