An Improved Affine-Scaling Interior Point Algorithm for Linear Programming
Abstract: In this paper, an Improved Affine-Scaling Interior Point Algorithm for Linear Programming is proposed, and computational results of selected practical problems affirming the proposed algorithm are provided. The proposed algorithm is accurate and reduces the number of iterations required to obtain an optimal solution of a given Linear Programming problem compared with the existing Affine-Scaling Interior Point Algorithm. The algorithm can be very useful for the development of faster software packages for solving linear programming problems using interior-point methods.

1. Introduction

The Simplex Method (SM) remained the dominant solution method for practical linear programming (LP) problems until the development of interior-point methods. Karmarkar (1984) was the pioneer in the field, and his Projective Scaling Method was able to compete with the SM on realistic problems. As the name suggests, interior-point methods approach an optimal point (which is known to lie on the boundary of the feasible set) through a sequence of interior points. Unlike the SM, iterates are calculated not on the boundary but in the interior of the feasible region. Starting with an initial interior point, the method moves through the interior of the feasible set along an improving direction to another interior point. There, a new improving direction is found, along which a move is made to yet another interior point. This process is repeated, producing a sequence of interior points that converges to an optimal boundary point. Many different interior-point methods for linear programming have been developed; most fall into one of three main categories: projective and potential-reduction methods, affine-scaling methods, and path-following methods. In this paper, an Improved Affine-Scaling Interior Point Algorithm for LP is proposed with the aim of increasing the efficiency of the original algorithm due to Dikin.

2. Materials and Methods

The Affine-Scaling Interior-Point Algorithm was first introduced by Dikin (1967), who subsequently published a convergence analysis (Dikin, 1974). Dikin's work went largely unnoticed for many years until Barnes (1986) and Vanderbei, Meketon and Freedman (1986) rediscovered it as a simple variant of Karmarkar's algorithm. Here, the problem is rescaled so that the current point stays some distance away from every boundary constraint, and the step length is then restricted so that the next move does not reach the boundary. The algorithm is as follows.

Given an optimization problem in standard form:

Optimize $Z={c}^{\text{T}}x$

Subject to $Ax=b$

$x\ge 0,$

where $c$ is the vector of cost coefficients, $A$ the matrix of technological coefficients and $b$ the vector of resource availabilities, the Affine-Scaling Interior-Point Algorithm is summarized in the following steps:

Step 1: Given the initial trial solution, $x={\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)}^{\text{T}},$ set

$D=\begin{bmatrix} x_1 & 0 & 0 & \cdots & 0 \\ 0 & x_2 & 0 & \cdots & 0 \\ 0 & 0 & x_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & x_n \end{bmatrix}$

Step 2: Calculate $\tilde{A}=AD$ and $\tilde{c}=Dc$.

Step 3: Calculate $P=I-\tilde{A}^{\mathrm{T}}\left(\tilde{A}\tilde{A}^{\mathrm{T}}\right)^{-1}\tilde{A}$ and $C_p=P\tilde{c}$, where $P$ is a projection matrix and $C_p$ is the projected gradient.

Step 4: Identify the negative component of $C_p$ having the largest absolute value, and set $v$ to this absolute value. Then calculate

$\tilde{x}=\begin{bmatrix}1\\ 1\\ \vdots \\ 1\end{bmatrix}+\frac{\theta }{v}C_p$, (1.0)

where $0<\theta <1$.

Step 5: Calculate $x=D\tilde{x}$ as the trial solution for the next iteration, starting again from Step 1.

(If this trial solution is virtually unchanged from the preceding one, the algorithm has virtually converged to an optimal solution and is terminated.)
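The five steps above can be sketched in NumPy (the paper's own program was written in MATLAB; this Python translation, the sample LP, the stopping tolerance and the `affine_scaling` name are illustrative choices, not the authors' code):

```python
import numpy as np

def affine_scaling(A, c, x, theta=0.5, tol=1e-8, max_iter=1000):
    """Affine-scaling iteration (Steps 1-5) for
    maximize Z = c^T x  subject to  A x = b, x >= 0,
    started from a strictly interior feasible point x > 0."""
    x = np.asarray(x, dtype=float)
    for k in range(1, max_iter + 1):
        D = np.diag(x)                        # Step 1: D = diag(x)
        A_t = A @ D                           # Step 2: A~ = A D
        c_t = D @ c                           #         c~ = D c
        # Step 3: Cp = P c~ with P = I - A~^T (A~ A~^T)^-1 A~,
        # computed without forming P explicitly
        w = np.linalg.solve(A_t @ A_t.T, A_t @ c_t)
        Cp = c_t - A_t.T @ w
        neg = Cp[Cp < 0]
        if neg.size == 0:                     # no negative component: optimal
            return x, k - 1
        v = np.abs(neg).max()                 # Step 4: v and x~
        x_t = np.ones_like(x) + (theta / v) * Cp
        x_new = D @ x_t                       # Step 5: next trial solution
        if np.linalg.norm(x_new - x) < tol:   # virtually unchanged: stop
            return x_new, k
        x = x_new
    return x, max_iter

# Illustrative LP (not from the paper): maximize Z = x1 + 2*x2
# subject to x1 + x2 <= 8, written as x1 + x2 + x3 = 8 with slack x3.
# The optimum is x = (0, 8, 0) with Z = 16.  Because Cp lies in the
# null space of A~, A x = b is preserved automatically at every step.
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 2.0, 0.0])
x0 = np.array([2.0, 2.0, 4.0])                # interior feasible: sums to 8
x_opt, iters = affine_scaling(A, c, x0, theta=0.5)
```

Solving the small system $\tilde{A}\tilde{A}^{\mathrm{T}}w=\tilde{A}\tilde{c}$ and setting $C_p=\tilde{c}-\tilde{A}^{\mathrm{T}}w$ gives the same projected gradient as Step 3 while avoiding the explicit $n\times n$ projection matrix $P$.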

3. Results and Discussions

The Improved Affine-Scaling Interior-Point Algorithm

In the affine-scaling interior-point algorithm discussed above, the constant $\theta$ in Equation (1.0) is required to satisfy $0<\theta <1$; the admissible values of $\theta$ therefore exclude 0 and 1. The constant $\theta$ measures the fraction used of the distance that could be moved before the feasible region is exited. A convergence analysis of the method has been published using $\theta$ = 0.5. Tsuchiya and Muramatsu (1995) used $\theta$ = 2/3 in their convergence result, and Hillier and Lieberman (2010) used $\theta$ values of 0.5 and 0.9 in their Interactive Operations Research (IOR) software. In this study, the consequence of the value $\theta$ = 1 on the algorithm has been investigated. It has been observed that $\theta$ = 1 gives the least number of iterations for a given LP problem. This observation leads to an improved affine-scaling interior-point algorithm, identical to the algorithm above except that the admissible values of $\theta$ are now $0<\theta \le 1$.

Table 1(a) and Table 1(b) present some computational results of selected practical problems affirming the proposed algorithm. The tables specify the LP problems, selected $\theta$ values (with their corresponding numbers of iterations in brackets) and their optimal solutions, obtained using an interior-point program, written in MATLAB, based on the Affine-Scaling Interior Point Algorithm. To obtain the optimal solution of any LP problem in standard form, the program requires the user to input the initial feasible trial solution (which gives the diagonal matrix), the cost coefficients, the number of columns/rows of the identity matrix, the technological coefficients and the selected constant.


Table 1. (a), (b): Computational results of selected practical problems affirming the proposed algorithm.

It is seen from Table 1(a) and Table 1(b) that the number of iterations decreases as $\theta$ increases, and that $\theta$ = 1 gives the least number of iterations. Since $\theta$ = 1 gives the fewest iterations, it should be included in the algorithm's admissible range, as proposed above, to increase the efficiency of the algorithm.
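The trend reported in Table 1 can be reproduced on a small illustrative LP (our own example, not one of the paper's test problems; the `solve` helper and the tolerance are hypothetical choices). Iteration counts fall as $\theta$ grows, with $\theta$ = 1 finishing fastest:

```python
import numpy as np

def solve(theta, tol=1e-8, max_iter=1000):
    # The affine-scaling iteration of Steps 1-5 applied to an illustrative
    # LP: maximize x1 + 2*x2 s.t. x1 + x2 + x3 = 8, x >= 0 (optimum Z = 16).
    A = np.array([[1.0, 1.0, 1.0]])
    c = np.array([1.0, 2.0, 0.0])
    x = np.array([2.0, 2.0, 4.0])             # interior feasible start
    for k in range(1, max_iter + 1):
        D = np.diag(x)
        A_t, c_t = A @ D, D @ c
        w = np.linalg.solve(A_t @ A_t.T, A_t @ c_t)
        Cp = c_t - A_t.T @ w                  # projected gradient
        neg = Cp[Cp < 0]
        if neg.size == 0:                     # no improving direction: stop
            return x, k - 1
        x_new = D @ (np.ones_like(x) + (theta / np.abs(neg).max()) * Cp)
        if np.linalg.norm(x_new - x) < tol:   # virtually unchanged: stop
            return x_new, k
        x = x_new
    return x, max_iter

# Iteration counts for increasing theta, including the proposed theta = 1.
results = {t: solve(t) for t in (0.5, 0.9, 1.0)}
iterations = {t: k for t, (x, k) in results.items()}
```

On this example every $\theta$ reaches the optimal value $Z$ = 16, and the counts are ordered as in Table 1: larger $\theta$ takes a larger fraction of the step to the boundary, so fewer steps are needed.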

4. Conclusion

An Improved Affine-Scaling Interior Point Algorithm for Linear Programming has been proposed, and computational results of selected practical problems affirming it have been provided. The proposed algorithm is accurate and reduces the number of iterations required to obtain an optimal solution of a given Linear Programming (LP) problem compared with the existing Affine-Scaling Interior Point Algorithm. It can be very useful for the development of faster software packages for solving LP problems using interior-point methods.

Future Work

In this paper, computational results of selected practical problems affirming the proposed algorithm have been provided. We hope to provide a rigorous convergence proof for the algorithm in the near future.

Cite this paper: Boah, D. and Twum, S. (2019) An Improved Affine-Scaling Interior Point Algorithm for Linear Programming. Journal of Applied Mathematics and Physics, 7, 2531-2536. doi: 10.4236/jamp.2019.710172.
References

Karmarkar, N. (1984) A New Polynomial-Time Algorithm for Linear Programming. Combinatorica, 4, 373-395.
https://doi.org/10.1145/800057.808695

   Eiselt, H.A. and Sandblom, C.L. (2007) Linear Programming and Its Applications. Springer-Verlag, Berlin Heidelberg, 280.

   Singh, J.N. and Singh, D. (2002) Interior-Point Methods for Linear Programming: A Review. International Journal of Mathematical Education in Science and Technology, 33, 405-423.
https://doi.org/10.1080/00207390110119934

   Dikin, I.I. (1967) Iterative Solution of Problems of Linear and Quadratic Programming. Soviet Mathematics Doklady, 8, 674-675.

   Dikin, I.I. (1974) On the Speed of an Iterative Process. Upravlyaemye Sistemi, 12, 54-60.

   Karmarkar, N. and Ramakrishnan, U. (1985) Further Developments in the New Polynomial-Time Algorithm for Linear Programming. Talk Given at ORSA/TIMS National Meeting, Boston.

   Barnes, E.R. (1986) A Variation on Karmarkar’s Algorithm for Solving Linear Programming Problems. Mathematical Programming, 36, 174-182.
https://doi.org/10.1007/BF02592024

   Vanderbei, R.J., Meketon, M.S. and Freedman, B.A. (1986) A Modification of Karmarkar’s Linear Programming Algorithm. Algorithmica, 1, 395-407.
https://doi.org/10.1007/BF01840454

   Adler, I., Resende, M., Veiga, G. and Karmarkar, N. (1989) An Implementation of Karmarkar’s Algorithm for Linear Programming. Mathematical Programming, 44, 297-335.
https://doi.org/10.1007/BF01587095

   Hillier, F.S. and Lieberman, G.J. (2010) Introduction to Operations Research. Ninth Edition, McGraw-Hill Higher Education, New York, 195-259.

   Tsuchiya, T. and Muramatsu, M. (1995) Global Convergence of a Long-Step Affine Scaling Algorithm for Degenerate Linear Programming Problems. SIAM Journal on Optimization, 5, 525-551.
https://doi.org/10.1137/0805027
