The Simplex Method (SM) remained the most popular solution method for practical linear programming (LP) problems until the development of interior-point methods. Karmarkar was the pioneer in the field, and his Projective Scaling Method was able to compete with the SM on realistic problems. As the name suggests, interior-point methods approach an optimal point (which is known to lie on the boundary of the feasible set) through a sequence of interior points. Unlike in the SM, iterates are calculated not on the boundary but in the interior of the feasible region. Starting with an initial interior point, the method moves through the interior of the feasible set along an improving direction to another interior point. There, a new improving direction is found, along which a move is made to yet another interior point. This process is repeated, yielding a sequence of interior points that converges to an optimal boundary point. Many different types of interior-point methods for linear programming have been developed. Most fall under one of three main categories: projective and potential-reduction methods, affine-scaling methods and path-following methods. In this paper, an Improved Affine-Scaling Interior-Point Algorithm for LP is proposed with a view to increasing the efficiency of the original algorithm due to Dikin.
2. Materials and Methods
The Affine-Scaling Interior-Point Algorithm was first introduced by Dikin, who subsequently published a convergence analysis. The work went largely unnoticed for many years until it was rediscovered as a simple variant of Karmarkar’s algorithm. Here, the problem is rescaled so that the current iterate is kept some distance away from every boundary constraint, and the step length is then restricted so that the next move does not reach the boundary. The algorithm is as follows.
Given an optimization problem in standard form:

maximize $z = c^{T}x$

subject to $Ax = b$, $x \geq 0$,

where c, A and b are the cost coefficients, technological coefficients and resource availabilities respectively, the Affine-Scaling Interior-Point Algorithm is summarized in the following steps:
Step 1: Given the initial trial solution $x$ (a strictly interior point with $Ax = b$ and $x > 0$), set $D = \mathrm{diag}\{x_{1}, x_{2}, \ldots, x_{n}\}$.
Step 2: Calculate $\tilde{A} = AD$ and $\tilde{c} = Dc$.
Step 3: Calculate $P = I - \tilde{A}^{T}(\tilde{A}\tilde{A}^{T})^{-1}\tilde{A}$ and $c_{p} = P\tilde{c}$, where $P$ is a projection matrix and $c_{p}$ is a projected gradient.
Step 4: Identify the negative component of $c_{p}$ having the largest absolute value, and set $v$ to this absolute value. Then calculate

$\tilde{x} = \mathbf{1} + \dfrac{\alpha}{v}c_{p}$,     (1.0)

where $\alpha$ is a selected constant with $0 < \alpha < 1$.
Step 5: Calculate $x = D\tilde{x}$ as the trial solution for the next iteration, starting again from Step 1.
(If this trial solution is virtually unchanged from the preceding one, then the algorithm has virtually converged to an optimal solution and is terminated.)
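Read as a maximization problem in standard form, the five steps above can be sketched in Python with NumPy. The function name, tolerances and the stopping guard on the projected gradient are illustrative choices, not part of the original algorithm statement:

```python
import numpy as np

def affine_scaling(c, A, b, x0, alpha=0.5, tol=1e-6, max_iter=500):
    """Affine-scaling interior-point sketch for: max c^T x  s.t.  Ax = b, x >= 0.

    x0 must be strictly positive with A x0 = b (b enters only through x0).
    """
    x = np.array(x0, dtype=float)
    for k in range(max_iter):
        D = np.diag(x)                               # Step 1: rescale by current iterate
        A_t, c_t = A @ D, D @ c                      # Step 2: A~ = AD, c~ = Dc
        # Step 3: P projects onto the null space of A~; c_p is the projected gradient
        P = np.eye(len(x)) - A_t.T @ np.linalg.solve(A_t @ A_t.T, A_t)
        cp = P @ c_t
        # Step 4: v = |most negative component of c_p|; stop if none is significantly negative
        if cp.min() > -1e-9:
            break
        v = -cp.min()
        x_tilde = np.ones_like(x) + (alpha / v) * cp # components stay >= 1 - alpha
        x_new = D @ x_tilde                          # Step 5: map back to the original space
        if np.linalg.norm(x_new - x) < tol:          # virtually unchanged: converged
            return x_new, k + 1
        x = x_new
    return x, k + 1
```

For example, maximizing 2x1 + x2 subject to x1 + x2 ≤ 1 (with a slack variable appended for standard form) from the interior point (1/3, 1/3, 1/3) converges to approximately x = (1, 0, 0).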
3. Results and Discussion
The Improved Affine-Scaling Interior-Point Algorithm
In the affine-scaling interior-point algorithm discussed above, the selected constant $\alpha$ in Equation 1.0 is required to satisfy $0 < \alpha < 1$; thus the admissible values exclude 0 and 1. The constant $\alpha$ measures the fraction used of the distance that could be moved before the feasible region is exited. A convergence analysis of the method has been published using $\alpha = 0.5$, Tsuchiya and Muramatsu used $\alpha = 2/3$ in their convergence result, and values of 0.5 and 0.9 are used in the Interactive Operations Research (IOR) software. In this study, the consequence of the value $\alpha = 1$ on the algorithm has been investigated. It has been observed that $\alpha = 1$ gives the least number of iterations for a given LP problem. This observation leads to an improved affine-scaling interior-point algorithm, which is the same as the affine-scaling interior-point algorithm above but with the admissible values now given as $0 < \alpha \leq 1$.
Table 1(a) and Table 1(b) present computational results for selected practical problems affirming the proposed algorithm. The tables specify the LP problems, the selected $\alpha$ values (with their corresponding numbers of iterations in brackets) and the optimal solutions obtained using an Interior-Point Program, written in MATLAB, based on the Affine-Scaling Interior-Point Algorithm. To obtain the optimal solution of any LP problem in standard form, the program requires the user to input the initial feasible trial solution (which defines the diagonal matrix $D$), the cost coefficients, the number of columns/rows of the identity matrix, the technological coefficients and the selected constant $\alpha$.
Table 1. (a): Computational results of selected practical problems affirming the proposed algorithm; (b): computational results of selected practical problems affirming the proposed algorithm.
It is seen from Table 1(a) and Table 1(b) that the number of iterations decreases as $\alpha$ increases, and that $\alpha = 1$ gives the least number of iterations. Since $\alpha = 1$ gives the least number of iterations, it should be admitted in the algorithm, as proposed above, to increase the algorithm’s efficiency.
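The trend reported in the tables can be illustrated on a toy problem. The following self-contained script is a sketch under assumed choices (the small maximization problem, the tolerances and all names are the writer’s illustrative assumptions, not problems from Table 1); it runs the affine-scaling iteration for several $\alpha$ values, including $\alpha = 1$, and counts the iterations each needs:

```python
import numpy as np

def iterations_for(alpha, tol=1e-6, max_iter=500):
    """Count affine-scaling iterations on a toy LP: max 2x1 + x2, x1 + x2 <= 1."""
    c = np.array([2.0, 1.0, 0.0])        # slack variable appended for standard form
    A = np.array([[1.0, 1.0, 1.0]])
    x = np.array([1/3, 1/3, 1/3])        # strictly interior start with A x = b = [1]
    for k in range(max_iter):
        D = np.diag(x)
        A_t, c_t = A @ D, D @ c
        P = np.eye(3) - A_t.T @ np.linalg.solve(A_t @ A_t.T, A_t)
        cp = P @ c_t
        if cp.min() > -1e-9:             # no significantly improving direction left
            break
        x_new = D @ (np.ones(3) + (alpha / -cp.min()) * cp)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return k + 1

for a in (0.5, 0.9, 1.0):
    print(f"alpha = {a}: {iterations_for(a)} iterations")
```

On this example the iteration count decreases as $\alpha$ grows, with $\alpha = 1$ needing the fewest iterations, matching the pattern described above.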
An Improved Affine-Scaling Interior-Point Algorithm for Linear Programming has been proposed, and computational results for selected practical problems affirming it have been provided. The proposed algorithm is accurate and faster, reducing the number of iterations required to obtain an optimal solution of a given Linear Programming (LP) problem compared with the existing Affine-Scaling Interior-Point Algorithm. It can be very useful for the development of faster software packages for solving linear programming problems by interior-point methods.
We hope to provide a rigorous convergence proof of the proposed algorithm in the near future.
 Singh, J.N. and Singh, D. (2002) Interior-Point Methods for Linear Programming: A Review. International Journal of Mathematical Education in Science and Technology, 33, 405-423.
 Adler, I., Resende, M., Veiga, G. and Karmarkar, N. (1989) An Implementation of Karmarkar’s Algorithm for Linear Programming. Mathematical Programming, 44, 297-335.
 Tsuchiya, T. and Muramatsu, M. (1995) Global Convergence of a Long-Step Affine Scaling Algorithm for Degenerate Linear Programming Problems. SIAM Journal on Optimization, 5, 525-551.