Sensitivity Analysis on the Negative Degree of Difficulty Geometric Programming Problem

Affiliation(s)

^{1} Department of Statistics, Federal University of Technology, Owerri, Imo State, Nigeria.

^{2} Department of Maritime Management Technology, Federal University of Technology, Owerri, Imo State, Nigeria.

^{3} Department of Transport Management Technology, Federal University of Technology, Owerri, Imo State, Nigeria.

ABSTRACT

The range of optimal values in cost optimization models provides management with options for decision making. However, it can be quite challenging to achieve a feasible range of optimality in geometric programming (GP) models having negative degrees of difficulty. In this paper, we conduct sensitivity analysis on the optimal solution of a geometric programming problem with negative degree of difficulty. Using imprest data, we determine the optimal objective function, the dual and primal decision variables, and the range of values within which the cost coefficients and the RHS constraint must lie for the solution to remain optimal. From the analysis, we established that incremental sensitivity analysis has the functional form $f\left(x\right)=Qx+c$.

KEYWORDS

Negative Degree of Difficulty, Sensitivity Analysis, Primal Decision Variables, Dual Decision Variables, Global Optimal Solution

1. Introduction

In every optimization problem, it is of interest not only to obtain an optimal solution of the problem but also to find how stable that optimal solution is under changes in the different coefficients of the problem [1]. In the geometric programming literature, a negative degree of difficulty problem is said not to have a solution; see [2] [3] and [4]. Nevertheless, the work done by [5] shows that a negative degree of difficulty geometric programming problem can be solved using the modified generalized inverse method. According to [6], sensitivity analysis was carried out for the case where the degree of difficulty was zero (k = 0) by [7], and follow-up studies where the degree of difficulty was one (k = 1) were done by [6] [8] [9]. But due to the peculiar nature of geometric programming with negative degrees of difficulty, no previous work has addressed that case.

We carry out sensitivity analysis, or post-optimality tests, in geometric programming on the cost coefficients of the objective function and the right-hand side (RHS) of the constraint equations to determine the effect of changes on the new optimal dual decision variables and the new optimal objective function. The range of values for the changes was determined; the new optimal objective function and the dual decision variables were also determined. Sensitivity analysis is concerned with how small changes in the constraints or the objective function affect the optimal objective value [10]. It should be noted that in the standard form of geometric programming, the right-hand side of each inequality must be bounded above by unity. So a change in the right-hand side of a constraint is equivalent to a change in the cost coefficients of the constraint equation after the constraint has been transformed and its new right-hand side reduced to unity. The effect of the change in the right-hand side of the constraint equation was tested.

We observed that [8] did fundamental work on sensitivity analysis in geometric programming for k > 0. The author stated and proved some important theorems which were adopted by [9]. The authors of [6] gave much meaning to the work of [8] by computing the values of important parameters in sensitivity analysis; they advanced their earlier work [9] while maintaining the basic theorems of that fundamental work. In carrying out sensitivity analysis in this paper, we adopted the procedure described by [6] [8] [9] and [10]. We carried out incremental sensitivity analysis, where a cost coefficient is increased by a given percentage and the effects on the optimal objective function and the optimal dual variables are determined. Our primary concern is with the cost coefficients of the objective function and the RHS of the constraint equations, their overall effect on the new optimal objective function, and the range of values for such changes.

2. Method

Our interest is to determine the range of values within which the changes in the cost coefficients and the right-hand side must lie. We illustrate the methodology by modeling real-life data (monthly imprests) obtained from the Department of Maritime Management Technology as a negative degree of difficulty geometric programming problem; its optimal solution was determined before the sensitivity analysis was carried out on it. We observed that [6] used an incremental procedure for sensitivity analysis based on the transformed dual decision variable given by

${y}_{i}^{\ast}={b}_{i}^{\left(0\right)}+{\displaystyle \underset{j=1}{\overset{n}{\sum}}{r}_{j}{b}_{i}^{\left(j\right)}}$ (1)

The n-dimensional vectors
${b}_{i}^{\left(0\right)},{b}_{i}^{\left(j\right)};i=1,\cdots ,n$ are constants; ${y}_{i}$ is the dual decision variable and ${r}_{j}$ is a constant. For a positive degree of difficulty problem, k > 0, the optimal dual decision variables
${y}_{i}^{*}$ are independent of the primal cost coefficients ${c}_{i}$, and the relationship among changes in the ${c}_{i}$ and ${y}_{i}$ is given by

$d{y}_{i}={\displaystyle \underset{j=1}{\overset{n}{\sum}}\left\{{b}_{i}^{\left(j\right)}{\displaystyle \underset{l=1}{\overset{n}{\sum}}\left[{J}_{jl}^{-1}\left({y}^{\ast}\right){\displaystyle \underset{k=1}{\overset{n}{\sum}}{b}_{k}^{\left(l\right)}\left(d{c}_{k}/{c}_{k}\right)}\right]}\right\}}$ (2)

where

${J}_{jl}\left({y}^{\ast}\right)={\displaystyle \underset{i=1}{\overset{n}{\sum}}\left\{{b}_{i}^{\left(j\right)}{b}_{i}^{\left(l\right)}/{y}_{i}^{\ast}\right\}}$ (3)

where ${J}_{jl}$ is the Jacobian matrix and

$dv/{v}^{\ast}={\displaystyle \sum {y}_{i}^{\ast}d{c}_{i}/{c}_{i}}$ (4)

For a change in the primal cost coefficients, $d{c}_{i}/{c}_{i}$, we compute the new solution as

${v}^{1}={v}^{\ast}+dv$ (5)

and the new dual variables become

${y}_{i}^{1}={y}_{i}^{\ast}+d{y}_{i}$ (6)

The range of value for ${c}_{i}$ is

${c}_{i}-{y}_{i}^{\ast}{D}^{-1}<{c}_{i}^{1}<{c}_{i}+{y}_{i}^{\ast}{D}^{-1}$ (7)

where

$D={\displaystyle \underset{j=1}{\overset{n}{\sum}}\left\{{b}_{i}^{\left(j\right)}{\displaystyle \underset{l=1}{\overset{n}{\sum}}\left[{J}_{jl}^{-1}\left({y}^{\ast}\right){b}_{i}^{\left(l\right)}\right]}\right\}}$ (8)

For constraint equations, we compute for the range of values as follows:

${g}_{k}\left(x\right)\le 1$ (9)

Let the right-hand side be increased or decreased by R. Hence, we have

${g}_{k}\left(x\right)\le 1\pm R$

Let $1\pm R={\Delta}_{R}$

We have

${g}_{k}\left(x\right)\le {\Delta}_{R}$

Converting the constraint equation to standard form, we have

$\frac{1}{{\Delta}_{R}}{g}_{k}\left(x\right)\le 1$ (10)

The new problem becomes:

Minimize $f\left(x\right)$ (11)

Subject to $\frac{1}{{\Delta}_{R}}{g}_{k}\left(x\right)\le 1$ (12)

The effect of the change in the RHS on the cost coefficients of the constraint equation is then determined, alongside its range of values.

The range of values for ${\Delta}_{R}$ is

${c}_{R}-{y}_{i}^{\ast}{D}_{R}^{-1}<{\Delta}_{R}<{c}_{R}+{y}_{i}^{\ast}{D}_{R}^{-1}$ (13)
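Equations (1)-(4) above can be sketched in NumPy for a single perturbation parameter (j = 1). The b-vectors and duals below are hypothetical illustrative values, not the imprest model's:

```python
import numpy as np

# Hypothetical b^(1) vector and optimal duals y* (illustrative values only).
b1     = np.array([1.00, 0.50, 0.25])
y_star = np.array([0.50, 0.30, 0.20])

# Equation (3): J_11 = sum_i b_i^(1) b_i^(1) / y_i*
J11 = np.sum(b1 * b1 / y_star)

# Equation (2) with only c_1 perturbed:
# dy_i = b_i^(1) * J_11^(-1) * b_1^(1) * (dc_1/c_1)
dc_over_c = 0.10                      # a 10% increase in c_1
dy = b1 * (1.0 / J11) * b1[0] * dc_over_c

# Equation (4): relative change in the optimal objective value
dv_over_v = y_star[0] * dc_over_c

# Equations (5)-(6): updated dual variables
y_new = y_star + dy
```

The same pattern extends to several simultaneous perturbations by assembling the full Jacobian of Equation (3) and inverting it.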

Table 1. Expenditure on monthly imprest (September to November, 2016) by the Department of Maritime Management Technology, FUTO.

Source: Dept. of Maritime Management Technology, FUTO, 2016.

3. Application of the Technique

Table 1 presents the expenditure on monthly imprest from September to November 2016 by the Department of Maritime Management Technology, Federal University of Technology Owerri. Our interest here is to minimize the cost of expenditure on the imprest. To do that, we let variables X_{i} represent weights of the various items the Department purchased during the period.

Let X_{1} = snacks; X_{2} = FUTO water; X_{3} = generator maintenance; X_{4} = Amstel malt and X_{5} = stationeries.

The total expenditure for the period is the sum of the individual monthly expenditures on these variables ${X}_{i};i=1,\cdots ,5$. These sums are written as:

$U\left(x\right)={U}_{1}\left(x\right)+\cdots +{U}_{n}\left(x\right)$ (14)

where

${U}_{1}\left(x\right)=20000{x}_{1}{x}_{3}{x}_{5}$

${U}_{2}\left(x\right)=20000{x}_{2}{x}_{4}$

${U}_{3}\left(x\right)=20000{x}_{1}{x}_{3}{x}_{4}$

${x}_{i}>0$; ${C}_{j}>0$ and $n=\text{3}$

Putting the U_{i}(x) in an unconstrained geometric programming form, we have

Minimize $f\left(x\right)=20000{x}_{1}{x}_{3}{x}_{5}+20000{x}_{2}{x}_{4}+20000{x}_{1}{x}_{3}{x}_{4}$ (15)

Reformulating Equation (15) as a constrained geometric programming problem, we have

Minimize $f\left(x\right)=20000{x}_{1}{x}_{2}^{-2}{x}_{3}{x}_{4}^{-1}{x}_{5}+20000{x}_{1}^{-1}{x}_{2}^{2}{x}_{3}^{-2}{x}_{4}{x}_{5}^{-1}+20000{x}_{1}^{-1}{x}_{2}{x}_{3}{x}_{4}{x}_{5}^{-1}$ (16)

Subject to

${x}_{1}^{2}{x}_{2}^{2}{x}_{5}{x}_{4}+{x}_{1}^{-1}{x}_{2}^{-2}{x}_{3}^{2}{x}_{4}^{-2}{x}_{5}\le 1$ (17)

Equations (16) and (17) constitute a constrained geometric programming problem with negative one degree of difficulty (K = −1), since $K=n-\left(m+1\right)=5-\left(5+1\right)=-1$, where K is the degree of difficulty, n = 5 is the number of terms and m = 5 is the number of variables. The degree of difficulty is a measure of the computational complexity of the problem.

Since the solution is not unique, we maximize the dual geometric program subject to linear constraint.

$\text{Maximize}\text{\hspace{0.17em}}f\left(y\right)={\displaystyle \underset{k=0}{\overset{m}{\prod}}{\displaystyle \underset{j=1}{\overset{{N}_{k}}{\prod}}{\left(\frac{{C}_{kj}}{{y}_{kj}}\right)}^{{y}_{kj}}}}{\displaystyle \underset{k=1}{\overset{m}{\prod}}{\lambda}_{k}^{{\lambda}_{k}}};\text{\hspace{0.17em}}{\lambda}_{k}={\displaystyle \underset{j=1}{\overset{{N}_{k}}{\sum}}{y}_{kj}}$ (18)

Subject to

$Ay=B$ (19)

Forming the orthogonality and normality conditions from the exponent matrix and writing them in the form of Equation (19), we have

${y}_{1}-{y}_{2}-{y}_{3}+2{y}_{4}-{y}_{5}=0$

$-2{y}_{1}+2{y}_{2}+{y}_{3}+2{y}_{4}-2{y}_{5}=0$

${y}_{1}-2{y}_{2}+{y}_{3}+0{y}_{4}+2{y}_{5}=0$

$-{y}_{1}+{y}_{2}+{y}_{3}+{y}_{4}-2{y}_{5}=0$

${y}_{1}-{y}_{2}-{y}_{3}+{y}_{4}+{y}_{5}=0$

${y}_{1}+{y}_{2}+{y}_{3}+0{y}_{4}+0{y}_{5}=1$

${\left[\begin{array}{ccccc}1& -1& -1& 2& -1\\ -2& 2& 1& 2& -2\\ 1& -2& 1& 0& 2\\ -1& 1& 1& 1& -2\\ 1& -1& -1& 1& 1\\ 1& 1& 1& 0& 0\end{array}\right]}_{{A}_{\left(6\times 5\right)}}{\left[\begin{array}{c}{y}_{1}\\ {y}_{2}\\ {y}_{3}\\ {y}_{4}\\ {y}_{5}\end{array}\right]}_{{y}_{\left(5\times 1\right)}}={\left[\begin{array}{c}0\\ 0\\ 0\\ 0\\ 0\\ 1\end{array}\right]}_{{B}_{\left(6\times 1\right)}}$

>> $A=\left[1,-1,-1,2,-1;\cdots ;1,1,1,0,0\right]$;

>> $B=\left[0;0;0;0;0;1\right]$;

${y}^{*}=\text{pinv}\left(A\right)\ast B$

${y}^{*}=\left[\begin{array}{c}0.4620\\ 0.3662\\ 0.1696\\ 0.0530\\ 0.0483\end{array}\right]>0$

${y}^{*}$ are the optimal weights of the dual decision variables and they satisfy the orthogonality and normality conditions.
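The pseudoinverse step above can be reproduced in NumPy; a minimal sketch (the paper uses MATLAB's pinv, NumPy's np.linalg.pinv computes the same Moore-Penrose pseudoinverse):

```python
import numpy as np

# Exponent matrix A (five orthogonality rows plus the normality row)
# and right-hand side B, exactly as assembled above.
A = np.array([
    [ 1, -1, -1,  2, -1],
    [-2,  2,  1,  2, -2],
    [ 1, -2,  1,  0,  2],
    [-1,  1,  1,  1, -2],
    [ 1, -1, -1,  1,  1],
    [ 1,  1,  1,  0,  0],
], dtype=float)
B = np.array([0, 0, 0, 0, 0, 1], dtype=float)

# Minimum-norm least-squares solution of the overdetermined system Ay = B,
# the generalized-inverse step used for the K = -1 problem.
y_star = np.linalg.pinv(A) @ B
print(np.round(y_star, 4))  # ≈ [0.4620 0.3662 0.1696 0.0530 0.0483]
```

Because the system is overdetermined (K = −1), the pseudoinverse returns the least-squares solution, so the orthogonality and normality conditions hold approximately rather than exactly.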

From Equation (18), we substitute ${y}^{*}$ and compute the optimal objective function as follows:

$\begin{array}{c}{f}^{*}\left(y\right)={f}^{*}\left(x\right)\\ =\left({\left(\frac{20000}{0.4620}\right)}^{0.4620}\right)\ast \left({\left(\frac{20000}{0.3662}\right)}^{0.3662}\right)\ast \left({\left(\frac{20000}{0.1696}\right)}^{0.1696}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\ast \left({\left(\frac{1}{0.0530}\right)}^{0.0530}\right)\ast \left({\left(\frac{1}{0.0483}\right)}^{0.0483}\right)\ast \left({\left(0.1013\right)}^{0.1013}\right)\\ =58534\end{array}$

The above is the global optimal objective function; that is, the optimal expenditure for the period should be 58,534 instead of 60,000. The contribution of each primal decision variable to the objective function is:

${x}_{i}=\mathrm{exp}\left({w}^{*}\right)=\left[\begin{array}{c}12.0770\\ 0.7094\\ 0.6900\\ 22.7982\\ 1.8578\end{array}\right]>0$
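The substitution into Equation (18) can be checked numerically. A sketch: the constraint cost coefficients are 1, and the sum of the constraint dual weights, 0.0530 + 0.0483 = 0.1013, enters with itself as exponent (the final 0.1013^0.1013 factor above):

```python
import numpy as np

def dual_objective(c, y, lam):
    """GP dual value: product of (c_j / y_j)^{y_j} times lam^lam."""
    c, y = np.asarray(c, float), np.asarray(y, float)
    return np.prod((c / y) ** y) * lam ** lam

y_star = [0.4620, 0.3662, 0.1696, 0.0530, 0.0483]
c      = [20000, 20000, 20000, 1, 1]   # objective costs, then unit constraint costs
lam    = 0.0530 + 0.0483               # sum of the constraint dual weights

f_star = dual_objective(c, y_star, lam)
print(round(f_star))  # ≈ 58534
```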

Allocation of New Values to the Items Purchased

Table 2 presents the summary of optimal allocation of imprest from September to November 2016. In the table, C_{in} represents the initial allocation for each item; C_{j} represents the new average optimal allocation for each month; and C_{jn} represents the final allocation for each item purchased.

Table 2. Summary of optimal allocation of imprest from Sept. to Nov. 2016.

4. Application of the Technique to Sensitivity Analysis

4.1. Change in the Cost Coefficients

A 10% increase in the September imprest will result in some changes in the optimal dual decision variables and the optimal objective function.

The optimal dual decision variable ${y}_{i}^{\ast}$ as presented in Equation (1) is computed as

${y}_{i}^{\ast}={b}_{i}^{\left(0\right)}+{\displaystyle \underset{j=1}{\overset{n}{\sum}}{r}_{j}{b}_{i}^{(j)}}$

$\left[\begin{array}{c}0.0000\\ 0.2505\\ 0.11601\\ 0.03625\\ 0.03304\end{array}\right]+{r}_{1}\left[\begin{array}{c}1.0000\\ 0.2505\\ 0.11601\\ 0.03625\\ 0.03304\end{array}\right]$; where ${r}_{1}=0.4620$

We choose any value in the column of ${b}^{\left(j\right)}$ and let it be ${r}_{1}$, and we make use of the relationship
$\begin{array}{l}{b}_{2}\left(1+r\right)={y}_{i}^{\ast}\Rightarrow {b}_{2}\left(1+r\right)=0.3662\\ {b}_{2}=\frac{0.3662}{1+0.4620}=0.2505;\text{\hspace{0.17em}}\text{etc}.\end{array}$

${J}_{11}={\displaystyle \underset{i=1}{\overset{n}{\sum}}\frac{{b}_{i}^{\left(1\right)}{b}_{i}^{\left(1\right)}}{{y}_{i}^{\ast}}}={\displaystyle \underset{i=1}{\overset{n}{\sum}}\frac{{\left({b}_{i}^{\left(1\right)}\right)}^{2}}{{y}_{i}^{\ast}}}$

${J}_{11}=\frac{1}{0.4620}+\frac{0.06275}{0.3662}+\frac{0.01346}{0.1696}+\frac{0.01314}{0.0530}+\frac{0.001092}{0.0483}=2.685753$

Hence, if c_{1} = 20,000.00 is increased by 10%, making the total imprest for the month of September to be 22,000.00, we compute the changes in the dual decision variables as follows:

$d{y}_{5}={b}_{5}^{\left(1\right)}{J}_{11}^{-1}{b}_{1}^{\left(1\right)}d{c}_{1}/{c}_{1}$

$d{y}_{5}=\left(0.03304\right)\left(1/{J}_{11}\right)\left(1.0000\right)\left(1/10\right)=0.001230$

$d{y}_{4}=\left(0.03625\right)\left(1/{J}_{11}\right)\left(1.0000\right)\left(1/10\right)=0.00135$

$d{y}_{3}=\left(0.11601\right)\left(1/{J}_{11}\right)\left(1.0000\right)\left(1/10\right)=0.00432$

$d{y}_{2}=\left(0.2505\right)\left(1/{J}_{11}\right)\left(1.0000\right)\left(1/10\right)=0.00933$

$d{y}_{1}=\left(1.0000\right)\left(1/{J}_{11}\right)\left(1.0000\right)\left(1/10\right)=0.03723$

The new dual decision variables are:

${y}^{1}={y}_{i}^{\ast}+d{y}_{i}$

${y}_{1}=0.4620+0.03723=0.49923$

${y}_{2}=0.3662+0.00933=0.37553$

${y}_{3}=0.1696+0.00432=0.17392$

${y}_{4}=0.0530+0.00135=0.05435$

${y}_{5}=0.0483+0.001230=0.04953$

${y}_{i}^{1}=\left[\begin{array}{c}0.49923\\ 0.37553\\ 0.17392\\ 0.05435\\ 0.04953\end{array}\right]>0$

The new optimal dual decision variables satisfy the orthogonality, normality, and non-negativity conditions, and we can observe some variation in the corresponding values of the dual decision variables.
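The update steps above can be reproduced numerically; a sketch using the reported value J_11 = 2.685753 and the b^(1) vector computed earlier:

```python
import numpy as np

J11    = 2.685753                      # Jacobian value reported above
b1     = np.array([1.0000, 0.2505, 0.11601, 0.03625, 0.03304])
y_star = np.array([0.4620, 0.3662, 0.1696, 0.0530, 0.0483])

# dy_i = b_i^(1) * J_11^(-1) * b_1^(1) * (dc_1/c_1), with dc_1/c_1 = 10% = 1/10
dy    = b1 * (1.0 / J11) * b1[0] * 0.1
y_new = y_star + dy

print(np.round(dy, 5))     # ≈ [0.03723 0.00933 0.00432 0.00135 0.00123]
print(np.round(y_new, 5))  # ≈ [0.49923 0.37553 0.17392 0.05435 0.04953]
```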

The new optimal objective function is computed as follows:

$\begin{array}{l}{f}^{\ast \left(New\right)}\left(x\right)=\left({\left(\frac{20000}{0.49923}\right)}^{0.49923}\right)\ast \left({\left(\frac{20000}{0.37553}\right)}^{0.37553}\right)\ast \left({\left(\frac{20000}{0.17392}\right)}^{0.17392}\right)\\ \text{\hspace{1em}}\text{\hspace{1em}}\ast \left({\left(\frac{1}{0.05435}\right)}^{0.05435}\right)\ast \left({\left(\frac{1}{0.04953}\right)}^{0.04953}\right)\ast \left({\left(0.10388\right)}^{0.10388}\right)\end{array}$

${f}^{\ast \left(New\right)}\left(x\right)=96464$

From the above sensitivity analysis, we can see that a 10% increase in the September imprest leads to an increase in the optimal objective function value from 58,534 to 96,464, and the optimal dual decision variables change from

${y}^{*}=\left[\begin{array}{c}0.4620\\ 0.3662\\ 0.1696\\ 0.0530\\ 0.0483\end{array}\right]>0$ to ${y}_{i}^{1}=\left[\begin{array}{c}0.4992\\ 0.3755\\ 0.1739\\ 0.05435\\ 0.04953\end{array}\right]>0$

Table 3 presents the percentage change in the cost coefficient; that is, the percentage change in the imprest allocation for the month and the corresponding new optimal objective functions associated with the change.

The range of values within which changes in the September imprest must lie is:

$c-{y}_{i}^{\ast}{D}^{-1}<{c}^{1}<c+{y}_{i}^{\ast}{D}^{-1}$

${D}^{-1}=2.685753$

${y}_{i}^{\ast}{D}^{-1}=0.4620\ast 2.685753=1.24$

$19998.76\le {c}^{1}\le 20001.24$

So, if the cost coefficient c_1 (the September imprest) lies between 19,998.76 and 20,001.24, the objective function will remain optimal; that is, it will remain 58,534.
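The bound computation of Equation (7) can be sketched as (D^{-1} = 2.685753 as reported above):

```python
# Range of the September cost coefficient c_1 per Equation (7):
# c_1 - y_1* D^-1 < c_1' < c_1 + y_1* D^-1
c1, y1, D_inv = 20000.0, 0.4620, 2.685753

half_width = y1 * D_inv                  # y_1* . D^-1 ≈ 1.24
lower, upper = c1 - half_width, c1 + half_width
print(round(lower, 2), round(upper, 2))  # 19998.76 20001.24
```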

In this study, we have established that incremental sensitivity analysis on the negative degrees of difficulty geometric programming problems can be represented as

$f\left(x\right)=Qx+c$

where $f\left(x\right)$ is the new optimal objective function, Q is the slope of the linear function, c is the initial global optimal solution, and $x$ is the percentage change in the cost coefficient; see Figure 1. If we determine the slope of the line, we can always obtain the new optimal objective function without calculating it afresh. Hence, we determined the slope of the line as Q = 3793, and if the September imprest is increased by 15%, we have

$f\left(x\right)=3793{x}_{i}+{c}_{i}$

$f\left(x\right)=3793\times 15+58534=115429\text{\hspace{0.17em}}\text{naira}$

Table 3. Percentage Change in the Cost Coefficient/optimal objective function for K = −1.

Figure 1. Graph of incremental sensitivity analysis for negative one degree of difficulty.

Therefore, we can predict the new optimal solution without starting the calculations afresh.
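The incremental relationship reduces to a one-line predictor; Q = 3793 is the slope determined above and c = 58534 the initial global optimum:

```python
def predicted_optimum(pct_increase, Q=3793, c=58534):
    """Incremental sensitivity model f(x) = Q*x + c, where x is the
    percentage increase in the September cost coefficient."""
    return Q * pct_increase + c

print(predicted_optimum(15))  # 115429
print(predicted_optimum(10))  # 96464 (matches the new optimum for a 10% increase)
```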

4.2. Addition of Constraint Equation(s)

If a constraint equation with a unit cost coefficient is added to the existing problem on imprest, what happens to the optimal solution?

Solution:

Minimize $f\left(x\right)=20000{x}_{1}{x}_{2}^{-2}{x}_{3}{x}_{4}^{-1}{x}_{5}+20000{x}_{1}^{-1}{x}_{2}^{2}{x}_{3}^{-2}{x}_{4}{x}_{5}^{-1}+20000{x}_{1}^{-1}{x}_{2}{x}_{3}{x}_{4}{x}_{5}^{-1}$

Subject to ${x}_{1}^{2}{x}_{2}^{2}{x}_{4}{x}_{5}+{x}_{1}^{-1}{x}_{2}^{-2}{x}_{3}^{2}{x}_{4}^{-2}{x}_{5}\le 1$

${x}_{1}^{-2}{x}_{2}^{-2}{x}_{4}^{-1}{x}_{5}^{-1}\le 1$

Forming orthogonality and normality conditions, we have

${y}_{1}-{y}_{2}-{y}_{3}+2{y}_{4}-{y}_{5}-2{y}_{6}=0$

$-2{y}_{1}+2{y}_{2}+{y}_{3}+2{y}_{4}-2{y}_{5}-2{y}_{6}=0$

${y}_{1}-2{y}_{2}+{y}_{3}+0{y}_{4}+2{y}_{5}+0{y}_{6}=0$

$-{y}_{1}+{y}_{2}+{y}_{3}+{y}_{4}-2{y}_{5}-{y}_{6}=0$

${y}_{1}-{y}_{2}-{y}_{3}+{y}_{4}+{y}_{5}-{y}_{6}=0$

${y}_{1}+{y}_{2}+{y}_{3}+0{y}_{4}+0{y}_{5}+0{y}_{6}=1$

${\left[\begin{array}{cccccc}1& -1& -1& 2& -1& -2\\ -2& 2& 1& 2& -2& -2\\ 1& -2& 1& 0& 2& 0\\ -1& 1& 1& 1& -2& -1\\ 1& -1& -1& 1& 1& -1\\ 1& 1& 1& 0& 0& 0\end{array}\right]}_{{A}_{\left(6\times 6\right)}}{\left[\begin{array}{c}{y}_{1}\\ {y}_{2}\\ {y}_{3}\\ {y}_{4}\\ {y}_{5}\\ {y}_{6}\end{array}\right]}_{{y}_{\left(6\times 1\right)}}={\left[\begin{array}{c}0\\ 0\\ 0\\ 0\\ 0\\ 1\end{array}\right]}_{{B}_{\left(6\times 1\right)}}$

>> $A=\left[1,-1,-1,2,-1,-2;\cdots ;1,1,1,0,0,0\right]$;

>> $B=\left[0;0;0;0;0;1\right]$;

${y}^{*}=\text{pinv}\left(A\right)\ast B=0$

and ${f}^{*}\left(x\right)=0$.

Hence, we conclude that the addition of constraint equations to this problem leads to a degenerate and infeasible optimal solution.

5. Conclusions

Since the expenditure of monthly imprest can vary from time to time depending on the Departmental needs, therefore, we advise as follows (see Table 2):

1) The Department could spend ₦3902.266 instead of ₦4000 and ₦3609.60 instead of ₦3700, respectively, on snacks for the months of September and November 2016.

2) The Department could spend ₦7024.0226 instead of ₦7200 on FUTO water for the month of October 2016.

3) The Department could spend ₦9950.778 instead of ₦10,200 and ₦11,219.00 instead of ₦11,500, respectively, on generator maintenance for the months of September and November 2016.

4) The Department could spend ₦12,487.3074 instead of ₦12,800 and ₦4682.73 instead of ₦4800, respectively, on Amstel malt for the months of October and November 2016.

5) The Department could spend ₦5658.286 instead of ₦5800 on stationery for the month of September 2016, had it applied our model.

In addition, we conducted sensitivity analysis on the modeled problem and observed that the incremental sensitivity analysis is of the form $f\left(x\right)=Q{x}_{i}+{c}_{i}$. When there is a 15% increase in the September allocation (expenditure), the optimal objective function will be $f\left(x\right)=3793\times 15+58534=115429$. Sensitivity analysis helps us determine the bounds within which the cost coefficients may be adjusted without violating the optimal objective function; we found this interval to be $19998.76\le {c}^{1}\le 20001.24$. Moreover, these results can guide the Department in particular, and the University in general, on how to allocate scarce resources.

Acknowledgements

We acknowledge the authors, researchers and publishers whose materials we have used. We also acknowledge the Department of Maritime Management Technology, Federal University of Technology Owerri (FUTO) for providing us with data for this paper.

Cite this paper

Amuji, H., Olewuezi, N., Onwuegbuchunam, D. and Igboanusi, C. (2020) Sensitivity Analysis on the Negative Degree of Difficulty Geometric Programming Problem. *American Journal of Operations Research*, **10**, 13-23. doi: 10.4236/ajor.2020.101002.


References

[1] Arua, A.I., Chigbu, P.E., Chukwu, W.I.E., Ezekwem, C.C. and Okafor, F.C. (2000) Advanced Statistics for Higher Education, Vol. 1. Academic Publishers, Nsukka, Nigeria.

[2] Avriel, M. and Williams, A.C. (1970) Complementary Geometric Programming. SIAM Journal on Applied Mathematics, 19, 125-141. https://doi.org/10.1137/0119011

[3] Sengupta, J.K. and Portillo-Campbell, J.H. (1972) The Approach of Geometric Programming with Economic Applications. Journal of Institutional and Theoretical Economics, 128, 437-455.

[4] Ojha, A.K. and Das, A.K. (2010) Geometric Programming Problem with Co-Efficient and Exponents Associated with Binary Numbers. International Journal of Computer Science Issues, 7, 49-55.

[5] Amuji, H.O., Ugwuowo, F.I., Chukwu, W.I.E. and Uche, P.I. (In Press) A Modified Generalized Inverse Method for Solving Geometric Programming Problems with Extended Degrees of Difficulties. International Journal of Operational Research. http://www.inderscience.com/info/ingeneral/forthcoming.php?jcode=ijor

[6] Dinkel, J.J. and Kochenberger, G.A. (1977) On Sensitivity Analysis in Geometric Programming. Operations Research, 25, 155-163. https://doi.org/10.1287/opre.25.1.155

[7] Duffin, R.J. (1970) Linearizing Geometric Programs. SIAM Review, 12, 211-227. https://doi.org/10.1137/1012043

[8] Theil, H. (1972) Substitution Effects in Geometric Programming. Management Science, 19, 25-30. https://doi.org/10.1287/mnsc.19.1.25

[9] Dinkel, J.J. and Kochenberger, G.A. (1974) A Note on Substitution Effects in Geometric Programming. Management Science, 20, 1141-1143. https://doi.org/10.1287/mnsc.20.7.1141

[10] Boyd, S., Kim, S.J., Vandenberghe, L. and Hassibi, A. (2007) A Tutorial on Geometric Programming. Optimization and Engineering, 8, 67-127. https://doi.org/10.1007/s11081-007-9001-7
