Fast Computation of Pareto Set for Bicriteria Linear Programs with Application to a Diet Formulation Problem


1. Introduction

Animal diet formulation is an important problem from both an economic and an environmental point of view, which makes it an interesting example in operations research. Many modern animal diet formulation methods take into account the nitrogen and phosphorus excretions that are harmful to the environment. Following [1] , it is appropriate to apply a tax on excretions so as to change the behavior of producers in the swine industry. These changes in behavior are studied by formulating the problem as a bicriteria problem and are obtained by determining its Pareto set. For linear models, this Pareto set is a simple polygonal line. This fact implies that changes in the behavior of producers are abrupt and correspond to particular values of the tax: increasing the tax may produce no change in behavior at all, because behavior changes happen only at very particular values of the tax. We will see that these behaviors correspond to efficient extreme points of the Pareto set, and that to every extreme point corresponds a tax interval such that any value of the tax in this interval leads to the behavior given by that extreme point.

The computation and visualization of the Pareto set, also known as the efficiency set, for bicriteria linear programming problems is a useful tool for decision makers. We could try to compute this set in the decision space [2] - [10] , but due to the high dimension of this space, it can be a quite large and complicated set. Methods to obtain this set are also complicated; see for example [11] . Fortunately, the geometric structure of the Pareto set in the criterion (or outcome) space for bicriteria linear programs is quite simple [12] .

The outline of the paper is the following. The bicriteria problem is presented in Section 2. We will see in Section 3 that the Pareto set of a bicriteria linear problem is a simple polygonal line with L + 1 extreme points joined by L adjacent segments. Then in Section 4 we present the link between the geometric structure of the Pareto set and the weighted-sums approach. An elementary algorithm to determine the Pareto set in the criterion space is then suggested and its complexity is analyzed. Let us point out that this method uses only the elementary output of a linear program solver, that is to say the optimal solution (values of the decision variables). This fact is an interesting property of the method.

Few methods exist for computing the Pareto set in the criterion space. One such method is presented in [13] . The method requires information about the dual, assumes the feasible set is compact, and determines the Pareto set with at most 2L + 4 calls to a linear program solver. Another simple method for bicriteria problems is presented in [12] to obtain the Pareto set in the criterion space. The algorithm is based on information about the reduced costs of all nonbasic variables, which is equivalent to having information about the solution of the dual problem. For bicriteria linear problems we could also use a parametric analysis to obtain the Pareto set [11] [14] . The last two methods require that the software used to solve a linear program return information about the dual, reduced costs or postoptimal analysis, which is not always possible with a simple linear program solver. Unfortunately, even if it seems that those two methods require around 2L iterations, their complexities are nowhere analyzed. Moreover they can cycle, as explained in [15] (pages 281-282) and [16] (pages 162-166).

Finally, in Section 5, we compute the Pareto sets for least cost diet formulation problems for pig, or any monogastric animal, taking into account the nitrogen and/or phosphorus excretions. Tax systems related to efficient extreme points of this problem are described.

2. Bicriteria Linear Programming Problem

Let us consider the standard form of the bicriteria linear programming problem [11]

$\left(P\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\{\begin{array}{ll}\hfill & \mathrm{min}\text{\hspace{0.17em}}{z}_{1}\left(x\right)={c}_{1}x\hfill \\ \hfill & \mathrm{min}\text{\hspace{0.17em}}{z}_{2}\left(x\right)={c}_{2}x\hfill \\ \text{\hspace{0.05em}}\text{subject}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.05em}}\hfill & \hfill \\ \hfill & Ax=b\hfill \\ \hfill & x\ge 0\hfill \end{array}$

where x is a column vector in ${\mathbb{R}}^{n}$ , and the ${c}_{k}$ 's ( $k=1,2$ ) are two row vectors ${c}_{k}=\left({c}_{k\mathrm{,1}}\mathrm{,}\cdots \mathrm{,}{c}_{k\mathrm{,}n}\right)$ in ${\mathbb{R}}^{n}$ . The feasible set $\mathcal{S}$ in ${\mathbb{R}}^{n}$ is defined by $\mathcal{S}=\left\{x\in {\mathbb{R}}^{n}\mathrm{|}Ax=b\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}x\ge 0\right\}$ , where A is a $\left(m\mathrm{,}n\right)$ -matrix, and b is a column vector in ${\mathbb{R}}^{m}$ . Let C be the $\left(\mathrm{2,}n\right)$ -matrix given by

$C=\left(\begin{array}{c}{c}_{1}\\ {c}_{2}\end{array}\right)=\left(\begin{array}{ccc}{c}_{1,1}& \cdots & {c}_{1,n}\\ {c}_{2,1}& \cdots & {c}_{2,n}\end{array}\right).$

The feasible set in the criterion space ${\mathbb{R}}^{2}$ is then ${\mathcal{S}}_{c}=\left\{z\in {\mathbb{R}}^{2}\mathrm{|}z=Cx\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}x\in \mathcal{S}\right\}=C\mathcal{S}$ . It is well-known that $\mathcal{S}$ and ${\mathcal{S}}_{c}$ are polyhedral sets in ${\mathbb{R}}^{n}$ and ${\mathbb{R}}^{2}$ respectively. Throughout this paper we will suppose that the two criteria are lower bounded on $\mathcal{S}$ which means that for $i=1,2$ we have

${z}_{i}^{\mathrm{min}}=\mathrm{min}\left\{{z}_{i}\left(x\right)={c}_{i}x|x\in \mathcal{S}\right\}>-\infty .$

3. Structure of the Pareto Set

3.1. Efficiency Set

A feasible solution $x\in \mathcal{S}$ is an efficient solution if and only if there does not exist another feasible solution $\stackrel{\xaf}{x}\in \mathcal{S}$ such that 1) ${z}_{i}\left(\stackrel{\xaf}{x}\right)\le {z}_{i}\left(x\right)$ for $i=1,2$ , and 2) ${z}_{j}\left(\stackrel{\xaf}{x}\right)<{z}_{j}\left(x\right)$ for at least one $j\in \left\{\mathrm{1,2}\right\}$ . The set of all efficient solutions is called the efficiency set, denoted $\mathcal{E}$ , and is also called the Pareto set. The corresponding set in the criterion space is ${\mathcal{E}}_{c}=C\mathcal{E}$ .

3.2. Geometric Structure

Under the assumption that the two cost vectors ${c}_{1}$ and ${c}_{2}$ are linearly independent, we can use weighted sums to replace the bicriteria linear programming problem by a single criterion linear programming problem. We consider $\lambda \in \left[\mathrm{0,1}\right]$ and the weighted-sum function

$z\left(x\mathrm{;}\lambda \right)=\left(1-\lambda \right){z}_{1}\left(x\right)+\lambda {z}_{2}\left(x\right)=\left[\left(1-\lambda \right){c}_{1}+\lambda {c}_{2}\right]x\mathrm{,}$

and we consider the single criterion problem for $\lambda \in \left[\mathrm{0,1}\right]$

$\left(P\left(\lambda \right)\right)\text{\hspace{1em}}\{\begin{array}{ll}\hfill & min\text{\hspace{0.17em}}z\left(x\mathrm{;}\lambda \right)=\left(1-\lambda \right){z}_{1}\mathrm{(}x\mathrm{)}+\lambda {z}_{2}\left(x\right)=\left[\left(1-\lambda \right){c}_{1}+\lambda {c}_{2}\right]x\hfill \\ \text{\hspace{0.05em}}\text{subject}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.05em}}\hfill & \hfill \\ \hfill & x\in \mathcal{S}\mathrm{.}\hfill \end{array}$

The value function $\phi \left(\lambda \right)$ of $\left(P\left(\lambda \right)\right)$ is defined by

$\phi \left(\lambda \right)=min\left\{z\left(x\mathrm{;}\lambda \right)\mathrm{|}x\in \mathcal{S}\right\}\mathrm{.}$

From [11] we have

$\mathcal{E}={\displaystyle \underset{\lambda \in \left(\mathrm{0,1}\right)}{\cup}}\mathrm{arg}\underset{x\in \mathcal{S}}{\mathrm{min}}z\left(x\mathrm{;}\lambda \right)\mathrm{.}$

Hence the efficiency set $\mathcal{E}$ in the decision space is a connected set and is the union of faces, edges and vertices of $\mathcal{S}$ . This set may be quite complex due to the high dimension of the decision space. On the other hand ${\mathcal{E}}_{c}$ , which is the image in ${\mathbb{R}}^{2}$ of $\mathcal{E}$ under a linear transformation, is a much simpler set.

Since we have assumed that both criteria are lower bounded on $\mathcal{S}$ , it follows that ${\mathcal{E}}_{c}$ is a simple compact polygonal line. Indeed in that case ${\mathcal{E}}_{c}$ is the union of a finite number L of segments $\left[{Q}_{l-1}\mathrm{,}{Q}_{l}\right]$

${\mathcal{E}}_{c}={\displaystyle \underset{l=1}{\overset{L}{\cup}}}\left[{Q}_{l-1}\mathrm{,}{Q}_{l}\right]$

where

$\left[{Q}_{l-1}\mathrm{,}{Q}_{l}\right]=\left\{Q\in {\mathbb{R}}^{2}\mathrm{|}Q=\left(1-\sigma \right){Q}_{l-1}+\sigma {Q}_{l}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\sigma \in \left[\mathrm{0,1}\right]\right\}\mathrm{,}$

and such that

$\left({Q}_{l-1},{Q}_{l}\right)\cap \left({Q}_{\stackrel{\u02dc}{l}-1},{Q}_{\stackrel{\u02dc}{l}}\right)=\varnothing \text{\hspace{1em}}\text{\hspace{0.05em}}\text{if}\text{\hspace{0.05em}}\text{\hspace{0.17em}}l\ne \stackrel{\u02dc}{l},$

with

$\left({Q}_{l-1},{Q}_{l}\right)=\left\{Q\in {\mathbb{R}}^{2}|Q=\left(1-\sigma \right){Q}_{l-1}+\sigma {Q}_{l}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\sigma \in \left(0,1\right)\right\}.$

To each segment is associated a weight ${\lambda}_{l-\mathrm{1,}l}$ such that the vector ${\left(1-{\lambda}_{l-\mathrm{1,}l}\mathrm{,}{\lambda}_{l-\mathrm{1,}l}\right)}^{t}$ is orthogonal to the segment $\left[{Q}_{l-1}\mathrm{,}{Q}_{l}\right]$ in ${\mathbb{R}}^{2}$ . To each point Q of ${\mathcal{E}}_{c}$ is associated an interval $\Lambda \left(Q\right)$ defined by

$\Lambda \left(Q\right)=\{\begin{array}{l}\left[{\underset{\_}{\lambda}}_{l}\mathrm{,}{\stackrel{\xaf}{\lambda}}_{l}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}Q={Q}_{l}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(l=0,\cdots ,L\right),\\ \left[{\lambda}_{l-1,l},{\lambda}_{l-1,l}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}Q\in \left({Q}_{l-1},{Q}_{l}\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(l=1,\cdots ,L\right),\end{array}$

where

$\{\begin{array}{l}{\underset{\_}{\lambda}}_{0}=0,\hfill \\ {\stackrel{\xaf}{\lambda}}_{l-1}={\underset{\_}{\lambda}}_{l}={\lambda}_{l-1,l}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.05em}}\text{\hspace{0.17em}}l=1,\cdots ,L,\hfill \\ {\stackrel{\xaf}{\lambda}}_{L}=1,\hfill \end{array}\text{\hspace{0.05em}}$

with ${\stackrel{\xaf}{\lambda}}_{l}-{\underset{\_}{\lambda}}_{l}>0$ for $l=0,\cdots ,L$ .
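For instance, the weight ${\lambda}_{l-\mathrm{1,}l}$ attached to a segment follows directly from the coordinates of its two endpoints: orthogonality of ${\left(1-\lambda \mathrm{,}\lambda \right)}^{t}$ to ${Q}_{l}-{Q}_{l-1}$ gives $\lambda =\Delta {z}_{1}/\left(\Delta {z}_{1}-\Delta {z}_{2}\right)$ with $\Delta {z}_{i}={z}_{i}\left({Q}_{l}\right)-{z}_{i}\left({Q}_{l-1}\right)$ . A minimal Python sketch (the extreme points are invented for illustration):

```python
def segment_weight(q_prev, q_next):
    """Weight lambda such that (1 - lambda, lambda) is orthogonal to
    the segment [q_prev, q_next] in the criterion space."""
    dz1 = q_next[0] - q_prev[0]   # z1 increases along the Pareto line
    dz2 = q_next[1] - q_prev[1]   # z2 decreases along the Pareto line
    # (1 - lam) * dz1 + lam * dz2 = 0  =>  lam = dz1 / (dz1 - dz2)
    return dz1 / (dz1 - dz2)

# Invented extreme points Q0 = (1, 5), Q1 = (2, 3), Q2 = (4, 2)
lam_01 = segment_weight((1, 5), (2, 3))   # 1/3
lam_12 = segment_weight((2, 3), (4, 2))   # 2/3
```

The weights increase along the polygonal line, which is consistent with the ordered intervals $\Lambda \left(Q\right)$ above.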

3.3. Weak Efficiency Set

We will call weak efficiency set, or weak Pareto set, the set defined by

${\mathcal{E}}^{f}={\displaystyle \underset{\lambda \in \left[\mathrm{0,1}\right]}{\cup}}\mathrm{arg}\underset{x\in \mathcal{S}}{\mathrm{min}}z\left(x\mathrm{;}\lambda \right)\mathrm{.}$

Obviously $\mathcal{E}\subseteq {\mathcal{E}}^{f}$ . In the criterion space we will have ${\mathcal{E}}_{c}^{f}=C{\mathcal{E}}^{f}$ . Geometrically in the criterion space ${\mathbb{R}}^{2}$ , this means we possibly add to ${\mathcal{E}}_{c}$ a vertical segment or a ray from ${Q}_{0}$ in the positive direction of ${z}_{2}$ , ${D}_{0}=\left(0,1\right)$ ,

$R\left({Q}_{0};{D}_{0}\right)=\left\{{Q}_{0}+\eta {D}_{0}|\eta \in \left(0,{\eta}_{0}\right]\right\}\subset {\mathcal{S}}_{c},$

and/or a horizontal segment or a ray from ${Q}_{L}$ in the positive direction of ${z}_{1}$ , ${D}_{L}=\left(1,0\right)$ ,

$R\left({Q}_{L};{D}_{L}\right)=\left\{{Q}_{L}+\eta {D}_{L}|\eta \in \left(0,{\eta}_{L}\right]\right\}\subset {\mathcal{S}}_{c},$

where ${\eta}_{0}$ and ${\eta}_{L}$ are nonnegative finite or infinite values. They are the maximal values of $\eta $ such that $R\left({Q}_{0}\mathrm{;}{D}_{0}\right)$ and $R\left({Q}_{L}\mathrm{;}{D}_{L}\right)$ are both subsets of ${\mathcal{S}}_{c}$ . To these points on ${\mathcal{E}}_{c}$ we set

$\Lambda \left(Q\right)=\{\begin{array}{l}\left[\mathrm{0,0}\right]\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}Q\in R\left({Q}_{0}\mathrm{;}{D}_{0}\right)\mathrm{,}\\ \left[\mathrm{1,1}\right]\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{if}\text{\hspace{0.17em}}Q\in R\left({Q}_{L}\mathrm{;}{D}_{L}\right)\mathrm{.}\end{array}$

3.4. Link to Parametric Analysis

The parametric analysis is based on the weighted-sum given by

$\stackrel{\u02dc}{z}\left(x;\mu \right)={z}_{1}\left(x\right)+\mu {z}_{2}(x)$

for $\mu \in \left[\mathrm{0,}+\infty \right)$ , and the value function in this case is defined by

$\stackrel{\u02dc}{\phi}\left(\mu \right)=min\left\{\stackrel{\u02dc}{z}\left(x\mathrm{;}\mu \right)\mathrm{|}x\in \mathcal{S}\right\}\mathrm{.}$

Instead of $\left(P\left(\lambda \right)\right)$ , we could consider the single criterion problem for $\mu \ge 0$

$\left(P\left(\mu \right)\right)\text{\hspace{1em}}\{\begin{array}{ll}\hfill & \mathrm{min}\stackrel{\u02dc}{z}\left(x;\mu \right)={z}_{1}\left(x\right)+\mu {z}_{2}\left(x\right)=\left({c}_{1}+\mu {c}_{2}\right)x\hfill \\ \text{\hspace{0.05em}}\text{subject}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.05em}}\hfill & \hfill \\ \hfill & x\in \mathcal{S}\mathrm{.}\hfill \end{array}$

Since $\lambda $ and $\mu $ are related by the formulae

$\lambda =\frac{\mu}{1+\mu}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mu =\frac{\lambda}{1-\lambda}\mathrm{,}$

to the efficient extreme points ${\left\{{Q}_{l}\right\}}_{l=0}^{L}$ on the efficiency set ${\mathcal{E}}_{c}$ correspond also the following intervals for the parameter $\mu $

$\stackrel{\u02dc}{\Lambda}\left(Q\right)=\{\begin{array}{l}\left[{\underset{\_}{\mu}}_{l},{\stackrel{\xaf}{\mu}}_{l}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}Q={Q}_{l}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(l=0,\cdots ,L\right),\\ \left[{\mu}_{l-1,l},{\mu}_{l-1,l}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{if}\text{\hspace{0.17em}}Q\in \left({Q}_{l-1},{Q}_{l}\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(l=1,\cdots ,L\right),\end{array}$

where

$\{\begin{array}{l}{\underset{\_}{\mu}}_{0}=0,\hfill \\ {\stackrel{\xaf}{\mu}}_{l-1}={\underset{\_}{\mu}}_{l}={\mu}_{l-1,l}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}l=1,\cdots ,L,\text{\hspace{0.05em}}\hfill \\ {\stackrel{\xaf}{\mu}}_{L}=+\infty .\hfill \end{array}$

In many applications, the parameter $\mu $ is in fact a tax on the second criterion (for a minimization problem). It is interesting to observe that the behavior (the selected extreme point) changes only at the critical values ${\mu}_{l-\mathrm{1,}l}$ of the parameter $\mu $ . Indeed, when $\mu $ increases and its value passes through ${\mu}_{l-\mathrm{1,}l}$ , the optimal extreme point moves from ${Q}_{l-1}$ to ${Q}_{l}$ . Thus, any level of tax $\mu $ strictly between the values ${\mu}_{l-1,l}={\underset{\_}{\mu}}_{l}$ and ${\mu}_{l,l+1}={\stackrel{\xaf}{\mu}}_{l}$ causes the same behavior, described by ${Q}_{l}$ .
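This tax mechanism is easy to sketch in code. Given the critical taxes ${\mu}_{l-\mathrm{1,}l}$ (assumed already known; the numeric values below are invented), the following Python sketch converts between $\lambda $ and $\mu $ and maps a tax level to the index of the selected extreme point:

```python
def lam_to_mu(lam):
    """Convert a weight lambda in [0, 1) to the equivalent tax mu."""
    return lam / (1.0 - lam)

def mu_to_lam(mu):
    """Convert a tax mu >= 0 to the equivalent weight lambda."""
    return mu / (1.0 + mu)

def behavior_index(mu, critical_taxes):
    """Index l of the extreme point Q_l selected by the tax level mu,
    given the increasing critical taxes mu_{l-1,l} (l = 1, ..., L)."""
    l = 0
    while l < len(critical_taxes) and mu > critical_taxes[l]:
        l += 1
    return l

taxes = [0.5, 2.0]   # invented critical taxes mu_{0,1} and mu_{1,2}
```

Any tax strictly between 0.5 and 2.0 yields index 1: the producer's behavior does not change until the tax crosses a critical value.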

4. Computation of the Pareto Set

4.1. Preliminaries

Let us associate to any $Q=\left({z}_{1}\mathrm{,}{z}_{2}\right)\in {\mathcal{S}}_{c}$ the weighted-sum function given by

${\phi}_{Q}\left(\lambda \right)=\left(1-\lambda \right){z}_{1}+\lambda {z}_{2}\mathrm{.}$

Then the value function $\phi \left(\lambda \right)$ associated to $\left(P\left(\lambda \right)\right)$ is such that

$\begin{array}{c}\phi \left(\lambda \right)=\mathrm{min}\left\{{\phi}_{Q}\left(\lambda \right)|Q\in {\mathcal{S}}_{c}\right\}\\ =\mathrm{min}\left\{{\phi}_{Q}\left(\lambda \right)|Q\in {\mathcal{E}}_{c}\right\}\\ =\mathrm{min}\left\{{\phi}_{{Q}_{l}}\left(\lambda \right)|l=0,\cdots ,L\right\}.\end{array}$

Hence we have the following results.

Theorem 4.1. [12] Let $Q\in {\mathcal{E}}_{c}$ , we have $\phi \left(\lambda \right)={\phi}_{Q}\left(\lambda \right)$ if and only if $\lambda \in \Lambda \left(Q\right)$ .

Theorem 4.2. [12] Let $Q\in {\mathcal{E}}_{c}$ and $0\le {\lambda}_{1}<{\lambda}_{2}\le 1$ . Then ${\lambda}_{1}$ and ${\lambda}_{2}\in \Lambda \left(Q\right)$ if and only if $\left[{\lambda}_{1}\mathrm{,}{\lambda}_{2}\right]\subseteq \Lambda \left(Q\right)$ . It follows that Q is one of the ${Q}_{l}$ ( $l\in \left\{0,\cdots ,L\right\}$ ).

Theorem 4.3. [17] The function $\phi \left(\lambda \right)$ is continuous, piecewise linear and concave. The abscissae of slope changes are the increasing values ${\lambda}_{l-\mathrm{1,}l}$ for $l=1,\cdots ,L$ .
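A quick numerical check of Theorem 4.3 and of the expression of $\phi \left(\lambda \right)$ as a minimum over the efficient extreme points, with invented extreme points:

```python
def phi_Q(q, lam):
    """Weighted-sum value (1 - lam) * z1 + lam * z2 of a point q."""
    return (1 - lam) * q[0] + lam * q[1]

def phi(lam, extreme_points):
    """Value function of (P(lam)): minimum of phi_Q over the
    efficient extreme points."""
    return min(phi_Q(q, lam) for q in extreme_points)

pts = [(1, 5), (2, 3), (4, 2)]   # invented efficient extreme points
values = [phi(lam / 10, pts) for lam in range(11)]
```

Evaluating phi on a grid of weights shows a continuous, piecewise linear and concave profile, with slope changes at the breakpoints (here 1/3 and 2/3).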

Let us observe that the slope associated to ${\phi}_{Q}\left(\lambda \right)$ strictly decreases for Q going from ${Q}_{0}$ to ${Q}_{L}$ on ${\mathcal{E}}_{c}$ , since ${z}_{1}$ increases and ${z}_{2}$ decreases steadily. We deduce the next results.

Theorem 4.4. [12] Let ${{Q}^{\prime}}_{i}$ and ${{Q}^{\prime}}_{j}$ be two distinct points on ${\mathcal{E}}_{c}$ . For $\lambda \in \left[\mathrm{0,1}\right]$ , ${\left(1-\lambda \mathrm{,}\lambda \right)}^{t}$ is orthogonal to the segment $\left[{{Q}^{\prime}}_{i}\mathrm{,}{{Q}^{\prime}}_{j}\right]$ if and only if ${\phi}_{{{Q}^{\prime}}_{i}}\left(\lambda \right)={\phi}_{{{Q}^{\prime}}_{j}}\left(\lambda \right)$ .

Theorem 4.5. Let ${{Q}^{\prime}}_{i}$ and ${{Q}^{\prime}}_{j}$ be two distinct points on ${\mathcal{E}}_{c}$ and $\lambda \in \left[\mathrm{0,1}\right]$ , such that ${\left(1-\lambda \mathrm{,}\lambda \right)}^{t}$ is orthogonal to the segment $\left[{{Q}^{\prime}}_{i}\mathrm{,}{{Q}^{\prime}}_{j}\right]$ . For a fixed $\lambda $ , the function ${\phi}_{Q}\left(\lambda \right)$ is constant as a function of Q on the segment $\left[{{Q}^{\prime}}_{i}\mathrm{,}{{Q}^{\prime}}_{j}\right]$ . Let us note this constant value by ${\phi}_{{{Q}^{\prime}}_{i}{{Q}^{\prime}}_{j}}$ . Moreover

1) if $\phi \left(\lambda \right)={\phi}_{{{Q}^{\prime}}_{i}{{Q}^{\prime}}_{j}}$ then $\left[{{Q}^{\prime}}_{i}\mathrm{,}{{Q}^{\prime}}_{j}\right]\subset {\mathcal{E}}_{c}$ , $\lambda \in \Lambda \left({{Q}^{\prime}}_{i}\right)$ and $\lambda \in \Lambda \left({{Q}^{\prime}}_{j}\right)$ ;

2) if $\phi \left(\lambda \right)>{\phi}_{{{Q}^{\prime}}_{i}{{Q}^{\prime}}_{j}}$ then $\left({{Q}^{\prime}}_{i}\mathrm{,}{{Q}^{\prime}}_{j}\right)\cap {\mathcal{E}}_{c}=\varnothing $ .

Theorem 4.6. Let ${{Q}^{\prime}}_{i}$ and ${{Q}^{\prime}}_{j}$ be two distinct points on ${\mathcal{E}}_{c}$ . If $\lambda \in \Lambda \left({{Q}^{\prime}}_{i}\right)$ is such that $\phi \left(\lambda \right)={\phi}_{{{Q}^{\prime}}_{j}}\left(\lambda \right)$ then $\lambda \in \Lambda \left({{Q}^{\prime}}_{j}\right)$ . Moreover there exists $l\in \left\{0,\cdots ,L\right\}$ such that $\lambda ={\lambda}_{l-1,l}$ and $\left[{{Q}^{\prime}}_{i}\mathrm{,}{{Q}^{\prime}}_{j}\right]\subseteq \left[{Q}_{l-1}\mathrm{,}{Q}_{l}\right]\subseteq {\mathcal{E}}_{c}$ .

Theorem 4.7. Let ${{Q}^{\prime}}_{i}$ and ${{Q}^{\prime}}_{j}$ be two distinct points on ${\mathcal{E}}_{c}^{f}$ . Let ${{\lambda}^{\prime}}_{i}\in \Lambda \left({{Q}^{\prime}}_{i}\right)$ and ${{\lambda}^{\prime}}_{j}\in \Lambda \left({{Q}^{\prime}}_{j}\right)$ , and consider the following two lines

${\mathcal{L}}_{i}\left({{\lambda}^{\prime}}_{i}\right)=\left\{Q\in {\mathbb{R}}^{2}\mathrm{|}{\phi}_{Q}\left({{\lambda}^{\prime}}_{i}\right)=\phi \left({{\lambda}^{\prime}}_{i}\right)\right\}$

and

${\mathcal{L}}_{j}\left({{\lambda}^{\prime}}_{j}\right)=\left\{Q\in {\mathbb{R}}^{2}\mathrm{|}{\phi}_{Q}\left({{\lambda}^{\prime}}_{j}\right)=\phi \left({{\lambda}^{\prime}}_{j}\right)\right\}\mathrm{.}$

(A) If ${{\lambda}^{\prime}}_{i}\ne {{\lambda}^{\prime}}_{j}$ , the point of intersection of ${\mathcal{L}}_{i}\left({{\lambda}^{\prime}}_{i}\right)$ and ${\mathcal{L}}_{j}\left({{\lambda}^{\prime}}_{j}\right)$ is $\stackrel{\u02dc}{Q}\left({{\lambda}^{\prime}}_{i}\mathrm{,}{{\lambda}^{\prime}}_{j}\right)=\left(\psi \left(\mathrm{0;}{{\lambda}^{\prime}}_{i}\mathrm{,}{{\lambda}^{\prime}}_{j}\right)\mathrm{,}\psi \left(\mathrm{1;}{{\lambda}^{\prime}}_{i}\mathrm{,}{{\lambda}^{\prime}}_{j}\right)\right)$ where

$\psi \left(\lambda ;{{\lambda}^{\prime}}_{i},{{\lambda}^{\prime}}_{j}\right)=\frac{{{\lambda}^{\prime}}_{j}-\lambda}{{{\lambda}^{\prime}}_{j}-{{\lambda}^{\prime}}_{i}}\phi \left({{\lambda}^{\prime}}_{i}\right)+\frac{\lambda -{{\lambda}^{\prime}}_{i}}{{{\lambda}^{\prime}}_{j}-{{\lambda}^{\prime}}_{i}}\phi \left({{\lambda}^{\prime}}_{j}\right),$

so

$\psi \left(0;{{\lambda}^{\prime}}_{i},{{\lambda}^{\prime}}_{j}\right)=\frac{{{\lambda}^{\prime}}_{j}\phi \left({{\lambda}^{\prime}}_{i}\right)}{{{\lambda}^{\prime}}_{j}-{{\lambda}^{\prime}}_{i}}-\frac{{{\lambda}^{\prime}}_{i}\phi \left({{\lambda}^{\prime}}_{j}\right)}{{{\lambda}^{\prime}}_{j}-{{\lambda}^{\prime}}_{i}}$

and

$\psi \left(1;{{\lambda}^{\prime}}_{i},{{\lambda}^{\prime}}_{j}\right)=\frac{{{\lambda}^{\prime}}_{j}-1}{{\lambda}_{j}-{{\lambda}^{\prime}}_{i}}\phi \left({{\lambda}^{\prime}}_{i}\right)+\frac{1-{{\lambda}^{\prime}}_{i}}{{{\lambda}^{\prime}}_{j}-{{\lambda}^{\prime}}_{i}}\phi \left({{\lambda}^{\prime}}_{j}\right).$

(B) If ${{\lambda}^{\prime}}_{i}={{\lambda}^{\prime}}_{j}$ , then ${\mathcal{L}}_{i}\left({{\lambda}^{\prime}}_{i}\right)={\mathcal{L}}_{j}\left({{\lambda}^{\prime}}_{j}\right)$ which contains the segment $\left[{{Q}^{\prime}}_{i}\mathrm{,}{{Q}^{\prime}}_{j}\right]$ .
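The intersection point of Theorem 4.7(A) can be computed directly from $\psi $ ; a short Python sketch (the numeric inputs below are arbitrary):

```python
def support_point(lam_i, lam_j, phi_i, phi_j):
    """Intersection of the two support lines L_i and L_j, computed as
    (psi(0), psi(1)) with the interpolation formula of Theorem 4.7
    (requires lam_i != lam_j)."""
    def psi(lam):
        return ((lam_j - lam) * phi_i + (lam - lam_i) * phi_j) / (lam_j - lam_i)
    return (psi(0.0), psi(1.0))

# Arbitrary inputs: weights 0.25 and 0.75, optimal values 1.0 and 2.0
q = support_point(0.25, 0.75, 1.0, 2.0)
```

The returned point lies on both support lines, i.e. it satisfies $\left(1-\lambda \right){z}_{1}+\lambda {z}_{2}=\phi \left(\lambda \right)$ for both weights.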

4.2. Algorithm

In this section we consider both criteria upper bounded on $\mathcal{S}$ . In the forthcoming algorithm we initialize the process with the two points ${Q}_{0}$ and ${Q}_{L}$ on ${\mathcal{E}}_{c}$ . Then we gradually obtain a sequence of points ${\left\{{{Q}^{\prime}}_{i}\right\}}_{i=0}^{I}$ on ${\mathcal{E}}_{c}$ , and

a sequence of intervals associated to these points ${\left\{{\Lambda}^{\prime}\left({{Q}^{\prime}}_{i}\right)=\left[{\underset{\_}{\lambda}}^{\prime}{}_{i},{{\stackrel{\xaf}{\lambda}}^{\prime}}_{i}\right]\right\}}_{i=0}^{I}$ such that ${\Lambda}^{\prime}\left({{Q}^{\prime}}_{i}\right)\subseteq \Lambda \left({{Q}^{\prime}}_{i}\right)$ and

$\phi \left(\lambda \right)={\phi}_{{{Q}^{\prime}}_{i}}\left(\lambda \right)\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{all}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\lambda \in {\Lambda}^{\prime}\left({{Q}^{\prime}}_{i}\right).$

At the end of the process $I=L$ and we have ${{Q}^{\prime}}_{l}={Q}_{l}$ with

$\left[{\underset{\_}{\lambda}}^{\prime}{}_{l},{{\stackrel{\xaf}{\lambda}}^{\prime}}_{l}\right]={\Lambda}^{\prime}\left({{Q}^{\prime}}_{l}\right)=\Lambda \left({Q}_{l}\right)=\left[{\underset{\_}{\lambda}}_{l},{\stackrel{\xaf}{\lambda}}_{l}\right]$

for $l=0,\cdots ,L$ .

Algorithm (Pareto bicriteria)

STEP 0. Initialization.

(A) Enter the data of the problem.

(B) Determine ${z}_{i}^{\mathrm{min}}$ for $i=1,2$ and let ${x}^{\left(1\right)}$ and ${x}^{\left(2\right)}$ be corresponding optimal solutions. Set ${{Q}^{\prime}}_{0}=C{x}^{\left(1\right)}$ and ${{Q}^{\prime}}_{1}=C{x}^{\left(2\right)}$ . The initial point ${{Q}^{\prime}}_{0}$ has the same first coordinate as ${Q}_{0}$ , and ${{Q}^{\prime}}_{1}$ has the same second coordinate as ${Q}_{L}$ . Those two points might not be on ${\mathcal{E}}_{c}$ , but are on ${\mathcal{E}}_{c}^{f}$ .

(C) Set ${{\underset{\_}{\lambda}}^{\prime}}_{0}={{\stackrel{\xaf}{\lambda}}^{\prime}}_{0}=0$ and ${\Lambda}^{\prime}\left({{Q}^{\prime}}_{0}\right)=\left[0,0\right]$ ;

(D) Set ${{\underset{\_}{\lambda}}^{\prime}}_{1}={{\stackrel{\xaf}{\lambda}}^{\prime}}_{1}=1$ and ${\Lambda}^{\prime}\left({{Q}^{\prime}}_{1}\right)=\left[1,1\right]$ ;

(E) Set $I=1$ .

STEP 1. As long as there exists an index i such that ${{\stackrel{\xaf}{\lambda}}^{\prime}}_{i}<{{\underset{\_}{\lambda}}^{\prime}}_{i+1}$ , select one such index i and do:

(A) Find $\lambda \in \left({{\stackrel{\xaf}{\lambda}}^{\prime}}_{i},{{\underset{\_}{\lambda}}^{\prime}}_{i+1}\right)$ such that ${\phi}_{{{Q}^{\prime}}_{i}}\left(\lambda \right)={\phi}_{{{Q}^{\prime}}_{i+1}}\left(\lambda \right)$ , hence such that ${\left(1-\lambda ,\lambda \right)}^{t}$ is orthogonal to the segment $\left[{{Q}^{\prime}}_{i},{{Q}^{\prime}}_{i+1}\right]$ (see Theorem 4.4);

(B) Solve $\left(P\left(\lambda \right)\right)$ to obtain an optimal solution ${x}^{*}$ , and compute $Q=C{x}^{*}$ with $\phi \left(\lambda \right)={\phi}_{Q}\left(\lambda \right)$ ;

(C) Update the list of points and their intervals:

I) Modification of the intervals. If $\phi \left(\lambda \right)={\phi}_{{{Q}^{\prime}}_{i}{{Q}^{\prime}}_{i+1}}$ then all the segment $\left[{{Q}^{\prime}}_{i},{{Q}^{\prime}}_{i+1}\right]$ is in ${\mathcal{E}}_{c}$ (see Theorem 4.5), and $\lambda \in \Lambda \left({{Q}^{\prime}}_{i}\right)\cap \Lambda \left({{Q}^{\prime}}_{i+1}\right)$ (see Theorems 4.1 and 4.5).

We modify as follows:

a) for ${{Q}^{\prime}}_{i}$ : set ${{\stackrel{\xaf}{\lambda}}^{\prime}}_{i}=\lambda $ ;

b) for ${{Q}^{\prime}}_{i+1}$ : set ${{\underset{\_}{\lambda}}^{\prime}}_{i+1}=\lambda $ ;

In the sequel no more point will be generated on $\left({{Q}^{\prime}}_{i},{{Q}^{\prime}}_{i+1}\right)$ .

II) Point insertion and interval modification. If $\phi \left(\lambda \right)<{\phi}_{{{Q}^{\prime}}_{i}{{Q}^{\prime}}_{i+1}}$ then $Q\notin \left\{{{Q}^{\prime}}_{i},{{Q}^{\prime}}_{i+1}\right\}$ ; insert the point $Q$ and modify the intervals as follows (see Theorem 4.6):

a) Insert $Q$ between ${{Q}^{\prime}}_{i}$ and ${{Q}^{\prime}}_{i+1}$ in the list with ${\Lambda}^{\prime}\left(Q\right)=\left[\lambda ,\lambda \right]$ ;

b) Set $I=I+1$ ;

c) If ${\phi}_{Q}\left({{\stackrel{\xaf}{\lambda}}^{\prime}}_{i}\right)=\phi \left({{\stackrel{\xaf}{\lambda}}^{\prime}}_{i}\right)$ , then ${{\stackrel{\xaf}{\lambda}}^{\prime}}_{i}\in \Lambda \left(Q\right)$ and any point of $\left[{{Q}^{\prime}}_{i},Q\right]$ is in ${\mathcal{E}}_{c}$ , hence we modify ${\Lambda}^{\prime}\left(Q\right)$ by setting its lower bound to ${{\stackrel{\xaf}{\lambda}}^{\prime}}_{i}$ ;

d) If ${\phi}_{Q}\left({{\underset{\_}{\lambda}}^{\prime}}_{i+1}\right)=\phi \left({{\underset{\_}{\lambda}}^{\prime}}_{i+1}\right)$ , then ${{\underset{\_}{\lambda}}^{\prime}}_{i+1}\in \Lambda \left(Q\right)$ and any point of $\left[Q,{{Q}^{\prime}}_{i+1}\right]$ is in ${\mathcal{E}}_{c}$ , hence we modify ${\Lambda}^{\prime}\left(Q\right)$ by setting its upper bound to ${{\underset{\_}{\lambda}}^{\prime}}_{i+1}$ .

STEP 2. For any i such that ${{\underset{\_}{\lambda}}^{\prime}}_{i}={{\stackrel{\xaf}{\lambda}}^{\prime}}_{i}$ , remove ${{Q}^{\prime}}_{i}$ from the list and set $I=I-1$ .

STEP 3. End of the process ( $I=L$ and ${{Q}^{\prime}}_{l}={Q}_{l}$ for $l=0,\cdots ,L$ ). The output is the list ${\left\{\left({{Q}^{\prime}}_{l},{\Lambda}^{\prime}\left({{Q}^{\prime}}_{l}\right)\right)\right\}}_{l=0}^{L}$ .

Let us observe that this process uses only optimal solutions of $\left(P\left(\lambda \right)\right)$ , that is the optimal values of the decision variables, which are easily obtained from any elementary linear program solver.

Remark 4.8. This algorithm produces at each iteration an inner and an outer approximation of the Pareto set. The inner approximation is the polygonal line joining the ${{Q}^{\prime}}_{i}$ for $i=0,\cdots ,I$ . The outer approximation is the polygonal line joining the points ${{Q}^{\prime}}_{0}$ , $\stackrel{\u02dc}{Q}\left({{\stackrel{\xaf}{\lambda}}^{\prime}}_{0},{{\underset{\_}{\lambda}}^{\prime}}_{1}\right)$ , ${{Q}^{\prime}}_{1}$ , $\stackrel{\u02dc}{Q}\left({{\stackrel{\xaf}{\lambda}}^{\prime}}_{1},{{\underset{\_}{\lambda}}^{\prime}}_{2}\right)$ , ${{Q}^{\prime}}_{2}$ , $\cdots $ , ${{Q}^{\prime}}_{I}$ , as long as the $\stackrel{\u02dc}{Q}$ 's are well determined (see Theorem 4.7). At the end of the algorithm the two approximations agree.
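The dichotomic idea behind the algorithm can be sketched compactly. In the sketch below, a Python assumption replaces the linear program solver by a minimization over a finite list of candidate criterion points (all names and numbers are invented for illustration); a real implementation would call an LP solver instead:

```python
def solve_weighted(lam, candidates):
    """Stand-in for the linear program solver: returns the candidate
    criterion point minimizing (1 - lam) * z1 + lam * z2."""
    return min(candidates, key=lambda q: (1 - lam) * q[0] + lam * q[1])

def pareto_extreme_points(candidates, tol=1e-9):
    """Dichotomic computation of the efficient extreme points in the
    criterion space, in the spirit of the algorithm above."""
    q_left = solve_weighted(0.0, candidates)    # minimizes z1
    q_right = solve_weighted(1.0, candidates)   # minimizes z2
    if q_left == q_right:
        return [q_left]

    def recurse(qa, qb):
        dz1, dz2 = qb[0] - qa[0], qb[1] - qa[1]
        lam = dz1 / (dz1 - dz2)      # (1-lam, lam) orthogonal to [qa, qb]
        q = solve_weighted(lam, candidates)
        val = (1 - lam) * q[0] + lam * q[1]
        seg_val = (1 - lam) * qa[0] + lam * qa[1]
        if val >= seg_val - tol:     # the whole segment [qa, qb] is efficient
            return []
        return recurse(qa, q) + [q] + recurse(q, qb)

    return [q_left] + recurse(q_left, q_right) + [q_right]

points = [(1, 5), (2, 3), (4, 2), (3, 4), (5, 5)]
```

On this toy data, pareto_extreme_points(points) returns the three efficient extreme points (1, 5), (2, 3) and (4, 2); the dominated candidates are never reported.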

4.3. Complexity

In this section we determine the maximum number of calls to a linear program solver needed to completely determine the Pareto set, or equivalently its efficient extreme points. The result is given in the last theorem of this section and says that it takes at most $2L+3$ calls to a linear program solver to generate the $L+1$ extreme points.

We will use the following ordering on ${\mathcal{E}}_{c}$ . For any two distinct points ${Q}^{a}$ and ${Q}^{b}$ on ${\mathcal{E}}_{c}$ , we will say that ${Q}^{a}$ precedes ${Q}^{b}$ on ${\mathcal{E}}_{c}$ , or equivalently that ${Q}^{b}$ follows ${Q}^{a}$ on ${\mathcal{E}}_{c}$ , if moving on ${\mathcal{E}}_{c}$ in the direction from ${Q}_{0}$ to ${Q}_{L}$ we move from ${Q}^{a}$ to ${Q}^{b}$ .

Theorem 4.9. The algorithm generates at most 3 points on each segment $\left[{Q}_{l-1},{Q}_{l}\right]$ of ${\mathcal{E}}_{c}$ , and two of these points are ${Q}_{l-1}$ and ${Q}_{l}$ .

Proof. Let us first remark that the algorithm eventually finds a point on each segment $\left[{Q}_{l-1},{Q}_{l}\right]$ of ${\mathcal{E}}_{c}$ . The first point generated on the segment is either an initial point produced at STEP 0, or a point inserted through STEP 1-C-II with its interval ${\Lambda}^{\prime}$ reduced to the single weight $\lambda $ used at that iteration; it can be an interior point of the segment or one of the extreme points ${Q}_{l-1}$ and ${Q}_{l}$ . A second point on the segment, when needed, is generated through STEP 1-C-II from the first one, and the bounds of the intervals are then adjusted through STEP 1-C-II-c and STEP 1-C-II-d. If at this stage the two extreme points ${Q}_{l-1}$ and ${Q}_{l}$ are not both in the list, a third call produces the missing point or detects, through STEP 1-C-I, that the whole segment is in ${\mathcal{E}}_{c}$ , in which case the intervals are adjusted and no new point is added to the list. In the sequel the algorithm generates no more points on the segment, since the intervals of consecutive points of the list on the segment now meet and the condition of STEP 1 no longer holds there. Hence at most 3 points are generated on the segment, and two of these points are ${Q}_{l-1}$ and ${Q}_{l}$ .

Theorem 4.10. If ${{Q}^{\prime}}_{0}\ne {Q}_{0}$ , respectively ${{Q}^{\prime}}_{I}\ne {Q}_{L}$ , then ${{Q}^{\prime}}_{0}$ , respectively ${{Q}^{\prime}}_{I}$ , is eventually removed from the list without any supplementary call to the linear program solver.

Proof. When the point ${Q}_{0}$ is introduced in the list, no supplementary call is needed for ${{Q}^{\prime}}_{0}$ , and similarly for ${{Q}^{\prime}}_{I}$ when ${Q}_{L}$ is introduced in the list. The points ${{Q}^{\prime}}_{0}$ and ${{Q}^{\prime}}_{I}$ are then removed from the list at STEP 2 since ${\Lambda}^{\prime}\left({{Q}^{\prime}}_{0}\right)=\left[0,0\right]$ and ${\Lambda}^{\prime}\left({{Q}^{\prime}}_{I}\right)=\left[1,1\right]$ .

Theorem 4.11. The algorithm generates the extreme points of the Pareto set in at most $2L+3$ calls to a linear program solver.

Proof. The initialization STEP 0 requires 2 calls. In STEP 1, we generate the ${Q}_{l}$ for $l=0,\cdots ,L$ , and possibly make one supplementary call for each segment $\left[{Q}_{l-1},{Q}_{l}\right]$ for $l=1,\cdots ,L$ , so there are at most $2L+1$ calls. Hence the algorithm requires at most $2L+3$ calls.

5. A Real World Application: Pig Diet Formulation

To illustrate our method of computation of the Pareto set, we consider the pig diet formulation problem taking into account not only the cost of the diet but also environmental considerations, such as the reduction of nitrogen or phosphorus excretions. One way to analyze this problem is to rewrite it as a bicriteria problem. The Pareto set then indicates the effect of the reduction of excretions, nitrogen or phosphorus, on the cost of the diet. This information is certainly useful for a decision maker who has to choose a diet which decreases the excretions without being too expensive [1] . Even if in this paper we describe the problem for the swine industry, the method can be applied to any monogastric animal: pig, rabbit, chicken, etc.

5.1. Classical Model

The least cost diet problem, introduced in [18] , is a classical linear programming problem [19] [20] [21] . A decision variable ${x}_{j}$ is assigned to each ingredient and represents the amount (in kg) of the j^{th} ingredient per unit weight (1 kg) of the feed. Together, they form the decision vector $x$ of our model. The model's objective function is the diet cost. A vector of unit costs is used, where each ${c}_{j}$ represents the unit cost of the j^{th} ingredient (euro/kg or $/kg). Thus the total cost of a unit of weight (1 kg) of diet is $cx$ , which must be minimized over the set of feasible diets denoted by $\mathcal{S}$ . The classic least cost animal diet formulation model then consists in minimizing $cx$ subject to $x\in \mathcal{S}$ .

The constraints impose some bounds on the quantities of the different ingredients in the diet. For example, exactly one unit of feed is produced (a 1 kg mix), which is expressed by the constraint ${\sum}_{j}{x}_{j}=1$ . Some ingredients, or combinations of ingredients, can be imposed on the diet. These restrictions give rise to equality constraints (=) or inequality constraints (≥ or ≤). More specifically, to satisfy protein requirements, the following constraints are introduced for the L groups of amino acids contained in the ingredients. We set

${\sum}_{j}\text{\hspace{0.17em}}{a}_{lj}{x}_{j}\ge {b}_{l},\text{\hspace{1em}}l=1,\cdots ,L,$

where ${a}_{lj}$ represents the amount of digestible amino acid l contained in a unit of ingredient j and ${b}_{l}$ is the minimum amount of digestible amino acid l required. Finally, the diet must satisfy the digestible phosphorus requirement given by

${\sum}_{j}\text{\hspace{0.17em}}{p}_{j}{x}_{j}\ge {b}_{P},$

where ${p}_{j}$ is the amount of digestible phosphorus contained in a unit of ingredient j and ${b}_{P}$ is the minimum amount of digestible phosphorus required.
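Checking whether a candidate mix is a feasible diet amounts to testing the mass-balance equality and the requirement inequalities. A toy Python sketch with two ingredients (all nutritional values and requirements below are invented for illustration):

```python
def is_feasible(x, requirement_rows, tol=1e-9):
    """A diet x is feasible if it weighs exactly 1 kg and meets every
    nutrient requirement row (a, b): sum_j a[j] * x[j] >= b."""
    if abs(sum(x) - 1.0) > tol:
        return False
    return all(sum(a_j * x_j for a_j, x_j in zip(a, x)) >= b - tol
               for a, b in requirement_rows)

# Invented data for 2 ingredients: digestible lysine and phosphorus per kg
rows = [([8.0, 3.0], 5.0),   # lysine contents, minimum requirement 5.0
        ([2.0, 6.0], 3.0)]   # digestible phosphorus, minimum 3.0
```

For example, the equal mix (0.5, 0.5) is feasible for these invented rows, while (0.1, 0.9) violates the lysine requirement.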

5.2. Modelling of Nitrogen and Phosphorus Excretions

Nitrogen and phosphorus excretions are directly related to the excess amounts of protein (amino acids) and phosphorus in the diet. Hence, we have to establish the protein and phosphorus contents of the diet and take into account the parts that are actually assimilated.

The protein content of a diet is ${\sum}_{j}{\pi}_{j}{x}_{j}$ , where ${\pi}_{j}$ is the amount of protein per unit of ingredient j. The total excretion of protein is then given by the protein content of the diet from which we remove the amount of protein effectively digested.

Hence decreasing the total excretion is equivalent to decreasing the protein content of the diet while keeping the protein requirements satisfied.

As for nitrogen, the amount of phosphorus of a unit weight of diet is ${\sum}_{j}{P}_{j}{x}_{j}$ , where ${P}_{j}$ is the amount of phosphorus per unit of ingredient j. The phosphorus excretion is then given by the phosphorus content of the diet from which we remove the amount of phosphorus which is actually digested.

Hence, decreasing the phosphorus excretion is equivalent to decreasing the phosphorus content of the diet while keeping the phosphorus requirements satisfied.
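Both excretion computations above reduce to "content of the diet minus amount digested". A toy Python sketch with invented numbers:

```python
def excretion(x, content_per_kg, digested):
    """Excretion = nutrient content of the 1 kg diet minus the amount
    actually digested by the animal."""
    content = sum(c * xj for c, xj in zip(content_per_kg, x))
    return content - digested

protein_per_kg = [0.30, 0.10]   # invented protein contents (kg per kg)
diet = (0.6, 0.4)               # 0.6 kg of ingredient 1, 0.4 kg of 2
digested_protein = 0.15         # invented digested amount (kg)
```

Here the protein content of the diet is 0.22 kg, so the excretion is 0.07 kg; lowering the content while keeping the requirements satisfied lowers the excretion by the same amount.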

5.3. Data

The ingredients and their corresponding variables are described in Table 1. Table 2 contains the entire model together with the values of its technical coefficients.

5.4. Software

The algorithm was programmed in MATLAB, which includes the linear program solver Linprog in its standard library. This solver can use either the simplex method or an interior-point method.
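As a minimal sketch of the kind of solver call involved, here is a toy least-cost diet LP solved with scipy.optimize.linprog, an open-source analog of MATLAB's Linprog (all ingredient data below are hypothetical, not taken from Table 2):

```python
from scipy.optimize import linprog

# Toy least-cost diet: 3 ingredients, a 1 kg mix, one nutrient requirement.
cost = [0.30, 0.45, 0.90]        # $ per kg of each ingredient (hypothetical)
A_eq = [[1.0, 1.0, 1.0]]         # the mix must weigh exactly 1 kg
b_eq = [1.0]
A_ub = [[-0.2, -0.6, -0.9]]      # nutrient requirement ">= 0.5" written as
b_ub = [-0.5]                    # "<=" with the signs flipped, as linprog expects

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3)
print(res.x, res.fun)            # optimum: x = (0.25, 0.75, 0), cost 0.4125
```

Note the sign convention: linprog handles only "≤" inequalities, so the "≥" nutrient constraints of the diet model are multiplied by −1 before being passed to the solver.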

5.5. Two Criteria Models and Results

At first we analyse the relation between the cost of the diet and each of the two excretions (nitrogen and phosphorus). As a curiosity, we also consider the interaction between the two kinds of excretion: nitrogen and phosphorus.

Table 1. List of available ingredients.

5.5.1. Cost and Excretions

We have considered two separate bicriteria models: we look for least-cost diets while taking into account the nitrogen excretion in the first model and the phosphorus excretion in the second. For each of these two bicriteria problems, the Pareto curve indicates the increase in diet cost caused by a decrease in excretion.

While considering the nitrogen excretion, the problem is:

Table 3 presents the set of efficient extreme points of the Pareto set in the criterion space, and the Pareto curve is sketched in Figure 1. For this problem, the algorithm detects 10 segments and 11 efficient extreme points. A total of 22 calls to the linear program solver were required, below the maximum predicted by the complexity analysis of Section 4.
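As a rough illustration of how such efficient extreme points can be enumerated using only the optimal solutions returned by an LP solver, here is a dichotomic weighted-sums search on a toy bicriteria LP. This is a sketch under assumptions, not the paper's implementation: the data are hypothetical and scipy.optimize.linprog stands in for MATLAB's Linprog.

```python
import numpy as np
from scipy.optimize import linprog

# Toy bicriteria problem: minimize (z1, z2) = (x1, x2)
# subject to x1 + x2 >= 1, 2*x1 + x2 >= 1.2, x >= 0.
C = np.array([[1.0, 0.0],   # first criterion  z1 = c1 . x
              [0.0, 1.0]])  # second criterion z2 = c2 . x
A_ub = np.array([[-1.0, -1.0],   # ">=" constraints with signs flipped
                 [-2.0, -1.0]])
b_ub = np.array([-1.0, -1.2])

def solve(w):
    """Minimize the weighted sum w[0]*z1 + w[1]*z2; return the criterion point."""
    res = linprog(w @ C, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return C @ res.x

def roundpt(z):
    return tuple(round(float(v), 6) for v in z)

def pareto_extreme_points(eps=1e-6):
    # Anchor points: minimize each criterion separately, with a tiny weight on
    # the other criterion to break ties toward efficient solutions.
    zA = solve(np.array([1.0, eps ** 0.5]))
    zB = solve(np.array([eps ** 0.5, 1.0]))
    points = [roundpt(zA), roundpt(zB)]
    stack = [(zA, zB)]
    while stack:
        z1, z2 = stack.pop()
        # Weights normal to the segment [z1, z2].
        w = np.array([z1[1] - z2[1], z2[0] - z1[0]])
        zC = solve(w)
        if w @ zC < w @ z1 - eps:      # new extreme point strictly below the segment
            points.append(roundpt(zC))
            stack.extend([(z1, zC), (zC, z2)])
    return sorted(points)

print(pareto_extreme_points())  # → [(0.0, 1.2), (0.2, 0.8), (1.0, 0.0)]
```

Each segment triggers one solver call; a call that finds no point below the current segment certifies that segment as part of the Pareto curve, which is why the number of calls stays close to twice the number of segments.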

From its associated weighted-sum model given by

Table 3. Efficient extreme points in the criterion space and the corresponding taxes.


Figure 1. Pareto curve: nitrogen excretion vs diet cost.

we get the following expression for its value function: it is piecewise linear in the weight, with a different linear expression on each of the consecutive weight intervals determined by the efficient extreme points. So the expression of the value function depends on the interval in which the weight lies.

For the parametric model, in which the cost is augmented by a tax levied on the nitrogen excretion, the value function is again piecewise linear in the tax: each efficient extreme point yields a linear expression valid on its own tax interval, so the expression of the value function depends on the interval in which the tax lies.

So we see that for any tax value in a given interval we always have the same expression for the value function, hence the same behavior, given by the corresponding efficient extreme point; the behavior changes only when the taxation level passes through one of the extremities of this interval.
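The correspondence between extreme points and tax intervals can be made concrete with a small computation. For hypothetical Pareto points (illustrative numbers, not the values of Table 3), the breakpoint taxes are the absolute slopes of consecutive segments of the Pareto curve, and each extreme point is optimal on the interval between its two adjacent slopes:

```python
# Hypothetical Pareto extreme points (excretion, cost), sorted by decreasing
# excretion; these are illustrative numbers, not the values of Table 3.
pts = [(10.0, 1.0), (7.0, 1.2), (3.0, 2.0), (2.0, 3.5)]

# For the objective cost + t * excretion, the optimum jumps from one extreme
# point to the next when t equals minus the slope of the segment joining them.
taxes = [-(c2 - c1) / (e2 - e1) for (e1, c1), (e2, c2) in zip(pts, pts[1:])]

# Tax interval on which each extreme point remains optimal.
intervals = ([(0.0, taxes[0])] + list(zip(taxes, taxes[1:]))
             + [(taxes[-1], float("inf"))])
for p, (lo, hi) in zip(pts, intervals):
    print(p, "optimal for tax in", (lo, hi))
```

By convexity of the Pareto curve, the breakpoint taxes are increasing, so the intervals partition the tax axis: raising the tax within one interval changes nothing, and crossing a breakpoint produces the abrupt change of behavior discussed above.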

A similar analysis holds for the second bicriteria problem with phosphorus excretion. Indeed, for the phosphorus excretion problem, the model is:

Table 4 presents the efficient extreme points in the criterion space while the Pareto curve is sketched in Figure 2. For this problem, the algorithm detects 22 segments and 23 efficient extreme points. A total of 45 calls to the linear program solver were required, below the maximum predicted by the complexity analysis of Section 4.

From its associated weighted-sum model given by

Table 4. Efficient extreme points in the criterion space and the corresponding taxes.

Figure 2. Pareto curve: phosphorus excretion vs diet cost.

we get the following expression for its value function: it is piecewise linear in the weight, with a different linear expression on each of the consecutive weight intervals determined by the efficient extreme points. So the expression of the value function depends on the interval in which the weight lies.

For the parametric model, in which the cost is augmented by a tax levied on the phosphorus excretion, the value function is again piecewise linear in the tax: each efficient extreme point yields a linear expression valid on its own tax interval, so the expression of the value function depends on the interval in which the tax lies.

So we see that for any tax value in a given interval we always have the same expression for the value function, hence the same behavior, given by the corresponding efficient extreme point; the behavior changes only when the taxation level passes through one of the extremities of this interval.

These problems of taxation are nice examples of abrupt (discrete) changes in behavior depending on the level of taxation of one criterion.

5.5.2. The Two Kinds of Excretion as Criteria

As a curiosity, we have computed the Pareto set for the bicriteria problem where the two kinds of excretions are considered. This bicriteria problem is given by

Table 5 presents the set of efficient extreme points of the Pareto set in the criterion space. Its corresponding Pareto curve is sketched in Figure 3. This table shows the conflicting effect of trying to reduce both excretions simultaneously: minimizing one excretion leads to an increase in the other. For this problem, the algorithm detects 5 segments and 6 efficient extreme points. A total of 12 calls to the linear program solver were required, below the maximum predicted by the complexity analysis of Section 4.

The value function of the associated weighted-sum model is again piecewise linear, with one linear expression per segment of the Pareto curve.

Figure 3. Pareto curve: phosphorus excretion vs nitrogen excretion.

Table 5. Efficient extreme points in the criterion space.

For all values of the parameter in a given interval we have the same expression for the value function, hence the same behavior; a change in behavior happens only for values of the parameter corresponding to the extremities of this interval.

Let us observe that the last line of Table 3 corresponds to the first line of Table 5, and the last line of Table 4 corresponds to the last line of Table 5.

6. Conclusion

In this paper we have considered bicriteria linear programming problems and have presented an elementary and efficient algorithm to compute the Pareto set in the criterion space. We have illustrated the method on an important real-world application. This application also suggests that it could be interesting to extend the method to three-criteria problems. Moreover, it would be interesting to compare our method with other methods for finding the Pareto set in the criterion space, but that is beyond the scope of this paper and could be a nice subject for future research.

Acknowledgements

This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada and by the Canadian corporation Swine Innovation Porc.

References

[1] Dubeau, F., Julien, P.-O. and Pomar, C. (2011) Formulating Diets for Growing Pigs: Economic and Environmental Considerations. Annals of Operations Research, 190, 239-269.

https://doi.org/10.1007/s10479-009-0633-1

[2] Adulbhan, P. and Tabucanon, M.T. (1977) Bicriterion Linear Programming. Computers & Operations Research, 4, 147-153. https://doi.org/10.1016/0305-0548(77)90036-3

[3] Benson, H.P. (1979) Vector Maximization with Two Objective Functions. Journal of Optimization Theory and Applications, 28, 253-258. https://doi.org/10.1007/BF00933245

[4] Cohon, J.L., Church, R.L. and Sheer, D.P. (1979) Generating Multiobjective Trade-Offs. Water Resources Research, 15, 1001-1010. https://doi.org/10.1029/WR015i005p01001

[5] Gearhart, W.B. (1979) On the Characterization of Pareto-Optimal Solutions in Bicriteria Optimization. Journal of Optimization Theory and Applications, 27, 301-307.

https://doi.org/10.1007/BF00933233

[6] Geoffrion, A.M. (1967) Solving Bicriterion Mathematical Programs. Operations Research, 15, 39-54. https://doi.org/10.1287/opre.15.1.39

[7] Kiziltan, G. and Yucaoglu, E. (1982) An Algorithm for Bicriterion Linear Programming. European Journal of Operational Research, 10, 406-411.

https://doi.org/10.1016/0377-2217(82)90091-1

[8] Prasad, S.Y. and Karwan, M.H. (1992) A Note on Solving Bicriteria Linear Programming Problems Using Single Criteria Software. Computers & Operations Research, 19, 169-173.

https://doi.org/10.1016/0305-0548(92)90090-R

[9] Sadagan, S. and Ravindran, A. (1982) Interactive Solution of Bicriteria Mathematical Programs. Naval Research Logistics Quarterly, 29, 443-459.

https://doi.org/10.1002/nav.3800290307

[10] Walker, J. (1978) An Interactive Method as an Aid in Solving Bicriteria Mathematical Programming Problems. Journal of the Operational Research Society, 29, 915-922. https://doi.org/10.1057/jors.1978.195

[11] Steuer, R.E. (1986) Multiple Criteria Optimization. Wiley, New York.

[12] Dubeau, F. and Kadri, A. (2012) Computation and Visualization of the Pareto Set in the Criterion Space for the Bicriteria Linear Programming Problem. International Journal of Mathematics and Computation, 15, 1-15.

[13] Benson, H.P. (1997) Generating the Efficient Outcome Set in Multiple Objective Linear Programs: The Bi-Criteria Case. Acta Mathematica Vietnamica, 22, 29-51.

[14] Bertsimas, D. and Tsitsiklis, J.N. (1997) Introduction to Linear Optimization. Athena Scientific and Dynamic Ideas, Belmont.

[15] Murty, K.G. (1983) Linear Programming. Wiley, New York.

[16] Chvatal, V. (1983) Linear Programming. W.H. Freeman and Company, New York.

[17] Ferris, M.C., Mangasarian, O.L. and Wright, S.J. (2007) Linear Programming with MATLAB. MPS-SIAM Series on Optimization, SIAM, Philadelphia.

[18] Stigler, G.J. (1945) The Cost of Subsistence. Journal of Farm Economics, 27, 303-314.

https://doi.org/10.2307/1231810

[19] Dantzig, G.B. (1963) Linear Programming and Extensions. Princeton University Press, Princeton.

[20] Dantzig, G.B. (1990) The Diet Problem. Interfaces, 20, 43-47.

https://doi.org/10.1287/inte.20.4.43

[21] Garille, S.G. and Gass, S.I. (2001) Stigler’s Diet Problem Revisited. Operations Research, 49, 1-13. https://doi.org/10.1287/opre.49.1.1.11187