A Lagrange Relaxation Based Approach to Solve a Discrete-Continuous Bi-Level Model
ABSTRACT
In this work we propose a solution method based on Lagrange relaxation for discrete-continuous bi-level problems, with binary variables in the leading problem, considering the optimistic approach in bi-level programming. For the application of the method, the two-level problem is reformulated using the Karush-Kuhn-Tucker conditions. The resulting model is linearized taking advantage of the structure of the leading problem. Using a Lagrange relaxation algorithm, it is possible to find a global solution efficiently. The algorithm was tested to show how it performs.

1. Introduction

Optimization techniques have long been a fundamental tool for decision-making. However, the most common mathematical programming models are often unsuitable for situations involving more than one goal and more than one decision-maker.

In most cases, decisions about a given process are the result of an interaction between the preferences of a group of individuals, so deciding on a single criterion seems insufficient, particularly when the decision-making process is applied to complex organizational environments.

Consequently, it is more realistic to seek the achievement of several objectives at the same time. This implies that problems must be solved according to a criterion that satisfies all decision-makers as a whole. As a result, contradictory and incommensurable criteria will commonly appear. This situation has led to the development of multi-objective optimization.

Another way to consider multiple objectives is through bilevel optimization problems, which involve two decision-makers and the interdependence between them. These problems can be seen as a generalization of the Stackelberg problem for non-cooperative games.

The bilevel programming problem can be formulated as:

$\begin{array}{ll}\underset{x}{\mathrm{min}}& F\left(x,y\right)\\ \text{subject to:}& {G}_{k}\left(x,y\right)\le 0,\quad k=1,\cdots ,p\\ & \text{where for each fixed }x\text{, }y\text{ solves}\\ & \underset{y}{\mathrm{min}}\text{ }f\left(x,y\right)\\ & \text{subject to: }{g}_{l}\left(x,y\right)\le 0,\quad l=1,\cdots ,q\end{array}$ (1)

where $x\in {ℝ}^{{n}_{1}}$ are the upper-level variables controlled by the top-level decision-maker or “leader”, and $y\in {ℝ}^{{n}_{2}}$ are the lower-level variables controlled by the lower-level decision-maker or “follower”. $F,f:{ℝ}^{{n}_{1}+{n}_{2}}\to ℝ$ are the upper- and lower-level objective functions, respectively. In the decision process, the leader first makes a decision that optimizes its objective function. Once the leader has decided, the follower reacts by optimizing its own objective function.
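As a toy illustration of this leader-follower interaction under the optimistic convention (formalized in Section 2), the following sketch solves a tiny bilevel problem by brute force over small finite decision sets. All problem data below is invented for illustration; it is not an instance from this paper.

```python
# Brute-force solution of a tiny optimistic bilevel problem (illustrative data).
X = range(4)  # leader's feasible decisions
Y = range(4)  # follower's feasible decisions

def F(x, y):  # leader objective, to be minimized
    return -(x + y)

def f(x, y):  # follower objective, to be minimized;
    return max(0, y - x)  # flat for y <= x, so the follower can have ties

best = None
for x in X:
    f_star = min(f(x, y) for y in Y)                # follower's optimal value
    reaction = [y for y in Y if f(x, y) == f_star]  # follower's optimal set
    y_opt = min(reaction, key=lambda y: F(x, y))    # optimistic tie-break
    if best is None or F(x, y_opt) < best[2]:
        best = (x, y_opt, F(x, y_opt))

print(best)  # (3, 3, -6): the leader anticipates the optimistic reaction
```

For a continuous follower, as in the DCBLP below, the inner enumeration is replaced by solving a linear program; the loop only serves to make the optimistic tie-breaking rule concrete.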

The fundamental properties and concepts of bi-level programming problems have been studied since the 1970s; a large number of references can be found in the survey by Colson, Marcotte and Savard. In general, a bi-level programming problem is difficult to solve: Ben-Ayed and Blair proved that even the simplest case, the linear bi-level problem, is NP-hard.

Additionally, in many bi-level programming problems, a subset of the variables is restricted to take discrete values. For problems with these characteristics, Moore and Bard and Wen and Yang proposed branch-and-bound algorithms for the mixed-integer and binary cases, and Edmunds and Bard developed a branch-and-bound algorithm for mixed-integer non-linear problems. It has also been pointed out that branch-and-bound methods require the lower-level problem to be linear or convex non-linear in order to work.

Vicente, Savard and Judice established fundamental properties for finding solutions of bi-level linear programs with binary variables, and suggested penalty function methods to solve discrete bi-level problems.

A reformulation and linearization algorithm has been proposed for the general bi-level mixed-integer problem with continuous follower variables, using a representation of its convex hull, and algorithms based on parametric programming have been proposed for the bi-level quadratic problem and the mixed-integer linear problem. More recently, multiparametric programming has been applied both to bi-level integer problems in which the integer variables are controlled by the first level and to bi-level mixed-integer problems in which the follower solves an integer problem.

An algorithm has also been presented for the global optimization of bi-level mixed-integer non-linear problems, based on generating a convergent lower bound and an optimal upper bound. An exact algorithm has been proposed for the linear mixed-integer bi-level problem under some simplifying assumptions, and integer bi-level problems with a linear-fractional leader objective and a linear follower objective have been addressed with an iterative cut-generation algorithm.

Using decomposition techniques, an algorithm based on Benders decomposition was proposed to solve the linear mixed-integer binary problem; in this method the objective values are bounded, and Karush-Kuhn-Tucker (KKT) conditions are used to transform the problem into two single-level problems. Building on that proposal, later works applied Benders decomposition with a continuous subproblem.

Based on dual decomposition, a method using Lagrangian relaxation was proposed to obtain an acceptable solution by solving a sequence of single-level mixed-integer programs. That heuristic addresses bi-level problems with binary variables at both levels, under the assumption that the upper-level constraints do not contain any lower-level variable.

In this work, a solution method for discrete-continuous bi-level problems based on Lagrangian relaxation is presented. The bilevel problem is first reformulated as a single-level problem through the Karush-Kuhn-Tucker conditions. The resulting non-linear problem can be linearized by taking advantage of the binary structure of the leader variables. Finally, the problem is solved with a dual decomposition method.

Due to their complexity, bilevel problems are usually solved by heuristic methods. Our proposal finds a globally optimal solution through a decomposition technique, taking advantage of the structure of the problems involved. The paper is organized as follows: Section 2 presents the definition and formulation of the discrete-continuous bi-level problem, Section 3 presents the solution algorithm, Section 4 reports the results of applying the algorithm, and Section 5 presents the conclusions.

2. Problem Definition and Mathematical Formulation

In this section, we formulate the discrete-continuous bilevel linear problem (DCBLP) with binary leader variables, assuming partial cooperation (the optimistic approach): if the follower has alternative optimal solutions, it chooses the one that is best for the leader. The problem can be written as:

$\begin{array}{ll}\underset{x,y}{\mathrm{min}}& {c}_{1}x+{d}_{1}y\\ \text{subject to:}& {A}_{1}x+{B}_{1}y\le {b}_{1}\\ & \underset{y}{\mathrm{min}}\text{ }{c}_{2}x+{d}_{2}y\\ & \text{subject to: }{A}_{2}x+{B}_{2}y\le {b}_{2}\\ & \phantom{\text{subject to: }}x\in \left\{0,1\right\},\text{ }y\ge 0\end{array}$ (2)

where ${c}_{1},{c}_{2}\in {ℝ}^{{n}_{1}}$ , ${d}_{1},{d}_{2}\in {ℝ}^{{n}_{2}}$ , ${b}_{1}\in {ℝ}^{p}$ , ${b}_{2}\in {ℝ}^{q}$ , ${A}_{1}\in {ℝ}^{p×{n}_{1}}$ , ${B}_{1}\in {ℝ}^{p×{n}_{2}}$ , ${A}_{2}\in {ℝ}^{q×{n}_{1}}$ , ${B}_{2}\in {ℝ}^{q×{n}_{2}}$ .

When the lower-level problem is linear, the bilevel problem can be reformulated by replacing the lower-level problem with its Karush-Kuhn-Tucker (KKT) conditions:

$\underset{x,y}{\mathrm{min}}$ ${c}_{1}x+{d}_{1}y$ (3)

subject to:

${A}_{1}x+{B}_{1}y\le {b}_{1}$ (4)

${A}_{2}x+{B}_{2}y\le {b}_{2}$ (5)

${\lambda }^{\text{T}}\left({b}_{2}-{A}_{2}x-{B}_{2}y\right)=0$ (6)

$\left({d}_{2}+{\lambda }^{\text{T}}{B}_{2}\right)y=0$ (7)

$x\in \left\{0,1\right\},y,\lambda \ge 0$ (8)

where, combining the dual feasibility condition ${d}_{2}+{\lambda }^{\text{T}}{B}_{2}\ge 0$ with the primal constraints and $\lambda \ge 0$ , it holds that

${d}_{2}^{\text{T}}y\ge -{\lambda }^{\text{T}}{B}_{2}y\ge {\lambda }^{\text{T}}{A}_{2}x-{\lambda }^{\text{T}}{b}_{2}$ (9)

For a given leader decision, the problem can be reformulated by relating the primal and dual constraints and requiring the duality gap to be zero. By including the equation:

${d}_{2}^{\text{T}}y={\lambda }^{\text{T}}{A}_{2}x-{\lambda }^{\text{T}}{b}_{2}$ (10)

satisfaction of both complementarity constraints is guaranteed.
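These conditions can be verified numerically on a hand-built one-dimensional follower problem whose primal and dual optima are known in closed form. The data below (d2 = 2, A2 = 1, B2 = -1, b2 = -1) is invented for illustration and is not an instance from this paper.

```python
# One-dimensional follower LP, for a fixed leader decision x:
#   min 2*y   s.t.   x - y <= -1,   y >= 0
# i.e. d2 = 2, A2 = 1, B2 = -1, b2 = -1. For x = 1 the primal optimum is
# y* = 2 and the optimal dual multiplier is lam* = 2 (both found by hand).
d2, A2, B2, b2 = 2.0, 1.0, -1.0, -1.0
x, y, lam = 1.0, 2.0, 2.0

comp_primal = lam * (b2 - A2 * x - B2 * y)  # complementarity condition (6)
comp_dual = (d2 + lam * B2) * y             # complementarity condition (7)
gap = d2 * y - (lam * A2 * x - lam * b2)    # zero-duality-gap identity (10)

print(comp_primal, comp_dual, gap)  # 0.0 0.0 0.0
```

Both complementarity products and the duality gap vanish at the optimal primal-dual pair, which is exactly what Equation (10) enforces in the reformulation.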

Then, the problem (2) can be written as:

$\underset{x,y}{\mathrm{min}}$ ${c}_{1}x+{d}_{1}y$ (11)

subject to:

${A}_{1}x+{B}_{1}y\le {b}_{1}$ (12)

${A}_{2}x+{B}_{2}y\le {b}_{2}$ (13)

$-{\lambda }^{\text{T}}{B}_{2}\le {d}_{2}$ (14)

${d}_{2}y={\lambda }^{\text{T}}{A}_{2}x-{\lambda }^{\text{T}}{b}_{2}$ (15)

$x\in \left\{0,1\right\},\text{\hspace{0.17em}}y,\lambda \ge 0$ (16)

Considering problem (11)-(16), the bilinear term $\lambda {x}^{\text{T}}=\mu =\left[{\mu }_{1},\cdots ,{\mu }_{{n}_{1}}\right]$ , where ${\mu }_{i}$ is the i-th column of $\mu$ , can be linearized, and Constraint (15) can be rewritten (with ${A}_{2i}$ denoting the i-th column of ${A}_{2}$ ) to obtain the following equivalent problem:

$\underset{x,y}{\mathrm{min}}$ ${c}_{1}x+{d}_{1}y$ (17)

subject to:

${A}_{1}x+{B}_{1}y\le {b}_{1}$ (18)

${A}_{2}x+{B}_{2}y\le {b}_{2}$ (19)

$-{\lambda }^{\text{T}}{B}_{2}\le {d}_{2}$ (20)

${d}_{2}y=\underset{i=1}{\overset{{n}_{1}}{\sum }}{\mu }_{i}^{\text{T}}{A}_{2i}-{\lambda }^{\text{T}}{b}_{2}$ (21)

${\mu }_{li}\ge {\lambda }_{l}-M\left(1-{x}_{i}\right),\quad l=1,\cdots ,q;\text{ }i=1,\cdots ,{n}_{1}$ (22)

${\mu }_{i}\le \lambda ,\quad i=1,\cdots ,{n}_{1}$ (23)

${\mu }_{l}^{\text{T}}\le Mx,\quad l=1,\cdots ,q$ (24)

$\mu \ge 0$ (25)

$x\in \left\{0,1\right\},\text{\hspace{0.17em}}y,\lambda \ge 0$ (26)

where M is a large positive number.

With Constraints (22)-(25), it is ensured that each variable ${\mu }_{li}$ takes the value zero if ${x}_{i}=0$ and the value ${\lambda }_{l}$ if ${x}_{i}=1$ , for $i=1,\cdots ,{n}_{1}$ , $l=1,\cdots ,q$ .
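This forcing behavior can be checked directly: assuming M is larger than any attainable value of ${\lambda }_{l}$ (the standard big-M assumption), the interval that (22)-(25) leave for ${\mu }_{li}$ collapses to the single point ${\lambda }_{l}{x}_{i}$ . The helper below is purely illustrative.

```python
# Feasible interval that constraints (22)-(25) leave for mu_li,
# given lambda_l, a binary x_i, and a big-M constant with M >= lambda_l.
def mu_interval(lam_l, x_i, M):
    lower = max(lam_l - M * (1 - x_i), 0.0)  # from (22) and (25)
    upper = min(lam_l, M * x_i)              # from (23) and (24)
    return lower, upper

M = 1e4  # must dominate every lambda value tested below
for lam_l in (0.0, 3.5, 100.0):
    for x_i in (0, 1):
        lo, hi = mu_interval(lam_l, x_i, M)
        assert lo == hi == lam_l * x_i  # the product is linearized exactly

print(mu_interval(3.5, 1, M), mu_interval(3.5, 0, M))  # (3.5, 3.5) (0.0, 0.0)
```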

To avoid the use of binary variables, the problem can be transformed into the following model, equivalent to problem (17)-(26):

$\underset{x,y}{\mathrm{min}}$ ${c}_{1}x+{d}_{1}y$ (27)

subject to:

${A}_{1}x+{B}_{1}y\le {b}_{1}$ (28)

${A}_{2}x+{B}_{2}y\le {b}_{2}$ (29)

$-{\lambda }^{\text{T}}{B}_{2}\le {d}_{2}$ (30)

${d}_{2}y=\underset{i=1}{\overset{{n}_{1}}{\sum }}{\mu }_{i}^{\text{T}}{A}_{2i}-{\lambda }^{\text{T}}{b}_{2}$ (31)

${\mu }_{li}\ge {\lambda }_{l}-M\left(1-{x}_{i}\right),\quad l=1,\cdots ,q;\text{ }i=1,\cdots ,{n}_{1}$ (32)

${\mu }_{i}\le \lambda ,\quad i=1,\cdots ,{n}_{1}$ (33)

${\mu }_{l}^{\text{T}}\le Mx,\quad l=1,\cdots ,q$ (34)

$\mu \ge 0$ (35)

$x\le {x}^{2}$ (36)

$0\le x\le 1,\text{ }y,\lambda \ge 0$ (37)

Constraint (36) is treated as a complicating constraint, so it can be dualized. Based on this idea, the following section proposes a solution method for the DCBLP.
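A quick numeric scan (illustrative only) confirms the equivalence behind this relaxation: with $0\le x\le 1$ , the constraint $x\le {x}^{2}$ holds only at the endpoints, since ${x}^{2}<x$ on the open interval.

```python
# Scan [0, 1] on a grid: x <= x**2 together with 0 <= x <= 1
# admits only the binary values, since x**2 < x for 0 < x < 1.
grid = [k / 100 for k in range(101)]
feasible = [x for x in grid if x <= x ** 2]
print(feasible)  # [0.0, 1.0]
```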

3. Solution Algorithm

3.1. Lagrange Relaxation

Considering problem (27)-(37), with Constraint (36) as the complicating constraint and a given multiplier $u\ge 0$ , a Lagrangian relaxation is defined by:

$\begin{array}{ll}\underset{x,y}{\mathrm{min}}& {c}_{1}x+{d}_{1}y+u\left(x-{x}^{2}\right)\\ \text{subject to:}& {A}_{1}x+{B}_{1}y\le {b}_{1}\\ & {A}_{2}x+{B}_{2}y\le {b}_{2}\\ & -{\lambda }^{\text{T}}{B}_{2}\le {d}_{2}\\ & {d}_{2}y=\underset{i=1}{\overset{{n}_{1}}{\sum }}{\mu }_{i}^{\text{T}}{A}_{2i}-{\lambda }^{\text{T}}{b}_{2}\\ & {\mu }_{li}\ge {\lambda }_{l}-M\left(1-{x}_{i}\right),\quad l=1,\cdots ,q;\text{ }i=1,\cdots ,{n}_{1}\\ & {\mu }_{i}\le \lambda ,\quad i=1,\cdots ,{n}_{1}\\ & {\mu }_{l}^{\text{T}}\le Mx,\quad l=1,\cdots ,q\\ & \mu \ge 0\\ & 0\le x\le 1,\text{ }y,\lambda \ge 0\end{array}$ (38)

where the dual function $\omega \left(u\right)$ is defined as the Lagrange subproblem:

$\begin{array}{ll}\omega \left(u\right)=\underset{x,y}{\mathrm{min}}& {c}_{1}x+{d}_{1}y+u\left(x-{x}^{2}\right)\\ \text{subject to:}& {A}_{1}x+{B}_{1}y\le {b}_{1}\\ & {A}_{2}x+{B}_{2}y\le {b}_{2}\\ & -{\lambda }^{\text{T}}{B}_{2}\le {d}_{2}\\ & {d}_{2}y=\underset{i=1}{\overset{{n}_{1}}{\sum }}{\mu }_{i}^{\text{T}}{A}_{2i}-{\lambda }^{\text{T}}{b}_{2}\\ & {\mu }_{li}\ge {\lambda }_{l}-M\left(1-{x}_{i}\right),\quad l=1,\cdots ,q;\text{ }i=1,\cdots ,{n}_{1}\\ & {\mu }_{i}\le \lambda ,\quad i=1,\cdots ,{n}_{1}\\ & {\mu }_{l}^{\text{T}}\le Mx,\quad l=1,\cdots ,q\\ & \mu \ge 0\\ & 0\le x\le 1,\text{ }y,\lambda \ge 0\end{array}$ (39)

From the previous definition, if $\left({x}^{*},{y}^{*}\right)$ denotes an optimal solution of problem (27)-(37), then for all $u\ge 0$

$\omega \left(u\right)\le {c}_{1}{x}^{*}+{d}_{1}{y}^{*}$ (40)

Then, the values of the dual function are lower bounds on the optimal value of (27)-(37). From this definition, we have the duality gap:

${c}_{1}{x}^{*}+{d}_{1}{y}^{*}-\omega \left({u}^{*}\right)\ge 0$ (41)

To solve the problem, we try to reduce the duality gap. If problem (27)-(37) is nonconvex, the duality gap is usually greater than zero; for convex problems, the duality gap vanishes.

So the dual problem is to find the vector of multipliers u for which the lower bound given by the dual function is maximum:

$\begin{array}{l}\underset{u}{\mathrm{max}}\omega \left(u\right)\\ \text{subject}\text{\hspace{0.17em}}\text{to}:u\ge 0\end{array}$ (42)

If the finitely many feasible points $\left({x}^{s},{y}^{s}\right)$ , $s=1,\cdots ,r$ , of the feasible region of (39) were available, the dual function could be evaluated by enumerating all of them as

$\omega \left(u\right)=\underset{s=1,\cdots ,r}{\mathrm{min}}\text{ }{c}_{1}{x}^{s}+{d}_{1}{y}^{s}+u\left({x}^{s}-{\left({x}^{s}\right)}^{2}\right)$ (43)

To solve the problem, a dual decomposition algorithm is proposed, using a method of dual cutting planes. In this method, an approximation of the dual function is maximized; the approach has convergence properties similar to those of subgradient methods.

The dual function is concave, and the problem can be reformulated as the following problem, called the master problem of the Lagrangian relaxation:

$\begin{array}{ll}\underset{u\ge 0,\text{ }\omega }{\mathrm{max}}& \omega \\ \text{subject to:}& \omega \le {c}_{1}{x}^{1}+{d}_{1}{y}^{1}+u\left({x}^{1}-{\left({x}^{1}\right)}^{2}\right)\\ & \phantom{\omega \le }⋮\\ & \omega \le {c}_{1}{x}^{r}+{d}_{1}{y}^{r}+u\left({x}^{r}-{\left({x}^{r}\right)}^{2}\right)\end{array}$ (44)

where each constraint is a Lagrange cut.

The optimization of the dual problem consists of iteratively solving the master problem, whose number of Lagrange cuts increases with each iteration.

With the solution of each master problem, a new value of the multiplier u is obtained, which, when evaluated in the Lagrange subproblem, generates a new cut for the master problem.
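This master-subproblem loop can be sketched for the special case of a scalar multiplier u, replacing the Lagrange subproblem by a minimization over a short enumerated candidate list as in (43). All numbers below are invented: each pair stands for a candidate $\left({x}^{s},{y}^{s}\right)$ with value ${c}_{1}{x}^{s}+{d}_{1}{y}^{s}$ and cut slope ${x}^{s}-{\left({x}^{s}\right)}^{2}$ .

```python
# Cutting-plane iteration for a scalar multiplier u (illustrative data).
# Each candidate s contributes a Lagrange cut  omega <= a_s + b_s * u.
candidates = [(1.0, 0.25),  # fractional candidate, e.g. x = 0.5
              (2.0, 0.16),  # fractional candidate, e.g. x = 0.2
              (3.0, 0.00)]  # binary candidate (slope x - x**2 = 0)

def subproblem(u):
    """Evaluate omega(u) by enumeration; return the minimizing candidate."""
    return min(candidates, key=lambda ab: ab[0] + ab[1] * u)

def master(cuts, u_max=100.0):
    """Maximize the min of the cuts over u in [0, u_max]. For a piecewise-
    linear concave function the maximum is at a cut intersection or a bound."""
    pts = {0.0, u_max}
    for a1, b1 in cuts:
        for a2, b2 in cuts:
            if b1 != b2:
                t = (a2 - a1) / (b1 - b2)
                if 0.0 <= t <= u_max:
                    pts.add(t)
    return max(pts, key=lambda u: min(a + b * u for a, b in cuts))

u, cuts = 0.0, []
for _ in range(20):
    cut = subproblem(u)       # evaluate the subproblem at the current u
    if cut in cuts:
        break                 # no new cut: the dual optimum is reached
    cuts.append(cut)          # add the Lagrange cut to the master
    u = master(cuts)          # re-optimize the master problem

omega = min(a + b * u for a, b in cuts)
print(omega)  # 3.0: the dual bound matches the best binary candidate
```

In the actual algorithm u is a vector, the subproblem is the linear program (39), and boundary cuts handle unbounded cases; the loop above only illustrates how the growing cut collection drives the dual bound upward.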

When a multiplier generates an unbounded subproblem, a constraint that eliminates that multiplier must be added to the master problem. This cut is known as a boundary cut and is obtained by solving the boundary subproblem

$\begin{array}{ll}\omega \left(u\right)=\underset{x,y}{\mathrm{min}}& {c}_{1}x+{d}_{1}y+u\left(x-{x}^{2}\right)\\ \text{subject to:}& {A}_{1}x+{B}_{1}y\le 0\\ & {A}_{2}x+{B}_{2}y\le 0\\ & -{\lambda }^{\text{T}}{B}_{2}\le 0\\ & {d}_{2}y=\underset{i=1}{\overset{{n}_{1}}{\sum }}{\mu }_{i}^{\text{T}}{A}_{2i}-{\lambda }^{\text{T}}{b}_{2}\\ & {\mu }_{li}-{\lambda }_{l}-M{x}_{i}\ge 0,\quad l=1,\cdots ,q;\text{ }i=1,\cdots ,{n}_{1}\\ & {\mu }_{i}\le \lambda ,\quad i=1,\cdots ,{n}_{1}\\ & {\mu }_{l}^{\text{T}}\le Mx,\quad l=1,\cdots ,q\\ & \mu \ge 0\\ & 0\le x\le 1,\text{ }y,\lambda \ge 0\end{array}$ (45)

If the optimal value is negative, a boundary cut of the following form must be added to the master problem:

$0\le {c}_{1}{x}^{r}+{d}_{1}{y}^{r}+u\left({x}^{r}-{\left({x}^{r}\right)}^{2}\right)$ (46)

In this way, the Lagrangian relaxation algorithm iterates between a master problem formed by Lagrange and boundary cuts and a subproblem that evaluates the proposed multipliers.

3.2. Lagrangian Relaxation Algorithm

The structure of the Lagrangian relaxation algorithm considering the above interpretation is described below:

Step 0. Initialization $k=0$ , $ϵ={10}^{-6}$

Step 1. Solve the Lagrangian Relaxation Master Problem

$\begin{array}{ll}\underset{u\ge 0,\text{ }\omega }{\mathrm{max}}& \omega \\ \text{subject to:}& \omega \le {c}_{1}{x}^{k}+{d}_{1}{y}^{k}+u\left({x}^{k}-{\left({x}^{k}\right)}^{2}\right)\quad \text{if }{u}^{k}\text{ generated a Lagrange cut}\\ & 0\le {c}_{1}{x}^{k}+{d}_{1}{y}^{k}+u\left({x}^{k}-{\left({x}^{k}\right)}^{2}\right)\quad \text{if }{u}^{k}\text{ generated a boundary cut}\end{array}$ (47)

Get the value of ${u}^{k}$ and go to step 2.

Step 2. Solve the boundary subproblem (45). If ${\omega }^{*}\left(u\right)\ge 0$ , go to Step 3. Otherwise, obtain the solution ${x}^{k}$ , form the corresponding boundary cut, and go to Step 1.

Step 3. Solve the Lagrange subproblem (39). Get the solution ${x}^{k}$ , form the corresponding Lagrange cut, and go to step 4.

Step 4. If $|{u}^{k}-{u}^{k-1}|<ϵ$ , stop. Otherwise, set $k=k+1$ and go to Step 1.

4. Results and Discussion

4.1. Comparisons

In the literature, some articles consider the same type of problems as this work. Those problems were used to test the performance of the proposed algorithm.

Numerical experiments were performed on an Intel Core i7 2.5 GHz PC with 8.0 GB RAM, compiling 64-bit GAMS 24.7.3 for Windows with the CPLEX and LINDO solvers. Table 1 shows the results obtained when using the algorithm and its comparison with published works.

Based on the results obtained for these examples, the algorithm reaches the optimum in a very short time and in few iterations: 2, 4, 3, and 4, respectively.

4.2. Test of Algorithm

To show how the algorithm operates on application problems, sixty-one problems were tested, for which the leader and follower solutions are presented. The performance of the algorithm was measured by the execution time and by comparing the obtained solutions with those of the single-level reformulation with Karush-Kuhn-Tucker conditions and with the Benders decomposition method proposed in the literature. The results are shown in Table 2.

The experiments were performed on an Intel Core i7 2.5 GHz PC with 8.0 GB RAM, compiling 64-bit GAMS 24.7.3 for Windows with the CPLEX and IPOPT solvers.

The behavior of the algorithm is generally good, as can be seen in Figures 1-3, and the execution time is reasonable: even for problems with 30 binary variables controlled by the leader and 2700 continuous variables controlled by the follower, the maximum running time was 12 minutes. Using the Lagrange relaxation algorithm, it is possible to find a global solution efficiently.

5. Conclusions

In this paper, we propose an algorithm to solve the discrete-continuous bi-level problem that obtains results very close to the optimum. Lagrangian relaxation is applied to the single-level reformulation of the problem obtained through the Karush-Kuhn-Tucker conditions, and the binary variables are relaxed for the construction of the Lagrange subproblem.

The computational results show that the algorithm performs well. The method reaches the optimum in 68% of the solved problems, in relatively short times: the maximum time needed to find a solution was 12 minutes, for a problem with 30 binary variables in the leader problem and 2700 continuous variables in the follower, with two constraints in the leader and 5610 in the follower.

Figure 1. Comparison of the optimal value of the leader objective function.

Figure 2. Comparison of the optimal value of the follower objective function.

Figure 3. Computing time in seconds.

Table 1. Comparison of results.

Table 2. Lagrange relaxation algorithm results of test examples.

As future work, the approach will be applied without the single-level reformulation of the bilevel problem, and more than one objective will be considered in the leader function.

The method can be used for problems involving cooperative decision-making at two levels, e.g. allocation of resources at minimum cost while maximizing the level of service, location of unwanted facilities, among others.

Acknowledgements

We thank the National Council of Science and Technology (CONACYT) of Mexico for the grant and the National Autonomous University of Mexico for the resources provided for the development of this research.

Cite this paper
Alarcón-Bernal, Z. and Aceves-García, R. (2019) A Lagrange Relaxation Based Approach to Solve a Discrete-Continous Bi-Level Model. Open Journal of Optimization, 8, 100-111. doi: 10.4236/ojop.2019.83009.
References
   Von Stackelberg, H. (1934) Marktform und Gleichgewicht. Springer, Berlin.

   Colson, B., Marcotte, P. and Savard, G. (2007) An Overview of Bilevel Optimization. Annals of Operations Research, 153, 235-256.
https://doi.org/10.1007/s10479-007-0176-2

   Ben-Ayed, O. and Blair, C.E. (1990) Computational Difficulties of Bilevel Linear Programming. Operations Research, 38, 556-560.
https://doi.org/10.1287/opre.38.3.556

   Moore, J.T. and Bard, J.F. (1990) Mixed Integer Linear Bilevel Programming Problem. Operations Research, 38, 911-921.
https://doi.org/10.1287/opre.38.5.911

   Wen, U.P. and Yang, Y.H. (1990) Algorithms for Solving the Mixed Integer Two-Level Linear Programming Problem. Computers and Operations Research, 17, 133-142.
https://doi.org/10.1016/0305-0548(90)90037-8

   Bard, J.F. and Moore, J.T. (1992) An Algorithm for the Discrete Bilevel Programming Problem. Naval Research Logistics, 39, 419-435.
https://doi.org/10.1002/1520-6750(199204)39:3<419::AID-NAV3220390310>3.0.CO;2-C

   Edmunds, T.A. and Bard, J.F. (1992) An Algorithm for the Mixed-Integer Nonlinear Bilevel Programming Problem. Annals of Operations Research, 34, 149-162.
https://doi.org/10.1007/BF02098177

   Dempe, S. (2003) Annotated Bibliography on Bilevel Programming and Mathematical Programs with Equilibrium Constraints. Optimization, 52, 333-359.
https://doi.org/10.1080/0233193031000149894

   Vicente, L., Savard, G. and Judice, J. (1996) Discrete Linear Bilevel Programming Problem. Journal of Optimization Theory and Applications, 89, 597-614.
https://doi.org/10.1007/BF02275351

   Gümüş, Z.H. and Floudas, C.A. (2005) Global Optimization of Mixed-Integer Bilevel Programming Problems. Computational Management Science, 2, 181-212.
https://doi.org/10.1007/s10287-005-0025-1

   Faísca, N.P., Dua, V., Rustem, B., Saraiva, P.M. and Pistikopoulos, E.N. (2007) Parametric Global Optimisation for Bilevel Programming. Journal of Global Optimization, 38, 609-623.
https://doi.org/10.1007/s10898-006-9100-6

   Domínguez, L.F. and Pistikopoulos, E.N. (2010) Multiparametric Programming Based Algorithms for Pure Integer and Mixed-Integer Bilevel Programming Problems. Computers and Chemical Engineering, 34, 2097-2106.
https://doi.org/10.1016/j.compchemeng.2010.07.032

   Köppe, M., Queyranne, M. and Ryan, C.T. (2010) Parametric Integer Programming Algorithm for Bilevel Mixed Integer Programs. Journal of Optimization Theory and Applications, 146, 137-150.
https://doi.org/10.1007/s10957-010-9668-3

   Mitsos, A. (2010) Global Solution of Nonlinear Mixed-Integer Bilevel Programs. Journal of Global Optimization, 47, 557-582.
https://doi.org/10.1007/s10898-009-9479-y

   Xu, P. and Wang, L. (2014) An Exact Algorithm for the Bilevel Mixed Integer Linear Programming Problem under Three Simplifying Assumptions. Computers and Operations Research, 41, 309-318.
https://doi.org/10.1016/j.cor.2013.07.016

   Sharma, V., Dahiya, K. and Verma, V. (2014) A Class of Integer Linear Fractional Bilevel Programming Problems. Optimization, 63, 1565-1581.
https://doi.org/10.1080/02331934.2014.883509

   Saharidis, G.K. and Ierapetritou, M.G. (2009) Resolution Method for Mixed Integer Bi-Level Linear Problems Based on Decomposition Technique. Journal of Global Optimization, 44, 29-51.
https://doi.org/10.1007/s10898-008-9291-0

   Fontaine, P. and Minner, S. (2014) Benders Decomposition for Discrete-Continuous Linear Bilevel Problems with Application to Traffic Network Design. Transportation Research Part B: Methodological, 70, 163-172.
https://doi.org/10.1016/j.trb.2014.09.007

   Caramia, M. and Mari, R. (2015) A Decomposition Approach to Solve a Bilevel Capacitated Facility Location Problem with Equity Constraints. Optimization Letters, 10, 997-1019.

   Rahmani, A. and MirHassani, S.A. (2015) Lagrangean Relaxation-Based Algorithm for Bi-Level Problems. Optimization Methods and Software, 30, 1-14.
https://doi.org/10.1080/10556788.2014.885519

   Dempe, S. (2002) Foundations of Bilevel Programming. Springer Science & Business Media, Berlin.

   Bard, J.F. (1998) Practical Bilevel Optimization: Algorithms and Applications, Volume 30. Springer Science & Business Media, Berlin.

   Cao, D. and Chen, M.Y. (2006) Capacitated Plant Selection in a Decentralized Manufacturing Environment: A Bilevel Optimization Approach. European Journal of Operational Research, 169, 97-110.
https://doi.org/10.1016/j.ejor.2004.05.016

   Bertsekas, D.P. (1999) Nonlinear Programming. Athena Scientific, Belmont.

   Nowak, I. (2006) Relaxation and Decomposition Methods for Mixed Integer Nonlinear Programming, Volume 152. Springer Science & Business Media, Berlin.

   De Haro, S.C.L., Galán, A.R. and Moreno, A.B. (2004) Modelado de Algoritmos de Descomposición con GAMS [Modeling of Decomposition Algorithms with GAMS].

   Tuy, H. and Ghannadan, S. (1998) A New Branch and Bound Method for Bilevel Linear Programs. In: Multilevel Optimization: Algorithms and Applications, Springer, Berlin, 231-249.
https://doi.org/10.1007/978-1-4613-0307-7_10
