A Complex Algorithm for Solving a Kind of Stochastic Programming


1. Introduction

Stochastic programming with recourse, an important method for solving optimization problems with uncertain parameters, was first proposed by G. Dantzig, the founder of linear programming. When designing the optimal number of airline flights, he considered a two-stage stochastic programming problem with recourse that accounts for the randomness of passenger flow [1]. Through extensive research by scholars such as G. Tintner and D. Walkup [2] [3], the theory and application of stochastic programming have been developed systematically, gradually showing their advantages in practical applications. Stochastic programming problems typically involve many variables, many constraints, and large scale. Solving them has therefore always been difficult, and it remains a topic that many scholars at home and abroad are keen to study. The dual decomposition L-shaped algorithm given in [4] is considered one of the classical algorithms for solving stochastic programming problems. It is a cutting plane method belonging to the class of outer linear approximations: by adding feasibility cuts and optimality cuts, it gradually reduces the feasible region until the algorithm converges to the optimal solution. Since the birth of the L-shaped algorithm, a large number of studies have focused on its improvement, including improvements to the simplex multipliers and to the optimality cuts, and research on the L-shaped algorithm has become increasingly mature [5] [6].

Generally, in the study of stochastic programming with recourse, the second stage function with recourse is determined by the expectation criterion on the premise that the probability distribution of the random variable is completely known, so that the stochastic programming problem becomes equivalent to a deterministic mathematical programming problem. In practical problems, however, due to the lack of historical data and the limitations of statistical methods, the probability distribution of the random variables is not easily obtained, and often only partial information is available. The classical stochastic programming algorithms are then no longer applicable. To address this problem, the linear partial information (LPI) theory was proposed in reference [7], which describes the uncertain distribution through linear constraints. This theory provides a way to deal with incomplete information about random variables in stochastic programming. Reference [8] described in detail how to use the $\alpha $ -cut technique to transform probability constraints with fuzzy relations in stochastic programming into LPI form, and obtained the stochastic programming model under a linear partial information probability distribution.

Once the stochastic programming problem is transformed into its equivalent deterministic problem, it can be regarded as a nonlinear programming problem and solved by nonlinear programming methods. With the continuous development of such methods, a large number of nonlinear programming techniques have been applied to stochastic programming. Reference [9] studied the quadratic stochastic programming model with recourse, transformed it into a quadratic programming model by means of B-regularity [10] and the semismooth concept [11] [12], and finally introduced a quasi-Newton method to solve the problem. The complex method [13] [14] is a variable polyhedron algorithm that determines the search direction only by comparing objective function values, and it can solve constrained nonlinear programming problems. Its requirements on the objective function are mild, and it is simple and easy to implement; in particular, it can handle stochastic programming models with probability constraints whose objective cannot be differentiated. The complex method is therefore a suitable tool for solving the two-stage stochastic programming problem under an uncertain probability distribution.

Aiming at stochastic programming under an uncertain probability distribution, this paper discusses, on the basis of the literature [8] [13] [14], a two-stage stochastic programming model based on the max-min expectation criterion under LPI. This is a robust decision-making model under a linear partial information probability distribution and has high practical value. Considering the discreteness of the random variables and the uncertainty of the probability distribution in the model, the paper introduces the complex method and designs a stochastic programming algorithm based on it. The validity of the algorithm is verified by numerical examples.

2. The Stochastic Programming Model with Recourse under LPI

Let $\left(\Omega ,{\text{2}}^{\Omega},P\right)$ be a probability space, where $\Omega =\left\{{\omega}_{\text{1}},{\omega}_{2},\cdots ,{\omega}_{l}\right\}$ is a finite sample space, ${\text{2}}^{\Omega}$ is the power set of $\Omega $, and $P={\left({p}_{1},{p}_{2},\cdots ,{p}_{l}\right)}^{\text{T}}$ is the probability distribution on $\Omega $, that is, ${p}_{i}=\mathrm{Pr}\left(\left\{\omega ={\omega}_{i}\right\}\right),i=1,\cdots ,l$, where $\mathrm{Pr}\left(\theta \right)$ denotes the probability of event $\theta $ and $P\left(\Omega \right)=1$. Reference [9] considers the following stochastic programming problem:

$\{\begin{array}{l}\underset{x\in {R}^{n}}{\mathrm{min}}\text{}f\left(x\right)+g\left(y\right)\\ \text{s}\text{.t}.\text{}Cx\le b,\end{array}$ (1)

where,

$\begin{array}{l}g\left(y\right)={E}_{P}\left(\varphi \left(x,\omega \right)\right)\\ \varphi \left(x,\omega \right)=\underset{y\in {R}^{m}}{\mathrm{max}}\text{}-\frac{1}{2}{y}^{\text{T}}Hy+{\left(\sigma \left(\omega \right)-x\right)}^{\text{T}}y\\ \text{s}\text{.t}.\text{}Wy\le q\end{array}$ (2)

here, $x\in {R}^{n}$ and $y\in {R}^{m}$ are the decision variables of the first and second stages respectively, $H\in {R}^{m\times m}$ is a symmetric positive definite matrix, $\sigma \left(\omega \right)\in {R}^{m}$ is a random variable on $\Omega $, $C\in {R}^{k\times n},b\in {R}^{k\times 1},W\in {R}^{t\times m},q\in {R}^{t\times 1}$ are known coefficient matrices, $f\left(x\right)$ is a convex function of the decision variable x, and $g\left(y\right)$ is the second stage function with recourse.

Assuming that the random variables in the model are finitely discrete, the second stage compensation function $g\left(y\right)$ can be expressed as

$g\left(y\right)={\displaystyle \underset{i=1}{\overset{l}{\sum}}{p}_{i}\varphi \left(x,{\omega}_{i}\right)}$ (3)

The above model is established under the assumption that the probability distribution information of the random variables is complete, that is, $P={\left({p}_{1},{p}_{2},\cdots ,{p}_{l}\right)}^{\text{T}}$ is completely determined. Due to the limitations of historical data, however, such complete probability distribution information is not easy to obtain. Based on the literature [8], and taking into account the structure of the stochastic programming model and its application scenarios in practical problems, the paper makes the following assumption about the probability distribution of the random variables:

Suppose that the probability distribution of the random variables is not completely known, but linear partial information is available, that is, the following constraints are satisfied:

$\phi =\left\{P={\left({p}_{1},\cdots ,{p}_{l}\right)}^{\text{T}}\in {R}^{l}|BP\le d,{\displaystyle \underset{i=1}{\overset{l}{\sum}}{p}_{i}=1;{p}_{i}\ge 0,i=1,\cdots ,l}\right\}$ (4)

In the formula, $B\in {R}^{{m}_{1}\times l}$ is a known matrix and $d\in {R}^{{m}_{\text{1}}}$ a known vector.

From the above assumption, the solution set $\phi $ determined by the linear partial information LPI(P) on the probability distribution P is a bounded convex polyhedron. Every point of this convex polyhedron $\phi $ is an admissible probability distribution of the random variables in the model.
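To make the LPI constraint (4) concrete, the membership test for the polyhedron $\phi $ can be sketched in a few lines of Python. This is a minimal sketch: the matrix B, the vector d, and the candidate distributions below are illustrative values, not taken from the paper.

```python
# Check whether a candidate distribution P lies in the LPI polytope
# phi = { P : B P <= d, sum(p_i) = 1, p_i >= 0 }.
# B, d and the candidate P values are illustrative, not from the paper.

def in_lpi_polytope(P, B, d, tol=1e-9):
    """Return True if P satisfies the linear partial information constraints."""
    if any(p < -tol for p in P):          # p_i >= 0
        return False
    if abs(sum(P) - 1.0) > tol:           # sum p_i = 1
        return False
    for row, bound in zip(B, d):          # B P <= d, row by row
        if sum(r * p for r, p in zip(row, P)) > bound + tol:
            return False
    return True

# Example constraint: p1 + p2 <= 0.5 over a 3-point sample space.
B = [[1.0, 1.0, 0.0]]
d = [0.5]
print(in_lpi_polytope([0.2, 0.3, 0.5], B, d))  # True
print(in_lpi_polytope([0.4, 0.4, 0.2], B, d))  # False: p1 + p2 = 0.8 > 0.5
```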

Since the probability distribution of the random variables carries only linear partial information, determining the second stage function with recourse simply by the expectation criterion is no longer applicable. The paper extends the second stage function with recourse and, combining the max-min expectation criterion of the expectation model, gives the two-stage stochastic programming model with recourse under LPI discussed in the paper:

$\{\begin{array}{l}\underset{x\in {R}^{n}}{\mathrm{min}}\text{}f\left(x\right)+\underset{P\in \xi}{\mathrm{max}}{\displaystyle \underset{i=1}{\overset{l}{\sum}}{p}_{i}\varphi \left(x,{\omega}_{i}\right)}\\ \text{s}\text{.t}.\text{}Cx\le b,\end{array}$ (5)

where,

$\begin{array}{l}\varphi \left(x,\omega \right)=\underset{y\in {R}^{m}}{\mathrm{max}}\text{}-\frac{1}{2}{y}^{\text{T}}Hy+{\left(\sigma \left(\omega \right)-x\right)}^{\text{T}}y\\ \text{s}\text{.t}.\text{}Wy\le q\end{array}$ (6)

$\xi =\left\{P={\left({p}_{1},\cdots ,{p}_{l}\right)}^{\text{T}}\in {R}^{l}|BP\le d,{\displaystyle \underset{i=1}{\overset{l}{\sum}}{p}_{i}=1;{p}_{i}\ge 0,i=1,\cdots ,l}\right\}$ (7)

Model (5)-(7) is the stochastic programming model with a linear partial information probability distribution given in the paper. It is a generalized form of the stochastic programming model in reference [9]: when P takes a fixed value in $\xi $, that is, when the probability distribution information of the random variable is complete, the model coincides with the model in [9]. The biggest difference between the two models is that the probability distribution information assumed in this paper is incomplete, so when determining the function with recourse in the second stage, the paper maximizes the expected compensation value over all admissible distributions. This is a conservative, robust strategy: it guarantees that the optimization objective of the final decision is no worse than the objective attained under any admissible distribution.
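For a fixed first-stage decision x, the inner maximization in (5) is a linear program in P over the polyhedron $\xi $. A minimal sketch, assuming SciPy's `linprog` is available: the recourse values $\varphi_i$ below are illustrative placeholders, while B and d reproduce the linear constraints used in case (2) of Section 4.

```python
# The inner problem max_{P in xi} sum_i p_i * phi_i is a linear program
# over the LPI polytope.  Sketch using scipy.optimize.linprog; the values
# phi_i are illustrative, B and d follow case (2) of the example section.
import numpy as np
from scipy.optimize import linprog

def worst_case_expectation(phi, B, d):
    """Maximize phi^T P subject to B P <= d, sum(P) = 1, P >= 0."""
    l = len(phi)
    res = linprog(c=-np.asarray(phi, dtype=float),   # linprog minimizes, so negate
                  A_ub=B, b_ub=d,
                  A_eq=np.ones((1, l)), b_eq=[1.0],
                  bounds=[(0, None)] * l)
    return -res.fun, res.x                           # worst-case value and P

# Constraints of case (2): group caps plus bounds 1/9 <= p7 <= 1/5.
B = np.array([[1, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 1, 1, 0, 0],
              [0, 0, 0, 0, 0, 1, 1],
              [0, 0, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0, 0, -1]], dtype=float)
d = np.array([1/2, 1/3, 1/3, 1/5, -1/9])
value, P = worst_case_expectation([1, 2, 3, 4, 5, 6, 7], B, d)
print(value)   # mass is pushed onto the worst (largest) scenarios
```

With these constraints the optimizer places as much probability as the caps allow on the scenarios with the largest $\varphi_i$, which is exactly the conservative behavior of the max-min criterion.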

Because the second stage compensation function $\underset{P\in \xi}{\mathrm{max}}{\displaystyle \underset{i=1}{\overset{l}{\sum}}{p}_{i}\varphi \left(x,{\omega}_{i}\right)}$ is not differentiable, gradient information for the model is unavailable and the usual gradient-based methods do not apply. In order to solve the two-stage stochastic programming problem with a linear partial information probability distribution given in this paper, the complex method, a direct optimization method, is introduced. The complex method is adapted to the solution process of the model, yielding a stochastic programming algorithm under an uncertain probability distribution based on the improved complex method. Several examples are then used to verify the effectiveness of the designed model and algorithm.

3. Complex Method

As a direct optimization algorithm, the complex method is simple and easy to implement, so it is widely used in engineering optimization problems [15] [16]. The method can be regarded as a variable polyhedron method derived from the simplex search method. The biggest difference from the simplex search method is that the complex method can be applied directly to optimization problems with constraints and does not restrict the number of vertices of the complex shape, so it is more widely applicable. Assuming that the variables of the optimization problem lie in an n-dimensional space, the complex shape updated at each iteration is a polyhedron with more than $n+1$ vertices, which can be viewed as a combination of several simplexes. Considering the nondifferentiability of the stochastic programming model established in this paper, the complex method is introduced into the solution of stochastic programming under LPI, and the optimal value of the problem is obtained through the variable polyhedron iteration of the complex method.

The complex method is an optimization method that determines the search direction only by comparing objective function values. Its basic idea is as follows: first, construct an initial complex shape in the feasible region. Then, by comparing the objective function values of the vertices, find a new point in the feasible region with an improved objective value and use it to replace the vertex with the worst objective value, forming a new complex shape. By repeating this process, the complex shape is continuously reflected, moved and shrunk, gradually approaching the optimum. When the objective function values of the vertices differ only slightly, or the vertices are very close to each other, the vertex with the lowest objective function value can be taken as the optimal solution [17] [18]. The iterative process of the complex method is described in detail below.

In n-dimensional space, a polyhedron composed of $k\ge n+1$ points is called a complex shape. In the literature, there are two main ways to generate the initial complex shape: defining it manually and generating it randomly. Considering the complexity of the stochastic programming model, the paper adopts the second method. The specific procedure for randomly generating the initial complex shape is as follows:

1) Suppose that the vertices of the complex shape are n-dimensional, the number of vertices of the initial complex shape is determined to be k, and an initial vertex is selected manually in a given feasible region;

2) Suppose that the upper and lower bounds of the vertices of the complex shape are $upb\in {R}^{n},lob\in {R}^{n}$ respectively, where $upb$ are the upper bounds of the vertices and $lob$ are the lower bounds of the vertices. Then the remaining $k-1$ vertices are generated by using the random number in $\left[0,1\right]$. The build rule is ${x}_{i}=lob+{r}_{i}\left(upb-lob\right)$, where ${r}_{i}$ is the random number in interval $\left[0,1\right]$, $i=2,\cdots ,k$ ;

3) Check whether the generated k vertices are in the feasible region: assuming that w vertices are in the feasible region and the remaining $k-w$ vertices are not in the feasible region, the $k-w$ vertices that are not in the feasible region can be translated into the feasible region by the following methods:

a) The geometric centers of w vertices in the feasible region are calculated and recorded as ${x}_{gc}=\frac{1}{w}{\displaystyle \underset{i=1}{\overset{w}{\sum}}{x}_{i}}$ ;

b) If $k-w$ vertices that are not in the feasible region are recorded as ${x}_{out,j}$, $j=\text{1},\cdots ,k-w$, then a vertex ${{x}^{\prime}}_{out,j}$ in the feasible region can be found on the line between ${x}_{gc}$ and ${x}_{out,j}$. The specific searching method is as follows:

${{x}^{\prime}}_{out,j}={x}_{gc}+\rho \left({x}_{out,j}-{x}_{gc}\right),\rho \in \left(0,1\right),j=1,\cdots ,k-w$ (8)

If the result ${{x}^{\prime}}_{out,j}$ is not in the feasible region, $\rho $ is halved ($\rho =0.5\rho $) repeatedly until the vertex is pulled into the feasible region. Through the above steps, an initial complex shape satisfying the conditions is obtained.
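The random generation procedure in steps 1)-3), including the pull-in formula (8) with $\rho $ halving, can be sketched as follows. This is a minimal sketch under stated assumptions: the feasible region and box bounds are illustrative, and the manually chosen first vertex is assumed feasible.

```python
# Sketch of the random initial-complex generation described above, for a
# feasible region {x : C x <= b} intersected with the box [lob, upb].
# The region and bounds below are illustrative, not those of the paper.
import random

def feasible(x, C, b):
    return all(sum(c * xi for c, xi in zip(row, x)) <= bi + 1e-9
               for row, bi in zip(C, b))

def initial_complex(x0, k, lob, upb, C, b, rho=0.9):
    """Generate k vertices; infeasible ones are pulled toward the
    geometric center of the feasible ones until they enter the region."""
    verts = [list(x0)]                       # manually chosen feasible vertex
    n = len(x0)
    while len(verts) < k:                    # remaining k-1 random vertices
        verts.append([lo + random.random() * (up - lo)
                      for lo, up in zip(lob, upb)])
    inside = [v for v in verts if feasible(v, C, b)]
    xgc = [sum(v[j] for v in inside) / len(inside) for j in range(n)]
    fixed = list(inside)
    for v in verts:
        if feasible(v, C, b):
            continue
        r = rho
        while True:                          # formula (8), halving rho
            cand = [g + r * (vi - g) for g, vi in zip(xgc, v)]
            if feasible(cand, C, b):
                fixed.append(cand)
                break
            r *= 0.5
    return fixed

random.seed(0)
C = [[1.0, 1.0]]; b = [1.0]                  # region: x1 + x2 <= 1 in the box
V = initial_complex([0.0, 0.0], 6, [0.0, 0.0], [1.0, 1.0], C, b)
print(all(feasible(v, C, b) for v in V))     # True: every vertex is feasible
```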

In the generated complex shape, let the worst point be denoted ${x}^{h}$, the second-worst point ${x}^{s}$, and the best point ${x}^{l}$. The centroid of the remaining vertices after removing the worst point is calculated by the formula ${x}^{c}=\frac{1}{k-1}{\displaystyle \underset{i\ne h}{\sum}{x}^{i}}$ and denoted ${x}^{c}$. In the optimization process of the complex method, the vertex transformations applied to the polyhedron during the iteration are as follows:

1) Mapping method:

Transformation thought: We expect to find a better point on the opposite side of the centroid from the worst point ${x}^{h}$, to replace ${x}^{h}$.

Search direction: It searches along the direction from the worst point ${x}^{h}$ to the centroid ${x}^{c}$, i.e. along the direction of ${x}^{h}\to {x}^{c}$.

Step factor: Mapping factor $\alpha :\alpha >1$, representing the step size of the mapping.

Mapping iteration formula: ${x}^{r}={x}^{c}+\alpha \left({x}^{c}-{x}^{h}\right)$, where ${x}^{r}$ is called mapping point.

Rule of judgement: If ${x}^{r}$ is in the feasible region and $f\left({x}^{r}\right)<f\left({x}^{h}\right)$, ${x}^{r}$ will be used instead of ${x}^{h}$ to form a new complex shape and carry out the next iteration.

2) Expansion method:

Transformation thought: According to the quality of the mapping point ${x}^{r}$ obtained by the mapping method, we expect to obtain an even better vertex. If the function value of the mapping point is less than that of the best point ${x}^{l}$, i.e. $f\left({x}^{r}\right)<f\left({x}^{l}\right)$, then the direction from ${x}^{c}$ to ${x}^{r}$ is the current optimal direction, and the complex shape can be expanded in this direction.

Expansion iteration formula: ${x}^{e}={x}^{c}+\beta \left({x}^{r}-{x}^{c}\right)$.

Expansion coefficient $\beta :\beta \ge 1$.

Rule of judgement:

a) If $f\left({x}^{e}\right)<f\left({x}^{l}\right)$, the expansion is successful, and ${x}^{e}$ replaces ${x}^{h}$ to form a new complex shape;

b) If $f\left({x}^{e}\right)\ge f\left({x}^{l}\right)$, the expansion fails, and ${x}^{r}$ replaces ${x}^{h}$ to form a new complex shape.

3) Shrinkage method:

Transformation thought: If $f\left({x}^{h}\right)<f\left({x}^{r}\right)$ in the mapping method, the mapping step size is too large; we halve it by setting $\alpha =0.5\alpha $ and repeat the mapping. If the mapping still fails when $\alpha $ has been reduced below ${10}^{-5}$, the current search direction is wrong. In this case, the shrinkage method is used to find a new search direction within the complex shape.

Shrinkage direction: the failure of the mapping method shows that the search direction ${x}^{h}\to {x}^{c}$ is not promising, so the complex shape is shrunk along the direction from the centroid ${x}^{c}$ to the worst point ${x}^{h}$, i.e. along ${x}^{c}\to {x}^{h}$.

Shrinkage coefficient: $\gamma :0<\gamma <1$.

Shrinkage formula: ${x}^{k}={x}^{c}-\gamma \left({x}^{c}-{x}^{h}\right)$.

Rule of judgement: If $f\left({x}^{k}\right)<f\left({x}^{h}\right)$, the shrinkage point ${x}^{k}$ replaces the worst point ${x}^{h}$ to form a new polyhedron; if the shrinkage fails, the compression step is carried out.

4) Compression method:

Transformation thought: shrinkage failure means that no good iteration point exists along the search direction determined by the worst point ${x}^{h}$ and the centroid ${x}^{c}$. In this case, we compress the complex shape toward the best point ${x}^{l}$, so as to obtain a complex shape with better performance.

Compression formula: ${x}^{i}={x}^{l}+\delta \left({x}^{i}-{x}^{l}\right),i=1,\cdots ,k,i\ne l$ ; this formula replaces every vertex of the current complex shape except the best point.

Compression factor: $\delta :0<\delta <1$
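The four transformations above differ only in the base point, the direction, and the step factor, so they can all be written in terms of one linear-combination helper. A minimal sketch; the coefficient defaults below are illustrative, not prescribed by the paper.

```python
# The four vertex-transformation formulas above, written via one helper.
# Points are plain Python lists; alpha, beta, gamma, delta follow the
# roles (and admissible ranges) given in the text.

def combine(xa, xb, t):
    """Return xa + t * (xb - xa), the point at parameter t on the line."""
    return [a + t * (bb - a) for a, bb in zip(xa, xb)]

def reflect(xc, xh, alpha=1.3):     # mapping: x^r = x^c + alpha (x^c - x^h)
    return combine(xc, xh, -alpha)

def expand(xc, xr, beta=1.5):       # expansion: x^e = x^c + beta (x^r - x^c)
    return combine(xc, xr, beta)

def shrink(xc, xh, gamma=0.7):      # shrinkage: x^k = x^c - gamma (x^c - x^h)
    return combine(xc, xh, gamma)

def compress(xl, xi, delta=0.5):    # compression: x^i <- x^l + delta (x^i - x^l)
    return combine(xl, xi, delta)

# With x^c = (0, 0) and x^h = (1, 0), reflection with alpha = 1 gives (-1, 0).
print(reflect([0.0, 0.0], [1.0, 0.0], alpha=1.0))   # [-1.0, 0.0]
```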

The basic idea of the complex method is to change the complex shape step by step through continuous iteration, so that it eventually shrinks to a neighborhood of the optimal solution, at which point the iteration terminates [19]. Therefore, the termination condition of the complex method is given as

$\sqrt{\frac{1}{k}\left\{{\displaystyle \underset{i=1}{\overset{k}{\sum}}{\left[f\left({x}^{i}\right)-f\left({x}^{j}\right)\right]}^{2}}\right\}}\le \epsilon $ (9)

where ${x}^{j}=\frac{1}{k}{\displaystyle \underset{i=1}{\overset{k}{\sum}}{x}^{i}}$ is the mean of the k vertices.
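The termination test (9) can be transcribed directly: stop when the root-mean-square deviation of the vertex objective values from the value at the vertex mean falls below $\epsilon $. The test function below is illustrative.

```python
# Termination test (9): stop when the RMS deviation of the vertex
# objective values from the value at the vertex mean is at most eps.
from math import sqrt

def terminated(f, verts, eps=1e-6):
    k = len(verts)
    n = len(verts[0])
    xbar = [sum(v[j] for v in verts) / k for j in range(n)]  # mean vertex x^j
    fbar = f(xbar)
    return sqrt(sum((f(v) - fbar) ** 2 for v in verts) / k) <= eps

f = lambda x: sum(xi ** 2 for xi in x)        # illustrative objective
print(terminated(f, [[1.0, 0.0], [0.0, 1.0]]))   # False: f-values spread out
print(terminated(f, [[0.5, 0.5], [0.5, 0.5]]))   # True: identical vertices
```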

The specific steps of the complex method are as follows. Set the parameters $\alpha ,\beta ,\gamma ,\delta $ and the convergence parameter $\epsilon >0$, and determine the number of vertices of the complex shape: if the decision variable is n-dimensional, the number of vertices should be between $n+1$ and $2n$.

1) Generate the initial complex shape. The steps of generating the initial complex shape by using the random method given in this paper are used to get the initial complex shape satisfying the requirements;

2) Calculate the function value of each vertex in the current complex shape, and sort out the worst point ${x}^{h}$, the secondary bad point ${x}^{s}$, the best point ${x}^{l}$, and calculate the centroid ${x}^{c}$ of the current complex shape;

3) According to the mapping coefficient $\alpha $ and the mapping formula, the mapping point ${x}^{r}$ is calculated:

a) If the mapping point ${x}^{r}$ is within the feasible region, step 4) is carried out;

b) If the mapping point ${x}^{r}$ is not in the feasible region, we reduce the mapping coefficient $\alpha $, that is $\alpha =0.5\alpha $, and then repeat step 3);

4) Calculate the function value of the mapping point ${x}^{r}$, and compare the function value of ${x}^{r}$ with the vertex of the current complex shape:

a) If $f\left({x}^{r}\right)<f\left({x}^{l}\right)$, the expansion step is carried out and the expansion point ${x}^{e}$ is obtained from the expansion formula. If $f\left({x}^{e}\right)<f\left({x}^{r}\right)$, we replace ${x}^{h}$ with ${x}^{e}$ to get a new polyhedron and carry out step 6); otherwise, we replace ${x}^{h}$ with ${x}^{r}$ to get a new polyhedron and carry out step 6);

b) If $f\left({x}^{l}\right)<f\left({x}^{r}\right)<f\left({x}^{h}\right)$, ${x}^{r}$ is used instead of ${x}^{h}$ to get a new polyhedron, and step 6) is carried out;

c) If $f\left({x}^{r}\right)\ge f\left({x}^{h}\right)$, compare the value of the mapping coefficient $\alpha $ : if $\alpha >{10}^{-5}$, we halve $\alpha $ by setting $\alpha =0.5\alpha $ and return to step 3); otherwise, we carry out the shrinkage step of the complex method and use the shrinkage formula ${x}^{k}={x}^{c}-\gamma \left({x}^{c}-{x}^{h}\right)$ to calculate the shrinkage point ${x}^{k}$, then carry out step 5);

5) Compare the function values of the shrinkage point and the worst point ${x}^{h}$ : if $f\left({x}^{k}\right)<f\left({x}^{h}\right)$, we replace ${x}^{h}$ with ${x}^{k}$ to get a new polyhedron and carry out step 6); otherwise, the compression step of the complex method is carried out to get a new complex shape, and we return to step 2);

6) Judge whether the current complex shape meets the termination condition $\sqrt{\frac{1}{k}\left\{{\displaystyle \underset{i=1}{\overset{k}{\sum}}{\left[f\left({x}^{i}\right)-f\left({x}^{j}\right)\right]}^{2}}\right\}}\le \epsilon $. If it does, we stop the iteration; the current best vertex is taken as the optimal solution and its function value as the optimal value. Otherwise, we return to step 2).
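The steps above can be gathered into one compact loop. This is a sketch under stated assumptions: a smooth convex test function stands in for the paper's nondifferentiable two-stage objective, the feasible region is a simple box, and the parameter defaults are illustrative.

```python
# Compact sketch of steps 1)-6) of the complex method, minimizing an
# illustrative convex test function over a box feasible region.
import random

def complex_method(f, feasible, verts, alpha=1.3, beta=1.5, gamma=0.7,
                   delta=0.5, eps=1e-8, max_iter=10000):
    k, n = len(verts), len(verts[0])
    for _ in range(max_iter):
        verts.sort(key=f)                       # verts[0] best, verts[-1] worst
        xl, xh = verts[0], verts[-1]
        xc = [sum(v[j] for v in verts[:-1]) / (k - 1) for j in range(n)]
        # termination test (9), with the vertex mean as reference point
        xbar = [sum(v[j] for v in verts) / k for j in range(n)]
        if (sum((f(v) - f(xbar)) ** 2 for v in verts) / k) ** 0.5 <= eps:
            break
        a = alpha
        while True:                             # step 3): mapping, halving alpha
            xr = [c + a * (c - h) for c, h in zip(xc, xh)]
            if feasible(xr) and f(xr) < f(xh):
                break
            a *= 0.5
            if a < 1e-5:                        # mapping failed: shrinkage step
                xr = [c - gamma * (c - h) for c, h in zip(xc, xh)]
                break
        if f(xr) < f(xl):                       # step 4a): expansion attempt
            xe = [c + beta * (r - c) for c, r in zip(xc, xr)]
            if feasible(xe) and f(xe) < f(xr):
                xr = xe
        if f(xr) < f(xh):
            verts[-1] = xr                      # replace the worst point
        else:                                   # compression toward the best
            verts = [xl] + [[l + delta * (v - l) for l, v in zip(xl, vv)]
                            for vv in verts[1:]]
    return min(verts, key=f)

random.seed(1)
feasible = lambda x: all(-5.0 <= xi <= 5.0 for xi in x)
verts = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(5)]
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
x = complex_method(f, feasible, verts)
print(x)   # best vertex found; the minimizer of f is (1, 2)
```

The compression branch rebuilds all vertices except the best one, exactly as in the compression method above; everything else follows the reflection/expansion/shrinkage order of steps 3)-5).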

Through the above steps, the complex method can solve nonlinear programming problems. The stochastic programming problem under LPI proposed in this paper can also be regarded as a nondifferentiable nonlinear programming problem. Therefore, the paper introduces the complex method into the solution of the model, which provides a feasible way to solve stochastic programming under an uncertain probability distribution.

4. Example Analysis

Combined with the two-stage stochastic programming model with recourse (5)-(7) given above, this section applies the complex method to an example with decision variable $x\in {R}^{6}$. At the same time, in view of the different degrees of completeness of the probability distribution information of the random variables, the paper discusses the example under different probability distribution information, so as to compare the two-stage stochastic programming model under different levels of information.

In model (5)-(7), $f\left(x\right)$ in the first stage is a general convex function. Here it is taken to be quadratic, with decision variable $x\in {R}^{6}$. As for the random variable in the second stage compensation function, the number of scenarios is set to 7, i.e. $l=7$, so $P={\left({p}_{1},\cdots ,{p}_{\text{7}}\right)}^{\text{T}}\in {R}^{\text{7}}$. The paper therefore considers the following stochastic programming problem:

$\{\begin{array}{l}\underset{x\in {R}^{6}}{\mathrm{min}}\text{}\frac{1}{2}{x}^{\text{T}}Ax+{D}^{\text{T}}x+\underset{P\in \xi}{\mathrm{max}}{\displaystyle \underset{i=1}{\overset{7}{\sum}}{p}_{i}\varphi \left(x,{\omega}_{i}\right)}\\ \text{s}\text{.t}.\text{}Cx\le b,\end{array}$ (10)

$\begin{array}{l}\varphi \left(x,{\omega}_{i}\right)=\underset{y\in {R}^{6}}{\mathrm{max}}\text{}-\frac{1}{2}{y}^{\text{T}}Hy+{\left(\sigma \left({\omega}_{i}\right)-x\right)}^{\text{T}}y\\ \text{s}\text{.t}.\text{}Wy\le q\end{array}$ (11)

The parameters of the matrices and variables used in the model are: $A=diag\left(2,2,3,1,2,1\right)$, a diagonal matrix; $H\in {R}^{6\times 6}$, the identity matrix; the other coefficient matrices are as follows:

$D=\left(\begin{array}{c}2\\ 3\\ 1\\ 4\\ 2\\ 1\end{array}\right)$ ; $C=\left(\begin{array}{cccccc}3& 1& 0& 2& 1& 3\\ 1& 1& 2& 0& 1& 2\\ 2& 3& 1& 4& 0& 3\end{array}\right)$ ; $b=\left(\begin{array}{c}12\\ 5\\ 20\end{array}\right)$ ;

$W=\left(\begin{array}{cccccc}1& 0& 2& 1& 1& 3\\ 2& -1& 0& 3& 1& 2\\ 3& 2& 1& 0& 1& 1\end{array}\right)$ ; $q=\left(\begin{array}{c}7\\ 7\\ 7\end{array}\right)$ ;

In the paper, the realizations of the random variable are fixed, and only the probabilities of their occurrence are uncertain, that is, the probability distribution is variable. In order to make the example more general, the realizations $\sigma \left({\omega}_{i}\right)={\left({\sigma}^{1}\left({\omega}_{i}\right),{\sigma}^{2}\left({\omega}_{i}\right),{\sigma}^{3}\left({\omega}_{i}\right),{\sigma}^{4}\left({\omega}_{i}\right),{\sigma}^{5}\left({\omega}_{i}\right),{\sigma}^{6}\left({\omega}_{i}\right)\right)}^{\text{T}},i=1,\cdots ,7$ are generated once by uniform random numbers with lower bound ${\left(1,2,3,4,5,6\right)}^{\text{T}}$ and upper bound ${\left(6,7,8,9,10,11\right)}^{\text{T}}$, and then held fixed. The values of $\sigma \left(\omega \right)$ are

$\sigma \left(\omega \right)=\left(\begin{array}{cccccc}3.0851& 5.6016& 3.0006& 5.5117& 5.7338& 6.4617\\ 3.1800& 2.1296& 5.7483& 6.1766& 7.1018& 7.6517\\ 3.7540& 5.5407& 4.4545& 6.5541& 9.4647& 10.4815\\ 5.8351& 4.7362& 7.8634& 7.5741& 8.4886& 7.0804\\ 2.1100& 6.3537& 4.0336& 8.5931& 7.4421& 9.0587\\ 5.4643& 3.6599& 7.1061& 4.2085& 5.5383& 8.9753\\ 1.3815& 5.8996& 5.1920& 7.6173& 9.8899& 8.6925\end{array}\right)$

where row i lists the six components of $\sigma \left({\omega}_{i}\right)$.

Combined with the matrix parameters of the model given above, according to the completeness of the probability distribution information of the designed random variables, the paper analyzes and discusses the examples in three cases.

Case (1):

It is assumed that the probability distribution of the random variables in the model carries no effective information beyond the basic probability axioms, i.e. it satisfies only the following constraints:

$\xi =\left\{P={\left({p}_{1},\cdots ,{p}_{\text{7}}\right)}^{\text{T}}\in {R}^{\text{7}}|{\displaystyle \underset{i=1}{\overset{\text{7}}{\sum}}{p}_{i}=1,{p}_{i}\ge 0,i=1,\cdots ,\text{7}}\right\}$

This means that nothing is known about the distribution of the random variable except that P lies on the probability simplex. For such a problem, we use the robust decision-making scheme designed in this paper to find the optimal decision under the worst-case compensation function, so as to ensure that the actual result will not be worse than the expected decision result.
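In case (1) the inner maximization has a closed form: since $\xi $ is the whole probability simplex, the worst-case expectation puts all mass on the worst scenario, so $\underset{P\in \xi}{\mathrm{max}}\sum_{i}{p}_{i}{\varphi}_{i}=\underset{i}{\mathrm{max}}\text{ }{\varphi}_{i}$. A one-line sketch with illustrative recourse values:

```python
# With no extra linear constraints (case (1)), xi is the whole probability
# simplex, so max_{P in xi} sum_i p_i * phi_i is attained by putting all
# mass on the worst scenario: the inner problem reduces to max_i phi_i.

def worst_case_over_simplex(phi):
    return max(phi)

phi = [3.2, 5.1, 4.4, 2.8, 6.0, 1.7, 5.5]   # illustrative recourse values
print(worst_case_over_simplex(phi))          # 6.0
```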

In this paper, the first vertex of the initial complex shape is taken as ${x}_{0}={\left(0,0,0,0,0,0\right)}^{\text{T}}$, and the number of vertices is set to 12. As introduced in the vertex transformation methods, the mapping coefficient satisfies $\alpha >1$, the expansion coefficient $\beta \ge 1$, the shrinkage coefficient $0<\gamma <1$ and the compression coefficient $0<\delta <1$; the smaller the convergence parameter $\epsilon $, the higher the accuracy of the algorithm. The parameters adopted in the complex method are therefore taken as $\alpha =1.3$, $\beta =1$, $\gamma =0.7$, $\delta =0.5$, $\epsilon ={10}^{-6}$. Through the operation of the program, the iterative process is shown in Table 1.

As shown in Table 1, the algorithm stops after 448 iterations, and the optimal solution x = (−2.1646, 0.7194, −0.3065, −0.4003, 1.3779, −0.7288) is obtained, with optimal value ${W}^{1}=62.2188$. In order to illustrate the iterative process of the complex method more intuitively, the paper presents the curve of the optimal value against the number of iterations, as shown in Figure 1.

Figure 1. Iterative figure of optimal value.

Table 1. Iterative process of optimal solution.

It can be seen that the optimal value of the model decreases gradually as the number of iterations increases, approaching the optimal solution. The final optimal value converges to 62.2188, which shows that the complex method has good convergence for this stochastic programming problem and that the designed algorithm is effective.

Case (2):

Compared with case (1), the probability distribution information of the random variables in case (2) is more complete: in addition to the constraints of case (1), the distribution satisfies the following linear constraints:

$\{\begin{array}{l}{p}_{1}+{p}_{2}+{p}_{3}\le \frac{1}{2}\hfill \\ {p}_{4}+{p}_{5}\le \frac{1}{3}\hfill \\ {p}_{6}+{p}_{7}\le \frac{1}{3}\hfill \\ \frac{1}{9}\le {p}_{7}\le \frac{1}{5}\hfill \end{array}$

Let $B=\left(\begin{array}{ccccccc}1& 1& 1& 0& 0& 0& 0\\ 0& 0& 0& 1& 1& 0& 0\\ 0& 0& 0& 0& 0& 1& 1\\ 0& 0& 0& 0& 0& 0& 1\\ 0& 0& 0& 0& 0& 0& -1\end{array}\right)$, $d={\left(\begin{array}{ccccc}\frac{1}{2}& \frac{1}{3}& \frac{1}{3}& \frac{1}{5}& -\frac{1}{9}\end{array}\right)}^{\text{T}}$, then the probability distribution of random variables in case (2) has the following linear partial information:

$\xi =\left\{P={\left({p}_{1},\cdots ,{p}_{\text{7}}\right)}^{\text{T}}\in {R}^{\text{7}}|BP\le d,{\displaystyle \underset{i=1}{\overset{\text{7}}{\sum}}{p}_{i}=1,{p}_{i}\ge 0,i=1,\cdots ,\text{7}}\right\}$

In this case, the other relevant parameters set in case (1) are kept unchanged. The complex method is used to solve the stochastic programming problem in case (2), and the robust decision scheme and result in case (2) are given. The results of the iterative process are shown in Table 2.

The program terminates after 431 iterations, with optimal solution x = (−2.0086, 0.6482, −0.4208, −0.7191, 0.9701, −0.2265) and optimal value ${W}^{2}=56.1144$. The optimal value of case (2) is smaller than that of case (1), which shows that when the probability distribution information of the random variables is more complete, the decision result is better. The curve of the optimal value against the number of iterations is shown in Figure 2.

Case (3):

In order to compare the influence of the completeness of the probability distribution information on the decision result, the probabilities of the random variables in case (3) are set to fixed values. The other parameters of the stochastic programming model are kept consistent with cases (1) and (2), and the probability distribution of the random variables is set as $P={\left(\frac{3}{25},\frac{3}{25},\frac{1}{5},\frac{3}{25},\frac{3}{25},\frac{1}{5},\frac{3}{25}\right)}^{\mathrm{T}}$; that is, the example reduces to the classical stochastic programming model with a completely known distribution. The result of case (3) obtained by the complex method under complete probability information is shown in Table 3.
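Once the distribution is fixed as in case (3), the max-min criterion collapses to the classical expectation criterion, and the second-stage term becomes a plain weighted sum. A small sketch with the case (3) probabilities; the per-scenario costs `q` are hypothetical, for illustration only:

```python
import numpy as np

# Fixed probability vector from case (3); with P known exactly, the
# max-min model reduces to the classical expectation criterion.
P = np.array([3/25, 3/25, 1/5, 3/25, 3/25, 1/5, 3/25])

# hypothetical per-scenario recourse costs, illustration only
q = np.array([3.0, 1.0, 2.0, 5.0, 4.0, 2.5, 1.5])

# classical expected recourse cost: a single inner product, no inner
# optimization over distributions is needed
expected_recourse = float(P @ q)
```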

The experimental result shows that the program ends after 385 iterations with the optimal solution x = (−1.6394, 0.1992, −0.1810, −1.0080, 0.5954, −0.6059) and the optimal value ${W}^{3}=45.1761$. The optimal value of case (3) is much smaller than those of cases (1) and (2), which again shows that when the probability distribution information of the random variables is complete, a better decision result can be obtained. The iterative trend of the optimal value in case (3) is shown in Figure 3.

Figure 2. Iterative trend of the optimal value in case (2).

Figure 3. Iterative trend of the optimal value in case (3).

Table 2. Iterative process of the optimal solution in case (2).

Table 3. Iterative process of the optimal solution in case (3).

In order to illustrate the practical significance of the stochastic programming model under an uncertain probability distribution, we substitute the optimal solution of case (3) into the objective function of case (1); the difference between the resulting value and the optimum of case (1) is 64.3512 - 62.2188 = 2.1324. Similarly, substituting the optimal solution of case (3) into case (2) gives a difference of 57.1422 - 56.1144 = 1.0278. The differences 2.1324 and 1.0278 are the losses incurred when the classical stochastic programming model is used although the probability distribution information of the random variables is inaccurate. This fully demonstrates the practical significance of the stochastic programming model based on the max-min expectation criterion under an uncertain probability distribution: the model can effectively reduce decision losses for stochastic programming problems with incomplete probability distribution information.
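The reported loss values follow directly from the optimal values quoted above; a trivial arithmetic check:

```python
# Values reported in the text: case (3)'s solution evaluated in the
# objectives of cases (1) and (2), versus their robust optima.
value_of_x3_in_case1 = 64.3512
optimum_case1 = 62.2188
value_of_x3_in_case2 = 57.1422
optimum_case2 = 56.1144

# loss from acting as if the fixed distribution of case (3) were correct
loss_case1 = value_of_x3_in_case1 - optimum_case1   # = 2.1324
loss_case2 = value_of_x3_in_case2 - optimum_case2   # = 1.0278
```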

5. Conclusion

Under the guidance of linear partial information theory, a stochastic programming model with an uncertain probability distribution is established based on the max-min expectation criterion. Since the model is nondifferentiable, the paper designs a solution method based on the complex method. Finally, the algorithm is applied to several concrete examples, which demonstrate both the value of the model in practical problems and the effectiveness of the designed algorithm.

References

[1] Dantzig, G.B. (1955) Linear Programming under Uncertainty. Management Science, 1, 197-206. https://doi.org/10.1287/mnsc.1.3-4.197

[2] Sengupta, J.K., Tintner, G. and Morrison, B. (1963) Stochastic Linear Programming with Applications to Economic Models. Economica, 30, 262. https://doi.org/10.2307/2601546

[3] Wets, R.J.-B. (1967) Stochastic Programs with Recourse. SIAM Journal on Applied Mathematics, 15, 1299-1314. https://doi.org/10.1137/0115113

[4] Van Slyke, R.M. and Wets, R. (1969) L-Shaped Linear Programs with Applications to Optimal Control and Stochastic Programming. SIAM Journal on Applied Mathematics, 17, 638-663. https://doi.org/10.1137/0117061

[5] Ketabchi, S. and Behboodi-Kahoo, M. (2015) Augmented Lagrangian Method within L-Shaped Method for Stochastic Linear Programs. Applied Mathematics and Computation, 266, 12-20. https://doi.org/10.1016/j.amc.2015.05.007

[6] Abaffy, J. and Allevi, E.A. (2004) A Modified L-Shaped Method. Journal of Optimization Theory and Applications, 123, 255-270. https://doi.org/10.1007/s10957-004-5148-y

[7] Kofler, E. (2001) Linear Partial Information with Applications. Fuzzy Sets and Systems, 118, 167-177. https://doi.org/10.1016/S0165-0114(99)00088-3

[8] Abdelaziz, F.B. and Masri, H. (2005) Stochastic Programming with Fuzzy Linear Partial Information on Probability Distribution. European Journal of Operational Research, 162, 619-629. https://doi.org/10.1016/j.ejor.2003.10.049

[9] Chen, X.J., Qi, L.Q. and Womersley, R.S. (1995) Newton's Method for Quadratic Stochastic Programs with Recourse. Journal of Computational and Applied Mathematics, 60, 29-46. https://doi.org/10.1016/0377-0427(94)00082-C

[10] Pang, J.S. (1990) Newton's Method for B-Differentiable Equations. Mathematics of Operations Research, 15, 311-341. https://doi.org/10.1287/moor.15.2.311

[11] Qi, L.Q. (1993) Convergence Analysis of Some Algorithms for Solving Nonsmooth Equations. Mathematics of Operations Research, 18, 227-244. https://doi.org/10.1287/moor.18.1.227

[12] Qi, L.Q. and Sun, J. (1993) A Nonsmooth Version of Newton's Method. Mathematical Programming, 58, 353-367. https://doi.org/10.1007/BF01581275

[13] Li, L., Chi, S.C. and Lin, G. (2005) Modified Complex Method and Its Application to the Slope Stability Analysis. Journal of Harbin Institute of Technology, 37, 1429-1432.

[14] Li, Z.L., He, Y.L. and Yu, Z.Q. (1994) The Improvement of Complex Method and Its Application in the Optimization of the Main Frame of Radial Gate. Design of Hydroelectric Power Station, 1994, 38-42.

[15] Zhang, H.M., Lu, P.L. and Wu, J.G. (2006) Investigation on Improvement and Application of Complex Method for Discrete Variables. Journal of China Jiliang University, 17, 300-304.

[16] Wang, F.L. (1996) The Problem of Initial Complex Shape in Complex Method. Journal of Zhejiang Agricultural University, 1996, 73-76.

[17] Pei, J.H. and Sun, S.C. (2000) A Method of Modifier Algorithm of Constrained Complex Method in Concave Feasible Fields. Journal of Nanjing University of Science and Technology (Nature Science), 2000, 16-19.

[18] Wu, J.G. (1986) A Complex Method for Solving Equality Constraints. Journal of Shenzhen University, 1986, 76-85.

[19] Chen, Y.H., Li, H.H. and Li, Z.T. (2002) Solution of the Multi-dimension Nonlinearity Restriction Optimization Problem by Compound Method. Precise Manufacturing & Automation, 2002, 37-38.