Received 12 March 2016; accepted 26 June 2016; published 29 June 2016
In industrial processes, there exists a class of hybrid systems comprised of subsystems that are coupled to each other through energy, quality, and so on, for example, urban drainage networks, transportation systems, energy and power networks, and irrigation systems. These systems have many components, wide spatial distribution, many constraints, and many control targets. Good control performance can be obtained if centralized control is applied to such systems, but its flexibility and fault tolerance are relatively weak; if distributed control is adopted, flexibility and fault tolerance are better. The problem of distributed control for these hybrid systems has therefore become an important research topic.
Model predictive control (MPC) is a receding-horizon control method that can handle constraints on system states and inputs during the design of the optimizing controller. It adopts strategies such as feedback correction and rolling optimization, and it has a strong ability to deal with constraints together with good dynamic performance. Therefore, it is well suited to solving the optimal control problem for distributed systems; this leads to distributed model predictive control.
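The rolling-optimization idea can be sketched as follows: at every step a finite-horizon quadratic problem is solved in batch form and only the first input is applied. The matrices, weights, and horizon below are illustrative assumptions, and the input constraint is omitted for brevity.

```python
import numpy as np

# Receding-horizon (MPC) sketch for a linear system x+ = A x + B u.
# At each step an unconstrained finite-horizon LQ problem is solved in
# batch form and only the first input is applied (rolling optimization).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
N = 10  # prediction horizon

def mpc_step(x):
    # Batch prediction: x_k = A^k x0 + sum_j A^(k-1-j) B u_j.
    n, m = A.shape[0], B.shape[1]
    Phi = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    Gam = np.zeros((N * n, N * m))
    for k in range(1, N + 1):
        for j in range(k):
            Gam[(k - 1) * n:k * n, j * m:(j + 1) * m] = (
                np.linalg.matrix_power(A, k - 1 - j) @ B)
    Qb = np.kron(np.eye(N), Q)
    Rb = np.kron(np.eye(N), R)
    # Minimize (Phi x + Gam U)' Qb (Phi x + Gam U) + U' Rb U.
    H = Gam.T @ Qb @ Gam + Rb
    f = Gam.T @ Qb @ (Phi @ x)
    U = np.linalg.solve(H, -f)
    return U[:m]  # apply only the first input (receding horizon)

x = np.array([1.0, -0.5])
for _ in range(100):
    x = A @ x + B @ mpc_step(x)
```

Re-solving at every step, rather than applying the whole open-loop sequence, is exactly the feedback-correction mechanism the text refers to.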
However, in practical applications, economic limitations of the measuring equipment make state feedback hard to realize. In the literature, a distributed predictive control algorithm has been designed for the case in which the states are not measured, but that method can only optimize the performance of each subsystem, not the overall performance of the system.
In this paper, a distributed predictive control method based on a Lyapunov function and a state observer is designed to optimize the overall system performance. The algorithm adds a quadratic function of the downstream subsystems' input variables to the performance index of each subsystem, which broadens the degree of coordination and improves the performance of the overall system.
This paper is organized as follows. Section 2 describes the control problem for a distributed system under the network mode in detail. Section 3 designs the output feedback controller based on Lyapunov functions and state observers and gives the stability domain. Section 4 designs the distributed predictive controller. Section 5 analyzes the performance of the distributed predictive controllers and gives the steps of the algorithm. Section 6 presents simulation results that verify the effectiveness of the proposed method. Section 7 concludes the paper.
2. Problem Formulation
Consider the distributed system S, which is comprised of m coupled subsystems. The state-space description of each subsystem can be expressed in the linear discrete-time form (1) (2), where x_i(k) denotes the state variable of the ith subsystem, u_i(k) denotes the input variable and satisfies the input constraint, and y_i(k) denotes the measurable output variable; the system matrices are constant matrices of corresponding dimensions. The distributed structure of the system under the network mode is shown in Figure 1.
Synthesizing all subsystems, we obtain the overall system model (3) (4).
Figure 1. Schematic diagram of the distributed system under the network pattern.
The control objective is to design an output feedback control law for the linear discrete-time distributed system (3) (4), based on a Lyapunov function and a state observer, under the premise that the network connectivity and fault tolerance of the system are preserved, and then to describe the stability domain. Furthermore, taking the stability domain as a terminal constraint, an output feedback model predictive controller is designed so that the system is stabilized from outside the stability domain, and so that, under the premise of initial feasibility, the optimization remains successively feasible.
3. Output Feedback Control Based on Lyapunov Function and State Observer
This section first presents the controller design based on a Lyapunov function for the case in which the states are available, in order to obtain a description of the stability domain. It then presents the output feedback controller design for the case in which the states are not available.
3.1. The State-Feedback Controller Design Based on Lyapunov Function
Consider the subsystem (1) (2), and construct the state feedback controller with gain K_i, where K_i is the state feedback gain. We make the following assumption:
Assumption (i). For each subsystem, there exists a feedback gain such that the eigenvalues of the closed-loop matrix always lie inside the unit circle, so that the subsystem is asymptotically stable.
Define the following matrices, where the weighting matrices are positive diagonal matrices.
Lemma 1. If Assumption (i) is satisfied, then there exists a non-empty level set that is an invariant set of the system, and the system is stable in this set under the state feedback control law, where the level is the largest value that guarantees the input constraint.
Proof. Select a Lyapunov function candidate.
The difference of the Lyapunov function along the trajectories of the closed-loop system is given by
Since the input constraint is satisfied within the set, the difference above is negative, so the Lyapunov function decreases along the closed-loop trajectories.
The proof is completed, and the set is an invariant set of the system.
Therefore, all trajectories starting in this set remain in it and are asymptotically stable at the origin. That is to say, for a given positive real number d, every state in the corresponding level set stays in it and converges to the origin.
Thus, the stability domain of the subsystem is defined as follows:
Suppose that at the initial time all states of the subsystems lie in their stability domains and the subsystems use the state feedback control law; then the system is asymptotically stable by Lemma 1.
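The Lyapunov decrease behind Lemma 1 can be checked numerically. In the sketch below, the pair (A, B) and the weights are illustrative assumptions; K is an LQR-type gain making A - BK Schur stable (Assumption (i)), and P solves the discrete Lyapunov equation, so that V(x) = x'Px decreases along closed-loop trajectories and every level set of V is invariant.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Illustrative open-loop pair (A, B); A is unstable on its own.
A = np.array([[1.1, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])

# LQR-type state feedback gain K = (R + B'XB)^{-1} B'XA.
P_are = solve_discrete_are(A, B, np.eye(2), np.eye(1))
K = np.linalg.solve(np.eye(1) + B.T @ P_are @ B, B.T @ P_are @ A)
Acl = A - B @ K                                # closed-loop matrix
rho = np.max(np.abs(np.linalg.eigvals(Acl)))   # spectral radius < 1

# P solves Acl' P Acl - P = -I, so V(x) - V(Acl x) = |x|^2 > 0.
P = solve_discrete_lyapunov(Acl.T, np.eye(2))
x = np.array([1.0, -1.0])
V_drop = x @ P @ x - (Acl @ x) @ P @ (Acl @ x)
```

Since V strictly decreases, any sublevel set of V is invariant; intersecting it with the region where the feedback respects the input constraint gives the level described in Lemma 1.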
3.2. Output Feedback Controller Design Based on the State Estimation
Design the state observer in the form (5), where the observer state estimates the system state and F is the state observer gain to be determined. From Equations (3) and (5), the error dynamic equation can be written as (7).
The error dynamics of the state observer are thus regarded as a new autonomous system; that is, if the new system (7) is stable, the estimated states can track the real states well.
Define a quadratic function of the observer error as follows:
Theorem 1. Consider the error dynamic equation (7) of the state observer. If there exist matrices such that the linear matrix inequality (8) is satisfied, then the stability constraint (9) is satisfied, where the decay factors are given constants and L is a positive definite symmetric matrix. Consequently, there exists a time after which the estimation error is arbitrarily small; in other words, the observer state converges to the real state.
Proof. By the Schur complement lemma, inequality (8) is equivalent to
Substituting the error dynamics into the above formula, we derive
Multiplying both sides of the above formula by the error and its transpose, we have
By (8), we have
so the required inequality is satisfied.
Therefore, this quadratic function is regarded as a Lyapunov function of the zero-input error dynamics and satisfies the stability constraint (9), so the autonomous system (7) is asymptotically stable. In other words, there always exists a time after which the observer state converges to the real state.
Remark 1. By Theorem 1, the state observer gain F can be computed off line through the feasibility of the linear matrix inequality (8).
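As a lightweight alternative to solving the LMI (8), an observer gain with the same effect can be obtained by pole placement: with the error dynamics e(k+1) = (A - FC)e(k) of (7), any F placing the eigenvalues of A - FC inside the unit circle makes the estimation error decay. The matrices A, C and the pole locations below are illustrative assumptions, not the paper's design.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative observable pair (A, C).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

# Place eig(A - F C) at 0.2 and 0.3 via the dual pair (A', C').
F = place_poles(A.T, C.T, [0.2, 0.3]).gain_matrix.T   # observer gain

e = np.array([1.0, 1.0])      # initial estimation error
for _ in range(30):
    e = (A - F @ C) @ e       # autonomous error system (7)
```

The decay rate of the error is governed by the chosen poles, which plays the role of the decay factors in Theorem 1.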
Thus, for the given design parameters, if the observer state and the estimation error satisfy the stated bounds, then the closed-loop system is asymptotically stable at the origin. Moreover, for a given positive real number, there exists a time after which the estimation error remains within the prescribed bound.
Lemma 2. For any given real number, there exist a positive real number and a corresponding set such that if the observer state lies in this set and the estimation error satisfies the stated bound, then the real state remains in the stability domain.
4. Distributed Output Feedback Model Predictive Control
This section studies the design of the model predictive controller when the states are not measured. The input constraint is expressed in terms of the observer states, while the state constraint applies to the real states; since there are errors between the real states and the observer states, the observer errors influence the future inputs and states. The observer states are therefore used directly in the performance index to design the controller. In order to keep the system stable, we adopt an infinite-horizon model predictive control strategy. The optimization problem at time k is then as follows:
where the state predictive values and the input predictive values are defined over the prediction horizon, and the weighting matrices are the weight coefficient matrices.
The optimization problem decomposes into two parts as
Suppose the Lyapunov function
satisfies the stability constraint
When the closed loop system is stable,
Summing (16) from the current time onward, we get
That is to say
Therefore, the optimization problem (15) transforms into minimizing this quantity, and the performance index (10) transforms into the following performance index
We have the following predictive model based on the state observer of the subsystem:
Its partial derivative is
Because the control law of a subsystem affects not only the performance of its own subsystem but also that of its downstream subsystems, each controller optimizes the performance of its own subsystem and its downstream subsystems. Here, the input and state sequences obtained at the previous time are used as estimates of the state sequences of the upstream subsystems. Therefore, define the performance index of the subsystem as
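The displayed index is not reproduced in this extraction; a generic form of such an augmented performance index, with hypothetical weights Q_i, R_i, coupling weights w_{ij}, and downstream set D_i standing in for the paper's design parameters, would read:

```latex
% Illustrative sketch only: Q_i, R_i, w_{ij} and \mathcal{D}_i are
% hypothetical stand-ins for the paper's actual design parameters.
\[
  \bar{J}_i(k) = J_i(k) + \sum_{j \in \mathcal{D}_i} w_{ij}\, J_j(k),
  \qquad
  J_i(k) = \sum_{l \ge 0}
      \hat{x}_i(k+l \mid k)^{\top} Q_i\, \hat{x}_i(k+l \mid k)
    + u_i(k+l \mid k)^{\top} R_i\, u_i(k+l \mid k),
\]
```

Each subsystem thus minimizes its own quadratic cost plus a weighted copy of its downstream neighbours' costs, which is what couples the local optimization problems.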
In addition, in order to improve the convergence of the optimization problem, weighting coefficients are added.
Next, the model predictive control optimization problem of each subsystem in the distributed model predictive control algorithm is stated as follows:
Problem 1. For each subsystem satisfying Lemmas 1 and 2, with the required quantities known, seek the control sequence that minimizes the performance index:
Here the index set denotes the neighbouring subsystems of the given subsystem, and the remaining quantities are design parameters; the predicted state trajectory is generated under the action of the candidate control sequence. In order to guarantee feasibility, we define the terminal constraint set as a tightened subset of the stability domain rather than the stability domain itself.
We make the following assumption:
Assumption (ii). At the initial moment, there always exists a feasible control law for all subsystems that keeps the observer states within the required sets.
5. Performance Analysis
The distributed model predictive controller based on the Lyapunov function and the state observer is designed under the condition of initial feasibility, so the main task of this section is to establish successive feasibility and stability.
5.1. Successive Feasibility
This part mainly studies the following: if the system is feasible at the previous time, then the shifted control sequence is a feasible solution of the optimal problem (17)-(22) at time k and satisfies the constraint conditions of the problem.
From the feasibility at the previous time, we get a candidate control that satisfies the stability condition (20).
Lemma 3. If Assumptions (i) and (ii) are satisfied, and the problem (17)-(22) has a feasible solution at every time that satisfies the stated bound, then the corresponding bound on the predicted trajectory holds.
Proof. Since the problem (17)-(22) has a feasible solution at every time, the feasibility bound holds, and from it we derive the conclusion.
Lemma 4. If Assumptions (i) and (ii) are satisfied, and the problem (17)-(22) has a feasible solution at every time that satisfies the stated bound, then for all prediction steps the candidate sequence satisfies (18) (19).
Proof. At the first prediction step, from the predictive model we derive
Subtracting the above two formulas, we have
By Theorem 1, there always exists a time after which the observer state eventually converges to the real state. Therefore,
Obviously, (23) is satisfied.
Therefore, the candidate sequence satisfies the constraint (18).
Subtracting the above two formulas, we have
Therefore, (23) is satisfied.
Next we can show that the candidate sequence satisfies (19).
Lemma 5. If Assumptions (i) and (ii) are satisfied, and the problem (17)-(22) has a feasible solution at every time, then the stated bound holds.
Proof. Since the problem (17)-(22) has a feasible solution at every time, it suffices to prove the remaining inequality. By Lemmas 3 and 4 and the triangle inequality, we derive
Lemma 6. If Assumptions (i) and (ii) are satisfied, and the problem (17)-(22) has a feasible solution at every time, then the stated bound holds.
Proof. By triangle inequality, we have
Remark 2. According to Lemmas 2 to 6, if Assumptions (i) and (ii) are satisfied, then the shifted sequences are feasible solutions of (17)-(22). By the lemmas above, we can also derive that the closed-loop system states satisfy the state constraint.
5.2. Stability
Theorem 2. If Assumptions (i) and (ii) are satisfied, the control law satisfies the constraint conditions (18)-(22), and the design parameters satisfy the following inequality,
then the system is asymptotically stable at the origin.
Proof. When the state enters the stability domain, we adopt the state feedback control to make the system asymptotically stable. Next, we only need to prove that, outside the stability domain, the system converges asymptotically to the origin.
By the constraint (20), we have
For the optimal performance index, we form the difference
By Theorem 1, we have
By Lemma 4, we have
Substituting (25)-(27) into (24), we derive
Therefore, when
the system is asymptotically stable.
5.3. Algorithm Steps
We give the distributed model predictive control algorithm based on Lyapunov function and state observer.
Algorithm
Off-line part:
1. Give the decay coefficients and the stable matrix L;
2. By Theorem 1, obtain the observer gain F.
On-line part:
1. Choose the appropriate parameter and Lyapunov function, and obtain the stability domain estimate by calculation (this domain is only formal: since the real states are unavailable, the domain actually used is based on the observer states);
2. Initialize the states and design parameters so that Assumption (ii) is satisfied. If the observer state lies inside the stability domain, adopt the feedback control; otherwise compute the control sequence and send it to the upstream and downstream subsystems;
3. Receive the sequences from the neighbouring subsystems. If the observer state lies inside the stability domain, choose the feedback control law; otherwise solve the optimal problem to obtain the control sequence, and apply its first element to the subsystem;
4. Let k = k + 1 and repeat from step 2 of the on-line part.
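The off-line/on-line split above can be sketched as a runnable skeleton for a single subsystem; the distributed exchange and the actual optimal problem (17)-(22) are abstracted away, and every matrix, gain, and threshold below is an illustrative assumption, not the paper's design.

```python
import numpy as np
from scipy.signal import place_poles
from scipy.linalg import solve_discrete_are

# Illustrative subsystem matrices (assumptions).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])

# Off-line part: observer gain F (stand-in for Theorem 1's LMI) and a
# terminal feedback gain K with Lyapunov matrix P for the domain test.
F = place_poles(A.T, C.T, [0.2, 0.3]).gain_matrix.T
P = solve_discrete_are(A, B, np.eye(2), 0.1 * np.eye(1))
K = np.linalg.solve(0.1 * np.eye(1) + B.T @ P @ B, B.T @ P @ A)

def mpc_solve(xhat):
    # Placeholder for solving (17)-(22); the terminal feedback law is
    # reused here purely so that the skeleton runs end to end.
    return -K @ xhat

# On-line part: measure y, update the observer, and switch between the
# terminal feedback law (inside V(xhat) <= d) and the MPC branch.
x = np.array([2.0, -1.0])   # true state, unknown to the controller
xhat = np.zeros(2)          # observer state
d = 0.5                     # stability-domain level (illustrative)
for k in range(100):
    y = C @ x
    u = -K @ xhat if xhat @ P @ xhat <= d else mpc_solve(xhat)
    x = A @ x + B @ u
    xhat = A @ xhat + B @ u + F @ (y - C @ xhat)
```

Note that the domain test uses the observer state, exactly as step 1 of the on-line part prescribes.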
6. Numerical Example
Consider the distributed system under networked control as follows:
that is, the system has two subsystems.
Let the control constraint of each subsystem be as specified.
We use the Matlab simulation tools to simulate the algorithm proposed in this paper:
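Since the example's system matrices are not reproduced above, the following stand-in simulation of two weakly coupled subsystems with saturated inputs illustrates the kind of closed-loop behaviour reported below; all matrices, gains, and the coupling strength are illustrative assumptions, not the paper's example.

```python
import numpy as np

# Two coupled subsystems x1+ = A1 x1 + B1 sat(u1) + A12 x2 (and
# symmetrically for x2), with inputs saturated to |u_i| <= 1 as a
# stand-in for the control constraint.
A1 = np.array([[0.8, 0.1], [0.0, 0.9]]); B1 = np.array([[0.0], [1.0]])
A2 = np.array([[0.9, 0.0], [0.1, 0.8]]); B2 = np.array([[1.0], [0.0]])
A12 = 0.05 * np.eye(2); A21 = 0.05 * np.eye(2)   # weak coupling
K1 = np.array([[0.1, 0.5]]); K2 = np.array([[0.5, 0.1]])  # ad-hoc gains

x1, x2 = np.array([1.0, -1.0]), np.array([-1.0, 1.0])
for _ in range(60):
    u1 = np.clip(-K1 @ x1, -1.0, 1.0)   # input constraint |u1| <= 1
    u2 = np.clip(-K2 @ x2, -1.0, 1.0)
    x1, x2 = (A1 @ x1 + B1 @ u1 + A12 @ x2,
              A2 @ x2 + B2 @ u2 + A21 @ x1)
```

Both subsystems converge to the origin while the inputs remain inside the constraint, mirroring the trajectories shown in the figures.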
By the algorithm above, we obtain the stability domains of subsystems 1 and 2 shown in Figure 2 and Figure 3. Choosing the initial states, the state trajectories of subsystems 1 and 2 are shown in Figures 4-7, where "-" and "*" denote the real states and the estimated states, respectively. Figure 8 and Figure 9 show the input trajectories of the subsystems.
From the simulation results, we can see that the algorithm guarantees that the estimated states track the real states well and that the system is asymptotically stable at the origin. We can also see that the control law satisfies the constraint and eventually stabilizes the system.
Figure 2. The stability domain of subsystem 1.
Figure 3. The stability domain of subsystem 2.
Figure 4. The state components of subsystem 1 ("-" represents the real state, "*" represents the estimated state).
Figure 5. The state components of subsystem 1 ("-" represents the real state, "*" represents the estimated state).
Figure 6. The state components of subsystem 2 ("-" represents the real state, "*" represents the estimated state).
Figure 7. The state components of subsystem 2 ("-" represents the real state, "*" represents the estimated state).
Figure 8. The input trajectory of subsystem 1.
Figure 9. The input trajectory of subsystem 2.
7. Conclusion
For a class of distributed systems with input and state constraints and unavailable states under the networked control pattern, we have considered the design and stability of an output feedback predictive controller based on a Lyapunov function and a state observer. The main idea is as follows: for the considered system, a Lyapunov function and state reconstruction are used to design an output feedback controller in order to obtain the stability domain; furthermore, with the stability domain as a terminal constraint, the distributed model predictive controller is designed. The controller is successively feasible under the condition of initial feasibility. The simulation results verify the effectiveness of the method proposed in this paper.
References
Scattolini, R. (2009) Architectures for Distributed and Hierarchical Model Predictive Control—A Review. Journal of Process Control, 19, 723-731.
Christofides, P.D., Scattolini, R., Muñoz de la Peña, D. and Liu, J.F. (2013) Distributed Model Predictive Control: A Tutorial Review and Future Research Directions. Computers & Chemical Engineering, 51, 21-41.
 Camponogara, E., Jia, D., Krogh, B.H. and Talukdar, S. (2002) Distributed Model Predictive Control. IEEE Control Systems, 22, 44-52.
 Al-Gherwi, W., Budman, H. and Elkamel, A. (2011) A Robust Distributed Model Predictive Control Algorithm. Journal of Process Control, 21, 1127-1137.
 Farina, M. and Scattolini, R. (2012) Distributed Predictive Control: A Non-Cooperative Algorithm with Neighbor-to-Neighbor Communication for Linear Systems. Automatica, 48, 1088-1096.
 Zheng, Y., Li, S.Y. and Li, N. (2011) Distributed Model Predictive Control over Network Information Exchange for Large-Scale Systems. Control Engineering Practice, 19, 757-769.
El-Farra, N.H. and Christofides, P.D. (2003) Bounded Robust Control of Constrained Multivariable Nonlinear Processes. Chemical Engineering Science, 58, 3025-3047.