Improving Local Priority Hysteresis Switching Logic Convergence


1. Introduction

One of the earliest definitions of the term “adaption” was introduced by Drenick and Shahbender [2] in 1957:

“Adaptive systems in control theory are control systems that monitor their own performance and adjust their parameters in the direction of better performance.”

Adaptive control ensures the satisfactory performance of a closed-loop system by switching among a set of candidate controllers K when no single controller k can achieve the targeted performance objectives.

Two techniques have generally been used to achieve this goal: unfalsified adaptive control [3] [4] [5] and multiple model adaptive control [6] [7] [8]. In both cases, the switching process is supervised by a unit that provides the best controller k for the feedback loop from the controller set K based on the plant input/output data and performance criterion $\mu \left(t\right)$. Figure 1 shows the general architecture of an adaptive control system.

One serious challenge associated with adaptive control switching systems is a type of instability called chattering, in which the switching system cycles among two or more candidate controllers without ever converging. Various techniques can help to avoid chattering for both finite and infinite candidate controller sets. Using a continuum of candidate controllers instead of a finite set gives the adaptive control system greater flexibility to manage uncertainty [9] [10] [11] [12].

One technique involves the hysteresis switching algorithm reported by Morse and Middleton in [13] [14]. Morse and his co-workers demonstrated adaptive control convergence for a finite controller set. Also, Hespanha et al. [9] [10] and Stefanovic et al. [11] proved adaptive control convergence for an infinite controller set.

These studies contributed to ensuring adaptive control switching system convergence by adding constraints to the switching scheme that required strictly positive hysteresis and local priority constants.

Unfortunately, however, such constraints in the switching scheme can hinder the adaptive control system in achieving optimality.

This paper contributes to the body of knowledge by easing the local priority hysteresis switching logic constraints in switching schemes based on the persistent excitation assumption. Easing these constraints is necessary to achieve the objective of high performance.

This paper is organized as follows. In Section 2, preliminary facts are given. Section 3 reviews definitions of local priority hysteresis switching logic. Section 4 contains the main results. Simulation examples are shown in Section 5. The conclusion and avenues for future work are provided in Section 6.

2. Preliminaries

Definition 1. Suppose that $f\mathrm{:}{R}^{n}\to R$ is twice differentiable on $X\subset {R}^{n}$

Figure 1. Adaptive control system.

and that for some $\alpha >0$,

${\nabla}^{2}f\left(x\right)\ge \alpha I,\text{\hspace{0.17em}}\forall x\in X\mathrm{.}$ (1)

Then, f is uniformly convex on X.

With respect to uniform convexity, if $f\left(x\right)$ is uniformly convex on a connected set $X\subset {R}^{n}$, then for every $\alpha >0$ that satisfies (1), we have ( [15], Prop.A.23)

$\begin{array}{l}f\left(y\right)-f\left(x\right)\\ ={\left(\nabla f\left(x\right)\right)}^{\text{T}}\left(y-x\right)+{\displaystyle {\int}_{0}^{1}}{\displaystyle {\int}_{0}^{1}}t{\left(y-x\right)}^{\text{T}}{\nabla}^{2}f\left(x+\tau t\left(y-x\right)\right)\left(y-x\right)\text{d}\tau \text{d}t\\ \ge {\left(\nabla f\left(x\right)\right)}^{\text{T}}\left(y-x\right)+\frac{\alpha}{2}{\Vert y-x\Vert}^{2}\end{array}$ (2)

for any $\alpha >0$ satisfying (1).
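Inequality (2) can be checked numerically. The following sketch (assuming, purely for illustration, a quadratic $f(x)=\frac{1}{2}x^{\text{T}}Qx$ with a randomly generated positive definite $Q$, so the Hessian is constant) samples point pairs and verifies the uniform convexity bound, with $\alpha$ taken as the smallest eigenvalue of $Q$:

```python
import numpy as np

# Numerical check of inequality (2) for a uniformly convex quadratic
# f(x) = 0.5 x^T Q x, whose constant Hessian Q satisfies Q >= alpha*I
# with alpha = smallest eigenvalue of Q. (Illustrative sketch only.)
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
Q = M @ M.T + 0.5 * np.eye(3)        # symmetric positive definite
alpha = np.linalg.eigvalsh(Q).min()  # largest alpha satisfying (1)

f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

for _ in range(100):
    x = rng.standard_normal(3)
    y = rng.standard_normal(3)
    lhs = f(y) - f(x)
    rhs = grad(x) @ (y - x) + 0.5 * alpha * np.linalg.norm(y - x) ** 2
    assert lhs >= rhs - 1e-9         # inequality (2) holds
```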

Definition 2. A function $\nu \left(k\mathrm{,}z\mathrm{,}t\right)$ is said to be equi-quasi-positive definite (EQPD) in k if for some continuous monotonic function $\varphi \mathrm{:}\left[\mathrm{0,}\infty \right)\mapsto \left[\mathrm{0,}\infty \right]$ with $\varphi \left(0\right)=0$ and $\varphi \left(x\right)>0,\forall x>0$, it holds for all sufficiently large values of $z\ne 0$ and t that

$\stackrel{^}{k}\left(t\right)=\underset{k}{\mathrm{arg}\mathrm{min}}\left\{\nu \left(k,z,t\right)\right\}$ exists, and (3)

$\nu \left(k\mathrm{,}z\mathrm{,}t\right)-\nu \left(\stackrel{^}{k}\left(t\right)\mathrm{,}z\mathrm{,}t\right)\ge \varphi \left(\Vert k-\stackrel{^}{k}\left(t\right)\Vert \right)>0$ (4)

Definition 3. (Second order Taylor theorem expansion) Let $C\subseteq {\mathbb{R}}^{n}$, and let $f\mathrm{:}{\mathbb{R}}^{n}\mapsto \mathbb{R}$ be twice continuously differentiable over C. Then,

$f\left(x\right)=f\left(a\right)+\nabla f\left(a\right)\left(x-a\right)+\frac{1}{2}{\left(x-a\right)}^{\text{T}}{\nabla}^{2}f\left(\xi \right)\left(x-a\right)$

where $\xi $ lies on the line segment between a and x, i.e., $\xi =\alpha a+\left(1-\alpha \right)x$ for some $\alpha \in \left[\mathrm{0,1}\right]$,

where the gradient $\nabla f\left(x\right)$ of the function $f\left(x\right)$ is a row vector of size n, i.e.,

$\nabla f\left(x\right)=\left(\frac{\partial f}{\partial {x}_{1}}\left(x\right),\frac{\partial f}{\partial {x}_{2}}\left(x\right),\cdots ,\frac{\partial f}{\partial {x}_{n}}(x)\right)$

the Hessian ${\nabla}^{2}f\left(x\right)$ is an $n\times n$ matrix.

${\nabla}^{2}f\left(x\right)=\left(\begin{array}{cccc}\frac{{\partial}^{2}f}{\partial {x}_{1}^{2}}\left(x\right)& \frac{{\partial}^{2}f}{\partial {x}_{1}\partial {x}_{2}}\left(x\right)& \cdots & \frac{{\partial}^{2}f}{\partial {x}_{1}\partial {x}_{n}}\left(x\right)\\ \frac{{\partial}^{2}f}{\partial {x}_{2}\partial {x}_{1}}\left(x\right)& \frac{{\partial}^{2}f}{\partial {x}_{2}^{2}}\left(x\right)& \cdots & \frac{{\partial}^{2}f}{\partial {x}_{2}\partial {x}_{n}}\left(x\right)\\ \vdots & \vdots & \ddots & \vdots \\ \frac{{\partial}^{2}f}{\partial {x}_{n}\partial {x}_{1}}\left(x\right)& \frac{{\partial}^{2}f}{\partial {x}_{n}\partial {x}_{2}}\left(x\right)& \cdots & \frac{{\partial}^{2}f}{\partial {x}_{n}^{2}}(x)\end{array}\right)$

and

$x-a=\left(\begin{array}{c}{x}_{1}-{a}_{1}\\ {x}_{2}-{a}_{2}\\ \vdots \\ {x}_{n}-{a}_{n}\end{array}\right)$

Lemma 1. (Weierstrass theorem [15] ) Let $\mathbb{P}$ be a non-empty subset of ${\mathbb{R}}^{n}$ and let $\mu \mathrm{:}\mathbb{P}\mapsto \mathbb{R}$ be a lower semicontinuous function at all points of $\mathbb{P}$. If $\mathbb{P}$ is compact, then $\stackrel{^}{p}\left(t\right)=\underset{p\in \mathbb{P}}{\mathrm{arg}\mathrm{min}}{\mu}_{p}\left(t\right)$ exists.

Definition 4. The system is persistently excited if for all sufficiently large values of $\tau >0$ and all p values it holds that ${\nabla}_{p}^{2}\left({\mu}_{p}\left(\tau \right)\right)\ge \alpha I$ for some $\alpha >0$.

Under the persistent excitation assumption and for a sufficient length of time t, the function ${\mu}_{p}\left(\tau \right)$ is a uniformly convex function in p. Therefore, if the system is persistently excited, the monitoring signal will become uniformly convex.
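This effect can be illustrated numerically. The sketch below uses a hypothetical matrix A and a state trajectory built from a few independent sinusoids (standing in for the quantities introduced later in (14)); under such excitation, the smallest eigenvalue of the integrated Hessian becomes, and stays, strictly positive as the horizon grows:

```python
import numpy as np

# Sketch of Definition 4: with a sufficiently rich (hypothetical) state
# trajectory x_E(t), the Hessian 2 * integral(A x x^T A^T dt) gains a
# positive smallest eigenvalue that grows with the horizon tau.
n, dt = 3, 0.01
A = np.eye(n)                                  # hypothetical nonzero A
t = np.arange(0.0, 20.0, dt)
# three independent frequencies -> x(t) sweeps enough directions
X = np.stack([np.sin(t), np.sin(2 * t + 1.0), np.cos(3 * t)], axis=1)

def min_eig(tau):
    """Smallest eigenvalue of the Riemann-sum Hessian over [0, tau]."""
    k = int(tau / dt)
    H = 2 * dt * (A @ X[:k].T @ X[:k] @ A.T)
    return np.linalg.eigvalsh(H).min()
```

Because the integrand is positive semidefinite, the Hessian can only grow (in the positive semidefinite order) with the horizon, so a persistently excited trajectory keeps the uniform convexity constant $\alpha$ bounded away from zero.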

The definition of persistent excitation (PE) is critical in adaptive scheme studies that seek parameter convergence; see, for example, [16] - [21].

3. Local Priority Hysteresis Switching Logic

In this section, we present notation and definitions for local priority hysteresis switching logic. The intent is to introduce a switching scheme that can be applied when the unknown parameters of the system belong to a continuum. Compared with a finite set of candidate controllers, an infinite set (typically, a continuum of controllers) better supports the feasibility assumption (i.e., that some controller in the candidate set can satisfy the adaptive control performance objectives).

The supervisory control approach [22] [23] is used to achieve or maintain a desired performance level in a closed-loop system via switching through the given set of candidate controllers. The basic idea in selecting a controller strategy is to determine which nominal process model is associated with the smallest monitoring signals “ ${\mu}_{p}\left(t\right)$ ”. Then, the corresponding candidate controller can be selected.

Assume a linear single-input and single-output (SISO) finite-dimension uncertain process $\mathcal{P}$ shown in Figure 2. $\mathcal{P}$ is assumed to be a stabilizable and observable model with control input signals u and measured output signals y.

The supervisor contains three subsections, as shown in Figure 2.

1) Multi-estimator ${\Sigma}_{\mathbb{E}}$ —is a dynamic system with inputs of u and y and outputs of the signals ${y}_{p}$, $p\in \mathbb{P}$. $\mathbb{P}$ is a compact subset of a finite-dimension normalized linear vector space.

2) Monitoring signal generator ${\Sigma}_{\mathbb{M}}$ —is a dynamic system with inputs of the estimation errors ${e}_{p}={y}_{p}-y$ and outputs of the monitoring signals ${\mu}_{p}$, $p\in \mathbb{P}$, where each ${\mu}_{p}$ is an integral norm of the corresponding estimation error.

3) Switching logic ${\Sigma}_{\mathbb{S}}$ —involves a switched system with inputs of the monitoring signals ${\mu}_{p}$ and outputs of the parameters that optimize the performance criterion $\stackrel{^}{p}$, which is defined as follows.

$\stackrel{^}{p}\left(t\right)=\underset{p\in \mathbb{P}}{\mathrm{arg}\mathrm{min}}\left\{{\mu}_{p}\left(t\right)\right\}$ (5)

$\stackrel{^}{p}\left(t\right)$ takes its values in $\mathbb{P}$ and is used to select the associated controller parameter.
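The selection rule (5) reduces to a pointwise minimization when $\mathbb{P}$ is gridded. A minimal sketch, with a hypothetical quadratic monitoring signal standing in for the integral signal defined later in (12):

```python
import numpy as np

# Sketch of the selection rule (5) on a gridded parameter set P.
# The monitoring signal here is a hypothetical quadratic in p.
P = np.linspace(-10.0, 10.0, 2001)   # compact index set, gridded
p_true = -4.0                        # hypothetical true parameter
mu = (P - p_true) ** 2               # stand-in monitoring signals mu_p(t)
p_hat = P[np.argmin(mu)]             # p_hat(t) = argmin_p mu_p(t), as in (5)
```

The supervisor then routes the controller $k_{\hat p}$ associated with `p_hat` into the feedback loop.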

Figure 2. Supervisory control block diagram.

Each ${y}_{p}$ converges to y if the transfer function of $\mathcal{P}$ equals the nominal process model transfer function ${\vartheta}_{p}$ in the absence of disturbances. Disturbance inputs (including unmodeled dynamics) and noise signals are represented by d and n, respectively.

Assume that the transfer function of $\mathcal{P}$ from u, the output of the multi-controller “ $\mathbb{K}$ ”, to y belongs to a family of admissible process model transfer functions.

$\mathbb{F}=\underset{p\in \mathbb{P}}{{\displaystyle \cup}}f\left(p\right)$ (6)

For each p, $f\left(p\right)$ denotes a family of transfer functions centered on a known nominal process model transfer function ${\vartheta}_{p}$, where p is a parameter taking values in a given index set $\mathbb{P}$, and $\mathbb{P}$ is typically a continuum. In the absence of noise, unmodeled dynamics, and disturbances, (6) can be written as follows.

$\mathbb{V}=\underset{p\in \mathbb{P}}{{\displaystyle \cup}}{\vartheta}_{p}$ (7)

The state-space equations for the three subsystems are described in [10] ; the multi-estimator ${\Sigma}_{\mathbb{E}}$ can be stated as follows.

${\stackrel{\dot{}}{x}}_{\mathbb{E}}={A}_{\mathbb{E}}{x}_{\mathbb{E}}+{b}_{\mathbb{E}}y+{d}_{\mathbb{E}}u$

${y}_{p}={c}_{p}{x}_{\mathbb{E}}$, $p\in \mathbb{P}$

where ${x}_{\mathbb{E}}$ is the estimated state that is assumed to be available for the controller at all times and ${A}_{\mathbb{E}}$ is a stable matrix.

The matrix ${c}_{p}$ is designed based on each $p\in \mathbb{P}$ so that ${c}_{p}$ exists and is unique (see reference [22] ; Section IV). Moreover, for $\mathbb{P}$ to form a continuum, ${c}_{p}$ is assumed to be linearly dependent on p, which ensures tractability (see reference [22] ; Section XI). Therefore, the matrix ${c}_{p}$ can be written as follows.

${c}_{p}={p}^{\text{T}}A+b$ (8)

For a SISO system, A is a nonzero $n\times n$ matrix, p is an $n\times 1$ vector of unknown process parameters, and b is a $1\times n$ vector.

In [9], the candidate controller set $\left\{{k}_{p}\mathrm{:}p\in \mathbb{P}\right\}$ is selected so that for each $p\in \mathbb{P}$, ${k}_{p}$ is a controller stabilizing all the process models in $f\left(p\right)$, where $\mathcal{P}$ is any element of $\mathbb{F}$. We assume that feasibility holds. A set ${D}_{\gamma}$ is defined as follows.

${D}_{\gamma}\left(q\right)\mathrm{:}=\left\{p\in \mathbb{P}\mathrm{:}\left|q-p\right|\le \gamma \right\}$ (9)

where $\left|\text{\hspace{0.05em}}\cdot \text{\hspace{0.05em}}\right|$ is a norm function in $\mathbb{P}$ and $\gamma $ is a proper positive constant. The output of ${\Sigma}_{\mathbb{S}}$ at each instance is $\stackrel{^}{p}\left(t\right)$. In this case, a hysteresis constant $h>0$ is selected, and $\stackrel{^}{p}\left(0\right)=\underset{p\in \mathbb{P}}{\mathrm{arg}\mathrm{min}}\left\{{\mu}_{p}\left(0\right)\right\}$. Assume that at time ${t}_{i}$, $\stackrel{^}{p}\left({t}_{i}\right)$ switches to $q\in \mathbb{P}$ and remains fixed until time ${t}_{i+1}>{t}_{i}$, such that the following inequality is satisfied.

$\left(1+h\right)\underset{p\in \mathbb{P}}{\mathrm{min}}\left\{{\mu}_{p}\left({t}_{i+1}\right)\right\}\le \underset{p\in {D}_{\gamma}\left(q\right)}{\mathrm{min}}\left\{{\mu}_{p}\left({t}_{i+1}\right)\right\}$

We set $\stackrel{^}{p}\left({t}_{i+1}\right)=\underset{p\in \mathbb{P}}{\mathrm{arg}\mathrm{min}}\left\{{\mu}_{p}\left({t}_{i+1}\right)\right\}$. Repeating these steps yields a sequence of switching signals that converges over time.
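The switching steps above can be sketched as follows, assuming a gridded parameter set and a hypothetical monitoring signal whose minimizer drifts toward −4 before settling; h and γ are the hysteresis and local priority constants:

```python
import numpy as np

# Sketch of local priority hysteresis switching on a gridded P with a
# hypothetical monitoring signal. The loop holds the current parameter q
# until (1+h) * min_P mu <= min over the local window D_gamma(q).
def hysteresis_switch(P, mu_of_t, times, h, gamma):
    q = P[np.argmin(mu_of_t(P, times[0]))]   # p_hat(0)
    switches = [q]
    for t in times[1:]:
        mu = mu_of_t(P, t)
        local = np.abs(P - q) <= gamma       # local priority set D_gamma(q)
        if (1 + h) * mu.min() <= mu[local].min():
            q = P[np.argmin(mu)]             # switch to global minimizer
            switches.append(q)
    return switches

P = np.linspace(-10.0, 10.0, 401)
# hypothetical signal whose minimizer drifts toward -4, then settles
mu_of_t = lambda P, t: (P - (-4.0 * min(t, 1.0))) ** 2 + 0.1
times = np.linspace(0.0, 2.0, 50)
switches = hysteresis_switch(P, mu_of_t, times, h=0.2, gamma=1.0)
```

Running this sketch shows the active parameter trailing the moving minimizer: the window ${D}_{\gamma}\left(q\right)$ and the factor $\left(1+h\right)$ delay each switch, which is precisely the optimality concern raised at the end of this section.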

Assume $\stackrel{^}{k}\left(t\right)$ is the controller parameter associated with the process parameter $\stackrel{^}{p}\left(t\right)$, then the active controller in the feedback loop ${k}_{L}\left(t\right)$ changes as follows.

${k}_{L}\left({t}_{i}\right)=\stackrel{^}{k}\left({t}_{i}\right)\mathrm{.}$ (10)

A key result is the local priority hysteresis switching convergence lemma, which can be stated as follows.

Lemma 2. (Convergence Lemma [10] ) Assume that both of the following hold:

1) Monotonicity: For all p it holds that

${\mu}_{p}\left(t\right)\ge {\mu}_{p}\left(\tau \right)$ for all $t>\tau $

2) Feasibility: A ${p}^{\star}\in \mathbb{P}$ exists for which the monitoring signal is uniformly bounded

$\underset{t}{\mathrm{sup}}{\mu}_{{p}^{*}}\left(t\right)<\infty .$

Then, if constant $\gamma $ and hysteresis constant h are strictly positive, the local priority hysteresis switching logic converges after a finite number of switches.

A concern with constant $\gamma $ required by Lemma 2 is that the constant $\gamma $ prevents the adaptive system from switching to a new parameter $\stackrel{^}{p}\left(t\right)$ that minimizes the monitoring signal ${\mu}_{p}\left(t\right)$ if this parameter is in the set ${D}_{\gamma}$ (i.e., $\stackrel{^}{p}\left(t\right)\in {D}_{\gamma}$ ). Another notable issue with this lemma is that the strictly positive hysteresis constant h can slow the supervisor’s adaptive response and limit the accuracy with which the supervisor can minimize the monitoring signal ${\mu}_{p}\left(t\right)$ to $\pm h$.

In the following section, we re-examine the convergence of the local priority hysteresis switching logic when the constraints on the switching scheme are eased, allowing the supervisor to respond instantaneously and to continuously apply the optimal, zero-hysteresis selection

${k}_{L}\left(t\right)=\stackrel{^}{k}\left(t\right)$ (11)

4. Main Results

The following lemmas and theorem establish that under the PE assumption, if the strictly positive local priority and hysteresis constants are eased in the local priority hysteresis switching logic convergence lemma, the optimal process parameter $\stackrel{^}{p}\left(t\right)$, as defined in (5), still converges as $t\to \infty $ under the same conditions (i.e., monotonic monitoring signal and feasible system) given in [9] and with the identical monitoring signal.

Recall that the authors in [9] use the following monitoring signal.

${\mu}_{p}\left(\tau \right)={\displaystyle {\int}_{0}^{\tau}}{\Vert {e}_{p}\left(t\right)\Vert}^{2}\text{d}t$ (12)

Since ${c}_{p}={p}^{\text{T}}A+b$, ${y}_{p}={c}_{p}{x}_{\mathbb{E}}$ and ${e}_{p}={y}_{p}-y$, ${\mu}_{p}\left(t\right)$ can be written as follows.

${\mu}_{p}\left(\tau \right)={\displaystyle {\int}_{0}^{\tau}}{\Vert \left({p}^{\text{T}}A+b\right){x}_{\mathbb{E}}\left(t\right)-y\left(t\right)\Vert}^{2}\text{d}t$ (13)

Then,

${\nabla}_{p}^{2}\left({\mu}_{p}\left(\tau \right)\right)=2{\displaystyle {\int}_{0}^{\tau}}A{x}_{\mathbb{E}}\left(t\right){x}_{\mathbb{E}}^{\text{T}}\left(t\right){A}^{\text{T}}\text{d}t$ (14)
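Expression (14) can be sanity-checked by comparing it against a finite-difference Hessian of (13) on sampled data. A sketch, with random samples standing in for ${x}_{\mathbb{E}}\left(t\right)$ and $y\left(t\right)$ and the integral replaced by a Riemann sum:

```python
import numpy as np

# Numerical check of (14): for the quadratic monitoring signal (13),
# the Hessian in p is 2 * integral(A x x^T A^T dt).
rng = np.random.default_rng(1)
n, N, dt = 3, 200, 0.01
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
X = rng.standard_normal((N, n))      # samples standing in for x_E(t)
Y = rng.standard_normal(N)           # samples standing in for y(t)

def mu(p):
    """Riemann-sum version of (13): integral of ((p^T A + b) x - y)^2."""
    e = X @ (A.T @ p + b) - Y
    return dt * np.sum(e ** 2)

H_formula = 2 * dt * (A @ X.T @ X @ A.T)   # right-hand side of (14)

# central finite-difference Hessian of mu at an arbitrary point
p0, eps = rng.standard_normal(n), 1e-2
H_fd = np.empty((n, n))
for i in range(n):
    for j in range(n):
        ei, ej = np.eye(n)[i], np.eye(n)[j]
        H_fd[i, j] = (mu(p0 + eps*ei + eps*ej) - mu(p0 + eps*ei - eps*ej)
                      - mu(p0 - eps*ei + eps*ej) + mu(p0 - eps*ei - eps*ej)
                      ) / (4 * eps ** 2)
```

Since (13) is exactly quadratic in p, the finite-difference Hessian matches (14) up to floating-point error at any expansion point.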

The following lemmas will be used in the proof of the main result.

Lemma 3. Let ${\mu}_{p}\left(t\right)$ be monotonically increasing in t for all p, and suppose that a minimizing value $\stackrel{^}{p}\left(t\right)=\underset{p}{\mathrm{arg}\mathrm{min}}\left\{{\mu}_{p}\left(t\right)\right\}$ exists for all t. Then,

${\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{m}\right)\ge {\mu}_{\stackrel{^}{p}\left({t}_{n}\right)}\left({t}_{n}\right)$ for all ${t}_{m}\ge {t}_{n}$.

Proof. By monotonicity

${\mu}_{p}\left({t}_{m}\right)\ge {\mu}_{p}\left({t}_{n}\right),\text{\hspace{0.17em}}\forall {t}_{m}\ge {t}_{n}$ (15)

Also, since $\stackrel{^}{p}\left(t\right)$ minimizes ${\mu}_{p}(t)$

${\mu}_{\stackrel{^}{p}\left(t\right)}\left(t\right)\le {\mu}_{p}\left(t\right),\text{\hspace{0.17em}}\forall p\in \mathbb{P}\mathrm{.}$ (16)

From (16), with $p=\stackrel{^}{p}\left({t}_{m}\right)$ evaluated at time ${t}_{n}$,

${\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{n}\right)\ge {\mu}_{\stackrel{^}{p}\left({t}_{n}\right)}\left({t}_{n}\right)\mathrm{.}$

Hence, by (15),

${\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{m}\right)\ge {\mu}_{\stackrel{^}{p}\left({t}_{n}\right)}\left({t}_{n}\right),\text{\hspace{0.17em}}\forall {t}_{m}\ge {t}_{n}\mathrm{.}$

☐

Lemma 4. Let ${\mu}_{p}\left(t\right)$ be monotonically increasing in t for all p, and suppose that a

minimizing value $\stackrel{^}{p}\left(t\right)=\underset{p}{\mathrm{arg}\mathrm{min}}\left\{{\mu}_{p}\left(t\right)\right\}$ exists for all t. If the system is persistently

excited (def. 4), then ${\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{m}\right)-{\mu}_{\stackrel{^}{p}\left({t}_{n}\right)}\left({t}_{n}\right)\ge \varphi \left(\Vert \stackrel{^}{p}\left({t}_{m}\right)-\stackrel{^}{p}\left({t}_{n}\right)\Vert \right),\forall {t}_{m}\ge {t}_{n}$.

Proof. Writing the monitoring signal ${\mu}_{p}\left(t\right)$ in the second order Taylor theorem expansion form

$\begin{array}{c}{\mu}_{p}\left(t\right)={\mu}_{\stackrel{^}{p}\left(t\right)}\left(t\right)+{\left(p-\stackrel{^}{p}\left(t\right)\right)}^{\text{T}}{\nabla}_{p}\left({\mu}_{\stackrel{^}{p}\left(t\right)}\left(t\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{2}{\left(p-\stackrel{^}{p}\left(t\right)\right)}^{\text{T}}{\nabla}_{p}^{2}\left({\mu}_{\xi \left(t\right)}\left(t\right)\right)\left(p-\stackrel{^}{p}\left(t\right)\right)\end{array}$ (17)

where $\xi \left(t\right)$ can be written as $\alpha p+\left(1-\alpha \right)\stackrel{^}{p}\left(t\right)$ and $\alpha \in \left[\mathrm{0,1}\right]$.

Since $\stackrel{^}{p}\left(t\right)$ minimizes, we have

${\nabla}_{p}\left({\mu}_{\stackrel{^}{p}\left(t\right)}\left(t\right)\right)=0$ (18)

Additionally, since the system is persistently excited,

${\nabla}_{p}^{2}\left({\mu}_{\xi \left(t\right)}\left(t\right)\right)\ge \alpha I$ (19)

From (18) and (19), Equation (17) can be written as follows.

${\mu}_{p}\left(t\right)\ge {\mu}_{\stackrel{^}{p}\left(t\right)}\left(t\right)+\frac{\alpha}{2}{\Vert p-\stackrel{^}{p}\left(t\right)\Vert}^{2}$ (20)

or, equivalently,

${\mu}_{p}\left(t\right)-{\mu}_{\stackrel{^}{p}\left(t\right)}\left(t\right)\ge \frac{\alpha}{2}{\Vert p-\stackrel{^}{p}\left(t\right)\Vert}^{2}$ (21)

By monotonicity

${\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{m}\right)\ge {\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{n}\right),\text{\hspace{0.17em}}\forall {t}_{m}\ge {t}_{n}$ (22)

Therefore, applying (21) with $p=\stackrel{^}{p}\left({t}_{m}\right)$ at time ${t}_{n}$ and combining with (22),

${\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{m}\right)-{\mu}_{\stackrel{^}{p}\left({t}_{n}\right)}\left({t}_{n}\right)\ge \frac{\alpha}{2}{\Vert \stackrel{^}{p}\left({t}_{m}\right)-\stackrel{^}{p}\left({t}_{n}\right)\Vert}^{2}$

and hence, with $\varphi \left(x\right)=\frac{\alpha}{2}{x}^{2}$, it holds that ${\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{m}\right)-{\mu}_{\stackrel{^}{p}\left({t}_{n}\right)}\left({t}_{n}\right)\ge \varphi \left(\Vert \stackrel{^}{p}\left({t}_{m}\right)-\stackrel{^}{p}\left({t}_{n}\right)\Vert \right)$ for all ${t}_{m}\ge {t}_{n}$.

☐

Theorem 1. (Main Result) Consider the supervisory control system in Figure 2.

Assume that the following conditions hold.

1) Monotonicity: For all p, it holds that

${\mu}_{p}\left(t\right)\ge {\mu}_{p}\left(\tau \right)$ for all $t>\tau $.

2) Feasibility: A value ${p}^{\star}\in \mathbb{P}$ exists for which the monitoring signal is uniformly bounded, as follows.

$\underset{t}{\mathrm{sup}}{\mu}_{{p}^{\star}}\left(t\right)<\infty $

If the system is persistently excited (def. 4), then the optimal process parameter $\stackrel{^}{p}\left(t\right)$ converges, as t increases to infinity, to a point in the closure of the set $\mathbb{P}$.

Proof. By feasibility and by Lemma 3, ${\mu}_{\stackrel{^}{p}\left(t\right)}\left(t\right)$ exists, is monotonic in t, and has an upper bound. Hence, the limit exists:

$\underset{t\to \infty}{\mathrm{lim}}{\mu}_{\stackrel{^}{p}\left(t\right)}\left(t\right)<\infty $ (23)

$\underset{{t}_{m},{t}_{n}\to \infty}{\mathrm{lim}}\left({\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{m}\right)-{\mu}_{\stackrel{^}{p}\left({t}_{n}\right)}\left({t}_{n}\right)\right)=0$ (24)

Since the system is persistently excited, it follows from Lemma 4 that for all ${t}_{m}\ge {t}_{n}$,

${\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{m}\right)-{\mu}_{\stackrel{^}{p}\left({t}_{n}\right)}\left({t}_{n}\right)\ge \varphi \left(\Vert \stackrel{^}{p}\left({t}_{m}\right)-\stackrel{^}{p}\left({t}_{n}\right)\Vert \right)$ (25)

Thus, for all ${t}_{m}\ge {t}_{n}$, it holds that

$\varphi \left(\Vert \stackrel{^}{p}\left({t}_{m}\right)-\stackrel{^}{p}\left({t}_{n}\right)\Vert \right)\le {\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{m}\right)-{\mu}_{\stackrel{^}{p}\left({t}_{n}\right)}\left({t}_{n}\right)$

Therefore, by (24), for every $\epsilon >0$, a $T$ exists such that ${\mu}_{\stackrel{^}{p}\left({t}_{m}\right)}\left({t}_{m}\right)-{\mu}_{\stackrel{^}{p}\left({t}_{n}\right)}\left({t}_{n}\right)<\epsilon $ for all ${t}_{m}\ge {t}_{n}\ge T$.

Moreover, $\varphi \left(\Vert \stackrel{^}{p}\left({t}_{m}\right)-\stackrel{^}{p}\left({t}_{n}\right)\Vert \right)\to 0$ as ${t}_{m},{t}_{n}\to \infty $. Since $\varphi $ is a nondecreasing continuous function that satisfies $\varphi \left(0\right)=0$ and $\varphi \left(x\right)>0$ for $x>0$, it follows that for every $\delta >0$, a $T$ exists such that $\Vert \stackrel{^}{p}\left({t}_{m}\right)-\stackrel{^}{p}\left({t}_{n}\right)\Vert <\delta $ for all ${t}_{m}\ge {t}_{n}\ge T$. Therefore, $\stackrel{^}{p}\left(t\right)$ is a Cauchy sequence. Since every Cauchy sequence in ${\mathbb{R}}^{n}$ converges [24], it follows that $\stackrel{^}{p}\left(t\right)$ converges as $t\to \infty $ to a point in the closure of the set $\mathbb{P}$. ☐
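The convergence mechanism in the proof can be illustrated for a hypothetical scalar case: with a persistently exciting regressor, the minimizer of the growing integral-of-squared-error signal traces a Cauchy path that settles at the true parameter (here a hypothetical p* = −4, mirroring Example 1):

```python
import numpy as np

# Scalar illustration of Theorem 1: with persistently exciting x(t),
# the minimizer p_hat(t) of mu_p(t) = integral (p*x - y)^2 ds has the
# closed form p_hat(t) = integral(x*y) / integral(x^2), and its path
# settles (Cauchy behavior) at the true parameter.
t = np.linspace(0.01, 50.0, 5000)
dt = t[1] - t[0]
x = 1.0 + 0.5 * np.sin(t)              # persistently exciting regressor
y = -4.0 * x + 0.05 * np.sin(7 * t)    # data from p* = -4 plus a ripple
num = np.cumsum(x * y) * dt            # running integral of x*y
den = np.cumsum(x * x) * dt            # running integral of x^2 (grows)
p_hat = num / den                      # minimizer of the quadratic mu_p
```

The growing denominator is exactly the persistent excitation effect: the bounded ripple term is averaged out, so successive values of `p_hat` get arbitrarily close to one another.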

Remark 1. (Performance Improvement) According to the certainty equivalence concept [25],

“The nominal process model with the smallest performance criterion signal ‘best’ approximates the actual process, and therefore the candidate controller associated with that model can be expected to do the best job of controlling the process.”

The basic idea is to determine which nominal process model is associated with the smallest monitoring signals. Then, the corresponding candidate controller can be selected.

As shown in Theorem 1, the approach introduced in this paper (which relies on easing the local priority hysteresis switching logic constraints) improves adaptive controller convergence. Based on certainty equivalence [25], this method also improves adaptive control performance.

5. Simulation Examples

In this section, two systems are examined to demonstrate the unconstrained performance criterion introduced in this work and to show how it converges to an optimal solution in finite time in the absence of unstructured uncertainty and plant/sensor noise. In both examples, the reference signal is set to a unit step function.

Example 1: Consider the model reference adaptive control in Figure 2 with a first-order model reference and actual plant, where p is a parameter taking values in the index set $\mathbb{P}$. The simulation starts from an initial process parameter $\stackrel{^}{p}\left(0\right)$. The monitoring signal ${\mu}_{p}\left(t\right)$ converges to the optimum value as the parameter p converges to −4. The results are shown in Figure 3 and Figure 4. The graph in Figure 3 shows that the

Figure 3. Simulation results for example 1.

Figure 4. Simulation results for example 1.

performance criterion responds instantaneously and continuously and that the output of the plant converges to the output of the reference model as the process parameter reaches its optimal value, which is the point at which the error “e” reaches zero. The graph in Figure 4 shows the rapid and smooth convergence of the error and process parameter to their optimal values. In both graphs, exact matching is achieved.

Example 2: In this example, a second-order system shows how the unconstrained performance criterion manages a system that exhibits a transient response. Consider the model reference adaptive control in Figure 2 with a second-order model reference and actual plant, where p is a parameter taking values in the index set $\mathbb{P}$. The simulation starts from an initial process parameter $\stackrel{^}{p}\left(0\right)$. The results are shown in Figure 5 and Figure 6. The graph in Figure 5 shows that the error “e” reaches zero when the process parameter $\stackrel{^}{p}$ converges to −6. The graph in Figure 6 shows the smooth convergence of the error and process parameter to the optimal values over a short time. In this example, the plant performance and error convergence results are satisfactory, and exact matching is achieved.

6. Conclusion and Future Work

6.1. Conclusion

In this paper, we examined the local priority hysteresis switching logic and established performance criteria under which the hysteresis constant can be set to zero. The main results indicate that when the convergence lemma conditions

Figure 5. Simulation results for example 2.

Figure 6. Simulation results for example 2.

(i.e., monotonic monitoring signal and feasible system) hold, the PE assumption ensures convergence in local priority hysteresis switching logic without adding constraints to the switching logic. Easing these constraints improves adaptive control convergence, which results in improved performance.

6.2. Future Work

The quadratic model reference performance criterion in the example satisfies the EQPD condition, but it lacks the fading memory term, which may lead to difficulties for plants with unstructured uncertainty or plant/sensor noise. In those cases, a performance criterion with a fading memory term that satisfies the EQPD condition must be designed. Whether such a performance criterion exists remains an open question to address in future studies.

References

[1] Alhajri, M. (2017) Relaxing Convergence Constraints in Local Priority Hysteresis Switching Logic. World Academy of Science, Engineering and Technology Conference, 572-576.

[2] Drenick, R.F. and Shahbender, R.A. (1957) Adaptive Servomechanisms. Transactions of the American Institute of Electrical Engineers, Part II: Applications and Industry, 76, 286-292.

https://doi.org/10.1109/TAI.1957.6367242

[3] Patil, S., Sung, Y. and Safonov, M. (2014) Unfalsified Adaptive Control with Reset and Bumpless Transfer. 2014 IEEE 53rd Annual Conference on Decision and Control, Los Angeles, CA, 15-17 December 2014, 1264-1270.

https://doi.org/10.1109/CDC.2014.7039555

[4] Manzar, M.N., Battistelli, G. and Sedigh, A.K. (2017) Input-Constrained Multi-Model Unfalsified Switching Control. Automatica, 83, 391-395.

https://doi.org/10.1016/j.automatica.2017.04.044

[5] Sajjanshetty, K. and Safonov, M. (2014) Unfalsified Adaptive Control: Multi-Objective Cost-Detectable Cost Functions. 2014 IEEE 53rd Annual Conference on Decision and Control, Los Angeles, CA, 15-17 December 2014, 1283-1288.

https://doi.org/10.1109/CDC.2014.7039558

[6] Lymperopoulos, G., Borrello, M. and Margari, N. (2018) Multiple Model Adaptive Control of Valve Flow Using Event-Triggered Switching. 2018 IEEE Conference on Control Technology and Applications (CCTA), Copenhagen, 21-24 August 2018, 121-126.

https://doi.org/10.1109/CCTA.2018.8511416

[7] Narendra, K.S. and Esfandiari, K. (2018) Adaptive Control of Linear Periodic Systems Using Multiple Models. 2018 IEEE Conference on Decision and Control, Miami Beach, FL, 17-19 December 2018, 589-594.

https://doi.org/10.1109/CDC.2018.8619514

[8] Wang, Q., Dai, W., Ma, X. and Yang, C. (2017) Multiple Models and Neural Networks Based Adaptive PID Decoupling Control of Mine Main Fan Switchover System. IET Control Theory & Applications, 12, 446-455.

https://doi.org/10.1049/iet-cta.2017.0701

[9] Hespanha, J., Liberzon, D., Morse, S., Anderson, B., Brinsmead, T. and De Bruyne, F. (2001) Multiple Model Adaptive Control. Part 2: Switching. International Journal of Robust and Nonlinear Control, 11, 479-496.

https://doi.org/10.1002/rnc.594

[10] Hespanha, J., Liberzon, D. and Morse, S. (2003) Hysteresis-Based Switching Algorithms for Supervisory Control of Uncertain Systems. Automatica, 39, 263-272.

https://doi.org/10.1016/S0005-1098(02)00241-8

[11] Stefanovic, M. and Safonov, M. (2008) Safe Adaptive Switching Control: Stability and Convergence. IEEE Transactions on Automatic Control, 53, 2012-2021.

https://doi.org/10.1109/TAC.2008.929395

[12] Alharashani, M. (2010) Relaxing Convergence Assumptions for Continuous Adaptive Control. Ph.D. Thesis, University of Southern California, Los Angeles, CA.

[13] Morse, S., Mayne, D. and Goodwin, G. (1992) Applications of Hysteresis Switching in Parameter Adaptive Control. IEEE Transactions on Automatic Control, 37, 1343-1354.

https://doi.org/10.1109/9.159571

[14] Middleton, R.H., Goodwin, G.C., Hill, D.J. and Mayne, D.Q. (1988) Design Issues in Adaptive Control. IEEE Transactions on Automatic Control, 33, 50-58.

https://doi.org/10.1109/9.360

[15] Bertsekas, D. (1999) Nonlinear Programming. Athena Scientific, Belmont, MA.

[16] Anderson, B. (1977) Exponential Stability of linear Equations Arising in Adaptive Identification. IEEE Transactions on Automatic Control, 22, 83-88.

https://doi.org/10.1109/TAC.1977.1101406

[17] Åström, K. and Torsten, B. (1965) Numerical Identification of Linear Dynamic Systems from Normal Operating Records. IFAC Proceedings Volumes, 2, 96-111.

https://doi.org/10.1016/S1474-6670(17)69024-4

[18] Åström, K. and Wittenmark, B. (2013) Adaptive Control. Courier Corporation, North Chelmsford, MA.

[19] Na, J., Mahyuddin, M., Herrmann, G., Ren, X. and Barber, P. (2015) Robust Adaptive Finite-Time Parameter Estimation and Control for Robotic Systems. International Journal of Robust and Nonlinear Control, 25, 3045-3071.

https://doi.org/10.1002/rnc.3247

[20] Cho, N., Shin, H., Kim, Y. and Tsourdos, A. (2018) Composite Model Reference Adaptive Control with Parameter Convergence under Finite Excitation. IEEE Transactions on Automatic Control, 63, 811-818.

https://doi.org/10.1109/TAC.2017.2737324

[21] Narendra, K. and Annaswamy, A. (1987) Persistent Excitation in Adaptive Systems. International Journal of Control, 45, 127-160.

https://doi.org/10.1080/00207178708933715

[22] Morse, S. (1996) Supervisory Control of Families of Linear Set-Point Controllers—Part I: Exact Matching. IEEE Transactions on Automatic Control, 41, 1413-1431.

https://doi.org/10.1109/9.539424

[23] Morse, S. (1997) Supervisory Control of Families of Linear Set-Point Controllers—Part II: Robustness. IEEE Transactions on Automatic Control, 42, 1500-1515.

https://doi.org/10.1109/9.649687

[24] Greenberg, M. (1975) Topology: A First Course. Prentice-Hall, Englewood Cliffs, NJ.

[25] Morse, S. (1992) Towards a Unified Theory of Parameter Adaptive Control. II. Certainty Equivalence and Implicit Tuning. IEEE Transactions on Automatic Control, 37, 15-29.