Geometrical Frameworks in Identification Problem


1. Introduction

The framework (FR) concept is applied in control, identification, analysis, and data processing tasks. FR is a synonym of such concepts as a frame, structure, system, platform, concept, basis, and set of approaches. The term “framework” is used in two directions in scientific research. In the first direction, the term integrates a set of methods, approaches, or procedures. So, FR in [1] is interpreted as the set of mathematical and technical procedures and methods for identification of the automobile battery control process; the approach to identification is based on the Bayesian framework. In [2], this concept combines a set of identification methods based on prediction error computing. The proposed methods show that such procedures allow obtaining estimations that are optimal in some sense. The key moment in this parametric paradigm is the choice of a suitable reference structure. The same paradigm, based on the creation of a new concept of system identification, is proposed in [3] [4]. It is based on the compilation of existing approaches.

The framework can also be interpreted as the theoretical model structure for the analysis of content transmitted in video [5].

So, we have a system of theoretical provisions which is applied to the solution of a specific problem. A hybrid system identification scheme (methodology) based on the application of continuous optimization is proposed in [6]. In [7], a uniform theoretical concept (framework) is proposed for nonlinear discrete dynamic system identification; it is based on the application of neural networks. A procedure (framework) is proposed in [8] for the identification of functional failure. It is the basis of a new approach to functional failure risk estimation in physical systems. The framework is based on the integration of hierarchical systemic models of functionality and behavioral simulation. Such an interpretation of FR is dominant (see, for example, [9] [10] [11] [12] [13]).

The second interpretation of FR is based on the application of mappings describing processes and properties of the system in a generalized view. The bases of this approach are proposed in the qualitative theory of dynamic systems [14] [15] [16]. Some geometrical framework corresponds to such a mapping. This approach is widely applied in chaos research. The attractor is an example of a framework in identification problems (see, for example, [17] [18] [19]). In these works, the framework equation is specified a priori within unknown parameters. Further, the identification problem is solved to obtain the required form of the attractor. Another approach [20] [21] [22] [23] is based on the application of geometrical frameworks (GF) for analyzing the system under uncertainty. GF gives the solution to the structural identification problem. Further, we interpret this approach as the methodology based on the design and analysis of GF. The main difference between the proposed GF and the frameworks in [17] [18] [19] is that the mathematical mapping (GF) is not postulated a priori but is determined based on data processing. The GF is the main object of the analysis and allows deciding on system behaviour and properties. This methodology covers the following areas of identification theory.

1) Structural identification of the nonlinear system.

2) The estimation of Lyapunov exponents.

3) Structural identifiability of the nonlinear system.

4) The system phase portrait reconstruction on the time series.

5) The system structure estimation with lag variables.

The structure of the paper. This paper is a review of the application of GF in identification problems. Section 2 contains the problem statement. The methodology for geometrical frameworks design in identification problems is stated in Section 3. We show that GF for static and dynamic systems differ significantly. A special class of mappings is applied to decision-making on the linear dynamic system structure. We show the GF application to the estimation of Lyapunov exponents. Obtaining a significant geometrical framework depends on the structural identifiability of the dynamic system. The structural identifiability of nonlinear dynamic systems is presented in Section 4. It is shown that the system input should be S-synchronizing to obtain a significant GF. Reconstruction of the system phase portrait or attractor is also an identification problem. This problem is discussed in Section 5. The choice of the system structure with lag variables is discussed in Section 6. Two approaches to the choice of the system structure are considered. The first approach is based on the application of statistical methods. The second approach is founded on the estimation of Lyapunov exponents. An implementation example of the proposed approach is described. The conclusion contains the main inferences and results.

2. Problem Statement

Consider the dynamic system

$\begin{array}{l}\dot{X}=AX+\phi \left(y\right)I+Bu,\\ y={C}^{\text{T}}X,\end{array}$ (1)

where $u\in R$, $y\in R$ are the input and the output; $A\in {R}^{m\times m}$, $B\in {R}^{m}$, $I\in {R}^{m}$, $C\in {R}^{m}$ are matrices and vectors of corresponding dimensions; $\phi \left(y\right)$ is a scalar nonlinear function; *A* is a Hurwitz matrix.

We suppose that $\chi =\phi \left(y\right)$ belongs to the set

$\chi \in {F}_{\phi}=\left\{{\upsilon}_{1}{\xi}^{2}\le \phi \left(\xi \right)\xi \le {\upsilon}_{2}{\xi}^{2},\ \xi \ne 0,\ \phi \left(0\right)=0,\ {\upsilon}_{1}\ge 0,\ {\upsilon}_{2}<\infty \right\}.$ (2)

The nonlinear part of the system (1) is often described by static (algebraic) equations. Therefore, we further consider the case when $\phi \left(y\right)$ is described by an algebraic equation.

Let the informational set be known for the system (1)

${\text{I}}_{o}=\left\{u\left(t\right),y\left(t\right),t\in J=\left[{t}_{0},{t}_{k}\right]\right\}$. (3)

Problem: evaluate the class of the nonlinear function $\phi \left(y\right)$ in (1) and characterise the matrix *A* on the basis of processing the data (3).
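As an illustration of the data-generation side of this problem, a second-order instance of the system (1) with a sector nonlinearity satisfying (2) can be simulated to produce the informational set (3). The matrices, the nonlinearity, and the input below are illustrative choices, not values from the paper; this is a minimal Euler-integration sketch:

```python
import numpy as np

def simulate_system(phi, t_k=20.0, dt=1e-3):
    """Euler simulation of a second-order instance of system (1):
    dX/dt = A X + phi(y) I + B u,  y = C^T X.
    A, B, I, C and the input u(t) are illustrative, not from the paper."""
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz matrix (eigenvalues -1, -2)
    B = np.array([0.0, 1.0])
    I = np.array([0.0, 1.0])
    C = np.array([1.0, 0.0])
    n = int(t_k / dt)
    t = np.linspace(0.0, t_k, n)
    u = 2.0 * np.sin(0.3 * np.pi * t)          # bounded, persistently exciting input
    X = np.zeros(2)
    y = np.empty(n)
    for k in range(n):
        y[k] = C @ X
        X = X + dt * (A @ X + phi(y[k]) * I + B * u[k])
    return t, u, y                              # the informational set I_o of (3)

# sector nonlinearity satisfying (2) with upsilon_1 = 0.1, upsilon_2 = 1.0
phi = lambda xi: 0.1 * xi + 0.9 * xi * (np.abs(xi) / (1.0 + np.abs(xi)))
t, u, y = simulate_system(phi)
```

The chosen `phi` satisfies $\phi(0)=0$ and $0.1\xi^{2}\le \phi(\xi)\xi \le 1.0\xi^{2}$, so it lies in the set (2).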

3. Geometrical Frameworks in Dynamic Systems Structural Identification Problem

3.1. ${S}_{ey}$ -Frameworks

The design of the geometrical framework ${S}_{ey}$ is one of the main stages of the structural identification problem solution. The method for the framework ${S}_{ey}$ design is defined by the possibility of estimating the system structural parameters. The framework ${S}_{ey}$ is derived from a phase portrait $S$. $S$ is the starting point for further research on forming ${S}_{ey}$ under uncertainty. The GF design approach depends on system properties and the considered structure identification problem. The ${S}_{ey}$ synthesis method is proposed in [21] and is generalized to dynamic systems in [20] [22]. The approach is based on forming a subset ${\text{I}}_{GF}$ which allows obtaining a mapping for the ${S}_{ey}$ design. ${\text{I}}_{GF}$ is the result of the analysis of the set ${\text{I}}_{o}$. ${\text{I}}_{GF}$ may contain data on the transient process or the steady motion in the system; it contains the information about the system's nonlinear properties.

The set ${\text{I}}_{N,g}$ is formed as follows. Apply the differentiation operation to $y\left(t\right)$ and designate the obtained variable as ${x}_{1}$. Determine the model

${\stackrel{^}{x}}_{1}^{l}\left(t\right)={H}^{\text{T}}{\left[1\text{\hspace{0.17em}}\text{\hspace{0.17em}}u\left(t\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}y\left(t\right)\right]}^{\text{T}}$, (4)

where ${\stackrel{^}{x}}_{1}^{l}$ is the estimation of the linear component in ${x}_{1}$ on the time gap ${J}_{g}=J\backslash {J}_{tr}$ corresponding to the steady motion in the system (1); $H\in {R}^{3}$ is the vector of model (4) parameters; ${J}_{tr}$ is the time gap corresponding to the transient process in the system. The vector *H* is determined by applying the least squares method.

Obtain the forecast for the variable ${x}_{1}$ using the model (4) and form the error $e\left(t\right)={\stackrel{^}{x}}_{1}^{l}\left(t\right)-{x}_{1}\left(t\right)$. $e\left(t\right)$ depends on the nonlinearity $\phi \left(y\right)$ in the system (1). So, we have the set ${\text{I}}_{N,g}=\left\{y\left(t\right),e\left(t\right),t\in {J}_{g}\right\}$. Further, we use the designation $y\left(t\right)$ assuming that $y\left(t\right)\in {\text{I}}_{N,g}$.

Construct the phase portrait $S$ and the GF ${S}_{ey}$ described by the functions $\Gamma :\left\{y\right\}\to \left\{{y}^{\prime}\right\}$, ${\Gamma}_{ey}:\left\{y\right\}\to \left\{e\right\}$. ${S}_{ey}$ is the basis for the analysis and the identification system design. The framework ${S}_{ey}$ should have specified properties [20]. The properties of structural identifiability and S-synchronizability [23] are basic. The correctness sign of the obtained GF is the regularity of its presentation and the fulfilled condition $\left|e\left(t\right)\right|>{\delta}_{e}$ for $\forall t\ge {t}_{q}>{t}_{tr}$, where ${t}_{tr}$ is the end time of the transient process and ${\delta}_{e}>0$ is some number. The described approach can give both significant ${S}_{ey}$ and insignificant $N{S}_{ey}$ frameworks ( ${S}_{ey}=N{S}_{ey}$ ). Decision-making on the significance of ${S}_{ey}$ is based on the results obtained in [20]. The framework $N{S}_{ey}$ is the result of nonfulfillment of the S-synchronizability (SS) condition (see Section 4). S-synchronizability of the system (1) (framework ${S}_{ey}$ ) requires fulfilment of the excitation constancy condition for the input $u\left(t\right)$. If the SS-condition is satisfied, the ${S}_{ey}$ significance estimation algorithm is based on the analysis of the sector set properties of ${S}_{ey}$ [21].

Definition 1. The framework ${S}_{ey}$ is called regular if the S-synchronizability condition is satisfied for the system (1).

An example of the regular framework ${S}_{ey}$ for a system with static hysteresis is presented in Figure 1 [21].

If the function $\phi \left(y\right)$ has a complex law of change, the application of the approach described above can give a “false” framework $N{S}_{ey}$.

An example of such a framework for the system describing processes in the RC-OTA chaotic oscillator [24]

$\begin{array}{l}\stackrel{\xa8}{x}-0.1\stackrel{\dot{}}{x}+x=\phi ,\\ \stackrel{\dot{}}{\phi}=10\left(-\phi +\mathrm{sgn}\left(x+\mathrm{sgn}\left(\phi \right)\right)\right),\end{array}$ (5)

is shown in Figure 2. RC-OTA is applied in the design of electronic and control systems, where

Figure 1. Frameworks $S$, ${S}_{ey}$ for the second order system (1) with static hysteresis.

Figure 2. Frameworks $S$, ${S}_{ey}$ for the second order system (5) with dynamic hysteresis.

$\mathrm{sgn}\left(x\right)=\{\begin{array}{l}1,\text{ if }x>0,\\ 0,\text{ if }x=0,\\ -1,\text{ if }x<0.\end{array}$

The regular framework is obtained by applying the hierarchical immersion method [21] in the state space. This method provides the model (4) structure choice for each layer of the hierarchy.

An example of the regular framework ${S}_{ey}$ for the system (5) is shown in Figure 3. The designations shown in Figure 3 are given in [21].

Example 1. Consider a mechanical system with Bouc-Wen hysteresis [25]. It has the form

$\begin{array}{l}m\ddot{x}+c\dot{x}+F\left(x,z,t\right)=f\left(t\right),\ y=x,\\ F\left(x,z,t\right)=\alpha kx\left(t\right)+\left(1-\alpha \right)kdz\left(t\right),\end{array}$ (6)

Figure 3. Regular structure for the system (5).

$\dot{z}={d}^{-1}\left(a\dot{x}-\beta \left|\dot{x}\right|{\left|z\right|}^{n}\text{sign}\left(z\right)-\gamma \dot{x}{\left|z\right|}^{n}\right),$

where $m>0$ is the mass, $c>0$ is the damping, $F\left(x,z,t\right)$ is the restoring force, $d>0$, $n>0$, $k>0$, $\alpha \in \left(0,1\right)$, $f\left(t\right)$ is the exciting force, and $a,\beta ,\gamma $ are some numbers. Denote the system (6) as SBW.

The SBW-system parameters for the actuator control problem are $d=a=m$, $n=1.5$, $\beta =0.5$, $\alpha =1.5$, $k=0.6$, $m=1$, $c=2$. The exciting force is $f\left(t\right)=2-2\mathrm{sin}\left(0.15\pi t\right)$.

The model (4) has the form $\dot{\hat{x}}=-0.199x+0.471f$. The application of the proposed method gives the ${S}_{ey}$-frameworks (Figure 4). The ranges of definition of *y* and *z* match. Analysis of the ${S}_{ey}$-structure shows that the system (6) is nonlinear.
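A minimal numerical sketch of this example: Euler integration of the SBW-system (6) with the Example 1 parameters, followed by a least-squares fit of a model (4)-type linear relation. The value of $\gamma$ is not given in the text, so `gamma = 0.5` is an illustrative assumption, as are the simulation horizon and step:

```python
import numpy as np

def simulate_bouc_wen(t_k=60.0, dt=1e-3):
    """Euler simulation of the SBW-system (6). Parameters follow Example 1
    (d=a=m=1, n=1.5, beta=0.5, k=0.6, c=2, alpha=1.5); gamma=0.5 is assumed."""
    m, c, k, alpha, d, a = 1.0, 2.0, 0.6, 1.5, 1.0, 1.0
    n_exp, beta, gamma = 1.5, 0.5, 0.5
    N = int(t_k / dt)
    t = np.linspace(0.0, t_k, N)
    f = 2.0 - 2.0 * np.sin(0.15 * np.pi * t)        # exciting force of Example 1
    x = v = z = 0.0
    X = np.empty(N)
    for i in range(N):
        X[i] = x
        F = alpha * k * x + (1.0 - alpha) * k * d * z       # restoring force
        dz = (a * v - beta * abs(v) * abs(z) ** n_exp * np.sign(z)
              - gamma * v * abs(z) ** n_exp) / d
        dv = (f[i] - c * v - F) / m
        x, v, z = x + dt * v, v + dt * dv, z + dt * dz
    return t, f, X

t, f, x = simulate_bouc_wen()
# fit a model (4)-type relation x1_hat = h0 + h1*x + h2*f on the steady state
mask = t > 20.0
x1 = np.gradient(x, t)
Phi = np.column_stack([np.ones(mask.sum()), x[mask], f[mask]])
H, *_ = np.linalg.lstsq(Phi, x1[mask], rcond=None)
e = Phi @ H - x1[mask]     # nonzero residual signals the hysteresis nonlinearity
```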

3.2. ${S}_{{k}_{s,\rho}}$, $S{K}_{\Delta {k}_{s,\rho}^{i}}$ -Frameworks

Another class of frameworks, ${S}_{{k}_{s,\rho}}$, is designed based on the analysis of the system (1) general solution. ${S}_{{k}_{s,\rho}}$ is applied to the structure choice for the system (1) linear part. This task differs from the problem considered above. Therefore, the mappings allowing decision-making should have another form [22] [23]. They are based on the analysis of the Lyapunov exponent (LE) change dynamics. Apply the model

${\stackrel{^}{X}}_{q}\left(t\right)={\stackrel{^}{A}}_{q}W\left(t\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\forall t\in {J}_{q}$ (7)

to the particular solution estimation of the system (1) on the output *y*, where ${\stackrel{^}{A}}_{q}\in {R}^{2\times 2}$ is the parameter matrix, $W={\left[u\ {u}^{\prime}\right]}^{\text{T}}$, and ${\stackrel{^}{X}}_{q}\in {R}^{2}$ is the estimation of the system output and its derivative. The choice of the interval ${J}_{q}\subset J$ depends on the system (1) properties.

Further, we obtain the estimation of the system (1) general solution on the basis of ${\stackrel{^}{X}}_{q}$

${\stackrel{^}{X}}_{g}\left(t\right)=X\left(t\right)-{\stackrel{^}{X}}_{q}\left(t\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\forall t\in {J}_{g}$,

Figure 4. Framework ${S}_{ey}$, phase portrait and *z*.

where ${\stackrel{^}{X}}_{g}\left(t\right)={\left[{\stackrel{^}{y}}_{g}\left(t\right)\ {\stackrel{\dot{}}{\stackrel{^}{y}}}_{g}\left(t\right)\right]}^{\text{T}}$. This approach can be generalized to the case $m>2$.
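The splitting of the output into particular- and general-solution estimates can be sketched as follows. The regressors follow (7) with $W=[u\ u']^{\text{T}}$; the choice of the interval $J_q$ and the toy signal are illustrative assumptions:

```python
import numpy as np

def split_solution(t, u, y):
    """Sketch of (7): estimate the particular (forced) solution y_q from the
    regressors W = [u, u'] on a steady-state interval J_q, then recover the
    general-solution estimate y_g = y - y_q used for the LE analysis."""
    du = np.gradient(u, t)
    W = np.column_stack([u, du])
    J_q = t > 0.7 * t[-1]                      # illustrative choice of J_q
    a, *_ = np.linalg.lstsq(W[J_q], y[J_q], rcond=None)
    y_q = W @ a                                # particular-solution estimate
    y_g = y - y_q                              # estimate of the free motion
    return y_q, y_g

# toy signal: forced sinusoid plus a decaying free component 0.5*exp(-0.5 t)
t = np.linspace(0, 20, 4001)
u = np.sin(t)
y = 0.8 * np.sin(t) + 0.3 * np.cos(t) + 0.5 * np.exp(-0.5 * t)
y_q, y_g = split_solution(t, u, y)
```

On $J_q$ the free component has decayed, so the regression recovers the forced part and `y_g` is left as an estimate of the free motion.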

Functions

$\rho \left({\stackrel{^}{y}}_{g}\right)={\rho}_{g}=\mathrm{ln}\left|{\stackrel{^}{y}}_{g}\left(t\right)\right|,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\forall t\in {\stackrel{\xaf}{J}}_{g}\subset {J}_{g}$,

${k}_{s}\left(t,\rho \right)=\rho \left({\stackrel{^}{y}}_{g}\right)/t$ (8)

are the basis of the mapping describing ${S}_{{k}_{s,\rho}}$, where ${\stackrel{\xaf}{J}}_{g}=\left[{t}_{0},\stackrel{\xaf}{t}\right]$ is determined based on the LE theory [26]. ${k}_{s}\left(t,\rho \right)$ is the basis for the Lyapunov exponent calculation.

Remark 1. The use of the framework simplifies the choice of the upper time bound in the LE calculation.

Perform the analysis of the sets

${\text{I}}_{{k}_{s}}=\left\{{k}_{s}\left(t,\rho \left({\stackrel{^}{y}}_{g}\left(t\right)\right)\right),t\in {\stackrel{\xaf}{J}}_{g}\right\},\ {\text{I}}_{{{k}^{\prime}}_{s}}=\left\{{k}_{s}\left(t,\rho \left({\stackrel{\dot{}}{\stackrel{^}{y}}}_{g}\left(t\right)\right)\right),t\in {\stackrel{\xaf}{J}}_{g}\right\},\ {\stackrel{\xaf}{J}}_{g}\subset {J}_{g}$ (9)

for the LE determination.
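For a purely exponential free motion ${\stackrel{^}{y}}_{g}=\text{e}^{\lambda t}$ the indicator (8) equals $\lambda$ exactly, which is the property exploited for the LE estimation. A minimal sketch:

```python
import numpy as np

def le_indicator(t, y_g):
    """Sketch of (8)-(9): the indicator k_s(t) = ln|y_g(t)| / t, whose
    steady level estimates the largest Lyapunov exponent of the free motion."""
    mask = (t > 0) & (np.abs(y_g) > 1e-12)     # avoid t = 0 and log(0)
    return t[mask], np.log(np.abs(y_g[mask])) / t[mask]

# for y_g = exp(lambda*t) with lambda = -0.5 the indicator is exactly -0.5
t = np.linspace(0.0, 30.0, 3001)
y_g = np.exp(-0.5 * t)
tt, k_s = le_indicator(t, y_g)
```

For real data the indicator settles near the dominant exponent only over the interval ${\stackrel{\xaf}{J}}_{g}$, which is why the bound $\stackrel{\xaf}{t}$ matters.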

On the sets ${\text{I}}_{{k}_{s}}$, ${\text{I}}_{{{k}^{\prime}}_{s}}$, the framework ${S}_{{k}_{s,\rho}}$ described by the function ${\Gamma}_{{k}_{s,\rho}}:{\text{I}}_{{k}_{s}}\to {\text{I}}_{{{k}^{\prime}}_{s}}$ is introduced. The framework ${S}_{{k}_{s,\rho}}$ reflects the change dynamics of indexes depending on LE. Consider also the function describing the change of the first difference of ${k}_{s}\left(t,\rho \left({\stackrel{\dot{}}{\stackrel{^}{y}}}_{g}\left(t\right)\right)\right)$

$\Delta {{k}^{\prime}}_{s}\left(t\right)={k}_{s}\left(t,\rho \left({\stackrel{\dot{}}{\stackrel{^}{y}}}_{g}\left(t+\tau \right)\right)\right)-{k}_{s}\left(t,\rho \left({\stackrel{\dot{}}{\stackrel{^}{y}}}_{g}\left(t\right)\right)\right)$, (10)

where $\tau >0$.

Form the set ${\text{I}}_{\Delta {{k}^{\prime}}_{s}}=\left\{\Delta {k}_{s}\left(t,\rho \left({\stackrel{\dot{}}{\stackrel{^}{y}}}_{g}\left(t\right)\right)\right),t\in {\stackrel{\xaf}{J}}_{g}\right\}$ and introduce the framework $S{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$ to which the function ${\Gamma}_{\Delta {{k}^{\prime}}_{s,\rho}}:{\text{I}}_{{k}_{s,\rho}}\to {\text{I}}_{\Delta {{k}^{\prime}}_{s,\rho}}$ corresponds.

Consider also the framework $LS{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$ with ${\Gamma}_{{}_{\Delta {{k}^{\prime}}_{s,\rho}}}:{\text{I}}_{{k}_{s,\rho}}\to B\left({\text{I}}_{\Delta {{k}^{\prime}}_{s,\rho}}\right)$, where $B\left({\text{I}}_{\Delta {{k}^{\prime}}_{s,\rho}}\right)\subset \left\{-1;1\right\}$. The elements of the binary set $B\left({\text{I}}_{\Delta {{k}^{\prime}}_{s,\rho}}\right)$ are defined as

$b\left(t\right)=\{\begin{array}{l}1,\text{ if }\Delta {{k}^{\prime}}_{s}\left(t\right)\ge 0,\\ -1,\text{ if }\Delta {{k}^{\prime}}_{s}\left(t\right)<0,\end{array}$ $t\in {\stackrel{\xaf}{J}}_{g}$. (11)

Frameworks $S{K}_{\Delta {k}_{s,\rho}^{i}}$ based on the analysis of the change $\Delta {k}_{s}^{i}\left(t\right)$ ( $i>1$ ) are formed similarly. $\Delta {k}_{s}^{i}\left(t\right)$ is determined by analogy with (10), where *i* designates the *i*-th derivative of ${\stackrel{^}{y}}_{g}\left(t\right)$.

The application of (8)-(11) allows obtaining the LE set and estimating their type. The generalization of the approach to periodic dynamic systems is given in [27].
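The differencing (10) and binary coding (11) reduce to a few lines; taking the lag $\tau$ in samples here is an illustrative simplification:

```python
import numpy as np

def binary_framework(k_s, lag=1):
    """Sketch of (10)-(11): first differences of the indicator k'_s with a lag
    of `lag` samples, mapped to the binary set B subset of {-1, 1}."""
    dk = k_s[lag:] - k_s[:-lag]               # Delta k'_s(t) of (10)
    return np.where(dk >= 0, 1, -1)           # b(t) of (11)

# a monotonically increasing indicator codes to all +1
b = binary_framework(np.array([0.1, 0.2, 0.4, 0.7]))
```

The sign pattern of `b` is what the frameworks $LS{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$ analyze.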

4. Structural Identifiability and Structural Identification of Nonlinear Dynamic System

In Section 3, it was noted that the structure estimation of nonlinear dynamic systems depends on the system identifiability.

Many publications (see, for example, [28] [29] [30]) are devoted to the dynamic systems parametric identifiability problem. The structural identifiability of nonlinear dynamic systems is reduced to parametrical identifiability based on the application of various approximation methods [29] [30] [31] [32]. The proposed approaches are generalized to the case when not all system parameters can be identified.

In [23], structural identifiability is considered in the following aspect: determine conditions under which the nonlinear system structure estimation is possible under uncertainty. The solution to this problem for the system (1) is given in [23] when the nonlinear function $\phi \left(y\right)$ satisfies the condition (2). Decision-making is based on the analysis of the framework ${S}_{ey}$ describing the system (1) behavior in the steady state. It was shown that the system should satisfy the *h*-identifiability property.

Let the following conditions be satisfied.

B1. The initial set ${\text{I}}_{o}$ allows the parametrical identification problem solution for the model (1). It means that the input $u\left(t\right)$ is constantly excited on the interval *J*.

B2. The input $u\left(t\right)$ yields an informative framework ${S}_{ey}\left({\text{I}}_{N,g}\right)$. It means that the analysis of ${S}_{ey}$ gives the estimation problem solution for the system (1) properties.

Remark 2. The excitation constancy property, which is the basis for the parametric identifiability estimation, also affects the identifiability problem solution.

Let the framework ${S}_{ey}$ be closed and its area be nonzero. Designate the height of ${S}_{ey}$ as $h\left({S}_{ey}\right)$, where the height is the distance between two points of opposite sides of the framework ${S}_{ey}$.

Statement 1 [21]. Let i) the linear part of the system (1) be stable and the nonlinearity $\phi (\cdot )$ satisfy the condition (2); ii) the input $u\left(t\right)$ be limited, piecewise continuous, and constantly excited; iii) there exist ${\delta}_{S}>0$ such that $h\left({S}_{ey}\right)\ge {\delta}_{S}$. Then the framework ${S}_{ey}$ is identifiable on the set ${\text{I}}_{N,g}$.

Definition 2. The framework ${S}_{ey}$ having the properties specified in Statement 1 is called *h*-identifiable.

The fulfillment of Statement 1 conditions can still give an “insignificant” ${S}_{ey}$-framework ( $N{S}_{ey}$-framework). Therefore, *h*-identifiability is a necessary but not sufficient condition of structural identifiability (SI). The missing condition is the S-synchronizability of the system (1) [23].

Introduce designations: ${D}_{y}=\text{dom}\left({S}_{ey}\right)$ is the domain (the set $\left\{y\right\}$ ), and ${D}_{y}=\mathrm{diam}\left({D}_{y}\right)=\underset{t}{\mathrm{max}}y\left(t\right)-\underset{t}{\mathrm{min}}y\left(t\right)$ is its diameter. Let $u\left(t\right)\in \text{U}$ be an admissible set of inputs for the system (1).

Definition 3 [23]. The input $u\left(t\right)\in \text{U}$ S-synchronizes the system (1) if the definitional domain of the framework ${S}_{ey}$ has the maximum diameter ${D}_{y}$ on the set $\left\{y\left(t\right),t\in J\right\}$.

Synchronization by $u\left(t\right)\in \text{U}$ is understood as the choice of an input ${u}_{h}\left(t\right)\in \text{U}$ that allows reflecting all features of ${S}_{ey}$ characterizing $\phi \left(y\right)$. It is possible only if $u\left(t\right)$ ensures $\underset{{u}_{h}}{\mathrm{max}}{D}_{y}$.

Synchronization allows obtaining the framework ${S}_{ey}\ne N{S}_{ey}$. Such a selection of ${u}_{h}\left(t\right)\in \text{U}$ can be interpreted as the synchronization between the model and the system. Therefore, the fulfillment of the condition ${d}_{h,y}=\underset{{u}_{h}}{\mathrm{max}}{D}_{y}$ ensures the system ${h}_{{\delta}_{h}}$-identifiability.

Let the input ${u}_{h}\left(t\right)$ synchronize the set ${D}_{y}$. If $u\left(t\right)$ is S-synchronizing, then we write ${u}_{h}\left(t\right)\in \text{S}$. Note that a finite set $\left\{{u}_{h}\left(t\right)\right\}\in \text{S}$ exists for the system (1). The choice of the optimum input ${u}_{h}\left(t\right)$ depends on ${d}_{h,y}$. Ensuring this condition is one of the prerequisites for the system (1) structural identifiability.
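Choosing an S-synchronizing input by the domain-diameter criterion of Definition 3 can be sketched as follows; the candidate inputs and their recorded outputs below are hypothetical:

```python
import numpy as np

def domain_diameter(y):
    """Diameter D_y of the framework domain dom(S_ey) = {y(t)}."""
    return np.max(y) - np.min(y)

def pick_s_synchronizing(outputs):
    """Sketch of choosing u_h in S: among candidate inputs (represented here
    by their recorded outputs), pick the one maximizing the domain diameter."""
    return max(outputs, key=lambda name: domain_diameter(outputs[name]))

# hypothetical candidate outputs for two test inputs u1, u2
t = np.linspace(0, 10, 1001)
outputs = {"u1": 0.3 * np.sin(t), "u2": 1.2 * np.sin(0.5 * t)}
best = pick_s_synchronizing(outputs)
```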

Consider the reference structure ${S}_{ey}^{ref}$. ${S}_{ey}^{ref}$ reflects all properties of the function $\phi \left(y\right)$. Denote the diameter ${D}_{y}\left({S}_{ey}^{ref}\right)$ as ${D}_{y}^{ref}$. If ${u}_{h}\left(t\right)\in \text{S}$, then ${D}_{y}^{ref}$ exists for the system (1).

Corollary of Definitions 2 and 3. If ${S}_{ey}\cong {S}_{ey}^{ref}$, then $\left|{D}_{y}-{D}_{y}^{ref}\right|\le {\epsilon}_{y}$, where ${\epsilon}_{y}\ge 0$ and $\cong $ is the sign of proximity. Elements of the subset ${\text{U}}_{\text{S}}$ have the property

$\left|{D}_{y}\left({S}_{ey}\left(u\left(t\right)|{}_{u\in {\text{U}}_{\text{S}}}\right)\right)-{D}_{y}^{ref}\right|\le {\epsilon}_{y}$,

and

$\left|{D}_{y}\left({S}_{ey}\left(u\left(t\right)|{}_{u\in \text{U}\backslash {\text{U}}_{\text{S}}}\right)\right)-{d}_{h,y}\right|>{\epsilon}_{y}$

is the condition of $N{S}_{ey}$ appearance.

Let ${S}_{ey}$ be *h*-identifiable and ${S}_{ey}={F}_{{S}_{ey}}^{l}\cup {F}_{{S}_{ey}}^{r}$, where ${F}_{{S}_{ey}}^{l},{F}_{{S}_{ey}}^{r}$ are the left and right fragments of ${S}_{ey}$. Secants for ${F}_{{S}_{ey}}^{l},{F}_{{S}_{ey}}^{r}$ have the form

${\gamma}_{S}^{r}={a}^{r}y$, ${\gamma}_{S}^{l}={a}^{l}y$ (12)

where ${a}^{l},{a}^{r}$ are numbers determined by the least squares method (LSM).

Definition 4. If the framework ${S}_{ey}$ is *h*-identifiable and the condition $\left|\left|{a}^{l}\right|-\left|{a}^{r}\right|\right|\le {\delta}_{h}$ is satisfied, then the framework ${S}_{ey}$ (the system (1)) is structurally identifiable or ${h}_{{\delta}_{h}}$-identifiable.

Definition 4 shows that if the system (1) is ${h}_{{\delta}_{h}}$-identifiable, then the domain of the structure ${S}_{ey}$ has the maximum diameter ${D}_{y}$, and the system is S-synchronizable.
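The secant construction (12) and the check of Definition 4 can be sketched as follows. Splitting the fragments at the domain center and the tolerance value below are illustrative assumptions; the origin-through secants and the LSM fit follow (12):

```python
import numpy as np

def h_delta_check(y, e, delta_h=0.05):
    """Sketch of (12) and Definition 4: fit origin-through secants
    gamma^l = a^l*y, gamma^r = a^r*y to the left/right fragments of S_ey
    (split at the domain center) and test | |a^l| - |a^r| | <= delta_h."""
    c = 0.5 * (np.max(y) + np.min(y))          # center of dom(S_ey), assumed split
    left, right = y <= c, y > c
    slope = lambda m: float(np.sum(y[m] * e[m]) / np.sum(y[m] ** 2))  # LSM, no intercept
    a_l, a_r = slope(left), slope(right)
    return abs(abs(a_l) - abs(a_r)) <= delta_h, a_l, a_r

# a symmetric framework e = 0.3*y passes the check
y = np.linspace(-1.0, 1.0, 201)
ok, a_l, a_r = h_delta_check(y, 0.3 * y)
```

An asymmetric framework (different left/right slopes) fails the same check, signaling the loss of ${h}_{{\delta}_{h}}$-identifiability.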

Let the structure $S$ have *m* features. We understand features of the function $\phi \left(y\right)$ as losses of continuity, inflection points, or extremes. These features are signs of the function nonlinearity.

Definition 5. If the framework ${S}_{ey}$ is ${h}_{{\delta}_{h}}$-identifiable, then the model (4) is *SM*-identifying.

Theorem 1 [20]. Let 1) the input $u\left(t\right)$ be constantly excited and ensure the system (1) S-synchronization; 2) the phase portrait $S$ of the system (1) have features; 3) the ${S}_{ey}$-framework be ${h}_{{\delta}_{h}}$-identifiable and contain fragments corresponding to features of the system (1). Then the model (4) is *SM*-identifying.

Remark 3. According to the results of Section 4, the design process of the model (4) structure can have a hierarchical form. This holds for nonlinearities which do not satisfy the condition (2).

Consider the framework ${S}_{ey}$. Designate the center of ${S}_{ey}$ on the set ${J}_{y}=\left\{y\left(t\right)\right\}$ as ${c}_{\text{S}}$, and the center of the area ${D}_{y}$ as ${c}_{{D}_{y}}$.

Theorem 2 [23]. Let on the set U of representative inputs $u\left(t\right)$ of the system (1): i) there exist $\epsilon \ge 0$ such that $\left|{c}_{S}-{c}_{{D}_{y}}\right|\le \epsilon $ ; ii) the condition $\left|\left|{a}^{l}\right|-\left|{a}^{r}\right|\right|\le {\delta}_{h}$ be satisfied. Then the system (1) is ${h}_{{\delta}_{h}}$-identifiable, and the input ${u}_{h}\left(t\right)\in \text{S}$.

Some subset $\left\{{u}_{h,i}\left(t\right)\right\}\subset {\text{U}}_{h}\subseteq \text{U}$ ( $i\ge 1$ ) whose elements have the S-synchronizability property exists. Each ${u}_{h,i}\left(t\right)$ corresponds to the framework ${S}_{ey,i}\left({u}_{h,i}\right)$ with the diameter ${D}_{y,i}$ of its domain. As ${u}_{h,i}\left(t\right)\in \text{S}$, the diameters ${D}_{y,i}$ have the ${d}_{h,\Sigma}$-optimality feature.

Let the hypothetical framework ${S}_{ey}$ of the system (1) have the diameter ${d}_{h,\Sigma}$.

Definition 6. The framework ${S}_{ey,i}$ has the ${d}_{h,\Sigma}$-optimality feature on the set ${\text{U}}_{h}$ if there exists ${\epsilon}_{\Sigma}>0$ such that $\left|{d}_{h,\Sigma}-{D}_{y,i}\right|\le {\epsilon}_{\Sigma}$ $\forall i=\stackrel{\xaf}{1,\#{\text{U}}_{h}}$.

Definition 7. If there exists a subset of inputs $\left\{{u}_{h,i}\left(t\right)\right\}={\text{U}}_{h}\subset \text{U}$ ( $i\ge 1$ ) whose elements ${u}_{h,i}\left(t\right)\in \text{S}$ and frameworks ${S}_{ey,i}\left({u}_{h,i}\right)$ have the ${d}_{h,\Sigma}$-optimality property, then the frameworks ${S}_{ey,i}\left({u}_{h,i}\right)$ are indiscernible on the set $\left\{{u}_{h,i}\left(t\right)\right\}$.

Definitions 6 and 7 show that the ${h}_{{\delta}_{h}}$-identifiability estimation can be obtained on any input $u\left(t\right)\in {\text{U}}_{h}$. An approach to the estimation of the system (1) ${h}_{{\delta}_{h}}$-identifiability was proposed in [23]. The approach is based on the application of an integral indicator for the framework ${S}_{ey}$ analysis and is a development of the results obtained in [21].

Example 2. Consider the system (6). The structure ${S}_{ey}$ is shown in Figure 4. The model approximating ${S}_{ey}$ has the form

${\gamma}_{ey}=0.033y-0.153$, ${r}_{ey}^{2}=0.983$ (13)

where ${\gamma}_{ey}=\stackrel{^}{e}$ is the secant of the framework ${S}_{ey}$ and ${r}_{ey}^{2}$ is the determination coefficient.

The structural identifiability of the SBW-system follows from Theorem 2 with ${\delta}_{h}=0.002$. The SBW-system is S-synchronized, and the model (4) for obtaining ${S}_{ey}$ is *SM*-identifying. The center of the framework ${S}_{ey}$ is ${c}_{\text{S}}=-0.001$, obtained from the analysis of $\text{dom}\left({S}_{ey}\right)$. Modifications of the secants (12) have the form

$\begin{array}{l}{\gamma}_{e}^{l}=0.0313y-0.146,\ {r}_{ye,l}^{2}=0.912,\\ {\gamma}_{e}^{r}=0.032y-0.15,\ {r}_{ye,r}^{2}=0.926.\end{array}$ (14)

The model (13) structurally coincides with the models (14). These results confirm the fulfillment of the condition

$\left|{D}_{y}\left({S}_{ey}\left(u\left(t\right)|{}_{u\in {\text{U}}_{\text{S}}}\right)\right)-{D}_{y}^{ref}\right|\le {\epsilon}_{y}$.

Example 3. Consider a system consisting of a nonlinear actuator and an object. The object has dry and quadratic friction. The actuator is described by a nonlinear function with saturation (system ${S}_{ST}$ )

$\begin{array}{l}\left[\begin{array}{c}{\stackrel{\dot{}}{x}}_{1}\\ {\stackrel{\dot{}}{x}}_{2}\end{array}\right]=\left[\begin{array}{cc}0& 1\\ 0& -1\end{array}\right]\left[\begin{array}{c}{x}_{1}\\ {x}_{2}\end{array}\right]+\left[\begin{array}{c}0\\ -{c}_{1}{\phi}_{1}\left({x}_{2}\right)\end{array}\right]+\left[\begin{array}{c}0\\ c{\phi}_{2}\left(u\right)\end{array}\right],\\ y={x}_{1},\end{array}$

where ${\phi}_{1}\left({x}_{2}\right)={x}_{2}^{2}\text{sign}\left({x}_{2}\right)$ describes the quadratic friction, ${\phi}_{2}\left(u\right)=\text{sat}\left(u\right)$ is the actuator saturation, $x={x}_{1}$ is the rotation angle of the object shaft, *u* is the excitation current of the actuator winding, *y* is the output, ${c}_{1}=2$, $c=1$, $u\left(t\right)=3\mathrm{sin}\left(0.1\pi t\right)$.

The measurement set is ${\text{I}}_{o}=\left\{u\left(t\right),y\left(t\right),t=\left[0,{t}_{k}\right]\right\}$, ${t}_{k}<\infty $.

The frameworks $S,{S}_{ey}$ are presented in Figure 5. Apply the proposed approach to the SI estimation and obtain the structural identifiability of the system ${S}_{ST}$. The conclusion about the nonlinearity structure cannot be based on $S,{S}_{ey}$. The nonlinear input complicates the task. Analysis of the structure ${S}_{ey}$ shows that the input ${\phi}_{2}\left(u\right)$ is constant on the interval ${\stackrel{\xaf}{J}}_{y}=\left[4;8.5\right]$, and the excitation constancy condition does not hold there.

Figure 5 shows that we can set ${\stackrel{^}{\phi}}_{2}\left(u\right)=\text{sat}\left(u\right)$. ${J}_{y}=\left[2;4\right]\vee \left[8.5;10\right]$ is the interval for decision-making about the nonlinearity form. The application of the model (4) (framework ${S}_{ey}$ ) is inefficient here. Therefore, we perform the analysis of the ${\stackrel{\dot{}}{x}}_{2}$ dependence on available variables.

The determination coefficients between ${\stackrel{\dot{}}{x}}_{2}$ and ${x}_{2},y$ are, respectively, ${r}_{{x}_{2}{\stackrel{\dot{}}{x}}_{2}}^{2}=0.995$, ${r}_{y{\stackrel{\dot{}}{x}}_{2}}^{2}=0.916$. We see that there is a relationship between ${\stackrel{\dot{}}{x}}_{2}$ and ${x}_{2}$. Use the hierarchical immersion (HI) method to refine structural relationships. HI allows step-by-step refining of relationships in the system ${S}_{ST}$ and

Figure 5. Frameworks $S,{S}_{ey}$.

gives the final estimate for the nonlinearity. We found that the influence degree of $\left|{x}_{2}\right|{x}_{2}$ on the system properties is 97%. The framework ${S}_{\epsilon ,\left|{x}_{2}\right|{x}_{2}}$ (Figure 6) confirms the properties of the system ${S}_{ST}$.

So, the analysis confirms the possibility of estimating the system ${S}_{ST}$ structure and its identifiability on the interval ${J}_{y}$. The model (4) application depends on the system structure (framework ${S}_{ey}$ ). A general approach to the choice of the model (4) structure does not succeed: the nonlinearity structure depends on the specifics of the system, as this example illustrates. It confirms the versatility and complexity of the considered problem. A system with several nonlinearities requires the development of the proposed approaches.
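One step of the HI-style analysis used in this example, ranking candidate nonlinear terms by their determination coefficient against the reconstructed derivative, can be sketched as follows. The synthetic signals are illustrative, not the ${S}_{ST}$ simulation:

```python
import numpy as np

def r2(target, regressor):
    """Determination coefficient of the one-regressor LSM fit target ~ a + b*regressor."""
    b, a = np.polyfit(regressor, target, 1)
    res = target - (a + b * regressor)
    return 1.0 - np.sum(res ** 2) / np.sum((target - target.mean()) ** 2)

def rank_candidates(target, candidates):
    """Rank candidate nonlinear terms by r^2 against the reconstructed derivative."""
    return sorted(candidates, key=lambda k: r2(target, candidates[k]), reverse=True)

# synthetic data mimicking the S_ST example: dx2/dt driven mainly by -|x2|*x2
t = np.linspace(0, 20, 2001)
x2 = np.sin(0.5 * t) + 0.3 * np.sin(1.7 * t)
dx2 = -2.0 * np.abs(x2) * x2 + 0.1 * np.sin(3.0 * t)   # dominant quadratic friction
ranking = rank_candidates(dx2, {"x2": x2, "|x2|x2": np.abs(x2) * x2, "sin(t)": np.sin(t)})
```

The term with the highest determination coefficient is taken to the next HI layer, mirroring the 97% influence figure found for $\left|{x}_{2}\right|{x}_{2}$ in the example.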

Example 3. System for generating self-oscillations

${\stackrel{\dot{}}{y}}_{1}={y}_{2},$

${\stackrel{\dot{}}{y}}_{2}=-g{y}_{2}+{k}_{0}{y}_{5},$

${\stackrel{\dot{}}{y}}_{3}=-{T}_{1}^{-1}\left({y}_{3}+{f}_{1}\left({y}_{1}\right)\right),$

${\stackrel{\dot{}}{y}}_{4}=-{T}_{2}^{-1}\left({y}_{4}-{k}_{2}{y}_{2}\right),$

${\stackrel{\dot{}}{y}}_{5}=-{T}_{3}^{-1}u-{T}_{3}^{-1}{f}_{3}\left({y}_{3}+{y}_{4}\right),$

where ${\left[{y}_{1},{y}_{2}\right]}^{\text{T}}$ is the state vector of the object; ${y}_{3},{y}_{4}$ are outputs of gauges; ${y}_{5}$ is the output of a linear transducer amplifier with a linear actuator (feedback) (TA); ${f}_{1}(\cdot ),{f}_{3}(\cdot )$ are saturation functions with a dead zone; ${T}_{1},{T}_{2},{T}_{3}$ are time constants of elements; ${k}_{0},{k}_{2}$ are gains; $g>0$. The function ${f}_{i}\left(x\right)$ has the form

${f}_{i}\left(x\right)=\begin{cases}c, & \text{if}\ x\ge {d}_{2,i},\\ 2\left(x-{d}_{1,i}\right), & \text{if}\ {d}_{1,i}<x<{d}_{2,i},\\ 0, & \text{if}\ -{d}_{1,i}\le x\le {d}_{1,i},\\ 2\left(x+{d}_{1,i}\right), & \text{if}\ -{d}_{2,i}<x<-{d}_{1,i},\\ -c, & \text{if}\ x\le -{d}_{2,i},\end{cases}$

Figure 6. Frameworks ${S}_{\epsilon ,\left|{x}_{2}\right|{x}_{2}}$.

where $i=1;3$, $c=2$, ${d}_{1,1}=0.5$, ${d}_{2,1}=1.5$, ${d}_{1,3}=0.25$, ${d}_{2,3}=1.25$.
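The function ${f}_{i}$ is straightforward to implement; a sketch in Python (the branch for $-{d}_{2,i}<x<-{d}_{1,i}$ is written out by symmetry with the positive side):

```python
def f(x, d1, d2, c=2.0):
    """Saturation function with dead zone: zero on [-d1, d1],
    slope 2 on (d1, d2) and (-d2, -d1), saturated at +/-c outside."""
    if x >= d2:
        return c
    if d1 < x < d2:
        return 2.0 * (x - d1)
    if -d1 <= x <= d1:
        return 0.0
    if -d2 < x < -d1:
        return 2.0 * (x + d1)
    return -c  # x <= -d2

# Parameters of the example: f1 uses d_{1,1}=0.5, d_{2,1}=1.5;
# f3 uses d_{1,3}=0.25, d_{2,3}=1.25; c=2 in both cases.
f1 = lambda x: f(x, 0.5, 1.5)
f3 = lambda x: f(x, 0.25, 1.25)
```

Note that with these parameters the function is continuous at $\pm {d}_{2,i}$, since $2\left({d}_{2,i}-{d}_{1,i}\right)=c$.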

Difficulties in SI estimation.

1) The presence of the signal ${y}_{5}\left(t\right)$, which is the actuator output and the object input. ${y}_{5}\left(t\right)$ affects all processes in the system.

2) The indirect effect of variables on each other. It is a fundamental feature of systems with multiple nonlinearities. This feature can level out the influence of some variables on system properties, and estimating this effect is not always possible under uncertainty.

Compensation for these difficulties starts with building a tree of relationships. An example of the relationship tree of ${y}_{1}$, ${y}_{2}$ with other variables is shown in Figure 7. Markers highlight significant relationships that exceed the 80% level. Such a layered tree is obtained for the whole system state vector.

Apply the approach described in Section 4. The analysis showed that the object is described by a linear equation (variables ${y}_{1},{y}_{2}$ ). Variables ${y}_{1}$, ${y}_{5}$ impact the variable ${y}_{3}$ (the amplifier-gauge 1 output), and variables ${y}_{2}$, ${y}_{4}$, ${\stackrel{\dot{}}{y}}_{5}$ are impacted by the variable ${y}_{3}$. The phase portrait of the amplifier-gauge 1 is shown in Figure 8. We see that the amplifier-gauge 1 is nonlinear.

Choose a model similar to (4) and the variables to estimate the nonlinear function. Analyzing the relationships for this element, we obtain the model

${\stackrel{^}{\stackrel{\dot{}}{y}}}_{3}=-0.778{\stackrel{\dot{}}{y}}_{5}-0.0928$, ${r}_{{\stackrel{\dot{}}{y}}_{3},{\stackrel{\dot{}}{y}}_{5}}^{2}=0.69$.

Introduce the error ${\epsilon}_{3}={\stackrel{\dot{}}{y}}_{3}-{\stackrel{^}{\stackrel{\dot{}}{y}}}_{3}$ and the framework ${S}_{{\epsilon}_{3}{y}_{1}}$ described by the function ${\gamma}_{{\epsilon}_{3}{y}_{1}}:{y}_{1}\to {\epsilon}_{3}$ (Figure 9).

We see that the framework ${S}_{{\epsilon}_{3}{y}_{1}}$ is ${h}_{{\delta}_{h}}$ -identifiable. Diameters of the framework ${S}_{{\epsilon}_{3}{y}_{1}}$ are almost equal. ${S}_{{\epsilon}_{3}{y}_{1}}$ has a dead zone in the range [–0.5; 0.5] and growth in the segment [0.5; 1.5]. Therefore, the nonlinearity has the form ${f}_{1}\left(x\right)$.

The next element is an amplifier-gauge 3 with the output ${y}_{4}$. Variables ${y}_{2}$

Figure 7. Layers graph for *y*_{1} and *y*_{2}.

Figure 8. Phase portrait of the first gauge.

Figure 9. ${S}_{{\epsilon}_{3}{y}_{1}}$ -framework.

and ${\stackrel{\dot{}}{y}}_{3}$ influence ${y}_{4}$. ${\stackrel{\dot{}}{y}}_{3}$ reflects the influence of the object variable ${y}_{2}$. The structural analysis showed that the framework ${S}_{{\stackrel{\dot{}}{y}}_{3}{y}_{2}}$ does not contain features, and the ${S}_{ey}$ -analog is an insignificant framework. Therefore, amplifier-gauge 3 does not contain nonlinearities.

Consider the last element with the output ${y}_{5}$. Variables ${y}_{3}$ and ${\stackrel{\dot{}}{y}}_{4}$ impact ${\stackrel{\dot{}}{y}}_{5}$, as do variables ${y}_{2}$, ${\stackrel{\dot{}}{y}}_{3}$. Applying the model (secant) ${\stackrel{^}{\stackrel{\dot{}}{y}}}_{5}={a}_{53}{\stackrel{\dot{}}{y}}_{3}+{b}_{53}$ to the framework ${S}_{{\stackrel{\dot{}}{y}}_{5}{\stackrel{\dot{}}{y}}_{3}}$ and determining the misalignment ${\epsilon}_{5}={\stackrel{\dot{}}{y}}_{5}-{\stackrel{^}{\stackrel{\dot{}}{y}}}_{5}$ gives the framework ${S}_{{\epsilon}_{5}{y}_{4}}$ described by the function ${\gamma}_{{\epsilon}_{5}{y}_{4}}:{y}_{4}\to {\epsilon}_{5}$ (Figure 10). Figure 10 also shows the phase portrait ${S}_{{y}_{5}}$ of this element.

We see (Figure 10) that ${S}_{{\epsilon}_{5}{y}_{4}}$ is zero in the interval (−0.25; 0.25) and has linear growth on [0.25; 1.25], which coincides with ${f}_{3}$. This element is structurally identifiable by ${y}_{4}$, but it is not identifiable by ${y}_{3}$.

Figure 10. Phase portrait ${S}_{{y}_{5}}$ and framework ${S}_{{\epsilon}_{5}{y}_{4}}$.

So, we see that the possibility of structural identifiability of a nonlinear system depends on the interaction of its elements. It is the structural organization of the system that determines whether the structural identifiability problem can be solved.


In the appendix, we state the problem of structural identification on a set of model structures. Next, we introduce the concept of structural identifiability at the set level.

5. System Attractor Reconstruction

Reconstruction (restoration) of the phase portrait (PP) or a system attractor can be performed on the basis of time series analysis. The justification of this approach is given in [33], and practical applications are based on the Wolf and Rosenstein algorithms [34] [35]. This problem can be interpreted as the task of restoring the system structure in the phase space. Many authors (see the reviews in [36] [37] [38] ) have studied this problem. Attractor reconstruction procedures are heuristic [37]. The phase portrait construction depends on the choice of optimal reconstruction parameters. The main parameter is the time delay used to generate new variables from the available time series. Various approaches (see references in [38] [39] ) are applied to this problem: autocorrelation and cross-correlation, the choice of the attractor shape, the nearest-neighbor method, and prediction statistics based on various models. Recommendations on choosing the delay estimation method are not provided, which is explained by the complexity and variety of the considered objects. The second problem is connected with the choice of quality criteria [37] for estimating the PP reconstruction. Unfortunately, this problem has not obtained a final solution; some recommendations for it are given in [37].
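One of the delay-selection heuristics mentioned above, autocorrelation decay, can be sketched as follows (the 1/e threshold is a common convention, not prescribed by the text):

```python
import numpy as np

def delay_by_autocorrelation(x, max_lag=50):
    """Pick the embedding delay as the first lag at which the sample
    autocorrelation drops below 1/e -- one common heuristic among the
    approaches listed in the reconstruction literature."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    for tau in range(1, min(max_lag, x.size - 1)):
        ac = np.dot(x[:-tau], x[tau:]) / denom
        if ac < 1.0 / np.e:
            return tau
    return max_lag  # fallback if the threshold is never crossed

# Example on a pure sine: the delay tracks the signal's time scale.
t = np.linspace(0.0, 20.0 * np.pi, 2000)
tau = delay_by_autocorrelation(np.sin(t))
```

As the text stresses, no single heuristic is universally recommended; this is only one of several candidates to compare on a given series.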

The choice of the attractor dimension [36] [40] [41] is also an important task. An attempt to resolve this problem is made in [36]. In [38], statistics are proposed for the choice of the delay value and the attractor dimension. It is shown that these statistics can be applied to attractor creation for multidimensional systems. The problem of further analysis of the reconstructed attractor is not completed at this stage. As a rule, the designed attractor does not always satisfy the requirements of the researcher: the attractor is not smooth. Therefore, various smoothing methods [36] [42] are applied to obtain a smooth mapping.

Identification of the dynamic system based on the obtained set of state space variables was considered in [36] [37]. This issue is discussed in more detail in the review [43]. Various approximation methods for the operator describing the system state are applied to the model design. The basis of identification methods is interpolation procedures, decomposition of a nonlinear function over a specified basis, the application of spline functions and neural networks, and many other approaches [43].

Remark 4. As noted in [37], none of the considered identification methods is universally efficient. The major role is played by heuristics, the researcher's experience, and prior information. This remark is also true for PP restoration methods [36]. As a rule, the data approximation is first performed on a given class of functions. Then the phase portrait, topologically equivalent to the initial system, is constructed. Next, unknown parameters are introduced into the obtained model to improve the properties of the obtained mapping. For this purpose, various heuristics and procedures are applied to account for additional information about the system. The obtained models are very unwieldy and inconvenient in application. Therefore, it is noted in [36] that the use of complex models is not always justified in practical applications.

6. System Structure Choice with Lag Variables

Models with distributed lags (DL) are widely applied in various areas [44] - [50]. Both independent and dependent variables can have a delay. Accounting for the distributed lag introduces autocorrelation between variables [45] and complicates the parameter identification process. Various schemes of DL parameter approximation [44] [47] are applied to system parameter identification, taking prior information into account. Such an approach reduces the number of estimated system parameters; parametric schemes minimize the number of unknown parameters. The least-squares procedure and its modifications are applied to the parameter estimation. Methods for choosing the maximum lag length are considered; statistics based on the analysis of residuals [45] [48] are the basis of the applied approaches. The Akaike and Bayes information criteria are used for decision-making on the model structure. The identification of the system structure and parameters under uncertainty was not examined.

The choice of a parameter approximation scheme for the model involves labour-consuming calculations under uncertainty. Therefore, the previously considered methods do not apply to DL analysis. Consider the approach^{1} to the DL structure choice based on the analysis of the properties of the framework ${S}_{k,e}^{v}$. The structure estimation of the system with DL is based on the analysis using secants [38].

Further, an estimation method for the DL system structure based on Lyapunov exponent (LE) identification is presented. This method is a development of the approach described in subsection 3.2. The direct transfer of the results [22] [23] to the considered system class is impossible since these systems have their own specifics.

Consider the system

${y}_{n}={A}^{\text{T}}{U}_{n}+{B}^{\text{T}}{X}_{n}+{\xi}_{n}$, (15)

where ${y}_{n}\in R$ is the output; ${U}_{n}\in {R}^{k}$ is the input vector whose elements are bounded, extremally nondegenerate functions; $n\in {J}_{N}=\left[0,N\right]$ is discrete time, $N<\infty $ ; ${X}_{n}\in {R}^{m}$, ${X}_{n}=X\left({u}_{i,n}\in {U}_{n}\right)={\left[{u}_{i,n-1},{u}_{i,n-2},\cdots ,{u}_{i,n-m}\right]}^{\text{T}}$ is the vector of distributed lags on ${u}_{i,n}\in {U}_{n}$ ; $A\in {R}^{k},B\in {R}^{m}$ are constant parameter vectors; ${\xi}_{n}\in R$ is an external disturbance, $\left|{\xi}_{n}\right|<\infty $ for all $n\in {J}_{N}$.

Let the informational set ${\text{I}}_{o}$ for the system (15) contain information on the measured inputs and output on the interval ${J}_{N}$

${\text{I}}_{o}=\left\{{U}_{n},{y}_{n},n\in {J}_{N}\right\}$. (16)

Problem: estimate the dimension of the vector ${X}_{n}$ based on the analysis of the data (16).

Remark 5. Here the case of lags in the input ${U}_{n}$ is considered. If the output ${y}_{n}$ contains lags, the proposed approach allows the DL structure to be estimated in this case as well.

Analyze the effect of ${u}_{j,n}$, $j=\stackrel{\xaf}{1,k}$ on the output ${y}_{n}$. Estimate the determination coefficient ${r}_{{u}_{j},y}^{2}$ for every ${u}_{j,n-1}$. Introduce a number $\delta >0$. Find *j* such that ${r}_{{u}_{j},y}^{2}\ge \delta $ is satisfied and designate $i=j$. So, the element of the vector ${U}_{n}$ is determined. For the lag estimation on ${u}_{i,n}$, form the vector ${\stackrel{\u02dc}{U}}_{n}\in {R}^{m-1}$, which does not contain the element ${u}_{i,n}$, and apply the model

${\stackrel{^}{\stackrel{\u02dc}{y}}}_{n}={\stackrel{\u02dc}{B}}^{\text{T}}{\stackrel{\u02dc}{U}}_{n}$, (17)

where ${\stackrel{\u02dc}{B}}^{\text{T}}\in {R}^{m-1}$ is the parameter vector.
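The least-squares estimation of ${\stackrel{\u02dc}{B}}$ in the model (17) can be sketched as follows (synthetic data; the intercept column and the particular parameter values are assumptions for illustration):

```python
import numpy as np

def fit_reduced_model(U_tilde, y):
    """Least-squares estimate for the model (17) y^ = B~^T U~
    (an intercept column is appended); returns coefficients and R^2."""
    Phi = np.column_stack([U_tilde, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    y_hat = Phi @ coef
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot

# Synthetic illustration: y depends linearly on two retained inputs.
rng = np.random.default_rng(1)
U = rng.standard_normal((60, 2))
y = U @ np.array([3.0, 3.5]) + 7.35 + 0.01 * rng.standard_normal(60)
coef, r2 = fit_reduced_model(U, y)
```

The residual of this fit is the error ${e}_{n}$ analyzed below; a high $R^2$ on the chosen interval is exactly the selection criterion used for ${J}_{\pi ,N}$.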

The system (15) is not dynamic in the standard sense.

Assumption 1. Let the system (15) contain the variable ${\pi}_{n}={u}_{j,n}$ which changes according to the dynamic law

${S}_{\pi}:{\pi}_{n}={\displaystyle \underset{i=1}{\overset{h}{\sum}}{\alpha}_{i}{\pi}_{n-i}}+\kappa {\zeta}_{n}$, (18)

where ${\alpha}_{i}$, $\kappa $ are some numbers, $h<\infty $, ${\zeta}_{n}\in R$ is some limited function for all $n\in {J}_{N}$.

Let the system (18) be stable, *i.e.* ${\alpha}_{i}<1$.

Definition 8. The system (15) has a *π*-steady state (*π*-state) if there exists $j\ge 1$ such that the variable ${u}_{j,n}\in {U}_{n}$ satisfies Equation (18).

Allocate the transient process (the general solution of the system (18)) for the application of LE to the ${S}_{\pi}$ -system. Localize the space in (15) to which the variable ${\pi}_{n}={u}_{j,n}$ belongs. The *π*-steady state eliminates the interval ${J}_{g}={J}_{N}\backslash {J}_{\pi ,N}$, where the interval ${J}_{g}$ corresponds to the *π*-state in the ${S}_{\pi}$ -system.

Consider the set ${\text{I}}_{o}$ (16). Apply the model (17) on the interval ${J}_{\pi ,N}$, where ${J}_{\pi ,N}$ is chosen so that the coefficient of determination between ${\stackrel{^}{\stackrel{\u02dc}{y}}}_{n}$ and ${y}_{n}$ is maximal. Next, calculate the error ${e}_{n}={y}_{n}-{\stackrel{^}{\stackrel{\u02dc}{y}}}_{n}$. Note that the variable ${e}_{n}$ contains information about ${u}_{j,n}$.

Now the analysis is reduced to the study of the properties of a discrete dynamic system with the output ${e}_{n}$. We obtain a system $S{D}_{e}$ that is a prototype of the system (18).

The problem is reduced to LE estimation based on the analysis of the set ${\text{I}}_{e}=\left\{{e}_{n},n\in {J}_{g}\right\}$. This problem is close to the attractor reconstruction problem for a dynamic system from the time set ${\text{I}}_{e}$. We apply the Takens theorem [33] for the phase portrait reconstruction. F. Takens proved that a new series ${e}_{d,n}$, based on lagged values of ${e}_{n}$, provides a solution to the PP reconstruction problem. The obtained series ${e}_{d,n}$ describes the dynamics of the derivative of the variable ${e}_{n}$. Many procedures are proposed for the choice of the delay interval [36]. It is supposed that trajectories of the dynamic system belong to a smooth manifold. Note that the delay interval choice problem does not have a final solution. Heuristic procedures, approximation algorithms, and time series smoothing are often used in practical applications; a priori information is important. After obtaining the set $\left\{{e}_{n},{e}_{d,n},n=\stackrel{\xaf}{1,{n}_{k}}\right\}$, the phase portrait design problem is solved. This problem is also nontrivial [36] [37] [40].
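Forming the lagged series ${e}_{d,n}$ and the pairs $\left({e}_{n},{e}_{d,n}\right)$ is a one-line operation; a sketch (the test signal is an assumption for illustration):

```python
import numpy as np

def delay_series(e, d):
    """Form the lagged series e_{d,n} = e_{n-d} and return the pairs
    (e_n, e_{d,n}) used to reconstruct the phase portrait, in the
    spirit of the Takens delay embedding."""
    e = np.asarray(e, dtype=float)
    return np.column_stack([e[d:], e[:-d]])

e = np.sin(np.linspace(0.0, 6.0 * np.pi, 300))  # stand-in for e_n
pairs = delay_series(e, d=5)                    # shape (295, 2)
```

Plotting `pairs[:, 0]` against `pairs[:, 1]` gives the reconstructed portrait; the delay `d` must still be chosen by one of the heuristics discussed in Section 5.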

Remark 6. Smoothing algorithms are widely used in the attractor reconstruction problem. The application of smoothing procedures to the elements of the set ${\text{I}}_{e}$ can lead to the loss of valuable information in the LE identification for the system (15). Residual errors caused by the disturbance ${\xi}_{n}$ in (15) impact the properties of the LE estimations.

Use the formula (8) for the calculation of Lyapunov exponents. Considering Remark 6, we detect LE but not their values.

Consider analogues of frameworks ${S}_{{k}_{s,\rho}}$, $S{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$ and $LS{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$, defined at ${t}_{n}=n\tau $, where $\tau $ is the data measurement interval. Introduce the discrete analogue of the function (11)

${b}_{n}=\begin{cases}1, & \text{if}\ \Delta {{k}^{\prime}}_{s,n}\ge 0,\\ -1, & \text{if}\ \Delta {{k}^{\prime}}_{s,n}<0,\end{cases}$ (19)

where ${b}_{n}=b\left(n\tau \right)$, $\Delta {{k}^{\prime}}_{s,n}=\Delta {{k}^{\prime}}_{s}\left(n\tau \right)$.

Theorem 3 [22]. If the function ${b}_{n}$ changes sign *h* times on the interval $\left[{t}_{0},{t}^{*}\right]\subset {\stackrel{\xaf}{J}}_{g}$ $\left({t}^{*}\le \stackrel{\xaf}{t}\right)$, then the system (18) has the order *h*.
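Counting the sign changes of ${b}_{n}$, as Theorem 3 prescribes, can be sketched as follows (the toy sequence of $\Delta {{k}^{\prime}}_{s,n}$ values is an assumption for illustration):

```python
import numpy as np

def order_estimate(dk):
    """Apply (19): map delta-k'_{s,n} to b_n in {+1, -1}, then count
    the sign changes of b_n; by Theorem 3 this count estimates the
    order h of the system (18)."""
    b = np.where(np.asarray(dk, dtype=float) >= 0.0, 1, -1)
    return int(np.count_nonzero(b[1:] != b[:-1]))

# Toy sequence crossing zero twice -> estimated order 2.
dk = [0.4, 0.1, -0.2, -0.3, 0.05, 0.2]
h = order_estimate(dk)  # 2
```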

In [22], it is shown that if the conditions of Theorem 3 are satisfied, then the local minima of the framework $S{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$ correspond to the LE estimations of the system (18) in the space $\left({k}_{s,\rho},\Delta {{k}^{\prime}}_{s,\rho}\right)$.

Theorem 4. If the conditions of Theorem 3 are satisfied and the framework $S{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$ described by the function ${\Gamma}_{\Delta {{k}^{\prime}}_{s,\rho}}:{\text{I}}_{{k}_{s,\rho}}\to {\text{I}}_{\Delta {{k}^{\prime}}_{s,\rho}}$ has local minima on the plane $\left({k}_{s,\rho},\Delta {{k}^{\prime}}_{s,\rho}\right)$, then the ${S}_{\pi}$ -system has the *π*-state.

The proof of Theorem 4 is obvious. The number of local minima corresponds to the lag structure of the system (15) on the variable ${u}_{j,n}$.

So, we have shown a modification of the discrete informational set ${\text{I}}_{o}$ based on the approach of [22]. This modification allows extending the methodology of geometrical framework application to systems with distributed lags in input variables.

Consider the identifiability problem for Lyapunov exponents. Let the vector ${U}_{n}$ be bounded and constantly excited

$P{E}_{\alpha}:{U}_{n}{U}_{n}^{\text{T}}\ge \alpha {I}_{k}$ (20)

for some $\alpha >0$ and $\forall n\ge 0$ on the interval ${J}_{N}$, where ${I}_{k}\in {R}^{k\times k}$ is the identity matrix.
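Condition (20) can be checked numerically. Note that a single rank-one term ${U}_{n}{U}_{n}^{\text{T}}$ cannot dominate $\alpha {I}_{k}$ for $k>1$, so the sketch below uses the common windowed variant of the test (the window length is an assumption):

```python
import numpy as np

def is_pe(U, alpha, window=10):
    """Windowed form of the constant excitation test: the Gram matrix
    of every sliding window of inputs must dominate alpha * I, i.e.
    its smallest eigenvalue must stay above alpha."""
    U = np.asarray(U, dtype=float)          # shape (N, k)
    for i in range(U.shape[0] - window + 1):
        G = U[i:i + window].T @ U[i:i + window]
        if np.linalg.eigvalsh(G).min() < alpha:
            return False
    return True

rng = np.random.default_rng(2)
U_rich = rng.standard_normal((60, 3))       # rich input: PE holds
U_flat = np.ones((60, 3))                   # rank-one input: PE fails
```

The flat input fails because its windowed Gram matrix has rank one, exactly the degeneracy the condition $P{E}_{\alpha}$ excludes.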

If (20) is satisfied, then we write ${U}_{n}\in P{E}_{\alpha}$. As shown in Section 4, the fulfilment of (20) is sufficient for the ${S}_{\pi}$ -system *π*-state estimation. Let ${\stackrel{\u02dc}{U}}_{n}={U}_{n}\backslash {u}_{i,n}$ and ${\stackrel{\u02dc}{U}}_{n}\in P{E}_{\stackrel{\xaf}{\alpha}}$, $\stackrel{\xaf}{\alpha}>0$. The ${S}_{\pi}$ -system with the *π*-state corresponds to the system (15).

Let the framework $LS{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$ and the function ${b}_{n}$, which changes sign *h* times on the interval $\left[{t}_{0},{t}^{*}\right]\subset {\stackrel{\xaf}{J}}_{g}$, exist. Then the system has *h* Lyapunov exponents. Therefore, the ${S}_{\pi}$ -system is identifiable on the LE set ${M}_{{S}_{\pi}}$. So, the following theorem is true.

Theorem 5. Let i) the vector ${U}_{n}$ of the system (15) have the property ${U}_{n}\in P{E}_{\alpha}$ ; ii) the vector ${\stackrel{\u02dc}{B}}^{\text{T}}\in {R}^{m-1}$ of the model (17) be identified with ${\stackrel{\u02dc}{U}}_{n}\in P{E}_{\stackrel{\xaf}{\alpha}}$ ; iii) the framework $LS{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$ and the function ${b}_{n}$ (19) satisfying the conditions of Theorem 3 exist; iv) the ${S}_{\pi}$ -system have the *π*-state (Definition 8). Then the dynamic ${S}_{\pi}$ -system (18) corresponding to the system (15) is identifiable on the Lyapunov exponent set.

Example 4. Consider the system with $k=3$ and $h=2$, $A={\left[0.7;3;3.5\right]}^{\text{T}}$, $B={\left[0.4;0.45\right]}^{\text{T}}$, ${X}_{n}={\left[{u}_{1,n-1},{u}_{1,n-2}\right]}^{\text{T}}$. ${u}_{1,n}$ is obtained as the output of the system (18) with the input ${\zeta}_{n}$, distributed according to the normal law with zero mean and finite variance. The set ${\text{I}}_{o}$ (16) is generated for $n\in \left[1;60\right]$. The analysis of the set ${\text{I}}_{o}$ has shown that the variable ${u}_{1,n}$ has lags. Time series ${\left\{{e}_{n}\right\}}_{n=\stackrel{\xaf}{1;60}}$, ${\left\{{e}_{d,n}\right\}}_{n=\stackrel{\xaf}{1;60}}$ are formed. Apply the model (17)

${\stackrel{^}{\stackrel{\u02dc}{y}}}_{n}=\left[3;3.52\right]{\left[{u}_{2,n};{u}_{3,n}\right]}^{\text{T}}+7.35$, (21)

which is obtained on the basis of LSM for $n\in \left[30;60\right]$. The determination coefficient of the model (21) is 0.99.
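Example 4 can be reproduced schematically. The sketch below assumes specific coefficients for the law (18) and specific shapes for ${u}_{2},{u}_{3}$ (none of these are given in the text); it only illustrates how the residual ${e}_{n}$ of the reduced model keeps the lag information of ${u}_{1,n}$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 60

# u1 follows a stable second-order law of type (18); the coefficients
# 0.5, 0.3 and the noise gain 0.2 are assumptions for illustration.
u1 = np.zeros(N)
zeta = rng.standard_normal(N)
for n in range(2, N):
    u1[n] = 0.5 * u1[n - 1] + 0.3 * u1[n - 2] + 0.2 * zeta[n]

# Exogenous bounded inputs u2, u3 (assumed shapes).
u2 = np.sin(0.3 * np.arange(N))
u3 = np.cos(0.2 * np.arange(N))

# System (15) with k = 3, h = 2: A = [0.7; 3; 3.5], B = [0.4; 0.45],
# distributed lags on u1, as in Example 4 (no disturbance).
y = np.zeros(N)
y[2:] = (0.7 * u1[2:] + 3.0 * u2[2:] + 3.5 * u3[2:]
         + 0.4 * u1[1:-1] + 0.45 * u1[:-2])

# Reduced model (17) on (u2, u3) only: the residual e_n carries the
# information about u1 and its lags, as the text describes.
Phi = np.column_stack([u2[2:], u3[2:], np.ones(N - 2)])
coef, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
e = y[2:] - Phi @ coef
```

The series `e` is then lagged to form ${e}_{d,n}$ and analyzed via the frameworks ${S}_{{k}_{s,\rho}}$, $LS{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$ as above.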

The phase portrait of the system (15) and its smoothed analogue (variable ${e}_{d,n}^{sm}$ ) are shown in Figure 11.

Figure 11 shows that processes in the ${S}_{\pi}$ -system are non-smooth. Results of the lag structure estimation are presented in Figure 12, where the frameworks ${S}_{{k}_{s,\rho}}$ and $LS{K}_{\Delta {{k}^{\prime}}_{s,\rho}}$ are shown. Designations in Figure 12: ${\mu}_{1},{\mu}_{2}$ are estimations of Lyapunov exponents; ${k}_{s}$ are calculated on the basis of (8)

${k}_{s,e,n}=\frac{\rho \left({e}_{n}\right)}{n\tau}$, ${k}_{s,{e}_{d},n}=\frac{\rho \left({e}_{d,n}\right)}{n\tau}$.

The set ${M}_{{S}_{\pi}}$ of Lyapunov exponents is shown in Figure 12. The analysis of the results shows that the system (18) describing the change of ${u}_{1,n}$ has the order 2.

Example 5. Consider the control system for supplying cars to the Vladivostok transport hub (Russia). Study the case of the simultaneous transfer of 6 cars from railway tracks to berth tracks. The maximum capacity of the hub is 175 cars. Let ${N}_{4}$ be the number of cars from the railway and ${N}_{5}$ the number of cars received

Figure 11. System (15) phase portrait of the with $k=3$ and $h=2$.

Figure 12. Lyapunov exponents estimations.

on the railway lines of the port. Determine $\omega ={N}_{5}-{N}_{4}$. The variable $\omega $ reflects the current status of the hub and influences the car supply process. The mathematical model for decision-making has the form

${\stackrel{^}{N}}_{5,n}=f\left({\stackrel{^}{N}}_{5,n-1},{N}_{4,n},{\omega}_{n}\right)$, (22)

where ${\stackrel{^}{N}}_{5,n}$ is the model output at the instant *n*. The model (22) structure is described by a first-order autoregressive equation. Apply the approach stated above and evaluate the impact of $\omega $. The system (18) has the first order with respect to $\omega $. Apply the algorithms from Section 3.2 and estimate the autoregression order. The model (22) takes the form

${\stackrel{^}{N}}_{5,n}=1.06{\stackrel{^}{N}}_{5,n-1}-0.13{\omega}_{n-1}-0.08{N}_{4,n}-4.59$. (23)

The determination coefficient of the model (23) is 0.964. The simulation showed good predictive properties of the model (23).
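The identified model (23) gives a one-step predictor; a sketch with hypothetical hub-state values (the inputs below are assumptions, not data from the study):

```python
def predict_n5(n5_prev, omega_prev, n4_now):
    """One-step prediction by the identified model (23)."""
    return 1.06 * n5_prev - 0.13 * omega_prev - 0.08 * n4_now - 4.59

# Hypothetical state: 120 cars on port tracks, imbalance omega = 5,
# 6 cars arriving from the railway (values assumed for illustration).
n5_next = predict_n5(120.0, 5.0, 6.0)  # approximately 121.48
```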

So, the modelling results confirm the efficiency of the approach to the structure estimation of the system (15).

7. Conclusions

The analysis of the application of the “framework” concept in identification problems has been performed. It is shown that this concept is widely used in parametric estimation problems. The term “framework” can be interpreted as a frame, a structure, a system, a platform, a concept, a basis, or a system of approaches. The framework is used in two directions: 1) as a conceptual idea integrating a number of methods, approaches, or procedures; 2) as a mapping describing in a generalized form the processes and properties of the system. The second direction is closer to methods applied in the qualitative theory of dynamic systems. In this work, this approach is interpreted as a methodology based on the analysis of virtual geometrical frameworks (GF). The main difference of GFs is that they are not postulated a priori but are determined from experimental data processing. GFs are the main object of analysis; they allow decision-making about the properties and features of the system. The review covers the identification theory areas where this methodology is applicable:

1) Structural identification of the nonlinear system.

2) Lyapunov exponent estimation of the system.

3) Structural identifiability of the nonlinear system.

4) The system phase portrait reconstruction on the time series.

5) The system structure estimation with lag variables.

We have also considered the application of Lyapunov exponents for decision-making on the structure of static systems with lag variables.

Appendix. Structural Identifiability at Structures Set of System Operator

Consider the system

$\begin{array}{l}\stackrel{\dot{}}{X}\left(t\right)=F\left(X,A,t\right)+Bu\left(t\right),\\ y\left(t\right)={C}^{\text{T}}X\left(t\right)+\xi \left(t\right),\end{array}$ (A.1)

where $X\in {R}^{m}$ is the state vector, $F:{R}^{m}\times {R}^{k}\times J\to {R}^{m}$ is a smooth continuously differentiable vector function, $y\in R$ is the output, $u\in R$ is the input, $A\in {R}^{k}$ is the parameter vector, $B\in {R}^{m}$, $\xi \in R$ is a piecewise continuous bounded perturbation.

A priori information

${\text{I}}_{a}\left(X,{\text{S}}_{S},{\text{G}}_{S},u,\xi \right)\subset {\text{S}}_{S}\cup {\text{G}}_{S}\cup {\text{I}}_{a}^{X}\cup {\text{I}}_{a}^{u}\cup {\text{I}}_{a}^{\xi}$, (A.2)

is a set containing the available information about the structure of the vector function $F\in {\text{S}}_{S}$, the parameters $\left(A,B\right)\subset {\text{G}}_{S}$, and characteristics of the input, output, and perturbation.

The set ${\text{S}}_{S}$ can contain information about the class of operators describing the system (A.1) dynamics and some of its structural parameters ${\text{A}}_{S}$. The cardinality of the set ${\text{A}}_{S}$ is determined by the level of a priori information. In identification problems, the formation of ${\text{S}}_{S}$ and ${\text{A}}_{S}$ is based on the researcher's intuition. The experimental information has the form

${\text{I}}_{o}=\left\{u\left(t\right),y\left(t\right),t\in J=\left[{t}_{0},{t}_{k}\right]\right\}$. (A.3)

Let operator ${\stackrel{^}{F}}_{i}(\cdot )\in {R}^{m}$ be a contender for forming the structure of the vector function $F\left(X,A,t\right)$ in (A.1). We suppose that ${\stackrel{^}{F}}_{i}(\cdot )\in {\text{S}}_{S}$ is parameterized up to the pair $\left({\stackrel{^}{A}}_{i},{\stackrel{^}{B}}_{i}\right)\in {\text{A}}_{S}\subset {\text{S}}_{S}$. Apply the model

$\stackrel{\dot{}}{\stackrel{^}{X}}\left(t\right)=\stackrel{^}{F}\left(\stackrel{^}{X},\stackrel{^}{A},t\right)+\stackrel{^}{B}u\left(t\right)$. (A.4)

Problem: based on the a priori information ${\text{I}}_{a}$, the experimental information ${\text{I}}_{o}$, and parametric identification, estimate the structure of the vector function *F* in (A.1) so as to minimize the cardinality of the set ${\text{S}}_{S}$

$\mathrm{arg}\underset{\stackrel{^}{F}\in {\text{S}}_{S}}{\mathrm{min}}\text{\hspace{0.17em}}\#{\text{S}}_{S}={F}^{*}$. (A.5)

The fulfillment of (A.5) is equivalent to the following condition

$\mathrm{arg}\underset{\left(\stackrel{^}{A},\stackrel{^}{B}\right)\in {\text{A}}_{S}}{\mathrm{min}}\#{\text{A}}_{S}=\left({A}^{*},{B}^{*}\right)$. (A.6)

We do not specify the class of parametric identification methods since their form depends on the elements of the set ${\text{S}}_{S}$. The choice of the identification criterion $\#{\text{S}}_{S}$ reflects the singularity and complexity of the problem.

Let there be a pair $\left({A}^{*},{B}^{*}\right)\in {\text{A}}_{S}^{*}\subset {\text{S}}_{S}^{*}\subseteq {\text{S}}_{S}$ that satisfies the condition (A.6), and a pair $\left({\stackrel{^}{A}}_{i},{\stackrel{^}{B}}_{i}\right)\in {\stackrel{^}{A}}_{S}\subset {\stackrel{^}{S}}_{S}\subset {\text{S}}_{S}$.

Definition A1. System (A.1) is locally parametrically identifiable on the set ${\text{A}}_{S}\subset {\text{S}}_{S}$ if there exist structures $\stackrel{^}{F}\left({\stackrel{^}{A}}_{S}\right)\in {\stackrel{^}{S}}_{S}\subseteq {\text{S}}_{S}$, ${F}^{*}\left({\text{A}}_{S}^{*}\right)\in {\text{S}}_{S}^{*}\subseteq {\text{S}}_{S}$ such that $\left|\#{\stackrel{^}{A}}_{S}-\#{\text{A}}_{S}^{*}\right|\le {\epsilon}_{A}$, where ${\epsilon}_{A}\ge 0$

$\stackrel{^}{F}\left({\stackrel{^}{A}}_{S}\right)=F\left\{{\stackrel{^}{F}}_{i}\in {\stackrel{^}{S}}_{S}\subseteq {\text{S}}_{S}:{\stackrel{^}{F}}_{i}\left({\stackrel{^}{A}}_{i}\in {\stackrel{^}{A}}_{S}\right),i=\stackrel{\xaf}{1,\#{\stackrel{^}{A}}_{S}}\right\}$,

${F}^{*}\left({\text{A}}_{S}^{*}\right)=F\left\{{F}_{i}^{*}\in {\text{S}}_{S}^{*}\subseteq {\text{S}}_{S}:{F}_{i}^{*}\left({A}_{i}\in {\text{A}}_{S}^{*}\right),i=\stackrel{\xaf}{1,\#{\text{A}}_{S}^{*}}\right\}$.

From Definition A1, we obtain

$\left|\#{\stackrel{^}{A}}_{S}-\#{\text{A}}_{S}^{*}\right|\le {\epsilon}_{A}\Rightarrow \left|\#{\stackrel{^}{S}}_{S}-\#{\text{S}}_{S}^{*}\right|\le {\epsilon}_{S}$,

where ${\epsilon}_{S}\ge 0$.

Definition A2. System (A.1) is called structurally identifiable on the set ${\text{A}}_{S}\subset {\text{S}}_{S}$ if structures $\stackrel{^}{F}\left({\stackrel{^}{A}}_{S}\right)\in {\stackrel{^}{S}}_{S}\subseteq {\text{S}}_{S}$, ${F}^{*}\left({\text{A}}_{S}^{*}\right)\in {\text{S}}_{S}^{*}\subseteq {\text{S}}_{S}$ exist such that $\#{\stackrel{^}{S}}_{S}=\#{\text{S}}_{S}^{*}$ and

$\left(\#{\stackrel{^}{S}}_{S}=\#{\text{S}}_{S}^{*}\right)\Rightarrow \left(\#{\stackrel{^}{A}}_{S}=\#{\text{A}}_{S}^{*}\right)$. (A.7)

(A.7) gives the condition for the global parametric identifiability of the system (A.1) for a specified a priori information ${\text{I}}_{a}$ on the set ${\text{S}}_{S}$.

Let a set of structures ${\mathfrak{M}}_{S}=\left\{{S}_{ey,i},i=\stackrel{\xaf}{1,\#{\text{S}}_{S}}\right\}\subset {\text{S}}_{S}$ be specified, describing the nonlinear properties of the system (A.1) for the candidates ${\stackrel{^}{F}}_{i}\left({A}_{i}\in {\stackrel{^}{A}}_{S}\right)\in {\text{S}}_{S}$. Let the class of inputs ${U}_{S\text{,}h}=\left\{{u}_{i}\in {\text{U}}_{h}\subset \text{U}:{u}_{i}\left(t\right)\in P{E}_{{\alpha}_{i}},i=\stackrel{\xaf}{1,\#{\text{U}}_{h}}\right\}$ exist, where $P{E}_{\alpha}$ is the property of constant excitation and ${\text{U}}_{\text{S}}$ is the set of inputs which S-synchronize the system (A.1).

Let elements of the structure subset ${\mathfrak{M}}_{S\text{,}d}\subset {\mathfrak{M}}_{S}$ have the property of ${\text{d}}_{h,y}$ -optimality.

Definition A3. Structures ${S}_{ey,i}\in {\mathfrak{M}}_{S\text{,}d}$ defined on the input class ${U}_{S\text{,}h}$ and having the property of ${d}_{h,y}$ -optimality are structurally indistinguishable on the set $\left\{{u}_{h,i}\left(t\right)\right\}={\text{U}}_{h}$.

It follows that the system (A.1) is structurally identifiable on the structure set ${\text{S}}_{S}$, on which the subset ${\mathfrak{M}}_{S\text{,}d}$ with elements ${S}_{ey,i}$ is defined, for any ${u}_{i}\in {U}_{S\text{,}h}$.
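A small sketch makes Definition A3 concrete (the candidate maps, the multi-sine input, and all names here are our illustrative assumptions, not the paper's): two candidate structures that produce identical outputs on a persistently exciting input cannot be separated on that input class, and are therefore structurally indistinguishable on it.

```python
import math

# Illustrative sketch of structural indistinguishability (Definition A3).
# The candidate maps f1, f2 and the multi-sine test input are our assumptions.

def pe_input(t):
    """Multi-sine signal: a common way to realize persistent excitation."""
    return math.sin(t) + 0.5 * math.sin(3.0 * t)

def f1(u):
    """Candidate structure 1: cubic nonlinearity written as a sum."""
    return u + u ** 3

def f2(u):
    """Candidate structure 2: the same map in a factored parameterization."""
    return u * (1.0 + u ** 2)

# Compare the two candidates along the excited trajectory.
us = [pe_input(0.01 * k) for k in range(1000)]
indistinguishable = all(abs(f1(u) - f2(u)) < 1e-12 for u in us)
print(indistinguishable)  # True: f1 and f2 cannot be separated on this input
```

Here the indistinguishability is exact because f1 and f2 are the same map; in practice the tolerance would reflect noise and numerical error.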

Definition A4. A system (A.1) that is parametrically identifiable on the set ${\text{A}}_{S}\subset {\text{S}}_{S}$, has a structure ${S}_{ey,i}\in {\mathfrak{M}}_{S\text{,}d}$, and is excited by an input ${u}_{h,i}\in {U}_{S\text{,}h}$ is structurally identified on the set ${\text{S}}_{S}$.

We have presented the concept of structural identifiability at the level of sets.

NOTES

^{1}Karabutov, N.N. (2012) Structural Identification of Static Systems with Distributed Lags. *International Journal of Control Science and Engineering*, 2, 136-142. DOI: 10.5923/j.control.20120206.01.

References

[1] Saha, B., Goebel, K., Poll, S. and Christopherson, J. (2009) Prognostics Methods for Battery Health Monitoring Using a Bayesian Framework. IEEE Transactions on Instrumentation and Measurement, 58, 291-296.

https://doi.org/10.1109/TIM.2008.2005965

[2] Pillonetto, G., Dinuzzo, F., Chen, T., De Nicolao, G. and Ljung, L. (2014) Kernel Methods in System Identification, Machine Learning and Function Estimation: A Survey. Automatica, 50, 657-682.

https://doi.org/10.1016/j.automatica.2014.01.001

[3] Toth, R., Sanandaji, B.M., Poolla, K. and Vincent, T.L. (2012) Compressive System Identification in the Linear Time-Invariant Framework. 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, 12-15 December 2011, 783-790.

[4] Pintelon, R. and Schoukens, J. (2012) System Identification. A Frequency Domain Approach. 2nd Edition, John Wiley & Sons, Inc., Hoboken.

https://doi.org/10.1002/9781118287422

[5] Varna, A.L., Swaminathan, A. and Wu, M. (2008) A Decision Theoretic Framework for Analyzing Binary Hash-based Content Identification. Proceedings of the 8th ACM Workshop on Digital Rights Management, Alexandria, October 2008, 67-76.

https://doi.org/10.1145/1456520.1456532

[6] Lauer, F., Bloch, G. and Vidal, R. (2011) A Continuous Optimization Framework for Hybrid System Identification. Automatica, 47, 608-613.

https://doi.org/10.1016/j.automatica.2011.01.020

[7] Iqbal, A., Ullah, I., Saeed, M.A. and Husen, A. (2019) A Classification Framework to Detect DoS Attacks. International Journal of Computer Network and Information Security, 11, 40-47.

https://doi.org/10.5815/ijcnis.2019.09.05

[8] Abdallah, H.M., Taha, A. and Selim, M.M. (2019) Cloud-Based Framework for Efficient Storage of Unstructured Patient Health Records. International Journal of Computer Network and Information Security, 11, 10-21.

https://doi.org/10.5815/ijcnis.2019.06.02

[9] Chen, L. and Narendra, K.S. (2004) Identification and Control of a Nonlinear Discrete-Time System Based on its Linearization: A Unified Framework. IEEE Transactions on Neural Networks, 15, 663-673.

https://doi.org/10.1109/TNN.2004.826206

[10] Kurtoglu, T. and Tumer, I.Y. (2008) A Graph-Based Fault Identification and Propagation Framework for Functional Design of Complex Systems. Journal of Mechanical Design, 130, Article ID: 051401.

https://doi.org/10.1115/1.2885181

[11] Papadopoulos, P.N., Guo, T. and Milanović, J.V. (2018) Probabilistic Framework for Online Identification of Dynamic Behavior of Power Systems with Renewable Generation. IEEE Transactions on Power Systems, 33, 45-54.

https://doi.org/10.1109/TPWRS.2017.2688446

[12] Roettgen, D., Allen, M.S., Kammer, D. and Mayes, R.L. (2017) Substructuring of a Nonlinear Beam Using a Modal Iwan Framework, Part I: Nonlinear Modal Model Identification. In Allen, M., Mayes, R. and Rixen, D., Eds., Dynamics of Coupled Structures, Vol. 4, Springer, Cham, 165-178.

https://doi.org/10.1007/978-3-319-54930-9_15

[13] Carino, J.A., Delgado-Prieto, M., Iglesias, J.A. and Sanchis, A. (2018) Fault Detection and Identification Methodology under an Incremental Learning Framework Applied to Industrial Machinery. IEEE Access, 6, 49755-49766.

https://doi.org/10.1109/ACCESS.2018.2868430

[14] Teh, C.Y., Kerk, Y.W., Tay, K.M. and Lim, C.P. (2018) On Modeling of Data-Driven Monotone Zero-Order TSK Fuzzy Inference Systems Using a System Identification Framework. IEEE Transactions on Fuzzy Systems, 26, 3860-3874.

https://doi.org/10.1109/TFUZZ.2018.2851258

[15] Nagarajaiah, S. (2017) Sparse and Low-Rank Methods in Structural System Identification and Monitoring. Procedia Engineering, 199, 62-69.

https://doi.org/10.1016/j.proeng.2017.09.153

[16] Wiggins, S. (2003) Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer Science & Business Media, Berlin.

[17] Shilnikov, L.P., Shilnikov, A.L., Turaev, D.V. and Chua, L.O. (2001) Methods of Qualitative Theory in Nonlinear Dynamics (Part II). World Scientific, Hackensack.

https://doi.org/10.1142/4221

[18] Michel, A., Wang, K. and Hu, B. (2001) Qualitative Theory of Dynamical Systems. CRC Press, Boca Raton.

https://doi.org/10.1201/9780203908297

[19] Pershin, Y.V. and Slipko, V.A. (2018) Dynamical Attractors of Memristors and Their Networks. Europhysics Letters, 125, 20002.

https://doi.org/10.1209/0295-5075/125/20002

[20] Lu, J. and Zhang, S. (2001) Controlling Chen’s Chaotic Attractor Using Backstepping Design Based on Parameters Identification. Physics Letters A, 286, 148-152.

https://doi.org/10.1016/S0375-9601(01)00383-8

[21] Li, C., Min, F. and Li, C. (2018) Multiple Coexisting Attractors of the Serial-Parallel Memristor-Based Chaotic System and Its Adaptive Generalized Synchronization. Nonlinear Dynamics, 94, 2785-2806.

https://doi.org/10.1007/s11071-018-4524-3

[22] Karabutov, N. (2017) Structural Methods of Design Identification Systems. In: Uvarova, L., Nadykto, A.B. and Latyshev, A.V., Eds., Nonlinearity: Problems, Solutions and Applications, Vol. 1, Nova Science Publishers Inc., New York, 233-274.

[23] Karabutov, N. (2018) Frameworks in Identification Problems: Design and Analysis. URSS/Lenand, Moscow.

[24] Achho, L. (2013) Hysteresis Modeling and Synchronization of a Class of RC-OTA Hysteretic-Jounce-Chaotic Oscillators. Universal Journal of Applied Mathematics, 1, 82-85.

https://doi.org/10.13189/ujam.2013.010207

[25] Gao, H., Jézéquel, L., Cabrol, E. and Vitry, B. (2019) Characterization of a Bouc-Wen Model-Based Damper Model for Automobile Comfort Simulation. Surveillance, Vishno and AVE Conferences, Lyon, Jul 2019, hal-02188563.

[26] Karabutov, N. (2015) Structural Methods of Estimation Lyapunov Exponents Linear Dynamic System. International Journal of Intelligent Systems and Applications, 7, 1-11.

https://doi.org/10.5815/ijisa.2015.10.01

[27] Karabutov, N. (2018) About Structural Identifiability of Nonlinear Dynamic Systems under Uncertainty. International Journal of Intelligent Systems and Applications, 18, 51-61.

https://doi.org/10.5815/ijisa.2020.01.02

[28] Bylov, F., Vinograd, R.E., Grobman, D.M. and Nemytskii, V.V. (1966) Theory of Lyapunov Exponents and Its Application to Problems of Stability. Nauka, Moscow.

[29] Karabutov, N. (2018) About Lyapunov Exponents Identification for Systems with Periodic Coefficients. International Journal of Intelligent Systems and Applications, 10, 1-10.

https://doi.org/10.5815/ijisa.2018.11.01

[30] Bodunov, N.A. (2012) Introduction to the Theory of Local Parametrical Identifiability. Differential Equations and Control Processes.

[31] Saccomani, M.P. (2013) Structural vs Practical Identifiability in System Biology. 2013 International Work-Conference on Bioinformatics and Biomedical Engineering, Granada, 18-20 March 2013, 305-313.

[32] Chis, O.-T., Banga, J.R. and Balsa-Canto, E. (2011) Structural Identifiability of Systems Biology Models: A Critical Comparison of Methods. PLoS ONE, 6, e27755.

https://doi.org/10.1371/journal.pone.0027755

[33] Stigter, J.D. and Peeters, R.L.M. (2007) On a Geometric Approach to the Structural Identifiability Problem and Its Application in a Water Quality Case Study. 2007 European Control Conference, Kos, 2-5 July 2007, 3450-3456.

https://doi.org/10.23919/ECC.2007.7068560

[34] Krasovsky, A.A. (1987) Reference Book on Automatic Control Theory. Nauka, Moscow.

[35] Takens, F. (1980) Detecting Strange Attractors in Turbulence. In: Rand, D.A. and Young, L.-S., Eds., Dynamical Systems and Turbulence, Warwick 1980, Vol. 898, Springer-Verlag, Berlin, Heidelberg, 366-381.

https://doi.org/10.1007/BFb0091924

[36] Wolf, A., Swift, J.B., Swinney, H.L. and Vastano, J.A. (1985) Determining Lyapunov Exponents from a Time Series. Physica D: Nonlinear Phenomena, 16, 285-301.

https://doi.org/10.1016/0167-2789(85)90011-9

[37] Rosenstein, M.T., Collins, J.J. and De Luca, C.J. (1993) A Practical Method for Calculating Largest Lyapunov Exponents from Small Data Sets Source. Physica D: Nonlinear Phenomena, 65, 117-134.

https://doi.org/10.1016/0167-2789(93)90009-P

[38] Anishchenko, V.S., Astakhov, V., Neiman, A., Vadivasova, T. and Schimansky-Geier, L. (2007) Nonlinear Dynamics of Chaotic and Stochastic Systems: Tutorial and Modern Developments. 2nd Edition, Springer, Berlin, Heidelberg.

https://doi.org/10.1007/978-3-540-38168-6

[39] Malinetskiy, G.G. and Potapov, A.B. (2000) Modern Problems of Nonlinear Dynamics. Editorial URSS, Moscow.

[40] Pecora, L.M., Moniz, L., Nichols, J. and Carroll, T.L. (2007) A Unified Approach to Attractor Reconstruction. Chaos, 17, Article ID: 013110.

https://doi.org/10.1063/1.2430294

[41] Bradley, E. and Kantz, H. (2015) Nonlinear Time-Series Analysis Revisited. Chaos, 25, Article ID: 097610.

https://doi.org/10.1063/1.4917289

[42] Liebert, W., Pawelzik, K. and Schuster, H.G. (1991) Optimal Embedding of Chaotic Attractors from Topological Considerations. Europhysics Letters, 14, Article No. 521.

https://doi.org/10.1209/0295-5075/14/6/004

[43] Buzug, T. and Pfister, G. (1992) Optimal Delay Time and Embedding Dimension for Delay-Time Coordinates by Analysis of the Global Static and Local Dynamical Behaviour of Strange Attractors. Physical Review A, 45, Article No. 7073.

https://doi.org/10.1103/PhysRevA.45.7073

[44] Deshpande, A., Chen, Q., Wang, Y., Lai, Y.-C.G. and Do, Y. (2010) Effect of Smoothing on Robust Chaos. Physical Review E, 82, Article ID: 026209.

https://doi.org/10.1103/PhysRevE.82.026209

[45] Aguirre, L.A. and Letellier, C. (2009) Modeling Nonlinear Dynamics and Chaos: A Review. Mathematical Problems in Engineering, 2009, Article ID: 238960.

https://doi.org/10.1155/2009/238960

[46] Johnston, J. (1972) Econometric Methods. 2nd Edition, McGraw-Hill Book Company, New York.

[47] Armstrong, B. (2006) Models for the Relationship between Ambient Temperature and Daily Mortality. Epidemiology, 17, 624-631.

https://doi.org/10.1097/01.ede.0000239732.50999.8f

[48] Malinvaud, E. (1980) Statistical Methods in Econometrics. 3rd Edition, North-Holland Publishing Co., Amsterdam.

[49] Dhrymes, P.J. (1971) Distributed Lags: Problems of Estimation and Formulation. Holden-Day, San Francisco.

[50] Solow, R. (1960) On a Family of Lag Distributions. Econometrica, 28, 393-406.

https://doi.org/10.2307/1907729