Repetitive Control Uncertainty Conditions in State Feedback Solution


1. Introduction

Systems of a repetitive [1] nature are those where the reference trajectory that must be followed to high precision, $r\left(t\right)$, has a repetitive structure, $r\left(t\right)=r\left(t+T\right)$. Repetitive control (RC) [2] is a control technique that learns from previous experience to enhance periodic reference tracking ad infinitum and to reject periodic disturbances.

RC was first mentioned in the work reported in [3], where the main objective was to reject periodic disturbances in a power supply control application, or as in [4], to track a periodic reference in a motion control application.

RC has a direct impact on industry, which utilizes RC in applications such as rotary systems, including disc drives [5], electrical motors [6] that must overcome torque ripple, robotics [7], and other systems performing repetitive tasks.

The idea behind RC is to use previous periods/trials to modify the control signal so that the overall system learns to follow a periodic reference trajectory of period T to high precision. Most previously designed frameworks report their work in the continuous time domain, owing to the nature of the repetitive system, and use the time instant t to form the forcing function in the update law for the upcoming time instant $t+T$. Several structures for RC updating laws are reported in the literature [8] [9]. The internal model principle (IMP) [10] underlies all RC designs; it states that to track or reject a periodic signal without steady-state error, the periodic signal should be modelled as an autonomous system inside a feedback loop. The small gain theorem is then used to design the control system, such that the overall system is sufficiently stable to achieve the required task.

Repetitive control and iterative learning control (ILC), another technique used to accommodate periodic disturbances and to enhance the performance of repetitive systems, are not identical even though they use the same updating technique. The main difference is that in RC there is no resetting between trials; the reference being followed is continuous, with $r\left(t\right)=r\left(t+N\right)$, where N is the number of samples. That is, the initial states of the system for trial k are the final states of trial $k-1$. In ILC, by contrast, the system resets to the home position after each trial before starting the next one. A comparison between RC and ILC that clarifies their similarities and differences can be found in [9]. The similarity in the general structure of the two methods allows the design to lift the batch-process description into a matrix representation [11]: “the general repetitive control laws discussed above can correspond to general learning control gain matrices L, with the stipulation that the gains in the matrix within each diagonal are all the same.”

Based on the above statements, a lifted form can be considered that maps the problem from a structure expressed in both time and trial indexes to a uniform structure depending on the trial index alone. The design thus starts by defining the periodic signal and then setting out the steps required to design the RC controller in the lifted model with the delay model present in the feedback loop. Any periodic signal can be generated by an autonomous system containing a delay model along the forward path inside a positive feedback loop [3]. Accommodation of this type of signal can be achieved with the internal model principle by duplicating the delay system inside a feedback loop. A framework that addresses the similarities between RC and ILC and clarifies which controller is best, depending on the location of the internal model, is found in [12] [13]. It has been shown that RC and ILC are not identical but are related by duality. In turn, a modified framework design [14] incorporates both current error feedback and past error feedforward.

This paper reintroduces the RC design within the proposed framework in a state feedback structure. It also presents new robust conditions that impose design limitations different from those introduced in [15], where the authors discussed the uncertainty condition for the proposed design, in the presence of current error feedback alone, in the frequency domain. The novelty of this paper lies in setting new robust conditions, using singular values, for the different cases of the RC design within the framework discussed. Simulation results show that the new design performs reliably well in the presence of system uncertainty.

The following section briefly discusses RC design in the general case under the framework proposed in [14]. Design robustness and the development of performance against unmodelled dynamics for the proposed RC designs are presented in Section 3. Simulation results from a Non-Minimum Phase (NMP) plant, the platform for several past RC design verifications, can be found in Section 4. Conclusions and future work are discussed in Section 5.

2. Problem Formulation Background and Solution

Consider a linear time-invariant system with m outputs, p inputs and n states, with overall discrete transfer function given in state-space form by $P\left(z\right)=C{\left(z{I}_{n}-A\right)}^{-1}B+D$, where the matrices A, B, C and D have appropriate dimensions. Let the system output be $y\left(z\right)$ and the input $u\left(z\right)$; the process output equation is then $y\left(z\right)=P\left(z\right)u\left(z\right)$.

The general platform describing the RC controller is the same as that of the ILC controller, owing to the similarities they share, so there is no harm in initially describing the system in the lifted form. Consider a “single trial” of finite duration with N samples, where the model of the system dynamics at trial k can be expressed as

$\begin{array}{l}{x}_{k}\left(i+1\right)=A{x}_{k}\left(i\right)+B{u}_{k}\left(i\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{k}\left(0\right)={x}_{k-1}\left(N-1\right)\\ {y}_{k}\left(i\right)=C{x}_{k}\left(i\right)+D{u}_{k}\left(i\right)\end{array}$ (1)

where $0\le i\le N-1$. In the above equation, the RC controller does not reset to the initial state after each trial, unlike ILC, where ${x}_{k}\left(0\right)={x}_{0}$. Now, introduce the input and output vectors as

${u}_{k}={\left[{u}_{k}\left(0\right)\mathrm{,}{u}_{k}\left(1\right)\mathrm{,}\cdots \mathrm{,}{u}_{k}\left(N-1\right)\right]}^{\text{T}}$

${y}_{k}={\left[{y}_{k}\left(0\right)\mathrm{,}{y}_{k}\left(1\right)\mathrm{,}\cdots \mathrm{,}{y}_{k}\left(N-1\right)\right]}^{\text{T}}$

Then the dynamics for each trial can be written in the form of

${y}_{k}=P{u}_{k}$ (2)

where

$P=\left[\begin{array}{ccccc}D& 0& 0& \cdots & 0\\ CB& D& 0& \cdots & 0\\ CAB& CB& D& \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ C{A}^{N-2}B& C{A}^{N-3}B& C{A}^{N-4}B& \cdots & D\end{array}\right]$

where its elements are the Markov parameters. Define the reference, in discrete form, as the vector

$r={\left[r\left(0\right)\mathrm{,}r\left(1\right)\mathrm{,}\cdots \mathrm{,}r\left(N-1\right)\right]}^{\text{T}}$
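As an illustration, the lifted matrix of Markov parameters can be built numerically and checked against a step-by-step simulation of (1). The sketch below is not from the original work; it assumes a single scalar channel and a zero initial state, so that $y_k = Pu_k$ holds exactly.

```python
import numpy as np

def lifted_plant(A, B, C, D, N):
    """Build the N x N lower-triangular lifted matrix of Markov parameters
    (scalar channel for simplicity): P[i, j] = D if i == j,
    C A^(i-j-1) B if i > j, and 0 above the diagonal."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    P = np.zeros((N, N))
    P[np.diag_indices(N)] = D[0, 0]
    Ak = np.eye(A.shape[0])
    for d in range(1, N):               # d = i - j, the sub-diagonal index
        P[np.arange(d, N), np.arange(N - d)] = (C @ Ak @ B)[0, 0]
        Ak = Ak @ A
    return P

# Sanity check: y_k = P u_k reproduces a step-by-step simulation of (1)
# with a zero initial state (illustrative values assumed)
A, B, C, D = 0.5, 1.0, 1.0, 0.0
N = 5
P = lifted_plant(A, B, C, D, N)
u = np.random.default_rng(0).standard_normal(N)
x, y = 0.0, []
for i in range(N):
    y.append(C * x + D * u[i])
    x = A * x + B * u[i]
P_u = P @ u
assert np.allclose(P_u, y)
```

The first column of P simply lists the impulse response samples D, CB, CAB, and so on, which is why the lifted map is a lower-triangular Toeplitz matrix.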

As discussed, the RC problem can be illustrated with the structure presented in Figure 1, where the block $\Phi \left(z\right)$ is a diagonal transfer-function matrix with an internal model representation along its diagonal. As [13] pointed out, in the RC case there are m channels, as the block operates in the output space, each channel having N state variables.

A periodic signal of length N in discrete time can be generated by an autonomous system consisting of a positive feedback loop with a pure time delay in the forward path, given appropriate initial and boundary conditions, modelled as

${x}_{w}\left({t}_{k+1}\right)={A}_{w}{x}_{w}\left({t}_{k}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{w}\left({t}_{0}\right)={x}_{w0}$

$w\left({t}_{k}\right)={C}_{w}{x}_{w}\left({t}_{k}\right)$ (3)

where the $N\times N$ matrix ${A}_{w}$ is given by

${A}_{w}=\left[\begin{array}{ccccc}0& 1& 0& \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0& 0& 0& \cdots & 1\\ 1& 0& 0& \cdots & 0\end{array}\right]$

and the $1\times N$ row vector ${C}_{w}$ as

${C}_{w}=\left[\begin{array}{ccccc}1& 0& 0& \cdots & 0\end{array}\right]$
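A minimal numerical sketch of the internal model (3), added here for illustration only: the cyclic-shift matrix ${A}_{w}$ stores one period of samples in the state and replays it forever.

```python
import numpy as np

def internal_model(N):
    """Cyclic-shift internal model of (3): A_w is the N x N permutation
    with ones on the superdiagonal and in the bottom-left corner;
    C_w reads out the first state."""
    A_w = np.zeros((N, N))
    A_w[np.arange(N - 1), np.arange(1, N)] = 1.0   # superdiagonal
    A_w[N - 1, 0] = 1.0                            # wrap-around term
    C_w = np.zeros((1, N))
    C_w[0, 0] = 1.0
    return A_w, C_w

# One period of samples stored in the state replays with period N
N = 4
A_w, C_w = internal_model(N)
x = np.array([1.0, 2.0, 3.0, 4.0])     # initial condition = one period
w = []
for _ in range(3 * N):
    w.append((C_w @ x)[0])
    x = A_w @ x
w = np.array(w)
assert np.allclose(w, np.tile([1.0, 2.0, 3.0, 4.0], 3))
```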

A robust controller $K\left(z\right)$ (where z denotes the discrete-time delay operator) is required for the robust periodic control problem, which is defined as:

Given an $m\times p$ transfer-function matrix $P\left(z\right)$ with an input vector consisting of the plant input and a disturbance input, $u={u}_{p}+{u}_{w}$, the output defined as in (2), and a reference signal $r\left({t}_{k}\right)=r\left({t}_{k+N}\right),\text{\hspace{0.17em}}{t}_{k}=0,\Delta T,2\Delta T,\cdots$ with N samples per period, design $K\left(z\right)$ such that the overall closed-loop system is asymptotically stable, the tracking error ${e}_{k}=r-{y}_{k}$ tends to zero along the trial domain, and both of these conditions hold robustly.

The solution considered in [12] [13] uses the internal model principle [10] together with the small gain theorem to set stability conditions, designing the feedback gain and the observer gain using a Linear Quadratic Regulator (LQR), where the periodic disturbances act on the system output. The design in [14] considers a more general case than that presented in [12] [13], as it incorporates both current error and past error in the framework instead of current error feedback alone.

Figure 1. RC as a feedback problem [14] .

This paper considers the RC design scheme via state feedback reported in [14] and the different stability conditions that arise depending on the error case considered: either current error feedback or past error feedforward. The following subsection briefly explains the design steps in [14] and the stability conditions for each case.

RC Controller Design via State Feedback

For a single channel, consider the system in (3), and also introduce the following $N\times 1$ vector

${B}_{w}={\left[\begin{array}{ccccc}0& \cdots & 0& 0& 1\end{array}\right]}^{\text{T}}$

and

${D}_{w}=\begin{cases}0 & \text{if past error feedforward case}\\ 1 & \text{if current error feedback case}\end{cases}$

For a multi-input multi-output (MIMO) case, define ${A}_{r}$ to be a block-diagonal matrix with ${A}_{w}$ along its diagonal:

${A}_{r}=\text{diag}\left\{{A}_{w}\right\}$

and the same is true for ${B}_{r}\mathrm{,}{C}_{r}$ and ${D}_{r}$ where each diagonal block is repeated m times (acting on the system output). Thus, if considering the periodic problem proposed in Figure 1, the transfer function of the delay model, $\Phi \left(z\right)$ , is given as

${C}_{r}{\left(z{I}_{Nm}-{A}_{r}\right)}^{-1}{B}_{r}+{D}_{r}=\begin{cases}{\left({z}^{N}{I}_{m}-{I}_{m}\right)}^{-1} & \text{if }{D}_{w}=0\\ {\left({I}_{m}-{z}^{-N}{I}_{m}\right)}^{-1} & \text{if }{D}_{w}=1\end{cases}$
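These closed-form expressions can be checked numerically for a single channel. The sketch below (illustrative only, with an arbitrary test point z outside the unit circle) verifies that the state-space realisation of the internal model reproduces both cases.

```python
import numpy as np

# Single-channel check that C_w (zI - A_w)^{-1} B_w + D_w reproduces
# the delay-model transfer functions above, evaluated at a test point z.
N = 6
A_w = np.zeros((N, N))
A_w[np.arange(N - 1), np.arange(1, N)] = 1.0   # superdiagonal
A_w[N - 1, 0] = 1.0                            # wrap-around term
B_w = np.zeros((N, 1)); B_w[N - 1, 0] = 1.0
C_w = np.zeros((1, N)); C_w[0, 0] = 1.0

z = 1.3 + 0.4j
Phi = (C_w @ np.linalg.solve(z * np.eye(N) - A_w, B_w))[0, 0]

assert np.isclose(Phi, 1.0 / (z**N - 1.0))          # D_w = 0 case
assert np.isclose(Phi + 1.0, 1.0 / (1.0 - z**-N))   # D_w = 1 case
```

Adding the direct feedthrough $D_w = 1$ turns ${\left({z}^{N}-1\right)}^{-1}$ into ${z}^{N}/\left({z}^{N}-1\right)={\left(1-{z}^{-N}\right)}^{-1}$, which is the algebra the two cases express.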

The design now considers how the state feedback can be found; more details are given in [14] [15]. The overall idea of this method is to combine both the plant and the internal model in one structure as

$\left[\begin{array}{c}{x}_{r}\left(i+1\right)\\ x\left(i+1\right)\end{array}\right]=\left[\begin{array}{cc}{A}_{r}& -{B}_{r}C\\ 0& A\end{array}\right]\left[\begin{array}{c}{x}_{r}\left(i\right)\\ x\left(i\right)\end{array}\right]+\left[\begin{array}{c}-{B}_{r}D\\ B\end{array}\right]{u}_{k}\left(i\right)+\left[\begin{array}{c}{B}_{r}\\ 0\end{array}\right]r\left(i\right)$ (4)

where stabilising this system guarantees periodic disturbance accommodation, since the output of the combined system is the plant output and its input is the control input signal; here ${x}_{r}$ is the internal model state. The combined system is then manipulated by choosing its control input as

$u\left(i\right)=-{K}_{r}\left[\begin{array}{c}{\stackrel{^}{x}}_{r}\left(i\right)\\ \stackrel{^}{x}\left(i\right)\end{array}\right]$

with an observer to estimate the states. This in turn will end up with the overall system of the form [14]

$\begin{array}{c}\left[\begin{array}{c}{\stackrel{^}{x}}_{r}\left(i+1\right)\\ \stackrel{^}{x}\left(i+1\right)\end{array}\right]=\left[\begin{array}{cc}{A}_{r}& -{B}_{r}C\\ 0& A\end{array}\right]\left[\begin{array}{c}{\stackrel{^}{x}}_{r}\left(i\right)\\ \stackrel{^}{x}\left(i\right)\end{array}\right]-\left[\begin{array}{c}-{B}_{r}D\\ B\end{array}\right]{K}_{r}\left[\begin{array}{c}{\stackrel{^}{x}}_{r}\left(i\right)\\ \stackrel{^}{x}\left(i\right)\end{array}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{L}_{r}\left(v\left(i\right)-\left(\left[\begin{array}{cc}{C}_{r}& -{D}_{r}C\end{array}\right]+{D}_{r}D{K}_{r}\right)\left[\begin{array}{c}{\stackrel{^}{x}}_{r}\left(i\right)\\ \stackrel{^}{x}\left(i\right)\end{array}\right]\right)\end{array}$ (5)

The overall structure is described in [14]. The design at this stage concentrates on isolating the delay ${z}^{-N}{I}_{m}$ and finding the overall transfer function that links its output to its input, termed $H\left(z\right)$. A stability condition following the small gain theorem requires

$\Vert H\left(z\right)\Vert <1$ (6)

where the overall transfer function around the delay operator, $H\left(z\right)$, differs depending on the error case considered. For the past error feedforward case

$H\left(z\right)=\left(G\left(z\right)+P\left(z\right)\right)G{\left(z\right)}^{-1}$ (7)

and for a Current error feedback case

$H\left(z\right)=G\left(z\right){\left(G\left(z\right)+P\left(z\right)\right)}^{-1}$ (8)

where $G\left(z\right)$ in both cases is governed by the following

$G\left(z\right)=\left[\begin{array}{cc}{C}_{r}& -{D}_{r}C\end{array}\right]{\left(zI-\left[\begin{array}{cc}{A}_{r}& -{B}_{r}C\\ 0& A\end{array}\right]+\left[\begin{array}{c}{B}_{r}D\\ -B\end{array}\right]{K}_{r}\right)}^{-1}\left[\begin{array}{c}-{B}_{r}D\\ B\end{array}\right]-{D}_{r}D$

The required solution depends on solving the linear quadratic regulator problem to find ${K}_{r}$ via the Riccati equation, where the model considers the difference between the combined system around the plant and the delay model, together with the estimator structure, to minimize the required cost function [14].
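As a hedged sketch of this step (illustrative plant values assumed, D = 0, period N = 2, and a plain fixed-point Riccati iteration standing in for a dedicated DARE solver), the combined system (4) can be stabilised numerically as follows.

```python
import numpy as np

def dare_iterate(A, B, Q, R, iters=2000):
    """Fixed-point iteration of the discrete-time Riccati equation
    (a simple stand-in for a dedicated DARE solver)."""
    X = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
        X = Q + A.T @ X @ (A - B @ K)
    return X, K

# Illustrative single-channel plant (values assumed here, D = 0)
Ap = np.array([[0.5]]); Bp = np.array([[1.0]]); Cp = np.array([[1.0]])

# Internal model (period N = 2) as in (3)
N = 2
A_w = np.array([[0.0, 1.0], [1.0, 0.0]])
B_w = np.array([[0.0], [1.0]])

# Combined system of (4): internal model driven by -C x, plant driven by u
A_aug = np.block([[A_w, -B_w @ Cp], [np.zeros((1, N)), Ap]])
B_aug = np.vstack([np.zeros((N, 1)), Bp])   # -B_r D = 0 since D = 0

Q = np.eye(N + 1); R = np.array([[1.0]])
X, K_r = dare_iterate(A_aug, B_aug, Q, R)

# The state feedback u = -K_r x_hat must stabilise the combined system
eigs = np.linalg.eigvals(A_aug - B_aug @ K_r)
assert np.max(np.abs(eigs)) < 1.0
```

The internal model contributes eigenvalues on the unit circle, so the augmented pair must be stabilizable for the Riccati iteration to converge; that holds here because the example plant has no zero at the roots of unity.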

3. Robust RC Design in State Feedback

In this section the robustness of the two RC controllers designed in [14], with past error feedforward and with current error feedback, is investigated through the stability condition given in (6). Previously reported work did not discuss this subject, which forms the main novelty of the work presented in this paper. Start with the stability condition in (6) and consider the following cases.

For past error feedforward in the state feedback design, the starting point is the stability condition given in (6), where the induced norm has to be less than 1 to guarantee system stability. A more conservative restriction is to consider singular values instead, so the condition becomes

$\bar{\sigma}\left(H\left(z\right)\right)<1$

which indicates that all the eigenvalues of $H\left(z\right)$ lie inside the unit circle, since the spectral radius is bounded by the maximum singular value. Verifying this condition for the maximum singular value assures reference tracking and periodic disturbance accommodation. Now, consider the case where unmodelled system dynamics, or system uncertainty, defined as $\Delta$, act on the system in operation. To examine this case define $P={P}_{o}+{P}_{o}\Delta W$, where ${P}_{o},\Delta ,W$ are the nominal plant, the uncertainty, and the uncertainty weight, respectively. Each is assumed stable, causal and linear time-invariant for simplicity. Now, in combination with the definition of $H\left(z\right)$ given in (7), the following derivation can be written:

$\sigma \left(G+{P}_{o}+{P}_{o}\Delta W\right)<\sigma \left(G\right)\le 1$

$\sigma \left(G+{P}_{o}\right)+\sigma \left({P}_{o}\Delta W\right)<\sigma \left(G\right)\le 1$

Taking the uncertainty part to one side and the remaining parts to the other side yields

$\sigma \left({P}_{o}\Delta W\right)<\sigma \left(G\right)-\sigma \left(G\right)-\sigma \left({P}_{o}\right)$ (9)

Maximizing the left-hand side gives the possible variation in the system dynamics, while the right-hand side sets the upper bound that keeps the system from unwanted performance during operation. This is obtained when the right-hand side takes the form $\bar{\sigma}\left(G\right)-\underline{\sigma}\left(G\right)-\bar{\sigma}\left(P\right)$. To extend this property and select an uncertainty weight that gives a better upper bound and permits the system to handle unmodelled dynamics during operation, Equation (9) can be manipulated into the form

$\sigma \left(\Delta \right)<\frac{\sigma \left(G\right)-\sigma \left(G\right)-\sigma \left({P}_{o}\right)}{\sigma \left(W\right)\sigma \left({P}_{o}\right)}$ (10)

maximizing the left-hand side of Equation (10), such that the right-hand side is kept at a minimum, can be seen as solving the following

$\bar{\sigma}\left(\Delta \right)<\frac{\min}{\max}=\frac{\bar{\sigma}\left(G\right)-\underline{\sigma}\left(G\right)-\underline{\sigma}\left({P}_{o}\right)}{\sigma \left(W\right)\bar{\sigma}\left({P}_{o}\right)}$ (11)

Now, $\bar{\sigma}\left(G\right)\ne \underline{\sigma}\left(G\right)$ unless G is a scalar multiple of the identity, which is not the case in this design. Thus, returning to Equation (11), to suppress the uncertainty effect further, the weight W is investigated under the assumption $\bar{\sigma}\left(\Delta \right)<1$, as follows

$1<\frac{\bar{\sigma}\left(G\right)-\underline{\sigma}\left(G\right)-\underline{\sigma}\left({P}_{o}\right)}{\sigma \left(W\right)\bar{\sigma}\left({P}_{o}\right)}$

$\bar{\sigma}\left(W\right)<\frac{\bar{\sigma}\left(G\right)-\underline{\sigma}\left(G\right)-\underline{\sigma}\left({P}_{o}\right)}{\bar{\sigma}\left({P}_{o}\right)}<1$ (12)

The condition given in Equation (12) sets the upper limit on the weighting factor such that the admissible uncertainty range is extended, tolerating a higher level of unmodelled dynamics than the case where no weighting factor is considered.
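Condition (12) can be evaluated numerically, frequency by frequency, from the singular values of the frequency responses of G and ${P}_{o}$. The sketch below is illustrative only: the fixed 2 x 2 matrices are hypothetical stand-ins for frequency-response samples, not values from the paper.

```python
import numpy as np

def weight_upper_bound(G, Po):
    """Evaluate the right-hand side of (12) at one frequency:
    (max_sv(G) - min_sv(G) - min_sv(Po)) / max_sv(Po)."""
    sG = np.linalg.svd(G, compute_uv=False)   # sorted max -> min
    sP = np.linalg.svd(Po, compute_uv=False)
    return (sG[0] - sG[-1] - sP[-1]) / sP[0]

# Hypothetical 2x2 frequency-response samples (illustrative values only)
G  = np.array([[4.0, 1.0], [0.0, 0.5]])
Po = np.array([[0.3, 0.0], [0.0, 0.1]])

bound = weight_upper_bound(G, Po)
# Condition (12) requires max_sv(W) below this bound (and below 1)
print(f"upper bound on the maximum singular value of W: {bound:.4f}")
```

Note that for any G that is a scalar multiple of the identity the numerator collapses to a negative value, which is the degenerate case ruled out in the discussion above.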

For current error feedback in the state feedback design, the starting point is again the stability condition given in (6), where the induced norm has to be less than 1 to guarantee system stability. Again consider system uncertainty $\Delta$ acting on the system in operation, and define $P={P}_{o}+{P}_{o}\Delta W$ with the definitions and properties given for the past-error case. Following the same steps, in combination with the definition of $H\left(z\right)$ given in (8), the derivation can be written in terms of the singular values as

$\sigma \left(G{\left(G+{P}_{o}+{P}_{o}\Delta W\right)}^{-1}\right)<1$

which leads to writing the above after manipulation as

$\bar{\sigma}\left(\Delta \right)>\frac{\sigma \left(G\right)-\sigma \left(G\right)-\sigma \left({P}_{o}\right)}{\sigma \left(W\right)\sigma \left({P}_{o}\right)}$ (13)

Since the uncertainty is assumed to be norm-bounded, with $\bar{\sigma}\left(\Delta \right)<1$, Equation (13) can be written as

$\frac{\sigma \left(G\right)-\sigma \left(G\right)-\sigma \left({P}_{o}\right)}{\sigma \left(W\right)\sigma \left({P}_{o}\right)}<1$ (14)

Equation (14) will give the proper condition for the weighting factor (W) such that the left-hand side is minimized, which will be

$1>\bar{\sigma}\left(W\right)>\frac{\bar{\sigma}\left(G\right)-\underline{\sigma}\left(G\right)-\underline{\sigma}\left({P}_{o}\right)}{\bar{\sigma}\left({P}_{o}\right)}$ (15)

Condition (15) matches (12) up to a point: (15) sets a lower limit on the weight selection, while (12) sets an upper limit on the uncertainty weight, giving a wider and better range than (15). This result supports the experimental results obtained in [14] for the past error feedforward case in ILC, rather than current error feedback, where the past error feedforward case gives a more reliable design against system perturbation.

The next section presents simulation results obtained on a non-minimum phase plant, showing a performance improvement against system uncertainty and modelling mismatch for the robust design compared to previously reported RC design frameworks.

4. Simulation Results

This section presents simulation results obtained for a non-minimum phase (NMP) plant to verify that the proposed design tolerates changes in the dynamic matrix of a plant with a difficult mathematical structure, such as an NMP plant.

In this example, an NMP plant was tested against dynamic matrix changes both in the absence and in the presence of a weighting factor. The physical system was constructed to implement both ILC and repetitive controller (RC) schemes, which led to a variety of reported works, for example [16]. The simulation results obtained in this paper support the idea of extending the tolerable level of system modelling mismatch through the weighting factor. The NMP plant has one zero in the right half plane, which makes this system hard to control in an RC scheme because of the instability associated with plant inversion. As a result, any sudden change in system dynamics, such as a modelling mismatch, could result in an unstable output response. The mathematical equation describing the system shown in Figure 2 is given as:

$P\left(s\right)=\frac{1.202\left(4-s\right)}{s\left(s+9\right)\left({s}^{2}+12s+56.25\right)}$ (16)
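The non-minimum phase character of (16) can be illustrated with a short simulation. The sketch below uses a forward-Euler discretisation at the paper's 100 Hz sampling rate, chosen here purely for simplicity rather than being the authors' actual discretisation; the step-response undershoot it exhibits is the classic symptom of the right-half-plane zero at s = 4.

```python
import numpy as np

dt = 0.01  # 100 Hz sampling, as used in the paper's simulations
# Denominator s(s+9)(s^2+12s+56.25) = s^4 + 21 s^3 + 164.25 s^2 + 506.25 s
den = [1.0, 21.0, 164.25, 506.25, 0.0]
# Numerator 1.202(4 - s) = -1.202 s + 4.808
num = [-1.202, 4.808]

# Controllable canonical form of (16)
A = np.zeros((4, 4))
A[0, :] = [-den[1], -den[2], -den[3], -den[4]]
A[1:, :3] = np.eye(3)
B = np.array([[1.0], [0.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, num[0], num[1]]])

x = np.zeros((4, 1))
y = []
for _ in range(300):                 # 3 seconds of a unit step input
    y.append((C @ x)[0, 0])
    x = x + dt * (A @ x + B)         # forward-Euler step, u = 1
y = np.array(y)

assert y.min() < 0.0                 # initial undershoot from the RHP zero
assert np.all(np.isfinite(y))        # the Euler recursion stays bounded
```

The initial excursion in the "wrong" direction is exactly what makes plant inversion, and hence naive RC updating, hazardous for this system.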

This system has been tested in two cases: the first ignores the weighting factor, while the second considers its presence. A reference signal of 2 seconds was applied with a sampling frequency of 100 Hz, generating 200 sample points recorded in each trial. The system can be operated for a large number of trials, but only the first 10 cycles are presented, since they contain all the information needed to clarify the weighting factor effect. Figure 3 compares the two cases when the matrix A is changed by 5%. Plots A and B show the output signal and the log of the mean squared error with the weighting factor present, while plots C and D show the case where the weighting factor is omitted. With a 5% change in the dynamic matrix A, the system in plots C and D tends to become unstable and the error grows; in contrast, with the weighting factor present (plots A and B), the system overcomes the model mismatch and the response tends to follow the reference signal as the number of trials increases.

Figure 2. Non-minimum phase plant experimental test facility [17].

Figure 3. Output response and log of MSE with matrix A changed by 5%.

Figure 4. Output responses for different variations in A matrix with the presence of the weighting parameter.

Figure 5. Mean squared error for the NMP output with the weighting factor presence.

Figure 4 shows output responses for the system with different variations of the A matrix for the past error case, where the weighting factor limit found from Equation (12) is 0.8176 and the selected weight is half of this limit, 0.4088. The figure shows an extension of system stability against uncertainty of up to 14%. Figure 5 shows the mean squared error for the responses in Figure 4, confirming system stability against perturbation. Omitting the weighting factor limits the system's ability to cope with changes in the dynamic matrix A to less than 5%; see Figure 6.

Figure 6. Mean squared error for the NMP output without the weighting factor.

5. Conclusion

In this paper, conditions were established to extend the range of linear system uncertainty, based on singular values, for the RC design presented in [14]. Different cases were discussed, and conditions were found that extend system robustness against unmodelled dynamics. Simulations verified the successful use of a weighting factor to extend the range of uncertainty tolerated by the RC design via state feedback. A high level of reference tracking was achieved for up to 14% system uncertainty in the NMP model. Future work will include experimental verification of the developed conditions against system perturbation and will examine physically the effect of the new conditions on system performance.

References

[1] Alsubaie, M.A., Cai, Z., Freeman, C.T., Lewin, P.L. and Rogers, E. (2008) Repetitive and Iterative Learning Controllers Designed by Duality with Experimental Verification. Proceedings of the 17th World Congress, Seoul, July 2008, 3562-3567.

[2] De Roover, D. and Bosgra, O.H. (1997) Dualization of the Internal Model Principle in Compensator and Observer Theory with Application to Repetitive and Learning Control. Proceedings of the American Control Conference, 6, 3902-3906.

[3] De Roover, D., Bosgra, O.H. and Steinbuch, M. (2000) Internal Model-Based Design of Repetitive and Iterative Learning Controllers for Linear Multivariable Systems. International Journal of Control, 73, 914-929.

https://doi.org/10.1080/002071700405897

[4] Francis, B.A. and Wonham, W.M. (1975) The Internal Model Principle for Linear Multivariable Regulators. Applied Mathematics and Optimization, 2, 170-194.

https://doi.org/10.1007/BF01447855

[5] Freeman, C., Lewin, P. and Rogers, E. (2005) Experimental Evaluation of Iterative Learning Control Algorithms for Non-Minimum Phase Plants. International Journal of Control, 78, 826-846.

https://doi.org/10.1080/00207170500158565

[6] Freeman, C.T., Alsubaie, M.A., Cai, Z., Rogers, E. and Lewin, P.L. (2013) A Common Setting for the Design of Iterative Learning and Repetitive Controllers with Experimental Verification. International Journal of Adaptive Control and Signal Processing, 27, 230-249.

https://doi.org/10.1002/acs.2299

[7] Freeman, C.T., Lewin, P.L., Rogers, E., Owens, D.H. and Hatonen, J. (2008) An Optimality-Based Repetitive Control Algorithm for Discrete-Time Systems. IEEE Transactions on Circuits and Systems I: Regular Papers, 55, 412-423.

https://doi.org/10.1109/TCSI.2007.914005

[8] Hara, S., Yamamoto, Y., Omata, T. and Nakano, M. (1988) Repetitive Control System: A New Type Servo System for Periodic Exogenous Signals. IEEE Transactions on Automatic Control, 33, 659-668.

https://doi.org/10.1109/9.1274

[9] Inoue, T., Nakano, M. and Iwai, S. (1981) High Accuracy Control of Servomechanism for Repeated Contouring. Proceedings of the 10th Annual Symposium on Incremental Motion Control Systems and Devices, Urbana-Champaign, 1-4 June 1981, 285-292.

[10] Inoue, T., Nakano, M., Kubo, T., Matsumoto, S. and Baba, H. (1981) High Accuracy Control of a Proton Synchrotron Magnet Power Supply. Proceedings of the 8th IFAC World Congress, Kyoto, 24-28 August 1981, 216-221.

[11] Kaneko, K. and Horowitz, R. (1997) Repetitive and Adaptive Control of Robot Manipulators with Velocity Estimation. IEEE Transactions on Robotics and Automation, 13, 204-217.

https://doi.org/10.1109/70.563643

[12] Longman, R.W. (2000) Iterative Learning Control and Repetitive Control for Engineering Practice. International Journal of Control, 73, 930-954.

https://doi.org/10.1080/002071700405905

[13] Longman, R.W. (2010) On the Theory and Design of Linear Repetitive Control Systems. European Journal of Control, 16, 447-496.

https://doi.org/10.3166/ejc.16.447-496

[14] Mattavelli, P., Tubiana, L. and Zigliotto, M. (2005) Torque-Ripple Reduction in pm Synchronous Motor Drives Using Repetitive Current Control. IEEE Transactions on Power Electronics, 20, 1423-1431.

https://doi.org/10.1109/TPEL.2005.857559

[15] Moon, J.H., Lee, M.N. and Chung, M.J. (1998) Repetitive Control for the Track-Following Servo System of an Optical Disk Drive. IEEE Transactions on Control Systems Technology, 6, 663-670.

https://doi.org/10.1109/87.709501

[16] Rogers, E., Galkowski, K. and Owens, D. (2007) Control System Theory and Applications for Linear Repetitive Processes. Springer, Berlin.

[17] Wang, Y., Gao, F. and Doyle, F.J. (2009) Survey on Iterative Learning Control, Repetitive Control, and Run-to-Run Control. Journal of Process Control, 19, 1589-1600.

https://doi.org/10.1016/j.jprocont.2009.09.006