An Upper Limit for Iterative Learning Control Initial Input Construction Using Singular Values
Abstract: Selecting a proper initial input for Iterative Learning Control (ILC) algorithms has been shown to offer faster learning than the same algorithms started blind. Iterative Learning Control is a control technique that uses information from previous executions/trials to update the next trial input so that a reference is followed to high precision. In ILC, convergence of the error depends strongly on the initial input applied to the plant, so a good initial choice makes learning faster and, as a consequence, the error tends to zero faster as well. In this paper, an upper limit on the construction of the initial input signal for trial 1 is set such that the system does not respond aggressively due to the uncertainty that lies at high frequencies. The limit is expressed in terms of singular values, and simulation results illustrate the underlying theory.

1. Introduction

Iterative Learning Control (ILC) is a control method that uses information obtained from previous executions/trials to update the next trial control input, in order to enhance the performance of a repetitive system and accommodate periodic disturbances from trial to trial. ILC is well suited to repetitive systems where a reference trajectory has to be followed to a high level of accuracy a large number of times.

Repetitive systems are those in which the reference to follow, $r\left(t\right)$ , has a fixed time duration, $0\le t\le T$ , and has to be repeated a large number of times, which moves the mathematical modelling outside the conventional control representation to a 2D system representation. The idea of ILC was inspired by the human learning mechanism; for example, if a tennis player wants to learn how to hit a tennis ball with excellence, he has to perform a large number of repetitions until he reaches perfection. This is the learning principle of ILC: a system performs a successful operation, in the sense of system stability over the operation time, measures the input $u\left(t\right)$ and output $y\left(t\right)$ , calculates the error signal $e\left(t\right)$ , and uses this information to update the next trial control input so that the error norm improves from trial to trial. The ILC technique has a direct reflection in industry and its applications; it can be seen in pick-and-place applications such as robot arms, chemical batch processes, automated manufacturing plants and similar operations.

The error signal ${e}_{k}\left(t\right)$ is the forcing term that updates the upcoming control input. After each trial, the system has to be reset to its initial position before starting the next one. The resetting time, known as the stoppage time, is the time a system requires to carry out all the computations needed to update the next trial control input before starting the next trial. One common form of ILC is ${u}_{k+1}={u}_{k}+L{e}_{k}$ , where ${u}_{k}$ is the trial-k input, L is the learning gain and ${e}_{k}$ is the trial-k error signal. The survey literature is a good starting point for rich information about ILC theory and applications.

Repetitive control (RC) is another control tool used to accommodate periodic disturbances through the use of previous-trial information. One major difference between ILC and RC is that in repetitive control the initial states of trial $k+1$ are the final states of trial k, while in ILC the initial states are kept the same for all trials. This leads to a different application paradigm, where RC is applied to systems with continuously repeating operations such as disc drives.

Most of the literature suggests that the initial input can be an array of zeros, so that the error for the first trial is the reference itself. This ensures that the learning gain in the design is solely responsible for building up the control signal from the beginning, but if a better starting point that guarantees a faster learning process can be predicted, that is preferable in terms of learning speed. This idea led to several reported works in the literature, such as those where the prediction depends on running several tests, recording the best solutions and manipulating the recorded control input signals so that a new trajectory is followed from trial 1 with a better error norm. The method picks the k-nearest neighbours from the stored data that are closest to the new trajectory; those points are then used to construct the new input signal for trial 1. Another reported work introduced an improved way to construct an initial input for iterative learning control of linear systems based on stored data, with better performance than the earlier approaches; it finds the optimum selection of weights that combines the stored data to form the new input signal. A further solution is based on the presence of a system model: it directly inverts the frequency components of the reference trajectory against the frequency components of the system model and sums as many of the resulting components as possible to form the initial input signal. That theory does not give the limit at which a designer should stop summing frequency components. Every system representation is an approximation to the true behaviour, so taking the inverse is critical because of system uncertainty and model mismatch. There are also several reported works on predicting trial information.

This paper sets an upper-limit condition for constructing the initial input, using singular values and assuming the system model is available. The constructed input must not generate a high effort at the beginning, to ensure safe operation. This development is highly desirable in most industrial applications, such as robot arms and chemical batch processes. In parallel, it speeds up the learning process compared to the same method with the initial input construction omitted. The upper-limit condition guarantees a safe construction of such an input to fulfil the operation requirements.

The following section revisits the earlier work that relates the required output to the initial input selection based on the presence of the system model. The new boundary condition that governs the initial input construction is introduced in Section 3. An example of the effect of the initial input on the learning process of a gantry robot arm, illustrated through simulation results, is given in Section 4. Conclusions and future work are discussed in Section 5.

2. Initial Input Construction Based on System Knowledge: Revisited

To aid understanding of the initial input construction revisited here, a brief introduction to the Discrete Fourier Transform (DFT) is given first.

Let $u={\left[u\left(0\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}u\left(1\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\cdots \text{\hspace{0.17em}}\text{\hspace{0.17em}}u\left(N-1\right)\right]}^{\text{T}}\in {ℝ}^{N}$ be an array of N elements, then (under the necessary assumption relating to existence) the DFT of this array, denoted by $\stackrel{^}{u}$ , is defined as

${\stackrel{^}{u}}_{i}=\underset{n=0}{\overset{N-1}{\sum }}\text{ }\text{ }{u}_{n}{\text{e}}^{-j2\text{π}ni/N}$ (1)

where $\stackrel{^}{u}\in {ℂ}^{N}$ and $i=\left\{0,1,\cdots ,N-1\right\}$ . As mentioned earlier, an ILC reference trajectory has a fixed length T, and a sampling frequency must be chosen as

${f}_{s}=\frac{N}{T}$ for $N={2}^{M},\text{\hspace{0.17em}}M\in ℤ$ . The Inverse Discrete Fourier Transform (IDFT) can be derived to be

${u}_{n}=\frac{1}{N}\underset{i=0}{\overset{N-1}{\sum }}\text{ }\text{ }{\stackrel{^}{u}}_{i}{\text{e}}^{j2\text{π}ni/N}$ (2)
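The transform pair (1)–(2) can be checked numerically. The sketch below (Python with NumPy; illustrative, not part of the original work) evaluates both sums directly and confirms they agree with NumPy's FFT convention and invert each other:

```python
import numpy as np

def dft(u):
    """Direct evaluation of the sum in (1)."""
    N = len(u)
    n = np.arange(N)
    # np.outer(n, n) builds the full n*i exponent grid at once.
    return np.exp(-2j * np.pi * np.outer(n, n) / N) @ u

def idft(u_hat):
    """Direct evaluation of the sum in (2)."""
    N = len(u_hat)
    n = np.arange(N)
    return (np.exp(2j * np.pi * np.outer(n, n) / N) @ u_hat) / N

u = np.array([1.0, 2.0, 0.5, -1.0])
assert np.allclose(dft(u), np.fft.fft(u))   # same sign convention as NumPy's FFT
assert np.allclose(idft(dft(u)), u)         # the pair inverts exactly
```

In practice `np.fft.fft`/`np.fft.ifft` would be used directly, since they implement exactly this pair in O(N log N).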

Given g as the finite impulse response of a linear time-invariant (LTI) system, the convolution of g and u produces the output sequence

$y\left(q\right)=\underset{i=0}{\overset{q}{\sum }}\text{ }\text{ }g\left(q-i\right)u\left(i\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}q=0,1,\cdots ,N-1$ (3)

The DFT of y can then be calculated using

$\stackrel{^}{y}=\stackrel{^}{g}\odot \stackrel{^}{u}$ (4)

where $\odot$ is the component-wise multiplication.
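Strictly, the component-wise product (4) corresponds to circular convolution in the time domain; it coincides with the causal sum (3) when the impulse response has decayed within the trial (or the sequences are zero-padded). A minimal NumPy sketch (illustrative, not from the paper) verifying the product property:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
g = rng.standard_normal(N)   # impulse-response samples over one trial
u = rng.standard_normal(N)   # input samples

# Component-wise product in the DFT domain, as in (4)
y_hat = np.fft.fft(g) * np.fft.fft(u)
y_freq = np.real(np.fft.ifft(y_hat))

# Time-domain circular convolution for comparison
y_circ = np.array([sum(g[(q - i) % N] * u[i] for i in range(N))
                   for q in range(N)])
assert np.allclose(y_freq, y_circ)
```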

Now, given a reference trajectory ${y}_{d}$ of length N, it is required to construct an initial input vector ${u}_{0}^{*}$ such that the learning speed is improved when it is used with a suitable ILC controller. In this paper an ILC law of the form

${u}_{k+1}={u}_{k}+L{e}_{k}$ (5)

will be considered, where L is chosen as the adjoint of the process matrix G, with ${u}_{k}={\left[{u}_{k}\left(0\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}{u}_{k}\left(1\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\cdots \text{\hspace{0.17em}}\text{\hspace{0.17em}}{u}_{k}\left(N-1\right)\right]}^{\text{T}}$ , and ${e}_{k}={\left[{y}_{d}\left(0\right)-{u}_{k}\left(0\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}{y}_{d}\left(1\right)-{u}_{k}\left(1\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\cdots \text{\hspace{0.17em}}\text{\hspace{0.17em}}{y}_{d}\left(N-1\right)-{u}_{k}\left(N-1\right)\right]}^{\text{T}}$ . Let the associated LTI plant be given in the state-space form

$x\left(q+1\right)=Ax\left(q\right)+Bu\left(q\right)$

$y\left(q\right)=Cx\left(q\right)+Du\left(q\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}q=0,1,\cdots ,N-1$ (6)

where the sample time has been set at unity for notational simplicity, $x\left(\cdot \right)\in {ℝ}^{n}$ , $x\left(0\right)=0$ , and the matrices A, B, C and D are of appropriate dimensions. Then, using the lifted plant model ${y}_{k}=G{u}_{k}$ , where

$G=\left[\begin{array}{ccccc}D& 0& 0& \cdots & 0\\ CB& D& 0& \cdots & 0\\ CAB& CB& D& \cdots & 0\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ C{A}^{N-2}B& C{A}^{N-3}B& \cdots & \cdots & D\end{array}\right]$ (7)

with ${y}_{k}={\left[{y}_{k}\left(0\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}{y}_{k}\left(1\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\cdots \text{\hspace{0.17em}}\text{\hspace{0.17em}}{y}_{k}\left(N-1\right)\right]}^{\text{T}}$ , is the lifted form, the error evolution equation can easily be derived to be

${e}_{k+1}=\left(I-GL\right){e}_{k}$ (8)
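The matrix G in (7) is lower-triangular Toeplitz, built from the Markov parameters D, CB, CAB, … of the model. A hypothetical NumPy helper (the function name and SISO restriction are assumptions for illustration, not code from the paper) might assemble it as:

```python
import numpy as np

def lifted_matrix(A, B, C, D, N):
    """Assemble the N x N lower-triangular matrix G of (7) from the
    Markov parameters D, CB, CAB, ..., CA^{N-2}B of a SISO model."""
    markov = np.empty(N)
    markov[0] = D
    Ak = np.eye(A.shape[0])
    for q in range(1, N):
        markov[q] = (C @ Ak @ B).item()
        Ak = A @ Ak
    G = np.zeros((N, N))
    for q in range(N):        # each column repeats the shifted impulse response
        G[q:, q] = markov[:N - q]
    return G

# For the first-order plant x(q+1) = 0.5 x(q) + u(q), y = x, the first
# column of G is the impulse response [0, 1, 0.5, 0.25].
G = lifted_matrix(np.array([[0.5]]), np.array([[1.0]]),
                  np.array([[1.0]]), 0.0, 4)
assert np.allclose(G[:, 0], [0.0, 1.0, 0.5, 0.25])
```

With G in hand, the evolution (8) is a direct matrix iteration on the lifted error vector.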

Taking the DFT of both sides of (5) now gives

${\stackrel{^}{u}}_{k+1}={\stackrel{^}{u}}_{k}+\stackrel{^}{l}\odot {\stackrel{^}{e}}_{k}$ (9)

and likewise for (8)

${\stackrel{^}{e}}_{k+1}=\left(\stackrel{^}{I}-\stackrel{^}{g}\odot \stackrel{^}{l}\right)\odot {\stackrel{^}{e}}_{k}$ (10)

where $\stackrel{^}{I}\in {ℝ}^{N}$ and has each entry equal to unity. Repeated application leads to

${\stackrel{^}{e}}_{k+1}={\left(\stackrel{^}{I}-\stackrel{^}{g}\odot \stackrel{^}{l}\right)}^{k}\odot {\stackrel{^}{e}}_{0}$ (11)

where the power operation is applied in component-wise fashion. Now consider the error progression starting from an arbitrary initial input, ${u}_{0}$ ,

$\begin{array}{c}{‖{e}_{k+1}‖}^{2}=\underset{i=0}{\overset{N-1}{\sum }}{|{\stackrel{^}{e}}_{k+1,i}|}^{2}=\underset{i=0}{\overset{N-1}{\sum }}{|{\left(1-{\stackrel{^}{g}}_{i}{\stackrel{^}{l}}_{i}\right)}^{k}{\stackrel{^}{e}}_{0,i}|}^{2}\\ =\underset{i=0}{\overset{N-1}{\sum }}{|1-{\stackrel{^}{g}}_{i}{\stackrel{^}{l}}_{i}|}^{2k}{|{\stackrel{^}{e}}_{0,i}|}^{2}=\underset{i=0}{\overset{N-1}{\sum }}{|1-{\stackrel{^}{g}}_{i}{\stackrel{^}{l}}_{i}|}^{2k}{|{\stackrel{^}{y}}_{d,i}-{\stackrel{^}{g}}_{i}{\stackrel{^}{u}}_{0,i}|}^{2}\end{array}$ (12)

This is minimized with respect to the initial input by setting ${\stackrel{^}{u}}_{0}$ equal to

${\stackrel{^}{u}}_{0,i}^{*}=\frac{{\stackrel{^}{y}}_{d,i}}{{\stackrel{^}{g}}_{i}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=0,1,\cdots ,N-1$ (13)

This effectively generates a steady-state inverse (over the duration of the trial). The above derivation, together with a more detailed description covering cases that include system uncertainty, can be found in the cited work. In (13), however, the maximum number of frequency components to include in constructing the trial-1 initial input is not considered. This was an open question, pointed out as a possible area for further investigation. In the next section, based on work submitted by the author, a new condition is derived that bounds the number of frequency components to include in constructing the initial input, given the plant model. The new condition gives the upper limit of the initial input in terms of singular value properties.
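The frequency-domain inversion (13), truncated to the first m components, might be sketched as follows (Python/NumPy; the helper name and the conjugate-bin handling, which keeps the constructed input real-valued, are illustrative assumptions):

```python
import numpy as np

def initial_input(y_d, g_hat, m):
    """Build u0 from (13) using only frequency bins 0..m and their
    conjugate partners; all remaining bins are left at zero."""
    N = len(y_d)
    yd_hat = np.fft.fft(y_d)
    u0_hat = np.zeros(N, dtype=complex)
    # Including the conjugate (negative-frequency) bins keeps u0 real.
    idx = list(range(m + 1)) + list(range(N - m, N))
    for i in idx:
        if abs(g_hat[i]) > 1e-12:   # skip bins where the plant gain vanishes
            u0_hat[i] = yd_hat[i] / g_hat[i]
    return np.real(np.fft.ifft(u0_hat))
```

Here `g_hat = np.fft.fft(g)` for the trial-length impulse response g. Increasing m adds detail to ${u}_{0}$ but also amplifies the high-frequency model uncertainty, which is what motivates the bound derived in the next section.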

3. Setting an Upper Limit on Constructing an Initial Input for ILC

This section presents the novelty of this paper: a new condition that sets the maximum number of frequency components to include when constructing the trial-1 initial input for any selected ILC method, given the plant model. The starting point is an existing ILC design, part of which was discussed and extended with load disturbances in the author's earlier work. The major finding of that work was the following limitation on the load disturbance, expressed in terms of singular values, under which the system performs well and the error settles at a low value:

$\stackrel{¯}{\sigma }\left({d}_{k}\right)<\stackrel{¯}{\sigma }\left(\underset{i=0}{\overset{k}{\sum }}\left({\Psi }_{i}\right)-\underset{j=0}{\overset{k-1}{\sum }}\left({d}_{j}\right)-G{u}_{0}\right)-\underset{_}{\sigma }\left(\underset{h=0}{\overset{k-1}{\sum }}\left({\Psi }_{h}\right)-\underset{v=0}{\overset{k-1}{\sum }}\left({d}_{v}\right)-G{u}_{0}\right)$ (14)

where d is the load disturbance, $\Psi$ is the system output in the presence of load disturbances, given by (15)-(16) below, and $\stackrel{¯}{\sigma }\left(\cdot \right)$ and $\underset{_}{\sigma }\left(\cdot \right)$ denote the maximum and minimum singular values respectively:

${\Psi }_{k}\left(t+\delta \right)=G\left(q\right){u}_{k}\left(t\right)+{d}_{k}\left(t\right),$ (15)

${y}_{k}\left(t\right)={\Psi }_{k}\left(t\right)+{n}_{k}\left(t\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}t=0,1,\cdots ,n-1.$ (16)

$G$ is the process matrix and ${u}_{0}$ is the initial input for the first trial. Two cases are considered here. The first is to assume that the disturbance influence is still acting on the system at the start of operation; this has a very small likelihood in a quality production setting. The second is to consider system operation after the disturbance influence has vanished; in this case we can assume ${d}_{k}={d}_{k-1}={d}_{k-2}=\cdots ={d}_{0}=0$ . Proceeding with the second case, the upper limit on the initial input for the first trial is found as follows:

$0<\stackrel{¯}{\sigma }\left(\underset{i=0}{\overset{k}{\sum }}\left({\Psi }_{i}\right)-0-G{u}_{0}\right)-\underset{_}{\sigma }\left(\underset{h=0}{\overset{k-1}{\sum }}\left({\Psi }_{h}\right)-G{u}_{0}\right)$ (17)

$\underset{_}{\sigma }\left(\underset{h=0}{\overset{k-1}{\sum }}\left({\Psi }_{h}\right)-G{u}_{0}\right)<\stackrel{¯}{\sigma }\left(\underset{i=0}{\overset{k}{\sum }}\left({\Psi }_{i}\right)-G{u}_{0}\right)$ (18)

Applying standard singular value inequalities to separate the $G{u}_{0}$ terms in the last inequality leads to

$\stackrel{¯}{\sigma }\left(G{u}_{0}\right)-\underset{_}{\sigma }\left(G{u}_{0}\right)<\stackrel{¯}{\sigma }\left(\underset{i=0}{\overset{k}{\sum }}\left({\Psi }_{i}\right)\right)-\underset{_}{\sigma }\left(\underset{h=0}{\overset{k-1}{\sum }}\left({\Psi }_{h}\right)\right)$ (19)

$\stackrel{¯}{\sigma }\left(G{u}_{0}\right)<\stackrel{¯}{\sigma }\left(\underset{i=0}{\overset{k}{\sum }}\left({\Psi }_{i}\right)\right)$ (20)

where $\underset{_}{\sigma }\left(G{u}_{0}\right)$ and $\underset{_}{\sigma }\left({\sum }_{h=0}^{k-1}\left({\Psi }_{h}\right)\right)$ are assumed to be zero for simplicity. The result in (20) thus states that, for a linear repetitive system to start its operation with better performance, a designer can use the inequality as a guide when constructing the initial input. In particular, if the performance of the first few trials of a previous operation is available, that information can be used to set the initial input according to (20).
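Since $G{u}_{0}$ and the summed recorded outputs are column vectors, the largest singular value of each reduces to its Euclidean norm, so (20) can be checked with a one-line computation. A hypothetical sketch (function and argument names assumed):

```python
import numpy as np

def upper_limit_ok(G, u0, Psi_history):
    """Check condition (20).  For a column vector the only nonzero
    singular value is its Euclidean norm, so the comparison of maximum
    singular values reduces to a norm comparison."""
    lhs = np.linalg.norm(G @ u0)                      # sigma_max(G u0)
    rhs = np.linalg.norm(np.sum(Psi_history, axis=0)) # sigma_max(sum Psi_i)
    return lhs < rhs
```

`Psi_history` would hold the first few recorded trial outputs of a previous operation, as suggested in the text; a candidate ${u}_{0}$ failing the check indicates too many frequency components were included.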

4. Simulation Results

Simulation results presented in this section are obtained for the Z-axis of a gantry robot. The gantry robot shown in Figure 1 has been the benchmark for several ILC methods, both in simulation and experimentally. It represents a robot arm used in several industrial applications, especially pick-and-place operations. The gantry's performance accuracy, error convergence rate and robustness are important to a manufacturer, so its initial start is an important means of enhancing performance and reducing the error over the first trials. The gantry is constructed of three orthogonal axes; the Z-axis is the shortest, mounted over the X and Y horizontal parts, and consists of a ball-screw stage driven by a rotary brushless DC motor.

Figure 1. Gantry robot as a test facility.

The Z-axis is of a 3rd-order representation, given as

${G}_{Z}\left(s\right)=\frac{15.8869\left(s+850.3\right)}{s\left(s+353.81+461.03j\right)\left(s+353.81-461.03j\right)}$ (21)

For enhancing system performance, stability and disturbance rejection, the gantry system is fitted in a feedback loop with a PID controller, whose parameter values are given in the cited work.

One common ILC method is the adjoint algorithm, in which the equation governing the new control input is of the form ${u}_{k+1}={u}_{k}+\beta {G}^{\text{T}}{e}_{k}$ , where ${G}^{\text{T}}$ is the adjoint operator and $\beta$ is the step size. This algorithm has been proven to give monotonic convergence along the trial domain.
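The adjoint update and its monotonic trial-to-trial convergence can be illustrated on a toy lifted plant (NumPy sketch; the impulse response and the normalised step size are illustrative assumptions, not the gantry model or the paper's $\beta = 0.5$):

```python
import numpy as np

# Adjoint ILC  u_{k+1} = u_k + beta * G^T e_k  on a toy lifted plant.
N = 50
g = 0.8 ** np.arange(N)                      # assumed impulse response
G = np.zeros((N, N))
for q in range(N):                           # lower-triangular Toeplitz lift
    G[q:, q] = g[:N - q]

y_d = np.sin(2 * np.pi * np.arange(N) / N)   # reference over one trial
beta = 0.5 / np.linalg.norm(G, 2) ** 2       # keeps I - beta*G*G^T a contraction
u = np.zeros(N)                              # unpredicted start: u_0 = 0
norms = []
for k in range(10):
    e = y_d - G @ u                          # trial-k error
    norms.append(np.linalg.norm(e))
    u = u + beta * G.T @ e                   # adjoint update law

# e_{k+1} = (I - beta*G*G^T) e_k, so the error norm shrinks every trial.
assert all(n2 < n1 for n1, n2 in zip(norms, norms[1:]))
```

Replacing the zero start with a predicted ${u}_{0}$ from (13) lowers `norms[0]` and shifts the whole convergence curve down, which is the effect reported in this section.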

In this paper, the step size is chosen to be 0.5 and the system is operated for 10 trials to show the advantage of using input prediction over the unpredicted case. The input is constructed using (13) with the first 5 frequency components, and the results show the advantage of an input constructed in this way. In this example the model is treated as exact, because the results are obtained in simulation; in an experimental implementation the input construction would be very sensitive and (20) should be observed, since every model is an approximation to the exact behaviour.

Figure 2 shows reconstructions of the reference signal using frequency components 1, 2 and 3 individually, and a reconstruction using the first 5 frequency components summed together. The more frequency components are used, the more accurate the reconstruction. This idea carries over to the initial input construction, using (13) to form ${u}_{0}$ for the first trial.

Applying (20) as an upper limit on the construction of ${u}_{0}$ gives the results in the following table, where the first column is the frequency component and the second is the result of applying (20). For each of the first 5 frequency components, and for the sum of the first 5 frequency components, the norm of ${u}_{0}$ remains less than 1, so there is no harm in using the sum of the first 5 frequency components to construct ${u}_{0}$ according to (20).

Figure 3 shows the error-norm ratio for each trial of the input-prediction case over the unpredicted case, and it clearly shows the advantage of using more frequency components to construct a better initial input with a better starting error norm.

Figure 2. Reference construction from frequency components.

Figure 3. Error norms ratio plot for the predicted case over the unpredicted case.

The error norm here is over an array of 1500 points representing the reference length, $T/{T}_{s}$ , where $T$ is the length of the reference in seconds (3 s) and ${T}_{s}$ is the sampling period, 1/500 s.

Overall, the initial input construction provides a better starting error, which in turn speeds up the learning process, depending on the ILC method chosen, and enhances the repetitive system performance. Note the effect of uncertainty: the control effort clearly increases as the number of frequency components used to construct ${u}_{0}$ increases. The condition given in (20) therefore minimizes the effect of the uncertainty by limiting the first-trial control effort so that it does not exceed the bound in (20).

5. Conclusions and Future Work

This paper uses the design model previously employed to set the upper limit of load disturbances acting on a system as a guide for developing a condition that sets the upper limit of the initial input for iterative learning control strategies. Previously, no upper limit was reported for the initial input construction that would let the system operation start from the optimum choice of ${u}_{0}$ . The upper limit derived here sets the number of frequency components to use in constructing the initial input according to the design given there. The condition found is based on singular value principles and remedies the lack of an upper limit. Simulation results show the advantage of using the constructed initial input over the case where the initial input construction is omitted.

In future work, the proposed approach will be verified experimentally on the gantry. The validity of the proposed condition will also be investigated experimentally on other classes of systems, such as minimum-phase plants.

Cite this paper: Alajmi, N. , Alobaidly, A. , Alhajri, M. , Salamah, S. and Alsubaie, M. (2017) An Upper Limit for Iterative Learning Control Initial Input Construction Using Singular Values. Intelligent Control and Automation, 8, 154-163. doi: 10.4236/ica.2017.83012.
References

   Arimoto, S., Kawamura, S. and Miyazaki, F. (1984) Bettering Operation of Robots by Learning. Journal of Robotic Systems, 1, 123-140.
https://doi.org/10.1002/rob.4620010203

   Rogers, E., Galkowski, K. and Owens, D. (2007) Control System Theory and Applications for Linear Repetitive Processes. Springer (Lecture Notes in Control and Information Sciences), Berlin Heidelberg.

   Lee, J.H. and Lee, K.S. (2007) Iterative Learning Control Applied to Batch Processes: An Overview. Control Engineering Practice, 15, 1306-1318.

   Cho, W., Edgar, T.F. and Lee, J. (2008) Iterative Learning Dual-Mode Control of Exothermic Batch Reactors. Control Engineering Practice, 16, 1244-1249.

   Tayebi, A., Abdul, S., Zaremba, M.B. and Ye, Y. (2008) Robust Iterative Learning Control Design: Application to a Robot Manipulator. IEEE/ASME Transactions on Mechatronics, 13, 608-613.
https://doi.org/10.1109/TMECH.2008.2004627

   Freeman, C.T. (2011) Constrained Point-to-Point Iterative Learning Control. IFAC Proceedings, 44, 3611-3616.

   Nygren, J., Pelckmans, K. and Carlsson, B. (2014) Approximate Adjoint-Based Iterative Learning Control. International Journal of Control, 87, 1028-1046.
https://doi.org/10.1080/00207179.2013.865144

   Bristow, D.A., Tharayil, M. and Alleyne, A.G. (2006) A Survey of Iterative Learning Control. IEEE Control Systems, 26, 96-114.
https://doi.org/10.1109/MCS.2006.1636313

   Ahn, H.-S., Chen, Y.Q. and Moore, K.L. (2007) Iterative Learning Control: Brief Survey and Categorization. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 37, 1099-1121.
https://doi.org/10.1109/TSMCC.2007.905759

   Hara, S., Omata, T. and Nakano, M. (1985) Synthesis of Repetitive Control Systems and Its Application. 24th IEEE Conference on Decision and Control, Lauderdale, Florida, 11-13 December 1985, 1384-1392.

   Moon, J.H., Lee, M.N. and Chung, M.J. (2002) Repetitive Control for the Track-Following Servo System of an Optical Disk Drive. IEEE Transactions on Control Systems Technology, 6, 663-670.
https://doi.org/10.1109/87.709501

   Chen, Y.-Q., Moore, K.L., Yu, J. and Zhang, T. (2008) Iterative Learning Control and Repetitive Control in Hard Disk Drive Industry—A Tutorial. International Journal of Adaptive Control and Signal Processing, 22, 325-343.
https://doi.org/10.1002/acs.1003

   Arif, M., Ishihara, T. and Inooka, H. (2001) Incorporation of Experience in Iterative Learning Controllers Using Locally Weighted Learning. Automatica, 37, 881-888.

   Arif, M., Ishihara, T. and Inooka, H. (2002) Experience-Based Iterative Learning Controllers for Robotic Systems. Journal of Intelligent & Robotic Systems, 35, 381-396.
https://doi.org/10.1023/A:1022399105710

   Freeman, C.T., Alsubaie, M., Cai, Z., Rogers, E. and Lewin, P.L. (2011) Initial Input Selection for Iterative Learning Control. Journal of Dynamic Systems, Measurement, and Control, 133, Article ID: 054504.
https://doi.org/10.1115/1.4003096

   Freeman, C.T., Alsubaie, M., Cai, Z., Rogers, E. and Lewin, P.L. (2011) Model and Experience-Based Initial Input Construction for Iterative Learning Control. International Journal of Adaptive Control and Signal Processing, 25, 430-447.
https://doi.org/10.1002/acs.1209

   Cho, B., Owens, D.H. and Freeman, C.T. (2016) Iterative Learning Control with Predictive Trial Information: Convergence, Robustness, and Experimental Verification. IEEE Transactions on Control Systems Technology, 24, 1101-1108.
https://doi.org/10.1109/TCST.2015.2476779

   Hatonen, J.J., Owens, D.H. and Ylinen, R. (2003) A New Optimality Based Repetitive Control Algorithms for Discrete-Time Systems. Proceedings of the European Control Conference (ECC03), Cambridge.

   Freeman, C.T., Alsubaie, M.A., Cai, Z., Rogers, E. and Lewin, P.L. (2013) A Common Setting for the Design of Iterative Learning and Repetitive Controllers with Experimental Verification. International Journal of Adaptive Control and Signal Processing, 27, 230-249.
https://doi.org/10.1002/acs.2299

   Alsubaie, M. and Rogers, E. (2017) Designed Iterative Learning Control via State Feedback in Past and Current Errors Schemes Robustness and Load Disturbances Conditions. The Journal of Systems and Control Engineering.

   Ratcliffe, J.D. (2005) An Iterative Learning Control Implemented on a Multi-Axis System. PhD Thesis, School of Electronics and Computer Science, University of Southampton, Southampton.

   Hatonen, J.J., Owens, D.H. and Moore, K.L. (2004) An Algebraic Approach to Iterative Learning Control. International Journal of Control, 77, 45-54.
https://doi.org/10.1080/00207170310001638614
