Iterative learning control (ILC) [1] and repetitive control (RC) [2] are two typical learning control strategies developed for systems performing tasks repetitively. ILC aims to achieve complete tracking of the system output to a desired trajectory over a prespecified interval by updating the control input cycle by cycle, while RC addresses periodic reference tracking and periodic disturbance rejection. The contraction-mapping-based learning control [1] features simplicity, reflected especially in its use of output measurements only. However, the learning gains are not easy to determine because of the difficulty in solving norm inequalities, which may hinder the application of the conventional learning control method.
As early as the 1990s, aiming to overcome the mentioned limitation of the contraction-mapping-based method, there was intensive research on Lyapunov-like designs of iterative learning control [3]-[5] and repetitive control [6], [7]. Recently, such Lyapunov-like approaches have received further attention [8], [9], [20], [21]-[25]. In [8], [9], the learning control problems were formulated for broader classes of uncertain systems with locally Lipschitz nonlinearities and time-varying parametric and norm-bounded uncertainties. Note that in the mentioned works, the full state information is assumed to be available. However, in many applications the system state may not be available for the controller design, and it is then necessary to design output-based learning controllers within the framework of Lyapunov-like learning control theory.
For linear systems, the Kalman filter [10] and the Luenberger observer [11] are two basic practical observers that adequately address the linear state estimation problem. Observers for nonlinear uncertain systems have recently received a great deal of attention, and many designs have been proposed, such as adaptive observers [12], robust observers [13], sliding mode observers [14], neural observers [15], and fuzzy observers [16]. In the published literature, work has been done on output-feedback-based learning control. In [17], a transformation to an output feedback canonical form is applied to nonlinear uncertain systems with well-defined relative degree, but the uncertainties in the transformed dynamics must be state-independent. In [18], an adaptive learning algorithm is presented for unknown constants, linking two consecutive iterations by setting the initial value of the parameter estimate in the next iteration equal to the final value in the current iteration. The results are extended to output-feedback nonlinear systems, but the nonlinear function of the system is assumed to depend on the system output only.
Observers used in learning control are reported in [19] and [20]. The former addresses the finite-interval learning control problem while the latter addresses the infinite-interval learning control problem. The nonlinear functions in [19] are not parametrized, and the observer-based learning controller is designed in the framework of the contraction mapping approach without requiring zero relative degree. The observer used in [20] is special and complicated. By virtue of the separation principle, the state estimation observer and the parameter estimation learning law are treated separately. Lyapunov-like functions are employed, and two classes of nonlinearities, globally Lipschitz continuous functions of the state variables and locally Lipschitz continuous functions of the output variables, are both considered.
As is known, repositioning is required at the beginning of each cycle in ILC. Repositioning errors accumulate as the iteration number increases, which may eventually cause the system to diverge. The variables to be learned are assumed to be repetitive. Repetitive control requires no repositioning, but the variables to be estimated need to be periodic, and a repetitive signal may not be periodic. Repetitive learning control (RLC) has been developed recently [21]-[25], and was formally formulated in [24] as follows:
F1) every operation ends in the same finite time of duration;
F2) the desired trajectory is given a priori and is closed;
F3) the initial condition of the system at the beginning of each cycle is aligned with the final position of the preceding cycle;
F4) the time-varying variables to be learnt are iteration-independent;
F5) the system dynamics are invariant throughout all the cycles.
Unlike ILC and RC, RLC can handle finite-time-interval tracking without repositioning. In the published literature, however, there are few results on observer-based RLC.
In this paper, through Lyapunov-like synthesis, the ILC problem is addressed for a class of nonlinear systems for which only the system output measurements are available. Compared with the existing works, the main contributions of this paper are as follows. Firstly, the parametric uncertainties discussed in this paper are state-dependent, while the uncertainties treated in the existing results [17]-[19] are assumed to be output-dependent. The state-dependent terms cannot be directly used in the output-feedback controller design due to the lack of state information. Secondly, a robust learning observer is given by simply using the Luenberger observer design. Reference [20] pointed out that many conventional observers are difficult to apply to learning control systems. We clarify the possibility of designing an observer-based learning controller by using a Luenberger observer. The estimate of the output, instead of the system output itself, is applied to form the error equation, which helps to establish convergence of the system output to the desired one. Finally, the method used in the output-feedback ILC design is extended to the RLC design. To the best of our knowledge, the output-feedback RLC problem is still open. In this paper, fully saturated learning laws are developed for estimating time-varying unknowns. The boundedness of the estimates plays an important role in establishing the stability and convergence results of the closed-loop system.
The rest of the paper is organized as follows. The problem formulation and preliminaries are given in Section Ⅱ. The main results of this paper are given in Sections Ⅲ and Ⅳ, providing performance and convergence analysis of the observer-based ILC and RLC, respectively. Section Ⅴ presents simulation results and compares the ILC and RLC schemes. The final section concludes this work.
Ⅱ. PROBLEM FORMULATION AND PRELIMINARIES
Consider a class of uncertain nonlinear systems described by
$ \begin{eqnarray} &&\dot{x}(t)=Ax(t)+B(u(t)+\Theta(t)\xi(x(t), t))\nonumber\\ &&y(t)=Cx(t) \end{eqnarray} $  (1) 
where
Remark 1:
Assume that the system operates repeatedly over a specified interval $[0, T]$:
$ \begin{eqnarray} &&\dot{x}_k(t)=Ax_{k}(t)+B(u_k(t)+\Theta(t)\xi(x_k(t), t))\nonumber\\ &&y_k(t)=Cx_k(t). \end{eqnarray} $  (2) 
Given a desired trajectory
Assumption 1: For system (2), there exist positive definite matrices
$ \begin{eqnarray} PA+A^TP=-Q \end{eqnarray} $  (3) 
$ B^TP=C. $  (4) 
Assumption 2: Rank
Assumption 3: The nonlinear function
Remark 2: Assumption 1 is the common strictly positive real (SPR) condition. It guarantees the asymptotic stability of the linear part of the system, which helps us construct the Lyapunov-like function easily. It also indicates that if
For the learning controller design, saturation function
$ \begin{eqnarray} \mathit{\rm{sat}}(f)=\left\{ \begin{array}{ll} \bar{f}^1, &\mathit{\rm{if}} \ f<\bar{f}^1\\[0.5mm] f, &\mathit{\rm{if}} \ \bar{f}^1 \leq f \leq \bar{f}^2\\[0.5mm] \bar{f}^2, &\mathit{\rm{if}}\ f >\bar{f}^2 \end{array} \right. \end{eqnarray} $  (5) 
where
Lemma 1: For
$ \begin{eqnarray} {\text{tr}}((F_1-\mathit{\rm{sat}}(F_2))^T(F_2-\mathit{\rm{sat}}(F_2)))\leq 0. \end{eqnarray} $  (6) 
Proof: It follows that for the matrices
$ \begin{eqnarray} &&{\text{tr}}((F_1-\mathit{\rm{sat}}(F_2))^T(F_2-\mathit{\rm{sat}}(F_2)))\nonumber\\ &&~~~~=\sum\limits_{j=1}^{n_1}\sum\limits_{i=1}^{m}(f_{1ij}-\mathit{\rm{sat}}(f_{2ij}))(f_{2ij}-\mathit{\rm{sat}}(f_{2ij})). \end{eqnarray} $  (7) 
Let
$ \begin{eqnarray} (f_{1ij}-\mathit{\rm{sat}}(f_{2ij}))(f_{2ij}-\mathit{\rm{sat}}(f_{2ij})) \leq 0. \end{eqnarray} $  (8) 
The following property of the trace is used below:
$ \begin{eqnarray} {\text{tr}}(G^Tg_2g_1^T)={\text{tr}}(G^Tg_2g_1^T)^T=g_2^TGg_1 \end{eqnarray} $  (9) 
where
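As a sanity check, the inequality of Lemma 1 can also be verified numerically. The Python sketch below (the matrix sizes, bounds, and random sampling are our own illustrative choices, not part of the paper) applies the elementwise saturation to random matrices and confirms that the trace expression in (6) never becomes positive:

```python
import numpy as np

def sat(F, f_lo, f_hi):
    """Elementwise saturation of a matrix F onto [f_lo, f_hi], as in (5)."""
    return np.clip(F, f_lo, f_hi)

# Spot-check of Lemma 1: for any F1 with entries inside the bounds,
# tr((F1 - sat(F2))^T (F2 - sat(F2))) <= 0 for arbitrary F2.
rng = np.random.default_rng(0)
f_lo, f_hi = -1.0, 1.0
for _ in range(1000):
    F1 = rng.uniform(f_lo, f_hi, size=(3, 2))  # bounded matrix (role of Theta)
    F2 = rng.uniform(-5.0, 5.0, size=(3, 2))   # unconstrained estimate
    val = np.trace((F1 - sat(F2, f_lo, f_hi)).T @ (F2 - sat(F2, f_lo, f_hi)))
    assert val <= 1e-12
```

Each summand in (7) vanishes when the entry of $F_2$ is inside the bounds, and otherwise the two factors have opposite signs, which is why the sum cannot be positive.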
To establish stability and convergence of the repetitive learning control systems in Section Ⅳ, the following lemma is given.
Lemma 2: The sequence of nonnegative functions
$ \begin{eqnarray}\lim\limits_{k\rightarrow \infty} f_k(t)=0, \forall t\in [0, T]\nonumber\end{eqnarray} $ 
if
$ \begin{eqnarray} \lim\limits_{k\rightarrow \infty}\int_0^T f_k(\tau)d\tau=0 \end{eqnarray} $  (10) 
and
The proof of Lemma 2 can be found in [19].
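Lemma 2 can be illustrated numerically. In the sketch below (Python; the particular sequences are our own choices, and the second hypothesis of the lemma, truncated above, is taken here to be a uniform bound on the derivatives $\dot{f}_k$), the sequence $f_k(t)=\sin^2(k\pi t/T)/k$ satisfies both hypotheses and converges to zero pointwise, whereas $g_k(t)=t^k$ has vanishing integrals but unbounded derivatives and fails to converge at $t=T$:

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 20001)

for k in [10, 100, 1000]:
    f = np.sin(np.pi * k * t / T) ** 2 / k  # integral -> 0, derivatives bounded by pi/T
    g = t ** k                              # integral -> 0, derivatives unbounded
    assert np.trapz(f, t) < 1.0 / k         # L1 norm T/(2k) vanishes ...
    assert f.max() <= 1.0 / k               # ... and f_k -> 0 uniformly
    assert abs(g[-1] - 1.0) < 1e-12         # but g_k(T) = 1 for every k
```

This contrast shows why the integral condition (10) alone is not enough for pointwise convergence.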
Ⅲ. OBSERVER-BASED ILC
A. State Estimation
Let
$ \begin{eqnarray} \dot{\hat{x}}_k(t)&=&A\hat{x}_k(t)+Bu_k(t)+B\hat{\Theta}_k(t)\xi(\hat{x}_k(t))\nonumber\\ &&+\frac{1}{2}B\hat{\mu}_k(t)(y_k(t)-C\hat{x}_k(t)) \end{eqnarray} $  (11) 
where
By defining the estimation error
$ \begin{eqnarray} \delta \dot{x}_k(t)&=&\dot{x}_k(t)-\dot{\hat{x}}_k(t) \nonumber\\ &=& A\delta x_k(t)+B\tilde{\Theta}_k\xi(\hat{x}_k)+B\Theta(t)(\xi(x_k)-\xi(\hat{x}_k))\nonumber\\ &&-\frac{1}{2}B\hat{\mu}_k(t)(y_k(t)-C\hat{x}_k(t)) \end{eqnarray} $  (12) 
where
$ \begin{eqnarray} W_k^1(t)=\delta x_k^T(t)P \delta x_k(t) \end{eqnarray} $  (13) 
where
$ \begin{eqnarray} \dot{W}_k^1(t)&=&2\delta x_k^T(t)P A\delta x_k(t) +2\delta x_k^T(t)PB\tilde{\Theta}_k\xi(\hat{x}_k)\nonumber\\ &&+2\delta x_k^T(t)PB\Theta(t)(\xi(x_k)-\xi(\hat{x}_k))\nonumber\\ &&-\delta x_k^T(t)PB\hat{\mu}_k(t)(y_k(t)-C\hat{x}_k(t)). \end{eqnarray} $  (14) 
According to Assumptions 1 and 2, we have
$ \begin{eqnarray} \dot{W}_k^1(t)&\leq& -\lambda_1\|\delta x_k\|^2+2\delta x_k^T(t)PB\tilde{\Theta}_k\xi(\hat{x}_k)\nonumber\\ &&+2\|y_k-C\hat{x}_k\|\theta_m\gamma\|\delta x_k\|-\hat{\mu}_k\|y_k-C\hat{x}_k\|^2. \end{eqnarray} $  (15) 
Using the inequality
$ \begin{eqnarray} 2\|y_k-C\hat{x}_k\|\theta_m\gamma\|\delta x_k\|&\leq& \frac{\lambda_1}{2}\|\delta x_k\|^2\nonumber\\ &&+\frac{2\theta_m^2\gamma^2}{\lambda_1}\|y_k-C\hat{x}_k\|^2 \end{eqnarray} $  (16) 
it can be verified that
$ \begin{eqnarray} \dot{W}_k^1(t)&\leq&-\frac{\lambda_1}{2}\|\delta x_k\|^2+2\delta x_k^TPB\tilde{\Theta}_k\xi(\hat{x}_k)\nonumber\\ &&+\tilde{\mu}_k(t)\|y_k-C\hat{x}_k\|^2 \end{eqnarray} $  (17) 
where
Let us define the novel error function
$ \begin{eqnarray} \dot{e}_k(t)&=&C\dot{\hat{x}}_k(t)-\dot{y}_d(t)= CA\hat{x}_k(t)+CB\hat{\Theta}_k(t)\xi(\hat{x}_k(t))\nonumber\\ &&+\frac{1}{2}CB\hat{\mu}_k(t)(y_k(t)-C\hat{x}_k(t))-\dot{y}_d(t)+CBu_k(t) \end{eqnarray} $  (18) 
from which we can easily obtain the control law
$ \begin{eqnarray} u_k&=&-\hat{\Theta}_k(t)\xi(\hat{x}_k(t))+(CB)^{-1}(\dot{y}_d(t)-CA\hat{x}_k(t)\nonumber\\ &&-L_1e_k(t))-\frac{1}{2}\hat{\mu}_k(t)(y_k(t)-C\hat{x}_k(t)) \end{eqnarray} $  (19) 
where
Using the following Lyapunov function candidate
$ \begin{eqnarray} W_k^2(t)=\frac{1}{2}\|e_k(t)\|^2 \end{eqnarray} $  (20) 
and considering the control input (19) and the error dynamic (18), we obtain
$ \begin{eqnarray} \dot{W}_k^2(t) = e_k^T(t)\dot{e}_k(t)=-e_k^T(t)L_1e_k(t)\leq-\lambda_2 \|e_k\|^2 \end{eqnarray} $  (21) 
where
It should be noted that the error dynamics (18) is independent of nonlinear uncertainties in system (1), and all variables in (18) are available for controller design. This is the reason why we use
$ \begin{eqnarray} \left\{\begin{array}{l} \hat{\Theta}^{*}_k(t)=\hat{\Theta}_{k-1}(t)+2L_2(y_k(t)-C\hat{x}_k(t))\xi^T(\hat{x}_k)\\ \hat{\Theta}_k(t)=\mathit{\rm{sat}}(\hat{\Theta}^*_k(t))\\ \hat{\Theta}_{-1}(t)=\{0\}_{m\times n_1}, ~t\in [0, T] \end{array} \right. \end{eqnarray} $  (22) 
and
$ \begin{eqnarray} \left\{\begin{array}{l} \hat{\mu}^*_k(t)=\hat{\mu}_{k-1}(t)+l_3\|y_k-C\hat{x}_k\|^2\\ \hat{\mu}_k(t)=\mathit{\rm{sat}}(\hat{\mu}^*_k(t))\\ \hat{\mu}_{-1}(t)=0, \forall t\in [0, T] \end{array} \right. \end{eqnarray} $  (23) 
where
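A pointwise step of the fully saturated learning laws (22) and (23) can be sketched as follows (Python, scalar case $m=n_1=1$; the gains $L_2$, $l_3$ and the saturation bounds are illustrative placeholders chosen by us):

```python
import numpy as np

def update_estimates(theta_prev, mu_prev, y, y_hat, xi_hat,
                     L2=1.0, l3=1.0, theta_lo=-1.0, theta_hi=1.0, mu_hi=10.0):
    """One pointwise application of the fully saturated learning laws
    (22)-(23) at a fixed time t: the unsaturated update is the previous
    iteration's (saturated) estimate plus an output-error correction,
    then projected back into the known bounds."""
    err = y - y_hat                                    # y_k(t) - C x_hat_k(t)
    theta_star = theta_prev + 2.0 * L2 * err * xi_hat  # (22), scalar case
    mu_star = mu_prev + l3 * err ** 2                  # (23)
    return (np.clip(theta_star, theta_lo, theta_hi),
            np.clip(mu_star, 0.0, mu_hi))

# Saturation keeps the estimates bounded regardless of the error sequence:
theta, mu = 0.0, 0.0
for err in [3.0, -7.0, 2.5]:
    theta, mu = update_estimates(theta, mu, y=err, y_hat=0.0, xi_hat=1.0)
# final values: theta = 1.0, mu = 10.0 (both pinned at their bounds)
```

This boundedness of $\hat{\Theta}_k$ and $\hat{\mu}_k$, guaranteed by construction, is exactly what the stability analysis below exploits.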
Assumption 4: At the beginning of each cycle,
Remark 3: Assumption 4 is the initial state resetting condition. This part focuses on the design of observers. An extension of the initial state condition is given in Section Ⅳ; see Assumptions 5 and 6.
C. Convergence and Boundedness
Theorem 1: For system (1) satisfying Assumptions 1-4, let controller (19) be applied together with the fully saturated learning laws (22) and (23), where
1) all signals in the closedloop are bounded on
2)
Proof: Let us consider the following Lyapunov-like function
$ \begin{eqnarray} W_k(t)&=&W_k^1(t)+W_k^2(t)+\frac{1}{2l_3}\int_0^t \tilde{\mu}_k^2(\tau) d\tau \nonumber\\ &&+\frac{1}{2}\int_0^t {\text{tr}}[\tilde{\Theta}^T_k(\tau) L_2^{-1}\tilde{\Theta}_k(\tau)] d\tau \end{eqnarray} $  (24) 
where
For
$ \begin{eqnarray} \Delta W_k(t)&=&W_k(t)-W_{k-1}(t)\nonumber\\ &=&W_k^1(t)+W_k^2(t)\nonumber\\ &&+\frac{1}{2}\int_0^t \{{\text{tr}}[\tilde{\Theta}^T_k(\tau) L_2^{-1}\tilde{\Theta}_k(\tau)]\nonumber\\ &&-{\text{tr}}[\tilde{\Theta}^T_{k-1}(\tau)L_2^{-1}\tilde{\Theta}_{k-1}(\tau)]\} d\tau\nonumber\\ &&+\frac{1}{2l_3}\int_0^t [\tilde{\mu}_k^2(\tau)- \tilde{\mu}_{k-1}^2(\tau)] d\tau-W_{k-1}^1(t)\nonumber\\ &&-W_{k-1}^2(t). \end{eqnarray} $  (25) 
Assumption 4 implies
$ \begin{eqnarray} W_k^1(t)&=&W_k^1(0)+\int_0^t\dot{W}_k^1(\tau)d\tau \leq\int_0^t\Big(-\frac{\lambda_1}{2}\|\delta x_k\|^2\nonumber\\ &&+2\delta x_k^TPB\tilde{\Theta}_k\xi(\hat{x}_k)+\tilde{\mu}_k\|y_k-C\hat{x}_k\|^2\Big) d\tau \end{eqnarray} $  (26) 
and
$ \begin{eqnarray} W_k^2(t)=W_k^2(0)+\int_0^t\dot{W}_k^2(\tau)d\tau \leq-\lambda_2\int_0^t \|e_k\|^2 d\tau. \end{eqnarray} $  (27) 
Using the equalities
$ \begin{eqnarray} \frac{1}{2}[{\text{tr}}(\tilde{\Theta}^T_kL_2^{-1} \tilde{\Theta}_k)-{\text{tr}}(\tilde{\Theta}^T_{k-1}L_2^{-1} \tilde{\Theta}_{k-1})]\nonumber\\ =-{\text{tr}}[(\hat{\Theta}_k-\hat{\Theta}_{k-1})^TL_2^{-1} \tilde{\Theta}_k]~~~~~~~~~~~~~ \nonumber\\ -\frac{1}{2}{\text{tr}}[(\hat{\Theta}_k-\hat{\Theta}_{k-1})^T L_2^{-1}(\hat{\Theta}_k-\hat{\Theta}_{k-1})] \end{eqnarray} $  (28) 
and
$ \begin{eqnarray} \frac{1}{2l_3}\int_0^t (\tilde{\mu}_k^2-\tilde{\mu}_{k-1}^2) d\tau&=&-\frac{1}{l_3}\int_0^t \tilde{\mu}_k(\hat{\mu}_k-\hat{\mu}_{k-1})d\tau\nonumber\\ &&-\frac{1}{2l_3}\int_0^t(\hat{\mu}_k-\hat{\mu}_{k-1})^2d\tau \end{eqnarray} $  (29) 
and substituting (26), (27) into (25), it can be verified that
$ \begin{eqnarray} \Delta W_k(t) &\leq& -\frac{\lambda_1}{2}\int_0^t \|\delta x_k\|^2 d\tau-\lambda_2\int_0^t \|e_k\|^2 d\tau \nonumber\\ &&+\int_0^t 2\delta x_k^TPB\tilde{\Theta}_k\xi(\hat{x}_k)d\tau\nonumber\\ &&+\int_0^t\tilde{\mu}_k\|y_k-C\hat{x}_k\|^2 d\tau\nonumber\\ &&-\int_0^t {\text{tr}}[(\hat{\Theta}_k-\hat{\Theta}_{k-1})^TL_2^{-1} \tilde{\Theta}_k] d\tau\nonumber\\ &&-\frac{1}{l_3}\int_0^t \tilde{\mu}_k(\hat{\mu}_k-\hat{\mu}_{k-1}) d\tau-W_{k-1}^1(t)-W_{k-1}^2(t). \end{eqnarray} $  (30) 
Applying learning laws (22) and (23), inequality (6) and Lemma 1, we obtain
$ \begin{eqnarray} &&2\delta x_k^TPB\tilde{\Theta}_k\xi(\hat{x}_k)-{\text{tr}}[(\hat{\Theta}_k-\hat{\Theta}_{k-1})^T L_2^{-1} \tilde{\Theta}_k]\nonumber\\ &&~~= {\text{tr}}[(\hat{\Theta}^*_k-\mathit{\rm{sat}}(\hat{\Theta}^*_k))^TL_2^{-1}(\Theta-\mathit{\rm{sat}}(\hat{\Theta}_k^*))]\leq 0 \end{eqnarray} $  (31) 
and
$ \begin{eqnarray} &&\tilde{\mu}_k\|y_k-C\hat{x}_k\|^2-\frac{1}{l_3} \tilde{\mu}_k(\hat{\mu}_k-\hat{\mu}_{k-1}) \nonumber\\ &&~~=\frac{1}{l_3}(\mu-\mathit{\rm{sat}}(\hat{\mu}^*_k))(\hat{\mu}^*_k -\mathit{\rm{sat}}(\hat{\mu}^*_k))\leq 0. \end{eqnarray} $  (32) 
Substituting (31) and (32) into (30) gives rise to
$ \begin{eqnarray} \Delta W_k(t) &\leq& -\frac{\lambda_1}{2}\int_0^t \|\delta x_k\|^2 d\tau-\lambda_2\int_0^t \|e_k\|^2 d\tau -W_{k-1}^1(t)\nonumber\\ &&-W_{k-1}^2(t)\nonumber\\ &\leq& -\frac{\lambda_1}{2}\int_0^t \|\delta x_k\|^2 d\tau-\lambda_2\int_0^t \|e_k\|^2 d\tau\nonumber\\ &&-\lambda_3\|\delta x_{k-1}\|^2-\frac{1}{2}\|e_{k-1}(t)\|^2\leq 0 \end{eqnarray} $  (33) 
where
$ \begin{eqnarray} W_0(t)&=&\delta x_0^TP \delta x_0+\frac{1}{2}\|e_0\|^2+\frac{1}{2}\int_0^t {\text{tr}}[\tilde{\Theta}^T_0L_2^{-1} \tilde{\Theta}_0] d\tau\nonumber\\ &&+\frac{1}{2l_3}\int_0^t \tilde{\mu}_0^2 d\tau. \end{eqnarray} $  (34) 
Since
$ \left\{ \begin{align} & \hat{\Theta }_{0}^{*}(t)=2{{L}_{2}}({{y}_{0}}(t)-C{{{\hat{x}}}_{0}}(t)){{\xi }^{T}}({{{\hat{x}}}_{0}}) \\ & {{{\hat{\Theta }}}_{0}}(t)=\text{sat}(\hat{\Theta }_{0}^{*}(t)) \\ \end{align} \right. $  (35) 
$ \left\{ \begin{align} & \hat{\mu }_{0}^{*}(t)={{l}_{3}}\|{{y}_{0}}-C{{{\hat{x}}}_{0}}\|^{2} \\ & {{{\hat{\mu }}}_{0}}(t)=\text{sat}(\hat{\mu }_{0}^{*}(t)). \\ \end{align} \right. $  (36) 
Taking the derivative of
$ \begin{eqnarray} \dot{W}_0 &\leq& -\frac{\lambda_1}{2}\|\delta x_0\|^2-\lambda_2 \|e_0\|^2-\frac{1}{2}{\text{tr}}(\hat{\Theta}_0^TL_2^{-1}\hat{\Theta}_0) \nonumber\\ &&+{\text{tr}}((\hat{\Theta}_0^*-\mathit{\rm{sat}}(\hat{\Theta}_0^*))^TL_2^{-1}(\Theta-\mathit{\rm{sat}}(\hat{\Theta}_0^*))) \nonumber\\ &&+\frac{1}{2} {\text{tr}}(\Theta^TL_2^{-1} \Theta) +\frac{1}{l_3}(\mu-{\text{sat}}(\hat{\mu}_0^*))(\hat{\mu}_0^*-{\text{sat}}(\hat{\mu}_0^*))\nonumber\\ &&+\frac{1}{2l_3} \mu^2-\frac{1}{2l_3}\hat{\mu}_0^2 \leq\frac{1}{2} {\text{tr}}(\Theta^TL_2^{-1} \Theta) +\frac{1}{2l_3} \mu^2. \end{eqnarray} $  (37) 
Since
From (33), we obtain
$ \begin{eqnarray} W_k(t) &=& W_0(t)+ \sum\limits_{i=1}^{k}\Delta W_i(t)\nonumber\\ &\leq& W_0(t)-\frac{\lambda_1}{2}\sum\limits_{i=1}^{k}\int_0^t \|\delta x_i\|^2 d\tau-\lambda_2 \sum\limits_{i=1}^{k}\int_0^t \|e_i\|^2 d\tau\nonumber\\ &&-\lambda_3\sum\limits_{i=1}^{k-1}\|\delta x_i\|^2-\frac{1}{2}\sum\limits_{i=1}^{k-1}\|e_i\|^2. \end{eqnarray} $  (38) 
Since
$ \begin{eqnarray} \lim\limits_{k\rightarrow \infty} W_k(t) \leq W_0(t)-\frac{\lambda_1}{2}\lim\limits_{k\rightarrow \infty}\sum\limits_{i=1}^{k}\int_0^t \|\delta x_i\|^2 d\tau\nonumber\\ -\lambda_2 \lim\limits_{k\rightarrow \infty}\sum\limits_{i=1}^{k}\int_0^t \|e_i\|^2 d\tau -\lambda_3\lim\limits_{k\rightarrow \infty}\sum\limits_{i=1}^{k-1}\|\delta x_i\|^2\nonumber\\ -\frac{1}{2}\lim\limits_{k\rightarrow \infty}\sum\limits_{i=1}^{k-1}\|e_i\|^2. \end{eqnarray} $  (39) 
By the positiveness of
Ⅳ. OBSERVER-BASED RLC
In this section, we extend the observer-based ILC design to the RLC design for uncertain nonlinear systems. The following properties are assumed according to the repetitive learning control formulation.
Assumption 5: The desired trajectory is given to satisfy
$ \begin{eqnarray} y_d(0)=y_d(T) \end{eqnarray} $  (40) 
and
Assumption 6: At the beginning of each cycle,
$ \begin{eqnarray} x_k(0)=x_{k1}(T) \end{eqnarray} $  (41) 
$ \begin{eqnarray} \hat{x}_k(0)=\hat{x}_{k1}(T) \end{eqnarray} $  (42) 
where
Remark 4: Assumptions 5 and 6 correspond to F2) and F3). The initial state estimation condition (42) is required for the observer (11). We no longer need Assumption 4, which is a strict condition in practical systems. No extra limits on the unknown time-varying function
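The alignment condition (41) is straightforward to realize in simulation: each cycle simply starts from the previous cycle's terminal state instead of being reset. A minimal sketch (Python; the toy scalar system, cycle length, and step size are our own illustrative choices, not from the paper):

```python
import numpy as np

def run_cycle(x0, T=1.0, dt=1e-3):
    """Integrate a toy stable scalar system x' = -x + sin(2*pi*t/T)
    over one operation cycle of duration T (explicit Euler)."""
    x = x0
    for i in range(int(T / dt)):
        t = i * dt
        x = x + dt * (-x + np.sin(2.0 * np.pi * t / T))
    return x

# ILC-style resetting would force x back to the same x0 every cycle; the
# RLC alignment condition (41) instead chains the cycles together:
finals = []
x_init = 0.7
for k in range(5):
    x_init = run_cycle(x_init)   # x_k(0) = x_{k-1}(T), no repositioning
    finals.append(x_init)
```

For this stable toy system the successive terminal states contract toward the periodic steady state, which is consistent with formulation items F3) and F5).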
Theorem 2: Consider system (1) with controller (19) and the fully saturated learning laws (22) and (23), where the state estimates are given by the observer (11), over a specified time interval
1) all signals in the closedloop are bounded on
2)
Proof: Assumptions 5 and 6 imply
$ \begin{eqnarray} &&W_k^1(0)+W_k^2(0)\nonumber\\ &&~~=(x_k(0)-\hat{x}_k(0))^T P (x_k(0)-\hat{x}_k(0))+\frac{1}{2}\|\hat{y}_k(0)-y_d(0)\|^2\nonumber\\ &&~~=(x_{k-1}(T)-\hat{x}_{k-1}(T))^T P (x_{k-1}(T)- \hat{x}_{k-1}(T))\nonumber\\ &&~~~~~+\frac{1}{2}\|C\hat{x}_{k-1}(T)-y_d(T)\|^2\nonumber\\ &&~~=W_{k-1}^1(T)+W_{k-1}^2(T) \end{eqnarray} $  (43) 
where
$ \begin{eqnarray} W_k(t)&=& W_k^1(t)+W_k^2(t)+\frac{1}{2}\int_0^t \{{\text{tr}}[\tilde{\Theta}^T_k(\tau)L_2^{-1} \tilde{\Theta}_k(\tau)]\nonumber\\ &&-{\text{tr}}[\tilde{\Theta}^T_{k-1}(\tau)L_2^{-1} \tilde{\Theta}_{k-1}(\tau)]\} d\tau+\frac{1}{2l_3}\int_0^t [\tilde{\mu}_k^2(\tau)\nonumber\\ &&-\tilde{\mu}_{k-1}^2(\tau)] d\tau-W_{k-1}^1(t)-W_{k-1}^2(t)+W_{k-1}(t). \end{eqnarray} $  (44) 
In view of (17) and (21), substituting (28) and (29) into (44), we obtain
$ \begin{eqnarray} W_k(t)&\leq& -\frac{\lambda_1}{2}\int_0^t \|\delta x_k\|^2 d\tau-\lambda_2\int_0^t \|e_k\|^2 d\tau+W_k^1(0)\nonumber\\ && +\int_0^t 2\delta x_k^TPB\tilde{\Theta}_k\xi(\hat{x}_k)d\tau +\int_0^t\tilde{\mu}_k\|y_k-C\hat{x}_k\|^2 d\tau\nonumber\\ &&-\int_0^t {\text{tr}}[(\hat{\Theta}_k-\hat{\Theta}_{k-1})^TL_2^{-1} \tilde{\Theta}_k] d\tau+W_k^2(0)\nonumber\\ && -\frac{1}{l_3}\int_0^t \tilde{\mu}_k(\tau)(\hat{\mu}_k(\tau)-\hat{\mu}_{k-1}(\tau)) d\tau-W_{k-1}^1(t)\nonumber\\ && -W_{k-1}^2(t)+W_{k-1}(t). \end{eqnarray} $  (45) 
Applying inequalities (31) and (32), we have
$ \begin{eqnarray} W_k(t) &\leq& -\frac{\lambda_1}{2}\int_0^t \|\delta x_k\|^2 d\tau-\lambda_2\int_0^t \|e_k\|^2 d\tau+W_k^1(0)\nonumber\\ &&+W_k^2(0) -W_{k-1}^1(t)-W_{k-1}^2(t)+W_{k-1}(t). \end{eqnarray} $  (46) 
In addition, by the definition of
$ \begin{eqnarray} W_{k-1}(t)-W_{k-1}^1(t)-W_{k-1}^2(t)~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\ = \frac{1}{2}\int_0^t {\text{tr}}(\tilde{\Theta}^T_{k-1}L_2^{-1} \tilde{\Theta}_{k-1}) d\tau+\frac{1}{2l_3}\int_0^t \tilde{\mu}_{k-1}^2 d\tau. \end{eqnarray} $  (47) 
Therefore, in view of (43), we have
$ \begin{eqnarray} W_k(t) &\leq& -\frac{\lambda_1}{2}\int_0^t \|\delta x_k\|^2 d\tau-\lambda_2\int_0^t \|e_k\|^2 d\tau\nonumber\\ && +\frac{1}{2l_3}\int_0^t \tilde{\mu}_{k-1}^2 d\tau+W_{k-1}^1(T)+W_{k-1}^2(T)\nonumber\\ &&+\frac{1}{2}\int_0^t {\text{tr}}(\tilde{\Theta}^T_{k-1}L_2^{-1} \tilde{\Theta}_{k-1}) d\tau. \end{eqnarray} $  (48) 
It follows that
$ \begin{eqnarray} W_k(t) &\leq& \frac{1}{2}\int_0^t {\text{tr}}(\tilde{\Theta}^T_{k-1}L_2^{-1} \tilde{\Theta}_{k-1}) d\tau+\frac{1}{2l_3}\int_0^t \tilde{\mu}_{k-1}^2 d\tau\notag\\ && +W_{k-1}^1(T)+W_{k-1}^2(T)\notag\\ &\leq& \frac{1}{2}\int_0^T {\text{tr}}(\tilde{\Theta}^T_{k-1}L_2^{-1} \tilde{\Theta}_{k-1}) d\tau+\frac{1}{2l_3}\int_0^T \tilde{\mu}_{k-1}^2 d\tau\notag\\ && +W_{k-1}^1(T)+W_{k-1}^2(T). \end{eqnarray} $  (49) 
It is obvious that the right-hand side of the last inequality is actually the
$ \begin{eqnarray} W_k(t)\leq W_{k1}(T) \end{eqnarray} $  (50) 
for all
$ \begin{eqnarray} W_k(T)\leq W_{k1}(T). \end{eqnarray} $  (51) 
From above, it is clearly seen that
Setting
$ \begin{eqnarray} W_k(T) &\leq& -\frac{\lambda_1}{2}\int_0^T \|\delta x_k\|^2 d\tau-\lambda_2\int_0^T \|e_k\|^2 d\tau\nonumber\\ &&+\frac{1}{2}\int_0^T {\text{tr}}(\tilde{\Theta}^T_{k-1}L_2^{-1} \tilde{\Theta}_{k-1}) d\tau\nonumber\\ && +\frac{1}{2l_3}\int_0^T \tilde{\mu}_{k-1}^2(\tau) d\tau +W_{k-1}^1(T)+W_{k-1}^2(T)\nonumber\\ &\leq& -\frac{\lambda_1}{2}\int_0^T \|\delta x_k\|^2 d\tau-\lambda_2\int_0^T \|e_k\|^2 d\tau+W_{k-1}(T). \end{eqnarray} $  (52) 
Therefore,
$ \begin{eqnarray} W_k(T)-W_{k-1}(T) \leq -\frac{\lambda_1}{2}\int_0^T \|\delta x_k\|^2 d\tau -\lambda_2\int_0^T \|e_k\|^2 d\tau. \end{eqnarray} $  (53) 
Since
$ \begin{eqnarray} \lim\limits_{k\rightarrow \infty} \int_0^T \|\delta x_k\|^2 d\tau=0 \end{eqnarray} $  (54) 
$ \begin{eqnarray} \lim\limits_{k\rightarrow \infty}\int_0^T \|e_k\|^2 d\tau=0. \end{eqnarray} $  (55) 
Now, based on the above analysis and using Lemma 2, we can summarize the stability and convergence results as Theorem 2.
Ⅴ. ILLUSTRATIVE EXAMPLES
In this section, two illustrative examples are presented to show the design procedure and the performance of the proposed controller for the cases of ILC and RLC, respectively.
Example 1: Consider the following system
$ \begin{eqnarray} \left[\begin{array}{c} \dot{x}_{1k}\\ \dot{x}_{2k} \end{array}\right]&=&\left[\begin{array}{cc}-1 & -3\\ -2 & -2 \end{array}\right]\left[\begin{array}{c} x_{1k}\\ x_{2k} \end{array}\right]\nonumber\\&&+ \left[\begin{array}{c} 0\\1 \end{array}\right](u_k(t) +\eta(t, x_{1k}, x_{2k})) \end{eqnarray} $  (56) 
$ \begin{eqnarray} y_k(t)=[\!\begin{array}{l} 0\ \ 1 \end{array}\!]\left[\!\begin{array}{l} x_{1k}\\x_{2k} \end{array}\!\right] \end{eqnarray} $  (57) 
where
Choosing
$ \begin{eqnarray} Q=-(PA+A^TP)= \left[\begin{array}{cc}2 & 5\\ 5 & 4 \end{array}\right] \end{eqnarray} $  (58) 
and
$ \begin{eqnarray} B^TP=[\!\begin{array}{l} 0\ \ 1 \end{array}\!]\left[\!\begin{array}{ll} 1 &0\\0 &1 \end{array}\!\right]=C. \end{eqnarray} $  (59) 
Therefore, Assumption 1 is satisfied.
Observer (11), control law (19), and the fully saturated learning laws (22) and (23) are applied. The desired trajectory is given by
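To make the design procedure concrete, the following sketch (Python/NumPy) implements the observer (11), control law (19), and learning laws (22)-(23) for a single-input, single-output plant of the form (1). All numerical values here, the stable matrix A, the nonlinearity ξ, the unknown Θ(t), the gains, and the saturation bounds, are illustrative placeholders chosen by us, not the data of Example 1:

```python
import numpy as np

# Placeholder plant in the form of (1); A is Hurwitz and B^T P = C holds with P = I.
A = np.array([[-1.0, 3.0], [-2.0, -2.0]])
B = np.array([0.0, 1.0])                 # input vector (m = 1)
C = np.array([0.0, 1.0])                 # y = C x, so C B = 1
Theta = lambda t: np.sin(t)              # unknown time-varying parameter (scalar case)
xi = lambda x: np.tanh(x[0])             # Lipschitz nonlinearity (assumed form)

T, dt = 5.0, 1e-3
N = int(T / dt)
L1, L2, l3 = 5.0, 1.0, 1.0               # feedback and learning gains (placeholders)
theta_bar, mu_bar = 1.0, 10.0            # saturation bounds for the learning laws
ts = np.arange(N) * dt
yd = np.sin(2.0 * np.pi * ts / T)        # desired trajectory
yd_dot = 2.0 * np.pi / T * np.cos(2.0 * np.pi * ts / T)

theta_hat = np.zeros(N)                  # learned profiles over [0, T]
mu_hat = np.zeros(N)
max_err = []                             # sup |y_k - y_d| per iteration
for k in range(20):                      # iteration axis
    x = np.zeros(2)                      # identical initial resetting (Assumption 4)
    xh = np.zeros(2)
    th_new, mu_new = np.empty(N), np.empty(N)
    worst = 0.0
    for i in range(N):
        y, yh = C @ x, C @ xh
        e = yh - yd[i]                   # error function e_k = C x_hat_k - y_d
        # fully saturated learning laws (22)-(23)
        th = np.clip(theta_hat[i] + 2.0 * L2 * (y - yh) * xi(xh),
                     -theta_bar, theta_bar)
        mu = np.clip(mu_hat[i] + l3 * (y - yh) ** 2, 0.0, mu_bar)
        th_new[i], mu_new[i] = th, mu
        # control law (19); (C B)^{-1} = 1 here, and C A x_hat = A[1] @ xh
        u = -th * xi(xh) + (yd_dot[i] - A[1] @ xh - L1 * e) - 0.5 * mu * (y - yh)
        # plant (2) and observer (11), explicit Euler step
        x = x + dt * (A @ x + B * (u + Theta(ts[i]) * xi(x)))
        xh = xh + dt * (A @ xh + B * u + B * th * xi(xh) + 0.5 * B * mu * (y - yh))
        worst = max(worst, abs(y - yd[i]))
    theta_hat, mu_hat = th_new, mu_new
    max_err.append(worst)
```

Note that only `y` and the observer state `xh` enter the controller, reflecting the output-feedback setting, and the learned profiles `theta_hat`, `mu_hat` remain bounded by construction.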
Fig. 1 Desired trajectory, output estimation and system output in the case of ILC.
Fig. 2 State estimation errors in the case of ILC.
Fig. 3 Output tracking errors in the case of ILC.
Fig. 4 Control input in the case of ILC.
Example 2: Consider the circuit [20] described by
$ \begin{eqnarray} \left[\begin{array}{c} \dot{x}_{1k}\\ \dot{x}_{2k} \end{array}\right]&=&\left[\begin{array}{cc}-\dfrac{R_1M_2}{M_1M_2-M_3^2} & \dfrac{R_2M_3}{M_1M_2-M_3^2}\\ \dfrac{R_1M_3}{M_1M_2-M_3^2} & -\dfrac{R_2M_1}{M_1M_2-M_3^2} \end{array}\right]\left[\begin{array}{c} x_{1k}\\ x_{2k} \end{array}\right]\nonumber\\ && + \left[\begin{array}{c} \dfrac{M_2-M_3}{M_1M_2-M_3^2}\\ \dfrac{M_1-M_3}{M_1M_2-M_3^2} \end{array}\right](u_k(t)+\eta(t, x_{1k}, x_{2k})) \end{eqnarray} $  (60) 
$ \begin{eqnarray} y_k(t)= [\!\begin{array}{l} 0\ \ 2 \end{array}]\!\left[\!\begin{array}{l} x_{1k}\\x_{2k} \end{array}\!\right] \end{eqnarray} $  (61) 
where
Fig. 5 Desired trajectory, output estimation and system output in the case of RLC.
Fig. 6 The changes of state
Fig. 7 Output tracking errors in the case of RLC.
Fig. 8 Control input in the case of RLC.
The vertical quantities in Fig. 7 represent
In this paper, an observer-based iterative learning controller has been presented for a class of nonlinear systems. The uncertainty treated is parameterized into two parts: the unknown time-varying matrix-valued parameters, and a Lipschitz continuous function, which is also unknown due to the unmeasurable system states. The learning controller designed for trajectory tracking consists of parameter estimation and state estimation, the latter given by a robust learning observer. The parameter estimates are constructed by fully saturated learning algorithms, by which the boundedness of the parameter estimates is guaranteed. Further, the extension to repetitive learning control is provided. The observer-based RLC avoids the initial repositioning and does not require the strict periodicity constraint of repetitive control. The global stability of the learning system and the asymptotic convergence of the tracking error are established through theoretical derivations for both the ILC and RLC schemes.
[1]  S. Arimoto, S. Kawamura, and F. Miyazaki, "Bettering operation of robots by learning, " J. Robot. Syst., vol. 1, no. 2, pp. 123140, Jun. 1984. http://www.mendeley.com/catalog/betteringoperationrobotslearning/ 
[2]  T. Inoue, M. Nakano, and S. Iwai, "High accuracy control of servomechanism for repeated contouring, " in Proc. 10th Annu. Symposium on Incremental Motion Control Systems and Devices, Oxford, England, 1981, pp. 282292. 
[3]  T. Kuc and J. S. Lee, "An adaptive learning control of uncertain robotic systems, " in Proc. 30th IEEE Conf. Decision Control, Britain, UK, 1991, pp. 12061211. http://xueshu.baidu.com/s?wd=paperuri%3A%280b4d1429a87f48e277f45a49eae8159e%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D261560&ie=utf8&sc_us=4311581159098582858 
[4]  C. Ham, Z. Qu, and J. Kaloust, " A new framework of learning control for a class of nonlinear systems, " in Proc. American Control Conf., Seattle, USA, 1995, pp. 30243028. http://xueshu.baidu.com/s?wd=paperuri%3A%282ee75f5e918b6ab4481d1c4b92a88ead%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Flibra.msra.cn%2FPublication%2F50027489%2Fanewframeworkoflearningcontrolforaclassofnonlinearsystems&ie=utf8&sc_us=10398511950872274498 
[5]  B. H. Park, T. Y. Kuc, and J. S. Lee, "Adaptive learning control of uncertain robotic systems". Int. J. Control , vol.65, no.5, pp.725–744, 1996. DOI:10.1080/00207179608921719 
[6]  N. Sadegh, R. Horowitz, W. W. Kao, and M. Tomizuka, "A unified approach to the design of adaptive and repetitive controllers for robotic manipulators, " J. Dyn. Sys., Meas., Control, vol. 112, no. 4, pp. 618629, Dec. 1990. http://xueshu.baidu.com/s?wd=paperuri%3A%28c87521dcb0db95e84db6ebf306552a15%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fdx.doi.org%2F10.1115%2F1.2896187&ie=utf8&sc_us=11064356641441691675 
[7]  W. Messner, R. Horowitz, W. W. Kao, and M. Boals, "A new adaptive learning rule, " IEEE Trans. Autom. Control, vol. 36, no. 2, pp. 188197, Feb. 1991. http://www.mendeley.com/research/newadaptivelearningrule/ 
[8]  J. X. Xu and Y. Tan, "A composite energy functionbased learning control approach for nonlinear systems with timevarying parametric uncertainties, " IEEE Trans. Autom. Control, vol. 37, no. 11, pp. 19401945, Nov. 2002. http://xueshu.baidu.com/s?wd=paperuri%3A%285ebeb655c13f44fc6649486ce024676c%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fdx.doi.org%2F10.1109%2FTAC.2003.817004&ie=utf8&sc_us=5774042070062982352 
[9]  J. X. Xu, Y. Tan, and T. H. Lee, "Iterative learning control design based on composite energy function with input saturation, " Automatica, vol. 40, no. 8, pp. 13711377, Aug. 2004. http://www.mendeley.com/research/iterativelearningcontroldesignbasedcompositeenergyfunctioninputsaturation1/ 
[10]  A. Gelb, Iterative learning control design based on composite energy function with input saturation. MA: MIT PRESS, 1974. 
[11]  D. G. Luenberger, "Observing the state of a linear system, " IEEE Trans. Milit. Electron., vol. 8, no. 2, pp. 7480, Apr. 1964. http://xueshu.baidu.com/s?wd=paperuri%3A%286e029b21a5d73f415ca45719eebee3e5%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fdx.doi.org%2F10.1109%2FTME.1964.4323124&ie=utf8&sc_us=3116668829177157849 
[12]  G. Besançon, "Remarks on nonlinear adaptive observer design, " Syst. Control Lett., vol. 41, no. 4, pp. 271280, Nov. 2000. http://www.ixueshu.com/document/7358692e393e0106318947a18e7f9386.html 
[13]  D. Fissore, "Robust control in presence of parametric uncertainties: Observerbased feedback controller design, " Chem. Eng. Sci., vol. 63, no. 7, pp. 18901900, Apr. 2008. http://xueshu.baidu.com/s?wd=paperuri%3A%28771b5313b048968522e5ad71ea334cf3%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fdx.doi.org%2F10.1016%2Fj.ces.2007.12.019&ie=utf8&sc_us=9734836141484680245 
[14]  C. Edwards, S. K. Spurgeon, and R. J. Patton, "Sliding mode observers for fault detection and isolation," Automatica, vol. 36, no. 4, pp. 541-553, Apr. 2000.
[15]  T. Poznyak, I. Chairez, and A. Poznyak, "Application of a neural observer to phenols ozonation in water: Simulation and kinetic parameters identification," Water Res., vol. 39, no. 12, pp. 2611-2620, Jul. 2005.
[16]  J. H. Park, G. T. Park, S. H. Kim, and C. J. Moon, "Output-feedback control of uncertain nonlinear systems using a self-structuring adaptive fuzzy observer," Fuzzy Sets Syst., vol. 151, no. 1, pp. 21-42, Apr. 2005.
[17]  C. J. Chien and C. Y. Yao, "Iterative learning of model reference adaptive controller for uncertain nonlinear systems with only output measurement," Automatica, vol. 40, no. 5, pp. 855-864, May 2004.
[18]  M. French and E. Rogers, "Nonlinear iterative learning by an adaptive Lyapunov technique," Int. J. Control, vol. 73, no. 10, pp. 840-850, Jul. 2000.
[19]  A. Tayebi and J. X. Xu, "Observer-based iterative learning control for a class of time-varying nonlinear systems," IEEE Trans. Circuits Syst. I: Fundam. Theory Appl., vol. 50, no. 3, pp. 452-455, Mar. 2003.
[20]  J. X. Xu and J. Xu, "Observer-based learning control for a class of nonlinear systems with time-varying parametric uncertainties," IEEE Trans. Autom. Control, vol. 49, no. 2, pp. 275-281, Feb. 2004.
[21]  W. E. Dixon, E. Zergeroglu, D. M. Dawson, and B. T. Costic, "Repetitive learning control: A Lyapunov-based approach," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 32, no. 4, pp. 538-545, Aug. 2002.
[22]  J. X. Xu and Z. H. Qu, "Robust iterative learning control for a class of nonlinear systems," Automatica, vol. 34, no. 8, pp. 983-988, Aug. 1998.
[23]  M. X. Sun, "A Barbalat-like lemma with its application to learning control," IEEE Trans. Autom. Control, vol. 54, no. 9, pp. 2222-2225, Sep. 2009.
[24]  M. X. Sun, S. S. Ge, and I. M. Y. Mareels, "Adaptive repetitive learning control of robotic manipulators without the requirement for initial repositioning," IEEE Trans. Robot., vol. 22, no. 3, pp. 563-568, Jun. 2006.
[25]  M. X. Sun, D. W. Wang, and P. N. Chen, "Repetitive learning control of nonlinear systems over finite intervals," Sci. China Ser. F: Inf. Sci., vol. 53, no. 1, pp. 115-128, Jan. 2010.