Iterative Learning Control With Incomplete Information: A Survey
IEEE/CAA Journal of Automatica Sinica, 2018, Vol. 5, Issue 5: 885-901
Dong Shen     
Beijing University of Chemical Technology, Beijing 100029, China
Abstract: This paper conducts a survey on iterative learning control (ILC) with incomplete information and the associated control system design, which is a frontier of the ILC field. Incomplete information, including passive and active types, can cause data loss or fragmentation due to various factors. Passive incomplete information refers to incomplete data and information caused by practical system limitations during data collection, storage, transmission, and processing, such as data dropouts, delays, disordering, and limited transmission bandwidth. Active incomplete information refers to incomplete data and information caused by the man-made reduction of data quantity and quality on the premise that the given objective is satisfied, such as sampling and quantization. This survey emphasizes two aspects: the first is how to guarantee good learning and tracking performance with passive incomplete data, and the second is how to balance the control performance index and the data demand by active means. Promising research directions along this topic are also addressed, with data robustness highly emphasized. This survey is expected to improve the quantitative understanding of the restrictive relationship and trade-off between incomplete data and tracking performance, and to promote further developments of ILC theory.
Key words: data dropout, data robustness, incomplete information, iterative learning control (ILC), quantized control, sampled control, varying lengths
Ⅰ. INTRODUCTION

Many practical systems follow the same operation mode in which they repeatedly complete a given task in a finite time interval. For instance, an industrial production process generally consists of successive batches of production tasks; that is, the system completes a production batch following a given procedure within the desired time interval and then repeats it again and again. For such systems that can be clearly divided into successive operation batches, if the operation time length of each batch is identical and the operation circumstances of different batches are similar, then we can fully utilize the operation data and experience to adjust the action strategy for the next batch. This basic concept of "learning" motivates the proposal and development of iterative learning control (ILC), which is now an important branch of intelligent control [1]. In other words, ILC is a typical control strategy mimicking the learning process of a human being, of which the pivotal idea is to continuously learn the inherent repetitive factors of the system operation process from the data of completed batches such that the tracking performance is gradually improved. This control strategy imposes little requirement on system information and is thus typically a data-driven control methodology, which can effectively deal with traditional control challenges such as high nonlinearity, strong coupling, modeling difficulty, and high-precision tracking.

After three decades of development, ILC has produced a number of valuable results in both theory and applications; for details, see the survey papers and special issues [2]-[7]. We note that the invariance of system dynamics, including an identical tracking reference, identical operation length, and identical initial state, is a basic requirement of ILC, which the proposed update laws exploit to improve tracking performance. Recently, much effort has been devoted to relaxing this requirement. For example, in [8], [9], attempts have been made for nonrepetitive uncertain systems, taking into account the essential limitations of ILC in dealing with nonrepetitive factors. The case of nonrepetitive parameters was also explored in a recent paper [10], among others. Moreover, scholars are working on novel analysis and synthesis approaches other than the conventional contraction mapping method, which imposes some restrictive conditions on the systems. The repetitive process based approach has shown its effectiveness in [11]-[14], since ILC can be easily recast as a repetitive process whose dynamics and control problems have been well investigated. Various stability criteria have been studied in [11]-[14] for different problems, which can be applied to derive fruitful ILC results by suitable transformations. We note that the 2D system based approach [15] and the frequency based approach [16] are both important synthesis methods for deriving performance-guaranteed controller designs of ILC. In addition, it should be pointed out that, along with the fast developments in theoretical analysis, the application scope of ILC has been greatly enlarged, covering robotics [17], [18], dual-mode flyback inverters [19], and stroke rehabilitation systems [20]. In sum, ILC has made significant progress in both theoretical analysis and practical applications in the past decades.

In order to achieve excellent control performance, most ILC literature depends on the acquisition and utilization of full system information and operation data. That is, the data employed by the learning algorithms are assumed to have infinite precision. To this end, we have to increase the quantity and precision of sensors for complex systems to acquire more accurate information, increase the network bandwidth to transmit massive data, and increase the number of servers and improve the computation ability to guarantee good execution of complex algorithms. All of these inevitably increase the system burden and control cost. On the one hand, due to various uncertainties, practical systems may suffer data dropouts and losses during operation, which results in additional difficulty in acquiring complete information. On the other hand, if we could efficiently reduce the acquisition and computation of massive data at the cost of only a slight decrease in tracking precision and control performance, we could not only reduce the cost of hardware and software, but also increase operational efficiency and system robustness. In consideration of the above two aspects, it is of great theoretical and practical significance to design data-driven ILC algorithms with incomplete information such that high control performance is still achieved. We note that the influence of incomplete information on the tracking performance of data-driven ILC is essentially a robustness problem of ILC. It is worth pointing out that such a robustness problem is different from the traditional model-based robustness problem. That is, the former emphasizes the perspective of data, which focuses on the inherent restriction between incomplete information and control performance, whereas the latter emphasizes the perspective of the model, which concentrates on robustness with respect to unmodeled dynamics.

In practical applications, various factors can lead to the incomplete information problem, including both objective and subjective factors. To make our exposition easy to follow, we classify the incomplete information scenarios into two categories: passive incomplete information and active incomplete information. Passive incomplete information refers to incomplete data and information caused by practical system limitations during data collection, storage, transmission, and processing, such as sensor/actuator saturation, data dropouts, communication delays, packet disordering, and limited transmission bandwidth. This incomplete information problem is common in networked control systems, which are widely employed in engineering implementations due to their high flexibility and robustness. Active incomplete information refers to incomplete data and information caused by the man-made reduction of data quantity and quality on the premise that the specified control objective is satisfied, such as sampling and quantization. By sampling, we acquire the operation data of a continuous-time system at a specified frequency only and skip the information between adjacent sampling time instants. By quantization, we map a value in an interval to an element of a finite or countably infinite candidate set, which is common in analog-to-digital conversion. Clearly, both sampling and quantization reduce the amount of data, which lightens the burden of acquisition, storage, and transmission and increases the operating efficiency of the system. Therefore, it is of great importance to investigate how incomplete information influences control performance, determine how large the influence is, and establish how to overcome the influence.

We note that control design and analysis with both passive and active incomplete information have produced many results in traditional control methodologies, especially in the field of networked control systems. However, ILC differs from traditional control methodologies in that it considers dual evolution along both the time axis and the iteration axis. The kernel dynamics evolve along the iteration axis, which is essentially different from the time-axis-based evolution of traditional system dynamics. Consequently, the results in networked control systems cannot be extended to ILC directly. Indeed, in the ILC field, related results are very few and there are many open problems. Moreover, for learning control with incomplete information, it is most important to consider the data robustness to incomplete information and the associated overall design of the control systems; that is, it is important to understand the inherent restriction between incomplete information and control performance in a novel framework.

This paper is devoted to providing a survey of ILC with incomplete information, where we address the recent progress on ILC with passive incomplete information such as data dropouts, communication delays, and iteration-varying length, as well as with active incomplete information such as sampling and quantization. We will give a research framework for various incomplete information problems from the perspective of design and analysis techniques. Moreover, we provide a primary discussion on the data robustness and related topics in ILC with incomplete information. It is expected that the survey can help the reader to grasp the overall view of this topic and comprehend the fundamental techniques. The structure of the overview is shown in Fig. 1. We note that, to some extent, terminal ILC and point-to-point ILC can be regarded as a type of incomplete information. The methods for this issue have been well reviewed in [5] and thus will not be repeated here.

Fig. 1 Main structure of the overview

The rest of this paper is arranged as follows. Section Ⅱ gives the basic formulation, design and analysis techniques, and primary convergence results of ILC. In Section Ⅲ, the recent progress on ILC with passive incomplete information is discussed, where the issues of random data dropouts, communication delays and limits, and iteration-varying lengths are elaborated, respectively. In Section Ⅳ, we proceed to review the progress on ILC with active incomplete information, where the sampling and quantization issues are emphasized. The data robustness and promising research directions are expounded in Section Ⅴ. Section Ⅵ concludes the paper with remarks.

Notations: Throughout the paper, we use $k$ and $t$ to denote the iteration index and time index, respectively. $\|\cdot\|$ denotes an unspecified but well-defined norm of a vector or matrix. $\mathbb{P}(\cdot)$ denotes the probability of the indicated event and $\mathbb{E}$ denotes the mathematical expectation of the indicated random variable.

Ⅱ. ILC BACKGROUNDS

In this section, we provide the basic formulation of ILC as well as the primary design and analysis techniques. To this end, we first present the essential principle of ILC. In particular, the fundamental idea of ILC is to improve the tracking performance for a given reference along the iteration axis. The main concept of networked ILC is shown in Fig. 2, where $y_d$ denotes the reference trajectory. At the $k$ th iteration, the input $u_k$ is fed to the plant and the corresponding system output is denoted by $y_k$ . Generally, $u_k$ is not good enough and therefore, the tracking error at the $k$ th iteration, $e_k=y_d-y_k$ , is nonzero. In this case, the input for the next iteration (i.e., the $(k+1)$ th iteration) is constructed as a function of the inputs and tracking errors of previous iterations, which is usually specified as a linear combination for simplicity of the algorithm. Then, the newly generated input $u_{k+1}$ is transmitted to the plant and stored in the memory for subsequent updating. Consequently, a closed feedback loop is formed along the iteration axis. In other words, ILC can be viewed as an iteration-based feedback control methodology. In addition, the system should be repeatable; that is, the given tracking task is iteration-invariant, the system can be reset to the same initial state, and the operation process is completed in the same time interval. In other words, repetition is the inherent requirement for learning systems.

Fig. 2 Framework of networked ILC

Now we proceed to the basic formulation of ILC for discrete-time systems. Consider the following discrete-time linear time-invariant system:

$ \begin{split} x_k(t+1)&=Ax_k(t)+Bu_k(t)\\ y_k(t)&=Cx_k(t) \end{split} $ (1)

where $x_k(t)\in\mathbb{R}^n$ , $u_k(t)\in\mathbb{R}^p$ , and $y_k(t)\in\mathbb{R}^q$ denote the system state, input, and output, respectively. The subscript $k$ denotes the iteration index, and $t$ labels the time instant in an iteration with $t=0, 1, \ldots, N$ , where $N$ is the iteration length. Matrices $A$ , $B$ , and $C$ are system matrices with appropriate dimensions. If we append the subscript $t$ to these matrices, i.e., $A_t$ , $B_t$ , and $C_t$ , the system becomes time-varying.

We denote the reference trajectory as $y_d(t)$ , $t=0, 1, \ldots, N$ . The general control objective for ILC is to seek a suitable updating algorithm such that the generated input sequence can drive the corresponding output $y_k(t)$ to track $y_d(t)$ asymptotically as the iteration number $k$ increases.

We assume the initial state to be reset to the desired one at each iteration, which is the well-known identical initialization condition (i.i.c.). That is, $x_k(0)=x_0$ , $\forall k$ , where $x_0$ satisfies $y_d(0)=Cx_0$ . If such a condition is not satisfied, it leads to an initial-state-shift problem, which has been deeply studied in ILC. The most common relaxation is the bounded uncertain initial state assumption; that is, the initial state $x_k(0)$ lies in a small neighborhood of the desired one, i.e., $\|x_k(0)-x_0\|\leq \epsilon$ , where $\|\cdot\|$ denotes some predefined norm.

Note that the correction mechanism of ILC is to employ the tracking error information of previous iterations to adjust the input signal. To this end, denote the tracking error $e_k(t)=y_d(t)-y_k(t)$ , $\forall t$ . Then, the updating algorithm for generating $u_{k+1}(t)$ is actually a function of previous inputs $u_k(t)$ and errors $e_k(t)$ , of which the general form is

$ \begin{align} u_{k+1}(t)=h(u_k(\cdot), \ldots, u_0(\cdot), e_k(\cdot), \ldots, e_0(\cdot)) \end{align} $ (2)

where $h(\cdot)$ is a function to be designed in practical applications. When the update depends only on the information of the last iteration, it is called a first-order ILC update law; otherwise, it is called a high-order ILC update law. To save memory size and enhance the operation efficiency, most ILC update laws are of first-order, i.e.,

$ \begin{align*} u_{k+1}(t)=h(u_k(\cdot), e_k(\cdot)). \end{align*} $

Additionally, the update law is usually linear for simplicity. A simple but common update law is as follows:

$ \begin{align}\label{Plaw} u_{k+1}(t)=u_k(t)+Ke_k(t+1) \end{align} $ (3)

where $K$ is the learning gain matrix and also the design parameter. In (3), $u_k(t)$ can be viewed as the current input command, while $Ke_k(t+1)$ is the innovation term. The update law (3) is called P-type. If the innovation term is replaced by $K[e_k(t+1)-e_k(t)]$ , the update law is called D-type.

For system (1) and update law (3), a basic convergence condition on $K$ is that the following inequality is fulfilled,

$ \begin{align*} \|I-CBK\| < 1 \end{align*} $

where $I$ denotes the identity matrix. Then, we have $\|e_k(t)\|\rightarrow0$ as $k\rightarrow\infty$ . This condition can be easily derived from the lifted formulation given below. We observe that the system matrix $A$ is not involved in this convergence condition, which originates from the essential update mechanism of ILC. It also reveals that ILC can handle more system unknowns for a precise tracking task.
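To illustrate the P-type law (3) and the convergence condition above, the following Python sketch simulates a low-order instance of system (1). The matrices $A$, $B$, $C$, the gain $K$, and the reference are illustrative choices (not taken from any cited work), selected so that $|1-CBK|<1$ holds.

```python
import numpy as np

# Illustrative instance of system (1); A, B, C, K, and the reference are
# example values (not from the survey), chosen so that |1 - CBK| = 0.4 < 1.
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
K = 0.6                                   # learning gain in the P-type law (3)

N = 50                                    # iteration length
y_d = np.sin(2 * np.pi * np.arange(N + 1) / N)   # reference y_d(t), t = 0,...,N
x0 = np.zeros((2, 1))                     # identical initialization condition

def run_iteration(u):
    """Simulate one iteration of system (1); return y_k(t) for t = 0,...,N."""
    x = x0.copy()
    y = np.zeros(N + 1)
    y[0] = (C @ x).item()
    for t in range(N):
        x = A @ x + B * u[t]
        y[t + 1] = (C @ x).item()
    return y

u = np.zeros(N)                           # u_k(t), t = 0,...,N-1
for k in range(30):
    y = run_iteration(u)
    e = y_d - y                           # e_k(t) = y_d(t) - y_k(t)
    u = u + K * e[1:]                     # P-type law (3): u_{k+1}(t) = u_k(t) + K e_k(t+1)
    print(f"iteration {k}: max |e_k(t)| = {np.max(np.abs(e[1:])):.2e}")
```

With these values the contraction factor is $|1-CBK|=0.4$, so the printed maximum error decreases roughly geometrically along the iterations.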

For discrete-time ILC, the lifting technique is a useful tool to transform the two-axis-based evolution dynamics into one-axis-based evolution dynamics. To see this point, considering system (1) and learning law (3) and noting that the iteration length is $N$ , we define

$ \begin{align*} U_k&=[u_k^T(0), u_k^T(1), \ldots, u_k^T(N-1)]^T\\ Y_k&=[y_k^T(1), y_k^T(2), \ldots, y_k^T(N)]^T \end{align*} $

as the lifted supervectors of input and output at the $k$ th iteration, respectively. Denote

$ {G}=\left[ \begin{array}{ccccc} CB &0 &0&\ldots&0\\ CAB&CB&0&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots \\ CA^{N-1}B&CA^{N-2}B&\ldots &\ldots&CB \end{array} \right] $

then we have

$ Y_k={G}U_k+{d} $

where

$ {d}=[(CAx_0)^T, (CA^2x_0)^T, \ldots, (CA^Nx_0)^T]^T. $

Similarly, we can define $Y_d=[y_d^T(1), y_d^T(2), \ldots, y_d^T(N)]^T$ and $E_k=[e_k^T(1), e_k^T(2), \ldots, e_k^T(N)]^T$ , then it leads to

$ U_{k+1}=U_k+{K}E_k $

where ${K}=\mbox{diag}\{K, K, \ldots, K\}$ . By simple calculation, one has

$ \begin{align*} E_{k+1}= &Y_d-Y_{k+1}=Y_d-{G}U_{k+1}-{d}\\ = &Y_d-{G}U_k-{GK}E_k-{d}\\ = &E_k-{GK}E_k\\ = &(I-{GK})E_k. \end{align*} $

Consequently, noting that ${GK}$ is a lower block-triangular matrix with the diagonal blocks being $CBK$ , we can clearly obtain the above convergence condition $\|I-CBK\| < 1$ . Moreover, with the lifting technique, the time instant variable $t$ is removed from the new formulation; that is, the time evolution dynamics within an iteration have been integrated into ${G}$ , whereas the relationship between adjacent iterations is highlighted. Indeed, the lifting technique provides us with an intrinsic understanding of the principle of ILC.
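The lifted recursion $E_{k+1}=(I-{GK})E_k$ can also be checked numerically. The sketch below builds ${G}$ and ${K}$ for the same illustrative matrices used above (again, example values rather than data from the survey) and confirms that the diagonal of $I-{GK}$ consists of $1-CBK$, so its spectral radius is below one.

```python
import numpy as np

# Build the lifted matrix G for the same illustrative A, B, C used above.
A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
K, N = 0.6, 50

G = np.zeros((N, N))
for i in range(N):                 # block (i, j) of G is C A^{i-j} B for j <= i
    for j in range(i + 1):
        G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

K_lift = K * np.eye(N)             # K = diag{K, ..., K}
M = np.eye(N) - G @ K_lift         # error transition matrix: E_{k+1} = M E_k

# M is lower triangular, so its eigenvalues are its diagonal entries 1 - CBK.
print("diagonal entries of I - GK:", np.unique(np.round(np.diag(M), 6)))
print("spectral radius of I - GK :", np.max(np.abs(np.linalg.eigvals(M))))
```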

At the end of this section, we remark that the asymptotical tracking performance is derived according to the tracking error $e_k(t)$ directly in the above statements. If we have additional assumptions on the reference trajectory $y_d(t)$ that it is realizable in the sense that there exists a unique desired input $u_d(t)$ such that $Y_d={G}U_d+{d}$ , where $U_d=[u_d^T(0), u_d^T(1), \ldots, u_d^T(N-1)]^T$ , then the proof is usually conducted by showing $U_k\rightarrow U_d$ as $k\rightarrow\infty$ . For a system with stochastic noises, this transformation is more convenient for convergence analysis. In sum, if the existence of a unique desired input is guaranteed according to the specified tracking reference, we can prove the asymptotical convergence of the input sequence. The output convergence to the desired reference is a direct corollary. If the uniqueness of the desired input is not available, we can either prove the convergence of the input sequence to the set of all possible desired inputs or verify the convergence of the output to the reference directly.

Ⅲ. ILC WITH PASSIVE INCOMPLETE INFORMATION

In this section, we provide an in-depth survey of ILC with passive incomplete information, where we concentrate on random incomplete information scenarios such as random data dropouts, communication delays and limits, and iteration-varying lengths. The common factor of these scenarios is that their information loss is due to practical conditions and environments. We note that other hardware limitations such as sensor/actuator saturation may also reduce the quality of data and information; however, they are omitted in this paper as they are generally deterministic.

A. Random Data Dropouts

From Fig. 2 it is seen that the measured output and the generated input are transmitted through networks. Due to data congestion, limited bandwidth, and linkage faults, a data packet may be lost during transmission. The data transmission has two alternative states: successful transmission and loss. Thus, the data dropout is usually described by a random binary variable, say $\gamma_k(t)$ for the data packet at time instant $t$ of the $k$ th iteration. In particular, the variable $\gamma_k(t)$ is set to 1 if the corresponding data packet is successfully transmitted, and 0 otherwise. Indeed, whether the data dropout occurs or not can be regarded as a switch that opens and closes the network in a random manner. Generally, to describe the random data dropout, we need to establish a suitable mathematical model for the binary variable $\gamma_k(t)$ . Specifically, we have the following three most common models; a simulation sketch of these models is given after the list.

1) Random sequence model (RSM): For each time instant $t$ , the data dropout is random without assuming any specific probability distribution, but there exists a positive integer $K\geq 1$ such that, during arbitrary $K$ successive iterations, the data packet is successfully transmitted at least once.

2) Bernoulli variable model (BVM): The random variable $\gamma_k(t)$ is independent for different time instants $t$ and iteration number $k$ . Moreover, $\gamma_k(t)$ obeys a Bernoulli distribution with

$ \begin{align} \label{Bernoulli} \mathbb{P}(\gamma_k(t)=1)=\overline{\gamma}, \quad \mathbb{P}(\gamma_k(t)=0)=1-\overline{\gamma} \end{align} $ (4)

where $\overline{\gamma}=\mathbb{E}\gamma_k(t)$ with $0 < \overline{\gamma} < 1$ .

3) Markov chain model (MCM): The random variable $\gamma_k(t)$ is independent for different time instants $t$ . Moreover, for an arbitrary fixed $t$ , the evolution of $\gamma_k(t)$ along the iteration axis follows a two-state Markov chain, of which the probability transition matrix is

$ \begin{align} \label{Markov} P=\left[\begin{array}{cc} P_{11}&P_{10} \\ P_{01}&P_{00} \end{array}\right] =\left[\begin{array}{cc} \mu&1-\mu \\ 1-\nu&\nu \end{array}\right] \end{align} $ (5)

with $0 < \mu, \nu < 1$ , where $P_{11}=\mathbb{P}(\gamma_{k+1}(t)=1\mid\gamma_k(t)=1)$ , $P_{10}=\mathbb{P}(\gamma_{k+1}(t)=0\mid\gamma_k(t)=1)$ , $P_{01}=\mathbb{P}(\gamma_{k+1}(t)=1\mid\gamma_k(t)=0)$ , $P_{00}=\mathbb{P}(\gamma_{k+1}(t)=0\mid\gamma_k(t)=0)$ .
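The following Python sketch illustrates how a dropout sequence $\{\gamma_k(t)\}$ for a fixed time instant $t$ can be generated under BVM and MCM, and how the bounded-run requirement of RSM can be checked; all parameter values ($\overline{\gamma}$, $\mu$, $\nu$, the number of iterations, and $K$) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_iter = 20                        # number of iterations shown; value is arbitrary

# BVM: gamma_k(t) ~ Bernoulli(gamma_bar), i.i.d. along the iteration axis.
gamma_bar = 0.8
bvm = (rng.random(n_iter) < gamma_bar).astype(int)

# MCM: two-state Markov chain along k with transition matrix (5).
mu, nu = 0.9, 0.6                  # P(1 -> 1) = mu, P(0 -> 0) = nu
mcm = np.zeros(n_iter, dtype=int)
mcm[0] = 1
for k in range(1, n_iter):
    p_one = mu if mcm[k - 1] == 1 else 1.0 - nu
    mcm[k] = int(rng.random() < p_one)

# RSM assumes no distribution, only that every K successive iterations
# contain at least one successful transmission: sum_{i=0}^{K-1} gamma_{k+i} >= 1.
def satisfies_rsm(seq, K):
    return all(sum(seq[k:k + K]) >= 1 for k in range(len(seq) - K + 1))

print("BVM sample:", bvm)
print("MCM sample:", mcm)
print("BVM sample happens to satisfy RSM with K = 3:", satisfies_rsm(bvm.tolist(), 3))
```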

We first remark on the inherent connections among the above three models. Clearly, BVM is a special case of MCM, as MCM reduces to BVM when $\mu+\nu=1$ . RSM differs from both BVM and MCM as it requires no probability distribution or statistical property of the random variable $\gamma_k(t)$ . However, compared with BVM and MCM, RSM pays the price that the length of successive data dropouts must be bounded. In particular, both BVM and MCM admit arbitrarily long successive data dropouts with a suitable probability of occurrence. Consequently, RSM cannot cover BVM/MCM and vice versa. The range relationship of these models is shown in Fig. 3. It is worth pointing out that RSM implies that the data dropout is not totally stochastic. Moreover, BVM differs from MCM because the data dropout occurs independently along the iteration axis for BVM, while it occurs dependently for MCM. This point also explains why MCM is more general than BVM.

Fig. 3 Data dropout models

From the definition of RSM, we note that RSM only requires an upper bound on the number of successive data dropouts along the iteration axis for every time instant $t$ . In particular, it is required that the information packet be received at least once during any $K$ successive iterations; that is, $\sum_{i=0}^{K-1}\gamma_{k+i}(t)\geq 1$ for all $k\geq 1$ , $\forall t$ . Therefore, the maximum length of successive data dropouts is $K-1$ . It is clear that when $K=1$ no data dropout occurs and when $K=2$ no successive data dropouts occur. Moreover, the value of $K$ is an index of the data dropout level. However, it is not sufficient to depict the influence of data dropouts, because $K$ corresponds to the worst case of data dropouts rather than the general case.

To clearly describe the average level of data dropouts along the iteration axis, we introduce a concept called the data dropout rate (DDR), which is defined as $\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^n\big(1-\gamma_k(t)\big)$ . For RSM, we note that a larger $K$ generally corresponds to a higher DDR and vice versa; however, $K$ and the DDR are not necessarily positively correlated. In other words, the DDR is another important index of the average level of data dropouts and it should be additionally clarified since we assume no probability property for RSM. For BVM, the mathematical expectation $\overline{\gamma}$ in (4) is closely related to the DDR in light of the law of large numbers; that is, the DDR is equal to $1-\overline{\gamma}$ . Specifically, the data dropout is independent along the iteration axis; thus, $\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^n\big(1-\gamma_k(t)\big) =1-\mathbb{E}\gamma_k(t)=1-\overline{\gamma}$ . If $\overline{\gamma}=0$ , which implies that the network is completely broken down, then no information can be received from the plant, and thus no algorithm can be applied to improve the tracking performance. If $\overline{\gamma}=1$ , which implies that no data dropout occurs, then the framework reduces to the classical ILC problem. For MCM, the transition probabilities $\mu$ and $\nu$ denote the average levels of retaining the same state for successful transmission and loss, respectively. By solving the equation $\pi P=\pi$ , where $P$ is given in (5), we obtain the stationary distribution $\pi$ as follows,

$ \begin{align} \pi=\left[\frac{1-\nu}{2-\mu-\nu}, \frac{1-\mu}{2-\mu-\nu}\right]. \end{align} $ (6)

Then, DDR for MCM is $\frac{1-\mu}{2-\mu-\nu}$ . In short, we can obtain the DDR for both BVM and MCM as we have the additional probability distribution of these two models.
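As a quick numerical check of (6) and the DDR expressions above, the sketch below computes the stationary distribution of the MCM transition matrix (5) for illustrative $\mu$ and $\nu$ and compares it with the closed-form formula; the second component is the DDR $\frac{1-\mu}{2-\mu-\nu}$.

```python
import numpy as np

# MCM transition matrix (5) with illustrative transition probabilities.
mu, nu = 0.9, 0.6
P = np.array([[mu, 1 - mu],
              [1 - nu, nu]])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
eigval, eigvec = np.linalg.eig(P.T)
pi = np.real(eigvec[:, np.isclose(eigval, 1.0)]).ravel()
pi = pi / pi.sum()

pi_formula = np.array([(1 - nu) / (2 - mu - nu),
                       (1 - mu) / (2 - mu - nu)])      # formula (6)
print("pi (numerical):", pi)
print("pi (formula)  :", pi_formula)
print("DDR for MCM   :", pi[1])                        # (1 - mu)/(2 - mu - nu)
print("DDR for BVM with gamma_bar = 0.8:", 1 - 0.8)    # DDR = 1 - gamma_bar
```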

Taking the recent research literature into account, we observe that the progress can be reviewed from five perspectives: system types, data dropout models, dropout positions, update schemes, and analysis techniques, as is shown in Fig. 4. In the past decade, ILC under random data dropouts has been fully developed in all the perspectives; however, there are still open problems for further research.

Fig. 4 The research framework of ILC with data dropouts

1) Analysis Techniques: For smooth reading, we first review the analysis techniques and the related convergence results, especially the senses in which convergence holds, taking into account the randomness of data dropouts as well as possible stochastic noises. We review the papers of the main research groups on this issue to provide a basic outline of recent works.

Ahn et al. provided early attempts at ILC for linear systems in the presence of data dropouts [21]-[23] using the Kalman filtering based technique, which was first proposed by Saab in [24]. The main difference among the contributions lies in the descriptions of data dropouts. In particular, the first paper [21] assumed that the whole output vector was treated as a packet, whereas this assumption was relaxed in [22] to the case where only partial information of an output vector may be lost. Moreover, in [23] both data dropouts and delayed control signals were taken into account. In [24], the input was derived by optimizing the input error covariance and thus the mean-square convergence of the input sequence was obtained. Therefore, [21]-[23] all contributed mean-square convergence results.

Bu et al. contributed a different research angle to this problem in [25]-[29]. First, by using the exponential stability theory of asynchronous dynamical systems, which was given by Hassibi et al. in [30], the convergence of both first- and high-order update laws was established under an existence assumption of certain quadratic Lyapunov functions. Such a technique is not easy to extend to other systems, and the authors later used an expectation-based transform technique to derive the convergence for linear systems. In particular, in [26] the recursion of the tracking errors along the iteration axis involved the random data dropout variable, and the randomness was eliminated by taking mathematical expectation on both sides. As a result, only convergence in the expectation sense was obtained. The techniques were then extended to nonlinear systems in [27], where an inequality of the input error rather than a recursion was obtained due to the nonlinearity. Moreover, in [28], a new $H_\infty$ framework was defined with the help of lifting techniques, and the ILC problem was resolved under the newly introduced framework. In particular, an $H_\infty$ performance index along the iteration axis and the asymptotical convergence were obtained, and the design condition for the learning gain matrices was solved through LMI techniques. Furthermore, in [29] the widely used 2D systems approach was revisited for the case with data dropouts. Specifically, a 2D system involving the dropout variables was derived and a mean-square asymptotic stability technique for 2D systems [31] was applied to deduce the convergence. Additionally, an LMI-based controller design was also provided.

Liu and Ruan considered the problem using the traditional contraction mapping method in [32]-[34]. In [32], both linear and affine nonlinear systems were taken into account, where the data dropouts were assumed to occur at both the output and input sides. The recursion of the input error was first taken with an absolute-value operator and an expectation operator, and then the convergence in the expectation sense was derived using a technical lemma on contraction with respect to all previous iterations. As a result, the design condition for the learning gains is fairly restrictive. A similar problem was also addressed in [33] following the same procedures as [32], where the difference between the two papers lay in the renewal of output information. When removing the data dropout at the input side, results for both intermittent and successive update algorithms were also given in [34]. To recap, in these results, in order to allow general successive data dropouts along the iteration axis, a restrictive convergence property for nonnegative sequences was derived and employed, which in turn may limit their applications.

Shen et al. considered random data dropouts for stochastic systems in [35]-[42], where stochastic approximation was employed to derive the almost-sure and mean-square convergence. First, Shen and Wang proposed the RSM for data dropouts in [35] for both linear and nonlinear systems with stochastic noises. The almost-sure convergence was obtained by introducing a decreasing sequence to suppress the noise influence and improve the input signal. However, in [35], the control direction was assumed to be known a priori, and this restriction was removed in [36], where a novel direction probing mechanism was employed. When considering the BVM, [37], [38] also addressed both intermittent and successive update schemes with a strict almost-sure convergence analysis for linear and nonlinear systems, respectively. Note that stochastic noises are involved in the systems; thus, the controller design and convergence analysis are distinct from the existing related literature. Detailed performance comparisons between the two types of algorithms and for the related design parameters were also provided in [37], [38]. Moreover, the general data dropout case, i.e., the networks at both the output and input sides suffering loss, was considered in [39]-[41] for deterministic linear systems, stochastic linear systems, and nonlinear systems, respectively. In these three papers, the data dropout was only described as a Bernoulli variable without any further restrictions on successive dropouts. Note that the input fed to the plant and the one generated at the learning controller may be different due to the lossy network at the input side. Thus, the asynchronism between the two inputs should be well depicted. In fact, such asynchronism was modeled as a Markov chain and then the almost-sure and mean-square convergence were established in these papers. The first attempt for data dropouts modeled by a Markov chain was given in [42]. For both noise-free and stochastic linear systems, a unified framework was established for the design and analysis of ILC for three models, namely, RSM, BVM, and MCM. Both mean-square and almost-sure convergence of the input sequence to the desired input were strictly established. In short, the stochastic approximation technique is successfully applied to systems with stochastic noises and random data dropouts in the above papers.

There are scattered results on this topic, such as [43]-[47]. In [43], the authors contributed a detailed analysis of the effect of data dropouts. In particular, when only a single packet at the output side or the input side was dropped, the fundamental influence of data dropouts on tracking performance was carefully evaluated, revealing that neither contraction nor expansion arose. This technique was then extended in [44] to study the general data dropout case; that is, the networks at both the output and input sides suffer data dropouts. In [45], both data dropouts and communication delays were jointly considered, where the expectation operator and the traditional contraction mapping technique with the $\lambda$ -norm were applied in sequence to show the convergence in the expectation sense. In [46], singular coupled systems were investigated for a finite-iteration tracking problem, where the basic contraction for the tracking error was established under suitable norms. In [47], the ILC problem for multi-agent systems with finite-level quantization and random packet losses was addressed, where the packet losses occurring in the communication networks among agents were modeled by BVM. We note that a decreasing sequence in [47], which originated from stochastic approximation theory, ensures the asymptotical convergence.

To recap, the main techniques for addressing random data dropouts either eliminate the randomness by taking mathematical expectation or project the problem into a traditional analysis framework for stochastic systems using Kalman filtering and stochastic approximation techniques. We should emphasize that the former method actually ignores the specific effect of data dropouts and instead considers their averaged performance.

2) System Types: As with the development of other control methodologies, there are many more results for linear systems than for nonlinear systems. We note that ILC focuses on evolution along the iteration axis, whereas the time-axis-based dynamics is less significant due to the finite operation length. Therefore, there is little distinction between research on linear time-invariant systems and linear time-varying systems. Results for linear systems include [21], [23], [25], [26], [28], [29], [32], [33], [39], [42], [44], [45], most of which treat the discrete-time case.

There are some papers on nonlinear systems, such as [27], [32]-[34], [41], [43]. However, we note that the nonlinear systems considered are generally of the affine type. This is because affine nonlinear systems separate the evolution influence of the previous state from that of the current input at each time instant. Moreover, the nonlinear functions are assumed to be globally Lipschitz. That is, for a nonlinear function $f(x)$ , the condition requires $\|f(x_1)-f(x_2)\|\leq k_f\|x_1-x_2\|$ , where $k_f$ is a Lipschitz constant. This condition is imposed to facilitate the use of Gronwall's technical lemma [48], which is fairly common in the convergence analysis of ILC for nonlinear systems. One promising direction for reducing the restrictions on nonlinear functions is to introduce other convergence analysis methods. The case of general nonlinear functions without the global Lipschitz condition is still of great significance both in theory and in practical applications.

In addition, stochastic noises are also included in the systems of several papers, including [22], [35]-[38], [40]. Specifically, in [22], [35], [37], [40] both random system disturbances and measurement noises are assumed for linear systems, whereas in [36], [38] only measurement noises are considered as the involved systems are nonlinear. For systems with stochastic noises, the techniques of stochastic control play an important role in the design and analysis. We also remark that a few results on special systems are reported, such as singular systems [46] and multi-agent systems [47]. It is worth pointing out that the ILC problem for special types of systems under data dropouts has few reports.

3) Data Dropout Models: As we have clarified at the beginning of the section, there are three models of random data dropouts, namely, RSM, BVM, and MCM. The most popular model is BVM, where data dropouts have a clear probability distribution and good independence. Most ILC papers adopt this model, including [21]-[23], [25]-[29], [32]-[34], [37]-[41], [44]-[46]. However, a major issue in BVM is the treatment of successive data dropouts where several limitations are imposed in the existing literature. In particular, the data dropout is independent for different time instants and different iterations in BVM. Thus, it is natural that adjacent data packets may be dropped simultaneously. In many existing papers, in order to provide a specified data compensation, additional requirements are imposed. For instance, in [27], [43], the dropped packet was compensated for with a packet one-time-instant back within the same iteration. Consequently, a limitation arises where packets at adjacent time instants are not allowed to drop within the same iteration. In [44]-[46] the lost packet was compensated for with the packet at the same time instant, but one-iteration back. Consequently, there is no simultaneous data dropout at the same time instant across any two adjacent iterations under this condition. Indeed, a more suitable compensation mechanism for the lost packet is to employ the packet at the same time instant from the latest available iteration. In other words, say we find a packet, $y_k(t)$ , which is lost during the transmission. We may replace it with the latest available packet from previous iterations, say $y_{\tau}(t)$ , where $\tau < k$ . Clearly, $y_{\tau}(t)$ is successfully transmitted while $y_i(t)$ with $\tau+1\leq i\leq k-1$ are all lost. This general compensation mechanism is investigated in [32]-[34], [37], [38], [40].
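The general compensation mechanism described above (replacing a lost packet $y_k(t)$ with the latest successfully received packet $y_{\tau}(t)$, $\tau<k$, at the same time instant) can be sketched in Python as follows; the measured outputs here are random placeholders and the dropout probability is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_iter, gamma_bar = 10, 5, 0.7      # horizon, iterations, success rate (illustrative)

# latest_y[t] stores the most recently received output at time instant t,
# taken from whichever earlier iteration last delivered it successfully.
latest_y = np.zeros(N + 1)

for k in range(n_iter):
    y_k = rng.standard_normal(N + 1)              # placeholder for the measured output
    received = rng.random(N + 1) < gamma_bar      # BVM dropout indicators, one per t
    # Compensation: keep y_k(t) where received; otherwise reuse the latest
    # available packet at the same time instant from a previous iteration.
    y_used = np.where(received, y_k, latest_y)
    latest_y = y_used                             # y_used is now the newest available data
    print(f"iteration {k}: packets lost at t =", np.flatnonzero(~received))
```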

There are only a few papers on the other models. In [35], [36] the RSM was used for data dropouts. In this case, no statistical property of data dropouts is required, so the dropout behavior can vary along the iteration axis. In other words, the distinct feature of RSM is the removal of stationary distribution assumptions on data dropouts. In [42], a unified framework was proposed for all three models, where MCM was first studied in the ILC field. Moreover, the authors of [43] carefully analyzed the effect of a single packet loss. For the multiple packet loss case, a general discussion was given instead of a strict analysis and description. The authors claimed that the data dropout level should be far smaller than 100 $\%$ to ensure a satisfactory tracking performance. In short, the development of data dropout models other than BVM requires more effort because the quantitative depiction of the relationship between data dropouts and tracking performance is still unclear.

4) Dropout Positions: As seen from Fig. 2, there are two networks connecting the plant and the learning controller, which are located at different sites. One is at the measurement side to transmit the output information back to the learning controller. The other is at the actuator side to transmit the generated input signal to the plant for the next operation process. To facilitate convergence analysis, most papers only assume data dropouts at the measurement side, while the network at the actuator side is assumed to work well, as in [21], [22], [25], [26], [28], [29], [35]-[38]. Although some papers claimed that their results can be extended to the general case where both networks suffer packet losses, it is actually not a trivial extension.

In particular, when the network at the actuator side is assumed to work well, i.e., all generated input signals can be successfully transmitted to the plant, the computed control generated by the learning controller and the actual control fed to the plant are always the same. Thus, the input used in the update algorithm is always equal to the actual control. However, when the network at the actuator side is lossy, the computed control may be lost during the transmission and then the plant has to compensate for it with other available signals. Consequently, the actual control may differ from the computed control. In other words, there exists an additional asynchronism between the computed control and the actual control. This random asynchronism imposes extra difficulty in addressing the data dropout problem since it is hard to separate from evolution dynamics as an individual variable. As a matter of fact, it has been proven in [39]-[41] that such asynchronism can be described by a Markov chain when modeling the dropouts by BVM, which paves a novel way to establish the convergence. Other papers considering the general data dropout position problem include [27], [32]-[34] where the randomness of the data dropout at the actuator side is eliminated by taking mathematical expectation for recursions of both input errors and tracking errors.

5) Update Schemes: There are two major update schemes that can be referred to when designing the update algorithms. One is event-triggered and the other is iteration-triggered. We provide a brief explanation of the schemes by taking the algorithms in the learning controller as an example. The principle of the first update scheme is as follows: if the output information is successfully transmitted, then the learning controller employs such information to generate a new input signal; otherwise, the learning controller stops updating until the corresponding output information is successfully transmitted in a subsequent iteration. In other words, when the corresponding packet is lost, it is replaced by 0. Clearly, this updating scheme is event-triggered. We call it an intermittent update scheme (IUS). The principle of the other update scheme is as follows: if the output information is successfully transmitted, then the learning controller employs such information to generate the input, which is the same as in the previous update scheme; if the output information is lost during transmission, then the learning controller employs the latest available output information from previous iterations to generate the input, which is different from the previous scheme. This update scheme keeps working for all iterations no matter whether the information is lost or not, so it is iteration-triggered. We call it a successive update scheme (SUS).

When considering an unreliable network at the measurement side, it has been shown that both IUS and SUS work well for the learning controller, as shown in [37], [38]. It is worth pointing out that a SUS outperforms an IUS when the DDR is large, as it continuously improves the tracking performance. When considering an unreliable network at the actuator side, it is clear that the IUS scheme is not applicable. In other words, a lost computed control packet cannot simply be replaced by 0, as this would greatly damage the tracking performance. That is, the lost input signal must be compensated for with a suitable packet to maintain the operation process of the plant. Clearly, the simplest compensation mechanism is to employ the latest available input from the previous iteration. In such a case, we may regard it as a SUS. As a matter of fact, such a mechanism for the input has been reported in [32]-[34], [39]-[41]. From another viewpoint, we could regard an IUS as a non-compensation type and a SUS as a simple compensation type. Generally, a sufficient compensation for the dropped data can effectively improve the tracking performance. Thus, designing specific compensation mechanisms for particular problems is of great significance, but related results are very few.
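The difference between the two schemes can be made concrete with a short sketch: under IUS the innovation is zeroed wherever the packet is lost, whereas under SUS the lost error is replaced by the latest successfully received one. The errors, gain, and dropout rate below are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, gamma_bar = 8, 0.5, 0.7          # horizon, gain, success rate (placeholders)

u_ius = np.zeros(N)                    # input maintained under IUS
u_sus = np.zeros(N)                    # input maintained under SUS
e_buffer = np.zeros(N)                 # latest available error, used only by SUS

for k in range(3):
    e_k = rng.standard_normal(N)                  # placeholder for e_k(t+1)
    gamma = rng.random(N) < gamma_bar             # BVM dropout indicators

    # IUS: where the packet is lost, the innovation is simply zero (no update).
    u_ius = u_ius + K * np.where(gamma, e_k, 0.0)

    # SUS: a lost error is replaced by the latest successfully received one.
    e_buffer = np.where(gamma, e_k, e_buffer)
    u_sus = u_sus + K * e_buffer

print("IUS input:", np.round(u_ius, 3))
print("SUS input:", np.round(u_sus, 3))
```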

We have classified the above literature on ILC under data dropouts in Table Ⅰ from the mentioned five perspectives. From this table, it can be seen that the data dropout problem has been deeply investigated from all perspectives. However, we note that the research for MCM and its generalization is promising.

Table Ⅰ
CLASSIFICATION OF THE PAPERS ON ILC UNDER DATA DROPOUTS
B. Communication Delay and Limited Capacity

Besides random data dropouts, there are many other random factors caused by limited communication capacity. Communication delay is one of them, and it has witnessed some progress in the past decade. In earlier attempts [23], [45], the time delay within an iteration was discussed. Such a delay was assumed to occur for the input signal and was modeled by a random matrix according to the lifted system in [23]. The Kalman-filtering-based stability analysis technique was applied to derive the stability of the proposed update law along the iteration axis. In [45] a one-step delay was addressed such that the packet could be transmitted on schedule or one step later. A Bernoulli random variable was used to describe the random delay, of which the randomness was eliminated by taking expectation in the convergence analysis.

The Bernoulli model was then employed in [49], [50] for describing the random one-iteration communication delay, where the communication delay was assumed to occur at both the output and input sides. That is, the output signal for updating the input may come from either the current or the previous iteration, and obeys a simple Bernoulli distribution. Technically, the one-iteration delay provides a certain deterministic property of the communication delay, which allows us to construct a finite-iteration contraction along the iteration axis. Indeed, in [49] the error of the $(k+3)$ th iteration can be bounded linearly by the errors of the $k$ th, $(k+1)$ th, and $(k+2)$ th iterations. In [50] the authors derived an interesting condition on the probability of the occurrence of communication delay. In particular, assume the probabilities to be $\overline{\alpha}$ and $\overline{\beta}$ for the cases where a one-iteration communication delay occurs at the output side and the input side, respectively. It is deduced in [50] that the condition $\overline{\alpha}+\overline{\beta}-\overline{\alpha}\overline{\beta}< 0.5$ should be fulfilled. In other words, the probabilities of communication delay should be sufficiently small. This condition may shed light on the development of the inherent relationship between random communication delays and tracking performance. However, more effort is needed to discover a quantitative description of the influence of incomplete information on tracking performance.

The successive iteration-based communication delay was considered in [51]. In particular, a large-scale system consisting of several subsystems was considered in that paper, where the communication between different subsystems suffered random and possibly asynchronous communication delays due to potentially different work efficiencies among subsystems. The communication delay was modeled similarly to the RSM given in the last subsection, and decentralized ILC algorithms were constructed based on the available information. However, due to random successive communication delays, the memory was assumed to have enough capacity such that the arriving data could be well stored. An extreme case for the memory size is that only the data of one iteration can be accommodated by the memory. Clearly, this is the minimum buffer capacity to ensure the learning process. Such a case was studied in [52], where multiple communication constraints were considered for networked nonlinear systems, including data dropouts, communication delays, and packet disordering. In that paper, an RSM was employed to describe the combined effect of the multiple communication constraints. Both an IUS and a SUS were applied to construct the learning algorithms. Compared with [50], the restrictions on the occurrence probability of communication delays were removed and successive communication delays were allowed. However, we would like to remark that research on ILC with communication delays has gained little attention from scholars compared with that on ILC with data dropouts. The randomness of an uncertain communication delay may lead to a mismatch between the input and the tracking error in the update law (for example, (3)). It is vital to figure out the effect of this mismatch in convergence analysis and to provide a data compensation mechanism in control synthesis.

C. Iteration-Varying Lengths

In Section Ⅲ-A, the data dropout is considered independently for different time instants, whereas in practical applications, the data may be dropped dependently along the time axis. In other words, data dropouts at earlier time instants may have a direct influence on those at later time instants within the same iteration. For example, if one data packet is dropped due to a linkage fault at some time instant, then all the following data of that iteration may be dropped. That is, to the learning controller, the iteration ends early. This results in a typical problem, called the iteration-varying length problem. This problem has been encountered in certain biomedical application systems. For example, while applying ILC in functional electrical stimulation (FES) for upper limb movement and gait assistance, it has been observed that the operation processes end early for at least the first few passes due to safety considerations, because the output significantly deviates from the desired trajectory [53]. The FES-induced foot motion and the associated variable-length-trial problem are detailed in [54] and [55], which clearly demonstrate the violation of the identical-trial-length assumption typically used in ILC. Another example can be seen in the analysis of humanoid and biped walking robots, which feature periodic or quasi-periodic gaits [56]. For analysis, these gaits are divided into phases that are defined by the time at which the foot strikes the ground, and the durations of the resulting phases are usually not the same from iteration to iteration. A third example can be found in [57], where the trajectory-tracking problem for a lab-scale gantry crane was investigated. In this example, the output was constrained to be within a small neighborhood of the desired reference, because the iteration would end if the output drifted outside the specified boundary, thereby resulting in the varying-length iteration problem. Whether caused by communication limits or by safety considerations, the iteration-varying length problem always results in an incomplete information problem for the learning process.

There were some early research attempts to provide a suitable design and analysis framework for the iteration-varying length problem that laid the groundwork for subsequent investigations [53]-[57]. For example, based on the experimental verifications and primary convergence analysis given in [53]-[55], a systematic proof of the monotonic convergence in different norm senses was further elaborated in [58]. In particular, necessary and sufficient conditions for monotonic convergence were derived strictly by carefully analyzing the path property of the proposed algorithm. Moreover, other issues including controller design guidelines and the influence of disturbances were also discussed. However, no specific formulation of the iteration-varying length was imposed in this framework as it concerned the contraction between adjacent iterations.

The first random model of the iteration-varying length was proposed in [59] for discrete-time systems and then extended to continuous-time systems in [60]. In the model, a binary random variable was used to represent the occurrence of the output at each time instant and each iteration; that is, the random variable is equal to 1 if the output appears and 0 otherwise (similar to the model of data dropouts). The variable was then multiplied by the tracking error to denote the actual information available to the update process. To compensate for the lost information, an iteration-average operator averaging all historical data was introduced to the ILC algorithm in [59], whereas in [60], this average operator was replaced by a moving-iteration-average operator to reduce the influence of very old data. Both operators provide good compensation, as shown by the theoretical analysis and simulations. Moreover, a lifted framework of ILC for discrete-time linear systems was provided in [61] to avoid the conservatism of the conventional $\lambda$ -norm-based contraction analysis in [59], [60]. In these papers, we note two distinct points: the asymptotical convergence is derived in the mathematical expectation sense, and the distribution of the introduced random variable is assumed known to the controller.
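The following Python sketch gives one plausible realization of the idea described above: a binary variable marks whether the output appears at each time instant of a randomly shortened iteration, the missing tail of the error is treated as zero, and an iteration-average operator accumulates the modified errors. It is a rough illustration consistent with the description of [59], not the exact update law of that paper, and all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K_gain = 20, 0.5                   # desired length and learning gain (placeholders)

u = np.zeros(N)
avg_innovation = np.zeros(N)          # iteration-average of gamma_k(t) * e_k(t+1)

for k in range(5):
    # The iteration ends early at a random actual length N_k <= N.
    N_k = int(rng.integers(N // 2, N + 1))
    gamma = (np.arange(1, N + 1) <= N_k).astype(float)   # 1 if the output at t+1 appears

    e_k = rng.standard_normal(N)                         # placeholder for e_k(t+1)
    modified_error = gamma * e_k                         # the missing tail contributes 0

    # Iteration-average operator over all historical modified errors.
    avg_innovation = (k * avg_innovation + modified_error) / (k + 1)
    u = u + K_gain * avg_innovation
    print(f"iteration {k}: actual length N_k = {N_k}")
```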

Stronger convergence results were given in [62] and [63] for linear and nonlinear discrete-time systems, respectively. In particular, the classical P-type ILC algorithm was employed for discrete-time linear systems in [62], where the possible iteration length takes values from a finite set. Next, the evolution of the lifted error vectors along the iteration axis was transformed into a random switching system with finitely many switching states. Consequently, the authors established recursive computation formulas for the statistics of such vectors (i.e., the mathematical expectations and covariances). Convergence in the mathematical expectation, mean-square, and almost-sure senses was derived simultaneously. In [63] the affine nonlinear system was considered. It is clear that the lifting techniques cannot be applied to such types of systems. As a result, a technical lemma on the commutativity of the expectation operator and the absolute-value operator was first established, paving a novel way to derive the strong convergence. A recent work [64] proposed two improved ILC schemes to fully utilize the iteration-moving-average operator. Specifically, a searching mechanism was introduced to collect useful information while avoiding redundant tracking information from the past, so a faster convergence speed was expected. In these contributions, the probability distribution of the random length is not required a priori.

In addition, some extensions have also been reported in the existing literature. Nonlinear stochastic systems were investigated in [65], where bounded disturbances were included. The average-operator-based scheme similar to [59] was improved by collecting all available information. Nevertheless, we note that a Gaussian distribution of the variable iteration length was assumed, which limits the possible application range. In [66], the authors extended the method to discrete-time linear systems with a vector relative degree, in which case the output data for the learning algorithms need to be selected carefully. In addition, the variable length issue was extended to stochastic impulse differential equations in [67] and fractional order systems in [68]. Sampled-data control for continuous-time nonlinear systems was proposed in [69], where both the generic PD-type and a modified PD-type scheme were employed with suitable design conditions on the learning matrices. We remark that the convergence analyses in these papers were primarily based on the mature contraction mapping method.

In short, as a special case of passive incomplete information, the iteration-varying length problem has gained some progress. However, the existing literature has witnessed the following limitations. First, most papers considered discrete-time systems so that the possible length has finite outcomes. Second, the systems are limited to be linear or globally Lipschitz nonlinear. Third, the average-operator-based design of ILC controller is widely studied, which motivates us to consider how to efficiently use the available information. Novel analysis techniques are also of great interest to replace the conventional contraction-mapping method. Additionally, the randomly iteration-varying length problem can be regarded as a special case of the data dropout problem; that is, the former is a time-axis-based successive dropout case (from the actual ending time instant to the desired ending time instant). Therefore, the results in ILC with data dropouts can be applied to deal with the varying length problem and vice versa.

Ⅳ. ILC WITH ACTIVE INCOMPLETE INFORMATION

In the previous section, we reviewed recent progress on ILC with passive incomplete information. In this section, we proceed to review the progress on ILC with active incomplete information; in other words, we collect the papers where the data quantity or quality is intentionally reduced. Two major reduction actions are considered, namely, sampled-data ILC and quantized ILC. The former indicates that only the signals at assigned time instants, rather than over the whole time interval, are available, and the latter indicates that only assigned values, rather than precise values, are available. By sampling and quantization, the amount of data can be reduced considerably.

A. Sampled-Data ILC

In this subsection, we review sampled-data ILC from the perspective of research issues. Before that, we first formulate the sampled-data ILC problem, as shown in Fig. 5. Let $\Delta_T$ be the sampling period of the digital control system and $N\Delta_T=T$, where $T$ is the iteration length and $N$ is the total number of samples within one iteration. For sampled-data ILC, only the information at the sampling time instants $n\Delta_T$, $0\leq n\leq N$, is available. The block diagram in Fig. 5 consists of a sampler at the output side to generate the sampled output and a holder at the input side to regenerate a continuous signal for the controlled plant.

Fig. 5 The research framework of sampled-data ILC

There are two primary problems associated with sampled-data ILC: the behavior at the sampling instants and the interval performance between sampling instants. To be specific, the former aims to construct suitable learning algorithms to guarantee convergence at the sampling instants, while the latter focuses on quantitative analysis of the tracking performance between sampling instants and on possible solutions for reducing the tracking errors within the sampling intervals. Generally, the former problem is similar to discrete-time ILC, as they share the same design and analysis techniques. The latter problem, however, is what distinguishes sampled-data ILC from traditional discrete-time systems.
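
To make the framework of Fig. 5 concrete, the following minimal Python sketch simulates a continuous-time scalar plant with a zero-order-held input, samples the output every $\Delta_T$, and applies a P-type update at the sampling instants; the plant, gains, and reference are illustrative assumptions rather than examples taken from the cited papers.

```python
import numpy as np

# Minimal sketch of the sampled-data ILC loop of Fig. 5: a zero-order holder
# regenerates the continuous input from its samples, the output is sampled
# every Delta_T, and a P-type law updates the sampled input.
T, Delta_T, h = 2.0, 0.1, 0.001                 # trial length, sampling period, ODE step
N = int(round(T / Delta_T))                     # number of sampling instants per trial
steps_per_sample = int(round(Delta_T / h))
y_d = np.sin(np.pi * Delta_T * np.arange(1, N + 1))   # reference at n*Delta_T, n = 1..N

def run_iteration(u_samples):
    """Simulate dx/dt = -x + u with a zero-order-held input; return the sampled output."""
    x, y_s = 0.0, np.zeros(N)
    for i in range(N * steps_per_sample):
        n = i // steps_per_sample               # index of the active hold interval
        x += h * (-x + u_samples[n])            # forward-Euler integration
        if (i + 1) % steps_per_sample == 0:
            y_s[n] = x                          # sample y = x at (n+1)*Delta_T
    return y_s

u = np.zeros(N)                                 # u[n] is held on [n*Delta_T, (n+1)*Delta_T)
for k in range(30):
    y_s = run_iteration(u)
    e = y_d - y_s                               # at-sample tracking error
    u = u + 5.0 * e                             # P-type: u_{k+1}(n*Delta_T) = u_k(n*Delta_T) + L*e_k((n+1)*Delta_T)
    print(f"iteration {k:2d}: max at-sample error {np.max(np.abs(e)):.4f}")
```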

Considering the system models, linear and affine nonlinear systems without disturbances attract the most attention, linear and affine nonlinear systems with bounded disturbances have also been investigated, while other system classes have received little consideration. The classification of references is given in Table Ⅱ. These papers are mainly contributed by several research groups with different special interests; therefore, we review the publications by research interest/group. In each category, four perspectives are explored, i.e., the system model, the update scheme, the convergence result, and the analysis technique.

Table Ⅱ Classification of References for Sampled-Data ILC

1) Frequency-Based Sampled-Data ILC: The frequency-based design and analysis of sampled-data ILC are presented in [70]-[73], where the kernel issue focuses on the fundamental analysis and synthesis of sampled-data theory in ILC.

Reference [70] presented a framework for the design and analysis of sampled-data ILC in both the time and frequency domains. As a fundamental framework, the LTI system was adopted, and P-type, D-type, D$^2$-type, and general filter algorithms were studied, with sufficient conditions for monotonic convergence derived. The relative-degree issue between the continuous-time system and its corresponding sampled-data system was also remarked upon. These theoretical results were then experimentally verified on a piezoelectric motor in [71], and some selection guidelines were provided for practical applications. In [72], a novel sampled-data ILC algorithm in frequency form was proposed for the extreme-precision motion tracking problem of a piezoelectric positioning stage. The convergence condition and the robustness analysis under the inverse model were presented in the frequency domain, together with experimental validation showing that sampled-data ILC outperforms conventional open-loop control and PI control. This problem was extended in [73], where sampled-data ILC was combined with direct feedback control to handle both repeatable and nonrepeatable components simultaneously. As verified by experimental studies, this combination was demonstrated to have advantages in precise tracking and fast convergence. In short, frequency-based design and analysis is an interesting perspective for sampled-data ILC, but many aspects remain to be investigated by scholars and engineers.
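
For orientation, a commonly used frequency-domain condition for monotonic convergence of a first-order law $u_{k+1}=Q(u_k+Le_k)$ (the notation here is generic and assumed for illustration, not quoted from [70]) is

$ \sup\limits_{\omega}\left|Q(e^{j\omega\Delta_T})\left(1-P(e^{j\omega\Delta_T})L(e^{j\omega\Delta_T})\right)\right|<1 $

where $P$ denotes the frequency response of the sampled-data plant and $Q$, $L$ are the robustness filter and learning function; satisfying this condition implies (approximately, for sufficiently long trials) a monotonic decay of the tracking-error norm along the iteration axis.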

2) Bounded Convergence Under Bounded Disturbances: A series of papers on bounded-set convergence at the sampling time instants has been contributed for linear and nonlinear systems with bounded disturbances [74]-[79]. In these papers, bounded system disturbances $w_k(t)$ and/or measurement noises $v_k(t)$ are added to the linear and nonlinear systems; that is, $\|w_k(t)\|\leq \epsilon_1$, $\|v_k(t)\|\leq \epsilon_2$, where $\epsilon_1$ and $\epsilon_2$ are positive constants. In addition, the initial state error is assumed to be bounded, i.e., $\|x_k(0)-x_0\|\leq \epsilon_3$, where $x_0$ denotes the desired initial state and $\epsilon_3$ is a positive constant. Owing to such unknown disturbances, zero-error tracking can hardly be expected, whether at the sampling instants or within the sampling intervals. Instead, it is shown that the tracking errors at the sampling instants converge to a set whose bound is a function of $\epsilon_i$, $i=1, 2, 3$. The major differences among these papers lie in the design of the updating schemes.
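
A typical statement of such bounded-convergence results can be summarized (up to problem-dependent constants $c_1$, $c_2$, $c_3$) as

$ \limsup\limits_{k\rightarrow\infty}\max\limits_{0\leq n\leq N}\left\|y_d(n\Delta_T)-y_k(n\Delta_T)\right\|\leq c_1\epsilon_1+c_2\epsilon_2+c_3\epsilon_3 $

where the constants depend on the system and the learning gains, so the ultimate error bound vanishes as the disturbance, noise, and initial-error levels tend to zero.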

In an early paper [74], the conventional P-type update law was employed with the available sampled information for affine nonlinear systems, and the convergence analysis was conducted using the well-known $\lambda$-norm technique. As pointed out in many papers, convergence in the $\lambda$-norm may come with poor transient performance before ultimate convergence. A result in the common-norm sense was given in [75] for the D-type update law, where a direct calculation on the inequalities of the input error norm led to a contraction mapping; a similar problem was also addressed in [76]. Papers [77]-[79] concentrated on the effect of involving the current-iteration tracking error or feedback control for LTI systems. In particular, [77] constructed an update law using only the tracking errors of the current iteration, so that considerable storage can be saved, facilitating practical applications. An extension to general formulations of the update law was provided in [79], where full utilization of the tracking errors in the current iteration was discussed in depth and the convergence was established using the Lyapunov method. The combination of feedback control and ILC for sampled-data systems was proposed in [78].

It is noted that different update algorithms, including P-type, D-type, and current-error feedback, have been investigated by Chien and his co-workers. This line of research mainly focuses on bounded convergence to a given set under bounded disturbances by letting the sampling period be sufficiently small.

3) Sampled-Data ILC With Arbitrary Relative Degree: An in-depth study of sampled-data ILC for nonlinear systems with arbitrary relative degree was carried out in [80]-[84]. The relative degree describes the input-output relationship and reflects the minimum order through which the input affects the corresponding output. For continuous-time systems, the relative degree is defined via the Lie derivative of the output with respect to the input; for discrete-time systems, it is defined via function composition. For sampled-data control, however, integrals must be included in the definition. Consider the following SISO affine nonlinear system as an example,

$ \begin{split} \dot x_k(t)&=f(x_k(t))+b(x_k(t))u_k(t)\\ y_k(t)&=g(x_k(t)) \end{split} $ (7)

where $f(\cdot)$, $b(\cdot)$, and $g(\cdot)$ are nonlinear functions. The above system, with its input generated by a zero-order holder from the sampled signals, is said to have extended relative degree $\eta$ for $x_k(t)$ if, $\forall 0\leq j\leq N-1$,

$ \begin{align*} &\int_{j\Delta_T}^{(j+1)\Delta_T}L_bg(x(t_1))dt_1=0, \\ &\int_{j\Delta_T}^{(j+1)\Delta_T}\int_{j\Delta_T}^{t_1}\cdots\int_{j\Delta_T}^{t_i}L_bL_f^ig(x(t_{i+1}))dt_{i+1}\cdots dt_1=0, \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad 1\leq i\leq \eta-2, \\ &\int_{j\Delta_T}^{(j+1)\Delta_T}\int_{j\Delta_T}^{t_1}\cdots\int_{j\Delta_T}^{t_{\eta-1}}L_b L_f^{\eta-1}g(x(t_{\eta}))dt_{\eta}\cdots dt_1\neq 0. \end{align*} $

Roughly speaking, a relative degree larger than one indicates that the direct input-output coupling matrix is zero. In such a case, it is natural to ask whether the conventional P-type update scheme still guarantees convergence. This problem was resolved in [80]-[82]. In particular, it was shown in [80], [81] that the basic P-type scheme based on the available sampled data can ensure zero-error tracking at the sampling time instants. The result was then extended to a general case called sampled-data ILC with lower-order differentiations for general nonlinear systems in [82], where "lower-order" indicates that the order of the derivative used in the learning controller is lower than the relative degree.

Another important issue is the initial rectifying problem [83], [84], in which the initial state is shifted from its desired value. These papers proposed effective rectifying mechanisms such that the actual output is driven back to the desired trajectory after some time interval. In [83], a fixed initial shift was considered, and the proposed initial rectifying action drove the system output to the desired trajectory within a specified error bound. The initial shift was then extended to an arbitrarily varying case, and a so-called varying-order sampled-data ILC was designed and analyzed. In all these studies, the convergence analysis was established with the help of a technical lemma, which is an extension of the contraction mapping principle.

4) Interval Performance of Sampled-Data ILC: It is observed that papers such as [74]-[84] consider only the performance at the sampling instants, while the intersample behavior is seldom discussed. However, achieving good at-sample performance can come at the expense of poor intersample behavior [85], and guaranteeing acceptable intersample tracking performance is a difficult problem for sampled-data ILC. Early attempts are given in [86], [87].

In [86], a multirate ILC approach was proposed to balance the at-sample performance and the intersample behavior, where the key idea was to generate a command signal at a low sampling rate using fast-sampled measurements. Details of multirate systems and multirate ILC were given to enable optimal sampled-data ILC. Further, as a continuation of [86], the authors developed an ILC framework for sampled-data systems in [87] by incorporating system identification and a low-order optimal ILC controller. The proposed identification procedure delivers a model that encompasses the intersample behavior of the closed-loop system in a multirate setting, so that the resulting model can be used for optimal ILC synthesis. As a consequence, the computational burden is much lower than that of common optimization-based algorithms for large systems.

In short, in-depth studies on the intersample behavior of sampled-data ILC are still lacking, including novel design and analysis techniques for improving the tracking performance between sampling instants.

5) Scattered Contributions: Reference [88] presented a limiting property of the inverse of sampled-data systems. To be specific, for a continuous-time system with a relative degree of one or two, the inverse of the corresponding sampled-data system can approximate the inverse of the original continuous-time system independently of the stability of the zeros as the sampling period $\Delta_T$ goes to zero.

Time delay was introduced into the affine nonlinear model in [89], with other settings similar to those of [74], [75]. A PD-type update scheme was employed, and a bounded-convergence analysis was given; however, the differential signal is not well suited to sampled-data implementation.

Sampled-data ILC for singular systems was addressed in [90] using a P-type learning algorithm and $\lambda$-norm techniques. An online optimal sampled-data ILC problem was addressed in [91] for LTI systems with bounded disturbances, where the control objective was to minimize a smooth objective function of the inputs and outputs; a gradient descent method was employed to generate the optimal solution iteratively.
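
As a generic sketch (not the exact algorithm of [91]), such a gradient-based online update takes the form

$ u_{k+1}=u_k-\alpha_k\nabla_u J\left(u_k, y_k\right) $

where $J$ is the smooth objective of inputs and outputs, $\alpha_k>0$ is a step size, and the gradient is computed or estimated from the data measured in iteration $k$.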

Based on the above review, we make several remarks. First of all, much attention has been paid to LTI and affine nonlinear systems with or without bounded disturbances, whereas little progress has been made on time-varying systems, general nonlinear systems, and stochastic systems. Moreover, most papers contribute to the at-sample performance, while the intersample behavior is seldom considered; however, good at-sample tracking performance does not necessarily imply acceptable intersample behavior. Furthermore, the traditional contraction mapping method and its extensions remain the main technique for convergence analysis, which restricts the range of systems and problems that can be studied. Last but not least, the implementation of sampled-data ILC in practical applications is of great significance, but few publications are found in this direction [92]. Therefore, a systematic framework of sampled-data ILC is yet to be established, and much effort should be devoted to the above aspects. Meanwhile, sampled-data control is usually combined with quantization techniques to further reduce the data amount; the latter is reviewed in the next subsection.

B. Quantized ILC

To reduce the communication burden, another effective method is to introduce a quantization mechanism; that is, the measured signal is first quantized and then transmitted. Quantization has been studied in depth in the networked control field; however, few papers on quantized ILC have been reported.

An early attempt at quantized ILC was given in [93], where the output measurements were quantized by a logarithmic quantizer and then fed to the controller for updating the ILC law. Using the sector-bound technique and the conventional contraction mapping method, it was shown that the tracking error converges to a small range whose upper bound depends on the quantization density. Meanwhile, the tracking error also depends on the target value, as can be seen from the expression of the upper bound: the larger the output measurement, the larger the ultimate upper bound of the tracking error. To achieve zero-error tracking, an alternative framework was proposed in [94], where the desired reference is first transmitted to the local plant to generate a tracking error, and then the tracking error, rather than the output signal, is quantized by a logarithmic quantizer and transmitted. This scheme guarantees zero-error convergence owing to the inherent property of the logarithmic quantizer. The extension to stochastic systems was addressed in [95], where a detailed comparison of the tracking index was provided by considering both stochastic noises and quantization error; the simulations show that the ultimate index value is generated entirely by the stochastic noises, indicating that the quantization error is eliminated asymptotically. The extension of the above quantization methods to the input quantization case was provided in [96], with conclusions similar to those of [93], [94]. A similar idea of quantizing the measured error was also used in [97], [98] for discrete-time and continuous-time multi-agent systems, respectively. We remark that the logarithmic quantizer requires infinitesimal precision near zero, which is hard to implement in applications; thus, it is important to propose new quantization mechanisms to improve the tracking performance.
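
The following hedged Python sketch implements a logarithmic quantizer with quantization density $\rho$ and illustrates the sector-bound property $|Q(v)-v|\leq\delta|v|$ with $\delta=(1-\rho)/(1+\rho)$, which underlies the analyses in [93], [94]; the parameter values and test signals are illustrative.

```python
import numpy as np

# Sketch of a logarithmic quantizer with density rho and sector bound
# |Q(v) - v| <= delta * |v|, where delta = (1 - rho) / (1 + rho).
def log_quantize(v, rho=0.6, u0=1.0):
    """Quantize v onto the levels {0, +/- u0 * rho**i, i integer}."""
    if v == 0.0:
        return 0.0
    delta = (1.0 - rho) / (1.0 + rho)
    # pick the level u_i = u0*rho**i whose cell (u_i/(1+delta), u_i/(1-delta)] contains |v|
    c = np.log(abs(v) * (1.0 + delta) / u0) / np.log(rho)
    i = np.floor(c) + 1.0
    return np.sign(v) * u0 * rho ** i

for v in [-1.8, -0.4, -0.05, 0.05, 0.4, 1.8]:
    q = log_quantize(v)
    # relative error never exceeds delta = 0.25 for rho = 0.6
    print(f"v = {v:+.3f}, Q(v) = {q:+.4f}, relative error = {abs(q - v) / abs(v):.3f}")
```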

In [99], a uniform quantizer was used together with an additional scaling mechanism implemented between the plant and the controller. In this case, the measured signal is first scaled by prior scaling functions and then quantized by the uniform quantizer; at the controller, the received signal is converted using the scaling functions again to obtain a good approximation of the original signal. Such a process is called an encoding and decoding mechanism; in fact, the scaling functions serve to enhance the quantization precision. In [47], another quantization method, the $\Sigma\Delta$-quantizer, was introduced, whose parameter selection ensures a quantization bound similar to the sector-bound property of the logarithmic quantizer. The quantization error was treated as a zero-mean martingale difference sequence, which may be a restrictive condition. In [100], a probabilistic quantizer was first introduced into the design of quantized ILC; this quantizer produces a random quantization error with zero mean and bounded variance. As a result, with the help of a decreasing learning gain, it can be proved that the actual tracking error converges to zero even though only a coarse uniform probabilistic quantizer is used. These results indicate a promising research direction for addressing the quantized ILC problem according to practical requirements.
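
The following hedged Python sketch illustrates a probabilistic (dithered) uniform quantizer of the type used in [100]: the quantized value is unbiased, $E[Q(v)]=v$, and the quantization error is bounded by the step size, which is why a decreasing learning gain can average the error out; the step size and the test signal are illustrative assumptions.

```python
import numpy as np

# Sketch of a probabilistic uniform quantizer: round to one of the two
# adjacent levels with probabilities chosen so that the output is unbiased.
rng = np.random.default_rng(1)

def prob_quantize(v, step=0.5):
    """Return the lower or upper quantization level of v, unbiased in expectation."""
    low = np.floor(v / step) * step
    p_up = (v - low) / step              # probability of choosing the upper level
    return low + step if rng.random() < p_up else low

samples = [prob_quantize(0.73) for _ in range(10000)]
print("empirical mean of Q(0.73):", np.mean(samples))   # close to 0.73 (unbiasedness)
print("quantization error bound:", 0.5)                  # never exceeds the step size
```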

In sum, quantized ILC is still at an early stage compared with the fruitful results of conventional quantized control. Two valuable research directions should be highlighted. The first is to characterize the relationship between the quantized data and the tracking performance. The other is to investigate effective soft mechanisms for data acquisition, transformation, transmission, and recovery so as to eliminate or reduce the effect of quantization.

Ⅴ. DATA ROBUSTNESS AND PROMISING DIRECTIONS

As explained in the previous sections, ILC requires little information about the system matrices; in other words, the design of the learning controller mainly depends on the input and tracking information of previous iterations. Thus, it is a typical data-driven method [101]. From this viewpoint, the ILC problem under incomplete information is essentially a data robustness problem; that is, the inherent control objective is to investigate how the control schemes perform under different levels of data loss. Generally, if the designed learning control scheme behaves well even when most data are lost due to various restrictive conditions, we say the scheme has good data robustness; if it is very sensitive to data loss, we say it has poor data robustness. However, we should note that the concept of data robustness is still unclear [101]; therefore, research on ILC under incomplete information would establish a fundamental understanding and may guide us toward a definition of data robustness for data-driven control.

In traditional control theory, robust control refers to controller design approaches that deal with model and/or parameter uncertainty. The robustness in that framework is the property of maintaining certain control performance when the uncertain parameters or disturbances vary within some (typically compact) set; that is, traditional control robustness is defined with respect to the system itself. In data-driven control, however, the system information is excluded, so it is not suitable to follow the above definition. The robustness of data-driven control should instead be defined with respect to the information/data itself; in particular, the inherent relationship between the incomplete information/data and the control performance explicitly describes the robustness issue. Along this line, we would like to share the following points. First, the average data loss can approach $100 \%$ in various passive-incomplete-information cases (e.g., data dropouts) while the asymptotic convergence is retained; that is, the DDR can be any number less than 1 while the convergence of the ILC algorithm is still guaranteed. Thus, there may not exist a critical value of data loss for the data robustness issue. Second, although asymptotic convergence can be ensured under heavy data loss, the transient performance of the learning algorithms generally deteriorates (for example, slow convergence speed and transient growth problems); thus, the description of data robustness should take these indices into account. Third, data-driven control uses little model information in designing the control algorithms, and thus data robustness may be defined independently of the system model; that is, data robustness should be the same for all, or at least most, types of systems. In sum, a mathematical formulation of such a definition needs further investigation.

In ILC with incomplete information, the emphasis should be placed on the robustness significance contained in the lost information and the related control system design. In other words, we should concentrate on an in-depth understanding of the restriction and trade-off between the information and the tracking indices of ILC (such as tracking precision, convergence speed, control energy, and data amount). Based on this relationship, we can evaluate the key factors for improving the tracking performance when partial data are lost. In this respect, we highlight the following prospective research topics.

1) A good solution to the data dropout problem can be extended to many other types of incomplete-information environments; thus, it deserves deeper investigation of the essential points, for example, the quantitative influence of data dropouts on tracking performance, novel compensation mechanisms for the lost data with respect to specified objectives, and controller design and analysis under general data dropout environments.

2) When considering communication channels, many open problems await exploration regarding various communication constraints, such as random and multiple communication delays, random and/or unrecognized packet disordering, severely limited communication bandwidth, insufficient memory storage, and multi-channel transmission and fusion. Moreover, the combined effect of multiple communication constraints is also of interest.

3) Sampling is an effective and economical treatment of continuous-time systems using computer technology, whereas the specific role of sampling techniques is not yet clear for applications. Explicit answers are still lacking to many practical questions, such as the lowest admissible sampling frequency, the specific sampling pattern (uniform or nonuniform), and the inherent relation between the sampling pattern and the control performance. Moreover, it is also important to develop suitable sampling frameworks that satisfy the trade-off between minimum data amount and optimal tracking performance.

4) Quantized ILC is in its embryonic stage, as only tentative convergence results for common quantizers have been provided, whereas the essential performance improvements achievable with finite-precision quantizers have not been investigated. The kernel issues are to deal with the inevitable quantization error, to find the tracking limitation achievable with quantized data, to search for suitable treatments for eliminating or reducing the effect of quantization, and to establish an analysis and synthesis framework for quantized ILC.

5) In the existing literature, passive incomplete information is generally formulated by random variables, and techniques from stochastic control are applied to derive the performance analysis, whereas active incomplete information is usually described by a certain (deterministic) loss variable, and bounded convergence analysis for conventional ILC is achieved. Since the ILC problem can be well formulated as a repetitive process [102], it is expected that the repetitive-process-based approach can provide a meaningful solution framework for ILC with incomplete information.

When investigating the data robustness issue of ILC, we should pay special attention to the triple shown in Fig. 6: (incomplete) information, index, and control. The incomplete information includes not only the passive and active types but also mixtures of both. The indices contain tracking precision, convergence speed, input energy, etc. The control part includes algorithm design and analysis as well as experimental verification of the theoretical results. Based on this triple, we have a corresponding triple of key points of investigation: the restrictive relationship, the control system, and the synthesis/analysis. In particular, the restrictive relationship between the incomplete information and the control indices plays a fundamental role: with an in-depth understanding of this relationship, one can implement the specific realization of the control system and then establish the synthesis and analysis framework for specific problems.

Fig. 6 The research triple of ILC with incomplete information
Ⅵ. CONCLUSIONS

In this paper, we have surveyed the recent progress on ILC with incomplete information, which is caused by practical conditions (passive incomplete information) or man-made treatments (active incomplete information). For passive incomplete information, much attention has been given to random loss conditions such as data dropouts, communication delays and constraints, and iteration-varying lengths. For active incomplete information, we focused on sampled-data ILC and quantized ILC, both of which considerably reduce the amount of data to be acquired and processed. Based on this survey, it is observed that ILC with incomplete information is actually a case of the data robustness problem, for which two issues deserve sufficient attention: the first is to evaluate the influence of incomplete information on the control performance, and the second is to design a suitable synthesis and analysis framework. It is expected that this survey will give the reader a better understanding of ILC with incomplete information and provide useful guidelines for further research to perfect the framework.

REFERENCES
[1] S. Arimoto, S. Kawamura, and F. Miyazaki, "Bettering operation of robots by learning, " J. Robotic Syst. , vol. 1, no. 2, pp. 123-140, Jan. 1984. http://onlinelibrary.wiley.com/doi/10.1002/rob.4620010203/abstract
[2] D. A. Bristow, M. Tharayil, and A. G. Alleyne, "A survey of iterative learning control, " IEEE Control Syst., vol. 26, no. 3, pp. 96-114, Jan. 2006. http://ieeexplore.ieee.org/document/1636313/
[3] H. S. Ahn, Y. Q. Chen, and K. L. Moore, "Iterative learning control: Brief survey and categorization, " IEEE Trans. Syst. Man Cybern. C, vol. 37, no. 6, pp. 1099-1121, Nov. 2007. http://ieeexplore.ieee.org/document/4343981/
[4] Y. Q. Wang, F. R. Gao, and F. J. Doyle Ⅲ, "Survey on iterative learning control, repetitive control and run-to-run control, " J. Process Control, vol. 19, no. 10, pp. 1589-1600, Dec. 2009. https://www.sciencedirect.com/science/article/pii/S0959152409001681
[5] D. Shen and Y. Wang, "Survey on stochastic iterative learning control, " J. Process Control, vol. 24, no. 12, pp. 64-77, Dec. 2014. https://www.sciencedirect.com/science/article/pii/S0959152414001140
[6] H. S. Ahn and D. Bristow, "Special issue on 'iterative learning control', " Asian J. Control, vol. 13, no. 1, pp. 1-2, Jan. 2011. http://onlinelibrary.wiley.com/doi/10.1002/asjc.334/abstract
[7] C. Freeman and Y. Tan, "Iterative learning control and repetitive control, " Int. J. Control, vol. 84, no. 7, pp. 1193-1295, Aug. 2011.
[8] D. Y. Meng and K. L. Moore, "Robust iterative learning control for nonrepetitive uncertain systems, " IEEE Trans. Autom. Control, vol. 62, no. 2, pp. 907-913, Feb. 2017. http://ieeexplore.ieee.org/document/7463016/
[9] D. Y. Meng and K. L. Moore, "Convergence of iterative learning control for SISO nonrepetitive systems subject to iteration-dependent uncertainties, " Automatica, vol. 79, pp. 167-177, May 2017. https://www.sciencedirect.com/science/article/pii/S0005109817300675
[10] M. Yu and Y. C. Li, "Robust adaptive iterative learning control for discrete-time nonlinear systems with time-iteration-varying parameters, " IEEE Trans. Syst. Man Cybern. : Syst., vol. 47, no. 7, pp. 1737-1745, Jul. 2017. http://ieeexplore.ieee.org/document/7880611/
[11] L. Hladowski, K. Galkowski, W. Nowicka, and E. Rogers, "Repetitive process based design and experimental verification of a dynamic iterative learning control law, " Control Eng. Pract. , vol. 46, pp. 157-165, Jan. 2016. https://www.sciencedirect.com/science/article/pii/S0967066115300344
[12] H. F. Tao, W. Paszke, E. Rogers, H. Z. Yang, and K. Galkowski, "Iterative learning fault-tolerant control for differential time-delay batch processes in finite frequency domains, " J. Process Control, vol. 56, pp. 112-128, Aug. 2017.
[13] S. Mandra, K. Galkowski, and H. Aschemann, "Robust guaranteed cost ILC with dynamic feedforward and disturbance compensation for accurate PMSM position control, " Control Eng. Pract., vol. 65, pp. 36-47, Aug. 2017. https://www.sciencedirect.com/science/article/pii/S0967066117301144
[14] B. Altin and K. Barton, "Exponential stability of nonlinear differential repetitive processes with applications to iterative learning control, " Automatica, vol. 81, pp. 369-376, Jul. 2017.
[15] Y. Q. Wang, H. Zhang, S. L. Wei, D. H. Zhou, and B. Huang, "Control performance assessment for ILC-controlled batch processes in a 2-D system framework". IEEE Trans. Syst. Man Cybern.: Syst. , 2017. DOI:10.1109/TSMC.2017.2672563
[16] M. M. G. Ardakani, S. Z. Khong, and B. Bernhardsson, "On the convergence of iterative learning control, " Automatica, vol. 78, pp. 266-273, Apr. 2017. https://www.sciencedirect.com/science/article/pii/S0005109816305386
[17] T. T. Meng and W. He, "Iterative learning control of a robotic arm experiment platform with input constraint, " IEEE Trans. Ind. Electron., vol. 65, no. 1, pp. 664-672, Jan. 2018.
[18] X. Li, Y. H. Liu, and H. Y. Yu, "Iterative learning impedance control for rehabilitation robots driven by series elastic actuators, " Automatica, vol. 90, pp. 1-7, Apr. 2018. https://www.sciencedirect.com/science/article/pii/S0005109817306180
[19] H. Kim, J. S. Lee, J. S. Lai, and M. Kim, "Iterative learning controller with multiple phase-lead compensation for dual-mode flyback inverter, " IEEE Trans. Power Electron. , vol. 32, no. 8, pp. 6468-6480, Aug. 2017. http://ieeexplore.ieee.org/document/7579576/
[20] C. T. Freeman, "Robust ILC design with application to stroke rehabilitation, " Automatica, vol. 81, pp. 270-278, Jul. 2017.
[21] H. S. Ahn, Y. Q. Chen, and K. L. Moore, "Intermittent iterative learning control, " in Proc. 2006 IEEE Conf. Computer Aided Control System Design, 2006 IEEE Int. Conf. Control Applications, 2006 IEEE Int. Symp. Intelligent Control, Munich, Germany, 2006, pp. 832-837.
[22] H. S. Ahn, K. L. Moore, and Y. Q. Chen, "Discrete-time intermittent iterative learning controller with independent data dropouts". IFAC Proc. Vol. , vol.41, no.2, pp.12442–12447, 2008. DOI:10.3182/20080706-5-KR-1001.02106
[23] H. S. Ahn, K. L. Moore, and Y. Q. Chen, "Stability of discretetime iterative learning control with random data dropouts and delayed controlled signals in networked control systems, " in Proc. 10th Int. Conf. Control Automation, Robotics, and Vision, Hanoi, Vietnam, 2008, pp. 757-762.
[24] S. S. Saab, "A discrete-time stochastic learning control algorithm, " IEEE Trans. Autom. Control, vol. 46, no. 6, pp. 877-887, Jun. 2001. http://ieeexplore.ieee.org/document/928588/
[25] X. H. Bu and Z. S. Hou, "Stability of iterative learning control with data dropouts via asynchronous dynamical system, " Int. J. Autom. Comput., vol. 8, no. 1, pp. 29-36, Feb. 2011. https://link.springer.com/article/10.1007/s11633-010-0551-3
[26] X. H. Bu, Z. S. Hou, and F. S. Yu, "Stability of first and high order iterative learning control with data dropouts, " Int. J. Control Autom. Syst. , vol. 9, no. 5, pp. 843-849, Oct. 2011. https://link.springer.com/article/10.1007%2Fs12555-011-0504-9
[27] X. H. Bu, F. S. Yu, Z. S. Hou, and F. Z. Wang, "Iterative learning control for a class of nonlinear systems with random packet losses, " Nonlin. Anal. : Real World Appl., vol. 14, no. 1, pp. 567-580, Feb. 2013. https://www.sciencedirect.com/science/article/pii/S1468121812001423
[28] X. H. Bu, Z. S. Hou, F. S. Yu, and F. Z. Wang, "$H_{\infty}$ iterative learning controller design for a class of discrete-time systems with data dropouts". Int. J. Syst. Sci., vol. 45, no. 9, pp. 1902-1912, 2014. DOI:10.1080/00207721.2012.757815
[29] X. H. Bu, Z. S. Hou, S. T. Jin, and R. H. Chi, "An iterative learning control design approach for networked control systems with data dropouts, " Int. J. Robust Nonlin. Control, vol. 26, pp. 91-109, Jan. 2016.
[30] A. Hassibi, S. P. Boyd, and J. P. How, "Control of asynchronous dynamical systems with rate constraints on events, " in Proc. 38th IEEE Conf. Decision and Control, Phoenix, USA, 1999, pp. 1345-1351.
[31] X. H. Bu, H. Q. Wang, Z. S. Hou, and Q. Wei, "Stabilisation of a class of two-dimensional nonlinear systems with intermittent measurements, " IET Control Theory Appl., vol. 8, no. 15, pp. 1596-1604, Oct. 2014.
[32] J. Liu and X. E. Ruan, "Synchronous-substitution-type iterative learning control for discrete-time networked control systems with Bernoullitype stochastic packet dropouts". IMA J. Math. Control Inf. , 2017. DOI:10.1093/imamci/dnx008
[33] J. Liu and X. E. Ruan, "Networked iterative learning control for discrete-time systems with stochastic packet dropouts in input and output channels". Adv. Differ. Equat. , 2017. DOI:10.1186/s13662-017-1103-8
[34] J. Liu and X. E. Ruan, "Networked iterative learning control design for nonlinear systems with stochastic output packet dropouts, " Asian J. Control, vol. 20, no. 3, pp. 1077-1087, May 2018. http://onlinelibrary.wiley.com/doi/10.1002/asjc.1457/full?scrollTo=references
[35] D. Shen and Y. Q. Wang, "Iterative learning control for networked stochastic systems with random packet losses". Int. J. Control , vol.88, no.5, pp.959–968, 2015.
[36] D. Shen and Y. Q. Wang, "ILC for networked nonlinear systems with unknown control direction through random lossy channel, " Syst. Control Lett. , vol. 77, pp. 30-39, Mar. 2015. https://www.sciencedirect.com/science/article/pii/S016769111400276X
[37] D. Shen, C. Zhang, and Y. Xu, "Two updating schemes of iterative learning control for networked control systems with random data dropouts, " Inf. Sci. , vol. 381, pp. 352-370, Mar. 2017. https://www.sciencedirect.com/science/article/pii/S0020025516318333
[38] D. Shen, C. Zhang, and Y. Xu, "Intermittent and successive ILC for stochastic nonlinear systems with random data dropouts, " Asian J. Control, vol. 20, no. 3, May 2018. http://onlinelibrary.wiley.com/doi/10.1002/asjc.1480/full
[39] D. Shen, Y. Q. Jin, and Y. Xu, "Learning control for linear systems under general data dropouts at both measurement and actuator sides: a Markov chain approach, " J. Franklin Inst., vol. 354, no. 13, pp. 5091-5109, Sep. 2017. https://www.sciencedirect.com/science/article/pii/S0016003217302594
[40] D. Shen and J. X. Xu, "A novel Markov chain based ILC analysis for linear stochastic systems under general data dropouts environments, " IEEE Trans. Autom. Control, vol. 62, no. 11, pp. 5850-5857, Nov. 2017. http://ieeexplore.ieee.org/document/7779121/
[41] Y. Jin and D. Shen, "Iterative learning control for nonlinear systems with data dropouts at both measurement and actuator sides". Asian J. Control , 2017. DOI:10.1002/asjc.1656
[42] D. Shen and J. X. Xu, "A framework of iterative learning control under random data dropouts: mean square and almost sure convergence, " Int. J. Adapt. Control Sign. Process., vol. 31, no. 12, pp. 1825-1852, Dec. 2017. http://shendongacademy.com/Publications/J-2017-14.pdf
[43] Y. J. Pan, H. J. Marquez, T. W. Chen, and L. Sheng, "Effects of network communications on a class of learning controlled non-linear systems, " Int. J. Syst. Sci. , vol. 40, no. 7, pp. 757-767, Jan. 2009. https://dl.acm.org/citation.cfm?id=1568373.1568381
[44] L. X. Huang and Y. Fang, "Convergence analysis of wireless remote iterative learning control systems with dropout compensation, " Math. Probl. Eng., vol. 2013, Article No. 609284, Mar. 2013.
[45] C. P. Liu, J. X. Xu, and J. Wu, "Iterative learning control for remote control systems with communication delay and data dropout, " Math. Probl. Eng., vol. 2012, Article No. 705474, Jan. 2012.
[46] W. J. Xiong, L. Xu, T. W. Huang, X. H. Yu, and Y. H. Liu, "Finiteiteration tracking of singular coupled systems based on learning control with packet losses". IEEE Trans. Syst. Man Cybern.: Syst. , 2018. DOI:10.1109/TSMC.2017.2770160
[47] T. Zhang and J. M. Li, "Iterative learning control for multi-agent systems with finite-leveled sigma-delta quantization and random packet losses, " IEEE Trans. Circuit. Syst. -I: Regul. Papers, vol. 64, no. 8, pp. 2171-2181, Aug. 2017. http://ieeexplore.ieee.org/document/7914678/
[48] T. H. Gronwall, "Note on the derivatives with respect to a parameter of the solutions of a system of differential equations, " Ann. Math. , vol. 20, no. 4, pp. 292-296, Jul. 1919.
[49] J. Liu and X. E. Ruan, "Networked iterative learning control approach for nonlinear systems with random communication delay, " Int. J. Syst. Sci., vol. 47, no. 16, pp. 3960-3969, Apr. 2016. http://www.tandfonline.com/doi/full/10.1080/00207721.2016.1165894?src=recsys
[50] J. Liu and X. E. Ruan, "Networked iterative learning control design for discrete-time systems with stochastic communication delay in input and output channels, " Int. J. Syst. Sci., vol. 48, no. 9, pp. 1844-1855, Feb. 2017. https://advancesindifferenceequations.springeropen.com/articles/10.1186/s13662-017-1103-8
[51] D. Shen and H. F. Chen, "Iterative learning control for large scale nonlinear systems with observation noise, " Automatica, vol. 48, no. 3, pp. 577-582, Mar. 2012. https://www.sciencedirect.com/science/article/pii/S0005109812000192
[52] D. Shen, "Data-driven learning control for stochastic nonlinear systems: multiple communication constraints and limited storage, " IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 6, pp. 2429-2440, Jun. 2018. http://ieeexplore.ieee.org/document/7920392/
[53] T. Seel, T. Schauer, and J. Raisch, "Iterative learning control for variable pass length systems, " IFAC Proc. Vol., vol. 44, no. 1, pp. 4880-4885, Jan. 2011.
[54] T. Seel, C. Werner, and T. Schauer, "The adaptive drop foot stimulator - multivariable learning control of foot pitch and roll motion in paretic gait, " Med. Eng. Phys., vol. 38, no. 11, pp. 1205-1213, Nov. 2016. https://www.sciencedirect.com/science/article/pii/S1350453316301230
[55] T. Seel, C. Werner, J. Raisch, and T. Schauer, "Iterative learning control of a drop foot neuroprosthesis - generating physiological foot motion in paretic gait by automatic feedback control, " Control Eng. Pract. , vol. 48, pp. 87-97, Mar. 2016. https://www.sciencedirect.com/science/article/pii/S0967066115300484
[56] R. W. Longman and K. D. Mombaur, Investigating the use of iterative learning control and repetitive control to implement periodic gaits. in Fast Motions in Biomechanics and Robotics. Berlin, Heidelberg: Springer, 2014: 189-218.
[57] M. Guth, T. Seel, and J. Raisch, "Iterative learning control with variable pass length applied to trajectory tracking on a crane with output constraints, " in Proc. 52nd IEEE Ann. Conf. Decision and Control, Florence, Italy, 2013, pp. 6676-6681. https://www.sciencedirect.com/science/article/pii/S0016003216302381
[58] T. Seel, T. Schauer, and J. Raisch, "Monotonic convergence of iterative learning control systems with variable pass length". Int. J. Control , vol.90, no.3, pp.393–406, 2017. DOI:10.1080/00207179.2016.1183172
[59] X. F. Li, J. X. Xu, and D. Q. Huang, "An iterative learning control approach for linear systems with randomly varying trial lengths, " IEEE Trans. Autom. Control, vol. 59, no. 7, pp. 1954-1960, Jul. 2014. http://ieeexplore.ieee.org/document/6682999/
[60] X. F. Li, J. X. Xu, and D. Q. Huang, "Iterative learning control for nonlinear dynamic systems with randomly varying trial lengths, " Int. J. Adapt. Control Sign. Process., vol. 29, no. 11, pp. 1341-1353, Nov. 2015. http://onlinelibrary.wiley.com/doi/10.1002/acs.2543/abstract
[61] X. F. Li and J. X. Xu, "Lifted system framework for learning control with different trial lengths, " Int. J. Autom. Comput., vol. 12, no. 3, pp. 273-280, Jun. 2015.
[62] D. Shen, W. Zhang, Y. Q. Wang, and C. J. Chien, "On almost sure and mean square convergence of P-type ILC under randomly varying iteration lengths, " Automatica, vol. 63, pp. 359-365, Jan. 2016.
[63] D. Shen, W. Zhang, and J. X. Xu, "Iterative learning control for discrete nonlinear systems with randomly iteration varying lengths, " Syst. Control Lett. , vol. 96, pp. 81-87, Oct. 2016. https://www.sciencedirect.com/science/article/pii/S0167691116301001
[64] X. F. Li and D. Shen, "Two novel iterative learning control schemes for systems with randomly varying trial lengths, " Syst. Control Lett., vol. 107, pp. 9-16, Sep. 2017. https://www.sciencedirect.com/science/article/pii/S0167691117301214
[65] J. T. Shi, X. He, and D. H. Zhou, "Iterative learning control for nonlinear stochastic systems with variable pass length, " J. Franklin Inst. , vol. 353, pp. 4016-4038, Oct. 2016. https://www.sciencedirect.com/science/article/pii/S0016003216302381
[66] Y. S. Wei and X. D. Li, "Varying trail lengths-based iterative learning control for linear discrete-time systems with vector relative degree, " Int. J. Syst. Sci., vol. 48, no. 10, pp. 2146-2156, Apr. 2017.
[67] S. D. Liu, A. Debbouche, and J. R. Wang, "On the iterative learning control for stochastic impulsive differential equations with randomly varying trial lengths, " J. Comput. Appl. Math., vol. 312, pp. 47-57, Mar. 2017. https://www.sciencedirect.com/science/article/pii/S0377042715005385
[68] S. D. Liu and J. R. Wang, "Fractional order iterative learning control with randomly varying trial lengths, " J. Franklin Inst., vol. 354, no. 2, pp. 967-992, Jan. 2017. https://www.sciencedirect.com/science/article/pii/S0016003216304203
[69] L. J. Wang, X. F. Li, and D. Shen, "Sampled-data iterative learning control for continuous-time nonlinear systems with iteration-varying lengths, " Int. J. Robust Nonlin. Control, vol. 28, no. 8, pp. 3073-3091, May 2018.
[70] K. Abidi and J. X. Xu, "Iterative learning control for sampled-data systems: from theory to practice, " IEEE Trans. Ind. Electron., vol. 58, no. 7, pp. 3002-3015, Jul. 2011. http://ieeexplore.ieee.org/document/5559422/
[71] J. X. Xu, K. Abidi, X. L. Niu, and D. Q. Huang, "Sampled-data iterative learning control for a piezoelectric motor, " in Proc. 2012 IEEE Int. Symp. Industrial Electronics, Hangzhou, China, 2012, pp. 899-904.
[72] J. X. Xu, D. Q. Huang, V. Venkataramanan, and T. C. T. Huynh, "Extreme precise motion tracking of piezoelectric positioning stage using sampled-data iterative learning control, " IEEE Trans. Control Syst. Technol., vol. 21, no. 4, pp. 1432-1439, Jul. 2013. http://ieeexplore.ieee.org/document/6228523/
[73] D. Q. Huang, J. X. Xu, V. Venkataramanan, and T. C. T. Huynh, "Highperformance tracking of piezoelectric positioning stage using currentcycle iterative learning control with gain scheduling, " IEEE Trans. Ind. Electron., vol. 61, no. 2, pp. 1085-1098, Feb. 2014. http://ieeexplore.ieee.org/document/6480838/
[74] C. J. Chien, "The sampled-data iterative learning control for nonlinear systems, " in Proc. 36th Conf. Decision and Control, San Diego, California, USA, 1997, pp. 4306-4311.
[75] C. J. Chien, "A sampled-data iterative learning control using fuzzy network design, " Int. J. Control, vol. 73, no. 10, pp. 902-913, Nov. 2000.
[76] C. J. Chien, Y. C. Hung, and R. H. Chi, "Sample-data adaptive iterative learning control for a class of unknown nonlinear systems, " in Proc. 13th Int. Conf. Control, Automation, Robotics & Vision, Singapore, 2014, pp. 1461-1466.
[77] C. J. Chien and C. L. Tai, "A DSP based sampled-data iterative learning control system for brushless DC motors, " in Proc. 2004 IEEE Int. Conf. Control Applications, Taipei, China, 2004, pp. 995-1000.
[78] C. J. Chien and K. Y. Ma, "Feedback control based sampled-data ILC for repetitive position tracking control of DC motors, " in Proc. 2013 CACS Int. Automatic Control Conference, Nantou, China, 2013, pp. 377-382.
[79] C. J. Chien, Y. C. Hung, and R. H. Chi, "On the current error based sampled-data iterative learning control with reduced memory capacity, " Int. J. Autom. Comput., vol. 12, no. 3, pp. 307-315, Jun. 2015. https://link.springer.com/article/10.1007/s11633-015-0890-1
[80] M. X. Sun, D. W. Wang, and G. Y. Xu, "Sampled-data iterative learning control for SISO nonlinear systems with arbitrary relative degree, " in Proc. 2000 American Control Conf. , Chicago, USA, 2000, pp. 667-671.
[81] M. X. Sun and D. W. Wang, "Sampled-data iterative learning control for nonlinear systems with arbitrary relative degree, " Automatica, vol. 37, no. 2, pp. 283-289, Feb. 2001. https://www.sciencedirect.com/science/article/pii/S0005109800001412
[82] M. X. Sun, D. W. Wang, and Y. Y. Wang, "Sampled-data iterative learning control with well-defined relative degree, " International Journal of Robust And Nonlinear Control, vol. 14, no. 8, pp. 719-739, May 2004.
[83] S. Zhu, X. X. He, and M. X. Sun, "Initial rectifying of a sampled-data iterative learning controller, " in Proc. 6th World Congress on Intelligent Control and Automation, Dalian, China, 2006, pp. 3829-3833.
[84] M. X. Sun, Z. L. Li, and S. Zhu, "Varying-order sampled-data iterative learning control for MIMO nonlinear systems". Acta Autom. Sinica , vol.39, no.7, pp.1027–1036, 2013.
[85] T. Oomen, M. van de Wal, and O. Bosgra, "Design framework for highperformance optimal sampled-data control with application to a wafer stage, " Int. J. Control, vol. 80, no. 6, pp. 919-934, Jul. 2007. http://www.tandfonline.com/doi/full/10.1080/00207170701216329?scroll=top&needAccess=true
[86] T. Oomen, J. van de Wijdeven, and O. Bosgra, "Suppressing intersample behavior in iterative learning control, " Automatica, vol. 45, no. 4, pp. 981-988, Apr. 2009. https://www.sciencedirect.com/science/article/pii/S0005109808005311
[87] T. Oomen, J. van de Wijdeven, and O. H. Bosgra, "System identification and low-order optimal control of intersample behavior in ILC, " IEEE Trans. Autom. Control, vol. 56, no. 11, pp. 2734-2739, Nov. 2011.
[88] T. Sogo and N. Adachi, "A limiting property of the inverse of sampleddata systems on a finite-time interval, " IEEE Trans. Autom. Control, vol. 46, no. 5, pp. 761-765, May 2001. http://ieeexplore.ieee.org/document/920797/
[89] Y. Fan, S. P. He, and F. Liu, "PD-type sampled-data iterative learning control for nonlinear systems with time delays and uncertain disturbances, " in Proc. 2009 Int. Conf. Computational Intelligence and Security, Beijing, China, 2009, pp. 201-205.
[90] P. Sun, Z. Fang, and Z. Z. Han, "Sampled-data iterative learning control for singular systems, " in Proc. 4th World Congress on Intelligent Control and Automation, Shanghai, China, 2002, pp. 555-559.
[91] S. H. Zhou, Y. Tan, D. Oetomo, C. Freeman, and I. Mareels, "On on-line sampled-data optimal learning for dynamic systems with uncertainties". Proc. 9th Asian Control Conf., Istanbul, Turkey , pp.1–7, 2013.
[92] D. W. Wang, Y. Q. Ye, and B. Zhang, Practical Iterative Learning Control with Frequency Domain Design and Sampled Data Implementation. Singapore: Springer, 2014.
[93] X. H. Bu, T. H. Wang, Z. S. Hou, and R. H. Chi, "Iterative learning control for discrete-time systems with quantised measurements, " IET Control Theory Appl., vol. 9, no. 9, pp. 1455-1460, Jun. 2015. http://ieeexplore.ieee.org/iel7/4079545/7112863/07112886.pdf?arnumber=7112886
[94] Y. Xu, D. Shen, and X. H. Bu, "Zero-error convergence of iterative learning control using quantized error information, " IMA J. Math. Control Inf., vol. 34, no. 3, pp. 1061-1077, Sep. 2017.
[95] D. Shen and Y. Xu, "Iterative learning control for discrete-time stochastic systems with quantized information, " IEEE/CAA J. of Autom. Sinica, vol. 3, no. 1, pp. 59-67, Jan. 2016. http://ieeexplore.ieee.org/articleDetails.jsp?arnumber=7373763
[96] X. H. Bu, Z. S. Hou, L. Z. Cui, and J. Q. Yang, "Stability analysis of quantized iterative learning control systems using lifting representation, " Int. J. Adapt. Control Sign. Process. , vol. 31, no. 9, pp. 1327-1336, Sep. 2017.
[97] W. J. Xiong, X. H. Yu, R. Patel, and W. W. Yu, "Iterative learning control for discrete-time systems with event-triggered transmission strategy and quantization, " Automatica, vol. 72, pp. 84-91, Oct. 2016.
[98] T. Zhang and J. M. Li, "Event-triggered iterative learning control for multi-agent systems with quantization, " Asian J. Control, vol. 20, no. 3, pp. 1088-1101, May 2018.
[99] W. J. Xiong, X. H. Yu, Y. Chen, and J. Gao, "Quantized iterative learning consensus tracking of digital networks with limited information communication, " IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 6, pp. 1473-1480, Jun. 2017. http://ieeexplore.ieee.org/document/7425248/
[100] D. Shen and J. X. Xu, "Zero-error tracking of iterative learning control using probabilistically quantized measurements". Proc. 11th 2017 Asian Control Conf., Gold Coast, Australia , pp.1029–1034, 2017.
[101] Z. S. Hou and Z. Wang, "From model-based control to data-driven control: Survey, classification and perspective, " Inf. Sci., vol. 235, pp. 3-35, Jun. 2013.
[102] E. Rogers, K. Galkowski, and D. H. Owens, Control Systems Theory and Applications for Linear Repetitive Processes. Berlin Heidelberg: Springer-Verlag, 2007.