IEEE/CAA Journal of Automatica Sinica  2018, Vol. 5 Issue(6): 1054-1061
Self-Tuning Asynchronous Filter for Linear Gaussian System and Applications
Wenjun Lv1, Yu Kang2,3, Yunbo Zhao4     
1. Department of Automation, University of Science and Technology of China, Hefei 230027, China;
2. State Key Laboratory of Fire Science, Department of Automation and Institute of Advanced Technology, University of Science and Technology of China, Hefei 230027, China;
3. Key Laboratory of Technology in GeoSpatial Information Processing and Application System, Chinese Academy of Sciences, Beijing 100190, China;
4. College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
Abstract: In this paper, the optimal filtering problem for a class of linear Gaussian systems is studied. The system states are updated at a fast uniform rate while the measurements are sampled at a slow uniform rate, the former being a constant integer multiple of the latter. To solve the problem, we propose a self-tuning asynchronous filter whose contributions are twofold. First, the optimal filter at the sampling times when measurements are available is derived in the linear minimum variance sense. Second, considering the variation of noise statistics, a regulator is introduced to adjust the filtering coefficients adaptively. Case studies of a wheeled robot navigation system and an air quality evaluation system show the effectiveness and practicability of the filter in engineering.
Key words: Air quality evaluation system, linear Gaussian system, wheeled robot navigation system

Due to its conciseness, which facilitates solutions, the linear Gaussian system is a preferred model for many physical systems, especially in engineering fields such as target tracking, robot navigation, and fault diagnosis [1]. The Kalman filter has been proved to be the optimal filter for the filtering problem of linear Gaussian systems [2]. Furthermore, based on the conventional Kalman filter, a variety of improved Kalman filters have been investigated to solve different practical problems [3]-[8]. This paper is devoted to the optimal filtering problem for a class of linear Gaussian systems in which the system states are updated at a fast uniform rate and the measurements are sampled at a slow uniform rate, the updating rate being a constant integer multiple of the sampling rate. In other words, no measurement is available while the system states update during the period between two adjacent measurements. The conventional Kalman filter is therefore invalid for this case, due to the asynchronism between the updating of system states and the sampling of measurements.

The problem considered in this paper can be categorized either as a filtering problem for a special class of systems suffering measurement dropouts or delays, or as a multi-rate information fusion problem. The distributed $H_\infty$ filtering problem for a class of discrete-time Takagi-Sugeno fuzzy systems was investigated in [9]; compared with such robust filters, the optimal filter is naturally preferred. The optimal linear estimation problem for linear discrete-time stochastic systems with multiple packet dropouts is well investigated in [10]-[12]: for this kind of system, the optimal filter, predictor, and smoother were developed, whose solutions are computed via a Riccati difference equation. Regarding communication delays, the optimal filtering problem for networked systems containing multi-state Markovian transmission delays and one-step random communication delays was investigated in [13], [14]. These works assume that the occurrences of measurement dropouts and delays can be modelled as a stochastic process; however, the measurements are usually timestamped, so the time instants when these uncertainties occur are determinable. As for multi-rate sampled discrete systems, the optimal linear filter was obtained in the linear minimum variance sense [15], [16], but the state augmentation method employed there results in huge computational complexity. Although [17], [18] model the problem without resorting to state augmentation, their filtering accuracy is relatively low because the system noises are ignored during the modelling. In this paper, to solve the aforementioned problem, the optimal filter at the sampling times when measurements are available is derived in the linear minimum variance sense. Furthermore, considering the variation of noise statistics, a regulator is introduced to adjust the filtering coefficients adaptively.

The proposed filter has a typical application in wheeled robot navigation systems. External references, such as the global positioning system (GPS) or other time-consuming approaches, are often used to correct the navigation system [19]-[21], so that it suffers from the multi-rate issue. Another application of the proposed filter is in air quality evaluation systems, to estimate street air pollutant concentrations in real time; this is of great significance because it enables governments to devise more effective control strategies for improving the atmospheric environment. The conventional model-based method and the emerging mobile-sensing technology can be combined to improve performance thanks to their complementary characteristics, but they operate in an asynchronous way [22]-[24]. In this paper, we consider the application of the proposed filter to these two practical systems.

The rest of the paper is organized as follows. Section II formulates the problem. Section III discusses the solution in detail. Section IV is composed of two case studies, i.e., the wheeled robot navigation system and the air quality evaluation system. The paper is concluded in Section V.


Consider a linear Gaussian system described by the following linear stochastic difference equations

$ {\pmb{ x}}_{t+1}={F}{\pmb{ x}}_t+{G}{\pmb{ u}}_t+{\Gamma}{\pmb{ w}}_t $ (1a)
$ {\pmb{ y}}_{N t}={H}{\pmb{ x}}_{N t}+{\pmb{ v}}_{N t} $ (1b)

where ${F}$, ${G}$, ${\Gamma}$ and ${H}$ are known time-invariant matrices with proper dimensions, $t=0, 1, \ldots$ is the sampling time of the system states, $Nt$ is the sampling time of the measurements ($N$ is termed the synchronization coefficient), ${\pmb{ x}}_t\in \mathbb{R}^n$ are the system states, ${\pmb{ y}}_t\in\mathbb{R}^m$ are the measurements, ${\pmb{ u}}_t\in \mathbb{R}^p$ are the controls, and ${\pmb{ w}}_t\in\mathbb{R}^q$ and ${\pmb{ v}}_t\in \mathbb{R}^m$ are the system noise and measurement noise, respectively.
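To make the sampling pattern concrete, the following sketch simulates (1a) and (1b) with toy matrices (all numerical values here are illustrative, not from the paper): the states evolve at every step, while measurements exist only at $t=0, N, 2N, \ldots$

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system matrices (illustrative values, not from the paper)
F = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.array([[0.005], [0.1]])
Gamma = G.copy()
H = np.array([[1.0, 0.0]])
Q = np.array([[0.04]])          # system-noise variance
R = np.array([[0.25]])          # measurement-noise variance
N = 4                           # synchronization coefficient

x = np.zeros(2)
states, measurements = [], {}
for t in range(20):
    states.append(x.copy())
    if t % N == 0:              # measurement only every N-th step
        v = rng.multivariate_normal(np.zeros(1), R)
        measurements[t] = (H @ x + v).item()
    u = np.array([1.0])         # constant control input
    w = rng.multivariate_normal(np.zeros(1), Q)
    x = F @ x + G @ u + Gamma @ w

print(len(states), sorted(measurements))   # 20 states, measurements at 0, 4, 8, 12, 16
```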

The following assumptions are needed.

Assumption 1: The noises are independent Gaussian white processes such that

$\begin{array}{l} {E}\left\{\begin{bmatrix}{\pmb{ w}}_t\\{\pmb{ v}}_t\end{bmatrix}\begin{bmatrix}{\pmb{ w}}_\tau^\prime &{\pmb{ v}}_\tau^\prime\end{bmatrix}\right\} =\begin{bmatrix}{Q}&{\pmb{ 0}}\\{\pmb{ 0}}&{R}\end{bmatrix}\delta_{t\tau}\end{array} $ (2a)
$ {E}\{{\pmb{ x}}_0\}={\pmb{ \chi}} $ (2b)
$ {E}\{({\pmb{ x}}_0-{\pmb{ \chi}})({\pmb{ x}}_0-{\pmb{ \chi}})^\prime\}={\pmb{ \rho}} $ (2c)

where ${Q}$ and ${R}$ are variance matrices of ${\pmb{ w}}_t$ and ${\pmb{ v}}_t$, respectively; $\delta_{t\tau}$ is the Kronecker delta function where $t$ and $\tau$ are both sampling times, $\delta_{t\tau}=1$ if $t=\tau$ and $\delta_{t\tau}=0$ otherwise; ${E}$ represents mathematical expectation and prime is for the transposition.

Assumption 2: The pair $({\mathcal{F}}, {H})$ is completely observable, that is, the observability matrix ${{O}}=[{H}, $ ${H}{\mathcal{F}}, $ $\ldots, $ ${H}{\mathcal{F}}^{n-1}]^\prime$ is of full rank, where ${\mathcal{F}}={F}^N$.

Assumption 3: The noise variance matrices ${Q}$ and ${R}$ are unknown and slowly changing.

Remark 1: According to Assumption 1, the system and measurement noises are assumed to be independent Gaussian white processes. This is reasonable for many practical systems: random variables (RVs) in nature are often the sum of many independent RVs, and the sum of independent RVs tends to a Gaussian RV regardless of their probability distribution functions [25], which is why the noises are assumed to be Gaussian. As for the whiteness assumption, a coloured noise can be modelled as the output of a linear system driven by white noise, indicating that coloured noise can be "whitened" by the state augmentation method.

The problem is to develop a linear minimum variance filter ${\pmb{ \widehat{x}}}_{t}$ based on the measurement set $\{{\pmb{ y}}_0, {\pmb{ y}}_N, {\pmb{ y}}_{2N}, \ldots, {\pmb{ y}}_{\lfloor t/N\rfloor N}\}$ where $\lfloor t/N\rfloor$ is the largest integer not larger than $t/N$.


The proposed filter is first discussed, followed by the noise variance estimator.

A. Improved Kalman Filter

The Kalman filter has been proved to be the optimal filter for linear Gaussian systems. It estimates the system states and then obtains feedback in the form of measurements, dividing the equations into two groups: predictor equations and corrector equations. The former project the current state estimate and error covariance forward to obtain the a priori estimates for the next time instant. The latter implement the feedback, incorporating a new measurement into the a priori estimates to obtain improved a posteriori estimates. The process is repeated with the previous a posteriori estimates used to predict the new a priori estimates. Hence, the Kalman filter is a time-variant recursive optimal filter.
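The predictor-corrector cycle described above can be sketched in Python as follows; this is a generic illustration of the conventional Kalman filter, with names of our own choosing:

```python
import numpy as np

def kalman_step(x_hat, P, y, F, G, u, Gamma, H, Q, R):
    """One predictor-corrector cycle of the conventional Kalman filter."""
    # Predictor: project state estimate and error covariance forward
    x_pred = F @ x_hat + G @ u
    P_pred = F @ P @ F.T + Gamma @ Q @ Gamma.T
    # Corrector: fold the new measurement into the a priori estimate
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred
    return x_new, P_new
```

The correction always shrinks the error variance of the measured components, which is the mechanism the asynchronous filter below can exploit only every $N$-th step.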

Suppose the measurement ${\pmb{ y}}_{Nt}$ is obtained at time $Nt$, together with the optimal estimate ${\pmb{ \widehat{x}}}_{Nt}$ of ${\pmb{ x}}_{Nt}$, which is a linear combination of $\{{\pmb{ y}}_0, {\pmb{ y}}_N, \ldots, {\pmb{ y}}_{Nt}\}$. For the next time instant, at which no measurement is available, we have

$ \begin{align} {\pmb{ \widehat{x}}}_{Nt+1}={F}{\pmb{ \widehat{x}}}_{Nt}+{G}{\pmb{ u}}_{Nt} \end{align} $ (3)

which is an unbiased estimation. Define the estimation error ${\pmb{ \overline{x}}}_t$ $={\pmb{ x}}_t-{\pmb{ \widehat{x}}}_t$ and its variance ${P}_t={E}\{{\pmb{ \overline{x}}}_t {\pmb{ \overline{x}}}_t^\prime\}$, then

$ \begin{align} {P}_{Nt+1} &={E}\{{\pmb{ \overline{x}}}_{Nt+1} {\pmb{ \overline{x}}}_{Nt+1}^\prime\} \notag\\ &={E}\{({\pmb{ x}}_{Nt+1}-{\pmb{ \widehat{x}}}_{Nt+1})({\pmb{ x}}_{Nt+1}-{\pmb{ \widehat{x}}}_{Nt+1})^\prime\} \notag\\ &={E}\{ ({F}{\pmb{ \overline{x}}}_{Nt}+{\Gamma}{\pmb{ w}}_{Nt})({F}{\pmb{ \overline{x}}}_{Nt}+{\Gamma}{\pmb{ w}}_{Nt})^\prime \} \notag\\ &={E}\{ {F}{\pmb{ \overline{x}}}_{Nt}{\pmb{ \overline{x}}}_{Nt}^\prime{F}^\prime+{\Gamma}{\pmb{ w}}_{Nt}{\pmb{ w}}_{Nt}^\prime{\Gamma}^\prime \} \notag \\ &={F}{P}_{Nt}{F}^\prime+{\Gamma}{Q}{\Gamma}^\prime \end{align} $ (4)

which is essentially the predicted estimation error variance in the conventional Kalman filter.

During the period of $\{Nt+\tau, \tau=1, 2, \ldots, N-1\}$,

$ {\pmb{ \widehat{x}}}_{Nt+\tau} ={F}{\pmb{ \widehat{x}}}_{Nt+\tau-1}+{G}{\pmb{ u}}_{Nt+\tau-1} $ (5a)
$ ={F}^\tau{\pmb{ \widehat{x}}}_{Nt}+\sum\limits_{i=0}^{\tau-1}{F}^{\tau-1-i}{G}{\pmb{ u}}_{Nt+i} $ (5b)

with the estimation error variance of

$ {P}_{Nt+\tau} ={F}{P}_{Nt+\tau-1}{F}^\prime+{\Gamma}{Q}{\Gamma}^\prime $ (6a)
$ ={F}^\tau{P}_{Nt}{F}^{\tau\prime}+\sum\limits_{i=0}^{\tau-1}{F}^i{\Gamma}{Q}{\Gamma}^\prime{F}^{i\prime} $ (6b)

At the time $Nt+N$ when the measurement is available, we have

$ {\pmb{ \widetilde{x}}}_{Nt+N}={\mathcal{F}}{\pmb{ \widehat{x}}}_{Nt}+{\pmb{ \mu}}_{Nt} $ (7a)
$ {\widetilde{P}}_{Nt+N} ={\mathcal{F}}{P}_{Nt}{\mathcal{F}}^\prime+{\mathcal{Q}} $ (7b)

where ${\mathcal{F}}={F}^N$, ${\pmb{ \mu}}_{Nt}=\sum_{i=0}^{N-1}{F}^{N-1-i}{G}{\pmb{ u}}_{Nt+i}$ and ${\mathcal{Q}}=\sum_{i=0}^{N-1}{F}^i{\Gamma}{Q}{\Gamma}^\prime{F}^{i\prime}$. Here ${\pmb{\widetilde{x}}}_{Nt+N}$ and ${\widetilde{P}}_{Nt+N}$ are essentially the predicted state estimate and predicted error variance of a conventional Kalman filter. Similar to the conventional Kalman filter [26], we can deduce the so-called asynchronous Kalman filter (ASYNCKF) presented in Algorithm 1. We use the word "asynchronous" to indicate that the predicted estimation of the system states and the correction by measurements are not synchronized.
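Since Algorithm 1 itself is not reproduced here, the following Python sketch shows one plausible reading of the ASYNCKF recursion (3)-(7): the estimate is propagated open-loop between measurements and corrected at every $N$-th step. Function and variable names are our own assumptions.

```python
import numpy as np

def asynckf(y_seq, u_seq, F, G, Gamma, H, Q, R, N, x0, P0):
    """Sketch of the asynchronous Kalman filter: the state estimate is
    propagated open-loop for N-1 steps, per (3) and (4), and corrected
    whenever a measurement arrives (every N-th step)."""
    x_hat, P = x0.copy(), P0.copy()
    n = len(x0)
    estimates = []
    for t, u in enumerate(u_seq):
        if t % N == 0 and t // N < len(y_seq):
            # correction step: fold the available measurement in
            y = y_seq[t // N]
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x_hat = x_hat + K @ (y - H @ x_hat)
            P = (np.eye(n) - K @ H) @ P
        estimates.append(x_hat.copy())
        # prediction step, applied at every sampling time of the states
        x_hat = F @ x_hat + G @ u
        P = F @ P @ F.T + Gamma @ Q @ Gamma.T
    return estimates
```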


The effect of $N$ on the filtering performance can be obtained by a simple analysis. The estimation error variance ${P}_{t}$ increases continuously during the period without measurements, because there are no measurements to correct the estimate. Meanwhile, a larger $N$ results in a larger ${\mathcal{Q}}$, and thus a larger ${P}_{t}$ at $t=0, N, 2N, \ldots$. Therefore, the filter performs better as $N$ decreases.

The ASYNCKF algorithm is a time-varying recursive filter implemented by calculating the optimal Kalman gain ${K}_t$ at $t=0, N, 2N, \ldots$, which brings great computational complexity, especially for high-dimensional matrices. However, if ${K}_t$ converges to a constant matrix ${K}$ as $t\rightarrow\infty$, a time-invariant filter can be obtained by substituting ${K}$ for ${K}_t$. This is only possible if the process equation (1a) and the measurement model (1b) are synchronized.

Considering the period without measurements of $\{Nt+\tau, $ $\tau$ $=1, 2, \ldots, N-1\}$,

$ {\pmb{ {x}}}_{Nt+\tau} ={F}{\pmb{ {x}}}_{Nt+\tau-1}+{G}{\pmb{ u}}_{Nt+\tau-1} +{\Gamma}{\pmb{ w}}_{Nt+\tau-1} $ (8a)
$ ={F}^\tau{\pmb{ {x}}}_{Nt}+\sum\limits_{i=0}^{\tau-1}{F}^{\tau-1-i}{G}{\pmb{ u}}_{Nt+i}+\sum\limits_{i=0}^{\tau-1}{F}^{\tau-1-i}{\Gamma}{\pmb{ w}}_{Nt+i} $ (8b)

and then setting $\tau=N$, the synchronization model of (1a) and (1b) can be derived as

$ {\pmb{ x}}_{Nt+N}={\mathcal{F}}{\pmb{ x}}_{Nt}+{\pmb{ \mu}}_{Nt}+{\pmb{ \omega}}_{Nt} $ (9a)
$ {\pmb{ y}}_{Nt}={H}{\pmb{ x}}_{N t}+{\pmb{ v}}_{N t} $ (9b)

where ${\mathcal{F}}={F}^N$, ${\pmb{ \mu}}_{Nt}=\sum_{i=0}^{N-1}{F}^{N-1-i}{G}{\pmb{ u}}_{Nt+i}$ and ${\pmb{ \omega}}_{Nt}=\sum_{i=0}^{N-1}{F}^{N-1-i}{\Gamma}{\pmb{ w}}_{Nt+i}$ is the Gaussian white noise with variance matrix ${\mathcal{Q}}=\sum_{i=0}^{N-1}{F}^i{\Gamma}{Q}{\Gamma}^\prime{F}^{i\prime}$. A sufficient condition for the convergence of ${K}_t$ at $t=0, N, 2N, \ldots$ is that the controllability matrix ${{C}}=[{I}, {\mathcal{F}}, \ldots, {\mathcal{F}}^{n-1}]$ and the observability matrix ${{O}}=[{H}, {H}{\mathcal{F}}, \ldots, {H}{\mathcal{F}}^{n-1}]^\prime$ are of full rank. The first condition obviously holds, and the second holds under Assumption 2. Then, we have the time-invariant form of ASYNCKF as

$ \begin{align} {\pmb{ \widehat{x}}}_{t+1}=({I}-{K}{H})({F}{\pmb{ \widehat{x}}}_t+{G}{\pmb{ u}}_t)+{K}{\pmb{ y}}_t \end{align} $ (10)

where the matrix ${K}$ is set as a null matrix at $t\neq 0, N, 2N, \ldots$ and a non-zero matrix at $t=0, N, 2N, \ldots$ whose value can be determined by solving the following equation set

$ {E}={\mathcal{F}}{P}{\mathcal{F}}^\prime+{\mathcal{Q}} $ (11a)
$ {K}={E}{H}^\prime({H}{E}{H}^\prime+{R})^{-1} $ (11b)
$ {P}=({I}-{K}{H}){E}. $ (11c)

It has been proven that the time-invariant filter converges to the optimal time-variant Kalman filter with probability one, and therefore the convergence of the time-invariant form of ASYNCKF follows easily [27]. Note that we use the word "time-invariant" here to denote the invariance of ${K}_{Nt}$: the Kalman gain switches between two constant matrices instead of ranging over an infinite set. This greatly reduces the computational complexity, making the time-invariant form more applicable in practice in spite of its sub-optimality.
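Numerically, the constant gain ${K}$ can be obtained by iterating the recursion in (11) until it settles. A minimal sketch, assuming the lifted matrices $\mathcal{F}$ and $\mathcal{Q}$ of the synchronized model (9a) have already been formed (function and argument names are ours):

```python
import numpy as np

def steady_gain(Fc, Qc, H, R, iters=500):
    """Iterate the Riccati recursion of the synchronized model until the
    Kalman gain settles, yielding the constant gain K used by the
    time-invariant ASYNCKF at measurement times."""
    n = Fc.shape[0]
    P = np.eye(n)
    K = np.zeros((n, H.shape[0]))
    for _ in range(iters):
        E = Fc @ P @ Fc.T + Qc                          # predicted covariance
        K = E @ H.T @ np.linalg.inv(H @ E @ H.T + R)    # steady gain candidate
        P = (np.eye(n) - K @ H) @ E                     # corrected covariance
    return K
```

Between measurement times the gain is simply set to the null matrix, as in (10).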

B. Noise Variance Estimator

The proposed regulator can be described as noise variance estimators ${\mathcal{\widehat{Q}}}_{Nt}$ and ${\widehat{R}}_{Nt}$, which are linear functions of $\{{\pmb{ y}}_{Nt}, t=0, 1, 2, \ldots\}$. Rewriting (9a) yields

$ {\pmb{ x}}_{Nt+N}=q^{-N}{\mathcal{F}}{\pmb{ x}}_{Nt+N}+{\pmb{ \mu}}_{Nt}+{\pmb{ \omega}}_{Nt} $ (12)

where $q^{-1}$ is the backward shift operator such that $q^{-1}{\pmb{ x}}_{t+1}={\pmb{ x}}_t$. Then

$ {\pmb{ x}}_{Nt+N}=({I}-q^{-N}{\mathcal{F}})^{-1}({\pmb{ \mu}}_{Nt}+{\pmb{ \omega}}_{Nt}). $ (13)

Substituting (13) into (9b) yields

$ {\pmb{ y}}_{Nt}={H}({I}-q^{-N}{\mathcal{F}})^{-1}(q^{-N}{\pmb{ \mu}}_{Nt}+q^{-N}{\pmb{ \omega}}_{Nt})+{\pmb{ v}}_{Nt} $ (14)

and then using

$ ({I}-q^{-N}{\mathcal{F}})^{-1}= \frac{{\rm adj}({I}-q^{-N}{\mathcal{F}})}{{\rm det}({I}-q^{-N}{\mathcal{F}})} $ (15)

where ${\rm adj}$ denotes the adjugate matrix and ${\rm det}$ the determinant, (14) becomes

$ \begin{align} &{\rm det}({I}-q^{-N}{\mathcal{F}}){\pmb{ y}}_{Nt}= \notag\\ &\qquad {H}\,{\rm adj}({I}-q^{-N}{\mathcal{F}})(q^{-N}{\pmb{ \mu}}_{Nt}+ q^{-N}{\pmb{ \omega}}_{Nt}) \notag\\ &\qquad\ \ +{\rm det}({I}-q^{-N}{\mathcal{F}}){\pmb{ v}}_{Nt}. \end{align} $ (16)

Conducting the left-coprime factorization [28] on both sides, we have

$ \begin{align} &{A}(q^{-N}){\pmb{ y}}_{Nt}= \notag\\ &{B}_\mu(q^{-N}){\pmb{ \mu}}_{Nt}+{B}_\omega(q^{-N}){\pmb{ \omega}}_{Nt}+{A}(q^{-N}) {\pmb{ v}}_{Nt} \end{align} $ (17)

where ${A}(q^{-N})$, ${B}_\mu(q^{-N})$ and ${B}_\omega(q^{-N})$ are polynomial matrices with the form of

$ {X}(q^{-N})={X}_0+q^{-N}{X}_1+\cdots+q^{-N\ell_x}{X}_{\ell_x} $

with ${A}_0={I}$ and ${B}_{\mu, 0}={B}_{\omega, 0}={\pmb{ 0}}$. Let ${\pmb{ z}}_{Nt}={A}(q^{-N}){\pmb{ y}}_{Nt}-{B}_\mu(q^{-N}){\pmb{ \mu}}_{Nt}$, then

$ {\pmb{ z}}_{Nt}={B}_\omega(q^{-N}){\pmb{ \omega}}_{Nt}+{A}(q^{-N}){\pmb{ v}}_{Nt} $ (18)

where ${\pmb{ z}}_{Nt}$ is the sum of two independent moving average processes, so ${\pmb{ z}}_{Nt}$ is another moving average process whose order is the same as that of the component process of higher order [29].

Define the correlation function of ${\pmb{ z}}_{Nt}$ as ${\Lambda}^{(i)}={E}\{{\pmb{ z}}_{Nt}{\pmb{ z}}^\prime_{N(t-i)}\}$, where $i=0, 1, 2, \ldots, \ell_z$ and $\ell_z=\max\{i: {\Lambda}^{(i)}\neq{\pmb{ 0}}\}$. Then

$ {\Lambda}^{(i)}=\sum\limits_{k=i}^{\ell_\lambda}({B}_{\omega, k} {\mathcal{Q}}{B}_{\omega, k-i}^\prime+ {A}_{k}{R}{A}_{k-i}^\prime) $ (19)

where $\ell_\lambda=\max(\ell_a, \ell_b)$ with $\ell_\lambda\geq\ell_z$, ${A}_k={\pmb{ 0}}$ for $k>\ell_a$, and ${B}_{\omega, k}={\pmb{ 0}}$ for $k>\ell_b$. Expanding (19) element-wise yields the following linear equations

$ {\Delta}{\pmb{ \theta}}={\pmb{ \vartheta}} $ (20)

where ${\pmb{ \theta}}$ is an $\ell_\theta\times 1$ column vector consisting of all the unknown elements of ${\mathcal{Q}}$ and ${R}$, ${\pmb{ \vartheta}}$ is an $\ell_\vartheta\times 1$ column vector whose elements each consist of a constant plus one element of ${\Lambda}^{(i)}$, and ${\Delta}$ is a known $\ell_\vartheta\times\ell_\theta$ constant matrix. If ${\Delta}$ has full column rank $\ell_\theta$, then $\ell_\theta$ linearly independent equations can be selected, giving

$ {\underline{\Delta}}{\pmb{ \theta}}={\pmb{ \underline{\vartheta}}} $ (21)

where ${\underline{\Delta}}$ is a known $\ell_\theta\times\ell_\theta$ nonsingular constant matrix. So ${\pmb{ \theta}}$ can be solved as

$ {\pmb{ \theta}}={\underline{\Delta}}^{-1}{\pmb{ \underline{\vartheta}}}. $ (22)

Replacing ${\pmb{ \underline{\vartheta}}}$ in (22) with its estimate yields

$ {\pmb{ \widehat{\theta}}}_{Nt}={\underline{\Delta}}^{-1}{\pmb{ \underline{\widehat{\vartheta}}}} _{Nt} $ (23)

where ${\pmb{ \widehat{\theta}}}_{Nt}$ contains the estimates ${\mathcal{\widehat{Q}}}_{Nt}$ and ${\widehat{R}}_{Nt}$.

So the remaining work is to design the estimators for ${\Lambda}^{(i)}$, that is, ${\widehat{\Lambda}}_{Nt}^{(i)}$. Furthermore, considering the variation of noise statistics, the estimator ${\widehat{\Lambda}}_{Nt}^{(i)}$ must gradually discard the history data, and thus

$ {\widehat{\Lambda}}_{Nt}^{(i)}=\frac{1}{t}\sum\limits_{k=i}^{t}\gamma^{t-k}{\pmb{ z}}_{Nk}{\pmb{ z}}_{N(k-i)}^\prime $ (24a)
$ \begin{align} &=\frac{1}{t}\left(\gamma\sum\limits_{k=i}^{t-1}\gamma^{t-1-k}{\pmb{ z}}_{Nk}{\pmb{ z}}_{N(k-i)}^\prime+{\pmb{ z}}_{Nt}{\pmb{ z}}_{N(t-i)}^\prime\right) \notag\\ &=\left(1-\frac{1}{t}\right)\gamma{\widehat{\Lambda}}_{Nt-N}^{(i)}+\frac{1}{t}{\pmb{ z}}_{Nt}{\pmb{ z}}_{N(t-i)}^\prime \notag\\ &=\gamma{\widehat{\Lambda}}_{Nt-N}^{(i)}+\frac{1}{t}\left({\pmb{ z}}_{Nt}{\pmb{ z}}_{N(t-i)}^\prime-\gamma{\widehat{\Lambda}}_{Nt-N}^{(i)}\right)\end{align} $ (24b)

where $0 < \gamma < 1$ is the fading factor; the smaller $\gamma$ is, the faster the history data fade. The recursion is only applicable at times $Nt$ with $t\geq i$, while the estimate over the remaining period can be initialized by experience or even arbitrarily, thanks to the asymptotic optimality. For slowly changing noises, $\gamma$ should be selected close to one. To facilitate implementation, the recursive form is derived as shown in (24b). By the ergodicity of stationary stochastic processes, we have ${\widehat{\Lambda}}_{Nt}^{(i)}\rightarrow {\Lambda}^{(i)}$ with probability one as $t\rightarrow\infty$.
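The recursive fading update in (24b) amounts to a one-line correction of the previous estimate. A minimal sketch (function and argument names are ours):

```python
import numpy as np

def fading_update(Lambda_prev, z_t, z_lag, t, gamma=0.95):
    """Recursive fading estimate of the correlation function, following
    the recursion in (24b): old data are discounted by gamma so the
    estimator can track slowly changing noise statistics."""
    outer = np.outer(z_t, z_lag)               # current sample z_Nt z'_N(t-i)
    return gamma * Lambda_prev + (outer - gamma * Lambda_prev) / t
```

At each measurement time only the previous estimate, the current $t$, and the two lagged samples of ${\pmb z}$ need to be stored.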

The method in (24a) and (24b) is termed the fading estimation. The alternative is the finite-memory estimation, with the form of

$ {\widehat{\Lambda}}_{Nt}^{(i)}=\frac{1}{T}\sum\limits_{k=t-T+1}^{t}{\pmb{ z}}_{Nk}{\pmb{ z}}_{N(k-i)}^\prime $ (25)

whose recursive form does not exist. Nevertheless, this method is still a promising solution. In Fig. 1, the blue plots represent the weights that the fading estimation assigns to the history data: all history data except the current one are shrunk sharply when $\gamma$ is set to a relatively small positive value, and the fading factor controls the steepness of the blue plots. The finite-memory estimation, shown as the red curve, performs much better than the fading estimation in spite of the nonexistence of a recursive form. Hence, the fading estimation is favored in real-time situations, while the finite-memory estimation is favored in scenarios requiring greater accuracy.

Fig. 1 The comparison between fading estimation and finite-memory estimation on the weights assigned to the history data.

Substituting ${\widehat{\Lambda}}_{Nt}^{(i)}$ into (23), the noise variance estimates ${\mathcal{\widehat{Q}}}_{Nt}$ and ${\widehat{R}}_{Nt}$ can be worked out and applied to adjust the Kalman gain. Note that there is no need to estimate ${Q}$, since the correction of ${\widetilde{P}}_{Nt}$ can be carried out by applying (7b), which involves ${\mathcal{Q}}$ only. The noise variance estimator converges to the real value with probability one, and thus the improved Kalman filter possesses optimality. The detailed procedure is given in Algorithm 2.


Two simulation tests, on the wheeled robot navigation system and the air quality evaluation system, are presented to verify the proposed self-tuning asynchronous filter.

A. Wheeled Robot Navigation System

A global positioning system (GPS) aided wheeled robot navigation system is composed of three parts: the space segment, the control segment and the user segment, as illustrated in Fig. 2. The wheeled robot is mounted with a GPS receiver to determine its kinematic states. However, the GPS has a low updating frequency, and its signal may weaken for various reasons.

Fig. 2 Illustration for wheeled robot navigation system.

The kinematics of a wheeled robot on roads can be approximately treated as motion on a two-dimensional plane, which can be decomposed into an eastward motion and a northward motion. Let $T$ be the sampling time interval. The eastward motion is described by the eastward location $x_{E, t}$ and eastward velocity $\dot{x}_{E, t}$ at time $Tt$. Similarly, the northward motion is described by the northward location $x_{N, t}$ and northward velocity $\dot{x}_{N, t}$ at time $Tt$. Based on Newton's laws of motion, the eastward and northward motions are modelled as

$ x_{E, t+1}=x_{E, t}+T\dot{x}_{E, t}+0.5T^2(u_{E, t}+w_{E, t}) $ (26a)
$ \dot{x}_{E, t+1}=\dot{x}_{E, t}+T(u_{E, t}+w_{E, t}) $ (26b)
$ x_{N, t+1}=x_{N, t}+T\dot{x}_{N, t}+0.5T^2(u_{N, t}+w_{N, t}) $ (26c)
$ \dot{x}_{N, t+1}=\dot{x}_{N, t}+T(u_{N, t}+w_{N, t}) $ (26d)

where $u_{E, t}$, $u_{N, t}$, $w_{E, t}$ and $w_{N, t}$ are the eastward input acceleration, northward input acceleration, eastward random acceleration, and northward random acceleration, respectively. They can be modelled as Gaussian white noises with changing variances. The updating rate of the GPS is usually slower than that of $u_{E, t}$ and $u_{N, t}$; thus we have the measurement equations

$ y_{E, Nt}=x_{E, Nt}+v_{E, Nt} $ (27a)
$ y_{N, Nt}=x_{N, Nt}+v_{N, Nt} $ (27b)

where $y_{E, Nt}$ and $y_{N, Nt}$ are the eastward and northward locations derived from the GPS readings. The noises $v_{E, Nt}$ and $v_{N, Nt}$ may change with time due to the number of observable satellites or interference signals. Similarly, they can be modelled as Gaussian white noises with changing variances.

Introducing the state variables ${\pmb{ x}}_t=[x_{E, t}, \dot{x}_{E, t}, x_{N, t}, \dot{x}_{N, t}]^\prime$, ${\pmb{ y}}_{Nt}$ $=[y_{E, Nt}, y_{N, Nt}]^\prime$, ${\pmb{ u}}_t=[u_{E, t}, u_{N, t}]^\prime$, ${\pmb{ w}}_t=[w_{E, t}, w_{N, t}]^\prime$ and ${\pmb{ v}}_{Nt}=[v_{E, Nt}, v_{N, Nt}]^\prime$, the equations from (26a) to (26d), (27a) and (27b) can be rewritten in the form of (1a), (1b), (9a) and (9b) where

$ \begin{array}{l} F = \left[ {\begin{array}{*{20}{c}} 1&T&0&0\\ 0&1&0&0\\ 0&0&1&T\\ 0&0&0&1 \end{array}} \right],\;\;\;{\cal F} = \left[ {\begin{array}{*{20}{c}} 1&{NT}&0&0\\ 0&1&0&0\\ 0&0&1&{NT}\\ 0&0&0&1 \end{array}} \right]\\ G = \Gamma = \left[ {\begin{array}{*{20}{c}} {0.5{T^2}}&0\\ T&0\\ 0&{0.5{T^2}}\\ 0&T \end{array}} \right],\;\;\;H = \left[ {\begin{array}{*{20}{c}} 1&0&0&0\\ 0&0&1&0 \end{array}} \right]\\ Q = \left[ {\begin{array}{*{20}{c}} {\sigma _E^2}&0\\ 0&{\sigma _N^2} \end{array}} \right],\;\;R = \left[ {\begin{array}{*{20}{c}} {\varsigma _E^2}&0\\ 0&{\varsigma _N^2} \end{array}} \right]. \end{array} $
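As a sanity check, the observability condition of Assumption 2 can be verified numerically for these matrices. The sketch below uses the stated $T=0.1$; the value $N=10$ is an illustrative assumption, not taken from the paper:

```python
import numpy as np

T, N = 0.1, 10          # sampling interval (from the test); N is a toy value

F = np.array([[1, T, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1.0]])
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0.0]])

Fc = np.linalg.matrix_power(F, N)               # lifted transition matrix F^N
# Observability matrix O = [H; H Fc; H Fc^2; H Fc^3] for the 4-state model
O = np.vstack([H @ np.linalg.matrix_power(Fc, i) for i in range(4)])
print(np.linalg.matrix_rank(O))                 # full rank 4, so Assumption 2 holds
```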

Note that this modelling procedure, which applies only basic physical principles, is very typical. The problem is also termed the target tracking problem, a popular research topic worldwide [27], [28].

Before the simulation, some parameters should be determined: the sampling time interval $T=0.1$, the noise variances $\sigma^2_E = \sigma^2_N=49$ for the first half period and $\sigma^2_E=900$ for the second half period, and $\varsigma^2_E=400$ for the whole test. To verify the effectiveness of the proposed filter, the following three cases are considered.

Case 1: The test is done using the asynchronous filter without considering the variation of noise characteristics.

Case 2: The test is done using the self-tuning asynchronous filter.

Case 3: The test is done using the conventional Kalman filter. The last measurement received will be used to project forward the estimation while the measurements are missing.

The results are shown in Fig. 3, where the black curves represent the real tracks and those in red, blue, and green the estimates under the different cases. The error is defined as the linear distance between the estimated and real locations. The comparison between Case 1 and Case 2 indicates that the noise variance estimator is of significant importance, for it eliminates to the largest extent the influence brought by the variation of noise characteristics. Using the last measurement to project the estimate forward can smooth the estimates during the period without measurements, but the comparison between Case 2 and Case 3 shows the superiority of the self-tuning asynchronous filter. Furthermore, the mean errors under the three cases are $54.74$, $39.54$, and $72.03$, respectively, so the proposed filter has the smallest mean error. In summary, the proposed filter outperforms the existing methods.

Fig. 3 Localization results and errors under the Cases 1, 2 and 3.

Partially enlarging Fig. 3 (b) yields Fig. 4, where the blue circles represent measurements, the red curves the estimates, and the black curves the real tracks. It can be seen that the estimate derived by the dead-reckoning method without correction would diverge, but the GPS corrects the estimate in time once the error grows large. Although the performance of the dead-reckoning method deteriorates over time, it performs quite well over a short period, which compensates for the absence of GPS. Hence the wheeled robot navigation system usually integrates the two methods by fusing their outputs, to achieve a relatively accurate estimate in real time.

Fig. 4 Partially enlarged detail in the Fig. 3 (b).
B. Air Quality Evaluation System

Based on the existing dispersion models in [30], [31], we present a discrete linear parameterised box-type dispersion model, a semi-empirical model which makes use of a priori assumptions about the flow and dispersion conditions. As illustrated in Fig. 5, a street can be treated as a closed box whose top exchanges air with the high-level atmosphere, owing to the great depth of the street canyon. The inner atmosphere is relatively stable, and the pollutant concentration decreases by a fixed proportion $k$ determined by the street structure. Due to the outmigration of high-pollution factories and developments in the energy industry, traffic emission has become the dominant factor in street air pollution, especially in traffic-dense areas, so it is assumed that the air pollution arises almost solely from the traffic emission $u$. The high-level atmosphere is relatively stable, so its pollutant concentration is represented by a constant $\pi$. Hence, the atmospheric dispersion system can be described by

$ \begin{align} x_{t+1}-\pi=k(x_{t}-\pi)+u_{t}+w_{t} \end{align} $ (28)

where $x_t$ is the street (inner box) pollutant concentration, $\pi$ the average high-level (external box) pollutant concentration, $u_t$ the new concentration from traffic emission, $w_t$ a Gaussian white noise, $k\in(0, 1)$ the dispersion coefficient and $t$ the discrete time. The equation reveals how the inner-external concentration difference changes over time. The Gaussian white noise $w_t$ is introduced to account for the uncertainty in the model and improve its precision.

Fig. 5 Illustration for atmospheric dispersion system.

The monitoring cars equipped with atmospheric sensors can be used to correct the estimation. Rearranging (28) and modeling the mobile-sensing method yields

$ x_{t+1}=kx_{t}+\chi+u_{t}+w_{t} $ (29a)
$ y_{Nt}=x_{Nt}+v_{Nt} $ (29b)

where $\chi=(1-k)\pi$, $y_{Nt}$ stands for the output of the monitoring car, and $w_t\sim { {\rm N}}(0, q)$ and $v_t\sim { {\rm N}}(0, r)$ are two independent Gaussian white noises. The constant $\chi$ can be seen as a time-invariant part of $u_{t}$, so this model is consistent with (1a) and (1b).

Set the parameters as follows: dispersion coefficient $k=0.8$, noise variance $q=49$ for the first half period, $q=169$ for the second half period, and $r=250$ for the whole test. In this test, the variation of the estimation error variance under different $N$ is observed, which gives a better understanding of how the multi-rate-induced asynchronism influences accuracy.
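A minimal simulation of the scalar model (29a)-(29b) under these parameters can be sketched as follows. The constant traffic emission $u_t$, the high-level concentration $\pi$, and $N=12$ are illustrative assumptions of ours, not values stated in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
k, chi = 0.8, 5.0            # dispersion coefficient; chi = (1-k)*pi with pi assumed 25
q, r, N = 49.0, 250.0, 12    # noise variances from the test; N is illustrative

x, x_hat, P = 100.0, 100.0, 1.0
for t in range(600):
    if t % N == 0:           # mobile-sensing measurement available
        y = x + rng.normal(0, np.sqrt(r))
        K = P / (P + r)      # scalar Kalman gain
        x_hat += K * (y - x_hat)
        P *= (1 - K)
    u = 10.0                 # assumed constant traffic emission
    w = rng.normal(0, np.sqrt(q))
    x = k * x + chi + u + w          # true concentration, (29a)
    x_hat = k * x_hat + chi + u      # open-loop prediction between measurements
    P = k * P * k + q                # error variance propagation

print(P)   # stays bounded even between corrections, since |k| < 1
```

This also illustrates the observation below for $N=\infty$: because the system itself is stable ($k<1$), the error variance settles at a finite value even without measurements.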

The results are shown in Fig. 6. The first test result, in (a), shows the asymptotic stability, where the weight $\alpha_t$ is set to $0$ initially and then converges to $0.6917$ and $0.4761$; the filter assigns more weight to the measurements as ${Q}$ increases. Test results (b) to (f) show the estimation error variance over time for different measurement sampling times. The curves in (b) show the estimation error variance with $N$ set to $\infty$, that is, with no measurements available during the whole test. The error nevertheless does not go to infinity, but settles at $257.9$ and $889.5$, because the system itself is stable. The curves in (c) to (e) are obtained by setting $N$ to $48$, $12$ and $3$; their error bounds are $[126.9, 257.9]$, $[116.4, 208.6]$, $[93.85, 125]$ for the first half period and $[195.1, 889.5]$, $[183.5, 643.3]$, $[155, 294.5]$ for the second half period, respectively. The filter reduces the estimation error variance as it grows large, so the upper bound is significantly reduced before the variance reaches steady state. As the sampling time of the measurements gets smaller, the upper bound of the error is reduced accordingly.

Fig. 6 Simulation results for $\alpha_t$ at $t=0, N, 2N, \ldots$ and the estimation error variance under different $N$.

In this paper, the optimal filtering problem for a class of linear Gaussian systems with asynchronous measurement sampling has been studied. A self-tuning asynchronous filter has been proposed, whose noise variance estimator adjusts the filtering coefficients adaptively. Two case studies, on a wheeled robot navigation system and an air quality evaluation system, have demonstrated the effectiveness and practicability of the proposed approach.

[1] Q. B. Ge, T. Shao, Q. M. Yang, X. F. Shen, and C. L. Wen, "Multisensor nonlinear fusion methods based on adaptive ensemble fifth-degree iterated cubature information filter for biomechatronics, " IEEE Trans. Syst. Man Cybern. Syst., vol. 46, no. 7, pp. 912-925, Jul. 2016.
[2] R. E. Kalman, "A new approach to linear filtering and prediction problems, " J. Basic Eng., vol. 82, no. 1, pp. 35-45, Mar. 1960.
[3] X. Lu, L. L. Wang, H. X. Wang, and X. H. Wang, "Kalman filtering for delayed singular systems with multiplicative noise, " IEEE/CAA J. Autom. Sinica, vol. 3, no. 1, pp. 51-58, Jan. 2016.
[4] C. X. Miao and J. J. Li, "Autonomous landing of small unmanned aerial rotorcraft based on monocular vision in GPS-denied area, " IEEE/CAA J. Autom. Sinica, vol. 2, no. 1, pp. 109-114, Jan. 2015.
[5] B. Feng, H. B. Ma, M. Y. Fu, and S. T. Wang, "A framework of finite-model Kalman filter with case study: MVDP-FMKF algorithm, " Acta Autom. Sinica, vol. 39, no. 8, pp. 1246-1256, Aug. 2013.
[6] B. Allik, C. Miller, M. J. Piovoso, and R. Zurakowski, "The Tobit Kalman filter: an estimator for censored measurements, " IEEE Trans. Control Syst. Technol., vol. 24, no. 1, pp. 365-371, Jan. 2016.
[7] I. Arasaratnam and S. Haykin, "Cubature Kalman filters, " IEEE Trans. Automat. Control, vol. 54, no. 6, pp. 1254-1269, Jun. 2009.
[8] S. C. Smith and P. Seiler, "Estimation with lossy measurements: jump estimators for jump systems, " IEEE Trans. Automat. Control, vol. 48, no. 12, pp. 2163-2171, Dec. 2003.
[9] L. X. Zhang, Z. P. Ning, and Z. D. Wang, "Distributed filtering for fuzzy time-delay systems with packet dropouts and redundant channels, " IEEE Trans. Syst. Man Cybern. Syst., vol. 46, no. 4, pp. 559-572, Apr. 2016.
[10] S. L. Sun, L. H. Xie, W. D. Xiao, and Y. C. Soh, "Optimal linear estimation for systems with multiple packet dropouts, " Automatica, vol. 44, no. 5, pp. 1333-1342, May 2008.
[11] S. L. Sun, "Optimal linear filters for discrete-time systems with randomly delayed and lost measurements with/without time stamps, " IEEE Trans. Automat. Control, vol. 58, no. 6, pp. 1551-1556, Jun. 2013.
[12] S. L. Sun, "Optimal linear estimators for discrete-time systems with one-step random delays and multiple packet dropouts, " Acta Autom. Sinica, vol. 38, no. 3, pp. 349-354, Mar. 2012.
[13] C. Y. Han, H. S. Zhang, and M. Y. Fu, "Optimal filtering for networked systems with Markovian communication delays, " Automatica, vol. 49, no. 10, pp. 3097-3104, Oct. 2013.
[14] Y. H. Yang, M. Y. Fu, and H. S. Zhang, "State estimation subject to random communication delays, " Acta Autom. Sinica, vol. 39, no. 3, pp. 237-243, Mar. 2013.
[15] Y. Liang, T. W. Chen, and Q. Pan, "Multi-rate optimal state estimation, " Int. J. Control, vol. 82, no. 11, pp. 2059-2076, Sep. 2009.
[16] L. P. Yan, B. S. Liu, and D. H. Zhou, "The modeling and estimation of asynchronous multirate multisensor dynamic systems, " Aerosp. Sci. Technol., vol. 10, no. 1, pp. 63-71, Jan. 2006.
[17] L. P. Yan, B. S. Liu, and D. H. Zhou, "Asynchronous multirate multisensor information fusion algorithm, " IEEE Trans. Aerosp. Electron. Syst., vol. 43, no. 3, pp. 1135-1146, Jul. 2007.
[18] L. P. Yan, D. H. Zhou, M. Y. Fu, and Y. Q. Xia, "State estimation for asynchronous multirate multisensor dynamic systems with missing measurements, " IET Signal Process., vol. 4, no. 6, pp. 728-739, Dec. 2010.
[19] W. Shao, S. L. Sui, L. Meng, and Y. B. Yue, "Stable estimation of horizontal velocity for planetary lander with motion constraints, " IEEE/CAA J. Autom. Sinica, vol. 2, no. 2, pp. 198-206, Apr. 2015.
[20] W. Zheng, F. Zhou, and Z. F. Wang, "Robust and accurate monocular visual navigation combining IMU for a quadrotor, " IEEE/CAA J. Autom. Sinica, vol. 2, no. 1, pp. 33-44, Jan. 2015.
[21] K. Wang, W. Wang, and Y. Zhuang, "A MAP approach for vision-based self-localization of mobile robot, " Acta Autom. Sinica, vol. 34, no. 2, pp. 159-166, Feb. 2008.
[22] J. P. Shi and R. M. Harrison, "Regression modelling of hourly NOx and NO2 concentrations in urban air in London, " Atmos. Environ., vol. 31, no. 24, pp. 4081-4094, Dec. 1997.
[23] P. Zito, H. Chen, and M. C. Bell, "Predicting real-time roadside CO and NO2 concentrations using neural networks, " IEEE Trans. Intell. Transport. Syst., vol. 9, no. 3, pp. 514-522, Sep. 2008.
[24] G. S. W. Hagler, E. D. Thoma, and R. W. Baldauf, "High-resolution mobile monitoring of carbon monoxide and ultrafine particle concentrations in a near-road environment, " J. Air Waste Manag. Assoc., vol. 60, no. 3, pp. 328-336, Mar. 2010.
[25] D. Simon, Optimal State Estimation: Kalman, H-Infinity, and Nonlinear Approaches. Hoboken, New Jersey, USA: Wiley-Interscience, 2006.
[26] C. K. Chui and G. R. Chen, Kalman Filtering with Real-Time Applications. Berlin Heidelberg, Germany: Springer, 2008.
[27] Z. L. Deng, Y. Gao, L. Mao, Y. Li, and G. Hao, "New approach to information fusion steady-state Kalman filtering, " Automatica, vol. 41, no. 10, pp. 1695-1707, Oct. 2005.
[28] Y. Gao, W. J. Jia, X. J. Sun, and Z. L. Deng, "Self-tuning multisensor weighted measurement fusion Kalman filter, " IEEE Trans. Aerosp. Electron. Syst., vol. 45, no. 1, pp. 179-191, Jan. 2009.
[29] G. E. P. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control. San Francisco, CA, USA: Holden-Day, 1970.
[30] S. E. Nicholson, "A pollution model for street-level air, " Atmos. Environ. (1967), vol. 9, no. 1, pp. 19-31, Jan. 1975.
[31] R. Berkowicz, "OSPM: a parameterised street pollution model, " in Urban Air Quality: Measurement, Modelling and Management, R. S. Sokhi, R. S. José, N. Moussiopoulos, and R. Berkowicz, Eds. Dordrecht, The Netherlands: Springer, 2000, pp. 323-331.