^{2} Department of Electrical and Computer Engineering, College of Engineering, University of Sharjah, United Arab Emirates
The navigation system is one of the most widely used applications of robotic systems on wheelchairs, as it can provide advantages to users of this type of chair with relative simplicity. In everyday life, these users need to navigate to strategic places such as parking lots and metro stations, and also inside buildings composed of several rooms, without anyone's help. This requires the development of a system capable of accomplishing these tasks, allowing the identification of places and easy navigation, including locating the wheelchair at any time. In practice, determining the location using a single sensor is insufficient to capture all relevant characteristics of real environments. It is therefore necessary to combine data from multiple sensors in a process known as fusion. Multisensor data fusion is a technology that combines information from several sensors, with the goal of achieving better accuracy and a more precise result than could be obtained with a single sensor^{[1, 2]}. Multisensor data fusion techniques are also applied to improve system performance in several ways, such as increasing robustness and reliability, reducing uncertainty, and reducing measurement time and cost^{[3, 4]}. In the literature, a few authors have worked in this research area^{[5, 6]}. For instance, Naidu^{[7]} presented different fusion architectures to combine radar data with infrared search and track (IRST) measurements in a target tracking system using the extended Kalman filter (EKF). In [8], an EKF was designed to combine the noisy measurements of encoders (ENC), a compass and an accelerometer (ACC) in order to obtain the best position estimate while reducing measurement uncertainties. The proposed algorithm is based on two data fusion architectures: state vector fusion (SVF) and measurement fusion (MF).
Simpson et al.^{[9]} developed smart wheelchairs for navigation assistance mainly in indoor environments, while Ding et al.^{[10]} developed a smart wheelchair equipped with wireless internet access and a global positioning system (GPS). The drawbacks of these methods are their large cost, in addition to being ineffective in closed places and in places without internet access. In the network-centric applied research team (NCART), within the NEPWAK project, Ferworn et al.^{[11]} introduced techniques for modifying and using power wheelchairs as mobile platforms, enabling communication and remote control, and extended the mobility range of NEPWAK beyond a local area. In [12], the authors described the development of a navigation system for semi-autonomous operation of wheelchairs, making extensive use of freely available open source software. Wattanavarangkul and Wakahara^{[13]} presented an indoor navigation system for wheelchairs using smartphones as the navigation sensor. The system at hand depends on a range of sensors that differ in manufacturer, technology, mechanism of action and protocol, and mixing them raises many complications and problems^{[14]}. For this reason, our task in this paper is to find the best way to use the different information from the various sensors while simultaneously making correct judgments. In the literature, different navigation systems are reformulated as a nonlinear estimation problem whose solution is given by the EKF^{[15]}. The EKF has been a popular approach for localization of a mobile robot^{[16]}. However, the estimation quality of the EKF depends on correct a priori knowledge of the process and measurement noise covariance matrices. Its main limitations are:
1) Loss of information when the true posterior density changes completely; for example, a nonlinear filter designed under the Gaussian assumption may fail to capture the key characteristics of a multimodal posterior density.
2) The high degree of nonlinearity of the system equations at the state-space model level.
3) Numerical errors. Indeed, each of the aforementioned filters has its own domain of applicability, and it is doubtful that a single filter would be effective for a wide range of applications. The motivation of this study is to introduce a more accurate nonlinear filter that can be applied to a wide range of nonlinear filtering problems, from mildly to highly nonlinear. In this respect, we consider two multisensor data fusion architectures able to use the information from all sources and merge it to provide optimal information.
The purpose of this paper is to improve the performance of this estimator, the unscented Kalman filter (UKF), and to find architectures able to combine noisy measurements from all sources in order to obtain the best estimate of the state while reducing measurement uncertainty. The first technique is measurement fusion (MF), and the second is an architecture with a different principle, state vector fusion (SVF)^{[5, 14]}. The two proposed fusion architectures have proved their effectiveness in this area, can handle any kind of sensors, and require only a small number of sensors at the lowest cost.
This paper is organized as follows: details of the system and the fusion algorithms are respectively described in Sections 2 and 3. The study is reinforced by evaluation of each studied method in Section 4. Section 5 presents the results of the simulation. The conclusion includes system evaluation as well as the advantages and disadvantages of the proposed algorithm.
2 System sensor design

The sensor system consists of a magnetic compass, an accelerometer and two odometers (absolute encoders). These odometers determine the position of the wheelchair. The heading angle $\theta$ is obtained from the two measured field components $x$ and $y$ as
$ \begin{equation} \label{eq1} \theta=\tan^{-1}\left(\frac{y}{x}\right). \end{equation} $  (1) 
To take into account the limited validity range of the arctangent, the heading is resolved over the full turn as
$ \begin{equation*} \label{eq2} \theta = \left\{ \begin{array}{ll} 90^\circ, & x = 0~{\rm and}~y < 0 \\ 270^\circ, & x = 0~{\rm and}~y > 0 \\ \left( 180 - \tan^{-1}\left(\dfrac{y}{x}\right)\times\dfrac{180}{\pi} \right)^\circ, & x < 0 \\[0.3cm] \left( - \tan^{-1}\left(\dfrac{y}{x}\right)\times\dfrac{180}{\pi} \right)^\circ, & x > 0~{\rm and}~y < 0 \\[0.3cm] \left( 360 - \tan^{-1}\left(\dfrac{y}{x}\right)\times\dfrac{180}{\pi} \right)^\circ, & x > 0~{\rm and}~y > 0. \end{array} \right. \end{equation*} $  (2) 
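As a concrete illustration of the quadrant handling in (2), the sketch below computes the heading with `atan2`, which is mathematically equivalent to the case analysis; the axis/sign convention (and therefore the `-y` argument) is an assumption here and must be matched to the actual sensor mounting.

```python
import math

def compass_heading(x, y):
    """Heading in degrees in [0, 360) from the two magnetic field components.

    Equivalent to the case analysis of (2): atan2 resolves the quadrant
    that a bare arctangent cannot. The sign convention (-y) reproduces
    the special cases 90 deg for x = 0, y < 0 and 270 deg for x = 0,
    y > 0, and is an assumption about the sensor axes.
    """
    if x == 0 and y == 0:
        raise ValueError("magnetic field components are both zero")
    # atan2 returns a value in (-pi, pi]; wrap into [0, 2*pi) first
    return math.degrees(math.atan2(-y, x) % (2 * math.pi))
```

Up to floating-point rounding, this reproduces each branch of (2), including the two special cases on the $x = 0$ axis.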
To measure the linear acceleration, an accelerometer is used. The principle of most accelerometers is based on the fundamental law of motion, $F = ma$.
The proposed platform equips the wheelchair with three sensor types, a compass, an accelerometer and absolute encoders, which perceive parameters helping to determine the position. The raw data of these sensors are combined using an unscented Kalman filter. The objective is then to determine the position of the wheelchair during operation as accurately as possible^{[21]}.
We start with the kinematic model of the wheelchair. On each rear wheel, an odometer is mounted. The measurements are made by rotary encoders coupled to the wheel axes. Fig. 1 shows the system coordinates as well as the wheelchair notation.
$ \begin{equation*} \label{eq3} \left\{{\begin{array}{l} {\Delta d_g=\Delta t\times r\times v_g}\\ {\Delta d_d=\Delta t\times r\times v_d}.\\ \end{array}}\right. \end{equation*} $  (3) 
Fig. 1. Configuration and measurement model. System is equipped with both encoders, compass and accelerometer 
These wheel increments can be converted into a linear incremental movement $\Delta d$ and a rotation $\Delta\theta$:
$ \begin{equation*} \label{eq4} \left\{{\begin{array}{l} {\Delta d =\dfrac{\Delta d_g+\Delta d_d}{2}} \\[0.3cm] {\Delta \theta=\dfrac{\Delta d_g-\Delta d_d}{l}}.\\ \end{array}}\right. \end{equation*} $  (4) 
According to [27, 28], the basic movements during a sampling period allow the pose to be updated as
$ \begin{equation*} \label{eq5} \left\{ {\begin{array}{l} {x_{k + 1} = x_k + {\rm \Delta }d_k \times \cos (\Phi )} \\ {y_{k + 1} = y_k + {\rm \Delta }d_k \times \sin (\Phi )} \\ {\theta _{k + 1} = \theta _k + {\rm \Delta }\theta _k } \\ \end{array}} \right. \end{equation*} $  (5) 
where $\Phi$ denotes the heading of the wheelchair during the elementary displacement.
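The dead-reckoning update of (3)-(5) can be sketched as follows. Taking the mid-step heading $\Phi = \theta_k + \Delta\theta_k/2$ is an assumption (the usual second-order odometry correction); the wheel naming and the meaning of $l$ as the distance between the two wheels are likewise assumptions consistent with (4).

```python
import math

def odometry_step(pose, v_g, v_d, dt, r, l):
    """One dead-reckoning update following (3)-(5).

    pose = (x, y, theta); v_g, v_d are the angular speeds of the two
    drive wheels (subscripts as in (3)), r the wheel radius and l the
    distance between the wheels. Phi = theta + d_theta/2 is an assumed
    mid-step heading, not necessarily the paper's exact choice.
    """
    x, y, theta = pose
    dd_g = dt * r * v_g           # wheel increment, (3)
    dd_d = dt * r * v_d           # wheel increment, (3)
    dd = (dd_g + dd_d) / 2.0      # linear increment, (4)
    dtheta = (dd_g - dd_d) / l    # angular increment, (4)
    phi = theta + dtheta / 2.0
    return (x + dd * math.cos(phi),   # position update, (5)
            y + dd * math.sin(phi),
            theta + dtheta)
```

For equal wheel speeds the chair moves straight ahead, which is a quick sanity check of the model.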
Practically, (5) is not exactly accurate because inevitable errors occur in the system. These errors can be systematic, such as an imperfect model, or unsystematic, such as wheel slippage. They accumulate and affect the system stability if adequate compensation is not considered. In our system, compensation is performed by merging the measurements of a compass, an accelerometer and absolute encoders using the UKF. The system is described by the nonlinear state-space model
$ \begin{equation*} \label{eq6} \left\{ {\begin{array}{l} {X_{k + 1} = f(X_k, u_k, w_k )} \\ {Z_k = h(X_k, v_k )} \\ \end{array}} \right. \end{equation*} $  (6) 
where $X_k$ is the state vector, $Z_k$ the measurement vector, $u_k$ the control input, $f$ the process model and $h$ the measurement model.
The random variables $w_k$ and $v_k$ represent the process and measurement noises, assumed white, Gaussian, zero-mean and mutually uncorrelated:
$ w_k \sim N(0, Q_k ), v_k \sim N(0, R_k ), E(w_i v_j^{\rm T} )=0. $ 
In the case of a nonlinear model, a classical estimation solution is the EKF. This method of filtering gives good results when the state evolution and measurement models are close to the linear case (first-order approximation). However, neither the optimality of the filter nor its convergence is guaranteed. Linearization is only possible when the Jacobian matrix exists, and it can be difficult to calculate and may introduce errors into the algorithm. In this work, deterministic methods that share the same principles are studied. These filters approximate the Bayesian solution numerically by deterministic sampling, using the unscented transformation to obtain the unscented Kalman filter (UKF)^{[30]}.
3.2 Unscented transformation (UT)

The unscented transformation (UT) is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation. The UT builds on the principle that it is easier to approximate a probability distribution than an arbitrary nonlinear function. The UT uses a small number of chosen test points, called sigma points. Each sigma point is propagated through the nonlinear function, and the mean and covariance of the transformed estimate are then recovered through a weighted statistical linear regression process^{[31]}. The result is a filter that captures the mean accurately and the true covariance^{[32]}.
3.3 Unscented Kalman filter (UKF)

For a state of dimension $L$ with mean $\bar X$ and covariance $P_x$, the $2L+1$ sigma points are generated as
$ \begin{eqnarray*} \label{eq7} \left\{ {\begin{array}{ll} { X}_0 = \bar X, & i = 0 \\ { X}_i = \bar X + \left[\sqrt {\left( L + \lambda \right) P_x } \right]_i, & i = 1, \cdots, L \\[0.3cm] { X}_i = \bar X - \left[\sqrt {\left( L + \lambda \right) P_x } \right]_{i-L}, & i = L+1, \cdots, 2L \\ \end{array}} \right. \end{eqnarray*} $  (7) 
where $L = n_x$ is the state dimension, $\lambda$ a scaling parameter and $[\cdot]_i$ denotes the $i$-th column of the matrix square root. The associated weights are
$ \begin{equation*} \label{eq8} \left\{ \begin{array}{l} W_{0\left( X \right)} = \dfrac{\lambda}{L + \lambda}, ~~ i = 0 \\[0.3cm] W_{0\left( P \right)} = \dfrac{\lambda}{L + \lambda} + \left( 1 - \alpha^2 + \beta \right), ~~ i = 0 \\[0.3cm] W_{i\left( X \right)} = W_{i\left( P \right)} = \dfrac{1}{2(L + \lambda)}, ~~ i = 1, \cdots, 2L. \\ \end{array} \right. \end{equation*} $  (8) 
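The sigma-point construction of (7) and the weights of (8) can be sketched as below. The scaling $\lambda = \alpha^2(L+\kappa) - L$ and the default values of $\alpha$, $\beta$ and $\kappa$ follow the common Wan/van der Merwe parameterization and are assumptions, since the paper's exact tuning is not given.

```python
import numpy as np

def sigma_points(x_bar, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Sigma points and weights of (7)-(8).

    Returns the (2L+1, L) array of sigma points and the two weight
    vectors w_x (mean) and w_p (covariance). lambda = alpha^2(L+kappa)-L
    is an assumed, commonly used scaling.
    """
    L = x_bar.size
    lam = alpha ** 2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)       # matrix square root
    X = np.tile(x_bar, (2 * L + 1, 1))
    X[1:L + 1] += S.T                           # + columns of the root, (7)
    X[L + 1:] -= S.T                            # - columns of the root, (7)
    w_x = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    w_p = w_x.copy()
    w_x[0] = lam / (L + lam)                    # (8)
    w_p[0] = lam / (L + lam) + (1 - alpha ** 2 + beta)
    return X, w_x, w_p
```

A useful property to verify: the weighted sigma points reproduce the original mean and covariance exactly.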
The UKF algorithm proceeds according to the following successive stages^{[17]}, where the observations are denoted by $y_k$.
1) Initialisation
2) Estimation
For each time step $k$:
a) Prediction
ⅰ) Calculate the sigma points
$ ({ X}_0, { X}_1, \cdots, { X}_L, \cdots, { X}_{2L} ). $ 
ⅱ) Propagate the samples through the process model:
$ \begin{equation} \hat X_{i, k|k-1} = f\left( { X}_{i, k-1|k-1}, u_{k-1} \right). \end{equation} $  (9) 
ⅲ) The predicted state and the predicted covariance are computed as
$ \begin{equation} \label{eq10} X_{k|k-1} = \sum \limits_{i = 0}^{2L} w_{i\left( x \right)} \hat X_{i, k|k-1} \end{equation} $  (10) 
$ \begin{align} \label{eq11} P_{k|k-1} \approx& \sum \limits_{i = 0}^{2L} w_{i\left( p \right)} (\hat X_{i, k|k-1} - X_{k|k-1} )\times\nonumber\\ &(\hat X_{i, k|k-1} - X_{k|k-1} )^{\rm T} + Q_k \end{align} $  (11) 
where $Q_k$ is the process noise covariance matrix
$ \begin{equation} Q_k = \left[{\begin{array}{*{20}c} {\sigma _x^2 } & 0 & 0 \\ 0 & {\sigma _y^2 } & 0 \\ 0 & 0 & {\sigma _\theta ^2 } \\ \end{array}} \right]. \end{equation} $  (12) 
b) Correction
ⅰ) Propagate the predicted samples through the measurement model to obtain
$ \begin{equation} (\widehat{{ Y}}_{0, k|k-1}, \cdots, \widehat{{ Y}}_{L, k|k-1}, \cdots, \widehat{{ Y}}_{2L, k|k-1} ) \end{equation} $  (13) 
knowing that
$ \begin{equation} \label{eq14} \hat { Y}_{i, k|k-1} = h\left( \hat X_{i, k|k-1}, u_k \right). \end{equation} $  (14) 
ⅱ) Estimate the predicted observation
$ \begin{equation} \label{eq15} \widehat{y}_{k|k-1} = \sum \limits_{i = 0}^{2L} w_{i\left( x \right)} \hat { Y}_{i, k|k-1} \end{equation} $  (15) 
$ \begin{align} \label{eq16} P_{yy, k|k-1} =& \sum \limits_{i = 0}^{2L} w_{i\left( p \right)} (\hat { Y}_{i, k|k-1} - \widehat{y}_{k|k-1} )\times\nonumber\\ &\left( \hat { Y}_{i, k|k-1} - \widehat{y}_{k|k-1} \right)^{\rm T} + R \end{align} $  (16) 
$ \begin{align} \label{eq17} P_{xy, k|k-1} =& \sum \limits_{i = 0}^{2L} w_{i\left( p \right)} (\hat X_{i, k|k-1} - X_{k|k-1} )\times\nonumber\\ &\left( \hat { Y}_{i, k|k-1} - \widehat{y}_{k|k-1} \right)^{\rm T} . \end{align} $  (17) 
ⅲ) Compute the Kalman gain
$ \begin{equation} \label{eq18} G_k = P_{xy, k|k-1}\, P_{yy, k|k-1}^{-1}. \end{equation} $  (18) 
ⅳ) Correct the state and the error covariance matrix:
$ \begin{equation} \label{eq19} X_{k|k} = X_{k|k-1} + G_k \left( y_{k} - \widehat{y}_{k|k-1} \right) \end{equation} $  (19) 
$ \begin{equation} \label{eq20} P_{k|k} = P_{k|k-1} - G_k P_{yy, k|k-1} G_k^{\rm T}. \end{equation} $  (20) 
End for.
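The complete predict/correct cycle above, steps (9)-(20), can be sketched compactly as follows. For readability a single weight vector is used (equivalent to $\alpha = 1$, $\beta = 0$ in the sigma-point scaling), which is an assumption rather than the paper's tuning.

```python
import numpy as np

def ukf_step(x, P, u, y, f, h, Q, R, kappa=0.0):
    """One UKF predict/correct cycle, (9)-(20).

    f(x, u) is the process model, h(x, u) the measurement model,
    Q and R the process and measurement noise covariances. The simple
    scaling lambda = kappa (alpha = 1, beta = 0) is an assumption.
    """
    L = x.size
    lam = kappa
    w = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    w[0] = lam / (L + lam)
    S = np.linalg.cholesky((L + lam) * P)
    X = np.vstack([x, x + S.T, x - S.T])          # sigma points, (7)

    # Prediction: propagate through f, (9)-(11)
    Xp = np.array([f(s, u) for s in X])
    x_pred = w @ Xp                               # (10)
    dX = Xp - x_pred
    P_pred = dX.T @ (w[:, None] * dX) + Q         # (11)

    # Correction: propagate through h, (13)-(17)
    Yp = np.array([h(s, u) for s in Xp])
    y_pred = w @ Yp                               # (15)
    dY = Yp - y_pred
    P_yy = dY.T @ (w[:, None] * dY) + R           # (16)
    P_xy = dX.T @ (w[:, None] * dY)               # (17)

    G = P_xy @ np.linalg.inv(P_yy)                # Kalman gain, (18)
    x_new = x_pred + G @ (y - y_pred)             # (19)
    P_new = P_pred - G @ P_yy @ G.T               # (20)
    return x_new, P_new
```

For a linear model the cycle reduces exactly to the standard Kalman update, which is a convenient sanity check.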
3.4 Multisensor data fusion

To improve the measurement performance, we propose two architectures to merge the sensor measurements: state vector fusion (SVF) and measurement fusion (MF). In both architectures, the compass measurements are merged with the accelerometer measurements in a single measurement vector.
The covariance matrix of this vector is defined as follows:
$ \begin{equation} \label{eq21} R^{\left( b \right)} = \left[{\begin{array}{*{20}c} {\sigma _{a, x}^2 } & 0 & 0 \\ 0 & {\sigma _{a, y}^2 } & 0 \\ 0 & 0 & {\sigma _{b, \theta }^2 } \\ \end{array}} \right] \end{equation} $  (21) 
where $\sigma_{a, x}^2$ and $\sigma_{a, y}^2$ denote the accelerometer noise variances and $\sigma_{b, \theta}^2$ the compass noise variance. The covariance matrix associated with the encoder measurements is
$ \begin{equation} \label{eq22} R^{\left( c \right)} = \left[{\begin{array}{*{20}c} {\sigma _{c, x}^2 } & 0 & 0 \\ 0 & {\sigma _{c, y}^2 } & 0 \\ 0 & 0 & {\sigma _{b, \theta }^2 } \\ \end{array}} \right]. \end{equation}\\ $  (22) 
The operation principle of SVF is based on the position calculation from the measurement of each sensor separately and the resulting state vectors are merged to obtain a final state vector. The covariance matrices are combined to give the final state matrix (Fig. 2).
Fig. 2. Diagram of SVF architecture 
For the measurements with covariance $R^{(c)}$, the UKF stages are as follows.
1) Prediction stage:
$ \begin{equation} \label{eq23} \hat X_{i, k|k-1}^{\left( c \right)} = f\left( { X}_{i, k-1|k-1}^{\left( c \right)}, u_{k-1} \right) \end{equation} $  (23) 
$ \begin{equation} \label{eq24} X_{k|k-1}^{\left( c \right)} = \sum \limits_{i = 0}^{2L} w_{i\left( x \right)} \hat X_{i, k|k-1}^{\left( c \right)} \end{equation} $  (24) 
$ \begin{align} \label{eq25} P_{k|k-1}^{\left( c \right)} =& \sum \limits_{i = 0}^{2L} w_{i\left( p \right)} (\hat X_{i, k|k-1}^{\left( c \right)} - X_{k|k-1}^{\left( c \right)} ) \times\nonumber\\ & (\hat X_{i, k|k-1}^{\left( c \right)} - X_{k|k-1}^{\left( c \right)} )^{\rm T} + Q_k. \end{align} $  (25) 
2) Correction stage:
$ \begin{equation} \label{eq26} \hat { Y}_{i, k|k-1}^{\left( c \right)} = h\left( \hat X_{i, k|k-1}^{\left( c \right)}, u_k \right) \end{equation} $  (26) 
$ \begin{equation} \label{eq27} \widehat{y}_{k|k-1}^{\left( c \right)} = \sum \limits_{i = 0}^{2L} w_{i\left( x \right)} \hat { Y}_{i, k|k-1}^{\left( c \right)} \end{equation} $  (27) 
$ \begin{align} \label{eq28} P_{yy, k|k-1}^{\left( c \right)} =& \sum \limits_{i = 0}^{2L} w_{i\left( p \right)} (\hat { Y}_{i, k|k-1}^{\left( c \right)} - \widehat{y}_{k|k-1}^{\left( c \right)} )\times\nonumber\\ &\left( \hat { Y}_{i, k|k-1}^{\left( c \right)} - \widehat{y}_{k|k-1}^{\left( c \right)} \right)^{\rm T} + R^{\left( c \right)} \end{align} $  (28) 
$ \begin{align} \label{eq29} P_{xy, k|k-1}^{\left( c \right)} =& \sum \limits_{i = 0}^{2L} w_{i\left( p \right)} (\hat X_{i, k|k-1}^{\left( c \right)} - X_{k|k-1}^{\left( c \right)} )\times\nonumber\\ &\left( \hat { Y}_{i, k|k-1}^{\left( c \right)} - \widehat{y}_{k|k-1}^{\left( c \right)} \right)^{\rm T} \end{align} $  (29) 
$ \begin{equation} \label{eq30} G_k^{\left( c \right)} = P_{xy, k|k-1}^{\left( c \right)}\, \left( P_{yy, k|k-1}^{\left( c \right)} \right)^{-1} \end{equation} $  (30) 
$ \begin{equation} \label{eq31} X_{k|k}^{\left( c \right)} = X_{k|k-1}^{\left( c \right)} + G_k^{\left( c \right)} (Y_k^{\left( c \right)} - \widehat{y}_{k|k-1}^{\left( c \right)} ) \end{equation} $  (31) 
$ \begin{equation} \label{eq32} P_{k|k}^{ ( c )} = P_{k|k-1}^{ ( c )} - G_k^{ ( c )} P_{yy, k|k-1}^{(c)} G_k^{{(c)}{\rm T}}. \end{equation} $  (32) 
For the measurements with covariance $R^{(b)}$, the same stages apply:
1) Prediction stage: Equations (23)-(25).
2) Correction stage: Equations (26)-(32).
The two local estimates are then merged by SVF:
$ \begin{align} \label{eq33} X_{f, k|k} =& X_{k|k}^{\left( c \right)} + P_{k|k}^{\left( c \right)} \left[ P_{k|k}^{\left( c \right)} + P_{k|k}^{\left( b \right)} \right]^{-1} \times\nonumber\\ &\left( X_{k|k}^{\left( b \right)} - X_{k|k}^{\left( c \right)} \right) \end{align} $  (33) 
$ \begin{equation} \label{eq34} P_{f, k|k} = P_{k|k}^{\left( c \right)} - P_{k|k}^{\left( c \right)} \left( P_{k|k}^{\left( c \right)} + P_{k|k}^{\left( b \right)} \right)^{-1} P_{k|k}^{{\left( c \right)} {\rm T}}. \end{equation} $  (34) 
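The SVF combination (33)-(34) reduces to a few matrix operations; the sketch below uses the standard track-fusion covariance form $P_f = P^{(c)} - P^{(c)}(P^{(c)}+P^{(b)})^{-1}P^{(c)\rm T}$, which is an assumption consistent with (33)-(34).

```python
import numpy as np

def svf_combine(x_c, P_c, x_b, P_b):
    """State vector fusion of two local filter estimates, (33)-(34).

    x_c, P_c and x_b, P_b are the state estimates and covariances of
    the two local UKFs; returns the fused estimate and covariance.
    """
    K = P_c @ np.linalg.inv(P_c + P_b)   # fusion gain
    x_f = x_c + K @ (x_b - x_c)          # fused state, (33)
    P_f = P_c - K @ P_c.T                # fused covariance, (34)
    return x_f, P_f
```

With equal covariances the fused state is the midpoint of the two estimates and the fused covariance is halved, as expected.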
In this architecture (Fig. 3), all measurement sources are merged into a single measurement vector, and their covariance matrices are combined correspondingly.
Fig. 3. Diagram of MF architecture 
1) Prediction stage:
$ \begin{equation}\label{eq35} \hat X_{i, k|k-1} = f\left( { X}_{i, k-1|k-1}, u_{k-1} \right) \end{equation} $  (35) 
$ \begin{equation}\label{eq36} X_{k|k-1} = \sum \limits_{i = 0}^{2L} w_{i\left( x \right)} \hat X_{i, k|k-1} \end{equation} $  (36) 
$ \begin{align}\label{eq37} P_{k|k-1} =& \sum \limits_{i = 0}^{2L} w_{i\left( p \right)} (\hat X_{i, k|k-1} - X_{k|k-1} )\times\nonumber\\ & (\hat X_{i, k|k-1} - X_{k|k-1} )^{\rm T} + W_k Q_{k-1} W_k^{\rm T}. \end{align} $  (37) 
2) MF:
$ \begin{equation} Y_f = Y^{\left( c \right)} + R^{\left( c \right)} \left[ R^{\left( c \right)} + R^{\left( b \right)} \right]^{-1} \left( Y^{\left( b \right)} - Y^{\left( c \right)} \right) \end{equation} $  (38) 
$ \begin{equation}\label{eq39} R_f = R^{\left( c \right)} - R^{\left( c \right)} \left[ R^{\left( c \right)} + R^{\left( b \right)} \right]^{-1} R^{\left( c \right)}. \end{equation} $  (39) 
3) Correction stage:
$ \begin{equation}\label{eq40} \hat { Y}_{i, k|k-1} = h\left( \hat X_{i, k|k-1}, u_k \right) \end{equation} $  (40) 
$ \begin{equation}\label{eq41} \widehat{y}_{k|k-1} = \sum \limits_{i = 0}^{2L} w_{i\left( x \right)} \hat { Y}_{i, k|k-1} \end{equation} $  (41) 
$ \begin{align}\label{eq42} P_{yy, k|k-1} =& \sum \limits_{i = 0}^{2L} w_{i\left( p \right)} (\hat { Y}_{i, k|k-1} - \widehat{y}_{k|k-1} )\times \nonumber\\ &\left( \hat { Y}_{i, k|k-1} - \widehat{y}_{k|k-1} \right)^{\rm T} + R_f \end{align} $  (42) 
$ \begin{align}\label{eq43} P_{xy, k|k-1} =& \sum \limits_{i = 0}^{2L} w_{i\left( p \right)} (\hat X_{i, k|k-1} - X_{k|k-1} )\times\nonumber \\ &\left( \hat { Y}_{i, k|k-1} - \widehat{y}_{k|k-1} \right)^{\rm T} \end{align} $  (43) 
$ \begin{equation}\label{eq44} G_k = P_{xy, k|k-1}\, P_{yy, k|k-1}^{-1} \end{equation} $  (44) 
$ \begin{equation}\label{eq45} X_{k|k} = X_{k|k-1} + G_k \left( Y_f - \widehat{y}_{k|k-1} \right) \end{equation} $  (45) 
$ \begin{equation}\label{eq46} P_{k|k} = P_{k|k-1} - G_k P_{yy, k|k-1} G_k^{\rm T}. \end{equation} $  (46) 
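The MF combination (38)-(39), which produces the fused measurement and its reduced covariance before the single UKF runs, can be sketched likewise:

```python
import numpy as np

def mf_combine(y_c, R_c, y_b, R_b):
    """Measurement fusion of two sensor readings, (38)-(39).

    y_c, y_b are the two measurement vectors and R_c, R_b their noise
    covariances; returns the fused measurement and fused covariance.
    """
    W = R_c @ np.linalg.inv(R_c + R_b)
    y_f = y_c + W @ (y_b - y_c)          # fused measurement, (38)
    R_f = R_c - W @ R_c                  # fused covariance, (39)
    return y_f, R_f
```

As with SVF, equal covariances yield the midpoint measurement with half the covariance, showing how fusion reduces uncertainty before filtering.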
Both fusion architectures are evaluated by computing the following performance metrics:
1) Absolute error (AE) in $x$:
$ \begin{equation}\label{eq47} AE_x = \left| x_i - \widehat{x}_i \right|, \quad i = 1, 2, \cdots, N \end{equation} $  (47) 
similarly for $y$,
where $x_i$ is the true value, $\widehat{x}_i$ the estimated value and $N$ the number of samples.
2) Mean absolute error (MAE) in $x$:
$ \begin{equation} \label{eq48} MAE_x = \frac{1}{N} \sum \limits_{i = 1}^N \left| x_i - \widehat{x}_i \right| \end{equation} $  (48) 
similarly for $y$.
3) The percentage fit error (PFE) in $x$:
$ \begin{equation} PFE_x = 100\times \frac{{\rm norm}\left( x - \widehat x \right)}{{\rm norm}\left( x \right)} \end{equation} $  (49) 
similarly for $y$.
4) Root mean square error in position (RMSPE):
$ \begin{equation} RMSPE = \sqrt {\frac{1}{N} \sum \limits_{i = 1}^N \frac{(x_i - \widehat x_i)^2 + (y_i - \widehat y_i)^2 }{2}}. \end{equation} $  (50) 
5) Root sum square error in position (RSSPE):
$ \begin{equation} RSSPE = \sqrt {(x - \widehat x)^2 + (y - \widehat y)^2 }. \end{equation} $  (51) 
6) State error (SE):
$ SE_x = x - \widehat x $ with theoretical bounds of $ \pm 2\sqrt {\widehat{P}_x} $, $ SE_y = y - \widehat y $ with theoretical bounds of $ \pm 2\sqrt {\widehat{P}_y} $, and $ SE_\theta = \theta - \widehat\theta $ with theoretical bounds of $ \pm 2\sqrt {\widehat{P}_\theta} $.
7) The innovation sequence, plotted with its theoretical bounds.
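The metrics (47)-(51) can be computed directly from the true and estimated trajectories; the function below is a straightforward sketch.

```python
import numpy as np

def position_metrics(x, x_hat, y, y_hat):
    """Performance metrics (47)-(51) for the position estimates.

    x, y are the true coordinates over N samples and x_hat, y_hat the
    corresponding estimates; all arrays have the same length.
    """
    ae_x = np.abs(x - x_hat)                       # AE per sample, (47)
    mae_x = ae_x.mean()                            # MAE, (48)
    pfe_x = 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)  # PFE, (49)
    rmspe = np.sqrt((((x - x_hat) ** 2 +
                      (y - y_hat) ** 2) / 2.0).mean())             # (50)
    rsspe = np.sqrt((x - x_hat) ** 2 + (y - y_hat) ** 2)  # per sample, (51)
    return ae_x, mae_x, pfe_x, rmspe, rsspe
```

The $y$-direction AE, MAE and PFE follow by exchanging the arguments.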
In this section, an experiment was simulated to calculate the position under two different scenarios: with the MF algorithm and the SVF algorithm.
The simulation uses the following parameters:
1) Sampling period
2) Initial state vector:
3) The initial covariance matrix is defined by
$ \begin{equation} P_0 = \left[{\begin{array}{*{20}c} {0.01} & 0 & 0 \\ 0 & {0.01} & 0 \\ 0 & 0 & {0.3} \\ \end{array}} \right]. \\ \end{equation} $  (52) 
4) The covariance matrices of measurements noise:
$ \begin{equation} R_b = \left[{\begin{array}{*{20}c} {10^{-8} } & 0 & 0 \\ 0 & {10^{-8} } & 0 \\ 0 & 0 & {85\times 10^{-9} } \\ \end{array}} \right] \end{equation} $  (53) 
$ \begin{equation} R_c = \left[{\begin{array}{*{20}c} {10^{-8} } & 0 & 0 \\ 0 & {10^{-8} } & 0 \\ 0 & 0 & {8\times 10^{-4} } \\ \end{array}} \right]. \end{equation} $  (54) 
5) The noise system covariance matrix is
$ \begin{equation} Q_k = {\rm }\left[{\begin{array}{*{20}c} {0.01^2 } & 0 & 0 \\ 0 & {0.01^2 } & 0 \\ 0 & 0 & {0.1^2 } \\ \end{array}} \right]. \end{equation} $  (55) 
Both data fusion architectures are evaluated using a sequential simulation of the unscented Kalman filter algorithm. The mean absolute error in position is shown in Table 2 and Figs. 4-7.
Fig. 4. Mean absolute errors in 
Fig. 5. Mean absolute errors in 
Fig. 6. Mean absolute errors in 
Fig. 7. Zoomed view of Fig. 6 
Percentage of fit error (PFE) and the root mean square error in position (RMSPE) are presented in Table 1. The variations in the mean square error and the execution time for each algorithm are shown in Table 3 and Fig. 8.
Fig. 8. Execution time for each algorithm 
The state error with its theoretical limits shows the filter robustness. Figs. 9 and 10 show that the MF algorithm is more robust than the SVF one and therefore presents a relatively high performance.
Fig. 9. State error in positions with theoretical bounds for MF algorithm 
Fig. 10. State error in positions with theoretical bounds for SVF algorithm 
Bold values in the tables indicate the best results. The root sum square error in position is shown in Fig. 11. From the tables and figures, we can see that localization with the odometer or accelerometer alone shows degraded performance compared to the fusion architectures. MF shows the lowest uncertainty, followed by SVF; the uncertainty in the state estimate is highest with the odometer alone, followed by the accelerometer. This shows the need for fusion. Regarding execution time, localization with the odometer alone takes less time, but with degraded performance.
Fig. 11. Root sum square error in position 
On the other hand, we investigated the consistency of the MF and SVF algorithms using the innovation sequences with their theoretical limits, as shown in Figs. 12 and 13. The simulation results show that the two algorithms are consistent.
Fig. 12. Innovation sequence with theoretical bounds for MF algorithm 
Fig. 13. Innovation sequence with theoretical bounds for SVF algorithm 
Finally, we studied the autocorrelation (AC) of the innovation sequence to show that the two algorithms extract all the information contained in the signal. The simulation results in Figs. 14 and 15 show that both algorithms are satisfactory.
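A sketch of this autocorrelation check follows; the $\pm 1.96/\sqrt{N}$ limit is the standard 95% whiteness-test bound and is an assumption about the exact bounds used in the figures.

```python
import numpy as np

def innovation_autocorrelation(nu, max_lag=20):
    """Normalized autocorrelation of an innovation sequence with a 95%
    whiteness bound, as used for the consistency check.

    nu is a 1-D innovation sequence; for a consistent (white) filter,
    the autocorrelation at nonzero lags should stay within the bound.
    """
    nu = nu - nu.mean()
    n = nu.size
    ac = np.array([np.dot(nu[:n - k], nu[k:]) for k in range(max_lag + 1)])
    ac = ac / ac[0]                    # normalize so ac[0] = 1
    bound = 1.96 / np.sqrt(n)          # assumed 95% confidence limit
    return ac, bound
```

Plotting `ac` against the lag together with `+/-bound` reproduces the kind of check shown for both filters.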
Fig. 14. Autocorrelation of innovation sequence along with theoretical bounds for MF algorithm 
Fig. 15. Autocorrelation of innovation sequence along with theoretical bounds for SVF algorithm 
6 Conclusions
In order to increase the performance of the UKF algorithm for position estimation, we showed the importance of and the need for data fusion. Two fusion algorithms (SVF and MF) were addressed, and detailed mathematical expressions were given that could be useful for implementation. The simulation results show that the MF algorithm provides state estimates with relatively less uncertainty, followed by the SVF algorithm. For future development, the wheelchair position estimation problem could also be investigated using the cubature Kalman filter (CKF)^{[34]}. In addition, Bayesian nonlinear filtering using quadrature and cubature rules^{[35]} could also address this problem. We hope that this work will be an effective solution for reducing navigation problems for wheelchair users.
Acknowledgement
We would like to thank the Laboratory of Automatics and Signals at Annaba (LASA) whose members displayed great interest and support to carry out this work.
[1]  S. Safari, F. Shabani, D. Simon . Multirate multisensor data fusion for linear systems using Kalman filters and a neural network. Aerospace Science and Technology., vol.39 , pp.465–471, 2014. doi:10.1016/j.ast.2014.06.005 
[2]  F. Cappello, R. Sabatini, S. Ramasamy. Multisensor Data Fusion Techniques for RPAS Detect, Track and Avoid, SAE Technical Paper 2015-01-2475, Seattle, Washington, USA, 2015. 
[3]  D. U. Guanglong, P. Zhang. Human-manipulator interface using hybrid sensors with Kalman filters and adaptive multi-space transformation. Measurement., vol.55, pp.413–422, 2014. doi:10.1016/j.measurement.2014.05.021 
[4]  W. J. Qi, P. Zhang, Z. L. Deng. Robust weighted fusion Kalman filters for multisensor time-varying systems with uncertain noise variances. Signal Processing., vol.99, pp.185–200, 2014. doi:10.1016/j.sigpro.2013.12.013 
[5]  J. R. Raol. Multi-Sensor Data Fusion with MATLAB^{®}. Boca Raton, USA: CRC Press Inc, 2009. 
[6]  D. L. Hall, S. A. H. McMullen . Mathematical Techniques in Multisensor Data Fusion. Norwood, USA: Artech House, 2004 . 
[7]  V. Naidu . Fusion architectures for 3D target tracking using IRST and radar measurements. Journal of Aerospace Sciences & Technologies., vol.62 , no.3 , pp.183–195, 2010. 
[8]  D. Nada, M. Bousbia-Salah, M. Bettayeb. Fusion architectures with extended Kalman Filter for locate wheelchair position using sensors measurements. In Proceedings of International Conference on Electrical Sciences and Technologies in Maghreb (CISTEM), IEEE, Tunis, Tunisia, pp. 1-7, 2014. 
[9]  R. Simpson, E. LoPresti, S. Hayashi, I. Nourbakhsh, D. Miller . The smart wheelchair component system. The Journal of Rehabilitation Research and Development., vol.41 , no.3B , pp.429–442, 2004. doi:10.1682/JRRD.2003.03.0032 
[10]  D. Ding, B. Parmanto, H. A. Karimi, D. Roongpiboonsopit, G. Pramana, T. Conahan, P. Kasemsuppakorn. Design considerations for a personalized wheelchair navigation system. In Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, Lyon, France, pp. 4790-4793, 2007. 
[11]  A. Ferworn, A. Arora, M. Jaseemuddin. IP mobility issues for a mobile telerobotic system: NEPWAK. International Journal of Automation and Computing., vol.1, no.1, pp.10–16, 2004. doi:10.1007/s11633-004-0010-0 
[12]  R. Tang, X. Q. Chen, M. Hayes, I. Palmer. Development of a navigation system for semi-autonomous operation of wheelchairs. In Proceedings of IEEE/ASME International Conference on Mechatronics and Embedded Systems and Applications (MESA), IEEE, Suzhou, China, pp. 257-262, 2012. 
[13]  N. Wattanavarangkul, T. Wakahara. Indoor navigation system for wheelchair using smartphones. Information Technology Convergence, Lecture Notes in Electrical Engineering, J. J. Park, L. Barolli, F. Xhafa, H. Y. Jeong, Eds., Netherlands: Springer, pp. 233-241, 2013. 
[14]  J. B. Gao, C. J. Harris. Some remarks on Kalman filters for the multisensor fusion. Information Fusion., vol.3, no.3, pp.191–201, 2002. doi:10.1016/S1566-2535(02)00070-2 
[15]  B. D. O. Anderson, J. B. Moore, M. Eslami . Optimal filtering. IEEE Transactions on Systems, Man, and Cybernetics., vol.12 , no.2 , pp.235–236, 1982. doi:10.1109/TSMC.1982.4308806 
[16]  H. Zhao, Z. Y. Wang . Motion measurement using inertial sensors, ultrasonic sensors, and magnetometers with extended Kalman filter for data fusion. IEEE Sensors Journal., vol.12 , no.5 , pp.943–953, 2012. doi:10.1109/JSEN.2011.2166066 
[17]  E. A. Wan, R. Van Der Merwe. The unscented Kalman filter. Kalman Filtering and Neural Networks, S. Haykin, Ed., New York, USA: John Wiley & Sons Inc., pp. 221-280, 2001. 
[18]  W. L. Li, S. H. Sun, Y. M. Jia, J. P. Du . Robust unscented Kalman filter with adaptation of process and measurement noise covariances. Digital Signal Processing., vol.48 , pp.93–103, 2016. doi:10.1016/j.dsp.2015.09.004 
[19]  S. Kernbach. Encoder-free odometric system for autonomous microrobots. Mechatronics., vol.22, no.6, pp.870–880, 2012. doi:10.1016/j.mechatronics.2012.05.004 
[20]  S. G. Tzafestas. 4Mobile robot sensors. Introduction to Mobile Robot Control, S. G. Tzafestas, Ed., Oxford, UK: Elsevier, pp. 101135, 2014. 
[21]  S. B. Lazarus, I. Ashokaraj, A. Tsourdos, R. Zbikowski, P. M. G. Silson, N. Aouf, B. A. White . Vehicle localization using sensors data fusion via integration of covariance intersection and interval analysis. IEEE Sensors Journal., vol.7 , no.9 , pp.1302–1314, 2007. doi:10.1109/JSEN.2007.901556 
[22]  T. Guilford, G. K. Taylor . The sun compass revisited. Animal Behaviour., vol.97 , pp.135–143, 2014. doi:10.1016/j.anbehav.2014.09.005 
[23]  Datasheet. 1Axis and 2Axis Magnetic Sensors HMC1001/1002/1021/1022, Honeywell, Morristown, USA, [Online], Available: https://aerocontent.honeywell.com/aero/common/documents/myaerospacecatalogdocuments/MissilesMunitions/, August 2008. 
[24]  M. J. Caruso. Applications of magnetic sensors for low cost compass systems. In Proceedings of the Position Location and Navigation Symposium, IEEE, San Diego, USA, pp. 177-184, 2000. 
[25]  M. A. Horton, A. R. Newton. Method and Apparatus for Determining Position and Orientation of a Moveable Object Using Accelerometers, Patent 5615132, USA, March 1997. 
[26]  M. Bousbia-Salah, M. Fezari. A navigation tool for blind people. Innovations and Advanced Techniques in Computer and Information Sciences and Engineering, T. Sobh, Ed., Netherlands: Springer, pp. 333-337, 2007. 
[27]  M. Bousbia-Salah, M. Bettayeb, A. Larbi. A navigation aid for blind people. Journal of Intelligent and Robotic Systems., vol.64, no.3-4, pp.387–400, 2011. doi:10.1007/s10846-011-9555-7 
[28]  M. L. Anjum, J. Park, W. Hwang, H. I. Kwon, J. H. Kim, C. Lee, K. S. Kim, D. I. Cho. Sensor data fusion using unscented Kalman filter for accurate localization of mobile robots. In Proceedings of International Conference on Control Automation and Systems (ICCAS), IEEE, Gyeonggi-do, Korea, pp. 947-952, 2010. 
[29]  N. Houshangi, F. Azizi. Mobile robot position determination using data integration of odometry and gyroscope. In Proceedings of 2006 World Automation Congress, IEEE, Budapest, Hungary, pp. 1-8, 2006. 
[30]  S. Julier, J. Uhlmann, H. F. Durrant-Whyte. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Transactions on Automatic Control., vol.45, no.3, pp.477–482, 2000. doi:10.1109/9.847726 
[31]  F. Azizi, N. Houshangi. Sensor integration for mobile robot position determination. In Proceedings of IEEE International Conference on Systems, Man and Cybernetics, IEEE, Washington, USA, pp. 1136-1140, 2003. 
[32]  A. Sakai, Y. Tamura, Y. Kuroda. An efficient solution to 6DOF localization using unscented Kalman filter for planetary rovers. In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, St. Louis, USA, pp. 4154-4159, 2009. 
[33]  C. J. Sun, H. Y. Kuo, C. E. Lin. A sensor based indoor mobile localization and navigation using unscented Kalman filter. In Proceedings of 2010 IEEE/ION Position Location and Navigation Symposium (PLANS), IEEE, Indian Wells, California, USA, pp. 327-331, 2010. 
[34]  I. Arasaratnam, S. Haykin, T. R. Hurd. Cubature Kalman filtering for continuous-discrete systems: Theory and simulations. IEEE Transactions on Signal Processing., vol.58, no.10, pp.4977–4993, 2010. doi:10.1109/TSP.2010.2056923 
[35]  P. Closas, C. Fernández-Prades. Bayesian nonlinear filters for direct position estimation. In Proceedings of IEEE Aerospace Conference, IEEE, Big Sky, USA, pp. 1-12, 2010. 