Acta Automatica Sinica (自动化学报), 2017, Vol. 43, Issue (8): 1425-1433
A New Type of Fuzzy Membership Function Designed for Interval Type-2 Fuzzy Neural Network
Jiajun Wang
School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
Abstract: A new type of fuzzy membership function (FMF) is proposed for the interval type-2 fuzzy neural network (IT2FNN) in this paper. Three types of interval type-2 FMFs (IT2FMFs) can be derived from the proposed FMF, and each has a different shape of footprint of uncertainty (FOU). The derived IT2FMFs are applied to a simplified IT2FNN to identify two nonlinear systems. The identification performance of the derived IT2FMFs is compared with that of the Gaussian and ellipsoidal types of IT2FMF through simulation. Simulation results certify that, with careful tuning of the parameters of the simplified IT2FNN, the derived IT2FMFs can achieve better identification performance than the Gaussian and ellipsoidal types.
Key words: Fuzzy membership function (FMF)     interval type-2 fuzzy neural network (IT2FNN)     nonlinear system     system identification    
1 Introduction

As an extension of type-1 fuzzy set (T1FS) theory, type-2 fuzzy set theory is more expressive but also more complex [1]. It has been further developed into interval type-2 fuzzy set (IT2FS) theory [2]-[4]. The IT2FS has a stronger ability to deal with system uncertainties, and it has been used to solve identification, control, prediction and pattern recognition problems [5], [6]. Compared with the T1FS, the excellent processing ability of the IT2FS originates from the interval type-2 FMF (IT2FMF). The selection of the FMF has a very large effect on the performance of the IT2FS, and research on FMFs for the IT2FS is still an open problem. The key contribution of this paper is a new type of FMF that enhances the performance of the IT2FS.

As is well known, six types of IT2FMFs can currently be found in the literature: the triangular, trapezoidal, sigmoidal, pi-shaped, Gaussian and ellipsoidal types [7], [8]. The Gaussian, triangular, sigmoidal and pi-shaped types have three parameters that need to be updated online, whereas the trapezoidal and ellipsoidal types have four. At present, the Gaussian type of IT2FMF is widely applied, and it has become the standard choice in the interval type-2 fuzzy neural network (IT2FNN) [9]-[11].

The fuzzy neural network (FNN) is a hybridization of the neural network and the fuzzy system, inheriting the learning ability of the former and the capability of the latter to reason over uncertain information [12]-[14]. Takagi-Sugeno-Kang (TSK) type fuzzy models are effective in system identification problems [15]-[18], and combining the TSK type with an FNN can achieve higher learning accuracy than a Mamdani-type FNN. The interval type-2 TSK fuzzy neural network (IT2TSKFNN) uses the IT2FS in the antecedent part and the TSK type in the consequent part, so it unites the advantages of the IT2FS, the TSK-type fuzzy set and the neural network [16], [17]. In this paper, the IT2TSKFNN is selected as the target FNN to test the performance of the proposed FMF.

Although the IT2FNN has superior performance in processing system uncertainties compared with the T1FNN, it is computationally intensive because the type-reduction procedure is very complex, and this confines its application. The iterative Karnik-Mendel (K-M) algorithm is the general method for realizing the type reduction of the IT2FNN [3], and it requires the consequent weights of almost all IT2FNNs except the TSK type to be rearranged in ascending order. In this paper, we adopt the simplified IT2FNN to test the derived IT2FMFs. The simplified IT2FNN can be realized with the computation of the distribution factors $q_l$ and $q_r$ without incurring the K-M iterative computation [19].

The main contributions of this paper are the following three aspects.

1) A new type of IT2FMF is proposed. Three types of IT2FMFs can be derived from it, which gives greater freedom in selecting IT2FMFs for the IT2FNN.

2) The derived IT2FMFs are tested with the simplified IT2FNN. The design procedure of the simplified IT2FNN is given step by step, and the parameter updating computation is demonstrated in detail.

3) With the simplified IT2FNN, the derived IT2FMFs achieve better identification performance than the Gaussian and ellipsoidal types of IT2FMF in two typical nonlinear examples.

This paper is organized as follows. In Section 2, the proposed type of FMF is introduced. In Section 3, the design procedure of the simplified IT2FNN is presented. In Section 4, the parameter updating rules are derived. In Section 5, the simulation studies are given to show the effectiveness of the derived IT2FMFs. In Section 6, some conclusions are given.

2 Introduction of IT2FMFs

2.1 Gaussian Type of IT2FMF

The Gaussian type of IT2FMF is given in Fig. 1 (a). The mathematical expression of the Gaussian type of IT2FMF can be expressed as

$ \begin{align} \mu(x)={\rm exp}\left(-\frac{1}{2}\frac{(x-m)^2}{\sigma^2}\right)\equiv G(x, m, \sigma) \end{align} $ (1)
Figure 1 Gaussian and ellipsoidal type of IT2FMFs.

where $m$ is the mean value, $\sigma$ is the standard deviation (STD), and $x$ is the input variable. In (1), both the mean value $m$ and the STD $\sigma$ can be treated as uncertain. In this paper, the mean value $m$ is selected as the uncertain value ($m\in [m_1, ~m_2]$, where $m_1< m_2$) and the STD $\sigma$ is fixed. The footprint of uncertainty (FOU) of the Gaussian type of FMF is bounded by the lower MF $\underline{\mu}$ and the upper MF $\overline{\mu}$, which are defined as

$ \begin{align} \underline{\mu}=\begin{cases} G(x, m_2, \sigma), & x\leq \dfrac{m_1+m_2}{2}\\[2mm] G(x, m_1, \sigma), &x> \dfrac{m_1+m_2}{2}\end{cases} \end{align} $ (2)

and

$ \begin{align} \overline{\mu}= \begin{cases} G(x, m_1, \sigma), & x\leq m_1\\ 1, &m_1<x\leq m_2\\ G(x, m_2, \sigma), &x>m_2.\end{cases} \end{align} $ (3)
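As a minimal NumPy sketch (for illustration only; the function names are not from the paper), the Gaussian IT2FMF of (1)-(3) with uncertain mean can be evaluated as follows:

```python
import numpy as np

def gaussian(x, m, sigma):
    """G(x, m, sigma) of (1)."""
    return np.exp(-0.5 * ((x - m) / sigma) ** 2)

def gaussian_it2(x, m1, m2, sigma):
    """Lower and upper MFs of the Gaussian IT2FMF, following (2) and (3)."""
    mid = 0.5 * (m1 + m2)
    # (2): the lower MF switches between the two shifted Gaussians at the midpoint
    lower = np.where(x <= mid, gaussian(x, m2, sigma), gaussian(x, m1, sigma))
    # (3): the upper MF is saturated at 1 between m1 and m2
    upper = np.where(x <= m1, gaussian(x, m1, sigma),
                     np.where(x <= m2, 1.0, gaussian(x, m2, sigma)))
    return lower, upper
```

By construction the lower MF never exceeds the upper MF, so the band between them is the FOU.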
2.2 Ellipsoidal Type of IT2FMF

The ellipsoidal type of IT2FMF is given in Fig. 1 (b). It can be defined by the following equation [8]

$ \begin{align} \mu= \begin{cases} \left(1-|\dfrac{x-m}{\sigma}|^a\right)^{\frac{1}{a}}, &a_2 < a < a_1, ~{\rm if} ~|x-m| < \sigma\\ 0, &{\rm otherwise}\end{cases} \end{align} $ (4)

where $m$ is the middle value, $\sigma$ is the width of the FMF, and $x$ is the input value. The parameters $a_1$ and $a_2$ determine the area of the FOU of the ellipsoidal type of IT2FMF, and they are selected as

$ {{a}_{1}}>1~~\text{and}~~0 < {{a}_{2}} < 1. $ (5)

The boundaries of the FOU of the ellipsoidal type of IT2FMF are the lower MF $\underline{\mu}$ and the upper MF $\overline{\mu}$. The boundary FMFs are given in Table Ⅰ.

Table Ⅰ The MFs of the Ellipsoidal and Derived IT2FMFs

From the definition of the ellipsoidal type of IT2FMF, two observations can be made.

1) There are four parameters $m$, $\sigma$, $a_1$ and $a_2$ that need to be updated in the identification of a system.

2) Computing the derivatives of the ellipsoidal type of IT2FMF is not straightforward; it requires complex computation.
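As a short illustration (assuming NumPy; the function name is hypothetical), the ellipsoidal FMF of (4) can be sketched for a single exponent $a$, where the upper MF uses $a=a_1>1$ and the lower MF uses $a=a_2\in(0,1)$:

```python
import numpy as np

def ellipsoidal(x, m, sigma, a):
    """Ellipsoidal FMF of (4): (1 - |(x-m)/sigma|^a)^(1/a) on the support
    |x - m| < sigma, and 0 outside."""
    u = np.abs((x - m) / sigma)
    # clamp u before the power so the out-of-support branch never sees 1-u^a < 0
    return np.where(u < 1.0, (1.0 - np.minimum(u, 1.0) ** a) ** (1.0 / a), 0.0)
```

With $a_1>1$ the curve bulges outward and with $0<a_2<1$ it is pulled inward, which is why (5) makes the $a_1$ curve the upper MF and the $a_2$ curve the lower MF.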

2.3 The Proposed IT2FMFs

Originating from the ellipsoidal type of IT2FMF, the proposed type of FMF is defined by the following equation

$ \begin{align} \mu= \begin{cases} 1-|\frac{x-m}{\sigma}|^a, &{\rm if} ~|x-m| < \sigma\\ 0, &{\rm otherwise}\end{cases} \end{align} $ (6)

where $m$, $\sigma$ and $x$ are the same as in (4), and $a$ is a parameter that adjusts the shape of the FOU. According to the value of the parameter $a$, two different types of FMFs can be obtained.

1) When $a>0$ and $a\neq1$, the obtained FMF is called the exponential type.

2) When $a=1$, the obtained FMF is called linear type.

According to different combinations of the parameter $a$ for the upper and lower MFs, three types of IT2FMFs can be obtained.

1) In the first case, the upper MF is an exponential-type FMF and the lower MF is a linear-type FMF. This combination is called the exponential-linear-type IT2FMF (EL-type IT2FMF).

2) In the second case, the upper MF is a linear-type MF and the lower MF is an exponential-type MF. This combination is called the linear-exponential-type IT2FMF (LE-type IT2FMF).

3) In the third case, both the upper and lower MFs are exponential-type MFs. This combination is called the exponential-exponential-type IT2FMF (EE-type IT2FMF).

The derived IT2FMFs are shown in Fig. 2, where subgraphs (a), (b) and (c) represent the EL-type, LE-type and EE-type IT2FMFs, respectively. The corresponding upper and lower MFs are given in Table Ⅰ. Compared with the ellipsoidal type of IT2FMF, the derived IT2FMFs have the following merits.

Figure 2 The shape of FOU for the derived IT2FMFs.

1) The number of parameters that need to be updated is reduced: the ellipsoidal type of IT2FMF has four kinds of parameters to update, whereas the proposed IT2FMF has three.

2) The computational complexity is reduced. Simplifying the computation is very important for the application of the IT2FNN.

3) The design freedom of the FMFs is increased: the different combinations can give different performance for different systems.
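The proposed FMF (6) and the three combinations above can be sketched as follows (an illustration, not the paper's code; the pairing relies on the fact that on $u\in[0,1)$ the curve $1-u^a$ is larger for larger $a$, and the EE-type signature with two exponents is an assumption for this sketch):

```python
import numpy as np

def proposed_fmf(x, m, sigma, a):
    """The proposed FMF of (6): 1 - |(x-m)/sigma|^a inside the support, 0 outside."""
    u = np.abs((x - m) / sigma)
    return np.where(u < 1.0, 1.0 - np.minimum(u, 1.0) ** a, 0.0)

def el_type(x, m, sigma, a):
    """EL-type: exponential upper MF (a > 1), linear lower MF (a = 1)."""
    return proposed_fmf(x, m, sigma, 1.0), proposed_fmf(x, m, sigma, a)

def le_type(x, m, sigma, a):
    """LE-type: linear upper MF (a = 1), exponential lower MF (0 < a < 1)."""
    return proposed_fmf(x, m, sigma, a), proposed_fmf(x, m, sigma, 1.0)

def ee_type(x, m, sigma, a_lo, a_up):
    """EE-type: both MFs exponential; a_lo < a_up keeps lower <= upper."""
    return proposed_fmf(x, m, sigma, a_lo), proposed_fmf(x, m, sigma, a_up)
```

Each function returns a `(lower, upper)` pair whose gap is the FOU of the corresponding derived IT2FMF.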

3 The Design Procedure of IT2FNN

To test the effectiveness of the derived IT2FMFs, the simplified IT2FNN is selected as the target network [19]. Its structure is given in Fig. 3, and it consists of six layers.

Figure 3 The structure of the simplified IT2FNN with two inputs, three rules and one final output.

Layer 1: is the input layer. The input value $x_i$ ($i= 0, 1, \ldots, n$, where $n$ represents the number of inputs) is directly transmitted to Layer 2 and Layer 4. There are no weights to be updated in Layer 1.

Layer 2: is the FMF layer, where the fuzzification operation is performed with IT2FMFs. In Fig. 3, the FMFs can be the Gaussian, the ellipsoidal, or one of the derived IT2FMFs. After the interval type-2 fuzzification in Layer 2, the interval $[\underline{\mu}_{ij}, \overline{\mu}_{ij}]$ ($i=1, \ldots, n$ indexes the actual inputs, and $j=1, \ldots, m$ indexes the fuzzy rules for each input) is acquired.

Layer 3: is the firing layer. Each node in this layer represents one fuzzy logic rule and performs a fuzzy meet operation using the algebraic product. The output of a rule node represents the firing strength $F_j$ of the corresponding fuzzy rule $R_j$, which is an interval type-1 fuzzy set. The firing strength $F_j$ is computed as

$ F_j=[\underline{f}_j, ~\overline{f}_j], \quad j=1, \ldots, m $ (7)
$ \underline{f}_j=\prod\limits_{i=1}^{n}\underline{\mu}_{ij}, \quad \overline{f}_j=\prod\limits_{i=1}^{n}\overline{\mu}_{ij} $ (8)

where $m$ is the number of fuzzy rules and $n$ is the number of actual input variables.

Layer 4: is the consequent layer. The nodes in this layer are called TSK-type nodes, and each rule node in Layer 3 has a corresponding TSK-type node in Layer 4. The output of each node is an interval type-1 fuzzy set, denoted by $[w_{jl}, w_{jr}]$ and called the TSK-type weight, which is computed as

$ \begin{align} [w_{jl}, w_{jr}]=&\ [c_{0j}-s_{0j}, ~c_{0j}+s_{0j}]\notag\\ & +\sum^n_{i=1}[c_{ij}-s_{ij}, ~c_{ij}+s_{ij}]x_i \end{align} $ (9)

where $c_{ij}$ and $s_{ij}$ are called the consequent parameters. Each TSK-type weight can be expressed as

$ \begin{align} w_{jl}=\sum^n_{i=0} c_{ij}x_i-\sum^n_{i=0} s_{ij}|x_i| \end{align} $ (10)

and

$ \begin{align} w_{jr}=\sum^n_{i=0} c_{ij}x_i+\sum^n_{i=0} s_{ij}|x_i| \end{align} $ (11)

where $x_0\equiv 1$.
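The TSK-type weights of (10)-(11) amount to two affine maps of the inputs. A minimal sketch (an illustration assuming NumPy; the shape convention, with row 0 holding the bias terms, is an assumption of this sketch):

```python
import numpy as np

def tsk_weights(x, c, s):
    """Interval TSK weights of (10)-(11).
    x: input vector of length n; c, s: consequent parameters of shape (n+1, m),
    whose row 0 corresponds to the bias term x_0 = 1."""
    xa = np.concatenate(([1.0], x))      # prepend x_0 = 1
    w_l = xa @ c - np.abs(xa) @ s        # (10): left end of the interval
    w_r = xa @ c + np.abs(xa) @ s        # (11): right end of the interval
    return w_l, w_r
```

With the paper's initialization $c_{ij}=0.1$, $s_{ij}=0.01$ and $n=2$, $m=3$, the call below gives $w_{jl}=0.27$ and $w_{jr}=0.33$ for every rule.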

Layer 5: can be called the output processing or type-reduction layer. The distribution factors are designed to enable adaptive adjustment of the upper and lower values of the output, which alleviates the type-reduction computation by avoiding the K-M iterative procedure. The outputs $y_l$ and $y_r$ are computed as

$ \begin{align} y_l=\frac{(1-q_l)\sum\limits_{j=1}^m \overline{f}_j w_{jl}+q_l \sum\limits_{j=1}^m\underline{f}_j w_{jl} }{\sum\limits_{j=1}^m(\underline{f}_j+\overline{f}_j)} \end{align} $ (12)

and

$ \begin{align} y_r=\frac{(1-q_r)\sum\limits_{j=1}^m\underline{f}_j w_{jr}+q_r \sum\limits_{j=1}^m\overline{f}_jw_{jr} }{\sum\limits_{j=1}^m(\underline{f}_j+\overline{f}_j)} \end{align} $ (13)

where $q_l$ and $q_r$ are called the left and right distribution factors.

Layer 6: is the output layer. Because the output of Layer 5 is an interval set, it cannot be used as the output directly. The defuzzification is realized by computing the average of $y_l$ and $y_r$.

$ \begin{align} y=\frac{y_l+y_r}{2}. \end{align} $ (14)
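Layers 5 and 6, i.e., equations (12)-(14), can be sketched as a direct NumPy transcription (for illustration only; the function name is hypothetical, and the firing bounds are assumed to come from (8)):

```python
import numpy as np

def type_reduce(f_lo, f_up, w_l, w_r, q_l, q_r):
    """Type reduction and defuzzification of (12)-(14).
    f_lo, f_up: rule firing bounds from (8); w_l, w_r: TSK interval weights
    from (10)-(11); q_l, q_r: left and right distribution factors."""
    denom = np.sum(f_lo + f_up)
    y_l = ((1 - q_l) * np.sum(f_up * w_l)
           + q_l * np.sum(f_lo * w_l)) / denom          # (12)
    y_r = ((1 - q_r) * np.sum(f_lo * w_r)
           + q_r * np.sum(f_up * w_r)) / denom          # (13)
    return 0.5 * (y_l + y_r)                            # (14)
```

No sorting of the consequent weights is needed, which is exactly the computational advantage over the iterative K-M procedure.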

The simplified IT2FNN reduces the computational complexity of the IT2FNN. The parameter updating computation is a key part of realizing it, and the gradient descent method (GDM) is used for this purpose.

4 Parameter Updating Rules

In the parameter updating design of the IT2FNN, many different methods can be applied, such as the GDM, the extended Kalman filter (EKF) and particle swarm optimization (PSO) [8]. In this paper, we apply the GDM to the parameter updating for single-output system identification. The cost function is defined as

$ \begin{align} E=\frac{1}{2}(y(k)-y_d(k))^2=\frac{1}{2}e(k)^2 \end{align} $ (15)

where $y_d(k)$ and $y(k)$ are the desired output and the actual output of the simplified IT2FNN, respectively, $e(k)=y(k)-y_d(k)$ is the identification error, and $k$ is the sample number. According to the GDM, the parameters are updated with the following algorithm

$ \begin{align} X(k+1)=X(k)-\eta \frac{\partial E}{\partial X(k)} \end{align} $ (16)

where $X(k)$ can represent $m$, $\sigma$, $a$, $c$, $s$ or $q$, and $\eta$ is the learning rate.
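The generic update (16) applies the same rule to every parameter group. A minimal sketch (an illustration, not the paper's code; the dict keys are hypothetical names for the parameter groups):

```python
def gdm_step(params, grads, eta=0.8):
    """One gradient-descent step of (16): X(k+1) = X(k) - eta * dE/dX,
    applied to every entry of the parameter dict (m, sigma, a, c, s, q)."""
    return {name: params[name] - eta * grads[name] for name in params}
```

The value $\eta=0.8$ matches the learning rate used in the simulations of Section 5.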

When the Gaussian type of FMF is selected for the simplified IT2FNN, three kinds of parameters need to be updated: the consequent parameters, the distribution factors and the antecedent parameters.

4.1 Consequent Parameter and Distribution Factor Updating Algorithm

The consequent parameters include $c$ and $s$. The gradients $\frac{\partial E}{\partial c_{ij}}$ and $\frac{\partial E}{\partial s_{ij}}$ for updating the consequent parameters are given by

$ \begin{align} \frac{\partial E}{\partial c_{ij}}&=\frac{\partial E}{\partial y}\left(\frac{\partial y}{\partial y_l}\frac{\partial y_l}{\partial w_{jl}}\frac{\partial w_{jl}}{\partial c_{ij}}+\frac{\partial y}{\partial y_r}\frac{\partial y_r}{\partial w_{jr}}\frac{\partial w_{jr}}{\partial c_{ij}}\right)\notag \\ &=\frac{((1-q_l+q_r)\overline{f}_j+(1-q_r+q_l)\underline{f}_j)x_i e}{\sum\limits_{j=1}^m(\underline{f}_j+\overline{f}_j)} \end{align} $ (17)

and

$ \begin{align} \frac{\partial E}{\partial s_{ij}}&=\frac{\partial E}{\partial y}\left(\frac{\partial y}{\partial y_l}\frac{\partial y_l}{\partial w_{jl}}\frac{\partial w_{jl}}{\partial s_{ij}}+\frac{\partial y}{\partial y_r}\frac{\partial y_r}{\partial w_{jr}}\frac{\partial w_{jr}}{\partial s_{ij}}\right)\notag \\ &=\frac{((1-q_r-q_l)\underline{f}_j+(1-q_l+q_r)\overline{f}_j)|x_i| e}{\sum\limits_{j=1}^m(\underline{f}_j+\overline{f}_j)}. \end{align} $ (18)

Remark 1: In the consequent parameter updating, $i=0, \ldots, n$ and $j=1, \ldots, m$. In the following antecedent parameter updating, $i=1, \ldots, n$ and $j=1, \ldots, m$.

The distribution factors include the left factor $q_l$ and the right factor $q_r$. The gradients $\frac{\partial E}{\partial q_l}$ and $\frac{\partial E}{\partial q_r}$ for updating the distribution factors are computed as

$ \begin{align} \frac{\partial E}{\partial q_l}=\frac{\sum\limits_{j=1}^mw_{jl}(\underline{f}_j-\overline{f}_j)e} {\sum\limits_{j=1}^m(\underline{f}_j+\overline{f}_j)} \end{align} $ (19)

and

$ \begin{align} \frac{\partial E}{\partial q_r}=\frac{\sum\limits_{j=1}^mw_{jr}(\overline{f}_j-\underline{f}_j)e} {\sum\limits_{j=1}^m(\underline{f}_j+\overline{f}_j)}. \end{align} $ (20)
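When implementing analytic gradients such as (17)-(20), a central-difference check against the cost (15) is a cheap safeguard. The helper below is a generic utility (not part of the paper's method):

```python
def num_grad(f, x, h=1e-6):
    """Central-difference approximation of df/dx at a scalar x.
    Useful for verifying analytic updating expressions such as (17)-(20)
    by perturbing one parameter of the network and re-evaluating E of (15)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

For each parameter, one compares `num_grad` of the scalar map `parameter -> E` with the corresponding analytic expression; a mismatch localizes a derivation or coding error.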
4.2 Antecedent Parameter Updating Algorithm

The antecedent parameters include $m$, $\sigma$ and $a$. The common updating expression $\frac{\partial E}{\partial X}$ for the antecedent parameters is

$ \begin{align} \frac{\partial E}{\partial X}=&\left[\left(\frac{\partial y_l}{\partial \overline{f}_j}+\frac{\partial y_r}{\partial \overline{f}_j}\right)\frac{\partial\overline{f}_j}{\partial \overline{\mu}_{ij}}\frac{\partial\overline{\mu}_{ij}}{\partial X}\right.\notag\\ & +\left.\left(\frac{\partial y_l}{\partial \underline{f}_j}+\frac{\partial y_r}{\partial \underline{f}_j}\right)\frac{\partial\underline{f}_j}{\partial \underline{\mu}_{ij}}\frac{\partial \underline{\mu}_{ij}}{\partial X}\right]e \end{align} $ (21)

where $X$ can be $m$, $\sigma$ or $a$. For the simplified IT2FNN, the partial derivative $\frac{\partial y_l}{\partial \overline{f}_j}$, $\frac{\partial y_l}{\partial \underline{f}_j}$, $\frac{\partial y_r}{\partial \overline{f}_j}$ and $\frac{\partial y_r}{\partial \underline{f}_j}$ can be computed with the following expressions

$ \begin{align} \frac{\partial y_l}{\partial \overline{f}_j}=\frac{(1-q_l)w_{jl}-y_l}{\sum\limits_{j=1}^m(\underline{f}_j+\overline{f}_j)}, ~~~ \frac{\partial y_l}{\partial \underline{f}_j}=\frac{q_lw_{jl}-y_l}{\sum\limits_{j=1}^m(\underline{f}_j+ \overline{f}_j)} \end{align} $ (22)

and

$ \begin{align} \frac{\partial y_r}{\partial \overline{f}_j}=\frac{q_r w_{jr}-y_r}{\sum\limits_{j=1}^m(\underline{f}_j+\overline{f}_j)}, ~~~ \frac{\partial y_r}{\partial \underline{f}_j}=\frac{(1-q_r)w_{jr}-y_r}{\sum\limits_{j=1}^m(\underline{f}_j+\overline{f}_j)}. \end{align} $ (23)

The computation of the partial derivatives $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ in (21) depends on the selected IT2FMF. Their computation for the different types of IT2FMFs is given in the following subsections.

4.2.1 When the Gaussian Type of IT2FMF is Selected

Three kinds of antecedent parameters need to be updated for the Gaussian type of IT2FMF: $m_1$, $m_2$ and $\sigma$. The partial derivatives $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ are computed as

$ \begin{align} &\frac{\partial \overline{\mu}_{ij}}{\partial m_{ij1}}= \begin{cases} \dfrac{(x_i-m_{ij1})\overline{\mu}_{ij}}{(\sigma_{ij})^2}, &x_i\leq m_{ij1}\\ 0, &{\rm otherwise}\end{cases}\end{align} $ (24)
$ \begin{align} &\frac{\partial \underline{\mu}_{ij}}{\partial m_{ij1}}= \begin{cases} \dfrac{(x_i-m_{ij1})\underline{\mu}_{ij}}{(\sigma_{ij})^2}, &x_i > \dfrac{m_{ij1}+m_{ij2}}{2}\\ 0, &{\rm otherwise}\end{cases}\end{align} $ (25)
$ \begin{align} &\frac{\partial \overline{\mu}_{ij}}{\partial m_{ij2}}= \begin{cases} \dfrac{(x_i-m_{ij2})\overline{\mu}_{ij}}{(\sigma_{ij})^2}, &x_i> m_{ij2}\\ 0, &{\rm otherwise}\end{cases}\end{align} $ (26)
$ \begin{align} &\frac{\partial \underline{\mu}_{ij}}{\partial m_{ij2}}= \begin{cases} \dfrac{(x_i-m_{ij2})\underline{\mu}_{ij}}{(\sigma_{ij})^2}, &x_i \leq \dfrac{m_{ij1}+m_{ij2}}{2}\\ 0, &{\rm otherwise}\end{cases} \end{align} $ (27)
$ \begin{align} \frac{\partial \overline{\mu}_{ij}}{\partial \sigma_{ij}}= \begin{cases} \dfrac{(x_i-m_{ij1})^2\overline{\mu}_{ij}}{(\sigma_{ij})^3}, &x_i< m_{ij1}\\ \dfrac{(x_i-m_{ij2})^2\overline{\mu}_{ij}}{(\sigma_{ij})^3}, &x_i> m_{ij2}\\ 0, &{\rm otherwise}\end{cases} \end{align} $ (28)

and

$ \begin{align} \frac{\partial \underline{\mu}_{ij}}{\partial \sigma_{ij}}= \begin{cases} \dfrac{(x_i-m_{ij2})^2\underline{\mu}_{ij}}{(\sigma_{ij})^3}, &x_i\leq \dfrac{m_{ij1}+m_{ij2}}{2}\\ \dfrac{(x_i-m_{ij1})^2\underline{\mu}_{ij}}{(\sigma_{ij})^3}, &x_i> \dfrac{m_{ij1}+m_{ij2}}{2}.\\ \end{cases} \end{align} $ (29)

Equations (24)-(29) give the partial derivatives of the Gaussian type of IT2FMF with respect to the parameters $m_1$, $m_2$ and $\sigma$. Once $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ are acquired, the partial derivative $\frac{\partial E}{\partial X}$ can be obtained with (21)-(23).

4.2.2 When the Ellipsoidal Type of IT2FMF Is Selected

When the ellipsoidal type of IT2FMF is selected, four kinds of antecedent parameters need to be updated: $m$, $\sigma$, $a_1$ and $a_2$. The partial derivatives $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ for this IT2FMF are given in (30)-(35).

$ \begin{align} \frac{\partial \overline{\mu}_{ij}}{\partial m_{ij}}=\begin{cases} -\dfrac{1}{\sigma_{ij}}\left[1-\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij1}}\right]^{\frac{1-a_{ij1}}{a_{ij1}}}\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij1}-1}, & m_{ij}-\sigma_{ij}<x_i\leq m_{ij}\\[2mm] \dfrac{1}{\sigma_{ij}}\left[1-\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij1}}\right]^{\frac{1-a_{ij1}}{a_{ij1}}}\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij1}-1}, & m_{ij}<x_i\leq m_{ij}+\sigma_{ij}\\[2mm] 0, & \text{otherwise}\end{cases} \end{align} $ (30)
$ \begin{align} \frac{\partial \underline{\mu}_{ij}}{\partial m_{ij}}=\begin{cases} -\dfrac{1}{\sigma_{ij}}\left[1-\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij2}}\right]^{\frac{1-a_{ij2}}{a_{ij2}}}\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij2}-1}, & m_{ij}-\sigma_{ij}<x_i\leq m_{ij}\\[2mm] \dfrac{1}{\sigma_{ij}}\left[1-\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij2}}\right]^{\frac{1-a_{ij2}}{a_{ij2}}}\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij2}-1}, & m_{ij}<x_i\leq m_{ij}+\sigma_{ij}\\[2mm] 0, & \text{otherwise}\end{cases} \end{align} $ (31)
$ \begin{align} \frac{\partial \overline{\mu}_{ij}}{\partial \sigma_{ij}}=\begin{cases} \dfrac{1}{\sigma_{ij}}\left[1-\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij1}}\right]^{\frac{1-a_{ij1}}{a_{ij1}}}\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij1}}, & m_{ij}-\sigma_{ij}<x_i\leq m_{ij}\\[2mm] \dfrac{1}{\sigma_{ij}}\left[1-\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij1}}\right]^{\frac{1-a_{ij1}}{a_{ij1}}}\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij1}}, & m_{ij}<x_i\leq m_{ij}+\sigma_{ij}\\[2mm] 0, & \text{otherwise}\end{cases} \end{align} $ (32)
$ \begin{align} \frac{\partial \underline{\mu}_{ij}}{\partial \sigma_{ij}}=\begin{cases} \dfrac{1}{\sigma_{ij}}\left[1-\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij2}}\right]^{\frac{1-a_{ij2}}{a_{ij2}}}\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij2}}, & m_{ij}-\sigma_{ij}<x_i\leq m_{ij}\\[2mm] \dfrac{1}{\sigma_{ij}}\left[1-\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij2}}\right]^{\frac{1-a_{ij2}}{a_{ij2}}}\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij2}}, & m_{ij}<x_i\leq m_{ij}+\sigma_{ij}\\[2mm] 0, & \text{otherwise}\end{cases} \end{align} $ (33)
$ \begin{align} \frac{\partial \overline{\mu}_{ij}}{\partial a_{ij1}}=\begin{cases} -\left[1-\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij1}}\right]^{\frac{1}{a_{ij1}}}\left\{\dfrac{1}{a_{ij1}^{2}}\ln\left[1-\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij1}}\right]+\dfrac{\left(\frac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij1}}\ln\left(\frac{m_{ij}-x_i}{\sigma_{ij}}\right)}{a_{ij1}\left[1-\left(\frac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij1}}\right]}\right\}, & m_{ij}-\sigma_{ij}<x_i\leq m_{ij}\\[2mm] -\left[1-\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij1}}\right]^{\frac{1}{a_{ij1}}}\left\{\dfrac{1}{a_{ij1}^{2}}\ln\left[1-\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij1}}\right]+\dfrac{\left(\frac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij1}}\ln\left(\frac{x_i-m_{ij}}{\sigma_{ij}}\right)}{a_{ij1}\left[1-\left(\frac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij1}}\right]}\right\}, & m_{ij}<x_i\leq m_{ij}+\sigma_{ij}\\[2mm] 0, & \text{otherwise}\end{cases} \end{align} $ (34)
$ \begin{align} \frac{\partial \underline{\mu}_{ij}}{\partial a_{ij2}}=\begin{cases} -\left[1-\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij2}}\right]^{\frac{1}{a_{ij2}}}\left\{\dfrac{1}{a_{ij2}^{2}}\ln\left[1-\left(\dfrac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij2}}\right]+\dfrac{\left(\frac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij2}}\ln\left(\frac{m_{ij}-x_i}{\sigma_{ij}}\right)}{a_{ij2}\left[1-\left(\frac{m_{ij}-x_i}{\sigma_{ij}}\right)^{a_{ij2}}\right]}\right\}, & m_{ij}-\sigma_{ij}<x_i\leq m_{ij}\\[2mm] -\left[1-\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij2}}\right]^{\frac{1}{a_{ij2}}}\left\{\dfrac{1}{a_{ij2}^{2}}\ln\left[1-\left(\dfrac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij2}}\right]+\dfrac{\left(\frac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij2}}\ln\left(\frac{x_i-m_{ij}}{\sigma_{ij}}\right)}{a_{ij2}\left[1-\left(\frac{x_i-m_{ij}}{\sigma_{ij}}\right)^{a_{ij2}}\right]}\right\}, & m_{ij}<x_i\leq m_{ij}+\sigma_{ij}\\[2mm] 0, & \text{otherwise}\end{cases} \end{align} $ (35)

and $\frac{\partial \underline{\mu}_{ij}}{\partial a_{ij1}}=0$, $\frac{\partial \overline{\mu}_{ij}}{\partial a_{ij2}}=0$.

4.2.3 When the Derived IT2FMFs Are Selected

When the derived IT2FMFs are selected, three kinds of antecedent parameters need to be updated: $m$, $\sigma$ and $a$. The partial derivatives $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ for the derived IT2FMFs are given for the following three cases.

1) EL-type IT2FMF

In the EL-type IT2FMF, the parameter $a>1$. The computation of the partial derivative $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ of the EL-type IT2FMF can be given in Table Ⅱ.

Table Ⅱ $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ of EL-type IT2FMF

2) LE-type IT2FMF

In the LE-type IT2FMF, the parameter $0 < a < 1$. The computation of the partial derivative $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ of the LE-type IT2FMF are given in Table Ⅲ.

Table Ⅲ $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ of LE-type IT2FMF

3) EE-type IT2FMF

In the computation of the partial derivatives $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ for the EE-type IT2FMF, two cases need to be considered: $0 < a < 1$ and $a>1$. The computation for the two cases is given in Table Ⅳ and Table Ⅴ.

Table Ⅳ $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ of EE-type IT2FMF When $0 < a < 1$
Table Ⅴ $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$ of EE-type IT2FMF When $a>1$

Remark 2: From the above computation, we can draw the following conclusions.

1) The difference in realization among the FMFs lies mainly in the computation of $\frac{\partial \overline{\mu}_{ij}}{\partial X}$ and $\frac{\partial\underline{\mu}_{ij}}{\partial X}$.

2) The consequent parameter and distribution factor updating algorithms are identical for the Gaussian, ellipsoidal and derived IT2FMFs in the simplified IT2FNN.

3) The computation for the ellipsoidal type of IT2FMF is more complex than for the Gaussian and derived IT2FMFs, and the EL-type and LE-type IT2FMFs are easier to realize than the EE-type IT2FMF.

5 Simulation Results and Analysis

To test the effectiveness of the derived IT2FMFs, the IT2FMFs are applied in the simplified IT2FNN to identify two typical nonlinear time-varying systems [5], [19], [20]. The structure of the system identification constructed with MATLAB/Simulink is given in Fig. 4.

Figure 4 The structure of the system identification with simplified IT2FNN.

To compare the performance of the derived IT2FMFs with that of the selected IT2FMFs, the number of rules for each node in the second layer of the simplified IT2FNN is set to $m=3$, and the number of inputs is set to $n=2$. In the simulation of the identification with the simplified IT2FNN, the common initialization data are

$ \begin{align} c_{ij}=0.1, ~~s_{ij}=0.01, ~~\eta=0.8 \end{align} $ (36)

where $i=0, 1, 2$ and $j=1, 2, 3$.

The initial antecedent parameters of the different types of IT2FMFs are given in Table Ⅵ, where $i=1, 2$ and $j=1, 2, 3$.

Table Ⅵ The Initial Antecedent Parameters for Different Type of IT2FMFs

The integral of the absolute value of the error (IAE) is selected as the performance criterion, which is given by the following expression

$ \begin{align} { IAE}=\sum ^{+\infty}_{k=1}|e(k)|T_s \end{align} $ (37)

where $T_s$ is the sample time, and $e(k)=y(k)-y_d(k)$ is the identification error. In the simulation, the sample time $T_s$ is set to be $0.001$ s.
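In discrete time, (37) amounts to a scaled sum of absolute identification errors over the simulation horizon. A minimal sketch (the function name `iae` is ours):

```python
def iae(errors, ts=0.001):
    """Integral of absolute error, Eq. (37): IAE = sum_k |e(k)| * Ts."""
    return sum(abs(e) for e in errors) * ts

# A faster-decaying identification error yields a smaller IAE score.
slow = [0.5 * 0.99 ** k for k in range(2000)]
fast = [0.5 * 0.90 ** k for k in range(2000)]
assert iae(fast) < iae(slow)
```

This is the quantity tabulated in Tables Ⅶ and Ⅷ for each IT2FMF type.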

5.1 Example 1

The first nonlinear system to be identified is given as the following expression

$ \begin{align} y_d(k+1)=\frac{y_d(k)}{1+y^2_d(k)}+u^3(k) \end{align} $ (38)

where $k$ is the sample number. The input variables of the simplified IT2FNN are $u(k)$ and $y_d(k)$. The input signal is generated with $u(k)={\rm sin}(2 \pi k/1000)$.
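The plant in (38) can be simulated directly to produce the reference output that the simplified IT2FNN is trained against. A minimal sketch (the IT2FNN identifier itself is omitted; the function names are ours):

```python
import math

def plant_step(y, u):
    """One step of the nonlinear plant (38): y(k+1) = y(k)/(1 + y(k)^2) + u(k)^3."""
    return y / (1.0 + y * y) + u ** 3

def simulate(n_steps, y0=0.0):
    """Generate the reference trajectory driven by u(k) = sin(2*pi*k/1000)."""
    y = y0
    traj = [y]
    for k in range(n_steps):
        u = math.sin(2.0 * math.pi * k / 1000.0)
        y = plant_step(y, u)
        traj.append(y)
    return traj

traj = simulate(2000)  # two input periods at Ts = 0.001 s
```

Since $|y/(1+y^2)| \leq 0.5$ and $|u^3| \leq 1$, the plant output stays bounded, which makes this a convenient identification benchmark.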

The identification results of the system in (38) with different types of IT2FMFs are given in Fig. 5 (a). The identification errors and IAEs are given in Fig. 5 (b) and Fig. 5 (c). In Fig. 5, the subscripts 1, 2, 3, 4 and 5 denote the Gaussian, ellipsoidal, EL-type, LE-type and EE-type IT2FMFs, respectively. When the output of the system contains uniform random noise (in $[-0.1, 0.1]$), the simulation results are given in Fig. 6 (a), and the comparison of the identification errors and IAEs under disturbance is given in Fig. 6 (b) and Fig. 6 (c). The IAEs for Example 1 without and with disturbance, evaluated at 2 s, are compared in Table Ⅶ.

Figure 5 Identification of Example 1 with different FMFs.
Figure 6 Identification of Example 1 with disturbance.
Table Ⅶ The Comparison of the IAEs for Example 1
5.2 Example 2

The second nonlinear system to be identified is given as the following expression

$ \begin{align} y_d(k+1)=\frac{f}{a+y^2_d(k-1)+y^2_d(k-2)} \end{align} $ (39)

where $f$, $a$, $b$ and $c$ are time-varying quantities given by the following expressions

$ \begin{align} f=&\ y_d(k)y_d(k-1)y_d(k-2)\notag\\ & \times[y_d(k-2)-b]u(k-1)+cu(k) \end{align} $ (40)
$ \begin{align} a(t)=&\ 1.2-0.2{\rm cos}\left(\frac{2\pi k}{T}\right) \end{align} $ (41)
$ \begin{align} b(t)=&\ 1-0.4{\rm sin}\left(\frac{2\pi k}{T}\right)~~~ \end{align} $ (42)
$ \begin{align} c(t)=&\ 1+0.4{\rm sin}\left(\frac{2\pi k}{T}\right)~~ \end{align} $ (43)

where $T$ is the number of samples per period. To test the identification performance, the input signal is given by the following expression

$ \begin{align} u(k)= \begin{cases} {\rm sin}(\frac{\pi k}{25}), &k < 250\\[1mm] 1, &250\leq k < 500\\[1mm] -1, &500\leq k < 750\\[1mm] g, &750\leq k < 1000\end{cases} \end{align} $ (44)

where

$ \begin{align} g=0.3{\rm sin}\left(\frac{\pi k}{25}\right)+0.1{\rm sin}\left(\frac{\pi k}{32}\right)+0.6{\rm sin}\left(\frac{\pi k}{10}\right) . \end{align} $ (45)
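The test input (44), (45) and the time-varying parameters (41)-(43) can be generated as below. This is a sketch assuming the third segment of (44) covers $500 \leq k < 750$ and that $T = 1000$ samples per period; the function names are ours.

```python
import math

def u_example2(k):
    """Piecewise test input of (44); third segment assumed to cover 500 <= k < 750."""
    if k < 250:
        return math.sin(math.pi * k / 25.0)
    if k < 500:
        return 1.0
    if k < 750:
        return -1.0
    # Multi-sine segment g of (45) for 750 <= k < 1000.
    return (0.3 * math.sin(math.pi * k / 25.0)
            + 0.1 * math.sin(math.pi * k / 32.0)
            + 0.6 * math.sin(math.pi * k / 10.0))

def abc(k, T=1000):
    """Time-varying parameters a, b, c of (41)-(43)."""
    w = 2.0 * math.pi * k / T
    return (1.2 - 0.2 * math.cos(w),
            1.0 - 0.4 * math.sin(w),
            1.0 + 0.4 * math.sin(w))
```

The piecewise input exercises the identifier with smooth, constant, and multi-frequency excitation in turn, while $a$, $b$ and $c$ drift over each period, which is what makes this example a time-varying benchmark.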

The identification results of the system in (39) with different FMFs are given in Fig. 7 (a). The identification errors and IAEs are given in Fig. 7 (b) and Fig. 7 (c). The identification results with uniform noise in $[-0.1, 0.1]$ are given in Fig. 8 (a), and the comparison of the identification errors and IAEs under disturbance is given in Fig. 8 (b) and Fig. 8 (c). The IAEs for Example 2 without and with disturbance, evaluated at 2 s, are compared in Table Ⅷ.

Figure 7 Identification of Example 2 with different IT2FMFs.
Figure 8 Identification of Example 2 with disturbance.
Table Ⅷ The Comparison of the IAEs for Example 2
5.3 Analysis and Discussion

From the simulation results and comparisons, we can draw the following five conclusions.

1) The proposed types of IT2FMFs (EL-type, LE-type and EE-type) are effective and can be applied to system identification with the simplified IT2FNN.

2) The derived IT2FMFs can achieve better performance than the Gaussian and ellipsoidal types of IT2FMFs with elaborate tuning of the parameters of the simplified IT2FNN.

3) The ellipsoidal type of IT2FMF can be used for systems with static parameters, and it is more robust than the Gaussian type of IT2FMF under disturbances. However, when it is used for systems with time-varying parameters, its identification error is larger than those of the Gaussian and derived IT2FMFs.

4) In static system identification, the EL-type IT2FMF has better identification accuracy than the LE-type IT2FMF under disturbance, whereas in time-varying system identification, the LE-type IT2FMF has better identification accuracy than the EL-type IT2FMF under disturbance.

5) Among the derived FMFs, the EE-type IT2FMF has stronger identification ability than the EL-type and LE-type IT2FMFs, whether or not the time-varying or disturbance characteristics of the actual system are considered.

Remark 3: Although the derived IT2FMFs achieve better identification performance than the Gaussian and ellipsoidal types of IT2FMFs in the above two examples, we cannot claim that they guarantee better performance in all environments. Because uncertainty manifests differently in different systems, no single type of IT2FMF can fit all conditions. This paper provides more freedom in the selection of the IT2FMFs that can be used in IT2FNN design.

6 Conclusions

In this paper, a new type of FMF is proposed for the IT2FNN, and three types of IT2FMFs are derived from it. The whole paper can be summarized with the following three conclusions.

1) The three derived types of IT2FMFs are simpler than the ellipsoidal type of IT2FMF and have better identification ability in system identification.

2) The derived IT2FMFs, together with the adoption of the distribution factor $q$, simplify the computation of the type-reduction problem of the IT2FNN. This combination makes the realization of the IT2FNN straightforward.

3) The proposed IT2FMFs give more freedom in the selection of IT2FMFs for the IT2FS, which is very meaningful for the research of the IT2FNN.

References
[1] L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning-I," Inform. Sci., vol. 8, no. 3, pp. 199-249, 1975.
[2] N. N. Karnik, J. M. Mendel, and Q. L. Liang, "Type-2 fuzzy logic systems," IEEE Trans. Fuzzy Syst., vol. 7, no. 6, pp. 643-658, Dec. 1999.
[3] Q. L. Liang and J. M. Mendel, "Interval type-2 fuzzy logic systems: Theory and design," IEEE Trans. Fuzzy Syst., vol. 8, no. 5, pp. 535-550, Oct. 2000.
[4] J. M. Mendel, R. I. John, and F. L. Liu, "Interval type-2 fuzzy logic systems made simple," IEEE Trans. Fuzzy Syst., vol. 14, no. 6, pp. 808-821, Dec. 2006.
[5] R. H. Abiyev and O. Kaynak, "Type 2 fuzzy neural structure for identification and control of time-varying plants," IEEE Trans. Ind. Electron., vol. 57, no. 12, pp. 4147-4159, Dec. 2010.
[6] C. T. Lin, N. R. Pal, S. L. Wu, Y. T. Liu, and Y. Y. Lin, "An interval type-2 neural fuzzy system for online system identification and feature elimination," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 7, pp. 1442-1455, Jul. 2015.
[7] B. I. Choi and F. C. H. Rhee, "Interval type-2 fuzzy membership function generation methods for pattern recognition," Inform. Sci., vol. 179, no. 13, pp. 2102-2122, Jun. 2009.
[8] M. A. Khanesar, E. Kayacan, M. Teshnehlab, and O. Kaynak, "Extended Kalman filter based learning algorithm for type-2 fuzzy logic systems and its experimental evaluation," IEEE Trans. Ind. Electron., vol. 59, no. 11, pp. 4443-4455, Nov. 2012.
[9] Y. Y. Lin, J. Y. Chang, and C. T. Lin, "Identification and prediction of dynamic systems using an interactively recurrent self-evolving fuzzy neural network," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 2, pp. 310-321, Feb. 2013.
[10] Y. Y. Lin, J. Y. Chang, N. R. Pal, and C. T. Lin, "A mutually recurrent interval type-2 neural fuzzy system (MRIT2NFS) with self-evolving structure and parameters," IEEE Trans. Fuzzy Syst., vol. 21, no. 3, pp. 492-509, Jun. 2013.
[11] C. H. Wang, C. S. Cheng, and T. T. Lee, "Dynamical optimal training for interval type-2 fuzzy neural network (T2FNN)," IEEE Trans. Syst. Man Cybernet. B, vol. 34, no. 3, pp. 1462-1477, Jun. 2004.
[12] J. R. Castro, O. Castillo, P. Melin, and A. Rodriguez-Díaz, "A hybrid learning algorithm for a class of interval type-2 fuzzy neural networks," Inform. Sci., vol. 179, no. 13, pp. 2175-2193, Jun. 2009.
[13] C. F. Juang and C. Y. Chen, "Data-driven interval type-2 neural fuzzy system with high learning accuracy and improved model interpretability," IEEE Trans. Cybernet., vol. 43, no. 6, pp. 1781-1795, Dec. 2013.
[14] C. F. Juang and Y. W. Tsao, "A self-evolving interval type-2 fuzzy neural network with online structure and parameter learning," IEEE Trans. Fuzzy Syst., vol. 16, no. 6, pp. 1411-1424, Dec. 2008.
[15] C. S. Chen, "TSK-type self-organizing recurrent-neural-fuzzy control of linear microstepping motor drives," IEEE Trans. Power Electron., vol. 25, no. 9, pp. 2253-2265, Sep. 2010.
[16] C. S. Chen, "Supervisory interval type-2 TSK neural fuzzy network control for linear microstepping motor drives with uncertainty observer," IEEE Trans. Power Electron., vol. 26, no. 7, pp. 2049-2064, Jul. 2011.
[17] Y. Y. Lin, J. Y. Chang, and C. T. Lin, "A TSK-type-based self-evolving compensatory interval type-2 fuzzy neural network (TSCIT2FNN) and its applications," IEEE Trans. Ind. Electron., vol. 61, no. 1, pp. 447-459, Jan. 2014.
[18] X. P. Xie, H. J. Ma, Y. Zhao, D. W. Ding, and Y. C. Wang, "Control synthesis of discrete-time T-S fuzzy systems based on a novel non-PDC control scheme," IEEE Trans. Fuzzy Syst., vol. 21, no. 1, pp. 147-157, Feb. 2013.
[19] Y. Y. Lin, S. H. Liao, J. Y. Chang, and C. T. Lin, "Simplified interval type-2 fuzzy neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 5, pp. 959-969, May 2014.
[20] C. F. Juang, R. B. Huang, and Y. Y. Lin, "A recurrent self-evolving interval type-2 fuzzy neural network for dynamic system processing," IEEE Trans. Fuzzy Syst., vol. 17, no. 5, pp. 1092-1105, Oct. 2009.