Acta Automatica Sinica, 2017, Vol. 43, Issue (12): 2244-2252
Stability Analysis for Memristive Recurrent Neural Network and Its Application to Associative Memory
Gang Bao1, Yuanyuan Chen1, Siyu Wen1, Zhicen Lai1     
1. Hubei Key Laboratory of Cascaded Hydropower Stations Operation and Control, School of Electrical Engineering and New Energies, China Three Gorges University, Yichang 443002, China
Abstract: The memristor is a nonlinear resistor with variable resistance. This paper discusses dynamic properties of the memristor and of recurrent neural networks (RNNs) with memristors as connection weights. Firstly, it establishes by simulation that there exists a threshold voltage for the memristor. Secondly, it presents a model for the memristive recurrent neural network (MRNN), which has variable and bounded coefficients, and analyzes the stability of the memristive neural network with some mathematical tools. Thirdly, it gives a synthesis algorithm for associative memory based on the memristive recurrent neural network. Finally, three examples verify our results.
Key words: Associative memory     memristor     memristive recurrent neural network (MRNN)     stability    
1 Introduction

Artificial neural networks have been developed for solving complex problems in control, optimal computation, pattern recognition, information processing, and associative memory [1]-[13]. The American scientist Hopfield made a great contribution to the development of neural networks: the implementation of a neural network with simple circuit devices, namely resistors, capacitors and amplifiers [14]. The Hopfield neural network (HNN) can mimic the human associative memory function and accomplish optimization. The key point is the weights of the HNN, which are implemented by resistors to simulate neuron synapses. The bottleneck is that linear resistors cannot reflect the variability of synapses, because the resistance of a linear resistor is invariable.

The memristor [15], [16], the emerging fourth fundamental circuit element, makes it possible to better simulate the variability of neuron synapses. Pershin and Di Ventra [17] give experimental results showing that neurons with memristors as synapses can simulate the associative memory function of a dog. Hence, the memristor is a hotspot of present physics research. Several models of the memristor have been set up and its properties have been analyzed in [18]-[21]. Based on these analyses, memristors can be used to mimic synapses in neural computing architectures [22], construct memristor bridge synapses [23] and brain-like systems combined with the conventional complementary metal oxide semiconductor (CMOS) technology [24], build memristive neural networks [25], [26], and implement memristor arrays for image processing [27], etc.

Some researchers derive the mathematical model of the memristive recurrent neural network (MRNN) by replacing resistors with memristors in Hopfield and cellular neural network circuits [28]-[30]. The MRNN is modeled as a state-dependent switched system by simplifying the memristor to a two-valued device switched by its terminal voltage. With differential inclusion theory, Lyapunov-Krasovskii functionals and some other mathematical tools, sufficient conditions are derived for the dynamics of MRNNs, such as convergence and attractivity [31]-[33], periodicity and dissipativity [34], dissipativity in the stochastic and discrete cases, global exponential almost periodicity and complete stability [35], multi-stability [36], etc. Considering the difficulty caused by the switching property of the memristor, researchers derive some interesting results about exponential stabilization, reliable stabilization, and finite-time stabilization of MRNNs by designing different state feedback controllers [37], [38] and sampled-data controllers [39]. All of these results lay a solid foundation for the MRNN's application to associative memory.

Associative memory is a distinguished function of the human brain which can be simulated by recurrent neural networks (RNNs). The design problem is that some given prototype patterns are to be stored by an RNN, and then the stored patterns can be recalled from some prompt information. In the existing literature [40]-[46], there are two design methods for associative memory. One is that prototype patterns are designed as multiple locally asymptotically stable equilibria and initial values are the recalling probes. The other is that a prototype pattern is designed as the unique globally asymptotically stable equilibrium point, with an external input as the recalling probe. Different external inputs yield different equilibrium points, i.e., different prototype patterns.

To the best of our knowledge, the bottleneck of associative memory based on RNNs is that the capacity of an RNN is limited and different storage tasks need different RNNs because resistance cannot be changed. Furthermore, there are few works about associative memory based on MRNNs. Hence, the contributions of this paper are obtaining a threshold voltage for the memristor by simulation, presenting a novel type of MRNN with an infinite number of sub neural networks, and designing a procedure for associative memory based on the MRNN. Compared with MRNN models in the existing literature, the difference is that every coefficient of the MRNN takes an infinite number of values, not two values. Furthermore, every coefficient can be changed by the external input. So associative memory based on the MRNN promises to ease the problem of storage capacity.

The rest of this paper is organized as follows. Memristor property analysis and some preliminaries are stated in Section 2. Then, some sufficient conditions are given to ensure global stability and multi-stability of the MRNN in Section 3. Next, the design procedure for associative memory based on the MRNN is given in Section 4. To elucidate our results, three simulation examples are presented in Section 5. Finally, the conclusion is drawn in Section 6.

2 Memristive Recurrent Neural Network Model

2.1 Memristor and Its Property

The memristor [15] is defined by a functional relation between the charge $q$ and the magnetic flux $\varphi$, i.e., $g(\varphi, q)=0$. Under the assumption of linear dopant drift, the memristance is given by the following formulas

$ v(t)=\Big(R_{\rm on}\frac{w(t)}{D}+R_{\rm off}\Big(1-\frac{w(t)}{D}\Big)\Big)i(t) $ (1)
$ \frac{dw(t)}{dt}=\mu_V\frac{R_{\rm on}}{D}i(t)\label{B} $ (2)

where $w(t)$, $D$, $i(t)$, $v(t)$, $\mu_V$ are the length of dopant region, the length of memristor, the current, voltage across the device and the average ion mobility, respectively.

The $v$-$i$ curve of the memristor (1), simulated with MATLAB, is shown in Fig. 1.

Figure 1 The curve of $(v(t), i(t))$ under voltage sources with different amplitudes. The applied voltage source is $v(t)=v_0\sin(\omega t)$, $v_0=1.5, 1, 0.15, 0.01$ V, $\omega=2\pi$ rad/s, and the other parameters are $s(t_0)=0.1$, $t_0=0$ s, $R_{\rm on}=100\, \Omega $, $r=160$, $D=10^{-6} \mbox{cm}$, $\mu_V=10^{-10} \mbox{cm}^2/\mbox{sV}$. The four subplots indicate that there exists a threshold voltage for the memristor.
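The curves of Fig. 1 can be reproduced with a few lines of numerical code. The sketch below (Python with NumPy, our own code) integrates the linear dopant-drift model (1)-(2) with the forward Euler method; the function name, the assumption $R_{\rm off}=rR_{\rm on}$, and the clipping of $w$ to $[0, D]$ are ours, not the paper's.

```python
import numpy as np

def simulate_memristor(v0, omega=2*np.pi, R_on=100.0, r=160,
                       D=1e-6, mu_V=1e-10, w0_frac=0.1,
                       t_end=2.0, dt=1e-4):
    """Forward-Euler integration of the linear dopant-drift model
    (1)-(2) under v(t) = v0*sin(omega*t). Assumes R_off = r*R_on."""
    R_off = r * R_on
    w = w0_frac * D                       # initial dopant-region length
    vs, currents, Ms = [], [], []
    for t in np.arange(0.0, t_end, dt):
        v = v0 * np.sin(omega * t)
        M = R_on * w / D + R_off * (1.0 - w / D)   # memristance, eq. (1)
        i = v / M
        w = np.clip(w + mu_V * (R_on / D) * i * dt, 0.0, D)  # eq. (2)
        vs.append(v); currents.append(i); Ms.append(M)
    return np.array(vs), np.array(currents), np.array(Ms)
```

Plotting the current against the voltage for $v_0=1.5$ V traces a pinched hysteresis loop (the state $w$ moves substantially), while for $v_0=0.01$ V the memristance barely changes over a cycle, which is the threshold-like behavior discussed below.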

Fig. 1 shows that a memristor will not change its resistance unless the terminal voltage exceeds a certain threshold value $V_T$, as described in [20], [27]. This can be expressed by the following formula

$ \begin{align}\label{M1} R(w)=&\; \left\{\begin{array}{ll} R(w, u),&u>V_T\\ R_w,& u\leq V_T \end{array}\right. \end{align} $ (3)

where $R_w$ is a constant between $R_{\rm on}$ and $R_{\rm off}$; $R(w, u)$ can be calculated by the following formula [10]

$ \begin{align}\label{CH} T_w=\frac{\Phi_D}{V_AR_{\rm off}^2}\big[(R(w_0))^2-(R(w))^2\big] \end{align} $ (4)

where $\Phi_D={(rD)^2}/[2\mu_V(r-1)]$; $V_A$, $T_w$, $R(w_0)$ and $R(w)$ are the voltage amplitude, the pulse width, and the resistances of the device at the states $w_0$ and $w$, respectively.
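Equation (4) can be solved for the post-pulse memristance, which is how a memristive weight would be programmed by a voltage pulse. The sketch below is our own rearrangement of (4); it assumes $R_{\rm off}=rR_{\rm on}$ and a pulse amplitude above $V_T$, and it clamps the result at $R_{\rm on}$ since the memristance cannot drop below that value.

```python
import math

def next_memristance(R_w0, V_A, T_w, R_on=100.0, r=160,
                     D=1e-6, mu_V=1e-10):
    """Solve eq. (4) for R(w): the memristance after a pulse of
    amplitude V_A and width T_w, starting from memristance R_w0."""
    R_off = r * R_on
    Phi_D = (r * D) ** 2 / (2.0 * mu_V * (r - 1))
    R_sq = R_w0 ** 2 - V_A * T_w * R_off ** 2 / Phi_D
    return math.sqrt(max(R_sq, R_on ** 2))  # memristance stays above R_on
```

A longer or stronger pulse moves the memristance further, so repeated pulses can step the device through a continuum of resistance values.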

Remark 1: Simulation shows that there exists a threshold voltage for the memristor, i.e., the memristance can be changed only by a terminal voltage whose amplitude is greater than the threshold value. This result is consistent with the theoretical analysis in [20]. This property of the memristor reflects the variability of neuron synapses. Furthermore, it makes the memristor suitable for constructing neural networks whose coefficients can be changed according to our needs.

2.2 Model

In this section, we first present the mathematical model of the MRNN, and then give some concepts and lemmas needed to obtain our main results. The MRNN is modeled by the following differential equation system:

$ \begin{align}\label{SM} \frac{dx_i(t)}{dt}\;= &-c_ix_i(t)+\sum\limits_{j=1}^{n}a_{ij}(u_i)f(x_j(t)) \nonumber\\&+ \sum\limits_{j=1}^{n}b_{ij}(u_i)f(x_j(t-\tau(t)))+u_i \end{align} $ (5)

where $i=1, 2, \ldots, n, $ $x(t)=(x_1(t), \ldots, x_n(t))^T \in \mathbb{R}^n$ is the state vector; $A(u_i) = (a_{ij}(u_i))$, $B(u_i)=(b_{ij}(u_i))$ and $C=\mbox{diag}\{c_1, c_2, \ldots, c_n\}$ are connection weight matrices; $a_{ij}, b_{ij}$ are related to external inputs $u= (u_1, \ldots, u_n)^T \in \mathbb{R}^n$; $c_i>0, i=1, 2, \ldots, n$, $\forall~ t\geq t_0, \forall~ i, j\in\{1, 2, \ldots, n\}, $ $0<\tau(t)\leq \tau$ is the time-varying delay; $f$ is a bounded activation function satisfying the following condition

$ \begin{align}\label{eq-1} |f(r_1)-f(r_2)|\leq \mu|r_1-r_2| \end{align} $ (6)

where $r_1$, $r_2$, $\mu \in \mathbb{R}$ and $\mu>0$.

According to circuit theory and the property of the memristor, it follows that there exist constants $\underline{a}_{ij}$, $\overline{a}_{ij}$, $\underline{b}_{ij}$, $\overline{b}_{ij}$ such that

$ \begin{align}\label{inter}\left\{ \begin{array}{ll} \underline{a}_{ij}\leq a_{ij}(u_i)\leq \overline{a}_{ij}\\ \underline{b}_{ij}\leq b_{ij}(u_i)\leq \overline{b}_{ij}. \end{array} \right. \end{align} $ (7)

Remark 2: Compared with the models in [31]-[38], the difference of MRNN (5) is that the coefficients $a_{ij}(u_i)$ and $b_{ij}(u_i)$, $i, j=1, 2, \ldots, n$, are continuous variable functions of the external inputs $u_i$. The memristor has multiple resistance values, as demonstrated by real device experiments and circuit simulation in [47]. Hence, the MRNN can be seen as a neural network with an infinite number of modes because $a_{ij}(u_i)$ and $b_{ij}(u_i)$ take any values in the intervals $[\underline{a}_{ij}, \overline{a}_{ij}]$ and $[\underline{b}_{ij}, \overline{b}_{ij}]$, respectively, while the existing models in [31], [32] have $2^{n^2+1}$ sub modes. So MRNN (5) models the human neural network better.

2.3 Preliminaries

Let $u=(u_1, u_2, \ldots, u_n)$ be the external input and denote $x(t;t_0, \phi, u)$ as the state of MRNN (5) with some $u$ and initial value,

$ \phi(\vartheta) = (\phi _{1} (\vartheta ), \phi_{2} (\vartheta ), \ldots, \phi _{n} (\vartheta ))^T $

where $\phi (\vartheta ) \in C([t_0-\tau, t_0], {\mathcal D})$, ${\mathcal D}\subseteq \mathbb{R}^n.$ Then, $x(t;t_0, \phi, u) $ is continuous and satisfies MRNN (5) and $x(s;t_0, \phi, u)=\phi (s)$ for $s\in [t_0-\tau, t_0].$ For simplicity, let $x(t)$ be the state of MRNN (5).

Definition 1 [48]: The equilibrium point $x^*$ of MRNN (5) is said to be locally exponentially stable in region $\mathcal{D}$ if there exist constants $ \alpha>0, \beta >0$ such that $\forall~ t\geq t_0$

$ \left\|x(t;t_0, \phi, u)-x^*\right\| \leq \beta ||\phi-x^*||_{\infty}\exp\{-\alpha (t-t_{0})\} $

where $x(t;t_0, \phi, u) $ is the solution of MRNN (5) with any external input $u$ and initial condition $\phi (\vartheta ) \in C([t_0-\tau, t_0], \mathcal{D})$. $\mathcal{D}$ is said to be a locally exponentially attractive set of the equilibrium point $x^*.$ When $\mathcal{D}=\mathbb{R}^{n}, $ $x^*$ is said to be globally exponentially stable.

Lemma 1 [49]: Let ${\mathcal D}$ be a bounded and closed set in $\mathbb{R}^{n}, $ and $H$ be a mapping on the complete metric space $({\mathcal D}, ||\cdot||), $ where $\forall~ x, y\in{\mathcal D}, $ $||x-y||=\max_{1\leq i\leq {n}}\{|x_i-y_i|\}$ is the metric on ${\mathcal D}.$ If $H({\mathcal D})\subset{\mathcal D}$ and there exists a constant $\alpha<1$ such that $\forall~ x, y\in {\mathcal D}, $ $||H(x)-H(y)||\leq \alpha ||x-y||, $ then there exists a unique $x^*\in {\mathcal D}$ such that $H(x^*)=x^*.$
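Lemma 1 is the contraction mapping principle under the max norm. The toy sketch below (our own example, not from the paper) iterates a map with contraction constant $\alpha=0.5$ and converges to its unique fixed point.

```python
import numpy as np

# H(x) = 0.5*x + 1 maps D = [0, 4]^2 into [1, 3]^2, a subset of D, and
# satisfies ||H(x) - H(y)|| <= 0.5 ||x - y|| in the max norm, so by
# Lemma 1 it has a unique fixed point in D (here x* = (2, 2)).
def H(x):
    return 0.5 * x + 1.0

x = np.array([0.0, 4.0])
for _ in range(60):          # iteration error shrinks like 0.5**k
    x = H(x)
print(x)                     # -> [2. 2.] up to floating-point accuracy
```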

3 Stability Analysis for MRNN

Stability of the MRNN is the foundation for its application to associative memory, so we discuss global stability and multi-stability of the MRNN in the following subsections. First, we analyze the differences between the MRNN and the traditional RNN. The traditional RNN is described by, for $i=1, 2, \ldots, n$,

$ \begin{align}\label{SM-2} \frac{dx_i(t)}{dt}\;= &-c_ix_i(t)+\sum\limits_{j=1}^{n}a_{ij}f(x_j(t)) \nonumber\\&+ \sum\limits_{j=1}^{n}b_{ij}f(x_j(t-\tau(t)))+u_i \end{align} $ (8)

where $c_i$, $a_{ij}$, $b_{ij}$, $u_i$ have the same meanings as those in (5), but $a_{ij}$ and $b_{ij}$ are constants.

Discussion: According to the above analysis, the coefficients $a_{ij}(u_i)$, $b_{ij}(u_i)$ of MRNN (5) can take any values in $[\underline{a}_{ij}, \overline{a}_{ij}]$ and $[\underline{b}_{ij}, \overline{b}_{ij}]$, while the corresponding coefficients of the RNN cannot be changed. So MRNN (5) is a family of neural networks with infinitely many modes or sub neural networks; hence the MRNN may have an infinite number of globally or locally stable equilibrium points. The coefficients of the interval RNN [50] may be constants in different intervals because the coefficient increments $\triangle a_{ij}$, $\triangle b_{ij}$ are caused by noises and implementation errors. This is different from the MRNN, but the systematic analysis method of [50] can serve as a reference for the stability analysis of the MRNN.

3.1 Global Stability Analysis

This subsection discusses global stability of MRNN (5). Using the comparison principle and existing stability criteria, we derive some sufficient conditions for global stability of (5). The following activation function will be adopted in the rest of the paper

$ \begin{align}\label{f} f (r)=&\; \left\{\begin{array}{ll} 4k-3,&r\in \big[4k-3, +\infty\big)\\ 2r-(4k-3), &r\in \big[4k-5, 4k-3\big)\\ \ldots, \\ 2r-5, &r\in \big[3, 5\big)\\ 1, &r\in \big[1, 3\big)\\ r, &r\in \big(-1, 1\big)\\ -1, &r\in \big(-3, -1\big]\\ 2r+5, &r\in \big(-5, -3\big]\\ \ldots, \\ 2r+4k-3, &r\in \big(3-4k, 5-4k\big]\\ 3-4k,&r\in \big(-\infty, 3-4k\big].\end{array}\right. \end{align} $ (9)

Obviously, $|f(r_1)-f(r_2)|\leq 2|r_1-r_2|$ for $r_1, r_2\in \mathbb{R}$, i.e., $f$ satisfies (6) with $\mu=2$ (the connecting segments of (9) have slope 2). In order to derive our result, the following lemma is needed.
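For concreteness, the staircase activation (9) can be implemented as follows (a vectorized sketch in Python; the implementation strategy and function signature are ours). It plateaus at $\pm 1, \pm 5, \ldots, \pm(4k-3)$ and joins adjacent plateaus with segments of slope 2.

```python
import numpy as np

def f(r, k):
    """Staircase activation (9): output plateaus at the 2k levels
    +/-1, +/-5, ..., +/-(4k-3), joined by segments of slope 2
    (slope 1 on the central segment (-1, 1))."""
    r = np.asarray(r, dtype=float)
    out = np.clip(r, -1.0, 1.0)              # central segment, first plateaus
    for m in range(1, k):
        lo, base = 4 * m - 1, 4 * m - 3      # m-th ramp lives on [4m-1, 4m+1)
        ramp = np.clip(2.0 * (np.abs(r) - lo) + base, base, base + 4)
        out = np.where(np.abs(r) >= lo, np.sign(r) * ramp, out)
    return out
```

For $k=2$ this reproduces the pieces of (9): $f(2)=1$ on the plateau $[1,3)$, $f(4)=3$ on the ramp $2r-5$, and $f$ saturates at $5$ beyond $r=5$.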

Lemma 2: If the following three differential systems

$ \begin{equation}\label{sys-1} \dot{y}(t)=g_1(y(t)) \end{equation} $ (10)
$ \begin{equation}\label{sys-2} \dot{y}(t)=g_2(y(t)) \end{equation} $ (11)
$ \begin{equation}\label{sys-3} \dot{y}(t)=g_3(y(t)) \end{equation} $ (12)

have one common equilibrium point $y^{\star}=0$, i.e., $g_1(0)=g_2(0)=g_3(0)=0$, and satisfy $g_1(y)\leq g_2(y) \leq g_3(y)$, then system (11) is globally exponentially stable provided systems (10) and (12) are globally exponentially stable.

Proof: Take the same initial value $y(t_0)=y_0$ for the three systems, and let $y_1(t)$, $y_2(t)$, $y_3(t)$ denote the corresponding solutions of (10), (11) and (12), respectively. Then

$ \begin{align} &\;g_1(y)\leq g_2(y) \leq g_3(y)\nonumber\\ &\;\int_{t_0}^{t}g_1(y(s))\, ds\leq \int_{t_0}^{t}g_2(y(s))\, ds \leq \int_{t_0}^{t}g_3(y(s))\, ds\nonumber\\ &\;y_1(t)\leq y_2(t)\leq y_3(t). \end{align} $ (13)

Hence, $|y_1(t)|\leq |y_2(t)|\leq |y_3(t)|$ or $|y_3(t)|\leq |y_2(t)|\leq |y_1(t)|$. Since (10) and (12) are globally exponentially stable, there exist constants $\alpha_1, \alpha_3, \beta_1, \beta_3$ and initial values $\phi_1, \phi_3$ satisfying

$ \begin{align*} &|y_1(t)|\leq \beta_1\|\phi_1\|\exp\{-\alpha_1(t-t_0)\}\\ &|y_3(t)|\leq \beta_3\|\phi_3\|\exp\{-\alpha_3(t-t_0)\}. \end{align*} $

So there must exist $\alpha_2, \beta_2$ and an initial value $\phi_2$ such that the inequality

$ \begin{align*} |y_2(t)|\leq \beta_2\|\phi_2\|\exp\{-\alpha_2(t-t_0)\} \end{align*} $

is valid, i.e., (11) is globally exponentially stable.

Because the external inputs $u_i$, $i=1, 2, \ldots, n$, are just used to change the memristance, we assume that all sub neural networks have the same external inputs $u_i$, $i=1, 2, \ldots, n$, in the following discussion.

Lemma 3 [48]: If for $c_i$, $a_{ij}$ and $b_{ij}$, $\forall i, j\in \{1, 2, \ldots, n\}$, $C-|A|-|B|$ is a nonsingular $M$-matrix with $|A|=(|\mu_ja_{ij}|)_{n\times n}$ and $|B|=(|\omega_jb_{ij}|)_{n\times n}$, where $\mu_j, \omega_j$ are positive constants for $j=1, 2, \ldots, n$, then the equilibrium point of the corresponding RNN (8) is globally exponentially stable.

By (5) and (7), we have $NN_1$

$ \begin{align}\label{SM-1} \frac{dx_i(t)}{dt}\;= &-c_ix_i(t)+\sum\limits_{j=1}^{n}\overline{a}_{ij}f(x_j(t)) \nonumber\\&+ \sum\limits_{j=1}^{n}\overline{b}_{ij}f(x_j(t-\tau(t)))+u_i \end{align} $ (14)

and $NN_2$

$ \begin{align} \frac{dx_i(t)}{dt}\;= &-c_ix_i(t)+\sum\limits_{j=1}^{n}{\underline{a}_{ij}}f(x_j(t)) \nonumber\\&+ \sum\limits_{j=1}^{n}{\underline{b}_{ij}}f(x_j(t-\tau(t)))+u_i \end{align} $ (15)

for $i=1, 2, \ldots, n$.

Theorem 1: If the coefficients of neural networks (14) and (15) satisfy that $C-|A|-|B|$ is a nonsingular $M$-matrix with $|A|=(|a_{ij}|)_{n\times n}$ and $|B|=(|b_{ij}|)_{n\times n}$, then (5) is globally exponentially stable for all $a_{ij}(u_i)\in[\underline{a}_{ij}, \overline{a}_{ij}]$, $b_{ij}(u_i)\in[\underline{b}_{ij}, \overline{b}_{ij}]$ and bounded external inputs $u_i$, $i, j=1, 2, \ldots, n$.

Proof: Because the activation function $f(r)$ satisfies the Lipschitz condition and the $u_i$, $i=1, 2, \ldots, n$, are bounded, there must exist at least one equilibrium of (5), by the Schauder fixed point theorem, for any $a_{ij}(u_i)\in[\underline{a}_{ij}, \overline{a}_{ij}]$, $b_{ij}(u_i)\in[\underline{b}_{ij}, \overline{b}_{ij}]$ and bounded external inputs $u_i$, $i, j=1, 2, \ldots, n$. Denote by $x^{\star}$, $\overline{x}^{\star}$, $\underline{x}^{\star}$ the equilibrium points of (5), (14), (15), respectively.

Let

$ \begin{align*} &\;z(t)=(x_1(t)-x_1^{\star}, x_2(t)-x_2^{\star}, \ldots, x_n(t)-x_n^{\star})\\ &\;\overline{f}(z_i(t))=f(z_i(t)+x_i^{\star})-f(x_i^{\star}). \end{align*} $

Hence,

$ \begin{align} \dot{z}_i(t)= &-c_iz_i(t)+\sum\limits_{j=1}^na_{ij}(u_i)\overline{f}(z_j(t))\nonumber\\ &+\sum\limits_{j=1}^nb_{ij}(u_i)\overline{f}(z_j(t-\tau(t))). \end{align} $ (16)

Let $V(t)=(V_1(t), V_2(t), \ldots, V_n(t))$ with $V_i(t)=|z_i(t)|$, then

$ \begin{align}\label{V-1} D^{+}V_i(t)\leq&-c_iV_i(t)+\sum\limits_{j=1}^n|a_{ij}(u_i)|V_j(t)\nonumber\\ &+\sum\limits_{j=1}^n|b_{ij}(u_i)|V_j(t-\tau(t)). \end{align} $ (17)

Let

$ \begin{align} \Psi(t)=&-c_iV_i(t)+\sum\limits_{j=1}^n|a_{ij}(u_i)|V_j(t)\nonumber\\ &+\sum\limits_{j=1}^n|b_{ij}(u_i)|V_j(t-\tau(t)). \end{align} $ (18)

Since

$ \begin{align*} &\;\underline{a}_{ij}\leq a_{ij}(u_i)\leq \overline{a}_{ij}, &\;\underline{b}_{ij}\leq b_{ij}(u_i)\leq\overline{b}_{ij} \end{align*} $

then

$ \begin{align*} &\;|\underline{a}_{ij}|\leq |a_{ij}(u_i)|\leq |\overline{a}_{ij}|, &\;|\underline{b}_{ij}|\leq |b_{ij}(u_i)|\leq|\overline{b}_{ij}| \end{align*} $

or

$ \begin{align*} &\;|\overline{a}_{ij}|\leq |a_{ij}(u_i)|\leq |\underline{a}_{ij}|, &\;|\overline{b}_{ij}|\leq |b_{ij}(u_i)|\leq|\underline{b}_{ij}|. \end{align*} $

So

$ \begin{align}\label{neq-1} \Psi(t)\leq&-c_iV_i(t)+\sum\limits_{j=1}^n|\overline{a}_{ij}|V_j(t)\nonumber\\ &+\sum\limits_{j=1}^n|\overline{b}_{ij}|V_j(t-\tau(t)) \end{align} $ (19)

or

$ \begin{align}\label{neq-2} \Psi(t)\leq &-c_iV_i(t)+\sum\limits_{j=1}^n|\underline{a}_{ij}|V_j(t)\nonumber\\ &+\sum\limits_{j=1}^n|\underline{b}_{ij}|V_j(t-\tau(t)). \end{align} $ (20)

According to the condition of Theorem $1$, (17), (19), (20) and Lemma $3$, there must exist positive constants $\alpha$ and $\beta$ satisfying $V_i(t)\leq \alpha\exp\{-\beta (t-t_{0})\}$. Hence the conclusion of the theorem is valid.

Remark 3: When $\overline{a}_{ij}=\underline{a}_{ij}$ and $\overline{b}_{ij}=\underline{b}_{ij}$ for $i, j=1, 2, \ldots, n$, the result in [48] can be obtained from Theorem $1$. So we generalize the result of [48] to the global stability of MRNN (5) with an infinite number of sub neural networks. Compared with the existing literature, the main merit is that MRNN (5) has infinitely many globally exponentially stable equilibrium points, one for each choice of coefficients. The systematic method in [43] can be used to derive further sufficient conditions for global stability of (5) by virtue of the many global stability criteria in the existing literature.
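The Theorem 1 test is easy to automate. The sketch below uses the standard characterization that a Z-matrix is a nonsingular $M$-matrix iff all its leading principal minors are positive; the interval bounds are hypothetical values chosen to satisfy the condition, not the ones from the examples of Section 5.

```python
import numpy as np

def is_nonsingular_M_matrix(M, tol=1e-12):
    """A Z-matrix (non-positive off-diagonal entries) is a nonsingular
    M-matrix iff all of its leading principal minors are positive."""
    n = M.shape[0]
    off = M - np.diag(np.diag(M))
    if np.any(off > tol):                 # must be a Z-matrix
        return False
    return all(np.linalg.det(M[:k, :k]) > tol for k in range(1, n + 1))

# Hypothetical interval bounds for a 2-neuron MRNN: take the entrywise
# worst-case magnitudes of |a_ij(u_i)| and |b_ij(u_i)| over the intervals.
C = np.diag([5.0, 5.0])
A_abs = np.array([[2.0, 0.5], [0.5, 2.0]])
B_abs = np.array([[1.0, 0.25], [0.25, 1.0]])
print(is_nonsingular_M_matrix(C - A_abs - B_abs))  # True: Theorem 1 applies
```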

3.2 Multi-stability of MRNN

Multi-stability of an RNN means that the RNN has coexisting multiple attractors. Memory patterns can be stored by these attractors, so the memory capacity of an RNN depends on the number of attractors. Another factor affecting memory is the activation function $f(r)$. Zeng et al. have derived some sufficient conditions for multi-stability of the $n$-dimensional RNN with the activation function $f(r)=(|r+1|-|r-1|)/2$, which has $3^{n}$ equilibrium points, $2^{n}$ of which are locally exponentially stable. Zeng et al. [49] then generalized this work to the $n$-dimensional RNN with the activation function (9), deriving that such an RNN has $(4k-1)^n$ equilibrium points, $(2k)^{n}$ of which are locally exponentially stable in $\bar{\Omega}_k$, where

$ \begin{align}\label{O1} \Omega_k=&\; \Big\{\prod\limits_{i=1}^n \ell^{(i)}, ~~\ell^{(i)}=\big(-\infty, -(4k-3)\big]~{\rm or}\nonumber\\ &\; ~~~ \big(-(4k-3), -(4k-5)\big]~{\rm or}~\ldots~{\rm or}~ \big(-3, -1\big]~{\rm or}\nonumber\\ &\; ~~~~ \big(-1, 1\big)~{\rm or}~\big[1, 3\big)~{\rm or}~\ldots~{\rm or}~\nonumber\\ &\; ~~~~ \big[4k-5, 4k-3\big)~{\rm or}~\big[4k-3, +\infty\big)\Big\}. \end{align} $ (21)

But the number of equilibrium points and output patterns of an RNN with these two kinds of activation functions is limited. Hence, we discuss multi-stability of the MRNN with the activation function (9).

Lemma 4 [49]: For the given integer $k\ge 1, $ if $\forall~ i, j\in \{1, 2, \ldots, n\}, $ the following inequalities are valid for coefficients $c_i$, $a_{ij}$, $b_{ij}$ and external inputs $u_i$ of RNN with the activation function (9)

$ \begin{align} &\;a_{ii}+b_{ii} -(4k-3)\sum\limits_{j=1, j\neq i}^n\Big(|a_{ij}+b_{ij}|\Big)-|u_i|>c_i\label{th2-1} \end{align} $ (22)
$ \begin{align} &\;a_{ii}+b_{ii} +\sum\limits_{j=1, j\neq i}^n\Big(|a_{ij}+b_{ij}|\Big)+\frac{|u_i|}{(4k-3)}\nonumber\\ &\;~~~~~~~~~~~~~~~~<\Big(1+\frac{2}{(4k-3)}\Big)c_i,\label{th2-2} \end{align} $ (23)

then RNN with the activation function (9) has $(4k-1)^n$ equilibrium points and $(2k)^n$ of them are locally exponentially stable.

Let $\breve{a}_{ij}=\max\{|\underline{a}_{ij}|, \overline{a}_{ij}\}$, $\breve{b}_{ij}=\max\{|\underline{b}_{ij}|, \overline{b}_{ij}\}$. Then, we have the following results.

Theorem 2: If the following inequalities are valid

$ \begin{align} &\;\underline{a}_{ii}+\underline{b}_{ii} -(4k-3)\sum\limits_{j=1, j\neq i}^n\Big(|\breve{a}_{ij}|+|\breve{b}_{ij}|\Big)-|u_i|>c_i \end{align} $ (24)
$ \begin{align} &\;\overline{a}_{ii}+\overline{b}_{ii} +\sum\limits_{j=1, j\neq i}^n\Big(|\breve{a}_{ij}|+|\breve{b}_{ij}|\Big)+\frac{|u_i|}{(4k-3)}\nonumber\\ &\;~~~~~~~~~~~~~~~~<(1+\frac{2}{(4k-3)})c_i \end{align} $ (25)

then for $c_i$, $a_{ij}\in [\underline{a}_{ij}, \overline{a}_{ij}]$ and $b_{ij}\in [\underline{b}_{ij}, \overline{b}_{ij}]$, $\forall i, j\in \{1, 2, \ldots, n\}$, the corresponding MRNN (5) has $(4k-1)^{n}$ equilibria located in $\Omega_k$, $(2k)^n$ of them are locally exponentially stable.

Proof: In order to prove multi-stability of (5), it suffices to verify that conditions (22) and (23) hold. Since $\underline{a}_{ii}\leq a_{ii}(u_i)\leq \overline{a}_{ii}$ and $\underline{b}_{ii}\leq b_{ii}(u_i)\leq \overline{b}_{ii}$, we have

$ \begin{align*} &a_{ii}(u_i)+b_{ii}(u_i)\\&\;\quad-(4k-3)\sum\limits_{j=1, j\neq i}^n\Big(|a_{ij}(u_i)+b_{ij}(u_i)|\Big)-|u_i|\\ &\;\geq \underline{a}_{ii}+\underline{b}_{ii}-(4k-3)\sum\limits_{j=1, j\neq i}^n\Big(|a_{ij}(u_i)+b_{ij}(u_i)|\Big)-|u_i|\\ &\;\geq \underline{a}_{ii}+\underline{b}_{ii} -(4k-3)\sum\limits_{j=1, j\neq i}^n\Big(|\breve{a}_{ij}|+|\breve{b}_{ij}|\Big)-|u_i|\\ &\;>c_i. \end{align*} $

Similarly,

$ \begin{align*} &\;(1+\frac{2}{(4k-3)})c_i\\ &\;>\overline{a}_{ii}+\overline{b}_{ii} +\sum\limits_{j=1, j\neq i}^n\Big(|\breve{a}_{ij}|+|\breve{b}_{ij}|\Big)+\frac{|u_i|}{(4k-3)}\\ &\;\geq a_{ii}(u_i)+b_{ii}(u_i)+\sum\limits_{j=1, j\neq i}^n\Big(|\breve{a}_{ij}|+|\breve{b}_{ij}|\Big)+\frac{|u_i|}{(4k-3)}\\ &\;\geq a_{ii}(u_i)+b_{ii}(u_i)+\sum\limits_{j=1, j\neq i}^n\Big(|a_{ij}(u_i)|+|b_{ij}(u_i)|\Big) \\ &\;\quad+\frac{|u_i|}{(4k-3)}. \end{align*} $

Hence, (22) and (23) are valid for $c_i$, $a_{ij}(u_i), b_{ij}(u_i)$, $i, j=1, 2, \ldots, n$. By Lemma $4$, the conclusion of Theorem $2$ is valid.

Remark 4: In fact, the left-hand sides of (24) and (25) are the minimum and maximum, over the coefficient intervals, of the left-hand sides of (22) and (23), respectively. Hence, we generalize the systematic method of [50], [51] to analyzing multi-stability of the MRNN. Compared with the results in [49], the conditions are more conservative, but the MRNN has an infinite number of sub neural networks, i.e., the family MRNN (5) has infinitely many sets of $(2k)^n$ locally exponentially stable equilibrium points. By virtue of the existing results for multi-stability of RNNs, many further sufficient conditions for multi-stability of MRNN (5) can be obtained.
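Conditions (24) and (25) are likewise mechanical to check. The sketch below is our own helper; the interval bounds are hypothetical numbers chosen to satisfy both inequalities with $k=1$, not values taken from the paper's examples.

```python
import numpy as np

def multistable(c, u, a_lo, a_hi, b_lo, b_hi, k):
    """Check inequalities (24) and (25) of Theorem 2 for interval
    bounds a_lo <= a_ij(u_i) <= a_hi and b_lo <= b_ij(u_i) <= b_hi."""
    a_br = np.maximum(np.abs(a_lo), np.abs(a_hi))   # \breve{a}_{ij}
    b_br = np.maximum(np.abs(b_lo), np.abs(b_hi))   # \breve{b}_{ij}
    for i in range(len(c)):
        off = sum(a_br[i, j] + b_br[i, j]
                  for j in range(len(c)) if j != i)
        cond24 = a_lo[i, i] + b_lo[i, i] - (4*k - 3)*off - abs(u[i]) > c[i]
        cond25 = (a_hi[i, i] + b_hi[i, i] + off + abs(u[i])/(4*k - 3)
                  < (1 + 2.0/(4*k - 3)) * c[i])
        if not (cond24 and cond25):
            return False
    return True

c = [1.0, 1.0]; u = [0.05, 0.05]
a_lo = np.array([[2.2, -0.02], [-0.02, 2.2]])
a_hi = np.array([[2.3,  0.02], [ 0.02, 2.3]])
b_lo = np.array([[0.2, -0.01], [-0.01, 0.2]])
b_hi = np.array([[0.25, 0.01], [ 0.01, 0.25]])
print(multistable(c, u, a_lo, a_hi, b_lo, b_hi, k=1))  # True
```

When the check passes with $k=1$ and $n=2$, Theorem 2 guarantees $3^2$ equilibria for every sub neural network, $2^2$ of them locally exponentially stable.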

4 Associative Memory Synthesis

Based on the above analysis, we discuss the associative memory design method based on MRNN (5). In the classical setting, memory patterns are described by bipolar values $\{-1, 1\}$ and associative memory is implemented by an RNN circuit, so the activation function is taken as $f(r)=(|r+1|-|r-1|)/2$, $r\in \mathbb{R}$, and the weight values are realized by linear resistors. Such an associative memory can only store bitmaps, and its storage capacity is limited. Therefore, our associative memory synthesis is based on the MRNN with the activation function (9); it is able to memorize grayscale images and has no fixed storage limit. The key point of associative memory synthesis is the computation of the weight values. So we first describe the synthesis problem, and then present our design method based on Zeng and Wang's work [43]. The activation function is $F(r)=0$ for $r<0$ and $F(r)=f(r)$ for $r\geq 0$, where $f(r)$ is defined in (9). The purpose is to make the designed neural network able to memorize grayscale images.

Synthesis Problem: There are $p$ memory patterns denoted by vectors $\alpha^{1}, \alpha^{2}, \ldots, \alpha^p$, $\alpha^i \in \{0, 1, 3, 5, \ldots, 4k-3\}^n$, $i=1, 2, \ldots, p$. Compute coefficients $c_i$, $a_{ij}$, $b_{ij}$ and $u_i$ such that $\alpha^{1}, \alpha^{2}, \ldots, \alpha^p$ are stable memory vectors of MRNN (5).

Design procedure:

Step 1: Use vectors $\alpha^{1}, \alpha^{2}, \ldots, \alpha^p$, $\alpha^i \in \{0, 1, 3, 5, \ldots, 4k-3\}^n$ ($n$ is the dimension of the MRNN) to represent the desired memory patterns. If $p\leq (2k)^n$, then go to Step 2 and compute the coefficients $c_i$, $a_{ij}$, $b_{ij}$ and $u_i$. If $p=q(2k)^n+\gamma$, then divide $\alpha^{1}, \alpha^{2}, \ldots, \alpha^p$ into $q+1$ groups, go to Step 2, and compute the coefficients for each group.

Step 2: For the desired memory vectors, do the following:

1) Compute $-\tilde{U}_lS(l)^T(S(l)S(l)^T)^{-1}=(t_{ij})_{n\times n}$ ($l\leq n$);

2) Take $\sigma_i>1$, $i=1, 2, \ldots, n$, and choose $a_{ij}$, $b_{ij}$ satisfying $a_{ii}+b_{ii}-\sigma_i=t_{ii}$ and $a_{ij}+b_{ij}=t_{ij}$ ($i\neq j$), where $\tilde{U}_l$, $S(l)$ are defined in [41].

Step 3: If $p\leq(2k)^n$, compute the memristances $M_{ij}$ according to $a_{ij}$, $b_{ij}$; if $p=q(2k)^n+\gamma$, compute the memristances $M_{ij}$ according to $|a_{ij}|_{\max}$, $|b_{ij}|_{\max}$, where

$ \begin{align*} &|a_{ij}|_{\max}=\max\limits_{1\leq \delta \leq q+1}|a_{ij}^{\delta}|\\ &|b_{ij}|_{\max}=\max\limits_{1\leq \delta \leq q+1}|b_{ij}^{\delta}|. \end{align*} $
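Step 2.1 is a standard linear-algebra computation. Below is a sketch of it in Python (our own code; the small matrices are hypothetical stand-ins, since $\tilde{U}_l$ and $S(l)$ are constructed as in [41]).

```python
import numpy as np

def synthesis_weights(U_tilde, S):
    """Step 2.1: T = -U_tilde S^T (S S^T)^{-1}, computed by solving a
    linear system instead of forming the inverse explicitly."""
    return -np.linalg.solve(S @ S.T, S @ U_tilde.T).T

# Hypothetical 3x3 stand-ins for S(l) and U_tilde; when S is square and
# invertible the result satisfies T @ S = -U_tilde exactly.
S = np.array([[1.0, 5.0, 1.0],
              [5.0, 1.0, 1.0],
              [1.0, 1.0, 5.0]])
U_tilde = np.eye(3)
T = synthesis_weights(U_tilde, S)
assert np.allclose(T @ S, -U_tilde)
```

Solving $S S^T$ directly rather than inverting it is numerically preferable when $S(l)$ is ill-conditioned.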

Remark 5: Compared with the work in [41], we do not require that $p$, the number of desired memory vectors, be less than or equal to $(2k)^n$; hence, we generalize Zeng and Wang's work [43]. And we choose the activation function (9) in order to make the designed associative memory MRNN able to memorize grayscale images, not only bitmaps. This is one difference from the existing work. Another merit is that the designed MRNN has an infinite number of equilibrium points, i.e., the MRNN can be used to implement a large-capacity associative memory. For example, the RNN with $f(r)=(|r+1|-|r-1|)/2$ only has $2^n$ memory patterns in $\{-1, 1\}^n$, and the RNN with (9) only has $4^n$ memory patterns in $\{-5, -1, 1, 5\}^n$ when $k=2$. The MRNN breaks this bottleneck because it has variable coefficients and hence an unlimited set of memory patterns.

5 Illustrative Examples

Example 1: Consider the following MRNN with the activation function (9), where $k=1$ and $n=2$:

$ \begin{eqnarray}\label{ex1} \left\{\begin{array}{l} \dot{x}_1(t)=-c_1{x}_1(t)+a_{11}f({x}_1(t))+a_{12}f({x}_2(t))\\ ~~~~~~~~~~+b_{11}f({x}_1(t-0.1))\\ ~~~~~~~~~~+b_{12}f({x}_2(t-0.1))+0.8\\ \dot{x}_2(t)=-c_2{x}_2(t)+a_{21}f({x}_1(t))+a_{22}f({x}_2(t))\\ ~~~~~~~~~~+b_{21}f({x}_1(t-0.2))\\ ~~~~~~~~~~+b_{22}f({x}_2(t-0.2))+0.4 \end{array}\right. \end{eqnarray} $ (26)

where

$ \begin{align*} &-3\leq a_{11}\leq -2, ~ \frac{1}{3}\leq a_{12}\leq \frac{1}{2}\\ &-\frac{1}{3}\leq a_{21}\leq \frac{1}{2}, ~ -3\leq a_{22}\leq -2\\ &0.9\leq b_{11}\leq 1, ~ -\frac{1}{4}\leq b_{12}\leq \frac{1}{4}\\ &\frac{1}{4}\leq b_{21}\leq \frac{1}{2}, ~ 0.6\leq b_{22}\leq 1\\ &c_1=c_2=4.8.\\ \end{align*} $

According to Theorem 1, every sub neural network of MRNN (26) is globally exponentially stable. Let $a_{11}=a_{22}=-3$, $a_{12}=a_{21}={1}/{2}$, $b_{11}=b_{22}=1$, $b_{12}={1}/{4}$, $b_{21}={1}/{2}$, and simulate with 50 initial values. The dynamic characteristics are shown in Figs. 2-4.

Figure 2 Transient behaviors of $x_{1}(t)$ of MRNN (26).
Figure 3 Transient behaviors of $x_{2}(t)$ of MRNN (26).
Figure 4 Phase plot of $x_{1}(t)$ and $x_{2}(t)$ of MRNN (26).
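Figs. 2-4 can be reproduced with a simple Euler scheme. The sketch below is our own code; it uses the coefficient choice above together with $c_1=c_2=4.8$ and constant initial functions on $[-0.2, 0]$, and for $k=1$ the activation (9) reduces to a clipping function.

```python
import numpy as np

def f(r):                                  # activation (9) with k = 1
    return np.clip(r, -1.0, 1.0)

def simulate_mrnn26(phi, T=10.0, dt=1e-3):
    """Euler simulation of (26) with a11=a22=-3, a12=a21=1/2,
    b11=b22=1, b12=1/4, b21=1/2, c1=c2=4.8, delays 0.1 and 0.2,
    and a constant initial function phi on [-0.2, 0]."""
    A = np.array([[-3.0, 0.5], [0.5, -3.0]])
    B = np.array([[1.0, 0.25], [0.5, 1.0]])
    u = np.array([0.8, 0.4])
    c = 4.8
    d1, d2 = int(0.1/dt), int(0.2/dt)      # delay steps
    steps = int(T/dt)
    x = np.tile(np.asarray(phi, float), (steps + d2 + 1, 1))
    for t in range(d2, steps + d2):
        delayed = np.array([
            B[0, 0]*f(x[t-d1, 0]) + B[0, 1]*f(x[t-d1, 1]),   # row 1: tau = 0.1
            B[1, 0]*f(x[t-d2, 0]) + B[1, 1]*f(x[t-d2, 1])])  # row 2: tau = 0.2
        x[t+1] = x[t] + dt*(-c*x[t] + A @ f(x[t]) + delayed + u)
    return x[d2:]
```

In our runs, trajectories started from different initial values settle at a common point, consistent with the single equilibrium visible in the phase plot of Fig. 4.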

Example 2: Consider an MRNN with the activation function (9), where $k=2$ and $n=2$:

$ \begin{eqnarray}\label{ex2} \left\{\begin{array}{l} \dot{x}_1(t)=-{x}_1(t)+a_{11}f({x}_1(t))+a_{12}f({x}_2(t))\\ ~~~~~~~~~~+b_{11}f({x}_1(t-0.1))\\ ~~~~~~~~~~+b_{12}f({x}_2(t-0.1))+0.05\\ \dot{x}_2(t)=-{x}_2(t)+a_{21}f({x}_1(t))+a_{22}f({x}_2(t))\\ ~~~~~~~~~~+b_{21}f({x}_1(t-0.2))\\ ~~~~~~~~~~+b_{22}f({x}_2(t-0.2))-0.04 \end{array}\right. \end{eqnarray} $ (27)

where

$ \begin{align*} &\;0.5\leq a_{11}\leq 0.7, ~0.01\leq a_{12}\leq 0.02\\ &\;-0.02\leq a_{21}\leq -0.01, ~0.4\leq a_{22}\leq 0.6\\ &\;0.4\leq b_{11}\leq 0.5, ~-0.02\leq b_{12}\leq -0.01\\ &\;0.005\leq b_{21}\leq 0.01, ~0.6\leq b_{22}\leq 0.7. \end{align*} $

According to Theorem $2$, every sub neural network of MRNN (27) has $7^2$ isolated equilibrium points and $4^2$ of them are locally exponentially stable. Take the maximum values for $a_{ij}$, $b_{ij}$, $i, j=1, 2$, and simulate with $50$ initial values. The dynamic characteristics are shown in Fig. 5.

Figure 5 Transient behaviors of $x_{1}(t)$ and $x_{2}(t)$ of MRNN (27)

Example 3: The same example has been used by Lu and Liu [52] and Zeng and Wang [43] for associative memory synthesis. The desired memory patterns are the three letters "I, L, U" and the number "7", shown as grayscale images in Fig. 6.

Figure 6 Three letters "I, L, U" and number "7" presented as grayscale images.

These four desired patterns can be denoted by memory vectors

$ \begin{align*} &\;\alpha^1=(1, 1, 1, 1, 5, 5, 5, 5, 1, 1, 1, 1)\\ &\;\alpha^2=(5, 5, 5, 5, 1, 1, 1, 5, 1, 1, 1, 5)\\ &\;\alpha^3=(5, 5, 5, 5, 1, 1, 1, 5, 5, 5, 5, 5)\\ &\;\alpha^4=(5, 1, 1, 1, 5, 1, 1, 5, 5, 5, 5, 5). \end{align*} $

The objective is to design a $12$-dimensional MRNN with $\alpha^1, \alpha^2, \alpha^3, \alpha^4$ as stable memory vectors. Obviously, the number of stable memory vectors is less than $(2k)^n$ ($k=2$, $n=12$). For $l=12$, we add eight vectors $\alpha^5, \ldots, \alpha^{12}$ such that

$ \begin{equation*} S(12)=\left[ \begin{array}{cccccccccccc} 1 & 5 & 5 & 5 & 1 & 5 & 5 & 5 & 1 & 5 & 5 & 5\\ 1 & 5 & 5 & 1 & 1 & 1 & 5 & 1 & 5 & 5 & 5 & 5\\ 1 & 5 & 5 & 1 & 1 & 5 & 1 & 5 & 5 & 1 & 5 & 5\\ 1 & 5 & 5 & 1 & 1 & 5 & 5 & 1 & 5 & 5 & 1 & 5\\ 5 & 1 & 1 & 5 & 5 & 1 & 1 & 1 & 1 & 1& 1 & 1\\ 1 & 5 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 5 & 1 & 5 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & 5 & 1 & 5 & 5 & 5 & 1 & 5 & 5 & 1 & 5 & 1\\ 5 & 1 & 5 & 1 & 5 & 1 & 1 & 1 & 5 & 1 & 1 & 1\\ 1 & 5 & 1 & 5 & 1 & 5 & 1 & 1 & 1 & 5 & 1 & 1\\ 5 &1 & 5 & 1 & 5 & 1 & 1 &1 & 1 & 1 & 5 & 1\\ 1 & 5 & 1 & 5 & 1 & 5 & 1 & 1& 1 & 1& 5 & 1 \\ \end{array} \right] \end{equation*} $

is an invertible matrix. Choose external inputs $u_i=1.825$, $i=1, 2, \ldots, 12$, and $\lambda_i^{l}=1.5$ ($i=1, 2, \ldots, 12$; $l=1, 2, 3, 4$), $\lambda_i^{l}=0.1$ ($i=1, 2, \ldots, 12$; $l=5, 6, \ldots, 12$). The role of $\lambda_i^{l}$ is to place these memory vectors in the stable region $\Omega_k$. Following the associative memory synthesis procedure, we obtain

$ \begin{equation*} W=\left[ {\begin{array}{*{20}{c}} 0.5020& 0.2342 & 1.1002 & 0.0348 \\ -0.6667& 0.6667 &-0.8889 &0.4444 \\ 0.8354 &-0.0992 & 1.9891 &-0.4097 \\ -0.3333 & 0.3333 &-0.6667 & 0.6667 \\ 5.0000 &-5.0000 &10.0000 & 0.0000 \\ 5.0000 &-5.0000 &10.0000 &0.0000 \\ -12.5307 & 1.4877& -6.5031 &-0.5215 \\ -7.5307 &-3.5123&-16.5031& -0.5215 \\ 7.5307 & 3.5123& 13.1697 &-2.8119 \\ -10.0000 &10.0000&-13.3333 & 6.6667 \\ 0.0000 &-0.0000& -6.6667 & 3.3333 \\ 7.5307 & 3.5123&19.8364 &-6.1452 \\ \end{array}} \right.\\ \begin{array}{*{20}{c}} 0.1227& -0.3640 & 0.9202 & 0.5215 \\ -0.2222 & 0.4444& -0.2222 & 0 \\ 0.3449& -0.1418 & 1.1425 & 0.5215 \\ -0.0000& 0.0000& 0& 0.0000 \\ 0.0000 & 0.0000 & 10.0000 & 0 \\ 0.0000& -10.0000& 10.0000 & 0\\ -1.8405 & 5.4601& -3.8037& 12.1779 \\ 8.1595 & 5.4601 &-13.8037 & 2.1779 \\ -1.4928& -8.7935& 10.4703& -2.1779\\ -3.3333 & 6.6667 & -3.3333 & 0.0000\\ 3.3333 & 3.3333 & -6.6667& -0.0000\\ -4.8262 &-12.1268 & 17.1370 & -2.1779 \\ \end{array}\\ \left. {\begin{array}{*{20}{c}} 0.1227 & 0.9202& 0.2342& 0.7209\\ 0.0000 &-0.4444& 0.4444& -0.0000\\ 0.7894 & 1.3647& 0.1230& 1.0542\\ -0.6667& -0.0000& 0.3333& -0.3333\\ 0.0000& 10.0000& -5.0000& 5.0000\\ 0.0000& 10.0000& 5.0000& 5.0000\\ -1.8405& -3.8037& 1.4877& -5.8129\\ 8.1595&-13.8037& -3.5123&-10.8129\\ 1.8405& 7.1370& 0.1789& 10.8129\\ 0.0000& 3.3333& -3.3333& -0.0000\\ 0.0000& -3.3333& 3.3333& 0.0000\\ -8.1595 &10.4703& 6.8456& 10.8129\\ \end{array}} \right] \end{equation*} $

where $W=(t_{ij})$ with $t_{ij}=a_{ij}+b_{ij}$. By Theorem $2$, it is easy to verify that $\alpha^1, \alpha^2, \alpha^3, \alpha^4$ are stable memory vectors. Taking $c_i=1$, $i=1, 2, \ldots, n$, $a_{ij}=b_{ij}$, and $u_i=0.425$, we obtain the MRNN with these desired patterns as stable memory vectors.
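The invertibility of $S(12)$, on which the synthesis rests, can be verified directly. The sketch below transcribes the matrix displayed above and checks that it has full rank:

```python
import numpy as np

# The matrix S(12) from the example, transcribed row by row.
S = np.array([
    [1, 5, 5, 5, 1, 5, 5, 5, 1, 5, 5, 5],
    [1, 5, 5, 1, 1, 1, 5, 1, 5, 5, 5, 5],
    [1, 5, 5, 1, 1, 5, 1, 5, 5, 1, 5, 5],
    [1, 5, 5, 1, 1, 5, 5, 1, 5, 5, 1, 5],
    [5, 1, 1, 5, 5, 1, 1, 1, 1, 1, 1, 1],
    [1, 5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [5, 1, 5, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 5, 1, 5, 5, 5, 1, 5, 5, 1, 5, 1],
    [5, 1, 5, 1, 5, 1, 1, 1, 5, 1, 1, 1],
    [1, 5, 1, 5, 1, 5, 1, 1, 1, 5, 1, 1],
    [5, 1, 5, 1, 5, 1, 1, 1, 1, 1, 5, 1],
    [1, 5, 1, 5, 1, 5, 1, 1, 1, 1, 5, 1],
], dtype=float)

# Full rank is equivalent to invertibility; the rank test is numerically
# more robust than checking the determinant against zero.
print("rank:", np.linalg.matrix_rank(S))
```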

6 Concluding Remarks

In this paper, we have introduced the MRNN, a family of recurrent neural networks with memristive connection weights. Some sufficient conditions have been derived to ensure its mono-stability and multi-stability. In the existing literature on neural networks, the largest number of equilibrium points is $(4k-1)^{n}$, with $(2k)^{n}$ of them locally exponentially stable. In fact, the output patterns of an associative memory depend on the activation function, and this dependence limits the storage capacity. Our MRNN, whose coefficients lie in intervals, is not limited by the output values of the activation function; hence the MRNN can increase the storage capacity of associative memory. This is its main merit over traditional artificial neural networks. Consequently, self-adaptive and self-organizing recurrent neural networks may be realized with memristors [26] in the future.

References
1
T. Mareda, L. Gaudard, and F. Romerio, "A parametric genetic algorithm approach to assess complementary options of large scale wind-solar coupling, " IEEE/CAA J. Autom. Sinica, vol. 4, no. 2, pp. 260-272, Apr. 2017. http://kns.cnki.net/KCMS/detail/detail.aspx?filename=zdhb201702013&dbname=CJFD&dbcode=CJFQ
2
Y. Zhao, Y. Li, F. Y. Zhou, Z. K. Zhou, and Y. Q. Chen, "An iterative learning approach to identify fractional order KiBaM model, " IEEE/CAA J. Autom. Sinica, vol. 4, no. 2, pp. 322-331, Apr. 2017. http://ieeexplore.ieee.org/document/7833249
3
L. Li, Y. L. Lin, N. N. Zheng, and F. Y. Wang, "Parallel learning: a perspective and a framework, " IEEE/CAA J. Autom. Sinica, vol. 4, no. 3, pp. 389-395, Jul. 2017. http://kns.cnki.net/KCMS/detail/detail.aspx?filename=zdhb201703001&dbname=CJFD&dbcode=CJFQ
4
M. Yue, L. J. Wang, and T. Ma, "Neural network based terminal sliding mode control for WMRs affected by an augmented ground friction with slippage effect, " IEEE/CAA J. Autom. Sinica, vol. 4, no. 3, pp. 498-506, Jul. 2017. http://d.wanfangdata.com.cn/Periodical/zdhxb-ywb201703009
5
W. Y. Zhang, H. G. Zhang, J. H. Liu, K. Li, D. S. Yang, and H. Tian, "Weather prediction with multiclass support vector machines in the fault detection of photovoltaic system, " IEEE/CAA J. Autom. Sinica, vol. 4, no. 3, pp. 520-525, Jul. 2017. http://ieeexplore.ieee.org/document/7974898/
6
D. Shen and Y. Xu, "Iterative learning control for discrete-time stochastic systems with quantized information, " IEEE/CAA J. Autom. Sinica, vol. 3, no. 1, pp. 59-67, Jan. 2016. http://d.wanfangdata.com.cn/Periodical/zdhxb-ywb201601007
7
Z. Y. Guo, S. F. Yang, and J. Wang, "Global synchronization of stochastically disturbed memristive neurodynamics via discontinuous control laws, " IEEE/CAA J. Autom. Sinica, vol. 3, no. 2, pp. 121-131, Apr. 2016. http://d.wanfangdata.com.cn/Periodical/zdhxb-ywb201602002
8
X. W. Feng, X. Y. Kong, and H. G. Ma, "Coupled cross-correlation neural network algorithm for principal singular triplet extraction of a cross-covariance matrix, " IEEE/CAA J. Autom. Sinica, vol. 3, no. 2, pp. 147-156, Apr. 2016. http://d.wanfangdata.com.cn/Periodical/zdhxb-ywb201602005
9
S. M. Chen, X. L. Chen, Z. K. Pei, X. X. Zhang, and H. J. Fang, "Distributed filtering algorithm based on tunable weights under untrustworthy dynamics, " IEEE/CAA J. Autom. Sinica, vol. 3, no. 2, pp. 225-232, Apr. 2016. http://ieeexplore.ieee.org/document/7451110/
10
L. Li, Y. S. Lv, and F. Y. Wang, "Traffic signal timing via deep reinforcement learning, " IEEE/CAA J. Autom. Sinica, vol. 3, no. 3, pp. 247-254, Jul. 2016. http://www.en.cnki.com.cn/Article_en/CJFDTOTAL-ZDHB201603003.htm
11
F. Y. Wang, X. Wang, L. X. Li, and L. Li, "Steps toward parallel intelligence, " IEEE/CAA J. Autom. Sinica, vol. 3, no. 4, pp. 345-348, Oct. 2016. http://ieeexplore.ieee.org/document/7589480/
12
T. Giitsidis and G. Ch. Sirakoulis, "Modeling passengers boarding in aircraft using cellular automata, " IEEE/CAA J. Autom. Sinica, vol. 3, no. 4, pp. 365-384, Oct. 2016. http://ieeexplore.ieee.org/document/7589483
13
B. B. Alagoz, "A note on robust stability analysis of fractional order interval systems by minimum argument vertex and edge polynomials, " IEEE/CAA J. Autom. Sinica, vol. 3, no. 4, pp. 411-421, Oct. 2016.
14
J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities, " Proc. Natl. Acad. Sci. USA, vol. 79, no. 8, pp. 2554-2558, Apr. 1982. http://europepmc.org/abstract/MED/6953413
15
L. Chua, "Memristor-the missing circuit element, " IEEE Trans. Circuit Theory, vol. 18, no. 5, pp. 507-519, Sep. 1971. http://www.nrcresearchpress.com/servlet/linkout?suffix=refg1/ref1&dbid=16&doi=10.1139%2Fcjp-2013-0456&key=10.1109%2FTCT.1971.1083337
16
D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, "The missing memristor found, " Nature, vol. 453, no. 7191, pp. 80-83, May 2008.
17
Y. V. Pershin and M. Di Ventra, "Experimental demonstration of associative memory with memristive neural networks, " Neural Netw. , vol. 23, no. 7, pp. 881-886, Sep. 2010. http://europepmc.org/abstract/MED/20605401
18
F. Corinto, A. Ascoli, and M. Gilli, "Nonlinear dynamics of memristor oscillators, " IEEE Trans. Circuits Syst. Ⅰ: Reg. Pap. , vol. 58, no. 6, pp. 1323-1336, Jun. 2011. http://ieeexplore.ieee.org/document/5704223/
19
O. Kavehei, A. Iqbal, Y. S. Kim, K. Eshraghiam, S. F. Al-Sarawi, and D. Abbott, "The fourth element: characteristics, modelling and electromagnetic theory of the memristor, " Proc. Roy. Soc. A-Math. Phy. Eng. Sci. , vol. 466, no. 2120, pp. 2175-2202, Mar. 2010. http://www.jstor.org/stable/25706341
20
Y. Ho, G. M. Huang, and P. Li, "Dynamical properties and design analysis for nonvolatile memristor memories, " IEEE Trans. Circuits Syst. Ⅰ: Reg. Pap. , vol. 58, no. 4, pp. 724-736, Apr. 2011. http://ieeexplore.ieee.org/document/5604689/
21
L. Chua, "Resistance switching memories are memristors, " Appl. Phys. A, vol. 102, no. 4, pp. 765-783, Mar. 2011. http://link.springer.com/article/10.1007/s00339-011-6264-9
22
G. Snider, "Memristors as synapses in a neural computing architecture, " in Memristor and Memristor Syst. Symp. , Berkeley, CA, Nov. 2008.
23
H. Kim, M. P. Sah, C. J. Yang, T. Roska, and L. O. Chua, "Neural synaptic weighting with a pulse-based memristor circuit, " IEEE Trans. Circuits Syst. Ⅰ: Reg. Pap. , vol. 59, no. 1, pp. 148-158, Jan. 2012. http://ieeexplore.ieee.org/document/5976989/
24
M. P. Sah, H. Kim, and L. O. Chua, "Brains are made of memristors, " IEEE Circuits Syst. Mag. , vol. 14, no. 1, pp. 12-36, Feb. 2014. http://ieeexplore.ieee.org/xpls/icp.jsp?arnumber=6744690
25
F. Z. Wang, N. Helian, S. N. Wu, X. Yang, Y. K. Guo, G. Lim, and M. M. Rashid, "Delayed switching applied to memristor neural networks, " J. Appl. Phys. , vol. 111, no. 7, Article ID, 07E317, Apr. 2012. http://scitation.aip.org/content/aip/journal/jap/111/7/10.1063/1.3672409
26
K. D. Cantley, A. Subramaniam, H. J. Stiegler, R. A. Chapman, and E. M. Vogel, "Neural learning circuits utilizing nano-crystalline silicon transistors and memristors, " IEEE Trans. Neural Netw. Learn. Syst. , vol. 23, no. 4, pp. 565-573, Apr. 2012. http://www.ncbi.nlm.nih.gov/pubmed/24805040
27
X. F. Hu, S. K. Duan, L. D. Wang, and X. F. Liao, "Memristive crossbar array with applications in image processing, " Sci. China Inform. Sci. , vol. 55, no. 2, pp. 461-472, 2012. DOI: 10.1007/s11432-011-4410-9
28
M. Itoh and L. Chua, "Memristor cellular automata and memristor discrete-time cellular neural networks, " Int. J. Bifurcation Chaos, vol. 19, no. 11, pp. 3605-3656, Mar. 2009. http://www.worldscientific.com/doi/abs/10.1142/S0218127409025031
29
S. P. Wen, Z. G. Zeng, and T. W. Huang, "Associative learning of integrate-and-fire neurons with memristor-based synapses, " Neural Proc. Lett. , vol. 38, no. 1, pp. 69-80, Aug. 2013. http://link.springer.com/article/10.1007/s11063-012-9263-8
30
A. L. Wu, S. P. Wen, and Z. G. Zeng, "Synchronization control of a class of memristor-based recurrent neural networks, " Inf. Sci. , vol. 183, no. 1, pp. 106-116, Jan. 2012. http://dl.acm.org/citation.cfm?id=2051433
31
S. T. Qin, J. Wang, and X. P. Xue, "Convergence and attractivity of memristor-based cellular neural networks with time delays, " Neural Netw. , vol. 63, pp. 223-233, Mar. 2015. http://www.sciencedirect.com/science/article/pii/S0893608014002706
32
Z. Y. Guo, J. Wang, and Z. Yan, "Attractivity analysis of memristor-based cellular neural networks with time-varying delays, " IEEE Trans. Neural Netw. Learn. Syst. , vol. 25, no. 4, pp. 704-717, Apr. 2014. http://ieeexplore.ieee.org/document/6603322/
33
S. P. Wen, T. W. Huang, Z. G. Zeng, Y. R. Chen, and P. Li, "Circuit design and exponential stabilization of memristive neural networks, " Neural Netw. , vol. 63, pp. 48-56, Mar. 2015. http://dl.acm.org/citation.cfm?id=2947803
34
G. D. Zhang, Y. Shen, Q. Yin, and J. W. Sun, "Global exponential periodicity and stability of a class of memristor-based recurrent neural networks with multiple delays, " Inf. Sci. , vol. 232, pp. 386-396, May 2013. http://dl.acm.org/citation.cfm?id=2444088
35
Z. Y. Guo, J. Wang, and Z. Yan, "Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays, " Neural Netw. , vol. 48, pp. 158-172, Dec. 2013. http://www.ncbi.nlm.nih.gov/pubmed/24055958
36
X. B. Nie, W. X. Zheng, and J. D. Cao, "Coexistence and local µ-stability of multiple equilibrium points for memristive neural networks with nonmonotonic piecewise linear activation functions and unbounded time-varying delays, " Neural Netw. , vol. 84, pp. 172-180, Dec. 2016. http://www.ncbi.nlm.nih.gov/pubmed/27794268
37
S. B. Ding, Z. S. Wang, and H. G. Zhang, "Dissipativity analysis for stochastic memristive neural networks with time-varying delays: a discrete-time case, " IEEE Trans. Neural Netw. Learn. Syst. , pp. 1-13, 2016. DOI: 10.1109/TNNLS.2016.2631624
38
A. L. Wu, Z. G. Zeng, X. S. Zhu, and J. E. Zhang, "Exponential synchronization of memristor-based recurrent neural networks with time delays, " Neurocomputing, vol. 74, no. 17, pp. 3043-3050, 2011. DOI: 10.1016/j.neucom.2011.04.016
39
S. B. Ding, Z. S. Wang, N. N. Rong, and H. G. Zhang, "Exponential stabilization of memristive neural networks via saturating sampled-data control, " IEEE Trans. Cybern. , vol. 47, no, 10, pp. 3027-3039, Jun. 2017. http://ieeexplore.ieee.org/document/7955063/
40
A. N. Michel and D. L. Gray, "Analysis and synthesis of neural networks with lower block triangular interconnecting structure, " IEEE Trans. Circuits Syst. , vol. 37, no. 10, pp. 1267-1283, Oct. 1990.
41
G. Yen and A. N. Michel, "A learning and forgetting algorithm in associative memories: the eigenstructure method, " IEEE Trans. Circuits Syst. Ⅱ: Anal. Digit. Signal Proc. , vol. 39, no. 4, pp. 212-225, Apr. 1992. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=136571
42
G. Seiler, A. J. Schuler, and J. A. Nossek, "Design of robust cellular neural networks, " IEEE Trans. Circuits Syst. Ⅰ: Fundam. Theory Appl. , vol. 40, no. 5, pp. 358-364, May 1993. http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=232580
43
Z. G. Zeng and J. Wang, "Analysis and design of associative memories based on recurrent neural networks with linear saturation activation functions and time-varying delays, " Neural Comput. , vol. 19, no. 8, pp. 2149-2182, Aug. 2007. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6796141
44
M. Brucoli, L. Carnimeo, and G. Grassi, "Discrete-time cellular neural networks for associative memories with learning and forgetting capabilities, " IEEE Trans. Circuits Syst. Ⅰ: Fundam. Theory Appl. , vol. 42, no. 7, pp. 396-399, Jul. 1995. http://www.ams.org/mathscinet-getitem?mr=1351873
45
A. C. B. Delbem, L. G. Correa, and L. Zhao, "Design of associative memories using cellular neural networks, " Neurocomputing, vol. 72, no. 10-12, pp. 2180-2188, Jan. 2009. http://dl.acm.org/citation.cfm?id=1539067.1539948&coll=DL&dl=GUIDE&CFID=358008649&CFTOKEN=38409485
46
G. Grassi, "On discrete-time cellular neural networks for associative memories, " IEEE Trans. Circuits Syst. Ⅰ: Fundam. Theory Appl. , vol. 48, no. 1, pp. 107-111, Jan. 2001. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=903193
47
A. Ascoli, R. Tetzlaff, L. O. Chua, J. P. Strachan, and R. S. Williams, "History erase effect in a non-volatile memristor, " IEEE Trans. Circuits Syst. Ⅰ: Reg. Pap. , vol. 63, no. 3, pp. 389-400, Mar. 2016. http://ieeexplore.ieee.org/document/7444186/
48
Z. Y. Guo, J. Wang, and Z. Yan, "A systematic method for analyzing robust stability of interval neural networks with time-delays based on stability criteria, " Neural Netw. , vol. 54, pp. 112-122, Jun. 2014. http://www.ncbi.nlm.nih.gov/pubmed/24699443
49
Z. G. Zeng, J. Wang, and X. X. Liao, "Global exponential stability of a general class of recurrent neural networks with time-varying delays, " IEEE Trans. Circuits Syst. Ⅰ: Fundam. Theory Appl. , vol. 50, no. 10, pp. 1353-1358, Oct. 2003. http://ieeexplore.ieee.org/xpls/icp.jsp?arnumber=1236548
50
Z. G. Zeng, T. W. Huang, and W. X. Zheng, "Multistability of recurrent neural networks with time-varying delays and the piecewise linear activation function, " IEEE Trans. Neural Netw. , vol. 21, no. 8, pp. 1371-1377, Aug. 2010. http://www.ncbi.nlm.nih.gov/pubmed/20624705
51
Z. G. Zeng, J. Wang, and X. X. Liao, "Stability analysis of delayed cellular neural networks described using cloning templates, " IEEE Trans. Circuits Syst. Ⅰ: Reg. Pap. , vol. 51, no. 11, pp. 2313-2324, Nov. 2004. http://ieeexplore.ieee.org/document/1356162
52
Z. J. Lu and D. R. Liu, "A new synthesis procedure for a class of cellular neural networks with space-invariant cloning template, " IEEE Trans. Circuits Syst. Ⅱ: Anal. Digit. Signal Proc. , vol. 45, no. 12, pp. 1601-1605, Dec. 1998. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=746682