
Theory and Modern Applications

Almost sure exponential stabilization of neural networks by aperiodically intermittent control based on delay observations

Abstract

This paper is concerned with almost sure exponential stabilization of neural networks by intermittent control based on delay observations. By the stochastic comparison principle and Itô’s formula, a sufficient criterion is derived under which unstable neural networks can be stabilized by stochastic intermittent control based on delay observations. The admissible range of the intermittent rate is given, and the upper bound on the time delay can be obtained from a transcendental equation. Finally, examples are provided to demonstrate the feasibility and validity of the proposed methods.

1 Introduction

In the past decade, Hopfield neural networks have been thoroughly investigated. Nowadays they are widely applied in several areas, such as image processing, communication engineering, and optimization. These applications depend mainly on the asymptotic behavior of the neural networks [1,2,3,4,5], especially their stability, and therefore the stability of neural networks has attracted growing attention. The reader may refer to [6,7,8,9,10] and the references therein.

In practice, many problems involve unstable neural networks, which cannot be applied in engineering unless they are stabilized in advance. Hence, various control strategies have been proposed to stabilize unstable neural networks, including intermittent control [11,12,13,14,15], pinning control [16], impulsive control [17], finite-time control [18], and adaptive control [19]. Meanwhile, noise disturbance is ubiquitous in the real world. As a result, an increasing number of authors have revealed the positive impact of white noise on systems in recent decades [20,21,22,23,24]. For example, Mao showed in [25] that noise can suppress explosive solutions of population systems. Some scholars have also paid attention to the stochastic stabilization of neural networks: Shen and Wang utilized white noise to stabilize unstable networks in [26]. The reader may refer to [27,28,29] for more details.

In the past decades, more and more scholars have realized that there is a time delay τ between the observation of the state and the arrival of the feedback control. Thus, time-delay feedback control has attracted increasing attention [30,31,32,33,34]. It was Guo and Mao who first integrated the delay feedback control strategy with stochastic stabilization theory: they showed in [22] that a differential system can be stabilized by delay feedback control provided the time delay does not exceed an upper bound. Nevertheless, neural networks have distinct characteristics, and it is significant to investigate their stochastic delay stabilization based on these characteristics. To the best of our knowledge, stochastic delay stabilization of neural networks has scarcely been investigated yet. Accordingly, tackling this issue constitutes the first motivation of this paper.

Moreover, the intermittent control strategy has attracted considerable attention [11,12,13, 35,36,37,38,39,40]. The networks are controlled by white noise during the working time, and the noise is removed during the rest time; the controlled system can thus be regarded as switching between a closed-loop subsystem (working time) and an open-loop subsystem (rest time). Intermittent control has advantages over classic continuous control: it cuts costs by reducing the wear on the controller caused by long-time operation. Several results applying intermittent control to networks are available [11, 12, 37]. A question then arises naturally: can the intermittent control strategy be integrated with the stochastic delay stabilization strategy? To date, few results are available on this topic, since the simultaneous presence of delay feedback control and intermittent noise complicates the problem, and the approaches in the existing literature cannot be directly adopted. Overcoming the difficulties stemming from delay feedback control and intermittent noise is the second motivation.

Summarizing the above statements, this paper focuses on stochastic intermittent stabilization based on delay feedback control for neural networks. Sufficient conditions for almost sure exponential stabilization are obtained provided that the time delay is bounded by \(\tau _{0}\) and the intermittent rate ϕ satisfies \(2\lambda _{+}(-D+| \bar{A}|K)<\sigma ^{2}(1-\phi )\) (D, Ā, K, and σ will be defined in Sect. 2; see [41] for more details). The main contributions of this paper are threefold. (1) Stochastic delay stabilization of neural networks is investigated based on the characteristics of neural networks. (2) By the stochastic comparison principle and Itô’s formula, stochastic intermittent stabilization based on delay feedback control is obtained. (3) We overcome the difficulties arising from the simultaneous presence of intermittent noise and delay feedback control.

2 Preliminaries

Throughout this paper, unless otherwise specified, let \((\varOmega , \mathcal{F}, \{\mathcal{F}_{t}\}_{t\geq 0}, \mathcal{P})\) be a complete probability space with a filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) satisfying the usual conditions. Let \(\tau >0\), and denote by \(C=C([-\tau ,0];R^{n})\) the family of continuous functions ξ from \([-\tau ,0]\) to \(R^{n} \) with the norm \(\|\xi \|= \sup_{-\tau \leq \theta \leq 0}|\xi (\theta )|<\infty \). Denote by \(L^{2}_{\mathcal{F}_{0}}([-{\tau },0]; R^{n})\) the family of all \(\mathcal{F}_{0}\)-measurable \(C([-{\tau },0]; R^{n})\) valued random variables \(\zeta =\{\zeta (\theta ): -{\tau }\leq \theta \leq 0\}\) such that \(\sup_{-\tau \leq \theta \leq 0}E|\zeta (\theta )|^{2}<\infty \), where E stands for the mathematical expectation operator with respect to the given probability measure \(\mathcal{P}\). Let \(G=(g_{ij})_{n\times n}\). Denote \(\bar{G}=(\bar{g}_{ij})_{n\times n}\) with \(\bar{g}_{ii}=\max \{g_{ii},0\}\), \(\bar{g}_{ij}=g_{ij}\) for \(i\neq j\), and \(|G|=(|g_{ij}|)_{n\times n}\).

Consider the unstable neural networks as follows:

$$\begin{aligned} \dot{x}(t)=-Dx(t)+A f\bigl(x(t)\bigr), \end{aligned}$$
(1)

where \(D=\operatorname{diag}(d_{1},d_{2},\ldots ,d_{n})\), \(A=(a_{ij})_{n \times n}\), \(f(x)= (f_{1}(x_{1}),f_{2}(x_{2}),\ldots ,f_{n}(x_{n}) )^{T}\), each \(f_{i}(\cdot ):R\rightarrow R\) is an activation function, and \(x(t)\in R^{n}\), where the variable \(x_{i}(t)\) represents the voltage on the input of the ith neuron. Consider the following neural networks under aperiodically intermittent control based on delay state observations:

$$\begin{aligned} \begin{aligned} &\mathrm{d}x(t)= \bigl(-Dx(t)+A f(x) \bigr)\,\mathrm{d}t+\varSigma h(t)x(t- \tau )\,\mathrm{d}B(t), \\ & h(t)=\textstyle\begin{cases} 1, &t\in [t_{i},s_{i}], \\ 0, &t\in (s_{i},t_{i+1}], i=0,1,2,\ldots , \end{cases}\displaystyle \end{aligned} \end{aligned}$$
(2)

where \(\varSigma =\operatorname{diag}(\sigma ,\ldots ,\sigma )\), \(B(t)\) is a scalar Brownian motion, and \(\varSigma h(t)x(t-\tau )\,\mathrm{d}B(t)\) is an aperiodically intermittent controller. \(t_{k}-t_{k-1}>0\) is the length of the kth time interval. The feedback control is imposed on the networks during the working time \([t_{k},s_{k})\) and removed during the rest time \([s_{k},t_{k+1})\). Set \(\phi _{k}=(t_{k+1}-t _{k})^{-1}(t_{k+1}-s_{k})\); then \(\phi =\limsup_{k\longrightarrow + \infty }\phi _{k}\) is the intermittent rate.
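The indicator \(h(t)\) and the intermittent rate ϕ can be computed directly from the switching instants. The following is a minimal sketch (in Python rather than MATLAB; the periodic schedule \(t_{k}=k\), \(s_{k}=k+0.3\) is an assumed illustration, not taken from the paper):

```python
import numpy as np

def make_indicator(t_starts, s_ends):
    """h(t) = 1 on the working intervals [t_k, s_k], 0 elsewhere."""
    t_starts = np.asarray(t_starts, dtype=float)
    s_ends = np.asarray(s_ends, dtype=float)

    def h(t):
        return float(np.any((t_starts <= t) & (t <= s_ends)))

    return h

def intermittent_rate(t_starts, s_ends):
    """phi_k = (t_{k+1} - s_k)/(t_{k+1} - t_k): fraction of the kth
    cycle on which the control is switched off.  phi is the limsup,
    approximated here by the maximum over the listed cycles."""
    t = np.asarray(t_starts, dtype=float)
    s = np.asarray(s_ends, dtype=float)
    return ((t[1:] - s[:-1]) / (t[1:] - t[:-1])).max()

# Assumed schedule: control active on [k, k + 0.3] in each cycle [k, k+1]
t_starts = [0.0, 1.0, 2.0, 3.0, 4.0]
s_ends = [0.3, 1.3, 2.3, 3.3, 4.3]
h = make_indicator(t_starts, s_ends)
print(h(0.1), h(0.5))                        # 1.0 0.0
print(intermittent_rate(t_starts, s_ends))   # ≈ 0.7
```

For this schedule the control is active on the fraction \(1-\phi =0.3\) of each cycle.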

In order to analyze the asymptotic behavior for networks (2), we define the auxiliary networks without delay observations:

$$\begin{aligned} {\mathrm{d}}y(t)= \bigl(-Dy(t)+A f(y) \bigr)\,\mathrm{d}t+ \varSigma h(t)y(t) \,\mathrm{d}B(t). \end{aligned}$$
(3)

The key technique in this paper is the comparison principle: the pth moment difference between networks (2) and (3) is estimated. The whole framework is based on two basic assumptions.

Assumption 1

For each \(i=1,2,\ldots ,n\), there exists \(\kappa _{i}\) such that

$$ 0< \frac{f_{i}(u)-f_{i}(v)}{u-v}\leq \kappa _{i},\quad u,v \in R. $$

Denote \(K=\operatorname{diag}(\kappa _{1},\ldots ,\kappa _{n})\).

Assumption 2

\(\lambda _{+}(-D+|\bar{A}|K)= \sup_{|x|=1, x\in R^{n}_{+}}x^{T}(-D+|\bar{A}|K)x>0\).

Remark 1

If \(\lambda _{+}(-D+|\bar{A}|K)<0\), calculating the derivative of \(|x(t)|^{2}\) along networks (1) gives

$$\begin{aligned} D^{+} \bigl\vert x(t) \bigr\vert ^{2}&=2x^{\mathrm{T}}(t) \bigl(-Dx(t)+Af\bigl(x(t)\bigr)\bigr) \\ & \leq -2x^{\mathrm{T}}(t)Dx(t)+2\sum_{i=1}^{n}\bar{a}_{ii} \kappa _{i}x^{2}_{i} +2\sum_{i=1}^{n}\sum _{j=1, j\neq i}^{n} \vert a_{ij} \vert \kappa _{j} \vert x_{i} \vert \vert x_{j} \vert \\ & =2\bigl([x]^{+}\bigr)^{\mathrm{T}}\bigl(-D+ \vert \bar{A} \vert K\bigr)[x]^{+}\leq 2\lambda _{+}\bigl(-D+ \vert \bar{A} \vert K\bigr) \bigl\vert x(t) \bigr\vert ^{2}< 0, \end{aligned}$$

where \([x]^{+}=(|x_{1}|,|x_{2}|,\ldots ,|x_{n}|)^{T}\), and \(|\bar{A}|\) has diagonal entries \(\bar{a} _{ii}=\max \{a_{ii},0\}\) and off-diagonal entries \(|a_{ij}|\). This implies that networks (1) are stable.
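The quantity \(\lambda _{+}\) in Assumption 2 is a supremum of a quadratic form over the nonnegative part of the unit sphere, which can be estimated numerically by sampling. A sketch (the matrices are those of Example 1 in Sect. 4; since \(f=\tanh \) there, the Lipschitz matrix is \(K=I\)):

```python
import numpy as np

def lambda_plus(M, n_samples=200_000, seed=0):
    """Monte-Carlo (lower) estimate of
    lambda_+(M) = sup_{|x| = 1, x >= 0} x^T M x
    over nonnegative unit vectors, as in Assumption 2."""
    rng = np.random.default_rng(seed)
    X = np.abs(rng.standard_normal((n_samples, M.shape[0])))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # project onto unit sphere
    return np.einsum('ij,jk,ik->i', X, M, X).max()  # max of x^T M x

# Matrices of Example 1; all a_ii >= 0, so |A-bar| = A, and K = I for tanh
D = np.diag([0.1, 0.1])
A = np.array([[0.5, 0.5], [0.5, 0.6]])
M = -D + np.abs(A)

print(lambda_plus(M))   # ≈ 0.95 > 0, so Assumption 2 holds here
```

Because \(\lambda _{+}(-D+|\bar{A}|K)>0\) for these matrices, Remark 1 does not apply and the uncontrolled networks may be unstable.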

3 Main results

The main results will be presented in this section.

Theorem 3.1

Let Assumption 1 and \(2\lambda _{+}(-D+| \bar{A}|K)<\sigma ^{2}(1-\phi )\) hold. Networks (1) can be stabilized by the stochastic intermittent control \(\varSigma h(t)x(t-\tau )\,\mathrm{d}B(t)\) provided \(\tau <\tau _{0}\), where \(\tau _{0}\) is the solution of (23).

Proof

We divide the proof into three steps for convenience. Step 1 shows the stability of system (3). Step 2 estimates the moment \(E|x(t)-y(t)|^{p}\). Step 3 establishes the stabilization by stochastic intermittent control with time delay.

Step 1. We will show that system (3) is pth moment exponentially stable if \(p\in (0,1-2\lambda _{+}(-D+|\bar{A}|K)\sigma ^{-2}(1-\phi )^{-1})\) is sufficiently small. Applying Itô’s formula to \(V=|y(t)|^{p}\) yields

$$\begin{aligned} {\mathrm{d}}V= {}&\biggl\{ -p \vert y \vert ^{p-2}y^{T}D y+p \vert y \vert ^{p-2}y^{T}A f(y)+ \frac{1}{2}\sigma ^{2}h^{2}(t)p(p-1) \vert y \vert ^{p} \biggr\} \,\mathrm{d}t \\ &{}+p \sigma h(t) \vert y \vert ^{p}\,\mathrm{d}B(t). \end{aligned}$$
(4)

By similar computations in Remark 1, we have

$$\begin{aligned} LV&\leq p \vert y \vert ^{p-2}\bigl([y]^{+}\bigr)^{T} \bigl(-D + \vert \bar{A} \vert K\bigr)[y]^{+}+ \frac{1}{2}\sigma ^{2}h ^{2}(t)p(p-1) \vert y \vert ^{p} \\ & \leq p \biggl(\lambda _{+}\bigl(-D + \vert \bar{A} \vert K \bigr)+\frac{1}{2} \sigma ^{2}h^{2}(t) (p-1) \biggr) \vert y \vert ^{p}. \end{aligned}$$

For convenience, denote \(\lambda _{1}=\lambda _{+}(-D+|\bar{A}|K)\), then (4) can be written as

$$\begin{aligned} {\mathrm{d}} \bigl\vert y(t) \bigr\vert ^{p}\leq \biggl(\lambda _{1}p+\frac{1}{2}\sigma ^{2}h(t)p(p-1) \biggr) \bigl\vert y(t) \bigr\vert ^{p}\,\mathrm{d}t+p\sigma h(t) \bigl\vert y(t) \bigr\vert ^{p}\,\mathrm{d}B(t). \end{aligned}$$

It follows from the stochastic comparison principle that

$$\begin{aligned} \bigl\vert y(t) \bigr\vert ^{p}\leq \bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \biggl\{ \biggl(\lambda _{1}p- \frac{1}{2}p\sigma ^{2} \biggr) (t-t_{0})+p\sigma \bigl(B(t)-B(t_{0}) \bigr) \biggr\} ,\quad t_{0}\leq t\leq s_{0}. \end{aligned}$$
(5)

In particular, setting \(t=s_{0}\) gives

$$\begin{aligned} \bigl\vert y(s_{0}) \bigr\vert ^{p}\leq \bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \biggl\{ \biggl( \lambda _{1}p- \frac{1}{2}p\sigma ^{2} \biggr) (s_{0}-t_{0})+ p\sigma \bigl(B(s_{0})-B(t _{0}) \bigr) \biggr\} . \end{aligned}$$

It is obvious that \(h(t)=0\) for \(s_{0}< t\leq t_{1}\); then we obtain

$$\begin{aligned} \bigl\vert y(t) \bigr\vert ^{p}\leq{}& \bigl\vert y(s_{0}) \bigr\vert ^{p}\exp \bigl\{ \lambda _{1}p(t-s_{0}) \bigr\} \\ \leq{}& \bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \biggl\{ \biggl(\lambda _{1}p-\frac{1}{2}p\sigma ^{2} \biggr) (s_{0}-t_{0})+\lambda _{1}p(t-s_{0}) \\ &{}+p\sigma \bigl(B(s_{0})-B(t_{0}) \bigr) \biggr\} ,\quad s_{0}< t\leq t_{1}. \end{aligned}$$
(6)

Applying Itô’s formula to \(V_{1}(t)=e^{p\sigma B(t)}\) yields

$$\begin{aligned} \mathrm{d}V_{1}(t)=\frac{1}{2}p^{2}\sigma ^{2}V_{1}\,\mathrm{d}t+p\sigma V_{1}(t) \, \mathrm{d}B(t). \end{aligned}$$

Simple computations show that \(EV_{1}=e^{\frac{1}{2}p^{2}\sigma ^{2}t}\). Taking expectation on both sides of (5) yields

$$\begin{aligned} E \bigl\vert y(t) \bigr\vert ^{p}\leq \bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \biggl\{ \biggl(\lambda _{1}p+ \frac{1}{2}\sigma ^{2}p(p-1) \biggr) (t-t_{0}) \biggr\} ,\quad t_{0}\leq t \leq s_{0}. \end{aligned}$$
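The moment identity used here, \(Ee^{p\sigma B(t)}=e^{\frac{1}{2}p^{2}\sigma ^{2}t}\), can be spot-checked by Monte Carlo simulation (the parameter values and sample size below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma, t = 0.5, 1.2, 2.0
B_t = rng.standard_normal(2_000_000) * np.sqrt(t)   # B(t) ~ N(0, t)
mc = np.exp(p * sigma * B_t).mean()                 # sample mean of e^{p sigma B(t)}
exact = np.exp(0.5 * p**2 * sigma**2 * t)           # closed-form expectation
print(mc, exact)                                    # both ≈ exp(0.36)
```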

In the same way, using (6) yields

$$\begin{aligned} E \bigl\vert y(t) \bigr\vert ^{p}\leq \bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \biggl\{ \biggl(\lambda _{1}p+ \frac{1}{2}\sigma ^{2}p(p-1) \biggr) (s_{0}-t_{0})+\lambda _{1}p(t-s_{0}) \biggr\} ,\quad s_{0}< t\leq t_{1}. \end{aligned}$$

Denote \(\alpha _{1}=\lambda _{1}p+\frac{1}{2}\sigma ^{2}p(p-1)\), \(\alpha _{2}=\lambda _{1}p\). For \(t_{i}\leq t< s_{i}\), we can readily verify that

$$\begin{aligned} E \bigl\vert y(t) \bigr\vert ^{p}&\leq \bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \Biggl\{ \alpha _{1}\sum_{k=0}^{i-1}(s _{k}-t_{k})+\alpha _{2}\sum _{k=0}^{i-1}(t_{k+1}-s_{k})+ \alpha _{1}(t-t _{i}) \Biggr\} \\ & = \bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \Biggl\{ \alpha _{1}\sum_{k=0}^{i-1}(1- \phi _{k}) (t_{k+1}-t_{k})+\alpha _{2}\sum_{k=0}^{i-1}\phi _{k}(t_{k+1}-t _{k})+\alpha _{1}(t-t_{i}) \Biggr\} . \end{aligned}$$

It follows from the definition of ϕ that, for any \(\varepsilon >0\), there exists a positive integer \(N>0\) such that \(\phi _{k}<\phi + \varepsilon \) for any \(k>N\). Consequently, for \(t_{i}\leq t< s_{i}\), we have

$$\begin{aligned} &E \bigl\vert y(t) \bigr\vert ^{p} \\ &\quad \leq \bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \Biggl\{ C+\sum _{k=N+1}^{i-1} \bigl( \alpha _{1}(1-\phi -\varepsilon )+\alpha _{2}(\phi +\varepsilon ) \bigr) (t _{k+1}-t_{k})+\alpha _{1}(t-t_{i}) \Biggr\} , \end{aligned}$$
(7)

where C is a constant. Similarly, for \(s_{i}\leq t< t_{i+1}\), we get

$$\begin{aligned} E \bigl\vert y(t) \bigr\vert ^{p}\leq{} &\bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \Biggl\{ \alpha _{1}\sum_{k=0}^{i}(s _{k}-t_{k})+\alpha _{2}\sum _{k=0}^{i-1}(t_{k+1}-s_{k})+ \alpha _{2}(t-s _{i}) \Biggr\} \\ ={}& \bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \Biggl\{ \alpha _{1}\sum_{k=0}^{i}(1- \phi _{k}) (t _{k+1}-t_{k})+\alpha _{2}\sum_{k=0}^{i-1}\phi _{k}(t_{k+1}-t_{k})+\alpha _{2}(t-s_{i}) \Biggr\} \\ ={}& \bigl\vert y(t_{0}) \bigr\vert ^{p}\exp \Biggl\{ C+\sum_{k=N+1}^{i-1} \bigl(\alpha _{1}(1- \phi -\varepsilon ) +\alpha _{2}(\phi + \varepsilon ) \bigr) (t_{k+1}-t _{k}) \\ &{} +\alpha _{1}(1-\phi _{i}) (t_{i+1}-t_{i})+ \alpha _{2}(t-s _{i}) \Biggr\} . \end{aligned}$$
(8)

Combining (7) and (8), for every \(\varepsilon >0\),

$$\begin{aligned} \limsup_{t\rightarrow \infty }\frac{1}{t}\log E \bigl\vert y(t) \bigr\vert ^{p}\leq \alpha _{1}(1-\phi -\varepsilon )+\alpha _{2}(\phi +\varepsilon ). \end{aligned}$$
(9)

Letting \(\varepsilon \rightarrow 0\) yields

$$\begin{aligned} \limsup_{t\rightarrow \infty }\frac{1}{t}\log E \bigl\vert y(t) \bigr\vert ^{p}\leq \alpha _{1}(1-\phi )+\alpha _{2}\phi =p \biggl(\lambda _{1}- \frac{1}{2}\sigma ^{2}(1-\phi ) (1-p) \biggr)=:-\gamma < 0. \end{aligned}$$

Then we can claim that there exists a positive real number \(T_{1}>0\) such that, for any \(t-t_{0}>T_{1}\),

$$\begin{aligned} E \bigl\vert y(t) \bigr\vert ^{p}\leq E \bigl\vert y(t_{0}) \bigr\vert ^{p}e^{-0.5\gamma (t-t_{0})},\quad t-t_{0}>T _{1}. \end{aligned}$$
(10)

Step 2. The main aim now is to estimate the pth moment of the solution process \(x(t)\) and of the difference process \(x(t)-y(t)\) between networks (2) and (3). Applying Itô’s formula to \(|x(t)|^{2}\) yields

$$\begin{aligned} \bigl\vert x(t) \bigr\vert ^{2}\leq{}& \bigl\vert x(t_{0}) \bigr\vert ^{2}+ \int _{t_{0}}^{t} \bigl(2\lambda _{1} \bigl\vert x(s) \bigr\vert ^{2}+\sigma ^{2}h^{2}(s) \bigl\vert x(s- \tau ) \bigr\vert ^{2} \bigr) \,\mathrm{d}s \\ &{} +2\sigma \int _{t_{0}}^{t}h(s)x^{T}(s)x(s-\tau ) \,\mathrm{d}B(s). \end{aligned}$$

Taking expectations on both sides, we have

$$\begin{aligned} E \bigl\vert x(t) \bigr\vert ^{2}\leq E \bigl\vert x(t_{0}) \bigr\vert ^{2}+2\lambda _{1} \int _{t_{0}}^{t}E \bigl\vert x(s) \bigr\vert ^{2} \,\mathrm{d}s+\sigma ^{2} \int _{t_{0}}^{t}E \bigl\vert x(s-\tau ) \bigr\vert ^{2}\,\mathrm{d}s. \end{aligned}$$

Taking supremum on \([t_{0}-\tau , t]\) gives

$$\begin{aligned} \sup_{t_{0}-\tau \leq u\leq t}E \bigl\vert x(u) \bigr\vert ^{2} &\leq E \Vert \zeta \Vert ^{2}+ \sup_{t_{0}\leq u\leq t}E \bigl\vert x(u) \bigr\vert ^{2} \\ & \leq 2E \Vert \zeta \Vert ^{2}+2\lambda _{1} \int _{t_{0}}^{t} \Bigl( \sup_{t_{0}\leq u\leq s}E \bigl\vert x(u) \bigr\vert ^{2} \Bigr)\,\mathrm{d}s+\sigma ^{2} \int _{t_{0}}^{t} \Bigl(\sup_{t_{0}-\tau \leq u\leq s}E \bigl\vert x(u) \bigr\vert ^{2} \Bigr)\,\mathrm{d}s. \end{aligned}$$

Note that the right-hand side of the above inequality is monotonically increasing in \(t\geq t_{0}\); then we obtain

$$\begin{aligned} \sup_{t_{0}-\tau \leq u\leq t}E \bigl\vert x(u) \bigr\vert ^{2} \leq 2E \Vert \zeta \Vert ^{2}+\bigl(2 \lambda _{1}+ \sigma ^{2}\bigr) \int _{t_{0}}^{t} \Bigl(\sup_{t_{0}-\tau \leq u\leq s}E \bigl\vert x(u) \bigr\vert ^{2} \Bigr)\,\mathrm{d}s. \end{aligned}$$

The Gronwall inequality then gives

$$\begin{aligned} \sup_{t_{0}-\tau \leq u\leq t}E \bigl\vert x(u) \bigr\vert ^{2}\leq 2e^{(2\lambda _{1}+\sigma ^{2})(t-t_{0})}E \Vert \zeta \Vert ^{2}. \end{aligned}$$
(11)

By the Burkholder–Davis–Gundy inequality and the Hölder inequality, we obtain

$$\begin{aligned} &E \Bigl(\sup_{0\leq u\leq \tau } \bigl\vert x(t+u)-x(t) \bigr\vert ^{2} \Bigr) \\ &\quad\leq 2E \biggl\{ \sup_{0\leq u\leq \tau } \biggl\vert \int _{t}^{t+u} \bigl[-Dx(t)+Af \bigl(x(s) \bigr) \bigr]\,\mathrm{d}s \biggr\vert ^{2} \biggr\} +2E \biggl\{ \sup _{0\leq u\leq \tau } \biggl\vert \int _{t}^{t+u}\sigma x(s-\tau )\,\mathrm{d}B(s) \biggr\vert ^{2} \biggr\} \\ &\quad\leq 2\lambda ^{2}_{2}\tau \int _{t}^{t+\tau }E \bigl\vert x(s) \bigr\vert ^{2}\,\mathrm{d}s+8 \sigma ^{2} \int _{t}^{t+\tau }E \bigl\vert x(s-\tau ) \bigr\vert ^{2}\,\mathrm{d}s. \end{aligned}$$

Using (11) yields

$$\begin{aligned} &E \Bigl(\sup_{0\leq u\leq \tau } \bigl\vert x(t+u)-x(t) \bigr\vert ^{2} \Bigr) \\ &\quad \leq \bigl[4\tau \bigl(\lambda ^{2}_{2}\tau \exp \bigl\{ \bigl(2\lambda _{1}+\sigma ^{2}\bigr) \tau \bigr\} +4\sigma ^{2}\bigr)\exp \bigl\{ \bigl(2\lambda _{1}+\sigma ^{2}\bigr) (t-t_{0})\bigr\} \bigr]E \Vert \zeta \Vert ^{2}, \end{aligned}$$
(12)

where \(\lambda _{2}=\sup_{|x|=1}x^{T}(D^{2}+K^{T}(x)A^{T}AK(x)-DAK(x)-K ^{T}(x)A^{T}D)x\), with \(K(x)=\operatorname{diag} (f_{1}(x_{1})/x_{1},f_{2}(x_{2})/x _{2},\ldots , f_{n}(x_{n})/x_{n} )\). It follows from the Hölder inequality that

$$\begin{aligned} E \Bigl(\sup_{0\leq u\leq \tau } \bigl\vert x(t+u)-x(t) \bigr\vert ^{p} \Bigr)\leq F _{1}(\tau ,p)\exp \bigl\{ \bigl(p \lambda _{1}+0.5p\sigma ^{2}\bigr) (t-t_{0}) \bigr\} E \Vert \zeta \Vert ^{p}, \end{aligned}$$

where \(F_{1}(\tau ,p)= [4\tau (\lambda ^{2}_{2}\tau \exp \{(2\lambda _{1}+\sigma ^{2})\tau \}+4\sigma ^{2}) ]^{\frac{p}{2}}\). Now we estimate the expectation \(E(\sup_{t_{0}\leq u\leq t}|x(u)|^{2})\). The elementary inequality gives

$$\begin{aligned} \bigl\vert x(t) \bigr\vert ^{2}\leq 3 \bigl\vert x(t_{0}) \bigr\vert ^{2}+3\lambda ^{2}_{2}(t-t_{0}) \int _{t_{0}} ^{t} \bigl\vert x(s) \bigr\vert ^{2}\,\mathrm{d}s+3\sigma ^{2} \biggl\vert \int _{t_{0}}^{t} x(s-\tau ) {\,\mathrm{d}}B(s) \biggr\vert ^{2}. \end{aligned}$$

Taking supremum on \([t_{0}, t]\) and expectations on both sides yields

$$\begin{aligned} E \Bigl(\sup_{t_{0}\leq u\leq t} \bigl\vert x(u) \bigr\vert ^{2} \Bigr) \leq 3 \bigl\vert x(t_{0}) \bigr\vert ^{2}+3 \lambda ^{2}_{2}(t-t_{0}) \int _{t_{0}}^{t}E \bigl\vert x(s) \bigr\vert ^{2}\,\mathrm{d}s+12 \sigma ^{2} \int _{t_{0}}^{t}E \bigl\vert x(s-\tau ) \bigr\vert ^{2}\,\mathrm{d}s. \end{aligned}$$

Together with (11), we have

$$\begin{aligned} E \Bigl(\sup_{t_{0}\leq u\leq t} \bigl\vert x(u) \bigr\vert ^{2} \Bigr)\leq \biggl(3+\frac{6( \lambda ^{2}_{2}(t-t_{0})+4\sigma ^{2})}{2\lambda _{1}+\sigma ^{2}} \bigl( \exp \bigl\{ \bigl(2\lambda _{1}+\sigma ^{2}\bigr) (t-t_{0})\bigr\} -1 \bigr) \biggr)E \Vert \zeta \Vert ^{2}. \end{aligned}$$

The Hölder inequality then gives

$$\begin{aligned} E \Bigl(\sup_{t_{0}\leq u\leq t} \bigl\vert x(u) \bigr\vert ^{p} \Bigr)&\leq \biggl(3+\frac{6( \lambda ^{2}_{2}(t-t_{0})+4\sigma ^{2})}{2\lambda _{1}+\sigma ^{2}} \bigl( \exp \bigl\{ \bigl(2\lambda _{1}+\sigma ^{2}\bigr) (t-t_{0})\bigr\} -1 \bigr) \biggr)^{\frac{p}{2}}E \Vert \zeta \Vert ^{p} \\ & :=F_{2}(\tau ,p,t-t_{0})E \Vert \zeta \Vert ^{p}. \end{aligned}$$
(13)

Next we estimate the pth moment difference between networks (2) and (3). By Itô’s formula and elementary inequalities, we get

$$\begin{aligned} &E \bigl\vert x(t)-y(t) \bigr\vert ^{2} \\ &\quad=E \int _{t_{0}+\tau }^{t} \bigl\{ 2 \bigl(x(s)-y(s) \bigr)^{T} \bigl[-D \bigl(x(s)-y(s) \bigr)+A \bigl(f \bigl(x(s) \bigr)-f \bigl(y(s) \bigr) \bigr) \bigr] \\ &\qquad{} +\sigma ^{2}h ^{2}(s) \bigl\vert x(s-\tau )-y(s) \bigr\vert ^{2} \bigr\} {\,\mathrm{d}}s \\ &\quad\leq 2\bigl(\lambda _{1}+\sigma ^{2}\bigr) \int _{t_{0}+\tau }^{t}E \bigl\vert x(s)-y(s) \bigr\vert ^{2} \,\mathrm{d}s+2\sigma ^{2} \int _{t_{0}+\tau }^{t}E \bigl\vert x(s-\tau )-x(s) \bigr\vert ^{2} \,\mathrm{d}s. \end{aligned}$$
(14)

Substituting (12) into (14) gives

$$\begin{aligned} E \bigl\vert x(t)-y(t) \bigr\vert ^{2}\leq {}&2\bigl(\lambda _{1}+\sigma ^{2}\bigr) \int _{t_{0}+\tau } ^{t}E \bigl\vert x(s)-y(s) \bigr\vert ^{2}\,\mathrm{d}s \\ &{} +2\sigma ^{2}F_{1}(\tau ,2) \bigl(2\lambda _{1}+\sigma ^{2}\bigr)^{-1}\exp \bigl\{ \bigl(2 \lambda _{1}+\sigma ^{2}\bigr) (t-\tau -t_{0})\bigr\} E \Vert \zeta \Vert ^{2}. \end{aligned}$$
(15)

It follows from the Gronwall inequality that

$$\begin{aligned} E \bigl\vert x(t)-y(t) \bigr\vert ^{2}\leq{} &\bigl(2\lambda _{1}+\sigma ^{2}\bigr)^{-1}F_{1}( \tau ,2) \exp \bigl\{ \bigl(2\lambda _{1}+\sigma ^{2} \bigr) (t-\tau -t_{0})\bigr\} \\ &{} \times \bigl[2\sigma ^{2}+4\bigl(\lambda _{1}+\sigma ^{2}\bigr) \bigl(\exp \bigl\{ \sigma ^{2}(t-\tau -t_{0})\bigr\} -1\bigr) \bigr]E \Vert \zeta \Vert ^{2}. \end{aligned}$$
(16)

It follows from the Hölder inequality that

$$\begin{aligned} E \bigl\vert x(t)-y(t) \bigr\vert ^{p}\leq F_{3}( \tau ,p,t-t_{0})E \Vert \zeta \Vert ^{p}, \end{aligned}$$

where

$$\begin{aligned} F_{3}(\tau ,p,t-t_{0})= {}&\bigl(\bigl(2\lambda _{1}+\sigma ^{2}\bigr)^{-1}F_{1}( \tau ,2) \exp \bigl\{ \bigl(2\lambda _{1}+\sigma ^{2} \bigr) (t-\tau -t_{0})\bigr\} \\ &{} \times \bigl[2\sigma ^{2}+4\bigl(\lambda _{1} +\sigma ^{2}\bigr) \bigl(\exp \bigl\{ \sigma ^{2}(t-\tau -t_{0})\bigr\} -1\bigr) \bigr] \bigr)^{\frac{p}{2}}. \end{aligned}$$
(17)

Step 3. Let \(x(t)=x(t,t_{0},\zeta )\), \(y(t_{0}+\tau +T)=y (t_{0}+\tau +T,t_{0}+\tau ,x(t_{0}+\tau ) )\) for simplicity. Taking \(T=\max \{T_{1}, \frac{2}{\gamma }\log (\frac{2^{2.5p}}{\epsilon })\}\) with \(\epsilon \in (0,1)\), assertion (10) gives that

$$\begin{aligned} E \bigl\vert y(t_{0}+\tau +T) \bigr\vert ^{p}\leq E \bigl\vert x(t_{0}+\tau ) \bigr\vert ^{p}e^{\frac{1}{2}p (\lambda _{1}-\frac{1}{2}\sigma ^{2}(1-p)(1-\phi ) )T} \leq e^{- \frac{1}{2}\gamma T} \bigl(2e^{(2\lambda _{1}+\sigma ^{2})\tau } \bigr)^{ \frac{p}{2}}E \Vert \zeta \Vert ^{p}. \end{aligned}$$
(18)

The elementary inequality \((x+y)^{p}\leq 2^{p}(x^{p}+y^{p})\) for \(x,y\geq 0\) yields

$$\begin{aligned} E \bigl\vert x(t_{0}+\tau +T) \bigr\vert ^{p}\leq 2^{p}E \bigl\vert y(t_{0}+\tau +T) \bigr\vert ^{p}+ 2^{p}E \bigl\vert x(t _{0}+\tau +T)-y(t_{0}+\tau +T) \bigr\vert ^{p}. \end{aligned}$$

It follows from (17) and (18) that

$$\begin{aligned} E \bigl\vert x(t_{0}+\tau +T) \bigr\vert ^{p}\leq 2^{p} \bigl(e^{-\frac{1}{2}\gamma T} \bigl(2e ^{(2\lambda _{1}+\sigma ^{2})\tau } \bigr)^{\frac{p}{2}}+F_{3}(\tau ,p,T+ \tau ) \bigr)E \Vert \zeta \Vert ^{p}. \end{aligned}$$
(19)

Using the elementary inequality and (19),

$$\begin{aligned} E \bigl\vert x(t_{0}+2\tau +T) \bigr\vert ^{p}\leq{}& 2^{p}E \bigl\vert x(t_{0}+\tau +T) \bigr\vert ^{p} \\ &{}+2^{p}E \Bigl(\sup_{0\leq u\leq \tau } \bigl\vert x(t_{0}+\tau +T+u)-x(t_{0}+ \tau +T) \bigr\vert ^{p} \Bigr) \\ \leq{}& 2^{p}E \bigl\vert x(t_{0}+\tau +T) \bigr\vert ^{p}+2^{p}F_{1}(\tau ,p,T+ \tau )E \Vert \zeta \Vert ^{p}. \end{aligned}$$
(20)

Using (19) and (20), we have

$$\begin{aligned} E \bigl\vert x(t_{0}+2\tau +T) \bigr\vert ^{p}&\leq \bigl(\epsilon e^{(\lambda _{1}+\frac{1}{2} \sigma ^{2})p\tau }+2^{p}F_{1}( \tau ,p,T+\tau )+4^{p}F_{3}(\tau ,p,T+ \tau ) \bigr)E \Vert \zeta \Vert ^{p} \end{aligned}$$
(21)
$$\begin{aligned} & :=G(\tau , \epsilon , p, T)E \Vert \zeta \Vert ^{p}. \end{aligned}$$
(22)

Note that, for given \(\epsilon \in (0,1)\), \(G(\tau , \epsilon , p, T)\) is a monotonically increasing function of τ, \(G(0, \epsilon , p, T)= \epsilon <1\), and \(G(\tau , \epsilon , p, T)\rightarrow \infty \) as \(\tau \rightarrow \infty \). Now we claim that there exists a unique \(\tau _{0}>0\) solving the following equation:

$$\begin{aligned} \epsilon e^{(\lambda _{1}+\frac{1}{2}\sigma ^{2})p\tau }+2^{p}F_{1}( \tau ,p,T+ \tau )+4^{p}F_{3}(\tau ,p,T+\tau )=1. \end{aligned}$$
(23)
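Since the left-hand side of (23) is continuous and strictly increasing in τ with value less than 1 at \(\tau =0\), the bound \(\tau _{0}\) can be computed numerically by bisection. A sketch (the function G below is an illustrative monotone stand-in for the left-hand side of (23), not the actual expression):

```python
import math

def solve_tau0(G, tau_hi=1.0, tol=1e-10, max_iter=200):
    """Bisection for the unique root of G(tau) = 1, where G is
    continuous, strictly increasing, and G(0) < 1 (as for the
    left-hand side of equation (23))."""
    lo = 0.0
    while G(tau_hi) < 1.0:     # grow the bracket until it straddles 1
        tau_hi *= 2.0
    hi = tau_hi
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if G(mid) < 1.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Illustrative stand-in: eps * e^{a tau} + b tau, increasing with G(0) = eps < 1
eps, a, b = 0.5, 2.0, 1.0
G = lambda tau: eps * math.exp(a * tau) + b * tau
tau0 = solve_tau0(G)
print(tau0)   # root of 0.5 e^{2 tau} + tau = 1
```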

Indeed, the left-hand side of (23) is monotonically increasing as a function of the independent variable τ and equals ϵ at \(\tau =0\); as a result, equation (23) has a unique positive solution \(\tau _{0}\). Fix \(\tau \in (0,\tau _{0})\), and let \(\zeta \in \mathcal{L}^{2}_{\mathcal{F} _{t_{0}}} (\varOmega ,C([-\tau ,0]; R^{n}) )\) be an arbitrary initial value. From \(\tau <\tau _{0}\) we can verify that

$$\begin{aligned} \epsilon e^{(\lambda _{1}+\frac{1}{2}\sigma ^{2})p\tau }+2^{p}F_{1}(\tau ,p,T+ \tau )+4^{p}F_{3}(\tau ,p,T+\tau )< 1 \end{aligned}$$

and therefore we can find a suitable constant \(c>0\) such that

$$\begin{aligned} \epsilon e^{(\lambda _{1}+\frac{1}{2}\sigma ^{2})p\tau }+2^{p}F_{1}(\tau ,p,T+ \tau )+4^{p}F_{3}(\tau ,p,T+\tau )=e^{-c(T+2\tau )}. \end{aligned}$$

We obtain from (21) that

$$\begin{aligned} E \bigl\vert x(t_{0}+2\tau +T) \bigr\vert ^{p}\leq e^{-c(T+2\tau )}E \Vert \zeta \Vert ^{p}. \end{aligned}$$
(24)

Next we discuss the solution \(x(t)\) for \(t\geq t_{0}+2\tau +T\). We know that there is a unique solution to networks (2) for \(t>t_{0}-\tau \). In other words, we can regard \(x(t_{0}+2\tau +T)\) as the initial value of \(x(t)\) at \(t=t_{0}+2\tau +T\). By (24) we have

$$\begin{aligned} E \bigl\vert x \bigl(t_{0}+2(2\tau +T) \bigr) \bigr\vert ^{p}\leq e^{-c(T+2\tau )}E|x(t_{0}+2 \tau +T)|^{p}. \end{aligned}$$

Analyzing (24) and the equation above, we get

$$\begin{aligned} E \bigl\vert x \bigl(t_{0}+2(2\tau +T) \bigr) \bigr\vert ^{p}\leq e^{-2c(T+2\tau )}E \Vert \zeta \Vert ^{p}. \end{aligned}$$

After repeated iteration we have

$$\begin{aligned} E \bigl\vert x \bigl(t_{0}+n(2\tau +T) \bigr) \bigr\vert ^{p}\leq e^{-nc(T+2\tau )}E \Vert \zeta \Vert ^{p},\quad n=1,2,\ldots . \end{aligned}$$
(25)

Moreover, we can verify that the inequality also holds for \(n=0\). Using (13) and (25), we have

$$\begin{aligned} E \Bigl(\sup_{t_{0}+n\varXi \leq t\leq t_{0}+(n+1)\varXi } \bigl\vert x(t) \bigr\vert ^{p} \Bigr)\leq F_{2} E \bigl\vert x (t_{0}+n\varXi ) \bigr\vert ^{p}\leq F_{2} e^{-nc\varXi }E \Vert \zeta \Vert ^{p},\quad n=0,1,\ldots , \end{aligned}$$
(26)

where \(\varXi =T+2\tau \) and \(F_{2}\) is defined by (13). Using Markov’s inequality and (26),

$$\begin{aligned} P \Bigl(\sup_{t_{0}+n\varXi \leq t\leq t_{0}+(n+1)\varXi } \bigl\vert x(t) \bigr\vert ^{p} \geq e^{-\frac{1}{2}nc\varXi } \Bigr)&\leq e^{\frac{1}{2}nc\varXi }E \Bigl( \sup_{t_{0}+n\varXi \leq t\leq t_{0}+(n+1)\varXi } \bigl\vert x(t) \bigr\vert ^{p} \Bigr) \\ & \leq F_{2} e^{-\frac{1}{2}nc\varXi }E \Vert \zeta \Vert ^{p},\quad n=0,1, \ldots . \end{aligned}$$

Using the Borel–Cantelli lemma, we can verify that, for almost every ω, there exists an integer \(N_{0}=N_{0}(\omega )\) such that

$$\begin{aligned} \sup_{t_{0}+n\varXi \leq t\leq t_{0}+(n+1)\varXi } \bigl\vert x(t) \bigr\vert ^{p}< e^{- \frac{1}{2}nc\varXi },\quad n\geq N_{0}. \end{aligned}$$

That is,

$$\begin{aligned} \sup_{t_{0}+n\varXi \leq t\leq t_{0}+(n+1)\varXi } \frac{\log \vert x(t) \vert }{t}< -\frac{c}{2p}\cdot \frac{n\varXi }{t_{0}+(n+1)\varXi },\quad n\geq N_{0}. \end{aligned}$$

Letting \(n\rightarrow \infty \), for almost every ω we get

$$\begin{aligned} \limsup_{t\rightarrow +\infty }\frac{\log \vert x(t) \vert }{t}\leq - \frac{c}{2p}< 0 \end{aligned}$$

as desired. □

Remark 2

Shen and Wang studied the stabilization of recurrent neural networks by continuous noise (see [26]). Compared to the existing results, we show that neural networks can be stabilized by intermittent noise with time delay.

Step 1 of the proof of Theorem 3.1 yields a sufficient condition for stabilization by stochastic intermittent control without delay observations, as follows.

Corollary 3.2

Let Assumption 1 and \(2\lambda _{+}(-D+| \bar{A}|K)<\sigma ^{2}(1-\phi )\) hold. Networks (1) can be stabilized by the stochastic intermittent control \(\varSigma h(t)x(t)\,\mathrm{d}B(t)\).

Remark 3

Guo and Mao have discussed almost sure stabilization of delay differential systems by delay feedback control in [22]. Nevertheless, delay neural networks have distinct characteristics, and we can make full use of them; it is thus desirable to derive the stabilization condition accordingly. In this study the neural networks are stabilized by aperiodically intermittent noise based on delay observations. Compared with the results in [22], we further integrate the intermittent control strategy.

When \(\phi =0\), the white noise is continuous, and Theorem 3.1 implies a criterion on stabilization by delay feedback control.

Corollary 3.3

Let Assumption 1 and \(2\lambda _{+}(-D+| \bar{A}|K)<\sigma ^{2}\) hold. Networks (1) can be stabilized by the delay feedback control \(\varSigma x(t-\tau )\,\mathrm{d}B(t)\) provided \(\tau <\tau _{0}\), where \(\tau _{0}\) is the solution to (23).

4 Numerical example

A numerical example is presented in this section to verify the feasibility of the theorem above.

Example 1

Consider the two-neuron networks:

$$\begin{aligned} {\mathrm{d}}x(t)=\bigl(-Dx(t)+A f(x)\bigr)\,\mathrm{d}t, \end{aligned}$$
(27)

where \(x(t)= (x_{1}(t),x_{2}(t) )^{T}\), \(f(x)=\tanh x\), and the other parameters in networks (27) are selected as follows:

$$\begin{aligned} D= \begin{pmatrix} 0.1 & 0 \\ 0 &0.1 \end{pmatrix} ,\qquad A= \begin{pmatrix} 0.5 &0.5 \\ 0.5 &0.6 \end{pmatrix} . \end{aligned}$$

Figure 1(a) shows that networks (27) are unstable. The controller \(\varSigma h(t)x(t-\tau )\,\mathrm{d}B(t)\) is designed. That is,

$$\begin{aligned} \begin{aligned} &\mathrm{d}x(t)= \bigl(-Dx(t)+A f(x) \bigr)\,\mathrm{d}t+ \varSigma h(t)x(t- \tau )\,\mathrm{d}B(t), \\ & h(t)=\textstyle\begin{cases} 1, &t\in [t_{i},s_{i}], \\ 0, &t\in (s_{i},t_{i+1}], i=0,1,2,\ldots , \end{cases}\displaystyle \end{aligned} \end{aligned}$$
(28)

where \(\tau =0.015\), \(\varSigma =0.2I\).
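The trajectories below are generated by the EM scheme. A minimal Euler–Maruyama sketch for networks (28) under the parameters of Example 1 (the step size, horizon, and constant initial segment are illustrative choices, not taken from the paper):

```python
import numpy as np

def em_intermittent_delay(D, A, Sigma, tau, T, dt, h, x0, seed=1):
    """Euler-Maruyama scheme for networks (28):
    dx = (-D x + A tanh(x)) dt + Sigma h(t) x(t - tau) dB(t),
    with a constant initial segment x0 on [-tau, 0]."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    lag = int(round(tau / dt))                  # delay measured in steps
    steps = int(round(T / dt))
    path = np.empty((lag + steps + 1, n))
    path[: lag + 1] = x0                        # history on [-tau, 0]
    for k in range(steps):
        xk = path[lag + k]                      # x(t_k)
        x_del = path[k]                         # x(t_k - tau)
        dB = rng.standard_normal() * np.sqrt(dt)    # scalar Brownian increment
        drift = -D @ xk + A @ np.tanh(xk)
        path[lag + k + 1] = xk + drift * dt + h(k * dt) * (Sigma @ x_del) * dB
    return path[lag:]                           # samples on [0, T]

# Parameters of Example 1
D = np.diag([0.1, 0.1])
A = np.array([[0.5, 0.5], [0.5, 0.6]])
Sigma = 0.2 * np.eye(2)
h = lambda t: 1.0                               # phi = 0: continuous noise
x = em_intermittent_delay(D, A, Sigma, tau=0.015, T=10.0, dt=0.005,
                          h=h, x0=np.array([1.0, -1.0]))
print(x.shape)   # (2001, 2)
```

Passing an intermittent indicator for `h` (e.g. one built from the controlled intervals below) reproduces the aperiodically intermittent case.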

Figure 1

The left curve shows an unstable trajectory of \(X(t)\) generated by the EM scheme for system (27) without feedback control, while the right one shows a stable trajectory for system (28) by continuous noise with time delay \(\tau =0.015\)

We take the aperiodic controlled intervals \([0,0.1]\cup [2.0,2.1]\cup [4.0,4.1] \cup [6.0,6.1]\cup [8.0,8.1]\cup\cdots \) , or \([0,0.33]\cup [1.00,1.34] \cup [2.00,2.33]\cup [3.00,3.33]\cup [4.00,4.33]\cup [5.00, 5.34] \cup [6.00,6.33]\cup [7.00,7.33]\cup [8.0,8.33]\cup [9.00,9.34]\cup\cdots \) , or \([0,0.9]\cup [2.0,3.1]\cup [4.0,5.1]\cup [6.0,6.9]\cup [8.0,9.0]\cup \cdots \) ; the corresponding intermittent rates are \(0.05, 0.33, 0.5\), respectively. We draw four figures in MATLAB. The networks are stabilized by continuous white noise with delay observations when the time delay is 0.015 (Fig. 1b), 0.03 (Fig. 2a), or 0.06 (Fig. 2b), while they are not stabilized if the time delay is 0.12 (Fig. 3a). A larger time delay is attractive because it allows less frequent observation; however, the networks cannot be stabilized by continuous noise if the time delay is too large. We then fix the time delay \(\tau =0.03\) and switch the control strategy from continuous noise to intermittent noise, varying the intermittent rate ϕ (\(\phi =0.05\) (Fig. 3b), 0.33 (Fig. 4a), 0.5 (Fig. 4b), respectively). The networks are stabilized when \(\phi =0.33\) or \(\phi =0.5\), while they are unstable when \(\phi =0.05\); that is, the networks are unstable if the intermittent rate is too small. The networks work best when \(\phi =0.5\).
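The quoted rates can be checked directly: for each schedule they equal the fraction of the run on which the control is active. A quick sketch over the first and third schedules as listed (truncated to \([0,10]\)):

```python
def controlled_fraction(intervals, horizon):
    """Fraction of [0, horizon] on which the noise control is active."""
    return sum(b - a for a, b in intervals) / horizon

# First and third controlled-interval schedules from the text
sched1 = [(0.0, 0.1), (2.0, 2.1), (4.0, 4.1), (6.0, 6.1), (8.0, 8.1)]
sched3 = [(0.0, 0.9), (2.0, 3.1), (4.0, 5.1), (6.0, 6.9), (8.0, 9.0)]
print(controlled_fraction(sched1, 10.0))   # ≈ 0.05
print(controlled_fraction(sched3, 10.0))   # ≈ 0.5
```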

Figure 2

The left curve shows a stable trajectory of \(X(t)\) generated by the EM scheme for system (28) by continuous noise with time delay \(\tau =0.03\), while the right one shows a stable trajectory for system (28) by continuous noise with time delay \(\tau =0.06\). We can see that there is a slight difference between them

Figure 3

The left curve shows an unstable trajectory of \(X(t)\) generated by the EM scheme for system (28) by continuous noise with time delay \(\tau =0.12\), while the right one shows an unstable trajectory for system (28) by intermittent noise with time delay \(\tau =0.03\) and intermittent rate \(\phi =0.05\) on \([0,10]\)

Figure 4

The left curve shows a stable trajectory of \(X(t)\) generated by the EM scheme for system (28) by intermittent noise with time delay \(\tau =0.03\) (intermittent rate \(\phi =0.33\) on \([0,10]\)), while the right one shows a stable trajectory for system (28) by intermittent noise with time delay \(\tau =0.03\) (intermittent rate \(\phi =0.5\) on \([0,10]\))

The numerical example shows that the proposed methods are practical and efficient. Compared with the algorithm proposed by Mao (see [24]), we observe the states less frequently and cut costs by reducing the controlled time. The neural networks are stabilized by aperiodically intermittent control with delay observations.

5 Conclusions

In this study, we have investigated exponential stabilization of neural networks by aperiodically intermittent control based on delay observations. First, using the stochastic comparison principle, Itô's formula, and a sequence analysis technique, we showed that unstable neural networks can be stabilized by aperiodically intermittent noise. Second, exploiting the structure of neural networks, we showed that the networks are exponentially stabilized based on delay observations. Finally, a numerical example was provided to illustrate the effectiveness and superiority of the proposed approaches.

References

  1. Zeng, Z., Wang, J.: Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates. Neural Netw. 22(5–6), 651–657 (2009)

  2. Cao, Y., Samidurai, R., Sriraman, R.: Robust passivity analysis for uncertain neural networks with leakage delay and additive time-varying delays by using general activation function. Math. Comput. Simul. 155, 57–77 (2019)

  3. Samidurai, R., Sriraman, R., Cao, J., Tu, Z.: Nonfragile stabilization for uncertain system with interval time-varying delays via a new double integral inequality. Math. Methods Appl. Sci. 41, 6272–6287 (2018)

  4. Cao, J., Rakkiyappan, R., Maheswari, K., Chandrasekar, A.: Exponential \(H_{\infty }\) filtering analysis for discrete-time switched neural networks with random delays using sojourn probabilities. Sci. China Technol. Sci. 59, 387–402 (2016)

  5. Guo, Z., Wang, J., Yan, Z.: Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 48, 158–172 (2013)

  6. Zhu, E., Yin, G.: Stability in distribution of stochastic delay recurrent neural networks with Markovian switching. Neural Comput. Appl. 27, 2141–2151 (2016)

  7. Zhang, H., Wang, Y.: Stability analysis of Markovian jumping stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. 19, 366–370 (2008)

  8. Lei, L., Cao, J., Qian, C.: pth moment exponential input-to-state stability of delayed recurrent neural networks with Markovian switching via vector Lyapunov function. IEEE Trans. Neural Netw. Learn. Syst. 99, 1–12 (2017)

  9. Mohamad, S., Gopalsamy, K.: Exponential stability of continuous-time and discrete-time cellular neural networks with delays. Appl. Math. Comput. 135, 17–38 (2003)

  10. Wu, A., Zeng, Z.: Exponential stabilization of memristive neural networks with time delays. IEEE Trans. Neural Netw. Learn. Syst. 23, 1919–1929 (2012)

  11. Zhang, G., Shen, Y.: Exponential stabilization of memristor-based chaotic neural networks with time-varying delays via intermittent control. IEEE Trans. Neural Netw. Learn. Syst. 26, 1431–1441 (2015)

  12. Zhao, H., Cai, G.: Exponential Synchronization of Complex Delayed Dynamical Networks with Uncertain Parameters via Intermittent Control. Springer, Berlin (2015)

  13. Yu, J., Hu, C., Jiang, H., Teng, Z.: Exponential synchronization of Cohen–Grossberg neural networks via periodically intermittent control. Neurocomputing 74, 1776–1782 (2011)

  14. Gao, J., Cao, J.: Aperiodically intermittent synchronization for switching complex networks dependent on topology structure. Adv. Differ. Equ. 2017, 244 (2017)

  15. Feng, Y., Yang, X., Song, Q., Cao, J.: Synchronization of memristive neural networks with mixed delays via quantized intermittent control. Appl. Math. Comput. 339, 874–887 (2018)

  16. Lu, J., Ho, D., Wang, Z.: Pinning stabilization of linearly coupled stochastic neural networks via minimum number of controllers. IEEE Trans. Neural Netw. 20, 1617–1629 (2009)

  17. Lu, J., Wang, Z., Cao, J., Ho, D.: Pinning impulsive stabilization of nonlinear dynamical networks with time-varying delay. Int. J. Bifurc. Chaos 22, 1250176 (2012)

  18. Liu, X., Park, J., Jiang, N., Cao, J.: Nonsmooth finite-time stabilization of neural networks with discontinuous activations. Neural Netw. 52, 25–32 (2014)

  19. Wang, L., Shen, Y., Zhang, G.: Finite-time stabilization and adaptive control of memristor-based delayed neural networks. IEEE Trans. Neural Netw. 28, 2649–2659 (2017)

  20. Wu, F., Hu, S.: Suppression and stabilisation of noise. Int. J. Control 82, 2150–2157 (2009)

  21. Liu, L., Shen, Y.: Noise suppresses explosive solutions of differential systems whose coefficients obey the polynomial growth condition. Automatica 48, 619–624 (2012)

  22. Guo, Q., Mao, X., Yue, R.: Almost sure exponential stability of stochastic differential delay equations. SIAM J. Control Optim. 54, 1219–1233 (2016)

  23. Zhu, S., Shen, Y., Chen, G.: Noise suppress or express exponential growth for hybrid Hopfield neural networks. Phys. Lett. A 374, 2035–2043 (2010)

  24. Mao, X.: Almost sure exponential stabilization by discrete-time stochastic feedback control. IEEE Trans. Autom. Control 61, 1619–1624 (2016)

  25. Mao, X., Marion, G., Renshaw, E.: Environmental noise suppresses explosion in population dynamics. Stoch. Anal. Appl. 97, 95–110 (2002)

  26. Shen, Y., Wang, J.: Noise induced stabilization of the recurrent neural networks with mixed time-varying delays and Markovian-switching parameters. IEEE Trans. Neural Netw. 18, 1457–1462 (2007)

  27. Russo, G., Shorten, R.: On noise-induced synchronization and consensus (2016). arXiv:1602.06467

  28. Ma, L., Wang, Z., Fan, Q., Liu, Y.: Consensus control of stochastic multi-agent systems: a survey. Sci. China Inf. Sci. 60, 5–19 (2017)

  29. Liao, X., Mao, X.: Exponential stability and instability of stochastic neural networks. Stoch. Anal. Appl. 14, 165–185 (1996)

  30. Sun, J.: Delay-dependent stability criteria for time-delay chaotic systems via time-delay feedback control. Chaos Solitons Fractals 21, 143–150 (2004)

  31. Zhu, Q., Zhang, Q.: pth moment exponential stabilization of hybrid stochastic differential equations by feedback controls based on discrete-time state observations with a time delay. IET Control Theory Appl. 11, 1992–2003 (2017)

  32. Chen, W., Xu, S., Zou, Y.: Stabilization of hybrid neutral stochastic differential delay equations by delay feedback control. Syst. Control Lett. 88, 1–13 (2016)

  33. Mao, X., Lam, J., Huang, L.: Stabilisation of hybrid stochastic differential equations by delay feedback control. Syst. Control Lett. 57, 927–935 (2008)

  34. Maharajan, C., Raja, R., Cao, J., Ravi, J., Rajchakit, G.: Global exponential stability of Markovian jumping stochastic impulsive uncertain BAM neural networks with leakage, mixed time delays, and α-inverse Hölder activation functions. Adv. Differ. Equ. 2018, 113 (2018)

  35. Wang, J., Xu, C., Chen, M.Z.Q., Feng, J., Chen, G.: Stochastic feedback coupling synchronization of networked harmonic oscillators. Automatica 87, 404–411 (2018)

  36. Gawthrop, P.: Intermittent control: a computational theory of human control. Biol. Cybern. 104, 31–51 (2011)

  37. Liu, X., Chen, T.: Synchronization of nonlinear coupled networks via aperiodically intermittent pinning control. IEEE Trans. Neural Netw. Learn. Syst. 26, 113–126 (2015)

  38. Liu, X., Chen, T.: Cluster synchronization in directed networks via intermittent pinning control. IEEE Trans. Neural Netw. 22, 1009–1020 (2011)

  39. Lu, J., Ho, D., Cao, J.: A unified synchronization criterion for impulsive dynamical networks. Automatica 46, 1215–1221 (2010)

  40. Cai, S., Hao, J., He, Q., Liu, Z.: Exponential synchronization of complex delayed dynamical networks via pinning periodically intermittent control. Phys. Lett. A 375, 1965–1971 (2011)

  41. Mao, X., Yuan, C.: Stochastic Differential Equations with Markovian Switching. Imperial College Press, London (2006)


Acknowledgements

The authors would like to express their sincere thanks to Professor Jifeng Chu for his great and consistent support; without him this work would not have been possible.

Funding

This research work is supported by the Fundamental Research Funds for the Central Universities (Grant No. 2018B19914), the National Natural Science Foundation of China (Grant No. 61773152), the China Postdoctoral Science Foundation (Grant Nos. 2016M601698 and 2017T100318), the Jiangsu Province Postdoctoral Science Foundation (Grant No. 1701078B), and the Qing Lan Project of Jiangsu Province, China.

Author information

Contributions

The authors have made the same contribution. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lei Liu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Reprints and permissions

About this article


Cite this article

He, X., Liu, L. & Feng, L. Almost sure exponential stabilization of neural networks by aperiodically intermittent control based on delay observations. Adv Differ Equ 2019, 353 (2019). https://doi.org/10.1186/s13662-019-2260-8
