

Adaptive almost surely asymptotically synchronization for stochastic delayed neural networks with Markovian switching

Abstract

In this paper, the problem of adaptive almost surely asymptotically synchronization for stochastic delayed neural networks with Markovian switching is considered. By utilizing a new nonnegative function and the M-matrix approach, we derive a sufficient condition ensuring adaptive almost surely asymptotically synchronization for stochastic delayed neural networks. Appropriate parameter update laws are found via adaptive feedback control techniques. We also present an illustrative numerical example to demonstrate the effectiveness of the M-matrix-based synchronization condition derived in this paper.

1 Introduction

As is well known, stochastic delayed neural networks (SDNNs) with Markovian switching play an important role in science and engineering owing to their many practical applications, including image processing, pattern recognition, associative memory, and optimization problems [1, 2]. In the past several decades, the characteristics of SDNNs with Markovian switching, such as various types of stability [3, 4], have received a lot of attention from scholars in various fields of nonlinear science. Wang et al. [5] considered exponential stability for delayed recurrent neural networks with Markovian jumping parameters. Zhang et al. investigated stochastic stability for Markovian jumping genetic regulatory networks with mixed time delays [6]. Huang et al. investigated robust stability for stochastic delayed additive neural networks with Markovian switching [7]. Researchers have presented a number of sufficient conditions to achieve global asymptotic stability and exponential stability for SDNNs with Markovian switching [8–11]. Time delays, as a source of instability and oscillations, always appear in various aspects of neural networks, and they have recently received a lot of attention [12–15]. The linear matrix inequality (LMI, for short) approach is one of the most extensively used tools in recent publications [16, 17].

In recent years, it has been found that the synchronization of coupled neural networks has potential applications in many fields such as biology and engineering [18–21]. In coupled nonlinear dynamical systems, many neural networks may experience abrupt changes in their structure and parameters caused by phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. Synchronization may help to protect interconnected neurons from the influence of random perturbations which affect all neurons in the system. Therefore, from the neurophysiological as well as the theoretical point of view, it is important to investigate the impact of synchronization on SDNNs. Moreover, in adaptive synchronization for neural networks, the control law needs to be adapted or updated in real time, and adaptive synchronization has therefore been used in real neural network control, such as parameter estimation adaptive control and model reference adaptive control. Several stochastic synchronization results have been reported. For example, in [22], an adaptive feedback controller is designed to achieve complete synchronization for unidirectionally coupled delayed neural networks with stochastic perturbation. In [23], via adaptive feedback control techniques with suitable parameter update laws, several sufficient conditions are derived to ensure lag synchronization for unknown delayed neural networks with or without noise perturbation. In [24], a class of chaotic neural networks is discussed, and based on the Lyapunov stability method and the Halanay inequality lemma, a delay-independent sufficient exponential synchronization condition is derived. A simple adaptive feedback scheme has been used for the synchronization of neural networks with or without time-varying delay in [25]. A general model of an array of N linearly coupled delayed neural networks with Markovian jumping hybrid coupling is introduced in [26], and some sufficient criteria have been derived to ensure synchronization in mean square in an array of jump neural networks with mixed delays and hybrid coupling.

It should be pointed out that, to the best of our knowledge, the adaptive almost surely asymptotically synchronization for the SDNNs with Markovian switching has seldom been addressed although it is of practical importance. Motivated by the above statements, in this paper we aim to analyze the adaptive almost surely asymptotically synchronization for the SDNNs with Markovian switching. M-matrix-based criteria are developed for determining whether the SDNNs with Markovian switching can be adaptively almost surely asymptotically synchronized. An adaptive feedback controller is proposed for the SDNNs with Markovian switching, and a numerical simulation is given to show the validity of the developed results.

The rest of this paper is organized as follows: in Section 2, the problem is formulated and some preliminaries are given; in Section 3, a sufficient condition to ensure the adaptive almost surely asymptotically synchronization for the SDNNs with Markovian switching is derived; in Section 4, an example of numerical simulation is given to illustrate the validity of the results; Section 5 gives the conclusion of the paper.

2 Problem formulation and preliminaries

Throughout this paper, $\mathbb{E}$ stands for the mathematical expectation operator, $\|x\|_2$ is used to denote the Euclidean vector norm defined by $\|x\|_2=\sqrt{\sum_{i=1}^{n}x_i^2}$, 'T' represents the transpose of a matrix or a vector, and $I_n$ is the $n$-dimensional identity matrix.

Let $\{r(t)\}_{t\ge 0}$ be a right-continuous Markov chain on the probability space taking values in a finite state space $S=\{1,2,\ldots,N\}$ with generator $\Gamma=(\gamma_{ij})_{N\times N}$ given by

$$ P\{r(t+\delta)=j \mid r(t)=i\} = \begin{cases} \gamma_{ij}\delta + o(\delta) & \text{if } i\ne j, \\ 1+\gamma_{ii}\delta + o(\delta) & \text{if } i=j, \end{cases} $$

where $\delta>0$ and $\gamma_{ij}\ge 0$ is the transition rate from $i$ to $j$ if $i\ne j$, while

$$ \gamma_{ii} = -\sum_{j\ne i}\gamma_{ij}. $$

We denote $r(0)=r_0$.
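For the simulations considered later, it is useful to be able to generate sample paths of such a chain. The following Python sketch (an illustration of ours; the function name, the time grid and the seed are arbitrary choices) samples a right-continuous path of $r(t)$ by drawing exponential holding times with rate $-\gamma_{ii}$ and selecting the next mode with probabilities proportional to the off-diagonal rates.

```python
import numpy as np

def sample_markov_chain(Gamma, r0, T, dt, seed=0):
    """Sample a right-continuous Markov chain r(t) on [0, T] with generator Gamma.

    Gamma[i, j] (i != j) is the transition rate from mode i to mode j and
    Gamma[i, i] = -sum_{j != i} Gamma[i, j] < 0 (every mode is assumed to have
    a positive exit rate).  Returns the mode index at each grid time k*dt.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    modes = np.empty(n_steps + 1, dtype=int)
    i = r0
    t_jump = rng.exponential(1.0 / -Gamma[i, i])      # time of the next jump
    for k in range(n_steps + 1):
        t = k * dt
        while t_jump <= t:                            # process all jumps up to time t
            rates = np.maximum(Gamma[i], 0.0)         # off-diagonal jump rates
            rates[i] = 0.0
            i = rng.choice(len(rates), p=rates / rates.sum())
            t_jump += rng.exponential(1.0 / -Gamma[i, i])
        modes[k] = i                                  # right-continuous: state at time t
    return modes

# Example with a two-state generator (matching the one used in Section 4).
Gamma = np.array([[-1.2, 1.2],
                  [ 0.5, -0.5]])
r_path = sample_markov_chain(Gamma, r0=0, T=10.0, dt=0.01)
```

Each entry of `r_path` gives the mode occupied at the corresponding grid time, which is all that a Euler-type discretisation of the systems below requires.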

In this paper, we consider the neural network, called the drive system, represented in the following compact form:

$$ dx(t) = \bigl[-C(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))f(x(t-\tau(t))) + D(r(t))\bigr]\,dt, $$
(1)

where $t\ge 0$ is the time, $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T\in\mathbb{R}^n$ is the state vector associated with $n$ neurons, $f(x(t))=(f_1(x_1(t)),f_2(x_2(t)),\ldots,f_n(x_n(t)))^T\in\mathbb{R}^n$ denotes the activation functions of the neurons, and $\tau(t)$ is the transmission delay satisfying $0<\tau(t)\le\bar\tau$ and $\dot\tau(t)\le\hat\tau<1$, where $\bar\tau$, $\hat\tau$ are constants. For convenience, for $t\ge 0$, we write $r(t)=i$ and $A(r(t))=A_i$, $B(r(t))=B_i$, $C(r(t))=C_i$, $D(r(t))=D_i$, respectively. In model (1), furthermore, for $i\in S$, $C_i=\operatorname{diag}\{c_1^i,c_2^i,\ldots,c_n^i\}$ (i.e., $C_i$ is a diagonal matrix) has positive and unknown entries $c_k^i>0$, $A_i=(a_{jk}^i)_{n\times n}$ and $B_i=(b_{jk}^i)_{n\times n}$ are the connection weight and the delayed connection weight matrices, respectively, and $D_i=(d_1^i,d_2^i,\ldots,d_n^i)^T\in\mathbb{R}^n$ is the constant external input vector.
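Once the mode $r(t)=i$ is fixed on a time step, the drive system (1) is an ordinary delayed differential equation, so it can be integrated with a simple Euler scheme in which the delayed state $x(t-\tau(t))$ is read from a stored history buffer. The following Python sketch (ours; the function name and the constant-history convention are illustrative assumptions rather than part of the model) shows a single such step for a constant delay $\tau$.

```python
import numpy as np

def drive_euler_step(x_hist, s, dt, tau, C, A, B, D, f=np.tanh):
    """One Euler step of the drive system (1) in a fixed mode:

        x_{s+1} = x_s + [-C x_s + A f(x_s) + B f(x_{s-d}) + D] * dt,

    where d = round(tau/dt).  x_hist[j] holds the state at time j*dt; for
    simplicity the history on [-tau, 0] is taken to be constant and equal to
    x_hist[0] (hence the clamped index below)."""
    d = int(round(tau / dt))
    x_s = x_hist[s]
    x_delay = x_hist[max(s - d, 0)]          # delayed state x(t - tau)
    return x_s + (-C @ x_s + A @ f(x_s) + B @ f(x_delay) + D) * dt
```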

For the drive system (1), a response system is constructed as follows:

$$ \begin{aligned} dy(t) = {}& \bigl[-C(r(t))y(t) + A(r(t))f(y(t)) + B(r(t))f(y(t-\tau(t))) + D(r(t)) + U(t)\bigr]\,dt \\ & + \sigma\bigl(t, r(t), y(t)-x(t), y(t-\tau(t))-x(t-\tau(t))\bigr)\,d\omega(t), \end{aligned} $$
(2)

where $y(t)$ is the state vector of the response system (2) and $U(t)=(u_1(t),u_2(t),\ldots,u_n(t))^T\in\mathbb{R}^n$ is a control input vector of the form

$$ U(t) = K(t)\bigl(y(t)-x(t)\bigr) = \operatorname{diag}\{k_1(t),k_2(t),\ldots,k_n(t)\}\bigl(y(t)-x(t)\bigr), $$
(3)

$\omega(t)=(\omega_1(t),\omega_2(t),\ldots,\omega_n(t))^T$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega,\mathcal{F},P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ (i.e., $\mathcal{F}_t=\sigma\{\omega(s):0\le s\le t\}$ is a $\sigma$-algebra) and is independent of the Markov process $\{r(t)\}_{t\ge 0}$, and $\sigma:\mathbb{R}_+\times S\times\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}^{n\times n}$ is the noise intensity matrix, which can be regarded as a result of external random fluctuations and other probabilistic causes.

Let $e(t)=y(t)-x(t)$. For simplicity, we write $e(t-\tau(t))=e_\tau(t)$ and $f(x(t)+e(t))-f(x(t))=g(e(t))$. From the drive system (1) and the response system (2), the error system can be represented as follows:

$$ \begin{aligned} de(t) = {}& \bigl[-C(r(t))e(t) + A(r(t))g(e(t)) + B(r(t))g(e_\tau(t)) + U(t)\bigr]\,dt \\ & + \sigma\bigl(t, r(t), e(t), e_\tau(t)\bigr)\,d\omega(t). \end{aligned} $$
(4)

The initial condition associated with system (4) is given in the following form:

$$ e(s) = \xi(s), \quad s\in[-\bar\tau, 0], $$

for any $\xi\in L^2_{\mathcal{F}_0}([-\bar\tau,0];\mathbb{R}^n)$, where $L^2_{\mathcal{F}_0}([-\bar\tau,0];\mathbb{R}^n)$ is the family of all $\mathcal{F}_0$-measurable $C([-\bar\tau,0];\mathbb{R}^n)$-valued random variables satisfying $\sup_{-\bar\tau\le s\le 0}\mathbb{E}|\xi(s)|^2<\infty$, and $C([-\bar\tau,0];\mathbb{R}^n)$ denotes the family of all continuous $\mathbb{R}^n$-valued functions $\xi(s)$ on $[-\bar\tau,0]$ with the norm $\|\xi\|=\sup_{-\bar\tau\le s\le 0}|\xi(s)|$.

To obtain the main result, we need the following assumptions.

Assumption 1 The activation functions of the neurons f(x(t)) satisfy the Lipschitz condition. That is, there exists a constant L>0 such that

$$ |f(u)-f(v)| \le L|u-v|, \quad \forall u, v\in\mathbb{R}^n. $$

Assumption 2 The noise intensity matrix $\sigma(\cdot,\cdot,\cdot,\cdot)$ satisfies the linear growth condition. That is, there exist two positive constants $H_1$ and $H_2$ such that

$$ \operatorname{trace}\Bigl(\sigma^T\bigl(t,r(t),u(t),v(t)\bigr)\,\sigma\bigl(t,r(t),u(t),v(t)\bigr)\Bigr) \le H_1|u(t)|^2 + H_2|v(t)|^2 $$

for all $(t,r(t),u(t),v(t))\in\mathbb{R}_+\times S\times\mathbb{R}^n\times\mathbb{R}^n$.
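For instance, for a noise intensity of the simple form $\sigma(t,i,u,v)=(a u_1,\,b v_2)^T$ with constants $a$ and $b$ (the type of intensity used in the example of Section 4), one has

$$ \operatorname{trace}\bigl(\sigma^T\sigma\bigr) = a^2 u_1^2 + b^2 v_2^2 \le a^2|u|^2 + b^2|v|^2, $$

so Assumption 2 holds with $H_1=a^2$ and $H_2=b^2$.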

Assumption 3 In the drive system (1)

$$ f(0)\equiv 0, \qquad \sigma(t_0, r_0, 0, 0)\equiv 0. $$

Remark 1 Under Assumptions 1–3, the error system (4) admits an equilibrium point (trivial solution) $e(t)\equiv 0$, $t\ge 0$.

The following stability concept and synchronization concept are needed in this paper.

Definition 1 The trivial solution $e(t;\xi)$ of the error system (4) is said to be almost surely asymptotically stable if

$$ P\Bigl(\lim_{t\to\infty}\bigl|e(t;\xi)\bigr| = 0\Bigr) = 1 $$

for any $\xi\in L^2_{\mathcal{F}_0}([-\bar\tau,0];\mathbb{R}^n)$.

The response system (2) and the drive system (1) are said to be almost surely asymptotically synchronized if the error system (4) is almost surely asymptotically stable.

The main purpose of the rest of this paper is to establish a criterion for the adaptive almost surely asymptotically synchronization of the drive system (1) and the response system (2) by using adaptive feedback control and M-matrix techniques.

To this end, we introduce some concepts and lemmas which will be frequently used in the proofs of our main results.

Definition 2 [27]

A square matrix $M=(m_{ij})_{n\times n}$ is called a nonsingular M-matrix if $M$ can be expressed in the form $M=sI_n-G$ with some $G\ge 0$ (i.e., each element of $G$ is nonnegative) and $s>\rho(G)$, where $\rho(G)$ is the spectral radius of $G$.

Lemma 1 [8]

If $M=(m_{ij})_{n\times n}\in\mathbb{R}^{n\times n}$ with $m_{ij}\le 0$ ($i\ne j$), then the following statements are equivalent:

(1) $M$ is a nonsingular M-matrix.

(2) Every real eigenvalue of $M$ is positive.

(3) $M$ is positive stable. That is, $M^{-1}$ exists and $M^{-1}>0$ (i.e., $M^{-1}\ge 0$ and at least one element of $M^{-1}$ is positive).
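In numerical work it is convenient to test the nonsingular M-matrix property directly. The Python sketch below (our illustration; the function names are ours) implements Definition 2 by writing $M=sI_n-G$ for a sufficiently large $s$ and comparing $s$ with $\rho(G)$, together with the equivalent eigenvalue test of Lemma 1(2).

```python
import numpy as np

def is_nonsingular_M_matrix(M, tol=1e-12):
    """Check Definition 2: M = s*I - G with G >= 0 and s > rho(G).

    Off-diagonal entries of M must be nonpositive; any s >= max_i m_ii then
    gives G = s*I - M >= 0, and the answer does not depend on the choice of s."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    off = M - np.diag(np.diag(M))
    if np.any(off > tol):                       # off-diagonal entries must be <= 0
        return False
    s = np.max(np.diag(M)) + 1.0
    G = s * np.eye(n) - M                       # G >= 0 by construction
    rho = np.max(np.abs(np.linalg.eigvals(G)))  # spectral radius of G
    return s > rho + tol

def is_nonsingular_M_matrix_eig(M, tol=1e-12):
    """Equivalent test from Lemma 1(2): every real eigenvalue of M is positive."""
    eigs = np.linalg.eigvals(np.asarray(M, dtype=float))
    real_eigs = eigs[np.abs(eigs.imag) < tol].real
    return real_eigs.size > 0 and bool(np.all(real_eigs > tol))
```

Either test can be applied to the matrix appearing in Theorem 1 below.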

Lemma 2 [5] Let $x\in\mathbb{R}^n$, $y\in\mathbb{R}^n$. Then

$$ x^T y + y^T x \le \epsilon\, x^T x + \epsilon^{-1} y^T y $$

for any $\epsilon>0$.

Consider an n-dimensional stochastic delayed differential equation (SDDE, for short) with Markovian switching

$$ dx(t) = f\bigl(t, r(t), x(t), x_\tau(t)\bigr)\,dt + g\bigl(t, r(t), x(t), x_\tau(t)\bigr)\,d\omega(t) $$
(5)

on $t\in[0,\infty)$ with the initial data given by

$$ \{x(\theta) : -\bar\tau\le\theta\le 0\} = \xi \in L^2_{\mathcal{F}_0}\bigl([-\bar\tau,0];\mathbb{R}^n\bigr). $$

If $V\in C^{2,1}(\mathbb{R}_+\times S\times\mathbb{R}^n;\mathbb{R}_+)$, define an operator $LV$ from $\mathbb{R}_+\times S\times\mathbb{R}^n\times\mathbb{R}^n$ to $\mathbb{R}$ by

$$ \begin{aligned} LV(t,i,x,x_\tau) = {}& V_t(t,i,x) + V_x(t,i,x) f(t,i,x,x_\tau) \\ & + \tfrac{1}{2}\operatorname{trace}\bigl(g^T(t,i,x,x_\tau)\,V_{xx}(t,i,x)\,g(t,i,x,x_\tau)\bigr) + \sum_{j=1}^{N}\gamma_{ij} V(t,j,x), \end{aligned} $$

where

$$ V_t(t,i,x) = \frac{\partial V(t,i,x)}{\partial t}, \qquad V_x(t,i,x) = \Bigl(\frac{\partial V(t,i,x)}{\partial x_1}, \frac{\partial V(t,i,x)}{\partial x_2}, \ldots, \frac{\partial V(t,i,x)}{\partial x_n}\Bigr), \qquad V_{xx}(t,i,x) = \Bigl(\frac{\partial^2 V(t,i,x)}{\partial x_j\,\partial x_k}\Bigr)_{n\times n}. $$

For the SDDE with Markovian switching, we have the Dynkin formula as follows.

Lemma 3 (Dynkin formula) [8, 28]

Let $V\in C^{2,1}(\mathbb{R}_+\times S\times\mathbb{R}^n;\mathbb{R}_+)$ and let $\tau_1$, $\tau_2$ be bounded stopping times such that $0\le\tau_1\le\tau_2$ a.s. (i.e., almost surely). If $V(t,r(t),x(t))$ and $LV(t,r(t),x(t),x_\tau(t))$ are bounded on $t\in[\tau_1,\tau_2]$ with probability 1, then

$$ \mathbb{E}V\bigl(\tau_2, r(\tau_2), x(\tau_2)\bigr) = \mathbb{E}V\bigl(\tau_1, r(\tau_1), x(\tau_1)\bigr) + \mathbb{E}\int_{\tau_1}^{\tau_2} LV\bigl(s, r(s), x(s), x_\tau(s)\bigr)\,ds. $$

For the SDDE with Markovian switching again, the following hypothesis is imposed on the coefficients f and g.

Assumption 4 Both $f$ and $g$ satisfy the local Lipschitz condition. That is, for each $h>0$, there is an $L_h>0$ such that

$$ \bigl|f(t,i,x,y)-f(t,i,\bar{x},\bar{y})\bigr| + \bigl|g(t,i,x,y)-g(t,i,\bar{x},\bar{y})\bigr| \le L_h\bigl(|x-\bar{x}| + |y-\bar{y}|\bigr) $$

for all $(t,i)\in\mathbb{R}_+\times S$ and those $x,y,\bar{x},\bar{y}\in\mathbb{R}^n$ with $|x|\vee|y|\vee|\bar{x}|\vee|\bar{y}|\le h$. Moreover,

$$ \sup\bigl\{|f(t,i,0,0)|\vee|g(t,i,0,0)| : t\ge 0,\ i\in S\bigr\} < \infty. $$

Now we cite a useful result given by Yuan and Mao [29].

Lemma 4 [29]

Let Assumption 4 hold. Assume that there are functions $V\in C^{2,1}(\mathbb{R}_+\times S\times\mathbb{R}^n;\mathbb{R}_+)$, $\gamma\in L^1(\mathbb{R}_+;\mathbb{R}_+)$ and $w_1, w_2\in C(\mathbb{R}^n;\mathbb{R}_+)$ such that

$$ LV(t,i,x,y) \le \gamma(t) - w_1(x) + w_2(y), \quad \forall (t,i,x,y)\in\mathbb{R}_+\times S\times\mathbb{R}^n\times\mathbb{R}^n, $$
(6)
$$ w_1(0)=w_2(0)=0, \qquad w_1(x)>w_2(x), \quad \forall x\ne 0, $$
(7)

and

$$ \lim_{|x|\to\infty}\ \inf_{0\le t<\infty,\ i\in S} V(t,i,x) = \infty. $$
(8)

Then the solution of Eq. (5) is almost surely asymptotically stable.

3 Main results

In this section, we give a criterion of the adaptive almost surely asymptotically synchronization for the drive system (1) and the response system (2).

Theorem 1 Assume that $M := -\bigl(\eta I_N + \Gamma\bigr)$ is a nonsingular M-matrix, where

$$ \eta = -2\gamma + \alpha + L^2 + \beta + H_1, \qquad \gamma = \min_{i\in S}\min_{1\le j\le n} c_j^i, \qquad \alpha = \max_{i\in S}\bigl(\rho(A_i)\bigr)^2, \qquad \beta = \max_{i\in S}\bigl(\rho(B_i)\bigr)^2. $$

Let $m>0$, write $\vec{m}=(m,m,\ldots,m)^T$, and set $(q_1,q_2,\ldots,q_N)^T := M^{-1}\vec{m}\gg 0$ (i.e., all elements of $M^{-1}\vec{m}$ are positive by Lemma 1). Assume also that

$$ (L^2 + H_2)\,\bar{q} < -\Bigl(\eta q_i + \sum_{k=1}^{N}\gamma_{ik} q_k\Bigr), \quad \forall i\in S, $$
(9)

where $\bar{q} = \max_{i\in S} q_i$.

Under Assumptions 1–3, the noise-perturbed response system (2) can be adaptively almost surely asymptotically synchronized with the delayed neural network (1) if the update law of the feedback control gain $K(t)$ of the controller (3) is chosen as

$$ \dot{k}_j = -q_i \alpha_j e_j^2, $$
(10)

where $\alpha_j>0$ ($j=1,2,\ldots,n$) are arbitrary constants.

Proof Under Assumptions 1–3, it can be seen that the error system (4) satisfies Assumption 4.

For each $i\in S$, choose a nonnegative function as follows:

$$ V(t,i,e) = q_i|e|^2 + \sum_{j=1}^{n}\frac{1}{\alpha_j}k_j^2. $$

Then it is obvious that condition (8) holds.

Computing LV(t,i,e, e τ ) along the trajectory of the error system (4), and using (10), one can obtain that

$$ \begin{aligned} LV(t,i,e,e_\tau) = {}& V_t + V_e\bigl[-C_i e + A_i g(e) + B_i g(e_\tau) + U(t)\bigr] + \tfrac{1}{2}\operatorname{trace}\bigl(\sigma^T(t,i,e,e_\tau)\,V_{ee}\,\sigma(t,i,e,e_\tau)\bigr) + \sum_{k=1}^{N}\gamma_{ik} V(t,k,e) \\ = {}& 2\sum_{j=1}^{n}\frac{1}{\alpha_j}k_j\dot{k}_j + 2q_i e^T\bigl[-C_i e + A_i g(e) + B_i g(e_\tau) + U(t)\bigr] + q_i\operatorname{trace}\bigl(\sigma^T(t,i,e,e_\tau)\,\sigma(t,i,e,e_\tau)\bigr) + \sum_{k=1}^{N}\gamma_{ik} q_k|e|^2 \\ = {}& 2q_i e^T\bigl[-C_i e + A_i g(e) + B_i g(e_\tau)\bigr] + q_i\operatorname{trace}\bigl(\sigma^T(t,i,e,e_\tau)\,\sigma(t,i,e,e_\tau)\bigr) + \sum_{k=1}^{N}\gamma_{ik} q_k|e|^2. \end{aligned} $$
(11)
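In the second equality above we used the fact that the rows of the generator sum to zero, so the adaptation term in $V$ drops out of the Markovian part:

$$ \sum_{k=1}^{N}\gamma_{ik}V(t,k,e) = \sum_{k=1}^{N}\gamma_{ik}q_k|e|^2 + \Bigl(\sum_{k=1}^{N}\gamma_{ik}\Bigr)\sum_{j=1}^{n}\frac{1}{\alpha_j}k_j^2 = \sum_{k=1}^{N}\gamma_{ik}q_k|e|^2; $$

in the last equality, the update law (10) is chosen precisely so that the control and adaptation terms cancel:

$$ 2\sum_{j=1}^{n}\frac{1}{\alpha_j}k_j\dot{k}_j + 2q_i e^T U(t) = -2q_i\sum_{j=1}^{n}k_j e_j^2 + 2q_i\sum_{j=1}^{n}k_j e_j^2 = 0. $$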

Now, using Assumptions 1 and 2 together with Lemma 2 yields

$$ e^T C_i e \ge \gamma |e|^2, $$
(12)
$$ 2 e^T A_i g(e) \le e^T A_i (A_i)^T e + g^T(e)g(e) \le (\alpha + L^2)|e|^2, $$
(13)
$$ 2 e^T B_i g(e_\tau) \le e^T B_i (B_i)^T e + g^T(e_\tau)g(e_\tau) \le \beta|e|^2 + L^2|e_\tau|^2, $$
(14)

and

$$ q_i \operatorname{trace}\bigl(\sigma^T(t,i,e,e_\tau)\,\sigma(t,i,e,e_\tau)\bigr) \le q_i\bigl(H_1|e|^2 + H_2|e_\tau|^2\bigr). $$
(15)

Substituting (12)–(15) into (11) yields

$$ LV(t,i,e,e_\tau) \le \Bigl(\eta q_i + \sum_{k=1}^{N}\gamma_{ik} q_k\Bigr)|e|^2 + (L^2+H_2)\,q_i\,|e_\tau|^2 \le -m|e|^2 + (L^2+H_2)\,\bar{q}\,|e_\tau|^2, $$
(16)

where $m = -\bigl(\eta q_i + \sum_{k=1}^{N}\gamma_{ik} q_k\bigr)$ by $(q_1,q_2,\ldots,q_N)^T = M^{-1}\vec{m}$.

Let $w_1(e) = m|e|^2$ and $w_2(e_\tau) = (L^2+H_2)\,\bar{q}\,|e_\tau|^2$. Then inequalities (6) and (7) hold by using (9), with $\gamma(t)=0$ in (6); indeed, $w_1(x)-w_2(x) = \bigl(m-(L^2+H_2)\bar{q}\bigr)|x|^2 > 0$ for $x\ne 0$ precisely when (9) holds. By Lemma 4, the error system (4) is almost surely asymptotically stable, and hence the noise-perturbed response system (2) can be adaptively almost surely asymptotically synchronized with the drive delayed neural network (1). This completes the proof. □

Remark 2 In Theorem 1, condition (9) for the adaptive almost surely asymptotically synchronization of the SDNN with Markovian switching, obtained by using the M-matrix and Lyapunov functional method, is generator-dependent and quite different from conditions obtained by other methods such as the linear matrix inequality approach. Moreover, the condition is easy to check once the drive system and the response system are given and the positive constant m is suitably chosen. To the best of the authors' knowledge, this is the first development of its kind in the research area of synchronization for neural networks.
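As an illustration of such a check, the following Python sketch (the helper name, the eigenvalue-based M-matrix test and the default choice of m are ours) computes $\eta$ as in Theorem 1, forms $M=-(\eta I_N+\Gamma)$, solves $Mq=(m,\ldots,m)^T$ and then verifies inequality (9); here $\rho(\cdot)$ is evaluated as the spectral radius, following the notation of the theorem.

```python
import numpy as np

def spectral_radius(X):
    return np.max(np.abs(np.linalg.eigvals(X)))

def check_theorem1(C_list, A_list, B_list, Gamma, L, H1, H2, m=1.0, tol=1e-12):
    """Numerically check the hypotheses of Theorem 1.

    C_list, A_list, B_list: per-mode matrices C_i (diagonal), A_i, B_i.
    Gamma: N x N generator of the Markov chain.  L, H1, H2: constants from
    Assumptions 1-2.  Returns (ok, q) with q = M^{-1} (m, ..., m)^T."""
    gamma = min(np.min(np.diag(C)) for C in C_list)
    alpha = max(spectral_radius(A) ** 2 for A in A_list)
    beta = max(spectral_radius(B) ** 2 for B in B_list)
    eta = -2.0 * gamma + alpha + L ** 2 + beta + H1

    N = Gamma.shape[0]
    M = -(eta * np.eye(N) + Gamma)
    eigs = np.linalg.eigvals(M)
    real_eigs = eigs[np.abs(eigs.imag) < tol].real
    if real_eigs.size == 0 or np.any(real_eigs <= tol):   # Lemma 1(2) fails
        return False, None

    q = np.linalg.solve(M, m * np.ones(N))                 # M q = (m, ..., m)^T
    # Inequality (9): (L^2 + H2) * q_bar < -(eta*q_i + sum_k Gamma[i,k]*q_k) for all i.
    lhs = (L ** 2 + H2) * q.max()
    rhs = -(eta * q + Gamma @ q)                            # equals m * ones(N) by construction
    return bool(np.all(q > tol) and np.all(lhs < rhs)), q
```

When the function returns `True`, the computed $q_i$ are exactly the weights used in the Lyapunov function of the proof of Theorem 1.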

Now, we are in a position to consider two special cases of the drive system (1) and the response system (2).

Special case 1 The Markovian jumping parameters are removed from the neural networks (1) and the response system (2). In this case, N=1 and the drive system, the response system and the error system can be represented, respectively, as follows:

$$ dx(t) = \bigl[-Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + D\bigr]\,dt, $$
(17)
$$ \begin{aligned} dy(t) = {}& \bigl[-Cy(t) + Af(y(t)) + Bf(y(t-\tau(t))) + D + U(t)\bigr]\,dt \\ & + \sigma\bigl(t, y(t)-x(t), y(t-\tau(t))-x(t-\tau(t))\bigr)\,d\omega(t) \end{aligned} $$
(18)

and

$$ de(t) = \bigl[-Ce(t) + Ag(e(t)) + Bg(e_\tau(t)) + U(t)\bigr]\,dt + \sigma\bigl(t, e(t), e_\tau(t)\bigr)\,d\omega(t). $$
(19)

For this case, one can get the following result that is analogous to Theorem 1.

Corollary 1 Let

$$ \eta = -2\gamma + \alpha + L^2 + \beta + H_1, \qquad \gamma = \min_{1\le j\le n} c_j, \qquad \alpha = \bigl(\rho(A)\bigr)^2, \qquad \beta = \bigl(\rho(B)\bigr)^2. $$

Assume that

$$ \eta < 0 $$

and

$$ L^2 + H_2 < -\eta. $$
(20)

Under Assumptions 1–3, the noise-perturbed response system (18) can be adaptively almost surely asymptotically synchronized with the delayed neural network (17) if the update law of the feedback gain $K(t)$ of the controller (3) is chosen as

$$ \dot{k}_j = -\alpha_j e_j^2, $$
(21)

where $\alpha_j>0$ ($j=1,2,\ldots,n$) are arbitrary constants.

Proof Choose the following nonnegative function:

$$ V(t,e) = |e|^2 + \sum_{j=1}^{n}\frac{1}{\alpha_j}k_j^2. $$

The rest of the proof is similar to that of Theorem 1, and hence omitted. □

Special case 2 The noise-perturbation is removed from the response system (2), which yields the noiseless response system

$$ dy(t) = \bigl[-C(r(t))y(t) + A(r(t))f(y(t)) + B(r(t))f(y(t-\tau(t))) + D(r(t)) + U(t)\bigr]\,dt $$
(22)

and the error system

$$ de(t) = \bigl[-C(r(t))e(t) + A(r(t))g(e(t)) + B(r(t))g(e_\tau(t)) + U(t)\bigr]\,dt, $$
(23)

respectively.

In this case, one can get the following results.

Corollary 2 Assume that $M := -\bigl(\eta I_N + \Gamma\bigr)$ is a nonsingular M-matrix, where

$$ \eta = -2\gamma + \alpha + L^2 + \beta. $$

Let $m>0$, write $\vec{m}=(m,m,\ldots,m)^T$, and set $(q_1,q_2,\ldots,q_N)^T := M^{-1}\vec{m}\gg 0$ by Lemma 1. Assume also that

$$ L^2\,\bar{q} < -\Bigl(\eta q_i + \sum_{k=1}^{N}\gamma_{ik} q_k\Bigr), \quad \forall i\in S, $$
(24)

where $\bar{q} = \max_{i\in S} q_i$.

Under Assumptions 1–3, the noiseless response system (22) can be adaptively almost surely asymptotically synchronized with the unknown drive delayed neural network (1) if the update law of the feedback gain $K(t)$ of the controller (3) is chosen as

$$ \dot{k}_j = -q_i\alpha_j e_j^2, $$
(25)

where $\alpha_j>0$ are arbitrary constants.

Proof For each $i\in S$, choose a nonnegative function as follows:

$$ V(t,i,e) = q_i|e|^2 + \sum_{j=1}^{n}\frac{1}{\alpha_j}k_j^2. $$

The rest of the proof is similar to that of Theorem 1, and hence omitted. □

4 Numerical example

In this section, an illustrative example is given to support our main results.

Example 1 Consider the delayed neural network (1) and its response system (2) with Markovian switching and the following network parameters:

$$ \begin{gathered} C_1 = \begin{bmatrix} 2 & 0 \\ 0 & 2.4 \end{bmatrix}, \qquad C_2 = \begin{bmatrix} 1.5 & 0 \\ 0 & 1 \end{bmatrix}, \qquad A_1 = \begin{bmatrix} 3.2 & 1.5 \\ 2.7 & 3.2 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 2.1 & 0.6 \\ 0.8 & 3.2 \end{bmatrix}, \\ B_1 = \begin{bmatrix} 2.7 & 3.1 \\ 0 & 2.3 \end{bmatrix}, \qquad B_2 = \begin{bmatrix} 1.4 & 2.1 \\ 0.3 & 1.5 \end{bmatrix}, \qquad D_1 = \begin{bmatrix} 0.4 \\ 0.5 \end{bmatrix}, \qquad D_2 = \begin{bmatrix} 0.4 \\ 0.6 \end{bmatrix}, \qquad \Gamma = \begin{bmatrix} -1.2 & 1.2 \\ 0.5 & -0.5 \end{bmatrix}, \\ \sigma\bigl(t, e(t), e(t-\tau), 1\bigr) = \bigl(0.4\,e_1(t-\tau),\ 0.5\,e_2(t)\bigr)^T, \qquad \sigma\bigl(t, e(t), e(t-\tau), 2\bigr) = \bigl(0.5\,e_1(t),\ 0.3\,e_2(t-\tau)\bigr)^T, \\ f(x(t)) = g(x(t)) = \tanh(x(t)), \qquad \tau = 0.12, \qquad L = 1. \end{gathered} $$

It can be checked that Assumptions 1–3 and inequality (9) are satisfied and that the matrix M is a nonsingular M-matrix. Hence, by Theorem 1, the noise-perturbed response system (2) can be adaptively almost surely asymptotically synchronized with the drive delayed neural network (1). The simulation results are given in Figures 1 and 2. Figure 1 shows that the state responses $e_1(t)$ and $e_2(t)$ of the error system converge to zero. Figure 2 shows the dynamic curves of the feedback gains $k_1$ and $k_2$. From the simulations, it can be seen that the stochastic delayed neural networks with Markovian switching are adaptively almost surely asymptotically synchronized.

Figure 1 The response curves of the state variables $e_1(t)$ and $e_2(t)$ of the error system.

Figure 2 The dynamic curves of the feedback gains $k_1$ and $k_2$.
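For readers who wish to reproduce simulations of this type, the following Python sketch (ours, not the authors' code) integrates the drive system (1), the response system (2) and the update law (10) with an Euler–Maruyama scheme; the Markovian switching is simulated to first order in the step size, the constant delay is handled through a history buffer, and the step size, horizon, initial histories, weights $q_i$ and gains $\alpha_j$ are illustrative choices. The noise intensity is interpreted as a diagonal matrix acting on the Brownian increment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mode-dependent network data as listed in Example 1; the remaining settings
# below (step size, horizon, initial histories, q_i, alpha_j) are placeholders.
C = [np.diag([2.0, 2.4]), np.diag([1.5, 1.0])]
A = [np.array([[3.2, 1.5], [2.7, 3.2]]), np.array([[2.1, 0.6], [0.8, 3.2]])]
B = [np.array([[2.7, 3.1], [0.0, 2.3]]), np.array([[1.4, 2.1], [0.3, 1.5]])]
D = [np.array([0.4, 0.5]), np.array([0.4, 0.6])]
Gamma = np.array([[-1.2, 1.2], [0.5, -0.5]])
f, tau, dt, T = np.tanh, 0.12, 1e-3, 10.0
q = np.array([1.0, 1.0])                  # weights q_i (placeholder values)
alpha = np.array([1.0, 1.0])              # alpha_j in the update law (10)

n_steps, d = int(round(T / dt)), int(round(tau / dt))
x = np.zeros((n_steps + 1, 2)); x[0] = [0.3, -0.2]    # drive state
y = np.zeros((n_steps + 1, 2)); y[0] = [-0.5, 0.4]    # response state
k = np.zeros((n_steps + 1, 2))                        # adaptive gains k_j(t)
mode = 0                                              # mode index 0 <-> r(t) = 1

def sigma(i, e, e_tau):
    """Noise intensity of Example 1 (mode index 0 corresponds to r(t) = 1)."""
    if i == 0:
        return np.array([0.4 * e_tau[0], 0.5 * e[1]])
    return np.array([0.5 * e[0], 0.3 * e_tau[1]])

for s in range(n_steps):
    # First-order simulation of the switching: leave mode i w.p. -Gamma[i, i]*dt.
    if rng.random() < -Gamma[mode, mode] * dt:
        mode = 1 - mode
    i, sd = mode, max(s - d, 0)                        # constant history for t <= 0
    e, e_tau = y[s] - x[s], y[sd] - x[sd]
    U = k[s] * e                                       # controller (3)
    dW = rng.normal(0.0, np.sqrt(dt), size=2)          # Brownian increments
    x[s + 1] = x[s] + (-C[i] @ x[s] + A[i] @ f(x[s]) + B[i] @ f(x[sd]) + D[i]) * dt
    y[s + 1] = (y[s] + (-C[i] @ y[s] + A[i] @ f(y[s]) + B[i] @ f(y[sd]) + D[i] + U) * dt
                + sigma(i, e, e_tau) * dW)
    k[s + 1] = k[s] - q[i] * alpha * e ** 2 * dt       # update law (10)

err, gains = y - x, k        # error and gain trajectories
```

Plotting `err` and `gains` against time produces curves of the same type as Figures 1 and 2.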

5 Conclusions

In this paper, we have proposed the concept of adaptive almost surely asymptotically synchronization for stochastic delayed neural networks with Markovian switching. Making use of the M-matrix and Lyapunov functional method, we have obtained a sufficient condition under which the response stochastic delayed neural network with Markovian switching can be adaptively almost surely asymptotically synchronized with the drive delayed neural network with Markovian switching. The method used to obtain this sufficient condition differs from the linear matrix inequality technique. The condition obtained in this paper depends on the generator of the Markovian jumping model and can be easily checked. Simulation results are provided to demonstrate the effectiveness of our theoretical results and analytical tools.

References

1. Sevgen S, Arik S: Implementation of on-chip training system for cellular neural networks using iterative annealing optimization method. Int. J. Reason.-Based Intell. Syst. 2010, 2: 251-256.
2. Lütcke H, Helmchen F: Two-photon imaging and analysis of neural network dynamics. Rep. Prog. Phys. 2010, 74: Article ID 086602.
3. Xu Y, Li B, Zhou W, Fang J: Mean square function synchronization of chaotic systems with stochastic effects. Nonlinear Dyn. 2012. doi:10.1007/s11071-011-0217-x
4. Zhao L, Hu J, Fang J, Zhang W: Studying on the stability of fractional-order nonlinear system. Nonlinear Dyn. 2012. doi:10.1007/s11071-012-0469-0
5. Wang Z, Liu Y, Liu X: Exponential stability of delayed recurrent neural networks with Markovian jumping parameters. Phys. Lett. A 2006, 356: 346-352. doi:10.1016/j.physleta.2006.03.078
6. Zhang W, Fang J, Tang Y: Stochastic stability of Markovian jumping genetic regulatory networks with mixed time delays. Appl. Math. Comput. 2011, 17: 7210-7225.
7. Huang H, Ho D, Qu Y: Robust stability of stochastic delayed additive neural networks with Markovian switching. Neural Netw. 2007, 20: 799-809. doi:10.1016/j.neunet.2007.07.003
8. Mao X, Yuan C: Stochastic Differential Equations with Markovian Switching. Imperial College Press, London; 2006.
9. Wang Z, Ho D, Liu Y, Liu X: Robust H∞ control for a class of nonlinear discrete time-delay stochastic systems with missing measurements. Automatica 2010, 45: 1-8.
10. Wang Z, Liu Y, Liu G, Liu X: A note on control of discrete-time stochastic systems with distributed delays and nonlinear disturbances. Automatica 2010, 46: 543-548. doi:10.1016/j.automatica.2009.11.020
11. Zhou W, Lu H, Duan C: Exponential stability of hybrid stochastic neural networks with mixed time delays and nonlinearity. Neurocomputing 2009, 72: 3357-3365. doi:10.1016/j.neucom.2009.04.012
12. Tang Y, Fang J, Miao Q: Synchronization of stochastic delayed neural networks with Markovian switching and its application. Int. J. Neural Syst. 2009, 19: 43-56. doi:10.1142/S0129065709001823
13. Min X, Ho D, Cao J: Time-delayed feedback control of dynamical small-world networks at Hopf bifurcation. Nonlinear Dyn. 2009, 58: 319-344. doi:10.1007/s11071-009-9485-0
14. Xu Y, Zhou W, Fang J: Topology identification of the modified complex dynamical network with non-delayed and delayed coupling. Nonlinear Dyn. 2012, 68: 195-205. doi:10.1007/s11071-011-0217-x
15. Wang Z, Liu Y, Liu X: Exponential stabilization of a class of stochastic system with Markovian jump parameters and mode-dependent mixed time-delays. IEEE Trans. Autom. Control 2010, 55: 1656-1662.
16. Hassouneh M, Abed E: Lyapunov and LMI analysis and feedback control of border collision bifurcations. Nonlinear Dyn. 2007, 50: 373-386. doi:10.1007/s11071-006-9169-y
17. Tang Y, Leung S, Wong W, Fang J: Impulsive pinning synchronization of stochastic discrete-time networks. Neurocomputing 2010, 73: 2132-2139. doi:10.1016/j.neucom.2010.02.010
18. Zhang W, Tang Y, Fang J, Zhu W: Exponential cluster synchronization of impulsive delayed genetic oscillators with external disturbances. Chaos 2011, 21: 37-43.
19. Tang Y, Gao H, Zou W, Kurths J: Identifying controlling nodes in neuronal networks in different scales. PLoS ONE 2012, 7: Article ID e41375.
20. Tang Y, Wang Z, Gao H, Swift S, Kurths J: A constrained evolutionary computation method for detecting controlling regions of cortical networks. IEEE/ACM Trans. Comput. Biol. Bioinform. 2012, 9: 1569-1581.
21. Ma Q, Xu S, Zou Y, Shi G: Synchronization of stochastic chaotic neural networks with reaction-diffusion terms. Nonlinear Dyn. 2012, 67: 2183-2196. doi:10.1007/s11071-011-0138-8
22. Li X, Cao J: Adaptive synchronization for delayed neural networks with stochastic perturbation. J. Franklin Inst. 2008, 354: 779-791.
23. Sun Y, Cao J: Adaptive lag synchronization of unknown chaotic delayed neural networks with noise perturbation. Phys. Lett. A 2007, 364: 277-285. doi:10.1016/j.physleta.2006.12.019
24. Chen G, Zhou J, Liu Z: Classification of chaos in 3-D autonomous quadratic systems - I: basic framework and methods. Int. J. Bifurc. Chaos 2006, 16: 2459-2479. doi:10.1142/S0218127406016203
25. Cao J, Lu J: Adaptive synchronization of neural networks with or without time-varying delays. Chaos 2006, 16: Article ID 013133.
26. Tang Y, Fang J: Adaptive synchronization in an array of chaotic neural networks with mixed delays and jumping stochastically hybrid coupling. Commun. Nonlinear Sci. Numer. Simul. 2009, 14: 3615-3628. doi:10.1016/j.cnsns.2009.02.006
27. Berman A, Plemmons R: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York; 1979.
28. Øksendal B: Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin; 2005.
29. Yuan C, Mao X: Robust stability and controllability of stochastic differential delay equations with Markovian switching. Automatica 2004, 40: 343-354. doi:10.1016/j.automatica.2003.10.012


Acknowledgements

We would like to thank the referees and the editor for their valuable comments and suggestions, which have led to a better presentation of this paper. This work is supported by the National Natural Science Foundation of China (61075060), the Innovation Program of Shanghai Municipal Education Commission (12zz064) and the Fundamental Research Funds for the Central Universities.

Author information


Correspondence to Yan Gao or Wuneng Zhou.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the manuscript. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Ding, X., Gao, Y., Zhou, W. et al. Adaptive almost surely asymptotically synchronization for stochastic delayed neural networks with Markovian switching. Adv Differ Equ 2013, 211 (2013). https://doi.org/10.1186/1687-1847-2013-211

