

Global robust exponential synchronization of BAM recurrent FNNs with infinite distributed delays and diffusion terms on time scales

Abstract

In this article, the global robust exponential synchronization of reaction-diffusion BAM recurrent fuzzy neural networks (FNNs) with infinite distributed delays on time scales is investigated. By applying a Lyapunov functional and inequality techniques, some sufficient criteria are established which guarantee the global robust exponential synchronization of reaction-diffusion BAM recurrent FNNs with infinite distributed delays on time scales. One example is given to illustrate the effectiveness of our results.

1 Introduction

The study of artificial neural networks has attracted much attention because of their potential applications in areas such as signal processing, image processing, pattern classification, quadratic optimization, associative memory, and moving object speed detection. Many models of neural networks have been proposed. One important model is the bidirectional associative memory (BAM) neural network, which was first introduced by Kosko [1-3]. It is a special class of recurrent neural networks that can store bipolar vector pairs. The BAM neural network is composed of neurons arranged in two layers, the X-layer and the Y-layer. The neurons in one layer are fully interconnected to the neurons in the other layer. Through iterations of forward and backward information flows between the two layers, it performs a two-way associative search for stored bipolar vector pairs and generalizes the single-layer auto-associative Hebbian correlation to a two-layer pattern-matched heteroassociative circuit. Therefore, this class of networks possesses good application prospects in fields such as pattern recognition, signal and image processing, and artificial intelligence [4]. In general, artificial neural networks exhibit complex dynamical behaviors such as stability, synchronization, periodic or almost periodic solutions, and invariant sets and attractors; see [5-27] and the references cited therein. Therefore, the analysis of dynamical behaviors of neural networks is a necessary step for the practical design of neural networks. Since the BAM model was proposed by Kosko, it has attracted much attention over the past two decades [28-48]. Dynamical behaviors such as uniqueness, global asymptotic stability, exponential stability, and invariant sets and attractors of equilibrium points or periodic solutions have been investigated for BAM neural networks with different types of time delays (see [28-44, 48]).

Synchronization has attracted much attention since it was proposed by Carroll et al. [49, 50]. The principle of drive-response synchronization is as follows: the drive system sends a signal through a channel to the response system, which uses this signal to synchronize itself with the drive system. Namely, the response system is influenced by the behavior of the drive system, but the drive system is independent of the response one. In recent years, many results on the synchronization problem for delayed neural networks have been reported in the literature [5, 6, 8-15, 27, 36, 49, 50].

As is well known, in both biological and man-made neural networks, diffusion effects cannot strictly be avoided when electrons move in asymmetric electromagnetic fields, so the activations must be considered to vary in space as well as in time. Many researchers have studied the dynamical properties of continuous-time reaction-diffusion neural networks (see, for example, [8, 11, 17, 18, 24, 25, 27, 32, 48]).

However, in the mathematical modeling of real-world problems, we encounter further inconveniences such as complexity and uncertainty or vagueness. Fuzzy theory is considered a suitable setting for taking vagueness into consideration. Based on traditional cellular neural networks (CNNs), T. Yang and L.B. Yang proposed fuzzy CNNs (FCNNs) [23], which integrate fuzzy logic into the structure of traditional CNNs and maintain local connectedness among cells. Unlike previous CNN structures, FCNNs have fuzzy logic between their template input and/or output besides the sum-of-product operation. FCNNs are a very useful paradigm for image processing problems, which are a cornerstone of image processing and pattern recognition. Therefore, it is necessary to consider both the fuzzy logic and the delay effect on the dynamical behaviors of neural networks. To the best of our knowledge, few authors have considered the synchronization of reaction-diffusion recurrent fuzzy neural networks with delays and Dirichlet boundary conditions on time scales, which is a challenging and important problem in theory and applications. Therefore, in this paper, we investigate the global robust exponential synchronization of the following delayed reaction-diffusion BAM recurrent fuzzy neural networks (FNNs) on time scales:

$$\left\{\begin{aligned}
u_i^{\Delta}(t,x)={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(a_{ik}\frac{\partial u_i}{\partial x_k}\Bigr)-b_iu_i(t,x)+\sum_{j=1}^{m}c_{ij}f_j\bigl(v_j(t-\tau,x)\bigr)+I_i\\
&+\bigwedge_{j=1}^{n}p_{ij}F_j\bigl(u_j(t-\tau,x)\bigr)+\bigwedge_{j=1}^{n}r_{ij}\int_{0}^{+\infty}k_{ij}(s)F_j\bigl(u_j(t-s,x)\bigr)\,\Delta s\\
&+\bigvee_{j=1}^{n}q_{ij}F_j\bigl(u_j(t-\tau,x)\bigr)+\bigvee_{j=1}^{n}w_{ij}\int_{0}^{+\infty}k_{ij}(s)F_j\bigl(u_j(t-s,x)\bigr)\,\Delta s\\
&+\sum_{j=1}^{n}d_{ij}\mu_j+\bigwedge_{j=1}^{n}S_{ij}\mu_j+\bigvee_{j=1}^{n}T_{ij}\mu_j,\\
v_j^{\Delta}(t,x)={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(\xi_{jk}\frac{\partial v_j}{\partial x_k}\Bigr)-\eta_jv_j(t,x)+\sum_{i=1}^{n}\zeta_{ji}g_i\bigl(u_i(t-\tau,x)\bigr)+J_j\\
&+\bigwedge_{i=1}^{m}\lambda_{ji}G_i\bigl(v_i(t-\tau,x)\bigr)+\bigwedge_{i=1}^{m}\rho_{ji}\int_{0}^{+\infty}\kappa_{ji}(s)G_i\bigl(v_i(t-s,x)\bigr)\,\Delta s\\
&+\bigvee_{i=1}^{m}\pi_{ji}G_i\bigl(v_i(t-\tau,x)\bigr)+\bigvee_{i=1}^{m}\sigma_{ji}\int_{0}^{+\infty}\kappa_{ji}(s)G_i\bigl(v_i(t-s,x)\bigr)\,\Delta s\\
&+\sum_{i=1}^{m}h_{ji}\nu_i+\bigwedge_{i=1}^{m}M_{ji}\nu_i+\bigvee_{i=1}^{m}N_{ji}\nu_i,
\end{aligned}\right.$$
(1.1)

subject to the following initial conditions

$$\begin{cases}
u_i(s,x)=\phi_i(s,x), & (s,x)\in[-\tau,0]_{\mathbb{T}}\times\Omega,\\
v_j(s,x)=\varphi_j(s,x), & (s,x)\in[-\tau,0]_{\mathbb{T}}\times\Omega,
\end{cases}$$
(1.2)

and Dirichlet boundary conditions

$$\begin{cases}
u_i(t,x)=0, & (t,x)\in[0,\infty)_{\mathbb{T}}\times\partial\Omega,\\
v_j(t,x)=0, & (t,x)\in[0,\infty)_{\mathbb{T}}\times\partial\Omega,
\end{cases}$$
(1.3)

where $i=1,2,\ldots,n$; $j=1,2,\ldots,m$. $\mathbb{T}\subseteq\mathbb{R}$ is a time scale such that $[0,+\infty)_{\mathbb{T}}$ is unbounded and $[-\tau,0]_{\mathbb{T}}\neq\emptyset$. $\tau>0$ is a constant time delay. $x=(x_1,x_2,\ldots,x_l)^T\in\Omega\subset\mathbb{R}^l$, where $\Omega=\{x=(x_1,x_2,\ldots,x_l)^T:|x_i|<l_i,\ i=1,2,\ldots,l\}$ is a bounded compact set with smooth boundary $\partial\Omega$ in the space $\mathbb{R}^l$. $u=(u_1,u_2,\ldots,u_n)^T\in\mathbb{R}^n$ and $v=(v_1,v_2,\ldots,v_m)^T\in\mathbb{R}^m$. $u_i(t,x)$ and $v_j(t,x)$ are the states of the $i$th neuron and the $j$th neuron at time $t$ and in space $x$, respectively. $I=(I_1,I_2,\ldots,I_n)^T\in\mathbb{R}^n$ and $J=(J_1,J_2,\ldots,J_m)^T\in\mathbb{R}^m$ are constant input vectors. The smooth functions $a_{ik}>0$ and $\xi_{jk}>0$ correspond to the transmission diffusion operators along the $i$th and the $j$th neurons, respectively. $b_i>0$, $\eta_j>0$, $\mu_j$, $\nu_i$, $c_{ij}$, $p_{ij}$, $r_{ij}$, $q_{ij}$, $w_{ij}$, $d_{ij}$, $S_{ij}$, $T_{ij}$, $\zeta_{ji}$, $\lambda_{ji}$, $\rho_{ji}$, $\pi_{ji}$, $\sigma_{ji}$, $h_{ji}$, $M_{ji}$, $N_{ji}$ are constants. $b_i$ and $\eta_j$ denote the rates with which the $i$th and $j$th neurons reset their potential to the resting state in isolation when disconnected from the network and external inputs, respectively. $c_{ij}$, $p_{ij}$, $r_{ij}$, $q_{ij}$, $w_{ij}$, $d_{ij}$, $S_{ij}$, $T_{ij}$, $\zeta_{ji}$, $\lambda_{ji}$, $\rho_{ji}$, $\pi_{ji}$, $\sigma_{ji}$, $h_{ji}$, $M_{ji}$, $N_{ji}$ denote the connection weights. $f_j(\cdot)$ ($j=1,2,\ldots,m$) and $g_i(\cdot)$ ($i=1,2,\ldots,n$) denote the activation functions of the $j$th neuron of the Y-layer on the $i$th neuron of the X-layer and of the $i$th neuron of the X-layer on the $j$th neuron of the Y-layer at time $t$ and in space $x$, respectively. $F_j(\cdot)$ ($j=1,2,\ldots,n$) denotes the fuzzy activation function of the $j$th neuron on the $i$th neuron inside the X-layer, and $G_i(\cdot)$ ($i=1,2,\ldots,m$) denotes the fuzzy activation function of the $i$th neuron on the $j$th neuron inside the Y-layer. $\mu_j$ ($j=1,2,\ldots,n$) denotes the bias of the $j$th neuron on the $i$th neuron inside the X-layer, and $\nu_i$ ($i=1,2,\ldots,m$) denotes the bias of the $i$th neuron on the $j$th neuron inside the Y-layer. $\bigwedge$ and $\bigvee$ denote the fuzzy AND and fuzzy OR operations, respectively. $\phi(t,x)=(\phi_1(t,x),\phi_2(t,x),\ldots,\phi_n(t,x))^T:[-\tau,0]_{\mathbb{T}}\times\Omega\to\mathbb{R}^n$ and $\varphi(t,x)=(\varphi_1(t,x),\varphi_2(t,x),\ldots,\varphi_m(t,x))^T:[-\tau,0]_{\mathbb{T}}\times\Omega\to\mathbb{R}^m$ are rd-continuous with respect to $t\in[-\tau,0]_{\mathbb{T}}$ and continuous with respect to $x\in\Omega$.

In order to investigate the global robust exponential synchronization for system (1.1)-(1.3), the quantities $b_i$, $a_{ik}$, $c_{ij}$, $p_{ij}$, $r_{ij}$, $q_{ij}$, $w_{ij}$, $\eta_j$, $\xi_{jk}$, $\zeta_{ji}$, $\lambda_{ji}$, $\rho_{ji}$, $\pi_{ji}$ and $\sigma_{ji}$ may be regarded as lying in intervals as follows: $0<\underline{b}_i\le b_i<\infty$, $\underline{a}_{ik}\le a_{ik}\le\bar{a}_{ik}$, $|\underline{c}_{ij}|\le|c_{ij}|\le|\bar{c}_{ij}|$, $|\underline{p}_{ij}|\le|p_{ij}|\le|\bar{p}_{ij}|$, $|\underline{r}_{ij}|\le|r_{ij}|\le|\bar{r}_{ij}|$, $|\underline{q}_{ij}|\le|q_{ij}|\le|\bar{q}_{ij}|$, $|\underline{w}_{ij}|\le|w_{ij}|\le|\bar{w}_{ij}|$, $0<\underline{\eta}_j\le\eta_j<\infty$, $\underline{\xi}_{jk}\le\xi_{jk}\le\bar{\xi}_{jk}$, $|\underline{\zeta}_{ji}|\le|\zeta_{ji}|\le|\bar{\zeta}_{ji}|$, $|\underline{\lambda}_{ji}|\le|\lambda_{ji}|\le|\bar{\lambda}_{ji}|$, $|\underline{\rho}_{ji}|\le|\rho_{ji}|\le|\bar{\rho}_{ji}|$, $|\underline{\pi}_{ji}|\le|\pi_{ji}|\le|\bar{\pi}_{ji}|$, $|\underline{\sigma}_{ji}|\le|\sigma_{ji}|\le|\bar{\sigma}_{ji}|$.

Taking the time scale $\mathbb{T}=\mathbb{R}$ (the set of real numbers), system (1.1)-(1.3) reduces to the following continuous case (1.4)-(1.6):

$$\left\{\begin{aligned}
\frac{\partial u_i(t,x)}{\partial t}={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(a_{ik}\frac{\partial u_i}{\partial x_k}\Bigr)-b_iu_i(t,x)+\sum_{j=1}^{m}c_{ij}f_j\bigl(v_j(t-\tau,x)\bigr)+I_i\\
&+\bigwedge_{j=1}^{n}p_{ij}F_j\bigl(u_j(t-\tau,x)\bigr)+\bigwedge_{j=1}^{n}r_{ij}\int_{0}^{+\infty}k_{ij}(s)F_j\bigl(u_j(t-s,x)\bigr)\,ds\\
&+\bigvee_{j=1}^{n}q_{ij}F_j\bigl(u_j(t-\tau,x)\bigr)+\bigvee_{j=1}^{n}w_{ij}\int_{0}^{+\infty}k_{ij}(s)F_j\bigl(u_j(t-s,x)\bigr)\,ds\\
&+\sum_{j=1}^{n}d_{ij}\mu_j+\bigwedge_{j=1}^{n}S_{ij}\mu_j+\bigvee_{j=1}^{n}T_{ij}\mu_j,\\
\frac{\partial v_j(t,x)}{\partial t}={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(\xi_{jk}\frac{\partial v_j}{\partial x_k}\Bigr)-\eta_jv_j(t,x)+\sum_{i=1}^{n}\zeta_{ji}g_i\bigl(u_i(t-\tau,x)\bigr)+J_j\\
&+\bigwedge_{i=1}^{m}\lambda_{ji}G_i\bigl(v_i(t-\tau,x)\bigr)+\bigwedge_{i=1}^{m}\rho_{ji}\int_{0}^{+\infty}\kappa_{ji}(s)G_i\bigl(v_i(t-s,x)\bigr)\,ds\\
&+\bigvee_{i=1}^{m}\pi_{ji}G_i\bigl(v_i(t-\tau,x)\bigr)+\bigvee_{i=1}^{m}\sigma_{ji}\int_{0}^{+\infty}\kappa_{ji}(s)G_i\bigl(v_i(t-s,x)\bigr)\,ds\\
&+\sum_{i=1}^{m}h_{ji}\nu_i+\bigwedge_{i=1}^{m}M_{ji}\nu_i+\bigvee_{i=1}^{m}N_{ji}\nu_i,
\end{aligned}\right.$$
(1.4)

subject to the following initial conditions

$$\begin{cases}
u_i(s,x)=\phi_i(s,x), & (s,x)\in[-\tau,0]\times\Omega,\\
v_j(s,x)=\varphi_j(s,x), & (s,x)\in[-\tau,0]\times\Omega,
\end{cases}$$
(1.5)

and Dirichlet boundary conditions

$$\begin{cases}
u_i(t,x)=0, & (t,x)\in[0,\infty)\times\partial\Omega,\\
v_j(t,x)=0, & (t,x)\in[0,\infty)\times\partial\Omega.
\end{cases}$$
(1.6)

Taking the time scale $\mathbb{T}=\mathbb{Z}$ (the set of integers), system (1.1)-(1.3) reduces to the following discrete case (1.7)-(1.9):

$$\left\{\begin{aligned}
\Delta_t u_i(t,x)={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(a_{ik}\frac{\partial u_i}{\partial x_k}\Bigr)-b_iu_i(t,x)+\sum_{j=1}^{m}c_{ij}f_j\bigl(v_j(t-\tau,x)\bigr)+I_i\\
&+\bigwedge_{j=1}^{n}p_{ij}F_j\bigl(u_j(t-\tau,x)\bigr)+\bigwedge_{j=1}^{n}r_{ij}\sum_{s=0}^{\infty}k_{ij}(s)F_j\bigl(u_j(t-s,x)\bigr)\\
&+\bigvee_{j=1}^{n}q_{ij}F_j\bigl(u_j(t-\tau,x)\bigr)+\bigvee_{j=1}^{n}w_{ij}\sum_{s=0}^{\infty}k_{ij}(s)F_j\bigl(u_j(t-s,x)\bigr)\\
&+\sum_{j=1}^{n}d_{ij}\mu_j+\bigwedge_{j=1}^{n}S_{ij}\mu_j+\bigvee_{j=1}^{n}T_{ij}\mu_j,\\
\Delta_t v_j(t,x)={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(\xi_{jk}\frac{\partial v_j}{\partial x_k}\Bigr)-\eta_jv_j(t,x)+\sum_{i=1}^{n}\zeta_{ji}g_i\bigl(u_i(t-\tau,x)\bigr)+J_j\\
&+\bigwedge_{i=1}^{m}\lambda_{ji}G_i\bigl(v_i(t-\tau,x)\bigr)+\bigwedge_{i=1}^{m}\rho_{ji}\sum_{s=0}^{\infty}\kappa_{ji}(s)G_i\bigl(v_i(t-s,x)\bigr)\\
&+\bigvee_{i=1}^{m}\pi_{ji}G_i\bigl(v_i(t-\tau,x)\bigr)+\bigvee_{i=1}^{m}\sigma_{ji}\sum_{s=0}^{\infty}\kappa_{ji}(s)G_i\bigl(v_i(t-s,x)\bigr)\\
&+\sum_{i=1}^{m}h_{ji}\nu_i+\bigwedge_{i=1}^{m}M_{ji}\nu_i+\bigvee_{i=1}^{m}N_{ji}\nu_i,
\end{aligned}\right.$$
(1.7)

subject to the following initial conditions

$$\begin{cases}
u_i(s,x)=\phi_i(s,x), & (s,x)\in\{-\tau,-\tau+1,\ldots,-2,-1,0\}\times\Omega,\\
v_j(s,x)=\varphi_j(s,x), & (s,x)\in\{-\tau,-\tau+1,\ldots,-2,-1,0\}\times\Omega,
\end{cases}$$
(1.8)

and Dirichlet boundary conditions

$$\begin{cases}
u_i(t,x)=0, & (t,x)\in\mathbb{Z}_+\times\partial\Omega,\\
v_j(t,x)=0, & (t,x)\in\mathbb{Z}_+\times\partial\Omega,
\end{cases}$$
(1.9)

where $t\in\mathbb{Z}$, $\tau$ is a positive integer, $\mathbb{Z}_+=\{0,1,2,\ldots\}$, $\Delta_t u_i(t,x)=u_i(t+1,x)-u_i(t,x)$ and $\Delta_t v_j(t,x)=v_j(t+1,x)-v_j(t,x)$.

If we choose $\mathbb{T}=\mathbb{R}$, then $\sigma(t)=t$ and $\mu(t)=0$; in this case, system (1.1)-(1.3) is the continuous reaction-diffusion BAM recurrent FNN (1.4)-(1.6). If $\mathbb{T}=\mathbb{Z}$, then $\mu(t)=1$ and system (1.1)-(1.3) is the discrete-difference reaction-diffusion BAM recurrent FNN (1.7)-(1.9). In this paper, we study the global robust exponential synchronization of the reaction-diffusion BAM recurrent FNNs (1.1)-(1.3), which unify both the continuous case and the discrete-difference case. Moreover, system (1.1)-(1.3) is a good model for handling problems such as predator-prey forecasting or the optimization of goods output.
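To make the discrete specialization concrete, the following minimal sketch (not from the original paper) implements one update step of the X-layer equation in (1.7) at a single spatial point, ignoring the diffusion term and truncating the infinite distributed delay; the fuzzy AND/OR operators are realized as componentwise minima and maxima in the spirit of [23]. All names (kernel, x_layer_step, the truncation depth K, and the parameter arrays) are illustrative placeholders rather than notation from the paper.

    import numpy as np

    def kernel(s):
        # assumed normalized delay kernel k_ij(s); a simple geometric choice, purely for illustration
        return 0.5 ** (s + 1)

    def x_layer_step(u_hist, v_hist, B, C, P, Q, R, W, D, S, T_, I, mu_bias, f, F, tau, K):
        """One forward step of the X-layer update in (1.7) at a single spatial point.

        u_hist[k], v_hist[k] hold the states k steps in the past (k = 0 is 'now');
        the infinite distributed delay is truncated after K past states.
        """
        u_now, u_tau, v_tau = u_hist[0], u_hist[tau], v_hist[tau]
        n = len(u_now)
        # truncated distributed-delay term: sum_{s=0}^{K-1} kernel(s) * F(u_j(t - s)) for each j
        dist = np.array([sum(kernel(s) * F(u_hist[min(s, len(u_hist) - 1)][j])
                             for s in range(K)) for j in range(n)])
        u_next = np.empty(n)
        for i in range(n):
            du = (-B[i] * u_now[i] + C[i] @ f(v_tau) + I[i]
                  + np.min(P[i] * F(u_tau)) + np.min(R[i] * dist)   # fuzzy AND -> min over j
                  + np.max(Q[i] * F(u_tau)) + np.max(W[i] * dist)   # fuzzy OR  -> max over j
                  + D[i] @ mu_bias + np.min(S[i] * mu_bias) + np.max(T_[i] * mu_bias))
            u_next[i] = u_now[i] + du   # Delta_t u_i(t, x) = u_i(t+1, x) - u_i(t, x)
        return u_next

The Y-layer update in (1.7) has exactly the same structure with the roles of the two layers exchanged.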

The rest of this paper is organized as follows. In Section 2, some notations and basic theorems or lemmas on time scales are given. In Section 3, the main results of global robust exponential synchronization are obtained by constructing the appropriate Lyapunov functional and applying inequality skills. In Section 4, one example is given to illustrate the effectiveness of our results.

2 Preliminaries

In this section, we first recall some basic definitions and lemmas on time scales which are used in what follows.

Let $\mathbb{T}$ be a nonempty closed subset (time scale) of $\mathbb{R}$. The forward and backward jump operators $\sigma,\rho:\mathbb{T}\to\mathbb{T}$ and the graininess $\mu:\mathbb{T}\to\mathbb{R}^+$ are defined, respectively, by

$$\sigma(t)=\inf\{s\in\mathbb{T}:s>t\},\qquad \rho(t)=\sup\{s\in\mathbb{T}:s<t\},\qquad \mu(t)=\sigma(t)-t.$$

A point $t\in\mathbb{T}$ is called left-dense if $t>\inf\mathbb{T}$ and $\rho(t)=t$, left-scattered if $\rho(t)<t$, right-dense if $t<\sup\mathbb{T}$ and $\sigma(t)=t$, and right-scattered if $\sigma(t)>t$. If $\mathbb{T}$ has a left-scattered maximum $m$, then $\mathbb{T}^k=\mathbb{T}\setminus\{m\}$; otherwise $\mathbb{T}^k=\mathbb{T}$. If $\mathbb{T}$ has a right-scattered minimum $m$, then $\mathbb{T}_k=\mathbb{T}\setminus\{m\}$; otherwise $\mathbb{T}_k=\mathbb{T}$.
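For instance (a routine check of these definitions, added for orientation): if $\mathbb{T}=\mathbb{R}$, then $\sigma(t)=t$ and $\mu(t)\equiv 0$, so every point is right-dense and left-dense; if $\mathbb{T}=h\mathbb{Z}$ with $h>0$, then $\sigma(t)=t+h$, $\rho(t)=t-h$ and $\mu(t)\equiv h$, so every point is both right-scattered and left-scattered. The time scale $\mathbb{T}=\{3n:n\in\mathbb{Z}\}$ used in the example of Section 4 is of the latter type with $h=3$.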

Definition 2.1 ([51])

A function $f:\mathbb{T}\to\mathbb{R}$ is called regulated provided its right-sided limits exist (finite) at all right-dense points in $\mathbb{T}$ and its left-sided limits exist (finite) at all left-dense points in $\mathbb{T}$.

Definition 2.2 ([51])

A function $f:\mathbb{T}\to\mathbb{R}$ is called rd-continuous provided it is continuous at right-dense points in $\mathbb{T}$ and its left-sided limits exist (finite) at left-dense points in $\mathbb{T}$. The set of rd-continuous functions $f:\mathbb{T}\to\mathbb{R}$ will be denoted by $C_{\mathrm{rd}}=C_{\mathrm{rd}}(\mathbb{T})=C_{\mathrm{rd}}(\mathbb{T},\mathbb{R})$.

Definition 2.3 ([51])

Assume $f:\mathbb{T}\to\mathbb{R}$ and $t\in\mathbb{T}^k$. Then we define $f^{\Delta}(t)$ to be the number (if it exists) with the property that, given any $\epsilon>0$, there exists a neighborhood $U$ of $t$ (i.e., $U=(t-\delta,t+\delta)\cap\mathbb{T}$ for some $\delta>0$) such that

$$\bigl|\,[f(\sigma(t))-f(s)]-f^{\Delta}(t)[\sigma(t)-s]\,\bigr|<\epsilon|\sigma(t)-s|$$

for all $s\in U$. We call $f^{\Delta}(t)$ the delta (or Hilger) derivative of $f$ at $t$. The set of functions $f:\mathbb{T}\to\mathbb{R}$ that are delta differentiable and whose derivative is rd-continuous is denoted by $C^1_{\mathrm{rd}}=C^1_{\mathrm{rd}}(\mathbb{T})=C^1_{\mathrm{rd}}(\mathbb{T},\mathbb{R})$.

If f is continuous, then f is rd-continuous. If f is rd-continuous, then f is regulated. If f is delta differentiable at t, then f is continuous at t.
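For orientation (standard special cases of this definition): if $\mathbb{T}=\mathbb{R}$, then $f^{\Delta}(t)=f'(t)$ is the usual derivative, whereas if $\mathbb{T}=\mathbb{Z}$, then $f^{\Delta}(t)=f(t+1)-f(t)$, which is exactly the forward difference $\Delta_t$ appearing in the discrete system (1.7).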

Lemma 2.1 ([51])

Let $f$ be regulated. Then there exists a function $F$ which is delta differentiable with region of differentiation $D$ such that $F^{\Delta}(t)=f(t)$ for all $t\in D$.

Definition 2.4 ([51])

Assume that f:TR is a regulated function. Any function F as in Lemma 2.1 is called a Δ-antiderivative of f. We define the indefinite integral of a regulated function f by

$$\int f(t)\,\Delta t=F(t)+C,$$

where $C$ is an arbitrary constant and $F$ is a $\Delta$-antiderivative of $f$. We define the Cauchy integral by $\int_a^b f(s)\,\Delta s=F(b)-F(a)$ for all $a,b\in\mathbb{T}$.

A function $F:\mathbb{T}\to\mathbb{R}$ is called an antiderivative of $f:\mathbb{T}\to\mathbb{R}$ provided $F^{\Delta}(t)=f(t)$ for all $t\in\mathbb{T}^k$.
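As a quick illustration (standard special cases, stated here only for orientation): if $\mathbb{T}=\mathbb{R}$, then $\int_a^b f(s)\,\Delta s=\int_a^b f(s)\,ds$ is the ordinary integral, while if $\mathbb{T}=\mathbb{Z}$ and $a<b$, then $\int_a^b f(s)\,\Delta s=\sum_{s=a}^{b-1}f(s)$; this is why the distributed-delay integrals of (1.1) become the infinite sums of the discrete system (1.7).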

Lemma 2.2 ([51])

If $a,b\in\mathbb{T}$, $\alpha,\beta\in\mathbb{R}$ and $f,g\in C(\mathbb{T},\mathbb{R})$, then

  1. (i)

    $\int_a^b[\alpha f(t)+\beta g(t)]\,\Delta t=\alpha\int_a^b f(t)\,\Delta t+\beta\int_a^b g(t)\,\Delta t$,

  2. (ii)

    if $f(t)\ge 0$ for all $a\le t\le b$, then $\int_a^b f(t)\,\Delta t\ge 0$,

  3. (iii)

    if $|f(t)|\le g(t)$ on $[a,b):=\{t\in\mathbb{T}:a\le t<b\}$, then $|\int_a^b f(t)\,\Delta t|\le\int_a^b g(t)\,\Delta t$.

A function $p:\mathbb{T}\to\mathbb{R}$ is called regressive if $1+\mu(t)p(t)\neq 0$ for all $t\in\mathbb{T}^k$. The set of all regressive and rd-continuous functions $f:\mathbb{T}\to\mathbb{R}$ will be denoted by $\mathcal{R}=\mathcal{R}(\mathbb{T})=\mathcal{R}(\mathbb{T},\mathbb{R})$. We define the set $\mathcal{R}^+$ of all positively regressive elements of $\mathcal{R}$ by $\mathcal{R}^+=\mathcal{R}^+(\mathbb{T},\mathbb{R})=\{p\in\mathcal{R}:1+\mu(t)p(t)>0\text{ for all }t\in\mathbb{T}\}$. If $p$ is a regressive function, then the generalized exponential function $e_p(t,s)$ is defined by $e_p(t,s)=\exp\{\int_s^t\xi_{\mu(\tau)}(p(\tau))\,\Delta\tau\}$ for all $s,t\in\mathbb{T}$, with the cylinder transformation

$$\xi_h(z)=\begin{cases}\dfrac{\operatorname{Log}(1+hz)}{h}, & \text{if } h\neq 0,\\[4pt] z, & \text{if } h=0.\end{cases}$$
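As a concrete illustration (a standard computation, added for orientation): if $\mathbb{T}=\mathbb{R}$, then $\xi_0(z)=z$ and $e_p(t,s)=\exp(\int_s^t p(\tau)\,d\tau)$, while if $\mathbb{T}=\mathbb{Z}$ and $p$ is a positive constant, then $\xi_1(p)=\operatorname{Log}(1+p)$ and $e_p(t,s)=(1+p)^{t-s}$, so that, for example, $e_1(t,0)=2^t$.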

Let $p,q:\mathbb{T}\to\mathbb{R}$ be two regressive functions. We define

$$p\oplus q=p+q+\mu pq,\qquad \ominus p=-\frac{p}{1+\mu p},\qquad p\ominus q=p\oplus(\ominus q).$$

If $p\in\mathcal{R}^+$, then $\ominus p\in\mathcal{R}^+$.
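For example (a direct check under the definitions above): for constant $p=q=1$ one gets $1\oplus 1=2+\mu$, so on $\mathbb{T}=\mathbb{R}$ ($\mu\equiv 0$) $e_{1\oplus 1}(\tau,0)=e^{2\tau}$, while on $\mathbb{T}=\mathbb{Z}$ ($\mu\equiv 1$) $1\oplus 1=3$ and $e_{1\oplus 1}(\tau,0)=4^{\tau}$; these are precisely the factors that appear in conditions (H6) and (H9) below.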

The generalized exponential function has the following properties.

Lemma 2.3 ([51])

Assume that $p,q:\mathbb{T}\to\mathbb{R}$ are two regressive functions. Then

  1. (i)

    e p (σ(t),s)=(1+μ(t)p(t)) e p (t,s);

  2. (ii)

    $1/e_p(t,s)=e_{\ominus p}(t,s)$;

  3. (iii)

    $e_p(t,s)=1/e_p(s,t)=e_{\ominus p}(s,t)$;

  4. (iv)

    e p (t,s) e p (s,r)= e p (t,r);

  5. (v)

    [ e p ( t , s ) ] Δ =p(t) e p (t,s);

  6. (vi)

    $[e_p(c,\cdot)]^{\Delta}=-p[e_p(c,\cdot)]^{\sigma}$ for all $c\in\mathbb{T}$;

  7. (vii)

    $(d/dz)[e_z(t,s)]=\bigl[\int_s^t\frac{1}{1+\mu(\tau)z}\,\Delta\tau\bigr]e_z(t,s)$.

Lemma 2.4 ([51])

Assume that $f,g:\mathbb{T}\to\mathbb{R}$ are delta differentiable at $t\in\mathbb{T}^k$. Then

$$(fg)^{\Delta}(t)=f^{\Delta}(t)g(t)+f(\sigma(t))g^{\Delta}(t)=g^{\Delta}(t)f(t)+g(\sigma(t))f^{\Delta}(t).$$
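As a sanity check of this product rule on $\mathbb{T}=\mathbb{Z}$ (added for illustration): $f^{\Delta}(t)g(t)+f(\sigma(t))g^{\Delta}(t)=[f(t+1)-f(t)]g(t)+f(t+1)[g(t+1)-g(t)]=f(t+1)g(t+1)-f(t)g(t)=(fg)^{\Delta}(t)$.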

Lemma 2.5 ([52])

For each $t\in\mathbb{T}$, let $N$ be a neighborhood of $t$. Then, for $V\in C_{\mathrm{rd}}(\mathbb{T},\mathbb{R}^+)$, define $D^+V^{\Delta}(t)$ to mean that, given $\epsilon>0$, there exists a right neighborhood $N_{\epsilon}\subset N$ of $t$ such that

$$\frac{1}{\mu(t,s)}\bigl[V(\sigma(t))-V(t)-\mu(t,s)f(t)\bigr]<D^{+}V^{\Delta}(t)+\epsilon\quad\text{for each } s\in N_{\epsilon},\ s>t,$$

where $\mu(t,s)\equiv\sigma(t)-s$. If $t$ is right-scattered and $V(t)$ is continuous at $t$, this reduces to $D^{+}V^{\Delta}(t)=\frac{V(\sigma(t))-V(t)}{\sigma(t)-t}$.
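In particular (a direct consequence of the last formula, noted here for orientation), on $\mathbb{T}=\mathbb{Z}$ every point is right-scattered with $\sigma(t)-t=1$, so $D^{+}V^{\Delta}(t)=V(t+1)-V(t)$ is simply the forward difference of the Lyapunov functional.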

Next, we introduce the Banach space which is suitable for system (1.1)-(1.3).

Let $\Omega=\{x=(x_1,x_2,\ldots,x_l)^T:|x_i|<l_i,\ i=1,2,\ldots,l\}$ be an open bounded domain in $\mathbb{R}^l$ with smooth boundary $\partial\Omega$. Let $C_{\mathrm{rd}}(\mathbb{T}\times\Omega,\mathbb{R}^{n+m})$ be the set of all vector functions $y(t,x)=(y_1(t,x),y_2(t,x),\ldots,y_{n+m}(t,x))^T$ which are rd-continuous with respect to $t\in\mathbb{T}$ and continuous with respect to $x\in\Omega$. For every $t\in\mathbb{T}$ and $x\in\Omega$, we define the set $C^t_{\mathbb{T}}=\{y(t,\cdot):y\in C(\Omega,\mathbb{R}^{n+m})\}$. Then $C^t_{\mathbb{T}}$ is a Banach space with the norm $\|y(t,\cdot)\|=(\sum_{i=1}^{n+m}\|y_i(t,\cdot)\|_2^2)^{1/2}$, where $\|y_i(t,\cdot)\|_2=(\int_{\Omega}|y_i(t,x)|^2\,dx)^{1/2}$. Let $C_{\mathrm{rd}}([-\tau,0]_{\mathbb{T}}\times\Omega,\mathbb{R}^{n+m})$ consist of all functions $f(t,x)$ which map $[-\tau,0]_{\mathbb{T}}\times\Omega$ into $\mathbb{R}^{n+m}$ and which are rd-continuous with respect to $t\in[-\tau,0]_{\mathbb{T}}$ and continuous with respect to $x\in\Omega$. For every $t\in[-\tau,0]_{\mathbb{T}}$ and $x\in\Omega$, we define the set $C^t_{[-\tau,0]_{\mathbb{T}}}=\{u(t,\cdot):u\in C(\Omega,\mathbb{R}^{n+m})\}$. Then $C^t_{[-\tau,0]_{\mathbb{T}}}$ is a Banach space equipped with the norm $\|\psi\|_0=(\sum_{i=1}^{n+m}\|\psi_i\|_1^2)^{1/2}$, where $\psi(t,x)=(\psi_1(t,x),\psi_2(t,x),\ldots,\psi_{n+m}(t,x))^T\in C^t_{[-\tau,0]_{\mathbb{T}}}$, $\|\psi_i\|_1=(\int_{\Omega}|\psi_i(\cdot,x)|_{\tau}^2\,dx)^{1/2}$ and $|\psi_i(\cdot,x)|_{\tau}=\sup_{s\in[-\tau,0]_{\mathbb{T}}}|\psi_i(s,x)|$.

In order to achieve the global robust exponential synchronization, the following system (2.1)-(2.3) is the controlled slave system corresponding to the master system (1.1)-(1.3):

$$\left\{\begin{aligned}
\tilde{u}_i^{\Delta}(t,x)={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(a_{ik}\frac{\partial\tilde{u}_i}{\partial x_k}\Bigr)-b_i\tilde{u}_i(t,x)+\sum_{j=1}^{m}c_{ij}f_j\bigl(\tilde{v}_j(t-\tau,x)\bigr)+I_i\\
&+\bigwedge_{j=1}^{n}p_{ij}F_j\bigl(\tilde{u}_j(t-\tau,x)\bigr)+\bigwedge_{j=1}^{n}r_{ij}\int_{0}^{+\infty}k_{ij}(s)F_j\bigl(\tilde{u}_j(t-s,x)\bigr)\,\Delta s\\
&+\bigvee_{j=1}^{n}q_{ij}F_j\bigl(\tilde{u}_j(t-\tau,x)\bigr)+\bigvee_{j=1}^{n}w_{ij}\int_{0}^{+\infty}k_{ij}(s)F_j\bigl(\tilde{u}_j(t-s,x)\bigr)\,\Delta s\\
&+\sum_{j=1}^{n}d_{ij}\mu_j+\bigwedge_{j=1}^{n}S_{ij}\mu_j+\bigvee_{j=1}^{n}T_{ij}\mu_j+m_iE_i(t,x),\\
\tilde{v}_j^{\Delta}(t,x)={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(\xi_{jk}\frac{\partial\tilde{v}_j}{\partial x_k}\Bigr)-\eta_j\tilde{v}_j(t,x)+\sum_{i=1}^{n}\zeta_{ji}g_i\bigl(\tilde{u}_i(t-\tau,x)\bigr)+J_j\\
&+\bigwedge_{i=1}^{m}\lambda_{ji}G_i\bigl(\tilde{v}_i(t-\tau,x)\bigr)+\bigwedge_{i=1}^{m}\rho_{ji}\int_{0}^{+\infty}\kappa_{ji}(s)G_i\bigl(\tilde{v}_i(t-s,x)\bigr)\,\Delta s\\
&+\bigvee_{i=1}^{m}\pi_{ji}G_i\bigl(\tilde{v}_i(t-\tau,x)\bigr)+\bigvee_{i=1}^{m}\sigma_{ji}\int_{0}^{+\infty}\kappa_{ji}(s)G_i\bigl(\tilde{v}_i(t-s,x)\bigr)\,\Delta s\\
&+\sum_{i=1}^{m}h_{ji}\nu_i+\bigwedge_{i=1}^{m}M_{ji}\nu_i+\bigvee_{i=1}^{m}N_{ji}\nu_i+m_{n+j}E_{n+j}(t,x),
\end{aligned}\right.$$
(2.1)

subject to the following initial conditions

$$\begin{cases}
\tilde{u}_i(s,x)=\tilde{\phi}_i(s,x), & (s,x)\in[-\tau,0]_{\mathbb{T}}\times\Omega,\\
\tilde{v}_j(s,x)=\tilde{\varphi}_j(s,x), & (s,x)\in[-\tau,0]_{\mathbb{T}}\times\Omega,
\end{cases}$$
(2.2)

and Dirichlet boundary conditions

$$\begin{cases}
\tilde{u}_i(t,x)=0, & (t,x)\in[0,\infty)_{\mathbb{T}}\times\partial\Omega,\\
\tilde{v}_j(t,x)=0, & (t,x)\in[0,\infty)_{\mathbb{T}}\times\partial\Omega,
\end{cases}$$
(2.3)

where $E_i(t,x)=\tilde{u}_i(t,x)-u_i(t,x)$ ($i=1,2,\ldots,n$) and $E_{n+j}(t,x)=\tilde{v}_j(t,x)-v_j(t,x)$ ($j=1,2,\ldots,m$) are the error functions, and $m_k>0$ ($k=1,2,\ldots,n+m$) are constant error weighting coefficients. $\tilde{u}(t,x)=(\tilde{u}_1(t,x),\tilde{u}_2(t,x),\ldots,\tilde{u}_n(t,x))^T\in C_{\mathrm{rd}}(\mathbb{T}\times\Omega,\mathbb{R}^n)$, $\tilde{v}(t,x)=(\tilde{v}_1(t,x),\tilde{v}_2(t,x),\ldots,\tilde{v}_m(t,x))^T\in C_{\mathrm{rd}}(\mathbb{T}\times\Omega,\mathbb{R}^m)$, $\tilde{\phi}(t,x)=(\tilde{\phi}_1(t,x),\tilde{\phi}_2(t,x),\ldots,\tilde{\phi}_n(t,x))^T\in C([-\tau,0]_{\mathbb{T}}\times\Omega,\mathbb{R}^n)$, and $\tilde{\varphi}(t,x)=(\tilde{\varphi}_1(t,x),\tilde{\varphi}_2(t,x),\ldots,\tilde{\varphi}_m(t,x))^T\in C([-\tau,0]_{\mathbb{T}}\times\Omega,\mathbb{R}^m)$.

From (1.1)-(1.3) and (2.1)-(2.3), we obtain the error system (2.4)-(2.6) as follows:

$$\left\{\begin{aligned}
E_i^{\Delta}(t,x)={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(a_{ik}\frac{\partial E_i}{\partial x_k}\Bigr)+(m_i-b_i)E_i(t,x)\\
&+\sum_{j=1}^{m}c_{ij}\bigl[f_j(\tilde{v}_j(t-\tau,x))-f_j(v_j(t-\tau,x))\bigr]\\
&+\bigwedge_{j=1}^{n}p_{ij}\bigl[F_j(\tilde{u}_j(t-\tau,x))-F_j(u_j(t-\tau,x))\bigr]\\
&+\bigwedge_{j=1}^{n}r_{ij}\int_{0}^{+\infty}k_{ij}(s)\bigl[F_j(\tilde{u}_j(t-s,x))-F_j(u_j(t-s,x))\bigr]\,\Delta s\\
&+\bigvee_{j=1}^{n}q_{ij}\bigl[F_j(\tilde{u}_j(t-\tau,x))-F_j(u_j(t-\tau,x))\bigr]\\
&+\bigvee_{j=1}^{n}w_{ij}\int_{0}^{+\infty}k_{ij}(s)\bigl[F_j(\tilde{u}_j(t-s,x))-F_j(u_j(t-s,x))\bigr]\,\Delta s,\\
E_{n+j}^{\Delta}(t,x)={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(\xi_{jk}\frac{\partial E_{n+j}}{\partial x_k}\Bigr)+(m_{n+j}-\eta_j)E_{n+j}(t,x)\\
&+\sum_{i=1}^{n}\zeta_{ji}\bigl[g_i(\tilde{u}_i(t-\tau,x))-g_i(u_i(t-\tau,x))\bigr]\\
&+\bigwedge_{i=1}^{m}\lambda_{ji}\bigl[G_i(\tilde{v}_i(t-\tau,x))-G_i(v_i(t-\tau,x))\bigr]\\
&+\bigwedge_{i=1}^{m}\rho_{ji}\int_{0}^{+\infty}\kappa_{ji}(s)\bigl[G_i(\tilde{v}_i(t-s,x))-G_i(v_i(t-s,x))\bigr]\,\Delta s\\
&+\bigvee_{i=1}^{m}\pi_{ji}\bigl[G_i(\tilde{v}_i(t-\tau,x))-G_i(v_i(t-\tau,x))\bigr]\\
&+\bigvee_{i=1}^{m}\sigma_{ji}\int_{0}^{+\infty}\kappa_{ji}(s)\bigl[G_i(\tilde{v}_i(t-s,x))-G_i(v_i(t-s,x))\bigr]\,\Delta s,
\end{aligned}\right.$$
(2.4)

subject to the following initial conditions

$$\begin{cases}
E_i(s,x)=\tilde{\phi}_i(s,x)-\phi_i(s,x), & (s,x)\in[-\tau,0]_{\mathbb{T}}\times\Omega,\\
E_{n+j}(s,x)=\tilde{\varphi}_j(s,x)-\varphi_j(s,x), & (s,x)\in[-\tau,0]_{\mathbb{T}}\times\Omega,
\end{cases}$$
(2.5)

and Dirichlet boundary conditions

$$\begin{cases}
E_i(t,x)=0, & (t,x)\in[0,\infty)_{\mathbb{T}}\times\partial\Omega,\\
E_{n+j}(t,x)=0, & (t,x)\in[0,\infty)_{\mathbb{T}}\times\partial\Omega.
\end{cases}$$
(2.6)

The following definition is significant to study the global robust exponential synchronization of coupled neural networks (1.1)-(1.3) and (2.1)-(2.3).

Definition 2.5 Let $y(t,x)=(u_1(t,x),\ldots,u_n(t,x),v_1(t,x),\ldots,v_m(t,x))^T\in\mathbb{R}^{n+m}$ and $\tilde{y}(t,x)=(\tilde{u}_1(t,x),\ldots,\tilde{u}_n(t,x),\tilde{v}_1(t,x),\ldots,\tilde{v}_m(t,x))^T\in\mathbb{R}^{n+m}$ be the solution vectors of system (1.1)-(1.3) and its controlled slave system (2.1)-(2.3), respectively, and let $E(t,x)=(E_1(t,x),E_2(t,x),\ldots,E_{n+m}(t,x))^T\in\mathbb{R}^{n+m}$ be the error vector. Then the coupled systems (1.1)-(1.3) and (2.1)-(2.3) are said to be globally exponentially synchronized if there exist a controlled input vector $z(t,x)=(m_1E_1(t,x),m_2E_2(t,x),\ldots,m_{n+m}E_{n+m}(t,x))^T$, a positive constant $\alpha\in\mathcal{R}^+$ and $M\ge 1$ such that

$$\|E(t,\cdot)\|=\|\tilde{y}(t,\cdot)-y(t,\cdot)\|\le Me_{\ominus\alpha}(t,0),\quad t\in[0,\infty)_{\mathbb{T}},$$

where α is called the degree of exponential synchronization on time scales.
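For orientation (a direct specialization of the bound above): on $\mathbb{T}=\mathbb{R}$ the estimate reads $\|E(t,\cdot)\|\le Me^{-\alpha t}$, while on $\mathbb{T}=\mathbb{Z}$ it reads $\|E(t,\cdot)\|\le M(1+\alpha)^{-t}$, since $e_{\ominus\alpha}(t,0)=1/e_{\alpha}(t,0)$ by Lemma 2.3.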

3 Main results

In this section, we will consider the global robust exponential synchronization of coupled systems (1.1)-(1.3) and (2.1)-(2.3). At first, we need to introduce some useful lemmas.

Lemma 3.1 ([53])

Let $\Omega$ be a cube $|x_i|<l_i$ ($i=1,2,\ldots,l$), and assume that $h(x)$ is a real-valued function belonging to $C^1(\Omega)$ which vanishes on the boundary $\partial\Omega$ of $\Omega$, i.e., $h(x)|_{\partial\Omega}=0$. Then

$$\int_{\Omega}h^2(x)\,dx\le l_i^2\int_{\Omega}\Bigl|\frac{\partial h}{\partial x_i}\Bigr|^2dx.$$

Lemma 3.2 ([23])

Suppose that y= ( y 1 , y 2 , , y n + m ) T and y ˜ = ( y ˜ 1 , y ˜ 2 , , y ˜ n + m ) T are the solutions to systems (1.1)-(1.3) and (2.1)-(2.3), respectively, then

$$\begin{aligned}
&\Bigl|\bigwedge_{j=1}^{m}p_{ij}f_j(\tilde{y}_j)-\bigwedge_{j=1}^{m}p_{ij}f_j(y_j)\Bigr|\le\sum_{j=1}^{m}|p_{ij}|\,\bigl|f_j(\tilde{y}_j)-f_j(y_j)\bigr|,\qquad
\Bigl|\bigvee_{j=1}^{m}q_{ij}f_j(\tilde{y}_j)-\bigvee_{j=1}^{m}q_{ij}f_j(y_j)\Bigr|\le\sum_{j=1}^{m}|q_{ij}|\,\bigl|f_j(\tilde{y}_j)-f_j(y_j)\bigr|,\\
&\Bigl|\bigwedge_{j=1}^{n}p_{ij}g_j(\tilde{y}_j)-\bigwedge_{j=1}^{n}p_{ij}g_j(y_j)\Bigr|\le\sum_{j=1}^{n}|p_{ij}|\,\bigl|g_j(\tilde{y}_j)-g_j(y_j)\bigr|,\qquad
\Bigl|\bigvee_{j=1}^{n}q_{ij}g_j(\tilde{y}_j)-\bigvee_{j=1}^{n}q_{ij}g_j(y_j)\Bigr|\le\sum_{j=1}^{n}|q_{ij}|\,\bigl|g_j(\tilde{y}_j)-g_j(y_j)\bigr|,\\
&\Bigl|\bigwedge_{j=1}^{n}p_{ij}F_j(\tilde{y}_j)-\bigwedge_{j=1}^{n}p_{ij}F_j(y_j)\Bigr|\le\sum_{j=1}^{n}|p_{ij}|\,\bigl|F_j(\tilde{y}_j)-F_j(y_j)\bigr|,\qquad
\Bigl|\bigvee_{j=1}^{n}q_{ij}F_j(\tilde{y}_j)-\bigvee_{j=1}^{n}q_{ij}F_j(y_j)\Bigr|\le\sum_{j=1}^{n}|q_{ij}|\,\bigl|F_j(\tilde{y}_j)-F_j(y_j)\bigr|,\\
&\Bigl|\bigwedge_{j=1}^{m}p_{ij}G_j(\tilde{y}_j)-\bigwedge_{j=1}^{m}p_{ij}G_j(y_j)\Bigr|\le\sum_{j=1}^{m}|p_{ij}|\,\bigl|G_j(\tilde{y}_j)-G_j(y_j)\bigr|,\qquad
\Bigl|\bigvee_{j=1}^{m}q_{ij}G_j(\tilde{y}_j)-\bigvee_{j=1}^{m}q_{ij}G_j(y_j)\Bigr|\le\sum_{j=1}^{m}|q_{ij}|\,\bigl|G_j(\tilde{y}_j)-G_j(y_j)\bigr|.
\end{aligned}$$
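To see the mechanism behind these estimates, consider a toy instance (illustrative numbers only) with $m=2$, $p_{i1}=p_{i2}=1$, $f(\tilde{y})=(0.2,0.9)$ and $f(y)=(0.5,0.3)$: then $|\bigwedge_j f_j(\tilde{y}_j)-\bigwedge_j f_j(y_j)|=|\min\{0.2,0.9\}-\min\{0.5,0.3\}|=0.1\le|0.2-0.5|+|0.9-0.3|=0.9$.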

Throughout this paper, we always assume that:

(H1) The neuron activation functions $f_j$, $F_i$, $g_i$ and $G_j$ are Lipschitz continuous; that is, there exist positive constants $\alpha_j$, $\beta_i$, $\gamma_i$ and $\delta_j$ such that $|f_j(\xi)-f_j(\eta)|\le\alpha_j|\xi-\eta|$, $|F_i(\xi)-F_i(\eta)|\le\beta_i|\xi-\eta|$, $|g_i(\xi)-g_i(\eta)|\le\gamma_i|\xi-\eta|$ and $|G_j(\xi)-G_j(\eta)|\le\delta_j|\xi-\eta|$ for any $\xi,\eta\in\mathbb{R}$, $i=1,2,\ldots,n$; $j=1,2,\ldots,m$.

(H2) The delay kernels $k_{ij},\kappa_{ji}:[0,+\infty)_{\mathbb{T}}\to[0,+\infty)$ ($i=1,2,\ldots,n$; $j=1,2,\ldots,m$) are real-valued non-negative rd-continuous functions and satisfy the following conditions:

$$\int_{0}^{\infty}k_{ij}(s)\,\Delta s=1,\qquad\int_{0}^{\infty}s\,k_{ij}(s)\,\Delta s<\infty,\qquad\int_{0}^{\infty}\kappa_{ji}(s)\,\Delta s=1,\qquad\int_{0}^{\infty}s\,\kappa_{ji}(s)\,\Delta s<\infty,$$

and there exist constants ω 1 >0, ω 2 >0 such that

$$\int_{0}^{\infty}k_{ij}(s)e_{\omega_1}(s,0)\,\Delta s<\infty,\qquad\int_{0}^{\infty}\kappa_{ji}(s)e_{\omega_2}(s,0)\,\Delta s<\infty.$$

(H3) The following conditions are always satisfied:

$$\begin{aligned}
&-\sum_{k=1}^{l}\frac{2\underline{a}_{ik}}{l_k^2}+2(m_i-\underline{b}_i)+\sum_{j=1}^{m}\alpha_j|\bar{c}_{ij}|+\sum_{\nu=1}^{n}\beta_{\nu}\bigl(|\bar{p}_{i\nu}|+|\bar{q}_{i\nu}|+|\bar{r}_{i\nu}|+|\bar{w}_{i\nu}|\bigr)+\sum_{\nu=1}^{n}\beta_i\bigl(|\bar{p}_{\nu i}|+|\bar{q}_{\nu i}|\bigr)e_{1\oplus 1}(\tau,0)\\
&\qquad+\sum_{\nu=1}^{n}\beta_i\bigl(|\bar{r}_{\nu i}|+|\bar{w}_{\nu i}|\bigr)\int_{0}^{+\infty}k_{\nu i}(s)e_{1\oplus 1}(s,0)\,\Delta s+\sum_{j=1}^{m}\gamma_i|\bar{\zeta}_{ji}|e_{1\oplus 1}(\tau,0)<0,\quad i=1,2,\ldots,n;\\
&-\sum_{k=1}^{l}\frac{2\underline{\xi}_{jk}}{l_k^2}+2(m_{n+j}-\underline{\eta}_j)+\sum_{i=1}^{n}\gamma_i|\bar{\zeta}_{ji}|+\sum_{\varrho=1}^{m}\delta_{\varrho}\bigl(|\bar{\lambda}_{j\varrho}|+|\bar{\pi}_{j\varrho}|+|\bar{\rho}_{j\varrho}|+|\bar{\sigma}_{j\varrho}|\bigr)+\sum_{\varrho=1}^{m}\delta_j\bigl(|\bar{\lambda}_{\varrho j}|+|\bar{\pi}_{\varrho j}|\bigr)e_{1\oplus 1}(\tau,0)\\
&\qquad+\sum_{\varrho=1}^{m}\delta_j\bigl(|\bar{\rho}_{\varrho j}|+|\bar{\sigma}_{\varrho j}|\bigr)\int_{0}^{+\infty}\kappa_{\varrho j}(s)e_{1\oplus 1}(s,0)\,\Delta s+\sum_{i=1}^{n}\alpha_j|\bar{c}_{ij}|e_{1\oplus 1}(\tau,0)<0,\quad j=1,2,\ldots,m.
\end{aligned}$$

Theorem 3.1 Assume that (H1)-(H3) hold. Then the controlled slave system (2.1)-(2.3) is globally robustly exponentially synchronous with the master system (1.1)-(1.3).

Proof Calculating the delta derivative of $\|E_i(t,\cdot)\|_2^2$ ($i=1,2,\ldots,n$) and $\|E_{n+j}(t,\cdot)\|_2^2$ ($j=1,2,\ldots,m$) along the solutions of (2.1), we obtain

( E i ( t , ) 2 2 ) Δ = Ω ( ( E i ( t , x ) ) 2 ) Δ d x = Ω ( E i ( t , x ) + E i ( σ ( t ) , x ) ) ( E i ( t , x ) ) Δ d x = Ω ( 2 E i ( t , x ) + μ ( t ) ( E i ( t , x ) ) Δ ) ( E i ( t , x ) ) Δ d x = 2 Ω E i ( t , x ) ( E i ( t , x ) ) Δ d x + μ ( t ) Ω ( ( E i ( t , x ) ) Δ ) 2 d x = 2 k = 1 l Ω E i ( t , x ) x k ( a i k E i x k ) d x + 2 Ω ( m i b i ) ( E i ( t , x ) ) 2 d x + 2 Ω E i ( t , x ) j = 1 m c i j [ f j ( v ˜ j ( t τ , x ) ) f j ( v j ( t τ , x ) ) ] d x + 2 Ω E i ( t , x ) j = 1 n p i j [ F j ( u ˜ j ( t τ , x ) ) F j ( u j ( t τ , x ) ) ] d x + 2 Ω E i ( t , x ) j = 1 n q i j [ F j ( u ˜ j ( t τ , x ) ) F j ( u j ( t τ , x ) ) ] d x + 2 Ω E i ( t , x ) [ j = 1 n r i j 0 + k i j ( s ) [ F j ( u ˜ j ( t s , x ) ) F j ( u j ( t s , x ) ) ] Δ s ] d x + 2 Ω E i ( t , x ) [ j = 1 n w i j 0 + k i j ( s ) [ F j ( u ˜ j ( t s , x ) ) F j ( u j ( t s , x ) ) ] Δ s ] d x + μ ( t ) ( E i ( t , ) ) Δ 2 2
(3.1)

and

( E n + j ( t , ) 2 2 ) Δ = Ω ( ( E n + j ( t , x ) ) 2 ) Δ d x = Ω ( E n + j ( t , x ) + E n + j ( σ ( t ) , x ) ) ( E n + j ( t , x ) ) Δ d x = Ω ( 2 E n + j ( t , x ) + μ ( t ) ( E n + j ( t , x ) ) Δ ) ( E n + j ( t , x ) ) Δ d x = 2 Ω E n + j ( t , x ) ( E i ( t , x ) ) Δ d x + μ ( t ) Ω ( ( E n + j ( t , x ) ) Δ ) 2 d x = 2 k = 1 l Ω E n + j ( t , x ) x k ( ξ j k E n + j x k ) d x + 2 Ω ( m n + j η j ) ( E n + j ( t , x ) ) 2 d x + 2 Ω E n + j ( t , x ) i = 1 n ζ j i [ g i ( u ˜ i ( t τ , x ) ) g i ( u i ( t τ , x ) ) ] d x + 2 Ω E n + j ( t , x ) i = 1 m λ j i [ G i ( v ˜ i ( t τ , x ) ) G i ( v i ( t τ , x ) ) ] d x + 2 Ω E n + j ( t , x ) i = 1 m π j i [ G i ( v ˜ i ( t τ , x ) ) G i ( v i ( t τ , x ) ) ] d x + 2 Ω E n + j ( t , x ) [ i = 1 m ρ j i 0 + κ j i ( s ) [ G i ( v ˜ i ( t s , x ) ) G i ( v i ( t s , x ) ) ] Δ s ] d x + 2 Ω E n + j ( t , x ) [ i = 1 m σ j i 0 + κ j i ( s ) [ G i ( v ˜ i ( t s , x ) ) G i ( v i ( t s , x ) ) ] Δ s ] d x + μ ( t ) ( E n + j ( t , ) ) Δ 2 2 .
(3.2)

Employing Green’s formula [17], Dirichlet boundary condition (2.6) and Lemma 3.1, we have

$$\begin{aligned}
\sum_{k=1}^{l}\int_{\Omega}E_i(t,x)\frac{\partial}{\partial x_k}\Bigl(a_{ik}\frac{\partial E_i}{\partial x_k}\Bigr)dx
&=\sum_{k=1}^{l}\int_{\partial\Omega}a_{ik}E_i(t,x)\frac{\partial E_i(t,x)}{\partial x_k}n_k\,dS-\sum_{k=1}^{l}\int_{\Omega}a_{ik}\Bigl(\frac{\partial E_i(t,x)}{\partial x_k}\Bigr)^2dx\\
&=-\sum_{k=1}^{l}\int_{\Omega}a_{ik}\Bigl(\frac{\partial E_i(t,x)}{\partial x_k}\Bigr)^2dx\le-\sum_{k=1}^{l}\int_{\Omega}\frac{a_{ik}}{l_k^2}\bigl(E_i(t,x)\bigr)^2dx
\end{aligned}$$
(3.3)

and

$$\begin{aligned}
\sum_{k=1}^{l}\int_{\Omega}E_{n+j}(t,x)\frac{\partial}{\partial x_k}\Bigl(\xi_{jk}\frac{\partial E_{n+j}}{\partial x_k}\Bigr)dx
&=\sum_{k=1}^{l}\int_{\partial\Omega}\xi_{jk}E_{n+j}(t,x)\frac{\partial E_{n+j}(t,x)}{\partial x_k}n_k\,dS-\sum_{k=1}^{l}\int_{\Omega}\xi_{jk}\Bigl(\frac{\partial E_{n+j}(t,x)}{\partial x_k}\Bigr)^2dx\\
&=-\sum_{k=1}^{l}\int_{\Omega}\xi_{jk}\Bigl(\frac{\partial E_{n+j}(t,x)}{\partial x_k}\Bigr)^2dx\le-\sum_{k=1}^{l}\int_{\Omega}\frac{\xi_{jk}}{l_k^2}\bigl(E_{n+j}(t,x)\bigr)^2dx.
\end{aligned}$$
(3.4)

By applying Lemma 3.2, (3.1)-(3.4), conditions (H1)-(H3) and the Hölder inequality, and noting the robustness of parameter intervals, we get

( E i ( t , ) 2 2 ) Δ k = 1 l 2 a ̲ i k l k 2 E i ( t , ) 2 2 + 2 ( m i b ̲ i ) E i ( t , ) 2 2 + 2 j = 1 m α j | c ¯ i j | E n + j ( t τ , ) 2 E i ( t , ) 2 + 2 ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) E ν ( t τ , ) 2 E i ( t , ) 2 + 2 ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) 0 + k i ν ( s ) E ν ( t s , ) 2 E i ( t , ) 2 Δ s + μ ( t ) ( E i ( t , ) ) Δ 2 2 k = 1 l 2 a ̲ i k l k 2 E i ( t , ) 2 2 + 2 ( m i b ̲ i ) E i ( t , ) 2 2 + j = 1 m α j | c ¯ i j | [ E n + j ( t τ , ) 2 2 + E i ( t , ) 2 2 ] + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) [ E ν ( t τ , ) 2 2 + E i ( t , ) 2 2 ] + ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) [ 0 + k i ν ( s ) E ν ( t s , ) 2 2 Δ s + E i ( t , ) 2 2 ] + μ ( t ) Q ( t ) ( E i ( t , ) ) 2 2 = [ k = 1 l 2 a ̲ i k l k 2 + 2 ( m i b ̲ i ) + j = 1 m α j | c ¯ i j | + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | + | r ¯ i ν | + | w ¯ i ν | ) + μ ( t ) Q ( t ) ] E i ( t , ) 2 2 + j = 1 m α j | c ¯ i j | × E n + j ( t τ , ) 2 2 + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) × E ν ( t τ , ) 2 2 + ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) × 0 + k i ν ( s ) E ν ( t s , ) 2 2 Δ s ,
(3.5)

where $\|(E_i(t,\cdot))^{\Delta}\|_2^2=Q(t)\|E_i(t,\cdot)\|_2^2$, $Q(t)\ge 0$, $i=1,2,\ldots,n$.

Similar to the arguments of (3.5), we obtain

$$\begin{aligned}
\bigl(\|E_{n+j}(t,\cdot)\|_2^2\bigr)^{\Delta}\le{}&\Bigl[-\sum_{k=1}^{l}\frac{2\underline{\xi}_{jk}}{l_k^2}+2(m_{n+j}-\underline{\eta}_j)+\sum_{i=1}^{n}\gamma_i|\bar{\zeta}_{ji}|+\sum_{\varrho=1}^{m}\delta_{\varrho}\bigl(|\bar{\lambda}_{j\varrho}|+|\bar{\pi}_{j\varrho}|+|\bar{\rho}_{j\varrho}|+|\bar{\sigma}_{j\varrho}|\bigr)+\mu(t)R(t)\Bigr]\|E_{n+j}(t,\cdot)\|_2^2\\
&+\sum_{i=1}^{n}\gamma_i|\bar{\zeta}_{ji}|\,\|E_i(t-\tau,\cdot)\|_2^2+\sum_{\varrho=1}^{m}\delta_{\varrho}\bigl(|\bar{\lambda}_{j\varrho}|+|\bar{\pi}_{j\varrho}|\bigr)\|E_{n+\varrho}(t-\tau,\cdot)\|_2^2\\
&+\sum_{\varrho=1}^{m}\delta_{\varrho}\bigl(|\bar{\rho}_{j\varrho}|+|\bar{\sigma}_{j\varrho}|\bigr)\int_{0}^{+\infty}\kappa_{j\varrho}(s)\|E_{n+\varrho}(t-s,\cdot)\|_2^2\,\Delta s,
\end{aligned}$$
(3.6)

where $\|(E_{n+j}(t,\cdot))^{\Delta}\|_2^2=R(t)\|E_{n+j}(t,\cdot)\|_2^2$, $R(t)\ge 0$, $j=1,2,\ldots,m$.

If the first inequality of condition (H3) holds, there exists a positive number $\varsigma>0$ (possibly sufficiently small) such that

$$\begin{aligned}
&-\sum_{k=1}^{l}\frac{2\underline{a}_{ik}}{l_k^2}+2(m_i-\underline{b}_i)+\sum_{j=1}^{m}\alpha_j|\bar{c}_{ij}|+\sum_{\nu=1}^{n}\beta_{\nu}\bigl(|\bar{p}_{i\nu}|+|\bar{q}_{i\nu}|+|\bar{r}_{i\nu}|+|\bar{w}_{i\nu}|\bigr)+\sum_{\nu=1}^{n}\beta_i\bigl(|\bar{p}_{\nu i}|+|\bar{q}_{\nu i}|\bigr)e_{1\oplus 1}(\tau,0)\\
&\qquad+\sum_{\nu=1}^{n}\beta_i\bigl(|\bar{r}_{\nu i}|+|\bar{w}_{\nu i}|\bigr)\int_{0}^{+\infty}k_{\nu i}(s)e_{1\oplus 1}(s,0)\,\Delta s+\sum_{j=1}^{m}\gamma_i|\bar{\zeta}_{ji}|e_{1\oplus 1}(\tau,0)+\varsigma<0,\quad i=1,2,\ldots,n.
\end{aligned}$$
(3.7)

Now we consider the functions

h i ( z i ) = z i z i k = 1 l 2 a ̲ i k l k 2 + 2 ( m i b ̲ i ) + j = 1 m α j | c ¯ i j | + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | + | r ¯ i ν | + | w ¯ i ν | ) + ν = 1 n β i ( | p ¯ ν i | + | q ¯ ν i | ) e 1 1 ( τ , 0 ) + ν = 1 n β i ( | r ¯ ν i | + | w ¯ ν i | ) 0 + k ν i ( s ) e 1 1 ( s , 0 ) Δ s + j = 1 m γ i | ζ ¯ j i | e 1 1 ( τ , 0 ) + max { e z i z i ( σ ( t ) , 0 ) , e ( θ ( z i ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) } θ ( z i ) μ ( t ) Q ( t ) e z i z i ( σ ( t ) , 0 ) ,
(3.8)

where θ( z i )= 0 z i ( e z i s / ( z i s ) 2 )ds, i=1,2,,n. From (3.7) we achieve h i (0)<ς<0 and h i ( z i ) is continuous for z i [0,+). Moreover, h i ( z i )+ as z i +, thereby there exist constants ϵ ¯ i (0,+) such that h i ( ϵ ¯ i )=0 and h i ( ϵ ¯ i )<0 for ϵ ¯ i (0, ϵ ¯ i )(0,1). Choosing ϵ ¯ = min 1 i n ϵ ¯ i , obviously 1> ϵ ¯ >0, we have, for i=1,2,,n,

h i ( ϵ ¯ ) = ϵ ¯ ϵ ¯ k = 1 l 2 a ̲ i k l k 2 + 2 ( m i b ̲ i ) + j = 1 m α j | c ¯ i j | + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | + | r ¯ i ν | + | w ¯ i ν | ) + ν = 1 n β i ( | p ¯ ν i | + | q ¯ ν i | ) e 1 1 ( τ , 0 ) + ν = 1 n β i ( | r ¯ ν i | + | w ¯ ν i | ) 0 + k ν i ( s ) e 1 1 ( s , 0 ) Δ s + j = 1 m γ i | ζ ¯ j i | e 1 1 ( τ , 0 ) + max { e ϵ ¯ ϵ ¯ ( σ ( t ) , 0 ) , e ( θ ( ϵ ¯ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) } θ ( ϵ ¯ ) μ ( t ) Q ( t ) e ϵ ¯ ϵ ¯ ( σ ( t ) , 0 ) 0 .
(3.9)

Similar to the above arguments of (3.7)-(3.9), we can always choose 0< ϵ ¯ ¯ <1 such that for j=1,2,,m,

ϵ ¯ ¯ ϵ ¯ ¯ k = 1 l 2 ξ ̲ j k l k 2 + 2 ( m n + j η ̲ j ) + i = 1 n γ i | ζ ¯ j i | + ϱ = 1 m δ ϱ ( | λ ¯ j ϱ | + | π ¯ j ϱ | + | ρ ¯ j ϱ | + | σ ¯ j ϱ | ) + ϱ = 1 m δ j ( | λ ¯ ϱ j | + | π ¯ ϱ j | ) e 1 1 ( τ , 0 ) + ϱ = 1 m δ j ( | ρ ¯ ϱ j | + | σ ¯ ϱ j | ) 0 + κ ϱ j ( s ) e 1 1 ( s , 0 ) Δ s + i = 1 n α j | c ¯ i j | e 1 1 ( τ , 0 ) + max { e ϵ ¯ ¯ ϵ ¯ ¯ ( σ ( t ) , 0 ) , e ( θ ( ϵ ¯ ¯ ) 1 ) μ ( t ) R ( t ) E i ( t , ) 2 2 ( t , 0 ) } θ ( ϵ ¯ ¯ ) μ ( t ) R ( t ) e ϵ ¯ ¯ ϵ ¯ ¯ ( σ ( t ) , 0 ) 0 .
(3.10)

Thus, taking ϵ=min{ ϵ ¯ , ϵ ¯ ¯ }, we derive, for i=1,2,,n; j=1,2,,m,

ϵ ϵ k = 1 l 2 a ̲ i k l k 2 + 2 ( m i b ̲ i ) + j = 1 m α j | c ¯ i j | + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | + | r ¯ i ν | + | w ¯ i ν | ) + ν = 1 n β i ( | p ¯ ν i | + | q ¯ ν i | ) e 1 1 ( τ , 0 ) + ν = 1 n β i ( | r ¯ ν i | + | w ¯ ν i | ) 0 + k ν i ( s ) e 1 1 ( s , 0 ) Δ s + j = 1 m γ i | ζ ¯ j i | e 1 1 ( τ , 0 ) + max { e ϵ ϵ ( σ ( t ) , 0 ) , e ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) } θ ( ϵ ) μ ( t ) Q ( t ) e ϵ ϵ ( σ ( t ) , 0 ) 0
(3.11)

and

ϵ ϵ k = 1 l 2 ξ ̲ j k l k 2 + 2 ( m n + j η ̲ j ) + i = 1 n γ i | ζ ¯ j i | + ϱ = 1 m δ ϱ ( | λ ¯ j ϱ | + | π ¯ j ϱ | + | ρ ¯ j ϱ | + | σ ¯ j ϱ | ) + ϱ = 1 m δ j ( | λ ¯ ϱ j | + | π ¯ ϱ j | ) e 1 1 ( τ , 0 ) + ϱ = 1 m δ j ( | ρ ¯ ϱ j | + | σ ¯ ϱ j | ) 0 + κ ϱ j ( s ) e 1 1 ( s , 0 ) Δ s + i = 1 n α j | c ¯ i j | e 1 1 ( τ , 0 ) + max { e ϵ ϵ ( σ ( t ) , 0 ) , e ( θ ( ϵ ) 1 ) μ ( t ) R ( t ) E i ( t , ) 2 2 ( t , 0 ) } θ ( ϵ ) μ ( t ) R ( t ) e ϵ ϵ ( σ ( t ) , 0 ) 0 .
(3.12)

Take the Lyapunov functional V(t) as follows:

$$V(t)=V\bigl(t,E(t)\bigr)=V_1(t)+V_2(t),$$
(3.13)

where

V 1 ( t ) = i = 1 n { e ϵ ϵ ( t , 0 ) E i ( t , ) 2 2 + e ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) V 1 ( t ) = + j = 1 m α j | c ¯ i j | t τ t e ϵ ϵ ( σ ( s + τ ) , 0 ) E n + j ( s , ) 2 2 Δ s V 1 ( t ) = + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) t τ t e ϵ ϵ ( σ ( s + τ ) , 0 ) E ν ( s , ) 2 2 Δ s V 1 ( t ) = + ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) 0 + k i ν ( s ) [ t s t e ϵ ϵ ( σ ( s + r ) , 0 ) E ν ( r , ) 2 2 Δ r ] Δ s } , V 2 ( t ) = j = 1 m { e ϵ ϵ ( t , 0 ) E n + j ( t , ) 2 2 + e ( θ ( ϵ ) 1 ) μ ( t ) R ( t ) E n + j ( t , ) 2 2 ( t , 0 ) V 2 ( t ) = + i = 1 n γ i | ζ ¯ j i | t τ t e ϵ ϵ ( σ ( s + τ ) , 0 ) E i ( s , ) 2 2 Δ s V 2 ( t ) = + ϱ = 1 m δ ϱ ( | λ ¯ j ϱ | + | π ¯ j ϱ | ) t τ t e ϵ ϵ ( σ ( s + τ ) , 0 ) E n + ϱ ( s , ) 2 2 Δ s V 2 ( t ) = + ϱ = 1 m δ ϱ ( | ρ ¯ j ϱ | + | σ ¯ j ϱ | ) 0 + κ j ϱ ( s ) [ t s t e ϵ ϵ ( σ ( s + r ) , 0 ) E n + ϱ ( r , ) 2 2 Δ r ] Δ s } .

Calculating $D^{+}V_1^{\Delta}(t)$ along (2.1) together with (3.5), and noting that $(d/dz)[e_z(t,s)]=[\int_s^t\frac{1}{1+\mu(\tau)z}\,\Delta\tau]e_z(t,s)>0$ if and only if $z\in\mathcal{R}^+$ (that is, $e_z(t,s)$ is increasing with respect to $z$ if and only if $z\in\mathcal{R}^+$), we have

D + V 1 Δ ( t ) = i = 1 n { ( ϵ ϵ ) e ϵ ϵ ( t , 0 ) E i ( t , ) 2 2 + e ϵ ϵ ( σ ( t ) , 0 ) ( E i ( t , ) 2 2 ) Δ + ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 e ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) + j = 1 m α j | c ¯ i j | e ϵ ϵ ( σ ( t + τ ) , 0 ) E n + j ( t , ) 2 2 j = 1 m α j | c ¯ i j | e ϵ ϵ ( σ ( t ) , 0 ) E n + j ( t τ , ) 2 2 + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) e ϵ ϵ ( σ ( t + τ ) , 0 ) E ν ( t , ) 2 2 ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) e ϵ ϵ ( σ ( t ) , 0 ) E ν ( t τ , ) 2 2 + ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) 0 + k i ν ( s ) e ϵ ϵ ( σ ( s + t ) , 0 ) E ν ( t , ) 2 2 Δ s ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) 0 + k i ν ( s ) e ϵ ϵ ( σ ( t ) , 0 ) E ν ( t s , ) 2 2 Δ s } i = 1 n { ( ϵ ϵ ) e ϵ ϵ ( t , 0 ) E i ( t , ) 2 2 + e ϵ ϵ ( σ ( t ) , 0 ) [ ( k = 1 l 2 a ̲ i k l k 2 + 2 ( m i b ̲ i ) + j = 1 m α j | c ¯ i j | + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | + | r ¯ i ν | + | w ¯ i ν | ) + μ ( t ) Q ( t ) ) E i ( t , ) 2 2 + j = 1 m α j | c ¯ i j | × E n + j ( t τ , ) 2 2 + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) × E ν ( t τ , ) 2 2 + ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) × 0 + k i ν ( s ) E ν ( t s , ) 2 2 Δ s , ] + ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 e ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) + j = 1 m α j | c ¯ i j | e ϵ ϵ ( σ ( t + τ ) , 0 ) E n + j ( t , ) 2 2 j = 1 m α j | c ¯ i j | e ϵ ϵ ( σ ( t ) , 0 ) E n + j ( t τ , ) 2 2 + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) e ϵ ϵ ( σ ( t + τ ) , 0 ) E ν ( t , ) 2 2 ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) e ϵ ϵ ( σ ( t ) , 0 ) E ν ( t τ , ) 2 2 + ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) 0 + k i ν ( s ) e ϵ ϵ ( σ ( s + t ) , 0 ) E ν ( t , ) 2 2 Δ s ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) 0 + k i ν ( s ) e ϵ ϵ ( σ ( t ) , 0 ) E ν ( t s , ) 2 2 Δ s } i = 1 n { ( ϵ ϵ ) e ϵ ϵ ( σ ( t ) , 0 ) E i ( t , ) 2 2 + e ϵ ϵ ( σ ( t ) , 0 ) [ k = 1 l 2 a ̲ i k l k 2 + 2 ( m i b ̲ i ) + j = 1 m α j | c ¯ i j | + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | + | r ¯ i ν | + | w ¯ i ν | ) ] E i ( t , ) 2 2 + μ ( t ) Q ( t ) E i ( t , ) 2 2 × max { e ϵ ϵ ( σ ( t ) , 0 ) , e ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) } + ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 × max { e ϵ ϵ ( σ ( t ) , 0 ) , e ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) } + j = 1 m α j | c ¯ i j | e ϵ ϵ ( τ , 0 ) e ϵ ϵ ( σ ( t ) , 0 ) E n + j ( t , ) 2 2 + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) e ϵ ϵ ( τ , 0 ) e ϵ ϵ ( σ ( t ) , 0 ) E ν ( t , ) 2 2 + ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) 0 + k i ν ( s ) e ϵ ϵ ( s , 0 ) e ϵ ϵ ( σ ( t ) , 0 ) E ν ( t , ) 2 2 Δ s } e ϵ ϵ ( σ ( t ) , 0 ) i = 1 n { E i ( t , ) 2 2 [ ϵ ϵ k = 1 l 2 a ̲ i k l k 2 + 2 ( m i b ̲ i ) + j = 1 m α j | c ¯ i j | + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | + | r ¯ i ν | + | w ¯ i ν | ) + max { e ϵ ϵ ( σ ( t ) , 0 ) , e ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) } θ ( ϵ ) μ ( t ) Q ( t ) e ϵ ϵ ( σ ( t ) , 0 ) ] + j = 1 m α j | c ¯ i j | e ϵ ϵ ( τ , 0 ) E n + j ( t , ) 2 2 + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) e ϵ ϵ ( τ , 0 ) E ν ( t , ) 2 2 + ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) 0 + k i ν ( s ) e ϵ ϵ ( s , 0 ) E ν ( t , ) 2 2 Δ s } e ϵ ϵ ( σ ( t ) , 0 ) i = 1 n { E i ( t , ) 2 2 [ ϵ ϵ k = 1 l 2 a ̲ i k l k 2 + 2 ( m i b ̲ i ) + j = 1 m α j | c ¯ i j | + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | + | r ¯ i ν | + | w ¯ i ν | ) + ν = 1 n β i ( | p ¯ ν i | + | q ¯ ν i | ) e 1 1 ( τ , 0 ) + ν = 1 n β i ( | r ¯ ν i | + | w ¯ ν i | ) 0 + k ν i ( s ) e 1 1 ( s , 0 ) Δ s + max { e ϵ ϵ ( σ ( t ) 
, 0 ) , e ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) } θ ( ϵ ) μ ( t ) Q ( t ) e ϵ ϵ ( σ ( t ) , 0 ) ] } + e ϵ ϵ ( σ ( t ) , 0 ) j = 1 m i = 1 n α j | c ¯ i j | e 1 1 ( τ , 0 ) E n + j ( t , ) 2 2 .
(3.14)

By applying (3.6), we can similarly calculate D + V 2 Δ (t) along (2.1) as follows:

D + V 2 Δ ( t ) e ϵ ϵ ( σ ( t ) , 0 ) j = 1 m { E n + j ( t , ) 2 2 [ ϵ ϵ k = 1 l 2 ξ ̲ j k l k 2 + 2 ( m n + j η ̲ j ) + i = 1 n γ i | ζ ¯ j i | + ϱ = 1 m δ ϱ ( | λ ¯ j ϱ | + | π ¯ j ϱ | + | ρ ¯ j ϱ | + | σ ¯ j ϱ | ) + ϱ = 1 m δ j ( | λ ¯ ϱ j | + | π ¯ ϱ j | ) e 1 1 ( τ , 0 ) + ϱ = 1 m δ j ( | ρ ¯ ϱ j | + | σ ¯ ϱ j | ) 0 + κ ϱ j ( s ) e 1 1 ( s , 0 ) Δ s + max { e ϵ ϵ ( σ ( t ) , 0 ) , e ( θ ( ϵ ) 1 ) μ ( t ) R ( t ) E i ( t , ) 2 2 ( t , 0 ) } θ ( ϵ ) μ ( t ) R ( t ) e ϵ ϵ ( σ ( t ) , 0 ) ] } + e ϵ ϵ ( σ ( t ) , 0 ) i = 1 n j = 1 m γ i | ζ ¯ j i | e 1 1 ( τ , 0 ) E i ( t , ) 2 2 .
(3.15)

From (3.11)-(3.15), we get

D + V ( t ) = D + V ( t , E ( t ) ) = D + V 1 ( t ) + D + V 2 ( t ) e ϵ ϵ ( σ ( t ) , 0 ) i = 1 n { E i ( t , ) 2 2 [ ϵ ϵ k = 1 l 2 a ̲ i k l k 2 + 2 ( m i b ̲ i ) + j = 1 m α j | c ¯ i j | + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | + | r ¯ i ν | + | w ¯ i ν | ) + ν = 1 n β i ( | p ¯ ν i | + | q ¯ ν i | ) e 1 1 ( τ , 0 ) + ν = 1 n β i ( | r ¯ ν i | + | w ¯ ν i | ) 0 + k ν i ( s ) e 1 1 ( s , 0 ) Δ s + j = 1 m γ i | ζ ¯ j i | e 1 1 ( τ , 0 ) + max { e ϵ ϵ ( σ ( t ) , 0 ) , e ( θ ( ϵ ) 1 ) μ ( t ) Q ( t ) E i ( t , ) 2 2 ( t , 0 ) } θ ( ϵ ) μ ( t ) Q ( t ) e ϵ ϵ ( σ ( t ) , 0 ) ] } + e ϵ ϵ ( σ ( t ) , 0 ) j = 1 m { E n + j ( t , ) 2 2 [ ϵ ϵ k = 1 l 2 ξ ̲ j k l k 2 + 2 ( m n + j η ̲ j ) + i = 1 n γ i | ζ ¯ j i | + ϱ = 1 m δ ϱ ( | λ ¯ j ϱ | + | π ¯ j ϱ | + | ρ ¯ j ϱ | + | σ ¯ j ϱ | ) + ϱ = 1 m δ j ( | λ ¯ ϱ j | + | π ¯ ϱ j | ) e 1 1 ( τ , 0 ) + ϱ = 1 m δ j ( | ρ ¯ ϱ j | + | σ ¯ ϱ j | ) 0 + κ ϱ j ( s ) e 1 1 ( s , 0 ) Δ s + i = 1 n α j | c ¯ i j | e 1 1 ( τ , 0 ) + max { e ϵ ϵ ( σ ( t ) , 0 ) , e ( θ ( ϵ ) 1 ) μ ( t ) R ( t ) E i ( t , ) 2 2 ( t , 0 ) } θ ( ϵ ) μ ( t ) R ( t ) e ϵ ϵ ( σ ( t ) , 0 ) ] } 0 .
(3.16)

Note that (3.16) means that the Lyapunov functional $V(t,E(t))$ is monotone decreasing with respect to $t\in[0,+\infty)_{\mathbb{T}}$. Therefore, in light of (3.13), we get, for $t\in[0,+\infty)_{\mathbb{T}}$,

e ϵ ϵ ( t , 0 ) E ( t , ) 2 = e ϵ ϵ ( t , 0 ) i = 1 n E i ( t , ) 2 2 + e ϵ ϵ ( t , 0 ) j = 1 m E n + j ( t , ) 2 2 V ( t , e ( t ) ) V ( 0 , e ( 0 ) ) = i = 1 n { E i ( 0 , ) 2 2 + 1 + j = 1 m α j | c ¯ i j | τ 0 e ϵ ϵ ( σ ( s + τ ) , 0 ) E n + j ( s , ) 2 2 Δ s + ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) τ 0 e ϵ ϵ ( σ ( s + τ ) , 0 ) E ν ( s , ) 2 2 Δ s + ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) 0 + k i ν ( s ) [ s 0 e ϵ ϵ ( σ ( s + r ) , 0 ) E ν ( r , ) 2 2 Δ r ] Δ s } + j = 1 m { E n + j ( 0 , ) 2 2 + 1 + i = 1 n γ i | ζ ¯ j i | τ 0 e ϵ ϵ ( σ ( s + τ ) , 0 ) E i ( s , ) 2 2 Δ s + ϱ = 1 m δ ϱ ( | λ ¯ j ϱ | + | π ¯ j ϱ | ) τ 0 e ϵ ϵ ( σ ( s + τ ) , 0 ) E n + ϱ ( s , ) 2 2 Δ s + ϱ = 1 m δ ϱ ( | ρ ¯ j ϱ | + | σ ¯ j ϱ | ) 0 + κ j ϱ ( s ) [ s 0 e ϵ ϵ ( σ ( s + r ) , 0 ) E n + ϱ ( r , ) 2 2 Δ r ] Δ s } i = 1 n ϕ ˜ i ϕ i 1 2 + n + i = 1 n j = 1 m α j | c ¯ i j | φ ˜ n + j φ n + j 1 2 τ 0 e ϵ ϵ ( σ ( s + τ ) , 0 ) Δ s + i = 1 n ν = 1 n β ν ( | p ¯ i ν | + | q ¯ i ν | ) ϕ ˜ ν ϕ ν 1 2 τ 0 e ϵ ϵ ( σ ( s + τ ) , 0 ) Δ s + i = 1 n ν = 1 n β ν ( | r ¯ i ν | + | w ¯ i ν | ) ϕ ˜ ν ϕ ν 1 2 0 + k i ν ( s ) [ s 0 e ϵ ϵ ( σ ( s + r ) , 0 ) Δ r ] Δ s + j = 1 m φ ˜ j φ j 1 2 + m + j = 1 m i = 1 n γ i | ζ ¯ j i | ϕ ˜ i ϕ i 1 2 τ 0 e ϵ ϵ ( σ ( s + τ ) , 0 ) Δ s + j = 1 m ϱ = 1 m δ ϱ ( | λ ¯ j ϱ | + | π ¯ j ϱ | ) φ ˜ ϱ φ ϱ 1 2 τ 0 e ϵ ϵ ( σ ( s + τ ) , 0 ) Δ s + j = 1 m ϱ = 1 m δ ϱ ( | ρ ¯ j ϱ | + | σ ¯ j ϱ | ) φ ˜ ϱ φ ϱ 1 2 0 + κ j ϱ ( s ) [ s 0 e ϵ ϵ ( σ ( s + r ) , 0 ) Δ r ] Δ s M 2 ,

which implies that

$$\|E(t,\cdot)\|\le Me_{\ominus\epsilon}(t,0).$$
(3.17)

Obviously, $M>1$. According to Definition 2.5, we conclude that the controlled slave system (2.1)-(2.3) is globally robustly exponentially synchronous with the master system (1.1)-(1.3) on the time scale interval $[0,+\infty)_{\mathbb{T}}$. The proof is complete. □

When the time scale is $\mathbb{T}=\mathbb{R}$ or $\mathbb{T}=\mathbb{Z}$, we obtain the following two important corollaries.

Corollary 3.1 Assume that the following (H4)-(H6) hold. Then the master system (1.4)-(1.6) and its controlled slave system are globally robustly exponentially synchronous.

(H4) The neuron activation functions $f_j$, $F_i$, $g_i$ and $G_j$ are Lipschitz continuous; that is, there exist positive constants $\alpha_j$, $\beta_i$, $\gamma_i$ and $\delta_j$ such that $|f_j(\xi)-f_j(\eta)|\le\alpha_j|\xi-\eta|$, $|F_i(\xi)-F_i(\eta)|\le\beta_i|\xi-\eta|$, $|g_i(\xi)-g_i(\eta)|\le\gamma_i|\xi-\eta|$ and $|G_j(\xi)-G_j(\eta)|\le\delta_j|\xi-\eta|$ for any $\xi,\eta\in\mathbb{R}$, $i=1,2,\ldots,n$; $j=1,2,\ldots,m$.

(H5) The delay kernels $k_{ij},\kappa_{ji}:[0,+\infty)\to[0,+\infty)$ ($i=1,2,\ldots,n$; $j=1,2,\ldots,m$) are real-valued non-negative continuous functions and satisfy the following conditions:

$$\int_{0}^{\infty}k_{ij}(s)\,ds=1,\qquad\int_{0}^{\infty}s\,k_{ij}(s)\,ds<\infty,\qquad\int_{0}^{\infty}\kappa_{ji}(s)\,ds=1,\qquad\int_{0}^{\infty}s\,\kappa_{ji}(s)\,ds<\infty$$

and there exist constants ω 1 >0, ω 2 >0 such that

$$\int_{0}^{\infty}k_{ij}(s)e^{s\omega_1}\,ds<\infty,\qquad\int_{0}^{\infty}\kappa_{ji}(s)e^{s\omega_2}\,ds<\infty.$$

(H6) The following conditions are always satisfied:

$$\begin{aligned}
&-\sum_{k=1}^{l}\frac{2\underline{a}_{ik}}{l_k^2}+2(m_i-\underline{b}_i)+\sum_{j=1}^{m}\alpha_j|\bar{c}_{ij}|+\sum_{\nu=1}^{n}\beta_{\nu}\bigl(|\bar{p}_{i\nu}|+|\bar{q}_{i\nu}|+|\bar{r}_{i\nu}|+|\bar{w}_{i\nu}|\bigr)+\sum_{\nu=1}^{n}\beta_i\bigl(|\bar{p}_{\nu i}|+|\bar{q}_{\nu i}|\bigr)e^{2\tau}\\
&\qquad+\sum_{\nu=1}^{n}\beta_i\bigl(|\bar{r}_{\nu i}|+|\bar{w}_{\nu i}|\bigr)\int_{0}^{+\infty}k_{\nu i}(s)e^{2s}\,ds+\sum_{j=1}^{m}\gamma_i|\bar{\zeta}_{ji}|e^{2\tau}<0,\quad i=1,2,\ldots,n;\\
&-\sum_{k=1}^{l}\frac{2\underline{\xi}_{jk}}{l_k^2}+2(m_{n+j}-\underline{\eta}_j)+\sum_{i=1}^{n}\gamma_i|\bar{\zeta}_{ji}|+\sum_{\varrho=1}^{m}\delta_{\varrho}\bigl(|\bar{\lambda}_{j\varrho}|+|\bar{\pi}_{j\varrho}|+|\bar{\rho}_{j\varrho}|+|\bar{\sigma}_{j\varrho}|\bigr)+\sum_{\varrho=1}^{m}\delta_j\bigl(|\bar{\lambda}_{\varrho j}|+|\bar{\pi}_{\varrho j}|\bigr)e^{2\tau}\\
&\qquad+\sum_{\varrho=1}^{m}\delta_j\bigl(|\bar{\rho}_{\varrho j}|+|\bar{\sigma}_{\varrho j}|\bigr)\int_{0}^{+\infty}\kappa_{\varrho j}(s)e^{2s}\,ds+\sum_{i=1}^{n}\alpha_j|\bar{c}_{ij}|e^{2\tau}<0,\quad j=1,2,\ldots,m.
\end{aligned}$$

Corollary 3.2 Assume that the following (H7)-(H9) hold. Then the master system (1.7)-(1.9) and its controlled slave system are globally robustly exponentially synchronous.

(H7) The neuron activation functions $f_j$, $F_i$, $g_i$ and $G_j$ are Lipschitz continuous; that is, there exist positive constants $\alpha_j$, $\beta_i$, $\gamma_i$ and $\delta_j$ such that $|f_j(\xi)-f_j(\eta)|\le\alpha_j|\xi-\eta|$, $|F_i(\xi)-F_i(\eta)|\le\beta_i|\xi-\eta|$, $|g_i(\xi)-g_i(\eta)|\le\gamma_i|\xi-\eta|$ and $|G_j(\xi)-G_j(\eta)|\le\delta_j|\xi-\eta|$ for any $\xi,\eta\in\mathbb{R}$, $i=1,2,\ldots,n$; $j=1,2,\ldots,m$.

(H8) The delay kernels $k_{ij},\kappa_{ji}:\mathbb{Z}_+\to[0,+\infty)$ ($i=1,2,\ldots,n$; $j=1,2,\ldots,m$) are real-valued non-negative rd-continuous functions and satisfy the following conditions:

$$\sum_{s=0}^{\infty}k_{ij}(s)=1,\qquad\sum_{s=0}^{\infty}s\,k_{ij}(s)<\infty,\qquad\sum_{s=0}^{\infty}\kappa_{ji}(s)=1,\qquad\sum_{s=0}^{\infty}s\,\kappa_{ji}(s)<\infty,$$

and there exist constants ω 1 >0, ω 2 >0 such that

$$\sum_{s=0}^{\infty}k_{ij}(s)(1+\omega_1)^s<\infty,\qquad\sum_{s=0}^{\infty}\kappa_{ji}(s)(1+\omega_2)^s<\infty.$$

(H9) The following conditions are always satisfied:

$$\begin{aligned}
&-\sum_{k=1}^{l}\frac{2\underline{a}_{ik}}{l_k^2}+2(m_i-\underline{b}_i)+\sum_{j=1}^{m}\alpha_j|\bar{c}_{ij}|+\sum_{\nu=1}^{n}\beta_{\nu}\bigl(|\bar{p}_{i\nu}|+|\bar{q}_{i\nu}|+|\bar{r}_{i\nu}|+|\bar{w}_{i\nu}|\bigr)+\sum_{\nu=1}^{n}\beta_i\bigl(|\bar{p}_{\nu i}|+|\bar{q}_{\nu i}|\bigr)4^{\tau}\\
&\qquad+\sum_{\nu=1}^{n}\beta_i\bigl(|\bar{r}_{\nu i}|+|\bar{w}_{\nu i}|\bigr)\sum_{s=0}^{+\infty}k_{\nu i}(s)4^{s}+\sum_{j=1}^{m}\gamma_i|\bar{\zeta}_{ji}|4^{\tau}<0,\quad i=1,2,\ldots,n;\\
&-\sum_{k=1}^{l}\frac{2\underline{\xi}_{jk}}{l_k^2}+2(m_{n+j}-\underline{\eta}_j)+\sum_{i=1}^{n}\gamma_i|\bar{\zeta}_{ji}|+\sum_{\varrho=1}^{m}\delta_{\varrho}\bigl(|\bar{\lambda}_{j\varrho}|+|\bar{\pi}_{j\varrho}|+|\bar{\rho}_{j\varrho}|+|\bar{\sigma}_{j\varrho}|\bigr)+\sum_{\varrho=1}^{m}\delta_j\bigl(|\bar{\lambda}_{\varrho j}|+|\bar{\pi}_{\varrho j}|\bigr)4^{\tau}\\
&\qquad+\sum_{\varrho=1}^{m}\delta_j\bigl(|\bar{\rho}_{\varrho j}|+|\bar{\sigma}_{\varrho j}|\bigr)\sum_{s=0}^{+\infty}\kappa_{\varrho j}(s)4^{s}+\sum_{i=1}^{n}\alpha_j|\bar{c}_{ij}|4^{\tau}<0,\quad j=1,2,\ldots,m.
\end{aligned}$$

4 Illustrative example

Consider the following reaction-diffusion BAM recurrent FNNs on time scales:

$$\left\{\begin{aligned}
u_i^{\Delta}(t,x)={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(a_{ik}\frac{\partial u_i}{\partial x_k}\Bigr)-b_iu_i(t,x)+\sum_{j=1}^{m}c_{ij}f_j\bigl(v_j(t-\tau,x)\bigr)+I_i\\
&+\bigwedge_{j=1}^{n}p_{ij}F_j\bigl(u_j(t-\tau,x)\bigr)+\bigwedge_{j=1}^{n}r_{ij}\int_{0}^{+\infty}k_{ij}(s)F_j\bigl(u_j(t-s,x)\bigr)\,\Delta s\\
&+\bigvee_{j=1}^{n}q_{ij}F_j\bigl(u_j(t-\tau,x)\bigr)+\bigvee_{j=1}^{n}w_{ij}\int_{0}^{+\infty}k_{ij}(s)F_j\bigl(u_j(t-s,x)\bigr)\,\Delta s\\
&+\sum_{j=1}^{n}d_{ij}\mu_j+\bigwedge_{j=1}^{n}S_{ij}\mu_j+\bigvee_{j=1}^{n}T_{ij}\mu_j,\\
v_j^{\Delta}(t,x)={}&\sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Bigl(\xi_{jk}\frac{\partial v_j}{\partial x_k}\Bigr)-\eta_jv_j(t,x)+\sum_{i=1}^{n}\zeta_{ji}g_i\bigl(u_i(t-\tau,x)\bigr)+J_j\\
&+\bigwedge_{i=1}^{m}\lambda_{ji}G_i\bigl(v_i(t-\tau,x)\bigr)+\bigwedge_{i=1}^{m}\rho_{ji}\int_{0}^{+\infty}\kappa_{ji}(s)G_i\bigl(v_i(t-s,x)\bigr)\,\Delta s\\
&+\bigvee_{i=1}^{m}\pi_{ji}G_i\bigl(v_i(t-\tau,x)\bigr)+\bigvee_{i=1}^{m}\sigma_{ji}\int_{0}^{+\infty}\kappa_{ji}(s)G_i\bigl(v_i(t-s,x)\bigr)\,\Delta s\\
&+\sum_{i=1}^{m}h_{ji}\nu_i+\bigwedge_{i=1}^{m}M_{ji}\nu_i+\bigvee_{i=1}^{m}N_{ji}\nu_i,
\end{aligned}\right.$$
(4.1)

subject to the following initial conditions

$$\begin{cases}
u_i(s,x)=\phi_i(s,x), & (s,x)\in[-\tau,0]_{\mathbb{T}}\times\Omega,\\
v_j(s,x)=\varphi_j(s,x), & (s,x)\in[-\tau,0]_{\mathbb{T}}\times\Omega,
\end{cases}$$
(4.2)

and Dirichlet boundary conditions

$$\begin{cases}
u_i(t,x)=0, & (t,x)\in[0,\infty)_{\mathbb{T}}\times\partial\Omega,\\
v_j(t,x)=0, & (t,x)\in[0,\infty)_{\mathbb{T}}\times\partial\Omega,
\end{cases}$$
(4.3)

where $n=m=l=2$, $f_j(v)=F_i(v)=g_i(v)=G_j(v)=\frac{e^{v}-e^{-v}}{e^{v}+e^{-v}}$ ($i,j=1,2$), $k_{ij}(t)=\kappa_{ji}(t)=\frac{26}{27}(\frac{1}{3})^{t}$ ($i,j=1,2$), $\mathbb{T}=\{3n:n=0,\pm1,\pm2,\ldots\}$, $\Omega=\{x:|x_i|<1,\ i=1,2\}$, $\tau=1$. $I=(I_1,I_2)$ and $J=(J_1,J_2)$ are the constant input vectors. $\mu=(\mu_1,\mu_2)$ and $\nu=(\nu_1,\nu_2)$ are the constant bias vectors. Obviously, $f_j(v)$, $F_i(v)$, $g_i(v)$ and $G_i(v)$ satisfy the Lipschitz condition with $\alpha_j=\beta_i=\gamma_i=\delta_j=1$. Let $(\underline{b}_1,\underline{b}_2)=(9.5,10.5)$, $(\underline{\eta}_1,\underline{\eta}_2)=(8.5,9)$,

$$\begin{pmatrix}\underline{a}_{11}&\underline{a}_{12}\\ \underline{a}_{21}&\underline{a}_{22}\end{pmatrix}=\begin{pmatrix}0.7&0.4\\ 0.2&0.8\end{pmatrix},\qquad
\begin{pmatrix}\bar{c}_{11}&\bar{c}_{12}\\ \bar{c}_{21}&\bar{c}_{22}\end{pmatrix}=\begin{pmatrix}0.4&0.5\\ 0.6&0.1\end{pmatrix},$$
$$\begin{pmatrix}\bar{p}_{11}&\bar{p}_{12}\\ \bar{p}_{21}&\bar{p}_{22}\end{pmatrix}=\begin{pmatrix}0.1&0.2\\ 0.3&0.5\end{pmatrix},\qquad
\begin{pmatrix}\bar{q}_{11}&\bar{q}_{12}\\ \bar{q}_{21}&\bar{q}_{22}\end{pmatrix}=\begin{pmatrix}0.2&0.1\\ 0.7&0.8\end{pmatrix},$$
$$\begin{pmatrix}\bar{r}_{11}&\bar{r}_{12}\\ \bar{r}_{21}&\bar{r}_{22}\end{pmatrix}=\begin{pmatrix}0.4&0.3\\ 0.6&0.9\end{pmatrix},\qquad
\begin{pmatrix}\bar{w}_{11}&\bar{w}_{12}\\ \bar{w}_{21}&\bar{w}_{22}\end{pmatrix}=\begin{pmatrix}0.2&0.1\\ 0.8&0.3\end{pmatrix},$$
$$\begin{pmatrix}\underline{\xi}_{11}&\underline{\xi}_{12}\\ \underline{\xi}_{21}&\underline{\xi}_{22}\end{pmatrix}=\begin{pmatrix}0.6&0.4\\ 0.2&0.7\end{pmatrix},\qquad
\begin{pmatrix}\bar{\zeta}_{11}&\bar{\zeta}_{12}\\ \bar{\zeta}_{21}&\bar{\zeta}_{22}\end{pmatrix}=\begin{pmatrix}0.1&0.5\\ 0.1&0.1\end{pmatrix},$$
$$\begin{pmatrix}\bar{\lambda}_{11}&\bar{\lambda}_{12}\\ \bar{\lambda}_{21}&\bar{\lambda}_{22}\end{pmatrix}=\begin{pmatrix}0.2&0.1\\ 0.3&0.4\end{pmatrix},\qquad
\begin{pmatrix}\bar{\pi}_{11}&\bar{\pi}_{12}\\ \bar{\pi}_{21}&\bar{\pi}_{22}\end{pmatrix}=\begin{pmatrix}0.3&0.2\\ 0.6&0.4\end{pmatrix},$$
$$\begin{pmatrix}\bar{\rho}_{11}&\bar{\rho}_{12}\\ \bar{\rho}_{21}&\bar{\rho}_{22}\end{pmatrix}=\begin{pmatrix}0.5&0.2\\ 0.7&0.8\end{pmatrix},\qquad
\begin{pmatrix}\bar{\sigma}_{11}&\bar{\sigma}_{12}\\ \bar{\sigma}_{21}&\bar{\sigma}_{22}\end{pmatrix}=\begin{pmatrix}0.3&0.1\\ 0.9&0.2\end{pmatrix}.$$

Take the controlled input vector $z(t,x)=(m_1E_1(t,x),m_2E_2(t,x),m_3E_3(t,x),m_4E_4(t,x))^T$, where $(m_1,m_2,m_3,m_4)=(5,3,2,3)$. By a simple calculation, we have

σ ( t ) = t + 3 , μ ( t ) = 3 , e 1 1 ( t , 0 ) = ( e 1 ( t , 0 ) ) 2 = 4 t , 0 + k i j ( s ) Δ s = 0 + κ j i ( s ) Δ s = s = 0 + 26 27 ( 1 3 ) 3 s = 1 , 0 + s k i j ( s ) Δ s = 0 + s κ j i ( s ) Δ s = s = 0 + 78 27 s ( 1 3 ) 3 s = 3 26 < + , 0 + s e ω ( s , 0 ) k i j ( s ) Δ s = 0 + s e ω ( s , 0 ) κ j i ( s ) Δ s = s = 0 + 78 27 s ( 1 + 3 ω ) s ( 1 3 ) 3 s 0 + s e ω ( s , 0 ) k i j ( s ) Δ s = 27 ( 1 + 3 ω ) ( 26 3 ω ) 2 < + ( 0 < ω < 13 ) , k = 1 2 2 a ̲ 1 k l k 2 + 2 ( m 1 b ̲ 1 ) + j = 1 2 α j | c ¯ 1 j | + ν = 1 2 β ν ( | p ¯ 1 ν | + | q ¯ 1 ν | + | r ¯ 1 ν | + | w ¯ 1 ν | ) + ν = 1 2 β i ( | p ¯ ν 1 | + | q ¯ ν 1 | ) 4 τ + ν = 1 2 β i ( | r ¯ ν 1 | + | w ¯ ν 1 | ) s = 0 + k ν 1 ( 3 s ) 4 s + j = 1 2 γ 1 | ζ ¯ j 1 | 4 τ 0.24 < 0 , k = 1 2 2 a ̲ 2 k l k 2 + 2 ( m 2 b ̲ 2 ) + j = 1 2 α j | c ¯ 2 j | + ν = 1 2 β ν ( | p ¯ 2 ν | + | q ¯ 2 ν | + | r ¯ 2 ν | + | w ¯ 2 ν | ) + ν = 1 2 β i ( | p ¯ ν 2 | + | q ¯ ν 2 | ) 4 τ + ν = 1 2 β 2 ( | r ¯ ν 2 | + | w ¯ ν 2 | ) s = 0 + k ν 2 ( 3 s ) 4 s + j = 1 2 γ 2 | ζ ¯ j 2 | 4 τ 0.78 < 0 , k = 1 2 2 ξ ̲ 1 k l k 2 + 2 ( m 3 η ̲ 1 ) + i = 1 2 γ i | ζ ¯ 1 i | + ϱ = 1 2 δ ϱ ( | λ ¯ 1 ϱ | + | π ¯ 1 ϱ | + | ρ ¯ 1 ϱ | + | σ ¯ 1 ϱ | ) + ϱ = 1 2 δ 1 ( | λ ¯ ϱ 1 | + | π ¯ ϱ 1 | ) 4 τ + ϱ = 1 2 δ 1 ( | ρ ¯ ϱ 1 | + | σ ¯ ϱ 1 | ) s = 0 + κ ϱ 1 ( 3 s ) 4 s + i = 1 2 α 1 | c ¯ i 1 | 4 τ 0.19 < 0 , k = 1 2 2 ξ ̲ 2 k l k 2 + 2 ( m 4 η ̲ 2 ) + i = 1 2 γ i | ζ ¯ 2 i | + ϱ = 1 2 δ ϱ ( | λ ¯ 2 ϱ | + | π ¯ 2 ϱ | + | ρ ¯ 2 ϱ | + | σ ¯ 2 ϱ | ) + ϱ = 1 2 δ 2 ( | λ ¯ ϱ 2 | + | π ¯ ϱ 2 | ) 4 τ + ϱ = 1 2 δ 2 ( | ρ ¯ ϱ 2 | + | σ ¯ ϱ 2 | ) s = 0 + κ ϱ 2 ( 3 s ) 4 s + i = 1 2 α 2 | c ¯ i 2 | 4 τ 1.03 < 0 .

Thus, conditions (H1)-(H3) are satisfied. It follows from Theorem 3.1 that the master system (4.1)-(4.3) and its controlled slave system are globally robustly exponentially synchronized.
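For readers who wish to reproduce part of the computation above numerically, the following minimal sketch (illustrative only; it assumes the geometric kernel $k_{ij}(t)=\kappa_{ji}(t)=\frac{26}{27}(\frac{1}{3})^t$ sampled on $\mathbb{T}=\{3n\}$ exactly as in this example) verifies the two kernel sums with exact rational arithmetic.

    from fractions import Fraction

    # Check the kernel sums used above:
    #   sum_{s>=0} (26/27) * (1/3)^(3s) = 1
    #   sum_{s>=0} (78/27) * s * (1/3)^(3s) = 3/26
    r = Fraction(1, 27)                                   # common ratio (1/3)^3
    normalization = Fraction(26, 27) / (1 - r)            # geometric series
    first_moment = Fraction(78, 27) * r / (1 - r) ** 2    # uses sum s*r^s = r/(1-r)^2
    print(normalization, first_moment)                    # -> 1 and 3/26
    assert normalization == 1 and first_moment == Fraction(3, 26)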

Author’s contributions

The author read and approved the final manuscript.

References

  1. Kosko B: Adaptive bidirectional associative memories. Appl. Opt. 1987, 26(23):4947-4960. 10.1364/AO.26.004947


  2. Kosko B: Bidirectional associative memories. IEEE Trans. Syst. Man Cybern. 1988, 18(1):49-60. 10.1109/21.87054


  3. Kosko B: Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence. Prentice Hall, New York; 1992.


  4. Mathai G, Upadhyaya BR: Performance analysis and application of the bidirectional associative memory to industrial spectral signatures. Neural Netw. 1989, 1: 33-37.


  5. Yu WW, Cao JD: Adaptive synchronization and lag synchronization of uncertain dynamical system with time delay based on parameter identification. Physica A 2007, 375: 467-482. 10.1016/j.physa.2006.09.020


  6. Tang Y, Fang JA: Robust synchronization in an array of fuzzy delayed cellular neural networks with stochastically hybrid coupling. Neurocomputing 2009, 72: 3253-3262. 10.1016/j.neucom.2009.02.010


  7. Yuan K, Cao JD: Exponential stability and periodic solutions of fuzzy cellular neural networks with time-varying delays. Neurocomputing 2006, 69: 1619-1627. 10.1016/j.neucom.2005.05.011


  8. Wang K, Teng ZD, Jiang HJ: Adaptive synchronization in an array of linearly coupled neural networks with reaction-diffusion terms and time delays. Commun. Nonlinear Sci. Numer. Simul. 2012, 17: 3866-3875. 10.1016/j.cnsns.2012.02.020


  9. Sheng L, Yang HZ: Exponential synchronization of a class of neural networks with mixed time-varying delays and impulsive effects. Neurocomputing 2008, 71: 3666-3674. 10.1016/j.neucom.2008.03.004


  10. Yan P, Lv T: Exponential synchronization of fuzzy cellular neural networks with mixed delays and general boundary conditions. Commun. Nonlinear Sci. Numer. Simul. 2012, 17: 1003-1011. 10.1016/j.cnsns.2011.07.013


  11. Gan QT: Exponential synchronization of stochastic Cohen-Grossberg neural networks with mixed time-varying delays and reaction-diffusion via periodically intermittent control. Neural Netw. 2012, 31: 12-21.


  12. Zhang YJ, Xu SY: Robust global synchronization of complex networks with neutral-type delayed nodes. Appl. Math. Comput. 2010, 216: 768-778. 10.1016/j.amc.2010.01.075


  13. Luo M, Xu J: Suppression of collective synchronization in a system of neural groups with washout-filter-aided feedback. Neural Netw. 2011, 24: 538-543. 10.1016/j.neunet.2011.02.008


  14. Yang XS, Huang CX, Zhu QX: Synchronization of switched neural networks with mixed delays via impulsive control. Chaos Solitons Fractals 2011, 44: 817-826. 10.1016/j.chaos.2011.06.006


  15. Li T, Fei SM, Zhu Q, Cong S: Exponential synchronization of chaotic neural networks with mixed delays. Neurocomputing 2008, 71: 3005-3019. 10.1016/j.neucom.2007.12.029


  16. Du BZ, James L: Stability analysis of static recurrent neural networks using delay-partitioning and projection. Neural Netw. 2009, 22: 343-347. 10.1016/j.neunet.2009.03.005


  17. Liu PC, Yi FQ, Guo Q, Yang J, Wu W: Analysis on global exponential robust stability of reaction-diffusion neural networks with S-type distributed delays. Physica D 2008, 237: 475-485. 10.1016/j.physd.2007.09.014


  18. Li YK, Zhao KH: Robust stability of delayed reaction-diffusion recurrent neural networks with Dirichlet boundary conditions on time scales. Neurocomputing 2011, 74: 1632-1637. 10.1016/j.neucom.2011.01.006


  19. Gilli M: Stability of cellular neural networks with nonpositive templates and nonmonotonic output functions. IEEE Trans. Circuits Syst. I 1994, 41: 518-528.


  20. Gopalsamy K, He XZ: Stability in asymmetric Hopfield nets with transmission delays. Physica D 1994, 76: 344-358. 10.1016/0167-2789(94)90043-4


  21. Zeng Z, Wang J: Improved conditions for global exponential stability of recurrent neural networks with time-varying delays. Chaos Solitons Fractals 2006, 23(3):623-635.


  22. Shao YF: Exponential stability of periodic neural networks with impulsive effects and time-varying delays. Appl. Math. Comput. 2011, 217: 6893-6899. 10.1016/j.amc.2011.01.068


  23. Yang T, Yang LB: The global stability of fuzzy cellular neural networks. IEEE Trans. Circuits Syst. I 1996, 43: 880-883. 10.1109/81.538999


  24. Zhao KH, Li YK: Existence and global exponential stability of equilibrium solution to reaction-diffusion recurrent neural networks on time scales. Discrete Dyn. Nat. Soc. 2010., 2010: Article ID 624619


  25. Li YK, Zhao KH, Ye Y: Stability of reaction-diffusion recurrent neural networks with distributed delays and Neumann boundary conditions on time scales. Neural Process. Lett. 2012, 36: 217-234. 10.1007/s11063-012-9232-2


  26. Zhao KH, Wang LWJ, Liu JQ: Global robust attractive and invariant sets of fuzzy neural networks with delays and impulses. J. Appl. Math. 2013., 2013: Article ID 935491


  27. Zhao KH: Globally exponential synchronization of diffusion recurrent FNNs with time-delays and impulses on time scales. WSEAS Trans. Math. 2014, 13: 224-235.


  28. Zhang JY, Yang YR: Global stability analysis of bidirectional associative memory neural networks with time delay. Int. J. Circuit Theory Appl. 2001, 29(2):185-196. 10.1002/cta.144

  29. Zhao H: Global stability of bidirectional associative memory neural networks with distributed delays. Phys. Lett. A 2002, 297: 182-190. 10.1016/S0375-9601(02)00434-6

  30. Liu B, Huang L: Global exponential stability of BAM neural networks with recent-history distributed delays and impulse. Neurocomputing 2006, 69(16-18):2090-2096. 10.1016/j.neucom.2005.09.014

  31. Song QK, Cao JD: Global exponential stability of bidirectional associative memory neural networks with distributed delays. J. Comput. Appl. Math. 2007, 202: 266-279. 10.1016/j.cam.2006.02.031

  32. Song QK, Cao JD: Global exponential stability and existence of periodic solutions in BAM networks with delays and reaction-diffusion terms. Chaos Solitons Fractals 2005, 23(2):421-430. 10.1016/j.chaos.2004.04.011

  33. Cao JD, Wang L: Exponential stability and periodic oscillatory solution in BAM networks with delays. IEEE Trans. Neural Netw. 2002, 13(2):457-463. 10.1109/72.991431

  34. Wu XL, Zhang JH, Guan XP, Meng H: Delay-dependent asymptotic stability of BAM neural networks with time delay. Kybernetes 2010, 39(8):1313-1321. 10.1108/03684921011063600

  35. Zhu QX, Li XD, Yang XS: Exponential stability for stochastic reaction-diffusion BAM neural networks with time-varying and distributed delays. Appl. Math. Comput. 2011, 217: 6078-6091. 10.1016/j.amc.2010.12.077

  36. Ge JH, Xu J: Synchronization and synchronized periodic solution in a simplified five-neuron BAM neural network with delays. Neurocomputing 2011, 74: 993-999. 10.1016/j.neucom.2010.11.017

  37. Zhang ZQ, Yang Y, Huang YS: Global exponential stability of interval general BAM neural networks with reaction-diffusion terms and multiple time-varying delays. Neural Netw. 2011, 24: 457-465. 10.1016/j.neunet.2011.02.003

  38. Ding W, Wang LS: 2^N almost periodic attractors for Cohen-Grossberg-type BAM neural networks with variable coefficients and distributed delays. J. Math. Anal. Appl. 2011, 373: 322-342. 10.1016/j.jmaa.2010.06.055

  39. Li YK, Chen XR, Zhao L: Stability and existence of periodic solutions to delayed Cohen-Grossberg BAM neural networks with impulses on time scales. Neurocomputing 2009, 72: 1621-1630. 10.1016/j.neucom.2008.08.010

  40. Li YK, Gao S: Global exponential stability for impulsive BAM neural networks with distributed delays on time scales. Neural Process. Lett. 2010, 31(1):65-91. 10.1007/s11063-009-9127-z

  41. Li YK: Global exponential stability of BAM neural networks with delays and impulses. Chaos Solitons Fractals 2005, 24(1):279-285. 10.1016/j.chaos.2004.09.027

  42. Cao JD, Wan Y: Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays. Neural Netw. 2014, 53: 165-172.

  43. Du YH, Zhong SM, Zhou N: Global asymptotic stability of Markovian jumping stochastic Cohen-Grossberg BAM neural networks with discrete and distributed time-varying delays. Appl. Math. Comput. 2014, 243: 624-636.

  44. Jian JG, Wang BX: Stability analysis in Lagrange sense for a class of BAM neural networks of neutral type with multiple time-varying delays. Neurocomputing 2015. 10.1016/j.neucom.2014.07.041

  45. Berezansky L, Braverman E, Idels L: New global exponential stability criteria for nonlinear delay differential systems with applications to BAM neural networks. Appl. Math. Comput. 2014, 243: 899-910.

  46. Li YK, Yang L, Wu WQ: Anti-periodic solution for impulsive BAM neural networks with time-varying leakage delays on time scales. Neurocomputing 2015. 10.1016/j.neucom.2014.08.020

  47. Zhang AC, Qiu JL, She JH: Existence and global exponential stability of periodic solution for high-order discrete-time BAM neural networks. Neural Netw. 2014, 50: 98-109.

  48. Quan ZY, Huang LH, Yu SH, Zhang ZQ: Novel LMI-based condition on global asymptotic stability for BAM neural networks with reaction-diffusion terms and distributed delays. Neurocomputing 2014, 136: 213-223.

  49. Carroll TL, Pecora LM: Cascading synchronized chaotic systems. Physica D 1993, 67(1-3):126-140. 10.1016/0167-2789(93)90201-B

  50. Carroll TL, Heagy J, Pecora LM: Synchronization and desynchronization in pulse coupled relaxation oscillators. Phys. Lett. A 1994, 186(3):225-229. 10.1016/0375-9601(94)90343-3

  51. Bohner M, Peterson A: Dynamic Equations on Time Scales: An Introduction with Applications. Birkhäuser, Boston; 2001.

  52. Lakshmikantham V, Vatsala AS: Hybrid systems on time scales. J. Comput. Appl. Math. 2002, 141: 227-235. 10.1016/S0377-0427(01)00448-4

  53. Lu JG: Global exponential stability and periodicity of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions. Chaos Solitons Fractals 2008, 35(1):116-125. 10.1016/j.chaos.2007.05.002

Acknowledgements

The author would like to thank the anonymous referees for their useful and valuable suggestions. This work was supported by the National Natural Science Foundation of the People's Republic of China (Grants No. 11161025 and No. 11326101) and the Yunnan Province Natural Scientific Research Fund Project (No. 2011FZ058).

Author information

Corresponding author

Correspondence to Kaihong Zhao.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Zhao, K. Global robust exponential synchronization of BAM recurrent FNNs with infinite distributed delays and diffusion terms on time scales. Adv Differ Equ 2014, 317 (2014). https://doi.org/10.1186/1687-1847-2014-317

Keywords