Global robust exponential synchronization of BAM recurrent FNNs with infinite distributed delays and diffusion terms on time scales
Advances in Difference Equations volume 2014, Article number: 317 (2014)
Abstract
In this article, the global robust exponential synchronization of reaction-diffusion BAM recurrent fuzzy neural networks (FNNs) with infinite distributed delays on time scales is investigated. By applying Lyapunov functionals and inequality techniques, some sufficient criteria are established that guarantee the global robust exponential synchronization of reaction-diffusion BAM recurrent FNNs with infinite distributed delays on time scales. An example is given to illustrate the effectiveness of our results.
1 Introduction
The study of artificial neural networks has attracted much attention because of their potential applications, such as signal processing, image processing, pattern classification, quadratic optimization, associative memory, and moving-object speed detection. Many models of neural networks have been proposed by renowned scholars. One important model is the bidirectional associative memory (BAM) neural network, first introduced by Kosko [1–3]. It is a special class of recurrent neural networks that can store bipolar vector pairs. A BAM neural network is composed of neurons arranged in two layers, the X-layer and the Y-layer; the neurons in one layer are fully interconnected with the neurons in the other layer. Through iterations of forward and backward information flows between the two layers, it performs a two-way associative search for stored bipolar vector pairs and generalizes the single-layer auto-associative Hebbian correlation to a two-layer pattern-matched heteroassociative circuit. Consequently, this class of networks has good application prospects in fields such as pattern recognition, signal and image processing, and artificial intelligence [4]. In general, artificial neural networks exhibit complex dynamical behaviors such as stability, synchronization, periodic or almost periodic solutions, invariant sets and attractors, and so forth; we refer to [5–27] and the references cited therein. The analysis of dynamical behaviors is therefore a necessary step in the practical design of neural networks. Since the BAM model was proposed by Kosko, it has attracted much attention over the past two decades [28–48].
Dynamical behaviors such as uniqueness, global asymptotic stability, exponential stability, and invariant sets and attractors of the equilibrium point or of periodic solutions have been investigated for BAM neural networks with various types of time delays (see [28–44, 48]).
Synchronization has attracted much attention since it was proposed by Carroll et al. [49, 50]. The principle of drive-response synchronization is as follows: the drive system sends a signal through a channel to the response system, which uses this signal to synchronize itself with the drive system. That is, the response system is influenced by the behavior of the drive system, whereas the drive system is independent of the response system. In recent years, many results on the synchronization of delayed neural networks have been reported in the literature [5, 6, 8–15, 27, 36, 49, 50].
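The drive-response principle above can be illustrated with a minimal numerical sketch. This is not the BAM system studied in this paper but a single-neuron toy model; the decay rate a, control gain k, input I, and Euler step h are all illustrative assumptions:

```python
import math

def simulate(h=0.01, steps=2000, a=1.0, k=5.0, I=0.5):
    """Euler simulation of a scalar drive system and a controlled
    response system; the control u = -k*e acts only on the response."""
    x, y = 1.0, -2.0          # drive state, response state
    errors = []
    for _ in range(steps):
        e = y - x
        fx = -a * x + math.tanh(x) + I          # drive dynamics
        fy = -a * y + math.tanh(y) + I - k * e  # response + feedback control
        x += h * fx
        y += h * fy
        errors.append(abs(y - x))
    return errors

errs = simulate()
print(errs[0], errs[-1])  # the synchronization error decays toward zero
```

With the linear error feedback, the synchronization error |y - x| decays exponentially while the drive trajectory is unaffected, which is exactly the asymmetry of the drive-response scheme described above.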
As is well known, in both biological and man-made neural networks, diffusion effects cannot, strictly speaking, be avoided when electrons move in asymmetric electromagnetic fields, so the activations must be allowed to vary in space as well as in time. Many researchers have studied the dynamical properties of continuous-time reaction-diffusion neural networks (see, for example, [8, 11, 17, 18, 24, 25, 27, 32, 48]).
However, in the mathematical modeling of real-world problems, we encounter other inconveniences, such as complexity and uncertainty or vagueness. Fuzzy theory is considered a more suitable setting for taking vagueness into account. Based on traditional cellular neural networks (CNNs), T. Yang and L.B. Yang proposed fuzzy CNNs (FCNNs) [23], which integrate fuzzy logic into the structure of traditional CNNs while maintaining local connectedness among cells. Unlike previous CNN structures, FCNNs have fuzzy logic between their template input and/or output in addition to the sum-of-products operation. FCNNs are a very useful paradigm for image-processing problems, a cornerstone of image processing and pattern recognition. It is therefore necessary to consider both fuzzy logic and delay effects in the dynamical behavior of neural networks. To the best of our knowledge, few authors have considered the synchronization of reaction-diffusion recurrent fuzzy neural networks with delays and Dirichlet boundary conditions on time scales, which is a challenging and important problem in both theory and application. Therefore, in this paper we investigate the global robust exponential synchronization of delayed reaction-diffusion BAM recurrent fuzzy neural networks (FNNs) on time scales as follows:
subject to the following initial conditions
and Dirichlet boundary conditions
where ; . is a time scale that is unbounded, and . is a constant time delay. and is a bounded compact set with smooth boundary ∂ Ω in space . , . and are the states of the ith neuron and the jth neuron at time t and in space x, respectively. and are constant input vectors. The smooth functions and correspond to the transmission diffusion operators along the ith neuron and the jth neuron, respectively. , , , , , , , , , , , , , , , , , , are constants. and denote the rates with which the ith neuron and the jth neuron reset their potential to the resting state in isolation when disconnected from the network and external inputs, respectively. , , , , , , , , , , , , , , denote the connection weights. () and () denote the activation functions of the jth neuron of the Y-layer on the ith neuron of the X-layer and of the ith neuron of the X-layer on the jth neuron of the Y-layer at time t and in space x, respectively. () denotes the fuzzy activation function of the jth neuron on the ith neuron inside the X-layer, and () denotes the fuzzy activation function of the ith neuron on the jth neuron inside the Y-layer. () denotes the bias of the jth neuron on the ith neuron inside the X-layer, and () denotes the bias of the ith neuron on the jth neuron inside the Y-layer. ⋀ and ⋁ denote the fuzzy AND and fuzzy OR operations, respectively. , are rd-continuous with respect to and continuous with respect to .
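In the sense of Yang and Yang [23], the fuzzy AND (⋀) and fuzzy OR (⋁) template operations reduce to pointwise minimum and maximum over the weighted activations. A minimal sketch, where the weights and activation values are illustrative assumptions:

```python
def fuzzy_and(weights, activations):
    """Fuzzy AND template: minimum over j of (w_ij * f_j)."""
    return min(w * f for w, f in zip(weights, activations))

def fuzzy_or(weights, activations):
    """Fuzzy OR template: maximum over j of (w_ij * f_j)."""
    return max(w * f for w, f in zip(weights, activations))

w = [0.5, -1.0, 2.0]   # illustrative fuzzy template weights
f = [0.2, 0.4, 0.1]    # illustrative activation values
print(fuzzy_and(w, f))  # min(0.1, -0.4, 0.2) = -0.4
print(fuzzy_or(w, f))   # max(0.1, -0.4, 0.2) = 0.2
```

These min/max operations replace the weighted sum of a conventional CNN template, which is why the analysis later requires the dedicated estimate of Lemma 3.2 rather than the triangle inequality alone.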
In order to investigate the global robust exponential synchronization of system (1.1)-(1.3), the quantities , , , , , , , , , , , and may be assumed to lie in the following intervals: , , , , , , , , , , , , , .
Taking the time scale (the set of real numbers), system (1.1)-(1.3) reduces to the following continuous case (1.4)-(1.6):
subject to the following initial conditions
and Dirichlet boundary conditions
Taking the time scale (the set of integers), system (1.1)-(1.3) reduces to the following discrete case (1.7)-(1.9):
subject to the following initial conditions
and Dirichlet boundary conditions
where , τ is a positive integer, , , .
If we choose , then , . In this case, system (1.1)-(1.3) becomes the continuous reaction-diffusion BAM recurrent FNNs (1.4)-(1.6). If , then , and system (1.1)-(1.3) becomes the discrete-difference reaction-diffusion BAM recurrent FNNs (1.7)-(1.9). In this paper, we study the global robust exponential synchronization of the reaction-diffusion BAM recurrent FNNs (1.1)-(1.3), which unify the continuous case and the discrete-difference case. Moreover, system (1.1)-(1.3) is a good model for handling problems such as predator-prey forecasting or the optimization of goods output.
The rest of this paper is organized as follows. In Section 2, some notations and basic theorems and lemmas on time scales are given. In Section 3, the main results on global robust exponential synchronization are obtained by constructing an appropriate Lyapunov functional and applying inequality techniques. In Section 4, an example is given to illustrate the effectiveness of our results.
2 Preliminaries
In this section, we first recall some basic definitions and lemmas on time scales which are used in what follows.
Let be a nonempty closed subset (time scale) of ℝ. The forward and backward jump operators and the graininess are defined, respectively, by
A point is called left-dense if and , left-scattered if , right-dense if and , and right-scattered if . If has a left-scattered maximum m, then , otherwise . If has a right-scattered minimum m, then , otherwise .
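For a time scale consisting of finitely many isolated points, the forward jump, backward jump, and graininess can be computed directly from their set definitions. A small sketch, where the sample time scale T is an illustrative assumption:

```python
def sigma(T, t):
    """Forward jump operator: inf{s in T : s > t} (t itself if t = max T)."""
    after = [s for s in T if s > t]
    return min(after) if after else t

def rho(T, t):
    """Backward jump operator: sup{s in T : s < t} (t itself if t = min T)."""
    before = [s for s in T if s < t]
    return max(before) if before else t

def mu(T, t):
    """Graininess function: mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

# An illustrative finite time scale of isolated points:
T = [0.0, 0.5, 1.0, 3.0, 3.5]
print(sigma(T, 1.0), rho(T, 1.0), mu(T, 1.0))  # 3.0 0.5 2.0
```

Here every point of T is right-scattered and left-scattered; on ℝ one would instead have σ(t) = ρ(t) = t and μ(t) = 0 everywhere, while on ℤ, σ(t) = t + 1 and μ(t) = 1.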
Definition 2.1 ([51])
A function is called regulated provided its right-sided limits exist (and are finite) at all right-dense points in and its left-sided limits exist (and are finite) at all left-dense points in .
Definition 2.2 ([51])
A function is called rd-continuous provided it is continuous at right-dense points in and its left-sided limits exist (and are finite) at left-dense points in . The set of rd-continuous functions will be denoted by .
Definition 2.3 ([51])
Assume and . Then we define to be the number (if it exists) with the property that, given any , there exists a neighborhood U of t (i.e., for some ) such that
for all . We call the delta (or Hilger) derivative of f at t. The set of functions that are delta differentiable and whose delta derivative is rd-continuous is denoted by .
If f is continuous, then f is rd-continuous. If f is rd-continuous, then f is regulated. If f is delta differentiable at t, then f is continuous at t.
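At a right-scattered point the delta derivative reduces to the difference quotient f^Δ(t) = (f(σ(t)) − f(t))/μ(t); on ℝ it is the ordinary derivative, and on ℤ it is the forward difference. A sketch for the purely discrete case, where the finite grid standing in for ℤ is an illustrative assumption:

```python
def delta_derivative(f, T, t):
    """Delta derivative at a right-scattered point:
    f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t)."""
    after = [s for s in T if s > t]
    s = min(after)                 # sigma(t); assumes t is right-scattered
    return (f(s) - f(t)) / (s - t)

# On T = Z, the delta derivative of f(t) = t^2 is the forward
# difference (t+1)^2 - t^2 = 2t + 1:
Z = list(range(0, 10))
print([delta_derivative(lambda t: t * t, Z, t) for t in range(0, 5)])
# [1.0, 3.0, 5.0, 7.0, 9.0]
```

Note that 2t + 1 differs from the classical derivative 2t; this is the kind of discrepancy between the continuous and discrete cases that the time-scale framework of this paper absorbs into a single calculus.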
Lemma 2.1 ([51])
Let f be regulated. Then there exists a function F which is delta differentiable with region of differentiation D such that for all .
Definition 2.4 ([51])
Assume that is a regulated function. Any function F as in Lemma 2.1 is called a Δ-antiderivative of f. We define the indefinite integral of a regulated function f by
where C is an arbitrary constant and F is a Δ-antiderivative of f. We define the Cauchy integral by for all .
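On a time scale of isolated points, the Cauchy delta integral reduces to a weighted sum in which each point is weighted by its graininess; in particular, on ℤ it is an ordinary finite sum. A sketch under the assumption that the integration endpoints belong to the time scale:

```python
def delta_integral(f, T, a, b):
    """Cauchy delta integral over [a, b) on a time scale of isolated
    points: sum of f(t) * mu(t) over the points t of T in [a, b)."""
    pts = sorted(s for s in T if a <= s < b)
    total = 0.0
    for i, t in enumerate(pts):
        nxt = pts[i + 1] if i + 1 < len(pts) else b  # sigma(t)
        total += f(t) * (nxt - t)                    # f(t) * mu(t)
    return total

# On T = Z, the integral of f over [0, 5) is f(0)+f(1)+f(2)+f(3)+f(4):
Z = list(range(0, 10))
print(delta_integral(lambda t: t, Z, 0, 5))  # 0+1+2+3+4 = 10.0
```

On ℝ the same definition recovers the Riemann integral as the graininess tends to zero, which is why one symbol covers both the integrals of (1.4)-(1.6) and the sums of (1.7)-(1.9).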
A function is called an antiderivative of provided for all .
Lemma 2.2 ([51])
If , and , then
-
(i)
,
-
(ii)
if for all , then ,
-
(iii)
if on , then .
A function is called regressive if for all . The set of all regressive and rd-continuous functions will be denoted by . We define the set of all positively regressive elements of ℛ by . If p is a regressive function, then the generalized exponential function is defined by for all , with the cylinder transformation
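On a time scale of isolated points, the generalized exponential function reduces to the finite product e_p(t, t0) = ∏ (1 + μ(s) p(s)) over the points s in [t0, t); regressivity guarantees that no factor vanishes. A sketch for the discrete case, where the finite grid and the constant p are illustrative assumptions:

```python
def exp_ts(p, T, t0, t):
    """Generalized exponential e_p(t, t0) on a time scale of isolated
    points: the product of (1 + mu(s) * p(s)) over s in [t0, t)."""
    pts = sorted(s for s in T if t0 <= s < t)
    prod = 1.0
    for i, s in enumerate(pts):
        nxt = pts[i + 1] if i + 1 < len(pts) else t  # sigma(s)
        prod *= 1.0 + (nxt - s) * p(s)   # regressivity: each factor != 0
    return prod

# On T = Z with constant p, e_p(t, 0) = (1 + p)^t:
Z = list(range(0, 20))
print(exp_ts(lambda s: 0.5, Z, 0, 4))  # (1.5)**4 = 5.0625
```

On ℝ the same object is the classical exponential e^{p(t−t0)} for constant p, so the exponential decay rate α in Definition 2.5 specializes correctly in both the continuous and discrete settings.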
Let be two regressive functions. We define
If , then .
The generalized exponential function has the following properties.
Lemma 2.3 ([51])
Assume that are two regressive functions. Then
-
(i)
;
-
(ii)
;
-
(iii)
;
-
(iv)
;
-
(v)
;
-
(vi)
for all ;
-
(vii)
.
Lemma 2.4 ([51])
Assume that are delta differentiable at . Then
Lemma 2.5 ([52])
For each , let N be a neighborhood of t. Then, for , define to mean that, given , there exists a right neighborhood of t such that
where . If t is right-scattered and is continuous at t, this reduces to .
Next, we introduce the Banach space which is suitable for system (1.1)-(1.3).
Let be an open bounded domain in with smooth boundary ∂ Ω. Let be the set consisting of all vector functions which are rd-continuous with respect to and continuous with respect to . For every and , we define the set . Then is a Banach space with the norm , where . Let consist of all functions which map into and are rd-continuous with respect to and continuous with respect to . For every and , we define the set . Then is a Banach space equipped with the norm , where , , .
In order to achieve the global robust exponential synchronization, the following system (2.1)-(2.3) is the controlled slave system corresponding to the master system (1.1)-(1.3):
subject to the following initial conditions
and Dirichlet boundary conditions
where () and () are error functions. () is a constant error weighting coefficient. , , , .
From (1.1)-(1.3) and (2.1)-(2.3), we obtain the error system (2.4)-(2.6) as follows:
subject to the following initial conditions
and Dirichlet boundary conditions
The following definition is significant to study the global robust exponential synchronization of coupled neural networks (1.1)-(1.3) and (2.1)-(2.3).
Definition 2.5 Let and be the solution vectors of system (1.1)-(1.3) and of its controlled slave system (2.1)-(2.3), respectively, and let be the error vector. The coupled systems (1.1)-(1.3) and (2.1)-(2.3) are said to be globally exponentially synchronized if there exist a controlled input vector and positive constants and such that
where α is called the degree of exponential synchronization on time scales.
3 Main results
In this section, we will consider the global robust exponential synchronization of coupled systems (1.1)-(1.3) and (2.1)-(2.3). At first, we need to introduce some useful lemmas.
Lemma 3.1 ([53])
Let Ω be a cube () and assume that is a real-valued function belonging to which vanishes on the boundary ∂ Ω of Ω, i.e., . Then
Lemma 3.2 ([23])
Suppose that and are the solutions to systems (1.1)-(1.3) and (2.1)-(2.3), respectively, then
Throughout this paper, we always assume that:
(H1) The neuron activation functions , , and are Lipschitz continuous; that is, there exist positive constants , , and such that , , , for any , ; .
(H2) The delay kernels (; ) are real-valued non-negative rd-continuous functions and satisfy the following conditions:
and there exist constants , such that
(H3) The following conditions are always satisfied:
Theorem 3.1 Assume that (H1)-(H3) hold. Then the controlled slave system (2.1)-(2.3) is globally robustly exponentially synchronous with the master system (1.1)-(1.3).
Proof Calculating the delta derivative of () and () along the solutions of (2.1), we obtain
and
Employing Green’s formula [17], the Dirichlet boundary condition (2.6), and Lemma 3.1, we have
and
By applying Lemma 3.2, (3.1)-(3.4), conditions (H1)-(H3), and the Hölder inequality, and noting the robustness of the parameter intervals, we get
where , , .
Similar to the arguments of (3.5), we obtain
where , , .
If the first inequality of condition (H3) holds, there exists a positive number (possibly sufficiently small) such that
Now we consider the functions
where , . From (3.7) we have and is continuous for . Moreover, as , so there exist constants such that and for . Choosing (obviously ), we have, for ,
Similar to the above arguments of (3.7)-(3.9), we can always choose such that for ,
Thus, taking , we derive, for ; ,
and