- Research
- Open access
Existence and global asymptotic stability criteria for nonlinear neutral-type neural networks involving multiple time delays using a quadratic-integral Lyapunov functional
Advances in Difference Equations volume 2021, Article number: 112 (2021)
Abstract
In this paper we consider a standard class of neural networks and investigate the global asymptotic stability of these neural systems. The main aim of this investigation is to define a novel Lyapunov functional of quadratic-integral form and use it to derive a stability criterion for the neural networks under study. Since fundamental characteristics such as nonlinearity, time-delays, and neutrality help us design a more realistic and applicable model of neural systems, we incorporate all of these factors into our neural dynamical systems. Finally, some numerical simulations are presented to illustrate the obtained stability criterion and to show the essential role of the time-delays in the appearance of oscillations and stability in the neural networks.
1 Introduction
In the recent past, it has been shown that wide classes of real world phenomena can be stated as neural networks. This advantage makes neural networks powerful resources for studying the mentioned phenomena. On the other hand, unifying various categories of real life problems in the framework of neural networks helps us transform these natural phenomena into essentially mathematical engineering problems. So, we can restrict ourselves to investigating the neural networks instead of conducting multi-oriented research on the aforementioned topics. In this respect, the monographs [1, 14] can be helpful. The concept of neural networks becomes even more important in theory and in applications if we combine them with time-delays to obtain time-delay neural networks (the importance of time-delay systems can be learned from the monograph [16]). Thanks to time-delay neural networks, one can study the qualitative dynamics of some of the most important bioscience problems, such as the dynamics of diabetes, population dynamics, and epidemiology; for instance, time-delay neural networks are capable of opening a research field in the study of the evolutionary dynamics of the COVID-19 pandemic virus. This ability can be summarized in the fact that in time-delay dynamical systems the derivatives of unknown functions at particular times can be described in terms of the values of those functions at previous times. In this case, the studied neural networks have great potential to describe rising/falling oscillations, as well as stability/instability, in their qualitative dynamics. Let us proceed a bit further. If time-delays appear both in the states of the interconnecting neurons and in their derivatives, then the studied dynamical system is said to be a neutral-type time-delay neural network. It is expected that these advanced neural networks represent a complete characterization of neural systems having extended applications in engineering problems.
Here, we suggest some of the most motivating recent research papers related to neutral-type time-delay neural networks, and the bibliography cited therein, for further consultation on this topic; see [2–8, 11, 24, 25]. The nonlinear nature of dynamical systems makes them an excellent platform for studying real world phenomena and their engineering refinements, so in the light of nonlinear neutral-type time-delay neural networks one can concentrate on engineering and artificial intelligence problems such as intelligent recognition processes, including speech recognition, lip reading, handwriting recognition, image recognition, pattern and sequence recognition, data processing, blind signal separation, email spam filtering, signal processing, control problems, fixed point computations, function approximation, optimization problems, and many other real life problems. These applications and the corresponding information can be represented only when the states of the neurons become stable. In other words, as has been proven, such neural systems are required to have constant equilibrium points, independent of the initial data, that are globally asymptotically stable; see [26]. This is why the stabilization of neural networks has recently attracted a wide audience and witnessed steadily growing investigation. In this direction, the interested reader is advised to study the following papers: [9, 10, 13, 15, 17–23, 26–44]. For convenience, let us make a convention here: from now on, we refer to the nonlinear neutral-type time-delay neural networks as NNTDNNs.
At the end of this section, we state that the rest of the paper is organized as follows. In Sect. 2, we first define the main NNTDNN that will be stabilized later; some basic setting and discussion is also given there. Section 3, the main part of the paper, includes the stability analysis needed to achieve global asymptotic stabilization of the main neural system. Prior to this analysis, by the use of coincidence degree theory, it will be shown under which conditions the studied NNTDNN has at least one solution to be stabilized. In Sect. 4, we present some numerical simulations to verify the validity of the presented stability criterion. Finally, in Sect. 5, we summarize the solvability and stability criteria presented in this paper.
2 Formulation and basic setting
Prior to presenting the stabilization process, we introduce the mathematical model of the dynamical neural system described above. As stated, our neural network contains multiple discrete time-delays in the states of the interconnecting neurons and further multiple discrete time-delays in the time derivatives of the states of these neurons. Accordingly, our desired NNTDNN is introduced as follows:
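In the notation of the properties \((P_{1})\)–\((P_{11})\) listed below, and consistently with the special cases (2.5) and (2.6) recorded later in this section (with the general delays \(\tau _{ij}\) and \(\zeta _{ij}\) in place of \(\tau _{j}\) and \(\zeta _{j}\)):

$$ \begin{aligned} \dot{y}_{i}(t):={}&{-}c_{i}y_{i}(t)+ \sum_{j=1}^{n}a_{ij}g_{j} \bigl(y_{j}(t) \bigr)+\sum_{j=1}^{n}b_{ij}g_{j} \bigl(y_{j}(t-\tau _{ij}) \bigr) \\ &{}+ \sum_{j=1}^{n}e_{ij}f_{j} \bigl(\dot{y}_{j}(t-\zeta _{ij}) \bigr)+u_{i},\quad i=1,2,\dots ,n, \end{aligned} $$(2.1)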
with the following properties:
- \((P_{1})\):
-
\(y_{i}(t)\) denotes the state of the ith neuron;
- \((P_{2})\):
-
n denotes the number of the neurons;
- \((P_{3})\):
-
\(c_{i}\) is some positive real constant;
- \((P_{4})\):
-
\(a_{ij}\) and \(b_{ij}\) are real constants;
- \((P_{5})\):
-
\(e_{ij}\) is the neutral coefficient;
- \((P_{6})\):
-
\(\tau _{ij}\) is the time delay of the neuron’s state;
- \((P_{7})\):
-
\(\zeta _{ij}\) is the neutral time delay;
- \((P_{8})\):
-
\(g_{j}\in C (\mathbb{R},\mathbb{R} )\) is a nonlinear activation function for which there exists a positive real constant \(M_{1}\) such that \(|g_{j}(x)|\leq M_{1}\). In addition, \(g_{j}\) obeys the Lipschitz-continuity condition, that is, there exists a positive real constant \(l_{j}\) such that
$$ \bigl\vert g_{j}(v)-g_{j}(w) \bigr\vert \leq l_{j} \vert v-w \vert , \quad v,w\in \mathbb{R}, v\neq w, $$ - \((P_{9})\):
-
\(f_{j}\in C (\mathbb{R},\mathbb{R} )\) is a given function with the property that there exists a positive real constant \(M_{2}\) such that \(|f_{j}(x)|\leq M_{2}\). Furthermore, \(f_{j}\) is a Lipschitz-continuous function, that is, there exists a positive real constant \(m_{j}\) such that
$$ \bigl\vert f_{j}(v)-f_{j}(w) \bigr\vert \leq m_{j} \vert v-w \vert , \quad v,w\in \mathbb{R}, v\neq w, $$ - \((P_{10})\):
-
\(u_{i}\) is a constant (an external input);
- \((P_{11})\):
-
\(\eta :=\max \{\tau _{ij}\mid i,j=1,2,\dots ,n\}\) and \(k:=\max \{\zeta _{ij}\mid i,j=1,2,\dots ,n\}\) with \(\delta :=\max \{\eta ,k\}\).
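As a concrete illustration (not taken from the paper), the common choice \(g_{j}=\tanh \) satisfies \((P_{8})\) with \(M_{1}=1\) and \(l_{j}=1\); this can be checked numerically with a short sketch:

```python
import math

# Numerically check that tanh satisfies the bound |g(x)| <= 1 (M1 = 1)
# and the Lipschitz condition |g(v) - g(w)| <= |v - w| (l = 1) from (P8).
xs = [x / 100.0 for x in range(-1000, 1001)]  # grid on [-10, 10]

# Boundedness: |tanh(x)| <= 1 for every sampled point.
assert all(abs(math.tanh(x)) <= 1.0 for x in xs)

# Lipschitz condition with constant 1 (small slack for floating point).
assert all(
    abs(math.tanh(v) - math.tanh(w)) <= abs(v - w) + 1e-12
    for v in xs[::20] for w in xs[::20] if v != w
)

print("tanh satisfies (P8) with M1 = l = 1 on the sampled grid")
```

The same check applies verbatim to \(f_{j}\) in \((P_{9})\), since the two hypotheses have identical form.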
At the end of the detailed statement of the neural model (2.1), we note that the accompanying initial data of the neutral-type neural network (2.1) are introduced by
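Such initial data are customarily taken in the following form (the notation \(\phi _{i}\) is an assumption of this sketch, with δ as in \((P_{11})\)):

$$ y_{i}(t)=\phi _{i}(t),\quad t\in [-\delta ,0],\ \phi _{i}\in C^{1} \bigl([-\delta ,0],\mathbb{R} \bigr),\ i=1,2,\dots ,n. $$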
This is an opportunity to describe the philosophy of the functional space corresponding to the NNTDNN model (2.1). For stabilization, the NNTDNN (2.1) is required to be solvable first, that is, the neutral-type neural system (2.1) has to have at least one solution to be stabilized. So, prior to the stability analysis, we have to apply an appropriate solvability procedure that gives us a mathematical key to reach at least one solution of this NNTDNN and, consequently, provides a plan to stabilize this solution. Our preferred solvability key is the coincidence degree theory, which, in order to be applicable, must act on a relevant periodic solution space. So, we introduce this periodic functional space as follows:
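A choice consistent with the T-periodicity required by the coincidence degree framework and with the norm \(\Vert \cdot \Vert _{Y}\) used later in the proofs is (assumed notation):

$$ Y:= \bigl\{ y\in C^{1} \bigl(\mathbb{R},\mathbb{R}^{n} \bigr):y(t+T)=y(t) \bigr\} ,\qquad \Vert y \Vert _{Y}:=\max \Bigl\{ \max_{t\in [0,T]} \bigl\vert y(t) \bigr\vert ,\max_{t\in [0,T]} \bigl\vert \dot{y}(t) \bigr\vert \Bigr\} . $$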
Although further consultation on neutral-type time delay neural networks, on the importance of time delays, and on stability methods would be worthwhile, at this moment we provide only a brief description of the coincidence degree theory. For the complementary details, we refer the interested reader to [12] (Chaps. IV and V). So, let us start as follows.
Definition 2.1
Let \(\mathcal{Y}\) and \(\mathcal{Z}\) be real normed spaces. A linear mapping \(L:\operatorname{dom} L\subset \mathcal{Y}\rightarrow \mathcal{Z}\) is called a Fredholm mapping provided it satisfies the following conditions:
-
(i)
ker L has finite dimension,
-
(ii)
Im L is closed and has finite codimension.
Let L be a Fredholm mapping. Then its index is given by
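For Fredholm mappings this is the standard quantity

$$ \operatorname{Ind} L:=\dim \ker L-\operatorname{codim} \operatorname{Im} L. $$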
Assume that L is a Fredholm mapping with zero index and there exist continuous projectors \(P:\mathcal{Y}\rightarrow \mathcal{Y}\) and \(Q:\mathcal{Z}\rightarrow \mathcal{Z}\) such that
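In the standard setting of [12], these projectors satisfy

$$ \operatorname{Im} P=\ker L \quad \text{and}\quad \ker Q=\operatorname{Im} L=\operatorname{Im} (I-Q). $$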
So, one may derive that the restriction
$$ L|_{\operatorname{dom} L\cap \ker P}:\operatorname{dom} L\cap \ker P \longrightarrow \operatorname{Im} L $$
is invertible. Let us denote its inverse by \(K_{P}:\operatorname{Im} L\rightarrow \operatorname{dom} L\cap \ker P\). The generalized inverse of L, denoted by \(K_{P,Q}:\mathcal{Z}\rightarrow \operatorname{dom} L\cap \ker P\), is defined by \(K_{P,Q}=K_{P}(I-Q)\).
If L is a Fredholm mapping with zero index, then for every isomorphism \(J:\operatorname{Im} Q\rightarrow \ker L\), the mapping \(JQ+K_{P,Q}:\mathcal{Z}\rightarrow \operatorname{dom} L\) is an isomorphism and, for every \(u\in \operatorname{dom} L\),
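In the formulation of [12], this identity reads

$$ (JQ+K_{P,Q})^{-1}u= \bigl(L+J^{-1}P \bigr)u. $$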
Here, we define the L-compact operators that play an important role in the coincidence degree theory.
Definition 2.2
Let \(L:\operatorname{dom} L\subset \mathcal{Y}\rightarrow \mathcal{Z}\) be a Fredholm mapping, E be a metric space, and let \(N:E\rightarrow \mathcal{Z}\) be a mapping. Then N is called L-compact on E provided that \(QN:E\rightarrow \mathcal{Z}\) is continuous and \(K_{P,Q}N:E\rightarrow \mathcal{Y}\) is compact on E. In addition, we say that N is L-completely continuous if it is L-compact on every bounded \(E\subset \mathcal{Y}\).
Now, having all of these preliminaries, we present the main solvability result as follows.
Theorem 2.3
Let \(\Omega \subset \mathcal{Y}\) be open and bounded, L be a Fredholm mapping with zero index, and let N be L-compact on Ω̅. Furthermore, assume that the following conditions are satisfied:
-
(i)
\(Lu\neq \lambda Nu\) for every \((u,\lambda )\in ((\operatorname{dom} L\setminus \ker L)\cap \partial \Omega )\times (0,1)\);
-
(ii)
\(Nu\notin \operatorname{Im} L\) for every \(u\in \ker L\cap \partial \Omega \);
-
(iii)
\(\deg (JQ N|_{\ker L\cap \partial \Omega },\Omega \cap \ker L,0)\neq 0\) with \(Q:\mathcal{Y}\rightarrow \mathcal{Y}\) a continuous projector such that \(\ker Q=\operatorname{Im} L\) and \(J:\operatorname{Im} Q\rightarrow \ker L\) is an isomorphism.
Then, the equation \(Lu=Nu\) has at least one solution in \(\operatorname{dom} L\cap \overline{\Omega }\).
We finalize this section with a quick overview of the stability tools for time delay neural networks and of the complications that multiple essentially distinct time delays cause when stabilizing the corresponding neural systems. To this aim, let us consider the following cases:
- \((C_{1})\):
-
If \(\tau _{ij}=\tau _{j}\) and \(\zeta _{ij}=\zeta _{j}\), for \(i,j=1,2,\dots ,n\), then we have the following NNTDNNs:
$$ \begin{aligned} \dot{y}_{i}(t):={}&{-}c_{i}y_{i}(t)+ \sum_{j=1}^{n}a_{ij}g_{j} \bigl(y_{j}(t) \bigr)+\sum_{j=1}^{n}b_{ij}g_{j} \bigl(y_{j}(t-\tau _{j}) \bigr) \\ &{}+ \sum _{j=1}^{n}e_{ij}f_{j} \bigl(\dot{y}_{j}(t-\zeta _{j}) \bigr)+u_{i},\quad i=1,2,\dots ,n. \end{aligned} $$(2.5) - \((C_{2})\):
-
If \(\tau _{ij}=\tau \) and \(\zeta _{ij}=\zeta \), for \(i,j=1,2,\dots ,n\), then we get the following NNTDNNs:
$$ \begin{aligned} \dot{y}_{i}(t):={}&{-}c_{i}y_{i}(t)+ \sum_{j=1}^{n}a_{ij}g_{j} \bigl(y_{j}(t) \bigr)+\sum_{j=1}^{n}b_{ij}g_{j} \bigl(y_{j}(t-\tau ) \bigr) \\ &{}+ \sum_{j=1}^{n}e_{ij}f_{j} \bigl(\dot{y}_{j}(t-\zeta ) \bigr)+u_{i},\quad i=1,2,\dots ,n. \end{aligned} $$(2.6)
A direct consequence of the NNTDNNs (2.5) and (2.6) is that both of these neural networks can be represented in the vector matrix form
$$ \dot{y}(t)=-Cy(t)+Ag \bigl(y(t) \bigr)+Bg \bigl(y(t-\tau ) \bigr)+Ef \bigl(\dot{y}(t-\zeta ) \bigr)+u, $$(2.7)
in which
$$ y(t):= \bigl[y_{1}(t),y_{2}(t),\dots ,y_{n}(t) \bigr]^{T},\qquad g \bigl(y(t) \bigr):= \bigl[g_{1} \bigl(y_{1}(t) \bigr),g_{2} \bigl(y_{2}(t) \bigr),\dots ,g_{n} \bigl(y_{n}(t) \bigr) \bigr]^{T}, $$(2.8)
and
$$ C:=\operatorname{diag} (c_{1},c_{2},\dots ,c_{n} ),\qquad A:=(a_{ij})_{n\times n},\qquad B:=(b_{ij})_{n\times n},\qquad E:=(e_{ij})_{n\times n},\qquad u:=[u_{1},u_{2},\dots ,u_{n}]^{T}. $$(2.9)
Furthermore, the vector matrices \(g(y(t-\tau ))\) and \(f(\dot{y}(t-\zeta ))\) for the two cases \((C_{1})\) and \((C_{2})\) are respectively represented as follows:
- \((V_{1})\):
-
$$ \begin{aligned} &g\bigl(y(t-\tau )\bigr) := \bigl[g_{1}\bigl(y_{1}(t-\tau _{1}) \bigr),g_{2}\bigl(y_{2}(t- \tau _{2})\bigr), \dots ,g_{n}\bigl(y_{n}(t-\tau _{n})\bigr) \bigr]^{T}, \\ &f\bigl(\dot{y}(t-\zeta )\bigr) :=\bigl[f_{1}\bigl( \dot{y}_{1}(t-\zeta _{1})\bigr),f_{2}\bigl( \dot{y}_{2}(t- \zeta _{2})\bigr),\dots ,f_{n} \bigl(\dot{y}_{n}(t-\zeta _{n})\bigr)\bigr]^{T}, \end{aligned} $$(2.10)
and
- \((V_{2})\):
-
$$ \begin{aligned} &g\bigl(y(t-\tau )\bigr) := \bigl[g_{1}\bigl(y_{1}(t-\tau )\bigr),g_{2} \bigl(y_{2}(t- \tau )\bigr),\dots ,g_{n} \bigl(y_{n}(t-\tau )\bigr)\bigr]^{T}, \\ &f\bigl(\dot{y}(t-\zeta )\bigr) :=\bigl[f_{1}\bigl( \dot{y}_{1}(t-\zeta )\bigr),f_{2}\bigl( \dot{y}_{2}(t- \zeta )\bigr),\dots ,f_{n}\bigl( \dot{y}_{n}(t-\zeta )\bigr)\bigr]^{T}. \end{aligned} $$(2.11)
Now, comparing the NNTDNN (2.1) with the vector matrix NNTDNNs (2.7)–(2.9), (2.10) and (2.7)–(2.9), (2.11), we come to the conclusion that the NNTDNN (2.1) cannot be represented in vector matrix form. This is the main complication caused by multiple essentially distinct time delays. In the vector matrix case (2.7), establishing stability conditions is easier than carrying out the stability analysis for the NNTDNN (2.1). Consequently, stabilization techniques such as the linear matrix inequality (LMI) approach are not applicable to the NNTDNN (2.1). As instances of such cases, we suggest the papers [19, 28–30, 36, 40], and the bibliography cited therein, for further consultation. Under these circumstances, it is reasonable to develop classical mathematical techniques to obtain improved stabilization tools such as novel Lyapunov functionals. Since the stability analysis of NNTDNNs is not widely investigated in the literature in comparison with the other techniques, this is a good time to make new investigations of neural dynamical systems like the NNTDNN (2.1).
3 Stability analysis
As stated in the introduction of the paper, for the stability analysis of the NNTDNN (2.1), we need a criterion that guarantees the existence of at least one solution for the NNTDNN (2.1) to be stabilized. So, we start with the solvability result as follows.
Theorem 3.1
Suppose that hypotheses \((P_{8})\) and \((P_{9})\) are satisfied. Then, the NNTDNN (2.1) has at least one T-periodic solution.
Proof
Our proof strategy is to show that all conditions of Theorem 2.3 are satisfied and, consequently, to conclude that the neural system (2.1) has at least one solution. So, we begin with the definition of the basic operators L and N as follows:
$$ L:\operatorname{dom} L\subset Y\longrightarrow Y,\qquad (Ly) (t):=\dot{y}(t), $$(3.1)
and
$$ (Ny)_{i}(t):=-c_{i}y_{i}(t)+\sum_{j=1}^{n}a_{ij}g_{j} \bigl(y_{j}(t) \bigr)+\sum_{j=1}^{n}b_{ij}g_{j} \bigl(y_{j}(t-\tau _{ij}) \bigr)+\sum_{j=1}^{n}e_{ij}f_{j} \bigl(\dot{y}_{j}(t-\zeta _{ij}) \bigr)+u_{i},\quad i=1,2,\dots ,n. $$(3.2)
Also, let us define the projectors \(P:Y\longrightarrow \ker L\) and \(Q:Y\longrightarrow Y\) as the mean-value operators
$$ Py=Qy:=\frac{1}{T} \int _{0}^{T}y(t)\,dt. $$
So, we get the following:
Besides,
that is,
So, this proves that L is a Fredholm operator of index zero. Next, we show the L-compactness of the operator N. To this aim, we first define the operator \(K_{P}:\operatorname{Im} L\longrightarrow \ker P\cap \operatorname{dom} L\) as
in which
Since we do not want to spend time and space on proving straightforward exercises, for a given open and bounded subset Ω of Y we simply record that both QN and \(K_{P}(I-Q)N\) are continuous, and that \(QN (\overline{\Omega } )\) and \(K_{P}(I-Q)N (\overline{\Omega } )\) are both relatively compact, that is, the operator N is L-compact. In accordance with Theorem 2.3, it is time to prove that condition (i) is satisfied. The plan is as follows: we will show that if \(Lu=\lambda Nu\), then \(u\in \Omega \), for a suitably chosen open and bounded subset Ω; this is the contrapositive form of condition (i). So, let us begin with
where \(y=(y_{1},y_{2},\dots ,y_{n})\in Y\) is an arbitrary solution of the operator equation (3.4). Equivalently, one has
Integrating both sides of the latter neural system over the interval \([0,T]\), for \(i=1,2,\dots ,n\), yields
The left-hand side of the equality (3.5) implies that there exists \(s_{i}\in [0,T]\), for \(i=1,2,\dots ,n\), such that
Consequently, comparing (3.6) with (3.2), we arrive at the following equality:
Hence, we get that
Turning to the hypotheses \((P_{8})\) and \((P_{9})\) leads us to the following inequality:
In order to reach the desired conclusion, we have to multiply both sides of the operator equation (3.4) and then integrate over the interval \([0,T]\). In this case, it follows that
Therefore, we come to the conclusion that
In continuation, thanks to the inequality
we have
Equivalently, it has been shown that
We combine here the inequalities (3.8) and (3.12), and then arrive at the following inequality:
Thus,
in which
In this case, we have proved that
Let \(\gamma :=A+\epsilon \), with
Then, if we define
we have proved that, if \(Ly=\lambda Ny\), \(\lambda \in (0,1)\), then, \(y\in \Omega \cap \operatorname{dom} L\). Equivalently, for each \(y\in \partial \Omega \cap \operatorname{dom} L\) and \(\lambda \in (0,1)\), we have \(Ly\neq \lambda Ny\). So, the condition (i) in Theorem 2.3 is satisfied.
Next, we are going to prove the condition (ii) in Theorem 2.3. To this end, suppose \(y\in \partial \Omega \cap \ker L\) is a constant vector \(y=(y_{1},y_{2},\dots ,y_{n})\in \mathbb{R}^{n}\), with \(\|y\|_{Y}=\gamma \). Then, it follows that
After some manipulation, we get that for each \(y\in \partial \Omega \cap \ker L\),
The latter inequality proves that \(Ny\notin \operatorname{Im} L\), for each \(y\in \partial \Omega \cap \ker L\). So, this has proven that the condition (ii) in Theorem 2.3 holds.
Here, we are in such a position to complete the existence analysis by showing that the condition (iii) in Theorem 2.3 is also fulfilled. To this aim, let us define
in which the isomorphism \(J:\operatorname{Im} Q\longrightarrow \ker L\) stands for the identity operator and
So, for each \(y\in \partial \Omega \cap \ker L\), it follows that
such that
So, we get
Therefore, similar to the previous step, one may derive that
that is, for each \(y\in \partial \Omega \cap \ker L\), one has \(H(\lambda ,y)\neq 0^{T}\). Then, by the use of degree invariance under a homotopy, we arrive at
So, the condition (iii) in Theorem 2.3 is fulfilled. Since all conditions of Theorem 2.3 are satisfied, we come to the conclusion that the NNTDNN (2.1) has at least one solution. This completes the proof. □
This situation is at the borderline between the solvability and the stability of the NNTDNN (2.1), meaning that it just remains to handle the uniqueness of the existing solution (the equilibrium point, in the language of dynamical systems) of the NNTDNN (2.1). To this aim, we just need a little creativity to introduce the transformation \(z(t):=y(t)-y^{*}\), where \(y^{*}=(y_{1}^{*},y_{2}^{*},\dots ,y_{n}^{*})^{T}\in \mathbb{R}^{n}\) stands for a given equilibrium point of the NNTDNN (2.1). In this case, the NNTDNN (2.1) can be restated as the following transformed NNTDNN:
The key point of the NNTDNN (3.17) is that this neural network has the origin as its unique equilibrium point. This property enables us to carry out the claimed global asymptotic stability analysis. Prior to presenting this analysis, we point out that all properties of the NNTDNN (2.1) carry over to the new NNTDNN (3.17); only in the properties \((P_{8})\) and \((P_{9})\) are the Lipschitz-continuity conditions transformed into the following ones. So, the NNTDNN (3.17) has the following properties:
- \((P_{\text{new}})\):
-
Comparing both of the neural networks (2.1) and (3.17), we have
$$ (P_{i,\text{new}})=(P_{i}),\quad i=1,2,\dots ,7,10,11; $$(3.18) - \((P_{8, \text{new}})\):
-
$$ \textstyle\begin{cases} \vert \mathcal{G}_{i}(z_{i}(t)) \vert \leq l_{i} \vert z_{i}(t) \vert , \quad i=1,2,\dots ,n, \\ \mathcal{G}_{i}(z_{i}(t)):=g_{i}(z_{i}+y_{i}^{*})-g_{i}(y_{i}^{*}),\quad \text{with } \mathcal{G}_{i}(0)=0, i=1,2,\dots ,n. \end{cases} $$(3.19)
Furthermore,
- \((P_{9, \text{new}})\):
-
$$ \textstyle\begin{cases} |\mathcal{F}_{i}(z_{i}(t))|\leq m_{i} \vert z_{i}(t) \vert ,\quad i=1,2,\dots ,n, \\ \mathcal{F}_{i}(z_{i}(t)):=f_{i}(z_{i}+y_{i}^{*})-f_{i}(y_{i}^{*}),\quad \text{with } \mathcal{F}_{i}(0)=0, i=1,2,\dots ,n. \end{cases} $$(3.20)
Now, we are ready to state and prove the main stability result as the following theorem.
Theorem 3.2
Suppose the nonlinear neutral-type time-delay neural network (3.17) satisfies the properties \((P_{1, \mathrm{new}})\)–\((P_{11, \mathrm{new}})\), and let α be a constant with \(0<\alpha <1\). Then the origin of the NNTDNN (3.17) is globally asymptotically stable provided that the following conditions are satisfied for each \(i=1,2,\dots ,n\):
Proof
The main strategy to prove this theorem is to define the following quadratic-integral Lyapunov functional:
In this case, one has
and
We continue, keeping in mind the following key point:
Thanks to this key point, and concentrating on the first two parts of \(\dot{V}_{1}\), given by (3.27), it follows that
In continuation, we are going to bound the multiple series (3.31)–(3.39) one-by-one as follows:
Therefore, we get
Let us now consider (3.32). Then,
that is,
At this step, we consider (3.33) and arrive at
Equivalently,
Now, it is time to estimate the triple group (3.34)–(3.36). So, we begin with (3.34) as follows:
Hence, we have shown that
Here we take a look at (3.35). In this case,
Thus, it yields
Next, we focus on (3.36). So, we have the following:
So, in a compact form, we get the following inequality:
Now we are ready to complete the mentioned unification process by estimating the last triple of series (3.37)–(3.39). To this aim, we first consider (3.37). Then, we have
Thus, it follows that
Prior to the final step, the triple (3.38) gives us the following:
Equivalently, it has been demonstrated that
This is the final step, where we estimate the triple series (3.39). Hence, one has
In a compact form, we have proven that
Now, let us compare (3.31)–(3.39) with (3.40)–(3.48) and then gather the obtained data into (3.27). In this case, we come to the conclusion that
Since
then, in the light of (3.27), (3.28), and some simplifications, we arrive at
Since, for each \(i=1,2,\dots ,n\), the constants \(m_{i}^{2}\), \(l_{i}^{2}\), \(\epsilon _{1,i}\), \(\epsilon _{2,i}\), and \(\epsilon _{3,i}\) are all positive, it is a direct consequence of the inequality (3.51) that \(\dot{V}(z(t),\dot{z}(t),t)<0\) unless, for each \(i,j=1,2,\dots ,n\), \(z_{i}(t)=0\), \(z_{i}(t-\tau _{ij})=0\), and \(\dot{z}_{i}(t-\zeta _{ij})=0\); that is, the origin \(z(t)=[z_{1}(t),z_{2}(t),\dots ,z_{n}(t)]^{T}=[0,0,\dots ,0]^{T}\) is asymptotically stable. Besides, if \(\|z(t)\|_{Y}\to \infty \), then, according to (3.24) and (3.25), \(V(z(t),\dot{z}(t),t)\to \infty \), that is, the Lyapunov functional (3.24)–(3.26) is radially unbounded. So, the origin \(z(t)=0\) is globally asymptotically stable. This completes the stability analysis of the NNTDNN (3.17). □
4 Numerical simulations
This section is devoted to the numerical applications illustrating the implementability of the stability criterion presented in Theorem 3.2.
Example 4.1
Let us consider the NNTDNN
Indeed, the neural dynamical system (4.1) is a reduced version of the NNTDNN (3.17) for \(n=2\), having the following coefficient matrices:
Furthermore, the time delays and activation functions have been chosen as follows:
In this case, \(l_{i}=m_{i}:=1\), \(i=1,2\), and \(\delta :=5\). Having the above mentioned data in hand, it is easy to check that, for \(\alpha =0.25\) and each \(i=1,2\),
Since all conditions of Theorem 3.2 are satisfied, we come to the conclusion that the origin of the NNTDNN (4.1) is globally asymptotically stable, as can be observed in the following numerical simulation (see Fig. 1).
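Behavior like that shown in Fig. 1 can be reproduced with a simple fixed-step Euler scheme that keeps history buffers for both the state and its derivative, as a neutral-type system requires. The sketch below integrates a system of the form (2.5) with \(n=2\); all coefficients are illustrative placeholders (hypothetical, not the values of Example 4.1):

```python
import math

# Fixed-step Euler simulation of a neutral-type delayed system of the
# form (2.5) with n = 2. All coefficients are illustrative placeholders,
# NOT the data of Example 4.1.
n = 2
c = [2.0, 2.0]                      # positive decay rates c_i
a = [[0.1, -0.1], [0.1, 0.1]]       # instantaneous weights a_ij
b = [[0.1, 0.1], [-0.1, 0.1]]       # delayed weights b_ij
e = [[0.05, 0.0], [0.0, 0.05]]      # neutral coefficients e_ij
u = [0.0, 0.0]                      # external inputs
tau, zeta = 1.0, 0.5                # state delay and neutral delay
g = f = math.tanh                   # bounded, Lipschitz activations

h = 0.001                           # step size
steps = int(20.0 / h)               # integrate over [0, 20]
d_tau, d_zeta = int(tau / h), int(zeta / h)

# History buffers: entry k holds y(t_k) and dy/dt(t_k); constant initial data.
y_hist = [[0.5, -0.5]]
dy_hist = [[0.0, 0.0]]

for k in range(steps):
    y = y_hist[-1]
    y_tau = y_hist[max(0, k - d_tau)]        # y(t - tau)
    dy_zeta = dy_hist[max(0, k - d_zeta)]    # dy/dt(t - zeta)
    dy = [
        -c[i] * y[i]
        + sum(a[i][j] * g(y[j]) for j in range(n))
        + sum(b[i][j] * g(y_tau[j]) for j in range(n))
        + sum(e[i][j] * f(dy_zeta[j]) for j in range(n))
        + u[i]
        for i in range(n)
    ]
    y_hist.append([y[i] + h * dy[i] for i in range(n)])
    dy_hist.append(dy)

print(y_hist[-1])  # with these weights the state decays toward the origin
```

With the decay rates \(c_{i}\) dominating the total coupling strength, the trajectory settles at the origin, mirroring the qualitative behavior established by Theorem 3.2.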
Example 4.2
In this simulation we choose \(n=4\) in the NNTDNN (3.17) and consider the following neural network:
In this case, we can summarize the coefficients of the neural network (4.5) as follows:
Besides, the time delays of the NNTDNN (4.5) are as follows:
So, we get that \(\delta :=5\). Finally, according to the neural dynamical system (4.5), we have
Keeping the aforementioned setting and choosing \(\alpha =0.5\), with a direct computation, it is concluded that
that is, the origin of the NNTDNN (4.5) is globally asymptotically stable, as is shown in Fig. 2.
5 Concluding remarks
We finalize this paper with a compact description of the investigation carried out. In this paper, the nonlinear neutral-type time-delayed neural network (2.1) has been studied. The main aim of our investigation was to introduce a novel Lyapunov functional to stabilize this neural dynamical system. Since the NNTDNN (2.1) cannot be represented in vector matrix form, such as the matrix neural network (2.7), it cannot be stabilized with standard stability tools such as the linear matrix inequality technique. Therefore, the quadratic-integral Lyapunov functional (3.24)–(3.26) has been introduced as the basic stability key for the global asymptotic stabilization of the NNTDNN (2.1). In what follows, we summarize the steps of our investigation:
- \((S_{1})\):
-
In this paper, we consider the mathematical model (2.1) of n interconnecting neurons.
- \((S_{2})\):
-
This neural network model is nonlinear, which makes it more suitable for studying real world phenomena.
- \((S_{3})\):
-
This neuro-system is of neutral type, meaning that not only the neuron states but also their derivatives appear in the nonlinearities, which makes the model a more accurate mathematical statement of the studied phenomena.
- \((S_{4})\):
-
This model includes multiple time delays, which have important roles in the (dis)appearance of the stability and oscillation within the model (as illustrated in the numerical simulations).
- \((S_{5})\):
-
In this paper, we apply the coincidence degree theory to establish the solvability of the neural dynamical system (2.1); this theory is a rarely used solvability tool for neural networks.
- \((S_{6})\):
-
After guaranteeing the existence of at least one solution of the NNTDNN (2.1), in order to obtain a unique solution, we transformed the neuro-system (2.1) into the NNTDNN (3.17), which has the origin as its unique equilibrium point.
- \((S_{7})\):
-
In this paper, we have defined the new quadratic-integral Lyapunov functional (3.24)–(3.26), including all of the time delays, for the global asymptotic stabilization of the transformed neuro-system (3.17).
- \((S_{8})\):
-
At the end of the stability analysis, some numerical simulations have been presented, which justify the implementability of the presented stability criterion.
Availability of data and materials
Not applicable.
References
Anthony, M.: Discrete Mathematics of Neural Networks. SIAM, Philadelphia (2001)
Arik, S.: Stability analysis of delayed neural networks. IEEE Trans. Circuits Syst. 47(7), 1089–1092 (2000)
Arik, S.: An improved global stability result for delayed cellular neural networks. IEEE Trans. Circuits Syst. 49(8), 1211–1214 (2002)
Arik, S.: An analysis of global asymptotic stability of delayed cellular neural networks. IEEE Trans. Neural Netw. 13(5), 1239–1242 (2002)
Arik, S.: Global asymptotic stability of a larger class of neural networks with constant time delay. Phys. Lett. A 311, 504–511 (2003)
Arik, S.: A modified Lyapunov functional with application to stability of neutral-type neural networks with time delays. J. Franklin Inst. 356(3), 276–291 (2019)
Arik, S.: New criteria for stability of neutral-type neural networks with multiple time delays. IEEE Trans. Neural Netw. Learn. Syst. 31(5), 1–10 (2019)
Arik, S., Tavsanoglu, V.: Equilibrium analysis of delayed CNN’s. IEEE Trans. Circuits Syst. 45(2), 168–171 (1998)
Bartosiewicz, Z.: Exponential stability of nonlinear positive systems on time scales. Nonlinear Anal. Hybrid Syst. 32, 143–150 (2019)
Dong, Z., Wang, X., Zhang, X.: A nonsingular M-matrix-based global exponential stability analysis of higher-order delayed discrete-time Cohen–Grossberg neural networks. Appl. Math. Comput. 385, 125401 (2020)
Faydasicok, O.: New criteria for global stability of neutral-type Cohen–Grossberg neural networks with multiple delays. Neural Netw. 125, 330–337 (2020)
Gaines, R.E., Mawhin, J.: Coincidence Degree and Nonlinear Differential Equations. Springer, Berlin (1977)
Gao, S., Wang, Q., Wu, B.: Existence and global exponential stability of periodic solutions for coupled control systems on networks with feedback and time delays. Commun. Nonlinear Sci. Numer. Simul. 63, 72–87 (2018)
Graupe, D.: Principles of Artificial Neural Networks. World Scientific, Singapore (2007)
He, Z., Li, C., Li, H., Zhang, Q.: Global exponential stability of high-order Hopfield neural networks with state-dependent impulses. Physica A 542, 123434 (2020)
Kharitonov, V.L.: Time-Delay Systems. Springer, Berlin (2013)
Li, Y., Qin, J.: Existence and global exponential stability of periodic solutions for quaternion-valued cellular neural networks with time-varying delays. Neurocomputing 31, 91–103 (2018)
Liu, Y., Huang, J., Qin, Y., Yang, X.: Finite-time synchronization of complex-valued neural networks with finite-time distributed delays. Neurocomputing 416, 152–157 (2020)
Luo, Q., Zeng, Z., Liao, X.: Global exponential stability in Lagrange sense for neutral type recurrent neural networks. Neurocomputing 74, 638–645 (2011)
Ma, Q., Feng, G., Xu, S.: Delay-dependent stability criteria for reaction–diffusion neural networks with time-varying delays. IEEE Trans. Cybern. 43(6), 1913–1920 (2013)
Manivannan, R., Samidurai, R., Cao, J., Alsaedi, A., Alsaadi, F.E.: Global exponential stability and dissipativity of generalized neural networks with time-varying delay signals. Neural Netw. 87, 149–159 (2017)
Martynyuk, A.A., Stamova, I.M.: Stability of sets of hybrid dynamical systems with aftereffect. Nonlinear Anal. Hybrid Syst. 32, 106–114 (2019)
Olfati-Saber, R., Murray, R.M.: Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 49(9), 1520–1533 (2004)
Ozcan, N.: New conditions for global stability of neutral-type delayed Cohen–Grossberg neural networks. Neural Netw. 106, 1–7 (2018)
Ozcan, N.: Stability analysis of Cohen–Grossberg neural networks of neutral-type: multiple delays case. Neural Netw. 113, 20–27 (2019)
Roska, T., Wu, C.W., Chua, L.O.: Stability of cellular neural networks with dominant nonlinear and delay-type templates. IEEE Trans. Circuits Syst. 44(4), 270–272 (1993)
Ruan, D., Huang, Z., Guo, X.: Inequalities and stability of stochastic Hopfield neural networks with discrete and distributed delays. Neurocomputing 407, 281–291 (2020)
Samidurai, R., Rajavel, S., Sriraman, R., Cao, J., Alsaedi, A., Alsaadi, F.E.: Novel results on stability analysis of neutral-type neural networks with additive time-varying delay components and leakage delay. Int. J. Control. Autom. Syst. 15(4), 1888–1900 (2017)
Samli, R., Arik, S.: New results for global stability of a class of neutral-type neural systems with time delays. Appl. Math. Comput. 210, 564–570 (2009)
Shi, K., Zhong, S., Zhu, H., Liu, X., Zeng, Y.: New delay-dependent stability criteria for neutral-type neural networks with mixed random time-varying delays. Neurocomputing 168, 896–907 (2015)
Shi, M., Guo, J., Huang, C.: Global exponential stability of delayed inertial competitive neural networks. Adv. Differ. Equ. 2020, 87 (2020). https://doi.org/10.1186/s13662-019-2476-7
Song, Q., Long, L., Zhao, Z., Liu, Y., Alsaadi, F.E.: Stability criteria of quaternion-valued neutral-type delayed neural networks. Neurocomputing 412, 287–294 (2020)
Song, Q., Yu, Q., Zhao, Z., Liu, Y., Alsaadi, F.E.: Boundedness and global robust stability analysis of delayed complex-valued neural networks with interval parameter uncertainties. Neural Netw. 103, 55–62 (2018)
Sun, M., Liu, J.: A novel noise-tolerant Zhang neural network for time-varying Lyapunov equation. Adv. Differ. Equ. 2020, 116 (2020). https://doi.org/10.1186/s13662-020-02571-7
Wang, Y., Lou, J., Yan, H., Lu, J.: Stability criteria for stochastic neural networks with unstable subnetworks under mixed switchings. Neurocomputing (2020, in press)
Weera, W., Niamsup, P.: Novel delay-dependent exponential stability criteria for neutral-type neural networks with non-differentiable time-varying discrete and neutral delays. Neurocomputing 173, 886–898 (2016)
Xiao, Q., Huang, T.: Stability of delayed inertial neural networks on time scales: a unified matrix-measure approach. Neural Netw. 130, 33–38 (2020)
Yang, B., Wang, J., Wang, J.: Stability analysis of delayed neural networks via a new integral inequality. Neural Netw. 88, 49–57 (2017)
Yogambigai, J., Syed Ali, M., Alsulami, H., Alhodaly, M.S.: Global Lagrange stability for neutral-type inertial neural networks with discrete and distributed time delays. Chin. J. Phys. 65, 513–525 (2020)
Zhang, G., Wang, T., Li, T., Fei, S.: Multiple integral Lyapunov approach to mixed-delay-dependent stability of neutral neural networks. Neurocomputing 275, 1782–1792 (2018)
Zhang, G., Zeng, Z., Hu, J.: New results on global exponential dissipativity analysis of memristive inertial neural networks with distributed time-varying delays. Neural Netw. 97, 183–191 (2018)
Zhang, J., Huang, C.: Dynamics analysis on a class of delayed neural networks involving inertial terms. Adv. Differ. Equ. 2020, 120 (2020). https://doi.org/10.1186/s13662-020-02566-4
Zhang, X., Han, Q., Zeng, Z.: Hierarchical type stability criteria for delayed neural networks via canonical Bessel–Legendre inequalities. IEEE Trans. Cybern. 48(5), 1660–1671 (2018)
Zhang, Z., Liu, W., Zhou, D.: Global asymptotic stability to a generalized Cohen–Grossberg BAM neural networks of neutral type delays. Neural Netw. 25, 94–105 (2012)
Acknowledgements
Not applicable.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
The author read and approved the final version of the current paper.
Corresponding author
Ethics declarations
Competing interests
The author declares that he has no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Gholami, Y. Existence and global asymptotic stability criteria for nonlinear neutral-type neural networks involving multiple time delays using a quadratic-integral Lyapunov functional. Adv Differ Equ 2021, 112 (2021). https://doi.org/10.1186/s13662-021-03274-3