
Exponential stability in the Lagrange sense for Clifford-valued recurrent neural networks with time delays

Abstract

This paper considers Clifford-valued recurrent neural network (RNN) models, as a generalization of real-valued, complex-valued, and quaternion-valued neural network models, and investigates their global exponential stability in the Lagrange sense. In order to address the issue of non-commutative multiplication of Clifford numbers, we decompose the original n-dimensional Clifford-valued RNN model into \(2^{m}n\) real-valued models. On the basis of Lyapunov stability theory and some analytical techniques, several sufficient conditions are obtained for the considered Clifford-valued RNN models to achieve global exponential stability in the Lagrange sense. Two examples are presented to illustrate the applicability of the main results, along with a discussion of the implications.

1 Introduction

Recurrent neural network (RNN) models have been successfully employed for solving problems in optimization, control, associative memory, and signal and image processing. Recently, the dynamic analysis of RNN models has attracted considerable interest from researchers, and many papers on the stability theory of RNNs have been published [1–6]. Due to the limited speed of signal propagation, time delays (either constant or time-varying) are often encountered in neural network (NN) models employed for solving real-world applications. Time delays are a main source of various dynamics such as chaos, divergence, poor functionality, and instability [4–9]. Therefore, studies on NN dynamics that incorporate either constant or time-varying delays are necessary. On the other hand, complex-valued and quaternion-valued NN models are important in light of their potential applications in many fields, including color night vision, radar imaging, polarized signal classification, 3D wind forecasting, and others [9–13]. Recently, many important results have been published concerning different dynamics of complex-valued and quaternion-valued NN models [14–20], particularly stability analysis [14, 18–20], synchronization analysis [15], stabilizability and instabilizability analysis [16], controllability and observability [21], optimization [22, 23], and so on.

Clifford algebra provides a powerful framework for solving geometric problems. In addition, Clifford algebra has been applied in numerous areas of science and engineering, such as neural computation, computer and robot vision, and control problems [24–28]. In dealing with high-dimensional data and spatial geometric transformations, Clifford-valued NN models are superior to real-valued, complex-valued, and quaternion-valued NN models [26–29]. Recently, theoretical and applied studies on Clifford-valued NN models have become a new research subject. However, the dynamic properties of Clifford-valued NN models are usually more complex than those of real-valued, complex-valued, and quaternion-valued NN models. Due to the non-commutativity of multiplication of Clifford numbers, studies on Clifford-valued NN dynamics are still limited [30–39].

By using the linear matrix inequality (LMI) approach, the authors in [30] derived global exponential stability criteria for delayed Clifford-valued recurrent NN models. For Clifford-valued NN models with time delays, the global asymptotic stability problem was investigated in [31] by a decomposition method. Using a direct approach, existence and global exponential stability conditions for pseudo almost periodic solutions of Clifford-valued neutral high-order Hopfield NN models with leakage delays were derived in [34]. Using the Banach fixed point theorem and Lyapunov functionals, the global asymptotic almost periodic synchronization problem for Clifford-valued cellular NN models was examined in [35]. Weighted pseudo-almost automorphic solutions of neutral-type fuzzy cellular Clifford-valued NN models were discussed in [37]. In [39], the authors explored the existence of anti-periodic solutions of Clifford-valued inertial Cohen-Grossberg NN models by using suitable Lyapunov functionals. Recently, the effects of neutral and discrete delays have been considered in a class of Clifford-valued NNs [40], and the associated existence, uniqueness, and global stability criteria have been obtained.

It is worth noting that many previous NN studies focus mostly on global stability in the Lyapunov sense. In many real physical systems, however, an equilibrium point may not exist. Besides, computationally restrictive and multi-stable dynamics have been found to be necessary for dealing with critical neural computation in applications [41, 42]. As an example, when employing NN models for associative memory or pattern recognition, multiple equilibrium points are often required. In these cases, NN models are no longer globally stable in the Lyapunov sense. In dealing with multi-stable models, more realistic concepts of stability are necessary [43–46]. It should be noticed that Lyapunov stability of NN models requires the existence and uniqueness of the equilibrium point. In contrast, Lagrange stability, which is concerned with the stability of the total model, does not require information about equilibrium points. Therefore, the study of the Lagrange stability or the attractive sets of NN models is useful, and many results have been published [45–48].

Motivated by the above discussions, in this paper we investigate the exponential stability in the Lagrange sense for Clifford-valued RNNs with time delays. To the best of our knowledge, there are no results on the Lagrange stability of Clifford-valued NNs. Undoubtedly, this interesting topic remains an open challenge. Therefore, the main objective of this paper is to derive new sufficient conditions ensuring the Lagrange exponential stability of Clifford-valued RNNs. The main contributions of this work are as follows: (1) This is the first paper to study the global exponential stability of Clifford-valued RNN models in the Lagrange sense; the considered model encompasses real-valued, complex-valued, and quaternion-valued NN models as special cases. (2) By using Lyapunov functionals and some analytical methods, new sufficient conditions for ascertaining the exponential stability of the considered Clifford-valued RNN models in the Lagrange sense are derived. This is achieved by decomposing the n-dimensional Clifford-valued RNN model into a \(2^{m}n\)-dimensional real-valued RNN model. (3) The results obtained can be used for the study of monostable, multi-stable, and other complex NN models. In addition, the obtained results are new and different from those in the existing literature. The usefulness of the obtained results is validated with two numerical examples.

We organize this paper as follows. The proposed Clifford-valued RNN model is formally defined in Sect. 2. The new global Lagrange exponential stability criteria are presented in Sect. 3, while two numerical examples are given in Sect. 4. Conclusions are given in Sect. 5.

2 Mathematical fundamentals and problem formulation

2.1 Notations

\(\mathbb{A}\) denotes the Clifford algebra with m generators over the real field \(\mathbb{R}\). Let \(\mathbb{R}^{n}\) and \(\mathbb{A}^{n}\) be the n-dimensional real vector space and the n-dimensional Clifford vector space, respectively, while \(\mathbb{R}^{n \times n}\) and \(\mathbb{A}^{n \times n}\) denote the set of all \(n \times n\) real matrices and the set of all \(n \times n\) Clifford matrices, respectively. We define the norm on \(\mathbb{R}^{n}\) as \(\|r\|=\sum_{i=1}^{n}|r_{i}|\), and for \(A=(a_{ij})_{n\times n}\in \mathbb{R}^{n\times n}\), we denote \(\|A\|=\max_{1\leq i\leq n} \{\sum_{j=1}^{n}|a_{ij}| \}\). For \(r=\sum_{A}r^{A}e_{A}\in \mathbb{A}\), we denote \(|r|_{\mathbb{A}}=\sum_{A}|r^{A}|\), and for \(A=(a_{ij})_{n\times n}\in \mathbb{A}^{n\times n}\), we denote \(\|A\|_{\mathbb{A}}=\max_{1\leq i\leq n} \{\sum_{j=1}^{n}|a_{ij}|_{ \mathbb{A}} \}\). Superscripts T and ∗ denote the matrix transpose and the matrix involution transpose, respectively. For a matrix, >0 (<0) denotes that it is positive (negative) definite. Let \(\mathscr{C}\mathscr{C}\mathscr{F}^{+}\) be the set of all nonnegative continuous functionals. For \(\varphi \in \mathcal{C}([-\tau , 0], \mathbb{A}^{n})\), define \(\|\varphi \|_{\tau }= \sup_{-\tau \leq s\leq 0}\|\varphi (s) \|\).

2.2 Clifford algebra

The Clifford real algebra over \(\mathbb{R}^{m}\) is defined as

$$\begin{aligned} \mathbb{A}= \biggl\{ \sum_{A\subseteq \{1,2,\ldots ,m\}}a^{A}e_{A}, a^{A} \in \mathbb{R} \biggr\} , \end{aligned}$$

where \(e_{A}=e_{l_{1}}e_{l_{2}}\ldots e_{l_{\nu }}\) with \(A=\{l_{1},l_{2},\ldots ,l_{\nu }\}\), \(1 \leq l_{1} < l_{2} <\cdots < l_{\nu }\leq m\).

Moreover, \(e_{\emptyset }=e_{0}=1\) and \(e_{l}=e_{\{l\}}\), \(l=1,2,\ldots ,m\), are denoted as the Clifford generators, and they fulfill the following relations:

$$\begin{aligned} \textstyle\begin{cases} e_{i}e_{j} + e_{j}e_{i} = 0, & i \neq j, i,j=1,2,\ldots ,m, \\ e_{i}^{2} =-1, & i = 1,2,\ldots ,m, \\ e_{0}^{2}=1. \end{cases}\displaystyle \end{aligned}$$
(1)

For simplicity, when an element is the product of multiple Clifford generators, its subscripts are incorporated together, e.g., \(e_{4}e_{5}e_{6}e_{7}=e_{4567}\).

Let \(\Lambda =\{\emptyset ,1,2,\ldots , A,\ldots , 12\dots m\}\), and we have

$$\begin{aligned} \mathbb{A}= \biggl\{ \sum_{A}a^{A}e_{A}, a^{A}\in \mathbb{R} \biggr\} , \end{aligned}$$

where \(\sum_{A}\) denotes \(\sum_{A\in \Lambda }\) and \(\mathbb{A}\) is isomorphic to \(\mathbb{R}^{2^{m}}\).

For any Clifford number \(r=\sum_{A}r^{A}e_{A}\), the involution of r is defined by

$$\begin{aligned} \bar{r}=\sum_{A}r^{A} \bar{e}_{A}, \end{aligned}$$

where \(\bar{e}_{A}=(-1)^{\frac{\sigma [A](\sigma [A]+1)}{2}}e_{A}\), and

$$\begin{aligned} \sigma [A]= \textstyle\begin{cases} 0, & \text{if } A=\emptyset , \\ \nu , & \text{if } A=l_{1}l_{2}\ldots l_{\nu }. \end{cases}\displaystyle \end{aligned}$$

From the definition, we can directly deduce that \(e_{A}\bar{e}_{A}=\bar{e}_{A}e_{A}=1\). For a Clifford-valued function \(r=\sum_{A}r^{A}e_{A}:\mathbb{R}\rightarrow \mathbb{A}\), where \(r^{A}:\mathbb{R}\rightarrow \mathbb{R}\), \(A\in \Lambda \), its derivative is represented by \(\frac{dr(t)}{dt}=\sum_{A}\frac{dr^{A}(t)}{dt}e_{A}\).

Since \(e_{B}\bar{e}_{A}=(-1)^{\frac{\sigma [A](\sigma [A]+1)}{2}}e_{B}e_{A}\), we can write \(e_{B}\bar{e}_{A}=e_{C}\) or \(e_{B}\bar{e}_{A}=-e_{C}\), where \(e_{C}\) is a basis of Clifford algebra \(\mathbb{A}\). As an example, \(e_{l_{1}l_{2}}\bar{e}_{l_{2}l_{3}}=-e_{l_{1}l_{2}}e_{l_{2}l_{3}}=-e_{l_{1}}e_{l_{2}}e_{l_{2}}e_{l_{3}}=-e_{l_{1}}(-1)e_{l_{3}}=e_{l_{1}}e_{l_{3}} = e_{l_{1}l_{3}}\). Therefore, it is possible to identify a unique corresponding basis \(e_{C}\) for a given \(e_{B}\bar{e}_{A}\). Define

$$\begin{aligned} \sigma [B.\bar{A}]= \textstyle\begin{cases} 0, & \text{if } e_{B}\bar{e}_{A} = e_{C}, \\ 1, & \text{if } e_{B}\bar{e}_{A} =-e_{C}, \end{cases}\displaystyle \end{aligned}$$

and then, \(e_{B}\bar{e}_{A} =(-1)^{\sigma [B.\bar{A}]}e_{C}\).

Moreover, for \(\mathscr{G}=\sum_{C}\mathscr{G}^{C}e_{C} \in \mathbb{A}\), we define \(\mathscr{G}^{B.\bar{A}}=(-1)^{\sigma [B.\bar{A}]}\mathscr{G}^{C}\) for \(e_{B}\bar{e}_{A}=(-1)^{\sigma [B.\bar{A}]}e_{C}\). Therefore

$$\begin{aligned} \mathscr{G}^{B.\bar{A}}e_{B}\bar{e}_{A} &= \mathscr{G}^{B.\bar{A}}(-1)^{ \sigma [B.\bar{A}]}e_{C} =(-1)^{\sigma [B.\bar{A}]} \mathscr{G}^{C}(-1)^{ \sigma [B.\bar{A}]}e_{C} = \mathscr{G}^{C}e_{C}. \end{aligned}$$
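To make these sign rules concrete, the following short Python sketch (not part of the original paper) multiplies basis elements \(e_{A}e_{B}\), evaluates the involution sign, and checks \(e_{A}\bar{e}_{A}=1\) together with the worked example \(e_{l_{1}l_{2}}\bar{e}_{l_{2}l_{3}}=e_{l_{1}l_{3}}\) above (taking \(l_{1}=1\), \(l_{2}=2\), \(l_{3}=3\)); the function names are illustrative only.

```python
from itertools import combinations

def basis_mul(A, B):
    """Product e_A e_B of two basis elements given as sorted index tuples.
    Returns (sign, C) such that e_A e_B = sign * e_C."""
    idx = list(A) + list(B)
    sign = 1
    # bubble sort: every swap of two distinct generators uses e_i e_j = -e_j e_i
    for end in range(len(idx) - 1, 0, -1):
        for k in range(end):
            if idx[k] > idx[k + 1]:
                idx[k], idx[k + 1] = idx[k + 1], idx[k]
                sign = -sign
    # cancel adjacent equal generators using e_i^2 = -1
    out, k = [], 0
    while k < len(idx):
        if k + 1 < len(idx) and idx[k] == idx[k + 1]:
            sign, k = -sign, k + 2
        else:
            out.append(idx[k])
            k += 1
    return sign, tuple(out)

def involution_sign(A):
    """bar(e_A) = (-1)**(nu*(nu+1)/2) * e_A, where nu = sigma[A] = |A|."""
    nu = len(A)
    return (-1) ** (nu * (nu + 1) // 2)

m = 3  # number of generators; the basis set Lambda has 2**m elements
Lambda = [c for v in range(m + 1) for c in combinations(range(1, m + 1), v)]
assert len(Lambda) == 2 ** m

# e_A * bar(e_A) = 1 for every basis element
for A in Lambda:
    s, C = basis_mul(A, A)
    assert involution_sign(A) * s == 1 and C == ()

# worked example from the text: e_{12} bar(e_{23}) = e_{13}
s, C = basis_mul((1, 2), (2, 3))
print(involution_sign((2, 3)) * s, C)   # prints: 1 (1, 3)
```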

2.3 Problem definition

The following Clifford-valued RNN model with discrete time-varying delays is considered:

$$\begin{aligned} \textstyle\begin{cases} \dot{r}_{i}(t)= -d_{i}r_{i}(t)+\sum_{j=1}^{n}a_{ij}h_{j}(r_{j}(t))+ \sum_{j=1}^{n}b_{ij}h_{j}(r_{j}(t-\tau _{j}(t)))+u_{i},\quad t \geq 0, \\ r_{i}(s)=\varphi _{i}(s),\quad s\in [-\tau , 0], \end{cases}\displaystyle \end{aligned}$$
(2)

where \(i,j= 1,2,\ldots ,n\), and n corresponds to the number of neurons; \(r_{i}(t)\in \mathbb{A}\) represents the state of the ith unit; \(d_{i}\in \mathbb{R}^{+}\) indicates the rate with which the ith unit resets its potential to the resting state in isolation when it is disconnected from the network and external inputs; \(a_{ij}, b_{ij}\in \mathbb{A}\) indicate the strengths of the connection weights without and with time-varying delays between cells i and j, respectively; \(u_{i}\in \mathbb{A}\) is the external input to the ith unit; \(h_{j}(\cdot ):\mathbb{A} \rightarrow \mathbb{A}\) is the activation function of signal transmission; \(\tau _{j}(t)\) corresponds to the transmission delay, which satisfies \(0\leq \tau _{j}(t)\leq \tau \), where τ is a positive constant and \(\tau =\max_{1\leq j\leq n}\{\tau _{j}(t)\}\). Furthermore, \(\varphi _{i}\in \mathcal{C}([-\tau , 0],\mathbb{A}^{n})\) is the initial condition of NN model (2).

For convenience of discussion, we rewrite (2) in the vector form

(3)

where \(r(t)=(r_{1}(t),r_{2}(t),\ldots,r_{n}(t))^{T}\in \mathbb{A}^{n}\); \(D=\operatorname{diag}\{d_{1},d_{2},\ldots,d_{n}\}\) with \(d_{i}>0\), \(i=1,2,\ldots,n\); \(A=(a_{ij})_{n\times n}\in \mathbb{A}^{n\times n}\); \(B=(b_{ij})_{n\times n}\in \mathbb{A}^{n\times n}\); \(u=(u_{1},u_{2},\ldots,u_{n})^{T}\in \mathbb{A}^{n}\); \(h(r(t))=(h_{1}(r_{1}(t)),h_{2}(r_{2}(t)),\ldots,h_{n}(r_{n}(t)))^{T} \in \mathbb{A}^{n}\); \(h(r(t-\tau (t)))=(h_{1}(r_{1}(t-\tau (t))),h_{2}(r_{2}(t-\tau (t))),\ldots,h_{n}(r_{n}(t- \tau (t))))^{T}\in \mathbb{A}^{n}\).

(A1) Each activation function \(h_{j}(\cdot )\) fulfills the Lipschitz continuity condition on \(\mathbb{A}\): for each \(j=1,2,\ldots,n\), there exists a positive constant \(k_{j}\) such that, for any \(x, y\in \mathbb{A}\),

$$\begin{aligned} \bigl\vert h_{j}(x)-h_{j}(y) \bigr\vert _{\mathbb{A}}\leq k_{j} \vert x-y \vert _{\mathbb{A}},\quad j=1,2,\ldots,n, \end{aligned}$$
(4)

where \(k_{j}\) (\(j=1,2,\ldots,n\)) is known as the Lipschitz constant and \(h_{j}(0)=0\). In addition, each \(h_{j}\) is bounded, i.e., \(|h_{j}(x)|_{\mathbb{A}}\leq k_{j}\) for any \(x\in \mathbb{A}\).

According to Assumption (A1), it is clear that

(5)

where .

Remark 2.1

There exists a constant \(k_{j}>0\) (\(j=1,2,\ldots,n\)) such that \(\forall x=(x_{1}, x_{2},\ldots, x_{2^{m}})^{T}\in \mathbb{R}^{2^{m}}\) and \(\forall y=(y_{1}, y_{2},\ldots, y_{2^{m}})^{T}\in \mathbb{R}^{2^{m}}\)

$$\begin{aligned} \bigl\vert h_{j}^{A}(x)-h_{j}^{A}(y) \bigr\vert \leq k_{j}\sum_{\ell =1}^{2^{m}} \vert x_{\ell }-y_{\ell } \vert ,\quad A\in \Lambda , j=1,2,\ldots ,n. \end{aligned}$$

Remark 2.2

The proposed system model in this paper is more general than the models proposed in previous works [4, 6, 19]. For example, if we take \(m=0\) in NN model (3), it reduces to the real-valued NN model proposed in [4]. If we take \(m=1\) in NN model (3), it reduces to the complex-valued NN model proposed in [6]. If we take \(m=2\) in NN model (3), it reduces to the quaternion-valued NN model proposed in [19].

3 Main results

In order to overcome the non-commutativity of multiplication of Clifford numbers, we transform the original Clifford-valued model (3) into a multidimensional real-valued model. This can be achieved with the help of \(e_{A}\bar{e}_{A}=\bar{e}_{A}e_{A}=1\) and \(e_{B}\bar{e}_{A}e_{A}=e_{B}\). Given any \(\mathscr{G}\in \mathbb{A}\), a unique \(\mathscr{G}^{C}\) satisfying \(\mathscr{G}^{C}e_{C}h^{A}e_{A}=(-1)^{\sigma [B.\bar{A}]}\mathscr{G}^{C}h^{A}e_{B}= \mathscr{G}^{B.\bar{A}}h^{A}e_{B}\) can be identified. By decomposing (3) into \(\dot{r}=\sum_{A}\dot{r}^{A}e_{A}\), we have

(6)

where

Corresponding to the basis of the Clifford algebra, the Clifford-valued NN model can thus be reformulated as a real-valued one. Define

and then (6) can be written as

(7)

with the initial value

$$\begin{aligned} \tilde{r}(s)=\tilde{\varphi }(s),\quad s\in [-\tau , 0], \end{aligned}$$
(8)

where \(\tilde{\varphi }(s)=((\varphi ^{0}(s))^{T},(\varphi ^{1}(s))^{T},\ldots,( \varphi ^{A}(s))^{T},\ldots,(\varphi ^{12\ldots m}(s))^{T})^{T}\in \mathbb{R}^{2^{m}n}\).
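The display of (7) is not reproduced above; by construction it is the real-valued counterpart of (3), so it can be written in the familiar delayed-RNN form \(\dot{\tilde{r}}(t)=-\tilde{D}\tilde{r}(t)+\tilde{A}\tilde{h}(\tilde{r}(t))+\tilde{B}\tilde{h}(\tilde{r}(t-\tau (t)))+\tilde{u}\) on \(\mathbb{R}^{2^{m}n}\). The following minimal forward-Euler sketch simulates a system of this assumed form; the matrices and the input are illustrative placeholders rather than the paper's values, and a single common delay \(\tau (t)\) is used for simplicity.

```python
import numpy as np

# Forward-Euler sketch of a decomposed real-valued delayed RNN of the assumed form
#   d/dt r(t) = -D r(t) + A h(r(t)) + B h(r(t - tau(t))) + u,   r(t) in R^(2^m * n).
# D, A, B, u below are illustrative placeholders, not the matrices of model (7).
rng = np.random.default_rng(0)
N = 8                                                  # 2^m * n, e.g. m = 2, n = 2
D = np.diag(rng.uniform(1.0, 2.0, N))
A = 0.2 * rng.standard_normal((N, N))
B = 0.2 * rng.standard_normal((N, N))
u = 0.1 * rng.standard_normal(N)
h = lambda x: 0.25 * (np.abs(x + 1) - np.abs(x - 1))   # bounded, Lipschitz activation
tau = lambda t: 0.4 * abs(np.cos(t)) + 0.2             # time-varying delay, tau(t) <= 0.6

dt, hist, steps = 0.001, 600, 20000                    # hist*dt = 0.6 covers the delay
r = np.zeros((hist + steps + 1, N))
r[: hist + 1] = rng.uniform(-1.0, 1.0, N)              # constant initial function on [-tau, 0]

for k in range(hist, hist + steps):
    t = (k - hist) * dt
    lag = min(int(round(tau(t) / dt)), hist)           # delay expressed in time steps
    r[k + 1] = r[k] + dt * (-D @ r[k] + A @ h(r[k]) + B @ h(r[k - lag]) + u)

print("state norm at final time:", np.linalg.norm(r[-1], 1))   # stays bounded
```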

In addition, notice that (4) can be expressed as the following inequality:

(9)

where

Definition 3.1

([47])

NN model (7) is said to be uniformly stable in the Lagrange sense if, for any \(\beta >0\), there exists a constant \(M=M(\beta )>0\) such that \(\|\tilde{r}(t)\|\leq M\) for any \(\tilde{\varphi }\in \mathcal{C}_{\beta }=\{\tilde{\varphi }\in \mathcal{C}([-\tau ,0], \mathbb{R}^{2^{m}n})\mid \|\tilde{\varphi }\| \leq \beta \}\) and \(t\geq 0\).

Definition 3.2

([47])

Suppose that there exist a radially unbounded and positive definite function \(\mathcal{V}(\cdot )\), a nonnegative continuous function \(W(\cdot )\), and two constants \(\rho >0\) and \(\upsilon >0\) such that, for any solution \(\tilde{r}(t)\) of NN model (7), \(\mathcal{V}(t)>\rho \) implies

$$\begin{aligned} \mathcal{V}(t)-\rho \leq W(\tilde{\varphi })e^{-\upsilon t} \end{aligned}$$

for any \(t\geq 0\) and \(\tilde{\varphi }\in \mathcal{C}_{\beta }\). Then NN (7) is said to be globally exponentially attractive (GEA) with respect to \(\mathcal{V}(t)\), and the compact set \(\Omega =\{\tilde{r}(t)\in \mathbb{R}^{2^{m}n} |\mathcal{V}(t) \leq \rho\} \) is said to be a GEA set of NN model (7).

Definition 3.3

([47])

NN model (7) is globally exponentially stable in the Lagrange sense if it is both uniformly stable in the Lagrange sense and globally exponentially attractive. When there is a need to emphasize the Lyapunov-like function, we say that NN model (7) is globally exponentially stable in the Lagrange sense with respect to \(\mathcal{V}\).

Lemma 3.4

([47])

For any positive definite matrix \(P\in \mathbb{R}^{2^{m}n\times 2^{m}n}\), positive real constant ϵ, and \(x, y\in \mathbb{R}^{2^{m}n}\), it holds that \(2x^{T}y\leq \epsilon x^{T}Px+\epsilon ^{-1}y^{T}P^{-1}y\).

Lemma 3.5

([47])

Given with , , , this is equivalent to one of the following conditions: (i) , , (ii) , .

Lemma 3.6

([47])

Suppose that \(\varsigma _{1}>\varsigma _{2}>0\) and that \(\tilde{r}(t)\) is a nonnegative continuous function on \([t_{0}-\tau , t_{0}]\) which satisfies \(\mathcal{D}^{+}\tilde{r}(t)\leq -\varsigma _{1}\tilde{r}(t)+ \varsigma _{2}\tilde{\bar{r}}(t)\) for \(t\geq t_{0}\). Then \(\tilde{r}(t)\leq \tilde{\bar{r}}(t_{0})e^{-\lambda (t-t_{0})}\) holds for any \(t\geq t_{0}\), where \(\tilde{\bar{r}}(t)=\sup_{t-\tau \leq s\leq t}\tilde{r}(s)\), \(\tau \geq 0\), and λ is the unique positive root of \(\lambda =\varsigma _{1}-\varsigma _{2}e^{\lambda \tau }\).
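The decay rate λ in Lemma 3.6 is only defined implicitly. Since \(f(\lambda )=\varsigma _{1}-\varsigma _{2}e^{\lambda \tau }-\lambda \) is strictly decreasing with \(f(0)=\varsigma _{1}-\varsigma _{2}>0\), the root is easily computed numerically; a small bisection sketch (illustrative, not taken from the paper) is given below.

```python
import math

def decay_rate(s1, s2, tau, tol=1e-12):
    """Unique positive root of lambda = s1 - s2*exp(lambda*tau), assuming s1 > s2 > 0."""
    f = lambda lam: s1 - s2 * math.exp(lam * tau) - lam   # strictly decreasing, f(0) > 0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:                                      # expand until a sign change is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# e.g. the root of lambda = (1 + epsilon) - e^{lambda * tau} appearing in the proof of
# Theorem 3.9, with epsilon = 0.6 and tau = 0.6 as in Example 1:
print(decay_rate(1.6, 1.0, 0.6))    # about 0.36
```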

Lemma 3.7

([47])

Given matrices , , , , and appropriate invertible matrices , , then

Lemma 3.8

([48])

Given positive constants β and γ, suppose that \(\mathscr{V}(t)\in \mathcal{C}([0, +\infty ), \mathbb{R})\) satisfies

$$\begin{aligned} \mathcal{D}^{+}\mathscr{V}(t)\leq -\beta \mathscr{V}(t)+\gamma ,\quad t \geq 0, \end{aligned}$$

then

$$\begin{aligned} \mathscr{V}(t)-\frac{\gamma }{\beta }\leq \biggl(\mathscr{V}(0)- \frac{\gamma }{\beta } \biggr)e^{-\beta t},\quad t\geq 0. \end{aligned}$$

If \(\mathscr{V}(t)\geq \frac{\gamma }{\beta }\), \(t\geq 0\), then \(\mathscr{V}(t)\) exponentially approaches \(\frac{\gamma }{\beta }\) as t increases.
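For completeness, the bound in Lemma 3.8 follows from a standard integrating-factor (comparison) argument; treating the Dini derivative as an ordinary derivative for brevity,

$$\begin{aligned} \frac{d}{dt} \bigl(e^{\beta t}\mathscr{V}(t) \bigr)=e^{\beta t} \bigl(\dot{\mathscr{V}}(t)+\beta \mathscr{V}(t) \bigr)\leq \gamma e^{\beta t} \quad \Longrightarrow \quad e^{\beta t}\mathscr{V}(t)-\mathscr{V}(0)\leq \frac{\gamma }{\beta } \bigl(e^{\beta t}-1 \bigr), \end{aligned}$$

and multiplying through by \(e^{-\beta t}\) and rearranging gives \(\mathscr{V}(t)-\frac{\gamma }{\beta }\leq (\mathscr{V}(0)-\frac{\gamma }{\beta } )e^{-\beta t}\).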

3.1 Exponential stability

Theorem 3.9

Under Assumption (A1), if there exist positive definite matrices , positive diagonal matrices such that the following LMIs hold:

(10)
(11)

where , then NN model (7) is globally exponentially stable in the Lagrange sense. Moreover, the set

(12)

is a globally exponentially attractive set of NN model (7), where ϵ is a proper positive constant.

Proof

Consider the following Lyapunov function which is positive definite and radially unbounded

(13)

Computing the Dini derivative \(\mathcal{D}^{+}\mathcal{V}(t)\) along the solutions of NN model (7), we get

(14)

There exists a positive diagonal matrix , and through Assumption (A1) we have

(15)

Using Assumption (A1) and Lemma 3.4, there exist positive definite matrices and such that the following inequalities hold:

(16)

and

(17)

Through Lemma 3.7 and (14)–(17), we have

(18)

where

and

There exists \(\epsilon >0\) such that, from (10), we have

(19)

where .

According to Lemma 3.5, we have

This indicates that

(20)

By combining (11), (18), and (20), we have

(21)

Through (A1) and (13), we have

(22)

From (21) and (22), we have

(23)

where \(\tilde{\mathcal{V}}(t)=\sup_{t-\tau \leq s \leq t}\mathcal{V}(s)\).

According to (23), we have

$$\begin{aligned} \mathcal{D}^{+}\bigl(\mathcal{V}(t)-\vartheta \bigr)\leq -(1+\epsilon ) \bigl( \mathcal{V}(t)-\vartheta \bigr)+\tilde{\mathcal{V}}(t)-\vartheta ,\quad t\geq 0, \end{aligned}$$
(24)

where .

We can derive \(\mathcal{V}(t)-\vartheta \leq (\tilde{\mathcal{V}}(0)-\vartheta ) e^{-\lambda t}\) from Lemma 3.6, where λ is the unique positive root of \(\lambda =(1+\epsilon )-e^{\lambda \tau }\).

As such, NN model (7) is globally exponentially stable in the Lagrange sense, and the set

is a globally exponentially attractive set of NN model (7). The proof of Theorem 3.9 is completed. □

Now we have the following Corollary 3.10.

Corollary 3.10

Under (A1), if there exist positive definite matrices , positive diagonal matrix such that the following LMIs hold:

(25)
(26)

where . Then NN model (7) is globally exponentially stable in the Lagrange sense. Moreover, the set

is a globally exponentially attractive set of NN model (7).

Proof

Consider the following Lyapunov function which is positive definite and radially unbounded:

(27)

Corollary 3.10 can be proven by an approach similar to that used for Theorem 3.9, so the proof is omitted here. □

Corollary 3.11

Suppose that the activation function \(h_{i}(\cdot )\) is bounded, i.e., \(|h_{i}(\cdot )|_{\mathbb{A}}\leq k_{i}\), where \(k_{i}\) (\(i=1,2,\ldots, n\)) is a positive constant. Then NN model (6) is globally exponentially stable in the Lagrange sense. Moreover, the compact set

(28)

is a globally exponentially attractive set of NN model (6), where

Proof

First of all, we prove that NN model (6) is uniformly stable in the Lagrange sense. Consider the following Lyapunov function which is positive definite and radially unbounded:

$$\begin{aligned} \mathcal{V}(t)=\frac{1}{2}\sum_{i=1}^{n} \sum_{A}\bigl(r^{A}_{i}(t) \bigr)^{2}. \end{aligned}$$
(29)

Let \(0<\mu _{i}<d_{i}\) (\(i=1,2,\ldots,n\)). Then the Dini derivative \(\mathcal{D}^{+}\mathcal{V}(t)\) can be computed along the positive half trajectory of NN model (6), and we have

(30)

Through Lemma 3.8, for any \(t\geq 0\), we have

$$\begin{aligned} \mathcal{V}(t)-\rho \leq \bigl(\mathcal{V}(0)-\rho \bigr)e^{-\upsilon t}, \end{aligned}$$
(31)

where and \(\upsilon =2\min_{1\leq i\leq n}\{d_{i}-\mu _{i}\}\).

This ensures that the solution of NN model (6) is uniformly bounded. Hence, NN model (6) is uniformly stable in the Lagrange sense. Observe that

Then , and (31) implies that, for any \(t\geq 0\),

(32)

Through Definition 3.2, NN model (6) is globally exponentially attractive and \(\tilde{\Omega }_{1}\) is a globally exponentially attractive set. Together with the uniform stability shown above, this proves the global exponential stability of NN model (6) in the Lagrange sense. □

Corollary 3.12

If the activation function \(h_{i}(\cdot )\) is bounded, i.e., \(|h_{i}(\cdot )|_{\mathbb{A}}\leq k_{i}\), where \(k_{i}\) (\(i=1,2,\ldots,n\)) is a positive constant, then NN (6) is globally exponentially stable in the Lagrange sense. Moreover, the compact set

(33)

is a globally exponentially attractive set of NN model (6), where

Proof

We first prove that \(\tilde{\Omega }_{2}\) is a globally exponentially attractive set. We employ the positive definite and radially unbounded Lyapunov function

$$\begin{aligned} \mathcal{V}(t)=\sum_{i=1}^{n}\sum _{A} \bigl\vert r^{A}_{i}(t) \bigr\vert . \end{aligned}$$
(34)

Then the Dini derivative \(\mathcal{D}^{+}\mathcal{V}(t)\) can be computed along the positive half trajectory of NN model (6), and we have

(35)

Similarly, by Lemma 3.8 and Definition 3.2, we infer that \(\tilde{\Omega }_{2}\) is a globally exponentially attractive set; thus, the detailed proof is omitted. □

Remark 3.13

It is worth mentioning that Clifford number multiplication does not satisfy commutativity, which complicates the investigation of the dynamics of Clifford-valued NNs. On the other hand, the decomposition method is very successful in overcoming the difficulty of non-commutative Clifford number multiplication. Recently, several authors have obtained sufficient conditions for the global stability and periodic solutions of Clifford-valued NN models by using the decomposition method [31, 33, 35, 39, 40].

Remark 3.14

In [30–39], the authors studied the global asymptotic stability or global exponential stability and periodic solutions of Clifford-valued NN models in the Lyapunov sense. Compared with some previous studies of Clifford-valued NN models [30–40], in this paper, for the first time we have derived new sufficient conditions with respect to the global exponential stability in the Lagrange sense for a class of Clifford-valued NNs. Therefore, the proposed results in this paper are different and new compared with those in the existing literature [30–40].

4 Numerical examples

Two numerical examples are presented to demonstrate the feasibility and effectiveness of the results established in Sect. 3.

Example 1

For \(m=2\) and \(n=2\), a two-neuron Clifford-valued NN model can be described by

(36)

The generators satisfy \(e_{1}^{2}=e_{2}^{2}=e_{12}^{2}=e_{1}e_{2}e_{12}=-1\), \(e_{1}e_{2}=-e_{2}e_{1}=e_{12}\), \(e_{1}e_{12}=-e_{12}e_{1}=-e_{2}\), and \(e_{2}e_{12}=-e_{12}e_{2}=e_{1}\), and the states decompose as \(\dot{r}_{1}=\dot{r}^{0}_{1}e_{0}+\dot{r}^{1}_{1}e_{1}+\dot{r}^{2}_{1}e_{2}+ \dot{r}^{12}_{1}e_{12}\), \(\dot{r}_{2}=\dot{r}^{0}_{2}e_{0}+\dot{r}^{1}_{2}e_{1}+\dot{r}^{2}_{2}e_{2}+ \dot{r}^{12}_{2}e_{12}\). Furthermore, we take

According to their definitions, we have

and

Then we choose the activation function of NN model (36) as follows: \(h_{1}(r)=h_{2}(r)=\frac{1}{4}(|r+1|-|r-1|)\) for all \(r=r^{0}e_{0}+r^{1}e_{1}+r^{2}e_{2}+r^{12}e_{12}\). It is obvious that this function satisfies (A1) with \(k_{1}=k_{2}=0.25\). Let the time-varying delay be \(\tau (t)=0.4|\cos (t)|+0.2\) with \(\tau =0.6\), and let \(\epsilon =0.6\). By using MATLAB, the LMI conditions (10) and (11) in Theorem 3.9 are verified to be feasible with \(t_{\min }=-0.1027\). The feasible solutions for the positive definite matrices are as follows:

and the positive diagonal matrices , and . Calculating the eigenvalues of , we have 5.0600, 5.0600, 5.0648, 5.0649, 5.1072, 5.1074, 5.1127, and 5.1130. Therefore, we establish that NN model (7) is globally exponentially stable in the Lagrange sense, and the set

is a globally exponentially attractive set of model (7) corresponding to Theorem 3.9, where \(0<\epsilon \leq 0.2491\).

It is straightforward to verify that all conditions of Theorem 3.9 are fulfilled. Under 20 randomly selected initial values, the time responses of the states \(r^{0}_{1}(t)\), \(r^{0}_{2}(t)\), \(r^{1}_{1}(t)\), \(r^{1}_{2}(t)\), \(r^{2}_{1}(t)\), \(r^{2}_{2}(t)\), \(r^{12}_{1}(t)\), and \(r^{12}_{2}(t)\) of model (36) are presented in Figures 1–4.

Figure 1. The time responses of states \(r^{0}_{1}(t)\), \(r^{0}_{2}(t)\) of NN model (36)

Figure 2. The time responses of states \(r^{1}_{1}(t)\), \(r^{1}_{2}(t)\) of NN model (36)

Figure 3. The time responses of states \(r^{2}_{1}(t)\), \(r^{2}_{2}(t)\) of NN model (36)

Figure 4. The time responses of states \(r^{12}_{1}(t)\), \(r^{12}_{2}(t)\) of NN model (36)
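The qualitative behaviour shown in Figures 1–4 can be reproduced with a short simulation. The sketch below integrates a two-neuron Clifford-valued RNN of the form (36) over \(Cl(0,2)\), whose basis \((e_{0}, e_{1}, e_{2}, e_{12})\) multiplies like the quaternions, using the activation and delay of Example 1 and applying the activation to each Clifford coefficient as in the decomposition of Remark 2.1. Since the paper's weight matrices are not reproduced above, the values of \(d_{i}\), \(a_{ij}\), \(b_{ij}\), and \(u_{i}\) below are illustrative placeholders only.

```python
import numpy as np

def cl_mul(p, q):
    """Product in Cl(0,2) with coefficients ordered as (e0, e1, e2, e12)."""
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return np.array([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 + a2*b0 - a1*b3 + a3*b1,
                     a0*b3 + a3*b0 + a1*b2 - a2*b1])

def h(r):
    """Activation of Example 1, applied to each Clifford coefficient."""
    return 0.25 * (np.abs(r + 1.0) - np.abs(r - 1.0))

rng = np.random.default_rng(1)
n = 2
d = np.array([1.2, 1.5])                       # placeholder self-feedback rates d_i
a = rng.uniform(-0.3, 0.3, (n, n, 4))          # placeholder Clifford weights a_ij
b = rng.uniform(-0.3, 0.3, (n, n, 4))          # placeholder delayed weights b_ij
u = rng.uniform(-0.2, 0.2, (n, 4))             # placeholder external inputs u_i
tau = lambda t: 0.4 * abs(np.cos(t)) + 0.2     # delay of Example 1, tau(t) <= 0.6

dt, hist, steps = 0.001, 600, 20000            # hist*dt = 0.6 covers the delay
r = np.zeros((hist + steps + 1, n, 4))
r[: hist + 1] = rng.uniform(-1.0, 1.0, (n, 4)) # constant initial function on [-tau, 0]

for k in range(hist, hist + steps):
    t = (k - hist) * dt
    rd = r[k - min(int(round(tau(t) / dt)), hist)]      # delayed state
    for i in range(n):
        rhs = -d[i] * r[k, i] + u[i]
        for j in range(n):
            rhs = rhs + cl_mul(a[i, j], h(r[k, j])) + cl_mul(b[i, j], h(rd[j]))
        r[k + 1, i] = r[k, i] + dt * rhs

print("final Clifford coefficients:\n", np.round(r[-1], 4))   # trajectories remain bounded
```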

Example 2

Consider the Clifford-valued NN model (36) with the following parameters:

We choose the activation function of NN model (36) as follows: \(h_{1}(r)=h_{2}(r)=\frac{1}{2}(|r+1|-|r-1|)\) for all \(r=r^{0}e_{0}+r^{1}e_{1}+r^{2}e_{2}+r^{12}e_{12}\). It is obvious that the function satisfies (A1) with \(k_{1}=k_{2}=0.5\). The time-varying delay is considered as \(\tau (t)= 0.2|\cos (t)|+0.03\) with \(\tau =0.23\). Besides, it is easy to obtain \(d_{1}=2.2\), \(d_{2}=2.4\), \(a^{A.\bar{B}}_{11}= 0.6\), \(a^{A.\bar{B}}_{12}= 0.9\), \(a^{A.\bar{B}}_{21}= 0.7\), \(a^{A.\bar{B}}_{22}= 0.9\), \(b^{A.\bar{B}}_{11}= 0.8\), \(b^{A.\bar{B}}_{12}=0.8\), \(b^{A.\bar{B}}_{21}= 0.7\), \(b^{A.\bar{B}}_{22}= 0.5\), \(u_{1}= 0.4\), \(u_{2}= 0.3\).

Select \(\mu _{1}=\mu _{2}=0.6\) to satisfy \(0<\mu _{i}<d_{i}\) (\(i=1,2\)). To obtain the globally exponentially attractive set, we first calculate , \(i=1,2\). Furthermore, we calculate and . Then we obtain the following set from Corollary 3.11: \(\tilde{\Omega }_{1}= \{r^{A} |\sum_{A} \frac{(r^{A}_{1}(t))^{2}+(r^{A}_{2}(t))^{2}}{2}\leq 0.8714 \}\).

5 Conclusions

In this paper, the global exponential stability problem in the Lagrange sense for Clifford-valued RNN models with time-varying delays has been examined with the use of Lyapunov functionals and LMI methods. Sufficient conditions ensuring the global exponential stability of the considered NN model in the Lagrange sense have been obtained by decomposing the n-dimensional Clifford-valued RNN model into a \(2^{m}n\)-dimensional real-valued RNN model. Furthermore, an estimate of the globally exponentially attractive set has been obtained. In addition, the validity and feasibility of the results have been demonstrated with two numerical examples. The results obtained in this paper can be further extended to other complex systems. We will explore the stabilizability and instabilizability analysis of Clifford-valued NN models with the help of various control schemes; the corresponding results will be reported in the near future.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

References

  1. Cao, J., Wang, J.: Global asymptotic stability of a general class of recurrent neural networks with time-varying delays. IEEE Trans. Circuits Syst. I 50, 34–44 (2003)


  2. Cao, J., Wang, J.: Global asymptotic and robust stability of recurrent neural networks with time delays. IEEE Trans. Circuits Syst. I 52, 417–426 (2005)


  3. Cao, J., Yuan, K., Li, H.: Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays. IEEE Trans. Neural Netw. 17, 1646–1651 (2006)


  4. Yang, B., Hao, M., Cao, J., Zhao, X.: Delay-dependent global exponential stability for neural networks with time-varying delay. Neurocomputing 338, 172–180 (2019)


  5. Zhang, Z., Liu, X., Chen, J., Guo, R., Zhou, S.: Further stability analysis for delayed complex-valued recurrent neural networks. Neurocomputing 251, 81–89 (2017)


  6. Pan, J., Liu, X., Xie, W.: Exponential stability of a class of complex-valued neural networks with time-varying delays. Neurocomputing 164, 293–299 (2015)


  7. Kaviarasan, B., Kwon, O.M., Park, M.J., Sakthivel, R.: Stochastic faulty estimator-based non-fragile tracking controller for multi-agent systems with communication delay. Appl. Math. Comput. 392, 125704 (2021)


  8. Sakthivel, R., Sakthivel, R., Kaviarasan, B., Alzahrani, F.: Leader-following exponential consensus of input saturated stochastic multi-agent systems with Markov jump parameters. Neurocomputing 287, 84–92 (2018)


  9. Hirose, A.: Complex-Valued Neural Networks: Theories and Applications. World Scientific, Singapore (2003)


  10. Nitta, T.: Solving the XOR problem and the detection of symmetry using a single complex-valued neuron. Neural Netw. 16, 1101–1105 (2003)


  11. Isokawa, T., Nishimura, H., Kamiura, N., Matsui, N.: Associative memory in quaternionic Hopfield neural network. Int. J. Neural Syst. 18, 135–145 (2008)


  12. Matsui, N., Isokawa, T., Kusamichi, H., Peper, F., Nishimura, H.: Quaternion neural network with geometrical operators. J. Intell. Fuzzy Syst. 15, 149–164 (2004)


  13. Mandic, D.P., Jahanchahi, C., Took, C.C.: A quaternion gradient operator and its applications. IEEE Signal Process. Lett. 18, 47–50 (2011)


  14. Samidurai, R., Sriraman, R., Zhu, S.: Leakage delay-dependent stability analysis for complex-valued neural networks with discrete and distributed time-varying delays. Neurocomputing 338, 262–273 (2019)


  15. Aouiti, C., Bessifi, M., Li, X.: Finite-time and fixed-time synchronization of complex-valued recurrent neural networks with discontinuous activations and time-varying delays. Circuits Syst. Signal Process. 39, 5406–5428 (2020)


  16. Zhang, Z., Liu, X., Zhou, D., Lin, C., Chen, J., Wang, H.: Finite-time stabilizability and instabilizability for complex-valued memristive neural networks with time delays. IEEE Trans. Syst. Man Cybern. Syst. 48, 2371–2382 (2018)


  17. Li, Y., Meng, X.: Almost automorphic solutions for quaternion-valued Hopfield neural networks with mixed time-varying delays and leakage delays. J. Syst. Sci. Complex. 33, 100–121 (2020)


  18. Tu, Z., Zhao, Y., Ding, N., Feng, Y., Zhang, W.: Stability analysis of quaternion-valued neural networks with both discrete and distributed delays. Appl. Math. Comput. 343, 342–353 (2019)


  19. Shu, H., Song, Q., Liu, Y., Zhao, Z., Alsaadi, F.E.: Global μ-stability of quaternion-valued neural networks with non-differentiable time-varying delays. Neurocomputing 247, 202–212 (2017)


  20. Tan, M., Liu, Y., Xu, D.: Multistability analysis of delayed quaternion-valued neural networks with nonmonotonic piecewise nonlinear activation functions. Appl. Math. Comput. 341, 229–255 (2019)


  21. Jiang, B.X., Liu, Y., Kou, K.I., Wang, Z.: Controllability and observability of linear quaternion-valued systems. Acta Math. Sin. Engl. Ser. 36, 1299–1314 (2020)


  22. Liu, Y., Zheng, Y., Lu, J., Cao, J., Rutkowski, L.: Constrained quaternion-variable convex optimization: a quaternion-valued recurrent neural network approach. IEEE Trans. Neural Netw. Learn. Syst. 31, 1022–1035 (2020)


  23. Xia, Z., Liu, Y., Lu, J., Cao, J., Rutkowski, L.: Penalty method for constrained distributed quaternion-variable optimization. IEEE Trans. Cybern. (2020). https://doi.org/10.1109/TCYB.2020.3031687


  24. Pearson, J.K., Bisset, D.L.: Neural networks in the Clifford domain. In: Proc. IEEE ICNN, Orlando, FL, USA (1994)


  25. Pearson, J.K., Bisset, D.L.: Back Propagation in a Clifford Algebra. ICANN, Brighton (1992)


  26. Buchholz, S., Sommer, G.: On Clifford neurons and Clifford multi-layer perceptrons. Neural Netw. 21, 925–935 (2008)


  27. Kuroe, Y.: Models of Clifford recurrent neural networks and their dynamics. In: IJCNN-2011, San Jose, CA, USA (2011).


  28. Hitzer, E., Nitta, T., Kuroe, Y.: Applications of Clifford’s geometric algebra. Adv. Appl. Clifford Algebras 23, 377–404 (2013)


  29. Buchholz, S.: A theory of neural computation with Clifford algebras. PhD thesis, University of Kiel (2005)

  30. Zhu, J., Sun, J.: Global exponential stability of Clifford-valued recurrent neural networks. Neurocomputing 173, 685–689 (2016)


  31. Liu, Y., Xu, P., Lu, J., Liang, J.: Global stability of Clifford-valued recurrent neural networks with time delays. Nonlinear Dyn. 332, 259–269 (2019)


  32. Shen, S., Li, Y.: \(S^{p}\)-Almost periodic solutions of Clifford-valued fuzzy cellular neural networks with time-varying delays. Neural Process. Lett. 51, 1749–1769 (2020)


  33. Li, Y., Xiang, J., Li, B.: Globally asymptotic almost automorphic synchronization of Clifford-valued recurrent neural networks with delays. IEEE Access 7, 54946–54957 (2019)


  34. Li, B., Li, Y.: Existence and global exponential stability of pseudo almost periodic solution for Clifford-valued neutral high-order Hopfield neural networks with leakage delays. IEEE Access 7, 150213–150225 (2019)


  35. Li, Y., Xiang, J.: Global asymptotic almost periodic synchronization of Clifford-valued CNNs with discrete delays. Complexity 2019, Article ID 6982109 (2019)


  36. Li, B., Li, Y.: Existence and global exponential stability of almost automorphic solution for Clifford-valued high-order Hopfield neural networks with leakage delays. Complexity 2019, Article ID 6751806 (2019)


  37. Aouiti, C., Dridi, F.: Weighted pseudo almost automorphic solutions for neutral type fuzzy cellular neural networks with mixed delays and D operator in Clifford algebra. Int. J. Syst. Sci. 51, 1759–1781 (2020)


  38. Aouiti, C., Gharbia, I.B.: Dynamics behavior for second-order neutral Clifford differential equations: inertial neural networks with mixed delays. Comput. Appl. Math. 39, 120 (2020)


  39. Li, Y., Xiang, J.: Existence and global exponential stability of anti-periodic solution for Clifford-valued inertial Cohen-Grossberg neural networks with delays. Neurocomputing 332, 259–269 (2019)


  40. Rajchakit, G., Sriraman, R., Lim, C.P., Unyong, B.: Existence, uniqueness and global stability of Clifford-valued neutral-type neural networks with time delays. Math. Comput. Simul. (2021). https://doi.org/10.1016/j.matcom.2021.02.023


  41. Liao, X.X., Luo, Q., Zeng, Z.G., Guo, Y.: Global exponential stability in Lagrange sense for recurrent neural networks with time delays. Nonlinear Anal., Real World Appl. 9, 1535–1557 (2008)


  42. Liao, X.X., Luo, Q., Zeng, Z.G.: Positive invariant and global exponential attractive sets of neural networks with time-varying delays. Neurocomputing 71, 513–518 (2008)


  43. Wang, X., Jiang, M., Fang, S.: Stability analysis in Lagrange sense for a non-autonomous Cohen-Grossberg neural network with mixed delays. Nonlinear Anal. 70, 4294–4306 (2009)


  44. Luo, Q., Zeng, Z., Liao, X.: Global exponential stability in Lagrange sense for neutral type recurrent neural networks. Neurocomputing 74, 638–645 (2011)


  45. Tu, Z., Wang, L.: Global Lagrange stability for neutral type neural networks with mixed time-varying delays. Int. J. Mach. Learn. Cybern. 9, 599–609 (2018)


  46. Wang, B., Jian, J., Jiang, M.: Stability in Lagrange sense for Cohen-Grossberg neural networks with time-varying delays and finite distributed delays. Nonlinear Anal. Hybrid Syst. 4, 65–78 (2010)


  47. Tu, Z., Jian, J., Wang, K.: Global exponential stability in Lagrange sense for recurrent neural networks with both time-varying delays and general activation functions via LMI approach. Nonlinear Anal., Real World Appl. 12, 2174–2182 (2011)


  48. Song, Q., Shu, H., Zhao, Z., Liu, Y., Alsaadi, F.E.: Lagrange stability analysis for complex-valued neural networks with leakage delay and mixed time-varying delays. Neurocomputing 244, 33–41 (2017)



Acknowledgements

This research is made possible through financial support from the Rajamangala University of Technology Suvarnabhumi, Thailand. The authors are grateful to the Rajamangala University of Technology Suvarnabhumi, Thailand for supporting this research.

Funding

The research is supported by the Rajamangala University of Technology Suvarnabhumi, Thailand.

Author information


Contributions

Funding acquisition, NB; Conceptualization, GR; Software, GR and NB; Formal analysis, GR and NB; Methodology, GR and NB; Supervision, GR, PA, RS, PH, and CPL; Writing–original draft, GR; Validation, GR and NB; Writing–review and editing, GR and NB. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to N. Boonsatit.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Rajchakit, G., Sriraman, R., Boonsatit, N. et al. Exponential stability in the Lagrange sense for Clifford-valued recurrent neural networks with time delays. Adv Differ Equ 2021, 256 (2021). https://doi.org/10.1186/s13662-021-03415-8
