

Periodic solutions for complex-valued neural networks of neutral type by combining graph theory with coincidence degree theory

Abstract

In this paper, by combining graph theory with coincidence degree theory and the Lyapunov functional method, we establish sufficient conditions guaranteeing the existence and global exponential stability of periodic solutions of complex-valued neural networks of neutral type. In our results, the boundedness assumption on the activation functions made in (Gao and Du in Discrete Dyn. Nat. Soc. 2016:Article ID 1267954, 2016) is removed, and the other inequality conditions in (Gao and Du in Discrete Dyn. Nat. Soc. 2016:Article ID 1267954, 2016) are replaced with new inequalities.

1 Introduction

Complex signals occur in many practical applications, and complex-valued neural networks are therefore preferable and practical. Hence, up to now, there has been increasing research interest in the stability of equilibrium points and periodic solutions of complex-valued neural networks; see, for example, [2–17] and the references therein.

On the other hand, time delays have been studied extensively in recent decades due to their presence in many fields [12, 13, 18–20]. Up to now, the dynamical behaviors of neural networks of neutral type have been investigated extensively, and many interesting results on the global asymptotic stability and global exponential stability of equilibrium points and periodic solutions for neural networks of neutral type have been published; see, for example, [18, 21–28] and the references therein.

But so far, very few studies have been reported on the dynamical behaviors of complex-valued neural networks of neutral type with time delays [1, 29]. This motivates us to carry out a study on dynamical behaviors of complex-valued neural networks of neutral type. Recently, in [1], the authors discussed the existence and exponential stability of periodic solutions for the following delayed complex-valued neural networks of neutral type:

$$ \frac{d[K_{n}z_{n}(t)]}{dt}=-d_{n}(t)z_{n}(t)+ \sum_{j=1}^{l}b_{nj}(t)F_{j} \bigl(z_{j}(t)\bigr) +\sum_{j=1}^{l}e_{nj}(t)G_{j} \bigl(z_{j}\bigl(t-\tau_{nj}(t)\bigr)\bigr)+P_{n}(t), $$
(1)

where \(n\in\mathbf{L}=\{1, 2,\ldots, l\}\), l is a positive integer,

$$\begin{gathered} K_{n}z_{n}(t)=K_{n}u_{n}(t)+iK_{n}v_{n}(t), \\ K_{n}u_{n}(t)=u_{n}(t)-c_{n}u_{n}(t- \tau), \\ K_{n}v_{n}(t)=v_{n}(t)-c_{n}v_{n}(t- \tau),\end{gathered} $$

\(\tau, c_{n}\in R\) with \(|c_{n}|\neq1\), \(d_{n}(t)\geq0\) is the self-feedback connection weight, \(b_{nj}(t)\) and \(e_{nj}(t)\) are complex-valued connection weights, and \(F_{j}(z_{j}), G_{j}(z_{j}): C\rightarrow C\) are the activation functions of the neurons. \(P_{n}(t)\in C\) is the external input, and \(\tau_{nj}(t)\geq0\) are the transmission delays with \(\tau'_{nj}(t)\leq \sigma<1\) and \(\tau_{nj}(t)\leq\sigma^{\ast}\).

In [1], first, by using coincidence degree theory and the a priori estimate method for periodic solutions, under the assumption that the activation functions are bounded, a sufficient condition for the existence of periodic solutions of system (1) was established. Then, by constructing an appropriate Lyapunov functional, a sufficient condition guaranteeing the global exponential stability of periodic solutions of system (1) was obtained.

In recent years, graph theory has been applied to studying global asymptotic stability of discrete-time Cohen–Grossberg neural networks with finite and infinite delays [30] and the existence and global stability of periodic solutions for coupled networks [31–35]. Some sufficient conditions on the existence and global stability of equilibrium point and periodic solutions for some neural networks and coupled networks have been established [30–36].

Recently, without applying the a priori estimate method of periodic solutions, we have established some criteria to guarantee the existence of periodic solutions for neural networks with time delays by combining coincidence degree theory with Lyapunov functional method or linear matrix inequality method [14, 15, 37].

However, so far, results on the existence and global exponential stability of periodic solutions for delayed complex-valued neural networks of neutral type obtained by combining coincidence degree theory with graph theory and the Lyapunov functional method have not been reported.

The objective of this paper is to establish, by combining coincidence degree theory with graph theory and the Lyapunov functional method, new criteria guaranteeing the existence and global exponential stability of periodic solutions of system (1), removing the boundedness assumption on the activation functions in [1] and replacing the inequality conditions in [1] with new inequality conditions.

Thus the contribution of our paper lies in the following two aspects: (1) Combination of coincidence degree theory with graph theory as well as Lyapunov functional is introduced to study the existence and exponential stability of periodic solutions for delayed complex-valued neural networks of neutral type; (2) Novel sufficient conditions to guarantee the existence and global exponential stability of periodic solutions for system (1) are derived by removing the limitation on the boundedness for the activation functions in [1] and replacing inequality conditions in [1] with new inequality conditions.

The paper is organized as follows. Some preliminaries and lemmas are introduced in Sect. 2. In Sect. 3, a sufficient condition is derived to guarantee the existence of periodic solutions of system (1). In Sect. 4, a sufficient condition is established to guarantee the exponential stability of periodic solutions of system (1). In Sect. 5, an example is given to demonstrate the effectiveness of our theoretical results.

2 Preliminaries

Let \(z_{n}(t)=u_{n}(t)+iv_{n}(t)\), \(F_{j}(z_{j}(t))=F_{j}^{R}(u_{j}(t), v_{j}(t))+iF_{j}^{I}(u_{j}(t), v_{j}(t))\), \(G_{j}(z_{j}(t-\tau_{nj}(t)))=G_{j}^{R}(u_{j}(t-\tau_{nj}(t)), v_{j}(t-\tau_{nj}(t)))+iG_{j}^{I}(u_{j}(t-\tau_{nj}(t)), v_{j}(t-\tau_{nj}(t)))=G_{j}^{R}(u_{j}^{t}, v^{t}_{j})+iG_{j}^{I}(u_{j}^{t}, v_{j}^{t})\), \(u_{j}(t-\tau_{nj}(t))=u_{j}^{t}\), \(v_{j}(t-\tau_{nj}(t))=v_{j}^{t}\), \(b_{nj}(t)=b_{nj}^{R}(t)+ib^{I}_{nj}(t)\), \(e_{nj}(t)=e^{R}_{nj}(t)+ie^{I}_{nj}(t)\), \(P_{n}(t)=P_{n}^{R}(t)+iP_{n}^{I}(t)\). After separating each state variable, connection weight, activation function, and external input into its real and imaginary parts, system (1) can be rewritten as follows:

$$ \begin{gathered} \begin{aligned} \frac{d[K_{n}u_{n}(t)]}{dt}={}&{-}d_{n}(t)u_{n}(t)+\sum _{j=1}^{l}b^{R}_{nj}F_{j}^{R} \bigl(u_{j}(t), v_{j}(t)\bigr)-\sum _{j=1}^{l}b^{I}_{nj}F_{j}^{I} \bigl(u_{j}(t), v_{j}(t)\bigr) \\ &+\sum_{j=1}^{l}e^{R}_{nj}(t)G_{j}^{R} \bigl(u_{j}^{t}, v_{j}^{t}\bigr)-\sum _{j=1}^{l}e_{nj}^{I}(t)G_{j}^{I} \bigl(u_{j}^{t}, v_{j}^{t} \bigr)+P_{n}^{R}(t), \end{aligned} \\ \begin{aligned}\frac{d[K_{n}v_{n}(t)]}{dt}={}&{-}d_{n}(t)v_{n}(t)+ \sum_{j=1}^{l}b^{R}_{nj}F_{j}^{I} \bigl(u_{j}(t), v_{j}(t)\bigr)+\sum _{j=1}^{l}b^{I}_{nj}F_{j}^{R} \bigl(u_{j}(t), v_{j}(t)\bigr) \\ &+\sum_{j=1}^{l} e^{R}_{nj}(t)G_{j}^{I} \bigl(u_{j}^{t}, v_{j}^{t}\bigr)+\sum _{j=1}^{l} e_{nj}^{I}(t)G_{j}^{R} \bigl(u_{j}^{t}, v_{j}^{t} \bigr)+P_{n}^{I}(t). \end{aligned} \end{gathered} $$
(2)

The initial values of system (2) are

$$u_{n}(s)=\phi_{n}(s),\qquad v_{n}(s)= \psi_{n}(s),\quad s\in[-\sigma, 0], \sigma=\max \Bigl\{ \tau, \max _{t\in[0, \omega], 1\leq j\leq l}\bigl\{ \tau_{nj}(t)\bigr\} \Bigr\} . $$
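For readers who wish to experiment with system (2) numerically, the following minimal sketch integrates the real–imaginary form with a fixed-step Euler scheme and a history buffer for the delayed and neutral terms. It is only an illustration: the network size, the constant delays, all coefficient functions, and the constant initial history below are hypothetical placeholders (not the parameters of the example in Sect. 5), and the step size is assumed to divide the delays exactly.

```python
import numpy as np

# Hypothetical illustration of system (2) with l = 2 neurons and constant delays.
l, h, T = 2, 0.01, 20.0          # network size, step size, time horizon
tau, tau_nj = 0.5, 0.3           # neutral delay tau and transmission delay tau_{nj} (constants here)
c = np.array([0.2, -0.3])        # neutral coefficients c_n, |c_n| < 1

d  = lambda t: np.array([3.0, 2.5])                      # self-feedback weights d_n(t)
bR = lambda t: 0.2 * np.cos(t) * np.ones((l, l))         # Re b_{nj}(t)
bI = lambda t: 0.1 * np.sin(t) * np.ones((l, l))         # Im b_{nj}(t)
eR = lambda t: 0.15 * np.ones((l, l))                    # Re e_{nj}(t)
eI = lambda t: 0.05 * np.ones((l, l))                    # Im e_{nj}(t)
PR = lambda t: np.array([np.sin(t), np.cos(t)])          # Re P_n(t)
PI = lambda t: np.array([np.cos(t), np.sin(t)])          # Im P_n(t)
FR = FI = GR = GI = lambda u, v: np.tanh(u) + 0.1 * v    # globally Lipschitz (and unbounded) activations, as allowed by (h1)

steps = int(T / h)
m_tau, m_d = int(tau / h), int(tau_nj / h)               # delay lengths in steps
u = np.zeros((steps + 1, l)); v = np.zeros((steps + 1, l))
u[0] = [0.1, -0.2]; v[0] = [0.3, 0.05]                   # constant initial history (stands in for phi_n, psi_n)

def past(x, k, m):
    return x[max(k - m, 0)]                              # clamp into the constant initial history

for k in range(steps):
    t = k * h
    ud, vd = past(u, k, m_d), past(v, k, m_d)            # delayed states u_j^t, v_j^t
    fu = -d(t) * u[k] + bR(t) @ FR(u[k], v[k]) - bI(t) @ FI(u[k], v[k]) \
         + eR(t) @ GR(ud, vd) - eI(t) @ GI(ud, vd) + PR(t)
    fv = -d(t) * v[k] + bR(t) @ FI(u[k], v[k]) + bI(t) @ FR(u[k], v[k]) \
         + eR(t) @ GI(ud, vd) + eI(t) @ GR(ud, vd) + PI(t)
    # Euler step for K_n u_n(t) = u_n(t) - c_n u_n(t - tau), then recover u_n(t + h)
    Ku_next = (u[k] - c * past(u, k, m_tau)) + h * fu
    Kv_next = (v[k] - c * past(v, k, m_tau)) + h * fv
    u[k + 1] = Ku_next + c * past(u, k + 1, m_tau)
    v[k + 1] = Kv_next + c * past(v, k + 1, m_tau)

print(u[-1], v[-1])
```

The only design point worth noting is the last two lines of the loop: the Euler update is applied to the neutral variable \(K_{n}u_{n}\), and the state \(u_{n}(t+h)\) is then recovered by adding back \(c_{n}u_{n}(t+h-\tau)\), which is already stored in the history buffer because \(h<\tau\).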

Let \(|\cdot|\) be the Euclidean norm on R and \(\mathbf{L}=\{1, 2,\ldots, l\}\). We introduce the following notation:

  1. (1)

    \(\underline{F}=\min_{t\in[0, \omega]}\{|F(t)|\}\), \(\overline{F}=\max_{t\in[0, \omega]}\{|F(t)|\}\), where \(F(t)\) is a continuous ω-periodic function with \(\omega>0\);

  2. (2)
    $$\begin{aligned}& \begin{aligned}A_{nj\delta} ={}&\overline{b_{nj}^{R}} \bigl[l_{j}^{R}+l_{j}^{I}+ \bigl\vert F_{j}^{R}(0,0) \bigr\vert \delta^{2}\bigr]+ \overline{b^{I}_{nj}}\bigl[k_{j}^{R}+k_{j}^{I}+ \bigl\vert F_{j}^{I}(0,0) \bigr\vert \delta^{2}\bigr] \\ & +\overline{e_{nj}^{R}} \bigl[q_{j}^{R}+q_{j}^{I}+ \bigl\vert G_{j}^{R}(0,0) \bigr\vert \delta^{2}\bigr]+\overline{e^{I}_{nj}} \bigl[p_{j}^{R}+p_{j}^{I}+ \bigl\vert G_{j}^{I}(0,0) \bigr\vert \delta^{2}\bigr]+ \delta^{2}\overline{P^{R}_{n}}, \end{aligned} \\& A_{nj} =\overline{b_{nj}^{R}}\bigl(l_{j}^{R}+l_{j}^{I} \bigr)+\overline{b^{I}_{nj}}\bigl(k_{j}^{R}+k_{j}^{I} \bigr) +\overline{e_{nj}^{R}}\bigl(q_{j}^{R}+q_{j}^{I} \bigr)+\overline{e^{I}_{nj}}\bigl(p_{j}^{R}+p_{j}^{I} \bigr), \\& \begin{aligned}B_{nj\delta} ={}&\overline{b_{nj}^{R}} \bigl(l_{j}^{R}+l_{j}^{I}+ \delta^{2} \bigl\vert F_{j}^{R}(0,0) \bigr\vert \bigr)+\overline{b^{I}_{nj}}\bigl(k_{j}^{R}+k_{j}^{I}+ \delta^{2} \bigl\vert F_{j}^{I}(0,0) \bigr\vert \bigr) \\ &+\overline{e_{nj}^{R}}\bigl(q_{j}^{R}+q_{j}^{I}+\delta^{2} \bigl\vert G_{j}^{R}(0,0) \bigr\vert \bigr)+\overline{e^{I}_{nj}}\bigl(p_{j}^{R}+p_{j}^{I}+ \delta^{2} \bigl\vert G_{j}^{I}(0,0) \bigr\vert \bigr), \end{aligned} \\& B_{nj}=\overline{b_{nj}^{R}}\bigl(l_{j}^{R}+l_{j}^{I} \bigr)+\overline{b^{I}_{nj}}\bigl(k_{j}^{R}+k_{j}^{I} \bigr) +\overline{e_{nj}^{R}}\bigl(q_{j}^{R}+q_{j}^{I} \bigr)+\overline{e^{I}_{nj}}\bigl(p_{j}^{R}+p_{j}^{I} \bigr), \\& \begin{aligned}A^{\ast}_{nj\delta} ={}& \overline{b_{nj}^{R}}\bigl[k_{j}^{R}+k_{j}^{I}+ \bigl\vert F_{j}^{I}(0,0) \bigr\vert \delta^{2}\bigr]+\overline{b^{I}_{nj}} \bigl[l_{j}^{R}+l_{j}^{I}+ \bigl\vert F_{j}^{R}(0,0) \bigr\vert \delta^{2}\bigr] \\ & + \overline{e_{nj}^{R}}\bigl[p_{j}^{R}+p_{j}^{I}+ \bigl\vert G_{j}^{I}(0,0) \bigr\vert \delta^{2}\bigr]+\overline{e^{I}_{nj}} \bigl[q_{j}^{R}+q_{j}^{I}+ \bigl\vert G_{j}^{R}(0,0) \bigr\vert \delta^{2}\bigr]+ \delta^{2}\overline{P^{I}_{n}}, \end{aligned} \\& A^{\ast}_{nj}=\overline{b_{nj}^{R}} \bigl(k_{j}^{R}+k_{j}^{I}\bigr)+ \overline{b^{I}_{nj}}\bigl(l_{j}^{R}+l_{j}^{I} \bigr) +\overline{e_{nj}^{R}}\bigl(p_{j}^{R}+p_{j}^{I} \bigr)+\overline{e^{I}_{nj}}\bigl(q_{j}^{R}+q_{j}^{I} \bigr), \\& \begin{aligned}B^{\ast}_{nj\delta}={}&\overline{b_{nj}^{R}}\bigl(k_{j}^{R}+k_{j}^{I}+ \delta^{2} \bigl\vert F_{j}^{I}(0,0) \bigr\vert \bigr)+\overline{b^{I}_{nj}}\bigl(l_{j}^{R}+l_{j}^{I}+ \delta^{2} \bigl\vert F_{j}^{R}(0,0) \bigr\vert \bigr) \\ &+\overline{e_{nj}^{R}}\bigl(p_{j}^{R}+p_{j}^{I}+\delta^{2} \bigl\vert G_{j}^{I}(0,0) \bigr\vert \bigr)+\overline{e^{I}_{nj}} \bigl(q_{j}^{R}+q_{j}^{I}+\delta ^{2} \bigl\vert G_{j}^{R}(0,0) \bigr\vert \bigr), \end{aligned} \\& B^{\ast}_{nj}=\overline{b_{nj}^{R}} \bigl(k_{j}^{R}+k_{j}^{I}\bigr)+ \overline{b^{I}_{nj}}\bigl(l_{j}^{R}+l_{j}^{I} \bigr) +\overline{e_{nj}^{R}}\bigl(p_{j}^{R}+p_{j}^{I} \bigr)+\overline{e^{I}_{nj}}\bigl(q_{j}^{R}+q_{j}^{I} \bigr), \\& E_{nj}=\overline{e_{nj}^{R}}q_{j}^{R}+ \overline{e^{I}_{nj}}p_{j}^{R}+\overline{e_{nj}^{R}}p_{j}^{I}+ \overline{e_{nj}^{I}}q_{j}^{I},\qquad F_{nj}=\overline{e_{nj}^{R}}q_{j}^{I}+ \overline{e^{I}_{nj}}p_{j}^{I}+\overline{e_{nj}^{R}}p_{j}^{R}+ \overline{e_{nj}^{I}}q_{j}^{R}, \\& U_{nj}=\overline{b_{nj}^{R}}\bigl(l_{j}^{R}+k_{j}^{I} \bigr)+\overline{b_{nj}^{I}}\bigl(k_{j}^{R}+l_{j}^{I} \bigr),\qquad V_{nj}=\overline{b_{nj}^{R}} \bigl(l_{j}^{I}+k_{j}^{R}\bigr)+ \overline{b_{nj}^{I}}\bigl(k_{j}^{I} +l_{j}^{R}\bigr). \end{aligned}$$

Throughout this paper, we always assume that

\((h_{1})\) :

There exist positive constants \(l_{n}^{R}\), \(l_{n}^{I}\), \(k_{n}^{R}\), \(k_{n}^{I}\), \(q_{n}^{R}\), \(q_{n}^{I}\), \(p_{n}^{R}\), \(p_{n}^{I}\) such that, for all \((x_{1}, y_{1}), (x_{2}, y_{2})\in R\times R\) and \(n\in\mathbf{L}\),

$$\begin{gathered} \bigl\vert F_{n}^{R}(x_{1}, y_{1})-F_{n}^{R}(x_{2}, y_{2}) \bigr\vert \leq l_{n}^{R} \vert x_{1}-x_{2} \vert +l_{n}^{I} \vert y_{1}-y_{2} \vert , \\ \bigl\vert F_{n}^{I}(x_{1}, y_{1})-F_{n}^{I}(x_{2}, y_{2}) \bigr\vert \leq k_{n}^{R} \vert x_{1}-x_{2} \vert +k_{n}^{I} \vert y_{1}-y_{2} \vert , \\ \bigl\vert G_{n}^{R}(x_{1}, y_{1})-G_{n}^{R}(x_{2}, y_{2}) \bigr\vert \leq q_{n}^{R} \vert x_{1}-x_{2} \vert +q_{n}^{I} \vert y_{1}-y_{2} \vert , \\ \bigl\vert G_{n}^{I}(x_{1},y_{1})-G_{n}^{I}(x_{2}, y_{2}) \bigr\vert \leq p_{n}^{R} \vert x_{1}-x_{2} \vert +p_{n}^{I} \vert y_{1}-y_{2} \vert .\end{gathered} $$
\((h_{2})\) :

\(d_{n}(t)\), \(b_{nj}^{R}(t)\), \(b^{I}_{nj}(t)\), \(e^{R}_{nj}(t)\), \(e^{I}_{nj}(t)\), \(P_{n}^{R}(t)\), \(P_{n}^{I}(t)\) (\(n\in\mathbf{L}\)) are all continuous ω-periodic functions.

\((h_{3})\) :

\((1+|c_{n}|)(U_{nj}+\frac{E_{nj}}{1-\sigma})<\frac{2\underline {d_{j}}}{l}-\frac{|c_{j}|}{l}\overline{d_{j}}-A_{jn}-|c_{j}|B_{jn}\).

\((h_{4})\) :

\((1+|c_{n}|)(V_{nj}+\frac{F_{nj}}{1-\sigma})<\frac {2\underline{d_{j}}}{l}-\frac{|c_{j}|}{l} \overline{d_{j}}-A^{\ast}_{jn}-|c_{j}|B^{\ast}_{jn}\).
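Conditions \((h_{3})\) and \((h_{4})\) are finitely many scalar inequalities, so for concrete coefficient bounds they can be verified mechanically. The sketch below does this elementwise; every number in it is a hypothetical placeholder (not taken from the example in Sect. 5), and the helper names simply mirror the notation introduced above.

```python
import numpy as np

# Hypothetical bounds for a network with l = 2 neurons (placeholders only).
l = 2
lR = lI = kR = kI = qR = qI = pR = pI = np.array([0.5, 0.4])   # Lipschitz constants from (h1)
bR = bI = eR = eI = 0.1 * np.ones((l, l))                      # upper bounds of |b_{nj}^{R,I}|, |e_{nj}^{R,I}|
d_lo, d_hi = np.array([4.0, 4.5]), np.array([5.0, 5.5])        # underline{d_n} and overline{d_n}
c = np.array([0.2, 0.1]); sigma = 0.5

# Quantities of Section 2 (delta-free versions), built by broadcasting over the index j.
A     = bR * (lR + lI) + bI * (kR + kI) + eR * (qR + qI) + eI * (pR + pI)
Astar = bR * (kR + kI) + bI * (lR + lI) + eR * (pR + pI) + eI * (qR + qI)
B, Bstar = A.copy(), Astar.copy()                              # B_{nj} = A_{nj}, B*_{nj} = A*_{nj}
E = eR * qR + eI * pR + eR * pI + eI * qI
F = eR * qI + eI * pI + eR * pR + eI * qR
U = bR * (lR + kI) + bI * (kR + lI)
V = bR * (lI + kR) + bI * (kI + lR)

def holds(XUV, YEF, A_, B_):
    """Check (1+|c_n|)(X_{nj} + Y_{nj}/(1-sigma)) < 2 d_j/l - |c_j| d_j_bar/l - A_{jn} - |c_j| B_{jn} for all n, j."""
    ok = True
    for n in range(l):
        for j in range(l):
            lhs = (1 + abs(c[n])) * (XUV[n, j] + YEF[n, j] / (1 - sigma))
            rhs = 2 * d_lo[j] / l - abs(c[j]) * d_hi[j] / l - A_[j, n] - abs(c[j]) * B_[j, n]
            ok &= lhs < rhs
    return bool(ok)

print("(h3) holds:", holds(U, E, A, B))
print("(h4) holds:", holds(V, F, Astar, Bstar))
```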

Lemma 2.1

(Gaines and Mawhin [38])

Suppose that \(X^{\ast}\) and \(Z^{\ast}\) are two Banach spaces and \(L^{\ast}: D(L^{\ast})\subset X^{\ast}\rightarrow Z^{\ast}\) is a Fredholm operator with index zero. Moreover, \(\Omega\subset X^{\ast}\) is an open bounded set and \(N^{\ast}:\overline{\Omega}\rightarrow Z^{\ast}\) is \(L^{\ast}\)-compact on Ω. If the following conditions hold:

  1. (a)

    \(L^{\ast}u\neq\lambda N^{\ast}u\), \(\forall u\in\partial\Omega \cap D(L^{\ast})\), \(\forall\lambda\in(0, 1)\),

  2. (b)

    \(QN^{\ast}u\neq0\), \(\forall u\in\partial\Omega\cap \operatorname{Ker}L^{\ast}\),

  3. (c)

    \(\operatorname{deg}_{B}(J^{\ast}QN^{\ast}, \Omega\cap \operatorname{Ker}L^{\ast}, 0)\neq0\),

where \(J^{\ast}: \operatorname{Im} Q\rightarrow\operatorname{Ker} L^{\ast}\) is an isomorphism, then the equation \(L^{\ast}u=N^{\ast}u \) has a solution on \(\overline{\Omega}\cap D(L^{\ast})\).

Definition 2.1

(Graph theory [39])

A directed graph \(g=(U, K)\) contains a set \(U=\{1, 2,\ldots, l\}\) of vertices and a set K of arcs \((i, j)\) leading from initial vertex i to terminal vertex j. A subgraph Γ of g is said to be spanning if Γ and g have the same vertex set. A subgraph Γ is unicyclic if it is a disjoint union of rooted trees whose roots form a directed cycle. For a weighted digraph g with l vertices, we define the weight matrix \(B=(b^{\ast}_{ij})_{l\times l}\) whose entry \(b^{\ast}_{ij}>0 \) is equal to the weight of arc \((j, i)\) if it exists, and 0 otherwise. A digraph g is strongly connected if, for any pair of distinct vertices, there exists a directed path from one to the other. The Laplacian matrix of \((g, B) \) is defined as \(L=(q_{ij})_{l\times l}\), where \(q_{ij}=-b^{\ast}_{ij}\) for \(i\neq j \) and \(q_{ii}=\sum_{k\neq i}b^{\ast}_{ik}\).
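The objects from Definition 2.1 that are used later are the weight matrix, its Laplacian, and the cofactors \(c^{\ast}_{k}\) of the diagonal entries (see Lemma 2.2). A small sketch, with an arbitrary strongly connected example digraph on three vertices:

```python
import numpy as np

def laplacian(Bw):
    """Laplacian L = (q_ij): q_ij = -b*_ij for i != j and q_ii = sum_{k != i} b*_ik."""
    L = -Bw.copy()
    np.fill_diagonal(L, 0.0)
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

def diagonal_cofactors(L):
    """c*_k = cofactor of the k-th diagonal element: determinant of L with row k and column k removed."""
    n = L.shape[0]
    return np.array([np.linalg.det(np.delete(np.delete(L, k, 0), k, 1)) for k in range(n)])

# Hypothetical weight matrix of a strongly connected digraph (b*_ij is the weight of arc (j, i)).
Bw = np.array([[0.0, 1.0, 0.5],
               [2.0, 0.0, 0.0],
               [0.0, 1.5, 0.0]])
c_star = diagonal_cofactors(laplacian(Bw))
print(c_star, "all positive:", bool(np.all(c_star > 0)))   # positivity reflects strong connectedness
```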

Lemma 2.2

([39])

Suppose that \(l\geq2\) and \(c^{\ast}_{k} \) denotes the cofactor of the kth diagonal element of the Laplacian matrix of \((g, B)\). Then

$$\sum_{k, h=1}^{l}c^{\ast}_{k}b^{\ast}_{kh}G_{kh}(u_{k}, u_{h})=\sum_{Q\in \Omega}W(Q)\sum_{(k, h)\in K(C_{Q})}G_{hk} (u_{h}, u_{k}),$$

where \(G_{kh}(u_{k}, u_{h}) \) is an arbitrary function, Ω is the set of all spanning unicyclic subgraphs of \((g, B)\), \(W(Q)\) is the weight of \(Q\in\Omega\), \(C_{Q} \) denotes the directed cycle of Q, and \(K(C_{Q})\) is the set of arcs in \(C_{Q}\). In particular, if \((g, B)\) is strongly connected, then \(c^{\ast}_{k}>0 \) for \(1\leq k\leq l\).
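Lemma 2.2 is applied later only in the special case \(G_{kh}(u_{k}, u_{h})=p_{h}(u_{h})-p_{k}(u_{k})\), where the sum along every directed cycle telescopes to zero, so the whole weighted double sum vanishes; this is exactly what yields (15) and (16) below. That consequence is easy to confirm numerically. The sketch below reuses the two helpers from the previous sketch; the weights, the sample points, and the function `p` are arbitrary.

```python
import numpy as np

# Helpers as in the previous sketch.
def laplacian(Bw):
    L = -Bw.copy(); np.fill_diagonal(L, 0.0); np.fill_diagonal(L, -L.sum(axis=1)); return L

def diagonal_cofactors(L):
    n = L.shape[0]
    return np.array([np.linalg.det(np.delete(np.delete(L, k, 0), k, 1)) for k in range(n)])

rng = np.random.default_rng(0)
n = 4
Bw = rng.uniform(0.1, 1.0, size=(n, n)); np.fill_diagonal(Bw, 0.0)   # arbitrary positive arc weights
c_star = diagonal_cofactors(laplacian(Bw))
x = rng.normal(size=n)
p = lambda s: 2.0 * s ** 2 - 1.0                                     # arbitrary "vertex" function

total = sum(c_star[k] * Bw[k, h] * (p(x[h]) - p(x[k]))
            for k in range(n) for h in range(n))
print(abs(total) < 1e-9)   # the weighted sum of telescoping differences vanishes
```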

Remark 1

If \((u_{1}(t), u_{2}(t),\ldots, u_{l}(t), v_{1}(t), v_{2}(t),\ldots, v_{l}(t) )^{T}\) is an ω-periodic solution of system (2), then \((z_{1}(t), z_{2}(t),\ldots, z_{l}(t))^{T}\), where \(z_{n}(t)=u_{n}(t)+iv_{n}(t)\), \(n=1, 2,\ldots, l \), must be an ω-periodic solution of system (1). Thus, in order to show the existence of periodic solutions of system (1), we only need to show the existence of periodic solutions of system (2). Likewise, to prove the global exponential stability of periodic solutions of system (1), we only need to prove the global exponential stability of periodic solutions of system (2).

Lemma 2.3

(Lemma 2.1 [40])

If \(|c_{n}|<1\), \(n=1, 2,\ldots, l\), then the inverse of the difference operator \(K_{n}\), denoted by \(K_{n}^{-1}\), exists and

$$\bigl\Vert K_{n}^{-1} \bigr\Vert \leq\frac{1}{1- \vert c_{n} \vert }. $$
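Lemma 2.3 can be made concrete on periodic signals: for \(|c_{n}|<1\), the operator \((K_{n}u)(t)=u(t)-c_{n}u(t-\tau)\) is inverted by the Neumann series \(u(t)=\sum_{k\geq0}c_{n}^{k}w(t-k\tau)\), which gives \(\|u\|\leq\|w\|/(1-|c_{n}|)\). The following numerical illustration works on a sampled ω-periodic signal; the signal, the delay, and the value of \(c_{n}\) are arbitrary, and the delay is assumed to be a whole number of samples.

```python
import numpy as np

omega, m = 2 * np.pi, 400                      # period and number of samples
t = np.linspace(0.0, omega, m, endpoint=False)
c, shift = 0.4, 37                             # |c| < 1; delay tau = shift * (omega / m)

w = np.sin(t) + 0.3 * np.cos(3 * t)            # an arbitrary omega-periodic right-hand side

# Invert K u = u - c * u(. - tau) via the Neumann series u = sum_k c^k w(. - k*tau).
u = np.zeros_like(w)
for k in range(200):                           # truncate once c**k is negligible
    u += (c ** k) * np.roll(w, k * shift)      # np.roll realizes the periodic time shift

residual = u - c * np.roll(u, shift) - w       # check that K u = w on the grid
print(np.max(np.abs(residual)) < 1e-12,
      np.max(np.abs(u)) <= np.max(np.abs(w)) / (1 - abs(c)))   # bound of Lemma 2.3
```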

3 The existence of periodic solutions

Lemma 3.1

For any \(\lambda\in(0, 1)\), we are concerned with the following system:

$$ \begin{gathered} \begin{aligned} \frac{d [K_{n}u_{n}(t)]}{dt}={}&\lambda \Biggl\{ -d_{n}(t)u_{n}(t)+\sum _{j=1}^{l}b^{R}_{nj}(t)F_{j}^{R} \bigl(u_{j}(t), v_{j}(t)\bigr) \\ & -\sum_{j=1}^{l}b_{nj}^{I}(t)F_{j}^{I} \bigl(u_{j}(t), v_{j}(t)\bigr) \\ &+\sum_{j=1}^{l}e_{nj}^{R}(t)G_{j}^{R} \bigl(u_{j}^{t}, v_{j}^{t}\bigr)-\sum _{j=1}^{l}e_{nj}^{I}(t)G_{j}^{I} \bigl(u_{j}^{t}, v_{j}^{t} \bigr)+P_{n}^{R}(t) \Biggr\} , \end{aligned} \\ \begin{aligned}\frac{d [K_{n}v_{n}(t)]}{dt}={}&\lambda \Biggl\{ -d_{n}(t)v_{n}(t)+\sum_{j=1}^{l}b^{R}_{nj}(t)F_{j}^{I} \bigl(u_{j}(t), v_{j}(t)\bigr) \\ &+\sum_{j=1}^{l} b_{nj}^{I}(t)F_{j}^{R} \bigl(u_{j}(t), v_{j}(t)\bigr) \\ &+\sum_{j=1}^{l}e_{nj}^{R}(t)G_{j}^{I} \bigl(u_{j}^{t}, v_{j}^{t}\bigr)+\sum _{j=1}^{l}e_{nj}^{I}(t)G_{j}^{R} \bigl(u_{j}^{t}, v_{j}^{t} \bigr)+P_{n}^{I}(t) \Biggr\} . \end{aligned} \end{gathered} $$
(3)

Then, under assumptions \((h_{1})\)–\((h_{4})\), the periodic solutions of system (3), if they exist, are bounded, and the bound is independent of the choice of λ. Namely, there exists a positive constant M such that

$$\big\| \bigl(u(t), v(t)\bigr)^{T}\big\| = \big\| \bigl(u_{1}(t), u_{2}(t),\ldots, u_{l}(t), v_{1}(t), v_{2}(t),\ldots, v_{l}(t)\bigr)^{T} \big\| \leq M,$$

where

$$\big\| \bigl(u(t), v(t)\bigr)^{T}\big\| =\sum_{m=1}^{l}\max_{t\in[0, \omega]}\bigl(\big|u_{m}(t)\big|+\big|v_{m}(t)\big|\bigr).$$

Proof

From \((h_{3})\) and \((h_{4})\), it follows that there exists a positive number δ such that

\((h_{5})\) :

\((1+|c_{n}|)(U_{nj}+\frac{E_{nj}}{1-\sigma})<\frac{2\underline {d_{j}}}{l}-\frac{|c_{j}|}{l}\overline{d_{j}} -A_{jn\delta}-|c_{j}|B_{jn\delta}-l\delta^{2}\overline{P^{R}_{n}}-\delta\).

\((h_{6})\) :

\((1+|c_{n}|)(V_{nj}+\frac{F_{nj}}{1-\sigma})<\frac{2\underline {d_{j}}}{l}-\frac{|c_{j}|}{l}\overline{d_{j}} -A^{\ast}_{jn\delta}-|c_{j}|B^{\ast}_{jn\delta}-l\delta^{2}\overline {P^{I}_{n}}-\delta\).

Suppose that \((u(t), v(t))^{T}=(u_{1}(t), u_{2}(t),\ldots, u_{l}(t), v_{1}(t), v_{2}(t),\ldots, v_{l}(t))^{T}\) is one ω-periodic solution of system (3) for some \(\lambda\in(0, 1)\). Let \(V_{n}(t)=V_{1n}(t)+V_{2n}(t)\),

$$V_{1n}(t)=\bigl[K_{n}u_{n}(t)\bigr]^{2}+ \bigl[K_{n}v_{n}(t)\bigr]^{2}, $$

where

$$ \begin{aligned} V_{2n}(t)={}&\lambda \Biggl\{ \vert c_{n} \vert \int_{t-\tau}^{t} \Biggl(\sum _{j=1}^{l}B_{nj\delta}+\delta^{2} \overline{P^{R}_{n}} \Biggr)u_{n}^{2}(s) \,ds \\ &+\frac{(1+ \vert c_{n} \vert )}{1-\sigma}\sum_{j=1}^{l}E_{nj} \int_{t-\tau _{nj}(t)}^{t}u_{j}^{2}(s)\,ds \\ & +\frac{(1+ \vert c_{n} \vert )}{1-\sigma}\sum_{j=1}^{l}F_{nj} \int_{t-\tau _{nj}(t)}^{t}v_{j}^{2}(s)\,ds \\ &+ \vert c_{n} \vert \int_{t-\tau}^{t} \Biggl(\sum _{j=1}^{l}B^{\ast}_{nj\delta}+ \delta^{2}\overline{P^{I}_{n}} \Biggr)v_{n}^{2}(s) \,ds \Biggr\} . \end{aligned} $$

Then, along the solutions of system (3), we have

$$\begin{aligned} \frac{dV_{1n}(t)}{dt}={}&2\lambda \Biggl\{ \bigl[u_{n}(t)-c_{n}u_{n}(t-\tau)\bigr] \Biggl(-d_{n}(t)u_{n}(t)+\sum_{j=1}^{l}b_{nj}^{R}(t) F_{j}^{R}\bigl(u_{j}(t), v_{j}(t) \bigr) \\ &-\sum_{j=1}^{l}b_{nj}^{I}(t) F_{j}^{I}\bigl(u_{j}(t), v_{j}(t) \bigr)+\sum_{j=1}^{l}e_{nj}^{R}(t)G_{j}^{R} \bigl(u_{j}^{t}, v_{j}^{t}\bigr) \\ &-\sum_{j=1}^{l}e_{nj}^{I}(t)G_{j}^{I} \bigl(u_{j}^{t}, v_{j}^{t} \bigr)+P_{n}^{R}(t) \Biggr) \\ &+\bigl[v_{n}(t)-c_{n} v_{n}(t-\tau)\bigr] \Biggl(-d_{n}(t)v_{n}(t)+ \sum_{j=1}^{l}b_{nj}^{R}(t) F_{j}^{I}\bigl(u_{j}(t), v_{j}(t) \bigr) \\ &+\sum_{j=1}^{l}b_{nj}^{I}(t)F_{j}^{R} \bigl(u_{j}(t), v_{j}(t)\bigr)+\sum_{j=1}^{l}e_{nj}^{R}(t)G_{j}^{I} \bigl(u_{j}^{t}, v_{j}^{t}\bigr) \\ &+\sum _{j=1}^{l}e_{nj}^{I}(t)G_{j}^{R} \bigl(u_{j}^{t}, v_{j}^{t} \bigr)+P_{n}^{I}(t) \Biggr) \Biggr\} \\ \leq{}&\lambda \Biggl\{ \bigl(-2\underline{d_{n}}+ \vert c_{n} \vert \overline {d_{n}}\bigr)u_{n}^{2}(t)+ \vert c_{n} \vert \overline{d_{n}}u_{n}^{2}(t- \tau) \\ &+ 2 \bigl[ \bigl\vert u_{n}(t) \bigr\vert + \vert c_{n} \vert \bigl\vert u_{n}(t-\tau) \bigr\vert \bigr] \Biggl(\sum_{j=1}^{l} \overline{b_{nj}^{R}} \bigl[l_{j}^{R} \bigl\vert u_{j}(t) \bigr\vert +l_{j}^{I} \bigl\vert v_{j}(t) \bigr\vert + \bigl\vert F_{j}^{R}(0,0) \bigr\vert \bigr] \\ & +\sum_{j=1}^{l}\overline{b_{nj}^{I}} \bigl[ \bigl\vert F_{j}^{I}(0, 0) \bigr\vert +k_{j}^{R} \bigl\vert u_{j}(t) \bigr\vert +k_{j}^{I} \bigl\vert v_{j}(t) \bigr\vert \bigr] \\ & +\sum_{j=1}^{l}\overline{e_{nj}^{R}} \bigl[q_{j}^{R} \bigl\vert u_{j}^{t} \bigr\vert +q_{j}^{I} \bigl\vert v_{j}^{t} \bigr\vert + \bigl\vert G_{j}^{R}(0,0) \bigr\vert \bigr] \\ &+\sum_{j=1}^{l}\overline {e_{nj}^{I}} \bigl[p_{j}^{R} \bigl\vert u_{j}^{t} \bigr\vert +p_{j}^{I} \bigl\vert v_{j}^{t} \bigr\vert + \bigl\vert G_{j}^{I}(0,0) \bigr\vert \bigr]+\overline{P_{n}^{R}} \Biggr) \\ &+ \bigl(-2 \underline{d_{n}}+ \vert c_{n} \vert \overline{d_{n}} \bigr)v_{n}^{2}(t)+ \vert c_{n} \vert \overline {d_{n}}v_{n}^{2}(t-\tau)+ 2 \bigl[ \bigl\vert v_{n}(t) \bigr\vert + \vert c_{n} \vert \bigl\vert v_{n}(t-\tau) \bigr\vert \bigr] \\ &\times \Biggl(\sum_{j=1}^{l}\overline {b_{nj}^{R}} \bigl[k_{j}^{R} \bigl\vert u_{j}(t) \bigr\vert +k_{j}^{I} \bigl\vert v_{j}(t) \bigr\vert + \bigl\vert F_{j}^{I}(0,0) \bigr\vert \bigr] \\ &+\sum _{j=1}^{l}\overline{b_{nj}^{I}} \bigl[ \bigl\vert F_{j}^{R}(0, 0) \bigr\vert +l_{j}^{R} \bigl\vert u_{j}(t) \bigr\vert +l_{j}^{I} \bigl\vert v_{j}(t) \bigr\vert \bigr] \\ & +\sum_{j=1}^{l}\overline{e_{nj}^{R}} \bigl[p_{j}^{R} \bigl\vert u_{j}^{t} \bigr\vert +p_{j}^{I} \bigl\vert v_{j}^{t} \bigr\vert + \bigl\vert G_{j}^{I}(0,0) \bigr\vert \bigr] \\ &+\sum_{j=1}^{l}\overline{e_{nj}^{I}} \bigl[q_{j}^{R} \bigl\vert u_{j}^{t} \bigr\vert +q_{j}^{I} \bigl\vert v_{j}^{t} \bigr\vert + \bigl\vert G_{j}^{R}(0,0) \bigr\vert \bigr]+\overline{P_{n}^{I}} \Biggr) \Biggr\} . \end{aligned}$$
(4)

From (4), by using the inequalities \(2|ab|\leq a^{2}+b^{2}\) (for \(a, b=u_{n}(t), u_{n}(t-\tau), v_{n}(t), v_{n}(t-\tau), u_{j}(t), v_{j}(t), u_{j}^{t}, v_{j}^{t} \)), \(2|u_{n}(t)||F^{R}_{j}(0,0)|\leq |F^{R}_{j}(0,0)|[\delta^{2}u_{n}^{2}(t)+\frac{1}{\delta^{2}}]\), \(2|u_{n}(t)||F^{I}_{j}(0,0)|\leq |F^{I}_{j}(0,0)|[\delta^{2}u_{n}^{2}(t)+\frac{1}{\delta^{2}}]\), \(2|u_{n}(t)||G^{I}_{j}(0,0)|\leq |G^{I}_{j}(0,0)|[\delta^{2}u_{n}^{2}(t)+\frac{1}{\delta ^{2}}]\), \(2|u_{n}(t)||G^{R}_{j}(0,0)|\leq |G^{R}_{j}(0,0)|[\delta^{2}u_{n}^{2}(t)+\frac{1}{\delta^{2}}]\), \(2|u_{n}(t)|\overline{P^{R}_{n}}\leq \overline{P^{R}_{n}}[u_{n}^{2}(t)\delta^{2}+\frac{1}{\delta^{2}}]\), it follows that

$$\begin{aligned} \frac{dV_{1n}(t)}{dt} \leq{}&\lambda \Biggl\{ \Biggl(-2\underline{d_{n}}+ \vert c_{n} \vert \overline{d_{n}}+\sum_{j=1}^{l}A_{nj\delta} \Biggr)u_{n}^{2}(t)+ \vert c_{n} \vert \Biggl( \sum_{j=1}^{l} B_{nj\delta}+\delta^{2}\overline{P^{R}_{n}} \Biggr)u^{2}_{n}(t-\tau) \\ &+\sum_{j=1}^{l} \bigl(1+ \vert c_{n} \vert \bigr)\bigl(\overline{b_{nj}^{R}}l_{j}^{R}+ \overline {b_{nj}^{I}}k_{j}^{R} \bigr)u_{j}^{2}(t)+\bigl(1+ \vert c_{n} \vert \bigr)\sum_{j=1}^{l}\bigl( \overline{b_{nj}^{R}} l_{j}^{I}+\overline{b_{nj}^{I}}k_{j}^{I} \bigr)v_{j}^{2}(t) \\ & +\bigl(1+ \vert c_{n} \vert \bigr)\sum_{j=1}^{l}\bigl( \overline{e_{nj}^{R}}q_{j}^{R}+ \overline{e_{nj}^{I}}p_{j}^{R}\bigr) \bigl(u_{j}^{t} \bigr)^{2} +\bigl(1+ \vert c_{n} \vert \bigr)\sum _{j=1}^{l}\bigl(\overline{e_{nj}^{R}}q_{j}^{I}+ \overline {e_{nj}^{I}}p_{j}^{I}\bigr) \bigl(v^{t}_{j}\bigr)^{2} \\ &+ \Biggl(-2\underline{d_{n}}+ \vert c_{n} \vert \overline{d_{n}}+\sum_{j=1}^{l}A^{\ast}_{nj\delta} \Biggr)v_{n}^{2}(t)+ \vert c_{n} \vert \Biggl(\sum_{j=1}^{l} B^{\ast}_{nj\delta}+\delta^{2}\overline{P^{I}_{n}} \Biggr)v^{2}_{n}(t-\tau) \\ &+\sum_{j=1}^{l} \bigl(1+ \vert c_{n} \vert \bigr) \bigl(\overline{b_{nj}^{R}}k_{j}^{R}+\overline{b_{nj}^{I}}l_{j}^{R} \bigr)v_{j}^{2}(t)+\bigl(1+ \vert c_{n} \vert \bigr)\sum_{j=1}^{l}\bigl( \overline{b_{nj}^{R}} k_{j}^{I}+ \overline{b_{nj}^{I}}l_{j}^{I} \bigr)u_{j}^{2}(t) \\ & +\bigl(1+ \vert c_{n} \vert \bigr)\sum _{j=1}^{l}\bigl(\overline{e_{nj}^{R}}p_{j}^{R} +\overline {e_{nj}^{I}}q_{j}^{R}\bigr) \bigl(v_{j}^{t}\bigr)^{2} +\bigl(1+ \vert c_{n} \vert \bigr)\sum_{j=1}^{l} \bigl(\overline{e_{nj}^{R}}p_{j}^{I}+\overline{e_{nj}^{I}}q_{j}^{I}\bigr) \bigl(u^{t}_{j}\bigr)^{2} \Biggr\} \\ &+\lambda N, \end{aligned}$$
(5)

where N is a positive constant.
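The δ-weighted inequalities used above are simply weighted forms of \(2ab\leq a^{2}+b^{2}\); for instance, the first of them follows from

$$0\leq \biggl(\delta \bigl\vert u_{n}(t) \bigr\vert -\frac{1}{\delta} \biggr)^{2} =\delta^{2}u_{n}^{2}(t)-2 \bigl\vert u_{n}(t) \bigr\vert +\frac{1}{\delta^{2}}, \quad\text{i.e.}\quad 2 \bigl\vert u_{n}(t) \bigr\vert \leq\delta^{2}u_{n}^{2}(t)+\frac{1}{\delta^{2}}, $$

and multiplying both sides by \(|F^{R}_{j}(0,0)|\geq0\) gives \(2|u_{n}(t)||F^{R}_{j}(0,0)|\leq|F^{R}_{j}(0,0)|[\delta^{2}u_{n}^{2}(t)+\frac{1}{\delta^{2}}]\). The remaining bounds are obtained in exactly the same way.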

Since

$$ \begin{aligned}[b] \frac{dV_{2n}(t)}{dt} ={}&\lambda \Biggl\{ \frac{1+ \vert c_{n} \vert }{1-\sigma}\sum_{j=1}^{l} \bigl\{ E_{nj}u_{j}^{2}(t)-E_{nj}u_{j}^{2} \bigl(t-\tau_{nj}(t)\bigr) \bigl(1-\tau'_{nj}(t) \bigr) \\ &+F_{nj}v_{j}^{2}(t)-F_{nj}v_{j}^{2} \bigl(t-\tau_{nj}(t)\bigr) \bigl(1-\tau'_{nj}(t) \bigr) \bigr\} \\ &+ \vert c_{n} \vert \sum_{j=1}^{l} \bigl(B_{nj\delta}+\delta^{2}\overline{P^{R}_{n}} \bigr)u_{n}^{2}(t) - \vert c_{n} \vert \sum_{j=1}^{l} \bigl(B_{nj\delta}+\delta^{2}\overline{P^{R}_{n}} \bigr)u_{n}^{2}(t-\tau) \\ &+ \vert c_{n} \vert \sum_{j=1}^{l} \bigl(B^{\ast}_{nj\delta}+\delta^{2}\overline{P^{I}_{n}} \bigr)v_{n}^{2}(t)- \vert c_{n} \vert \sum _{j=1}^{l}\bigl(B^{\ast}_{nj\delta}+ \delta^{2}\overline{P^{I}_{n}}\bigr)v_{n}^{2}(t- \tau) \Biggr\} \\ \leq{}&\lambda \Biggl\{ \sum_{j=1}^{l} \biggl\{ \frac{E_{nj}(1+ \vert c_{n} \vert )}{1-\sigma }u_{j}^{2}(t)-E_{nj}\bigl(1+ \vert c_{n} \vert \bigr)u_{j}^{2}\bigl(t- \tau_{nj}(t)\bigr) \\ & +\frac{F_{nj}(1+ \vert c_{n} \vert )}{1-\sigma}v_{j}^{2}(t)-\bigl(1+ \vert c_{n} \vert \bigr) F_{nj}v_{j}^{2}\bigl(t- \tau_{nj}(t)\bigr) \biggr\} \\ &+ \vert c_{n} \vert \sum _{j=1}^{l}\bigl(B_{nj\delta}+\delta ^{2}\overline{P^{R}_{n}}\bigr)u_{n}^{2}(t)- \vert c_{n} \vert \sum_{j=1}^{l} \bigl(B_{nj\delta} +\delta^{2}\overline{P^{R}_{n}} \bigr)u_{n}^{2}(t-\tau) \\ & +\vert c_{n} \vert \sum_{j=1}^{l} \bigl(B^{\ast}_{nj\delta}+\delta^{2}\overline{P^{I}_{n}} \bigr)v_{n}^{2}(t)- \vert c_{n} \vert \sum _{j=1}^{l}\bigl(B^{\ast}_{nj\delta}+ \delta^{2}\overline{P^{I}_{n}}\bigr)v_{n}^{2}(t- \tau ) \Biggr\} , \end{aligned} $$
(6)

from (5) and (6), we have

$$ \begin{aligned}[b] \frac{dV_{n}(t)}{dt} \leq{}&\lambda \Biggl\{ \Biggl[ -2\underline{d_{n}}+ \vert c_{n} \vert \overline{d_{n}}+\sum_{j=1}^{l}A_{nj\delta }+ \vert c_{n} \vert \Biggl(\sum_{j=1}^{l}B_{nj\delta}+\delta^{2}\overline{P^{R}_{n}}\Biggr) \Biggr]u_{n}^{2}(t) \\ & +\bigl(1+ \vert c_{n} \vert \bigr)\sum_{j=1}^{l}\biggl(U_{nj}+ \frac{E_{nj}}{1-\sigma}\biggr)u_{j}^{2}(t)+\bigl(1+ \vert c_{n} \vert \bigr)\sum_{j=1}^{l} \biggl(V_{nj}+\frac{F_{nj}}{1-\sigma}\biggr)v_{j}^{2}(t) \\ &+ \Biggl[ -2\underline{d_{n}}+ \vert c_{n} \vert \overline{d_{n}}+\sum_{j=1}^{l}A^{\ast}_{nj\delta} + \vert c_{n} \vert \Biggl(\sum_{j=1}^{l} B^{\ast}_{nj\delta}+\delta^{2}\overline{P^{I}_{n}} \Biggr) \Biggr]v_{n}^{2}(t) \Biggr\} +\lambda N \\ \leq{}&\lambda\sum_{j=1}^{l} \Biggl\{ \biggl[ -2\frac{\underline{d_{n}}}{l}+ \vert c_{n} \vert \frac{\overline{d_{n}}}{l}+A_{nj\delta}+ \vert c_{n} \vert \bigl(B_{nj\delta }+l\delta^{2}\overline{P^{R}_{n}} \bigr)+\delta \biggr]u_{n}^{2}(t) \\ & +\bigl(1+ \vert c_{n} \vert \bigr) \biggl(U_{nj}+ \frac{E_{nj}}{1-\sigma}\biggr)u_{j}^{2}(t) +\bigl(1+ \vert c_{n} \vert \bigr) \biggl(V_{nj}+\frac{F_{nj}}{1-\sigma}\biggr)v_{j}^{2}(t)\\ &+ \biggl[ -2\frac{\underline{d_{n}}}{l}+ \vert c_{n} \vert \frac{\overline{d_{n}}}{l}+A^{\ast}_{nj\delta} + \vert c_{n} \vert \bigl( B^{\ast}_{nj\delta}+l \delta^{2}\overline{P^{I}_{n}}\bigr) +\delta \biggr]v_{n}^{2}(t)\\ &-\delta \bigl[u_{n}^{2}(t)+v_{n}^{2}(t) \bigr]+N \Biggr\} . \end{aligned} $$
(7)

By using \((h_{5})\) and \((h_{6})\), from (7), it follows that

$$ \begin{aligned}[b] \frac{dV_{n}(t)}{dt} \leq{}&\lambda \sum_{j=1}^{l} \biggl\{ \biggl[ 2 \frac{\underline{d_{j}}}{l}- \vert c_{j} \vert \frac{\overline{d_{j}}}{l}-A_{jn\delta}- \vert c_{j} \vert \bigl(B_{jn\delta }+l\delta^{2}\overline{P^{R}_{j}}\bigr)-\delta \biggr]u_{j}^{2}(t) \\ &- \biggl[ 2\frac{\underline{d_{n}}}{l}- \vert c_{n} \vert \frac{\overline{d_{n}}}{l}-A_{nj\delta}- \vert c_{n} \vert \bigl(B_{nj\delta}+l \delta^{2}\overline{P^{R}_{n}}\bigr)-\delta \biggr]u_{n}^{2}(t) \biggr\} \\ & +\lambda\sum _{j=1}^{l} \biggl\{ \biggl[ 2\frac{\underline{d_{j}}}{l}- \vert c_{j} \vert \frac{\overline{d_{j}}}{l}-A^{\ast}_{jn\delta}- \vert c_{j} \vert \bigl(B^{\ast}_{jn\delta} +l \delta^{2}\overline{P^{I}_{j}}\bigr)-\delta \biggr]v_{j}^{2}(t) \\ &- \biggl[ 2\frac{\underline{d_{n}}}{l} - \vert c_{n} \vert \frac{\overline{d_{n}}}{l}-A^{\ast}_{nj\delta}- \vert c_{n} \vert \bigl(B^{\ast}_{nj\delta }+l \delta^{2}\overline{P^{I}_{n}}\bigr) -\delta \biggr]v_{n}^{2}(t) \biggr\} \\ &+\lambda\sum_{j=1}^{l} \bigl(-\delta\bigl[u_{n}^{2}(t)+v_{n}^{2}(t) \bigr]+N \bigr). \end{aligned} $$
(8)

Letting \(b_{nj}=1\) (\(n\neq j\)), \(b_{nj}=0\), \(n=j\), \(G_{nj}(u^{2}_{n}(t), u^{2}_{j}(t))= [2\frac{\underline{d_{j}}}{l}-|c_{j}|\frac{\overline{d_{j}}}{l}-A_{jn\delta }-|c_{j}|(B_{jn\delta}+l\delta^{2}\overline{P^{R}_{j}})-\delta ] u_{j}^{2}(t)- [ 2\frac{\underline{d_{n}}}{l}-|c_{n}|\frac{\overline{d_{n}}}{l}-A_{nj\delta}-|c_{n}|(B_{nj\delta }+l\delta^{2}\overline{P^{R}_{n}})-\delta ]u_{n}^{2}(t)\) and \(p_{n}(u_{n}^{2}(t))= [ 2\frac{\underline{d_{n}}}{l}-|c_{n}|\frac{\overline{d_{n}}}{l}-A_{nj\delta}-|c_{n}|(B_{nj\delta }+l\delta^{2}\overline{P^{R}_{n}})-\delta ]u_{n}^{2}(t)\); \(b^{\ast}_{nj}=1\) (\(n\neq j\)), \(b_{nj}^{\ast}=0\), \(n=j\), \(G^{\ast}_{nj}(v^{2}_{n}(t), v^{2}_{j}(t))= [2\frac{\underline{d_{j}}}{l}-|c_{j}|\frac{\overline{d_{j}}}{l}-A^{\ast}_{jn\delta }-|c_{j}|(B^{\ast}_{jn\delta}+l\delta^{2}\overline{P^{I}_{j}})-\delta ] v_{j}^{2}(t)- [ 2\frac{\underline{d_{n}}}{l}-|c_{n}|\frac{\overline{d_{n}}}{l}-A^{\ast}_{nj\delta}-|c_{n}|(B^{\ast}_{nj\delta}+l\delta^{2}\overline{P^{I}_{n}}) -\delta ]v_{n}^{2}(t)\) and \(p^{\ast}_{n}(v_{n}^{2}(t))= [ 2\frac{\underline{d_{n}}}{l}-|c_{n}|\frac{\overline{d_{n}}}{l}-A^{\ast}_{nj\delta}-|c_{n}|(B^{\ast}_{nj\delta}+l\delta^{2}\overline{P^{I}_{n}})-\delta ]v_{n}^{2}(t)\), then we have, from (8),

$$\begin{aligned}& \begin{aligned}[b] \frac{dV_{n}(t)}{dt} \leq{}&\lambda \Biggl\{ \sum_{j=1}^{l}b_{nj}G_{nj} \bigl(u_{n}^{2}(t), u_{j}^{2}(t) \bigr) \\ &+\sum_{j=1}^{l} b^{\ast}_{nj}G^{\ast}_{nj} \bigl(v_{n}^{2}(t), v_{j}^{2}(t) \bigr)- \sum_{j=1}^{l}\delta\bigl[u_{n}^{2}(t)+v_{n}^{2}(t) \bigr]+\frac{N}{l} \Biggr\} , \end{aligned} \end{aligned}$$
(9)
$$\begin{aligned}& G_{nj} \bigl(u_{n}^{2}(t), u_{j}^{2}(t) \bigr)=p_{j}\bigl(u_{j}^{2}(t) \bigr)-p_{n}\bigl(u_{n}^{2}(t)\bigr), \end{aligned}$$
(10)

and

$$ G^{\ast}_{nj} \bigl(v_{n}^{2}(t), v_{j}^{2}(t) \bigr)=p^{\ast}_{j} \bigl(v_{j}^{2}(t)\bigr)-p^{\ast}_{n} \bigl(v_{n}^{2}(t)\bigr). $$
(11)

We construct the following Lyapunov function for system (3):

$$V(t)= \sum_{n=1}^{l}c^{\ast}_{n}V_{n}(t), $$

where \(c^{\ast}_{n}>0 \) is the cofactor of the nth diagonal element of the Laplacian matrix of \((g, B)\). From (9), we have

$$ \begin{aligned}[b] \frac{dV(t)}{dt}={}&\sum _{n=1}^{l}c^{\ast}_{n} \frac{dV_{n}(t)}{dt} \\ \leq{}&\lambda\sum_{n=1}^{l}c^{\ast}_{n} \sum_{j=1}^{l} \bigl\{ b_{nj}G_{nj} \bigl(u_{n}^{2}(t), u_{j}^{2}(t) \bigr)+b_{nj}^{\ast}G^{\ast}_{nj} \bigl(v_{n}^{2}(t), v_{j}^{2}(t)\bigr) \\ & -\delta \bigl[u_{n}^{2}(t)+v_{n}^{2}(t) \bigr]+N \bigr\} . \end{aligned} $$
(12)

From Lemma 2.2, it follows that

$$\begin{aligned}& \sum_{n=1}^{l}\sum _{j=1}^{l}c^{\ast}_{n}b_{nj}G_{nj} \bigl(u_{n}^{2}(t), u_{j}^{2}(t) \bigr)= \sum_{Q\in\Omega}W(Q)\sum_{(n, j)\in K(C_{\Omega})}G_{nj} \bigl(u_{n}^{2}(t), u_{j}^{2}(t) \bigr), \end{aligned}$$
(13)
$$\begin{aligned}& \sum_{n=1}^{l}\sum _{j=1}^{l}c^{\ast}_{n}b^{\ast}_{nj}G^{\ast}_{nj} \bigl(v_{n}^{2}(t), v_{j}^{2}(t) \bigr)= \sum_{Q\in\Omega}W(Q)\sum_{(n, j)\in K(C_{\Omega})}G^{\ast}_{nj} \bigl(v_{n}^{2}(t), v_{j}^{2}(t) \bigr). \end{aligned}$$
(14)

By substituting (10) into (13) and (11) into (14), and noting that \(W(Q)>0\) while the summands \(p_{j}(u_{j}^{2}(t))-p_{n}(u_{n}^{2}(t))\) telescope to zero around each directed cycle \(C_{Q}\), it follows that

$$ \begin{aligned}[b] &\sum_{n=1}^{l} \sum_{j=1}^{l}c^{\ast}_{n}b_{nj}G_{nj} \bigl(u_{n}^{2}(t), u_{j}^{2}(t) \bigr) \\ &\quad=\sum_{Q\in\Omega}W(Q)\sum _{(n, j)\in K(C_{\Omega})} \bigl[p_{j}\bigl(u_{j}^{2}(t) \bigr)-p_{n}\bigl(u_{n}^{2}(t)\bigr) \bigr]\leq0 \end{aligned} $$
(15)

and

$$ \begin{aligned}[b] &\sum_{n=1}^{l} \sum_{j=1}^{l}c^{\ast}_{n}b^{\ast}_{nj}G^{\ast}_{nj} \bigl(v_{n}^{2}(t), v_{j}^{2}(t) \bigr) \\ &\quad=\sum_{Q\in\Omega}W(Q)\sum _{(n, j)\in K(C_{\Omega})} \bigl[p^{\ast}_{j} \bigl(v_{j}^{2}(t)\bigr)-p^{\ast}_{n} \bigl(v_{n}^{2}(t) \bigr) \bigr]\leq 0. \end{aligned} $$
(16)

Substituting (15) and (16) into (12) gives

$$ \frac{dV(t)}{dt}\leq\lambda\sum_{n=1}^{l}c_{n}^{\ast}\sum_{j=1}^{l} \bigl(-\delta \bigl[u_{n}^{2}(t)+v_{n}^{2}(t)\bigr]+N \bigr). $$
(17)

Integrating (17) from 0 to ω and using \(V(\omega)=V(0)\) (which holds since \((u(t), v(t))^{T}\) is ω-periodic) gives

$$ \int_{0}^{\omega}\sum_{n=1}^{l}c_{n}^{\ast}\delta \bigl[u_{n}^{2}(s)+v_{n}^{2}(s) \bigr]\,ds\leq\omega N \sum_{n=1}^{l} c_{n}^{\ast}. $$
(18)

Integrating (17) from 0 to t gives

$$ \begin{aligned}[b] V(t)\leq{}& V(0)+ \int_{0}^{t}\sum_{n=1}^{l}c_{n}^{\ast}l \bigl(\delta\bigl[u_{n}^{2}(s)+v_{n}^{2}(s) \bigr]+N \bigr)\,ds \\ \leq{}& V(0)+ \int_{0}^{\omega}\sum_{n=1}^{l}c_{n}^{\ast}l \bigl(\delta\bigl[u_{n}^{2}(s)+v_{n}^{2}(s) \bigr]+N \bigr)\,ds. \end{aligned} $$
(19)

Substituting (18) into (19) gives

$$ V(t)\leq V(0)+2l\omega N\sum_{n=1}^{l}c_{n}^{\ast}. $$
(20)

From (20) and the definitions of \(V_{n}(t)\) and \(V_{1n}(t)\), we have

$$ \begin{aligned}[b] &\sum_{n=1}^{l}c_{n}^{\ast}\bigl\{ \bigl[u_{n}(t)-c_{n}u_{n}(t-\tau)\bigr]^{2}+\bigl[v_{n}(t)-c_{n}v_{n}(t- \tau)\bigr]^{2} \bigr\} \\ & \quad\leq V(0)+2l\omega N\sum_{n=1}^{l}c_{n}^{\ast}. \end{aligned} $$
(21)

Letting \(|u_{n}(\xi)|=\max_{t\in[0, \omega]}|u_{n}(t)|\), \(|v_{n}(\eta)|=\max_{t\in[0, \omega]}|v_{n}(t)|\), then from (21), it follows that

$$\sum_{n=1}^{l}c_{n}^{\ast}\bigl(1- \vert c_{n} \vert \bigr)^{2} \bigl[u_{n}^{2}(\xi)+v^{2}_{n}(\eta)\bigr] \leq V(0)+2l\omega N\sum_{n=1}^{l}c_{n}^{\ast}. $$

Hence there exists a positive constant M such that \(\|(u(t), v(t))^{T}\|\leq M\). This completes the proof of Lemma 3.1. □

Theorem 3.1

Assume that \((h_{1})\)–\((h_{4})\) hold and \(|c_{n}|<1\) for \(n\in\mathbf{L}\). Then system (2) has at least one ω-periodic solution.

Proof

We will prove the existence of periodic solutions of system (2) by using Lemma 2.1. Consider the Banach spaces \(X^{\ast}=Z^{\ast}=\{(u(t), v(t))^{T}\in C(R, R^{2l}): u(t+\omega)=u(t), v(t+\omega)=v(t)\} \) with the norm \(\|(u(t), v(t))^{T}\|=\sum_{n=1}^{l}\max_{t\in[0, \omega]}(|u_{n}(t)|+|v_{n}(t)|)\). Set \(L^{\ast}: \operatorname{Dom}L^{\ast}\subset X^{\ast}\rightarrow X^{\ast}\), \(L^{\ast}(u(t), v(t))= (\frac{d[K_{1}u_{1}(t)]}{dt}, \frac{d[K_{2}u_{2}(t)]}{dt},\ldots, \frac{d[K_{l}u_{l}(t)]}{dt}, \frac{d[K_{1}v_{1}(t)]}{dt}, \frac {d[K_{2}v_{2}(t)]}{dt},\ldots, \frac{d[K_{l}v_{l}(t)]}{dt} )^{T}\) and

$$N^{\ast}\bigl(u(t),v(t)\bigr)= \bigl(f_{1}(t), f_{2}(t),\ldots, f_{l}(t), f^{\ast}_{1}(t), f^{\ast}_{2}(t),\ldots, f^{\ast}_{l}(t) \bigr), $$

where, for \(n\in\textbf{L}\),

$$ \begin{gathered} \begin{aligned} f_{n}(t)={}&{-}d_{n}(t)u_{n}(t)+ \sum_{j=1}^{l}b^{R}_{nj}(t)F_{j}^{R} \bigl(u_{j}(t),v_{j}(t)\bigr)-\sum _{j=1}^{l}b_{nj}^{I}(t)F_{j}^{I} \bigl(u_{j}(t), v_{j}(t)\bigr) \\ &+\sum_{j=1}^{l}e_{nj}^{R}(t) G_{j}^{R}\bigl(u_{j}^{t}, v_{j}^{t}\bigr)-\sum_{j=1}^{l}e_{nj}^{I}(t)G_{j}^{I} \bigl(u_{j}^{t}, v_{j}^{t} \bigr)+P^{R}_{n}(t), \end{aligned} \\ \begin{aligned}f^{\ast}_{n}(t)={}&{-}d_{n}(t)v_{n}(t)+ \sum_{j=1}^{l}b^{R}_{nj}(t)F_{j}^{I} \bigl(u_{j}(t),v_{j}(t)\bigr)+\sum _{j=1}^{l}b_{nj}^{I}(t) F_{j}^{R}\bigl(u_{j}(t), v_{j}(t) \bigr) \\ & +\sum_{j=1}^{l}e_{nj}^{R}(t) G_{j}^{I}\bigl(u_{j}^{t}, v_{j}^{t}\bigr)+\sum_{j=1}^{l}e_{nj}^{I}(t)G_{j}^{R} \bigl(u_{j}^{t}, v_{j}^{t} \bigr)+P^{I}_{n}(t). \end{aligned} \end{gathered} $$

Thus \(\operatorname{Ker}L^{\ast}=\{(u(t),v(t))^{T}\in X^{\ast}: (u, v)^{T}\in R^{2l}\}\) (the constant functions), \(\operatorname{Im} L^{\ast}=\{w\in Z^{\ast}: \int_{0}^{\omega}w(t)\,dt=0\}\) is closed in \(Z^{\ast}\), and \(\dim \operatorname{Ker} L^{\ast}=2l=\operatorname{codim} \operatorname{Im}L^{\ast}\). Hence, the operator \(L^{\ast}\) is a Fredholm mapping of index zero. We construct the projectors \(P^{\ast}: X^{\ast}\cap \operatorname{Dom} L^{\ast}\rightarrow \operatorname{Ker}L^{\ast}\) and \(Q^{\ast}: Z^{\ast}\rightarrow Z^{\ast}\) as

$$\begin{gathered} P^{\ast}u=\frac{1}{\omega} \int_{0}^{\omega}u(t)\,dt,\quad u\in X^{\ast}; \\ Q^{\ast}w=\frac{1}{\omega} \int_{0}^{\omega}w(t)\,dt,\quad w\in Z^{\ast}.\end{gathered} $$

Therefore, \(\operatorname{Im}P^{\ast}=\operatorname{Ker}L^{\ast}\), \(\operatorname{Im}L^{\ast}=\operatorname{Ker}Q^{\ast}=\operatorname {Im}(I-Q^{\ast})\). Moreover, the generalized inverse \(K_{p} \) of \(L^{\ast}\) is given as \(K_{p}=(L^{\ast})^{-1} (\int_{0}^{t}w(s)\,ds )\). Since \(|c_{n}|<1\), from Lemma 2.3 it is not difficult to show that \(N^{\ast}\) is \(L^{\ast}\)-compact on \(\overline{\Omega}\). The operator equation \(L^{\ast}(u, v)=\lambda N^{\ast}(u, v)\), \((u, v)^{T}\in X^{\ast}\), \(\lambda\in(0, 1)\), is precisely system (3). From Lemma 3.1, for every periodic solution \((u(t), v(t))^{T}=(u_{1}(t), u_{2}(t),\ldots, u_{l}(t), v_{1}(t),v_{2}(t),\ldots, v_{l}(t))^{T} \) of system (3), there exists a positive constant M such that \(\|(u(t), v(t))^{T}\|< M\). We set \(\Omega=\{(u(t), v(t))^{T}\in X^{\ast}: \|(u(t), v(t))^{T}\|< M\}\), where \(M>\sqrt{\frac{2Nl\sum_{n=1}^{l}c_{n}^{\ast}}{\min_{1\leq n\leq l}\{c^{\ast}_{n}\}}}\). Then, for each \((u(t), v(t))^{T}\in \partial\Omega\cap \operatorname{Dom} L^{\ast}\), \(L^{\ast}(u(t), v(t))\neq\lambda N^{\ast}(u(t), v(t))\), \(\lambda\in(0, 1)\). Hence, condition (a) in Lemma 2.1 is satisfied. Secondly, we show that \(Q^{\ast}N^{\ast}(u(t), v(t))\neq0\) for \((u(t), v(t))^{T}\in\partial\Omega\cap\operatorname{Ker}L^{\ast}\). Since \((u, v)^{T}\in\partial\Omega\cap\operatorname{Ker}L^{\ast}\) is a constant vector with \(\|(u, v)^{T}\|=M\), we have \(Q^{\ast}N^{\ast}(u, v)= ( f_{1}(\xi_{1}), f_{2}(\xi_{2}),\ldots, f_{l}(\xi_{l}), f_{1}^{\ast}(\xi_{1}), f_{2}^{\ast}(\xi_{2}),\ldots, f_{l}^{\ast}(\xi_{l}) )\), where \(\xi_{i}\in[0, \omega]\) (\(i=1, 2,\ldots,l\)). When \((u, v)^{T}\in\partial\Omega\cap\operatorname{Ker}L^{\ast}\), we have

$$ \begin{aligned}[b] &[u_{n}-c_{n}u_{n}, v_{n}-c_{n}v_{n}] \bigl[Q^{\ast}N^{\ast}(u, v)_{n}\bigr]^{T} \\ &\quad=[u_{n}-c_{n}u_{n}, v_{n}-c_{n}v_{n}] \bigl(f_{n}(\xi_{n}), f_{n}^{\ast}( \xi_{n})\bigr)^{T} \\ &\quad =(u_{n}-c_{n}u_{n}) \Biggl[-d_{n}( \xi_{n})u_{n}+\sum_{j=1}^{l}b_{nj}^{R}( \xi _{n})F_{j}^{R}(u_{j},v_{j}) \\ &\qquad{} -\sum_{j=1}^{l}b_{nj}^{I}( \xi_{n}) F_{j}^{I}(u_{j},v_{j})+ \sum_{j=1}^{l}e^{R}_{nj}( \xi_{n}) G_{j}^{R}(u_{j},v_{j})- \sum_{j=1}^{l}e_{nj}^{I}( \xi_{n})G_{j}^{I}(u_{j}, v_{j})+P^{R}_{n}(\xi_{n}) \Biggr] \\ &\qquad{}+(v_{n}-c_{n}v_{n}) \Biggl[-d_{n}(\xi_{n})v_{n}+ \sum _{j=1}^{l}b_{nj}^{R}( \xi_{n})F_{j}^{I}(u_{j},v_{j}) \\ &\qquad{}-\sum_{j=1}^{l}b_{nj}^{I}( \xi_{n}) F_{j}^{R}(u_{j},v_{j})+ \sum_{j=1}^{l}e^{R}_{nj}( \xi_{n}) G_{j}^{I}(u_{j},v_{j})- \sum_{j=1}^{l}e_{nj}^{I}( \xi_{n})G_{j}^{R}(u_{j}, v_{j})+P^{I}_{n}(\xi_{n}) \Biggr] \\ &\qquad{}+0. \end{aligned} $$
(22)

It is obvious that

$$ \begin{aligned}[b] 0={}& \vert c_{n} \vert \Biggl(\sum_{j=1}^{l}B_{nj\delta}+\delta^{2}\overline{P^{R}_{n}}\Biggr) \bigl(u_{n}^{2}-u_{n}^{2}\bigr)+ \frac{1+ \vert c_{n} \vert }{1-\sigma}\sum_{j=1}^{l}E_{nj} \bigl(u_{j}^{2}-u_{j}^{2}\bigr) \\ &+ \vert c_{n} \vert \Biggl(\sum_{j=1}^{l}B_{nj\delta}^{\ast}+ \delta^{2}\overline{ P^{I}_{n}}\Biggr) \bigl(v_{n}^{2}-v_{n}^{2}\bigr)+ \frac{1+ \vert c_{n} \vert }{1-\sigma}\sum_{j=1}^{l}F_{nj} \bigl(v_{j}^{2}-v_{j}^{2}\bigr). \end{aligned} $$
(23)

Substituting (23) into (22) gives

$$\begin{aligned} & [u_{n}-c_{n}u_{n}, v_{n}-c_{n}v_{n}] \bigl[QN(u, v)_{n} \bigr]^{T} \\ &\quad=[u_{n}-c_{n}u_{n}, v_{n}-c_{n}v_{n}] \bigl(f_{n}(\xi_{n}), f_{n}^{\ast}( \xi_{n})\bigr)^{T} \\ &\quad\leq(u_{n}-c_{n}u_{n}) \Biggl[-d_{n}( \xi_{n})u_{n}+\sum_{j=1}^{l}b_{nj}^{R}( \xi _{n})F_{j}^{R}(u_{j},v_{j})-\sum_{j=1}^{l}b_{nj}^{I}( \xi_{n}) F_{j}^{I}(u_{j},v_{j}) \\ &\qquad{}+ \sum_{j=1}^{l} e^{R}_{nj}( \xi_{n}) G_{j}^{R}(x_{j},y_{j})-\sum_{j=1}^{l}e_{nj}^{I}( \xi_{n})G_{j}^{I}(u_{j}, v_{j})+P^{R}_{n}(\xi_{n}) \Biggr] \\ & \qquad{}+(v_{n}-c_{n}v_{n}) \Biggl[-d_{n}( \xi_{n})v_{n}+\sum_{j=1}^{l}b_{nj}^{R}( \xi_{n})F_{j}^{I}(u_{j},v_{j})+ \sum_{j=1}^{l}b_{nj}^{I}( \xi_{n}) F_{j}^{R}(u_{j},v_{j}) \\ &\qquad{}+ \sum_{j=1}^{l}e^{R}_{nj}( \xi_{n}) G_{j}^{I}(u_{j},v_{j})+\sum_{j=1}^{l}e_{nj}^{I}( \xi_{n}) G_{j}^{R}(u_{j}, v_{j})+P^{I}_{n}(\xi_{n}) \Biggr] \\ &\qquad{}+ \vert c_{n} \vert \Biggl(\sum_{j=1}^{l}B_{nj} \delta+\delta^{2}\overline{P^{R}_{n}}\Biggr) \bigl(u_{n}^{2}-u_{n}^{2}\bigr)+ \frac{1+ \vert c_{n} \vert }{1-\sigma}\sum_{j=1}^{l}E_{nj} \bigl(u_{j}^{2}-u_{j}^{2}\bigr) \\ &\qquad{}+ \vert c_{n} \vert (\sum_{j=1}^{l}B_{nj\delta}^{\ast}+ \delta^{2}\overline{ P^{I}_{n}}\bigl(v_{n}^{2}-v_{n}^{2} \bigr)+\frac{1+ \vert c_{n} \vert }{1-\sigma}\sum_{j=1}^{l}f_{nj} \bigl(v_{j}^{2}-v_{j}^{2}\bigr). \end{aligned}$$
(24)

From (24), the same proofs as those of (7)–(17) give

$$ \begin{aligned}[b] &\sum_{n=1}^{l}c^{\ast}_{n}[u_{n}-c_{n}y_{n}, v_{n}-c_{n}v_{n}] \bigl[Q^{\ast}N^{\ast}(u, v)_{n}\bigr]^{T} \\ &\quad=\sum_{n=1}^{l}c^{\ast}_{n}[u_{n}-c_{n}u_{n}, v_{n}-c_{n}v_{n}]\bigl(f_{n}( \xi_{n}), f_{n}^{\ast}(\xi_{m}) \bigr)^{T} \\ &\quad\leq\sum_{n=1}^{l}c^{\ast}_{n} \sum_{j=1}^{l} \biggl\{ \biggl[ 2 \frac{\underline{d_{j}}}{l}- \vert c_{j} \vert \frac{\overline{d_{j}}}{l}-A_{jn\delta}- \vert c_{j} \vert \bigl(B_{jn\delta }+l\delta^{2}\overline{P^{R}_{j}} \bigr)-\delta \biggr]u_{j}^{2} \\ & \qquad{}- \biggl[ 2 \frac{\underline{d_{n}}}{l}- \vert c_{n} \vert \frac{\overline{d_{n}}}{l}-A_{nj\delta}- \vert c_{n} \vert \bigl(B_{nj\delta}+l \delta^{2}\overline{P^{R}_{n}}\bigr)-\delta \biggr]u_{n}^{2} \biggr\} \\ & \qquad{} +\sum_{j=1}^{l} \biggl\{ \biggl[ 2\frac{\underline{d_{j}}}{l}- \vert c_{j} \vert \frac{\overline{d_{j}}}{l}-A^{\ast}_{jn\delta}- \vert c_{j} \vert \bigl(B^{\ast}_{jn\delta}+l \delta^{2}\overline{P^{I}_{j}}\bigr)-\delta \biggr] v_{j}^{2} \\ &\qquad{}- \biggl[ 2\frac{\underline{d_{n}}}{l}- \vert c_{n} \vert \frac{\overline{d_{n}}}{l}-A^{\ast}_{nj\delta}- \vert c_{n} \vert \bigl(B^{\ast}_{nj\delta} +l\delta^{2}\overline{P^{I}_{n}}\bigr) - \delta \biggr]v_{n}^{2}(t) \biggr\} \\ &\qquad{}-\delta\bigl(u_{n}^{2}+v_{n}^{2} \bigr)+N \} \\ &\quad\leq\sum_{n=1}^{l}\sum _{j=1}^{l}c_{n}^{\ast}\bigl[-\delta \bigl(u_{n}^{2}+v_{n}^{2}\bigr)+N\bigr]. \end{aligned} $$
(25)

Since \(\sum_{n=1}^{l}(|u_{n}|+|v_{n}|)=M\), we have

$$ M^{2}\leq l\sum_{n=1}^{l} \bigl(u_{n}^{2}+v_{n}^{2}+2 \vert u_{n} \vert \vert v_{n} \vert \bigr)\leq 2l\sum _{n=1}^{l}\bigl(u_{n}^{2}+v_{n}^{2} \bigr). $$

Namely,

$$ \sum_{n=1}^{l} \bigl(u_{n}^{2}+v_{n}^{2}\bigr)\geq \frac{M^{2}}{2l}. $$
(26)
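The chain of inequalities leading to (26) rests on the Cauchy–Schwarz inequality applied to the vector with entries \(a_{n}=|u_{n}|+|v_{n}|\):

$$M^{2}= \Biggl(\sum_{n=1}^{l}1\cdot a_{n} \Biggr)^{2}\leq \Biggl(\sum_{n=1}^{l}1^{2} \Biggr) \Biggl(\sum_{n=1}^{l}a_{n}^{2} \Biggr)=l\sum_{n=1}^{l}\bigl( \vert u_{n} \vert + \vert v_{n} \vert \bigr)^{2}, $$

and the second inequality above follows from \(2|u_{n}||v_{n}|\leq u_{n}^{2}+v_{n}^{2}\).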

Substituting (26) into (25) gives

$$ \begin{aligned}[b] &\sum_{n=1}^{l}c^{\ast}_{n}[u_{n}-c_{n}u_{n}, v_{n}-c_{n}v_{n}] \bigl[Q^{\ast}N^{\ast}(u, v)_{n}\bigr]^{T} \\ &\quad\leq-\min_{1\leq n\leq l}\bigl\{ c_{n}^{\ast}\bigr\} \frac{M^{2}}{2}+Nl\sum_{n=1}^{l} c_{n}^{\ast}< 0. \end{aligned} $$
(27)

Thus when \((u,v)\in\partial\Omega\cap\operatorname{Ker}L^{\ast}\), \(Q^{\ast}N^{\ast}(u, v)\neq0\). Thus, condition (b) in Lemma 2.1 is satisfied.

Thirdly, we show that when \((u, v)^{T}\in\partial\Omega\cap\operatorname{Ker}L^{\ast}\), \(\operatorname{deg}\{J^{\ast}Q^{\ast}N^{\ast}, \Omega\cap\operatorname {Ker}L^{\ast}, 0\}\neq0\). We construct a mapping \(H(u, v, \mu^{\ast}) \) by setting

$$ \begin{aligned} H\bigl(u, v, \mu^{\ast}\bigr) ={}&{-} \mu^{\ast}(\underline{d_{1}}u_{1}, \underline{d_{2}}u_{2},\ldots, \underline{d_{l}}u_{l}, \underline{d_{1}}v_{1}, \underline{d_{2}}v_{2}, \ldots, \underline{d_{l}}v_{l}) \\ &+\bigl(1-\mu^{\ast}\bigr) \bigl(f_{1}(\xi_{1}), f_{2}(\xi_{2}),\ldots, f_{l}(\xi_{l}), f_{1}^{\ast}(\xi_{1}), f_{2}^{\ast}( \xi_{2}),\ldots, f_{l}^{\ast}(\xi_{l}) \bigr), \end{aligned} $$

where \((u, v, \mu^{\ast})\in(\partial\Omega\cap \operatorname{Ker}L^{\ast})\times[0, 1]\). Suppose that \(H(u, v,\mu^{\ast})=0\) for some \((u, v, \mu^{\ast})\in(\partial\Omega\cap\operatorname{Ker}L^{\ast})\times[0, 1]\), where \(\operatorname{Ker}L^{\ast}\) is identified with \(R^{2l}\). Then, for \(n\in\textbf{L}\),

$$ 0=-\mu^{\ast}\underline{d_{n}}u_{n}+ \bigl(1-\mu^{\ast}\bigr)f_{n}(\xi_{n}) $$
(28)

and

$$ 0=-\mu^{\ast}\underline{d_{n}}v_{n}+ \bigl(1-\mu^{\ast}\bigr)f^{\ast}_{n}(\xi_{n}). $$
(29)

From (28) and (29), we have

$$\begin{aligned} 0 ={}&(u_{n}-c_{n}u_{n}) \bigl[-\mu^{\ast}\underline{d_{n}}u_{n}+\bigl(1- \mu^{\ast}\bigr)f_{n}(\xi_{n})\bigr] \\ &+(v_{n}-c_{n}v_{n})\bigl[-\mu^{\ast}\underline{d_{n}}v_{n}+\bigl(1-\mu^{\ast}\bigr)f_{n}^{\ast}(\xi_{n})\bigr] \\ \leq{}&{-}(1-c_{n})\mu^{\ast}\underline{d_{n}}u_{n}^{2}-(1-c_{n}) \bigl(1-\mu^{\ast}\bigr)\underline{d_{n}}u^{2}_{n}+u_{n} \Biggl\{ \sum_{j=1}^{l}b_{nj}^{R}( \xi_{n})F_{j}^{R}(u_{j}, v_{j}) \\ & -\sum_{j=1}^{l}b_{nj}^{I}( \xi_{n}) F_{j}^{I}(u_{j}, v_{j})+\sum_{j=1}^{l}e_{nj}^{R}( \xi_{n})G_{j}^{R}(u_{j}, v_{j})-\sum_{j=1}^{l} e_{nj}^{I}(\xi_{n})G_{j}^{I}(u_{j}, v_{j})+P^{R}_{n}(\xi_{n})] \Biggr\} \\ &-(1-c_{n}) \mu^{\ast}\underline{d_{n}}v_{n}^{2} -(1-c_{n}) \bigl(1-\mu^{\ast}\bigr)\overline{d_{n}}v^{2}_{n}+v_{n} \Biggl\{ \sum_{j=1}^{l}b_{nj}^{R}( \xi_{n})F_{j}^{I}(u_{j}, v_{j}) \\ &+\sum_{j=1}^{l}b_{nj}^{I}( \xi_{n})F_{j}^{R}(u_{j}, v_{j}) +\sum_{j=1}^{l}e_{nj}^{R}( \xi_{n}) G_{j}^{I}(u_{j}, v_{j})+\sum_{j=1}^{l} e_{nj}^{I}(\xi_{n})G_{j}^{R}(u_{j}, v_{j})+P^{I}_{n}(\xi_{n}) \Biggr\} \\ \leq{}& -(1-c_{n})\underline{d_{n}}u_{n}^{2}+(1-c_{n}) \bigl(1-\mu^{\ast}\bigr) \vert u_{n} \vert \Biggl\{ \sum _{j=1}^{l}\overline{b_{nj}^{R}} \bigl\vert F_{j}^{R}(u_{j}, v_{j}) \bigr\vert \\ &+\sum_{j=1}^{l}\overline{b_{nj}^{I}} \bigl\vert F_{j}^{I}(u_{j}, v_{j}) \bigr\vert +\sum_{j=1}^{l} \overline{e_{nj}^{R}} \bigl\vert G_{j}^{R}(u_{j}, v_{j}) \bigr\vert +\sum_{j=1}^{l}\overline{e_{nj}^{I}} \bigl\vert G_{j}^{I}(u_{j}, v_{j}) \bigr\vert +\overline{P^{R}_{n}} \Biggr\} \\ &-(1-c_{n}) \underline{d_{n}}v_{n}^{2}+(1-c_{n}) \bigl(1-\mu^{\ast}\bigr) \vert v_{n} \vert \Biggl\{ \sum_{j=1}^{l} \overline{b_{nj}^{R}} \bigl\vert G_{j}^{I}(u_{j}, v_{j}) \bigr\vert \\ &+\sum_{j=1}^{l} \overline{b_{nj}^{I}} \bigl\vert F_{j}^{R}(u_{j}, v_{j}) \bigr\vert +\sum_{j=1}^{l}\overline{e_{nj}^{R}} \bigl\vert G_{j}^{I}(u_{j}, v_{j}) \bigr\vert +\sum_{j=1}^{l} \overline{e_{nj}^{I}} \bigl\vert G_{j}^{R}(u_{j}, v_{j}) \bigr\vert +\overline{P^{I}_{n}} \Biggr\} . \end{aligned}$$
(30)

By the same proofs as those in (7)–(17), from (30), we obtain

$$ \begin{aligned}[b] & {-}(1-c_{n}) \underline{d_{n}}u_{n}^{2}+(1-c_{n}) \bigl(1-\mu^{\ast}\bigr) \vert u_{n} \vert \Biggl\{ \sum _{j=1}^{l}\overline{b_{nj}^{R}} \bigl\vert F_{j}^{R}(u_{j}, v_{j}) \bigr\vert \\ &\qquad{} +\sum_{j=1}^{l} \overline{b_{nj}^{I}} \bigl\vert F_{j}^{I}(u_{j}, v_{j}) \bigr\vert +\sum_{j=1}^{l} \overline{e_{nj}^{R}} \bigl\vert G_{j}^{R}(u_{j}, v_{j}) \bigr\vert +\sum_{j=1}^{l} \overline{e_{nj}^{I}} \bigl\vert G_{j}^{I}(u_{j}, v_{j}) \bigr\vert +\overline{P^{R}_{n}} \Biggr\} \\ &\qquad{}-(1-c_{n})\underline{d_{n}}v_{n}^{2}+(1-c_{n}) \bigl(1-\mu^{\ast}\bigr) \vert v_{n} \vert \{\sum_{j=1}^{l} \overline {b_{nj}^{R}} \bigl\vert F_{j}^{I}(u_{j}, v_{j}) \bigr\vert \\ &\qquad{}+ \sum_{j=1}^{l} \overline{b_{nj}^{I}}|F_{j}^{R}(u_{j}, v_{j}) +\sum_{j=1}^{l} \overline{e_{nj}^{R}} \bigl\vert G_{j}^{I}(u_{j}, v_{j}) \bigr\vert +\sum_{j=1}^{l} \overline{e_{nj}^{I}} \bigl\vert G_{j}^{R}(u_{j}, v_{j}) \bigr\vert \\ &\quad< 0. \end{aligned} $$
(31)

Inequality (31) contradicts (30); hence \(H(u, v, \mu^{\ast})\neq0 \) when \((u, v,\mu^{\ast})\in(\partial\Omega\cap R^{2l}\cap\operatorname{Ker} L^{\ast})\times[0, 1]\). Hence, \(H(u, v, \mu^{\ast})\) is a valid homotopy. Thus, we have

$$ \begin{gathered} \operatorname{deg} \bigl(J^{\ast}Q^{\ast}N^{\ast}(u, v), \Omega\cap \operatorname{Ker}L^{\ast}, (0, 0,\ldots, 0) \bigr) \\ \quad=\operatorname{deg} \bigl(H(u, v, 0), \Omega\cap \operatorname{Ker}L^{\ast}, (0, 0,\ldots, 0) \bigr) \\ \quad=\operatorname{deg} \bigl(H(u, v, 1), \Omega\cap \operatorname{Ker}L^{\ast}, (0, 0,\ldots, 0) \bigr) \\ \quad\neq0. \end{gathered} $$

Thus condition (c) in Lemma 2.1 is satisfied. Hence the proof of Theorem 3.1 is complete. □

4 Exponential stability

Theorem 4.1

Let the conditions of Theorem 3.1 be satisfied. Then system (2) has a unique ω-periodic solution, which is globally exponentially stable.

Proof

According to Theorem 3.1, system (2) has an ω-periodic solution. Let

$$\bigl(u^{\ast}(t), v^{\ast}(t)\bigr)^{T}= \bigl(u_{1}^{\ast}(t), u_{2}^{\ast}(t),\ldots, u_{l}^{\ast}(t), v_{1}^{\ast}(t), v_{2}^{\ast}(t),\ldots, v_{l}^{\ast}(t) \bigr)^{T} $$

be an ω-periodic solution. From \((h_{3})\) and \((h_{4})\), it follows that there exist two positive numbers \(\delta^{\ast}\) and α such that

\((h_{7})\) :

\((1+|c_{n}|)(U_{nj}+\frac{E_{nj}e^{\alpha\sigma^{\ast}}}{1-\sigma})<2\frac {\underline{d_{j}}}{l}-\frac{|c_{j}|}{l}\overline{d_{j}} -A_{jn}-|c_{j}|e^{\alpha\tau}B_{jn}-\delta^{\ast}-\frac{\alpha}{l}[e^{\alpha \tau}(1+c_{n}^{2})+1+|c_{n}|]\).

\((h_{8})\) :

\((1+|c_{n}|)(V_{nj}+\frac{F_{nj}e^{\alpha\sigma^{\ast}}}{1-\sigma})<\frac {2\underline{d_{j}}}{l}-\frac{|c_{j}|}{l}\overline{d_{j}} -A^{\ast}_{jn}-|c_{j}|e^{\alpha \tau}B^{\ast}_{jn}-\frac{\alpha}{l}[e^{\alpha\tau }(1+c_{n}^{2})+1+|c_{n}|]-\delta^{\ast}\).

Let \((u(t), v(t))^{T}=(u_{1}(t), u_{2}(t),\ldots, u_{l}(t), v_{1}(t), v_{2}(t),\ldots, v_{l}(t))^{T}\) be an arbitrary solution of system (2). We define the following Lyapunov functional: \(V_{n}(t)=V_{1n}(t)+V_{2n}(t)\), \(n\in\textbf{L}\),

$$\begin{gathered} V_{1n}(t)=e^{\alpha t} \bigl(K_{n}X_{n}(t) \bigr)^{2}+e^{\alpha t} \bigl(K_{n}Y_{n}(t) \bigr)^{2}, \\ \begin{aligned} V_{2n}(t) ={}& \vert c_{n} \vert \int_{t-\tau}^{t}e^{\alpha(s+ \tau)}\sum _{j=1}^{l}B_{nj}X^{2}_{n}(s) \,ds \\ &+\frac{(1+ \vert c_{n} \vert )}{1-\sigma}\sum_{j=1}^{l}E_{nj} \int_{t-\tau _{nj}(t)}^{t}e^{\alpha(s+\sigma^{\ast})}X^{2}_{j}(s) \,ds \\ & +\frac{(1+ \vert c_{n} \vert )}{1-\sigma}\sum_{j=1}^{l}F_{nj} \int_{t-\tau _{nj}(t)}^{t}e^{\alpha(s+\sigma^{\ast})}Y^{2}_{j}(s) \,ds \\ &+ \vert c_{n} \vert \int_{t-\tau}^{t} e^{\alpha(s+\tau)}\sum _{j=1}^{l}B^{\ast}_{nj}Y^{2}_{n}(s) \,ds \\ &+\alpha \int_{t-\tau}^{t}\bigl(1+c_{n}^{2} \bigr)e^{\alpha(s+\tau)}X_{n}^{2}(s)\,ds\\ &+ \alpha \int_{t-\tau}^{t}\bigl(1+c_{n}^{2} \bigr)e^{\alpha(s+\tau)}Y_{n}^{2}(s)\,ds, \end{aligned}\end{gathered} $$

where \(X_{n}(t)=u_{n}(t)-u^{\ast}_{n}(t)\), \(Y_{n}(t)=v_{n}(t)-v_{n}^{\ast}(t)\). Then, along the solutions of system (2), we obtain

$$\begin{aligned} \frac{dV_{1n}(t)}{dt}={}& e^{\alpha t} \Biggl\{ 2\bigl[X_{n}(t)-c_{n}X_{n}(t- \tau)\bigr] \Biggl(-d_{n}(t)X_{n}(t) \\ &+\sum_{j=1}^{l}b_{nj}^{R}(t) \bigl[F_{j}^{R}\bigl(u_{j}(t), v_{j}(t)\bigr)-F_{j}^{R}\bigl(u_{j}^{\ast}(t), v_{j}^{\ast}(t)\bigr) \bigr] \\ &-\sum_{j=1}^{l}b_{nj}^{I}(t) \bigl[F_{j}^{I}\bigl(u_{j}(t), v_{j}(t)\bigr)-F_{j}^{I}\bigl(u_{j}^{\ast}(t), v_{j}^{\ast}(t)\bigr) \bigr] \\ &+\sum_{j=1}^{l}e_{nj}^{R}(t) \bigl[G_{j}^{R}\bigl(u_{j}^{t}, v_{j}^{t}\bigr)-G_{j}^{R} \bigl(u_{j}^{\ast t}, v_{j}^{\ast t}\bigr) \bigr] \\ &-\sum_{j=1}^{l}e_{nj}^{I}(t) \bigl[G_{j}^{I}\bigl(u_{j}^{t}, v_{j}^{t}\bigr)-G_{j}^{I} \bigl(u_{j}^{\ast t}, v_{j}^{\ast t}\bigr) \bigr] \Biggr) \\ &+2\bigl[Y_{n}(t)-c_{n} Y_{n}(t-\tau)\bigr] \Biggl(-d_{n}(t)Y_{n}(t) \\ & +\sum_{j=1}^{l}b_{nj}^{R}(t)\bigl[F_{j}^{I}\bigl(u_{j}(t), v_{j}(t)\bigr)-F_{j}^{I}\bigl(u^{\ast}_{j}(t), v_{j}^{\ast}(t)\bigr) \bigr] \\ &+\sum_{j=1}^{l}b_{nj}^{I}(t) \bigl[F_{j}^{R}\bigl(u_{j}(t), v_{j}(t)\bigr)-F_{j}^{R}\bigl(u^{\ast}_{j}(t), v_{j}^{\ast}(t)\bigr) \bigr] \\ &+\sum_{j=1}^{l}e_{nj}^{R}(t) \bigl[G_{j}^{I}\bigl(u_{j}^{t}, v_{j}^{t}\bigr)-G_{j}^{I} \bigl(u_{j}^{\ast t}, v_{j}^{\ast t}\bigr) \bigr] \\ &+\sum_{j=1}^{l}e_{nj}^{I}(t) \bigl[G_{j}^{R}\bigl(u_{j}^{t}, v_{j}^{t}\bigr)-G_{j}^{R} \bigl(u_{j}^{\ast t}, v_{j}^{\ast t}\bigr) \bigr] \Biggr) \Biggr\} \\ &+ \alpha e^{\alpha t} \bigl\{ \bigl[X_{n}(t)-c_{n}X_{n}(t-\tau) \bigr]^{2}+ \bigl[Y_{n}(t)-c_{n}Y_{n}(t-\tau)\bigr]^{2} \bigr\} \end{aligned}$$
(32)

and

$$ \begin{aligned}[b] \frac{dV_{2n}(t)}{dt} ={}&e^{\alpha t} \Biggl\{ \frac{1+ \vert c_{n} \vert }{1-\sigma}\sum_{j=1}^{l} \bigl(E_{nj}e^{\alpha \sigma^{\ast}}X_{j}^{2}(t)-E_{nj}X_{j}^{2} \bigl(t-\tau_{nj}(t)\bigr) \bigl(1-\tau'_{nj}(t) \bigr) \\ & +F_{nj}e^{\alpha\sigma^{\ast}}Y_{j}^{2}(t)-F_{nj} Y_{j}^{2}\bigl(t-\tau_{nj}(t)\bigr) \bigl(1- \tau'_{nj}(t)\bigr) \bigr) \\ & + \vert c_{n} \vert \sum_{j=1}^{l}B_{nj}e^{\alpha\tau}X_{n}^{2}(t)- \vert c_{n} \vert \sum_{j=1}^{l}B_{nj}X_{n}^{2}(t- \tau) \\ & + \vert c_{n} \vert \sum_{j=1}^{l}B^{\ast}_{nj}e^{\alpha\tau}Y_{n}^{2}(t)- \vert c_{n} \vert \sum_{j=1}^{l}B^{\ast}_{nj}Y_{n}^{2}(t- \tau) \\ & +\alpha\bigl(1+c_{n}^{2}\bigr)\bigl[e^{\alpha\tau}X_{n}^{2}(t)-X^{2}_{n}(t- \tau)\bigr] \\ &+\alpha\bigl(1+ c_{n}^{2}\bigr)\bigl[e^{\alpha\tau}Y_{n}^{2}(t)-Y^{2}_{n}(t- \tau)\bigr] \Biggr\} . \end{aligned} $$
(33)

From (32) and (33), by using arguments similar to (7)–(17), we have

$$\begin{aligned} \frac{dV_{n}(t)}{dt} \leq{}&e^{\alpha t} \Biggl\{ \Biggl[ -2\underline{d_{n}}+ \vert c_{n} \vert \overline{d_{n}}+\sum _{j=1}^{l}A_{nj}+ \vert c_{n} \vert e^{\alpha\tau} \sum _{j=1}^{l}B_{nj} \\ &+\alpha\bigl[e^{\alpha\tau} \bigl(1+c_{n}^{2}\bigr)+1+ \vert c_{n} \vert \bigr] \Biggr]X_{n}^{2}(t) \\ & +\bigl(1+ \vert c_{n} \vert \bigr)\sum _{j=1}^{l}\biggl(U_{nj}+\frac{E_{nj}e^{\alpha \sigma^{\ast}}}{1-\sigma} \biggr)X_{j}^{2}(t)+\bigl(1+ \vert c_{n} \vert \bigr)\sum_{j=1}^{n}\biggl(V_{nj}+\frac{F_{nj}e^{\alpha\sigma^{\ast}}}{1-\sigma}\biggr)Y_{j}^{2}(t) \\ &+ \Biggl[ -2 \underline{d_{n}}+ \vert c_{n} \vert \overline{d_{n}} +\sum_{j=1}^{l}A^{\ast}_{nj}+ \vert c_{n} \vert e^{\alpha\tau}\sum _{j=1}^{l} B^{\ast}_{nj} \\ &+\alpha \bigl[e^{\alpha\tau}\bigl(1+c_{n}^{2}\bigr)+1+ \vert c_{n} \vert \bigr] \Biggr]Y_{n}^{2}(t) \Biggr\} \\ ={}&e^{\alpha t}\sum_{j=1}^{l} \biggl\{ \biggl[ -2\frac{\underline{d_{n}}}{l}+\frac{ \vert c_{n} \vert \overline {d_{n}}}{l}+A_{nj}+ \vert c_{n} \vert e^{\alpha\tau}B_{nj} \\ & + \frac{\alpha}{l} \bigl[e^{\alpha\tau}\bigl(1+c_{n}^{2}\bigr)+1+ \vert c_{n} \vert \bigr]+\delta^{\ast}\biggr]X_{n}^{2}(t) \\ &+\bigl(1+ \vert c_{n} \vert \bigr) \biggl(U_{nj}+\frac {E_{nj}e^{\alpha \sigma^{\ast}}}{1-\sigma}\biggr)X_{j}^{2}(t)+\bigl(1+ \vert c_{n} \vert \bigr) \biggl(V_{nj}+ \frac{F_{nj}e^{\alpha\sigma^{\ast}}}{1-\sigma}\biggr)Y_{j}^{2}(t) \\ &+ \biggl[ -2 \frac{\underline{d_{n}}}{l}+\frac{ \vert c_{n} \vert }{l}\overline{d_{n}}+A^{\ast}_{nj}+ \vert c_{n} \vert e^{\alpha\tau} B^{\ast}_{nj} \\ & + \frac{\alpha}{l}\bigl[e^{\alpha\tau}\bigl(1+c_{n}^{2} \bigr)+1+ \vert c_{n} \vert \bigr] +\delta^{\ast}\biggr]Y_{n}^{2}(t)- \delta^{\ast}\bigl[X_{n}^{2}(t)+Y_{n}^{2}(t) \bigr] \biggr\} . \end{aligned}$$
(34)

By using \((h_{7})\) and \((h_{8})\), from (34), we obtain

$$ \begin{aligned}[b] \frac{dV_{n}(t)}{dt} \leq{}&e^{\alpha t}\sum_{j=1}^{l} \biggl\{ \biggl[ 2\frac{\underline{d_{j}}}{l}-\frac{ \vert c_{j} \vert }{l}\overline{d_{j}}-A_{jn}- \vert c_{j} \vert e^{\alpha\tau}B_{jn} \\ &- \frac{\alpha}{l} \bigl[e^{\alpha\tau}\bigl(1+c_{n}^{2} \bigr)+1+ \vert c_{n} \vert \bigr]-\delta^{\ast}\biggr]X_{j}^{2}(t) \\ &- \biggl[2\frac{\underline{d_{n}}}{l}-\frac{ \vert c_{n} \vert }{l}\overline {d_{n}}-A_{nj}- \vert c_{n} \vert e^{\alpha\tau}B_{nj} \\ & - \frac{\alpha}{l}\bigl[e^{\alpha\tau}\bigl(1+c_{n}^{2} \bigr)+1+ \vert c_{n} \vert \bigr]-\delta^{\ast}\biggr]X_{n}^{2}(t) \biggr\} \\ & +\sum _{j=1}^{l} \biggl\{ \biggl[ 2\frac{\underline{d_{j}}}{l}- \frac{ \vert c_{j} \vert }{l}\overline{d_{j}}-A^{\ast}_{jn}- \vert c_{j} \vert e^{\alpha\tau}B^{\ast}_{jn} \\ & -\frac{\alpha}{l} \bigl[e^{\alpha\tau}\bigl(1+c_{n}^{2}\bigr)+1+ \vert c_{n} \vert \bigr]-\delta^{\ast}\biggr]Y_{j}^{2}(t) \\ &- \biggl[ 2 \frac{\underline{d_{n}}}{l}-\frac{ \vert c_{n} \vert }{l}\overline{d_{n}}-A^{\ast}_{nj}- \vert c_{n} \vert e^{\alpha\tau}B^{\ast}_{nj} \\ & - \frac{\alpha}{l}\bigl[e^{\alpha\tau}\bigl(1+c_{n}^{2} \bigr)+1+ \vert c_{n} \vert \bigr] -\delta^{\ast}\biggr]Y_{n}^{2}(t) \biggr\} - \delta^{\ast}\bigl[X_{n}^{2}(t)+Y_{n}^{2}(t) \bigr] \}. \end{aligned} $$
(35)

Letting \(e_{nj}=1\) (\(n\neq j\)), \(e_{nj}=0\), \(n=j\), \(G_{nj}(X^{2}_{n}(t), X^{2}_{j}(t))= [2\frac{\underline{d_{j}}}{l}-\frac{|c_{j}|}{l}\overline {d_{j}}-A_{jn}-|c_{j}|e^{\alpha\tau} B_{jn}-\frac{\alpha}{l}[e^{\alpha\tau}(1+c_{n}^{2})+1+|c_{n}|]-\delta^{\ast}] X_{j}^{2}(t)- [ 2\frac{\underline{d_{n}}}{l}-\frac{|c_{n}|}{l}\overline {d_{n}}-A_{nj}-|c_{n}|e^{\alpha\tau}B_{nj} -\frac{\alpha}{l}[e^{\alpha\tau}(1+c_{n}^{2})+1+|c_{n}|]-\delta^{\ast}]X_{n}^{2}(t)\) and \(p_{n}(X_{n}^{2}(t))= [ 2\frac{\underline{d_{n}}}{l}-\frac{|c_{n}|}{l}\overline {d_{n}}-A_{nj}-|c_{n}|e^{\alpha\tau}B_{nj} -\frac{\alpha}{l}[e^{\alpha\tau}(1+c_{n}^{2})+1+|c_{n}|]-\delta^{\ast}]X_{n}^{2}(t)\); \(b^{\ast}_{nj}=1\) (\(n\neq j\)), \(b_{nj}^{\ast}=0\), \(n=j\), \(G^{\ast}_{nj}(Y^{2}_{n}(t), Y^{2}_{j}(t))= [2\frac{\underline{d_{j}}}{l}-\frac{|c_{j}|}{l}\overline {d_{j}}-A^{\ast}_{jn\delta}- |c_{j}|e^{\alpha\tau}B^{\ast}_{jn}- \frac{\alpha}{l}[e^{\alpha\tau }(1+c_{n}^{2})+1+|c_{n}|]-\delta^{\ast}] Y_{j}^{2}(t)- [ \frac{\underline{d_{n}}}{l}-\frac{|c_{n}|}{l}\overline{d_{n}}-A^{\ast}_{nj} -|c_{n}|e^{\alpha\tau}B^{\ast}_{jn}-\frac{\alpha}{l}[e^{\alpha\tau }(1+c_{n}^{2})+1+|c_{n}|]-\delta^{\ast}]Y_{n}^{2}(t)\) and \(p^{\ast}_{n}(Y_{n}^{2}(t))= [ \frac{\underline{d_{n}}}{l}-\frac{|c_{n}|}{l}\overline{d_{n}}-A^{\ast}_{nj}-|c_{n}|e^{\alpha\tau} B^{\ast}_{nj}-\frac{\alpha}{l}[e^{\alpha\tau}(1+c_{n}^{2})+1+|c_{n}|]-\delta ^{\ast}]Y_{n}^{2}(t)\), then we have from (35)

$$\begin{aligned}& \begin{aligned}[b] \frac{dV_{n}(t)}{dt} \leq{}&e^{\alpha t} \Biggl\{ \sum_{j=1}^{l}e_{nj}G_{nj} \bigl(X_{n}^{2}(t), X_{j}^{2}(t) \bigr) \\ &+ \sum_{j=1}^{l} b^{\ast}_{nj}G^{\ast}_{nj} \bigl(Y_{n}^{2}(t), Y_{j}^{2}(t) \bigr)-\sum_{j=1}^{l}\delta^{\ast}\bigl[X_{n}^{2}(t)+Y_{n}^{2}(t)\bigr] \Biggr\} , \end{aligned} \end{aligned}$$
(36)
where

$$\begin{aligned}& G_{nj} \bigl(X_{n}^{2}(t), X_{j}^{2}(t) \bigr)=p_{j}\bigl(X_{j}^{2}(t) \bigr)-p_{n}\bigl(X_{n}^{2}(t)\bigr), \end{aligned}$$
(37)

and

$$ G^{\ast}_{nj} \bigl(Y_{n}^{2}(t), Y_{j}^{2}(t) \bigr)=p^{\ast}_{j} \bigl(Y_{j}^{2}(t)\bigr)-p^{\ast}_{n} \bigl(Y_{n}^{2}(t)\bigr). $$
(38)
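
The particular form of (37) and (38) is what makes the graph-theoretic machinery applicable: along every directed cycle \(\mathcal{C}\) of the weighted digraph with arc weights \(e_{nj}\) (respectively \(b^{\ast}_{nj}\)), the terms of \(G_{nj}\) telescope. We record this standard observation here for convenience:

$$ \sum_{(j,n)\in E(\mathcal{C})}G_{nj} \bigl(X_{n}^{2}(t), X_{j}^{2}(t) \bigr) =\sum_{(j,n)\in E(\mathcal{C})} \bigl[p_{j} \bigl(X_{j}^{2}(t)\bigr)-p_{n} \bigl(X_{n}^{2}(t)\bigr) \bigr]=0, $$

since every vertex of the cycle appears exactly once with a plus sign and once with a minus sign; the same identity holds for \(G^{\ast}_{nj}\) and \(p^{\ast}_{n}\). This is exactly the cycle condition needed to combine the vertex functionals \(V_{n}\) into a global Lyapunov functional.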

From (36)–(38), by the same arguments as those used to obtain (7)–(17), we have

$$ \frac{dV(t)}{dt}\leq e^{\alpha t}\sum_{n=1}^{l} \sum_{j=1}^{l} \bigl(-c^{\ast}_{n} \delta^{\ast}\bigl[u_{n}^{2}(t)+v_{n}^{2}(t) \bigr] \bigr)< 0. $$

The rest of the proof is similar to the corresponding global exponential stability argument in [1] and is therefore omitted. □
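
For readers less familiar with the graph-theoretic step, in Li–Shuai-type combined Lyapunov function arguments the positive weights used to sum the vertex functionals (the constants \(c^{\ast}_{n}\) above) are usually taken to be the cofactors of the diagonal entries of the Laplacian of the weighted digraph. The short Python sketch below is our own illustration under this assumption (it is not code from the paper) and computes such cofactors for the all-ones off-diagonal weights \(e_{nj}\) used here.

import numpy as np

def laplacian_cofactors(weights):
    """Cofactors of the diagonal entries of the Laplacian of a weighted digraph.

    weights[n][j] plays the role of the arc weight e_{nj}; in Li-Shuai-type
    arguments these cofactors serve as the combination coefficients of the
    vertex Lyapunov functions (an assumption here, since the paper does not
    restate them explicitly).
    """
    W = np.asarray(weights, dtype=float)
    L = np.diag(W.sum(axis=1)) - W                # Laplacian: row sums on the diagonal
    cof = []
    for k in range(L.shape[0]):
        minor = np.delete(np.delete(L, k, axis=0), k, axis=1)
        cof.append(np.linalg.det(minor))          # cofactor of the k-th diagonal entry
    return np.array(cof)

# Weights used in the proof: e_{nj} = 1 for n != j, e_{nn} = 0, with l = 2 neurons.
print(laplacian_cofactors(1.0 - np.eye(2)))       # -> [1. 1.]

Under this assumption, for \(l=2\) both cofactors equal 1, so the combined functional is simply the sum of the two vertex functionals.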

When \(c_{n}=0\), systems (1) and (2) reduce, respectively, to the following complex-valued neural network with time delays and real-valued neural network with time delays:

$$ \begin{aligned}[b] z'_{n}(t)={}&{-}d_{n}(t)z_{n}(t)+ \sum_{j=1}^{l}b_{nj}(t)F_{j} \bigl(z_{j}(t)\bigr) \\ & +\sum_{j=1}^{l}e_{nj}(t)G_{j}\bigl(z_{j} \bigl(t-\tau_{nj}(t)\bigr)\bigr)+P_{n}(t) \end{aligned} $$
(39)

and

$$ \begin{gathered} \begin{aligned} \frac{d[u_{n}(t)]}{dt}={}&{-}d_{n}(t)u_{n}(t)+\sum _{j=1}^{l}b^{R}_{nj}F_{j}^{R} \bigl(u_{j}(t), v_{j}(t)\bigr)-\sum _{j=1}^{l}b^{I}_{nj}F_{j}^{I} \bigl(u_{j}(t), v_{j}(t)\bigr) \\ &+\sum_{j=1}^{l}e^{R}_{nj}(t)G_{j}^{R} \bigl(u_{j}^{t}, v_{j}^{t}\bigr)-\sum _{j=1}^{l}e_{nj}^{I}(t)G_{j}^{I} \bigl(u_{j}^{t}, v_{j}^{t} \bigr)+P_{n}^{R}(t), \end{aligned} \\ \begin{aligned}\frac{d[v_{n}(t)]}{dt}={}&{-}d_{n}(t)v_{n}(t)+ \sum_{j=1}^{l}b^{R}_{nj}F_{j}^{I} \bigl(u_{j}(t), v_{j}(t)\bigr)+\sum _{j=1}^{l}b^{I}_{nj}F_{j}^{R} \bigl(u_{j}(t), v_{j}(t)\bigr) \\ &+\sum_{j=1}^{l} e^{R}_{nj}(t)G_{j}^{I} \bigl(u_{j}^{t}, v_{j}^{t}\bigr)+\sum _{j=1}^{l} e_{nj}^{I}(t)G_{j}^{R} \bigl(u_{j}^{t}, v_{j}^{t} \bigr)+P_{n}^{I}(t). \end{aligned} \end{gathered} $$
(40)

From Theorem 4.1, we obtain the following corollary.

Corollary 1

Assume that \((h_{1})\) and \((h_{2})\) hold. Further assume that

\((h_{9})\) :
$$U_{nj}+\frac{E_{nj}}{1-\sigma}< \frac{2\underline{d_{j}}}{l}-A_{jn}. $$
\((h_{10})\) :
$$V_{nj}+\frac{F_{nj}}{1-\sigma}< \frac{2\underline{d_{j}}}{l}-A^{\ast}_{jn}. $$

Then system (39), equivalently system (40), has an ω-periodic solution which is globally exponentially stable.
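
In practice, once the bound matrices have been computed from their definitions earlier in the paper, \((h_{9})\) and \((h_{10})\) are plain elementwise inequalities. The following Python helper is a minimal sketch of such a check; the function name and argument layout are ours, and all inputs (the \(l\times l\) arrays for \(U_{nj}\), \(E_{nj}\), \(V_{nj}\), \(F_{nj}\), \(A_{nj}\), \(A^{\ast}_{nj}\), the vector of lower bounds \(\underline{d_{j}}\), and σ) are assumed to be supplied by the reader.

import numpy as np

def corollary1_holds(U, E, V, F, A, A_star, d_lower, sigma):
    """Elementwise check of (h9) and (h10) of Corollary 1 (hypothetical helper).

    Entry (n, j) of each left-hand side is compared with
    2*d_lower[j]/l - A[j, n]  (respectively  2*d_lower[j]/l - A_star[j, n]).
    """
    U, E, V, F, A, A_star = map(np.asarray, (U, E, V, F, A, A_star))
    d_lower = np.asarray(d_lower, dtype=float)
    l = d_lower.size
    rhs_h9 = 2.0 * d_lower[None, :] / l - A.T        # (n, j) entry: 2*d_j/l - A_{jn}
    rhs_h10 = 2.0 * d_lower[None, :] / l - A_star.T
    h9 = np.all(U + E / (1.0 - sigma) < rhs_h9)
    h10 = np.all(V + F / (1.0 - sigma) < rhs_h10)
    return bool(h9 and h10)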

Remark 2

In [2, 14–16], the existence and global exponential/asymptotic stability of periodic solutions for complex-valued neural networks were obtained by using coincidence degree theory, the LMI method, and the Lyapunov functional method. In this paper, by combining graph theory with coincidence degree theory, we establish new sufficient conditions guaranteeing the existence and global exponential stability of periodic solutions for complex-valued neural networks. Hence, a new method for studying periodic solutions of neural networks is introduced.

Remark 3

In [1], the activation functions were assumed to be bounded; in this paper, the activation functions are not required to be bounded. Hence, our results on the exponential stability of periodic solutions for complex-valued neural networks of neutral type are less conservative than those obtained in [1].

Remark 4

Up to now, no results on the exponential stability of periodic solutions for delayed neural networks have been obtained by means of graph theory. Hence, our approach of studying periodic solutions of neural networks via graph theory is novel in comparison with results obtained by using only coincidence degree theory or fixed point theorems.

Remark 5

So far, coincidence degree theory has been widely applied to investigate the existence of periodic solutions for neural networks. In recent years, the combination of graph theory with coincidence degree theory has been applied to study the existence of periodic solutions for coupled networks [31–35]. Recently, we have established some sufficient conditions for the existence and global stability of periodic solutions for neural networks by combining coincidence degree theory with the Lyapunov functional method [14, 15, 37]. However, no results on the existence and global stability of periodic solutions for neural networks have been obtained by combining coincidence degree theory with graph theory and the Lyapunov functional method. Hence, our results on the existence and global exponential stability of periodic solutions for neural networks obtained via graph theory are novel and complementary to the existing literature.

Remark 6

In the existing literature based on coincidence degree theory or fixed point theorems, the sufficient conditions for the existence of periodic solutions of dynamical systems and differential equations have generally differed from those for their global exponential/asymptotic stability. In this paper, by combining coincidence degree theory with graph theory and the Lyapunov functional method, and by constructing the same Lyapunov functionals in the proofs of existence and of global exponential stability, we obtain identical sufficient conditions for both the existence and the global exponential stability of periodic solutions. Hence, our study method is new, and our global exponential stability result for neural networks is concise and easy to verify.

5 Numerical examples

In this section, we give an example to illustrate our results.

Example 1

We consider the neutral-type system (2) with the following parameters: \(n=1, 2\), \(c_{n}=-0.1\), \(l=2\), \(d_{n}(t)=10+0.5\sin 5t\), \(b^{R}_{nj}(t)=b_{nj}^{I}(t)=e_{nj}^{R}(t)=e_{nj}^{I}(t)=P_{n}^{R}(t)=P^{I}_{n}(t)=0.01+0.09\sin 5t\), \(\tau_{nj}(t)=0.1(2+\sin5t)\), \(F_{j}^{R}(u_{j}(t), v_{j}(t))=F_{j}^{I}(u_{j}(t), v_{j}(t))=G_{j}^{R}(u_{j}(t), v_{j}(t))=0.01|u_{j}(t)|+0.01|v_{j}(t)|\), \(G_{j}^{I}(u_{j}(t), v_{j}(t))=0.01|u_{j}(t)|+0.01|v_{j}(t)|\). Then, in Theorem 4.1, \(c_{n}=-0.1\), \(n=1, 2\), \(l=2\), \(\overline{d_{j}}=10.5\), \(\underline{d_{j}}=9.5\), \(\overline{b^{R}_{jn}}=\overline{b^{I}_{jn}}=\overline{e^{R}_{nj}}=\overline {e^{I}_{nj}}=0.1\), \(l_{n}^{R}=l_{n}^{I}=k_{n}^{R}=k_{n}^{I}=q_{n}^{R}=q_{n}^{I}=p_{n}^{R}=p_{n}^{I}=0.01\), \(\sigma=0.1\), \(\sigma^{\ast}=0.1\).

Since the activation functions in [1] are required to be bounded, while the activation functions in Example 1 are unbounded, the global exponential stability of the periodic solution in Example 1 cannot be verified by the result in [1].

It is easy to verify that

$$\begin{gathered} \bigl(1+ \vert c_{n} \vert \bigr) \biggl(U_{nj}+ \frac{E_{nj}}{1-\sigma}\biggr)=1.1(0.004+0.0044)=0.00924, \\ \frac{2\underline{d_{j}}}{l}-\frac{ \vert c_{j} \vert }{l}\overline {d_{j}}-A_{jn}- \vert c_{j} \vert B_{jn}=9.5-0.525-0.008-0.0008=8.9662, \\ \bigl(1+ \vert c_{n} \vert \bigr) \biggl(V_{nj}+ \frac{F_{nj}}{1-\sigma}\biggr)=1.1(0.004+0.0044)=0.00924, \\ \frac{2\underline{d_{j}}}{l}-\frac{ \vert c_{j} \vert }{l}\overline{d_{j}}-A^{\ast}_{jn}- \vert c_{j} \vert B^{\ast}_{jn}=9.5-0.525-0.008-0.0008=8.9662.\end{gathered} $$

Hence

$$\begin{gathered} \bigl(1+ \vert c_{n} \vert \bigr) \biggl(U_{nj}+ \frac{E_{nj}}{1-\sigma}\biggr)< \frac{2\underline {d_{j}}}{l}-\frac{ \vert c_{j} \vert }{l}\overline{d_{j}}-A_{jn}- \vert c_{j} \vert B_{jn}, \\ \bigl(1+ \vert c_{n} \vert \bigr) \biggl(V_{nj}+ \frac{F_{nj}}{1-\sigma}\biggr)< \frac{2\underline {d_{j}}}{l}-\frac{ \vert c_{j} \vert }{l}\overline{d_{j}}-A^{\ast}_{jn}- \vert c_{j} \vert B^{\ast}_{jn}.\end{gathered} $$

That is, \((h_{3})\) and \((h_{4})\) hold, so all the conditions of Theorem 4.1 are satisfied. Therefore, system (2) in Example 1 has a unique \(\frac{2\pi}{5}\)-periodic solution which is globally exponentially stable.
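
As a quick numerical sanity check, the arithmetic above can be reproduced with a few lines of Python. This is a minimal sketch that simply plugs in the values quoted in Example 1; it does not recompute \(U_{nj}\), \(E_{nj}\), \(A_{jn}\), \(B_{jn}\) from their definitions earlier in the paper.

# Values exactly as quoted in Example 1 (not recomputed from their definitions).
c_abs, l = 0.1, 2
d_lower, d_upper = 9.5, 10.5
U_term = 0.004       # U_nj (= V_nj in this example)
E_term = 0.0044      # E_nj / (1 - sigma) (= F_nj / (1 - sigma))
A_term = 0.008       # A_jn (= A*_jn)
cB_term = 0.0008     # |c_j| * B_jn (= |c_j| * B*_jn)

lhs = (1 + c_abs) * (U_term + E_term)                            # 0.00924
rhs = 2 * d_lower / l - c_abs * d_upper / l - A_term - cB_term   # 8.9662
print(lhs, rhs, lhs < rhs)   # identical numbers verify both (h3) and (h4)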

The global exponential stability of periodic solutions of the neutral-type complex-valued neural networks in Example 1 is shown in Fig. 1.

Figure 1: Global exponential stability of periodic solutions in Example 1
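
A figure of this kind can be reproduced, at least qualitatively, with a simple fixed-step Euler scheme for the neutral-type system. The sketch below is our own illustration, not the authors' simulation code: it assumes that system (2) is the real–imaginary decomposition of (1) (i.e. system (40) with \(u_{n}(t)\), \(v_{n}(t)\) replaced on the left-hand side by \(u_{n}(t)-c_{n}u_{n}(t-\tau)\) and \(v_{n}(t)-c_{n}v_{n}(t-\tau)\)), and it assumes a neutral delay \(\tau=0.2\) and constant initial functions, since Example 1 does not list these explicitly.

import numpy as np
import matplotlib.pyplot as plt

# Example 1 data; tau = 0.2 and the initial functions are our own choices.
l, c, tau = 2, -0.1, 0.2
d = lambda t: 10.0 + 0.5 * np.sin(5 * t)                # d_n(t)
w = lambda t: 0.01 + 0.09 * np.sin(5 * t)               # all b^R, b^I, e^R, e^I, P^R, P^I
tau_d = lambda t: 0.1 * (2.0 + np.sin(5 * t))           # tau_nj(t)
act = lambda u, v: 0.01 * np.abs(u) + 0.01 * np.abs(v)  # all activation functions

h, T = 1e-3, 6.0
N = int(T / h)
hist = 400                                   # history buffer covering tau and tau_nj
ktau = int(round(tau / h))
u = np.zeros((hist + N, l)); v = np.zeros((hist + N, l))
u[:hist + 1] = [0.5, -0.3]; v[:hist + 1] = [-0.4, 0.2]  # constant initial functions

for k in range(hist, hist + N - 1):
    t = (k - hist) * h
    kd = k - int(round(tau_d(t) / h))        # grid index of t - tau_nj(t)
    FR = FI = act(u[k], v[k])                # F^R_j = F^I_j in this example
    GR = GI = act(u[kd], v[kd])              # G^R_j = G^I_j (delayed) in this example
    bw = w(t)
    du = -d(t) * u[k] + bw * (FR.sum() - FI.sum() + GR.sum() - GI.sum()) + bw
    dv = -d(t) * v[k] + bw * (FI.sum() + FR.sum() + GI.sum() + GR.sum()) + bw
    yu = u[k] - c * u[k - ktau] + h * du     # Euler step for the neutral variable K_n u_n
    yv = v[k] - c * v[k - ktau] + h * dv     # Euler step for the neutral variable K_n v_n
    u[k + 1] = yu + c * u[k + 1 - ktau]      # recover u_n(t+h), v_n(t+h)
    v[k + 1] = yv + c * v[k + 1 - ktau]

t_axis = np.arange(N) * h
plt.plot(t_axis, u[hist:hist + N]); plt.plot(t_axis, v[hist:hist + N], '--')
plt.xlabel('t'); plt.ylabel('u_n(t), v_n(t)'); plt.show()

Trajectories started from different initial functions should converge rapidly to the same \(\frac{2\pi}{5}\)-periodic orbit, consistent with the globally exponentially stable periodic solution guaranteed by Theorem 4.1.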

6 Conclusion

By combining graph theory with coincidence degree theory and the Lyapunov functional method, and by constructing the same Lyapunov functionals in the proofs of the existence and of the global exponential stability of periodic solutions, novel identical sufficient conditions for the existence and global exponential stability of periodic solutions of neutral-type complex-valued neural networks have been established. In our results, the boundedness assumption on the activation functions in [1] is removed, and the inequality conditions in [1] are replaced with new inequalities. Hence, our results are less conservative than those obtained in [1] and are easy to verify. In the near future, we will study nonlinear control of delayed systems [19, 20].

References

  1. Gao, S., Du, B.: Global exponential stability of periodic solutions for neutral-type complex-valued neural networks. Discrete Dyn. Nat. Soc. 2016, Article ID 1267954 (2016)

  2. Du, B.: Stability analysis of periodic solution for a complex-valued neural network with bounded and unbounded delays. Asian J. Control 20, 881–892 (2018). https://doi.org/10.1002/asjc.1608

  3. Song, Q.K., Yu, Q.Q., Zhao, Z.J., Liu, Y.R., Alsaadi, F.E.: Dynamics of complex-valued neural networks with variable coefficients and proportional delays. Neurocomputing 275, 2762–2768 (2018)

  4. Gong, W., Liang, J., Cao, J.: Matrix measure method for global exponential stability of complex-valued recurrent neural networks with time-varying delays. Neural Netw. 70, 81–89 (2015)

  5. Zhang, Z.Q., Yu, S.H.: Global asymptotic stability for a class of complex-valued Cohen–Grossberg neural networks with time delays. Neurocomputing 171, 1158–1166 (2016)

  6. Tu, Z., Cao, J., Alsaedi, A., Alsaadi, F.E., Hayat, T.: Global Lagrange stability of complex-valued neural networks of neutral type with time-varying delays. Complexity 21, 438–450 (2016)

  7. Song, Q.K., Yan, H., Zhao, Z., Liu, Y.: Global exponential stability of complex-valued neural networks with both time varying delays and impulsive effects. Neural Netw. 79, 108–116 (2016)

  8. Shi, Y., Cao, J., Chen, G.: Exponential stability of complex-valued memristor-based neural networks with time-varying delays. Appl. Math. Comput. 313, 222–234 (2017)

  9. Rakkiyappan, R., Velmurugan, G., Li, X., O'Regan, D.: Global dissipativity of memristor-based complex-valued neural networks with time-varying delays. Neural Comput. Appl. 27, 629–649 (2016)

  10. Li, X., Rakkiyappan, R., Sakthivel, N.: Non-fragile synchronization control for Markovian jumping complex dynamical networks with probabilistic time-varying coupling delay. Asian J. Control 17, 1678–1695 (2015)

  11. Zhang, D.W., Jiang, H.J., Wang, J.L.: Global stability of complex-valued recurrent neural networks with both mixed time delays and impulsive effect. Neurocomputing 282, 157–166 (2018)

  12. Huang, C., Cao, J., Xiao, M., Alsaedi, A., Hayat, T.: Bifurcations in a delayed fractional complex-valued neural network. Appl. Math. Comput. 292, 210–227 (2017)

  13. Liu, D., Zhu, S., Sun, K.: New results for exponential stability of complex-valued memristive neural networks with variable delays. Neurocomputing 275, 758–767 (2018)

  14. Zhang, Z.Q., Li, A.L., Yang, L.: Global asymptotic periodic synchronization for delayed complex-valued BAM neural networks via vector-valued inequality techniques. Neural Process. Lett. (2018). https://doi.org/10.1007/s11063-017-9722-3

  15. Zhang, Z.Q., Zheng, T.: Global asymptotic stability of periodic solutions for delayed complex-valued Cohen–Grossberg neural networks by combining coincidence degree theory with LMI method. Neurocomputing 289, 220–230 (2018)

  16. Liu, D., Zhu, S., Ye, E.: Global exponential periodicity and stability of memristor-based complex-valued delayed neural networks. Int. J. Syst. Sci. 49, 231–245 (2018)

  17. Zhang, Z.Q., Hao, D.L., Zhou, D.M.: Global asymptotic stability by complex-valued inequalities for complex-valued neural networks with delays on periodic time scales. Neurocomputing 219, 494–501 (2017)

  18. Zhang, X.Y., Lv, X.X., Li, X.D.: Sampled-data based lag synchronization of chaotic delayed neural networks with impulsive control. Nonlinear Dyn. 90, 2199–2207 (2017)

  19. Li, X., Cao, J.: An impulsive delay inequality involving unbounded time-varying delay and applications. IEEE Trans. Autom. Control 62, 3618–3625 (2017)

  20. Li, X., Song, S.: Stabilization of delay systems: delay-dependent impulsive control. IEEE Trans. Autom. Control 62, 406–411 (2017)

  21. Bao, H., Cao, J.: Existence and uniqueness of solutions to neutral stochastic functional differential equations with infinite delay. Appl. Math. Comput. 215, 1732–1743 (2009)

  22. Tu, Z., Wang, L.: Global Lagrange stability for neutral type neural networks with mixed time-varying delay. Int. J. Mach. Learn. Cybern. 9, 599–609 (2018)

  23. Manivannan, R., Samidurai, R., Cao, J., Alsaed, A., Alsaadi, F.E.: Delay-dependent stability criteria for neutral-type neural networks with interval time-varying delay signals under the effect of leakage delay. Adv. Differ. Equ. 2018, Article ID 53 (2018)

  24. Wang, Z., Lu, S., Cao, J.: Existence of periodic solutions for a p-Laplacian neutral functional differential equation with multiple variable parameters. Nonlinear Anal., Theory Methods Appl. 72, 734–747 (2010)

  25. Zhang, Z.Q., Liu, K.Y., Yang, Y.: New LMI-based condition on global asymptotic stability concerning BAM neural networks of neutral type. Neurocomputing 81, 24–32 (2012)

  26. Sun, Y.Q., Zhong, Y.H., Zhou, W.N., Zhou, J., Zhang, X.: Adaptive exponential stabilization of neutral-type neural network with Levy noise and Markovian switching parameters. Neurocomputing 284, 160–170 (2018)

  27. Zhang, H., Ye, R.Y., Cao, J.D., Alsaedi, A.: Delay-independent stability of Riemann–Liouville fractional neutral-type delayed neural networks. Neural Process. Lett. 47, 427–442 (2018)

  28. Lakshmanan, S., Lim, C.P., Prakash, M., Nahavandi, S., Balasubramaniam, P.: Neutral-type of delayed inertial neural networks and their stability analysis using the LMI approach. Neurocomputing 230, 243–250 (2017)

  29. Xu, D.S., Tan, M.C.: Delay-independent stability criteria for complex-valued BAM neutral-type neural networks with delays. Nonlinear Dyn. 89, 819–832 (2017)

  30. Li, W., Pang, L., Su, H.A., Wang, K.: Global stability for discrete Cohen–Grossberg neural networks with finite and infinite delays. Appl. Math. Lett. 25, 2246–2251 (2012)

  31. Zhang, X.H., Li, W.X., Wang, K.: The existence and global exponential stability of periodic solutions for a neutral coupled system on networks with delays. Appl. Math. Comput. 264, 208–217 (2015)

  32. Zhang, X.H., Li, W.X., Wang, K.: The existence of periodic solutions for coupled systems on networks with time delays. Neurocomputing 152, 287–293 (2015)

  33. Gao, S., Li, S.S., Wu, B.Y.: Periodic solutions of discrete time periodic time-varying coupled systems on networks. Chaos Solitons Fractals 103, 246–255 (2017)

  34. Zhang, X.H., Li, W.X., Wang, K.: Periodic solutions of coupled systems on networks with both time-delay and linear coupling. IMA J. Appl. Math. 80, 1871–1889 (2015)

  35. Li, X., Bohner, M., Wang, C.: Impulsive differential equations: periodic solutions and applications. Automatica 52, 173–178 (2015)

  36. Gao, S., Wang, Q., Wu, B.Y.: Existence and global exponential stability of periodic solutions for coupled control systems on networks with feedback and time delays. Commun. Nonlinear Sci. Numer. Simul. 63, 72–87 (2018). https://doi.org/10.1016/j.cnsns.2018.03.012

  37. Liao, H.Y., Zhang, Z.Q., Ren, L., Peng, W.L.: Global asymptotic stability of periodic solutions for inertial delayed BAM neural networks via novel computing method of degree and inequality techniques. Chaos Solitons Fractals 104, 785–797 (2017)

  38. Gaines, R., Mawhin, J.: Coincidence Degree and Nonlinear Differential Equations. Springer, Berlin (1977)

  39. West, D.: Introduction to Graph Theory. Prentice Hall, Upper Saddle River (1996)

  40. Du, B., Liu, Y.R., Abbas, I.A.: Existence and asymptotic behavior results of periodic solution for discrete-time neutral-type neural networks. J. Franklin Inst. 353, 448–461 (2016)

Funding

This work was jointly supported by the Innovation Platform Open Fund of Hunan Province of China under Grant No. 201485 and the Jiangsu Provincial Key Laboratory of Networked Collective Intelligence under Grant No. BM2017002.

Author information

Contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jinde Cao.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Zhang, Z., Cao, J. Periodic solutions for complex-valued neural networks of neutral type by combining graph theory with coincidence degree theory. Adv Differ Equ 2018, 261 (2018). https://doi.org/10.1186/s13662-018-1716-6
