In this section, we briefly recall some basic definitions and lemmas on time scales. For more details, we refer to [21–23].
Let \(\mathbb{T}\) be a nonempty closed subset (time scale) of \(\mathbb{R}\). The forward and backward jump operators \(\sigma, \rho:\mathbb{T}\rightarrow\mathbb{T}\) and the graininess \(\mu:\mathbb{T}\rightarrow\mathbb{R}^{+}\) are defined, respectively, by
$$\sigma(t)=\inf\{s\in\mathbb{T}:s>t\},\qquad \rho(t)=\sup\{s\in\mathbb{T}:s< t\} \quad\text{and}\quad \mu(t)=\sigma(t)-t. $$
A point \(t\in\mathbb{T}\) is called left-dense if \(t>\inf\mathbb{T}\) and \(\rho(t)=t\), left-scattered if \(\rho(t)< t\), right-dense if \(t<\sup\mathbb{T}\) and \(\sigma(t)=t\), and right-scattered if \(\sigma(t)>t\). If \(\mathbb{T}\) has a left-scattered maximum m, then \(\mathbb{T}^{k}=\mathbb{T}\setminus\{m\}\); otherwise, \(\mathbb{T}^{k}=\mathbb{T}\). If \(\mathbb{T}\) has a right-scattered minimum m, then \(\mathbb{T}_{k}=\mathbb{T}\setminus\{m\}\); otherwise, \(\mathbb{T}_{k}=\mathbb{T}\).
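For example, if \(\mathbb{T}=\mathbb{R}\), then \(\sigma(t)=\rho(t)=t\) and \(\mu(t)\equiv0\), so every point is both right-dense and left-dense, whereas if \(\mathbb{T}=h\mathbb{Z}\) with \(h>0\), then
$$\sigma(t)=t+h,\qquad \rho(t)=t-h,\qquad \mu(t)\equiv h, $$
so every point is both right-scattered and left-scattered.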
Let \(\omega>0\). Throughout this paper, we assume that the time scale \(\mathbb{T}\) is ω-periodic, that is, \(t\in\mathbb{T}\) implies \(t+\omega\in\mathbb{T}\) and \(\mu(t+\omega)=\mu(t)\). In particular, the time scale \(\mathbb{T}\) under consideration is unbounded above and below.
Definition 2.1
A function \(f:\mathbb{T}\rightarrow\mathbb{R}\) is called regulated if its right-sided limits exist (finite) at all right-dense points in \(\mathbb{T}\) and its left-sided limits exist (finite) at all left-dense points in \(\mathbb{T}\).
Definition 2.2
A function \(f:\mathbb{T}\rightarrow\mathbb{R}\) is called rd-continuous if it is continuous at right-dense points in \(\mathbb{T}\) and its left-sided limits exist (finite) at left-dense points in \(\mathbb{T}\). The set of rd-continuous functions \(f:\mathbb{T}\rightarrow\mathbb{R}\) will be denoted by \(C_{\mathrm {rd}}=C_{\mathrm {rd}}(\mathbb{T})=C_{\mathrm {rd}}(\mathbb{T},\mathbb{R})\).
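For example, if \(\mathbb{T}=\mathbb{R}\), then every point is right-dense and left-dense, so rd-continuity coincides with ordinary continuity; if \(\mathbb{T}=\mathbb{Z}\), then every point is isolated, and hence every function \(f:\mathbb{Z}\rightarrow\mathbb{R}\) is rd-continuous.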
Definition 2.3
Let \(f:\mathbb{T}\rightarrow\mathbb{R}\) and \(t\in\mathbb{T}^{k}\). Then we define \(f^{\Delta}(t)\) to be the number (if it exists) such that, for all \(\varepsilon>0\), there exists a neighborhood U of t (i.e., \(U=(t-\delta,t+\delta)\cap\mathbb{T}\) for some \(\delta>0\)) such that
$$\bigl\vert \bigl[f\bigl(\sigma(t)\bigr)-f(s)\bigr]-f^{\Delta}(t)\bigl[ \sigma(t)-s\bigr] \bigr\vert < \varepsilon \bigl\vert \sigma(t)-s \bigr\vert $$
for all \(s\in U\). We call \(f^{\Delta}(t)\) the delta (or Hilger) derivative of f at t. The set of differentiable functions \(f:\mathbb{T}\rightarrow\mathbb{R}\) with rd-continuous derivatives is denoted by \(C_{\mathrm {rd}}^{1}=C_{\mathrm {rd}}^{1}(\mathbb{T})=C_{\mathrm {rd}}^{1}(\mathbb {T},\mathbb{R})\).
If f is continuous, then f is rd-continuous. If f is rd-continuous, then f is regulated. If f is delta differentiable at t, then f is continuous at t.
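In the two standard special cases the delta derivative takes a familiar form: if \(\mathbb{T}=\mathbb{R}\), then \(f^{\Delta}(t)=f'(t)\), whereas if \(\mathbb{T}=\mathbb{Z}\), then
$$f^{\Delta}(t)=f(t+1)-f(t), $$
the usual forward difference. More generally, if t is right-scattered and f is continuous at t, then \(f^{\Delta}(t)=\frac{f(\sigma(t))-f(t)}{\mu(t)}\).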
Lemma 2.1
Let \(f\) be regulated. Then there exists a function \(F\) which is delta differentiable with region of differentiation \(D\) such that
$$F^{\Delta}(t)=f(t) \quad\textit{for all } t\in D. $$
Definition 2.4
Let \(f:\mathbb{T}\rightarrow\mathbb{R}\) be a regulated function. Any function F as in Lemma 2.1 is called a Δ-antiderivative of f. We define the indefinite integral of a regulated function f by
$$\int f(t)\Delta t=F(t)+C, $$
where C is an arbitrary constant, and F is a Δ-antiderivative of f. We define the Cauchy integral by
$$\int_{a}^{b} f(s)\Delta s=F(b)-F(a) \quad\text{for all } a,b\in\mathbb{T}. $$
A function \(F:\mathbb{T}\rightarrow\mathbb{R}\) is called an antiderivative of \(f:\mathbb{T}\rightarrow\mathbb{R}\) if
$$F^{\Delta}(t)=f(t)\quad \text{for all } t\in\mathbb{T}^{k}. $$
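For example, if \(\mathbb{T}=\mathbb{R}\), then \(\int_{a}^{b}f(s)\Delta s=\int_{a}^{b}f(s)\,ds\) is the usual Riemann integral, while if \(\mathbb{T}=\mathbb{Z}\) and \(a< b\), then
$$\int_{a}^{b}f(s)\Delta s=\sum_{s=a}^{b-1}f(s). $$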
Lemma 2.2
If \(a,b\in\mathbb{T}\), \(\alpha,\beta\in\mathbb{R}\), and \(f,g\in C(\mathbb{T},\mathbb{R})\), then we have:
(i) \(\int_{a}^{b}[\alpha f(t)+\beta g(t)]\Delta t=\alpha\int_{a}^{b} f(t)\Delta t+\beta\int_{a}^{b} g(t)\Delta t\);
(ii) if \(f(t)\geq0\) for all \(a\leq t< b\), then \(\int_{a}^{b} f(t)\Delta t\geq0\);
(iii) if \(|f(t)|\leq g(t)\) on \([a,b):=\{t\in \mathbb{T}:a\leq t< b\}\), then \(|\int_{a}^{b} f(t)\Delta t|\leq\int_{a}^{b}g(t)\Delta t\).
Definition 2.5
([33])
A time scale \(\mathbb{T}\) is called periodic if there exists \(p>0\) such that if \(t\in\mathbb{T}\), then \(t\pm p\in\mathbb{T}\). For \(\mathbb{T}\neq\mathbb{R}\), the smallest positive p is called the period of the time scale \(\mathbb{T}\).
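For example, \(\mathbb{T}=\mathbb{Z}\) is a periodic time scale with period \(p=1\), and \(\mathbb{T}=\bigcup_{k\in\mathbb{Z}}[2k,2k+1]\) is a periodic time scale with period \(p=2\); the time scale \(\mathbb{R}\) satisfies the definition for every \(p>0\).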
Definition 2.6
([33])
Let \(\mathbb{T}\neq\mathbb{R}\) be a periodic time scale with period \(p>0\). A function \(f:\mathbb{T}\rightarrow\mathbb{R}\) is called periodic with period ω if there exists a natural number n such that \(\omega=np\), \(f(t+\omega)=f(t)\) for all \(t\in\mathbb{T}\), and ω is the smallest positive number with this property.
If \(\mathbb{T}=\mathbb{R}\), then we say that f is periodic with period \(\omega>0\) if ω is the smallest positive number such that \(f(t+\omega)=f(t)\) for all \(t\in\mathbb{R}\).
A function \(p: \mathbb{T}\rightarrow\mathbb{R}\) is called regressive if \(1+\mu(t)p(t)\neq0\) for all \(t\in\mathbb{T}^{k}\). The set of all regressive rd-continuous functions \(f:\mathbb{T}\rightarrow\mathbb{R}\) is denoted by \(\mathcal {R}=\mathcal{R}(\mathbb{T})=\mathcal{R}(\mathbb{T},\mathbb{R})\). We define the set \(\mathcal{R}^{+}\) of all positively regressive elements of \(\mathcal{R}\) by \(\mathcal{R}^{+}=\mathcal{R}^{+}(\mathbb{T},\mathbb{R})=\{p\in\mathcal {R}:1+\mu(t)p(t)>0\text{ for all }t\in\mathbb{T}\}\). If p is a regressive function, then the generalized exponential function \(e_{p}\) is defined by \(e_{p}(t,s)=\exp \{\int_{s}^{t}\xi_{\mu(\tau)}(p(\tau))\Delta\tau \}\) for \(s,t\in\mathbb{T}\), with the cylinder transformation
$$ \xi_{h}(z)=\textstyle\begin{cases} \frac{\operatorname{Log}(1+hz)}{h} & \text{if $h\neq0$,} \\ z & \text{if $h=0$.} \end{cases} $$
For two regressive functions \(p, q:\mathbb{T}\rightarrow\mathbb{R}\), we define
$$p\oplus q=p+q+\mu pq,\qquad \ominus p=-\frac{p}{1+\mu p}, \qquad p\ominus q=p\oplus(\ominus q). $$
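For example, if \(\mathbb{T}=\mathbb{R}\), then \(\mu\equiv0\), \(\xi_{0}(z)=z\), \(p\oplus q=p+q\), \(\ominus p=-p\), and
$$e_{p}(t,s)=\exp \biggl\{ \int_{s}^{t}p(\tau)\,d\tau \biggr\} , $$
while if \(\mathbb{T}=\mathbb{Z}\) and p is a regressive constant, then \(e_{p}(t,s)=(1+p)^{t-s}\).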
The generalized exponential function has the following properties.
Lemma 2.3
([21])
Let \(p, q:\mathbb{T}\rightarrow \mathbb{R}\) be two regressive functions. Then
(1) \(e_{0}(t,s)\equiv1\) and \(e_{p}(t,t)\equiv1\);
(2) \(e_{p}(\sigma(t),s)=(1+\mu(t)p(t))e_{p}(t,s)\);
(3) \(\frac{1}{e_{p}(t,s)}=e_{\ominus p}(t,s)\);
(4) \(e_{p}(t,s)=\frac{1}{e_{p}(s,t)}=e_{\ominus p}(s,t)\);
(5) \(e_{p}(t,s)e_{p}(s,r)=e_{p}(t,r)\);
(6) \(e_{p}(t,s)e_{q}(t,s)=e_{p\oplus q}(t,s)\);
(7) \(\frac{e_{p}(t,s)}{e_{q}(t,s)}=e_{p\ominus q}(t,s)\);
(8) \([e_{p}(t,s)]^{\Delta}=p(t)e_{p}(t,s)\);
(9) \([e_{p}(c,\cdot)]^{\Delta}=-p[e_{p}(c,\cdot)]^{\sigma}\) for \(c\in\mathbb{T}\);
(10) \(\frac{d}{dz}[e_{z}(t,s)]= (\int_{s}^{t}\frac{1}{1+\mu(\tau)z}\Delta\tau )e_{z}(t,s)\).
For convenience, we now introduce some notation:
$$\begin{aligned} &B(0,R)=\bigl\{ (x_{1},x_{2},\ldots,x_{n})^{T} \in\mathbb{R}^{n}: \bigl\Vert (x_{1},x_{2}, \ldots,x_{n}) \bigr\Vert \leq R\bigr\} ,\\ & f^{M}=\max _{t\in[0,\omega]_{\mathbb{T}}}\bigl\{ f(t)\bigr\} , \\ &F_{i}^{0}=\limsup_{ \sum_{i=1}^{n}x_{i}\rightarrow0}\max _{t\in[0,\omega ]_{\mathbb{T}}}\frac{F_{i}(t,x_{1},x_{2},\ldots,x_{n})}{ \sum_{i=1}^{n}x_{i}},\\ & G_{i}^{0}=\limsup _{ \sum_{i=1}^{n}x_{i}\rightarrow0}\max_{t\in[0,\omega ]_{\mathbb{T}}}\frac{G_{i}(t,x_{1},x_{2},\ldots,x_{n})}{ \sum_{i=1}^{n}x_{i}}, \\ &F_{i}^{\infty}=\liminf_{ \sum_{i=1}^{n}x_{i}\rightarrow\infty}\min _{t\in[0,\omega]_{\mathbb{T}}}\frac{F_{i}(t,x_{1},x_{2},\ldots,x_{n})}{ \sum_{i=1}^{n}x_{i}},\\ & G_{i}^{\infty}= \liminf_{ \sum_{i=1}^{n}x_{i}\rightarrow\infty}\min_{t\in[0,\omega]_{\mathbb{T}}}\frac{G_{i}(t,x_{1},x_{2},\ldots,x_{n})}{ \sum_{i=1}^{n}x_{i}}, \\ &\gamma_{i}=\limsup_{ \sum_{i=1}^{n}x_{i}\rightarrow0}\frac{ \sum_{k=1}^{p}I_{ik}(x_{1},x_{2},\ldots,x_{n})}{ \sum_{i=1}^{n}x_{i}}, \end{aligned}$$
where \(i=1,2,\ldots,n\), and f is an rd-continuous ω-periodic function.
Lemma 2.4
([21])
Let \(r:\mathbb{T}\rightarrow\mathbb{R}\) be right-dense continuous and regressive. For \(a\in\mathbb{T}\) and \(y_{a}\in\mathbb{R}\), the unique solution of the initial value problem
$$\begin{aligned} y^{\Delta}(t)=r(t)y(t)+h(t),\qquad y(a)=y_{a}, \end{aligned}$$
is given by
$$\begin{aligned} y(t)=e_{r}(t,a)y_{a}+ \int_{a}^{t}e_{r}\bigl(t,\sigma(s) \bigr)h(s)\Delta s. \end{aligned}$$
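In the particular case \(\mathbb{T}=\mathbb{R}\) we have \(\sigma(s)=s\) and \(e_{r}(t,a)=\exp \{\int_{a}^{t}r(\tau)\,d\tau \}\), so this formula reduces to the classical variation-of-constants formula
$$y(t)=e^{\int_{a}^{t}r(\tau)\,d\tau}y_{a}+ \int_{a}^{t}e^{\int_{s}^{t}r(\tau)\,d\tau}h(s)\,ds. $$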
The existence of periodic solutions of system (1.1) is equivalent to the existence of periodic solutions of the corresponding integral system. So the following lemma is important in our discussion.
Lemma 2.5
A function \(u(t)=(u_{1}(t),u_{2}(t),\ldots,u_{n}(t))^{T}\) is an ω-periodic solution of (1.1) if and only if \(u(t)\) is an ω-periodic solution of the integral system
$$\begin{aligned} u_{i}(t)={}& \int_{t}^{t+\omega}H_{i}(t,s)u_{i}(s) \bigl[F_{i}\bigl(s,u(s)\bigr)+G_{i}\bigl(s,u(s)\bigr) \bigr]\Delta s \\ &{}+ \sum_{t_{k}\in[t,t+\omega)_{\mathbb {T}}}H_{i}(t,t_{k})e_{(r_{i}-h_{i})} \bigl(\sigma(t_{k}),t_{k}\bigr)I_{ik} \bigl(u(t_{k})\bigr),\quad i=1,2,\ldots,n, \end{aligned}$$
(2.1)
where
$$\begin{aligned} H_{i}(t,s)=\frac{e_{(r_{i}-h_{i})}(t,\sigma(s))}{e_{(r_{i}-h_{i})}(0,\omega)-1},\quad s\in[t,t+ \omega]_{\mathbb{T}}, i=1,2,\ldots,n. \end{aligned}$$
(2.2)
Proof
If \(u(t)\) is an ω-periodic solution of (1.1), then for all \(t\in\mathbb{T}\), there exists \(k\in\mathbb{N}^{+}\) such that \(t_{k}\) is the first impulsive point after t. Applying Lemma 2.4 and equation (1.1), for \(s\in[t,t_{k}]_{\mathbb{T}}\), we have
$$\begin{aligned} u_{i}(s)=e_{(r_{i}-h_{i})}(s,t)u_{i}(t)+ \int_{t}^{s}e_{(r_{i}-h_{i})}\bigl(s,\sigma(\tau ) \bigr)u_{i}(\tau) \bigl[F_{i}\bigl(\tau,u(\tau) \bigr)+G_{i}\bigl(\tau,u(\tau)\bigr) \bigr]\Delta \tau, \end{aligned}$$
and thus
$$\begin{aligned} u_{i}(t_{k})=e_{(r_{i}-h_{i})}(t_{k},t)u_{i}(t)+ \int _{t}^{t_{k}}e_{(r_{i}-h_{i})}\bigl(t_{k}, \sigma(\tau)\bigr)u_{i}(\tau) \bigl[F_{i}\bigl(\tau ,u(\tau) \bigr)+G_{i}\bigl(\tau,u(\tau)\bigr) \bigr]\Delta \tau. \end{aligned}$$
Again using Lemma 2.4, for \(s\in(t_{k},t_{k+1}]_{\mathbb{T}}\), we have
$$\begin{aligned} u_{i}(s)={}&e_{(r_{i}-h_{i})}(s,t_{k})u_{i} \bigl(t_{k}^{+}\bigr)+ \int _{t_{k}}^{s}e_{(r_{i}-h_{i})}\bigl(s,\sigma(\tau) \bigr)u_{i}(\tau) \bigl[F_{i}\bigl(\tau ,u(\tau) \bigr)+G_{i}\bigl(\tau,u(\tau)\bigr) \bigr]\Delta \tau \\ ={}&e_{(r_{i}-h_{i})}(s,t_{k})u_{i}(t_{k})+ \int_{t_{k}}^{s}e_{(r_{i}-h_{i})}\bigl(s,\sigma (\tau) \bigr)u_{i}(\tau) \bigl[F_{i}\bigl(\tau,u(\tau) \bigr)+G_{i}\bigl(\tau,u(\tau)\bigr) \bigr]\Delta \tau \\ &{}+e_{(r_{i}-h_{i})}(s,t_{k})I_{ik}\bigl(u(t_{k}) \bigr). \end{aligned}$$
Thus, for \(s\in[t,t_{k+1}]_{\mathbb{T}}\), we get
$$\begin{aligned} u_{i}(s)={}&e_{(r_{i}-h_{i})}(s,t)u_{i}(t)+ \int_{t}^{s}e_{(r_{i}-h_{i})}\bigl(s,\sigma (\tau) \bigr)u_{i}(\tau) \bigl[F_{i}\bigl(\tau,u(\tau) \bigr)+G_{i}\bigl(\tau,u(\tau)\bigr) \bigr]\Delta \tau \\ &{}+e_{(r_{i}-h_{i})}(s,t_{k})I_{ik}\bigl(u(t_{k}) \bigr). \end{aligned}$$
Repeating this process for \(s\in[t,t+\omega]_{\mathbb{T}}\), we obtain
$$\begin{aligned} u_{i}(s)={}&e_{(r_{i}-h_{i})}(s,t)u_{i}(t)+ \int_{t}^{s}e_{(r_{i}-h_{i})}\bigl(s,\sigma (\tau) \bigr)u_{i}(\tau) \bigl[F_{i}\bigl(\tau,u(\tau) \bigr)+G_{i}\bigl(\tau,u(\tau)\bigr) \bigr]\Delta \tau \\ &{}+\sum_{t_{k}\in[t,t+\omega)_{\mathbb {T}}}e_{(r_{i}-h_{i})}(s,t_{k})I_{ik} \bigl(u(t_{k})\bigr). \end{aligned}$$
Letting \(s=t+\omega\) in this equality and noticing that \(u_{i}(t)=u_{i}(t+\omega)\), \(e_{(r_{i}-h_{i})}(t,t+\omega)=e_{(r_{i}-h_{i})}(0,\omega)\), \(e_{(r_{i}-h_{i})}(t+\omega,\sigma(\tau))=e_{(r_{i}-h_{i})}(t,\sigma(\tau))e_{(r_{i}-h_{i})}(t+\omega,t)\), \(e_{(r_{i}-h_{i})}(t,t_{k})=e_{(r_{i}-h_{i})}(t,\sigma(t_{k}))e_{(r_{i}-h_{i})}(\sigma(t_{k}),t_{k})\), and \(e_{(r_{i}-h_{i})}(t,t+\omega)e_{(r_{i}-h_{i})}(t+\omega,t)=1\), we have
$$\begin{aligned} u_{i}(t)={}&u_{i}(t+\omega) \\ ={}&e_{(r_{i}-h_{i})}(t+\omega,t)u_{i}(t)+ \int _{t}^{t+\omega}e_{(r_{i}-h_{i})}\bigl(t+\omega,\sigma( \tau)\bigr)u_{i}(\tau)\\ &{}\times \bigl[F_{i}\bigl(\tau,u(\tau) \bigr)+G_{i}\bigl(\tau,u(\tau)\bigr) \bigr]\Delta\tau \\ &{}+\sum_{t_{k}\in[t,t+\omega)_{\mathbb{T}}}e_{(r_{i}-h_{i})}(t+\omega ,t_{k})I_{ik}\bigl(u(t_{k})\bigr) \\ ={}&e_{(r_{i}-h_{i})}(\omega,0)u_{i}(t)+ \int_{t}^{t+\omega }e_{(r_{i}-h_{i})}\bigl(t,\sigma(\tau) \bigr)e_{(r_{i}-h_{i})}(\omega,0)u_{i}(\tau) \\ &{}\times\bigl[F_{i} \bigl(\tau,u(\tau)\bigr)+G_{i}\bigl(\tau,u(\tau)\bigr) \bigr]\Delta\tau \\ &{}+\sum_{t_{k}\in[t,t+\omega)_{\mathbb{T}}}e_{(r_{i}-h_{i})}\bigl(t,\sigma (t_{k})\bigr)e_{(r_{i}-h_{i})}(\omega,0)e_{(r_{i}-h_{i})}\bigl( \sigma(t_{k}),t_{k}\bigr)I_{ik}\bigl(u(t_{k}) \bigr), \end{aligned}$$
which implies that
$$\begin{aligned} u_{i}(t)={}& \int_{t}^{t+\omega}H_{i}(t, \tau)u_{i}(\tau) \bigl[F_{i}\bigl(\tau ,u(\tau) \bigr)+G_{i}\bigl(\tau,u(\tau)\bigr) \bigr]\Delta \tau \\ &{}+\sum_{t_{k}\in[t,t+\omega)_{\mathbb {T}}}H_{i}(t,t_{k})e_{(r_{i}-h_{i})} \bigl(\sigma(t_{k}),t_{k}\bigr)I_{ik} \bigl(u(t_{k})\bigr). \end{aligned}$$
Thus, we conclude that \(u(t)\) satisfies (2.1).
Conversely, let \(u(t)\) be an ω-periodic solution of (2.1). Since the above reduction is completely reversible, \(u(t)\) is also an ω-periodic solution of (1.1). This completes the proof of Lemma 2.5. □
Lemma 2.6
If conditions \((H_{1})\)–\((H_{3})\) hold, then the functions \(H_{i}(t,s)\) \((i=1,2,\ldots,n)\) defined by (2.2) satisfy the following:
(1) \(\frac{1}{\sigma_{i}-1}\leq H_{i}(t,s)\leq\frac{\sigma_{i}}{\sigma_{i}-1}\), \(\forall s\in[t,t+\omega]_{\mathbb{T}}\), where \(\sigma_{i}=e_{(r_{i}-h_{i})}(0,\omega)\), \(i=1,2,\ldots,n\);
(2) \(H_{i}(t+\omega,s+\omega)=H_{i}(t,s)\), \(i=1,2,\ldots,n\).
Proof
According to conditions \((H_{1})\)–\((H_{3})\), since \(\mu(t)=\sigma(t)-t\geq0\) and \(r_{i}(t)-h_{i}(t)<0\), we have \(0<1+\mu(t)[r_{i}(t)-h_{i}(t)]\leq1\). In addition, by the definition of the generalized exponential function we get \(\sigma_{i}=e_{(r_{i}-h_{i})}(0,\omega)>1\), \(i=1,2,\ldots,n\). Noticing that \(t\leq s\leq\sigma(s)\leq t+\omega\) and that \(e_{(r_{i}-h_{i})}(t,\cdot)\) is nondecreasing because \(r_{i}-h_{i}<0\), we have
$$\begin{aligned} \frac{1}{\sigma_{i}-1}=\frac{e_{(r_{i}-h_{i})}(t,t)}{\sigma_{i}-1}\leq H_{i}(t,s)\leq \frac{e_{(r_{i}-h_{i})}(t,t+\omega)}{\sigma_{i}-1}=\frac {\sigma_{i}}{\sigma_{i}-1}. \end{aligned}$$
Thus, assertion \((1)\) holds. Now we show that assertion \((2)\) also holds. In fact, since \(\sigma(t+\omega)=\sigma(t)+\omega\), substituting \(\tau\rightarrow\tau+\omega\) in the integral defining the exponential function and using the ω-periodicity of \(r_{i}-h_{i}\) and μ, we have
$$\begin{aligned} H_{i}(t+\omega,s+\omega)={}&\frac{e_{(r_{i}-h_{i})}(t+\omega,\sigma (s+\omega))}{\sigma_{i}-1} =\frac{e_{(r_{i}-h_{i})}(t+\omega,\sigma(s)+\omega)}{\sigma_{i}-1} \\ ={}&\frac{e_{(r_{i}-h_{i})}(t,\sigma(s))}{\sigma_{i}-1}=H_{i}(t,s),\quad i=1,2,\ldots,n. \end{aligned}$$
The proof of Lemma 2.6 is complete. □
To obtain the existence of a periodic solution of system (2.1), we need some preparations. Let X be a real Banach space, and let K be a closed nonempty subset of X. Then K is a cone if
(i) \(k\alpha+l\beta\in K\) for all \(\alpha,\beta \in K\) and \(k,l\geq0\);
(ii) \(\alpha,-\alpha\in K\) imply \(\alpha=\theta\), where θ is the zero element of X.
Let E be a Banach space, and let K be a cone in E. The semiorder induced by the cone K is denoted by ≤, that is, \(x\leq y\) if and only if \(y-x\in K\). In addition, for a bounded subset \(A\subset E\), let \(\alpha_{E}(A)\) denote the (Kuratowski) measure of noncompactness, namely
$$\begin{aligned} \alpha_{E}(A)={}&\inf \bigl\{ \delta>0: A \text{ admits a finite cover by subsets } A_{i}\subset A \text{ such that}\\ & \operatorname{diam}(A_{i}) \leq\delta \bigr\} , \end{aligned}$$
where \(\operatorname{diam}(A_{i})\) denotes the diameter of a set \(A_{i}\).
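For example, \(\alpha_{E}(A)=0\) if and only if A is relatively compact; moreover, \(\alpha_{E}\) is monotone with respect to inclusion and satisfies \(\alpha_{E}(A)\leq\operatorname{diam}(A)\) for every bounded set A.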
Let \(E,F\) be two Banach spaces and \(D\subset E\). A continuous bounded map \(\Phi: \overline{D}\rightarrow F\) is called k-set-contractive if for any bounded set \(S\subset D\), we have
$$\begin{aligned} \alpha_{F}\bigl(\Phi(S)\bigr)\leq k\alpha_{E}(S). \end{aligned}$$
The map Φ is called strict-set-contractive if it is k-set-contractive for some \(0\leq k<1\). In particular, completely continuous operators are 0-set-contractive.
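A typical example is a sum \(\Phi=\Psi_{1}+\Psi_{2}\), where \(\Psi_{1}\) is completely continuous and \(\Psi_{2}\) is Lipschitz with constant \(k<1\); since \(\alpha_{F}(\Psi_{1}(S))=0\) and \(\alpha_{F}(\Psi_{2}(S))\leq k\alpha_{E}(S)\) for every bounded set S, such a map is k-set-contractive and hence strict-set-contractive.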
The following lemma is useful for the proof of our main results.
Lemma 2.7
([34, 35])
Let K be a cone in the real Banach space X, and let \(K_{r,R}=\{u\in K:r\leq\|u\|\leq R\}\) with \(R>r>0\). Suppose that \(\Phi:K_{r,R}\rightarrow K\) is strict-set-contractive and that one of the following two conditions is satisfied:
(i) \(\Phi u\nleq u\) for all \(u\in K\) with \(\|u\|=r\), and \(\Phi u\ngeq u\) for all \(u\in K\) with \(\|u\|=R\);
(ii) \(\Phi u\ngeq u\) for all \(u\in K\) with \(\|u\|=r\), and \(\Phi u\nleq u\) for all \(u\in K\) with \(\|u\|=R\).
Then Φ has at least one fixed point in \(K_{r,R}\).
Define
$$\begin{aligned} PC(\mathbb{T})={}& \bigl\{ u=(u_{1},u_{2}, \ldots,u_{n}):\mathbb {T}\rightarrow\mathbb{R}^{n},u|_{(t_{k},t_{k+1})} \in C_{\mathrm{rd}}\bigl((t_{k},t_{k+1}),\mathbb{R}^{n} \bigr), \\ &\exists u\bigl(t_{k}^{-}\bigr)=u(t_{k}), u \bigl(t_{k}^{+}\bigr), k\in\mathbb{N}^{+} \bigr\} . \end{aligned}$$
Let
$$\begin{aligned} X=\bigl\{ u:u\in PC(\mathbb{T}),u(t+\omega)=u(t), t\in\mathbb{T}\bigr\} \end{aligned}$$
be equipped with the norm \(\|u\|=\sum_{i=1}^{n}|u_{i}|_{0}\), where \(|u_{i}|_{0}=\sup_{t\in[0,\omega]_{\mathbb{T}}}|u_{i}(t)|\), \(i=1,2,\ldots,n\). Then X is a Banach space. In view of Lemma 2.6, we define the cone K in X as
$$\begin{aligned} K= \biggl\{ u=(u_{1},\ldots,u_{n})\in X: u_{i}(t) \geq\frac{1}{\sigma_{i}} \vert u_{i} \vert _{0}, \forall t \in[0,\omega]_{\mathbb{T}}, i=1,2,\ldots,n \biggr\} . \end{aligned}$$
Let the map Φ be defined by
$$\begin{aligned} (\Phi u) (t)=\bigl((\Phi_{1} u) (t),(\Phi_{2} u) (t),\ldots,(\Phi_{n} u) (t)\bigr)^{T}, \end{aligned}$$
(2.3)
where \(u\in K\), \(t\in\mathbb{T}\),
$$\begin{aligned} (\Phi_{i} u) (t)={}& \int_{t}^{t+\omega}H_{i}(t,s)u_{i}(s) \bigl[F_{i}\bigl(s,u(s)\bigr)+G_{i}\bigl(s,u(s)\bigr) \bigr]\Delta s \\ &{}+ \sum_{t_{k}\in[t,t+\omega)_{\mathbb {T}}}H_{i}(t,t_{k})e_{(r_{i}-h_{i})} \bigl(\sigma(t_{k}),t_{k}\bigr)I_{ik} \bigl(u(t_{k})\bigr),\quad i=1,2,\ldots,n, \end{aligned}$$
and \(H_{i}(t,s)\) \((i=1,2,\ldots,n)\) are defined by (2.2).
Lemma 2.8
Assume that \((H_{1})\)–\((H_{3})\) hold. Then \(\Phi: K\rightarrow K\) defined by (2.3) is well defined, that is, \(\Phi(K)\subset K\).
Proof
It is clear that \(\Phi u\in PC(\mathbb{T})\) for all \(u\in K\). In view of Lemma 2.6 and (2.3), we obtain
$$\begin{aligned} (\Phi_{i} u) (t+\omega)={}& \int_{t+\omega}^{t+2\omega}H_{i}(t+\omega ,s)u_{i}(s) \bigl[F_{i}\bigl(s,u(s)\bigr)+G_{i} \bigl(s,u(s)\bigr) \bigr]\Delta s \\ &{}+ \sum_{t_{k}\in[t+\omega,t+2\omega)_{\mathbb{T}}}H_{i}(t+\omega ,t_{k})e_{(r_{i}-h_{i})}\bigl(\sigma(t_{k}),t_{k} \bigr)I_{ik}\bigl(u(t_{k})\bigr) \\ ={}& \int_{t}^{t+\omega}H_{i}(t+\omega,\tau+ \omega)u_{i}(\tau+\omega ) \bigl[F_{i}\bigl(\tau+\omega,u( \tau+\omega)\bigr) \\ &{}+G_{i}\bigl(\tau+\omega,u(\tau+\omega)\bigr) \bigr] \Delta\tau \\ &{}+ \sum_{t_{l}\in[t,t+\omega)_{\mathbb{T}}}H_{i}(t+\omega ,t_{l}+\omega)e_{(r_{i}-h_{i})}\bigl(\sigma(t_{l}+ \omega),t_{l}+\omega \bigr)I_{il}\bigl(u(t_{l}+ \omega)\bigr) \\ ={}& \int_{t}^{t+\omega}H_{i}(t, \tau)u_{i}(\tau) \bigl[F_{i}\bigl(\tau,u(\tau)\bigr) +G_{i}\bigl(\tau,u(\tau)\bigr) \bigr]\Delta\tau \\ &{}+ \sum_{t_{l}\in[t,t+\omega)_{\mathbb {T}}}H_{i}(t,t_{l})e_{(r_{i}-h_{i})} \bigl(\sigma(t_{l}),t_{l}\bigr)I_{il} \bigl(u(t_{l})\bigr)=(\Phi_{i} u) (t), \end{aligned}$$
that is, \((\Phi_{i} u)(t+\omega)=(\Phi_{i} u)(t),\forall t\in\mathbb{T},i=1,2,\ldots,n\). So \(\Phi u\in X\). For any \(u\in K\), we have
$$\begin{aligned} \vert \Phi_{i}u \vert _{0}\leq{}&\frac{\sigma_{i}}{\sigma_{i}-1} \Biggl[ \int_{0}^{\omega }u_{i}(s) \bigl[F_{i}\bigl(s,u(s)\bigr)+G_{i}\bigl(s,u(s)\bigr) \bigr] \Delta s \\ &{}+ \sum_{k=1}^{p}e_{(r_{i}-h_{i})}\bigl( \sigma(t_{k}),t_{k}\bigr)I_{ik}\bigl(u(t_{k}) \bigr) \Biggr],\quad i=1,2,\ldots,n \end{aligned}$$
and
$$\begin{aligned} &(\Phi_{i} u) (t)\\ &\quad\geq\frac{1}{\sigma_{i}-1} \Biggl[ \int_{t}^{t+\omega}u_{i}(s) \bigl[F_{i}\bigl(s,u(s)\bigr)+G_{i}\bigl(s,u(s)\bigr) \bigr] \Delta s + \sum_{k=1}^{p}e_{(r_{i}-h_{i})} \bigl(\sigma(t_{k}),t_{k}\bigr)I_{ik} \bigl(u(t_{k})\bigr) \Biggr] \\ &\quad =\frac{1}{\sigma_{i}-1} \Biggl[ \int_{0}^{\omega}u_{i}(s) \bigl[F_{i}\bigl(s,u(s)\bigr)+G_{i}\bigl(s,u(s)\bigr) \bigr] \Delta s + \sum_{k=1}^{p}e_{(r_{i}-h_{i})} \bigl(\sigma(t_{k}),t_{k}\bigr)I_{ik} \bigl(u(t_{k})\bigr) \Biggr] \\ &\quad\geq \frac{1}{\sigma_{i}} \vert \Phi_{i}u \vert _{0},\quad i=1,2,\ldots,n. \end{aligned}$$
So \(\Phi u\in K\). This completes the proof of Lemma 2.8. □
Lemma 2.9
Assume that \((H_{1})\)–\((H_{2})\) hold. Then \(\Phi: K\rightarrow K\) defined by (2.3) is completely continuous.
Proof
It is easy to see that Φ is continuous and bounded. We now show that Φ maps bounded sets into relatively compact sets. Let \(\Omega\subset K\) be an arbitrary open bounded set in K. Then there exists \(R>0\) such that \(\|u\|< R\) for any \(u=(u_{1},u_{2},\ldots,u_{n})^{T}\in\Omega\). We prove that \(\overline{\Phi(\Omega)}\) is compact. In fact, for any \(u\in\Omega\) and \(t\in[0,\omega]_{\mathbb{T}}\), we have
$$\begin{aligned} &\bigl\vert (\Phi_{i} u) (t) \bigr\vert \\ &\quad = \int_{t}^{t+\omega}H_{i}(t,s)u_{i}(s) \bigl[F_{i}\bigl(s,u(s)\bigr)+G_{i}\bigl(s,u(s)\bigr) \bigr]\Delta s \\ &\qquad{}+ \sum_{t_{k}\in[t,t+\omega)_{\mathbb {T}}}H_{i}(t,t_{k})e_{(r_{i}-h_{i})} \bigl(\sigma(t_{k}),t_{k}\bigr)I_{ik} \bigl(u(t_{k})\bigr) \\ &\quad \leq\frac{\sigma_{i}}{\sigma_{i}-1} \Biggl[ \int_{0}^{\omega }u_{i}(s) \bigl[F_{i}\bigl(s,u(s)\bigr)+G_{i}\bigl(s,u(s)\bigr) \bigr] \Delta s + \sum_{k=1}^{p}e_{(r_{i}-h_{i})} \bigl(\sigma(t_{k}),t_{k}\bigr)I_{ik} \bigl(u(t_{k})\bigr) \Biggr] \\ &\quad \leq\frac{\sigma_{i}}{\sigma_{i}-1} \Biggl[R\omega \Bigl(\max_{s\in [0,\omega]_{\mathbb{T}},u\in B(0,R)}\bigl\{ F_{i}(s,u)\bigr\} +\max_{s\in[0,\omega]_{\mathbb{T}},u\in B(0,R)}\bigl\{ G_{i}(s,u)\bigr\} \Bigr) \\ &\qquad{}+\sigma_{i} \sum_{k=1}^{p}\max _{u\in B(0,R)}\bigl\{ I_{ik}(u)\bigr\} \Biggr]\triangleq A_{i}, \quad i=1,2,\ldots,n \end{aligned}$$
and
$$\begin{aligned} \bigl\vert (\Phi_{i} u)^{\Delta}(t) \bigr\vert ={}& \bigl\vert \bigl[r_{i}(t)-h_{i}(t)\bigr](\Phi_{i} u) (t)+u_{i}(t) \bigl[F_{i}\bigl(t,u(t)\bigr)+G_{i} \bigl(t,u(t)\bigr) \bigr] \bigr\vert \\ \leq{}&\bigl(r_{i}^{M}+h_{i}^{M} \bigr)A_{i}+R \Bigl(\max_{t\in[0,\omega]_{\mathbb {T}},u\in B(0,R)}\bigl\{ F_{i}(t,u)\bigr\} +\max_{t\in[0,\omega]_{\mathbb{T}},u\in B(0,R)}\bigl\{ G_{i}(t,u)\bigr\} \Bigr) \\ \triangleq{}&B_{i},\quad i=1,2,\ldots,n. \end{aligned}$$
Hence \(\|(\Phi u)\|\leq\sum_{i=1}^{n}A_{i}\) and \(\|(\Phi u)^{\Delta}\|\leq\sum_{i=1}^{n}B_{i}\). It follows from Lemma 2.4 in [36] that \(\Phi(\bar{\Omega})\) is relatively compact in X. The proof of Lemma 2.9 is complete. □