In this section, we present the main result of our paper, which ensures the existence of extremal solutions for problem (1.1).
Theorem 3.1
Suppose that conditions (H1)-(H6) hold. Then problem (1.1) has extremal solutions
\(x^{*},y^{*}\in[x_{0},y_{0}]\). Moreover, there exist monotone iterative sequences
\(\{x_{n}\},\{y_{n}\}\subset C_{\alpha}[0,1]\)
such that
\(x_{n}\rightarrow x^{*}\), \(y_{n}\rightarrow y^{*}\)
uniformly on \([0,1]\), as
\(n\rightarrow \infty\)
and
$$x_{0}\leq x_{1}\leq\cdots\leq x_{n}\leq\cdots \leq x^{*}\leq y^{*}\leq\cdots\leq y_{n}\leq\cdots\leq y_{1}\leq y_{0}. $$
Proof
For \(n=0,1,2,\ldots\), we define
$$ \textstyle\begin{cases} -D^{\alpha}x_{n+1}(t)= f(t,x_{n}(t))-M(t)[x_{n+1}(t)-x_{n}(t)],\quad t\in[0,1], \\ x_{n+1}(0)=0, \\ D^{\alpha-1}x_{n+1}(1)=I^{\beta}\{g(\eta,x_{n}(\eta))+\lambda[x_{n+1}(\eta )-x_{n}(\eta)]\}+k \\ \hphantom{D^{\alpha-1}x_{n+1}(1)}=\lambda I^{\beta}x_{n+1}(\eta)+I^{\beta}[g(\eta ,x_{n}(\eta))-\lambda x_{n}(\eta)]+k, \end{cases} $$
(3.1)
and
$$ \textstyle\begin{cases} -D^{\alpha}y_{n+1}(t)= f(t,y_{n}(t))-M(t)[y_{n+1}(t)-y_{n}(t)],\quad t\in[0,1], \\ y_{n+1}(0)=0, \\ D^{\alpha-1}y_{n+1}(1)=I^{\beta}\{g(\eta,y_{n}(\eta))+\lambda[y_{n+1}(\eta )-y_{n}(\eta)]\}+k \\ \hphantom{D^{\alpha-1}y_{n+1}(1)}=\lambda I^{\beta}y_{n+1}(\eta)+I^{\beta}[g(\eta ,y_{n}(\eta))-\lambda y_{n}(\eta)]+k. \end{cases} $$
(3.2)
In view of Lemma 2.3, for any \(n\in\mathbb{N}\), problems (3.1) and (3.2) have unique solutions \(x_{n+1}(t)\) and \(y_{n+1}(t)\), respectively, so the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) are well defined. First, we show that
$$x_{0}(t)\leq x_{1}(t)\leq y_{1}(t)\leq y_{0}(t),\quad t\in[0,1]. $$
Let \(w(t)=x_{1}(t)-x_{0}(t)\). The definition of \(x_{1}(t)\) and condition (H1) yield
$$\textstyle\begin{cases} -D^{\alpha}w(t)\geq-M(t)w(t),\quad t\in[0,1], \\ w(0)=0, \\ D^{\alpha-1}w(1)\geq\lambda I^{\beta} w(\eta). \end{cases} $$
According to Lemma 2.6, we have \(w(t)\geq0\), \(t\in[0,1]\), that is, \(x_{1}(t)\geq x_{0}(t)\). Using the same reasoning, we can show that \(y_{0}(t)\geq y_{1}(t)\), for all \(t\in[0,1]\).
Now, we put \(p(t)=y_{1}(t)-x_{1}(t)\). From (H2) and (H3), we get
$$\begin{aligned} -D^{\alpha }p(t) =&f\bigl(t,y_{0}(t)\bigr)-M(t) \bigl[y_{1}(t)-y_{0}(t)\bigr]-f\bigl(t,x_{0}(t) \bigr)+M(t)\bigl[x_{1}(t)-x_{0}(t)\bigr] \\ \geq&-M(t)\bigl[y_{0}(t)-x_{0}(t)\bigr]-M(t) \bigl[y_{1}(t)-y_{0}(t)\bigr]+M(t)\bigl[x_{1}(t)-x_{0}(t) \bigr] \\ =&-M(t)p(t). \end{aligned}$$
Also \(p(0)=0\), and
$$\begin{aligned} D^{\alpha-1}p(1) =&I^{\beta}\bigl\{ g\bigl(\eta,y_{0}(\eta) \bigr)+\lambda\bigl[y_{1}(\eta )-y_{0}(\eta)\bigr]\bigr\} - I^{\beta}\bigl\{ g\bigl(\eta,x_{0}(\eta)\bigr)+\lambda \bigl[x_{1}(\eta)-x_{0}(\eta)\bigr]\bigr\} \\ =&I^{\beta}\bigl\{ g\bigl(\eta,y_{0}(\eta)\bigr)-g\bigl( \eta,x_{0}(\eta)\bigr)+\lambda\bigl[y_{1}(\eta) -y_{0}(\eta)\bigr]-\lambda\bigl[x_{1}(\eta)-x_{0}( \eta)\bigr]\bigr\} \\ \geq&I^{\beta}\bigl\{ \lambda\bigl[y_{0}(\eta)-x_{0}( \eta)\bigr]+\lambda\bigl[y_{1}(\eta) -y_{0}(\eta)\bigr]- \lambda\bigl[x_{1}(\eta)-x_{0}(\eta)\bigr]\bigr\} \\ =&\lambda I^{\beta}p(\eta). \end{aligned}$$
These results and Lemma 2.6 imply that \(y_{1}(t)\geq x_{1}(t)\), \(t\in[0,1]\).
In the next step, we show that \(x_{1}\), \(y_{1}\) are lower and upper solutions of problem (1.1), respectively. Note that
$$\begin{aligned} -D^{\alpha }x_{1}(t) =&f\bigl(t,x_{0}(t)\bigr)-f \bigl(t,x_{1}(t)\bigr)+f\bigl(t,x_{1}(t)\bigr)-M(t) \bigl[x_{1}(t)-x_{0}(t)\bigr] \\ \leq&M(t)\bigl[x_{1}(t)-x_{0}(t)\bigr]+f \bigl(t,x_{1}(t)\bigr)-M(t)\bigl[x_{1}(t)-x_{0}(t) \bigr] \\ =&f\bigl(t,x_{1}(t)\bigr). \end{aligned}$$
Also \(x_{1}(0)=0\), and
$$\begin{aligned} D^{\alpha-1}x_{1}(1) =&I^{\beta}\bigl\{ g\bigl( \eta,x_{0}(\eta)\bigr)-g\bigl(\eta,x_{1}(\eta )\bigr)+g\bigl( \eta,x_{1}(\eta)\bigr) +\lambda\bigl[x_{1}( \eta)-x_{0}(\eta)\bigr]\bigr\} +k \\ \leq&I^{\beta}\bigl\{ \lambda\bigl[x_{0}(\eta)-x_{1}( \eta)\bigr]+g\bigl(\eta,x_{1}(\eta )\bigr)+\lambda\bigl[x_{1}( \eta)-x_{0}(\eta)\bigr]\bigr\} +k \\ =&I^{\beta}g\bigl(\eta,x_{1}(\eta)\bigr)+k \end{aligned}$$
by assumptions (H2) and (H3). This proves that \(x_{1}\) is a lower solution of problem (1.1). Similarly, we can prove that \(y_{1}\) is an upper solution of (1.1).
Using mathematical induction, we see that
$$x_{0}(t)\leq x_{1}(t)\leq\cdots\leq x_{n}(t)\leq x_{n+1}(t)\leq y_{n+1}(t) \leq y_{n}(t)\leq\cdots \leq y_{1}(t)\leq y_{0}(t),\quad t\in[0,1], $$
since all the iterates belong to \(C_{\alpha}[0,1]\). By standard arguments, it is easy to show that \(\{x_{n}\}\) and \(\{y_{n}\}\) are uniformly bounded and equicontinuous. Hence, by the Arzelà-Ascoli theorem, \(\{x_{n}\}\) and \(\{y_{n}\}\) converge uniformly on \([0,1]\), say to \(x^{*}(t)\) and \(y^{*}(t)\), respectively. That is,
$$\lim_{n\rightarrow\infty}x_{n}(t)=x^{*}(t),\qquad \lim _{n\rightarrow\infty }y_{n}(t)=y^{*}(t), \quad t\in[0,1]. $$
Moreover, letting \(n\rightarrow\infty\) in (3.1) and (3.2) and using the continuity of \(f\) and \(g\), we see that \(x^{*}(t)\) and \(y^{*}(t)\) are solutions of problem (1.1) with \(x_{0}\leq x^{*}\leq y^{*}\leq y_{0}\) on \([0,1]\).
To prove that \(x^{*}(t)\), \(y^{*}(t)\) are extremal solutions of (1.1), let \(u\in[x_{0},y_{0}]\) be any solution of problem (1.1). Suppose that \(x_{m}(t)\leq u(t)\leq y_{m}(t)\), \(t\in[0,1]\), for some \(m\); this holds for \(m=0\) since \(u\in[x_{0},y_{0}]\). Let \(v(t)=u(t)-x_{m+1}(t)\) and \(z(t)=y_{m+1}(t)-u(t)\). Then, by assumptions (H2) and (H3), we see that
$$\textstyle\begin{cases} -D^{\alpha}v(t)\geq-M(t)v(t),\quad t\in[0,1], \\ v(0)=0, \\ D^{\alpha-1}v(1)\geq\lambda I^{\beta}v(\eta), \end{cases} $$
and
$$\textstyle\begin{cases} -D^{\alpha}z(t)\geq-M(t)z(t), \quad t\in[0,1], \\ z(0)=0, \\ D^{\alpha-1}z(1)\geq\lambda I^{\beta}z(\eta). \end{cases} $$
These inequalities and Lemma 2.6 imply that \(x_{m+1}(t)\leq u(t)\leq y_{m+1}(t)\), \(t\in[0,1]\); hence, by induction, \(x_{n}(t)\leq u(t)\leq y_{n}(t)\) on \([0,1]\) for all \(n\). Letting \(n\rightarrow\infty\), we conclude that \(x^{*}(t)\leq u(t)\leq y^{*}(t)\), \(t\in[0,1]\). The proof is complete. □
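The proof is constructive: each iterate solves a linear problem with the previous iterate on the right-hand side. The following sketch only illustrates how the iteration (3.1)-(3.2) could be organized numerically; solve_linear_bvp is a hypothetical routine (not defined in this paper) that returns the solution of the linear problem, for example via the Green's function of Lemma 2.3, sampled on a grid.

```python
# Minimal structural sketch of the monotone iteration in Theorem 3.1.
# "solve_linear_bvp(prev, t)" is a hypothetical user-supplied routine: given the
# previous iterate sampled on the grid t, it must return the solution of the
# linear problem (3.1)/(3.2) (e.g., computed via the Green's function of Lemma 2.3).
import numpy as np

def monotone_iterates(solve_linear_bvp, x0, y0, t, tol=1e-8, max_iter=100):
    """Approximate the extremal solutions x*, y* of (1.1) on the grid t,
    starting from the lower solution x0 and the upper solution y0."""
    x = np.asarray(x0, dtype=float)
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        x_new = solve_linear_bvp(x, t)  # one step of scheme (3.1)
        y_new = solve_linear_bvp(y, t)  # one step of scheme (3.2)
        # Theorem 3.1 guarantees x <= x_new <= y_new <= y pointwise on [0, 1].
        if max(np.max(np.abs(x_new - x)), np.max(np.abs(y_new - y))) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y
```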
Example
Consider the following problem:
$$ \textstyle\begin{cases} -D^{\frac{3}{2}}x(t)= -\frac{1}{16}t^{2}x^{2}(t)+\frac{1}{5}t^{3}, \quad t\in [0,1], \\ x(0)=0, \\ D^{\frac{1}{2}}x(1)=I^{\frac{3}{2}}g(\frac{1}{4},x(\frac{1}{4}))+1.2 =\frac{1}{\Gamma(\frac{3}{2})}\int_{0}^{\frac{1}{4}}(\frac{1}{4}-s)^{\frac {1}{2}}(s+1)x(s)\,ds+1.2, \end{cases} $$
(3.3)
where \(\alpha=\frac{3}{2}\), \(\beta=\frac{3}{2}\), \(\eta=\frac{1}{4}\), \(k=1.2\), and
$$\textstyle\begin{cases} f(t,x)= -\frac{1}{16}t^{2}x^{2}+\frac{1}{5}t^{3}, \\ g(t,x)=(t+1)x. \end{cases} $$
Take \(x_{0}(t)=0\), \(y_{0}(t)=2t^{\frac{1}{2}}\). It is not difficult to verify that \(x_{0}\), \(y_{0}\) are lower and upper solutions of (3.3), respectively, and \(x_{0}\leq y_{0}\). So (H1) holds.
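As a brief sketch of this verification for \(y_{0}\) (assuming \(D^{\alpha}\) denotes the Riemann-Liouville derivative and the upper-solution inequalities are those used in the proof of Theorem 3.1): since \(y_{0}(t)=2t^{\frac{1}{2}}=2t^{\alpha-1}\), we have \(D^{\frac{3}{2}}y_{0}(t)=0\), so
$$-D^{\frac{3}{2}}y_{0}(t)=0\geq-\frac{1}{20}t^{3}=-\frac{1}{16}t^{2}\bigl(2t^{\frac{1}{2}}\bigr)^{2}+\frac{1}{5}t^{3}=f\bigl(t,y_{0}(t)\bigr), $$
while at the boundary \(D^{\frac{1}{2}}y_{0}(1)=2\Gamma(\frac{3}{2})=\sqrt{\pi}\approx1.77\) and
$$I^{\frac{3}{2}}g\biggl(\frac{1}{4},y_{0}\biggl(\frac{1}{4}\biggr)\biggr)+1.2 =\frac{1}{\Gamma(\frac{3}{2})} \int_{0}^{\frac{1}{4}}\biggl(\frac{1}{4}-s\biggr)^{\frac{1}{2}}(s+1)\cdot2s^{\frac{1}{2}}\,ds+1.2 =\frac{9\sqrt{\pi}}{256}+1.2\approx1.26. $$
For \(x_{0}(t)=0\) the corresponding inequalities reduce to \(0\leq\frac{1}{5}t^{3}\) and \(0\leq1.2\).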
In addition, we have
$$ f(t,y)-f(t,x)=\frac{1}{16}t^{2}x^{2}- \frac{1}{16}t^{2}y^{2}=-\frac {1}{16}t^{2}(x+y) (y-x)\geq-\frac{1}{4}t^{\frac{5}{2}}(y-x)\geq-\frac {1}{4}t^{\frac{3}{2}}(y-x) $$
(3.4)
and
$$ g(t,y)-g(t,x)=(t+1) (y-x)\geq(y-x), $$
(3.5)
where \(x_{0}(t)\leq x(t)\leq y(t)\leq y_{0}(t)\); in (3.4) we have used \(0\leq x+y\leq4t^{\frac{1}{2}}\) and \(t^{\frac{5}{2}}\leq t^{\frac{3}{2}}\) for \(t\in[0,1]\).
Therefore (H2) and (H3) hold.
In view of (3.4) and (3.5), we may take
$$M(t)=\frac{1}{4}t^{\frac{3}{2}}, \quad \lambda=1. $$
Then
$$\begin{aligned}& \Gamma(\alpha+\beta)=\Gamma(3)=2>\lambda\eta^{\alpha+\beta-1}=\biggl( \frac {1}{4}\biggr)^{2}, \\& 2\Gamma(\alpha+\beta) \int_{0}^{1} \bigl\vert M(s) \bigr\vert \,ds=2 \cdot2 \int_{0}^{1}\frac {1}{4}s^{\frac{3}{2}}\,ds= \frac{2}{5} < \Gamma(\alpha)\bigl[\Gamma(\alpha+\beta)-\lambda \eta^{\alpha+\beta-1}\bigr] \\& \hphantom{2\Gamma(\alpha+\beta) \int_{0}^{1} \bigl\vert M(s) \bigr\vert \,ds}= \Gamma\biggl(\frac{3}{2}\biggr)\biggl[2-\biggl( \frac{1}{4}\biggr)^{2}\biggr]\approx1.717, \\& \Gamma(2-\alpha)\lambda\eta^{\beta}=\Gamma\biggl(2-\frac{3}{2} \biggr)\cdot1\cdot \biggl(\frac{1}{4}\biggr)^{\frac{3}{2}} = \frac{1}{4}\cdot\Gamma\biggl(\frac{3}{2}\biggr)< \Gamma(\beta)=\Gamma \biggl(\frac {3}{2}\biggr), \\& \Gamma(2-\alpha)\cdot t^{\alpha}\cdot M(t)=\Gamma\biggl(\frac{1}{2} \biggr)\cdot t^{\frac{3}{2}}\cdot\frac{1}{4}\cdot t^{\frac{3}{2}} >1- \alpha=-\frac{1}{2}, \quad \mbox{for } t\in(0,1). \end{aligned}$$
This shows that (H4), (H5), and (H6) hold. By Theorem 3.1, problem (3.3) has extremal solutions in \([x_{0}(t), y_{0}(t)]\).
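The arithmetic in the display above can be reproduced with a few lines of Python (a check of the numerical values only; the variable names are ours, and \(\int_{0}^{1}\frac{1}{4}s^{\frac{3}{2}}\,ds=\frac{1}{10}\) is used in closed form):

```python
# Quick arithmetic check of the constants used to verify (H4)-(H6) for problem (3.3).
from math import gamma

alpha, beta, eta, lam = 1.5, 1.5, 0.25, 1.0

# Gamma(alpha+beta) versus lam * eta**(alpha+beta-1)
print(gamma(alpha + beta), lam * eta ** (alpha + beta - 1))        # 2.0  0.0625

# 2*Gamma(alpha+beta)*int_0^1 |M(s)| ds  versus
# Gamma(alpha)*[Gamma(alpha+beta) - lam*eta**(alpha+beta-1)]
int_M = 0.25 * 2 / 5  # closed-form value of the integral of s**1.5 / 4 over [0, 1]
print(2 * gamma(alpha + beta) * int_M,
      gamma(alpha) * (gamma(alpha + beta) - lam * eta ** (alpha + beta - 1)))  # 0.4  ~1.717

# Gamma(2-alpha)*lam*eta**beta versus Gamma(beta)
print(gamma(2 - alpha) * lam * eta ** beta, gamma(beta))           # ~0.2216  ~0.8862

# The last displayed inequality, Gamma(2-alpha)*t**alpha*M(t) > 1-alpha = -1/2,
# holds trivially on (0, 1) because its left-hand side is nonnegative.
```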