Measure of noncompactness and application to stochastic differential equations
Advances in Difference Equations volume 2016, Article number: 28 (2016)
Abstract
In this paper, we study the existence and uniqueness of the solution of stochastic differential equation by means of the properties of the associated condensing nonexpansive random operator. Moreover, by taking account of the results of Diaz and Metcalf, we prove the convergence of Kirk’s process to this solution for small times.
1 Introduction and notations
It has been found over the years that fixed point theory is a powerful tool for the resolution of nonlinear problems (differential equations, integrodifferential equations, …). The roots of this theory go back to the famous works of Brouwer (1912) and Banach (1922); the latter gave an abstract formulation of the method of successive approximations, used systematically by Liouville (1837) in his results. We note that Banach's result was established in the setting of normed spaces and extended to metric spaces by Caccioppoli (1930). Since then this theory has become a burgeoning field to which many authors have contributed thousands of papers. The development of this theory was heavily linked to that of functional analysis in the 1950s. The Italian mathematician Darbo published a result which ensures the existence of a fixed point for the so-called condensing operators, generalizing both the Schauder fixed point theorem and the Banach contraction principle. This discovery was the subject of several applications in both linear and nonlinear analysis (integral equations with singular kernels, differential equations defined on unbounded domains, neutral differential equations, differential operators having nonempty essential spectra, boundary value problems in Banach spaces, and others). A condensing (or densifying) mapping is a mapping for which the image of any set is, in a certain sense, more compact than the set itself; the degree of noncompactness of a set is measured by means of functions called measures of noncompactness. Among the application areas of these tools is the theory of probabilistic operators, a branch of stochastic analysis which deals with random operators and their properties and is seen as an extension of (deterministic) operator theory.
This axis of research emerged in the 1950s, thanks to the works of the East European school of probability, whose main purpose was the resolution of stochastic differential equations and stochastic partial differential equations modeling the trajectories of random phenomena, first studied and developed by Itô in 1946. A stochastic differential equation is an ordinary differential equation perturbed by a white noise (involving Brownian motion). The history of this direction goes back to the work of the English botanist Brown, who in 1827 described this motion as that of a fine organic particle in suspension in a gas or a fluid. In the late 19th century, scientists (Bachelier, Smoluchowski) addressed the study of this type of motion. Afterwards, more precisely in 1905, Einstein published a paper in which he showed that the probability density of Brownian motion satisfies the heat equation. The first rigorous mathematical treatment is due to Wiener, who proved the existence of Brownian motion in the 1920s. For more details on these equations, we can quote for example [1–5].
In this work, we study the existence and uniqueness of the solution of the following Itô stochastic differential equation
where \(a_{1}\) and \(a_{2}\) are Borel measurable functions and \(0 \leq f(\theta) \leq\theta\).
Recall that this equation models, for example, the motion of a particle subjected to infinitely many shocks at time t. Here \(a_{1}\) is a transfer coefficient, while \(a_{2}\) is a diffusion coefficient. In the case where \(a_{1}\) and \(a_{2}\) satisfy a Lipschitz condition with respect to the second variable, the result for \(f(\theta) \equiv\theta\) was established by Gikhman and Skorohod [6], who showed that the mapping associated to (1.1) is a contraction; the solution was obtained by means of the successive approximations method.
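Since the displayed equation (1.1) is not reproduced above, the following minimal Euler–Maruyama sketch assumes the Itô form \(dX(t) = a_{1}(t, X(f(t)))\,dt + a_{2}(t, X(f(t)))\,dw(t)\) with delayed argument \(0 \leq f(\theta) \leq \theta\); the coefficient and delay choices in the example are hypothetical, for illustration only.

```python
import math
import random

def euler_maruyama(a1, a2, f, x0, T, n, rng):
    """Simulate one path of dX(t) = a1(t, X(f(t))) dt + a2(t, X(f(t))) dw(t).

    The delayed argument satisfies 0 <= f(t) <= t, so X(f(t)) is always a
    value already computed on the grid (we take the nearest grid point to
    the left of f(t))."""
    dt = T / n
    xs = [x0]
    for k in range(n):
        t = k * dt
        j = min(int(f(t) / dt), k)   # grid index for the delayed time f(t)
        x_del = xs[j]
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment, N(0, dt)
        xs.append(xs[-1] + a1(t, x_del) * dt + a2(t, x_del) * dw)
    return xs

# Hypothetical example: drift a1 = -x, diffusion a2 = 0.1, delay f(t) = t/2.
rng = random.Random(0)
path = euler_maruyama(lambda t, x: -x, lambda t, x: 0.1, lambda t: t / 2,
                      x0=1.0, T=1.0, n=200, rng=rng)
print(len(path), path[-1])
```

With a contractive drift and small noise the simulated path stays bounded, mirroring the a priori estimate of Lemma 2.1 below.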
Our goal here is to investigate problem (1.1) under general conditions on the functions \(a_{1}\) and \(a_{2}\); we show that the associated mapping C is a nonexpansive and condensing mapping having a unique fixed point, which is the solution of (1.1). On the other hand, we prove that if this solution satisfies the metric property of Diaz and Metcalf, the convergence of Kirk's process to this solution is ensured.
Definition 1.1
A probability space \((\Omega, \mathcal{F}, \mathbb{P})\) is a triplet in which Ω is a nonempty set, \(\mathcal{F}\) is a σ-algebra on Ω, and \(\mathbb{P}\) is a probability measure defined on \(\mathcal{F}\) (\(\mathbb{P}(\Omega) = 1\)).
Notice that some results concerning the existence of fixed point theorems involving probabilistic metric spaces can be found for example in [7–9].
A real random variable X is an \(\mathcal{F}\)-measurable function defined on Ω with values in \({\mathbb{R}}\). A family of random variables \(X_{t}(\omega)\) (\(t \geq0\)) (denoted also by \(X(t,\omega)\) or simply \(X_{t}\)) is called a stochastic process. For \(\omega\in\Omega\), the function \(t \longrightarrow X(t,\omega)\) is the path of the stochastic process \(X(t,\omega)\).
The mean value or expectation \(\mathbb{E}(X)\) of the random variable X is defined as the integral
if it exists.
Two random variables X and Y are said to be independent if, for any \(a, b \in{\mathbb{R}}\),
Definition 1.2
A Wiener process (also called Brownian motion) \(\{w_{t}\}_{t \geq0}\) is a stochastic process with the following properties:

(a)
\(w_{0}=0\);

(b)
for \(0< t_{1} <\cdots< t_{n}\) the random variables \(w_{t_{2}} - w_{t_{1}}, w_{t_{3}} - w_{t_{2}},\ldots,w_{t_{n}} - w_{t_{n-1}}\) are independent;

(c)
the random variables \(w_{t+s} - w_{t}\) have a normal distribution with zero expectation and variance s.
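Properties (a)–(c) can be checked empirically. The sketch below (plain Python, with hypothetical grid and sample sizes) samples Wiener paths as cumulative sums of independent Gaussian increments and verifies that \(w_{t+s} - w_{t}\) has mean ≈ 0 and variance ≈ s:

```python
import math
import random

rng = random.Random(42)

def wiener_path(T, n, rng):
    """Sample a Wiener path on a uniform grid: w_0 = 0 and independent
    Gaussian increments with variance equal to the time step."""
    dt = T / n
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return w

# Property (c): w_{t+s} - w_t is N(0, s).  Estimate mean and variance of
# increments of length s = 0.25 over many independent paths.
s_len, n, T, trials = 0.25, 100, 1.0, 4000
k = int(s_len / (T / n))                # number of grid steps covering s
incs = []
for _ in range(trials):
    w = wiener_path(T, n, rng)
    incs.append(w[k] - w[0])
mean = sum(incs) / trials
var = sum((x - mean) ** 2 for x in incs) / trials
print(round(mean, 3), round(var, 3))    # close to 0 and to 0.25
```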
Remark 1.1
If w is a random variable with zero expectation and standard deviation \(\sigma= \sqrt{\mathbb{E}(w^{2})}\), then \(\mathbb{E}|w| = \sqrt{\frac{2}{\pi}}\sigma\). Thus, we obtain
and hence the series \(\sum_{j} \mathbb{E}|w_{t^{n}_{j + 1}} - w_{t^{n}_{j}}|\) diverges as \(t_{j + 1}^{n} - t_{j}^{n}\longrightarrow0\), where \(0 < t_{1}^{n}< t_{2}^{n}<\cdots<t_{n}^{n} = T\).
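The divergence in Remark 1.1 can be made concrete: over n equal steps of \([0, T]\), each increment has expected absolute value \(\sqrt{2/\pi}\sqrt{T/n}\), so the sum equals \(n\sqrt{2/\pi}\sqrt{T/n} = \sqrt{2Tn/\pi}\), which grows like \(\sqrt{n}\). A short numerical illustration:

```python
import math

T = 1.0
# E|w_{t_{j+1}} - w_{t_j}| = sqrt(2/pi) * sqrt(dt) for a step of length dt,
# so over n equal steps the sum is sqrt(2 T n / pi): unbounded as n grows.
totals = []
for n in (100, 400, 1600):
    dt = T / n
    total = n * math.sqrt(2.0 / math.pi) * math.sqrt(dt)
    totals.append(total)
    print(n, round(total, 3))
```

Quadrupling n doubles the expected total variation, which is why Brownian paths have unbounded variation and the Itô integral cannot be defined pathwise as a Stieltjes integral.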
For a pair \((w(t), X(t))\) of a Wiener process \(w(t)\) and a random process \(X(t)\), we define the Itô integral as follows:
The Itô integral is not a classical integral; this is due to the nonsmoothness of the paths \(w(t)\) and the divergence of the series \(\sum_{j} \mathbb{E}|w_{t^{n}_{j + 1}} - w_{t^{n}_{j}}|\).
With a Wiener process, we can associate a filtration \(\mathcal{F}_{t}\) (\(\mathcal{F}_{t} \subset\mathcal{F}\)), \(0 \leq t \leq T\), which is the family of σ-algebras generated by the Brownian paths up to time t, in other words,
It is easy to show that the family \(\mathcal{F}_{t}\) is nondecreasing (with respect to the inclusion).
Definition 1.3
A random variable Y is said to be \(\mathcal{F}_{t}\)-measurable if knowledge of Y depends only on the information known up to time t.
Definition 1.4
A sequence of real random variables \(X_{n}\) on Ω converges to the random variable X in probability, written
if for every \(\epsilon> 0\)
Let \(\mathcal{M}_{2} ([0, T])\) denote the set of all functions \(X(t,\omega)\) defined and jointly measurable in \(t \in[0, T]\) and \(\omega\in\Omega\) which are also measurable with respect to \(\mathcal{F}_{t}\) for all \(t \in[0, T]\) and such that
In the sequel, without loss of generality, \(X(t, \cdot)\) will be denoted by \(X(t)\).
In the case where \(X(t) = X(t_{k})\) for \(t \in[t_{k}, t_{k + 1})\) (\(0 = t_{0} < t_{1} < t_{2} <\cdots< t_{n} = T\)), the integral \(\int_{0}^{T} X(t)\,dw(t)\) is given by the formula
In the general case where \(X(t)\) is an arbitrary element of \(\mathcal{M}_{2} ([0, T])\), then there exists a sequence of step functions \(X_{n}(t)\) such that
and the sequence \(\int_{0}^{T}X_{n}(t)\,dw(t)\) converges in probability to some limit ξ, which is called the Itô stochastic integral of \(X(t)\) denoted by \(\int _{0}^{T}X(t)\,dw(t)\).
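The step-function construction can be illustrated for the integrand \(X(t) = w(t)\), whose Itô integral is known to equal \((w_{T}^{2} - T)/2\); the left-endpoint sums below (hypothetical grid and trial counts) approach this value as the mesh refines:

```python
import math
import random

rng = random.Random(7)

def ito_integral_of_w(T, n, rng):
    """Left-endpoint (Itô) sum  Σ w(t_k) (w(t_{k+1}) - w(t_k))  for the
    integrand X(t) = w(t); the limit is known to be (w_T^2 - T)/2."""
    dt = T / n
    w, s = 0.0, 0.0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s += w * dw          # integrand evaluated at the LEFT endpoint
        w += dw
    return s, w

T, n = 1.0, 2000
errs = []
for _ in range(200):
    s, wT = ito_integral_of_w(T, n, rng)
    errs.append(abs(s - (wT * wT - T) / 2.0))
print(round(max(errs), 3))   # small: the sums approach (w_T^2 - T)/2
```

Evaluating at the left endpoint is essential: midpoint or right-endpoint sums converge to a different (Stratonovich-type) limit.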
Some properties of the Itô stochastic integral (see [6]):

(i)
Itô integral is linear;

(ii)
if \(\int_{0}^{T} \mathbb {E}(f(t)^{2})\,dt < \infty\), then
$$ \mathbb{E}\biggl( \int_{0}^{T}f(t)\,dw(t)\biggr) = 0, $$ (1.2)
and
$$ \mathbb{E}\biggl( \sup_{0 \leq s \leq\mu}\biggl| \int_{0}^{s}f(t)\,dw(t)\biggr|^{2}\biggr) \leq 4 \int_{0}^{\mu}\mathbb{E}\bigl(\bigl|f(t)\bigr|^{2}\bigr)\,dt \quad(0 \leq\mu\leq T). $$ (1.3)
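Property (1.3) can be probed by Monte Carlo for the simplest integrand \(f \equiv 1\), for which \(\int_{0}^{s} f\,dw = w_{s}\) and the right-hand side is 4μ; the sketch below (hypothetical sample sizes) estimates the left-hand side and checks the inequality:

```python
import math
import random

rng = random.Random(1)

# Check inequality (1.3) for f ≡ 1: the left-hand side is
# E( sup_{s <= mu} |w_s|^2 ) and the right-hand side is 4*mu.
mu, n, trials = 1.0, 200, 2000
dt = mu / n
sup2 = 0.0
for _ in range(trials):
    w, m = 0.0, 0.0
    for _ in range(n):
        w += rng.gauss(0.0, math.sqrt(dt))
        m = max(m, abs(w))       # running supremum of |w_s|
    sup2 += m * m
lhs = sup2 / trials
print(round(lhs, 3), "<=", 4 * mu)   # the Doob-type bound holds with slack
```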
In the sequel, we assume that in (1.1) the initial datum, the random variable \(X_{0}\), is \(\mathcal{F}_{0}\)-measurable.
Definition 1.5
The process \(X(t)\) is called a strong solution of (1.1) if the following three conditions are satisfied:
2 Main results
We denote by \(X_{T}\) the vector space of random functions \(\xi(t, \omega)\), measurable with respect to the σ-algebra \(\mathcal{F}_{t}\) for any \(t \in[0, T]\), such that \(\mathbb{P}(\{\omega\in\Omega \mid t\longrightarrow\xi(t, \omega) \mbox{ is continuous}\}) = 1\). We put \(\|\xi\|_{X_{T}} = \sqrt{\mathbb{E}(\sup_{0 \leq s \leq T}|\xi(s, \omega)|)^{2}}\). It is easy to show that \(\|\cdot\|_{X_{T}}\) defines a norm on \(X_{T}\).
Theorem 2.1
\((X_{T}, \|\cdot\|_{X_{T}})\) is a Banach space.
Proof
It suffices to prove that \(X_{T}\) is complete with respect to the norm \(\|\cdot\|_{X_{T}}\). Let \(\xi_{n}\) be a Cauchy sequence in \(X_{T}\); then we can extract a subsequence \(\xi_{n_{k}}\) which converges almost surely for all \(t \in[0, T]\). From the set of indices \(\{n_{k}\}\) (\(k \geq1\)), we choose a subset of integers \(\{m_{k}\}\) (\(k \geq1\)) such that
By multiplying by \(2^{k}\), we obtain
Using Chebyshev’s inequality, it follows that
Since the series \(\sum_{k = 1}^{+ \infty} 2^{-k}\) converges, the Borel–Cantelli lemma gives
Thus, for almost every \(\omega\in\Omega\), there exists \(r_{0} \geq1\) such that
It follows that the partial sums
converge uniformly on \([0, T]\); let \(\xi(t)\) be its limit (for this topology). This gives
and consequently
which shows that \(\xi_{m_{k}}\) converges to \(\xi(t)\) in \(X_{T}\).
Since \(\xi_{n}\) is a Cauchy sequence in \(X_{T}\) and contains a subsequence \(\xi_{m_{k}}\) which converges to ξ, \(\xi_{n}\) converges to ξ in \(X_{T}\), and by this we achieve the proof. □
Hereafter, the principal goal is to transform equation (1.1) into a fixed point problem. To this aim, we associate to it the following mapping C given by
It is easy to observe that C is defined on \(X_{T}\); in order for it to take its values in \(X_{T}\), we need the following additional conditions on the functions \(a_{1}\) and \(a_{2}\), which is the purpose of the following proposition.
Proposition 2.1
Assume that the following assumptions (called polynomial growth assumptions) are satisfied.
Then C is a self-mapping on \(X_{\eta}\) for all \(\eta\in[0, T]\).
Proof
Without loss of generality, we assume that \(X_{0} = 0\). Now, let \(x_{1}, x_{2} \in\mathbb{R}\), from the inequality \((x_{1} + x_{2})^{2}\leq2 x_{1}^{2} + 2 x_{2}^{2}\), we infer that
Using the Cauchy–Schwarz inequality, it follows that
By passing to the sup and using the monotonicity of the expectation, we obtain
Using (1.3) and the commutativity between the expectation and the integral, it follows that
By the assumptions given above, we get
Hence
The fact that \(0 \leq f(\theta) \leq\theta\) (\(\theta\in [0, T]\)) and the last inequality give
Thus \(\|CX\|_{X_{\eta}}\) is finite if \(\|X\|_{X_{\eta}}\) is finite, which completes the proof. □
If \(X(t)\) is a solution of equation (1.1) on the interval \([0, s]\), \(0 \leq s \leq T\), we denote \(\varphi(s) = \|X\|^{2}_{X_{s}}\).
Lemma 2.1
The function φ is bounded on the interval \([0, T]\).
Proof
From the inequality (2.9), by changing η by s, we infer that
Using the fact that \(X(t)\) is a solution of the equation (1.1) on the interval \([0, s]\) and \(0 \leq f(s) \leq s\), it follows that
where \(K = 2M(T^{2} + 4 T)\) and \(K' = 2M(T + 4)\).
Now, Gronwall’s lemma implies that
which gives the result. □
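The Gronwall step above can be illustrated on a discrete grid: iterating the integral inequality \(\varphi(s) \leq K + K' \int_{0}^{s}\varphi(u)\,du\) in its extremal (equality) form stays below the bound \(K e^{K's}\). The constants below are sample values, not those of the lemma.

```python
import math

# Discrete illustration of Gronwall's lemma: if
#   phi(s) <= K + K' * integral_0^s phi(u) du,
# then phi(s) <= K * exp(K' * s).  We iterate the extremal case of the
# inequality on a grid and compare with the exponential bound.
K, Kp, T, n = 2.0, 1.5, 1.0, 1000
dt = T / n
phi = [K]
integral = 0.0
for _ in range(n):
    integral += phi[-1] * dt     # left Riemann sum of the integral term
    phi.append(K + Kp * integral)
bound = K * math.exp(Kp * T)
print(round(phi[-1], 3), "<=", round(bound, 3))
```

The left Riemann sum underestimates the integral, so the discrete iterate stays (slightly) below the continuous bound, approaching it as the grid refines.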
We introduce here the Hausdorff measure of noncompactness, a nonnegative real-valued function measuring the degree of noncompactness of sets.
Let X be a complex Banach space and let \(\mathcal{P}(X)\) be the set of all subsets of X; we denote by \(B(x, r)\) and \(\overline{B}(x, r)\), respectively, the open and the closed ball of center x and radius \(r > 0\).
Definition 2.1
The Hausdorff measure of noncompactness \(\alpha(A)\) of \(A \in\mathcal{P}(X)\) is defined as the infimum of the numbers \(\epsilon> 0\) such that A has a finite ϵ-net in X. Recall that a set \(S \subseteq X\) is called an ϵ-net of A if \(A \subseteq S + \epsilon\overline{B}(0, 1) = \{s + \epsilon b: s \in S, b \in \overline{B}(0, 1) \}\).
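Definition 2.1 can be illustrated on a finite point set: the greedy procedure below builds a finite ϵ-net and verifies the covering condition \(A \subseteq S + \epsilon\overline{B}(0, 1)\) in the Euclidean plane (the data are hypothetical; for a totally bounded set such a finite net exists for every ϵ, so α = 0).

```python
import math
import random

def greedy_eps_net(points, eps):
    """Greedy construction of an ε-net S ⊆ points: keep a point only if it
    is farther than ε from every point kept so far.  Every skipped point is
    then within ε of some kept point, which is exactly the covering
    property of Definition 2.1."""
    net = []
    for p in points:
        if all(math.dist(p, q) > eps for q in net):
            net.append(p)
    return net

rng = random.Random(3)
pts = [(rng.random(), rng.random()) for _ in range(500)]
eps = 0.2
net = greedy_eps_net(pts, eps)
covered = all(any(math.dist(p, q) <= eps for q in net) for p in pts)
print(len(net), covered)
```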
The Hausdorff measure of noncompactness α enjoys the following properties:

(a)
regularity: \(\alpha(A) = 0\) if and only if A is totally bounded;

(b)
nonsingularity: α is equal to zero on every one-element set;

(c)
monotonicity: \(A_{1} \subseteq A_{2}\) implies \(\alpha (A_{1}) \leq\alpha(A_{2})\);

(d)
semiadditivity: \(\alpha(A_{1} \cup A_{2}) = \max\{ \alpha(A_{1}), \alpha(A_{2}) \}\);

(e)
Lipschitzianity: \(|\alpha(A_{1}) - \alpha(A_{2})| \leq \rho(A_{1}, A_{2})\); here ρ denotes the Hausdorff semimetric: \(\rho(A_{1}, A_{2}) = \inf\{\epsilon> 0: A_{1} + \epsilon\overline{B}(0, 1) \supset A_{2}, A_{2} + \epsilon\overline{B}(0, 1) \supset A_{1}\}\);

(f)
continuity: for any \(A_{1} \in\mathcal{P}(X)\) and any \(\epsilon > 0\), there exists \(\delta> 0\) such that \(|\alpha(A_{1}) - \alpha(A)| < \epsilon\) for all A satisfying \(\rho(A_{1}, A) < \delta\);

(g)
semihomogeneity: \(\alpha(t A) = |t| \alpha(A)\) for any number t;

(h)
algebraic semiadditivity: \(\alpha(A_{1} + A_{2}) \leq\alpha(A_{1}) + \alpha(A_{2})\);

(i)
invariance under translations: \(\alpha(A + x_{0}) = \alpha(A)\) for any \(x_{0} \in X\).
The goal of the following theorem is to address other properties in view of their importance.
Theorem 2.2
([1], Theorem 1.1.5)
The Hausdorff measure of noncompactness is invariant under passage to the closure and to the convex hull: \(\alpha(A) = \alpha(\overline{A}) = \alpha(\operatorname{co} A)\).
We note that the measure of noncompactness has many applications in mathematics. On this topic, we refer to [1, 10–13].
Definition 2.2
Let X be a Banach space. A function ψ defined on \(\mathcal{P}(X)\) with values in some partially ordered set \((\Gamma, \leq)\) is called a measure of noncompactness in the general sense if \(\psi(A) = \psi (\overline{co} A)\) for all \(A \in\mathcal{P}(X)\).
Definition 2.3
Let \((X, \|\cdot\|)\) be a normed space and let ϑ be one of the measures of noncompactness given above. A continuous mapping \(G: X \longrightarrow X\) is said to be densifying or condensing if, for every bounded subset A of X such that \(\vartheta(A) > 0\), we have \(\vartheta(G(A)) < \vartheta(A)\).
Let \(\mathcal{M}([0, T])\) be the vector space of scalar functions defined on \([0, T]\); it is partially ordered by the usual order ≤. Let \(\gamma: \mathcal{P}(X_{T})\longrightarrow\mathcal{M}([0, T])\) be defined by
Here
where \(\Lambda_{t} = \{ X_{t} = X_{[0, t]} : X \in\Lambda\} \subset X_{t}\) and \(\gamma_{t}\) is the measure of noncompactness of Hausdorff on the space \(X_{t}\).
Lemma 2.2
The function γ defines a measure of noncompactness in the general sense on \(X_{T}\) which is additively nonsingular (i.e., \(\gamma(A\cup\{X\}) = \gamma(A)\) for all \(A\subset X_{T}\) and \(X \in X_{T}\)).
Proof
Let \(A \subset X_{T}\). We have \(A \subset\overline{co}(A)\), which gives \(A_{t} \subset({\overline{co}(A)})_{t}\) for all \(t \in[0, T]\); the monotonicity of the Hausdorff measure of noncompactness implies that \(\gamma_{t}(A_{t}) \leq\gamma_{t}(({\overline{co}(A)})_{t})\) for all \(t \in[0, T]\), hence \(\gamma(A) \leq\gamma(\overline{co}(A))\). On the other hand
Again, the monotonicity and the invariance by closure of the Hausdorff measure of noncompactness leads to
Hence
It follows that \(\gamma(\overline{co}(A))\leq\gamma(A)\), which proves the first assertion. The fact that γ is additively nonsingular is trivial. □
Definition 2.4
Let \((X, \|\cdot\|)\) be a normed space and K a nonempty bounded subset of X. A self-mapping T on K is called nonexpansive if \(\|T(x) - T(y)\| \leq \|x - y\|\) for all \(x, y \in K\).
Now, let us introduce the following conditions:
 (\(\mathcal{H}_{1}\)):

\(|a_{1}(s, u_{1}) - a_{1}(s, u_{2})|^{2}\leq h(s)\, g(|u_{1} - u_{2}|^{2})\),
where \(h: [0, T]\longrightarrow[0, + \infty[\) is integrable with \(\int_{0}^{T}h(s)\,ds \leq \frac{1}{2T + 8}\) and
 (\(\mathcal{H}_{2}\)):

For all \(A > 0\), the inequality
$$\widetilde{h}(s) \leq A \int_{0}^{s}h(\theta) g\bigl(\widetilde{h}\bigl(f(\theta)\bigr)\bigr)\,d\theta,\quad 0 \leq s \leq T, $$
cannot admit nontrivial solutions.
 (\(\mathcal{H}_{3}\)):

\(\lambda(f^{-1}(B)) \longrightarrow0\) as \(\lambda(B) \longrightarrow0\); here λ is the Lebesgue measure and \(B \subset[0, t]\) (\(0 \leq t \leq T\)).
Remark 2.1
We note that if we take \(h(t) = \alpha\) (\(\alpha> 0\)), \(g(u) = u\), \(f(x) = x\), and \(T = -2 + \sqrt{4 + \frac{1}{2\alpha}}\), then the assumptions given in (\(\mathcal{H}_{1}\)), (\(\mathcal{H}_{2}\)), and (\(\mathcal{H}_{3}\)) are satisfied.
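Assuming the corrected value \(T = -2 + \sqrt{4 + \frac{1}{2\alpha}}\) (the positive root of \(T^{2} + 4T = \frac{1}{2\alpha}\)), the integral bound in (\(\mathcal{H}_{1}\)) holds with equality, since \(\alpha T(2T + 8) = 1\); a quick numerical check:

```python
import math

# Check of Remark 2.1 (assuming the corrected root T = -2 + sqrt(4 + 1/(2a))):
# for h ≡ alpha one has  ∫_0^T h(s) ds = alpha*T = 1/(2T + 8),
# so the bound in (H1) is attained with equality.
for alpha in (0.1, 0.5, 2.0):
    T = -2.0 + math.sqrt(4.0 + 1.0 / (2.0 * alpha))
    lhs = alpha * T                  # ∫_0^T h(s) ds
    rhs = 1.0 / (2.0 * T + 8.0)      # the (H1) threshold
    print(round(T, 4), round(lhs, 6), round(rhs, 6))
```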
Proposition 2.2
Under the assumption (\(\mathcal{H}_{1}\)), the mapping C defined by (2.1) is nonexpansive on every \(X_{t}\) (\(0 \leq t \leq T\)).
Proof
We have
By using the inequality \((x_{1} + x_{2})^{2}\leq2 x_{1}^{2} + 2 x_{2}^{2}\), we obtain
The Cauchy–Schwarz inequality enables us to write
Passing to the sup on \([0, t]\) and using the monotonicity of the expectation, it follows that
The stochastic inequality (1.3) gives
Therefore
The commutativity between the expectation and the integral together with the fact that g is concave gives
Hence
It follows that
Consequently,
This shows that C is a nonexpansive self-mapping on \(X_{t}\), which gives the result. □
Remark 2.2
We note that nonexpansive self-mappings on bounded subsets of Banach spaces do not necessarily have fixed points; we can refer, for example, to the famous work of Alspach [14], who gave an example of a weakly compact convex subset M of the space \(L_{1}([0, 1])\) and a fixed point free isometry on M.
In the sequel, we will need the following two lemmas; the first lemma is one of the classical results in measure theory.
Lemma 2.3
Let \(\epsilon> 0\) and let \(\phi: [0, T] \longrightarrow\mathbb{R}\) be a monotone function; then the set of discontinuity points of ϕ with jump of magnitude ≥ϵ is finite in \([0, T]\).
Lemma 2.4
For any \(\Lambda\subset X_{T}\), the function
is a nondecreasing bounded function on \([0, T]\).
Proof
Let \(t_{1}, t_{2} \in[0, T]\) such that \(t_{1} \leq t_{2}\); then \(\Lambda _{t_{1}} \subset\Lambda_{t_{2}}\). The monotonicity of the measure of noncompactness of Hausdorff gives \({\gamma}_{t_{1}}(\Lambda_{t_{1}}) \leq {\gamma}_{t_{2}}(\Lambda_{t_{2}})\) and implies that \(\gamma(\Lambda)(t_{1}) \leq\gamma(\Lambda)(t_{2})\), which proves the first assertion. The second assertion follows directly from the first one, indeed by the monotonicity, we deduce that \(\gamma(\Lambda)(t) \leq\gamma(\Lambda )(T)\) for all \(t \in[0, T]\). □
Theorem 2.3
Under the assumptions (\(\mathcal{H}_{1}\)), (\(\mathcal{H}_{2}\)), (\(\mathcal{H}_{3}\)), together with the polynomial growth conditions given in (2.2), C is a condensing mapping with respect to the measure of noncompactness γ on \(X_{t}\) for all \(t \in[0, T]\).
Proof
We show that if there exists \(A \subset X_{t}\) such that \(\gamma(A) \leq\gamma(C(A))\), then necessarily \(\gamma(A) = 0\). Let \(t > 0\) and \(\epsilon> 0\); by using Lemmas 2.4 and 2.3, we denote by \(\{t_{j}\}_{j = 1}^{m_{0}}\) the set of points in \([0, t]\) for which \(\gamma(A)(t_{j} + 0) - \gamma(A)(t_{j} - 0) \geq\epsilon\). It is easy to deduce that there exists \(\delta_{1}\) sufficiently small such that \(\gamma(A)(t') - \gamma(A)(t'') \geq\epsilon\) for all \(t'', t' \in\,]t_{j} - \delta_{1}, t_{j} + \delta_{1}[\) with \(t'' < t_{j} < t'\), for all \(j = 1, \ldots, m_{0}\). Letting \(\Im= [0, t]\backslash\bigcup_{j = 1}^{m_{0}}\,]t_{j} - \delta_{1}, t_{j} + \delta_{1}[\), we observe that \(\Im= \bigcup_{k = 1}^{m_{0} + 1} I_{k}\), where each \(I_{k}\) is a closed bounded interval with \(I_{k} \cap I_{j} = \emptyset\) for \(k \neq j\). On \(I_{k}\), the function \(t \longrightarrow\gamma(A)(t)\) is uniformly continuous; this implies the existence of \(\delta_{2} > 0\) sufficiently small such that, for all \(s, s' \in I_{k}\) with \(|s - s'| < \delta_{2}\), we have \(|\gamma(A)(s) - \gamma(A)(s')| < \epsilon\). On the other hand, in \(I_{k}\), we choose a finite set \(\{b_{k_{s}}\}_{s = 1}^{r_{k}}\) for which \(\delta_{2} < b_{k_{s}} - b_{k_{s} - 1}< \frac{3}{2}\delta_{2}\). Now, for all \(1 \leq s \leq r_{k}\) (\(k = 1, \ldots, m_{0} + 1\)), let \(\{M_{i_{1}}, M_{i_{2}}, M_{i_{3}}, \ldots, M_{i_{s}}\}\) be a \((\gamma(A)(b_{k_{s}}) + \epsilon)\)-net of the set \(A_{b_{k_{s}}}\). Thus, we can construct a family of paths \(\{G_{l}: l = 1, \ldots, h\}\) such that \(\mathbb{P}\{\omega\in\Omega\mid t \longrightarrow G_{l}(t)(\omega) \mbox{ is continuous}\ (l = 1, \ldots, h)\} = 1\) as follows: \(G_{l} \equiv M_{i_{f}}\) on the intervals \(J_{k_{s}} = [b_{k_{s} - 1} + \frac{\delta_{2}}{2}, b_{k_{s}} - \frac{\delta_{2}}{2}]\) for all \(1 \leq f \leq s\) (\(k = 1, \ldots, m_{0} + 1\)) (\(l = 1, \ldots, h\)) and linear on the complementary intervals.
On the other hand, since \(\gamma(A) \leq\gamma(C(A))\), we have \(\gamma(A)(\theta) \leq\gamma(C(A))(\theta)\) for all \(\theta\in[0, t]\). Let \(Z \in(C(A))_{\theta} = \{Y_{[0, \theta]} : Y \in C(A)\}\); this shows the existence of \(V \in A\) such that \(Y = C(V)\). Moreover, we have \(V_{[0, b_{k_{s}}]} \in A_{b_{k_{s}}}\) (\(1 \leq s \leq r_{k}\)) (\(k = 1, \ldots, m_{0} + 1\)); it follows that there exists \(1 \leq f_{0} \leq s\) for which
Since \(G_{l} \equiv M_{i_{f_{0}}}\) on \(J_{k_{s}}\), it follows that for \(\theta\in J_{k_{s}}\), we have
The uniform continuity of the function \(t \longrightarrow \gamma(A) (t)\) implies that for \(\theta\in J_{k_{s}}\), we have
This gives
We denote \(\chi^{\tau}_{k_{s}} = J_{k_{s}} \cap[0, \sup_{ 0 \leq\theta\leq \tau} f(\theta)]\).
By using the same techniques as in Proposition 2.2, we get
Since g is concave, it follows that
We put \(\kappa= 2\tau+ 8\) and \(\Re_{t} = [0, t] \backslash\bigcup_{1 \leq s \leq r_{k}, k = 1, \ldots, m_{0} + 1}f^{-1}(\chi^{\tau}_{k_{s}})\).
Since \(f^{-1}(\{\chi^{\tau}_{k_{s}}\}) \cap f^{-1}(\{\chi^{\tau}_{r_{m}}\}) = \emptyset\) (\(k \neq r\), \(s = 1, \ldots, r_{m}\), \(m = 1, \ldots, m_{0} + 1\)), we infer that
Let
and
The fact that g is nondecreasing and (2.10) lead to
The continuity of the integral shows that there exists \(\eta > 0\) such that, for every measurable set \(B \subset[0, \tau]\) with \(\lambda(B) < \eta\), we have
Moreover, the condition (\(\mathcal{H}_{3}\)) implies that it is always possible to choose \(\delta_{2}\) given above sufficiently small such that
Consequently, we can write
The definition of the measure of noncompactness of Hausdorff together with (2.11) leads to
By using the assumption (\(\mathcal{H}_{2}\)), we obtain \(\gamma(A) \equiv0\), which gives the result. □
Theorem 2.4
([1], p.26)
Let \(A: K \longrightarrow K\) be a mapping defined on a closed, bounded, convex subset K of a Banach space X. Assume that A is condensing with respect to the additively nonsingular measure of noncompactness in the general sense Ψ. Then A has at least one fixed point in K.
Theorem 2.5
The mapping \(C: X_{T} \longrightarrow X_{T}\) defined by (2.1) has a unique fixed point in \(X_{T}\).
Proof
The existence follows from Theorem 2.4. More precisely, for τ belonging to the interval \([0, -2 + \sqrt{4 + \frac{1}{2M}\frac{H}{H + 1}}]\), the inequality (2.8) shows that the random mapping \(C: X_{\tau}\longrightarrow X_{\tau}\) leaves \(\overline{B}_{X_{\tau}}(0, \sqrt{H})\) (the ball of center 0 and radius \(\sqrt{H}\)) invariant, which implies the existence of the solution \(X(t)\) in \(X_{\tau}\) (\(\tau\in[0, -2 + \sqrt{4 + \frac{1}{2M}\frac{H}{H + 1}}]\)); the result in \(X_{T}\) follows by extension to the whole interval \([0, T]\).
For the uniqueness, we proceed as follows. Assume that \(X = (X(t))_{t \in[0, T]}\) and \(Y = (Y(t))_{t \in[0, T]}\) are two strong solutions of equation (1.1) such that \(X(0) = Y(0) = 0\). In other words
and
It follows that
Using the inequality \((x_{1} + x_{2})^{2}\leq2 x_{1}^{2} + 2 x_{2}^{2}\) for \(x_{1}, x_{2} \in\mathbb{R}\), we get
The Cauchy–Schwarz inequality gives
Passing to the sup and expectation and using the assumption (\(\mathcal{H}_{1}\)) together with the stochastic inequality (1.3), we obtain
Hence
By the monotonicity of the expectation and the function g, we infer that
By assumption (\(\mathcal{H}_{2}\)), it follows that
for an arbitrary element \(u \in[0, T]\), consequently \(X(t) = Y(t)\), which implies the uniqueness of the strong solution. □
3 Application to the convergence of Kirk’s iterative process
Let us recall the following theorem due to Diaz and Metcalf [15].
Theorem 3.1
Let B be a continuous selfmapping on a metric space \((X, d)\) such that

(i)
\(\operatorname{Fix}(B) \neq\emptyset\) (\(\operatorname{Fix}(B)\) is the set of fixed points of B);

(ii)
for each \(y \in X\) such that \(y \notin\operatorname {Fix}(B)\), and for each \(z \in\operatorname{Fix}(B)\) we have
$$d\bigl(B(y), z\bigr) < d( y, z). $$
Then one, and only one, of the following properties holds:

(a)
for each \(x_{0} \in X\) the Picard sequence \(\{B^{n}(x_{0})\}\) contains no convergent subsequences;

(b)
for each \(x_{0} \in X\) the sequence \(\{B^{n}(x_{0})\}\) converges to a point belonging to \(\operatorname{Fix}(B)\).
Kirk iteration ([16]): Let \((X, \|\cdot\|)\) be a normed space and K a closed, convex, and bounded subset of X. Let A be a self-mapping on K. For each \(x \in K\), the sequence \(\{S^{n}(x)\}\) defined by \(S: K \longrightarrow K\), where
is said to be Kirk’s iterative process.
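Since the displayed formula for S is not reproduced above, the sketch below assumes the usual form of Kirk's process, \(S(x) = \sum_{i=0}^{k}\alpha_{i}A^{i}(x)\) with \(\alpha_{i} \geq 0\), \(\alpha_{1} > 0\), \(\sum_{i}\alpha_{i} = 1\). For a planar rotation A (a linear isometry, hence nonexpansive, with fixed point 0), the Picard iterates cycle and never converge, while the averaged Kirk iterates contract to the fixed point:

```python
import math

def rotate(p, theta):
    """Planar rotation: a linear isometry, hence nonexpansive, Fix = {0}."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def kirk_step(p, theta, alphas):
    """One step of Kirk's process S(x) = Σ_i α_i A^i(x), Σ α_i = 1,
    α_i ≥ 0, α_1 > 0 (assumed form; the displayed formula is not
    reproduced in the text)."""
    out = (0.0, 0.0)
    q = p                       # q runs through A^0(p), A^1(p), ...
    for a in alphas:
        out = (out[0] + a * q[0], out[1] + a * q[1])
        q = rotate(q, theta)
    return out

theta = 2.0 * math.pi / 3.0     # A^3 = I: Picard iterates cycle with period 3
alphas = (0.5, 0.5)             # S = (I + A)/2, spectral radius cos(π/3) = 1/2
p = (1.0, 0.5)
for _ in range(40):
    p = kirk_step(p, theta, alphas)
print(p)                        # contracts toward the fixed point (0, 0)
```

This illustrates Theorem 3.2: although A itself is not a contraction, the averaged mapping S shares its fixed point and its iterates converge.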
Theorem 3.2
[16]
Let K be a convex subset of a Banach space and \(A: K \longrightarrow K\) be a nonexpansive mapping. Then \(S(x) = x\) if and only if \(A(x) = x\).
Let \(T > 0\) and let \(H = K e^{K'T}\) be a positive real number which is an upper bound of the function \(\varphi(t) = \|X\|_{X_{t}}^{2}\) (\(0 \leq t \leq T\)). We denote by \(\overline{B}_{X_{t}}(0, \sqrt{H})\) the closed ball of center 0 and radius \(\sqrt{H}\) in \(X_{t}\).
Theorem 3.3
If there exists \(\tau_{0} \in[0, -2 + \sqrt{4+ \frac{1}{2M}\frac{H}{H + 1}}]\) such that \(C:\overline{B}_{X_{\tau_{0}}}(0, \sqrt{H}) \longrightarrow\overline{B}_{X_{\tau_{0}}}(0, \sqrt{H})\) satisfies the Diaz–Metcalf condition, then, for each \(X \in\overline{B}_{X_{\tau_{0}}}(0, \sqrt{H})\), Kirk's process \(\{S^{n}(X)\}\) (associated to the mapping C) converges to the unique fixed point of C.
Proof
By Theorem 2.5, the mapping C has a unique fixed point \(X^{\star}\). Moreover, S is a densifying mapping. Indeed, if K is a bounded subset of \(\overline{B}_{X_{\tau_{0}}}(0, \sqrt{H})\) such that \(\gamma(K) > 0\), then
Hence, by the monotonicity, semiadditivity, and homogeneity properties, we get
The fact that C is densifying shows that
and therefore
Also, by use of Theorem 2.4 together with Theorem 3.2, \(\operatorname{Fix}(S) = \operatorname{Fix}(C) = \{X^{\star}\}\).
For \(X \in K\), let
We have \(S(\widetilde{K}) = \bigcup_{n = 1}^{ + \infty}S^{n}\{X\} \subset\widetilde{K}\) and since \(\widetilde{K} = \{ X\} \cup S(\widetilde{K})\), this gives
Since S is densifying, we establish that \(\gamma(\widetilde {K}) = 0\), by the property of regularity, we establish that K̃ is totally bounded, and consequently \(\overline{\widetilde{K}}\) is compact (since \(X_{\tau_{0}}\) is a Banach space), therefore the sequence \(\{S^{n}(X)\}\) contains a convergent subsequence.
On the other hand, the fact that C satisfies the Diaz–Metcalf condition shows that, for all \(X \in\overline{B}_{X_{\tau_{0}}}(0, \sqrt{H})\backslash\{X^{\star}\}\), we have
This implies that
Thus,
Then S also satisfies the assumptions of Theorem 3.1; this implies that \(\lim_{n \longrightarrow+ \infty} S^{n}(X)\) exists and is equal to \(X^{\star}\). □
Remark 3.1
Let K be a closed, bounded, and convex subset of a Banach space X and let \(A: K \longrightarrow K\) be a mapping. In the literature, sufficient conditions on A are given to ensure the convergence or the weak convergence of Kirk's process \(\{S^{n}(x)\}\) to the fixed point of A for each \(x \in K\) (see for example [16–18]), under the additional condition that the space X is uniformly convex or strictly convex. Unfortunately, in our case the Banach space \(X_{T}\) is neither strictly convex nor uniformly convex; indeed, it suffices to consider its subspace of functions \(\zeta(t)\) independent of ω, equipped with the sup norm, to see that \(X_{T}\) does not have these properties.
References
Akhmerov, RR, Kamenskii, MI, Potapov, AS, Rodkina, AE, Sadovskii, BN: Measures of Noncompactness and Condensing Operators. Birkhäuser, Basel (1992)
Yamada, T, Watanabe, S: On the uniqueness of solutions of stochastic differential equations. I. J. Math. Kyoto Univ. 11(1), 155–167 (1971)
Yamada, T, Watanabe, S: On the uniqueness of solutions of stochastic differential equations. II. J. Math. Kyoto Univ. 11(3), 533–563 (1971)
Rodkina, A: Solubility of stochastic differential equations with perturbed argument. Ukr. Math. J. 37(1), 98–103 (1985)
Veretennikov, AY: On strong solutions of stochastic differential equations. Teor. Veroâtn. Primen. 24(2), 348–360 (1979)
Gikhman, II, Skorohod, AV: Introduction to the Theory of Random Processes. Nauka, Moscow (1965) (in Russian)
Pap, E, Hadzic, O, Mesiar, R: A fixed point theorem in probabilistic metric spaces and applications. J. Math. Anal. Appl. 202(2), 431–440 (1996)
Sehgal, VM, Bharucha-Reid, AT: Fixed points of contraction mappings on probabilistic metric spaces. Theory Comput. Syst. 6(1), 97–102 (1972)
Sen, MD, Karapınar, E: Some results on best proximity points of cyclic contractions in probabilistic metric spaces. J. Funct. Spaces 2015, Article ID 470574 (2015)
Aghajani, A, Pourhadi, E: Application of measure of noncompactness to \(l_{1}\) solvability of infinite systems of second order differential equations. Bull. Belg. Math. Soc. Simon Stevin 22(1), 105–118 (2015)
Ezzinbi, K, Taoudi, MA: Sadovskii–Krasnosel'skii type fixed point theorems in Banach spaces with application to evolution equations. J. Appl. Math. Comput. (2014). doi:10.1007/s12190-014-0836-8
Losada, J, Nieto, JJ, Pourhadi, E: On the attractivity of solutions for a class of multiterm fractional functional differential equations. J. Comput. Appl. Math. (Available online 23 July 2015)
Mursaleen, M, Noman, A: Hausdorff measure of noncompactness of certain matrix operators on the sequence of generalized means. J. Math. Inequal. Appl. 417, 96–111 (2014)
Alspach, DE: A fixed point free nonexpansive map. Proc. Am. Math. Soc. 3, 423–424 (1981)
Diaz, JB, Metcalf, FT: On the set of subsequential limit points of successive approximations. Trans. Am. Math. Soc. 135, 459–485 (1969)
Kirk, WA: On successive approximations for nonexpansive mappings in Banach spaces. Glasg. Math. J. 12(1), 6–9 (1971)
Barbuti, U, Guerra, S: Un teorema costruttivo di punto fisso negli spazi di Banach [A constructive fixed point theorem in Banach spaces]. Rend. Ist. Mat. Univ. Trieste 4, 115–122 (1972)
Ray, BK, Singh, SP: Fixed point theorems in Banach space. Indian J. Pure Appl. Math. 9, 216–221 (1978)
Acknowledgements
The authors would like to thank the editor and the anonymous referees for their remarks and valuable comments, which helped to improve this paper.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Dehici, A., Redjel, N. Measure of noncompactness and application to stochastic differential equations. Adv Differ Equ 2016, 28 (2016). https://doi.org/10.1186/s13662-016-0748-z