This section is devoted to the almost sure asymptotic stability of system (2.6). Since system (2.6) has a rather complex form, for convenience we simplify it with corresponding symbols as follows:
$$ dx(t)=f \bigl(x(t-) \bigr)\,dt+g \bigl(x(t-) \bigr)\,dB(t)+ \int _{R^{m}\setminus \{0\}}h \bigl(x(t-),y \bigr) \widetilde{N}(dt,dy), $$
(3.1)
where \(f(x(t-))\), \(g(x(t-))\), and \(h(x(t-),y)\) denote the corresponding parts of system (2.6). For this simplified form, we introduce the differential operator L in the following assumption.
Assumption A2
Assume that \(V\in C^{1,2}(R_{+}\times R^{n};R_{+})\), \(\lambda \in L^{1}(R_{+};R_{+})\), and \(\mu : R^{n}\rightarrow R_{+}\) are continuous and nonnegative. For any \((t,x)\in R_{+}\times R^{n}\),
$$ LV(t,x)\leq \lambda (t)-\mu (x),\quad \mu (0)=0, $$
where L is the differential operator associated with equation (3.1), acting on a function V in the following form:
$$ \begin{aligned} LV(t,x)={}&V_{t}(t,x)+V_{x}(t,x)f(x)+ \frac{1}{2}\operatorname{tr} \bigl[g^{T}(x)V_{xx}(t,x)g(x) \bigr] \\ &{}+ \int _{R^{m}\setminus \{0\}} \bigl[V \bigl(t,x+h(x,y) \bigr)-V(t,x)-V_{x}(t,x)h(x,y) \bigr] \pi (dy), \end{aligned} $$
where
$$ \begin{gathered} V_{t}(t,x)=\frac{\partial V(t,x)}{\partial t},\quad\quad V_{x}(t,x)= \biggl(\frac{\partial V(t,x)}{\partial x_{1}},\ldots, \frac{\partial V(t,x)}{\partial x_{n}} \biggr), \\ V_{xx}(t,x)= \biggl( \frac{\partial ^{2} V(t,x)}{\partial x_{i}\partial x_{j}} \biggr)_{n \times n}. \end{gathered} $$
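For a scalar equation (\(n=m=1\)), the operator L above can be evaluated numerically. The sketch below uses central finite differences for the derivatives of V and a midpoint quadrature for the jump integral; the functions f, g, h, and the jump density are hypothetical user-supplied inputs, not taken from system (2.6).

```python
import math

def LV(V, f, g, h, jump_density, t, x, y_grid=None, eps=1e-5):
    """Numerical sketch of LV(t, x) from Assumption A2 in the scalar case.

    V(t, x) is the Lyapunov candidate; f, g, h(x, y) are the drift,
    diffusion, and jump coefficients; jump_density is the density of the
    Levy measure pi on (0, infinity). All inputs are illustrative.
    """
    # Central finite-difference partial derivatives of V.
    Vt = (V(t + eps, x) - V(t - eps, x)) / (2 * eps)
    Vx = (V(t, x + eps) - V(t, x - eps)) / (2 * eps)
    Vxx = (V(t, x + eps) - 2 * V(t, x) + V(t, x - eps)) / eps**2

    drift = Vx * f(x)
    diffusion = 0.5 * g(x) ** 2 * Vxx

    # Jump part: midpoint quadrature of
    #   int [V(t, x + h(x,y)) - V(t,x) - Vx * h(x,y)] pi(dy).
    jump = 0.0
    if y_grid is not None:
        for y0, y1 in zip(y_grid[:-1], y_grid[1:]):
            ym = 0.5 * (y0 + y1)
            jump += (V(t, x + h(x, ym)) - V(t, x)
                     - Vx * h(x, ym)) * jump_density(ym) * (y1 - y0)

    return Vt + drift + diffusion + jump
```

For linear coefficients \(f(x)=k_{1}x\), \(g(x)=k_{2}x\), \(h(x,y)=k_{3}xy\) with \(V(t,x)=x^{2}\) and \(\pi (dy)=e^{-y}\,dy\), this reproduces the closed-form value \([2k_{1}+k_{2}^{2}+k_{3}^{2}\int y^{2}\pi (dy)]x^{2}\).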
The almost sure stability result of this section reads as follows.
Theorem 3.1
Let Assumption A2 hold. Further, suppose that there is a function \(V\in C^{1,2}(R_{+}\times R^{n};R_{+})\) such that, for all \(x\in R^{n}\), \(t\geq 0\),
$$ \alpha _{1} \bigl(t, \vert x \vert \bigr)\leq V(t,x) \leq \alpha _{2} \bigl(t, \vert x \vert \bigr), $$
(3.2)
where \(\alpha _{1}(t,x)\), \(\alpha _{2}(t,x)\) belong to \(\mathcal{K}_{\infty }\) with respect to x. Then, for any initial data \(x_{0}\), the trivial solution \(x(t;x_{0})\) of system (3.1) is almost surely asymptotically stable, i.e.,
$$ \lim_{t\rightarrow \infty }x(t;x_{0})=0,\quad \textit{a.s.} $$
Proof
For notational simplicity, we write \(x(t)\) instead of \(x(t;x_{0})\). For \(x_{0}=0\), the theorem holds trivially since \(x(t)\equiv 0\) a.s. We therefore assume \(x_{0}\neq 0\) and, owing to the length of the argument, divide the proof into three steps.
Step 1: In this step, we show that system (3.1) is stable in probability and decompose the sample space. Applying the Itô formula to \(V(t,x)\) and using Assumption A2, for any \(t>0\), we have
$$ \begin{aligned} V \bigl(t,x(t) \bigr)={}&V(0,x_{0})+ \int _{0}^{t}LV \bigl(s,x(s-) \bigr)\,ds + \int _{0}^{t}V_{x} \bigl(s,x(s-) \bigr)g \bigl(x(s-) \bigr)\,dB_{j}(s) \\ &{}+ \int _{0}^{t} \int _{R^{m}\setminus \{0\}} \bigl[V \bigl(s,x(s-)+h \bigl(x(s-),y \bigr) \bigr)-V \bigl(s,x(s-) \bigr) \bigr] \widetilde{N}(ds,dy) \\ \leq{}& V(0,x_{0})+ \int _{0}^{t}\lambda (s)\,ds- \int _{0}^{t}\mu \bigl(x(s-) \bigr)\,ds+ \int _{0}^{t}V_{x} \bigl(s,x(s-) \bigr)g \bigl(x(s-) \bigr)\,dB_{j}(s) \\ & {} + \int _{0}^{t} \int _{R^{m}\setminus \{0\}} \bigl[V \bigl(s,x(s-)+h \bigl(x(s-),y \bigr) \bigr)-V \bigl(s,x(s-) \bigr) \bigr] \widetilde{N}(ds,dy) \\ \leq{}& V(0,x_{0})+ \int _{0}^{t}\lambda (s)\,ds+ \int _{0}^{t}V_{x} \bigl(s,x(s-) \bigr)g \bigl(x(s-) \bigr)\,dB_{j}(s) \\ & {} + \int _{0}^{t} \int _{R^{m}\setminus \{0\}} \bigl[V \bigl(s,x(s-)+h \bigl(x(s-),y \bigr) \bigr)-V \bigl(s,x(s-) \bigr) \bigr] \widetilde{N}(ds,dy) \\ \doteq{}& V_{t}(x_{0})+M(t), \end{aligned} $$
where
$$ V_{t}(x_{0})=V(0,x_{0})+ \int _{0}^{t}\lambda (s)\,ds $$
and
$$ \begin{aligned} M(t)={}& \int _{0}^{t}V_{x} \bigl(s,x(s-) \bigr)g \bigl(x(s-) \bigr)\,dB_{j}(s) \\ &{}+ \int _{0}^{t} \int _{R^{m} \setminus \{0\}} \bigl[V \bigl(s,x(s-)+h \bigl(x(s-),y \bigr) \bigr)-V \bigl(s,x(s-) \bigr) \bigr]\widetilde{N}(ds,dy). \end{aligned} $$
Due to \(x_{0}\in D^{b}_{{\mathcal{F}}_{0}}([-\tau ,0];R^{n})\) and \(\int ^{\infty }_{0}\lambda (t)\,dt<\infty \), \(V_{t}(x_{0})\) is bounded, and \(M(t)\) is a supermartingale with respect to the filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) generated by \(B(\cdot )\) and \(\widetilde{N}(\cdot ,\cdot )\). By the supermartingale inequality (Rogers & Williams [19], p. 154, (54.5)), for any function \(\delta (\cdot )\in \mathcal{K}_{\infty }\), one has
$$ \mathbb{P} \Bigl\{ \sup_{0\leq s\leq t}V \bigl(s,x(s) \bigr)< \delta \bigl(V_{t}(x_{0}) \bigr) \Bigr\} \geq 1- \frac{V_{t}(x_{0})}{\delta (V_{t}(x_{0}))}, \quad t\geq 0. $$
(3.3)
It follows from \(\sup_{0\leq s\leq t}V(s,x(s))<\delta (V_{t}(x_{0}))\) that \(\sup_{0\leq s\leq t} \vert x(s) \vert <\upsilon _{t}(V_{t}(x_{0}))\), where \(\upsilon _{t}=\alpha ^{-1}_{1}\circ \delta \) and \(\alpha ^{-1}_{1}\) is the inverse function of \(\alpha _{1}\) with respect to x. For any given \(\epsilon >0\), we may choose the function \(\delta (\cdot )\) so that \(\frac{V_{t}(x_{0})}{\delta (V_{t}(x_{0}))}\leq \epsilon \). Then, by (3.3), for \(t>0\),
$$ \mathbb{P} \Bigl\{ \sup_{0\leq s\leq t} \bigl\vert x(s) \bigr\vert < \upsilon _{t} \bigl(V_{t}(x_{0}) \bigr) \Bigr\} \geq 1-\epsilon , \quad t\geq 0, $$
which yields
$$ \mathbb{P} \bigl\{ \bigl\vert x(s) \bigr\vert < \upsilon _{t} \bigl(V_{t}(x_{0}) \bigr) \bigr\} \geq 1-\epsilon , \quad t \geq 0. $$
Let us decompose the sample space
$$\begin{aligned}& \Omega _{1}= \Bigl\{ \omega :\limsup_{t\rightarrow \infty }\mu \bigl(x(t, \omega ) \bigr)=0 \Bigr\} , \\& \Omega _{2}= \Bigl\{ \omega :\liminf_{t\rightarrow \infty }\mu \bigl(x(t, \omega ) \bigr)>0 \Bigr\} , \\& \Omega _{3}= \Bigl\{ \omega :\liminf_{t\rightarrow \infty }\mu \bigl(x(t, \omega ) \bigr)=0\text{ and }\limsup_{t\rightarrow \infty }\mu \bigl(x(t, \omega ) \bigr)>0 \Bigr\} . \end{aligned}$$
In order to obtain the results, we will show that \(\mathbb{P}(\Omega _{2})=\mathbb{P}(\Omega _{3})=0\), which implies that \(\mathbb{P}(\Omega _{1})=1\).
Step 2: In this step, we prove \(\mathbb{P}(\Omega _{2})=0\). By the Itô formula and Assumption A2, we have
$$ \begin{aligned} EV \bigl(t,x(t) \bigr)={}&V(0,x_{0})+E \biggl\{ \int _{0}^{t}LV \bigl(s,x(s-) \bigr)\,ds \biggr\} \\ \leq{}& V_{t}(x_{0})-E \biggl\{ \int _{0}^{t}\mu \bigl(x(s) \bigr)\,ds \biggr\} , \end{aligned} $$
(3.4)
where \(E(\cdot )\) denotes mathematical expectation. Since \(V(t,x)\geq 0\), this yields \(E\{\int _{0}^{t}\mu (x(s))\,ds\}\leq V_{t}(x_{0})\). Letting \(t\rightarrow \infty \) and using Fatou’s lemma give \(E\{\int _{0}^{\infty }\mu (x(s))\,ds\}\leq C_{V_{t}}\), where \(C_{V_{t}}\) is an upper bound of \(V_{t}(x_{0})\). Since μ is nonnegative, it follows that \(\int _{0}^{\infty }\mu (x(s))\,ds<\infty \) a.s., which implies \(\mathbb{P}(\Omega _{2})=0\).
Step 3: In this step, we prove \(\mathbb{P}(\Omega _{3})=0\) by contradiction. Suppose that \(\mathbb{P}(\Omega _{3})>0\); then there exist \(\epsilon _{0}>0\) and \(\epsilon _{1}>0\) such that
$$ \begin{aligned}& \mathbb{P} \bigl\{ \mu \bigl(x(\cdot ) \bigr)\text{ cross from below }\epsilon _{1} \text{ to above }2\epsilon _{1} \\ &\quad \text{ and back infinitely many times} \bigr\} \geq \epsilon _{0}. \end{aligned} $$
(3.5)
For \(r>0\), let \(\rho _{r}=\inf \{t>0: \vert x(t;x_{0}) \vert \geq r, x_{0}\neq 0\}\), and recall the local boundedness of \(f(x)\), \(g(x)\), and \(\int _{R^{m}\setminus \{0\}}h(x,y)\pi (dy)\). Then there exist constants \(C_{f}, C_{g}, C_{h}\in R_{+}\) such that \(\sup_{ \vert x \vert < r} \vert f(x) \vert \leq C_{f}\), \(\sup_{ \vert x \vert < r} \vert g(x) \vert \leq C_{g}\), and \(\sup_{ \vert x \vert < r}\int _{R^{m}\setminus \{0\}} \vert h(x,y) \vert ^{2}\pi (dy)< C_{h}^{2}\). By direct calculation, we get
$$ \begin{aligned} &E \Bigl\{ \sup_{0\leq s\leq t} \bigl\vert x(s\wedge \rho _{r})-x_{0} \bigr\vert ^{2} \Bigr\} \\ &\quad =E \biggl\{ \sup_{0\leq s\leq t} \biggl\vert \int _{0}^{s\wedge \rho _{r}}f \bigl(x(q-) \bigr)\,dq+ \int _{0}^{s\wedge \rho _{r}}g \bigl(x(q-) \bigr) \,dB_{j}(q) \\ &\quad\quad {}+ \int _{0}^{s\wedge \rho _{r}} \int _{R^{m}\setminus \{0\}}h \bigl(x(q-),y \bigr) \widetilde{N}(dq,dy) \biggr\vert ^{2} \biggr\} \\ &\quad \leq 3E \biggl\{ \sup_{0\leq s\leq t} \biggl\vert \int _{0}^{s\wedge \rho _{r}}f \bigl(x(q-) \bigr)\,dq \biggr\vert ^{2} \biggr\} +3E \biggl\{ \sup_{0\leq s\leq t} \biggl\vert \int _{0}^{s\wedge \rho _{r}}g \bigl(x(q-) \bigr) \,dB_{j}(q) \biggr\vert ^{2} \biggr\} \\ &\quad\quad{} +3E \biggl\{ \sup_{0\leq s\leq t} \biggl\vert \int _{0}^{s\wedge \rho _{r}} \int _{R^{m}\setminus \{0\}}h \bigl(x(q-),y \bigr)\widetilde{N}(dq,dy) \biggr\vert ^{2} \biggr\} \\ &\quad \leq 3C_{f}^{2}t^{2}+3E \biggl\{ \sup_{0\leq s\leq t} \biggl\vert \int _{0}^{s \wedge \rho _{r}}g \bigl(x(q-) \bigr) \,dB_{j}(q) \biggr\vert ^{2} \biggr\} \\ &\quad\quad{} +3E \biggl\{ \sup_{0\leq s\leq t} \biggl\vert \int _{0}^{s\wedge \rho _{r}} \int _{R^{m}\setminus \{0\}}h \bigl(x(q-),y \bigr)\widetilde{N}(dq,dy) \biggr\vert ^{2} \biggr\} . \end{aligned} $$
(3.6)
Combining Burkholder’s inequality (Applebaum [1], Chap. 4, Theorem 4.4.21) with Doob’s martingale inequality (Applebaum [1], Chap. 4, Theorem 4.4.22), we obtain
$$ \begin{aligned} &E \biggl\{ \sup_{0\leq s\leq t} \biggl\vert \int _{0}^{s\wedge \rho _{r}}g \bigl(x(q-) \bigr) \,dB_{j}(q) \biggr\vert ^{2} \biggr\} \\ &\quad \leq 4E \biggl\{ \int _{0}^{t\wedge \rho _{r}} \bigl\vert g \bigl(x(q-) \bigr) \bigr\vert ^{2}\,dq \biggr\} \\ &\quad \leq 4C_{g}^{2}t. \end{aligned} $$
(3.7)
Applying Kunita’s first inequality (Applebaum [1], Chap. 4, Theorem 4.4.23), we get
$$ \begin{aligned} &E \biggl\{ \sup_{0\leq s\leq t} \biggl\vert \int _{0}^{s\wedge \rho _{r}} \int _{R^{m}\setminus \{0\}}h \bigl(x(q-),y \bigr)\widetilde{N}(dq,dy) \biggr\vert ^{2} \biggr\} \\ &\quad \leq 4E \Biggl\{ \sum_{i=1}^{m} \int _{0}^{t\wedge \rho _{r}} \int _{R^{m} \setminus \{0\}} \bigl\vert h \bigl(x(q-),y \bigr) \bigr\vert ^{2}\pi (dy)\,dq \Biggr\} \\ &\quad \leq 4C_{h}^{2}t. \end{aligned} $$
(3.8)
Substituting (3.7) and (3.8) into (3.6), we derive
$$ E \Bigl\{ \sup_{0\leq s\leq t} \bigl\vert x(s\wedge \rho _{r})-x_{0} \bigr\vert ^{2} \Bigr\} \leq 3C_{f}^{2}t^{2}+12C_{g}^{2}t+12C_{h}^{2}t, $$
and further by Chebyshev’s inequality, for any \(\vartheta >0\),
$$ \begin{aligned} &\mathbb{P} \Bigl\{ \sup_{0\leq s\leq t} \bigl\vert x(s\wedge \rho _{r})-x_{0} \bigr\vert >\vartheta \Bigr\} \\ &\quad \leq \frac{E\{\sup_{0\leq s\leq t} \vert x(s\wedge \rho _{r})-x_{0} \vert ^{2}\}}{\vartheta ^{2}} \\ &\quad \leq \frac{3C_{f}^{2}t^{2}+12C_{g}^{2}t+12C_{h}^{2}t}{\vartheta ^{2}}. \end{aligned} $$
Since \(\mu (\cdot )\) is continuous, it is uniformly continuous on the closed ball \(\mathcal{O}:=\{x\in R^{n}: \vert x \vert \leq \upsilon _{t}(k)\}\), where \(\upsilon _{t}=\alpha ^{-1}_{1}\circ \delta \). Hence there is a function \(\gamma \in \mathcal{K}\) such that, for any \(x,y\in \mathcal{O}\) and \(\varrho >0\), \(\vert x-y \vert \leq \gamma (\varrho )\) implies \(\vert \mu (x)-\mu (y) \vert \leq \varrho \). Then, for \(\vert x_{0} \vert \leq r\) and \(\epsilon _{2}>0\),
$$ \begin{aligned} &\mathbb{P} \Bigl\{ \sup_{0\leq s\leq t} \bigl\vert \mu \bigl(x(s) \bigr)- \mu (x_{0}) \bigr\vert >\epsilon _{2} \Bigr\} \\ &\quad \leq \mathbb{P} \Bigl\{ \sup_{0\leq s\leq t} \bigl\vert x(s)-x_{0} \bigr\vert >\gamma ( \epsilon _{2})\text{ and }\sup_{0\leq s\leq t} \bigl\vert x(s) \bigr\vert < \upsilon _{t}(r) \Bigr\} + \mathbb{P} \Bigl\{ \sup_{0\leq s\leq t} \bigl\vert x(s) \bigr\vert \geq \upsilon _{t}(r) \Bigr\} \\ &\quad \leq \mathbb{P} \Bigl\{ \sup_{0\leq s\leq t} \bigl\vert x(s \wedge \rho _{v_{t}(r)})-x_{0} \bigr\vert > \gamma (\epsilon _{2}) \Bigr\} + \mathbb{P} \Bigl\{ \sup_{0\leq s\leq t} \bigl\vert x(s) \bigr\vert \geq \upsilon _{t}(r) \Bigr\} \\ &\quad \leq \frac{3C_{f}^{2}t^{2}+12C_{g}^{2}t+12C_{h}^{2}t}{\gamma (\epsilon _{2})^{2}}+ \epsilon . \end{aligned} $$
Set \(\epsilon =\frac{1}{2}\). For any \(\epsilon _{2}>0\), there exists \(t^{*}=t^{*}(k,\epsilon _{2})\) such that
$$ \mathbb{P} \Bigl\{ \sup_{0\leq s\leq t} \bigl\vert \mu \bigl(x(s) \bigr)- \mu (x_{0}) \bigr\vert \leq \epsilon _{2} \Bigr\} \geq \frac{1}{4},\quad \forall t\in (0,t^{*}]. $$
Define a sequence of stopping times
$$\begin{aligned}& \mathcal{T}_{1}:=\inf \bigl\{ t\geq 0:\mu \bigl(x(t) \bigr)< \epsilon _{1} \bigr\} , \\& \mathcal{T}_{2n}:=\inf \bigl\{ t\geq \mathcal{T}_{2n-1}:\mu \bigl(x(t) \bigr)>2\epsilon _{1} \bigr\} ,\quad n=1,2,\ldots, \\& \mathcal{T}_{2n+1}:=\inf \bigl\{ t\geq \mathcal{T}_{2n}:\mu \bigl(x(t) \bigr)< \epsilon _{1} \bigr\} ,\quad n=1,2,\ldots, \end{aligned}$$
and set \(\inf \emptyset =\infty \). By (3.4), it is easy to get
$$\begin{aligned} \infty >{}&E \int _{0}^{\infty }\mu \bigl(x(s) \bigr)\,ds \\ \geq{}&\sum_{n=1}^{\infty }E \biggl[I_{\{\mathcal{T}_{2n}< \rho _{r}\}} \int _{\mathcal{T}_{2n}}^{\mathcal{T}_{2n+1}}\mu \bigl(x(s) \bigr)\,ds \biggr] \\ \geq{}&\epsilon _{1}\sum_{n=1}^{\infty }E \bigl[I_{\{\mathcal{T}_{2n}< \rho _{r}\}}( \mathcal{T}_{2n+1}-\mathcal{T}_{2n}) \bigr] \\ ={}&\epsilon _{1}\sum_{n=1}^{\infty }E \bigl[I_{\{\mathcal{T}_{2n}< \rho _{r}\}}E( \mathcal{T}_{2n+1}-\mathcal{T}_{2n}\vert \mathcal{F}_{ \mathcal{T}_{2n}}) \bigr]. \end{aligned}$$
(3.9)
By the strong Markov property and setting \(\epsilon _{1}=2\epsilon _{2}\), we obtain
$$ \begin{aligned} &E(\mathcal{T}_{2n+1}- \mathcal{T}_{2n}\vert \mathcal{F}_{ \mathcal{T}_{2n}}) \\ &\quad \geq E \bigl[(\mathcal{T}_{2n+1}-\mathcal{T}_{2n})I_{\{\sup_{0 \leq s\leq t^{*}}\vert \mu (\tilde{x}(s))-\mu ( \tilde{x}_{0})\vert \leq \epsilon _{1}/2\}}\big\vert \mathcal{F}_{\mathcal{T}_{2n}} \bigr] \\ &\quad \geq t^{*}\mathbb{P} \biggl\{ \sup_{0\leq s\leq t^{*}} \bigl\vert \mu \bigl(\tilde{x}(s) \bigr)- \mu (\tilde{x}_{0}) \bigr\vert \leq \frac{\epsilon _{1}}{2}\Big\vert \mathcal{F}_{ \mathcal{T}_{2n}} \biggr\} \\ &\quad \geq \frac{t^{*}}{4}, \end{aligned} $$
(3.10)
where the estimate holds on \(\{\mathcal{T}_{2n}<\rho _{r}\}\), with \(t^{*}=t^{*}(k,\epsilon _{1}/2)\) and \(\tilde{x}=x(\cdot +\mathcal{T}_{2n})\). Substituting (3.10) into (3.9), we have
$$ \frac{t^{*}\epsilon _{1}}{4}\sum_{n=1}^{\infty }\mathbb{P}\{ \mathcal{T}_{2n}< \rho _{r}\}< \infty . $$
This, together with the Borel–Cantelli lemma, yields
$$ \mathbb{P}\{\mathcal{T}_{2n}< \rho _{r}\text{ for infinitely many }n\}=0. $$
Since
$$ \begin{aligned} \{\mathcal{T}_{2n}< \rho _{r}\text{ for infinitely many }n\} ={}&\{\mathcal{T}_{2n}< \rho _{r} \text{ for infinitely many }n \text{ and }\rho _{r}=\infty \} \\ &{}\cup \{\mathcal{T}_{2n}< \rho _{r}\text{ for infinitely many }n \text{ and }\rho _{r}< \infty \}, \end{aligned} $$
then
$$ \mathbb{P}\{\mathcal{T}_{2n}< \infty \text{ for infinitely many }n \text{ and }\rho _{r}=\infty \}=0. $$
(3.11)
By (3.2), for any \(k>0\), one has
$$ \begin{aligned} \mathbb{P}\{\rho _{r}=\infty \}\geq{}& \mathbb{P} \Bigl\{ \sup_{t\geq 0} \bigl\vert x(t) \bigr\vert < k \Bigr\} \geq \mathbb{P} \Bigl\{ \sup_{t\geq 0} V \bigl(t,x(t) \bigr)< \alpha _{1}(t,k) \Bigr\} \\ \geq{}& 1-\frac{V(0,x_{0})}{\alpha _{1}(t,k)}. \end{aligned} $$
(3.12)
Letting \(k\rightarrow \infty \), we obtain \(\mathbb{P}\{\rho _{r}=\infty \}\rightarrow 1\), which, together with (3.11), yields
$$ \mathbb{P}\{\mathcal{T}_{2n}< \infty \text{ for infinitely many }n\}=0. $$
(3.13)
This contradicts (3.5). Hence \(\mathbb{P}(\Omega _{3})=0\), and consequently \(\lim_{t\rightarrow \infty }\mu (x(t))=0\) a.s. Together with the property \(\mu (0)=0\) of the continuous function μ, this yields \(\lim_{t\rightarrow \infty }x(t)=0\) a.s. The proof is completed. □
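The alternating crossing construction of Step 3 can be illustrated on a sampled trajectory. The sketch below is a hypothetical discrete analogue of the stopping times \(\mathcal{T}_{n}\): it records the index at which \(\mu (x(t))\) first drops below \(\epsilon _{1}\), then the next index at which it exceeds \(2\epsilon _{1}\), and so on.

```python
def crossing_times(mu_path, eps1):
    """Discrete analogue of the stopping times T_1, T_2, ... in Step 3.

    mu_path is a hypothetical sampled trajectory of mu(x(t)); the function
    alternately records the first index where mu drops below eps1 and the
    next index where it rises above 2*eps1 (inf over the empty set simply
    produces no further entry).
    """
    times = []
    looking_for_low = True  # T_1 is the first passage below eps1
    for i, v in enumerate(mu_path):
        if looking_for_low and v < eps1:
            times.append(i)          # odd-indexed time T_{2n+1}
            looking_for_low = False
        elif not looking_for_low and v > 2 * eps1:
            times.append(i)          # even-indexed time T_{2n}
            looking_for_low = True
    return times
```

A path in \(\Omega _{3}\) would produce infinitely many such crossings, which is exactly what (3.13) rules out.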
Before concluding this section, we present an example to illustrate Theorem 3.1.
Example 3.1
Consider a scalar stochastic differential equation with jumps in the form
$$ \begin{aligned} &dx(t)=k_{1}x(t-) \,dt+k_{2}x(t-)\,dB(t)+ \int _{0}^{\infty }k_{3}x(t-)y \widetilde{N}(dt,dy),\quad t>0, \\ &x(0)=x_{0}, \end{aligned} $$
(3.14)
where \(k_{i}\in R\), \(i=1,2,3\), are constants, \(B(t)\) is a scalar standard Brownian motion, and \(\widetilde{N}(\cdot ,\cdot )\) is a compensated Poisson random measure.
Taking \(V(t,x)=x^{2}\) for \(x\in R\), we obtain
$$ LV(t,x)\leq \biggl[2k_{1}+k_{2}^{2}+k_{3}^{2} \int ^{\infty }_{0}y^{2}\pi (dy) \biggr]x^{2}, $$
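This bound follows from a direct computation with the operator of Assumption A2: for \(V(t,x)=x^{2}\) one has \(V_{t}=0\), \(V_{x}=2x\), \(V_{xx}=2\), so
$$ \begin{aligned} LV(t,x)&=2x\cdot k_{1}x+\frac{1}{2}\cdot 2\cdot k_{2}^{2}x^{2}+ \int _{0}^{\infty } \bigl[(x+k_{3}xy)^{2}-x^{2}-2x\cdot k_{3}xy \bigr]\pi (dy) \\ &= \biggl[2k_{1}+k_{2}^{2}+k_{3}^{2} \int ^{\infty }_{0}y^{2}\pi (dy) \biggr]x^{2}, \end{aligned} $$
so the inequality in fact holds with equality in this linear case.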
Then, by Theorem 3.1, the solution of system (3.14) is almost surely asymptotically stable when the feedback control constant K is chosen appropriately so that
$$ 2k_{1}+k_{2}^{2}+k_{3}^{2} \int ^{\infty }_{0}y^{2}\pi (dy)< 0. $$
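A quick Monte Carlo check of this condition is possible. The sketch below simulates (3.14) with a simple Euler scheme, assuming, purely for illustration, a finite jump measure \(\pi = \texttt{lam}\cdot \mathrm{Exp}(1/\texttt{mean\_y})\), i.e., jumps arrive at rate lam and carry exponentially distributed marks y; all parameter values are hypothetical.

```python
import math
import random

def simulate_jump_sde(x0, k1, k2, k3, lam=1.0, mean_y=1.0,
                      T=10.0, dt=1e-3, rng=random):
    """Euler scheme for dx = k1*x dt + k2*x dB + int k3*x*y N~(dt, dy).

    The jump measure is assumed (for illustration only) to be
    pi = lam * Exp(1/mean_y).  The compensated measure N~ contributes the
    jumps minus the compensator drift k3*x*lam*mean_y dt.
    """
    x = x0
    comp = k3 * lam * mean_y              # compensator: int k3*x*y pi(dy)
    for _ in range(int(T / dt)):
        dB = rng.gauss(0.0, math.sqrt(dt))
        x += k1 * x * dt + k2 * x * dB - comp * x * dt
        if rng.random() < lam * dt:       # Bernoulli thinning, ok for small dt
            x += k3 * x * rng.expovariate(1.0 / mean_y)
    return x
```

With, say, \(k_{1}=-1\), \(k_{2}=0.2\), \(k_{3}=0.1\), and \(\int _{0}^{\infty }y^{2}\pi (dy)=2\), the condition reads \(-2+0.04+0.02<0\), and simulated paths indeed decay toward zero.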