The law of iterated logarithm for the estimations of diffusion-type processes
Advances in Difference Equations volume 2020, Article number: 33 (2020)
Abstract
This paper discusses the asymptotic behaviour of lasso-type estimators for diffusion-type processes with small noise. By constructing an objective function for the estimator and applying a convexity argument, we prove that the estimator satisfies the law of the iterated logarithm for different values of γ. The result also yields an exponential convergence principle for the estimator converging to the true value.
1 Introduction
Let \(\{X_{t}^{\epsilon}\}_{0\leq t\leq T}\) be the solution of the following stochastic diffusion-type process:
$$ dX_{t}^{\epsilon}=S_{t}\bigl(\theta,X^{\epsilon}\bigr)\,dt+\epsilon\,dW_{t},\qquad X_{0}^{\epsilon}=x_{0},\quad 0\leq t\leq T, $$ (1)
where \(x_{0}\) is a fixed constant, \(\theta=(\theta_{1},\ldots,\theta_{p})^{\prime}\in\varTheta\subset\mathbb{R}^{p}\) is an unknown parameter, \(S_{t}(\theta,x)\) is a known measurable, nonanticipative function such that (1) has a unique strong solution, \(W_{t}\) is a standard Wiener process, which is usually called random noise, and \(\epsilon\in(0,1]\) is the diffusion coefficient.
There is a rich literature on methods for estimating the parameter θ, such as least squares estimation, Bayesian estimation, maximum likelihood estimation, and so on. However, there is no uniform standard by which to compare those methods. The limiting properties of various estimators attract the attention of statisticians because of their applicability in mathematical finance, biology, and other fields; see [5, 16, 21]. As described by Kutoyants [16], minimum distance estimation is a relatively new method compared with traditional estimation methods; it involves fewer stochastic calculations and enjoys robustness. This paper considers a class of minimum distance estimators for the diffusion process (1) and discusses their exponential convergence principle. To estimate the unknown parameter θ, we need to introduce the ordinary differential equation:
$$ \frac{dX_{t}^{0}}{dt}=S_{t}\bigl(\theta,X^{0}\bigr),\quad 0\leq t\leq T, $$ (2)
with initial condition \(X_{0}^{0}=x_{0}\).
The minimum distance estimator \(\widehat{\theta}^{\epsilon}\) is given by
$$ \widehat{\theta}^{\epsilon}=\operatorname{argmin}_{\theta\in\overline{\varTheta}} \int_{0}^{T} \bigl\vert X_{t}^{\epsilon}-X_{t}^{0}(\theta) \bigr\vert ^{2}\,\alpha(dt), $$
where Θ̅ is the closure of Θ, α is some finite measure on \([0,T]\) and \(\operatorname {argmin}g=\{x:g(x)=\inf g\}\). Kutoyants discussed the consistency and asymptotic normality of estimator \(\widehat{\theta}^{\epsilon}\) in [17, 18]. Dietz and Kutoyants [7] considered a class of minimum distance estimators for diffusion processes with ergodic properties. For general minimum distance estimators, the reader can refer to Kutoyants [16] and references therein. Recently, Zhao and Zhang [22] studied the minimum distance parameter estimation for stochastic differential equations with small α-stable noises.
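Since \(X_{t}^{0}(\theta)\) rarely admits a closed form, \(\widehat{\theta}^{\epsilon}\) is typically computed numerically. The following minimal Python sketch is purely illustrative: it assumes a Markovian trend \(S_{t}(\theta,X)=S(\theta,t,X_{t})\), takes α to be Lebesgue measure on \([0,T]\), and searches over a parameter grid; all function names are ours.

```python
import numpy as np

def euler_maruyama(S, theta, x0, T, n, eps, rng):
    """Simulate dX_t = S(theta, t, X_t) dt + eps dW_t by the Euler-Maruyama scheme."""
    dt = T / n
    X = np.empty(n + 1)
    X[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    for i in range(n):
        X[i + 1] = X[i] + S(theta, i * dt, X[i]) * dt + eps * dW[i]
    return X

def limit_ode(S, theta, x0, T, n):
    """Solve the limit ODE dX0_t/dt = S(theta, t, X0_t) by the Euler scheme."""
    dt = T / n
    X = np.empty(n + 1)
    X[0] = x0
    for i in range(n):
        X[i + 1] = X[i] + S(theta, i * dt, X[i]) * dt
    return X

def min_distance_estimator(X_obs, S, theta_grid, x0, T, n):
    """Grid search for the argmin of int_0^T |X^eps_t - X^0_t(theta)|^2 dt."""
    dt = T / n
    contrasts = [dt * np.sum((X_obs - limit_ode(S, th, x0, T, n)) ** 2)
                 for th in theta_grid]
    return theta_grid[int(np.argmin(contrasts))]
```

For a genuinely nonanticipative trend of integral type, the ODE solver would have to carry the whole past of the path, but the structure of the computation is unchanged.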
However, few works have considered the rate at which \(\widehat{\theta}^{\epsilon}\) converges to the true value of θ. As a consequence, some stochastic dynamical systems cannot be reasonably identified from finitely many observations at spaced time points. Moreover, from the point of view of probability, the law of large numbers, the central limit theorem, and the law of the iterated logarithm are all parts of one whole limit theory. This motivates us to study the convergence rate of \(\widehat{\theta}^{\epsilon}\rightarrow\theta^{*}\) (\(\theta^{*}\) is the true value of θ). In order to generalize the model, we use the constrained minimum distance estimator based on the \(L_{\gamma}\)-penalized function contrast:
$$ \int_{0}^{T} \bigl\vert X_{t}^{\epsilon}-X_{t}^{0}(\theta) \bigr\vert ^{2}\,\alpha(dt)+\lambda_{\epsilon}\sum_{j=1}^{p} \vert \theta_{j} \vert ^{\gamma}, $$
where \(\gamma>0\) is a fixed constant and \(\lambda_{\epsilon}>0\) is a penalty parameter with respect to ϵ. Without loss of generality, we assume that the trend functional of the process (1) is of integral type:
$$ S_{t}(\theta,X)=V(\theta,t,X_{t})+\int_{0}^{t}K(\theta,t,s,X_{s})\,ds. $$
Denote the true value of θ by \(\theta^{*}\) and the lasso-type estimator of θ by
$$ \widehat{\theta}^{\epsilon}=\operatorname{argmin}_{\theta\in\overline{\varTheta}} \biggl\{ \int_{0}^{T} \bigl\vert X_{t}^{\epsilon}-X_{t}^{0}(\theta) \bigr\vert ^{2}\,\alpha(dt)+\lambda_{\epsilon}\sum_{j=1}^{p} \vert \theta_{j} \vert ^{\gamma} \biggr\} . $$
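Continuing the illustrative sketch above, the lasso-type estimator only adds the \(L_{\gamma}\) penalty to the contrast before the grid search; here `lam` and `gamma` stand for \(\lambda_{\epsilon}\) and γ, and `limit_ode` is the helper defined earlier:

```python
def lasso_estimator(X_obs, S, theta_grid, x0, T, n, lam, gamma):
    """Grid search for the argmin of the L_gamma-penalized contrast."""
    dt = T / n
    def contrast(th):
        fit = dt * np.sum((X_obs - limit_ode(S, th, x0, T, n)) ** 2)
        return fit + lam * np.sum(np.abs(np.atleast_1d(th)) ** gamma)
    values = [contrast(th) for th in theta_grid]
    return theta_grid[int(np.argmin(values))]
```

For γ = 1 and multidimensional θ this is the usual lasso penalty, which can shrink some components of \(\widehat{\theta}^{\epsilon}\) exactly to zero.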
In this paper, we will discuss the limit behavior of \(\frac{\widehat{\theta}^{\epsilon}-\theta^{*}}{\epsilon\sqrt{2\log\log(\epsilon^{-1}\vee3)}}\), i.e., the law of the iterated logarithm. This shows that the estimator \(\widehat{\theta}^{\epsilon}\) converges to the true value almost everywhere with an exponential convergence rate. We recall that Gregorio and Iacus [11] showed that \(\widehat{\theta}^{\epsilon}\) satisfies the central limit theorem, that is,
where ⇒ denotes convergence in distribution and \(V(u)\) is a fixed random function. Our result can also be considered a supplement to the work of Gregorio and Iacus.
2 Preliminaries
This section presents some basic notations and assumptions which will be used in the paper. Define the inner product in the space \(\mathbb{R}^{p}\) by \(\langle x,y\rangle=\sum_{i=1}^{p}x_{i}y_{i}\). In particular, use \(|\cdot|\) for the Euclidean norm, that is, \(|y|=\sqrt{y^{\prime}y}=\sqrt{\sum_{i=1}^{p}y_{i}^{2}}\), where \(y^{\prime}\) denotes the transpose of y. Let B be some Banach space and write \(\|\cdot\|\) for the corresponding norm. If B is the space of all continuous bounded functions on \(\mathbb{R}^{p}\), we always define \(\|f\|=\sup_{x\in\mathbb{R}^{p}}|f(x)|\) for any \(f\in\mathbf{B}\). Let \(D(\mathbb{T})\) be the space of càdlàg functions (i.e., right continuous with left limits) on \(\mathbb{T}\) with the Skorohod topology. For any set \(A\subset\mathbb{R}^{p}\), we define the distance from \(x\in\mathbb{R}^{p}\) to A by \(\rho(x,A)=\inf_{y\in A}\rho(x,y)\). If \(\{x^{\epsilon}\}\) is a suitable family of points in \(\mathbb{R}^{p}\), then let \(\mathbf{C}(\{x^{\epsilon}\})\) denote the cluster set of \(\{x^{\epsilon}\}\), that is, the set of all possible limit points of the family \(\{x^{\epsilon}\}\). We sometimes use the notation \(\lim_{\epsilon\rightarrow0}x^{\epsilon}=A\) if both \(\lim_{\epsilon\rightarrow0}\rho(x^{\epsilon},A)=0\) and \(\mathbf{C}(\{x^{\epsilon}\})=A\). Throughout the paper, let \(\mathbf{P}_{\theta}\) denote the law of \(X_{t}(\theta)\) under the parameter θ. The subscript θ indicates that the process \(X_{t}^{\epsilon}(\theta)\) depends on θ. When no confusion arises, we always omit θ, i.e., \(X_{t}^{\epsilon}=X_{t}^{\epsilon}(\theta)\).
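For a finite or discretized set, the distance ρ(x,A) used throughout can be computed directly; a small illustrative Python helper (ours, a toy sketch only):

```python
import numpy as np

def set_distance(x, A):
    """rho(x, A) = inf_{y in A} |x - y| for a finite point set A in R^p."""
    x = np.asarray(x, dtype=float)
    return min(np.linalg.norm(x - np.asarray(y, dtype=float)) for y in A)

# Distance from the origin to the two-point set {(1, 0), (0, 2)}:
print(set_distance([0.0, 0.0], [[1.0, 0.0], [0.0, 2.0]]))  # 1.0
```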
Define \(V_{x}(\theta,t,x)=\frac{\partial}{\partial x}V(\theta,t,x)\) and \(K_{x}(\theta,t,s,x)=\frac{\partial}{\partial x}K(\theta,t,s,x)\). Let \(Y_{t}=\{Y_{t}(\theta), 0\leq t\leq T\}\) be the solution of a diffusion-type process
with initial condition \(Y_{0}=0\). The process \(Y_{t}\) plays a central role in the study of the asymptotic distribution of the estimators in the theory of diffusion process \(X_{t}\) with small noise. Denote the p-dimensional vector of partial derivatives of \(X_{t}^{0}(\theta)\) with respect to \(\theta_{j}\) (\(j=1,\ldots,p\)) by \(\dot{X}_{t}^{0}(\theta)\), that is,
It is easy to see that \(\dot{X}_{t}^{0}(\theta)\) satisfies the following differential equation:
and \(\dot{X}_{0}^{0}(\theta)=0\), where \(\dot{V}(\theta,t,X_{t}^{0}(\theta ))= (\frac{\partial}{\partial\theta_{1}}V(\theta,t,X_{t}^{0}(\theta )),\ldots,\frac{\partial}{\partial\theta_{p}}V(\theta ,t,X_{t}^{0}(\theta)) )^{\prime}\).
We suppose that the following regularity conditions on the trend coefficients \(V(\theta,t,x)\) and \(K(\theta,t,s,x)\) hold:
- (A1):
\(\epsilon^{-1}\lambda_{\epsilon}\rightarrow\lambda_{0}\geq0\);
- (A2):
for any \(t\in[0,T]\),
$$\sup_{\theta\in\varTheta,x\in\mathbb{R}^{p}} \biggl\vert \frac{\partial }{\partial\theta}V(\theta,t,x) \biggr\vert < \infty,\qquad \sup_{\theta\in \varTheta,x\in\mathbb{R}^{p}} \biggl\vert \frac{\partial}{\partial x}V( \theta ,t,x) \biggr\vert < \infty $$
and
$$\sup_{s,t\in[0,T]}\sup_{\theta\in\varTheta,x\in\mathbb{R}^{p}} \biggl\vert \frac{\partial}{\partial\theta}K(\theta,t,s,x) \biggr\vert < \infty, \qquad \sup _{s,t\in[0,T]}\sup_{\theta\in\varTheta,x\in\mathbb{R}^{p}} \biggl\vert \frac{\partial}{\partial x}K(\theta,t,s,x) \biggr\vert < \infty; $$
- (A3):
there exist two positive constants \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) such that
$$\begin{gathered} \sup_{t\in[0,T],\theta\in\varTheta} \biggl\vert \frac{\partial}{\partial x}V(\theta,t,x)- \frac{\partial}{\partial x}V(\theta,t,y) \biggr\vert < \mathcal{M}_{1} \vert x-y \vert , \\ \sup_{t,s\in[0,T],\theta\in\varTheta} \biggl\vert \frac{\partial}{\partial x}K(\theta,t,s,x)- \frac{\partial}{\partial x}K(\theta,t,s,y) \biggr\vert < \mathcal{M}_{1} \vert x-y \vert \end{gathered} $$
and
$$\begin{gathered} \sup_{t\in[0,T]}\sup_{x\in\mathbb{R}^{p}} \biggl\vert \frac{\partial }{\partial x}V(\theta_{1},t,x)-\frac{\partial}{\partial x}V(\theta _{2},t,x) \biggr\vert \leq\mathcal{M}_{2} \vert \theta_{1}-\theta_{2} \vert , \\ \sup_{t,s\in[0,T]}\sup_{x\in\mathbb{R}^{p}} \biggl\vert \frac{\partial }{\partial x}K(\theta_{1},t,s,x)-\frac{\partial}{\partial x}K(\theta _{2},t,s,x) \biggr\vert \leq\mathcal{M}_{2} \vert \theta_{1}-\theta_{2} \vert .\end{gathered} $$
One can check that under conditions (A2) and (A3), the stochastic differential Eq. (1) satisfies Condition \(\mathcal{L}\), that is,
where \(L_{1}\) and \(L_{2}\) are two positive constants and \(K_{s}\) is a nondecreasing right-continuous function with \(0\leq K_{t}\leq K_{0}\), \(K_{0}>0\). By virtue of Theorem 4.6 of [19, 20], there exists a unique \(D([0,T],\mathbb{R}^{p})\)-valued strong solution of Eq. (1) under conditions (A2) and (A3) (the reader can also see [8] for the theory of existence and uniqueness). In Lemma 5 below, we will show that the deterministic dynamical system \(X_{t}^{0}(\theta)\) is differentiable with respect to θ at the point \(\theta^{*}\) in \(L_{2}\)-norm under conditions (A2) and (A3), i.e.,
In the book of Kutoyants [16], the reader can see that this differentiability assumption is very important for statistical identification problems. We now introduce the objective function of \(\widehat{\theta^{\epsilon}}-\theta^{*}\):
where \(u=(u_{1},\ldots,u_{p})^{\prime}\). It is easy to see that
A simple calculation yields
In Sect. 4, we will show that \(V_{\epsilon}(u)\) can be approached by some stochastic function.
3 Main result
We state our main result as follows:
Theorem 1
Let \(h(\epsilon)=\sqrt{2\log\log(\epsilon^{-1}\vee3)}\). Assume that conditions (A1)–(A3) hold. Then, for \(\gamma\geq1\), the process \((\widehat{\theta^{\epsilon}}-\theta^{*}) /(\epsilon h(\epsilon))\) satisfies the iterated logarithm law, that is,
and
where \(\rho(\cdot,\cdot)\) denotes the Euclidean distance, a.e. stands for almost everywhere, and
and
Here
and \(Y^{\phi}\) is the solution of the integral equation given by
Remark
For \(0<\gamma<1\), supposing that conditions (A2)–(A3) hold and \(\lambda_{\epsilon}/\epsilon^{1-\gamma}\rightarrow\lambda_{0}\geq0\), it can still be proved that \((\widehat{\theta^{\epsilon}}-\theta^{*}) /(\epsilon h(\epsilon))\) satisfies the iterated logarithm law. The proof method is the same as that of Theorem 1.
Below we present an example as an application of the above result.
Example
Consider the following diffusion process:
where \(\theta\in(\kappa_{1},\kappa_{2})=\varTheta\) and \(W_{t}\) is a standard Wiener process. The limit solution is
The minimum distance estimator \(\widehat{\theta}^{\epsilon}\) is defined by
It can be checked that the process (14) satisfies the conditions of Theorem 7.5 of [16], so \(\widehat{\theta}^{\epsilon}\) is consistent and asymptotically normal:
where \(Y_{t}\) satisfies \(dY_{t}=\theta^{*} Y_{t}\,dt+dW_{t}\), \(Y_{0}=0\), \(0\leq t\leq T\). It can easily be proved that \(\int_{0}^{T}te^{\theta^{*} t}Y_{t}\alpha(dt)\) is a Gaussian random variable with variance
By virtue of Theorem 1, a simple calculation can show that the estimator \(\widehat{\theta^{\epsilon}}\) has the following limit behavior:
where \(\sigma^{2}\) is defined in (15).
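This example can be explored numerically, reusing the helpers sketched in Sect. 1. In the snippet below (illustrative only; the values \(\theta^{*}=0.5\), \(x_{0}=1\), \(T=1\), and the ε-sequence are arbitrary choices, and α is taken to be Lebesgue measure), Theorem 1 predicts that the printed normalized errors \((\widehat{\theta}^{\epsilon}-\theta^{*})/(\epsilon h(\epsilon))\) stay bounded as \(\epsilon\rightarrow0\):

```python
import numpy as np  # euler_maruyama and min_distance_estimator as in Sect. 1

rng = np.random.default_rng(0)
S = lambda th, t, x: th * x                 # drift of the example process (14)
theta_star, x0, T, n = 0.5, 1.0, 1.0, 1000
grid = np.linspace(0.0, 1.0, 2001)          # Theta = (kappa_1, kappa_2) = (0, 1)

for eps in [0.1, 0.05, 0.01]:
    X = euler_maruyama(S, theta_star, x0, T, n, eps, rng)
    theta_hat = min_distance_estimator(X, S, grid, x0, T, n)
    h = np.sqrt(2.0 * np.log(np.log(max(1.0 / eps, 3.0))))  # h(eps)
    print(eps, (theta_hat - theta_star) / (eps * h))
```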
4 Proofs
In order to prove Theorem 1, we need the following lemmas. Lemma 2 concerns the approximation of argmins, which is crucial for studying the asymptotic theory of argmin processes of parametrized convex objective functions (see [10] and [14] for other applications).
Lemma 1
Suppose that \(A^{\epsilon}(u)\) and \(B^{\epsilon}(u)\), \(u\in\mathbb{R}^{p}\), are two convex bounded functions. Assume that \(\lim_{\epsilon\rightarrow0}h(\epsilon)=\infty\) and, for any \(u\in\mathbb{R}^{p}\), \(\delta>0\),
Then for any compact set \(D\subset\mathbb{R}^{p}\),
Proof
The approach stems from Lemma 3 of Kato [13]. For the sake of completeness, we briefly sketch the proof. By the convexity and boundedness of \(A^{\epsilon}(u)\), there exists a constant \(\beta_{1}>0\) satisfying
Similarly, there exists another constant \(\beta_{2}>0\) satisfying
Let \(\beta_{0}=\max\{\beta_{1},\beta_{2}\}\). For any \(\epsilon>0\), there exists a finite set \(D_{1}\subset D\) such that each point of D lies within the distance \(\frac{\epsilon}{3\beta_{0}}\) of at least one point of \(D_{1}\). Equation (16) implies that
Given any \(u\in D\), let v be a point of \(D_{1}\) such that \(|u-v|\leq \frac{\epsilon}{3\beta_{0}}\). Then
Consequently,
By virtue of (20), the desired result is obtained. □
Lemma 2
Suppose that \(A^{\epsilon}(u)\) and \(B^{\epsilon}(u)\) are two suitable families of convex random functions defined on a compact set \(\mathcal{S}\subset\mathbb{R}^{p}\), where \(\epsilon\in(0,1]\) is the index parameter. Let \(a^{\epsilon}\) be the argmin of \(A^{\epsilon}(u)\) and assume that \(B^{\epsilon}(u)\) has a unique argmin \(b^{\epsilon}\). Then for any positive constant δ,
where \(\widetilde{\triangle}_{\epsilon}=\sup_{u\in\{u:|u-b^{\epsilon}|\leq\delta\}} |A^{\epsilon}(u)-B^{\epsilon}(u) |\) and
Proof
Let \(\mathbb{S}^{p-1}=\{x\in\mathbb{R}^{p}:|x|=1\}\). For any \(v\in\mathbb{S}^{p-1}\), the convexity of \(A^{\epsilon}(u)\) yields
This is equivalent to
Let \(\triangle_{\epsilon}(u)=A^{\epsilon}(u)-B^{\epsilon}(u)\). We have
Since \(\mathcal{S}\) is a compact set and \(b^{\epsilon}\) is the unique argmin of \(B^{\epsilon}\), η is a positive random variable. If \(\widetilde{\triangle}_{\epsilon}<\frac{\eta}{2}\), then \(A^{\epsilon}(b^{\epsilon}+lv)-A^{\epsilon}(b^{\epsilon})>0\) for each v. This implies that if \(|a^{\epsilon}-b^{\epsilon}|>\delta\), then \(A^{\epsilon}(a^{\epsilon})-A^{\epsilon}(b^{\epsilon})>0\), which contradicts the minimizing property of \(a^{\epsilon}\). Thus, for any positive constant δ,
The proof is completed. □
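The mechanism of Lemma 2 can be checked numerically on a toy example (ours; all constants are arbitrary): perturb a convex quadratic B by less than \(\eta/2\) in sup-norm, with \(\eta=\inf_{|u-b^{\epsilon}|=\delta}B(u)-B(b^{\epsilon})\) computed by hand for this B, and verify that the two argmins differ by at most δ.

```python
import numpy as np

# Toy check of Lemma 2: if sup|A - B| < eta/2, the argmins are within delta.
delta = 0.5
u = np.linspace(-3.0, 3.0, 6001)
B = 0.5 * (u - 1.0) ** 2               # convex, unique argmin b = 1
eta = 0.5 * delta ** 2                 # inf_{|u-b|=delta} B(u) - B(b) for this B
A = B + 0.4 * eta * np.sin(u)          # still convex, and sup|A - B| < eta / 2
a_hat = u[np.argmin(A)]
b_hat = u[np.argmin(B)]
assert abs(a_hat - b_hat) <= delta     # conclusion of Lemma 2 holds
print(a_hat, b_hat)
```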
Lemma 3
Assume that \(A^{\epsilon}(u)\) is a convex random function defined on an open set \(\mathcal{S}\subset\mathbb{R}^{p}\). Let \(B^{\epsilon}(u)=-u^{T}U^{\epsilon}+\frac{1}{2}u^{T}Qu\), where Q is a symmetric and positive definite \(p\times p\) matrix and \(U^{\epsilon}\) is stochastically bounded. Furthermore, letting \(1\leq h(\epsilon)=o(1/\sqrt{\epsilon})\), we assume that the following conditions hold:
- (i)
The random process \(U^{\epsilon}\) satisfies the iterated logarithm law, that is, there exists a fixed bounded symmetric set K in \(\mathbb{R}^{p}\) such that
$$\limsup_{\epsilon\rightarrow0}\rho \biggl(\frac{U^{\epsilon}}{\sqrt {2\epsilon\log\log(\epsilon^{-1}\vee3)}},K \biggr)=0, \quad\textit{a.e.} $$
and
$$\mathbf{P} \biggl(\omega: \mathbf{C} \biggl( \biggl\{ \frac{U^{\epsilon}}{\sqrt{2\epsilon\log\log(\epsilon^{-1}\vee3)}} \biggr\} \biggr)=K \biggr)=1. $$
- (ii)
For any \(R>0\) and any \(\delta>0\), there exists an \(\epsilon_{0}>0\) such that for all \(\epsilon\in(0,\epsilon_{0}]\),
$$ \mathbf{P} \bigl( \bigl\vert A^{\epsilon}(u)-B^{\epsilon}(u) \bigr\vert \geq\delta h(\epsilon) \bigr)\leq e^{-Rh^{2}(\epsilon)}. $$ (24)
Then, \(a^{\epsilon}\), the minimizer of the convex process \(A^{\epsilon}(u)\), satisfies the iterated logarithm law, that is,
$$\limsup_{\epsilon\rightarrow0}\rho \biggl(\frac{a^{\epsilon}}{\sqrt {2\epsilon\log\log(\epsilon^{-1}\vee3)}},Q^{-1}K \biggr)=0, \quad\textit{a.e.} $$
and
$$\mathbf{P} \biggl(\omega: \mathbf{C} \biggl( \biggl\{ \frac{a^{\epsilon}}{\sqrt{2\epsilon\log\log(\epsilon^{-1}\vee3)}} \biggr\} \biggr)=Q^{-1}K \biggr)=1, $$
where \(Q^{-1}K=\{Q^{-1}x: x\in K\}\).
Proof
Let \(b^{\epsilon}=Q^{-1}U^{\epsilon}\). It is easy to see that \(b^{\epsilon}\) is the unique minimum point of \(B^{\epsilon}(u)\). The continuous mapping theorem for the law of the iterated logarithm yields
and
Then a simple calculation shows that
where \(c>0\) is the smallest eigenvalue of Q.
As in the proof of Lemma 2, by the definition of η we further have
Combining (25) and (26) with condition (ii), we have
This implies that \(a^{\epsilon}\), the minimizer of the convex process \(A^{\epsilon}(u)\), satisfies the iterated logarithm law. □
Lemma 4
Assume that \(X_{t}^{\epsilon}\) is the solution of the following stochastic differential equation:
and that \(X^{0}_{t}\) is the solution of the ordinary differential equation
Assume that \(b(\cdot)\) and \(\sigma(\cdot)\) are Lipschitz continuous on every compact subset of \(\mathbb{R}\) and that there exists a positive constant L such that, for any \(x,y\in\mathbb{R}^{+}\),
Then the process \(\frac{X_{t}^{\epsilon}-X^{0}_{t}}{\sqrt{2\epsilon\log\log(\epsilon^{-1}\vee3)}}\) satisfies the iterated logarithm law, that is,
and
Here, \(K= \{g: g\in\mathcal{H} \textit{ and } I(g)\leq1 \}\) and
where \(\mathcal{H}\) is defined in (12) and \(Y^{\phi}\) is the solution of the integral equation given by
Proof
See Theorem 2.2 of [3] and Proposition 3.2 of [15], or Theorem 3.1 of [6]. □
Lemma 5
Under conditions (A2) and (A3), the deterministic dynamical system \(X_{t}^{0}(\theta)\) is differentiable with respect to θ at the point \(\theta^{*}\) in \(L_{2}\)-norm, that is,
Proof
From Eq. (2), we have
So
Applying conditions (A2), (A3) and Gronwall’s inequality, we have
where \(\mathcal{M}\) is some positive constant. From (27), we also get that
and
Taylor’s expansion and (A3) imply that \(|\mathbf{I}_{2}|\leq\mathcal {M}|h|^{2}T\).
For \(\mathbf{I}_{1}\), let
then
Conditions (A2) and (A3) yield that the function \(A_{s}(\theta)\) is bounded for any \(s\in[0,T]\). By the differentiability of the function \(S_{t}(\theta,x)\) with respect to x for every fixed θ, we get \(\mathbf{I}_{4}=o(|h|)\) and \(\mathbf{I}_{5}=o(|h|)\).
On the other hand, we also have
By applying Gronwall’s inequality and combining (28) and (29), we get
The proof is completed. □
Proof of Theorem 1
Noting that
and \(h(\epsilon)=\sqrt{2\log\log(\epsilon^{-1}\vee3)}\), we define
A simple calculation yields
For the case of \(\gamma>1\), noting that \(\epsilon^{-1}\lambda _{\epsilon}\rightarrow\lambda_{0}\) as \(\epsilon\rightarrow0\), we have
where \(\operatorname {sgn}(x)=1\) for \(x>0\); \(\operatorname {sgn}(x)=-1\) for \(x<0\) and \(\operatorname {sgn}(0)=0\).
For the case of \(\gamma=1\),
where \(\mathcal{I}\{\cdot\}\) denotes the indicator function. Lemma 5 implies that \(\mathbf{J}_{1}\rightarrow0\). Thus for any \(\delta>0\) and sufficiently small positive ϵ, we have
In addition, Lemma 5 also implies that for any \(\kappa>0\), there exists a positive constant \(\epsilon_{0}\) such that, when \(\epsilon<\epsilon_{0}\),
By the Cauchy–Schwarz inequality, we have
Thus, for sufficiently small \(\delta>0\),
One can easily check that (A2)–(A3) imply Assumption (A) of [6]. By Lemma 4, the stochastic process \((X_{t}^{\epsilon}-X_{t}^{0}(\theta^{*}))/\epsilon h(\epsilon)\) satisfies the law of the iterated logarithm on \(C([0,T];\mathbb{R})\) with the rate function \(I(\cdot)\), that is,
and
where \(K_{1}= \{g: I(g)\leq\frac{1}{2} \}\) and
Here \(\mathcal{H}\) is defined in (12) and \(Y^{\phi}\) in (13).
The invariance principle (see Theorem 4.3 of [9]) yields that the stochastic process
satisfies the law of the iterated logarithm with rate function \(I^{*}(\cdot)\), that is,
and
where K is defined in (11) and
On the other hand, letting \(\kappa\rightarrow0\) in (33), we have
Hence, for any \(\delta>0\) and \(R>0\), there exists an \(\epsilon_{0}>0\) such that for all \(\epsilon\in(0,\epsilon_{0}]\),
Combining (30) and (34)–(36), by applying Lemma 3, we see that the process \((\widehat{\theta}^{\epsilon}-\theta^{*})/ \epsilon h(\epsilon)\) satisfies the law of the iterated logarithm. The proof is completed. □
5 Conclusion
In this paper, we discussed the rate at which the estimator \(\widehat{\theta}^{\epsilon}\) converges to the true value. A simple example was given to illustrate the applicability of this result. Due to the complexity of minimizing the convex process, we did not conduct an extensive simulation study to identify these stochastic diffusion processes and illustrate the finite-sample performance of the proposed method. Recently, we learned that some novel modeling approaches, such as the accurate discretization method [12], the two coupled pendulums method [4], fractional stochastic modeling [2], and fractional discretization [1], have been introduced. These methods may be helpful for our simulations at some point. This will become an important research direction for us in the future.
References
Atangana, A.: Fractional discretization: the African’s tortoise walk. Chaos Solitons Fractals 130, Article ID 109399 (2020)
Atangana, A., Bonyah, E.: Fractional stochastic modeling: new approach to capture more heterogeneity. Chaos, Interdiscip. J. Nonlinear Sci. 29, Article ID 013118 (2019)
Baldi, P.: Large deviations and functional iterated logarithm law for diffusion processes. Probab. Theory Relat. Fields 71, 435–453 (1986)
Baleanu, D., Jajarmi, A., Asad, J.H.: Classical and fractional aspects of two coupled pendulums. Rom. Rep. Phys. 71(1), Article ID 103 (2019)
Bressloff, P.C.: Stochastic Processes in Cell Biology. Interdisciplinary Applied Mathematics, vol. 41. Springer, New York (2014)
Caramellino, L.: Strassen’s law of the iterated logarithm for diffusion processes for small time. Stoch. Process. Appl. 74(1), 1–19 (1998)
Dietz, H.M., Kutoyants, Y.A.: A class of minimum-distance estimators for diffusion processes with ergodic properties. Stat. Risk. Model. 15(3), 211–228 (1997)
Freidlin, M.I., Szücs, J., Wentzell, A.D.: Random Perturbations of Dynamical Systems. Grundlehren der mathematischen Wissenschaften, vol. 260. Springer, New York (2012)
Gao, F.G., Wang, S.C.: Asymptotic behaviors for functionals of random dynamical systems. Stoch. Anal. Appl. 34, 258–277 (2015)
Geyer, C.J.: On the asymptotics of convex stochastic optimization. Unpublished manuscript (1996)
Gregorio, A.D., Iacus, S.M.: On penalized estimation for dynamical systems with small noise. Electron. J. Stat. 12, 1614–1630 (2018)
Hajipour, M., Jajarmi, A., Baleanu, D.: On the accurate discretization of a highly nonlinear boundary value problem. Numer. Algorithms 79(3), 679–695 (2018)
Kato, K.: Asymptotics for argmin processes: convexity arguments. J. Multivar. Anal. 100, 1816–1829 (2009)
Knight, K., Fu, W.J.: Asymptotics for lasso-type estimators. Ann. Stat. 28, 1356–1378 (2000)
Kouritzin, M.A., Heunis, A.J.: A law of the iterated logarithm for stochastic processes defined by differential equations with a small parameter. Ann. Probab. 22(2), 659–679 (1994)
Kutoyants, Y.: Identification of Dynamical Systems with Small Noise. Kluwer Academic, Dordrecht (1994)
Kutoyants, Y., Pilibossian, P.: On minimum \(L_{1}\)-norm estimate of the parameter of the Ornstein–Uhlenbeck process. Stat. Probab. Lett. 20(2), 117–123 (1994)
Kutoyants, Y., Pilibossian, P.: On minimum uniform metric estimate of parameters of diffusion-type processes. Stoch. Process. Appl. 51(2), 259–267 (1994)
Liptser, R.S., Shiryayev, A.N.: Statistics of Random Processes, vol. I. Springer, New York (1977)
Liptser, R.S., Shiryayev, A.N.: Statistics of Random Processes, vol. II. Springer, New York (1978)
Nkurunziza, S.: Shrinkage strategies in some multiple multi-factor dynamical systems. ESAIM, Probab. Stat. 16, 139–150 (2012). https://doi.org/10.1051/ps/2010015
Zhao, H., Zhang, C.: Minimum distance parameter estimation for SDEs with small α-stable noises. Stat. Probab. Lett. 145, 301–311 (2019)
Acknowledgements
The authors thank three anonymous reviewers for their valuable comments and suggestions in improving the paper.
Funding
This work is supported by the National Natural Science Foundation of China under NSFC grant (No. 11571326).
Contributions
All authors contributed equally to the manuscript and typed, read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Cite this article
Mao, M., Huang, G. The law of iterated logarithm for the estimations of diffusion-type processes. Adv Differ Equ 2020, 33 (2020). https://doi.org/10.1186/s13662-020-2506-5