Maximum likelihood estimation for stochastic Lotka–Volterra model with jumps
Advances in Difference Equations volume 2018, Article number: 148 (2018)
Abstract
In this paper, we consider the stochastic Lotka–Volterra model with additive jump noises. We show some desired properties of the solution such as existence and uniqueness of positive strong solution, unique stationary distribution, and exponential ergodicity. After that, we investigate the maximum likelihood estimation for the drift coefficients based on continuous time observations. The likelihood function and explicit estimator are derived by using semimartingale theory. In addition, consistency and asymptotic normality of the estimator are proved. Finally, computer simulations are presented to illustrate our results.
1 Introduction
The following famous population dynamics
\[ \frac{dX_{t}}{dt} = X_{t} (a - b X_{t}), \quad t \geq 0, \]
is often used to model population growth of a single species, where \(X_{t}\) represents its population size at time t, \(a>0\) is the rate of growth, and \(b>0\) represents the effect of intraspecies interaction. This equation is also known as the Lotka–Volterra model or logistic equation. In this paper, we consider one-dimensional stochastic Lotka–Volterra equation with both multiplicative Brownian noises and additive jump noises, that is,
\[ dX_{t} = X_{t} (a - b X_{t})\,dt + \sigma X_{t}\,dW_{t} + r\,dJ_{t}, \qquad X_{0} = x_{0}, \quad t \geq 0, \mbox{ a.s.} \tag{1.1} \]
where \(x_{0}\) is a positive initial value, a, b, σ, \(r \in (0, \infty)\), \((W_{t})_{t \geq0}\) is a one-dimensional Brownian motion (also known as a Wiener process), and \((J_{t})_{t \geq0}\) is a one-dimensional subordinator independent of \((W_{t})_{t \geq0}\) (the precise characterization is given below in Sect. 2). The “a.s.” above abbreviates “almost surely”. Suppose that σ and r are known parameters, while a and b are unknown. We will focus on the maximum likelihood estimation (MLE) of the parameter \(\theta=(a,b)' \in{\mathbb {R}}_{++}^{2}\) based on continuous time observations of the path \(X^{T}:=(X_{t})_{0 \leq t \leq T}\). Hereafter, ′ denotes the transpose of a vector or matrix.
The stochastic Lotka–Volterra equation, being a reasonable and popular approach to modeling population dynamics perturbed by a random environment, has recently been studied by many authors, both from a mathematical perspective and in the context of real biological dynamics. For mathematical studies, see, for example, [1–8]. In particular, Mao et al. [3] investigated a multi-dimensional stochastic Lotka–Volterra system driven by one-dimensional standard Brownian motion. They revealed that environmental noise can suppress population explosion. Later, Mao [4] proved that the stationary distribution has a finite second moment under Brownian noise, which is very important in applications.
The other case is stochastic dynamics with Lévy noise, which can be used to describe sudden environmental shocks, e.g., earthquakes, hurricanes, and epidemics. Bao et al. [5] considered a competitive Lotka–Volterra population model with Lévy jumps; see also Bao et al. [6]. Recently, Zhang et al. [8] considered a stochastic Lotka–Volterra model driven by α-stable noise and obtained a unique positive strong solution of their model. Moreover, they proved stationarity and exponential ergodicity under relatively small noise, and extinction under large enough noise.
We note that our equation (1.1) is not covered by [5, 6, 8]. The proof of positivity of the solution in [5, 8] depends heavily on the explicit solution of the corresponding equation; this method does not work for our equation (1.1). Instead, we prove that the hitting time of the point 0 by the solution is almost surely infinite. We also prove that stationarity and exponential ergodicity do not depend on the weight of the noise, in contrast to the conditions needed in [5, 6, 8]. From this point of view, equation (1.1) is of independent interest.
On the other hand, the study of the influence of noise is active in the context of real ecosystems. The influence of noise is of paramount importance in open systems, and many noise-induced phenomena have been found, such as stochastic resonance, noise-enhanced stability, and noise-delayed extinction. For more details, see, for example, [9–11]. However, in this paper, we shall mainly study equation (1.1) from a mathematical point of view.
We note that there is a vast literature in economics and finance on MLE for jump-diffusion models, where the data are usually observed discretely. In that case, transition densities play an important role, but their closed-form expressions cannot be obtained in general, so conducting MLE is computationally expensive. To overcome this difficulty, a popular method is to use closed-form expansions to approximate the transition densities. For more on this topic, we refer the reader to [12–14] and the references therein. The situation of this paper is different. We focus on the MLE for equation (1.1) based on continuous observations. The main difficulty is establishing the existence of the likelihood function; once this is done, we can obtain our MLE explicitly.
Our motivation also comes from the problem of parameter estimation for jump-type CIR (Cox–Ingersoll–Ross) process as in Barczy et al. [15] (for related topics, see, e.g., Li et al. [16]). The authors considered the following jump-type CIR process:
\[ dX_{t} = (a - b X_{t})\,dt + \sigma \sqrt{X_{t}}\,dW_{t} + dJ_{t}, \quad t \geq 0, \]
where \((W_{t})_{t \geq0}\) and \((J_{t})_{t \geq0}\) are the same as in equation (1.1). By using the Laplace transform of the process \(\int_{0}^{t} X_{s} \,ds\), \(t \geq0\), they proved asymptotic properties of the MLE of b in different cases. As the authors pointed out, the asymptotic property of the MLE of a, or of the joint MLE of \((a,b)\), is still open because of the lack of the necessary explicit Laplace transform of \(\int_{0}^{t} 1/X_{s} \,ds\), \(t \geq0\). By studying equation (1.1), we hope to shed some light on this question. For other topics in statistical inference for stochastic processes, the reader may refer to the excellent monograph [17].
The rest of this paper is organized as follows. In Sect. 2, we first prove the existence of a unique positive strong solution of equation (1.1). After that, we derive the unique stationary distribution and the exponential ergodicity of the solution. In Sect. 3, the joint MLE of the parameter \(\theta=(a,b)'\) is derived from semimartingale theory. We prove strong consistency and asymptotic normality in Sect. 4. In Sect. 5, we illustrate our results by computer simulations.
2 Preliminaries
Let \((\Omega,{\mathcal {F}}, ({\mathcal {F}}_{t})_{t\geq0}, {\mathbb {P}})\) be a filtered probability space with the filtration \(({\mathcal {F}}_{t})_{t\geq0}\) satisfying the usual conditions. Equation (1.1) will be considered in this probability space. Let \((W_{t})_{t\geq0}\) in equation (1.1) be a Wiener process. We assume that the jump process \((J_{t})_{t \geq0}\) in equation (1.1) is a subordinator with zero drift. That is, its characteristic function takes the form
\[ {\mathbb E}\bigl[e^{iuJ_{t}}\bigr] = \exp \biggl( t \int_{0}^{\infty} \bigl(e^{iuz}-1\bigr)\,\nu(dz) \biggr), \quad u \in {\mathbb R}, t \geq 0, \]
where ν is the Lévy measure concentrated on \((0,\infty)\) satisfying
\[ \int_{0}^{\infty} (1 \wedge z)\,\nu(dz) < \infty. \]
We recall that a subordinator is an increasing Lévy process. For example, the Poisson process, α-stable subordinators, and gamma subordinators are all of this type; for more details, see, e.g., Applebaum [18] p. 52–54. Moreover, we suppose that \((W_{t})_{t\geq0}\) and \((J_{t})_{t \geq0}\) in (1.1) are independent. Let \(N(dt,dz)\) be the random measure associated with the subordinator \((J_{t})_{t \in{\mathbb {R}}_{+}}\), that is,
\[ N(dt,dz) := \sum_{s > 0,\ \Delta J_{s} \neq 0} \delta_{(s, \Delta J_{s})}(dt,dz), \]
where \(\delta_{p}\) is the Dirac measure at point p. Let \(\tilde {N}(dt,dz):= N(dt,dz)-\nu(dz)\,dt\). Then, for \(t \in{\mathbb {R}}_{+} \), we can write equation (1.1) as
\[ dX_{t} = \biggl( aX_{t} - bX_{t}^{2} + r\int_{0}^{\infty}z\,\nu(dz) \biggr)\,dt + \sigma X_{t}\,dW_{t} + \int_{0}^{\infty} rz\,\tilde{N}(dt,dz), \qquad X_{0} = x_{0}. \tag{2.3} \]
The following assumptions are needed.
- (A1):
-
a, b, σ, \(r \in(0, \infty)\) and \(\int _{0}^{\infty}z \nu(dz) < \infty\).
- (A2):
-
\(\int_{0}^{\infty}z^{2} \nu(dz) < \infty\).
Throughout this paper, we write \({\mathbb {R}}\), \({\mathbb {R}}_{+}\), and \({\mathbb {R}}_{++}\) for the real numbers, nonnegative real numbers, and positive real numbers, respectively. The value of a constant C, with or without subscript, may vary from line to line. First, we prove that there is a unique positive strong solution of equation (2.3).
Proposition 2.1
Assume that (A1) holds. Then, for any \(x_{0} \in{\mathbb {R}}_{++}\), there is a unique strong solution \((X_{t})_{t\in{\mathbb {R}}_{+}}\) of equation (2.3) such that \({\mathbb {P}}(X_{0}=x_{0})=1\) and \({\mathbb {P}}(X_{t} \in{\mathbb {R}}_{++} \textit{ for all } t\in{\mathbb {R}}_{+})=1\).
Proof
Since the coefficients of equation (2.3) are locally Lipschitz continuous, for the given initial value \(x_{0} \in{\mathbb {R}}_{++}\), there is a unique solution \(X_{t}\) on \([0, \tau_{e})\), where \(\tau_{e}\) is the explosion time. In the following, we shall prove that the solution is nonexplosive and positive. The proof below is divided into two steps.
Step 1: We show that the solution of (2.3) is nonexplosive. That is, \(\tau_{e} = \infty\) a.s. To this end, let \(k_{0}\) be a sufficiently large real number such that \(x_{0} < k_{0}\). For each integer \(k > k_{0}\), define the stopping time
\[ \tau_{k} := \inf \bigl\{ t \in [0, \tau_{e}) : X_{t} \geq k \bigr\}, \]
and we set \(\inf\emptyset=\infty\) by convention. It is easy to see that \(\tau_{k}\) is increasing in \(k\). Let \(\tau_{\infty}= \lim_{k \to\infty} \tau_{k}\); then \(\tau_{\infty}\leq\tau_{e}\) a.s. If we can prove \(\tau_{\infty}= \infty\) a.s., then \(\tau_{e} = \infty\) a.s. Let \(T>0\) be arbitrary. For any \(0 \leq t \leq T\), we have
Taking the expectation, we get
By using Gronwall’s inequality
On the other hand,
therefore
Putting \(k \to\infty\) yields
Since T is arbitrary, we get
Step 2: We show that the solution is positive. Let \(\tilde{\tau}_{0} := \inf\{t \in[0, \infty): X_{t}=0\}\). Let \(\tilde{k}_{0}\) be a large enough number such that \(x_{0}> 1/\tilde{k}_{0}\). For each integer \(k > \tilde{k}_{0}\), define the stopping time
\[ \tilde{\tau}_{k} := \inf \bigl\{ t \in [0, \infty) : X_{t} \leq 1/k \bigr\}. \]
Similarly, if we can prove \(\tilde{\tau}_{\infty}:=\lim_{k \to\infty } \tilde{\tau}_{k} = \infty\) a.s., then we get \(\tilde{\tau}_{0} = \infty\) a.s., which implies that the solution is positive. Let \(g(x) = x - \log x\). For any \(0 \leq t \leq T\), by Itô’s formula,
where
and \(M_{t\wedge\tilde{\tau}_{k}}\) is a local martingale defined by
Note that \((M_{t\wedge\tilde{\tau}_{k}})_{t\in{\mathbb {R}}_{+}}\) is a true martingale and \(\int_{0}^{\infty}[\log x - \log(x+ rz) ] \nu (dz) \leq0\). Therefore, there exists a positive number C such that \(\mathcal{A}g(x) \leq C\) for all \(x\in{\mathbb {R}}_{++}\), and it follows that
On the other hand,
By taking \(k \to\infty\) and \(T \to\infty\), we get
The proof is complete. □
Remark 2.2
From the study of real ecosystems (see, e.g., [19]), it is known that the effects of random fluctuations are proportional to the population size in the presence of multiplicative noise, while they are no longer proportional to the population size in the presence of additive noise. In the latter case, large negative values of the noise can cause negative values of the population size. For our equation, there are in fact two types of noise: one is the multiplicative Brownian noise, and the other is the additive positive jump noise. Due to the positivity of the additive noise, our equation has a unique positive solution. Therefore, the phenomena stated above do not contradict our result.
In the following, our aim is to show that under assumption (A1) equation (2.3) has a unique stationary distribution. We need the following lemmas.
Lemma 2.3
Let assumption (A1) hold. Then there exists a constant \(C>0\) such that
\[ \sup_{t \in {\mathbb R}_{+}} {\mathbb E}[X_{t}] \leq C. \]
Proof
Applying Itô’s formula, we have
It is easy to see that \((a+1)x-bx^{2}+ r \int_{0}^{\infty}z \nu(dz)\) has an upper bound for all \(x \in{\mathbb {R}}_{+}\). Hence
which implies the desired result. □
Lemma 2.4
Under assumption (A1), equation (2.3) has the Feller property.
Proof
The proof is essentially the same as that of Lemma 3.2 of [7], so we omit it. □
Based on the standard argument, we can obtain the following result from Lemma 2.3 and Lemma 2.4 (see, e.g., [7, 20]).
Proposition 2.5
Under assumption (A1), equation (2.3) has a unique stationary distribution.
Proposition 2.6
Under assumption (A1), equation (2.3) is exponentially ergodic.
Proof
We define the Lyapunov function \(V(x)=x\). Then
\[ LV(x) = ax - bx^{2} + r\int_{0}^{\infty} z\,\nu(dz), \quad x \in {\mathbb R}_{++}, \]
where L is the infinitesimal generator of the solution \((X_{t})_{t \in {\mathbb {R}}_{+}}\). It is easy to see, for all \(x \in{\mathbb {R}}_{++}\), there exist two positive constants γ and K such that
\[ LV(x) \leq -\gamma V(x) + K, \]
which satisfies the condition for exponential ergodicity in [21]. Then our desired result follows from Theorem 6.1 of [21]. □
Remark 2.7
The results above show that stationary property and exponential ergodicity do not depend on the weight of the noise. These are different from the conditions needed in [5, 6, 8], in which the results only hold under relatively small noise.
Here is a result that we will use later to prove the existence of the likelihood function.
Proposition 2.8
Suppose that assumption (A1) holds, then
\[ {\mathbb E} \biggl[ \int_{0}^{t} X_{s}^{2} \,ds \biggr] < \infty \]
for \(t \in{\mathbb {R}}_{+}\).
Proof
From equation (2.3), for \(t\in{\mathbb {R}}_{+}\), we have
By taking the expectation and noting that the function \(ax - \frac{b}{2} x^{2} + r \int_{0}^{\infty}z \nu(dz) \) is bounded above, we obtain
which implies our result. □
3 Existence and uniqueness of MLE
In this section, we shall derive our maximum likelihood estimator by using semimartingale theory.
Let \({\mathbb {D}}:=D({\mathbb {R}}_{+},{\mathbb {R}})\) be the space of càdlàg functions (right-continuous with left limits) from \({\mathbb {R}}_{+}\) to \({\mathbb {R}}\). We denote by \((\mathcal{B}_{t}({\mathbb {D}}))_{t\geq0}\) the canonical filtration on \({\mathbb {D}}\). That is, for the canonical process \(\eta=(\eta _{t})_{t\geq0}\) defined by
\[ \eta_{t}(\omega) := \omega(t), \quad \omega \in {\mathbb D}, t \in {\mathbb R}_{+}. \]
Then
\[ \mathcal{B}_{t}({\mathbb D}) := \sigma( \eta_{s} : 0 \leq s \leq t ), \quad t \geq 0. \]
Let \(\mathcal{B}({\mathbb {D}})\) be the smallest σ-algebra containing \((\mathcal{B}_{t}({\mathbb {D}}))_{t\geq0}\). We shall call \(({\mathbb {D}},\mathcal{B}({\mathbb {D}}), (\mathcal{B}_{t}({\mathbb {D}}))_{t\geq0})\) the canonical space.
In this section, we denote by \(X^{\theta}=(X^{\theta}_{t})_{t\in {\mathbb {R}}_{+}}\) the unique strong solution of equation (2.3) with parameter \(\theta=(a,b)'\). Let \({\mathbb {P}}^{\theta}\) be the probability measure induced by \(X^{\theta}\) on the canonical space, and let \({\mathbb {P}}^{\theta}_{t}\) be the restriction of \({\mathbb {P}}^{\theta}\) to the σ-algebra \(\mathcal {B}_{t}({\mathbb {D}})\). We can write equation (2.3) in the form
This form is the so-called Grigelionis decomposition for a semimartingale (see, e.g., [22] Theorem 2.1.2 and [23]). It follows that, under probability measure \({\mathbb {P}}^{\theta}\), \((\eta_{t})_{t\in{\mathbb {R}}_{+}}\) is a semimartingale with semimartingale characteristics \((B^{\theta},C^{\theta},\mu ^{\theta})\), where
and
where K is a Borel kernel from \({\mathbb {R}}_{++}\) to \({\mathbb {R}}_{++}\) given by
for \(t\in{\mathbb {R}}_{+}\) and \(A \in\mathcal{B}({\mathbb {R}}_{++})\).
In order to obtain the likelihood ratio process, we present the following result from [23]; see also [15, 24].
Lemma 3.1
Let Ψ be a parametric space. For ψ, \(\tilde{\psi} \in \Psi\), let \({\mathbb {P}}^{\psi}\) and \({\mathbb {P}}^{\tilde{\psi}}\) be two probability measures on the canonical space \(({\mathbb {D}},\mathcal {B}({\mathbb {D}}),(\mathcal{B}_{t}({\mathbb {D}}))_{t\geq0})\). We assume that, under these two probability measures, the canonical process \((\eta_{t})_{t\in{\mathbb {R}}_{+}}\) is a semimartingale with characteristics \((B^{\psi},C^{\psi},\mu^{\psi})\) and \((B^{\tilde{\psi}},C^{\tilde{\psi}},\mu^{\tilde{\psi}})\), respectively. We further assume that, for each \(\phi\in\{\psi,\tilde{\psi}\}\), there exists a nondecreasing, continuous, and adapted process \((F_{t}^{\phi})_{t\in{\mathbb {R}}_{+}}\) with \(F_{0}^{\phi}=0\) and a predictable process \((c^{\phi}_{t})_{t \in{\mathbb {R}}_{+}}\) such that
This can be guaranteed by the condition
- (B1):
-
\({\mathbb {P}}^{\phi}( \mu^{\phi}(\{t\}\times{\mathbb {R}})=0 )=1\) for each \(\phi\in\{\psi,\tilde{\psi}\}\).
Let \(\mathcal{P}\) be the predictable σ-algebra on \({\mathbb {D}}\times{\mathbb {R}}_{+}\). We also assume that there exist a \(\mathcal{P}\otimes\mathcal{B}({\mathbb {R}})\)-measurable function \(V^{\psi,\tilde{\psi}}: {\mathbb {D}}\times{\mathbb {R}}_{+}\times {\mathbb {R}}\to{\mathbb {R}}_{++}\) and a predictable \({\mathbb {R}}\)-valued process \(\beta^{\psi,\tilde{\psi}}\) satisfying
- (B2):
-
\(\mu^{\psi}(dt,dz) = V^{\psi,\tilde{\psi}}(t,z) \mu^{\tilde{\psi }}(dt,dz)\),
- (B3):
-
\(\int_{0}^{t}\int_{{\mathbb {R}}} ( \sqrt{V^{\psi,\tilde{\psi}}(s,z) -1})^{2} \mu^{\tilde{\psi}}(ds,dz)< \infty\),
- (B4):
-
\(B_{t}^{\psi}= B_{t}^{\tilde{\psi}} + \int_{0}^{t} c^{\psi}_{s} \beta_{s}^{\psi ,\tilde{\psi}} \,dF^{\psi}_{s} +\int_{0}^{t}\int_{|z|\leq1} z ( V^{\psi,\tilde{\psi}}(s,z) -1) \mu ^{\tilde{\psi}}(ds,dz)\),
- (B5):
-
\(\int_{0}^{t} c^{\psi}_{s} (\beta_{s}^{\psi,\tilde{\psi}})^{2} \,dF^{\psi}_{s} < \infty\)
\({\mathbb {P}}^{\psi}\)-a.s. for every \(t \in{\mathbb {R}}_{+}\). Moreover, we assume that, for each \(\phi\in\{\psi,\tilde{\psi}\}\), local uniqueness holds for the martingale problem on the canonical space corresponding to the triple \((B^{\phi},C^{\phi},\mu^{\phi})\) with the given initial value \(x_{0}\), and \({\mathbb {P}}^{\phi}\) is the unique solution. Then, for any \(T \in{\mathbb {R}}_{+}\), \({\mathbb {P}}^{\psi}_{T}\) is absolutely continuous with respect to \({\mathbb {P}}^{\tilde{\psi}}_{T}\). The corresponding Radon–Nikodym derivative is
where \((\eta_{t}^{\mathrm{cont}})_{t\in{\mathbb {R}}_{+}}\) is the continuous martingale part of \((\eta_{t})_{t\in{\mathbb {R}}_{+}}\) under \({\mathbb {P}}^{\tilde{\psi }}\) and \(N^{\eta}\) is the random jump measure of the process \((\eta_{t})_{t\in{\mathbb {R}}_{+}}\), defined as
where \(\delta_{p}\) is the Dirac measure at p.
In the following, let \(\theta=(a,b)'\), \(\tilde{\theta}=(\tilde{a},\tilde{b})' \in{\mathbb {R}}_{++}^{2}\).
Proposition 3.2
Let assumption (A1) hold, then for all \(T \in{\mathbb {R}}_{++}\), we have
Moreover, under probability measure \({\mathbb {P}}^{\tilde{\theta}}\), we have
where \(\eta^{\mathrm{cont}}\) denotes the continuous martingale part of η under probability measure \({\mathbb {P}}^{\tilde{\theta}}\).
Proof
The main task is to check the conditions in Lemma 3.1 and then to apply Lemma 3.1 to get our result. First, it is clear that \(\mu^{\theta}\) and \(\mu^{\tilde{\theta}}\) do not depend on the unknown parameter. Hence
and \(V^{\theta, \tilde{\theta}} \equiv1\). Therefore, conditions (B1)–(B3) readily hold. From (3.1) and (3.2), we see that, for \(t \in{\mathbb {R}}_{+}\), \(c_{t}^{\theta}= \sigma^{2} \eta_{t}^{2} \) with \(F_{t}^{\theta}=t\) and
By choosing \(\beta_{t}^{\theta,\tilde{\theta}} = \frac{1}{\sigma ^{2}}(\frac{a-\tilde{a}}{\eta_{t}}- (b-\tilde{b})) \) for \(t \in {\mathbb {R}}_{+}\), we get (B4). Now we check (B5), that is, for \(t\in {\mathbb {R}}_{+}\)
Note that
According to Proposition 2.8, we see that
for \(t\in{\mathbb {R}}_{+}\), which implies that (B5) holds. Finally, the local uniqueness of the corresponding martingale problem follows from the fact that our equation has a unique strong solution. Therefore, all the conditions of Lemma 3.1 are satisfied. For \(T \in{\mathbb {R}}_{++}\), by exchanging the roles of θ and θ̃, we obtain
The proof is complete. □
In the following, our aim is to estimate the parameter based on continuous time observations of \(X^{T}:=(X_{t})_{0 \leq t \leq T}\). Now we fix \({\mathbb {P}}^{\tilde{\theta}}\) as a reference measure. Since
then under \({\mathbb {P}}\) we have
Next, we can define the log-likelihood function with respect to the dominated measure \({\mathbb {P}}^{\tilde{\theta}}\) as
\[ \ell_{T}(\theta) := \log \frac{d{\mathbb P}^{\theta}_{T}}{d{\mathbb P}^{\tilde{\theta}}_{T}} \bigl( X^{T} \bigr). \]
Then the maximum likelihood estimator (MLE) \(\hat{\theta}_{T}\) of the unknown parameter θ is defined as
\[ \hat{\theta}_{T} := \mathop{\mathrm{arg\,max}}_{\theta \in {\mathbb R}_{++}^{2}} \ell_{T}(\theta). \]
Proposition 3.3
If assumption (A1) holds, then for every \(T \in{\mathbb {R}}_{++}\), there exists a unique MLE \(\hat{\theta}_{T}\) with the form
almost surely.
Proof
By Hölder’s inequality, we have
and
From equation (1.1), we see that a constant solution is impossible. Hence,
It follows that (3.3) is well defined almost surely. Note that
Hence, for \(t \in[0,T]\), \(J_{t}\) is a measurable function of \(X^{T}\), which implies that (3.3) is a true statistic. Next, we have
By direct calculation, we can get our desired result. □
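Since the explicit display (3.3) does not survive in this version of the text, we record a hedged reconstruction of the form the estimator takes (ours, derived from the score equations \(\partial_a \ell_T(\theta)=\partial_b \ell_T(\theta)=0\) for the drift \(aX_t - bX_t^2\), with \(\tilde X_t := X_t - rJ_t\) denoting the observation with its observable jump part removed; it should be checked against the original display):

```latex
\int_0^T \frac{d\tilde X_s}{X_s} = \hat a_T\, T - \hat b_T \int_0^T X_s\,ds,
\qquad
\int_0^T d\tilde X_s = \hat a_T \int_0^T X_s\,ds - \hat b_T \int_0^T X_s^2\,ds,

\hat a_T = \frac{\int_0^T X_s^2\,ds \int_0^T \frac{d\tilde X_s}{X_s}
               - \int_0^T X_s\,ds \int_0^T d\tilde X_s}
              {T \int_0^T X_s^2\,ds - \bigl(\int_0^T X_s\,ds\bigr)^2},
\qquad
\hat b_T = \frac{\int_0^T X_s\,ds \int_0^T \frac{d\tilde X_s}{X_s}
               - T \int_0^T d\tilde X_s}
              {T \int_0^T X_s^2\,ds - \bigl(\int_0^T X_s\,ds\bigr)^2}.
```

By the Cauchy–Schwarz (Hölder) inequality, the common denominator is strictly positive unless the path is constant on \([0,T]\), which is exactly what the Hölder argument in the proof above rules out.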
4 Asymptotic properties
In order to get the asymptotic properties of our estimator, we need the following result.
Proposition 4.1
Let assumptions (A1)–(A2) hold. Then, for any \(x_{0} \in{\mathbb {R}}_{++}\), there exists a positive constant C such that
Proof
We follow the approach used in Lemma 4.1 of [4]. By the exponential martingale inequality, we get
where we choose \(\alpha= b/(2\sigma^{2})\). The well-known Borel–Cantelli lemma implies that for almost all \(\omega\in\Omega\), there is a random integer \(k_{0}=k_{0}(\omega)\) such that
for all \(t \in[0,k]\), \(k\geq k_{0}\), almost surely. Substituting this into our equation (2.3), we have
for all \(t \in[0,k]\), \(k\geq k_{0}\), almost surely. Hence
for all \(t \in[0,k]\), \(k\geq k_{0}\), almost surely. Now, for almost all \(\omega\in\Omega\), let \(k \geq k_{0}\) and \(k-1 \leq t \leq k\), then
Letting \(t \to\infty\) and hence \(k \to\infty\), we obtain
Under assumption (A2), note that \((\int_{0}^{t}\int_{0}^{\infty}r z \tilde {N}(ds,dz))_{t\in{\mathbb {R}}_{+}}\) is a local martingale with Meyer’s angle bracket process \((\int_{0}^{t}\int_{0}^{\infty}r^{2} z^{2}\nu (dz)\,ds)_{t\in{\mathbb {R}}_{+}}\) and
By using the strong law of large numbers for local martingales (Lemma A.1), we get
almost surely. Hence, there exists a constant \(C_{2}\) such that
almost surely. Combining this with (4.1), we complete the proof. □
Corollary 4.2
Suppose that assumptions (A1)–(A2) hold. Then the invariant measure π has a finite second moment; moreover,
and
Proof
The proof of the first result is essentially the same as that of Theorem 4.2 in [4], and the second is the same as the proof in [20]. So we omit them. □
In the following, we present the weak and strong consistency of our estimator.
Theorem 4.3
Under assumption (A1), the estimator \(\hat{\theta}_{T}=(\hat{a}_{T}, \hat{b}_{T})'\) of \(\theta=(a,b)'\) admits the weak consistency, i.e.,
where \(\xrightarrow{{\mathbb {P}}}\) denotes the convergence in probability. Under assumptions (A1)–(A2), the estimator \(\hat{\theta}_{T}=(\hat {a}_{T}, \hat{b}_{T})'\) of \(\theta=(a,b)'\) admits the strong consistency, i.e.,
Proof
We have
Note that
Case 1: Under assumption (A1), for \(I_{1}\), we have
According to the strong law of large numbers for continuous local martingales (Lemma A.2), we have
Then we obtain \(\lim_{T \to\infty} I_{1}=0\), a.s. For \(I_{2}\), we have
Note \((\frac{\int_{0}^{T} X_{s} \,ds}{ T})_{T>0}\) is tight. Indeed, by Lemma 2.3, for \(M>0\), we have
On the other hand, by Proposition 2.5 and Proposition 2.6, we have
where π is the unique invariant measure. It follows that
By Proposition 2.8, we also have
for each \(T >0\). Then, again by Lemma A.2, we get
From (4.2) and (4.3), we get \(\lim_{T \to\infty} I_{2}=0\) in probability. Therefore, we obtain \(\lim_{T \to\infty} \hat{a}_{T} =a\) in probability. Similarly, we can prove \(\lim_{T \to\infty} \hat{b}_{T} =b\) in probability.
Case 2: Under assumptions (A1)–(A2), for \(I_{1}\), we have
According to Corollary 4.2 and Lemma A.2, we have \(\lim_{T \to\infty} I_{1}=0\), a.s. For \(I_{2}\), we have
Again by Corollary 4.2 and Lemma A.2, we immediately get \(\lim_{T \to\infty} I_{2}=0\), a.s. Therefore, we obtain \(\lim_{T \to\infty} \hat{a}_{T} =a\) a.s. Similarly, we can prove \(\lim_{T \to\infty} \hat{b}_{T} =b\) a.s. We complete the proof. □
For simplicity of notation, we denote \(\mu_{1}:= \int_{0}^{\infty}y \pi(dy)\) and \(\mu_{2}:=\int_{0}^{\infty}y^{2} \pi(dy)\). Now we present the following asymptotic normality.
Theorem 4.4
Under assumptions (A1)–(A2), the estimator \(\hat{\theta}_{T}\) of θ is asymptotically normal, i.e.,
as \(T \to\infty\), where \(\xrightarrow{\mathcal{D}}\) denotes the convergence in distribution, \(\Sigma=AA'\) and
By a random scaling, we also have
as \(T \to\infty\), where I is the identity matrix.
Proof
We write our estimator in the matrix form
Let
then \((M_{t})_{t\in{\mathbb {R}}_{+}}\) is a 2-dimensional continuous local martingale with \(M_{0}=0\) a.s. and with quadratic variation process
Let
Then, by Corollary 4.2, we have
where
By applying Lemma A.3, we get
where Z is a 2-dimensional standard normal random vector. Note that, again by Corollary 4.2,
Combining (4.4) with (4.5), by using Slutsky’s lemma, we have
as \(T \to\infty\). We have proved the first result. Next, it is easy to see that
a.s. as \(T \to\infty\). Again by Slutsky’s lemma, we have
We finish the proof. □
5 Simulation results
In this section, we present some computer simulations. First, we apply the Euler–Maruyama method to illustrate the stationary behavior of equation (1.1) under assumption (A1). We consider the following two examples.
Example 5.1
Let \(a=5\), \(b=1\), \(\sigma=1\), \(r=1\), and \(x_{0}=10\) for equation (1.1). Let \((J_{t})_{t\geq0}\) be a Poisson process with intensity 1. Note that the Poisson process with intensity 1 is a subordinator with Lévy measure \(\nu(dz)=\delta_{1}(dz)\). It follows from Proposition 2.5 that there is a unique stationary distribution. We apply the Euler–Maruyama method to simulate 30,000 iterations of a single path of \(X_{t}\) with initial value \(x_{0}=10\), \(T=30\), and step size \(\Delta=0.001\), as shown in Fig. 1.
(Left) Computer simulation of 30,000 iterations of a single path \(X_{t}\) of Example 5.1. (Right) The histogram of the path
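The scheme just described can be sketched in Python as follows (a minimal sketch: the drift and diffusion of equation (1.1) are our reading of the description in Sect. 1, and the function name and positivity guard are ours):

```python
import numpy as np

def simulate_lv_jump(a=5.0, b=1.0, sigma=1.0, r=1.0, x0=10.0,
                     T=30.0, dt=0.001, jump_intensity=1.0, seed=42):
    """Euler-Maruyama scheme for our reading of equation (1.1),
        dX = X*(a - b*X) dt + sigma*X dW + r dJ,
    with J a Poisson process of the given intensity (Example 5.1)."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))       # Brownian increment
        dj = rng.poisson(jump_intensity * dt)   # Poisson (subordinator) increment
        x[i + 1] = x[i] + x[i] * (a - b * x[i]) * dt + sigma * x[i] * dw + r * dj
        x[i + 1] = max(x[i + 1], 1e-12)         # guard: the discrete scheme can overshoot 0
    return x

path = simulate_lv_jump()
```

A histogram of `path` (after discarding an initial burn-in) then approximates the stationary distribution, as in Fig. 1.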
Example 5.2
Let \(a=5\), \(b=1\), \(\sigma=1\), \(r=1\), and \(x_{0}=10\) for equation (1.1). Let \((J_{t})_{t\geq0}\) be a compound Poisson process with exponentially distributed jump sizes, namely
We set \(c=1\) and \(\lambda=10\). It is easy to see that ν satisfies assumption (A1). Again by Proposition 2.5, there is a unique stationary distribution. We apply the Euler–Maruyama method to simulate 2000 iterations of a single path of \(X_{t}\) with initial value \(x_{0}=10\), \(T=20\), and step size \(\Delta=0.01\), as shown in Fig. 2.
(Left) Computer simulation of 2000 iterations of a single path \(X_{t}\) of Example 5.2. (Right) The histogram of the path
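For this example the jump increments are no longer 0/1 counts. A sketch of sampling the compound Poisson increments that would be fed into the Euler–Maruyama step (the Lévy-measure form \(\nu(dz)=c\lambda e^{-\lambda z}\,dz\) is our reading of the example's specification; the names are ours):

```python
import numpy as np

def compound_poisson_increments(T, dt, c=1.0, lam=10.0, seed=7):
    """Step increments of a compound Poisson subordinator J whose jumps
    arrive at rate c and have Exp(lam) sizes, i.e. (our reading of the
    example) Levy measure nu(dz) = c * lam * exp(-lam*z) dz on (0, inf)."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    counts = rng.poisson(c * dt, size=n)   # number of jumps in each step
    # sum the Exp(1/lam)-distributed jump sizes within each step
    return np.array([rng.exponential(1.0 / lam, size=k).sum() for k in counts])

inc = compound_poisson_increments(T=20.0, dt=0.01)
```

Note that \(\int_{0}^{\infty}z\,\nu(dz)=c/\lambda<\infty\), so assumption (A1) holds, consistent with the remark above.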
From the simulated paths in Fig. 1 and Fig. 2, we can see their stationary trends. The distributions implied by their histograms can be viewed as approximations of the stationary distributions.
Next, we exhibit the consistency of the MLE. It follows from Proposition 3.3 that our MLE is
We perform 1000 Monte Carlo simulations of the sample paths generated by Example 5.1 and Example 5.2. The results are presented in Table 1. We see that the estimation errors become small as the observation time increases, which is consistent with our theoretical results.
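One run of such a Monte Carlo experiment can be discretized as sketched below (hedged: the closed form inside `mle_drift` is our reconstruction of the estimator from the score equations, not copied from display (5.1)/(3.3), and Itô sums with left endpoints are one standard discretization choice):

```python
import numpy as np

def mle_drift(x, dj, dt, r=1.0):
    """Discretized joint MLE of (a, b) for the drift a*X - b*X^2.

    The closed form is our reconstruction from the score equations of the
    continuous-observation likelihood: stochastic integrals are replaced by
    Ito sums, and the observable jump contribution r*dJ is subtracted."""
    dx_tilde = np.diff(x) - r * np.asarray(dj)  # increments with jumps removed
    xl = x[:-1]                                 # left endpoints for Ito sums
    T = dt * len(dx_tilde)
    s1 = xl.sum() * dt                          # ~ int_0^T X_s ds
    s2 = (xl ** 2).sum() * dt                   # ~ int_0^T X_s^2 ds
    i0 = dx_tilde.sum()                         # ~ int_0^T dX~_s
    i1 = (dx_tilde / xl).sum()                  # ~ int_0^T dX~_s / X_s
    det = T * s2 - s1 ** 2                      # > 0 unless the path is constant
    return (s2 * i1 - s1 * i0) / det, (s1 * i1 - T * i0) / det

# Quick check on one Euler-Maruyama path with Example 5.1-style parameters
# (a=5, b=1, sigma=1, r=1, Poisson jumps of intensity 1); for these
# parameters and step size the simulated path stays positive.
rng = np.random.default_rng(seed=1)
a, b, sigma, r, dt, n = 5.0, 1.0, 1.0, 1.0, 0.001, 30_000
dw = rng.normal(0.0, np.sqrt(dt), size=n)
dj = rng.poisson(1.0 * dt, size=n)
x = np.empty(n + 1)
x[0] = 10.0
for i in range(n):
    x[i + 1] = x[i] + x[i] * (a - b * x[i]) * dt + sigma * x[i] * dw[i] + r * dj[i]

a_hat, b_hat = mle_drift(x, dj, dt, r=r)
```

On such a path the estimates typically land near the true \((a,b)\), and the error shrinks as the observation horizon grows, matching Theorem 4.3.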
Finally, we investigate the asymptotic distribution of the MLE in (5.1). That is, we will focus on the distribution of the following statistic:
We perform 1000 Monte Carlo simulations for Example 5.1 with \(a=1\), \(b=7\), \(\sigma=1\), \(r=1\), \(T=10\), \(\Delta=0.01\), and \(x_{0}=10\). The 3D histogram of the 1000 simulations is presented in Fig. 3. By comparing it to the 3D histogram of the 2-dimensional standard normal distribution (Fig. 3), we can see the tendency toward joint normality. The trend toward normality of each component of the estimator \(\beta_{T}\) can be seen from Fig. 4, where the histogram of each component is given.
(Left) 3D histogram of 1000 Monte Carlo simulations of \(\beta_{T}\) of Example 5.1 with \(a=1\), \(b=7\), \(\sigma=1\), \(r=1\), \(T=10\), \(\Delta=0.01\), and \(x_{0}=10\). (Right) 3D histogram of 1000 random vectors from the 2-dimensional standard normal distribution
1000 Monte Carlo simulations of Example 5.1 with \(a=1\), \(b=7\), \(\sigma=1\), \(r=1\), \(T=10\), \(\Delta=0.01\), and \(x_{0}=10\). (Left) Histogram of the first component of \(\beta_{T}\). (Right) Histogram of the second component of \(\beta_{T}\)
6 Conclusions
In this paper, we consider a stochastic Lotka–Volterra model with both multiplicative Brownian noises and additive jump noises. Some desired properties of the solution, such as existence and uniqueness of positive strong solution, unique stationary distribution, and exponential ergodicity, are proved. We also investigate the maximum likelihood estimation for the drift coefficients based on continuous time observations. The likelihood function and explicit estimator are derived by using semimartingale theory, and then consistency and asymptotic normality of the estimator are proved. Finally, we give some computer simulations, which are consistent with our theoretical results. The case with multiplicative jump noises will be the subject of future investigation.
References
Bahar, A., Mao, X.: Stochastic delay Lotka–Volterra model. J. Math. Anal. Appl. 292(2), 364–380 (2004)
Bahar, A., Mao, X.: Stochastic delay population dynamics. Int. J. Pure Appl. Math. 11, 377–400 (2004)
Mao, X., Marion, G., Renshaw, E.: Environmental Brownian noise suppresses explosions in population dynamics. Stoch. Process. Appl. 97(1), 95–110 (2002)
Mao, X.: Stationary distribution of stochastic population systems. Syst. Control Lett. 60(6), 398–405 (2011)
Bao, J., Mao, X., Yin, G., Yuan, C.: Competitive Lotka–Volterra population dynamics with jumps. Nonlinear Anal., Theory Methods Appl. 74(17), 6601–6616 (2011)
Bao, J., Yuan, C.: Stochastic population dynamics driven by Lévy noise. J. Math. Anal. Appl. 391(2), 363–375 (2012)
Tong, J., Zhang, Z., Bao, J.: The stationary distribution of the facultative population model with a degenerate noise. Stat. Probab. Lett. 83(2), 655–664 (2013)
Zhang, Z., Zhang, X., Tong, J.: Exponential ergodicity for population dynamics driven by α-stable processes. Stat. Probab. Lett. 125, 149–159 (2017)
Spagnolo, B., Valenti, D., Fiasconaro, A.: Noise in ecosystems: a short review. Math. Biosci. Eng. 1(1), 185–211 (2004)
Valenti, D., Fiasconaro, A., Spagnolo, B.: Stochastic resonance and noise delayed extinction in a model of two competing species. Phys. A, Stat. Mech. Appl. 331(3–4), 477–486 (2004)
La Cognata, A., Valenti, D., Dubkov, A.A., Spagnolo, B.: Dynamics of two competing species in the presence of Lévy noise sources. Phys. Rev. E 82(1), 011121 (2010)
Aït-Sahalia, Y.: Transition densities for interest rate and other nonlinear diffusions. J. Finance 54(4), 1361–1395 (1999)
Li, C.: Maximum-likelihood estimation for diffusion processes via closed-form density expansions. Ann. Stat. 41(3), 1350–1380 (2013)
Li, C., Chen, D.: Estimating jump-diffusions using closed-form likelihood expansions. J. Econom. 195(1), 51–70 (2016)
Barczy, M., Alaya, M.B., Kebaier, A., Pap, G.: Asymptotic properties of maximum likelihood estimator for the growth rate for a jump-type CIR process based on continuous time observations. arXiv preprint. arXiv:1609.05865 (2016)
Li, Z., Ma, C.: Asymptotic properties of estimators in a stable Cox–Ingersoll–Ross model. Stoch. Process. Appl. 125(8), 3196–3233 (2015)
Kutoyants, Y.A.: Statistical Inference for Ergodic Diffusion Processes. Springer, Berlin (2010)
Applebaum, D.: Lévy Processes and Stochastic Calculus, 2nd edn. Cambridge University Press, Cambridge (2009)
Valenti, D., Denaro, G., Spagnolo, B., Mazzola, S., Basilone, G., Conversano, F., Bonanno, A.: Stochastic models for phytoplankton dynamics in Mediterranean Sea. Ecol. Complex. 27, 84–103 (2016)
Khasminskii, R.: Stochastic Stability of Differential Equations, vol. 66. Springer, Berlin (2011)
Meyn, S.P., Tweedie, R.L.: Stability of Markovian processes III: Foster–Lyapunov criteria for continuous-time processes. Adv. Appl. Probab. 25(3), 518–548 (1993)
Jacod, J., Protter, P.: Discretization of Processes. Stochastic Modelling and Applied Probability, vol. 67. Springer, Berlin (2011)
Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes, vol. 288. Springer, Berlin (2013)
Sorensen, M.: Likelihood methods for diffusions with jumps. In: Statistical Inference in Stochastic Processes, pp. 67–105 (1991)
Liptser, R.S.: A strong law of large numbers for local martingales. Stochastics 3(1–4), 217–228 (1980)
Liptser, R.S., Shiryayev, A.N.: Statistics of Random Processes II. Applications, 2nd edn. Springer, Berlin (2001)
van Zanten, H.: A multivariate central limit theorem for continuous local martingales. Stat. Probab. Lett. 50(3), 229–235 (2000)
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (11401029, 11671104, 71761019), the Teacher Research Capacity Promotion Program of Beijing Normal University Zhuhai, and the Jiangxi Provincial Natural Science Foundation (20171ACB21022). The authors thank the anonymous referees for their valuable suggestions and questions.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Appendix: Limit theorems for local martingales
In this section, we recall some limit theorems for local martingales. The first one is a strong law of large numbers for local martingales; see, e.g., [25].
Lemma A.1
Let \((M_{t})_{t \in{\mathbb {R}}_{+}}\) be a one-dimensional local martingale vanishing at time \(t=0\). For \(t \in{\mathbb {R}}_{+}\), we define
where \(({\langle}M{\rangle}_{t})_{t\in{\mathbb {R}}_{+}}\) is Meyer’s angle bracket process. Then
implies
The next result is a strong law of large numbers for continuous local martingales, see, e.g., Lemma 17.4 of [26].
Lemma A.2
Let \((M_{t})_{t \in{\mathbb {R}}_{+}}\) be a one-dimensional square-integrable continuous local martingale vanishing at time \(t=0\). Let \(([M]_{t})_{t\in{\mathbb {R}}_{+}}\) be the quadratic variation process of M such that, for \(t \in{\mathbb {R}}_{+}\),
and
Then
The last one is about the asymptotic behavior of continuous multivariate local martingales, see Theorem 4.1 of [27].
Lemma A.3
Let \((M_{t})_{t \in{\mathbb {R}}_{+}}\) be a d-dimensional square-integrable continuous local martingale vanishing at time \(t=0\). Suppose that there exists a function \(Q: {\mathbb {R}}_{+} \to{\mathbb {R}}^{d\times d}\) such that \(Q(t)\) is an invertible (non-random) matrix for all \(t \in{\mathbb {R}}_{+}\), \(\lim_{t \to\infty} \Vert Q(t) \Vert=0\) and
where \(\Vert Q(t) \Vert:= \sup\{|Q(t)x| : x\in{\mathbb {R}}^{d}, |x|=1 \}\), \([M]_{t}\) is the quadratic variation process of M and ζ is a \(d\times d\) random matrix. Then
where Z is a d-dimensional standard normally distributed random vector independent of ζ.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Zhao, H., Zhang, C. & Wen, L. Maximum likelihood estimation for stochastic Lotka–Volterra model with jumps. Adv Differ Equ 2018, 148 (2018). https://doi.org/10.1186/s13662-018-1605-z