Theory and Modern Applications

# Moment estimates for invariant measures of stochastic Burgers equations

## Abstract

In this paper, we study moment estimates for the invariant measure of the stochastic Burgers equation with multiplicative noise. Based upon an a priori estimate for the stochastic convolution, we derive regularity properties of the invariant measure. As an application, we prove smoothing properties of the transition semigroup by introducing an auxiliary semigroup. Finally, the m-dissipativity of the associated Kolmogorov operator is established.

## 1 Introduction

We consider the following stochastic Burgers equation with a multiplicative noise on the interval $$[0,1]$$ with Dirichlet boundary conditions:

$$\textstyle\begin{cases} dX(t,\xi)= (\Delta_{\xi}X(t,\xi)+ \frac{1}{2} \partial_{\xi }(X^{2}(t,\xi)) )\,dt+g(X(t,\xi))\,d W(t,\xi),\quad t\geq0, \\ X(t,0)=X(t,1)=0,\quad t\geq0, \\ X(0,\xi)=x(\xi),\quad \xi\in[0,1], \end{cases}$$
(1.1)

where $$x\in H=L^{2}(0,1)$$, and $$(W(t))_{t\geq0}$$ is a cylindrical Wiener process on H, defined on a filtered probability space $$(\varOmega, \mathcal{F}, \mathbb{P})$$ with a filtration $$(\mathcal{F}_{t})_{t\geq0}$$ that is assumed to be right-continuous and complete in the sense that $$\mathcal{F}_{0}$$ contains all $$\mathbb{P}$$-null sets. Moreover, g is a real-valued function assumed to be Lipschitz continuous and bounded.

Equation (1.1) has been well studied by several authors; we refer to [1–8]. It is well known that there exists a unique mild solution with paths in $$C([0,T],L^{2}(0,1))$$ for any $$T>0$$. We denote the unique mild solution of Eq. (1.1) by $$X(t,x)$$; that is, for any $$x\in H$$ the process $$(X(t,x))_{t\geq0}$$ is adapted to the filtration $$(\mathcal{F}_{t})_{t\geq0}$$ and fulfills the equation

$$X(t,x)=e^{tA}x+ \int_{0}^{t}e^{(t-s)A}b\bigl(X(s,x)\bigr)\,ds+ \int _{0}^{t}e^{(t-s)A}g\bigl(X(s,x)\bigr) \,dW(s),$$
(1.2)

$$\mathbb{P}$$-a.s. for all $$t\in[0,T]$$, where A and b are the operators defined by

$$Ax=\Delta_{\xi}x,\quad x\in D(A)=H^{2}(0,1)\cap H^{1}_{0}(0,1),$$
(1.3)

and

$$b(x)= \frac{1}{2} \partial_{\xi}\bigl(x^{2}\bigr),\quad x \in D(b)=H^{1}_{0}(0,1),$$
(1.4)

and for any $$x\in H$$ the symmetric Nemytskii operator $$g(x)\in L(H)$$ is defined by

$$\bigl[g(x)y\bigr](\xi)=g\bigl(x(\xi)\bigr)y(\xi),\quad y\in H, \xi \in[0,1].$$
(1.5)
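As a quick concrete illustration, the Nemytskii operator (1.5) acts by pointwise multiplication, so $$g(x)$$ is a bounded linear operator on H whenever g is bounded. A minimal numerical sketch follows; the coefficient `g` below is a hypothetical bounded Lipschitz choice, not one fixed by the paper:

```python
import numpy as np

# Hypothetical bounded Lipschitz coefficient; any such g fits the assumptions.
def g(r):
    return 1.0 / (1.0 + r ** 2)

xi = np.linspace(0.0, 1.0, 101)      # grid on [0, 1]
x = np.sin(np.pi * xi)               # a state x in H = L^2(0, 1)
y = np.cos(2.0 * np.pi * xi)         # a test function y in H

# (1.5): [g(x)y](xi) = g(x(xi)) * y(xi), i.e. pointwise multiplication,
# so ||g(x)y|| <= (sup |g|) ||y||, here with sup |g| = 1.
gx_y = g(x) * y
```

Since `g` takes values in (0, 1], the output is dominated pointwise by `y`, matching the operator-norm bound $$\|g(x)\|\leq\|g\|_{0}$$.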

In this paper, we will mainly be interested in studying the moment estimates for the invariant measure, the properties of the corresponding transition semigroup and the associated Kolmogorov operator. The corresponding transition semigroup $$P_{t}$$ is given by

$$P_{t}\varphi(x)=\mathbb{E}\bigl[\varphi\bigl(X(t,x)\bigr)\bigr],\quad t \geq0, x\in H, \varphi \in B_{b}(H),$$
(1.6)

where $$B_{b}(H)$$ is the Banach space of all Borel bounded mappings $$\varphi:H\rightarrow\mathbb{R}$$ endowed with the sup norm

$$\Vert \varphi \Vert _{0}=\sup_{x\in H} \bigl\vert \varphi(x) \bigr\vert ,$$

and $$\mathbb{E}$$ means the expectation taken with respect to $$\mathbb{P}$$. The Kolmogorov operator is formally denoted as

$$K_{0}\varphi(x)=\frac{1}{2}\operatorname{Tr}\bigl[g(x)g(x)^{*}D^{2} \varphi(x)\bigr]+\bigl\langle x,A^{*}D\varphi(x)\bigr\rangle +\bigl\langle D\varphi(x),b(x)\bigr\rangle ,\quad x\in H,$$
(1.7)

here $$\varphi\in\mathscr{E}_{A}(H)$$, which is the linear span of all real and imaginary parts of functions of the form

$$x\mapsto e^{i\langle x,h\rangle},\quad x\in H,h\in D\bigl(A^{*}\bigr).$$
(1.8)

Da Prato and Gatarek [1] studied Eq. (1.1) and proved that it has a unique invariant measure ν for the semigroup $$P_{t}$$. The existence is obtained by the Krylov–Bogoliubov theorem together with an a priori estimate on the solution, and the uniqueness follows from Doob's theorem.

Moment estimates for invariant measures have attracted considerable attention. It is worth mentioning that, for the abstract form of problem (1.1) (see (2.5) in Sect. 2) in the additive noise case (that is, when $$g(x)$$ is a symmetric positive definite operator), Da Prato and Debussche [9] proved that the moments of the invariant measure are finite by introducing a modified Ornstein–Uhlenbeck process to get an a priori estimate on the solution. Es-Sarhir and Stannat [10, 11] proved moment estimates of the invariant measure based on a pathwise estimate on the stochastic convolution. Lewis and Nualart [12] gave a random field solution and obtained moment estimates for the solution to Burgers' equation by the Feynman–Kac representation. Dong and Sun [13] considered the averaging principle for the one-dimensional stochastic Burgers equation and showed that the slow component converges strongly and weakly to the solution of the corresponding averaged equation by proving moment estimates. The same results may be generalized to the stochastic Navier–Stokes equations with white noise or Lévy noise with the aid of [14, 15] and the references therein.

The aim of this paper is to extend the results of [9] to the case of multiplicative noise with a coefficient satisfying suitable properties. In this situation, the tools of the Itô formula and a modified Ornstein–Uhlenbeck process run into many difficulties; we therefore estimate the mild solution by means of the factorization formula and a generalization of the maximal inequality for martingales to stochastic convolutions, which can be found in the monograph by Da Prato and Zabczyk [16].

The rest of this paper is organized as follows. Some notions and preliminary results for Eq. (1.1) are given in Sect. 2. In Sect. 3, we derive moment estimates on the solution and the corresponding invariant measure with the help of the method of factorization, then show that the invariant measure has $$L^{p}$$ ($$p\geq1$$) regularity; see Theorem 3.3 and Corollary 3.5. Finally, we discuss the smoothing properties of the semigroup $$P_{t}$$ in interpolation spaces, and prove that the associated Kolmogorov operator is m-dissipative in Sect. 4.

## 2 Preliminaries

We first list some notation used throughout this paper. We denote the norm of $$L^{p}(0,1)$$, $$p\geq1$$, by $$|\cdot|_{p}$$. Let H be the separable real Hilbert space $$L^{2}(0,1)$$ (norm $$|\cdot|$$, inner product $$\langle\cdot,\cdot\rangle$$), and let $$L(H)$$ denote the Banach algebra of all bounded linear operators from H into H endowed with the norm

$$\Vert T \Vert =\sup\bigl\{ \vert Tx \vert ; x\in H, \vert x \vert =1 \bigr\} ,\quad T\in L(H),$$
(2.1)

and $$L_{1}(H)$$ be the space of all nuclear operators.

Considering the abstract form of problem (1.1), we introduce the linear self-adjoint operator A defined by (1.3), and the nonlinear operators b and g corresponding to (1.4) and (1.5). We denote by $$e^{tA}$$, $$t\geq0$$ the semigroup on H generated by A, and by $$\{e_{k}\}$$ the complete orthonormal system on H which diagonalizes A and by $$\{\lambda_{k}\}$$ the corresponding eigenvalues. We have

$$e_{k}(\xi)=\sqrt{2}\sin(k\pi\xi),\quad \xi\in[0,1], k=1,2, \ldots,$$
(2.2)

and

$$\lambda_{k}=-\pi^{2}k^{2},\quad k=1,2, \ldots.$$
(2.3)

Finally, the cylindrical Wiener process $$W(t)$$, $$t\geq0$$ can be formally written as

$$W(t)=\sum^{\infty}_{k=1}e_{k} \beta_{k}(t),\quad t\geq0,$$
(2.4)

where $$\{\beta_{k}\}$$ is a sequence of mutually independent standard Brownian motions on a filtered probability space $$(\varOmega, \mathcal {F},\mathcal{F}_{t}, \mathbb{P})$$.
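For concreteness, the eigenbasis (2.2)–(2.3) and a truncated increment of the series (2.4) can be sketched numerically as follows; the truncation level `N` is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50                                    # illustrative truncation level
xi = np.linspace(0.0, 1.0, 201)
h = xi[1] - xi[0]
k = np.arange(1, N + 1)

# Dirichlet eigenpairs on (0, 1): e_k(xi) = sqrt(2) sin(k pi xi), lambda_k = -pi^2 k^2
e = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, xi))    # shape (N, len(xi))
lam = -(np.pi * k) ** 2

def inner(f1, f2):
    # trapezoidal approximation of the L^2(0, 1) inner product
    v = f1 * f2
    return h * (np.sum(v) - 0.5 * v[0] - 0.5 * v[-1])

# One increment of the truncated cylindrical Wiener process (2.4):
# W(t + dt) - W(t) ~ sum_k e_k (beta_k(t + dt) - beta_k(t)), increments ~ N(0, dt)
dt = 1e-3
dW = rng.normal(0.0, np.sqrt(dt), size=N) @ e
```

The quadrature confirms that the basis is (approximately) orthonormal on the grid, while the truncated increment `dW` is a bona fide element of H even though the full series (2.4) is not.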

Now, we can rewrite problem (1.1) as follows:

$$\textstyle\begin{cases} dX= (AX+b(X) )\,dt+g(X)\,dW(t),\\ X(0)=x. \end{cases}$$
(2.5)

We shall consider the solution of Eq. (2.5) in the space $$Z_{T}$$ consisting of all continuous processes on $$[0,T]$$ with values in $$L^{2}(0,1)$$ such that

$$\Vert X \Vert _{T}^{p^{*}}=\mathbb{E} \Bigl(\sup _{t\in[0,T]} \bigl\vert X(t) \bigr\vert ^{p^{*}} \Bigr)< \infty,$$
(2.6)

for some fixed $$p^{*}>4$$; this norm is equivalent to the norm of the space $$L^{p^{*}}(\varOmega, C([0,T], L^{2}(0,1)))$$.

### Definition 2.1

A mild solution of Eq. (2.5) is a process $$X\in Z_{T}$$ such that

$$X(t,x)=e^{tA}x+ \int_{0}^{t}e^{(t-s)A}b\bigl(X(s,x)\bigr)\,ds+ \int _{0}^{t}e^{(t-s)A}g\bigl(X(s,x)\bigr) \,dW(s),$$
(2.7)

$$\mathbb{P}$$-a.s. for all $$t\in[0,T]$$.
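To make the mild-solution dynamics concrete, here is a minimal semi-implicit spectral Galerkin / Euler–Maruyama sketch of (2.5). This is an illustrative discretization only, not the construction used in the proofs; the coefficient `g`, the truncation level, the grid, and the step size are all hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# dX = (A X + (1/2) d_xi(X^2)) dt + g(X) dW on (0, 1), Dirichlet BCs.
N, M = 32, 200                         # Galerkin modes, interior grid points
dt, steps = 1e-4, 100
xi = np.arange(1, M + 1) / (M + 1)     # interior grid
w = 1.0 / (M + 1)                      # quadrature weight
k = np.arange(1, N + 1)
e = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, xi))   # e_k on the grid
lam = -(np.pi * k) ** 2

def g(r):                              # hypothetical bounded Lipschitz coefficient
    return 1.0 / (1.0 + r ** 2)

a = (e @ np.sin(np.pi * xi)) * w       # modes of the initial datum x = sin(pi xi)
for _ in range(steps):
    X = a @ e                          # current state on the grid
    drift = (e @ (0.5 * np.gradient(X ** 2, xi))) * w   # project b(X) on the modes
    beta = rng.normal(0.0, np.sqrt(dt), size=N)         # Brownian increments
    noise = (e @ (g(X) * (beta @ e))) * w               # project g(X) dW
    a = (a + dt * drift + noise) / (1.0 - dt * lam)     # semi-implicit Euler step
X = a @ e                              # approximate solution at t = steps * dt
```

Treating the linear part implicitly keeps the stiff modes $$\lambda_{k}=-\pi^{2}k^{2}$$ stable; the nonlinearity and noise are handled explicitly, which is adequate for this small illustrative horizon.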

An important role will be played by the stochastic convolution

$$W_{X}(t)= \int_{0}^{t}e^{(t-s)A}g\bigl(X(s,x)\bigr) \,dW(s)=\sum^{\infty }_{k=1} \int_{0}^{t}e^{(t-s)A}\bigl[g\bigl(X(s,x) \bigr)e_{k}\bigr]\,d\beta_{k}(s),$$
(2.8)

where $$X\in Z_{T}$$. Although the cylindrical Wiener process (2.4) does not take values in H, the stochastic convolution $$W_{X}(t)$$ does.

The following result concerns the existence and uniqueness of a mild solution for Eq. (2.5); see e.g. [1].

### Proposition 2.2

For any $$T>0$$ and $$x\in H$$, there exists a unique mild solution $$X(t,x)$$ of Eq. (2.5).

Finally, for the proofs in the following sections, we introduce some important notions and inequalities. For any $$\gamma\in\mathbb{R}$$, the interpolation space is denoted by

$$V_{\gamma}:=\bigl(D\bigl((-A)^{\gamma}\bigr), \Vert \cdot \Vert _{\gamma}\bigr),$$
(2.9)

where, for any $$x\in V_{\gamma}$$, $$\|x\|_{\gamma}^{2}=\langle(-A)^{\gamma }x, (-A)^{\gamma}x \rangle$$; this norm is equivalent to the norm of the Sobolev space $$H^{2\gamma}(0,1)$$. For any $$p\geq2$$, by the Sobolev embedding theorem, we get

\begin{aligned} V_{\gamma}\subset L^{p}(0,1),\quad \gamma=(p-2)/(4p) . \end{aligned}

The Poincaré inequality reads

$$\Vert v \Vert _{\gamma_{1}}\leq\pi^{\gamma_{1}-\gamma_{2}} \Vert v \Vert _{\gamma _{2}},$$
(2.10)

where $$\gamma_{1}\leq\gamma_{2}$$, $$v\in V_{\gamma_{2}}$$. We will also use the classical interpolation estimate

$$\Vert v \Vert _{\gamma_{2}}\leq \Vert v \Vert _{\gamma_{1}}^{\frac{\gamma_{3}-\gamma _{2}}{\gamma_{3}-\gamma_{1}}} \Vert v \Vert _{\gamma_{3}}^{\frac{\gamma_{2}-\gamma_{1}}{\gamma_{3}-\gamma _{1}}},$$
(2.11)

here $$\gamma_{1}<\gamma_{2}<\gamma_{3}$$, $$v\in V_{\gamma_{3}}$$, and the Agmon estimate

$$\vert v \vert _{\infty}\leq \vert v \vert ^{\frac{1}{2}} \Vert v \Vert _{\frac{1}{2}}^{\frac{1}{2}},\quad v\in V_{\frac{1}{2}}.$$
(2.12)
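These three inequalities are straightforward to verify numerically in the eigenbasis, where $$\|v\|_{\gamma}^{2}=\sum_{k}(\pi^{2}k^{2})^{2\gamma}v_{k}^{2}$$. The test function and the indices below are arbitrary illustrative choices:

```python
import numpy as np

k = np.arange(1, 201)
vk = 1.0 / k ** 3                       # coefficients of v = sum_k v_k e_k

def norm(gamma):
    # ||v||_gamma for v expanded in the eigenbasis of -A
    return np.sqrt(np.sum((np.pi ** 2 * k ** 2) ** (2 * gamma) * vk ** 2))

g1, g2, g3 = 0.1, 0.3, 0.5
# Poincare inequality (2.10)
poincare_ok = norm(g1) <= np.pi ** (g1 - g2) * norm(g2)
# interpolation estimate (2.11)
theta = (g3 - g2) / (g3 - g1)
interp_ok = norm(g2) <= norm(g1) ** theta * norm(g3) ** (1.0 - theta)
# Agmon estimate (2.12): evaluate v on a fine grid for the sup norm
xi = np.linspace(0.0, 1.0, 2001)
v = (vk[:, None] * np.sqrt(2.0) * np.sin(np.pi * np.outer(k, xi))).sum(axis=0)
agmon_ok = np.max(np.abs(v)) <= np.sqrt(norm(0.0) * norm(0.5))
```

In this diagonal representation (2.10) and (2.11) reduce to elementary comparisons of weighted sums (the latter is Hölder's inequality), which is exactly what the check confirms.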

We define the trilinear form $$b: V_{\gamma_{1}}\times V_{\gamma_{2}}\times V_{\gamma_{3}}\rightarrow\mathbb{R}$$, $$\gamma_{i}\geq0$$, $$i=1,2,3$$, by

$$b(x,y,z):= \int_{0}^{1}x\partial_{\xi}yz\,d \xi;$$
(2.13)

Applying integration by parts, we obtain the following identities:

$$b(x,y,y)=b(y,y,x)=-\frac{1}{2}b(y,x,y), \qquad b(x,y,z)=-b(y,x,z)-b(x,z,y).$$
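These identities are easy to confirm by quadrature for smooth functions vanishing at the endpoints; the three test functions below are arbitrary Dirichlet choices:

```python
import numpy as np

xi = np.linspace(0.0, 1.0, 20001)
h = xi[1] - xi[0]

def integral(f):
    # trapezoidal rule on the uniform grid
    return h * (np.sum(f) - 0.5 * f[0] - 0.5 * f[-1])

def b(x, y, z):
    # trilinear form (2.13): b(x, y, z) = int_0^1 x (d_xi y) z dxi
    return integral(x * np.gradient(y, xi) * z)

x = xi * (1.0 - xi)
y = np.sin(np.pi * xi)
z = xi ** 2 * (1.0 - xi)

id1 = abs(b(x, y, y) - b(y, y, x))                 # b(x,y,y) = b(y,y,x)
id2 = abs(b(x, y, y) + 0.5 * b(y, x, y))           # b(x,y,y) = -(1/2) b(y,x,y)
id3 = abs(b(x, y, z) + b(y, x, z) + b(x, z, y))    # b(x,y,z) = -b(y,x,z) - b(x,z,y)
```

All three residuals vanish up to discretization error, reflecting that the boundary terms in the integration by parts drop out for functions vanishing at 0 and 1.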

We introduce the following result, which was proved in [17], Lemma 2.1.

### Lemma 2.3

The trilinear form b is well defined and continuous on $$V_{\gamma_{1}}\times V_{\gamma_{2}+\frac{1}{2}}\times V_{\gamma_{3}}$$, where $$\gamma_{i}\geq0$$, and

$$\gamma_{1}+\gamma_{2}+\gamma_{3}\geq \frac{1}{4}, \quad\textit{if }\gamma _{i}\neq\frac{1}{4}, i=1,2,3,$$

or

$$\gamma_{1}+\gamma_{2}+\gamma_{3}> \frac{1}{4},\quad \textit{if }\gamma_{i}= \frac {1}{4}\textit{ for some }i,$$

that is, there exists a constant $$c>0$$ such that

$$\bigl\vert b(x,y,z) \bigr\vert \leq c \Vert x \Vert _{\gamma_{1}} \Vert y \Vert _{\gamma_{2}+\frac{1}{2}} \Vert z \Vert _{\gamma_{3}},\quad \forall x \in V_{\gamma_{1}}, y\in V_{\gamma_{2}+\frac{1}{2}}, z\in V_{\gamma_{3}}.$$
(2.14)

## 3 Moment estimates for invariant measure

In this section we first estimate the stochastic convolution term of the mild solution for Eq. (2.5) in the interpolation space; then we derive estimates of moments of the associated invariant measure.

### Proposition 3.1

For all $$k\geq1$$, there exists a constant $$C_{k}$$ such that for any $$T>0$$ we have

$$\sup_{0\leq t\leq T}\mathbb{E}\bigl[ \bigl\vert W_{X}(t) \bigr\vert ^{k}\bigr]\leq C_{k}$$
(3.1)

and

$$\sup_{0\leq t\leq T}\mathbb{E}\bigl[ \bigl\Vert W_{X}(t) \bigr\Vert _{\beta}^{k}\bigr]\leq C_{k},\quad \textit{for all }\beta\in\biggl(0,\frac{1}{4}\biggr).$$
(3.2)

### Proof

By the boundedness of the probability measure and the Hölder inequality, we need only prove the result for all $$k\geq p^{*}$$. Firstly, using Theorem 5.2.5 in [18], for any $$\frac{1}{p^{*}}<\alpha<\frac{1}{4}$$, we have the following decomposition (the factorization formula):

$$W_{X}(t)= \int_{0}^{t}(t-s)^{\alpha-1}e^{(t-s)A}h(s) \,ds:=R_{\alpha }h(t),$$
(3.3)

where $$h(t)=\frac{\sin(\pi\alpha)}{\pi}\int_{0}^{t}(t-\tau )^{-\alpha}e^{(t-\tau)A}g(X(\tau,x))\,dW(\tau)$$. By the Hölder inequality, for any $${m>1}$$, we have

\begin{aligned}[b] \bigl\vert R_{\alpha}h(t) \bigr\vert &\leq \int_{0}^{t}(t-s)^{\alpha-1} \bigl\vert e^{(t-s)A}h(s) \bigr\vert \, ds \\ &\leq \int_{0}^{t}(t-s)^{\alpha-1}e^{-\pi^{2}(t-s)} \bigl\vert h(s) \bigr\vert \,ds \\ &\leq \biggl( \int_{0}^{t}e^{-\pi^{2}(t-s)}(t-s)^{(\alpha-1)\frac {m}{m-1}} \,ds \biggr)^{\frac{m-1}{m}} \\ &\quad\times \biggl( \int_{0}^{t}e^{-\pi^{2}(t-s)} \bigl\vert h(s) \bigr\vert ^{m}\,ds \biggr)^{\frac{1}{m}}. \end{aligned}
(3.4)

Notice that there exists a constant $$c_{1}$$ such that

$$\int_{0}^{t}e^{-\pi^{2}(t-s)}(t-s)^{(\alpha-1)\frac{m}{m-1}}\, ds\leq \int_{0}^{\infty}e^{-\pi^{2}s}s^{(\alpha-1)\frac{m}{m-1}}\, ds \leq c_{1}< \infty,$$
(3.5)

which holds provided $$(\alpha-1)\frac{m}{m-1}+1>0$$, that is, $$m\alpha>1$$; since $$\alpha>\frac{1}{p^{*}}$$ and $$k\geq p^{*}$$, we can choose $$m=k$$. Then putting (3.5) into (3.4), we have

$$\sup_{0\leq t\leq T}\mathbb{E}\bigl[ \bigl\vert W_{X}(t) \bigr\vert ^{k}\bigr]\leq c_{1}^{m-1}\sup _{0\leq t\leq T} \int_{0}^{t}e^{-\pi^{2}(t-s)}\mathbb{E}\bigl[ \bigl\vert h(s) \bigr\vert ^{k}\bigr]\,ds.$$
(3.6)

By applying Lemma 7.7 in [19], we get

\begin{aligned}[b] & \int_{0}^{t}e^{-\pi^{2}(t-s)}\mathbb{E}\bigl[ \bigl\vert h(s) \bigr\vert ^{k}\bigr]\,ds \\ & \quad\leq c_{2} \int_{0}^{t}e^{-\pi^{2}(t-s)} \biggl( \int_{0}^{s} \bigl(\mathbb{E}\bigl[ \bigl\Vert (s-\tau)^{-\alpha}e^{(s-\tau)A}g\bigl(X(\tau,x)\bigr) \bigr\Vert _{\mathrm{HS}}^{k}\bigr] \bigr)^{\frac{2}{k}}\,d\tau \biggr)^{\frac{k}{2}}\,ds, \end{aligned}
(3.7)

on the other hand,

\begin{aligned}[b] \bigl\Vert (s- \tau)^{-\alpha}e^{(s-\tau)A}g\bigl(X(\tau,x)\bigr) \bigr\Vert _{\mathrm{HS}}^{2}&=(s-\tau )^{-2\alpha}\sum _{i=1}^{\infty} \bigl\vert e^{(s-\tau)A}g\bigl(X( \tau ,x)\bigr)e_{i} \bigr\vert ^{2} \\ &=(s-\tau)^{-2\alpha}\sum_{i=1}^{\infty} \sum_{j=1}^{\infty }e^{-2\pi^{2}j^{2}(s-\tau)} \bigl\vert \bigl\langle g\bigl(X(\tau,x)\bigr)e_{i},e_{j}\bigr\rangle \bigr\vert ^{2} \\ &=(s-\tau)^{-2\alpha}\sum_{j=1}^{\infty}e^{-2\pi^{2}j^{2}(s-\tau )} \bigl\vert g\bigl(X(\tau,x)\bigr)e_{j} \bigr\vert ^{2} \\ &\leq \Vert g \Vert _{0}^{2}(s-\tau)^{-2\alpha} \sum_{j=1}^{\infty}e^{-2\pi ^{2}j^{2}(s-\tau)}, \end{aligned}
(3.8)

considering Eq. (2.5) in [20], we have $$J(t)=\sum_{j=1}^{\infty }e^{-tj^{2}}\leq 4t^{-\frac{1}{2}}e^{-\frac{t}{2}}$$, then

\begin{aligned}[b] \int_{0}^{s}(s-\tau)^{-2\alpha}\sum _{j=1}^{\infty}e^{-2\pi ^{2}j^{2}(s-\tau)}\,d\tau&\leq \int_{0}^{\infty}\tau^{-2\alpha }J\bigl(2 \pi^{2}\tau\bigr)\,d\tau \\ &\leq4 \int_{0}^{\infty}\tau^{-2\alpha}\bigl(2 \pi^{2}\tau\bigr)^{-\frac {1}{2}}e^{-\pi^{2}\tau}\,d\tau \\ &=2\sqrt{2}\pi^{4\alpha-2}\varGamma \biggl(\frac{1}{2}-2\alpha \biggr):=c_{3}< \infty, \end{aligned}
(3.9)

combining (3.6)–(3.9), we obtain

$$\sup_{0\leq t\leq T}\mathbb{E}\bigl[ \bigl\vert W_{X}(t) \bigr\vert ^{k}\bigr]\leq c_{1}^{m-1}c_{2} \biggl(\frac{\sin(\pi\alpha)}{\pi} \biggr)^{k} \Vert g \Vert _{0}^{k}c_{3}^{\frac{k}{2}} \int_{0}^{T}e^{-\pi^{2}s}\, ds:=C_{k}< \infty,$$
(3.10)

thus we complete the proof of (3.1). For any $$\beta\in (0,\frac{1}{4})$$, using (3.3) we get

$$(-A)^{\beta}W_{X}(t)= \int_{0}^{t}(t-s)^{\alpha-1}(-A)^{\beta }e^{(t-s)A}h(s) \,ds:=R_{\alpha, \beta}h(t),$$
(3.11)

where $$h(t)=\frac{\sin(\pi\alpha)}{\pi}\int_{0}^{t}(t-\tau )^{-\alpha}e^{(t-\tau)A}g(X(\tau,x))\,dW(\tau)$$, $$\frac {1}{p^{*}}<\alpha<\frac{1}{4}$$. For $$\beta\in(0,\frac{1}{4})$$, there exists a constant $$c_{\beta}$$ such that

$$\bigl\Vert (-A)^{\beta}e^{tA} \bigr\Vert \leq c_{\beta}t^{-\beta}e^{-\pi^{2}t},$$
(3.12)

again by the Hölder inequality, for any $$m>1$$, it follows that

\begin{aligned}[b] \bigl\vert R_{\alpha, \beta}h(t) \bigr\vert &\leq \int_{0}^{t}(t-s)^{\alpha -1} \bigl\vert (-A)^{\beta}e^{(t-s)A}h(s) \bigr\vert \,ds \\ &\leq c_{\beta} \int_{0}^{t}(t-s)^{\alpha-1-\beta}e^{-\pi ^{2}(t-s)} \bigl\vert h(s) \bigr\vert \,ds \\ &\leq c_{\beta} \biggl( \int_{0}^{t}e^{-\pi^{2}(t-s)}(t-s)^{(\alpha -1-\beta)\frac{m}{m-1}} \,ds \biggr)^{\frac{m-1}{m}} \\ &\quad\times \biggl( \int_{0}^{t}e^{-\pi^{2}(t-s)} \bigl\vert h(s) \bigr\vert ^{m}\,ds \biggr)^{\frac{1}{m}}. \end{aligned}
(3.13)

Similarly, there exists a constant $$c_{1}$$ such that

$$\int_{0}^{t}e^{-\pi^{2}(t-s)}(t-s)^{(\alpha-1-\beta)\frac {m}{m-1}} \,ds\leq \int_{0}^{\infty}e^{-\pi^{2}s}s^{(\alpha-1-\beta )\frac{m}{m-1}}\,ds \leq c_{1}< \infty,$$
(3.14)

which holds provided $$(\alpha-1-\beta)\frac{m}{m-1}+1>0$$, that is, $$\alpha >\beta+\frac{1}{m}$$; since $$\frac{1}{p^{*}}<\alpha<\frac{1}{4}$$, this requires

$$\beta+\frac{1}{m}< \frac{1}{4},$$
(3.15)

Here we can choose $$m=k$$; thus, for any $$\beta\in(0,\frac{1}{4})$$, there exists a sufficiently large constant $$\bar{k}$$ such that (3.15) holds for all $$k>\bar{k}$$. Next, fully analogous to the proof of (3.1), we can prove that for any $$k>\bar{k}$$, there exists a constant $$C_{k}$$ such that

$$\sup_{0\leq t\leq T}\mathbb{E}\bigl[ \bigl\vert (-A)^{\beta}W_{X}(t) \bigr\vert ^{k}\bigr]\leq C_{k}.$$
(3.16)

Consequently, by the HÃ¶lder inequality, for any $$k\geq p^{*}$$,

$$\sup_{0\leq t\leq T}\mathbb{E}\bigl[ \bigl\Vert W_{X}(t) \bigr\Vert _{\beta}^{k}\bigr]\leq C_{k},$$
(3.17)

and the proof is complete. □
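The kernel bound quoted from [20] and used in (3.9) can be sanity-checked numerically; the series is truncated here, so this is an illustration rather than a proof:

```python
import math

# J(t) = sum_{j >= 1} exp(-t j^2), claimed bound: J(t) <= 4 t^{-1/2} exp(-t/2)
def J(t, terms=2000):
    return sum(math.exp(-t * j * j) for j in range(1, terms + 1))

checks = [J(t) <= 4.0 * t ** -0.5 * math.exp(-t / 2.0)
          for t in (0.01, 0.1, 0.5, 1.0, 2.0, 5.0)]
```

For small t the sum behaves like $$\frac{1}{2}\sqrt{\pi/t}$$, so the $$t^{-1/2}$$ factor in the bound captures the correct blow-up rate as $$t\downarrow0$$.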

### Remark 3.2

1. (1)

The results of Proposition 3.1 hold for all $$\beta<\frac{1}{4}$$. Indeed, for any $$\beta\leq0$$, the Poincaré inequality gives $$\|v\|_{\beta}\leq\pi^{\beta}|v|$$; combining this with (3.1) yields the result.

2. (2)

From (3.9) in the proof of Proposition 3.1, we see that the integral on the left is controlled by the Gamma function. Since the domain of the Gamma function is $$(0,\infty)$$, we must have $$\alpha<\frac{1}{4}$$. Inequality (3.15) then shows that $$\beta<\frac{1}{4}$$; thus the restriction $$\beta<\frac{1}{4}$$ in our conclusion is optimal.

We mention that there exists a unique invariant measure for the transition semigroup defined by (1.6) in our introduction; now we derive an estimate of its moments.

### Theorem 3.3

For any $$k\geq1$$, there exists a constant C such that

$$\int_{H} \vert y \vert ^{k} \nu(dy)< C,$$
(3.18)

and

$$\int_{H} \Vert y \Vert _{\beta}^{2} \nu(dy)< C, \quad\textit{for any }\beta\in\biggl[0,\frac {1}{4}\biggr).$$
(3.19)

### Proof

By the Hölder inequality, we need only prove the result for $$k\geq p^{*}$$. It is well known that the invariant measure ν is ergodic, as a consequence of uniqueness; that is, for any $$\varphi\in L^{2}(H,\nu)$$, we have

$$\lim_{T\rightarrow\infty}\frac{1}{T} \int_{0}^{T}\mathbb {E}\bigl[\varphi\bigl(X(t,x) \bigr)\bigr]\,dt=\lim_{T\rightarrow\infty}\frac{1}{T} \int _{0}^{T}P_{t}\varphi(x)\,dt= \int_{H}\varphi(y) \nu(dy).$$
(3.20)

Set $$\nu_{T}(dy)=\frac{1}{T}\int_{0}^{T}\pi_{t}(x,dy)\,dt$$, $$T\geq1$$, where $$\pi_{t}(x,dy)$$ denotes the law of $$X(t,x)$$, $$t\geq0$$, the mild solution of Eq. (2.5) with initial value x. Then (3.20) can be rewritten as

$$\lim_{T\rightarrow\infty} \int_{H}\varphi(y) \nu_{T}(dy)= \int _{H}\varphi(y) \nu(dy).$$

For any given $$k\geq1$$ and all $$m\geq1$$, $$y\in H$$, set $$\zeta _{m}(y)=\chi_{\{|y|^{k}\leq m\}} |y|^{k}$$; obviously $$\zeta_{m}\in L^{2}(H,\nu)$$ and $$\zeta_{m}(y)\leq |y|^{k}$$. Therefore,

$$\int_{H}\zeta_{m}(y) \nu(dy)=\lim _{T\rightarrow\infty} \int _{H}\zeta_{m}(y) \nu_{T}(dy)\leq \lim_{T\rightarrow\infty} \int _{H} \vert y \vert ^{k} \nu_{T}(dy).$$
(3.21)

Taking the limit $$m\rightarrow\infty$$ in both sides of the inequality (3.21) yields

$$\int_{H} \vert y \vert ^{k} \nu(dy)\leq\lim _{T\rightarrow\infty} \int _{H} \vert y \vert ^{k} \nu_{T}(dy),$$
(3.22)

Using the law of $$X(t,x)$$, $$t\geq0$$, the right-hand side of the above inequality can be rewritten as

$$\int_{H} \vert y \vert ^{k} \nu_{T}(dy)=\frac{1}{T} \int_{0}^{T}\mathbb {E}\bigl[ \bigl\vert X(t,x) \bigr\vert ^{k}\bigr] \,dt,$$
(3.23)

and by Proposition 2.2, for any $$k\geq p^{*}$$, there exists a constant c such that

$${\sup_{t\in[0,T]}\mathbb{E}\bigl[ \bigl\vert X(t,x) \bigr\vert ^{ k}\bigr]\leq c< \infty},$$

then (3.18) can be proved by combining (3.21)–(3.23) with the above inequality.

To prove (3.19), for any $$\beta\in[0,\frac{1}{4})$$, $$k\geq1$$, $$m\geq1$$, and $$y\in H$$, we set $$\zeta_{m}(y)=\chi_{\{\|y\|_{\beta}^{k}\leq m\}} \|y\|_{\beta}^{k}$$; then $$\zeta_{m}\in L^{2}(H,\nu)$$, $$\zeta_{m}(y)\leq\|y\|_{\beta}^{k}$$ for all $$y\in H$$, and

$$\int_{H}\zeta_{m}(y) \nu(dy)=\lim _{T\rightarrow\infty} \int _{H}\zeta_{m}(y) \nu_{T}(dy)\leq \lim_{T\rightarrow\infty} \int _{H} \Vert y \Vert _{\beta}^{k} \nu_{T}(dy).$$
(3.24)

Taking the limit $$m\rightarrow\infty$$ in both sides of the inequality (3.24), we have

$$\int_{H} \Vert y \Vert _{\beta}^{k} \nu(dy)\leq\lim_{T\rightarrow\infty } \int_{H} \Vert y \Vert _{\beta}^{k} \nu_{T}(dy),$$
(3.25)

and the right-hand side can be rewritten as

$$\int_{H} \Vert y \Vert _{\beta}^{k} \nu_{T}(dy)=\frac{1}{T} \int _{0}^{T}\mathbb{E}\bigl[ \bigl\Vert X(t,x) \bigr\Vert _{\beta}^{k}\bigr]\,dt.$$
(3.26)

Moreover, the mild solution $$X(t,x)$$ has the following decomposition:

$$X(t,x)=Y(t)+W_{X}(t),\quad t\geq0.$$
(3.27)

By the elementary inequality $$(a+b)^{k}\leq2^{k-1}(a^{k}+b^{k})$$, we have

$$\mathbb{E}\bigl[ \bigl\Vert X(t,x) \bigr\Vert _{\beta}^{k} \bigr]\leq 2^{k-1} \bigl(\mathbb{E}\bigl[ \bigl\Vert Y(t) \bigr\Vert _{\beta}^{k}\bigr]+\mathbb{E}\bigl[ \bigl\Vert W_{X}(t) \bigr\Vert _{\beta}^{k}\bigr] \bigr),$$
(3.28)

by (3.25), (3.26) and (3.28) it follows that

$$\int_{H} \Vert y \Vert _{\beta}^{k} \nu(dy)\leq\lim_{T\rightarrow\infty }\frac{2^{k-1}}{T} \biggl( \int_{0}^{T}\mathbb{E}\bigl[ \bigl\Vert Y(t) \bigr\Vert _{\beta }^{k}\bigr]\,dt+ \int_{0}^{T}\mathbb{E}\bigl[ \bigl\Vert W_{X}(t) \bigr\Vert _{\beta}^{k}\bigr]\,dt \biggr).$$
(3.29)

Considering (3.2), to prove (3.19), we only need the following claim.

### Claim 1

Let $$k=2$$. For any $$\beta\in [0,\frac{1}{4})$$, there exists a constant C such that

$$\lim_{T\rightarrow\infty}\frac{1}{T} \int_{0}^{T}\mathbb{E}\bigl[ \bigl\Vert Y(t) \bigr\Vert _{\beta}^{k}\bigr]\,dt\leq C.$$
(3.30)

Thanks to (3.27), $$Y(t)$$ satisfies the equation

$$\frac{d}{dt}Y(t)=AY(t)+b\bigl(Y(t)+W_{X}(t)\bigr),\quad t \geq0,$$
(3.31)

Taking the inner product in H with $$(-A)^{2\beta-1}Y(t)$$ and combining (1.4) with integration by parts and the Dirichlet boundary conditions yields

\begin{aligned} &\frac{d}{dt} \bigl\Vert Y(t) \bigr\Vert _{\beta-\frac{1}{2}}^{2}+2 \bigl\Vert Y(t) \bigr\Vert _{\beta }^{2}\\ &\quad=2\bigl\langle b \bigl(Y(t)+W_{X}(t)\bigr), (-A)^{2\beta-1}Y(t)\bigr\rangle \\ &\quad= \int_{0}^{1}\partial_{\xi} \bigl[Y(t)+W_{X}(t)\bigr]^{2}\cdot(-A)^{2\beta -1}Y(t)\,d \xi \\ &\quad=\bigl[Y(t)+W_{X}(t)\bigr]^{2}(-A)^{2\beta-1}Y(t)\big|_{0}^{1}- \int _{0}^{1}\bigl[Y(t)+W_{X}(t) \bigr]^{2}\cdot\partial_{\xi}(-A)^{2\beta -1}Y(t)\,d\xi \\ &\quad=- \int_{0}^{1}\bigl[Y(t)^{2}+2Y(t)W_{X}(t)+W_{X}^{2}(t) \bigr]\cdot\partial _{\xi}(-A)^{2\beta-1}Y(t)\,d\xi \\ &\quad=- \biggl[ \int_{0}^{1}Y(t)^{2}\cdot \partial_{\xi}(-A)^{2\beta -1}Y(t)\,d\xi+ \int_{0}^{1}W_{X}^{2}(t)\cdot \partial_{\xi }(-A)^{2\beta-1}Y(t)\,d\xi \\ &\qquad{}+ \int_{0}^{1}2Y(t)W_{X}(t)\cdot \partial_{\xi}(-A)^{2\beta -1}Y(t)\,d\xi \biggr]. \end{aligned}

Then, recalling (2.13), $$b(x,y,z):=\int_{0}^{1}x\partial_{\xi }yz\,d\xi$$, we denote

\begin{aligned}& I_{1}(t)= \biggl\vert \int_{0}^{1}Y(t)^{2}\cdot \partial_{\xi }(-A)^{2\beta-1}Y(t)\,d\xi \biggr\vert = \bigl\vert b \bigl(Y(t),(-A)^{2\beta -1}Y(t),Y(t)\bigr) \bigr\vert , \\& I_{2}(t)= \biggl\vert \int_{0}^{1}W_{X}^{2}(t)\cdot \partial_{\xi }(-A)^{2\beta-1}Y(t)\,d\xi \biggr\vert = \bigl\vert b \bigl(W_{X}(t),(-A)^{2\beta -1}Y(t),W_{X}(t)\bigr) \bigr\vert , \\& I_{3}(t)= \biggl\vert \int_{0}^{1}2Y(t)\cdot W_{X}(t)\cdot \partial_{\xi }(-A)^{2\beta-1}Y(t)\,d\xi \biggr\vert =2 \bigl\vert b\bigl(Y(t),(-A)^{2\beta -1}Y(t),W_{X}(t)\bigr) \bigr\vert . \end{aligned}

Therefore,

$$\frac{d}{dt} \bigl\Vert Y(t) \bigr\Vert _{\beta-\frac{1}{2}}^{2}+2 \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2} \leq I_{1}(t)+I_{2}(t)+I_{3}(t).$$
(3.32)

By applying LemmaÂ 2.3, choosing $$\gamma_{1}=\gamma_{3}=0$$, $$\gamma_{2}=\frac{1}{2}-\beta>\frac{1}{4}$$, for all $$x,z\in H$$, $$y\in V_{\beta}$$, we have

$$\bigl\vert b\bigl(x,(-A)^{2\beta-1}y,z\bigr) \bigr\vert \leq c \vert x \vert \bigl\Vert (-A)^{2\beta-1}y \bigr\Vert _{1-\beta} \vert z \vert =c \vert x \vert \Vert y \Vert _{\beta} \vert z \vert ,$$

using the interpolation inequality and the Young inequality, we have

\begin{aligned}& \begin{aligned} \bigl\vert I_{1}(t) \bigr\vert &= \bigl\vert b \bigl(Y(t),(-A)^{2\beta-1}Y(t),Y(t)\bigr) \bigr\vert \leq c \bigl\vert Y(t) \bigr\vert ^{2} \bigl\Vert Y(t) \bigr\Vert _{\beta} \\&\leq\frac{1}{3} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2}+c_{1} \bigl\vert Y(t) \bigr\vert ^{4},\end{aligned} \\& \begin{aligned} \bigl\vert I_{2}(t) \bigr\vert &= \bigl\vert b \bigl(W_{X}(t),(-A)^{2\beta-1}Y(t),W_{X}(t)\bigr) \bigr\vert \leq c \bigl\vert W_{X}(t) \bigr\vert ^{2} \bigl\Vert Y(t) \bigr\Vert _{\beta}\\&\leq\frac{1}{3} \bigl\Vert Y(t) \bigr\Vert _{\beta }^{2}+c_{2} \bigl\vert W_{X}(t) \bigr\vert ^{4},\end{aligned} \end{aligned}

and

\begin{aligned} \bigl\vert I_{3}(t) \bigr\vert =&2 \bigl\vert b \bigl(Y(t),(-A)^{2\beta-1}Y(t),W_{X}(t)\bigr) \bigr\vert \leq c \bigl\vert Y(t) \bigr\vert \bigl\Vert Y(t) \bigr\Vert _{\beta} \bigl\vert W_{X}(t) \bigr\vert \\ \leq&\frac{1}{3} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2}+c_{3} \bigl\vert Y(t) \bigr\vert ^{4}+c_{4} \bigl\vert W_{X}(t) \bigr\vert ^{4}, \end{aligned}

by (3.32) it follows that

\begin{aligned} \frac{d}{dt} \bigl\Vert Y(t) \bigr\Vert _{\beta-\frac{1}{2}}^{2}+ \bigl\Vert Y(t) \bigr\Vert _{\beta }^{2}\leq c\bigl( \bigl\vert Y(t) \bigr\vert ^{4}+ \bigl\vert W_{X}(t) \bigr\vert ^{4}\bigr). \end{aligned}

Therefore

$$\bigl\Vert Y(t) \bigr\Vert _{\beta-\frac{1}{2}}^{2}+ \int_{0}^{t} \bigl\Vert Y(s) \bigr\Vert _{\beta}^{2}\, ds\leq \vert x \vert ^{2}+c \biggl( \int_{0}^{t} \bigl\vert Y(s) \bigr\vert ^{4}\,ds+ \int_{0}^{t} \bigl\vert W_{X}(s) \bigr\vert ^{4}\, ds \biggr),$$
(3.33)

now in view of (3.1) we get

$$\mathbb{E} \biggl[ \int_{0}^{t} \bigl\vert W_{X}(s) \bigr\vert ^{4}\,ds \biggr]= \int _{0}^{t}\mathbb{E}\bigl[ \bigl\vert W_{X}(s) \bigr\vert ^{4}\bigr]\,ds\leq C t.$$

Considering that $$X(t,x)\in Z_{T}$$, we have

$$\mathbb{E} \biggl[ \int_{0}^{t} \bigl\vert Y(s) \bigr\vert ^{4}\,ds \biggr]\leq8 \biggl( \int _{0}^{t}\mathbb{E}\bigl[ \bigl\vert X(s,x) \bigr\vert ^{4}\bigr]\,ds+ \int_{0}^{t}\mathbb {E}\bigl[ \bigl\vert W_{X}(s) \bigr\vert ^{4}\bigr]\,ds \biggr)\leq C t.$$

Taking expectations in (3.33) and using the two bounds above, we conclude that

$$\int_{0}^{t}\mathbb{E}\bigl[ \bigl\Vert Y(s) \bigr\Vert _{\beta}^{2}\bigr]\,ds\leq C(t+1).$$

The result follows. □
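The ergodic-averaging step (3.20)–(3.23) can be illustrated in a toy one-dimensional model where the invariant measure is explicit: for $$dx=-x\,dt+dW$$, the invariant measure is the Gaussian $$N(0,\frac{1}{2})$$, so the time average of $$x^{2}$$ should approach $$\frac{1}{2}$$. This is only a finite-dimensional stand-in for the Burgers dynamics, with hypothetical step size and horizon:

```python
import math
import random

random.seed(0)

# Euler scheme for dx = -x dt + dW; the time average of x^2 approximates
# int x^2 nu(dx) = 1/2 for the invariant measure nu = N(0, 1/2).
dt, steps = 1e-3, 1_000_000
x, acc = 0.0, 0.0
for _ in range(steps):
    x += -x * dt + random.gauss(0.0, math.sqrt(dt))
    acc += x * x
time_avg = acc / steps
```

The same mechanism underlies (3.23): a moment of the invariant measure is recovered as the long-time average of the corresponding moment of the solution.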

### Remark 3.4

1. (1)

We can conclude from (3.18) that the invariant measure ν has $$L^{p}$$ ($$p\geq1$$) regularity.

2. (2)

By the Sobolev embedding theorem, for any $$p\geq2$$, we have

$$\vert x \vert _{p}\leq c \Vert x \Vert _{\beta}, \quad\text{for }\beta=\frac{p-2}{4p},$$
(3.34)

therefore, by (3.19) and Hölder's inequality, for any $$p\geq1$$, there exists a constant C such that

$$\int_{H} \vert y \vert _{p}^{2} \nu(dy)< C.$$

### Corollary 3.5

For any $$\beta\in[0,\frac{1}{4})$$, there exists $$c_{\beta}=\frac {4}{4\beta+3}$$ such that

$$\int_{H} \Vert y \Vert _{\beta}^{2k} \nu(dy)< \infty,\quad \forall k\in[0,c_{\beta }).$$
(3.35)

### Proof

For any $$\beta\in[0,\frac{1}{4})$$, we have $$c_{\beta}=\frac{4}{4\beta+3}>1$$; in view of Theorem 3.3, we need only prove the result for $$k\in(1,c_{\beta})$$. Analogous to the proof of Theorem 3.3, it suffices to show the following claim.

### Claim 2

For any $$\beta\in[0,\frac{1}{4})$$ and $$k\in(1,c_{\beta})$$, there exists a constant C such that

$$\lim_{T\rightarrow\infty}\frac{1}{T} \int_{0}^{T}\mathbb{E}\bigl[ \bigl\Vert Y(t) \bigr\Vert _{\beta}^{k}\bigr]\,dt\leq C.$$

Note that $$Y(t)$$ satisfies Eq. (3.31). Let $$\gamma=\beta-\frac{1}{2k}$$; it follows that

\begin{aligned} & \frac{1}{2k}\frac{d}{dt} \bigl\vert (-A)^{\gamma}Y(t) \bigr\vert ^{2k} \\ &\quad= \bigl\vert (-A)^{\gamma}Y(t) \bigr\vert ^{2k-2}\bigl\langle AY(t)+b\bigl(Y(t)+W_{X}(t)\bigr),(-A)^{2\gamma}Y(t)\bigr\rangle \\ &\quad=- \bigl\Vert Y(t) \bigr\Vert _{\gamma}^{2k-2} \bigl\Vert Y(t) \bigr\Vert _{\gamma+\frac{1}{2}}^{2}+ \bigl\Vert Y(t) \bigr\Vert _{\gamma}^{2k-2}\bigl\langle b\bigl(Y(t)+W_{X}(t) \bigr), (-A)^{2\gamma }Y(t)\bigr\rangle , \end{aligned}

by the interpolation inequality, we get

$$\bigl\Vert Y(t) \bigr\Vert _{\beta}\leq \bigl\Vert Y(t) \bigr\Vert _{\gamma}^{1-\frac{1}{k}} \bigl\Vert Y(t) \bigr\Vert _{\gamma+\frac{1}{2}}^{\frac{1}{k}},$$

then using the Poincaré inequality $$\|Y(t)\|_{\gamma}\leq c\|Y(t)\|_{\beta}$$ yields

\begin{aligned} \frac{1}{2k}\frac{d}{dt} \bigl\vert (-A)^{\gamma}Y(t) \bigr\vert ^{2k}+ \bigl\Vert Y(t) \bigr\Vert _{\beta }^{2k} \leq& \bigl\Vert Y(t) \bigr\Vert _{\gamma}^{2k-2} \bigl\vert \bigl\langle b\bigl(Y(t)+W_{X}(t)\bigr), (-A)^{2\gamma}Y(t)\bigr\rangle \bigr\vert \\ \leq& c \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k-2} \bigl\vert \bigl\langle b\bigl(Y(t)+W_{X}(t)\bigr), (-A)^{2\gamma}Y(t) \bigr\rangle \bigr\vert \\ \leq&\frac{c}{2} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k-2} \bigl\vert b\bigl(Y(t),(-A)^{2\gamma }Y(t),Y(t)\bigr) \bigr\vert \\ &{}+\frac{c}{2} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k-2} \bigl\vert b\bigl(W_{X}(t),(-A)^{2\gamma }Y(t),W_{X}(t) \bigr) \bigr\vert \\ &{}+c \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k-2} \bigl\vert b\bigl(Y(t),(-A)^{2\gamma}Y(t),W_{X}(t)\bigr) \bigr\vert \\ :=&I_{1}(t)+I_{2}(t)+I_{3}(t). \end{aligned}

Now, using the properties of the trilinear form $$b(x,y,z)$$, set $$\gamma_{1}=\gamma_{3}=0$$, $$\gamma_{2}=\frac{1}{k}-\frac {1}{2}-\beta>\frac{1}{4}$$; then, for all $$x,z\in H$$, $$y\in V_{\beta}$$, it follows that

$$\bigl\vert b\bigl(x,(-A)^{2\gamma}y,z\bigr) \bigr\vert \leq c \vert x \vert \bigl\Vert (-A)^{2\gamma}y \bigr\Vert _{\frac{1}{k}-\beta} \vert z \vert =c \vert x \vert \Vert y \Vert _{\beta} \vert z \vert ;$$

applying, respectively, the interpolation inequality and Young inequality, we get

\begin{aligned}& \begin{aligned} \bigl\vert I_{1}(t) \bigr\vert &= \frac{c}{2} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k-2} \bigl\vert b\bigl(Y(t),(-A)^{2\gamma }Y(t),Y(t)\bigr) \bigr\vert \\ &\leq\frac{c}{2} \bigl\vert Y(t) \bigr\vert ^{2} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k-1} \\ &\leq\frac{1}{6} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k}+c_{1} \bigl\vert Y(t) \bigr\vert ^{4k}, \end{aligned} \\& \begin{aligned} \bigl\vert I_{2}(t) \bigr\vert &= \frac{c}{2} \bigl\Vert Y(t) \bigr\Vert _{\beta }^{2k-2} \bigl\vert b\bigl(W_{X}(t),(-A)^{2\gamma}Y(t),W_{X}(t) \bigr) \bigr\vert \\ &\leq\frac{c}{2} \bigl\vert W_{X}(t) \bigr\vert ^{2} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k-1} \\ &\leq\frac{1}{6} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k}+c_{2} \bigl\vert W_{X}(t) \bigr\vert ^{4k}, \end{aligned} \end{aligned}

and

\begin{aligned} \bigl\vert I_{3}(t) \bigr\vert =&c \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k-2} \bigl\vert b\bigl(Y(t),(-A)^{2\gamma }Y(t),W_{X}(t) \bigr) \bigr\vert \\ \leq&c \bigl\vert Y(t) \bigr\vert \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k-1} \bigl\vert W_{X}(t) \bigr\vert \\ \leq&\frac{1}{6} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k}+c_{3} \bigl\vert Y(t) \bigr\vert ^{4k}+c_{4} \bigl\vert W_{X}(t) \bigr\vert ^{4k}. \end{aligned}

By the above inequalities we deduce

\begin{aligned} \frac{1}{2k}\frac{d}{dt} \bigl\vert (-A)^{\gamma}Y(t) \bigr\vert ^{2k}+\frac{1}{2} \bigl\Vert Y(t) \bigr\Vert _{\beta}^{2k}\leq c\bigl( \bigl\vert Y(t) \bigr\vert ^{4k}+ \bigl\vert W_{X}(t) \bigr\vert ^{4k} \bigr). \end{aligned}

Finally, integrating the above inequality with respect to t yields

\begin{aligned} & \frac{1}{2k} \bigl\vert (-A)^{\gamma}Y(t) \bigr\vert ^{2k}+\frac{1}{2} \int_{0}^{t} \bigl\Vert Y(s) \bigr\Vert _{\beta}^{2k}\,ds \\ &\quad\leq \frac{1}{2k} \vert x \vert ^{2k}+c \biggl( \int_{0}^{t} \bigl\vert Y(s) \bigr\vert ^{4k}\,ds+ \int _{0}^{t} \bigl\vert W_{X}(s) \bigr\vert ^{4k}\,ds \biggr). \end{aligned}

Therefore, by Proposition 2.2 and Proposition 3.1, we complete the proof. □

## 4 Transition semigroup and the Kolmogorov operator

This section mainly studies the smoothing properties of the semigroup $$P_{t}$$ defined by (1.6). It is well known that $$P_{t}$$ has the strong Feller property, which is connected with the existence and uniqueness of the invariant measure, but the behavior of its derivative $$DP_{t}\varphi(x)$$ is less obvious. Following the method in [9, 14], we introduce an auxiliary semigroup, denoted by

$${S_{t}\varphi(x)=\mathbb{E}\bigl[e^{-c\int_{0}^{t}|X(s,x)|^{4}\, ds}\varphi\bigl(X(t,x) \bigr)\bigr]},$$
(4.1)

where $$X(t,x)$$ is the unique mild solution of Eq. (2.5) and the constant c is sufficiently large. The required results can then be derived from the smoothing estimates for $$S_{t}$$ and the identity

$$P_{t}\varphi(x)=S_{t}\varphi(x)+c \int _{0}^{t}S_{t-s}\bigl( \vert x \vert ^{4}P_{s}\varphi(x)\bigr)\,ds.$$
(4.2)
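Identity (4.2) is the standard perturbation formula relating a Feynman–Kac semigroup to the original one. As a heuristic check (a formal computation, rigorous only on a suitable core of functions), write K for the generator of $$P_{t}$$; by (4.1) the generator of $$S_{t}$$ is then $$K-c|x|^{4}$$:

```latex
% Formal derivation of (4.2): for fixed t, set F(s) = S_{t-s}(P_{s}\varphi),
% 0 \le s \le t. Differentiating in s,
\frac{d}{ds}F(s)
  = -S_{t-s}\bigl[(K - c|x|^{4})P_{s}\varphi\bigr]
    + S_{t-s}\bigl[K P_{s}\varphi\bigr]
  = c\,S_{t-s}\bigl(|x|^{4}P_{s}\varphi\bigr),
% and integrating from s = 0 to s = t gives
% F(t)-F(0) = P_{t}\varphi - S_{t}\varphi
%           = c\int_{0}^{t}S_{t-s}\bigl(|x|^{4}P_{s}\varphi\bigr)\,ds.
```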

### 4.1 Estimate for the derivative of $$X(t,x)$$

We have the following result for the Gâteaux derivative of $$X(t,x)$$, which can be found in [1].

### Lemma 4.1

For any $$h\in H$$, the Gâteaux derivative of the mild solution $$X(t,x)$$, denoted by $$\eta^{h}(t,x):=DX(t,x)h$$, satisfies

$$\left \{ \textstyle\begin{array}{ll} d\eta^{h}(t,x)= [A\eta^{h}(t,x)+b^{\prime}(X(t,x))\eta ^{h}(t,x)]\,dt+g^{\prime}(X(t,x))\eta^{h}(t,x)\,dW(t), \\ \eta^{h}(0,x)=h, \end{array}\displaystyle \right .$$
(4.3)

where $$b^{\prime}(X(t,x))\eta^{h}(t,x)=\partial_{\xi}(X(t,x)\eta ^{h}(t,x))$$. Furthermore, for any $$T>0$$, $$p\geq2$$, there exists a constant $$c_{p,T}>0$$ such that

$$\bigl\Vert \eta^{h}(t,x) \bigr\Vert _{T}^{p}= \mathbb{E} \Bigl[\sup_{t\in[0,T]} \bigl\vert \eta ^{h}(t,x) \bigr\vert ^{p} \Bigr]< c_{p,T} \vert h \vert ^{p}< \infty.$$

It is easy to see that $$\eta^{h}(t,x)$$ fulfills the following integral equation:

\begin{aligned} \eta^{h}(t,x)&=e^{tA}h+ \int_{0}^{t}e^{(t-s)A}b^{\prime} \bigl(X(s,x)\bigr)\eta ^{h}(s,x)\,ds \\ &\quad+ \int_{0}^{t}e^{(t-s)A}g^{\prime} \bigl(X(s,x)\bigr)\eta^{h}(s,x)\,d W_{s}. \end{aligned}
(4.4)

Formally, we denote the stochastic integral term by $$Z(t)$$, defined as

$$Z(t)= \int_{0}^{t}e^{(t-s)A}g^{\prime} \bigl(X(s,x)\bigr)\eta^{h}(s,x)\,d W_{s}.$$
(4.5)

Now, in view of the generalized form of the maximal inequality for stochastic integrals with respect to martingales (see e.g. [16]), for any $$k\geq1$$, $$\beta<\frac{1}{4}$$, one obtains

\begin{aligned} \mathbb{E} \bigl[ \bigl\vert (-A)^{\beta}Z(t) \bigr\vert ^{2k} \bigr]\leq c \biggl( \int _{0}^{t} \bigl(\mathbb{E}\bigl[ \bigl\Vert (-A)^{\beta}e^{(t-s)A}g^{\prime}\bigl(X(s,x)\bigr)\eta ^{h}(s,x) \bigr\Vert _{\mathrm{HS}}^{2k}\bigr] \bigr)^{\frac{1}{k}}\,ds \biggr)^{k}, \end{aligned}

where

\begin{aligned} & \bigl\Vert (-A)^{\beta}e^{(t-s)A}g^{\prime}\bigl(X(s,x) \bigr)\eta^{h}(s,x) \bigr\Vert _{\mathrm{HS}}^{2} \\ &\quad= \sum_{i,j=1}^{\infty} \bigl\vert \bigl\langle (-A)^{\beta }e^{(t-s)A}\bigl[g^{\prime}\bigl(X(s,x)\bigr) \eta^{h}(s,x)e_{i}\bigr], e_{j} \bigr\rangle \bigr\vert ^{2} \\ &\quad= \sum_{i,j=1}^{\infty}(\pi j)^{4\beta}e^{-2\pi ^{2}j^{2}(t-s)} \bigl\vert \bigl\langle g^{\prime} \bigl(X(s,x)\bigr)\eta^{h}(s,x)e_{i}, e_{j} \bigr\rangle \bigr\vert ^{2} \\ &\quad= \sum_{j=1}^{\infty}(\pi j)^{4\beta}e^{-2\pi^{2}j^{2}(t-s)} \bigl\vert g^{\prime}\bigl(X(s,x)\bigr) \eta^{h}(s,x) e_{j} \bigr\vert ^{2} \\ &\quad\leq c \bigl\Vert g^{\prime} \bigr\Vert _{0} \bigl\vert \eta^{h}(s,x) \bigr\vert ^{2}\sum _{j=1}^{\infty}j^{ 4\beta}e^{-2\pi^{2}j^{2}(t-s)} , \end{aligned}

then we define

$$J_{\beta}(t)= \sum_{j=1}^{\infty}j^{ 4\beta}e^{-j^{2}t},$$

and it is easy to show that there exists a constant $$c_{\beta}>0$$ such that

$$J_{\beta}(t)\leq c_{\beta}t^{-\frac{1}{2}-2\beta}e^{-t}.$$
(4.6)
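As a numerical illustration of the bound (4.6) near $$t=0$$ (this is only a sanity check, not part of the proof), the following sketch compares the truncated series $$J_{\beta}(t)$$ with $$t^{-\frac{1}{2}-2\beta}$$ on a bounded interval, for an arbitrarily chosen sample value $$\beta<\frac{1}{4}$$; the constant produced is specific to this grid and this β.

```python
import math

def J(beta: float, t: float, terms: int = 500) -> float:
    """Truncated series J_beta(t) = sum_{j>=1} j^(4*beta) * exp(-j^2 * t)."""
    return sum(j ** (4 * beta) * math.exp(-j * j * t) for j in range(1, terms + 1))

beta = 0.2                                  # hypothetical sample value, beta < 1/4
ts = [10 ** (-k / 4) for k in range(9)]     # grid of t in [0.01, 1]
# Ratios J_beta(t) / t^(-1/2 - 2*beta); boundedness on (0, 1] reflects (4.6).
ratios = [J(beta, t) * t ** (0.5 + 2 * beta) for t in ts]
c_beta = max(ratios)
```

As $$t\to0$$ the ratio approaches $$\frac{1}{2}\varGamma(2\beta+\frac{1}{2})$$, by comparison with the Gaussian integral $$\int_{0}^{\infty}s^{4\beta}e^{-s^{2}t}\,ds$$, which is where the exponent $$-\frac{1}{2}-2\beta$$ comes from.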

Hence, combining this with Eq. (3.5), since $$-\frac{1}{2}-2\beta+1>0$$, that is, $$\beta<\frac{1}{4}$$, we have

\begin{aligned}[b] \mathbb{E} \bigl[ \bigl\Vert Z(t) \bigr\Vert _{\beta}^{2k} \bigr]&=\mathbb{E} \bigl[ \bigl\vert (-A)^{\beta}Z(t) \bigr\vert ^{2k} \bigr] \\ &\leq c \biggl( \int_{0}^{t}c_{\beta} \bigl\Vert g^{\prime} \bigr\Vert _{0} \bigl(\mathbb {E}\bigl[ \bigl\vert \eta^{h}(s,x) \bigr\vert ^{2k}\bigr] \bigr)^{\frac{1}{k}}J_{\beta}\bigl(2\pi^{2}(t-s)\bigr)\, ds \biggr)^{k} \\ &\leq c_{\beta} \bigl\Vert g^{\prime} \bigr\Vert _{0}^{k} \vert h \vert ^{2k} \biggl( \int_{0}^{\infty} s^{-\frac{1}{2}-2\beta}e^{-2\pi^{2}s}\,ds \biggr)^{k} \\ &\leq c_{\beta,k,g} \vert h \vert ^{2k}. \end{aligned}
(4.7)

### Lemma 4.2

For any $$\beta\in[-\frac{3}{8},-\frac{1}{4})$$, there exist constants $$c_{1},c_{2}>0$$ such that, for any $$t\geq0$$, $$x,h\in H$$, we have

\begin{aligned}[b] &\mathbb{E} \bigl[e^{-c_{1}\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds} \bigl\Vert \eta ^{h}(t,x) \bigr\Vert _{\beta}^{2} \bigr]+ \mathbb{E} \biggl[ \int_{0}^{t}e^{-c_{1}\int_{0}^{s} \vert X(\tau,x) \vert ^{4}\, d\tau} \bigl\Vert \eta^{h}(s,x) \bigr\Vert _{\beta+\frac{1}{2}}^{2}\,ds \biggr]\\ &\quad\leq \Vert h \Vert _{\beta}^{2}e^{c_{2}t}.\end{aligned}
(4.8)

### Proof

For any $$t\geq0$$, $$x,h\in H$$, set $$Y^{h}(t)= \eta^{h}(t,x)-Z(t)$$, thus $$Y^{h}(t)$$ satisfies

$$\frac{d}{dt}Y^{h}(t)= AY^{h}(t)+b^{\prime} \bigl(X(t,x)\bigr)\eta^{h}(t,x),$$

taking the inner product in H with $$(-A)^{2\beta}Y^{h}(t)$$, integrating by parts and using Hölder's inequality, one finds

\begin{aligned} \frac{1}{2}\frac{d}{dt} \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta}^{2}+ \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta +\frac{1}{2}}^{2} =& \int_{0}^{1}\partial_{\xi} \bigl(X(t,x) \bigl[Y^{h}(t)+Z(t)\bigr] \bigr) (-A)^{2\beta}Y^{h}(t) \,d\xi \\ =&- \int_{0}^{1}X(t,x)\bigl[Y^{h}(t)+Z(t) \bigr]\partial_{\xi}\bigl((-A)^{2\beta }Y^{h}(t)\bigr) \,d\xi \\ \leq& \bigl\vert X(t,x) \bigr\vert \bigl( \bigl\vert Y^{h}(t) \bigr\vert _{4}+ \bigl\vert Z(t) \bigr\vert _{4}\bigr) \bigl\vert \partial_{\xi }\bigl((-A)^{2\beta}Y^{h}(t) \bigr) \bigr\vert _{4}, \end{aligned}

and on account of the imbedding $$V_{\frac{1}{8}}\hookrightarrow L^{4}(0,1)$$ and the interpolation inequality, we obtain

$$\bigl\vert Y^{h}(t) \bigr\vert _{4}\leq c \bigl\Vert Y^{h}(t) \bigr\Vert _{\frac{1}{8}}\leq c \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta}^{2\beta+\frac{3}{4}} \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta+\frac {1}{2}}^{-2\beta+\frac{1}{4}}$$

and

\begin{aligned} \bigl\vert \partial_{\xi}\bigl((-A)^{2\beta}Y^{h}(t) \bigr) \bigr\vert _{4}&\leq c \bigl\Vert \partial_{\xi } \bigl((-A)^{2\beta}Y^{h}(t)\bigr) \bigr\Vert _{\frac{1}{8}} \\ &\leq c \bigl\Vert Y^{h}(t) \bigr\Vert _{2\beta+\frac{5}{8}}\leq c \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta }^{-2\beta-\frac{1}{4}} \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta+\frac{1}{2}}^{2\beta +\frac{5}{4}}.\end{aligned}

Therefore

\begin{aligned} \frac{1}{2}\frac{d}{dt} \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta}^{2}+ \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta +\frac{1}{2}}^{2} \leq& c \bigl\vert X(t,x) \bigr\vert \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta}^{\frac {1}{2}} \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta+\frac{1}{2}}^{\frac{3}{2}} \\ &{} + c \bigl\vert X(t,x) \bigr\vert \bigl\vert Z(t) \bigr\vert _{4} \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta+\frac{1}{2}} \\ \leq&c_{1} \bigl\vert X(t,x) \bigr\vert ^{4} \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta}^{2}+ \frac{1}{4} \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta+\frac{1}{2}}^{2} \\ &{}+ \bigl\vert X(t,x) \bigr\vert ^{4} +c \bigl\vert Z(t) \bigr\vert _{4}^{4}+\frac{1}{4} \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta+\frac {1}{2}}^{2}. \end{aligned}

Integrating the last inequality with respect to t and taking expectations, we have

\begin{aligned} \mathbb{E} \bigl[e^{-c_{1}\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds} \bigl\Vert Y^{h}(t) \bigr\Vert _{\beta}^{2} \bigr]+ \mathbb{E} \biggl[ \int_{0}^{t}e^{-c_{1}\int_{0}^{s} \vert X(\tau,x) \vert ^{4}\, d\tau} \bigl\Vert Y^{h}(s) \bigr\Vert _{\beta+\frac{1}{2}}^{2}\,ds \biggr]\leq \Vert h \Vert _{\beta}^{2}e^{c_{2}t}. \end{aligned}

Therefore, using once again that $$\eta^{h}(t,x)=Y^{h}(t)+Z(t)$$, (4.8) follows by the Minkowski inequality and (4.7). □

### 4.2 Estimate for the regularity of $$P_{t}\varphi$$

By the results on the Feynman–Kac semigroup ([1], Lemma 4.1, and [15], Theorem 2.1), for any $$\varphi\in C_{b}(H)$$, the Banach space of all uniformly continuous and bounded functions on H endowed with the supremum norm $$\|\varphi\|_{0}=\sup_{x\in H}|\varphi (x)|$$, the semigroup $$S_{t}\varphi$$ is differentiable in any direction $$h\in H$$, and the directional derivative $$DS_{t}\varphi(x)\cdot h$$ can be written as

\begin{aligned}[b] D S_{t}\varphi(x)\cdot h & = \frac{1}{t}\mathbb{E} \biggl[e^{-c\int _{0}^{t} \vert X(s,x) \vert ^{4}\,ds}\varphi\bigl(X(t,x)\bigr) \int_{0}^{t}\bigl\langle g^{-1} \bigl(X(s,x)\bigr)\eta^{h}(s,x), d W(s)\bigr\rangle \biggr] \\ &\quad - 4c\mathbb{E} \biggl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds}\varphi \bigl(X(t,x)\bigr) \int_{0}^{t}\biggl(1-\frac{s}{t}\biggr) \bigl\vert X(s,x) \bigr\vert ^{2}\bigl\langle X(s,x), \eta ^{h}(s,x)\bigr\rangle \,ds \biggr]. \end{aligned}
(4.9)

Moreover, for any $$\varphi\in C_{b}^{1}(H)$$, all $$x\in H$$ and $$h\in H$$, it follows that

\begin{aligned}[b] D S_{t}\varphi(x)\cdot h& = \mathbb{E} \bigl[e^{-c\int _{0}^{t} \vert X(s,x) \vert ^{4}\,ds}D\varphi\bigl(X(t,x)\bigr)\cdot \eta^{h}(t,x) \bigr] \\ &\quad - 4c\mathbb{E} \biggl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds}\varphi \bigl(X(t,x)\bigr) \int_{0}^{t} \bigl\vert X(s,x) \bigr\vert ^{2}\bigl\langle X(s,x), \eta^{h}(s,x)\bigr\rangle \,ds \biggr]. \end{aligned}
(4.10)

To estimate $$D S_{t}$$, in addition to the assumptions in the introduction, we always assume that the function $$g^{-1}$$ is bounded. We also use the space of continuous functions with kth-polynomial growth, denoted by $$C_{b,k}(H)$$, which is the Banach space of mappings $$\varphi: H\rightarrow\mathbb{R}$$ such that $$\frac{\varphi}{1+|\cdot |^{k}}\in C_{b}(H)$$, equipped with the norm $$\|\varphi\|_{0,k}=\| (1+|\cdot|^{k})^{-1}\varphi\|_{0}$$. Then the following result can be obtained.

### Proposition 4.3

For any $$\varphi\in C_{b,4}(H)$$ and $$x\in H$$, there exists a constant c such that

$$\bigl\Vert D S_{t}\varphi(x) \bigr\Vert _{\frac{3}{8}}\leq c e^{ct}\bigl(1+ \vert x \vert ^{4}\bigr) \bigl(1+t^{-\frac{7}{8}} \bigl\Vert g^{-1} \bigr\Vert _{0}\bigr) \Vert \varphi \Vert _{0,4}.$$
(4.11)

### Proof

For any $$\varphi\in C_{b,4}(H)$$, by an approximation argument we may assume that $$\varphi\in C_{b}(H)$$. Then, by (4.9) and Hölder's inequality, we have

\begin{aligned} \bigl\vert D S_{t}\varphi(x)\cdot h \bigr\vert ^{2} \leq&\frac{1}{t^{2}} \Vert \varphi \Vert _{0,4}^{2} \mathbb{E} \bigl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\, ds}\bigl(1+ \bigl\vert X(t,x) \bigr\vert ^{4}\bigr)^{2} \bigr] \\ & {}\times\mathbb{E} \biggl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds} \biggl( \int_{0}^{t}\bigl\langle g^{-1} \bigl(X(s,x)\bigr)\eta^{h}(s,x), d W(s)\bigr\rangle \biggr)^{2} \biggr] \\ &{}+4c \Vert \varphi \Vert _{0,4}^{2}\mathbb{E} \bigl[e^{-c\int _{0}^{t} \vert X(s,x) \vert ^{4}\,ds}\bigl(1+ \bigl\vert X(t,x) \bigr\vert ^{4} \bigr)^{2} \bigr] \\ & {}\times\mathbb{E} \biggl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds} \biggl( \int_{0}^{t} \bigl\vert X(s,x) \bigr\vert ^{2}\bigl\langle X(s,x), \eta^{h}(s,x)\bigr\rangle \, ds \biggr)^{2} \biggr], \end{aligned}

from the result of Proposition 2.2, we get

$$\mathbb{E} \bigl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\, ds}\bigl(1+ \bigl\vert X(t,x) \bigr\vert ^{4}\bigr)^{2} \bigr]\leq ce^{ct}\bigl(1+ \vert x \vert ^{4}\bigr)^{2}.$$
(4.12)

Using the same argument as in the proof of Lemma 4.1 in [9], and by the Itô formula, we obtain

\begin{aligned} &\mathbb{E} \biggl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds} \biggl( \int _{0}^{t}\bigl\langle g^{-1} \bigl(X(s,x)\bigr)\eta^{h}(s,x), d W(s)\bigr\rangle \biggr)^{2} \biggr] \\ &\quad\leq \bigl\Vert g^{-1} \bigr\Vert _{0}^{2} \mathbb{E} \biggl[ \int_{0}^{t}e^{-c\int_{0}^{s} \vert X(\tau,x) \vert ^{4}\,d\tau} \bigl\vert \eta^{h}(s,x) \bigr\vert ^{2}\,ds \biggr], \end{aligned}

applying once again the fact that $$|x|^{2}\leq c \|x\|_{-\frac {3}{8}}^{\frac{1}{2}}\|x\|_{\frac{1}{8}}^{\frac{3}{2}}$$, Lemma 4.2 implies

\begin{aligned}[b] &\mathbb{E} \biggl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds} \biggl( \int _{0}^{t}\bigl\langle g^{-1} \bigl(X(s,x)\bigr)\eta^{h}(s,x), d W(s)\bigr\rangle \biggr)^{2} \biggr] \\ &\quad\leq c \bigl\Vert g^{-1} \bigr\Vert _{0}^{2} \mathbb{E} \biggl[ \int_{0}^{t}e^{-c\int _{0}^{s} \vert X(\tau,x) \vert ^{4}\,d\tau} \bigl\Vert \eta^{h}(s,x) \bigr\Vert _{-\frac{3}{8}}^{\frac{1}{2}} \bigl\Vert \eta^{h}(s,x) \bigr\Vert _{\frac{1}{8}}^{\frac{3}{2}}\,ds \biggr] \\ &\quad\leq c \bigl\Vert g^{-1} \bigr\Vert _{0}^{2} \mathbb{E} \biggl[ \int_{0}^{t}e^{-c\int _{0}^{s} \vert X(\tau,x) \vert ^{4}\,d\tau} \bigl\Vert \eta^{h}(s,x) \bigr\Vert _{-\frac{3}{8}}^{2}\,ds \biggr]^{\frac{1}{4}} \\ &\qquad \times\mathbb{E} \biggl[ \int_{0}^{t}e^{-c\int_{0}^{s} \vert X(\tau ,x) \vert ^{4}\,d\tau} \bigl\Vert \eta^{h}(s,x) \bigr\Vert _{\frac{1}{8}}^{2}\,ds \biggr]^{\frac {3}{4}} \\ &\quad\leq c \bigl\Vert g^{-1} \bigr\Vert _{0}^{2}e^{ct}t^{\frac{1}{4}} \Vert h \Vert ^{2}_{-\frac{3}{8}}. \end{aligned}
(4.13)
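For clarity, the third inequality in (4.13) is Hölder's inequality with the conjugate pair $$(4, \frac{4}{3})$$, applied on $$\varOmega\times[0,t]$$:

```latex
% With the shorthand w(s) = e^{-c\int_{0}^{s}|X(\tau,x)|^{4}\,d\tau},
% f(s) = w(s)\|\eta^{h}(s,x)\|_{-3/8}^{2},\; g(s) = w(s)\|\eta^{h}(s,x)\|_{1/8}^{2},
% the integrand equals f^{1/4} g^{3/4}, and
\mathbb{E}\int_{0}^{t} f^{\frac{1}{4}}\,g^{\frac{3}{4}}\,ds
\leq \biggl(\mathbb{E}\int_{0}^{t} f\,ds\biggr)^{\frac{1}{4}}
     \biggl(\mathbb{E}\int_{0}^{t} g\,ds\biggr)^{\frac{3}{4}}.
```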

Therefore,

\begin{aligned}[b] &\mathbb{E} \biggl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds} \biggl( \int _{0}^{t} \bigl\vert X(s,x) \bigr\vert ^{2}\bigl\langle X(s,x), \eta^{h}(s,x)\bigr\rangle \,ds \biggr)^{2} \biggr] \\ &\quad\leq \mathbb{E} \biggl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds} \biggl( \int _{0}^{t} \bigl\vert X(s,x) \bigr\vert ^{3} \bigl\vert \eta^{h}(s,x) \bigr\vert \,ds \biggr)^{2} \biggr] \\ &\quad\leq \mathbb{E} \biggl[e^{-c\int_{0}^{t} \vert X(s,x) \vert ^{4}\,ds} \biggl( \int _{0}^{t} \bigl\vert X(s,x) \bigr\vert ^{6}\,ds \biggr) \biggl( \int_{0}^{t} \bigl\vert \eta^{h}(s,x) \bigr\vert ^{2}\, ds \biggr) \biggr] \\ &\quad\leq ce^{ct} \Vert h \Vert ^{2}_{-\frac{3}{8}}. \end{aligned}
(4.14)

By Eqs. (4.12), (4.13), and (4.14) it follows that

$$\bigl\vert D S_{t}\varphi(x)\cdot h \bigr\vert \leq ce^{ct}\bigl(1+ \vert x \vert ^{4}\bigr) \bigl(1+ \bigl\Vert g^{-1} \bigr\Vert _{0}t^{-\frac{7}{8}}\bigr) \Vert h \Vert _{-\frac{3}{8}},$$

and (4.11) follows. □

If $$\varphi\in C_{b}^{1}(H)$$, then, combining with (4.10), the following corollary can be proved by the same argument as above; we omit the details.

### Corollary 4.4

Assume that $$\varphi\in C_{b}^{1}(H)$$ and that $$\| D\varphi(x)\|_{\frac{3}{8}}\leq c_{\varphi}$$ for any $$x\in H$$. Then we have

\begin{aligned} \bigl\Vert D S_{t}\varphi(x) \bigr\Vert _{\frac{3}{8}}\leq c_{\varphi} e^{ct}. \end{aligned}

### Proposition 4.5

Assume that the assumptions of Corollary 4.4 hold. Then there exists a constant c such that

$$\bigl\Vert D P_{t}\varphi(x) \bigr\Vert _{\frac{3}{8}}\leq \bigl(c_{\varphi}+c \Vert \varphi \Vert _{0}\bigl(1+ \vert x \vert ^{4}\bigr)\bigr) e^{ct}.$$
(4.15)

### Proof

For any $$h\in H$$, by (4.2) we have

$$DP_{t}\varphi(x)\cdot h=DS_{t}\varphi(x)\cdot h+c \int _{0}^{t}DS_{t-s}\bigl( \vert x \vert ^{4}P_{s}\varphi\bigr) (x)\cdot h\,ds,$$

and, using the contraction property of the Markov semigroup $$P_{s}$$ on $$C_{b}(H)$$, it follows that

$$\bigl\Vert \vert x \vert ^{4}P_{s}\varphi \bigr\Vert _{0,4}=\sup_{x\in H}\frac{ \vert x \vert ^{4} \vert P_{s}\varphi (x) \vert }{1+ \vert x \vert ^{4}}\leq \Vert P_{s}\varphi \Vert _{0}\leq \Vert \varphi \Vert _{0}.$$

Consequently, applying Proposition 4.3 and Corollary 4.4 yields

\begin{aligned} \bigl\vert D P_{t}\varphi(x)\cdot h \bigr\vert \leq&c_{\varphi} e^{ct} \Vert h \Vert _{-\frac {3}{8}}+ c \Vert \varphi \Vert _{0}\bigl(1+ \vert x \vert ^{4} \bigr) \int_{0}^{t}\bigl(1+ \bigl\Vert g^{-1} \bigr\Vert _{0}(t-s)^{-\frac{7}{8}}\bigr)e^{c(t-s)}\,ds \Vert h \Vert _{-\frac{3}{8}} \\ \leq& \bigl(c_{\varphi}+c \Vert \varphi \Vert _{0} \bigl(1+ \vert x \vert ^{4}\bigr)\bigr) e^{ct} \Vert h \Vert _{-\frac{3}{8}}. \end{aligned}

Hence the result follows.â€ƒâ–¡

### 4.3 Kolmogorov operator

In this section, we discuss the properties of the Kolmogorov operator in the space $$L^{2}(H,\nu)$$. By a standard result, the semigroup $$P_{t}$$ can be uniquely extended to a contraction Markov semigroup in $$L^{2}(H,\nu)$$, whose infinitesimal generator is denoted by $$(K_{2},D(K_{2}))$$; in particular, $$K_{2}$$ is m-dissipative by the Lumer–Phillips theorem in [21]. By the Itô formula, $$K_{2}$$ is an extension of $$K_{0}$$, that is, $$K_{2}\varphi=K_{0}\varphi$$ for any $$\varphi\in\mathscr{E}_{A}(H)$$. Hence the operator $$K_{0}$$ is dissipative and closable in $$L^{2}(H,\nu)$$; we denote its closure by $$(\bar{K_{0}},D(\bar{K_{0}}))$$. Therefore, by the above results, the following conclusion holds.

### Proposition 4.6

The operator $$K_{2}$$ is the closure of $$K_{0}$$, that is, $$\bar {K_{0}}=K_{2}$$, and $$\bar{K_{0}}$$ is m-dissipative in $$L^{2}(H,\nu)$$.

### Proof

Analogous to the method in [9], we introduce the approximation operator

$$K_{0}^{n}\varphi(x)=\frac{1}{2}\operatorname{Tr} \bigl[g_{n}(x)g_{n}(x)^{*}D^{2} \varphi(x)\bigr]+\bigl\langle Ax+b_{n}(x),D\varphi (x)\bigr\rangle ,\quad \varphi\in\mathscr{E}_{A}(H),$$

where $$g_{n}$$, $$b_{n}$$ denote the regular Galerkin approximations of g and b. Then we consider the elliptic equation

$$\lambda\varphi_{n}-K_{0}^{n} \varphi_{n}=f,$$
(4.16)

where $$\lambda>0$$ is sufficiently large and $$f\in\mathscr{E}_{A}(H)$$; its solution is given by

$$\varphi_{n}(x)= \int_{0}^{+\infty}e^{-\lambda t}P_{t}^{n}f(x) \,dt,$$
(4.17)

where $$P_{t}^{n}$$ is the corresponding transition semigroup associated to the approximated problem

$$\left \{ \textstyle\begin{array}{ll} d X= (AX+b_{n}(X) )\,dt+g_{n}(X)\,dW(t),\\ X(0)=x. \end{array}\displaystyle \right .$$
(4.18)
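For intuition, the approximated problem (4.18) can be simulated by a spectral Galerkin scheme in the sine basis $$e_{j}(\xi)=\sqrt{2}\sin(\pi j\xi)$$ with a semi-implicit Euler–Maruyama step. The following is only an illustrative sketch: the grid sizes, the time step, and the choice $$g(x)=\cos x$$ (a bounded Lipschitz coefficient) are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 128                      # number of sine modes / spatial grid points
dt, steps = 1e-3, 200               # time step and horizon (illustrative)
xi = np.linspace(0.0, 1.0, m + 2)[1:-1]                   # interior grid of (0, 1)
j = np.arange(1, n + 1)
E = np.sqrt(2.0) * np.sin(np.pi * np.outer(j, xi))        # e_j evaluated on the grid
dE = np.sqrt(2.0) * np.pi * j[:, None] * np.cos(np.pi * np.outer(j, xi))
lam = (np.pi * j) ** 2                                    # eigenvalues of -A

a = np.zeros(n)                     # modal coefficients of X, starting from x = 0
for _ in range(steps):
    X = E.T @ a                                           # X on the grid
    # <b(X), e_j> with b(X) = (1/2) d/dxi (X^2): integrating by parts (Dirichlet
    # boundary terms vanish), <b(X), e_j> = -(1/2)<X^2, e_j'>, as a Riemann sum:
    b = -0.5 * (dE @ (X ** 2)) / (m + 1)
    g = np.cos(X)                   # hypothetical bounded Lipschitz diffusion coefficient
    # Crude grid projection of g(X) dW for cylindrical W (space-time white noise):
    noise = (E @ (g * rng.standard_normal(m))) / np.sqrt(m + 1)
    # Semi-implicit Euler-Maruyama step, implicit in the stiff linear part A:
    a = (a + dt * b + np.sqrt(dt) * noise) / (1.0 + dt * lam)

l2_norm_sq = float(np.sum(a ** 2))  # |X(T)|^2 in H, by Parseval
```

Treating the linear part implicitly avoids the severe time-step restriction $$dt\lesssim\lambda_{n}^{-1}$$ of an explicit scheme, while the nonlinearity and noise are handled explicitly.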

Moreover, proceeding as in [22] it is not difficult to see that $$\varphi_{n}\in D(\bar{K_{0}})$$, so that

$$\lambda\varphi_{n}(x)- \bar{K_{0}}\varphi_{n}(x)=f(x)+ \bigl\langle b_{n}(x)-b(x),D\varphi_{n}(x)\bigr\rangle .$$
(4.19)

Upon using Proposition 4.5, we derive

\begin{aligned} \bigl\Vert D\varphi_{n}(x) \bigr\Vert _{\frac{3}{8}} \leq& \int_{0}^{+\infty }e^{-\lambda t} \bigl\Vert D P_{t}^{n}f(x) \bigr\Vert _{\frac{3}{8}}\,dt \\ \leq& \int_{0}^{+\infty}e^{-(\lambda-c) t} \bigl(c_{f}+c \Vert f \Vert _{0}\bigl(1+ \vert x \vert ^{4}\bigr)\bigr)\,dt \\ =&\frac{c_{f}+c \Vert f \Vert _{0}(1+ \vert x \vert ^{4})}{\lambda-c}. \end{aligned}

By Theorem 3.3, for any $$k\geq0$$ we have $$\int_{H}\|x^{2}\| _{\frac{1}{8}}^{2}|x|^{2k}\nu(dx)<\infty$$; thus

\begin{aligned} \int_{H} \bigl\vert \bigl\langle b(x), D \varphi_{n}(x)\bigr\rangle \bigr\vert ^{2} \nu(dx)&\leq C \int_{H} \bigl\Vert b(x) \bigr\Vert _{-\frac{3}{8}}^{2} \bigl\Vert D\varphi_{n}(x) \bigr\Vert _{\frac {3}{8}}^{2} \nu(dx) \\ &\leq C \int_{H} \bigl\Vert x^{2} \bigr\Vert _{\frac{1}{8}}^{2} \bigl(1+ \vert x \vert ^{4} \bigr)^{2} \nu(dx) < \infty.\end{aligned}

On the other hand, note that $$\|b_{n}(x)-b(x)\|_{-\frac {3}{8}}\rightarrow0$$, ν-a.s., and $$\|b_{n}(x)-b(x)\|_{-\frac {3}{8}}\leq\|b(x)\|_{-\frac{3}{8}}$$; with the help of the dominated convergence theorem, it follows that

\begin{aligned} \lim_{n\rightarrow+\infty}\bigl\langle b_{n}(x)-b(x),D\varphi _{n}(x)\bigr\rangle =0,\quad \mbox{in } L^{p}(H,\nu). \end{aligned}

Therefore, on account of (4.19), we have $$\mathscr {E}_{A}(H)\subset R(\lambda-\bar{K_{0}})\subset L^{2}(H,\nu)$$, and by Theorem 3.20 in [23], $$\bar{K_{0}}$$ is m-dissipative in $$L^{2}(H,\nu)$$. This completes the proof. □

## References

1. Da Prato, G., Gatarek, D.: Stochastic Burgers equation with correlated noise. Stoch. Int. J. Probab. Stoch. Process. 52, 29–41 (1995)

2. Leon, J., Nualart, D., Pettersson, R.: The stochastic Burgers equation: finite moments and smoothness of the density. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 3, 363–385 (2000)

3. Twardowska, K., Zabczyk, J.: A note on stochastic Burgers' system of equations. Stoch. Anal. Appl. 22, 1641–1670 (2007)

4. Boulanba, L., Mellouk, M.: On a high-dimensional nonlinear stochastic partial differential equation. Stoch. Int. J. Probab. Stoch. Process. 86, 975–992 (2011)

5. Hairer, M., Weber, H.: Rough Burgers-like equations with multiplicative noise. Probab. Theory Relat. Fields 155, 71–126 (2013)

6. Bréhier, C.E., Debussche, A.: Kolmogorov equations and weak order analysis for SPDEs with nonlinear diffusion coefficient. J. Math. Pures Appl. 119, 193–254 (2018)

7. Catuogno, P., Colombeau, J.F., Olivera, C.: Generalized solutions of the multidimensional stochastic Burgers equation. J. Math. Anal. Appl. 464, 1375–1382 (2018)

8. Zhang, T.S.: Stochastic Burgers type equations with reflection: existence, uniqueness. J. Differ. Equ. 267, 4537–4571 (2019)

9. Da Prato, G., Debussche, A.: m-Dissipativity of Kolmogorov operators corresponding to Burgers equations with space-time white noise. Potential Anal. 26, 31–55 (2007)

10. Es-Sarhir, A., Stannat, W.: Invariant measures for semilinear SPDE's with local Lipschitz drift coefficients and applications. J. Evol. Equ. 8, 129–154 (2008)

11. Es-Sarhir, A., Stannat, W.: Improved moment estimates for invariant measures of semilinear diffusions in Hilbert spaces and applications. J. Funct. Anal. 259, 1248–1272 (2009)

12. Lewis, P., Nualart, D.: Stochastic Burgers' equation on the real line: regularity and moment estimates. Stoch. Int. J. Probab. Stoch. Process. 90, 1053–1086 (2017)

13. Dong, Z., Sun, X., Xiao, H., et al.: Averaging principle for one dimensional stochastic Burgers equation. J. Differ. Equ. 265, 4749–4797 (2018)

14. Da Prato, G., Debussche, A.: Ergodicity for the 3D stochastic Navier–Stokes equations. J. Math. Pures Appl. 82, 877–947 (2003)

15. Dong, Z., Xie, Y.: Ergodicity of stochastic 2D Navier–Stokes equation with Lévy noise. J. Differ. Equ. 251, 196–222 (2011)

16. Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (1992)

17. Temam, R.: Navier–Stokes Equations and Nonlinear Functional Analysis. SIAM, Philadelphia (1995)

18. Da Prato, G., Zabczyk, J.: Ergodicity for Infinite Dimensional Systems. Cambridge University Press, Cambridge (1996)

19. Da Prato, G.: Kolmogorov equations for stochastic PDE's with multiplicative noise. Stoch. Anal. Appl. 2, 235–263 (2007)

20. Da Prato, G., Zabczyk, J.: Differentiability of the Feynman–Kac semigroup and a control application. Rend. Lincei Mat. Appl. 8, 183–188 (1997)

21. Lumer, G., Phillips, R.S.: Dissipative operators in a Banach space. Pac. J. Math. 11, 679–698 (1961)

22. Da Prato, G., Röckner, M.: Singular dissipative stochastic equations in Hilbert spaces. Probab. Theory Relat. Fields 124, 261–303 (2002)

23. Da Prato, G.: Kolmogorov Equations for Stochastic PDEs. Birkhäuser, Berlin (2004)

## Funding

This research was partly supported by the NNSF of China grants 11571126, 11626178, and the Fundamental Research Funds for the Central Universities (WUT: 2018IA004).

## Author information

Authors

### Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Yu Shi.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Reprints and permissions

Shi, Y., Liu, B. Moment estimates for invariant measures of stochastic Burgers equations. Adv Differ Equ 2020, 11 (2020). https://doi.org/10.1186/s13662-019-2486-5