
Theory and Modern Applications

Simultaneous identification of initial field and spatial heat source for heat conduction process by optimizations


Consider the simultaneous identification of the initial field and the spatial heat source in a heat conduction process from two additional measurement data taken at different times. The uniqueness and conditional stability for this inverse problem are established by using the properties of a parabolic equation and the representation of the solution after reformulating the equation. By combining the least squares method with a regularization technique, the inverse problem is transformed into an optimal control problem. Based on the existence and uniqueness of the minimizer of the cost functional, an alternating iteration process is built to solve this optimization problem by the variational adjoint method. The negative gradient direction is selected as the first search direction. For further iterations, the alternating iteration algorithm is used for the identification of the initial field and the heat source. The efficiency of the proposed scheme is tested by numerical simulation experiments.

1 Introduction

Consider the following heat conduction problem:

$$ u_{t}-\Delta u=f(x)\quad \text{in }\varOmega \times (0,T), $$

with initial condition

$$ u(x,0)=\phi (x)\quad \text{in }\varOmega , $$

and boundary condition

$$ u(x,t)=0\quad \text{on }\partial \varOmega , $$

where \(\varOmega \subset \mathbb{R}^{d}\) (\(d=1,2\)) is a bounded domain, \(f(x)\) is the space-dependent heat source, and \(\phi (x)\) is the initial temperature with \(\phi \vert _{\partial \varOmega }=0\). If \(f,\phi \in L^{2}(\varOmega )\), then problem (1)–(3) has a unique solution \(u\in L^{2}(0,T;H_{0}^{1}( \varOmega ))\) in the distributional sense satisfying (2) (see [1]), and

$$ \Vert u \Vert _{L^{2}(0,T;H_{0}^{1}(\varOmega ))}\le C\bigl( \Vert f \Vert _{L^{2}(\varOmega )}+ \Vert \phi \Vert _{L^{2}(\varOmega )}\bigr), $$

where C is independent of f and ϕ.

The problem considered in this paper is to determine the initial temperature \(\phi (x)\) and the space-dependent source \(f(x)\) simultaneously from two additional measurement data at times \(T_{1}\), \(T_{2}\) for \(0< T_{1}< T_{2}\leq T\):

$$\begin{aligned}& u(x,T_{1})=\psi _{1}(x),\quad x\in \varOmega , \end{aligned}$$
$$\begin{aligned}& u(x,T_{2})=\psi _{2}(x),\quad x\in \varOmega . \end{aligned}$$

It is well known that if the source term \(f(x)\) in (1) is given, the recovery of the initial value \(\phi (x)\) from \(u(x,T)\) is severely ill-posed. For the classical heat conduction equation, it is called a backward heat conduction problem. Many authors have studied this kind of inverse problem; see [2,3,4,5,6,7]. On the other hand, when the initial value \(\phi (x)\) is known, estimating the source \(f(x)\) from the final observation is also an ill-posed problem [8, 9]. The degree of ill-posedness is equivalent to that of second-order numerical differentiation [10, 11]. The problem in this paper was first investigated by Johansson and Lesnic in [12], where uniqueness was proved and an iterative regularization algorithm was used to solve it.

This article is organized as follows. In Sect. 2, we give a uniqueness analysis and a conditional stability result for the inverse problem (1)–(5). An optimization problem is presented and an alternating iteration scheme is constructed by means of the variational adjoint method in Sect. 3. Numerical examples are given in Sect. 4.

2 Uniqueness and conditional stability

Suppose \(f(x), \phi (x)\in L^{2}(\varOmega )\) and set \(w(x,t)=u(x,t)- \phi (x)\); then (1)–(3) becomes

$$ \textstyle\begin{cases} w_{t}-\Delta w=f(x)-\Delta \phi (x) &\text{in }\varOmega \times (0,T), \\ w(x,t)=0 &\text{on }\partial \varOmega , \\ w(x,0)=0 &\text{in }\varOmega , \end{cases} $$

with \(w(x,T_{1})=\psi _{1}(x)-\phi (x)\) and \(w(x,T_{2})=\psi _{2}(x)- \phi (x)\). Applying separation of variables, we can obtain the solution to (6) as

$$ w(x,t)= \int _{\varOmega } \int _{0}^{t} \mathbb{K}(t-\tau ,x,y) \bigl(f(y)- \Delta \phi (y)\bigr)\,d\tau \,dy, $$


where

$$ \mathbb{K}(t-\tau ,x,y)=\sum_{n=1}^{\infty }e^{-\lambda _{n}(t- \tau )}X_{n}(x)X_{n}(y), $$

and \(\lambda _{n}\), \(X_{n}\) are the eigenvalues and the \(L^{2}(\varOmega )\)-orthonormal eigenfunctions of \(-\Delta \) on Ω with homogeneous Dirichlet boundary condition.

Thus, for \(i=1,2\), we can obtain

$$ \int _{\varOmega }\bigl(f(y)-\Delta \phi (y)\bigr) \int _{0}^{T_{i}} \mathbb{K}(T_{i}- \tau ,x,y) \,d\tau \,dy+\phi (x)=\psi _{i}(x). $$

Set \(\tilde{\phi }(x)=\Delta \phi (x)\) with \(\phi (x)=0\) (\(x\in \partial \varOmega \)) and \(\tilde{\psi }_{i}(x)=\Delta \psi _{i}(x)\) (\(i=1,2\)) with \(\psi _{i}(x)=0\) (\(x\in \partial \varOmega \)). Then (8) yields

$$ \int _{\varOmega }\bigl(f(y)-\tilde{\phi }(y)\bigr) \int _{0}^{T_{i}}\Delta \mathbb{K}(T_{i}-\tau ,x,y)\,d\tau \,dy+\tilde{\phi }(x)=\tilde{\psi } _{i}(x),\quad i=1,2. $$

So we have

$$ \sum_{n=1}^{\infty } \int _{0}^{T_{i}}e^{-\lambda _{n}(T_{i}- \tau )}\,d\tau \int _{\varOmega }\bigl[f(y)-\tilde{\phi }(y)\bigr]X_{n}(y) \,dy \Delta X _{n}(x)+\tilde{\phi }(x)=\tilde{ \psi }_{i}(x). $$

Denote \(h_{n}^{i}=\int _{0}^{T_{i}}e^{-\lambda _{n}(T_{i}-\tau )}\,d \tau =\frac{1}{\lambda _{n}}(1-e^{-\lambda _{n} T_{i}})\). Since \(\Delta X_{n}=-\lambda _{n} X_{n}\), by using Green's formula, we have

$$\begin{aligned}& \tilde{\phi }_{n}=\bigl\langle \tilde{\phi }(x),X_{n} \bigr\rangle =\bigl\langle \Delta \phi (x), X_{n}\bigr\rangle =\bigl\langle \phi (x),\Delta X_{n} \bigr\rangle =-\lambda _{n} \bigl\langle \phi (x), X_{n}\bigr\rangle =-\lambda _{n} \phi _{n}, \\& \tilde{ \psi }_{i,n}=\bigl\langle \tilde{ \psi }_{i}(x),X_{n} \bigr\rangle =\bigl\langle \Delta \psi _{i}(x),X_{n} \bigr\rangle =\bigl\langle \psi _{i}(x),\Delta X_{n} \bigr\rangle =- \lambda _{n}\langle \psi _{i}, X_{n} \rangle =-\lambda _{n} \psi _{i,n}. \end{aligned}$$

So (8) yields

$$ \textstyle\begin{cases} -h_{n}^{1} f_{n} -(1+h_{n}^{1}\lambda _{n})\phi _{n}=-\psi _{1,n}, \\ -h_{n}^{2} f_{n} -(1+h_{n}^{2}\lambda _{n})\phi _{n}=-\psi _{2,n}. \end{cases} $$

When \(T_{1}\ne T_{2}\), we have \(h_{n}^{1}\ne h_{n}^{2}\). So (11) has a unique solution. By simple calculation, we obtain

$$ \textstyle\begin{cases} f_{n}=\frac{(1+h_{n}^{2}\lambda _{n})\psi _{1,n}-(1+h_{n}^{1} \lambda _{n})\psi _{2,n}}{h_{n}^{1}-h_{n}^{2}}, \\ \phi _{n}=\frac{h_{n}^{1} \psi _{2,n}-h_{n}^{2} \psi _{1,n}}{h_{n}^{1}-h _{n}^{2}}. \end{cases} $$

Note that \(\phi _{n}\), \(f_{n}\) are the Fourier coefficients of ϕ and f, that is, ϕ and f are uniquely determined by the measurements at the different times \(T_{1}\) and \(T_{2}\). Combining this with the properties of the problem (1)–(3), we have the following result:
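The explicit recovery formulas (12) can be checked directly in a simple setting. The following sketch is our own illustration, not the paper's code: it works in one dimension with \(X_{n}(x)\propto \sin (nx)\) and \(\lambda _{n}=n^{2}\) on \(\varOmega =(0,\pi )\), builds \(\psi _{1,n}\), \(\psi _{2,n}\) from chosen coefficients via system (11), and recovers \(\phi _{n}\), \(f_{n}\) via (12). Only low modes are used, since for large \(\lambda _{n}\) the determinant \(h_{n}^{1}-h_{n}^{2}\) is exponentially small, reflecting the ill-posedness of the problem.

```python
import numpy as np

# Illustration (not the paper's code): recover the Fourier coefficients
# phi_n, f_n from the data coefficients psi_{1,n}, psi_{2,n} via the
# closed-form solution (12) of the 2x2 system (11). One-dimensional
# setting: Omega = (0, pi), X_n ~ sin(nx), lambda_n = n^2.

def h(lam, T):
    # h_n^i = int_0^{T_i} e^{-lam (T_i - tau)} d tau = (1 - e^{-lam T_i}) / lam
    return (1.0 - np.exp(-lam * T)) / lam

def recover(psi1, psi2, lam, T1, T2):
    h1, h2 = h(lam, T1), h(lam, T2)
    det = h1 - h2                         # nonzero whenever T1 != T2
    f = ((1 + h2 * lam) * psi1 - (1 + h1 * lam) * psi2) / det
    phi = (h1 * psi2 - h2 * psi1) / det
    return phi, f

# Synthetic check on low modes only: for large lambda_n the determinant
# h_n^1 - h_n^2 is exponentially small and the recovery becomes unstable.
n = np.arange(1, 6)
lam = n.astype(float) ** 2
phi_true, f_true = 1.0 / n**2, np.exp(-n / 5.0)
T1, T2 = 0.5, 1.0
psi1 = h(lam, T1) * f_true + (1 + h(lam, T1) * lam) * phi_true   # system (11)
psi2 = h(lam, T2) * f_true + (1 + h(lam, T2) * lam) * phi_true
phi_rec, f_rec = recover(psi1, psi2, lam, T1, T2)
print(np.allclose(phi_rec, phi_true), np.allclose(f_rec, f_true))  # -> True True
```

Running the same sketch with higher modes (say \(n\) up to 20) makes the recovery blow up in floating point, which is a concrete view of why a regularized optimization approach is used below.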

Lemma 2.1

Suppose \(\phi , f\in L^{2}(\varOmega )\) and \(\psi _{1},\psi _{2} \in H _{0}^{1}(\varOmega )\); then the solution to the inverse problem (1)–(5) is unique in \(L^{2}(\varOmega )\times L^{2}( \varOmega )\).

Now, we give a conditional stability analysis for the inverse problem. Suppose \(\partial {\varOmega }\in C^{2+\alpha }\) and \(\phi , f\in C ^{2+\alpha }(\bar{\varOmega })\), then

$$ \psi _{1},\psi _{2}\in C^{2+\alpha }(\bar{ \varOmega })\subset C^{2}(\bar{ \varOmega })\subset L^{2}( \varOmega ) $$

by Theorem 2.2.5 in [13].

We divide the time interval \([0,T_{2}]\) into \([0,T_{1}]\) and \([T_{1},T_{2}]\). In \([T_{1},T_{2}]\), for the source \(f(x)\), we have the following stability result in the framework of Hölder spaces [14]:

$$ \vert f \vert ^{\alpha }_{\bar{\varOmega }}\leq C\bigl( \vert \psi _{1} \vert ^{2+\alpha }_{\bar{ \varOmega }}+ \vert \psi _{2} \vert ^{2+\alpha }_{\bar{\varOmega }}\bigr). $$

Now we consider the conditional stability with respect to ϕ in \([0,T_{1}]\) with the aid of the stability result (14). By separation of variables we can obtain the solution to (1)–(4) as follows:

$$ u(x,t)=\sum_{n=1}^{\infty }W_{n}(t)X_{n}(x). $$

Determining \(W_{n}(t)\) from the data at \(t=0\) and at \(t=T_{1}\), respectively, we have

$$\begin{aligned}& u(x,t)=\sum_{n=1}^{\infty } \biggl[e^{-\lambda _{n} t} \phi _{n}+\frac{1}{ \lambda _{n}} \bigl(1-e^{-\lambda _{n} t}\bigr)f_{n} \biggr]X_{n}(x), \end{aligned}$$
$$\begin{aligned}& u(x,t)=\sum_{n=1}^{\infty } \biggl[e^{\lambda _{n}(T_{1}-t)}\psi _{1,n}+\frac{1}{\lambda _{n}}\bigl(1-e^{\lambda _{n}(T_{1}-t)} \bigr)f_{n} \biggr]X _{n}(x), \end{aligned}$$

where \(\phi _{n}=\langle \phi ,X_{n} \rangle \), \(\psi _{1,n}=\langle\psi _{1},X _{n}\rangle\), and \(f_{n}=\langle f,X_{n} \rangle \).

From (16), we have

$$ u(x,T_{1})=\sum_{n=1}^{\infty } \biggl[e^{-\lambda _{n} T_{1}} \phi _{n}+\frac{1}{\lambda _{n}} \bigl(1-e^{-\lambda _{n} T_{1}}\bigr)f_{n} \biggr]X_{n}(x). $$

Similarly, from (17), we have

$$ u(x,0)=\sum_{n=1}^{\infty } \biggl[e^{\lambda _{n} T_{1}}\psi _{1,n}+\frac{1}{ \lambda _{n}}\bigl(1-e^{\lambda _{n}T_{1}} \bigr)f_{n} \biggr]X_{n}(x) $$
and


$$ \Delta u(x,0)=\sum_{n=1}^{\infty } \bigl[-\lambda _{n} e^{\lambda _{n} T_{1}}\psi _{1,n}-\bigl(1-e^{\lambda _{n}T_{1}} \bigr)f_{n} \bigr]X_{n}(x). $$
Therefore,


$$\begin{aligned}& \bigl\Vert \Delta u(\cdot ,0) \bigr\Vert ^{2}_{L^{2}(\varOmega )} \\& \quad = \sum_{n=1}^{ \infty } \bigl(-\lambda _{n} e^{\lambda _{n} T_{1}}\psi _{1,n}-\bigl(1-e^{ \lambda _{n}T_{1}} \bigr)f_{n} \bigr)^{2} \\& \quad \leq 2 \Biggl[\sum_{n=1}^{\infty }\lambda _{n}^{2} e^{2\lambda _{n} T_{1}}\psi ^{2}_{1,n}+ \sum_{n=1}^{\infty }\bigl(1-e^{\lambda _{n} T_{1}} \bigr)^{2} f^{2}_{n} \Biggr] \\& \quad \leq 8 \Biggl[ \Biggl(\sum_{n=1}^{\infty } \lambda _{n}^{2}e^{4 \lambda _{n} T_{1}}\psi ^{2}_{1,n} \Biggr)^{1/2} \Biggl(\sum_{n=1}^{ \infty } \lambda _{n}^{2}\psi ^{2}_{1,n} \Biggr)^{1/2}+ \Biggl(\sum_{n=1} ^{\infty } e^{2\lambda _{n} T_{1}} f^{2}_{n} \Biggr)^{1/2} \Biggl(\sum_{n=1}^{\infty }f_{n}^{2} \Biggr)^{1/2} \Biggr] \\& \quad = 8 \Biggl[ \Biggl(\sum_{n=1}^{\infty }\lambda _{n}^{2}e^{4\lambda _{n} T_{1}}\psi ^{2}_{1,n} \Biggr)^{1/2} \Vert \Delta \psi _{1} \Vert _{L^{2}}+ \Biggl(\sum_{n=1}^{\infty } e^{2\lambda _{n} T_{1}} f^{2}_{n} \Biggr)^{1/2} \Vert f \Vert _{L^{2}} \Biggr]. \end{aligned}$$

On the other hand, (18) yields

$$ \begin{aligned}[b] \lambda _{n}^{2}e^{4\lambda _{n} T_{1}}\psi ^{2}_{1,n}&=\lambda _{n}^{2}e ^{4\lambda _{n} T_{1}}\biggl(e^{-\lambda _{n} T_{1}}\phi _{n}+\frac{1}{\lambda _{n}} \bigl(1-e^{-\lambda _{n} T_{1}}\bigr)f_{n}\biggr)^{2} \\ &\le 8 \bigl(e^{2\lambda _{n}T_{1}} \lambda _{n}^{2}\phi _{n}^{2}+e^{4\lambda _{n}T_{1}}f_{n}^{2} \bigr). \end{aligned} $$

Because \(C^{2+\alpha }(\bar{\varOmega })\subset H^{2}(\varOmega )\) and \(C^{\alpha }(\bar{\varOmega })\subset L^{2}(\varOmega )\), by means of the result in (22), we can change (21) into

$$\begin{aligned} \bigl\Vert \Delta u(\cdot ,0) \bigr\Vert ^{2}_{L^{2}(\varOmega )} \leq & C_{1} \Biggl(\sum_{n=1}^{\infty }e^{2\lambda _{n}T_{1}} \lambda _{n}^{2}\phi _{n} ^{2}+\sum _{n=1}^{\infty }e^{4\lambda _{n}T_{1}}f_{n}^{2} \Biggr) ^{1/2} \vert \psi _{1} \vert _{\bar{\varOmega }}^{2+\alpha } \\ &{}+ C_{1} \Biggl(\sum_{n=1}^{\infty } e^{2\lambda _{n} T_{1}} f ^{2}_{n} \Biggr)^{1/2} \vert f \vert _{\bar{\varOmega }}^{\alpha }. \end{aligned}$$

Since \(u(\cdot ,0)\vert _{\partial \varOmega }=0\), by (23) and a \(W^{2,2}\)-estimate of the solution to the Poisson equation, we have

$$\begin{aligned} \bigl\Vert u(\cdot ,0) \bigr\Vert ^{2}_{H^{2}(\varOmega )} \le & C \bigl\Vert \Delta u(\cdot ,0) \bigr\Vert ^{2}_{L^{2}(\varOmega )} \\ \le & C_{2} \Biggl(\sum_{n=1}^{\infty }e^{2\lambda _{n}T_{1}} \lambda _{n}^{2}\phi _{n}^{2}+\sum _{n=1}^{\infty }e^{4\lambda _{n}T_{1}}f_{n}^{2} \Biggr)^{1/2} \vert \psi _{1} \vert _{\bar{\varOmega }}^{2+\alpha } \\ &{}+ C_{2} \Biggl(\sum_{n=1}^{\infty } e^{2\lambda _{n} T_{1}} f ^{2}_{n} \Biggr)^{1/2} \vert f \vert _{\bar{\varOmega }}^{\alpha }, \end{aligned}$$

where \(C_{2}=CC_{1}\).

Now we introduce the following norms of ϕ and f:

$$ \Vert \phi \Vert ^{2}_{F_{1}}:=\sum _{n=1}^{\infty }\langle \phi ,X_{n} \rangle ^{2} \lambda _{n}^{2} e^{2\lambda _{n} T_{1}}, \quad \quad \Vert f \Vert ^{2}_{F _{2}}:=\sum _{n=1}^{\infty }\langle f,X_{n} \rangle ^{2} e^{4\lambda _{n} T_{1}}, $$

and define the following admissible set of ϕ and f:

$$ \mathbb{P}_{1}=\bigl\{ \phi \in C^{2+\alpha }(\bar{\varOmega }): \Vert \phi \Vert _{F _{1}}\leq M_{1}\bigr\} , \quad\quad \mathbb{P}_{2}=\bigl\{ f\in C^{2+\alpha }(\bar{ \varOmega }): \Vert f \Vert _{F_{2}}\leq M_{2}\bigr\} , $$

where \(M_{1},M_{2}>0\).

From the above analysis, by the linearity of the inverse problem and the estimates (14) and (24), we have the following theorem.

Theorem 2.2

Suppose \(\partial \varOmega \in C^{2+\alpha }\). For \(\phi ^{i}\in \mathbb{P}_{1}\), \(f^{i}\in \mathbb{P}_{2}\) (\(i=1,2\)), denote

$$ \psi _{1}^{i}(x)=u\bigl[\phi ^{i},f^{i} \bigr](x,T_{1}), \quad\quad \psi _{2}^{i}(x)=u\bigl[\phi ^{i},f^{i}\bigr](x,T_{2}), $$

where \(u[\phi ^{i},f^{i}](x,t)\) are the solutions to (1)–(3) corresponding to \(\phi ^{i}\) and \(f^{i}\). Then the solution to the inverse problem satisfies the following stability estimates:

$$\begin{aligned}& \bigl\vert f^{1}-f^{2} \bigr\vert ^{\alpha }_{\bar{\varOmega }}\leq C\bigl( \bigl\vert \psi _{1}^{1}- \psi _{1} ^{2} \bigr\vert ^{2+\alpha }_{\bar{\varOmega }}+ \bigl\vert \psi _{2}^{1}-\psi _{2}^{2} \bigr\vert ^{2+ \alpha }_{\bar{\varOmega }}\bigr), \end{aligned}$$
$$\begin{aligned}& \bigl\Vert \phi ^{1}-\phi ^{2} \bigr\Vert ^{2}_{H^{2}(\varOmega )}\leq C \bigl( \bigl\vert \psi _{1}^{1}- \psi _{1}^{2} \bigr\vert ^{2+\alpha }_{\bar{\varOmega }}+ \bigl\vert \psi _{2}^{1}-\psi _{2}^{2} \bigr\vert ^{2+ \alpha }_{\bar{\varOmega }} \bigr), \end{aligned}$$

where C depends on \(M_{1}\) and \(M_{2}\).

3 Iteration scheme for solving the optimization problem

We reformulate the inverse problem (1)–(5) as the following optimization problem:

Find \((\phi ^{*},f^{*})\) such that

$$ J\bigl(\phi ^{*},f^{*}\bigr)=\min _{\phi \in L^{2}(\varOmega ),f\in L^{2}(\varOmega )} J(\phi ,f), $$

where the cost functional is

$$ J(\phi ,f):=J_{1}(\phi ,f)+J_{2}(\phi )+J_{3}(f), $$
where


$$\begin{aligned}& J_{1}(\phi ,f)=\frac{1}{2} \int _{\varOmega } \bigl(\bigl(u(x,T_{1})-\psi _{1}(x)\bigr)^{2}+\bigl(u(x,T _{2})-\psi _{2}(x)\bigr)^{2} \bigr)\,dx, \end{aligned}$$
$$\begin{aligned}& J_{2}(\phi )=\frac{\gamma ^{2}}{2} \int _{\varOmega }\bigl(\phi -\phi ^{\eta }\bigr)^{2} \,dx, \quad\quad J_{3}(f)=\frac{\gamma ^{2}}{2} \int _{\varOmega }\bigl(f-f^{\eta }\bigr)^{2}\,dx. \end{aligned}$$

Here \(u(x,t)\) is the solution to problem (1)–(3) corresponding to \((\phi ,f)\); \(\gamma >0\) is the regularization parameter; \(\phi ^{\eta }\) and \(f^{\eta }\) represent prior information on ϕ and f, respectively. Similar to the discussion in [15], we have the following result for the optimization problem:

Lemma 3.1

For any \(\gamma >0\), \(J(\phi ,f)\) has a unique pair of minimizers \((\phi ^{*},f^{*})\) in \(L^{2}(\varOmega )\times L^{2}(\varOmega )\).

Taking \(\varOmega \subset \mathbb{R}^{2}\), we now propose an alternating iteration scheme for minimizing \(J(\phi ,f)\), based on the variational adjoint method, to generate the approximate solution to our inverse problem. The crucial steps are the derivation of the gradient of the cost functional and the construction of an efficient gradient-based iteration scheme.

3.1 Derivation of the gradient of functional

We first derive the tangent linear model. Denote by \(u(x,t)\), \(\tilde{u}(x,t)\), and \(\breve{u}(x,t)\) the solutions to (1)–(3) corresponding to \((\phi ,f)\), \((\phi ,f+ \alpha \hat{f})\), and \((\phi +\beta \hat{\phi },f)\), respectively. Set


$$ \hat{u}=\lim_{\alpha \rightarrow 0}\frac{\tilde{u}-u}{\alpha }+ \lim _{\beta \rightarrow 0}\frac{\breve{u}-u}{\beta }, $$

then by direct calculation for û we have

$$\begin{aligned} \textstyle\begin{cases} \hat{u}_{t}=\Delta \hat{u}+\hat{f}, &x\in \varOmega ,t>0, \\ \hat{u}=0, &x\in \partial \varOmega , t\ge 0, \\ \hat{u}(x,0)=\hat{\phi }, &x\in \varOmega , \end{cases}\displaystyle \end{aligned}$$

System (31) is called the tangent linear model of (1)–(3).

Now we calculate the Gâteaux derivative of the cost functional \(J(\cdot ,\cdot )\). The Gâteaux derivative is defined as

$$\begin{aligned} (\nabla _{f}J,\hat{f})+(\nabla _{\phi }J,\hat{ \phi }):=\lim_{\alpha \rightarrow 0}\frac{J(\phi ,f+\alpha \hat{f})-J( \phi ,f)}{\alpha } +\lim_{\beta \rightarrow 0} \frac{J(\phi + \beta \hat{\phi },f)-J(\phi ,f)}{\beta } . \end{aligned}$$

So we calculate directly the two limits on the right-hand side of (32), and by means of the conditions of the problem (1)–(3), we have

$$\begin{aligned} (\nabla _{\phi }J,\hat{\phi })+(\nabla _{f}J,\hat{f}) &= \int _{\varOmega } \hat{u}\vert _{t=T_{1}}\bigl(u(x,T_{1})- \psi _{1}(x)\bigr)\,dx+ \int _{\varOmega }\hat{u}\vert _{t=T _{2}}\bigl(u(x,T_{2})- \psi _{2}(x)\bigr)\,dx \\ & \quad {}+ \gamma ^{2} \int _{\varOmega }\hat{\phi }\bigl(\phi -\phi ^{\eta }\bigr) \,dx+ \gamma ^{2} \int _{\varOmega }\hat{f} \bigl(f-f^{\eta }\bigr) \,dx, \end{aligned}$$

where û satisfies the tangent linear model (31).

Secondly, define the adjoint systems:

$$\begin{aligned}& \textstyle\begin{cases} -(p_{1})_{t}-\Delta p_{1}=0, &x\in \varOmega ,0< t< T_{1}, \\ p_{1}(x,t)\vert _{\partial \varOmega }=0, &t>0, \\ p_{1}(x,T_{1})=u(x,T_{1})-\psi _{1}(x), &x\in \varOmega , \end{cases}\displaystyle \end{aligned}$$
$$\begin{aligned}& \textstyle\begin{cases} -(p_{2})_{t}-\Delta p_{2}=0, &x\in \varOmega ,0< t< T_{2}, \\ p_{2}(x,t)\vert _{\partial \varOmega }=0, &t>0, \\ p_{2}(x,T_{2})=u(x,T_{2})-\psi _{2}(x), &x\in \varOmega . \end{cases}\displaystyle \end{aligned}$$

Multiplying both sides of (31) by the adjoint function \(p_{1}(x,t)\) and integrating over \(\varOmega \times [0,T_{1}]\), and likewise multiplying by \(p_{2}(x,t)\) and integrating over \(\varOmega \times [0,T_{2}]\), we are led, by means of the initial and boundary conditions of (31), to (36)–(37) as follows:

$$\begin{aligned}& \int _{\varOmega }p_{1}\hat{u}\vert _{0}^{T_{1}} \,dx- \int _{0}^{T_{1}} \int _{\varOmega }\hat{u}\frac{\partial {p_{1}}}{\partial {t}}\,dx\,dt \\& \quad= \int _{0}^{T_{1}} \int _{\varOmega }\hat{u}\Delta p_{1} \,dx \,dt + \int _{0} ^{T_{1}} \int _{\partial \varOmega }p_{1}\frac{\partial \hat{u}}{\partial \nu } \,ds\,dt + \int _{0}^{T_{1}} \int _{\varOmega }p_{1}\hat{f} \,dx\,dt, \end{aligned}$$
$$\begin{aligned}& \int _{\varOmega }p_{2}\hat{u}\vert _{0}^{T_{2}} \,dx- \int _{0}^{T_{2}} \int _{\varOmega }\hat{u}\frac{\partial {p_{2}}}{\partial {t}}\,dx\,dt \\& \quad = \int _{0}^{T_{2}} \int _{\varOmega }\hat{u}\Delta p_{2} \,dx \,dt + \int _{0} ^{T_{2}} \int _{\partial \varOmega }p_{2}\frac{\partial \hat{u}}{\partial \nu } \,ds \,dt+ \int _{0}^{T_{2}} \int _{\varOmega }p_{2}\hat{f} \,dx\,dt. \end{aligned}$$

Due to (34)–(35), (36) and (37) yield

$$\begin{aligned}& \begin{aligned}[b] \int _{\varOmega }\bigl(\bigl(u(x,T_{1})-\psi _{1}(x)\bigr)\hat{u}(x,T_{1})-p_{1}(x,0) \hat{\phi }(x)\bigr)\,dx&= \int _{\varOmega }p_{1}\hat{u}\vert _{0}^{T_{1}} \,dx \\&= \int _{0}^{T_{1}} \int _{\varOmega }p_{1}\hat{f} \,dx\,dt, \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \int _{\varOmega }\bigl(\bigl(u(x,T_{2})-\psi _{2}(x)\bigr)\hat{u}(x,T_{2})-p_{2}(x,0) \hat{\phi }(x)\bigr)\,dx&= \int _{\varOmega }p_{2}\hat{u}\vert _{0}^{T_{2}} \,dx\\&= \int _{0} ^{T_{2}} \int _{\varOmega }p_{2}\hat{f} \,dx\,dt. \end{aligned} \end{aligned}$$

Adding (38) and (39), (33) becomes

$$\begin{aligned} (\nabla _{\phi }J,\hat{\phi })+(\nabla _{f}J,\hat{f}) =& \int _{\varOmega }p _{1}(x,0)\hat{\phi }(x)\,dx+ \int _{0}^{T_{1}} \int _{\varOmega }p_{1}\hat{f} \,dx\,dt \\ &{}+ \int _{\varOmega }p_{2}(x,0)\hat{\phi }(x)\,dx+ \int _{0}^{T_{2}} \int _{\varOmega }p_{2}\hat{f} \,dx\,dt \\ &{}+ \gamma ^{2} \int _{\varOmega }\hat{\phi }\bigl(\phi -\phi ^{\eta }\bigr) \,dx+ \gamma ^{2} \int _{\varOmega }\hat{f} \bigl(f-f^{\eta }\bigr) \,dx. \end{aligned}$$

Since ϕ̂ and f̂ are independent and arbitrary, the gradients of \(J(\phi ,f)\) at \((\phi ,f)\) along ϕ̂ and f̂ can be obtained as follows:

$$\begin{aligned}& \nabla _{\phi }J=p_{1}(x,0)+p_{2}(x,0)+ \gamma ^{2} \bigl(\phi -\phi ^{\eta }\bigr), \end{aligned}$$
$$\begin{aligned}& \nabla _{f} J= \int _{0}^{T_{1}} p_{1}(x,t)\,dt+ \int _{0}^{T_{2}} p_{2}(x,t)\,dt+ \gamma ^{2} \bigl(f-f^{\eta }\bigr). \end{aligned}$$
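To make the adjoint formulas concrete, the following one-dimensional spectral sketch (our illustration; all names are ours) expresses the forward, adjoint, and gradient computations in Fourier coefficients: \(u_{n}(T)=e^{-\lambda _{n}T}\phi _{n}+\lambda _{n}^{-1}(1-e^{-\lambda _{n}T})f_{n}\), \(p_{i,n}(0)=e^{-\lambda _{n}T_{i}}(u_{n}(T_{i})-\psi _{i,n})\), and \(\int _{0}^{T_{i}}p_{i,n}\,dt=h_{n}^{i}(u_{n}(T_{i})-\psi _{i,n})\). A central finite-difference check confirms (41)–(42) in this setting:

```python
import numpy as np

# One-dimensional spectral sketch (our illustration) of the gradients
# (41)-(42). In Fourier coefficients: u_n(T) = e^{-lam T} phi_n + h_n(T) f_n,
# p_{i,n}(0) = e^{-lam T_i} r_{i,n}, int_0^{T_i} p_{i,n} dt = h_n^i r_{i,n},
# where r_{i,n} = u_n(T_i) - psi_{i,n} and h_n(T) = (1 - e^{-lam T}) / lam.
lam = np.arange(1.0, 11.0) ** 2          # Dirichlet eigenvalues n^2 on (0, pi)
T1, T2, gamma = 0.5, 1.0, 1e-2
rng = np.random.default_rng(0)
psi1, psi2 = rng.standard_normal(10), rng.standard_normal(10)  # data coeffs
phi_eta = f_eta = np.zeros(10)           # zero prior, as in Example 1

def uT(phi, f, T):
    return np.exp(-lam * T) * phi + (1 - np.exp(-lam * T)) / lam * f

def J(phi, f):
    r1, r2 = uT(phi, f, T1) - psi1, uT(phi, f, T2) - psi2
    return 0.5 * (r1 @ r1 + r2 @ r2) + 0.5 * gamma**2 * (
        (phi - phi_eta) @ (phi - phi_eta) + (f - f_eta) @ (f - f_eta))

def grad(phi, f):
    r1, r2 = uT(phi, f, T1) - psi1, uT(phi, f, T2) - psi2
    g_phi = (np.exp(-lam * T1) * r1 + np.exp(-lam * T2) * r2
             + gamma**2 * (phi - phi_eta))                      # formula (41)
    g_f = ((1 - np.exp(-lam * T1)) / lam * r1
           + (1 - np.exp(-lam * T2)) / lam * r2
           + gamma**2 * (f - f_eta))                            # formula (42)
    return g_phi, g_f

# central finite-difference check of the adjoint gradient
phi0, f0 = rng.standard_normal(10), rng.standard_normal(10)
dphi, df = rng.standard_normal(10), rng.standard_normal(10)
g_phi, g_f = grad(phi0, f0)
eps = 1e-6
fd = (J(phi0 + eps * dphi, f0 + eps * df)
      - J(phi0 - eps * dphi, f0 - eps * df)) / (2 * eps)
print(np.isclose(fd, g_phi @ dphi + g_f @ df))   # -> True
```

Because J is quadratic in \((\phi ,f)\), the central difference agrees with the adjoint gradient up to roundoff; in the general PDE setting the same check can be run against numerical solutions of (1)–(3) and (34)–(35).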

3.2 Iteration algorithm

Starting from the initial guess \(\phi _{0}(x)\) and \(f_{0}(x)\), we construct the following iteration scheme:

$$\begin{aligned}& \phi _{n+1}(x)=\phi _{n}(x)+r^{1}_{n}D^{1}_{n}, \quad n=0,1,2,\ldots, \end{aligned}$$
$$\begin{aligned}& f_{n+1}(x)=f_{n}(x)+r^{2}_{n}D^{2}_{n}, \quad n=0,1,2,\ldots, \end{aligned}$$

where \(r^{i}_{n}>0\) (\(i=1,2\)) are the step sizes selected by the Wolfe line search [16], and \(D^{1}_{n}\), \(D^{2}_{n}\) are the search directions. The negative gradient direction is selected as the first search direction for \(D^{1}_{n}\), \(D^{2}_{n}\). For the succeeding iterations, two kinds of conjugate-direction updates are used for the initial temperature inversion and the heat source identification, respectively. An alternating iteration scheme is constructed to improve the computational efficiency.

According to the iterative scheme (42)–(43), we design the following alternating iteration algorithm for computing \((\phi _{n},f_{n})\):

Fix the maximum number of iterations \(N_{\max }\), the error level ε, and the maximum correction step length \(r_{\max }>0\).

Step 1::

Given the initial guess \(\phi _{0}\), \(f_{0}\) and initial search step \(r_{0}^{1},r_{0}^{2}\in (0, r_{\max })\), calculate

$$\begin{aligned}& D^{1}_{0}=-g_{0}:=-\nabla _{\phi }J\vert _{(\phi _{0}, f_{0})}, \qquad \phi _{1}(x)=\phi _{0}(x)+r^{1}_{0}D^{1}_{0}(x), \\& D^{2}_{0}=-s_{0}:=-\nabla _{f}J\vert _{(\phi _{1},f_{0})}, \qquad f_{1}(x)=f_{0}(x)+r^{2}_{0}D^{2}_{0}(x). \end{aligned}$$

Repeat for \(n=2,3,4,\ldots \) .

Step 2::

Calculate the gradients of J with respect to ϕ and f:

$$\begin{aligned}& g_{n-2}=\nabla _{\phi }J\vert _{(\phi _{n-2}, f_{n-2})}, \qquad g_{n-1}=\nabla _{\phi }J\vert _{(\phi _{n-1},f_{n-2})}, \\& s_{n-2}=\nabla _{f}J\vert _{(\phi _{n-1},f_{n-2})}, \qquad s_{n-1}=\nabla _{f}J\vert _{(\phi _{n-1}, f_{n-1})}. \end{aligned}$$

Here, for given \((\phi _{i}, f_{j})\), we first need to solve the direct problem (1)–(3) and get the solution \(u[\phi _{i}, f _{j}]\), and the adjoint problem (34)–(35) is then also solved to obtain \(p_{1}[\phi _{i}, f_{j}]\), \(p_{2}[\phi _{i}, f_{j}]\), where

$$ (i,j)=(n-2,n-2), (n-1,n-2), (n-1,n-1). $$
Step 3::

Calculate the correction factors \(\xi _{n-1}\), \(\zeta _{n-1}\) for the nth search direction as

$$ \xi _{n-1}=\frac{ \Vert g_{n-1} \Vert }{ \Vert g_{n-2} \Vert }, \qquad \zeta _{n-1}= \frac{ \Vert s_{n-1}^{T}(s_{n-1}-s_{n-2}) \Vert }{ \Vert s_{n-2}^{T}(s _{n-1}-s_{n-2}) \Vert }. $$
Step 4::

Modify the negative gradient direction and get the new iteration direction of \(\phi _{n}\), \(f_{n}\):

$$ D^{1}_{n-1}=-g_{n-1}+\xi _{n-1}D^{1}_{n-2}, \quad\quad D^{2}_{n-1}=-s_{n-1}+\zeta _{n-1}D^{2}_{n-2}. $$
Step 5::

Calculate the step sizes \((r^{1}_{n-1},r^{2}_{n-1})\) for updating \((\phi _{n-1},f_{n-1})\) by the Wolfe line search.

Step 6::
Update


$$ \phi _{n}(x)=\phi _{n-1}(x)+r^{1}_{n-1}D^{1}_{n-1}(x), \qquad f_{n}(x)=f_{n-1}(x)+r^{2}_{n-1}D^{2}_{n-1}(x); $$
Step 7::

Check whether \(\max {\{ \Vert g_{n} \Vert , \Vert s_{n} \Vert \}} <\varepsilon \) or \(n>N_{\max }\). If so, stop the iteration and output \(\phi _{n}\), \(f_{n}\); otherwise, set \(n+1\Rightarrow n\) and return to Step 2.
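Steps 1–7 can be sketched on a toy quadratic functional. In the illustration below all data are invented, \(J(\phi ,f)=\frac{1}{2}\Vert A\phi +Bf-b\Vert ^{2}\) stands in for the discretized cost functional, the Wolfe search of Step 5 is replaced by simple Armijo backtracking, and a descent-direction safeguard is added (both are our simplifications, not the paper's algorithm):

```python
import numpy as np

# Toy sketch of Steps 1-7 for J(phi, f) = 0.5*||A phi + B f - b||^2.
# Invented data; Armijo backtracking replaces the Wolfe line search.
rng = np.random.default_rng(1)
A, B = rng.standard_normal((20, 8)), rng.standard_normal((20, 8))
b = rng.standard_normal(20)

def J(phi, f):
    r = A @ phi + B @ f - b
    return 0.5 * (r @ r)

def g_phi(phi, f): return A.T @ (A @ phi + B @ f - b)
def g_f(phi, f):   return B.T @ (A @ phi + B @ f - b)

def armijo(J1d, slope, c=1e-4):
    # backtracking stand-in for the Wolfe search of Step 5
    r, J0 = 1.0, J1d(0.0)
    while J1d(r) > J0 + c * r * slope and r > 1e-12:
        r *= 0.5
    return r

phi, f = np.zeros(8), np.zeros(8)
# Step 1: negative gradients, evaluated alternately (f at the updated phi)
g0 = g_phi(phi, f); D1 = -g0
phi = phi + armijo(lambda r: J(phi + r * D1, f), g0 @ D1) * D1
s0 = g_f(phi, f); D2 = -s0
f = f + armijo(lambda r: J(phi, f + r * D2), s0 @ D2) * D2
J_start = J(phi, f)
for n in range(50):                                   # Steps 2-7
    g1 = g_phi(phi, f)
    D1 = -g1 + np.linalg.norm(g1) / np.linalg.norm(g0) * D1   # Steps 3-4 (phi)
    if g1 @ D1 >= 0: D1 = -g1                         # safeguard (ours)
    phi = phi + armijo(lambda r: J(phi + r * D1, f), g1 @ D1) * D1
    s1 = g_f(phi, f)                                  # f-gradient at new phi
    y = s1 - s0
    D2 = -s1 + abs(s1 @ y) / max(abs(s0 @ y), 1e-14) * D2     # Steps 3-4 (f)
    if s1 @ D2 >= 0: D2 = -s1
    f = f + armijo(lambda r: J(phi, f + r * D2), s1 @ D2) * D2
    g0, s0 = g1, s1
print(J(phi, f) < J_start)   # the functional decreases
```

The alternating structure (update ϕ with its direction first, then evaluate the f-gradient at the updated ϕ) mirrors the scheme of Step 2; in the actual algorithm each gradient evaluation requires one forward solve of (1)–(3) and two adjoint solves of (34)–(35).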

4 Numerical implementation

In this section, we give two numerical implementations of the iteration algorithm. All computations were performed in MATLAB 2016a on a personal computer with an Intel Core i5 and 8.00 GB memory. Set \(x=(x_{1},x_{2})\), \(\varOmega =[0,\pi ]\times [0,\pi ]\), and denote the step sizes in \(x_{1}\) and \(x_{2}\) by \(h_{1}=\pi /M_{1}\) and \(h_{2}= \pi /M_{2}\), respectively, where \(M_{1}, M_{2}\in \mathbb{N}\) are the grid parameters with respect to \(x_{1}\), \(x_{2}\). The step size τ in t is chosen so that \(T_{1}=N_{1} \tau \), \(T_{2}=N_{2} \tau \). We emphasize that an alternating iteration scheme is proposed; in Example 1 we compare its numerical results with the solutions obtained by the primary iteration scheme. The measurement times supplying the inversion input data are taken as \(T_{1}=1/2\), \(T_{2}=1\) for both examples, while the noisy measurement data are simulated by

$$ u^{\delta }(x,T_{i})=u(x,T_{i})+\delta \times \operatorname{randn}(x), $$

where \(\operatorname{randn}(x)\) denotes random numbers drawn from the standard normal distribution with mean 0 and standard deviation 1. Here the direct problem (1)–(3) and the adjoint systems (34) and (35) are solved by the D'Yakonov ADI scheme. The errors of \(\phi (x)\) and \(f(x)\) are measured in the \(L^{2}\) norm:

$$ \operatorname{err}\bigl(\phi _{n}^{\delta },\phi ^{*}\bigr)= \bigl\Vert \phi _{n}^{\delta }- \phi ^{*} \bigr\Vert _{L^{2}(\varOmega )},\quad\quad \operatorname{err} \bigl(f_{n}^{\delta },f ^{*}\bigr)= \bigl\Vert f_{n}^{\delta }-f^{*} \bigr\Vert _{L^{2}(\varOmega )}. $$
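On a uniform grid, the \(L^{2}(\varOmega )\) errors (45) can be approximated by a Riemann sum. The sketch below is our own illustration (the "reconstruction" is a made-up perturbation of the exact \(\phi ^{*}\) of Example 1, not an output of the algorithm):

```python
import numpy as np

# Discrete version (our sketch) of the L^2(Omega) errors (45) on a uniform
# grid over Omega = [0, pi] x [0, pi], using Riemann-sum weights h1*h2.
M = 60
h = np.pi / M
x = np.linspace(0.0, np.pi, M + 1)
X1, X2 = np.meshgrid(x, x, indexing="ij")

def err_L2(v, v_star):
    # ||v - v*||_{L^2(Omega)} ~ sqrt(h1 h2 sum (v - v*)^2)
    return np.sqrt(h * h * np.sum((v - v_star) ** 2))

phi_star = np.sin(X1) * np.sin(X2)           # exact phi* of Example 1
phi_rec = phi_star + 0.01 * np.sin(2 * X1)   # made-up "reconstruction"
print(err_L2(phi_star, phi_star), round(err_L2(phi_rec, phi_star), 4))
# -> 0.0 0.0224
```

The second value is close to the continuous norm \(0.01\,\Vert \sin 2x_{1}\Vert _{L^{2}(\varOmega )}=0.01\,\pi /\sqrt{2}\approx 0.0222\), as expected.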

Example 1

The exact solution of (1)–(3) is

$$ u(x_{1},x_{2},t)=\bigl(2-e^{-2t} \bigr)\sin {x_{1}}\sin {x_{2}}, $$

and then \(\phi ^{*}(x_{1},x_{2})=\sin {x_{1}}\sin {x_{2}}\) and \(f^{*}(x_{1},x_{2})=4\sin {x_{1}}\sin {x_{2}}\). According to (46), the noisy data are simulated by

$$ u^{\delta }(x_{1},x_{2},T_{i})= \bigl(2-e^{-2 T_{i}}\bigr)\sin {x_{1}}\sin {x _{2}}+\delta \times \operatorname{rand}(x_{1},x_{2}),\quad i=1,2. $$

In this example, we select \(\phi ^{\eta }=f^{\eta }=0\), divide the domain Ω into \(60 \times 60\) pixels, and take the regularization parameter \(\gamma =10^{-2}\). Set \(\phi _{0}(x_{1},x_{2})=0\), \(f_{0}(x_{1},x_{2})=0\) and consider the input data (47) with \(\delta =0,0.001,0.01,0.02,0.03\), respectively. The inversion results for \(\phi (x_{1},x_{2})\) and \(f(x_{1},x_{2})\) are shown in Figs. 1 and 2. To demonstrate the advantage of the alternating iteration, we also calculate \(J(\phi _{n}^{\delta },f_{n}^{\delta })\), \(\operatorname{err}(\phi _{n}^{\delta },\phi ^{*})\), and \(\operatorname{err}(f_{n}^{\delta },f^{*})\) with \(\delta =0\) by both the alternating iteration and the primary iteration; the results are shown in Table 1. The functional value \(J(\phi _{n}^{\delta },f_{n}^{\delta })\) is reduced faster by the alternating iteration than by the primary iteration after the same number of iterations. Furthermore, the errors \(\operatorname{err}(\phi _{n}^{\delta },\phi ^{*})\) and \(\operatorname{err}(f_{n}^{\delta },f^{*})\) are small after 100 iterations of the alternating iteration scheme, whereas the corresponding errors of the primary iteration scheme decrease slowly. It can be seen that the computational efficiency is greatly improved by the alternating iteration. Figure 3 (first line) shows the numerical performance for noisy input data through the behavior of \(J(\phi _{n}^{\delta },f_{n}^{\delta })\), and the error estimates for \(\phi (x_{1},x_{2})\) and \(f(x_{1},x_{2})\) are shown in Fig. 3 (second line, (left) and (right)).

Figure 1

Reconstructions of Example 1: (first line) exact and approximate solutions \(\phi (x_{1},x_{2})\) with \(\delta =0\); (second line) approximate solutions for \(\delta =0.001,0.01\); (third line) approximate solutions for \(\delta =0.02,0.03\), respectively

Figure 2

Reconstructions of Example 1: (first line) exact and approximate solutions of \(f(x_{1},x_{2})\) with \(\delta =0\); (second line) approximate solutions for \(\delta =0.001,0.01\); (third line) approximate solutions for \(\delta =0.02,0.03\), respectively

Figure 3

Numerical performance of \(J(\phi _{n}^{\delta },f_{n}^{ \delta }) \) (first line), \(\operatorname{err}(\phi _{n}^{\delta },\phi ^{*})\) (second line, (left)), \(\operatorname{err}(f_{n}^{\delta },f ^{*})\) (second line, (right)) with respect to the number of iterations for \(\delta =0, 0.001, 0.01, 0.02\), and 0.03 of Example 1

Table 1 Functional value of \(J(\phi ^{\delta }_{n},f^{\delta }_{n})\) and errors \(\operatorname{err}(\phi ^{\delta }_{n},\phi ^{*})\), \(\operatorname{err}(f^{\delta }_{n},f^{*})\) by the primary iteration scheme; "\(J^{1}\)", "err1" represent the results of the alternating iteration scheme

Example 2

The exact initial temperature and the source field are

$$\begin{aligned}& \phi (x_{1},x_{2})=\sin {2x_{1}}\sin {2x_{2}},\quad x_{1},x_{2}\in [0,\pi ], \\& f(x_{1},x_{2})= \textstyle\begin{cases} 1, & \pi /4 \leq x_{1},x_{2}\leq 3\pi /4, \\ 0,& \text{else}. \end{cases}\displaystyle \end{aligned}$$

Then the exact solution has the series form

$$\begin{aligned} u(x_{1},x_{2},t)=\sum _{n=1}^{\infty }\sum_{m=1}^{\infty } \mathbb{H}_{mn}(t) \sin {mx_{1}}\sin {nx_{2}}, \quad 0\leq x_{1},x_{2} \leq \pi , \end{aligned}$$
where


$$\begin{aligned}& \begin{aligned} \mathbb{H}_{mn}(t)={}&\frac{4e^{-(m^{2}+n^{2})t}}{\pi ^{2}} \biggl( \int _{0}^{t}\biggl( \int _{0}^{\pi } \int _{0}^{\pi }f(x_{1},x_{2})\sin {mx_{1}} \sin {nx_{2}}\,dx_{2}\,dx_{1} \biggr)e^{(m^{2}+n^{2})\tau }\,d\tau \\ &{}+ \int _{0}^{\pi } \int _{0}^{\pi }\phi (x_{1},x_{2}) \sin {mx_{1}}\sin {nx_{2}}\,dx_{2}\,dx_{1} \biggr). \end{aligned} \end{aligned}$$

Set \(N_{1}=N_{2}=30\); then (48) is computed numerically from its first \(N_{1}\times N_{2}\) terms, and (44) yields the noisy inversion input data

$$\begin{aligned} u^{\delta }(x_{1},x_{2},T_{i}) =&\sum_{n=1}^{N_{1}}\sum _{m=1}^{N_{2}}\mathbb{H}_{mn}(T_{i})\sin {mx_{1}}\sin {nx_{2}} + \delta \times \operatorname{rand}(x_{1},x_{2}), \end{aligned}$$

where \(i=1,2\).
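The truncated series can be evaluated without numerical quadrature, since the integrals in \(\mathbb{H}_{mn}\) factorize: for the indicator source, \(\int _{\pi /4}^{3\pi /4}\sin (mx)\,dx=(\cos (m\pi /4)-\cos (3m\pi /4))/m\), while the ϕ-integral vanishes except for the mode \((m,n)=(2,2)\), where it equals \((\pi /2)^{2}\). The following sketch is our own implementation, with \(\mathbb{H}_{mn}\) rewritten in an overflow-free form:

```python
import numpy as np

# Sketch (our implementation) evaluating the truncated series (48) for
# Example 2. For the indicator source, int_{pi/4}^{3pi/4} sin(m x) dx =
# (cos(m pi/4) - cos(3m pi/4)) / m; the phi-integral is nonzero only for
# mode (m, n) = (2, 2), where it equals (pi/2)^2. H_mn is rewritten as
# H_mn(t) = 4/pi^2 * ( F_mn (1 - e^{-lam t})/lam + e^{-lam t} P_mn ),
# which avoids overflow of e^{lam t} for large m, n.
N = 30                                   # truncation level N1 = N2 = 30
m = np.arange(1, N + 1)
I = (np.cos(m * np.pi / 4) - np.cos(3 * m * np.pi / 4)) / m   # source factor
F = np.outer(I, I)                       # double integral of f against modes
P = np.zeros((N, N)); P[1, 1] = (np.pi / 2) ** 2              # phi integral

def u_series(x1, x2, t):
    mm, nn = np.meshgrid(m, m, indexing="ij")
    lam = (mm**2 + nn**2).astype(float)
    H = 4 / np.pi**2 * (F * (1 - np.exp(-lam * t)) / lam + np.exp(-lam * t) * P)
    S1 = np.sin(np.outer(m, np.atleast_1d(x1)))   # sin(m x1), shape (N, npts)
    S2 = np.sin(np.outer(m, np.atleast_1d(x2)))
    return np.einsum("mn,mi,ni->i", H, S1, S2)

# at t = 0 only the (2, 2) mode survives, so the series returns phi exactly
x = np.linspace(0.1, 3.0, 7)
print(np.allclose(u_series(x, x, 0.0), np.sin(2 * x) ** 2))   # -> True
```

Evaluating `u_series(x1, x2, T)` at \(T_{1}\) and \(T_{2}\) and adding the scaled random perturbation then produces the noisy input data (49).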

In this example, we set \(f^{\eta }=\phi ^{\eta }=0.01\) and consider the input data (49) with \(\delta =0,0.001,0.01,0.02\), and 0.03, respectively. The regularization parameter is selected as \(\gamma =10^{-2}\). The initial guesses are \(\phi _{0}(x_{1},x_{2})=0\) and \(f_{0}(x_{1},x_{2})=0\). The inversion results for \(\phi (x_{1},x _{2})\) and \(f(x_{1},x_{2})\) are shown in Figs. 4 and 5. In Fig. 6, we show the numerical performance for noisy input data: the behavior of \(J(\phi _{n}^{\delta },f_{n}^{\delta })\) (first line) and the errors \(\operatorname{err}(\phi _{n}^{\delta },\phi ^{*})\) and \(\operatorname{err}(f_{n}^{\delta },f^{*})\) with respect to the iteration number n and the noise level δ (second line, (left) and (right)).

Figure 4

Reconstructions of Example 2: (first line) exact and approximate solutions of \(\phi (x_{1},x_{2})\) with \(\delta =0\); (second line) approximate solutions for \(\delta =0.001,0.01\); (third line) approximate solutions for \(\delta =0.02,0.03\), respectively

Figure 5

Reconstructions of Example 2: (first line) exact and approximate solutions of \(f(x_{1},x_{2})\) with \(\delta =0\); (second line) approximate solutions for \(\delta =0.001,0.01\); (third line) approximate solutions for \(\delta =0.02,0.03\), respectively

Figure 6

Numerical performances \(J(\phi _{n}^{\delta },f_{n}^{\delta })\) (first line), \(\operatorname{err}(\phi _{n}^{\delta },\phi ^{*})\) (second line, (left)), \(\operatorname{err}(f_{n}^{\delta },f^{*})\) (second line, (right)) with respect to the number of iterations for \(\delta =0, 0.001, 0.01, 0.02\), and 0.03 of Example 2

Remark 4.1

The penalty term \(J_{2}\) can also be replaced by \(\frac{\gamma ^{2}}{2}\int _{0}^{T_{1}}\int _{\varOmega } \vert \nabla u \vert ^{2}\,dx\,dt\), which means physically that the heat flux is bounded. In comparison with the \(J_{2}\) used in this paper, the regularization effect is more pronounced for the inversion of ϕ, and the number of iterations decreases. We have already tested this in one dimension.

5 Conclusion

This paper presents an alternating iteration method for the simultaneous identification of the initial field and the spatial heat source in a heat conduction process from measurement data at two different times. Uniqueness and conditional stability results have been established. In Example 1, we calculated \(J(\phi _{n}^{\delta },f_{n}^{\delta })\), \(\operatorname{err}(\phi _{n}^{\delta },\phi ^{*})\), and \(\operatorname{err}(f_{n}^{\delta },f^{*})\) with \(\delta =0\) by both the alternating and the primary iteration schemes. For 100 iterations in MATLAB 2016a, the alternating iteration scheme takes 3 min, while the primary iteration scheme takes 5 min. As seen from Table 1, the value of the functional \(J(\phi _{n},f_{n})\) is reduced faster by the alternating iteration scheme than by the primary iteration scheme for the same number of iterations.


References

  1. Evans, L.C.: Partial Differential Equations, 2nd edn. Am. Math. Soc., Providence (2010)

  2. Colton, D.: The approximation of solutions to the backwards heat equation in a nonhomogeneous medium. J. Math. Anal. Appl. 72, 418–429 (1979)

  3. Liu, J.J.: Determination of temperature field for backward heat transfer. Bull. Korean Math. Soc. 16, 385–397 (2001)

  4. Liu, J.J.: Numerical solution of forward and backward problem for 2-D heat conduction problem. J. Comput. Appl. Math. 145, 459–482 (2002)

  5. Liu, J.J., Lou, D.J.: On stability and regularization for backward heat equation. Chin. Ann. Math., Ser. B 24(1), 35–44 (2003)

  6. Cheng, J., Liu, J.J.: A quasi Tikhonov regularization for a two-dimensional backward heat problem by a fundamental solution. Inverse Probl. 24, 065012 (2008)

  7. Liu, J.J., Wang, B.X.: Solving the backward heat conduction problem by homotopy analysis method. Appl. Numer. Math. 128, 84–97 (2018)

  8. Yan, L., Yang, F.L., Fu, C.L.: A meshless method for solving an inverse spacewise-dependent heat source problem. J. Comput. Phys. 228(1), 123–136 (2009)

  9. Johansson, B.T., Lesnic, D.: A variational method for identifying a spacewise-dependent heat source. IMA J. Appl. Math. 72, 748–760 (2007)

  10. Wang, Z.W., Wang, H.B., Qiu, S.F.: A new method for numerical differentiation based on direct and inverse problems of partial differential equations. Appl. Math. Lett. 43, 61–67 (2015)

  11. Xiong, X.T., Yan, Y.M.: A direct numerical method for solving inverse heat source problems. J. Phys. Conf. Ser. 290, 012017 (2011)

  12. Johansson, B.T., Lesnic, D.: A procedure for determining a spacewise dependent heat source and the initial temperature. Appl. Anal. 87, 265–276 (2008)

  13. Ye, Q.X., Li, Z.Y.: An Introduction to Reaction–Diffusion Equation. Science Press, Beijing (1994)

  14. Isakov, V.: Inverse Problems for Partial Differential Equations, 2nd edn. Applied Mathematical Sciences, vol. 127. Springer, New York (2006)

  15. Wang, Z.W., Qiu, S.F., Ruan, Z.S., Zhang, W.: A regularized optimization method for identifying the space-dependent source and the initial value simultaneously in a parabolic equation. Comput. Math. Appl. 67, 1345–1357 (2014)

  16. Shi, Z.: On the convergence of memory gradient method with Wolfe line search. Nonlinear Anal. Hybrid Syst. 29, 9–18 (2016)


Acknowledgements

The authors would like to thank the anonymous referees for very helpful suggestions and comments which led to improvement of our original manuscript.


Funding

This work is supported by the Natural Science Foundation of Jiangsu Province of China (No. BK20181482), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No. 18KJD110002) and the Scientific Research Foundation of Huaiyin Normal University (No. 31WBX00).

Author information



All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Bingxian Wang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Wang, B., Yang, B. & Xu, M. Simultaneous identification of initial field and spatial heat source for heat conduction process by optimizations. Adv Differ Equ 2019, 411 (2019).
