We consider the following system of fractional differential equations:
$$ D_{t}^{\alpha_{i}} y_{i}(t) =\sum _{k=1}^{n} a_{ik} y_{k}(t) +f_{i}(t), \quad t>0, i=1,2,\dots,n, $$
(14)
where \(a_{ik} \) are constants, not all zero, \(f_{i}(t)\) are specified functions, \(D_{t}^{\alpha_{i}}\) are the Caputo fractional derivative operators with \(0<\alpha_{i}\leq1\), and \(y_{i}(t)\) are unknown functions with the specified initial values \(y_{i}(0)\).
We suppose that each \(f_{i}(t)\) is locally integrable on the interval \(0< t<+\infty\) and that its Laplace transform exists. Equation (14) may be written in matrix form as
$$\begin{aligned} \mathbf{D} { \mathbf{y}(t)}=\mathbf{A} \mathbf{y}(t)+\mathbf {f}(t), \quad t>0, \end{aligned}$$
(15)
where D is the diagonal matrix of fractional derivative operators
$$\begin{aligned} \mathbf{D}=\operatorname{diag} \bigl( D_{t}^{\alpha_{1}},D_{t}^{\alpha_{2}}, \dots,D_{t}^{\alpha_{n}} \bigr), \end{aligned}$$
(16)
\(\mathbf{A}=(a_{ij})_{n\times n}\) is a non-zero coefficient matrix, \(\mathbf{f}(t)=(f_{1}(t),f_{2}(t),\dots,f_{n}(t))^{T}\), and \(\mathbf{y}=\mathbf{y}(t)=(y_{1}(t),y_{2}(t),\ldots,y_{n}(t))^{T}\).
Applying the Laplace transform to Eq. (14) with respect to t, we obtain
$$ s^{\alpha_{i}} \tilde{y}_{i}(s) -s^{\alpha_{i}-1} {y}_{i}(0) =\sum_{k=1}^{n} a_{ik} \tilde{y}_{k}(s) +\tilde{f}_{i}(s), \quad i=1,2,\dots,n, $$
(17)
where \(\operatorname{Re}(s)>c>0\); the constant c is constrained by the derivation below and can be taken to be the right-hand side of inequality (21). In matrix form, Eq. (17) is
$$ \boldsymbol{\Lambda} \tilde{\mathbf{y}}(s) -s^{-1} \boldsymbol { \Lambda}\mathbf{y}(0)=\mathbf{A} \tilde{\mathbf{y}}(s) +\tilde {\mathbf{f}}(s), $$
(18)
where Λ denotes the diagonal matrix \(\boldsymbol {\Lambda}= \operatorname{diag}(s^{\alpha_{1}},s^{\alpha_{2}},\dots ,s^{\alpha_{n}})\). We rewrite Eq. (18) as
$$ (\boldsymbol{\Lambda} -\mathbf{A}) \tilde{\mathbf{y}}(s) =s^{-1} \boldsymbol{ \Lambda} \mathbf{y}(0)+\tilde{\mathbf{f}}(s). $$
(19)
Left multiplication by the inverse matrix \(\boldsymbol{\Lambda }^{-1}=\operatorname{diag}(s^{-\alpha_{1}},s^{-\alpha_{2}},\dots ,s^{-\alpha _{n}})\) leads to
$$ \bigl(\mathbf{I}-\boldsymbol{\Lambda}^{-1} \mathbf{A}\bigr) \tilde{ \mathbf {y}}(s) =s^{-1} \mathbf{y}(0)+\boldsymbol{\Lambda}^{-1} {\tilde {\mathbf{f}}}(s), $$
(20)
where I is the unit matrix of order n.
We let
$$\begin{aligned} \operatorname{Re}(s)> \max_{1\leq i\leq n} \Biggl(2\sum _{k=1}^{n} \vert a_{ik} \vert \Biggr)^{1/\alpha_{i}}, \end{aligned}$$
(21)
then the matrix \(({\mathbf{I}-\Lambda^{-1} A})\) is invertible. This can be proved as follows.
Indeed, by inequality (21), we have
$$\begin{aligned} \vert s \vert ^{\alpha_{i}} > 2\sum_{k=1}^{n} \vert a_{ik} \vert ,\quad i=1,2,\dots,n, \end{aligned}$$
(22)
that is, the following holds:
$$\begin{aligned} \sum_{k=1}^{n} \bigl\vert s^{-\alpha_{i}} a_{ik} \bigr\vert < \frac{1}{2},\quad i=1,2, \dots,n. \end{aligned}$$
(23)
Thus each diagonal entry of the matrix \(({\mathbf{I}-\Lambda^{-1} A})\) has absolute value greater than \(1/2\), while the sum of the absolute values of the off-diagonal entries in the ith row is less than \(1/2\). By the Gershgorin circle theorem, every eigenvalue lies in a disc centered at a diagonal entry with radius less than \(1/2\), and each such disc excludes the origin; hence the eigenvalues of \(({\mathbf{I}-\Lambda^{-1} A})\) are non-zero and the matrix is invertible.
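This diagonal-dominance argument is easy to check numerically. The sketch below (the matrix A, the orders \(\alpha_{i}\), and the choice of s are illustrative assumptions, not data from the text) picks a real s satisfying inequality (21) and verifies that the Gershgorin discs of \(\mathbf{I}-\boldsymbol{\Lambda}^{-1}\mathbf{A}\) stay away from the origin:

```python
import numpy as np

# Illustrative data (assumptions for this sketch, not from the text)
A = np.array([[1.0, -2.0], [0.5, 3.0]])
alpha = np.array([0.7, 0.9])

# Right-hand side of inequality (21): max_i (2 * sum_k |a_ik|)^(1/alpha_i)
c = np.max((2.0 * np.abs(A).sum(axis=1)) ** (1.0 / alpha))
s = c + 1.0                               # any real s > c has Re(s) > c

M = np.eye(2) - np.diag(s ** -alpha) @ A  # I - Lambda^{-1} A

diag = np.abs(np.diag(M))
row_rest = np.abs(M).sum(axis=1) - diag
# Diagonal entries exceed 1/2 while off-diagonal row sums stay below 1/2,
# so each Gershgorin disc excludes the origin and M is invertible.
assert np.all(diag > 0.5) and np.all(row_rest < 0.5)
print("det(I - Lambda^{-1} A) =", np.linalg.det(M))
```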
From Eq. (20), we solve for \(\tilde{\mathbf{y}}(s)\) as
$$ \tilde{\mathbf{y}}(s) =s^{-1} \bigl({\mathbf{I}-\Lambda^{-1} A}\bigr)^{-1} \mathbf{y}(0)+\bigl({\mathbf{I}-\Lambda^{-1} A} \bigr)^{-1}\boldsymbol{\Lambda }^{-1} \tilde{\mathbf{f}}(s). $$
(24)
The inverse Laplace transform then yields
$$ \mathbf{y}(t) =\mathbf{G}(t) \mathbf{y}(0)+\mathbf{Q}(t)* \mathbf {f}(t), $$
(25)
where we introduce two matrix functions
$$\begin{aligned} &\mathbf{G}(t) = L^{-1} \bigl[s^{-1} \bigl({\mathbf{I}- \Lambda^{-1} A}\bigr)^{-1} \bigr] , \end{aligned}$$
(26)
$$\begin{aligned} &\mathbf{Q}(t)= L^{-1} \bigl[\bigl({\mathbf{I}-\Lambda^{-1} A}\bigr)^{-1}\boldsymbol{\Lambda }^{-1} \bigr], \end{aligned}$$
(27)
and the convolution is defined as
$$\begin{aligned} \mathbf{Q}(t)* \mathbf{f}(t)= \int_{0}^{t} \mathbf{Q}(t-\tau) \mathbf {f}(\tau) \,d\tau. \end{aligned}$$
(28)
First we consider the inverse Laplace transform of \(({\mathbf {I}-\Lambda^{-1} A})^{-1}\). Since inequality (23) bounds each row sum of the moduli of the entries of \(\boldsymbol{\Lambda}^{-1}\mathbf{A}\) by \(1/2\), the Neumann series below converges, and we use the two matrix decompositions
$$\begin{aligned} \bigl({\mathbf{I} -\Lambda^{-1} A}\bigr)^{-1} =&\sum _{k=0}^{\infty}\bigl({\boldsymbol{ \Lambda}^{-1}A}\bigr)^{k} \end{aligned}$$
(29)
and
$$\begin{aligned} {\boldsymbol{\Lambda}^{-1}A}=\sum_{i=1}^{n} s^{-\alpha_{i} } \mathbf{A}_{i}, \end{aligned}$$
(30)
where \(\mathbf{A}_{i}\) denotes the matrix obtained from A by setting every entry outside the ith row to zero, so that the ith rows of A and \(\mathbf{A}_{i}\) are identical. Hence we have the following expansion:
$$\begin{aligned} \bigl({\mathbf{I} -\Lambda^{-1} A}\bigr)^{-1} =&\sum _{k=0}^{\infty}\Biggl(\sum _{i=1}^{n} s^{-\alpha_{i} } \mathbf{A}_{i} \Biggr)^{k} \\ =&\mathbf{I}+\sum_{k=1}^{\infty}\sum _{j_{1},j_{2},\dots,j_{k}=1}^{n} s^{-(\alpha_{j_{1}}+\alpha_{j_{2}}+\cdots +\alpha _{j_{k}}) } \mathbf{A}_{j_{1}} \mathbf{A}_{j_{2}}\dots\mathbf{A}_{j_{k}}. \end{aligned}$$
(31)
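The expansion (31) can be sanity-checked numerically: truncating the Neumann series (29) and the multi-index sum at the same order must give identical results. In the sketch below, the matrix A, the orders, the numeric value of s, and the truncation order are all illustrative assumptions:

```python
import itertools
from functools import reduce

import numpy as np

# Illustrative data (assumptions for this sketch)
A = np.array([[0.3, -0.2], [0.1, 0.4]])
alpha = np.array([0.7, 0.9])
s, n, K = 3.0, 2, 10                      # numeric s, dimension, truncation

# A_i keeps only the ith row of A (Eq. (30))
A_rows = [np.diag(np.eye(n)[i]) @ A for i in range(n)]

# Neumann series of Eq. (29), truncated at order K
LiA = np.diag(s ** -alpha) @ A
neumann = sum(np.linalg.matrix_power(LiA, k) for k in range(K + 1))

# Multi-index expansion of Eq. (31), truncated at the same order
expansion = np.eye(n)
for k in range(1, K + 1):
    for js in itertools.product(range(n), repeat=k):
        coeff = s ** -alpha[list(js)].sum()
        expansion = expansion + coeff * reduce(np.matmul, [A_rows[j] for j in js])

assert np.allclose(neumann, expansion)    # the two truncations agree
```

The agreement is exact term by term, since both sums retain precisely the powers \(k\le K\).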
Calculating the inverse Laplace transform term by term, we have
$$\begin{aligned} L^{-1} \bigl[ \bigl({\mathbf{I} -\Lambda^{-1} A} \bigr)^{-1} \bigr] =&\delta(t)\mathbf{I} \\ & {}+\sum_{k=1}^{\infty}\sum _{j_{1},j_{2},\dots,j_{k}=1}^{n} \frac{t^{\alpha_{j_{1}}+\alpha _{j_{2}}+\cdots +\alpha_{j_{k}}-1 }}{\Gamma(\alpha_{j_{1}}+\alpha_{j_{2}}+\cdots+\alpha_{j_{k}})} \mathbf{A}_{j_{1}} \mathbf{A}_{j_{2}}\dots\mathbf{A}_{j_{k}}, \end{aligned}$$
(32)
where \(\delta(t)\) is the Dirac delta function.
Further, we have the following result for the matrix \(\mathbf{G}(t)\):
$$\begin{aligned} \mathbf{G}(t) =&L^{-1} \bigl[s^{-1} \bigl({\mathbf{I}- \Lambda^{-1} A}\bigr)^{-1} \bigr] \\ =&J_{t}^{1}L^{-1} \bigl[ \bigl({\mathbf{I} - \Lambda^{-1} A}\bigr)^{-1} \bigr] \\ =&\mathbf{I}+ \sum_{k=1}^{\infty}\sum _{j_{1},j_{2},\dots,j_{k}=1}^{n} \frac{t^{\alpha_{j_{1}}+\alpha _{j_{2}}+\cdots +\alpha_{j_{k}} }}{\Gamma(\alpha_{j_{1}}+\alpha_{j_{2}}+\cdots+\alpha_{j_{k}}+1)} \mathbf{A}_{j_{1}} \mathbf{A}_{j_{2}}\dots\mathbf{A}_{j_{k}}. \end{aligned}$$
(33)
We lift the generalized Mittag–Leffler function in Eq. (8) to a matrix function and use it to express the matrix \(\mathbf{G}(t)\) as
$$\begin{aligned} \mathbf{G}(t)=\mathcal{E} _{(\alpha_{1},\alpha_{2},\dots,\alpha _{n}),1}\bigl(t^{\alpha_{1}} \mathbf{A}_{1},t^{\alpha_{2}} \mathbf {A}_{2},\dots ,t^{\alpha_{n}} \mathbf{A}_{n}\bigr). \end{aligned}$$
(34)
To calculate the inverse Laplace transform in Eq. (27), we decompose the matrix \({\boldsymbol{\Lambda}^{-1}}\) as
$$\begin{aligned} {\boldsymbol{\Lambda}^{-1}}=\sum_{i=1}^{n} s^{-\alpha_{i} } \mathbf{I}_{i}, \end{aligned}$$
(35)
where \(\mathbf{I}_{i}\) is formed from the unit matrix I in the same manner as \(\mathbf{A}_{i}\), i.e. all rows except the ith are set to zero. We derive the expression for the matrix \(\mathbf{Q}(t)\) as
$$\begin{aligned} \mathbf{Q}(t) =& L^{-1} \bigl[\bigl({\mathbf{I}-\Lambda^{-1} A}\bigr)^{-1}\boldsymbol{\Lambda }^{-1} \bigr] \\ =& L^{-1} \Biggl[\sum_{i=1}^{n} s^{-\alpha_{i} }\bigl({\mathbf{I}-\Lambda^{-1} A}\bigr)^{-1} \mathbf{I}_{i} \Biggr] \\ =&\sum_{i=1}^{n} J_{t}^{\alpha_{i} }L^{-1} \bigl[\bigl({\mathbf{I}-\Lambda^{-1} A}\bigr)^{-1} \bigr] \mathbf{I}_{i} \\ =& \sum_{i=1}^{n} \Biggl[ \frac{t^{\alpha_{i}-1}}{\Gamma(\alpha_{i})} \mathbf{I}+ \sum_{k=1}^{\infty}\sum_{j_{1},j_{2},\dots,j_{k}=1}^{n} \frac{t^{\alpha _{j_{1}}+\alpha_{j_{2}}+\cdots+\alpha_{j_{k}}+\alpha_{i}-1 }}{\Gamma(\alpha _{j_{1}}+\alpha _{j_{2}}+\cdots+\alpha_{j_{k}}+\alpha_{i})} \mathbf{A}_{j_{1}} \mathbf{A}_{j_{2}} \dots\mathbf{A}_{j_{k}} \Biggr] \mathbf{I}_{i} . \end{aligned}$$
(36)
An equivalent expression is
$$\begin{aligned} \mathbf{Q}(t)= \sum_{i=1}^{n} \frac{t^{\alpha_{i}-1}}{\Gamma(\alpha_{i})} \mathbf{I}_{i}+ \sum_{k=1}^{\infty}\sum_{i=1}^{n} \sum _{j_{1},j_{2},\dots,j_{k}=1}^{n} \frac{t^{\alpha_{j_{1}}+\alpha_{j_{2}}+\cdots+\alpha_{j_{k}}+\alpha_{i}-1 }}{\Gamma (\alpha_{j_{1}}+\alpha_{j_{2}}+\cdots+\alpha_{j_{k}}+\alpha_{i})} \mathbf{A}_{j_{1}} \mathbf{A}_{j_{2}} \dots\mathbf{A}_{j_{k}} \mathbf{I}_{i}. \end{aligned}$$
(37)
In terms of the generalized Mittag–Leffler function of matrix arguments, Eq. (36) gives \(\mathbf{Q}(t)\) the form
$$\begin{aligned} \mathbf{Q}(t) =\sum_{i=1}^{n} t^{\alpha_{i}-1} \mathcal{E} _{(\alpha_{1},\alpha_{2},\dots,\alpha_{n}),\alpha _{i}}\bigl(t^{\alpha_{1}} \mathbf{A}_{1},t^{\alpha_{2}} \mathbf{A}_{2}, \dots,t^{\alpha_{n}} \mathbf {A}_{n}\bigr)\, \mathbf{I}_{i}. \end{aligned}$$
(38)
Thus the analytic solution is obtained from Eqs. (25), (34) and (38) in terms of the generalized Mittag–Leffler function of matrix arguments. In practical computation, we can truncate the series expressions in Eqs. (33) and (37) to obtain analytic approximate solutions:
$$ \mathbf{y}^{[m]}(t) =\mathbf{G}^{[m]}(t) \mathbf{y}(0)+\mathbf {Q}^{[m]}(t)* \mathbf{f}(t), $$
(39)
where \(\mathbf{G}^{[m]}(t)\) and \(\mathbf{Q}^{[m]}(t)\) are the truncations of the series in Eqs. (33) and (37) to the first m terms:
$$\begin{aligned} &\mathbf{G}^{[m]}(t) =\mathbf{I}+ \sum_{k=1}^{m-1} \sum_{j_{1},j_{2},\dots,j_{k}=1}^{n} \frac{t^{\alpha_{j_{1}}+\alpha _{j_{2}}+\cdots +\alpha_{j_{k}} }}{\Gamma(\alpha_{j_{1}}+\alpha_{j_{2}}+\cdots+\alpha_{j_{k}}+1)} \mathbf{A}_{j_{1}} \mathbf{A}_{j_{2}}\dots\mathbf{A}_{j_{k}}, \end{aligned}$$
(40)
$$\begin{aligned} & \begin{aligned}[b] \mathbf{Q}^{[m]}(t)={}& \sum_{i=1}^{n} \frac{t^{\alpha_{i}-1}}{\Gamma(\alpha_{i})} \mathbf{I}_{i} \\ &{}+ \sum_{k=1}^{m-1} \sum _{i=1}^{n} \sum_{j_{1},j_{2},\dots,j_{k}=1}^{n} \frac {t^{\alpha_{j_{1}}+\alpha_{j_{2}}+\cdots+\alpha_{j_{k}}+\alpha_{i}-1 }}{\Gamma(\alpha _{j_{1}}+\alpha_{j_{2}}+\cdots+\alpha_{j_{k}}+\alpha_{i})} \mathbf{A}_{j_{1}} \mathbf{A}_{j_{2}} \dots \mathbf{A}_{j_{k}} \mathbf{I}_{i}. \end{aligned} \end{aligned}$$
(41)
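The truncations (40) and (41) lend themselves to direct implementation. The sketch below is our own illustration (the function name and test values are assumptions): it enumerates the multi-indices \((j_{1},\dots,j_{k})\) by brute force, so it is practical only for small n and m, since the number of terms at level k grows like \(n^{k}\). For the scalar case \(n=1\), \(\alpha=1\), \(A=(1)\), both series reduce to the partial sums of \(e^{t}\), which gives a quick check:

```python
import itertools
from functools import reduce
from math import gamma

import numpy as np

def truncated_GQ(A, alpha, t, m):
    """Truncated series G^[m](t) and Q^[m](t) of Eqs. (40)-(41).

    A: (n, n) coefficient matrix; alpha: sequence of n orders in (0, 1];
    t > 0; m >= 1: number of retained terms.
    """
    n = len(alpha)
    A_rows = [np.diag(np.eye(n)[i]) @ A for i in range(n)]  # A_i of Eq. (30)
    I_rows = [np.diag(np.eye(n)[i]) for i in range(n)]      # I_i of Eq. (35)

    G = np.eye(n)
    Q = sum(t ** (a - 1.0) / gamma(a) * Ii for a, Ii in zip(alpha, I_rows))
    for k in range(1, m):
        for js in itertools.product(range(n), repeat=k):
            asum = sum(alpha[j] for j in js)
            P = reduce(np.matmul, [A_rows[j] for j in js])
            G = G + t ** asum / gamma(asum + 1.0) * P
            for a, Ii in zip(alpha, I_rows):
                Q = Q + t ** (asum + a - 1.0) / gamma(asum + a) * P @ Ii
    return G, Q

# Scalar sanity check: n = 1, alpha = 1, A = (1) gives G(t) = Q(t) = e^t
G, Q = truncated_GQ(np.array([[1.0]]), [1.0], 1.0, 30)
print(G[0, 0], Q[0, 0])   # both approach e = 2.71828...
```

The solution \(\mathbf{y}^{[m]}(t)\) of Eq. (39) then follows by multiplying \(\mathbf{G}^{[m]}(t)\) into \(\mathbf{y}(0)\) and evaluating the convolution (28) with any quadrature rule.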
For the special case \(\alpha_{1}=\alpha_{2}=\cdots=\alpha_{n}=\alpha\), the two matrices \(\mathbf{G}(t)\) and \(\mathbf{Q}(t)\) simplify to
$$\begin{aligned} \mathbf{G}(t) =& \mathcal{E} _{(\alpha,\alpha,\dots,\alpha),1} \bigl(t^{\alpha} \mathbf {A}_{1},t^{\alpha} \mathbf{A}_{2},\dots,t^{\alpha} \mathbf {A}_{n}\bigr) \\ =&{E} _{\alpha,1}\bigl(t^{\alpha} \mathbf{A}_{1}+t^{\alpha} \mathbf {A}_{2}+\cdots+t^{\alpha} \mathbf{A}_{n}\bigr) \\ =&{E} _{\alpha,1}\bigl(t^{\alpha} \mathbf{A}\bigr) \end{aligned}$$
(42)
and
$$\begin{aligned} \mathbf{Q}(t) =&\sum_{j=1}^{n} t^{\alpha-1} \mathcal{E} _{(\alpha,\alpha,\dots,\alpha),\alpha}\bigl(t^{\alpha} \mathbf {A}_{1},t^{\alpha} \mathbf{A}_{2}, \dots,t^{\alpha} \mathbf{A}_{n}\bigr) \mathbf{I}_{j} \\ =& t^{\alpha-1} \mathcal{E} _{(\alpha,\alpha,\dots,\alpha),\alpha}\bigl(t^{\alpha} \mathbf {A}_{1},t^{\alpha} \mathbf{A}_{2}, \dots,t^{\alpha} \mathbf {A}_{n}\bigr)\sum _{j=1}^{n} \mathbf{I}_{j} \\ =& t^{\alpha-1} {E} _{\alpha,\alpha}\bigl(t^{\alpha} \mathbf{A}_{1}+t^{\alpha} \mathbf {A}_{2}+ \cdots+t^{\alpha} \mathbf{A}_{n}\bigr) \\ =& t^{\alpha-1} {E} _{\alpha,\alpha}\bigl(t^{\alpha} \mathbf{A}\bigr). \end{aligned}$$
(43)
These results are consistent with those in [39].
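The reduction in Eqs. (42) and (43) can be illustrated numerically. In the sketch below, the helper `ml_matrix`, the diagonal test matrix, and the time value are our own assumptions; a diagonal A is chosen so that \(e^{t\mathbf{A}}\) is known in closed form without further libraries:

```python
from math import gamma

import numpy as np

def ml_matrix(Z, alpha, beta, K=60):
    """Truncated matrix Mittag-Leffler series: sum_{k<K} Z^k / Gamma(alpha*k + beta)."""
    out, P = np.zeros_like(Z, dtype=float), np.eye(Z.shape[0])
    for k in range(K):
        out = out + P / gamma(alpha * k + beta)
        P = P @ Z
    return out

# Diagonal test matrix, so e^{tA} is elementwise exp on the diagonal (assumed values)
A = np.diag([0.5, -0.3])
t = 0.8

# For alpha = 1, Eq. (42) gives G(t) = E_{1,1}(t A) = e^{tA}
G = ml_matrix(t * A, 1.0, 1.0)
assert np.allclose(G, np.diag(np.exp(t * np.diag(A))))
```

For non-diagonal A and \(0<\alpha<1\) the same truncated series applies; only the closed-form comparison above is specific to \(\alpha=1\) and diagonal A.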