A wide range of theoretical and practical problems arising in mathematics, economics, physics, and engineering can be formulated as a polynomial equation of degree n with arbitrary real or complex coefficients:
$$\begin{aligned} f(x)=x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}= \prod_{j=1}^{{n}}(x-\zeta _{j})=(x-\zeta _{i}) \mathop{\prod _{j=1}}_{j\neq i}^{n}(x-\zeta _{j}), \end{aligned}$$
(1)
where \(\zeta _{1},\ldots,\zeta _{n}\) denote the roots of (1), which may be simple or multiple. Approximating all roots of a nonlinear polynomial equation by simultaneous methods has many applications in science and engineering, since simultaneous iterative methods are less time consuming and are well suited to parallel implementation. Further details about their convergence properties, computational efficiency, and parallel processing may be found in [1–25] and the references cited therein. The main objective of this paper is to develop simultaneous methods which have a higher convergence order and are more efficient than the existing methods. A very high computational efficiency is achieved by using two suitable corrections [26, 27], yielding convergence orders ten and twelve with a minimal number of function evaluations in each step.
1.1 Construction of simultaneous methods for multiple roots
Consider the two-step fourth-order Newton's method [26] for finding multiple roots of the nonlinear equation (1):
$$\begin{aligned} \textstyle\begin{cases} y_{i}=x_{i}-\sigma \frac{f(x_{i})}{f^{{\prime }}(x_{i})}, \\ z_{{i}}=y_{i}-\sigma \frac{f(y_{i})}{f^{{\prime }}(y_{i})},\end{cases}\displaystyle \end{aligned}$$
(2)
where σ is the multiplicity of the exact root, say ζ, of (1). We would like to convert (2) into a simultaneous method for extracting all the distinct as well as multiple roots of (1). We use the third-order method of Dong et al. [26] as a correction to increase the efficiency and convergence order, requiring no additional evaluations of the function:
$$\begin{aligned} \textstyle\begin{cases} v_{i}=x_{i}-\sqrt{\sigma }\frac{f(x_{i})}{f^{{\prime }}(x_{i})}, \\ u_{{i}}=v_{i}-\sigma (1-\frac{1}{\sqrt{\sigma }})^{1-\sigma } \frac{f(v_{i})}{f^{{\prime }}(x_{i})}.\end{cases}\displaystyle \end{aligned}$$
(3)
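As a concrete illustration, the two building blocks (2) and (3) can be sketched numerically as follows. The test polynomial \(f(x)=(x-1)^{3}(x+2)\), the starting points, and the guards against landing exactly on a root are illustrative choices, not taken from the paper; for \(\sigma =1\) the factor \((1-1/\sqrt{\sigma })^{1-\sigma }\) is interpreted as 1.

```python
from math import sqrt

def modified_newton_two_step(f, df, x, sigma, steps=5):
    """Two-step modified Newton scheme (2): each iteration applies the
    multiplicity-weighted Newton correction twice."""
    for _ in range(steps):
        if f(x) == 0.0:                      # guard: exact root reached
            break
        y = x - sigma * f(x) / df(x)
        if f(y) == 0.0:
            return y
        x = y - sigma * f(y) / df(y)
    return x

def dong_correction(f, df, x, sigma):
    """Third-order correction (3) of Dong et al.: both substeps reuse
    f'(x), so each call evaluates f twice and f' once."""
    if f(x) == 0.0:                          # guard: exact root reached
        return x
    v = x - sqrt(sigma) * f(x) / df(x)
    return v - sigma * (1.0 - 1.0 / sqrt(sigma)) ** (1 - sigma) * f(v) / df(x)

# Hypothetical test problem: a root of multiplicity 3 at x = 1.
f  = lambda x: (x - 1.0) ** 3 * (x + 2.0)
df = lambda x: 3.0 * (x - 1.0) ** 2 * (x + 2.0) + (x - 1.0) ** 3

root = modified_newton_two_step(f, df, x=1.5, sigma=3)
u = 1.4
for _ in range(4):
    u = dong_correction(f, df, u, 3)
```

Both routines converge to the triple root at 1 from nearby starting points; the guards matter because \(f\) and \(f^{\prime }\) vanish simultaneously at a multiple root.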
Suppose that the nonlinear polynomial equation (1) has n roots \(x_{1},\ldots,x_{n}\). Then
$$\begin{aligned} f(x)=\prod_{j=1}^{n} ( x-x_{j} ) \quad\text{and}\quad f^{{ \prime }}(x)=\sum_{i=1}^{n} \mathop{\prod_{j=1}}_{j\neq i}^{{n}} ( x-x_{j} ). \end{aligned}$$
(4)
This implies
$$\begin{aligned} \frac{f^{{\prime }}(x)}{f(x)}=\sum_{j=1}^{n}\frac{1}{(x-x_{j})}= \frac{1}{(x-x_{i})}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{1}{(x-x_{j})}. \end{aligned}$$
Writing \(\frac{1}{N_{i}(x)}=\frac{f^{{\prime }}(x)}{f(x)}\) and solving for \(x-x_{i}\), this gives
$$\begin{aligned} x-x_{i}= \frac{1}{\frac{1}{N_{i}(x)}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{1}{(x-x_{j})}}. \end{aligned}$$
(5)
For roots with respective multiplicities \(\sigma _{1},\ldots,\sigma _{n}\), equation (5) takes the form
$$\begin{aligned} x-x_{i}= \frac{\sigma _{i}}{\frac{\sigma _{i}}{N_{i}(x)}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x-x_{j})}}. \end{aligned}$$
(6)
Evaluating (6) at the current approximation \(x=x_{i}\) and replacing the unknown roots \(x_{j}\) by the approximations \(x_{j}^{\ast }\), the multiplicity-weighted Newton correction is replaced by
$$\begin{aligned} \sigma _{i}\frac{f(x_{i})}{f^{{\prime }}(x_{i})}\approx \frac{\sigma _{i}}{\frac{\sigma _{i}}{N_{i}(x_{i})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x_{i}-x_{j}^{\ast })}}, \end{aligned}$$
(7)
where
$$\begin{aligned} x_{j}^{\ast }=u_{j} \quad\bigl(\text{using } \text{(3)}\bigr). \end{aligned}$$
Using (7) in the first step of (2), we have
$$\begin{aligned} \textstyle\begin{cases} y_{i}^{(k)}=x_{i}^{(k)}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{N_{i}(x_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x_{i}^{(k)}-x_{j}^{{*(k)} })}}, \quad k=0,1,\ldots, \\ z_{{i}}^{(k)}=y_{i}^{(k)}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{N_{i}(y_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(y_{i}^{(k)}-y_{j}^{(k)})}}.\end{cases}\displaystyle \end{aligned}$$
(8)
Thus we have constructed a new simultaneous method (8), abbreviated as MNS10M, for extracting all distinct as well as multiple roots of polynomial equation (1).
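A minimal sketch of one sweep of method (8), with the corrections \(x_{j}^{\ast }=u_{j}\) computed from (3), might look as follows. The guards for components that have already converged, the cubic test polynomial with simple roots, and the starting values are illustrative assumptions, not part of the paper.

```python
from math import sqrt

def mns10m_step(f, df, xs, sigmas):
    """One sweep of the simultaneous method (8) (MNS10M)."""
    n = len(xs)
    # Corrections x_j* = u_j from (3); for sigma = 1 the factor
    # (1 - 1/sqrt(sigma))**(1 - sigma) evaluates to 0.0**0 == 1.0.
    us = []
    for x, s in zip(xs, sigmas):
        if f(x) == 0.0:                      # component already at a root
            us.append(x)
            continue
        v = x - sqrt(s) * f(x) / df(x)
        us.append(v - s * (1.0 - 1.0 / sqrt(s)) ** (1 - s) * f(v) / df(x))
    # First step of (8): denominator sigma_i/N_i(x_i) - sum sigma_j/(x_i - u_j).
    ys = []
    for i, (x, s) in enumerate(zip(xs, sigmas)):
        if f(x) == 0.0:
            ys.append(x)
            continue
        denom = s * df(x) / f(x) - sum(sigmas[j] / (x - us[j])
                                       for j in range(n) if j != i)
        ys.append(x - s / denom)
    # Second step of (8), built on the first-step approximations y_j.
    zs = []
    for i, (y, s) in enumerate(zip(ys, sigmas)):
        if f(y) == 0.0:
            zs.append(y)
            continue
        denom = s * df(y) / f(y) - sum(sigmas[j] / (y - ys[j])
                                       for j in range(n) if j != i)
        zs.append(y - s / denom)
    return zs

# Hypothetical test problem with simple roots 1, 2, 3.
f  = lambda x: (x - 1.0) * (x - 2.0) * (x - 3.0)
df = lambda x: 3.0 * x * x - 12.0 * x + 11.0
xs = [0.8, 2.2, 3.3]
for _ in range(2):
    xs = mns10m_step(f, df, xs, [1, 1, 1])
```

All components are updated from the same current vector, so the sweep is straightforward to parallelize, which is the practical appeal of simultaneous methods noted above.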
1.2 Convergence analysis
In this section, the convergence analysis of the family of two-step simultaneous methods (8) is presented in the form of the following theorem.
Theorem 1
Let \(\zeta _{1},\ldots,\zeta _{n}\) be the roots of (1) with respective multiplicities \(\sigma _{1},\ldots,\sigma _{n}\). If the initial approximations \(x_{1}^{(0)},\ldots, x_{n}^{(0)}\) are sufficiently close to the actual roots, then the order of convergence of method (8) equals ten.
Proof
Let \(\epsilon _{i}=x_{i}-\zeta _{i},\epsilon _{i}^{\prime }=y_{i}-\zeta _{i} \), and \(\epsilon _{i}^{{\prime \prime }}=z_{i}-\zeta _{i}\) be the errors in \(x_{i}\), \(y_{i}\), and \(z_{i}\) approximations respectively. Consider the first step of (8), which is
$$\begin{aligned} y_{i}=x_{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{N(x_{i})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x_{i}-x_{j}^{\ast })}}, \end{aligned}$$
where \(N(x_{i})=\frac{f(x_{i})}{f^{\prime }(x_{i})}\). Then, obviously, for distinct roots, we have
$$\begin{aligned} \frac{1}{N(x_{i})}=\frac{f^{\prime }(x_{i})}{f(x_{i})}=\sum_{j=1}^{n} \frac{1}{(x_{i}-\zeta _{j})}=\frac{1}{(x_{i}-\zeta _{i})}+\sum_{ \overset{j=1}{j\neq i}}^{n} \frac{1}{(x_{i}-\zeta _{j})}. \end{aligned}$$
Thus, for multiple roots, we have from (8)
$$\begin{aligned} &y_{i}=x_{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{(x_{i}-\zeta _{i})}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x_{i}-\zeta _{j})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x_{i}-x_{{j}}^{\ast })}}, \\ &y_{i}-\zeta _{i}=x_{i}-\zeta _{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{(x_{i}-\zeta _{i})}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}(\zeta _{j}-x_{j}^{\ast })}{(x_{i}-\zeta _{j})(x_{i}-x_{{j}}^{\ast })}}, \\ &\epsilon _{i}^{\prime }=\epsilon _{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{\epsilon _{i}}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{-\sigma _{j}(x_{j}^{\ast }-\zeta _{j})}{(x_{i}-\zeta _{j})(x_{i}-x_{j}^{\ast })}} =\epsilon _{i}- \frac{\sigma _{i}\epsilon _{i}}{\sigma _{i}+\epsilon _{i}\sum_{\overset{j=1}{j\neq i}}^{n}E_{ij}\epsilon _{j}^{\ast }}, \end{aligned}$$
where \(\epsilon _{j}^{\ast }=x_{j}^{\ast }-\zeta _{j}=O(\epsilon _{j}^{3})\) by [26] and \(E_{ij}= \frac{-\sigma _{j}}{(x_{i}-\zeta _{j})(x_{i}-x_{j}^{\ast })}\).
Thus
$$\begin{aligned} \epsilon _{i}^{{\prime }}= \frac{\epsilon _{i}^{2}\sum_{\overset{j=1}{j\neq i}}^{n}E_{ij}\epsilon _{j}^{\ast }}{\sigma _{i}+\epsilon _{i}\sum_{\overset{j=1}{j\neq i}}^{n}E_{ij}\epsilon _{j}^{\ast }}. \end{aligned}$$
(9)
If the absolute values of all errors \(\epsilon _{j}\) \((j=1,2,\ldots,n)\) are of the same order, say \(\vert \epsilon _{j} \vert =O( \vert \epsilon \vert )\), then from (9) we have
$$\begin{aligned} \epsilon _{i}^{{\prime }}=O(\epsilon )^{5}. \end{aligned}$$
(10)
From the second equation of (8), we get
$$\begin{aligned} &z_{i}=y_{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{N(y_{i})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(y_{i}-y_{j})}}, \\ &z_{i}-\zeta _{i}=y_{i}-\zeta _{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{(y_{i}-\zeta _{i})}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(y_{i}-\zeta _{j})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(y_{i}-y_{{j}})}}, \\ &\epsilon _{i}^{{\prime \prime }}=\epsilon _{i}^{\prime }- \frac{\sigma _{i}}{\frac{\sigma _{i}}{\epsilon _{i}^{\prime }}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}(\zeta _{j}-y_{j})}{(y_{i}-\zeta _{j})(y_{i}-y_{j})}} =\epsilon _{i}^{\prime }- \frac{\sigma _{i}\epsilon _{i}^{\prime }}{\sigma _{i}+\epsilon _{i}^{{\prime }}\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}F_{ij}}, \end{aligned}$$
where \(F_{ij}=\frac{-\sigma _{j}}{(y_{i}-\zeta _{j})(y_{i}-y_{j})}\) and \(y_{j}-\zeta _{j}=\epsilon _{j}^{\prime }\). This implies
$$\begin{aligned} \epsilon _{i}^{{\prime \prime }}=\bigl(\epsilon _{i}^{{\prime }}\bigr)^{2} \frac{\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}F_{ij}}{\sigma _{i}+\epsilon _{i}^{{\prime }}\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}F_{ij}} =\bigl(\epsilon _{i}^{{\prime }}\bigr)^{2}C_{i}, \end{aligned}$$
where \(C_{i}= \frac{\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}F_{ij}}{\sigma _{i}+\epsilon _{i}^{{\prime }}\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}F_{ij}}\) remains bounded for approximations sufficiently close to the roots. By (10), \(\epsilon _{i}^{{\prime }}=O(\epsilon )^{5}\), and thus
$$\begin{aligned} \epsilon _{i}^{{\prime \prime }} =O\bigl((\epsilon )^{5} \bigr)^{2}=O(\epsilon )^{10}, \end{aligned}$$
which shows that the convergence order of method (8) is ten. Hence we have proved the theorem. □
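The proven order can also be probed numerically. Double precision is exhausted after a single sweep, so the sketch below re-implements (8) for simple roots (\(\sigma _{j}=1\), where (3) reduces to \(u_{j}=v_{j}-f(v_{j})/f^{\prime }(x_{j})\)) using Python's standard-library decimal arithmetic at 200 digits; the test polynomial and starting values are hypothetical choices, and the observed order between sweeps should be at least the proven ten.

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 200                      # 200 significant digits

def f(x):
    return ((x - 6) * x + 11) * x - 6        # (x-1)(x-2)(x-3)

def df(x):
    return (3 * x - 12) * x + 11             # f'(x)

def mns10m_sweep(xs):
    n = len(xs)
    # Corrections (3) for sigma = 1: u = v - f(v)/f'(x) with v a Newton step.
    us = []
    for x in xs:
        v = x - f(x) / df(x)
        us.append(v - f(v) / df(x))
    ys = [x - 1 / (df(x) / f(x) - sum(1 / (x - us[j])
                                      for j in range(n) if j != i))
          for i, x in enumerate(xs)]
    zs = [y - 1 / (df(y) / f(y) - sum(1 / (y - ys[j])
                                      for j in range(n) if j != i))
          for i, y in enumerate(ys)]
    return zs

roots = [Decimal(1), Decimal(2), Decimal(3)]
xs = [Decimal("0.8"), Decimal("2.2"), Decimal("3.3")]
errs = []
for _ in range(2):
    xs = mns10m_sweep(xs)
    errs.append(max(abs(x - r) for x, r in zip(xs, roots)))

# Empirical order estimate between consecutive sweeps.
p = math.log(float(errs[1])) / math.log(float(errs[0]))
```

Since the error bound in the proof is not necessarily sharp, the measured exponent may exceed ten; it should in any case be well above the order of the underlying two-step scheme (2).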
1.3 Improvement of efficiency and convergence order
To improve the convergence order of method (8) from ten to twelve at essentially the same evaluation cost per step, we use
$$\begin{aligned} Z_{j}^{\ast }=v_{j}-\sigma _{j} \frac{f(v_{j})}{f^{{\prime }}(v_{j})},\quad\text{where } v_{j}=x_{j}-\sqrt{ \sigma _{j}} \frac{f(x_{j})}{f^{{\prime }}(x_{j})}, \end{aligned}$$
instead of \(x_{j}^{\ast }\) in (7), i.e.,
$$\begin{aligned} \sigma _{i}\frac{f(x_{i})}{f^{{\prime }}(x_{i})}= \frac{\sigma _{i}}{\frac{\sigma _{i}}{N_{i}(x_{i})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x_{i}-Z_{j}^{\ast })}}, \end{aligned}$$
(11)
where \(Z_{j}^{\ast }\) is the fourth-order correction from [27]. Using (11) in the first step of (2), we have
$$\begin{aligned} \textstyle\begin{cases} y_{i}^{(k)}=x_{i}^{(k)}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{N_{i}(x_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x_{i}^{(k)}-Z_{j}^{{*(k)} })}}, \\ z_{{i}}^{(k)}=y_{i}^{(k)}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{N_{i}(y_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(y_{i}^{(k)}-y_{j}^{(k)})}}.\end{cases}\displaystyle \end{aligned}$$
(12)
Thus we have constructed a new simultaneous method (12), abbreviated as MNS12M, for extracting all multiple roots of polynomial equation (1). For multiplicity unity, method (12) determines all the distinct roots of (1); this variant is abbreviated as MNS12D.
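The correction \(Z_{j}^{\ast }\) of (11) can be sketched as a standalone routine; a full MNS12M sweep is then obtained from the MNS10M sweep by substituting this routine for the correction (3). The triple-root test problem and the zero guards are illustrative assumptions, and the fourth-order behaviour rests on the paper's citation of [27], not on a verification here.

```python
from math import sqrt

def z_star(f, df, x, sigma):
    """Correction Z* used in method (12): a sqrt(sigma)-damped Newton
    substep followed by a multiplicity-weighted Newton substep at v."""
    if f(x) == 0.0:                          # guard: exact root reached
        return x
    v = x - sqrt(sigma) * f(x) / df(x)
    if f(v) == 0.0:
        return v
    return v - sigma * f(v) / df(v)

# Hypothetical test problem: a root of multiplicity 3 at x = 1.
f  = lambda x: (x - 1.0) ** 3 * (x + 2.0)
df = lambda x: 3.0 * (x - 1.0) ** 2 * (x + 2.0) + (x - 1.0) ** 3
x = 1.4
for _ in range(4):
    x = z_star(f, df, x, 3)
```

Unlike (3), this correction evaluates \(f^{\prime }\) at \(v_{j}\) rather than reusing \(f^{\prime }(x_{j})\), which is the structural difference between methods (8) and (12).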
1.4 Convergence analysis
In this section, the convergence analysis of the family of two-step simultaneous methods (12) is given in the form of the following theorem.
Theorem 2
Let \(\zeta _{1},\ldots,\zeta _{n}\) be the roots of (1) with respective multiplicities \(\sigma _{1},\ldots,\sigma _{n}\). If the initial approximations \(x_{1}^{(0)},\ldots, x_{n}^{(0)}\) are sufficiently close to the actual roots, then the order of convergence of method (12) equals twelve.
Proof
Let \(\epsilon _{i}=x_{i}-\zeta _{i},\epsilon _{i}^{\prime }=y_{i}-\zeta _{i} \), and \(\epsilon _{i}^{{\prime \prime }}=z_{i}-\zeta _{i}\) be the errors in \(x_{i}\), \(y_{i}\), and \(z_{i}\) approximations respectively. Consider the first step of (12), which is
$$\begin{aligned} y_{i}=x_{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{N(x_{i})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x_{i}-Z_{j}^{\ast })}}, \end{aligned}$$
where \(N(x_{i})=\frac{f(x_{i})}{f^{\prime }(x_{i})}\). Then, obviously, for distinct roots, we have
$$\begin{aligned} \frac{1}{N(x_{i})}=\frac{f^{\prime }(x_{i})}{f(x_{i})}=\sum_{j=1}^{n} \frac{1}{(x_{i}-\zeta _{j})}=\frac{1}{(x_{i}-\zeta _{i})}+\sum_{ \overset{j=1}{j\neq i}}^{n} \frac{1}{(x_{i}-\zeta _{j})}. \end{aligned}$$
Thus, for multiple roots, we have from (12)
$$\begin{aligned} &y_{i}=x_{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{(x_{i}-\zeta _{i})}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x_{i}-\zeta _{j})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(x_{i}-Z_{j}^{\ast })}}, \\ &y_{i}-\zeta _{i}=x_{i}-\zeta _{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{(x_{i}-\zeta _{i})}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}(\zeta _{j}-Z_{j}^{\ast })}{(x_{i}-\zeta _{j})(x_{i}-Z_{j}^{\ast })}}, \\ &\epsilon _{i}^{\prime }=\epsilon _{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{\epsilon _{i}}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{-\sigma _{j}(Z_{j}^{\ast }-\zeta _{j})}{(x_{i}-\zeta _{j})(x_{i}-Z_{j}^{\ast })}} =\epsilon _{i}- \frac{\sigma _{i}\epsilon _{i}}{\sigma _{i}+\epsilon _{i}\sum_{\overset{j=1}{j\neq i}}^{n}G_{ij}\epsilon _{j}^{\ast }}, \end{aligned}$$
where \(\epsilon _{j}^{\ast }=Z_{j}^{\ast }-\zeta _{j}=O(\epsilon _{j}^{4})\) by [27] and \(G_{ij}=\frac{-\sigma _{j}}{(x_{i}-\zeta _{j})(x_{i}-Z_{j}^{\ast })}\). Thus
$$\begin{aligned} \epsilon _{i}^{{\prime }}= \frac{\epsilon _{i}^{2}\sum_{\overset{j=1}{j\neq i}}^{n}G_{ij}\epsilon _{j}^{\ast }}{\sigma _{i}+\epsilon _{i}\sum_{\overset{j=1}{j\neq i}}^{n}G_{ij}\epsilon _{j}^{\ast }}. \end{aligned}$$
(13)
If the absolute values of all errors \(\epsilon _{j}\) \((j=1,2,\ldots,n)\) are of the same order, say \(\vert \epsilon _{j} \vert =O( \vert \epsilon \vert )\), then from (13) we have
$$\begin{aligned} \epsilon _{i}^{{\prime }}=O(\epsilon )^{6}. \end{aligned}$$
(14)
From the second equation of (12), we have
$$\begin{aligned} &z_{i}=y_{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{N(y_{i})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(y_{i}-y_{j})}}, \\ &z_{i}-\zeta _{i}=y_{i}-\zeta _{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{(y_{i}-\zeta _{i})}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(y_{i}-\zeta _{j})}-\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}}{(y_{i}-y_{{j}})}}, \\ &\epsilon _{i}^{{\prime \prime }}=\epsilon _{i}^{\prime }- \frac{\sigma _{i}}{\frac{\sigma _{i}}{\epsilon _{i}^{\prime }}+\sum_{\overset{j=1}{j\neq i}}^{n}\frac{\sigma _{j}(\zeta _{j}-y_{j})}{(y_{i}-\zeta _{j})(y_{i}-y_{j})}} =\epsilon _{i}^{\prime }- \frac{\sigma _{i}\epsilon _{i}^{{\prime }}}{\sigma _{i}+\epsilon _{i}^{{\prime }}\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}H_{ij}}, \end{aligned}$$
where \(H_{ij}=\frac{-\sigma _{j}}{(y_{i}-\zeta _{j})(y_{i}-y_{j})}\) and \(y_{j}-\zeta _{j}=\epsilon _{j}^{\prime }\). This implies
$$\begin{aligned} \epsilon _{i}^{{\prime \prime }}=\epsilon _{i}^{{\prime }}- \frac{\sigma _{i}\epsilon _{i}^{{\prime }}}{\sigma _{i}+\epsilon _{i}^{{\prime }}\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}H_{ij}} =\bigl(\epsilon _{i}^{{\prime }}\bigr)^{2} \frac{\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}H_{ij}}{\sigma _{i}+\epsilon _{i}^{{\prime }}\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}H_{ij}} =\bigl(\epsilon _{i}^{{\prime }}\bigr)^{2}D_{i}, \end{aligned}$$
where \(D_{i}= \frac{\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}H_{ij}}{\sigma _{i}+\epsilon _{i}^{{\prime }}\sum_{\overset{j=1}{j\neq i}}^{n}\epsilon _{j}^{{\prime }}H_{ij}}\) remains bounded for approximations sufficiently close to the roots. By (14), \(\epsilon _{i}^{{\prime }}=O(\epsilon )^{6}\), and thus
$$\begin{aligned} \epsilon _{i}^{{\prime \prime }} =O\bigl((\epsilon )^{6} \bigr)^{2}=O(\epsilon )^{12}, \end{aligned}$$
which shows that the convergence order of method (12) is twelve. Hence we have proved the theorem. □