

On iterative techniques for estimating all roots of nonlinear equation and its system with application in differential equation

Abstract

In this article, we construct a family of iterative methods for finding a single root of a nonlinear equation, then generalize this family to determine all roots of a nonlinear equation simultaneously, and further extend it to solve a system of nonlinear equations. Convergence analysis shows that the order of convergence is 3 for the single root finding method as well as for the system of nonlinear equations, and 5 for the simultaneous determination of all distinct and multiple roots of a nonlinear equation. The computational cost, basins of attraction, efficiency, log of residual, and numerical test examples show that the newly constructed methods are more efficient than the existing methods in the literature.

1 Introduction

To solve the nonlinear equation

$$ f(x)=0 $$
(1)

is one of the oldest problems of science in general and of mathematics in particular. Such nonlinear equations have diverse applications in many areas of science and engineering. In general, to find the roots of (1), we look towards iterative schemes, which can be classified further into methods that approximate a single root and methods that approximate all roots of (1). There also exists a class of iterative methods in the literature which solves nonlinear systems. In this article, we work on all three types of iterative methods. A lot of iterative methods of different convergence orders for finding roots of nonlinear equations and their systems already exist in the literature (see [1–12]). The aforementioned methods approximate one root at a time. Mathematicians are, however, also interested in finding all roots of (1) simultaneously, because simultaneous iterative methods have a wider region of convergence, are more stable as compared to single root finding methods, and can be implemented for parallel computing as well. More details on the simultaneous determination of all roots can be found in [13–25] and the references cited therein.

The main aim of this paper is to construct a family of optimal third order iterative methods and then convert them into simultaneous iterative methods for finding all distinct as well as multiple roots of nonlinear equation (1). We further extend this family of iterative methods to solve a system of nonlinear equations. Basins of attraction of the single root finding methods are also given to show the convergence behavior of the iterative methods.

2 Constructions of a family of methods for single root and convergence analysis

Here, we first recall some well-known existing third order iterative methods.

Singh et al. [4] presented the following optimal third order method (abbreviated as E1):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}-\frac{2}{3} ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=x^{(k)}- \frac{4f(x^{(k)})}{f^{\prime }(x^{(k)})+3f^{\prime }(y^{(k)})}.\end{cases} $$

Heun [26] gave the following third order optimal method (abbreviated as E2):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}-\frac{2}{3} ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=x^{(k)}-\frac{f(x^{(k)})}{4} ( \frac{1}{f^{\prime }(x^{(k)})}+\frac{3}{f^{\prime }(y^{(k)})} ) .\end{cases} $$

Amat et al. [5] gave the following third order optimal method (abbreviated as E3):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}- ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=y^{(k)}- ( \frac{f(y^{(k)})}{f^{\prime }(x^{(k)})} ) .\end{cases} $$

Chun et al. [27] gave the third order optimal method as follows (abbreviated as E4):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}- ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=x^{(k)}-\frac{1}{2} ( 3- \frac{f^{\prime }(y^{(k)})}{f^{\prime }(x^{(k)})} ) ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) .\end{cases} $$

Kou et al. [28] gave the third order optimal method as follows (abbreviated as E5):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}+ ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=y^{(k)}- ( \frac{f(y^{(k)})}{f^{\prime }(x^{(k)})} ) .\end{cases} $$

Chun et al. [27] gave the following third order optimal method (abbreviated as E6):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}- ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=x^{(k)}- \frac{f(x^{(k)})(2+3(f^{\prime }(x^{(k)}))^{2}-f^{\prime }(x^{(k)})f^{\prime }(y^{(k)}))}{f^{\prime }(x^{(k)})+2(f^{\prime }(x^{(k)}))^{3}+f^{\prime }(y^{(k)})}.\end{cases} $$

Here, we propose the following family of iterative methods (abbreviated as Q1):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}- ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=y^{(k)}- ( \frac{f^{\prime }(x^{(k)})-f^{\prime }(y^{(k)})}{\alpha f^{\prime }(y^{(k)})+(2-\alpha )f^{\prime }(x^{(k)})} ) ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) ,\end{cases} $$
(2)

where \(\alpha \in \mathbb{R} \). For the iteration scheme (2), we have the following convergence theorem, whose error relation was obtained with the help of CAS Maple 18.

Theorem 1

Let \(\zeta \in I\) be a simple root of a sufficiently differentiable function \(f:I\subseteq \mathbb{R} \longrightarrow \mathbb{R} \) in an open interval I. If \(x_{0}\) is sufficiently close to ζ, then the convergence order of the family of iterative methods (2) is three and the error equation is given by

$$ e^{(k+1)}= \biggl(2c_{2}^{2}+\frac{1}{2}c_{3}- \alpha c_{2}^{2} \biggr) \bigl(e^{(k)} \bigr)^{3}+O \bigl( \bigl(e^{(k)} \bigr)^{4} \bigr), $$
(3)

where \(c_{m}=\frac{f^{(m)}(\zeta )}{m!f^{\prime }(\zeta )}\), \(m\geq 2\).

Proof

Let ζ be a simple root of f and \(x^{(k)}=\zeta +e^{(k)}\). By Taylor’s series expansion of \(f(x^{(k)})\) around \(x^{(k)}=\zeta \), taking \(f(\zeta )=0\), we get

$$ f \bigl(x^{(k)} \bigr)=f^{{\prime }}(\zeta ) \bigl(e^{(k)}+c_{2} \bigl(e^{(k)} \bigr)^{2}+c_{3} \bigl(e^{(k)} \bigr)^{3}+c_{4} \bigl(e^{(k)} \bigr)^{4}+O \bigl( \bigl(e^{(k)} \bigr)^{5} \bigr) \bigr) $$
(4)

and

$$ f^{\prime } \bigl(x^{(k)} \bigr)=f^{{\prime }}(\zeta ) \bigl(1+2c_{2} \bigl(e^{(k)} \bigr)+3c_{3} \bigl(e^{(k)} \bigr)^{2}+4c_{4} \bigl(e^{(k)} \bigr)^{3}+O \bigl( \bigl(e^{(k)} \bigr)^{4} \bigr) \bigr). $$
(5)

Dividing (4) by (5), we have

$$ \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})}=e^{(k)}-c_{2} \bigl(e^{(k)} \bigr)^{2}+ \bigl(2c_{2}^{2}-2c_{3} \bigr) \bigl(e^{(k)} \bigr)^{3}+O \bigl( \bigl(e^{(k)} \bigr)^{4} \bigr) $$
(6)

and

$$\begin{aligned}& y^{(k)}-\zeta =c_{2} \bigl(e^{(k)} \bigr)^{2}+ \bigl(-2c_{2}^{2}+2c_{3} \bigr) \bigl(e^{(k)} \bigr)^{3}+\cdots, \end{aligned}$$
(7)
$$\begin{aligned}& f^{\prime } \bigl(y^{(k)} \bigr)=f^{\prime }(\zeta ) \bigl(1+2c_{2}^{2} \bigl(e^{(k)} \bigr)^{2}+2c_{2} \bigl(-2c_{2}^{2}+2c_{3} \bigr) \bigl(e^{(k)} \bigr)^{3}+\cdots \bigr). \end{aligned}$$
(8)

Combining (5), (6), and (8), we have

$$ \frac{f^{\prime }(x^{(k)})-f^{\prime }(y^{(k)})}{\alpha f^{\prime }(y^{(k)})+(2-\alpha )f^{\prime }(x^{(k)})}\cdot \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})}=c_{2} \bigl(e^{(k)} \bigr)^{2}- \biggl(4c_{2}^{2}-\frac{3}{2}c_{3}-\alpha c_{2}^{2} \biggr) \bigl(e^{(k)} \bigr)^{3}+\cdots .
$$
(9)

From the second step of (2), we have

$$\begin{aligned}& e^{(k+1)}=y^{(k)}-\zeta - \frac{f^{\prime }(x^{(k)})-f^{\prime }(y^{(k)})}{\alpha f^{\prime }(y^{(k)})+(2-\alpha )f^{\prime }(x^{(k)})}\cdot \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})}, \end{aligned}$$
(10)
$$\begin{aligned}& e^{(k+1)}= \biggl(2c_{2}^{2}+\frac{1}{2}c_{3}- \alpha c_{2}^{2} \biggr) \bigl(e^{(k)} \bigr)^{3}+O \bigl( \bigl(e^{(k)} \bigr)^{4} \bigr). \end{aligned}$$
(11)

Hence this proves third order convergence. □
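A minimal Python sketch of the family Q1, i.e. scheme (2), is given below for illustration; the helper name `q1` and the test equation are our own choices and not part of the original scheme.

```python
# Minimal sketch of the family Q1, i.e. scheme (2), assuming f is smooth and
# f'(x) does not vanish along the iteration; alpha is the free real parameter.
def q1(f, df, x0, alpha=0.0, tol=1e-15, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            break
        y = x - fx / dfx                      # first (Newton) step of (2)
        dfy = df(y)
        # second step of (2)
        x = y - (dfx - dfy) / (alpha * dfy + (2.0 - alpha) * dfx) * (fx / dfx)
    return x

# Illustrative call: cube root of 10 as the root of f(x) = x^3 - 10.
print(q1(lambda x: x**3 - 10, lambda x: 3 * x**2, x0=2.0))
```

Any real value of α may be used here, in accordance with Theorem 1.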

3 Generalizations to simultaneous methods

Suppose that nonlinear equation (1) has n roots. Then \(f(x)\) and \(f^{\prime }(x)\) can be approximated as

$$ f(x)=\prod_{j=1}^{n} ( x-x_{j} ) \quad \text{and}\quad f^{ \prime }(x)=\sum_{k=1}^{n} \underset{\underset{j=1}{j\neq k}}{\overset{n}{\prod }} ( x-x_{j} ) . $$
(12)

This implies

$$ \frac{f^{\prime }(x)}{f(x)}=\sum_{j=1}^{n} \frac{1}{x-x_{j}},\qquad \text{so that}\qquad x-x_{i}= \frac{1}{\frac{f^{\prime }(x)}{f(x)}-\sum_{\overset{j=1}{j\neq i}}^{n} ( \frac{1}{x-x_{j}} ) }. $$
(13)

Evaluating at \(x=x_{i}^{(k)}\), this gives the Ehrlich–Aberth method [29]

$$ y_{i}^{(k+1)}=x_{i}^{(k)}- \frac{1}{\frac{1}{N(x_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n} ( \frac{1}{(x_{i}^{(k)}-x_{j}^{(k)})} ) }, $$
(14)

where \(N(x_{i}^{(k)})=\frac{f(x_{i}^{(k)})}{f^{\prime }(x_{i}^{(k)})}\) and \(i,j=1,2,3,\ldots,n\). Now, from (13), an approximation of \(\frac{f(x_{i}^{(k)})}{f^{\prime }(x_{i}^{(k)})}\) is formed by replacing \(x_{j}^{(k)}\) with \(z_{j}^{(k)}\) as follows:

$$ \frac{f(x_{i}^{(k)})}{f^{\prime }(x_{i}^{(k)})}= \frac{1}{\frac{1}{N(x_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n} ( \frac{1}{(x_{i}^{(k)}-z_{j}^{(k)})} ) }. $$
(15)

Using (15) in (14), we have

$$ y_{i}^{(k+1)}=x_{i}^{(k)}- \frac{1}{\frac{1}{N(x_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n} ( \frac{1}{(x_{i}^{(k)}-z_{j}^{(k)})} ) }. $$
(16)

In case of multiple roots,

$$ y_{i}^{(k+1)}=x_{i}^{(k)}- \frac{\sigma _{i}}{\frac{1}{N(x_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n} ( \frac{\sigma _{j}}{(x_{i}^{(k)}-z_{j}^{(k)})} ) }\quad (i,j=1,2,3,\ldots,n), $$
(17)

where \(z_{j}^{(k)}=y_{j}^{(k)}- ( \frac{f^{\prime }(x_{j}^{(k)})-f^{\prime }(y_{j}^{(k)})}{\alpha f^{\prime }(y_{j}^{(k)})+(2-\alpha )f^{\prime }(x_{j}^{(k)})} ) ( \frac{f(x_{j}^{(k)})}{f^{\prime }(x_{j}^{(k)})} ) \) and \(y_{j}^{(k)}=x_{j}^{(k)}- ( \frac{f(x_{j}^{(k)})}{f^{\prime }(x_{j}^{(k)})} ) \) are the corrections of the family (2). Thus (17), with (16) as its special case for simple roots, is a new family of simultaneous iterative methods for extracting all distinct as well as multiple roots of nonlinear equation (1); we abbreviate it as SM1.

For comparison, Zhang et al. [30] presented the following fifth order simultaneous method:

$$ x_{i}^{(k+1)}=x_{i}^{(k)}- \frac{2w_{i}(x_{i}^{(k)})}{1+\sum_{\underset{j=1}{j\neq i}}^{n}\frac{w_{j}(x_{j}^{(k)})}{x_{i}^{(k)}-x_{j}^{(k)}}+\sqrt{\textstyle\begin{array}{c} ( 1+\sum_{\underset{j=1}{j\neq i}}^{n}\frac{w_{j}(x_{j}^{(k)})}{x_{i}^{(k)}-x_{j}^{(k)}} ) ^{2}+4w_{i}(x_{i}^{(k)}) \\ \sum_{\underset{j=1}{j\neq i}}^{n}\frac{w_{j}(x_{i}^{(k)})}{ ( x_{i}^{(k)}-x_{j}^{(k)} ) ( x_{i}^{(k)}-w_{i}(x_{i}^{(k)})-x_{j}^{(k)} ) }\end{array}\displaystyle }}. $$
(18)
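Before turning to the convergence analysis, we give a minimal Python sketch of one sweep of our method SM1, i.e. (16)/(17) with all multiplicities equal to one; the helper names and the cubic test polynomial are our own illustrative choices, not part of the original formulation.

```python
import numpy as np

# Minimal sketch of one sweep of SM1, i.e. (16)/(17) with all multiplicities
# sigma_j = 1.  xs holds the current approximations to all n roots.
def sm1_step(f, df, xs, alpha=0.0):
    xs = np.asarray(xs, dtype=complex)
    fx, dfx = f(xs), df(xs)
    y = xs - fx / dfx                                                    # Newton step
    dfy = df(y)
    z = y - (dfx - dfy) / (alpha * dfy + (2 - alpha) * dfx) * (fx / dfx) # scheme (2)
    new = np.empty_like(xs)
    for i in range(len(xs)):
        s = sum(1.0 / (xs[i] - z[j]) for j in range(len(xs)) if j != i)
        new[i] = xs[i] - 1.0 / (dfx[i] / fx[i] - s)                      # formula (16)
    return new

# Illustrative call: the three cube roots of unity, f(x) = x^3 - 1.
xs = np.array([1.2, -0.4 + 0.9j, -0.4 - 0.9j])
for _ in range(3):
    xs = sm1_step(lambda x: x**3 - 1, lambda x: 3 * x**2, xs)
print(np.round(xs, 10))
```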

3.1 Convergence analysis

In this section, the convergence analysis of the family of simultaneous methods (17) is given in the form of the following theorem. Obviously, the convergence of the simple-root method (16) follows from Theorem 2 applied to method (17) (SM1) when all the multiplicities are equal to one.

Theorem 2

Let \(\zeta _{{1}},\ldots,\zeta _{n}\) be the n distinct roots, with multiplicities \(\sigma _{{1}},\ldots,\sigma _{n}\), of nonlinear equation (1). If the initial approximations \(x_{1}^{(0)},\ldots,x_{n}^{(0)}\) are sufficiently close to the respective exact roots, then the order of convergence of method (SM1) equals five.

Proof

Let

$$\begin{aligned}& \epsilon _{i} = x_{i}^{(k)}-\zeta _{i} \quad \text{and } \end{aligned}$$
(19)
$$\begin{aligned}& \epsilon _{i}^{\prime } = y_{i}^{(k+1)}- \zeta _{i} \end{aligned}$$
(20)

be the errors in \(x_{i}^{(k)}\) and \(y_{i}^{(k+1)}\) approximations respectively. Considering (SM1), we have

$$ y_{i}^{(k+1)}=x_{i}^{(k)}- \frac{\sigma _{i}}{\frac{1}{N(x_{i}^{(k)})}-\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{\sigma _{j}}{(x_{i}^{(k)}-z_{j}^{(k)})} ) }, $$
(21)

where

$$ N \bigl(x_{i}^{(k)} \bigr)= \biggl( \frac{f(x_{i}^{(k)})}{f^{\prime }(x_{i}^{(k)})} \biggr) . $$
(22)

Then, obviously, for distinct roots we have

$$ \frac{1}{N(x_{i}^{(k)})}= \biggl( \frac{f^{\prime }(x_{i}^{(k)})}{f(x_{i}^{(k)})} \biggr) =\sum _{j=1}^{n} \biggl( \frac{1}{(x_{i}^{(k)}-\zeta _{j})} \biggr) = \frac{1}{(x_{i}^{(k)}-\zeta _{i})}+\sum_{\underset{j=1}{j\neq i}}^{n} \biggl( \frac{1}{(x_{i}^{(k)}-\zeta _{j})} \biggr) . $$
(23)

Similarly, for multiple roots, \(\frac{1}{N(x_{i}^{(k)})}=\sum_{j=1}^{n} \frac{\sigma _{j}}{x_{i}^{(k)}-\zeta _{j}}\), and thus we have from (17)

$$\begin{aligned}& y_{i}^{(k+1)}=x_{i}^{(k)}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{(x_{i}^{(k)}-\zeta _{i})}+\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{\sigma _{j}}{(x_{i}^{(k)}-\zeta _{j})} ) -\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{\sigma _{j}}{(x_{i}^{(k)}-z_{j}^{(k)})} ) }, \end{aligned}$$
(24)
$$\begin{aligned}& y_{i}^{(k+1)}-\zeta _{i}=x_{i}^{(k)}- \zeta _{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{(x_{i}^{(k)}-\zeta _{i})}+\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{\sigma _{j}(x_{i}^{(k)}-z_{j}^{(k)}-x_{i}+\zeta _{j})}{(x_{i}^{(k)}-\zeta _{j})(x_{i}^{(k)}-z_{j}^{(k)})} ) }, \end{aligned}$$
(25)
$$\begin{aligned}& \epsilon _{i}^{\prime }=\epsilon _{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{\epsilon _{i}}+\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{-\sigma _{j}(z_{j}^{(k)}-\zeta _{j})}{(x_{i}^{(k)}-\zeta _{j})(x_{i}^{(k)}-z_{j}^{(k)})} ) }, \end{aligned}$$
(26)
$$\begin{aligned}& \hphantom{\epsilon _{i}^{\prime }}=\epsilon _{i}- \frac{\sigma _{i}\epsilon _{i}}{\sigma _{i}+\epsilon _{i}\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{-\sigma _{j}(z_{j}^{(k)}-\zeta _{j})}{(x_{i}^{(k)}-\zeta _{j})(x_{i}^{(k)}-z_{j}^{(k)})} ) }, \end{aligned}$$
(27)
$$\begin{aligned}& \hphantom{\epsilon _{i}^{\prime }}=\epsilon _{i}- \frac{\sigma _{i}\epsilon _{i}}{\sigma _{i}+\epsilon _{i}\sum_{\underset{j=1}{j\neq i}}^{n} ( E_{i}\epsilon _{j}^{3} ) }, \end{aligned}$$
(28)

where \(z_{j}^{(k)}-\zeta _{j}=O(\epsilon _{j}^{3})\) by the error relation (3) and \(E_{i}= ( \frac{-\sigma _{j}}{(x_{i}^{(k)}-\zeta _{j})(x_{i}^{(k)}-z_{j}^{(k)})} ) \).

Thus,

$$ \epsilon _{i}^{{\prime }}= \frac{\epsilon _{i}^{2}\sum_{\underset{j=1}{j\neq i}}^{n} ( E_{i}\epsilon _{j}^{3} ) }{\sigma _{i}+\epsilon _{i}\sum_{\underset{j=1}{j\neq i}}^{n} ( E_{i}\epsilon _{j}^{3} ) }. $$
(29)

If it is assumed that the absolute values of all errors \(\epsilon _{j}\) (\(j=1,2,\ldots,n\)) are of the same order, say \(\vert \epsilon _{j} \vert =O( \vert \epsilon \vert )\), then from (29) we have

$$ \epsilon _{i}^{{\prime }}=O(\epsilon _{i})^{5}. $$
(30)

Hence the theorem. □

4 Extension to a system of nonlinear equations

In this work, we consider the following system of nonlinear equations:

$$ \mathbf{F(x)=}0, $$
(31)

where \(\mathbf{F(x)}=(f_{1}(x),f_{2}(x),\ldots,f_{n}(x))^{T}\) and the functions \(f_{1}(x),f_{2}(x),\ldots,f_{n}(x)\) are the coordinate functions of F [31].

There are many approaches to solving nonlinear system (31). One of the most famous is the Newton–Raphson method for systems of nonlinear equations,

$$ \mathbf{y}^{(k)}=\mathbf{x}^{(k)}-\mathbf{F}^{\prime } \bigl( \mathbf{x}^{(k)} \bigr)^{-1}\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr), $$

where

$$ \mathbf{F}(\mathbf{x})=\mathbf{F}(x_{1},\ldots,x_{n})= \begin{pmatrix} f_{1}(x_{1},\ldots,x_{n}) \\ f_{2}(x_{1},\ldots,x_{n}) \\ \vdots \\ f_{n}(x_{1},\ldots,x_{n}) \end{pmatrix} $$

and

$$ \mathbf{F}^{\prime }(\mathbf{x})= \begin{pmatrix} \frac{\partial f_{1}}{\partial x_{1}} & \frac{\partial f_{1}}{\partial x_{2}} & \cdots & \frac{\partial f_{1}}{\partial x_{n}} \\ \frac{\partial f_{2}}{\partial x_{1}} & \frac{\partial f_{2}}{\partial x_{2}} & \cdots & \frac{\partial f_{2}}{\partial x_{n}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_{n}}{\partial x_{1}} & \frac{\partial f_{n}}{\partial x_{2}} & \cdots & \frac{\partial f_{n}}{\partial x_{n}} \end{pmatrix} . $$
(32)

Here, we present some well-known third order iterative methods for solving the system of nonlinear equations.

Darvishi et al. [32] presented the following third order iterative method (abbreviated as EE1):

$$\begin{aligned}& \mathbf{y}^{(k)} = \mathbf{x}^{(k)}-\mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)^{-1}\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr), \\& \mathbf{z}^{(k)} = \mathbf{y}^{(k)}-\mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)^{-1}\mathbf{F} \bigl( \mathbf{y}^{(k)} \bigr). \end{aligned}$$

The trapezoidal Newton method [33] of third order is as follows (abbreviated as EE2):

$$\begin{aligned}& \mathbf{y}^{(k)} = \mathbf{x}^{(k)}-\mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)^{-1}\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr), \\& \mathbf{z}^{(k)} = \mathbf{x}^{(k)}-2 \bigl[ \mathbf{F}^{\prime } \bigl( \mathbf{x}^{(k)} \bigr)+ \mathbf{F}^{\prime } \bigl(\mathbf{y}^{(k)} \bigr) \bigr] ^{-1}\mathbf{F} \bigl(\mathbf{x}^{(k)}\mathbf{ \bigr).} \end{aligned}$$

Khirallah et al. [34] presented the following third order iterative method (Abbreviated as EE3):

$$\begin{aligned}& \mathbf{y}^{(k)} = \mathbf{x}^{(k)}-\frac{2}{3} \mathbf{F}^{\prime } \bigl( \mathbf{x}^{(k)}\mathbf{ \bigr)}^{-1}\mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr), \\& \mathbf{z}^{(k)} = \mathbf{x}^{(k)}- \biggl[ \mathbf{F}^{\prime } \bigl( \mathbf{x}^{(k)}\mathbf{ \bigr)}^{-1}+\frac{3}{2}\mathbf{F}^{\prime } \bigl( \mathbf{y}^{(k)} \bigr)^{-1} \biggr] \mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr)+3 \bigl[ \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)+ \mathbf{F}^{\prime } \bigl( \mathbf{y}^{(k)} \bigr) \bigr] ^{-1}\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr). \end{aligned}$$

Here, we extend the family of iterative methods (2) for solving the system of nonlinear equations

$$\begin{aligned}& \mathbf{y}^{(k)} = \mathbf{x}^{(k)}-\mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)^{-1}\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr), \\& \mathbf{z}^{(k)} = \mathbf{y}^{(k)}- \bigl[ \bigl( \alpha \mathbf{F}^{\prime } \bigl(\mathbf{y}^{(k)}\bigr) +(2- \alpha ) \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr) \bigr)^{-1} \bigl( \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)-\mathbf{F}^{\prime } \bigl( \mathbf{y}^{(k)} \bigr) \bigr) \bigr] \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)}\mathbf{ \bigr)}^{-1} \mathbf{F} \bigl( \mathbf{x}^{(k)}\mathbf{ \bigr),} \end{aligned}$$
(33)

where \(\alpha \in \mathbb{R} \). We abbreviate this family of iterative methods for approximating roots of the system of nonlinear equations by QQ1.
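A compact Python sketch of the family QQ1 in (33) is given below; linear systems are solved instead of forming the inverses explicitly, and the function names are our own illustrative choices.

```python
import numpy as np

# Minimal sketch of QQ1, i.e. scheme (33).  F maps R^n -> R^n, J is its
# Jacobian, and alpha is the free real parameter; the inverses in (33) are
# replaced by linear solves.
def qq1_step(F, J, x, alpha=0.0):
    Fx, Jx = F(x), J(x)
    newton = np.linalg.solve(Jx, Fx)                    # F'(x)^{-1} F(x)
    y = x - newton                                      # first step of (33)
    Jy = J(y)
    B = alpha * Jy + (2.0 - alpha) * Jx                 # alpha F'(y) + (2-alpha) F'(x)
    return y - np.linalg.solve(B, (Jx - Jy) @ newton)   # second step of (33)

def qq1(F, J, x0, alpha=0.0, tol=1e-15, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(F(x)) < tol:
            break
        x = qq1_step(F, J, x, alpha)
    return x
```

The routine can be applied to the test systems of Sect. 7 once their Jacobians are supplied.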

Theorem 3

Let the function \(\mathbf{F}:E\subseteq \mathbb{R} ^{n}\rightarrow \mathbb{R} ^{n}\) be sufficiently Fréchet differentiable on an open set E containing the root ζ of \(\mathbf{F}(\mathbf{x})=\mathbf{0}\). If the initial estimate \(\mathbf{x}^{(0)}\) is close to ζ, then the convergence order of the method QQ1 is at least three for every \(\alpha \in \mathbb{R} \).

Proof

Let \(\mathbf{e}^{(k)}=\mathbf{x}^{(k)}-\boldsymbol{\zeta}\), \(\widetilde{\mathbf{e}}^{(k)}=\mathbf{y}^{(k)}-\boldsymbol{\zeta}\), and \(\widehat{\mathbf{e}}^{(k)}=\mathbf{z}^{(k)}-\boldsymbol{\zeta}\) be the errors at the kth step. Developing the Taylor series of \(\mathbf{F(x}^{(k)})\) in a neighborhood of ζ and assuming that \(\mathbf{F}^{\prime }(\boldsymbol{\zeta})^{-1}\) exists, we write

$$ \mathbf{F} (\mathbf{x} )=\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr)\mathbf{+F}^{ \prime } \bigl( \mathbf{x}^{(k)} \bigr) \bigl(\mathbf{x-x}^{(k)} \bigr)+ \frac{1}{2!}\mathbf{F}^{{ \prime \prime }} \bigl(\mathbf{x}^{(k)} \bigr) \bigl(\mathbf{x-x}^{(k)} \bigr)^{2}+\cdots $$
(34)

and

$$\begin{aligned}& \mathbf{F(x)=0,} \end{aligned}$$
(35)
$$\begin{aligned}& \mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr)=\mathbf{F}^{\prime }(\boldsymbol{\zeta}) \bigl\{ \mathbf{e}^{(k)}+ \mathbf{A}_{2} \bigl(\mathbf{e}^{(k)} \bigr)^{2}+ \mathbf{A}_{3} \bigl(\mathbf{e}^{(k)} \bigr)^{3}+\cdots +\mathbf{A}_{6} \bigl( \mathbf{e}^{(k)} \bigr)^{6} \bigr\} +O \bigl( \bigl\Vert \mathbf{e}^{(k)} \bigr\Vert ^{7} \bigr), \end{aligned}$$
(36)

where

$$ \mathbf{A}_{m}=\frac{1}{m!} \bigl[\mathbf{F}^{\prime }(\boldsymbol{\zeta}) \bigr]^{-1}\mathbf{F}^{(m)}(\boldsymbol{\zeta}),\quad m=2,3, \ldots . $$

From (36) we obtain

$$\begin{aligned}& \bigl[ \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr) \bigr]^{-1} \mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr)= \mathbf{e}^{(k)}-\mathbf{A}_{2} \bigl(\mathbf{e}^{(k)} \bigr)^{2}+ \bigl(2\mathbf{A}_{2}^{2}-2\mathbf{A}_{3} \bigr) \bigl(\mathbf{e}^{(k)} \bigr)^{3}+O \bigl( \bigl\Vert \mathbf{e}^{(k)} \bigr\Vert ^{4} \bigr), \end{aligned}$$
(37)
$$\begin{aligned}& \widetilde{\mathbf{e}}^{(k)}=\mathbf{y}^{(k)}-\boldsymbol{\zeta}=\mathbf{A}_{2} \bigl(\mathbf{e}^{(k)} \bigr)^{2}- \bigl(2\mathbf{A}_{2}^{2}-2\mathbf{A}_{3} \bigr) \bigl(\mathbf{e}^{(k)} \bigr)^{3}+O \bigl( \bigl\Vert \mathbf{e}^{(k)} \bigr\Vert ^{4} \bigr). \end{aligned}$$
(38)

Expanding \(\mathbf{F}^{\prime }\mathbf{(y}^{(k)})\) about ζ and using (38), we obtain

$$\begin{aligned}& \mathbf{F}^{\prime } \bigl(\mathbf{y}^{(k)} \bigr)=\mathbf{F}^{\prime }(\boldsymbol{\zeta}) \bigl( \mathbf{I}+2\mathbf{A}_{2}\widetilde{\mathbf{e}}^{(k)}+\cdots \bigr) =\mathbf{F}^{\prime }(\boldsymbol{\zeta}) \bigl( \mathbf{I}+2\mathbf{A}_{2}^{2} \bigl(\mathbf{e}^{(k)} \bigr)^{2}+\cdots \bigr), \end{aligned}$$
(39)
$$\begin{aligned}& \bigl[ \alpha \mathbf{F}^{\prime } \bigl(\mathbf{y}^{(k)} \bigr)+(2-\alpha )\mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr) \bigr] ^{-1} \bigl[ \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)-\mathbf{F}^{\prime } \bigl(\mathbf{y}^{(k)} \bigr) \bigr] \bigl[ \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr) \bigr]^{-1}\mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr) \\& \quad =\mathbf{A}_{2} \bigl(\mathbf{e}^{(k)} \bigr)^{2}- \biggl(4\mathbf{A}_{2}^{2}-\frac{3}{2}\mathbf{A}_{3}-\alpha \mathbf{A}_{2}^{2} \biggr) \bigl(\mathbf{e}^{(k)} \bigr)^{3}+O \bigl( \bigl\Vert \mathbf{e}^{(k)} \bigr\Vert ^{4} \bigr). \end{aligned}$$
(40)

Using equations (37)–(40) in the second step of (33), we get

$$ \widehat{\mathbf{e}}^{(k)}=\mathbf{z}^{(k)}-\boldsymbol{\zeta} = \biggl( 2 \mathbf{A}_{2}^{2}+\frac{1}{2} \mathbf{A}_{3}-\alpha \mathbf{A}_{2}^{2} \biggr) \bigl(\mathbf{e}^{(k)} \bigr)^{3}+ \bigl\Vert \mathbf{O} \bigl(\mathbf{e}^{(k)} \bigr)^{4} \bigr\Vert . $$
(41)

Hence, it proves the theorem. □

5 Complex dynamical study of families of iterative methods

Here, we discuss the dynamical behavior of the iterative methods (Q1, E1–E6). We investigate the region from which the initial estimates may be taken so as to reach the roots of a nonlinear equation; that is, we numerically approximate the basins of attraction of the roots as a qualitative measure of how the iterative methods depend on the choice of initial estimate. To answer these questions on the dynamical behavior of the iterative methods, we investigate the dynamics of method Q1 and compare it with E1–E6. For more details on the dynamical behavior of iterative methods, one can consult [3, 35, 36]. For a rational map \(\Re _{f}:\mathbb{C} \longrightarrow \mathbb{C} \), where \(\mathbb{C} \) denotes the complex plane, the orbit of a point \(x_{0}\in \mathbb{C} \) is the set \(\operatorname{orb}(x_{0})=\{x_{0},\Re _{f}(x_{0}),\Re _{f}^{2}(x_{0}),\ldots,\Re _{f}^{m}(x_{0}),\ldots \}\). The orbit is said to converge to \(x^{\ast }\) if \(\lim_{k\rightarrow \infty }\Re _{f}^{k}(x_{0})=x^{\ast }\) exists. A fixed point \(x^{\ast }\in \mathbb{C} \) is called attracting if \(\vert \Re _{f}^{\prime }(x^{\ast }) \vert <1\); its basin of attraction is the set of starting points whose orbits tend to \(x^{\ast }\). From the dynamical and graphical point of view, we take a \(2000\times 2000\) grid of the square \([-2.5,2.5]^{2}\subset \mathbb{C} \). To each root of (1) we assign a color, and each starting point is colored according to the root to which the corresponding orbit of the iterative method converges; the Jet and Hot color maps are used respectively. We use \(\vert x_{i+1}-x_{i} \vert <10^{-3}\) and \(\vert f(x_{i}) \vert <10^{-3}\) as stopping criteria, and the maximum number of iterations is taken as 20. We mark a point dark blue when using the stopping criterion \(\vert x_{i+1}-x_{i} \vert <10^{-3}\) and dark black when using \(\vert f(x_{i}) \vert <10^{-3}\). Different colors are used for different roots, so the iterative methods have basins of attraction distinguished by their colors. We obtain basins of attraction for the following three test functions: \(f_{1}(x)=x^{4}-ix^{2}+1\), \(f_{2}(x)=(1+2i)x^{5}+1-2i\), and \(f_{3}(x)=x^{6}-ix^{3}+1\). The exact roots of \(f_{1}(x)\), \(f_{2}(x)\), and \(f_{3}(x)\) are given in Table 1. Brighter color means a lower number of iteration steps.
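Plots of this kind can be reproduced, up to rendering details, by a short script. The sketch below follows the procedure just described for method Q1 and \(f_{1}(x)=x^{4}-ix^{2}+1\); the reduced grid size and the coloring of non-convergent points are our own simplifications, not the authors' exact code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of the basin-of-attraction computation for method Q1 applied to
# f_1(x) = x^4 - i x^2 + 1, following the grid, tolerance and iteration cap
# described in the text.
f = lambda x: x**4 - 1j * x**2 + 1
df = lambda x: 4 * x**3 - 2j * x
roots = np.roots([1, 0, -1j, 0, 1])            # numerical roots of f_1

def q1_orbit(x, alpha=0.0, tol=1e-3, kmax=20):
    for k in range(kmax):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            return -1, k
        y = x - fx / dfx
        dfy = df(y)
        x_new = y - (dfx - dfy) / (alpha * dfy + (2 - alpha) * dfx) * (fx / dfx)
        if abs(x_new - x) < tol:
            return int(np.argmin(np.abs(roots - x_new))), k
        x = x_new
    return -1, kmax                            # non-convergent point

n = 400                                        # 2000 in the paper; smaller here
re, im = np.meshgrid(np.linspace(-2.5, 2.5, n), np.linspace(-2.5, 2.5, n))
index = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        index[i, j], _ = q1_orbit(re[i, j] + 1j * im[i, j])
plt.imshow(index, extent=[-2.5, 2.5, -2.5, 2.5], cmap="jet")
plt.show()
```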

Table 1 Exact roots of functions \(f_{1}(x)\), \(f_{2}(x)\), and \(f_{3}(x)\)

6 Numerical results

Here, some numerical examples are considered in order to demonstrate the performance of our family of third order single root finding methods (Q1), the fifth order simultaneous methods (SM1), and the third order family of iterative methods for solving systems of nonlinear equations (QQ1). We compare our family of single root finding methods (Q1) with the third order iterative methods E1–E6. The family of simultaneous methods (SM1) of order five is compared with the method of Zhang et al. [30] of the same order (abbreviated as ZPH). The iterative methods for finding roots of nonlinear systems (QQ1) are compared with EE1–EE3 respectively. All the computations are performed using CAS Maple 18 with 2500 significant digits (64-digit floating point arithmetic in the case of simultaneous methods), with the following stopping criteria:

$$\begin{aligned}& \text{(i)}\quad e_{i}^{(k)}= \bigl\vert f \bigl( x_{i}^{(k)} \bigr) \bigr\vert < \epsilon ,\qquad \text{(ii)}\quad e_{i}^{(k)}= \bigl\vert x_{i}^{ ( k ) }- \zeta \bigr\vert < \epsilon , \\& \text{(iii)} \quad \mathbf{e}^{(k)}= \bigl\Vert \mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr) \bigr\Vert < \epsilon , \qquad \text{(iv)} \quad \mathbf{e}^{(k)}= \bigl\Vert \mathbf{x}^{(k+1)}-\mathbf{x}^{(k)} \bigr\Vert < \epsilon , \end{aligned}$$

where \(e_{i}^{(k)}\) and \(\mathbf{e}^{(k)} \) represent the absolute errors. We take \(\epsilon =10^{-600}\) for the single root finding methods, \(\epsilon =10^{-30}\) for the simultaneous determination of all roots of nonlinear equation (1), and \(\epsilon =10^{-15}\) for approximating the roots of nonlinear system (31).

Numerical test examples from [32, 34, 37, 38] are provided in Tables 2–8. In Table 3 stopping criterion (i) is used, in Table 2 stopping criteria (i) and (ii) are both used, while in Tables 4–8 stopping criteria (iii) and (iv) are both used. In all tables, CO represents the convergence order, n the number of iterations, ρ the local computational order of convergence [39], and CPU the computational time in seconds. We observe that, for the same number of iterations, the numerical results of our family of iterative methods, for a single root (Q1), for the simultaneous determination of all roots (SM1), and for approximating the roots of a system of nonlinear equations (QQ1), are better than those of E1–E6, ZPH, and EE1–EE3 respectively. Figures 4(a), (b)–6(a), (b) represent the residual fall for the iterative methods (Q1, SM1, QQ1, ZPH, E1–E6, EE1–EE3). Figures 4(a) and 4(b) show the residual fall for the single root methods (Q1, E1–E6) and for the simultaneous determination of all roots (SM1, ZPH), while Figs. 5(a), (b) and 6(a), (b) show the residual fall for (QQ1, EE1–EE3). Tables 2–8 and Figs. 1–6 clearly show the dominant convergence behavior of our family of iterative methods (Q1, SM1, QQ1) over E1–E6, ZPH, and EE1–EE3.
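For completeness, the local computational order of convergence ρ reported in the tables can be estimated from successive corrections in the spirit of [39]; the helper below is an illustrative sketch, not the authors' code.

```python
import math

# Illustrative estimate of the computational order of convergence from the
# last corrections e_k = |x_{k+1} - x_k| of an iterative method:
#     rho ~ ln(e_{k+1} / e_k) / ln(e_k / e_{k-1}).
def computational_order(x0, x1, x2, x3):
    e0, e1, e2 = abs(x1 - x0), abs(x2 - x1), abs(x3 - x2)
    return math.log(e2 / e1) / math.log(e1 / e0)
```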

Figure 1
figure 1

Figure 1(a), (e), (g), (i), (k), (m), (o) shows basins of attraction of iterative methods Q1, E1–E6 for the nonlinear function \(f_{1}(x)=x^{4}-ix^{2}+1\) using \(\vert x^{(k+1)}\text{-}x^{(k)} \vert <10^{-3}\). Figure 1(b), (f), (h), (j), (l), (n), (p) shows basins of attraction of iterative methods Q1, E1–E6 using \(\vert f(x^{(k)}) \vert <10^{-3}\). Figure 1(c), (d) shows the basin of attraction for \(\alpha =-0.000001\). In Fig. 1(a)–(p), brightness of color in basins of Q1 shows a lower number of iterations for convergence of iterative methods as compared to methods E1–E6.

Figure 2
figure 2

Figure 2(a), (e), (g), (i), (k), (m), (o) shows basins of attraction of iterative methods Q1, E1–E6 for the nonlinear equation \(f_{2}(x)=(1+2i)x^{5}+1-2i\) using \(\vert x^{(k+1)}\text{-}x^{(k)} \vert <10^{-3}\). Figure 2(b), (f), (h), (j), (l), (n), (p) shows basins of attraction of iterative methods Q1, E1–E6 using \(\vert f(x^{(k)}) \vert <10^{-3}\). Figure 2(c), (d) shows the basin of attraction for \(\alpha =-0.000001\). In Fig. 2(a)–(p), brightness of color in basins of Q1 shows a lower number of iterations for convergence of iterative method as compared to methods E1–E6.

Figure 3
figure 3

Figure 3(a), (e), (g), (i), (k), (m), (o) shows basins of attraction of iterative methods Q1, E1–E6 for the nonlinear equation \(f_{3}(x)=x^{6}-ix^{3}+1\) using \(\vert x^{(k+1)}\text{-}x^{(k)} \vert <10^{-3}\). Figure 3(b), (f), (h), (j), (l), (n), (p) shows basins of attraction of iterative methods Q1, E1–E6 using \(\vert f(x^{(k)}) \vert <10^{-3}\). Figure 3(c), (d) shows the basin of attraction for \(\alpha =-0.000001\). In Fig. 3(a)–(p), brightness of color in basins of Q1 shows a lower number of iterations for convergence of iterative method as compared to methods E1–E6.

Figure 4
figure 4

Figure 4(a) shows the residual graph of the single root finding methods Q1 and E1–E6, while Fig. 4(b) shows the residual graph for the simultaneous determination of all roots of \(f_{4}(x)\) using ZPH and SM1.

Table 2 Comparison of optimal 3rd order methods
Table 3 Simultaneous finding of all roots of \(f_{4}(x)\)
Table 4 Comparison of optimal 3rd order methods
Table 5 Comparison of optimal 3rd order methods
Table 6 Comparison of optimal 3rd order methods

7 Application in engineering

In this section, we discuss applications in engineering and to a boundary value problem for a differential equation.

Example 1

(Beam design model [38]; 1-dimensional problem)

An engineer considers the problem of the embedment depth x of a sheet-pile wall, which results in the nonlinear equation

$$ f_{4}(x)=\frac{x^{3}+2.87x^{2}-10.28}{4.62}-x. $$
(42)

The roots of (42), to the digits shown, are \(\zeta _{1}=2.0021\), \(\zeta _{2}=-3.3304 \), \(\zeta _{3}=-1.5417\).
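Equivalently, clearing the denominator in (42), these three values are the real zeros of the cubic

$$ x^{3}+2.87x^{2}-4.62x-10.28=0 . $$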

The initial estimates for \(f_{4}(x)\) are taken as \(x_{1}^{(0)}=2.5\), \(x_{2}^{(0)}=-7.4641\), \(x_{3}^{(0)}=-0.5359\).

Example 2

(2-dimensional problem [32, 37])

In case of a 2-dimensional system, we consider the following systems of nonlinear equations:

$$\begin{aligned}& \mathbf{F}_{1}(\mathbf{X})=\textstyle\begin{cases} f_{1}(x_{1},x_{2})=x_{1}^{2}-10x_{1}+x_{2}^{2}+8, \\ f_{2}(x_{1},x_{2})=x_{1}x_{2}^{2}+x_{1}-10x_{2}+8,\end{cases}\displaystyle \qquad \mathbf{X}_{0} =( \mathbf{0.6},\mathbf{1.4})^{T}, \\& \mathbf{F}_{2}(\mathbf{X})=\textstyle\begin{cases} f_{1}(x_{1},x_{2})=x_{1}^{2}-2x_{1}-x_{2}+0.5, \\ f_{2}(x_{1},x_{2})=x_{1}^{2}+4x_{2}^{2}-1, \end{cases}\displaystyle \qquad \mathbf{X}_{0} =( \mathbf{1.5},\mathbf{1.0})^{T}. \end{aligned}$$
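The methods above require the Jacobian at every step; for these two test systems a direct computation gives

$$ \mathbf{F}_{1}^{\prime }(\mathbf{X})= \begin{pmatrix} 2x_{1}-10 & 2x_{2} \\ x_{2}^{2}+1 & 2x_{1}x_{2}-10 \end{pmatrix} ,\qquad \mathbf{F}_{2}^{\prime }(\mathbf{X})= \begin{pmatrix} 2x_{1}-2 & -1 \\ 2x_{1} & 8x_{2} \end{pmatrix} . $$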

Example 3

(3-dimensional problems [34])

In case of a 3-dimensional system, we consider the following system of nonlinear equations:

$$\begin{aligned}& \mathbf{F}_{3}(\mathbf{X})=\textstyle\begin{cases} f_{1}(x_{1},x_{2},x_{3})=15x_{1}+x_{2}^{2}-4x_{3}-13, \\ f_{2}(x_{1},x_{2},x_{3})=x_{1}^{2}+10x_{2}-e^{-x_{3}}-11, \\ f_{3}(x_{1},x_{2},x_{3})=x_{2}^{2}-25x_{3}+22,\end{cases}\displaystyle \qquad \mathbf{X}_{0}=(\mathbf{0.8}, \mathbf{1},\mathbf{0.8})^{T}, \\& \mathbf{F}_{4}(\mathbf{X})=\textstyle\begin{cases} f_{1}(x_{1},x_{2},x_{3})=x_{1}^{2}+x_{2}^{2}-x_{3}^{2}-1, \\ f_{2}(x_{1},x_{2},x_{3})=2x_{1}^{2}+10x_{2}^{2}-4x_{3}1, \\ f_{3}(x_{1},x_{2},x_{3})=3x_{1}^{2}-4x_{2}^{2}-x_{3}^{2},\end{cases}\displaystyle \qquad \mathbf{X}_{0}=(\mathbf{0.5}, \mathbf{0.5},\mathbf{0.5})^{T}. \end{aligned}$$

Example 4

(N-dimensional problem [34])

Consider the following system of nonlinear equations:

$$ \mathbf{F}_{5}: f_{i}=e^{x_{i}^{2}}-1,\quad i=1,2,3, \ldots,m, $$

the exact solution of this system is \(\mathbf{X}^{\ast }=[0,0,0,\ldots,0]^{T}\), and we take \(\mathbf{X}_{0}=[0.5,0.5,0.5,\ldots, 0.5]^{T}\) as initial estimates. The results for this system of nonlinear equations are reported in the corresponding table.
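Since the coordinate functions of \(\mathbf{F}_{5}\) are uncoupled, its Jacobian is diagonal,

$$ \mathbf{F}_{5}^{\prime }(\mathbf{X})=\operatorname{diag} \bigl(2x_{1}e^{x_{1}^{2}}, 2x_{2}e^{x_{2}^{2}},\ldots,2x_{m}e^{x_{m}^{2}} \bigr), $$

so every linear solve in QQ1 and EE1–EE3 reduces to componentwise division.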

Example 5

(N-dimensional problem)

Consider the following system of nonlinear equations:

$$ \mathbf{F}_{6}: f_{i}=x_{i}^{2}- \cos (x_{i}-1),\quad i=1,2,3,\ldots,m, $$

the exact solution of this system is \(\mathbf{X}^{\ast }=[1,1,1,\ldots,1]^{T}\), and we take \(\mathbf{X}_{0}=[2,2,2,\ldots,2]^{T}\) as initial estimates. The results for this system of nonlinear equations are reported in the corresponding table.
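Likewise, the Jacobian of \(\mathbf{F}_{6}\) is the diagonal matrix

$$ \mathbf{F}_{6}^{\prime }(\mathbf{X})=\operatorname{diag} \bigl(2x_{1}+\sin (x_{1}-1),\ldots,2x_{m}+\sin (x_{m}-1) \bigr). $$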

7.1 Application to differential equation

Example 6

(Nonlinear BVP)

Here, we solve a nonlinear BVP defined as

$$\begin{aligned}& y^{\prime \prime } = \frac{1}{8} \bigl(32+2x^{3}-yy^{\prime } \bigr),\quad 1\leq x \leq 3, \\& y(1) = 17;\qquad y(3)=\frac{43}{3}. \end{aligned}$$
(43)

Using the finite difference method, we solve this nonlinear BVP. Taking \(h=0.1\), we discretize the interval \([1,3]\) into 20 equal subintervals with \(N=19\) interior nodes (see Table 7); the nodes are \(x_{i}=a+ih\), \(i=0,1,\ldots,20\), with \(a=1\).

Table 7 Domain discretization for BVP

We use the central difference formulas for \(y^{\prime \prime }(x_{i})\) and \(y^{\prime }(x_{i})\), as derived in Burden and Faires [40]:

$$\begin{aligned}& y^{\prime \prime }(x_{i}) = \frac{1}{h^{2}} \bigl(y(x_{i+1})-2y(x_{i})+y(x_{i-1}) \bigr)- \frac{h^{2}}{12}y^{(iv)}(\xi ) \quad \text{for some }\xi \in (x_{i-1},x_{i+1}), \end{aligned}$$
(44)
$$\begin{aligned}& y^{\prime }(x_{i}) = \frac{1}{2h} \bigl(y(x_{i+1})-y(x_{i-1}) \bigr)- \frac{h^{2}}{6}y^{(iii)}(\eta )\quad \text{for some }\eta \in (x_{i-1},x_{i+1}). \end{aligned}$$
(45)

Substituting the values of \(y^{\prime \prime }(x_{i})\) and \(y^{\prime }(x_{i})\) into (43) and dropping the truncation terms, we obtain the following tridiagonal system of nonlinear equations:

$$ \mathbf{F}_{7}(\mathbf{X})= \bigl(f_{1}(\mathbf{X}),\ldots,f_{19}(\mathbf{X}) \bigr)^{T}=\mathbf{0},\qquad f_{i}=-x_{i-1}+2x_{i}-x_{i+1}+0.01 \biggl( 4+0.25(1+0.1i)^{3}+\frac{x_{i}(x_{i+1}-x_{i-1})}{1.6} \biggr) ,\quad i=1,2,\ldots,19, $$
(46)

where \(x_{0}=17\) and \(x_{20}=14.333333\). We take

$$ \mathbf{X}_{0}= [ 16.86666667, 16.73333333, 16.6, 16.46666667, 16.33333333, 16.2, 16.06666667, 15.93333333, 15.8, 15.66666667, 15.53333333, 15.4, 15.26666667, 15.13333333, 15, 14.86666667, 14.73333333, 14.6, 14.46666667 ] ^{T} $$

as initial estimates.

The solution of the boundary value problem for the nonlinear ordinary differential equation is

$$ \mathbf{X}= [ 17.0000, 16.7605, 16.5134, 16.2589, 15.9974, 15.7298, 15.4577, 15.1829, 14.9083, 14.6375, 14.3750, 14.1266, 13.8993, 13.7018, 13.5443, 13.4391, 13.4010, 13.4475, 13.5999, 13.8843 ] ^{T}. $$

The residual errors for this nonlinear system are shown in Table 8.
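A hedged Python sketch of the finite-difference treatment of (43) is given below; it builds the residual directly from the central differences (44)–(45) with h = 0.1 rather than from the hard-coded coefficients of (46), so minor sign and rounding conventions may differ, and all helper names are our own. It uses a plain Newton iteration for brevity; QQ1 could be used instead. For reference, \(y(x)=x^{2}+16/x\) satisfies (43) and can be used to check the computed values.

```python
import numpy as np

# Finite-difference residual for y'' = (32 + 2x^3 - y*y')/8, y(1)=17, y(3)=43/3,
# assembled directly from the central differences (44)-(45) with h = 0.1.
h, a, ya, yb = 0.1, 1.0, 17.0, 43.0 / 3.0
nodes = a + h * np.arange(1, 20)                  # 19 interior nodes 1.1, ..., 2.9

def residual(w):
    wfull = np.concatenate(([ya], w, [yb]))       # attach boundary values
    r = np.empty_like(w)
    for i in range(1, 20):
        ypp = (wfull[i + 1] - 2 * wfull[i] + wfull[i - 1]) / h**2
        yp = (wfull[i + 1] - wfull[i - 1]) / (2 * h)
        r[i - 1] = ypp - (32 + 2 * nodes[i - 1]**3 - wfull[i] * yp) / 8.0
    return r

def jacobian(w, eps=1e-8):                        # simple forward-difference Jacobian
    n, r0 = len(w), residual(w)
    J = np.empty((n, n))
    for j in range(n):
        wp = w.copy(); wp[j] += eps
        J[:, j] = (residual(wp) - r0) / eps
    return J

w = np.linspace(ya, yb, 21)[1:-1]                 # linear initial estimates
for _ in range(10):                               # Newton iteration (QQ1 could replace this)
    w -= np.linalg.solve(jacobian(w), residual(w))
exact = nodes**2 + 16.0 / nodes                   # closed-form solution of (43)
print(np.max(np.abs(w - exact)))                  # discretization error, O(h^2)
```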

Table 8 Residual errors of different iterations of QQ1 for solving \(\mathbf{F}_{7}(\mathbf{X})\)

8 Conclusion

We have developed a family of third order methods for finding a single root of a nonlinear equation, its extension to systems of nonlinear equations, and a family of simultaneous methods of order five. From Tables 2–8 and Figs. 1–6, we observe that our methods (Q1, SM1, and QQ1) are superior in terms of efficiency, stability, CPU time, and residual error as compared to the methods E1–E6, ZPH, and EE1–EE3 respectively.

Figure 5
figure 5

Figure 5(a)–(b) shows a residual graph of iterative methods QQ1, EE1–EE3 for solving \(\mathbf{F}_{1}(\mathbf{X})\) and \(\mathbf{F}_{2}(\mathbf{X})\) respectively.

Figure 6
figure 6

Figure 6(a)–(b) shows a residual graph of iterative methods QQ1, EE1–EE3 for solving \(\mathbf{F}_{3}(\mathbf{X})\) and \(\mathbf{F}_{4}(\mathbf{X})\) respectively.

Availability of data and materials

Not applicable.

References

  1. Chicharro, F., Cordero, A., Torregrosa, J.R.: Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, Article ID 780153 (2013)


  2. Kung, H.T., Traub, J.F.: Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 21, 643–651 (1974)


  3. Babajee, D.K.R., Cordero, A., Soleymani, F., Torregrosa, J.R.: On improved three-step schemes with high efficiency index and their dynamics. Numer. Algorithms 65, 153–169 (2014)


  4. Singh, A., Jaiswal, J.P.: Several new third-order and fourth-order iterative methods for solving nonlinear equations. Int. J. Eng. Math. 2014, Article ID 828409 (2014)


  5. Amat, S., Busquier, S., Gutiérrez, J.M.: Third-order iterative methods with applications to Hammerstein equations: a unified approach. J. Comput. Appl. Math. 235(9), 2936–2943 (2011)


  6. Dehghan, M., Hajarian, M.: Some derivative free quadratic and cubic convergence iterative formulas for solving nonlinear equations. Comput. Appl. Math. 29, 19–30 (2010)


  7. Agarwal, P., Filali, D., Akram, M., Dilshad, M.: Convergence analysis of a three-step iterative algorithm for generalized set-valued mixed-ordered variational inclusion problem. Symmetry 13(3), 444 (2021)


  8. Sunarto, A., Agarwal, P., Sulaiman, J., Chew, J.V.L., Aruchunan, E.: Iterative method for solving one-dimensional fractional mathematical physics model via quarter-sweep and PAOR. Adv. Differ. Equ. 2021(1), 147 (2021)


  9. Attary, M., Agarwal, P.: On developing an optimal Jarratt-like class for solving nonlinear equations. Ital. J. Pure Appl. Math. 43, 523–530 (2020)


  10. Kumar, S., Kumar, D., Sharma, J.R., Cesarano, C., Agarwal, P., Chu, Y.M.: An optimal fourth order derivative-free numerical algorithm for multiple roots. Symmetry 12(6), 1038 (2020)


  11. Khan, M.S., Berzig, M., Samet, B.: Some convergence results for iterative sequences of Prešić type and applications. Adv. Differ. Equ. 2012, 38 (2012). https://doi.org/10.1186/1687-1847-2012-38


  12. Li, W., Pang, Y.: Application of Adomian decomposition method to nonlinear systems. Adv. Differ. Equ. 2020, 67 (2020). https://doi.org/10.1186/s13662-020-2529-y


  13. Cosnard, M., Fraigniaud, P.: Finding the roots of a polynomial on an MIMD multicomputer. Parallel Comput. 15(1–3), 75–85 (1990)


  14. Kanno, S., Kjurkchiev, N., Yamamoto, T.: On some methods for the simultaneous determination of polynomial zeros. Jpn. J. Ind. Appl. Math. 13, 267–288 (1995)


  15. Aberth, O.: Iteration methods for finding all zeros of a polynomial simultaneously. Math. Comput. 27, 339–344 (1973)


  16. Proinov, P.D., Cholakov, S.I.: Semilocal convergence of Chebyshev-like root-finding method for simultaneous approximation of polynomial zeros. Appl. Math. Comput. 236, 669–682 (2014)


  17. Proinov, P.D.: General convergence theorems for iterative processes and applications to the Weierstrass root-finding method. J. Complex. 33, 118–144 (2016)


  18. Cholakov, S.I.: Local and semilocal convergence of Wang-Zheng’s method for simultaneous finding polynomial zeros. Symmetry 2019, 736 (2019)


  19. Mir, N.A., Muneer, R., Jabeen, I.: Some families of two-step simultaneous methods for determining zeros of non-linear equations. ISRN Appl. Math. 2011, Article ID 817174 (2011)


  20. Nourein, A.W.: An improvement on two iteration methods for simultaneously determination of the zeros of a polynomial. Int. J. Comput. Math. 6, 241–252 (1977)


  21. Proinov, P.D., Vasileva, M.T.: On the convergence of higher-order Ehrlich-type iterative methods for approximating all zeros of polynomial simultaneously. J. Inequal. Appl. 2015, 336 (2015)


  22. Ehrlich, L.W.: A modified Newton method for polynomials. Commun. ACM 10(2), 107–108 (1967)


  23. Nedzhibov, G.H.: Iterative methods for simultaneous computing arbitrary number of multiple zeros of nonlinear equations. Int. J. Comput. Math. 90(5), 994–1007 (2013)


  24. Farmer, M.R.: Computing the zeros of polynomials using the divide and conquer approach. Ph.D Thesis, Department of Computer Science and Information Systems, Birkbeck, University of London (2014)

  25. Proinov, P.D., Vasileva, M.T.: On the convergence of high-order Gargantini–Farmer–Loizou type iterative methods for simultaneous approximation of polynomial zeros. Appl. Math. Comput. 361, 202–214 (2019)


  26. Heun, K.: Neue Methode zur approximativen Integration der Differentialgleichungen einer unabhängigen Variablen. Z. Angew. Math. Phys. 45, 23–38 (1900)


  27. Chun, C., Kim, Y.-I.: Several new third-order iterative methods for solving nonlinear equations. Acta Appl. Math. 109, 1053–1063 (2010)


  28. Kou, J., Li, Y., Wang, X.: A modification of Newton method with third-order convergence. Appl. Math. Comput. 181, 1106–1111 (2006)


  29. Proinov, P.D.: On the local convergence of Ehrlich method for numerical computation of polynomial zeros. Calcolo 53(3), 413–426 (2016)


  30. Zhang, X., Peng, H., Hu, G.: A high order iteration formula for the simultaneous inclusion of polynomial zeros. Appl. Math. Comput. 179, 545–552 (2006)


  31. Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)


  32. Darvishi, M.T., Barati, A.: A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 187, 630–635 (2007)


  33. Cordero, A., Torregrosa, J.R.: Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 183, 199–208 (2006)


  34. Khirallah, M.Q., Hafiz, M.A.: Solving system of nonlinear equations using family of Jarratt methods. Int. J. Differ. Equ. Appl. 12(2), 69–83 (2013)


  35. Chicharro, F.I., Cordero, A., Garrido, N., Torregrosa, J.R.: Generating root-finder iterative methods of second order: convergence and stability. Axioms 8, 55 (2019)


  36. Cordero, A., García-Maimó, J., Torregrosa, J.R., Vassileva, M.P., Vindel, P.: Chaos in King’s iterative family. Appl. Math. Lett. 26, 842–848 (2013)


  37. Noor, M.A., Waseem, M.: Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 57, 101–106 (2009)


  38. Griffiths, D.V., Smith, I.M.: Numerical Methods for Engineers, 2nd edn. Chapman & Hall, London, Special Indian Edition (2011)


  39. Grau-Sánchez, M., Noguera, M., Grau, A., Herrero, J.R.: On new computational local orders of convergence. Appl. Math. Lett. 25(12), 2023–2030 (2012)


  40. Burden, R.L., Faires, J.D.: Numerical Analysis. Boundary-Value Problems for Ordinary Differential Equations, pp. 641–685. Thomson Brooks/Cole, Belmount (2005)



Funding

The authors declare that there is no funding available for this paper.

Author information



Contributions

All authors read and approved the final manuscript.

Corresponding author

Correspondence to Choonkil Park.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Shams, M., Rafiq, N., Kausar, N. et al. On iterative techniques for estimating all roots of nonlinear equation and its system with application in differential equation. Adv Differ Equ 2021, 480 (2021). https://doi.org/10.1186/s13662-021-03636-x


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13662-021-03636-x

Keywords