
A comparative study on the convergence rate of some iteration methods involving contractive mappings

Abstract

We compare the rates of convergence of some iteration methods for contractions. We conclude that the coefficients involved in these methods play an important role in determining the speed of convergence. Using MATLAB, we provide numerical examples to illustrate the results. We also compare the mathematical and computational points of view in the examples to explain why a difference between the two perspectives can arise.

1 Introduction

Iteration schemes for the numerical reckoning of fixed points of various classes of nonlinear operators are available in the literature. Iteration methods for the class of contractive mappings have been studied extensively in this regard. In 1952, Plunkett published a paper on the rate of convergence for relaxation methods [1]. In 1953, Bowden presented a talk in a symposium on digital computing machines entitled ‘Faster than thought’ [2]. Later, this basic idea was used in engineering, statistics, numerical analysis, approximation theory, and physics for many years (see, for example, [3–9] and [10]). In 1991, Argyros published a paper about iterations converging faster than Newton’s method to the solutions of nonlinear equations in Banach spaces [11, 12]. In 1997, Lucet presented a method faster than the fast Legendre transform [13]. In 2004, Berinde used the notion of rate of convergence for iteration methods and showed that the Picard iteration converges faster than the Mann iteration for a class of quasi-contractive operators [14]. Later, he provided further results in this area [15, 16]. In 2006, Babu and Vara Prasad showed that the Mann iteration converges faster than the Ishikawa iteration for the class of Zamfirescu operators [17]. In 2007, Popescu showed that the Picard iteration converges faster than the Mann iteration for the class of quasi-contractive operators [18]. Recently, several papers have been published introducing new iterations and comparing the rates of convergence of some iteration methods (see, for example, [19–22] and [23]).

In this paper, we compare the rates of convergence of some iteration methods for contractions and show that the coefficients involved in such methods play an important role in determining the rate of convergence. During the preparation of this work, we found that the effect of the coefficients had already been considered in [24] and [25]. However, we obtained our results independently, before reading those works, as a comparison of the results shows.

2 Preliminaries

As we know, the Picard iteration has been extensively used in many works from different points of view. Let \((X, d)\) be a metric space, \(x_{0}\in X\), and \(T\colon X\to X\) a selfmap. The Picard iteration is defined by

$$ x_{n+1}=Tx_{n} $$

for all \(n\geq0\). Let \(\{\alpha_{n}\}_{n\geq0}\), \(\{\beta_{n}\}_{n\geq 0}\), and \(\{\gamma_{n}\}_{n\geq0}\) be sequences in \([0, 1]\). Then the Mann iteration method is defined by

$$ x_{n+1}=\alpha_{n} x_{n} + (1- \alpha_{n}) Tx_{n} $$
(2.1)

for all \(n\geq0\) (for more information, see [26]). Also, the Ishikawa iteration method is defined by

$$ \begin{aligned} &x_{n+1} =(1- \alpha_{n})x_{n}+\alpha_{n}Ty_{n}, \\ &y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tx_{n} \end{aligned} $$
(2.2)

for all \(n\geq0\) (for more information, see [27]). The Noor iteration method is defined by

$$\begin{aligned}& x_{n+1} =(1-\alpha_{n})x_{n} + \alpha_{n} Ty_{n}, \\& y_{n} =(1-\beta_{n})x_{n} + \beta_{n} Tz_{n}, \\& z_{n} =(1-\gamma_{n})x_{n} +\gamma_{n} Tx_{n} \end{aligned}$$
(2.3)

for all \(n\geq0\) (for more information, see [28]). In 2007, Agarwal et al. defined their new iteration method by

$$ \begin{aligned} &x_{n+1} = (1 - \alpha_{n})Tx_{n} + \alpha_{n}Ty_{n}, \\ &y_{n} = (1 - \beta_{n})x_{n} + \beta_{n}Tx_{n} \end{aligned} $$
(2.4)

for all \(n\geq0\) (for more information, see [29]). In 2014, Abbas et al. defined their new iteration method by

$$\begin{aligned}& x_{n+1} =(1 - \alpha_{n})Ty_{n} + \alpha_{n}Tz_{n} , \\& y_{n} = (1-\beta_{n})Tx_{n} +\beta_{n}Tz_{n}, \\& z_{n} = (1-\gamma_{n})x_{n} + \gamma_{n}Tx_{n} \end{aligned}$$
(2.5)

for all \(n\geq0\) (for more information, see [30]). In 2014, Thakur et al. defined their new iteration method by

$$\begin{aligned}& x_{n+1} =(1 - \alpha_{n})Tx_{n} + \alpha_{n}Ty_{n}, \\& y_{n} = (1-\beta_{n})z_{n} +\beta_{n}Tz_{n}, \\& z_{n} = (1-\gamma_{n})x_{n} + \gamma_{n}Tx_{n} \end{aligned}$$
(2.6)

for all \(n\geq0\) (for more information, see [23]). Also, the Picard S-iteration was defined by

$$\begin{aligned}& x_{n+1} = Ty_{n}, \\& y_{n} = (1-\beta_{n})Tx_{n} +\beta_{n}Tz_{n}, \\& z_{n} = (1-\gamma_{n})x_{n} + \gamma_{n}Tx_{n} \end{aligned}$$
(2.7)

for all \(n\geq0\) (for more information, see [20] and [22]).
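The schemes above are easy to sketch numerically. The following is a minimal Python sketch (the paper's own examples use MATLAB); the contraction \(T(x) = x/2 + 1\) with fixed point \(p = 2\) and constant \(k = \frac{1}{2}\), the starting point 0, the constant coefficients 0.8 standing in for the sequences \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), \(\{\gamma_{n}\}\), and the function names are all illustrative assumptions, not from the paper.

```python
def T(x):
    return x / 2 + 1  # |T(x) - T(y)| = |x - y|/2, so T is a contraction with k = 1/2


def picard(x, n):
    for _ in range(n):
        x = T(x)
    return x


def mann(x, n, a=0.8):
    # (2.1): x_{n+1} = a*x_n + (1 - a)*T(x_n)
    for _ in range(n):
        x = a * x + (1 - a) * T(x)
    return x


def ishikawa(x, n, a=0.8, b=0.8):
    # (2.2)
    for _ in range(n):
        y = (1 - b) * x + b * T(x)
        x = (1 - a) * x + a * T(y)
    return x


def noor(x, n, a=0.8, b=0.8, g=0.8):
    # (2.3)
    for _ in range(n):
        z = (1 - g) * x + g * T(x)
        y = (1 - b) * x + b * T(z)
        x = (1 - a) * x + a * T(y)
    return x


def picard_s(x, n, b=0.8, g=0.8):
    # (2.7)
    for _ in range(n):
        z = (1 - g) * x + g * T(x)
        y = (1 - b) * T(x) + b * T(z)
        x = T(y)
    return x


# every scheme converges to the fixed point p = 2
for method in (picard, mann, ishikawa, noor, picard_s):
    assert abs(method(0.0, 300) - 2.0) < 1e-6
```

Each step of every scheme multiplies the error \(|x_{n} - p|\) by a constant factor strictly less than 1, which is exactly the mechanism the comparisons in the next section quantify.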

3 Self-comparing of iteration methods

Now, we are ready to provide our main results for contractive maps. In this respect, we assume that \((X, \|\cdot\|)\) is a normed space, \(x_{0}\in X\), \(T\colon X\to X\) is a selfmap and \(\{\alpha_{n}\}_{n\geq 0}\), \(\{\beta_{n}\}_{n\geq0}\) and \(\{\gamma_{n}\}_{n\geq0}\) are sequences in \((0, 1)\).

The Mann iteration is given by \(x_{n+1}= (1-\alpha_{n})x_{n}+\alpha_{n} Tx_{n}\) for all \(n\geq0\).

Note that we can rewrite it as \(x_{n+1}= \alpha_{n} x_{n}+(1-\alpha_{n}) Tx_{n}\) for all \(n\geq0\).

We call these cases the first and second forms of the Mann iteration method.

In the next result we show that choosing a type of sequence \(\{\alpha_{n}\}_{n\geq0}\) in the Mann iteration has a notable role to play in the rate of convergence of the sequence \(\{x_{n}\}_{n\geq0}\).

Let \(\{u_{n}\}_{n\geq0}\) and \(\{v_{n}\}_{n\geq0}\) be two fixed point iteration procedures that converge to the same fixed point p, and suppose that \(\|u_{n}-p\|\leq a_{n}\) and \(\|v_{n}-p\|\leq b_{n}\) for all \(n\geq0\). If the real sequences \(\{a_{n}\}_{n\geq0}\) and \(\{b_{n}\}_{n\geq0}\) converge to a and b, respectively, and \(\lim_{n\to\infty}\frac{|a_{n}-a|}{|b_{n}-b|}=0\), then we say that \(\{u_{n}\}_{n\geq0}\) converges faster than \(\{v_{n}\}_{n\geq0}\) to p (see [14] and [23]).

Proposition 3.1

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. Consider the first form of the Mann iteration. If the coefficients of \(Tx_{n}\) are greater than the coefficients of \(x_{n}\), that is, \(1-\alpha_{n} < \alpha_{n}\) for all \(n\geq0\), or equivalently \(\{\alpha_{n}\}_{n\geq0}\) is a sequence in \((\frac{1}{2}, 1)\), then this Mann iteration converges faster than the Mann iteration in which the coefficients of \(x_{n}\) are greater than the coefficients of \(Tx_{n}\).

Proof

Let \(\{x_{n}\}\) be the sequence of the Mann iteration in which the coefficients of \(Tx_{n}\) are greater than the coefficients of \(x_{n}\), that is,

$$ x_{n+1}=(1-\alpha_{n}) x_{n} + \alpha_{n} Tx_{n} $$
(3.1)

for all n. In this case, we have

$$\begin{aligned} \begin{aligned} \Vert x_{n+1}-p\Vert &=\bigl\Vert (1-\alpha_{n})x_{n}+ \alpha_{n} Tx_{n}-p\bigr\Vert \leq(1-\alpha_{n}) \Vert x_{n}-p\Vert +\alpha_{n}\Vert Tx_{n}-p \Vert \\ &\leq \bigl(1-\alpha_{n}(1-k) \bigr) \Vert x_{n}-p\Vert \end{aligned} \end{aligned}$$

for all n. Since \(\alpha_{n} \in(\frac{1}{2}, 1)\), \(1-\alpha_{n}(1-k) < 1-\frac{1}{2}(1-k)\). Put \(a_{n} = (1-\frac{1}{2}(1-k) )^{n} \Vert x_{1}-p\Vert \) for all n. Now, let \(\{x_{n}\}\) be the sequence of the Mann iteration in which the coefficients of \(x_{n}\) are greater than the coefficients of \(Tx_{n}\). In this case, we have

$$\begin{aligned} \Vert x_{n+1}-p\Vert =&\bigl\Vert \alpha_{n} x_{n}+(1- \alpha _{n})Tx_{n}-p\bigr\Vert \leq \alpha_{n} \Vert x_{n}-p\Vert +(1-\alpha_{n}) \Vert Tx_{n}-p\Vert \\ \leq& \bigl(1-(1-\alpha_{n}) (1-k) \bigr) \Vert x_{n}-p \Vert \end{aligned}$$

for all n. Since \(1-\alpha_{n} < \alpha_{n}\) for all \(n\geq0\), we get \(1-(1-\alpha_{n})(1-k) < 1\) for all \(n\geq0\). Put \(b_{n}=\Vert x_{1}-p\Vert \) for all n. Note that \(\lim\frac{a_{n}}{b_{n}}=\lim\frac{ (1-\frac{1}{2}(1-k) )^{n} \Vert x_{1}-p\Vert }{ \Vert x_{1}-p\Vert }=0\). This completes the proof. □

Note that we can require \(1-\alpha_{n} < \alpha_{n}\) only for n large enough instead of for all \(n\geq0\). Similar relaxations apply to the conditions used in our later results.
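Proposition 3.1 can be checked numerically. The following is a minimal Python sketch (the paper uses MATLAB); the contraction \(T(x) = x/2 + 1\) with \(k = \frac{1}{2}\) and \(p = 2\), the constant choice \(\alpha_{n} = 0.8 \in(\frac{1}{2}, 1)\), and the function names are illustrative assumptions.

```python
def T(x):
    return x / 2 + 1  # contraction with constant k = 1/2, fixed point p = 2


def mann_first(x, n, a=0.8):
    # first form (3.1): the larger coefficient a sits on T(x_n)
    for _ in range(n):
        x = (1 - a) * x + a * T(x)
    return x


def mann_second(x, n, a=0.8):
    # second form: the larger coefficient a sits on x_n
    for _ in range(n):
        x = a * x + (1 - a) * T(x)
    return x


p, n = 2.0, 50
err_first = abs(mann_first(0.0, n) - p)    # per-step factor 1 - 0.8*(1 - k) = 0.6
err_second = abs(mann_second(0.0, n) - p)  # per-step factor 1 - 0.2*(1 - k) = 0.9
assert err_first < err_second  # the first form converges faster, as Proposition 3.1 predicts
```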

As we know, the Ishikawa iteration method can be written in four ways. In the next result, we label each case with its own equation number. Similar to the last result, we compare the Ishikawa iteration method with itself in the four possible cases. Again, we show that the coefficient sequences \(\{\alpha_{n}\}_{n\geq 0}\) and \(\{\beta_{n}\}_{n\geq0}\) play effective roles in the rate of convergence of the sequence \(\{x_{n}\}_{n\geq0}\) in the Ishikawa iteration method.

Proposition 3.2

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C\) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. Consider the following cases of the Ishikawa iteration method:

$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} =(1 - \alpha_{n})x_{n} + \alpha_{n}Ty_{n}, \\ y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.2)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} =\alpha_{n} x_{n} + (1-\alpha_{n})Ty_{n} , \\ y_{n} = \beta_{n} x_{n} +(1-\beta_{n})Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.3)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} = \alpha_{n}x_{n} + (1-\alpha_{n})Ty_{n} , \\ y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.4)

and

$$ \left \{ \textstyle\begin{array}{l} x_{n+1} =(1 - \alpha_{n})x_{n} + \alpha_{n}Ty_{n} , \\ y_{n} = \beta_{n} x_{n} +(1-\beta_{n})Tx_{n} \end{array}\displaystyle \right . $$
(3.5)

for all \(n\geq0\). If \(1-\alpha_{n} < \alpha_{n}\) and \(1-\beta_{n} < \beta_{n}\) for all \(n\geq0\), then the case (3.2) converges faster than the others. In fact, the Ishikawa iteration method is fastest whenever the coefficients of \(Ty_{n}\) and \(Tx_{n}\) are simultaneously greater than the corresponding coefficients of \(x_{n}\) for all \(n\geq0\).

Proof

Let \(\{x_{n}\}_{n\geq0}\) be the sequence in the case (3.2). Then we have

$$\begin{aligned} \Vert y_{n}-p\Vert &= \bigl\Vert (1-\beta_{n}) x_{n} + \beta_{n}Tx_{n}-p\bigr\Vert \\ &\leq(1 -\beta_{n})\Vert x_{n}-p\Vert + \beta_{n}\Vert Tx_{n}-p\Vert \\ & \leq \bigl((1-\beta_{n}) + \beta_{n} k \bigr) \Vert x_{n} -p \Vert \end{aligned}$$

and

$$\begin{aligned} \Vert x_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n}) x_{n} + \alpha_{n}Ty_{n}-p\bigr\Vert \\ &\leq(1-\alpha_{n})\Vert x_{n}-p\Vert + \alpha_{n} \Vert Ty_{n}-p\Vert \\ &\leq(1-\alpha_{n})\Vert x_{n}-p\Vert + k \alpha_{n} \Vert y_{n}-p\Vert \\ & \leq \bigl(1-\alpha_{n} + k\alpha_{n}\bigl[(1- \beta_{n}) +\beta_{n} k\bigr] \bigr) \Vert x_{n} -p\Vert \\ & \leq \bigl(1-\alpha_{n} + \alpha_{n} k- \alpha_{n}\beta_{n} k +\alpha _{n} \beta_{n} k^{2} \bigr) \Vert x_{n} -p\Vert \\ & \leq \bigl(1 - \alpha_{n}(1-k)-\alpha_{n} \beta_{n} k (1 - k) \bigr) \Vert x_{n} -p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n} \in(\frac{1}{2}, 1)\), \(1-\alpha_{n}(1-k)-\alpha_{n}\beta_{n} k (1-k)< 1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k)\) for all \(n\geq0\). Put \(a_{n}= (1-\frac {1}{2}(1-k)-\frac{1}{4}k(1-k) ) ^{n} \Vert x_{1}-p\Vert \) for all \(n\geq0\). If \(\{x_{n}\}_{n\geq0}\) is the sequence in the case (3.3), then we get

$$\begin{aligned} \Vert y_{n}-p\Vert &= \bigl\Vert \beta_{n} x_{n} + (1-\beta _{n})Tx_{n}-p\bigr\Vert \\ &\leq\beta_{n}\Vert x_{n}-p\Vert + (1- \beta_{n})\Vert Tx_{n}-p\Vert \\ & \leq \bigl(1-(1-\beta_{n}) (1-k) \bigr) \Vert x_{n} -p \Vert \end{aligned}$$

and

$$\begin{aligned} \Vert x_{n+1}-p\Vert &= \bigl\Vert \alpha_{n} x_{n} + (1-\alpha_{n})Ty_{n}-p\bigr\Vert \\ &\leq\alpha_{n}\Vert x_{n}-p\Vert + (1- \alpha_{n}) \Vert Ty_{n}-p\Vert \\ &\leq\alpha_{n}\Vert x_{n}-p\Vert + k (1- \alpha_{n}) \Vert y_{n}-p\Vert \\ &\leq \bigl(\alpha_{n}+k(1-\alpha_{n}) \bigl(1-(1- \beta_{n}) (1-k)\bigr)\bigr) \Vert x_{n}-p\Vert \\ &= \bigl(\alpha_{n}+(1-\alpha_{n})k-k(1- \alpha_{n}) (1-\beta_{n}) (1-k) \bigr) \Vert x_{n}-p\Vert \\ &= \bigl(1-(1-\alpha_{n}) (1-k)-(1-\alpha_{n}) (1- \beta_{n})k(1-k) \bigr) \Vert x_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n} \in(\frac{1}{2}, 1)\), \(1-(1-\alpha_{n}) (1-k)-(1-\alpha_{n}) (1-\beta_{n})k(1-k)< 1\) for all \(n\geq0\). Put \(b_{n} =\Vert x_{1}-p\Vert \) for all \(n\geq0\). Since

$$1-\frac{1}{2}(1-k)-\frac{1}{4}k(1-k) < 1, $$

we get \(\lim\frac{a_{n}}{b_{n}}=\lim\frac{ (1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k) )^{n} \Vert x_{1}-p\Vert }{\Vert x_{1}-p\Vert }=0\) and so the iteration (3.2) converges faster than the case (3.3). Now, let \(\{x_{n}\}_{n\geq0}\) be the sequence in the case (3.4). Then

$$\begin{aligned} \Vert y_{n}-p\Vert &= \bigl\Vert (1-\beta_{n}) x_{n} +\beta_{n}Tx_{n}-p\bigr\Vert \\ &\leq(1-\beta_{n})\Vert x_{n}-p\Vert + \beta_{n} \Vert Tx_{n}-p\Vert \\ &\leq \bigl(1-\beta_{n}(1-k) \bigr) \Vert x_{n} -p\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert x_{n+1}-p\Vert &= \bigl\Vert \alpha_{n} x_{n} + (1-\alpha_{n})Ty_{n}-p\bigr\Vert \\ &\leq\alpha_{n}\Vert x_{n}-p\Vert + (1-\alpha_{n}) \Vert Ty_{n}-p\Vert \\ &\leq\alpha_{n}\Vert x_{n}-p\Vert + k(1-\alpha_{n}) \Vert y_{n}-p\Vert \\ &\leq \bigl(\alpha_{n} + k(1-\alpha_{n})\bigl[1-\beta_{n}(1-k)\bigr] \bigr) \Vert x_{n} -p\Vert \\ &= \bigl(1-(1-\alpha_{n}) (1-k)-(1-\alpha_{n})\beta_{n}k(1-k) \bigr) \Vert x_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n}\in(\frac{1}{2}, 1)\) for all n, \(0<1-\alpha_{n}<\frac{1}{2}\) and so

$$1-(1-\alpha_{n}) (1-k)-(1-\alpha_{n})\beta_{n}k(1-k)< 1 $$

for all \(n\geq0\). Put \(c_{n} = \Vert x_{1}-p\Vert \) for all \(n\geq0\). Thus, we obtain

$$ \lim\frac{a_{n}}{c_{n}}=\lim\frac{ (1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k) )^{n} \Vert x_{1} -p\Vert }{ \Vert x_{1} -p\Vert }=0 $$

and so the iteration (3.2) converges faster than the case (3.4). Now, let \(\{x_{n}\}_{n\geq0}\) be the sequence in the case (3.5). Then we have

$$\begin{aligned} \Vert y_{n}-p\Vert &= \bigl\Vert \beta_{n} x_{n} +(1-\beta_{n})Tx_{n}-p\bigr\Vert \\ &\leq\beta_{n}\Vert x_{n}-p\Vert + (1-\beta_{n})\Vert Tx_{n}-p\Vert \\ &\leq \bigl(1-(1-\beta_{n}) (1-k) \bigr) \Vert x_{n} -p\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert x_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n}) x_{n} + \alpha_{n}Ty_{n}-p\bigr\Vert \\ &\leq(1-\alpha_{n})\Vert x_{n}-p\Vert + \alpha_{n}\Vert Ty_{n}-p\Vert \\ &\leq(1-\alpha_{n})\Vert x_{n}-p\Vert + k\alpha_{n}\Vert y_{n}-p\Vert \\ &\leq \bigl(1-\alpha_{n} + k\alpha_{n}\bigl[1-(1-\beta_{n}) (1-k)\bigr] \bigr) \Vert x_{n} -p\Vert \\ &= \bigl(1-\alpha_{n}(1-k)-\alpha_{n}(1-\beta_{n})k(1-k) \bigr) \Vert x_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n}\in(\frac{1}{2}, 1)\) for all n, \(-\alpha_{n}(1-k)< -\frac{1}{2}(1-k)\) and \(-\alpha_{n}(1-\beta_{n})k(1-k)<0\) and so

$$1-\alpha_{n}(1-k)-\alpha_{n}(1-\beta_{n})k(1-k)< 1- \frac{1}{2}(1-k) $$

for all \(n\geq0\). Put \(d_{n} = (1-\frac{1}{2}(1-k) )^{n} \Vert x_{1}-p \Vert \) for all \(n\geq0\). Then we have

$$ \lim\frac{a_{n}}{d_{n}}=\lim\frac{ (1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k) )^{n} \Vert x_{1}-p\Vert }{ (1-\frac {1}{2}(1-k) )^{n} \Vert x_{1}-p\Vert }=0 $$

and so the iteration (3.2) converges faster than the case (3.5). □

By using a similar condition, one can show that the iteration (3.5) is faster than the case (3.3).
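Proposition 3.2 can also be illustrated numerically. The following is a minimal Python sketch (the paper uses MATLAB) under the same toy assumptions: the contraction \(T(x) = x/2 + 1\) with \(k = \frac{1}{2}\) and \(p = 2\), and constant coefficients \(\alpha_{n} = \beta_{n} = 0.8\); the function names are illustrative.

```python
def T(x):
    return x / 2 + 1  # contraction with constant k = 1/2, fixed point p = 2


def ishikawa_32(x, n, a=0.8, b=0.8):
    # case (3.2): larger coefficients on Ty_n and Tx_n
    for _ in range(n):
        y = (1 - b) * x + b * T(x)
        x = (1 - a) * x + a * T(y)
    return x


def ishikawa_33(x, n, a=0.8, b=0.8):
    # case (3.3): larger coefficients on x_n in both lines
    for _ in range(n):
        y = b * x + (1 - b) * T(x)
        x = a * x + (1 - a) * T(y)
    return x


p, n = 2.0, 40
# per-step error factors: 0.44 for case (3.2) versus 0.89 for case (3.3)
assert abs(ishikawa_32(0.0, n) - p) < abs(ishikawa_33(0.0, n) - p)
```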

Now consider the eight cases of writing the Noor iteration method. Under a similar condition, we show that the coefficient sequences \(\{\alpha_{n}\}_{n\geq0}\), \(\{\beta_{n}\}_{n\geq0}\), and \(\{\gamma_{n}\}_{n\geq0}\) play effective roles in the rate of convergence of the sequence \(\{x_{n}\}_{n\geq0}\) in the Noor iteration method. We enumerate the cases of the Noor iteration method in the proof of our next result.

Theorem 3.1

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\) and p a fixed point of T. Consider the case (2.3) of the Noor iteration method

$$ \left \{ \textstyle\begin{array}{l} x_{n+1} =(1-\alpha_{n})x_{n}+\alpha_{n} Ty_{n}, \\ y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tz_{n}, \\ z_{n} = (1-\gamma_{n})x_{n} + \gamma_{n}Tx_{n} \end{array}\displaystyle \right . $$

for all \(n\geq0\). If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}<\gamma_{n}\) for all \(n\geq0\), then the iteration (2.3) is faster than the other possible cases.

Proof

First, we compare the case (2.3) with the following Noor iteration case:

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} = (1-\alpha_{n})u_{n}+\alpha_{n} Tv_{n}, \\ v_{n} = (1-\beta_{n})u_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n}u_{n} +(1- \gamma_{n}) Tu_{n} \end{array}\displaystyle \right . $$
(3.6)

for all \(n\geq0\). Note that

$$\begin{aligned} \Vert z_{n}-p\Vert &= \bigl\Vert (1-\gamma_{n})x_{n} +\gamma _{n}Tx_{n} -p\bigr\Vert \\ &\leq(1-\gamma_{n})\Vert x_{n}-p\Vert + k \gamma_{n} \Vert x_{n}-p\Vert \\ &= \bigl(1-(1-k)\gamma_{n}\bigr) \Vert x_{n}-p\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert y_{n}-p\Vert &= \bigl\Vert (1-\beta_{n})x_{n} +\beta _{n}Tz_{n} -p\bigr\Vert \\ &\leq(1-\beta_{n})\Vert x_{n}-p\Vert + k \beta_{n} \Vert z_{n}-p\Vert \\ &\leq \bigl((1-\beta_{n}) +k \beta_{n} \bigl(1-(1-k) \gamma_{n}\bigr) \bigr) \Vert x_{n}-p\Vert \\ &= \bigl(1-\beta_{n}(1-k)-\beta_{n}\gamma_{n} k(1-k)\bigr)\Vert x_{n}-p \Vert \end{aligned}$$

for all \(n\geq0\). Also, we have

$$\begin{aligned} \Vert x_{n+1}-p\Vert &= \bigl\Vert (1-\alpha _{n})x_{n}+ \alpha_{n}Ty_{n} -p\bigr\Vert \\ &\leq(1-\alpha_{n}) \Vert x_{n}-p\Vert + k \alpha_{n} \Vert y_{n}-p\Vert \\ &\leq(1-\alpha_{n} ) \Vert x_{n}-p\Vert + k \alpha_{n} \bigl[1-\beta_{n}(1-k)-\beta_{n} \gamma_{n} k(1-k)\bigr] \Vert x_{n}-p \Vert \\ &\leq\bigl(1-\alpha_{n} + k\alpha_{n} \bigl(1- \beta_{n}(1-k)-\beta_{n}\gamma_{n} k(1-k)\bigr) \bigr)\Vert x_{n}-p\Vert \\ &\leq\bigl(1-\alpha_{n} + k\alpha_{n} -k(1-k) \beta_{n}\alpha_{n}-\alpha_{n}\beta _{n} \gamma_{n} k^{2} (1-k)\bigr) \Vert x_{n}-p\Vert \\ &\leq\bigl(1 - (1-k)\alpha_{n} -k(1-k)\beta_{n} \alpha_{n}-\alpha_{n}\beta _{n}\gamma_{n} k^{2} (1-k)\bigr) \Vert x_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac {1}{2}, 1)\) for all n, \(-(1-k)< -\alpha_{n}(1-k)< -\frac{1}{2}(1 -k)\), \(-k(1-k)<-\alpha_{n}\beta_{n}k(1-k)< -\frac{1}{4}k(1-k)\), and

$$-k^{2}(1-k) < -\alpha_{n}\beta_{n} \gamma_{n} k^{2} (1-k) < -\frac{1}{8}k^{2}(1-k) $$

for all n. This implies that

$$1 - (1-k)\alpha_{n} -k(1-k)\beta_{n}\alpha_{n}- \alpha_{n}\beta_{n}\gamma_{n} k^{2} (1-k)< 1-\frac{1}{2}(1-k)-\frac{1}{4}k(1-k)-\frac{1}{8}k^{2}(1-k) $$

for all n. Put \(a_{n} = (1-\frac{1}{2}(1-k) -\frac{1}{4}k(1-k)-\frac {1}{8}k^{2}(1-k))^{n}\Vert x_{1}-p\Vert \) for all \(n\geq0\). Now for the sequences \(\{u_{n}\}_{n\geq0}\) with \(u_{1}=x_{1}\) and \(\{v_{n}\} _{n\geq0}\) in (3.6), we have

$$\begin{aligned} \Vert w_{n}-p\Vert &= \bigl\Vert \gamma_{n} u_{n} +(1-\gamma _{n})Tu_{n} -p\bigr\Vert \\ &\leq\gamma_{n}\Vert u_{n}-p\Vert + k(1- \gamma_{n} ) \Vert u_{n}-p\Vert \\ &= \bigl(1-(1-\gamma_{n}) (1-k)\bigr) \Vert u_{n}-p\Vert \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \Vert v_{n}-p\Vert &= \bigl\Vert (1-\beta_{n})u_{n} +\beta _{n}Tw_{n} -p\bigr\Vert \\ &\leq(1-\beta_{n})\Vert u_{n}-p\Vert + k \beta_{n} \Vert w_{n}-p\Vert \\ &\leq \bigl((1-\beta_{n}) + k\beta_{n} \bigl(1-(1- \gamma_{n}) (1-k)\bigr) \bigr) \Vert u_{n}-p\Vert \\ &= \bigl(1-\beta_{n} + k\beta_{n} -\beta_{n}(1- \gamma_{n})k(1-k)\bigr) \Vert u_{n}-p\Vert \\ & = \bigl(1-\beta_{n} (1-k) -\beta_{n}(1- \gamma_{n})k(1-k)\bigr) \Vert u_{n}-p\Vert \end{aligned} \end{aligned}$$

for all \(n\geq0\). Hence,

$$\begin{aligned} \Vert u_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n})u_{n} +\alpha_{n} Tv_{n} -p\bigr\Vert \\ &\leq(1 - \alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \Vert v_{n}-p\Vert \\ &\leq(1-\alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \bigl(1-\beta_{n} (1-k) -\beta_{n}(1- \gamma_{n})k(1-k)\bigr)\Vert u_{n}-p \Vert \\ &\leq\bigl( (1-\alpha_{n}) + k\alpha_{n} - \alpha_{n}\beta_{n}k(1-k) - \alpha_{n} \beta_{n}(1- \gamma_{n}) k^{2}(1-k) \bigr)\Vert u_{n}-p\Vert \\ &= \bigl( 1-\alpha_{n} (1-k) -\alpha_{n} \beta_{n}k(1-k) - \alpha_{n}\beta _{n}(1-\gamma_{n}) k^{2}(1-k) \bigr)\Vert u_{n}-p\Vert \end{aligned}$$

for all n. Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac{1}{2}, 1)\) for all n, \(-k(1-k)<-\alpha_{n}\beta_{n}k(1-k)< -\frac{1}{4}k (1-k)\) and \(-\frac{1}{2}k^{2}(1-k)< -\alpha_{n}\beta_{n}(1-\gamma _{n})k^{2}(1-k) < 0 \) for all n. Hence,

$$1-\alpha_{n} (1-k) -\alpha_{n}\beta_{n}k(1-k) - \alpha_{n}\beta_{n}(1-\gamma _{n}) k^{2}(1-k)< 1- \frac{1}{2}(1-k) -\frac{1}{4}k(1-k) $$

for all n. Put \(b_{n}=(1-\frac{1}{2}(1-k) -\frac{1}{4}k(1-k))^{n}\| u_{1}-p\|\) for all \(n\geq0\). Then we have

$$\begin{aligned} \lim_{n\to\infty} \frac{a_{n}}{b_{n}}= \frac{(1-\frac {1}{2}(1-k)-\frac{1}{4}k(1-k)-\frac{1}{8}k^{2}(1-k))^{n}\Vert x_{1}-p\Vert }{ {(1-\frac{1}{2}(1-k) -\frac{1}{4}k(1-k))}^{n} \Vert u_{1} -p\Vert }=0. \end{aligned}$$

Thus, \(\{x_{n}\}_{n\geq0}\) converges faster than the sequence \(\{u_{n}\} _{n\geq0}\). Now, we compare the case (2.3) with the following Noor iteration case:

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n} )u_{n} + \alpha_{n}Tv_{n}, \\ v_{n} = \beta_{n} u_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.7)

for all \(n\geq0\). Note that

$$\begin{aligned} \Vert w_{n}-p\Vert &= \bigl\Vert (1-\gamma_{n}) u_{n} +\gamma _{n}Tu_{n} -p\bigr\Vert \\ &\leq(1-\gamma_{n})\Vert u_{n}-p\Vert + k \gamma_{n} \Vert u_{n}-p\Vert \\ &= \bigl(1-(1 - k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert v_{n}-p\Vert &= \bigl\Vert \beta_{n}u_{n} +(1-\beta _{n})Tw_{n} -p\bigr\Vert \\ &\leq\beta_{n}\Vert u_{n}-p\Vert + k(1- \beta_{n}) \Vert w_{n}-p\Vert \\ &\leq\bigl(\beta_{n} + k(1-\beta_{n} )-(1-\beta_{n}) \gamma_{n} k(1-k)\bigr) \Vert u_{n}-p\Vert \\ &= \bigl(1-(1-k) (1-\beta_{n} )-(1-\beta_{n}) \gamma_{n} k(1-k)\bigr) \Vert u_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Hence,

$$\begin{aligned} \Vert u_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n})u_{n} +\alpha_{n} Tv_{n} -p\bigr\Vert \\ &\leq(1 - \alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \Vert v_{n}-p\Vert \\ &\leq(1-\alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \bigl(1-(1-k) (1-\beta_{n} )-(1-\beta_{n}) \gamma_{n} k(1-k)\bigr) \Vert u_{n}-p \Vert \\ &\leq\bigl((1-\alpha_{n}) + k\alpha_{n}-k(1-k) \alpha_{n}(1-\beta_{n} )-\alpha _{n}(1-\beta_{n}) \gamma_{n} k^{2}(1-k)\bigr) \Vert u_{n}-p\Vert \\ &= \bigl(1-(1-k)\alpha_{n} -\alpha_{n}(1- \beta_{n} )k(1-k)-\alpha_{n}(1-\beta _{n}) \gamma_{n} k^{2}(1-k) \bigr)\Vert u_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac {1}{2}, 1)\) for all n, \(-\frac{1}{2}k(1-k)<-\alpha_{n}(1-\beta _{n})k(1-k)< 0 \), and \(-\frac{1}{2}k^{2}(1-k)< -\alpha_{n}(1-\beta_{n})\gamma_{n}k^{2}(1-k) < 0\) and so

$$1-(1-k)\alpha_{n} -\alpha_{n}(1-\beta_{n} )k(1-k)- \alpha_{n}(1-\beta_{n})\gamma _{n} k^{2}(1-k)< 1- \frac{1}{2}(1-k) $$

for all n. Put \(c_{n}=(1-\frac{1}{2}(1-k))^{n}\Vert u_{1}-p\Vert \) for all \(n\geq0\). Then we have

$$ \lim_{n\to\infty}\frac{a_{n}}{c_{n}}= \frac{(1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k)-\frac{1}{8}k^{2}(1-k))^{n}\Vert x_{1}-p\Vert }{ {(1-\frac{1}{2}(1-k))}^{n} \Vert u_{1} -p\Vert }=0. $$

Thus, \(\{x_{n}\}_{n\geq0}\) converges faster than the sequence \(\{u_{n}\} _{n\geq0}\). Now, we compare the case (2.3) with the following Noor iteration case:

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) u_{n} + \alpha_{n}Tv_{n}, \\ v_{n} = \beta_{n}u_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n}u_{n} +(1-\gamma_{n} )Tu_{n} \end{array}\displaystyle \right . $$
(3.8)

for all \(n\geq0\). Note that

$$\begin{aligned} \Vert w_{n}-p\Vert &= \bigl\Vert \gamma_{n} u_{n} +(1-\gamma _{n})Tu_{n} -p\bigr\Vert \\ &\leq\gamma_{n}\Vert u_{n}-p\Vert + k(1- \gamma_{n} ) \Vert u_{n}-p\Vert \\ &= \bigl(1-(1-\gamma_{n}) (1-k)\bigr) \Vert u_{n}-p\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert v_{n}-p\Vert &= \bigl\Vert \beta_{n}u_{n} +(1-\beta _{n})Tw_{n} -p\bigr\Vert \\ &\leq\beta_{n}\Vert u_{n}-p\Vert + k(1-\beta_{n}) \Vert w_{n}-p\Vert \\ &\leq\bigl(\beta_{n} + k(1-\beta_{n}) \bigl(1-(1- \gamma_{n}) (1-k)\bigr)\bigr) \Vert u_{n}-p\Vert \\ &\leq\bigl(\beta_{n} + k(1-\beta_{n}) -(1-\beta_{n}) (1- \gamma_{n})k(1-k)\bigr) \Vert u_{n}-p\Vert \\ &= \bigl(1-(1-\beta_{n}) (1-k) -(1-\beta_{n}) (1- \gamma_{n})k(1-k)\bigr) \Vert u_{n}-p\Vert \end{aligned}$$

and so

$$\begin{aligned} \Vert u_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n})u_{n} +\alpha_{n} Tv_{n} -p\bigr\Vert \\ &\leq(1 - \alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \Vert v_{n}-p\Vert \\ &\leq(1-\alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \bigl(1-(1-\beta_{n}) (1-k) -(1-\beta_{n}) (1- \gamma_{n})k(1-k)\bigr) \Vert u_{n}-p \Vert \\ &\leq\bigl(1-\alpha_{n} + k\alpha_{n} -\alpha_{n}(1-\beta_{n}) k(1-k) - \alpha _{n}(1-\beta_{n}) (1-\gamma_{n} )k^{2}(1-k) \bigr)\Vert u_{n}-p\Vert \\ &= \bigl(1-(1-k)\alpha_{n} -\alpha_{n}(1-\beta_{n}) k(1-k) - \alpha_{n}(1-\beta _{n}) (1-\gamma_{n}) k^{2}(1-k) \bigr)\Vert u_{n}-p\Vert \end{aligned}$$

for all n. Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac{1}{2}, 1)\) for all n, \(-\frac{1}{2}k(1-k)<-\alpha_{n}(1-\beta_{n})k(1-k)< 0\), and \(-\frac{1}{4}k^{2}(1-k)< -\alpha_{n}(1-\beta_{n}) (1-\gamma _{n})k^{2}(1-k) < 0\) for all n. This implies that

$$1-(1-k)\alpha_{n} -\alpha_{n}(1-\beta_{n}) k(1-k)- \alpha_{n}(1-\beta_{n}) (1-\gamma _{n}) k^{2}(1-k)< 1-\frac{1}{2}(1-k) $$

for all n. Put \(d_{n}=(1-\frac{1}{2}(1-k))^{n} \Vert u_{1}-p\Vert \) for all \(n\geq0\). Then we get

$$ \lim_{n\to\infty}\frac{a_{n}}{d_{n}}=\frac{(1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k)-\frac{1}{8}k^{2}(1-k))^{n}\Vert x_{1}-p\Vert }{( 1-\frac{1}{2}(1-k))^{n} \Vert u_{1} -p \Vert }=0 $$

and so the sequence \(\{x_{n}\}_{n\geq0}\) converges faster than the sequence \(\{u_{n}\}_{n\geq0}\). By using similar proofs, one can show that the case (2.3) is faster than the following cases of the Noor iteration method:

$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} u_{n+1} = \alpha_{n}u_{n} +(1-\alpha_{n}) Tv_{n}, \\ v_{n} = (1-\beta_{n})u_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} , \end{array}\displaystyle \right . \end{aligned}$$
(3.9)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} u_{n} +(1- \alpha_{n})Tv_{n}, \\ v_{n} = (1-\beta_{n})u_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n} , \end{array}\displaystyle \right . \end{aligned}$$
(3.10)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} u_{n+1} = \alpha_{n}u_{n} +(1-\alpha_{n}) Tv_{n}, \\ v_{n} = \beta_{n}u_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} =(1 - \gamma_{n}) u_{n} +\gamma_{n} Tu_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.11)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n}u_{n}+(1-\alpha_{n}) Tv_{n}, \\ v_{n} = \beta_{n} u_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n} \end{array}\displaystyle \right . $$
(3.12)

for all \(n\geq0\). This completes the proof. □

By using similar conditions, one can show that the case (3.7) converges faster than (3.8), (3.9) converges faster than (3.11), (3.11) converges faster than (3.10) and (3.10) converges faster than (3.12).
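Theorem 3.1 can be illustrated numerically. The following is a minimal Python sketch (the paper uses MATLAB) comparing the case (2.3) with the fully flipped case (3.12); the contraction \(T(x) = x/2 + 1\) with \(k = \frac{1}{2}\) and \(p = 2\), the constant coefficients \(\alpha_{n} = \beta_{n} = \gamma_{n} = 0.8\), and the function names are illustrative assumptions.

```python
def T(x):
    return x / 2 + 1  # contraction with constant k = 1/2, fixed point p = 2


def noor_23(x, n, a=0.8, b=0.8, g=0.8):
    # case (2.3): larger coefficients on the T terms in all three lines
    for _ in range(n):
        z = (1 - g) * x + g * T(x)
        y = (1 - b) * x + b * T(z)
        x = (1 - a) * x + a * T(y)
    return x


def noor_312(x, n, a=0.8, b=0.8, g=0.8):
    # case (3.12): larger coefficients on the untransformed terms in all three lines
    for _ in range(n):
        z = g * x + (1 - g) * T(x)
        y = b * x + (1 - b) * T(z)
        x = a * x + (1 - a) * T(y)
    return x


p, n = 2.0, 40
# per-step error factors: 0.376 for case (2.3) versus 0.889 for case (3.12)
assert abs(noor_23(0.0, n) - p) < abs(noor_312(0.0, n) - p)
```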

As we know, the Agarwal iteration method can be written in the following four cases:

$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} =(1 - \alpha_{n})Tx_{n} + \alpha_{n}Ty_{n}, \\ y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.13)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} =\alpha_{n} Tx_{n} + (1-\alpha_{n})Ty_{n}, \\ y_{n} = \beta_{n} x_{n} +(1-\beta_{n})Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.14)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} = \alpha_{n}Tx_{n} + (1-\alpha_{n})Ty_{n}, \\ y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.15)

and

$$ \left \{ \textstyle\begin{array}{l} x_{n+1} =(1 - \alpha_{n})Tx_{n} + \alpha_{n}Ty_{n}, \\ y_{n} = \beta_{n} x_{n} +(1-\beta_{n})Tx_{n} \end{array}\displaystyle \right . $$
(3.16)

for all \(n\geq0\). One can easily show that the case (3.13) converges faster than the other ones for contractive maps. We record it as the next lemma.

Lemma 3.1

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\) and p a fixed point of T. If \(1-\alpha_{n}<\alpha_{n}\) and \(1-\beta_{n} <\beta_{n}\) for all \(n\geq0\), then the case (3.13) converges faster than (3.14), (3.15), and (3.16).
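Lemma 3.1 can be checked numerically as well. The following is a minimal Python sketch (the paper uses MATLAB) comparing the cases (3.13) and (3.14); the contraction \(T(x) = x/2 + 1\) with \(k = \frac{1}{2}\) and \(p = 2\), the constant coefficients \(\alpha_{n} = \beta_{n} = 0.8\), and the function names are illustrative assumptions.

```python
def T(x):
    return x / 2 + 1  # contraction with constant k = 1/2, fixed point p = 2


def agarwal_313(x, n, a=0.8, b=0.8):
    # case (3.13): larger coefficients on Ty_n and Tx_n
    for _ in range(n):
        y = (1 - b) * x + b * T(x)
        x = (1 - a) * T(x) + a * T(y)
    return x


def agarwal_314(x, n, a=0.8, b=0.8):
    # case (3.14): larger coefficients on Tx_n (first line) and x_n (second line)
    for _ in range(n):
        y = b * x + (1 - b) * T(x)
        x = a * T(x) + (1 - a) * T(y)
    return x


p, n = 2.0, 30
# per-step error factors: 0.34 for case (3.13) versus 0.49 for case (3.14)
assert abs(agarwal_313(0.0, n) - p) < abs(agarwal_314(0.0, n) - p)
```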

Also by using a similar condition, one can show that the case (3.16) converges faster than (3.14). Similar to Theorem 3.1, we can prove that for contractive maps one case in the Abbas iteration method converges faster than the other possible cases whenever the elements of the sequences \(\{\alpha_{n}\}_{n\geq0}\), \(\{\beta_{n}\}_{n\geq0}\), and \(\{\gamma_{n}\}_{n\geq0}\) are in \((\frac{1}{2}, 1)\) for sufficiently large n. Also, one can show that for contractive maps the case (2.6) of the Thakur-Thakur-Postolache iteration method converges faster than the other possible cases whenever elements of the sequences \(\{\alpha_{n}\}_{n\geq0}\), \(\{\beta_{n}\}_{n\geq0}\), and \(\{\gamma_{n}\}_{n\geq0}\) are in \((\frac{1}{2}, 1)\) for sufficiently large n. We record these results as follows.

Lemma 3.2

Let C be a nonempty, closed, and convex subset of a Banach space X, \(u_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. Consider the following case in the Abbas iteration method:

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tv_{n} + (1-\alpha_{n})Tw_{n}, \\ v_{n} = (1-\beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.17)

for all n. If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}< \gamma_{n}\) for sufficiently large n, then the case (3.17) converges faster than the other possible cases.

Also by using similar conditions in the Abbas iteration method, one can show that the cases

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tv_{n} + (1-\alpha_{n})Tw_{n}, \\ v_{n} = \beta_{n}Tu_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.18)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tv_{n} + (1-\alpha_{n})Tw_{n}, \\ v_{n} = (1-\beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n} \end{array}\displaystyle \right . $$
(3.19)

converge faster than the case

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tv_{n} + (1-\alpha_{n})Tw_{n}, \\ v_{n} = \beta_{n}Tu_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n}u_{n} +(1-\gamma_{n} )Tu_{n}. \end{array}\displaystyle \right . $$
(3.20)

Also the case

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} = (1-\alpha_{n} )Tv_{n}+\alpha_{n}Tw_{n}, \\ v_{n} =(1- \beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.21)

converges faster than the cases

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) Tv_{n} + \alpha_{n}Tw_{n}, \\ v_{n} = \beta_{n}Tu_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} =(1 - \gamma_{n}) u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.22)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) Tv_{n} + \alpha_{n}Tw_{n}, \\ v_{n} = (1-\beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n}, \end{array}\displaystyle \right . $$
(3.23)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) Tv_{n} + \alpha_{n}Tw_{n}, \\ v_{n} = \beta_{n}Tu_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n}. \end{array}\displaystyle \right . $$
(3.24)

Lemma 3.3

Let C be a nonempty, closed, and convex subset of a Banach space X, \(u_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}< \gamma_{n}\) for sufficiently large n, then the case (2.6) in the Thakur-Thakur-Postolache iteration method converges faster than the other possible cases.

Also, by using similar conditions, one can show that the cases

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n} )Tu_{n} + \alpha_{n}Tv_{n}, \\ v_{n} = \beta_{n} w_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.25)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} = (1-\alpha_{n})Tu_{n} +\alpha_{n} Tv_{n}, \\ v_{n} = (1-\beta_{n})w_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n} \end{array}\displaystyle \right . $$
(3.26)

converge faster than the case

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) Tu_{n} + \alpha_{n}Tv_{n}, \\ v_{n} = \beta_{n}w_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n}u_{n} +(1-\gamma_{n} )Tu_{n}. \end{array}\displaystyle \right . $$
(3.27)

Also the case

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tu_{n} +(1- \alpha_{n})Tv_{n}, \\ v_{n} = (1-\beta_{n} )w_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n}) u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.28)

converges faster than the cases

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tu_{n} +(1- \alpha_{n})Tv_{n}, \\ v_{n} = \beta_{n} w_{n} +(1-\beta_{n}) Tw_{n}, \\ w_{n} =(1- \gamma_{n}) u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.29)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tu_{n} +(1- \alpha_{n})Tv_{n}, \\ v_{n} = (1-\beta_{n})w_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n}, \end{array}\displaystyle \right . $$
(3.30)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n}Tu_{n}+(1-\alpha_{n}) Tv_{n}, \\ v_{n} = \beta_{n}w_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n}. \end{array}\displaystyle \right . $$
(3.31)
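The eight coefficient placements of the Thakur-Thakur-Postolache scheme compared above differ only in which term of each convex combination carries the coefficient, so they can all be generated by one parametrized sketch. The map used in the check is an illustrative contraction, not one of the paper's examples of this section.

```python
def ttp_variant(T, u1, alpha, beta, gamma, swaps, n_steps):
    """Thakur-Thakur-Postolache-type scheme.

    swaps = (sa, sb, sc); a True entry moves the coefficient to the other
    term of the corresponding convex combination, which generates the
    cases (2.6) and (3.25)-(3.31).
    """
    sa, sb, sc = swaps
    u = u1
    for n in range(n_steps):
        a, b, g = alpha(n), beta(n), gamma(n)
        a = 1 - a if sa else a
        b = 1 - b if sb else b
        g = 1 - g if sc else g
        w = (1 - g) * u + g * T(u)
        v = (1 - b) * w + b * T(w)
        u = (1 - a) * T(u) + a * T(v)
    return u

# All eight cases converge for a contraction; only their speed differs.
results = [ttp_variant(lambda x: x / 2 + 1, 20.0,
                       lambda n: 0.85, lambda n: 0.65, lambda n: 0.75,
                       (sa, sb, sc), 60)
           for sa in (False, True) for sb in (False, True) for sc in (False, True)]
```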

Finally, we have a similar situation for the Picard S-iteration, which we record here.

Lemma 3.4

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. If \(1-\alpha_{n} <\alpha_{n}\) and \(1-\beta_{n} <\beta_{n}\) for sufficiently large n, then the case (2.7) in the Picard S-iteration method converges faster than the other possible cases.
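The case (2.7) is not displayed in this section; the sketch below assumes the standard form of the Picard S-iteration from [20], so treat the exact coefficient placement as an assumption.

```python
def picard_s(T, x1, alpha, beta, n_steps):
    """Picard S-iteration; standard form assumed from [20]."""
    x = x1
    for n in range(n_steps):
        a, b = alpha(n), beta(n)
        z = (1 - b) * x + b * T(x)     # z_n = (1 - beta_n) x_n + beta_n T x_n
        y = (1 - a) * T(x) + a * T(z)  # y_n = (1 - alpha_n) T x_n + alpha_n T z_n
        x = T(y)                       # x_{n+1} = T y_n
    return x

# Illustrative check with the contraction T(x) = x/2 + 1 (fixed point 2).
fp = picard_s(lambda x: x / 2 + 1, 20.0, lambda n: 0.7, lambda n: 0.7, 40)
```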

4 Comparing different iteration methods

In this section, we compare the rate of convergence of some different iteration methods for contractive maps. Our goal is to show that the rate of convergence relates to the coefficients.

Theorem 4.1

Let C be a nonempty, closed, and convex subset of a Banach space X, \(u_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. Consider the case (2.5) in the Abbas iteration method

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) Tv_{n} + \alpha_{n}Tw_{n}, \\ v_{n} = (1-\beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} + \gamma_{n}Tu_{n}, \end{array}\displaystyle \right . $$

the case (3.17) in the Abbas iteration method

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n}Tv_{n} +(1-\alpha_{n})Tw_{n}, \\ v_{n} = (1-\beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} + \gamma_{n}Tu_{n}, \end{array}\displaystyle \right . $$

and the case (2.6) in the Thakur-Thakur-Postolache iteration method

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n})Tu_{n}+\alpha_{n} Tv_{n}, \\ v_{n} = (1-\beta_{n})w_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} + \gamma_{n}Tu_{n} \end{array}\displaystyle \right . $$

for all \(n\geq0\). If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}< \gamma_{n}\) for sufficiently large n, then the case (3.17) in the Abbas iteration method converges faster than the case (2.6) in the Thakur-Thakur-Postolache iteration method. Also, the case (2.6) in the Thakur-Thakur-Postolache iteration method is faster than the case (2.5) in the Abbas iteration method.

Proof

Let \(\{u_{n}\}_{n\geq0}\) be the sequence in the case (3.17). Then we have

$$\begin{aligned}& \Vert w_{n}-p\Vert = \bigl\Vert (1-\gamma_{n})u_{n} +\gamma _{n}Tu_{n} -p\bigr\Vert \\& \hphantom{\Vert w_{n}-p\Vert }\leq(1-\gamma_{n})\Vert u_{n}-p\Vert + k \gamma_{n} \Vert u_{n}-p\Vert \\& \hphantom{\Vert w_{n}-p\Vert }= \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert , \\& \Vert v_{n}-p\Vert = \bigl\Vert (1-\beta_{n})Tu_{n} +\beta _{n}Tw_{n} -p\bigr\Vert \\& \hphantom{\Vert v_{n}-p\Vert }\leq k(1-\beta_{n})\Vert u_{n}-p\Vert + k \beta_{n} \Vert w_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert }\leq k \bigl[ (1-\beta_{n}) + \beta_{n} \bigl(1-(1-k)\gamma_{n}\bigr)\bigr] \Vert u_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert }\leq k\bigl[1 - \beta_{n}\gamma_{n}(1-k) \bigr]\Vert u_{n}-p\Vert , \end{aligned}$$

and

$$\begin{aligned} \Vert u_{n+1}-p\Vert &= \bigl\Vert \alpha_{n}Tv_{n} +(1-\alpha_{n})Tw_{n} -p\bigr\Vert \\ &\leq\alpha_{n} k\Vert v_{n}-p\Vert + k(1- \alpha_{n}) \Vert w_{n}-p\Vert \\ &\leq\alpha_{n} k^{2} \bigl(1 - \beta_{n} \gamma_{n} (1-k)\bigr) \Vert u_{n}-p\Vert + k(1 - \alpha_{n}) \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert \\ &\leq k\bigl[ k\alpha_{n} -\alpha_{n}\beta_{n} \gamma_{n} k(1-k) + (1-\alpha_{n}) \bigl(1-(1-k) \gamma_{n}\bigr)\bigr]\Vert u_{n}-p\Vert \\ &=k\bigl[k\alpha_{n} -\alpha_{n}\beta_{n} \gamma_{n} k(1-k)+1-\alpha_{n} - (1-\alpha_{n}) \gamma_{n}(1-k)\bigr]\Vert u_{n}-p\Vert \\ &=k\bigl[1 -\alpha_{n}(1-k)- (1-\alpha_{n}) \gamma_{n}(1-k)-\alpha_{n}\beta _{n} \gamma_{n} k(1-k) \bigr]\Vert u_{n}-p\Vert \end{aligned}$$

for all n. Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac{1}{2}, 1)\) for sufficiently large n, we have

$$-(1-k)< -\alpha_{n}(1-k)< -\frac{1}{2}(1-k), $$

\(-\frac{1}{2}(1-k)<-(1-\alpha_{n})\gamma_{n}(1-k)<0\), and \(-k(1-k)<-\alpha _{n}\beta_{n}\gamma_{n} k (1-k) <-\frac{1}{8}k(1-k)\) for sufficiently large n. Hence,

$$1-\alpha_{n}(1-k)-(1-\alpha_{n})\gamma_{n}(1-k)- \alpha_{n}\beta_{n}\gamma_{n} k(1-k)< 1- \frac{1}{2}(1-k)-\frac{1}{8}k(1-k) $$

for sufficiently large n. Put \(a_{n} =k^{n} (1-\frac{1}{2}(1-k)-\frac {1}{8}k(1-k))^{n}\Vert u_{1}-p\Vert \) for all n. Now, let \(\{ u_{n}\}_{n\geq0}\) be the sequence in the case (2.6). Then we have

$$\begin{aligned}& \Vert w_{n}-p\Vert = \bigl\Vert (1-\gamma_{n})u_{n} +\gamma _{n}Tu_{n} -p\bigr\Vert \\& \hphantom{\Vert w_{n}-p\Vert } \leq(1-\gamma_{n})\Vert u_{n}-p\Vert + k \gamma_{n} \Vert u_{n}-p\Vert \\& \hphantom{\Vert w_{n}-p\Vert } = \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert , \\& \Vert v_{n}-p\Vert = \bigl\Vert (1-\beta_{n})w_{n} +\beta _{n}Tw_{n} -p\bigr\Vert \\& \hphantom{\Vert v_{n}-p\Vert } \leq(1-\beta_{n})\Vert w_{n}-p\Vert + k \beta_{n} \Vert w_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert } = \bigl[1-\beta_{n}(1-k)\bigr]\Vert w_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert } \leq\bigl[1-\beta_{n}(1-k)\bigr] \bigl[1- \gamma_{n}(1-k)\bigr]\Vert u_{n}-p\Vert , \end{aligned}$$

and

$$\begin{aligned} \Vert u_{n+1}-p\Vert =& \bigl\Vert (1-\alpha _{n})Tu_{n}+\alpha_{n}Tv_{n} -p\bigr\Vert \\ \leq&(1-\alpha_{n}) k\Vert u_{n}-p\Vert + k \alpha_{n} \Vert v_{n}-p\Vert \\ \leq& k(1-\alpha_{n} ) \Vert u_{n}-p\Vert + k \alpha_{n} \bigl[1-\beta_{n}(1-k)\bigr] \bigl[1- \gamma_{n}(1-k)\bigr] \Vert u_{n}-p\Vert \\ \leq& k\bigl[ 1-\alpha_{n} + \alpha_{n} \bigl(1- \beta_{n}(1-k)\bigr) \bigl(1-\gamma _{n}(1-k)\bigr)\bigr] \Vert u_{n}-p\Vert \\ \leq& k\bigl[ 1-\alpha_{n} + \bigl(\alpha_{n} -(1-k) \beta_{n}\alpha_{n}\bigr) \bigl((1-\gamma _{n})+k \gamma_{n}\bigr)\bigr] \Vert u_{n}-p\Vert \\ \leq& k\bigl[ 1-\alpha_{n} + \alpha_{n} (1- \gamma_{n}) + \alpha_{n}\gamma_{n} k - \beta_{n}\alpha_{n}(1-\gamma_{n}) (1-k) \\ &{} -\alpha_{n}\beta_{n}\gamma_{n} k(1-k)\bigr] \Vert u_{n}-p\Vert \\ \leq& k\bigl[ 1 -\alpha_{n}\gamma_{n}(1-k) - \alpha_{n}\beta_{n}(1-\gamma _{n}) (1-k)- \alpha_{n}\beta_{n}\gamma_{n}k(1-k)\bigr] \Vert u_{n}-p \Vert \end{aligned}$$

for all n. Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac{1}{2}, 1)\) for sufficiently large n, we have

$$-(1-k)< -\alpha_{n}\gamma_{n}(1-k)< -\frac{1}{4}(1-k), $$

\(-\frac{1}{2}(1-k)<-\alpha_{n}\beta_{n}(1-\gamma_{n})(1-k)<0\), and \(-k(1-k)<-\alpha_{n}\beta_{n}\gamma_{n} k (1-k) <-\frac{1}{8}k(1-k)\) for sufficiently large n. Hence,

$$1-\alpha_{n}\gamma_{n}(1-k)-\alpha_{n} \beta_{n}(1-\gamma_{n}) (1-k)-\alpha _{n} \beta_{n}\gamma_{n}k(1-k)< 1-\frac{1}{4}(1-k) - \frac{1}{8}k(1-k) $$

for sufficiently large n. Put \(b_{n} =k^{n} (1-\frac{1}{4}(1-k) -\frac {1}{8}k(1-k))^{n}\Vert u_{1}-p\Vert \) for all n. Then

$$ \lim_{n\to\infty}\frac{a_{n}}{b_{n}}=\frac{k^{n}(1-\frac{1}{2}(1-k) -\frac{1}{8}k(1-k))^{n}\Vert u_{1}-p\Vert }{k^{n} (1-\frac {1}{4}(1-k) -\frac{1}{8}k(1-k))^{n} \Vert u_{1} -p\Vert }=0. $$

Thus, the case (3.17) in the Abbas iteration method converges faster than the case (2.6) in the Thakur-Thakur-Postolache iteration method.

Now for the case (2.5), we have

$$\begin{aligned}& \Vert w_{n}-p\Vert = \bigl\Vert (1-\gamma_{n}) u_{n} +\gamma _{n}Tu_{n} -p\bigr\Vert \\& \hphantom{\Vert w_{n}-p\Vert } \leq(1-\gamma_{n})\Vert u_{n}-p\Vert + k \gamma_{n} \Vert u_{n}-p\Vert \\& \hphantom{\Vert w_{n}-p\Vert } = \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert , \\& \Vert v_{n}-p\Vert = \bigl\Vert (1-\beta_{n})Tu_{n} +\beta _{n}Tw_{n} -p\bigr\Vert \\& \hphantom{\Vert v_{n}-p\Vert } \leq k(1-\beta_{n})\Vert u_{n}-p\Vert + k\beta_{n} \Vert w_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert } \leq k \bigl[ (1-\beta_{n}) + \beta_{n} \bigl(1 -(1-k)\gamma_{n}\bigr)\bigr] \Vert u_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert } \leq k\bigl[1 - \beta_{n}\gamma_{n}(1-k) \bigr]\Vert u_{n}-p\Vert , \end{aligned}$$

and

$$\begin{aligned} \Vert u_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n})Tv_{n} +\alpha_{n}Tw_{n} -p\bigr\Vert \\ &\leq(1-\alpha_{n}) k\Vert v_{n}-p\Vert + k \alpha_{n} \Vert w_{n}-p\Vert \\ &\leq(1 - \alpha_{n}) k^{2} \bigl(1-\beta_{n} \gamma_{n} (1-k)\bigr) \Vert u_{n}-p\Vert + k \alpha_{n} \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert \\ &\leq k\bigl[(1- \alpha_{n})k -(1-\alpha_{n}) \beta_{n}\gamma_{n} k(1-k) + \alpha _{n} - \alpha_{n}\gamma_{n}(1-k)\bigr]\Vert u_{n}-p \Vert \\ &\leq k\bigl[1-(1-\alpha_{n}) (1-k) - \alpha_{n} \gamma_{n}(1-k)-(1-\alpha _{n})\beta_{n} \gamma_{n} k(1-k)\bigr]\Vert u_{n}-p\Vert \end{aligned}$$

for all n. Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac{1}{2}, 1)\) for sufficiently large n, \(-\frac{1}{2}(1-k)<-(1-\alpha _{n})(1-k)< 0\), \(-(1-k)<-\alpha_{n}\gamma_{n}(1-k)<-\frac{1}{4}(1-k)\), and \(-\frac {1}{2}k(1-k)<-(1-\alpha_{n})\beta_{n}\gamma_{n} k (1-k) <0\) for sufficiently large n. Hence,

$$1-(1-\alpha_{n}) (1-k)-\alpha_{n}\gamma_{n}(1-k)-(1- \alpha_{n})\beta _{n}\gamma_{n} k(1-k)< 1- \frac{1}{4}(1-k) $$

for sufficiently large n. Put \(c_{n} =k^{n} (1-\frac{1}{4}(1-k) )^{n} \Vert u_{1}-p\Vert \) for all n. Then we have

$$ \lim_{n\to\infty}\frac{b_{n}}{c_{n}}=\frac{k^{n}(1-\frac {1}{4}(1-k)-\frac{1}{8}k(1-k))^{n}\Vert u_{1}-p\Vert }{k^{n}(1-\frac{1}{4}(1-k))^{n} \Vert u_{1}-p\Vert }=0 $$

and so the case (2.6) in the Thakur-Thakur-Postolache iteration method is faster than the case (2.5) in the Abbas iteration method. □
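The proof rests on comparing the per-step contraction factors behind \(a_{n}\), \(b_{n}\), and \(c_{n}\); a quick numerical sketch confirms their ordering for sample values of k.

```python
def theorem41_factors(k):
    """Per-step factors of the geometric bounds in the proof of Theorem 4.1."""
    f_317 = k * (1 - (1 - k) / 2 - k * (1 - k) / 8)  # case (3.17), bound a_n
    f_26 = k * (1 - (1 - k) / 4 - k * (1 - k) / 8)   # case (2.6),  bound b_n
    f_25 = k * (1 - (1 - k) / 4)                     # case (2.5),  bound c_n
    return f_317, f_26, f_25

# A strictly smaller factor forces the ratio of the bounds to 0, as in the proof.
for k in (0.1, 0.5, 0.9):
    f1, f2, f3 = theorem41_factors(k)
    assert f1 < f2 < f3
```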

By using a similar proof, we can compare the Thakur-Thakur-Postolache and the Agarwal iteration methods as follows.

Theorem 4.2

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}< \gamma_{n}\) for sufficiently large n, then the case (2.6) in the Thakur-Thakur-Postolache iteration method converges faster than the case (2.4) in the Agarwal iteration method and the case (2.4) in the Agarwal iteration method is faster than the cases (3.29) and (3.30) in the Thakur-Thakur-Postolache iteration method.

Also, by using similar proofs, we can compare some other iteration methods. We record these as follows.

Theorem 4.3

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}< \gamma_{n}\) for sufficiently large n, then the case (2.3) in the Abbas iteration method converges faster than the case (2.2) in the Ishikawa iteration method and the case (2.2) in the Ishikawa iteration method is faster than the cases (3.11) and (3.12) in the Abbas iteration method.

It is notable that there are some cases in which the coefficients play no effective role in the rate of convergence. By using similar proofs, one can check the next result, and one can obtain further similar cases. This suggests that researchers should pay closer attention to the possible effect of the coefficients on the rate of convergence of iteration methods.

Theorem 4.4

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), p a fixed point of T, and \(\alpha_{n}, \beta_{n}, \gamma_{n} \in(0, 1)\) for all \(n\geq0\). Then each of the case (2.4) in the Agarwal iteration method, the case (2.5) in the Abbas iteration method, and the case (2.6) in the Thakur-Thakur-Postolache iteration method is faster than both the case (2.1) in the Mann iteration method and the case (2.2) in the Ishikawa iteration method.

5 Examples and figures

In this section, we provide some examples to illustrate our results.

Example 1

Let \(X = \mathbb{R}\), \(C=[1, 60]\), \(x_{0}=20\), \(\alpha_{n}=0.7\), and \(\beta_{n}=0.85\) for all \(n\geq0\). Define the map \(T\colon C \to C\) by \(T(x)=(3x+18)^{\frac{1}{3}}\) for all \(x\in C\). It is easy to see that T is a contraction. In Tables 1-3, we compare two cases of the Mann iteration method and four cases each of the Ishikawa and Agarwal iteration methods. From a mathematical point of view, the Mann iteration (3.1) is more than 2.82 times faster than the Mann iteration (2.1); the Ishikawa iteration (3.2) is more than 1.07 times faster than the Ishikawa iteration (3.4), more than 11.33 times faster than the Ishikawa iteration (3.3), and more than 11 times faster than the Ishikawa iteration (3.5); the Ishikawa iteration (3.4) is more than 8.75 times faster than the Ishikawa iteration (3.5); the Agarwal iteration (3.13) is 1.22 times faster than the Agarwal iteration (3.14), 1.11 times faster than the Agarwal iteration (3.15), and 1.22 times faster than the Agarwal iteration (3.16); and so on. We also report the CPU time of each iteration method in Tables 1-3, and Figure 1 is based on at least 30 CPU-time measurements for the faster cases of the methods. From a computer-calculation point of view, we get a different answer: as the CPU time table shows, the Agarwal iteration (3.13) and the Mann iteration (3.1) are faster than the Ishikawa iteration (3.2). This observation emphasizes the difference between mathematical and computer-calculation results, which has appeared many times in the literature.
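The comparison can be reproduced with a short script. We use the classical forms of the Mann [26], Ishikawa [27], and Agarwal [29] iterations here (the paper's numbered cases permute the coefficients), so the iteration counts are illustrative rather than a reproduction of Tables 1-3.

```python
T = lambda x: (3 * x + 18) ** (1 / 3)  # contraction of Example 1, fixed point 3
p, x0, a, b = 3.0, 20.0, 0.7, 0.85

def steps_until(step, tol=1e-8, max_iter=1000):
    """Number of iterations until |x - p| <= tol."""
    x, n = x0, 0
    while abs(x - p) > tol and n < max_iter:
        x, n = step(x), n + 1
    return n

mann = lambda x: (1 - a) * x + a * T(x)

def ishikawa(x):
    y = (1 - b) * x + b * T(x)
    return (1 - a) * x + a * T(y)

def agarwal(x):
    y = (1 - b) * x + b * T(x)
    return (1 - a) * T(x) + a * T(y)

counts = {name: steps_until(f)
          for name, f in [("Mann", mann), ("Ishikawa", ishikawa), ("Agarwal", agarwal)]}
```

For this map the Agarwal scheme needs the fewest iterations, consistent with Theorem 4.4.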

Figure 1

CPU time.

Table 1 Cases of Mann iteration
Table 2 Cases of Ishikawa iteration
Table 3 Cases of Agarwal iteration

The next example illustrates Lemma 3.2.

Example 2

Let \(X = \mathbb{R}\), \(C = [0, 2000]\), \(x_{0}=1000\), \(\alpha_{n}=0.85\), \(\beta_{n}=0.65\), and \(\gamma_{n}=0.75\) for all \(n\geq0\). Define the map \(T\colon C\to C\) by the formula \(T(x) =\sqrt[3]{x^{2}}\) for all \(x\in C\). Table 4 shows that the Abbas iteration (3.17) converges faster than the other cases, the Abbas iteration (3.18) is 1.1 times faster than the Abbas iteration (3.20), the Abbas iteration (3.19) is 1.05 times faster than the Abbas iteration (3.20), and the Abbas iteration (3.21) is 1.04 times faster than the Abbas iteration (3.22) and 1.3 times faster than the Abbas iterations (3.23) and (3.24). One can obtain similar results on the difference between the mathematical and computer-calculation points of view for this example.

Table 4 Cases of Abbas iteration

The next example illustrates Theorem 3.1.

Example 3

Let \(X = \mathbb{R}\), \(C = [1, 60]\), \(x_{0}=40\), \(\alpha_{n}=0.9\), \(\beta _{n}=0.6\), and \(\gamma_{n}=0.8\) for all \(n\geq0\). Define the map \(T\colon C\to C\) by \(T(x) =\sqrt{x^{2}-8x+40}\) for all \(x\in C\) (see [23]). Table 5 shows that, from the mathematical point of view, the Abbas iteration (3.17) converges 1.09 times faster than the Thakur-Thakur-Postolache iteration (2.6), and the Thakur-Thakur-Postolache iteration (2.6) is 1.16 times faster than the Abbas iteration (2.5). Again, we get different results from the computer-calculation point of view by checking Table 5 and Figures 2 and 3.
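The three schemes compared in Theorem 4.1 can be run side by side on this map. The sketch below uses the fixed point \(p=5\) of \(T\); the number of steps and the error comparison are illustrative, not a reproduction of Table 5.

```python
import math

T = lambda x: math.sqrt(x * x - 8 * x + 40)  # contraction of Example 3, fixed point 5
x0, a, b, g = 40.0, 0.9, 0.6, 0.8

def abbas_317(x):                 # case (3.17) of the Abbas method
    w = (1 - g) * x + g * T(x)
    v = (1 - b) * T(x) + b * T(w)
    return a * T(v) + (1 - a) * T(w)

def ttp_26(x):                    # case (2.6) of the Thakur-Thakur-Postolache method
    w = (1 - g) * x + g * T(x)
    v = (1 - b) * w + b * T(w)
    return (1 - a) * T(x) + a * T(v)

def abbas_25(x):                  # case (2.5) of the Abbas method
    w = (1 - g) * x + g * T(x)
    v = (1 - b) * T(x) + b * T(w)
    return (1 - a) * T(v) + a * T(w)

def err_after(step, n=8):
    x = x0
    for _ in range(n):
        x = step(x)
    return abs(x - 5.0)

errors = [err_after(s) for s in (abbas_317, ttp_26, abbas_25)]
```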

Figure 2

Convergence behavior of the iteration methods: Thakur-Thakur-Postolache (2.6), Abbas (2.5), and Abbas (3.17).

Figure 3

CPU time.

Table 5 Comparison between Thakur iteration and Abbas iteration

The next example shows that choosing the coefficients is very important in the rate of convergence of an iteration method.

Example 4

Let \(X=\mathbb{R}\), \(C=[0, 30]\), and \(x_{0}=20\). Define the map \(T\colon C\to C\) by \(T(x) = \frac{x}{2}+1\) for all \(x\in C\). Consider the following coefficients separately in the Thakur-Thakur-Postolache iteration (2.6):

  (a) \(\alpha_{n}=\beta_{n}=\gamma_{n}= 1-\frac{1}{(n+1)^{10}}\),

  (b) \(\alpha_{n}=\beta_{n}=\gamma_{n}= 1-\frac{1}{n+1}\),

  (c) \(\alpha_{n}=\beta_{n}=\gamma_{n}= 1-\frac{1}{(n+1)^{\frac{1}{2}}}\),

  (d) \(\alpha_{n}=\beta_{n}=\gamma_{n}=1 -\frac{1}{(n+1)^{\frac{1}{5}}}\)

for all \(n\geq0\). Table 6 shows that the Thakur-Thakur-Postolache iteration (2.6) with coefficients (a) is 1.25 times faster than with coefficients (b), 1.6 times faster than with coefficients (c), and 2.16 times faster than with coefficients (d). Of course, from the mathematical point of view, a similar observation holds for the other iteration methods. Here, the computer-calculation results for the CPU time differ slightly, as one can check in Figure 4.
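The effect of the coefficients is easy to observe numerically. The following sketch runs the Thakur-Thakur-Postolache iteration (2.6) with the four coefficient choices (a)-(d) on the map of this example; the step count is illustrative.

```python
T = lambda x: x / 2 + 1  # contraction of Example 4, fixed point 2
x0 = 20.0

def ttp_26_error(coeff, n_steps=12):
    """Error after n_steps of (2.6) with alpha_n = beta_n = gamma_n = coeff(n)."""
    x = x0
    for n in range(n_steps):
        c = coeff(n)
        w = (1 - c) * x + c * T(x)
        v = (1 - c) * w + c * T(w)
        x = (1 - c) * T(x) + c * T(v)
    return abs(x - 2.0)

errs = [ttp_26_error(f) for f in (
    lambda n: 1 - 1 / (n + 1) ** 10,   # (a)
    lambda n: 1 - 1 / (n + 1),         # (b)
    lambda n: 1 - 1 / (n + 1) ** 0.5,  # (c)
    lambda n: 1 - 1 / (n + 1) ** 0.2,  # (d)
)]
# Coefficients approaching 1 more quickly give a smaller error here.
```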

Figure 4

CPU time.

Table 6 Cases of Thakur iteration

References

  1. Plunkett, R: On the rate of convergence of relaxation methods. Q. Appl. Math. 10, 263-266 (1952)

  2. Bowden, BV: Faster than Thought: A Symposium on Digital Computing Machines. Pitman, London (1953)

  3. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20(1), 103-120 (2004)

  4. Dykstra, R, Kochar, S, Robertson, T: Testing whether one risk progresses faster than the other in a competing risks problem. Stat. Decis. 14(3), 209-222 (1996)

  5. Hajela, D: On faster than Nyquist signaling: computing the minimum distance. J. Approx. Theory 63(1), 108-120 (1990)

  6. Hajela, D: On faster than Nyquist signaling: further estimations on the minimum distance. SIAM J. Appl. Math. 52(3), 900-907 (1992)

  7. Longpre, L, Young, P: Cook reducibility is faster than Karp reducibility in NP. J. Comput. Syst. Sci. 41(3), 389-401 (1990)

  8. Shore, GM: Faster than light: photons in gravitational fields - causality, anomalies and horizons. Nucl. Phys. B 460(2), 379-394 (1996)

  9. Shore, GM: Faster than light: photons in gravitational fields. II. Dispersion and vacuum polarisation. Nucl. Phys. B 633(1-2), 271-294 (2002)

  10. Stark, RH: Rates of convergence in numerical solution of the diffusion equation. J. Assoc. Comput. Mach. 3, 29-40 (1956)

  11. Argyros, IK: Iterations converging faster than Newton’s method to the solutions of nonlinear equations in Banach space. Ann. Univ. Sci. Bp. Rolando Eötvös Nomin., Sect. Comput. 11, 97-104 (1991)

  12. Argyros, IK: Sufficient conditions for constructing methods faster than Newton’s. Appl. Math. Comput. 93, 169-181 (1998)

  13. Lucet, Y: Faster than the fast Legendre transform: the linear-time Legendre transform. Numer. Algorithms 16(2), 171-185 (1997)

  14. Berinde, V: Picard iteration converges faster than Mann iteration for a class of quasi-contractive operators. Fixed Point Theory Appl. 2004(2), 97-105 (2004)

  15. Berinde, V, Berinde, M: The fastest Krasnoselskij iteration for approximating fixed points of strictly pseudo-contractive mappings. Carpath. J. Math. 21(1-2), 13-20 (2005)

  16. Berinde, V: A convergence theorem for Mann iteration in the class of Zamfirescu operators. An. Univ. Vest. Timiş., Ser. Mat.-Inform. 45(1), 33-41 (2007)

  17. Babu, GVR, Vara Prasad, KNVV: Mann iteration converges faster than Ishikawa iteration for the class of Zamfirescu operators. Fixed Point Theory Appl. 2006, Article ID 49615 (2006)

  18. Popescu, O: Picard iteration converges faster than Mann iteration for a class of quasi-contractive operators. Math. Commun. 12(2), 195-202 (2007)

  19. Akbulut, S, Ozdemir, M: Picard iteration converges faster than Noor iteration for a class of quasi-contractive operators. Chiang Mai J. Sci. 39(4), 688-692 (2012)

  20. Gorsoy, F, Karakaya, V: A Picard S-hybrid type iteration method for solving a differential equation with retarded argument (2014). arXiv:1403.2546v2 [math.FA]

  21. Hussain, N, Chugh, R, Kumar, V, Rafiq, A: On the rate of convergence of Kirk-type iterative schemes. J. Appl. Math. 2012, Article ID 526503 (2012)

  22. Ozturk Celikler, F: Convergence analysis for a modified SP iterative method. Sci. World J. 2014, Article ID 840504 (2014)

  23. Thakur, D, Thakur, BS, Postolache, M: New iteration scheme for numerical reckoning fixed points of nonexpansive mappings. J. Inequal. Appl. 2014, 328 (2014)

  24. Berinde, V: Iterative Approximation of Fixed Points. Springer, Berlin (2007)

  25. Chugh, R, Kumar, S: On the rate of convergence of some new modified iterative schemes. Am. J. Comput. Math. 3, 270-290 (2013)

  26. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)

  27. Ishikawa, S: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147-150 (1974)

  28. Noor, MA: New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 251, 217-229 (2000)

  29. Agarwal, RP, O’Regan, D, Sahu, DR: Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 8(1), 61-79 (2007)

  30. Abbas, M, Nazir, T: A new faster iteration process applied to constrained minimization and feasibility problems. Mat. Vesn. 66(2), 223-234 (2014)


Acknowledgements

The basic idea of this work was suggested to the fourth author by Professor Mihai Postolache during his visit to University Politehnica of Bucharest in September 2014. The first and fourth authors were supported by Azarbaijan Shahid Madani University.

Author information

Correspondence to Mihai Postolache.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Fathollahi, S., Ghiura, A., Postolache, M. et al. A comparative study on the convergence rate of some iteration methods involving contractive mappings. Fixed Point Theory Appl 2015, 234 (2015). https://doi.org/10.1186/s13663-015-0490-3
