
# Fixed point problems of the Picard-Mann hybrid iterative process for continuous functions on an arbitrary interval

## Abstract

In this paper, we consider the iteration method called the ‘Picard-Mann hybrid iterative process’ for finding a fixed point of continuous functions on an arbitrary interval. We give a necessary and sufficient condition for the convergence of this iteration for continuous functions on an arbitrary interval. Also, we compare the rate of convergence of the Picard-Mann hybrid iteration with that of other iterations and prove that it is better under the same computational cost. Moreover, we present numerical examples.

MSC:26A18, 47H10, 54C05.

## 1 Introduction and preliminaries

Let E be a closed interval on the real line, and let $f:E→E$ be a continuous mapping. A point $p∈E$ is a fixed point of f if $f(p)=p$. We denote the set of fixed points of f by $F(f)$. It is known that if E is also bounded, then $F(f)$ is nonempty.

Iterative methods are popular tools to approximate fixed points of nonlinear mappings. Now, we give some of them. Recall that the normal Mann iteration was introduced by Mann [1] in 1953. Recently, construction of fixed points for nonlinear mappings via the normal Mann iteration has been extensively investigated by many authors. The normal Mann iteration generates a sequence ${ x n }$ in the following manner: $x 1 ∈R$,

$x n + 1 =(1− α n ) x n + α n f( x n )$
(1.1)

for all $n≥1$, where $x 1$ is an arbitrary initial value, f is a real function, and ${ α n }$ is a sequence in $[0,1]$.
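As a concrete sketch (our illustration, not part of the paper), the Mann scheme (1.1) takes only a few lines of Python; here f is the function later used in Example 1, which has fixed point $p=1$, and $α n =1/(n+1)$ is an arbitrary admissible choice.

```python
# Sketch of the normal Mann iteration (1.1).
def mann(f, x1, alpha, steps):
    """Iterate x_{n+1} = (1 - alpha_n) * x_n + alpha_n * f(x_n)."""
    x = x1
    for n in range(1, steps + 1):
        a = alpha(n)
        x = (1 - a) * x + a * f(x)
    return x

# f(x) = (x^2 + 2x + 5)/8 maps [0, 4] into itself; its fixed point is p = 1.
f = lambda x: (x * x + 2 * x + 5) / 8.0
print(mann(f, 3.4, lambda n: 1.0 / (n + 1), 1000))  # drifts slowly toward 1
```

With this diminishing step sequence the error decreases only slowly, which is part of the motivation for the faster schemes recalled below.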

Next, we recall another popular iteration, the Ishikawa iteration. The Ishikawa iteration was introduced by Ishikawa [2] in 1974. The Ishikawa iteration generates a sequence ${ x n }$ in the following manner:

${ x n + 1 = ( 1 − α n ) x n + α n f ( y n ) , y n = ( 1 − β n ) x n + β n f ( x n )$
(1.2)

for all $n≥1$, where $x 1$ is an arbitrary initial value, f is a real function, and ${ α n }$, ${ β n }$ are real sequences in $[0,1]$. In 2000, Noor [3] introduced a new iteration method, that is, defined by $x 1 ∈E$ and

${ x n + 1 = ( 1 − α n ) x n + α n f ( y n ) , y n = ( 1 − β n ) x n + β n f ( z n ) , z n = ( 1 − γ n ) x n + γ n f ( x n )$
(1.3)

for all $n≥1$, where ${ α n }$, ${ β n }$ and ${ γ n }$ are sequences in $[0,1]$. It is easy to see that the Mann and Ishikawa iteration methods are special cases of the Noor iteration method, and the Mann iteration is also a special case of the Ishikawa iteration method.
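For comparison, the Ishikawa (1.2) and Noor (1.3) schemes can be sketched as follows (our illustration; the test function, the one from Example 1 with fixed point $p=1$, and the control sequences are arbitrary admissible choices).

```python
# Sketch of the Ishikawa (1.2) and Noor (1.3) iterations.
def ishikawa(f, x, alpha, beta, steps):
    for n in range(1, steps + 1):
        y = (1 - beta(n)) * x + beta(n) * f(x)
        x = (1 - alpha(n)) * x + alpha(n) * f(y)
    return x

def noor(f, x, alpha, beta, gamma, steps):
    for n in range(1, steps + 1):
        z = (1 - gamma(n)) * x + gamma(n) * f(x)
        y = (1 - beta(n)) * x + beta(n) * f(z)
        x = (1 - alpha(n)) * x + alpha(n) * f(y)
    return x

f = lambda x: (x * x + 2 * x + 5) / 8.0   # fixed point p = 1 on [0, 4]
a = lambda n: 1.0 / (n + 1)
print(ishikawa(f, 3.4, a, a, 500), noor(f, 3.4, a, a, a, 500))
```

Note how setting $β n = γ n =0$ collapses both loops to the Mann update, which is the specialization mentioned above.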

In 1953, Mann proved that if f is a continuous real function on the unit interval with a unique fixed point, then the Mann iteration converges to that fixed point. In 1971, Franks and Marzec [4] removed the condition that f has a unique fixed point. In 1974, for the class of continuous and nondecreasing functions on a closed unit interval, Rhoades [5] proved the convergence of the Mann iteration, and he [6] then extended the convergence results to the Ishikawa iteration. He also proved that the Ishikawa iteration is better than the Mann iteration for this class of mappings. Later, in 1991, Borwein and Borwein [7] proved the convergence of the Mann iteration for continuous mappings on a bounded closed interval. In 2006, Qing and Qihou [8] extended these results to an arbitrary interval and to the Ishikawa iteration and gave a necessary and sufficient condition for the convergence of the Ishikawa iteration on an arbitrary interval. Recently, Phuengrattana and Suantai [9] proved the convergence of (1.3) to a fixed point of a continuous function f on an arbitrary interval under suitable conditions. They also introduced the following iteration method, called the SP-iteration method:

${ x n + 1 = ( 1 − α n ) y n + α n f ( y n ) , y n = ( 1 − β n ) z n + β n f ( z n ) , z n = ( 1 − γ n ) x n + γ n f ( x n )$
(1.4)

for all $n≥1$, where $x 1$ is an arbitrary initial value, and ${ α n }$, ${ β n }$ and ${ γ n }$ are sequences in $[0,1]$. Under suitable conditions, they proved that the iteration method (1.4) converges to a fixed point of a continuous function f on an arbitrary interval $E⊂R$. Moreover, they also compared the convergence speed of the Mann, Ishikawa, Noor and SP-iterations and concluded that the SP-iteration is better than the others.
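The SP-scheme (1.4) differs from the Noor scheme only in that each stage feeds the next; a sketch (ours, again with the Example 1 function and an arbitrary admissible control sequence):

```python
# Sketch of the SP-iteration (1.4): the output of each stage is the
# input of the next one.
def sp(f, x, alpha, beta, gamma, steps):
    for n in range(1, steps + 1):
        z = (1 - gamma(n)) * x + gamma(n) * f(x)
        y = (1 - beta(n)) * z + beta(n) * f(z)
        x = (1 - alpha(n)) * y + alpha(n) * f(y)
    return x

f = lambda x: (x * x + 2 * x + 5) / 8.0   # fixed point p = 1 on [0, 4]
a = lambda n: 1.0 / (n + 1)
print(sp(f, 3.4, a, a, a, 300))
```

Each SP pass evaluates f three times, a point that becomes relevant in the cost comparison of Section 3.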

Let E be a closed interval on the real line (can be unbounded), and let $f:E→E$ be a continuous function. Recently, Khan [10] and Sahu [11] independently introduced the following iterative process, which Khan referred to as the Picard-Mann hybrid iterative process (PMH):

${ x n + 1 = f ( y n ) , y n = ( 1 − β n ) x n + β n f ( x n )$
(1.5)

for all $n≥1$, where $x 1$ is an arbitrary initial value, and ${ β n }$ is a sequence in $[0,1]$.
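In code, one PMH pass is simply a Mann averaging step followed by a full Picard step (a sketch, ours; the test function is the one from Example 1 and $β n =1/( n 2 +1)$ is an arbitrary admissible choice):

```python
# Sketch of the Picard-Mann hybrid (PMH) process (1.5).
def pmh(f, x, beta, steps):
    for n in range(1, steps + 1):
        y = (1 - beta(n)) * x + beta(n) * f(x)   # Mann averaging step
        x = f(y)                                 # Picard step
    return x

f = lambda x: (x * x + 2 * x + 5) / 8.0   # fixed point p = 1 on [0, 4]
print(pmh(f, 3.4, lambda n: 1.0 / (n * n + 1), 50))  # very close to 1
```

Like the MannII scheme of Section 3, each PMH pass evaluates f twice.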

The purpose of this paper is to prove that the PMH-iteration process converges to a fixed point of a continuous function f on an arbitrary interval E, and to compare the convergence speed of (1.5) with that of the other iteration processes under suitable conditions and the same computational cost.

## 2 Convergence theorems

In this section, we prove convergence theorems for the PMH-iteration process.

Lemma 1 Let E be a closed interval on the real line (can be unbounded), let $f:E→E$ be a continuous function. For $x 1 ∈E$, let the PMH-iteration ${ x n }$ be defined by (1.5), where ${ β n }$ is a sequence in $[0,1]$ such that $lim n → ∞ β n =0$. If $x n →a$, then a is a fixed point of f.

Proof Suppose that $f(a)≠a$. Since $x n →a$ and $f(x)$ is continuous, we have $f( x n )→f(a)$; in particular, $f( x n )$ is bounded. Since $y n =(1− β n ) x n + β n f( x n )$ and $β n →0$, we have $y n →a$. Let $p k =f( y k )− x k$. Then we write

$lim k → ∞ p k = lim k → ∞ ( f ( y k ) − x k ) =f(a)−a=p≠0.$

Using $x n + 1 =f( y n )$, we obtain

$x n + 1 − x n =f( y n )− x n ,$

which implies that

$x n = x 1 + ∑ k = 1 n − 1 ( f ( y k ) − x k ) = x 1 + ∑ k = 1 n − 1 p k .$

Since $p k →p≠0$, the series $∑ k = 1 ∞ p k$ diverges, so ${ x n }$ must diverge, which contradicts $x n →a$. Thus, $f(a)=a$. □

Lemma 2 Let E be a closed interval on the real line (can be unbounded), let $f:E→E$ be a continuous and nondecreasing function. For $x 1 ∈E$, let the PMH-iteration ${ x n }$ be defined by (1.5), where ${ β n }$ is a sequence in $[0,1]$ such that $lim n → ∞ β n =0$. If ${ x n }$ is bounded, then it is convergent.

Proof Suppose that ${ x n }$ is not convergent. Let $a= lim inf n → ∞ x n$ and $b= lim sup n → ∞ x n$. Then $a<b$. First, we prove that if $a<m<b$, then $f(m)=m$. Suppose that $f(m)≠m$. Without loss of generality, we may suppose that $f(m)−m>0$. Since $f(x)$ is a continuous function, there exists δ with $0<δ<b−a$ such that

$f(x)−x>0$ for all x with $|x−m|<δ$.
(2.1)

Since ${ x n }$ is bounded by hypothesis, it belongs to a bounded closed interval. Since $f(x)$ is continuous, $f( x n )$ belongs to another bounded closed interval, so $f( x n )$ is bounded. Since $y n =(1− β n ) x n + β n f( x n )$, we obtain that ${ y n }$ is bounded, and thus $f( y n )$ is bounded. Using $y n − x n = β n (f( x n )− x n )$ and $lim n → ∞ β n =0$, we get that $y n − x n →0$ as $n→∞$. For the real numbers $f( x 1 )$ and $x 1$, there are three cases: $f( x 1 )> x 1$, $f( x 1 )< x 1$ and $f( x 1 )= x 1$. Let $f( x 1 )> x 1$. From the definition of ${ x n }$, we have $f( x 1 )≥ y 1 ≥ x 1$. Since f is nondecreasing, we write $f( y 1 )= x 2 ≥f( x 1 )≥ y 1 ≥ x 1$. Again, since f is nondecreasing, we get $f( x 2 )≥ x 2 =f( y 1 )$. From the definition of ${ x n }$, we get $f( x 2 )≥ y 2 ≥ x 2$. This implies that $f( y 2 )= x 3 ≥f( x 2 )≥ y 2 ≥ x 2$. Thus, we obtain that $f( y 2 )≥ y 2$. Continuing in this way, we have that $f( y n )≥ y n$ for all $n≥1$. Hence, from the definition of ${ x n }$, we get that

$x n − x n + 1 = y n −f( y n )+ β n ( x n − f ( x n ) ) ≤ β n ( x n − f ( x n ) ) .$

Thus, using $lim n → ∞ β n =0$, we obtain that

$lim n → ∞ ( x n − x n + 1 )=0.$

In a similar way, for the case $f( x 1 )< x 1$, we have

$x n + 1 − x n =f( y n )− y n + β n ( f ( x n ) − x n ) ≤ β n ( f ( x n ) − x n ) .$

Again, using $lim n → ∞ β n =0$, we get the desired conclusion. For the case $f( x 1 )= x 1$, since we supposed that $f(m)≠m$ for the real number m such that $a<m<b$, either $x 1 <a$ or $x 1 >b$. Now, without loss of generality, we may suppose that $x 1 <a$. Since $f( x 1 )= x 1$, (1.5) implies that $x 2 = x 1$, and by induction, $x n + 1 = x n$ for all $n≥1$, which yields

$lim n → ∞ ( x n + 1 − x n )=0.$

Hence, in all the three cases, we obtain

$lim n → ∞ ( x n + 1 − x n )=0.$

Thus, there exists a positive integer N such that

$| x n + 1 − x n |< δ 2 ,| y n − x n |< δ 2$ for all $n>N$.
(2.2)

Since $b= lim sup n → ∞ x n >m$, there exists $n k 1 >N$ such that $x n k 1 >m$. Let $k= n k 1$; then $x k >m$. For $x k$, there are only two cases:

1. (i)

If $x k >m+ δ 2$, then $x k + 1 > x k − δ 2 ≥m$ using (2.2). So, $x k + 1 >m$.

2. (ii)

If $m< x k ≤m+ δ 2$, then $m− δ 2 < y k <m+δ$ using (2.2). So, we have $| x k −m|≤ δ 2 <δ$ and $| y k −m|<δ$. Using (2.1), we get

$f( x k )− x k >0,f( y k )− y k >0.$
(2.3)

From (2.3), we get that

$y k − x k = β k [ f ( x k ) − x k ] ≥0,$

and hence,

$x k + 1 − x k =f( y k )− x k =f( y k )− y k + y k − x k >0.$

So, we obtain $x k + 1 > x k >m$.

In conclusion, by (i) and (ii), we have $x k + 1 >m$. Analogously, we have $x k + 2 >m$, $x k + 3 >m$,…. Thus, we get $x n >m$ for all $n>k= n k 1$. So, $a= lim k → ∞ x n k ≥m$, which contradicts $a<m$. Thus, $f(m)=m$. Now, we consider the following two cases:

(I) There exists $x M$ such that $a< x M <b$. From the proof above, we obtain $f( x M )= x M$. It follows that

$y M = ( 1 − β M ) x M + β M f ( x M ) = x M , x M + 1 = f ( y M ) = f ( x M ) = x M .$

Analogously, we have $x M = x M + 1 = x M + 2 =⋯$ , so $x n → x M$, which contradicts the assumption that ${ x n }$ is not convergent.

(II) For all n, $x n ≤a$ or $x n ≥b$. Since $b−a>0$ and $lim n → ∞ | x n + 1 − x n |=0$, there exists $N 0$ such that

$| x n + 1 − x n |< b − a 2$

for all $n> N 0$. This implies that either $x n ≤a$ for all $n> N 0$ or $x n ≥b$ for all $n> N 0$. If $x n ≤a$ for all $n> N 0$, then $b= lim sup n → ∞ x n ≤a$, which contradicts $a<b$. If $x n ≥b$ for all $n> N 0$, then $a= lim inf n → ∞ x n ≥b$, which again contradicts $a<b$. So, the assumption is not true, and $x n →a$ as $n→∞$. □

Theorem 1 Let E be a closed interval on the real line (can be unbounded), let $f:E→E$ be a continuous and nondecreasing function. For $x 1 ∈E$, let the PMH-iteration ${ x n }$ be defined by (1.5), where ${ β n }$ is a sequence in $[0,1]$ such that $lim n → ∞ β n =0$. Then ${ x n }$ converges to a fixed point of f if and only if ${ x n }$ is bounded.

Proof It is clear that if ${ x n }$ converges to a fixed point of f, then it is bounded. Now, assume that ${ x n }$ is bounded. Then it follows from Lemma 1 and Lemma 2 that ${ x n }$ is convergent to a fixed point of f. □

The following result is obtained directly from Theorem 1.

Corollary 1 Let $f:[a,b]→[a,b]$ be a continuous and nondecreasing function. For $x 1 ∈[a,b]$, let the PMH-iteration ${ x n }$ be defined by (1.5), where ${ β n }$ is a sequence in $[0,1]$ such that $lim n → ∞ β n =0$. Then ${ x n }$ converges to a fixed point of f.
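As an empirical sanity check of Corollary 1 (our illustration, not part of the paper), one can run (1.5) from several starting points of a bounded closed interval and confirm that every run settles at a fixed point; here f is the function from Example 1 on $[0,4]$ and $β n =1/( n 2 +1)→0$.

```python
# Empirical check of Corollary 1: the PMH sequence converges for every start.
f = lambda x: (x * x + 2 * x + 5) / 8.0   # nondecreasing self-map of [0, 4]
limits = []
for x1 in (0.0, 0.5, 1.0, 2.7, 4.0):
    x = x1
    for n in range(1, 80):
        b = 1.0 / (n * n + 1)
        x = f((1 - b) * x + b * f(x))     # one PMH pass
    limits.append(x)
print(limits)  # every limit is (numerically) the fixed point p = 1
```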

## 3 Rate of convergence

Now, we give some definitions, lemmas and theorems about the convergence speed of iterative schemes and compare them with each other. We also support our theorems with numerical examples.

Definition 1 [6]

Let E be a closed interval on the real line, and let $f:E→E$ be a continuous function. Suppose that ${ x n }$ and ${ y n }$ are two iterations which converge to a fixed point p of f. Then ${ x n }$ is said to be better than ${ y n }$ if

$| x n −p|≤| y n −p|$

for all $n≥1$.

A sequence ${ x n }$ converging to a point p is said to converge linearly to p if there exists a constant $μ∈(0,1)$ such that

$| x n + 1 −p|/| x n −p|≤μ.$

The number μ is called the rate of convergence.
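The definition can be watched numerically along a convergent run (our illustration, using the Example 1 function, for which $p=1$): the successive error ratios settle near a constant below 1, consistent with linear convergence.

```python
# Successive error ratios |x_{n+1} - p| / |x_n - p| along a PMH run (1.5).
f = lambda x: (x * x + 2 * x + 5) / 8.0   # fixed point p = 1
p, x, ratios = 1.0, 3.4, []
for n in range(1, 26):
    b = 1.0 / (n * n + 1)
    x_next = f((1 - b) * x + b * f(x))
    ratios.append(abs(x_next - p) / abs(x - p))
    x = x_next
print(ratios[-1])  # settles near f'(p) = 1/2 for this particular f
```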

In 2011, Phuengrattana and Suantai [9] proved the following theorem about the convergence speed of the Mann, Ishikawa, Noor and SP-iteration processes.

Theorem 2 [9]

Let E be a closed interval on the real line, and let $f:E→E$ be a continuous and nondecreasing function such that $F(f)$ is nonempty and bounded. For $u 1 = s 1 = w 1 = x 1 ∈E$, let ${ u n }$, ${ s n }$, ${ w n }$, and ${ x n }$ be the sequences defined by the Mann, Ishikawa, Noor iterations and SP-iteration method, respectively. Let ${ α n }$, ${ β n }$, ${ γ n }$ be sequences in $[0,1)$. Then the following are satisfied:

1. (i)

The Ishikawa iteration ${ s n }$ converges to $p∈F(f)$ if and only if the Mann iteration ${ u n }$ converges to p. Moreover, the Ishikawa iteration converges faster than the Mann iteration.

2. (ii)

The Noor iteration ${ w n }$ converges to $p∈F(f)$ if and only if the Ishikawa iteration ${ s n }$ converges to p. Moreover, the Noor iteration converges faster than the Ishikawa iteration.

3. (iii)

The SP-iteration ${ x n }$ converges to $p∈F(f)$ if and only if the Noor iteration ${ w n }$ converges to p. Moreover, the SP-iteration converges faster than the Noor iteration.

In contrast to the theorem above, in 2013, Dong et al. [12] showed that the Mann iteration is better than the others under the same computational cost. They also stated that one-step SP-iteration is exactly three-step Mann iteration (see [12], Remark 2.1) and gave the following remark in this regard.

Remark 1 [12]

In Theorem 2 [[12], Proposition 3.1] above, Phuengrattana and Suantai [9] compared the rate of convergence of the Mann, Ishikawa, Noor iterations and the SP-iteration and drew the conclusion that the SP-iteration is better than the other iterations, the Noor iteration is better than the Ishikawa iteration, and the Ishikawa iteration is better than the Mann iteration. However, we know from [[12], Remark 2.1] that one-step SP-iteration is three-step Mann iteration. Clearly, the computation cost of one-step Ishikawa iteration and one-step Noor iteration equals that of two-step Mann iteration and three-step Mann iteration, respectively. So, it seems more reasonable to compare the rate of convergence of the Mann, Ishikawa and Noor iterations under the same computation cost. In this sense, from Theorem 2(iii) [[12], Proposition 3.1(iii)], the Mann iteration is better than the Ishikawa and Noor iterations.

They also stated in [[12], Remark 3.3] that under the same computational cost, the Mann iteration is better than the Ishikawa and Noor iterations, and the Ishikawa iteration is better than the Noor iteration.

With reference to the remark above, it is more reasonable to compare the rate of convergence of the PMH-iteration method with that of the two-step Mann iteration method (denoted by MannII), which is defined by

${ u n + 1 = ( 1 − α n ) v n + α n f ( v n ) , v n = ( 1 − β n ) u n + β n f ( u n ) ,$
(3.1)

where ${ α n }$ and ${ β n }$ are sequences in $[0,1]$. Clearly, the computation cost of the PMH-iteration equals that of the MannII iteration.
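A sketch of MannII (3.1) makes the cost comparison concrete (our illustration; the test function is the one from Example 1, and the control sequences are arbitrary admissible choices).

```python
# Two-step Mann iteration MannII (3.1): two evaluations of f per pass,
# the same per-pass cost as the PMH process (1.5).
def mann2(f, x, alpha, beta, steps):
    for n in range(1, steps + 1):
        v = (1 - beta(n)) * x + beta(n) * f(x)
        x = (1 - alpha(n)) * v + alpha(n) * f(v)
    return x

f = lambda x: (x * x + 2 * x + 5) / 8.0   # fixed point p = 1 on [0, 4]
print(mann2(f, 3.4, lambda n: 1.0 / (n + 1), lambda n: 1.0 / (n * n + 1), 500))
```

Since one MannII pass and one PMH pass each call f twice, comparing them pass-for-pass compares equal computational cost.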

Now, we give lemmas and propositions to compare the convergence speed of the PMH and MannII iteration methods.

Lemma 3 Let E be a closed interval on the real line, and let $f:E→E$ be a continuous and nondecreasing function. Let ${ α n }$, ${ β n }$ be sequences in $[0,1)$. Let the MannII iteration ${ u n }$ and the PMH-iteration ${ x n }$ be defined by (3.1) and (1.5), respectively. Then the following hold:

1. (i)

If $f( u 1 )< u 1$, then $f( u n )< u n$ for all $n≥1$ and ${ u n }$ is nonincreasing.

2. (ii)

If $f( u 1 )> u 1$, then $f( u n )> u n$ for all $n≥1$ and ${ u n }$ is nondecreasing.

3. (iii)

If $f( x 1 )< x 1$, then $f( x n )≤ x n$ for all $n≥1$ and ${ x n }$ is nonincreasing.

4. (iv)

If $f( x 1 )> x 1$, then $f( x n )≥ x n$ for all $n≥1$ and ${ x n }$ is nondecreasing.

Proof (i) Let $f( u 1 )< u 1$. Then, from the definition of ${ u n }$, we get $f( u 1 )< v 1 ≤ u 1$. Since f is nondecreasing, we have $f( v 1 )≤f( u 1 )< v 1 ≤ u 1$. This implies that $f( v 1 )< u 2 ≤ v 1$. Since f is nondecreasing, we have $f( u 2 )≤f( v 1 )< u 2$. Thus, $f( u 2 )< u 2$. Assume that $f( u k )< u k$. So, we write $f( u k )< v k ≤ u k$. Again, using that f is nondecreasing, we have $f( v k )≤f( u k )< v k ≤ u k$. This implies that $f( v k )< u k + 1 ≤ v k$. Hence, $f( u k + 1 )≤f( v k )< u k + 1$. Thus, we get $f( u k + 1 )< u k + 1$. By mathematical induction, we obtain that $f( u n )< u n$ for all $n≥1$. This implies that $f( u n )< v n ≤ u n$, so we can write $f( v n )≤f( u n )< v n ≤ u n$. From the definition of MannII, we get $f( v n )< u n + 1 ≤ v n ≤ u n$. It follows that $u n + 1 ≤ u n$ for all $n≥1$, that is, ${ u n }$ is a nonincreasing sequence.

(ii) By using the same argument as in (i), we get the desired conclusion.

(iii) Let $f( x 1 )< x 1$. Then, from the definition of ${ x n }$, we get that $f( x 1 )< y 1 ≤ x 1$. Since f is nondecreasing, we have $f( y 1 )= x 2 ≤f( x 1 )< y 1 ≤ x 1$. This implies that $f( x 2 )≤f( y 1 )$. Thus, $f( x 2 )≤ x 2$. Assume that $f( x k )≤ x k$. So, we write $f( x k )≤ y k ≤ x k$. Since f is nondecreasing, we have $f( y k )= x k + 1 ≤f( x k )≤ y k ≤ x k$. This implies that $f( x k + 1 )≤f( y k )$. Thus, $f( x k + 1 )≤ x k + 1$. By mathematical induction, we obtain that $f( x n )≤ x n$, for all $n≥1$. It follows that $x n + 1 ≤ x n$, for all $n≥1$. So, we get that ${ x n }$ is a nonincreasing sequence.

(iv) In a similar way as in the proof of (iii), we get the desired conclusion. □

Lemma 4 Let E be a closed interval on the real line, and let $f:E→E$ be a continuous and nondecreasing function. Let the MannII iteration ${ u n }$ and the PMH-iteration ${ x n }$ be sequences defined by (3.1), (1.5), respectively, where ${ α n }$, ${ β n }$ are sequences in $[0,1)$. Then the following are satisfied:

1. (i)

If $p∈F(f)$ with $u 1 >p$, then $u n >p$ for all $n≥1$.

2. (ii)

If $p∈F(f)$ with $u 1 <p$, then $u n <p$ for all $n≥1$.

3. (iii)

If $p∈F(f)$ with $x 1 >p$, then $x n ≥p$ for all $n≥1$.

4. (iv)

If $p∈F(f)$ with $x 1 <p$, then $x n ≤p$ for all $n≥1$.

Proof (i) Since $p∈F(f)$ with $u 1 >p$, and f is a nondecreasing function, we have $f( u 1 )≥f(p)=p$. Thus, from the definition of ${ u n }$, we get $v 1 >p$. This implies that $f( v 1 )≥p$. So, we get $u 2 >p$. Assume that $u k >p$. So, we have $f( u k )≥p$. From the definition of ${ u n }$, we get $v k >p$. Since f is nondecreasing, we get $f( v k )≥p$. So, we have $u k + 1 >p$. By mathematical induction, we obtain that $u n >p$ for all $n≥1$.

(ii) In a similar way as in the proof of (i), we get the desired conclusion.

(iii) Since $p∈F(f)$ with $x 1 >p$, and f is a nondecreasing function, we have $f( x 1 )≥f(p)=p$. Thus, from the definition of ${ x n }$, we get $y 1 >p$. This implies that $f( y 1 )= x 2 ≥p$. Assume that $x k ≥p$. So, we have $f( x k )≥p$. From the definition of ${ x n }$, we have $y k ≥p$. Since f is nondecreasing, we get $f( y k )= x k + 1 ≥p$. By mathematical induction, we obtain that $x n ≥p$ for all $n≥1$.

(iv) By using the same argument as in (iii), we get the desired conclusion. □

Lemma 5 Let E be a closed interval on the real line, and let $f:E→E$ be a continuous and nondecreasing function. For $u 1 = x 1$, let the MannII iteration ${ u n }$ and the PMH-iteration ${ x n }$ be the sequences defined by (3.1) and (1.5), respectively, where ${ α n }$, ${ β n }$ are sequences in $[0,1)$. Then the following are satisfied:

1. (i)

If $f( u 1 )< u 1$, then $x n < u n$, for all $n≥1$,

2. (ii)

If $f( u 1 )> u 1$, then $x n > u n$, for all $n≥1$.

Proof (i) Let $f( u 1 )< u 1$. Since $u 1 = x 1$, we have $f( x 1 )< x 1$. Using (3.1) and (1.5), we get

$y 1 − v 1 =(1− β 1 )( x 1 − u 1 )+ β 1 ( f ( x 1 ) − f ( u 1 ) ) =0.$

From the proof of Lemma 3(i), we know that $f( v 1 )− v 1 =f( y 1 )− v 1 <0$. This implies that

$x 2 − u 2 =(1− α 1 ) ( f ( y 1 ) − v 1 ) + α 1 ( f ( y 1 ) − f ( v 1 ) ) <0.$

Assume that $x k < u k$. Since f is nondecreasing, we get $f( x k )≤f( u k )$. From the definition of ${ x n }$ and ${ u n }$, we get

$y k − v k =(1− β k )( x k − u k )+ β k ( f ( x k ) − f ( u k ) ) <0.$

Since f is nondecreasing, we get $f( y k )≤f( v k )$. On the other hand, from the proof of Lemma 3(i), we know that $f( v k )< v k$. This implies that

$x k + 1 − u k + 1 = ( 1 − α k ) ( f ( y k ) − v k ) + α k ( f ( y k ) − f ( v k ) ) = ( 1 − α k ) ( f ( y k ) − f ( v k ) + f ( v k ) − v k ) + α k ( f ( y k ) − f ( v k ) ) < 0 .$

By mathematical induction, we obtain that $x n < u n$ for all $n≥1$.

(ii) Suppose that $f( u 1 )> u 1$. In a similar way as in proof of (i), we can show that $x n > u n$, for all $n≥1$. □

Proposition 1 Let E be a closed interval on the real line, and let $f:E→E$ be a continuous and nondecreasing function such that $F(f)$ is nonempty and bounded with $x 1 >sup{p∈E:p=f(p)}$. Let ${ α n }$, ${ β n }$ be sequences in $[0,1)$. If $f( x 1 )> x 1$, then the sequence ${ x n }$ defined by (3.1) or (1.5) does not converge to a fixed point of f.

Proof By Lemma 3(ii) or (iv), ${ x n }$ is a nondecreasing sequence. From the hypothesis, since $x 1 >sup{p∈E:p=f(p)}$, it is clear that ${ x n }$ does not converge to a fixed point of f. □

Proposition 2 Let E be a closed interval on the real line, and let $f:E→E$ be a continuous and nondecreasing function such that $F(f)$ is nonempty and bounded with $x 1 <inf{p∈E:p=f(p)}$. Let ${ α n }$, ${ β n }$ be sequences in $[0,1)$. If $f( x 1 )< x 1$, then the sequence ${ x n }$ defined by (3.1) or (1.5) does not converge to a fixed point of f.

Proof By Lemma 3(i) or (iii), ${ x n }$ is a nonincreasing sequence. From the hypothesis, since $x 1 <inf{p∈E:p=f(p)}$, it is clear that ${ x n }$ does not converge to a fixed point of f. □

Theorem 3 Let E be a closed interval on the real line, and let $f:E→E$ be a continuous and nondecreasing function such that $F(f)$ is nonempty and bounded. For $u 1 = x 1$, let the MannII iteration ${ u n }$ and the PMH-iteration ${ x n }$ be sequences defined by (3.1), (1.5), respectively, where ${ α n }$, ${ β n }$ are sequences in $[0,1)$. If ${ u n }$ converges to $p∈F(f)$, then ${ x n }$ converges to $p∈F(f)$. Moreover, ${ x n }$ is better than ${ u n }$.

Proof Let $U=sup{p∈E:p=f(p)}$ and $L=inf{p∈E:p=f(p)}$. Suppose that the MannII iteration ${ u n }$ converges to $p∈F(f)$. We shall divide our proof into three cases.

Case 1: Let $U< u 1 = x 1$. From Proposition 1, we get $f( x 1 )< x 1$ and $f( u 1 )< u 1$. It follows from Lemma 5(i) that $x n < u n$ for all $n≥1$. We note that $U< x 1$, and by using (1.5) and mathematical induction, we can show that $U≤ x n$ for all $n≥1$. Then we have $0≤ x n −p< u n −p$, so

$| x n −p|<| u n −p|$
(3.2)

for all $n≥1$. This implies that ${ x n }$ converges to p. Moreover, from (3.2), it is clear that ${ x n }$ is better than ${ u n }$.

Case 2: Let $L> u 1 = x 1$. From Proposition 2, we get $f( x 1 )> x 1$ and $f( u 1 )> u 1$. This implies by Lemma 5(ii) that $x n > u n$ for all $n≥1$. We note that $L> x 1$, and by using (1.5) and mathematical induction, we can show that $L≥ x n$ for all $n≥1$. Then we have $| x n −p|<| u n −p|$, that is, ${ x n }$ converges to p. Moreover, ${ x n }$ is better than ${ u n }$.

Case 3: Let $L≤ x 1 = u 1 ≤U$. Suppose that $f( u 1 )≠ u 1$. If $f( u 1 )< u 1$, then from Lemma 3, we get that ${ u n }$ is nonincreasing with limit p. By Lemma 4(i), (iii) and Lemma 5(i), we have that $p≤ x n < u n$ for all $n≥1$. So, it follows that $| x n −p|<| u n −p|$ for all $n≥1$. Therefore, we obtain that ${ x n }$ converges to p and is better than ${ u n }$. If $f( u 1 )> u 1$, then from Lemma 3, we get that ${ u n }$ is nondecreasing with limit p. By Lemma 4(ii), (iv) and Lemma 5(ii), we have that $p≥ x n > u n$ for all $n≥1$. So, it follows that $| x n −p|<| u n −p|$ for all $n≥1$. Therefore, we obtain that ${ x n }$ converges to p. Moreover, ${ x n }$ is better than ${ u n }$. □

Remark 2 It follows from Theorem 3 and [[12], Remark 3.3] that the PMH-iteration is better than the Mann, Ishikawa, Noor and SP-iterations defined by (1.1), (1.2), (1.3) and (1.4), respectively, under the same computational cost.

Now, we give an example comparing the convergence speed of the MannII and PMH-iterations for a given continuous and nondecreasing function on an arbitrary interval.

Example 1 Let $f:[0,4]→[0,4]$ be defined by $f(x)=( x 2 +2x+5)/8$. Then it is clear that f is a continuous and nondecreasing function with the fixed point $p=1$. In Table 1, the comparison of the convergence of the MannII and PMH-iterations is given with the initial point $u 1 = x 1 =3.4$ and the sequences $α n = 1 n + 1$, $β n = 1 n 2 + 1$. From Table 1, we see that the PMH-iteration is better than the MannII iteration. Moreover, the sequence ${ x n }$ seems to converge linearly.
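The comparison of Example 1 can be reproduced with a few lines (our sketch; we read the function as $f(x)=( x 2 +2x+5)/8$, and the printed values may differ from Table 1 in the last digits because of rounding):

```python
# MannII (3.1) versus PMH (1.5) on f(x) = (x^2 + 2x + 5)/8 with
# u1 = x1 = 3.4, alpha_n = 1/(n + 1), beta_n = 1/(n^2 + 1); fixed point p = 1.
f = lambda x: (x * x + 2 * x + 5) / 8.0
u = x = 3.4
for n in range(1, 16):
    alpha, beta = 1.0 / (n + 1), 1.0 / (n * n + 1)
    v = (1 - beta) * u + beta * f(u)      # MannII pass
    u = (1 - alpha) * v + alpha * f(v)
    y = (1 - beta) * x + beta * f(x)      # PMH pass
    x = f(y)
    print(n + 1, u, x)                    # the PMH column approaches 1 much faster
```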

The convergence speed of iteration methods depends on the choice of the control sequences. We now give such a theorem for the PMH-iteration method.

Theorem 4 Let E be a closed interval on the real line, and let $f:E→E$ be a continuous and nondecreasing function such that $F(f)$ is nonempty and bounded. Let ${ β n }$ and ${ β n ∗ }$ be sequences in $[0,1)$ such that $β n ≤ β n ∗$ for all $n≥1$. Let ${ x n }$ and ${ x n ∗ }$ be defined by (1.5) generated by ${ β n }$ and ${ β n ∗ }$, respectively, and $x 1 = x 1 ∗$. If ${ x n }$ converges to $p∈F(f)$, then ${ x n ∗ }$ converges to $p∈F(f)$. Moreover, ${ x n ∗ }$ is better than ${ x n }$.

Proof Let $U=sup{p∈E:p=f(p)}$ and $L=inf{p∈E:p=f(p)}$. Suppose that ${ x n }$ converges to $p∈F(f)$. We shall divide our proof into three cases.

Case 1: Let $U< x 1 = x 1 ∗$. By Proposition 1, we have $f( x 1 )< x 1$. Then Lemma 3(iii) implies that $f( x n )≤ x n$ for all $n≥1$. Using (1.5), we obtain that $f( y n )≤ y n$ for all $n≥1$. It follows from (1.5) that

$y 1 ∗ − y 1 = x 1 ∗ − x 1 + ( β 1 ∗ − β 1 ) ( f ( x 1 ) − x 1 ) ≤0.$

Since f is nondecreasing function, we get $f( y 1 ∗ )≤f( y 1 )$. Thus, $x 2 ∗ ≤ x 2$. Now, assume that $x k ∗ ≤ x k$. Since $f( x k ∗ )≤f( x k )$, we have

$y k ∗ − y k = ( 1 − β k ∗ ) x k ∗ + β k ∗ f ( x k ∗ ) − ( ( 1 − β k ) x k + β k f ( x k ) ) ≤ ( 1 − β k ∗ ) ( x k ∗ − x k ) + β k ∗ ( f ( x k ∗ ) − f ( x k ) ) ≤ 0 .$

Therefore, $y k ∗ ≤ y k$, and so $f( y k ∗ )≤f( y k )$. Thus, we get $x k + 1 ∗ ≤ x k + 1$. By mathematical induction, we have $x n ∗ ≤ x n$ for all $n≥1$. Using $U< x 1 ∗$ and the definition of ${ x n ∗ }$, by mathematical induction, we can show that $U≤ x n ∗$. Since $p≤ x n ∗ ≤ x n$, we get $| x n ∗ −p|≤| x n −p|$ for all $n≥1$; hence ${ x n ∗ }$ converges to p and is better than ${ x n }$.

Case 2: Let $x 1 = x 1 ∗ <L$. By Proposition 2, we get $f( x 1 )> x 1$. As in Case 1, we can show that $x n ∗ ≥ x n$ for all $n≥1$. Since $x 1 ∗ <L$, by using (1.5) and mathematical induction, it is easy to see that $x n ∗ ≤L$. This implies that $| x n ∗ −p|≤| x n −p|$ for all $n≥1$, that is, ${ x n ∗ }$ is better than ${ x n }$.

Case 3: Let $L≤ x 1 = x 1 ∗ ≤U$. Assume that $f( x 1 )≠ x 1$. If $f( x 1 )< x 1$, then by Lemma 3(iii), ${ x n }$ is a nonincreasing sequence with limit p. So, it follows from Lemma 4(iii) that $p≤ x n ∗$ for all $n≥1$. As in Case 1, we can show that $x n ∗ ≤ x n$ for all $n≥1$. So, we have $p≤ x n ∗ ≤ x n$. This implies that $| x n ∗ −p|≤| x n −p|$ for all $n≥1$, that is, ${ x n ∗ }$ is better than ${ x n }$. If $f( x 1 )> x 1$, then by Lemma 3(iv), ${ x n }$ is a nondecreasing sequence with limit p. So, it follows from Lemma 4(iv) that $p≥ x n ∗$ for all $n≥1$. As in Case 1, we can show that $x n ∗ ≥ x n$ for all $n≥1$. So, we have $p≥ x n ∗ ≥ x n$. This implies that $| x n ∗ −p|≤| x n −p|$ for all $n≥1$, that is, ${ x n ∗ }$ is better than ${ x n }$. □

Next, we present a numerical example.

Example 2 Let $f:[0,8]→[0,8]$ be defined by $f(x)=\sqrt{8 x 2 +9}/3$. Then it is easy to see that f is a continuous and nondecreasing function with the fixed point $p=3$. Let $β n = 1 n 2 + 1$ and $β n ∗ = 1 n 2 + 1$. It is clear that $β n ≤ β n ∗$ for all $n≥1$. The comparison of the convergence speed of the PMH-iterations with these control sequences is given in Table 2 with the initial point $x 1 = x 1 ∗ =7$. From Table 2, we see that ${ x n ∗ }$ is better than ${ x n }$.
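Theorem 4 can also be checked numerically on Example 2 (our sketch). We read the function as $f(x)=\sqrt{8 x 2 +9}/3$, consistent with the stated fixed point $p=3$; since the two control sequences print identically in this copy of the text, the larger sequence $β n ∗ =1/(n+1)$ below is a hypothetical choice satisfying $β n =1/( n 2 +1)≤ β n ∗$.

```python
import math

# Two PMH runs (1.5) with beta_n = 1/(n^2 + 1) and the (hypothetical)
# larger sequence beta_n* = 1/(n + 1); fixed point p = 3, x1 = x1* = 7.
f = lambda x: math.sqrt(8 * x * x + 9) / 3.0
x = xs = 7.0
for n in range(1, 200):
    b, bs = 1.0 / (n * n + 1), 1.0 / (n + 1)
    x = f((1 - b) * x + b * f(x))         # run with beta_n
    xs = f((1 - bs) * xs + bs * f(xs))    # run with beta_n*
print(x, xs)  # the beta_n* run is at least as close to 3 at every step
```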

## References

1. Mann WR: Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4: 506–510. 10.1090/S0002-9939-1953-0054846-3

2. Ishikawa S: Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44: 147–150. 10.1090/S0002-9939-1974-0336469-5

3. Noor MA: New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251: 217–229. 10.1006/jmaa.2000.7042

4. Franks RL, Marzec RP: A theorem on mean value iterations. Proc. Am. Math. Soc. 1971, 30: 324–326. 10.1090/S0002-9939-1971-0280656-9

5. Rhoades BE: Fixed point iterations using infinite matrices. Trans. Am. Math. Soc. 1974, 196: 161–176.

6. Rhoades BE: Comments on two fixed point iteration methods. J. Math. Anal. Appl. 1976, 56: 741–750. 10.1016/0022-247X(76)90038-X

7. Borwein D, Borwein J: Fixed point iterations for real functions. J. Math. Anal. Appl. 1991, 157(1):112–126. 10.1016/0022-247X(91)90139-Q

8. Qing Y, Qihou L: The necessary and sufficient condition for the convergence of Ishikawa iteration on an arbitrary interval. J. Math. Anal. Appl. 2006, 323(2):1383–1386. 10.1016/j.jmaa.2005.11.058

9. Phuengrattana W, Suantai S: On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235: 3006–3014. 10.1016/j.cam.2010.12.022

10. Khan SH: A Picard-Mann hybrid iterative process. Fixed Point Theory Appl. 2013, 2013: Article ID 69. 10.1186/1687-1812-2013-69

11. Sahu DR: Applications of S iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory 2011, 12(1):187–204.

12. Dong QL, He S, Liu X: Rate of convergence of Mann, Ishikawa and Noor iterations for continuous functions on an arbitrary interval. J. Inequal. Appl. 2013, 2013: Article ID 269. 10.1186/1029-242X-2013-269

## Author information


### Corresponding author

Correspondence to Ibrahim Karahan.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

IK and MO contributed equally. All authors read and approved the final manuscript.

## Cite this article

Karahan, I., Ozdemir, M. Fixed point problems of the Picard-Mann hybrid iterative process for continuous functions on an arbitrary interval. Fixed Point Theory Appl 2013, 244 (2013). https://doi.org/10.1186/1687-1812-2013-244