
Generalized viscosity approximation methods for nonexpansive mappings

Fixed Point Theory and Applications 2014, 2014:68

https://doi.org/10.1186/1687-1812-2014-68

Received: 30 October 2013

Accepted: 6 March 2014

Published: 20 March 2014

Abstract

We combine a sequence of contractive mappings $\{f_n\}$ and propose a generalized viscosity approximation method. On the one hand, we consider a nonexpansive mapping S with a nonempty fixed point set, defined on a nonempty closed convex subset C of a real Hilbert space H, and design a new iterative method to approximate a fixed point of S which is also the unique solution of a variational inequality. On the other hand, using similar ideas, we consider N nonexpansive mappings $\{S_i\}_{i=1}^N$ with a nonempty common fixed point set, defined on a nonempty closed convex subset C. Under reasonable conditions, strong convergence theorems are proved. The results presented in this paper improve and extend the corresponding results reported by several authors recently.

MSC: 47H09, 47H10, 47J20, 47J25.

Keywords

nonexpansive mapping; contractive mapping; variational inequality; fixed point; viscosity approximation method

1 Introduction

Let H be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, and let C be a nonempty closed convex subset of H.

Let $S : C \to C$ be a nonlinear mapping. We use $\operatorname{Fix}(S)$ to denote the set of fixed points of S, that is, $\operatorname{Fix}(S) = \{x \in C : Sx = x\}$. A mapping S is called nonexpansive if the following inequality holds:
$$\|Sx - Sy\| \le \|x - y\|$$

for all $x, y \in C$.

In 1967, Halpern [1] used contractions to approximate a nonexpansive mapping and considered the following explicit iterative process:
$$x_0 \in C, \qquad x_{n+1} = \alpha_n u + (1 - \alpha_n) S x_n, \quad n \ge 0,$$
where u is a given point and $S : C \to C$ is nonexpansive. He proved the strong convergence of $\{x_n\}$ to a fixed point of S provided that $\alpha_n = n^{-\theta}$ with $\theta \in (0, 1)$. In 2000, Moudafi [2] introduced the viscosity approximation method for nonexpansive mappings. Viscosity approximation methods are still widely used and studied; they generate the sequence $\{x_n\}$ by the recursive formula
$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) S x_n,$$

where f is a contraction and $\alpha_n \in (0, 1)$ is a slowly vanishing sequence; see, for instance, [3-6]. In fact, Yamada's hybrid steepest descent algorithm is also a kind of viscosity approximation method (see [7]).
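To make the scheme concrete, here is a minimal numerical sketch of the viscosity iteration above; the particular choices of S (projection onto the closed unit ball), the contraction f, and the weights $\alpha_n = 1/(n+1)$ are ours, purely for illustration, and are not taken from the references.

```python
import numpy as np

def S(x):
    """Projection onto the closed unit ball of R^2: a nonexpansive mapping."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

def f(x):
    """A 0.5-contraction (illustrative choice)."""
    return 0.5 * x + np.array([0.3, -0.1])

x = np.array([5.0, -4.0])
for n in range(1, 2001):
    alpha = 1.0 / (n + 1)              # vanishing weights with divergent sum
    x = alpha * f(x) + (1 - alpha) * S(x)

print(x)  # approximates the fixed point of S selected by the contraction f
```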

The variational inequality problem is to find a point $x^* \in C$ such that
$$\langle F x^*, x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1.1}$$

In recent years, the theory of variational inequalities has been extended to the study of a large variety of problems arising in structural analysis, economics, engineering sciences, and so on; see [8-10] and the references cited therein.

Recently, Zhou and Wang [11] proposed a simpler explicit iterative algorithm for finding a solution of a variational inequality over the set of common fixed points of a finite family of nonexpansive mappings. They introduced the following explicit scheme.

Theorem 1.1 Let H be a real Hilbert space and $F : H \to H$ be an L-Lipschitz continuous and η-strongly monotone mapping. Let $\{S_i\}_{i=1}^N$ be N nonexpansive self-mappings of H such that $C = \bigcap_{i=1}^N \operatorname{Fix}(S_i) \ne \emptyset$. For any point $x_0 \in H$, define a sequence $\{x_n\}$ in the following manner:
$$x_{n+1} = (I - \lambda_n \mu F) S_N^n S_{N-1}^n \cdots S_1^n x_n, \quad n \ge 0, \tag{1.2}$$

where $\mu \in (0, 2\eta/L^2)$ and $S_i^n := (1 - \beta_n^i) I + \beta_n^i S_i$ for $i = 1, 2, \ldots, N$. When the parameters satisfy appropriate conditions, the sequence converges strongly to the unique solution of the variational inequality (1.1).
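As a hedged illustration of scheme (1.2), not taken from [11], consider the toy instance below with N = 2: $S_1, S_2$ are projections onto two intersecting closed balls, $F(x) = x - d$ (so $\eta = L = 1$ and any $\mu \in (0, 2)$ is admissible), $\beta_n^i \equiv 0.5$ and $\lambda_n = 1/(n+1)$. The iteration then approximates the point of $\operatorname{Fix}(S_1) \cap \operatorname{Fix}(S_2)$ closest to d.

```python
import numpy as np

def proj_ball(x, c, r):
    """Projection onto the closed ball B(c, r); a nonexpansive mapping."""
    d = x - c
    nd = np.linalg.norm(d)
    return x if nd <= r else c + r * d / nd

S1 = lambda x: proj_ball(x, np.array([0.0, 0.0]), 2.0)
S2 = lambda x: proj_ball(x, np.array([1.0, 0.0]), 2.0)
d  = np.array([10.0, 10.0])
F  = lambda x: x - d          # 1-Lipschitz and 1-strongly monotone

mu, beta = 1.0, 0.5
x = np.array([3.0, -3.0])
for n in range(1, 3001):
    lam = 1.0 / (n + 1)
    y = (1 - beta) * x + beta * S1(x)      # S_1^n x_n
    y = (1 - beta) * y + beta * S2(y)      # S_2^n S_1^n x_n
    x = y - lam * mu * F(y)                # (I - lam_n * mu * F) applied last

print(x)  # approximates the point of Fix(S1) ∩ Fix(S2) nearest to d
```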

In this paper, motivated by the above works, we introduce a more general viscosity-type iterative method. In Section 3, we combine a sequence of contractive mappings and obtain a strong convergence theorem for approximating a fixed point of a nonexpansive mapping. In Section 4, we propose a new iterative algorithm for finding a common fixed point of a finite family of nonexpansive mappings which is also the unique solution of a variational inequality over the set of common fixed points of these mappings in Hilbert spaces.

2 Preliminaries

In order to prove our results, we collect some facts and tools in a real Hilbert space H, which are listed below.

Lemma 2.1 Let H be a real Hilbert space. We have the following inequalities:

(i) $\|x + y\|^2 \le \|x\|^2 + 2\langle x + y, y \rangle$, $\forall x, y \in H$;

(ii) $\|t x + (1 - t) y\|^2 \le t \|x\|^2 + (1 - t) \|y\|^2$, $\forall t \in [0, 1]$, $\forall x, y \in H$.
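Both inequalities are elementary consequences of the inner product structure; the following throwaway check with random vectors (our own illustration, not part of the lemma) confirms them numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
t = 0.3

# (i): ||x + y||^2 <= ||x||^2 + 2<x + y, y>  (the gap is exactly ||y||^2)
assert np.dot(x + y, x + y) <= np.dot(x, x) + 2 * np.dot(x + y, y) + 1e-12

# (ii): convexity of the squared norm
lhs = np.dot(t * x + (1 - t) * y, t * x + (1 - t) * y)
assert lhs <= t * np.dot(x, x) + (1 - t) * np.dot(y, y) + 1e-12
```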

     

Lemma 2.2 [12]

Let $S_1$ and $S_2$ be $\gamma_1$-averaged and $\gamma_2$-averaged mappings on C, respectively, such that $\operatorname{Fix}(S_1) \cap \operatorname{Fix}(S_2) \ne \emptyset$. Then the following conclusions hold:

(i) both $S_1 S_2$ and $S_2 S_1$ are γ-averaged, where $\gamma = \gamma_1 + \gamma_2 - \gamma_1 \gamma_2$;

(ii) $\operatorname{Fix}(S_1) \cap \operatorname{Fix}(S_2) = \operatorname{Fix}(S_1 S_2) = \operatorname{Fix}(S_2 S_1)$.
Recall that, given a nonempty closed convex subset C of a real Hilbert space H, for any $x \in H$ there exists a unique nearest point in C, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\|$$

for all $y \in C$. Such a $P_C$ is called the metric (or nearest point) projection of H onto C.

Lemma 2.3 [13]

Let C be a nonempty closed convex subset of a real Hilbert space H. Given $x \in H$ and $y \in C$, then $y = P_C x$ if and only if
$$\langle x - y, y - z \rangle \ge 0 \quad \text{for all } z \in C.$$
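For concreteness, here is a small sketch (our own examples, not from [13]) of $P_C$ for two sets where the projection has a closed form, together with a spot check of the characterization in Lemma 2.3.

```python
import numpy as np

def proj_box(x, lo, hi):
    """P_C for the box C = {y : lo <= y_i <= hi}: componentwise clipping."""
    return np.clip(x, lo, hi)

def proj_ball(x, c, r):
    """P_C for the closed ball C = B(c, r)."""
    d = x - c
    nd = np.linalg.norm(d)
    return x if nd <= r else c + r * d / nd

x = np.array([3.0, -2.0])
p = proj_ball(x, np.zeros(2), 1.0)
# Lemma 2.3 with y = P_C x and the sample point z = 0 in C:
assert np.dot(x - p, p - np.zeros(2)) >= -1e-12
```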

Lemma 2.4 [10]

Let H be a Hilbert space, C a nonempty closed convex subset of H, and $T : C \to C$ a nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in C converging weakly to x and if $\{(I - T) x_n\}$ converges strongly to y, then $(I - T) x = y$.

Lemma 2.5 [5]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n) a_n + \delta_n,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that

(i) $\sum_{n=1}^{\infty} \gamma_n = \infty$;

(ii) $\limsup_{n \to \infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$.

Then $\lim_{n \to \infty} a_n = 0$.
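A quick numerical illustration of the lemma (the sequences $\gamma_n = 1/(n+1)$ and $\delta_n = \gamma_n / n$ are our own toy choices satisfying (i) and (ii)):

```python
# gamma_n = 1/(n+1) has a divergent sum, and delta_n / gamma_n = 1/n -> 0,
# so Lemma 2.5 predicts a_n -> 0 for the recursion below.
a = 1.0
for n in range(1, 100001):
    gamma = 1.0 / (n + 1)
    delta = gamma / n
    a = (1 - gamma) * a + delta

print(a)  # a small value: here a_n decays roughly like log(n)/n
```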

Lemma 2.6 [14]

Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space and let $\{\beta_n\}$ be a sequence of real numbers such that $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$. Suppose that $x_{n+1} = (1 - \beta_n) z_n + \beta_n x_n$ for all $n = 0, 1, 2, \ldots$ and that $\limsup_{n \to \infty} (\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|) \le 0$. Then $\lim_{n \to \infty} \|z_n - x_n\| = 0$.

3 Generalized viscosity approximation method combined with a nonexpansive mapping

In this section, we combine a sequence of contractive mappings and apply a more general viscosity-type iterative method to approximate a fixed point of a nonexpansive mapping defined on a closed convex subset C of a Hilbert space H, which is also a solution of the variational inequality
$$\langle f(x^*) - x^*, p - x^* \rangle \le 0, \quad \forall p \in \operatorname{Fix}(S). \tag{3.1}$$

Suppose the sequence of contractive mappings $\{f_n(x)\}$ is uniformly convergent on D, where D is any bounded subset of C. The uniform convergence of $\{f_n(x)\}$ on D is denoted by $f_n(x) \rightrightarrows f(x)$ ($n \to \infty$), $x \in D$.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H and let $\{f_n\}$ be a sequence of $\rho_n$-contractive self-maps of C with $0 \le \rho_l = \liminf_{n \to \infty} \rho_n \le \limsup_{n \to \infty} \rho_n = \rho_u < 1$. Let $S : C \to C$ be a nonexpansive mapping. Assume that $\operatorname{Fix}(S) \ne \emptyset$ and that $\{f_n(x)\}$ is uniformly convergent on any bounded subset D of C. Given $x_1 \in C$, let $\{x_n\}$ be generated by the following algorithm:
$$x_{n+1} = \alpha_n f_n(x_n) + (1 - \alpha_n) S x_n. \tag{3.2}$$
If the sequence $\{\alpha_n\} \subset (0, 1)$ satisfies the conditions

(i) $\lim_{n \to \infty} \alpha_n = 0$;

(ii) $\sum_{n=1}^{\infty} \alpha_n = \infty$;

(iii) $\sum_{n=1}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$,

then the sequence $\{x_n\}$ converges strongly to a point $x^* \in \operatorname{Fix}(S)$, which is also the unique solution of the variational inequality (3.1).
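The sketch below runs scheme (3.2) on a toy problem. All concrete choices are ours and are not taken from the paper: S is a ball projection, $f_n(x) = \tfrac{1}{2}x + b_n$ with $b_n \to b$ (so $\{f_n\}$ are $\tfrac{1}{2}$-contractions converging uniformly on bounded sets), and $\alpha_n = 1/(n+1)$, which satisfies (i)-(iii).

```python
import numpy as np

def S(x):
    """Nonexpansive map: projection onto the closed unit ball of R^2."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

b = np.array([0.8, 0.2])

def f_n(x, n):
    """0.5-contractions converging uniformly on bounded sets to f(x) = x/2 + b."""
    return 0.5 * x + b + np.array([1.0, -1.0]) / (n + 1)

x = np.array([4.0, 4.0])
for n in range(1, 5001):
    alpha = 1.0 / (n + 1)                       # satisfies (i), (ii), (iii)
    x = alpha * f_n(x, n) + (1 - alpha) * S(x)  # scheme (3.2)

print(x)  # approximates x* = P_Fix(S) f(x*), the solution of (3.1)
```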

Proof The proof is divided into several steps.

Step 1. Show first that $\{x_n\}$ is bounded.

For any $q \in \operatorname{Fix}(S)$, we have
$$\begin{aligned} \|x_{n+1} - q\| &= \|\alpha_n f_n(x_n) + (1 - \alpha_n) S x_n - q\| \\ &\le \alpha_n \|f_n(x_n) - q\| + (1 - \alpha_n) \|S x_n - S q\| \\ &\le \alpha_n \rho_n \|x_n - q\| + (1 - \alpha_n) \|x_n - q\| + \alpha_n \|f_n(q) - q\| \\ &= (1 - \alpha_n (1 - \rho_n)) \|x_n - q\| + \alpha_n (1 - \rho_n) \frac{\|f_n(q) - q\|}{1 - \rho_n} \\ &\le \max\left\{ \|x_n - q\|, \frac{\|f_n(q) - q\|}{1 - \rho_n} \right\}. \end{aligned}$$

From the uniform convergence of $\{f_n\}$ on D, the boundedness of $\{f_n(q)\}$ follows easily. Thus there exists a positive constant $M_1$ such that $\|f_n(q) - q\| \le M_1$. By induction, we obtain $\|x_n - q\| \le \max\{\|x_1 - q\|, \frac{M_1}{1 - \rho_u}\}$. Hence $\{x_n\}$ is bounded, and so are $\{S x_n\}$ and $\{f_n(x_n)\}$.

Step 2. Show that
$$\|x_{n+1} - x_n\| \to 0 \quad \text{as } n \to \infty. \tag{3.3}$$
Indeed, observe that
$$\begin{aligned} \|x_{n+1} - x_n\| &= \|\alpha_n f_n(x_n) + (1 - \alpha_n) S x_n - \alpha_{n-1} f_{n-1}(x_{n-1}) - (1 - \alpha_{n-1}) S x_{n-1}\| \\ &= \|\alpha_n (f_n(x_n) - f_n(x_{n-1})) + \alpha_n (f_n(x_{n-1}) - f_{n-1}(x_{n-1})) \\ &\qquad + (\alpha_n - \alpha_{n-1}) (f_{n-1}(x_{n-1}) - S x_{n-1}) + (1 - \alpha_n) (S x_n - S x_{n-1})\| \\ &\le \alpha_n \rho_n \|x_n - x_{n-1}\| + \alpha_n \|f_n(x_{n-1}) - f_{n-1}(x_{n-1})\| \\ &\qquad + |\alpha_n - \alpha_{n-1}| (\|S x_{n-1}\| + \|f_{n-1}(x_{n-1})\|) + (1 - \alpha_n) \|x_n - x_{n-1}\| \\ &= (1 - \alpha_n (1 - \rho_n)) \|x_n - x_{n-1}\| + \alpha_n \|f_n(x_{n-1}) - f_{n-1}(x_{n-1})\| \\ &\qquad + |\alpha_n - \alpha_{n-1}| (\|S x_{n-1}\| + \|f_{n-1}(x_{n-1})\|). \end{aligned}$$
By conditions (i)-(iii) and the uniform convergence of $\{f_n\}$, we have
$$\frac{\alpha_n \|f_n(x_{n-1}) - f_{n-1}(x_{n-1})\| + |\alpha_n - \alpha_{n-1}| (\|S x_{n-1}\| + \|f_{n-1}(x_{n-1})\|)}{\alpha_n (1 - \rho_n)} \to 0$$

as $n \to \infty$. By Lemma 2.5, (3.3) holds.

Step 3. Show that
$$\|S x_n - x_n\| \to 0. \tag{3.4}$$
Indeed,
$$\|S x_n - x_n\| \le \|x_{n+1} - x_n\| + \|x_{n+1} - S x_n\|.$$

By condition (i), we have $\|x_{n+1} - S x_n\| = \alpha_n \|f_n(x_n) - S x_n\| \to 0$. Combining this with (3.3), we obtain (3.4).

Step 4. Show that
$$\limsup_{n \to \infty} \langle f(x^*) - x^*, x_n - x^* \rangle \le 0, \tag{3.5}$$

where $x^* = P_{\operatorname{Fix}(S)} f(x^*)$ is the unique solution of the variational inequality (3.1).

Since $\{f_n(x)\}$ is uniformly convergent on D, we have $\lim_{n \to \infty} (f_n(x^*) - x^*) = f(x^*) - x^*$.

Indeed, take a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that
$$\limsup_{n \to \infty} \langle f(x^*) - x^*, x_n - x^* \rangle = \lim_{j \to \infty} \langle f(x^*) - x^*, x_{n_j} - x^* \rangle. \tag{3.6}$$
Since $\{x_{n_j}\}$ is bounded, there exists a subsequence $\{x_{n_{j_k}}\}$ of $\{x_{n_j}\}$ which converges weakly to $\hat{x}$. Without loss of generality, we may assume $x_{n_j} \rightharpoonup \hat{x}$. From (3.4), we obtain $S x_{n_j} \rightharpoonup \hat{x}$. Using Lemma 2.4, we have $\hat{x} \in \operatorname{Fix}(S)$. Since $x^* = P_{\operatorname{Fix}(S)} f(x^*)$, Lemma 2.3 gives
$$\lim_{j \to \infty} \langle f(x^*) - x^*, x_{n_j} - x^* \rangle = \langle f(x^*) - x^*, \hat{x} - x^* \rangle \le 0.$$

Combining this with (3.6), inequality (3.5) holds.

Step 5. Show that $x_n \to x^*$.

Using Lemma 2.1(i), we have
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \|\alpha_n f_n(x_n) + (1 - \alpha_n) S x_n - x^*\|^2 \\ &\le (1 - \alpha_n)^2 \|S x_n - x^*\|^2 + 2 \alpha_n \langle x_{n+1} - x^*, f_n(x_n) - x^* \rangle \\ &\le (1 - \alpha_n)^2 \|x_n - x^*\|^2 + 2 \alpha_n \langle x_{n+1} - x^*, f_n(x_n) - f_n(x^*) \rangle + 2 \alpha_n \langle x_{n+1} - x^*, f_n(x^*) - x^* \rangle \\ &\le (1 - \alpha_n)^2 \|x_n - x^*\|^2 + \alpha_n \rho_n (\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2) + 2 \alpha_n \langle x_{n+1} - x^*, f_n(x^*) - x^* \rangle. \end{aligned} \tag{3.7}$$
Rearranging this inequality, we obtain
$$\|x_{n+1} - x^*\|^2 \le \left(1 - \frac{\alpha_n (2 - \alpha_n - 2 \rho_n)}{1 - \alpha_n \rho_n}\right) \|x_n - x^*\|^2 + \frac{2 \alpha_n}{1 - \alpha_n \rho_n} \langle x_{n+1} - x^*, f_n(x^*) - x^* \rangle.$$
By the Schwarz inequality, we have
$$\limsup_{n \to \infty} \langle x_{n+1} - x^*, f_n(x^*) - x^* \rangle \le \lim_{n \to \infty} \|x_{n+1} - x^*\| \, \|f_n(x^*) - f(x^*)\| + \limsup_{n \to \infty} \langle x_{n+1} - x^*, f(x^*) - x^* \rangle.$$
By the boundedness of $\{x_n\}$, the convergence $f_n(x^*) \to f(x^*)$, (3.3), and (3.5), we have
$$\limsup_{n \to \infty} \langle x_{n+1} - x^*, f_n(x^*) - x^* \rangle \le 0.$$

It then follows from Lemma 2.5 that $\|x_n - x^*\| \to 0$, that is, $x_n \to x^*$. □

Remark 3.2 In [2], Moudafi proposed the following viscosity iterative algorithm:
$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) S x_n, \tag{3.8}$$

where f is a contraction on H. It is the special case of (3.2) obtained by taking $f_1 = f_2 = \cdots = f_n = \cdots = f$ for all $n \in \mathbb{N}$ and $C = H$. Of course, Halpern's iteration method is also a special case of (3.2), obtained by taking every $f_n$ to be the constant mapping $f_n(x) \equiv u$ for all $n \in \mathbb{N}$.

Remark 3.3 In [7], the following iterative process was introduced:
$$x_{n+1} = S x_n - \mu \alpha_n F(S x_n).$$
Rewriting this equation, we get
$$x_{n+1} = \alpha_n (I - \mu F) S x_n + (1 - \alpha_n) S x_n = \alpha_n f(x_n) + (1 - \alpha_n) S x_n. \tag{3.9}$$

It is easy to verify that $f := (I - \mu F) S$ is a contractive mapping on H when $0 < \mu < 2\eta/L^2$. That is, Yamada's method is a kind of viscosity approximation method, and it is also a special case of Theorem 3.1.
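A quick spot check of this observation on a toy example (our own F and S, not from [7]): with $F = 2I$ we have $\eta = L = 2$, so any $\mu \in (0, 2\eta/L^2) = (0, 1)$ makes $(I - \mu F)S$ a contraction; for $\mu = 0.4$ its constant is $|1 - 2\mu| = 0.2$.

```python
import numpy as np

rng = np.random.default_rng(1)

def S(x):
    """Nonexpansive: projection onto the closed unit ball of R^3."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

mu = 0.4
f = lambda x: (1 - 2 * mu) * S(x)    # (I - mu*F)(Sx) with F = 2I

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert np.linalg.norm(f(x) - f(y)) <= 0.2 * np.linalg.norm(x - y) + 1e-12
```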

4 Generalized viscosity approximation method combined with a finite family of nonexpansive mappings

In this section, we apply a more general viscosity-type iterative method to approximate a common fixed point of a finite family of nonexpansive mappings in Hilbert spaces.

Let $\{f_n\}$ be a sequence of $\rho_n$-contractive self-maps of C with $0 < \rho_l = \liminf_{n \to \infty} \rho_n \le \limsup_{n \to \infty} \rho_n = \rho_u < 1$ and let $\{S_i\}_{i=1}^N$ be N nonexpansive self-mappings of C. Assume that the common fixed point set $F = \bigcap_{i=1}^N \operatorname{Fix}(S_i) \ne \emptyset$ and that $\{f_n(q)\}$ is convergent for any $q \in F$. Put $f(q) := \lim_{n \to \infty} f_n(q)$. Since every $f_n$ is $\rho_n$-contractive, we have
$$\|f_n(p) - f_n(q)\| \le \rho_n \|p - q\|$$
for any $p, q \in F$. Letting $n \to \infty$, we obtain $\|f(p) - f(q)\| \le \rho_u \|p - q\|$. Next we prove that the sequence $\{x_n\}$ converges strongly to a point $x^* \in F = \bigcap_{i=1}^N \operatorname{Fix}(S_i)$, which also solves the variational inequality
$$\langle f(x^*) - x^*, p - x^* \rangle \le 0, \quad \forall p \in F. \tag{4.1}$$

As is well known, this is equivalent to the fixed point equation $x^* = P_F f(x^*)$.

Theorem 4.1 Let C be a nonempty closed convex subset of a real Hilbert space H and let $\{f_n\}$ be a sequence of $\rho_n$-contractive self-maps of C with $0 \le \rho_l = \liminf_{n \to \infty} \rho_n \le \limsup_{n \to \infty} \rho_n = \rho_u < 1$. For each $1 \le i \le N$ (where $N \ge 1$ is an integer), let $S_i : C \to C$ be a nonexpansive mapping. Assume that $F = \bigcap_{i=1}^N \operatorname{Fix}(S_i) \ne \emptyset$ and that $\{f_n(q)\}$ is convergent for any $q \in F$. Given $x_1 \in C$, let $\{x_n\}$ be generated by the following algorithm:
$$\begin{cases} x_{n+1} = \alpha_n f_n(x_n) + (1 - \alpha_n) S_N^n S_{N-1}^n \cdots S_1^n x_n, \\ S_i^n = (1 - \lambda_i^n) I + \lambda_i^n S_i, \quad i = 1, 2, \ldots, N. \end{cases} \tag{4.2}$$
If the parameters $\{\alpha_n\}$ and $\{\lambda_i^n\}$ satisfy the conditions

(i) $\{\alpha_n\} \subset (0, 1)$, $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;

(ii) $\lambda_i^n \in (\lambda_l, \lambda_u)$ for some $\lambda_l, \lambda_u \in (0, 1)$ and $\lim_{n \to \infty} |\lambda_i^n - \lambda_i^{n+1}| = 0$, $i = 1, 2, \ldots, N$,

then the sequence $\{x_n\}$ converges strongly to a point $x^* \in F$, which is also the unique solution of the variational inequality (4.1).
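The following sketch runs scheme (4.2) with N = 2 on a toy problem. Every concrete choice is ours and serves only as an illustration: ball projections $S_1, S_2$, contractions $f_n(x) = \tfrac{1}{2}x + b_n$ with $b_n \to b$, $\alpha_n = 1/(n+1)$, and $\lambda_i^n = 0.5 + 0.1/(n+1)$, which satisfy conditions (i) and (ii).

```python
import numpy as np

def proj_ball(x, c, r):
    """Nonexpansive mapping: projection onto the closed ball B(c, r)."""
    d = x - c
    nd = np.linalg.norm(d)
    return x if nd <= r else c + r * d / nd

S1 = lambda x: proj_ball(x, np.array([0.0, 0.0]), 2.0)
S2 = lambda x: proj_ball(x, np.array([1.0, 0.0]), 2.0)
b  = np.array([2.0, 2.0])

x = np.array([-5.0, 6.0])
for n in range(1, 5001):
    alpha = 1.0 / (n + 1)
    lam   = 0.5 + 0.1 / (n + 1)                 # lambda_1^n = lambda_2^n here
    fn_x  = 0.5 * x + b + np.array([1.0, 0.0]) / n
    y     = (1 - lam) * x + lam * S1(x)         # S_1^n x_n
    y     = (1 - lam) * y + lam * S2(y)         # S_2^n S_1^n x_n
    x     = alpha * fn_x + (1 - alpha) * y      # scheme (4.2)

print(x)  # approximates x* = P_F f(x*), F = Fix(S1) ∩ Fix(S2)
```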

Proof We will prove the theorem in the case of N = 2 . The proof is divided into several steps.

Step 1. We show first that $\{x_n\}$ is bounded.

For any $q \in F$, we have
$$\begin{aligned} \|x_{n+1} - q\| &= \|\alpha_n f_n(x_n) + (1 - \alpha_n) S_2^n S_1^n x_n - q\| \\ &\le \alpha_n \|f_n(x_n) - q\| + (1 - \alpha_n) \|S_2^n S_1^n x_n - S_2^n S_1^n q\| \\ &\le \alpha_n \rho_n \|x_n - q\| + (1 - \alpha_n) \|x_n - q\| + \alpha_n \|f_n(q) - q\| \\ &= (1 - \alpha_n (1 - \rho_n)) \|x_n - q\| + \alpha_n (1 - \rho_n) \frac{\|f_n(q) - q\|}{1 - \rho_n} \\ &\le \max\left\{ \|x_n - q\|, \frac{\|f_n(q) - q\|}{1 - \rho_n} \right\}. \end{aligned}$$

From the convergence of $\{f_n(q)\}$, the boundedness of $\{f_n(q)\}$ follows easily. Thus there exists a positive constant $M_1$ such that $\|f_n(q) - q\| \le M_1$. By induction, we obtain $\|x_n - q\| \le \max\{\|x_1 - q\|, \frac{M_1}{1 - \rho_u}\}$. Hence $\{x_n\}$ is bounded, and so are $\{S_1 x_n\}$ and $\{S_2^n S_1^n x_n\}$.

Step 2. We show that
$$\|x_{n+1} - x_n\| \to 0 \quad \text{as } n \to \infty. \tag{4.3}$$
Since both $S_2^n$ and $S_1^n$ are averaged nonexpansive mappings, by Lemma 2.2, $S_2^n S_1^n$ is also averaged. Rewrite $S_2^n S_1^n = (1 - \beta_n) I + \beta_n V_n$, where $\beta_n = \lambda_1^n + \lambda_2^n - \lambda_1^n \lambda_2^n$. Then we have
$$\begin{aligned} x_{n+1} &= \alpha_n f_n(x_n) + (1 - \alpha_n) [(1 - \beta_n) I + \beta_n V_n] x_n \\ &= \alpha_n f_n(x_n) + (1 - \beta_n) x_n - \alpha_n (1 - \beta_n) x_n + (1 - \alpha_n) \beta_n V_n x_n \\ &= (1 - \beta_n) x_n + \beta_n \left[ \frac{\alpha_n f_n(x_n) - \alpha_n (1 - \beta_n) x_n}{\beta_n} + (1 - \alpha_n) V_n x_n \right] \\ &= (1 - \beta_n) x_n + \beta_n z_n. \end{aligned}$$
Further we obtain
$$\begin{aligned} \|z_{n+1} - z_n\| &= \left\| \frac{\alpha_{n+1}}{\beta_{n+1}} [f_{n+1}(x_{n+1}) - (1 - \beta_{n+1}) x_{n+1}] + (1 - \alpha_{n+1}) V_{n+1} x_{n+1} - \frac{\alpha_n}{\beta_n} [f_n(x_n) - (1 - \beta_n) x_n] - (1 - \alpha_n) V_n x_n \right\| \\ &= \left\| V_{n+1} x_{n+1} - V_n x_n + \left[ \frac{\alpha_{n+1}}{\beta_{n+1}} f_{n+1}(x_{n+1}) - \frac{\alpha_n}{\beta_n} f_n(x_n) \right] - \left[ \frac{\alpha_{n+1} (1 - \beta_{n+1})}{\beta_{n+1}} x_{n+1} - \frac{\alpha_n (1 - \beta_n)}{\beta_n} x_n \right] - \alpha_{n+1} V_{n+1} x_{n+1} + \alpha_n V_n x_n \right\| \\ &\le \|x_{n+1} - x_n\| + \|V_{n+1} x_n - V_n x_n\| + \left\| \frac{\alpha_{n+1}}{\beta_{n+1}} f_{n+1}(x_{n+1}) - \frac{\alpha_n}{\beta_n} f_n(x_n) \right\| \\ &\qquad + \left\| \frac{\alpha_{n+1} (1 - \beta_{n+1})}{\beta_{n+1}} x_{n+1} - \frac{\alpha_n (1 - \beta_n)}{\beta_n} x_n \right\| + \|\alpha_{n+1} V_{n+1} x_{n+1} - \alpha_n V_n x_n\|. \end{aligned} \tag{4.4}$$
Write $\lambda_1 = 2 \lambda_l - \lambda_l^2$ and $\lambda_2 = 2 \lambda_u - \lambda_u^2$. From condition (ii) it follows easily that $0 < \lambda_1 \le \beta_n \le \lambda_2$ and $\beta_{n+1} - \beta_n \to 0$ as $n \to \infty$. We have
$$\begin{aligned} \|V_{n+1} x_n - V_n x_n\| &= \left\| \frac{S_2^{n+1} S_1^{n+1} - (1 - \beta_{n+1}) I}{\beta_{n+1}} x_n - \frac{S_2^n S_1^n - (1 - \beta_n) I}{\beta_n} x_n \right\| \\ &\le \left\| \frac{S_2^{n+1} S_1^{n+1}}{\beta_{n+1}} x_n - \frac{S_2^n S_1^n}{\beta_n} x_n \right\| + \left| \frac{1}{\beta_n} - \frac{1}{\beta_{n+1}} \right| \|x_n\| \\ &\le \frac{1}{\beta_n} \|S_2^{n+1} S_1^{n+1} x_n - S_2^n S_1^n x_n\| + \left| \frac{1}{\beta_n} - \frac{1}{\beta_{n+1}} \right| (\|S_2^{n+1} S_1^{n+1} x_n\| + \|x_n\|) \\ &\le \frac{1}{\lambda_1} (\|S_1^{n+1} x_n - S_1^n x_n\| + \|S_2^{n+1} S_1^n x_n - S_2^n S_1^n x_n\|) + \left| \frac{1}{\beta_n} - \frac{1}{\beta_{n+1}} \right| M_2, \end{aligned} \tag{4.5}$$
where $M_2 = \sup_n \{\|S_2^{n+1} S_1^{n+1} x_n\| + \|x_n\|\}$. Since $|\lambda_i^{n+1} - \lambda_i^n| \to 0$, $i = 1, 2$, we can deduce
$$\|S_1^{n+1} x_n - S_1^n x_n\| \le |\lambda_1^{n+1} - \lambda_1^n| (\|x_n\| + \|S_1 x_n\|) \to 0 \tag{4.6}$$
and
$$\|S_2^{n+1} S_1^n x_n - S_2^n S_1^n x_n\| \le |\lambda_2^{n+1} - \lambda_2^n| (\|S_1^n x_n\| + \|S_2 S_1^n x_n\|) \to 0. \tag{4.7}$$
Substituting (4.5) into (4.4), we have
$$\begin{aligned} \|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| &\le \frac{1}{\lambda_1} (\|S_1^{n+1} x_n - S_1^n x_n\| + \|S_2^{n+1} S_1^n x_n - S_2^n S_1^n x_n\|) + \frac{|\beta_n - \beta_{n+1}|}{\beta_n \beta_{n+1}} M_2 \\ &\qquad + \left\| \frac{\alpha_{n+1}}{\beta_{n+1}} f_{n+1}(x_{n+1}) - \frac{\alpha_n}{\beta_n} f_n(x_n) \right\| + \left\| \frac{\alpha_{n+1} (1 - \beta_{n+1})}{\beta_{n+1}} x_{n+1} - \frac{\alpha_n (1 - \beta_n)}{\beta_n} x_n \right\| \\ &\qquad + \|\alpha_{n+1} V_{n+1} x_{n+1} - \alpha_n V_n x_n\|. \end{aligned}$$
Combining (4.6), (4.7), and condition (i), we get
$$\limsup_{n \to \infty} (\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|) \le 0.$$
By Lemma 2.6, we conclude that $\lim_{n \to \infty} \|z_n - x_n\| = 0$. Further we have
$$\lim_{n \to \infty} \|x_{n+1} - x_n\| = \lim_{n \to \infty} \beta_n \|z_n - x_n\| = 0.$$
Step 3. We show that
$$\|S_2^n S_1^n x_n - x_n\| \to 0. \tag{4.8}$$
By (4.2), we get
$$\|x_{n+1} - S_2^n S_1^n x_n\| = \alpha_n \|f_n(x_n) - S_2^n S_1^n x_n\| \to 0.$$
We also have
$$\|x_n - S_2^n S_1^n x_n\| \le \|x_{n+1} - S_2^n S_1^n x_n\| + \|x_n - x_{n+1}\|.$$

Combining this with (4.3), (4.8) holds.

Since $\{\lambda_i^n\} \subset (\lambda_l, \lambda_u)$, we may assume (passing to a subsequence $\{x_{n_j}\}$, to be specified in Step 4) that $\lambda_i^{n_j} \to \lambda_i^0$ as $j \to \infty$. It is easy to see that $0 < \lambda_i^0 < 1$ for $i = 1, 2$. Write $S_i^0 = (1 - \lambda_i^0) I + \lambda_i^0 S_i$, $i = 1, 2$. Then we have $\operatorname{Fix}(S_i^0) = \operatorname{Fix}(S_i)$, $i = 1, 2$, and
$$\lim_{j \to \infty} \sup_{x \in D} \|S_i^{n_j} x - S_i^0 x\| = 0, \tag{4.9}$$

where D is an arbitrary bounded subset containing $\{x_{n_j}\}$. Using (4.8) and (4.9), we obtain $\|S_2^0 S_1^0 x_{n_j} - x_{n_j}\| \to 0$.

Step 4. We have
$$\limsup_{n \to \infty} \langle f(x^*) - x^*, x_n - x^* \rangle \le 0, \tag{4.10}$$

where $x^* = P_F f(x^*)$ is the unique solution of the variational inequality (4.1).

Since $\{f_n(q)\}$ is convergent for $q \in F$, we have $\lim_{n \to \infty} (f_n(x^*) - x^*) = f(x^*) - x^*$.

The proof of Step 4 is similar to that of Theorem 3.1.

Step 5. We show that
$$x_n \to x^*. \tag{4.11}$$

The proof of Step 5 is similar to that of Theorem 3.1. □

Remark 4.2 Writing $S^n = S_N^n S_{N-1}^n \cdots S_1^n$, we can rewrite Zhou and Wang's iterative algorithm [11] as
$$x_{n+1} = (I - \alpha_n \mu F) S^n x_n = \alpha_n (I - \mu F) S^n x_n + (1 - \alpha_n) S^n x_n = \alpha_n f_n(x_n) + (1 - \alpha_n) S^n x_n. \tag{4.12}$$

It is easy to verify that $(I - \mu F) S^n$ is a contractive mapping on H when $0 < \mu < 2\eta/L^2$. Thus (4.12) is a special case of Theorem 4.1 with $f_n := (I - \mu F) S^n$, $n \in \mathbb{N}$, and $C = H$.

Declarations

Acknowledgements

The authors would like to thank the referee for valuable suggestions that improved the manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (grant 3122013k004).

Authors’ Affiliations

(1)
College of Science, Civil Aviation University of China

References

  1. Halpern B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73: 957–961. doi:10.1090/S0002-9904-1967-11864-0
  2. Moudafi A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. doi:10.1006/jmaa.1999.6615
  3. Chen J, Zhang L, Fan T: Viscosity approximation methods for nonexpansive mappings and monotone mappings. J. Math. Anal. Appl. 2007, 334: 1450–1461. doi:10.1016/j.jmaa.2006.12.088
  4. Takahashi W: Viscosity approximation methods for countable families of nonexpansive mappings in Banach spaces. Nonlinear Anal. 2009, 70: 719–734. doi:10.1016/j.na.2008.01.005
  5. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. doi:10.1016/j.jmaa.2004.04.059
  6. Yao Y, Noor M: On viscosity iterative methods for variational inequalities. J. Math. Anal. Appl. 2007, 325: 776–787. doi:10.1016/j.jmaa.2006.01.091
  7. Yamada I: The hybrid steepest-descent method for variational inequality problems over the intersection of the fixed-point sets of nonexpansive mappings. In: Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Studies in Computational Mathematics 8, 2001, 473–504.
  8. Buong D, Duong LT: An explicit iterative algorithm for a class of variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 151: 513–524. doi:10.1007/s10957-011-9890-7
  9. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
  10. Tian M, Di LY: Synchronal algorithm and cyclic algorithm for fixed point problems and variational inequality problems in Hilbert spaces. Fixed Point Theory Appl. 2011, 2011: Article ID 21.
  11. Zhou HY, Wang P: A simpler explicit iterative algorithm for a class of variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2013. doi:10.1007/s10957-013-0470-x
  12. López G, Martin V, Xu HK: Iterative algorithm for the multi-sets split feasibility problem. In: Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. 2009, 243–279.
  13. Marino G, Xu HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329: 336–346. doi:10.1016/j.jmaa.2006.06.055
  14. Suzuki T: Strong convergence theorems for an infinite family of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 1: 103–123.

Copyright

© Duan and He; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.