Hybrid extragradient method for generalized mixed equilibrium problems and fixed point problems in Hilbert space

Abstract

In this paper, we introduce iterative schemes based on the extragradient method for finding a common element of the set of solutions of a generalized mixed equilibrium problem, the set of fixed points of a nonexpansive mapping, and the set of solutions of a variational inequality problem for an inverse strongly monotone mapping. We obtain some strong convergence theorems for the sequences generated by these processes in Hilbert spaces. The results in this paper generalize, extend and unify some well-known convergence theorems in the literature.

MSC:47H09, 47J05, 47J20, 47J25.

1 Introduction

Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖, and let C be a nonempty closed convex subset of H. Let B:C→H be a nonlinear mapping, let φ:C→ℝ∪{+∞} be a function, and let F be a bifunction from C×C to ℝ, where ℝ is the set of real numbers. Peng and Yao [1] considered the following generalized mixed equilibrium problem:

Finding x ∈ C such that F(x,y) + φ(y) + ⟨Bx, y − x⟩ ≥ φ(x) for all y ∈ C.
(1.1)

The set of solutions of (1.1) is denoted by GMEP(F,φ,B). It is easy to see that if x is a solution of problem (1.1), then x ∈ dom φ = {x ∈ C : φ(x) < +∞}.

If B=0, then the generalized mixed equilibrium problem (1.1) becomes the following mixed equilibrium problem:

Finding x ∈ C such that F(x,y) + φ(y) ≥ φ(x) for all y ∈ C.
(1.2)

Problem (1.2) was studied by Ceng and Yao [2] and Peng and Yao [3, 4]. The set of solutions of (1.2) is denoted by MEP(F,φ).

If φ=0, then the generalized mixed equilibrium problem (1.1) becomes the following generalized equilibrium problem:

Finding x ∈ C such that F(x,y) + ⟨Bx, y − x⟩ ≥ 0 for all y ∈ C.
(1.3)

Problem (1.3) was studied by Takahashi and Takahashi [5]. The set of solutions of (1.3) is denoted by GEP(F,B).

If φ=0 and B=0, then the generalized mixed equilibrium problem (1.1) becomes the following equilibrium problem:

Finding x ∈ C such that F(x,y) ≥ 0 for all y ∈ C.
(1.4)

The set of solutions of (1.4) is denoted by EP(F).

If F(x,y) = 0 for all x, y ∈ C, the generalized mixed equilibrium problem (1.1) becomes the following generalized variational inequality problem:

Finding x ∈ C such that φ(y) + ⟨Bx, y − x⟩ ≥ φ(x) for all y ∈ C.
(1.5)

The set of solutions of (1.5) is denoted by GVI(C,φ,B).

If φ = 0 and F(x,y) = 0 for all x, y ∈ C, the generalized mixed equilibrium problem (1.1) becomes the following variational inequality problem:

Finding x ∈ C such that ⟨Bx, y − x⟩ ≥ 0 for all y ∈ C.
(1.6)

The set of solutions of (1.6) is denoted by VI(C,B).

If B = 0 and F(x,y) = 0 for all x, y ∈ C, the generalized mixed equilibrium problem (1.1) becomes the following minimization problem:

Finding x ∈ C such that φ(y) ≥ φ(x) for all y ∈ C.
(1.7)

Problem (1.1) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others; see, for instance, [1–7].

For solving the variational inequality problem in the finite-dimensional Euclidean spaces, in 1976, Korpelevich [8] introduced the following so-called extragradient method:

x_0 = x ∈ C,
y_n = P_C(x_n − λBx_n),
x_{n+1} = P_C(x_n − λBy_n)
(1.8)

for every n = 0, 1, 2, …, λ ∈ (0, 1/k), where C is a closed convex subset of ℝ^n, B:C→ℝ^n is a monotone and k-Lipschitz continuous mapping, and P_C is the metric projection of ℝ^n onto C. She showed that if VI(C,B) is nonempty, then the sequences {x_n} and {y_n} generated by (1.8) converge to the same point x ∈ VI(C,B). The idea of the extragradient iterative process introduced by Korpelevich was successfully generalized and extended not only in Euclidean but also in Hilbert and Banach spaces; see, e.g., the recent papers of He et al. [9], Gárciga Otero and Iusem [10], Solodov and Svaiter [11], and Solodov [12]. Moreover, Zeng and Yao [13] and Nadezhkina and Takahashi [14] introduced iterative processes based on the extragradient method for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality problem for a monotone, Lipschitz continuous mapping. Yao and Yao [15] introduced an iterative process based on the extragradient method for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality problem for a k-inverse strongly monotone mapping. Plubtieng and Punpaeng [16] introduced an iterative process, based on the extragradient method, for finding a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of an equilibrium problem and the set of solutions of a variational inequality problem for an α-inverse strongly monotone mapping.
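To make the two-step structure of (1.8) concrete, the following minimal sketch runs the method on a toy instance of our own choosing (the closed unit ball in ℝ² and the rotation operator B(x) = (x₂, −x₁); neither comes from the paper). The rotation is monotone and 1-Lipschitz but not inverse strongly monotone, which is exactly the situation the extragradient correction was designed for.

```python
import numpy as np

def proj_ball(x):
    """Metric projection onto the closed unit ball (radial clip)."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def B(x):
    # Rotation by -90 degrees: monotone, 1-Lipschitz, <Bx, x> = 0.
    return np.array([x[1], -x[0]])

def extragradient(x0, lam=0.5, iters=200):
    # Korpelevich's scheme (1.8): lam must lie in (0, 1/k), k = 1 here.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj_ball(x - lam * B(x))   # predictor step
        x = proj_ball(x - lam * B(y))   # corrector re-evaluates B at y
    return x

x = extragradient([0.9, 0.7])
print(np.linalg.norm(x))  # near 0: the unique solution of VI(C, B) is the origin
```

Note that the one-step projected-gradient iteration x_{n+1} = P_C(x_n − λBx_n) fails here: since ⟨Bx, x⟩ = 0, each unprojected step increases the norm by the factor √(1+λ²), so the iterates drift to the boundary circle and rotate along it. The extra evaluation of B at the predictor point y_n is what restores convergence.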

In 2003, Takahashi and Toyoda [17], introduced the following iterative scheme:

x_{n+1} = α_n x_n + (1 − α_n) S P_C(x_n − λ_n A x_n),
(1.9)

where {α_n} is a sequence in (0,1) and {λ_n} is a sequence in (0, 2α). They proved that if Fix(S)∩VI(C,A) ≠ ∅, then the sequence {x_n} generated by (1.9) converges weakly to some z ∈ Fix(S)∩VI(C,A). Recently, Zeng and Yao [18] introduced the following iterative scheme:

x_0 = x ∈ C,
y_n = P_C(x_n − λ_n A x_n),
x_{n+1} = α_n x_0 + (1 − α_n) S P_C(x_n − λ_n A y_n),
(1.10)

where {λ_n} and {α_n} satisfy the following conditions: (i) λ_n k ∈ (0, 1 − δ) for some δ ∈ (0,1) and (ii) α_n ∈ (0,1), Σ_{n=1}^∞ α_n = ∞, lim_{n→∞} α_n = 0. They proved that the sequences {x_n} and {y_n} generated by (1.10) converge strongly to the same point P_{Fix(S)∩VI(C,A)} x_0, provided that lim_{n→∞} ‖x_{n+1} − x_n‖ = 0.

In 2006, Nadezhkina and Takahashi [19] also considered the extragradient method (1.9) for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality, but the convergence obtained was still only weak. The question posed by Takahashi and Toyoda [17], whether strong convergence can be proved for the same iteration scheme (1.9), remained open.

In 2010, with the techniques adopted by Noor and Rassias [20], Huang, Noor and Al-Said [21] defined the projected residual function by

R λ (x)=x P C (xλAx),
(1.11)

It is well known that x ∈ C is a solution of the variational inequality (1.6) if and only if x is a zero of the projected residual function (1.11). They proved strong convergence of the iteration scheme (1.9) using an error analysis technique.

In this paper, inspired and motivated by the above research and by Huang, Noor and Al-Said [21], we introduce a new iterative scheme based on the extragradient method for finding a common element of the set of solutions of a generalized mixed equilibrium problem, the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality problem for an inverse strongly monotone mapping, as follows:

x_1 = x ∈ C,
F(u_n, y) + ⟨Bx_n, y − u_n⟩ + φ(y) − φ(u_n) + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y ∈ C,
y_n = P_C(u_n − λ_n A u_n),
x_{n+1} = α_n x_n + (1 − α_n) S[β_n x_n + (1 − β_n) P_C(y_n − λ_n A y_n)],

where {α_n}, {β_n}, {r_n}, {λ_n} satisfy suitable parameter control conditions. We obtain some strong convergence theorems using the error analysis technique of [21]. The results in this paper generalize, extend and unify some well-known convergence theorems in the literature.

2 Preliminaries

Let C be a closed convex subset of a Hilbert space H. For every point x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that

‖x − P_C x‖ ≤ ‖x − y‖ for all y ∈ C.

P C is called the metric projection of H onto C. It is well known that P C is a nonexpansive mapping of H onto C and satisfies

⟨x − y, P_C x − P_C y⟩ ≥ ‖P_C x − P_C y‖²

for every x,yH. Moreover, P C x is characterized by the following properties: P C xC and

⟨x − P_C x, y − P_C x⟩ ≤ 0,  ‖x − y‖² ≥ ‖x − P_C x‖² + ‖y − P_C x‖²
(2.1)

for all x ∈ H, y ∈ C.
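The projection inequalities of (2.1) can be spot-checked numerically. In the sketch below, C is the box [0,1]³ (a concrete choice of ours, not from the paper), whose metric projection is a coordinatewise clip:

```python
import numpy as np

def P_C(x):
    # Metric projection onto the box C = [0, 1]^3: clip each coordinate.
    return np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(0)
for _ in range(1000):
    x = 3.0 * rng.normal(size=3)        # arbitrary point of H = R^3
    y = rng.uniform(0.0, 1.0, size=3)   # arbitrary point of C
    px = P_C(x)
    # <x - P_C x, y - P_C x> <= 0 for every y in C
    assert np.dot(x - px, y - px) <= 1e-12
    # ||x - y||^2 >= ||x - P_C x||^2 + ||y - P_C x||^2
    assert np.dot(x - y, x - y) >= (np.dot(x - px, x - px)
                                    + np.dot(y - px, y - px) - 1e-12)
print("both inequalities of (2.1) hold on all samples")
```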

A mapping A of C into H is called monotone if

⟨Ax − Ay, x − y⟩ ≥ 0

for all x, y ∈ C. A mapping A of C into H is called inverse strongly monotone with modulus α > 0 (in short, α-inverse strongly monotone) if

⟨x − y, Ax − Ay⟩ ≥ α‖Ax − Ay‖²

for all x, y ∈ C.

Recall that a mapping S of C into itself is nonexpansive if

‖Sx − Sy‖ ≤ ‖x − y‖ for all x, y ∈ C.

A mapping T of C into itself is pseudocontractive if

⟨Tx − Ty, x − y⟩ ≤ ‖x − y‖²

for all x, y ∈ C. Obviously, the class of pseudocontractive mappings is more general than the class of nonexpansive mappings.

Let A be a monotone mapping from C into H. In the context of the variational inequality problem, the characterization of projection (2.1) implies the following:

u ∈ VI(C,A)  ⟺  u = P_C(u − λAu), ∀λ > 0.
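This characterization is also the basis of fixed-point algorithms: one iterates u ← P_C(u − λAu) and monitors the residual u − P_C(u − λAu). The sketch below uses a toy instance of our own choosing (C = [0,1]² and A(x) = x − b, which is 1-inverse strongly monotone; none of these choices come from the paper):

```python
import numpy as np

def P_C(x):
    return np.clip(x, 0.0, 1.0)   # projection onto the box C = [0, 1]^2

b = np.array([2.0, 0.3])
A = lambda x: x - b               # 1-inverse strongly monotone (alpha = 1)

lam = 1.0                         # any lam in (0, 2*alpha) works here
u = np.zeros(2)
for _ in range(100):
    u = P_C(u - lam * A(u))       # Picard iteration on u = P_C(u - lam*A u)

residual = np.linalg.norm(u - P_C(u - lam * A(u)))
print(u, residual)                # u -> [1.  0.3], residual -> 0.0
```

The limit u = (1, 0.3) = P_C(b) satisfies ⟨Au, y − u⟩ = 1 − y₁ ≥ 0 for every y ∈ C, so it is indeed the unique element of VI(C,A), and its projected residual vanishes.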

It is also known that H satisfies Opial's condition: for any sequence {x_n} with x_n ⇀ x, the inequality

lim inf_{n→∞} ‖x_n − x‖ < lim inf_{n→∞} ‖x_n − y‖

holds for every yH with yx.

For solving the generalized mixed equilibrium problem, let us give the following assumptions for the bifunction F, the function φ and the set C:

  1. (A1)

F(x,x) = 0 for all x ∈ C;

  2. (A2)

F is monotone, i.e., F(x,y) + F(y,x) ≤ 0 for any x, y ∈ C;

  3. (A3)

for each y ∈ C, x ↦ F(x,y) is weakly upper semicontinuous;

  4. (A4)

for each x ∈ C, y ↦ F(x,y) is convex;

  5. (A5)

for each x ∈ C, y ↦ F(x,y) is lower semicontinuous;

  6. (B1)

for each x ∈ H and r > 0, there exist a bounded subset D_x ⊆ C and y_x ∈ C∩dom φ such that, for any z ∈ C∖D_x,

    F(z, y_x) + φ(y_x) + ⟨Bz, y_x − z⟩ + (1/r)⟨y_x − z, z − x⟩ < φ(z);
  7. (B2)

    C is a bounded set.

Lemma 2.1 [1]

Let C be a nonempty closed convex subset of a Hilbert space H. Let F be a bifunction from C×C to ℝ satisfying (A1)-(A5) and let φ:C→ℝ∪{+∞} be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For r > 0 and x ∈ H, define a mapping T_r:H→C as follows:

T_r(x) = { z ∈ C : F(z,y) + φ(y) + ⟨Bz, y − z⟩ + (1/r)⟨y − z, z − x⟩ ≥ φ(z), ∀y ∈ C }

for all x ∈ H. Then the following conclusions hold:

  1. (1)

For each x ∈ H, T_r(x) ≠ ∅;

  2. (2)

T_r is single-valued;

  3. (3)

T_r is firmly nonexpansive, i.e., for any x, y ∈ H,

‖T_r(x) − T_r(y)‖² ≤ ⟨T_r(x) − T_r(y), x − y⟩;
  4. (4)

Fix(T_r(I − rB)) = GMEP(F,φ,B);

  5. (5)

    GMEP(F,φ,B) is closed and convex.

Lemma 2.2 [22]

For any x* ∈ VI(C,A), if A:C→H is α-inverse strongly monotone, then R_λ(x) is (1 − λ/(4α))-inverse strongly monotone for any λ ∈ [0, 4α] and

⟨x − x*, R_λ(x)⟩ ≥ (1 − λ/(4α))‖R_λ(x)‖²,

where R_λ(x) = x − P_C(x − λAx).
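Lemma 2.2 can be spot-checked numerically. The instance below is our own toy choice (C = [0,1]², A(x) = x − b with α = 1, so VI(C,A) = {(1, 0.3)}); it is not an example from the paper:

```python
import numpy as np

P_C = lambda x: np.clip(x, 0.0, 1.0)     # projection onto C = [0, 1]^2
b = np.array([2.0, 0.3])
A = lambda x: x - b                      # 1-inverse strongly monotone
x_star = np.array([1.0, 0.3])            # the unique point of VI(C, A)
alpha = 1.0

def R(lam, x):
    # Projected residual R_lam(x) = x - P_C(x - lam * A x)
    return x - P_C(x - lam * A(x))

rng = np.random.default_rng(1)
for _ in range(1000):
    x = 2.0 * rng.normal(size=2)
    lam = rng.uniform(0.0, 4.0 * alpha)
    r = R(lam, x)
    # <x - x*, R_lam(x)> >= (1 - lam/(4*alpha)) * ||R_lam(x)||^2
    assert np.dot(x - x_star, r) >= (1.0 - lam / (4.0 * alpha)) * np.dot(r, r) - 1e-9
print("the inequality of Lemma 2.2 holds on all samples")
```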

Lemma 2.3 [21]

For all x ∈ H and λ′ ≥ λ > 0, it holds that

‖R_λ(x)‖ ≤ ‖R_{λ′}(x)‖,

where R_λ(x) = x − P_C(x − λAx).

Lemma 2.4 [23]

Let {a_n} and {b_n} be two sequences of non-negative numbers such that a_{n+1} ≤ a_n + b_n for all n ∈ ℕ. If Σ_{n=1}^∞ b_n < +∞ and {a_n} has a subsequence {a_{n_k}} converging to 0, then lim_{n→∞} a_n = 0.

Lemma 2.5 [24]

Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let {x_n} be a sequence in H. Suppose that, for any x* ∈ C,

‖x_{n+1} − x*‖ ≤ ‖x_n − x*‖ (n ∈ ℕ).

Then lim_{n→∞} P_C(x_n) = z for some z ∈ C.

3 Main results

Theorem 3.1 Let C be a closed and convex subset of a real Hilbert space H. Let F be a bifunction from C×C to ℝ satisfying (A1)-(A5) and let φ:C→ℝ∪{+∞} be a proper lower semicontinuous and convex function. Let A be an α-inverse strongly monotone mapping from C into H and let B be a β-inverse strongly monotone mapping from C into H. Let S be a nonexpansive mapping of C into itself such that Ω = Fix(S)∩VI(C,A)∩GMEP(F,φ,B) ≠ ∅. Assume that either (B1) or (B2) holds. Let {x_n}, {y_n} and {u_n} be sequences generated by

x_1 = x ∈ C,
F(u_n, y) + ⟨Bx_n, y − u_n⟩ + φ(y) − φ(u_n) + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y ∈ C,
y_n = P_C(u_n − λ_n A u_n),
x_{n+1} = α_n x_n + (1 − α_n) S[β_n x_n + (1 − β_n) P_C(y_n − λ_n A y_n)]
(3.1)

for every n = 1, 2, …, where {λ_n}, {r_n}, {α_n}, {β_n} satisfy the following conditions: (i) 0 < r_n < 2β and {λ_n} ⊂ [a,b] for some a, b ∈ (0, 2α); (ii) {α_n} ⊂ [c,d] and {β_n} ⊂ [e,f] for some c, d, e, f ∈ (0,1). Then {x_n} converges strongly to p* ∈ Ω, where p* = lim_{n→∞} P_Ω(x_n).

Proof We divide the proof into five steps.

Step 1. We claim that {x_n} is bounded and lim_{n→∞} ‖R_a(u_n)‖ = lim_{n→∞} ‖R_{λ_n}(u_n)‖ = 0.

Put

v_n = P_C(y_n − λ_n A y_n),  w_n = β_n x_n + (1 − β_n)v_n,  R_{λ_n}(u_n) = u_n − P_C(u_n − λ_n A u_n),  R_{λ_n}(y_n) = y_n − P_C(y_n − λ_n A y_n)

for every n = 1, 2, … . Take any p ∈ Ω and let {T_{r_n}} be a sequence of mappings defined as in Lemma 2.1; then p = P_C(p − λ_n A p) = T_{r_n}(p − r_n B p). From u_n = T_{r_n}(x_n − r_n B x_n) ∈ C, the β-inverse strong monotonicity of B and 0 < r_n < 2β, we have

‖u_n − p‖² = ‖T_{r_n}(x_n − r_n B x_n) − T_{r_n}(p − r_n B p)‖²
≤ ‖x_n − r_n B x_n − (p − r_n B p)‖²
= ‖x_n − p‖² − 2r_n⟨x_n − p, Bx_n − Bp⟩ + r_n²‖Bx_n − Bp‖²
≤ ‖x_n − p‖² − 2r_n β‖Bx_n − Bp‖² + r_n²‖Bx_n − Bp‖²
= ‖x_n − p‖² + r_n(r_n − 2β)‖Bx_n − Bp‖²
≤ ‖x_n − p‖²,
(3.2)

and from Lemma 2.2, we have

‖y_n − p‖² = ‖u_n − R_{λ_n}(u_n) − p‖²
= ‖u_n − p‖² − 2⟨u_n − p, R_{λ_n}(u_n)⟩ + ‖R_{λ_n}(u_n)‖²
≤ ‖u_n − p‖² − 2(1 − λ_n/(4α))‖R_{λ_n}(u_n)‖² + ‖R_{λ_n}(u_n)‖²
= ‖u_n − p‖² − (1 − λ_n/(2α))‖R_{λ_n}(u_n)‖²,
(3.3)

which implies from (3.2) that

‖y_n − p‖² ≤ ‖x_n − p‖² − (1 − λ_n/(2α))‖R_{λ_n}(u_n)‖².
(3.4)

By the same process as in (3.3), we also have from (3.4) that

‖v_n − p‖² ≤ ‖y_n − p‖² − (1 − λ_n/(2α))‖R_{λ_n}(y_n)‖²
≤ ‖x_n − p‖² − (1 − λ_n/(2α))‖R_{λ_n}(u_n)‖² − (1 − λ_n/(2α))‖R_{λ_n}(y_n)‖².
(3.5)

Further, from (3.1) and (3.5), we get

‖w_n − p‖² = β_n²‖x_n − p‖² + 2β_n(1 − β_n)⟨x_n − p, v_n − p⟩ + (1 − β_n)²‖v_n − p‖²
≤ β_n²‖x_n − p‖² + 2β_n(1 − β_n)‖x_n − p‖² + (1 − β_n)²‖x_n − p‖² − (1 − β_n)²(1 − λ_n/(2α))‖R_{λ_n}(u_n)‖² − (1 − β_n)²(1 − λ_n/(2α))‖R_{λ_n}(y_n)‖²
= ‖x_n − p‖² − (1 − β_n)²(1 − λ_n/(2α))‖R_{λ_n}(u_n)‖² − (1 − β_n)²(1 − λ_n/(2α))‖R_{λ_n}(y_n)‖².
(3.6)

Hence, from (3.6), the nonexpansive property of the mapping S and 0< λ n <2α, we have

‖x_{n+1} − p‖² = α_n²‖x_n − p‖² + (1 − α_n)²‖Sw_n − p‖² + 2α_n(1 − α_n)⟨Sw_n − Sp, x_n − p⟩
≤ α_n²‖x_n − p‖² + (1 − α_n)²‖w_n − p‖² + 2α_n(1 − α_n)‖x_n − p‖²
≤ α_n²‖x_n − p‖² + (1 − α_n)²‖x_n − p‖² + 2α_n(1 − α_n)‖x_n − p‖² − (1 − α_n)²(1 − β_n)²(1 − λ_n/(2α))‖R_{λ_n}(u_n)‖² − (1 − α_n)²(1 − β_n)²(1 − λ_n/(2α))‖R_{λ_n}(y_n)‖²
= ‖x_n − p‖² − (1 − α_n)²(1 − β_n)²(1 − λ_n/(2α))‖R_{λ_n}(u_n)‖² − (1 − α_n)²(1 − β_n)²(1 − λ_n/(2α))‖R_{λ_n}(y_n)‖²
≤ ‖x_n − p‖².
(3.7)

Since the sequence {‖x_n − p‖} is bounded and nonincreasing, lim_{n→∞} ‖x_n − p‖ exists. Hence {x_n} is bounded; consequently, the sequences {u_n}, {v_n}, {w_n} and {y_n} are also bounded. By (3.7), we have

(1 − α_n)²(1 − β_n)²(1 − λ_n/(2α))‖R_{λ_n}(u_n)‖² ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖².

From the conditions (i) and (ii), there must exist a constant M 1 >0 such that

M_1‖R_{λ_n}(u_n)‖² ≤ (1 − α_n)²(1 − β_n)²(1 − λ_n/(2α))‖R_{λ_n}(u_n)‖² ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖²,

from which it follows that

M_1 Σ_{n=1}^∞ ‖R_{λ_n}(u_n)‖² ≤ Σ_{n=1}^∞ [‖x_n − p‖² − ‖x_{n+1} − p‖²] ≤ ‖x_1 − p‖² < ∞.

Hence lim_{n→∞} ‖R_{λ_n}(u_n)‖ = 0. Since R_{λ_n}(u_n) = u_n − P_C(u_n − λ_n A u_n) = u_n − y_n, we get lim_{n→∞} ‖u_n − y_n‖ = 0. Notice that λ_n ≥ a; then, by Lemma 2.3, ‖R_a(u_n)‖ ≤ ‖R_{λ_n}(u_n)‖. Therefore,

lim_{n→∞} ‖R_a(u_n)‖ = lim_{n→∞} ‖R_{λ_n}(u_n)‖ = 0.
(3.8)

In the same way, we also get that

lim_{n→∞} ‖R_{λ_n}(y_n)‖ = lim_{n→∞} ‖y_n − v_n‖ = 0,

and thus

lim_{n→∞} ‖u_n − v_n‖ = 0.
(3.9)

Step 2. We show that lim_{n→∞} ‖x_n − u_n‖ = lim_{n→∞} ‖Sx_n − x_n‖ = 0.

Indeed, for any p ∈ Ω, it follows from (3.1) and (3.5) that

‖w_n − p‖² = β_n‖x_n − p‖² + (1 − β_n)‖v_n − p‖² − β_n(1 − β_n)‖x_n − v_n‖² ≤ ‖x_n − p‖² − β_n(1 − β_n)‖x_n − v_n‖²,

which implies that

‖x_{n+1} − p‖² = α_n‖x_n − p‖² + (1 − α_n)‖Sw_n − p‖² − α_n(1 − α_n)‖Sw_n − x_n‖² ≤ ‖x_n − p‖² − α_n(1 − α_n)‖Sw_n − x_n‖² − (1 − α_n)β_n(1 − β_n)‖x_n − v_n‖².
(3.10)

Thus, it follows from (3.10) that

α_n(1 − α_n)‖Sw_n − x_n‖² ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖².

From the condition (ii), there exists a constant M 2 >0 such that

M_2‖Sw_n − x_n‖² ≤ α_n(1 − α_n)‖Sw_n − x_n‖² ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖²,

from which it follows that

M_2 Σ_{n=1}^∞ ‖Sw_n − x_n‖² ≤ Σ_{n=1}^∞ [‖x_n − p‖² − ‖x_{n+1} − p‖²] ≤ ‖x_1 − p‖² < ∞.

Hence

lim_{n→∞} ‖Sw_n − x_n‖ = 0.
(3.11)

From (3.10), we also get that

(1 − α_n)β_n(1 − β_n)‖x_n − v_n‖² ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖².

In the same way, we obtain that

lim_{n→∞} ‖x_n − v_n‖ = 0,
(3.12)

which, combined with (3.9), implies that

lim_{n→∞} ‖x_n − u_n‖ = 0.
(3.13)

Moreover, since

‖Sx_n − x_n‖ ≤ ‖Sx_n − Sv_n‖ + ‖Sv_n − Sw_n‖ + ‖Sw_n − x_n‖ ≤ ‖x_n − v_n‖ + ‖v_n − w_n‖ + ‖Sw_n − x_n‖ = (1 + β_n)‖x_n − v_n‖ + ‖Sw_n − x_n‖,

it follows from (3.11) and (3.12) that

lim_{n→∞} ‖Sx_n − x_n‖ = 0.
(3.14)

Further, it follows from (3.1) and (3.11) that

‖x_{n+1} − x_n‖ = (1 − α_n)‖Sw_n − x_n‖ ≤ (1 − c)‖Sw_n − x_n‖ → 0 (n → ∞).
(3.15)

Step 3. We claim that {x_n} has a convergent subsequence {x_{n_k}} such that lim_{k→∞} x_{n_k} = p* for some p* ∈ C and, moreover, that p* ∈ Ω = Fix(S)∩VI(C,A)∩GMEP(F,φ,B).

Since {x_n} is a bounded sequence generated by Algorithm (3.1), {x_n} must have a weakly convergent subsequence {x_{n_k}} such that x_{n_k} ⇀ p* (k → ∞), which implies from (3.11) and (3.13) that Sw_{n_k} ⇀ p* (k → ∞) and u_{n_k} ⇀ p* (k → ∞). Next we show that p* ∈ Ω = Fix(S)∩VI(C,A)∩GMEP(F,φ,B).

Since A is α-inverse strongly monotone, A is (1/α)-Lipschitz continuous. Indeed, from the definition of inverse strong monotonicity,

α‖Ax − Ay‖² ≤ ⟨Ax − Ay, x − y⟩ ≤ ‖Ax − Ay‖‖x − y‖,

so that ‖Ax − Ay‖ ≤ (1/α)‖x − y‖.

From the (1/α)-Lipschitz continuity of A and the continuity of P_C, it follows that R_a(x) = x − P_C(x − aAx) is also continuous. Notice that λ_{n_k} ≥ a; then, by Lemma 2.3, ‖R_a(u_{n_k})‖ ≤ ‖R_{λ_{n_k}}(u_{n_k})‖. Then, from Step 1,

lim_{k→∞} ‖R_a(u_{n_k})‖ = lim_{k→∞} ‖R_{λ_{n_k}}(u_{n_k})‖ = 0.

Therefore, from the continuity of R_a(·), lim_{k→∞} x_{n_k} = p* and (3.13),

‖R_a(p*)‖ = lim_{k→∞} ‖R_a(x_{n_k})‖ = 0.

This shows that p* is a solution of the variational inequality (1.6), that is, p* ∈ VI(C,A). From (3.14) and the demiclosedness of I − S at zero (a consequence of Opial's condition and the nonexpansiveness of S), it follows that p* = Sp*, that is, p* ∈ Fix(S). Finally, by the same argument as in the proof of [7, Theorem 3.1], we can prove that p* ∈ GMEP(F,φ,B). Thus p* ∈ Ω = Fix(S)∩VI(C,A)∩GMEP(F,φ,B).

Next, we prove that x_{n_k} → p* as k → ∞.

From (3.1), (3.6) and (3.7) we can calculate

‖x_{n+1} − p*‖² = ⟨α_n x_n + (1 − α_n)Sw_n − p*, x_{n+1} − p*⟩
= α_n⟨x_n − p*, x_{n+1} − p*⟩ + (1 − α_n)⟨Sw_n − p*, x_{n+1} − p*⟩
≤ α_n‖x_n − p*‖² + (1 − α_n)⟨Sw_n − p*, x_{n+1} − p*⟩
= α_n‖x_n − p*‖² + (1 − α_n)⟨Sw_n − p*, x_{n+1} − x_n⟩ + (1 − α_n)⟨Sw_n − p*, x_n − p*⟩
≤ α_n‖x_n − p*‖² + (1 − α_n)‖x_n − p*‖² + (1 − α_n)⟨Sw_n − p*, x_{n+1} − x_n⟩
= ‖x_n − p*‖² + (1 − α_n)⟨Sw_n − p*, x_{n+1} − x_n⟩,

which implies

‖x_{n+1} − p*‖² − ‖x_n − p*‖² ≤ (1 − α_n)⟨Sw_n − p*, x_{n+1} − x_n⟩ ≤ (1 − c)‖Sw_n − p*‖‖x_{n+1} − x_n‖.
(3.16)

Since {Sw_n} is bounded and ‖x_{n_k+1} − x_{n_k}‖ → 0 as k → ∞ by (3.15), it follows from (3.16) that

‖x_{n_k+1} − p*‖² − ‖x_{n_k} − p*‖² → 0 (k → ∞).

Using the Kadec-Klee property of H, we obtain that lim_{k→∞} x_{n_k} = p*.

Step 4. We claim that the sequence {x_n} generated by Algorithm (3.1) converges strongly to p* ∈ Ω = Fix(S)∩VI(C,A)∩GMEP(F,φ,B).

In fact, from the result of Step 3, p* ∈ Ω. Letting p = p* in (3.7) gives ‖x_{n+1} − p*‖ ≤ ‖x_n − p*‖. Meanwhile, lim_{k→∞} ‖x_{n_k} − p*‖ = 0 from Step 3. Then, from Lemma 2.4, we have lim_{n→∞} ‖x_n − p*‖ = 0. Therefore lim_{n→∞} x_n = p*.

Step 5. We claim that p* = lim_{n→∞} P_Ω x_n.

From (2.1), we have

⟨x_n − P_Ω x_n, p* − P_Ω x_n⟩ ≤ 0.
(3.17)

By (3.7) and Lemma 2.5, lim_{n→∞} P_Ω x_n = q* for some q* ∈ Ω. Letting n → ∞ in (3.17), since lim_{n→∞} x_n = p* by Step 4, we have

⟨p* − q*, p* − q*⟩ ≤ 0,

and, consequently, p* = q*. Hence p* = lim_{n→∞} P_Ω x_n.

This completes the proof of Theorem 3.1. □

The following theorems can be obtained from Theorem 3.1 immediately.

Theorem 3.2 Let C, H, A, S be as in Theorem 3.1. Assume that Ω = Fix(S)∩VI(C,A) ≠ ∅, and let {x_n}, {y_n} be sequences generated by

x_1 = x ∈ C,
y_n = P_C(x_n − λ_n A x_n),
x_{n+1} = α_n x_n + (1 − α_n) S[β_n x_n + (1 − β_n) P_C(y_n − λ_n A y_n)]

for every n = 1, 2, …, where {λ_n}, {α_n}, {β_n} satisfy the following conditions: (i) {λ_n} ⊂ [a,b] for some a, b ∈ (0, 2α) and (ii) {α_n} ⊂ [c,d], {β_n} ⊂ [e,f] for some c, d, e, f ∈ (0,1). Then {x_n} converges strongly to p* ∈ Ω, where p* = lim_{n→∞} P_Ω(x_n).

Proof Putting B = 0, F = 0, φ = 0 and r_n = 1 in Theorem 3.1, the conclusion of Theorem 3.2 follows. □
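For illustration, the scheme of Theorem 3.2 can be run on a small synthetic problem. All concrete choices below are ours, not the paper's: C = [0,1]², A(x) = x − b with b = (2, 0.3) (so A is 1-inverse strongly monotone and VI(C,A) = {(1, 0.3)}), and S = projection onto the slice {x ∈ C : x₂ = 0.3}, a nonexpansive self-map of C whose fixed-point set meets VI(C,A) exactly at p* = (1, 0.3).

```python
import numpy as np

P_C = lambda x: np.clip(x, 0.0, 1.0)                    # projection onto C = [0, 1]^2
b = np.array([2.0, 0.3])
A = lambda x: x - b                                     # 1-inverse strongly monotone
S = lambda x: np.array([np.clip(x[0], 0.0, 1.0), 0.3])  # nonexpansive; Fix(S) = slice

lam, alpha_n, beta_n = 0.5, 0.5, 0.5   # constant parameters satisfying (i) and (ii)
x = np.array([0.2, 0.9])               # x_1 = x in C
for _ in range(300):
    y = P_C(x - lam * A(x))
    x = alpha_n * x + (1 - alpha_n) * S(beta_n * x + (1 - beta_n) * P_C(y - lam * A(y)))

print(x)   # -> approximately [1.  0.3], the unique point of Omega
```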

Remark 3.1 The main result of Nadezhkina and Takahashi [14] is a special case of our Theorem 3.2. Indeed, if we take β n =0 in Theorem 3.2, then we obtain the result of [14].

Theorem 3.3 Let C, H, F, A, B, S be as in Theorem 3.1. Assume that Ω = Fix(S)∩VI(C,A)∩GEP(F,B) ≠ ∅, and let {x_n}, {y_n} and {u_n} be sequences generated by

x_1 = x ∈ C,
F(u_n, y) + ⟨Bx_n, y − u_n⟩ + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y ∈ C,
y_n = P_C(u_n − λ_n A u_n),
x_{n+1} = α_n x_n + (1 − α_n) S[β_n x_n + (1 − β_n) P_C(y_n − λ_n A y_n)]

for every n = 1, 2, …, where {λ_n}, {r_n}, {α_n}, {β_n} satisfy conditions (i) and (ii) as in Theorem 3.1. Then {x_n} converges strongly to p* ∈ Ω, where p* = lim_{n→∞} P_Ω(x_n).

Proof Putting φ=0 in Theorem 3.1, the conclusion of Theorem 3.3 is obtained. □

Remark 3.2 Theorem 3.3 can be viewed as an improvement of Theorem 3.1 of Inchan [25] because of removing the iterative step C n in the algorithm of Theorem 3.1 of [25].

Theorem 3.4 Let C, H, F, A, S be as in Theorem 3.1. Assume that Ω = Fix(S)∩VI(C,A)∩EP(F) ≠ ∅, and let {x_n} and {u_n} be sequences generated by

x_1 = x ∈ C,
F(u_n, y) + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y ∈ C,
y_n = P_C(u_n − λ_n A u_n),
x_{n+1} = α_n x_n + (1 − α_n) S P_C(y_n − λ_n A y_n)

for every n = 1, 2, …, where {λ_n}, {r_n}, {α_n} satisfy the following conditions: r_n > 0, {λ_n} ⊂ [a,b] for some a, b ∈ (0, 2α), and {α_n} ⊂ [c,d] for some c, d ∈ (0,1). Then {x_n} converges strongly to p* ∈ Ω, where p* = lim_{n→∞} P_Ω(x_n).

Proof Taking B=φ=0, β n =0 in Theorem 3.1, the conclusion of Theorem 3.4 is obtained. □

Remark 3.3 Theorem 3.4 is the strong convergence counterpart of Theorem 3.1 of Jaiboon, Kumam and Humphries [26].

References

  1. Peng JW, Yao JC: A new hybrid-extragradient method for generalized mixed equilibrium problems and fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12(6):1401–1432.

  2. Ceng LC, Yao JC: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201. 10.1016/j.cam.2007.02.022

  3. Peng JW, Yao JC: Strong convergence theorems of iterative scheme based on the extragradient method for mixed equilibrium problems and fixed point problems. Math. Comput. Model. 2009, 49: 1816–1828. 10.1016/j.mcm.2008.11.014

  4. Peng JW, Yao JC: An iterative algorithm combining viscosity method with parallel method for a generalized equilibrium problem and strict pseudocontractions. Fixed Point Theory Appl. 2009., 2009: Article ID 794178

  5. Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033. 10.1016/j.na.2008.02.042

  6. Flam SD, Antipin AS: Equilibrium programming using proximal-like algorithms. Math. Program. 1997, 78: 29–41.

  7. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

  8. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.

  9. He BS, Yang ZH, Yuan XM: An approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 2004, 300: 362–374. 10.1016/j.jmaa.2004.04.068

  10. Gárciga Otero R, Iusem A: Proximal methods with penalization effects in Banach spaces. Numer. Funct. Anal. Optim. 2004, 25: 69–91. 10.1081/NFA-120034119

  11. Solodov MV, Svaiter BF: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Math. Oper. Res. 2000, 25: 214–230. 10.1287/moor.25.2.214.12222

  12. Solodov MV: Convergence rate analysis of iterative algorithms for solving variational inequality problems. Math. Program. 2003, 96: 513–528. 10.1007/s10107-002-0369-z

  13. Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10: 1293–1303.

  14. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z

  15. Yao Y, Yao JC: On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2007, 186: 1551–1558. 10.1016/j.amc.2006.08.062

  16. Plubtieng S, Punpaeng R: A new iterative method for equilibrium problems and fixed point problems of nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2008, 197: 548–558. 10.1016/j.amc.2007.07.075

  17. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560

  18. Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10: 1293–1303.

  19. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z

  20. Noor MA, Rassias TM: Projection methods for monotone variational inequalities. J. Math. Anal. Appl. 1999, 237: 405–412. 10.1006/jmaa.1999.6422

  21. Huang ZY, Noor MA, Al-Said E: On an open question of Takahashi for nonexpansive mappings and inverse strongly monotone mappings. J. Optim. Theory Appl. 2010, 147: 194–204. 10.1007/s10957-010-9705-2

  22. Bnouhachem A, Noor MA: A new iterative method for variational inequalities. Appl. Math. Comput. 2006, 182: 1673–1682. 10.1016/j.amc.2006.06.007

  23. Liu QH: Convergence theorems of the sequence of iterates for asymptotically demicontractive and hemicontractive mappings. Nonlinear Anal. 1996, 26: 1835–1842. 10.1016/0362-546X(94)00351-H

  24. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560

  25. Inchan I: Hybrid extragradient method for general equilibrium problems and fixed point problems in Hilbert space. Nonlinear Anal. Hybrid Syst. 2011, 5: 467–478. 10.1016/j.nahs.2010.10.005

  26. Jaiboon C, Kumam P, Humphries UW: Weak convergence theorem by an extragradient method for variational inequality, equilibrium and fixed point problems. Bull. Malays. Math. Sci. Soc. 2009, 32: 173–185.

Acknowledgements

The authors are very grateful to the referees for their careful reading, comments and suggestions, which improved the presentation of this article. The first author was supported by the Natural Science Foundational Committee of Qinhuangdao city (201101A453) and Hebei Normal University of Science and Technology (ZDJS 2009 and CXTD2010-05). The fifth author was supported by the Natural Science Foundational Committee of Qinhuangdao city (2012025A034).

Author information

Corresponding author

Correspondence to Suhong Li.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

SL and LL carried out the proof of convergence of the theorems. LC, XH and XY carried out the check of the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Li, S., Li, L., Cao, L. et al. Hybrid extragradient method for generalized mixed equilibrium problems and fixed point problems in Hilbert space. Fixed Point Theory Appl 2013, 240 (2013). https://doi.org/10.1186/1687-1812-2013-240


Keywords

  • generalized mixed equilibrium problem
  • extragradient method
  • nonexpansive mapping
  • variational inequality
  • strong convergence theorem