
A new relaxed extragradient-like algorithm for approaching common solutions of generalized mixed equilibrium problems, a more general system of variational inequalities and a fixed point problem

Abstract

In this paper, we introduce a new iterative algorithm based on the relaxed extragradient-like method for finding a common element of the set of solutions of generalized mixed equilibrium problems, the set of solutions of a more general system of variational inequalities for finitely many inverse-strongly monotone mappings, and the set of fixed points of a strictly pseudocontractive mapping in a Hilbert space. We then prove strong convergence of the scheme to a common element of the three sets described above.

1 Introduction

In this paper, we assume that $H$ is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$, and that $C$ is a nonempty closed convex subset of $H$. $P_C$ denotes the metric projection of $H$ onto $C$ and $F(T)$ denotes the fixed point set of a mapping $T$. Weak convergence of a sequence $\{x_n\}$ to $x$ is denoted by $x_n \rightharpoonup x$.

Let $\varphi: C \to \mathbb{R}$ be a real-valued function and let $A: C \to H$ be a nonlinear mapping. Suppose that $F: C \times C \to \mathbb{R}$ is a bifunction.

The generalized mixed equilibrium problem is to find $x \in C$ (see, e.g., [1–6]) such that

$$F(x, y) + \varphi(y) - \varphi(x) + \langle Ax, y - x \rangle \ge 0, \quad \forall y \in C. \tag{1.1}$$

The set of solutions of (1.1) is denoted by $\operatorname{GMEP}(F, \varphi, A)$.

If $\varphi \equiv 0$, then problem (1.1) reduces to the generalized equilibrium problem, which is to find $x \in C$ (see, e.g., [7–9]) such that

$$F(x, y) + \langle Ax, y - x \rangle \ge 0, \quad \forall y \in C. \tag{1.2}$$

The set of solutions of (1.2) is denoted by $\operatorname{GEP}(F, A)$.

If $A \equiv 0$, then problem (1.1) reduces to the mixed equilibrium problem, which is to find $x \in C$ (see, e.g., [10–13]) such that

$$F(x, y) + \varphi(y) - \varphi(x) \ge 0, \quad \forall y \in C. \tag{1.3}$$

The set of solutions of (1.3) is denoted by $\operatorname{MEP}(F, \varphi)$.

If $F \equiv 0$, then problem (1.1) reduces to the mixed variational inequality of Browder type, which is to find $x \in C$ (see, e.g., [3, 14]) such that

$$\varphi(y) - \varphi(x) + \langle Ax, y - x \rangle \ge 0, \quad \forall y \in C. \tag{1.4}$$

The set of solutions of (1.4) is denoted by $\operatorname{MVI}(C, \varphi, A)$.

If $\varphi \equiv 0$ and $A \equiv 0$, then problem (1.1) reduces to the equilibrium problem, which is to find $x \in C$ (see, e.g., [15–17]) such that

$$F(x, y) \ge 0, \quad \forall y \in C. \tag{1.5}$$

The set of solutions of (1.5) is denoted by $\operatorname{EP}(F)$.

If $F \equiv 0$ and $\varphi \equiv 0$, then problem (1.1) reduces to the variational inequality problem, which is to find $x \in C$ (see, e.g., [18–29]) such that

$$\langle Ax, y - x \rangle \ge 0, \quad \forall y \in C. \tag{1.6}$$

The set of solutions of (1.6) is denoted by $\operatorname{VI}(C, A)$.

If $F \equiv 0$ and $A \equiv 0$, then problem (1.1) reduces to the minimization problem, which is to find $x \in C$ (see, e.g., [18–28]) such that

$$\varphi(y) - \varphi(x) \ge 0, \quad \forall y \in C. \tag{1.7}$$

The set of solutions of (1.7) is denoted by $\operatorname{Argmin}(\varphi)$.

Let $A, B: C \to H$ be two mappings. Ceng et al. [2] considered the following problem of finding $(x^{*}, y^{*}) \in C \times C$ such that

$$\begin{cases} \langle \lambda A y^{*} + x^{*} - y^{*},\, x - x^{*} \rangle \ge 0, & \forall x \in C,\\ \langle \mu B x^{*} + y^{*} - x^{*},\, x - y^{*} \rangle \ge 0, & \forall x \in C, \end{cases} \tag{1.8}$$

which is called a general system of variational inequalities, where $\lambda > 0$ and $\mu > 0$ are two constants. In particular, if $A = B$ and $x^{*} = y^{*}$, then problem (1.8) reduces to the classical variational inequality problem (1.6).

In order to find a common element of the set of solutions of problem (1.8) and the set of fixed points of a nonexpansive mapping $S$, Ceng et al. [2] studied the following algorithm: fix $u \in C$, $x_0 \in C$, and

$$\begin{cases} z_n = T_{\lambda_n}^{(\Theta,\varphi)}(x_n - \lambda_n F x_n),\\ y_n = T_{\mu_1}^{G_1}\bigl[T_{\mu_2}^{G_2}(z_n - \mu_2 B_2 z_n) - \mu_1 B_1 T_{\mu_2}^{G_2}(z_n - \mu_2 B_2 z_n)\bigr],\\ x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n y_n + \delta_n S y_n, \quad n \ge 0. \end{cases} \tag{1.9}$$

Under appropriate conditions, they obtained a strong convergence theorem.

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\{A_i\}_{i=1}^{N}: C \to H$ be a family of mappings. Cai and Bu [1] considered the following problem of finding $(x_1^{*}, x_2^{*}, \ldots, x_N^{*}) \in C \times C \times \cdots \times C$ such that

$$\begin{cases} \langle \lambda_N A_N x_N^{*} + x_1^{*} - x_N^{*},\, x - x_1^{*} \rangle \ge 0, & \forall x \in C,\\ \langle \lambda_{N-1} A_{N-1} x_{N-1}^{*} + x_N^{*} - x_{N-1}^{*},\, x - x_N^{*} \rangle \ge 0, & \forall x \in C,\\ \quad\vdots\\ \langle \lambda_2 A_2 x_2^{*} + x_3^{*} - x_2^{*},\, x - x_3^{*} \rangle \ge 0, & \forall x \in C,\\ \langle \lambda_1 A_1 x_1^{*} + x_2^{*} - x_1^{*},\, x - x_2^{*} \rangle \ge 0, & \forall x \in C. \end{cases} \tag{1.10}$$

And (1.10) can be rewritten as

$$\begin{cases} \langle x_1^{*} - (I - \lambda_N A_N) x_N^{*},\, x - x_1^{*} \rangle \ge 0, & \forall x \in C,\\ \langle x_N^{*} - (I - \lambda_{N-1} A_{N-1}) x_{N-1}^{*},\, x - x_N^{*} \rangle \ge 0, & \forall x \in C,\\ \quad\vdots\\ \langle x_3^{*} - (I - \lambda_2 A_2) x_2^{*},\, x - x_3^{*} \rangle \ge 0, & \forall x \in C,\\ \langle x_2^{*} - (I - \lambda_1 A_1) x_1^{*},\, x - x_2^{*} \rangle \ge 0, & \forall x \in C, \end{cases} \tag{1.11}$$

which is called a more general system of variational inequalities in Hilbert spaces, where $\lambda_i > 0$ for all $i \in \{1, 2, \ldots, N\}$. The set of solutions of (1.10) is denoted by $\Omega$. In particular, if $N = 2$, $A_1 = B$, $A_2 = A$, $\lambda_1 = \mu$, $\lambda_2 = \lambda$, $x_1^{*} = x^{*}$, $x_2^{*} = y^{*}$, then problem (1.10) reduces to problem (1.8).

In order to find a common element of the set of solutions of problem (1.10) and the set of common fixed points of a family of strictly pseudocontractive mappings, Cai and Bu [1] studied the following algorithm: pick any $x_0 \in H$, set $C_1 = C$, $x_1 = P_{C_1} x_0$, and

$$\begin{cases} u_n = T_{r_{M,n}}^{(F_M,\varphi_M)}(I - r_{M,n} B_M)\, T_{r_{M-1,n}}^{(F_{M-1},\varphi_{M-1})}(I - r_{M-1,n} B_{M-1}) \cdots T_{r_{1,n}}^{(F_1,\varphi_1)}(I - r_{1,n} B_1) x_n,\\ y_n = P_C(I - \lambda_N A_N) P_C(I - \lambda_{N-1} A_{N-1}) \cdots P_C(I - \lambda_2 A_2) P_C(I - \lambda_1 A_1) u_n,\\ z_n = \alpha_n y_n + (1 - \alpha_n) S_n y_n,\\ C_{n+1} = \{ z \in C_n : \|z_n - z\| \le \|x_n - z\| \},\\ x_{n+1} = P_{C_{n+1}} x_0, \quad n \ge 1. \end{cases} \tag{1.12}$$

Under suitable conditions, they also obtained a strong convergence theorem.

In this paper, motivated and inspired by the above facts, we study a new iterative algorithm based on the relaxed extragradient-like method for finding a common element of the set of solutions of generalized mixed equilibrium problems, the set of solutions of a more general system of variational inequalities for finitely many inverse-strongly monotone mappings, and the set of fixed points of a strictly pseudocontractive mapping in a Hilbert space. We then prove strong convergence of the scheme to a common element of the three sets described above.

2 Preliminaries

For solving the equilibrium problem, let us assume that the bifunction F satisfies the following conditions:

(A1) $F(x, x) = 0$ for all $x \in C$;

(A2) $F$ is monotone, i.e., $F(x, y) + F(y, x) \le 0$ for any $x, y \in C$;

(A3) $F$ is weakly upper semicontinuous, i.e., for each $x, y, z \in C$, $\limsup_{t \downarrow 0} F(x + t(z - x), y) \le F(x, y)$;

(A4) $F(x, \cdot)$ is convex and lower semicontinuous for each $x \in C$;

(B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that for any $z \in C \setminus D_x$, $F(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r}\langle y_x - z, z - x \rangle < 0$;

(B2) $C$ is a bounded set.

Let $H$ be a real Hilbert space. It is well known that

$$\|x + y\|^2 = \|x\|^2 + \|y\|^2 + 2\langle x, y \rangle \tag{2.1}$$

and

$$\|x\|^2 - \|y\|^2 \le \|x - y\|\bigl(\|x\| + \|y\|\bigr) \tag{2.2}$$

for all $x, y \in H$.

Definition 2.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$.

(1) A mapping $T: C \to C$ is said to be nonexpansive if $\|Tx - Ty\| \le \|x - y\|$, $\forall x, y \in C$;

(2) A mapping $T: C \to H$ is said to be $L$-Lipschitzian if there exists $L > 0$ such that $\|Tx - Ty\| \le L\|x - y\|$, $\forall x, y \in C$;

(3) A mapping $T: C \to C$ is said to be $k$-strictly pseudocontractive if there exists a constant $k \in [0, 1)$ such that

$$\|Tx - Ty\|^2 \le \|x - y\|^2 + k\|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C. \tag{2.3}$$

It is obvious that if $k = 0$, then the mapping $T$ is nonexpansive;

(4) A mapping $T: C \to H$ is said to be monotone if $\langle Tx - Ty, x - y \rangle \ge 0$, $\forall x, y \in C$;

(5) A mapping $T: C \to H$ is said to be $\alpha$-inverse-strongly monotone if there exists a positive real number $\alpha$ such that $\langle Tx - Ty, x - y \rangle \ge \alpha\|Tx - Ty\|^2$, $\forall x, y \in C$.

It is obvious that any $\alpha$-inverse-strongly monotone mapping $T$ is monotone and $\frac{1}{\alpha}$-Lipschitz continuous.

Definition 2.2 $P_C: H \to C$ is called a metric projection if for every point $x \in H$ there exists a unique nearest point in $C$, denoted by $P_C x$, such that

$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$$
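For concreteness, the projection has a simple closed form for some standard choices of $C$. The following Python sketch is not part of the original paper; the ball, box and test points are assumptions made only for illustration. It shows $P_C$ for a closed ball and for a box, and numerically checks the nonexpansivity stated in Lemma 2.3(2) below.

```python
import numpy as np

def project_onto_ball(x, center, radius):
    """Metric projection of x onto the closed ball B(center, radius)."""
    d = x - center
    norm_d = np.linalg.norm(d)
    if norm_d <= radius:
        return x.copy()                      # x already lies in C
    return center + (radius / norm_d) * d    # scale back to the boundary

def project_onto_box(x, lower, upper):
    """Metric projection of x onto the box {y : lower <= y <= upper}."""
    return np.minimum(np.maximum(x, lower), upper)

# P_C is nonexpansive: ||P_C x - P_C y|| <= ||x - y||.
x, y = np.array([3.0, 4.0]), np.array([-1.0, 2.0])
Px = project_onto_ball(x, np.zeros(2), 1.0)
Py = project_onto_ball(y, np.zeros(2), 1.0)
assert np.linalg.norm(Px - Py) <= np.linalg.norm(x - y) + 1e-12
```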

In order to prove our main results in the next section, we recall some lemmas.

Lemma 2.1 [30]

Let $C$ be a nonempty closed convex subset of $H$ and let $T: C \to C$ be a $k$-strictly pseudocontractive mapping. Then the following results hold:

(1) equation (2.3) is equivalent to

$$\langle Tx - Ty, x - y \rangle \le \|x - y\|^2 - \frac{1 - k}{2}\|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C; \tag{2.4}$$

(2) $T$ is Lipschitz continuous with constant $\frac{1 + k}{1 - k}$, i.e.,

$$\|Tx - Ty\| \le \frac{1 + k}{1 - k}\|x - y\|, \quad \forall x, y \in C; \tag{2.5}$$

(3) (Demi-closedness principle) $I - T$ is demi-closed on $C$, that is, if $x_n \rightharpoonup x^{*} \in C$ and $(I - T)x_n \to 0$, then $x^{*} = T x^{*}$.

Lemma 2.2 [1]

Let $C$ be a nonempty closed convex subset of $H$ and let $T: C \to H$ be an $\alpha$-inverse-strongly monotone mapping. Then for all $x, y \in C$ and $\lambda > 0$, we have

$$\begin{aligned} \|(I - \lambda T)x - (I - \lambda T)y\|^2 &= \|(x - y) - \lambda(Tx - Ty)\|^2\\ &= \|x - y\|^2 - 2\lambda\langle Tx - Ty, x - y \rangle + \lambda^2\|Tx - Ty\|^2\\ &\le \|x - y\|^2 + \lambda(\lambda - 2\alpha)\|Tx - Ty\|^2. \end{aligned}$$

So, if $0 < \lambda \le 2\alpha$, then $I - \lambda T$ is a nonexpansive mapping from $C$ to $H$.
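As a quick numerical sanity check of Lemma 2.2 (not from the paper), one can take $T = \nabla f$ for a convex quadratic $f(x) = \frac{1}{2}\langle Qx, x\rangle$ with $Q$ symmetric positive definite; such a $T$ is $\alpha$-inverse-strongly monotone with $\alpha = 1/\|Q\|$. The matrix $Q$, the step size and the random test points below are assumptions chosen only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])                     # symmetric positive definite
T = lambda x: Q @ x                            # T = gradient of (1/2)<Qx, x>
alpha = 1.0 / np.linalg.norm(Q, 2)             # T is alpha-inverse-strongly monotone
lam = 2.0 * alpha                              # any 0 < lam <= 2*alpha works

for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = np.linalg.norm((x - lam * T(x)) - (y - lam * T(y)))
    assert lhs <= np.linalg.norm(x - y) + 1e-10   # I - lam*T is nonexpansive
```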

Lemma 2.3 Let $C$ be a nonempty closed convex subset of $H$ and let $P_C: H \to C$ be the metric projection. Then

(1) $\|P_C x - P_C y\|^2 \le \langle x - y, P_C x - P_C y \rangle$, $\forall x, y \in H$;

(2) moreover, $P_C$ is a nonexpansive mapping, i.e., $\|P_C x - P_C y\| \le \|x - y\|$, $\forall x, y \in H$;

(3) $\langle x - P_C x, y - P_C x \rangle \le 0$, $\forall x \in H$, $y \in C$;

(4) $\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2$, $\forall x \in H$, $y \in C$.

Lemma 2.4 [4]

Let $C$ be a nonempty closed convex subset of $H$. Assume that $F: C \times C \to \mathbb{R}$ satisfies (A1)-(A4) and let $\varphi: C \to \mathbb{R}$ be a lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For $r > 0$ and $x \in H$, define a mapping $T_r^{(F,\varphi)}: H \to C$ as follows:

$$T_r^{(F,\varphi)}(x) = \Bigl\{ z \in C : F(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0,\ \forall y \in C \Bigr\}$$

for all $x \in H$. Then the following hold:

(1) for each $x \in H$, $T_r^{(F,\varphi)}(x) \neq \emptyset$ and $T_r^{(F,\varphi)}$ is single-valued;

(2) $T_r^{(F,\varphi)}$ is firmly nonexpansive, that is, for any $x, y \in H$,

$$\bigl\|T_r^{(F,\varphi)}(x) - T_r^{(F,\varphi)}(y)\bigr\|^2 \le \bigl\langle T_r^{(F,\varphi)}(x) - T_r^{(F,\varphi)}(y),\, x - y \bigr\rangle;$$

(3) $F(T_r^{(F,\varphi)}) = \operatorname{MEP}(F, \varphi)$;

(4) $\operatorname{MEP}(F, \varphi)$ is closed and convex.

Lemma 2.5 [3]

Let $C$ be a nonempty closed convex subset of $H$. Assume that $F: C \times C \to \mathbb{R}$ satisfies (A1)-(A4), $B: C \to H$ is a continuous monotone mapping and let $\varphi: C \to \mathbb{R}$ be a lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For $r > 0$ and $x \in H$, define a mapping $K_r^{(F,\varphi)}: H \to C$ as follows:

$$K_r^{(F,\varphi)}(x) = \Bigl\{ z \in C : F(z, y) + \varphi(y) - \varphi(z) + \langle Bx, y - z \rangle + \frac{1}{r}\langle y - z, z - x \rangle \ge 0,\ \forall y \in C \Bigr\}$$

for all $x \in H$. Then the following hold:

(1) for each $x \in H$, $K_r^{(F,\varphi)}(x) \neq \emptyset$ and $K_r^{(F,\varphi)}$ is single-valued;

(2) $K_r^{(F,\varphi)}$ is firmly nonexpansive, that is, for any $x, y \in H$,

$$\bigl\|K_r^{(F,\varphi)}(x) - K_r^{(F,\varphi)}(y)\bigr\|^2 \le \bigl\langle K_r^{(F,\varphi)}(x) - K_r^{(F,\varphi)}(y),\, x - y \bigr\rangle;$$

(3) $F(K_r^{(F,\varphi)}) = \operatorname{GMEP}(F, \varphi, B)$;

(4) $\operatorname{GMEP}(F, \varphi, B)$ is closed and convex.

Lemma 2.6 Let $C$ be a nonempty closed convex subset of $H$. Let $\{F_k\}_{k=1}^{M}$ be a family of bifunctions from $C \times C$ into $\mathbb{R}$ satisfying (A1)-(A4), let $\{\varphi_k\}_{k=1}^{M}$ be a family of lower semicontinuous functions from $C$ into $\mathbb{R}$, and let $\{B_k\}_{k=1}^{M}$ be a family of $\beta_k$-inverse-strongly monotone mappings from $C$ into $H$. For $F_k$ and $\varphi_k$, $k = 1, 2, \ldots, M$, assume that either (B1) or (B2) holds. Let $T: C \to H$ be the mapping defined by

$$T(x) = T_{r_M}^{(F_M,\varphi_M)}(I - r_M B_M)\, T_{r_{M-1}}^{(F_{M-1},\varphi_{M-1})}(I - r_{M-1} B_{M-1}) \cdots T_{r_1}^{(F_1,\varphi_1)}(I - r_1 B_1)x, \quad x \in C.$$

Put $\Theta^0 = I$, where $I$ is the identity mapping, and

$$\Theta^k = T_{r_k}^{(F_k,\varphi_k)}(I - r_k B_k)\, T_{r_{k-1}}^{(F_{k-1},\varphi_{k-1})}(I - r_{k-1} B_{k-1}) \cdots T_{r_1}^{(F_1,\varphi_1)}(I - r_1 B_1), \quad k = 1, 2, \ldots, M.$$

If $x \in \bigcap_{k=1}^{M}\operatorname{GMEP}(F_k, \varphi_k, B_k)$ and $0 < r_k \le 2\beta_k$, $k = 1, 2, \ldots, M$, then

(1) $\Theta^k x = x$, $k = 1, 2, \ldots, M$;

(2) $T$ is nonexpansive.

Proof (1) Since $\{B_k\}_{k=1}^{M}$ is a family of $\beta_k$-inverse-strongly monotone mappings from $C$ into $H$, they are continuous monotone mappings. Observe that

$$\begin{aligned} T_{r_k}^{(F_k,\varphi_k)}(I - r_k B_k)x &= \Bigl\{ z \in C : F_k(z, y) + \varphi_k(y) - \varphi_k(z) + \frac{1}{r_k}\bigl\langle y - z,\, z - (I - r_k B_k)x \bigr\rangle \ge 0,\ \forall y \in C \Bigr\}\\ &= \Bigl\{ z \in C : F_k(z, y) + \varphi_k(y) - \varphi_k(z) + \langle B_k x, y - z \rangle + \frac{1}{r_k}\langle y - z, z - x \rangle \ge 0,\ \forall y \in C \Bigr\}\\ &= K_{r_k}^{(F_k,\varphi_k)}(x). \end{aligned}$$

By Lemma 2.5, we know that if $x \in \bigcap_{k=1}^{M}\operatorname{GMEP}(F_k, \varphi_k, B_k)$, then $x$ is a fixed point of the mapping $K_{r_k}^{(F_k,\varphi_k)}$, $k = 1, 2, \ldots, M$, so we have

$$x = K_{r_k}^{(F_k,\varphi_k)}(x) = T_{r_k}^{(F_k,\varphi_k)}(I - r_k B_k)x, \tag{2.6}$$

which implies that $x$ is a fixed point of the mapping $T_{r_k}^{(F_k,\varphi_k)}(I - r_k B_k)$. Therefore we get

$$\Theta^k x = x, \quad k = 1, 2, \ldots, M.$$

(2) Since $T_r^{(F,\varphi)}$ is firmly nonexpansive, it is obviously nonexpansive. From Lemma 2.2, we have

$$\begin{aligned} \|T(x) - T(y)\| &= \|\Theta^M x - \Theta^M y\|\\ &= \bigl\|T_{r_M}^{(F_M,\varphi_M)}(I - r_M B_M)\Theta^{M-1}x - T_{r_M}^{(F_M,\varphi_M)}(I - r_M B_M)\Theta^{M-1}y\bigr\|\\ &\le \bigl\|(I - r_M B_M)\Theta^{M-1}x - (I - r_M B_M)\Theta^{M-1}y\bigr\|\\ &\le \|\Theta^{M-1}x - \Theta^{M-1}y\|\\ &\le \cdots \le \|\Theta^{0}x - \Theta^{0}y\| = \|x - y\|, \end{aligned}$$

which implies $T$ is nonexpansive. □

Lemma 2.7 [1]

Let $C$ be a nonempty closed convex subset of $H$. Let $A_i$ be $\alpha_i$-inverse-strongly monotone mappings from $C$ into $H$, where $i \in \{1, 2, \ldots, N\}$. Let $G: C \to C$ be the mapping defined by

$$G(x) = P_C(I - \lambda_N A_N) P_C(I - \lambda_{N-1} A_{N-1}) \cdots P_C(I - \lambda_2 A_2) P_C(I - \lambda_1 A_1)x, \quad x \in C.$$

If $0 < \lambda_i \le 2\alpha_i$, $i = 1, 2, \ldots, N$, then $G$ is nonexpansive.

Proof Put $\Omega^i = P_C(I - \lambda_i A_i) P_C(I - \lambda_{i-1} A_{i-1}) \cdots P_C(I - \lambda_1 A_1)$, $i = 1, 2, \ldots, N$, and $\Omega^0 = I$, where $I$ is the identity mapping. Since $P_C$ is nonexpansive, by Lemma 2.2 we have

$$\begin{aligned} \|G(x) - G(y)\| &= \|\Omega^N x - \Omega^N y\|\\ &= \bigl\|P_C(I - \lambda_N A_N)\Omega^{N-1}x - P_C(I - \lambda_N A_N)\Omega^{N-1}y\bigr\|\\ &\le \bigl\|(I - \lambda_N A_N)\Omega^{N-1}x - (I - \lambda_N A_N)\Omega^{N-1}y\bigr\|\\ &\le \|\Omega^{N-1}x - \Omega^{N-1}y\|\\ &\le \cdots \le \|\Omega^{0}x - \Omega^{0}y\| = \|x - y\|, \end{aligned}$$

which implies that $G$ is nonexpansive. □

Lemma 2.8 Let $C$ be a nonempty closed convex subset of $H$. Let $A_i: C \to H$ be a nonlinear mapping, where $i = 1, 2, \ldots, N$. For given $x_i^{*} \in C$, $i = 1, 2, \ldots, N$, the tuple $(x_1^{*}, x_2^{*}, \ldots, x_N^{*})$ is a solution of problem (1.10) if and only if

$$x_1^{*} = P_C(I - \lambda_N A_N)x_N^{*}, \qquad x_i^{*} = P_C(I - \lambda_{i-1} A_{i-1})x_{i-1}^{*}, \quad i = 2, 3, \ldots, N, \tag{2.7}$$

that is,

$$x_1^{*} = P_C(I - \lambda_N A_N) P_C(I - \lambda_{N-1} A_{N-1}) \cdots P_C(I - \lambda_2 A_2) P_C(I - \lambda_1 A_1)x_1^{*}.$$

Proof ($\Leftarrow$) From Lemma 2.3(3), it is obvious that any $(x_1^{*}, x_2^{*}, \ldots, x_N^{*})$ satisfying (2.7) is a solution of problem (1.10).

($\Rightarrow$) Since

$$\begin{aligned} &\langle \lambda_N A_N x_N^{*} + x_1^{*} - x_N^{*},\, x - x_1^{*} \rangle \ge 0, \quad \forall x \in C\\ \Leftrightarrow\;& \langle x_1^{*} - (I - \lambda_N A_N)x_N^{*},\, x - x_1^{*} \rangle \ge 0, \quad \forall x \in C\\ \Leftrightarrow\;& \bigl\langle (I - \lambda_N A_N)x_N^{*} - x_1^{*},\, (I - \lambda_N A_N)x_N^{*} - x_1^{*} - (I - \lambda_N A_N)x_N^{*} + x \bigr\rangle \le 0, \quad \forall x \in C\\ \Leftrightarrow\;& \bigl\|(I - \lambda_N A_N)x_N^{*} - x_1^{*}\bigr\|^2 \le \bigl\langle (I - \lambda_N A_N)x_N^{*} - x_1^{*},\, (I - \lambda_N A_N)x_N^{*} - x \bigr\rangle, \quad \forall x \in C\\ \Rightarrow\;& \bigl\|(I - \lambda_N A_N)x_N^{*} - x_1^{*}\bigr\| \le \bigl\|(I - \lambda_N A_N)x_N^{*} - x\bigr\|, \quad \forall x \in C\\ \Leftrightarrow\;& x_1^{*} = P_C(I - \lambda_N A_N)x_N^{*}. \end{aligned}$$

Similarly, we get

$$x_i^{*} = P_C(I - \lambda_{i-1} A_{i-1})x_{i-1}^{*}, \quad i = 2, 3, \ldots, N.$$

Therefore we have

$$x_1^{*} = P_C(I - \lambda_N A_N) P_C(I - \lambda_{N-1} A_{N-1}) \cdots P_C(I - \lambda_2 A_2) P_C(I - \lambda_1 A_1)x_1^{*},$$

which completes the proof. □

From Lemma 2.8, we know that $x_1^{*} = G(x_1^{*})$, that is, $x_1^{*}$ is a fixed point of the mapping $G$ defined in Lemma 2.7. Moreover, once the fixed point $x_1^{*}$ has been found, the remaining components $x_2^{*}, \ldots, x_N^{*}$ are easily obtained from (2.7).
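To illustrate this remark (the example is not from the paper; the set $C$, the mappings $A_i$ and the step sizes are assumptions chosen only for illustration), take $N = 2$, $C$ the closed unit ball of $\mathbb{R}^2$, and $A_i$ the gradients of convex quadratics, so each $A_i$ is $1$-inverse-strongly monotone. Then $G$ is nonexpansive by Lemma 2.7, a Krasnoselskii-Mann iteration $x_{n+1} = \frac{1}{2}x_n + \frac{1}{2}G(x_n)$ approximates a fixed point $x_1^{*}$, and $x_2^{*}$ is recovered from (2.7).

```python
import numpy as np

def P_C(x):                                     # projection onto the closed unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

A1 = lambda x: x - np.array([2.0, 0.0])         # A1 = grad of (1/2)||x - (2,0)||^2
A2 = lambda x: x - np.array([0.0, 2.0])         # A2 = grad of (1/2)||x - (0,2)||^2
lam1 = lam2 = 0.5                               # 0 < lam_i <= 2*alpha_i (alpha_i = 1 here)

def G(x):
    y = P_C(x - lam1 * A1(x))                   # P_C(I - lam1*A1)
    return P_C(y - lam2 * A2(y))                # P_C(I - lam2*A2)

x = np.zeros(2)
for _ in range(2000):                           # Krasnoselskii-Mann (averaged) iteration
    x = 0.5 * x + 0.5 * G(x)

x1_star = x
x2_star = P_C(x1_star - lam1 * A1(x1_star))     # recover x2* via (2.7)
print(x1_star, x2_star, np.linalg.norm(x1_star - G(x1_star)))
```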

Lemma 2.9 [31]

Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that $x_{n+1} = \beta_n x_n + (1 - \beta_n)y_n$ for all integers $n \ge 0$ and $\limsup_{n\to\infty}\bigl(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|\bigr) \le 0$. Then $\lim_{n\to\infty}\|y_n - x_n\| = 0$.

Lemma 2.10 [30]

Let $T: C \to C$ be a nonexpansive mapping with $F(T) \neq \emptyset$. If $x_n \rightharpoonup x^{*}$ and $(I - T)x_n \to y^{*}$, then $(I - T)x^{*} = y^{*}$.

Lemma 2.11 [30]

Assume that $\{\alpha_n\}$ is a sequence of nonnegative real numbers such that

$$\alpha_{n+1} \le (1 - \gamma_n)\alpha_n + \delta_n, \quad n \ge 1,$$

where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that

(1) $\sum_{n=1}^{\infty}\gamma_n = \infty$;

(2) $\limsup_{n\to\infty}\frac{\delta_n}{\gamma_n} \le 0$ or $\sum_{n=1}^{\infty}|\delta_n| < \infty$.

Then $\lim_{n\to\infty}\alpha_n = 0$.
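A quick numerical illustration of Lemma 2.11 (not from the paper; the particular choices of $\gamma_n$ and $\delta_n$ are assumptions): with $\gamma_n = 1/(n+1)$, so that $\sum_n \gamma_n = \infty$, and $\delta_n = \gamma_n/(n+1)$, so that $\delta_n/\gamma_n \to 0$, any nonnegative sequence obeying the recursion is driven to zero.

```python
alpha = 1.0
for n in range(1, 200001):
    gamma = 1.0 / (n + 1)                 # sum of gamma_n diverges
    delta = gamma / (n + 1)               # delta_n / gamma_n -> 0
    alpha = (1 - gamma) * alpha + delta   # worst case of alpha_{n+1} <= (1 - gamma_n)*alpha_n + delta_n
print(alpha)                              # close to 0, and -> 0 as the number of iterations grows
```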

3 Main results

In this section, we state and verify our main results. We have the following theorem.

Theorem 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\{F_k\}_{k=1}^{M}$ be a family of bifunctions from $C \times C$ into $\mathbb{R}$ satisfying (A1)-(A4), let $\{\varphi_k\}_{k=1}^{M}: C \to \mathbb{R}$ be a family of lower semicontinuous and convex functions, and let $\{B_k\}_{k=1}^{M}$ be a family of $\beta_k$-inverse-strongly monotone mappings from $C$ into $H$. Let $A_i$ be $\alpha_i$-inverse-strongly monotone mappings from $C$ into $H$, where $i \in \{1, 2, \ldots, N\}$. Let $S$ be a $\delta$-strictly pseudocontractive mapping from $C$ into itself such that $F = [\bigcap_{k=1}^{M}\operatorname{GMEP}(F_k, \varphi_k, B_k)] \cap F(G) \cap F(S) \neq \emptyset$, where $G$ is defined by Lemma 2.7. For $F_k$ and $\varphi_k$, $k = 1, 2, \ldots, M$, assume that either (B1) or (B2) holds. Pick any $x_0 \in C$ and let $\{x_n\} \subseteq C$ be the sequence generated by

$$\begin{cases} z_n = T_{r_{M,n}}^{(F_M,\varphi_M)}(I - r_{M,n} B_M)\, T_{r_{M-1,n}}^{(F_{M-1},\varphi_{M-1})}(I - r_{M-1,n} B_{M-1}) \cdots T_{r_{1,n}}^{(F_1,\varphi_1)}(I - r_{1,n} B_1) x_n,\\ y_n = P_C(I - \lambda_N A_N) P_C(I - \lambda_{N-1} A_{N-1}) \cdots P_C(I - \lambda_2 A_2) P_C(I - \lambda_1 A_1) z_n,\\ x_{n+1} = a_n x_0 + b_n x_n + c_n y_n + d_n S y_n, \quad n \ge 0, \end{cases} \tag{3.1}$$

where $\lambda_i \in (0, 2\alpha_i)$, $i = 1, 2, \ldots, N$, $\delta \in (0, 1)$, and $\{a_n\}, \{b_n\}, \{c_n\}, \{d_n\} \subseteq [0, 1]$ satisfy the following conditions:

(i) $a_n + b_n + c_n + d_n = 1$ and $(c_n + d_n)\delta \le c_n$ for all $n \ge 0$;

(ii) $\lim_{n\to\infty} a_n = 0$ and $\sum_{n=0}^{\infty} a_n = \infty$;

(iii) $0 < \liminf_{n\to\infty} b_n \le \limsup_{n\to\infty} b_n < 1$ and $\liminf_{n\to\infty} d_n > 0$;

(iv) $\lim_{n\to\infty}\bigl(\frac{c_{n+1}}{1 - b_{n+1}} - \frac{c_n}{1 - b_n}\bigr) = 0$;

(v) $0 < \liminf_{n\to\infty} r_{k,n} \le \limsup_{n\to\infty} r_{k,n} < 2\beta_k$, $k = 1, 2, \ldots, M$.

Then $\{x_n\} \subseteq C$ converges strongly to $P_{F} x_0$.
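Before the proof, the following Python sketch runs a toy instance of scheme (3.1). It is not part of the paper: it assumes $H = \mathbb{R}^2$, $M = N = 1$, $F_1 \equiv 0$ and $\varphi_1 \equiv 0$ (so that the resolvent $T_{r_{1,n}}^{(F_1,\varphi_1)}$ reduces to $P_C$), and it takes $B_1$, $A_1$, $S$ and the parameter sequences as illustrative choices for which the common solution is the point $p$ below; the iterates are then expected to approach $P_F x_0 = p$ in accordance with the theorem.

```python
import numpy as np

p = np.array([1.0, 0.0])                         # common solution of all three problems here
def P_C(x):                                      # projection onto the ball of radius 2
    n = np.linalg.norm(x)
    return x if n <= 2.0 else 2.0 * x / n

B1 = lambda x: x - p                             # 1-inverse-strongly monotone, VI solution p
A1 = lambda x: x - p                             # 1-inverse-strongly monotone, VI solution p
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],   # rotation about p: S is nonexpansive,
              [np.sin(theta),  np.cos(theta)]])  # hence delta-strictly pseudocontractive
S = lambda x: p + R @ (x - p)                    # with F(S) = {p}; take delta = 0.1

r1, lam1 = 0.5, 0.5                              # r_{1,n} in (0, 2*beta_1), lam_1 in (0, 2*alpha_1)
x0 = np.array([0.0, 1.5])
x = x0.copy()
for n in range(5000):
    a_n, b_n = 1.0 / (n + 2), 0.5                # a_n -> 0, sum a_n = inf, b_n away from 0 and 1
    c_n = d_n = 0.5 * (1.0 - a_n - b_n)          # (c_n + d_n)*delta <= c_n holds for delta = 0.1
    z = P_C(x - r1 * B1(x))                      # with F_1 = 0, phi_1 = 0 the resolvent is P_C
    y = P_C(z - lam1 * A1(z))
    x = a_n * x0 + b_n * x + c_n * y + d_n * S(y)

print(x, np.linalg.norm(x - p))                  # x_n approaches P_F x0 = p
```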

Proof Putting

$$\Theta_n^k = T_{r_{k,n}}^{(F_k,\varphi_k)}(I - r_{k,n} B_k)\, T_{r_{k-1,n}}^{(F_{k-1},\varphi_{k-1})}(I - r_{k-1,n} B_{k-1}) \cdots T_{r_{1,n}}^{(F_1,\varphi_1)}(I - r_{1,n} B_1), \quad k \in \{1, \ldots, M\},\ n \in \mathbb{N},$$

and

$$\Omega^i = P_C(I - \lambda_i A_i) P_C(I - \lambda_{i-1} A_{i-1}) \cdots P_C(I - \lambda_2 A_2) P_C(I - \lambda_1 A_1), \quad i \in \{1, 2, \ldots, N\},$$

with $\Theta_n^0 = \Omega^0 = I$, where $I$ is the identity mapping on $H$, we have $z_n = \Theta_n^M x_n$ and $y_n = \Omega^N z_n$. From Lemma 2.6 and Lemma 2.7, it can easily be seen that $\Theta_n^k$ and $\Omega^i$ are nonexpansive, where $k \in \{1, 2, \ldots, M\}$, $i \in \{1, 2, \ldots, N\}$. We divide the proof into six steps.

Step 1. Firstly, we show that { x n } is bounded.

Indeed, take $p \in F$ arbitrarily. Then $p = \Theta_n^k p = Sp$ for all $k \in \{1, \ldots, M\}$ and $n \in \mathbb{N}$. By Lemma 2.6, we have

$$\|z_n - p\| = \|\Theta_n^M x_n - \Theta_n^M p\| \le \|x_n - p\|. \tag{3.2}$$

It follows from Lemma 2.7 and (3.2) that

$$\|y_n - p\| = \|\Omega^N z_n - \Omega^N p\| \le \|z_n - p\| \le \|x_n - p\|. \tag{3.3}$$

Furthermore, from (3.1), we have

$$\begin{aligned} \|x_{n+1} - p\| &= \|a_n x_0 + b_n x_n + c_n y_n + d_n S y_n - p\|\\ &= \bigl\|a_n(x_0 - p) + b_n(x_n - p) + c_n(y_n - p) + d_n(S y_n - p)\bigr\|\\ &\le a_n\|x_0 - p\| + b_n\|x_n - p\| + \bigl\|c_n(y_n - p) + d_n(S y_n - p)\bigr\|. \end{aligned} \tag{3.4}$$

Since $(c_n + d_n)\delta \le c_n$, it follows from (2.3) and (2.4) that

$$\begin{aligned} \bigl\|c_n(y_n - p) + d_n(S y_n - p)\bigr\|^2 &= c_n^2\|y_n - p\|^2 + d_n^2\|S y_n - p\|^2 + 2 c_n d_n\langle S y_n - p, y_n - p\rangle\\ &\le c_n^2\|y_n - p\|^2 + d_n^2\bigl[\|y_n - p\|^2 + \delta\|y_n - S y_n\|^2\bigr]\\ &\quad + 2 c_n d_n\Bigl[\|y_n - p\|^2 - \frac{1 - \delta}{2}\|y_n - S y_n\|^2\Bigr]\\ &= (c_n + d_n)^2\|y_n - p\|^2 + \bigl[d_n^2\delta - (1 - \delta)c_n d_n\bigr]\|y_n - S y_n\|^2\\ &= (c_n + d_n)^2\|y_n - p\|^2 + d_n\bigl[(c_n + d_n)\delta - c_n\bigr]\|y_n - S y_n\|^2\\ &\le (c_n + d_n)^2\|y_n - p\|^2, \end{aligned}$$

which implies that

$$\bigl\|c_n(y_n - p) + d_n(S y_n - p)\bigr\| \le (c_n + d_n)\|y_n - p\|. \tag{3.5}$$

From (3.2)-(3.5) it follows that

$$\begin{aligned} \|x_{n+1} - p\| &\le a_n\|x_0 - p\| + b_n\|x_n - p\| + \bigl\|c_n(y_n - p) + d_n(S y_n - p)\bigr\|\\ &\le a_n\|x_0 - p\| + b_n\|x_n - p\| + (c_n + d_n)\|y_n - p\|\\ &\le a_n\|x_0 - p\| + b_n\|x_n - p\| + (c_n + d_n)\|x_n - p\|\\ &= a_n\|x_0 - p\| + (1 - a_n)\|x_n - p\|. \end{aligned}$$

So, we have

$$\|x_{n+1} - p\| \le \max\bigl\{\|x_0 - p\|, \|x_n - p\|\bigr\}, \quad \forall n \ge 0.$$

By induction, we obtain

$$\|x_n - p\| \le \|x_0 - p\|, \quad \forall n \ge 0.$$

Hence $\{x_n\}$ is bounded. Consequently, we deduce immediately that $\{z_n\}$, $\{y_n\}$ and $\{S y_n\}$ are bounded.

Step 2. Next, we prove that $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$.

Indeed, define $x_{n+1} = b_n x_n + (1 - b_n)w_n$ for all $n \ge 0$. It follows that

$$\begin{aligned} w_{n+1} - w_n &= \frac{x_{n+2} - b_{n+1}x_{n+1}}{1 - b_{n+1}} - \frac{x_{n+1} - b_n x_n}{1 - b_n}\\ &= \frac{a_{n+1}x_0 + c_{n+1}y_{n+1} + d_{n+1}S y_{n+1}}{1 - b_{n+1}} - \frac{a_n x_0 + c_n y_n + d_n S y_n}{1 - b_n}\\ &= \frac{a_{n+1}x_0}{1 - b_{n+1}} - \frac{a_n x_0}{1 - b_n} + \frac{c_{n+1}(y_{n+1} - y_n) + d_{n+1}(S y_{n+1} - S y_n)}{1 - b_{n+1}}\\ &\quad + \Bigl(\frac{c_{n+1}}{1 - b_{n+1}} - \frac{c_n}{1 - b_n}\Bigr)y_n + \Bigl(\frac{d_{n+1}}{1 - b_{n+1}} - \frac{d_n}{1 - b_n}\Bigr)S y_n. \end{aligned} \tag{3.6}$$

Observe that

$$\begin{aligned} &\bigl\|c_{n+1}(y_{n+1} - y_n) + d_{n+1}(S y_{n+1} - S y_n)\bigr\|^2\\ &\quad = c_{n+1}^2\|y_{n+1} - y_n\|^2 + d_{n+1}^2\|S y_{n+1} - S y_n\|^2 + 2 c_{n+1} d_{n+1}\langle S y_{n+1} - S y_n, y_{n+1} - y_n\rangle\\ &\quad \le c_{n+1}^2\|y_{n+1} - y_n\|^2 + d_{n+1}^2\bigl[\|y_{n+1} - y_n\|^2 + \delta\|(y_{n+1} - S y_{n+1}) - (y_n - S y_n)\|^2\bigr]\\ &\qquad + 2 c_{n+1} d_{n+1}\Bigl[\|y_{n+1} - y_n\|^2 - \frac{1 - \delta}{2}\|(y_{n+1} - S y_{n+1}) - (y_n - S y_n)\|^2\Bigr]\\ &\quad = (c_{n+1} + d_{n+1})^2\|y_{n+1} - y_n\|^2 + \bigl[d_{n+1}^2\delta - (1 - \delta)c_{n+1}d_{n+1}\bigr]\|(y_{n+1} - S y_{n+1}) - (y_n - S y_n)\|^2\\ &\quad = (c_{n+1} + d_{n+1})^2\|y_{n+1} - y_n\|^2 + d_{n+1}\bigl[(c_{n+1} + d_{n+1})\delta - c_{n+1}\bigr]\|(y_{n+1} - S y_{n+1}) - (y_n - S y_n)\|^2\\ &\quad \le (c_{n+1} + d_{n+1})^2\|y_{n+1} - y_n\|^2, \end{aligned}$$

which implies that

$$\bigl\|c_{n+1}(y_{n+1} - y_n) + d_{n+1}(S y_{n+1} - S y_n)\bigr\| \le (c_{n+1} + d_{n+1})\|y_{n+1} - y_n\|. \tag{3.7}$$

Since

$$\|z_{n+1} - z_n\| = \|\Theta_n^M x_{n+1} - \Theta_n^M x_n\| \le \|x_{n+1} - x_n\|, \tag{3.8}$$

then we have

$$\|y_{n+1} - y_n\| = \|\Omega^N z_{n+1} - \Omega^N z_n\| \le \|z_{n+1} - z_n\| \le \|x_{n+1} - x_n\|. \tag{3.9}$$

Hence it follows from (3.6), (3.7), (3.9) and $\frac{c_{n+1} + d_{n+1}}{1 - b_{n+1}} = \frac{1 - a_{n+1} - b_{n+1}}{1 - b_{n+1}} < 1$ that

$$\begin{aligned} \|w_{n+1} - w_n\| &\le \Bigl(\frac{a_{n+1}}{1 - b_{n+1}} + \frac{a_n}{1 - b_n}\Bigr)\|x_0\| + \frac{\bigl\|c_{n+1}(y_{n+1} - y_n) + d_{n+1}(S y_{n+1} - S y_n)\bigr\|}{1 - b_{n+1}}\\ &\quad + \Bigl|\frac{c_{n+1}}{1 - b_{n+1}} - \frac{c_n}{1 - b_n}\Bigr|\|y_n\| + \Bigl|\frac{d_{n+1}}{1 - b_{n+1}} - \frac{d_n}{1 - b_n}\Bigr|\|S y_n\|\\ &\le \Bigl(\frac{a_{n+1}}{1 - b_{n+1}} + \frac{a_n}{1 - b_n}\Bigr)\|x_0\| + \frac{c_{n+1} + d_{n+1}}{1 - b_{n+1}}\|x_{n+1} - x_n\|\\ &\quad + \Bigl|\frac{c_{n+1}}{1 - b_{n+1}} - \frac{c_n}{1 - b_n}\Bigr|\|y_n\| + \Bigl|\frac{a_{n+1} + c_{n+1}}{1 - b_{n+1}} - \frac{a_n + c_n}{1 - b_n}\Bigr|\|S y_n\|\\ &\le \frac{a_{n+1}}{1 - b_{n+1}}\bigl(\|x_0\| + \|S y_n\|\bigr) + \frac{a_n}{1 - b_n}\bigl(\|x_0\| + \|S y_n\|\bigr) + \|x_{n+1} - x_n\|\\ &\quad + \Bigl|\frac{c_{n+1}}{1 - b_{n+1}} - \frac{c_n}{1 - b_n}\Bigr|\bigl(\|y_n\| + \|S y_n\|\bigr). \end{aligned} \tag{3.10}$$

Consequently, it follows from (3.10), conditions (ii) and (iv), and the boundedness of $\{y_n\}$ and $\{S y_n\}$ that

$$\limsup_{n\to\infty}\bigl(\|w_{n+1} - w_n\| - \|x_{n+1} - x_n\|\bigr) \le \limsup_{n\to\infty}\Bigl\{\frac{a_{n+1}}{1 - b_{n+1}}\bigl(\|x_0\| + \|S y_n\|\bigr) + \frac{a_n}{1 - b_n}\bigl(\|x_0\| + \|S y_n\|\bigr) + \Bigl|\frac{c_{n+1}}{1 - b_{n+1}} - \frac{c_n}{1 - b_n}\Bigr|\bigl(\|y_n\| + \|S y_n\|\bigr)\Bigr\} = 0.$$

Hence, by Lemma 2.9, we get $\lim_{n\to\infty}\|w_n - x_n\| = 0$. Thus, from condition (iii), we have

$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = \lim_{n\to\infty}(1 - b_n)\|w_n - x_n\| = 0. \tag{3.11}$$

Step 3. We show that $\lim_{n\to\infty}\|B_k\Theta_n^{k-1}x_n - B_k p\| = 0$, $k = 1, 2, \ldots, M$, and $\lim_{n\to\infty}\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\| = 0$, $i = 1, 2, \ldots, N$.

It follows from Lemma 2.6 that

$$\begin{aligned} \|z_n - p\|^2 &= \|\Theta_n^M x_n - \Theta_n^M p\|^2 \le \|\Theta_n^k x_n - \Theta_n^k p\|^2\\ &= \bigl\|T_{r_{k,n}}^{(F_k,\varphi_k)}(I - r_{k,n}B_k)\Theta_n^{k-1}x_n - T_{r_{k,n}}^{(F_k,\varphi_k)}(I - r_{k,n}B_k)\Theta_n^{k-1}p\bigr\|^2\\ &\le \bigl\|(I - r_{k,n}B_k)\Theta_n^{k-1}x_n - (I - r_{k,n}B_k)\Theta_n^{k-1}p\bigr\|^2\\ &\le \|\Theta_n^{k-1}x_n - p\|^2 + r_{k,n}(r_{k,n} - 2\beta_k)\|B_k\Theta_n^{k-1}x_n - B_k p\|^2\\ &\le \|x_n - p\|^2 + r_{k,n}(r_{k,n} - 2\beta_k)\|B_k\Theta_n^{k-1}x_n - B_k p\|^2. \end{aligned} \tag{3.12}$$

By Lemma 2.3 and Lemma 2.2, we have

$$\begin{aligned} \|y_n - p\|^2 &= \bigl\|P_C(I - \lambda_N A_N)\Omega^{N-1}z_n - P_C(I - \lambda_N A_N)\Omega^{N-1}p\bigr\|^2\\ &\le \bigl\|(I - \lambda_N A_N)\Omega^{N-1}z_n - (I - \lambda_N A_N)\Omega^{N-1}p\bigr\|^2\\ &\le \|\Omega^{N-1}z_n - \Omega^{N-1}p\|^2 + \lambda_N(\lambda_N - 2\alpha_N)\|A_N\Omega^{N-1}z_n - A_N\Omega^{N-1}p\|^2. \end{aligned}$$

By induction, we get

$$\|y_n - p\|^2 \le \|z_n - p\|^2 + \sum_{i=1}^{N}\lambda_i(\lambda_i - 2\alpha_i)\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|^2. \tag{3.13}$$

From condition (i) and (3.5), we get

$$\begin{aligned} \|x_{n+1} - p\|^2 &= \bigl\langle a_n(x_0 - p) + b_n(x_n - p) + c_n(y_n - p) + d_n(S y_n - p),\, x_{n+1} - p\bigr\rangle\\ &= a_n\langle x_0 - p, x_{n+1} - p\rangle + b_n\langle x_n - p, x_{n+1} - p\rangle + \bigl\langle c_n(y_n - p) + d_n(S y_n - p),\, x_{n+1} - p\bigr\rangle\\ &\le a_n\langle x_0 - p, x_{n+1} - p\rangle + b_n\|x_n - p\|\|x_{n+1} - p\| + \bigl\|c_n(y_n - p) + d_n(S y_n - p)\bigr\|\|x_{n+1} - p\|\\ &\le a_n\langle x_0 - p, x_{n+1} - p\rangle + b_n\|x_n - p\|\|x_{n+1} - p\| + (c_n + d_n)\|y_n - p\|\|x_{n+1} - p\|\\ &\le a_n\langle x_0 - p, x_{n+1} - p\rangle + \frac{b_n}{2}\bigl(\|x_n - p\|^2 + \|x_{n+1} - p\|^2\bigr) + \frac{c_n + d_n}{2}\bigl(\|y_n - p\|^2 + \|x_{n+1} - p\|^2\bigr), \end{aligned}$$

that is,

$$\|x_{n+1} - p\|^2 \le \frac{2a_n}{1 + a_n}\langle x_0 - p, x_{n+1} - p\rangle + \frac{b_n}{1 + a_n}\|x_n - p\|^2 + \frac{c_n + d_n}{1 + a_n}\|y_n - p\|^2. \tag{3.14}$$

So, in terms of (3.12) and (3.13), we have

$$\begin{aligned} \|x_{n+1} - p\|^2 &\le \frac{2a_n}{1 + a_n}\|x_0 - p\|\|x_{n+1} - p\| + \frac{b_n}{1 + a_n}\|x_n - p\|^2\\ &\quad + \frac{c_n + d_n}{1 + a_n}\Bigl\{\|z_n - p\|^2 + \sum_{i=1}^{N}\lambda_i(\lambda_i - 2\alpha_i)\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|^2\Bigr\}\\ &\le \frac{2a_n}{1 + a_n}\|x_0 - p\|\|x_{n+1} - p\| + \frac{b_n}{1 + a_n}\|x_n - p\|^2\\ &\quad + \frac{c_n + d_n}{1 + a_n}\Bigl\{\|x_n - p\|^2 + r_{k,n}(r_{k,n} - 2\beta_k)\|B_k\Theta_n^{k-1}x_n - B_k p\|^2 + \sum_{i=1}^{N}\lambda_i(\lambda_i - 2\alpha_i)\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|^2\Bigr\}\\ &\le \frac{2a_n}{1 + a_n}\|x_0 - p\|\|x_{n+1} - p\| + \frac{1 - a_n}{1 + a_n}\|x_n - p\|^2\\ &\quad + \frac{c_n + d_n}{1 + a_n}\Bigl\{r_{k,n}(r_{k,n} - 2\beta_k)\|B_k\Theta_n^{k-1}x_n - B_k p\|^2 + \sum_{i=1}^{N}\lambda_i(\lambda_i - 2\alpha_i)\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|^2\Bigr\}. \end{aligned}$$

Therefore,

$$\begin{aligned} &r_{k,n}(2\beta_k - r_{k,n})\|B_k\Theta_n^{k-1}x_n - B_k p\|^2 + \sum_{i=1}^{N}\lambda_i(2\alpha_i - \lambda_i)\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|^2\\ &\quad \le \frac{2a_n}{c_n + d_n}\|x_0 - p\|\|x_{n+1} - p\| + \frac{1 - a_n}{c_n + d_n}\bigl(\|x_n - p\|^2 - \|x_{n+1} - p\|^2\bigr)\\ &\quad \le \frac{2a_n}{c_n + d_n}\|x_0 - p\|\|x_{n+1} - p\| + \frac{1 - a_n}{c_n + d_n}\|x_{n+1} - x_n\|\bigl(\|x_{n+1} - p\| + \|x_n - p\|\bigr). \end{aligned}$$

Since $\lim_{n\to\infty}a_n = 0$, $0 < \liminf_{n\to\infty}r_{k,n} \le \limsup_{n\to\infty}r_{k,n} < 2\beta_k$, $k = 1, 2, \ldots, M$, $\lambda_i \in (0, 2\alpha_i)$, $i = 1, 2, \ldots, N$, $\delta \in (0, 1)$, $\liminf_{n\to\infty}(c_n + d_n) > 0$ and $\{x_n\}$ is bounded, we have

$$\lim_{n\to\infty}\|B_k\Theta_n^{k-1}x_n - B_k p\| = 0, \quad k = 1, 2, \ldots, M, \tag{3.15}$$

and

$$\lim_{n\to\infty}\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\| = 0, \quad i = 1, 2, \ldots, N. \tag{3.16}$$

Step 4. We prove that $\lim_{n\to\infty}\|S y_n - y_n\| = 0$.

Indeed, utilizing the firm nonexpansivity of $T_{r_{k,n}}^{(F_k,\varphi_k)}$ and Lemma 2.2, we have

$$\begin{aligned} \|\Theta_n^k x_n - \Theta_n^k p\|^2 &= \bigl\|T_{r_{k,n}}^{(F_k,\varphi_k)}(I - r_{k,n}B_k)\Theta_n^{k-1}x_n - T_{r_{k,n}}^{(F_k,\varphi_k)}(I - r_{k,n}B_k)p\bigr\|^2\\ &\le \bigl\langle (I - r_{k,n}B_k)\Theta_n^{k-1}x_n - (I - r_{k,n}B_k)p,\, \Theta_n^k x_n - p\bigr\rangle\\ &= \frac{1}{2}\Bigl(\bigl\|(I - r_{k,n}B_k)\Theta_n^{k-1}x_n - (I - r_{k,n}B_k)p\bigr\|^2 + \|\Theta_n^k x_n - p\|^2\\ &\qquad - \bigl\|(I - r_{k,n}B_k)\Theta_n^{k-1}x_n - (I - r_{k,n}B_k)p - (\Theta_n^k x_n - p)\bigr\|^2\Bigr)\\ &\le \frac{1}{2}\Bigl(\|\Theta_n^{k-1}x_n - p\|^2 + \|\Theta_n^k x_n - p\|^2 - \bigl\|\Theta_n^{k-1}x_n - \Theta_n^k x_n - r_{k,n}(B_k\Theta_n^{k-1}x_n - B_k p)\bigr\|^2\Bigr), \end{aligned}$$

which implies

$$\begin{aligned} \|\Theta_n^k x_n - p\|^2 &\le \|\Theta_n^{k-1}x_n - p\|^2 - \bigl\|\Theta_n^{k-1}x_n - \Theta_n^k x_n - r_{k,n}(B_k\Theta_n^{k-1}x_n - B_k p)\bigr\|^2\\ &= \|\Theta_n^{k-1}x_n - p\|^2 - \|\Theta_n^{k-1}x_n - \Theta_n^k x_n\|^2 - r_{k,n}^2\|B_k\Theta_n^{k-1}x_n - B_k p\|^2\\ &\quad + 2 r_{k,n}\bigl\langle \Theta_n^{k-1}x_n - \Theta_n^k x_n,\, B_k\Theta_n^{k-1}x_n - B_k p\bigr\rangle\\ &\le \|\Theta_n^{k-1}x_n - p\|^2 - \|\Theta_n^{k-1}x_n - \Theta_n^k x_n\|^2 + 2 r_{k,n}\bigl\langle \Theta_n^{k-1}x_n - \Theta_n^k x_n,\, B_k\Theta_n^{k-1}x_n - B_k p\bigr\rangle. \end{aligned} \tag{3.17}$$

From (3.14), (3.3), Lemma 2.6 and (3.17), we have

$$\begin{aligned} \|x_{n+1} - p\|^2 &\le \frac{2a_n}{1 + a_n}\langle x_0 - p, x_{n+1} - p\rangle + \frac{b_n}{1 + a_n}\|x_n - p\|^2 + \frac{c_n + d_n}{1 + a_n}\|z_n - p\|^2\\ &\le \frac{2a_n}{1 + a_n}\langle x_0 - p, x_{n+1} - p\rangle + \frac{b_n}{1 + a_n}\|x_n - p\|^2 + \frac{c_n + d_n}{1 + a_n}\|\Theta_n^k x_n - \Theta_n^k p\|^2\\ &\le \frac{2a_n}{1 + a_n}\langle x_0 - p, x_{n+1} - p\rangle + \frac{b_n}{1 + a_n}\|x_n - p\|^2 + \frac{c_n + d_n}{1 + a_n}\Bigl[\|\Theta_n^{k-1}x_n - p\|^2 - \|\Theta_n^{k-1}x_n - \Theta_n^k x_n\|^2\\ &\qquad + 2 r_{k,n}\bigl\langle \Theta_n^{k-1}x_n - \Theta_n^k x_n,\, B_k\Theta_n^{k-1}x_n - B_k p\bigr\rangle\Bigr]\\ &\le \frac{2a_n}{1 + a_n}\langle x_0 - p, x_{n+1} - p\rangle + \frac{b_n}{1 + a_n}\|x_n - p\|^2 + \frac{c_n + d_n}{1 + a_n}\Bigl[\|x_n - p\|^2 - \|\Theta_n^{k-1}x_n - \Theta_n^k x_n\|^2\\ &\qquad + 2 r_{k,n}\bigl\langle \Theta_n^{k-1}x_n - \Theta_n^k x_n,\, B_k\Theta_n^{k-1}x_n - B_k p\bigr\rangle\Bigr]\\ &\le \frac{2a_n}{1 + a_n}\|x_0 - p\|\|x_{n+1} - p\| + \frac{b_n}{1 + a_n}\|x_n - p\|^2 + \frac{c_n + d_n}{1 + a_n}\Bigl[\|x_n - p\|^2 - \|\Theta_n^{k-1}x_n - \Theta_n^k x_n\|^2\\ &\qquad + 2 r_{k,n}\|\Theta_n^{k-1}x_n - \Theta_n^k x_n\|\|B_k\Theta_n^{k-1}x_n - B_k p\|\Bigr]. \end{aligned}$$

It follows that

$$\begin{aligned} \frac{c_n + d_n}{1 + a_n}\|\Theta_n^{k-1}x_n - \Theta_n^k x_n\|^2 &\le \frac{2a_n}{1 + a_n}\|x_0 - p\|\|x_{n+1} - p\| + \frac{1 - a_n}{1 + a_n}\|x_n - p\|^2 - \|x_{n+1} - p\|^2\\ &\quad + \frac{2 r_{k,n}(c_n + d_n)}{1 + a_n}\|\Theta_n^{k-1}x_n - \Theta_n^k x_n\|\|B_k\Theta_n^{k-1}x_n - B_k p\|\\ &\le \frac{2a_n}{1 + a_n}\|x_0 - p\|\|x_{n+1} - p\| + \|x_n - p\|^2 - \|x_{n+1} - p\|^2\\ &\quad + \frac{2 r_{k,n}(c_n + d_n)}{1 + a_n}\|\Theta_n^{k-1}x_n - \Theta_n^k x_n\|\|B_k\Theta_n^{k-1}x_n - B_k p\|\\ &\le 2 a_n\|x_0 - p\|\|x_{n+1} - p\| + \|x_{n+1} - x_n\|\bigl(\|x_n - p\| + \|x_{n+1} - p\|\bigr)\\ &\quad + \frac{2 r_{k,n}(c_n + d_n)}{1 + a_n}\|\Theta_n^{k-1}x_n - \Theta_n^k x_n\|\|B_k\Theta_n^{k-1}x_n - B_k p\|. \end{aligned}$$

Since $\liminf_{n\to\infty}\frac{c_n + d_n}{1 + a_n} > 0$, $a_n \to 0$, $\|x_{n+1} - x_n\| \to 0$ and $\|B_k\Theta_n^{k-1}x_n - B_k p\| \to 0$, we conclude that

$$\lim_{n\to\infty}\|\Theta_n^k x_n - \Theta_n^{k-1}x_n\| = 0, \quad k = 1, 2, \ldots, M. \tag{3.18}$$

Therefore we get

$$\|x_n - z_n\| = \|\Theta_n^0 x_n - \Theta_n^M x_n\| \le \|\Theta_n^0 x_n - \Theta_n^1 x_n\| + \|\Theta_n^1 x_n - \Theta_n^2 x_n\| + \cdots + \|\Theta_n^{M-1}x_n - \Theta_n^M x_n\| \to 0 \quad \text{as } n \to \infty. \tag{3.19}$$

From Lemma 2.3(1), we obtain

$$\begin{aligned} \|\Omega^N z_n - \Omega^N p\|^2 &= \bigl\|P_C(I - \lambda_N A_N)\Omega^{N-1}z_n - P_C(I - \lambda_N A_N)\Omega^{N-1}p\bigr\|^2\\ &\le \bigl\langle (I - \lambda_N A_N)\Omega^{N-1}z_n - (I - \lambda_N A_N)\Omega^{N-1}p,\, \Omega^N z_n - \Omega^N p\bigr\rangle\\ &= \frac{1}{2}\Bigl(\bigl\|(I - \lambda_N A_N)\Omega^{N-1}z_n - (I - \lambda_N A_N)\Omega^{N-1}p\bigr\|^2 + \|\Omega^N z_n - \Omega^N p\|^2\\ &\qquad - \bigl\|(I - \lambda_N A_N)\Omega^{N-1}z_n - (I - \lambda_N A_N)\Omega^{N-1}p - (\Omega^N z_n - \Omega^N p)\bigr\|^2\Bigr)\\ &\le \frac{1}{2}\Bigl(\|\Omega^{N-1}z_n - \Omega^{N-1}p\|^2 + \|\Omega^N z_n - \Omega^N p\|^2\\ &\qquad - \bigl\|\Omega^{N-1}z_n - \Omega^N z_n + \Omega^N p - \Omega^{N-1}p - \lambda_N(A_N\Omega^{N-1}z_n - A_N\Omega^{N-1}p)\bigr\|^2\Bigr), \end{aligned}$$

which implies that

$$\begin{aligned} \|\Omega^N z_n - \Omega^N p\|^2 &\le \|\Omega^{N-1}z_n - \Omega^{N-1}p\|^2 - \bigl\|\Omega^{N-1}z_n - \Omega^N z_n + \Omega^N p - \Omega^{N-1}p - \lambda_N(A_N\Omega^{N-1}z_n - A_N\Omega^{N-1}p)\bigr\|^2\\ &= \|\Omega^{N-1}z_n - \Omega^{N-1}p\|^2 - \|\Omega^{N-1}z_n - \Omega^N z_n + \Omega^N p - \Omega^{N-1}p\|^2 - \lambda_N^2\|A_N\Omega^{N-1}z_n - A_N\Omega^{N-1}p\|^2\\ &\quad + 2\lambda_N\bigl\langle \Omega^{N-1}z_n - \Omega^N z_n + \Omega^N p - \Omega^{N-1}p,\, A_N\Omega^{N-1}z_n - A_N\Omega^{N-1}p\bigr\rangle\\ &\le \|\Omega^{N-1}z_n - \Omega^{N-1}p\|^2 - \|\Omega^{N-1}z_n - \Omega^N z_n + \Omega^N p - \Omega^{N-1}p\|^2\\ &\quad + 2\lambda_N\|\Omega^{N-1}z_n - \Omega^N z_n + \Omega^N p - \Omega^{N-1}p\|\|A_N\Omega^{N-1}z_n - A_N\Omega^{N-1}p\|. \end{aligned} \tag{3.20}$$

By induction, we have

$$\begin{aligned} \|\Omega^N z_n - \Omega^N p\|^2 &\le \|z_n - p\|^2 - \sum_{i=1}^{N}\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|^2\\ &\quad + 2\sum_{i=1}^{N}\lambda_i\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|\\ &\le \|x_n - p\|^2 - \sum_{i=1}^{N}\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|^2\\ &\quad + 2\sum_{i=1}^{N}\lambda_i\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|, \end{aligned}$$

that is,

$$\begin{aligned} \|y_n - p\|^2 &\le \|x_n - p\|^2 - \sum_{i=1}^{N}\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|^2\\ &\quad + 2\sum_{i=1}^{N}\lambda_i\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|. \end{aligned} \tag{3.21}$$

From (3.14) and (3.21),

$$\begin{aligned} \|x_{n+1} - p\|^2 &\le \frac{2a_n}{1 + a_n}\langle x_0 - p, x_{n+1} - p\rangle + \frac{b_n}{1 + a_n}\|x_n - p\|^2 + \frac{c_n + d_n}{1 + a_n}\Bigl[\|x_n - p\|^2\\ &\qquad - \sum_{i=1}^{N}\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|^2\\ &\qquad + 2\sum_{i=1}^{N}\lambda_i\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|\Bigr]\\ &\le \frac{2a_n}{1 + a_n}\|x_0 - p\|\|x_{n+1} - p\| + \frac{b_n}{1 + a_n}\|x_n - p\|^2 + \frac{c_n + d_n}{1 + a_n}\Bigl[\|x_n - p\|^2\\ &\qquad - \sum_{i=1}^{N}\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|^2\\ &\qquad + 2\sum_{i=1}^{N}\lambda_i\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|\Bigr]. \end{aligned}$$

It follows that

$$\begin{aligned} \frac{c_n + d_n}{1 + a_n}\sum_{i=1}^{N}\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|^2 &\le \frac{2a_n}{1 + a_n}\|x_0 - p\|\|x_{n+1} - p\| + \frac{1 - a_n}{1 + a_n}\|x_n - p\|^2 - \|x_{n+1} - p\|^2\\ &\quad + \frac{2(c_n + d_n)}{1 + a_n}\sum_{i=1}^{N}\lambda_i\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|\\ &\le 2a_n\|x_0 - p\|\|x_{n+1} - p\| + \|x_{n+1} - x_n\|\bigl(\|x_{n+1} - p\| + \|x_n - p\|\bigr)\\ &\quad + \frac{2(c_n + d_n)}{1 + a_n}\sum_{i=1}^{N}\lambda_i\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\|\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\|. \end{aligned}$$

Since $\liminf_{n\to\infty}\frac{c_n + d_n}{1 + a_n} > 0$, $a_n \to 0$, $\|x_{n+1} - x_n\| \to 0$ and $\|A_i\Omega^{i-1}z_n - A_i\Omega^{i-1}p\| \to 0$, we conclude that

$$\lim_{n\to\infty}\sum_{i=1}^{N}\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\| = 0. \tag{3.22}$$

Therefore, we get

$$\|z_n - y_n\| = \|\Omega^0 z_n - \Omega^N z_n\| \le \sum_{i=1}^{N}\|\Omega^{i-1}z_n - \Omega^i z_n + \Omega^i p - \Omega^{i-1}p\| \to 0 \quad \text{as } n \to \infty. \tag{3.23}$$

Thus from (3.19) and (3.23), we have

$$\lim_{n\to\infty}\|x_n - y_n\| = 0. \tag{3.24}$$

Observe that

$$\begin{aligned} d_n\|S y_n - x_n\| &= \|d_n S y_n - d_n x_n\| = \|x_{n+1} - a_n x_0 - b_n x_n - c_n y_n - d_n x_n\|\\ &= \bigl\|x_{n+1} - x_n + (1 - b_n - d_n)x_n - c_n y_n - a_n x_0\bigr\|\\ &= \bigl\|x_{n+1} - x_n + a_n(x_n - x_0) + c_n(x_n - y_n)\bigr\|\\ &\le \|x_{n+1} - x_n\| + a_n\|x_n - x_0\| + c_n\|x_n - y_n\|. \end{aligned}$$

Since $\liminf_{n\to\infty}d_n > 0$, $\|x_{n+1} - x_n\| \to 0$, $a_n \to 0$ and $\|x_n - y_n\| \to 0$, we have

$$\lim_{n\to\infty}\|S y_n - x_n\| = 0. \tag{3.25}$$

From (3.24) and (3.25), we conclude that

$$\lim_{n\to\infty}\|S y_n - y_n\| = 0. \tag{3.26}$$

Step 5. In this step, we prove that $\limsup_{n\to\infty}\langle x_0 - \bar{x}, x_n - \bar{x}\rangle \le 0$, where $\bar{x} = P_{F}x_0$.

Indeed, take a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that

$$\limsup_{n\to\infty}\langle x_0 - \bar{x}, x_n - \bar{x}\rangle = \lim_{i\to\infty}\langle x_0 - \bar{x}, x_{n_i} - \bar{x}\rangle.$$

Since $\{x_n\}$ is bounded, there exists a subsequence of $\{x_{n_i}\}$ which converges weakly to some $x^{*}$. Without loss of generality, we may assume that $x_{n_i} \rightharpoonup x^{*}$. From (3.18) and (3.24), we have $\Theta_{n_i}^k x_{n_i} \rightharpoonup x^{*}$ and $y_{n_i} \rightharpoonup x^{*}$, where $k \in \{1, 2, \ldots, M\}$. From (3.26) and Lemma 2.1, we have $x^{*} = S x^{*}$, that is, $x^{*} \in F(S)$. By Lemma 2.7, we know that $G$ is nonexpansive, and from (3.23) we obtain

$$\|y_{n_i} - G y_{n_i}\| = \|G z_{n_i} - G y_{n_i}\| \le \|z_{n_i} - y_{n_i}\| \to 0 \quad \text{as } i \to \infty.$$

According to Lemma 2.10, we obtain $(I - G)x^{*} = 0$, that is, $x^{*} \in F(G)$.

Next we prove that $x^{*} \in \bigcap_{k=1}^{M}\operatorname{GMEP}(F_k, \varphi_k, B_k)$. Since

$$\Theta_n^k x_n = T_{r_{k,n}}^{(F_k,\varphi_k)}(I - r_{k,n}B_k)\Theta_n^{k-1}x_n, \quad n \ge 1,\ k \in \{1, 2, \ldots, M\},$$

we have, for all $y \in C$,

$$F_k(\Theta_n^k x_n, y) + \varphi_k(y) - \varphi_k(\Theta_n^k x_n) + \bigl\langle B_k\Theta_n^{k-1}x_n,\, y - \Theta_n^k x_n\bigr\rangle + \frac{1}{r_{k,n}}\bigl\langle y - \Theta_n^k x_n,\, \Theta_n^k x_n - \Theta_n^{k-1}x_n\bigr\rangle \ge 0.$$

By (A2), we have

$$\varphi_k(y) - \varphi_k(\Theta_n^k x_n) + \bigl\langle B_k\Theta_n^{k-1}x_n,\, y - \Theta_n^k x_n\bigr\rangle + \frac{1}{r_{k,n}}\bigl\langle y - \Theta_n^k x_n,\, \Theta_n^k x_n - \Theta_n^{k-1}x_n\bigr\rangle \ge F_k(y, \Theta_n^k x_n).$$

Replacing $n$ by $n_i$ in the above inequality, we have

$$\varphi_k(y) - \varphi_k(\Theta_{n_i}^k x_{n_i}) + \bigl\langle B_k\Theta_{n_i}^{k-1}x_{n_i},\, y - \Theta_{n_i}^k x_{n_i}\bigr\rangle + \frac{1}{r_{k,n_i}}\bigl\langle y - \Theta_{n_i}^k x_{n_i},\, \Theta_{n_i}^k x_{n_i} - \Theta_{n_i}^{k-1}x_{n_i}\bigr\rangle \ge F_k(y, \Theta_{n_i}^k x_{n_i}).$$

Let $z_t = ty + (1 - t)x^{*}$ for all $t \in (0, 1]$ and $y \in C$. This implies that $z_t \in C$. Then we have

$$\begin{aligned} \bigl\langle z_t - \Theta_{n_i}^k x_{n_i},\, B_k z_t\bigr\rangle &\ge \bigl\langle z_t - \Theta_{n_i}^k x_{n_i},\, B_k z_t\bigr\rangle + \varphi_k(\Theta_{n_i}^k x_{n_i}) - \varphi_k(z_t) - \bigl\langle B_k\Theta_{n_i}^{k-1}x_{n_i},\, z_t - \Theta_{n_i}^k x_{n_i}\bigr\rangle\\ &\quad - \frac{1}{r_{k,n_i}}\bigl\langle z_t - \Theta_{n_i}^k x_{n_i},\, \Theta_{n_i}^k x_{n_i} - \Theta_{n_i}^{k-1}x_{n_i}\bigr\rangle + F_k(z_t, \Theta_{n_i}^k x_{n_i})\\ &= \varphi_k(\Theta_{n_i}^k x_{n_i}) - \varphi_k(z_t) + \bigl\langle z_t - \Theta_{n_i}^k x_{n_i},\, B_k z_t - B_k\Theta_{n_i}^k x_{n_i}\bigr\rangle\\ &\quad + \bigl\langle z_t - \Theta_{n_i}^k x_{n_i},\, B_k\Theta_{n_i}^k x_{n_i} - B_k\Theta_{n_i}^{k-1}x_{n_i}\bigr\rangle - \Bigl\langle z_t - \Theta_{n_i}^k x_{n_i},\, \frac{\Theta_{n_i}^k x_{n_i} - \Theta_{n_i}^{k-1}x_{n_i}}{r_{k,n_i}}\Bigr\rangle\\ &\quad + F_k(z_t, \Theta_{n_i}^k x_{n_i}). \end{aligned}$$

By (3.18) we have $\|B_k\Theta_{n_i}^k x_{n_i} - B_k\Theta_{n_i}^{k-1}x_{n_i}\| \to 0$ as $i \to \infty$. Furthermore, by the monotonicity of $B_k$, we obtain $\langle z_t - \Theta_{n_i}^k x_{n_i}, B_k z_t - B_k\Theta_{n_i}^k x_{n_i}\rangle \ge 0$. Then from (A4), the lower semicontinuity of $\varphi_k$ and

$$\frac{\bigl\|\Theta_{n_i}^k x_{n_i} - \Theta_{n_i}^{k-1}x_{n_i}\bigr\|}{r_{k,n_i}} \to 0, \qquad \Theta_{n_i}^k x_{n_i} \rightharpoonup x^{*},$$

we obtain that

$$\langle z_t - x^{*}, B_k z_t\rangle \ge \varphi_k(x^{*}) - \varphi_k(z_t) + F_k(z_t, x^{*}). \tag{3.27}$$

Using (A1), (A4) and (3.27), we have

$$\begin{aligned} 0 &= F_k(z_t, z_t) + \varphi_k(z_t) - \varphi_k(z_t)\\ &\le t F_k(z_t, y) + (1 - t)F_k(z_t, x^{*}) + t\varphi_k(y) + (1 - t)\varphi_k(x^{*}) - t\varphi_k(z_t) - (1 - t)\varphi_k(z_t)\\ &= t\bigl[F_k(z_t, y) + \varphi_k(y) - \varphi_k(z_t)\bigr] + (1 - t)\bigl[F_k(z_t, x^{*}) + \varphi_k(x^{*}) - \varphi_k(z_t)\bigr]\\ &\le t\bigl[F_k(z_t, y) + \varphi_k(y) - \varphi_k(z_t)\bigr] + (1 - t)\langle z_t - x^{*}, B_k z_t\rangle\\ &= t\bigl[F_k(z_t, y) + \varphi_k(y) - \varphi_k(z_t)\bigr] + (1 - t)t\langle y - x^{*}, B_k z_t\rangle, \end{aligned}$$

and hence

$$0 \le F_k(z_t, y) + \varphi_k(y) - \varphi_k(z_t) + (1 - t)\langle y - x^{*}, B_k z_t\rangle.$$

Letting $t \to 0$, we have, for each $y \in C$,

$$0 \le F_k(x^{*}, y) + \varphi_k(y) - \varphi_k(x^{*}) + \langle y - x^{*}, B_k x^{*}\rangle.$$

This implies that $x^{*} \in \operatorname{GMEP}(F_k, \varphi_k, B_k)$. Hence $x^{*} \in \bigcap_{k=1}^{M}\operatorname{GMEP}(F_k, \varphi_k, B_k)$. Therefore,

$$x^{*} \in F = \Bigl[\bigcap_{k=1}^{M}\operatorname{GMEP}(F_k, \varphi_k, B_k)\Bigr] \cap F(G) \cap F(S).$$

This together with the property of the metric projection implies that

$$\limsup_{n\to\infty}\langle x_0 - \bar{x}, x_n - \bar{x}\rangle = \lim_{i\to\infty}\langle x_0 - \bar{x}, x_{n_i} - \bar{x}\rangle = \langle x_0 - \bar{x}, x^{*} - \bar{x}\rangle \le 0. \tag{3.28}$$

Step 6. Finally, we can easily show that $x_n \to \bar{x}$ as $n \to \infty$.

Indeed, from (3.14) and (3.3), we have

$$\begin{aligned} \|x_{n+1} - \bar{x}\|^2 &\le \frac{2a_n}{1 + a_n}\langle x_0 - \bar{x}, x_{n+1} - \bar{x}\rangle + \frac{b_n}{1 + a_n}\|x_n - \bar{x}\|^2 + \frac{c_n + d_n}{1 + a_n}\|x_n - \bar{x}\|^2\\ &= \frac{2a_n}{1 + a_n}\langle x_0 - \bar{x}, x_{n+1} - \bar{x}\rangle + \Bigl(1 - \frac{2a_n}{1 + a_n}\Bigr)\|x_n - \bar{x}\|^2. \end{aligned}$$

It is clear that $\sum_{n=1}^{\infty}\frac{2a_n}{1 + a_n} = \infty$. Hence, applying (3.28) and Lemma 2.11, we immediately obtain that $x_n \to \bar{x}$ as $n \to \infty$. This completes the proof. □

References

  1. Cai G, Bu SQ: Hybrid algorithm for generalized mixed equilibrium problems and variational inequality problems and fixed point problems. Comput. Math. Appl. 2011, 62: 4772–4782.
  2. Ceng LC, Yao JC: A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 2010, 72: 1922–1937. doi:10.1016/j.na.2009.09.033
  3. Katchang P, Jitpeera T, Kumam P: Strong convergence theorems for solving generalized mixed equilibrium problems and general system of variational inequalities by the hybrid method. Nonlinear Anal. Hybrid Syst. 2010, 4: 838–852. doi:10.1016/j.nahs.2010.07.001
  4. Peng JW, Yao JC: A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12: 1401–1432.
  5. Liu M, Chang S, Zuo P: On a hybrid method for generalized mixed equilibrium problem and fixed point problem of a family of quasi-φ-asymptotically nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2010, 2010: Article ID 157278. doi:10.1155/2010/157278
  6. Zhang S: Generalized mixed equilibrium problem in Banach spaces. Appl. Math. Mech. 2009, 30: 1105–1112. doi:10.1007/s10483-009-0904-6
  7. Qin X, Cho YJ, Kang SM: Viscosity approximation methods for generalized equilibrium problems and fixed point problems with applications. Nonlinear Anal. 2010, 72: 99–112. doi:10.1016/j.na.2009.06.042
  8. Shehu Y: Fixed point solutions of generalized equilibrium problems for nonexpansive mappings. J. Comput. Appl. Math. 2010, 234: 892–898. doi:10.1016/j.cam.2010.01.055
  9. Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033. doi:10.1016/j.na.2008.02.042
  10. Peng JW, Yao JC: Strong convergence theorems of iterative scheme based on the extragradient method for mixed equilibrium problems and fixed point problems. Math. Comput. Model. 2009, 49: 1816–1828. doi:10.1016/j.mcm.2008.11.014
  11. Plubtieng S, Sombut K: Weak convergence theorems for a system of mixed equilibrium problems and nonspreading mappings in a Hilbert space. J. Inequal. Appl. 2010. doi:10.1155/2010/246237
  12. Yao Y, Liou YC, Yao JC: A new hybrid iterative algorithm for fixed-point problems, variational inequality problems, and mixed equilibrium problems. Fixed Point Theory Appl. 2008. doi:10.1155/2008/417089
  13. Ceng LC, Yao JC: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201. doi:10.1016/j.cam.2007.02.022
  14. Browder FE: Existence and approximation of solutions of nonlinear variational inequalities. Proc. Natl. Acad. Sci. USA 1966, 56: 1080–1086. doi:10.1073/pnas.56.4.1080
  15. Plubtieng S, Punpaeng R: A new iterative method for equilibrium problems and fixed point problems of nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2008, 197: 548–558. doi:10.1016/j.amc.2007.07.075
  16. Qin X, Shang M, Su Y: Strong convergence of a general iterative algorithm for equilibrium problems and variational inequality problems. Math. Comput. Model. 2008, 48: 1033–1046. doi:10.1016/j.mcm.2007.12.008
  17. Su Y, Shang M, Qin X: An iterative method of solution for equilibrium and optimization problems. Nonlinear Anal. 2008, 69: 2709–2719. doi:10.1016/j.na.2007.08.045
  18. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. doi:10.1023/A:1025407607560
  19. Yao JC: Variational inequalities and generalized monotone operators. Math. Oper. Res. 1994, 19: 691–705. doi:10.1287/moor.19.3.691
  20. Yao JC, Chadli O: Pseudomonotone complementarity problems and variational inequalities. In Handbook of Generalized Convexity and Monotonicity. Edited by: Couzeix JP, Haddjissas N, Schaible S. 2005, 501–558.
  21. Zeng LC, Schaible S, Yao JC: Iterative algorithm for generalized set-valued strongly nonlinear mixed variational-like inequalities. J. Optim. Theory Appl. 2005, 124: 725–738. doi:10.1007/s10957-004-1182-z
  22. Noor MA: Some developments in general variational inequalities. Appl. Math. Comput. 2004, 152: 199–277. doi:10.1016/S0096-3003(03)00558-7
  23. Zeng LC: Iterative algorithms for finding approximate solutions for general strongly nonlinear variational inequalities. J. Math. Anal. Appl. 1994, 187: 352–360. doi:10.1006/jmaa.1994.1361
  24. Censor Y, Iusem AN, Zenios SA: An interior point method with Bregman functions for the variational inequality problem with paramonotone operators. Math. Program. 1998, 81: 373–400.
  25. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. doi:10.1007/s10957-005-7564-z
  26. Korpelevich GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 1976, 12: 747–756.
  27. Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10: 1293–1303.
  28. Yao Y, Yao JC: On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2007, 186: 1551–1558. doi:10.1016/j.amc.2006.08.062
  29. Ceng LC, Wang C, Yao JC: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67: 375–390. doi:10.1007/s00186-007-0207-4
  30. Acedo GL, Xu HK: Iterative methods for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2007, 67: 2258–2271. doi:10.1016/j.na.2006.08.036
  31. Suzuki T: Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 2005, 305: 227–239. doi:10.1016/j.jmaa.2004.11.017


Acknowledgements

The project is supported by the National Natural Science Foundation of China (Grant Nos. 11071041, 11201074) and Fujian Natural Science Foundation (Grant No. 2013J01003).

Author information

Correspondence to Changfeng Ma.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Ke, Y., Ma, C. A new relaxed extragradient-like algorithm for approaching common solutions of generalized mixed equilibrium problems, a more general system of variational inequalities and a fixed point problem. Fixed Point Theory Appl 2013, 126 (2013). https://doi.org/10.1186/1687-1812-2013-126
