Hybrid extragradient method for generalized mixed equilibrium problems and fixed point problems in Hilbert space
Fixed Point Theory and Applications volume 2013, Article number: 240 (2013)
Abstract
In this paper, we introduce iterative schemes based on the extragradient method for finding a common element of the set of solutions of a generalized mixed equilibrium problem, the set of fixed points of a nonexpansive mapping, and the set of solutions of a variational inequality problem for an inverse strongly monotone mapping. We obtain strong convergence theorems for the sequences generated by these processes in Hilbert spaces. The results in this paper generalize, extend and unify some well-known convergence theorems in the literature.
MSC: 47H09, 47J05, 47J20, 47J25.
1 Introduction
Let H be a real Hilbert space with the inner product ⟨\cdot ,\cdot ⟩ and the norm \parallel \cdot \parallel, and let C be a nonempty closed convex subset of H. Let B:C\to H be a nonlinear mapping, let \phi :C\to R\cup \{+\mathrm{\infty}\} be a function, and let F be a bifunction from C\times C to R, where R is the set of real numbers. Peng and Yao [1] considered the following generalized mixed equilibrium problem: find x\in C such that
F(x,y)+\phi (y)+⟨Bx,y-x⟩\ge \phi (x) for all y\in C. (1.1)
The set of solutions of (1.1) is denoted by \mathit{GMEP}(F,\phi ,B). It is easy to see that if x is a solution of problem (1.1), then x\in dom\phi =\{x\in C:\phi (x)<+\mathrm{\infty}\}.
If B=0, then the generalized mixed equilibrium problem (1.1) becomes the following mixed equilibrium problem: find x\in C such that
F(x,y)+\phi (y)\ge \phi (x) for all y\in C. (1.2)
Problem (1.2) was studied by Ceng and Yao [2] and Peng and Yao [3, 4]. The set of solutions of (1.2) is denoted by \mathit{MEP}(F,\phi ).
If \phi =0, then the generalized mixed equilibrium problem (1.1) becomes the following generalized equilibrium problem: find x\in C such that
F(x,y)+⟨Bx,y-x⟩\ge 0 for all y\in C. (1.3)
Problem (1.3) was studied by Takahashi and Takahashi [5]. The set of solutions of (1.3) is denoted by \mathit{GEP}(F,B).
If \phi =0 and B=0, then the generalized mixed equilibrium problem (1.1) becomes the following equilibrium problem: find x\in C such that
F(x,y)\ge 0 for all y\in C. (1.4)
The set of solutions of (1.4) is denoted by \mathit{EP}(F).
If F(x,y)=0 for all x,y\in C, then the generalized mixed equilibrium problem (1.1) becomes the following generalized variational inequality problem: find x\in C such that
\phi (y)+⟨Bx,y-x⟩\ge \phi (x) for all y\in C. (1.5)
The set of solutions of (1.5) is denoted by \mathit{GVI}(C,\phi ,B).
If \phi =0 and F(x,y)=0 for all x,y\in C, then the generalized mixed equilibrium problem (1.1) becomes the following variational inequality problem: find x\in C such that
⟨Bx,y-x⟩\ge 0 for all y\in C. (1.6)
The set of solutions of (1.6) is denoted by \mathit{VI}(C,B).
If B=0 and F(x,y)=0 for all x,y\in C, then the generalized mixed equilibrium problem (1.1) becomes the following minimization problem: find x\in C such that
\phi (y)\ge \phi (x) for all y\in C. (1.7)
Problem (1.1) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others; see, for instance, [1–7].
For solving the variational inequality problem in finite-dimensional Euclidean spaces, Korpelevich [8] introduced in 1976 the following so-called extragradient method:
{x}_{0}=x\in C, {y}_{n}={P}_{C}({x}_{n}-\lambda B{x}_{n}), {x}_{n+1}={P}_{C}({x}_{n}-\lambda B{y}_{n}) (1.8)
for every n=0,1,2,\dots and \lambda \in (0,\frac{1}{k}), where C is a closed convex subset of {R}^{n}, B:C\to {R}^{n} is a monotone and k-Lipschitz continuous mapping, and {P}_{C} is the metric projection of {R}^{n} onto C. She showed that if \mathit{VI}(C,B) is nonempty, then the sequences \{{x}_{n}\} and \{{y}_{n}\} generated by (1.8) converge to the same point x\in \mathit{VI}(C,B). The idea of the extragradient iterative process introduced by Korpelevich was successfully generalized and extended not only in Euclidean spaces but also in Hilbert and Banach spaces; see, e.g., the recent papers of He et al. [9], Gárciga Otero and Iusem [10], Solodov and Svaiter [11], and Solodov [12]. Moreover, Zeng and Yao [13] and Nadezhkina and Takahashi [14] introduced iterative processes based on the extragradient method for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality problem for a monotone, Lipschitz continuous mapping. Yao and Yao [15] introduced an iterative process based on the extragradient method for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality problem for a k-inverse strongly monotone mapping. Plubtieng and Punpaeng [16] introduced an iterative process, based on the extragradient method, for finding a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of an equilibrium problem and the set of solutions of a variational inequality problem for an α-inverse strongly monotone mapping.
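As an illustration, the extragradient iteration (1.8) can be sketched numerically. The instance below (the monotone operator built from the matrix M, the box C=[0,1]^2, the step size and the solution point) is our own hypothetical choice, not taken from the paper:

```python
import numpy as np

# Hypothetical instance: C = [0,1]^2 and B(x) = M(x - x_star), where the
# symmetric part of M is the identity (so B is monotone) and B is
# k-Lipschitz with k = ||M||_2 = sqrt(5).
M = np.array([[1.0, 2.0], [-2.0, 1.0]])
x_star = np.array([0.5, 0.5])       # interior point with B(x_star) = 0, so it solves the VI
B = lambda x: M @ (x - x_star)

def proj_box(x, lo=0.0, hi=1.0):
    """Metric projection onto the box C = [lo, hi]^2."""
    return np.clip(x, lo, hi)

lam = 0.4                           # step size: lam < 1/k = 1/sqrt(5) ≈ 0.447
x = np.array([1.0, 0.0])
for _ in range(200):
    y = proj_box(x - lam * B(x))    # predictor (gradient-projection) step
    x = proj_box(x - lam * B(y))    # corrector (extragradient) step

print(np.linalg.norm(x - x_star))   # essentially zero
```

Re-evaluating B at the predictor point y_n is what makes the scheme converge here: for this skew-perturbed operator, the plain projection iteration {x}_{n+1}={P}_{C}({x}_{n}-\lambda B{x}_{n}) with the same step size does not contract.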
In 2003, Takahashi and Toyoda [17] introduced the following iterative scheme:
{x}_{0}=x\in C, {x}_{n+1}={\alpha}_{n}{x}_{n}+(1-{\alpha}_{n})S{P}_{C}({x}_{n}-{\lambda}_{n}A{x}_{n}) (1.9)
where \{{\alpha}_{n}\} is a sequence in (0,1) and \{{\lambda}_{n}\} is a sequence in (0,2\alpha ). They proved that if F(S)\cap \mathit{VI}(C,A)\ne \mathrm{\varnothing}, then the sequence \{{x}_{n}\} generated by (1.9) converges weakly to some z\in F(S)\cap \mathit{VI}(C,A). Recently, Zeng and Yao [18] introduced the following iterative scheme:
{x}_{0}=x\in C, {y}_{n}={P}_{C}({x}_{n}-{\lambda}_{n}A{x}_{n}), {x}_{n+1}={\alpha}_{n}{x}_{0}+(1-{\alpha}_{n})S{P}_{C}({x}_{n}-{\lambda}_{n}A{y}_{n}) (1.10)
where \{{\lambda}_{n}\} and \{{\alpha}_{n}\} satisfy the following conditions: (i) \{{\lambda}_{n}k\}\subset (0,1-\delta ) for some \delta \in (0,1) and (ii) \{{\alpha}_{n}\}\subset (0,1), {\sum}_{n=1}^{\mathrm{\infty}}{\alpha}_{n}=\mathrm{\infty}, {lim}_{n\to \mathrm{\infty}}{\alpha}_{n}=0. They proved that the sequences \{{x}_{n}\} and \{{y}_{n}\} generated by (1.10) converge strongly to the same point {P}_{F(S)\cap \mathit{VI}(C,A)}{x}_{0}, provided that {lim}_{n\to \mathrm{\infty}}\parallel {x}_{n+1}-{x}_{n}\parallel =0.
In 2006, Nadezhkina and Takahashi [19] also considered the extragradient method (1.9) for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality problem, but the convergence obtained was still only weak. The question posed by Takahashi and Toyoda [17], whether strong convergence can be proved for the same iteration scheme (1.9), remained open.
In 2010, with the techniques adopted by Noor and Rassias [20], Huang, Noor and Al-Said [21] defined the projected residual function by
{R}_{\lambda}(x)=x-{P}_{C}(x-\lambda Ax). (1.11)
It is well known that x\in C is a solution of the variational inequality (1.6) if and only if x\in C is a zero of the projected residual function (1.11). They proved a strong convergence result for the iteration scheme (1.9) using an error analysis technique.
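This zero characterization of the projected residual is easy to check numerically. A small sketch (the operator, the set C and the test points are our own hypothetical choices, not from the paper):

```python
import numpy as np

# Hypothetical instance: C = [0,1]^2 and B(x) = M(x - x_star), monotone with
# B(x_star) = 0 at the interior point x_star, so x_star solves VI(C, B).
M = np.array([[1.0, 2.0], [-2.0, 1.0]])
x_star = np.array([0.5, 0.5])
B = lambda x: M @ (x - x_star)
proj = lambda x: np.clip(x, 0.0, 1.0)   # metric projection onto C = [0,1]^2

def residual(x, lam):
    """Projected residual R_lam(x) = x - P_C(x - lam * B(x)), as in (1.11)."""
    return x - proj(x - lam * B(x))

print(np.linalg.norm(residual(x_star, 0.5)))                 # 0.0: x_star is a zero of R
print(np.linalg.norm(residual(np.array([1.0, 0.0]), 0.5)))   # 0.75: not a solution

# Consistent with Lemma 2.3 below: the residual norm is nondecreasing in lam.
x = np.array([0.9, 0.2])
assert np.linalg.norm(residual(x, 0.1)) <= np.linalg.norm(residual(x, 0.5)) + 1e-12
```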
In this paper, inspired and motivated by the research described above, in particular by Huang, Noor and Al-Said [21], we introduce a new iterative scheme based on the extragradient method for finding a common element of the set of solutions of a generalized mixed equilibrium problem, the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality problem for an inverse strongly monotone mapping, as follows:
where \{{\alpha}_{n}\}, \{{\beta}_{n}\}, \{{r}_{n}\}, \{{\lambda}_{n}\} satisfy suitable parameter control conditions. We obtain strong convergence theorems using the error analysis technique of [21]. The results in this paper generalize, extend and unify some well-known convergence theorems in the literature.
2 Preliminaries
Let C be a closed convex subset of a real Hilbert space H. For every point x\in H, there exists a unique nearest point in C, denoted by {P}_{C}x, such that
\parallel x-{P}_{C}x\parallel \le \parallel x-y\parallel for all y\in C.
{P}_{C} is called the metric projection of H onto C. It is well known that {P}_{C} is a nonexpansive mapping of H onto C and satisfies
⟨x-y,{P}_{C}x-{P}_{C}y⟩\ge {\parallel {P}_{C}x-{P}_{C}y\parallel}^{2}
for every x,y\in H. Moreover, {P}_{C}x is characterized by the following properties: {P}_{C}x\in C and
⟨x-{P}_{C}x,y-{P}_{C}x⟩\le 0 (2.1)
for all x\in H, y\in C.
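These two properties of {P}_{C} can be verified on a concrete set. In the sketch below (our own example), C is the closed unit ball, for which the projection has the closed form x/max(1,\parallel x\parallel):

```python
import numpy as np

def proj_ball(x):
    """Metric projection of x onto the closed unit ball C in R^2."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=2) * 3, rng.normal(size=2) * 3
    px, py = proj_ball(x), proj_ball(y)
    # characterization (2.1): <x - P_C x, z - P_C x> <= 0 for every z in C
    z = proj_ball(rng.normal(size=2) * 3)
    assert np.dot(x - px, z - px) <= 1e-12
    # nonexpansiveness: ||P_C x - P_C y|| <= ||x - y||
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
```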
A mapping A of C into H is called monotone if
⟨Ax-Ay,x-y⟩\ge 0
for all x,y\in C. A mapping A of C into H is called inverse strongly monotone with modulus α (in short, α-inverse strongly monotone) if there exists a positive real number α such that
⟨Ax-Ay,x-y⟩\ge \alpha {\parallel Ax-Ay\parallel}^{2}
for all x,y\in C.
Recall that a mapping S of C into itself is nonexpansive if
\parallel Sx-Sy\parallel \le \parallel x-y\parallel for all x,y\in C.
A mapping T of C into itself is pseudocontractive if
{\parallel Tx-Ty\parallel}^{2}\le {\parallel x-y\parallel}^{2}+{\parallel (I-T)x-(I-T)y\parallel}^{2}
for all x,y\in C. Obviously, the class of pseudocontractive mappings is more general than the class of nonexpansive mappings.
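The first three notions can be checked on concrete maps. In this sketch (our own choices, not from the paper), A is the linear map x\mapsto Mx with M symmetric positive definite, which is monotone and (1/{\lambda}_{max}(M))-inverse strongly monotone, and S is a rotation, which is an isometry and hence nonexpansive:

```python
import numpy as np

M = np.diag([2.0, 1.0])          # A(x) = Mx; for symmetric psd M,
A = lambda x: M @ x              # <Md, d> >= (1/lam_max) ||Md||^2, so alpha = 1/2 here
theta = 0.3                      # rotation by theta: an isometry, hence nonexpansive
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
S = lambda x: Rot @ x

alpha = 0.5                      # = 1 / lam_max(M)
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    mono = np.dot(A(x) - A(y), x - y)
    assert mono >= -1e-12                                           # monotone
    assert mono >= alpha * np.linalg.norm(A(x) - A(y))**2 - 1e-12   # alpha-ism
    assert np.linalg.norm(S(x) - S(y)) <= np.linalg.norm(x - y) + 1e-12  # nonexpansive
```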
Let A be a monotone mapping from C into H. In the context of the variational inequality problem, the characterization of projection (2.1) implies the following:
u\in \mathit{VI}(C,A) if and only if u={P}_{C}(u-\lambda Au) for every \lambda >0.
It is also known that H satisfies Opial's condition: for any sequence \{{x}_{n}\} with {x}_{n}\rightharpoonup x, the inequality
{lim inf}_{n\to \mathrm{\infty}}\parallel {x}_{n}-x\parallel <{lim inf}_{n\to \mathrm{\infty}}\parallel {x}_{n}-y\parallel
holds for every y\in H with y\ne x.
For solving the generalized mixed equilibrium problem, let us give the following assumptions for the bifunction F, the function \phi and the set C:
(A1) F(x,x)=0 for all x\in C;
(A2) F is monotone, i.e., F(x,y)+F(y,x)\le 0 for any x,y\in C;
(A3) for each y\in C, x\mapsto F(x,y) is weakly upper semicontinuous;
(A4) for each x\in C, y\mapsto F(x,y) is convex;
(A5) for each x\in C, y\mapsto F(x,y) is lower semicontinuous;
(B1) for each x\in H and r>0, there exist a bounded subset D(x)\subset C and {y}_{x}\in C\cap dom(\phi ) such that for any z\in C\setminus D(x),
F(z,{y}_{x})+\phi ({y}_{x})+⟨Bz,{y}_{x}-z⟩+\frac{1}{r}⟨{y}_{x}-z,z-x⟩\le \phi (z);
(B2) C is a bounded set.
Lemma 2.1 [1]
Let C be a nonempty closed convex subset of a Hilbert space H. Let F be a bifunction from C\times C to R satisfying (A1)-(A5), and let \phi :C\to R\cup \{+\mathrm{\infty}\} be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For r>0 and x\in H, define a mapping {T}_{r}:H\to C as follows:
{T}_{r}(x)=\{z\in C:F(z,y)+\phi (y)+⟨Bz,y-z⟩+\frac{1}{r}⟨y-z,z-x⟩\ge \phi (z) for all y\in C\}
for all x\in H. Then the following conclusions hold:
(1) For each x\in H, {T}_{r}(x)\ne \mathrm{\varnothing};
(2) {T}_{r} is single-valued;
(3) {T}_{r} is firmly nonexpansive, i.e., for any x,y\in H,
{\parallel {T}_{r}(x)-{T}_{r}(y)\parallel}^{2}\le ⟨{T}_{r}(x)-{T}_{r}(y),x-y⟩;
(4) Fix({T}_{r}(I-rB))=\mathit{GMEP}(F,\phi ,B);
(5) \mathit{GMEP}(F,\phi ,B) is closed and convex.
Lemma 2.2 [22]
For any {x}^{\ast}\in \mathit{VI}(C,A), if A:C\to H is α-inverse strongly monotone, then {R}_{\lambda} is (1-\frac{\lambda}{4\alpha})-inverse strongly monotone for any \lambda \in [0,4\alpha ] and
⟨{R}_{\lambda}(x)-{R}_{\lambda}({x}^{\ast}),x-{x}^{\ast}⟩\ge (1-\frac{\lambda}{4\alpha}){\parallel {R}_{\lambda}(x)-{R}_{\lambda}({x}^{\ast})\parallel}^{2} for all x\in H,
where {R}_{\lambda}(x)=x-{P}_{C}(x-\lambda Ax).
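Lemma 2.2 can be sanity-checked against the solution of a concrete variational inequality. In the sketch below (our own instance, not from the paper), A(x)=Mx with M=diag(2,1) is (1/2)-inverse strongly monotone, C=[0,1]^2, and {x}^{\ast}=0 solves \mathit{VI}(C,A); with \lambda =1 the claimed modulus is 1-\lambda /(4\alpha )=1/2:

```python
import numpy as np

M = np.diag([2.0, 1.0])                 # A(x) = Mx is alpha-ism with alpha = 1/2
A = lambda x: M @ x
proj = lambda x: np.clip(x, 0.0, 1.0)   # C = [0,1]^2; x_star = 0 solves VI(C, A)
alpha, lam = 0.5, 1.0                   # lam lies in [0, 4*alpha]
R = lambda x: x - proj(x - lam * A(x))  # projected residual R_lam

x_star = np.zeros(2)                    # note R(x_star) = 0
mod = 1.0 - lam / (4.0 * alpha)         # claimed inverse-strong-monotonicity modulus
rng = np.random.default_rng(2)
for _ in range(1000):
    x = rng.uniform(-2.0, 2.0, size=2)
    lhs = np.dot(R(x) - R(x_star), x - x_star)
    assert lhs >= mod * np.linalg.norm(R(x) - R(x_star))**2 - 1e-12
```

The inequality is attained with equality at some sample points, which is why a small floating-point tolerance is subtracted on the right-hand side.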
Lemma 2.3 [21]
For all x\in H and {\lambda}^{\prime}\ge \lambda >0, it holds that
\parallel {R}_{\lambda}(x)\parallel \le \parallel {R}_{{\lambda}^{\prime}}(x)\parallel,
where {R}_{\lambda}(x)=x-{P}_{C}(x-\lambda Ax).
Lemma 2.4 [23]
Let \{{a}_{n}\} and \{{b}_{n}\} be two sequences of nonnegative numbers such that {a}_{n+1}\le {a}_{n}+{b}_{n} for all n\in N. If {\sum}_{n=1}^{\mathrm{\infty}}{b}_{n}<+\mathrm{\infty} and \{{a}_{n}\} has a subsequence \{{a}_{{n}_{k}}\} converging to 0, then {lim}_{n\to \mathrm{\infty}}{a}_{n}=0.
Lemma 2.5 [24]
Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let \{{x}_{n}\} be a sequence in H. Suppose that, for any {x}^{\ast}\in C,
\parallel {x}_{n+1}-{x}^{\ast}\parallel \le \parallel {x}_{n}-{x}^{\ast}\parallel for all n\in N.
Then {lim}_{n\to \mathrm{\infty}}{P}_{C}({x}_{n})=z for some z\in C.
3 Main results
Theorem 3.1 Let C be a closed and convex subset of a real Hilbert space H. Let F be a bifunction from C\times C to R satisfying (A1)-(A5), and let \phi :C\to R\cup \{+\mathrm{\infty}\} be a proper lower semicontinuous and convex function. Let A be an α-inverse strongly monotone mapping from C into H and B be a β-inverse strongly monotone mapping from C into H. Let S be a nonexpansive mapping of C into itself such that \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GMEP}(F,\phi ,B)\ne \mathrm{\varnothing}. Assume that either (B1) or (B2) holds. Let \{{x}_{n}\}, \{{y}_{n}\} and \{{u}_{n}\} be sequences generated by
for every n=1,2,\dots , where \{{\lambda}_{n}\}, \{{r}_{n}\}, \{{\alpha}_{n}\}, \{{\beta}_{n}\} satisfy the following conditions: (i) 0<{r}_{n}<2\beta and \{{\lambda}_{n}\}\subset [a,b] for some a,b\in (0,2\alpha ); (ii) \{{\alpha}_{n}\}\subset [c,d] and \{{\beta}_{n}\}\subset [e,f] for some c,d,e,f\in (0,1). Then \{{x}_{n}\} converges strongly to {p}^{\ast}\in \mathrm{\Omega}, where {p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}({x}_{n}).
Proof We divide the proof into five steps.
Step 1. We claim that \{{x}_{n}\} is bounded and {lim}_{n\to \mathrm{\infty}}{R}_{a}({u}_{n})={lim}_{n\to \mathrm{\infty}}{R}_{{\lambda}_{n}}({u}_{n})=0.
Put
for every n=1,2,\dots . Take any p\in \mathrm{\Omega}, and let \{{T}_{{r}_{n}}\} be a sequence of mappings defined as in Lemma 2.1; then p={P}_{C}(p-{\lambda}_{n}Ap)={T}_{{r}_{n}}(p-{r}_{n}Bp). From {u}_{n}={T}_{{r}_{n}}({x}_{n}-{r}_{n}B{x}_{n})\in C, the β-inverse strong monotonicity of B and 0<{r}_{n}<2\beta, we have
and from Lemma 2.2, we have
which implies from (3.2) that
By the same process as in (3.3), we also have from (3.4) that
Further, from (3.1) and (3.5), we get
Hence, from (3.6), the nonexpansive property of the mapping S and 0<{\lambda}_{n}<2\alpha, we have
Since the sequence \{\parallel {x}_{n}-p\parallel \} is bounded and nonincreasing, {lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-p\parallel exists. Hence \{{x}_{n}\} is bounded. Consequently, the sequences \{{u}_{n}\}, \{{v}_{n}\}, \{{w}_{n}\}, \{{y}_{n}\} are also bounded. By (3.7), we have
From the conditions (i) and (ii), there must exist a constant {M}_{1}>0 such that
from which it follows that
Hence {lim}_{n\to \mathrm{\infty}}\parallel {R}_{{\lambda}_{n}}({u}_{n})\parallel =0. Since {R}_{{\lambda}_{n}}({u}_{n})={u}_{n}-{P}_{C}({u}_{n}-{\lambda}_{n}A{u}_{n})={u}_{n}-{y}_{n}, we get {lim}_{n\to \mathrm{\infty}}\parallel {u}_{n}-{y}_{n}\parallel =0. Notice that {\lambda}_{n}\ge a; then, by Lemma 2.3, \parallel {R}_{a}({u}_{n})\parallel \le \parallel {R}_{{\lambda}_{n}}({u}_{n})\parallel. Therefore,
In the same way, we also get that
and thus
Step 2. We show that {lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-{u}_{n}\parallel ={lim}_{n\to \mathrm{\infty}}\parallel S{x}_{n}-{x}_{n}\parallel =0.
Indeed, for any p\in \mathrm{\Omega}, it follows from (3.1) and (3.5) that
which implies that
Thus, it follows from (3.10) that
From the condition (ii), there exists a constant {M}_{2}>0 such that
from which it follows that
Hence
From (3.10), we also get that
In the same way, we obtain that
which, combined with (3.9), implies that
Since
which implies, together with (3.11) and (3.12), that
Further, it follows from (3.1) and (3.11) that
Step 3. We claim that \{{x}_{n}\} must have a convergent subsequence \{{x}_{{n}_{k}}\} such that {lim}_{k\to \mathrm{\infty}}{x}_{{n}_{k}}={p}^{\ast} for some {p}^{\ast}\in C. Moreover, {p}^{\ast}\in \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GMEP}(F,\phi ,B).
Since \{{x}_{n}\} is a bounded sequence generated by Algorithm (3.1), it has a weakly convergent subsequence \{{x}_{{n}_{k}}\} such that {x}_{{n}_{k}}\rightharpoonup {p}^{\ast} (k\to \mathrm{\infty}), which implies from (3.11) and (3.13) that S{w}_{{n}_{k}}\rightharpoonup {p}^{\ast} (k\to \mathrm{\infty}) and {u}_{{n}_{k}}\rightharpoonup {p}^{\ast} (k\to \mathrm{\infty}). Next we show that {p}^{\ast}\in \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GMEP}(F,\phi ,B).
Since A is inverse strongly monotone with constant \alpha >0, A is \frac{1}{\alpha}-Lipschitz continuous. Indeed, the definition of inverse strong monotonicity together with the Cauchy-Schwarz inequality yields \parallel Ax-Ay\parallel \le \frac{1}{\alpha}\parallel x-y\parallel, so that
From the \frac{1}{\alpha}-Lipschitz continuity of A and the continuity of {P}_{C}, it follows that {R}_{a}(x)=x-{P}_{C}(x-aAx) is also continuous. Notice that {\lambda}_{n}\ge a; then, by Lemma 2.3, \parallel {R}_{a}({x}_{n})\parallel \le \parallel {R}_{{\lambda}_{n}}({x}_{n})\parallel. Then, from Step 1,
Therefore from the continuity of {R}_{a}(x),
This shows that {p}^{\ast} is a solution of the variational inequality (1.6), that is, {p}^{\ast}\in \mathit{VI}(C,A). From (3.12), {lim}_{k\to \mathrm{\infty}}\parallel {x}_{{n}_{k}}-{p}^{\ast}\parallel =0 and the nonexpansivity of S, it follows that {p}^{\ast}=S{p}^{\ast}, that is, {p}^{\ast}\in Fix(S). Finally, by the same argument as in the proof of [7, Theorem 3.1], we can show that {p}^{\ast}\in \mathit{GMEP}(F,\phi ,B). Thus {p}^{\ast}\in \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GMEP}(F,\phi ,B).
Next, we will prove that {x}_{{n}_{k}}\to {p}^{\ast} (k\to \mathrm{\infty}).
From (3.1), (3.6) and (3.7) we can calculate
which implies
From S{w}_{{n}_{k}}\rightharpoonup {p}^{\ast} and {x}_{{n}_{k+1}}-{x}_{{n}_{k}}\to 0 as k\to \mathrm{\infty}, it follows from (3.16) that
Using the Kadec-Klee property of H, we obtain {lim}_{k\to \mathrm{\infty}}{x}_{{n}_{k}}={p}^{\ast}.
Step 4. We claim that the sequence \{{x}_{n}\} generated by Algorithm (3.1) converges strongly to {p}^{\ast}\in \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GMEP}(F,\phi ,B).
In fact, from the result of Step 3, {p}^{\ast}\in \mathrm{\Omega}. Let p={p}^{\ast} in (3.7); consequently, \parallel {x}_{n+1}-{p}^{\ast}\parallel \le \parallel {x}_{n}-{p}^{\ast}\parallel. Meanwhile, {lim}_{k\to \mathrm{\infty}}\parallel {x}_{{n}_{k}}-{p}^{\ast}\parallel =0 from Step 3. Then, from Lemma 2.4, we have {lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-{p}^{\ast}\parallel =0. Therefore, {lim}_{n\to \mathrm{\infty}}{x}_{n}={p}^{\ast}.
Step 5. We claim that {p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}{x}_{n}.
From (2.1), we have
By (3.7) and Lemma 2.5, {lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}{x}_{n}={q}^{\ast} for some {q}^{\ast}\in \mathrm{\Omega}. Letting n\to \mathrm{\infty} in (3.13), since {lim}_{n\to \mathrm{\infty}}{x}_{n}={p}^{\ast} by Step 4, we have
and, consequently, we have {p}^{\ast}={q}^{\ast}. Hence, {p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}{x}_{n}.
This completes the proof of Theorem 3.1. □
The following theorems can be obtained from Theorem 3.1 immediately.
Theorem 3.2 Let C, H, A, S be as in Theorem 3.1. Assume that \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\ne \mathrm{\varnothing}, and let \{{x}_{n}\}, \{{y}_{n}\} be sequences generated by
for every n=1,2,\dots , where \{{\lambda}_{n}\}, \{{\alpha}_{n}\}, \{{\beta}_{n}\} satisfy the following conditions: (i) \{{\lambda}_{n}\}\subset [a,b] for some a,b\in (0,2\alpha ); (ii) \{{\alpha}_{n}\}\subset [c,d] and \{{\beta}_{n}\}\subset [e,f] for some c,d,e,f\in (0,1). Then \{{x}_{n}\} converges strongly to {p}^{\ast}\in \mathrm{\Omega}, where {p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}({x}_{n}).
Proof Putting B=F=\phi =0, {r}_{n}=1 in Theorem 3.1, the conclusion of Theorem 3.2 can be obtained from Theorem 3.1. □
Remark 3.1 The main result of Nadezhkina and Takahashi [14] is a special case of our Theorem 3.2. Indeed, if we take {\beta}_{n}=0 in Theorem 3.2, then we obtain the result of [14].
Theorem 3.3 Let C, H, F, A, B, S be as in Theorem 3.1. Assume \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GEP}(F,B)\ne \mathrm{\varnothing}; let \{{x}_{n}\}, \{{y}_{n}\} and \{{u}_{n}\} be sequences generated by
for every n=1,2,\dots , where \{{\lambda}_{n}\}, \{{r}_{n}\}, \{{\alpha}_{n}\}, \{{\beta}_{n}\} satisfy conditions (i) and (ii) of Theorem 3.1. Then \{{x}_{n}\} converges strongly to {p}^{\ast}\in \mathrm{\Omega}, where {p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}({x}_{n}).
Proof Putting \phi =0 in Theorem 3.1, the conclusion of Theorem 3.3 is obtained. □
Remark 3.2 Theorem 3.3 can be viewed as an improvement of Theorem 3.1 of Inchan [25], because it removes the iterative step {C}_{n} from the algorithm of Theorem 3.1 of [25].
Theorem 3.4 Let C, H, F, A, S be as in Theorem 3.1. Assume that \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{EP}(F)\ne \mathrm{\varnothing}; let \{{x}_{n}\} and \{{u}_{n}\} be sequences generated by
for every n=1,2,\dots , where \{{\lambda}_{n}\}, \{{r}_{n}\}, \{{\alpha}_{n}\} satisfy the following conditions: 0<{r}_{n}<2\beta, \{{\lambda}_{n}\}\subset [a,b] for some a,b\in (0,2\alpha ), and \{{\alpha}_{n}\}\subset [c,d] for some c,d\in (0,1). Then \{{x}_{n}\} converges strongly to {p}^{\ast}\in \mathrm{\Omega}, where {p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}({x}_{n}).
Proof Taking B=\phi =0, {\beta}_{n}=0 in Theorem 3.1, the conclusion of Theorem 3.4 is obtained. □
Remark 3.3 Theorem 3.4 provides a strong convergence counterpart of Theorem 3.1 of Jaiboon, Kumam and Humphries [26].
References
Peng JW, Yao JC: A new hybridextragradient method for generalized mixed equilibrium problems and fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12(6):1401–1432.
Ceng LC, Yao JC: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201. 10.1016/j.cam.2007.02.022
Peng JW, Yao JC: Strong convergence theorems of iterative scheme based on the extragradient method for mixed equilibrium problems and fixed point problems. Math. Comput. Model. 2009, 49: 1816–1828. 10.1016/j.mcm.2008.11.014
Peng JW, Yao JC: An iterative algorithm combining viscosity method with parallel method for a generalized equilibrium problem and strict pseudocontractions. Fixed Point Theory Appl. 2009., 2009: Article ID 794178
Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033. 10.1016/j.na.2008.02.042
Flam SD, Antipin AS: Equilibrium programming using proximallike algorithms. Math. Program. 1997, 78: 29–41.
Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.
He BS, Yang ZH, Yuan XM: An approximate proximalextragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 2004, 300: 362–374. 10.1016/j.jmaa.2004.04.068
Gárciga Otero R, Iusem A: Proximal methods with penalization effects in Banach spaces. Numer. Funct. Anal. Optim. 2004, 25: 69–91. 10.1081/NFA120034119
Solodov MV, Svaiter BF: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Math. Oper. Res. 2000, 25: 214–230. 10.1287/moor.25.2.214.12222
Solodov MV: Convergence rate analysis of interactive algorithms for solving variational inequality problem. Math. Program. 2003, 96: 513–528. 10.1007/s101070020369z
Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10: 1293–1303.
Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s109570057564z
Yao Y, Yao JC: On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2007, 186: 1551–1558. 10.1016/j.amc.2006.08.062
Plubtieng S, Punpaeng R: A new iterative method for equilibrium problems and fixed point problems of nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2008, 197: 548–558. 10.1016/j.amc.2007.07.075
Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560
Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10: 1293–1303.
Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s109570057564z
Noor MA, Rassias TM: Projection methods for monotone variational inequalities. J. Math. Anal. Appl. 1999, 237: 405–412. 10.1006/jmaa.1999.6422
Huang ZY, Noor MA, Al-Said E: On an open question of Takahashi for nonexpansive mappings and inverse strongly monotone mappings. J. Optim. Theory Appl. 2010, 147: 194–204. 10.1007/s1095701097052
Bnouhachem A, Noor MA: A new iterative method for variational inequalities. Appl. Math. Comput. 2006, 182: 1673–1682. 10.1016/j.amc.2006.06.007
Liu QH: Convergence theorems of the sequence of iterates for asymptotically demicontractive and hemicontractive mappings. Nonlinear Anal. 1996, 26: 1835–1842. 10.1016/0362546X(94)00351H
Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560
Inchan I: Hybrid extragradient method for general equilibrium problems and fixed point problems in Hilbert space. Nonlinear Anal. Hybrid Syst. 2011, 5: 467–478. 10.1016/j.nahs.2010.10.005
Jaiboon C, Kumam P, Humphries UW: Weak convergence theorem by an extragradient method for variational inequality, equilibrium and fixed point problems. Bull. Malays. Math. Sci. Soc. 2009, 32: 173–185.
Acknowledgements
The authors are very grateful to the referees for their careful reading, comments and suggestions, which improved the presentation of this article. The first author was supported by the Natural Science Foundational Committee of Qinhuangdao city (201101A453) and Hebei Normal University of Science and Technology (ZDJS 2009 and CXTD201005). The fifth author was supported by the Natural Science Foundational Committee of Qinhuangdao city (2012025A034).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
SL and LL carried out the proof of convergence of the theorems. LC, XH and XY carried out the check of the manuscript. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Li, S., Li, L., Cao, L. et al. Hybrid extragradient method for generalized mixed equilibrium problems and fixed point problems in Hilbert space. Fixed Point Theory Appl. 2013, 240 (2013). https://doi.org/10.1186/1687-1812-2013-240
Keywords
 generalized mixed equilibrium problem
 extragradient method
 nonexpansive mapping
 variational inequality
 strong convergence theorem