# Hybrid extragradient method for generalized mixed equilibrium problems and fixed point problems in Hilbert space

Suhong Li^{1, 2}, Lihua Li^{1}, Lijun Cao^{1}, Xiujuan He^{3} and Xiaoyun Yue^{1}

*Fixed Point Theory and Applications* **2013**, 2013:240

https://doi.org/10.1186/1687-1812-2013-240

© Li et al.; licensee Springer. 2013

**Received: **18 September 2012

**Accepted: **23 July 2013

**Published: **31 October 2013

## Abstract

In this paper, we introduce iterative schemes based on the extragradient method for finding a common element of the set of solutions of a generalized mixed equilibrium problem, the set of fixed points of a nonexpansive mapping, and the set of solutions of a variational inequality problem for an inverse strongly monotone mapping. We obtain some strong convergence theorems for the sequences generated by these processes in Hilbert spaces. The results in this paper generalize, extend and unify some well-known convergence theorems in the literature.

**MSC:** 47H09, 47J05, 47J20, 47J25.

## Keywords

## 1 Introduction

Let *H* be a real Hilbert space with the inner product $\langle \cdot ,\cdot \rangle$ and the norm $\parallel \cdot \parallel$, and let *C* be a nonempty closed convex subset of *H*. Let $B:C\to H$ be a nonlinear mapping, let $\phi :C\to R\cup \{+\infty \}$ be a function, and let *F* be a bifunction from $C\times C$ to *R*, where *R* is the set of real numbers. Peng and Yao [1] considered the following generalized mixed equilibrium problem: find $x\in C$ such that $F(x,y)+\phi (y)+\langle Bx,y-x\rangle \ge \phi (x)$ for all $y\in C$. (1.1)

The set of solutions of (1.1) is denoted by $\mathit{GMEP}(F,\phi ,B)$. It is easy to see that if *x* is a solution of problem (1.1), then $x\in dom\phi =\{x\in C:\phi (x)<+\infty \}$.

If $B=0$, then problem (1.1) reduces to the mixed equilibrium problem of finding $x\in C$ such that $F(x,y)+\phi (y)\ge \phi (x)$ for all $y\in C$. (1.2) Problem (1.2) was studied by Ceng and Yao [2] and Peng and Yao [3, 4]. The set of solutions of (1.2) is denoted by $\mathit{MEP}(F,\phi )$.

If $\phi =0$, then problem (1.1) reduces to the generalized equilibrium problem of finding $x\in C$ such that $F(x,y)+\langle Bx,y-x\rangle \ge 0$ for all $y\in C$. (1.3) Problem (1.3) was studied by Takahashi and Takahashi [5]. The set of solutions of (1.3) is denoted by $\mathit{GEP}(F,B)$.

If $B=0$ and $\phi =0$, then problem (1.1) reduces to the equilibrium problem of finding $x\in C$ such that $F(x,y)\ge 0$ for all $y\in C$. (1.4) The set of solutions of (1.4) is denoted by $\mathit{EP}(F)$.

If $F=0$, then problem (1.1) reduces to the generalized variational inequality of finding $x\in C$ such that $\langle Bx,y-x\rangle +\phi (y)-\phi (x)\ge 0$ for all $y\in C$. (1.5) The set of solutions of (1.5) is denoted by $\mathit{GVI}(C,\phi ,B)$.

If $F=0$ and $\phi =0$, then problem (1.1) reduces to the classical variational inequality of finding $x\in C$ such that $\langle Bx,y-x\rangle \ge 0$ for all $y\in C$. (1.6) The set of solutions of (1.6) is denoted by $\mathit{VI}(C,B)$.

Problem (1.1) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others; see, for instance, [1–7].

In 1976, for solving the variational inequality (1.6) in the Euclidean space ${R}^{n}$, Korpelevich [8] introduced the following extragradient method: ${x}_{0}\in C$, ${y}_{n}={P}_{C}({x}_{n}-\lambda B{x}_{n})$, ${x}_{n+1}={P}_{C}({x}_{n}-\lambda B{y}_{n})$ (1.8) for every $n=0,1,2,\dots$ and $\lambda \in (0,\frac{1}{k})$, where *C* is a closed convex subset of ${R}^{n}$, $B:C\to {R}^{n}$ is a monotone and *k*-Lipschitz continuous mapping, and ${P}_{C}$ is the metric projection of ${R}^{n}$ onto *C*. She showed that if $\mathit{VI}(C,B)$ is nonempty, then the sequences $\{{x}_{n}\}$ and $\{{y}_{n}\}$ generated by (1.8) converge to the same point $x\in \mathit{VI}(C,B)$. The idea of the extragradient iterative process introduced by Korpelevich was successfully generalized and extended not only in Euclidean but also in Hilbert and Banach spaces; see, *e.g.*, the recent papers of He *et al.* [9], Gárciga Otero and Iusem [10], Solodov and Svaiter [11], and Solodov [12]. Moreover, Zeng and Yao [13] and Nadezhkina and Takahashi [14] introduced some iterative processes based on the extragradient method for finding a common element of the set of fixed points of nonexpansive mappings and the set of solutions of a variational inequality problem for a monotone, Lipschitz continuous mapping. Yao and Yao [15] introduced an iterative process based on the extragradient method for finding a common element of the set of fixed points of nonexpansive mappings and the set of solutions of a variational inequality problem for a *k*-inverse strongly monotone mapping. Plubtieng and Punpaeng [16] introduced an iterative process, based on the extragradient method, for finding a common element of the set of fixed points of nonexpansive mappings, the set of solutions of an equilibrium problem and the set of solutions of a variational inequality problem for *α*-inverse strongly monotone mappings.
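To make the mechanics of the extragradient iteration (1.8) concrete, here is a small self-contained sketch (a toy example of ours, not one of the schemes discussed above): we take $C=[0,1]^2$, a monotone and *k*-Lipschitz affine operator *B*, and a step $\lambda \in (0,\frac{1}{k})$, and watch the predictor-corrector iterates drive the projected residual to zero.

```python
# Toy extragradient run (our own example).  B(x) = M x + q with
# M = [[1, 2], [-2, 1]]: the symmetric part of M is the identity,
# so B is monotone; ||M|| = sqrt(5) is its Lipschitz constant k.
def B(x):
    return (1.0 * x[0] + 2.0 * x[1] - 1.0,
            -2.0 * x[0] + 1.0 * x[1] + 0.5)

def P_C(x):
    """Metric projection onto the box C = [0, 1]^2."""
    return (min(max(x[0], 0.0), 1.0), min(max(x[1], 0.0), 1.0))

k = 5 ** 0.5          # Lipschitz constant of B
lam = 0.9 / k         # step size in (0, 1/k), as (1.8) requires

x = (0.3, 0.9)
for _ in range(2000):
    bx = B(x)
    y = P_C((x[0] - lam * bx[0], x[1] - lam * bx[1]))   # predictor step
    by = B(y)
    x = P_C((x[0] - lam * by[0], x[1] - lam * by[1]))   # corrector step

# x solves the VI iff the projected residual x - P_C(x - lam*B(x)) vanishes.
bx = B(x)
p = P_C((x[0] - lam * bx[0], x[1] - lam * bx[1]))
residual = ((x[0] - p[0]) ** 2 + (x[1] - p[1]) ** 2) ** 0.5
print(residual)   # essentially zero: the iterates reach a point of VI(C, B)
```

For this particular *B* the solution lies in the interior of *C* (at the zero of *B*), so the residual decays linearly to machine precision.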

where $\{{\lambda}_{n}\}$ and $\{{\alpha}_{n}\}$ satisfy the following conditions: (i) $\{{\lambda}_{n}k\}\subset (0,1-\delta )$ for some $\delta \in (0,1)$ and (ii) $\{{\alpha}_{n}\}\subset (0,1)$, ${\sum}_{n=1}^{\infty}{\alpha}_{n}=\infty$, ${lim}_{n\to \infty}{\alpha}_{n}=0$. They proved that the sequences $\{{x}_{n}\}$ and $\{{y}_{n}\}$ generated by (1.10) converge strongly to the same point ${P}_{Fix(S)\cap \mathit{VI}(C,A)}{x}_{0}$, provided that ${lim}_{n\to \infty}\parallel {x}_{n+1}-{x}_{n}\parallel =0$.

In 2006, Nadezhkina and Takahashi [19] also considered the extragradient method (1.9) for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality, but their convergence result was still only weak convergence. Thus the question posed by Takahashi and Toyoda [17], whether strong convergence can be obtained by the same iteration scheme (1.9), remained open.

It is well known that $x\in C$ is a solution of variational inequality (1.6) if and only if *x* is a zero of the projected residual function (1.11), ${R}_{\lambda}(x)=x-{P}_{C}(x-\lambda Ax)$. Using this fact, Huang, Noor and Al-Said [21] proved a strong convergence result for the iteration scheme (1.9) by means of an error analysis technique.
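The equivalence between solutions of (1.6) and zeros of the projected residual is easy to verify numerically; in the following one-dimensional toy example (the choice of *C* and *A* is ours), the residual vanishes exactly at the solution and nowhere else.

```python
# Toy check of the projected-residual characterization (our example):
# C = [0, 2] and A(x) = x - 1, so the unique VI solution is x = 1.
def P_C(x):
    """Metric projection onto C = [0, 2]."""
    return min(max(x, 0.0), 2.0)

A = lambda x: x - 1.0                          # inverse strongly monotone on R
R = lambda x, lam: x - P_C(x - lam * A(x))     # projected residual function

print(R(1.0, 0.5))        # 0.0   -> x = 1 solves the VI
print(R(0.5, 0.5) != 0)   # True  -> x = 0.5 does not
```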

where $\{{\alpha}_{n}\}$, $\{{\beta}_{n}\}$, $\{{r}_{n}\}$, $\{{\lambda}_{n}\}$ satisfy suitable control conditions. We obtain some strong convergence theorems using the error analysis technique of [21]. The results in this paper generalize, extend and unify some well-known convergence theorems in the literature.

## 2 Preliminaries

Let *C* be a closed convex subset of a Hilbert space *H*. For every point $x\in H$, there exists a unique nearest point in *C*, denoted by ${P}_{C}x$, such that $\parallel x-{P}_{C}x\parallel \le \parallel x-y\parallel$ for all $y\in C$. The mapping ${P}_{C}$ is called the metric projection of *H* onto *C*. It is well known that ${P}_{C}$ is a nonexpansive mapping of *H* onto *C* and satisfies $\langle x-{P}_{C}x,y-{P}_{C}x\rangle \le 0$ (2.1) for all $x\in H$, $y\in C$.
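A quick numerical sanity check of this characterization, taking *C* (for illustration only) to be the closed unit ball in ${R}^{2}$:

```python
import random, math
random.seed(0)

def proj_ball(x):
    """Metric projection of x onto the closed unit ball C in R^2."""
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

# Characterization of the projection: <x - P_C(x), y - P_C(x)> <= 0 for
# every x in H and y in C, i.e. C lies behind the supporting half-space
# at P_C(x).  We probe the inequality at random points.
worst = -float("inf")
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    y = proj_ball((random.uniform(-3, 3), random.uniform(-3, 3)))  # y in C
    p = proj_ball(x)
    inner = (x[0] - p[0]) * (y[0] - p[0]) + (x[1] - p[1]) * (y[1] - p[1])
    worst = max(worst, inner)
print(worst <= 1e-12)   # True: the inequality holds at every sample
```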

A mapping *A* of *C* into *H* is called monotone if $\langle Ax-Ay,x-y\rangle \ge 0$ for all $x,y\in C$. A mapping *A* of *C* into *H* is called inverse strongly monotone with a modulus *α* (in short, *α*-inverse strongly monotone) if there exists a positive real number *α* such that $\langle Ax-Ay,x-y\rangle \ge \alpha {\parallel Ax-Ay\parallel}^{2}$ for all $x,y\in C$.
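For a concrete instance (a toy example of ours): a linear mapping $Ax=Mx$ with *M* symmetric positive definite is $\frac{1}{{\lambda}_{max}(M)}$-inverse strongly monotone, which the following probe confirms for $M=diag(1,4)$.

```python
import random
random.seed(1)

# Check: A(x) = M x with M = diag(1, 4) is alpha-inverse strongly
# monotone with alpha = 1 / lambda_max(M) = 1/4, i.e.
#   <Ax - Ay, x - y> >= alpha * ||Ax - Ay||^2   for all x, y.
alpha = 0.25
ok = True
for _ in range(1000):
    d = (random.uniform(-5, 5), random.uniform(-5, 5))   # d = x - y
    Ad = (1.0 * d[0], 4.0 * d[1])                        # Ax - Ay = M d
    lhs = Ad[0] * d[0] + Ad[1] * d[1]
    rhs = alpha * (Ad[0] ** 2 + Ad[1] ** 2)
    ok = ok and lhs >= rhs - 1e-12
print(ok)   # True
```

Note that an *α*-inverse strongly monotone mapping is automatically monotone and $\frac{1}{\alpha}$-Lipschitz continuous, a fact used in the proof of Theorem 3.1 below.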

A mapping *S* of *C* into itself is nonexpansive if $\parallel Sx-Sy\parallel \le \parallel x-y\parallel$ for all $x,y\in C$, and a mapping *T* of *C* into itself is pseudocontractive if $\langle Tx-Ty,x-y\rangle \le {\parallel x-y\parallel}^{2}$ for all $x,y\in C$. Obviously, the class of pseudocontractive mappings is more general than the class of nonexpansive mappings.
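Indeed, writing pseudocontractivity as $\langle Tx-Ty,x-y\rangle \le {\parallel x-y\parallel}^{2}$, the inclusion is immediate from the Cauchy-Schwarz inequality:

```latex
% If S is nonexpansive, then for all x, y in C:
\langle Sx - Sy,\, x - y \rangle
  \;\le\; \|Sx - Sy\| \, \|x - y\|   % Cauchy--Schwarz
  \;\le\; \|x - y\|^{2}.             % nonexpansiveness of S
```

The converse fails: for example, $T=-2I$ on *R* satisfies $\langle Tx-Ty,x-y\rangle =-2{\parallel x-y\parallel}^{2}\le {\parallel x-y\parallel}^{2}$, yet $\parallel Tx-Ty\parallel =2\parallel x-y\parallel$, so *T* is pseudocontractive but not nonexpansive.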

Let *A* be a monotone mapping from *C* into *H*. In the context of the variational inequality problem, the characterization of projection (2.1) implies that $u\in \mathit{VI}(C,A)$ if and only if $u={P}_{C}(u-\lambda Au)$ for every $\lambda >0$.

A Hilbert space *H* satisfies Opial’s condition: for any sequence $\{{x}_{n}\}$ with ${x}_{n}\rightharpoonup x$, the inequality $\liminf_{n\to \infty}\parallel {x}_{n}-x\parallel <\liminf_{n\to \infty}\parallel {x}_{n}-y\parallel$ holds for every $y\in H$ with $y\ne x$.

For solving the generalized mixed equilibrium problem, we assume that the bifunction *F*, the function *φ* and the set *C* satisfy the following conditions:

- (A1) $F(x,x)=0$ for all $x\in C$;
- (A2) *F* is monotone, *i.e.*, $F(x,y)+F(y,x)\le 0$ for any $x,y\in C$;
- (A3) for each $y\in C$, $x\mapsto F(x,y)$ is weakly upper semicontinuous;
- (A4) for each $x\in C$, $y\mapsto F(x,y)$ is convex;
- (A5) for each $x\in C$, $y\mapsto F(x,y)$ is lower semicontinuous;
- (B1) for each $x\in H$ and $r>0$, there exist a bounded subset $D(x)\subset C$ and ${y}_{x}\in C\cap dom(\phi )$ such that for any $z\in C\setminus D(x)$, $F(z,{y}_{x})+\phi ({y}_{x})+\langle Bz,{y}_{x}-z\rangle +\frac{1}{r}\langle {y}_{x}-z,z-x\rangle \le \phi (z)$;
- (B2) *C* is a bounded set.

**Lemma 2.1** [1] *Let C be a nonempty closed convex subset of a Hilbert space H. Let F be a bifunction from $C\times C$ to R satisfying* (A1)-(A5), *and let $\phi :C\to R\cup \{+\infty \}$ be a proper lower semicontinuous and convex function. Assume that either* (B1) *or* (B2) *holds. For $r>0$ and $x\in H$, define a mapping ${T}_{r}:H\to C$ as follows*: ${T}_{r}(x)=\{z\in C:F(z,y)+\phi (y)+\frac{1}{r}\langle y-z,z-x\rangle \ge \phi (z),\ \forall y\in C\}$ *for all $x\in H$. Then the following conclusions hold*:

- (1) *for each* $x\in H$, ${T}_{r}(x)\ne \varnothing$;
- (2) ${T}_{r}$ *is single-valued*;
- (3) ${T}_{r}$ *is firmly nonexpansive*, *i.e.*, *for any* $x,y\in H$, ${\parallel {T}_{r}(x)-{T}_{r}(y)\parallel}^{2}\le \langle {T}_{r}(x)-{T}_{r}(y),x-y\rangle$;
- (4) $Fix({T}_{r}(I-rB))=\mathit{GMEP}(F,\phi ,B)$;
- (5) $\mathit{GMEP}(F,\phi ,B)$ *is closed and convex*.

**Lemma 2.2** [22] *For any ${x}^{\ast}\in \mathit{VI}(C,A)$, if $A:C\to H$ is α-inverse strongly monotone, then ${R}_{\lambda}$ is $(1-\frac{\lambda}{4\alpha})$-inverse strongly monotone for any $\lambda \in [0,4\alpha ]$; in particular,* $\langle x-{x}^{\ast},{R}_{\lambda}(x)\rangle \ge (1-\frac{\lambda}{4\alpha}){\parallel {R}_{\lambda}(x)\parallel}^{2}$, *where* ${R}_{\lambda}(x)=x-{P}_{C}(x-\lambda Ax)$.

**Lemma 2.3** [21] *For all $x\in H$ and ${\lambda}^{\prime}\ge \lambda >0$, it holds that* $\parallel {R}_{\lambda}(x)\parallel \le \parallel {R}_{{\lambda}^{\prime}}(x)\parallel$, *where* ${R}_{\lambda}(x)=x-{P}_{C}(x-\lambda Ax)$.

**Lemma 2.4** [23]

*Let* $\{{a}_{n}\}$ *and* $\{{b}_{n}\}$ *be two sequences of non*-*negative numbers*, *such that* ${a}_{n+1}\le {a}_{n}+{b}_{n}$ *for all* $n\in N$. *If* ${\sum}_{n=1}^{\mathrm{\infty}}{b}_{n}<+\mathrm{\infty}$, *and if* $\{{a}_{n}\}$ *has a subsequence* $\{{a}_{{n}_{k}}\}$ *converging to* 0, *then* ${lim}_{n\to \mathrm{\infty}}{a}_{n}=0$.

**Lemma 2.5** [24] *Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let $\{{x}_{n}\}$ be a sequence in H. Suppose that, for any ${x}^{\ast}\in C$ and every n,* $\parallel {x}_{n+1}-{x}^{\ast}\parallel \le \parallel {x}_{n}-{x}^{\ast}\parallel$. *Then* ${lim}_{n\to \infty}{P}_{C}({x}_{n})=z$ *for some* $z\in C$.

## 3 Main results

**Theorem 3.1** *Let C be a closed and convex subset of a real Hilbert space H. Let F be a bifunction from $C\times C$ to R satisfying* (A1)-(A5), *and let $\phi :C\to R\cup \{+\infty \}$ be a proper lower semicontinuous and convex function. Let A be an α-inverse strongly monotone mapping from C into H, and let B be a β-inverse strongly monotone mapping from C into H. Let S be a nonexpansive mapping of C into itself such that $\mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GMEP}(F,\phi ,B)\ne \varnothing$. Assume that either* (B1) *or* (B2) *holds. Let $\{{x}_{n}\}$, $\{{y}_{n}\}$ and $\{{u}_{n}\}$ be sequences generated by*

*for every* $n=1,2,\dots $ , *where* $\{{\lambda}_{n}\}$, $\{{r}_{n}\}$, $\{{\alpha}_{n}\}$, $\{{\beta}_{n}\}$ *satisfy the following conditions*: (i) $0<{r}_{n}<2\beta $, $\{{\lambda}_{n}\}\subset [a,b]$ *for some* $a,b\in (0,2\alpha )$ *and* (ii) $\{{\alpha}_{n}\}\subset [c,d]$, $\{{\beta}_{n}\}\subset [e,f]$ *for some* $c,d,e,f\in (0,1)$, *then* $\{{x}_{n}\}$ *converges strongly to* ${p}^{\ast}\in \mathrm{\Omega}$, *where* ${p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}({x}_{n})$.

*Proof* We divide the proof into five steps.

Step 1. We claim that $\{{x}_{n}\}$ is bounded and ${lim}_{n\to \mathrm{\infty}}{R}_{a}({u}_{n})={lim}_{n\to \mathrm{\infty}}{R}_{{\lambda}_{n}}({u}_{n})=0$.

By the *β*-inverse strong monotonicity of *B* and $0<{r}_{n}<2\beta$, we have

By the nonexpansiveness of *S* and $0<{\lambda}_{n}<2\alpha$, we have

Step 2. We show that ${lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-{u}_{n}\parallel ={lim}_{n\to \mathrm{\infty}}\parallel S{x}_{n}-{x}_{n}\parallel =0$.

Step 3. We claim that $\{{x}_{n}\}$ must have a convergent subsequence $\{{x}_{{n}_{k}}\}$ such that ${lim}_{k\to \mathrm{\infty}}{x}_{{n}_{k}}={p}^{\ast}$ for some ${p}^{\ast}\in C$. Moreover, ${p}^{\ast}\in \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GMEP}(F,\phi ,B)$.

Since the sequence $\{{x}_{n}\}$ generated by Algorithm (3.1) is bounded, it has a weakly convergent subsequence $\{{x}_{{n}_{k}}\}$ such that ${x}_{{n}_{k}}\rightharpoonup {p}^{\ast}$ ($k\to \infty$), which, together with (3.11) and (3.13), implies that $S{w}_{{n}_{k}}\rightharpoonup {p}^{\ast}$ and ${u}_{{n}_{k}}\rightharpoonup {p}^{\ast}$ as $k\to \infty$. Next we show that ${p}^{\ast}\in \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GMEP}(F,\phi ,B)$.

Since *A* is inverse strongly monotone with the positive constant $\alpha >0$, *A* is $\frac{1}{\alpha}$-Lipschitz continuous. Indeed, the definition of inverse strong monotonicity together with the Cauchy-Schwarz inequality gives $\alpha {\parallel Ax-Ay\parallel}^{2}\le \langle Ax-Ay,x-y\rangle \le \parallel Ax-Ay\parallel \parallel x-y\parallel$, so that $\parallel Ax-Ay\parallel \le \frac{1}{\alpha}\parallel x-y\parallel$. From the Lipschitz continuity of *A* and the continuity of ${P}_{C}$, it follows that ${R}_{a}(x)=x-{P}_{C}[x-aAx]$ is also continuous. Notice that ${\lambda}_{n}\ge a$; then, by Lemma 2.3, $\parallel {R}_{a}({x}_{n})\parallel \le \parallel {R}_{{\lambda}_{n}}({x}_{n})\parallel$. Then from Step 1,

This shows that ${p}^{\ast}$ is a solution of the variational inequality (1.6), that is, ${p}^{\ast}\in \mathit{VI}(C,A)$. From (3.12), ${lim}_{k\to \infty}\parallel {x}_{{n}_{k}}-{p}^{\ast}\parallel =0$ and the nonexpansiveness of *S*, it follows that ${p}^{\ast}=S{p}^{\ast}$, that is, ${p}^{\ast}\in Fix(S)$. Finally, by the same argument as in the proof of [[7], Theorem 3.1], we can prove that ${p}^{\ast}\in \mathit{GMEP}(F,\phi ,B)$. Thus ${p}^{\ast}\in \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GMEP}(F,\phi ,B)$.

Next, we will prove that ${x}_{{n}_{k}}\to {p}^{\ast}$ ($k\to \mathrm{\infty}$).

Using the Kadec-Klee property of *H*, we obtain that ${lim}_{k\to \mathrm{\infty}}{x}_{{n}_{k}}={p}^{\ast}$.

Step 4. We claim that the sequence $\{{x}_{n}\}$ generated by Algorithm (3.1) converges strongly to ${p}^{\ast}\in \mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GMEP}(F,\phi ,B)$.

In fact, from the result of Step 3, ${p}^{\ast}\in \mathrm{\Omega}$. Let $p={p}^{\ast}$ in (3.7). Consequently, $\parallel {x}_{n+1}-{p}^{\ast}\parallel \le \parallel {x}_{n}-{p}^{\ast}\parallel $. Meanwhile, ${lim}_{k\to \mathrm{\infty}}\parallel {x}_{{n}_{k}}-{p}^{\ast}\parallel =0$ from Step 3. Then from Lemma 2.4, we have ${lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-{p}^{\ast}\parallel =0$. Therefore, ${lim}_{n\to \mathrm{\infty}}{x}_{n}={p}^{\ast}$.

Step 5. We claim that ${p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}{x}_{n}$.

and, consequently, we have ${p}^{\ast}={q}^{\ast}$. Hence, ${p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}{x}_{n}$.

This completes the proof of Theorem 3.1. □

The following theorems can be obtained from Theorem 3.1 immediately.

**Theorem 3.2** *Let C, H, A, S be as in Theorem* 3.1. *Assume that $\mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\ne \varnothing$, and let $\{{x}_{n}\}$, $\{{y}_{n}\}$ be sequences generated by*

*for every* $n=1,2,\dots $ , *where* $\{{\lambda}_{n}\}$, $\{{\alpha}_{n}\}$, $\{{\beta}_{n}\}$ *satisfy the following conditions*: (i) $\{{\lambda}_{n}\}\subset [a,b]$ *for some* $a,b\in (0,2\alpha )$ *and* (ii) $\{{\alpha}_{n}\}\subset [c,d]$, $\{{\beta}_{n}\}\subset [e,f]$ *for some* $c,d,e,f\in (0,1)$, *then* $\{{x}_{n}\}$ *converges strongly to* ${p}^{\ast}\in \mathrm{\Omega}$, *where* ${p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}({x}_{n})$.

*Proof* Putting $B=F=\phi =0$, ${r}_{n}=1$ in Theorem 3.1, the conclusion of Theorem 3.2 can be obtained from Theorem 3.1. □

**Remark 3.1** The main result of Nadezhkina and Takahashi [14] is a special case of our Theorem 3.2. Indeed, if we take ${\beta}_{n}=0$ in Theorem 3.2, then we obtain the result of [14].

**Theorem 3.3** *Let C, H, F, A, B, S be as in Theorem* 3.1. *Assume that $\mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{GEP}(F,B)\ne \varnothing$, and let $\{{x}_{n}\}$, $\{{y}_{n}\}$ and $\{{u}_{n}\}$ be sequences generated by*

*for every* $n=1,2,\dots $ , *where* $\{{\lambda}_{n}\}$, $\{{r}_{n}\}$, $\{{\alpha}_{n}\}$, $\{{\beta}_{n}\}$ *satisfy conditions* (i) *and* (ii) *as in Theorem * 3.1, *then* $\{{x}_{n}\}$ *converges strongly to* ${p}^{\ast}\in \mathrm{\Omega}$, *where* ${p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}({x}_{n})$.

*Proof* Putting $\phi =0$ in Theorem 3.1, the conclusion of Theorem 3.3 is obtained. □

**Remark 3.2** Theorem 3.3 can be viewed as an improvement of Theorem 3.1 of Inchan [25], since it removes the iterative step ${C}_{n}$ from the algorithm of Theorem 3.1 of [25].

**Theorem 3.4** *Let C, H, F, A, S be as in Theorem* 3.1. *Assume that $\mathrm{\Omega}=Fix(S)\cap \mathit{VI}(C,A)\cap \mathit{EP}(F)\ne \varnothing$, and let $\{{x}_{n}\}$ and $\{{u}_{n}\}$ be sequences generated by*

*for every* $n=1,2,\dots $ , *where* $\{{\lambda}_{n}\}$, $\{{r}_{n}\}$, $\{{\alpha}_{n}\}$ *satisfy the following conditions*: $0<{r}_{n}<2\beta $, $\{{\lambda}_{n}\}\subset [a,b]$ *for some* $a,b\in (0,2\alpha )$, $\{{\alpha}_{n}\}\subset [c,d]$ *for some* $c,d\in (0,1)$, *then* $\{{x}_{n}\}$ *converges strongly to* ${p}^{\ast}\in \mathrm{\Omega}$, *where* ${p}^{\ast}={lim}_{n\to \mathrm{\infty}}{P}_{\mathrm{\Omega}}({x}_{n})$.

*Proof* Taking $B=\phi =0$, ${\beta}_{n}=0$ in Theorem 3.1, the conclusion of Theorem 3.4 is obtained. □

**Remark 3.3** Theorem 3.4 provides a strong convergence counterpart of Theorem 3.1 of Jaiboon, Kumam and Humphries [26].

## Declarations

### Acknowledgements

The authors are very grateful to the referees for their careful reading, comments and suggestions, which improved the presentation of this article. The first author was supported by the Natural Science Foundational Committee of Qinhuangdao city (201101A453) and Hebei Normal University of Science and Technology (ZDJS 2009 and CXTD2010-05). The fifth author was supported by the Natural Science Foundational Committee of Qinhuangdao city (2012025A034).

## Authors’ Affiliations

## References

1. Peng JW, Yao JC: A new hybrid-extragradient method for generalized mixed equilibrium problems and fixed point problems and variational inequality problems. *Taiwan. J. Math.* 2008, 12(6):1401–1432.
2. Ceng LC, Yao JC: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. *J. Comput. Appl. Math.* 2008, 214:186–201. 10.1016/j.cam.2007.02.022
3. Peng JW, Yao JC: Strong convergence theorems of iterative scheme based on the extragradient method for mixed equilibrium problems and fixed point problems. *Math. Comput. Model.* 2009, 49:1816–1828. 10.1016/j.mcm.2008.11.014
4. Peng JW, Yao JC: An iterative algorithm combining viscosity method with parallel method for a generalized equilibrium problem and strict pseudocontractions. *Fixed Point Theory Appl.* 2009, Article ID 794178.
5. Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. *Nonlinear Anal.* 2008, 69:1025–1033. 10.1016/j.na.2008.02.042
6. Flam SD, Antipin AS: Equilibrium programming using proximal-like algorithms. *Math. Program.* 1997, 78:29–41.
7. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. *Math. Stud.* 1994, 63:123–145.
8. Korpelevich GM: The extragradient method for finding saddle points and other problems. *Matecon* 1976, 12:747–756.
9. He BS, Yang ZH, Yuan XM: An approximate proximal-extragradient type method for monotone variational inequalities. *J. Math. Anal. Appl.* 2004, 300:362–374. 10.1016/j.jmaa.2004.04.068
10. Gárciga Otero R, Iusem A: Proximal methods with penalization effects in Banach spaces. *Numer. Funct. Anal. Optim.* 2004, 25:69–91. 10.1081/NFA-120034119
11. Solodov MV, Svaiter BF: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. *Math. Oper. Res.* 2000, 25:214–230. 10.1287/moor.25.2.214.12222
12. Solodov MV: Convergence rate analysis of iterative algorithms for solving variational inequality problems. *Math. Program.* 2003, 96:513–528. 10.1007/s10107-002-0369-z
13. Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. *Taiwan. J. Math.* 2006, 10:1293–1303.
14. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. *J. Optim. Theory Appl.* 2006, 128:191–201. 10.1007/s10957-005-7564-z
15. Yao Y, Yao JC: On modified iterative method for nonexpansive mappings and monotone mappings. *Appl. Math. Comput.* 2007, 186:1551–1558. 10.1016/j.amc.2006.08.062
16. Plubtieng S, Punpaeng R: A new iterative method for equilibrium problems and fixed point problems of nonexpansive mappings and monotone mappings. *Appl. Math. Comput.* 2008, 197:548–558. 10.1016/j.amc.2007.07.075
17. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. *J. Optim. Theory Appl.* 2003, 118:417–428. 10.1023/A:1025407607560
18. Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. *Taiwan. J. Math.* 2006, 10:1293–1303.
19. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings. *J. Optim. Theory Appl.* 2006, 128:191–201. 10.1007/s10957-005-7564-z
20. Noor MA, Rassias TM: Projection methods for monotone variational inequalities. *J. Math. Anal. Appl.* 1999, 237:405–412. 10.1006/jmaa.1999.6422
21. Huang ZY, Noor MA, Al-Said E: On an open question of Takahashi for nonexpansive mappings and inverse strongly monotone mappings. *J. Optim. Theory Appl.* 2010, 147:194–204. 10.1007/s10957-010-9705-2
22. Bnouhachem A, Noor MA: A new iterative method for variational inequalities. *Appl. Math. Comput.* 2006, 182:1673–1682. 10.1016/j.amc.2006.06.007
23. Liu QH: Convergence theorems of the sequence of iterates for asymptotically demicontractive and hemicontractive mappings. *Nonlinear Anal.* 1996, 26:1835–1842. 10.1016/0362-546X(94)00351-H
24. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. *J. Optim. Theory Appl.* 2003, 118:417–428. 10.1023/A:1025407607560
25. Inchan I: Hybrid extragradient method for general equilibrium problems and fixed point problems in Hilbert space. *Nonlinear Anal. Hybrid Syst.* 2011, 5:467–478. 10.1016/j.nahs.2010.10.005
26. Jaiboon C, Kumam P, Humphries UW: Weak convergence theorem by an extragradient method for variational inequality, equilibrium and fixed point problems. *Bull. Malays. Math. Sci. Soc.* 2009, 32:173–185.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.