# Iterative algorithms based on the viscosity approximation method for equilibrium and constrained convex minimization problem

## Abstract

The gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. Based on the viscosity approximation method, we combine the GPA with the averaged mapping approach to propose, for the first time, implicit and explicit composite iterative algorithms for finding a common solution of an equilibrium problem and a constrained convex minimization problem. Under suitable conditions, strong convergence theorems are obtained.

MSC:46N10, 47J20, 74G60.

## 1 Introduction

Let H be a real Hilbert space with inner product $〈\cdot ,\cdot 〉$ and norm $\parallel \cdot \parallel$. Let C be a nonempty closed convex subset of H. Let $T:C\to C$ be a nonexpansive mapping, namely $\parallel Tx-Ty\parallel \le \parallel x-y\parallel$, for all $x,y\in C$. The set of fixed points of T is denoted by $F\left(T\right)$.

Let ϕ be a bifunction of $C×C$ into $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers. Consider the equilibrium problem (EP), which is to find $z\in C$ such that

$\varphi \left(z,y\right)\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C.$
(1.1)

We denote the set of solutions of EP by $EP\left(\varphi \right)$. Given a mapping $F:C\to H$, let $\varphi \left(x,y\right)=〈Fx,y-x〉$ for all $x,y\in C$; then $z\in EP\left(\varphi \right)$ if and only if $〈Fz,y-z〉\ge 0$ for all $y\in C$, that is, z is a solution of the variational inequality. Numerous problems in physics, optimization, and economics reduce to finding a solution of (1.1). Several methods have been proposed to solve the equilibrium problem; see, for instance,  and the references therein.

Composite iterative algorithms were proposed by many authors for finding a common solution of an equilibrium problem and a fixed point problem (see ).

On the other hand, consider the constrained convex minimization problem as follows:

$minimize\left\{g\left(x\right):x\in C\right\},$
(1.2)

where $g:C\to \mathbb{R}$ is a real-valued convex function. It is well known that the gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. If g is (Fréchet) differentiable, then the GPA generates a sequence $\left\{{x}_{n}\right\}$ using the following recursive formula:

${x}_{n+1}={P}_{C}\left({x}_{n}-\lambda \mathrm{\nabla }g\left({x}_{n}\right)\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,$
(1.3)

or more generally,

${x}_{n+1}={P}_{C}\left({x}_{n}-{\lambda }_{n}\mathrm{\nabla }g\left({x}_{n}\right)\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,$
(1.4)

where in both (1.3) and (1.4) the initial guess ${x}_{0}$ is taken from C arbitrarily, and the parameters, λ or ${\lambda }_{n}$, are positive real numbers satisfying certain conditions. The convergence of the algorithms (1.3) and (1.4) depends on the behavior of the gradient $\mathrm{\nabla }g$. As a matter of fact, it is known that if $\mathrm{\nabla }g$ is α-strongly monotone and L-Lipschitzian with constants $\alpha ,L>0$, then the operator

$W:={P}_{C}\left(I-\lambda \mathrm{\nabla }g\right)$
(1.5)

is a contraction for $0<\lambda <2\alpha /{L}^{2}$; hence the sequence $\left\{{x}_{n}\right\}$ defined by the algorithm (1.3) converges in norm to the unique minimizer of (1.2). However, if the gradient $\mathrm{\nabla }g$ fails to be strongly monotone, the operator W defined by (1.5) may fail to be contractive; consequently, the sequence $\left\{{x}_{n}\right\}$ generated by the algorithm (1.3) may fail to converge strongly (see ). If $\mathrm{\nabla }g$ is Lipschitzian, then the algorithms (1.3) and (1.4) can still converge in the weak topology under certain conditions.
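To make the discussion concrete, the following sketch (our own illustrative instance, not from the paper) runs the GPA (1.3) on $g\left(x\right)=\frac{1}{2}{\parallel Ax-b\parallel }^{2}$ over the box $C={\left[0,1\right]}^{n}$ with the constant step $\lambda =1/L$, and checks the fixed-point characterization of a minimizer, $x={P}_{C}\left(x-\lambda \mathrm{\nabla }g\left(x\right)\right)$:

```python
import numpy as np

# Illustrative instance of (1.2): minimize g(x) = 0.5*||A x - b||^2 over the
# box C = [0, 1]^n.  A, b and the step rule are assumptions for this sketch.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

grad_g = lambda x: A.T @ (A @ x - b)        # gradient of g
L = np.linalg.norm(A.T @ A, 2)              # Lipschitz constant of grad g
proj_C = lambda x: np.clip(x, 0.0, 1.0)     # metric projection onto the box C

x = np.zeros(5)
lam = 1.0 / L                               # constant step in (0, 2/L)
for _ in range(2000):                       # the GPA recursion (1.3)
    x = proj_C(x - lam * grad_g(x))

# A minimizer is exactly a fixed point of W = P_C(I - lam * grad g).
residual = float(np.linalg.norm(x - proj_C(x - lam * grad_g(x))))
```

Here the gradient is strongly monotone (the least-squares matrix has full column rank), so W is a contraction and the iterates converge in norm, in line with the discussion above.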

Recently, Xu  proposed an operator-oriented approach to the algorithm (1.4), namely an averaged mapping approach. He applied this approach to the GPA (1.4) and to the relaxed gradient-projection algorithm. Moreover, he constructed a counterexample showing that the algorithm (1.3) need not converge in norm in an infinite-dimensional space, and presented two modifications of the GPA which are shown to converge strongly [20, 21].

In 2011, Ceng et al.  proposed the following explicit iterative scheme:

${x}_{n+1}={P}_{C}\left[{s}_{n}\gamma V{x}_{n}+\left(I-{s}_{n}\mu F\right){T}_{n}{x}_{n}\right],\phantom{\rule{1em}{0ex}}n\ge 0,$

where ${s}_{n}=\frac{2-{\lambda }_{n}L}{4}$ and ${P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right)={s}_{n}I+\left(1-{s}_{n}\right){T}_{n}$ for each $n\ge 0$. They proved that the sequence $\left\{{x}_{n}\right\}$ converges strongly to a minimizer of the constrained convex minimization problem, which also solves a certain variational inequality.

In 2000, Moudafi  introduced the viscosity approximation method for nonexpansive mappings, which was further developed in . Let f be a contraction on H. Starting with an arbitrary initial ${x}_{0}\in H$, define a sequence $\left\{{x}_{n}\right\}$ recursively by

${x}_{n+1}={\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right)T{x}_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$
(1.6)

where $\left\{{\alpha }_{n}\right\}$ is a sequence in $\left(0,1\right)$. Xu  proved that if $\left\{{\alpha }_{n}\right\}$ satisfies certain conditions, the sequence $\left\{{x}_{n}\right\}$ generated by (1.6) converges strongly to the unique solution ${x}^{\ast }\in F\left(T\right)$ of the variational inequality

$〈\left(I-f\right){x}^{\ast },x-{x}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in F\left(T\right).$
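The behavior of the scheme (1.6) can be sketched numerically. In the toy run below (all concrete choices are ours, not from the paper), $T={P}_{C}$ is the projection onto the closed unit ball of ${\mathbb{R}}^{2}$, so $F\left(T\right)=C$, and the limit should satisfy the variational inequality above, i.e., ${x}^{\ast }={P}_{F\left(T\right)}f\left({x}^{\ast }\right)$:

```python
import numpy as np

# Viscosity iteration (1.6) with T = P_C (C the closed unit ball, so F(T) = C)
# and the contraction f(x) = 0.5 x + c.  All concrete data are illustrative.
c = np.array([2.0, 1.0])
f = lambda x: 0.5 * x + c                   # contraction with constant rho = 0.5

def proj_ball(x):                           # P_C for the closed unit ball
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

x = np.zeros(2)
for n in range(1, 50001):
    alpha = 1.0 / n                         # alpha_n -> 0 and sum alpha_n = inf
    x = alpha * f(x) + (1.0 - alpha) * proj_ball(x)

# The limit q solves <(I - f)q, y - q> >= 0 on F(T), i.e. q = P_{F(T)}(f(q)).
vi_residual = float(np.linalg.norm(x - proj_ball(f(x))))
```

With these choices the iterates stay on the ray through c and approach the boundary point $c/\parallel c\parallel$, the unique solution of the variational inequality.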

The purpose of this paper is to study iterative methods for finding a common solution of an equilibrium problem and a constrained convex minimization problem. Based on the viscosity approximation method, we combine the GPA with the averaged mapping approach to propose implicit and explicit composite iterative methods for finding a common element of the set of solutions of an equilibrium problem and the solution set of a constrained convex minimization problem, and we prove strong convergence theorems for both.

## 2 Preliminaries

Throughout this paper, we always assume that C is a nonempty closed convex subset of a Hilbert space H. We use ‘⇀’ for weak convergence and ‘→’ for strong convergence.

It is widely known that H satisfies Opial’s condition ; that is, for any sequence $\left\{{x}_{n}\right\}$ with ${x}_{n}⇀x$, the inequality

$\underset{n\to \mathrm{\infty }}{lim inf}\parallel {x}_{n}-x\parallel <\underset{n\to \mathrm{\infty }}{lim inf}\parallel {x}_{n}-y\parallel$

holds for every $y\in H$ with $y\ne x$.

In order to solve the equilibrium problem for a bifunction $\varphi :C×C\to \mathbb{R}$, let us assume that ϕ satisfies the following conditions:

(A1) $\varphi \left(x,x\right)=0$, for all $x\in C$;

(A2) ϕ is monotone, that is, $\varphi \left(x,y\right)+\varphi \left(y,x\right)\le 0$ for all $x,y\in C$;

(A3) for all $x,y,z\in C$, ${lim}_{t↓0}\varphi \left(tz+\left(1-t\right)x,y\right)\le \varphi \left(x,y\right)$;

(A4) for each fixed $x\in C$, the function $y↦\varphi \left(x,y\right)$ is convex and lower semicontinuous.

Let us recall the following lemmas which will be useful for our paper.

Lemma 2.1 

Let ϕ be a bifunction from $C×C$ into $\mathbb{R}$ satisfying (A1), (A2), (A3), and (A4). Then for any $r>0$ and $x\in H$, there exists $z\in C$ such that

$\varphi \left(z,y\right)+\frac{1}{r}〈y-z,z-x〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C.$

Further, if

${Q}_{r}x=\left\{z\in C:\varphi \left(z,y\right)+\frac{1}{r}〈y-z,z-x〉\ge 0,\mathrm{\forall }y\in C\right\},$

then the following hold:

1. (1)

${Q}_{r}$ is single-valued;

2. (2)

${Q}_{r}$ is firmly nonexpansive; that is,

${\parallel {Q}_{r}x-{Q}_{r}y\parallel }^{2}\le 〈{Q}_{r}x-{Q}_{r}y,x-y〉,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in H;$
3. (3)

$F\left({Q}_{r}\right)=EP\left(\varphi \right)$;

4. (4)

$EP\left(\varphi \right)$ is closed and convex.
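The resolvent ${Q}_{r}$ can be computed in closed form in simple cases. For instance (our own toy instance, not from the paper), for $\varphi \left(x,y\right)=〈Ax,y-x〉$ with A a monotone linear operator and $C=H={\mathbb{R}}^{n}$, the defining inequality reduces to $Az+\frac{1}{r}\left(z-x\right)=0$, so ${Q}_{r}={\left(I+rA\right)}^{-1}$, and property (2) can be checked numerically:

```python
import numpy as np

# For phi(x, y) = <A x, y - x> with A monotone (PSD) linear and C = H = R^n,
# Q_r x is the unique z with A z + (z - x)/r = 0, i.e. Q_r = (I + r A)^{-1}.
# We test firm nonexpansiveness (Lemma 2.1(2)) on random pairs.
rng = np.random.default_rng(4)
n, r = 4, 0.7
M = rng.standard_normal((n, n))
A = M @ M.T                                  # positive semidefinite => monotone
Qr = np.linalg.inv(np.eye(n) + r * A)        # the resolvent of phi

firmly_nonexpansive = True
for _ in range(300):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    dz = Qr @ x - Qr @ y
    # check ||Q_r x - Q_r y||^2 <= <Q_r x - Q_r y, x - y>
    firmly_nonexpansive = firmly_nonexpansive and bool(dz @ dz <= dz @ (x - y) + 1e-10)
```

This resolvent form is what makes the schemes of Section 3 implementable when ϕ comes from a monotone operator.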

Definition 2.1 A mapping $T:H\to H$ is said to be firmly nonexpansive if and only if $2T-I$ is nonexpansive, or equivalently,

$〈x-y,Tx-Ty〉\ge {\parallel Tx-Ty\parallel }^{2},\phantom{\rule{1em}{0ex}}x,y\in H.$

Alternatively, T is firmly nonexpansive if and only if T can be expressed as

$T=\frac{1}{2}\left(I+S\right),$

where $S:H\to H$ is nonexpansive. Obviously, projections are firmly nonexpansive.

Definition 2.2 A mapping $T:H\to H$ is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping; that is,

$T=\left(1-\alpha \right)I+\alpha S,$
(2.1)

where $\alpha \in \left(0,1\right)$ and $S:H\to H$ is nonexpansive. More precisely, when (2.1) holds, we say that T is α-averaged.

Clearly, a firmly nonexpansive mapping is a $\frac{1}{2}$-averaged mapping.

Proposition 2.1 

For given operators $S,T,V:H\to H$:

1. (i)

If $T=\left(1-\alpha \right)S+\alpha V$ for some $\alpha \in \left(0,1\right)$ and if S is averaged and V is nonexpansive, then T is averaged.

2. (ii)

T is firmly nonexpansive if and only if the complement $I-T$ is firmly nonexpansive.

3. (iii)

If $T=\left(1-\alpha \right)S+\alpha V$ for some $\alpha \in \left(0,1\right)$, S is firmly nonexpansive and V is nonexpansive, then T is averaged.

4. (iv)

The composite of finitely many averaged mappings is averaged. That is, if each of the mappings ${\left\{{T}_{i}\right\}}_{i=1}^{N}$ is averaged, then so is the composite ${T}_{1}\cdots {T}_{N}$. In particular, if ${T}_{1}$ is ${\alpha }_{1}$-averaged, and ${T}_{2}$ is ${\alpha }_{2}$-averaged, where ${\alpha }_{1},{\alpha }_{2}\in \left(0,1\right)$, then the composite ${T}_{1}{T}_{2}$ is α-averaged, where $\alpha ={\alpha }_{1}+{\alpha }_{2}-{\alpha }_{1}{\alpha }_{2}$.
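Item (iv) can be checked numerically for linear mappings, where a nonexpansive mapping is simply one of operator norm at most 1. The sketch below (the random operators are our own illustrative choices) recovers S from ${T}_{1}{T}_{2}=\left(1-\alpha \right)I+\alpha S$ with $\alpha ={\alpha }_{1}+{\alpha }_{2}-{\alpha }_{1}{\alpha }_{2}$ and verifies $\parallel S\parallel \le 1$:

```python
import numpy as np

# Numerical check of Proposition 2.1(iv) for linear mappings: if T1 is
# a1-averaged and T2 is a2-averaged, T1 T2 should be (a1 + a2 - a1*a2)-averaged.
rng = np.random.default_rng(1)

def random_nonexpansive(n):
    M = rng.standard_normal((n, n))
    return M / np.linalg.norm(M, 2)          # operator norm 1 => nonexpansive

n, a1, a2 = 4, 0.3, 0.6
I = np.eye(n)
T1 = (1 - a1) * I + a1 * random_nonexpansive(n)   # a1-averaged
T2 = (1 - a2) * I + a2 * random_nonexpansive(n)   # a2-averaged

alpha = a1 + a2 - a1 * a2                    # predicted constant (here 0.72)
S = (T1 @ T2 - (1 - alpha) * I) / alpha      # then T1 T2 = (1-alpha) I + alpha S
S_norm = float(np.linalg.norm(S, 2))         # nonexpansive iff S_norm <= 1
```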

Recall that the metric projection from H onto C is the mapping ${P}_{C}:H\to C$ which assigns, to each point $x\in H$, the unique point ${P}_{C}x\in C$ satisfying the property

$\parallel x-{P}_{C}x\parallel =\underset{y\in C}{inf}\parallel x-y\parallel =:d\left(x,C\right).$

Lemma 2.2 For a given $x\in H$:

1. (a)

$z={P}_{C}x$ if and only if $〈x-z,y-z〉\le 0$, $\mathrm{\forall }y\in C$.

2. (b)

$z={P}_{C}x$ if and only if ${\parallel x-z\parallel }^{2}\le {\parallel x-y\parallel }^{2}-{\parallel y-z\parallel }^{2}$, $\mathrm{\forall }y\in C$.

3. (c)

$〈{P}_{C}x-{P}_{C}y,x-y〉\ge {\parallel {P}_{C}x-{P}_{C}y\parallel }^{2}$, $\mathrm{\forall }x,y\in H$.

Consequently, ${P}_{C}$ is nonexpansive and monotone.
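Part (a) admits a quick numerical check for a box (an illustrative instance of ours): coordinatewise clipping computes ${P}_{C}$, and the inner product $〈x-z,y-z〉$ is nonpositive for every $y\in C$:

```python
import numpy as np

# Check Lemma 2.2(a) for C = [0,1]^n, where P_C is coordinatewise clipping:
# z = P_C(x) must satisfy <x - z, y - z> <= 0 for all y in C.
rng = np.random.default_rng(2)
n = 5
x = 3.0 * rng.standard_normal(n)             # a point typically outside C
z = np.clip(x, 0.0, 1.0)                     # z = P_C(x)

ys = rng.uniform(0.0, 1.0, size=(1000, n))   # random test points of C
max_inner = max(float((x - z) @ (y - z)) for y in ys)
```

The check works coordinatewise: wherever ${x}_{i}$ is clipped at an endpoint, ${x}_{i}-{z}_{i}$ and ${y}_{i}-{z}_{i}$ have opposite signs, and elsewhere ${x}_{i}-{z}_{i}=0$.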

Lemma 2.3 The following inequality holds in an inner product space X:

${\parallel x+y\parallel }^{2}\le {\parallel x\parallel }^{2}+2〈y,x+y〉,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in X.$

The so-called demiclosedness principle for nonexpansive mappings will be used.

Lemma 2.4 (Demiclosedness principle )

Let $T:C\to C$ be a nonexpansive mapping with $Fix\left(T\right)\ne \mathrm{\varnothing }$. If $\left\{{x}_{n}\right\}$ is a sequence in C that converges weakly to x and if $\left\{\left(I-T\right){x}_{n}\right\}$ converges strongly to y, then $\left(I-T\right)x=y$. In particular, if $y=0$, then $x\in Fix\left(T\right)$.

Next, we introduce monotonicity of a nonlinear operator.

Definition 2.3 A nonlinear operator G whose domain $D\left(G\right)\subseteq H$ and range $R\left(G\right)\subseteq H$ is said to be:

1. (a)

monotone if

$〈x-y,Gx-Gy〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in D\left(G\right),$
2. (b)

β-strongly monotone if there exists $\beta >0$ such that

$〈x-y,Gx-Gy〉\ge \beta {\parallel x-y\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in D\left(G\right),$
3. (c)

ν-inverse strongly monotone (for short, ν-ism) if there exists $\nu >0$ such that

$〈x-y,Gx-Gy〉\ge \nu {\parallel Gx-Gy\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in D\left(G\right).$

It can be easily seen that if G is nonexpansive, then $I-G$ is monotone; and the projection map ${P}_{C}$ is a 1-ism.

The inverse strongly monotone (also referred to as co-coercive) operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems; see, for example, [29, 30] and the references therein.

The following proposition summarizes some results on the relationship between averaged mappings and inverse strongly monotone operators.

Proposition 2.2 

Let $T:H\to H$ be an operator from H to itself.

1. (a)

T is nonexpansive if and only if the complement $I-T$ is $\frac{1}{2}$-ism.

2. (b)

If T is ν-ism, then for $\gamma >0$, γT is $\frac{\nu }{\gamma }$-ism.

3. (c)

T is averaged if and only if the complement $I-T$ is ν-ism for some $\nu >\frac{1}{2}$. Indeed, for $\alpha \in \left(0,1\right)$, T is α-averaged if and only if $I-T$ is $\frac{1}{2\alpha }$-ism.

Lemma 2.5 

Let $\left\{{a}_{n}\right\}$ be a sequence of nonnegative numbers satisfying the condition

${a}_{n+1}\le \left(1-{\gamma }_{n}\right){a}_{n}+{\gamma }_{n}{\delta }_{n},\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,$

where $\left\{{\gamma }_{n}\right\}$, $\left\{{\delta }_{n}\right\}$ are sequences of real numbers such that:

1. (i)

$\left\{{\gamma }_{n}\right\}\subset \left(0,1\right)$ and ${\sum }_{n=0}^{\mathrm{\infty }}{\gamma }_{n}=\mathrm{\infty }$,

2. (ii)

${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}\le 0$ or ${\sum }_{n=0}^{\mathrm{\infty }}{\gamma }_{n}|{\delta }_{n}|<\mathrm{\infty }$.

Then ${lim}_{n\to \mathrm{\infty }}{a}_{n}=0$.
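A small numerical illustration of Lemma 2.5 (our own choice of sequences): with ${\gamma }_{n}=\frac{1}{n+1}$ (so ${\sum }_{n}{\gamma }_{n}=\mathrm{\infty }$) and ${\delta }_{n}=\frac{1}{\sqrt{n+1}}\to 0$, the recursion drives ${a}_{n}$ to 0:

```python
# Illustration of Lemma 2.5 with gamma_n = 1/(n+1), delta_n = (n+1)**-0.5:
# a_{n+1} = (1 - gamma_n) a_n + gamma_n delta_n tends to 0 (roughly like
# 2/sqrt(n) for these particular choices).
a = 1.0
for n in range(200000):
    gamma = 1.0 / (n + 1)
    delta = (n + 1) ** -0.5
    a = (1.0 - gamma) * a + gamma * delta
```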

## 3 Main results

In this paper, we always assume that $g:C\to \mathbb{R}$ is a real-valued convex function whose gradient $\mathrm{\nabla }g$ is an L-Lipschitzian mapping with $L>0$. Since g is convex, the Lipschitz continuity of $\mathrm{\nabla }g$ implies that $\mathrm{\nabla }g$ is inverse strongly monotone, so the complement $I-\lambda \mathrm{\nabla }g$ is an averaged mapping for suitable $\lambda >0$. Consequently, the GPA can be rewritten as the composite of a projection and an averaged mapping, which is again an averaged mapping. This shows that averaged mappings play an important role in the gradient-projection algorithm.

Note that $\mathrm{\nabla }g$ is L-Lipschitzian. This implies that $\mathrm{\nabla }g$ is ($1/L$)-ism, which then implies that $\lambda \mathrm{\nabla }g$ is ($1/\left(\lambda L\right)$)-ism. So, by Proposition 2.2, $I-\lambda \mathrm{\nabla }g$ is ($\lambda L/2$)-averaged. Now since the projection ${P}_{C}$ is ($1/2$)-averaged, we see from Proposition 2.1 that the composite ${P}_{C}\left(I-\lambda \mathrm{\nabla }g\right)$ is ($\left(2+\lambda L\right)/4$)-averaged for $0<\lambda <2/L$. Hence, for each n, ${P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right)$ is ($\left(2+{\lambda }_{n}L\right)/4$)-averaged. Therefore, we can write

${P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right)=\frac{2-{\lambda }_{n}L}{4}I+\frac{2+{\lambda }_{n}L}{4}{T}_{n}={s}_{n}I+\left(1-{s}_{n}\right){T}_{n},$

where ${T}_{n}$ is nonexpansive and ${s}_{n}=\frac{2-{\lambda }_{n}L}{4}$.
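This decomposition can be verified numerically on a toy problem (our own illustrative instance, not from the paper): recover T from ${P}_{C}\left(I-\lambda \mathrm{\nabla }g\right)=sI+\left(1-s\right)T$ and test that it is nonexpansive on random pairs:

```python
import numpy as np

# Check the decomposition P_C(I - lam*grad g) = s*I + (1-s)*T, s = (2-lam*L)/4,
# by recovering T and testing nonexpansiveness.  Toy instance (illustrative):
# g(x) = 0.5*||A x - b||^2 on the box C = [0,1]^n.
rng = np.random.default_rng(3)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
grad_g = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)
proj_C = lambda x: np.clip(x, 0.0, 1.0)

lam = 1.5 / L                                  # any lam in (0, 2/L)
s = (2.0 - lam * L) / 4.0                      # here s = 0.125

W = lambda x: proj_C(x - lam * grad_g(x))      # the averaged GPA operator
T = lambda x: (W(x) - s * x) / (1.0 - s)       # recovered nonexpansive part

ratios = []
for _ in range(500):
    x, y = 2.0 * rng.standard_normal(3), 2.0 * rng.standard_normal(3)
    ratios.append(np.linalg.norm(T(x) - T(y)) / np.linalg.norm(x - y))
max_ratio = float(max(ratios))
```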

Let $f:C\to C$ be a contraction with constant $\rho \in \left(0,1\right)$. Suppose that the minimization problem (1.2) is consistent, and let U denote its solution set. Let $\left\{{Q}_{{\beta }_{n}}\right\}$ be a sequence of mappings defined as in Lemma 2.1. Consider the following mapping ${G}_{n}$ on C defined by

${G}_{n}x={\alpha }_{n}f\left(x\right)+\left(1-{\alpha }_{n}\right){T}_{n}{Q}_{{\beta }_{n}}x,\phantom{\rule{1em}{0ex}}x\in C,n\in \mathbb{N},$

where ${\alpha }_{n}\in \left(0,1\right)$. By Lemma 2.1, we have

$\parallel {G}_{n}x-{G}_{n}y\parallel \le \left(1-{\alpha }_{n}\left(1-\rho \right)\right)\parallel x-y\parallel .$

Since $0<1-{\alpha }_{n}\left(1-\rho \right)<1$, it follows that ${G}_{n}$ is a contraction. Therefore, by the Banach contraction principle, ${G}_{n}$ has a unique fixed point ${x}_{n}^{f}\in C$ such that

${x}_{n}^{f}={\alpha }_{n}f\left({x}_{n}^{f}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{Q}_{{\beta }_{n}}{x}_{n}^{f}.$

For simplicity, we will write ${x}_{n}$ for ${x}_{n}^{f}$ provided no confusion occurs. Next, we prove the convergence of $\left\{{x}_{n}\right\}$ and establish the existence of a point $q\in U\cap EP\left(\varphi \right)$ which solves the variational inequality

$〈\left(I-f\right)q,p-q〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }p\in U\cap EP\left(\varphi \right).$
(3.1)

Equivalently, $q={P}_{U\cap EP\left(\varphi \right)}f\left(q\right)$.
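The implicit scheme can be run on a toy instance (all concrete choices below are our own illustrative assumptions, not from the paper): $H=C={\mathbb{R}}^{2}$, $\varphi \left(x,y\right)=〈Bx,y-x〉$ with $B=a{a}^{T}$ (so ${Q}_{\beta }={\left(I+\beta B\right)}^{-1}$ and $EP\left(\varphi \right)=ker B$), and $g\left(x\right)=\frac{1}{2}{\parallel x-d\parallel }^{2}$ (so $\mathrm{\nabla }g\left(x\right)=x-d$, $L=1$, $U=\left\{d\right\}$). Choosing $a\perp d$ gives $U\cap EP\left(\varphi \right)=\left\{d\right\}$, so the fixed points ${x}_{n}^{f}$ should approach $q=d$:

```python
import numpy as np

# Toy run of the implicit scheme: H = C = R^2, phi(x, y) = <B x, y - x>,
# g(x) = 0.5*||x - d||^2, f(x) = 0.5 x + 0.25 d.  All data are illustrative.
d = np.array([1.0, 2.0])
a = np.array([2.0, -1.0])                   # a ⟂ d, hence B d = 0
B = np.outer(a, a)                          # PSD => phi satisfies (A1)-(A4)
I2 = np.eye(2)

def x_fixed(alpha, beta, lam):
    """Unique fixed point x = alpha*f(x) + (1 - alpha)*T Q_beta x.  The map is
    affine here, so instead of running the Banach iteration we solve the
    2x2 linear system directly (Banach guarantees uniqueness)."""
    s = (2.0 - lam) / 4.0                   # s_n with L = 1
    Q = np.linalg.inv(I2 + beta * B)        # resolvent Q_beta
    # T(Q x) = ((1 - lam - s) Q x + lam*d) / (1 - s)  since P_C = I on C = R^2
    M = 0.5 * alpha * I2 + (1 - alpha) * (1 - lam - s) / (1 - s) * Q
    c = (0.25 * alpha + (1 - alpha) * lam / (1 - s)) * d
    return np.linalg.solve(I2 - M, c)

# As alpha -> 0 and lam -> 2/L = 2, the fixed points approach q = d.
errs = [float(np.linalg.norm(x_fixed(10.0**-k, 1.0, 2.0 - 10.0**-k) - d))
        for k in (1, 2, 3)]
```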

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H and let ϕ be a bifunction from $C×C$ into $\mathbb{R}$ satisfying (A1), (A2), (A3), and (A4). Let $g:C\to \mathbb{R}$ be a real-valued convex function whose gradient $\mathrm{\nabla }g$ is an L-Lipschitzian mapping with $L>0$, and let $f:C\to C$ be a contraction with constant $\rho \in \left(0,1\right)$. Assume that $U\cap EP\left(\varphi \right)\ne \mathrm{\varnothing }$. Let $\left\{{x}_{n}\right\}$ be a sequence generated by

$\left\{\begin{array}{c}\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n}={\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n},\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\in \mathbb{N},\hfill \end{array}$

where ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, ${P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right)={s}_{n}I+\left(1-{s}_{n}\right){T}_{n}$, ${s}_{n}=\frac{2-{\lambda }_{n}L}{4}$ and $\left\{{\lambda }_{n}\right\}\subset \left(0,\frac{2}{L}\right)$. Let $\left\{{\beta }_{n}\right\}$ and $\left\{{\alpha }_{n}\right\}$ satisfy the following conditions:

1. (i)

$\left\{{\beta }_{n}\right\}\subset \left(0,\mathrm{\infty }\right)$, ${lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}>0$;

2. (ii)

$\left\{{\alpha }_{n}\right\}\subset \left(0,1\right)$, ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$.

Then $\left\{{x}_{n}\right\}$ converges strongly, as ${s}_{n}\to 0$ ($⇔{\lambda }_{n}\to \frac{2}{L}$), to a point $q\in U\cap EP\left(\varphi \right)$ which solves the variational inequality (3.1).

Proof First, we claim that $\left\{{x}_{n}\right\}$ is bounded. Indeed, pick any $p\in U\cap EP\left(\varphi \right)$. Since ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$ and $p={Q}_{{\beta }_{n}}p$, we know that for any $n\in \mathbb{N}$,

$\parallel {u}_{n}-p\parallel =\parallel {Q}_{{\beta }_{n}}{x}_{n}-{Q}_{{\beta }_{n}}p\parallel \le \parallel {x}_{n}-p\parallel .$
(3.2)

Thus, we derive that (noting ${T}_{n}p=p$ and ${T}_{n}$ is nonexpansive)

$\begin{array}{rl}\parallel {x}_{n}-p\parallel & =\parallel {\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}-p\parallel \\ \le \parallel {\alpha }_{n}f\left({x}_{n}\right)-{\alpha }_{n}f\left(p\right)\parallel +\parallel {\alpha }_{n}f\left(p\right)-{\alpha }_{n}p\parallel +\left(1-{\alpha }_{n}\right)\parallel {T}_{n}{u}_{n}-{T}_{n}p\parallel \\ \le \left[1-{\alpha }_{n}\left(1-\rho \right)\right]\parallel {x}_{n}-p\parallel +{\alpha }_{n}\parallel \left(I-f\right)p\parallel .\end{array}$

Then we have

$\parallel {x}_{n}-p\parallel \le \frac{1}{1-\rho }\parallel \left(I-f\right)p\parallel ,$

and hence $\left\{{x}_{n}\right\}$ is bounded. From (3.2), we also derive that $\left\{{u}_{n}\right\}$ is bounded.

Next, we claim that $\parallel {x}_{n}-{u}_{n}\parallel \to 0$. Indeed, for any $p\in U\cap EP\left(\varphi \right)$, by Lemma 2.1, we have

$\begin{array}{rl}{\parallel {u}_{n}-p\parallel }^{2}& ={\parallel {Q}_{{\beta }_{n}}{x}_{n}-{Q}_{{\beta }_{n}}p\parallel }^{2}\\ \le 〈{x}_{n}-p,{u}_{n}-p〉\\ =\frac{1}{2}\left({\parallel {x}_{n}-p\parallel }^{2}+{\parallel {u}_{n}-p\parallel }^{2}-{\parallel {u}_{n}-{x}_{n}\parallel }^{2}\right).\end{array}$

This implies that

${\parallel {u}_{n}-p\parallel }^{2}\le {\parallel {x}_{n}-p\parallel }^{2}-{\parallel {u}_{n}-{x}_{n}\parallel }^{2}.$
(3.3)

Then from (3.3), we derive that

$\begin{array}{rl}{\parallel {x}_{n}-p\parallel }^{2}& ={\parallel {\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}-p\parallel }^{2}\\ ={\parallel {\alpha }_{n}f\left({x}_{n}\right)-{\alpha }_{n}p+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}-\left(1-{\alpha }_{n}\right){T}_{n}p\parallel }^{2}\\ \le {\left(1-{\alpha }_{n}\right)}^{2}{\parallel {u}_{n}-p\parallel }^{2}+2{\alpha }_{n}〈f\left({x}_{n}\right)-p,{x}_{n}-p〉\\ \le {\parallel {x}_{n}-p\parallel }^{2}-{\parallel {u}_{n}-{x}_{n}\parallel }^{2}+2{\alpha }_{n}\left[\rho \parallel {x}_{n}-p\parallel +\parallel \left(I-f\right)p\parallel \right]\parallel {x}_{n}-p\parallel .\end{array}$

Since ${\alpha }_{n}\to 0$, it follows that

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n}-{u}_{n}\parallel =0.$

Then we show that $\parallel {x}_{n}-{T}_{n}{x}_{n}\parallel \to 0$. Indeed,

$\begin{array}{rl}\parallel {x}_{n}-{T}_{n}{x}_{n}\parallel & =\parallel {x}_{n}-{T}_{n}{u}_{n}+{T}_{n}{u}_{n}-{T}_{n}{x}_{n}\parallel \\ \le \parallel {x}_{n}-{T}_{n}{u}_{n}\parallel +\parallel {T}_{n}{u}_{n}-{T}_{n}{x}_{n}\parallel \\ \le {\alpha }_{n}\parallel f\left({x}_{n}\right)-{T}_{n}{u}_{n}\parallel +\parallel {u}_{n}-{x}_{n}\parallel .\end{array}$

Since ${\alpha }_{n}\to 0$ and $\parallel {x}_{n}-{u}_{n}\parallel \to 0$, we obtain that

$\parallel {x}_{n}-{T}_{n}{x}_{n}\parallel \to 0.$

Thus,

$\begin{array}{rl}\parallel {u}_{n}-{T}_{n}{u}_{n}\parallel & =\parallel {u}_{n}-{x}_{n}+{x}_{n}-{T}_{n}{x}_{n}+{T}_{n}{x}_{n}-{T}_{n}{u}_{n}\parallel \\ \le \parallel {u}_{n}-{x}_{n}\parallel +\parallel {x}_{n}-{T}_{n}{x}_{n}\parallel +\parallel {T}_{n}{x}_{n}-{T}_{n}{u}_{n}\parallel \\ \le \parallel {u}_{n}-{x}_{n}\parallel +\parallel {x}_{n}-{T}_{n}{x}_{n}\parallel +\parallel {x}_{n}-{u}_{n}\parallel ,\end{array}$

and

$\parallel {x}_{n}-{T}_{n}{u}_{n}\parallel \le \parallel {x}_{n}-{u}_{n}\parallel +\parallel {u}_{n}-{T}_{n}{u}_{n}\parallel ,$

we have

$\parallel {u}_{n}-{T}_{n}{u}_{n}\parallel \to 0\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\parallel {x}_{n}-{T}_{n}{u}_{n}\parallel \to 0.$

Observe that

$\begin{array}{rl}\parallel {P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right){u}_{n}-{u}_{n}\parallel & =\parallel {s}_{n}{u}_{n}+\left(1-{s}_{n}\right){T}_{n}{u}_{n}-{u}_{n}\parallel \\ =\left(1-{s}_{n}\right)\parallel {T}_{n}{u}_{n}-{u}_{n}\parallel \\ \le \parallel {T}_{n}{u}_{n}-{u}_{n}\parallel ,\end{array}$

where ${s}_{n}=\frac{2-{\lambda }_{n}L}{4}\in \left(0,\frac{1}{2}\right)$. Hence, we have

$\begin{array}{rl}\parallel {u}_{n}-{P}_{C}\left(I-\frac{2}{L}\mathrm{\nabla }g\right){u}_{n}\parallel & \le \parallel {u}_{n}-{P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right){u}_{n}\parallel +\parallel {P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right){u}_{n}-{P}_{C}\left(I-\frac{2}{L}\mathrm{\nabla }g\right){u}_{n}\parallel \\ & \le \parallel {T}_{n}{u}_{n}-{u}_{n}\parallel +\left(\frac{2}{L}-{\lambda }_{n}\right)\parallel \mathrm{\nabla }g\left({u}_{n}\right)\parallel .\end{array}$

From the boundedness of $\left\{{u}_{n}\right\}$, ${s}_{n}\to 0$ ($⇔{\lambda }_{n}\to \frac{2}{L}$) and $\parallel {u}_{n}-{T}_{n}{u}_{n}\parallel \to 0$, we conclude that

$\underset{n\to \mathrm{\infty }}{lim}\parallel {u}_{n}-{P}_{C}\left(I-\frac{2}{L}\mathrm{\nabla }g\right){u}_{n}\parallel =0.$

Since $\mathrm{\nabla }g$ is L-Lipschitzian, $\mathrm{\nabla }g$ is $\frac{1}{L}$-ism. Consequently, ${P}_{C}\left(I-\frac{2}{L}\mathrm{\nabla }g\right)$ is a nonexpansive self-mapping on C. As a matter of fact, we have for each $x,y\in C$

$\begin{array}{rl}{\parallel \left(I-\frac{2}{L}\mathrm{\nabla }g\right)x-\left(I-\frac{2}{L}\mathrm{\nabla }g\right)y\parallel }^{2}& ={\parallel x-y\parallel }^{2}-\frac{4}{L}〈x-y,\mathrm{\nabla }g\left(x\right)-\mathrm{\nabla }g\left(y\right)〉+\frac{4}{{L}^{2}}{\parallel \mathrm{\nabla }g\left(x\right)-\mathrm{\nabla }g\left(y\right)\parallel }^{2}\\ & \le {\parallel x-y\parallel }^{2}-\frac{4}{{L}^{2}}{\parallel \mathrm{\nabla }g\left(x\right)-\mathrm{\nabla }g\left(y\right)\parallel }^{2}+\frac{4}{{L}^{2}}{\parallel \mathrm{\nabla }g\left(x\right)-\mathrm{\nabla }g\left(y\right)\parallel }^{2}\\ & ={\parallel x-y\parallel }^{2},\end{array}$

so $I-\frac{2}{L}\mathrm{\nabla }g$, and hence ${P}_{C}\left(I-\frac{2}{L}\mathrm{\nabla }g\right)$, is nonexpansive.

Consider a subsequence $\left\{{u}_{{n}_{i}}\right\}$ of $\left\{{u}_{n}\right\}$. Since $\left\{{u}_{{n}_{i}}\right\}$ is bounded, there exists a subsequence $\left\{{u}_{{n}_{{i}_{j}}}\right\}$ of $\left\{{u}_{{n}_{i}}\right\}$ which converges weakly to q. Without loss of generality, we can assume that ${u}_{{n}_{i}}⇀q$. Next, we show that $q\in U\cap EP\left(\varphi \right)$. Since $\parallel {u}_{n}-{P}_{C}\left(I-\frac{2}{L}\mathrm{\nabla }g\right){u}_{n}\parallel \to 0$, by Lemma 2.4, we obtain

$q={P}_{C}\left(I-\frac{2}{L}\mathrm{\nabla }g\right)q.$

This shows that $q\in U$.

Next, we show that $q\in EP\left(\varphi \right)$. Since ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, for any $y\in C$, we obtain

$\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0.$

From (A2), we have

$\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge \varphi \left(y,{u}_{n}\right).$

Replacing n by ${n}_{i}$, we have

$〈y-{u}_{{n}_{i}},\frac{{u}_{{n}_{i}}-{x}_{{n}_{i}}}{{\beta }_{{n}_{i}}}〉\ge \varphi \left(y,{u}_{{n}_{i}}\right).$

Since $\frac{{u}_{{n}_{i}}-{x}_{{n}_{i}}}{{\beta }_{{n}_{i}}}\to 0$ and ${u}_{{n}_{i}}⇀q$, it follows from (A4) that $0\ge \varphi \left(y,q\right)$ for all $y\in C$. Let

${z}_{t}=ty+\left(1-t\right)q,\phantom{\rule{1em}{0ex}}\mathrm{\forall }t\in \left(0,1\right],y\in C,$

then we have ${z}_{t}\in C$ and hence $\varphi \left({z}_{t},q\right)\le 0$. Thus, from (A1) and (A4), we have

$\begin{array}{rl}0& =\varphi \left({z}_{t},{z}_{t}\right)\\ \le t\varphi \left({z}_{t},y\right)+\left(1-t\right)\varphi \left({z}_{t},q\right)\\ \le t\varphi \left({z}_{t},y\right),\end{array}$

and hence $0\le \varphi \left({z}_{t},y\right)$. From (A3), we have $0\le \varphi \left(q,y\right)$ for all $y\in C$ and hence $q\in EP\left(\varphi \right)$. Therefore, $q\in EP\left(\varphi \right)\cap U$.

On the other hand, we note that

$\begin{array}{rl}{x}_{n}-q& ={\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}-q\\ ={\alpha }_{n}f\left({x}_{n}\right)-{\alpha }_{n}f\left(q\right)+{\alpha }_{n}f\left(q\right)-{\alpha }_{n}q+\left(1-{\alpha }_{n}\right)\left({T}_{n}{u}_{n}-q\right).\end{array}$

Hence, we obtain

$\begin{array}{rcl}{\parallel {x}_{n}-q\parallel }^{2}& =& {\alpha }_{n}〈\left(f-I\right)q,{x}_{n}-q〉\\ +〈{\alpha }_{n}\left(f\left({x}_{n}\right)-f\left(q\right)\right)+\left(1-{\alpha }_{n}\right)\left({T}_{n}{u}_{n}-{T}_{n}q\right),{x}_{n}-q〉\\ \le & {\alpha }_{n}〈\left(f-I\right)q,{x}_{n}-q〉+\left(1-{\alpha }_{n}\left(1-\rho \right)\right){\parallel {x}_{n}-q\parallel }^{2}.\end{array}$

It follows that

${\parallel {x}_{n}-q\parallel }^{2}\le \frac{1}{1-\rho }〈\left(f-I\right)q,{x}_{n}-q〉.$

In particular,

${\parallel {x}_{{n}_{i}}-q\parallel }^{2}\le \frac{1}{1-\rho }〈\left(f-I\right)q,{x}_{{n}_{i}}-q〉.$
(3.4)

Since $\parallel {x}_{n}-{u}_{n}\parallel \to 0$ and ${u}_{{n}_{i}}⇀q$, we have ${x}_{{n}_{i}}⇀q$, and it follows from (3.4) that ${x}_{{n}_{i}}\to q$ as $i\to \mathrm{\infty }$.

Next, we show that q solves the variational inequality (3.1). Observe that

${x}_{n}={\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}={\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{Q}_{{\beta }_{n}}{x}_{n}.$

Hence, we conclude that

$\left(I-f\right){x}_{n}=-\frac{1}{{\alpha }_{n}}\left(I-{T}_{n}{Q}_{{\beta }_{n}}\right){x}_{n}-{T}_{n}{Q}_{{\beta }_{n}}{x}_{n}+{x}_{n}.$

Since ${T}_{n}{Q}_{{\beta }_{n}}$ is nonexpansive, we have that $I-{T}_{n}{Q}_{{\beta }_{n}}$ is monotone. Note that for any given $z\in U\cap EP\left(\varphi \right)$ we have $\left(I-{T}_{n}{Q}_{{\beta }_{n}}\right)z=0$, and hence

$〈\left(I-f\right){x}_{n},{x}_{n}-z〉=-\frac{1-{\alpha }_{n}}{{\alpha }_{n}}〈\left(I-{T}_{n}{Q}_{{\beta }_{n}}\right){x}_{n}-\left(I-{T}_{n}{Q}_{{\beta }_{n}}\right)z,{x}_{n}-z〉\le 0.$

Now, replacing n with ${n}_{i}$ in the above inequality, and letting $i\to \mathrm{\infty }$, we have

$〈\left(I-f\right)q,q-z〉=\underset{i\to \mathrm{\infty }}{lim}〈\left(I-f\right){x}_{{n}_{i}},{x}_{{n}_{i}}-z〉\le 0.$

From the arbitrariness of $z\in U\cap EP\left(\varphi \right)$, it follows that $q\in U\cap EP\left(\varphi \right)$ is a solution of the variational inequality (3.1). Further, since every weak subsequential limit of $\left\{{x}_{n}\right\}$ solves (3.1), the uniqueness of the solution of the variational inequality (3.1) implies that ${x}_{n}\to q$ as $n\to \mathrm{\infty }$. The variational inequality (3.1) can be written as

$〈f\left(q\right)-q,q-z〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in U\cap EP\left(\varphi \right).$

So, in terms of Lemma 2.2, it is equivalent to the following equality:

${P}_{U\cap EP\left(\varphi \right)}f\left(q\right)=q.$

This completes the proof. □

Theorem 3.2 Let C be a nonempty closed convex subset of a real Hilbert space H and let ϕ be a bifunction from $C×C$ into $\mathbb{R}$ satisfying (A1), (A2), (A3), and (A4). Let $g:C\to \mathbb{R}$ be a real-valued convex function whose gradient $\mathrm{\nabla }g$ is an L-Lipschitzian mapping with $L>0$, and let $f:C\to C$ be a contraction with constant $\rho \in \left(0,1\right)$. Assume that $U\cap EP\left(\varphi \right)\ne \mathrm{\varnothing }$. Let $\left\{{x}_{n}\right\}$ be a sequence generated by ${x}_{1}\in C$ and

$\left\{\begin{array}{c}\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n+1}={\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n},\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\in \mathbb{N},\hfill \end{array}$

where ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, ${P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right)={s}_{n}I+\left(1-{s}_{n}\right){T}_{n}$, ${s}_{n}=\frac{2-{\lambda }_{n}L}{4}$ and $\left\{{\lambda }_{n}\right\}\subset \left(0,\frac{2}{L}\right)$. Let $\left\{{\alpha }_{n}\right\}$, $\left\{{\beta }_{n}\right\}$ and $\left\{{s}_{n}\right\}$ satisfy the following conditions:

1. (i)

$\left\{{\beta }_{n}\right\}\subset \left(0,\mathrm{\infty }\right)$, $lim inf{\beta }_{n}>0$, ${\sum }_{n=1}^{\mathrm{\infty }}|{\beta }_{n+1}-{\beta }_{n}|<\mathrm{\infty }$;

2. (ii)

$\left\{{\alpha }_{n}\right\}\subset \left(0,1\right)$, ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$, ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$, ${\sum }_{n=1}^{\mathrm{\infty }}|{\alpha }_{n+1}-{\alpha }_{n}|<\mathrm{\infty }$;

3. (iii)

$\left\{{s}_{n}\right\}\subset \left(0,\frac{1}{2}\right)$, ${lim}_{n\to \mathrm{\infty }}{s}_{n}=0$ ($⇔{lim}_{n\to \mathrm{\infty }}{\lambda }_{n}=\frac{2}{L}$), ${\sum }_{n=1}^{\mathrm{\infty }}|{s}_{n+1}-{s}_{n}|<\mathrm{\infty }$.

Then $\left\{{x}_{n}\right\}$ converges strongly to a point $q\in U\cap EP\left(\varphi \right)$ which solves the variational inequality (3.1).
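Before turning to the proof, the explicit scheme can be exercised on the same kind of toy instance as before (all concrete choices are our own illustrative assumptions, not from the paper): $H=C={\mathbb{R}}^{2}$, $\varphi \left(x,y\right)=〈Bx,y-x〉$ with $B=a{a}^{T}$ and $a\perp d$, and $g\left(x\right)=\frac{1}{2}{\parallel x-d\parallel }^{2}$, so that $U\cap EP\left(\varphi \right)=\left\{d\right\}$ and the iterates should converge to $q=d$:

```python
import numpy as np

# Toy run of the explicit scheme of Theorem 3.2 with H = C = R^2,
# phi(x, y) = <B x, y - x>  =>  Q_beta = (I + beta*B)^{-1}  (resolvent),
# g(x) = 0.5*||x - d||^2    =>  grad g(x) = x - d, L = 1, U = {d}.
d = np.array([1.0, 2.0])
a = np.array([2.0, -1.0])                   # a ⟂ d, so B d = 0 and d ∈ EP(phi)
B = np.outer(a, a)
I2 = np.eye(2)
f = lambda x: 0.5 * x + 0.25 * d            # contraction with rho = 0.5
Q = np.linalg.inv(I2 + 1.0 * B)             # beta_n ≡ 1 satisfies condition (i)

x = np.array([5.0, -3.0])                   # x_1 ∈ C arbitrary
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)                   # satisfies condition (ii)
    s = 0.25 / (n + 1)                      # s_n -> 0, sum |s_{n+1}-s_n| < inf
    lam = 2.0 - 4.0 * s                     # lam_n = (2 - 4 s_n)/L -> 2/L
    u = Q @ x                               # u_n = Q_{beta_n} x_n
    W = u - lam * (u - d)                   # P_C(I - lam_n grad g) u_n, C = R^2
    Tu = (W - s * u) / (1.0 - s)            # T_n u_n from the decomposition
    x = alpha * f(x) + (1.0 - alpha) * Tu   # x_{n+1}

err = float(np.linalg.norm(x - d))          # should be small (q = d here)
```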

Proof First, we show that $\left\{{x}_{n}\right\}$ is bounded. Indeed, pick any $p\in U\cap EP\left(\varphi \right)$. Since ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$ and $p={Q}_{{\beta }_{n}}p$, we know that for any $n\in \mathbb{N}$,

$\parallel {u}_{n}-p\parallel =\parallel {Q}_{{\beta }_{n}}{x}_{n}-{Q}_{{\beta }_{n}}p\parallel \le \parallel {x}_{n}-p\parallel .$
(3.5)

Thus, we derive that (noting ${T}_{n}p=p$ and ${T}_{n}$ is nonexpansive)

$\begin{array}{rl}\parallel {x}_{n+1}-p\parallel & =\parallel {\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}-p\parallel \\ \le {\alpha }_{n}\rho \parallel {x}_{n}-p\parallel +\left(1-{\alpha }_{n}\right)\parallel {x}_{n}-p\parallel +{\alpha }_{n}\parallel f\left(p\right)-p\parallel \\ \le \left(1-{\alpha }_{n}\left(1-\rho \right)\right)\parallel {x}_{n}-p\parallel +{\alpha }_{n}\parallel f\left(p\right)-p\parallel .\end{array}$

By induction, we have

$\parallel {x}_{n}-p\parallel \le max\left\{\parallel {x}_{1}-p\parallel ,\frac{1}{1-\rho }\parallel f\left(p\right)-p\parallel \right\},$

and hence $\left\{{x}_{n}\right\}$ is bounded. From (3.5), we also derive that $\left\{{u}_{n}\right\}$ is bounded.

Next, we show that $\parallel {x}_{n+1}-{x}_{n}\parallel \to 0$. Indeed, since $\mathrm{\nabla }g$ is $\frac{1}{L}$-ism, ${P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right)$ is nonexpansive. It follows that for any given $p\in U\cap EP\left(\varphi \right)$,

$\begin{array}{rl}\parallel {P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right){u}_{n-1}\parallel & \le \parallel {P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right){u}_{n-1}-p\parallel +\parallel p\parallel \\ \le \parallel {P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right){u}_{n-1}-{P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right)p\parallel +\parallel p\parallel \\ \le \parallel {u}_{n-1}-p\parallel +\parallel p\parallel \\ \le \parallel {u}_{n-1}\parallel +2\parallel p\parallel .\end{array}$

This together with the boundedness of $\left\{{u}_{n}\right\}$ implies that $\left\{{P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right){u}_{n-1}\right\}$ is bounded.

Also, observe that

$\parallel {T}_{n}{u}_{n-1}-{T}_{n-1}{u}_{n-1}\parallel \le \frac{4}{L}|{s}_{n}-{s}_{n-1}|{M}_{1}$

for some appropriate constant ${M}_{1}>0$ such that

${M}_{1}\ge L\parallel {P}_{C}\left(I-{\lambda }_{n}\mathrm{\nabla }g\right){u}_{n-1}\parallel +4\parallel \mathrm{\nabla }g\left({u}_{n-1}\right)\parallel +L\parallel {u}_{n-1}\parallel ,\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 1.$

Thus, we get

$\begin{array}{rl}\parallel {x}_{n+1}-{x}_{n}\parallel & \le {\alpha }_{n}\rho \parallel {x}_{n}-{x}_{n-1}\parallel +\left(1-{\alpha }_{n}\right)\parallel {u}_{n}-{u}_{n-1}\parallel +|{\alpha }_{n}-{\alpha }_{n-1}|\left(\parallel f\left({x}_{n-1}\right)\parallel +\parallel {T}_{n-1}{u}_{n-1}\parallel \right)+\frac{4}{L}|{s}_{n}-{s}_{n-1}|{M}_{1}\\ & \le {\alpha }_{n}\rho \parallel {x}_{n}-{x}_{n-1}\parallel +\left(1-{\alpha }_{n}\right)\parallel {u}_{n}-{u}_{n-1}\parallel +{M}_{2}\left(|{\alpha }_{n}-{\alpha }_{n-1}|+|{s}_{n}-{s}_{n-1}|\right)\end{array}$
(3.6)

for some appropriate constant ${M}_{2}>0$ such that

${M}_{2}\ge max\left\{\parallel f\left({x}_{n-1}\right)\parallel +\parallel {T}_{n-1}{u}_{n-1}\parallel ,\frac{4{M}_{1}}{L}\right\},\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 1.$

From ${u}_{n+1}={Q}_{{\beta }_{n+1}}{x}_{n+1}$ and ${u}_{n}={Q}_{{\beta }_{n}}{x}_{n}$, we note that

$\varphi \left({u}_{n+1},y\right)+\frac{1}{{\beta }_{n+1}}〈y-{u}_{n+1},{u}_{n+1}-{x}_{n+1}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,$
(3.7)

and

$\varphi \left({u}_{n},y\right)+\frac{1}{{\beta }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C.$
(3.8)

Putting $y={u}_{n}$ in (3.7) and $y={u}_{n+1}$ in (3.8), we have

$\varphi \left({u}_{n+1},{u}_{n}\right)+\frac{1}{{\beta }_{n+1}}〈{u}_{n}-{u}_{n+1},{u}_{n+1}-{x}_{n+1}〉\ge 0,$

and

$\varphi \left({u}_{n},{u}_{n+1}\right)+\frac{1}{{\beta }_{n}}〈{u}_{n+1}-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0.$

So, from (A2), we have

$〈{u}_{n+1}-{u}_{n},\frac{{u}_{n}-{x}_{n}}{{\beta }_{n}}-\frac{{u}_{n+1}-{x}_{n+1}}{{\beta }_{n+1}}〉\ge 0,$

and hence

$〈{u}_{n+1}-{u}_{n},{u}_{n}-{u}_{n+1}+{u}_{n+1}-{x}_{n}-\frac{{\beta }_{n}}{{\beta }_{n+1}}\left({u}_{n+1}-{x}_{n+1}\right)〉\ge 0.$

Since ${lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}>0$, without loss of generality we may assume that there exists a real number a such that ${\beta }_{n}>a>0$ for all $n\in \mathbb{N}$. Thus, we have

$\begin{array}{rl}{\parallel {u}_{n+1}-{u}_{n}\parallel }^{2}& \le 〈{u}_{n+1}-{u}_{n},{x}_{n+1}-{x}_{n}+\left(1-\frac{{\beta }_{n}}{{\beta }_{n+1}}\right)\left({u}_{n+1}-{x}_{n+1}\right)〉\\ \le \parallel {u}_{n+1}-{u}_{n}\parallel \left\{\parallel {x}_{n+1}-{x}_{n}\parallel +|1-\frac{{\beta }_{n}}{{\beta }_{n+1}}|\parallel {u}_{n+1}-{x}_{n+1}\parallel \right\},\end{array}$

thus,

$\parallel {u}_{n+1}-{u}_{n}\parallel \le \parallel {x}_{n+1}-{x}_{n}\parallel +\frac{1}{a}|{\beta }_{n+1}-{\beta }_{n}|{M}_{3},$
(3.9)

where ${M}_{3}=sup\left\{\parallel {u}_{n}-{x}_{n}\parallel :n\in \mathbb{N}\right\}$.

From (3.6) and (3.9), with $M=max\left\{{M}_{2},\frac{{M}_{3}}{a}\right\}$, we obtain an estimate to which Lemma 2.5 applies. Hence, we have

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n+1}-{x}_{n}\parallel =0.$
(3.10)

Then, from (3.9) and (3.10), and $|{\beta }_{n+1}-{\beta }_{n}|\to 0$, we have

$\underset{n\to \mathrm{\infty }}{lim}\parallel {u}_{n+1}-{u}_{n}\parallel =0.$

For any $p\in U\cap EP\left(\varphi \right)$, as in the proof of Theorem 3.1, we have

${\parallel {u}_{n}-p\parallel }^{2}\le {\parallel {x}_{n}-p\parallel }^{2}-{\parallel {u}_{n}-{x}_{n}\parallel }^{2}.$
(3.11)

Then from (3.11), we derive that

$\begin{array}{rcl}{\parallel {x}_{n+1}-p\parallel }^{2}& =& {\parallel {\alpha }_{n}f\left({x}_{n}\right)-{\alpha }_{n}p+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}-\left(1-{\alpha }_{n}\right){T}_{n}p\parallel }^{2}\\ \le & {\alpha }_{n}^{2}{\parallel f\left({x}_{n}\right)-p\parallel }^{2}+2{\alpha }_{n}\left(1-{\alpha }_{n}\right)\parallel f\left({x}_{n}\right)-p\parallel \parallel {u}_{n}-p\parallel \\ +{\left(1-{\alpha }_{n}\right)}^{2}{\parallel {u}_{n}-p\parallel }^{2}\\ \le & {\alpha }_{n}\left({\parallel f\left({x}_{n}\right)-p\parallel }^{2}+2\parallel f\left({x}_{n}\right)-p\parallel \parallel {u}_{n}-p\parallel \right)+{\parallel {u}_{n}-p\parallel }^{2}\\ \le & {\parallel {x}_{n}-p\parallel }^{2}-{\parallel {u}_{n}-{x}_{n}\parallel }^{2}+{\alpha }_{n}\left({\parallel f\left({x}_{n}\right)-p\parallel }^{2}\\ +2\parallel f\left({x}_{n}\right)-p\parallel \parallel {u}_{n}-p\parallel \right).\end{array}$

Since ${\alpha }_{n}\to 0$ and $\parallel {x}_{n}-{x}_{n+1}\parallel \to 0$, we have

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n}-{u}_{n}\parallel =0.$

Next, we have

$\begin{array}{rl}\parallel {x}_{n}-{T}_{n}{x}_{n}\parallel & =\parallel {\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}-{T}_{n}{x}_{n}\parallel \\ \le {\alpha }_{n}\parallel f\left({x}_{n}\right)-{T}_{n}{u}_{n}\parallel +\left(1-{\alpha }_{n}\right)\parallel {u}_{n}-{x}_{n}\parallel .\end{array}$

Hence $\parallel {x}_{n}-{T}_{n}{x}_{n}\parallel \to 0$, and since $\parallel {u}_{n}-{T}_{n}{u}_{n}\parallel \le 2\parallel {u}_{n}-{x}_{n}\parallel +\parallel {x}_{n}-{T}_{n}{x}_{n}\parallel$, it follows that $\parallel {u}_{n}-{T}_{n}{u}_{n}\parallel \to 0$.

Now, we show that

$\underset{n\to \mathrm{\infty }}{lim sup}〈{x}_{n}-q,-\left(I-f\right)q〉\le 0,$

where $q={P}_{U\cap EP\left(\varphi \right)}f\left(q\right)$ is the unique solution of the variational inequality (3.1). Indeed, take a subsequence $\left\{{x}_{{n}_{k}}\right\}$ of $\left\{{x}_{n}\right\}$ such that

$\underset{n\to \mathrm{\infty }}{lim sup}〈{x}_{n}-q,-\left(I-f\right)q〉=\underset{k\to \mathrm{\infty }}{lim}〈{x}_{{n}_{k}}-q,-\left(I-f\right)q〉.$

Since $\left\{{x}_{n}\right\}$ is bounded, without loss of generality, we may assume that ${x}_{{n}_{k}}⇀\stackrel{˜}{x}$. By the same argument as in the proof of Theorem 3.1, we have $\stackrel{˜}{x}\in U\cap EP\left(\varphi \right)$.

Since $q={P}_{U\cap EP\left(\varphi \right)}f\left(q\right)$, it follows that

$\underset{n\to \mathrm{\infty }}{lim sup}〈\left(I-f\right)q,q-{x}_{n}〉=〈\left(I-f\right)q,q-\stackrel{˜}{x}〉\le 0.$
(3.12)

From

$\begin{array}{rl}{x}_{n+1}-q& ={\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}-q\\ ={\alpha }_{n}f\left({x}_{n}\right)-{\alpha }_{n}f\left(q\right)+{\alpha }_{n}f\left(q\right)-{\alpha }_{n}q+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}-\left(1-{\alpha }_{n}\right){T}_{n}q,\end{array}$

we have

$\begin{array}{rl}{\parallel {x}_{n+1}-q\parallel }^{2}& ={\parallel {\alpha }_{n}\left(f\left({x}_{n}\right)-f\left(q\right)\right)+{\alpha }_{n}\left(f\left(q\right)-q\right)+\left(1-{\alpha }_{n}\right)\left({T}_{n}{u}_{n}-{T}_{n}q\right)\parallel }^{2}\\ \le {\left(1-{\alpha }_{n}\right)}^{2}{\parallel {T}_{n}{u}_{n}-{T}_{n}q\parallel }^{2}+2{\alpha }_{n}〈f\left({x}_{n}\right)-f\left(q\right)-\left(I-f\right)q,{x}_{n+1}-q〉.\end{array}$

This implies that

$\begin{array}{rcl}{\parallel {x}_{n+1}-q\parallel }^{2}& \le & {\left(1-{\alpha }_{n}\right)}^{2}{\parallel {x}_{n}-q\parallel }^{2}+2{\alpha }_{n}\rho \parallel {x}_{n}-q\parallel \parallel {x}_{n+1}-q\parallel \\ +2{\alpha }_{n}〈-\left(I-f\right)q,{x}_{n+1}-q〉\\ \le & {\left(1-{\alpha }_{n}\right)}^{2}{\parallel {x}_{n}-q\parallel }^{2}+{\alpha }_{n}\rho \left({\parallel {x}_{n}-q\parallel }^{2}+{\parallel {x}_{n+1}-q\parallel }^{2}\right)\\ +2{\alpha }_{n}〈-\left(I-f\right)q,{x}_{n+1}-q〉.\end{array}$

Then, we have

$\begin{array}{rcl}{\parallel {x}_{n+1}-q\parallel }^{2}& \le & \frac{1-2{\alpha }_{n}+{\alpha }_{n}\rho }{1-{\alpha }_{n}\rho }{\parallel {x}_{n}-q\parallel }^{2}+\frac{{\alpha }_{n}^{2}}{1-{\alpha }_{n}\rho }{\parallel {x}_{n}-q\parallel }^{2}\\ +\frac{2{\alpha }_{n}}{1-{\alpha }_{n}\rho }〈-\left(I-f\right)q,{x}_{n+1}-q〉\\ \le & \left(1-2\left(1-\rho \right){\alpha }_{n}\right){\parallel {x}_{n}-q\parallel }^{2}+\frac{{\alpha }_{n}^{2}}{1-{\alpha }_{n}\rho }{\parallel {x}_{n}-q\parallel }^{2}\\ +\frac{2{\alpha }_{n}}{1-{\alpha }_{n}\rho }〈-\left(I-f\right)q,{x}_{n+1}-q〉\\ \le & \left(1-2\left(1-\rho \right){\alpha }_{n}\right){\parallel {x}_{n}-q\parallel }^{2}+2\left(1-\rho \right){\alpha }_{n}\left(\frac{{\alpha }_{n}}{2\left(1-\rho \right)\left(1-{\alpha }_{n}\rho \right)}{M}^{\ast }\\ +\frac{1}{\left(1-\rho \right)\left(1-{\alpha }_{n}\rho \right)}〈-\left(I-f\right)q,{x}_{n+1}-q〉\right)\\ =& \left(1-2\left(1-\rho \right){\alpha }_{n}\right){\parallel {x}_{n}-q\parallel }^{2}+2\left(1-\rho \right){\alpha }_{n}{\delta }_{n},\end{array}$

where ${M}^{\ast }=sup\left\{{\parallel {x}_{n}-q\parallel }^{2}:n\in \mathbb{N}\right\}$, and ${\delta }_{n}=\frac{{\alpha }_{n}}{2\left(1-\rho \right)\left(1-{\alpha }_{n}\rho \right)}{M}^{\ast }+\frac{1}{\left(1-\rho \right)\left(1-{\alpha }_{n}\rho \right)}〈-\left(I-f\right)q,{x}_{n+1}-q〉$.

It is easy to see that ${lim}_{n\to \mathrm{\infty }}2\left(1-\rho \right){\alpha }_{n}=0$, ${\sum }_{n=1}^{\mathrm{\infty }}2\left(1-\rho \right){\alpha }_{n}=\mathrm{\infty }$, and ${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}\le 0$ by (3.12). Hence, by Lemma 2.5, the sequence $\left\{{x}_{n}\right\}$ converges strongly to q. This completes the proof. □
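The explicit scheme just analyzed can be illustrated numerically. The following minimal sketch runs ${x}_{n+1}={\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){T}_{n}{u}_{n}$ in the special case $\varphi \equiv 0$, in which the resolvent ${Q}_{{\beta }_{n}}$ reduces to the metric projection ${P}_{C}$, so ${u}_{n}={P}_{C}{x}_{n}$. The objective g, the set C, and the contraction f below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Assumed toy problem: minimize g(x) = 0.5 * ||x - b||^2 over the box
# C = [-1, 1]^2, whose unique constrained minimizer is P_C(b). Here L = 1,
# so any step size lam in (0, 2) is admissible; f(x) = 0.5 x is a
# contraction with coefficient rho = 0.5.
b = np.array([2.0, -0.3])
lam = 0.5

def P_C(x):
    return np.clip(x, -1.0, 1.0)

def f(x):
    return 0.5 * x

x = np.zeros(2)
for n in range(1, 5001):
    alpha = 1.0 / (n + 1)          # alpha_n -> 0 and sum alpha_n = infinity
    u = P_C(x)                     # u_n = Q_{beta_n} x_n = P_C(x_n) since phi == 0
    Tu = P_C(u - lam * (u - b))    # gradient-projection step T_n u_n
    x = alpha * f(x) + (1 - alpha) * Tu

print(x)  # approaches the constrained minimizer P_C(b) = [1.0, -0.3]
```

With these choices the iterates approach ${P}_{C}\left(b\right)$ as ${\alpha }_{n}\to 0$, matching the strong convergence asserted by the theorem in this special case.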

## 4 Conclusions

Methods for solving the equilibrium problem and the constrained convex minimization problem have each been studied extensively in Hilbert spaces. To the best of our knowledge, however, this paper is the first to introduce implicit and explicit algorithms for finding a common element of the set of solutions of an equilibrium problem and the set of solutions of a constrained convex minimization problem; this common element also solves a certain variational inequality.

## References

1. Mann WR: Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4: 506–510. 10.1090/S0002-9939-1953-0054846-3

2. Moudafi A: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615

3. Moudafi A, Théra M: Proximal and dynamical approaches to equilibrium problems. In Ill-Posed Variational Problems and Regularization Techniques (Trier, 1998). Lecture Notes in Economics and Mathematical Systems 477. Springer, Berlin; 1999:187–201.

4. Ceng LC, Al-Homidan S, Ansari QH, Yao J-C: An iterative scheme for equilibrium problems and fixed point problems of strict pseudo-contraction mappings. J. Comput. Appl. Math. 2009, 223: 967–974. 10.1016/j.cam.2008.03.032

5. Takahashi S, Takahashi W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331: 506–515. 10.1016/j.jmaa.2006.08.036

6. Tian M: An application of hybrid steepest descent methods for equilibrium problems and strict pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2011., 2011: Article ID 173430. doi:10.1155/2011/173430

7. Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. North-Holland, Amsterdam; 2001:473–504.

8. Liu Y: A general iterative method for equilibrium problems and strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2009, 71: 4852–4861. 10.1016/j.na.2009.03.060

9. Plubtieng S, Punpaeng R: A general iterative method for equilibrium problems and fixed point problems in Hilbert space. J. Math. Anal. Appl. 2007, 336: 455–469. 10.1016/j.jmaa.2007.02.044

10. Jung JS: Strong convergence of iterative methods for k-strictly pseudo-contractive mappings in Hilbert spaces. Appl. Math. Comput. 2010, 215: 3746–3753. 10.1016/j.amc.2009.11.015

11. Jung JS: Strong convergence of composite iterative methods for equilibrium problems and fixed point problems. Appl. Math. Comput. 2009, 213: 498–505. 10.1016/j.amc.2009.03.048

12. Kumam P: A new hybrid iterative method for solution of equilibrium problems and fixed point problems for an inverse strongly monotone operator and a nonexpansive mapping. J. Appl. Math. Comput. 2009, 29: 263–280. 10.1007/s12190-008-0129-1

13. Kumam P: A hybrid approximation method for equilibrium and fixed point problems for a monotone mapping and a nonexpansive mapping. Nonlinear Anal. Hybrid Syst. 2008, 2(4):1245–1255. 10.1016/j.nahs.2008.09.017

14. Saewan S, Kumam P: The shrinking projection method for solving generalized equilibrium problem and common fixed points for asymptotically quasi-ϕ-nonexpansive mappings. Fixed Point Theory Appl. 2011. doi:10.1186/1687-1812-2011-9

15. Tada A, Takahashi W: Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem. J. Optim. Theory Appl. 2007, 133: 359–370. 10.1007/s10957-007-9187-z

16. Yao Y, Liou YC, Yao JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007., 2007: Article ID 64363

17. Su YF, Shang MJ, Qin XL: An iterative method of solution for equilibrium and optimization problems. Nonlinear Anal., Theory Methods Appl. 2008, 69: 2709–2719. 10.1016/j.na.2007.08.045

18. Plubtieng S, Punpaeng R: A new iterative method for equilibrium problems and fixed point problems of nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2008, 197: 548–558.

19. Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z

20. Xu HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116: 659–678. 10.1023/A:1023073621589

21. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332

22. Ceng LC, Ansari QH, Yao J-C: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005

23. Marino G, Xu HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. 10.1016/j.jmaa.2005.05.028

24. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059

25. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

26. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.

27. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

28. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.

29. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965

30. Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5

## Acknowledgements

The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported in part by The Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. ZXH2012K001), and by the Science Research Foundation of Civil Aviation University of China (No. 2012KYM03).

## Author information

### Corresponding author

Correspondence to Ming Tian.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All the authors read and approved the final manuscript.


Tian, M., Liu, L. Iterative algorithms based on the viscosity approximation method for equilibrium and constrained convex minimization problem. Fixed Point Theory Appl 2012, 201 (2012). https://doi.org/10.1186/1687-1812-2012-201


### Keywords

• iterative algorithm
• equilibrium problem
• constrained convex minimization
• variational inequality 