
A new iterative algorithm for solving common solutions of generalized mixed equilibrium problems, fixed point problems and variational inclusion problems with minimization problems

Abstract

In this article, we introduce a new general iterative method for finding a common element of the set of fixed points of nonexpansive mappings, the set of solutions of generalized mixed equilibrium problems and the set of solutions of the variational inclusion for a β-inverse-strongly monotone mapping in a real Hilbert space. We prove that the sequence converges strongly to a common element of the above three sets under some mild conditions. Our results improve and extend the corresponding results of Marino and Xu (J. Math. Anal. Appl. 318:43-52, 2006), Su et al. (Nonlinear Anal. 69:2709-2719, 2008), Tan and Chang (Fixed Point Theory Appl. 2011:915629, 2011) and several other authors.

MSC:46C05, 47H09, 47H10.

1 Introduction

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. A mapping $S:C\to C$ is said to be nonexpansive if $\|Sx-Sy\|\le\|x-y\|$ for all $x,y\in C$. If $C$ is a bounded closed convex set and $S$ is a nonexpansive mapping of $C$ into itself, then $F(S):=\{x\in C:Sx=x\}$ is nonempty [1]. A mapping $S:C\to C$ is said to be a $k$-strict pseudo-contraction [2] if there exists $0\le k<1$ such that $\|Sx-Sy\|^{2}\le\|x-y\|^{2}+k\|(I-S)x-(I-S)y\|^{2}$ for all $x,y\in C$, where $I$ denotes the identity operator on $C$. We denote weak convergence and strong convergence by $\rightharpoonup$ and $\to$, respectively. A mapping $A$ of $C$ into $H$ is called monotone if $\langle Ax-Ay,x-y\rangle\ge 0$ for all $x,y\in C$. A mapping $A$ is called $\alpha$-inverse-strongly monotone if there exists a positive real number $\alpha$ such that $\langle Ax-Ay,x-y\rangle\ge\alpha\|Ax-Ay\|^{2}$ for all $x,y\in C$, and $\alpha$-strongly monotone if there exists a positive real number $\alpha$ such that $\langle Ax-Ay,x-y\rangle\ge\alpha\|x-y\|^{2}$ for all $x,y\in C$. It is obvious that every $\alpha$-inverse-strongly monotone mapping $A$ is monotone and $\frac{1}{\alpha}$-Lipschitz continuous. A bounded linear operator $A$ is called strongly positive if there exists a constant $\bar{\gamma}>0$ such that $\langle Ax,x\rangle\ge\bar{\gamma}\|x\|^{2}$ for all $x\in H$. A self-mapping $f:C\to C$ is called a contraction on $C$ if there exists a constant $\alpha\in(0,1)$ such that $\|f(x)-f(y)\|\le\alpha\|x-y\|$ for all $x,y\in C$.
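The $\frac{1}{\alpha}$-Lipschitz continuity claimed above follows at once from the Cauchy-Schwarz inequality; we record the one-line verification here for completeness:

$\alpha\|Ax-Ay\|^{2}\le\langle Ax-Ay,x-y\rangle\le\|Ax-Ay\|\,\|x-y\|\quad\Longrightarrow\quad\|Ax-Ay\|\le\tfrac{1}{\alpha}\|x-y\|.$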

Let $B:H\to H$ be a single-valued nonlinear mapping and $M:H\to 2^{H}$ be a set-valued mapping. The variational inclusion problem is to find $x\in H$ such that

$\theta\in B(x)+M(x),$
(1.1)

where $\theta$ is the zero vector in $H$. The set of solutions of (1.1) is denoted by $I(B,M)$. The variational inclusion problem has been extensively studied in the literature; see, e.g., [3–10] and the references therein.

A set-valued mapping $M:H\to 2^{H}$ is called monotone if, for all $x,y\in H$, $f\in M(x)$ and $g\in M(y)$ imply $\langle x-y,f-g\rangle\ge 0$. A monotone mapping $M$ is maximal if its graph $G(M):=\{(f,x)\in H\times H:f\in M(x)\}$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $M$ is maximal if and only if, for $(x,f)\in H\times H$, $\langle x-y,f-g\rangle\ge 0$ for all $(y,g)\in G(M)$ implies $f\in M(x)$.

Let $B$ be an inverse-strongly monotone mapping of $C$ into $H$, let $N_Cv$ be the normal cone to $C$ at $v\in C$, i.e., $N_Cv=\{w\in H:\langle v-u,w\rangle\ge 0,\ \forall u\in C\}$, and define

$Mv=\begin{cases} Bv+N_Cv, & \text{if } v\in C,\\ \emptyset, & \text{if } v\notin C.\end{cases}$

Then $M$ is maximal monotone and $\theta\in Mv$ if and only if $v\in\mathrm{VI}(C,B)$ (see [11]).
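As a simple one-dimensional illustration (added here, not part of the original text), take $H=\mathbb{R}$ and $C=[0,1]$. Then

$N_C(v)=\begin{cases}(-\infty,0], & v=0,\\ \{0\}, & 0<v<1,\\ [0,\infty), & v=1,\end{cases}$

and $\theta\in Mv$ means $-Bv\in N_C(v)$, i.e., $\langle Bv,u-v\rangle\ge 0$ for all $u\in[0,1]$, which is exactly the statement $v\in\mathrm{VI}(C,B)$.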

Let $M:H\to 2^{H}$ be a set-valued maximal monotone mapping. Then the single-valued mapping $J_{M,\lambda}:H\to H$ defined by

$J_{M,\lambda}(x)=(I+\lambda M)^{-1}(x),\quad x\in H,$
(1.2)

is called the resolvent operator associated with $M$, where $\lambda$ is any positive number and $I$ is the identity mapping. It is worth mentioning that the resolvent operator is nonexpansive and 1-inverse-strongly monotone, and that a solution of problem (1.1) is a fixed point of the operator $J_{M,\lambda}(I-\lambda B)$ for all $\lambda>0$ (see [12]).
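For orientation (an example we add here, not taken from [12]), when $M=\partial g$ is the subdifferential of a proper convex lower semicontinuous function $g:H\to(-\infty,+\infty]$, the resolvent is the proximity operator of $g$, and for $M=N_C$ (equivalently, $g$ the indicator function of $C$) it is the metric projection:

$J_{\partial g,\lambda}(x)=\operatorname*{arg\,min}_{u\in H}\Bigl\{g(u)+\tfrac{1}{2\lambda}\|u-x\|^{2}\Bigr\},\qquad J_{N_C,\lambda}=P_C\quad\text{for every }\lambda>0.$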

Let $F$ be a bifunction of $C\times C$ into $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers, let $\Psi:C\to H$ be a mapping and let $\psi:C\to\mathbb{R}$ be a real-valued function. The generalized mixed equilibrium problem is to find $x\in C$ such that

$F(x,y)+\langle\Psi x,y-x\rangle+\psi(y)-\psi(x)\ge 0,\quad\forall y\in C.$
(1.3)

The set of solutions of (1.3) is denoted by $\mathrm{GMEP}(F,\psi,\Psi)$, that is,

$\mathrm{GMEP}(F,\psi,\Psi)=\bigl\{x\in C:F(x,y)+\langle\Psi x,y-x\rangle+\psi(y)-\psi(x)\ge 0,\ \forall y\in C\bigr\}.$

If $\Psi\equiv 0$ and $\psi\equiv 0$, then problem (1.3) reduces to the equilibrium problem (see also [13]) of finding $x\in C$ such that

$F(x,y)\ge 0,\quad\forall y\in C.$
(1.4)

The set of solutions of (1.4) is denoted by $\mathrm{EP}(F)$, that is,

$\mathrm{EP}(F)=\bigl\{x\in C:F(x,y)\ge 0,\ \forall y\in C\bigr\}.$

This problem contains fixed point problems and includes as special cases numerous problems in physics, optimization and economics. Some methods have been proposed to solve the equilibrium problem; see, e.g., [14–16].

If $F\equiv 0$ and $\psi\equiv 0$, then problem (1.3) reduces to the Hartman–Stampacchia variational inequality [17] of finding $x\in C$ such that

$\langle\Psi x,y-x\rangle\ge 0,\quad\forall y\in C.$
(1.5)

The set of solutions of (1.5) is denoted by VI(C,Ψ). The variational inequality has been extensively studied in the literature [18].

If $F\equiv 0$ and $\Psi\equiv 0$, then problem (1.3) reduces to the minimization problem of finding $x\in C$ such that

$\psi(y)-\psi(x)\ge 0,\quad\forall y\in C.$
(1.6)

The set of solutions of (1.6) is denoted by $\operatorname{Argmin}(\psi)$. Iterative methods for nonexpansive mappings have recently been applied to solve convex minimization problems, which have a great impact and influence on the development of almost all branches of pure and applied sciences. A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping on a real Hilbert space $H$:

$\theta(x)=\frac{1}{2}\langle Ax,x\rangle-\langle x,y\rangle,\quad x\in F(S),$
(1.7)

where $A$ is a bounded linear operator, $F(S)$ is the fixed point set of a nonexpansive mapping $S$ and $y$ is a given point in $H$ [19].

In 2000, Moudafi [20] introduced the viscosity approximation method for nonexpansive mappings and proved that if $H$ is a real Hilbert space, then the sequence $\{x_n\}$ defined by the iterative method below, with the initial guess $x_0\in C$ chosen arbitrarily,

$x_{n+1}=\alpha_nf(x_n)+(1-\alpha_n)Sx_n,\quad n\ge 0,$
(1.8)

where $\{\alpha_n\}\subset(0,1)$ satisfies certain conditions, converges strongly to a fixed point of $S$ (say $\bar{x}\in C$), which is the unique solution of the following variational inequality:

$\langle(I-f)\bar{x},x-\bar{x}\rangle\ge 0,\quad x\in F(S).$
(1.9)

In 2005, Iiduka and Takahashi [21] introduced the following iterative process: for $x_0\in C$,

$x_{n+1}=\alpha_nu+(1-\alpha_n)SP_C(x_n-\lambda_nAx_n),\quad n\ge 0,$
(1.10)

where $u\in C$, $\{\alpha_n\}\subset(0,1)$ and $\{\lambda_n\}\subset[a,b]$ for some $a,b$ with $0<a<b<2\beta$. They proved that under certain appropriate conditions imposed on $\{\alpha_n\}$ and $\{\lambda_n\}$, the sequence $\{x_n\}$ generated by (1.10) converges strongly to a common element of the set of fixed points of the nonexpansive mapping and the set of solutions of the variational inequality for an inverse-strongly monotone mapping (say $\bar{x}\in C$), which solves the variational inequality

$\langle\bar{x}-u,x-\bar{x}\rangle\ge 0,\quad x\in F(S).$
(1.11)

In 2006, Marino and Xu [19] introduced a general iterative method for nonexpansive mappings. They defined the sequence $\{x_n\}$ generated by the algorithm, for $x_0\in C$,

$x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_nA)Sx_n,\quad n\ge 0,$
(1.12)

where $\{\alpha_n\}\subset(0,1)$ and $A$ is a strongly positive bounded linear operator. They proved that if $C=H$ and the sequence $\{\alpha_n\}$ satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.12) converges strongly to a fixed point of $S$ (say $\bar{x}\in H$), which is the unique solution of the following variational inequality:

$\langle(A-\gamma f)\bar{x},x-\bar{x}\rangle\ge 0,\quad x\in F(S),$
(1.13)

which is the optimality condition for the minimization problem

$\min_{x\in F(S)}\frac{1}{2}\langle Ax,x\rangle-h(x),$
(1.14)

where $h$ is a potential function for $\gamma f$ (i.e., $h'(x)=\gamma f(x)$ for $x\in H$).
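As a quick illustration (ours, not from [19]), if $f(x)\equiv u$ is a constant mapping, then $h(x)=\gamma\langle u,x\rangle$ is a potential function for $\gamma f$, and (1.14) becomes the quadratic problem (1.7) with $y=\gamma u$:

$\min_{x\in F(S)}\ \tfrac{1}{2}\langle Ax,x\rangle-\gamma\langle u,x\rangle,\qquad h'(x)=\gamma u=\gamma f(x).$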

In 2008, Su et al. [22] introduced the following iterative scheme by the viscosity approximation method in a real Hilbert space: $x_1,u_n\in H$ and

$\begin{cases} F(u_n,y)+\frac{1}{r_n}\langle y-u_n,u_n-x_n\rangle\ge 0,\quad\forall y\in C,\\ x_{n+1}=\alpha_nf(x_n)+(1-\alpha_n)SP_C(u_n-\lambda_nAu_n), \end{cases}$
(1.15)

for all $n\in\mathbb{N}$, where $\{\alpha_n\}\subset[0,1)$ and $\{r_n\}\subset(0,\infty)$ satisfy some appropriate conditions. Furthermore, they proved that $\{x_n\}$ and $\{u_n\}$ converge strongly to the same point $z\in F(S)\cap\mathrm{VI}(C,A)\cap\mathrm{EP}(F)$, where $z=P_{F(S)\cap\mathrm{VI}(C,A)\cap\mathrm{EP}(F)}f(z)$.

In 2011, Tan and Chang [10] introduced the following iterative process, where $\{T_n:C\to C\}$ is a sequence of nonexpansive mappings. Let $\{x_n\}$ be the sequence defined by

$x_{n+1}=\alpha_nx_n+(1-\alpha_n)\bigl(SP_C\bigl((1-t_n)J_{M,\lambda}(I-\lambda A)T_n(I-\mu B)\bigr)x_n\bigr),\quad n\ge 0,$
(1.16)

where $\{\alpha_n\}\subset(0,1)$, $\lambda\in(0,2\alpha]$ and $\mu\in(0,2\beta]$. The sequence $\{x_n\}$ defined by (1.16) converges strongly to a common element of the set of fixed points of nonexpansive mappings, the set of solutions of the variational inequality and the generalized equilibrium problem.

In this article, we mix and modify the iterative methods (1.12), (1.15) and (1.16) by proposing the following new general viscosity iterative method: $x_0,u_n\in C$ and

$\begin{cases} u_n=T_{r_n}^{(F_1,\psi_1)}(x_n-r_nB_1x_n),\\ v_n=T_{s_n}^{(F_2,\psi_2)}(x_n-s_nB_2x_n),\\ x_{n+1}=\xi_nP_C[\alpha_n\gamma f(x_n)+(I-\alpha_nA)SJ_{M,\lambda}(I-\lambda B)u_n]+(1-\xi_n)v_n,\quad n\ge 0, \end{cases}$

where $\{\alpha_n\},\{\xi_n\}\subset(0,1)$, $\lambda\in(0,2\beta)$ such that $0<a\le\lambda\le b<2\beta$, $\{r_n\}\subset(0,2\eta)$ with $0<c\le d\le\frac{1}{\eta}$ and $\{s_n\}\subset(0,2\rho)$ with $0<e\le f\le\frac{1}{\rho}$ satisfy some appropriate conditions. We show that, under suitable control conditions, the sequence $\{x_n\}$ converges strongly to a common element of the set of fixed points of nonexpansive mappings, the set of common solutions of the generalized mixed equilibrium problems and the set of solutions of the variational inclusion in a real Hilbert space.
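To make the scheme concrete, the following minimal numerical sketch (not part of the original paper) runs the iteration in the simplified setting of Corollary 3.2 below ($A=I$, $\gamma=1$) with data chosen so that every abstract operator has a closed form; the point $b$, the box $C$, the mappings $f$, $S$ and all step sizes are illustrative assumptions only.

```python
import numpy as np

# Sketch of scheme (3.1) with A = I, gamma = 1 (Corollary 3.2) and closed-form operators:
#   C = [-1,1] x [-1,1], so P_C is componentwise clipping
#   F_1 = F_2 = 0, psi_1 = psi_2 = 0, B_1 = B_2 = 0  =>  T_{r_n}, T_{s_n} reduce to P_C
#   B(x) = x - b, which is 1-inverse-strongly monotone (beta = 1)
#   M = N_C (normal cone of C), so J_{M,lambda} = P_C for every lambda > 0
#   S = P_C (nonexpansive), f(x) = x/2 (contraction with coefficient 1/2)
# Here Theta = F(S) ∩ GMEP ∩ I(B,M) = C ∩ C ∩ {b} = {b}, so x_n should approach b.

b = np.array([0.3, -0.4])

def P_C(x):                      # metric projection onto the box C
    return np.clip(x, -1.0, 1.0)

def B(x):                        # beta-inverse-strongly monotone with beta = 1
    return x - b

f = lambda x: 0.5 * x            # contraction
S = P_C                          # a simple nonexpansive self-mapping
lam = 1.0                        # lambda in (0, 2*beta)

x = np.array([5.0, 5.0])         # x_0 chosen arbitrarily
for n in range(200):
    alpha_n, xi_n = 1.0 / (n + 2), 0.5            # satisfy (C1) and (C2)
    u = P_C(x)                                    # u_n = T_{r_n}^{(F_1,psi_1)}(x_n - r_n B_1 x_n)
    v = P_C(x)                                    # v_n, by the same simplification
    y = P_C(u - lam * B(u))                       # y_n = J_{M,lambda}(I - lambda B) u_n
    z = P_C(alpha_n * f(x) + (1.0 - alpha_n) * S(y))
    x = xi_n * z + (1.0 - xi_n) * v               # x_{n+1}

print(x)                         # approaches [0.3, -0.4], the unique point of Theta
```

Running this prints a vector numerically close to $b=(0.3,-0.4)$, the unique element of $\Theta$ in this toy setting, which is consistent with the strong convergence asserted in Theorem 3.1.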

2 Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. Recall that the metric (nearest point) projection $P_C$ from $H$ onto $C$ assigns to each $x\in H$ the unique point $P_Cx\in C$ satisfying the property

$\|x-P_Cx\|=\min_{y\in C}\|x-y\|.$

The following characterizes the projection P C . We recall some lemmas which will be needed in the rest of this article.

Lemma 2.1 The function $u\in C$ is a solution of the variational inequality (1.5) if and only if $u\in C$ satisfies the relation $u=P_C(u-\lambda\Psi u)$ for all $\lambda>0$.

Lemma 2.2 For a given $z\in H$ and $u\in C$, $u=P_Cz$ if and only if $\langle u-z,v-u\rangle\ge 0$ for all $v\in C$.

It is well known that $P_C$ is a firmly nonexpansive mapping of $H$ onto $C$ and satisfies

$\|P_Cx-P_Cy\|^{2}\le\langle P_Cx-P_Cy,x-y\rangle,\quad\forall x,y\in H.$
(2.1)

Moreover, $P_Cx$ is characterized by the following property: $P_Cx\in C$ and for all $x\in H$, $y\in C$,

$\langle x-P_Cx,y-P_Cx\rangle\le 0.$
(2.2)

Lemma 2.3 ([23])

Let $M:H\to 2^{H}$ be a maximal monotone mapping and let $B:H\to H$ be a monotone and Lipschitz continuous mapping. Then the mapping $L=M+B:H\to 2^{H}$ is a maximal monotone mapping.

Lemma 2.4 ([24])

Each Hilbert space $H$ satisfies Opial's condition, that is, for any sequence $\{x_n\}\subset H$ with $x_n\rightharpoonup x$, the inequality $\liminf_{n\to\infty}\|x_n-x\|<\liminf_{n\to\infty}\|x_n-y\|$ holds for each $y\in H$ with $y\ne x$.

Lemma 2.5 ([25])

Assume { a n } is a sequence of nonnegative real numbers such that

a n + 1 (1 γ n ) a n + δ n ,n0,

where { γ n }(0,1) and { δ n } is a sequence in R such that

  1. (i)

    n = 1 γ n =;

  2. (ii)

    lim sup n δ n γ n 0 or n = 1 | δ n |<.

Then lim n a n =0.
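For instance (an illustration we add here, not part of [25]), the choices $\gamma_n=\frac{1}{n+2}$ and $\delta_n=\frac{1}{(n+2)^{2}}$ satisfy (i) and the second alternative of (ii), since

$\sum_{n=1}^{\infty}\frac{1}{n+2}=\infty\qquad\text{and}\qquad\sum_{n=1}^{\infty}\frac{1}{(n+2)^{2}}<\infty,$

so every nonnegative sequence obeying $a_{n+1}\le(1-\gamma_n)a_n+\delta_n$ with these parameters converges to zero.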

Lemma 2.6 ([26])

Let C be a closed convex subset of a real Hilbert space H and letT:CCbe a nonexpansive mapping. ThenITis demiclosed at zero, that is,

x n x, x n T x n 0

impliesx=Tx.

For solving the generalized mixed equilibrium problem, we assume that the bifunction $F:C\times C\to\mathbb{R}$, the nonlinear mapping $\Psi:C\to H$ (continuous and monotone) and $\psi:C\to\mathbb{R}$ satisfy the following conditions:

(A1) $F(x,x)=0$ for all $x\in C$;

(A2) $F$ is monotone, i.e., $F(x,y)+F(y,x)\le 0$ for any $x,y\in C$;

(A3) for each fixed $y\in C$, $x\mapsto F(x,y)$ is weakly upper semicontinuous;

(A4) for each fixed $x\in C$, $y\mapsto F(x,y)$ is convex and lower semicontinuous;

(B1) for each $x\in C$ and $r>0$, there exist a bounded subset $D_x\subseteq C$ and $y_x\in C$ such that for any $z\in C\setminus D_x$,

$F(z,y_x)+\psi(y_x)-\psi(z)+\frac{1}{r}\langle y_x-z,z-x\rangle<0;$
(2.3)

(B2) C is a bounded set.

Lemma 2.7 ([27])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F:C\times C\to\mathbb{R}$ be a bifunction satisfying (A1)-(A4) and let $\psi:C\to\mathbb{R}$ be convex and lower semicontinuous such that $C\cap\operatorname{dom}\psi\ne\emptyset$. Assume that either (B1) or (B2) holds. For $r>0$ and $x\in H$, there exists $u\in C$ such that

$F(u,y)+\psi(y)-\psi(u)+\frac{1}{r}\langle y-u,u-x\rangle\ge 0,\quad\forall y\in C.$

Define a mapping $T_r^{(F,\psi)}:H\to C$ as follows:

$T_r^{(F,\psi)}(x)=\Bigl\{u\in C:F(u,y)+\psi(y)-\psi(u)+\frac{1}{r}\langle y-u,u-x\rangle\ge 0,\ \forall y\in C\Bigr\}$
(2.4)

for allxH. Then, the following hold:

  1. (i)

    T r ( F , ψ ) is single-valued;

  2. (ii)

    T r ( F , ψ ) is firmly nonexpansive, i.e., for any x,yH,

$\|T_r^{(F,\psi)}x-T_r^{(F,\psi)}y\|^{2}\le\langle T_r^{(F,\psi)}x-T_r^{(F,\psi)}y,\,x-y\rangle;$
  3. (iii)

    F( T r ( F , ψ ) )=MEP(F,ψ);

  4. (iv)

    MEP(F,ψ) is closed and convex.
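In the special case $F\equiv 0$ and $\psi\equiv 0$ (an observation we record here; it also underlies the numerical sketch in Section 1), the defining inequality in (2.4) reduces to $\langle y-u,u-x\rangle\ge 0$ for all $y\in C$, so by Lemma 2.2 the resolvent is just the metric projection:

$T_r^{(0,0)}(x)=P_C(x)\quad\text{for every }r>0\text{ and }x\in H.$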

Lemma 2.8 ([19])

Assume $A$ is a strongly positive bounded linear operator on a Hilbert space $H$ with coefficient $\bar{\gamma}>0$ and $0<\rho\le\|A\|^{-1}$. Then $\|I-\rho A\|\le 1-\rho\bar{\gamma}$.

Lemma 2.9 ([28])

Let H be a real Hilbert space andA:HHa mapping.

  1. (i)

If $A$ is $\delta$-strongly monotone and $\mu$-strictly pseudo-contractive with $\delta+\mu>1$, then $I-A$ is a contraction with constant $\sqrt{(1-\delta)/\mu}$.

  2. (ii)

If $A$ is $\delta$-strongly monotone and $\mu$-strictly pseudo-contractive with $\delta+\mu>1$, then for any fixed number $\tau\in(0,1)$, $I-\tau A$ is a contraction with constant $1-\tau\bigl(1-\sqrt{(1-\delta)/\mu}\bigr)$.

3 Strong convergence theorems

In this section, we prove a strong convergence theorem for finding a common element of $F(S)$, $\mathrm{GMEP}(F_1,\psi_1,B_1)$, $\mathrm{GMEP}(F_2,\psi_2,B_2)$ and $I(B,M)$ for inverse-strongly monotone mappings in a real Hilbert space.

Theorem 3.1 Let $H$ be a real Hilbert space and $C$ a closed convex subset of $H$. Let $F_1,F_2$ be two bifunctions of $C\times C$ into $\mathbb{R}$ satisfying (A1)-(A4), let $B,B_1,B_2:C\to H$ be $\beta$-, $\eta$-, $\rho$-inverse-strongly monotone mappings, respectively, let $\psi_1,\psi_2:C\to\mathbb{R}$ be convex and lower semicontinuous functions, let $f:C\to C$ be a contraction with coefficient $\alpha$ ($0<\alpha<1$), let $M:H\to 2^{H}$ be a maximal monotone mapping, let $A$ be a $\delta$-strongly monotone and $\mu$-strictly pseudo-contractive mapping with $\delta+\mu>1$, and let $\gamma$ be a positive real number such that $\gamma<\frac{1}{\alpha}\bigl(1-\sqrt{\frac{1-\delta}{\mu}}\bigr)$. Assume that either (B1) or (B2) holds. Let $S$ be a nonexpansive mapping of $H$ into itself such that

Θ:=F(S)GMEP( F 1 , ψ 1 , B 1 )GMEP( F 2 , ψ 2 , B 2 )I(B,M).

Suppose{ x n }is a sequence generated by the following algorithm x 0 Carbitrarily:

$\begin{cases} u_n=T_{r_n}^{(F_1,\psi_1)}(x_n-r_nB_1x_n),\\ v_n=T_{s_n}^{(F_2,\psi_2)}(x_n-s_nB_2x_n),\\ x_{n+1}=\xi_nP_C[\alpha_n\gamma f(x_n)+(I-\alpha_nA)SJ_{M,\lambda}(I-\lambda B)u_n]+(1-\xi_n)v_n,\quad n\ge 0, \end{cases}$
(3.1)

where $\{\alpha_n\},\{\xi_n\}\subset(0,1)$, $\lambda\in(0,2\beta)$ such that $0<a\le\lambda\le b<2\beta$, $\{r_n\}\subset(0,2\eta)$ with $0<c\le d\le\frac{1}{\eta}$ and $\{s_n\}\subset(0,2\rho)$ with $0<e\le f\le\frac{1}{\rho}$ satisfy the following conditions:

(C1): $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^{\infty}\alpha_n=\infty$, $\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$;

(C2): $0<\liminf_{n\to\infty}\xi_n\le\limsup_{n\to\infty}\xi_n<1$, $\sum_{n=1}^{\infty}|\xi_{n+1}-\xi_n|<\infty$;

(C3): $\liminf_{n\to\infty}r_n>0$ and $\lim_{n\to\infty}|r_{n+1}-r_n|=0$;

(C4): $\liminf_{n\to\infty}s_n>0$ and $\lim_{n\to\infty}|s_{n+1}-s_n|=0$.

Then $\{x_n\}$ converges strongly to $q\in\Theta$, where $q=P_\Theta(\gamma f+I-A)(q)$, which solves the following variational inequality:

$\langle(\gamma f-A)q,p-q\rangle\le 0,\quad\forall p\in\Theta,$

which is the optimality condition for the minimization problem

$\min_{q\in\Theta}\frac{1}{2}\langle Aq,q\rangle-h(q),$
(3.2)

where h is a potential function for γf (i.e., h (q)=γf(q)forqH).

Proof Since $B$ is a $\beta$-inverse-strongly monotone mapping, we have

( I λ B ) x ( I λ B ) y 2 = ( x y ) λ ( B x B y ) 2 = x y 2 2 λ x y , B x B y + λ 2 B x B y 2 x y 2 + λ ( λ 2 β ) B x B y 2 x y 2 .
(3.3)

Since $B_1$ and $B_2$ are $\eta$- and $\rho$-inverse-strongly monotone mappings, respectively, we have

( I r n B 1 ) x ( I r n B 1 ) y 2 = ( x y ) r n ( B 1 x B 1 y ) 2 = x y 2 2 r n x y , B 1 x B 1 y + r n 2 B 1 x B 1 y 2 x y 2 + r n ( r n 2 η ) B 1 x B 1 y 2 x y 2 .
(3.4)

Similarly, we obtain

( I s n B 2 ) x ( I s n B 2 ) y 2 x y 2 .
(3.5)

It is clear that if $0<\lambda<2\beta$, $0<r_n<2\eta$ and $0<s_n<2\rho$, then $I-\lambda B$, $I-r_nB_1$ and $I-s_nB_2$ are all nonexpansive. We divide the proof into six steps.

Step 1. We show that $\{x_n\}$ is bounded. Put $y_n=J_{M,\lambda}(u_n-\lambda Bu_n)$, $n\ge 0$. It follows that

y n q = J M , λ ( u n λ B u n ) J M , λ ( q λ B q ) u n q .
(3.6)

By Lemma 2.7, we have u n = T r n ( F 1 , ψ 1 ) ( x n r n B 1 x n ) for all n0. Then, we note that

u n q 2 = T r n ( F 1 , ψ 1 ) ( x n r n B 1 x n ) T r n ( F 1 , ψ 1 ) ( q r n B 1 q ) 2 ( x n r n B 1 x n ) ( q r n B 1 q ) 2 x n q 2 + r n ( r n 2 η ) B 1 x n B 1 q 2 x n q 2 .
(3.7)

Similarly, we obtain

v n q 2 = T s n ( F 2 , ψ 2 ) ( x n s n B 2 x n ) T s n ( F 2 , ψ 2 ) ( q s n B 2 q ) 2 ( x n s n B 2 x n ) ( q s n B 2 q ) 2 x n q 2 + s n ( s n 2 ρ ) B 2 x n B 2 q 2 x n q 2 .
(3.8)

Put z n = P C [ α n γf( x n )+(I α n A)S y n ] for all n0. From (3.1) and Lemma 2.9(ii), we deduce that

(3.9)

It follows from induction that

$\|x_n-q\|\le\max\Bigl\{\|x_0-q\|,\ \frac{\|\gamma f(q)-Aq\|}{1-\sqrt{\frac{1-\delta}{\mu}}-\gamma\alpha}\Bigr\},\quad n\ge 0.$

Therefore { x n } is bounded, so are { v n }, { y n }, { z n }, {S y n }, {f( x n )} and {AS y n }.

Step 2. We claim that lim n x n + 2 x n + 1 =0. From (3.1), we have

x n + 2 x n + 1 = ξ n + 1 z n + 1 + ( 1 ξ n + 1 ) v n + 1 ξ n z n ( 1 ξ n ) v n = ξ n + 1 ( z n + 1 z n ) + ( ξ n + 1 ξ n ) z n + ( 1 ξ n + 1 ) ( v n + 1 v n ) + ( ξ n + 1 ξ n ) v n ξ n + 1 z n + 1 z n + ( 1 ξ n + 1 ) v n + 1 v n + | ξ n + 1 ξ n | ( z n + v n ) .
(3.10)

We will estimate v n + 1 v n . On the other hand, from v n 1 = T s n 1 ( F 2 , ψ 2 ) ( x n 1 s n 1 B 2 x n 1 ) and v n = T s n ( F 2 , ψ 2 ) ( x n s n B 2 x n ), it follows that

$F_2(v_{n-1},y)+\langle B_2x_{n-1},y-v_{n-1}\rangle+\psi_2(y)-\psi_2(v_{n-1})+\frac{1}{s_{n-1}}\langle y-v_{n-1},v_{n-1}-x_{n-1}\rangle\ge 0,\quad\forall y\in C,$
(3.11)

and

$F_2(v_n,y)+\langle B_2x_n,y-v_n\rangle+\psi_2(y)-\psi_2(v_n)+\frac{1}{s_n}\langle y-v_n,v_n-x_n\rangle\ge 0,\quad\forall y\in C.$
(3.12)

Substituting y= v n in (3.11) and y= v n 1 in (3.12), we get

F 2 ( v n 1 , v n )+ B 2 x n 1 , v n v n 1 + ψ 2 ( v n ) ψ 2 ( v n 1 )+ 1 s n 1 v n v n 1 , v n 1 x n 1 0

and

F 2 ( v n , v n 1 )+ B 2 x n , v n 1 v n + ψ 2 ( v n 1 ) ψ 2 ( v n )+ 1 s n v n 1 v n , v n x n 0.

From (A2), we obtain

v n v n 1 , B 2 x n 1 B 2 x n + v n 1 x n 1 s n 1 v n x n s n 0,

and then

v n v n 1 , s n 1 ( B 2 x n 1 B 2 x n ) + v n 1 x n 1 s n 1 s n ( v n x n ) 0,

so

v n v n 1 , s n 1 B 2 x n 1 s n 1 B 2 x n + v n 1 v n + v n x n 1 s n 1 s n ( v n x n ) 0.

It follows that

Without loss of generality, let us assume that there exists a real number e such that s n 1 >e>0, for all nN. Then, we have

v n v n 1 2 v n v n 1 , x n x n 1 + ( 1 s n 1 s n ) ( v n x n ) v n v n 1 { x n x n 1 + | 1 s n 1 s n | v n x n }

and hence

v n v n 1 x n x n 1 + 1 s n | s n s n 1 | v n x n x n x n 1 + M 1 e | s n s n 1 | ,
(3.13)

where $M_1=\sup\{\|v_n-x_n\|:n\in\mathbb{N}\}$. Substituting (3.13) into (3.10), we obtain

x n + 2 x n + 1 ξ n + 1 z n + 1 z n + ( 1 ξ n + 1 ) { x n + 1 x n + M 1 e | s n s n 1 | } + | ξ n + 1 ξ n | ( z n + v n ) .
(3.14)

We note that

z n + 1 z n = P C [ α n + 1 γ f ( x n + 1 ) + ( I α n + 1 A ) S y n + 1 ] P C [ α n γ f ( x n ) ( I α n A ) S y n ] α n + 1 γ f ( x n + 1 ) + ( I α n + 1 A ) S y n + 1 ( α n γ f ( x n ) ( I α n A ) S y n ) α n + 1 γ ( f ( x n + 1 ) f ( x n ) ) + ( α n + 1 α n ) γ f ( x n ) + ( I α n + 1 A ) ( S y n + 1 S y n ) + ( α n α n + 1 ) A S y n α n + 1 γ α x n + 1 x n + | α n + 1 α n | γ f ( x n ) + ( 1 α n + 1 ( 1 1 δ μ ) ) y n + 1 y n + | α n + 1 α n | A S y n α n + 1 γ α x n + 1 x n + | α n + 1 α n | ( γ f ( x n ) + A S y n ) + ( 1 α n + 1 ( 1 1 δ μ ) ) y n + 1 y n .
(3.15)

Since $I-\lambda B$ is nonexpansive, we have

y n + 1 y n = J M , λ ( u n + 1 λ B u n + 1 ) J M , λ ( u n λ B u n ) ( u n + 1 λ B u n + 1 ) ( u n λ B u n ) u n + 1 u n .
(3.16)

On the other hand, from u n 1 = T r n 1 ( F 1 , ψ 1 ) ( x n 1 r n 1 B 1 x n 1 ) and u n = T r n ( F 1 , ψ 1 ) ( x n r n B 1 x n ), it follows that

$F_1(u_{n-1},y)+\langle B_1x_{n-1},y-u_{n-1}\rangle+\psi_1(y)-\psi_1(u_{n-1})+\frac{1}{r_{n-1}}\langle y-u_{n-1},u_{n-1}-x_{n-1}\rangle\ge 0,\quad\forall y\in C,$
(3.17)

and

$F_1(u_n,y)+\langle B_1x_n,y-u_n\rangle+\psi_1(y)-\psi_1(u_n)+\frac{1}{r_n}\langle y-u_n,u_n-x_n\rangle\ge 0,\quad\forall y\in C.$
(3.18)

Substituting y= u n in (3.17) and y= u n 1 in (3.18), we get

F 1 ( u n 1 , u n )+ B 1 x n 1 , u n u n 1 + ψ 1 ( u n ) ψ 1 ( u n 1 )+ 1 r n 1 u n u n 1 , u n 1 x n 1 0

and

F 1 ( u n , u n 1 )+ B 1 x n , u n 1 u n + ψ 1 ( u n 1 ) ψ 1 ( u n )+ 1 r n u n 1 u n , u n x n 0.

From (A2), we obtain

u n u n 1 , B 1 x n 1 B 1 x n + u n 1 x n 1 r n 1 u n x n r n 0,

and then

u n u n 1 , r n 1 ( B 1 x n 1 B 1 x n ) + u n 1 x n 1 r n 1 r n ( u n x n ) 0,

so

u n u n 1 , r n 1 B 1 x n 1 r n 1 B 1 x n + u n 1 u n + u n x n 1 r n 1 r n ( u n x n ) 0.

It follows that

Without loss of generality, let us assume that there exists a real number c such that r n 1 >c>0, for all nN. Then, we have

u n u n 1 2 u n u n 1 , x n x n 1 + ( 1 r n 1 r n ) ( u n x n ) u n u n 1 { x n x n 1 + | 1 r n 1 r n | u n x n }

and hence

u n u n 1 x n x n 1 + 1 r n | r n r n 1 | u n x n x n x n 1 + M 2 c | r n r n 1 | ,
(3.19)

where M 2 =sup{ u n x n :nN}. Substituting (3.19) into (3.16), we have

y n y n 1 x n x n 1 + M 2 c | r n r n 1 |.
(3.20)

Substituting (3.20) into (3.15), we obtain that

z n + 1 z n α n + 1 γ α x n + 1 x n + | α n + 1 α n | ( γ f ( x n ) + A S y n ) + ( 1 α n + 1 ( 1 1 δ μ ) ) { x n x n 1 + M 2 c | r n r n 1 | } .
(3.21)

And substituting (3.13), (3.21) into (3.10), we get

x n + 2 x n + 1 ξ n + 1 { α n + 1 γ α x n + 1 x n + | α n + 1 α n | ( γ f ( x n ) + A S y n ) + ( 1 α n + 1 ( 1 1 δ μ ) ) x n x n 1 + M 2 c | r n r n 1 | } + ( 1 ξ n + 1 ) { x n x n 1 + M 1 e | s n s n 1 | } + | ξ n + 1 ξ n | ( z n + v n ) ( 1 ( ( 1 1 δ μ ) γ α ) ξ n + 1 α n + 1 ) x n + 1 x n + ( | α n + 1 α n | + | ξ n + 1 ξ n | ) M 3 + M 1 e | s n s n 1 | + M 2 c | r n r n 1 | ,
(3.22)

where M 3 >0 is a constant satisfying

sup n { γ f ( x n ) + A S y n , z n + v n } M 3 .

This, together with (C1)-(C4) and Lemma 2.5, implies that

lim n x n + 2 x n + 1 =0.
(3.23)

From (3.20), we also have y n + 1 y n 0 as n.

Step 3. We show the following:

  1. (i)

    lim n B u n Bq=0;

  2. (ii)

    lim n B 1 x n B 1 q=0;

  3. (iii)

    lim n B 2 x n B 2 q=0.

For $q\in\Theta$, we have $q=J_{M,\lambda}(q-\lambda Bq)$, and then we get

y n q 2 = J M , λ ( u n λ B u n ) J M , λ ( q λ B q ) 2 ( u n λ B u n ) ( q λ B q ) 2 u n q 2 + λ ( λ 2 β ) B u n B q 2 x n q 2 + λ ( λ 2 β ) B u n B q 2 .
(3.24)

It follows that

z n q 2 = P C ( α n γ f ( x n ) + ( I α n A ) S y n ) P C ( q ) 2 α n ( γ f ( x n ) A q ) + ( I α n A ) ( S y n q ) 2 α n γ f ( x n ) A q 2 + ( 1 α n ( 1 1 δ μ ) ) y n q 2 + 2 α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q α n γ f ( x n ) A q 2 + 2 α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q + ( 1 α n ( 1 1 δ μ ) ) { x n q 2 + λ ( λ 2 β ) B u n B q 2 } α n γ f ( x n ) A q 2 + 2 α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q + x n q 2 + ( 1 α n ( 1 1 δ μ ) ) λ ( λ 2 β ) B u n B q 2 .
(3.25)

By the convexity of the norm , we have

x n + 1 q 2 = ξ n z n + ( 1 ξ n ) v n q 2 ξ n ( z n q ) + ( 1 ξ n ) ( v n q ) 2 ξ n z n q 2 + ( 1 ξ n ) v n q 2 .
(3.26)

Substituting (3.8), (3.25) into (3.26), we obtain

So, we obtain

where $\epsilon_n=2\xi_n\alpha_n\bigl(1-\alpha_n\bigl(1-\sqrt{\tfrac{1-\delta}{\mu}}\bigr)\bigr)\|\gamma f(x_n)-Aq\|\,\|y_n-q\|$. By conditions (C1)-(C3) and $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$, we obtain $\|Bu_n-Bq\|\to 0$ as $n\to\infty$. From (3.25), we also have

z n q 2 α n γ f ( x n ) A q 2 + ( 1 α n ( 1 1 δ μ ) ) y n q 2 + 2 α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q .
(3.27)

Substituting (3.6) and (3.8) into (3.27), we have

z n q 2 α n γ f ( x n ) A q 2 + ( 1 α n ( 1 1 δ μ ) ) × { x n q 2 + r n ( r n 2 η ) B 1 x n B 1 q 2 } + 2 α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q = α n γ f ( x n ) A q 2 + ( 1 α n ( 1 1 δ μ ) ) x n q 2 + ( 1 α n ( 1 1 δ μ ) ) r n ( r n 2 η ) B 1 x n B 1 q 2 + 2 α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q α n γ f ( x n ) A q 2 + x n q 2 + ( 1 α n ( 1 1 δ μ ) ) r n ( r n 2 η ) B 1 x n B 1 q 2 + 2 α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q .
(3.28)

Substituting (3.7) and (3.28) into (3.26), we obtain

x n + 1 q 2 ξ n { α n γ f ( x n ) A q 2 + x n q 2 + ( 1 α n ( 1 1 δ μ ) ) r n ( r n 2 η ) B 1 x n B 1 q 2 + 2 α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q } + ( 1 ξ n ) x n q 2 = ξ n α n γ f ( x n ) A q 2 + ξ n x n q 2 + ξ n ( 1 α n ( 1 1 δ μ ) ) r n ( r n 2 η ) B 1 x n B 1 q 2 + 2 ξ n α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q + ( 1 ξ n ) x n q 2 .
(3.29)

So, we also have

where $\epsilon_n=2\xi_n\alpha_n\bigl(1-\alpha_n\bigl(1-\sqrt{\tfrac{1-\delta}{\mu}}\bigr)\bigr)\|\gamma f(x_n)-Aq\|\,\|y_n-q\|$. By conditions (C1)-(C3) and $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$, we obtain $\|B_1x_n-B_1q\|\to 0$ as $n\to\infty$. Substituting (3.24) into (3.27), we have

(3.30)

Substituting (3.8) and (3.30) into (3.26), we obtain

(3.31)

So, we also have

where $\epsilon_n=2\xi_n\alpha_n\bigl(1-\alpha_n\bigl(1-\sqrt{\tfrac{1-\delta}{\mu}}\bigr)\bigr)\|\gamma f(x_n)-Aq\|\,\|y_n-q\|$. By conditions (C1), (C2), (C4), $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$ and $\lim_{n\to\infty}\|Bu_n-Bq\|=0$, we obtain $\|B_2x_n-B_2q\|\to 0$ as $n\to\infty$.

Step 4. We show the following:

  1. (i)

    lim n x n u n =0;

  2. (ii)

    lim n u n y n =0;

  3. (iii)

    lim n y n S y n =0.

Since T r n ( F 1 , ψ 1 ) is firmly nonexpansive, we observe that

u n q 2 = T r n ( F 1 , ψ 1 ) ( x n r n B 1 x n ) T r n ( F 1 , ψ 1 ) ( q r n B 1 q ) 2 ( x n r n B 1 x n ) ( q r n B 1 q ) , u n q = 1 2 ( ( x n r n B 1 x n ) ( q r n B 1 q ) 2 + u n q 2 ( x n r n B 1 x n ) ( q r n B 1 q ) ( u n q ) 2 ) 1 2 ( x n q 2 + u n q 2 ( x n u n ) r n ( B 1 x n B 1 q ) 2 ) = 1 2 ( x n q 2 + u n q 2 x n u n 2 + 2 r n B 1 x n B 1 q , x n u n r n 2 B 1 x n B 1 q 2 ) .

Hence, we have

u n q 2 x n q 2 x n u n 2 +2 r n B 1 x n B 1 q x n u n .
(3.32)

Since J M , λ is 1-inverse-strongly monotone, we compute

y n q 2 = J M , λ ( u n λ B u n ) J M , λ ( q λ B q ) 2 ( u n λ B u n ) ( q λ B q ) , y n q = 1 2 ( ( u n λ B u n ) ( q λ B q ) 2 + y n q 2 ( u n λ B u n ) ( q λ B q ) ( y n q ) 2 ) = 1 2 ( u n q 2 + y n q 2 ( u n y n ) λ ( B u n B q ) 2 ) 1 2 ( u n q 2 + y n q 2 u n y n 2 + 2 λ u n y n , B u n B q λ 2 B u n B q 2 ) ,
(3.33)

which implies that

y n q 2 u n q 2 u n y n 2 +2λ u n y n B u n Bq.
(3.34)

Substituting (3.32) into (3.34), we have

y n q 2 { x n q 2 x n u n 2 + 2 r n B 1 x n B 1 q x n u n } u n y n 2 + 2 λ u n y n B u n B q .
(3.35)

Substituting (3.35) into (3.27), we have

z n q 2 α n γ f ( x n ) A q 2 + ( 1 α n ( 1 1 δ μ ) ) { x n q 2 x n u n 2 + 2 r n B 1 x n B 1 q x n u n u n y n 2 + 2 λ u n y n B u n B q } + 2 α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q α n γ f ( x n ) A q 2 + x n q 2 x n u n 2 + 2 ( 1 α n ( 1 1 δ μ ) ) r n B 1 x n B 1 q x n u n u n y n 2 + 2 ( 1 α n ( 1 1 δ μ ) ) λ u n y n B u n B q + 2 α n ( 1 α n ( 1 1 δ μ ) ) γ f ( x n ) A q y n q .
(3.36)

Since T s n ( F 2 , ψ 2 ) is firmly nonexpansive, we observe that

v n q 2 = T s n ( F 2 , ψ 2 ) ( x n s n B 2 x n ) T s n ( F 2 , ψ 2 ) ( q s n B 2 q ) 2 ( x n s n B 2 x n ) ( q s n B 2 q ) , v n q = 1 2 ( ( x n s n B 2 x n ) ( q s n B 2 q ) 2 + v n q 2 ( x n s n B 2 x n ) ( q s n B 2 q ) ( v n q ) 2 ) 1 2 ( x n q 2 + v n q 2 ( x n v n ) s n ( B 2 x n B 2 q ) 2 ) = 1 2 ( x n q 2 + v n q 2 x n v n 2 + 2 s n B 2 x n B 2 q , x n v n s n 2 B 2 x n B 2 q 2 ) .

Hence, we have

v n q 2 x n q 2 x n v n 2 +2 s n B 2 x n B 2 q x n v n .
(3.37)

Substituting (3.36) and (3.37) into (3.26), we obtain

(3.38)

Then, we derive

(3.39)

By conditions (C1)-(C4), $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$, $\lim_{n\to\infty}\|Bu_n-Bq\|=0$, $\lim_{n\to\infty}\|B_1x_n-B_1q\|=0$ and $\lim_{n\to\infty}\|B_2x_n-B_2q\|=0$. So we have $\|x_n-u_n\|\to 0$, $\|u_n-y_n\|\to 0$ and $\|x_n-v_n\|\to 0$ as $n\to\infty$. We note that $x_{n+1}-x_n=\xi_n(z_n-x_n)+(1-\xi_n)(v_n-x_n)$. From $\lim_{n\to\infty}\|x_n-v_n\|=0$ and $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$, it follows that

lim n z n x n =0.
(3.40)

It follows that

x n y n x n u n + u n y n 0,asn.
(3.41)

Since

z n y n z n x n + x n y n .

So, by (3.40) and lim n x n y n =0, we obtain

lim n z n y n =0.
(3.42)

Therefore, we observe that

S y n z n = P C S y n P C ( α n γ f ( x n ) + ( I α n A ) S y n ) S y n α n γ f ( x n ) ( I α n A ) S y n = α n γ f ( x n ) A S y n .
(3.43)

By condition (C1), we have S y n z n 0 as n. Next, we observe that

S y n y n S y n z n + z n y n .

By (3.42) and (3.43), we have S y n y n 0 as n.

Step 5. We show that qΘ:=F(S)GMEP( F 1 , ψ 1 , B 1 )GMEP( F 2 , ψ 2 , B 2 )I(B,M) and lim sup n (γfA)q,S y n q0. It is easy to see that P Θ (γf+(IA)) is a contraction of H into itself. In fact, from Lemma 2.9, we have

P Θ ( γ f + ( I A ) ) x P Θ ( γ f + ( I A ) ) y ( γ f + ( I A ) ) x ( γ f + ( I A ) ) y γ f ( x ) f ( y ) + ( I A ) x y γ α x y + ( 1 ( 1 1 δ μ ) ) x y = ( 1 δ μ + γ α ) x y .

Since $H$ is complete, there exists a unique fixed point $q\in H$ such that $q=P_\Theta(\gamma f+(I-A))(q)$. By Lemma 2.2, we obtain $\langle(\gamma f-A)q,w-q\rangle\le 0$ for all $w\in\Theta$.

Next, we show that $\limsup_{n\to\infty}\langle(\gamma f-A)q,Sy_n-q\rangle\le 0$, where $q=P_\Theta(\gamma f+I-A)(q)$ is the unique solution of the variational inequality $\langle(\gamma f-A)q,p-q\rangle\le 0$, $\forall p\in\Theta$. We can choose a subsequence $\{y_{n_i}\}$ of $\{y_n\}$ such that

lim sup n ( γ f A ) q , S y n q = lim i ( γ f A ) q , S y n i q .

As { y n i } is bounded, there exists a subsequence { y n i j } of { y n i } which converges weakly to w. We may assume without loss of generality that y n i w. We claim that wΘ. Since y n S y n 0 and by Lemma 2.6, we have wF(S).

Next, we show that wGMEP( F 1 , ψ 1 , B 1 ). Since u n = T r n ( F 1 , ψ 1 ) ( x n r n B 1 x n ), we know that

F 1 ( u n ,y)+ ψ 1 (y) ψ 1 ( u n )+ B 1 x n ,y u n + 1 r n y u n , u n x n 0,yC.

It follows by (A2) that

ψ 1 (y) ψ 1 ( u n )+ B 1 x n ,y u n + 1 r n y u n , u n x n F 1 (y, u n ),yC.

Hence,

ψ 1 (y) ψ 1 ( u n i )+ B 1 x n i ,y u n i + 1 r n i y u n i , u n i x n i F 1 (y, u n i ),yC.
(3.44)

For t(0,1] and yH, let y t =ty+(1t)w. From (3.44), we have

y t u n i , B 1 y t y t u n i , B 1 y t ψ 1 ( y t ) + ψ 1 ( u n i ) B 1 x n i , y t u n i 1 r n i y t u n i , u n i x n i + F 1 ( y t , u n i ) = y t u n i , B 1 y t B 1 u n i + y t u n i , B 1 u n i B 1 x n i ψ 1 ( y t ) + ψ 1 ( u n i ) 1 r n i y t u n i , u n i x n i + F 1 ( y t , u n i ) .

From u n i x n i 0, we have B 1 u n i B 1 x n i 0. Further, from (A4) and the weakly lower semicontinuity of ψ 1 , u n i x n i r n i 0 and u n i w, we have

y t w, B 1 y t ψ 1 ( y t )+ ψ 1 (w)+ F 1 ( y t ,w).
(3.45)

From (A1), (A4) and (3.45), we have

0 = F 1 ( y t , y t ) ψ 1 ( y t ) + ψ 1 ( y t ) t F 1 ( y t , y ) + ( 1 t ) F 1 ( y t , w ) + t ψ 1 ( y ) + ( 1 t ) ψ 1 ( w ) ψ 1 ( y t ) = t [ F 1 ( y t , y ) + ψ 1 ( y ) ψ 1 ( y t ) ] + ( 1 t ) [ F 1 ( y t , w ) + ψ 1 ( w ) ψ 1 ( y t ) ] t [ F 1 ( y t , y ) + ψ 1 ( y ) ψ 1 ( y t ) ] + ( 1 t ) y t w , B 1 y t = t [ F 1 ( y t , y ) + ψ 1 ( y ) ψ 1 ( y t ) ] + ( 1 t ) t y w , B 1 y t ,

and hence

0 F 1 ( y t ,y)+ ψ 1 (y) ψ 1 ( y t )+(1t)yw, B 1 y t .

Letting t0, we have, for each yC,

F 1 (w,y)+ ψ 1 (y) ψ 1 (w)+yw, B 1 w0.

This implies that wGMEP( F 1 , ψ 1 , B 1 ). By following the same arguments, we can show that wGMEP( F 2 , ψ 2 , B 2 ).

Lastly, we show that $w\in I(B,M)$. In fact, since $B$ is $\beta$-inverse-strongly monotone, $B$ is a monotone and Lipschitz continuous mapping. It follows from Lemma 2.3 that $M+B$ is maximal monotone. Let $(v,g)\in G(M+B)$; then $g-Bv\in M(v)$. Again, since $y_{n_i}=J_{M,\lambda}(u_{n_i}-\lambda Bu_{n_i})$, we have $u_{n_i}-\lambda Bu_{n_i}\in(I+\lambda M)(y_{n_i})$, that is, $\frac{1}{\lambda}(u_{n_i}-y_{n_i}-\lambda Bu_{n_i})\in M(y_{n_i})$. By virtue of the monotonicity of $M$, we have

v y n i , g B v 1 λ ( u n i y n i λ B u n i ) 0,

and hence

v y n i , g v y n i , B v + 1 λ ( u n i y n i λ B u n i ) = v y n i , B v B y n i + v y n i , B y n i B u n i + v y n i , 1 λ ( u n i y n i ) .

Since $\lim_{n\to\infty}\|u_n-y_n\|=0$, we have $\lim_{n\to\infty}\|Bu_n-By_n\|=0$; this, together with $y_{n_i}\rightharpoonup w$, gives

lim sup n v y n ,g=vw,g0.

It follows from the maximal monotonicity of B+M that θ(M+B)(w), that is, wI(B,M). Therefore, wΘ. It follows that

lim sup n ( γ f A ) q , S y n q = lim i ( γ f A ) q , S y n i q = ( γ f A ) q , w q 0.

Step 6. We prove that $x_n\to q$. Using (3.1) together with the Schwarz inequality, we have

x n + 1 q 2 = ξ n P C [ ( α n γ f ( x n ) + ( I α n A ) S y n ) q ] + ( 1 ξ n ) ( v n q ) 2 ξ n P C [ ( α n γ f ( x n ) + ( I α n A ) S y n ) P C ( q ) ] 2 + ( 1 ξ n ) v n q 2 ξ n α n ( γ f ( x n ) A q ) + ( I α n A ) ( S y n q ) 2 + ( 1 ξ n ) x n q 2 ξ n ( I α n A ) 2 S y n q 2 + ξ n α n 2 γ f ( x n ) A q 2 + 2 ξ n α n ( I α n A ) ( S y n q ) , γ f ( x n ) A q + ( 1 ξ n ) x n q 2 ξ n ( 1 α n ( 1 1 δ μ ) ) 2 y n q 2 + ξ n α n 2 γ f ( x n ) A q 2 + 2 ξ n α n S y n q , γ f ( x n ) A q 2 ξ n α n 2 A ( S y n q ) , γ f ( x n ) A q + ( 1 ξ n ) x n q 2 ξ n ( 1 α n ( 1 1 δ μ ) ) 2 x n q 2 + ξ n α n 2 γ f ( x n ) A q 2 + 2 ξ n α n S y n q , γ f ( x n ) γ f ( q ) + 2 ξ n α n S y n q , γ f ( q ) A q 2 ξ n α n 2 A ( S y n q ) , γ f ( x n ) A q + ( 1 ξ n ) x n q 2 ξ n ( 1 α n ( 1 1 δ μ ) ) 2 x n q 2 + ξ n α n 2 γ f ( x n ) A q 2 + 2 ξ n α n S y n q γ f ( x n ) γ f ( q ) + 2 ξ n α n S y n q , γ f ( q ) A q 2 ξ n α n 2 A ( S y n q ) , γ f ( x n ) A q + ( 1 ξ n ) x n q 2 ξ n ( 1 α n ( 1 1 δ μ ) ) 2 x n q 2 + ξ n α n 2 γ f ( x n ) A q 2 + 2 ξ n γ α α n y n q x n q + 2 ξ n α n S y n q , γ f ( q ) A q 2 ξ n α n 2 A ( S y n q ) , γ f ( x n ) A q + ( 1 ξ n ) x n q 2 ( ξ n 2 ξ n α n ( 1 1 δ μ ) + ξ n α n 2 ( 1 1 δ μ ) 2 ) x n q 2 + ξ n α n 2 γ f ( x n ) A q 2 + 2 ξ n γ α α n x n q 2 + 2 ξ n α n S y n q , γ f ( q ) A q 2 ξ n α n 2 A ( S y n q ) , γ f ( x n ) A q + ( 1 ξ n ) x n q 2 ( 1 2 ξ n α n ( 1 1 δ μ ) + 2 ξ n γ α α n ) x n q 2 + α n { ξ n α n γ f ( x n ) A q 2 + 2 ξ n S y n q , γ f ( q ) A q 2 ξ n α n A ( S y n q ) γ f ( x n ) A q + ξ n α n ( 1 1 δ μ ) 2 x n q 2 } = ( 1 2 ( ( 1 1 δ μ ) γ α ) ξ n α n ) x n q 2 + α n { ξ n α n γ f ( x n ) A q 2 + 2 ξ n S y n q , γ f ( q ) A q 2 ξ n α n A ( S y n q ) γ f ( x n ) A q + ξ n α n ( 1 1 δ μ ) 2 x n q 2 } .

Since $\{x_n\}$ is bounded, we can take a constant $\eta>0$ such that $\eta\ge\xi_n\|\gamma f(x_n)-Aq\|^{2}-2\xi_n\|A(Sy_n-q)\|\,\|\gamma f(x_n)-Aq\|+\xi_n\bigl(1-\sqrt{\tfrac{1-\delta}{\mu}}\bigr)^{2}\|x_n-q\|^{2}$ for all $n\ge 0$. It follows that

x n + 1 q 2 ( 1 2 ( ( 1 1 δ μ ) γ α ) ξ n α n ) x n q 2 + α n ς n ,
(3.46)

where $\varsigma_n=2\xi_n\langle Sy_n-q,\gamma f(q)-Aq\rangle+\eta\alpha_n$. Since $\limsup_{n\to\infty}\langle(\gamma f-A)q,Sy_n-q\rangle\le 0$, we get $\limsup_{n\to\infty}\varsigma_n\le 0$. Applying Lemma 2.5, we conclude that $x_n\to q$. This completes the proof. □

Corollary 3.2 Let H be a real Hilbert space, C be a closed convex subset of H. Let F 1 , F 2 be two bifunctions ofC×CintoRsatisfying (A 1)-(A 4) andB, B 1 , B 2 :CHbeβ,η,ρ-inverse-strongly monotone mappings, ψ 1 , ψ 2 :CRbe convex and lower semicontinuous function, f:CCbe a contraction with coefficient α (0<α<1), M:H 2 H be a maximal monotone mapping. Assume that either (B 1) or (B 2) holds. Let S be a nonexpansive mapping of H into itself such that

Θ:=F(S)GMEP( F 1 , ψ 1 , B 1 )GMEP( F 2 , ψ 2 , B 2 )I(B,M).

Suppose{ x n }is a sequence generated by the following algorithm x 0 Carbitrarily:

{ u n = T r n ( F 1 , ψ 1 ) ( x n r n B 1 x n ) , v n = T s n ( F 2 , ψ 2 ) ( x n s n B 2 x n ) , x n + 1 = ξ n P C [ α n f ( x n ) + ( I α n ) S J M , λ ( I λ B ) u n ] + ( 1 ξ n ) v n ,

where{ α n },{ ξ n }(0,1), λ(0,2β)such that0<aλb<2β, { r n }(0,2η)with0<cd1ηand{ s n }(0,2ρ)with0<ef1ρsatisfy the conditions (C 1)-(C 4).

Then{ x n }converges strongly toqΘ, whereq= P Θ (f+I)(q)which solves the following variational inequality:

$\langle(f-I)q,p-q\rangle\le 0,\quad\forall p\in\Theta.$

Proof Putting $A\equiv I$ and $\gamma\equiv 1$ in Theorem 3.1, we obtain the desired conclusion immediately. □

Corollary 3.3 Let H be a real Hilbert space, C be a closed convex subset of H. Let F 1 , F 2 be two bifunctions ofC×CintoRsatisfying (A 1)-(A 4) andB, B 1 , B 2 :CHbeβ,η,ρ-inverse-strongly monotone mappings, ψ 1 , ψ 2 :CRbe convex and lower semicontinuous function. Assume that either (B 1) or (B 2) holds. Let S be a nonexpansive mapping of H into itself such that

Θ:=F(S)GMEP( F 1 , ψ 1 , B 1 )GMEP( F 2 , ψ 2 , B 2 )I(B,M).

Suppose{ x n }is a sequence generated by the following algorithm x 0 Carbitrarily:

{ u n = T r n ( F 1 , ψ 1 ) ( x n r n B 1 x n ) , v n = T s n ( F 2 , ψ 2 ) ( x n s n B 2 x n ) , x n + 1 = ξ n P C [ α n u + ( I α n ) S J M , λ ( I λ B ) u n ] + ( 1 ξ n ) v n ,

where{ α n },{ ξ n }(0,1), λ(0,2β)such that0<aλb<2β, { r n }(0,2η)with0<cd1ηand{ s n }(0,2ρ)with0<ef1ρsatisfy the conditions (C 1)-(C 4).

Then{ x n }converges strongly toqΘ, whereq= P Θ (q)which solves the following variational inequality:

$\langle u-q,p-q\rangle\le 0,\quad\forall p\in\Theta.$

Proof Putting $f\equiv u$, where $u\in C$ is a constant, in Corollary 3.2, we obtain the desired conclusion immediately. □

Corollary 3.4 Let H be a real Hilbert space, C be a closed convex subset of H. Let F 1 , F 2 be two bifunctions ofC×CintoRsatisfying (A 1)-(A 4) andB, B 1 , B 2 :CHbeβ,η,ρ-inverse-strongly monotone mappings, ψ 1 , ψ 2 :CRbe convex and lower semicontinuous function, f:CCbe a contraction with coefficient α (0<α<1) and A is δ-strongly monotone and μ-strictly pseudo-contraction withδ+μ>1, γ is a positive real number such thatγ< 1 α (1 1 δ μ ). Assume that either (B 1) or (B 2) holds. Let S be a nonexpansive mapping of C into itself such that

Θ:=F(S)GMEP( F 1 , ψ 1 , B 1 )GMEP( F 2 , ψ 2 , B 2 )VI(C,B).

Suppose{ x n }is a sequence generated by the following algorithm x 0 Carbitrarily:

{ u n = T r n ( F 1 , ψ 1 ) ( x n r n B 1 x n ) , v n = T s n ( F 2 , ψ 2 ) ( x n s n B 2 x n ) , x n + 1 = ξ n P C [ α n γ f ( x n ) + ( I α n A ) S P C ( I λ B ) u n ] + ( 1 ξ n ) v n ,

where{ α n },{ ξ n }(0,1), λ(0,2β)such that0<aλb<2β, { r n }(0,2η)with0<cd1ηand{ s n }(0,2ρ)with0<ef1ρsatisfy the conditions (C 1)-(C 4).

Then{ x n }converges strongly toqΘ, whereq= P Θ (γf+IA)(q)which solves the following variational inequality:

$\langle(\gamma f-A)q,p-q\rangle\le 0,\quad\forall p\in\Theta.$

Proof Taking $J_{M,\lambda}=P_C$ in Theorem 3.1, we obtain the desired conclusion immediately. □

Corollary 3.5 Let H be a real Hilbert space, C be a closed convex subset of H. Letf:CCbe a contraction with coefficient α (0<α<1), A is δ-strongly monotone and μ-strictly pseudo-contraction withδ+μ>1, γ is a positive real number such thatγ< 1 α (1 1 δ μ ). Let S be a nonexpansive mapping of C into itself such that

Θ:=F(S).

Suppose{ x n }is a sequence generated by the following algorithm x 0 Carbitrarily:

x n + 1 = α n γf( x n )+(I α n A)S x n ,

where{ α n }(0,1)and satisfy the condition lim n α n =0. Then{ x n }converges strongly toqΘ, whereq= P Θ (γf+IA)(q)which solves the following variational inequality:

$\langle(\gamma f-A)q,p-q\rangle\le 0,\quad\forall p\in\Theta.$

Proof Taking $\xi_n\equiv 1$, $P_C\equiv I$ and $B,B_1,B_2\equiv 0$ in Corollary 3.4, we obtain the desired conclusion immediately. □

Remark 3.6 Corollary 3.5 generalizes and improves the result of Marino and Xu [19].

Corollary 3.7 Let H be a real Hilbert space, C be a closed convex subset of H. Let F 1 , F 2 be two bifunctions ofC×CintoRsatisfying (A 1)-(A 4) and B 1 , B 2 :CHbeη,ρ-inverse-strongly monotone mappings, ψ 1 , ψ 2 :CRbe convex and lower semicontinuous function, f:CCbe a contraction with coefficient α (0<α<1). Assume that either (B 1) or (B 2) holds. Let S be a nonexpansive mapping of C into itself such that

Θ:=F(S)GMEP( F 1 , ψ 1 , B 1 )GMEP( F 2 , ψ 2 , B 2 ).

Suppose{ x n }is a sequence generated by the following algorithm x 0 Carbitrarily:

{ u n = T r n ( F 1 , ψ 1 ) ( x n r n B 1 x n ) , v n = T s n ( F 2 , ψ 2 ) ( x n s n B 2 x n ) , x n + 1 = ξ n P C [ α n f ( x n ) + ( I α n ) S u n ] + ( 1 ξ n ) v n ,

where{ α n },{ ξ n }(0,1), { r n }(0,2η)with0<cd1ηand{ s n }(0,2ρ)with0<ef1ρsatisfy the conditions (C 1)-(C 4).

Then{ x n }converges strongly toqΘ, whereq= P Θ (f+I)(q)which solves the following variational inequality:

$\langle(f-I)q,p-q\rangle\le 0,\quad\forall p\in\Theta.$

Proof Taking $\gamma\equiv 1$, $A\equiv I$, $J_{M,\lambda}\equiv I$ and $B\equiv 0$ in Theorem 3.1, we obtain the desired conclusion immediately. □

Remark 3.8 Corollary 3.7 generalizes and improves the result of Yao and Liou [29].

References

  1. Kirk WA: A fixed point theorem for mappings which do not increase distances. Am. Math. Mon. 1965, 72: 1004–1006.

  2. Browder FE, Petryshyn WV: Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20: 197–228.

  3. Chamnarnpan T, Kumam P: Iterative algorithms for solving the system of mixed equilibrium problems, fixed-point problems, and variational inclusions with application to minimization problem. J. Appl. Math. 2012, 2012.

  4. Hao Y: Some results of variational inclusion problems and fixed point problems with applications. Appl. Math. Mech. 2009, 30(12): 1589–1596.

  5. Jitpeera T, Kumam P: Hybrid algorithms for minimization problems over the solutions of generalized mixed equilibrium and variational inclusion problems. Math. Probl. Eng. 2011, 2011.

  6. Jitpeera T, Kumam P: Approximating common solution of variational inclusions and generalized mixed equilibrium problems with applications to optimization problems. Dyn. Contin. Discrete Impuls. Syst. 2011, 18(6): 813–837.

  7. Liu M, Chang SS, Zuo P: An algorithm for finding a common solution for a system of mixed equilibrium problem, quasivariational inclusion problems of nonexpansive semigroup. J. Inequal. Appl. 2010, 2010.

  8. Petrot N, Wangkeeree R, Kumam P: A viscosity approximation method of common solutions for quasi variational inclusion and fixed point problems. Fixed Point Theory 2011, 12(1): 165–178.

  9. Zhang SS, Lee JHW, Chan CK: Algorithms of common solutions to quasi variational inclusion and fixed point problems. Appl. Math. Mech. 2008, 29(5): 571–581.

  10. Tan JF, Chang SS: Iterative algorithms for finding common solutions to variational inclusion equilibrium and fixed point problems. Fixed Point Theory Appl. 2011, 2011.

  11. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88.

  12. Takahashi W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama; 2000.

  13. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

  14. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.

  15. Flam SD, Antipin AS: Equilibrium programming using proximal-like algorithms. Math. Program. 1997, 78: 29–41.

  16. Takahashi S, Takahashi W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331: 506–515.

  17. Hartman P, Stampacchia G: On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115: 271–310.

  18. Yao JC, Chadli O: Pseudomonotone complementarity problems and variational inequalities. In: Crouzeix JP, Haddjissas N, Schaible S (eds.) Handbook of Generalized Convexity and Monotonicity. 2005, 501–558.

  19. Marino G, Xu HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52.

  20. Moudafi A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55.

  21. Iiduka H, Takahashi W: Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal. 2005, 61: 341–350.

  22. Su Y, Shang M, Qin X: An iterative method of solution for equilibrium and optimization problems. Nonlinear Anal. 2008, 69: 2709–2719.

  23. Brézis H: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland, Amsterdam; 1973.

  24. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597.

  25. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256.

  26. Browder FE: Nonlinear operators and nonlinear equations of evolution in Banach spaces. Proc. Symp. Pure Math. 1976, 18: 78–81.

  27. Peng J-W, Liou Y-C, Yao J-C: An iterative algorithm combining viscosity method with parallel method for a generalized equilibrium problem and strict pseudocontractions. Fixed Point Theory Appl. 2009, 2009.

  28. Wangkeeree R, Bantaojai T: A general composite algorithms for solving general equilibrium problems and fixed point problems in Hilbert spaces. Thai J. Math. 2011, 9: 191–212.

  29. Yao Y, Liou Y-C: Composite algorithms for minimization over the solutions of equilibrium problems and fixed point problems. Abstr. Appl. Anal. 2010, 2010.


Acknowledgements

This work was supported by the Higher Education Research Promotion and the National Research University Project of Thailand, Office of the Higher Education Commission (under NRU-CSEC Project No. 55000613). Furthermore, the authors are grateful to the reviewers for their careful reading of the article and for the suggestions, which improved the quality of this study.

Author information


Correspondence to Poom Kumam.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly to this research work. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Jitpeera, T., Kumam, P. A new iterative algorithm for solving common solutions of generalized mixed equilibrium problems, fixed point problems and variational inclusion problems with minimization problems. Fixed Point Theory Appl 2012, 111 (2012). https://doi.org/10.1186/1687-1812-2012-111
