
Iterative algorithms based on the viscosity approximation method for equilibrium and constrained convex minimization problem

Abstract

The gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. Based on the viscosity approximation method, we combine the GPA and the averaged mapping approach to propose, for the first time, implicit and explicit composite iterative algorithms for finding a common solution of an equilibrium problem and a constrained convex minimization problem. Under suitable conditions, strong convergence theorems are obtained.

MSC:46N10, 47J20, 74G60.

1 Introduction

Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ∥·∥. Let C be a nonempty closed convex subset of H. Let T:C→C be a nonexpansive mapping, namely ∥Tx − Ty∥ ≤ ∥x − y∥ for all x,y∈C. The set of fixed points of T is denoted by F(T).

Let ϕ be a bifunction of C×C into ℝ, where ℝ is the set of real numbers. Consider the equilibrium problem (EP), which is to find z∈C such that

ϕ(z,y) ≥ 0, ∀y∈C.
(1.1)

We denote the set of solutions of EP by EP(ϕ). Given a mapping F:C→H, let ϕ(x,y) = ⟨Fx, y − x⟩ for all x,y∈C; then z∈EP(ϕ) if and only if ⟨Fz, y − z⟩ ≥ 0 for all y∈C, that is, z is a solution of the variational inequality. Numerous problems in physics, optimization, and economics reduce to finding a solution of (1.1). Some methods have been proposed to solve the equilibrium problem; see, for instance, [1–3] and the references therein.

Composite iterative algorithms have been proposed by many authors for finding a common solution of an equilibrium problem and a fixed point problem (see [4–18]).

On the other hand, consider the constrained convex minimization problem as follows:

minimize { g(x) : x∈C },
(1.2)

where g:C→ℝ is a real-valued convex function. It is well known that the gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. If g is (Fréchet) differentiable, then the GPA generates a sequence {x_n} using the following recursive formula:

x_{n+1} = P_C(x_n − λ∇g(x_n)), n ≥ 0,
(1.3)

or more generally,

x_{n+1} = P_C(x_n − λ_n∇g(x_n)), n ≥ 0,
(1.4)

where in both (1.3) and (1.4) the initial guess x_0 is taken from C arbitrarily, and the parameters, λ or λ_n, are positive real numbers satisfying certain conditions. The convergence of the algorithms (1.3) and (1.4) depends on the behavior of the gradient ∇g. As a matter of fact, it is known that if ∇g is α-strongly monotone and L-Lipschitzian with constants α,L > 0, then for 0 < λ < 2α/L² the operator

W := P_C(I − λ∇g)
(1.5)

is a contraction; hence the sequence {x_n} defined by the algorithm (1.3) converges in norm to the unique minimizer of (1.2). However, if the gradient ∇g fails to be strongly monotone, the operator W defined by (1.5) may fail to be contractive; consequently, the sequence {x_n} generated by the algorithm (1.3) may fail to converge strongly (see [19]). If ∇g is Lipschitzian, then the algorithms (1.3) and (1.4) can still converge in the weak topology under certain conditions.
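For readers who want to experiment, here is a minimal numerical sketch of the scheme (1.4) on a toy instance of (1.2): a convex quadratic g minimized over a box in ℝ². The matrix A, vector b, box C, and step sizes λ_n below are illustrative choices of ours, not data from the paper.

```python
import numpy as np

# Toy instance of (1.2): minimize g(x) = 0.5*||A x - b||^2 over the box C = [0, 1]^2.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 0.2])
L = np.linalg.norm(A.T @ A, 2)           # Lipschitz constant of the gradient of g

def grad_g(x):
    return A.T @ (A @ x - b)

def P_C(x):                              # metric projection onto the box [0, 1]^2
    return np.clip(x, 0.0, 1.0)

# GPA (1.4): x_{n+1} = P_C(x_n - lam_n * grad g(x_n)), with lam_n in (0, 2/L).
x = np.array([0.9, 0.9])                 # arbitrary initial guess in C
for n in range(200):
    lam_n = 1.0 / L
    x = P_C(x - lam_n * grad_g(x))

print("approximate minimizer of (1.2):", x)
```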

Recently, Xu [19] proposed an explicit operator-oriented approach to the algorithm (1.4), namely an averaged mapping approach. He applied this approach to the GPA (1.4) and to a relaxed gradient-projection algorithm. Moreover, he constructed a counterexample showing that the algorithm (1.3) need not converge in norm in an infinite-dimensional space, and he also presented two modifications of the GPA which are shown to converge strongly [20, 21].

In 2011, Ceng et al. [22] proposed the following explicit iterative scheme:

x_{n+1} = P_C[s_nγVx_n + (I − s_nμF)T_nx_n], n ≥ 0,

where s_n = (2 − λ_nL)/4 and P_C(I − λ_n∇g) = s_nI + (1 − s_n)T_n for each n ≥ 0. They proved that the sequence {x_n} converges strongly to a minimizer of the constrained convex minimization problem, which also solves a certain variational inequality.

In 2000, Moudafi [2] introduced the viscosity approximation method for nonexpansive mappings, which was extended in [23]. Let f be a contraction on H. Starting with an arbitrary initial x_0∈H, define a sequence {x_n} recursively by

x_{n+1} = α_nf(x_n) + (1 − α_n)Tx_n, n ≥ 0,
(1.6)

where {α_n} is a sequence in (0,1). Xu [24] proved that if {α_n} satisfies certain conditions, then the sequence {x_n} generated by (1.6) converges strongly to the unique solution x*∈F(T) of the variational inequality

⟨(I − f)x*, x − x*⟩ ≥ 0, x∈F(T).
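As a small illustration of the viscosity iteration (1.6), the sketch below uses a nonexpansive T (a projection onto a box, so F(T) is the box itself) and a contraction f pulling toward a fixed anchor point u; these concrete choices and the sequence α_n = 1/(n+1) are ours. For this data the solution of the variational inequality is P_{F(T)}(u) = (1, 0).

```python
import numpy as np

def T(x):                                   # nonexpansive; F(T) = [0, 1]^2
    return np.clip(x, 0.0, 1.0)

u = np.array([2.0, -1.0])
def f(x):                                   # contraction with constant rho = 0.5
    return 0.5 * x + 0.5 * u

# Viscosity iteration (1.6): x_{n+1} = alpha_n f(x_n) + (1 - alpha_n) T x_n.
x = np.array([5.0, 5.0])
for n in range(1, 5000):
    alpha_n = 1.0 / (n + 1)                 # alpha_n -> 0 and sum alpha_n = infinity
    x = alpha_n * f(x) + (1 - alpha_n) * T(x)

# The strong limit solves <(I - f)x*, x - x*> >= 0 on F(T); here x* = P_{F(T)}(u) = (1, 0).
print(x)
```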

The purpose of this paper is to study iterative methods for finding a common solution of an equilibrium problem and a constrained convex minimization problem. Based on the viscosity approximation method, we combine the GPA and the averaged mapping approach to propose implicit and explicit composite iterative methods for finding a common element of the set of solutions of an equilibrium problem and the solution set of a constrained convex minimization problem. We also prove some strong convergence theorems.

2 Preliminaries

Throughout this paper, we always assume that C is a nonempty closed convex subset of a Hilbert space H. We use ‘⇀’ for weak convergence and ‘→’ for strong convergence.

It is widely known that H satisfies Opial’s condition [25]; that is, for any sequence {x_n} with x_n ⇀ x, the inequality

lim inf_{n→∞} ∥x_n − x∥ < lim inf_{n→∞} ∥x_n − y∥

holds for every y∈H with y ≠ x.

In order to solve the equilibrium problem for a bifunction ϕ:C×C→ℝ, let us assume that ϕ satisfies the following conditions:

(A1) ϕ(x,x) = 0 for all x∈C;

(A2) ϕ is monotone, that is, ϕ(x,y) + ϕ(y,x) ≤ 0 for all x,y∈C;

(A3) for all x,y,z∈C, lim_{t↓0} ϕ(tz + (1 − t)x, y) ≤ ϕ(x,y);

(A4) for each fixed x∈C, the function y ↦ ϕ(x,y) is convex and lower semicontinuous.

Let us recall the following lemmas, which will be useful in this paper.

Lemma 2.1 [26]

Let ϕ be a bifunction from C×C into ℝ satisfying (A1), (A2), (A3), and (A4). Then for any r > 0 and x∈H, there exists z∈C such that

ϕ(z,y) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y∈C.

Further, if

Q_rx = { z∈C : ϕ(z,y) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y∈C },

then the following hold:

  1. (1)

    Q_r is single-valued;

  2. (2)

    Q_r is firmly nonexpansive; that is,

    ∥Q_rx − Q_ry∥² ≤ ⟨Q_rx − Q_ry, x − y⟩, ∀x,y∈H;

  3. (3)

    F(Q_r) = EP(ϕ);

  4. (4)

    EP(ϕ) is closed and convex.
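To make the resolvent Q_r of Lemma 2.1 concrete, take the special case ϕ(z,y) = ⟨Fz, y − z⟩ with the affine monotone F(z) = z − c on an interval C ⊂ ℝ (an illustrative example of ours, not from the paper). The defining inequality of Q_rx is then the optimality condition for minimizing ½z² − cz + (1/(2r))(z − x)² over C, so Q_rx = P_C((rc + x)/(r + 1)); the sketch below checks property (2) numerically.

```python
import numpy as np

# Special case: phi(z, y) = <F z, y - z> with F(z) = z - c on C = [0, 1].
# Then Q_r x = P_C((r*c + x) / (r + 1)).   (illustrative choices of F, c, r)
c, r = 0.3, 2.0

def P_C(t):
    return min(max(t, 0.0), 1.0)

def Q_r(x):
    return P_C((r * c + x) / (r + 1.0))

# Numerical check of Lemma 2.1(2): ||Q_r x - Q_r y||^2 <= <Q_r x - Q_r y, x - y>.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.uniform(-3.0, 3.0, size=2)
    lhs = (Q_r(x) - Q_r(y)) ** 2
    rhs = (Q_r(x) - Q_r(y)) * (x - y)
    assert lhs <= rhs + 1e-12
print("firm nonexpansiveness of Q_r verified on random samples")
```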

Definition 2.1 A mapping T:H→H is said to be firmly nonexpansive if and only if 2T − I is nonexpansive, or equivalently,

⟨x − y, Tx − Ty⟩ ≥ ∥Tx − Ty∥², ∀x,y∈H.

Alternatively, T is firmly nonexpansive if and only if T can be expressed as

T = ½(I + S),

where S:H→H is nonexpansive. Obviously, projections are firmly nonexpansive.

Definition 2.2 A mapping T:H→H is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping; that is,

T = (1 − α)I + αS,
(2.1)

where α∈(0,1) and S:H→H is nonexpansive. More precisely, when (2.1) holds, we say that T is α-averaged.

Clearly, a firmly nonexpansive mapping is a ½-averaged map.

Proposition 2.1 [27]

For given operators S,T,V:H→H:

  1. (i)

    If T = (1 − α)S + αV for some α∈(0,1) and if S is averaged and V is nonexpansive, then T is averaged.

  2. (ii)

    T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.

  3. (iii)

    If T = (1 − α)S + αV for some α∈(0,1), S is firmly nonexpansive and V is nonexpansive, then T is averaged.

  4. (iv)

    The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1⋯T_N. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1,α_2∈(0,1), then the composite T_1T_2 is α-averaged, where α = α_1 + α_2 − α_1α_2.

Recall that the metric projection from H onto C is the mapping P_C:H→C which assigns, to each point x∈H, the unique point P_Cx∈C satisfying the property

∥x − P_Cx∥ = inf_{y∈C} ∥x − y∥ =: d(x,C).

Lemma 2.2 For a given x∈H:

  1. (a)

    z = P_Cx if and only if ⟨x − z, y − z⟩ ≤ 0, ∀y∈C.

  2. (b)

    z = P_Cx if and only if ∥x − z∥² ≤ ∥x − y∥² − ∥y − z∥², ∀y∈C.

  3. (c)

    ⟨P_Cx − P_Cy, x − y⟩ ≥ ∥P_Cx − P_Cy∥², ∀x,y∈H.

Consequently, P_C is nonexpansive and monotone.

Lemma 2.3 The following inequality holds in an inner product space X:

∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩, ∀x,y∈X.

The so-called demiclosedness principle for nonexpansive mappings will be used.

Lemma 2.4 (Demiclosedness principle [28])

Let T:C→C be a nonexpansive mapping with Fix(T) ≠ ∅. If {x_n} is a sequence in C that converges weakly to x and if {(I − T)x_n} converges strongly to y, then (I − T)x = y. In particular, if y = 0, then x∈Fix(T).

Next, we introduce monotonicity of a nonlinear operator.

Definition 2.3 A nonlinear operator G whose domain D(G)⊆H and range R(G)⊆H is said to be:

  1. (a)

    monotone if

    ⟨x − y, Gx − Gy⟩ ≥ 0, ∀x,y∈D(G),

  2. (b)

    β-strongly monotone if there exists β > 0 such that

    ⟨x − y, Gx − Gy⟩ ≥ β∥x − y∥², ∀x,y∈D(G),

  3. (c)

    ν-inverse strongly monotone (for short, ν-ism) if there exists ν > 0 such that

    ⟨x − y, Gx − Gy⟩ ≥ ν∥Gx − Gy∥², ∀x,y∈D(G).

It can easily be seen that if G is nonexpansive, then I − G is monotone; and the projection map P_C is a 1-ism.

Inverse strongly monotone (also referred to as co-coercive) operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems; see, for example, [29, 30] and the references therein.

The following proposition summarizes some results on the relationship between averaged mappings and inverse strongly monotone operators.

Proposition 2.2 [27]

Let T:H→H be an operator from H into itself.

  1. (a)

    T is nonexpansive if and only if the complement I − T is ½-ism.

  2. (b)

    If T is ν-ism, then for γ > 0, γT is (ν/γ)-ism.

  3. (c)

    T is averaged if and only if the complement I − T is ν-ism for some ν > ½. Indeed, for α∈(0,1), T is α-averaged if and only if I − T is (1/(2α))-ism.

Lemma 2.5 [24]

Let {a_n} be a sequence of nonnegative numbers satisfying the condition

a_{n+1} ≤ (1 − γ_n)a_n + γ_nδ_n, n ≥ 0,

where {γ_n}, {δ_n} are sequences of real numbers such that:

  1. (i)

    {γ_n}⊂(0,1) and ∑_{n=0}^∞ γ_n = ∞,

  2. (ii)

    lim sup_{n→∞} δ_n ≤ 0 or ∑_{n=0}^∞ γ_n|δ_n| < ∞.

Then lim_{n→∞} a_n = 0.

3 Main results

In this paper, we always assume that g:C→ℝ is a real-valued convex function and that ∇g is an L-Lipschitzian mapping with L > 0. Since the Lipschitz continuity of ∇g implies that it is inverse strongly monotone, the complement I − λ∇g is an averaged mapping for suitable λ. Consequently, the GPA can be rewritten as the composite of a projection and an averaged mapping, which is again an averaged mapping. This shows that averaged mappings play an important role in the gradient-projection algorithm.

Note that ∇g is L-Lipschitzian. This implies that ∇g is (1/L)-ism, which then implies that λ∇g is (1/(λL))-ism. So, by Proposition 2.2, I − λ∇g is (λL/2)-averaged. Now since the projection P_C is (1/2)-averaged, we see from Proposition 2.1 that the composite P_C(I − λ∇g) is ((2 + λL)/4)-averaged for 0 < λ < 2/L. Hence, for each n, P_C(I − λ_n∇g) is ((2 + λ_nL)/4)-averaged. Therefore, we can write

P_C(I − λ_n∇g) = ((2 − λ_nL)/4)I + ((2 + λ_nL)/4)T_n = s_nI + (1 − s_n)T_n,

where T_n is nonexpansive and s_n = (2 − λ_nL)/4.
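In computations T_n never has to be stored explicitly: rearranging the identity above gives T_n = [P_C(I − λ_n∇g) − s_nI]/(1 − s_n), so one evaluation of T_n costs one gradient step plus one projection. A minimal sketch with toy data of our own choosing:

```python
import numpy as np

# Toy data: g(x) = 0.5*||A x - b||^2 over the box C = [0, 1]^2.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 0.2])
L = np.linalg.norm(A.T @ A, 2)              # Lipschitz constant of grad g

def grad_g(x):
    return A.T @ (A @ x - b)

def P_C(x):
    return np.clip(x, 0.0, 1.0)

def T(x, lam):
    """Nonexpansive part of P_C(I - lam*grad g) = s*I + (1 - s)*T, with s = (2 - lam*L)/4."""
    s = (2.0 - lam * L) / 4.0               # 0 < lam < 2/L gives s in (0, 1/2)
    return (P_C(x - lam * grad_g(x)) - s * x) / (1.0 - s)

# Sanity check: s*x + (1 - s)*T(x) reproduces P_C(I - lam*grad g)(x).
lam = 1.0 / L
s = (2.0 - lam * L) / 4.0
x = np.array([0.7, 0.2])
assert np.allclose(s * x + (1 - s) * T(x, lam), P_C(x - lam * grad_g(x)))
```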

Let f:C→C be a contraction with constant ρ∈(0,1). Suppose that the minimization problem (1.2) is consistent, and let U denote its solution set. Let {Q_{β_n}} be a sequence of mappings defined as in Lemma 2.1. Consider the following mapping G_n on C defined by

G_nx = α_nf(x) + (1 − α_n)T_nQ_{β_n}x, x∈C, n∈ℕ,

where α_n∈(0,1). By Lemma 2.1, we have

∥G_nx − G_ny∥ ≤ (1 − α_n(1 − ρ))∥x − y∥.

Since 0 < 1 − α_n(1 − ρ) < 1, it follows that G_n is a contraction. Therefore, by the Banach contraction principle, G_n has a unique fixed point x_n^f∈C such that

x_n^f = α_nf(x_n^f) + (1 − α_n)T_nQ_{β_n}x_n^f.

For simplicity, we will write x_n for x_n^f provided no confusion occurs. Next, we prove the convergence of {x_n} and establish the existence of a point q∈U∩EP(ϕ) which solves the variational inequality

⟨(I − f)q, p − q⟩ ≥ 0, ∀p∈U∩EP(ϕ).
(3.1)

Equivalently, q = P_{U∩EP(ϕ)}f(q).
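Since G_n is a contraction, the implicit iterate x_n = x_n^f can in principle be computed by Picard iteration. The sketch below does this for the special case ϕ ≡ 0, in which Q_{β_n} reduces to P_C by Lemma 2.2(a), with toy data chosen by us; it is only meant to make the implicit scheme tangible, not to reproduce the analysis.

```python
import numpy as np

# Toy data: g(x) = 0.5*||A x - b||^2 over C = [0, 1]^2, phi = 0 (so Q_{beta_n} = P_C).
A = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 0.2])
L = np.linalg.norm(A.T @ A, 2)

def grad_g(x):
    return A.T @ (A @ x - b)

def P_C(x):
    return np.clip(x, 0.0, 1.0)

def T(x, lam):                                # nonexpansive part of P_C(I - lam*grad g)
    s = (2.0 - lam * L) / 4.0
    return (P_C(x - lam * grad_g(x)) - s * x) / (1.0 - s)

anchor = np.array([0.0, 1.0])
def f(x):                                     # contraction with constant rho = 0.5
    return 0.5 * x + 0.5 * anchor

def implicit_iterate(alpha_n, lam_n, tol=1e-10, max_iter=50000):
    """Fixed point of G_n(x) = alpha_n f(x) + (1 - alpha_n) T_n Q x, via Picard iteration."""
    x = np.zeros(2)
    for _ in range(max_iter):
        x_new = alpha_n * f(x) + (1 - alpha_n) * T(P_C(x), lam_n)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Theorem 3.1: as alpha_n -> 0 and s_n -> 0 (lam_n -> 2/L), x_n tends to the point q solving (3.1).
for alpha_n, lam_n in [(0.5, 1.0 / L), (0.1, 1.5 / L), (0.02, 1.9 / L)]:
    print(alpha_n, implicit_iterate(alpha_n, lam_n))
```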

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H and let ϕ be a bifunction from C×C into ℝ satisfying (A1), (A2), (A3), and (A4). Let g:C→ℝ be a real-valued convex function, and assume that ∇g is an L-Lipschitzian mapping with L > 0 and that f:C→C is a contraction with constant ρ∈(0,1). Assume that U∩EP(ϕ) ≠ ∅. Let {x_n} be a sequence generated by

{ ϕ(u_n, y) + (1/β_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y∈C,
  x_n = α_nf(x_n) + (1 − α_n)T_nu_n, n∈ℕ,

where u_n = Q_{β_n}x_n, P_C(I − λ_n∇g) = s_nI + (1 − s_n)T_n, s_n = (2 − λ_nL)/4 and {λ_n}⊂(0, 2/L). Let {β_n} and {α_n} satisfy the following conditions:

  1. (i)

{β_n}⊂(0,∞), lim inf_{n→∞} β_n > 0;

  2. (ii)

{α_n}⊂(0,1), lim_{n→∞} α_n = 0.

Then {x_n} converges strongly, as s_n→0 (λ_n→2/L), to a point q∈U∩EP(ϕ) which solves the variational inequality (3.1).

Proof First, we claim that {x_n} is bounded. Indeed, pick any p∈U∩EP(ϕ). Since u_n = Q_{β_n}x_n and p = Q_{β_n}p, we know that for any n∈ℕ,

∥u_n − p∥ = ∥Q_{β_n}x_n − Q_{β_n}p∥ ≤ ∥x_n − p∥.
(3.2)

Thus, we derive that (noting that T_np = p and T_n is nonexpansive)

∥x_n − p∥ = ∥α_nf(x_n) + (1 − α_n)T_nu_n − p∥ ≤ α_n∥f(x_n) − f(p)∥ + α_n∥f(p) − p∥ + (1 − α_n)∥T_nu_n − T_np∥ ≤ [1 − α_n(1 − ρ)]∥x_n − p∥ + α_n∥(I − f)p∥.

Then we have

∥x_n − p∥ ≤ (1/(1 − ρ))∥(I − f)p∥,

and hence {x_n} is bounded. From (3.2), we also derive that {u_n} is bounded.

Next, we claim that ∥x_n − u_n∥→0. Indeed, for any p∈U∩EP(ϕ), by Lemma 2.1, we have

∥u_n − p∥² = ∥Q_{β_n}x_n − Q_{β_n}p∥² ≤ ⟨x_n − p, u_n − p⟩ = ½(∥x_n − p∥² + ∥u_n − p∥² − ∥u_n − x_n∥²).

This implies that

∥u_n − p∥² ≤ ∥x_n − p∥² − ∥u_n − x_n∥².
(3.3)

Then from (3.3), we derive that

∥x_n − p∥² = ∥α_nf(x_n) + (1 − α_n)T_nu_n − p∥² = ∥α_nf(x_n) − α_np + (1 − α_n)T_nu_n − (1 − α_n)T_np∥²
≤ (1 − α_n)²∥u_n − p∥² + 2α_n⟨f(x_n) − p, x_n − p⟩
≤ ∥x_n − p∥² − ∥u_n − x_n∥² + 2α_n[ρ∥x_n − p∥ + ∥(I − f)p∥]∥x_n − p∥.

Since α_n→0, it follows that

lim_{n→∞} ∥x_n − u_n∥ = 0.

Then we show that ∥x_n − T_nx_n∥→0. Indeed,

∥x_n − T_nx_n∥ = ∥x_n − T_nu_n + T_nu_n − T_nx_n∥ ≤ ∥x_n − T_nu_n∥ + ∥T_nu_n − T_nx_n∥ ≤ α_n∥f(x_n) − T_nu_n∥ + ∥u_n − x_n∥.

Since α_n→0 and ∥x_n − u_n∥→0, we obtain

∥x_n − T_nx_n∥→0.

Thus,

∥u_n − T_nu_n∥ = ∥u_n − x_n + x_n − T_nx_n + T_nx_n − T_nu_n∥ ≤ ∥u_n − x_n∥ + ∥x_n − T_nx_n∥ + ∥T_nx_n − T_nu_n∥ ≤ ∥u_n − x_n∥ + ∥x_n − T_nx_n∥ + ∥x_n − u_n∥,

and

∥x_n − T_nu_n∥ ≤ ∥x_n − u_n∥ + ∥u_n − T_nu_n∥,

we have

∥u_n − T_nu_n∥→0 and ∥x_n − T_nu_n∥→0.

Observe that

∥P_C(I − λ_n∇g)u_n − u_n∥ = ∥s_nu_n + (1 − s_n)T_nu_n − u_n∥ = (1 − s_n)∥T_nu_n − u_n∥ ≤ ∥T_nu_n − u_n∥,

where s_n = (2 − λ_nL)/4∈(0, ½). Hence, since P_C is nonexpansive, we have

∥u_n − P_C(I − (2/L)∇g)u_n∥ ≤ ∥u_n − P_C(I − λ_n∇g)u_n∥ + ∥P_C(I − λ_n∇g)u_n − P_C(I − (2/L)∇g)u_n∥ ≤ ∥T_nu_n − u_n∥ + (2/L − λ_n)∥∇g(u_n)∥.

From the boundedness of {u_n}, s_n→0 (λ_n→2/L), and ∥u_n − T_nu_n∥→0, we conclude that

lim_{n→∞} ∥u_n − P_C(I − (2/L)∇g)u_n∥ = 0.

Since ∇g is L-Lipschitzian, ∇g is (1/L)-ism. Consequently, P_C(I − (2/L)∇g) is a nonexpansive self-mapping on C. As a matter of fact, for each x,y∈C we have

∥P_C(I − (2/L)∇g)x − P_C(I − (2/L)∇g)y∥² ≤ ∥(x − y) − (2/L)(∇g(x) − ∇g(y))∥² = ∥x − y∥² − (4/L)⟨x − y, ∇g(x) − ∇g(y)⟩ + (4/L²)∥∇g(x) − ∇g(y)∥² ≤ ∥x − y∥²,

where the last inequality uses the (1/L)-ism property of ∇g.

Consider a subsequence {u_{n_i}} of {u_n}. Since {u_{n_i}} is bounded, there exists a subsequence {u_{n_{i_j}}} of {u_{n_i}} which converges weakly to q. Next, we show that q∈U∩EP(ϕ). Without loss of generality, we can assume that u_{n_i}⇀q. Then, since ∥u_{n_i} − P_C(I − (2/L)∇g)u_{n_i}∥→0, by Lemma 2.4 we obtain

q = P_C(I − (2/L)∇g)q.

This shows that q∈U.

Next, we show that q∈EP(ϕ). Since u_n = Q_{β_n}x_n, for any y∈C we obtain

ϕ(u_n, y) + (1/β_n)⟨y − u_n, u_n − x_n⟩ ≥ 0.

From (A2), we have

(1/β_n)⟨y − u_n, u_n − x_n⟩ ≥ ϕ(y, u_n).

Replacing n by n i , we have

⟨y − u_{n_i}, (u_{n_i} − x_{n_i})/β_{n_i}⟩ ≥ ϕ(y, u_{n_i}).

Since (u_{n_i} − x_{n_i})/β_{n_i}→0 and u_{n_i}⇀q, it follows from (A4) that 0 ≥ ϕ(y, q) for all y∈C. Let

z_t = ty + (1 − t)q, t∈(0,1], y∈C,

then we have z_t∈C and hence ϕ(z_t, q) ≤ 0. Thus, from (A1) and (A4), we have

0 = ϕ(z_t, z_t) ≤ tϕ(z_t, y) + (1 − t)ϕ(z_t, q) ≤ tϕ(z_t, y),

and hence 0 ≤ ϕ(z_t, y). Letting t↓0, from (A3) we have 0 ≤ ϕ(q, y) for all y∈C, and hence q∈EP(ϕ). Therefore, q∈EP(ϕ)∩U.

On the other hand, we note that

x_n − q = α_nf(x_n) + (1 − α_n)T_nu_n − q = α_nf(x_n) − α_nf(q) + α_nf(q) − α_nq + (1 − α_n)(T_nu_n − q).

Hence, we obtain

∥x_n − q∥² = α_n⟨(f − I)q, x_n − q⟩ + ⟨α_n(f(x_n) − f(q)) + (1 − α_n)(T_nu_n − T_nq), x_n − q⟩ ≤ α_n⟨(f − I)q, x_n − q⟩ + (1 − α_n(1 − ρ))∥x_n − q∥².

It follows that

∥x_n − q∥² ≤ (1/(1 − ρ))⟨(f − I)q, x_n − q⟩.

In particular,

∥x_{n_i} − q∥² ≤ (1/(1 − ρ))⟨(f − I)q, x_{n_i} − q⟩.
(3.4)

Since x_{n_i}⇀q, it follows from (3.4) that x_{n_i}→q as i→∞.

Next, we show that q solves the variational inequality (3.1). Observe that

x_n = α_nf(x_n) + (1 − α_n)T_nu_n = α_nf(x_n) + (1 − α_n)T_nQ_{β_n}x_n.

Hence, we conclude that

(I − f)x_n = −(1/α_n)(I − T_nQ_{β_n})x_n − T_nQ_{β_n}x_n + x_n.

Since T_n and Q_{β_n} are nonexpansive, we have that I − T_nQ_{β_n} is monotone. Note that for any given z∈U∩EP(ϕ) we have (I − T_nQ_{β_n})z = 0, and hence

⟨(I − f)x_n, x_n − z⟩ = −((1 − α_n)/α_n)⟨(I − T_nQ_{β_n})x_n − (I − T_nQ_{β_n})z, x_n − z⟩ ≤ 0.

Now, replacing n with n_i in the above inequality and letting i→∞, we have

⟨(I − f)q, q − z⟩ = lim_{i→∞}⟨(I − f)x_{n_i}, x_{n_i} − z⟩ ≤ 0.

From the arbitrariness of z∈U∩EP(ϕ), it follows that q∈U∩EP(ϕ) is a solution of the variational inequality (3.1). Furthermore, by the uniqueness of the solution of the variational inequality (3.1), we conclude that x_n→q as n→∞. The variational inequality (3.1) can be written as

⟨f(q) − q, q − z⟩ ≥ 0, ∀z∈U∩EP(ϕ).

So, in terms of Lemma 2.2, it is equivalent to the following equality:

P_{U∩EP(ϕ)}f(q) = q.

This completes the proof. □

Theorem 3.2 Let C be a nonempty closed convex subset of a real Hilbert space H and let ϕ be a bifunction from C×C into ℝ satisfying (A1), (A2), (A3), and (A4). Let g:C→ℝ be a real-valued convex function, and assume that ∇g is an L-Lipschitzian mapping with L > 0 and that f:C→C is a contraction with constant ρ∈(0,1). Assume that U∩EP(ϕ) ≠ ∅. Let {x_n} be a sequence generated by x_1∈C and

{ ϕ(u_n, y) + (1/β_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y∈C,
  x_{n+1} = α_nf(x_n) + (1 − α_n)T_nu_n, n∈ℕ,

where u_n = Q_{β_n}x_n, P_C(I − λ_n∇g) = s_nI + (1 − s_n)T_n, s_n = (2 − λ_nL)/4 and {λ_n}⊂(0, 2/L). Let {α_n}, {β_n} and {s_n} satisfy the following conditions:

  1. (i)

{β_n}⊂(0,∞), lim inf_{n→∞} β_n > 0, ∑_{n=1}^∞ |β_{n+1} − β_n| < ∞;

  2. (ii)

{α_n}⊂(0,1), lim_{n→∞} α_n = 0, ∑_{n=1}^∞ α_n = ∞, ∑_{n=1}^∞ |α_{n+1} − α_n| < ∞;

  3. (iii)

{s_n}⊂(0, ½), lim_{n→∞} s_n = 0 (lim_{n→∞} λ_n = 2/L), ∑_{n=1}^∞ |s_{n+1} − s_n| < ∞.

Then {x_n} converges strongly to a point q∈U∩EP(ϕ) which solves the variational inequality (3.1).

Proof First, we show that {x_n} is bounded. Indeed, pick any p∈U∩EP(ϕ). Since u_n = Q_{β_n}x_n and p = Q_{β_n}p, we know that for any n∈ℕ,

∥u_n − p∥ = ∥Q_{β_n}x_n − Q_{β_n}p∥ ≤ ∥x_n − p∥.
(3.5)

Thus, we derive that (noting that T_np = p and T_n is nonexpansive)

∥x_{n+1} − p∥ = ∥α_nf(x_n) + (1 − α_n)T_nu_n − p∥ ≤ α_nρ∥x_n − p∥ + (1 − α_n)∥x_n − p∥ + α_n∥f(p) − p∥ ≤ (1 − α_n(1 − ρ))∥x_n − p∥ + α_n∥f(p) − p∥.

By induction, we have

∥x_n − p∥ ≤ max{∥x_1 − p∥, (1/(1 − ρ))∥f(p) − p∥},

and hence {x_n} is bounded. From (3.5), we also derive that {u_n} is bounded.

Next, we show that ∥x_{n+1} − x_n∥→0. Indeed, since ∇g is (1/L)-ism, P_C(I − λ_n∇g) is nonexpansive. It follows that for any given p∈U,

∥P_C(I − λ_n∇g)u_{n−1}∥ ≤ ∥P_C(I − λ_n∇g)u_{n−1} − p∥ + ∥p∥ = ∥P_C(I − λ_n∇g)u_{n−1} − P_C(I − λ_n∇g)p∥ + ∥p∥ ≤ ∥u_{n−1} − p∥ + ∥p∥ ≤ ∥u_{n−1}∥ + 2∥p∥.

This, together with the boundedness of {u_n}, implies that {P_C(I − λ_n∇g)u_{n−1}} is bounded.

Also, observe that

for some appropriate constant M_1 > 0 such that

M_1 ≥ L∥P_C(I − λ_n∇g)u_{n−1}∥ + 4∥∇g(u_{n−1})∥ + L∥u_{n−1}∥, n ≥ 1.

Thus, we get

(3.6)

for some appropriate constant M_2 > 0 such that

M_2 ≥ max{∥f(x_{n−1})∥ + ∥T_{n−1}u_{n−1}∥, 4M_1/L}, n ≥ 1.

From u_{n+1} = Q_{β_{n+1}}x_{n+1} and u_n = Q_{β_n}x_n, we note that

ϕ(u_{n+1}, y) + (1/β_{n+1})⟨y − u_{n+1}, u_{n+1} − x_{n+1}⟩ ≥ 0, ∀y∈C,
(3.7)

and

ϕ(u_n, y) + (1/β_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y∈C.
(3.8)

Putting y = u_n in (3.7) and y = u_{n+1} in (3.8), we have

ϕ(u_{n+1}, u_n) + (1/β_{n+1})⟨u_n − u_{n+1}, u_{n+1} − x_{n+1}⟩ ≥ 0,

and

ϕ(u_n, u_{n+1}) + (1/β_n)⟨u_{n+1} − u_n, u_n − x_n⟩ ≥ 0.

So, from (A2), we have

⟨u_{n+1} − u_n, (u_n − x_n)/β_n − (u_{n+1} − x_{n+1})/β_{n+1}⟩ ≥ 0,

and hence

⟨u_{n+1} − u_n, u_n − u_{n+1} + u_{n+1} − x_n − (β_n/β_{n+1})(u_{n+1} − x_{n+1})⟩ ≥ 0.

Since lim inf_{n→∞} β_n > 0, without loss of generality let us assume that there exists a real number a such that β_n > a > 0 for all n∈ℕ. Thus, we have

∥u_{n+1} − u_n∥² ≤ ⟨u_{n+1} − u_n, x_{n+1} − x_n + (1 − β_n/β_{n+1})(u_{n+1} − x_{n+1})⟩ ≤ ∥u_{n+1} − u_n∥{∥x_{n+1} − x_n∥ + |1 − β_n/β_{n+1}|∥u_{n+1} − x_{n+1}∥},

thus,

∥u_{n+1} − u_n∥ ≤ ∥x_{n+1} − x_n∥ + (1/a)|β_{n+1} − β_n|M_3,
(3.9)

where M_3 = sup{∥u_n − x_n∥ : n∈ℕ}.

From (3.6) and (3.9), we obtain

where M = max{M_2, M_3/a}. Hence, by Lemma 2.5, we have

lim_{n→∞} ∥x_{n+1} − x_n∥ = 0.
(3.10)

Then, from (3.9) and (3.10), and |β_{n+1} − β_n|→0, we have

lim_{n→∞} ∥u_{n+1} − u_n∥ = 0.

For any p∈U∩EP(ϕ), as in the proof of Theorem 3.1, we have

∥u_n − p∥² ≤ ∥x_n − p∥² − ∥u_n − x_n∥².
(3.11)

Then from (3.11), we derive that

∥x_{n+1} − p∥² = ∥α_nf(x_n) − α_np + (1 − α_n)T_nu_n − (1 − α_n)T_np∥²
≤ α_n²∥f(x_n) − p∥² + 2α_n(1 − α_n)∥f(x_n) − p∥∥u_n − p∥ + (1 − α_n)²∥u_n − p∥²
≤ α_n(∥f(x_n) − p∥² + 2∥f(x_n) − p∥∥u_n − p∥) + ∥u_n − p∥²
≤ ∥x_n − p∥² − ∥u_n − x_n∥² + α_n(∥f(x_n) − p∥² + 2∥f(x_n) − p∥∥u_n − p∥).

Since α_n→0 and ∥x_n − x_{n+1}∥→0, we have

lim_{n→∞} ∥x_n − u_n∥ = 0.

Next, we have

∥x_n − T_nx_n∥ ≤ ∥x_n − x_{n+1}∥ + ∥x_{n+1} − T_nx_n∥ = ∥x_n − x_{n+1}∥ + ∥α_nf(x_n) + (1 − α_n)T_nu_n − T_nx_n∥ ≤ ∥x_n − x_{n+1}∥ + α_n∥f(x_n) − T_nu_n∥ + ∥u_n − x_n∥.

Then, since α_n→0, ∥x_n − x_{n+1}∥→0, and ∥x_n − u_n∥→0, we get ∥x_n − T_nx_n∥→0, and it follows that ∥u_n − T_nu_n∥→0.

Now, we show that

lim sup_{n→∞} ⟨(I − f)q, q − x_n⟩ ≤ 0,

where q = P_{U∩EP(ϕ)}f(q) is the unique solution of the variational inequality (3.1). Indeed, take a subsequence {x_{n_k}} of {x_n} such that

lim sup_{n→∞} ⟨(I − f)q, q − x_n⟩ = lim_{k→∞} ⟨(I − f)q, q − x_{n_k}⟩.

Since {x_n} is bounded, without loss of generality we may assume that x_{n_k}⇀x̃. By the same argument as in the proof of Theorem 3.1, we have x̃∈U∩EP(ϕ).

Since q = P_{U∩EP(ϕ)}f(q), it follows that

lim sup_{n→∞} ⟨(I − f)q, q − x_n⟩ = ⟨(I − f)q, q − x̃⟩ ≤ 0.
(3.12)

From

x_{n+1} − q = α_nf(x_n) + (1 − α_n)T_nu_n − q = α_nf(x_n) − α_nf(q) + α_nf(q) − α_nq + (1 − α_n)T_nu_n − (1 − α_n)T_nq,

we have

∥x_{n+1} − q∥² = ∥α_n(f(x_n) − f(q)) + α_n(f(q) − q) + (1 − α_n)(T_nu_n − T_nq)∥² ≤ (1 − α_n)²∥T_nu_n − T_nq∥² + 2α_n⟨f(x_n) − f(q) − (I − f)q, x_{n+1} − q⟩.

This implies that

∥x_{n+1} − q∥² ≤ (1 − α_n)²∥x_n − q∥² + 2α_nρ∥x_n − q∥∥x_{n+1} − q∥ + 2α_n⟨(I − f)q, q − x_{n+1}⟩
≤ (1 − α_n)²∥x_n − q∥² + α_nρ(∥x_n − q∥² + ∥x_{n+1} − q∥²) + 2α_n⟨(I − f)q, q − x_{n+1}⟩.

Then, we have

∥x_{n+1} − q∥² ≤ ((1 − 2α_n + α_nρ)/(1 − α_nρ))∥x_n − q∥² + (α_n²/(1 − α_nρ))∥x_n − q∥² + (2α_n/(1 − α_nρ))⟨(I − f)q, q − x_{n+1}⟩
≤ (1 − 2(1 − ρ)α_n)∥x_n − q∥² + (α_n²/(1 − α_nρ))∥x_n − q∥² + (2α_n/(1 − α_nρ))⟨(I − f)q, q − x_{n+1}⟩
≤ (1 − 2(1 − ρ)α_n)∥x_n − q∥² + 2(1 − ρ)α_n((α_n/(2(1 − ρ)(1 − α_nρ)))M′ + (1/((1 − ρ)(1 − α_nρ)))⟨(I − f)q, q − x_{n+1}⟩)
= (1 − 2(1 − ρ)α_n)∥x_n − q∥² + 2(1 − ρ)α_nδ_n,

where M′ = sup{∥x_n − q∥² : n∈ℕ} and δ_n = (α_n/(2(1 − ρ)(1 − α_nρ)))M′ + (1/((1 − ρ)(1 − α_nρ)))⟨(I − f)q, q − x_{n+1}⟩.

It is easy to see that lim_{n→∞} 2(1 − ρ)α_n = 0, ∑_{n=1}^∞ 2(1 − ρ)α_n = ∞, and lim sup_{n→∞} δ_n ≤ 0 by (3.12). Hence, by Lemma 2.5, the sequence {x_n} converges strongly to q. This completes the proof. □
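To see the explicit scheme of Theorem 3.2 run on a concrete instance, the sketch below takes the special case ϕ ≡ 0 (so Q_{β_n} = P_C and EP(ϕ) = C) with a toy quadratic g and a contraction f of our own choosing; the sequences α_n = 1/(n+1), s_n = 1/(4(n+1)) (hence λ_n = (2 − 4s_n)/L) and β_n ≡ 1 satisfy conditions (i)-(iii).

```python
import numpy as np

# Toy instance: g(x) = 0.5*||A x - b||^2 over C = [0, 1]^2, phi = 0 (so Q_{beta_n} = P_C).
A = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 0.2])
L = np.linalg.norm(A.T @ A, 2)

def grad_g(x):
    return A.T @ (A @ x - b)

def P_C(x):
    return np.clip(x, 0.0, 1.0)

def T(x, lam):                                # nonexpansive part: P_C(I - lam*grad g) = s I + (1 - s) T
    s = (2.0 - lam * L) / 4.0
    return (P_C(x - lam * grad_g(x)) - s * x) / (1.0 - s)

anchor = np.array([0.0, 1.0])
def f(x):                                     # contraction with constant rho = 0.5
    return 0.5 * x + 0.5 * anchor

# Explicit scheme of Theorem 3.2: u_n = Q_{beta_n} x_n (= P_C x_n here),
# x_{n+1} = alpha_n f(x_n) + (1 - alpha_n) T_n u_n.
x = np.array([0.9, 0.9])
for n in range(1, 20000):
    alpha_n = 1.0 / (n + 1)                   # condition (ii)
    s_n = 1.0 / (4.0 * (n + 1))               # condition (iii)
    lam_n = (2.0 - 4.0 * s_n) / L             # equivalent to s_n = (2 - lam_n * L)/4
    u = P_C(x)
    x = alpha_n * f(x) + (1 - alpha_n) * T(u, lam_n)

print("approximate limit q (a minimizer of g over C):", x)
```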

4 Conclusions

Methods for solving the equilibrium problem and the constrained convex minimization problem have each been studied extensively in Hilbert spaces. To the best of our knowledge, however, this is the first time in the literature that implicit and explicit algorithms are introduced for finding a common element of the set of solutions of an equilibrium problem and the set of solutions of a constrained convex minimization problem, where the common element also solves a certain variational inequality.

References

  1. Mann WR: Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4: 506–510. 10.1090/S0002-9939-1953-0054846-3

  2. Moudafi A: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615

  3. Moudafi A, Théra M: Proximal and dynamical approaches to equilibrium problems. In Ill-Posed Variational Problems and Regularization Techniques (Trier, 1998). Lecture Notes in Economics and Mathematical Systems 477. Springer, Berlin; 1999:187–201.

  4. Ceng LC, Al-Homidan S, Ansari QH, Yao J-C: An iterative scheme for equilibrium problems and fixed point problems of strict pseudo-contraction mappings. J. Comput. Appl. Math. 2009, 223: 967–974. 10.1016/j.cam.2008.03.032

  5. Takahashi S, Takahashi W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331: 506–515. 10.1016/j.jmaa.2006.08.036

  6. Tian M: An application of hybrid steepest descent methods for equilibrium problems and strict pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2011, 2011: Article ID 173430. doi:10.1155/2011/173430

  7. Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications (Haifa). 2001.

  8. Liu Y: A general iterative method for equilibrium problems and strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2009, 71: 4852–4861. 10.1016/j.na.2009.03.060

  9. Plubtieng S, Punpaeng R: A general iterative method for equilibrium problems and fixed point problems in Hilbert space. J. Math. Anal. Appl. 2007, 336: 455–469. 10.1016/j.jmaa.2007.02.044

  10. Jung JS: Strong convergence of iterative methods for k-strictly pseudo-contractive mappings in Hilbert spaces. Appl. Math. Comput. 2010, 215: 3746–3753. 10.1016/j.amc.2009.11.015

  11. Jung JS: Strong convergence of composite iterative methods for equilibrium problems and fixed point problems. Appl. Math. Comput. 2009, 213: 498–505. 10.1016/j.amc.2009.03.048

  12. Kumam P: A new hybrid iterative method for solution of equilibrium problems and fixed point problems for an inverse strongly monotone operator and a nonexpansive mapping. J. Appl. Math. Comput. 2009, 29: 263–280. 10.1007/s12190-008-0129-1

  13. Kumam P: A hybrid approximation method for equilibrium and fixed point problems for a monotone mapping and a nonexpansive mapping. Nonlinear Anal. Hybrid Syst. 2008, 2(4): 1245–1255. 10.1016/j.nahs.2008.09.017

  14. Saewan S, Kumam P: The shrinking projection method for solving generalized equilibrium problem and common fixed points for asymptotically quasi-ϕ-nonexpansive mappings. Fixed Point Theory Appl. 2011. doi:10.1186/1687-1812-2011-9

  15. Tada A, Takahashi W: Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem. J. Optim. Theory Appl. 2007, 133: 359–370. 10.1007/s10957-007-9187-z

  16. Yao Y, Liou YC, Yao JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007, 2007: Article ID 64363.

  17. Su YF, Shang MJ, Qin XL: An iterative method of solution for equilibrium and optimization problems. Nonlinear Anal., Theory Methods Appl. 2008, 69: 2709–2719. 10.1016/j.na.2007.08.045

  18. Plubtieng S, Punpaeng R: A new iterative method for equilibrium problems and fixed point problems of nonlinear mappings and monotone mappings. Appl. Math. Comput. 2008, 179: 548–558.

  19. Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z

  20. Xu HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116: 659–678. 10.1023/A:1023073621589

  21. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332

  22. Ceng LC, Ansari QH, Yao J-C: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005

  23. Marino G, Xu HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. 10.1016/j.jmaa.2005.05.028

  24. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059

  25. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

  26. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.

  27. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

  28. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.

  29. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965

  30. Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5


Acknowledgements

The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported in part by The Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. ZXH2012K001), and by the Science Research Foundation of Civil Aviation University of China (No. 2012KYM03).

Author information


Corresponding author

Correspondence to Ming Tian.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All the authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Tian, M., Liu, L. Iterative algorithms based on the viscosity approximation method for equilibrium and constrained convex minimization problem. Fixed Point Theory Appl 2012, 201 (2012). https://doi.org/10.1186/1687-1812-2012-201
