
The hybrid steepest descent method for solutions of equilibrium problems and other problems in fixed point theory

Abstract

In this paper, we combine the gradient projection algorithm and the hybrid steepest descent method and prove strong convergence to a common element of the solution set of an equilibrium problem, the null space of an inverse strongly monotone operator, the set of fixed points of a continuous pseudocontractive mapping, and the set of minimizers of a convex function. This common element is shown to be the unique solution of a variational inequality problem.

MSC:47H06, 47H09, 47J05, 47J25.

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, and let $K$ be a nonempty, closed, and convex subset of $H$. Let $F$ be a bifunction of $K\times K$ into $\mathbb{R}$. The equilibrium problem for $F:K\times K\to\mathbb{R}$ is to find $x\in K$ such that

$$F(x,y)\ge 0,\quad \forall y\in K. \tag{1.1}$$

The set of solutions of (1.1) is denoted by $EP(F)$. For a given nonlinear operator $A$, the problem of finding $z\in K$ such that

$$\langle Az, y-z\rangle \ge 0,\quad \forall y\in K, \tag{1.2}$$

is called the variational inequality problem (VIP), and the set of solutions of the VIP is denoted by $VIP(A,K)$.

Given a mapping $A:K\to H$, let $F(x,y)=\langle Ax, y-x\rangle$ for all $x,y\in K$. Then $z\in EP(F)$ if and only if $\langle Az, y-z\rangle \ge 0$, $\forall y\in K$, that is, $z$ is a solution of the variational inequality (1.2).

The mapping $T:K\to K$ is said to be Lipschitz if there exists $L\ge 0$ such that

$$\|Tx-Ty\| \le L\|x-y\|,\quad \forall x,y\in K. \tag{1.3}$$

The operator $T$ is said to be a contraction if $L<1$ in (1.3), and nonexpansive if $L=1$. Let $H$ be a real Hilbert space and $K$ a nonempty subset of $H$. A mapping $T:K\to H$ is said to be pseudocontractive if, for all $x,y\in K$,

$$\langle Tx-Ty, x-y\rangle \le \|x-y\|^2. \tag{1.4}$$

Equivalently, (1.4) can be written as

$$\|Tx-Ty\|^2 \le \|x-y\|^2 + \|(I-T)x-(I-T)y\|^2,\quad \forall x,y\in K. \tag{1.5}$$

The set of fixed points of a mapping $T$ is denoted by $F(T)=\{x\in K : Tx=x\}$.

In what follows, we shall use $\to$ for strong convergence and $\rightharpoonup$ for weak convergence.

For every point $x\in H$, there exists a unique nearest point in $K$, denoted by $P_K x$, such that $\|x-P_K x\| \le \|x-y\|$, $\forall y\in K$. The map $P_K$ is called the metric projection of $H$ onto $K$. It is also well known that $P_K$ satisfies

$$\langle P_K x - P_K y, x-y\rangle \ge \|P_K x - P_K y\|^2,\quad \forall x,y\in H.$$

Moreover, $P_K x$ is characterized by the properties

$$P_K x \in K \quad\text{and}\quad \langle x - P_K x, P_K x - y\rangle \ge 0,\quad \forall y\in K.$$
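These two properties of $P_K$ can be checked numerically for a concrete convex set. The following is a minimal illustration, not part of the paper's development, assuming $K=[0,1]^n$ (for a box, the metric projection is the coordinate-wise clip); the function name and test points are ours:

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Metric projection P_K onto the box K = [lo, hi]^n: coordinate-wise clip."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=5)        # a point generally outside K
px = project_box(x)

for _ in range(100):
    y = rng.uniform(0.0, 1.0, size=5)                    # arbitrary y in K
    # nearest-point property: ||x - P_K x|| <= ||x - y||
    assert np.linalg.norm(x - px) <= np.linalg.norm(x - y) + 1e-12
    # variational characterization: <x - P_K x, P_K x - y> >= 0
    assert np.dot(x - px, px - y) >= -1e-12
```

The same check works for any set whose projection has a closed form (balls, boxes, half-spaces, affine subspaces).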

Consider the optimization problem:

$$\min f(x)\quad\text{such that } x\in K, \tag{1.6}$$

where $f:K\to\mathbb{R}\cup\{\infty\}$ is a real-valued convex functional. If $f$ is a continuously Fréchet differentiable convex functional on $K$, then $x^*\in K$ is a solution of the optimization problem (1.6) if and only if the optimality condition

$$\langle \nabla f(x^*), y-x^*\rangle \ge 0,\quad \forall y\in K, \tag{1.7}$$

holds.

Using the characterization of the projection operator, one can easily show that solving the variational inequality (1.7) is equivalent to solving the fixed point problem of finding $x^*\in K$ which satisfies

$$x^* = P_K(I-\mu\nabla f)x^*,$$

where $\mu>0$ is a constant. An iterative scheme for the variational inequality problem (1.7) may then be formulated as follows: for arbitrary $x_1\in K$, define $\{x_n\}_{n\ge 1}$ by

$$x_{n+1} = P_K(I-\mu\nabla f)x_n \tag{1.8}$$

or, more generally,

$$x_{n+1} = P_K(I-\mu_n\nabla f)x_n, \tag{1.9}$$
where the parameters $\mu$, $\mu_n$ are positive real numbers known as step-sizes. The scheme (1.9) has been considered with several step-size rules:

  • Constant step-size: for some $\mu>0$, $\mu_n=\mu$ for all $n$.

  • Diminishing step-size: $\mu_n\to 0$ and $\sum_{n=1}^{\infty}\mu_n=\infty$.

  • Polyak's step-size: $\mu_n = \frac{f(x_n)-f^*}{\|\nabla f(x_n)\|^2}$, where $f^*$ is the optimal value of (1.6).

  • Modified Polyak's step-size: $\mu_n = \frac{f(x_n)-\hat f_n}{\|\nabla f(x_n)\|^2}$, where $\hat f_n = \min_{0\le j\le n} f(x_j) - \delta$ for some scalar $\delta>0$.

The constant step-size rule is suitable when we are interested in finding an approximate solution to the problem (1.6). The diminishing step-size rule is an off-line rule, typically used with $\mu_n = \frac{c}{n+1}$ or $\mu_n = \frac{c}{\sqrt{n+1}}$ in some distributed implementations of the method.
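The scheme (1.9) under the first two step-size rules can be sketched as follows. This is an illustrative toy instance of ours, not from the paper: we take $f(x)=\frac{1}{2}\|x-b\|^2$ (so $\nabla f(x)=x-b$) and $K$ the closed unit ball, whose projection has a closed form; all names and constants are assumptions for the sketch.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """P_K for K = closed ball of the given radius centered at the origin."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def gradient_projection(grad, project, x0, step, n_iter=500):
    """Scheme (1.9): x_{n+1} = P_K(x_n - mu_n * grad f(x_n))."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        x = project(x - step(n) * grad(x))
    return x

b = np.array([2.0, 0.0])
grad = lambda x: x - b                 # gradient of f(x) = 0.5 * ||x - b||^2
constant = lambda n: 0.5               # constant step-size rule
diminishing = lambda n: 1.0 / (n + 1)  # mu_n -> 0, sum mu_n = infinity

x_const = gradient_projection(grad, project_ball, [0.0, 1.0], constant)
x_dim = gradient_projection(grad, project_ball, [0.0, 1.0], diminishing)
# both runs approach the constrained minimizer b/||b|| = (1, 0)
```

Here $\nabla f = I - b$ is $1$-strongly monotone and $1$-Lipschitz, which is exactly the restrictive setting discussed next.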

These schemes are the well-known gradient projection algorithms. However, the convergence of these schemes requires the operator $\nabla f$ to be Lipschitz continuous and strongly monotone, which is a strong condition and restrictive in applications. If $\nabla f$ is Lipschitz continuous and strongly monotone on $H$, the map $P_K(I-\mu\nabla f)$ is a strict contraction for suitable $\mu$, and by the Banach contraction principle the sequence $\{x_n\}$ defined by (1.8) converges strongly to the unique minimizer of (1.6), which is the solution of the variational inequality problem (1.7). Another limitation of the scheme (1.8) is that it assumes a closed form expression of $P_K:H\to K$ is known, whereas in many situations it is not.

The iterative approximation of fixed points and zeros of the nonlinear operators has been studied extensively by many authors to solve nonlinear operator equations as well as variational inequality problems (see [1, 2], and the references therein).

Ceng et al. [3] studied the following algorithm:

$$x_{n+1} = P_C\bigl[s_n\gamma V x_n + (I - s_n\mu F)T_n x_n\bigr],\quad n\ge 0,$$

where $s_n = \frac{2-\lambda_n L}{4}$ and $P_C(I-\lambda_n\nabla f) = s_n I + (1-s_n)T_n$, $n\ge 0$, and they proved that the sequence $\{x_n\}$ converges strongly to a minimizer of a constrained convex minimization problem which also solves a certain variational inequality.

For $r>0$ and $T_r$, $F_r$ as in Lemma 2.2 and Lemma 2.3, respectively, Ofoedu [4] introduced the following iteration scheme:

$$x_{n+1} = \alpha_n u + \beta_n x_n + \bigl((1-\beta_n)I - \alpha_n A\bigr)W_n(I-\varepsilon B)T_{r_n}F_{r_n}x_n,\quad n\ge 1,$$

and proved that if $H$ is a real Hilbert space; $S:H\to H$ is a continuous pseudocontractive mapping; $T_j:H\to H$, $j=1,2,3,\dots$, is a countably infinite family of nonexpansive mappings; $f:H\times H\to\mathbb{R}$ is a bifunction satisfying (A1)-(A4); $\Phi:H\to\mathbb{R}\cup\{+\infty\}$ is a proper lower semicontinuous convex function; $\Theta:H\to H$ is a continuous monotone mapping; $u\in H$ is a fixed vector; $A:H\to H$ is a strongly positive bounded linear operator with coefficient $\gamma$; $B:H\to H$ is an $\eta$-inverse strongly monotone mapping; and the sequences $\{r_n\}$, $\{\alpha_n\}$, $\{\beta_n\}$ satisfy appropriate conditions, then the sequence $\{x_n\}$ converges strongly to the unique solution $x^*\in \Omega = F(S)\cap GMEP(f,\Phi,\Theta)\cap B^{-1}(0)\cap\bigl(\bigcap_j F(T_j)\bigr)$ of the variational inequality $\langle u - Ax^*, x-x^*\rangle \le 0$, $\forall x\in\Omega$.

In 2001, Yamada [5] introduced the hybrid steepest descent method, which solves the variational inequality $VIP(F,K)$ over the set $K$ of fixed points of a nonexpansive map $T$. In particular, he studied the scheme

$$x_{n+1} = (I-\alpha_n\mu F)T x_n$$

and proved the following theorem.

Theorem IY [5]

Assume that $H$ is a real Hilbert space, $T:H\to H$ is nonexpansive with $F(T)\ne\emptyset$, and $A:H\to H$ is $\eta$-strongly monotone and $L$-Lipschitz. Let $\mu\in(0,\frac{2\eta}{L^2})$. Assume also that the sequence $\{\lambda_n\}\subset(0,1)$ satisfies the following conditions:

  1. (i) $\lambda_n\to 0$ as $n\to\infty$,

  2. (ii) $\sum_{n=0}^{\infty}\lambda_n=\infty$,

  3. (iii) $\sum_{n=0}^{\infty}|\lambda_{n+1}-\lambda_n|<\infty$ or $\lim_{n\to\infty}\frac{\lambda_n}{\lambda_{n+1}}=1$.

Take $x_0\in H$ arbitrary and define $\{x_n\}_{n\ge 1}$ by $x_{n+1}=(I-\lambda_n\mu A)Tx_n$; then $\{x_n\}_{n\ge 1}$ converges strongly to the unique solution $x^*\in F(T)$ of $VIP(A,K)$, where $K$ is the set of fixed points of $T$.

Yamada's scheme minimizes certain convex functions over the intersection of fixed point sets of nonexpansive mappings when $A=\nabla f$, where $f$ is a continuously Fréchet differentiable convex function. It solves the variational inequality $VIP(A,K)$ without requiring a closed form expression of $P_K$; instead, it requires a closed form expression of a nonexpansive mapping $T$ whose set of fixed points is $K$.
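Yamada's iteration can be sketched with a toy instance in which $P_K$ itself is never evaluated, only a closed-form nonexpansive $T$ with $F(T)=K$. This sketch is ours, not from [5]: we take $K$ as the intersection of two balls and $T$ as the composition of the two ball projections (for convex sets with nonempty intersection, the fixed points of the composed projections are exactly the intersection), and $F=\nabla\frac{1}{2}\|\cdot-b\|^2$, which is $1$-strongly monotone and $1$-Lipschitz, so any $\mu\in(0,2)$ is admissible.

```python
import numpy as np

def proj_ball(center, radius):
    c = np.asarray(center, dtype=float)
    def P(x):
        d = x - c
        nd = np.linalg.norm(d)
        return x if nd <= radius else c + (radius / nd) * d
    return P

# K = B((0,0),1) ∩ B((1,0),1); T = composition of the two projections,
# nonexpansive with F(T) = K since the intersection is nonempty.
P1, P2 = proj_ball([0.0, 0.0], 1.0), proj_ball([1.0, 0.0], 1.0)
T = lambda x: P1(P2(x))

b = np.array([3.0, 3.0])
F = lambda x: x - b      # eta = 1 (strong monotonicity), L = 1 (Lipschitz)
mu = 1.0                 # mu in (0, 2*eta/L^2) = (0, 2)

x = np.zeros(2)
for n in range(5000):
    lam = 1.0 / (n + 1)          # lambda_n -> 0, sum lambda_n = infinity
    y = T(x)
    x = y - lam * mu * F(y)      # x_{n+1} = (I - lambda_n mu F) T x_n
```

In this instance the unique solution of $VIP(F,K)$ is the projection of $b$ onto $K$, namely $(1/\sqrt{2},\,1/\sqrt{2})$, and the iterates approach it.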

Motivated by the work of Yamada [5], Tian [6] introduced the following scheme:

$$\begin{cases} \phi(u_n,y) + \frac{1}{\lambda_n}\langle y-u_n, u_n-x_n\rangle \ge 0, & \forall y\in C,\\ y_n = \beta_n u_n + (1-\beta_n)S u_n,\\ x_{n+1} = (I-\alpha_n\mu A)y_n, & n\in\mathbb{N}, \end{cases} \tag{1.10}$$

and he proved that if $\alpha_n$, $\beta_n$, $\lambda_n$ satisfy certain conditions, then the sequence $\{x_n\}$ given by (1.10) converges strongly to $q\in F(S)\cap EP(\phi)$, which solves the variational inequality $\langle Aq, p-q\rangle \ge 0$, $\forall p\in F(S)\cap EP(\phi)$.

In 2012, Tian and Liu [7] introduced the following scheme:

$$\begin{cases} \Phi(u_n,y) + \frac{1}{r_n}\langle y-u_n, u_n-x_n\rangle \ge 0, & \forall y\in C,\\ x_{n+1} = (I-\alpha_n\mu F)T_n u_n, & n\in\mathbb{N}, \end{cases}$$

where $u_n = Q_{r_n}x_n$, $P_C(I-\lambda_n\nabla f) = s_n I + (1-s_n)T_n$, $s_n = \frac{2-\lambda_n L}{4}$, and proved that if $C$ is a nonempty, closed, and convex subset of a real Hilbert space $H$; $\Phi$ is a bifunction from $C\times C$ into $\mathbb{R}$ satisfying (A1)-(A4); $f:C\to\mathbb{R}$ is a real-valued convex function; $\nabla f$ is an $L$-Lipschitzian mapping with $L\ge 0$; $\Omega\cap EP(\Phi)\ne\emptyset$, where $\Omega$ is the solution set of a minimization problem; $F:C\to H$ is a $k$-Lipschitzian continuous and $\eta$-strongly monotone operator with constants $k,\eta>0$; $0<\mu<\frac{2\eta}{k^2}$, $\tau=\mu(\eta-\frac{\mu k^2}{2})$; and the sequences $\{\alpha_n\}$, $\{r_n\}$, $\{\lambda_n\}$ satisfy appropriate conditions, then the sequence $\{x_n\}$ generated from $x_1\in H$ converges strongly to a point $q\in\Omega\cap EP(\Phi)$ which solves the variational inequality $\langle Fq, p-q\rangle \ge 0$, $\forall p\in\Omega\cap EP(\Phi)$.

In this paper, motivated by the results of Ofoedu [4], Yamada [5], Tian [6], and Tian and Liu [7], we study a new iterative scheme and prove its strong convergence to a common element of the solution set of an equilibrium problem, the null space of an inverse strongly monotone operator, the set of fixed points of a continuous pseudocontractive mapping, and the set of minimizers of a convex function. This common element is shown to be the unique solution of a variational inequality problem.

2 Preliminaries

For solving the equilibrium problem for a bifunction $F:K\times K\to\mathbb{R}$, we assume that $F$ satisfies the following conditions:

(A1) $F(x,x)=0$, $\forall x\in K$.

(A2) $F$ is monotone, i.e., $F(x,y)+F(y,x)\le 0$, $\forall x,y\in K$.

(A3) For each $x,y,z\in K$ and $t\in(0,1]$, $\limsup_{t\downarrow 0} F(tz+(1-t)x, y) \le F(x,y)$.

(A4) For each $x\in K$, the function $y\mapsto F(x,y)$ is convex and lower semicontinuous.

Lemma 2.1 (Blum and Oettli [8])

Let $K$ be a nonempty, closed, and convex subset of $H$ and let $f$ be a bifunction from $K\times K$ to $\mathbb{R}$ satisfying (A1)-(A4). Then, for $r>0$ and $x\in H$, there exists $z\in K$ such that

$$f(z,y) + \frac{1}{r}\langle y-z, z-x\rangle \ge 0,\quad \forall y\in K.$$

Lemma 2.2 (Zegeye [9])

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $S:C\to H$ be a continuous pseudocontractive mapping. Then, for $r>0$ and $x\in H$, there exists $z\in C$ such that

$$\langle y-z, Sz\rangle - \frac{1}{r}\langle y-z, (1+r)z-x\rangle \le 0,\quad \forall y\in C.$$

Furthermore, if

$$T_r x = \Bigl\{ z\in C : \langle y-z, Sz\rangle - \frac{1}{r}\langle y-z, (1+r)z-x\rangle \le 0,\ \forall y\in C \Bigr\},\quad \forall x\in H,$$

then the following hold:

(C1) $T_r$ is single valued;

(C2) $T_r$ is firmly nonexpansive, i.e., for any $x,y\in H$,

$$\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x-y\rangle;$$

(C3) $F(T_r)=F(S)$;

(C4) $F(S)$ is closed and convex.

Lemma 2.3 (Combettes and Hirstoaga [10])

Assume that $f:K\times K\to\mathbb{R}$ satisfies (A1)-(A4). For $r>0$ and $x\in H$, define $F_r:H\to K$ by

$$F_r x = \Bigl\{ z\in K : f(z,y) + \frac{1}{r}\langle y-z, z-x\rangle \ge 0,\ \forall y\in K \Bigr\};$$

then the following hold:

(B1) $F_r$ is single valued;

(B2) $F_r$ is firmly nonexpansive, i.e., for any $x,y\in H$,

$$\|F_r x - F_r y\|^2 \le \langle F_r x - F_r y, x-y\rangle;$$

(B3) $F(F_r)=EP(f)$;

(B4) $EP(f)$ is closed and convex.

Lemma 2.4 (Ofoedu [4])

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $S:C\to C$ be a continuous pseudocontractive mapping. For $r>0$, let $T_r:H\to C$ be the mapping in Lemma 2.2. Then, for any $x\in H$ and for any $p,q>0$,

$$\|T_p x - T_q x\| \le \frac{|p-q|}{p}\bigl(\|T_p x\| + \|x\|\bigr).$$

Recall that a mapping $A:H\to H$ is said to be monotone if $\langle Ax-Ay, x-y\rangle \ge 0$, $\forall x,y\in K$. In particular, the mapping $A$ is called

  1. (1) $\eta$-strongly monotone over $K$ if there exists $\eta>0$ such that $\langle Ax-Ay, x-y\rangle \ge \eta\|x-y\|^2$, $\forall x,y\in K$;

  2. (2) $\alpha$-inverse strongly monotone over $K$ if there exists $\alpha>0$ such that $\langle Ax-Ay, x-y\rangle \ge \alpha\|Ax-Ay\|^2$, $\forall x,y\in K$.

Lemma 2.5 Let $A:H\to H$ be monotone over a closed and convex subset $K$ of $H$. Then the following statements are equivalent:

  1. (1) $z\in K$ is a solution of $VIP(A,K)$, i.e., $\langle Az, x-z\rangle \ge 0$, $\forall x\in K$.

  2. (2) For fixed $\mu>0$, $z = P_K(I-\mu A)z$.

Lemma 2.6 (see [1, 11])

Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the relation

$$a_{n+1} \le (1-\gamma_n)a_n + \delta_n,\quad n\ge 0,$$

where

  1. (i) $\{\gamma_n\}\subset[0,1]$, $\sum\gamma_n=\infty$,

  2. (ii) $\limsup_{n\to\infty}\frac{\delta_n}{\gamma_n}\le 0$ or $\sum|\delta_n|<\infty$.

Then $\lim_{n\to\infty} a_n = 0$.
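Lemma 2.6 is the workhorse of the convergence proofs below, and its mechanism is easy to see numerically. A small illustration of ours, taking $\gamma_n = \frac{1}{n}$ (so $\{\gamma_n\}\subset[0,1]$ and $\sum\gamma_n=\infty$) and $\delta_n=\gamma_n^2$ (so $\sum|\delta_n|<\infty$); the starting value is arbitrary:

```python
a = 5.0                             # arbitrary nonnegative starting value
for n in range(1, 200001):
    gamma = 1.0 / n                 # gamma_n in [0, 1], sum gamma_n = infinity
    delta = gamma ** 2              # sum |delta_n| < infinity
    a = (1.0 - gamma) * a + delta   # recursion of Lemma 2.6 (with equality)
# for this choice a_n decays like (log n)/n, so a is already tiny
```

Replacing $\delta_n$ by, say, a constant breaks hypothesis (ii), and the sequence then stalls at a positive level rather than vanishing.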

Lemma 2.7 Let $H$ be a real Hilbert space. Then, for all $x,y\in H$, the following hold:

  1. (i) $\|x-y\|^2 = \|x\|^2 - 2\langle x,y\rangle + \|y\|^2$;

  2. (ii) $\|x+y\|^2 \le \|x\|^2 + 2\langle y, x+y\rangle$.

Lemma 2.8 (Demiclosedness principle [12])

Let $T:C\to C$ be a nonexpansive mapping with $F(T)\ne\emptyset$. If $\{x_n\}$ is a sequence in $C$ such that $x_n\rightharpoonup x$ and $\|x_n - Tx_n\|\to 0$, then $x\in F(T)$.

Definition 2.9 A map $T:H\to H$ is called averaged if there exist a nonexpansive mapping $S$ on $H$ and $\alpha\in(0,1)$ such that

$$T = (1-\alpha)I + \alpha S,$$

and we then say that $T$ is $\alpha$-averaged.

Remark 2.10

  1. (i) Firmly nonexpansive maps are $\frac{1}{2}$-averaged. Thus, a map $T$ is firmly nonexpansive if and only if $2T-I=S$, where $S$ is nonexpansive and $I$ is the identity mapping on $H$.

  2. (ii) Every averaged mapping is nonexpansive.

  3. (iii) A map $S$ is nonexpansive if and only if $I-S$ is $\frac{1}{2}$-inverse strongly monotone.

  4. (iv) If $A$ is $\eta$-inverse strongly monotone and $\lambda>0$, then $\lambda A$ is $\frac{\eta}{\lambda}$-inverse strongly monotone.

Lemma 2.11 A map $T:H\to H$ is averaged if and only if $A=I-T$ is $\eta$-inverse strongly monotone for some $\eta>\frac{1}{2}$. In particular, for $\alpha\in(0,1)$, $T$ is $\alpha$-averaged if and only if $I-T$ is $\frac{1}{2\alpha}$-inverse strongly monotone.

Lemma 2.12 Let $T=(1-\alpha)A+\alpha S$, $\alpha\in(0,1)$. If $A$ is averaged and $S$ is nonexpansive, then $T$ is averaged.

Remark 2.13

  1. (i) A map $N$ is firmly nonexpansive if and only if it is $1$-inverse strongly monotone.

  2. (ii) $N$ is firmly nonexpansive if and only if $I-N$ is firmly nonexpansive.

  3. (iii) Every firmly nonexpansive map is averaged.

  4. (iv) If $T=(1-\alpha)N+\alpha S$, $\alpha\in(0,1)$, where $N$ is firmly nonexpansive and $S$ is nonexpansive, then $T$ is averaged.

  5. (v) If $(S_i)$, $1\le i\le m$, is a family of nonexpansive mappings, then the composition $S = S_1\circ S_2\circ\cdots\circ S_m$ is nonexpansive.

  6. (vi) If $(T_i)$, $1\le i\le m$, is a family of averaged mappings, then the composition $T = T_1\circ T_2\circ\cdots\circ T_m$ is averaged. If $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged for some $\alpha_1,\alpha_2\in(0,1)$, then $T_1T_2$ is $\alpha$-averaged with $\alpha = \alpha_1+\alpha_2-\alpha_1\alpha_2$.

Let $A:H\to H$ be $\alpha$-inverse strongly monotone, i.e.,

$$\langle Ax-Ay, x-y\rangle \ge \alpha\|Ax-Ay\|^2,\quad \forall x,y\in H. \tag{2.1}$$

When $\alpha=1$, (2.1) says that $A$ is firmly nonexpansive and hence nonexpansive. Thus, a map $A$ is firmly nonexpansive if and only if it is $1$-inverse strongly monotone. From the Cauchy-Schwarz inequality, $\alpha$-inverse strong monotonicity implies $\frac{1}{\alpha}$-Lipschitz continuity. However, the converse is not true. For instance, $A=-I$ ($I$ the identity mapping on $H$) is nonexpansive (hence $1$-Lipschitz) but not firmly nonexpansive, hence not $1$-inverse strongly monotone. In 1977, Baillon and Haddad [13] showed that if $D(A)=H$ and $A$ is the gradient of a convex function $f$, i.e., $A=\nabla f$, then $\frac{1}{\alpha}$-Lipschitz continuity implies $\alpha$-inverse strong monotonicity, and vice versa.

If $\nabla f$ is $L$-Lipschitz, then $\nabla f$ is $\frac{1}{L}$-inverse strongly monotone and $\lambda\nabla f$ is $\frac{1}{\lambda L}$-inverse strongly monotone. By Lemma 2.11, $I-\lambda\nabla f$ is then $\frac{\lambda L}{2}$-averaged. The projection map $P_K$ is firmly nonexpansive and hence $\frac{1}{2}$-averaged. The composition $P_K(I-\lambda\nabla f)$ is therefore $\alpha$-averaged (from Remark 2.13) with $\alpha = \alpha_1+\alpha_2-\alpha_1\alpha_2 = \frac{1}{2}+\frac{\lambda L}{2}-\frac{1}{2}\cdot\frac{\lambda L}{2} = \frac{2+\lambda L}{4}$, for $0<\lambda<\frac{2}{L}$. Now, for $n\in\mathbb{N}$, $P_K(I-\lambda_n\nabla f)$ is $\frac{2+\lambda_n L}{4}$-averaged, so that from Remark 2.13 we may write $P_K(I-\lambda_n\nabla f) = s_n I + (1-s_n)T_n$, where $T_n$ is nonexpansive and $s_n = \frac{2-\lambda_n L}{4}$, $n\in\mathbb{N}$ (see [14-22] and the references therein).
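The decomposition $P_K(I-\lambda\nabla f)=s_nI+(1-s_n)T_n$ can be probed numerically. The following check is ours, assuming $f(x)=\frac{1}{2}\|Mx-c\|^2$ (so $\nabla f(x)=M^{\top}(Mx-c)$ is $L$-Lipschitz with $L=\|M^{\top}M\|_2$, hence $\frac{1}{L}$-inverse strongly monotone by Baillon-Haddad) and $K$ the closed unit ball; it recovers $T$ from $G=P_K(I-\lambda\nabla f)$ with $s=\frac{2-\lambda L}{4}$ and tests nonexpansiveness of $T$ on random samples:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
c = rng.normal(size=3)
L = np.linalg.norm(M.T @ M, 2)       # Lipschitz constant of grad f (spectral norm)
lam = 1.0 / L                        # any lambda in (0, 2/L)

grad = lambda x: M.T @ (M @ x - c)   # gradient of f(x) = 0.5 * ||Mx - c||^2

def proj(x):                         # P_K, K = closed unit ball
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

G = lambda x: proj(x - lam * grad(x))     # (2 + lam*L)/4 - averaged
s = (2.0 - lam * L) / 4.0
T = lambda x: (G(x) - s * x) / (1.0 - s)  # T from G = s*I + (1 - s)*T

ok = all(
    np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-10
    for x, y in (rng.normal(size=(2, 3)) for _ in range(300))
)
```

With $s=\frac{2+\lambda L}{4}$ instead, the recovered map fails this test for generic data, which is one way to see that $s_n$ must be $1-\alpha$, not $\alpha$.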

3 Main result

Remark 3.1 In what follows, let $K$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $F:K\times K\to\mathbb{R}$ be a bifunction satisfying (A1)-(A4) and let $T:K\to K$ be a continuous pseudocontractive mapping. Let $f:K\to\mathbb{R}$ be a real-valued convex function and assume that $\nabla f$ is a $\frac{1}{L}$-inverse strongly monotone mapping with $L>0$. Let $A:K\to H$ be a $k$-Lipschitz continuous and $\eta$-strongly monotone mapping with constants $k,\eta>0$, and let $0<\mu<\frac{2\eta}{k^2}$, $\tau = \mu(\eta-\frac{\mu k^2}{2})$. Let $\Theta$ denote the solution set of the minimization problem (1.6). Let $B:K\to H$ be a $\gamma$-inverse strongly monotone mapping. Assume that $\Omega = F(T)\cap N(B)\cap\Theta\cap EP(F)\ne\emptyset$. Let $\{\alpha_n\}$, $\{r_n\}$, $\{\lambda_n\}$ satisfy the following conditions:

  1. (i) $\{\alpha_n\}\subset(0,1)$, $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=1}^{\infty}\alpha_n=\infty$, $\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$;

  2. (ii) $\{\lambda_n\}\subset(0,\frac{2}{L})$, $\liminf_{n\to\infty}\lambda_n>0$, $\sum_{n=1}^{\infty}|\lambda_{n+1}-\lambda_n|<\infty$;

  3. (iii) $\{r_n\}\subset(0,\infty)$, $\liminf_{n\to\infty}r_n>0$, $\sum_{n=1}^{\infty}|r_{n+1}-r_n|<\infty$;

and let $\varepsilon$ be a real constant such that $0<\varepsilon<2\gamma$. For $r>0$, $T_r$ and $F_r$ are as in Lemma 2.2 and Lemma 2.3.

Consider the sequence $\{x_n\}_{n\ge 1}$ generated iteratively from arbitrary $x_1\in H$ by

$$\begin{cases} F(z_n,y) + \frac{1}{r_n}\langle y-z_n, z_n-x_n\rangle \ge 0, & \forall y\in K,\\ x_{n+1} = (I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n} z_n, & n\in\mathbb{N}. \end{cases} \tag{3.1}$$

We shall study the strong convergence of this iteration scheme to the unique solution $q\in\Omega$, where $q = P_\Omega(I-\mu A)q$ solves the variational inequality $\langle Aq, z-q\rangle \ge 0$, $\forall z\in\Omega$. Here $z_n = F_{r_n}x_n$ and $P_K(I-\lambda_n\nabla f) = s_n I + (1-s_n)T_n$, where $s_n = \frac{2-\lambda_n L}{4}$ and $T_n$ is nonexpansive.

Lemma 3.2 Suppose the conditions of Remark  3.1 are satisfied, then { x n } defined by (3.1) is bounded.

Proof We first show that $(I-\varepsilon B)$ is nonexpansive. For $x,y\in K$ and $0<\varepsilon<2\gamma$, we have

$$\begin{aligned} \|(I-\varepsilon B)x - (I-\varepsilon B)y\|^2 &= \|(x-y) - \varepsilon(Bx-By)\|^2\\ &= \|x-y\|^2 - 2\varepsilon\langle Bx-By, x-y\rangle + \varepsilon^2\|Bx-By\|^2\\ &\le \|x-y\|^2 - 2\varepsilon\gamma\|Bx-By\|^2 + \varepsilon^2\|Bx-By\|^2\\ &= \|x-y\|^2 - \varepsilon(2\gamma-\varepsilon)\|Bx-By\|^2, \end{aligned} \tag{3.2}$$

which implies that

$$\|(I-\varepsilon B)x - (I-\varepsilon B)y\|^2 \le \|x-y\|^2, \tag{3.3}$$

and hence $(I-\varepsilon B)$ is nonexpansive.
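The estimate (3.2) can be verified numerically for a concrete $\gamma$-inverse strongly monotone $B$. The instance below is ours: $B = M^{\top}M$ is the gradient of the convex function $\frac{1}{2}\|Mx\|^2$, hence by Baillon-Haddad it is $\gamma$-inverse strongly monotone with $\gamma = 1/\|M^{\top}M\|_2$; we pick $\varepsilon\in(0,2\gamma)$ and test both (3.2) and the resulting nonexpansiveness (3.3) on random pairs:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
B = lambda x: M.T @ (M @ x)                 # gradient of 0.5 * ||Mx||^2
gamma = 1.0 / np.linalg.norm(M.T @ M, 2)    # B is gamma-inverse strongly monotone
eps = gamma                                 # any eps in (0, 2*gamma)
S = lambda x: x - eps * B(x)                # S = I - eps*B

ok_32 = ok_33 = True
for _ in range(300):
    x, y = rng.normal(size=(2, 4))
    lhs = np.linalg.norm(S(x) - S(y)) ** 2
    # inequality (3.2)
    ok_32 &= lhs <= (np.linalg.norm(x - y) ** 2
                     - eps * (2 * gamma - eps) * np.linalg.norm(B(x) - B(y)) ** 2
                     + 1e-8)
    # nonexpansiveness (3.3)
    ok_33 &= lhs <= np.linalg.norm(x - y) ** 2 + 1e-8
```

Taking $\varepsilon$ above $2\gamma$ makes both checks fail for generic $M$, mirroring the role of the condition $0<\varepsilon<2\gamma$.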

Let $p\in\Omega$ and set $w_n = T_{r_n}z_n$, $u_n = T_n w_n$, $v_n = (I-\varepsilon B)u_n$. Then $F_{r_n}p=p$, $T_{r_n}p=p$, $T_n p=p$, and we have

$$\begin{aligned} \|v_n - p\| &= \|(I-\varepsilon B)u_n - (I-\varepsilon B)p\| \le \|u_n - p\|,\\ \|u_n - p\| &= \|T_n w_n - p\| \le \|w_n - p\|,\\ \|w_n - p\| &= \|T_{r_n}z_n - p\| \le \|z_n - p\|,\\ \|z_n - p\| &= \|F_{r_n}x_n - p\| \le \|x_n - p\|. \end{aligned} \tag{3.4}$$

For all $x\in H$, define $D_n:H\to H$ by $D_n x = (I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}F_{r_n}x$, where $A$ is a $k$-Lipschitzian and $\eta$-strongly monotone mapping on $H$ and $0<\mu<\frac{2\eta}{k^2}$. For $x,y\in H$, we have

$$\begin{aligned} \|(I-\alpha_n\mu A)x - (I-\alpha_n\mu A)y\|^2 &= \|(x-y) - \alpha_n\mu(Ax-Ay)\|^2\\ &= \|x-y\|^2 - 2\alpha_n\mu\langle Ax-Ay, x-y\rangle + \alpha_n^2\mu^2\|Ax-Ay\|^2\\ &\le \|x-y\|^2 - 2\alpha_n\mu\eta\|x-y\|^2 + \alpha_n\mu^2 k^2\|x-y\|^2\\ &= \Bigl[1 - 2\alpha_n\mu\Bigl(\eta-\frac{\mu k^2}{2}\Bigr)\Bigr]\|x-y\|^2\\ &\le \Bigl[1 - \alpha_n\mu\Bigl(\eta-\frac{\mu k^2}{2}\Bigr)\Bigr]^2\|x-y\|^2. \end{aligned} \tag{3.5}$$

From (3.5), we have

$$\|(I-\alpha_n\mu A)x - (I-\alpha_n\mu A)y\| \le (1-\alpha_n\tau)\|x-y\|, \tag{3.6}$$

where $\tau = \mu(\eta-\frac{\mu k^2}{2})$. Combining (3.6) with the nonexpansiveness of $(I-\varepsilon B)$, $T_n$, $T_{r_n}$, and $F_{r_n}$, the map $D_n$ is a strict contraction and, by the Banach contraction principle, has a unique fixed point in $H$.

Now, for $p\in\Omega$, from (3.1) and (3.6) we have

$$\begin{aligned} \|x_{n+1}-p\| &= \|(I-\alpha_n\mu A)(I-\varepsilon B)u_n - p\| = \|(I-\alpha_n\mu A)v_n - p\|\\ &\le \|(I-\alpha_n\mu A)v_n - (I-\alpha_n\mu A)p\| + \|(I-\alpha_n\mu A)p - p\|\\ &\le (1-\alpha_n\tau)\|v_n - p\| + \alpha_n\mu\|Ap\|\\ &\le (1-\alpha_n\tau)\|x_n - p\| + \alpha_n\tau\frac{\mu\|Ap\|}{\tau}\\ &\le \max\Bigl\{\|x_n - p\|, \frac{\mu\|Ap\|}{\tau}\Bigr\}. \end{aligned}$$

By induction, we get

$$\|x_n - p\| \le \max\Bigl\{\|x_1 - p\|, \frac{\mu\|Ap\|}{\tau}\Bigr\}.$$

Therefore $\{x_n\}$ is bounded, and consequently $\{z_n\}$, $\{w_n\}$, $\{u_n\}$, $\{v_n\}$ are bounded. □

Lemma 3.3 Suppose that the conditions of Remark 3.1 are satisfied and $\{x_n\}$ is as defined by (3.1). Then

$$\lim_{n\to\infty}\|x_{n+1}-x_n\| = 0.$$

Proof For any $p\in\Omega$, we have

$$\begin{aligned} \|A(I-\varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\| &\le \|A(I-\varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1} - Ap\| + \|Ap\|\\ &\le k\|(I-\varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1} - p\| + \|Ap\|\\ &\le k\|x_{n-1}-p\| + \|Ap\|, \end{aligned}$$

which shows that $\{A(I-\varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\}$ is bounded.

Similarly, we have

$$\|P_K(I-\lambda_n\nabla f)x_n\| \le \|P_K(I-\lambda_n\nabla f)x_n - P_K(I-\lambda_n\nabla f)p\| + \|p\| \le \|x_n - p\| + \|p\|.$$

Hence, $\{P_K(I-\lambda_n\nabla f)x_n\}$ is bounded.

Noting that $T_n T_{r_{n-1}}z_{n-1} - T_{n-1}T_{r_{n-1}}z_{n-1} = T_n w_{n-1} - T_{n-1}w_{n-1}$ and that $P_K(I-\lambda_n\nabla f) = \frac{2-\lambda_n L}{4}I + \frac{2+\lambda_n L}{4}T_n$, we get $T_n = \frac{4P_K(I-\lambda_n\nabla f) - (2-\lambda_n L)I}{2+\lambda_n L}$. Bringing the two expressions for $T_n w_{n-1}$ and $T_{n-1}w_{n-1}$ to the common denominator $(2+\lambda_n L)(2+\lambda_{n-1}L)$ and using the nonexpansiveness of $P_K$, so that $\|P_K(I-\lambda_n\nabla f)w_{n-1} - P_K(I-\lambda_{n-1}\nabla f)w_{n-1}\| \le |\lambda_n-\lambda_{n-1}|\,\|\nabla f w_{n-1}\|$, we compute

$$\begin{aligned} \|T_n w_{n-1} - T_{n-1}w_{n-1}\| &\le \frac{4(2+\lambda_{n-1}L)\|P_K(I-\lambda_n\nabla f)w_{n-1} - P_K(I-\lambda_{n-1}\nabla f)w_{n-1}\|}{(2+\lambda_n L)(2+\lambda_{n-1}L)}\\ &\quad + \frac{4L|\lambda_n-\lambda_{n-1}|\,\|P_K(I-\lambda_{n-1}\nabla f)w_{n-1}\|}{(2+\lambda_n L)(2+\lambda_{n-1}L)} + \frac{4L|\lambda_n-\lambda_{n-1}|\,\|w_{n-1}\|}{(2+\lambda_n L)(2+\lambda_{n-1}L)}\\ &\le 2|\lambda_n-\lambda_{n-1}|\,\|\nabla f w_{n-1}\| + L|\lambda_n-\lambda_{n-1}|\,\|P_K(I-\lambda_{n-1}\nabla f)w_{n-1}\| + L|\lambda_n-\lambda_{n-1}|\,\|w_{n-1}\|\\ &= |\lambda_n-\lambda_{n-1}|\bigl[2\|\nabla f w_{n-1}\| + L\|P_K(I-\lambda_{n-1}\nabla f)w_{n-1}\| + L\|w_{n-1}\|\bigr]\\ &\le M_0|\lambda_n-\lambda_{n-1}|, \end{aligned} \tag{3.7}$$

where

$$M_0 = \sup_{n\in\mathbb{N}}\bigl\{2\|\nabla f w_{n-1}\| + L\|P_K(I-\lambda_{n-1}\nabla f)w_{n-1}\| + L\|w_{n-1}\|\bigr\}.$$

Now, from Lemma 2.4, (3.6), and (3.7), we have

$$\begin{aligned} \|x_{n+1}-x_n\| &= \|(I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}z_n - (I-\alpha_{n-1}\mu A)(I-\varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\|\\ &\le \|(I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}z_n - (I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}z_{n-1}\|\\ &\quad + \|(I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}z_{n-1} - (I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_{n-1}}z_{n-1}\|\\ &\quad + \|(I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_{n-1}}z_{n-1} - (I-\alpha_n\mu A)(I-\varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\|\\ &\quad + \|(I-\alpha_n\mu A)(I-\varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1} - (I-\alpha_{n-1}\mu A)(I-\varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\|\\ &\le (1-\alpha_n\tau)\|z_n-z_{n-1}\| + (1-\alpha_n\tau)\|T_{r_n}z_{n-1}-T_{r_{n-1}}z_{n-1}\|\\ &\quad + (1-\alpha_n\tau)\|T_n T_{r_{n-1}}z_{n-1}-T_{n-1}T_{r_{n-1}}z_{n-1}\| + |\alpha_n-\alpha_{n-1}|\mu\|A(I-\varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\|\\ &\le (1-\alpha_n\tau)\|z_n-z_{n-1}\| + (1-\alpha_n\tau)\frac{|r_n-r_{n-1}|}{r_n}\bigl(\|T_{r_n}z_{n-1}\| + \|z_{n-1}\|\bigr)\\ &\quad + (1-\alpha_n\tau)M_0|\lambda_n-\lambda_{n-1}| + |\alpha_n-\alpha_{n-1}|M_2\\ &\le (1-\alpha_n\tau)\|z_n-z_{n-1}\| + (1-\alpha_n\tau)\frac{|r_n-r_{n-1}|}{r_n}M_1 + (1-\alpha_n\tau)M_0|\lambda_n-\lambda_{n-1}| + |\alpha_n-\alpha_{n-1}|M_2, \end{aligned} \tag{3.8}$$

where $M_1 > \sup_n(\|T_{r_n}z_{n-1}\| + \|z_{n-1}\|)$ and $M_2 > \sup_n\|A(I-\varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\|$. Therefore,

$$\begin{aligned} \|x_{n+1}-x_n\| &\le (1-\alpha_n\tau)\|z_n-z_{n-1}\| + M_0|\lambda_n-\lambda_{n-1}| + M_1\frac{|r_n-r_{n-1}|}{r_n} + M_2|\alpha_n-\alpha_{n-1}|\\ &\le (1-\alpha_n\tau)\|z_n-z_{n-1}\| + M\Bigl[|\lambda_n-\lambda_{n-1}| + \frac{|r_n-r_{n-1}|}{r_n} + |\alpha_n-\alpha_{n-1}|\Bigr], \end{aligned} \tag{3.9}$$

where $M = \max\{M_0, M_1, M_2\}$.

Since $z_n = F_{r_n}x_n$ and $z_{n+1} = F_{r_{n+1}}x_{n+1}$, we have

$$F(z_n,y) + \frac{1}{r_n}\langle y-z_n, z_n-x_n\rangle \ge 0,\quad \forall y\in K, \tag{3.10}$$

$$F(z_{n+1},y) + \frac{1}{r_{n+1}}\langle y-z_{n+1}, z_{n+1}-x_{n+1}\rangle \ge 0,\quad \forall y\in K. \tag{3.11}$$

Substituting $y=z_{n+1}$ in (3.10) and $y=z_n$ in (3.11) gives

$$F(z_n,z_{n+1}) + \frac{1}{r_n}\langle z_{n+1}-z_n, z_n-x_n\rangle \ge 0, \tag{3.12}$$

$$F(z_{n+1},z_n) + \frac{1}{r_{n+1}}\langle z_n-z_{n+1}, z_{n+1}-x_{n+1}\rangle \ge 0. \tag{3.13}$$

Adding (3.12) and (3.13) and using (A2), we get

$$\begin{aligned} &\frac{1}{r_n}\langle z_{n+1}-z_n, z_n-x_n\rangle - \frac{1}{r_{n+1}}\langle z_{n+1}-z_n, z_{n+1}-x_{n+1}\rangle \ge 0,\\ &\Bigl\langle z_{n+1}-z_n, (z_n-x_n) - \frac{r_n}{r_{n+1}}(z_{n+1}-x_{n+1})\Bigr\rangle \ge 0,\\ &\Bigl\langle z_{n+1}-z_n, z_n-z_{n+1} + z_{n+1}-x_n - \frac{r_n}{r_{n+1}}(z_{n+1}-x_{n+1})\Bigr\rangle \ge 0. \end{aligned}$$

Without loss of generality, assume that there exists a real number $c$ such that $r_n > c > 0$ for all $n\in\mathbb{N}$. We then have

$$\begin{aligned} \|z_{n+1}-z_n\|^2 &\le \Bigl\langle z_{n+1}-z_n, z_{n+1}-x_n - \frac{r_n}{r_{n+1}}(z_{n+1}-x_{n+1})\Bigr\rangle\\ &= \Bigl\langle z_{n+1}-z_n, \Bigl(1-\frac{r_n}{r_{n+1}}\Bigr)(z_{n+1}-x_{n+1}) + x_{n+1}-x_n\Bigr\rangle\\ &\le \|z_{n+1}-z_n\|\Bigl[\frac{|r_{n+1}-r_n|}{r_{n+1}}\|z_{n+1}-x_{n+1}\| + \|x_{n+1}-x_n\|\Bigr], \end{aligned}$$

which implies that

$$\|z_{n+1}-z_n\| \le \frac{|r_{n+1}-r_n|}{r_{n+1}}\|z_{n+1}-x_{n+1}\| + \|x_{n+1}-x_n\| \le \frac{L^*}{c}|r_{n+1}-r_n| + \|x_{n+1}-x_n\|, \tag{3.14}$$

where $L^* = \sup\{\|z_{n+1}-x_{n+1}\| : n\in\mathbb{N}\}$.

From (3.9) and (3.14), we have

$$\begin{aligned} \|x_{n+1}-x_n\| &\le (1-\alpha_n\tau)\Bigl[\frac{L^*}{c}|r_n-r_{n-1}| + \|x_n-x_{n-1}\|\Bigr] + M\Bigl[|\lambda_n-\lambda_{n-1}| + |\alpha_n-\alpha_{n-1}| + \frac{|r_n-r_{n-1}|}{r_n}\Bigr]\\ &\le (1-\alpha_n\tau)\|x_n-x_{n-1}\| + \frac{L^*}{c}|r_n-r_{n-1}| + M\Bigl[|\lambda_n-\lambda_{n-1}| + |\alpha_n-\alpha_{n-1}| + \frac{|r_n-r_{n-1}|}{r_n}\Bigr]. \end{aligned}$$

Using the conditions on $\{r_n\}$, $\{\lambda_n\}$, $\{\alpha_n\}$ and Lemma 2.6, we get

$$\lim_{n\to\infty}\|x_{n+1}-x_n\| = 0. \tag{3.15}$$

Consequently, from (3.14) and (3.15), we have

$$\lim_{n\to\infty}\|z_{n+1}-z_n\| = 0. \tag{3.16}$$

 □

Lemma 3.4 Suppose that the conditions of Remark 3.1 are satisfied and $\{x_n\}$ is as defined by (3.1). Then

$$\begin{aligned} &\lim_{n\to\infty}\|(I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - x_n\| = \lim_{n\to\infty}\|z_n-x_n\| = \lim_{n\to\infty}\|w_n-z_n\|\\ &\quad = \lim_{n\to\infty}\|T_{r_n}F_{r_n}x_n - T_n T_{r_n}F_{r_n}x_n\| = \lim_{n\to\infty}\|BT_n T_{r_n}F_{r_n}x_n\| = \lim_{n\to\infty}\|T_n T_{r_n}F_{r_n}x_n - x_n\|\\ &\quad = \lim_{n\to\infty}\|T_n T_{r_n}F_{r_n}x_n - (I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n\| = \lim_{n\to\infty}\|(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - x_n\| = 0. \end{aligned}$$

Proof Observe that, since $x_{n+1} = (I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n$,

$$\|(I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - x_n\| = \|(x_n-x_{n+1}) + \bigl(x_{n+1} - (I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n\bigr)\| = \|x_n-x_{n+1}\| \to 0\quad\text{as } n\to\infty.$$

Furthermore, for $p\in\Omega$, using (B2) we have

$$\begin{aligned} \|z_n-p\|^2 = \|F_{r_n}x_n - F_{r_n}p\|^2 &\le \langle F_{r_n}x_n - F_{r_n}p, x_n-p\rangle = \langle z_n-p, x_n-p\rangle\\ &= \frac{1}{2}\bigl[\|z_n-p\|^2 + \|x_n-p\|^2 - \|z_n-x_n\|^2\bigr], \end{aligned}$$

which implies that

$$\|z_n-p\|^2 \le \|x_n-p\|^2 - \|z_n-x_n\|^2. \tag{3.17}$$

From (3.1) and (3.17), we have the estimate

$$\|x_{n+1}-p\|^2 = \|(I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}z_n - p\|^2 = \|\bigl((I-\alpha_n\mu A)v_n - (I-\alpha_n\mu A)p\bigr) - \alpha_n\mu Ap\|^2.$$

Hence

$$\begin{aligned} \|x_{n+1}-p\|^2 &\le \|(I-\alpha_n\mu A)v_n - (I-\alpha_n\mu A)p\|^2 + \alpha_n^2\|\mu Ap\|^2\\ &\quad + 2\alpha_n\|(I-\alpha_n\mu A)v_n - (I-\alpha_n\mu A)p\|\,\|\mu Ap\|\\ &\le (1-\alpha_n\tau)^2\|v_n-p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n(1-\alpha_n\tau)\|v_n-p\|\,\|\mu Ap\|\\ &\le \|z_n-p\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|z_n-p\|\,\|\mu Ap\|\\ &\le \|x_n-p\|^2 - \|z_n-x_n\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|z_n-p\|\,\|\mu Ap\|, \end{aligned} \tag{3.18}$$

so that

$$\begin{aligned} \|z_n-x_n\|^2 &\le \|x_n-p\|^2 - \|x_{n+1}-p\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|z_n-p\|\,\|\mu Ap\|\\ &\le \|x_n-x_{n+1}\|\bigl[\|x_n-p\| + \|x_{n+1}-p\|\bigr] + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|z_n-p\|\,\|\mu Ap\|. \end{aligned}$$

Since $\|x_{n+1}-x_n\|\to 0$ and $\alpha_n\to 0$ as $n\to\infty$, we have

$$\lim_{n\to\infty}\|z_n-x_n\| = 0. \tag{3.19}$$

Similarly, using (C2), we have

$$\|w_n-p\|^2 = \|T_{r_n}F_{r_n}x_n - p\|^2 \le \langle T_{r_n}F_{r_n}x_n - p, F_{r_n}x_n - p\rangle = \frac{1}{2}\bigl[\|w_n-p\|^2 + \|z_n-p\|^2 - \|w_n-z_n\|^2\bigr];$$

hence,

$$\|w_n-p\|^2 \le \|z_n-p\|^2 - \|w_n-z_n\|^2.$$

From (3.18), we have

$$\begin{aligned} \|x_{n+1}-p\|^2 &\le (1-\alpha_n\tau)^2\|v_n-p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n(1-\alpha_n\tau)\|v_n-p\|\,\|\mu Ap\|\\ &\le \|w_n-p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n(1-\alpha_n\tau)\|w_n-p\|\,\|\mu Ap\|\\ &\le \|z_n-p\|^2 - \|w_n-z_n\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n(1-\alpha_n\tau)\|w_n-p\|\,\|\mu Ap\|\\ &\le \|x_n-p\|^2 - \|w_n-z_n\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n(1-\alpha_n\tau)\|w_n-p\|\,\|\mu Ap\|, \end{aligned}$$

so that

$$\|w_n-z_n\|^2 \le \|x_n-x_{n+1}\|\bigl[\|x_n-p\| + \|x_{n+1}-p\|\bigr] + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|w_n-p\|\,\|\mu Ap\|.$$

Since $\|x_{n+1}-x_n\|\to 0$ and $\alpha_n\to 0$, we have

$$\lim_{n\to\infty}\|w_n-z_n\| = 0. \tag{3.20}$$

Furthermore,

$$\begin{aligned} \|u_n-p\|^2 = \|T_n T_{r_n}F_{r_n}x_n - p\|^2 &\le \|T_{r_n}F_{r_n}x_n - p\|\,\|T_n T_{r_n}F_{r_n}x_n - p\|\\ &= \frac{1}{2}\bigl[\|T_{r_n}F_{r_n}x_n - p\|^2 + \|T_n T_{r_n}F_{r_n}x_n - p\|^2 - \|T_{r_n}F_{r_n}x_n - T_n T_{r_n}F_{r_n}x_n\|^2\bigr]; \end{aligned}$$

hence,

$$\|T_n T_{r_n}F_{r_n}x_n - p\|^2 \le \|T_{r_n}F_{r_n}x_n - p\|^2 - \|T_{r_n}F_{r_n}x_n - T_n T_{r_n}F_{r_n}x_n\|^2. \tag{3.21}$$

From (3.18), we obtain

$$\begin{aligned} \|x_{n+1}-p\|^2 &\le (1-\alpha_n\tau)^2\|v_n-p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n\|(I-\alpha_n\mu A)v_n - (I-\alpha_n\mu A)p\|\,\|\mu Ap\|\\ &= (1-\alpha_n\tau)^2\|(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - p\|^2 + \alpha_n^2\|\mu Ap\|^2\\ &\quad + 2\alpha_n\|(I-\alpha_n\mu A)(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - (I-\alpha_n\mu A)p\|\,\|\mu Ap\|\\ &\le \|(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - p\|^2 + \alpha_n\|\mu Ap\|^2\\ &\quad + 2\alpha_n(1-\alpha_n\tau)\|(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\|\\ &\le \|T_n T_{r_n}F_{r_n}x_n - p\|^2 - \varepsilon(2\gamma-\varepsilon)\|BT_n T_{r_n}F_{r_n}x_n\|^2 + \alpha_n\|\mu Ap\|^2\\ &\quad + 2\alpha_n\|(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\|\\ &\le \|x_n-p\|^2 - \|T_{r_n}F_{r_n}x_n - T_n T_{r_n}F_{r_n}x_n\|^2 - \varepsilon(2\gamma-\varepsilon)\|BT_n T_{r_n}F_{r_n}x_n\|^2\\ &\quad + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\|. \end{aligned}$$

On rearranging, we have

$$\begin{aligned} &\|T_{r_n}F_{r_n}x_n - T_n T_{r_n}F_{r_n}x_n\|^2 + \varepsilon(2\gamma-\varepsilon)\|BT_n T_{r_n}F_{r_n}x_n\|^2\\ &\quad \le \|x_n-p\|^2 - \|x_{n+1}-p\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\|\\ &\quad \le \|x_{n+1}-x_n\|\bigl[\|x_n-p\| + \|x_{n+1}-p\|\bigr] + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|(I-\varepsilon B)T_n T_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\|. \end{aligned}$$

Since $\|x_{n+1}-x_n\|\to 0$ and $\alpha_n\to 0$ as $n\to\infty$, we have

$$\lim_{n\to\infty}\Bigl(\|T_{r_n}F_{r_n}x_n - T_n T_{r_n}F_{r_n}x_n\|^2 + \varepsilon(2\gamma-\varepsilon)\|BT_n T_{r_n}F_{r_n}x_n\|^2\Bigr) = 0. \tag{3.22}$$

Since $\varepsilon(2\gamma-\varepsilon)>0$, the sandwich theorem applied to (3.22) yields

$$\lim_{n\to\infty}\|T_{r_n}F_{r_n}x_n - T_n T_{r_n}F_{r_n}x_n\| = 0, \tag{3.23}$$

$$\lim_{n\to\infty}\|BT_n T_{r_n}F_{r_n}x_n\| = 0. \tag{3.24}$$

Using (3.19), (3.20), (3.23), and (3.24), we obtain

$$\|T_n T_{r_n}F_{r_n}x_n - x_n\| \le \|T_n T_{r_n}F_{r_n}x_n - T_{r_n}F_{r_n}x_n\| + \|T_{r_n}F_{r_n}x_n - F_{r_n}x_n\| + \|F_{r_n}x_n - x_n\| \to 0. \tag{3.25}$$

Furthermore, for $p\in\Omega$ (so that $Bp=0$), write $u_n = T_nT_{r_n}F_{r_n}x_n$ and $v_n = (I-\varepsilon B)u_n$. From the polarization identity and the nonexpansiveness of $I-\varepsilon B$, we have

$$\begin{aligned} \|v_n-p\|^2 &\le \|u_n-p\|^2 - \|u_n-v_n\|^2 - \varepsilon^2\|Bu_n\|^2 + 2\varepsilon\langle u_n-v_n, Bu_n\rangle\\ &\le \|u_n-p\|^2 - \|u_n-v_n\|^2 + 2\varepsilon\|u_n-v_n\|\,\|Bu_n\|\\ &\le \|x_n-p\|^2 - \|u_n-v_n\|^2 + 2\varepsilon\|u_n-v_n\|\,\|Bu_n\|\\ &\le \|x_{n+1}-x_n\|\bigl[\|x_{n+1}-x_n\| + 2\|x_{n+1}-p\|\bigr] + \|x_{n+1}-p\|^2\\ &\quad - \|u_n-v_n\|^2 + 2\varepsilon\|u_n-v_n\|\,\|Bu_n\|, \end{aligned}$$

using $\|x_n-p\| \le \|x_n-x_{n+1}\| + \|x_{n+1}-p\|$ in the last step. This implies that

$$\begin{aligned} \|u_n-v_n\|^2 &\le \|x_{n+1}-x_n\|\bigl[\|x_{n+1}-x_n\| + 2\|x_{n+1}-p\|\bigr] + \|x_{n+1}-p\|^2 - \|v_n-p\|^2\\ &\quad + 2\varepsilon\|u_n-v_n\|\,\|Bu_n\|. \end{aligned} \tag{3.26}$$

From (3.18), we obtain

$$\begin{aligned} \|x_{n+1}-p\|^2 &\le (1-\alpha_n\tau)^2\|v_n-p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n(1-\alpha_n\tau)\|v_n-p\|\,\|\mu Ap\|\\ &\le \|v_n-p\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|v_n-p\|\,\|\mu Ap\|, \end{aligned} \tag{3.27}$$

where, as before, $u_n = T_nT_{r_n}F_{r_n}x_n$ and $v_n = (I-\varepsilon B)u_n$. Using (3.27) in (3.26), we obtain

$$\begin{aligned} \|u_n-v_n\|^2 &\le \|x_{n+1}-x_n\|\bigl[\|x_{n+1}-x_n\| + 2\|x_{n+1}-p\|\bigr] + \alpha_n\|\mu Ap\|^2\\ &\quad + 2\alpha_n\|v_n-p\|\,\|\mu Ap\| + 2\varepsilon\|u_n-v_n\|\,\|Bu_n\|. \end{aligned}$$

Using the facts that $\|x_{n+1}-x_n\|\to 0$, $\alpha_n\to 0$, and $\|BT_nT_{r_n}F_{r_n}x_n\|\to 0$ as $n\to\infty$, we deduce that

$$\lim_{n\to\infty}\|T_nT_{r_n}F_{r_n}x_n - (I-\varepsilon B)T_nT_{r_n}F_{r_n}x_n\| = 0. \tag{3.28}$$

From (3.25) and (3.28), we have

$$\|(I-\varepsilon B)T_nT_{r_n}F_{r_n}x_n - x_n\| \le \|(I-\varepsilon B)T_nT_{r_n}F_{r_n}x_n - T_nT_{r_n}F_{r_n}x_n\| + \|T_nT_{r_n}F_{r_n}x_n - x_n\| \to 0. \tag{3.29}$$

 □

Lemma 3.5 Suppose that the conditions of Remark 3.1 are satisfied and $\{x_n\}$ is as defined by (3.1). Let $q\in\Omega$ be the unique solution of the variational inequality $\langle Aq, z-q\rangle \ge 0$, $\forall z\in\Omega$, i.e., $q = P_\Omega(I-\mu A)q$. Then

$$\limsup_{n\to\infty}\langle Aq, q-x_n\rangle \le 0.$$

Proof To show this inequality, choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that

$$\limsup_{n\to\infty}\langle Aq, q-x_n\rangle = \lim_{i\to\infty}\langle Aq, q-x_{n_i}\rangle;$$

correspondingly, there exists a subsequence $\{z_{n_i}\}$ of $\{z_n\}$. Since $\{z_{n_i}\}$ is bounded, there exist a subsequence $\{z_{n_{i_j}}\}$ of $\{z_{n_i}\}$ and $z\in H$ such that $z_{n_{i_j}}\rightharpoonup z$. Without loss of generality, we may assume that $z_{n_i}\rightharpoonup z$. Since $\{z_{n_i}\}\subset K$ and $K$ is closed and convex, $K$ is weakly closed, so $z\in K$. Let us show that $z\in\Omega = F(T)\cap N(B)\cap\Theta\cap EP(F)$.

First, we show that $z\in EP(F)$. Since $z_n = F_{r_n}x_n$, we have

$$F(z_n,y) + \frac{1}{r_n}\langle y-z_n, z_n-x_n\rangle \ge 0,\quad \forall y\in K.$$

It follows from (A2) that

$$\frac{1}{r_n}\langle y-z_n, z_n-x_n\rangle \ge F(y,z_n);$$

hence,

$$\Bigl\langle y-z_{n_i}, \frac{z_{n_i}-x_{n_i}}{r_{n_i}}\Bigr\rangle \ge F(y,z_{n_i}).$$

Since $\frac{z_{n_i}-x_{n_i}}{r_{n_i}}\to 0$ and $z_{n_i}\rightharpoonup z$ as $i\to\infty$, it follows that $F(y,z)\le 0$, $\forall y\in K$. For $t\in(0,1]$ and $m\in K$, let $y_t = tm+(1-t)z$. Since $m\in K$ and $z\in K$, we have $y_t\in K$, so that $F(y_t,z)\le 0$. From (A1) and (A4) we have

$$0 = F(y_t,y_t) = F\bigl(y_t, tm+(1-t)z\bigr) \le tF(y_t,m) + (1-t)F(y_t,z) \le tF(y_t,m).$$

That is, $F(y_t,m)\ge 0$. It follows from (A3) that $F(z,m)\ge 0$, $\forall m\in K$. Since $m$ is taken arbitrarily, it follows that $z\in EP(F)$.

We show that $z\in F(T)$. Recall that $w_{n_i}=T_{r_{n_i}}z_{n_i}$, so that

y w n i ,T w n i 1 r n i y w n i , ( 1 + r n i ) w n i x n i 0,yK.
(3.30)

Put $z_t=tv+(1-t)z$ for $t\in(0,1)$ and $v\in K$. Consequently, we get $z_t\in K$. From (3.30) and the pseudocontractivity of $T$, we have

z t w n i , T w n i 1 r n i z t w n i , ( 1 + r n i ) w n i x n i 0 , z t w n i , T w n i w n i z t , T z t + w n i z t , T z t 1 r n i z t w n i , ( 1 + r n i ) w n i x n i 0 , w n i z t , T z t z t w n i , T w n i + w n i z t , T z t 1 r n i z t w n i , w n i + r n i w n i x n i = w n i z t , T z t T w n i 1 r n i z t w n i , w n i x n i z t w n i , w n i = w n i z t , T w n i T z t 1 r n i z t w n i , w n i x n i z t w n i , w n i w n i z t 2 1 r n i z t w n i , w n i x n i [ z t w n i , w n i z t + z t w n i , z t ] = w n i z t 2 1 r n i z t w n i , w n i x n i + w n i z t 2 z t w n i , z t = w n i z t , z t 1 r n i z t w n i , w n i x n i w n i z t , z t 1 | r n i | z t w n i w n i x n i .
(3.31)

Since $\|w_{n_i}-x_{n_i}\|\le\|w_{n_i}-z_{n_i}\|+\|z_{n_i}-x_{n_i}\|\to 0$ as $i\to\infty$, (3.31) becomes

z z t , T z t z z t , z t , t z v , T z t t z v , z t , z v , T z t z v , z t , v K ,
(3.32)

taking the limit as $t\to 0$ and using the fact that $T$ is continuous, (3.32) becomes

z v , T z z v , z , v K , z v , z z v , T z 0 , z v , z T z 0 .

Put $v=Tz$ and we have

z T z , z T z 0 , z T z 2 0 ,

which implies that $z\in F(T)$.

We now show that $z\in\Theta$. Observe that for $\{\lambda_{n_i}\}\subset\{\lambda_n\}$,

\[
\begin{aligned}
\bigl\|P_K(I-\lambda_{n_i}f)T_{r_{n_i}}F_{r_{n_i}}x_{n_i}-T_{r_{n_i}}F_{r_{n_i}}x_{n_i}\bigr\|
&=\bigl\|P_K(I-\lambda_{n_i}f)w_{n_i}-w_{n_i}\bigr\|\\
&=\bigl\|s_{n_i}w_{n_i}+(1-s_{n_i})T_{n_i}w_{n_i}-w_{n_i}\bigr\|\\
&=\bigl\|(1-s_{n_i})T_{n_i}w_{n_i}-(1-s_{n_i})w_{n_i}\bigr\|\\
&=|1-s_{n_i}|\,\|T_{n_i}w_{n_i}-w_{n_i}\|\\
&\le\|T_{n_i}w_{n_i}-w_{n_i}\|\to 0
\end{aligned}
\]

by (3.23). Let $\lambda_{n_i}\to\lambda$ as $i\to\infty$. Since $w_{n_i}\rightharpoonup z$ and $\|P_K(I-\lambda_{n_i}f)w_{n_i}-w_{n_i}\|\to 0$, the nonexpansivity of $P_K(I-\lambda f)$ and Lemma 2.8 give $P_K(I-\lambda f)z=z$, where $\lambda\in(0,\frac{2}{L})$; hence, $z\in\Theta$.
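Fixed points of the gradient projection map $P_K(I-\lambda\nabla f)$, $\lambda\in(0,\frac{2}{L})$, are exactly the minimizers of the convex objective over $K$, which is the standard characterization behind $\Theta$. A toy sketch, where the quadratic objective, the box $K$, and the step $\lambda$ are our own illustrative choices:

```python
import numpy as np

c = np.array([2.0, -1.0])               # f(x) = 0.5 * ||x - c||^2, grad f = x - c, L = 1
lam = 0.5                               # step size in (0, 2/L)
proj = lambda x: np.clip(x, 0.0, 1.0)   # P_K for the box K = [0,1] x [0,1]

x = np.array([0.5, 0.5])
for _ in range(50):                     # gradient projection iteration
    x = proj(x - lam * (x - c))

print(x)  # -> [1. 0.], the minimizer of f over K
```

Here the iterate reaches the constrained minimizer $(1,0)$ and stays there, i.e. it is a fixed point of the composed map.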

Next we show that $z\in N(B)=\{x\in H:Bx=0\}$, the null space of $B$. We make the following estimate:

( I α n μ A ) ( I ε B ) T n T r n F r n x n p 2 = ( I ε B ) T n T r n F r n x n α n μ A ( I ε B ) T n T r n F r n x n p 2 = ( I ε B ) T n T r n F r n x n α n μ A ( I ε B ) T n T r n F r n x n p , ( I α n μ A ) ( I ε B ) T n T r n F r n x n p = 1 2 [ ( I ε B ) T n T r n F r n x n α n μ A ( I ε B ) T n T r n F r n x n p 2 + ( I α n μ A ) ( I ε B ) T n T r n F r n x n p 2 ( ( I ε B ) T n T r n F r n x n α n μ A ( I ε B ) T n T r n F r n x n p ) ( ( I α n μ A ) ( I ε B ) T n T r n F r n x n p ) 2 ] 1 2 [ ( I ε B ) T n T r n F r n x n p 2 + ( I α n μ A ) ( I ε B ) T n T r n F r n x n p 2 ( ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n 2 + α n 2 μ A ( I ε B ) T n T r n F r n x n 2 2 α n ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n , μ A ( I ε B ) T n T r n F r n x n ) ] 1 2 [ ( I ε B ) T n T r n F r n x n p 2 + ( I α n μ A ) ( I ε B ) T n T r n F r n x n p 2 ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n 2 α n 2 μ A ( I ε B ) T n T r n F r n x n 2 + 2 α n ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n , μ A ( I ε B ) T n T r n F r n x n ] ,

which implies that

( I α n μ A ) ( I ε B ) T n T r n F r n x n p 2 ( I ε B ) T n T r n F r n x n p 2 ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n 2 + 2 α n ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n × μ A ( I ε B ) T n T r n F r n x n x n p 2 ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n 2 + 2 α n ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n × μ A ( I ε B ) T n T r n F r n x n = x n x n + 1 + x n + 1 p 2 ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n 2 + 2 α n ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n × μ A ( I ε B ) T n T r n F r n x n x n x n + 1 2 + x n + 1 p 2 + 2 x n x n + 1 x n + 1 p ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n 2 + 2 α n ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n × μ A ( I ε B ) T n T r n F r n x n = x n x n + 1 [ x n x n + 1 + 2 x n + 1 p ] + x n + 1 p 2 ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n 2 + 2 α n ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n × μ A ( I ε B ) T n T r n F r n x n .

From (3.15) and the condition on α n , we obtain

( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n x n x n + 1 [ x n x n + 1 + 2 x n + 1 p ] + x n + 1 p 2 ( I α n μ A ) ( I ε B ) T n T r n F r n x n p 2 + 2 α n ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n × μ A ( I ε B ) T n T r n F r n x n = x n x n + 1 [ x n x n + 1 + 2 x n + 1 p ] + x n + 1 p 2 x n + 1 p 2 + 2 α n ( I ε B ) T n T r n F r n x n ( I α n μ A ) ( I ε B ) T n T r n F r n x n × μ A ( I ε B ) T n T r n F r n x n 0 .
(3.33)

Using (3.15), (3.25) in (3.33), we have

x n ( I ε B ) x n x n + 1 x n + x n + 1 ( I ε B ) x n = x n + 1 x n + ( I α n μ A ) ( I ε B ) T n T r n F r n x n ( I ε B ) x n x n + 1 x n + ( I α n μ A ) ( I ε B ) T n T r n F r n x n ( I ε B ) T n T r n F r n x n + ( I ε B ) T n T r n F r n x n ( I ε B ) x n x n + 1 x n + ( I α n μ A ) ( I ε B ) T n T r n F r n x n ( I ε B ) T n T r n F r n x n + T n T r n F r n x n x n 0 .
(3.34)

Replace $n$ by $n_i$ in (3.34) to get

\[\lim_{i\to\infty}\bigl\|x_{n_i}-(I-\varepsilon B)x_{n_i}\bigr\|=0.\]

Since the map $I-\varepsilon B$ is nonexpansive by (3.3), and $x_{n_i}\rightharpoonup z$ (because $\|z_{n_i}-x_{n_i}\|\to 0$ and $z_{n_i}\rightharpoonup z$), we deduce from the demiclosedness principle that

\[z-(I-\varepsilon B)z=0,\]

which implies that $\varepsilon Bz=0$, and hence $Bz=0$ (since $\varepsilon>0$); thus we get $z\in N(B)$ and conclude that $z\in\Omega$.

Since $q=P_\Omega(I-\mu A)q$ and $z\in\Omega$, it follows that

\[\limsup_{n\to\infty}\langle Aq,\,q-x_n\rangle=\lim_{i\to\infty}\langle Aq,\,q-x_{n_i}\rangle=\langle Aq,\,q-z\rangle\le 0.\]
(3.35)

 □
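The demiclosedness step in the proof above relies on $I-\varepsilon B$ being nonexpansive, which holds whenever $B$ is $\beta$-inverse strongly monotone and $\varepsilon\in(0,2\beta]$. A quick numeric check for a symmetric positive semidefinite matrix $B$, for which one may take $\beta=1/\lambda_{\max}(B)$ (a quadratic instance of the Baillon–Haddad theorem [13]); the matrix and $\varepsilon$ below are our own illustrative choices:

```python
import numpy as np

B = np.array([[2.0, 0.0],
              [0.0, 0.5]])   # symmetric PSD, largest eigenvalue 2, so beta = 1/2
eps = 0.9                    # any eps in (0, 2*beta] = (0, 1]

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d = x - y
    # inverse strong monotonicity: <Bx - By, x - y> >= beta * ||Bx - By||^2
    assert np.dot(B @ d, d) >= 0.5 * np.dot(B @ d, B @ d) - 1e-12
    # nonexpansivity of I - eps*B
    assert np.linalg.norm(d - eps * (B @ d)) <= np.linalg.norm(d) + 1e-12
```

For this linear $B$ the nonexpansivity is equivalent to the spectral norm of $I-\varepsilon B$ being at most $1$.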

Theorem 3.6 Suppose that the conditions of Remark 3.1 are satisfied, and $\{x_n\}$ is as defined by (3.1). Then $\{x_n\}$ converges strongly to $q\in\Omega$, which is the unique solution of the variational inequality $\langle Aq,\,z-q\rangle\ge 0$, $\forall z\in\Omega$.

Proof Let $q\in\Omega$ be the unique solution of the variational inequality. Then

\[
\begin{aligned}
\|x_{n+1}-q\|^2&=\bigl\|(I-\alpha_n\mu A)(I-\varepsilon B)T_nT_{r_n}F_{r_n}x_n-q\bigr\|^2\\
&=\bigl\|(I-\alpha_n\mu A)(I-\varepsilon B)u_n-(I-\alpha_n\mu A)(I-\varepsilon B)q+(I-\alpha_n\mu A)(I-\varepsilon B)q-q\bigr\|^2\\
&=\bigl\|\bigl[(I-\alpha_n\mu A)(I-\varepsilon B)u_n-(I-\alpha_n\mu A)(I-\varepsilon B)q\bigr]-\alpha_n\mu Aq\bigr\|^2\\
&\le\bigl\|(I-\alpha_n\mu A)(I-\varepsilon B)u_n-(I-\alpha_n\mu A)(I-\varepsilon B)q\bigr\|^2+2\alpha_n\langle-\mu Aq,\,x_{n+1}-q\rangle\\
&\le(1-\alpha_n\tau)^2\bigl\|(I-\varepsilon B)u_n-(I-\varepsilon B)q\bigr\|^2+2\alpha_n\langle-\mu Aq,\,x_{n+1}-q\rangle\\
&\le(1-\alpha_n\tau)^2\|x_n-q\|^2+2\alpha_n\langle-\mu Aq,\,x_{n+1}-q\rangle\\
&=(1-2\alpha_n\tau)\|x_n-q\|^2+\alpha_n^2\tau^2\|x_n-q\|^2+2\alpha_n\langle-\mu Aq,\,x_{n+1}-q\rangle\\
&\le(1-2\alpha_n\tau)\|x_n-q\|^2+2\alpha_n\tau\Bigl(\frac{\alpha_n\tau M'}{2}+\frac{1}{\tau}\langle-\mu Aq,\,x_{n+1}-q\rangle\Bigr)\\
&=(1-2\alpha_n\tau)\|x_n-q\|^2+\delta_n,
\end{aligned}
\]
(3.36)

where $M'=\sup\{\|x_n-q\|^2:n\in\mathbb{N}\}$ and $\delta_n=2\alpha_n\tau\bigl(\frac{\alpha_n\tau M'}{2}+\frac{1}{\tau}\langle-\mu Aq,\,x_{n+1}-q\rangle\bigr)$.

Apply Lemma 2.6 to (3.36) to conclude that $x_n\to q$. □
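The final step is a standard Xu-type recursion lemma (cf. [11]): if $a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\sigma_n$ with $\gamma_n\in(0,1)$, $\sum\gamma_n=\infty$, and $\limsup_n\sigma_n\le 0$, then $a_n\to 0$; here $\gamma_n=2\alpha_n\tau$ and $\gamma_n\sigma_n=\delta_n$. Assuming Lemma 2.6 (not reproduced in this excerpt) is of this form, a numeric illustration with toy sequences of prototype shape:

```python
# Recursion a_{n+1} = (1 - g_n) a_n + g_n * s_n with g_n = s_n = 1/(n+1):
# sum g_n diverges and s_n -> 0, so a_n -> 0 even though a_0 = 1.
a = 1.0
for n in range(100_000):
    g = 1.0 / (n + 1)
    a = (1 - g) * a + g * (1.0 / (n + 1))
print(a)  # ~ 1.2e-4 (one can check a_N = H_N / N, so a_N decays like log(N)/N)
```

The slow $\log(N)/N$ decay is typical of step sizes $\alpha_n=\frac{1}{1+n}$, which satisfy the divergence condition at the cost of speed.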

Remark 3.7 The prototype sequences are

\[\alpha_n=\frac{1}{1+n},\qquad\lambda_n=\frac{2n}{(1+n)L},\qquad r_n=\frac{1}{1+n}.\]
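To see the scheme at work in the simplest possible setting, replace the composed map $(I-\varepsilon B)T_nT_{r_n}F_{r_n}$ by a single nonexpansive map $S$ with $\operatorname{Fix}(S)=\Omega$ (here the metric projection onto the closed unit ball) and take $Ax=x-b$, which is $1$-Lipschitz and $1$-strongly monotone. The hybrid steepest descent step $x_{n+1}=(I-\alpha_n\mu A)Sx_n$ with the prototype $\alpha_n=\frac{1}{1+n}$ then converges to the unique solution of $\langle Aq,\,z-q\rangle\ge 0$ on $\Omega$, i.e. to the projection of $b$ onto the ball. All concrete choices below are ours, for illustration only:

```python
import numpy as np

b = np.array([3.0, 4.0])
S = lambda x: x / max(1.0, np.linalg.norm(x))  # P_K, K the closed unit ball; Fix(S) = K
A = lambda x: x - b                            # 1-Lipschitz, 1-strongly monotone
mu = 1.0                                       # mu in (0, 2*eta/L^2) = (0, 2)

x = np.zeros(2)
for n in range(1000):
    alpha = 1.0 / (1 + n)                      # prototype sequence from Remark 3.7
    y = S(x)
    x = y - alpha * mu * A(y)                  # x_{n+1} = (I - alpha_n mu A) S x_n

q = b / np.linalg.norm(b)                      # VI solution: projection of b, (0.6, 0.8)
print(np.linalg.norm(x - q))  # small, shrinking like alpha_n
```

In this toy case the iterates stay on the ray through $b$, so the error after step $n$ is exactly $\alpha_n\|b-q\|$, which makes the role of $\alpha_n\to 0$, $\sum\alpha_n=\infty$ visible.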

Remark 3.8 Our result extends the result of Tian and Liu [7] and is more widely applicable.

Remark 3.9 The scheme is more widely applicable than the results of Yamada [5] and Tian [6], who worked with a single nonexpansive mapping.

References

  1. Xu HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116(3):659–678. 10.1023/A:1023073621589


  2. Ishikawa S: Fixed point by a new iterative method. Proc. Am. Math. Soc. 1974, 44: 147–150. 10.1090/S0002-9939-1974-0336469-5


  3. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005


  4. Ofoedu E: A general approximation scheme for solutions of various problems in fixed point theory. Int. J. Anal. 2013., 2013: Article ID 762831


  5. Yamada I: The hybrid steepest descent method for variational inequality problems over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications (Haifa, 2000); 2001.


  6. Tian M: An application of hybrid steepest descent methods for equilibrium problems and strictly pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2011., 2011: Article ID 173430


  7. Tian M, Liu L: Iterative algorithm based on the viscosity approximation method for equilibrium and constrained convex minimization problem. Fixed Point Theory Appl. 2012., 2012: Article ID 201


  8. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63(1–4):123–145.


  9. Zegeye H: An iterative approximation method for a common fixed point of two pseudocontractive mappings. ISRN Math. Anal. 2011., 2011: Article ID 621901


  10. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6(1):117–136.


  11. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66(1):240–256. 10.1112/S0024610702003332


  12. Goebel K, Kirk WA Cambridge Studies in Advanced Mathematics 38. In Topics on Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.


  13. Baillon JB, Haddad G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. 10.1007/BF03007664


  14. Chantarangsi W, Jaiboon C, Kumam P: A viscosity hybrid steepest descent method for generalized mixed equilibrium problems and variational inequalities for relaxed cocoercive mapping in Hilbert spaces. Abstr. Appl. Anal. 2010., 2010: Article ID 390972


  15. Jaiboon C, Kumam P, Humphries UW: Weak convergence theorem by an extragradient method for variational inequality, equilibrium and fixed point problems. Bull. Malays. Math. Soc. 2009, 32(2):173–185.


  16. Jaiboon C, Chantarangsi W, Kumam P: A convergence theorem based on a hybrid relaxed extragradient method for generalized equilibrium problems and fixed point problems of a finite family of nonexpansive mappings. Nonlinear Anal. Hybrid Syst. 2010, 4(1):199–215. 10.1016/j.nahs.2009.09.009


  17. Jaiboon C, Kumam P, Humphries U: An extragradient method for relaxed cocoercive variational inequality and equilibrium problems. Anal. Theory Appl. 2009, 25(4):381–400. 10.1007/s10496-009-0381-8


  18. Kumam W, Kumam P: Hybrid iterative scheme by relaxed extragradient method for solutions of equilibrium problems and a general system of variational inequalities with application to optimization. Nonlinear Anal. Hybrid Syst. 2009, 3: 640–656. 10.1016/j.nahs.2009.05.007


  19. Kumam P: Strong convergence theorems by an extragradient methods for solving variational inequality and equilibrium problems in a Hilbert space. Turk. J. Math. 2011, 74: 5286–5302.


  20. Onjai-uea N, Jaiboon C, Kumam P: A relaxed hybrid steepest descent method for common solutions of generalized mixed equilibrium problems and fixed point problems. Fixed Point Theory Appl. 2011., 2011: Article ID 32 10.1186/1687-1812-2011-32


  21. Onjai-uea N, Jaiboon C, Kumam P, Humphries U: Convergence of iterative sequences for fixed points of an infinite family of nonexpansive mappings based on a hybrid steepest descent methods. J. Inequal. Appl. 2012., 2012: Article ID 101 10.1186/1029-242X-2012-101


  22. Wirojana N, Jitpeera T, Kumam P: The hybrid steepest descent method for solving variational inequality over triple hierarchical problems. J. Inequal. Appl. 2012., 2012: Article ID 280 10.1186/1029-242X-2012-280



Author information


Corresponding author

Correspondence to Felix U Attah.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Osilike, M.O., Ofoedu, E.U. & Attah, F.U. The hybrid steepest descent method for solutions of equilibrium problems and other problems in fixed point theory. Fixed Point Theory Appl 2014, 156 (2014). https://doi.org/10.1186/1687-1812-2014-156


  • DOI: https://doi.org/10.1186/1687-1812-2014-156
