Open Access

The hybrid steepest descent method for solutions of equilibrium problems and other problems in fixed point theory

Fixed Point Theory and Applications 2014, 2014:156

https://doi.org/10.1186/1687-1812-2014-156

Received: 10 April 2014

Accepted: 14 June 2014

Published: 22 July 2014

Abstract

In this paper, we combine the gradient projection algorithm and the hybrid steepest descent method and prove strong convergence to a common element of the set of solutions of an equilibrium problem, the null space of an inverse strongly monotone operator, the set of fixed points of a continuous pseudocontractive mapping, and the set of minimizers of a convex function. This common element is shown to be the unique solution of a variational inequality problem.

MSC: 47H06, 47H09, 47J05, 47J25.

Keywords

steepest descent method; monotone; equilibrium problems; gradient projection algorithm; pseudocontractive mapping; variational inequality

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, and let $K$ be a nonempty, closed, and convex subset of $H$. Let $F$ be a bifunction from $K \times K$ into $\mathbb{R}$. The equilibrium problem for $F : K \times K \to \mathbb{R}$ is to find $x^* \in K$ such that
$$F(x^*, y) \ge 0, \quad \forall y \in K.$$
(1.1)
The set of solutions of (1.1) is denoted by $EP(F)$. For a given nonlinear operator $A$, the problem of finding $z \in K$ such that
$$\langle Az, y - z \rangle \ge 0, \quad \forall y \in K,$$
(1.2)

is called the variational inequality problem ($VIP$), and the set of solutions of the $VIP$ is denoted by $VIP(A, K)$.

Given a mapping $A : K \to H$, let $F(x, y) = \langle Ax, y - x \rangle$ for all $x, y \in K$; then $z \in EP(F)$ if and only if $\langle Az, y - z \rangle \ge 0$, $\forall y \in K$, that is, $z$ is a solution of the variational inequality (1.2).

A mapping $T : K \to K$ is said to be Lipschitz if there exists $L \ge 0$ such that
$$\|Tx - Ty\| \le L\|x - y\|, \quad \forall x, y \in K.$$
(1.3)
The operator $T$ is said to be a contraction if $L < 1$ in (1.3), and nonexpansive if $L = 1$. Let $H$ be a real Hilbert space and $K$ a nonempty subset of $H$. A mapping $T : K \subseteq H \to K$ is said to be pseudocontractive if for all $x, y \in K$,
$$\langle Tx - Ty, x - y \rangle \le \|x - y\|^2.$$
(1.4)
Equivalently, (1.4) can be written as
$$\|Tx - Ty\|^2 \le \|x - y\|^2 + \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in K.$$
(1.5)

The set of fixed points of a mapping $T$ is denoted by $F(T) = \{x \in K : Tx = x\}$.

In what follows, we shall use $\to$ for strong convergence and $\rightharpoonup$ for weak convergence.

For every point $x \in H$, there exists a unique nearest point in $K$, denoted by $P_K x$, such that $\|x - P_K x\| \le \|x - y\|$, $\forall y \in K$. The map $P_K$ is called the metric projection of $H$ onto $K$. It is also well known that $P_K$ satisfies
$$\langle P_K x - P_K y, x - y \rangle \ge \|P_K x - P_K y\|^2, \quad \forall x, y \in H.$$
Moreover, $P_K x$ is characterized by the properties
$$P_K x \in K \quad \text{and} \quad \langle x - P_K x, P_K x - y \rangle \ge 0, \quad \forall y \in K.$$
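For a concrete closed convex set with a closed form projection, the characterization above can be checked numerically. The following is a minimal sketch (NumPy; the box $K = [-1, 1]^n$ and all names are our illustrative choices, not from the paper), where the projection is the componentwise clamp:

```python
import numpy as np

# Projection onto the box K = [lo, hi]^n is the componentwise clamp.
def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
lo, hi = -1.0, 1.0
x = 3.0 * rng.normal(size=5)             # a point, generally outside K
px = project_box(x, lo, hi)

# Characterization: <x - P_K x, P_K x - y> >= 0 for every y in K.
ys = [rng.uniform(lo, hi, size=5) for _ in range(100)]
char_ok = all(np.dot(x - px, px - y) >= -1e-12 for y in ys)
```

Coordinatewise, each term $(x_i - (P_Kx)_i)((P_Kx)_i - y_i)$ is nonnegative, which is why the inner product test passes for every sampled $y \in K$.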
Consider the optimization problem:
$$\min f(x) \quad \text{such that } x \in K,$$
(1.6)
where $f : K \to \mathbb{R} \cup \{\infty\}$ is a real valued convex functional. If $f$ is a continuously Fréchet differentiable convex functional on $K$, then $x \in K$ is a solution of the optimization problem (1.6) if and only if the optimality condition
$$\langle \nabla f(x), y - x \rangle \ge 0, \quad \forall y \in K,$$
(1.7)

holds.

Using the characterization of the projection operator, one can easily show that solving the variational inequality (1.7) is equivalent to solving the fixed point problem of finding $x \in K$ which satisfies the relation
$$x = P_K(I - \mu\nabla f)x,$$
where $\mu > 0$ is a constant. A formulation of the iterative scheme for the variational inequality problem (1.7) may be as follows: for arbitrary $x_1 \in K$, define $\{x_n\}_{n \ge 1}$ by
$$x_{n+1} = P_K(I - \mu\nabla f)x_n$$
(1.8)
or, more generally,
$$x_{n+1} = P_K(I - \mu_n\nabla f)x_n,$$
(1.9)

where the parameters $\mu$, $\mu_n$ are positive real numbers known as step-sizes. The scheme (1.9) has been considered with several step-size rules:

  • Constant step-size, where for some $\mu > 0$ we have $\mu_n = \mu$ for all $n$.

  • Diminishing step-size, where $\mu_n \to 0$ and $\sum_{n=1}^{\infty} \mu_n = \infty$.

  • Polyak's step-size, where $\mu_n = \frac{f(x_n) - f^*}{\|\nabla f(x_n)\|^2}$, where $f^*$ is the optimal value of (1.6).

  • Modified Polyak's step-size, where $\mu_n = \frac{f(x_n) - \hat{f}_n}{\|\nabla f(x_n)\|^2}$ and $\hat{f}_n = \min_{0 \le j \le n} f(x_j) - \delta$ for some scalar $\delta > 0$.

The constant step-size rule is suitable when we are interested in finding an approximate solution to the problem (1.6). The diminishing step-size rule is an off-line rule and is typically used, with $\mu_n = \frac{c}{n+1}$ or $\frac{c}{\sqrt{n+1}}$, for some distributed implementations of the method.
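The gradient projection scheme (1.9) with the first two step-size rules can be sketched as follows. This is a minimal illustration under assumptions of our choosing (the objective $f(x) = \frac{1}{2}\|x - c\|^2$, the unit ball as $K$, and all variable names are ours):

```python
import numpy as np

# Projection onto the closed unit ball (closed form).
def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

c = np.array([2.0, 0.0])                 # minimize f(x) = 0.5*||x - c||^2 over the unit ball
grad = lambda x: x - c                   # grad f; 1-Lipschitz and 1-strongly monotone here

x_const = np.array([0.0, 1.0])
x_dimin = np.array([0.0, 1.0])
for n in range(1, 2001):
    x_const = project_ball(x_const - 0.5 * grad(x_const))      # constant mu_n = 0.5
    x_dimin = project_ball(x_dimin - grad(x_dimin) / (n + 1))  # diminishing mu_n = 1/(n+1)

sol = np.array([1.0, 0.0])               # minimizer over the ball is P_K(c)
err_const = np.linalg.norm(x_const - sol)
err_dimin = np.linalg.norm(x_dimin - sol)
```

Since this $\nabla f$ is strongly monotone and Lipschitz, the constant-step map is a strict contraction and converges geometrically; the diminishing rule also converges here, only more slowly, matching the discussion above.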

These schemes are the well-known gradient projection algorithms. However, the convergence of these schemes requires that the operator $\nabla f$ be Lipschitz continuous and strongly monotone, which is a strong condition, restrictive in applications. If $\nabla f$ is Lipschitz continuous and strongly monotone on $H$, it is obvious that the map $P_K(I - \mu\nabla f)$ is a strict contraction, and by the Banach contraction principle the sequence $\{x_n\}$ defined by (1.8) converges strongly to the unique minimizer of (1.6), which is the solution of the variational inequality problem (1.7). Another limitation of the scheme (1.8) is that it assumes that a closed form expression of $P_K : H \to K$ is known, whereas in many situations it is not.

The iterative approximation of fixed points and zeros of nonlinear operators has been studied extensively by many authors in order to solve nonlinear operator equations as well as variational inequality problems (see [1, 2] and the references therein).

Ceng et al. [3] studied the following algorithm:
$$x_{n+1} = P_C[s_n\gamma Vx_n + (I - s_n\mu F)T_nx_n], \quad n \ge 0,$$

where $s_n = \frac{2 - \lambda_n L}{4}$ and $P_C(I - \lambda_n\nabla f) = s_nI + (1 - s_n)T_n$, $n \ge 0$, and they proved that the sequence $\{x_n\}$ converges strongly to a minimizer of a constrained convex minimization problem which also solves a certain variational inequality.

For $r > 0$, and $T_r$ and $F_r$ as in Lemma 2.2 and Lemma 2.3, respectively, Ofoedu [4] introduced the following iteration scheme:
$$x_{n+1} = \alpha_n u + \beta_n x_n + \bigl((1 - \beta_n)I - \alpha_n A\bigr)W_n(I - \varepsilon B)T_{r_n}F_{r_n}x_n, \quad n \ge 1,$$

and proved that if $H$ is a real Hilbert space; $S : H \to H$ is a continuous pseudocontractive mapping; $T_j : H \to H$, $j = 1, 2, 3, \dots$, is a countably infinite family of nonexpansive mappings; $f : H \times H \to \mathbb{R}$ is a bifunction satisfying (A1)-(A4); $\Phi : H \to \mathbb{R} \cup \{+\infty\}$ is a proper lower semicontinuous convex function; $\Theta : H \to H$ is a continuous monotone mapping; $u \in H$ is a fixed vector; $A : H \to H$ is a strongly positive bounded linear operator with coefficient $\gamma$; $B : H \to H$ is an $\eta$-inverse strongly monotone mapping; and the sequences $\{r_n\}$, $\{\alpha_n\}$, $\{\beta_n\}$ satisfy appropriate conditions, then the sequence $\{x_n\}$ converges strongly to the unique solution $x^* \in \Omega = F(S) \cap GMEP(f, \Phi, \Theta) \cap B^{-1}(0) \cap \bigcap F(T_j)$ of the variational inequality $\langle u - Ax^*, x - x^* \rangle \le 0$, $\forall x \in \Omega$.

In 2001, Yamada [5] introduced the hybrid steepest descent method, which solves the variational inequality $VIP(F, K)$ over the set $K$ of fixed points of a nonexpansive map $T$. In particular, he studied the scheme
$$x_{n+1} = (I - \alpha_n\mu F)Tx_n$$

and proved the following theorem.

Theorem IY [5]

Assume that $H$ is a real Hilbert space, $T : H \to H$ is nonexpansive with $F(T) \ne \emptyset$, and $A : H \to H$ is $\eta$-strongly monotone and $L$-Lipschitz. Let $\mu \in (0, \frac{2\eta}{L^2})$. Assume also that the sequence $\{\lambda_n\} \subset (0, 1)$ satisfies the following conditions:
  1. (i)

    $\lambda_n \to 0$ as $n \to \infty$,

     
  2. (ii)

    $\sum_{n=0}^{\infty} \lambda_n = \infty$,

     
  3. (iii)

    $\sum_{n=0}^{\infty} |\lambda_{n+1} - \lambda_n| < \infty$ or $\lim_{n \to \infty} \frac{\lambda_n}{\lambda_{n+1}} = 1$.

     

Take $x_0 \in H$ arbitrary and define $\{x_n\}_{n \ge 1}$ by the scheme above; then $\{x_n\}_{n \ge 1}$ converges strongly to the unique solution $x^* \in F(T)$ of $VIP(A, K)$, where $K$ is the set of fixed points of $T$.

The scheme above minimizes certain convex functions over the intersection of fixed point sets of nonexpansive mappings if $A = \nabla f$, say, where $f$ is a continuously Fréchet differentiable convex function. It solves the variational inequality $VIP(A, K)$ and does not require a closed form expression of $P_K$; instead, it requires a closed form expression of a nonexpansive mapping $T$ whose set of fixed points is $K$.
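Yamada's iteration can be sketched numerically. The following is a minimal illustration under assumptions of our own choosing (the box $K = [-1,1]^2$ as the fixed point set of the projection $T$, $A = \nabla f$ for a quadratic $f$, and the parameter values are ours):

```python
import numpy as np

# T: projection onto the box K = [-1, 1]^2 (nonexpansive, Fix(T) = K)
T = lambda x: np.clip(x, -1.0, 1.0)

# A = grad of f(x) = 0.5*||x - c||^2: 1-strongly monotone (eta = 1), 1-Lipschitz (L = 1)
c = np.array([2.0, -3.0])
A = lambda x: x - c

mu = 1.0                                  # mu in (0, 2*eta/L^2) = (0, 2)
x = np.array([5.0, 5.0])
for n in range(1, 5001):
    lam = 1.0 / (n + 1)                   # lambda_n -> 0 and sum lambda_n = infinity
    y = T(x)
    x = y - lam * mu * A(y)               # x_{n+1} = (I - lambda_n * mu * A) T x_n

# The limit solves VIP(A, Fix(T)); with A = grad f it is the projection of c onto K.
err = np.linalg.norm(x - np.array([1.0, -1.0]))
```

With $A = \nabla f$, the limit minimizes $f$ over $F(T) = K$, and no closed form of $P_\Omega$ for a complicated $\Omega$ is ever needed, only the nonexpansive map $T$ itself, which is the point made above.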

Motivated by the work of Yamada [5], Tian [6] introduced the following scheme:
$$\begin{cases} \phi(u_n, y) + \frac{1}{\lambda_n}\langle y - u_n, u_n - x_n \rangle \ge 0, & \forall y \in C, \\ y_n = \beta_n u_n + (1 - \beta_n)Su_n, \\ x_{n+1} = (I - \alpha_n\mu A)y_n, & n \in \mathbb{N}, \end{cases}$$
(1.10)

and he proved that if $\alpha_n$, $\beta_n$, $\lambda_n$ satisfy certain conditions, then the sequence $\{x_n\}$ given by (1.10) converges strongly to $q \in F(S) \cap EP(\phi)$, which solves the variational inequality $\langle Aq, p - q \rangle \ge 0$, $\forall p \in F(S) \cap EP(\phi)$.

In 2012, Tian and Liu [7] introduced the following scheme:
$$\begin{cases} \Phi(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n \rangle \ge 0, & \forall y \in C, \\ x_{n+1} = (I - \alpha_n\mu F)T_nu_n, & n \in \mathbb{N}, \end{cases}$$

where $u_n = Q_{r_n}x_n$, $P_C(I - \lambda_n\nabla f) = s_nI + (1 - s_n)T_n$, $s_n = \frac{2 - \lambda_n L}{4}$, and proved that if $C$ is a nonempty, closed, and convex subset of a real Hilbert space $H$; $\Phi$ is a bifunction from $C \times C$ into $\mathbb{R}$ satisfying (A1)-(A4); $f : C \to \mathbb{R}$ is a real valued convex function; $\nabla f$ is an $L$-Lipschitzian mapping with $L \ge 0$; $\Omega \cap EP(\Phi) \ne \emptyset$, where $\Omega$ is the solution set of a minimization problem; $F : C \to H$ is a $k$-Lipschitzian continuous and $\eta$-strongly monotone operator with constants $k, \eta > 0$; $0 < \mu < \frac{2\eta}{k^2}$, $\tau = \mu(\eta - \frac{\mu k^2}{2})$; and the sequences $\{\alpha_n\}$, $\{r_n\}$, $\{\lambda_n\}$ satisfy appropriate conditions, then the sequence $\{x_n\}$ generated from $x_1 \in H$ converges strongly to a point $q \in \Omega \cap EP(\Phi)$ which solves the variational inequality $\langle Fq, p - q \rangle \ge 0$, $\forall p \in \Omega \cap EP(\Phi)$.

In this paper, motivated by the results of Ofoedu [4], Yamada [5], Tian [6], and Tian and Liu [7], we study a new iterative scheme and prove its strong convergence to a common element of the set of solutions of an equilibrium problem, the null space of an inverse strongly monotone operator, the set of fixed points of a continuous pseudocontractive mapping, and the set of minimizers of a convex function. This common element is shown to be the unique solution of a variational inequality problem.

2 Preliminaries

For solving the equilibrium problem for a bifunction $F : K \times K \to \mathbb{R}$, let us assume that $F$ satisfies the following conditions:

(A1) $F(x, x) = 0$, $\forall x \in K$.

(A2) $F$ is monotone, i.e., $F(x, y) + F(y, x) \le 0$, $\forall x, y \in K$.

(A3) For each $x, y, z \in K$, $\lim_{t \to 0^+} F(tz + (1 - t)x, y) \le F(x, y)$.

(A4) For each $x \in K$, the function $y \mapsto F(x, y)$ is convex and lower semicontinuous.

Lemma 2.1 (Blum and Oettli [8])

Let $K$ be a nonempty, closed, and convex subset of $H$ and $f$ a bifunction from $K \times K$ to $\mathbb{R}$ satisfying (A1)-(A4). For $r > 0$ and $x \in H$, there exists $z \in K$ such that
$$f(z, y) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0, \quad \forall y \in K.$$

Lemma 2.2 (Zegeye [9])

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $S : C \to H$ be a continuous pseudocontractive mapping. Then, for $r > 0$ and $x \in H$, there exists $z \in C$ such that
$$\langle y - z, Sz \rangle - \frac{1}{r}\langle y - z, (1 + r)z - x \rangle \le 0, \quad \forall y \in C.$$
Furthermore, if
$$T_rx = \Bigl\{ z \in C : \langle y - z, Sz \rangle - \frac{1}{r}\langle y - z, (1 + r)z - x \rangle \le 0, \ \forall y \in C \Bigr\},$$

$\forall x \in H$, then the following hold:

(C1) $T_r$ is single valued;

(C2) $T_r$ is firmly nonexpansive, i.e., for any $x, y \in H$,
$$\|T_rx - T_ry\|^2 \le \langle T_rx - T_ry, x - y \rangle;$$

(C3) $F(T_r) = F(S)$;

(C4) $F(S)$ is closed and convex.

Lemma 2.3 (Combettes and Hirstoaga [10])

Assume that $f : K \times K \to \mathbb{R}$ satisfies (A1)-(A4). For $r > 0$ and $x \in H$, define $F_r : H \to K$ by
$$F_rx = \Bigl\{ z \in K : f(z, y) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0, \ \forall y \in K \Bigr\},$$

then the following hold:

(B1) $F_r$ is single valued;

(B2) $F_r$ is firmly nonexpansive, i.e., for any $x, y \in H$,
$$\|F_rx - F_ry\|^2 \le \langle F_rx - F_ry, x - y \rangle;$$

(B3) $F(F_r) = EP(f)$;

(B4) $EP(f)$ is closed and convex.

Lemma 2.4 (Ofoedu [4])

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $S : C \to C$ be a continuous pseudocontractive mapping. For $r > 0$, let $T_r : H \to C$ be the mapping in Lemma 2.2; then for any $x \in H$ and any $p, q > 0$,
$$\|T_px - T_qx\| \le \frac{|p - q|}{p}\bigl(\|T_px\| + \|x\|\bigr).$$
Recall that a mapping $A : H \to H$ is said to be monotone if $\langle Ax - Ay, x - y \rangle \ge 0$, $\forall x, y \in K$. In particular, the mapping $A$ is called
  1. (1)

    $\eta$-strongly monotone over $K$ if there exists $\eta > 0$ such that $\langle Ax - Ay, x - y \rangle \ge \eta\|x - y\|^2$, $\forall x, y \in K$;

     
  2. (2)

    $\alpha$-inverse strongly monotone over $K$ if there exists $\alpha > 0$ such that $\langle Ax - Ay, x - y \rangle \ge \alpha\|Ax - Ay\|^2$, $\forall x, y \in K$.

     
Lemma 2.5 Let $A : H \to H$ be monotone over a closed and convex subset $K$ of $H$; then the following statements are equivalent:
  1. (1)

    $z \in K$ is a solution of $VIP(A, K)$, i.e., $\langle Az, x - z \rangle \ge 0$, $\forall x \in K$.

     
  2. (2)

    For fixed $\mu > 0$, $z = P_K(I - \mu A)z$.

     

Lemma 2.6 (see [1, 11])

Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the following relation:
$$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n, \quad n \ge 0,$$
where
  1. (i)

    $\{\gamma_n\} \subset [0, 1]$, $\sum \gamma_n = \infty$,

     
  2. (ii)

    $\limsup_{n \to \infty} \frac{\delta_n}{\gamma_n} \le 0$ or $\sum |\delta_n| < \infty$.

     

Then $\lim_{n \to \infty} a_n = 0$.
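The behavior guaranteed by Lemma 2.6 is easy to observe numerically. A minimal sketch, with parameter sequences of our own choosing that satisfy (i) and (ii):

```python
# Numeric illustration of Lemma 2.6 with gamma_n = 1/(n+1) (sum diverges)
# and delta_n = 1/(n+1)^2, so delta_n / gamma_n = 1/(n+1) -> 0.
a = 1.0
for n in range(1, 100_001):
    gamma = 1.0 / (n + 1)
    delta = 1.0 / (n + 1) ** 2
    a = (1 - gamma) * a + delta          # a_{n+1} <= (1 - gamma_n) a_n + delta_n

# The lemma guarantees a_n -> 0; for this choice a_n behaves like (1 + ln n)/n.
```

Recursions of exactly this shape drive the convergence proofs below (see the use of Lemma 2.6 after (3.9)).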

Lemma 2.7 Let $H$ be a real Hilbert space; then for all $x, y \in H$, the following hold:
  1. (i)

    $\|x - y\|^2 = \|x\|^2 - 2\langle x, y \rangle + \|y\|^2$;

     
  2. (ii)

    $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$.

     

Lemma 2.8 (Demiclosedness Principle [12])

Let $T : C \to C$ be a nonexpansive mapping with $F(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in $C$ such that $x_n \rightharpoonup x$ and $\|x_n - Tx_n\| \to 0$, then $x \in F(T)$.

Definition 2.9 A map $T : H \to H$ is called averaged if there exist a nonexpansive mapping $S$ on $H$ and $\alpha \in (0, 1)$ such that
$$T = (1 - \alpha)I + \alpha S,$$

and we say that $T$ is $\alpha$-averaged.

Remark 2.10
  1. (i)

    Firmly nonexpansive maps are $\frac{1}{2}$-averaged. Thus, a map $T$ is firmly nonexpansive if and only if $2T - I = S$, where $S$ is nonexpansive and $I$ is the identity mapping on $H$.

     
  2. (ii)

    Every averaged mapping is nonexpansive.

     
  3. (iii)

    A map $S$ is nonexpansive if and only if $I - S$ is $\frac{1}{2}$-inverse strongly monotone.

     
  4. (iv)

    If $A$ is $\eta$-inverse strongly monotone and $\lambda > 0$, then $\lambda A$ is $\frac{\eta}{\lambda}$-inverse strongly monotone.

     

Lemma 2.11 A map $T : H \to H$ is averaged if and only if $A = I - T$ is $\eta$-inverse strongly monotone for some $\eta > \frac{1}{2}$. In particular, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-inverse strongly monotone.

Lemma 2.12 Let $T = (1 - \alpha)A + \alpha S$, $\alpha \in (0, 1)$. If $A$ is averaged and $S$ is nonexpansive, then $T$ is averaged.

Remark 2.13
  1. (i)

    A map $N$ is firmly nonexpansive if and only if it is 1-inverse strongly monotone.

     
  2. (ii)

    $N$ is firmly nonexpansive if and only if $I - N$ is firmly nonexpansive.

     
  3. (iii)

    Every firmly nonexpansive map is averaged.

     
  4. (iv)

    If $T = (1 - \alpha)N + \alpha S$, $\alpha \in (0, 1)$, where $N$ is firmly nonexpansive and $S$ is nonexpansive, then $T$ is averaged.

     
  5. (v)

    If $(S_i)$, $1 \le i \le m$, is a family of nonexpansive mappings, then the composition $S = S_1S_2\cdots S_m$ is nonexpansive.

     
  6. (vi)

    If $(T_i)$, $1 \le i \le m$, is a family of averaged mappings, then the composition $T = T_1T_2\cdots T_m$ is averaged. If $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged for some $\alpha_1, \alpha_2 \in (0, 1)$, then $T_1T_2$ is $\alpha$-averaged with $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.

     
Let $A : H \to H$ be $\alpha$-inverse strongly monotone, i.e.,
$$\langle Ax - Ay, x - y \rangle \ge \alpha\|Ax - Ay\|^2, \quad \forall x, y \in H.$$
(2.1)

When $\alpha = 1$, (2.1) says that $A$ is firmly nonexpansive and hence nonexpansive. Thus, a map $A$ is firmly nonexpansive if and only if it is 1-inverse strongly monotone. From the Cauchy-Schwarz inequality, we find that $\alpha$-inverse strong monotonicity implies $\frac{1}{\alpha}$-Lipschitz continuity. However, the converse is not true. For instance, $A = -I$ ($I$ the identity mapping on $H$) is nonexpansive (hence 1-Lipschitz) but not firmly nonexpansive, hence not 1-inverse strongly monotone. In 1977, Baillon and Haddad [13] showed that if $D(A) = H$ and $A$ is the gradient of a convex function, say $f$, i.e., $A = \nabla f$, then $\frac{1}{\alpha}$-Lipschitz continuity implies $\alpha$-inverse strong monotonicity and vice versa.

If $\nabla f$ is $L$-Lipschitz, then $\nabla f$ is $\frac{1}{L}$-inverse strongly monotone and $\lambda\nabla f$ is $\frac{1}{\lambda L}$-inverse strongly monotone. Then, by Lemma 2.11, $I - \lambda\nabla f$ is $\frac{\lambda L}{2}$-averaged. The projection map $P_K$ is firmly nonexpansive and hence is $\frac{1}{2}$-averaged. The composition $P_K(I - \lambda\nabla f)$ is $\alpha$-averaged (from Remark 2.13) with $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2 = \bigl(\frac{1}{2} + \frac{\lambda L}{2}\bigr) - \frac{1}{2}\cdot\frac{\lambda L}{2} = \frac{2 + \lambda L}{4}$, $0 < \lambda < \frac{2}{L}$. Now, for $n \in \mathbb{N}$, $P_K(I - \lambda_n\nabla f)$ is $\frac{2 + \lambda_n L}{4}$-averaged, so that by Definition 2.9 we have $P_K(I - \lambda_n\nabla f) = s_nI + (1 - s_n)T_n$, where $T_n$ is nonexpansive and $s_n = 1 - \frac{2 + \lambda_n L}{4} = \frac{2 - \lambda_n L}{4}$, $n \in \mathbb{N}$ (see [14-22] and the references therein).
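The decomposition $P_K(I - \lambda\nabla f) = sI + (1 - s)T$ can be sanity-checked numerically: solving for $T$ and sampling random pairs should confirm that $T$ is nonexpansive. A minimal sketch (NumPy; the choices $f(x) = \frac{1}{2}\|x\|^2$, the unit ball as $K$, and $\lambda = 0.5$ are ours for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def P(x):                                 # projection onto the closed unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

grad = lambda x: x                        # f(x) = 0.5*||x||^2, so grad f = I and L = 1
lam, L = 0.5, 1.0                         # lambda in (0, 2/L)
s = (2 - lam * L) / 4                     # s from the decomposition above

# Recover T from P_K(I - lam*grad f) = s*I + (1 - s)*T and test nonexpansiveness.
T = lambda x: (P(x - lam * grad(x)) - s * x) / (1 - s)

pairs = [rng.normal(size=(2, 3)) * 2 for _ in range(200)]
nonexpansive = all(
    np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
    for x, y in pairs
)
```

This matches the averagedness computation: $P_K(I - \lambda\nabla f)$ is $\frac{2+\lambda L}{4}$-averaged here, so the recovered $T$ is exactly its nonexpansive part.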

3 Main result

Remark 3.1 In what follows, let $K$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $F : K \times K \to \mathbb{R}$ be a bifunction satisfying (A1)-(A4) and let $T : K \to K$ be a continuous pseudocontractive mapping. Let $f : K \to \mathbb{R}$ be a real valued convex function and assume that $\nabla f$ is a $\frac{1}{L}$-inverse strongly monotone mapping with $L \ge 0$. Let $A : K \to H$ be a $k$-Lipschitz continuous and $\eta$-strongly monotone mapping with constants $k, \eta > 0$, and let $0 < \mu < \frac{2\eta}{k^2}$ and $\tau = \mu(\eta - \frac{\mu k^2}{2})$. Let $\Theta$ denote the solution set of the minimization problem (1.6). Let $B : K \to H$ be a $\gamma$-inverse strongly monotone mapping. Assume that $\Omega = F(T) \cap N(B) \cap \Theta \cap EP(F) \ne \emptyset$. Let $\{\alpha_n\}$, $\{r_n\}$, $\{\lambda_n\}$ satisfy the following conditions:
  1. (i)

    $\{\alpha_n\} \subset (0, 1)$, $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$, $\sum_{n=1}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$,

     
  2. (ii)

    $\{\lambda_n\} \subset (0, \frac{2}{L})$, $\liminf_{n \to \infty} \lambda_n > 0$, $\sum_{n=1}^{\infty} |\lambda_{n+1} - \lambda_n| < \infty$,

     
  3. (iii)

    $\{r_n\} \subset (0, \infty)$, $\liminf_{n \to \infty} r_n > 0$, $\sum_{n=1}^{\infty} |r_{n+1} - r_n| < \infty$,

     

and let $\varepsilon$ be a real constant such that $0 < \varepsilon < 2\gamma$. For $r > 0$, $T_r$ and $F_r$ are as in Lemma 2.2 and Lemma 2.3, respectively.

Consider the sequence $\{x_n\}_{n \ge 1}$ generated iteratively from arbitrary $x_1 \in H$ by
$$\begin{cases} F(z_n, y) + \frac{1}{r_n}\langle y - z_n, z_n - x_n \rangle \ge 0, & \forall y \in K, \\ x_{n+1} = (I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}z_n, & n \in \mathbb{N}. \end{cases}$$
(3.1)

We shall study the strong convergence of this iteration scheme to the unique solution $q \in \Omega$, where $q = P_\Omega(I - \mu A)q$ solves the variational inequality $\langle Aq, z - q \rangle \ge 0$, $\forall z \in \Omega$. Here $z_n = F_{r_n}x_n$, and $P_K(I - \lambda_n\nabla f) = s_nI + (1 - s_n)T_n$, where $s_n = \frac{2 - \lambda_n L}{4}$ and $T_n$ is nonexpansive.

Lemma 3.2 Suppose the conditions of Remark 3.1 are satisfied; then $\{x_n\}$ defined by (3.1) is bounded.

Proof We first show that $(I - \varepsilon B)$ is nonexpansive. For $x, y \in K$ and $0 < \varepsilon < 2\gamma$, we have
$$\begin{aligned} \|(I - \varepsilon B)x - (I - \varepsilon B)y\|^2 &= \|(x - y) - \varepsilon(Bx - By)\|^2 \\ &= \|x - y\|^2 - 2\varepsilon\langle Bx - By, x - y \rangle + \varepsilon^2\|Bx - By\|^2 \\ &\le \|x - y\|^2 - 2\varepsilon\gamma\|Bx - By\|^2 + \varepsilon^2\|Bx - By\|^2 \\ &= \|x - y\|^2 - \varepsilon(2\gamma - \varepsilon)\|Bx - By\|^2, \end{aligned}$$
(3.2)
which implies that
$$\|(I - \varepsilon B)x - (I - \varepsilon B)y\|^2 \le \|x - y\|^2,$$
(3.3)

and hence $(I - \varepsilon B)$ is nonexpansive.
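The nonexpansiveness of $I - \varepsilon B$ for $0 < \varepsilon < 2\gamma$ can be checked numerically on a concrete inverse strongly monotone $B$. A minimal sketch (NumPy; our illustrative choice is $B = \nabla g$ for $g(x) = \frac{1}{2}\|Mx - b\|^2$, which by the Baillon-Haddad theorem cited earlier is $\gamma$-inverse strongly monotone with $\gamma = 1/\|M^\top M\|$):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 3))
b = rng.normal(size=4)

B = lambda x: M.T @ (M @ x - b)           # B = grad g for g(x) = 0.5*||Mx - b||^2
Lip = np.linalg.norm(M.T @ M, 2)          # Lipschitz constant of B (spectral norm)
gamma = 1.0 / Lip                         # Baillon-Haddad: B is gamma-inverse strongly monotone
eps = gamma                               # any eps in (0, 2*gamma) works

S = lambda x: x - eps * B(x)              # S = I - eps*B

pairs = [rng.normal(size=(2, 3)) * 3 for _ in range(200)]
nonexpansive = all(
    np.linalg.norm(S(x) - S(y)) <= np.linalg.norm(x - y) + 1e-12
    for x, y in pairs
)
```

Here $S(x) - S(y) = (I - \varepsilon M^\top M)(x - y)$, and the chosen $\varepsilon$ keeps every eigenvalue of $I - \varepsilon M^\top M$ in $[0, 1]$, which is the numeric counterpart of (3.3).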

Let $p \in \Omega$. Let $w_n = T_{r_n}z_n$, $u_n = T_nw_n$, $v_n = (I - \varepsilon B)u_n$; then $F_{r_n}p = p$, $T_{r_n}p = p$, $T_np = p$, and we have
$$\|v_n - p\| = \|(I - \varepsilon B)u_n - (I - \varepsilon B)p\| \le \|u_n - p\| = \|T_nw_n - p\| \le \|w_n - p\| = \|T_{r_n}z_n - p\| \le \|z_n - p\| = \|F_{r_n}x_n - p\| \le \|x_n - p\|.$$
(3.4)
For all $x \in H$, define $D_n : H \to H$ by $D_nx = (I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}F_{r_n}x$, where $A$ is a $k$-Lipschitzian and $\eta$-strongly monotone mapping on $H$, and assume that $0 < \mu < \frac{2\eta}{k^2}$. Write $v_n^x = (I - \varepsilon B)T_nT_{r_n}F_{r_n}x$, so that $\|v_n^x - v_n^y\| \le \|x - y\|$ by the nonexpansiveness of each factor. For $x, y \in H$, we have
$$\begin{aligned} \|D_nx - D_ny\|^2 &= \|(v_n^x - v_n^y) - \alpha_n\mu(Av_n^x - Av_n^y)\|^2 \\ &= \|v_n^x - v_n^y\|^2 - 2\alpha_n\mu\langle Av_n^x - Av_n^y, v_n^x - v_n^y \rangle + \alpha_n^2\mu^2\|Av_n^x - Av_n^y\|^2 \\ &\le \|v_n^x - v_n^y\|^2 - 2\alpha_n\mu\eta\|v_n^x - v_n^y\|^2 + \alpha_n\mu^2k^2\|v_n^x - v_n^y\|^2 \\ &= \Bigl[1 - 2\alpha_n\mu\Bigl(\eta - \frac{\mu k^2}{2}\Bigr)\Bigr]\|v_n^x - v_n^y\|^2 \\ &\le \Bigl[1 - \alpha_n\mu\Bigl(\eta - \frac{\mu k^2}{2}\Bigr)\Bigr]^2\|x - y\|^2. \end{aligned}$$
(3.5)
The same computation applied to $I - \alpha_n\mu A$ alone gives
$$\|(I - \alpha_n\mu A)x - (I - \alpha_n\mu A)y\| \le (1 - \alpha_n\tau)\|x - y\|,$$
(3.6)

where $\tau = \mu(\eta - \frac{\mu k^2}{2})$. Hence $D_n$ is a strict contraction and, by the Banach contraction principle, it has a unique fixed point in $H$.

Now, for $p \in \Omega$, from (3.1) and (3.6) we have
$$\begin{aligned} \|x_{n+1} - p\| &= \|(I - \alpha_n\mu A)(I - \varepsilon B)u_n - p\| = \|(I - \alpha_n\mu A)v_n - p\| \\ &\le \|(I - \alpha_n\mu A)v_n - (I - \alpha_n\mu A)p\| + \|(I - \alpha_n\mu A)p - p\| \\ &\le (1 - \alpha_n\tau)\|v_n - p\| + \alpha_n\mu\|Ap\| \\ &\le (1 - \alpha_n\tau)\|x_n - p\| + \alpha_n\tau\,\frac{\mu\|Ap\|}{\tau} \\ &\le \max\Bigl\{\|x_n - p\|, \frac{\mu\|Ap\|}{\tau}\Bigr\}. \end{aligned}$$
By induction, we get
$$\|x_n - p\| \le \max\Bigl\{\|x_1 - p\|, \frac{\mu\|Ap\|}{\tau}\Bigr\}.$$

Therefore $\{x_n\}$ is bounded. Consequently, $\{z_n\}$, $\{w_n\}$, $\{u_n\}$, and $\{v_n\}$ are bounded. □

Lemma 3.3 Suppose that the conditions of Remark 3.1 are satisfied, and $\{x_n\}$ is as defined by (3.1); then
$$\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0.$$
Proof For any $p \in \Omega$, we have
$$\begin{aligned} \|A(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\| &\le \|A(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1} - Ap\| + \|Ap\| \\ &\le k\|(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1} - p\| + \|Ap\| \le k\|x_{n-1} - p\| + \|Ap\|, \end{aligned}$$

which shows that $\{A(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\}$ is bounded.

Similarly, since $P_K(I - \lambda_n\nabla f)p = p$ for $p \in \Theta$, we have
$$\|P_K(I - \lambda_n\nabla f)x_n\| \le \|P_K(I - \lambda_n\nabla f)x_n - P_K(I - \lambda_n\nabla f)p\| + \|p\| \le \|x_n - p\| + \|p\|.$$

Hence $\{P_K(I - \lambda_n\nabla f)x_n\}$ is bounded.

Noting that $T_nT_{r_{n-1}}z_{n-1} - T_{n-1}T_{r_{n-1}}z_{n-1} = T_nw_{n-1} - T_{n-1}w_{n-1}$ and that $P_K(I - \lambda_n\nabla f) = \frac{2 - \lambda_n L}{4}I + \frac{2 + \lambda_n L}{4}T_n$ gives $T_n = \frac{4P_K(I - \lambda_n\nabla f) - (2 - \lambda_n L)I}{2 + \lambda_n L}$, we compute as follows:
$$\begin{aligned} \|T_nw_{n-1} - T_{n-1}w_{n-1}\| &= \biggl\|\frac{4P_K(I - \lambda_n\nabla f) - (2 - \lambda_n L)I}{2 + \lambda_n L}w_{n-1} - \frac{4P_K(I - \lambda_{n-1}\nabla f) - (2 - \lambda_{n-1}L)I}{2 + \lambda_{n-1}L}w_{n-1}\biggr\| \\ &\le \frac{4(2 + \lambda_{n-1}L)\|P_K(I - \lambda_n\nabla f)w_{n-1} - P_K(I - \lambda_{n-1}\nabla f)w_{n-1}\|}{(2 + \lambda_n L)(2 + \lambda_{n-1}L)} \\ &\quad + \frac{4L|\lambda_n - \lambda_{n-1}|\,\|P_K(I - \lambda_{n-1}\nabla f)w_{n-1}\| + 4L|\lambda_n - \lambda_{n-1}|\,\|w_{n-1}\|}{(2 + \lambda_n L)(2 + \lambda_{n-1}L)} \\ &\le 2|\lambda_n - \lambda_{n-1}|\,\|\nabla f(w_{n-1})\| + L|\lambda_n - \lambda_{n-1}|\,\|P_K(I - \lambda_{n-1}\nabla f)w_{n-1}\| + L|\lambda_n - \lambda_{n-1}|\,\|w_{n-1}\| \\ &= |\lambda_n - \lambda_{n-1}|\bigl[2\|\nabla f(w_{n-1})\| + L\|P_K(I - \lambda_{n-1}\nabla f)w_{n-1}\| + L\|w_{n-1}\|\bigr] \le M_0|\lambda_n - \lambda_{n-1}|, \end{aligned}$$
(3.7)
where
$$M_0 = \sup_{n \in \mathbb{N}}\bigl\{2\|\nabla f(w_{n-1})\| + L\|P_K(I - \lambda_{n-1}\nabla f)w_{n-1}\| + L\|w_{n-1}\|\bigr\}.$$
Now, from Lemma 2.4, (3.6), and (3.7), we have
$$\begin{aligned} \|x_{n+1} - x_n\| &= \|(I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}z_n - (I - \alpha_{n-1}\mu A)(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\| \\ &\le \|(I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}z_n - (I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}z_{n-1}\| \\ &\quad + \|(I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}z_{n-1} - (I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_{n-1}}z_{n-1}\| \\ &\quad + \|(I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_{n-1}}z_{n-1} - (I - \alpha_n\mu A)(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\| \\ &\quad + \|(I - \alpha_n\mu A)(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1} - (I - \alpha_{n-1}\mu A)(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\| \\ &\le (1 - \alpha_n\tau)\|z_n - z_{n-1}\| + (1 - \alpha_n\tau)\|T_{r_n}z_{n-1} - T_{r_{n-1}}z_{n-1}\| \\ &\quad + (1 - \alpha_n\tau)\|T_nT_{r_{n-1}}z_{n-1} - T_{n-1}T_{r_{n-1}}z_{n-1}\| + |\alpha_n - \alpha_{n-1}|\,\mu\|A(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\| \\ &\le (1 - \alpha_n\tau)\|z_n - z_{n-1}\| + (1 - \alpha_n\tau)\frac{|r_n - r_{n-1}|}{r_n}\bigl(\|T_{r_n}z_{n-1}\| + \|z_{n-1}\|\bigr) \\ &\quad + (1 - \alpha_n\tau)M_0|\lambda_n - \lambda_{n-1}| + |\alpha_n - \alpha_{n-1}|\,\mu\|A(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\| \\ &\le (1 - \alpha_n\tau)\|z_n - z_{n-1}\| + (1 - \alpha_n\tau)\frac{|r_n - r_{n-1}|}{r_n}M_1 + (1 - \alpha_n\tau)M_0|\lambda_n - \lambda_{n-1}| + |\alpha_n - \alpha_{n-1}|M_2, \end{aligned}$$
(3.8)
where $M_1 > \sup_n\bigl(\|T_{r_n}z_{n-1}\| + \|z_{n-1}\|\bigr)$ and $M_2 > \sup_n\mu\|A(I - \varepsilon B)T_{n-1}T_{r_{n-1}}z_{n-1}\|$. Therefore,
$$\begin{aligned} \|x_{n+1} - x_n\| &\le (1 - \alpha_n\tau)\|z_n - z_{n-1}\| + M_0|\lambda_n - \lambda_{n-1}| + M_1\frac{|r_n - r_{n-1}|}{r_n} + M_2|\alpha_n - \alpha_{n-1}| \\ &\le (1 - \alpha_n\tau)\|z_n - z_{n-1}\| + M\Bigl[|\lambda_n - \lambda_{n-1}| + \frac{|r_n - r_{n-1}|}{r_n} + |\alpha_n - \alpha_{n-1}|\Bigr], \end{aligned}$$
(3.9)

where $M = \max\{M_0, M_1, M_2\}$.

Since $z_n = F_{r_n}x_n$ and $z_{n+1} = F_{r_{n+1}}x_{n+1}$, we have
$$F(z_n, y) + \frac{1}{r_n}\langle y - z_n, z_n - x_n \rangle \ge 0, \quad \forall y \in K,$$
(3.10)
$$F(z_{n+1}, y) + \frac{1}{r_{n+1}}\langle y - z_{n+1}, z_{n+1} - x_{n+1} \rangle \ge 0, \quad \forall y \in K.$$
(3.11)
Substitute $y = z_{n+1}$ in (3.10) and $y = z_n$ in (3.11) to get
$$F(z_n, z_{n+1}) + \frac{1}{r_n}\langle z_{n+1} - z_n, z_n - x_n \rangle \ge 0,$$
(3.12)
$$F(z_{n+1}, z_n) + \frac{1}{r_{n+1}}\langle z_n - z_{n+1}, z_{n+1} - x_{n+1} \rangle \ge 0.$$
(3.13)
Adding (3.12) and (3.13) and using (A2), we obtain
$$\begin{aligned} &\frac{1}{r_n}\langle z_{n+1} - z_n, z_n - x_n \rangle - \frac{1}{r_{n+1}}\langle z_{n+1} - z_n, z_{n+1} - x_{n+1} \rangle \ge 0, \\ &\Bigl\langle z_{n+1} - z_n, (z_n - x_n) - \frac{r_n}{r_{n+1}}(z_{n+1} - x_{n+1}) \Bigr\rangle \ge 0, \\ &\Bigl\langle z_{n+1} - z_n, z_n - z_{n+1} + z_{n+1} - x_n - \frac{r_n}{r_{n+1}}(z_{n+1} - x_{n+1}) \Bigr\rangle \ge 0. \end{aligned}$$
Without loss of generality, let us assume that there exists a real number $c$ such that $r_n > c > 0$ for all $n \in \mathbb{N}$. We now have
$$\begin{aligned} \|z_{n+1} - z_n\|^2 &\le \Bigl\langle z_{n+1} - z_n, z_{n+1} - x_n - \frac{r_n}{r_{n+1}}(z_{n+1} - x_{n+1}) \Bigr\rangle \\ &= \Bigl\langle z_{n+1} - z_n, \Bigl(1 - \frac{r_n}{r_{n+1}}\Bigr)(z_{n+1} - x_{n+1}) + x_{n+1} - x_n \Bigr\rangle \\ &\le \|z_{n+1} - z_n\|\Bigl[\frac{|r_{n+1} - r_n|}{r_{n+1}}\|z_{n+1} - x_{n+1}\| + \|x_{n+1} - x_n\|\Bigr], \end{aligned}$$
which implies that
$$\|z_{n+1} - z_n\| \le \frac{|r_{n+1} - r_n|}{r_{n+1}}\|z_{n+1} - x_{n+1}\| + \|x_{n+1} - x_n\| \le \frac{L}{c}|r_{n+1} - r_n| + \|x_{n+1} - x_n\|,$$
(3.14)

where $L = \sup_{n \in \mathbb{N}}\|z_{n+1} - x_{n+1}\|$.

From (3.9) and (3.14), we have
$$\begin{aligned} \|x_{n+1} - x_n\| &\le (1 - \alpha_n\tau)\Bigl[\frac{L}{c}|r_n - r_{n-1}| + \|x_n - x_{n-1}\|\Bigr] + M\Bigl[|\lambda_n - \lambda_{n-1}| + |\alpha_n - \alpha_{n-1}| + \frac{|r_n - r_{n-1}|}{r_n}\Bigr] \\ &\le (1 - \alpha_n\tau)\|x_n - x_{n-1}\| + \frac{L}{c}|r_n - r_{n-1}| + M\Bigl[|\lambda_n - \lambda_{n-1}| + |\alpha_n - \alpha_{n-1}| + \frac{|r_n - r_{n-1}|}{r_n}\Bigr]. \end{aligned}$$
Using the conditions on $\{r_n\}$, $\{\lambda_n\}$, $\{\alpha_n\}$ and Lemma 2.6, we get
$$\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0.$$
(3.15)
Consequently, from (3.14) and (3.15), we have
$$\lim_{n \to \infty}\|z_{n+1} - z_n\| = 0.$$
(3.16)

 □

Lemma 3.4 Suppose that the conditions of Remark 3.1 are satisfied, and $\{x_n\}$ is as defined by (3.1); then
$$\begin{aligned} &\lim_{n \to \infty}\|(I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - x_n\| = \lim_{n \to \infty}\|z_n - x_n\| = \lim_{n \to \infty}\|w_n - z_n\| \\ &\quad = \lim_{n \to \infty}\|T_{r_n}F_{r_n}x_n - T_nT_{r_n}F_{r_n}x_n\| = \lim_{n \to \infty}\|BT_nT_{r_n}F_{r_n}x_n\| = \lim_{n \to \infty}\|T_nT_{r_n}F_{r_n}x_n - x_n\| \\ &\quad = \lim_{n \to \infty}\|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\| = \lim_{n \to \infty}\|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - x_n\| = 0. \end{aligned}$$
Proof Observe that
$$\|(I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - x_n\| = \|(x_n - x_{n+1}) + (x_{n+1} - (I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n)\| = \|x_n - x_{n+1}\| \to 0 \quad \text{as } n \to \infty.$$
Furthermore, for $p \in \Omega$ and using (B2), we have
$$\|z_n - p\|^2 = \|F_{r_n}x_n - F_{r_n}p\|^2 \le \langle F_{r_n}x_n - F_{r_n}p, x_n - p \rangle = \langle z_n - p, x_n - p \rangle = \frac{1}{2}\bigl[\|z_n - p\|^2 + \|x_n - p\|^2 - \|z_n - x_n\|^2\bigr],$$
which implies that
$$\|z_n - p\|^2 \le \|x_n - p\|^2 - \|z_n - x_n\|^2.$$
(3.17)
From (3.1) and (3.17), we have the following estimate:
$$\|x_{n+1} - p\|^2 = \|(I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}z_n - p\|^2 = \|\bigl((I - \alpha_n\mu A)v_n - (I - \alpha_n\mu A)p\bigr) - \alpha_n\mu Ap\|^2.$$
Hence
$$\begin{aligned} \|x_{n+1} - p\|^2 &= \|(I - \alpha_n\mu A)v_n - (I - \alpha_n\mu A)p\|^2 + \alpha_n^2\|\mu Ap\|^2 - 2\alpha_n\langle (I - \alpha_n\mu A)v_n - (I - \alpha_n\mu A)p, \mu Ap \rangle \\ &\le (1 - \alpha_n\tau)^2\|v_n - p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n\|(I - \alpha_n\mu A)v_n - (I - \alpha_n\mu A)p\|\,\|\mu Ap\| \\ &\le (1 - \alpha_n\tau)^2\|z_n - p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n(1 - \alpha_n\tau)\|v_n - p\|\,\|\mu Ap\| \\ &\le \|z_n - p\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|z_n - p\|\,\|\mu Ap\| \\ &\le \|x_n - p\|^2 - \|z_n - x_n\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|z_n - p\|\,\|\mu Ap\|, \end{aligned}$$
so that
$$\begin{aligned} \|z_n - x_n\|^2 &\le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|z_n - p\|\,\|\mu Ap\| \\ &= \bigl[\|x_n - p\| - \|x_{n+1} - p\|\bigr]\bigl[\|x_n - p\| + \|x_{n+1} - p\|\bigr] + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|z_n - p\|\,\|\mu Ap\| \\ &\le \|x_n - x_{n+1}\|\bigl[\|x_n - p\| + \|x_{n+1} - p\|\bigr] + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|z_n - p\|\,\|\mu Ap\|. \end{aligned}$$
(3.18)
Since $\|x_{n+1} - x_n\| \to 0$ and $\alpha_n \to 0$ as $n \to \infty$, we have
$$\lim_{n \to \infty}\|z_n - x_n\| = 0.$$
(3.19)
Similarly, using (C2), we have
$$\|w_n - p\|^2 = \|T_{r_n}F_{r_n}x_n - p\|^2 \le \langle T_{r_n}F_{r_n}x_n - p, F_{r_n}x_n - p \rangle = \frac{1}{2}\bigl[\|w_n - p\|^2 + \|z_n - p\|^2 - \|w_n - z_n\|^2\bigr];$$
hence,
$$\|w_n - p\|^2 \le \|z_n - p\|^2 - \|w_n - z_n\|^2.$$
From (3.18), we have
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le (1 - \alpha_n\tau)^2\|v_n - p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n\|(I - \alpha_n\mu A)v_n - (I - \alpha_n\mu A)p\|\,\|\mu Ap\| \\ &\le (1 - \alpha_n\tau)^2\|w_n - p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n(1 - \alpha_n\tau)\|v_n - p\|\,\|\mu Ap\| \\ &\le \|w_n - p\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n(1 - \alpha_n\tau)\|w_n - p\|\,\|\mu Ap\| \\ &\le \|z_n - p\|^2 - \|w_n - z_n\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n(1 - \alpha_n\tau)\|w_n - p\|\,\|\mu Ap\| \\ &\le \|x_n - p\|^2 - \|w_n - z_n\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n(1 - \alpha_n\tau)\|w_n - p\|\,\|\mu Ap\|, \end{aligned}$$
so that
$$\|w_n - z_n\|^2 \le \|x_n - x_{n+1}\|\bigl[\|x_n - p\| + \|x_{n+1} - p\|\bigr] + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|w_n - p\|\,\|\mu Ap\|.$$
Since $\|x_{n+1} - x_n\| \to 0$ and $\alpha_n \to 0$, we have
$$\lim_{n \to \infty}\|w_n - z_n\| = 0.$$
(3.20)
Furthermore,
$$\begin{aligned} \|u_n - p\|^2 = \|T_nT_{r_n}F_{r_n}x_n - p\|^2 &= \|T_nT_{r_n}F_{r_n}x_n - p\|\,\|T_nT_{r_n}F_{r_n}x_n - p\| \\ &\le \|T_{r_n}F_{r_n}x_n - p\|\,\|T_nT_{r_n}F_{r_n}x_n - p\| \\ &= \frac{1}{2}\bigl[\|T_{r_n}F_{r_n}x_n - p\|^2 + \|T_nT_{r_n}F_{r_n}x_n - p\|^2 - \|T_{r_n}F_{r_n}x_n - T_nT_{r_n}F_{r_n}x_n\|^2\bigr]; \end{aligned}$$
hence,
$$\|T_nT_{r_n}F_{r_n}x_n - p\|^2 \le \|T_{r_n}F_{r_n}x_n - p\|^2 - \|T_{r_n}F_{r_n}x_n - T_nT_{r_n}F_{r_n}x_n\|^2.$$
(3.21)
From (3.18), we obtain
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le (1 - \alpha_n\tau)^2\|v_n - p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n\|(I - \alpha_n\mu A)v_n - (I - \alpha_n\mu A)p\|\,\|\mu Ap\| \\ &= (1 - \alpha_n\tau)^2\|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|^2 + \alpha_n^2\|\mu Ap\|^2 \\ &\quad + 2\alpha_n\|(I - \alpha_n\mu A)(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - (I - \alpha_n\mu A)p\|\,\|\mu Ap\| \\ &\le \|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n(1 - \alpha_n\tau)\|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\| \\ &\le \|T_nT_{r_n}F_{r_n}x_n - p\|^2 - \varepsilon(2\gamma - \varepsilon)\|BT_nT_{r_n}F_{r_n}x_n\|^2 + \alpha_n\|\mu Ap\|^2 \\ &\quad + 2\alpha_n\|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\| \\ &\le \|T_{r_n}F_{r_n}x_n - p\|^2 - \|T_{r_n}F_{r_n}x_n - T_nT_{r_n}F_{r_n}x_n\|^2 - \varepsilon(2\gamma - \varepsilon)\|BT_nT_{r_n}F_{r_n}x_n\|^2 \\ &\quad + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\| \\ &\le \|x_n - p\|^2 - \|T_{r_n}F_{r_n}x_n - T_nT_{r_n}F_{r_n}x_n\|^2 - \varepsilon(2\gamma - \varepsilon)\|BT_nT_{r_n}F_{r_n}x_n\|^2 \\ &\quad + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\|. \end{aligned}$$
On rearranging, we have
$$\begin{aligned} \|T_{r_n}F_{r_n}x_n - T_nT_{r_n}F_{r_n}x_n\|^2 + \varepsilon(2\gamma - \varepsilon)\|BT_nT_{r_n}F_{r_n}x_n\|^2 &\le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n\|\mu Ap\|^2 \\ &\quad + 2\alpha_n\|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\| \\ &\le \|x_{n+1} - x_n\|\bigl[\|x_n - p\| + \|x_{n+1} - p\|\bigr] + \alpha_n\|\mu Ap\|^2 \\ &\quad + 2\alpha_n\|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\|. \end{aligned}$$
Since $\|x_{n+1} - x_n\| \to 0$ and $\alpha_n \to 0$ as $n \to \infty$, we have
$$\lim_{n \to \infty}\bigl(\|T_{r_n}F_{r_n}x_n - T_nT_{r_n}F_{r_n}x_n\|^2 + \varepsilon(2\gamma - \varepsilon)\|BT_nT_{r_n}F_{r_n}x_n\|^2\bigr) = 0.$$
(3.22)
Since $\varepsilon(2\gamma - \varepsilon) > 0$ and both terms in (3.22) are nonnegative, the sandwich theorem gives
$$\lim_{n \to \infty}\|T_{r_n}F_{r_n}x_n - T_nT_{r_n}F_{r_n}x_n\| = 0,$$
(3.23)
$$\lim_{n \to \infty}\|BT_nT_{r_n}F_{r_n}x_n\| = 0.$$
(3.24)
Using (3.19), (3.20), (3.23), and (3.24), we obtain
$$\|T_nT_{r_n}F_{r_n}x_n - x_n\| \le \|T_nT_{r_n}F_{r_n}x_n - T_{r_n}F_{r_n}x_n\| + \|T_{r_n}F_{r_n}x_n - F_{r_n}x_n\| + \|F_{r_n}x_n - x_n\| \to 0.$$
(3.25)
Furthermore, for $p \in \Omega$ (so that $Bp = 0$ and $p = (I - \varepsilon B)p$),
$$\begin{aligned} \|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|^2 &= \langle T_nT_{r_n}F_{r_n}x_n - \varepsilon BT_nT_{r_n}F_{r_n}x_n - (p - \varepsilon Bp), (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p \rangle \\ &= \frac{1}{2}\bigl[\|T_nT_{r_n}F_{r_n}x_n - \varepsilon BT_nT_{r_n}F_{r_n}x_n - (p - \varepsilon Bp)\|^2 + \|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|^2 \\ &\qquad - \|(T_nT_{r_n}F_{r_n}x_n - \varepsilon BT_nT_{r_n}F_{r_n}x_n - (p - \varepsilon Bp)) - ((I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p)\|^2\bigr] \\ &\le \frac{1}{2}\bigl[\|T_nT_{r_n}F_{r_n}x_n - p\|^2 + \|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|^2 \\ &\qquad - \bigl(\|(T_nT_{r_n}F_{r_n}x_n - p) - ((I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p)\|^2 + \varepsilon^2\|BT_nT_{r_n}F_{r_n}x_n\|^2 \\ &\qquad\quad - 2\varepsilon\langle T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n, BT_nT_{r_n}F_{r_n}x_n \rangle\bigr)\bigr]. \end{aligned}$$
Now,
$$\begin{aligned} \|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|^2 &\le \|T_nT_{r_n}F_{r_n}x_n - p\|^2 - \|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\|^2 - \varepsilon^2\|BT_nT_{r_n}F_{r_n}x_n\|^2 \\ &\quad + 2\varepsilon\|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\|\,\|BT_nT_{r_n}F_{r_n}x_n\| \\ &\le \|x_n - p\|^2 - \|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\|^2 \\ &\quad + 2\varepsilon\|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\|\,\|BT_nT_{r_n}F_{r_n}x_n\| \\ &\le \|x_{n+1} - x_n\|\bigl[\|x_{n+1} - x_n\| + 2\|x_{n+1} - p\|\bigr] + \|x_{n+1} - p\|^2 \\ &\quad - \|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\|^2 \\ &\quad + 2\varepsilon\|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\|\,\|BT_nT_{r_n}F_{r_n}x_n\|, \end{aligned}$$
which implies that
$$\begin{aligned} \|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\|^2 &\le \|x_{n+1} - x_n\|\bigl[\|x_{n+1} - x_n\| + 2\|x_{n+1} - p\|\bigr] + \|x_{n+1} - p\|^2 \\ &\quad - \|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|^2 \\ &\quad + 2\varepsilon\|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\|\,\|BT_nT_{r_n}F_{r_n}x_n\|. \end{aligned}$$
(3.26)
From (3.18), we obtain
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le (1 - \alpha_n\tau)^2\|v_n - p\|^2 + \alpha_n^2\|\mu Ap\|^2 + 2\alpha_n(1 - \alpha_n\tau)\|v_n - p\|\,\|\mu Ap\| \\ &\le \|v_n - p\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|v_n - p\|\,\|\mu Ap\| \\ &= \|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|^2 + \alpha_n\|\mu Ap\|^2 + 2\alpha_n\|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\|. \end{aligned}$$
(3.27)
Using (3.27) in (3.26), we obtain
$$\begin{aligned} \|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\|^2 &\le \|x_{n+1} - x_n\|\bigl[\|x_{n+1} - x_n\| + 2\|x_{n+1} - p\|\bigr] + \alpha_n\|\mu Ap\|^2 \\ &\quad + 2\alpha_n\|(I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n - p\|\,\|\mu Ap\| \\ &\quad + 2\varepsilon\|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\|\,\|BT_nT_{r_n}F_{r_n}x_n\|. \end{aligned}$$
Using the fact that $\|x_{n+1} - x_n\| \to 0$, $\alpha_n \to 0$, and $\|BT_nT_{r_n}F_{r_n}x_n\| \to 0$ as $n \to \infty$, we deduce that
$$\lim_{n \to \infty}\|T_nT_{r_n}F_{r_n}x_n - (I - \varepsilon B)T_nT_{r_n}F_{r_n}x_n\| = 0.$$
(3.28)
From (3.25) and (3.28), we have
$$
\bigl\| (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - x_n \bigr\|
\le \bigl\| (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - T_n T_{r_n} F_{r_n} x_n \bigr\| + \bigl\| T_n T_{r_n} F_{r_n} x_n - x_n \bigr\| \to 0.
$$
(3.29)

 □
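The chain of estimates above leans repeatedly on the real Hilbert space identity $\langle a, b \rangle = \frac{1}{2}(\|a\|^2 + \|b\|^2 - \|a - b\|^2)$ together with the Cauchy-Schwarz inequality. As an informal numerical sanity check (not part of the proof; the dimension and random samples are purely illustrative), both facts can be verified in $\mathbb{R}^d$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 7

for _ in range(100):
    a = rng.standard_normal(d)
    b = rng.standard_normal(d)
    # Identity <a, b> = (||a||^2 + ||b||^2 - ||a - b||^2) / 2, used above to
    # expand squared norms such as ||(I - eps B) T_n T_rn F_rn x_n - p||^2.
    assert abs(a @ b - 0.5 * (a @ a + b @ b - (a - b) @ (a - b))) < 1e-9
    # Cauchy-Schwarz, used to pass from inner products to products of norms.
    assert abs(a @ b) <= np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
```

In the proof the identity is applied in an arbitrary real Hilbert space; the finite-dimensional check only illustrates the algebra.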

Lemma 3.5 Suppose that the conditions of Remark 3.1 are satisfied, and $\{x_n\}$ is as defined by (3.1). Let $q \in \Omega$ be the unique solution of the variational inequality $\langle Aq, z - q \rangle \ge 0$, $\forall z \in \Omega$. Then
$$
\limsup_{n \to \infty} \langle Aq, q - x_n \rangle \le 0,
$$

where $q = P_{\Omega}(I - \mu A)q$.
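The identification $q = P_{\Omega}(I - \mu A)q$ is the standard projection characterization of a variational inequality solution. The following is a small numerical illustration of that equivalence under simplifying assumptions that are not taken from the paper ($\Omega$ the closed unit ball of $\mathbb{R}^d$, $A$ an affine strongly monotone map, and Picard iteration to locate the fixed point; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d, mu = 4, 0.1

# Strongly monotone affine operator A(x) = Mat @ x + b, with Mat symmetric
# positive definite, and Omega = closed unit ball in R^d.
M = rng.standard_normal((d, d))
Mat = np.eye(d) + 0.1 * M.T @ M
b = rng.standard_normal(d)
A = lambda x: Mat @ x + b

proj = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto Omega

# For small mu the map x -> proj(x - mu*A(x)) is a contraction, so Picard
# iteration converges to the unique fixed point q = P_Omega(I - mu A)q.
q = np.zeros(d)
for _ in range(300):
    q = proj(q - mu * A(q))

# The fixed point solves the VI: <A(q), z - q> >= 0 for all z in Omega.
for _ in range(200):
    z = proj(2.0 * rng.standard_normal(d))
    assert A(q) @ (z - q) >= -1e-6
```

This is only a sketch of the characterization, not of the hybrid scheme (3.1) itself, which additionally handles the equilibrium, fixed-point, and null-space constraints.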

Proof To show this inequality, we choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that
$$
\limsup_{n \to \infty} \langle Aq, q - x_n \rangle = \lim_{i \to \infty} \langle Aq, q - x_{n_i} \rangle;
$$
correspondingly, there exists a subsequence $\{z_{n_i}\}$ of $\{z_n\}$. Since $\{z_{n_i}\}$ is bounded, there exist a subsequence $\{z_{n_{i_j}}\}$ of $\{z_{n_i}\}$ and $z \in H$ such that $z_{n_{i_j}} \rightharpoonup z$. Without loss of generality, we may assume that $z_{n_i} \rightharpoonup z$. Since $\{z_{n_i}\} \subset K$ and $K$ is closed and convex, $K$ is weakly closed; so we have $z \in K$. Let us show that $z \in \Omega = F(T) \cap N(B) \cap \Theta \cap EP(F)$. First, we show that $z \in EP(F)$. Since $z_n = F_{r_n} x_n$, we have
$$
F(z_n, y) + \frac{1}{r_n} \langle y - z_n, z_n - x_n \rangle \ge 0, \quad \forall y \in K.
$$
It follows from (A2) that
$$
\frac{1}{r_n} \langle y - z_n, z_n - x_n \rangle \ge F(y, z_n);
$$
hence,
$$
\Bigl\langle y - z_{n_i}, \frac{z_{n_i} - x_{n_i}}{r_{n_i}} \Bigr\rangle \ge F(y, z_{n_i}).
$$
Since $\frac{z_{n_i} - x_{n_i}}{r_{n_i}} \to 0$ and $z_{n_i} \rightharpoonup z$ as $i \to \infty$, it follows that $F(y, z) \le 0$, $\forall y \in K$. For $t \in (0, 1]$ and $m \in K$, let $y_t = tm + (1 - t)z$. Since $m \in K$ and $z \in K$, we have $y_t \in K$, so that $F(y_t, z) \le 0$. We have from (A1) and (A4)
$$
0 = F(y_t, y_t) = F\bigl(y_t, tm + (1 - t)z\bigr) \le t F(y_t, m) + (1 - t) F(y_t, z) \le t F(y_t, m).
$$

That is, $F(y_t, m) \ge 0$. It follows from (A3) that $F(z, m) \ge 0$, $\forall m \in K$. Since $m$ was taken arbitrarily, it follows that $z \in EP(F)$.
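For the classical special case $F(x, y) = \langle Ax, y - x \rangle$ with $A$ monotone and $K = H$, the resolvent $z = F_r x$ used above reduces to $z = (I + rA)^{-1}x$, and the defining inequality $F(z, y) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0$ holds for every $y$ (in fact with equality when $K = H$). A minimal numerical sketch of this special case; the matrix, seed, and parameter $r$ are illustrative assumptions, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 5, 0.7

# Monotone linear operator: A = M^T M is positive semidefinite.
M = rng.standard_normal((d, d))
A = M.T @ M

x = rng.standard_normal(d)
# For F(x, y) = <Ax, y - x> and K = H, the resolvent is z = (I + rA)^{-1} x,
# i.e. z solves z + r*A@z = x.
z = np.linalg.solve(np.eye(d) + r * A, x)

# Check the defining inequality F(z, y) + (1/r)<y - z, z - x> >= 0 for
# random y; with K = H it holds with equality, so val is ~0.
for _ in range(200):
    y = rng.standard_normal(d)
    val = (A @ z) @ (y - z) + (1.0 / r) * (y - z) @ (z - x)
    assert val >= -1e-9
```

For a general bifunction and a proper subset $K$, computing $F_r x$ requires solving the auxiliary equilibrium problem itself; the linear unconstrained case above is only the simplest instance.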

We show that $z \in F(T)$. Recall that $w_{n_i} = T_{r_{n_i}} z_{n_i}$, so that
$$
\langle y - w_{n_i}, T w_{n_i} \rangle - \frac{1}{r_{n_i}} \bigl\langle y - w_{n_i}, (1 + r_{n_i}) w_{n_i} - x_{n_i} \bigr\rangle \le 0, \quad \forall y \in K.
$$
(3.30)
Put $z_t = tv + (1 - t)z$ for $t \in (0, 1)$ and $v \in K$. Consequently, we get $z_t \in K$. From (3.30) and the pseudocontractivity of $T$, we have
$$
\begin{aligned}
&\langle z_t - w_{n_i}, T w_{n_i} \rangle - \frac{1}{r_{n_i}} \bigl\langle z_t - w_{n_i}, (1 + r_{n_i}) w_{n_i} - x_{n_i} \bigr\rangle \le 0, \\
&\langle z_t - w_{n_i}, T w_{n_i} \rangle - \langle w_{n_i} - z_t, T z_t \rangle + \langle w_{n_i} - z_t, T z_t \rangle - \frac{1}{r_{n_i}} \bigl\langle z_t - w_{n_i}, (1 + r_{n_i}) w_{n_i} - x_{n_i} \bigr\rangle \le 0, \\
\langle w_{n_i} - z_t, T z_t \rangle
&\ge \langle z_t - w_{n_i}, T w_{n_i} \rangle + \langle w_{n_i} - z_t, T z_t \rangle - \frac{1}{r_{n_i}} \bigl\langle z_t - w_{n_i}, w_{n_i} + r_{n_i} w_{n_i} - x_{n_i} \bigr\rangle \\
&= \langle w_{n_i} - z_t, T z_t - T w_{n_i} \rangle - \frac{1}{r_{n_i}} \langle z_t - w_{n_i}, w_{n_i} - x_{n_i} \rangle - \langle z_t - w_{n_i}, w_{n_i} \rangle \\
&= - \langle w_{n_i} - z_t, T w_{n_i} - T z_t \rangle - \frac{1}{r_{n_i}} \langle z_t - w_{n_i}, w_{n_i} - x_{n_i} \rangle - \langle z_t - w_{n_i}, w_{n_i} \rangle \\
&\ge - \| w_{n_i} - z_t \|^2 - \frac{1}{r_{n_i}} \langle z_t - w_{n_i}, w_{n_i} - x_{n_i} \rangle - \bigl[ \langle z_t - w_{n_i}, w_{n_i} - z_t \rangle + \langle z_t - w_{n_i}, z_t \rangle \bigr] \\
&= - \| w_{n_i} - z_t \|^2 - \frac{1}{r_{n_i}} \langle z_t - w_{n_i}, w_{n_i} - x_{n_i} \rangle + \| w_{n_i} - z_t \|^2 - \langle z_t - w_{n_i}, z_t \rangle \\
&= \langle w_{n_i} - z_t, z_t \rangle - \frac{1}{r_{n_i}} \langle z_t - w_{n_i}, w_{n_i} - x_{n_i} \rangle \\
&\ge \langle w_{n_i} - z_t, z_t \rangle - \frac{1}{|r_{n_i}|} \| z_t - w_{n_i} \| \, \| w_{n_i} - x_{n_i} \|.
\end{aligned}
$$
(3.31)
Since $\| w_{n_i} - x_{n_i} \| \le \| w_{n_i} - z_{n_i} \| + \| z_{n_i} - x_{n_i} \| \to 0$ as $i \to \infty$, and $w_{n_i} \rightharpoonup z$, letting $i \to \infty$ in (3.31) gives
$$
\begin{aligned}
\langle z - z_t, T z_t \rangle &\ge \langle z - z_t, z_t \rangle, \\
t \langle z - v, T z_t \rangle &\ge t \langle z - v, z_t \rangle, \\
\langle z - v, T z_t \rangle &\ge \langle z - v, z_t \rangle, \quad \forall v \in K,
\end{aligned}
$$
(3.32)
taking the limit as $t \to 0$ and using the fact that $T$ is continuous, (3.32) becomes
$$
\langle z - v, T z \rangle \ge \langle z - v, z \rangle, \quad \forall v \in K,
$$
that is,
$$
\langle z - v, z \rangle - \langle z - v, T z \rangle \le 0, \qquad \langle z - v, z - T z \rangle \le 0.
$$
Put $v = Tz$ and we have
$$
\langle z - T z, z - T z \rangle \le 0, \qquad \| z - T z \|^2 \le 0,
$$

which implies that $z \in F(T)$.

We now show that $z \in \Theta$. Observe that for $\{\lambda_{n_i}\} \subset \{\lambda_n\}$,
$$
\begin{aligned}
\bigl\| P_K(I - \lambda_{n_i} f) T_{r_{n_i}} F_{r_{n_i}} x_{n_i} - T_{r_{n_i}} F_{r_{n_i}} x_{n_i} \bigr\|
&= \bigl\| P_K(I - \lambda_{n_i} f) w_{n_i} - w_{n_i} \bigr\| \\
&= \bigl\| s_{n_i} w_{n_i} + (1 - s_{n_i}) T_{n_i} w_{n_i} - w_{n_i} \bigr\| \\
&= \bigl\| (1 - s_{n_i}) T_{n_i} w_{n_i} - (1 - s_{n_i}) w_{n_i} \bigr\| \\
&= |1 - s_{n_i}| \, \bigl\| T_{n_i} w_{n_i} - w_{n_i} \bigr\| \\
&\le \bigl\| T_{n_i} w_{n_i} - w_{n_i} \bigr\| \to 0
\end{aligned}
$$

by (3.23). Let $\lambda_{n_i} \to \lambda$ as $i \to \infty$. Since $w_{n_i} \rightharpoonup z$ and $\| P_K(I - \lambda_{n_i} f) w_{n_i} - w_{n_i} \| \to 0$, by the nonexpansivity of $P_K(I - \lambda f)$ and Lemma 2.8, $P_K(I - \lambda f) z = z$, where $\lambda \in (0, \frac{2}{L})$; hence, $z \in \Theta$.
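The nonexpansivity of $P_K(I - \lambda f)$ for $\lambda \in (0, \frac{2}{L})$ invoked above can be checked numerically when $f$ is the $L$-Lipschitz gradient of a convex quadratic and $K$ is a box, so that $P_K$ is componentwise clipping. The quadratic, box, and seed below are illustrative assumptions, not objects from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6

# f is the gradient of the convex quadratic g(x) = 0.5 * x^T Q x, so f is
# Lipschitz with constant L = largest eigenvalue of Q.
M = rng.standard_normal((d, d))
Q = M.T @ M
L = np.linalg.eigvalsh(Q).max()
f = lambda x: Q @ x

lam = 1.9 / L  # any lambda in (0, 2/L)

# K = box [-1, 1]^d; its metric projection is componentwise clipping.
proj_K = lambda x: np.clip(x, -1.0, 1.0)
G = lambda x: proj_K(x - lam * f(x))  # the gradient projection operator

# P_K(I - lam f) is nonexpansive: ||G(x) - G(y)|| <= ||x - y||.
for _ in range(200):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    assert np.linalg.norm(G(x) - G(y)) <= np.linalg.norm(x - y) + 1e-12
```

Here the eigenvalues of $I - \lambda Q$ lie in $[-0.9, 1]$, so $I - \lambda f$ is nonexpansive, and composing with the nonexpansive projection preserves the property; this is the same mechanism behind the averaged decomposition $P_K(I - \lambda_n f) = s_n I + (1 - s_n) T_n$ used in the display above.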

Next we show that $z \in N(B) = \{ x \in H : Bx = 0 \}$, the null space of $B$. We make the following estimate:
$$
\begin{aligned}
&\bigl\| (I - \alpha_n \mu A)(I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - p \bigr\|^2 \\
&\quad = \bigl\| (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - \alpha_n \mu A (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - p \bigr\|^2 \\
&\quad = \bigl\langle (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - \alpha_n \mu A (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - p, \; (I - \alpha_n \mu A)(I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - p \bigr\rangle \\
&\quad = \tfrac{1}{2} \Bigl[ \bigl\| (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - \alpha_n \mu A (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - p \bigr\|^2 + \bigl\| (I - \alpha_n \mu A)(I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - p \bigr\|^2 \\
&\qquad\quad - \bigl\| \bigl( (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - \alpha_n \mu A (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - p \bigr) - \bigl( (I - \alpha_n \mu A)(I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - p \bigr) \bigr\|^2 \Bigr] \\
&\quad \le \tfrac{1}{2} \Bigl[ \bigl\| (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - p \bigr\|^2 + \bigl\| (I - \alpha_n \mu A)(I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - p \bigr\|^2 \\
&\qquad\quad - \Bigl( \bigl\| (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - (I - \alpha_n \mu A)(I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n \bigr\|^2 + \alpha_n^2 \bigl\| \mu A (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n \bigr\|^2 \\
&\qquad\qquad - 2 \alpha_n \bigl\langle (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n - (I - \alpha_n \mu A)(I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n, \; \mu A (I - \varepsilon B) T_n T_{r_n} F_{r_n} x_n \bigr\rangle \Bigr) \Bigr]
\end{aligned}
$$