Open Access

Relaxed and hybrid viscosity methods for general system of variational inequalities with split feasibility problem constraint

Fixed Point Theory and Applications 2013, 2013:43

https://doi.org/10.1186/1687-1812-2013-43

Received: 21 November 2012

Accepted: 31 January 2013

Published: 27 February 2013

Abstract

In this paper, we propose and analyze some relaxed and hybrid viscosity iterative algorithms for finding a common element of the solution set Ξ of a general system of variational inequalities, the solution set Γ of a split feasibility problem and the fixed point set Fix(S) of a strictly pseudocontractive mapping S in the setting of infinite-dimensional Hilbert spaces. We prove that the sequences generated by the proposed algorithms converge strongly to an element of Fix(S) ∩ Ξ ∩ Γ under mild conditions.

AMS Subject Classification: 49J40, 47H05, 47H19.

Keywords

relaxed and hybrid viscosity methods; general system of variational inequalities; split feasibility problem; fixed point problem; strong convergence

1 Introduction

Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖. Let C be a nonempty closed convex subset of H. The (nearest point or metric) projection of H onto C is denoted by P_C. Let S : C → C be a self-mapping on C. We denote by Fix(S) the set of fixed points of S and by R the set of all real numbers. For a given nonlinear operator A : C → H, we consider the variational inequality problem (VIP) of finding x* ∈ C such that
⟨Ax*, x − x*⟩ ≥ 0, ∀x ∈ C.
(1.1)

The solution set of VIP (1.1) is denoted by VI(C, A). Variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free, moving and equilibrium problems. It is now well known that variational inequalities are equivalent to fixed point problems; this observation can be traced back to Lions and Stampacchia [1]. The alternative fixed-point formulation has been used to suggest and analyze a projection iterative method for solving variational inequalities under the conditions that the involved operator is strongly monotone and Lipschitz continuous. Related to the variational inequalities is the problem of finding fixed points of nonexpansive mappings or strict pseudo-contractions, which is of current interest in functional analysis. Several authors have considered a unified approach to solving variational inequality problems and fixed point problems; see, for example, [2–11] and the references therein.

For finding an element of Fix(S) ∩ VI(C, A) when C is closed and convex, S is nonexpansive and A is α-inverse strongly monotone, Takahashi and Toyoda [12] introduced the following Mann-type iterative algorithm:
x_{n+1} = α_n x_n + (1 − α_n) S P_C(x_n − λ_n A x_n), ∀n ≥ 0,
(1.2)

where P_C is the metric projection of H onto C, x_0 = x ∈ C, {α_n} is a sequence in (0, 1) and {λ_n} is a sequence in (0, 2α). They showed that if Fix(S) ∩ VI(C, A) ≠ ∅, then the sequence {x_n} converges weakly to some z ∈ Fix(S) ∩ VI(C, A). Nadezhkina and Takahashi [13] and Zeng and Yao [9] proposed extragradient methods, motivated by Korpelevich [14], for finding a common element of the fixed point set of a nonexpansive mapping and the solution set of a variational inequality problem. Further, these iterative methods were extended in [15] to develop a new iterative method for finding elements of Fix(S) ∩ VI(C, A).
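To see the mechanics of scheme (1.2) numerically, the following sketch runs it on a toy instance of our own choosing (not taken from [12]): C is the closed unit ball of R², A(x) = x − b is 1-inverse strongly monotone (it is the gradient of the convex quadratic ½‖x − b‖²), and S is the identity, so Fix(S) ∩ VI(C, A) reduces to the single point P_C(b).

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection P_C onto the closed ball of radius r centered at 0."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else x * (r / nrm)

def mann_vip(x0, A, S, proj, alpha=0.5, lam=1.0, iters=200):
    """Scheme (1.2): x_{n+1} = alpha*x_n + (1 - alpha)*S(P_C(x_n - lam*A(x_n)))."""
    x = x0
    for _ in range(iters):
        x = alpha * x + (1 - alpha) * S(proj(x - lam * A(x)))
    return x

b = np.array([3.0, 4.0])
A = lambda x: x - b   # gradient of (1/2)||x - b||^2, hence 1-inverse strongly monotone
S = lambda x: x       # identity: nonexpansive, Fix(S) is the whole space
x = mann_vip(np.zeros(2), A, S, proj_ball)
# x approaches P_C(b) = b/||b|| = (0.6, 0.8), the unique point of VI(C, A)
```

With b = (3, 4) outside the ball, the iterates tend to P_C(b) = (0.6, 0.8); in finite dimensions the weak convergence guaranteed by [12] is of course strong convergence.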

Let B1, B2 : C → H be two mappings. Recently, Ceng, Wang and Yao [5] introduced and considered the problem of finding (x*, y*) ∈ C × C such that
⟨μ1 B1 y* + x* − y*, x − x*⟩ ≥ 0, ∀x ∈ C,
⟨μ2 B2 x* + y* − x*, x − y*⟩ ≥ 0, ∀x ∈ C,
(1.3)

which is called a general system of variational inequalities (GSVI), where μ1 > 0 and μ2 > 0 are two constants. The set of solutions of problem (1.3) is denoted by GSVI(C, B1, B2). In particular, if B1 = B2, then problem (1.3) reduces to the new system of variational inequalities (NSVI) introduced and studied by Verma [16]. Further, if additionally x* = y*, then the NSVI reduces to VIP (1.1).

Recently, Ceng, Wang and Yao [5] transformed problem (1.3) into a fixed point problem in the following way.

Lemma 1.1 (see [5])

For given x̄, ȳ ∈ C, (x̄, ȳ) is a solution of problem (1.3) if and only if x̄ is a fixed point of the mapping G : C → C defined by
G(x) = P_C[P_C(x − μ2 B2 x) − μ1 B1 P_C(x − μ2 B2 x)], ∀x ∈ C,

where ȳ = P_C(x̄ − μ2 B2 x̄).

In particular, if the mapping B_i : C → H is β_i-inverse strongly monotone for i = 1, 2, then the mapping G is nonexpansive provided μ_i ∈ (0, 2β_i) for i = 1, 2.
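The fixed-point reformulation of Lemma 1.1 can be iterated directly whenever G happens to be a contraction. The sketch below uses illustrative data of our own: C is the closed unit ball of R², B1 = B2 = I (the identity is 1-inverse strongly monotone), and μ1 = μ2 = 0.5 ∈ (0, 2).

```python
import numpy as np

def proj_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else x * (r / nrm)

def G(x, B1, B2, mu1, mu2, proj):
    """Mapping G of Lemma 1.1: P_C[P_C(x - mu2*B2(x)) - mu1*B1(P_C(x - mu2*B2(x)))]."""
    y = proj(x - mu2 * B2(x))
    return proj(y - mu1 * B1(y))

B = lambda x: x           # B1 = B2 = I is 1-inverse strongly monotone, so mu_i in (0, 2)
x = np.array([0.9, -0.4])
for _ in range(100):
    x = G(x, B, B, 0.5, 0.5, proj_ball)
# the iterates shrink to 0, the unique fixed point of G for this toy data,
# so by Lemma 1.1 the pair (0, 0) solves the corresponding GSVI (1.3)
```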

Utilizing Lemma 1.1, they introduced and studied a relaxed extragradient method for solving GSVI (1.3). Throughout this paper, the set of fixed points of the mapping G is denoted by Ξ. Based on the extragradient method and the viscosity approximation method, Yao et al. [8] proposed and analyzed a relaxed extragradient method for finding a common solution of GSVI (1.3) and a fixed point problem of a strictly pseudo-contractive mapping S : C → C.

Theorem YLK (see [[8], Theorem 3.2])

Let C be a nonempty bounded closed convex subset of a real Hilbert space H. Let the mapping B_i : C → H be β_i-inverse strongly monotone for i = 1, 2. Let S : C → C be a k-strictly pseudocontractive mapping such that Fix(S) ∩ Ξ ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For x_0 ∈ C given arbitrarily, let the sequences {x_n}, {y_n} and {z_n} be generated iteratively by
z_n = P_C(x_n − μ2 B2 x_n),
y_n = α_n Q x_n + (1 − α_n) P_C(z_n − μ1 B1 z_n),
x_{n+1} = β_n x_n + γ_n P_C(z_n − μ1 B1 z_n) + δ_n S y_n, ∀n ≥ 0,
(1.4)
where μ_i ∈ (0, 2β_i) for i = 1, 2, and {α_n}, {β_n}, {γ_n}, {δ_n} are four sequences in [0, 1] such that
(i) β_n + γ_n + δ_n = 1 and (γ_n + δ_n)k ≤ γ_n < (1 − 2ρ)δ_n for all n ≥ 0;
(ii) lim_{n→∞} α_n = 0 and Σ_{n=0}^∞ α_n = ∞;
(iii) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1 and liminf_{n→∞} δ_n > 0;
(iv) lim_{n→∞} (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)) = 0.

Then the sequence {x_n} generated by (1.4) converges strongly to x̄ = P_{Fix(S)∩Ξ} Q x̄ and (x̄, ȳ) is a solution of GSVI (1.3), where ȳ = P_C(x̄ − μ2 B2 x̄).

Subsequently, Ceng, Guu and Yao [17] further presented and analyzed an iterative scheme for finding a common element of the solution set of VIP (1.1), the solution set of GSVI (1.3) and the fixed point set of a strictly pseudo-contractive mapping S : C → C.

Theorem CGY (see [[17], Theorem 3.1])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let A : C → H be α-inverse strongly monotone and B_i : C → H be β_i-inverse strongly monotone for i = 1, 2. Let S : C → C be a k-strictly pseudocontractive mapping such that Fix(S) ∩ Ξ ∩ VI(C, A) ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For x_0 ∈ C given arbitrarily, let the sequences {x_n}, {y_n} and {z_n} be generated iteratively by
z_n = P_C(x_n − λ_n A x_n),
y_n = α_n Q x_n + (1 − α_n) P_C[P_C(z_n − μ2 B2 z_n) − μ1 B1 P_C(z_n − μ2 B2 z_n)],
x_{n+1} = β_n x_n + γ_n y_n + δ_n S y_n, ∀n ≥ 0,
(1.5)
where μ_i ∈ (0, 2β_i) for i = 1, 2, {λ_n} ⊂ (0, 2α] and {α_n}, {β_n}, {γ_n}, {δ_n} ⊂ [0, 1] such that
(i) β_n + γ_n + δ_n = 1 and (γ_n + δ_n)k ≤ γ_n for all n ≥ 0;
(ii) lim_{n→∞} α_n = 0 and Σ_{n=0}^∞ α_n = ∞;
(iii) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1 and liminf_{n→∞} δ_n > 0;
(iv) lim_{n→∞} (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)) = 0;
(v) 0 < liminf_{n→∞} λ_n ≤ limsup_{n→∞} λ_n < 2α and lim_{n→∞} |λ_{n+1} − λ_n| = 0.

Then the sequence {x_n} generated by (1.5) converges strongly to x̄ = P_{Fix(S)∩Ξ∩VI(C,A)} Q x̄ and (x̄, ȳ) is a solution of GSVI (1.3), where ȳ = P_C(x̄ − μ2 B2 x̄).

On the other hand, let C and Q be nonempty closed convex subsets of infinite-dimensional real Hilbert spaces H1 and H2, respectively. The split feasibility problem (SFP) is to find a point x* with the property
x* ∈ C and A x* ∈ Q,
(1.6)

where A ∈ B(H1, H2) and B(H1, H2) denotes the family of all bounded linear operators from H1 to H2.

In 1994, the SFP was first introduced by Censor and Elfving [18], in finite-dimensional Hilbert spaces, for modeling inverse problems which arise from phase retrieval and medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, e.g., [19] and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy; see, e.g., [20–22] and the references therein. In the recent past, a wide variety of iterative methods have been used in signal processing and image reconstruction and for solving the SFP; see, e.g., [19–29] and the references therein. A special case of the SFP is the following convex constrained linear inverse problem [30] of finding an element x* such that
x* ∈ C and A x* = b.
(1.7)

It has been extensively investigated in the literature using the projected Landweber iterative method [31]. Comparatively, the SFP has received much less attention so far, owing to the complexity resulting from the set Q. Therefore, whether various versions of the projected Landweber iterative method [31] can be extended to solve the SFP remains an interesting open topic. For example, it is not clear whether the dual approach to (1.7) of [32] can be extended to the SFP. The original algorithm given in [18] involves the computation of the inverse A^{−1} (assuming it exists) and thus has not become popular. A seemingly more popular algorithm that solves the SFP is the CQ algorithm of Byrne [19, 24], which is a gradient-projection method (GPM) in convex minimization; it is also a special case of the proximal forward-backward splitting method [33]. The CQ algorithm involves only the computation of the projections P_C and P_Q onto the sets C and Q, respectively, and is therefore implementable when P_C and P_Q have closed-form expressions, for example, when C and Q are closed balls or half-spaces. However, it remains a challenge how to implement the CQ algorithm in the case where the projections P_C and/or P_Q fail to have closed-form expressions, even though the (weak) convergence of the algorithm can still be proved theoretically.
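When both projections do have closed forms, the CQ algorithm is straightforward to implement. The following sketch solves a toy SFP of our own construction, with C and Q boxes in R² and a step size γ ∈ (0, 2/‖A‖²).

```python
import numpy as np

def cq_algorithm(x0, A, proj_C, proj_Q, gamma, iters=500):
    """Byrne's CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n)."""
    x = x0
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)   # C = [0, 1]^2 (closed-form projection)
proj_Q = lambda y: np.clip(y, 0.5, 1.5)   # Q = [0.5, 1.5]^2
gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size in (0, 2/||A||^2)
x = cq_algorithm(np.zeros(2), A, proj_C, proj_Q, gamma)
# on exit x lies in C and A @ x lies in Q, i.e., x solves this toy SFP
```

The iteration is exactly a gradient-projection step for the objective ½‖Ax − P_Q Ax‖² introduced below as (1.8), which is why the step-size restriction mirrors the Lipschitz constant ‖A‖² of its gradient.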

Very recently, Xu [23] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann’s algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He also established the strong convergence result, which shows that the minimum-norm solution can be obtained.

Furthermore, Korpelevich [14] introduced the so-called extragradient method for finding a solution of a saddle point problem. She proved that the sequences generated by the proposed iterative algorithm converge to a solution of the saddle point problem.

Throughout this paper, we assume that the SFP is consistent, that is, the solution set Γ of the SFP is nonempty. Let f : H1 → R be a continuously differentiable function. The minimization problem
min_{x∈C} f(x) := ½ ‖Ax − P_Q Ax‖²
(1.8)
is ill-posed. Therefore, Xu [23] considered the following Tikhonov regularization problem:
min_{x∈C} f_α(x) := ½ ‖Ax − P_Q Ax‖² + ½ α ‖x‖²,
(1.9)

where α > 0 is the regularization parameter. The regularized minimization problem (1.9) has a unique solution, which is denoted by x_α. The following results are easy to prove.
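In the special unconstrained case C = H1 and Q = {b}, problem (1.9) is classical Tikhonov-regularized least squares with the closed-form minimizer x_α = (AᵀA + αI)⁻¹Aᵀb, and one can watch x_α approach the minimum-norm solution as α → 0, in the spirit of Proposition 1.2(iii) below. The data here are our own toy example.

```python
import numpy as np

def tikhonov_solution(A, b, alpha):
    """Closed-form minimizer of (1/2)||Ax - b||^2 + (alpha/2)||x||^2 (C = H1, Q = {b})."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Underdetermined consistent system: infinitely many solutions, so the
# unregularized problem (1.8) is ill-posed.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x_min_norm = np.linalg.pinv(A) @ b        # minimum-norm solution (1, 1)
x_alpha = tikhonov_solution(A, b, 1e-8)
# x_alpha lies within about 1e-8 of the minimum-norm solution
```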

Proposition 1.1 (see [[34], Proposition 3.1])

Given x* ∈ H1, the following statements are equivalent:
(i) x* solves the SFP;
(ii) x* solves the fixed point equation
P_C(I − λ∇f)x* = x*,
where λ > 0, ∇f = A*(I − P_Q)A and A* is the adjoint of A;
(iii) x* solves the variational inequality problem (VIP) of finding x* ∈ C such that
⟨∇f(x*), x − x*⟩ ≥ 0, ∀x ∈ C.
(1.10)
It is clear from Proposition 1.1 that
Γ = Fix(P_C(I − λ∇f)) = VI(C, ∇f)

for all λ > 0, where Fix(P_C(I − λ∇f)) and VI(C, ∇f) denote the set of fixed points of P_C(I − λ∇f) and the solution set of VIP (1.10), respectively.

Proposition 1.2 (see [34])

The following statements hold:
(i) the gradient
∇f_α = ∇f + αI = A*(I − P_Q)A + αI
is (α + ‖A‖²)-Lipschitz continuous and α-strongly monotone;
(ii) the mapping P_C(I − λ∇f_α) is a contraction with coefficient
√(1 − λ(2α − λ(‖A‖² + α)²)) ≤ √(1 − αλ) ≤ 1 − ½ αλ,
where 0 < λ ≤ α/(‖A‖² + α)²;
(iii) if the SFP is consistent, then the strong limit lim_{α→0} x_α exists and is the minimum-norm solution of the SFP.

Very recently, by combining the regularization method and the extragradient method due to Nadezhkina and Takahashi [13], Ceng, Ansari and Yao [34] proposed an extragradient algorithm with regularization and proved that the sequences generated by the proposed algorithm converge weakly to an element of Fix(S) ∩ Γ, where S : C → C is a nonexpansive mapping.

Theorem CAY (see [[34], Theorem 3.1])

Let S : C → C be a nonexpansive mapping such that Fix(S) ∩ Γ ≠ ∅. Let {x_n} and {y_n} be the sequences in C generated by the following extragradient algorithm:
x_0 = x ∈ C chosen arbitrarily,
y_n = P_C(x_n − λ_n ∇f_{α_n}(x_n)),
x_{n+1} = β_n x_n + (1 − β_n) S P_C(x_n − λ_n ∇f_{α_n}(y_n)), ∀n ≥ 0,
(1.11)

where Σ_{n=0}^∞ α_n < ∞, {λ_n} ⊂ [a, b] for some a, b ∈ (0, 1/‖A‖²) and {β_n} ⊂ [c, d] for some c, d ∈ (0, 1). Then both the sequences {x_n} and {y_n} converge weakly to an element x̂ ∈ Fix(S) ∩ Γ.

Motivated and inspired by the research going on in this area, we propose and analyze some relaxed and hybrid viscosity iterative algorithms for finding a common element of the solution set Ξ of GSVI (1.3), the solution set Γ of SFP (1.6) and the fixed point set Fix(S) of a strictly pseudocontractive mapping S : C → C. These iterative algorithms are based on the regularization method, the viscosity approximation method, the relaxed method in [8] and the hybrid method in [10]. Furthermore, it is proved that the sequences generated by the proposed algorithms converge strongly to an element of Fix(S) ∩ Ξ ∩ Γ under mild conditions.

Observe that both [[23], Theorem 5.7] and [[34], Theorem 3.1] are weak convergence results for solving the SFP, and that our problem of finding an element of Fix(S) ∩ Ξ ∩ Γ is more general than the corresponding problems in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively; hence our strong convergence results are of independent interest. It is worth emphasizing that our relaxed and hybrid viscosity iterative algorithms involve a ρ-contractive self-mapping Q, a k-strictly pseudo-contractive self-mapping S and several parameter sequences; they are therefore more flexible and more general than the corresponding schemes in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively. Furthermore, relaxed extragradient iterative scheme (1.4) and hybrid extragradient iterative scheme (1.5) are extended to develop our relaxed viscosity iterative algorithms and hybrid viscosity iterative algorithms, respectively. In our strong convergence results, the relaxed and hybrid viscosity iterative algorithms drop the requirement of boundedness for the domain on which the various mappings are defined; see, e.g., Yao et al. [[8], Theorem 3.2]. Therefore, our results modify, supplement, extend and improve [[23], Theorem 5.7], [[34], Theorem 3.1], [[17], Theorem 3.1] and [[8], Theorem 3.2].

2 Preliminaries

Let H be a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively. Let K be a nonempty, closed and convex subset of H. We now present some known results and definitions which will be used in the sequel.

The metric (or nearest point) projection from H onto K is the mapping P_K : H → K which assigns to each point x ∈ H the unique point P_K x ∈ K satisfying
‖x − P_K x‖ = inf_{y∈K} ‖x − y‖ =: d(x, K).

The following properties of projections are useful and pertinent to our purpose.

Proposition 2.1 (see [35])

For given x ∈ H and z ∈ K:
(i) z = P_K x ⇔ ⟨x − z, y − z⟩ ≤ 0, ∀y ∈ K;
(ii) z = P_K x ⇔ ‖x − z‖² ≤ ‖x − y‖² − ‖y − z‖², ∀y ∈ K;
(iii) ⟨P_K x − P_K y, x − y⟩ ≥ ‖P_K x − P_K y‖², ∀y ∈ H, which implies that P_K is nonexpansive and monotone.
Definition 2.1 A mapping T : H → H is said to be
(a) nonexpansive if
‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ H;
(b) firmly nonexpansive if 2T − I is nonexpansive, or equivalently,
⟨x − y, Tx − Ty⟩ ≥ ‖Tx − Ty‖², ∀x, y ∈ H;
alternatively, T is firmly nonexpansive if and only if T can be expressed as
T = ½(I + S),

where S : H → H is nonexpansive; projections are firmly nonexpansive.

Definition 2.2 Let T be a nonlinear operator with domain D(T) ⊆ H and range R(T) ⊆ H.
(a) T is said to be monotone if
⟨x − y, Tx − Ty⟩ ≥ 0, ∀x, y ∈ D(T).
(b) Given a number β > 0, T is said to be β-strongly monotone if
⟨x − y, Tx − Ty⟩ ≥ β‖x − y‖², ∀x, y ∈ D(T).
(c) Given a number ν > 0, T is said to be ν-inverse strongly monotone (ν-ism) if
⟨x − y, Tx − Ty⟩ ≥ ν‖Tx − Ty‖², ∀x, y ∈ D(T).

It can easily be seen that if S is nonexpansive, then I − S is monotone. It is also easy to see that the projection P_K is 1-ism.

Inverse strongly monotone (also referred to as co-coercive) operators have been applied widely to solving practical problems in various fields, for instance, in traffic assignment problems; see, e.g., [36, 37].

Definition 2.3 A mapping T : H → H is said to be an averaged mapping if it can be written as an average of the identity I and a nonexpansive mapping, that is,
T = (1 − α)I + αS,

where α ∈ (0, 1) and S : H → H is nonexpansive. More precisely, when this equality holds, we say that T is α-averaged. Thus firmly nonexpansive mappings (in particular, projections) are ½-averaged maps.

Proposition 2.2 (see [24])

Let T : H → H be a given mapping.
(i) T is nonexpansive if and only if the complement I − T is ½-ism.
(ii) If T is ν-ism, then for γ > 0, γT is (ν/γ)-ism.
(iii) T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2. Indeed, for α ∈ (0, 1), T is α-averaged if and only if I − T is (1/(2α))-ism.

Proposition 2.3 (see [24, 38])

Let S, T, V : H → H be given operators.
(i) If T = (1 − α)S + αV for some α ∈ (0, 1) and if S is averaged and V is nonexpansive, then T is averaged.
(ii) T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.
(iii) If T = (1 − α)S + αV for some α ∈ (0, 1) and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.
(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1 ∘ T_2 ∘ ⋯ ∘ T_N. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1, α_2 ∈ (0, 1), then the composite T_1 ∘ T_2 is α-averaged, where α = α_1 + α_2 − α_1 α_2.
(v) If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then
∩_{i=1}^N Fix(T_i) = Fix(T_1 ∘ ⋯ ∘ T_N).

The notation Fix(T) denotes the set of all fixed points of the mapping T, that is, Fix(T) = {x ∈ H : Tx = x}.
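Parts (iv) and (v) of Proposition 2.3 can be illustrated numerically with two projections: each is firmly nonexpansive, hence ½-averaged, their composite is ¾-averaged by (iv), and by (v) iterating the composite locates a point of the intersection of the two sets when a common fixed point exists. The two sets below are our own toy choices.

```python
import numpy as np

def proj_line(x):
    """Projection onto C1, the x-axis of R^2 (firmly nonexpansive, 1/2-averaged)."""
    return np.array([x[0], 0.0])

def proj_ball(x, c=np.array([2.0, 0.0]), r=1.0):
    """Projection onto C2, the closed unit ball around (2, 0)."""
    d = x - c
    nrm = np.linalg.norm(d)
    return x if nrm <= r else c + (r / nrm) * d

# Proposition 2.3(iv): the composite of two 1/2-averaged maps is alpha-averaged with
alpha = 0.5 + 0.5 - 0.5 * 0.5   # = 3/4
x = np.array([5.0, 5.0])
for _ in range(100):
    x = proj_line(proj_ball(x))  # iterate the 3/4-averaged composite
# by Proposition 2.3(v) the limit lies in Fix(P_C1 ∘ P_C2) = C1 ∩ C2 = [1, 3] x {0}
```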

It is clear that, in a real Hilbert space H, S : C → C is k-strictly pseudo-contractive if and only if the following inequality holds:
⟨Sx − Sy, x − y⟩ ≤ ‖x − y‖² − ((1 − k)/2) ‖(I − S)x − (I − S)y‖², ∀x, y ∈ C.
(2.1)

This immediately implies that if S is a k-strictly pseudo-contractive mapping, then I − S is ((1 − k)/2)-inverse strongly monotone; for further details, we refer to [39] and the references therein. It is well known that the class of strict pseudo-contractions strictly includes the class of nonexpansive mappings.

In order to prove the main results of this paper, the following lemmas will be required.

Lemma 2.1 (see [40])

Let {x_n} and {y_n} be bounded sequences in a Banach space X and let {β_n} be a sequence in [0, 1] with 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1. Suppose that x_{n+1} = (1 − β_n)y_n + β_n x_n for all integers n ≥ 0 and limsup_{n→∞} (‖y_{n+1} − y_n‖ − ‖x_{n+1} − x_n‖) ≤ 0. Then lim_{n→∞} ‖y_n − x_n‖ = 0.

Lemma 2.2 (see [[39], Proposition 2.1])

Let C be a nonempty closed convex subset of a real Hilbert space H and let S : C → C be a mapping.
(i) If S is a k-strict pseudo-contraction, then S satisfies the Lipschitz condition
‖Sx − Sy‖ ≤ ((1 + k)/(1 − k)) ‖x − y‖, ∀x, y ∈ C.
(ii) If S is a k-strict pseudo-contraction, then the mapping I − S is semiclosed at 0; that is, if {x_n} is a sequence in C such that x_n → x̃ weakly and (I − S)x_n → 0 strongly, then (I − S)x̃ = 0.
(iii) If S is a k-(quasi-)strict pseudo-contraction, then the fixed point set Fix(S) of S is closed and convex, so that the projection P_{Fix(S)} is well defined.

The following lemma plays a key role in proving strong convergence of the sequences generated by our algorithms.

Lemma 2.3 (see [35])

Let {a_n} be a sequence of nonnegative real numbers satisfying
a_{n+1} ≤ (1 − s_n)a_n + s_n t_n + r_n, ∀n ≥ 0,
where {s_n} ⊂ (0, 1] and {t_n} are such that
(i) Σ_{n=0}^∞ s_n = ∞;
(ii) either limsup_{n→∞} t_n ≤ 0 or Σ_{n=0}^∞ |s_n t_n| < ∞;
(iii) Σ_{n=0}^∞ r_n < ∞, where r_n ≥ 0 for all n ≥ 0.

Then lim_{n→∞} a_n = 0.

Lemma 2.4 (see [8])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let S : C → C be a k-strictly pseudo-contractive mapping. Let γ and δ be two nonnegative real numbers such that (γ + δ)k ≤ γ. Then
‖γ(x − y) + δ(Sx − Sy)‖ ≤ (γ + δ)‖x − y‖, ∀x, y ∈ C.
(2.2)

The following lemma is an immediate consequence of an inner product.

Lemma 2.5 In a real Hilbert space H, the following inequality holds:
‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀x, y ∈ H.

Let K be a nonempty closed convex subset of a real Hilbert space H and let F : K → H be a monotone mapping. The variational inequality problem (VIP) is to find x* ∈ K such that
⟨Fx*, y − x*⟩ ≥ 0, ∀y ∈ K.
The solution set of the VIP is denoted by VI(K, F). It is well known that
x* ∈ VI(K, F) ⇔ x* = P_K(x* − λFx*) for some λ > 0.
A set-valued mapping T : H → 2^H is called monotone if for all x, y ∈ H, f ∈ Tx and g ∈ Ty imply that ⟨x − y, f − g⟩ ≥ 0. A monotone set-valued mapping T : H → 2^H is called maximal if its graph Gph(T) is not properly contained in the graph of any other monotone set-valued mapping. It is known that a monotone set-valued mapping T : H → 2^H is maximal if and only if, for (x, f) ∈ H × H, ⟨x − y, f − g⟩ ≥ 0 for every (y, g) ∈ Gph(T) implies that f ∈ Tx. Let F : K → H be a monotone and Lipschitz continuous mapping and let N_K v be the normal cone to K at v ∈ K, that is,
N_K v = {w ∈ H : ⟨v − u, w⟩ ≥ 0, ∀u ∈ K}.
Define
Tv = Fv + N_K v if v ∈ K, and Tv = ∅ if v ∉ K.

It is known that in this case the mapping T is maximal monotone and 0 ∈ Tv if and only if v ∈ VI(K, F); for further details, we refer to [41] and the references therein.

3 Relaxed viscosity methods and their convergence criteria

In this section, we propose and analyze the following relaxed viscosity iterative algorithms for finding a common element of the solution set of GSVI (1.3), the solution set of SFP (1.6) and the fixed point set of a strictly pseudo-contractive mapping S : C → C.

Algorithm 3.1 Let μ_i ∈ (0, 2β_i) for i = 1, 2, {α_n} ⊂ (0, ∞), {λ_n} ⊂ (0, 2/‖A‖²) and {σ_n}, {β_n}, {γ_n}, {δ_n} ⊂ [0, 1] be such that β_n + γ_n + δ_n = 1 for all n ≥ 0. For x_0 ∈ C given arbitrarily, let {x_n}, {y_n}, {z_n} be the sequences generated by the Mann-type viscosity iterative scheme with regularization
z_n = P_C[P_C(x_n − μ2 B2 x_n) − μ1 B1 P_C(x_n − μ2 B2 x_n)],
y_n = σ_n Q x_n + (1 − σ_n) P_C(z_n − λ_n ∇f_{α_n}(z_n)),
x_{n+1} = β_n x_n + γ_n y_n + δ_n S y_n, ∀n ≥ 0.

Algorithm 3.2 Let μ_i ∈ (0, 2β_i) for i = 1, 2, {α_n} ⊂ (0, ∞), {λ_n} ⊂ (0, 2/‖A‖²) and {σ_n}, {β_n}, {γ_n}, {δ_n} ⊂ [0, 1] be such that β_n + γ_n + δ_n = 1 for all n ≥ 0. For x_0 ∈ C given arbitrarily, let {x_n}, {y_n}, {z_n} be the sequences generated by the Mann-type viscosity iterative scheme with regularization
z_n = P_C(x_n − λ_n ∇f_{α_n}(x_n)),
y_n = σ_n Q x_n + (1 − σ_n) P_C[P_C(z_n − μ2 B2 z_n) − μ1 B1 P_C(z_n − μ2 B2 z_n)],
x_{n+1} = β_n x_n + γ_n y_n + δ_n S y_n, ∀n ≥ 0.

Next, we first give the strong convergence criteria of the sequences generated by Algorithm 3.1.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H1. Let A ∈ B(H1, H2) and let B_i : C → H1 be β_i-inverse strongly monotone for i = 1, 2. Let S : C → C be a k-strictly pseudocontractive mapping such that Fix(S) ∩ Ξ ∩ Γ ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For x_0 ∈ C given arbitrarily, let {x_n}, {y_n}, {z_n} be the sequences generated by Algorithm 3.1, where μ_i ∈ (0, 2β_i) for i = 1, 2, {α_n} ⊂ (0, ∞), {λ_n} ⊂ (0, 2/‖A‖²) and {σ_n}, {β_n}, {γ_n}, {δ_n} ⊂ [0, 1] such that
(i) Σ_{n=0}^∞ α_n < ∞;
(ii) β_n + γ_n + δ_n = 1 and (γ_n + δ_n)k ≤ γ_n for all n ≥ 0;
(iii) lim_{n→∞} σ_n = 0 and Σ_{n=0}^∞ σ_n = ∞;
(iv) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1 and liminf_{n→∞} δ_n > 0;
(v) lim_{n→∞} (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)) = 0;
(vi) 0 < liminf_{n→∞} λ_n ≤ limsup_{n→∞} λ_n < 2/‖A‖² and lim_{n→∞} |λ_{n+1} − λ_n| = 0.

Then the sequences {x_n}, {y_n}, {z_n} converge strongly to the same point x̄ = P_{Fix(S)∩Ξ∩Γ} Q x̄ if and only if lim_{n→∞} ‖y_n − z_n‖ = 0. Furthermore, (x̄, ȳ) is a solution of GSVI (1.3), where ȳ = P_C(x̄ − μ2 B2 x̄).

Proof First, taking into account 0 < liminf_{n→∞} λ_n ≤ limsup_{n→∞} λ_n < 2/‖A‖², without loss of generality we may assume that {λ_n} ⊂ [a, b] for some a, b ∈ (0, 2/‖A‖²).

Now let us show that P_C(I − λ∇f_α) is ζ-averaged for each λ ∈ (0, 2/(α + ‖A‖²)), where
ζ = (2 + λ(α + ‖A‖²))/4 ∈ (0, 1).
Indeed, it is easy to see that ∇f = A*(I − P_Q)A is (1/‖A‖²)-ism, that is,
⟨∇f(x) − ∇f(y), x − y⟩ ≥ (1/‖A‖²) ‖∇f(x) − ∇f(y)‖².
Observe that
(α + ‖A‖²)⟨∇f_α(x) − ∇f_α(y), x − y⟩
= (α + ‖A‖²)[α‖x − y‖² + ⟨∇f(x) − ∇f(y), x − y⟩]
= α²‖x − y‖² + α⟨∇f(x) − ∇f(y), x − y⟩ + α‖A‖²‖x − y‖² + ‖A‖²⟨∇f(x) − ∇f(y), x − y⟩
≥ α²‖x − y‖² + 2α⟨∇f(x) − ∇f(y), x − y⟩ + ‖∇f(x) − ∇f(y)‖²
= ‖α(x − y) + (∇f(x) − ∇f(y))‖²
= ‖∇f_α(x) − ∇f_α(y)‖².
Hence it follows that ∇f_α = αI + A*(I − P_Q)A is (1/(α + ‖A‖²))-ism. Thus, λ∇f_α is (1/(λ(α + ‖A‖²)))-ism according to Proposition 2.2(ii). By Proposition 2.2(iii), the complement I − λ∇f_α is (λ(α + ‖A‖²)/2)-averaged. Therefore, noting that P_C is ½-averaged and utilizing Proposition 2.3(iv), we know that for each λ ∈ (0, 2/(α + ‖A‖²)), P_C(I − λ∇f_α) is ζ-averaged with
ζ = ½ + λ(α + ‖A‖²)/2 − ½ · (λ(α + ‖A‖²)/2) = (2 + λ(α + ‖A‖²))/4 ∈ (0, 1).
This shows that P_C(I − λ∇f_α) is nonexpansive. Furthermore, for {λ_n} ⊂ [a, b] with a, b ∈ (0, 2/‖A‖²), we have
a ≤ inf_{n≥0} λ_n ≤ sup_{n≥0} λ_n ≤ b < 2/‖A‖² = lim_{n→∞} 2/(α_n + ‖A‖²).
Without loss of generality, we may assume that
a ≤ inf_{n≥0} λ_n ≤ sup_{n≥0} λ_n ≤ b < 2/(α_n + ‖A‖²), ∀n ≥ 0.
Consequently, it follows that for each integer n ≥ 0, P_C(I − λ_n ∇f_{α_n}) is ζ_n-averaged with
ζ_n = ½ + λ_n(α_n + ‖A‖²)/2 − ½ · (λ_n(α_n + ‖A‖²)/2) = (2 + λ_n(α_n + ‖A‖²))/4 ∈ (0, 1).

This immediately implies that P_C(I − λ_n ∇f_{α_n}) is nonexpansive for all n ≥ 0.

Next, we divide the remainder of the proof into several steps.

Step 1. {x_n} is bounded.

Indeed, take p ∈ Fix(S) ∩ Ξ ∩ Γ arbitrarily. Then Sp = p, P_C(I − λ∇f)p = p for λ ∈ (0, 2/‖A‖²), and
p = P_C[P_C(p − μ2 B2 p) − μ1 B1 P_C(p − μ2 B2 p)].
For simplicity, we write
q = P_C(p − μ2 B2 p), x̃_n = P_C(x_n − μ2 B2 x_n) and u_n = P_C(z_n − λ_n ∇f_{α_n}(z_n))
for each n ≥ 0. Then y_n = σ_n Q x_n + (1 − σ_n)u_n for each n ≥ 0. From Algorithm 3.1 it follows that
‖u_n − p‖ = ‖P_C(I − λ_n∇f_{α_n})z_n − P_C(I − λ_n∇f)p‖
≤ ‖P_C(I − λ_n∇f_{α_n})z_n − P_C(I − λ_n∇f_{α_n})p‖ + ‖P_C(I − λ_n∇f_{α_n})p − P_C(I − λ_n∇f)p‖
≤ ‖z_n − p‖ + ‖(I − λ_n∇f_{α_n})p − (I − λ_n∇f)p‖
= ‖z_n − p‖ + λ_n α_n ‖p‖.
(3.1)
Utilizing Lemma 2.5, we also have
‖u_n − p‖² = ‖P_C(I − λ_n∇f_{α_n})z_n − P_C(I − λ_n∇f)p‖²
= ‖[P_C(I − λ_n∇f_{α_n})z_n − P_C(I − λ_n∇f_{α_n})p] + [P_C(I − λ_n∇f_{α_n})p − P_C(I − λ_n∇f)p]‖²
≤ ‖P_C(I − λ_n∇f_{α_n})z_n − P_C(I − λ_n∇f_{α_n})p‖² + 2⟨P_C(I − λ_n∇f_{α_n})p − P_C(I − λ_n∇f)p, u_n − p⟩
≤ ‖z_n − p‖² + 2‖P_C(I − λ_n∇f_{α_n})p − P_C(I − λ_n∇f)p‖ ‖u_n − p‖
≤ ‖z_n − p‖² + 2‖(I − λ_n∇f_{α_n})p − (I − λ_n∇f)p‖ ‖u_n − p‖
= ‖z_n − p‖² + 2λ_n α_n ‖p‖ ‖u_n − p‖.
(3.2)
Since B_i : C → H1 is β_i-inverse strongly monotone and 0 < μ_i < 2β_i for i = 1, 2, we know that for all n ≥ 0,
‖z_n − p‖² = ‖P_C[P_C(x_n − μ2B2x_n) − μ1B1P_C(x_n − μ2B2x_n)] − P_C[P_C(p − μ2B2p) − μ1B1P_C(p − μ2B2p)]‖²
≤ ‖[P_C(x_n − μ2B2x_n) − μ1B1P_C(x_n − μ2B2x_n)] − [P_C(p − μ2B2p) − μ1B1P_C(p − μ2B2p)]‖²
= ‖[P_C(x_n − μ2B2x_n) − P_C(p − μ2B2p)] − μ1[B1P_C(x_n − μ2B2x_n) − B1P_C(p − μ2B2p)]‖²
≤ ‖P_C(x_n − μ2B2x_n) − P_C(p − μ2B2p)‖² − μ1(2β1 − μ1)‖B1P_C(x_n − μ2B2x_n) − B1P_C(p − μ2B2p)‖²
≤ ‖(x_n − μ2B2x_n) − (p − μ2B2p)‖² − μ1(2β1 − μ1)‖B1x̃_n − B1q‖²
= ‖(x_n − p) − μ2(B2x_n − B2p)‖² − μ1(2β1 − μ1)‖B1x̃_n − B1q‖²
≤ ‖x_n − p‖² − μ2(2β2 − μ2)‖B2x_n − B2p‖² − μ1(2β1 − μ1)‖B1x̃_n − B1q‖²
≤ ‖x_n − p‖².
(3.3)
Hence it follows from (3.1) and (3.3) that
‖y_n − p‖ = ‖σ_n(Qx_n − p) + (1 − σ_n)(u_n − p)‖
≤ σ_n‖Qx_n − p‖ + (1 − σ_n)‖u_n − p‖
≤ σ_n(‖Qx_n − Qp‖ + ‖Qp − p‖) + (1 − σ_n)(‖z_n − p‖ + λ_nα_n‖p‖)
≤ σ_n(ρ‖x_n − p‖ + ‖Qp − p‖) + (1 − σ_n)(‖x_n − p‖ + λ_nα_n‖p‖)
≤ (1 − (1 − ρ)σ_n)‖x_n − p‖ + σ_n‖Qp − p‖ + λ_nα_n‖p‖
= (1 − (1 − ρ)σ_n)‖x_n − p‖ + (1 − ρ)σ_n · (‖Qp − p‖/(1 − ρ)) + λ_nα_n‖p‖
≤ max{‖x_n − p‖, ‖Qp − p‖/(1 − ρ)} + λ_nα_n‖p‖.
(3.4)
Since (γ_n + δ_n)k ≤ γ_n for all n ≥ 0, utilizing Lemma 2.4 we obtain from (3.4)
‖x_{n+1} − p‖ = ‖β_n(x_n − p) + γ_n(y_n − p) + δ_n(Sy_n − p)‖
≤ β_n‖x_n − p‖ + ‖γ_n(y_n − p) + δ_n(Sy_n − p)‖
≤ β_n‖x_n − p‖ + (γ_n + δ_n)‖y_n − p‖
≤ β_n‖x_n − p‖ + (γ_n + δ_n)[max{‖x_n − p‖, ‖Qp − p‖/(1 − ρ)} + λ_nα_n‖p‖]
≤ max{‖x_n − p‖, ‖Qp − p‖/(1 − ρ)} + λ_nα_n‖p‖
≤ max{‖x_n − p‖, ‖Qp − p‖/(1 − ρ)} + b‖p‖α_n.
(3.5)
Now we claim that
‖x_{n+1} − p‖ ≤ max{‖x_0 − p‖, ‖Qp − p‖/(1 − ρ)} + b‖p‖ Σ_{j=0}^n α_j.
(3.6)
As a matter of fact, if n = 0, then it is clear that (3.6) is valid, that is,
‖x_1 − p‖ ≤ max{‖x_0 − p‖, ‖Qp − p‖/(1 − ρ)} + b‖p‖ Σ_{j=0}^0 α_j.
Assume that (3.6) holds for n − 1, that is,
‖x_n − p‖ ≤ max{‖x_0 − p‖, ‖Qp − p‖/(1 − ρ)} + b‖p‖ Σ_{j=0}^{n−1} α_j.
(3.7)
Then we conclude from (3.5) and (3.7) that
‖x_{n+1} − p‖ ≤ max{‖x_n − p‖, ‖Qp − p‖/(1 − ρ)} + b‖p‖α_n
≤ max{max{‖x_0 − p‖, ‖Qp − p‖/(1 − ρ)} + b‖p‖ Σ_{j=0}^{n−1} α_j, ‖Qp − p‖/(1 − ρ)} + b‖p‖α_n
≤ max{‖x_0 − p‖, ‖Qp − p‖/(1 − ρ)} + b‖p‖ Σ_{j=0}^{n−1} α_j + b‖p‖α_n
= max{‖x_0 − p‖, ‖Qp − p‖/(1 − ρ)} + b‖p‖ Σ_{j=0}^n α_j.

By induction, we conclude that (3.6) is valid. Hence {x_n} is bounded. Since P_C, ∇f_{α_n}, B1 and B2 are Lipschitz continuous, it is easy to see that {u_n}, {z_n}, {y_n} and {x̃_n} are bounded, where x̃_n = P_C(x_n − μ2B2x_n) for all n ≥ 0.

Step 2. lim_{n→∞} ‖x_{n+1} − x_n‖ = 0.

Indeed, define x_{n+1} = β_n x_n + (1 − β_n)w_n for all n ≥ 0. It follows that
w_{n+1} − w_n = (x_{n+2} − β_{n+1}x_{n+1})/(1 − β_{n+1}) − (x_{n+1} − β_n x_n)/(1 − β_n)
= (γ_{n+1}y_{n+1} + δ_{n+1}Sy_{n+1})/(1 − β_{n+1}) − (γ_n y_n + δ_n Sy_n)/(1 − β_n)
= [γ_{n+1}(y_{n+1} − y_n) + δ_{n+1}(Sy_{n+1} − Sy_n)]/(1 − β_{n+1}) + (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n))y_n + (δ_{n+1}/(1 − β_{n+1}) − δ_n/(1 − β_n))Sy_n.
(3.8)
Since (γ_n + δ_n)k ≤ γ_n for all n ≥ 0, utilizing Lemma 2.4 we have
‖γ_{n+1}(y_{n+1} − y_n) + δ_{n+1}(Sy_{n+1} − Sy_n)‖ ≤ (γ_{n+1} + δ_{n+1})‖y_{n+1} − y_n‖.
(3.9)
Next, we estimate ‖y_{n+1} − y_n‖. Observe that
‖u_{n+1} − u_n‖ = ‖P_C(z_{n+1} − λ_{n+1}∇f_{α_{n+1}}(z_{n+1})) − P_C(z_n − λ_n∇f_{α_n}(z_n))‖
≤ ‖P_C(I − λ_{n+1}∇f_{α_{n+1}})z_{n+1} − P_C(I − λ_{n+1}∇f_{α_{n+1}})z_n‖ + ‖P_C(I − λ_{n+1}∇f_{α_{n+1}})z_n − P_C(I − λ_n∇f_{α_n})z_n‖
≤ ‖z_{n+1} − z_n‖ + ‖(I − λ_{n+1}∇f_{α_{n+1}})z_n − (I − λ_n∇f_{α_n})z_n‖
= ‖z_{n+1} − z_n‖ + ‖λ_{n+1}(α_{n+1}I + ∇f)z_n − λ_n(α_nI + ∇f)z_n‖
≤ ‖z_{n+1} − z_n‖ + |λ_{n+1} − λ_n| ‖∇f(z_n)‖ + |λ_{n+1}α_{n+1} − λ_nα_n| ‖z_n‖
(3.10)
and
\[
\begin{aligned}
\|z_{n+1}-z_n\|^2 &= \|P_C[P_C(x_{n+1}-\mu_2B_2x_{n+1})-\mu_1B_1P_C(x_{n+1}-\mu_2B_2x_{n+1})]\\
&\qquad-P_C[P_C(x_n-\mu_2B_2x_n)-\mu_1B_1P_C(x_n-\mu_2B_2x_n)]\|^2\\
&\le \|[P_C(x_{n+1}-\mu_2B_2x_{n+1})-\mu_1B_1P_C(x_{n+1}-\mu_2B_2x_{n+1})]\\
&\qquad-[P_C(x_n-\mu_2B_2x_n)-\mu_1B_1P_C(x_n-\mu_2B_2x_n)]\|^2\\
&= \|[P_C(x_{n+1}-\mu_2B_2x_{n+1})-P_C(x_n-\mu_2B_2x_n)]\\
&\qquad-\mu_1[B_1P_C(x_{n+1}-\mu_2B_2x_{n+1})-B_1P_C(x_n-\mu_2B_2x_n)]\|^2\\
&\le \|P_C(x_{n+1}-\mu_2B_2x_{n+1})-P_C(x_n-\mu_2B_2x_n)\|^2\\
&\qquad-\mu_1(2\beta_1-\mu_1)\|B_1P_C(x_{n+1}-\mu_2B_2x_{n+1})-B_1P_C(x_n-\mu_2B_2x_n)\|^2\\
&\le \|P_C(x_{n+1}-\mu_2B_2x_{n+1})-P_C(x_n-\mu_2B_2x_n)\|^2\\
&\le \|(x_{n+1}-\mu_2B_2x_{n+1})-(x_n-\mu_2B_2x_n)\|^2\\
&= \|(x_{n+1}-x_n)-\mu_2(B_2x_{n+1}-B_2x_n)\|^2\\
&\le \|x_{n+1}-x_n\|^2-\mu_2(2\beta_2-\mu_2)\|B_2x_{n+1}-B_2x_n\|^2\\
&\le \|x_{n+1}-x_n\|^2.
\end{aligned}
\]
(3.11)
Combining (3.10) with (3.11), we get
\[
\begin{aligned}
\|u_{n+1}-u_n\| &\le \|z_{n+1}-z_n\|+|\lambda_{n+1}-\lambda_n|\,\|\nabla f(z_n)\|+|\lambda_{n+1}\alpha_{n+1}-\lambda_n\alpha_n|\,\|z_n\|\\
&\le \|x_{n+1}-x_n\|+|\lambda_{n+1}-\lambda_n|\,\|\nabla f(z_n)\|+|\lambda_{n+1}\alpha_{n+1}-\lambda_n\alpha_n|\,\|z_n\|,
\end{aligned}
\]
(3.12)
which hence implies that
\[
\begin{aligned}
\|y_{n+1}-y_n\| &= \|u_{n+1}+\sigma_{n+1}(Qx_{n+1}-u_{n+1})-u_n-\sigma_n(Qx_n-u_n)\|\\
&\le \|u_{n+1}-u_n\|+\sigma_{n+1}\|Qx_{n+1}-u_{n+1}\|+\sigma_n\|Qx_n-u_n\|\\
&\le \|x_{n+1}-x_n\|+|\lambda_{n+1}-\lambda_n|\,\|\nabla f(z_n)\|+|\lambda_{n+1}\alpha_{n+1}-\lambda_n\alpha_n|\,\|z_n\|\\
&\quad+\sigma_{n+1}\|Qx_{n+1}-u_{n+1}\|+\sigma_n\|Qx_n-u_n\|.
\end{aligned}
\]
(3.13)
Hence it follows from (3.8), (3.9) and (3.13) that
\[
\begin{aligned}
\|w_{n+1}-w_n\| &\le \frac{\|\gamma_{n+1}(y_{n+1}-y_n)+\delta_{n+1}(Sy_{n+1}-Sy_n)\|}{1-\beta_{n+1}}+\Bigl|\frac{\gamma_{n+1}}{1-\beta_{n+1}}-\frac{\gamma_n}{1-\beta_n}\Bigr|\|y_n\|+\Bigl|\frac{\delta_{n+1}}{1-\beta_{n+1}}-\frac{\delta_n}{1-\beta_n}\Bigr|\|Sy_n\|\\
&\le \frac{\gamma_{n+1}+\delta_{n+1}}{1-\beta_{n+1}}\|y_{n+1}-y_n\|+\Bigl|\frac{\gamma_{n+1}}{1-\beta_{n+1}}-\frac{\gamma_n}{1-\beta_n}\Bigr|\bigl(\|y_n\|+\|Sy_n\|\bigr)\\
&= \|y_{n+1}-y_n\|+\Bigl|\frac{\gamma_{n+1}}{1-\beta_{n+1}}-\frac{\gamma_n}{1-\beta_n}\Bigr|\bigl(\|y_n\|+\|Sy_n\|\bigr)\\
&\le \|x_{n+1}-x_n\|+|\lambda_{n+1}-\lambda_n|\,\|\nabla f(z_n)\|+|\lambda_{n+1}\alpha_{n+1}-\lambda_n\alpha_n|\,\|z_n\|\\
&\quad+\sigma_{n+1}\|Qx_{n+1}-u_{n+1}\|+\sigma_n\|Qx_n-u_n\|+\Bigl|\frac{\gamma_{n+1}}{1-\beta_{n+1}}-\frac{\gamma_n}{1-\beta_n}\Bigr|\bigl(\|y_n\|+\|Sy_n\|\bigr).
\end{aligned}
\]
Since { x n } , { y n } , { z n } and { u n } are bounded, it follows from conditions (i), (iii), (v) and (vi) that
\[
\begin{aligned}
\limsup_{n\to\infty}\bigl(\|w_{n+1}-w_n\|-\|x_{n+1}-x_n\|\bigr)
&\le \limsup_{n\to\infty}\Bigl\{|\lambda_{n+1}-\lambda_n|\,\|\nabla f(z_n)\|+|\lambda_{n+1}\alpha_{n+1}-\lambda_n\alpha_n|\,\|z_n\|\\
&\qquad+\sigma_{n+1}\|Qx_{n+1}-u_{n+1}\|+\sigma_n\|Qx_n-u_n\|\\
&\qquad+\Bigl|\frac{\gamma_{n+1}}{1-\beta_{n+1}}-\frac{\gamma_n}{1-\beta_n}\Bigr|\bigl(\|y_n\|+\|Sy_n\|\bigr)\Bigr\}=0.
\end{aligned}
\]
Hence by Lemma 2.1, we get $\lim_{n\to\infty}\|w_n-x_n\|=0$. Thus,
\[
\lim_{n\to\infty}\|x_{n+1}-x_n\|=\lim_{n\to\infty}(1-\beta_n)\|w_n-x_n\|=0.
\]
(3.14)

Step 3. $\lim_{n\to\infty}\|B_1\tilde{x}_n-B_1q\|=0$ and $\lim_{n\to\infty}\|B_2x_n-B_2p\|=0$, where $q=P_C(p-\mu_2B_2p)$.

Indeed, utilizing Lemma 2.4 and the convexity of $\|\cdot\|^2$, we obtain from Algorithm 3.1 and (3.2)-(3.3) that
\[
\begin{aligned}
\|x_{n+1}-p\|^2 &= \|\beta_n(x_n-p)+\gamma_n(y_n-p)+\delta_n(Sy_n-p)\|^2\\
&\le \beta_n\|x_n-p\|^2+(\gamma_n+\delta_n)\Bigl\|\frac{1}{\gamma_n+\delta_n}\bigl[\gamma_n(y_n-p)+\delta_n(Sy_n-p)\bigr]\Bigr\|^2\\
&\le \beta_n\|x_n-p\|^2+(\gamma_n+\delta_n)\|y_n-p\|^2\\
&\le \beta_n\|x_n-p\|^2+(\gamma_n+\delta_n)\bigl[\sigma_n\|Qx_n-p\|^2+(1-\sigma_n)\|u_n-p\|^2\bigr]\\
&\le \beta_n\|x_n-p\|^2+\sigma_n\|Qx_n-p\|^2+(\gamma_n+\delta_n)\|u_n-p\|^2\\
&\le \beta_n\|x_n-p\|^2+\sigma_n\|Qx_n-p\|^2+(\gamma_n+\delta_n)\bigl(\|z_n-p\|^2+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|\bigr)\\
&\le \beta_n\|x_n-p\|^2+\sigma_n\|Qx_n-p\|^2+(\gamma_n+\delta_n)\bigl[\|x_n-p\|^2-\mu_2(2\beta_2-\mu_2)\|B_2x_n-B_2p\|^2\\
&\qquad-\mu_1(2\beta_1-\mu_1)\|B_1\tilde{x}_n-B_1q\|^2+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|\bigr]\\
&\le \|x_n-p\|^2+\sigma_n\|Qx_n-p\|^2-(\gamma_n+\delta_n)\bigl[\mu_2(2\beta_2-\mu_2)\|B_2x_n-B_2p\|^2\\
&\qquad+\mu_1(2\beta_1-\mu_1)\|B_1\tilde{x}_n-B_1q\|^2\bigr]+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|.
\end{aligned}
\]
Therefore,
\[
\begin{aligned}
(\gamma_n+\delta_n)\bigl[\mu_2(2\beta_2-\mu_2)\|B_2x_n-B_2p\|^2&+\mu_1(2\beta_1-\mu_1)\|B_1\tilde{x}_n-B_1q\|^2\bigr]\\
&\le \|x_n-p\|^2-\|x_{n+1}-p\|^2+\sigma_n\|Qx_n-p\|^2+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|\\
&\le \bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr)\|x_n-x_{n+1}\|+\sigma_n\|Qx_n-p\|^2+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|.
\end{aligned}
\]
Since $\alpha_n\to0$, $\sigma_n\to0$, $\|x_n-x_{n+1}\|\to0$, $\liminf_{n\to\infty}(\gamma_n+\delta_n)>0$ and $\{\lambda_n\}\subset[a,b]$ for some $a,b\in(0,1/\|A\|^2)$, it follows that
\[
\lim_{n\to\infty}\|B_1\tilde{x}_n-B_1q\|=0\quad\text{and}\quad\lim_{n\to\infty}\|B_2x_n-B_2p\|=0.
\]
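Step 3 (like (3.11) above) rests on the standard estimate that a $\beta$-inverse strongly monotone operator $B$ with $\mu\in(0,2\beta)$ satisfies $\|(I-\mu B)x-(I-\mu B)y\|^2\le\|x-y\|^2-\mu(2\beta-\mu)\|Bx-By\|^2$. A quick numeric check, with the illustrative choice $B=0.2I$ (which is $5$-inverse strongly monotone):

```python
import numpy as np

beta = 5.0
B = lambda x: 0.2 * x   # <Bx - By, x - y> = 0.2||x - y||^2 = beta * ||Bx - By||^2
mu = 1.0                # any mu in (0, 2*beta)

rng = np.random.default_rng(0)
violation = 0.0
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = np.linalg.norm((x - mu * B(x)) - (y - mu * B(y))) ** 2
    rhs = np.linalg.norm(x - y) ** 2 - mu * (2 * beta - mu) * np.linalg.norm(B(x) - B(y)) ** 2
    violation = max(violation, lhs - rhs)   # should never exceed rounding error
print(violation)
```

In particular $I-\mu B$ is nonexpansive with a quantified defect term, which is exactly what lets the $\|B_2x_n-B_2p\|$ and $\|B_1\tilde{x}_n-B_1q\|$ terms be absorbed above.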

Step 4. $\lim_{n\to\infty}\|Sy_n-y_n\|=0$.

Indeed, by firm nonexpansiveness of P C , we have
\[
\begin{aligned}
\|\tilde{x}_n-q\|^2 &= \|P_C(x_n-\mu_2B_2x_n)-P_C(p-\mu_2B_2p)\|^2\\
&\le \langle (x_n-\mu_2B_2x_n)-(p-\mu_2B_2p),\tilde{x}_n-q\rangle\\
&= \tfrac12\bigl[\|(x_n-p)-\mu_2(B_2x_n-B_2p)\|^2+\|\tilde{x}_n-q\|^2\\
&\qquad-\|(x_n-p)-\mu_2(B_2x_n-B_2p)-(\tilde{x}_n-q)\|^2\bigr]\\
&\le \tfrac12\bigl[\|x_n-p\|^2+\|\tilde{x}_n-q\|^2-\|(x_n-\tilde{x}_n)-\mu_2(B_2x_n-B_2p)-(p-q)\|^2\bigr]\\
&= \tfrac12\bigl[\|x_n-p\|^2+\|\tilde{x}_n-q\|^2-\|x_n-\tilde{x}_n-(p-q)\|^2\\
&\qquad+2\mu_2\langle x_n-\tilde{x}_n-(p-q),B_2x_n-B_2p\rangle-\mu_2^2\|B_2x_n-B_2p\|^2\bigr]\\
&\le \tfrac12\bigl[\|x_n-p\|^2+\|\tilde{x}_n-q\|^2-\|x_n-\tilde{x}_n-(p-q)\|^2\\
&\qquad+2\mu_2\|x_n-\tilde{x}_n-(p-q)\|\,\|B_2x_n-B_2p\|\bigr],
\end{aligned}
\]
that is,
\[
\|\tilde{x}_n-q\|^2\le\|x_n-p\|^2-\|x_n-\tilde{x}_n-(p-q)\|^2+2\mu_2\|x_n-\tilde{x}_n-(p-q)\|\,\|B_2x_n-B_2p\|.
\]
(3.15)
Moreover, using an argument similar to the one above, we derive
\[
\begin{aligned}
\|z_n-p\|^2 &= \|P_C(\tilde{x}_n-\mu_1B_1\tilde{x}_n)-P_C(q-\mu_1B_1q)\|^2\\
&\le \langle (\tilde{x}_n-\mu_1B_1\tilde{x}_n)-(q-\mu_1B_1q),z_n-p\rangle\\
&= \tfrac12\bigl[\|(\tilde{x}_n-q)-\mu_1(B_1\tilde{x}_n-B_1q)\|^2+\|z_n-p\|^2\\
&\qquad-\|(\tilde{x}_n-q)-\mu_1(B_1\tilde{x}_n-B_1q)-(z_n-p)\|^2\bigr]\\
&\le \tfrac12\bigl[\|\tilde{x}_n-q\|^2+\|z_n-p\|^2-\|(\tilde{x}_n-z_n)-\mu_1(B_1\tilde{x}_n-B_1q)+(p-q)\|^2\bigr]\\
&= \tfrac12\bigl[\|\tilde{x}_n-q\|^2+\|z_n-p\|^2-\|\tilde{x}_n-z_n+(p-q)\|^2\\
&\qquad+2\mu_1\langle \tilde{x}_n-z_n+(p-q),B_1\tilde{x}_n-B_1q\rangle-\mu_1^2\|B_1\tilde{x}_n-B_1q\|^2\bigr]\\
&\le \tfrac12\bigl[\|\tilde{x}_n-q\|^2+\|z_n-p\|^2-\|\tilde{x}_n-z_n+(p-q)\|^2\\
&\qquad+2\mu_1\|\tilde{x}_n-z_n+(p-q)\|\,\|B_1\tilde{x}_n-B_1q\|\bigr],
\end{aligned}
\]
that is,
\[
\|z_n-p\|^2\le\|\tilde{x}_n-q\|^2-\|\tilde{x}_n-z_n+(p-q)\|^2+2\mu_1\|\tilde{x}_n-z_n+(p-q)\|\,\|B_1\tilde{x}_n-B_1q\|.
\]
(3.16)
Utilizing (3.2), (3.15) and (3.16), we have
\[
\begin{aligned}
\|u_n-p\|^2 &\le \|z_n-p\|^2+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|\\
&\le \|\tilde{x}_n-q\|^2-\|\tilde{x}_n-z_n+(p-q)\|^2+2\mu_1\|\tilde{x}_n-z_n+(p-q)\|\,\|B_1\tilde{x}_n-B_1q\|+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|\\
&\le \|x_n-p\|^2-\|x_n-\tilde{x}_n-(p-q)\|^2+2\mu_2\|x_n-\tilde{x}_n-(p-q)\|\,\|B_2x_n-B_2p\|\\
&\quad-\|\tilde{x}_n-z_n+(p-q)\|^2+2\mu_1\|\tilde{x}_n-z_n+(p-q)\|\,\|B_1\tilde{x}_n-B_1q\|+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|.
\end{aligned}
\]
(3.17)
So, from Algorithm 3.1 and (3.17), it follows that
\[
\begin{aligned}
\|x_{n+1}-p\|^2 &= \|\beta_n(x_n-p)+\gamma_n(y_n-p)+\delta_n(Sy_n-p)\|^2\\
&\le \beta_n\|x_n-p\|^2+(\gamma_n+\delta_n)\|y_n-p\|^2\\
&= \beta_n\|x_n-p\|^2+(1-\beta_n)\|y_n-p\|^2\\
&\le \beta_n\|x_n-p\|^2+(1-\beta_n)\bigl[\sigma_n\|Qx_n-p\|^2+(1-\sigma_n)\|u_n-p\|^2\bigr]\\
&\le \beta_n\|x_n-p\|^2+\sigma_n\|Qx_n-p\|^2+(1-\beta_n)\|u_n-p\|^2\\
&\le \beta_n\|x_n-p\|^2+\sigma_n\|Qx_n-p\|^2+(1-\beta_n)\bigl[\|x_n-p\|^2-\|x_n-\tilde{x}_n-(p-q)\|^2\\
&\qquad+2\mu_2\|x_n-\tilde{x}_n-(p-q)\|\,\|B_2x_n-B_2p\|-\|\tilde{x}_n-z_n+(p-q)\|^2\\
&\qquad+2\mu_1\|\tilde{x}_n-z_n+(p-q)\|\,\|B_1\tilde{x}_n-B_1q\|+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|\bigr]\\
&= \|x_n-p\|^2+\sigma_n\|Qx_n-p\|^2-(1-\beta_n)\bigl[\|x_n-\tilde{x}_n-(p-q)\|^2+\|\tilde{x}_n-z_n+(p-q)\|^2\bigr]\\
&\quad+(1-\beta_n)\bigl[2\mu_2\|x_n-\tilde{x}_n-(p-q)\|\,\|B_2x_n-B_2p\|\\
&\qquad+2\mu_1\|\tilde{x}_n-z_n+(p-q)\|\,\|B_1\tilde{x}_n-B_1q\|+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|\bigr],
\end{aligned}
\]
which hence implies that
\[
\begin{aligned}
(1-\beta_n)\bigl[\|x_n-\tilde{x}_n-(p-q)\|^2&+\|\tilde{x}_n-z_n+(p-q)\|^2\bigr]\\
&\le \|x_n-p\|^2-\|x_{n+1}-p\|^2+\sigma_n\|Qx_n-p\|^2\\
&\quad+(1-\beta_n)\bigl[2\mu_2\|x_n-\tilde{x}_n-(p-q)\|\,\|B_2x_n-B_2p\|\\
&\qquad+2\mu_1\|\tilde{x}_n-z_n+(p-q)\|\,\|B_1\tilde{x}_n-B_1q\|+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|\bigr]\\
&\le \bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr)\|x_n-x_{n+1}\|+\sigma_n\|Qx_n-p\|^2\\
&\quad+2\mu_2\|x_n-\tilde{x}_n-(p-q)\|\,\|B_2x_n-B_2p\|\\
&\quad+2\mu_1\|\tilde{x}_n-z_n+(p-q)\|\,\|B_1\tilde{x}_n-B_1q\|+2\lambda_n\alpha_n\|p\|\,\|u_n-p\|.
\end{aligned}
\]
Since $\limsup_{n\to\infty}\beta_n<1$, $\{\lambda_n\}\subset[a,b]$, $\alpha_n\to0$, $\sigma_n\to0$, $\|B_2x_n-B_2p\|\to0$, $\|B_1\tilde{x}_n-B_1q\|\to0$ and $\|x_{n+1}-x_n\|\to0$, it follows from the boundedness of $\{x_n\}$, $\{\tilde{x}_n\}$, $\{z_n\}$ and $\{u_n\}$ that
\[
\lim_{n\to\infty}\|x_n-\tilde{x}_n-(p-q)\|=0\quad\text{and}\quad\lim_{n\to\infty}\|\tilde{x}_n-z_n+(p-q)\|=0.
\]
Consequently, it immediately follows that $\lim_{n\to\infty}\|x_n-z_n\|=0$. Also, since $y_n=\sigma_nQx_n+(1-\sigma_n)u_n$ and $\|y_n-z_n\|\to0$, we have
\[
(1-\sigma_n)\|u_n-z_n\|=\|y_n-z_n-\sigma_n(Qx_n-z_n)\|\le\|y_n-z_n\|+\sigma_n\|Qx_n-z_n\|\to0\quad(n\to\infty).
\]
Thus, we have
\[
\lim_{n\to\infty}\|u_n-z_n\|=0\quad\text{and}\quad\lim_{n\to\infty}\|x_n-y_n\|=0.
\]
(3.18)
Note that
\[
\delta_n\|Sy_n-x_n\|\le\|x_{n+1}-x_n\|+\gamma_n\|y_n-x_n\|.
\]
It hence follows that
\[
\lim_{n\to\infty}\|Sy_n-x_n\|=0\quad\text{and}\quad\lim_{n\to\infty}\|Sy_n-y_n\|=0.
\]
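Step 4 rests on the firm nonexpansiveness of the metric projection, $\|P_Cx-P_Cy\|^2\le\langle x-y,P_Cx-P_Cy\rangle$, which is what produces (3.15)-(3.16). A numeric check with the illustrative choice of $C$ as the closed unit ball:

```python
import numpy as np

def proj_ball(x):
    """Metric projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(1)
worst = 0.0
for _ in range(2000):
    x, y = 3 * rng.normal(size=3), 3 * rng.normal(size=3)
    px, py = proj_ball(x), proj_ball(y)
    # firm nonexpansiveness: ||Px - Py||^2 - <x - y, Px - Py> <= 0
    worst = max(worst, np.linalg.norm(px - py) ** 2 - np.dot(x - y, px - py))
print(worst)
```

The same inequality holds for the projection onto any nonempty closed convex subset of a Hilbert space; the ball is chosen here only so the projection has a closed form.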

Step 5. $\limsup_{n\to\infty}\langle Q\bar{x}-\bar{x},x_n-\bar{x}\rangle\le0$, where $\bar{x}=P_{\operatorname{Fix}(S)\cap\Xi\cap\Gamma}Q\bar{x}$.

Indeed, since { x n } is bounded, there exists a subsequence { x n i } of { x n } such that
\[
\limsup_{n\to\infty}\langle Q\bar{x}-\bar{x},x_n-\bar{x}\rangle=\lim_{i\to\infty}\langle Q\bar{x}-\bar{x},x_{n_i}-\bar{x}\rangle.
\]
(3.19)
Also, since $H$ is reflexive and $\{y_n\}$ is bounded, without loss of generality we may assume that $y_{n_i}\rightharpoonup\hat{p}$ weakly for some $\hat{p}\in C$. First, it is clear from Lemma 2.2 that $\hat{p}\in\operatorname{Fix}(S)$. Now, let us show that $\hat{p}\in\Xi$. We note that
\[
\|x_n-G(x_n)\|=\|x_n-P_C[P_C(x_n-\mu_2B_2x_n)-\mu_1B_1P_C(x_n-\mu_2B_2x_n)]\|=\|x_n-z_n\|\to0\quad(n\to\infty),
\]
where $G:C\to C$ is defined as in Lemma 1.1. According to Lemma 2.2, we obtain $\hat{p}\in\Xi$. Further, let us show that $\hat{p}\in\Gamma$. As a matter of fact, since $\|x_n-z_n\|\to0$, $\|u_n-z_n\|\to0$ and $\|x_n-y_n\|\to0$, we deduce that $z_{n_i}\rightharpoonup\hat{p}$ and $u_{n_i}\rightharpoonup\hat{p}$ weakly. Let
\[
Tv=\begin{cases}\nabla f(v)+N_Cv, & v\in C,\\ \emptyset, & v\notin C,\end{cases}
\]
where $N_Cv=\{w\in H_1:\langle v-u,w\rangle\ge0,\ \forall u\in C\}$. Then $T$ is maximal monotone and $0\in Tv$ if and only if $v\in\operatorname{VI}(C,\nabla f)$; see [41] for more details. Let $(v,w)\in\operatorname{Gph}(T)$. Then we have
\[
w\in Tv=\nabla f(v)+N_Cv
\]
and hence
\[
w-\nabla f(v)\in N_Cv.
\]
So, we have
\[
\langle v-u,w-\nabla f(v)\rangle\ge0,\quad\forall u\in C.
\]
On the other hand, from
\[
u_n=P_C(z_n-\lambda_n\nabla f_{\alpha_n}(z_n))\quad\text{and}\quad v\in C,
\]
we have
\[
\langle z_n-\lambda_n\nabla f_{\alpha_n}(z_n)-u_n,u_n-v\rangle\ge0
\]
and hence,
\[
\Bigl\langle v-u_n,\frac{u_n-z_n}{\lambda_n}+\nabla f_{\alpha_n}(z_n)\Bigr\rangle\ge0.
\]
Therefore, from
\[
w-\nabla f(v)\in N_Cv\quad\text{and}\quad u_{n_i}\in C,
\]
we have
\[
\begin{aligned}
\langle v-u_{n_i},w\rangle &\ge \langle v-u_{n_i},\nabla f(v)\rangle\\
&\ge \langle v-u_{n_i},\nabla f(v)\rangle-\Bigl\langle v-u_{n_i},\frac{u_{n_i}-z_{n_i}}{\lambda_{n_i}}+\nabla f_{\alpha_{n_i}}(z_{n_i})\Bigr\rangle\\
&= \langle v-u_{n_i},\nabla f(v)\rangle-\Bigl\langle v-u_{n_i},\frac{u_{n_i}-z_{n_i}}{\lambda_{n_i}}+\nabla f(z_{n_i})\Bigr\rangle-\alpha_{n_i}\langle v-u_{n_i},z_{n_i}\rangle\\
&= \langle v-u_{n_i},\nabla f(v)-\nabla f(u_{n_i})\rangle+\langle v-u_{n_i},\nabla f(u_{n_i})-\nabla f(z_{n_i})\rangle\\
&\quad-\Bigl\langle v-u_{n_i},\frac{u_{n_i}-z_{n_i}}{\lambda_{n_i}}\Bigr\rangle-\alpha_{n_i}\langle v-u_{n_i},z_{n_i}\rangle\\
&\ge \langle v-u_{n_i},\nabla f(u_{n_i})-\nabla f(z_{n_i})\rangle-\Bigl\langle v-u_{n_i},\frac{u_{n_i}-z_{n_i}}{\lambda_{n_i}}\Bigr\rangle-\alpha_{n_i}\langle v-u_{n_i},z_{n_i}\rangle.
\end{aligned}
\]
Hence, we get
\[
\langle v-\hat{p},w\rangle\ge0\quad\text{as }i\to\infty.
\]
Since $T$ is maximal monotone, we have $\hat{p}\in T^{-1}0$, and hence $\hat{p}\in\operatorname{VI}(C,\nabla f)$. Thus it is clear that $\hat{p}\in\Gamma$. Therefore, $\hat{p}\in\operatorname{Fix}(S)\cap\Xi\cap\Gamma$. Consequently, in terms of Proposition 2.1(i), we obtain from (3.19) that
\[
\limsup_{n\to\infty}\langle Q\bar{x}-\bar{x},x_n-\bar{x}\rangle=\lim_{i\to\infty}\langle Q\bar{x}-\bar{x},x_{n_i}-\bar{x}\rangle=\langle Q\bar{x}-\bar{x},\hat{p}-\bar{x}\rangle\le0.
\]
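The identification of $\hat{p}$ with a point of $\operatorname{VI}(C,\nabla f)$ via the maximal monotone operator $T$ can be mirrored numerically: $v$ solves $\operatorname{VI}(C,\nabla f)$ exactly when $v=P_C(v-\lambda\nabla f(v))$ for $\lambda>0$. The sketch below uses $\nabla f=A^{\!*}(I-P_Q)A$ as in the split feasibility problem, but the data ($C$ the unit ball, $Q$ a small ball, $A$ diagonal) are illustrative assumptions only:

```python
import numpy as np

def proj_ball(x, c, r):
    """Projection onto the closed ball of center c and radius r."""
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + (r / n) * d

# Illustrative split-feasibility data (assumptions for this sketch only):
A = np.diag([2.0, 1.0])
proj_C = lambda x: proj_ball(x, np.zeros(2), 1.0)            # C = closed unit ball
proj_Q = lambda y: proj_ball(y, np.array([3.0, 0.0]), 0.5)   # Q = ball((3,0), 0.5)
grad_f = lambda z: A.T @ (A @ z - proj_Q(A @ z))             # grad of (1/2)||(I - P_Q)Az||^2

lam = 0.25                      # step size below 2/||A||^2 = 0.5
v = np.zeros(2)
for _ in range(2000):           # Picard iteration: v <- P_C(v - lam * grad f(v))
    v = proj_C(v - lam * grad_f(v))

# The fixed point satisfies the VI(C, grad f) inequality <grad f(v), u - v> >= 0, u in C.
g = grad_f(v)
rng = np.random.default_rng(2)
worst = min(np.dot(g, proj_C(2 * rng.normal(size=2)) - v) for _ in range(500))
print(v, worst)
```

The computed $v$ lands on the boundary of $C$, and $\langle\nabla f(v),u-v\rangle$ stays nonnegative over sampled $u\in C$, which is precisely the defining inequality of $\operatorname{VI}(C,\nabla f)$ used in Step 5.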

Step 6. $\lim_{n\to\infty}\|x_n-\bar{x}\|=0$.

Indeed, from (3.2) and (3.3) it follows that
\[
\|u_n-\bar{x}\|^2\le\|z_n-\bar{x}\|^2+2\lambda_n\alpha_n\|\bar{x}\|\,\|u_n-\bar{x}\|\le\|x_n-\bar{x}\|^2+2\lambda_n\alpha_n\|\bar{x}\|\,\|u_n-\bar{x}\|.
\]
Note that
\[
\begin{aligned}
\langle Qx_n-\bar{x},y_n-\bar{x}\rangle &= \langle Qx_n-\bar{x},x_n-\bar{x}\rangle+\langle Qx_n-\bar{x},y_n-x_n\rangle\\
&= \langle Qx_n-Q\bar{x},x_n-\bar{x}\rangle+\langle Q\bar{x}-\bar{x},x_n-\bar{x}\rangle+\langle Qx_n-\bar{x},y_n-x_n\rangle\\
&\le \rho\|x_n-\bar{x}\|^2+\langle Q\bar{x}-\bar{x},x_n-\bar{x}\rangle+\|Qx_n-\bar{x}\|\,\|y_n-x_n\|.
\end{aligned}
\]
Utilizing Lemmas 2.4 and 2.5, we obtain from (3.2) and the convexity of $\|\cdot\|^2$ that
\[
\begin{aligned}
\|x_{n+1}-\bar{x}\|^2 &= \|\beta_n(x_n-\bar{x})+\gamma_n(y_n-\bar{x})+\delta_n(Sy_n-\bar{x})\|^2\\
&\le \beta_n\|x_n-\bar{x}\|^2+(\gamma_n+\delta_n)\Bigl\|\frac{1}{\gamma_n+\delta_n}\bigl[\gamma_n(y_n-\bar{x})+\delta_n(Sy_n-\bar{x})\bigr]\Bigr\|^2\\
&\le \beta_n\|x_n-\bar{x}\|^2+(\gamma_n+\delta_n)\|y_n-\bar{x}\|^2\\
&\le \beta_n\|x_n-\bar{x}\|^2+(\gamma_n+\delta_n)\bigl[(1-\sigma_n)^2\|u_n-\bar{x}\|^2+2\sigma_n\langle Qx_n-\bar{x},y_n-\bar{x}\rangle\bigr]\\
&\le \beta_n\|x_n-\bar{x}\|^2+(\gamma_n+\delta_n)\bigl[(1-\sigma_n)\bigl(\|x_n-\bar{x}\|^2+2\lambda_n\alpha_n\|\bar{x}\|\,\|u_n-\bar{x}\|\bigr)+2\sigma_n\langle Qx_n-\bar{x},y_n-\bar{x}\rangle\bigr]\\
&\le \bigl(1-(\gamma_n+\delta_n)\sigma_n\bigr)\|x_n-\bar{x}\|^2+(\gamma_n+\delta_n)2\sigma_n\langle Qx_n-\bar{x},y_n-\bar{x}\rangle+2\lambda_n\alpha_n\|\bar{x}\|\,\|u_n-\bar{x}\|\\
&\le \bigl(1-(\gamma_n+\delta_n)\sigma_n\bigr)\|x_n-\bar{x}\|^2+(\gamma_n+\delta_n)2\sigma_n\bigl[\rho\|x_n-\bar{x}\|^2+\langle Q\bar{x}-\bar{x},x_n-\bar{x}\rangle\\
&\qquad+\|Qx_n-\bar{x}\|\,\|y_n-x_n\|\bigr]+2\lambda_n\alpha_n\|\bar{x}\|\,\|u_n-\bar{x}\|\\
&= \bigl[1-(1-2\rho)(\gamma_n+\delta_n)\sigma_n\bigr]\|x_n-\bar{x}\|^2\\
&\quad+(1-2\rho)(\gamma_n+\delta_n)\sigma_n\,\frac{2\bigl[\langle Q\bar{x}-\bar{x},x_n-\bar{x}\rangle+\|Qx_n-\bar{x}\|\,\|y_n-x_n\|\bigr]}{1-2\rho}+2\lambda_n\alpha_n\|\bar{x}\|\,\|u_n-\bar{x}\|.
\end{aligned}
\]
Note that $\liminf_{n\to\infty}(1-2\rho)(\gamma_n+\delta_n)>0$. It follows that $\sum_{n=0}^{\infty}(1-2\rho)(\gamma_n+\delta_n)\sigma_n=\infty$. It is clear that
\[
\limsup_{n\to\infty}\frac{2\bigl[\langle Q\bar{x}-\bar{x},x_n-\bar{x}\rangle+\|Qx_n-\bar{x}\|\,\|y_n-x_n\|\bigr]}{1-2\rho}\le0
\]

because $\limsup_{n\to\infty}\langle Q\bar{x}-\bar{x},x_n-\bar{x}\rangle\le0$ and $\lim_{n\to\infty}\|x_n-y_n\|=0$. In addition, note also that {