
Relaxed and hybrid viscosity methods for general system of variational inequalities with split feasibility problem constraint

Abstract

In this paper, we propose and analyze some relaxed and hybrid viscosity iterative algorithms for finding a common element of the solution set Ξ of a general system of variational inequalities, the solution set Γ of a split feasibility problem and the fixed point set Fix(S) of a strictly pseudocontractive mapping S in the setting of infinite-dimensional Hilbert spaces. We prove that the sequences generated by the proposed algorithms converge strongly to an element of Fix(S) ∩ Ξ ∩ Γ under mild conditions.

AMS Subject Classification: 49J40, 47H05, 47H19.

1 Introduction

Let H be a real Hilbert space with the inner product ⟨·,·⟩ and the norm ∥·∥. Let C be a nonempty closed convex subset of H. The (nearest point or metric) projection of H onto C is denoted by P_C. Let S : C → C be a self-mapping on C. We denote by Fix(S) the set of fixed points of S and by ℝ the set of all real numbers. For a given nonlinear operator A : C → H, we consider the variational inequality problem (VIP) of finding x* ∈ C such that

⟨Ax*, x − x*⟩ ≥ 0, ∀x ∈ C.
(1.1)

The solution set of VIP (1.1) is denoted by VI(C, A). Variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free, moving and equilibrium problems. It is now well known that variational inequalities are equivalent to fixed point problems, an observation whose origin can be traced back to Lions and Stampacchia [1]. This alternative formulation has been used to suggest and analyze a projection iterative method for solving variational inequalities under the conditions that the involved operator is strongly monotone and Lipschitz continuous. Related to variational inequalities is the problem of finding fixed points of nonexpansive mappings or strict pseudo-contractions, which is a topic of current interest in functional analysis. Several authors have considered a unified approach to solving variational inequality problems and fixed point problems; see, for example, [2–11] and the references therein.

For finding an element of Fix(S) ∩ VI(C, A) when C is closed and convex, S is nonexpansive and A is α-inverse strongly monotone, Takahashi and Toyoda [12] introduced the following Mann-type iterative algorithm:

x_{n+1} = α_nx_n + (1 − α_n)SP_C(x_n − λ_nAx_n), ∀n ≥ 0,
(1.2)

where P_C is the metric projection of H onto C, x_0 = x ∈ C, {α_n} is a sequence in (0, 1) and {λ_n} is a sequence in (0, 2α). They showed that if Fix(S) ∩ VI(C, A) ≠ ∅, then the sequence {x_n} converges weakly to some z ∈ Fix(S) ∩ VI(C, A). Nadezhkina and Takahashi [13] and Zeng and Yao [9] proposed extragradient methods, motivated by Korpelevich [14], for finding a common element of the fixed point set of a nonexpansive mapping and the solution set of a variational inequality problem. Further, these iterative methods were extended in [15] to develop a new iterative method for finding elements in Fix(S) ∩ VI(C, A).
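As a quick illustration, scheme (1.2) can be run on a toy one-dimensional instance. All data below are hypothetical choices for demonstration only: C = [0, 1], A(x) = x − 0.5 (which is 1-inverse strongly monotone) and the nonexpansive reflection S(x) = 1 − x, so that Fix(S) ∩ VI(C, A) = {1/2}.

```python
# Toy 1-D sketch of the Mann-type scheme (1.2) of Takahashi and Toyoda.
# Hypothetical data: C = [0, 1], A(x) = x - 0.5 (1-inverse strongly monotone),
# S(x) = 1 - x (nonexpansive, Fix(S) = {1/2}), so Fix(S) ∩ VI(C, A) = {1/2}.

def proj_C(x):                                # metric projection onto C = [0, 1]
    return min(1.0, max(0.0, x))

def mann_iterate(x0, n_iter=60, alpha=0.5, lam=0.5):
    x = x0
    for _ in range(n_iter):
        # x_{n+1} = alpha_n x_n + (1 - alpha_n) S P_C(x_n - lambda_n A x_n)
        t = proj_C(x - lam * (x - 0.5))       # gradient-projection step
        x = alpha * x + (1 - alpha) * (1.0 - t)   # apply S(t) = 1 - t
    return x

print(abs(mann_iterate(0.9) - 0.5) < 1e-8)    # iterates approach the common element 1/2
```

In one dimension the weak limit guaranteed by the theorem is of course a strong limit; the sketch only shows the mechanics of the scheme.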

Let B_1, B_2 : C → H be two mappings. Recently, Ceng, Wang and Yao [5] introduced and considered the problem of finding (x*, y*) ∈ C × C such that

⟨μ_1B_1y* + x* − y*, x − x*⟩ ≥ 0, ∀x ∈ C,
⟨μ_2B_2x* + y* − x*, x − y*⟩ ≥ 0, ∀x ∈ C,
(1.3)

which is called a general system of variational inequalities (GSVI), where μ_1 > 0 and μ_2 > 0 are two constants. The set of solutions of problem (1.3) is denoted by GSVI(C, B_1, B_2). In particular, if B_1 = B_2, then problem (1.3) reduces to the new system of variational inequalities (NSVI) introduced and studied by Verma [16]. Further, if additionally x* = y*, then the NSVI reduces to VIP (1.1).

Recently, Ceng, Wang and Yao [5] transformed problem (1.3) into a fixed point problem in the following way.

Lemma 1.1 (see [5])

For given x̄, ȳ ∈ C, (x̄, ȳ) is a solution of problem (1.3) if and only if x̄ is a fixed point of the mapping G : C → C defined by

G(x) = P_C[P_C(x − μ_2B_2x) − μ_1B_1P_C(x − μ_2B_2x)], ∀x ∈ C,

where ȳ = P_C(x̄ − μ_2B_2x̄).

In particular, if the mapping B_i : C → H is β_i-inverse strongly monotone for i = 1, 2, then the mapping G is nonexpansive provided μ_i ∈ (0, 2β_i) for i = 1, 2.
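A minimal numerical sketch of Lemma 1.1, on a hypothetical one-dimensional instance: C = [0, 1], B_1(x) = x − 0.3 and B_2(x) = x − 0.7 (both 1-inverse strongly monotone) and μ_1 = μ_2 = 0.8 ∈ (0, 2). Since G is nonexpansive (here even a contraction), Picard iteration locates a fixed point, which by the lemma yields a solution of GSVI (1.3).

```python
# Sketch of Lemma 1.1: iterate G(x) = P_C[P_C(x - mu2 B2 x) - mu1 B1 P_C(x - mu2 B2 x)].
# Hypothetical data: C = [0, 1], B1(x) = x - 0.3, B2(x) = x - 0.7, mu1 = mu2 = 0.8.

def proj_C(x):
    return min(1.0, max(0.0, x))

def G(x, mu1=0.8, mu2=0.8):
    y = proj_C(x - mu2 * (x - 0.7))           # y = P_C(x - mu2 B2 x)
    return proj_C(y - mu1 * (y - 0.3))        # G(x) = P_C(y - mu1 B1 y)

x = 0.0
for _ in range(60):                           # Picard iteration of the nonexpansive map G
    x = G(x)
print(abs(G(x) - x) < 1e-9)                   # x is (numerically) a fixed point of G, so
                                              # (x, P_C(x - mu2 B2 x)) solves GSVI (1.3)
```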

Utilizing Lemma 1.1, they introduced and studied a relaxed extragradient method for solving GSVI (1.3). Throughout this paper, the set of fixed points of the mapping G is denoted by Ξ. Based on the extragradient method and the viscosity approximation method, Yao et al. [8] proposed and analyzed a relaxed extragradient method for finding a common solution of GSVI (1.3) and a fixed point problem of a strictly pseudo-contractive mapping S : C → C.

Theorem YLK (see [[8], Theorem 3.2])

Let C be a nonempty bounded closed convex subset of a real Hilbert space H. Let the mapping B_i : C → H be β_i-inverse strongly monotone for i = 1, 2. Let S : C → C be a k-strictly pseudocontractive mapping such that Fix(S) ∩ Ξ ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For x_0 ∈ C given arbitrarily, let the sequences {x_n}, {y_n} and {z_n} be generated iteratively by

z_n = P_C(x_n − μ_2B_2x_n),
y_n = α_nQx_n + (1 − α_n)P_C(z_n − μ_1B_1z_n),
x_{n+1} = β_nx_n + γ_nP_C(z_n − μ_1B_1z_n) + δ_nSy_n, ∀n ≥ 0,
(1.4)

where μ_i ∈ (0, 2β_i) for i = 1, 2, and {α_n}, {β_n}, {γ_n}, {δ_n} are four sequences in [0, 1] such that

(i) β_n + γ_n + δ_n = 1 and (γ_n + δ_n)k ≤ γ_n < (1 − 2ρ)δ_n for all n ≥ 0;

(ii) lim_{n→∞} α_n = 0 and Σ_{n=0}^∞ α_n = ∞;

(iii) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1 and liminf_{n→∞} δ_n > 0;

(iv) lim_{n→∞} (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)) = 0.

Then the sequence {x_n} generated by (1.4) converges strongly to x̄ = P_{Fix(S)∩Ξ}Qx̄ and (x̄, ȳ) is a solution of GSVI (1.3), where ȳ = P_C(x̄ − μ_2B_2x̄).

Subsequently, Ceng, Guu and Yao [17] further presented and analyzed an iterative scheme for finding a common element of the solution set of VIP (1.1), the solution set of GSVI (1.3) and the fixed point set of a strictly pseudo-contractive mapping S : C → C.

Theorem CGY (see [[17], Theorem 3.1])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let A : C → H be α-inverse strongly monotone and B_i : C → H be β_i-inverse strongly monotone for i = 1, 2. Let S : C → C be a k-strictly pseudocontractive mapping such that Fix(S) ∩ Ξ ∩ VI(C, A) ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For x_0 ∈ C given arbitrarily, let the sequences {x_n}, {y_n} and {z_n} be generated iteratively by

z_n = P_C(x_n − λ_nAx_n),
y_n = α_nQx_n + (1 − α_n)P_C[P_C(z_n − μ_2B_2z_n) − μ_1B_1P_C(z_n − μ_2B_2z_n)],
x_{n+1} = β_nx_n + γ_ny_n + δ_nSy_n, ∀n ≥ 0,
(1.5)

where μ_i ∈ (0, 2β_i) for i = 1, 2, {λ_n} ⊂ (0, 2α] and {α_n}, {β_n}, {γ_n}, {δ_n} ⊂ [0, 1] such that

(i) β_n + γ_n + δ_n = 1 and (γ_n + δ_n)k ≤ γ_n for all n ≥ 0;

(ii) lim_{n→∞} α_n = 0 and Σ_{n=0}^∞ α_n = ∞;

(iii) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1 and liminf_{n→∞} δ_n > 0;

(iv) lim_{n→∞} (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)) = 0;

(v) 0 < liminf_{n→∞} λ_n ≤ limsup_{n→∞} λ_n < 2α and lim_{n→∞} |λ_{n+1} − λ_n| = 0.

Then the sequence {x_n} generated by (1.5) converges strongly to x̄ = P_{Fix(S)∩Ξ∩VI(C,A)}Qx̄ and (x̄, ȳ) is a solution of GSVI (1.3), where ȳ = P_C(x̄ − μ_2B_2x̄).

On the other hand, let C and Q be nonempty closed convex subsets of infinite-dimensional real Hilbert spaces H_1 and H_2, respectively. The split feasibility problem (SFP) is to find a point x* with the property

x* ∈ C and Ax* ∈ Q,
(1.6)

where A ∈ B(H_1, H_2) and B(H_1, H_2) denotes the family of all bounded linear operators from H_1 to H_2.

In 1994, the SFP was first introduced by Censor and Elfving [18], in finite-dimensional Hilbert spaces, for modeling inverse problems which arise from phase retrievals and in medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, e.g., [19] and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy; see, e.g., [20–22] and the references therein. In the recent past, a wide variety of iterative methods have been used in signal processing and image reconstruction and for solving the SFP; see, e.g., [19–29] and the references therein. A special case of the SFP is the following convex constrained linear inverse problem [30] of finding an element x such that

x ∈ C and Ax = b.
(1.7)

It has been extensively investigated in the literature using the projected Landweber iterative method [31]. Comparatively, the SFP has received much less attention so far due to the complexity resulting from the set Q. Therefore, whether various versions of the projected Landweber iterative method [31] can be extended to solve the SFP remains an interesting open topic. For example, it is not clear whether the dual approach to (1.7) of [32] can be extended to the SFP. The original algorithm given in [18] involves the computation of the inverse A^{−1} (assuming the inverse of A exists) and thus has not become popular. A seemingly more popular algorithm that solves the SFP is the CQ algorithm of Byrne [19, 24], which is found to be a gradient-projection method (GPM) in convex minimization. It is also a special case of the proximal forward-backward splitting method [33]. The CQ algorithm only involves the computation of the projections P_C and P_Q onto the sets C and Q, respectively, and is therefore implementable in the case where P_C and P_Q have closed-form expressions, for example, when C and Q are closed balls or half-spaces. However, it remains a challenge to implement the CQ algorithm in the case where the projections P_C and/or P_Q fail to have closed-form expressions, although theoretically we can prove the (weak) convergence of the algorithm.
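For concreteness, the CQ iteration x_{n+1} = P_C(x_n − γA*(Ax_n − P_QAx_n)), with step size γ ∈ (0, 2/∥A∥²), can be sketched on a small hypothetical instance where both projections have closed forms: C a closed ball and Q a box.

```python
# Sketch of Byrne's CQ algorithm on hypothetical data:
# C = closed unit ball in R^2, Q = box [1,2] x [-1,1], A = diag(1, 2),
# so ||A||^2 = 4 and any step gamma in (0, 2/||A||^2) = (0, 0.5) is admissible.
import numpy as np

A = np.diag([1.0, 2.0])

def P_C(x):                                   # projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def P_Q(y):                                   # projection onto the box
    return np.clip(y, [1.0, -1.0], [2.0, 1.0])

x = np.zeros(2)
gamma = 0.4
for _ in range(200):
    y = A @ x
    x = P_C(x - gamma * A.T @ (y - P_Q(y)))   # gradient-projection step

# x now (approximately) solves the SFP: x in C and A x in Q
print(np.linalg.norm(x) <= 1 + 1e-6, np.linalg.norm(A @ x - P_Q(A @ x)) < 1e-6)
```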

Very recently, Xu [23] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann’s algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He also established the strong convergence result, which shows that the minimum-norm solution can be obtained.

Furthermore, Korpelevich [14] introduced the so-called extragradient method for finding a solution of a saddle point problem. She proved that the sequences generated by the proposed iterative algorithm converge to a solution of the saddle point problem.

Throughout this paper, assume that the SFP is consistent, that is, the solution set Γ of the SFP is nonempty. Let f : H_1 → ℝ be a continuously differentiable function. The minimization problem

min_{x∈C} f(x) := (1/2)∥Ax − P_QAx∥²
(1.8)

is ill-posed. Therefore, Xu [23] considered the following Tikhonov regularization problem:

min_{x∈C} f_α(x) := (1/2)∥Ax − P_QAx∥² + (1/2)α∥x∥²,
(1.9)

where α > 0 is the regularization parameter. The regularized minimization problem (1.9) has a unique solution, which is denoted by x_α. The following results are easy to prove.

Proposition 1.1 (see [[34], Proposition 3.1])

Given x* ∈ H_1, the following statements are equivalent:

(i) x* solves the SFP;

(ii) x* solves the fixed point equation

P_C(I − λ∇f)x* = x*,

where λ > 0, ∇f = A*(I − P_Q)A and A* is the adjoint of A;

(iii) x* solves the variational inequality problem (VIP) of finding x* ∈ C such that

⟨∇f(x*), x − x*⟩ ≥ 0, ∀x ∈ C.
(1.10)

It is clear from Proposition 1.1 that

Γ = Fix(P_C(I − λ∇f)) = VI(C, ∇f)

for all λ > 0, where Fix(P_C(I − λ∇f)) and VI(C, ∇f) denote the set of fixed points of P_C(I − λ∇f) and the solution set of VIP (1.10), respectively.

Proposition 1.2 (see [34])

The following statements hold:

(i) the gradient

∇f_α = ∇f + αI = A*(I − P_Q)A + αI

is (α + ∥A∥²)-Lipschitz continuous and α-strongly monotone;

(ii) the mapping P_C(I − λ∇f_α) is a contraction with coefficient

√(1 − λ(2α − λ(∥A∥² + α)²)) ( ≤ √(1 − αλ) ≤ 1 − (1/2)αλ ),

where 0 < λ ≤ α/(∥A∥² + α)²;

(iii) if the SFP is consistent, then the strong limit lim_{α→0} x_α exists and is the minimum-norm solution of the SFP.
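Part (iii) can be observed numerically on a hypothetical one-dimensional instance: take C = [0, 3], A = I (so ∥A∥ = 1) and Q = [1, 2]. Then Γ = [1, 2], the minimum-norm solution of the SFP is 1, and here x_α works out to 1/(1 + α), which tends to 1 as α → 0. The regularized solution is computed by iterating the contraction P_C(I − λ∇f_α).

```python
# 1-D sketch of Proposition 1.2(iii) on hypothetical data:
# C = [0, 3], A = I, Q = [1, 2]; minimum-norm SFP solution = 1.

def clip(x, lo, hi):
    return min(hi, max(lo, x))

def x_alpha(alpha, lam=0.5, n_iter=400):
    x = 2.5
    for _ in range(n_iter):
        # grad f_alpha(x) = A*(I - P_Q)A x + alpha x  (with A = I)
        grad = (x - clip(x, 1.0, 2.0)) + alpha * x
        x = clip(x - lam * grad, 0.0, 3.0)      # iterate P_C(I - lam grad f_alpha)
    return x

# x_alpha = 1/(1 + alpha) here, and x_alpha -> 1 (the minimum-norm solution) as alpha -> 0
for a in (0.2, 0.1, 0.05, 0.01):
    print(round(x_alpha(a), 4))
```

The step size λ = 0.5 is chosen for speed of the sketch; it is larger than the conservative bound in part (ii) but still yields a convergent gradient-projection iteration for this smooth strongly monotone ∇f_α.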

Very recently, by combining the regularization method and the extragradient method due to Nadezhkina and Takahashi [13], Ceng, Ansari and Yao [34] proposed an extragradient algorithm with regularization and proved that the sequences generated by the proposed algorithm converge weakly to an element of Fix(S) ∩ Γ, where S : C → C is a nonexpansive mapping.

Theorem CAY (see [[34], Theorem 3.1])

Let S : C → C be a nonexpansive mapping such that Fix(S) ∩ Γ ≠ ∅. Let {x_n} and {y_n} be the sequences in C generated by the following extragradient algorithm:

x_0 = x ∈ C chosen arbitrarily,
y_n = P_C(x_n − λ_n∇f_{α_n}(x_n)),
x_{n+1} = β_nx_n + (1 − β_n)SP_C(x_n − λ_n∇f_{α_n}(y_n)), ∀n ≥ 0,
(1.11)

where Σ_{n=0}^∞ α_n < ∞, {λ_n} ⊂ [a, b] for some a, b ∈ (0, 1/∥A∥²) and {β_n} ⊂ [c, d] for some c, d ∈ (0, 1). Then both the sequences {x_n} and {y_n} converge weakly to an element x̂ ∈ Fix(S) ∩ Γ.

Motivated and inspired by the research going on in this area, we propose and analyze some relaxed and hybrid viscosity iterative algorithms for finding a common element of the solution set Ξ of GSVI (1.3), the solution set Γ of SFP (1.6) and the fixed point set Fix(S) of a strictly pseudocontractive mapping S : C → C. These iterative algorithms are based on the regularization method, the viscosity approximation method, the relaxed method in [8] and the hybrid method in [10]. Furthermore, it is proven that the sequences generated by the proposed algorithms converge strongly to an element of Fix(S) ∩ Ξ ∩ Γ under mild conditions.

Observe that both [[23], Theorem 5.7] and [[34], Theorem 3.1] are weak convergence results for solving the SFP, and that our problem of finding an element of Fix(S) ∩ Ξ ∩ Γ is more general than the corresponding problems in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively. Hence our strong convergence results are of both interest and value. It is worth emphasizing that, since our relaxed and hybrid viscosity iterative algorithms involve a ρ-contractive self-mapping Q, a k-strictly pseudo-contractive self-mapping S and several parameter sequences, they are more flexible and more general than the corresponding algorithms in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively. Furthermore, relaxed extragradient iterative scheme (1.4) and hybrid extragradient iterative scheme (1.5) are extended to develop our relaxed viscosity iterative algorithms and hybrid viscosity iterative algorithms, respectively. In our strong convergence results, the relaxed and hybrid viscosity iterative algorithms drop the requirement of boundedness for the domain in which the various mappings are defined; see, e.g., Yao et al. [[8], Theorem 3.2]. Therefore, our results modify, supplement, extend and improve [[23], Theorem 5.7], [[34], Theorem 3.1], [[17], Theorem 3.1] and [[8], Theorem 3.2] to a great extent.

2 Preliminaries

Let H be a real Hilbert space, whose inner product and norm are denoted by ⟨·,·⟩ and ∥·∥, respectively. Let K be a nonempty, closed and convex subset of H. Now, we present some known results and definitions which will be used in the sequel.

The metric (or nearest point) projection from H onto K is the mapping P_K : H → K which assigns to each point x ∈ H the unique point P_Kx ∈ K satisfying the property

∥x − P_Kx∥ = inf_{y∈K} ∥x − y∥ =: d(x, K).

The following properties of projections are useful and pertinent to our purpose.

Proposition 2.1 (see [35])

For given x ∈ H and z ∈ K:

(i) z = P_Kx ⇔ ⟨x − z, y − z⟩ ≤ 0, ∀y ∈ K;

(ii) z = P_Kx ⇔ ∥x − z∥² ≤ ∥x − y∥² − ∥y − z∥², ∀y ∈ K;

(iii) ⟨P_Kx − P_Ky, x − y⟩ ≥ ∥P_Kx − P_Ky∥², ∀y ∈ H, which hence implies that P_K is nonexpansive and monotone.
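The variational characterization (i) is easy to verify numerically. A hypothetical instance where the projection has a closed form: K = [0, 1]³ in ℝ³, whose projection is coordinatewise clipping.

```python
# Numerical check of Proposition 2.1(i): z = P_K x iff <x - z, y - z> <= 0 for all y in K.
# Hypothetical instance: K = box [0, 1]^3, projection = coordinatewise clipping.
import numpy as np

rng = np.random.default_rng(0)

def P_K(x):
    return np.clip(x, 0.0, 1.0)

x = rng.normal(size=3) * 3.0
z = P_K(x)
# test the characterization against many random points y in K
ok = all(np.dot(x - z, y - z) <= 1e-12 for y in rng.random((1000, 3)))
print(ok)
```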

Definition 2.1 A mapping T : H → H is said to be

(a) nonexpansive if

∥Tx − Ty∥ ≤ ∥x − y∥, ∀x, y ∈ H;

(b) firmly nonexpansive if 2T − I is nonexpansive, or equivalently,

⟨x − y, Tx − Ty⟩ ≥ ∥Tx − Ty∥², ∀x, y ∈ H;

alternatively, T is firmly nonexpansive if and only if T can be expressed as

T = (1/2)(I + S),

where S : H → H is nonexpansive; projections are firmly nonexpansive.

Definition 2.2 Let T be a nonlinear operator with domain D(T) ⊆ H and range R(T) ⊆ H.

(a) T is said to be monotone if

⟨x − y, Tx − Ty⟩ ≥ 0, ∀x, y ∈ D(T).

(b) Given a number β > 0, T is said to be β-strongly monotone if

⟨x − y, Tx − Ty⟩ ≥ β∥x − y∥², ∀x, y ∈ D(T).

(c) Given a number ν > 0, T is said to be ν-inverse strongly monotone (ν-ism) if

⟨x − y, Tx − Ty⟩ ≥ ν∥Tx − Ty∥², ∀x, y ∈ D(T).

It can be easily seen that if S is nonexpansive, then I − S is monotone. It is also easy to see that the projection P_K is 1-ism.

Inverse strongly monotone (also referred to as co-coercive) operators have been applied widely to solving practical problems in various fields, for instance, in traffic assignment problems; see, e.g., [36, 37].

Definition 2.3 A mapping T : H → H is said to be an averaged mapping if it can be written as an average of the identity I and a nonexpansive mapping, that is,

T = (1 − α)I + αS,

where α ∈ (0, 1) and S : H → H is nonexpansive. More precisely, when the last equality holds, we say that T is α-averaged. Thus, firmly nonexpansive mappings (in particular, projections) are (1/2)-averaged maps.

Proposition 2.2 (see [24])

Let T : H → H be a given mapping.

(i) T is nonexpansive if and only if the complement I − T is (1/2)-ism.

(ii) If T is ν-ism, then for γ > 0, γT is (ν/γ)-ism.

(iii) T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2. Indeed, for α ∈ (0, 1), T is α-averaged if and only if I − T is (1/(2α))-ism.

Proposition 2.3 (see [24, 38])

Let S, T, V : H → H be given operators.

(i) If T = (1 − α)S + αV for some α ∈ (0, 1), and if S is averaged and V is nonexpansive, then T is averaged.

(ii) T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.

(iii) If T = (1 − α)S + αV for some α ∈ (0, 1), and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.

(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1T_2⋯T_N. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1, α_2 ∈ (0, 1), then the composite T_1T_2 is α-averaged, where α = α_1 + α_2 − α_1α_2.

(v) If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then

⋂_{i=1}^N Fix(T_i) = Fix(T_1⋯T_N).

The notation Fix(T) denotes the set of all fixed points of the mapping T, that is, Fix(T)={xH:Tx=x}.
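Parts (iv) and (v) of Proposition 2.3 are exactly what makes alternating projections work: projections are (1/2)-averaged, so the composite of two projections is averaged, and its fixed point set is the intersection of the two sets whenever that intersection is nonempty. A sketch on hypothetical sets in ℝ²: K_1 the unit ball and K_2 the half-space {x : x_1 ≥ 0.5}.

```python
# Illustration of Proposition 2.3(iv)-(v): Picard iteration of the averaged
# composite P_{K1} P_{K2} converges to a point of K1 ∩ K2 (nonempty here).
# Hypothetical sets: K1 = unit ball in R^2, K2 = half-space {x : x_1 >= 0.5}.
import numpy as np

def P1(x):                                    # projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

def P2(x):                                    # projection onto {x_1 >= 0.5}
    return np.array([max(x[0], 0.5), x[1]])

x = np.array([-2.0, 2.0])
for _ in range(500):
    x = P1(P2(x))                             # iterate the averaged composite
print(np.linalg.norm(x) <= 1 + 1e-8 and x[0] >= 0.5 - 1e-8)
```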

It is clear that, in a real Hilbert space H, S : C → C is k-strictly pseudo-contractive if and only if the following inequality holds:

⟨Sx − Sy, x − y⟩ ≤ ∥x − y∥² − ((1 − k)/2)∥(I − S)x − (I − S)y∥², ∀x, y ∈ C.
(2.1)

This immediately implies that if S is a k-strictly pseudo-contractive mapping, then I − S is ((1 − k)/2)-inverse strongly monotone; for further details, we refer to [39] and the references therein. It is well known that the class of strict pseudo-contractions strictly includes the class of nonexpansive mappings.
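The equivalence can be checked numerically on an assumed toy mapping. Recall that S is k-strictly pseudo-contractive when ∥Sx − Sy∥² ≤ ∥x − y∥² + k∥(I − S)x − (I − S)y∥²; for the hypothetical example S(x) = −2x on ℝⁿ this holds with k = 8/9, since ∥Sx − Sy∥² = 9∥x − y∥² and I − S = 3I.

```python
# Numerical check of the equivalence around (2.1) for the assumed toy mapping
# S(x) = -2x, which is k-strictly pseudo-contractive with k = 8/9.
import numpy as np

rng = np.random.default_rng(1)
k = 8.0 / 9.0
S = lambda x: -2.0 * x

for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    d, e = x - y, (x - S(x)) - (y - S(y))
    # defining inequality of a k-strict pseudo-contraction
    assert np.dot(S(x) - S(y), S(x) - S(y)) <= np.dot(d, d) + k * np.dot(e, e) + 1e-9
    # its inner-product form (2.1)
    assert np.dot(S(x) - S(y), d) <= np.dot(d, d) - (1 - k) / 2 * np.dot(e, e) + 1e-9
print("ok")
```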

In order to prove the main results of this paper, the following lemmas will be required.

Lemma 2.1 (see [40])

Let {x_n} and {y_n} be bounded sequences in a Banach space X and let {β_n} be a sequence in [0, 1] with 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1. Suppose x_{n+1} = (1 − β_n)y_n + β_nx_n for all integers n ≥ 0 and limsup_{n→∞}(∥y_{n+1} − y_n∥ − ∥x_{n+1} − x_n∥) ≤ 0. Then lim_{n→∞} ∥y_n − x_n∥ = 0.

Lemma 2.2 (see [[39], Proposition 2.1])

Let C be a nonempty closed convex subset of a real Hilbert space H and let S : C → C be a mapping.

(i) If S is a k-strict pseudo-contractive mapping, then S satisfies the Lipschitz condition

∥Sx − Sy∥ ≤ ((1 + k)/(1 − k))∥x − y∥, ∀x, y ∈ C.

(ii) If S is a k-strict pseudo-contractive mapping, then the mapping I − S is semiclosed at 0; that is, if {x_n} is a sequence in C such that x_n → x̃ weakly and (I − S)x_n → 0 strongly, then (I − S)x̃ = 0.

(iii) If S is a k-(quasi-)strict pseudo-contraction, then the fixed point set Fix(S) of S is closed and convex, so that the projection P_{Fix(S)} is well defined.

The following lemma plays a key role in proving strong convergence of the sequences generated by our algorithms.

Lemma 2.3 (see [35])

Let {a_n} be a sequence of nonnegative real numbers satisfying the property

a_{n+1} ≤ (1 − s_n)a_n + s_nt_n + r_n, ∀n ≥ 0,

where {s_n} ⊂ (0, 1] and {t_n} are such that

(i) Σ_{n=0}^∞ s_n = ∞;

(ii) either limsup_{n→∞} t_n ≤ 0 or Σ_{n=0}^∞ |s_nt_n| < ∞;

(iii) Σ_{n=0}^∞ r_n < ∞, where r_n ≥ 0 for all n ≥ 0.

Then lim_{n→∞} a_n = 0.

Lemma 2.4 (see [8])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let S : C → C be a k-strictly pseudo-contractive mapping. Let γ and δ be two nonnegative real numbers such that (γ + δ)k ≤ γ. Then

∥γ(x − y) + δ(Sx − Sy)∥ ≤ (γ + δ)∥x − y∥, ∀x, y ∈ C.
(2.2)

The following lemma is an immediate consequence of the properties of the inner product.

Lemma 2.5 In a real Hilbert space H, the following inequality holds:

∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩, ∀x, y ∈ H.

Let K be a nonempty closed convex subset of a real Hilbert space H and let F : K → H be a monotone mapping. The variational inequality problem (VIP) is to find x ∈ K such that

⟨Fx, y − x⟩ ≥ 0, ∀y ∈ K.

The solution set of the VIP is denoted by VI(K,F). It is well known that

x ∈ VI(K, F) ⇔ x = P_K(x − λFx) for some λ > 0.

A set-valued mapping T : H → 2^H is called monotone if for all x, y ∈ H, f ∈ Tx and g ∈ Ty imply ⟨x − y, f − g⟩ ≥ 0. A monotone set-valued mapping T : H → 2^H is called maximal if its graph Gph(T) is not properly contained in the graph of any other monotone set-valued mapping. It is known that a monotone set-valued mapping T : H → 2^H is maximal if and only if, for (x, f) ∈ H × H, ⟨x − y, f − g⟩ ≥ 0 for every (y, g) ∈ Gph(T) implies f ∈ Tx. Let F : K → H be a monotone and Lipschitz continuous mapping and let N_Kv be the normal cone to K at v ∈ K, that is,

N_Kv = {w ∈ H : ⟨v − u, w⟩ ≥ 0, ∀u ∈ K}.

Define

Tv = Fv + N_Kv if v ∈ K, and Tv = ∅ if v ∉ K.

It is known that in this case the mapping T is maximal monotone, and 0 ∈ Tv if and only if v ∈ VI(K, F); for further details, we refer to [41] and the references therein.

3 Relaxed viscosity methods and their convergence criteria

In this section, we propose and analyze the following relaxed viscosity iterative algorithms for finding a common element of the solution set of GSVI (1.3), the solution set of SFP (1.6) and the fixed point set of a strictly pseudo-contractive mapping S : C → C.

Algorithm 3.1 Let μ_i ∈ (0, 2β_i) for i = 1, 2, {α_n} ⊂ (0, ∞), {λ_n} ⊂ (0, 2/∥A∥²) and {σ_n}, {β_n}, {γ_n}, {δ_n} ⊂ [0, 1] such that β_n + γ_n + δ_n = 1 for all n ≥ 0. For x_0 ∈ C given arbitrarily, let {x_n}, {y_n}, {z_n} be the sequences generated by the Mann-type viscosity iterative scheme with regularization

z_n = P_C[P_C(x_n − μ_2B_2x_n) − μ_1B_1P_C(x_n − μ_2B_2x_n)],
y_n = σ_nQx_n + (1 − σ_n)P_C(z_n − λ_n∇f_{α_n}(z_n)),
x_{n+1} = β_nx_n + γ_ny_n + δ_nSy_n, ∀n ≥ 0.

Algorithm 3.2 Let μ_i ∈ (0, 2β_i) for i = 1, 2, {α_n} ⊂ (0, ∞), {λ_n} ⊂ (0, 2/∥A∥²) and {σ_n}, {β_n}, {γ_n}, {δ_n} ⊂ [0, 1] such that β_n + γ_n + δ_n = 1 for all n ≥ 0. For x_0 ∈ C given arbitrarily, let {x_n}, {y_n}, {z_n} be the sequences generated by the Mann-type viscosity iterative scheme with regularization

z_n = P_C(x_n − λ_n∇f_{α_n}(x_n)),
y_n = σ_nQx_n + (1 − σ_n)P_C[P_C(z_n − μ_2B_2z_n) − μ_1B_1P_C(z_n − μ_2B_2z_n)],
x_{n+1} = β_nx_n + γ_ny_n + δ_nSy_n, ∀n ≥ 0.
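Before stating the convergence results, a minimal one-dimensional numerical sketch of Algorithm 3.1 may help fix ideas. All data are hypothetical choices built so that the common solution set is a singleton: C = [0, 3]; SFP data A = I and Q = [1, 2] (so Γ = [1, 2]); B_1 = B_2 = x − 1.5 (1-ism, giving Ξ = {1.5}); S(x) = 3 − x (nonexpansive, so k = 0, Fix(S) = {1.5}); and contraction Qmap(x) = 0.25x + 1 (ρ = 1/4 < 1/2). Hence Fix(S) ∩ Ξ ∩ Γ = {1.5}, and the parameter choices below satisfy conditions of the type used in Theorem 3.1 (α_n summable, σ_n → 0 with divergent sum, λ_n ∈ (0, 2/∥A∥²), β_n + γ_n + δ_n = 1 with (γ_n + δ_n)k ≤ γ_n).

```python
# Hypothetical 1-D sketch of Algorithm 3.1 (all problem data are toy choices):
# C = [0, 3], A = I, Q = [1, 2], B1 = B2 = x - 1.5, S(x) = 3 - x, Qmap(x) = 0.25x + 1.
# Parameters: alpha_n = 1/(n+1)^2, sigma_n = 1/(n+1), lambda_n = 0.5,
# beta_n = 0.5, gamma_n = 0.2, delta_n = 0.3.

def P_C(x):
    return min(3.0, max(0.0, x))

def B(x):                                     # B1 = B2
    return x - 1.5

def grad_f_alpha(z, alpha):                   # A*(I - P_Q)A z + alpha z, with A = I
    return (z - min(2.0, max(1.0, z))) + alpha * z

x, mu1, mu2, lam = 0.0, 0.8, 0.8, 0.5
for n in range(2000):
    alpha, sigma = 1.0 / (n + 1) ** 2, 1.0 / (n + 1)
    t = P_C(x - mu2 * B(x))                   # inner projection step
    z = P_C(t - mu1 * B(t))                   # z_n = G(x_n)
    y = sigma * (0.25 * x + 1.0) + (1 - sigma) * P_C(z - lam * grad_f_alpha(z, alpha))
    x = 0.5 * x + 0.2 * y + 0.3 * (3.0 - y)   # beta x + gamma y + delta S y
print(abs(x - 1.5) < 1e-2)                    # strong convergence to the common element
```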

Next, we give the strong convergence criteria of the sequences generated by Algorithm 3.1.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H_1. Let A ∈ B(H_1, H_2) and let B_i : C → H_1 be β_i-inverse strongly monotone for i = 1, 2. Let S : C → C be a k-strictly pseudocontractive mapping such that Fix(S) ∩ Ξ ∩ Γ ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For x_0 ∈ C given arbitrarily, let {x_n}, {y_n}, {z_n} be the sequences generated by Algorithm 3.1, where μ_i ∈ (0, 2β_i) for i = 1, 2, {α_n} ⊂ (0, ∞), {λ_n} ⊂ (0, 2/∥A∥²) and {σ_n}, {β_n}, {γ_n}, {δ_n} ⊂ [0, 1] such that

(i) Σ_{n=0}^∞ α_n < ∞;

(ii) β_n + γ_n + δ_n = 1 and (γ_n + δ_n)k ≤ γ_n for all n ≥ 0;

(iii) lim_{n→∞} σ_n = 0 and Σ_{n=0}^∞ σ_n = ∞;

(iv) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1 and liminf_{n→∞} δ_n > 0;

(v) lim_{n→∞} (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)) = 0;

(vi) 0 < liminf_{n→∞} λ_n ≤ limsup_{n→∞} λ_n < 2/∥A∥² and lim_{n→∞} |λ_{n+1} − λ_n| = 0.

Then the sequences {x_n}, {y_n}, {z_n} converge strongly to the same point x̄ = P_{Fix(S)∩Ξ∩Γ}Qx̄ if and only if lim_{n→∞} ∥y_n − z_n∥ = 0. Furthermore, (x̄, ȳ) is a solution of GSVI (1.3), where ȳ = P_C(x̄ − μ_2B_2x̄).

Proof First, taking into account 0 < liminf_{n→∞} λ_n ≤ limsup_{n→∞} λ_n < 2/∥A∥², without loss of generality we may assume that {λ_n} ⊂ [a, b] for some a, b ∈ (0, 2/∥A∥²).

Now, let us show that P_C(I − λ∇f_α) is ζ-averaged for each λ ∈ (0, 2/(α + ∥A∥²)), where

ζ = (2 + λ(α + ∥A∥²))/4.

Indeed, it is easy to see that ∇f = A*(I − P_Q)A is (1/∥A∥²)-ism, that is,

⟨∇f(x) − ∇f(y), x − y⟩ ≥ (1/∥A∥²)∥∇f(x) − ∇f(y)∥².

Observe that

(α + ∥A∥²)⟨∇f_α(x) − ∇f_α(y), x − y⟩
= (α + ∥A∥²)[α∥x − y∥² + ⟨∇f(x) − ∇f(y), x − y⟩]
= α²∥x − y∥² + α⟨∇f(x) − ∇f(y), x − y⟩ + α∥A∥²∥x − y∥² + ∥A∥²⟨∇f(x) − ∇f(y), x − y⟩
≥ α²∥x − y∥² + 2α⟨∇f(x) − ∇f(y), x − y⟩ + ∥∇f(x) − ∇f(y)∥²
= ∥α(x − y) + ∇f(x) − ∇f(y)∥²
= ∥∇f_α(x) − ∇f_α(y)∥².

Hence, it follows that ∇f_α = αI + A*(I − P_Q)A is (1/(α + ∥A∥²))-ism. Thus, λ∇f_α is (1/(λ(α + ∥A∥²)))-ism according to Proposition 2.2(ii). By Proposition 2.2(iii), the complement I − λ∇f_α is (λ(α + ∥A∥²)/2)-averaged. Therefore, noting that P_C is (1/2)-averaged and utilizing Proposition 2.3(iv), we know that for each λ ∈ (0, 2/(α + ∥A∥²)), P_C(I − λ∇f_α) is ζ-averaged with

ζ = 1/2 + λ(α + ∥A∥²)/2 − (1/2)·(λ(α + ∥A∥²)/2) = (2 + λ(α + ∥A∥²))/4 ∈ (0, 1).

This shows that P_C(I − λ∇f_α) is nonexpansive. Furthermore, for {λ_n} ⊂ [a, b] with a, b ∈ (0, 2/∥A∥²), we have

a ≤ inf_{n≥0} λ_n ≤ sup_{n≥0} λ_n ≤ b < 2/∥A∥² = lim_{n→∞} 2/(α_n + ∥A∥²).

Without loss of generality, we may assume that

a ≤ inf_{n≥0} λ_n ≤ sup_{n≥0} λ_n ≤ b < 2/(α_n + ∥A∥²), ∀n ≥ 0.

Consequently, it follows that for each integer n ≥ 0, P_C(I − λ_n∇f_{α_n}) is ζ_n-averaged with

ζ_n = 1/2 + λ_n(α_n + ∥A∥²)/2 − (1/2)·(λ_n(α_n + ∥A∥²)/2) = (2 + λ_n(α_n + ∥A∥²))/4 ∈ (0, 1).

This immediately implies that P_C(I − λ_n∇f_{α_n}) is nonexpansive for all n ≥ 0.

Next, we divide the remainder of the proof into several steps.

Step 1. { x n } is bounded.

Indeed, take p ∈ Fix(S) ∩ Ξ ∩ Γ arbitrarily. Then Sp = p, P_C(I − λ∇f)p = p for λ ∈ (0, 2/∥A∥²), and

p = P_C[P_C(p − μ_2B_2p) − μ_1B_1P_C(p − μ_2B_2p)].

For simplicity, we write

q = P_C(p − μ_2B_2p), x̃_n = P_C(x_n − μ_2B_2x_n) and u_n = P_C(z_n − λ_n∇f_{α_n}(z_n))

for each n ≥ 0. Then y_n = σ_nQx_n + (1 − σ_n)u_n for each n ≥ 0. From Algorithm 3.1 it follows that

∥u_n − p∥ = ∥P_C(I − λ_n∇f_{α_n})z_n − P_C(I − λ_n∇f)p∥
≤ ∥P_C(I − λ_n∇f_{α_n})z_n − P_C(I − λ_n∇f_{α_n})p∥ + ∥P_C(I − λ_n∇f_{α_n})p − P_C(I − λ_n∇f)p∥
≤ ∥z_n − p∥ + ∥(I − λ_n∇f_{α_n})p − (I − λ_n∇f)p∥
= ∥z_n − p∥ + λ_nα_n∥p∥.
(3.1)

Utilizing Lemma 2.5, we also have

∥u_n − p∥² = ∥P_C(I − λ_n∇f_{α_n})z_n − P_C(I − λ_n∇f)p∥²
= ∥P_C(I − λ_n∇f_{α_n})z_n − P_C(I − λ_n∇f_{α_n})p + P_C(I − λ_n∇f_{α_n})p − P_C(I − λ_n∇f)p∥²
≤ ∥P_C(I − λ_n∇f_{α_n})z_n − P_C(I − λ_n∇f_{α_n})p∥² + 2⟨P_C(I − λ_n∇f_{α_n})p − P_C(I − λ_n∇f)p, u_n − p⟩
≤ ∥z_n − p∥² + 2∥P_C(I − λ_n∇f_{α_n})p − P_C(I − λ_n∇f)p∥∥u_n − p∥
≤ ∥z_n − p∥² + 2∥(I − λ_n∇f_{α_n})p − (I − λ_n∇f)p∥∥u_n − p∥
= ∥z_n − p∥² + 2λ_nα_n∥p∥∥u_n − p∥.
(3.2)

Since B_i : C → H_1 is β_i-inverse strongly monotone for i = 1, 2 and 0 < μ_i < 2β_i for i = 1, 2, we know that for all n ≥ 0,

∥z_n − p∥² = ∥P_C[P_C(x_n − μ_2B_2x_n) − μ_1B_1P_C(x_n − μ_2B_2x_n)] − p∥²
= ∥P_C[P_C(x_n − μ_2B_2x_n) − μ_1B_1P_C(x_n − μ_2B_2x_n)] − P_C[P_C(p − μ_2B_2p) − μ_1B_1P_C(p − μ_2B_2p)]∥²
≤ ∥[P_C(x_n − μ_2B_2x_n) − μ_1B_1P_C(x_n − μ_2B_2x_n)] − [P_C(p − μ_2B_2p) − μ_1B_1P_C(p − μ_2B_2p)]∥²
= ∥[P_C(x_n − μ_2B_2x_n) − P_C(p − μ_2B_2p)] − μ_1[B_1P_C(x_n − μ_2B_2x_n) − B_1P_C(p − μ_2B_2p)]∥²
≤ ∥P_C(x_n − μ_2B_2x_n) − P_C(p − μ_2B_2p)∥² − μ_1(2β_1 − μ_1)∥B_1P_C(x_n − μ_2B_2x_n) − B_1P_C(p − μ_2B_2p)∥²
≤ ∥(x_n − μ_2B_2x_n) − (p − μ_2B_2p)∥² − μ_1(2β_1 − μ_1)∥B_1x̃_n − B_1q∥²
= ∥(x_n − p) − μ_2(B_2x_n − B_2p)∥² − μ_1(2β_1 − μ_1)∥B_1x̃_n − B_1q∥²
≤ ∥x_n − p∥² − μ_2(2β_2 − μ_2)∥B_2x_n − B_2p∥² − μ_1(2β_1 − μ_1)∥B_1x̃_n − B_1q∥²
≤ ∥x_n − p∥².
(3.3)

Hence it follows from (3.1) and (3.3) that

∥y_n − p∥ = ∥σ_n(Qx_n − p) + (1 − σ_n)(u_n − p)∥
≤ σ_n∥Qx_n − p∥ + (1 − σ_n)∥u_n − p∥
≤ σ_n(∥Qx_n − Qp∥ + ∥Qp − p∥) + (1 − σ_n)(∥z_n − p∥ + λ_nα_n∥p∥)
≤ σ_n(ρ∥x_n − p∥ + ∥Qp − p∥) + (1 − σ_n)(∥x_n − p∥ + λ_nα_n∥p∥)
≤ (1 − (1 − ρ)σ_n)∥x_n − p∥ + σ_n∥Qp − p∥ + λ_nα_n∥p∥
= (1 − (1 − ρ)σ_n)∥x_n − p∥ + (1 − ρ)σ_n·(∥Qp − p∥/(1 − ρ)) + λ_nα_n∥p∥
≤ max{∥x_n − p∥, ∥Qp − p∥/(1 − ρ)} + λ_nα_n∥p∥.
(3.4)

Since (γ_n + δ_n)k ≤ γ_n for all n ≥ 0, utilizing Lemma 2.4, we obtain from (3.4)

∥x_{n+1} − p∥ = ∥β_n(x_n − p) + γ_n(y_n − p) + δ_n(Sy_n − p)∥
≤ β_n∥x_n − p∥ + ∥γ_n(y_n − p) + δ_n(Sy_n − p)∥
≤ β_n∥x_n − p∥ + (γ_n + δ_n)∥y_n − p∥
≤ β_n∥x_n − p∥ + (γ_n + δ_n)[max{∥x_n − p∥, ∥Qp − p∥/(1 − ρ)} + λ_nα_n∥p∥]
≤ β_n∥x_n − p∥ + (γ_n + δ_n)max{∥x_n − p∥, ∥Qp − p∥/(1 − ρ)} + λ_nα_n∥p∥
≤ max{∥x_n − p∥, ∥Qp − p∥/(1 − ρ)} + b∥p∥α_n.
(3.5)

Now, we claim that

∥x_{n+1} − p∥ ≤ max{∥x_0 − p∥, ∥Qp − p∥/(1 − ρ)} + b∥p∥ Σ_{j=0}^n α_j.
(3.6)

As a matter of fact, if n=0, then it is clear that (3.6) is valid, that is,

∥x_1 − p∥ ≤ max{∥x_0 − p∥, ∥Qp − p∥/(1 − ρ)} + b∥p∥ Σ_{j=0}^0 α_j.

Assume that (3.6) holds for n − 1, where n ≥ 1; that is,

∥x_n − p∥ ≤ max{∥x_0 − p∥, ∥Qp − p∥/(1 − ρ)} + b∥p∥ Σ_{j=0}^{n−1} α_j.
(3.7)

Then we conclude from (3.5) and (3.7) that

∥x_{n+1} − p∥ ≤ max{∥x_n − p∥, ∥Qp − p∥/(1 − ρ)} + b∥p∥α_n
≤ max{ max{∥x_0 − p∥, ∥Qp − p∥/(1 − ρ)} + b∥p∥ Σ_{j=0}^{n−1} α_j, ∥Qp − p∥/(1 − ρ) } + b∥p∥α_n
≤ max{∥x_0 − p∥, ∥Qp − p∥/(1 − ρ)} + b∥p∥ Σ_{j=0}^{n−1} α_j + b∥p∥α_n
= max{∥x_0 − p∥, ∥Qp − p∥/(1 − ρ)} + b∥p∥ Σ_{j=0}^n α_j.

By induction, we conclude that (3.6) is valid. Hence, {x_n} is bounded. Since P_C, ∇f_{α_n}, B_1 and B_2 are Lipschitz continuous, it is easy to see that {u_n}, {z_n}, {y_n} and {x̃_n} are bounded, where x̃_n = P_C(x_n − μ_2B_2x_n) for all n ≥ 0.

Step 2. lim_{n→∞} ∥x_{n+1} − x_n∥ = 0.

Indeed, define x_{n+1} = β_nx_n + (1 − β_n)w_n for all n ≥ 0. It follows that

w_{n+1} − w_n = (x_{n+2} − β_{n+1}x_{n+1})/(1 − β_{n+1}) − (x_{n+1} − β_nx_n)/(1 − β_n)
= (γ_{n+1}y_{n+1} + δ_{n+1}Sy_{n+1})/(1 − β_{n+1}) − (γ_ny_n + δ_nSy_n)/(1 − β_n)
= [γ_{n+1}(y_{n+1} − y_n) + δ_{n+1}(Sy_{n+1} − Sy_n)]/(1 − β_{n+1}) + (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n))y_n + (δ_{n+1}/(1 − β_{n+1}) − δ_n/(1 − β_n))Sy_n.
(3.8)

Since (γ_n + δ_n)k ≤ γ_n for all n ≥ 0, utilizing Lemma 2.4, we have

∥γ_{n+1}(y_{n+1} − y_n) + δ_{n+1}(Sy_{n+1} − Sy_n)∥ ≤ (γ_{n+1} + δ_{n+1})∥y_{n+1} − y_n∥.
(3.9)

Next, we estimate ∥y_{n+1} − y_n∥. Observe that

∥u_{n+1} − u_n∥ = ∥P_C(z_{n+1} − λ_{n+1}∇f_{α_{n+1}}(z_{n+1})) − P_C(z_n − λ_n∇f_{α_n}(z_n))∥
≤ ∥P_C(I − λ_{n+1}∇f_{α_{n+1}})z_{n+1} − P_C(I − λ_{n+1}∇f_{α_{n+1}})z_n∥ + ∥P_C(I − λ_{n+1}∇f_{α_{n+1}})z_n − P_C(I − λ_n∇f_{α_n})z_n∥
≤ ∥z_{n+1} − z_n∥ + ∥(I − λ_{n+1}∇f_{α_{n+1}})z_n − (I − λ_n∇f_{α_n})z_n∥
= ∥z_{n+1} − z_n∥ + ∥λ_{n+1}(α_{n+1}I + ∇f)z_n − λ_n(α_nI + ∇f)z_n∥
≤ ∥z_{n+1} − z_n∥ + |λ_{n+1} − λ_n|∥∇f(z_n)∥ + |λ_{n+1}α_{n+1} − λ_nα_n|∥z_n∥
(3.10)

and

∥z_{n+1} − z_n∥² = ∥P_C[P_C(x_{n+1} − μ_2B_2x_{n+1}) − μ_1B_1P_C(x_{n+1} − μ_2B_2x_{n+1})] − P_C[P_C(x_n − μ_2B_2x_n) − μ_1B_1P_C(x_n − μ_2B_2x_n)]∥²
≤ ∥[P_C(x_{n+1} − μ_2B_2x_{n+1}) − μ_1B_1P_C(x_{n+1} − μ_2B_2x_{n+1})] − [P_C(x_n − μ_2B_2x_n) − μ_1B_1P_C(x_n − μ_2B_2x_n)]∥²
= ∥[P_C(x_{n+1} − μ_2B_2x_{n+1}) − P_C(x_n − μ_2B_2x_n)] − μ_1[B_1P_C(x_{n+1} − μ_2B_2x_{n+1}) − B_1P_C(x_n − μ_2B_2x_n)]∥²
≤ ∥P_C(x_{n+1} − μ_2B_2x_{n+1}) − P_C(x_n − μ_2B_2x_n)∥² − μ_1(2β_1 − μ_1)∥B_1P_C(x_{n+1} − μ_2B_2x_{n+1}) − B_1P_C(x_n − μ_2B_2x_n)∥²
≤ ∥P_C(x_{n+1} − μ_2B_2x_{n+1}) − P_C(x_n − μ_2B_2x_n)∥²
≤ ∥(x_{n+1} − μ_2B_2x_{n+1}) − (x_n − μ_2B_2x_n)∥²
= ∥(x_{n+1} − x_n) − μ_2(B_2x_{n+1} − B_2x_n)∥²
≤ ∥x_{n+1} − x_n∥² − μ_2(2β_2 − μ_2)∥B_2x_{n+1} − B_2x_n∥²
≤ ∥x_{n+1} − x_n∥².
(3.11)

Combining (3.10) with (3.11), we get

∥u_{n+1} − u_n∥ ≤ ∥z_{n+1} − z_n∥ + |λ_{n+1} − λ_n|∥∇f(z_n)∥ + |λ_{n+1}α_{n+1} − λ_nα_n|∥z_n∥
≤ ∥x_{n+1} − x_n∥ + |λ_{n+1} − λ_n|∥∇f(z_n)∥ + |λ_{n+1}α_{n+1} − λ_nα_n|∥z_n∥,
(3.12)

which hence implies that

∥y_{n+1} − y_n∥ = ∥u_{n+1} + σ_{n+1}(Qx_{n+1} − u_{n+1}) − u_n − σ_n(Qx_n − u_n)∥
≤ ∥u_{n+1} − u_n∥ + σ_{n+1}∥Qx_{n+1} − u_{n+1}∥ + σ_n∥Qx_n − u_n∥
≤ ∥x_{n+1} − x_n∥ + |λ_{n+1} − λ_n|∥∇f(z_n)∥ + |λ_{n+1}α_{n+1} − λ_nα_n|∥z_n∥ + σ_{n+1}∥Qx_{n+1} − u_{n+1}∥ + σ_n∥Qx_n − u_n∥.
(3.13)

Hence it follows from (3.8), (3.9) and (3.13) that

w n + 1 w n γ n + 1 ( y n + 1 y n ) + δ n + 1 ( S y n + 1 S y n ) 1 β n + 1 + | γ n + 1 1 β n + 1 γ n 1 β n | y n + | δ n + 1 1 β n + 1 δ n 1 β n | S y n γ n + 1 + δ n + 1 1 β n + 1 y n + 1 y n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) = y n + 1 y n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) x n + 1 x n + | λ n + 1 λ n | f ( z n ) + | λ n + 1 α n + 1 λ n α n | z n + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) .

Since { x n }, { y n }, { z n } and { u n } are bounded, it follows from conditions (i), (iii), (v) and (vi) that

lim sup n ( w n + 1 w n x n + 1 x n ) lim sup n { | λ n + 1 λ n | f ( z n ) + | λ n + 1 α n + 1 λ n α n | z n + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) } = 0 .

Hence by Lemma 2.1, we get lim n w n x n =0. Thus,

lim n x n + 1 x n = lim n (1 β n ) w n x n =0.
(3.14)

Step 3. lim n B 1 x ˜ n B 1 q=0 and lim n B 2 x n B 2 p=0, where q= P C (p μ 2 B 2 p).

Indeed, utilizing Lemma 2.4 and the convexity of 2 , we obtain from Algorithm 3.1 and (3.2)-(3.3) that

x n + 1 p 2 = β n ( x n p ) + γ n ( y n p ) + δ n ( S y n p ) 2 β n x n p 2 + ( γ n + δ n ) 1 γ n + δ n [ γ n ( y n p ) + δ n ( S y n p ) ] 2 β n x n p 2 + ( γ n + δ n ) y n p 2 β n x n p 2 + ( γ n + δ n ) [ σ n Q x n p 2 + ( 1 σ n ) u n p 2 ] β n x n p 2 + σ n Q x n p 2 + ( γ n + δ n ) u n p 2 β n x n p 2 + σ n Q x n p 2 + ( γ n + δ n ) ( z n p 2 + 2 λ n α n p u n p ) β n x n p 2 + σ n Q x n p 2 + ( γ n + δ n ) [ x n p 2 μ 2 ( 2 β 2 μ 2 ) B 2 x n B 2 p 2 μ 1 ( 2 β 1 μ 1 ) B 1 x ˜ n B 1 q 2 + 2 λ n α n p u n p ] x n p 2 + σ n Q x n p 2 ( γ n + δ n ) [ μ 2 ( 2 β 2 μ 2 ) B 2 x n B 2 p 2 + μ 1 ( 2 β 1 μ 1 ) B 1 x ˜ n B 1 q 2 ] + 2 λ n α n p u n p .

Therefore,

( γ n + δ n ) [ μ 2 ( 2 β 2 μ 2 ) B 2 x n B 2 p 2 + μ 1 ( 2 β 1 μ 1 ) B 1 x ˜ n B 1 q 2 ] x n p 2 x n + 1 p 2 + σ n Q x n p 2 + 2 λ n α n p u n p ( x n p + x n + 1 p ) x n x n + 1 + σ n Q x n p 2 + 2 λ n α n p u n p .

Since α n 0, σ n 0, x n x n + 1 0, lim inf n ( γ n + δ n )>0 and { λ n }[a,b] for some a,b(0, 2 A 2 ), it follows that

lim n B 1 x ˜ n B 1 q=0and lim n B 2 x n B 2 p=0.

Step 4. lim n S y n y n =0.

Indeed, by firm nonexpansiveness of P C , we have

x ˜ n q 2 = P C ( x n μ 2 B 2 x n ) P C ( p μ 2 B 2 p ) 2 ( x n μ 2 B 2 x n ) ( p μ 2 B 2 p ) , x ˜ n q = 1 2 [ x n p μ 2 ( B 2 x n B 2 p ) 2 + x ˜ n q 2 ( x n p ) μ 2 ( B 2 x n B 2 p ) ( x ˜ n q ) 2 ] 1 2 [ x n p 2 + x ˜ n q 2 ( x n x ˜ n ) μ 2 ( B 2 x n B 2 p ) ( p q ) 2 ] = 1 2 [ x n p 2 + x ˜ n q 2 x n x ˜ n ( p q ) 2 + 2 μ 2 x n x ˜ n ( p q ) , B 2 x n B 2 p μ 2 2 B 2 x n B 2 p 2 ] 1 2 [ x n p 2 + x ˜ n q 2 x n x ˜ n ( p q ) 2 + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p ] ,

that is,

x ˜ n q 2 x n p 2 x n x ˜ n ( p q ) 2 + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p .
(3.15)
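The firm nonexpansiveness of P C used above, x y , P C x P C y P C x P C y 2 , is easy to verify numerically when C is a box in R n , where the metric projection is a componentwise clamp. A toy sketch (the box and dimension are arbitrary illustrative choices):

```python
import random

LO, HI = -1.0, 1.0  # C = [-1, 1]^3, a closed convex box (illustrative choice)

def proj_C(x):
    # metric projection onto the box: componentwise clamp
    return [min(max(t, LO), HI) for t in x]

def inner(x, y): return sum(a * b for a, b in zip(x, y))
def sub(x, y): return [a - b for a, b in zip(x, y)]

random.seed(1)
for _ in range(2000):
    x = [random.uniform(-4, 4) for _ in range(3)]
    y = [random.uniform(-4, 4) for _ in range(3)]
    px, py = proj_C(x), proj_C(y)
    d = sub(px, py)
    # firm nonexpansiveness: <x - y, P_C x - P_C y> >= ||P_C x - P_C y||^2
    assert inner(sub(x, y), d) >= inner(d, d) - 1e-12
print("P_C is firmly nonexpansive on random samples")
```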

Moreover, using an argument similar to the one above, we derive

z n p 2 = P C ( x ˜ n μ 1 B 1 x ˜ n ) P C ( q μ 1 B 1 q ) 2 ( x ˜ n μ 1 B 1 x ˜ n ) ( q μ 1 B 1 q ) , z n p = 1 2 [ x ˜ n q μ 1 ( B 1 x ˜ n B 1 q ) 2 + z n p 2 ( x ˜ n q ) μ 1 ( B 1 x ˜ n B 1 q ) ( z n p ) 2 ] 1 2 [ x ˜ n q 2 + z n p 2 ( x ˜ n z n ) μ 1 ( B 1 x ˜ n B 1 q ) + ( p q ) 2 ] = 1 2 [ x ˜ n q 2 + z n p 2 x ˜ n z n + ( p q ) 2 + 2 μ 1 x ˜ n z n + ( p q ) , B 1 x ˜ n B 1 q μ 1 2 B 1 x ˜ n B 1 q 2 ] 1 2 [ x ˜ n q 2 + z n p 2 x ˜ n z n + ( p q ) 2 + 2 μ 1 x ˜ n z n + ( p q ) B 1 x ˜ n B 1 q ] ,

that is,

z n p 2 x ˜ n q 2 x ˜ n z n + ( p q ) 2 + 2 μ 1 x ˜ n z n + ( p q ) B 1 x ˜ n B 1 q .
(3.16)

Utilizing (3.2), (3.15) and (3.16), we have

u n p 2 z n p 2 + 2 λ n α n p u n p x ˜ n q 2 x ˜ n z n + ( p q ) 2 + 2 μ 1 x ˜ n z n + ( p q ) B 1 x ˜ n B 1 q + 2 λ n α n p u n p x n p 2 x n x ˜ n ( p q ) 2 + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p x ˜ n z n + ( p q ) 2 + 2 μ 1 x ˜ n z n + ( p q ) B 1 x ˜ n B 1 q + 2 λ n α n p u n p .
(3.17)

So, from Algorithm 3.1 and (3.17), it follows that

x n + 1 p 2 = β n ( x n p ) + γ n ( y n p ) + δ n ( S y n p ) 2 β n x n p 2 + ( γ n + δ n ) y n p 2 = β n x n p 2 + ( 1 β n ) y n p 2 β n x n p 2 + ( 1 β n ) [ σ n Q x n p 2 + ( 1 σ n ) u n p 2 ] β n x n p 2 + σ n Q x n p 2 + ( 1 β n ) u n p 2 β n x n p 2 + σ n Q x n p 2 + ( 1 β n ) [ x n p 2 x n x ˜ n ( p q ) 2 + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p x ˜ n z n + ( p q ) 2 + 2 μ 1 x ˜ n z n + ( p q ) B 1 x ˜ n B 1 q + 2 λ n α n p u n p ] = x n p 2 + σ n Q x n p 2 ( 1 β n ) [ x n x ˜ n ( p q ) 2 + x ˜ n z n + ( p q ) 2 ] + ( 1 β n ) [ 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p + 2 μ 1 x ˜ n z n + ( p q ) B 1 x ˜ n B 1 q + 2 λ n α n p u n p ] ,

which hence implies that

( 1 β n ) [ x n x ˜ n ( p q ) 2 + x ˜ n z n + ( p q ) 2 ] x n p 2 x n + 1 p 2 + σ n Q x n p 2 + ( 1 β n ) [ 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p + 2 μ 1 x ˜ n z n + ( p q ) B 1 x ˜ n B 1 q + 2 λ n α n p u n p ] ( x n p + x n + 1 p ) x n x n + 1 + σ n Q x n p 2 + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p + 2 μ 1 x ˜ n z n + ( p q ) B 1 x ˜ n B 1 q + 2 λ n α n p u n p .

Since lim sup n β n <1, { λ n }[a,b], α n 0, σ n 0, B 2 x n B 2 p0, B 1 x ˜ n B 1 q0 and x n + 1 x n 0, it follows from the boundedness of { x n }, { x ˜ n }, { z n } and { u n } that

lim n x n x ˜ n ( p q ) =0and lim n x ˜ n z n + ( p q ) =0.

Consequently, it immediately follows that lim n x n z n =0. Also, since y n = σ n Q x n +(1 σ n ) u n and, by hypothesis, y n z n 0, we have

(1 σ n ) u n z n = y n z n σ n ( Q x n z n ) y n z n + σ n Q x n z n 0(n).

Thus, we have

lim n u n z n =0and lim n x n y n =0.
(3.18)

Note that

δ n ( S y n x n ) x n + 1 x n + γ n y n x n .

It hence follows that

lim n S y n x n =0and lim n S y n y n =0.

Step 5. lim sup n Q x ¯ x ¯ , x n x ¯ 0, where x ¯ = P Fix ( S ) Ξ Γ Q x ¯ .

Indeed, since { x n } is bounded, there exists a subsequence { x n i } of { x n } such that

lim sup n Q x ¯ x ¯ , x n x ¯ = lim i Q x ¯ x ¯ , x n i x ¯ .
(3.19)

Also, since H 1 is reflexive and { y n } is bounded, without loss of generality, we may assume that y n i p ˆ weakly for some p ˆ C. First, it is clear from Lemma 2.2 that p ˆ Fix(S). Now, let us show that p ˆ Ξ. We note that

x n G ( x n ) = x n P C [ P C ( x n μ 2 B 2 x n ) μ 1 B 1 P C ( x n μ 2 B 2 x n ) ] = x n z n 0 ( n ) ,

where G:CC is defined as in Lemma 1.1. According to Lemma 2.2, we obtain p ˆ Ξ. Further, let us show that p ˆ Γ. Indeed, since x n z n 0, u n z n 0 and x n y n 0, we deduce that z n i p ˆ weakly and u n i p ˆ weakly. Let

Tv={ f ( v ) + N C v if  v C , if  v C ,

where N C v={w H 1 :vu,w0,uC}. Then T is maximal monotone and 0Tv if and only if vVI(C,f); see [41] for more details. Let (v,w)Gph(T). Then we have

wTv=f(v)+ N C v

and hence

wf(v) N C v.

So, we have

v u , w f ( v ) 0,uC.

On the other hand, from

u n = P C ( z n λ n f α n ( z n ) ) andvC,

we have

z n λ n f α n ( z n ) u n , u n v 0

and hence,

v u n , u n z n λ n + f α n ( z n ) 0.

Therefore, from

wf(v) N C vand u n i C,

we have

v u n i , w v u n i , f ( v ) v u n i , f ( v ) v u n i , u n i z n i λ n i + f α n i ( z n i ) = v u n i , f ( v ) v u n i , u n i z n i λ n i + f ( z n i ) α n i v u n i , z n i = v u n i , f ( v ) f ( u n i ) + v u n i , f ( u n i ) f ( z n i ) v u n i , u n i z n i λ n i α n i v u n i , z n i v u n i , f ( u n i ) f ( z n i ) v u n i , u n i z n i λ n i α n i v u n i , z n i .

Hence, letting i, we get

v p ˆ ,w0.

Since T is maximal monotone, we have p ˆ T 1 0, and hence p ˆ VI(C,f). Thus p ˆ Γ. Therefore, p ˆ Fix(S)ΞΓ. Consequently, in terms of Proposition 2.1(i), we obtain from (3.19) that

lim sup n Q x ¯ x ¯ , x n x ¯ = lim i Q x ¯ x ¯ , x n i x ¯ =Q x ¯ x ¯ , p ˆ x ¯ 0.
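Underlying Step 5 is the standard characterization that x VI(C,f) if and only if x = P C ( x λf( x )) for any λ>0, which is also what makes 0Tv equivalent to vVI(C,f). A small numerical sketch, assuming the illustrative monotone operator f(x)=xc with c outside a box C (so that the VI solution is P C (c)):

```python
LO, HI = -1.0, 1.0            # C = [-1, 1]^2 (illustrative choice)
c = [2.0, 0.4]                # hypothetical data; f(x) = x - c is 1-ism

def f(x): return [a - b for a, b in zip(x, c)]
def proj_C(x): return [min(max(t, LO), HI) for t in x]

x_star = proj_C(c)            # candidate VI solution

# fixed-point characterization: x* = P_C(x* - lam * f(x*))
lam = 0.5
step = [a - lam * b for a, b in zip(x_star, f(x_star))]
assert all(abs(a - b) < 1e-12 for a, b in zip(proj_C(step), x_star))

# VI inequality <f(x*), u - x*> >= 0, checked at the extreme points of C
# (f(x*) is fixed, so the left-hand side is affine in u and the corners suffice)
corners = [[sx * HI, sy * HI] for sx in (-1, 1) for sy in (-1, 1)]
for u in corners:
    val = sum(fa * (ua - xa) for fa, ua, xa in zip(f(x_star), u, x_star))
    assert val >= -1e-12
print("projection fixed point matches the VI solution")
```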

Step 6. lim n x n x ¯ =0.

Indeed, from (3.2) and (3.3) it follows that

u n x ¯ 2 z n x ¯ 2 +2 λ n α n x ¯ u n x ¯ x n x ¯ 2 +2 λ n α n x ¯ u n x ¯ .

Note that

Q x n x ¯ , y n x ¯ = Q x n x ¯ , x n x ¯ + Q x n x ¯ , y n x n = Q x n Q x ¯ , x n x ¯ + Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ , y n x n ρ x n x ¯ 2 + Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n .

Utilizing Lemmas 2.4 and 2.5, we obtain from (3.2) and the convexity of 2

x n + 1 x ¯ 2 = β n ( x n x ¯ ) + γ n ( y n x ¯ ) + δ n ( S y n x ¯ ) 2 β n x n x ¯ 2 + ( γ n + δ n ) 1 γ n + δ n [ γ n ( y n x ¯ ) + δ n ( S y n x ¯ ) ] 2 β n x n x ¯ 2 + ( γ n + δ n ) y n x ¯ 2 β n x n x ¯ 2 + ( γ n + δ n ) [ ( 1 σ n ) 2 u n x ¯ 2 + 2 σ n Q x n x ¯ , y n x ¯ ] β n x n x ¯ 2 + ( γ n + δ n ) [ ( 1 σ n ) ( x n x ¯ 2 + 2 λ n α n x ¯ u n x ¯ ) + 2 σ n Q x n x ¯ , y n x ¯ ] = ( 1 ( γ n + δ n ) σ n ) x n x ¯ 2 + ( γ n + δ n ) 2 σ n Q x n x ¯ , y n x ¯ + ( γ n + δ n ) 2 λ n α n x ¯ u n x ¯ ( 1 ( γ n + δ n ) σ n ) x n x ¯ 2 + ( γ n + δ n ) 2 σ n Q x n x ¯ , y n x ¯ + 2 λ n α n x ¯ u n x ¯ ( 1 ( γ n + δ n ) σ n ) x n x ¯ 2 + ( γ n + δ n ) 2 σ n [ ρ x n x ¯ 2 + Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n ] + 2 λ n α n x ¯ u n x ¯ = [ 1 ( 1 2 ρ ) ( γ n + δ n ) σ n ] x n x ¯ 2 + ( γ n + δ n ) 2 σ n [ Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n ] + 2 λ n α n x ¯ u n x ¯ = [ 1 ( 1 2 ρ ) ( γ n + δ n ) σ n ] x n x ¯ 2 + ( 1 2 ρ ) ( γ n + δ n ) σ n 2 [ Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n ] 1 2 ρ + 2 λ n α n x ¯ u n x ¯ .

Note that lim inf n (12ρ)( γ n + δ n )>0. It follows that n = 0 (12ρ)( γ n + δ n ) σ n =. It is clear that

lim sup n 2 [ Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n ] 1 2 ρ 0

because lim sup n Q x ¯ x ¯ , x n x ¯ 0 and lim n x n y n =0. In addition, note that { λ n }[a,b], n = 0 α n < and { u n } is bounded, so n = 0 2 λ n α n x ¯ u n x ¯ <. Therefore, all conditions of Lemma 2.3 are satisfied, and we deduce that x n x ¯ 0 as n. This completes the proof. □

Corollary 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H 1 . Let AB( H 1 , H 2 ) and B i :C H 1 be β i -inverse strongly monotone for i=1,2. Let S:CC be a k-strictly pseudocontractive mapping such that Fix(S)ΞΓ. For fixed uC and x 0 C given arbitrarily, let the sequences { x n }, { y n }, { z n } be generated iteratively by

{ z n = P C [ P C ( x n μ 2 B 2 x n ) μ 1 B 1 P C ( x n μ 2 B 2 x n ) ] , y n = σ n u + ( 1 σ n ) P C ( z n λ n f α n ( z n ) ) , x n + 1 = β n x n + γ n y n + δ n S y n , n 0 ,
(3.20)

where μ i (0,2 β i ) for i=1,2, { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ β n },{ γ n },{ δ n }[0,1] such that

  1. (i)

    n = 0 α n <;

  2. (ii)

    β n + γ n + δ n =1 and ( γ n + δ n )k γ n for all n0;

  3. (iii)

    lim n σ n =0 and n = 0 σ n =;

  4. (iv)

    0< lim inf n β n lim sup n β n <1 and lim inf n δ n >0;

  5. (v)

    lim n ( γ n + 1 1 β n + 1 γ n 1 β n )=0;

  6. (vi)

    0< lim inf n λ n lim sup n λ n < 2 A 2 and lim n | λ n + 1 λ n |=0.

Then the sequences { x n }, { y n }, { z n } converge strongly to the same point x ¯ = P Fix ( S ) Ξ Γ u if and only if lim n y n z n =0. Furthermore, ( x ¯ , y ¯ ) is a solution of GSVI (1.3), where y ¯ = P C ( x ¯ μ 2 B 2 x ¯ ).
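To make scheme (3.20) concrete, here is a deliberately simple finite-dimensional run (all problem data are illustrative assumptions, not from the paper): C=[2,2 ] 2 , B 1 = B 2 =Ic (1-inverse strongly monotone), A=I with Q={c} so that f(x)=xc, and S=I (a 0-strict pseudocontraction). Then Fix(S)ΞΓ={c}, the parameter choices below satisfy conditions (i)-(vi), and the iterates should approach c:

```python
LO, HI = -2.0, 2.0
c = [0.5, -0.3]                       # the common solution (inside C, by construction)
u = [1.5, 1.2]                        # fixed anchor u in C

def proj_C(x): return [min(max(t, LO), HI) for t in x]
def B(x): return [a - b for a, b in zip(x, c)]      # B1 = B2 = I - c, beta = 1
def f(x): return [a - b for a, b in zip(x, c)]      # SFP gradient with A = I, Q = {c}

mu1 = mu2 = 1.0                       # in (0, 2*beta)
lam = 1.0                             # constant, in (0, 2/||A||^2); condition (vi) holds
x = [0.0, 0.0]
for n in range(2000):
    alpha = 1.0 / (n + 1) ** 2        # summable regularization: condition (i)
    sigma = 1.0 / (n + 1)             # sigma_n -> 0, sum = infinity: condition (iii)
    beta_n, gamma_n, delta_n = 0.5, 0.25, 0.25   # conditions (ii), (iv), (v)
    xt = proj_C([a - mu2 * b for a, b in zip(x, B(x))])
    z = proj_C([a - mu1 * b for a, b in zip(xt, B(xt))])
    falpha = [fa + alpha * za for fa, za in zip(f(z), z)]
    pz = proj_C([a - lam * b for a, b in zip(z, falpha)])
    y = [sigma * ua + (1 - sigma) * pa for ua, pa in zip(u, pz)]
    Sy = y                            # S = I
    x = [beta_n * a + gamma_n * b + delta_n * sb
         for a, b, sb in zip(x, y, Sy)]
err = max(abs(a - b) for a, b in zip(x, c))
assert err < 1e-2
print("distance to the common solution:", err)
```

The residual decays at roughly the rate of σ n , consistent with the viscosity term σ n u dominating the tail error.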

Next, utilizing Corollary 3.1, we give the following result.

Corollary 3.2 Let C be a nonempty closed convex subset of a real Hilbert space H 1 . Let AB( H 1 , H 2 ) and S:CC be a nonexpansive mapping such that Fix(S)Γ. For fixed uC and x 0 C given arbitrarily, let the sequences { x n }, { y n } be generated iteratively by

{ y n = σ n u + ( 1 σ n ) P C ( x n λ n f α n ( x n ) ) , x n + 1 = β n x n + ( 1 β n ) S y n , n 0 ,
(3.21)

where { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ β n }[0,1] such that

  1. (i)

    n = 0 α n <;

  2. (ii)

    lim n σ n =0 and n = 0 σ n =;

  3. (iii)

    0< lim inf n β n lim sup n β n <1;

  4. (iv)

    0< lim inf n λ n lim sup n λ n < 2 A 2 and lim n | λ n + 1 λ n |=0.

Then the sequences { x n }, { y n } converge strongly to the same point x ¯ = P Fix ( S ) Γ u if and only if lim n x n y n =0.

Proof In Corollary 3.1, put B 1 = B 2 =0 and γ n =0. Then Ξ=C, β n + δ n =1 for all n0, and the iterative scheme (3.20) is equivalent to

{ z n = x n , y n = σ n u + ( 1 σ n ) P C ( z n λ n f α n ( z n ) ) , x n + 1 = β n x n + δ n S y n , n 0 .

This is equivalent to (3.21). Since S is nonexpansive, S is a k-strictly pseudo-contractive mapping with k=0. In this case, it is easy to see that conditions (i)-(vi) in Corollary 3.1 are all satisfied. Therefore, in terms of Corollary 3.1, we obtain the desired result. □
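Corollary 3.2 can likewise be exercised on a toy instance (all data below are illustrative assumptions): C the disk of radius 2, A=I, Q the unit disk (so f(x)=x P Q x and Γ is the unit disk intersected with C), and S a rotation by 90°, a nonexpansive self-mapping of C with Fix(S)={0}; the iterates of (3.21) should then approach x ¯ = P Fix ( S ) Γ u=0:

```python
import math

def proj_ball(x, r):
    n = math.hypot(x[0], x[1])
    return x if n <= r else [r * t / n for t in x]

def proj_C(x): return proj_ball(x, 2.0)   # C: disk of radius 2
def proj_Q(x): return proj_ball(x, 1.0)   # Q: unit disk
def f(x):                                  # gradient of (1/2)||x - P_Q x||^2 (A = I)
    p = proj_Q(x)
    return [a - b for a, b in zip(x, p)]
def S(x): return [-x[1], x[0]]             # rotation by 90 degrees, Fix(S) = {0}

u = [1.0, 0.5]                             # fixed anchor in C
lam = 1.0                                  # constant, in (0, 2/||A||^2) = (0, 2)
x = [1.2, -1.2]
for n in range(4000):
    alpha = 1.0 / (n + 1) ** 2             # summable: condition (i)
    sigma = 1.0 / (n + 1)                  # condition (ii)
    beta_n = 0.5                           # condition (iii)
    falpha = [fa + alpha * xa for fa, xa in zip(f(x), x)]
    y = [sigma * ua + (1 - sigma) * pa
         for ua, pa in zip(u, proj_C([a - lam * b for a, b in zip(x, falpha)]))]
    x = [beta_n * a + (1 - beta_n) * sb for a, sb in zip(x, S(y))]
err = math.hypot(x[0], x[1])
assert err < 5e-2
print("distance to x_bar = 0:", err)
```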

Now, we are in a position to present the strong convergence criteria of the sequences generated by Algorithm 3.2.

Theorem 3.2 Let C be a nonempty closed convex subset of a real Hilbert space H 1 . Let AB( H 1 , H 2 ) and B i :C H 1 be β i -inverse strongly monotone for i=1,2. Let S:CC be a k-strictly pseudocontractive mapping such that Fix(S)ΞΓ. Let Q:CC be a ρ-contraction with ρ[0, 1 2 ). For x 0 C given arbitrarily, let { x n }, { y n }, { z n } be the sequences generated by Algorithm  3.2, where μ i (0,2 β i ) for i=1,2, { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ β n },{ γ n },{ δ n }[0,1] such that

  1. (i)

    n = 0 α n <;

  2. (ii)

    β n + γ n + δ n =1 and ( γ n + δ n )k γ n for all n0;

  3. (iii)

    lim n σ n =0 and n = 0 σ n =;

  4. (iv)

    0< lim inf n β n lim sup n β n <1 and lim inf n δ n >0;

  5. (v)

    lim n ( γ n + 1 1 β n + 1 γ n 1 β n )=0;

  6. (vi)

    0< lim inf n λ n lim sup n λ n < 2 A 2 and lim n | λ n + 1 λ n |=0.

Then the sequences { x n }, { y n }, { z n } converge strongly to the same point x ¯ = P Fix ( S ) Ξ Γ Q x ¯ if and only if lim n y n z n =0. Furthermore, ( x ¯ , y ¯ ) is a solution of GSVI (1.3), where y ¯ = P C ( x ¯ μ 2 B 2 x ¯ ).

Proof First, taking into account 0< lim inf n λ n lim sup n λ n < 2 A 2 , without loss of generality, we may assume that { λ n }[a,b] for some a,b(0, 2 A 2 ). Repeating the same argument as that in the proof of Theorem 3.1, we can show that P C (Iλ f α ) is ζ-averaged for each λ(0, 2 α + A 2 ), where ζ= 2 + λ ( α + A 2 ) 4 . Further, repeating the same argument as that in the proof of Theorem 3.1, we can also show that for each integer n0, P C (I λ n f α n ) is ζ n -averaged with ζ n = 2 + λ n ( α n + A 2 ) 4 (0,1).
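The ζ-averagedness of P C (Iλ f α ) asserted here implies in particular that the map is nonexpansive whenever λ(α+ A 2 )<2. This can be spot-checked for a concrete SFP instance; the matrix, boxes and parameters below are illustrative assumptions, with f α (x)= A T (Ax P Q Ax)+αx:

```python
import random

A = [[1.0, 0.0], [1.0, 1.0]]       # ||A||^2 = lambda_max(A^T A) ~= 2.618

def matvec(M, x): return [sum(m * a for m, a in zip(row, x)) for row in M]
def tmatvec(M, x): return [sum(M[i][j] * x[i] for i in range(2)) for j in range(2)]
def clamp(x, lo, hi): return [min(max(t, lo), hi) for t in x]
def proj_C(x): return clamp(x, -1.0, 1.0)   # C = [-1,1]^2
def proj_Q(y): return clamp(y, 0.0, 1.0)    # Q = [0,1]^2

alpha, lam = 0.1, 0.5               # lam * (alpha + ||A||^2) ~= 1.36 < 2

def T(x):
    Ax = matvec(A, x)
    r = [a - b for a, b in zip(Ax, proj_Q(Ax))]   # Ax - P_Q(Ax)
    g = tmatvec(A, r)                             # f(x) = A^T (Ax - P_Q Ax)
    galpha = [gi + alpha * xi for gi, xi in zip(g, x)]
    return proj_C([xi - lam * gi for xi, gi in zip(x, galpha)])

def dist(x, y): return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

random.seed(2)
for _ in range(2000):
    x = [random.uniform(-3, 3) for _ in range(2)]
    y = [random.uniform(-3, 3) for _ in range(2)]
    # averaged maps are nonexpansive: ||Tx - Ty|| <= ||x - y||
    assert dist(T(x), T(y)) <= dist(x, y) + 1e-12
print("P_C(I - lam f_alpha) is nonexpansive on random samples")
```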

Next, we divide the remainder of the proof into several steps.

Step 1. { x n } is bounded.

Indeed, take pFix(S)ΞΓ arbitrarily. Then Sp=p, P C (Iλf)p=p for λ(0, 2 A 2 ), and

p= P C [ P C ( p μ 2 B 2 p ) μ 1 B 1 P C ( p μ 2 B 2 p ) ] .

For simplicity, we write

q= P C (p μ 2 B 2 p), z ˜ n = P C ( z n μ 2 B 2 z n )and u n = P C ( z ˜ n μ 1 B 1 z ˜ n )

for each n0. Then y n = σ n Q x n +(1 σ n ) u n for each n0. Utilizing arguments similar to those of (3.1) and (3.2) in the proof of Theorem 3.1, we can obtain from Algorithm 3.2

z n p x n p+ λ n α n p
(3.22)

and

z n p 2 x n p 2 +2 λ n α n p z n p.
(3.23)

Since B i :C H 1 is β i -inverse strongly monotone and 0< μ i <2 β i for i=1,2, utilizing the argument similar to that of (3.3) in the proof of Theorem 3.1, we can obtain that for all n0,

u n p 2 z n p 2 μ 2 ( 2 β 2 μ 2 ) B 2 z n B 2 p 2 μ 1 ( 2 β 1 μ 1 ) B 1 z ˜ n B 1 q 2 z n p 2 .
(3.24)

Hence it follows from (3.22) and (3.24) that

y n p = σ n ( Q x n p ) + ( 1 σ n ) ( u n p ) σ n ( Q x n Q p + Q p p ) + ( 1 σ n ) z n p σ n ( ρ x n p + Q p p ) + ( 1 σ n ) ( x n p + λ n α n p ) ( 1 ( 1 ρ ) σ n ) x n p + σ n Q p p + λ n α n p max { x n p , Q p p 1 ρ } + λ n α n p .
(3.25)

Since ( γ n + δ n )k γ n for all n0, by Lemma 2.4 we can readily see from (3.25) that

x n + 1 pmax { x n p , Q p p 1 ρ } +bp α n .
(3.26)

Repeating the same argument as that of (3.6) in the proof of Theorem 3.1, by induction we can prove that

x n + 1 pmax { x 0 p , Q p p 1 ρ } +2bp j = 0 n α j .
(3.27)

Thus, { x n } is bounded. Since P C , f α n , B 1 and B 2 are Lipschitz continuous, it is easy to see that { z n }, { u n }, { y n } and { z ˜ n } are bounded.

Step 2. lim n x n + 1 x n =0.

Indeed, define w n by x n + 1 = β n x n +(1 β n ) w n for all n0. Then, utilizing arguments similar to those of (3.8)-(3.11) in the proof of Theorem 3.1, we can obtain that

w n + 1 w n = γ n + 1 ( y n + 1 y n ) + δ n + 1 ( S y n + 1 S y n ) 1 β n + 1 + ( γ n + 1 1 β n + 1 γ n 1 β n ) y n + ( δ n + 1 1 β n + 1 δ n 1 β n ) S y n ,
(3.28)
γ n + 1 ( y n + 1 y n ) + δ n + 1 ( S y n + 1 S y n ) ( γ n + 1 + δ n + 1 ) y n + 1 y n
(3.29)

(due to Lemma 2.4)

z n + 1 z n x n + 1 x n +| λ n + 1 λ n | f ( x n ) +| λ n + 1 α n + 1 λ n α n | x n
(3.30)

and

u n + 1 u n 2 P C ( z n + 1 μ 2 B 2 z n + 1 ) P C ( z n μ 2 B 2 z n ) 2 μ 1 ( 2 β 1 μ 1 ) B 1 P C ( z n + 1 μ 2 B 2 z n + 1 ) B 1 P C ( z n μ 2 B 2 z n ) 2 ( z n + 1 μ 2 B 2 z n + 1 ) ( z n μ 2 B 2 z n ) 2 z n + 1 z n 2 μ 2 ( 2 β 2 μ 2 ) B 2 z n + 1 B 2 z n 2 z n + 1 z n 2 .
(3.31)

So, from (3.30) and (3.31), we get

y n + 1 y n = u n + 1 + σ n + 1 ( Q x n + 1 u n + 1 ) u n σ n ( Q x n u n ) u n + 1 u n + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n x n + 1 x n + | λ n + 1 λ n | f ( x n ) + | λ n + 1 α n + 1 λ n α n | x n + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n .
(3.32)

Hence it follows from (3.28), (3.29) and (3.32) that

w n + 1 w n x n + 1 x n + | λ n + 1 λ n | f ( x n ) + | λ n + 1 α n + 1 λ n α n | x n + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) .

Since { x n }, { y n }, { z n } and { u n } are bounded, it follows from conditions (i), (iii), (v) and (vi) that

lim sup n ( w n + 1 w n x n + 1 x n ) lim sup n { | λ n + 1 λ n | f ( x n ) + | λ n + 1 α n + 1 λ n α n | x n + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) } = 0 .

Hence by Lemma 2.1, we get lim n w n x n =0. Thus,

lim n x n + 1 x n = lim n (1 β n ) w n x n =0.
(3.33)

Step 3. lim n B 1 z ˜ n B 1 q=0 and lim n B 2 z n B 2 p=0, where q= P C (p μ 2 B 2 p).

Indeed, utilizing arguments similar to those of Step 3 in the proof of Theorem 3.1, we can obtain the desired conclusion.

Step 4. lim n S y n y n =0.

Indeed, utilizing arguments similar to those of Step 4 in the proof of Theorem 3.1, we can obtain the desired conclusion.

Step 5. lim sup n Q x ¯ x ¯ , x n x ¯ 0, where x ¯ = P Fix ( S ) Ξ Γ Q x ¯ .

Indeed, utilizing arguments similar to those of Step 5 in the proof of Theorem 3.1, we can obtain the desired conclusion.

Step 6. lim n x n x ¯ =0.

Indeed, utilizing arguments similar to those of Step 6 in the proof of Theorem 3.1, we can obtain the desired conclusion. This completes the proof. □

Corollary 3.3 Let C be a nonempty closed convex subset of a real Hilbert space H 1 . Let AB( H 1 , H 2 ) and B i :C H 1 be β i -inverse strongly monotone for i=1,2. Let S:CC be a k-strictly pseudocontractive mapping such that Fix(S)ΞΓ. For fixed uC and x 0 C given arbitrarily, let the sequences { x n }, { y n }, { z n } be generated iteratively by

{ z n = P C ( x n λ n f α n ( x n ) ) , y n = σ n u + ( 1 σ n ) P C [ P C ( z n μ 2 B 2 z n ) μ 1 B 1 P C ( z n μ 2 B 2 z n ) ] , x n + 1 = β n x n + γ n y n + δ n S y n , n 0 ,
(3.34)

where μ i (0,2 β i ) for i=1,2, { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ β n },{ γ n },{ δ n }[0,1] such that

  1. (i)

    n = 0 α n <;

  2. (ii)

    β n + γ n + δ n =1 and ( γ n + δ n )k γ n for all n0;

  3. (iii)

    lim n σ n =0 and n = 0 σ n =;

  4. (iv)

    0< lim inf n β n lim sup n β n <1 and lim inf n δ n >0;

  5. (v)

    lim n ( γ n + 1 1 β n + 1 γ n 1 β n )=0;

  6. (vi)

    0< lim inf n λ n lim sup n λ n < 2 A 2 and lim n | λ n + 1 λ n |=0.

Then the sequences { x n }, { y n }, { z n } converge strongly to the same point x ¯ = P Fix ( S ) Ξ Γ u if and only if lim n y n z n =0. Furthermore, ( x ¯ , y ¯ ) is a solution of GSVI (1.3), where y ¯ = P C ( x ¯ μ 2 B 2 x ¯ ).

Remark 3.1 In Corollary 3.3, let S be a nonexpansive mapping and put B 1 = B 2 =0 and γ n =0. Then Ξ=C, β n + δ n =1, P C [ P C ( z n μ 2 B 2 z n ) μ 1 B 1 P C ( z n μ 2 B 2 z n )]= z n , and the iterative scheme (3.34) is equivalent to

{ z n = P C ( x n λ n f α n ( x n ) ) , y n = σ n u + ( 1 σ n ) z n , x n + 1 = β n x n + δ n S y n , n 0 .
(3.35)

This is equivalent to (3.21) in Corollary 3.2, so Corollary 3.3 reduces to Corollary 3.2. Thus Corollary 3.3 includes Corollary 3.2 as a special case.

Remark 3.2 Our Theorems 3.1 and 3.2 improve, extend and develop [[23], Theorem 5.7], [[34], Theorem 3.1], [[8], Theorem 3.2] and [[17], Theorem 3.1] in the following aspects:

  1. (i)

Compared with the relaxed extragradient method in [[8], Theorem 3.2], our relaxed viscosity iterative algorithms (i.e., Algorithms 3.1 and 3.2) drop the boundedness requirement on the domain on which the various mappings are defined.

  2. (ii)

Because both [[23], Theorem 5.7] and [[34], Theorem 3.1] are weak convergence results for solving the SFP, our Theorems 3.1 and 3.2, which establish strong convergence, are of particular interest.

  3. (iii)

    The problem of finding an element of Fix(S)ΞΓ in our Theorems 3.1 and 3.2 is more general than the corresponding problems in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively.

  4. (iv)

    The hybrid extragradient method for finding an element of Fix(S)ΞVI(C,A) in [[17], Theorem 3.1] is extended to develop our relaxed viscosity iterative algorithms (i.e., Algorithms 3.1 and 3.2) for finding an element of Fix(S)ΞΓ.

  5. (v)

The proof of our results is very different from that of [[17], Theorem 3.1] because our argument depends heavily on Lemma 2.3, on the restriction imposed on the regularization parameter sequence { α n }, and on the properties of the averaged mappings P C (I λ n f α n ).

  6. (vi)

    Because Algorithms 3.1 and 3.2 involve a contractive self-mapping Q, a k-strictly pseudo-contractive self-mapping S and several parameter sequences, they are more flexible and more advantageous than the corresponding ones in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively.

4 Hybrid viscosity methods and their convergence criteria

In this section, we propose and analyze the following hybrid viscosity iterative algorithms for finding a common element of the solution set of GSVI (1.3), the solution set of SFP (1.6) and the fixed point set of a strictly pseudo-contractive mapping S:CC.

Algorithm 4.1 Let μ i (0,2 β i ) for i=1,2, { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ τ n },{ β n },{ γ n },{ δ n }[0,1] such that σ n + τ n 1 and β n + γ n + δ n =1 for all n0. For x 0 C given arbitrarily, let { x n }, { y n } and { z n } be the sequences generated by

{ z n = P C ( x n λ n f α n ( x n ) ) , y n = σ n Q x n + τ n P C ( z n λ n f α n ( z n ) ) + ( 1 σ n τ n ) P C [ P C ( z n μ 2 B 2 z n ) μ 1 B 1 P C ( z n μ 2 B 2 z n ) ] , x n + 1 = β n x n + γ n y n + δ n S y n , n 0 .

Algorithm 4.2 Let μ i (0,2 β i ) for i=1,2, { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ β n },{ γ n },{ δ n }[0,1] such that β n + γ n + δ n =1 for all n0. For x 0 C given arbitrarily, let { x n }, { u n } and { u ˜ n } be the sequences generated by

{ u n = P C [ P C ( x n μ 2 B 2 x n ) μ 1 B 1 P C ( x n μ 2 B 2 x n ) ] , u ˜ n = P C ( u n λ n f α n ( u n ) ) , y n = σ n Q x n + ( 1 σ n ) P C ( u ˜ n λ n f α n ( u ˜ n ) ) , x n + 1 = β n x n + γ n y n + δ n S y n , n 0 .
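As with the earlier schemes, Algorithm 4.1 can be exercised on a deliberately simple instance (all data are illustrative assumptions, not from the paper): C=[2,2 ] 2 , B 1 = B 2 =Ic, A=I with Q={c} (so f(x)=xc and the common solution set is {c}), S=I, and the ρ-contraction Q(x)=0.3x+0.7c with ρ=0.3< 1 2 :

```python
LO, HI = -2.0, 2.0
c = [0.5, -0.3]                     # the unique common solution (by construction)

def proj_C(x): return [min(max(t, LO), HI) for t in x]
def B(x): return [a - b for a, b in zip(x, c)]     # B1 = B2 = I - c (1-ism)
def f(x): return [a - b for a, b in zip(x, c)]     # SFP gradient, A = I, Q = {c}
def Qmap(x): return [0.3 * a + 0.7 * b for a, b in zip(x, c)]  # rho = 0.3

def grad_step(x, lam, alpha):
    # P_C(x - lam * f_alpha(x)) with f_alpha = f + alpha I
    fa = [fi + alpha * xi for fi, xi in zip(f(x), x)]
    return proj_C([xi - lam * gi for xi, gi in zip(x, fa)])

def gsvi_step(x, mu1, mu2):
    # P_C[P_C(x - mu2 B2 x) - mu1 B1 P_C(x - mu2 B2 x)]
    xt = proj_C([a - mu2 * b for a, b in zip(x, B(x))])
    return proj_C([a - mu1 * b for a, b in zip(xt, B(xt))])

mu1 = mu2 = 1.0
lam, tau = 1.0, 0.3                 # limsup tau_n < 1, |tau_{n+1} - tau_n| = 0
x = [-1.0, 1.5]
for n in range(2000):
    alpha = 1.0 / (n + 1) ** 2
    sigma = min(0.5, 1.0 / (n + 1))          # sigma_n + tau_n <= 1
    beta_n, gamma_n, delta_n = 0.5, 0.25, 0.25
    z = grad_step(x, lam, alpha)
    y = [sigma * qa + tau * ga + (1 - sigma - tau) * ua
         for qa, ga, ua in zip(Qmap(x), grad_step(z, lam, alpha),
                               gsvi_step(z, mu1, mu2))]
    x = [beta_n * a + gamma_n * b + delta_n * b for a, b in zip(x, y)]  # S = I
err = max(abs(a - b) for a, b in zip(x, c))
assert err < 1e-2
print("distance to the common solution:", err)
```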

Next, we first give the strong convergence criteria of the sequences generated by Algorithm 4.1.

Theorem 4.1 Let C be a nonempty closed convex subset of a real Hilbert space H 1 . Let AB( H 1 , H 2 ) and B i :C H 1 be β i -inverse strongly monotone for i=1,2. Let S:CC be a k-strictly pseudocontractive mapping such that Fix(S)ΞΓ. Let Q:CC be a ρ-contraction with ρ[0, 1 2 ). For x 0 C given arbitrarily, let { x n }, { y n }, { z n } be the sequences generated by Algorithm  4.1, where μ i (0,2 β i ) for i=1,2, { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ τ n },{ β n },{ γ n },{ δ n }[0,1] such that

  1. (i)

    n = 0 α n <;

  2. (ii)

    σ n + τ n 1, β n + γ n + δ n =1 and ( γ n + δ n )k γ n for all n0;

  3. (iii)

    lim n σ n =0 and n = 0 σ n =;

  4. (iv)

    lim sup n τ n <1 and lim n | τ n + 1 τ n |=0;

  5. (v)

    0< lim inf n β n lim sup n β n <1 and lim inf n δ n >0;

  6. (vi)

    lim n ( γ n + 1 1 β n + 1 γ n 1 β n )=0;

  7. (vii)

    0< lim inf n λ n lim sup n λ n < 2 A 2 and lim n | λ n + 1 λ n |=0.

Then the sequences { x n }, { y n }, { z n } converge strongly to the same point x ¯ = P Fix ( S ) Ξ Γ Q x ¯ if and only if lim n x n z n =0. Furthermore, ( x ¯ , y ¯ ) is a solution of GSVI (1.3), where y ¯ = P C ( x ¯ μ 2 B 2 x ¯ ).

Proof First, taking into account 0< lim inf n λ n lim sup n λ n < 2 A 2 , without loss of generality, we may assume that { λ n }[a,b] for some a,b(0, 2 A 2 ). Repeating the same argument as that in the proof of Theorem 3.1, we can show that P C (Iλ f α ) is ζ-averaged for each λ(0, 2 α + A 2 ), where ζ= 2 + λ ( α + A 2 ) 4 . Further, repeating the same argument as that in the proof of Theorem 3.1, we can also show that for each integer n0, P C (I λ n f α n ) is ζ n -averaged with ζ n = 2 + λ n ( α n + A 2 ) 4 (0,1).

Next, we divide the remainder of the proof into several steps.

Step 1. { x n } is bounded.

Indeed, take pFix(S)ΞΓ arbitrarily. Then Sp=p, P C (Iλf)p=p for λ(0, 2 A 2 ), and

p= P C [ P C ( p μ 2 B 2 p ) μ 1 B 1 P C ( p μ 2 B 2 p ) ] .

For simplicity, we write q= P C (p μ 2 B 2 p), z ˜ n = P C ( z n μ 2 B 2 z n ),

u n = P C [ P C ( z n μ 2 B 2 z n ) μ 1 B 1 P C ( z n μ 2 B 2 z n ) ] and u ¯ n = P C ( z n λ n f α n ( z n ) )

for each n0. Then y n = σ n Q x n + τ n u ¯ n +(1 σ n τ n ) u n for each n0. Utilizing arguments similar to those of (3.1), (3.2) and (3.3) in the proof of Theorem 3.1, we deduce from Algorithm 4.1 that

z n p x n p+ λ n α n p,
(4.1)
z n p 2 x n p 2 +2 λ n α n p z n p
(4.2)

and

u n p 2 z n p 2 μ 2 ( 2 β 2 μ 2 ) B 2 z n B 2 p 2 μ 1 ( 2 β 1 μ 1 ) B 1 z ˜ n B 1 q 2 .
(4.3)

Furthermore, repeating the arguments that led to (4.1) and (4.2), we can obtain

u ¯ n p z n p+ λ n α n p
(4.4)

and

u ¯ n p 2 z n p 2 +2 λ n α n p u ¯ n p.
(4.5)

Hence it follows from (4.1), (4.3) and (4.4) that

y n p = σ n ( Q x n p ) + τ n ( u ¯ n p ) + ( 1 σ n τ n ) ( u n p ) σ n Q x n p + τ n u ¯ n p + ( 1 σ n τ n ) u n p σ n ( Q x n Q p + Q p p ) + τ n ( z n p + λ n α n p ) + ( 1 σ n τ n ) z n p σ n ( ρ x n p + Q p p ) + ( 1 σ n ) z n p + λ n α n p σ n ρ x n p + σ n Q p p + ( 1 σ n ) ( x n p + λ n α n p ) + λ n α n p ( 1 ( 1 ρ ) σ n ) x n p + σ n Q p p + 2 λ n α n p = ( 1 ( 1 ρ ) σ n ) x n p + ( 1 ρ ) σ n Q p p 1 ρ + 2 λ n α n p max { x n p , Q p p 1 ρ } + 2 λ n α n p .
(4.6)

Since ( γ n + δ n )k γ n for all n0, utilizing Lemma 2.4, we obtain from (4.6)

x n + 1 p = β n ( x n p ) + γ n ( y n p ) + δ n ( S y n p ) β n x n p + γ n ( y n p ) + δ n ( S y n p ) β n x n p + ( γ n + δ n ) y n p β n x n p + ( γ n + δ n ) [ max { x n p , Q p p 1 ρ } + 2 λ n α n p ] β n x n p + ( γ n + δ n ) max { x n p , Q p p 1 ρ } + 2 λ n α n p max { x n p , Q p p 1 ρ } + 2 b p α n .
(4.7)

By induction, we can derive

x n + 1 pmax { x 0 p , Q p p 1 ρ } +2bp j = 0 n α j .
(4.8)

Hence, { x n } is bounded. Since P C , f α n , B 1 and B 2 are Lipschitz continuous, it is easy to see that { z n }, { u n }, { u ¯ n }, { y n } and { z ˜ n } are bounded, where

z ˜ n = P C ( z n μ 2 B 2 z n ),n0.
(4.9)

Step 2. lim n x n + 1 x n =0.

Indeed, define w n by x n + 1 = β n x n +(1 β n ) w n for all n0. It follows that

w n + 1 w n = γ n + 1 ( y n + 1 y n ) + δ n + 1 ( S y n + 1 S y n ) 1 β n + 1 + ( γ n + 1 1 β n + 1 γ n 1 β n ) y n + ( δ n + 1 1 β n + 1 δ n 1 β n ) S y n .
(4.10)

Since ( γ n + δ n )k γ n for all n0, utilizing Lemma 2.4, we have

γ n + 1 ( y n + 1 y n ) + δ n + 1 ( S y n + 1 S y n ) ( γ n + 1 + δ n + 1 ) y n + 1 y n .
(4.11)

Next, we estimate y n + 1 y n . Utilizing arguments similar to those of (3.10) and (3.11) in the proof of Theorem 3.1, we obtain that

z n + 1 z n x n + 1 x n +| λ n + 1 λ n | f ( x n ) +| λ n + 1 α n + 1 λ n α n | x n
(4.12)

and

u n + 1 u n 2 P C ( z n + 1 μ 2 B 2 z n + 1 ) P C ( z n μ 2 B 2 z n ) 2 μ 1 ( 2 β 1 μ 1 ) B 1 P C ( z n + 1 μ 2 B 2 z n + 1 ) B 1 P C ( z n μ 2 B 2 z n ) 2 z n + 1 z n 2 μ 2 ( 2 β 2 μ 2 ) B 2 z n + 1 B 2 z n 2 .
(4.13)

Thus,

u ¯ n + 1 u ¯ n z n + 1 z n + | λ n + 1 λ n | f ( z n ) + | λ n + 1 α n + 1 λ n α n | z n x n + 1 x n + | λ n + 1 λ n | f ( x n ) + | λ n + 1 α n + 1 λ n α n | x n + | λ n + 1 λ n | f ( z n ) + | λ n + 1 α n + 1 λ n α n | z n = x n + 1 x n + | λ n + 1 λ n | ( f ( x n ) + f ( z n ) ) + | λ n + 1 α n + 1 λ n α n | ( x n + z n ) .
(4.14)

This together with (4.12) implies that

y n + 1 y n = σ n + 1 ( Q x n + 1 u n + 1 ) + τ n + 1 u ¯ n + 1 + ( 1 τ n + 1 ) u n + 1 σ n ( Q x n u n ) τ n u ¯ n ( 1 τ n ) u n τ n + 1 u ¯ n + 1 τ n u ¯ n + ( 1 τ n + 1 ) u n + 1 ( 1 τ n ) u n + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n | τ n + 1 τ n | u ¯ n + 1 + τ n u ¯ n + 1 u ¯ n + | τ n + 1 τ n | u n + 1 + ( 1 τ n ) u n + 1 u n + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n τ n [ x n + 1 x n + | λ n + 1 λ n | ( f ( x n ) + f ( z n ) ) + | λ n + 1 α n + 1 λ n α n | ( x n + z n ) ] + ( 1 τ n ) z n + 1 z n + | τ n + 1 τ n | ( u ¯ n + 1 + u n + 1 ) + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n τ n [ x n + 1 x n + | λ n + 1 λ n | ( f ( x n ) + f ( z n ) ) + | λ n + 1 α n + 1 λ n α n | ( x n + z n ) ] + ( 1 τ n ) [ x n + 1 x n + | λ n + 1 λ n | f ( x n ) + | λ n + 1 α n + 1 λ n α n | x n ] + | τ n + 1 τ n | ( u ¯ n + 1 + u n + 1 ) + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n x n + 1 x n + | λ n + 1 λ n | ( f ( x n ) + f ( z n ) ) + | λ n + 1 α n + 1 λ n α n | ( x n + z n ) + | τ n + 1 τ n | ( u ¯ n + 1 + u n + 1 ) + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n .
(4.15)

Hence it follows from (4.10), (4.11) and (4.15) that

w n + 1 w n γ n + 1 ( y n + 1 y n ) + δ n + 1 ( S y n + 1 S y n ) 1 β n + 1 + | γ n + 1 1 β n + 1 γ n 1 β n | y n + | δ n + 1 1 β n + 1 δ n 1 β n | S y n γ n + 1 + δ n + 1 1 β n + 1 y n + 1 y n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) = y n + 1 y n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) x n + 1 x n + | λ n + 1 λ n | ( f ( x n ) + f ( z n ) ) + | λ n + 1 α n + 1 λ n α n | ( x n + z n ) + | τ n + 1 τ n | ( u ¯ n + 1 + u n + 1 ) + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) .

Since { x n }, { y n }, { z n }, { u n } and { u ¯ n } are bounded, it follows from conditions (i), (iii), (iv), (vi) and (vii) that

lim sup n ( w n + 1 w n x n + 1 x n ) lim sup n { | λ n + 1 λ n | ( f ( x n ) + f ( z n ) ) + | λ n + 1 α n + 1 λ n α n | ( x n + z n ) + | τ n + 1 τ n | ( u ¯ n + 1 + u n + 1 ) + σ n + 1 Q x n + 1 u n + 1 + σ n Q x n u n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) } = 0 .

Hence by Lemma 2.1, we get lim n w n x n =0. Thus,

lim n x n + 1 x n = lim n (1 β n ) w n x n =0.
(4.16)

Step 3. lim n B 2 z n B 2 p=0 and lim n B 1 z ˜ n B 1 q=0, where q= P C (p μ 2 B 2 p).

Indeed, utilizing Lemma 2.4 and the convexity of 2 , we obtain from Algorithm 4.1 and (4.2), (4.3), (4.5) that

x n + 1 p 2 β n x n p 2 + ( γ n + δ n ) 1 γ n + δ n [ γ n ( y n p ) + δ n ( S y n p ) ] 2 β n x n p 2 + ( γ n + δ n ) y n p 2 β n x n p 2 + ( γ n + δ n ) [ σ n Q x n p 2 + τ n u ¯ n p 2 + ( 1 σ n τ n ) u n p 2 ] β n x n p 2 + ( γ n + δ n ) { σ n Q x n p 2 + τ n [ z n p 2 + 2 λ n α n p u ¯ n p ] + ( 1 σ n τ n ) [ z n p 2 μ 2 ( 2 β 2 μ 2 ) B 2 z n B 2 p 2 μ 1 ( 2 β 1 μ 1 ) B 1 z ˜ n B 1 q 2 ] } β n x n p 2 + ( γ n + δ n ) { σ n Q x n p 2 + τ n [ x n p 2 + 2 λ n α n p z n p + 2 λ n α n p u ¯ n p ] + ( 1 σ n τ n ) [ x n p 2 + 2 λ n α n p z n p μ 2 ( 2 β 2 μ 2 ) B 2 z n B 2 p 2 μ 1 ( 2 β 1 μ 1 ) B 1 z ˜ n B 1 q 2 ] } = β n x n p 2 + ( γ n + δ n ) { σ n Q x n p 2 + ( 1 σ n ) x n p 2 + 2 ( 1 σ n ) λ n α n p z n p + 2 τ n λ n α n p u ¯ n p ( 1 σ n τ n ) [ μ 2 ( 2 β 2 μ 2 ) B 2 z n B 2 p 2 + μ 1 ( 2 β 1 μ 1 ) B 1 z ˜ n B 1 q 2 ] } x n p 2 + σ n Q x n p 2 + 2 λ n α n p ( z n p + u ¯ n p ) ( γ n + δ n ) ( 1 σ n τ n ) [ μ 2 ( 2 β 2 μ 2 ) B 2 z n B 2 p 2 + μ 1 ( 2 β 1 μ 1 ) B 1 z ˜ n B 1 q 2 ] .

Therefore,

( γ n + δ n ) ( 1 σ n τ n ) [ μ 2 ( 2 β 2 μ 2 ) B 2 z n B 2 p 2 + μ 1 ( 2 β 1 μ 1 ) B 1 z ˜ n B 1 q 2 ] x n p 2 x n + 1 p 2 + σ n Q x n p 2 + 2 λ n α n p ( z n p + u ¯ n p ) ( x n p + x n + 1 p ) x n x n + 1 + σ n Q x n p 2 + 2 λ n α n p ( z n p + u ¯ n p ) .

Since α n 0, x n x n + 1 0, lim inf n δ n >0, { λ n }[a,b], σ n 0 and lim sup n τ n <1, it follows that

lim n B 1 z ˜ n B 1 q=0and lim n B 2 z n B 2 p=0.

Step 4. lim n S y n y n =0.

Indeed, observe that

u ¯ n z n = P C ( I λ n f α n ) z n P C ( I λ n f α n ) x n z n x n .

This together with z n x n 0 implies that lim n u ¯ n z n =0 and hence lim n u ¯ n x n =0. By firm nonexpansiveness of P C , we have

z ˜ n q 2 = P C ( z n μ 2 B 2 z n ) P C ( p μ 2 B 2 p ) 2 ( z n μ 2 B 2 z n ) ( p μ 2 B 2 p ) , z ˜ n q = 1 2 [ z n p μ 2 ( B 2 z n B 2 p ) 2 + z ˜ n q 2 ( z n p ) μ 2 ( B 2 z n B 2 p ) ( z ˜ n q ) 2 ] 1 2 [ z n p 2 + z ˜ n q 2 ( z n z ˜ n ) μ 2 ( B 2 z n B 2 p ) ( p q ) 2 ] = 1 2 [ z n p 2 + z ˜ n q 2 z n z ˜ n ( p q ) 2 + 2 μ 2 z n z ˜ n ( p q ) , B 2 z n B 2 p μ 2 2 B 2 z n B 2 p 2 ] 1 2 [ z n p 2 + z ˜ n q 2 z n z ˜ n ( p q ) 2 + 2 μ 2 z n z ˜ n ( p q ) B 2 z n B 2 p ] ,

that is,

z ˜ n q 2 z n p 2 z n z ˜ n ( p q ) 2 + 2 μ 2 z n z ˜ n ( p q ) B 2 z n B 2 p .
(4.17)

Moreover, using an argument similar to the one above, we derive

u n p 2 = P C ( z ˜ n μ 1 B 1 z ˜ n ) P C ( q μ 1 B 1 q ) 2 ( z ˜ n μ 1 B 1 z ˜ n ) ( q μ 1 B 1 q ) , u n p = 1 2 [ z ˜ n q μ 1 ( B 1 z ˜ n B 1 q ) 2 + u n p 2 ( z ˜ n q ) μ 1 ( B 1 z ˜ n B 1 q ) ( u n p ) 2 ] 1 2 [ z ˜ n q 2 + u n p 2 ( z ˜ n u n ) μ 1 ( B 1 z ˜ n B 1 q ) + ( p q ) 2 ] = 1 2 [ z ˜ n q 2 + u n p 2 z ˜ n u n + ( p q ) 2 + 2 μ 1 z ˜ n u n + ( p q ) , B 1 z ˜ n B 1 q μ 1 2 B 1 z ˜ n B 1 q 2 ] 1 2 [ z ˜ n q 2 + u n p 2 z ˜ n u n + ( p q ) 2 + 2 μ 1 z ˜ n u n + ( p q ) B 1 z ˜ n B 1 q ] ,

that is,

u n p 2 z ˜ n q 2 z ˜ n u n + ( p q ) 2 + 2 μ 1 z ˜ n u n + ( p q ) B 1 z ˜ n B 1 q .
(4.18)

Utilizing (4.2), (4.5), (4.17) and (4.18), we have

y n p 2 = σ n ( Q x n p ) + τ n ( u ¯ n p ) + ( 1 σ n τ n ) ( u n p ) 2 σ n Q x n p 2 + τ n u ¯ n p 2 + ( 1 σ n τ n ) u n p 2 σ n Q x n p 2 + τ n ( z n p 2 + 2 λ n α n p u ¯ n p ) + ( 1 σ n τ n ) [ z ˜ n q 2 z ˜ n u n + ( p q ) 2 + 2 μ 1 z ˜ n u n + ( p q ) B 1 z ˜ n B 1 q ] σ n Q x n p 2 + τ n ( x n p 2 + 2 λ n α n p z n p + 2 λ n α n p u ¯ n p ) + ( 1 σ n τ n ) { z n p 2 z n z ˜ n ( p q ) 2 + 2 μ 2 z n z ˜ n ( p q ) B 2 z n B 2 p z ˜ n u n + ( p q ) 2 + 2 μ 1 z ˜ n u n + ( p q ) B 1 z ˜ n B 1 q } σ n Q x n p 2 + τ n [ x n p 2 + 2 λ n α n p ( z n p + u ¯ n p ) ] + ( 1 σ n τ n ) { x n p 2 + 2 λ n α n p z n p z n z ˜ n ( p q ) 2 + 2 μ 2 z n z ˜ n ( p q ) B 2 z n B 2 p z ˜ n u n + ( p q ) 2 + 2 μ 1 z ˜ n u n + ( p q ) B 1 z ˜ n B 1 q } σ n Q x n p 2 + x n p 2 + 2 λ n α n p ( z n p + u ¯ n p ) + 2 μ 2 z n z ˜ n ( p q ) B 2 z n B 2 p + 2 μ 1 z ˜ n u n + ( p q ) B 1 z ˜ n B 1 q ( 1 σ n τ n ) ( z n z ˜ n ( p q ) 2 + z ˜ n u n + ( p q ) 2 ) .
(4.19)

Thus, from Algorithm 4.1 and (4.19), it follows that

x n + 1 p 2 = β n ( x n p ) + γ n ( y n p ) + δ n ( S y n p ) 2 β n x n p 2 + ( γ n + δ n ) y n p 2 = β n x n p 2 + ( 1 β n ) y n p 2 β n x n p 2 + ( 1 β n ) { x n p 2 + σ n Q x n p 2 + 2 λ n α n p ( z n p + u ¯ n p ) + 2 μ 2 z n z ˜ n ( p q ) B 2 z n B 2 p + 2 μ 1 z ˜ n u n + ( p q ) B 1 z ˜ n B 1 q ( 1 σ n τ n ) ( z n z ˜ n ( p q ) 2 + z ˜ n u n + ( p q ) 2 ) } x n p 2 + σ n Q x n p 2 + 2 λ n α n p ( z n p + u ¯ n p ) + 2 μ 2 z n z ˜ n ( p q ) B 2 z n B 2 p + 2 μ 1 z ˜ n u n + ( p q ) B 1 z ˜ n B 1 q ( 1 β n ) ( 1 σ n τ n ) ( z n z ˜ n ( p q ) 2 + z ˜ n u n + ( p q ) 2 ) ,

which hence implies that

( 1 β n ) ( 1 σ n τ n ) ( z n z ˜ n ( p q ) 2 + z ˜ n u n + ( p q ) 2 ) x n p 2 x n + 1 p 2 + σ n Q x n p 2 + 2 λ n α n p ( z n p + u ¯ n p ) + 2 μ 2 z n z ˜ n ( p q ) B 2 z n B 2 p + 2 μ 1 z ˜ n u n + ( p q ) B 1 z ˜ n B 1 q ( x n p + x n + 1 p ) x n x n + 1 + σ n Q x n p 2 + 2 λ n α n p ( z n p + u ¯ n p ) + 2 μ 2 z n z ˜ n ( p q ) B 2 z n B 2 p + 2 μ 1 z ˜ n u n + ( p q ) B 1 z ˜ n B 1 q .

Since lim sup n β n <1, lim sup n τ n <1, { λ n }[a,b], α n 0, σ n 0, B 2 z n B 2 p0, B 1 z ˜ n B 1 q0 and x n + 1 x n 0, it follows from the boundedness of { x n }, { u n }, { u ¯ n }, { z n } and { z ˜ n } that

lim n z n z ˜ n ( p q ) =0and lim n z ˜ n u n + ( p q ) =0.

Consequently, it immediately follows that

lim n z n u n =0and lim n u n u ¯ n =0.
(4.20)

Also, note that

y n u ¯ n σ n Q x n u ¯ n +(1 σ n τ n ) u n u ¯ n 0.

This together with x n u ¯ n 0 implies that

lim n x n y n =0.

Since

δ n ( S y n x n ) x n + 1 x n + γ n y n x n ,

it follows that

lim n S y n x n =0and lim n S y n y n =0.

Step 5. lim sup n Q x ¯ x ¯ , x n x ¯ 0, where x ¯ = P Fix ( S ) Ξ Γ Q x ¯ .

Indeed, since { x n } is bounded, there exists a subsequence { x n i } of { x n } such that

lim sup n Q x ¯ x ¯ , x n x ¯ = lim i Q x ¯ x ¯ , x n i x ¯ .
(4.21)

Also, since H 1 is reflexive and { x n } is bounded, without loss of generality, we may assume that x n i p ˆ weakly for some p ˆ C. Taking into account that x n y n 0 and x n z n 0 as n, we deduce that y n i p ˆ weakly and z n i p ˆ weakly.

First, it is clear from Lemma 2.2 and S y n y n 0 that p ˆ Fix(S). Now, let us show that p ˆ Ξ. Note that

z n G ( z n ) = z n P C [ P C ( z n μ 2 B 2 z n ) μ 1 B 1 P C ( z n μ 2 B 2 z n ) ] = z n u n 0 ( n ) ,

where G:CC is defined as in Lemma 1.1. According to Lemma 2.2, we get p ˆ Ξ. Further, let us show that p ˆ Γ. As a matter of fact, define

Tv={ f ( v ) + N C v if  v C , if  v C ,

where N C v={w H 1 :vu,w0,uC}. Then T is maximal monotone and 0Tv if and only if vVI(C,f); see [41] for more details. Utilizing arguments similar to those of Step 5 in the proof of Theorem 3.1 and the relations

z n = P C ( x n λ n f α n ( x n ) ) and v C,

we can derive

v p ˆ ,w0 as i.

Since T is maximal monotone, we have p ˆ T 1 0 and hence p ˆ VI(C,f). Thus it is clear that p ˆ Γ. Therefore, p ˆ Fix(S)ΞΓ. Consequently, in terms of Proposition 2.1(i), we obtain from (4.21) that

lim sup n Q x ¯ x ¯ , x n x ¯ = lim i Q x ¯ x ¯ , x n i x ¯ =Q x ¯ x ¯ , p ˆ x ¯ 0.

Step 6. lim n x n x ¯ =0.

Indeed, observe that

Q x n x ¯ , y n x ¯ = Q x n x ¯ , x n x ¯ + Q x n x ¯ , y n x n = Q x n Q x ¯ , x n x ¯ + Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ , y n x n ρ x n x ¯ 2 + Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n .

Utilizing Lemmas 2.4 and 2.5, we obtain from (4.2), (4.3) and (4.5) and the convexity of 2 that

x n + 1 x ¯ 2 = β n ( x n x ¯ ) + γ n ( y n x ¯ ) + δ n ( S y n x ¯ ) 2 β n x n x ¯ 2 + ( γ n + δ n ) 1 γ n + δ n [ γ n ( y n x ¯ ) + δ n ( S y n x ¯ ) ] 2 β n x n x ¯ 2 + ( γ n + δ n ) y n x ¯ 2 β n x n x ¯ 2 + ( γ n + δ n ) [ τ n ( u ¯ n x ¯ ) + ( 1 σ n τ n ) ( u n x ¯ ) 2 + 2 σ n Q x n x ¯ , y n x ¯ ] β n x n x ¯ 2 + ( γ n + δ n ) [ τ n u ¯ n x ¯ 2 + ( 1 σ n τ n ) u n x ¯ 2 + 2 σ n Q x n x ¯ , y n x ¯ ] β n x n x ¯ 2 + ( γ n + δ n ) [ τ n u ¯ n x ¯ 2 + ( 1 σ n τ n ) z n x ¯ 2 + 2 σ n Q x n x ¯ , y n x ¯ ] β n x n x ¯ 2 + ( γ n + δ n ) { τ n ( x n x ¯ 2 + 2 λ n α n x ¯ ( z n x ¯ + u ¯ n x ¯ ) ) + ( 1 σ n τ n ) ( x n x ¯ 2 + 2 λ n α n x ¯ z n x ¯ ) + 2 σ n Q x n x ¯ , y n x ¯ } β n x n x ¯ 2 + ( γ n + δ n ) { ( 1 σ n ) ( x n x ¯ 2 + 2 λ n α n x ¯ ( z n x ¯ + u ¯ n x ¯ ) ) + 2 σ n Q x n x ¯ , y n x ¯ } = ( 1 ( γ n + δ n ) σ n ) x n x ¯ 2 + ( γ n + δ n ) ( 1 σ n ) 2 λ n α n x ¯ ( z n x ¯ + u ¯ n x ¯ ) + ( γ n + δ n ) 2 σ n Q x n x ¯ , y n x ¯ ( 1 ( γ n + δ n ) σ n ) x n x ¯ 2 + ( γ n + δ n ) 2 σ n [ ρ x n x ¯ 2 + Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n ] + 2 λ n α n x ¯ ( z n x ¯ + u ¯ n x ¯ ) = [ 1 ( 1 2 ρ ) ( γ n + δ n ) σ n ] x n x ¯ 2 + ( 1 2 ρ ) ( γ n + δ n ) σ n 2 [ Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n ] 1 2 ρ + 2 λ n α n x ¯ ( z n x ¯ + u ¯ n x ¯ ) .

Note that lim inf n (12ρ)( γ n + δ n )>0; since n = 0 σ n =, it follows that n = 0 (12ρ)( γ n + δ n ) σ n =. It is clear that

lim sup n 2 [ Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n ] 1 2 ρ 0

because lim sup n Q x ¯ x ¯ , x n x ¯ 0 and lim n x n y n =0. Note also that { λ n }[a,b], n = 0 α n < and { z n }, { u ¯ n } are bounded. Hence we get n = 0 2 λ n α n x ¯ ( z n x ¯ + u ¯ n x ¯ )<. Therefore, all conditions of Lemma 2.3 are satisfied. Consequently, we immediately deduce that x n x ¯ 0 as n. Meanwhile, taking into account that x n y n 0 and x n z n 0 as n, we infer that

lim n y n x ¯ = lim n z n x ¯ =0.

This completes the proof. □
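The final step above invokes Lemma 2.3, which is not restated in this section; presumably it is the standard Xu-type convergence lemma: if a nonnegative sequence { a n } satisfies a n + 1 (1 s n ) a n + s n t n + ε n with { s n }[0,1], n = 0 s n =, lim sup n t n 0 and n = 0 ε n <, then a n 0. Under that assumption, in the last estimate of Step 6 the roles are played by

```latex
a_n = \|x_n-\bar{x}\|^{2}, \qquad
s_n = (1-2\rho)(\gamma_n+\delta_n)\sigma_n, \qquad
t_n = \frac{2\bigl[\langle Q\bar{x}-\bar{x},\,x_n-\bar{x}\rangle
      + \|Qx_n-\bar{x}\|\,\|y_n-x_n\|\bigr]}{1-2\rho}, \qquad
\varepsilon_n = 2\lambda_n\alpha_n\|\bar{x}\|\bigl(\|z_n-\bar{x}\|+\|\bar{u}_n-\bar{x}\|\bigr),
```

which matches the displayed inequality term by term.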

Corollary 4.1 Let C be a nonempty closed convex subset of a real Hilbert space H 1 . Let AB( H 1 , H 2 ) and B i :C H 1 be β i -inverse strongly monotone for i=1,2. Let S:CC be a k-strictly pseudocontractive mapping such that Fix(S)ΞΓ. For fixed uC and x 0 C given arbitrarily, let the sequences { x n }, { y n }, { z n } be generated iteratively by

{ z n = P C ( x n λ n f α n ( x n ) ) , y n = σ n u + τ n P C ( z n λ n f α n ( z n ) ) + ( 1 σ n τ n ) P C [ P C ( z n μ 2 B 2 z n ) μ 1 B 1 P C ( z n μ 2 B 2 z n ) ] , x n + 1 = β n x n + γ n y n + δ n S y n , n 0 ,
(4.22)

where μ i (0,2 β i ) for i=1,2, { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ τ n },{ β n },{ γ n },{ δ n }[0,1] such that

(i) n = 0 α n <;

(ii) σ n + τ n 1, β n + γ n + δ n =1 and ( γ n + δ n )k γ n for all n0;

(iii) lim n σ n =0 and n = 0 σ n =;

(iv) lim sup n τ n <1 and lim n | τ n + 1 τ n |=0;

(v) 0< lim inf n β n lim sup n β n <1 and lim inf n δ n >0;

(vi) lim n ( γ n + 1 1 β n + 1 γ n 1 β n )=0;

(vii) 0< lim inf n λ n lim sup n λ n < 2 A 2 and lim n | λ n + 1 λ n |=0.

Then the sequences { x n }, { y n }, { z n } converge strongly to the same point x ¯ = P Fix ( S ) Ξ Γ u if and only if lim n x n z n =0. Furthermore, ( x ¯ , y ¯ ) is a solution of GSVI (1.3), where y ¯ = P C ( x ¯ μ 2 B 2 x ¯ ).
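To make the scheme (4.22) concrete, the following is a minimal numerical sketch; the instance, operators and parameter choices are illustrative assumptions, not data from the paper. We take H 1 = H 2 = R 2 , A=I, C the closed unit ball, Q= [0,0.5] 2 (so Γ=CQ= [0,0.5] 2 ), B 1 = B 2 =B with B(x)=xc (which is 1-inverse strongly monotone), S=I (a 0-strict pseudocontraction), and the regularized gradient f α (x)= A T (Ax P Q (Ax))+αx, as in the standard SFP regularization. In this instance Ξ reduces to the singleton {c}, so the iterates should approach x ¯ = P Fix ( S ) Ξ Γ u=c.

```python
import numpy as np

# Toy instance (illustrative assumptions, not data from the paper):
# H1 = H2 = R^2, A = I, C = closed unit ball, Q = [0, 0.5]^2,
# B1 = B2 = B with B(x) = x - c, S = identity.
A = np.eye(2)
c = np.array([0.3, 0.3])            # c lies in Gamma = C ∩ Q, and Xi = {c} here

def P_C(x):                         # metric projection onto the unit ball
    return x / max(1.0, np.linalg.norm(x))

def P_Q(y):                         # metric projection onto the box [0, 0.5]^2
    return np.clip(y, 0.0, 0.5)

def grad_f(x, alpha):               # assumed regularized SFP gradient
    return A.T @ (A @ x - P_Q(A @ x)) + alpha * x

B = lambda x: x - c                 # 1-inverse strongly monotone
u = np.array([0.7, 0.7])            # fixed anchor u in C
x = np.array([0.9, -0.2])           # x_0 in C
mu1 = mu2 = 0.5                     # mu_i in (0, 2*beta_i) with beta_i = 1
for n in range(3000):
    lam = 0.9                       # lambda_n in (0, 2/||A||^2) = (0, 2)
    alpha = 1.0 / (n + 1) ** 2      # sum of alpha_n finite
    sigma, tau = 1.0 / (n + 2), 0.25
    beta, gamma, delta = 0.5, 0.0, 0.5   # beta+gamma+delta = 1; (gamma+delta)k <= gamma with k = 0
    z = P_C(x - lam * grad_f(x, alpha))
    v = P_C(z - mu2 * B(z))              # inner GSVI projection step
    Gz = P_C(v - mu1 * B(v))             # G(z_n) in the scheme
    y = sigma * u + tau * P_C(z - lam * grad_f(z, alpha)) + (1 - sigma - tau) * Gz
    x = beta * x + gamma * y + delta * y  # S = identity, so S y_n = y_n
```

After the loop, x is close to c, consistent with the predicted limit x ¯ =c; the anchor u influences the limit only through the projection onto Fix(S)ΞΓ, which is the singleton {c} in this instance.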

Next, utilizing Corollary 4.1, we give the following improvement and extension of the main result in [34] (i.e., [[34], Theorem 3.1]).

Corollary 4.2 Let C be a nonempty closed convex subset of a real Hilbert space H 1 . Let AB( H 1 , H 2 ) and S:CC be a nonexpansive mapping such that Fix(S)Γ. For fixed uC and x 0 C given arbitrarily, let the sequences { x n }, { z n } be generated iteratively by

{ z n = P C ( x n λ n f α n ( x n ) ) , x n + 1 = β n x n + ( 1 β n ) S [ σ n u + τ n P C ( z n λ n f α n ( z n ) ) + ( 1 σ n τ n ) z n ] , n 0 ,
(4.23)

where { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ τ n },{ β n }[0,1] such that

(i) n = 0 α n <;

(ii) σ n + τ n 1 for all n0;

(iii) lim n σ n =0 and n = 0 σ n =;

(iv) lim sup n τ n <1 and lim n | τ n + 1 τ n |=0;

(v) 0< lim inf n β n lim sup n β n <1;

(vi) 0< lim inf n λ n lim sup n λ n < 2 A 2 and lim n | λ n + 1 λ n |=0.

Then the sequences { x n }, { z n } converge strongly to the same point x ¯ = P Fix ( S ) Γ u if and only if lim n x n z n =0.

Proof In Corollary 4.1, put B 1 = B 2 =0 and γ n =0. Then Ξ=C, β n + δ n =1, P C [ P C ( z n μ 2 B 2 z n ) μ 1 B 1 P C ( z n μ 2 B 2 z n )]= z n , and the iterative scheme (4.22) is equivalent to

{ z n = P C ( x n λ n f α n ( x n ) ) , y n = σ n u + τ n P C ( z n λ n f α n ( z n ) ) + ( 1 σ n τ n ) z n , x n + 1 = β n x n + δ n S y n , n 0 .

This is equivalent to (4.23). Since S is a nonexpansive mapping, S must be a k-strictly pseudocontractive mapping with k=0. In this case, it is easy to see that all conditions (i)-(vii) in Corollary 4.1 are satisfied. Therefore, in terms of Corollary 4.1, we obtain the desired result. □

Now, we are in a position to present the strong convergence criteria of the sequences generated by Algorithm 4.2.

Theorem 4.2 Let C be a nonempty closed convex subset of a real Hilbert space H 1 . Let AB( H 1 , H 2 ) and B i :C H 1 be β i -inverse strongly monotone for i=1,2. Let S:CC be a k-strictly pseudo-contractive mapping such that Fix(S)ΞΓ. Let Q:CC be a ρ-contraction with ρ[0, 1 2 ). For x 0 C given arbitrarily, let the sequences { x n }, { u n }, { u ˜ n } be generated by Algorithm  4.2, where μ i (0,2 β i ) for i=1,2, { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ β n },{ γ n },{ δ n }[0,1] such that

(i) n = 0 α n <;

(ii) β n + γ n + δ n =1 and ( γ n + δ n )k γ n for all n0;

(iii) lim n σ n =0 and n = 0 σ n =;

(iv) 0< lim inf n β n lim sup n β n <1 and lim inf n δ n >0;

(v) lim n ( γ n + 1 1 β n + 1 γ n 1 β n )=0;

(vi) 0< lim inf n λ n lim sup n λ n < 2 A 2 and lim n | λ n + 1 λ n |=0.

Then the sequences { x n }, { u n }, { u ˜ n } converge strongly to the same point x ¯ = P Fix ( S ) Ξ Γ Q x ¯ if and only if lim n u ˜ n u n =0. Furthermore, ( x ¯ , y ¯ ) is a solution of GSVI (1.3), where y ¯ = P C ( x ¯ μ 2 B 2 x ¯ ).

Proof First, taking into account 0< lim inf n λ n lim sup n λ n < 2 A 2 , without loss of generality, we may assume that { λ n }[a,b] for some a,b(0, 2 A 2 ). Repeating the same argument as that in the proof of Theorem 4.1, we can show that P C (Iλ f α ) is ζ-averaged for each λ(0, 2 α + A 2 ), where ζ= 2 + λ ( α + A 2 ) 4 . Further, repeating the same argument as that in the proof of Theorem 4.1, we can also show that for each integer n0, P C (I λ n f α n ) is ζ n -averaged with ζ n = 2 + λ n ( α n + A 2 ) 4 (0,1).

Next, we divide the remainder of the proof into several steps.

Step 1. { x n } is bounded.

Indeed, take pFix(S)ΞΓ arbitrarily. Then Sp=p, P C (Iλf)p=p for λ(0, 2 A 2 ), and

p= P C [ P C ( p μ 2 B 2 p ) μ 1 B 1 P C ( p μ 2 B 2 p ) ] .

For simplicity, we write

q= P C (p μ 2 B 2 p), x ˜ n = P C ( x n μ 2 B 2 x n )and u ¯ n = P C ( u ˜ n λ n f α n ( u ˜ n ) )

for each n0. Then y n = σ n Q x n +(1 σ n ) u ¯ n for each n0. Utilizing arguments similar to those of (4.1) and (4.2) in the proof of Theorem 4.1, from Algorithm 4.2 we can obtain

u ˜ n p u n p+ λ n α n p
(4.24)

and

u ˜ n p 2 u n p 2 +2 λ n α n p u ˜ n p.
(4.25)

Since B i :C H 1 is β i -inverse strongly monotone and 0< μ i <2 β i for i=1,2, utilizing the argument similar to that of (4.3) in the proof of Theorem 4.1, we can obtain that for all n0,

u n p 2 x n p 2 μ 2 ( 2 β 2 μ 2 ) B 2 x n B 2 p 2 μ 1 ( 2 β 1 μ 1 ) B 1 x ˜ n B 1 q 2 .
(4.26)

Utilizing arguments similar to those of (4.4) and (4.5) in the proof of Theorem 4.1, from (4.12) we can obtain

u ¯ n p u ˜ n p+ λ n α n p
(4.27)

and

u ¯ n p 2 u ˜ n p 2 +2 λ n α n p u ¯ n p.
(4.28)

Hence it follows from (4.24), (4.26) and (4.27) that

y n p = σ n ( Q x n p ) + ( 1 σ n ) ( u ¯ n p ) σ n Q x n p + ( 1 σ n ) u ¯ n p σ n ( Q x n Q p + Q p p ) + ( 1 σ n ) ( u ˜ n p + λ n α n p ) σ n ( ρ x n p + Q p p ) + ( 1 σ n ) ( u n p + λ n α n p + λ n α n p ) σ n ρ x n p + σ n Q p p + ( 1 σ n ) ( x n p + 2 λ n α n p ) ( 1 σ n ( 1 ρ ) ) x n p + σ n Q p p + 2 λ n α n p max { x n p , Q p p 1 ρ } + 2 λ n α n p .
(4.29)

Since ( γ n + δ n )k γ n for all n0, by Lemma 2.4 we can readily see from (4.29) that

x n + 1 p β n x n p + ( γ n + δ n ) y n p max { x n p , Q p p 1 ρ } + 2 b p α n .
(4.30)

By induction, we can derive

x n + 1 pmax { x 0 p , Q p p 1 ρ } +2bp j = 0 n α j .
(4.31)

Hence, { x n } is bounded. Since P C , f α n , B 1 and B 2 are Lipschitz continuous, it is easy to see that { u n }, { u ˜ n }, { u ¯ n }, { x ˜ n } and { y n } are bounded.

Step 2. lim n x n + 1 x n =0.

Indeed, define w n by x n + 1 = β n x n +(1 β n ) w n for all n0. It follows that

w n + 1 w n = γ n + 1 ( y n + 1 y n ) + δ n + 1 ( S y n + 1 S y n ) 1 β n + 1 + ( γ n + 1 1 β n + 1 γ n 1 β n ) y n + ( δ n + 1 1 β n + 1 δ n 1 β n ) S y n .
(4.32)

Since ( γ n + δ n )k γ n for all n0, utilizing Lemma 2.4, we have

γ n + 1 ( y n + 1 y n ) + δ n + 1 ( S y n + 1 S y n ) ( γ n + 1 + δ n + 1 ) y n + 1 y n .
(4.33)

Next, we estimate y n + 1 y n . Utilizing arguments similar to those of (4.12), (4.13) and (4.14), we can obtain that

u ˜ n + 1 u ˜ n u n + 1 u n +| λ n + 1 λ n | f ( u n ) +| λ n + 1 α n + 1 λ n α n | u n ,
(4.34)
u n + 1 u n x n + 1 x n
(4.35)

and hence

u ¯ n + 1 u ¯ n u ˜ n + 1 u ˜ n + | λ n + 1 λ n | f ( u ˜ n ) + | λ n + 1 α n + 1 λ n α n | u ˜ n u n + 1 u n + | λ n + 1 λ n | f ( u n ) + | λ n + 1 α n + 1 λ n α n | u n + | λ n + 1 λ n | f ( u ˜ n ) + | λ n + 1 α n + 1 λ n α n | u ˜ n x n + 1 x n + | λ n + 1 λ n | f ( u n ) + | λ n + 1 α n + 1 λ n α n | u n + | λ n + 1 λ n | f ( u ˜ n ) + | λ n + 1 α n + 1 λ n α n | u ˜ n = x n + 1 x n + | λ n + 1 λ n | ( f ( u n ) + f ( u ˜ n ) ) + | λ n + 1 α n + 1 λ n α n | ( u n + u ˜ n ) .
(4.36)

This together with Algorithm 4.2 implies that

y n + 1 y n = u ¯ n + 1 + σ n + 1 ( Q x n + 1 u ¯ n + 1 ) u ¯ n σ n ( Q x n u ¯ n ) u ¯ n + 1 u ¯ n + σ n + 1 Q x n + 1 u ¯ n + 1 + σ n Q x n u ¯ n x n + 1 x n + | λ n + 1 λ n | ( f ( u n ) + f ( u ˜ n ) ) + | λ n + 1 α n + 1 λ n α n | ( u n + u ˜ n ) + σ n + 1 Q x n + 1 u ¯ n + 1 + σ n Q x n u ¯ n .
(4.37)

Hence it follows from (4.32), (4.33) and (4.37) that

w n + 1 w n γ n + 1 ( y n + 1 y n ) + δ n + 1 ( S y n + 1 S y n ) 1 β n + 1 + | γ n + 1 1 β n + 1 γ n 1 β n | y n + | δ n + 1 1 β n + 1 δ n 1 β n | S y n y n + 1 y n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) x n + 1 x n + | λ n + 1 λ n | ( f ( u n ) + f ( u ˜ n ) ) + | λ n + 1 α n + 1 λ n α n | ( u n + u ˜ n ) + σ n + 1 Q x n + 1 u ¯ n + 1 + σ n Q x n u ¯ n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) .

Since { x n }, { u n }, { u ˜ n }, { u ¯ n } and { y n } are bounded, it follows from conditions (i), (iii), (v) and (vi) that

lim sup n ( w n + 1 w n x n + 1 x n ) lim sup n { | λ n + 1 λ n | ( f ( u n ) + f ( u ˜ n ) ) + | λ n + 1 α n + 1 λ n α n | ( u n + u ˜ n ) + σ n + 1 Q x n + 1 u ¯ n + 1 + σ n Q x n u ¯ n + | γ n + 1 1 β n + 1 γ n 1 β n | ( y n + S y n ) } = 0 .

Hence by Lemma 2.1, we get lim n w n x n =0. Thus,

lim n x n + 1 x n = lim n (1 β n ) w n x n =0.
(4.38)

Step 3. lim n B 2 x n B 2 p=0 and lim n B 1 x ˜ n B 1 q=0, where q= P C (p μ 2 B 2 p).

Indeed, utilizing Lemma 2.4 and the convexity of 2 , we obtain from Algorithm 4.2 and (4.25), (4.26), (4.28) that

x n + 1 p 2 β n x n p 2 + ( γ n + δ n ) y n p 2 β n x n p 2 + ( γ n + δ n ) [ σ n Q x n p 2 + ( 1 σ n ) u ¯ n p 2 ] β n x n p 2 + ( γ n + δ n ) [ σ n Q x n p 2 + ( 1 σ n ) ( u ˜ n p 2 + 2 λ n α n p u ¯ n p ) ] β n x n p 2 + ( γ n + δ n ) [ σ n Q x n p 2 + ( 1 σ n ) ( u n p 2 + 2 λ n α n p ( u ˜ n p + u ¯ n p ) ) ] β n x n p 2 + ( γ n + δ n ) [ σ n Q x n p 2 + ( 1 σ n ) ( x n p 2 μ 2 ( 2 β 2 μ 2 ) B 2 x n B 2 p 2 μ 1 ( 2 β 1 μ 1 ) B 1 x ˜ n B 1 q 2 + 2 λ n α n p ( u ˜ n p + u ¯ n p ) ) ] x n p 2 + σ n Q x n p 2 ( γ n + δ n ) ( 1 σ n ) [ μ 2 ( 2 β 2 μ 2 ) B 2 x n B 2 p 2 + μ 1 ( 2 β 1 μ 1 ) B 1 x ˜ n B 1 q 2 ] + 2 λ n α n p ( u ˜ n p + u ¯ n p ) .

Therefore,

( γ n + δ n ) ( 1 σ n ) [ μ 2 ( 2 β 2 μ 2 ) B 2 x n B 2 p 2 + μ 1 ( 2 β 1 μ 1 ) B 1 x ˜ n B 1 q 2 ] x n p 2 x n + 1 p 2 + σ n Q x n p 2 + 2 λ n α n p ( u ˜ n p + u ¯ n p ) ( x n p + x n + 1 p ) x n x n + 1 + σ n Q x n p 2 + 2 λ n α n p ( u ˜ n p + u ¯ n p ) .

Since α n 0, x n x n + 1 0, lim inf n δ n >0, { λ n }[a,b] and σ n 0, it follows that

lim n B 1 x ˜ n B 1 q=0and lim n B 2 x n B 2 p=0.

Step 4. lim n S y n y n =0.

Indeed, utilizing the nonexpansiveness of P C (I λ n f α n ), we have

u ¯ n u ˜ n = P C ( I λ n f α n ) u ˜ n P C ( I λ n f α n ) u n u ˜ n u n .

This together with u ˜ n u n 0 implies that lim n u ¯ n u ˜ n =0 and hence lim n u ¯ n u n =0. Utilizing arguments similar to those of (4.17) and (4.18) in the proof of Theorem 4.1, we get

x ˜ n q 2 x n p 2 x n x ˜ n ( p q ) 2 + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p
(4.39)

and

u n p 2 x ˜ n q 2 x ˜ n u n + ( p q ) 2 + 2 μ 1 x ˜ n u n + ( p q ) B 1 x ˜ n B 1 q .
(4.40)

Utilizing (4.28), (4.39) and (4.40), we have

u ¯ n p 2 = u ˜ n p + u ¯ n u ˜ n 2 u ˜ n p 2 + 2 u ¯ n u ˜ n , u ¯ n p u ˜ n p 2 + 2 u ¯ n u ˜ n u ¯ n p u n p 2 + 2 λ n α n p u ˜ n p + 2 u ¯ n u ˜ n u ¯ n p x ˜ n q 2 x ˜ n u n + ( p q ) 2 + 2 μ 1 x ˜ n u n + ( p q ) B 1 x ˜ n B 1 q + 2 λ n α n p u ˜ n p + 2 u ¯ n u ˜ n u ¯ n p x n p 2 x n x ˜ n ( p q ) 2 + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p x ˜ n u n + ( p q ) 2 + 2 μ 1 x ˜ n u n + ( p q ) B 1 x ˜ n B 1 q + 2 λ n α n p u ˜ n p + 2 u ¯ n u ˜ n u ¯ n p .
(4.41)

Thus, utilizing Lemma 2.4, from Algorithm 4.2 and (4.41) it follows that

x n + 1 p 2 β n x n p 2 + ( γ n + δ n ) y n p 2 β n x n p 2 + ( γ n + δ n ) [ σ n Q x n p 2 + ( 1 σ n ) u ¯ n p 2 ] β n x n p 2 + ( γ n + δ n ) { σ n Q x n p 2 + ( 1 σ n ) [ x n p 2 x n x ˜ n ( p q ) 2 + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p x ˜ n u n + ( p q ) 2 + 2 μ 1 x ˜ n u n + ( p q ) B 1 x ˜ n B 1 q + 2 λ n α n p u ˜ n p + 2 u ¯ n u ˜ n u ¯ n p ] } x n p 2 + σ n Q x n p 2 ( γ n + δ n ) ( 1 σ n ) ( x n x ˜ n ( p q ) 2 + x ˜ n u n + ( p q ) 2 ) + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p + 2 μ 1 x ˜ n u n + ( p q ) B 1 x ˜ n B 1 q + 2 λ n α n p u ˜ n p + 2 u ¯ n u ˜ n u ¯ n p ,

which hence implies that

( γ n + δ n ) ( 1 σ n ) ( x n x ˜ n ( p q ) 2 + x ˜ n u n + ( p q ) 2 ) x n p 2 x n + 1 p 2 + σ n Q x n p 2 + 2 λ n α n p u ˜ n p + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p + 2 μ 1 x ˜ n u n + ( p q ) B 1 x ˜ n B 1 q + 2 u ¯ n u ˜ n u ¯ n p ( x n p + x n + 1 p ) x n x n + 1 + σ n Q x n p 2 + 2 λ n α n p u ˜ n p + 2 μ 2 x n x ˜ n ( p q ) B 2 x n B 2 p + 2 μ 1 x ˜ n u n + ( p q ) B 1 x ˜ n B 1 q + 2 u ¯ n u ˜ n u ¯ n p .

Since lim inf n δ n >0, σ n 0, { λ n }[a,b], α n 0, B 2 x n B 2 p0, B 1 x ˜ n B 1 q0, u ¯ n u ˜ n 0 and x n x n + 1 0, it follows from the boundedness of { x n }, { x ˜ n }, { u n }, { u ˜ n } and { u ¯ n } that

lim n x n x ˜ n ( p q ) =0and lim n x ˜ n u n + ( p q ) =0.

Consequently, it immediately follows that

lim n x n u n =0and lim n x n u ¯ n =0.
(4.42)

This together with y n u ¯ n σ n Q x n u ¯ n 0 implies that

lim n x n y n =0.

Since

δ n ( S y n x n ) = ( x n + 1 x n ) + γ n ( x n y n ) x n + 1 x n + γ n x n y n ,

we have

lim n S y n x n =0and lim n S y n y n =0.

Step 5. lim sup n Q x ¯ x ¯ , x n x ¯ 0, where x ¯ = P Fix ( S ) Ξ Γ Q x ¯ .

Indeed, since { x n } is bounded, there exists a subsequence { x n i } of { x n } such that

lim sup n Q x ¯ x ¯ , x n x ¯ = lim i Q x ¯ x ¯ , x n i x ¯ .
(4.43)

Also, since H 1 is reflexive and { y n } is bounded, without loss of generality, we may assume that y n i p ˆ weakly for some p ˆ C. First, it is clear from Lemma 2.2 and S y n y n 0 that p ˆ Fix(S). Now, let us show that p ˆ Ξ. Note that

x n G ( x n ) = x n P C [ P C ( x n μ 2 B 2 x n ) μ 1 B 1 P C ( x n μ 2 B 2 x n ) ] = x n u n 0 ( n ) ,

where G:CC is defined as in Lemma 1.1. According to Lemma 2.2, we get p ˆ Ξ. Further, let us show that p ˆ Γ. As a matter of fact, since x n u n 0, u ˜ n u n 0 and x n y n 0, we deduce that x n i p ˆ weakly and u ˜ n i p ˆ weakly. Let

Tv={ f ( v ) + N C v if  v C , if  v C ,

where N C v={w H 1 :vu,w0,uC}. Then T is maximal monotone and 0Tv if and only if vVI(C,f); see [41] for more details. Utilizing arguments similar to those of Step 5 in the proof of Theorem 4.1 and the relations

u ˜ n = P C ( u n λ n f α n ( u n ) ) and v C,

we can derive

v p ˆ ,w0.

Since T is maximal monotone, we have p ˆ T 1 0 and hence p ˆ VI(C,f). Thus it is clear that p ˆ Γ. Therefore, p ˆ Fix(S)ΞΓ. Consequently, in terms of Proposition 2.1(i), we obtain from (4.43) that

lim sup n Q x ¯ x ¯ , x n x ¯ = lim i Q x ¯ x ¯ , x n i x ¯ =Q x ¯ x ¯ , p ˆ x ¯ 0.

Step 6. lim n x n x ¯ =0.

Indeed, from (4.25), (4.26) and (4.28) it follows that

u ¯ n x ¯ 2 u ˜ n x ¯ 2 + 2 λ n α n x ¯ u ¯ n x ¯ u n x ¯ 2 + 2 λ n α n x ¯ ( u ¯ n x ¯ + u ˜ n x ¯ ) x n x ¯ 2 + 2 λ n α n x ¯ ( u ¯ n x ¯ + u ˜ n x ¯ ) .

Utilizing arguments similar to those of Step 6 in the proof of Theorem 4.1, we can infer that

Q x n x ¯ , y n x ¯ ρ x n x ¯ 2 +Q x ¯ x ¯ , x n x ¯ +Q x n x ¯ y n x n

and

x n + 1 x ¯ 2 β n x n x ¯ 2 + ( γ n + δ n ) y n x ¯ 2 β n x n x ¯ 2 + ( γ n + δ n ) [ ( 1 σ n ) 2 u ¯ n x ¯ 2 + 2 σ n Q x n x ¯ , y n x ¯ ] β n x n x ¯ 2 + ( γ n + δ n ) { ( 1 σ n ) [ x n x ¯ 2 + 2 λ n α n x ¯ ( u ¯ n x ¯ + u ˜ n x ¯ ) ] + 2 σ n [ ρ x n x ¯ 2 + Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n ] } [ 1 ( 1 2 ρ ) ( γ n + δ n ) σ n ] x n x ¯ 2 + ( 1 2 ρ ) ( γ n + δ n ) σ n 2 [ Q x ¯ x ¯ , x n x ¯ + Q x n x ¯ y n x n ] 1 2 ρ + 2 λ n α n x ¯ ( u ¯ n x ¯ + u ˜ n x ¯ ) .

It is easy to see that all conditions of Lemma 2.3 are satisfied. Consequently, we immediately deduce that x n x ¯ 0 as n.

Finally, from u n x n 0 and u ˜ n x n 0, it follows that u n x ¯ 0 and u ˜ n x ¯ 0. This completes the proof. □

Corollary 4.3 Let C be a nonempty closed convex subset of a real Hilbert space H 1 . Let AB( H 1 , H 2 ) and B i :C H 1 be β i -inverse strongly monotone for i=1,2. Let S:CC be a k-strictly pseudo-contractive mapping such that Fix(S)ΞΓ. For fixed uC and x 0 C given arbitrarily, let { x n }, { u n }, { u ˜ n } be the sequences generated iteratively by

{ u n = P C [ P C ( x n μ 2 B 2 x n ) μ 1 B 1 P C ( x n μ 2 B 2 x n ) ] , u ˜ n = P C ( u n λ n f α n ( u n ) ) , y n = σ n u + ( 1 σ n ) P C ( u ˜ n λ n f α n ( u ˜ n ) ) , x n + 1 = β n x n + γ n y n + δ n S y n , n 0 ,
(4.44)

where μ i (0,2 β i ) for i=1,2, { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ β n },{ γ n },{ δ n }[0,1] such that

(i) n = 0 α n <;

(ii) β n + γ n + δ n =1 and ( γ n + δ n )k γ n for all n0;

(iii) lim n σ n =0 and n = 0 σ n =;

(iv) 0< lim inf n β n lim sup n β n <1 and lim inf n δ n >0;

(v) lim n ( γ n + 1 1 β n + 1 γ n 1 β n )=0;

(vi) 0< lim inf n λ n lim sup n λ n < 2 A 2 and lim n | λ n + 1 λ n |=0.

Then the sequences { x n }, { u n }, { u ˜ n } converge strongly to the same point x ¯ = P Fix ( S ) Ξ Γ u if and only if lim n u ˜ n u n =0. Furthermore, ( x ¯ , y ¯ ) is a solution of GSVI (1.3), where y ¯ = P C ( x ¯ μ 2 B 2 x ¯ ).

In addition, utilizing Corollary 4.3, we derive the following result.

Corollary 4.4 Let C be a nonempty closed convex subset of a real Hilbert space H 1 . Let AB( H 1 , H 2 ) and S:CC be a nonexpansive mapping such that Fix(S)Γ. For fixed uC and x 0 C given arbitrarily, let { x n }, { u ˜ n } be the sequences generated iteratively by

{ u ˜ n = P C ( x n λ n f α n ( x n ) ) , x n + 1 = β n x n + ( 1 β n ) S [ σ n u + ( 1 σ n ) P C ( u ˜ n λ n f α n ( u ˜ n ) ) ] , n 0 ,
(4.45)

where { α n }(0,), { λ n }(0, 2 A 2 ) and { σ n },{ β n }[0,1] such that

(i) n = 0 α n <;

(ii) lim n σ n =0 and n = 0 σ n =;

(iii) 0< lim inf n β n lim sup n β n <1;

(iv) 0< lim inf n λ n lim sup n λ n < 2 A 2 and lim n | λ n + 1 λ n |=0.

Then the sequences { x n }, { u ˜ n } converge strongly to the same point x ¯ = P Fix ( S ) Γ u if and only if lim n u ˜ n x n =0.

Proof In Corollary 4.3, put B 1 = B 2 =0 and γ n =0. Then Ξ=C, β n + δ n =1 for all n0, and the iterative scheme (4.44) is equivalent to

{ u n = x n , u ˜ n = P C ( u n λ n f α n ( u n ) ) , y n = σ n u + ( 1 σ n ) P C ( u ˜ n λ n f α n ( u ˜ n ) ) , x n + 1 = β n x n + δ n S y n , n 0 .

This is equivalent to (4.45). Since S is a nonexpansive mapping, S must be a k-strictly pseudo-contractive mapping with k=0. In this case, it is easy to see that all the conditions (i)-(vi) in Corollary 4.3 are satisfied. Therefore, in terms of Corollary 4.3, we obtain the desired result. □
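As an illustration of the scheme (4.45), here is a minimal numerical sketch on a toy instance; all choices below are illustrative assumptions, not data from the paper. We take H 1 = H 2 = R 2 , A=I, C the closed unit ball, Q= [0,0.5] 2 and S=I (trivially nonexpansive), with the regularized gradient f α (x)= A T (Ax P Q (Ax))+αx. Then Fix(S)Γ= [0,0.5] 2 and the predicted limit is x ¯ = P Fix ( S ) Γ u.

```python
import numpy as np

# Toy instance (illustrative assumptions, not data from the paper):
# H1 = H2 = R^2, A = I, C = closed unit ball, Q = [0, 0.5]^2, S = identity.
P_C = lambda x: x / max(1.0, np.linalg.norm(x))   # projection onto the unit ball
P_Q = lambda y: np.clip(y, 0.0, 0.5)              # projection onto the box
S = lambda x: x                                   # nonexpansive mapping S

def grad_f(x, alpha):               # A = I, so A^T(Ax - P_Q(Ax)) = x - P_Q(x)
    return (x - P_Q(x)) + alpha * x

u = np.array([0.7, 0.7])            # fixed anchor u in C
x = np.array([-0.8, 0.4])           # x_0 in C
for n in range(3000):
    lam, alpha = 0.9, 1.0 / (n + 1) ** 2     # lam in (0, 2/||A||^2); sum(alpha_n) finite
    sigma, beta = 1.0 / (n + 2), 0.5         # sigma_n -> 0 with divergent sum
    u_tilde = P_C(x - lam * grad_f(x, alpha))                       # u~_n
    inner = sigma * u + (1 - sigma) * P_C(u_tilde - lam * grad_f(u_tilde, alpha))
    x = beta * x + (1 - beta) * S(inner)                            # x_{n+1}
```

After the loop, x is close to P Fix ( S ) Γ u, which in this instance is the corner (0.5,0.5) of the box; the scheme first drives the iterates into the feasible set and then the vanishing viscosity term σ n u selects the projection of the anchor.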

Remark 4.1 Theorems 4.1 and 4.2 improve, extend and develop [[23], Theorem 5.7], [[34], Theorem 3.1], [[8], Theorem 3.2] and [[17], Theorem 3.1] in the following aspects:

(i) Compared with the relaxed extragradient iterative algorithm in [[8], Theorem 3.2], our hybrid viscosity iterative algorithms (i.e., Algorithms 4.1 and 4.2) remove the requirement that the domain C, on which the various mappings are defined, be bounded.

(ii) Because both [[23], Theorem 5.7] and [[34], Theorem 3.1] are weak convergence results for solving the SFP, our strong convergence theorems are of particular interest.

(iii) The problem of finding an element of Fix(S)ΞΓ in Theorems 4.1 and 4.2 is more general than the corresponding problems in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively.

(iv) The hybrid extragradient method for finding an element of Fix(S)ΞVI(C,A) in [[17], Theorem 3.1] is extended to develop our hybrid viscosity iterative algorithms (i.e., Algorithms 4.1 and 4.2) for finding an element of Fix(S)ΞΓ.

(v) The proofs of our results differ substantially from that of [[17], Theorem 3.1] because our argument depends to a great extent on Lemma 2.3, on the restriction on the regularization parameter sequence { α n } and on the properties of the averaged mappings P C (I λ n f α n ).

(vi) Because Algorithms 4.1 and 4.2 involve two inverse strongly monotone mappings B 1 and B 2 , a k-strictly pseudocontractive self-mapping S and several parameter sequences, they are more flexible and more subtle than the corresponding algorithms in [[23], Theorem 5.7] and [[34], Theorem 3.1], respectively.

References

  1. Lions JL, Stampacchia G: Variational inequalities. Commun. Pure Appl. Math. 1967, 20: 493–512.

  2. Bnouhachem A, Noor MA, Hao Z: Some new extragradient iterative methods for variational inequalities. Nonlinear Anal. 2009, 70: 1321–1329.

  3. Ceng LC, Ansari QH, Yao JC: Viscosity approximation methods for generalized equilibrium problems and fixed point problems. J. Glob. Optim. 2009, 43: 487–502.

  4. Ceng LC, Huang S: Modified extragradient methods for strict pseudo-contractions and monotone mappings. Taiwan. J. Math. 2009, 13(4):1197–1211.

  5. Ceng LC, Wang CY, Yao JC: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67: 375–390.

  6. Ceng LC, Yao JC: An extragradient-like approximation method for variational inequality problems and fixed point problems. Appl. Math. Comput. 2007, 190: 205–215.

  7. Ceng LC, Yao JC: Relaxed viscosity approximation methods for fixed point problems and variational inequality problems. Nonlinear Anal. 2008, 69: 3299–3309.

  8. Yao Y, Liou YC, Kang SM: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59: 3472–3480.

  9. Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10: 1293–1303.

  10. Nadezhkina N, Takahashi W: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J. Optim. 2006, 16(4):1230–1241.

  11. Ceng LC, Lin YC, Petruşel A: Hybrid method for designing explicit hierarchical fixed point approach to monotone variational inequalities. Taiwan. J. Math. 2012, 16: 1531–1555.

  12. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428.

  13. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201.

  14. Korpelevich GM: An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metod. 1976, 12: 747–756.

  15. Yao Y, Yao JC: On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2007, 186(2):1551–1558.

  16. Verma RU: On a new system of nonlinear variational inequalities and associated iterative algorithms. Math. Sci. Res. Hot-Line 1999, 3(8):65–68.

  17. Ceng LC, Guu SM, Yao JC: Finding common solutions of a variational inequality, a general system of variational inequalities, and a fixed-point problem via a hybrid extragradient method. Fixed Point Theory Appl. 2011., 2011: Article ID 626159. doi:10.1155/2011/626159

  18. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239.

  19. Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453.

  20. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365.

  21. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084.

  22. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244–1256.

  23. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018

  24. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120.

  25. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665.

  26. Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034.

  27. Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266.

  28. Zhao J, Yang Q: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791–1799.

  29. Sezan MI, Stark H: Applications of convex projection theory to image recovery in tomography and related areas. In Image Recovery Theory and Applications. Edited by: Stark H. Academic Press, Orlando; 1987:415–462.

  30. Eicke B: Iteration methods for convexly constrained ill-posed problems in Hilbert spaces. Numer. Funct. Anal. Optim. 1992, 13: 413–429.

  31. Landweber L: An iterative formula for Fredholm integral equations of the first kind. Am. J. Math. 1951, 73: 615–624.

  32. Potter LC, Arun KS: A dual approach to linear inverse problems with convex constraints. SIAM J. Control Optim. 1993, 31: 1080–1092.

  33. Combettes PL, Wajs V: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4: 1168–1200.

  34. Ceng LC, Ansari QH, Yao JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64(4):633–642.

  35. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597.

  36. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159.

  37. Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544.

  38. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53(5–6):475–504.

  39. Marino G, Xu HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329: 336–346.

  40. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.

  41. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88.

Acknowledgements

Dedicated to Professor Hari M Srivastava.

The first author was partially supported by the National Science Foundation of China (11071169), Innovation Program of Shanghai Municipal Education Commission (09ZZ133) and Ph.D. Program Foundation of Ministry of Education of China (20123127110002). The second author was partially supported by the grant NSC 99-2115-M-037-002-MY3.

Author information

Corresponding author

Correspondence to Jen-Chih Yao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors take equal roles in deriving results and writing of this paper.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Ceng, LC., Yao, JC. Relaxed and hybrid viscosity methods for general system of variational inequalities with split feasibility problem constraint. Fixed Point Theory Appl 2013, 43 (2013). https://doi.org/10.1186/1687-1812-2013-43
