
Hybrid extragradient method for hierarchical variational inequalities

Abstract

In this paper, we consider a hierarchical variational inequality problem (HVIP) defined over a common set of solutions of finitely many generalized mixed equilibrium problems, finitely many variational inclusions, a general system of variational inequalities, and the fixed point problem of a strictly pseudocontractive mapping. By combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method and Mann’s iteration method, we introduce and analyze a multistep hybrid extragradient algorithm for finding a solution of our HVIP. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a solution of a general system of variational inequalities defined over a common set of solutions of finitely many generalized mixed equilibrium problems (GMEPs), finitely many variational inclusions, and the fixed point problem of a strictly pseudocontractive mapping. In the meantime, we also prove the strong convergence of the proposed algorithm to a unique solution of our HVIP. The results obtained in this paper improve and extend the corresponding results announced by many others.

MSC:49J30, 47H09, 47J20.

1 Introduction and formulations

1.1 Variational inequalities and equilibrium problems

Let C be a nonempty closed convex subset of a real Hilbert space H and let A : C → H be a nonlinear mapping. The variational inequality problem (VIP) is to find a point x ∈ C such that

⟨Ax, y − x⟩ ≥ 0,  ∀y ∈ C.
(1.1)

The solution set of the VIP (1.1) defined by C and A is denoted by VI(C, A). The theory of variational inequalities is a well-established subject in nonlinear analysis and optimization. For different aspects of variational inequalities and their generalizations, we refer to [1–3] and the references therein. Several methods for solving different kinds of variational inequalities have appeared in the literature. Korpelevich's extragradient method [4] is one of them; it is further studied in [5].
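To make the extragradient idea concrete, here is a minimal numerical sketch of Korpelevich's method for the VIP (1.1). The operator and the constraint set below are hypothetical illustration data, not taken from the paper: C = [0,1]² and A(x) = Mx + q with a monotone, Lipschitz M.

```python
import numpy as np

# Hypothetical VIP data: find x in C = [0,1]^2 with <Ax, y - x> >= 0 for all y in C.
M = np.array([[1.0, 2.0],
              [-2.0, 1.0]])          # symmetric part = I, so A is (strongly) monotone
q = np.array([-1.0, 1.0])
A = lambda x: M @ x + q

def project_C(x):
    # Metric projection onto the box C = [0,1]^2.
    return np.clip(x, 0.0, 1.0)

def extragradient(x, step=0.2, iters=500):
    # step must be < 1/L, where L = ||M|| = sqrt(5) is the Lipschitz constant of A
    for _ in range(iters):
        y = project_C(x - step * A(x))   # predictor step
        x = project_C(x - step * A(y))   # corrector (extra-gradient) step
    return x

x_star = extragradient(np.zeros(2))      # approaches the interior solution (0.6, 0.2)
```

For this instance the unique solution is the interior point where Ax = 0, and the computed x_star satisfies the fixed point characterization x = P_C(x − τAx).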

Let Θ : C×C → ℝ be a bifunction. The equilibrium problem (EP) is to find x ∈ C such that

Θ(x, y) ≥ 0,  ∀y ∈ C.
(1.2)

The set of solutions of EP is denoted by EP(Θ). It is a unified model of several problems, namely, variational inequalities, Nash equilibrium problems, optimization problems, saddle point problems, etc. For further details of EP, we refer to [6, 7] and the references therein.

Let φ : C → ℝ be a real-valued function and A : H → H be a nonlinear mapping. The generalized mixed equilibrium problem (GMEP) [8] is to find x ∈ C such that

Θ(x, y) + φ(y) − φ(x) + ⟨Ax, y − x⟩ ≥ 0,  ∀y ∈ C.
(1.3)

We denote the set of solutions of GMEP (1.3) by GMEP(Θ,φ,A). The GMEP (1.3) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others. The GMEP is further considered and studied in [9, 10] and the references therein.

When A ≡ 0, GMEP (1.3) reduces to the following mixed equilibrium problem (MEP): find x ∈ C such that

Θ(x, y) + φ(y) − φ(x) ≥ 0,  ∀y ∈ C.

The set of solutions of MEP is denoted by MEP(Θ,φ).

The common assumptions on the bifunction Θ : C×C → ℝ to study GMEP (1.3) or EP (1.2) are the following:

(A1) Θ(x, x) = 0 for all x ∈ C;

(A2) Θ is monotone, i.e., Θ(x, y) + Θ(y, x) ≤ 0 for any x, y ∈ C;

(A3) Θ is upper-hemicontinuous, i.e., for each x, y, z ∈ C,

lim sup_{t→0⁺} Θ(tz + (1 − t)x, y) ≤ Θ(x, y);

(A4) Θ(x, ·) is convex and lower semicontinuous for each x ∈ C.

We assume that φ : C → ℝ is a lower semicontinuous and convex function satisfying restriction (B1) or (B2), where

(B1) for each x ∈ H and r > 0, there exist a bounded subset D_x ⊂ C and y_x ∈ C such that, for any z ∈ C \ D_x,

Θ(z, y_x) + φ(y_x) − φ(z) + (1/r)⟨y_x − z, z − x⟩ < 0;

(B2) C is a bounded set.

Given a number r > 0, let T_r^{(Θ,φ)} : H → C assign to each x ∈ H the solution set of the auxiliary mixed equilibrium problem, that is,

T_r^{(Θ,φ)}(x) := {y ∈ C : Θ(y, z) + φ(z) − φ(y) + (1/r)⟨y − x, z − y⟩ ≥ 0, ∀z ∈ C}.

Some elementary conclusions related to MEP are given in the following result.

Proposition 1.1 [11] Let Θ : C×C → ℝ satisfy conditions (A1)-(A4) and let φ : C → ℝ be a proper lower semicontinuous and convex function such that either (B1) or (B2) holds. For r > 0 and x ∈ H, define a mapping T_r^{(Θ,φ)} : H → C by

T_r^{(Θ,φ)}(x) = {z ∈ C : Θ(z, y) + φ(y) − φ(z) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C},  ∀x ∈ H.

Then the following conclusions hold:

  1. (i)

    For each x ∈ H, T_r^{(Θ,φ)}(x) is nonempty and single-valued;

  2. (ii)

    T_r^{(Θ,φ)} is firmly nonexpansive, that is, for any x, y ∈ H,

    ‖T_r^{(Θ,φ)}x − T_r^{(Θ,φ)}y‖² ≤ ⟨T_r^{(Θ,φ)}x − T_r^{(Θ,φ)}y, x − y⟩;

  3. (iii)

    Fix(T_r^{(Θ,φ)}) = MEP(Θ, φ);

  4. (iv)

    MEP(Θ, φ) is closed and convex;

  5. (v)

    ‖T_s^{(Θ,φ)}x − T_t^{(Θ,φ)}x‖² ≤ ((s − t)/s)⟨T_s^{(Θ,φ)}x − T_t^{(Θ,φ)}x, T_s^{(Θ,φ)}x − x⟩ for all s, t > 0 and x ∈ H.

Recently, Cai and Bu [12] considered the problem of finding a common element of the set of solutions of finitely many generalized mixed equilibrium problems, the set of solutions of finitely many variational inequalities, and the set of fixed points of an asymptotically k-strict pseudocontractive mapping in the intermediate sense [13] in a real Hilbert space. They proposed and analyzed an algorithm for finding such an element and presented a weak convergence result for the proposed algorithm.

1.2 General system of variational inequalities

Let F₁, F₂ : C → H be two mappings. We consider the general system of variational inequalities (GSVI) of finding (x*, y*) ∈ C×C such that

⟨ν₁F₁y* + x* − y*, x − x*⟩ ≥ 0,  ∀x ∈ C,
⟨ν₂F₂x* + y* − x*, x − y*⟩ ≥ 0,  ∀x ∈ C,
(1.4)

where ν₁ > 0 and ν₂ > 0 are two constants. It was considered and studied in [5, 14, 15]. In particular, if F₁ = F₂ = A, then the GSVI (1.4) reduces to the problem of finding (x*, y*) ∈ C×C such that

⟨ν₁Ay* + x* − y*, x − x*⟩ ≥ 0,  ∀x ∈ C,
⟨ν₂Ax* + y* − x*, x − y*⟩ ≥ 0,  ∀x ∈ C.
(1.5)

It is called a new system of variational inequalities (NSVI) [9]. It is worth mentioning that this system of two variational inequalities can be used to solve the Nash equilibrium problem; for applications of systems of variational inequalities to Nash equilibrium problems, we refer to [16–19] and the references therein. Further, if additionally x* = y*, then the NSVI reduces to the classical VIP (1.1). Putting G := P_C(I − ν₁F₁)P_C(I − ν₂F₂) and y* = P_C(I − ν₂F₂)x*, Ceng et al. [15] transformed the GSVI (1.4) into the following fixed point equation:

Gx* = x*.
(1.6)
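The fixed point reformulation (1.6) can be illustrated numerically. The data below are hypothetical: C = [0,1]² and F_j(x) = x − c_j, which are strongly monotone, so this particular G is a strict contraction and Picard iteration converges (for merely inverse-strongly monotone F_j, G is only nonexpansive and plain iteration need not converge).

```python
import numpy as np

# Hypothetical instance of G = P_C(I - nu1*F1) P_C(I - nu2*F2) on C = [0,1]^2.
c1, c2 = np.array([0.3, 0.9]), np.array([1.5, -0.2])
nu1 = nu2 = 0.5
P_C = lambda x: np.clip(x, 0.0, 1.0)     # projection onto the box C

def G(x):
    y = P_C(x - nu2 * (x - c2))          # y = P_C((I - nu2*F2) x)
    return P_C(y - nu1 * (y - c1))       # G x = P_C((I - nu1*F1) y)

x = np.zeros(2)
for _ in range(60):                      # contraction factor <= 0.25 here
    x = G(x)
# x now satisfies the fixed point equation G x = x up to machine precision
```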

1.3 Hierarchical variational inequalities

A variational inequality problem defined over the set of fixed points of a nonexpansive mapping is called a hierarchical variational inequality problem. Let S, T : H → H be nonexpansive mappings. We denote by Fix(T) and Fix(S) the sets of fixed points of T and S, respectively. If we replace C by Fix(T) in the formulation of VIP (1.1), then the VIP is defined by Fix(T) and A, and it is called a hierarchical variational inequality problem.

Yao et al. [20] considered the hierarchical variational inequality problem (HVIP) in which the mapping A is replaced by the monotone mapping I − S. They considered the following form of HVIP: find x̃ ∈ Fix(T) such that

⟨(I − S)x̃, p − x̃⟩ ≥ 0,  ∀p ∈ Fix(T).
(1.7)

The solution set of HVIP (1.7) is denoted by Λ. It is not hard to check that solving HVIP (1.7) is equivalent to the fixed point problem of the composite mapping P_{Fix(T)}S, that is, finding x̃ ∈ C such that x̃ = P_{Fix(T)}Sx̃. They proposed and analyzed an iterative method for solving this kind of HVIP. For further details and a comprehensive survey on HVIP, we refer to [21] and the references therein.

1.4 Variational inclusions

Let B : C → H be a single-valued mapping and R : C → 2^H be a set-valued mapping with D(R) = C. We consider the variational inclusion problem of finding a point x ∈ C such that

0 ∈ Bx + Rx.
(1.8)

We denote by I(B,R) the solution set of the variational inclusion (1.8). In particular, if B ≡ R ≡ 0, then I(B,R) = C. If B = 0, then problem (1.8) becomes the inclusion problem introduced by Rockafellar [22]. It is well known that problem (1.8) provides a convenient framework for the unified study of optimal solutions in many optimization-related areas, including mathematical programming, complementarity problems, variational inequalities, optimal control, mathematical economics, equilibria, and game theory.
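A standard way to solve the inclusion (1.8) numerically is the forward-backward iteration x ← J_{R,λ}(x − λBx), anticipating the resolvent characterization recalled later (Lemma 2.4). The instance below is hypothetical: R is the normal cone to the nonnegative orthant (maximal monotone), whose resolvent is simply the projection max(·, 0), and B(x) = x − b is 1-inverse-strongly monotone.

```python
import numpy as np

# Hypothetical inclusion 0 in Bx + Rx on the nonnegative orthant.
b = np.array([2.0, -1.0])
B = lambda x: x - b                      # 1-inverse-strongly monotone
J = lambda x: np.maximum(x, 0.0)         # resolvent J_{R,lam} of this R, any lam > 0

def solve_inclusion(x, lam=0.5, iters=100):
    # Forward (explicit) step on B, backward (resolvent) step on R.
    for _ in range(iters):
        x = J(x - lam * B(x))
    return x

u = solve_inclusion(np.zeros(2))         # converges to (2, 0)
```

At the limit, the first coordinate solves Bu = 0 and the second is pinned at the boundary, where −Bu lies in the normal cone, i.e., 0 ∈ Bu + Ru.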

1.5 Problem to be considered

In this paper, we introduce and study the following hierarchical variational inequality problem (HVIP) defined over a common set of solutions of finitely many GMEPs, finitely many variational inclusions, a general system of variational inequalities, and the fixed point problem of a strictly pseudocontractive mapping. Throughout the paper, M and N denote fixed positive integers.

Problem 1.1 Assume that

  1. (i)

    for j = 1, 2, F_j : C → H and F : H → H are mappings;

  2. (ii)

    for each k ∈ {1, 2, …, M}, Θ_k : C×C → ℝ is a bifunction satisfying conditions (A1)-(A4) and φ_k : C → ℝ ∪ {+∞} is a proper lower semicontinuous and convex function with restriction (B1) or (B2);

  3. (iii)

    for each k ∈ {1, 2, …, M} and i ∈ {1, 2, …, N}, R_i : C → 2^H is a maximal monotone mapping, and A_k : H → H and B_i : C → H are mappings;

  4. (iv)

    T : C → C is a mapping and S : H → H is a nonexpansive mapping;

  5. (v)

    Ω := ∩_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ∩_{i=1}^N I(B_i, R_i) ∩ GSVI(G) ∩ Fix(T) ≠ ∅.

Then the objective is to find x* ∈ Ω such that

⟨(μF − γS)x*, x − x*⟩ ≥ 0,  ∀x ∈ Ω.
(1.9)

By combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method [10], and Mann’s iteration method, we introduce and analyze a multistep hybrid extragradient algorithm for finding a solution of Problem 1.1. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a solution of GSVI (1.4) defined over a common set of solutions of finitely many generalized mixed equilibrium problems (GMEPs), finitely many variational inclusions and the fixed point problem of a strictly pseudocontractive mapping. In the meantime, we also prove the strong convergence of the proposed algorithm to a unique solution of Problem 1.1. The results obtained in this paper improve and extend the corresponding results announced by many others.

2 Preliminaries

Throughout this paper, we assume that H is a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively. Let C be a nonempty closed convex subset of H. We write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x and x_n → x to indicate that the sequence {x_n} converges strongly to x. Moreover, we use ω_w(x_n) to denote the weak ω-limit set of the sequence {x_n}, i.e.,

ω_w(x_n) := {x ∈ H : x_{n_i} ⇀ x for some subsequence {x_{n_i}} of {x_n}}.

Definition 2.1 A mapping A : C → H is called

  1. (i)

    η-strongly monotone if there exists a constant η > 0 such that

    ⟨Ax − Ay, x − y⟩ ≥ η‖x − y‖²,  ∀x, y ∈ C;

  2. (ii)

    ζ-inverse-strongly monotone if there exists a constant ζ > 0 such that

    ⟨Ax − Ay, x − y⟩ ≥ ζ‖Ax − Ay‖²,  ∀x, y ∈ C.

It is easy to see that the projection P_C is 1-inverse-strongly monotone. Inverse-strongly monotone (also referred to as co-coercive) operators have been applied widely in solving practical problems in various fields. It is obvious that if A is ζ-inverse-strongly monotone, then A is monotone and (1/ζ)-Lipschitz continuous. Moreover, we also have, for all u, v ∈ C and λ > 0,

‖(I − λA)u − (I − λA)v‖² = ‖(u − v) − λ(Au − Av)‖²
= ‖u − v‖² − 2λ⟨Au − Av, u − v⟩ + λ²‖Au − Av‖²
≤ ‖u − v‖² + λ(λ − 2ζ)‖Au − Av‖².
(2.1)

So, if λ ≤ 2ζ, then I − λA is a nonexpansive mapping from C to H.
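The nonexpansiveness of I − λA asserted by (2.1) is easy to spot-check numerically. The operator below is a hypothetical example: A(x) = 0.5x is 2-inverse-strongly monotone, so any λ in (0, 4] qualifies.

```python
import numpy as np

# Spot-check of (2.1): for zeta-inverse-strongly monotone A and lam <= 2*zeta,
# the mapping I - lam*A is nonexpansive.
rng = np.random.default_rng(0)
A = lambda x: 0.5 * x                    # zeta = 2 for this (hypothetical) A
lam = 3.0                                # lam <= 2*zeta = 4
T = lambda x: x - lam * A(x)             # the mapping I - lam*A

nonexpansive = all(
    np.linalg.norm(T(u) - T(v)) <= np.linalg.norm(u - v) + 1e-12
    for u, v in (rng.standard_normal((2, 3)) for _ in range(100))
)
print(nonexpansive)   # True
```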

Definition 2.2 A mapping T : H → H is said to be firmly nonexpansive if 2T − I is nonexpansive, or equivalently, if T is 1-inverse-strongly monotone (1-ism), i.e.,

⟨x − y, Tx − Ty⟩ ≥ ‖Tx − Ty‖²,  ∀x, y ∈ H;

alternatively, T is firmly nonexpansive if and only if T can be expressed as

T = (1/2)(I + S),

where S : H → H is nonexpansive; projections are firmly nonexpansive.

It can easily be seen that if T is nonexpansive, then I − T is monotone.

Recall that a mapping T : C → C is said to be ξ-strictly pseudocontractive if there exists a constant ξ ∈ [0, 1) such that ‖Tx − Ty‖² ≤ ‖x − y‖² + ξ‖(I − T)x − (I − T)y‖² for all x, y ∈ C. It is clear that, in a real Hilbert space H, T : C → C is ξ-strictly pseudocontractive if and only if the following inequality holds:

⟨Tx − Ty, x − y⟩ ≤ ‖x − y‖² − ((1 − ξ)/2)‖(I − T)x − (I − T)y‖²,  ∀x, y ∈ C.

This immediately implies that if T is a ξ-strictly pseudocontractive mapping, then I − T is ((1 − ξ)/2)-inverse-strongly monotone; for further details, we refer to [23] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings and that the class of pseudocontractions strictly includes the class of strict pseudocontractions.

Lemma 2.1 [[23], Proposition 2.1]

Let C be a nonempty closed convex subset of a real Hilbert space H and T : C → C be a mapping.

  1. (i)

    If T is a ξ-strictly pseudocontractive mapping, then T satisfies the Lipschitz condition

    ‖Tx − Ty‖ ≤ ((1 + ξ)/(1 − ξ))‖x − y‖,  ∀x, y ∈ C.

  2. (ii)

    If T is a ξ-strictly pseudocontractive mapping, then the mapping I − T is semiclosed at 0; that is, if {x_n} is a sequence in C such that x_n ⇀ x̃ and (I − T)x_n → 0, then (I − T)x̃ = 0.

  3. (iii)

    If T is a ξ-(quasi-)strict pseudocontraction, then the fixed-point set Fix(T) of T is closed and convex, so that the projection P_{Fix(T)} is well defined.

Lemma 2.2 [24]

Let C be a nonempty closed convex subset of a real Hilbert space H. Let T : C → C be a ξ-strictly pseudocontractive mapping. Let γ and δ be two nonnegative real numbers such that (γ + δ)ξ ≤ γ. Then

‖γ(x − y) + δ(Tx − Ty)‖ ≤ (γ + δ)‖x − y‖,  ∀x, y ∈ C.

Let C be a nonempty closed convex subset of a real Hilbert space H. We introduce some notation. Let λ be a number in (0, 1] and let μ > 0. Associated with a nonexpansive mapping T : C → H, we define the mapping T^λ : C → H by

T^λx := Tx − λμF(Tx),  ∀x ∈ C,

where F : H → H is an operator such that, for some positive constants κ, η > 0, F is κ-Lipschitzian and η-strongly monotone on H; that is, F satisfies the conditions

‖Fx − Fy‖ ≤ κ‖x − y‖  and  ⟨Fx − Fy, x − y⟩ ≥ η‖x − y‖²

for all x, y ∈ H.

Remark 2.1 Since F is κ-Lipschitzian and η-strongly monotone on H, we get 0 < η ≤ κ. Hence, whenever 0 < μ < 2η/κ², we have

0 ≤ (1 − μη)² = 1 − 2μη + μ²η² ≤ 1 − 2μη + μ²κ² < 1 − 2μη + (2η/κ²)μκ² = 1,

which implies

0 < 1 − √(1 − 2μη + μ²κ²) ≤ 1.

So, τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1].
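The bound τ ∈ (0, 1] of Remark 2.1 is easy to verify numerically; the sample constants below are illustrative only.

```python
import math

# Remark 2.1: for kappa-Lipschitzian, eta-strongly monotone F with
# 0 < mu < 2*eta/kappa**2, tau = 1 - sqrt(1 - mu*(2*eta - mu*kappa**2)) in (0, 1].
def tau(mu, eta, kappa):
    return 1.0 - math.sqrt(1.0 - mu * (2.0 * eta - mu * kappa ** 2))

print(tau(2.0, 0.5, 0.5))   # 1.0 (the special case F = I/2, mu = 2)
print(tau(0.5, 1.0, 1.5))   # 0.25, a generic admissible choice, indeed in (0, 1]
```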

Finally, recall that a set-valued mapping T : D(T) ⊂ H → 2^H is called monotone if, for all x, y ∈ D(T), f ∈ Tx and g ∈ Ty imply

⟨f − g, x − y⟩ ≥ 0.

A set-valued mapping T is called maximal monotone if T is monotone and (I + λT)D(T) = H for each λ > 0, where I is the identity mapping of H. We denote by G(T) the graph of T. It is well known that a monotone mapping T is maximal if and only if, for any (x, f) ∈ H×H, ⟨f − g, x − y⟩ ≥ 0 for every (y, g) ∈ G(T) implies f ∈ Tx. Next we provide an example to illustrate the concept of a maximal monotone mapping.

Let A : C → H be a monotone, k-Lipschitz-continuous mapping and let N_Cv be the normal cone to C at v ∈ C, i.e.,

N_Cv = {u ∈ H : ⟨v − p, u⟩ ≥ 0, ∀p ∈ C}.

Define

T̃v = Av + N_Cv if v ∈ C,  and  T̃v = ∅ if v ∉ C.

Then T̃ is maximal monotone (see [22]) and

0 ∈ T̃v  ⇔  v ∈ VI(C, A).
(2.2)

Let R : D(R) ⊂ H → 2^H be a maximal monotone mapping. For λ > 0, the resolvent operator J_{R,λ} : H → cl D(R) (the closure of D(R)) associated with R and λ is defined by

J_{R,λ}x = (I + λR)^{−1}x,  ∀x ∈ H.

The following lemma shows that the resolvent operator J_{R,λ} is nonexpansive.

Lemma 2.3 [25]

J_{R,λ} is single-valued and firmly nonexpansive, i.e.,

⟨J_{R,λ}x − J_{R,λ}y, x − y⟩ ≥ ‖J_{R,λ}x − J_{R,λ}y‖²,  ∀x, y ∈ H.

Consequently, J R , λ is nonexpansive and monotone.

Lemma 2.4 [26]

Let R be a maximal monotone mapping with D(R) = C. Then, for any given λ > 0, u ∈ C is a solution of the variational inclusion (1.8) if and only if u satisfies

u = J_{R,λ}(u − λBu).

Lemma 2.5 [27]

Let R be a maximal monotone mapping with D(R) = C and let B : C → H be a strongly monotone, continuous, and single-valued mapping. Then, for each z ∈ H, the equation z ∈ (B + λR)x has a unique solution x_λ for λ > 0.

Lemma 2.6 [26]

Let R be a maximal monotone mapping with D(R) = C and B : C → H be a monotone, continuous, and single-valued mapping. Then (I + λ(R + B))C = H for each λ > 0. In this case, R + B is maximal monotone.

Lemma 2.7 [28]

We have the resolvent identity

J_{R,λ}x = J_{R,μ}((μ/λ)x + (1 − μ/λ)J_{R,λ}x),  ∀x ∈ H.

Remark 2.2 For λ, μ > 0, we have the following relation:

‖J_{R,λ}x − J_{R,μ}y‖ ≤ ‖x − y‖ + |λ − μ|((1/λ)‖J_{R,λ}x − y‖ + (1/μ)‖x − J_{R,μ}y‖),  ∀x, y ∈ H.
(2.3)

Indeed, whenever λ ≥ μ, utilizing Lemma 2.7 we deduce that

‖J_{R,λ}x − J_{R,μ}y‖ = ‖J_{R,μ}((μ/λ)x + (1 − μ/λ)J_{R,λ}x) − J_{R,μ}y‖
≤ ‖(μ/λ)x + (1 − μ/λ)J_{R,λ}x − y‖
≤ (μ/λ)‖x − y‖ + (1 − μ/λ)‖J_{R,λ}x − y‖
≤ ‖x − y‖ + (|λ − μ|/λ)‖J_{R,λ}x − y‖.

Similarly, whenever λ < μ, we get

‖J_{R,λ}x − J_{R,μ}y‖ ≤ ‖x − y‖ + (|λ − μ|/μ)‖x − J_{R,μ}y‖.

Combining the above two cases we conclude that (2.3) holds.
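The resolvent identity of Lemma 2.7 can be spot-checked on the real line. The choice R = ∂|·| is a hypothetical example: it is maximal monotone and its resolvent J_{R,λ} is the classical soft-thresholding operator.

```python
# Resolvent of lam * (subdifferential of |.|): soft-thresholding on the real line.
def J(lam, x):
    s = abs(x) - lam
    return (s if s > 0 else 0.0) * (1.0 if x >= 0 else -1.0)

# Check J_{R,lam} x = J_{R,mu}((mu/lam) x + (1 - mu/lam) J_{R,lam} x) at a sample point.
x, lam, mu = 3.0, 2.0, 1.0
lhs = J(lam, x)
rhs = J(mu, (mu / lam) * x + (1.0 - mu / lam) * J(lam, x))
print(lhs, rhs)   # 1.0 1.0 -- both sides agree
```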

We need the following facts and lemmas to establish the strong convergence of the sequences generated by the proposed algorithm.

Lemma 2.8 Let X be a real inner product space. Then

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩,  ∀x, y ∈ X.

Lemma 2.9 Let H be a real Hilbert space. Then:

  1. (a)

    ‖x − y‖² = ‖x‖² − ‖y‖² − 2⟨x − y, y⟩ for all x, y ∈ H;

  2. (b)

    ‖λx + μy‖² = λ‖x‖² + μ‖y‖² − λμ‖x − y‖² for all x, y ∈ H and λ, μ ∈ [0, 1] with λ + μ = 1;

  3. (c)

    If {x_n} is a sequence in H such that x_n ⇀ x, it follows that

    lim sup_{n→∞} ‖x_n − y‖² = lim sup_{n→∞} ‖x_n − x‖² + ‖x − y‖²,  ∀y ∈ H.

Lemma 2.10 (Demiclosedness principle [29])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let S be a nonexpansive self-mapping on C with Fix(S) ≠ ∅. Then I − S is demiclosed; that is, whenever {x_n} is a sequence in C weakly converging to some x ∈ C and the sequence {(I − S)x_n} strongly converges to some y, it follows that (I − S)x = y, where I is the identity operator of H.

Lemma 2.11 [[30], Lemma 3.1]

T^λ is a contraction provided 0 < μ < 2η/κ²; that is,

‖T^λx − T^λy‖ ≤ (1 − λτ)‖x − y‖,  ∀x, y ∈ C,

where τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1].

Remark 2.3 (a) As noted in Remark 2.1, since F is κ-Lipschitzian and η-strongly monotone on H and 0 < μ < 2η/κ², we have τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1].

(b) In Lemma 2.11, put F = (1/2)I and μ = 2. Then we know that κ = η = 1/2, 0 < μ = 2 < 2η/κ² = 4, and

τ = 1 − √(1 − μ(2η − μκ²)) = 1 − √(1 − 2(2·(1/2) − 2·(1/2)²)) = 1.

Lemma 2.12 [30]

Let {s_n} be a sequence of nonnegative numbers satisfying the conditions

s_{n+1} ≤ (1 − α_n)s_n + α_nβ_n,  ∀n ≥ 1,

where {α_n} and {β_n} are sequences of real numbers such that

  1. (a)

    {α_n} ⊂ [0, 1] and Σ_{n=1}^∞ α_n = ∞, or equivalently,

    Π_{n=1}^∞ (1 − α_n) := lim_{n→∞} Π_{k=1}^n (1 − α_k) = 0;

  2. (b)

    lim sup_{n→∞} β_n ≤ 0, or Σ_{n=1}^∞ |α_nβ_n| < ∞.

Then lim_{n→∞} s_n = 0.
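Lemma 2.12 can be illustrated by running the recursion with equality for a hypothetical admissible choice of parameters: α_n = 1/n (so the sum diverges) and β_n = 1/n → 0.

```python
# Numerical illustration of Lemma 2.12: s_{n+1} = (1 - alpha_n) s_n + alpha_n beta_n
# with alpha_n = beta_n = 1/n; the lemma predicts s_n -> 0.
s = 1.0
for n in range(1, 200001):
    alpha, beta = 1.0 / n, 1.0 / n
    s = (1.0 - alpha) * s + alpha * beta
print(s)   # small, roughly of order log(n)/n
```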

3 Algorithms and convergence results

In this section, we will introduce and analyze a multistep hybrid extragradient algorithm for finding a solution of the HVIP (1.9) (over the fixed point set of a strictly pseudocontractive mapping) with constraints of several problems: GSVI (1.4), finitely many GMEPs, and finitely many variational inclusions in a real Hilbert space. This algorithm is based on Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method [10], and Mann’s iteration method. We prove the strong convergence of the proposed algorithm to a unique solution of HVIP (1.9) under suitable conditions.

We propose the following algorithm to compute the approximate solution of Problem 1.1.

Algorithm 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. For each k ∈ {1, 2, …, M}, let Θ_k : C×C → ℝ be a bifunction satisfying conditions (A1)-(A4) and φ_k : C → ℝ ∪ {+∞} be a proper lower semicontinuous and convex function with restriction (B1) or (B2). For each k ∈ {1, 2, …, M} and i ∈ {1, 2, …, N}, let R_i : C → 2^H be a maximal monotone mapping, and let A_k : H → H and B_i : C → H be μ_k-inverse-strongly monotone and η_i-inverse-strongly monotone, respectively. Let T : C → C be a ξ-strictly pseudocontractive mapping, S : H → H be a nonexpansive mapping, and V : H → H be a ρ-contraction with coefficient ρ ∈ [0, 1). For each j = 1, 2, let F_j : C → H be ζ_j-inverse-strongly monotone, and let F : H → H be κ-Lipschitzian and η-strongly monotone with positive constants κ, η > 0 such that 0 ≤ γ < τ and 0 < μ < 2η/κ², where τ = 1 − √(1 − μ(2η − μκ²)). Assume that Ω := ∩_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ∩_{i=1}^N I(B_i, R_i) ∩ GSVI(G) ∩ Fix(T) ≠ ∅. Let {α_n}, {λ_n} ⊂ (0, 1], {β_n}, {γ_n}, {δ_n} ⊂ [0, 1], {ρ_n} ⊂ (0, 2α], {λ_{i,n}} ⊂ [a_i, b_i] ⊂ (0, 2η_i), and {r_{k,n}} ⊂ [c_k, d_k] ⊂ (0, 2μ_k), where i ∈ {1, 2, …, N} and k ∈ {1, 2, …, M}. For arbitrarily given x₀ ∈ H, let {x_n} be the sequence generated by

u_n = T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M) T_{r_{M−1,n}}^{(Θ_{M−1},φ_{M−1})}(I − r_{M−1,n}A_{M−1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I − r_{1,n}A_1)x_n,
v_n = J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N) J_{R_{N−1},λ_{N−1,n}}(I − λ_{N−1,n}B_{N−1}) ⋯ J_{R_1,λ_{1,n}}(I − λ_{1,n}B_1)u_n,
y_n = β_nx_n + γ_nGv_n + δ_nTGv_n,
x_{n+1} = λ_nγ(α_nVx_n + (1 − α_n)Sx_n) + (I − λ_nμF)y_n,  ∀n ≥ 0,
(3.1)

where G := P_C(I − ν₁F₁)P_C(I − ν₂F₂) with ν_j ∈ (0, 2ζ_j) for j = 1, 2.

Theorem 3.1 In addition to the assumptions in Algorithm 3.1, suppose that

  1. (i)

    lim_{n→∞} λ_n = 0, Σ_{n=0}^∞ λ_n = ∞, and lim_{n→∞} (1/λ_n)|1 − α_{n−1}/α_n| = 0;

  2. (ii)

    lim sup_{n→∞} α_n/λ_n < ∞, lim_{n→∞} (1/λ_n)|1/α_n − 1/α_{n−1}| = 0, and lim_{n→∞} (1/α_n)|1 − λ_{n−1}/λ_n| = 0;

  3. (iii)

    lim_{n→∞} |β_n − β_{n−1}|/(λ_nα_n) = 0 and lim_{n→∞} |γ_n − γ_{n−1}|/(λ_nα_n) = 0;

  4. (iv)

    lim_{n→∞} |λ_{i,n} − λ_{i,n−1}|/(λ_nα_n) = 0 and lim_{n→∞} |r_{k,n} − r_{k,n−1}|/(λ_nα_n) = 0 for i = 1, 2, …, N and k = 1, 2, …, M;

  5. (v)

    β_n + γ_n + δ_n = 1 and (γ_n + δ_n)ξ ≤ γ_n for all n ≥ 0;

  6. (vi)

    {β_n} ⊂ [a, b] ⊂ (0, 1) and lim inf_{n→∞} δ_n > 0.

Then

  1. (a)

    lim_{n→∞} ‖x_{n+1} − x_n‖/α_n = 0;

  2. (b)

    ω_w(x_n) ⊂ Ω;

  3. (c)

    {x_n} converges strongly to a point x* ∈ Ω, which is the unique solution of HVIP (1.9); that is,

    ⟨(μF − γS)x*, p − x*⟩ ≥ 0,  ∀p ∈ Ω.

Proof First of all, observe that

μη ≥ τ
⇔ μη ≥ 1 − √(1 − μ(2η − μκ²))
⇔ √(1 − μ(2η − μκ²)) ≥ 1 − μη
⇔ 1 − 2μη + μ²κ² ≥ 1 − 2μη + μ²η²
⇔ κ² ≥ η²
⇔ κ ≥ η

and

⟨(μF − γS)x − (μF − γS)y, x − y⟩ = μ⟨Fx − Fy, x − y⟩ − γ⟨Sx − Sy, x − y⟩
≥ μη‖x − y‖² − γ‖x − y‖² = (μη − γ)‖x − y‖²,  ∀x, y ∈ H.

Since 0 ≤ γ < τ and κ ≥ η, we know that μη ≥ τ > γ and hence the mapping μF − γS is (μη − γ)-strongly monotone. Moreover, it is clear that the mapping μF − γS is (μκ + γ)-Lipschitzian. Thus, there exists a unique solution x* in Ω to the VIP

⟨(μF − γS)x*, p − x*⟩ ≥ 0,  ∀p ∈ Ω.

That is, {x*} = VI(Ω, μF − γS). Now we put

Δ_n^k = T_{r_{k,n}}^{(Θ_k,φ_k)}(I − r_{k,n}A_k) T_{r_{k−1,n}}^{(Θ_{k−1},φ_{k−1})}(I − r_{k−1,n}A_{k−1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I − r_{1,n}A_1)x_n

for all k ∈ {1, 2, …, M} and n ≥ 1,

Λ_n^i = J_{R_i,λ_{i,n}}(I − λ_{i,n}B_i) J_{R_{i−1},λ_{i−1,n}}(I − λ_{i−1,n}B_{i−1}) ⋯ J_{R_1,λ_{1,n}}(I − λ_{1,n}B_1)

for all i ∈ {1, 2, …, N}, Δ_n^0 = I, and Λ_n^0 = I, where I is the identity mapping on H. Then we have u_n = Δ_n^M x_n and v_n = Λ_n^N u_n.

We divide the rest of the proof into several steps.

Step 1. We prove that { x n } is bounded.

Indeed, take an arbitrary p ∈ Ω. Utilizing (2.1) and Proposition 1.1(ii), we have

‖u_n − p‖ = ‖T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_n^{M−1}x_n − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_n^{M−1}p‖
≤ ‖(I − r_{M,n}A_M)Δ_n^{M−1}x_n − (I − r_{M,n}A_M)Δ_n^{M−1}p‖
≤ ‖Δ_n^{M−1}x_n − Δ_n^{M−1}p‖
≤ ⋯
≤ ‖Δ_n^0x_n − Δ_n^0p‖ = ‖x_n − p‖.
(3.2)

Utilizing (2.1) and Lemma 2.3, we have

‖v_n − p‖ = ‖J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}p‖
≤ ‖(I − λ_{N,n}B_N)Λ_n^{N−1}u_n − (I − λ_{N,n}B_N)Λ_n^{N−1}p‖
≤ ‖Λ_n^{N−1}u_n − Λ_n^{N−1}p‖
≤ ⋯
≤ ‖Λ_n^0u_n − Λ_n^0p‖ = ‖u_n − p‖.
(3.3)

Combining (3.2) and (3.3), we have

‖v_n − p‖ ≤ ‖x_n − p‖.
(3.4)

Since p = Gp = P_C(I − ν₁F₁)P_C(I − ν₂F₂)p, F_j is ζ_j-inverse-strongly monotone for j = 1, 2, and 0 < ν_j ≤ 2ζ_j for j = 1, 2, we deduce that, for any n ≥ 0,

‖Gv_n − p‖² = ‖P_C(I − ν₁F₁)P_C(I − ν₂F₂)v_n − P_C(I − ν₁F₁)P_C(I − ν₂F₂)p‖²
≤ ‖(I − ν₁F₁)P_C(I − ν₂F₂)v_n − (I − ν₁F₁)P_C(I − ν₂F₂)p‖²
= ‖[P_C(I − ν₂F₂)v_n − P_C(I − ν₂F₂)p] − ν₁[F₁P_C(I − ν₂F₂)v_n − F₁P_C(I − ν₂F₂)p]‖²
≤ ‖P_C(I − ν₂F₂)v_n − P_C(I − ν₂F₂)p‖² + ν₁(ν₁ − 2ζ₁)‖F₁P_C(I − ν₂F₂)v_n − F₁P_C(I − ν₂F₂)p‖²
≤ ‖P_C(I − ν₂F₂)v_n − P_C(I − ν₂F₂)p‖²
≤ ‖(I − ν₂F₂)v_n − (I − ν₂F₂)p‖²
= ‖(v_n − p) − ν₂(F₂v_n − F₂p)‖²
≤ ‖v_n − p‖² + ν₂(ν₂ − 2ζ₂)‖F₂v_n − F₂p‖²
≤ ‖v_n − p‖².
(3.5)

(This shows that G : C → C is a nonexpansive mapping.) Since (γ_n + δ_n)ξ ≤ γ_n for all n ≥ 0 and T is ξ-strictly pseudocontractive, utilizing Lemma 2.2, we obtain from (3.1), (3.4), and (3.5)

‖y_n − p‖ = ‖β_nx_n + γ_nGv_n + δ_nTGv_n − p‖
= ‖β_n(x_n − p) + γ_n(Gv_n − p) + δ_n(TGv_n − p)‖
≤ β_n‖x_n − p‖ + ‖γ_n(Gv_n − p) + δ_n(TGv_n − p)‖
≤ β_n‖x_n − p‖ + (γ_n + δ_n)‖Gv_n − p‖
≤ β_n‖x_n − p‖ + (γ_n + δ_n)‖v_n − p‖
≤ β_n‖x_n − p‖ + (γ_n + δ_n)‖x_n − p‖ = ‖x_n − p‖.
(3.6)

Utilizing Lemma 2.11, we deduce from (3.1), (3.6), and 0 ≤ γ < τ that, for all n ≥ 0,

‖x_{n+1} − p‖ = ‖λ_nγ(α_nVx_n + (1 − α_n)Sx_n) + (I − λ_nμF)y_n − p‖
= ‖λ_nγ(α_nVx_n + (1 − α_n)Sx_n) − λ_nμFp + (I − λ_nμF)y_n − (I − λ_nμF)p‖
≤ λ_n‖γ(α_nVx_n + (1 − α_n)Sx_n) − μFp‖ + ‖(I − λ_nμF)y_n − (I − λ_nμF)p‖
= λ_n‖α_n(γVx_n − μFp) + (1 − α_n)(γSx_n − μFp)‖ + ‖(I − λ_nμF)y_n − (I − λ_nμF)p‖
≤ λ_n[α_n‖γVx_n − μFp‖ + (1 − α_n)‖γSx_n − μFp‖] + ‖(I − λ_nμF)y_n − (I − λ_nμF)p‖
≤ λ_n[α_n(γ‖Vx_n − Vp‖ + ‖γVp − μFp‖) + (1 − α_n)(γ‖Sx_n − Sp‖ + ‖γSp − μFp‖)] + (1 − λ_nτ)‖y_n − p‖
≤ λ_n[α_n(γρ‖x_n − p‖ + ‖γVp − μFp‖) + (1 − α_n)(γ‖x_n − p‖ + ‖γSp − μFp‖)] + (1 − λ_nτ)‖x_n − p‖
≤ λ_n[(1 − α_n(1 − ρ))γ‖x_n − p‖ + max{‖γVp − μFp‖, ‖γSp − μFp‖}] + (1 − λ_nτ)‖x_n − p‖
= λ_n(1 − α_n(1 − ρ))γ‖x_n − p‖ + λ_n max{‖γVp − μFp‖, ‖γSp − μFp‖} + (1 − λ_nτ)‖x_n − p‖
≤ λ_nγ‖x_n − p‖ + λ_n max{‖γVp − μFp‖, ‖γSp − μFp‖} + (1 − λ_nτ)‖x_n − p‖
= (1 − λ_n(τ − γ))‖x_n − p‖ + λ_n max{‖γVp − μFp‖, ‖γSp − μFp‖}
= (1 − λ_n(τ − γ))‖x_n − p‖ + λ_n(τ − γ)·max{‖γVp − μFp‖, ‖γSp − μFp‖}/(τ − γ)
≤ max{‖x_n − p‖, ‖γVp − μFp‖/(τ − γ), ‖γSp − μFp‖/(τ − γ)}.

By induction, we get

‖x_n − p‖ ≤ max{‖x₀ − p‖, ‖γVp − μFp‖/(τ − γ), ‖γSp − μFp‖/(τ − γ)},  ∀n ≥ 0.

Thus, { x n } is bounded and so are the sequences { u n }, { v n }, and { y n }.

Step 2. We prove that lim_{n→∞} ‖x_{n+1} − x_n‖/α_n = 0.

Indeed, utilizing (2.1) and (2.3), we obtain

‖v_{n+1} − v_n‖ = ‖Λ_{n+1}^Nu_{n+1} − Λ_n^Nu_n‖
= ‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n+1}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖
≤ ‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n+1}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n+1}}(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1}‖ + ‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖
≤ ‖(I − λ_{N,n+1}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1}‖ + ‖(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖ + |λ_{N,n+1} − λ_{N,n}| × ((1/λ_{N,n+1})‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖ + (1/λ_{N,n})‖(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖)
≤ |λ_{N,n+1} − λ_{N,n}|(‖B_NΛ_{n+1}^{N−1}u_{n+1}‖ + M̃) + ‖Λ_{n+1}^{N−1}u_{n+1} − Λ_n^{N−1}u_n‖
≤ |λ_{N,n+1} − λ_{N,n}|(‖B_NΛ_{n+1}^{N−1}u_{n+1}‖ + M̃) + |λ_{N−1,n+1} − λ_{N−1,n}|(‖B_{N−1}Λ_{n+1}^{N−2}u_{n+1}‖ + M̃) + ‖Λ_{n+1}^{N−2}u_{n+1} − Λ_n^{N−2}u_n‖
≤ ⋯
≤ |λ_{N,n+1} − λ_{N,n}|(‖B_NΛ_{n+1}^{N−1}u_{n+1}‖ + M̃) + |λ_{N−1,n+1} − λ_{N−1,n}|(‖B_{N−1}Λ_{n+1}^{N−2}u_{n+1}‖ + M̃) + ⋯ + |λ_{1,n+1} − λ_{1,n}|(‖B_1Λ_{n+1}^0u_{n+1}‖ + M̃) + ‖Λ_{n+1}^0u_{n+1} − Λ_n^0u_n‖
≤ M̃₀ Σ_{i=1}^N |λ_{i,n+1} − λ_{i,n}| + ‖u_{n+1} − u_n‖,
(3.7)

where

sup_{n≥0} {(1/λ_{N,n+1})‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖ + (1/λ_{N,n})‖(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖} ≤ M̃

for some M̃ > 0, and sup_{n≥0} {Σ_{i=1}^N ‖B_iΛ_{n+1}^{i−1}u_{n+1}‖ + M̃} ≤ M̃₀ for some M̃₀ > 0.

Utilizing Proposition 1.1(ii), (v), we deduce that

‖u_{n+1} − u_n‖ = ‖Δ_{n+1}^Mx_{n+1} − Δ_n^Mx_n‖
= ‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_n^{M−1}x_n‖
≤ ‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_{n+1}^{M−1}x_{n+1}‖ + ‖T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_{n+1}^{M−1}x_{n+1} − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_n^{M−1}x_n‖
≤ ‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1}‖ + ‖T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_{n+1}^{M−1}x_{n+1}‖ + ‖(I − r_{M,n}A_M)Δ_{n+1}^{M−1}x_{n+1} − (I − r_{M,n}A_M)Δ_n^{M−1}x_n‖
≤ (|r_{M,n+1} − r_{M,n}|/r_{M,n+1})‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − (I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1}‖ + |r_{M,n+1} − r_{M,n}|‖A_MΔ_{n+1}^{M−1}x_{n+1}‖ + ‖Δ_{n+1}^{M−1}x_{n+1} − Δ_n^{M−1}x_n‖
= |r_{M,n+1} − r_{M,n}|[‖A_MΔ_{n+1}^{M−1}x_{n+1}‖ + (1/r_{M,n+1})‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − (I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1}‖] + ‖Δ_{n+1}^{M−1}x_{n+1} − Δ_n^{M−1}x_n‖
≤ ⋯
≤ |r_{M,n+1} − r_{M,n}|[‖A_MΔ_{n+1}^{M−1}x_{n+1}‖ + (1/r_{M,n+1})‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − (I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1}‖] + ⋯ + |r_{1,n+1} − r_{1,n}|[‖A_1Δ_{n+1}^0x_{n+1}‖ + (1/r_{1,n+1})‖T_{r_{1,n+1}}^{(Θ_1,φ_1)}(I − r_{1,n+1}A_1)Δ_{n+1}^0x_{n+1} − (I − r_{1,n+1}A_1)Δ_{n+1}^0x_{n+1}‖] + ‖Δ_{n+1}^0x_{n+1} − Δ_n^0x_n‖
≤ M̃₁ Σ_{k=1}^M |r_{k,n+1} − r_{k,n}| + ‖x_{n+1} − x_n‖,
(3.8)

where M̃₁ > 0 is a constant such that, for each n ≥ 0,

Σ_{k=1}^M [‖A_kΔ_{n+1}^{k−1}x_{n+1}‖ + (1/r_{k,n+1})‖T_{r_{k,n+1}}^{(Θ_k,φ_k)}(I − r_{k,n+1}A_k)Δ_{n+1}^{k−1}x_{n+1} − (I − r_{k,n+1}A_k)Δ_{n+1}^{k−1}x_{n+1}‖] ≤ M̃₁.

Furthermore, we define y_n = β_nx_n + (1 − β_n)w_n for all n ≥ 0. It follows that

w_{n+1} − w_n = (y_{n+1} − β_{n+1}x_{n+1})/(1 − β_{n+1}) − (y_n − β_nx_n)/(1 − β_n)
= (γ_{n+1}Gv_{n+1} + δ_{n+1}TGv_{n+1})/(1 − β_{n+1}) − (γ_nGv_n + δ_nTGv_n)/(1 − β_n)
= [γ_{n+1}(Gv_{n+1} − Gv_n) + δ_{n+1}(TGv_{n+1} − TGv_n)]/(1 − β_{n+1}) + (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n))Gv_n + (δ_{n+1}/(1 − β_{n+1}) − δ_n/(1 − β_n))TGv_n.
(3.9)

Since (γ_n + δ_n)ξ ≤ γ_n for all n ≥ 0, utilizing Lemma 2.2, we have

‖γ_{n+1}(Gv_{n+1} − Gv_n) + δ_{n+1}(TGv_{n+1} − TGv_n)‖ ≤ (γ_{n+1} + δ_{n+1})‖Gv_{n+1} − Gv_n‖.
(3.10)

Hence it follows from (3.7)-(3.10) that

‖w_{n+1} − w_n‖ ≤ ‖γ_{n+1}(Gv_{n+1} − Gv_n) + δ_{n+1}(TGv_{n+1} − TGv_n)‖/(1 − β_{n+1}) + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|‖Gv_n‖ + |δ_{n+1}/(1 − β_{n+1}) − δ_n/(1 − β_n)|‖TGv_n‖
≤ ((γ_{n+1} + δ_{n+1})/(1 − β_{n+1}))‖Gv_{n+1} − Gv_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖)
= ‖Gv_{n+1} − Gv_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖)
≤ ‖v_{n+1} − v_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖)
≤ M̃₀ Σ_{i=1}^N |λ_{i,n+1} − λ_{i,n}| + ‖u_{n+1} − u_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖)
≤ M̃₀ Σ_{i=1}^N |λ_{i,n+1} − λ_{i,n}| + M̃₁ Σ_{k=1}^M |r_{k,n+1} − r_{k,n}| + ‖x_{n+1} − x_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖).
(3.11)

In the meantime, a simple calculation shows that

y_{n+1} − y_n = β_n(x_{n+1} − x_n) + (1 − β_n)(w_{n+1} − w_n) + (β_{n+1} − β_n)(x_{n+1} − w_{n+1}).

So, it follows from (3.11) that

‖y_{n+1} − y_n‖ ≤ β_n‖x_{n+1} − x_n‖ + (1 − β_n)‖w_{n+1} − w_n‖ + |β_{n+1} − β_n|‖x_{n+1} − w_{n+1}‖
≤ β_n‖x_{n+1} − x_n‖ + (1 − β_n)[M̃₀ Σ_{i=1}^N |λ_{i,n+1} − λ_{i,n}| + M̃₁ Σ_{k=1}^M |r_{k,n+1} − r_{k,n}| + ‖x_{n+1} − x_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖)] + |β_{n+1} − β_n|‖x_{n+1} − w_{n+1}‖
≤ ‖x_{n+1} − x_n‖ + M̃₀ Σ_{i=1}^N |λ_{i,n+1} − λ_{i,n}| + M̃₁ Σ_{k=1}^M |r_{k,n+1} − r_{k,n}| + [(|γ_{n+1} − γ_n|(1 − β_n) + γ_n|β_{n+1} − β_n|)/(1 − β_{n+1})](‖Gv_n‖ + ‖TGv_n‖) + |β_{n+1} − β_n|‖x_{n+1} − w_{n+1}‖
≤ ‖x_{n+1} − x_n‖ + M̃₀ Σ_{i=1}^N |λ_{i,n+1} − λ_{i,n}| + M̃₁ Σ_{k=1}^M |r_{k,n+1} − r_{k,n}| + |γ_{n+1} − γ_n|(‖Gv_n‖ + ‖TGv_n‖)/(1 − b) + |β_{n+1} − β_n|(‖x_{n+1} − w_{n+1}‖ + (‖Gv_n‖ + ‖TGv_n‖)/(1 − b))
≤ ‖x_{n+1} − x_n‖ + M̃₂(Σ_{i=1}^N |λ_{i,n+1} − λ_{i,n}| + Σ_{k=1}^M |r_{k,n+1} − r_{k,n}| + |γ_{n+1} − γ_n| + |β_{n+1} − β_n|),
(3.12)

where sup_{n≥0} {‖x_{n+1} − w_{n+1}‖ + (‖Gv_n‖ + ‖TGv_n‖)/(1 − b) + M̃₀ + M̃₁} ≤ M̃₂ for some M̃₂ > 0.

On the other hand, we define z_n := α_nVx_n + (1 − α_n)Sx_n for all n ≥ 0. Then, by (3.1), x_{n+1} = λ_nγz_n + (I − λ_nμF)y_n for all n ≥ 0. Simple calculations show that

{ z n + 1 z n = ( α n + 1 α n ) ( V x n S x n ) + α n + 1 ( V x n + 1 V x n ) z n + 1 z n = + ( 1 α n + 1 ) ( S x n + 1 S x n ) , x n + 2 x n + 1 = ( λ n + 1 λ n ) ( γ z n μ F y n ) + λ n + 1 γ ( z n + 1 z n ) x n + 2 x n + 1 = + ( I λ n + 1 μ F ) y n + 1 ( I λ n + 1 μ F ) y n .

Since V is a ρ-contraction with coefficient ρ ∈ [0,1) and S is a nonexpansive mapping, we conclude that

z n + 1 z n | α n + 1 α n | V x n S x n + α n + 1 V x n + 1 V x n + ( 1 α n + 1 ) S x n + 1 S x n | α n + 1 α n | V x n S x n + α n + 1 ρ x n + 1 x n + ( 1 α n + 1 ) x n + 1 x n = ( 1 α n + 1 ( 1 ρ ) ) x n + 1 x n + | α n + 1 α n | V x n S x n ,

which together with (3.12) and 0 ≤ γ < τ implies that

x n + 2 x n + 1 | λ n + 1 λ n | γ z n μ F y n + λ n + 1 γ z n + 1 z n + ( I λ n + 1 μ F ) y n + 1 ( I λ n + 1 μ F ) y n | λ n + 1 λ n | γ z n μ F y n + λ n + 1 γ z n + 1 z n + ( 1 λ n + 1 τ ) y n + 1 y n | λ n + 1 λ n | γ z n μ F y n + λ n + 1 γ [ ( 1 α n + 1 ( 1 ρ ) ) x n + 1 x n + | α n + 1 α n | V x n S x n ] + ( 1 λ n + 1 τ ) [ x n + 1 x n + M ˜ 2 ( i = 1 N | λ i , n + 1 λ i , n | + k = 1 M | r k , n + 1 r k , n | + | γ n + 1 γ n | + | β n + 1 β n | ) ] ( 1 λ n + 1 ( τ γ ) ) x n + 1 x n + | λ n + 1 λ n | γ z n μ F y n + | α n + 1 α n | V x n S x n + M ˜ 2 ( i = 1 N | λ i , n + 1 λ i , n | + k = 1 M | r k , n + 1 r k , n | + | γ n + 1 γ n | + | β n + 1 β n | ) ( 1 λ n + 1 ( τ γ ) ) x n + 1 x n + M ˜ 3 { i = 1 N | λ i , n + 1 λ i , n | + k = 1 M | r k , n + 1 r k , n | + | λ n + 1 λ n | + | α n + 1 α n | + | β n + 1 β n | + | γ n + 1 γ n | } ,

where sup_{n≥0} {∥γz_n − μFy_n∥ + ∥Vx_n − Sx_n∥ + M̃_2} ≤ M̃_3 for some M̃_3 > 0. Consequently,

x n + 1 x n α n ( 1 λ n ( τ γ ) ) x n x n 1 α n + M ˜ 3 { i = 1 N | λ i , n λ i , n 1 | α n + k = 1 M | r k , n r k , n 1 | α n + | λ n λ n 1 | α n + | α n α n 1 | α n + | β n β n 1 | α n + | γ n γ n 1 | α n } = ( 1 λ n ( τ γ ) ) x n x n 1 α n 1 + ( 1 λ n ( τ γ ) ) x n x n 1 ( 1 α n 1 α n 1 ) + M ˜ 3 { i = 1 N | λ i , n λ i , n 1 | α n + k = 1 M | r k , n r k , n 1 | α n + | λ n λ n 1 | α n + | α n α n 1 | α n + | β n β n 1 | α n + | γ n γ n 1 | α n } ( 1 λ n ( τ γ ) ) x n x n 1 α n 1 + λ n ( τ γ ) M ˜ 4 τ γ { 1 λ n | 1 α n 1 α n 1 | + i = 1 N | λ i , n λ i , n 1 | λ n α n + k = 1 M | r k , n r k , n 1 | λ n α n + 1 α n | 1 λ n 1 λ n | + 1 λ n | 1 α n 1 α n | + | β n β n 1 | λ n α n + | γ n γ n 1 | λ n α n } ,
(3.13)

where sup_{n≥1} {∥x_n − x_{n−1}∥ + M̃_3} ≤ M̃_4 for some M̃_4 > 0. From conditions (i)-(iv) it follows that ∑_{n=0}^∞ λ_n(τ − γ) = ∞ and

lim_{n→∞} (M̃_4/(τ − γ)) { (1/λ_n)|1/α_n − 1/α_{n−1}| + ∑_{i=1}^N |λ_{i,n} − λ_{i,n−1}|/(λ_n α_n) + ∑_{k=1}^M |r_{k,n} − r_{k,n−1}|/(λ_n α_n) + (1/α_n)|1 − λ_{n−1}/λ_n| + (1/λ_n)|1 − α_{n−1}/α_n| + |β_n − β_{n−1}|/(λ_n α_n) + |γ_n − γ_{n−1}|/(λ_n α_n) } = 0.
(3.14)

Thus, utilizing Lemma 2.12, we immediately conclude that

lim_{n→∞} ∥x_{n+1} − x_n∥/α_n = 0.

So, from α_n → 0 it follows that

lim_{n→∞} ∥x_{n+1} − x_n∥ = 0.
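The passage from (3.13) and (3.14) to this limit rests on Lemma 2.12, a standard recursion lemma of Xu type: if s_{n+1} ≤ (1 − a_n)s_n + a_n b_n with {a_n} ⊂ (0,1], ∑_n a_n = ∞ and b_n → 0, then s_n → 0. The following minimal sketch illustrates the lemma numerically; the particular sequences a_n = 1/(n+1), b_n = (n+1)^{−1/2} and the starting value are illustrative choices, not data from the paper.

```python
# Numerical illustration of the recursion lemma (Lemma 2.12):
# if s_{n+1} <= (1 - a_n) s_n + a_n b_n, with a_n in (0, 1],
# sum_n a_n = infinity and b_n -> 0, then s_n -> 0.
# We iterate the worst admissible case, i.e. with equality.

def run_recursion(s0, n_max):
    s = s0
    for n in range(1, n_max + 1):
        a_n = 1.0 / (n + 1)          # harmonic: sum a_n diverges
        b_n = 1.0 / (n + 1) ** 0.5   # b_n -> 0
        s = (1.0 - a_n) * s + a_n * b_n
    return s

s_final = run_recursion(s0=10.0, n_max=200000)
print(s_final)  # small: the iterates decay toward 0 at roughly the rate of b_n
```

Although a_n is only logarithmically non-summable, s_n is eventually forced below any fixed tolerance once b_n stays below it; this is exactly how (3.13) and (3.14) yield ∥x_{n+1} − x_n∥/α_n → 0.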

Step 3. We prove that lim_{n→∞} ∥x_n − u_n∥ = 0, lim_{n→∞} ∥x_n − v_n∥ = 0, lim_{n→∞} ∥v_n − Gv_n∥ = 0 and lim_{n→∞} ∥v_n − Tv_n∥ = 0.

Indeed, utilizing Lemmas 2.8 and 2.9(b), from (3.1), (3.4)-(3.5), and 0 ≤ γ < τ we deduce that

y n p 2 = β n x n + γ n G v n + δ n T G v n p 2 = β n ( x n p ) + ( 1 β n ) ( γ n G v n + δ n T G v n 1 β n p ) 2 = β n x n p 2 + ( 1 β n ) γ n G v n + δ n T G v n 1 β n p 2 β n ( 1 β n ) γ n G v n + δ n T G v n 1 β n x n 2 = β n x n p 2 + ( 1 β n ) γ n ( G v n p ) + δ n ( T G v n p ) 1 β n 2 β n ( 1 β n ) y n x n 1 β n 2 β n x n p 2 + ( 1 β n ) ( γ n + δ n ) 2 G v n p 2 ( 1 β n ) 2 β n 1 β n y n x n 2 = β n x n p 2 + ( 1 β n ) G v n p 2 β n 1 β n y n x n 2 β n x n p 2 + ( 1 β n ) v n p 2 β n 1 β n y n x n 2 β n x n p 2 + ( 1 β n ) x n p 2 β n 1 β n y n x n 2 = x n p 2 β n 1 β n y n x n 2 ,
(3.15)

and hence

x n + 1 p 2 = λ n γ ( α n V x n + ( 1 α n ) S x n ) + ( I λ n μ F ) y n p 2 = λ n γ ( α n V x n + ( 1 α n ) S x n ) λ n μ F p + ( I λ n μ F ) y n ( I λ n μ F ) p 2 = λ n [ α n ( γ V x n μ F p ) + ( 1 α n ) ( γ S x n μ F p ) ] + ( I λ n μ F ) y n ( I λ n μ F ) p 2 = λ n [ α n ( γ V x n γ V p ) + ( 1 α n ) ( γ S x n γ S p ) ] + ( I λ n μ F ) y n ( I λ n μ F ) p + λ n [ α n ( γ V p μ F p ) + ( 1 α n ) ( γ S p μ F p ) ] 2 λ n [ α n ( γ V x n γ V p ) + ( 1 α n ) ( γ S x n γ S p ) ] + ( I λ n μ F ) y n ( I λ n μ F ) p 2 + 2 λ n α n ( γ V p μ F p ) , x n + 1 p + 2 λ n ( 1 α n ) ( γ S p μ F p ) , x n + 1 p [ λ n α n ( γ V x n γ V p ) + ( 1 α n ) ( γ S x n γ S p ) + ( I λ n μ F ) y n ( I λ n μ F ) p ] 2 + 2 λ n α n ( γ V p μ F p ) , x n + 1 p + 2 λ n ( 1 α n ) ( γ S p μ F p ) , x n + 1 p [ λ n ( α n γ ρ x n p + ( 1 α n ) γ x n p ) + ( 1 λ n τ ) y n p ] 2 + 2 λ n α n ( γ V p μ F p ) , x n + 1 p + 2 ( 1 α n ) λ n ( γ S p μ F p ) , x n + 1 p = [ λ n ( 1 α n ( 1 ρ ) ) γ x n p + ( 1 λ n τ ) y n p ] 2 + 2 λ n α n ( γ V p μ F p ) , x n + 1 p + 2 λ n ( 1 α n ) ( γ S p μ F p ) , x n + 1 p [ λ n γ x n p + ( 1 λ n τ ) y n p ] 2 + 2 λ n α n ( γ V p μ F p ) , x n + 1 p + 2 λ n ( 1 α n ) ( γ S p μ F p ) , x n + 1 p = [ λ n τ γ τ x n p + ( 1 λ n τ ) y n p ] 2 + 2 λ n α n ( γ V p μ F p ) , x n + 1 p + 2 λ n ( 1 α n ) ( γ S p μ F p ) , x n + 1 p λ n γ 2 τ x n p + ( 1 λ n τ ) y n p 2 + 2 λ n α n ( γ V p μ F p ) , x n + 1 p + 2 λ n ( 1 α n ) ( γ S p μ F p ) , x n + 1 p λ n γ 2 τ x n p + ( 1 λ n τ ) [ x n p 2 β n 1 β n y n x n 2 ] + 2 λ n α n ( γ V p μ F p ) , x n + 1 p + 2 λ n ( 1 α n ) ( γ S p μ F p ) , x n + 1 p = ( 1 λ n τ 2 γ 2 τ ) x n p 2 β n ( 1 λ n τ ) 1 β n y n x n 2 + 2 λ n α n ( γ V p μ F p ) , x n + 1 p + 2 λ n ( 1 α n ) ( γ S p μ F p ) , x n + 1 p x n p 2 β n ( 1 λ n τ ) 1 β n y n x n 2 + 2 λ n α n γ V p μ F p x n + 1 p + 2 λ n γ S p μ F p x n + 1 p ,
(3.16)

which together with {β_n} ⊂ [a,b] ⊂ (0,1) immediately yields

(a(1 − λ_n τ)/(1 − a))∥y_n − x_n∥² ≤ (β_n(1 − λ_n τ)/(1 − β_n))∥y_n − x_n∥² ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + 2λ_n α_n∥γVp − μFp∥∥x_{n+1} − p∥ + 2λ_n∥γSp − μFp∥∥x_{n+1} − p∥ ≤ ∥x_n − x_{n+1}∥(∥x_n − p∥ + ∥x_{n+1} − p∥) + 2λ_n α_n∥γVp − μFp∥∥x_{n+1} − p∥ + 2λ_n∥γSp − μFp∥∥x_{n+1} − p∥.

Since λ_n → 0, α_n → 0, ∥x_{n+1} − x_n∥ → 0, and {x_n} is bounded, we have

lim_{n→∞} ∥y_n − x_n∥ = 0.
(3.17)

Observe that

Δ n k x n p 2 = T r k , n ( Θ k , φ k ) ( I r k , n A k ) Δ n k 1 x n T r k , n ( Θ k , φ k ) ( I r k , n A k ) p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p 2 Δ n k 1 x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2
(3.18)

and

Λ n i u n p 2 = J R i , λ i , n ( I λ i , n B i ) Λ n i 1 u n J R i , λ i , n ( I λ i , n B i ) p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p 2 Λ n i 1 u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 x n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2
(3.19)

for i ∈ {1,2,…,N} and k ∈ {1,2,…,M}. Combining (3.15), (3.18), and (3.19), we get

y n p 2 β n x n p 2 + ( 1 β n ) v n p 2 β n 1 β n y n x n 2 β n x n p 2 + ( 1 β n ) v n p 2 β n x n p 2 + ( 1 β n ) Λ n i u n p 2 β n x n p 2 + ( 1 β n ) [ u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] β n x n p 2 + ( 1 β n ) [ Δ n k x n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] β n x n p 2 + ( 1 β n ) [ x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] = x n p 2 + ( 1 β n ) [ r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] ,

which immediately leads to

(1 − β_n)[r_{k,n}(2μ_k − r_{k,n})∥A_k Δ_n^{k−1} x_n − A_k p∥² + λ_{i,n}(2η_i − λ_{i,n})∥B_i Λ_n^{i−1} u_n − B_i p∥²] ≤ ∥x_n − p∥² − ∥y_n − p∥² ≤ ∥x_n − y_n∥(∥x_n − p∥ + ∥y_n − p∥).

Since ∥x_n − y_n∥ → 0, {β_n} ⊂ [a,b] ⊂ (0,1), {λ_{i,n}} ⊂ [a_i,b_i] ⊂ (0,2η_i), {r_{k,n}} ⊂ [c_k,d_k] ⊂ (0,2μ_k), i ∈ {1,2,…,N}, k ∈ {1,2,…,M}, and {x_n}, {y_n} are bounded sequences, we have

lim_{n→∞} ∥A_k Δ_n^{k−1} x_n − A_k p∥ = 0 and lim_{n→∞} ∥B_i Λ_n^{i−1} u_n − B_i p∥ = 0
(3.20)

for all k ∈ {1,2,…,M} and i ∈ {1,2,…,N}.

Furthermore, by Proposition 1.1(ii) and Lemma 2.9(a), we have

Δ n k x n p 2 = T r k , n ( Θ k , φ k ) ( I r k , n A k ) Δ n k 1 x n T r k , n ( Θ k , φ k ) ( I r k , n A k ) p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p , Δ n k x n p = 1 2 ( ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p 2 + Δ n k x n p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p ( Δ n k x n p ) 2 ) 1 2 ( Δ n k 1 x n p 2 + Δ n k x n p 2 Δ n k 1 x n Δ n k x n r k , n ( A k Δ n k 1 x n A k p ) 2 ) ,

which implies that

Δ n k x n p 2 Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n r k , n ( A k Δ n k 1 x n A k p ) 2 = Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n 2 r k , n 2 A k Δ n k 1 x n A k p 2 + 2 r k , n Δ n k 1 x n Δ n k x n , A k Δ n k 1 x n A k p Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p .
(3.21)

By Lemma 2.9(a) and Lemma 2.3, we obtain

Λ n i u n p 2 = J R i , λ i , n ( I λ i , n B i ) Λ n i 1 u n J R i , λ i , n ( I λ i , n B i ) p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p , Λ n i u n p = 1 2 ( ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p 2 + Λ n i u n p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p ( Λ n i u n p ) 2 ) 1 2 ( Λ n i 1 u n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) 1 2 ( u n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) 1 2 ( x n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) ,

which immediately leads to

Λ n i u n p 2 x n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 = x n p 2 Λ n i 1 u n Λ n k u n 2 λ i , n 2 B i Λ n i 1 u n B i p 2 + 2 λ i , n Λ n i 1 u n Λ n i u n , B i Λ n i 1 u n B i p x n p 2 Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p .
(3.22)

Combining (3.15) and (3.22), we conclude

y n p 2 β n x n p 2 + ( 1 β n ) v n p 2 β n 1 β n y n x n 2 β n x n p 2 + ( 1 β n ) v n p 2 β n x n p 2 + ( 1 β n ) Λ n i u n p 2 β n x n p 2 + ( 1 β n ) [ x n p 2 Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p ] x n p 2 ( 1 β n ) Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p ,

which yields

(1 − β_n)∥Λ_n^{i−1} u_n − Λ_n^i u_n∥² ≤ ∥x_n − p∥² − ∥y_n − p∥² + 2λ_{i,n}∥Λ_n^{i−1} u_n − Λ_n^i u_n∥∥B_i Λ_n^{i−1} u_n − B_i p∥ ≤ ∥x_n − y_n∥(∥x_n − p∥ + ∥y_n − p∥) + 2λ_{i,n}∥Λ_n^{i−1} u_n − Λ_n^i u_n∥∥B_i Λ_n^{i−1} u_n − B_i p∥.

Since {β_n} ⊂ [a,b] ⊂ (0,1), {λ_{i,n}} ⊂ [a_i,b_i] ⊂ (0,2η_i), i = 1,2,…,N, and {u_n}, {x_n}, and {y_n} are bounded sequences, we deduce from (3.20) and ∥x_n − y_n∥ → 0 that

lim_{n→∞} ∥Λ_n^{i−1} u_n − Λ_n^i u_n∥ = 0, ∀i ∈ {1,2,…,N}.
(3.23)

Also, combining (3.3), (3.15), and (3.21), we deduce

y n p 2 β n x n p 2 + ( 1 β n ) v n p 2 β n 1 β n y n x n 2 β n x n p 2 + ( 1 β n ) v n p 2 β n x n p 2 + ( 1 β n ) u n p 2 β n x n p 2 + ( 1 β n ) Δ n k x n p 2 β n x n p 2 + ( 1 β n ) [ x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p ] x n p 2 ( 1 β n ) Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p ,

which yields

(1 − β_n)∥Δ_n^{k−1} x_n − Δ_n^k x_n∥² ≤ ∥x_n − p∥² − ∥y_n − p∥² + 2r_{k,n}∥Δ_n^{k−1} x_n − Δ_n^k x_n∥∥A_k Δ_n^{k−1} x_n − A_k p∥ ≤ ∥x_n − y_n∥(∥x_n − p∥ + ∥y_n − p∥) + 2r_{k,n}∥Δ_n^{k−1} x_n − Δ_n^k x_n∥∥A_k Δ_n^{k−1} x_n − A_k p∥.

Since {β_n} ⊂ [a,b] ⊂ (0,1), {r_{k,n}} ⊂ [c_k,d_k] ⊂ (0,2μ_k) for k = 1,2,…,M, and {x_n}, {y_n} are bounded sequences, we deduce from (3.20) and ∥x_n − y_n∥ → 0 that

lim_{n→∞} ∥Δ_n^{k−1} x_n − Δ_n^k x_n∥ = 0, ∀k ∈ {1,2,…,M}.
(3.24)

Hence from (3.23) and (3.24), we get

∥x_n − u_n∥ = ∥Δ_n^0 x_n − Δ_n^M x_n∥ ≤ ∥Δ_n^0 x_n − Δ_n^1 x_n∥ + ∥Δ_n^1 x_n − Δ_n^2 x_n∥ + ⋯ + ∥Δ_n^{M−1} x_n − Δ_n^M x_n∥ → 0 as n → ∞
(3.25)

and

∥u_n − v_n∥ = ∥Λ_n^0 u_n − Λ_n^N u_n∥ ≤ ∥Λ_n^0 u_n − Λ_n^1 u_n∥ + ∥Λ_n^1 u_n − Λ_n^2 u_n∥ + ⋯ + ∥Λ_n^{N−1} u_n − Λ_n^N u_n∥ → 0 as n → ∞,
(3.26)

respectively. Thus, from (3.25) and (3.26), we obtain

∥x_n − v_n∥ ≤ ∥x_n − u_n∥ + ∥u_n − v_n∥ → 0 as n → ∞.
(3.27)

On the other hand, for simplicity, we write p̃ = P_C(I − ν_2 F_2)p, ṽ_n = P_C(I − ν_2 F_2)v_n, and k_n = Gv_n = P_C(I − ν_1 F_1)ṽ_n for all n ≥ 1. Then

p = Gp = P_C(I − ν_1 F_1)p̃ = P_C(I − ν_1 F_1)P_C(I − ν_2 F_2)p.

We now show that lim_{n→∞} ∥Gv_n − v_n∥ = 0, i.e., lim_{n→∞} ∥k_n − v_n∥ = 0. As a matter of fact, for p ∈ Ω, it follows from (3.4), (3.5), and (3.15) that

y n p 2 β n x n p 2 + ( 1 β n ) G v n p 2 β n 1 β n y n x n 2 β n x n p 2 + ( 1 β n ) G v n p 2 = β n x n p 2 + ( 1 β n ) k n p 2 β n x n p 2 + ( 1 β n ) [ v ˜ n p ˜ 2 + ν 1 ( ν 1 2 ζ 1 ) F 1 v ˜ n F 1 p ˜ 2 ] β n x n p 2 + ( 1 β n ) [ v n p 2 + ν 2 ( ν 2 2 ζ 2 ) F 2 v n F 2 p 2 + ν 1 ( ν 1 2 ζ 1 ) F 1 v ˜ n F 1 p ˜ 2 ] β n x n p 2 + ( 1 β n ) [ x n p 2 + ν 2 ( ν 2 2 ζ 2 ) F 2 v n F 2 p 2 + ν 1 ( ν 1 2 ζ 1 ) F 1 v ˜ n F 1 p ˜ 2 ] = x n p 2 + ( 1 β n ) [ ν 2 ( ν 2 2 ζ 2 ) F 2 v n F 2 p 2 + ν 1 ( ν 1 2 ζ 1 ) F 1 v ˜ n F 1 p ˜ 2 ] ,
(3.28)

which immediately yields

(1 − β_n)[ν_2(2ζ_2 − ν_2)∥F_2 v_n − F_2 p∥² + ν_1(2ζ_1 − ν_1)∥F_1 ṽ_n − F_1 p̃∥²] ≤ ∥x_n − p∥² − ∥y_n − p∥² ≤ ∥x_n − y_n∥(∥x_n − p∥ + ∥y_n − p∥).

Since ∥x_n − y_n∥ → 0, {β_n} ⊂ [a,b] ⊂ (0,1), ν_j ∈ (0,2ζ_j), j = 1,2, and {x_n}, {y_n} are bounded sequences, we have

lim_{n→∞} ∥F_2 v_n − F_2 p∥ = 0 and lim_{n→∞} ∥F_1 ṽ_n − F_1 p̃∥ = 0.
(3.29)

Also, in terms of the firm nonexpansivity of P_C and the ζ_j-inverse-strong monotonicity of F_j for j = 1,2, we obtain from ν_j ∈ (0,2ζ_j), j = 1,2,

v ˜ n p ˜ 2 = P C ( I ν 2 F 2 ) v n P C ( I ν 2 F 2 ) p 2 ( I ν 2 F 2 ) v n ( I ν 2 F 2 ) p , v ˜ n p ˜ = 1 2 [ ( I ν 2 F 2 ) v n ( I ν 2 F 2 ) p 2 + v ˜ n p ˜ 2 ( I ν 2 F 2 ) v n ( I ν 2 F 2 ) p ( v ˜ n p ˜ ) 2 ] 1 2 [ v n p 2 + v ˜ n p ˜ 2 ( v n v ˜ n ) ν 2 ( F 2 v n F 2 p ) ( p p ˜ ) 2 ] = 1 2 [ v n p 2 + v ˜ n p ˜ 2 ( v n v ˜ n ) ( p p ˜ ) 2 + 2 ν 2 ( v n v ˜ n ) ( p p ˜ ) , F 2 v n F 2 p ν 2 2 F 2 v n F 2 p 2 ]

and

k n p 2 = P C ( I ν 1 F 1 ) v ˜ n P C ( I ν 1 F 1 ) p ˜ 2 ( I ν 1 F 1 ) v ˜ n ( I ν 1 F 1 ) p ˜ , k n p = 1 2 [ ( I ν 1 F 1 ) v ˜ n ( I ν 1 F 1 ) p ˜ 2 + k n p 2 ( I ν 1 F 1 ) v ˜ n ( I ν 1 F 1 ) p ˜ ( k n p ) 2 ] 1 2 [ v ˜ n p ˜ 2 + k n p 2 ( v ˜ n k n ) + ( p p ˜ ) 2 + 2 ν 1 F 1 v ˜ n F 1 p ˜ , ( v ˜ n k n ) + ( p p ˜ ) ν 1 2 F 1 v ˜ n F 1 p ˜ 2 ] 1 2 [ v n p 2 + w n p 2 ( v ˜ n k n ) + ( p p ˜ ) 2 + 2 ν 1 F 1 v ˜ n F 1 p ˜ , ( v ˜ n k n ) + ( p p ˜ ) ] .

Thus, we have

∥ṽ_n − p̃∥² ≤ ∥v_n − p∥² − ∥(v_n − ṽ_n) − (p − p̃)∥² + 2ν_2⟨(v_n − ṽ_n) − (p − p̃), F_2 v_n − F_2 p⟩ − ν_2²∥F_2 v_n − F_2 p∥²
(3.30)

and

∥k_n − p∥² ≤ ∥v_n − p∥² − ∥(ṽ_n − k_n) + (p − p̃)∥² + 2ν_1∥F_1 ṽ_n − F_1 p̃∥∥(ṽ_n − k_n) + (p − p̃)∥.
(3.31)

Consequently, from (3.4), (3.28), and (3.30), it follows that

y n p 2 β n x n p 2 + ( 1 β n ) [ v ˜ n p ˜ 2 + ν 1 ( ν 1 2 ζ 1 ) F 1 v ˜ n F 1 p ˜ 2 ] β n x n p 2 + ( 1 β n ) v ˜ n p ˜ 2 β n x n p 2 + ( 1 β n ) [ v n p 2 ( v n v ˜ n ) ( p p ˜ ) 2 + 2 ν 2 ( v n v ˜ n ) ( p p ˜ ) , F 2 v n F 2 p ν 2 2 F 2 v n F 2 p 2 ] β n x n p 2 + ( 1 β n ) [ x n p 2 ( v n v ˜ n ) ( p p ˜ ) 2 + 2 ν 2 ( v n v ˜ n ) ( p p ˜ ) F 2 v n F 2 p ] x n p 2 ( 1 β n ) ( v n v ˜ n ) ( p p ˜ ) 2 + 2 ν 2 ( v n v ˜ n ) ( p p ˜ ) F 2 v n F 2 p ,

which hence leads to

(1 − β_n)∥(v_n − ṽ_n) − (p − p̃)∥² ≤ ∥x_n − p∥² − ∥y_n − p∥² + 2ν_2∥(v_n − ṽ_n) − (p − p̃)∥∥F_2 v_n − F_2 p∥ ≤ ∥x_n − y_n∥(∥x_n − p∥ + ∥y_n − p∥) + 2ν_2∥(v_n − ṽ_n) − (p − p̃)∥∥F_2 v_n − F_2 p∥.

Since ∥x_n − y_n∥ → 0, {β_n} ⊂ [a,b] ⊂ (0,1), ν_2 ∈ (0,2ζ_2), and {x_n}, {y_n}, {v_n}, {ṽ_n} are bounded sequences, we obtain from (3.29)

lim_{n→∞} ∥(v_n − ṽ_n) − (p − p̃)∥ = 0.
(3.32)

Furthermore, from (3.4), (3.28), and (3.31), it follows that

y n p 2 β n x n p 2 + ( 1 β n ) k n p 2 β n x n p 2 + ( 1 β n ) [ v n p 2 ( v ˜ n k n ) + ( p p ˜ ) 2 + 2 ν 1 F 1 v ˜ n F 1 p ˜ ( v ˜ n k n ) + ( p p ˜ ) ] β n x n p 2 + ( 1 β n ) [ x n p 2 ( v ˜ n k n ) + ( p p ˜ ) 2 + 2 ν 1 F 1 v ˜ n F 1 p ˜ ( v ˜ n k n ) + ( p p ˜ ) ] = x n p 2 ( 1 β n ) ( v ˜ n k n ) + ( p p ˜ ) 2 + 2 ν 1 F 1 v ˜ n F 1 p ˜ ( v ˜ n k n ) + ( p p ˜ ) ,

which hence yields

(1 − β_n)∥(ṽ_n − k_n) + (p − p̃)∥² ≤ ∥x_n − p∥² − ∥y_n − p∥² + 2ν_1∥F_1 ṽ_n − F_1 p̃∥∥(ṽ_n − k_n) + (p − p̃)∥ ≤ ∥x_n − y_n∥(∥x_n − p∥ + ∥y_n − p∥) + 2ν_1∥F_1 ṽ_n − F_1 p̃∥∥(ṽ_n − k_n) + (p − p̃)∥.

Since ∥x_n − y_n∥ → 0, {β_n} ⊂ [a,b] ⊂ (0,1), ν_1 ∈ (0,2ζ_1), and {x_n}, {y_n}, {k_n}, {ṽ_n} are bounded sequences, we obtain from (3.29)

lim_{n→∞} ∥(ṽ_n − k_n) + (p − p̃)∥ = 0.
(3.33)

Note that

∥v_n − k_n∥ ≤ ∥(v_n − ṽ_n) − (p − p̃)∥ + ∥(ṽ_n − k_n) + (p − p̃)∥.

Hence from (3.32) and (3.33), we get

lim_{n→∞} ∥v_n − Gv_n∥ = lim_{n→∞} ∥v_n − k_n∥ = 0.
(3.34)

Also, observe that

y_n − x_n = γ_n(Gv_n − x_n) + δ_n(TGv_n − x_n), ∀n ≥ 0.

Hence we find that

δ n T G v n v n δ n T G v n x n + δ n x n v n = y n x n γ n ( G v n x n ) + δ n x n v n y n x n + γ n G v n x n + δ n x n v n y n x n + γ n G v n v n + γ n v n x n + δ n x n v n = y n x n + γ n G v n v n + ( γ n + δ n ) x n v n y n x n + G v n v n + x n v n .

So, from lim inf_{n→∞} δ_n > 0, (3.17), (3.27), and (3.34), it follows that

lim_{n→∞} ∥TGv_n − v_n∥ = 0.
(3.35)

In addition, noticing that

∥Tv_n − v_n∥ ≤ ∥Tv_n − TGv_n∥ + ∥TGv_n − v_n∥ ≤ ∥v_n − Gv_n∥ + ∥TGv_n − v_n∥,

we know from (3.34) and (3.35) that

lim_{n→∞} ∥Tv_n − v_n∥ = 0.
(3.36)

Step 4. We prove that ω_w(x_n) ⊂ Ω.

Indeed, since H is reflexive and {x_n} is bounded, {x_n} has at least one weakly convergent subsequence; hence ω_w(x_n) ≠ ∅. Now, take an arbitrary w ∈ ω_w(x_n). Then there exists a subsequence {x_{n_i}} of {x_n} such that x_{n_i} ⇀ w. From (3.23)-(3.25) and (3.27), we have u_{n_i} ⇀ w, v_{n_i} ⇀ w, Λ_{n_i}^m u_{n_i} ⇀ w, and Δ_{n_i}^k x_{n_i} ⇀ w, where m ∈ {1,2,…,N} and k ∈ {1,2,…,M}. Utilizing Lemma 2.1(ii), we deduce from v_{n_i} ⇀ w and (3.36) that w ∈ Fix(T). In the meantime, utilizing Lemma 2.10, we obtain from v_{n_i} ⇀ w and (3.34) that w ∈ GSVI(G). Next, we prove that w ∈ ⋂_{m=1}^N I(B_m, R_m). As a matter of fact, since B_m is η_m-inverse-strongly monotone, B_m is a monotone and Lipschitz continuous mapping. It follows from Lemma 2.6 that R_m + B_m is maximal monotone. Let (v, g) ∈ G(R_m + B_m), i.e., g − B_m v ∈ R_m v. Again, since Λ_n^m u_n = J_{R_m, λ_{m,n}}(I − λ_{m,n} B_m)Λ_n^{m−1} u_n, n ≥ 1, m ∈ {1,2,…,N}, we have

Λ_n^{m−1} u_n − λ_{m,n} B_m Λ_n^{m−1} u_n ∈ (I + λ_{m,n} R_m)Λ_n^m u_n,

that is,

(1/λ_{m,n})(Λ_n^{m−1} u_n − Λ_n^m u_n − λ_{m,n} B_m Λ_n^{m−1} u_n) ∈ R_m Λ_n^m u_n.

In terms of the monotonicity of R m , we get

⟨v − Λ_n^m u_n, g − B_m v − (1/λ_{m,n})(Λ_n^{m−1} u_n − Λ_n^m u_n − λ_{m,n} B_m Λ_n^{m−1} u_n)⟩ ≥ 0

and hence

v Λ n m u n , g v Λ n m u n , B m v + 1 λ m , n ( Λ n m 1 u n Λ n m u n λ m , n B m Λ n m 1 u n ) = v Λ n m u n , B m v B m Λ n m u n + B m Λ n m u n B m Λ n m 1 u n + 1 λ m , n ( Λ n m 1 u n Λ n m u n ) v Λ n m u n , B m Λ n m u n B m Λ n m 1 u n + v Λ n m u n , 1 λ m , n ( Λ n m 1 u n Λ n m u n ) .

In particular,

v Λ n i m u n i , g v Λ n i m u n i , B m Λ n i m u n i B m Λ n i m 1 u n i + v Λ n i m u n i , 1 λ m , n i ( Λ n i m 1 u n i Λ n i m u n i ) .

Since ∥Λ_n^m u_n − Λ_n^{m−1} u_n∥ → 0 (due to (3.23)) and ∥B_m Λ_n^m u_n − B_m Λ_n^{m−1} u_n∥ → 0 (due to the Lipschitz continuity of B_m), we conclude from Λ_{n_i}^m u_{n_i} ⇀ w and {λ_{m,n}} ⊂ [a_m, b_m] ⊂ (0, 2η_m) that

lim_{i→∞} ⟨v − Λ_{n_i}^m u_{n_i}, g⟩ = ⟨v − w, g⟩ ≥ 0.

It follows from the maximal monotonicity of R_m + B_m that 0 ∈ (R_m + B_m)w, i.e., w ∈ I(B_m, R_m). Therefore, w ∈ ⋂_{m=1}^N I(B_m, R_m). Next we prove that w ∈ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k). Since Δ_n^k x_n = T_{r_{k,n}}^{(Θ_k, φ_k)}(I − r_{k,n} A_k)Δ_n^{k−1} x_n, n ≥ 1, k ∈ {1,2,…,M}, we have

Θ k ( Δ n k x n , y ) + φ k ( y ) φ k ( Δ n k x n ) + A k Δ n k 1 x n , y Δ n k x n + 1 r k , n y Δ n k x n , Δ n k x n Δ n k 1 x n 0 .

By (A2), we have

φ k (y) φ k ( Δ n k x n ) + A k Δ n k 1 x n , y Δ n k x n + 1 r k , n y Δ n k x n , Δ n k x n Δ n k 1 x n Θ k ( y , Δ n k x n ) .

Let z_t = ty + (1 − t)w for all t ∈ (0,1] and y ∈ C. This implies that z_t ∈ C. Then we have

z t Δ n k x n , A k z t φ k ( Δ n k x n ) φ k ( z t ) + z t Δ n k x n , A k z t z t Δ n k x n , A k Δ n k 1 x n z t Δ n k x n , Δ n k x n Δ n k 1 x n r k , n + Θ k ( z t , Δ n k x n ) = φ k ( Δ n k x n ) φ k ( z t ) + z t Δ n k x n , A k z t A k Δ n k x n + z t Δ n k x n , A k Δ n k x n A k Δ n k 1 x n z t Δ n k x n , Δ n k x n Δ n k 1 x n r k , n + Θ k ( z t , Δ n k x n ) .
(3.37)

By (3.24), we have ∥A_k Δ_n^k x_n − A_k Δ_n^{k−1} x_n∥ → 0 as n → ∞. Furthermore, by the monotonicity of A_k, we obtain ⟨z_t − Δ_n^k x_n, A_k z_t − A_k Δ_n^k x_n⟩ ≥ 0. Then by (A4) we obtain

⟨z_t − w, A_k z_t⟩ ≥ φ_k(w) − φ_k(z_t) + Θ_k(z_t, w).
(3.38)

Utilizing (A1), (A4), and (3.38), we obtain

0 = Θ k ( z t , z t ) + φ k ( z t ) φ k ( z t ) t Θ k ( z t , y ) + ( 1 t ) Θ k ( z t , w ) + t φ k ( y ) + ( 1 t ) φ k ( w ) φ k ( z t ) t [ Θ k ( z t , y ) + φ k ( y ) φ k ( z t ) ] + ( 1 t ) z t w , A k z t = t [ Θ k ( z t , y ) + φ k ( y ) φ k ( z t ) ] + ( 1 t ) t y w , A k z t ,

and hence

0 ≤ Θ_k(z_t, y) + φ_k(y) − φ_k(z_t) + (1 − t)⟨y − w, A_k z_t⟩.

Letting t → 0, we have, for each y ∈ C,

0 ≤ Θ_k(w, y) + φ_k(y) − φ_k(w) + ⟨y − w, A_k w⟩.

This implies that w ∈ GMEP(Θ_k, φ_k, A_k), and hence w ∈ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k). Consequently, w ∈ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ⋂_{m=1}^N I(B_m, R_m) ∩ GSVI(G) ∩ Fix(T) =: Ω. This shows that ω_w(x_n) ⊂ Ω.

Step 5. We prove that ω_w(x_n) ⊂ Ξ.

Indeed, take an arbitrary w ∈ ω_w(x_n). Then there exists a subsequence {x_{n_i}} of {x_n} such that x_{n_i} ⇀ w. Utilizing (3.16), we obtain, for all p ∈ Ω,

x n + 1 p 2 ( 1 λ n τ 2 γ 2 τ ) x n p 2 β n ( 1 λ n τ ) 1 β n y n x n 2 + 2 λ n α n ( γ V p μ F p ) , x n + 1 p + 2 λ n ( 1 α n ) ( γ S p μ F p ) , x n + 1 p x n p 2 + 2 λ n α n ( γ V μ F ) p , x n + 1 p + 2 λ n ( 1 α n ) ( γ S p μ F p ) , x n + 1 p ,

which implies that

( μ F γ S ) p , x n p ( μ F γ S ) p , x n x n + 1 + ( μ F γ S ) p , x n + 1 p ( μ F γ S ) p x n x n + 1 + x n p 2 x n + 1 p 2 2 λ n ( 1 α n ) + α n 1 α n ( γ V μ F ) p , x n + 1 p ( μ F γ S ) p x n x n + 1 + x n x n + 1 ( x n p + x n + 1 p ) 2 λ n ( 1 α n ) + α n 1 α n ( γ V μ F ) p x n + 1 p .
(3.39)

Since α_n → 0, ∥x_n − x_{n+1}∥ → 0 and

lim_{n→∞} ∥x_n − x_{n+1}∥/λ_n = lim_{n→∞} (∥x_n − x_{n+1}∥/α_n)·(α_n/λ_n) = 0,

from (3.39), we conclude that

⟨(μF − γS)p, w − p⟩ = lim_{i→∞} ⟨(μF − γS)p, x_{n_i} − p⟩ ≤ lim sup_{n→∞} ⟨(μF − γS)p, x_n − p⟩ ≤ 0, ∀p ∈ Ω,

that is,

⟨(μF − γS)p, w − p⟩ ≤ 0, ∀p ∈ Ω.
(3.40)

Since μF − γS is (μη − γ)-strongly monotone and (μκ + γ)-Lipschitz continuous, by Minty's lemma [29] we know that (3.40) is equivalent to the VIP

⟨(μF − γS)w, p − w⟩ ≥ 0, ∀p ∈ Ω.
(3.41)

This shows that w ∈ VI(Ω, μF − γS). Taking into account {x*} = VI(Ω, μF − γS), we know that w = x*. Thus, ω_w(x_n) = {x*}; that is, x_n ⇀ x*.
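Minty's lemma, invoked above to pass between (3.40) and (3.41), states that for a monotone continuous operator g on a closed convex set, the family of inequalities ⟨g(p), p − w⟩ ≥ 0 (for all p) and ⟨g(w), p − w⟩ ≥ 0 (for all p) have the same solutions w. A one-dimensional numerical sketch; the set [0,1] and the map g(x) = x − 0.3 are illustrative choices, not objects from the paper:

```python
# Minty's lemma sketch on Omega = [0, 1] with a monotone continuous g:
# the "Minty" inequalities <g(p), p - w> >= 0 for all p hold exactly when
# the "Stampacchia" inequalities <g(w), p - w> >= 0 for all p hold.

def g(x):                  # strongly monotone on R, so both formulations agree
    return x - 0.3

w = 0.3                    # the solution of VI([0, 1], g), since g(w) = 0
grid = [i / 100.0 for i in range(101)]       # sample points of Omega

stampacchia = all(g(w) * (p - w) >= 0 for p in grid)   # (3.41)-type inequality
minty = all(g(p) * (p - w) >= 0 for p in grid)         # (3.40)-type inequality
print(stampacchia, minty)
```

In the proof the roles are played by g = μF − γS on Ω; strong monotonicity of μF − γS is what makes the solution w unique.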

Next we prove that lim_{n→∞} ∥x_n − x*∥ = 0. As a matter of fact, utilizing (3.16) with p = x*, we get

x n + 1 x 2 ( 1 λ n τ 2 γ 2 τ ) x n x 2 β n ( 1 λ n τ ) 1 β n y n x n 2 + 2 λ n α n ( γ V x μ F x ) , x n + 1 x + 2 λ n ( 1 α n ) ( γ S x μ F x ) , x n + 1 x ( 1 λ n τ 2 γ 2 τ ) x n x 2 + 2 λ n α n ( γ V μ F ) x x n + 1 x + 2 λ n ( 1 α n ) ( γ S x μ F x ) , x n + 1 x = ( 1 λ n τ 2 γ 2 τ ) x n x 2 + λ n τ 2 γ 2 τ 2 τ τ 2 γ 2 [ α n ( γ V μ F ) x x n + 1 x + ( 1 α n ) ( γ S x μ F x ) , x n + 1 x ] .
(3.42)

Since ∑_{n=0}^∞ λ_n = ∞ and lim_{n→∞} ⟨(γS − μF)x*, x* − x_{n+1}⟩ = 0 (due to x_n ⇀ x*), we deduce that ∑_{n=0}^∞ λ_n(τ² − γ²)/τ = ∞, and

lim_{n→∞} (2τ/(τ² − γ²))[α_n∥(γV − μF)x*∥∥x_{n+1} − x*∥ + (1 − α_n)⟨(γS − μF)x*, x_{n+1} − x*⟩] = 0.

Therefore, applying Lemma 2.12 to (3.42), we infer that lim_{n→∞} ∥x_n − x*∥ = 0. This completes the proof. □

Putting M = N = 1, F_2 ≡ 0, and F_1 = Γ, an inverse-strongly monotone mapping on C, in Algorithm 3.1, we have the following algorithm.

Algorithm 3.2 Let C be a nonempty closed convex subset of a real Hilbert space H, Θ: C×C → R be a bifunction satisfying conditions (A1)-(A4), φ: C → R ∪ {+∞} be a proper lower semicontinuous and convex function with restriction (B1) or (B2), and A: H → H be μ_0-inverse-strongly monotone. Let R: C → 2^H be a maximal monotone mapping, B: C → H be η_0-inverse-strongly monotone, T: C → C be a ξ-strictly pseudocontractive mapping, S: H → H be a nonexpansive mapping and V: H → H be a ρ-contraction with coefficient ρ ∈ [0,1). Let Γ: C → H be ζ-inverse-strongly monotone, and F: H → H be κ-Lipschitzian and η-strongly monotone with positive constants κ, η > 0 such that 0 ≤ γ < τ and 0 < μ < 2η/κ², where τ = 1 − √(1 − μ(2η − μκ²)). Assume that Ω := GMEP(Θ, φ, A) ∩ I(B, R) ∩ VI(C, Γ) ∩ Fix(T) ≠ ∅. Let {α_n}, {λ_n} ⊂ (0,1], {β_n}, {γ_n}, {δ_n} ⊂ [0,1], {r_n} ⊂ [c,d] ⊂ (0,2μ_0) and {ρ_n} ⊂ [e,f] ⊂ (0,2η_0). For arbitrarily given x_0 ∈ H, let {x_n} be a sequence generated by

{ Θ(u_n, y) + φ(y) − φ(u_n) + ⟨Ax_n, y − u_n⟩ + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y ∈ C,
v_n = J_{R,ρ_n}(I − ρ_n B)u_n,
y_n = β_n x_n + γ_n P_C(I − νΓ)v_n + δ_n T P_C(I − νΓ)v_n,
x_{n+1} = λ_n γ(α_n V x_n + (1 − α_n)S x_n) + (I − λ_n μF)y_n, ∀n ≥ 0,
(3.43)

where ν ∈ (0,2ζ).
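To make the structure of scheme (3.43) concrete, the sketch below runs it on a deliberately trivial instance. All concrete choices (Θ ≡ 0, φ ≡ 0, A ≡ 0, B ≡ 0, R ≡ 0, T = S = I, F = I, V: x ↦ x/2, C = [0,1]², Γ(x) = x − a) are illustrative assumptions, not data from the paper. Under them the mixed-equilibrium step reduces to u_n = P_C x_n, the resolvent step gives v_n = u_n, and Ω = VI(C, Γ) = {P_C(a)}, so the iterates should approach P_C(a); with η = κ = μ = 1 one gets τ = 1, so γ = 0.5 < τ is admissible.

```python
# Toy run of scheme (3.43) under the simplifications stated above:
# u_n = P_C(x_n), v_n = u_n, k_n = P_C((I - nu*Gamma) v_n),
# y_n = beta_n x_n + (gamma_n + delta_n) k_n            (since T = I),
# x_{n+1} = lam_n*gam*(alp_n V x_n + (1 - alp_n) S x_n) + (I - lam_n*mu*F) y_n.

def clip(t, lo=0.0, hi=1.0):
    return max(lo, min(hi, t))

def P_C(x):                              # projection onto C = [0, 1]^2
    return [clip(t) for t in x]

a = [2.0, 0.5]                           # Gamma(x) = x - a is 1-inverse-strongly monotone
nu, gam, mu = 0.5, 0.5, 1.0              # F = I gives tau = 1 > gam

x = [0.0, 0.0]
for n in range(2000):
    lam, alp = 1.0 / (n + 2), 1.0 / (n + 2) ** 0.5
    beta, gam_n, delt = 0.5, 0.25, 0.25  # beta_n + gamma_n + delta_n = 1
    v = P_C(x)                           # u_n = v_n under the simplifications
    k = P_C([vi - nu * (vi - ai) for vi, ai in zip(v, a)])   # P_C(I - nu*Gamma)v_n
    y = [beta * xi + (gam_n + delt) * ki for xi, ki in zip(x, k)]
    z = [alp * 0.5 * xi + (1.0 - alp) * xi for xi in x]      # V x = x/2, S = I
    x = [lam * gam * zi + (1.0 - lam * mu) * yi for zi, yi in zip(z, y)]

print(x)  # approaches P_C(a) = [1.0, 0.5]
```

Since Ω is a singleton here, the hierarchical criterion is vacuous; the run only illustrates the mechanics of the four-step iteration and the vanishing viscosity term λ_n γ(α_n Vx_n + (1 − α_n)Sx_n).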

From Theorem 3.1, we have the following result.

Corollary 3.1 In addition to the assumptions of Algorithm 3.2, suppose that

  1. (i)

lim_{n→∞} λ_n = 0, ∑_{n=0}^∞ λ_n = ∞ and lim_{n→∞} (1/λ_n)|1 − α_{n−1}/α_n| = 0;

  2. (ii)

lim sup_{n→∞} α_n/λ_n < ∞, lim_{n→∞} (1/λ_n)|1/α_n − 1/α_{n−1}| = 0 and lim_{n→∞} (1/α_n)|1 − λ_{n−1}/λ_n| = 0;

  3. (iii)

lim_{n→∞} |β_n − β_{n−1}|/(λ_n α_n) = 0 and lim_{n→∞} |γ_n − γ_{n−1}|/(λ_n α_n) = 0;

  4. (iv)

lim_{n→∞} |r_n − r_{n−1}|/(λ_n α_n) = 0 and lim_{n→∞} |ρ_n − ρ_{n−1}|/(λ_n α_n) = 0;

  5. (v)

β_n + γ_n + δ_n = 1 and (γ_n + δ_n)ξ ≤ γ_n for all n ≥ 0;

  6. (vi)

{β_n} ⊂ [a,b] ⊂ (0,1) and lim inf_{n→∞} δ_n > 0.

Then

  1. (a)

lim_{n→∞} ∥x_{n+1} − x_n∥/α_n = 0;

  2. (b)

ω_w(x_n) ⊂ Ω;

  3. (c)

{x_n} converges strongly to a point x* ∈ Ω, which is the unique solution of the HVIP (1.9), i.e.,

⟨(μF − γS)x*, p − x*⟩ ≥ 0, ∀p ∈ Ω.

Proof Since F_2 ≡ 0 and F_1 = Γ, a ζ-inverse-strongly monotone mapping on C, it is easy to see that GSVI(G) = VI(C, Γ). Thus, in terms of Theorem 3.1, we derive the desired result. □

Putting Γ = I − Φ, where Φ: C → C is a ξ_0-strictly pseudocontractive mapping on C, in Algorithm 3.2, we obtain the following algorithm.

Algorithm 3.3 Let C be a nonempty closed convex subset of a real Hilbert space H, Θ: C×C → R be a bifunction satisfying conditions (A1)-(A4), φ: C → R ∪ {+∞} be a proper lower semicontinuous and convex function with restriction (B1) or (B2), and A: H → H be μ_0-inverse-strongly monotone. Let R: C → 2^H be a maximal monotone mapping, B: C → H be η_0-inverse-strongly monotone, T: C → C be a ξ-strictly pseudocontractive mapping, S: H → H be a nonexpansive mapping and V: H → H be a ρ-contraction with coefficient ρ ∈ [0,1). Let Φ: C → C be a ξ_0-strictly pseudocontractive mapping, and F: H → H be κ-Lipschitzian and η-strongly monotone with positive constants κ, η > 0 such that 0 ≤ γ < τ and 0 < μ < 2η/κ², where τ = 1 − √(1 − μ(2η − μκ²)). Assume that Ω := GMEP(Θ, φ, A) ∩ I(B, R) ∩ Fix(Φ) ∩ Fix(T) ≠ ∅. Let {α_n}, {λ_n} ⊂ (0,1], {β_n}, {γ_n}, {δ_n} ⊂ [0,1], {r_n} ⊂ [c,d] ⊂ (0,2μ_0) and {ρ_n} ⊂ [e,f] ⊂ (0,2η_0). For arbitrarily given x_0 ∈ H, let {x_n} be a sequence generated by

{ Θ(u_n, y) + φ(y) − φ(u_n) + ⟨Ax_n, y − u_n⟩ + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y ∈ C,
v_n = J_{R,ρ_n}(I − ρ_n B)u_n,
y_n = β_n x_n + γ_n(I − ν(I − Φ))v_n + δ_n T(I − ν(I − Φ))v_n,
x_{n+1} = λ_n γ(α_n V x_n + (1 − α_n)S x_n) + (I − λ_n μF)y_n, ∀n ≥ 0,
(3.44)

where ν ∈ (0, 1 − ξ_0).

Corollary 3.2 In addition to the assumptions of Algorithm 3.3, suppose that

  1. (i)

lim_{n→∞} λ_n = 0, ∑_{n=0}^∞ λ_n = ∞ and lim_{n→∞} (1/λ_n)|1 − α_{n−1}/α_n| = 0;

  2. (ii)

lim sup_{n→∞} α_n/λ_n < ∞, lim_{n→∞} (1/λ_n)|1/α_n − 1/α_{n−1}| = 0 and lim_{n→∞} (1/α_n)|1 − λ_{n−1}/λ_n| = 0;

  3. (iii)

lim_{n→∞} |β_n − β_{n−1}|/(λ_n α_n) = 0 and lim_{n→∞} |γ_n − γ_{n−1}|/(λ_n α_n) = 0;

  4. (iv)

lim_{n→∞} |r_n − r_{n−1}|/(λ_n α_n) = 0 and lim_{n→∞} |ρ_n − ρ_{n−1}|/(λ_n α_n) = 0;

  5. (v)

β_n + γ_n + δ_n = 1 and (γ_n + δ_n)ξ ≤ γ_n for all n ≥ 0;

  6. (vi)

{β_n} ⊂ [a,b] ⊂ (0,1) and lim inf_{n→∞} δ_n > 0.

Then

  1. (a)

lim_{n→∞} ∥x_{n+1} − x_n∥/α_n = 0;

  2. (b)

ω_w(x_n) ⊂ Ω;

  3. (c)

{x_n} converges strongly to a point x* ∈ Ω, which is the unique solution of the HVIP (1.9), i.e.,

⟨(μF − γS)x*, p − x*⟩ ≥ 0, ∀p ∈ Ω.

Proof Since Φ: C → C is a ξ_0-strictly pseudocontractive mapping on C, it is well known that, for the constant ξ_0 ∈ [0,1),

⟨Φx − Φy, x − y⟩ ≤ ∥x − y∥² − ((1 − ξ_0)/2)∥(I − Φ)x − (I − Φ)y∥², ∀x, y ∈ C.

It is clear that in this case the mapping Γ = I − Φ is ((1 − ξ_0)/2)-inverse-strongly monotone. Moreover, we have, for ν ∈ (0, 1 − ξ_0),

y_n = β_n x_n + γ_n P_C(I − νΓ)v_n + δ_n T P_C(I − νΓ)v_n = β_n x_n + γ_n(I − ν(I − Φ))v_n + δ_n T(I − ν(I − Φ))v_n.

Now let us show that Fix(Φ) = VI(C, Γ). In fact, we have, for λ > 0,

u ∈ VI(C, Γ) ⟺ ⟨Γu, y − u⟩ ≥ 0, ∀y ∈ C ⟺ ⟨u − λΓu − u, u − y⟩ ≥ 0, ∀y ∈ C ⟺ u = P_C(u − λΓu) ⟺ u = P_C(u − λu + λΦu) ⟺ ⟨u − λu + λΦu − u, u − y⟩ ≥ 0, ∀y ∈ C ⟺ ⟨Φu − u, u − y⟩ ≥ 0, ∀y ∈ C ⟺ u = Φu ⟺ u ∈ Fix(Φ).

Consequently,

Ω = GMEP(Θ, φ, A) ∩ I(B, R) ∩ VI(C, Γ) ∩ Fix(T) = GMEP(Θ, φ, A) ∩ I(B, R) ∩ Fix(Φ) ∩ Fix(T).

Therefore, by Corollary 3.1, we derive the desired result. □
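The equivalence Fix(Φ) = VI(C, Γ) used in this proof can be checked numerically on a toy instance: for Γ = I − Φ, a point u solves VI(C, Γ) exactly when u = P_C(u − λΓu) for λ > 0, i.e., exactly when u ∈ Fix(Φ). The choices C = [0,1] and Φ(x) = x/2 + 1/4 (a contraction, hence 0-strictly pseudocontractive, with fixed point 1/2) are illustrative, not from the paper.

```python
# Checking Fix(Phi) = VI(C, Gamma) for Gamma = I - Phi on C = [0, 1]:
# u solves VI(C, Gamma) iff u = P_C(u - lam * Gamma(u)) for lam > 0.

def P_C(t, lo=0.0, hi=1.0):            # projection onto C = [0, 1]
    return max(lo, min(hi, t))

def Phi(x):                            # contraction with fixed point 1/2
    return 0.5 * x + 0.25

def Gamma(x):                          # Gamma = I - Phi
    return x - Phi(x)

lam = 1.0
u_fix = 0.5                                      # the unique fixed point of Phi
resid_fix = P_C(u_fix - lam * Gamma(u_fix))      # returns u_fix itself
other = P_C(0.2 - lam * Gamma(0.2))              # a non-fixed point moves away
print(resid_fix, other)
```

The fixed point is left invariant by the projected-gradient map, while any other point of C is displaced, mirroring the chain of equivalences in the proof.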

Remark 3.1 Our results generalize and improve results in [12, 20] and the references therein.

References

  1. Ansari QH, Lalitha CS, Mehta M: Generalized Convexity, Nonsmooth Variational Inequalities and Nonsmooth Optimization. CRC Press, Boca Raton; 2013.

    Google Scholar 

  2. Oden JT: Quantitative Methods on Nonlinear Mechanics. Prentice Hall, Englewood Cliffs; 1986.

    Google Scholar 

  3. Ceng L-C, Guu S-M, Yao J-C: Hybrid iterative method for finding common solutions of generalized mixed equilibrium and fixed point problems. Fixed Point Theory Appl. 2012., 2012: Article ID 92

    Google Scholar 

  4. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.

    Google Scholar 

  5. Ceng L-C, Ansari QH, Yao J-C: Relaxed extragradient iterative methods for variational inequalities. Appl. Math. Comput. 2011, 218: 1112–1123. 10.1016/j.amc.2011.01.061

    Article  MathSciNet  Google Scholar 

  6. Ansari QH: Metric Spaces: Including Fixed Point Theory and Set-Valued Maps. Narosa Publishing House, New Delhi; 2014.

    Google Scholar 

  7. Almezel S, Ansari QH, Khamsi MA: Topics in Fixed Point Theory. Springer, Cham; 2014.

    Book  Google Scholar 

  8. Peng J-W, Yao J-C: A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12: 1401–1432.

    MathSciNet  Google Scholar 

  9. Verma RU: On a new system of nonlinear variational inequalities and associated iterative algorithms. Math. Sci. Res. Hot-Line 1999, 3(8):65–68.

    MathSciNet  Google Scholar 

  10. Yamada I: The hybrid steepest-descent method for the variational inequality problems over the intersection of the fixed-point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Batnariu D, Censor Y, Reich S. North-Holland, Amsterdam; 2001:473–504.

    Chapter  Google Scholar 

  11. Ceng L-C, Yao J-C: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201. 10.1016/j.cam.2007.02.022

    Article  MathSciNet  Google Scholar 

  12. Cai G, Bu SQ: Strong and weak convergence theorems for general mixed equilibrium problems and variational inequality problems and fixed point problems in Hilbert spaces. J. Comput. Appl. Math. 2013, 247: 34–52.

    Article  MathSciNet  Google Scholar 

  13. Sahu DR, Xu H-K, Yao J-C: Asymptotically strict pseudocontractive mappings in the intermediate sense. Nonlinear Anal. 2009, 70: 3502–3511. 10.1016/j.na.2008.07.007

    Article  MathSciNet  Google Scholar 

  14. Ceng L-C, Wong MM: Relaxed extragradient method for finding a common element of systems of variational inequalities and fixed point problems. Taiwan. J. Math. 2013, 17(2):701–724.

    MathSciNet  Google Scholar 

  15. Ceng L-C, Wang CY, Yao J-C: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67(3):375–390. 10.1007/s00186-007-0207-4

    Article  MathSciNet  Google Scholar 

  16. Ansari QH, Scahiable S, Yao J-C: System of vector equilibrium problems and its applications. J. Optim. Theory Appl. 2000, 107: 547–557. 10.1023/A:1026495115191

    Article  MathSciNet  Google Scholar 

  17. Ansari QH, Yao J-C: A fixed point theorem and its applications to the system of variational inequalities. Bull. Aust. Math. Soc. 1999, 59: 433–442. 10.1017/S0004972700033116

    Article  MathSciNet  Google Scholar 

  18. Ansari QH, Yao J-C: Systems of generalized variational inequalities and their applications. Appl. Anal. 2000, 76(3–4):203–217. 10.1080/00036810008840877

    Article  MathSciNet  Google Scholar 

  19. Lin L-J, Yu Z-T, Ansari QH, Lai L-P: Fixed point and maximal element theorems with applications to abstract economies and minimax inequalities. J. Math. Anal. Appl. 2003, 284: 656–671. 10.1016/S0022-247X(03)00385-8

  20. Yao Y, Liou YC, Marino G: Two-step iterative algorithms for hierarchical fixed point problems and variational inequality problems. J. Appl. Math. Comput. 2009, 31(1–2):433–445. 10.1007/s12190-008-0222-5

  21. Ansari QH, Ceng L-C, Gupta H: Triple hierarchical variational inequalities. In Nonlinear Analysis: Approximation Theory, Optimization and Applications. Edited by: Ansari QH. Birkhäuser, Basel; 2014:231–280.

  22. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056

  23. Marino G, Xu HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329: 336–346. 10.1016/j.jmaa.2006.06.055

  24. Yao Y, Liou YC, Kang SM: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59: 3472–3480. 10.1016/j.camwa.2010.03.036

  25. Huang NJ: A new completely general class of variational inclusions with noncompact valued mappings. Comput. Math. Appl. 1998, 35(10):9–14. 10.1016/S0898-1221(98)00067-4

  26. Ceng L-C, Ansari QH, Wong MM, Yao J-C: Mann type hybrid extragradient method for variational inequalities, variational inclusions and fixed point problems. Fixed Point Theory 2012, 13(2):403–422.

  27. Zeng L-C, Guu S-M, Yao J-C: Characterization of H-monotone operators with applications to variational inclusions. Comput. Math. Appl. 2005, 50(3–4):329–337. 10.1016/j.camwa.2005.06.001

  28. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Groningen; 1976.

  29. Goebel K, Kirk WA: Topics on Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.

  30. Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119(1):185–201.

Acknowledgements

This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The authors therefore acknowledge with thanks the DSR for its technical and financial support.

Author information

Corresponding author

Correspondence to Abdul Latif.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Ceng, LC., Latif, A., Ansari, Q.H. et al. Hybrid extragradient method for hierarchical variational inequalities. Fixed Point Theory Appl 2014, 222 (2014). https://doi.org/10.1186/1687-1812-2014-222
