Open Access

Hybrid extragradient method for hierarchical variational inequalities

Fixed Point Theory and Applications 2014, 2014:222

https://doi.org/10.1186/1687-1812-2014-222

Received: 3 August 2014

Accepted: 16 October 2014

Published: 30 October 2014

Abstract

In this paper, we consider a hierarchical variational inequality problem (HVIP) defined over a common set of solutions of finitely many generalized mixed equilibrium problems, finitely many variational inclusions, a general system of variational inequalities, and the fixed point problem of a strictly pseudocontractive mapping. By combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method and Mann’s iteration method, we introduce and analyze a multistep hybrid extragradient algorithm for finding a solution of our HVIP. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a solution of a general system of variational inequalities defined over a common set of solutions of finitely many generalized mixed equilibrium problems (GMEPs), finitely many variational inclusions, and the fixed point problem of a strictly pseudocontractive mapping. In the meantime, we also prove the strong convergence of the proposed algorithm to a unique solution of our HVIP. The results obtained in this paper improve and extend the corresponding results announced by many others.

MSC: 49J30, 47H09, 47J20.

Keywords

hierarchical variational inequalities; multistep hybrid extragradient algorithm; general system of variational inequalities; generalized mixed equilibrium problem; variational inclusions; strictly pseudocontractive mappings; nonexpansive mappings

1 Introduction and formulations

1.1 Variational inequalities and equilibrium problems

Let C be a nonempty closed convex subset of a real Hilbert space H and A : C → H be a nonlinear mapping. The variational inequality problem (VIP) is to find a point x ∈ C such that
⟨Ax, y − x⟩ ≥ 0, ∀y ∈ C.
(1.1)

The solution set of the VIP (1.1) defined by C and A is denoted by VI(C, A). The theory of variational inequalities is a well-established subject in nonlinear analysis and optimization. For different aspects of variational inequalities and their generalizations, we refer to [1–3] and the references therein. Several methods for solving different kinds of variational inequalities have appeared in the literature. Korpelevich's extragradient method [4] is one of them; it is further studied in [5].
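As a numerical illustration (not part of the original analysis), Korpelevich's extragradient method takes two projected steps per iteration, y_k = P_C(x_k − τA x_k) followed by x_{k+1} = P_C(x_k − τA y_k). The box constraint, the affine monotone operator, and the step size below are illustrative assumptions:

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection P_C onto the box C = [lo, hi]^n
    return np.clip(x, lo, hi)

def extragradient(A, x0, lo, hi, tau=0.1, iters=1000):
    # Korpelevich's extragradient method for VI(C, A):
    #   y_k = P_C(x_k - tau*A(x_k)),  x_{k+1} = P_C(x_k - tau*A(y_k))
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_box(x - tau * A(x), lo, hi)
        x = project_box(x - tau * A(y), lo, hi)
    return x

# illustrative monotone operator: A(x) = Mx, M = small identity plus rotation
M = np.array([[0.1, 1.0], [-1.0, 0.1]])
sol = extragradient(lambda x: M @ x, [2.0, -1.5], lo=-1.0, hi=1.0)
# since A(0) = 0 and 0 lies in C, the unique solution of VI(C, A) here is 0
```

The rotation part of M makes the operator monotone but not a gradient, which is exactly the setting where the extra projected step is needed for convergence.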

Let Θ : C × C → ℝ be a bifunction. The equilibrium problem (EP) is to find x ∈ C such that
Θ(x, y) ≥ 0, ∀y ∈ C.
(1.2)

The set of solutions of EP (1.2) is denoted by EP(Θ). It is a unified model of several problems, namely variational inequalities, Nash equilibrium problems, optimization problems, saddle point problems, etc. For further details of the EP, we refer to [6, 7] and the references therein.

Let φ : C → ℝ be a real-valued function and A : H → H be a nonlinear mapping. The generalized mixed equilibrium problem (GMEP) [8] is to find x ∈ C such that
Θ(x, y) + φ(y) − φ(x) + ⟨Ax, y − x⟩ ≥ 0, ∀y ∈ C.
(1.3)

We denote the set of solutions of GMEP (1.3) by GMEP(Θ, φ, A). The GMEP (1.3) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others. The GMEP is further considered and studied in [9, 10] and the references therein.

When A ≡ 0, the GMEP (1.3) reduces to the following mixed equilibrium problem (MEP): find x ∈ C such that
Θ(x, y) + φ(y) − φ(x) ≥ 0, ∀y ∈ C.

The set of solutions of MEP is denoted by MEP(Θ, φ).

The common assumptions on the bifunction Θ : C × C → ℝ used to study GMEP (1.3) or EP (1.2) are the following:

(A1) Θ(x, x) = 0 for all x ∈ C;

(A2) Θ is monotone, i.e., Θ(x, y) + Θ(y, x) ≤ 0 for any x, y ∈ C;

(A3) Θ is upper-hemicontinuous, i.e., for each x, y, z ∈ C,
lim sup_{t→0⁺} Θ(tz + (1 − t)x, y) ≤ Θ(x, y);

(A4) Θ(x, ·) is convex and lower semicontinuous for each x ∈ C.

We use the assumption that the function φ : C → ℝ is a lower semicontinuous and convex function with restriction (B1) or (B2), where

(B1) for each x ∈ H and r > 0, there exist a bounded subset D_x ⊆ C and y_x ∈ C such that, for any z ∈ C \ D_x,
Θ(z, y_x) + φ(y_x) − φ(z) + (1/r)⟨y_x − z, z − x⟩ < 0;

(B2) C is a bounded set.

Let r > 0 be a given positive number and let T_r^{(Θ,φ)} : H → C map each point to the solution set of the auxiliary mixed equilibrium problem, that is, for each x ∈ H,
T_r^{(Θ,φ)}(x) := {y ∈ C : Θ(y, z) + φ(z) − φ(y) + (1/r)⟨z − y, y − x⟩ ≥ 0, ∀z ∈ C}.

Some elementary conclusions related to MEP are given in the following result.

Proposition 1.1 [11] Let Θ : C × C → ℝ satisfy conditions (A1)-(A4) and φ : C → ℝ be a proper lower semicontinuous and convex function such that either (B1) or (B2) holds. For r > 0 and x ∈ H, define a mapping T_r^{(Θ,φ)} : H → C by
T_r^{(Θ,φ)}(x) = {z ∈ C : Θ(z, y) + φ(y) − φ(z) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C}, ∀x ∈ H.
Then the following conclusions hold:

(i) For each x ∈ H, T_r^{(Θ,φ)}(x) is nonempty and single-valued;

(ii) T_r^{(Θ,φ)} is firmly nonexpansive, that is, for any x, y ∈ H,
‖T_r^{(Θ,φ)}x − T_r^{(Θ,φ)}y‖² ≤ ⟨T_r^{(Θ,φ)}x − T_r^{(Θ,φ)}y, x − y⟩;

(iii) Fix(T_r^{(Θ,φ)}) = MEP(Θ, φ);

(iv) MEP(Θ, φ) is closed and convex;

(v) ‖T_s^{(Θ,φ)}x − T_t^{(Θ,φ)}x‖² ≤ ((s − t)/s)⟨T_s^{(Θ,φ)}x − T_t^{(Θ,φ)}x, T_s^{(Θ,φ)}x − x⟩ for all s, t > 0 and x ∈ H.

Recently, Cai and Bu [12] considered the problem of finding a common element of the set of solutions of finitely many generalized mixed equilibrium problems, the set of solutions of finitely many variational inequalities, and the set of fixed points of an asymptotically k-strict pseudocontractive mapping in the intermediate sense [13] in a real Hilbert space. They proposed and analyzed an algorithm for finding such an element and also presented a weak convergence result for the proposed algorithm.

1.2 General system of variational inequalities

Let F₁, F₂ : C → H be two mappings. We consider the general system of variational inequalities (GSVI) of finding (x*, y*) ∈ C × C such that
⟨ν₁F₁y* + x* − y*, x − x*⟩ ≥ 0, ∀x ∈ C,
⟨ν₂F₂x* + y* − x*, x − y*⟩ ≥ 0, ∀x ∈ C,
(1.4)
where ν₁ > 0 and ν₂ > 0 are two constants. It was considered and studied in [5, 14, 15]. In particular, if F₁ = F₂ = A, then the GSVI (1.4) reduces to the problem of finding (x*, y*) ∈ C × C such that
⟨ν₁Ay* + x* − y*, x − x*⟩ ≥ 0, ∀x ∈ C,
⟨ν₂Ax* + y* − x*, x − y*⟩ ≥ 0, ∀x ∈ C.
(1.5)
It is called a new system of variational inequalities (NSVI) [9]. It is worth mentioning that this system of two variational inequalities can be used to solve the Nash equilibrium problem. For applications of systems of variational inequalities to Nash equilibrium problems, we refer to [16–19] and the references therein. Further, if additionally x* = y*, then the NSVI reduces to the classical VIP (1.1). Putting G := P_C(I − ν₁F₁)P_C(I − ν₂F₂) and y* = P_C(I − ν₂F₂)x*, Ceng et al. [15] transformed the GSVI (1.4) into the following fixed point equation:
Gx* = x*.
(1.6)
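The fixed point reformulation (1.6) can be explored numerically by Picard iteration x ← Gx. The following sketch (our illustration, not the paper's algorithm) assumes C is the closed unit ball and F₁ = F₂ = ∇f for a convex quadratic f, which is inverse-strongly monotone:

```python
import numpy as np

def project_ball(x, r=1.0):
    # projection onto C = closed Euclidean ball of radius r
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def G(x, F1, F2, nu1, nu2):
    # G = P_C(I - nu1*F1) o P_C(I - nu2*F2); fixed points of G solve GSVI (1.4)
    y = project_ball(x - nu2 * F2(x))
    return project_ball(y - nu1 * F1(y))

# illustrative data: F1 = F2 = gradient of f(x) = x^T Q x / 2 - b^T x,
# which is zeta-inverse-strongly monotone with zeta = 1/lambda_max(Q) = 0.5
Q = np.diag([2.0, 1.0])
b = np.array([3.0, 0.0])
F = lambda x: Q @ x - b
x = np.zeros(2)
for _ in range(200):          # Picard iteration x <- G(x)
    x = G(x, F, F, nu1=0.3, nu2=0.3)
# with F1 = F2, the iterates approach the minimizer of f over the unit
# ball, here the boundary point (1, 0)
```

With ν_j ∈ (0, 2ζ_j), each factor P_C(I − ν_jF_j) is nonexpansive; for this strongly monotone F it is in fact a contraction, so the Picard iterates converge.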

1.3 Hierarchical variational inequalities

A variational inequality problem defined over the set of fixed points of a nonexpansive mapping is called a hierarchical variational inequality problem. Let S, T : H → H be nonexpansive mappings. We denote by Fix(T) and Fix(S) the sets of fixed points of T and S, respectively. If we replace C by Fix(T) in the formulation of VIP (1.1), then the VIP is defined by Fix(T) and A, and it is called a hierarchical variational inequality problem.

Yao et al. [20] considered the hierarchical variational inequality problem (HVIP) in which the mapping A is replaced by the monotone mapping I − S. They considered the following form of HVIP: find x̃ ∈ Fix(T) such that
⟨(I − S)x̃, p − x̃⟩ ≥ 0, ∀p ∈ Fix(T).
(1.7)

The solution set of HVIP (1.7) is denoted by Λ. It is not hard to check that solving HVIP (1.7) is equivalent to the fixed point problem of the composite mapping P_{Fix(T)}S, that is, find x̃ ∈ C such that x̃ = P_{Fix(T)}Sx̃. They proposed and analyzed an iterative method for solving this kind of HVIP. For further details and a comprehensive survey on HVIPs, we refer to [21] and the references therein.

1.4 Variational inclusions

Let B : C → H be a single-valued mapping and R : C → 2^H be a set-valued mapping with D(R) = C. We consider the variational inclusion problem of finding a point x ∈ C such that
0 ∈ Bx + Rx.
(1.8)

We denote by I(B, R) the solution set of the variational inclusion (1.8). In particular, if B ≡ R ≡ 0, then I(B, R) = C. If B = 0, then problem (1.8) becomes the inclusion problem introduced by Rockafellar [22]. It is well known that problem (1.8) provides a convenient framework for the unified study of optimal solutions in many optimization-related areas, including mathematical programming, complementarity problems, variational inequalities, optimal control, mathematical economics, equilibria, game theory, etc.

1.5 Problem to be considered

In this paper, we introduce and study the following hierarchical variational inequality problem (HVIP) defined over a common set of solutions of finitely many GMEPs, finitely many variational inclusions, a general system of variational inequalities, and the fixed point problem of a strictly pseudocontractive mapping. Throughout the paper, M and N denote two fixed positive integers.

Problem 1.1 Assume that

(i) for j = 1, 2, F_j : C → H and F : H → H are mappings;

(ii) for each k ∈ {1, 2, …, M}, Θ_k : C × C → ℝ is a bifunction satisfying conditions (A1)-(A4) and φ_k : C → ℝ ∪ {+∞} is a proper lower semicontinuous and convex function with restriction (B1) or (B2);

(iii) for each k ∈ {1, 2, …, M} and i ∈ {1, 2, …, N}, R_i : C → 2^H is a maximal monotone mapping, and A_k : H → H and B_i : C → H are mappings;

(iv) T : C → C is a mapping and S : H → H is a nonexpansive mapping;

(v) Ω := ∩_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ∩_{i=1}^N I(B_i, R_i) ∩ GSVI(G) ∩ Fix(T) ≠ ∅.

Then the objective is to find x* ∈ Ω such that
⟨(μF − γS)x*, x − x*⟩ ≥ 0, ∀x ∈ Ω.
(1.9)

By combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method [10], and Mann’s iteration method, we introduce and analyze a multistep hybrid extragradient algorithm for finding a solution of Problem 1.1. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a solution of GSVI (1.4) defined over a common set of solutions of finitely many generalized mixed equilibrium problems (GMEPs), finitely many variational inclusions and the fixed point problem of a strictly pseudocontractive mapping. In the meantime, we also prove the strong convergence of the proposed algorithm to a unique solution of Problem 1.1. The results obtained in this paper improve and extend the corresponding results announced by many others.

2 Preliminaries

Throughout this paper, we assume that H is a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively. Let C be a nonempty closed convex subset of H. We write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x and x_n → x to indicate that the sequence {x_n} converges strongly to x. Moreover, we use ω_w(x_n) to denote the weak ω-limit set of the sequence {x_n}, i.e.,
ω_w(x_n) := {x ∈ H : x_{n_i} ⇀ x for some subsequence {x_{n_i}} of {x_n}}.
Definition 2.1 A mapping A : C → H is called

(i) η-strongly monotone if there exists a constant η > 0 such that
⟨Ax − Ay, x − y⟩ ≥ η‖x − y‖², ∀x, y ∈ C;

(ii) ζ-inverse-strongly monotone if there exists a constant ζ > 0 such that
⟨Ax − Ay, x − y⟩ ≥ ζ‖Ax − Ay‖², ∀x, y ∈ C.
It is easy to see that the projection P_C is 1-inverse-strongly monotone. Inverse-strongly monotone (also referred to as co-coercive) operators have been applied widely in solving practical problems in various fields. It is obvious that if A is ζ-inverse-strongly monotone, then A is monotone and (1/ζ)-Lipschitz continuous. Moreover, we also have, for all u, v ∈ C and λ > 0,
‖(I − λA)u − (I − λA)v‖² = ‖(u − v) − λ(Au − Av)‖²
= ‖u − v‖² − 2λ⟨Au − Av, u − v⟩ + λ²‖Au − Av‖²
≤ ‖u − v‖² + λ(λ − 2ζ)‖Au − Av‖².
(2.1)

So, if λ ≤ 2ζ, then I − λA is a nonexpansive mapping from C to H.

Definition 2.2 A mapping T : H → H is said to be firmly nonexpansive if 2T − I is nonexpansive, or equivalently, if T is 1-inverse-strongly monotone (1-ism),
⟨x − y, Tx − Ty⟩ ≥ ‖Tx − Ty‖², ∀x, y ∈ H;
alternatively, T is firmly nonexpansive if and only if T can be expressed as
T = (1/2)(I + S),

where S : H → H is nonexpansive; projections are firmly nonexpansive.

It can easily be seen that if T is nonexpansive, then I − T is monotone.

It is clear that, in a real Hilbert space H, T : C → C is ξ-strictly pseudocontractive if and only if the following inequality holds:
⟨Tx − Ty, x − y⟩ ≤ ‖x − y‖² − ((1 − ξ)/2)‖(I − T)x − (I − T)y‖², ∀x, y ∈ C.

This immediately implies that if T is a ξ-strictly pseudocontractive mapping, then I − T is ((1 − ξ)/2)-inverse-strongly monotone; for further details, we refer to [23] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings and that the class of pseudocontractions strictly includes the class of strict pseudocontractions.

Lemma 2.1 [[23], Proposition 2.1]

Let C be a nonempty closed convex subset of a real Hilbert space H and T : C → C be a mapping.

(i) If T is a ξ-strictly pseudocontractive mapping, then T satisfies the Lipschitzian condition
‖Tx − Ty‖ ≤ ((1 + ξ)/(1 − ξ))‖x − y‖, ∀x, y ∈ C.

(ii) If T is a ξ-strictly pseudocontractive mapping, then the mapping I − T is semiclosed at 0, that is, if {x_n} is a sequence in C such that x_n ⇀ x̃ and (I − T)x_n → 0, then (I − T)x̃ = 0.

(iii) If T is a ξ-(quasi-)strict pseudocontraction, then the fixed-point set Fix(T) of T is closed and convex, so that the projection P_{Fix(T)} is well defined.

Lemma 2.2 [24]

Let C be a nonempty closed convex subset of a real Hilbert space H. Let T : C → C be a ξ-strictly pseudocontractive mapping. Let γ and δ be two nonnegative real numbers such that (γ + δ)ξ ≤ γ. Then
‖γ(x − y) + δ(Tx − Ty)‖ ≤ (γ + δ)‖x − y‖, ∀x, y ∈ C.

Let C be a nonempty closed convex subset of a real Hilbert space H. We introduce some notation. Let λ be a number in (0, 1] and let μ > 0. Associated with a nonexpansive mapping T : C → H, we define the mapping T^λ : C → H by
T^λx := Tx − λμF(Tx), ∀x ∈ C,
where F : H → H is an operator such that, for some positive constants κ, η > 0, F is κ-Lipschitzian and η-strongly monotone on H, that is, F satisfies the conditions
‖Fx − Fy‖ ≤ κ‖x − y‖ and ⟨Fx − Fy, x − y⟩ ≥ η‖x − y‖²

for all x, y ∈ H.

Remark 2.1 Since F is κ-Lipschitzian and η-strongly monotone on H, we get 0 < η ≤ κ. Hence, whenever 0 < μ < 2η/κ², we have
0 ≤ (1 − μη)² = 1 − 2μη + μ²η² ≤ 1 − 2μη + μ²κ² < 1 − 2μη + (2η/κ²)μκ² = 1,
which implies
0 < √(1 − 2μη + μ²κ²) ≤ 1.

So, τ := 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1].

Finally, recall that a set-valued mapping T : D(T) ⊆ H → 2^H is called monotone if, for all x, y ∈ D(T), f ∈ Tx and g ∈ Ty imply
⟨f − g, x − y⟩ ≥ 0.

A set-valued mapping T is called maximal monotone if T is monotone and (I + λT)D(T) = H for each λ > 0, where I is the identity mapping of H. We denote by G(T) the graph of T. It is well known that a monotone mapping T is maximal if and only if, for (x, f) ∈ H × H, ⟨f − g, x − y⟩ ≥ 0 for every (y, g) ∈ G(T) implies f ∈ Tx. Next we provide an example to illustrate the concept of a maximal monotone mapping.

Let A : C → H be a monotone, k-Lipschitz-continuous mapping and let N_C v be the normal cone to C at v ∈ C, i.e.,
N_C v = {u ∈ H : ⟨v − p, u⟩ ≥ 0, ∀p ∈ C}.
Define
T̃v = Av + N_C v if v ∈ C, and T̃v = ∅ if v ∉ C.
Then T̃ is maximal monotone (see [22]) such that
0 ∈ T̃v ⟺ v ∈ VI(C, A).
(2.2)
Let R : D(R) ⊆ H → 2^H be a maximal monotone mapping and let λ > 0 be a positive number. Associated with R and λ, we define the resolvent operator J_{R,λ} : H → D(R)‾ by
J_{R,λ}x = (I + λR)⁻¹x, ∀x ∈ H.

The following lemma shows that the resolvent operator J_{R,λ} : H → D(R)‾ is nonexpansive.

Lemma 2.3 [25]

J_{R,λ} is single-valued and firmly nonexpansive, i.e.,
⟨J_{R,λ}x − J_{R,λ}y, x − y⟩ ≥ ‖J_{R,λ}x − J_{R,λ}y‖², ∀x, y ∈ H.

Consequently, J_{R,λ} is nonexpansive and monotone.
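As a one-dimensional illustration of Lemma 2.3 (our example: R is chosen as the subdifferential of the absolute value, a standard maximal monotone operator, so that J_{R,λ} works out to the soft-thresholding map), firm nonexpansiveness can be checked numerically:

```python
import numpy as np

def J(x, lam):
    # resolvent J_{R,lam} = (I + lam*R)^(-1) for R = subdifferential of |.|,
    # which works out to soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2)) * 5.0
for x, y in pts:
    jx, jy = J(x, 0.7), J(y, 0.7)
    # firm nonexpansiveness: <Jx - Jy, x - y> >= |Jx - Jy|^2
    assert (jx - jy) * (x - y) >= (jx - jy) ** 2 - 1e-12
```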

Lemma 2.4 [26]

Let R be a maximal monotone mapping with D(R) = C. Then for any given λ > 0, u ∈ C is a solution of the variational inclusion (1.8) if and only if u ∈ C satisfies
u = J_{R,λ}(u − λBu).

Lemma 2.5 [27]

Let R be a maximal monotone mapping with D(R) = C and let B : C → H be a strongly monotone, continuous, and single-valued mapping. Then for each z ∈ H, the equation z ∈ (B + λR)x has a unique solution x_λ for λ > 0.

Lemma 2.6 [26]

Let R be a maximal monotone mapping with D(R) = C and B : C → H be a monotone, continuous, and single-valued mapping. Then (I + λ(R + B))C = H for each λ > 0. In this case, R + B is maximal monotone.

Lemma 2.7 [28]

We have the resolvent identity
J_{R,λ}x = J_{R,μ}((μ/λ)x + (1 − μ/λ)J_{R,λ}x), ∀x ∈ H.

Remark 2.2 For λ, μ > 0, we have the following relation:
‖J_{R,λ}x − J_{R,μ}y‖ ≤ ‖x − y‖ + |λ − μ|((1/λ)‖J_{R,λ}x − y‖ + (1/μ)‖x − J_{R,μ}y‖), ∀x, y ∈ H.
(2.3)
Indeed, whenever λ ≥ μ, utilizing Lemma 2.7 we deduce that
‖J_{R,λ}x − J_{R,μ}y‖ = ‖J_{R,μ}((μ/λ)x + (1 − μ/λ)J_{R,λ}x) − J_{R,μ}y‖
≤ ‖(μ/λ)x + (1 − μ/λ)J_{R,λ}x − y‖
≤ (μ/λ)‖x − y‖ + (1 − μ/λ)‖J_{R,λ}x − y‖
≤ ‖x − y‖ + (|λ − μ|/λ)‖J_{R,λ}x − y‖.
Similarly, whenever λ < μ, we get
‖J_{R,λ}x − J_{R,μ}y‖ ≤ ‖x − y‖ + (|λ − μ|/μ)‖x − J_{R,μ}y‖.

Combining the above two cases, we conclude that (2.3) holds.
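Continuing the soft-thresholding example (our illustrative choice R = ∂|·| on the real line, for which J_{R,λ} is the soft-thresholding map), the resolvent identity of Lemma 2.7 can be verified pointwise:

```python
import numpy as np

def J(x, lam):
    # resolvent of R = subdifferential of |.|: soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

lam, mu = 1.5, 0.4
for x in np.linspace(-5.0, 5.0, 101):
    lhs = J(x, lam)
    # resolvent identity: J_lam x = J_mu((mu/lam) x + (1 - mu/lam) J_lam x)
    rhs = J((mu / lam) * x + (1.0 - mu / lam) * J(x, lam), mu)
    assert abs(lhs - rhs) < 1e-12
```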

We need the following facts and lemmas to establish the strong convergence of the sequences generated by the proposed algorithm.

Lemma 2.8 Let X be a real inner product space. Then
‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀x, y ∈ X.
Lemma 2.9 Let H be a real Hilbert space. Then:

(a) ‖x − y‖² = ‖x‖² − ‖y‖² − 2⟨x − y, y⟩ for all x, y ∈ H;

(b) ‖λx + μy‖² = λ‖x‖² + μ‖y‖² − λμ‖x − y‖² for all x, y ∈ H and λ, μ ∈ [0, 1] with λ + μ = 1;

(c) if {x_n} is a sequence in H such that x_n ⇀ x, it follows that
lim sup_{n→∞} ‖x_n − y‖² = lim sup_{n→∞} ‖x_n − x‖² + ‖x − y‖², ∀y ∈ H.

Lemma 2.10 (Demiclosedness principle [29])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let S be a nonexpansive self-mapping on C with Fix(S) ≠ ∅. Then I − S is demiclosed, that is, whenever {x_n} is a sequence in C weakly converging to some x ∈ C and the sequence {(I − S)x_n} strongly converges to some y, it follows that (I − S)x = y, where I is the identity operator of H.

Lemma 2.11 [[30], Lemma 3.1]

T^λ is a contraction provided 0 < μ < 2η/κ²; that is,
‖T^λx − T^λy‖ ≤ (1 − λτ)‖x − y‖, ∀x, y ∈ C,

where τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1].
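The contraction bound of Lemma 2.11 can be sanity-checked numerically. Below, F is a linear strongly monotone operator and T = I; both are our illustrative assumptions (any nonexpansive T would do):

```python
import numpy as np

# F(x) = Qx with Q symmetric positive definite:
# kappa = lambda_max(Q) = 2, eta = lambda_min(Q) = 0.5
Q = np.diag([0.5, 2.0])
kappa, eta = 2.0, 0.5
mu = 0.2                                  # 0 < mu < 2*eta/kappa**2 = 0.25
tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * eta - mu * kappa**2))

lam = 0.8
T_lam = lambda x: x - lam * mu * (Q @ x)  # T^lam with T = I (nonexpansive)

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=(2, 2))
    # Lemma 2.11: ||T^lam x - T^lam y|| <= (1 - lam*tau)||x - y||
    lhs = np.linalg.norm(T_lam(x) - T_lam(y))
    assert lhs <= (1.0 - lam * tau) * np.linalg.norm(x - y) + 1e-12
```

Note that (1 − λτ) is a worst-case factor over the admissible class of F; for a specific F the actual Lipschitz constant of T^λ is typically smaller.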

Remark 2.3 In Lemma 2.11, put F = (1/2)I and μ = 2. Then we know that κ = η = 1/2, 0 < μ = 2 < 2η/κ² = 4, and
τ = 1 − √(1 − μ(2η − μκ²)) = 1 − √(1 − 2(2 × (1/2) − 2 × (1/2)²)) = 1.

Lemma 2.12 [30]

Let {s_n} be a sequence of nonnegative numbers satisfying the conditions
s_{n+1} ≤ (1 − α_n)s_n + α_nβ_n, ∀n ≥ 1,
where {α_n} and {β_n} are sequences of real numbers such that

(a) {α_n} ⊂ [0, 1] and ∑_{n=1}^∞ α_n = ∞, or equivalently,
∏_{n=1}^∞ (1 − α_n) := lim_{n→∞} ∏_{k=1}^n (1 − α_k) = 0;

(b) lim sup_{n→∞} β_n ≤ 0, or ∑_{n=1}^∞ |α_nβ_n| < ∞.

Then lim_{n→∞} s_n = 0.
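A quick numerical illustration of Lemma 2.12 (our example sequences: α_n = β_n = 1/(n + 1) satisfy conditions (a) and (b)):

```python
# s_{n+1} <= (1 - a_n)*s_n + a_n*b_n with a_n = b_n = 1/(n + 1):
# sum a_n diverges and limsup b_n <= 0, so Lemma 2.12 gives s_n -> 0
s = 1.0
for n in range(1, 200_000):
    a = b = 1.0 / (n + 1)
    s = (1.0 - a) * s + a * b
# s has decayed roughly like log(n)/n
```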

3 Algorithms and convergence results

In this section, we will introduce and analyze a multistep hybrid extragradient algorithm for finding a solution of the HVIP (1.9) (over the fixed point set of a strictly pseudocontractive mapping) with constraints of several problems: GSVI (1.4), finitely many GMEPs, and finitely many variational inclusions in a real Hilbert space. This algorithm is based on Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method [10], and Mann’s iteration method. We prove the strong convergence of the proposed algorithm to a unique solution of HVIP (1.9) under suitable conditions.

We propose the following algorithm to compute the approximate solution of Problem 1.1.

Algorithm 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. For each k ∈ {1, 2, …, M}, let Θ_k : C × C → ℝ be a bifunction satisfying conditions (A1)-(A4) and φ_k : C → ℝ ∪ {+∞} be a proper lower semicontinuous and convex function with restriction (B1) or (B2). For each k ∈ {1, 2, …, M} and i ∈ {1, 2, …, N}, let R_i : C → 2^H be a maximal monotone mapping, and let A_k : H → H and B_i : C → H be μ_k-inverse-strongly monotone and η_i-inverse-strongly monotone, respectively. Let T : C → C be a ξ-strictly pseudocontractive mapping, S : H → H be a nonexpansive mapping, and V : H → H be a ρ-contraction with coefficient ρ ∈ [0, 1). For each j = 1, 2, let F_j : C → H be ζ_j-inverse-strongly monotone, and let F : H → H be κ-Lipschitzian and η-strongly monotone with positive constants κ, η > 0 such that 0 ≤ γ < τ and 0 < μ < 2η/κ², where τ = 1 − √(1 − μ(2η − μκ²)). Assume that Ω := ∩_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ∩_{i=1}^N I(B_i, R_i) ∩ GSVI(G) ∩ Fix(T) ≠ ∅. Let {α_n}, {λ_n} ⊂ (0, 1], {β_n}, {γ_n}, {δ_n} ⊂ [0, 1], {ρ_n} ⊂ (0, 2α], {λ_{i,n}} ⊂ [a_i, b_i] ⊂ (0, 2η_i) and {r_{k,n}} ⊂ [c_k, d_k] ⊂ (0, 2μ_k), where i ∈ {1, 2, …, N} and k ∈ {1, 2, …, M}. For arbitrarily given x₀ ∈ H, let {x_n} be the sequence generated by
u_n = T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M) T_{r_{M−1,n}}^{(Θ_{M−1},φ_{M−1})}(I − r_{M−1,n}A_{M−1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I − r_{1,n}A_1)x_n,
v_n = J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N) J_{R_{N−1},λ_{N−1,n}}(I − λ_{N−1,n}B_{N−1}) ⋯ J_{R_1,λ_{1,n}}(I − λ_{1,n}B_1)u_n,
y_n = β_n x_n + γ_n Gv_n + δ_n TGv_n,
x_{n+1} = λ_nγ(α_n Vx_n + (1 − α_n)Sx_n) + (I − λ_nμF)y_n, ∀n ≥ 0,
(3.1)

where G := P_C(I − ν₁F₁)P_C(I − ν₂F₂) with ν_j ∈ (0, 2ζ_j) for j = 1, 2.

Theorem 3.1 In addition to the assumptions in Algorithm 3.1, suppose that

(i) lim_{n→∞} λ_n = 0, ∑_{n=0}^∞ λ_n = ∞ and lim_{n→∞} (1/λ_n)|1 − α_{n−1}/α_n| = 0;

(ii) lim sup_{n→∞} α_n/λ_n < ∞, lim_{n→∞} (1/λ_n)|1/α_n − 1/α_{n−1}| = 0 and lim_{n→∞} (1/α_n)|1 − λ_{n−1}/λ_n| = 0;

(iii) lim_{n→∞} |β_n − β_{n−1}|/(λ_nα_n) = 0 and lim_{n→∞} |γ_n − γ_{n−1}|/(λ_nα_n) = 0;

(iv) lim_{n→∞} |λ_{i,n} − λ_{i,n−1}|/(λ_nα_n) = 0 and lim_{n→∞} |r_{k,n} − r_{k,n−1}|/(λ_nα_n) = 0 for i = 1, 2, …, N and k = 1, 2, …, M;

(v) β_n + γ_n + δ_n = 1 and (γ_n + δ_n)ξ ≤ γ_n for all n ≥ 0;

(vi) {β_n} ⊂ [a, b] ⊂ (0, 1) and lim inf_{n→∞} δ_n > 0.

Then

(a) lim_{n→∞} ‖x_{n+1} − x_n‖/α_n = 0;

(b) ω_w(x_n) ⊂ Ω;

(c) {x_n} converges strongly to a point x* ∈ Ω, which is a unique solution of HVIP (1.9), that is,
⟨(μF − γS)x*, p − x*⟩ ≥ 0, ∀p ∈ Ω.
Proof First of all, observe that
μη ≥ τ ⟺ μη ≥ 1 − √(1 − μ(2η − μκ²))
⟺ √(1 − μ(2η − μκ²)) ≥ 1 − μη
⟺ 1 − 2μη + μ²κ² ≥ 1 − 2μη + μ²η²
⟺ κ² ≥ η²
⟺ κ ≥ η
and
⟨(μF − γS)x − (μF − γS)y, x − y⟩ = μ⟨Fx − Fy, x − y⟩ − γ⟨Sx − Sy, x − y⟩
≥ μη‖x − y‖² − γ‖x − y‖² = (μη − γ)‖x − y‖², ∀x, y ∈ H.
Since 0 ≤ γ < τ and κ ≥ η, we know that μη ≥ τ > γ and hence the mapping μF − γS is (μη − γ)-strongly monotone. Moreover, it is clear that the mapping μF − γS is (μκ + γ)-Lipschitzian. Thus, there exists a unique solution x* in Ω to the VIP
⟨(μF − γS)x*, p − x*⟩ ≥ 0, ∀p ∈ Ω.
That is, {x*} = VI(Ω, μF − γS). Now, we put
Δ_n^k := T_{r_{k,n}}^{(Θ_k,φ_k)}(I − r_{k,n}A_k) T_{r_{k−1,n}}^{(Θ_{k−1},φ_{k−1})}(I − r_{k−1,n}A_{k−1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I − r_{1,n}A_1)
for all k ∈ {1, 2, …, M} and n ≥ 1, and
Λ_n^i := J_{R_i,λ_{i,n}}(I − λ_{i,n}B_i) J_{R_{i−1},λ_{i−1,n}}(I − λ_{i−1,n}B_{i−1}) ⋯ J_{R_1,λ_{1,n}}(I − λ_{1,n}B_1)

for all i ∈ {1, 2, …, N}, Δ_n^0 := I and Λ_n^0 := I, where I is the identity mapping on H. Then we have u_n = Δ_n^M x_n and v_n = Λ_n^N u_n.

We divide the rest of the proof into several steps.

Step 1. We prove that {x_n} is bounded.

Indeed, take a fixed p ∈ Ω arbitrarily. Utilizing (2.1) and Proposition 1.1(ii), we have
‖u_n − p‖ = ‖T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_n^{M−1}x_n − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_n^{M−1}p‖
≤ ‖(I − r_{M,n}A_M)Δ_n^{M−1}x_n − (I − r_{M,n}A_M)Δ_n^{M−1}p‖
≤ ‖Δ_n^{M−1}x_n − Δ_n^{M−1}p‖
≤ ⋯
≤ ‖Δ_n^0 x_n − Δ_n^0 p‖ = ‖x_n − p‖.
(3.2)
Utilizing (2.1) and Lemma 2.3, we have
‖v_n − p‖ = ‖J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}p‖
≤ ‖(I − λ_{N,n}B_N)Λ_n^{N−1}u_n − (I − λ_{N,n}B_N)Λ_n^{N−1}p‖
≤ ‖Λ_n^{N−1}u_n − Λ_n^{N−1}p‖
≤ ⋯
≤ ‖Λ_n^0 u_n − Λ_n^0 p‖ = ‖u_n − p‖.
(3.3)
Combining (3.2) and (3.3), we have
‖v_n − p‖ ≤ ‖x_n − p‖.
(3.4)
Since p = Gp = P_C(I − ν₁F₁)P_C(I − ν₂F₂)p, F_j is ζ_j-inverse-strongly monotone for j = 1, 2, and 0 < ν_j ≤ 2ζ_j for j = 1, 2, we deduce that, for any n ≥ 0,
‖Gv_n − p‖² = ‖P_C(I − ν₁F₁)P_C(I − ν₂F₂)v_n − P_C(I − ν₁F₁)P_C(I − ν₂F₂)p‖²
≤ ‖(I − ν₁F₁)P_C(I − ν₂F₂)v_n − (I − ν₁F₁)P_C(I − ν₂F₂)p‖²
= ‖[P_C(I − ν₂F₂)v_n − P_C(I − ν₂F₂)p] − ν₁[F₁P_C(I − ν₂F₂)v_n − F₁P_C(I − ν₂F₂)p]‖²
≤ ‖P_C(I − ν₂F₂)v_n − P_C(I − ν₂F₂)p‖² + ν₁(ν₁ − 2ζ₁)‖F₁P_C(I − ν₂F₂)v_n − F₁P_C(I − ν₂F₂)p‖²
≤ ‖P_C(I − ν₂F₂)v_n − P_C(I − ν₂F₂)p‖²
≤ ‖(I − ν₂F₂)v_n − (I − ν₂F₂)p‖²
= ‖(v_n − p) − ν₂(F₂v_n − F₂p)‖²
≤ ‖v_n − p‖² + ν₂(ν₂ − 2ζ₂)‖F₂v_n − F₂p‖²
≤ ‖v_n − p‖².
(3.5)
(This shows that G : C → C is a nonexpansive mapping.) Since (γ_n + δ_n)ξ ≤ γ_n for all n ≥ 0 and T is ξ-strictly pseudocontractive, utilizing Lemma 2.2, we obtain from (3.1), (3.4), and (3.5)
‖y_n − p‖ = ‖β_n x_n + γ_n Gv_n + δ_n TGv_n − p‖
= ‖β_n(x_n − p) + γ_n(Gv_n − p) + δ_n(TGv_n − p)‖
≤ β_n‖x_n − p‖ + ‖γ_n(Gv_n − p) + δ_n(TGv_n − p)‖
≤ β_n‖x_n − p‖ + (γ_n + δ_n)‖Gv_n − p‖
≤ β_n‖x_n − p‖ + (γ_n + δ_n)‖v_n − p‖
≤ β_n‖x_n − p‖ + (γ_n + δ_n)‖x_n − p‖ = ‖x_n − p‖.
(3.6)
Utilizing Lemma 2.11, we deduce from (3.1), (3.6), and 0 ≤ γ < τ that, for all n ≥ 0,
‖x_{n+1} − p‖ = ‖λ_nγ(α_n Vx_n + (1 − α_n)Sx_n) + (I − λ_nμF)y_n − p‖
= ‖λ_nγ(α_n Vx_n + (1 − α_n)Sx_n) − λ_nμFp + (I − λ_nμF)y_n − (I − λ_nμF)p‖
≤ λ_n‖γ(α_n Vx_n + (1 − α_n)Sx_n) − μFp‖ + ‖(I − λ_nμF)y_n − (I − λ_nμF)p‖
= λ_n‖α_n(γVx_n − μFp) + (1 − α_n)(γSx_n − μFp)‖ + ‖(I − λ_nμF)y_n − (I − λ_nμF)p‖
≤ λ_n[α_n‖γVx_n − μFp‖ + (1 − α_n)‖γSx_n − μFp‖] + ‖(I − λ_nμF)y_n − (I − λ_nμF)p‖
≤ λ_n[α_n(γ‖Vx_n − Vp‖ + ‖γVp − μFp‖) + (1 − α_n)(γ‖Sx_n − Sp‖ + ‖γSp − μFp‖)] + (1 − λ_nτ)‖y_n − p‖
≤ λ_n[α_n(γρ‖x_n − p‖ + ‖γVp − μFp‖) + (1 − α_n)(γ‖x_n − p‖ + ‖γSp − μFp‖)] + (1 − λ_nτ)‖x_n − p‖
≤ λ_n[(1 − α_n(1 − ρ))γ‖x_n − p‖ + max{‖γVp − μFp‖, ‖γSp − μFp‖}] + (1 − λ_nτ)‖x_n − p‖
= λ_n(1 − α_n(1 − ρ))γ‖x_n − p‖ + λ_n max{‖γVp − μFp‖, ‖γSp − μFp‖} + (1 − λ_nτ)‖x_n − p‖
≤ λ_nγ‖x_n − p‖ + λ_n max{‖γVp − μFp‖, ‖γSp − μFp‖} + (1 − λ_nτ)‖x_n − p‖
= (1 − λ_n(τ − γ))‖x_n − p‖ + λ_n max{‖γVp − μFp‖, ‖γSp − μFp‖}
= (1 − λ_n(τ − γ))‖x_n − p‖ + λ_n(τ − γ)(max{‖γVp − μFp‖, ‖γSp − μFp‖}/(τ − γ))
≤ max{‖x_n − p‖, ‖γVp − μFp‖/(τ − γ), ‖γSp − μFp‖/(τ − γ)}.
By induction, we get
‖x_n − p‖ ≤ max{‖x₀ − p‖, ‖γVp − μFp‖/(τ − γ), ‖γSp − μFp‖/(τ − γ)}, ∀n ≥ 0.

Thus, {x_n} is bounded and so are the sequences {u_n}, {v_n}, and {y_n}.

Step 2. We prove that lim_{n→∞} ‖x_{n+1} − x_n‖/α_n = 0.

Indeed, utilizing (2.1) and (2.3), we obtain
‖v_{n+1} − v_n‖ = ‖Λ_{n+1}^N u_{n+1} − Λ_n^N u_n‖
= ‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n+1}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖
≤ ‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n+1}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n+1}}(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1}‖ + ‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖
≤ ‖(I − λ_{N,n+1}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1}‖ + ‖(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖ + |λ_{N,n+1} − λ_{N,n}| × ((1/λ_{N,n+1})‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖ + (1/λ_{N,n})‖(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖)
≤ |λ_{N,n+1} − λ_{N,n}|(‖B_NΛ_{n+1}^{N−1}u_{n+1}‖ + M̃) + ‖Λ_{n+1}^{N−1}u_{n+1} − Λ_n^{N−1}u_n‖
≤ |λ_{N,n+1} − λ_{N,n}|(‖B_NΛ_{n+1}^{N−1}u_{n+1}‖ + M̃) + |λ_{N−1,n+1} − λ_{N−1,n}|(‖B_{N−1}Λ_{n+1}^{N−2}u_{n+1}‖ + M̃) + ‖Λ_{n+1}^{N−2}u_{n+1} − Λ_n^{N−2}u_n‖
≤ ⋯
≤ |λ_{N,n+1} − λ_{N,n}|(‖B_NΛ_{n+1}^{N−1}u_{n+1}‖ + M̃) + |λ_{N−1,n+1} − λ_{N−1,n}|(‖B_{N−1}Λ_{n+1}^{N−2}u_{n+1}‖ + M̃) + ⋯ + |λ_{1,n+1} − λ_{1,n}|(‖B_1Λ_{n+1}^0 u_{n+1}‖ + M̃) + ‖Λ_{n+1}^0 u_{n+1} − Λ_n^0 u_n‖
≤ M̃₀ ∑_{i=1}^N |λ_{i,n+1} − λ_{i,n}| + ‖u_{n+1} − u_n‖,
(3.7)
where
sup_{n≥0} {(1/λ_{N,n+1})‖J_{R_N,λ_{N,n+1}}(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − (I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖ + (1/λ_{N,n})‖(I − λ_{N,n}B_N)Λ_{n+1}^{N−1}u_{n+1} − J_{R_N,λ_{N,n}}(I − λ_{N,n}B_N)Λ_n^{N−1}u_n‖} ≤ M̃

for some M̃ > 0, and sup_{n≥0} {∑_{i=1}^N ‖B_iΛ_{n+1}^{i−1}u_{n+1}‖ + M̃} ≤ M̃₀ for some M̃₀ > 0.

Utilizing Proposition 1.1(ii), (v), we deduce that
‖u_{n+1} − u_n‖ = ‖Δ_{n+1}^M x_{n+1} − Δ_n^M x_n‖
= ‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_n^{M−1}x_n‖
≤ ‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_{n+1}^{M−1}x_{n+1}‖ + ‖T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_{n+1}^{M−1}x_{n+1} − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_n^{M−1}x_n‖
≤ ‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1}‖ + ‖T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − T_{r_{M,n}}^{(Θ_M,φ_M)}(I − r_{M,n}A_M)Δ_{n+1}^{M−1}x_{n+1}‖ + ‖(I − r_{M,n}A_M)Δ_{n+1}^{M−1}x_{n+1} − (I − r_{M,n}A_M)Δ_n^{M−1}x_n‖
≤ (|r_{M,n+1} − r_{M,n}|/r_{M,n+1})‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − (I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1}‖ + |r_{M,n+1} − r_{M,n}|‖A_MΔ_{n+1}^{M−1}x_{n+1}‖ + ‖Δ_{n+1}^{M−1}x_{n+1} − Δ_n^{M−1}x_n‖
= |r_{M,n+1} − r_{M,n}|[‖A_MΔ_{n+1}^{M−1}x_{n+1}‖ + (1/r_{M,n+1})‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − (I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1}‖] + ‖Δ_{n+1}^{M−1}x_{n+1} − Δ_n^{M−1}x_n‖
≤ ⋯
≤ |r_{M,n+1} − r_{M,n}|[‖A_MΔ_{n+1}^{M−1}x_{n+1}‖ + (1/r_{M,n+1})‖T_{r_{M,n+1}}^{(Θ_M,φ_M)}(I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1} − (I − r_{M,n+1}A_M)Δ_{n+1}^{M−1}x_{n+1}‖] + ⋯ + |r_{1,n+1} − r_{1,n}|[‖A_1Δ_{n+1}^0 x_{n+1}‖ + (1/r_{1,n+1})‖T_{r_{1,n+1}}^{(Θ_1,φ_1)}(I − r_{1,n+1}A_1)Δ_{n+1}^0 x_{n+1} − (I − r_{1,n+1}A_1)Δ_{n+1}^0 x_{n+1}‖] + ‖Δ_{n+1}^0 x_{n+1} − Δ_n^0 x_n‖
≤ M̃₁ ∑_{k=1}^M |r_{k,n+1} − r_{k,n}| + ‖x_{n+1} − x_n‖,
(3.8)
where M̃₁ > 0 is a constant such that, for each n ≥ 0,
∑_{k=1}^M [‖A_kΔ_{n+1}^{k−1}x_{n+1}‖ + (1/r_{k,n+1})‖T_{r_{k,n+1}}^{(Θ_k,φ_k)}(I − r_{k,n+1}A_k)Δ_{n+1}^{k−1}x_{n+1} − (I − r_{k,n+1}A_k)Δ_{n+1}^{k−1}x_{n+1}‖] ≤ M̃₁.
Furthermore, we define y_n = β_n x_n + (1 − β_n)w_n for all n ≥ 0. It follows that
w_{n+1} − w_n = (y_{n+1} − β_{n+1}x_{n+1})/(1 − β_{n+1}) − (y_n − β_n x_n)/(1 − β_n)
= (γ_{n+1}Gv_{n+1} + δ_{n+1}TGv_{n+1})/(1 − β_{n+1}) − (γ_n Gv_n + δ_n TGv_n)/(1 − β_n)
= [γ_{n+1}(Gv_{n+1} − Gv_n) + δ_{n+1}(TGv_{n+1} − TGv_n)]/(1 − β_{n+1}) + (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n))Gv_n + (δ_{n+1}/(1 − β_{n+1}) − δ_n/(1 − β_n))TGv_n.
(3.9)
Since (γ_n + δ_n)ξ ≤ γ_n for all n ≥ 0, utilizing Lemma 2.2, we have
‖γ_{n+1}(Gv_{n+1} − Gv_n) + δ_{n+1}(TGv_{n+1} − TGv_n)‖ ≤ (γ_{n+1} + δ_{n+1})‖Gv_{n+1} − Gv_n‖.
(3.10)
Hence it follows from (3.7)-(3.10) that
‖w_{n+1} − w_n‖ ≤ ‖γ_{n+1}(Gv_{n+1} − Gv_n) + δ_{n+1}(TGv_{n+1} − TGv_n)‖/(1 − β_{n+1}) + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|‖Gv_n‖ + |δ_{n+1}/(1 − β_{n+1}) − δ_n/(1 − β_n)|‖TGv_n‖
≤ ((γ_{n+1} + δ_{n+1})/(1 − β_{n+1}))‖Gv_{n+1} − Gv_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖)
= ‖Gv_{n+1} − Gv_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖)
≤ ‖v_{n+1} − v_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖)
≤ M̃₀ ∑_{i=1}^N |λ_{i,n+1} − λ_{i,n}| + ‖u_{n+1} − u_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖)
≤ M̃₀ ∑_{i=1}^N |λ_{i,n+1} − λ_{i,n}| + M̃₁ ∑_{k=1}^M |r_{k,n+1} − r_{k,n}| + ‖x_{n+1} − x_n‖ + |γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)|(‖Gv_n‖ + ‖TGv_n‖).
(3.11)
Meanwhile, a simple calculation shows that
\[
y_{n+1}-y_n=\beta_n(x_{n+1}-x_n)+(1-\beta_n)(w_{n+1}-w_n)+(\beta_{n+1}-\beta_n)(x_{n+1}-w_{n+1}).
\]
So, it follows from (3.11) that
\[
\begin{aligned}
\|y_{n+1}-y_n\| &\le \beta_n\|x_{n+1}-x_n\|+(1-\beta_n)\|w_{n+1}-w_n\|+|\beta_{n+1}-\beta_n|\,\|x_{n+1}-w_{n+1}\| \\
&\le \beta_n\|x_{n+1}-x_n\|+(1-\beta_n)\Bigl[\widetilde{M}_0\sum_{i=1}^{N}|\lambda_{i,n+1}-\lambda_{i,n}|+\widetilde{M}_1\sum_{k=1}^{M}|r_{k,n+1}-r_{k,n}|+\|x_{n+1}-x_n\| \\
&\qquad +\Bigl|\frac{\gamma_{n+1}}{1-\beta_{n+1}}-\frac{\gamma_n}{1-\beta_n}\Bigr|\bigl(\|Gv_n\|+\|TGv_n\|\bigr)\Bigr]+|\beta_{n+1}-\beta_n|\,\|x_{n+1}-w_{n+1}\| \\
&\le \|x_{n+1}-x_n\|+\widetilde{M}_0\sum_{i=1}^{N}|\lambda_{i,n+1}-\lambda_{i,n}|+\widetilde{M}_1\sum_{k=1}^{M}|r_{k,n+1}-r_{k,n}| \\
&\quad +\frac{|\gamma_{n+1}-\gamma_n|(1-\beta_n)+\gamma_n|\beta_{n+1}-\beta_n|}{1-\beta_{n+1}}\bigl(\|Gv_n\|+\|TGv_n\|\bigr)+|\beta_{n+1}-\beta_n|\,\|x_{n+1}-w_{n+1}\| \\
&\le \|x_{n+1}-x_n\|+\widetilde{M}_0\sum_{i=1}^{N}|\lambda_{i,n+1}-\lambda_{i,n}|+\widetilde{M}_1\sum_{k=1}^{M}|r_{k,n+1}-r_{k,n}| \\
&\quad +|\gamma_{n+1}-\gamma_n|\,\frac{\|Gv_n\|+\|TGv_n\|}{1-b}+|\beta_{n+1}-\beta_n|\Bigl(\|x_{n+1}-w_{n+1}\|+\frac{\|Gv_n\|+\|TGv_n\|}{1-b}\Bigr) \\
&\le \|x_{n+1}-x_n\|+\widetilde{M}_2\Bigl(\sum_{i=1}^{N}|\lambda_{i,n+1}-\lambda_{i,n}|+\sum_{k=1}^{M}|r_{k,n+1}-r_{k,n}|+|\gamma_{n+1}-\gamma_n|+|\beta_{n+1}-\beta_n|\Bigr),
\end{aligned}
\]
(3.12)

where \(\widetilde{M}_2>0\) is a constant such that \(\sup_{n\ge0}\bigl\{\|x_{n+1}-w_{n+1}\|+\frac{\|Gv_n\|+\|TGv_n\|}{1-b}+\widetilde{M}_0+\widetilde{M}_1\bigr\}\le\widetilde{M}_2\).

On the other hand, define \(z_n:=\alpha_nVx_n+(1-\alpha_n)Sx_n\) for all \(n\ge0\). Then, by the definition of the iterative scheme, \(x_{n+1}=\lambda_n\gamma z_n+(I-\lambda_n\mu F)y_n\) for all \(n\ge0\). Simple calculations show that
\[
\begin{cases}
z_{n+1}-z_n=(\alpha_{n+1}-\alpha_n)(Vx_n-Sx_n)+\alpha_{n+1}(Vx_{n+1}-Vx_n)+(1-\alpha_{n+1})(Sx_{n+1}-Sx_n),\\[1mm]
x_{n+2}-x_{n+1}=(\lambda_{n+1}-\lambda_n)(\gamma z_n-\mu Fy_n)+\lambda_{n+1}\gamma(z_{n+1}-z_n)\\[1mm]
\phantom{x_{n+2}-x_{n+1}=}+(I-\lambda_{n+1}\mu F)y_{n+1}-(I-\lambda_{n+1}\mu F)y_n.
\end{cases}
\]
Since \(V\) is a \(\rho\)-contraction with coefficient \(\rho\in[0,1)\) and \(S\) is a nonexpansive mapping, we conclude that
\[
\begin{aligned}
\|z_{n+1}-z_n\| &\le |\alpha_{n+1}-\alpha_n|\,\|Vx_n-Sx_n\|+\alpha_{n+1}\|Vx_{n+1}-Vx_n\|+(1-\alpha_{n+1})\|Sx_{n+1}-Sx_n\| \\
&\le |\alpha_{n+1}-\alpha_n|\,\|Vx_n-Sx_n\|+\alpha_{n+1}\rho\|x_{n+1}-x_n\|+(1-\alpha_{n+1})\|x_{n+1}-x_n\| \\
&= \bigl(1-\alpha_{n+1}(1-\rho)\bigr)\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\,\|Vx_n-Sx_n\|,
\end{aligned}
\]
which together with (3.12) and \(0\le\gamma<\tau\) implies that
\[
\begin{aligned}
\|x_{n+2}-x_{n+1}\| &\le |\lambda_{n+1}-\lambda_n|\,\|\gamma z_n-\mu Fy_n\|+\lambda_{n+1}\gamma\|z_{n+1}-z_n\| \\
&\quad +\bigl\|(I-\lambda_{n+1}\mu F)y_{n+1}-(I-\lambda_{n+1}\mu F)y_n\bigr\| \\
&\le |\lambda_{n+1}-\lambda_n|\,\|\gamma z_n-\mu Fy_n\|+\lambda_{n+1}\gamma\|z_{n+1}-z_n\|+(1-\lambda_{n+1}\tau)\|y_{n+1}-y_n\| \\
&\le |\lambda_{n+1}-\lambda_n|\,\|\gamma z_n-\mu Fy_n\| \\
&\quad +\lambda_{n+1}\gamma\bigl[\bigl(1-\alpha_{n+1}(1-\rho)\bigr)\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\,\|Vx_n-Sx_n\|\bigr] \\
&\quad +(1-\lambda_{n+1}\tau)\Bigl[\|x_{n+1}-x_n\|+\widetilde{M}_2\Bigl(\sum_{i=1}^{N}|\lambda_{i,n+1}-\lambda_{i,n}|+\sum_{k=1}^{M}|r_{k,n+1}-r_{k,n}| \\
&\qquad +|\gamma_{n+1}-\gamma_n|+|\beta_{n+1}-\beta_n|\Bigr)\Bigr] \\
&\le \bigl(1-\lambda_{n+1}(\tau-\gamma)\bigr)\|x_{n+1}-x_n\|+|\lambda_{n+1}-\lambda_n|\,\|\gamma z_n-\mu Fy_n\|+|\alpha_{n+1}-\alpha_n|\,\|Vx_n-Sx_n\| \\
&\quad +\widetilde{M}_2\Bigl(\sum_{i=1}^{N}|\lambda_{i,n+1}-\lambda_{i,n}|+\sum_{k=1}^{M}|r_{k,n+1}-r_{k,n}|+|\gamma_{n+1}-\gamma_n|+|\beta_{n+1}-\beta_n|\Bigr) \\
&\le \bigl(1-\lambda_{n+1}(\tau-\gamma)\bigr)\|x_{n+1}-x_n\| \\
&\quad +\widetilde{M}_3\Bigl\{\sum_{i=1}^{N}|\lambda_{i,n+1}-\lambda_{i,n}|+\sum_{k=1}^{M}|r_{k,n+1}-r_{k,n}|+|\lambda_{n+1}-\lambda_n|+|\alpha_{n+1}-\alpha_n| \\
&\qquad +|\beta_{n+1}-\beta_n|+|\gamma_{n+1}-\gamma_n|\Bigr\},
\end{aligned}
\]
where \(\widetilde{M}_3>0\) is a constant such that \(\sup_{n\ge0}\{\|\gamma z_n-\mu Fy_n\|+\|Vx_n-Sx_n\|+\widetilde{M}_2\}\le\widetilde{M}_3\). Consequently,
\[
\begin{aligned}
\frac{\|x_{n+1}-x_n\|}{\alpha_n} &\le \frac{\bigl(1-\lambda_n(\tau-\gamma)\bigr)\|x_n-x_{n-1}\|}{\alpha_n}+\widetilde{M}_3\Bigl\{\sum_{i=1}^{N}\frac{|\lambda_{i,n}-\lambda_{i,n-1}|}{\alpha_n}+\sum_{k=1}^{M}\frac{|r_{k,n}-r_{k,n-1}|}{\alpha_n} \\
&\qquad +\frac{|\lambda_n-\lambda_{n-1}|}{\alpha_n}+\frac{|\alpha_n-\alpha_{n-1}|}{\alpha_n}+\frac{|\beta_n-\beta_{n-1}|}{\alpha_n}+\frac{|\gamma_n-\gamma_{n-1}|}{\alpha_n}\Bigr\} \\
&= \bigl(1-\lambda_n(\tau-\gamma)\bigr)\frac{\|x_n-x_{n-1}\|}{\alpha_{n-1}}+\bigl(1-\lambda_n(\tau-\gamma)\bigr)\|x_n-x_{n-1}\|\Bigl(\frac{1}{\alpha_n}-\frac{1}{\alpha_{n-1}}\Bigr) \\
&\quad +\widetilde{M}_3\Bigl\{\sum_{i=1}^{N}\frac{|\lambda_{i,n}-\lambda_{i,n-1}|}{\alpha_n}+\sum_{k=1}^{M}\frac{|r_{k,n}-r_{k,n-1}|}{\alpha_n}+\frac{|\lambda_n-\lambda_{n-1}|}{\alpha_n}+\frac{|\alpha_n-\alpha_{n-1}|}{\alpha_n} \\
&\qquad +\frac{|\beta_n-\beta_{n-1}|}{\alpha_n}+\frac{|\gamma_n-\gamma_{n-1}|}{\alpha_n}\Bigr\} \\
&\le \bigl(1-\lambda_n(\tau-\gamma)\bigr)\frac{\|x_n-x_{n-1}\|}{\alpha_{n-1}} \\
&\quad +\lambda_n(\tau-\gamma)\,\frac{\widetilde{M}_4}{\tau-\gamma}\Bigl\{\frac{1}{\lambda_n}\Bigl|\frac{1}{\alpha_n}-\frac{1}{\alpha_{n-1}}\Bigr|+\sum_{i=1}^{N}\frac{|\lambda_{i,n}-\lambda_{i,n-1}|}{\lambda_n\alpha_n}+\sum_{k=1}^{M}\frac{|r_{k,n}-r_{k,n-1}|}{\lambda_n\alpha_n} \\
&\qquad +\frac{1}{\alpha_n}\Bigl|1-\frac{\lambda_{n-1}}{\lambda_n}\Bigr|+\frac{1}{\lambda_n}\Bigl|1-\frac{\alpha_{n-1}}{\alpha_n}\Bigr|+\frac{|\beta_n-\beta_{n-1}|}{\lambda_n\alpha_n}+\frac{|\gamma_n-\gamma_{n-1}|}{\lambda_n\alpha_n}\Bigr\},
\end{aligned}
\]
(3.13)
where \(\widetilde{M}_4>0\) is a constant such that \(\sup_{n\ge1}\{\|x_n-x_{n-1}\|+\widetilde{M}_3\}\le\widetilde{M}_4\). From conditions (i)-(iv) it follows that \(\sum_{n=0}^{\infty}\lambda_n(\tau-\gamma)=\infty\) and
\[
\lim_{n\to\infty}\frac{\widetilde{M}_4}{\tau-\gamma}\Bigl\{\frac{1}{\lambda_n}\Bigl|\frac{1}{\alpha_n}-\frac{1}{\alpha_{n-1}}\Bigr|+\sum_{i=1}^{N}\frac{|\lambda_{i,n}-\lambda_{i,n-1}|}{\lambda_n\alpha_n}+\sum_{k=1}^{M}\frac{|r_{k,n}-r_{k,n-1}|}{\lambda_n\alpha_n}+\frac{1}{\alpha_n}\Bigl|1-\frac{\lambda_{n-1}}{\lambda_n}\Bigr|+\frac{1}{\lambda_n}\Bigl|1-\frac{\alpha_{n-1}}{\alpha_n}\Bigr|+\frac{|\beta_n-\beta_{n-1}|}{\lambda_n\alpha_n}+\frac{|\gamma_n-\gamma_{n-1}|}{\lambda_n\alpha_n}\Bigr\}=0.
\]
(3.14)
Thus, utilizing Lemma 2.12, we immediately conclude that
\[
\lim_{n\to\infty}\frac{\|x_{n+1}-x_n\|}{\alpha_n}=0.
\]
So, from \(\alpha_n\to0\) it follows that
\[
\lim_{n\to\infty}\|x_{n+1}-x_n\|=0.
\]
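The conclusion of Step 2 rests on Lemma 2.12, which (in the standard form of such convergence lemmas) says that a nonnegative sequence satisfying \(s_{n+1}\le(1-a_n)s_n+a_nb_n\) with \(\sum_na_n=\infty\) and \(\limsup_nb_n\le0\) converges to \(0\). A minimal numerical sketch of this mechanism, with illustrative parameter choices of our own:

```python
# Numerical sketch (illustrative, not the paper's Lemma 2.12 verbatim):
# for s_{n+1} <= (1 - a_n) s_n + a_n b_n with sum(a_n) = infinity and
# b_n -> 0, the sequence s_n tends to 0. We iterate the worst case
# (equality) with a_n = 1/(n+2) and b_n = 1/(n+1).

s = 10.0                      # arbitrary positive starting value
for n in range(200_000):
    a = 1.0 / (n + 2)         # sum a_n diverges (harmonic-type)
    b = 1.0 / (n + 1)         # tends to 0
    s = (1 - a) * s + a * b   # worst case allowed by the inequality
print(s)                      # small: s_n -> 0
assert s < 1e-2
```

Note that neither \(a_n\to0\) alone nor \(b_n\to0\) alone suffices: the divergence of \(\sum a_n\) is what drives the homogeneous part \(\prod(1-a_k)s_0\) to zero, exactly as \(\sum\lambda_n(\tau-\gamma)=\infty\) does in (3.13).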

Step 3. We prove that \(\lim_{n\to\infty}\|x_n-u_n\|=0\), \(\lim_{n\to\infty}\|x_n-v_n\|=0\), \(\lim_{n\to\infty}\|v_n-Gv_n\|=0\), and \(\lim_{n\to\infty}\|v_n-Tv_n\|=0\).

Indeed, utilizing Lemmas 2.8 and 2.9(b), from (3.1), (3.4)-(3.5), and \(0\le\gamma<\tau\) we deduce that
\[
\begin{aligned}
\|y_n-p\|^2 &= \|\beta_nx_n+\gamma_nGv_n+\delta_nTGv_n-p\|^2 \\
&= \Bigl\|\beta_n(x_n-p)+(1-\beta_n)\Bigl(\frac{\gamma_nGv_n+\delta_nTGv_n}{1-\beta_n}-p\Bigr)\Bigr\|^2 \\
&= \beta_n\|x_n-p\|^2+(1-\beta_n)\Bigl\|\frac{\gamma_nGv_n+\delta_nTGv_n}{1-\beta_n}-p\Bigr\|^2-\beta_n(1-\beta_n)\Bigl\|\frac{\gamma_nGv_n+\delta_nTGv_n}{1-\beta_n}-x_n\Bigr\|^2 \\
&= \beta_n\|x_n-p\|^2+(1-\beta_n)\Bigl\|\frac{\gamma_n(Gv_n-p)+\delta_n(TGv_n-p)}{1-\beta_n}\Bigr\|^2-\beta_n(1-\beta_n)\Bigl\|\frac{y_n-x_n}{1-\beta_n}\Bigr\|^2 \\
&\le \beta_n\|x_n-p\|^2+(1-\beta_n)\,\frac{(\gamma_n+\delta_n)^2}{(1-\beta_n)^2}\|Gv_n-p\|^2-\frac{\beta_n}{1-\beta_n}\|y_n-x_n\|^2 \\
&= \beta_n\|x_n-p\|^2+(1-\beta_n)\|Gv_n-p\|^2-\frac{\beta_n}{1-\beta_n}\|y_n-x_n\|^2 \\
&\le \beta_n\|x_n-p\|^2+(1-\beta_n)\|v_n-p\|^2-\frac{\beta_n}{1-\beta_n}\|y_n-x_n\|^2 \\
&\le \beta_n\|x_n-p\|^2+(1-\beta_n)\|x_n-p\|^2-\frac{\beta_n}{1-\beta_n}\|y_n-x_n\|^2 \\
&= \|x_n-p\|^2-\frac{\beta_n}{1-\beta_n}\|y_n-x_n\|^2,
\end{aligned}
\]
(3.15)
and hence
\[
\begin{aligned}
\|x_{n+1}-p\|^2 &= \bigl\|\lambda_n\gamma\bigl(\alpha_nVx_n+(1-\alpha_n)Sx_n\bigr)+(I-\lambda_n\mu F)y_n-p\bigr\|^2 \\
&= \bigl\|\lambda_n\gamma\bigl(\alpha_nVx_n+(1-\alpha_n)Sx_n\bigr)-\lambda_n\mu Fp+(I-\lambda_n\mu F)y_n-(I-\lambda_n\mu F)p\bigr\|^2 \\
&= \bigl\|\lambda_n\bigl[\alpha_n(\gamma Vx_n-\mu Fp)+(1-\alpha_n)(\gamma Sx_n-\mu Fp)\bigr]+(I-\lambda_n\mu F)y_n-(I-\lambda_n\mu F)p\bigr\|^2 \\
&= \bigl\|\lambda_n\bigl[\alpha_n(\gamma Vx_n-\gamma Vp)+(1-\alpha_n)(\gamma Sx_n-\gamma Sp)\bigr]+(I-\lambda_n\mu F)y_n-(I-\lambda_n\mu F)p \\
&\qquad +\lambda_n\bigl[\alpha_n(\gamma Vp-\mu Fp)+(1-\alpha_n)(\gamma Sp-\mu Fp)\bigr]\bigr\|^2 \\
&\le \bigl\|\lambda_n\bigl[\alpha_n(\gamma Vx_n-\gamma Vp)+(1-\alpha_n)(\gamma Sx_n-\gamma Sp)\bigr]+(I-\lambda_n\mu F)y_n-(I-\lambda_n\mu F)p\bigr\|^2 \\
&\quad +2\lambda_n\alpha_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle+2\lambda_n(1-\alpha_n)\langle\gamma Sp-\mu Fp,\,x_{n+1}-p\rangle \\
&\le \bigl[\lambda_n\bigl(\alpha_n\|\gamma Vx_n-\gamma Vp\|+(1-\alpha_n)\|\gamma Sx_n-\gamma Sp\|\bigr)+\bigl\|(I-\lambda_n\mu F)y_n-(I-\lambda_n\mu F)p\bigr\|\bigr]^2 \\
&\quad +2\lambda_n\alpha_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle+2\lambda_n(1-\alpha_n)\langle\gamma Sp-\mu Fp,\,x_{n+1}-p\rangle \\
&\le \bigl[\lambda_n\bigl(\alpha_n\gamma\rho\|x_n-p\|+(1-\alpha_n)\gamma\|x_n-p\|\bigr)+(1-\lambda_n\tau)\|y_n-p\|\bigr]^2 \\
&\quad +2\lambda_n\alpha_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle+2\lambda_n(1-\alpha_n)\langle\gamma Sp-\mu Fp,\,x_{n+1}-p\rangle \\
&= \bigl[\lambda_n\bigl(1-\alpha_n(1-\rho)\bigr)\gamma\|x_n-p\|+(1-\lambda_n\tau)\|y_n-p\|\bigr]^2 \\
&\quad +2\lambda_n\alpha_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle+2\lambda_n(1-\alpha_n)\langle\gamma Sp-\mu Fp,\,x_{n+1}-p\rangle \\
&\le \bigl[\lambda_n\gamma\|x_n-p\|+(1-\lambda_n\tau)\|y_n-p\|\bigr]^2 \\
&\quad +2\lambda_n\alpha_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle+2\lambda_n(1-\alpha_n)\langle\gamma Sp-\mu Fp,\,x_{n+1}-p\rangle \\
&= \Bigl[\lambda_n\tau\,\frac{\gamma}{\tau}\|x_n-p\|+(1-\lambda_n\tau)\|y_n-p\|\Bigr]^2 \\
&\quad +2\lambda_n\alpha_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle+2\lambda_n(1-\alpha_n)\langle\gamma Sp-\mu Fp,\,x_{n+1}-p\rangle \\
&\le \lambda_n\,\frac{\gamma^2}{\tau}\|x_n-p\|^2+(1-\lambda_n\tau)\|y_n-p\|^2 \\
&\quad +2\lambda_n\alpha_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle+2\lambda_n(1-\alpha_n)\langle\gamma Sp-\mu Fp,\,x_{n+1}-p\rangle \\
&\le \lambda_n\,\frac{\gamma^2}{\tau}\|x_n-p\|^2+(1-\lambda_n\tau)\Bigl[\|x_n-p\|^2-\frac{\beta_n}{1-\beta_n}\|y_n-x_n\|^2\Bigr] \\
&\quad +2\lambda_n\alpha_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle+2\lambda_n(1-\alpha_n)\langle\gamma Sp-\mu Fp,\,x_{n+1}-p\rangle \\
&= \Bigl(1-\lambda_n\,\frac{\tau^2-\gamma^2}{\tau}\Bigr)\|x_n-p\|^2-\frac{\beta_n(1-\lambda_n\tau)}{1-\beta_n}\|y_n-x_n\|^2 \\
&\quad +2\lambda_n\alpha_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle+2\lambda_n(1-\alpha_n)\langle\gamma Sp-\mu Fp,\,x_{n+1}-p\rangle \\
&\le \|x_n-p\|^2-\frac{\beta_n(1-\lambda_n\tau)}{1-\beta_n}\|y_n-x_n\|^2 \\
&\quad +2\lambda_n\alpha_n\|\gamma Vp-\mu Fp\|\,\|x_{n+1}-p\|+2\lambda_n\|\gamma Sp-\mu Fp\|\,\|x_{n+1}-p\|,
\end{aligned}
\]
(3.16)
which, together with \(\{\beta_n\}\subset[a,b]\subset(0,1)\), immediately yields
\[
\begin{aligned}
\frac{a(1-\lambda_n\tau)}{1-a}\|y_n-x_n\|^2 &\le \frac{\beta_n(1-\lambda_n\tau)}{1-\beta_n}\|y_n-x_n\|^2 \\
&\le \|x_n-p\|^2-\|x_{n+1}-p\|^2+2\lambda_n\alpha_n\|\gamma Vp-\mu Fp\|\,\|x_{n+1}-p\| \\
&\quad +2\lambda_n\|\gamma Sp-\mu Fp\|\,\|x_{n+1}-p\| \\
&\le \|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr)+2\lambda_n\alpha_n\|\gamma Vp-\mu Fp\|\,\|x_{n+1}-p\| \\
&\quad +2\lambda_n\|\gamma Sp-\mu Fp\|\,\|x_{n+1}-p\|.
\end{aligned}
\]
Since \(\lambda_n\to0\), \(\alpha_n\to0\), \(\|x_{n+1}-x_n\|\to0\), and \(\{x_n\}\) is bounded, we have
\[
\lim_{n\to\infty}\|y_n-x_n\|=0.
\]
(3.17)
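The passage through (3.16) uses the hybrid steepest-descent contraction estimate \(\|(I-\lambda_n\mu F)y_n-(I-\lambda_n\mu F)p\|\le(1-\lambda_n\tau)\|y_n-p\|\) for the \(\kappa\)-Lipschitz, \(\eta\)-strongly monotone mapping \(F\), with the standard choice \(\tau=1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}\) from the hybrid steepest-descent literature. A one-dimensional numerical sketch (with illustrative parameters of our own; \(F(x)=2x\), so \(\kappa=\eta=2\)):

```python
import random
from math import sqrt

# Numerical sketch (illustrative parameters, not from the paper) of the
# contraction estimate used in (3.16): if F is kappa-Lipschitz and
# eta-strongly monotone, 0 < mu < 2*eta/kappa**2, and
#     tau = 1 - sqrt(1 - mu*(2*eta - mu*kappa**2)),
# then for 0 <= lam <= 1,
#     |(I - lam*mu*F)x - (I - lam*mu*F)y| <= (1 - lam*tau) * |x - y|.

def F(x):
    return 2.0 * x            # kappa = eta = 2 on the real line

kappa = eta = 2.0
mu = 0.4                      # lies in (0, 2*eta/kappa**2) = (0, 1)
tau = 1 - sqrt(1 - mu * (2 * eta - mu * kappa ** 2))   # here tau = 0.8

random.seed(2)
for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    lam = random.uniform(0.0, 1.0)
    lhs = abs((x - lam * mu * F(x)) - (y - lam * mu * F(y)))
    rhs = (1 - lam * tau) * abs(x - y)
    assert lhs <= rhs + 1e-12
print("steepest-descent contraction estimate verified")
```

In this linear example the bound is attained with equality (\(1-2\lambda\mu=1-\lambda\tau\)), so \((1-\lambda_n\tau)\) is the sharp contraction factor.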
Observe that
\[
\begin{aligned}
\bigl\|\Delta_n^kx_n-p\bigr\|^2 &= \bigl\|T^{(\Theta_k,\varphi_k)}_{r_{k,n}}(I-r_{k,n}A_k)\Delta_n^{k-1}x_n-T^{(\Theta_k,\varphi_k)}_{r_{k,n}}(I-r_{k,n}A_k)p\bigr\|^2 \\
&\le \bigl\|(I-r_{k,n}A_k)\Delta_n^{k-1}x_n-(I-r_{k,n}A_k)p\bigr\|^2 \\
&\le \bigl\|\Delta_n^{k-1}x_n-p\bigr\|^2+r_{k,n}(r_{k,n}-2\mu_k)\bigl\|A_k\Delta_n^{k-1}x_n-A_kp\bigr\|^2
\end{aligned}
\]