
General iterative algorithms for mixed equilibrium problems, variational inequalities and fixed point problems

Fixed Point Theory and Applications 2014, 2014:80

https://doi.org/10.1186/1687-1812-2014-80

Received: 15 October 2013

Accepted: 11 March 2014

Published: 26 March 2014

Abstract

In this paper, we introduce and analyze a general iterative algorithm for finding a common solution of a mixed equilibrium problem, a general system of variational inequalities and a fixed point problem of infinitely many nonexpansive mappings in a real Hilbert space. Under some mild conditions, we prove that the sequence generated by the proposed algorithm converges strongly to a common solution, which also solves a certain optimization problem. The results presented in this paper improve and extend corresponding results in the earlier and recent literature.

MSC: 49J30, 47H10, 47H15.

Keywords

mixed equilibrium problem; nonexpansive mapping; variational inequality; fixed point; strongly positive bounded linear operator; inverse strongly monotone mapping

1 Introduction

Let H be a real Hilbert space with the inner product $\langle \cdot , \cdot \rangle$ and the norm $\| \cdot \|$. Let C be a nonempty, closed and convex subset of H, and let $T : C \to C$ be a nonlinear mapping. Throughout this paper, we use $F(T)$ to denote the fixed point set of T. A mapping $T : C \to C$ is said to be nonexpansive if
$$\| T x - T y \| \le \| x - y \| , \quad \forall x , y \in C .$$
(1.1)
Let $F : C \times C \to \mathbb{R}$ be a real-valued bifunction and $\varphi : C \to \mathbb{R}$ be a real-valued function, where $\mathbb{R}$ is the set of real numbers. The so-called mixed equilibrium problem (MEP) is to find $x \in C$ such that
$$F ( x , y ) + \varphi ( y ) - \varphi ( x ) \ge 0 , \quad \forall y \in C ,$$
(1.2)
which was considered and studied in [1, 2]. The set of solutions of MEP (1.2) is denoted by $\operatorname{MEP} ( F , \varphi )$. In particular, whenever $\varphi \equiv 0$, MEP (1.2) reduces to the equilibrium problem (EP) of finding $x \in C$ such that
$$F ( x , y ) \ge 0 , \quad \forall y \in C ,$$

which was considered and studied in [3–7]. The set of solutions of the EP is denoted by $\operatorname{EP} ( F )$. Given a mapping $A : C \to H$, let $F ( x , y ) = \langle A x , y - x \rangle$ for all $x , y \in C$. Then $x \in \operatorname{EP} ( F )$ if and only if $\langle A x , y - x \rangle \ge 0$ for all $y \in C$. Numerous problems in physics, optimization and economics reduce to finding a solution of the EP.

Throughout this paper, assume that $F : C \times C \to \mathbb{R}$ is a bifunction satisfying conditions (A1)-(A4) and that $\varphi : C \to \mathbb{R}$ is a lower semicontinuous and convex function with restriction (B1) or (B2), where

(A1) $F ( x , x ) = 0$ for all $x \in C$;

(A2) F is monotone, i.e., $F ( x , y ) + F ( y , x ) \le 0$ for any $x , y \in C$;

(A3) F is upper-hemicontinuous, i.e., for each $x , y , z \in C$,
$$\limsup_{t \to 0^+} F ( t z + ( 1 - t ) x , y ) \le F ( x , y ) ;$$

(A4) $F ( x , \cdot )$ is convex and lower semicontinuous for each $x \in C$;

(B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that for any $z \in C \setminus D_x$,
$$F ( z , y_x ) + \varphi ( y_x ) - \varphi ( z ) + \frac{1}{r} \langle y_x - z , z - x \rangle < 0 ;$$

(B2) C is a bounded set.

The mappings $\{ T_n \}_{n=1}^{\infty}$ are said to be an infinite family of nonexpansive self-mappings on C if
$$\| T_n x - T_n y \| \le \| x - y \| , \quad \forall x , y \in C , \ \forall n \ge 1 ,$$
(1.3)

and $F ( T_n )$ denotes the fixed point set of $T_n$, i.e., $F ( T_n ) := \{ x \in C : T_n x = x \}$. Finding an optimal point in the intersection $\bigcap_{n=1}^{\infty} F ( T_n )$ of the fixed point sets of the mappings $T_n$, $n \ge 1$, is a problem of interest in various branches of science.

Recently, many authors considered some iterative methods for finding a common element of the set of solutions of MEP (1.2) and the set of fixed points of nonexpansive mappings; see, e.g., [2, 8, 9] and the references therein.

A mapping $A : C \to H$ is said to be
  1. (i) monotone if
    $$\langle A x - A y , x - y \rangle \ge 0 , \quad \forall x , y \in C ;$$
  2. (ii) strongly monotone if there exists a constant $\eta > 0$ such that
    $$\langle A x - A y , x - y \rangle \ge \eta \| x - y \|^2 , \quad \forall x , y \in C ;$$
    in such a case, A is said to be η-strongly monotone;
  3. (iii) inverse-strongly monotone if there exists a constant $\zeta > 0$ such that
    $$\langle A x - A y , x - y \rangle \ge \zeta \| A x - A y \|^2 , \quad \forall x , y \in C ;$$
    in such a case, A is said to be ζ-inverse-strongly monotone.

Let $A : C \to H$ be a nonlinear mapping. The classical variational inequality problem (VIP) is to find $x \in C$ such that
$$\langle A x , y - x \rangle \ge 0 , \quad \forall y \in C .$$
(1.4)

We use $\operatorname{VI} ( C , A )$ to denote the set of solutions to VIP (1.4). One can easily see that VIP (1.4) is equivalent to a fixed point problem: $u \in C$ is a solution of VIP (1.4) if and only if u is a fixed point of the mapping $P_C ( I - \lambda A )$, where $\lambda > 0$ is a constant. Variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free, moving and equilibrium problems. It is now well known that variational inequalities are equivalent to fixed point problems, an observation that can be traced back to Lions and Stampacchia [10]. Not only the existence and uniqueness of solutions but also the question of how to actually compute a solution of VIP (1.4) is an important topic in its study. Up to now, many iterative algorithms have appeared in the literature for finding approximate solutions of VIP (1.4) and its extended versions; see, e.g., [3, 11–14].
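To make this fixed point characterization concrete, the following minimal Python sketch (not part of the paper; the operator A, the set C and the step size λ are illustrative assumptions) simply iterates $u_{k+1} = P_C ( u_k - \lambda A u_k )$ for a strongly monotone affine operator and a box constraint.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the paper's setting): solve VIP (1.4)
# through its fixed point form u = P_C(u - lambda * A u), with C = [0, 1]^n a box
# and A x = M x + q an affine, strongly monotone operator (M symmetric positive definite).

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
M = M @ M.T + n * np.eye(n)           # symmetric positive definite => A strongly monotone
q = rng.standard_normal(n)

A = lambda x: M @ x + q
P_C = lambda x: np.clip(x, 0.0, 1.0)  # projection onto the box [0, 1]^n

lam = 1.0 / np.linalg.norm(M, 2)      # step size small enough for convergence here
u = np.zeros(n)
for _ in range(500):
    u = P_C(u - lam * A(u))           # u_{k+1} = P_C(I - lambda A) u_k

# A small residual of the fixed point equation indicates u (approximately) solves the VIP.
print(np.linalg.norm(u - P_C(u - lam * A(u))))
```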

Recently, Plubtieng and Punpaeng [15] and Ceng et al. [16, 17] considered some iterative methods for VIP (1.4) and its extended versions and obtained some strong convergence theorems. As a generalization of VIP (1.4), the general system of variational inequalities (GSVI) is to find $( x^* , y^* ) \in C \times C$ such that
$$\begin{cases} \langle \mu_1 B_1 y^* + x^* - y^* , x - x^* \rangle \ge 0 , & \forall x \in C , \\ \langle \mu_2 B_2 x^* + y^* - x^* , x - y^* \rangle \ge 0 , & \forall x \in C , \end{cases}$$
(1.5)

where $\mu_1$ and $\mu_2$ are two positive constants. GSVI (1.5) was considered and studied in [17], and the solution set of GSVI (1.5) is denoted by $\operatorname{GSVI} ( C , B_1 , B_2 )$. In particular, whenever $B_1 = B_2 = A$ and $x^* = y^*$, GSVI (1.5) reduces to VIP (1.4). Ceng et al. [17] transformed GSVI (1.5) into a fixed point problem in the following way.

Lemma 1.1 (see [17])

For given $\bar{x} , \bar{y} \in C$, $( \bar{x} , \bar{y} )$ is a solution of GSVI (1.5) if and only if $\bar{x}$ is a fixed point of the mapping $G : C \to C$ defined by
$$G x = P_C ( I - \mu_1 B_1 ) P_C ( I - \mu_2 B_2 ) x , \quad \forall x \in C ,$$

where $\bar{y} = P_C ( \bar{x} - \mu_2 B_2 \bar{x} )$ and $P_C$ is the projection of H onto C.

In particular, if the mapping $B_i : C \to H$ is $\zeta_i$-inverse strongly monotone for $i = 1 , 2$, then the mapping G is nonexpansive provided $\mu_i \in ( 0 , 2 \zeta_i )$ for $i = 1 , 2$. We denote by Γ the fixed point set of the mapping G.

On the other hand, Moudafi [1] introduced the viscosity approximation method for nonexpansive mappings (see also [18] for further developments in both Hilbert spaces and Banach spaces).

A mapping $f : C \to C$ is called α-contractive if there exists a constant $\alpha \in ( 0 , 1 )$ such that
$$\| f ( x ) - f ( y ) \| \le \alpha \| x - y \| , \quad \forall x , y \in C .$$
Let f be a contraction on C. Starting with an arbitrary initial $x_1 \in C$, define a sequence $\{ x_n \}$ recursively by
$$x_{n+1} = \alpha_n f ( x_n ) + ( 1 - \alpha_n ) T x_n , \quad n \ge 0 ,$$
(1.6)
where T is a nonexpansive mapping of C into itself and $\{ \alpha_n \}$ is a sequence in $( 0 , 1 )$. It is proved in [1, 18] that under appropriate conditions imposed on $\{ \alpha_n \}$ the sequence $\{ x_n \}$ generated by (1.6) converges strongly to the unique solution $x^* \in F ( T )$ of the VIP
$$\langle ( I - f ) x^* , x - x^* \rangle \ge 0 , \quad \forall x \in F ( T ) .$$
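For intuition, here is a short numerical sketch of the viscosity scheme (1.6) (a toy example of ours, not taken from [1, 18]; the concrete T, f and $\alpha_n$ are assumptions): T is the projection onto the closed unit ball of $\mathbb{R}^2$ (a nonexpansive mapping) and f is a $\tfrac{1}{2}$-contraction.

```python
import numpy as np

# Toy sketch of the viscosity approximation method (1.6); the mappings below are
# illustrative assumptions, chosen only so that T is nonexpansive and f is a contraction.

def T(x):                                # nonexpansive: projection onto the unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def f(x):                                # a 1/2-contraction
    return 0.5 * x + np.array([0.3, -0.1])

x = np.array([5.0, -4.0])
for n in range(1, 2001):
    alpha = 1.0 / (n + 1)                # alpha_n -> 0 and sum alpha_n = infinity
    x = alpha * f(x) + (1 - alpha) * T(x)

print(x)   # should approach the fixed point of T singled out by the VIP above
```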
A linear bounded operator A is said to be $\bar{\gamma}$-strongly positive on H if there exists a constant $\bar{\gamma} \in ( 0 , 1 )$ such that
$$\langle A x , x \rangle \ge \bar{\gamma} \| x \|^2 , \quad \forall x \in H .$$
(1.7)
Recently, Marino and Xu [19] introduced the following general iterative process:
$$x_{n+1} = \alpha_n \gamma f ( x_n ) + ( I - \alpha_n A ) T x_n , \quad n \ge 0 ,$$
(1.8)
where A is a strongly positive bounded linear operator on H. They proved that under appropriate conditions imposed on $\{ \alpha_n \}$ the sequence $\{ x_n \}$ generated by (1.8) converges strongly to the unique solution $x^* \in F ( T )$ of the VIP
$$\langle ( A - \gamma f ) x^* , x - x^* \rangle \ge 0 , \quad \forall x \in F ( T ) ,$$
(1.9)
which is the optimality condition for the minimization problem
$$\min_{x \in F ( T )} \frac{1}{2} \langle A x , x \rangle - h ( x ) ,$$

where h is a potential function for γf (i.e., $h' ( x ) = \gamma f ( x )$ for all $x \in H$).

In 2007, Takahashi and Takahashi [5] introduced an iterative scheme by the viscosity approximation method for finding a common element of the set of solutions of the EP and the set of fixed points of a nonexpansive mapping in a real Hilbert space. Let $S : C \to H$ be a nonexpansive mapping. Starting with arbitrary initial $x_1 \in H$, define sequences $\{ x_n \}$ and $\{ u_n \}$ recursively by
$$\begin{cases} F ( u_n , y ) + \frac{1}{r_n} \langle y - u_n , u_n - x_n \rangle \ge 0 , & \forall y \in C , \\ x_{n+1} = \alpha_n f ( x_n ) + ( 1 - \alpha_n ) S u_n , & \forall n \ge 1 . \end{cases}$$
(1.10)

They proved that under appropriate conditions imposed on $\{ \alpha_n \}$ and $\{ r_n \}$, the sequences $\{ x_n \}$ and $\{ u_n \}$ converge strongly to $x^* \in F ( S ) \cap \operatorname{EP} ( F )$, where $x^* = P_{F ( S ) \cap \operatorname{EP} ( F )} f ( x^* )$.

Subsequently, Plubtieng and Punpaeng [15] introduced a general iterative process for finding a common element of the set of solutions of the EP and the set of fixed points of a nonexpansive mapping in a Hilbert space.

Let $S : H \to H$ be a nonexpansive mapping. Starting with an arbitrary $x_1 \in H$, define sequences $\{ x_n \}$ and $\{ u_n \}$ by
$$\begin{cases} F ( u_n , y ) + \frac{1}{r_n} \langle y - u_n , u_n - x_n \rangle \ge 0 , & \forall y \in C , \\ x_{n+1} = \alpha_n \gamma f ( x_n ) + ( I - \alpha_n A ) S u_n , & \forall n \ge 1 . \end{cases}$$
(1.11)
They proved that under appropriate conditions imposed on $\{ \alpha_n \}$ and $\{ r_n \}$, the sequence $\{ x_n \}$ generated by (1.11) converges strongly to the unique solution $x^* \in F ( S ) \cap \operatorname{EP} ( F )$ of the VIP
$$\langle ( A - \gamma f ) x^* , x - x^* \rangle \ge 0 , \quad \forall x \in F ( S ) \cap \operatorname{EP} ( F ) ,$$
(1.12)
which is the optimality condition for the minimization problem
$$\min_{x \in F ( S ) \cap \operatorname{EP} ( F )} \frac{1}{2} \langle A x , x \rangle - h ( x ) ,$$

where h is a potential function for γf (i.e., $h' ( x ) = \gamma f ( x )$ for all $x \in H$).

Let $\{ T_n \}_{n=1}^{\infty}$ be an infinite family of nonexpansive self-mappings on C and $\{ \lambda_n \}_{n=1}^{\infty}$ be a sequence of nonnegative numbers in $[ 0 , 1 ]$. For any $n \ge 1$, define a mapping $W_n$ of C into itself as follows:
$$\begin{cases} U_{n , n+1} = I , \\ U_{n , n} = \lambda_n T_n U_{n , n+1} + ( 1 - \lambda_n ) I , \\ U_{n , n-1} = \lambda_{n-1} T_{n-1} U_{n , n} + ( 1 - \lambda_{n-1} ) I , \\ \quad \vdots \\ U_{n , k} = \lambda_k T_k U_{n , k+1} + ( 1 - \lambda_k ) I , \\ U_{n , k-1} = \lambda_{k-1} T_{k-1} U_{n , k} + ( 1 - \lambda_{k-1} ) I , \\ \quad \vdots \\ U_{n , 2} = \lambda_2 T_2 U_{n , 3} + ( 1 - \lambda_2 ) I , \\ W_n = U_{n , 1} = \lambda_1 T_1 U_{n , 2} + ( 1 - \lambda_1 ) I . \end{cases}$$
(1.13)

Such a mapping $W_n$ is called the W-mapping generated by $T_n , T_{n-1} , \ldots , T_1$ and $\lambda_n , \lambda_{n-1} , \ldots , \lambda_1$.
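The recursion (1.13) is easy to evaluate numerically. The sketch below (illustrative only; the maps $T_k$ and the weights $\lambda_k$ are our own assumptions) computes $W_n x$ by unwinding the recursion from $U_{n , n+1} = I$ down to $W_n = U_{n , 1}$.

```python
import numpy as np

# Evaluating the W-mapping of (1.13) for finitely many nonexpansive maps on R^2.
# Here T_k is a rotation by an angle depending on k (an isometry, hence nonexpansive);
# both T_k and lambda_k are illustrative choices, not taken from the paper.

def T(k, x):
    th = 1.0 / (k + 1)
    return np.array([np.cos(th) * x[0] - np.sin(th) * x[1],
                     np.sin(th) * x[0] + np.cos(th) * x[1]])

def W(n, x, lambdas):
    # U_{n,n+1} = I;  U_{n,k} x = lambda_k T_k(U_{n,k+1} x) + (1 - lambda_k) x;  W_n = U_{n,1}
    u = np.array(x, dtype=float)          # holds U_{n,k+1} x, starting from x itself
    for k in range(n, 0, -1):
        u = lambdas[k - 1] * T(k, u) + (1 - lambdas[k - 1]) * x
    return u

lambdas = [0.25 + 0.5 / (k + 1) for k in range(1, 11)]   # lambda_k in (0, b] with b = 0.5
print(W(10, np.array([1.0, 2.0]), lambdas))
```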

Recently, Yao et al. [6] proved the following strong convergence result.

Theorem 1.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying conditions (A1)-(A4). Let $\{ T_n \}_{n=1}^{\infty}$ be an infinite family of nonexpansive self-mappings on C such that $\Omega := \bigcap_{n=1}^{\infty} F ( T_n ) \cap \operatorname{EP} ( F ) \neq \emptyset$. Suppose that $\{ \alpha_n \}$, $\{ \beta_n \}$ and $\{ \gamma_n \}$ are three sequences in $( 0 , 1 )$ such that $\alpha_n + \beta_n + \gamma_n = 1$ and $\{ r_n \} \subset ( 0 , \infty )$. Suppose that the following conditions are satisfied:
  1. (i) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
  2. (ii) $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$;
  3. (iii) $\liminf_{n \to \infty} r_n > 0$ and $\lim_{n \to \infty} ( r_{n+1} - r_n ) = 0$.
Let f be a contraction on H, and let $x_0 \in H$ be given arbitrarily. Then the sequences $\{ x_n \}$ and $\{ y_n \}$ generated iteratively by
$$\begin{cases} F ( y_n , x ) + \frac{1}{r_n} \langle x - y_n , y_n - x_n \rangle \ge 0 , & \forall x \in C , \\ x_{n+1} = \alpha_n f ( x_n ) + \beta_n x_n + \gamma_n W_n y_n , \end{cases}$$
converge strongly to $x^* \in \Omega$, the unique solution of the minimization problem
$$\min_{x \in \Omega} \frac{1}{2} \| x \|^2 - h ( x ) ,$$

where h is a potential function for f.

Very recently, Chen [20] proved the following strong convergence theorem.

Theorem 1.2 Let C be a nonempty closed convex subset of a real Hilbert space H. Let $\{ T_n \}_{n=1}^{\infty}$ be an infinite family of nonexpansive self-mappings on C such that the common fixed point set $\Omega := \bigcap_{n=1}^{\infty} F ( T_n ) \neq \emptyset$. Let $f : C \to H$ be an α-contraction, and let $A : H \to H$ be a strongly positive bounded linear operator with a constant $\bar{\gamma} > 0$. Let γ be a constant such that $0 < \gamma \alpha < \bar{\gamma}$. For an arbitrary initial point $x_0 \in C$, one defines a sequence $\{ x_n \}$ iteratively by
$$x_{n+1} = P_C [ \alpha_n \gamma f ( x_n ) + ( I - \alpha_n A ) W_n x_n ] , \quad n \ge 0 ,$$
(1.14)

where $\{ \alpha_n \}$ is a real sequence in $[ 0 , 1 ]$. Assume that the sequence $\{ \alpha_n \}$ satisfies the following conditions:

(C1) $\lim_{n \to \infty} \alpha_n = 0$;

(C2) $\sum_{n=1}^{\infty} \alpha_n = \infty$.

Then the sequence $\{ x_n \}$ generated by (1.14) converges in norm to the unique solution $x^* \in \Omega$, which solves the VIP
$$\langle ( A - \gamma f ) x^* , x - x^* \rangle \ge 0 , \quad \forall x \in \Omega .$$
(1.15)
More recently, Rattanaseeha [7] introduced the iterative algorithm
$$\begin{cases} x_1 \in H \text{ arbitrarily given} , \\ F ( u_n , y ) + \frac{1}{r_n} \langle y - u_n , u_n - x_n \rangle \ge 0 , \quad \forall y \in C , \\ x_{n+1} = P_C [ \alpha_n \gamma f ( x_n ) + ( I - \alpha_n A ) W_n u_n ] , \quad \forall n \ge 1 , \end{cases}$$
(1.16)

and proved the following strong convergence theorem.

Theorem 1.3 Let C be a nonempty closed convex subset of a real Hilbert space H. Let F be a bifunction from $C \times C$ to $\mathbb{R}$ satisfying (A1)-(A4). Let f be an α-contraction on H with $\alpha \in ( 0 , 1 )$, and let $\{ T_n \}_{n=1}^{\infty}$ be an infinite family of nonexpansive self-mappings on C such that $\Omega := \bigcap_{n=1}^{\infty} F ( T_n ) \cap \operatorname{EP} ( F ) \neq \emptyset$. Let $A : H \to H$ be a $\bar{\gamma}$-strongly positive bounded linear operator with $0 < \gamma < \bar{\gamma} / \alpha$. Let $\lambda_1 , \lambda_2 , \ldots$ be a sequence of real numbers such that $0 < \lambda_n \le b < 1$, $n = 1 , 2 , \ldots$ . Let $W_n$ be the W-mapping of C into itself generated by (1.13). Let W be defined by $W x = \lim_{n \to \infty} W_n x$, $x \in C$. Let $\{ x_n \}$ and $\{ u_n \}$ be sequences generated by (1.16), where $\{ \alpha_n \}$ is a sequence in $( 0 , 1 )$ and $\{ r_n \}$ is a sequence in $( 0 , \infty )$ such that the following conditions hold:

(C1) $\lim_{n \to \infty} \alpha_n = 0$;

(C2) $\sum_{n=1}^{\infty} \alpha_n = \infty$;

(C3) $\lim_{n \to \infty} r_n = r > 0$.

Then both $\{ x_n \}$ and $\{ u_n \}$ converge strongly to $x^* \in \Omega$, which is the unique solution of the VIP
$$\langle ( A - \gamma f ) x^* , x - x^* \rangle \ge 0 , \quad \forall x \in \Omega .$$

Equivalently, $x^* = P_{\Omega} ( I - A + \gamma f ) x^*$.

Let $F : C \times C \to \mathbb{R}$ be a real-valued bifunction, $\varphi : C \to \mathbb{R}$ be a real-valued function, A be a strongly positive bounded linear operator on H, and f be an l-Lipschitz continuous mapping on H. Motivated by the above facts, in this paper we propose and analyze a general iterative algorithm:
$$\begin{cases} x_1 \in H \text{ arbitrarily given} , \\ F ( u_n , y ) + \varphi ( y ) - \varphi ( u_n ) + \frac{1}{r_n} \langle y - u_n , u_n - x_n \rangle \ge 0 , \quad \forall y \in C , \\ x_{n+1} = P_C [ \alpha_n \gamma f ( x_n ) + \beta_n x_n + ( ( 1 - \beta_n ) I - \alpha_n A ) W_n G u_n ] , \quad \forall n \ge 1 , \end{cases}$$
(1.17)
where $\{ \alpha_n \} , \{ \beta_n \} \subset ( 0 , 1 )$, $\{ r_n \} \subset ( 0 , \infty )$, $G = P_C ( I - \mu_1 B_1 ) P_C ( I - \mu_2 B_2 )$, and $W_n$ is the W-mapping defined by (1.13), for finding a common solution of MEP (1.2), GSVI (1.5) and the fixed point problem of an infinite family of nonexpansive self-mappings $\{ T_n \}_{n=1}^{\infty}$ on C. It is proven that under mild conditions imposed on $\{ \alpha_n \}$, $\{ \beta_n \}$ and $\{ r_n \}$, the sequences $\{ x_n \}$ and $\{ u_n \}$ generated by (1.17) converge strongly to $x^* \in \Omega := \bigcap_{n=1}^{\infty} F ( T_n ) \cap \operatorname{MEP} ( F , \varphi ) \cap \Gamma$, which is the unique solution of the VIP
$$\langle ( A - \gamma f ) x^* , x - x^* \rangle \ge 0 , \quad \forall x \in \Omega ,$$
(1.18)
or, equivalently, the unique solution of the minimization problem
$$\min_{x \in \Omega} \frac{1}{2} \langle A x , x \rangle - h ( x ) ,$$

where h is a potential function for γf (i.e., $h' ( x ) = \gamma f ( x )$ for all $x \in H$). Whenever $B_1 = B_2 = 0$ and $\varphi \equiv 0$, our problem of finding $x^* \in \bigcap_{n=1}^{\infty} F ( T_n ) \cap \operatorname{MEP} ( F , \varphi ) \cap \Gamma$ reduces to the problem of finding $x^* \in \bigcap_{n=1}^{\infty} F ( T_n ) \cap \operatorname{EP} ( F )$ in Theorem 1.3. The results presented in this paper improve and extend the corresponding theorems in [7].
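To fix ideas, the following toy Python sketch mimics the outer iteration (1.17) in a drastically simplified finite-dimensional setting; every concrete choice below (C the closed unit ball, $F \equiv 0$ and $\varphi \equiv 0$ so that the MEP step reduces to $u_n = P_C x_n$, $B_1 = B_2 = 0.1 I$, rotations $T_k$, $A = 0.9 I$, $\gamma = 1$ and a 0.2-contraction f) is an illustrative assumption of ours and not part of the paper.

```python
import numpy as np

# Toy sketch of iteration (1.17); all concrete operators are illustrative assumptions.

def P_C(x):                                     # projection onto the closed unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def G(x, mu1=0.5, mu2=0.5, beta=0.1):           # G = P_C(I - mu1 B_1) P_C(I - mu2 B_2), B_i = beta * I
    return P_C((1 - mu1 * beta) * P_C((1 - mu2 * beta) * x))

def T(k, x):                                    # T_k: rotation of the ball by angle 1/(k+1)
    th = 1.0 / (k + 1)
    return np.array([np.cos(th) * x[0] - np.sin(th) * x[1],
                     np.sin(th) * x[0] + np.cos(th) * x[1]])

def W(n, x):                                    # W-mapping (1.13) with lambda_k = 0.5
    u = np.array(x, dtype=float)
    for k in range(n, 0, -1):
        u = 0.5 * T(k, u) + 0.5 * x
    return u

gamma, A_coef = 1.0, 0.9                        # A = 0.9 I, so gamma * l = 0.2 < 0.9 = gamma_bar
f = lambda x: 0.2 * x + np.array([0.05, -0.05])

x = np.array([0.9, -0.3])
for n in range(1, 301):
    alpha, beta_n = 1.0 / (n + 1), 0.5
    u = P_C(x)                                  # MEP step: with F = 0, phi = 0 it is just P_C
    v = W(n, G(u))
    x = P_C(alpha * gamma * f(x) + beta_n * x + ((1 - beta_n) - alpha * A_coef) * v)

print(x)   # in this toy setting Omega = {0}, so x is expected to approach the origin
```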

2 Preliminaries

Let H be a real Hilbert space with the inner product $\langle \cdot , \cdot \rangle$ and the norm $\| \cdot \|$, and let C be a closed convex subset of H. We indicate weak convergence and strong convergence by the notations ⇀ and →, respectively. A mapping $f : C \to H$ is called l-Lipschitz continuous if there exists a constant $l \ge 0$ such that
$$\| f ( x ) - f ( y ) \| \le l \| x - y \| , \quad \forall x , y \in C .$$
In particular, if $l = 1$ then f is called a nonexpansive mapping; if $l \in [ 0 , 1 )$ then f is called a contraction. Recall that a mapping $T : H \to H$ is said to be a firmly nonexpansive mapping if
$$\| T x - T y \|^2 \le \langle T x - T y , x - y \rangle , \quad \forall x , y \in H .$$
The metric (or nearest point) projection from H onto C is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property
$$\| x - P_C x \| = \inf_{y \in C} \| x - y \| =: d ( x , C ) .$$

Some important properties of projections are gathered in the following proposition.

Proposition 2.1 For given $x \in H$ and $z \in C$:
  1. (i) $z = P_C x \Leftrightarrow \langle x - z , y - z \rangle \le 0 , \ \forall y \in C$;
  2. (ii) $z = P_C x \Leftrightarrow \| x - z \|^2 \le \| x - y \|^2 - \| y - z \|^2 , \ \forall y \in C$;
  3. (iii) $\langle P_C x - P_C y , x - y \rangle \ge \| P_C x - P_C y \|^2 , \ \forall y \in H$.

Consequently, $P_C$ is a firmly nonexpansive mapping of H onto C and hence nonexpansive and monotone.
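Properties (i) and (iii), which are used repeatedly below, are easy to verify numerically for a concrete projection. The following snippet (illustrative only, not from the paper) checks them for the coordinate-wise projection onto a box.

```python
import numpy as np

# Numerical check of Proposition 2.1(i) and (iii) for P_C = projection onto C = [0, 1]^4:
# (i)   <x - P_C x, y - P_C x> <= 0 for every y in C;
# (iii) <P_C x - P_C w, x - w> >= ||P_C x - P_C w||^2 for every x, w in H.

rng = np.random.default_rng(1)
P_C = lambda x: np.clip(x, 0.0, 1.0)

for _ in range(1000):
    x, w = 3 * rng.standard_normal(4), 3 * rng.standard_normal(4)
    y = rng.uniform(0.0, 1.0, 4)                       # a point of C
    z = P_C(x)
    assert np.dot(x - z, y - z) <= 1e-12               # property (i)
    assert (np.dot(P_C(x) - P_C(w), x - w)
            >= np.linalg.norm(P_C(x) - P_C(w)) ** 2 - 1e-12)   # property (iii)

print("Proposition 2.1(i) and (iii) verified on random samples")
```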

Let $A : C \to H$ be an α-inverse-strongly monotone mapping. Then it is obvious that A is monotone and $\frac{1}{\alpha}$-Lipschitz continuous. We also have that for all $x , y \in C$ and $\lambda > 0$,
$$\begin{aligned} \| ( I - \lambda A ) x - ( I - \lambda A ) y \|^2 & = \| x - y - \lambda ( A x - A y ) \|^2 \\ & = \| x - y \|^2 - 2 \lambda \langle A x - A y , x - y \rangle + \lambda^2 \| A x - A y \|^2 \\ & \le \| x - y \|^2 + \lambda ( \lambda - 2 \alpha ) \| A x - A y \|^2 . \end{aligned}$$
(2.1)

So, whenever $\lambda \le 2 \alpha$, $I - \lambda A$ is a nonexpansive mapping.

Given a positive number $r > 0$, let $T_r^{( F , \varphi )} : H \to C$ be the solution set of the auxiliary mixed equilibrium problem, that is, for each $x \in H$,
$$T_r^{( F , \varphi )} x := \Big\{ y \in C : F ( y , z ) + \varphi ( z ) - \varphi ( y ) + \frac{1}{r} \langle y - x , z - y \rangle \ge 0 , \ \forall z \in C \Big\} .$$

Proposition 2.2 (see [2, 8])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying conditions (A1)-(A4), and let $\varphi : C \to \mathbb{R}$ be a lower semicontinuous and convex function with restriction (B1) or (B2). Then the following hold:
  1. (a) for each $x \in H$, $T_r^{( F , \varphi )} x \neq \emptyset$;
  2. (b) $T_r^{( F , \varphi )}$ is single-valued;
  3. (c) $T_r^{( F , \varphi )}$ is firmly nonexpansive, i.e., for any $x , y \in H$,
    $$\| T_r^{( F , \varphi )} x - T_r^{( F , \varphi )} y \|^2 \le \langle T_r^{( F , \varphi )} x - T_r^{( F , \varphi )} y , x - y \rangle ;$$
  4. (d) for all $s , t > 0$ and $x \in H$,
    $$\| T_s^{( F , \varphi )} x - T_t^{( F , \varphi )} x \|^2 \le \frac{s - t}{s} \langle T_s^{( F , \varphi )} x - x , T_s^{( F , \varphi )} x - T_t^{( F , \varphi )} x \rangle ;$$
  5. (e) $F ( T_r^{( F , \varphi )} ) = \operatorname{MEP} ( F , \varphi )$;
  6. (f) $\operatorname{MEP} ( F , \varphi )$ is closed and convex.
Remark 2.1 It is easy to see from conclusions (c) and (d) in Proposition 2.2 that
$$\| T_r^{( F , \varphi )} x - T_r^{( F , \varphi )} y \| \le \| x - y \| , \quad \forall r > 0 , \forall x , y \in H ,$$
and
$$\| T_s^{( F , \varphi )} x - T_t^{( F , \varphi )} x \| \le \frac{| s - t |}{s} \| T_s^{( F , \varphi )} x - x \| , \quad \forall s , t > 0 , \forall x \in H .$$

We need some facts and tools in a real Hilbert space H which are listed as lemmas below.

Lemma 2.1 Let X be a real inner product space. Then the following inequality holds:
$$\| x + y \|^2 \le \| x \|^2 + 2 \langle y , x + y \rangle , \quad \forall x , y \in X .$$
Lemma 2.2 Let H be a real Hilbert space. Then the following hold:
  1. (a) $\| x - y \|^2 = \| x \|^2 - \| y \|^2 - 2 \langle x - y , y \rangle$ for all $x , y \in H$;
  2. (b) $\| \lambda x + \mu y \|^2 = \lambda \| x \|^2 + \mu \| y \|^2 - \lambda \mu \| x - y \|^2$ for all $x , y \in H$ and $\lambda , \mu \in [ 0 , 1 ]$ with $\lambda + \mu = 1$;
  3. (c) if $\{ x_n \}$ is a sequence in H such that $x_n \rightharpoonup x$, then
    $$\limsup_{n \to \infty} \| x_n - y \|^2 = \limsup_{n \to \infty} \| x_n - x \|^2 + \| x - y \|^2 , \quad \forall y \in H .$$

We have the following crucial lemmas concerning the W-mappings defined by (1.13).

Lemma 2.3 (see [[21], Lemma 3.2])

Let $\{ T_n \}_{n=1}^{\infty}$ be a sequence of nonexpansive self-mappings on C such that $\bigcap_{n=1}^{\infty} F ( T_n ) \neq \emptyset$, and let $\{ \lambda_n \}$ be a sequence in $( 0 , b ]$ for some $b \in ( 0 , 1 )$. Then, for every $x \in C$ and $k \ge 1$, the limit $\lim_{n \to \infty} U_{n , k} x$ exists, where $U_{n , k}$ is defined by (1.13).

Remark 2.2 (see [[6], Remark 3.1])

It follows from Lemma 2.3 that if D is a nonempty bounded subset of C, then for every $\epsilon > 0$ there exists $n_0 \ge k$ such that for all $n > n_0$,
$$\sup_{x \in D} \| U_{n , k} x - U_k x \| \le \epsilon ,$$
where $U_k x := \lim_{n \to \infty} U_{n , k} x$ (which exists by Lemma 2.3).

Remark 2.3 (see [[6], Remark 3.2])

Utilizing Lemma 2.3, we define a mapping $W : C \to C$ as follows:
$$W x = \lim_{n \to \infty} W_n x = \lim_{n \to \infty} U_{n , 1} x , \quad \forall x \in C .$$
Such a W is called the W-mapping generated by $T_1 , T_2 , \ldots$ and $\lambda_1 , \lambda_2 , \ldots$ . Since $W_n$ is nonexpansive, $W : C \to C$ is also nonexpansive. Indeed, observe that for each $x , y \in C$,
$$\| W x - W y \| = \lim_{n \to \infty} \| W_n x - W_n y \| \le \| x - y \| .$$
If $\{ x_n \}$ is a bounded sequence in C, then we put $D = \{ x_n : n \ge 1 \}$. Hence, it is clear from Remark 2.2 that for an arbitrary $\epsilon > 0$ there exists $N_0 \ge 1$ such that for all $n > N_0$,
$$\| W_n x_n - W x_n \| = \| U_{n , 1} x_n - U_1 x_n \| \le \sup_{x \in D} \| U_{n , 1} x - U_1 x \| \le \epsilon .$$
This implies that
$$\lim_{n \to \infty} \| W_n x_n - W x_n \| = 0 .$$

Lemma 2.4 (see [[21], Lemma 3.3])

Let $\{ T_n \}_{n=1}^{\infty}$ be a sequence of nonexpansive self-mappings on C such that $\bigcap_{n=1}^{\infty} F ( T_n ) \neq \emptyset$, and let $\{ \lambda_n \}$ be a sequence in $( 0 , b ]$ for some $b \in ( 0 , 1 )$. Then $F ( W ) = \bigcap_{n=1}^{\infty} F ( T_n )$.

Lemma 2.5 (see [[22], demiclosedness principle])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let T be a nonexpansive self-mapping on C with $F ( T ) \neq \emptyset$. Then $I - T$ is demiclosed. That is, whenever $\{ x_n \}$ is a sequence in C weakly converging to some $x \in C$ and the sequence $\{ ( I - T ) x_n \}$ strongly converges to some y, it follows that $( I - T ) x = y$. Here I is the identity operator of H.

Lemma 2.6 Let $A : C \to H$ be a monotone mapping. In the context of the variational inequality problem, the characterization of the projection (see Proposition 2.1(i)) implies
$$u \in \operatorname{VI} ( C , A ) \quad \Leftrightarrow \quad u = P_C ( u - \lambda A u ) , \quad \forall \lambda > 0 .$$

Lemma 2.7 (see [19])

Let A be a $\bar{\gamma}$-strongly positive bounded linear operator on H and assume $0 < \rho \le \| A \|^{-1}$. Then $\| I - \rho A \| \le 1 - \rho \bar{\gamma}$.

Lemma 2.8 (see [23])

Assume that $\{ a_n \}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le ( 1 - \gamma_n ) a_n + \sigma_n \gamma_n , \quad \forall n \ge 1 ,$$
where $\{ \gamma_n \}$ is a sequence in $[ 0 , 1 ]$ and $\{ \sigma_n \}$ is a real sequence such that
  1. (i) $\sum_{n=1}^{\infty} \gamma_n = \infty$;
  2. (ii) $\limsup_{n \to \infty} \sigma_n \le 0$ or $\sum_{n=1}^{\infty} | \sigma_n \gamma_n | < \infty$.

Then $\lim_{n \to \infty} a_n = 0$.
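As a quick sanity check of Lemma 2.8 (a toy illustration of ours, not from the paper), take $\gamma_n = 1 / ( n + 1 )$, so that $\sum_n \gamma_n = \infty$, and $\sigma_n = n^{-1/2} \to 0$; even when equality holds in the recursion, the sequence is driven to 0.

```python
# Worst case of Lemma 2.8: a_{n+1} = (1 - gamma_n) a_n + sigma_n gamma_n (equality),
# with gamma_n = 1/(n+1) and sigma_n = 1/sqrt(n); the iterates still tend to 0.
a = 10.0
for n in range(1, 200001):
    gamma, sigma = 1.0 / (n + 1), n ** -0.5
    a = (1 - gamma) * a + sigma * gamma
print(a)   # small, and it keeps decreasing as the number of iterations grows
```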

3 Main results

We will introduce and analyze a general iterative algorithm for finding a common solution of MEP (1.2), GSVI (1.5) and the fixed point problem of infinitely many nonexpansive mappings in a real Hilbert space. Under appropriate conditions imposed on the parameter sequences, we will prove strong convergence of the proposed algorithm.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let F be a bifunction from $C \times C$ to $\mathbb{R}$ satisfying conditions (A1)-(A4), and let $\varphi : C \to \mathbb{R}$ be a lower semicontinuous and convex function with restriction (B1) or (B2). Let the mapping $B_i : C \to H$ be $\zeta_i$-inverse strongly monotone for $i = 1 , 2$. Let $\{ T_n \}_{n=1}^{\infty}$ be a sequence of nonexpansive self-mappings on C, and let $\{ \lambda_n \}$ be a sequence in $( 0 , b ]$ for some $b \in ( 0 , 1 )$. Let A be a $\bar{\gamma}$-strongly positive bounded linear operator on H and $f : H \to H$ be an l-Lipschitz continuous mapping with $\gamma l < \bar{\gamma}$. Let $W_n$ be the W-mapping defined by (1.13). Assume that $\Omega := \bigcap_{n=1}^{\infty} F ( T_n ) \cap \operatorname{MEP} ( F , \varphi ) \cap \Gamma \neq \emptyset$, where Γ is the fixed point set of the mapping $G = P_C ( I - \mu_1 B_1 ) P_C ( I - \mu_2 B_2 )$ with $\mu_i \in ( 0 , 2 \zeta_i )$ for $i = 1 , 2$. Let $\{ \alpha_n \}$ and $\{ \beta_n \}$ be two sequences in $( 0 , 1 )$ and $\{ r_n \}$ be a sequence in $( 0 , \infty )$ such that:
  1. (i) $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$ and $\sum_{n=1}^{\infty} | \alpha_{n+1} - \alpha_n | < \infty$;
  2. (ii) $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$ and $\sum_{n=1}^{\infty} | \beta_{n+1} - \beta_n | < \infty$;
  3. (iii) $\liminf_{n \to \infty} r_n > 0$ and $\sum_{n=1}^{\infty} | r_{n+1} - r_n | < \infty$.
Given $x_1 \in H$ arbitrarily, the sequences $\{ x_n \}$ and $\{ u_n \}$ generated iteratively by
$$\begin{cases} F ( u_n , y ) + \varphi ( y ) - \varphi ( u_n ) + \frac{1}{r_n} \langle y - u_n , u_n - x_n \rangle \ge 0 , \quad \forall y \in C , \\ x_{n+1} = P_C [ \alpha_n \gamma f ( x_n ) + \beta_n x_n + ( ( 1 - \beta_n ) I - \alpha_n A ) W_n G u_n ] , \quad \forall n \ge 1 , \end{cases}$$
(3.1)
converge strongly to $x^* \in \Omega$, which is the unique solution of the VIP
$$\langle ( A - \gamma f ) x^* , x - x^* \rangle \ge 0 , \quad \forall x \in \Omega ,$$
(3.2)
or, equivalently, the unique solution of the minimization problem
$$\min_{x \in \Omega} \frac{1}{2} \langle A x , x \rangle - h ( x ) ,$$
(3.3)

where h is a potential function for γf.

Proof Taking into account that $\lim_{n \to \infty} \alpha_n = 0$ and $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$, we may assume, without loss of generality, that $\alpha_n \le ( 1 - \beta_n ) \| A \|^{-1}$ for all $n \ge 1$. Since A is a $\bar{\gamma}$-strongly positive bounded linear operator on H, we know that
$$\| A \| = \sup \{ \langle A u , u \rangle : u \in H , \| u \| = 1 \} \ge \bar{\gamma} , \qquad \| I - A \| = \sup \{ \langle ( I - A ) u , u \rangle : u \in H , \| u \| = 1 \} \le 1 - \bar{\gamma} ,$$
and for $u \in H$ with $\| u \| = 1$,
$$\langle ( ( 1 - \beta_n ) I - \alpha_n A ) u , u \rangle = 1 - \beta_n - \alpha_n \langle A u , u \rangle \ge 1 - \beta_n - \alpha_n \| A \| \ge 0 ,$$
that is, $( 1 - \beta_n ) I - \alpha_n A$ is positive. It follows that
$$\begin{aligned} \| ( 1 - \beta_n ) I - \alpha_n A \| & = \sup \{ \langle ( ( 1 - \beta_n ) I - \alpha_n A ) u , u \rangle : u \in H , \| u \| = 1 \} \\ & = \sup \{ 1 - \beta_n - \alpha_n \langle A u , u \rangle : u \in H , \| u \| = 1 \} \le 1 - \beta_n - \alpha_n \bar{\gamma} . \end{aligned}$$
We observe that $P_{\Omega} ( \gamma f + ( I - A ) )$ is a contraction. Indeed, for all $x , y \in H$, we have
$$\begin{aligned} \| P_{\Omega} ( \gamma f + ( I - A ) ) x - P_{\Omega} ( \gamma f + ( I - A ) ) y \| & \le \| ( \gamma f + ( I - A ) ) x - ( \gamma f + ( I - A ) ) y \| \\ & \le \gamma \| f ( x ) - f ( y ) \| + \| I - A \| \| x - y \| \\ & \le \gamma l \| x - y \| + ( 1 - \bar{\gamma} ) \| x - y \| = \big( 1 - ( \bar{\gamma} - \gamma l ) \big) \| x - y \| . \end{aligned}$$

By the Banach contraction principle, we deduce that $P_{\Omega} ( \gamma f + ( I - A ) )$ has a unique fixed point $x^* \in H$. That is, $x^* = P_{\Omega} ( \gamma f + ( I - A ) ) x^*$. In addition, by Proposition 2.2 we have $u_n = T_{r_n}^{( F , \varphi )} x_n$ for all $n \ge 1$.

We divide the rest of the proof into several steps.

Step 1. We show that $\{ x_n \}$ is bounded. Indeed, take $p \in \Omega$ arbitrarily. Since $p = T_{r_n}^{( F , \varphi )} p$, $p = G p = P_C ( I - \mu_1 B_1 ) P_C ( I - \mu_2 B_2 ) p$ and $B_i$ is $\zeta_i$-inverse-strongly monotone for $i = 1 , 2$, by Proposition 2.2(c) we deduce from $0 \le \mu_i \le 2 \zeta_i$, $i = 1 , 2$, that for any $n \ge 1$,
$$\begin{aligned} \| G u_n - p \|^2 & = \| P_C ( I - \mu_1 B_1 ) P_C ( I - \mu_2 B_2 ) u_n - P_C ( I - \mu_1 B_1 ) P_C ( I - \mu_2 B_2 ) p \|^2 \\ & \le \| ( I - \mu_1 B_1 ) P_C ( I - \mu_2 B_2 ) u_n - ( I - \mu_1 B_1 ) P_C ( I - \mu_2 B_2 ) p \|^2 \\ & = \| [ P_C ( I - \mu_2 B_2 ) u_n - P_C ( I - \mu_2 B_2 ) p ] - \mu_1 [ B_1 P_C ( I - \mu_2 B_2 ) u_n - B_1 P_C ( I - \mu_2 B_2 ) p ] \|^2 \\ & \le \| P_C ( I - \mu_2 B_2 ) u_n - P_C ( I - \mu_2 B_2 ) p \|^2 + \mu_1 ( \mu_1 - 2 \zeta_1 ) \| B_1 P_C ( I - \mu_2 B_2 ) u_n - B_1 P_C ( I - \mu_2 B_2 ) p \|^2 \\ & \le \| P_C ( I - \mu_2 B_2 ) u_n - P_C ( I - \mu_2 B_2 ) p \|^2 \le \| ( I - \mu_2 B_2 ) u_n - ( I - \mu_2 B_2 ) p \|^2 \\ & = \| ( u_n - p ) - \mu_2 ( B_2 u_n - B_2 p ) \|^2 \le \| u_n - p \|^2 + \mu_2 ( \mu_2 - 2 \zeta_2 ) \| B_2 u_n - B_2 p \|^2 \\ & \le \| u_n - p \|^2 = \| T_{r_n}^{( F , \varphi )} x_n - T_{r_n}^{( F , \varphi )} p \|^2 \le \| x_n - p \|^2 . \end{aligned}$$
(3.4)
(This shows that G is nonexpansive.) Thus, from (3.1), (3.4) and $W_n p = p$, we get
$$\begin{aligned} \| x_{n+1} - p \| & = \| P_C [ \alpha_n \gamma f ( x_n ) + \beta_n x_n + ( ( 1 - \beta_n ) I - \alpha_n A ) W_n G u_n ] - p \| \\ & \le \| \alpha_n \gamma f ( x_n ) + \beta_n x_n + ( ( 1 - \beta_n ) I - \alpha_n A ) W_n G u_n - p \| \\ & = \| \alpha_n ( \gamma f ( x_n ) - A p ) + \beta_n ( x_n - p ) + ( ( 1 - \beta_n ) I - \alpha_n A ) ( W_n G u_n - p ) \| \\ & \le \| ( 1 - \beta_n ) I - \alpha_n A \| \| W_n G u_n - p \| + \beta_n \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A p \| \\ & \le ( 1 - \beta_n - \alpha_n \bar{\gamma} ) \| G u_n - p \| + \beta_n \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A p \| \\ & \le ( 1 - \alpha_n \bar{\gamma} ) \| x_n - p \| + \alpha_n \big( \gamma \| f ( x_n ) - f ( p ) \| + \| \gamma f ( p ) - A p \| \big) \\ & \le ( 1 - \alpha_n \bar{\gamma} ) \| x_n - p \| + \alpha_n \big( \gamma l \| x_n - p \| + \| \gamma f ( p ) - A p \| \big) \\ & = \big[ 1 - ( \bar{\gamma} - \gamma l ) \alpha_n \big] \| x_n - p \| + \alpha_n \| \gamma f ( p ) - A p \| \\ & = \big[ 1 - ( \bar{\gamma} - \gamma l ) \alpha_n \big] \| x_n - p \| + ( \bar{\gamma} - \gamma l ) \alpha_n \frac{\| \gamma f ( p ) - A p \|}{\bar{\gamma} - \gamma l} \\ & \le \max \Big\{ \| x_n - p \| , \frac{\| \gamma f ( p ) - A p \|}{\bar{\gamma} - \gamma l} \Big\} . \end{aligned}$$
By induction, we get
$$\| x_n - p \| \le \max \Big\{ \| x_1 - p \| , \frac{\| \gamma f ( p ) - A p \|}{\bar{\gamma} - \gamma l} \Big\} .$$

Therefore, $\{ x_n \}$ is bounded and so are the sequences $\{ u_n \}$, $\{ f ( x_n ) \}$ and $\{ W_n G u_n \}$.

Step 2. We show that $\| x_{n+1} - x_n \| \to 0$ as $n \to \infty$. Indeed, we write $y_n = \alpha_n \gamma f ( x_n ) + \beta_n x_n + ( ( 1 - \beta_n ) I - \alpha_n A ) W_n G u_n$. Then $x_{n+1} = P_C y_n$ for each $n \ge 1$. Define $w_n$ by $y_n = \beta_n x_n + ( 1 - \beta_n ) w_n$ for each $n \ge 1$. Then from the definition of $w_n$, we obtain
$$\begin{aligned} w_{n+1} - w_n & = \frac{y_{n+1} - \beta_{n+1} x_{n+1}}{1 - \beta_{n+1}} - \frac{y_n - \beta_n x_n}{1 - \beta_n} \\ & = \frac{\alpha_{n+1} \gamma f ( x_{n+1} ) + ( ( 1 - \beta_{n+1} ) I - \alpha_{n+1} A ) W_{n+1} G u_{n+1}}{1 - \beta_{n+1}} - \frac{\alpha_n \gamma f ( x_n ) + ( ( 1 - \beta_n ) I - \alpha_n A ) W_n G u_n}{1 - \beta_n} \\ & = \gamma \Big( \frac{\alpha_{n+1}}{1 - \beta_{n+1}} f ( x_{n+1} ) - \frac{\alpha_n}{1 - \beta_n} f ( x_n ) \Big) + \Big( I - \frac{\alpha_{n+1} A}{1 - \beta_{n+1}} \Big) W_{n+1} G u_{n+1} - \Big( I - \frac{\alpha_n A}{1 - \beta_n} \Big) W_n G u_n \\ & = \gamma \Big[ \Big( \frac{\alpha_{n+1}}{1 - \beta_{n+1}} - \frac{\alpha_n}{1 - \beta_n} \Big) f ( x_{n+1} ) + \frac{\alpha_n}{1 - \beta_n} \big( f ( x_{n+1} ) - f ( x_n ) \big) \Big] \\ & \quad + \Big( \Big( I - \frac{\alpha_{n+1} A}{1 - \beta_{n+1}} \Big) - \Big( I - \frac{\alpha_n A}{1 - \beta_n} \Big) \Big) W_{n+1} G u_{n+1} + \Big( I - \frac{\alpha_n A}{1 - \beta_n} \Big) ( W_{n+1} G u_{n+1} - W_n G u_n ) \\ & = \Big( \frac{\alpha_{n+1}}{1 - \beta_{n+1}} - \frac{\alpha_n}{1 - \beta_n} \Big) \big( \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \big) + \frac{\alpha_n}{1 - \beta_n} \gamma \big( f ( x_{n+1} ) - f ( x_n ) \big) \\ & \quad + \Big( I - \frac{\alpha_n}{1 - \beta_n} A \Big) ( W_{n+1} G u_{n+1} - W_n G u_n ) . \end{aligned}$$
It follows from Lemma 2.7 that
$$\begin{aligned} \| w_{n+1} - w_n \| & \le \Big| \frac{\alpha_{n+1}}{1 - \beta_{n+1}} - \frac{\alpha_n}{1 - \beta_n} \Big| \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| + \frac{\alpha_n}{1 - \beta_n} \gamma \| f ( x_{n+1} ) - f ( x_n ) \| \\ & \quad + \Big\| \Big( I - \frac{\alpha_n}{1 - \beta_n} A \Big) ( W_{n+1} G u_{n+1} - W_n G u_n ) \Big\| \\ & \le \Big| \frac{\alpha_{n+1}}{1 - \beta_{n+1}} - \frac{\alpha_n}{1 - \beta_n} \Big| \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| + \frac{\alpha_n}{1 - \beta_n} \gamma l \| x_{n+1} - x_n \| \\ & \quad + \Big\| I - \frac{\alpha_n}{1 - \beta_n} A \Big\| \| W_{n+1} G u_{n+1} - W_n G u_n \| \\ & \le \Big| \frac{( \alpha_{n+1} - \alpha_n ) ( 1 - \beta_n ) + \alpha_n ( \beta_{n+1} - \beta_n )}{( 1 - \beta_n ) ( 1 - \beta_{n+1} )} \Big| \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| + \frac{\alpha_n}{1 - \beta_n} \gamma l \| x_{n+1} - x_n \| \\ & \quad + \Big\| I - \frac{\alpha_n}{1 - \beta_n} A \Big\| \big( \| W_{n+1} G u_{n+1} - W_{n+1} G u_n \| + \| W_{n+1} G u_n - W_n G u_n \| \big) \\ & \le \frac{| \alpha_{n+1} - \alpha_n | + | \beta_{n+1} - \beta_n |}{( 1 - d ) ( 1 - \beta_n )} \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| + \frac{\alpha_n}{1 - \beta_n} \gamma l \| x_{n+1} - x_n \| \\ & \quad + \Big( 1 - \frac{\alpha_n}{1 - \beta_n} \bar{\gamma} \Big) \big( \| G u_{n+1} - G u_n \| + \| W_{n+1} G u_n - W_n G u_n \| \big) \\ & \le \frac{| \alpha_{n+1} - \alpha_n | + | \beta_{n+1} - \beta_n |}{( 1 - d ) ( 1 - \beta_n )} \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| + \frac{\alpha_n}{1 - \beta_n} \gamma l \| x_{n+1} - x_n \| \\ & \quad + \Big( 1 - \frac{\alpha_n}{1 - \beta_n} \bar{\gamma} \Big) \big( \| u_{n+1} - u_n \| + \| W_{n+1} G u_n - W_n G u_n \| \big) . \end{aligned}$$
(3.5)
From (1.13), since $W_n$, $T_n$ and $U_{n , i}$ are all nonexpansive, we have
$$\begin{aligned} \| W_{n+1} G u_n - W_n G u_n \| & = \| \lambda_1 T_1 U_{n+1 , 2} G u_n - \lambda_1 T_1 U_{n , 2} G u_n \| \le \lambda_1 \| U_{n+1 , 2} G u_n - U_{n , 2} G u_n \| \\ & = \lambda_1 \| \lambda_2 T_2 U_{n+1 , 3} G u_n - \lambda_2 T_2 U_{n , 3} G u_n \| \le \lambda_1 \lambda_2 \| U_{n+1 , 3} G u_n - U_{n , 3} G u_n \| \\ & \le \cdots \le \lambda_1 \lambda_2 \cdots \lambda_n \| U_{n+1 , n+1} G u_n - U_{n , n+1} G u_n \| \le M \prod_{i=1}^{n} \lambda_i , \end{aligned}$$
(3.6)
where $\sup_{n \ge 1} \{ \| U_{n+1 , n+1} G u_n \| + \| U_{n , n+1} G u_n \| \} \le M$ for some $M > 0$. Furthermore, we estimate $\| u_{n+1} - u_n \|$. Taking into account that $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$ and $\liminf_{n \to \infty} r_n > 0$, we may assume, without loss of generality, that $\{ \beta_n \} \subset [ c , d ] \subset ( 0 , 1 )$ and $\{ r_n \} \subset [ \epsilon , \infty )$ for some $\epsilon > 0$. Utilizing Remark 2.1, we get
$$\begin{aligned} \| u_{n+1} - u_n \| & = \| T_{r_{n+1}}^{( F , \varphi )} x_{n+1} - T_{r_n}^{( F , \varphi )} x_n \| \le \| T_{r_{n+1}}^{( F , \varphi )} x_{n+1} - T_{r_{n+1}}^{( F , \varphi )} x_n \| + \| T_{r_{n+1}}^{( F , \varphi )} x_n - T_{r_n}^{( F , \varphi )} x_n \| \\ & \le \| x_{n+1} - x_n \| + \frac{| r_{n+1} - r_n |}{r_{n+1}} \| T_{r_{n+1}}^{( F , \varphi )} x_n - x_n \| \le \| x_{n+1} - x_n \| + \frac{| r_{n+1} - r_n |}{\epsilon} \| T_{r_{n+1}}^{( F , \varphi )} x_n - x_n \| \\ & \le \| x_{n+1} - x_n \| + M_1 | r_{n+1} - r_n | , \end{aligned}$$
(3.7)

where $\sup_{n \ge 1} \{ \frac{1}{\epsilon} \| T_{r_{n+1}}^{( F , \varphi )} x_n - x_n \| \} \le M_1$ for some $M_1 > 0$.

Note that
$$y_{n+1} - y_n = \beta_n ( x_{n+1} - x_n ) + ( \beta_{n+1} - \beta_n ) ( x_{n+1} - w_{n+1} ) + ( 1 - \beta_n ) ( w_{n+1} - w_n ) .$$
Hence it follows from (3.5)-(3.7) that
$$\begin{aligned} \| x_{n+2} - x_{n+1} \| & = \| P_C y_{n+1} - P_C y_n \| \le \| y_{n+1} - y_n \| \\ & \le \beta_n \| x_{n+1} - x_n \| + | \beta_{n+1} - \beta_n | \| x_{n+1} - w_{n+1} \| + ( 1 - \beta_n ) \| w_{n+1} - w_n \| \\ & \le \beta_n \| x_{n+1} - x_n \| + | \beta_{n+1} - \beta_n | \| x_{n+1} - w_{n+1} \| \\ & \quad + ( 1 - \beta_n ) \Big\{ \frac{| \alpha_{n+1} - \alpha_n | + | \beta_{n+1} - \beta_n |}{( 1 - d ) ( 1 - \beta_n )} \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| + \frac{\alpha_n}{1 - \beta_n} \gamma l \| x_{n+1} - x_n \| \\ & \quad \qquad + \Big( 1 - \frac{\alpha_n}{1 - \beta_n} \bar{\gamma} \Big) \big( \| u_{n+1} - u_n \| + \| W_{n+1} G u_n - W_n G u_n \| \big) \Big\} \\ & \le \beta_n \| x_{n+1} - x_n \| + | \beta_{n+1} - \beta_n | \| x_{n+1} - w_{n+1} \| \\ & \quad + ( 1 - \beta_n ) \Big\{ \frac{| \alpha_{n+1} - \alpha_n | + | \beta_{n+1} - \beta_n |}{( 1 - d ) ( 1 - \beta_n )} \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| + \frac{\alpha_n}{1 - \beta_n} \gamma l \| x_{n+1} - x_n \| \\ & \quad \qquad + \Big( 1 - \frac{\alpha_n}{1 - \beta_n} \bar{\gamma} \Big) \big[ \| x_{n+1} - x_n \| + M_1 | r_{n+1} - r_n | + \| W_{n+1} G u_n - W_n G u_n \| \big] \Big\} \\ & \le \beta_n \| x_{n+1} - x_n \| + | \beta_{n+1} - \beta_n | \| x_{n+1} - w_{n+1} \| \\ & \quad + ( 1 - \beta_n ) \Big\{ \frac{| \alpha_{n+1} - \alpha_n | + | \beta_{n+1} - \beta_n |}{( 1 - d ) ( 1 - \beta_n )} \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| \\ & \quad \qquad + \Big( 1 - \frac{\alpha_n}{1 - \beta_n} ( \bar{\gamma} - \gamma l ) \Big) \| x_{n+1} - x_n \| + M_1 | r_{n+1} - r_n | + \| W_{n+1} G u_n - W_n G u_n \| \Big\} \\ & \le \beta_n \| x_{n+1} - x_n \| + | \beta_{n+1} - \beta_n | \| x_{n+1} - w_{n+1} \| + \frac{| \alpha_{n+1} - \alpha_n | + | \beta_{n+1} - \beta_n |}{1 - d} \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| \\ & \quad + \big( 1 - \beta_n - \alpha_n ( \bar{\gamma} - \gamma l ) \big) \| x_{n+1} - x_n \| + M_1 | r_{n+1} - r_n | + \| W_{n+1} G u_n - W_n G u_n \| \\ & \le \big( 1 - \alpha_n ( \bar{\gamma} - \gamma l ) \big) \| x_{n+1} - x_n \| + | \beta_{n+1} - \beta_n | \| x_{n+1} - w_{n+1} \| \\ & \quad + \frac{| \alpha_{n+1} - \alpha_n | + | \beta_{n+1} - \beta_n |}{1 - d} \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| + M_1 | r_{n+1} - r_n | + \| W_{n+1} G u_n - W_n G u_n \| \\ & \le \big( 1 - \alpha_n ( \bar{\gamma} - \gamma l ) \big) \| x_{n+1} - x_n \| + | \beta_{n+1} - \beta_n | \| x_{n+1} - w_{n+1} \| \\ & \quad + \frac{| \alpha_{n+1} - \alpha_n | + | \beta_{n+1} - \beta_n |}{1 - d} \| \gamma f ( x_{n+1} ) - A W_{n+1} G u_{n+1} \| + M_1 | r_{n+1} - r_n | + M \prod_{i=1}^{n} \lambda_i \\ & \le \big( 1 - \alpha_n ( \bar{\gamma} - \gamma l ) \big) \| x_{n+1} - x_n \| + M_2 \big( | \alpha_{n+1} - \alpha_n | + | \beta_{n+1} - \beta_n | + | r_{n+1} - r_n | + b^n \big) , \end{aligned}$$
where $\sup_{n \ge 1} \{ \frac{1}{1 - d} \| \gamma f ( x_n ) - A W_n G u_n \| + \| x_n - w_n \| + M_1 + M \} \le M_2$ for some $M_2 > 0$. Since $\sum_{n=1}^{\infty} \alpha_n = \infty$, $\sum_{n=1}^{\infty} | \alpha_{n+1} - \alpha_n | < \infty$, $\sum_{n=1}^{\infty} | \beta_{n+1} - \beta_n | < \infty$ and $\sum_{n=1}^{\infty} | r_{n+1} - r_n | < \infty$, from $b \in ( 0 , 1 )$ and Lemma 2.8 we conclude that
$$\lim_{n \to \infty} \| x_{n+1} - x_n \| = 0 .$$
(3.8)
Step 3. We show that $\lim_{n \to \infty} \| u_n - G u_n \| = 0$. Indeed, for simplicity, we write $\tilde{u}_n = P_C ( I - \mu_2 B_2 ) u_n$, $v_n = P_C ( I - \mu_1 B_1 ) \tilde{u}_n$ and $\tilde{p} = P_C ( I - \mu_2 B_2 ) p$. Then $v_n = G u_n$ and
$$p = P_C ( I - \mu_1 B_1 ) \tilde{p} = P_C ( I - \mu_1 B_1 ) P_C ( I - \mu_2 B_2 ) p = G p .$$
From (3.1), (3.4), Proposition 2.1(i) and Lemma 2.2(b), we obtain that for $p \in \Omega$,
$$\begin{aligned} \| x_{n+1} - p \|^2 & \le \| \alpha_n ( \gamma f ( x_n ) - A W_n G u_n ) + \beta_n ( x_n - p ) + ( 1 - \beta_n ) ( W_n G u_n - p ) \|^2 \\ & = \| \beta_n ( x_n - p ) + ( 1 - \beta_n ) ( W_n G u_n - p ) \|^2 + 2 \alpha_n \langle \gamma f ( x_n ) - A W_n G u_n , \beta_n ( x_n - p ) + ( 1 - \beta_n ) ( W_n G u_n - p ) \rangle \\ & \quad + \alpha_n^2 \| \gamma f ( x_n ) - A W_n G u_n \|^2 \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \| W_n G u_n - p \|^2 - \beta_n ( 1 - \beta_n ) \| x_n - W_n G u_n \|^2 \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big[ 2 \| \beta_n ( x_n - p ) + ( 1 - \beta_n ) ( W_n G u_n - p ) \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big] \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \| G u_n - p \|^2 - \beta_n ( 1 - \beta_n ) \| x_n - W_n G u_n \|^2 \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big[ 2 \big( \beta_n \| x_n - p \| + ( 1 - \beta_n ) \| u_n - p \| \big) + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big] \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \| G u_n - p \|^2 - \beta_n ( 1 - \beta_n ) \| x_n - W_n G u_n \|^2 \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \big[ \| u_n - p \|^2 + \mu_2 ( \mu_2 - 2 \zeta_2 ) \| B_2 u_n - B_2 p \|^2 + \mu_1 ( \mu_1 - 2 \zeta_1 ) \| B_1 \tilde{u}_n - B_1 \tilde{p} \|^2 \big] \\ & \quad - \beta_n ( 1 - \beta_n ) \| x_n - W_n G u_n \|^2 + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \big[ \| x_n - p \|^2 + \mu_2 ( \mu_2 - 2 \zeta_2 ) \| B_2 u_n - B_2 p \|^2 + \mu_1 ( \mu_1 - 2 \zeta_1 ) \| B_1 \tilde{u}_n - B_1 \tilde{p} \|^2 \big] \\ & \quad - \beta_n ( 1 - \beta_n ) \| x_n - W_n G u_n \|^2 + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & = \| x_n - p \|^2 - ( 1 - \beta_n ) \big[ \mu_2 ( 2 \zeta_2 - \mu_2 ) \| B_2 u_n - B_2 p \|^2 + \mu_1 ( 2 \zeta_1 - \mu_1 ) \| B_1 \tilde{u}_n - B_1 \tilde{p} \|^2 \big] - \beta_n ( 1 - \beta_n ) \| x_n - W_n G u_n \|^2 \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) , \end{aligned}$$
(3.9)
which immediately implies that
$$\begin{aligned} & ( 1 - d ) \big[ \mu_2 ( 2 \zeta_2 - \mu_2 ) \| B_2 u_n - B_2 p \|^2 + \mu_1 ( 2 \zeta_1 - \mu_1 ) \| B_1 \tilde{u}_n - B_1 \tilde{p} \|^2 \big] + c ( 1 - d ) \| x_n - W_n G u_n \|^2 \\ & \quad \le ( 1 - \beta_n ) \big[ \mu_2 ( 2 \zeta_2 - \mu_2 ) \| B_2 u_n - B_2 p \|^2 + \mu_1 ( 2 \zeta_1 - \mu_1 ) \| B_1 \tilde{u}_n - B_1 \tilde{p} \|^2 \big] + \beta_n ( 1 - \beta_n ) \| x_n - W_n G u_n \|^2 \\ & \quad \le \| x_n - p \|^2 - \| x_{n+1} - p \|^2 + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \quad \le \| x_n - x_{n+1} \| \big( \| x_n - p \| + \| x_{n+1} - p \| \big) + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) . \end{aligned}$$
Since $\lim_{n \to \infty} \alpha_n = 0$, $\lim_{n \to \infty} \| x_{n+1} - x_n \| = 0$ and $\mu_i \in ( 0 , 2 \zeta_i )$, $i = 1 , 2$, we deduce from the boundedness of $\{ x_n \}$, $\{ f ( x_n ) \}$ and $\{ W_n G u_n \}$ that
$$\lim_{n \to \infty} \| B_2 u_n - B_2 p \| = 0 , \qquad \lim_{n \to \infty} \| B_1 \tilde{u}_n - B_1 \tilde{p} \| = 0 \qquad \text{and} \qquad \lim_{n \to \infty} \| x_n - W_n G u_n \| = 0 .$$
(3.10)
Also, in terms of the firm nonexpansivity of $P_C$ and the $\zeta_i$-inverse strong monotonicity of $B_i$ for $i = 1 , 2$, we obtain from $\mu_i \in ( 0 , 2 \zeta_i )$, $i \in \{ 1 , 2 \}$, and (3.4) that
$$\begin{aligned} \| \tilde{u}_n - \tilde{p} \|^2 & = \| P_C ( I - \mu_2 B_2 ) u_n - P_C ( I - \mu_2 B_2 ) p \|^2 \le \langle ( I - \mu_2 B_2 ) u_n - ( I - \mu_2 B_2 ) p , \tilde{u}_n - \tilde{p} \rangle \\ & = \frac{1}{2} \big[ \| ( I - \mu_2 B_2 ) u_n - ( I - \mu_2 B_2 ) p \|^2 + \| \tilde{u}_n - \tilde{p} \|^2 - \| ( I - \mu_2 B_2 ) u_n - ( I - \mu_2 B_2 ) p - ( \tilde{u}_n - \tilde{p} ) \|^2 \big] \\ & \le \frac{1}{2} \big[ \| u_n - p \|^2 + \| \tilde{u}_n - \tilde{p} \|^2 - \| ( u_n - \tilde{u}_n ) - \mu_2 ( B_2 u_n - B_2 p ) - ( p - \tilde{p} ) \|^2 \big] \\ & \le \frac{1}{2} \big[ \| x_n - p \|^2 + \| \tilde{u}_n - \tilde{p} \|^2 - \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \|^2 + 2 \mu_2 \langle ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) , B_2 u_n - B_2 p \rangle - \mu_2^2 \| B_2 u_n - B_2 p \|^2 \big] \end{aligned}$$
and
$$\begin{aligned} \| v_n - p \|^2 & = \| P_C ( I - \mu_1 B_1 ) \tilde{u}_n - P_C ( I - \mu_1 B_1 ) \tilde{p} \|^2 \le \langle ( I - \mu_1 B_1 ) \tilde{u}_n - ( I - \mu_1 B_1 ) \tilde{p} , v_n - p \rangle \\ & = \frac{1}{2} \big[ \| ( I - \mu_1 B_1 ) \tilde{u}_n - ( I - \mu_1 B_1 ) \tilde{p} \|^2 + \| v_n - p \|^2 - \| ( I - \mu_1 B_1 ) \tilde{u}_n - ( I - \mu_1 B_1 ) \tilde{p} - ( v_n - p ) \|^2 \big] \\ & \le \frac{1}{2} \big[ \| \tilde{u}_n - \tilde{p} \|^2 + \| v_n - p \|^2 - \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \|^2 + 2 \mu_1 \langle B_1 \tilde{u}_n - B_1 \tilde{p} , ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \rangle - \mu_1^2 \| B_1 \tilde{u}_n - B_1 \tilde{p} \|^2 \big] \\ & \le \frac{1}{2} \big[ \| x_n - p \|^2 + \| v_n - p \|^2 - \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \|^2 + 2 \mu_1 \langle B_1 \tilde{u}_n - B_1 \tilde{p} , ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \rangle \big] . \end{aligned}$$
Thus, we have
$$\| \tilde{u}_n - \tilde{p} \|^2 \le \| x_n - p \|^2 - \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \|^2 + 2 \mu_2 \langle ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) , B_2 u_n - B_2 p \rangle - \mu_2^2 \| B_2 u_n - B_2 p \|^2$$
(3.11)
and
$$\| v_n - p \|^2 \le \| x_n - p \|^2 - \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \|^2 + 2 \mu_1 \| B_1 \tilde{u}_n - B_1 \tilde{p} \| \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \| .$$
(3.12)
Consequently, from (3.4), (3.9) and (3.11) it follows that
$$\begin{aligned} \| x_{n+1} - p \|^2 & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \| G u_n - p \|^2 - \beta_n ( 1 - \beta_n ) \| x_n - W_n G u_n \|^2 \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \| \tilde{u}_n - \tilde{p} \|^2 + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \big[ \| x_n - p \|^2 - \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \|^2 + 2 \mu_2 \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \| \| B_2 u_n - B_2 p \| \big] \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \| x_n - p \|^2 - ( 1 - \beta_n ) \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \|^2 + 2 \mu_2 \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \| \| B_2 u_n - B_2 p \| \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) , \end{aligned}$$
which yields
$$\begin{aligned} ( 1 - d ) \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \|^2 & \le ( 1 - \beta_n ) \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \|^2 \\ & \le \| x_n - p \|^2 - \| x_{n+1} - p \|^2 + 2 \mu_2 \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \| \| B_2 u_n - B_2 p \| \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \| x_n - x_{n+1} \| \big( \| x_n - p \| + \| x_{n+1} - p \| \big) + 2 \mu_2 \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \| \| B_2 u_n - B_2 p \| \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) . \end{aligned}$$
Since $\lim_{n \to \infty} \alpha_n = 0$, $\lim_{n \to \infty} \| x_{n+1} - x_n \| = 0$ and $\lim_{n \to \infty} \| B_2 u_n - B_2 p \| = 0$, we deduce from the boundedness of $\{ x_n \}$, $\{ u_n \}$, $\{ \tilde{u}_n \}$, $\{ f ( x_n ) \}$ and $\{ W_n G u_n \}$ that
$$\lim_{n \to \infty} \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \| = 0 .$$
(3.13)
Furthermore, from (3.4), (3.9) and (3.12) it follows that
$$\begin{aligned} \| x_{n+1} - p \|^2 & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \| G u_n - p \|^2 + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \| v_n - p \|^2 + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \big[ \| x_n - p \|^2 - \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \|^2 + 2 \mu_1 \| B_1 \tilde{u}_n - B_1 \tilde{p} \| \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \| \big] \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \| x_n - p \|^2 - ( 1 - \beta_n ) \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \|^2 + 2 \mu_1 \| B_1 \tilde{u}_n - B_1 \tilde{p} \| \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \| \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) , \end{aligned}$$
which leads to
$$\begin{aligned} ( 1 - d ) \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \|^2 & \le ( 1 - \beta_n ) \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \|^2 \\ & \le \| x_n - p \|^2 - \| x_{n+1} - p \|^2 + 2 \mu_1 \| B_1 \tilde{u}_n - B_1 \tilde{p} \| \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \| \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \| x_n - x_{n+1} \| \big( \| x_n - p \| + \| x_{n+1} - p \| \big) + 2 \mu_1 \| B_1 \tilde{u}_n - B_1 \tilde{p} \| \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \| \\ & \quad + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) . \end{aligned}$$
Since $\lim_{n \to \infty} \alpha_n = 0$, $\lim_{n \to \infty} \| x_{n+1} - x_n \| = 0$ and $\lim_{n \to \infty} \| B_1 \tilde{u}_n - B_1 \tilde{p} \| = 0$, we deduce from the boundedness of $\{ x_n \}$, $\{ v_n \}$, $\{ \tilde{u}_n \}$, $\{ f ( x_n ) \}$ and $\{ W_n G u_n \}$ that
$$\lim_{n \to \infty} \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \| = 0 .$$
(3.14)
Note that
$$\| u_n - v_n \| \le \| ( u_n - \tilde{u}_n ) - ( p - \tilde{p} ) \| + \| ( \tilde{u}_n - v_n ) + ( p - \tilde{p} ) \| .$$
Hence from (3.13) and (3.14) we get
$$\lim_{n \to \infty} \| u_n - v_n \| = \lim_{n \to \infty} \| u_n - G u_n \| = 0 .$$
(3.15)
Step 4. We show that $\lim_{n \to \infty} \| u_n - x_n \| = 0$ and $\lim_{n \to \infty} \| u_n - W u_n \| = 0$. Indeed, by Proposition 2.2(c) we have
$$\begin{aligned} \| u_n - p \|^2 & = \| T_{r_n}^{( F , \varphi )} x_n - T_{r_n}^{( F , \varphi )} p \|^2 \le \langle u_n - p , x_n - p \rangle \\ & = \frac{1}{2} \big[ \| u_n - p \|^2 + \| x_n - p \|^2 - \| u_n - x_n \|^2 \big] . \end{aligned}$$
That is,
$$\| u_n - p \|^2 \le \| x_n - p \|^2 - \| u_n - x_n \|^2 .$$
(3.16)
From (3.4), (3.9) and (3.16) it follows that
$$\begin{aligned} \| x_{n+1} - p \|^2 & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \| G u_n - p \|^2 + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \| u_n - p \|^2 + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \beta_n \| x_n - p \|^2 + ( 1 - \beta_n ) \big[ \| x_n - p \|^2 - \| u_n - x_n \|^2 \big] + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & = \| x_n - p \|^2 - ( 1 - \beta_n ) \| u_n - x_n \|^2 + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) . \end{aligned}$$
So, we get
$$\begin{aligned} ( 1 - d ) \| u_n - x_n \|^2 & \le ( 1 - \beta_n ) \| u_n - x_n \|^2 \\ & \le \| x_n - p \|^2 - \| x_{n+1} - p \|^2 + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) \\ & \le \| x_n - x_{n+1} \| \big( \| x_n - p \| + \| x_{n+1} - p \| \big) + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big( 2 \| x_n - p \| + \alpha_n \| \gamma f ( x_n ) - A W_n G u_n \| \big) . \end{aligned}$$
Since $\lim_{n \to \infty} \alpha_n = 0$ and $\lim_{n \to \infty} \| x_{n+1} - x_n \| = 0$, we deduce from the boundedness of $\{ x_n \}$, $\{ f ( x_n ) \}$ and $\{ W_n G u_n \}$ that
$$\lim_{n \to \infty} \| u_n - x_n \| = 0 .$$
(3.17)
In the meantime, we observe that
$$\| G u_n - W_n G u_n \| \le \| G u_n - u_n \| + \| u_n - x_n \| + \| x_n - W_n G u_n \| .$$
From (3.10), (3.15) and (3.17) it follows that
$$\lim_{n \to \infty} \| G u_n - W_n G u_n \| = 0 .$$
(3.18)
Also, note that
$$\begin{aligned} \| u_n - W u_n \| & \le \| u_n - G u_n \| + \| G u_n - W_n G u_n \| + \| W_n G u_n - W_n u_n \| + \| W_n u_n - W u_n \| \\ & \le 2 \| u_n - G u_n \| + \| G u_n - W_n G u_n \| + \| W_n u_n - W u_n \| . \end{aligned}$$
From (3.15), (3.18), Remark 2.3 and the boundedness of $\{ u_n \}$ we immediately obtain
$$\lim_{n \to \infty} \| u_n - W u_n \| = 0 .$$
(3.19)
Step 5. We show that
$$\limsup_{n \to \infty} \langle ( \gamma f - A ) x^* , x_n - x^* \rangle \le 0 ,$$

where $x^*$ uniquely solves the minimization problem (3.3).

Indeed, as shown above, $x^*$ is the unique fixed point of the mapping $P_{\Omega} ( \gamma f + ( I - A ) )$ (i.e., $x^* = P_{\Omega} ( \gamma f + ( I - A ) ) x^*$). That is, $x^*$ is the unique solution of VIP (3.2). Equivalently, $x^*$ is the unique solution of the minimization problem (3.3).