
A modified projection method for a common solution of a system of variational inequalities, a split equilibrium problem and a hierarchical fixed-point problem

Abstract

In this paper, we suggest and analyze a modified projection method for finding a common solution of a system of variational inequalities, a split equilibrium problem, and a hierarchical fixed-point problem in the setting of real Hilbert spaces. We prove the strong convergence of the sequence generated by the proposed method to a common solution of a system of variational inequalities, a split equilibrium problem, and a hierarchical fixed-point problem. Several special cases are also discussed. The results presented in this paper extend and improve some well-known results in the literature.

MSC: 49J30, 47H09, 47J20.

1 Introduction

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle \cdot , \cdot \rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. Recently, Ceng et al. [1] considered the general system of variational inequalities, which involves finding $(x^{*}, y^{*}) \in C \times C$ such that

\[
\begin{cases}
\bigl\langle \mu_1 B_1 y^{*} + x^{*} - y^{*},\; x - x^{*} \bigr\rangle \ge 0, & \forall x \in C \text{ and } \mu_1 > 0,\\[2pt]
\bigl\langle \mu_2 B_2 x^{*} + y^{*} - x^{*},\; x - y^{*} \bigr\rangle \ge 0, & \forall x \in C \text{ and } \mu_2 > 0,
\end{cases}
\]
(1.1)

where $B_i : C \to C$ is a nonlinear mapping for each $i = 1, 2$. The solution set of (1.1) is denoted by $S^{*}$. As pointed out in [2], the system of variational inequalities is used as a tool to study the Nash equilibrium problem; see, for example, [3–6] and the references therein. We believe that problem (1.1) could be used to study the Nash equilibrium problem for a two-player game. The theory of variational inequalities is well established and has a wide range of applications in science, engineering, management, and social sciences; see, for example, [4–7] and the references therein.

Ceng et al. [1] transformed problem (1.1) into a fixed-point problem (see Lemma 2.2) and introduced an iterative method for finding a common element of the set $\operatorname{Fix}(T) \cap S^{*}$. Based on the one-step iterative method [8] and the multi-step iterative method [9], Latif et al. [10] proposed a multi-step hybrid viscosity method that generates a sequence via an explicit iterative algorithm to compute approximate solutions of a system of variational inequalities defined over the intersection of the solution set of an equilibrium problem, the set of common fixed points of a finite family of nonexpansive mappings, and the fixed-point set of a nonexpansive mapping. Under very mild conditions, they proved that the sequence converges strongly to the unique solution of this system of variational inequalities and, moreover, to the unique solution of an associated triple hierarchical variational inequality problem.

On the other hand, by combining the regularization method, Korpelevich's extragradient method, the hybrid steepest-descent method, and the viscosity approximation method, Ceng et al. [2] introduced and analyzed implicit and explicit iterative schemes for computing a common element of the solution set of a system of variational inequalities and the set of zeros of an accretive operator in a Banach space. Under suitable assumptions, they proved the strong convergence of the sequences generated by the proposed schemes.

If $B_1 = B_2 = B$, then problem (1.1) reduces to finding $(x^{*}, y^{*}) \in C \times C$ such that

\[
\begin{cases}
\bigl\langle \mu_1 B y^{*} + x^{*} - y^{*},\; x - x^{*} \bigr\rangle \ge 0, & \forall x \in C \text{ and } \mu_1 > 0,\\[2pt]
\bigl\langle \mu_2 B x^{*} + y^{*} - x^{*},\; x - y^{*} \bigr\rangle \ge 0, & \forall x \in C \text{ and } \mu_2 > 0,
\end{cases}
\]
(1.2)

which has been introduced and studied by Verma [11, 12].

If $x^{*} = y^{*}$ and $\mu_1 = \mu_2$, then problem (1.2) collapses to the classical variational inequality of finding $x^{*} \in C$ such that

\[\bigl\langle B x^{*},\; x - x^{*} \bigr\rangle \ge 0, \quad \forall x \in C.\]

For the recent applications, numerical techniques, and physical formulation, see [1–45].

We now recall the following definitions, which are useful in the subsequent analysis.

Definition 1.1 The mapping $T : C \to H$ is said to be

  (a) monotone, if
\[\langle T x - T y,\; x - y \rangle \ge 0, \quad \forall x, y \in C;\]

  (b) strongly monotone, if there exists an $\alpha > 0$ such that
\[\langle T x - T y,\; x - y \rangle \ge \alpha \|x - y\|^{2}, \quad \forall x, y \in C;\]

  (c) $\alpha$-inverse strongly monotone, if there exists an $\alpha > 0$ such that
\[\langle T x - T y,\; x - y \rangle \ge \alpha \|T x - T y\|^{2}, \quad \forall x, y \in C;\]

  (d) nonexpansive, if
\[\|T x - T y\| \le \|x - y\|, \quad \forall x, y \in C;\]

  (e) $k$-Lipschitz continuous, if there exists a constant $k > 0$ such that
\[\|T x - T y\| \le k\|x - y\|, \quad \forall x, y \in C;\]

  (f) a contraction on $C$, if there exists a constant $0 \le k < 1$ such that
\[\|T x - T y\| \le k\|x - y\|, \quad \forall x, y \in C.\]

It is easy to observe that every $\alpha$-inverse strongly monotone mapping $T$ is monotone and $\frac{1}{\alpha}$-Lipschitz continuous. It is well known that every nonexpansive operator $T : H \to H$ satisfies, for all $(x, y) \in H \times H$, the inequality

\[\bigl\langle \bigl(x - T(x)\bigr) - \bigl(y - T(y)\bigr),\; T(y) - T(x) \bigr\rangle \le \frac{1}{2}\bigl\| \bigl(T(x) - x\bigr) - \bigl(T(y) - y\bigr) \bigr\|^{2},\]
(1.3)

and therefore, we get, for all $(x, y) \in H \times \operatorname{Fix}(T)$,

\[\bigl\langle x - T(x),\; y - T(x) \bigr\rangle \le \frac{1}{2}\bigl\| T(x) - x \bigr\|^{2}.\]
(1.4)

The fixed-point problem for the mapping $T$ is to find $x \in C$ such that

\[T x = x.\]
(1.5)

We denote by $F(T)$ the set of solutions of (1.5). It is well known that $F(T)$ is closed and convex, and that $P_{F(T)}$ is well defined.

The equilibrium problem for a bifunction $F : C \times C \to \mathbb{R}$, denoted by EP, is to find $x \in C$ such that

\[F(x, y) \ge 0, \quad \forall y \in C.\]
(1.6)

The solution set of (1.6) is denoted by EP(F). Numerous problems in physics, optimization, and economics reduce to finding a solution of (1.6), see [25, 38]. In 1994, Censor and Elfving [19] introduced and studied the following split feasibility problem.

Let $C$ and $Q$ be nonempty closed convex subsets of the infinite-dimensional real Hilbert spaces $H_1$ and $H_2$, respectively, and let $A \in B(H_1, H_2)$, where $B(H_1, H_2)$ denotes the collection of all bounded linear operators from $H_1$ into $H_2$. The split feasibility problem is to find a point $x^{*}$ such that

\[x^{*} \in C \quad \text{and} \quad A x^{*} \in Q.\]
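As a toy illustration of the split feasibility problem (this example is not from the paper), one can take $H_1 = H_2 = \mathbb{R}$, $C = [0, 1]$, $Q = [2, 4]$, and $A x = 3x$, and run the classical CQ-type projected step $x \leftarrow P_C[x - \gamma A^{*}(A x - P_Q(A x))]$; all names and data below are purely illustrative.

```python
# Toy split feasibility problem: find x in C = [0, 1] with A x in Q = [2, 4],
# where A x = 3 x.  P_C and P_Q are simple interval projections, and the update
# is an illustrative CQ-type projected step (not Algorithm 3.1 of the paper).

def proj_interval(x, lo, hi):
    return min(max(x, lo), hi)

A = 3.0                      # the bounded linear operator (a scalar here)
gamma = 1.0 / (A * A)        # step size in (0, 2 / ||A||^2)

x = 0.0                      # arbitrary starting point
for _ in range(100):
    Ax = A * x
    residual = Ax - proj_interval(Ax, 2.0, 4.0)      # how far A x is from Q
    x = proj_interval(x - gamma * A * residual, 0.0, 1.0)

print(x)   # approx. 2/3, and A * (2/3) = 2 lies in Q
```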

Very recently, Ceng et al. [22] introduced and analyzed an extragradient method with regularization for finding a common element of the solution set of the split feasibility problem and the set of fixed points of a nonexpansive mapping S in the setting of infinite-dimensional Hilbert spaces. By combining Mann’s iterative method and the extragradient method, Ceng et al. [21] proposed three different kinds of Mann type iterative methods for finding a common element of the solution set of the split feasibility problem and the set of fixed points of a nonexpansive mapping S in the setting of infinite-dimensional Hilbert spaces.

Recently, Censor et al. [23] introduced a new variational inequality problem which we call the split variational inequality problem (SVIP). Let $H_1$ and $H_2$ be two real Hilbert spaces. Given operators $f : H_1 \to H_1$ and $g : H_2 \to H_2$, a bounded linear operator $A : H_1 \to H_2$, and nonempty, closed, and convex subsets $C \subseteq H_1$ and $Q \subseteq H_2$, the SVIP is formulated as follows: find a point $x^{*} \in C$ such that

\[\bigl\langle f(x^{*}),\; x - x^{*} \bigr\rangle \ge 0 \quad \text{for all } x \in C\]
(1.7)

and such that

\[y^{*} = A x^{*} \in Q \quad \text{solves} \quad \bigl\langle g(y^{*}),\; y - y^{*} \bigr\rangle \ge 0 \quad \text{for all } y \in Q.\]
(1.8)

In [36], Moudafi introduced an iterative method which can be regarded as an extension of the method given by Censor et al. [23] for the following split monotone variational inclusions:

\[\text{find } x^{*} \in H_1 \text{ such that } 0 \in f(x^{*}) + B_1(x^{*})\]

and such that

\[y^{*} = A x^{*} \in H_2 \quad \text{solves} \quad 0 \in g(y^{*}) + B_2(y^{*}),\]

where $B_i : H_i \to 2^{H_i}$ is a set-valued mapping for $i = 1, 2$. Later, Byrne et al. [18] generalized and extended the work of Censor et al. [23] and Moudafi [36].

In this paper, we consider the following pair of equilibrium problems, called the split equilibrium problem. Let $F_1 : C \times C \to \mathbb{R}$ and $F_2 : Q \times Q \to \mathbb{R}$ be nonlinear bifunctions and let $A : H_1 \to H_2$ be a bounded linear operator. The split equilibrium problem (SEP) is to find $x^{*} \in C$ such that

\[F_1(x^{*}, x) \ge 0, \quad \forall x \in C,\]
(1.9)

and such that

\[y^{*} = A x^{*} \in Q \quad \text{solves} \quad F_2(y^{*}, y) \ge 0, \quad \forall y \in Q.\]
(1.10)

The solution set of SEP (1.9)-(1.10) is denoted by $\Lambda = \{ p \in EP(F_1) : A p \in EP(F_2) \}$.

Let $S : C \to H$ be a nonexpansive mapping. The following problem is called a hierarchical fixed-point problem: find $x \in F(T)$ such that

\[\langle x - S x,\; y - x \rangle \ge 0, \quad \forall y \in F(T).\]
(1.11)

It is known that the hierarchical fixed-point problem (1.11) links with some monotone variational inequalities and convex programming problems; see [45]. Various methods have been proposed to solve the hierarchical fixed-point problem; see Mainge and Moudafi [34] and Cianciaruso et al. [26]. In 2010, Yao et al. [45] introduced the following strong convergence iterative algorithm to solve the problem (1.11):

\[
\begin{aligned}
y_n &= \beta_n S x_n + (1 - \beta_n) x_n,\\
x_{n+1} &= P_C\bigl[ \alpha_n f(x_n) + (1 - \alpha_n) T y_n \bigr], \quad n \ge 0,
\end{aligned}
\]
(1.12)

where $f : C \to H$ is a contraction mapping and $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0, 1)$. Under certain restrictions on the parameters, Yao et al. proved that the sequence $\{x_n\}$ generated by (1.12) converges strongly to $z \in F(T)$, which is the unique solution of the following variational inequality:

\[\bigl\langle (I - f) z,\; y - z \bigr\rangle \ge 0, \quad \forall y \in F(T).\]
(1.13)

In 2011, Ceng et al. [20] investigated the following iterative method:

\[x_{n+1} = P_C\bigl[ \alpha_n \rho U(x_n) + (I - \alpha_n \mu F)\bigl(T(y_n)\bigr) \bigr], \quad n \ge 0,\]
(1.14)

where $U$ is a Lipschitzian mapping and $F$ is a Lipschitzian and strongly monotone mapping. They proved that, under appropriate assumptions on the operators and parameters, the sequence $\{x_n\}$ generated by (1.14) converges strongly to the unique solution of the variational inequality

\[\bigl\langle \rho U(z) - \mu F(z),\; x - z \bigr\rangle \le 0, \quad \forall x \in \operatorname{Fix}(T).\]

In this paper, motivated by the work of Censor et al. [23], Moudafi [36], Byrne et al. [18], Yao et al. [45], Ceng et al. [20], Bnouhachem [15–17], and by recent work in this direction, we give an iterative method for finding an approximate element of the common set of solutions of (1.1), (1.9)-(1.10), and (1.11) in a real Hilbert space. We establish a strong convergence theorem based on this method. We would like to mention that the proposed method is quite general and flexible and includes many known results for solving systems of variational inequality problems, split equilibrium problems, and hierarchical fixed-point problems; see, e.g., [20, 26, 34, 40, 45] and the relevant references cited therein.

2 Preliminaries

In this section, we list some fundamental lemmas that are useful in the subsequent analysis. The first lemma collects some basic properties of the projection onto $C$.

Lemma 2.1 Let $P_C$ denote the projection of $H$ onto $C$. Then we have the following inequalities:

\[\bigl\langle z - P_C[z],\; P_C[z] - v \bigr\rangle \ge 0, \quad \forall z \in H,\ v \in C;\]
(2.1)
\[\bigl\langle u - v,\; P_C[u] - P_C[v] \bigr\rangle \ge \bigl\| P_C[u] - P_C[v] \bigr\|^{2}, \quad \forall u, v \in H;\]
(2.2)
\[\bigl\| P_C[u] - P_C[v] \bigr\| \le \|u - v\|, \quad \forall u, v \in H;\]
(2.3)
\[\bigl\| u - P_C[z] \bigr\|^{2} \le \|z - u\|^{2} - \bigl\| z - P_C[z] \bigr\|^{2}, \quad \forall z \in H,\ u \in C.\]
(2.4)
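As a quick numerical sanity check of Lemma 2.1 (not part of the original text), the following sketch verifies (2.1)-(2.4) on random points when $C$ is the closed unit ball of $\mathbb{R}^{2}$, for which the projection has the closed form $P_C[z] = z / \max(1, \|z\|)$.

```python
# Numerical check of (2.1)-(2.4) for C = closed unit ball in R^2,
# whose projection is P_C[z] = z / max(1, ||z||).
import numpy as np

def proj_ball(z):
    return z / max(1.0, np.linalg.norm(z))

rng = np.random.default_rng(0)
tol = 1e-9
for _ in range(1000):
    z, u, v = rng.normal(size=(3, 2)) * 3.0
    w = proj_ball(rng.normal(size=2) * 3.0)          # an arbitrary point of C
    Pz, Pu, Pv = proj_ball(z), proj_ball(u), proj_ball(v)
    assert np.dot(z - Pz, Pz - w) >= -tol                                    # (2.1)
    assert np.dot(u - v, Pu - Pv) >= np.linalg.norm(Pu - Pv) ** 2 - tol      # (2.2)
    assert np.linalg.norm(Pu - Pv) <= np.linalg.norm(u - v) + tol            # (2.3)
    assert (np.linalg.norm(w - Pz) ** 2
            <= np.linalg.norm(z - w) ** 2 - np.linalg.norm(z - Pz) ** 2 + tol)  # (2.4)
print("inequalities (2.1)-(2.4) verified on random samples")
```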

Lemma 2.2 [1]

For any $(x^{*}, y^{*}) \in C \times C$, $(x^{*}, y^{*})$ is a solution of (1.1) if and only if $x^{*}$ is a fixed point of the mapping $Q : C \to C$ defined by

\[Q(x) = P_C\bigl[ P_C[x - \mu_2 B_2 x] - \mu_1 B_1 P_C[x - \mu_2 B_2 x] \bigr], \quad \forall x \in C,\]
(2.5)

where $y^{*} = P_C[x^{*} - \mu_2 B_2 x^{*}]$, $\mu_i \in (0, 2\theta_i)$, and $B_i : C \to C$ is a $\theta_i$-inverse strongly monotone mapping for each $i = 1, 2$.
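To illustrate Lemma 2.2, here is a minimal one-dimensional sketch (the data are illustrative assumptions, not taken from the paper): with $C = [0, 2]$, $B_1 x = x - 0.5$, $B_2 x = x - 1.5$ (both $1$-inverse strongly monotone) and $\mu_1 = \mu_2 = 0.8 \in (0, 2\theta_i)$, iterating the mapping $Q$ of (2.5) produces a pair $(x^{*}, y^{*})$ at which both residuals of (1.1) vanish.

```python
# Toy check of Lemma 2.2 in H = R with C = [0, 2]; all data are illustrative.

def P_C(x):                          # projection onto C = [0, 2]
    return min(max(x, 0.0), 2.0)

B1 = lambda x: x - 0.5               # 1-inverse strongly monotone
B2 = lambda x: x - 1.5
mu1 = mu2 = 0.8                      # mu_i in (0, 2*theta_i)

def Q(x):                            # the mapping of (2.5)
    y = P_C(x - mu2 * B2(x))
    return P_C(y - mu1 * B1(y))

x = 0.0
for _ in range(200):                 # Picard iteration; here Q is a contraction
    x = Q(x)
y = P_C(x - mu2 * B2(x))

# (x, y) solves the system (1.1): both residuals vanish at this interior solution
print(x, y)                                         # approx. 0.6667 and 1.3333
print(mu1 * B1(y) + x - y, mu2 * B2(x) + y - x)     # both approx. 0
```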

Assumption 2.1 [14]

Let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying the following assumptions:

  (i) $F(x, x) = 0$, $\forall x \in C$;

  (ii) $F$ is monotone, i.e., $F(x, y) + F(y, x) \le 0$, $\forall x, y \in C$;

  (iii) for each $x, y, z \in C$, $\lim_{t \downarrow 0} F\bigl(t z + (1 - t)x,\, y\bigr) \le F(x, y)$;

  (iv) for each $x \in C$, $y \mapsto F(x, y)$ is convex and lower semicontinuous;

  (v) for fixed $r > 0$ and $z \in C$, there exist a bounded subset $K$ of $H_1$ and $x \in C \cap K$ such that
\[F(y, x) + \frac{1}{r}\langle y - x,\; x - z \rangle < 0, \quad \forall y \in C \setminus K.\]

Lemma 2.3 [28]

Assume that $F_1 : C \times C \to \mathbb{R}$ satisfies Assumption 2.1. For $r > 0$ and $x \in H_1$, define a mapping $T_r^{F_1} : H_1 \to C$ as follows:

\[T_r^{F_1}(x) = \Bigl\{ z \in C : F_1(z, y) + \frac{1}{r}\langle y - z,\; z - x \rangle \ge 0,\ \forall y \in C \Bigr\}.\]

Then the following hold:

  (i) $T_r^{F_1}$ is nonempty and single-valued;

  (ii) $T_r^{F_1}$ is firmly nonexpansive, i.e.,
\[\bigl\| T_r^{F_1}(x) - T_r^{F_1}(y) \bigr\|^{2} \le \bigl\langle T_r^{F_1}(x) - T_r^{F_1}(y),\; x - y \bigr\rangle, \quad \forall x, y \in H_1;\]

  (iii) $F(T_r^{F_1}) = EP(F_1)$;

  (iv) $EP(F_1)$ is closed and convex.

Assume that $F_2 : Q \times Q \to \mathbb{R}$ satisfies Assumption 2.1, and for $s > 0$ and $u \in H_2$, define a mapping $T_s^{F_2} : H_2 \to Q$ as follows:

\[T_s^{F_2}(u) = \Bigl\{ v \in Q : F_2(v, w) + \frac{1}{s}\langle w - v,\; v - u \rangle \ge 0,\ \forall w \in Q \Bigr\}.\]

Then $T_s^{F_2}$ satisfies conditions (i)-(iv) of Lemma 2.3, and $F(T_s^{F_2}) = EP(F_2, Q)$, where $EP(F_2, Q)$ is the solution set of the following equilibrium problem:

\[\text{find } y^{*} \in Q \text{ such that } F_2(y^{*}, y) \ge 0, \quad \forall y \in Q.\]
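To make the resolvent $T_r^{F}$ concrete, here is a small worked example, not taken from the paper: take $H_1 = C = \mathbb{R}$ and the illustrative bifunction $F_1(z, y) = z(y - z)$, i.e., the equilibrium bifunction associated with the monotone operator $B z = z$. Then

\[T_r^{F_1}(x) = \Bigl\{ z \in \mathbb{R} : z(y - z) + \frac{1}{r}(y - z)(z - x) \ge 0,\ \forall y \in \mathbb{R} \Bigr\}.\]

Since $(y - z)\bigl(z + \frac{1}{r}(z - x)\bigr) \ge 0$ must hold for every $y \in \mathbb{R}$, the second factor must vanish, so $r z + z - x = 0$ and

\[T_r^{F_1}(x) = \frac{x}{1 + r}, \qquad F\bigl(T_r^{F_1}\bigr) = \{0\} = EP(F_1),\]

in agreement with Lemma 2.3(i)-(iii).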

Lemma 2.4 [27]

Assume that $F_1 : C \times C \to \mathbb{R}$ satisfies Assumption 2.1, and let $T_r^{F_1}$ be defined as in Lemma 2.3. Let $x, y \in H_1$ and $r_1, r_2 > 0$. Then

\[\bigl\| T_{r_2}^{F_1}(y) - T_{r_1}^{F_1}(x) \bigr\| \le \|y - x\| + \Bigl| \frac{r_2 - r_1}{r_2} \Bigr|\, \bigl\| T_{r_2}^{F_1}(y) - y \bigr\|.\]

Lemma 2.5 [31]

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. If $T : C \to C$ is a nonexpansive mapping with $\operatorname{Fix}(T) \neq \emptyset$, then the mapping $I - T$ is demiclosed at 0, i.e., if $\{x_n\}$ is a sequence in $C$ converging weakly to $x$ and $\{(I - T)x_n\}$ converges strongly to 0, then $(I - T)x = 0$.

Lemma 2.6 [20]

Let $U : C \to H$ be a $\tau$-Lipschitzian mapping, and let $F : C \to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone mapping. Then for $0 \le \rho\tau < \mu\eta$, the mapping $\mu F - \rho U$ is $(\mu\eta - \rho\tau)$-strongly monotone, i.e.,

\[\bigl\langle (\mu F - \rho U)x - (\mu F - \rho U)y,\; x - y \bigr\rangle \ge (\mu\eta - \rho\tau)\|x - y\|^{2}, \quad \forall x, y \in C.\]

Lemma 2.7 [39]

Suppose that $\lambda \in (0, 1)$ and $\mu > 0$. Let $F : C \to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator. In association with a nonexpansive mapping $T : C \to C$, define the mapping $T^{\lambda} : C \to H$ by

\[T^{\lambda} x = T x - \lambda\mu F\bigl(T(x)\bigr), \quad \forall x \in C.\]

Then $T^{\lambda}$ is a contraction provided $\mu < \frac{2\eta}{k^{2}}$, that is,

\[\bigl\| T^{\lambda} x - T^{\lambda} y \bigr\| \le (1 - \lambda\nu)\|x - y\|, \quad \forall x, y \in C,\]

where $\nu = 1 - \sqrt{1 - \mu(2\eta - \mu k^{2})}$.

Lemma 2.8 [43]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

\[a_{n+1} \le (1 - \gamma_n) a_n + \delta_n,\]

where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that

  (1) $\sum_{n=1}^{\infty} \gamma_n = \infty$;

  (2) $\limsup_{n \to \infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$.

Then $\lim_{n \to \infty} a_n = 0$.
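A quick numerical illustration of Lemma 2.8 (not in the paper): take $\gamma_n = 1/n$ and $\delta_n = 1/n^{2}$, so that $\sum_n \gamma_n = \infty$ and $\delta_n / \gamma_n \to 0$; the recursion then drives $a_n$ to $0$ (roughly like $\log n / n$).

```python
# Illustration of Lemma 2.8 with gamma_n = 1/n and delta_n = 1/n^2:
# sum(gamma_n) diverges and delta_n / gamma_n -> 0, so a_n tends to 0.
a = 1.0
for n in range(1, 200001):
    a = (1.0 - 1.0 / n) * a + 1.0 / n ** 2
print(a)   # a small value, decaying roughly like log(n) / n
```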

Lemma 2.9 [13]

Let $C$ be a closed convex subset of $H$, and let $\{x_n\}$ be a bounded sequence in $H$. Assume that

  (i) the weak $\omega$-limit set satisfies $\omega_w(x_n) \subset C$, where $\omega_w(x_n) = \{x : x_{n_i} \rightharpoonup x\}$;

  (ii) for each $z \in C$, $\lim_{n \to \infty}\|x_n - z\|$ exists.

Then $\{x_n\}$ is weakly convergent to a point in $C$.

3 The proposed method and some properties

In this section, we suggest and analyze our method for finding common solutions of the system of variational inequalities (1.1), the split equilibrium problem (1.9)-(1.10), and the hierarchical fixed-point problem (1.11).

Let $H_1$ and $H_2$ be two real Hilbert spaces and let $C \subseteq H_1$ and $Q \subseteq H_2$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let $A : H_1 \to H_2$ be a bounded linear operator. Assume that $F_1 : C \times C \to \mathbb{R}$ and $F_2 : Q \times Q \to \mathbb{R}$ are bifunctions satisfying Assumption 2.1 and that $F_2$ is upper semicontinuous in the first argument. Let $B_i : C \to H_1$ be a $\theta_i$-inverse strongly monotone mapping for each $i = 1, 2$, and let $S, T : C \to C$ be nonexpansive mappings such that $S^{*} \cap \Lambda \cap F(T) \neq \emptyset$. Let $F : C \to C$ be a $k$-Lipschitzian and $\eta$-strongly monotone mapping, and let $U : C \to C$ be a $\tau$-Lipschitzian mapping.

Algorithm 3.1 For an arbitrarily given $x_0 \in C$, let the iterative sequences $\{u_n\}$, $\{x_n\}$, $\{y_n\}$, and $\{z_n\}$ be generated by

\[
\begin{cases}
u_n = T_{r_n}^{F_1}\bigl( x_n + \gamma A^{*}(T_{r_n}^{F_2} - I) A x_n \bigr);\\[2pt]
z_n = P_C\bigl[ P_C[u_n - \mu_2 B_2 u_n] - \mu_1 B_1 P_C[u_n - \mu_2 B_2 u_n] \bigr];\\[2pt]
y_n = \beta_n S x_n + (1 - \beta_n) z_n;\\[2pt]
x_{n+1} = P_C\bigl[ \alpha_n \rho U(x_n) + (I - \alpha_n \mu F)\bigl(T(y_n)\bigr) \bigr], \quad n \ge 0,
\end{cases}
\]
(3.1)

where $\mu_i \in (0, 2\theta_i)$ for each $i = 1, 2$, $\{r_n\} \subset (0, 2\varsigma)$, and $\gamma \in (0, 1/L)$, where $L$ is the spectral radius of the operator $A^{*}A$ and $A^{*}$ is the adjoint of $A$. Suppose that the parameters satisfy $0 < \mu < \frac{2\eta}{k^{2}}$ and $0 \le \rho\tau < \nu$, where $\nu = 1 - \sqrt{1 - \mu(2\eta - \mu k^{2})}$. Also, $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences in $(0, 1)$ satisfying the following conditions (a schematic implementation of (3.1) under simplifying assumptions is sketched after the list):

  (a) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  (b) $\lim_{n \to \infty} (\beta_n / \alpha_n) = 0$;

  (c) $\sum_{n=1}^{\infty} |\alpha_{n-1} - \alpha_n| < \infty$ and $\sum_{n=1}^{\infty} |\beta_{n-1} - \beta_n| < \infty$;

  (d) $0 < \liminf_{n \to \infty} r_n \le \limsup_{n \to \infty} r_n < 2\varsigma$ and $\sum_{n=1}^{\infty} |r_{n-1} - r_n| < \infty$.
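The following sketch shows how iteration (3.1) can be run in a strongly simplified setting; every concrete choice here ($F_1 = F_2 = 0$ so that the resolvents reduce to $P_C$ and $P_Q$, the particular sets, operators, and parameter sequences) is an illustrative assumption and not part of the paper.

```python
# Schematic sketch of Algorithm 3.1 under strong simplifications (illustrative
# only, not the general method of the paper): F1 = F2 = 0, so T_r^{F1} = P_C and
# T_r^{F2} = P_Q; H1 = H2 = R^2; C = closed unit ball; Q = [1, 3] x [-1, 1];
# T = P_C (so F(T) = C); S = I; F = I (k = eta = 1, mu = 1, hence nu = 1);
# U is constant (tau = 0, so rho * tau = 0 < nu); alpha_n = 1/(n+1) and
# beta_n = 1/(n+1)^2 satisfy conditions (a)-(c).
import numpy as np

P_C = lambda x: x / max(1.0, np.linalg.norm(x))         # projection onto unit ball
P_Q = lambda y: np.clip(y, [1.0, -1.0], [3.0, 1.0])     # projection onto the box

A = np.array([[2.0, 0.0], [0.0, 1.0]])                  # bounded linear operator
L = np.linalg.norm(A, 2) ** 2                           # spectral radius of A^T A
gamma = 0.9 / L                                          # gamma in (0, 1/L)

B1 = B2 = lambda x: x - np.array([0.6, 0.0])            # 1-inverse strongly monotone
mu1 = mu2 = 0.8
rho, mu = 0.5, 1.0
U = lambda x: np.array([0.3, 0.0])                       # constant, 0-Lipschitzian
F = lambda x: x                                          # 1-Lipschitzian, 1-strongly monotone
S, T = (lambda x: x), P_C

x = np.zeros(2)
for n in range(1, 2001):
    alpha, beta = 1.0 / (n + 1), 1.0 / (n + 1) ** 2
    u = P_C(x + gamma * A.T @ (P_Q(A @ x) - A @ x))      # u_n
    v = P_C(u - mu2 * B2(u))
    z = P_C(v - mu1 * B1(v))                             # z_n
    y = beta * S(x) + (1 - beta) * z                     # y_n
    x = P_C(alpha * rho * U(x) + T(y) - alpha * mu * F(T(y)))   # x_{n+1}

print(x)   # approx. [0.6, 0.], a common solution for these illustrative data
```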

Remark 3.1 Our method can be viewed as an extension and improvement of some well-known results, for example the following.

  • The proposed method is an extension and improvement of the method of Wang and Xu [42] for finding the approximate element of the common set of solutions of a split equilibrium problem and a hierarchical fixed-point problem in a real Hilbert space.

  • If we take the Lipschitzian mapping $U = f$, $F = I$, $\rho = \mu = 1$, and $B_1 = B_2 = 0$, we obtain an extension and improvement of the method of Yao et al. [45] for finding an approximate element of the common set of solutions of a split equilibrium problem and a hierarchical fixed-point problem in a real Hilbert space.

  • The contractive mapping $f$ with a coefficient $\alpha \in [0, 1)$ in other papers [39, 40, 45] is extended to the case of a Lipschitzian mapping $U$ whose Lipschitz constant lies in $[0, \infty)$.

This shows that Algorithm 3.1 is quite general and unifying.

Lemma 3.1 Let $x^{*} \in S^{*} \cap \Lambda \cap F(T)$. Then $\{x_n\}$, $\{u_n\}$, $\{z_n\}$, and $\{y_n\}$ are bounded.

Proof Let $x^{*} \in S^{*} \cap \Lambda \cap F(T)$; we have $x^{*} = T_{r_n}^{F_1}(x^{*})$ and $A x^{*} = T_{r_n}^{F_2}(A x^{*})$. Then

\[
\begin{aligned}
\|u_n - x^{*}\|^{2} &= \bigl\| T_{r_n}^{F_1}\bigl(x_n + \gamma A^{*}(T_{r_n}^{F_2} - I)A x_n\bigr) - x^{*} \bigr\|^{2}\\
&= \bigl\| T_{r_n}^{F_1}\bigl(x_n + \gamma A^{*}(T_{r_n}^{F_2} - I)A x_n\bigr) - T_{r_n}^{F_1}(x^{*}) \bigr\|^{2}\\
&\le \bigl\| x_n + \gamma A^{*}(T_{r_n}^{F_2} - I)A x_n - x^{*} \bigr\|^{2}\\
&= \|x_n - x^{*}\|^{2} + \gamma^{2}\bigl\| A^{*}(T_{r_n}^{F_2} - I)A x_n \bigr\|^{2} + 2\gamma\bigl\langle x_n - x^{*},\; A^{*}(T_{r_n}^{F_2} - I)A x_n \bigr\rangle\\
&= \|x_n - x^{*}\|^{2} + \gamma^{2}\bigl\langle (T_{r_n}^{F_2} - I)A x_n,\; A A^{*}(T_{r_n}^{F_2} - I)A x_n \bigr\rangle + 2\gamma\bigl\langle x_n - x^{*},\; A^{*}(T_{r_n}^{F_2} - I)A x_n \bigr\rangle.
\end{aligned}
\]
(3.2)

From the definition of L it follows that

\[
\begin{aligned}
\gamma^{2}\bigl\langle (T_{r_n}^{F_2} - I)A x_n,\; A A^{*}(T_{r_n}^{F_2} - I)A x_n \bigr\rangle &\le L\gamma^{2}\bigl\langle (T_{r_n}^{F_2} - I)A x_n,\; (T_{r_n}^{F_2} - I)A x_n \bigr\rangle\\
&= L\gamma^{2}\bigl\| (T_{r_n}^{F_2} - I)A x_n \bigr\|^{2}.
\end{aligned}
\]
(3.3)

It follows from (1.4) that

\[
\begin{aligned}
2\gamma\bigl\langle x_n - x^{*},\; A^{*}(T_{r_n}^{F_2} - I)A x_n \bigr\rangle &= 2\gamma\bigl\langle A(x_n - x^{*}),\; (T_{r_n}^{F_2} - I)A x_n \bigr\rangle\\
&= 2\gamma\bigl\langle A(x_n - x^{*}) + (T_{r_n}^{F_2} - I)A x_n - (T_{r_n}^{F_2} - I)A x_n,\; (T_{r_n}^{F_2} - I)A x_n \bigr\rangle\\
&= 2\gamma\Bigl( \bigl\langle T_{r_n}^{F_2} A x_n - A x^{*},\; (T_{r_n}^{F_2} - I)A x_n \bigr\rangle - \bigl\| (T_{r_n}^{F_2} - I)A x_n \bigr\|^{2} \Bigr)\\
&\le 2\gamma\Bigl( \tfrac{1}{2}\bigl\| (T_{r_n}^{F_2} - I)A x_n \bigr\|^{2} - \bigl\| (T_{r_n}^{F_2} - I)A x_n \bigr\|^{2} \Bigr)\\
&= -\gamma\bigl\| (T_{r_n}^{F_2} - I)A x_n \bigr\|^{2}.
\end{aligned}
\]
(3.4)

Applying (3.4) and (3.3) to (3.2) and from the definition of γ, we get

\[\|u_n - x^{*}\|^{2} \le \|x_n - x^{*}\|^{2} + \gamma(L\gamma - 1)\bigl\| (T_{r_n}^{F_2} - I)A x_n \bigr\|^{2} \le \|x_n - x^{*}\|^{2}.\]
(3.5)

Let $x^{*} \in S^{*} \cap \Lambda \cap F(T)$; we have

\[x^{*} = P_C\bigl[ y^{*} - \mu_1 B_1 y^{*} \bigr],\]

where

\[y^{*} = P_C\bigl[ x^{*} - \mu_2 B_2 x^{*} \bigr].\]

We set $v_n = P_C[u_n - \mu_2 B_2 u_n]$. Since $B_2$ is a $\theta_2$-inverse strongly monotone mapping, it follows that

\[
\begin{aligned}
\|v_n - y^{*}\|^{2} &= \bigl\| P_C[u_n - \mu_2 B_2 u_n] - P_C[x^{*} - \mu_2 B_2 x^{*}] \bigr\|^{2}\\
&\le \bigl\| u_n - x^{*} - \mu_2(B_2 u_n - B_2 x^{*}) \bigr\|^{2}\\
&\le \|x_n - x^{*}\|^{2} - \mu_2(2\theta_2 - \mu_2)\|B_2 u_n - B_2 x^{*}\|^{2}\\
&\le \|x_n - x^{*}\|^{2}.
\end{aligned}
\]
(3.6)

Since B i is θ i -inverse strongly monotone mappings, for each i=1,2, we get

\[
\begin{aligned}
\|z_n - x^{*}\|^{2} &= \bigl\| P_C\bigl[ P_C[u_n - \mu_2 B_2 u_n] - \mu_1 B_1 P_C[u_n - \mu_2 B_2 u_n] \bigr] - P_C\bigl[ P_C[x^{*} - \mu_2 B_2 x^{*}] - \mu_1 B_1 P_C[x^{*} - \mu_2 B_2 x^{*}] \bigr] \bigr\|^{2}\\
&\le \bigl\| P_C[u_n - \mu_2 B_2 u_n] - \mu_1 B_1 P_C[u_n - \mu_2 B_2 u_n] - \bigl( P_C[x^{*} - \mu_2 B_2 x^{*}] - \mu_1 B_1 P_C[x^{*} - \mu_2 B_2 x^{*}] \bigr) \bigr\|^{2}\\
&= \bigl\| P_C[u_n - \mu_2 B_2 u_n] - P_C[x^{*} - \mu_2 B_2 x^{*}] - \mu_1\bigl( B_1 P_C[u_n - \mu_2 B_2 u_n] - B_1 P_C[x^{*} - \mu_2 B_2 x^{*}] \bigr) \bigr\|^{2}\\
&\le \bigl\| P_C[u_n - \mu_2 B_2 u_n] - P_C[x^{*} - \mu_2 B_2 x^{*}] \bigr\|^{2} - \mu_1(2\theta_1 - \mu_1)\bigl\| B_1 P_C[u_n - \mu_2 B_2 u_n] - B_1 P_C[x^{*} - \mu_2 B_2 x^{*}] \bigr\|^{2}\\
&\le \bigl\| (u_n - \mu_2 B_2 u_n) - (x^{*} - \mu_2 B_2 x^{*}) \bigr\|^{2} - \mu_1(2\theta_1 - \mu_1)\bigl\| B_1 P_C[u_n - \mu_2 B_2 u_n] - B_1 P_C[x^{*} - \mu_2 B_2 x^{*}] \bigr\|^{2}\\
&\le \|u_n - x^{*}\|^{2} - \mu_2(2\theta_2 - \mu_2)\|B_2 u_n - B_2 x^{*}\|^{2} - \mu_1(2\theta_1 - \mu_1)\|B_1 v_n - B_1 y^{*}\|^{2}\\
&\le \|u_n - x^{*}\|^{2} \le \|x_n - x^{*}\|^{2}.
\end{aligned}
\]
(3.7)

We denote $V_n = \alpha_n \rho U(x_n) + (I - \alpha_n \mu F)(T(y_n))$. Next, we prove that the sequence $\{x_n\}$ is bounded; without loss of generality we can assume that $\beta_n \le \alpha_n$ for all $n \ge 1$. From (3.1), we have

x n + 1 x = P C [ V n ] P C [ x ] α n ρ U ( x n ) + ( I α n μ F ) ( T ( y n ) ) x α n ρ U ( x n ) μ F ( x ) + ( I α n μ F ) ( T ( y n ) ) ( I α n μ F ) T ( x ) = α n ρ U ( x n ) ρ U ( x ) + ( ρ U μ F ) x + ( I α n μ F ) ( T ( y n ) ) ( I α n μ F ) T ( x ) α n ρ τ x n x + α n ( ρ U μ F ) x + ( 1 α n ν ) y n x α n ρ τ x n x + α n ( ρ U μ F ) x + ( 1 α n ν ) β n S x n + ( 1 β n ) z n x α n ρ τ x n x + α n ( ρ U μ F ) x + ( 1 α n ν ) ( β n S x n S x + β n S x x + ( 1 β n ) z n x ) α n ρ τ x n x + α n ( ρ U μ F ) x + ( 1 α n ν ) ( β n S x n S x + β n S x x + ( 1 β n ) x n x ) α n ρ τ x n x + α n ( ρ U μ F ) x + ( 1 α n ν ) ( β n x n x + β n S x x + ( 1 β n ) x n x ) = ( 1 α n ( ν ρ τ ) ) x n x + α n ( ρ U μ F ) x + ( 1 α n ν ) β n S x x ( 1 α n ( ν ρ τ ) ) x n x + α n ( ρ U μ F ) x + β n S x x ( 1 α n ( ν ρ τ ) ) x n x + α n ( ( ρ U μ F ) x + S x x ) = ( 1 α n ( ν ρ τ ) ) x n x + α n ( ν ρ τ ) ν ρ τ ( ( ρ U μ F ) x + S x x ) max { x n x , 1 ν ρ τ ( ( ρ U μ F ) x + S x x ) } ,

where the third inequality follows from Lemma 2.7.

By induction on $n$, we obtain $\|x_n - x^{*}\| \le \max\bigl\{ \|x_0 - x^{*}\|,\ \frac{1}{\nu - \rho\tau}\bigl( \|(\rho U - \mu F)x^{*}\| + \|S x^{*} - x^{*}\| \bigr) \bigr\}$ for $n \ge 0$ and $x_0 \in C$. Hence $\{x_n\}$ is bounded and, consequently, we deduce that $\{u_n\}$, $\{z_n\}$, $\{v_n\}$, $\{y_n\}$, $\{S(x_n)\}$, $\{T(x_n)\}$, $\{F(T(y_n))\}$, and $\{U(x_n)\}$ are bounded. □

Lemma 3.2 Let $x^{*} \in S^{*} \cap \Lambda \cap F(T)$ and let $\{x_n\}$ be the sequence generated by Algorithm 3.1. Then we have:

  (a) $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$;

  (b) the weak $\omega$-limit set satisfies $\omega_w(x_n) \subset F(T)$, where $\omega_w(x_n) = \{x : x_{n_i} \rightharpoonup x\}$.

Proof Since $u_n = T_{r_n}^{F_1}\bigl(x_n + \gamma A^{*}(T_{r_n}^{F_2} - I)A x_n\bigr)$ and $u_{n-1} = T_{r_{n-1}}^{F_1}\bigl(x_{n-1} + \gamma A^{*}(T_{r_{n-1}}^{F_2} - I)A x_{n-1}\bigr)$, it follows from Lemma 2.4 that

u n u n 1 x n x n 1 + γ ( A ( T r n F 2 I ) A x n A ( T r n 1 F 2 I ) A x n 1 ) + | 1 r n 1 r n | T r n F 1 ( x n + γ A ( T r n F 2 I ) A x n ) ( x n + γ A ( T r n F 2 I ) A x n ) x n x n 1 γ A A ( x n x n 1 ) + γ A T r n F 2 A x n T r n 1 F 2 A x n 1 + | 1 r n 1 r n | T r n F 1 ( x n + γ A ( T r n F 2 I ) A x n ) ( x n + γ A ( T r n F 2 I ) A x n ) ( x n x n 1 2 2 γ A ( x n x n 1 ) 2 + γ 2 A 4 x n x n 1 2 ) 1 2 + γ A ( A ( x n x n 1 ) + | 1 r n 1 r n | T r n F 2 A x n A x n ) + | 1 r n 1 r n | T r n F 1 ( x n + γ A ( T r n F 2 I ) A x n ) ( x n + γ A ( T r n F 2 I ) A x n ) ( 1 2 γ A 2 + γ 2 A 4 ) 1 2 x n x n 1 + γ A 2 x n x n 1 + γ A | 1 r n 1 r n | T r n F 2 A x n A x n + | 1 r n 1 r n | T r n F 1 ( x n + γ A ( T r n F 2 I ) A x n ) ( x n + γ A ( T r n F 2 I ) A x n ) = ( 1 γ A 2 ) x n x n 1 + γ A 2 x n x n 1 + γ A | 1 r n 1 r n | T r n F 2 A x n A x n + | 1 r n 1 r n | T r n F 1 ( x n + γ A ( T r n F 2 I ) A x n ) ( x n + γ A ( T r n F 2 I ) A x n ) = x n x n 1 + γ A | 1 r n 1 r n | T r n F 2 A x n A x n + | 1 r n 1 r n | T r n F 1 ( x n + γ A ( T r n F 2 I ) A x n ) ( x n + γ A ( T r n F 2 I ) A x n ) = x n x n 1 + | r n r n 1 r n | ( γ A σ n + χ n ) ,

where $\sigma_n := \bigl\| T_{r_n}^{F_2} A x_n - A x_n \bigr\|$ and $\chi_n := \bigl\| T_{r_n}^{F_1}\bigl(x_n + \gamma A^{*}(T_{r_n}^{F_2} - I)A x_n\bigr) - \bigl(x_n + \gamma A^{*}(T_{r_n}^{F_2} - I)A x_n\bigr) \bigr\|$. Without loss of generality, let us assume that there exists a real number $\mu$ such that $r_n > \mu > 0$ for all positive integers $n$. Then we get

\[\|u_{n-1} - u_n\| \le \|x_{n-1} - x_n\| + \frac{1}{\mu}|r_{n-1} - r_n|\bigl( \gamma\|A\|\sigma_n + \chi_n \bigr).\]
(3.8)

Next, we estimate

z n z n 1 2 = P C [ P C [ u n μ 2 B 2 u n ] μ 1 B 1 P C [ u n μ 2 B 2 u n ] ] P C [ P C [ u n 1 μ 2 B 2 u n 1 ] μ 1 B 1 P C [ u n 1 μ 2 B 2 u n 1 ] ] 2 P C [ u n μ 2 B 2 u n ] μ 1 B 1 P C [ u n μ 2 B 2 u n ] ( P C [ u n 1 μ 2 B 2 u n 1 ] μ 1 B 1 P C [ u n 1 μ 2 B 2 u n 1 ] ) 2 = P C [ u n μ 2 B 2 u n ] P C [ u n 1 μ 2 B 2 u n 1 ] μ 1 ( B 1 P C [ u n μ 2 B 2 u n ] B 1 P C [ u n 1 μ 2 B 2 u n 1 ] ) 2 P C [ u n μ 2 B 2 u n ] P C [ u n 1 μ 2 B 2 u n 1 ] 2 μ 1 ( 2 θ 1 μ 1 ) B 1 P C [ u n μ 2 B 2 u n ] B 1 P C [ u n 1 μ 2 B 2 u n 1 ] 2 P C [ u n μ 2 B 2 u n ] P C [ u n 1 μ 2 B 2 u n 1 ] 2 ( u n u n 1 ) μ 2 ( B 2 u n B 2 u n 1 ) 2 u n u n 1 2 μ 2 ( 2 θ 2 μ 2 ) B 2 u n B 2 u n 1 2 u n u n 1 2 .
(3.9)

It follows from (3.8) and (3.9) that

\[\|z_n - z_{n-1}\| \le \|x_{n-1} - x_n\| + \frac{1}{\mu}|r_{n-1} - r_n|\bigl( \gamma\|A\|\sigma_n + \chi_n \bigr).\]

From (3.1) and the above inequality, we get

\[
\begin{aligned}
\|y_n - y_{n-1}\| &= \bigl\| \beta_n S x_n + (1 - \beta_n) z_n - \bigl( \beta_{n-1} S x_{n-1} + (1 - \beta_{n-1}) z_{n-1} \bigr) \bigr\|\\
&= \bigl\| \beta_n(S x_n - S x_{n-1}) + (\beta_n - \beta_{n-1})S x_{n-1} + (1 - \beta_n)(z_n - z_{n-1}) + (\beta_{n-1} - \beta_n)z_{n-1} \bigr\|\\
&\le \beta_n\|x_n - x_{n-1}\| + (1 - \beta_n)\|z_n - z_{n-1}\| + |\beta_n - \beta_{n-1}|\bigl( \|S x_{n-1}\| + \|z_{n-1}\| \bigr)\\
&\le \beta_n\|x_n - x_{n-1}\| + (1 - \beta_n)\Bigl\{ \|x_{n-1} - x_n\| + \frac{1}{\mu}|r_{n-1} - r_n|\bigl( \gamma\|A\|\sigma_n + \chi_n \bigr) \Bigr\} + |\beta_n - \beta_{n-1}|\bigl( \|S x_{n-1}\| + \|z_{n-1}\| \bigr)\\
&\le \|x_n - x_{n-1}\| + \frac{1}{\mu}|r_{n-1} - r_n|\bigl( \gamma\|A\|\sigma_n + \chi_n \bigr) + |\beta_n - \beta_{n-1}|\bigl( \|S x_{n-1}\| + \|z_{n-1}\| \bigr).
\end{aligned}
\]
(3.10)

Next, we estimate

\[
\begin{aligned}
\|x_{n+1} - x_n\| &= \bigl\| P_C[V_n] - P_C[V_{n-1}] \bigr\|\\
&\le \bigl\| \alpha_n\rho\bigl(U(x_n) - U(x_{n-1})\bigr) + (\alpha_n - \alpha_{n-1})\rho U(x_{n-1}) + (I - \alpha_n\mu F)\bigl(T(y_n)\bigr) - (I - \alpha_n\mu F)\bigl(T(y_{n-1})\bigr)\\
&\qquad + (I - \alpha_n\mu F)\bigl(T(y_{n-1})\bigr) - (I - \alpha_{n-1}\mu F)\bigl(T(y_{n-1})\bigr) \bigr\|\\
&\le \alpha_n\rho\tau\|x_n - x_{n-1}\| + (1 - \alpha_n\nu)\|y_n - y_{n-1}\| + |\alpha_n - \alpha_{n-1}|\bigl( \rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\| \bigr),
\end{aligned}
\]
(3.11)

where the second inequality follows from Lemma 2.7. From (3.10) and (3.11), we have

\[
\begin{aligned}
\|x_{n+1} - x_n\| &\le \alpha_n\rho\tau\|x_n - x_{n-1}\| + (1 - \alpha_n\nu)\Bigl\{ \|x_n - x_{n-1}\| + \frac{1}{\mu}|r_{n-1} - r_n|\bigl( \gamma\|A\|\sigma_n + \chi_n \bigr) + |\beta_n - \beta_{n-1}|\bigl( \|S x_{n-1}\| + \|z_{n-1}\| \bigr) \Bigr\}\\
&\quad + |\alpha_n - \alpha_{n-1}|\bigl( \rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\| \bigr)\\
&\le \bigl( 1 - (\nu - \rho\tau)\alpha_n \bigr)\|x_n - x_{n-1}\| + \frac{1}{\mu}|r_{n-1} - r_n|\bigl( \gamma\|A\|\sigma_n + \chi_n \bigr) + |\beta_n - \beta_{n-1}|\bigl( \|S x_{n-1}\| + \|z_{n-1}\| \bigr)\\
&\quad + |\alpha_n - \alpha_{n-1}|\bigl( \rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\| \bigr)\\
&\le \bigl( 1 - (\nu - \rho\tau)\alpha_n \bigr)\|x_n - x_{n-1}\| + M\Bigl( \frac{1}{\mu}|r_{n-1} - r_n| + |\beta_n - \beta_{n-1}| + |\alpha_n - \alpha_{n-1}| \Bigr).
\end{aligned}
\]
(3.12)

Here

\[M = \max\Bigl\{ \sup_{n \ge 1}\bigl( \gamma\|A\|\sigma_n + \chi_n \bigr),\ \sup_{n \ge 1}\bigl( \|S x_{n-1}\| + \|z_{n-1}\| \bigr),\ \sup_{n \ge 1}\bigl( \rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\| \bigr) \Bigr\}.\]

It follows by conditions (a)-(d) of Algorithm 3.1 and Lemma 2.8 that

\[\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0.\]

Next, we show that $\lim_{n \to \infty}\|u_n - x_n\| = 0$. Since $x^{*} \in S^{*} \cap \Lambda \cap F(T)$, by using (3.2), (3.5), and (3.7), we obtain

x n + 1 x 2 = P C [ V n ] x , x n + 1 x = P C [ V n ] V n , P C [ V n ] x + V n x , x n + 1 x α n ( ρ U ( x n ) μ F ( x ) ) + ( I α n μ F ) ( T ( y n ) ) ( I α n μ F ) ( T ( x ) ) , x n + 1 x = α n ρ ( U ( x n ) U ( x ) ) , x n + 1 x + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( I α n μ F ) ( T ( y n ) ) ( I α n μ F ) ( T ( x ) ) , x n + 1 x α n ρ τ x n x x n + 1 x + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) y n x x n + 1 x α n ρ τ 2 ( x n x 2 + x n + 1 x 2 ) + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) 2 ( y n x 2 + x n + 1 x 2 ) ( 1 α n ( ν ρ τ ) ) 2 x n + 1 x 2 + α n ρ τ 2 x n x 2 + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) 2 ( β n S x n x 2 + ( 1 β n ) z n x 2 ) ( 1 α n ( ν ρ τ ) ) 2 x n + 1 x 2 + α n ρ τ 2 x n x 2 + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 2 S x n x 2 + ( 1 α n ν ) ( 1 β n ) 2 { u n x 2 μ 2 ( 2 θ 2 μ 2 ) B 2 u n B 2 x 2 μ 1 ( 2 θ 1 μ 1 ) B 1 v n B 1 y 2 } ( 1 α n ( ν ρ τ ) ) 2 x n + 1 x 2 + α n ρ τ 2 x n x 2 + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 2 S x n x 2 + ( 1 α n ν ) ( 1 β n ) 2 { x n x 2 + γ ( L γ 1 ) ( T r n F 2 I ) A x n 2 μ 2 ( 2 θ 2 μ 2 ) B 2 u n B 2 x 2 μ 1 ( 2 θ 1 μ 1 ) B 1 v n B 1 y 2 } ,
(3.13)

which implies that

x n + 1 x 2 α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 1 + α n ( ν ρ τ ) S x n x 2 + ( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) { x n x 2 + γ ( L γ 1 ) ( T r n F 2 I ) A x n 2 μ 2 ( 2 θ 2 μ 2 ) B 2 u n B 2 x 2 μ 1 ( 2 θ 1 μ 1 ) B 1 v n B 1 y 2 } α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + x n x 2 + ( 1 α n ν ) β n 1 + α n ( ν ρ τ ) S x n x 2 ( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) { γ ( 1 L γ ) ( T r n F 2 I ) A x n 2 + μ 2 ( 2 θ 2 μ 2 ) B 2 u n B 2 x 2 + μ 1 ( 2 θ 1 μ 1 ) B 1 v n B 1 y 2 } .

Then from the above inequality, we get

( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) { γ ( 1 L γ ) ( T r n F 2 I ) A x n 2 + μ 2 ( 2 θ 2 μ 2 ) B 2 u n B 2 x 2 + μ 1 ( 2 θ 1 μ 1 ) B 1 v n B 1 y 2 } α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + β n S x n x 2 + x n x 2 x n + 1 x 2 α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + β n S x n x 2 + ( x n x + x n + 1 x ) x n + 1 x n .

Since $\gamma(1 - L\gamma) > 0$, $2\theta_1 - \mu_1 > 0$, $2\theta_2 - \mu_2 > 0$, $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$, $\alpha_n \to 0$, and $\beta_n \to 0$, we obtain

\[\lim_{n \to \infty}\|B_2 u_n - B_2 x^{*}\| = 0, \qquad \lim_{n \to \infty}\|B_1 v_n - B_1 y^{*}\| = 0\]

and

\[\lim_{n \to \infty}\bigl\| (T_{r_n}^{F_2} - I)A x_n \bigr\| = 0.\]
(3.14)

Since $T_{r_n}^{F_1}$ is firmly nonexpansive, we have

u n x 2 = T r n F 1 ( x n + γ A ( T r n F 2 I ) A x n ) T r n F 1 ( x ) 2 u n x , x n + γ A ( T r n F 2 I ) A x n x = 1 2 { u n x 2 + x n + γ A ( T r n F 2 I ) A x n x 2 u n x [ x n + γ A ( T r n F 2 I ) A x n x ] 2 } = 1 2 { u n x 2 + x n + γ A ( T r n F 2 I ) A x n x 2 u n x n γ A ( T r n F 2 I ) A x n 2 } 1 2 { u n x 2 + x n x 2 u n x n γ A ( T r n F 2 I ) A x n 2 } = 1 2 { u n x 2 + x n x 2 [ u n x n 2 + γ 2 A ( T r n F 2 I ) A x n 2 2 γ u n x n , A ( T r n F 2 I ) A x n ] } .

Hence, we get

\[\|u_n - x^{*}\|^{2} \le \|x_n - x^{*}\|^{2} - \|u_n - x_n\|^{2} + 2\gamma\|A u_n - A x_n\|\,\bigl\| (T_{r_n}^{F_2} - I)A x_n \bigr\|.\]
(3.15)

From (3.13), (3.7), and the above inequality, we have

x n + 1 x 2 ( 1 α n ( ν ρ τ ) ) 2 x n + 1 x 2 + α n ρ τ 2 x n x 2 + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) 2 ( β n S x n x 2 + ( 1 β n ) z n x 2 ) ( 1 α n ( ν ρ τ ) ) 2 x n + 1 x 2 + α n ρ τ 2 x n x 2 + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) 2 ( β n S x n x 2 + ( 1 β n ) u n x 2 ) ( 1 α n ( ν ρ τ ) ) 2 x n + 1 x 2 + α n ρ τ 2 x n x 2 + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) 2 { β n S x n x 2 + ( 1 β n ) ( x n x 2 u n x n 2 + 2 γ A u n A x n ( T r n F 2 I ) A x n ) } ,

which implies that

x n + 1 x 2 α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 1 + α n ( ν ρ τ ) S x n x 2 + ( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) { x n x 2 u n x n 2 + 2 γ A u n A x n ( T r n F 2 I ) A x n } α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 1 + α n ( ν ρ τ ) S x n x 2 + x n x 2 + ( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) { u n x n 2 + 2 γ A u n A x n ( T r n F 2 I ) A x n } .

Hence

( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) u n x n 2 α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 1 + α n ( ν ρ τ ) S x n x 2 + 2 ( 1 α n ν ) ( 1 β n ) γ 1 + α n ( ν ρ τ ) A u n A x n ( T r n F 2 I ) A x n + x n x 2 x n + 1 x 2 α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 1 + α n ( ν ρ τ ) S x n x 2 + 2 ( 1 α n ν ) ( 1 β n ) γ 1 + α n ( ν ρ τ ) A u n A x n ( T r n F 2 I ) A x n + ( x n x + x n + 1 x ) x n + 1 x n .

Since $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$, $\alpha_n \to 0$, $\beta_n \to 0$, and $\lim_{n \to \infty}\|(T_{r_n}^{F_2} - I)A x_n\| = 0$, we obtain

\[\lim_{n \to \infty}\|u_n - x_n\| = 0.\]
(3.16)

From (2.2), we get

v n y 2 = P C [ u n μ 2 B 2 u n ] P C [ x μ 2 B 2 x ] 2 v n y , ( u n μ 2 B 2 u n ) ( x μ 2 B 2 x ) = 1 2 { v n y 2 + u n x μ 2 ( B 2 u n B 2 x ) 2 u n x μ 2 ( B 2 u n B 2 x ) ( v n y ) 2 } 1 2 { v n y 2 + u n x 2 μ 2 ( 2 θ 2 μ 2 ) B 2 u n B 2 x 2 u n x μ 2 ( B 2 u n B 2 x ) ( v n y ) 2 } 1 2 { v n y 2 + u n x 2 u n v n μ 2 ( B 2 u n B 2 x ) ( x y ) 2 } = 1 2 { v n y 2 + u n x 2 u n v n ( x y ) 2 + 2 μ 2 u n v n ( x y ) , B 2 u n B 2 x μ 2 2 B 2 u n B 2 x 2 } 1 2 { v n y 2 + u n x 2 u n v n ( x y ) 2 + 2 μ 2 u n v n ( x y ) B 2 u n B 2 x } .

Hence

\[
\begin{aligned}
\|v_n - y^{*}\|^{2} &\le \|u_n - x^{*}\|^{2} - \bigl\| u_n - v_n - (x^{*} - y^{*}) \bigr\|^{2} + 2\mu_2\bigl\| u_n - v_n - (x^{*} - y^{*}) \bigr\|\,\|B_2 u_n - B_2 x^{*}\|\\
&\le \|x_n - x^{*}\|^{2} - \|u_n - x_n\|^{2} + 2\gamma\|A u_n - A x_n\|\,\bigl\| (T_{r_n}^{F_2} - I)A x_n \bigr\|\\
&\quad - \bigl\| u_n - v_n - (x^{*} - y^{*}) \bigr\|^{2} + 2\mu_2\bigl\| u_n - v_n - (x^{*} - y^{*}) \bigr\|\,\|B_2 u_n - B_2 x^{*}\|,
\end{aligned}
\]
(3.17)

where the last inequality follows from (3.15). On the other hand, from (3.1) and (2.2), we obtain

z n x 2 = P C [ v n μ 1 B 1 v n ] P C [ y μ 1 B 1 y ] 2 z n x , ( v n μ 1 B 1 v n ) ( y μ 1 B 1 y ) = 1 2 { z n x 2 + v n y μ 1 ( B 1 v n B 1 y ) 2 v n y μ 1 ( B 1 v n B 1 y ) ( z n x ) 2 } = 1 2 { z n x 2 + v n y 2 2 μ 1 v n y , B 1 v n B 1 y + μ 1 2 B 1 v n B 1 y 2 v n y μ 1 ( B 1 v n B 1 y ) ( z n x ) 2 } 1 2 { z n x 2 + v n y 2 μ 1 ( 2 θ 1 μ 1 ) B 1 v n B 1 y 2 v n y μ 1 ( B 1 v n B 1 y ) ( z n x ) 2 } 1 2 { z n x 2 + v n y 2 v n z n μ 1 ( B 1 v n B 1 y ) + ( x y ) 2 } 1 2 { z n x 2 + v n y 2 v n z n + ( x y ) 2 + 2 μ 1 v n z n + ( x y ) , B 1 v n B 1 y } 1 2 { z n x 2 + v n y 2 v n z n + ( x y ) 2 + 2 μ 1 v n z n + ( x y ) B 1 v n B 1 y } .

Hence

z n x 2 v n y 2 v n z n + ( x y ) 2 + 2 μ 1 v n z n + ( x y ) B 1 v n B 1 y x n x 2 u n x n 2 + 2 γ A u n A x n ( T r n F 2 I ) A x n u n v n ( x y ) 2 + 2 μ 2 u n v n ( x y ) B 2 u n B 2 x v n z n + ( x y ) 2 + 2 μ 1 v n z n + ( x y ) B 1 v n B 1 y ,

where the last inequality follows from (3.17). From (3.13) and the above inequality, we have

x n + 1 x 2 ( 1 α n ( ν ρ τ ) ) 2 x n + 1 x 2 + α n ρ τ 2 x n x 2 + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) 2 ( β n S x n x 2 + ( 1 β n ) z n x 2 ) ( 1 α n ( ν ρ τ ) ) 2 x n + 1 x 2 + α n ρ τ 2 x n x 2 + α n ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) 2 { β n S x n x 2 + ( 1 β n ) ( x n x 2 u n x n 2 + 2 γ A u n A x n ( T r n F 2 I ) A x n ) + ( 1 β n ) ( u n v n ( x y ) 2 + 2 μ 2 u n v n ( x y ) B 2 u n B 2 x ) + ( 1 β n ) ( v n z n + ( x y ) 2 + 2 μ 1 v n z n + ( x y ) B 1 v n B 1 y ) } ,

which implies that

x n + 1 x 2 α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 1 + α n ( ν ρ τ ) S x n x 2 + ( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) { x n x 2 u n x n 2 + 2 γ A u n A x n ( T r n F 2 I ) A x n } + ( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) ( u n v n ( x y ) 2 + 2 μ 2 u n v n ( x y ) B 2 u n B 2 x ) + ( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) ( v n z n + ( x y ) 2 + 2 μ 1 v n z n + ( x y ) B 1 v n B 1 y ) α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 1 + α n ( ν ρ τ ) S x n x 2 + x n x 2 + 2 γ A u n A x n ( T r n F 2 I ) A x n + 2 μ 2 u n v n ( x y ) B 2 u n B 2 x + 2 μ 1 v n z n + ( x y ) B 1 v n B 1 y ( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) ( u n x n 2 + u n v n ( x y ) 2 + v n z n + ( x y ) 2 ) .

Hence

( 1 α n ν ) ( 1 β n ) 1 + α n ( ν ρ τ ) ( u n x n 2 + u n v n ( x y ) 2 + v n z n + ( x y ) 2 ) α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 1 + α n ( ν ρ τ ) S x n x 2 + x n x 2 x n + 1 x 2 + 2 γ A u n A x n ( T r n F 2 I ) A x n + 2 μ 2 u n v n ( x y ) B 2 u n B 2 x + 2 μ 1 v n z n + ( x y ) B 1 v n B 1 y = α n ρ τ 1 + α n ( ν ρ τ ) x n x 2 + 2 α n 1 + α n ( ν ρ τ ) ρ U ( x ) μ F ( x ) , x n + 1 x + ( 1 α n ν ) β n 1 + α n ( ν ρ τ ) S x n x 2 + ( x n x + x n + 1 x ) x n + 1 x n + 2 γ A u n A x n ( T r n F 2 I ) A x n + 2 μ 2 u n v n ( x y ) B 2 u n B 2 x + 2 μ 1 v n z n + ( x y ) B 1 v n B 1 y .

Since $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$, $\alpha_n \to 0$, $\beta_n \to 0$, $\lim_{n \to \infty}\|(T_{r_n}^{F_2} - I)A x_n\| = 0$, $\lim_{n \to \infty}\|B_2 u_n - B_2 x^{*}\| = 0$, and $\lim_{n \to \infty}\|B_1 v_n - B_1 y^{*}\| = 0$, we obtain

\[\lim_{n \to \infty}\bigl\| u_n - v_n - (x^{*} - y^{*}) \bigr\| = 0 \quad\text{and}\quad \lim_{n \to \infty}\bigl\| v_n - z_n + (x^{*} - y^{*}) \bigr\| = 0.\]

Since

\[\|u_n - z_n\| \le \bigl\| u_n - v_n - (x^{*} - y^{*}) \bigr\| + \bigl\| v_n - z_n + (x^{*} - y^{*}) \bigr\|,\]

we get

\[\lim_{n \to \infty}\|u_n - z_n\| = 0.\]
(3.18)

It follows from (3.16) and (3.18) that

\[\lim_{n \to \infty}\|x_n - z_n\| = 0.\]
(3.19)

Since $T(x_n) \in C$, we have

\[
\begin{aligned}
\|x_n - T(x_n)\| &\le \|x_n - x_{n+1}\| + \|x_{n+1} - T(x_n)\|\\
&= \|x_n - x_{n+1}\| + \bigl\| P_C[V_n] - P_C[T(x_n)] \bigr\|\\
&\le \|x_n - x_{n+1}\| + \bigl\| \alpha_n\bigl( \rho U(x_n) - \mu F(T(y_n)) \bigr) + T(y_n) - T(x_n) \bigr\|\\
&\le \|x_n - x_{n+1}\| + \alpha_n\bigl\| \rho U(x_n) - \mu F(T(y_n)) \bigr\| + \|y_n - x_n\|\\
&\le \|x_n - x_{n+1}\| + \alpha_n\bigl\| \rho U(x_n) - \mu F(T(y_n)) \bigr\| + \bigl\| \beta_n S x_n + (1 - \beta_n) z_n - x_n \bigr\|\\
&\le \|x_n - x_{n+1}\| + \alpha_n\bigl\| \rho U(x_n) - \mu F(T(y_n)) \bigr\| + \beta_n\|S x_n - x_n\| + (1 - \beta_n)\|z_n - x_n\|.
\end{aligned}
\]

Since $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$, $\alpha_n \to 0$, $\beta_n \to 0$, $\|\rho U(x_n) - \mu F(T(y_n))\|$ and $\|S x_n - x_n\|$ are bounded, and $\lim_{n \to \infty}\|x_n - z_n\| = 0$, we obtain

\[\lim_{n \to \infty}\|x_n - T(x_n)\| = 0.\]

Since $\{x_n\}$ is bounded, without loss of generality we can assume that $x_n \rightharpoonup x \in C$. It follows from Lemma 2.5 that $x \in F(T)$. Therefore $\omega_w(x_n) \subset F(T)$. □

Theorem 3.1 The sequence $\{x_n\}$ generated by Algorithm 3.1 converges strongly to $z$, which is the unique solution of the variational inequality

\[\bigl\langle \rho U(z) - \mu F(z),\; x - z \bigr\rangle \le 0, \quad \forall x \in S^{*} \cap \Lambda \cap F(T).\]
(3.20)

Proof Since $\{x_n\}$ is bounded, we may assume that $x_n \rightharpoonup w$, and from Lemma 3.2 we have $w \in F(T)$. Next, we show that $w \in EP(F_1)$. Since $u_n = T_{r_n}^{F_1}\bigl(x_n + \gamma A^{*}(T_{r_n}^{F_2} - I)A x_n\bigr)$, we have

\[F_1(u_n, y) + \frac{1}{r_n}\langle y - u_n,\; u_n - x_n \rangle - \frac{1}{r_n}\bigl\langle y - u_n,\; \gamma A^{*}(T_{r_n}^{F_2} - I)A x_n \bigr\rangle \ge 0, \quad \forall y \in C.\]

It follows from the monotonicity of $F_1$ that

\[-\frac{1}{r_n}\bigl\langle y - u_n,\; \gamma A^{*}(T_{r_n}^{F_2} - I)A x_n \bigr\rangle + \frac{1}{r_n}\langle y - u_n,\; u_n - x_n \rangle \ge F_1(y, u_n), \quad \forall y \in C,\]

and

\[-\frac{1}{r_{n_k}}\bigl\langle y - u_{n_k},\; \gamma A^{*}(T_{r_{n_k}}^{F_2} - I)A x_{n_k} \bigr\rangle + \Bigl\langle y - u_{n_k},\; \frac{u_{n_k} - x_{n_k}}{r_{n_k}} \Bigr\rangle \ge F_1(y, u_{n_k}), \quad \forall y \in C.\]
(3.21)

Since $\lim_{n \to \infty}\|u_n - x_n\| = 0$, $\lim_{n \to \infty}\|(T_{r_n}^{F_2} - I)A x_n\| = 0$, and $x_n \rightharpoonup w$, it is easy to observe that $u_{n_k} \rightharpoonup w$. It follows from Assumption 2.1(iv) that $F_1(y, w) \le 0$, $\forall y \in C$.

For any $0 < t \le 1$ and $y \in C$, let $y_t = t y + (1 - t)w$; then $y_t \in C$. From Assumption 2.1(i) and (iv), we have

\[0 = F_1(y_t, y_t) \le t F_1(y_t, y) + (1 - t)F_1(y_t, w) \le t F_1(y_t, y).\]

Therefore $F_1(y_t, y) \ge 0$. From Assumption 2.1(iii), letting $t \downarrow 0$, we have $F_1(w, y) \ge 0$, which implies that $w \in EP(F_1)$.

Next, we show that $A w \in EP(F_2)$. Since $\{x_n\}$ is bounded and $x_n \rightharpoonup w$, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup w$ and, since $A$ is a bounded linear operator, $A x_{n_k} \rightharpoonup A w$. Now we set $v_{n_k} = A x_{n_k} - T_{r_{n_k}}^{F_2} A x_{n_k}$. It follows from (3.14) that $\lim_{k \to \infty} v_{n_k} = 0$ and $A x_{n_k} - v_{n_k} = T_{r_{n_k}}^{F_2} A x_{n_k}$. Therefore, from the definition of $T_{r_{n_k}}^{F_2}$, we have

\[F_2\bigl(A x_{n_k} - v_{n_k},\; y\bigr) + \frac{1}{r_{n_k}}\bigl\langle y - (A x_{n_k} - v_{n_k}),\; (A x_{n_k} - v_{n_k}) - A x_{n_k} \bigr\rangle \ge 0, \quad \forall y \in Q.\]

Since $F_2$ is upper semicontinuous in the first argument, taking $\limsup$ in the above inequality as $k \to \infty$ and using Assumption 2.1(iv), we obtain

\[F_2(A w, y) \ge 0, \quad \forall y \in Q,\]

which implies that $A w \in EP(F_2)$ and hence $w \in \Lambda$.

Next, we show that $w \in S^{*}$. Since $\lim_{n \to \infty}\|x_n - z_n\| = 0$ and there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup w$, it is easy to observe that $z_{n_k} \rightharpoonup w$. For any $x, y \in C$, using (2.5), we have

Q ( x ) Q ( y ) 2 = P C [ P C [ x μ 2 B 2 x ] μ 1 B 1 P C [ x μ 2 B 2 x ] ] P C [ P C [ y μ 2 B 2 y ] μ 1 B 1 P C [ y μ 2 B 2 y ] ] 2 ( P C [ x μ 2 B 2 x ] P C [ y μ 2 B 2 y ] ) μ 1 ( B 1 P C [ x μ 2 B 2 x ] B 1 P C [ y μ 2 B 2 y ] ) 2 P C [ x μ 2 B 2 x ] P C [ y μ 2 B 2 y ] 2 μ 1 ( 2 θ 1 μ 1 ) B 1 P C [ x μ 2 B 2 x ] B 1 P C [ y μ 2 B 2 y ] 2 P C [ x μ 2 B 2 x ] P C [ y μ 2 B 2 y ] 2 ( x μ 2 B 2 x ) ( y μ 2 B 2 y ) 2 x y 2 μ 2 ( 2 θ 2 μ 2 ) B 2 x B 2 y 2 x y 2 .

This implies that $Q : C \to C$ is nonexpansive. On the other hand,

\[\|z_n - Q(z_n)\|^{2} = \bigl\| P_C\bigl[ P_C[u_n - \mu_2 B_2 u_n] - \mu_1 B_1 P_C[u_n - \mu_2 B_2 u_n] \bigr] - Q(z_n) \bigr\|^{2} = \|Q(u_n) - Q(z_n)\|^{2} \le \|u_n - z_n\|^{2}.\]

Since $\lim_{n \to \infty}\|u_n - z_n\| = 0$ (see (3.18)), we have $\lim_{n \to \infty}\|z_n - Q(z_n)\| = 0$. It follows from Lemma 2.5 that $w = Q(w)$, which by Lemma 2.2 implies that $w \in S^{*}$.

Thus we have

\[w \in S^{*} \cap \Lambda \cap F(T).\]

Observe that the constants satisfy $0 \le \rho\tau < \nu$ and

\[
k \ge \eta \;\Longrightarrow\; k^{2} \ge \eta^{2} \;\Longrightarrow\; 1 - 2\mu\eta + \mu^{2}k^{2} \ge 1 - 2\mu\eta + \mu^{2}\eta^{2} \;\Longrightarrow\; \sqrt{1 - \mu(2\eta - \mu k^{2})} \ge 1 - \mu\eta \;\Longrightarrow\; \mu\eta \ge 1 - \sqrt{1 - \mu(2\eta - \mu k^{2})} \;\Longrightarrow\; \mu\eta \ge \nu;
\]

therefore, by Lemma 2.6, the operator $\mu F - \rho U$ is $(\mu\eta - \rho\tau)$-strongly monotone, so the variational inequality (3.20) has a unique solution, which we denote by $z \in S^{*} \cap \Lambda \cap F(T)$.

Next, we claim that $\limsup_{n \to \infty}\bigl\langle \rho U(z) - \mu F(z),\; x_n - z \bigr\rangle \le 0$. Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that

\[
\limsup_{n \to \infty}\bigl\langle \rho U(z) - \mu F(z),\; x_n - z \bigr\rangle = \limsup_{k \to \infty}\bigl\langle \rho U(z) - \mu F(z),\; x_{n_k} - z \bigr\rangle = \bigl\langle \rho U(z) - \mu F(z),\; w - z \bigr\rangle \le 0.
\]

Next, we show that $x_n \to z$. We have

x n + 1 z 2 = P C [ V n ] z , x n + 1 z = P C [ V n ] V n , P C [ V n ] z + V n z , x n + 1 z α n ( ρ U ( x n ) μ F ( z ) ) + ( I α n μ F ) ( T ( y n ) ) ( I α n μ F ) ( T ( z ) ) , x n + 1 z α n ρ ( U ( x n ) U ( z ) ) , x n + 1 z + α n ρ U ( z ) μ F ( z ) , x n + 1 z + ( I α n μ F ) ( T ( y n ) ) ( I α n μ F ) ( T ( z ) ) , x n + 1 z α n ρ τ x n z x n + 1 z + α n ρ U ( z ) μ F ( z ) , x n + 1 z + ( 1 α n ν ) y n z x n + 1 z α n ρ τ x n z x n + 1 z + α n ρ U ( z ) μ F ( z ) , x n + 1 z + ( 1 α n ν ) { β n S x n S z + β n S z z + ( 1 β n ) z n z } x n + 1 z α n ρ τ x n z x n + 1 z + α n ρ U ( z ) μ F ( z ) , x n + 1 z + ( 1 α n ν ) { β n x n z + β n S z z + ( 1 β n ) x n z } x n + 1 z = ( 1 α n ( ν ρ τ ) ) x n z x n + 1 z + α n ρ U ( z ) μ F ( z ) , x n + 1 z + ( 1 α n ν ) β n S z z x n + 1 z 1 α n ( ν ρ τ ) 2 ( x n z 2 + x n + 1 z 2 ) + α n ρ U ( z ) μ F ( z ) , x n + 1 z + ( 1 α n ν ) β n S z z x n + 1 z ,

which implies that

\[
\begin{aligned}
\|x_{n+1} - z\|^{2} &\le \frac{1 - \alpha_n(\nu - \rho\tau)}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - z\|^{2} + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\bigl\langle \rho U(z) - \mu F(z),\; x_{n+1} - z \bigr\rangle + \frac{2(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|S z - z\|\,\|x_{n+1} - z\|\\
&\le \bigl( 1 - \alpha_n(\nu - \rho\tau) \bigr)\|x_n - z\|^{2} + \frac{2\alpha_n(\nu - \rho\tau)}{1 + \alpha_n(\nu - \rho\tau)}\Bigl\{ \frac{1}{\nu - \rho\tau}\bigl\langle \rho U(z) - \mu F(z),\; x_{n+1} - z \bigr\rangle + \frac{(1 - \alpha_n\nu)\beta_n}{\alpha_n(\nu - \rho\tau)}\|S z - z\|\,\|x_{n+1} - z\| \Bigr\}.
\end{aligned}
\]

Let $\gamma_n = \alpha_n(\nu - \rho\tau)$ and
\[\delta_n = \frac{2\alpha_n(\nu - \rho\tau)}{1 + \alpha_n(\nu - \rho\tau)}\Bigl\{ \frac{1}{\nu - \rho\tau}\bigl\langle \rho U(z) - \mu F(z),\; x_{n+1} - z \bigr\rangle + \frac{(1 - \alpha_n\nu)\beta_n}{\alpha_n(\nu - \rho\tau)}\|S z - z\|\,\|x_{n+1} - z\| \Bigr\}.\]

We have

\[\sum_{n=1}^{\infty} \alpha_n = \infty\]

and

\[\limsup_{n \to \infty}\Bigl\{ \frac{1}{\nu - \rho\tau}\bigl\langle \rho U(z) - \mu F(z),\; x_{n+1} - z \bigr\rangle + \frac{(1 - \alpha_n\nu)\beta_n}{\alpha_n(\nu - \rho\tau)}\|S z - z\|\,\|x_{n+1} - z\| \Bigr\} \le 0.\]

It follows that

\[\sum_{n=1}^{\infty} \gamma_n = \infty \quad\text{and}\quad \limsup_{n \to \infty} \frac{\delta_n}{\gamma_n} \le 0.\]

Thus all the conditions of Lemma 2.8 are satisfied. Hence we deduce that $x_n \to z$. This completes the proof. □

Remark 3.2 In the hierarchical fixed-point problem (1.11), if $S = I - (\mu F - \rho U)$, then we obtain the variational inequality (3.20). In (3.20), if $U = 0$, then we obtain the variational inequality $\langle F(z),\; x - z \rangle \ge 0$, $\forall x \in S^{*} \cap \Lambda \cap F(T)$, which is the variational inequality studied in [39], here posed over the common set of solutions of a system of variational inequalities, a split equilibrium problem, and a hierarchical fixed-point problem.

4 Conclusions

In this paper, we suggest and analyze an iterative method for finding an approximate element of the common set of solutions of (1.1), (1.9)-(1.10), and (1.11) in a real Hilbert space, which can be viewed as a refinement and improvement of some existing methods for solving a system of variational inequalities, a split equilibrium problem, and a hierarchical fixed-point problem. Some existing methods (e.g. [20, 26, 34, 40, 45]) can be viewed as special cases of Algorithm 3.1. Therefore, the new algorithm is expected to be widely applicable.

References

  1. Ceng LC, Wang CY, Yao JC: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67: 375–390. 10.1007/s00186-007-0207-4

  2. Ceng LC, Al-Mezel SA, Ansari QH: Implicit and explicit iterative methods for systems of variational inequalities and zeros of accretive operators. Abstr. Appl. Anal. 2013, 2013: Article ID 631382

  3. Ansari QH, Yao JC: Systems of generalized variational inequalities and their applications. Appl. Anal. 2000, 76: 203–217. 10.1080/00036810008840877

  4. Aubin JP: Mathematical Methods of Game and Economic Theory. North-Holland, Amsterdam; 1979.

  5. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. I. Springer, New York; 2003.

  6. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. II. Springer, New York; 2003.

  7. Ansari QH, Lalitha CS, Mehta M: Generalized Convexity, Nonsmooth Variational Inequalities and Nonsmooth Optimization. CRC Press, Boca Raton; 2013.

  8. Ceng LC, Ansari QH, Yao JC: Iterative methods for triple hierarchical variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 151: 489–512. 10.1007/s10957-011-9882-7

  9. Marino G, Muglia L, Yao Y: Viscosity methods for common solutions of equilibrium and variational inequality problems via multi-step iterative algorithms and common fixed points. Nonlinear Anal. 2012, 75: 1787–1798. 10.1016/j.na.2011.09.019

  10. Latif A, Ceng LC, Ansari QH: Multi-step hybrid viscosity method for systems of variational inequalities defined over sets of solutions of equilibrium problem and fixed point problems. Fixed Point Theory Appl. 2012, 2012: Article ID 186

  11. Verma RU: Projection methods, algorithms, and a new system of nonlinear variational inequalities. Comput. Math. Appl. 2001, 41: 1025–1031. 10.1016/S0898-1221(00)00336-9

  12. Verma RU: General convergence analysis for two-step projection methods and applications to variational problems. Appl. Math. Lett. 2005, 18: 1286–1292. 10.1016/j.aml.2005.02.026

  13. Acedo GL, Xu HK: Iterative methods for strictly pseudo-contractions in Hilbert space. Nonlinear Anal. 2007, 67: 2258–2271. 10.1016/j.na.2006.08.036

  14. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

  15. Bnouhachem A, Noor MA: An iterative method for approximating the common solutions of a variational inequality, a mixed equilibrium problem and a hierarchical fixed point problem. J. Inequal. Appl. 2013, 2013: Article ID 490

  16. Bnouhachem A: Algorithms of common solutions for a variational inequality, a split equilibrium problem and a hierarchical fixed point problem. Fixed Point Theory Appl. 2013, 2013: Article ID 278

  17. Bnouhachem A: Strong convergence algorithm for split equilibrium problems and hierarchical fixed point problems. Sci. World J. (in press)

  18. Byrne C, Censor Y, Gibali A, Reich S: Weak and strong convergence of algorithms for the split common null point problem. arXiv:1108.5953

  19. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692

  20. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005

  21. Ceng LC, Ansari QH, Yao JC: Mann type iterative methods for finding a common solution of split feasibility and fixed point problems. Positivity 2012, 16(3): 471–495. 10.1007/s11117-012-0174-8

  22. Ceng LC, Ansari QH, Yao JC: An extragradient method for split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64: 633–642. 10.1016/j.camwa.2011.12.074

  23. Censor Y, Gibali A, Reich S: Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59(2): 301–323. 10.1007/s11075-011-9490-5

  24. Chang SS, Lee HWJ, Chan CK: Generalized system for relaxed cocoercive variational inequalities in Hilbert spaces. Appl. Math. Lett. 2007, 20: 329–334. 10.1016/j.aml.2006.04.017

  25. Chang SS, Lee HWJ, Chan CK: A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization. Nonlinear Anal. 2009, 70: 3307–3319. 10.1016/j.na.2008.04.035

  26. Cianciaruso F, Marino G, Muglia L, Yao Y: On a two-steps algorithm for hierarchical fixed point problems and variational inequalities. J. Inequal. Appl. 2009, 2009: Article ID 208692

  27. Cianciaruso F, Marino G, Muglia L, Yao Y: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010, 2010: Article ID 383740

  28. Combettes PL, Hirstoaga SA: Equilibrium programming using proximal like algorithms. Math. Program. 1997, 78: 29–41.

  29. Crombez G: A geometrical look at iterative methods for operators with fixed points. Numer. Funct. Anal. Optim. 2005, 26: 157–175. 10.1081/NFA-200063882

  30. Crombez G: A hierarchical presentation of operators with fixed points on Hilbert spaces. Numer. Funct. Anal. Optim. 2006, 27: 259–277. 10.1080/01630560600569957

  31. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.

  32. Gu G, Wang S, Cho YJ: Strong convergence algorithms for hierarchical fixed points problems and variational inequalities. J. Appl. Math. 2011, 2011: Article ID 164978

  33. Korpelevich GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 1976, 12(4): 747–756.

  34. Mainge PE, Moudafi A: Strong convergence of an iterative method for hierarchical fixed-point problems. Pac. J. Optim. 2007, 3(3): 529–538.

  35. Marino G, Xu HK: Explicit hierarchical fixed point approach to variational inequalities. J. Optim. Theory Appl. 2011, 149(1): 61–78. 10.1007/s10957-010-9775-1

  36. Moudafi A: Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150: 275–283.

  37. Moudafi A: Krasnoselski-Mann iteration for hierarchical fixed-point problems. Inverse Probl. 2007, 23(4): 1635–1640. 10.1088/0266-5611/23/4/015

  38. Qin X, Shang M, Su Y: A general iterative method for equilibrium problem and fixed point problem in Hilbert spaces. Nonlinear Anal. 2008, 69: 3897–3909. 10.1016/j.na.2007.10.025

  39. Suzuki N: Moudafi's viscosity approximations with Meir-Keeler contractions. J. Math. Anal. Appl. 2007, 325: 342–352. 10.1016/j.jmaa.2006.01.080

  40. Tian M: A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. 2010, 73: 689–694. 10.1016/j.na.2010.03.058

  41. Verma RU: Generalized system for relaxed cocoercive variational inequalities and projection methods. J. Optim. Theory Appl. 2004, 121(1): 203–210.

  42. Wang Y, Xu W: Strong convergence of a modified iterative algorithm for hierarchical fixed point problems and variational inequalities. Fixed Point Theory Appl. 2013, 2013: Article ID 121

  43. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332

  44. Yang H, Zhou L, Li Q: A parallel projection method for a system of nonlinear variational inequalities. Appl. Math. Comput. 2010, 217: 1971–1975. 10.1016/j.amc.2010.06.053

  45. Yao Y, Cho YJ, Liou YC: Iterative algorithms for hierarchical fixed points problems and variational inequalities. Math. Comput. Model. 2010, 52(9–10): 1697–1705. 10.1016/j.mcm.2010.06.038


Author information


Correspondence to Abdellah Bnouhachem.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Bnouhachem, A. A modified projection method for a common solution of a system of variational inequalities, a split equilibrium problem and a hierarchical fixed-point problem. Fixed Point Theory Appl 2014, 22 (2014). https://doi.org/10.1186/1687-1812-2014-22

