Open Access

An iterative algorithm for system of generalized equilibrium problems and fixed point problem

Fixed Point Theory and Applications 2014, 2014:235

https://doi.org/10.1186/1687-1812-2014-235

Received: 1 September 2014

Accepted: 12 November 2014

Published: 1 December 2014

Abstract

In this paper, we propose an iterative algorithm for finding a common solution of a system of generalized equilibrium problems and a fixed point problem of a strictly pseudo-contractive mapping in the setting of real Hilbert spaces. We prove strong convergence of the sequence generated by the proposed method to a common solution of the system of generalized equilibrium problems and a hierarchical fixed point problem. Preliminary numerical experiments are included to verify the theoretical assertions of the proposed method. The iterative algorithm and results presented in this paper generalize, unify, and improve the previously known results in this area.

MSC:49J30, 47H09, 47J20.

Keywords

system of generalized equilibrium problems; variational inequality; hierarchical fixed point problem; fixed point problem; strictly pseudo-contractive mapping

1 Introduction

Let H be a real Hilbert space whose inner product and norm are denoted by $\langle \cdot , \cdot \rangle$ and $\| \cdot \|$, respectively. Let C be a nonempty closed convex subset of H. Recently, Ceng and Yao [1] considered the following system of generalized equilibrium problems, which involves finding $(x^{*}, y^{*}) \in C \times C$ such that
$$\begin{cases} F_{1}(x^{*}, x) + \langle B_{1} y^{*}, x - x^{*} \rangle + \frac{1}{\mu_{1}} \langle x^{*} - y^{*}, x - x^{*} \rangle \geq 0, & \forall x \in C, \ \mu_{1} > 0, \\ F_{2}(y^{*}, y) + \langle B_{2} x^{*}, y - y^{*} \rangle + \frac{1}{\mu_{2}} \langle y^{*} - x^{*}, y - y^{*} \rangle \geq 0, & \forall y \in C, \ \mu_{2} > 0, \end{cases}$$
(1.1)

where $F_{i} : C \times C \to \mathbb{R}$ are two bifunctions and $B_{i} : C \to H$ is a nonlinear mapping for each $i = 1, 2$. The solution set of (1.1) is denoted by Ω.

If $F_{1} = F_{2} = F$, $B_{1} = B_{2} = B$, and $x^{*} = y^{*}$, then problem (1.1) becomes the following generalized equilibrium problem: find $x^{*} \in C$ such that
$$F(x^{*}, y) + \langle B x^{*}, y - x^{*} \rangle \geq 0, \quad \forall y \in C,$$
(1.2)

which was studied by Takahashi and Takahashi [2]. Inspired by the work of Takahashi and Takahashi [2] and Ceng et al. [3], Ceng et al. [4] introduced and analyzed an iterative scheme for finding approximate solutions of the generalized equilibrium problem (1.2), the system of general generalized equilibrium problems (1.1), and a fixed point problem of a nonexpansive mapping in a Hilbert space. Under appropriate conditions, they proved that the sequence converges strongly to a common solution of these three problems. Recently, Ansari [5] studied the existence of solutions of equilibrium problems in the setting of metric spaces. Inspired by the method in [6], Latif et al. [7] introduced and analyzed a hybrid iterative algorithm for finding a solution of a system of generalized equilibrium problems with constraints of several problems: a generalized mixed equilibrium problem, finitely many variational inclusions, and the common fixed point problem of an asymptotically strict pseudo-contractive mapping in the intermediate sense and infinitely many nonexpansive mappings in a real Hilbert space. Under mild conditions, they proved the weak convergence of this iterative algorithm.

If $F_{1} = F_{2} = 0$, then problem (1.1) reduces to the following general system of variational inequalities, which involves finding $(x^{*}, y^{*}) \in C \times C$ such that
$$\begin{cases} \langle \mu_{1} B_{1} y^{*} + x^{*} - y^{*}, x - x^{*} \rangle \geq 0, & \forall x \in C, \ \mu_{1} > 0, \\ \langle \mu_{2} B_{2} x^{*} + y^{*} - x^{*}, x - y^{*} \rangle \geq 0, & \forall x \in C, \ \mu_{2} > 0, \end{cases}$$
(1.3)

This problem was considered and investigated by Ceng et al. [3]. As pointed out in [8], the system of variational inequalities serves as a tool to study the Nash equilibrium problem; see, for example, [9–11] and the references therein.

If $F_{1} = F_{2} = 0$ and $B_{1} = B_{2} = B$, then problem (1.1) reduces to finding $(x^{*}, y^{*}) \in C \times C$ such that
$$\begin{cases} \langle \mu_{1} B y^{*} + x^{*} - y^{*}, x - x^{*} \rangle \geq 0, & \forall x \in C, \ \mu_{1} > 0, \\ \langle \mu_{2} B x^{*} + y^{*} - x^{*}, x - y^{*} \rangle \geq 0, & \forall x \in C, \ \mu_{2} > 0, \end{cases}$$
(1.4)

which has been introduced and studied by Verma [12, 13].

If $x^{*} = y^{*}$ and $\mu_{1} = \mu_{2}$, then problem (1.4) collapses to the classical variational inequality of finding $x^{*} \in C$ such that
$$\langle B x^{*}, x - x^{*} \rangle \geq 0, \quad \forall x \in C.$$

The theory of variational inequalities has emerged as a rapidly growing area of research because of its applications in nonlinear analysis, optimization, economics, and game theory; see, for example, [14–17]. For recent applications, numerical techniques, and physical formulations, see [1–50].

The fixed point problem for the mapping T is to find $x \in C$ such that
$$T x = x.$$
(1.5)

We denote by $F(T)$ the set of solutions of (1.5). It is well known that $F(T)$ is closed and convex and that the metric projection $P_{F(T)}$ is well defined (see [19]).

Let $S : C \to H$ be a nonexpansive mapping, that is, $\| S x - S y \| \leq \| x - y \|$ for all $x, y \in C$. The hierarchical fixed point problem is to find $x^{*} \in F(T)$ such that
$$\langle x^{*} - S x^{*}, y - x^{*} \rangle \geq 0, \quad \forall y \in F(T).$$
(1.6)
It is linked with some monotone variational inequalities and convex programming problems; see [20]. Various methods have been proposed to solve (1.6); see, for example, [21–35]. By combining Korpelevich's extragradient method and the viscosity approximation method, Ceng et al. [36] introduced and analyzed implicit and explicit iterative schemes for computing a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality for an α-inverse strongly monotone mapping in a Hilbert space. Under suitable assumptions, they proved the strong convergence of the sequences generated by the proposed schemes. In 2010, Yao et al. [20] introduced the following iterative algorithm for solving problem (1.6):
$$\begin{cases} y_{n} = \beta_{n} S x_{n} + (1 - \beta_{n}) x_{n}, \\ x_{n+1} = P_{C}[\alpha_{n} f(x_{n}) + (1 - \alpha_{n}) T y_{n}], \quad n \geq 0, \end{cases}$$
(1.7)
where $f : C \to H$ is a contraction mapping and $\{\alpha_{n}\}$ and $\{\beta_{n}\}$ are two sequences in $(0, 1)$. Under certain restrictions on the parameters, Yao et al. proved that the sequence $\{x_{n}\}$ generated by (1.7) converges strongly to $z \in F(T)$, which is the unique solution of the following variational inequality:
$$\langle (I - f) z, y - z \rangle \geq 0, \quad \forall y \in F(T).$$
(1.8)
In 2011, Ceng et al. [37] investigated the following iterative method:
$$x_{n+1} = P_{C}[\alpha_{n} \rho U(x_{n}) + (I - \alpha_{n} \mu F)(T(x_{n}))], \quad n \geq 0,$$
(1.9)
where U is a Lipschitzian mapping and F is a Lipschitzian and strongly monotone mapping. They proved that, under some appropriate assumptions on the operators and parameters, the sequence $\{x_{n}\}$ generated by (1.9) converges strongly to the unique solution of the variational inequality
$$\langle \rho U(z) - \mu F(z), x - z \rangle \leq 0, \quad \forall x \in \operatorname{Fix}(T).$$
(1.10)
Very recently, Wang and Xu [38] investigated the following iterative method for a hierarchical fixed point problem:
$$\begin{cases} y_{n} = \beta_{n} S x_{n} + (1 - \beta_{n}) x_{n}, \\ x_{n+1} = P_{C}[\alpha_{n} \rho U(x_{n}) + (I - \alpha_{n} \mu F)(T(y_{n}))], \quad n \geq 0, \end{cases}$$
(1.11)

where $S : C \to C$ is a nonexpansive mapping. They proved that, under some appropriate assumptions on the operators and parameters, the sequence $\{x_{n}\}$ generated by (1.11) converges strongly to the unique solution of the variational inequality (1.10). In 2014, Ansari et al. [39] presented a hybrid iterative algorithm for computing a fixed point of a pseudo-contractive mapping and for finding a solution of a triple hierarchical variational inequality in the setting of a real Hilbert space. Under appropriate conditions, they proved that the sequence generated by the proposed algorithm converges strongly to a fixed point which is also a solution of this triple hierarchical variational inequality.

In this paper, motivated by the work of Ceng et al. [4], Yao et al. [20], Bnouhachem [33, 34], and by recent work in this direction, we give an iterative method for finding an approximate element of the common set of solutions of (1.1) and (1.6) in a real Hilbert space. We establish a strong convergence theorem based on this method. In order to verify the theoretical assertions and to compare the numerical behavior of the algorithm for the system of generalized equilibrium problems with that for a single generalized equilibrium problem, an example is given. Our results can be viewed as significant extensions of previously known results.

2 Preliminaries

We present some definitions which will be used in the sequel.

Definition 2.1 A mapping $T : C \to H$ is said to be k-Lipschitz continuous if there exists a constant $k > 0$ such that
$$\| T x - T y \| \leq k \| x - y \|, \quad \forall x, y \in C.$$
  • If $k = 1$, then T is called nonexpansive.

  • If $k \in (0, 1)$, then T is called a contraction.

Definition 2.2 A mapping $T : C \to H$ is said to be

(a) strongly monotone if there exists an $\alpha > 0$ such that
$$\langle T x - T y, x - y \rangle \geq \alpha \| x - y \|^{2}, \quad \forall x, y \in C;$$

(b) α-inverse strongly monotone if there exists an $\alpha > 0$ such that
$$\langle T x - T y, x - y \rangle \geq \alpha \| T x - T y \|^{2}, \quad \forall x, y \in C;$$

(c) a k-strict pseudo-contraction if there exists a constant $0 \leq k < 1$ such that
$$\| T x - T y \|^{2} \leq \| x - y \|^{2} + k \| (I - T) x - (I - T) y \|^{2}, \quad \forall x, y \in C.$$
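As a simple illustration of (b) (our own example, not part of the original text), take $H = \mathbb{R}$ and $T x = c x$ with $c > 0$:
$$\langle T x - T y, x - y \rangle = c |x - y|^{2} = \frac{1}{c} |c x - c y|^{2} = \frac{1}{c} \| T x - T y \|^{2},$$
so T is $\frac{1}{c}$-inverse strongly monotone. In general, an α-inverse strongly monotone mapping is $\frac{1}{\alpha}$-Lipschitz continuous, by the Cauchy-Schwarz inequality.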
     

Assumption 2.1 [42]

Let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying the following assumptions:

(A1) $F(x, x) = 0$, $\forall x \in C$;

(A2) F is monotone, i.e., $F(x, y) + F(y, x) \leq 0$, $\forall x, y \in C$;

(A3) for each $x, y, z \in C$, $\lim_{t \downarrow 0} F(t z + (1 - t) x, y) \leq F(x, y)$;

(A4) for each $x \in C$, $y \mapsto F(x, y)$ is convex and lower semicontinuous.

We list some fundamental lemmas that are useful in the subsequent analysis.

Lemma 2.1 [43]

Let C be a nonempty closed convex subset of H and let $F : C \times C \to \mathbb{R}$ satisfy (A1)-(A4). For $r > 0$ and $x \in H$, define a mapping $T_{r} : H \to C$ as follows:
$$T_{r}(x) = \Big\{ z \in C : F(z, y) + \frac{1}{r} \langle y - z, z - x \rangle \geq 0, \ \forall y \in C \Big\}.$$
Then the following hold:

(i) $T_{r}$ is nonempty and single-valued;

(ii) $T_{r}$ is firmly nonexpansive, i.e., $\| T_{r}(x) - T_{r}(y) \|^{2} \leq \langle T_{r}(x) - T_{r}(y), x - y \rangle$, $\forall x, y \in H$;

(iii) $F(T_{r}) = \mathrm{EP}(F)$;

(iv) $\mathrm{EP}(F)$ is closed and convex.
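To make the resolvent $T_{r}$ concrete, here is a small worked instance of our own (it is not taken from the cited sources, but the computation can be checked directly). Take $H = C = \mathbb{R}$ and $F(z, y) = y^{2} - z^{2}$, which satisfies (A1)-(A4). For $x \in \mathbb{R}$ and $r > 0$, the point $z = T_{r}(x)$ must satisfy
$$y^{2} - z^{2} + \frac{1}{r}(y - z)(z - x) \geq 0, \quad \forall y \in \mathbb{R}.$$
The left-hand side is a quadratic in y with positive leading coefficient, so the inequality holds for all y exactly when its discriminant is nonpositive; a short computation gives the discriminant $\big( \frac{z - x}{r} + 2z \big)^{2}$, which forces $\frac{z - x}{r} + 2z = 0$, i.e., $T_{r}(x) = \frac{x}{1 + 2r}$. The same discriminant argument is used for the bifunctions $F_{1}$ and $F_{2}$ in Section 4.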

     

Lemma 2.2 [4]

Let $F_{1}, F_{2} : C \times C \to \mathbb{R}$ be two bifunctions satisfying (A1)-(A4). For any $(x^{*}, y^{*}) \in C \times C$, $(x^{*}, y^{*})$ is a solution of (1.1) if and only if $x^{*}$ is a fixed point of the mapping $Q : C \to C$ defined by
$$Q(x) = T_{\mu_{1}}^{F_{1}}\big[ T_{\mu_{2}}^{F_{2}}[x - \mu_{2} B_{2} x] - \mu_{1} B_{1} T_{\mu_{2}}^{F_{2}}[x - \mu_{2} B_{2} x] \big], \quad \forall x \in C,$$
(2.1)

where $y^{*} = T_{\mu_{2}}^{F_{2}}[x^{*} - \mu_{2} B_{2} x^{*}]$, $\mu_{i} \in (0, 2\theta_{i})$, and $B_{i} : C \to H$ is a $\theta_{i}$-inverse strongly monotone mapping for each $i = 1, 2$.

Lemma 2.3 [44]

Let C be a nonempty closed convex subset of a real Hilbert space H.

If $T : C \to C$ is a nonexpansive mapping with $\operatorname{Fix}(T) \neq \emptyset$, then the mapping $I - T$ is demiclosed at 0, i.e., if $\{x_{n}\}$ is a sequence in C weakly converging to x and if $\{(I - T) x_{n}\}$ converges strongly to 0, then $(I - T) x = 0$.

Lemma 2.4 [37]

Let $U : C \to H$ be a τ-Lipschitzian mapping, and let $F : C \to H$ be a k-Lipschitzian and η-strongly monotone mapping. Then, for $0 \leq \rho \tau < \mu \eta$, the mapping $\mu F - \rho U$ is $(\mu\eta - \rho\tau)$-strongly monotone, i.e.,
$$\langle (\mu F - \rho U) x - (\mu F - \rho U) y, x - y \rangle \geq (\mu \eta - \rho \tau) \| x - y \|^{2}, \quad \forall x, y \in C.$$

Lemma 2.5 [45]

Let C be a nonempty closed convex subset of a real Hilbert space H, and let $S : C \to H$ be a k-strict pseudo-contraction mapping. Define $B : C \to H$ by $B x = \lambda S x + (1 - \lambda) x$ for all $x \in C$. Then, for $\lambda \in [k, 1)$, B is a nonexpansive mapping such that $F(B) = F(S)$.

Lemma 2.6 [46]

Let H be a real Hilbert space and let $T : C \to H$ be a k-Lipschitzian and η-strongly monotone operator. Let $0 < \mu < \frac{2\eta}{k^{2}}$, $W = I - \lambda \mu T$, and $\tau = \mu(\eta - \frac{\mu k^{2}}{2})$. Then, for $0 < \lambda < \min\{1, \frac{1}{\tau}\}$, W is a contraction with constant $1 - \lambda \tau$, that is,
$$\| W x - W y \| \leq (1 - \lambda \tau) \| x - y \|, \quad \forall x, y \in C.$$

Lemma 2.7 [47]

Let $\{x_{n}\}$, $\{y_{n}\}$ be bounded sequences in a Banach space E and let $\{\beta_{n}\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n \to \infty} \beta_{n} \leq \limsup_{n \to \infty} \beta_{n} < 1$.

Suppose $x_{n+1} = \beta_{n} x_{n} + (1 - \beta_{n}) y_{n}$ for all $n \geq 0$ and $\limsup_{n \to \infty} ( \| y_{n+1} - y_{n} \| - \| x_{n+1} - x_{n} \| ) \leq 0$. Then $\lim_{n \to \infty} \| y_{n} - x_{n} \| = 0$.

Lemma 2.8 [48]

Assume $\{a_{n}\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \leq (1 - \upsilon_{n}) a_{n} + \delta_{n},$$
where $\{\upsilon_{n}\}$ is a sequence in $(0, 1)$ and $\{\delta_{n}\}$ is a sequence such that

(1) $\sum_{n=1}^{\infty} \upsilon_{n} = \infty$;

(2) $\limsup_{n \to \infty} \delta_{n} / \upsilon_{n} \leq 0$ or $\sum_{n=1}^{\infty} | \delta_{n} | < \infty$.

Then $\lim_{n \to \infty} a_{n} = 0$.

Lemma 2.9 [49]

Let C be a closed convex subset of H and let $\{x_{n}\}$ be a bounded sequence in H. Assume that

(i) the weak w-limit set $w_{w}(x_{n}) \subset C$, where $w_{w}(x_{n}) = \{ x : x_{n_{i}} \rightharpoonup x \}$;

(ii) for each $z \in C$, $\lim_{n \to \infty} \| x_{n} - z \|$ exists.

Then $\{x_{n}\}$ is weakly convergent to a point in C.

Lemma 2.10 [50]

Let H be a real Hilbert space. Then the following inequality holds:
$$\| x + y \|^{2} \leq \| x \|^{2} + 2 \langle y, x + y \rangle, \quad \forall x, y \in H.$$

3 The proposed method and some properties

In this section, we suggest and analyze our method for finding common solutions of the system of generalized equilibrium problems (1.1) and the hierarchical fixed point problem (1.6). Let C be a nonempty closed convex subset of a real Hilbert space H. Let $F_{1}, F_{2} : C \times C \to \mathbb{R}$ be two bifunctions satisfying (A1)-(A4). Let $B_{i} : C \to H$ be a $\theta_{i}$-inverse strongly monotone mapping for each $i = 1, 2$, and let $S : C \to H$ be a σ-strict pseudo-contraction mapping such that $\Omega \cap F(S) \neq \emptyset$. Let $T : C \to C$ be a k-Lipschitzian and η-strongly monotone mapping, and let $f : C \to C$ be a τ-Lipschitzian mapping.

Algorithm 3.1 For an arbitrarily given $x_{0} \in C$, let the iterative sequences $\{x_{n}\}$, $\{y_{n}\}$, and $\{z_{n}\}$ be generated by
$$\begin{cases} z_{n} = T_{\mu_{1}}^{F_{1}}\big[ T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - \mu_{1} B_{1} T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] \big], \\ y_{n} = \beta_{n} S z_{n} + (1 - \beta_{n}) z_{n}, \\ x_{n+1} = \alpha_{n} \rho f(x_{n}) + \gamma_{n} x_{n} + ((1 - \gamma_{n}) I - \alpha_{n} \mu T)(y_{n}), \quad n \geq 0, \end{cases}$$
(3.1)
where $\mu_{i} \in (0, 2\theta_{i})$ for each $i = 1, 2$. Suppose the parameters satisfy $0 < \mu < \frac{2\eta}{k^{2}}$ and $0 \leq \rho < \nu / \tau$, where $\nu = \mu(\eta - \frac{\mu k^{2}}{2})$. Also, $\{\gamma_{n}\}$, $\{\alpha_{n}\}$, and $\{\beta_{n}\}$ are sequences in $(0, 1)$ satisfying the following conditions:
(a) $0 < \liminf_{n \to \infty} \gamma_{n} \leq \limsup_{n \to \infty} \gamma_{n} < 1$;

(b) $\lim_{n \to \infty} \alpha_{n} = 0$ and $\sum_{n=1}^{\infty} \alpha_{n} = \infty$;

(c) $\{\beta_{n}\} \subset [\sigma, 1)$ and $\lim_{n \to \infty} \beta_{n} = \beta < 1$.
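The following is a minimal Python sketch of one iteration of Algorithm 3.1 (a schematic illustration of our own, not part of the original paper). The resolvents $T_{\mu_{1}}^{F_{1}}$, $T_{\mu_{2}}^{F_{2}}$ and the mappings $B_{1}$, $B_{2}$, $S$, $T$, $f$, together with the parameter sequences, are assumed to be supplied by the user as callables; in general the resolvents have no closed form, although in the example of Section 4 they do.

```python
def algorithm_3_1_step(x_n, n,
                       T_F1, T_F2,          # resolvents T^{F1}_{mu1}, T^{F2}_{mu2} : H -> C
                       B1, B2, S, T, f,     # mappings of Algorithm 3.1
                       mu1, mu2, rho, mu,   # scalar parameters
                       alpha, beta, gamma): # parameter sequences, e.g. alpha(n) = 1/(3*n)
    """One pass of the iteration (3.1); all arguments except x_n and n are callables/floats."""
    # z_n = T^{F1}_{mu1}[ v_n - mu1*B1(v_n) ]  with  v_n = T^{F2}_{mu2}[ x_n - mu2*B2(x_n) ]
    v_n = T_F2(x_n - mu2 * B2(x_n))
    z_n = T_F1(v_n - mu1 * B1(v_n))
    # y_n = beta_n*S(z_n) + (1 - beta_n)*z_n
    y_n = beta(n) * S(z_n) + (1 - beta(n)) * z_n
    # x_{n+1} = alpha_n*rho*f(x_n) + gamma_n*x_n + ((1 - gamma_n)I - alpha_n*mu*T)(y_n)
    x_next = (alpha(n) * rho * f(x_n)
              + gamma(n) * x_n
              + (1 - gamma(n)) * y_n - alpha(n) * mu * T(y_n))
    return z_n, y_n, x_next
```

Algorithm 3.2 below corresponds to the special case $F_{1} = F_{2} = F$, $B_{1} = B_{2} = B$, and $\mu_{1} = \mu_{2} = r$.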

     

If $F_{1} = F_{2} = F$, $B_{1} = B_{2} = B$, and $\mu_{1} = \mu_{2} = r$, then Algorithm 3.1 reduces to the following Algorithm 3.2 for finding common solutions of the generalized equilibrium problem (1.2) and the hierarchical fixed point problem (1.6).

Algorithm 3.2 For an arbitrarily given $x_{0} \in C$, let the iterative sequences $\{x_{n}\}$, $\{y_{n}\}$, and $\{z_{n}\}$ be generated by
$$\begin{cases} F(z_{n}, y) + \langle B x_{n}, y - z_{n} \rangle + \frac{1}{r} \langle y - z_{n}, z_{n} - x_{n} \rangle \geq 0, \quad \forall y \in C, \\ y_{n} = \beta_{n} S z_{n} + (1 - \beta_{n}) z_{n}, \\ x_{n+1} = \alpha_{n} \rho f(x_{n}) + \gamma_{n} x_{n} + ((1 - \gamma_{n}) I - \alpha_{n} \mu T)(y_{n}), \quad n \geq 0. \end{cases}$$
Suppose the parameters satisfy $0 < \mu < \frac{2\eta}{k^{2}}$ and $0 \leq \rho \tau < \nu$, where $\nu = 1 - \sqrt{1 - \mu(2\eta - \mu k^{2})}$. Also, $\{\gamma_{n}\}$, $\{\alpha_{n}\}$, and $\{\beta_{n}\}$ are sequences in $(0, 1)$ satisfying the following conditions:
(a) $0 < \liminf_{n \to \infty} \gamma_{n} \leq \limsup_{n \to \infty} \gamma_{n} < 1$;

(b) $\lim_{n \to \infty} \alpha_{n} = 0$ and $\sum_{n=1}^{\infty} \alpha_{n} = \infty$;

(c) $\{\beta_{n}\} \subset [\sigma, 1)$ and $\lim_{n \to \infty} \beta_{n} = \beta < 1$.

     

Remark 3.1 If $\rho = \mu = 1$, $\gamma_{n} = 0$, and $S z_{n}$ is replaced by $S x_{n}$, we obtain an extension and improvement of the methods of Yao et al. [20] and Wang and Xu [38] for finding an approximate element of the common set of solutions of a system of generalized equilibrium problems and a hierarchical fixed point problem in a real Hilbert space.

Lemma 3.1 Let $x^{*} \in \Omega \cap F(S)$. Then $\{x_{n}\}$, $\{z_{n}\}$, and $\{y_{n}\}$ are bounded.

Proof Let $x^{*} \in \Omega \cap F(S)$; then
$$x^{*} = T_{\mu_{1}}^{F_{1}}[ y^{*} - \mu_{1} B_{1} y^{*} ], \quad \text{where } y^{*} = T_{\mu_{2}}^{F_{2}}[ x^{*} - \mu_{2} B_{2} x^{*} ].$$
We set $v_{n} = T_{\mu_{2}}^{F_{2}}[ x_{n} - \mu_{2} B_{2} x_{n} ]$. Since $B_{2}$ is a $\theta_{2}$-inverse strongly monotone mapping, it follows that
$$\begin{aligned} \| v_{n} - y^{*} \|^{2} &= \| T_{\mu_{2}}^{F_{2}}[ x_{n} - \mu_{2} B_{2} x_{n} ] - T_{\mu_{2}}^{F_{2}}[ x^{*} - \mu_{2} B_{2} x^{*} ] \|^{2} \\ &\leq \| x_{n} - x^{*} - \mu_{2}( B_{2} x_{n} - B_{2} x^{*} ) \|^{2} \\ &\leq \| x_{n} - x^{*} \|^{2} - \mu_{2}( 2\theta_{2} - \mu_{2} ) \| B_{2} x_{n} - B_{2} x^{*} \|^{2} \\ &\leq \| x_{n} - x^{*} \|^{2}. \end{aligned}$$
(3.2)
Since $B_{i}$ is a $\theta_{i}$-inverse strongly monotone mapping for each $i = 1, 2$, we get
$$\begin{aligned} \| z_{n} - x^{*} \|^{2} &= \big\| T_{\mu_{1}}^{F_{1}}\big[ T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - \mu_{1} B_{1} T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] \big] - T_{\mu_{1}}^{F_{1}}\big[ T_{\mu_{2}}^{F_{2}}[x^{*} - \mu_{2} B_{2} x^{*}] - \mu_{1} B_{1} T_{\mu_{2}}^{F_{2}}[x^{*} - \mu_{2} B_{2} x^{*}] \big] \big\|^{2} \\ &\leq \big\| T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - T_{\mu_{2}}^{F_{2}}[x^{*} - \mu_{2} B_{2} x^{*}] - \mu_{1}\big( B_{1} T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - B_{1} T_{\mu_{2}}^{F_{2}}[x^{*} - \mu_{2} B_{2} x^{*}] \big) \big\|^{2} \\ &\leq \big\| T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - T_{\mu_{2}}^{F_{2}}[x^{*} - \mu_{2} B_{2} x^{*}] \big\|^{2} - \mu_{1}( 2\theta_{1} - \mu_{1} ) \| B_{1} v_{n} - B_{1} y^{*} \|^{2} \\ &\leq \| ( x_{n} - \mu_{2} B_{2} x_{n} ) - ( x^{*} - \mu_{2} B_{2} x^{*} ) \|^{2} - \mu_{1}( 2\theta_{1} - \mu_{1} ) \| B_{1} v_{n} - B_{1} y^{*} \|^{2} \\ &\leq \| x_{n} - x^{*} \|^{2} - \mu_{2}( 2\theta_{2} - \mu_{2} ) \| B_{2} x_{n} - B_{2} x^{*} \|^{2} - \mu_{1}( 2\theta_{1} - \mu_{1} ) \| B_{1} v_{n} - B_{1} y^{*} \|^{2} \\ &\leq \| x_{n} - x^{*} \|^{2}. \end{aligned}$$
(3.3)
By Lemma 2.5 and the inequality above, it is easy to show that
$$\| y_{n} - x^{*} \| \leq \| z_{n} - x^{*} \| \leq \| x_{n} - x^{*} \|.$$
(3.4)
Next, we prove that the sequence $\{x_{n}\}$ is bounded. Since $\lim_{n \to \infty} \alpha_{n} = 0$, without loss of generality we can assume that $\alpha_{n} \leq \min\{\epsilon, \epsilon/\tau\}$ for all $n \geq 1$, where $0 < \epsilon < 1 - \limsup_{n \to \infty} \gamma_{n}$. From (3.1) and (3.4), we have
$$\begin{aligned} \| x_{n+1} - x^{*} \| &= \| \alpha_{n} \rho f(x_{n}) + \gamma_{n} x_{n} + ((1 - \gamma_{n}) I - \alpha_{n} \mu T)(y_{n}) - x^{*} \| \\ &= \| \alpha_{n}( \rho f(x_{n}) - \mu T(x^{*}) ) + \gamma_{n}( x_{n} - x^{*} ) + ((1 - \gamma_{n}) I - \alpha_{n} \mu T)(y_{n}) - ((1 - \gamma_{n}) I - \alpha_{n} \mu T)(x^{*}) \| \\ &\leq \alpha_{n} \rho \tau \| x_{n} - x^{*} \| + \alpha_{n} \| (\rho f - \mu T) x^{*} \| + \gamma_{n} \| x_{n} - x^{*} \| + (1 - \gamma_{n}) \Big\| \Big( I - \frac{\alpha_{n} \mu}{1 - \gamma_{n}} T \Big)(y_{n}) - \Big( I - \frac{\alpha_{n} \mu}{1 - \gamma_{n}} T \Big)(x^{*}) \Big\| \\ &\leq \alpha_{n} \rho \tau \| x_{n} - x^{*} \| + \alpha_{n} \| (\rho f - \mu T) x^{*} \| + \gamma_{n} \| x_{n} - x^{*} \| + (1 - \gamma_{n} - \alpha_{n} \nu) \| y_{n} - x^{*} \| \\ &\leq \alpha_{n} \rho \tau \| x_{n} - x^{*} \| + \alpha_{n} \| (\rho f - \mu T) x^{*} \| + \gamma_{n} \| x_{n} - x^{*} \| + (1 - \gamma_{n} - \alpha_{n} \nu) \| x_{n} - x^{*} \| \\ &= (1 - \alpha_{n}(\nu - \rho \tau)) \| x_{n} - x^{*} \| + \alpha_{n} \| (\rho f - \mu T) x^{*} \| \\ &\leq \max\Big\{ \| x_{n} - x^{*} \|, \frac{1}{\nu - \rho \tau} \| (\rho f - \mu T) x^{*} \| \Big\}, \end{aligned}$$

where the second inequality follows from Lemma 2.6 and the third from (3.4). By induction on n, we obtain $\| x_{n} - x^{*} \| \leq \max\{ \| x_{0} - x^{*} \|, \frac{1}{\nu - \rho \tau} \| (\rho f - \mu T) x^{*} \| \}$ for all $n \geq 0$. Hence $\{x_{n}\}$ is bounded, and consequently we deduce that $\{z_{n}\}$, $\{v_{n}\}$, $\{y_{n}\}$, $\{S(z_{n})\}$, $\{T(y_{n})\}$, and $\{f(x_{n})\}$ are bounded. □

Lemma 3.2 Let $x^{*} \in \Omega \cap F(S)$ and let $\{x_{n}\}$ be the sequence generated by Algorithm 3.1. Then we have:

(a) $\lim_{n \to \infty} \| x_{n+1} - x_{n} \| = 0$;

(b) the weak w-limit set satisfies $w_{w}(x_{n}) \subset F(S)$, where $w_{w}(x_{n}) = \{ x : x_{n_{i}} \rightharpoonup x \}$.

     
Proof We first estimate
$$\begin{aligned} \| z_{n} - z_{n-1} \|^{2} &= \big\| T_{\mu_{1}}^{F_{1}}\big[ T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - \mu_{1} B_{1} T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] \big] - T_{\mu_{1}}^{F_{1}}\big[ T_{\mu_{2}}^{F_{2}}[x_{n-1} - \mu_{2} B_{2} x_{n-1}] - \mu_{1} B_{1} T_{\mu_{2}}^{F_{2}}[x_{n-1} - \mu_{2} B_{2} x_{n-1}] \big] \big\|^{2} \\ &\leq \big\| T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - T_{\mu_{2}}^{F_{2}}[x_{n-1} - \mu_{2} B_{2} x_{n-1}] - \mu_{1}\big( B_{1} T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - B_{1} T_{\mu_{2}}^{F_{2}}[x_{n-1} - \mu_{2} B_{2} x_{n-1}] \big) \big\|^{2} \\ &\leq \big\| T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - T_{\mu_{2}}^{F_{2}}[x_{n-1} - \mu_{2} B_{2} x_{n-1}] \big\|^{2} - \mu_{1}( 2\theta_{1} - \mu_{1} ) \big\| B_{1} T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - B_{1} T_{\mu_{2}}^{F_{2}}[x_{n-1} - \mu_{2} B_{2} x_{n-1}] \big\|^{2} \\ &\leq \| ( x_{n} - x_{n-1} ) - \mu_{2}( B_{2} x_{n} - B_{2} x_{n-1} ) \|^{2} \\ &\leq \| x_{n} - x_{n-1} \|^{2} - \mu_{2}( 2\theta_{2} - \mu_{2} ) \| B_{2} x_{n} - B_{2} x_{n-1} \|^{2} \\ &\leq \| x_{n} - x_{n-1} \|^{2}. \end{aligned}$$
(3.5)
From (3.1) and (3.5), we have
$$\begin{aligned} \| y_{n} - y_{n-1} \| &\leq \| \beta_{n} S z_{n} + (1 - \beta_{n}) z_{n} - ( \beta_{n-1} S z_{n-1} + (1 - \beta_{n-1}) z_{n-1} ) \| \\ &= \| \beta_{n}( S z_{n} - S z_{n-1} ) + ( \beta_{n} - \beta_{n-1} ) S z_{n-1} + (1 - \beta_{n})( z_{n} - z_{n-1} ) + ( \beta_{n-1} - \beta_{n} ) z_{n-1} \| \\ &\leq \| z_{n} - z_{n-1} \| + | \beta_{n} - \beta_{n-1} | \, \| S z_{n-1} - z_{n-1} \| \\ &\leq \| x_{n} - x_{n-1} \| + | \beta_{n} - \beta_{n-1} | \, \| S z_{n-1} - z_{n-1} \|. \end{aligned}$$
(3.6)
We define $w_{n} = \frac{x_{n+1} - \gamma_{n} x_{n}}{1 - \gamma_{n}}$, which implies that $x_{n+1} = (1 - \gamma_{n}) w_{n} + \gamma_{n} x_{n}$. It follows from (3.6) that
$$\begin{aligned} \| w_{n+1} - w_{n} \| &\leq \frac{\alpha_{n+1}}{1 - \gamma_{n+1}} \| \rho f(x_{n+1}) - \mu T(y_{n+1}) \| + \frac{\alpha_{n}}{1 - \gamma_{n}} \| \rho f(x_{n}) - \mu T(y_{n}) \| + \| y_{n+1} - y_{n} \| \\ &\leq \frac{\alpha_{n+1}}{1 - \gamma_{n+1}} \| \rho f(x_{n+1}) - \mu T(y_{n+1}) \| + \frac{\alpha_{n}}{1 - \gamma_{n}} \| \rho f(x_{n}) - \mu T(y_{n}) \| + \| x_{n+1} - x_{n} \| + | \beta_{n+1} - \beta_{n} | \, \| S z_{n} - z_{n} \|. \end{aligned}$$
(3.7)
Since $\lim_{n \to \infty} \alpha_{n} = 0$, $\lim_{n \to \infty} \beta_{n} = \beta$, and $0 < \liminf_{n \to \infty} \gamma_{n} \leq \limsup_{n \to \infty} \gamma_{n} < 1$, we get
$$\limsup_{n \to \infty} ( \| w_{n+1} - w_{n} \| - \| x_{n+1} - x_{n} \| ) \leq 0.$$
By Lemma 2.7, we have $\lim_{n \to \infty} \| w_{n} - x_{n} \| = 0$. Since $\| x_{n+1} - x_{n} \| = (1 - \gamma_{n}) \| w_{n} - x_{n} \|$, we obtain
$$\lim_{n \to \infty} \| x_{n+1} - x_{n} \| = 0.$$
Next, we estimate
$$\| x_{n} - y_{n} \| \leq \| x_{n+1} - x_{n} \| + \| x_{n+1} - y_{n} \| \leq \| x_{n+1} - x_{n} \| + \alpha_{n} \| \rho f(x_{n}) - \mu T(y_{n}) \| + \gamma_{n} \| x_{n} - y_{n} \|,$$
which implies
$$(1 - \gamma_{n}) \| x_{n} - y_{n} \| \leq \| x_{n+1} - x_{n} \| + \alpha_{n} \| \rho f(x_{n}) - \mu T(y_{n}) \|.$$
Since $\lim_{n \to \infty} \alpha_{n} = 0$ and $0 < \liminf_{n \to \infty} \gamma_{n} \leq \limsup_{n \to \infty} \gamma_{n} < 1$, we have
$$\lim_{n \to \infty} \| x_{n} - y_{n} \| = 0.$$
(3.8)
Next, we show that $\lim_{n \to \infty} \| z_{n} - x_{n} \| = 0$. Since $x^{*} \in \Omega \cap F(S)$, by using Lemma 2.10, (3.4), and (3.3), we obtain
$$\begin{aligned} \| x_{n+1} - x^{*} \|^{2} &= \| \gamma_{n}( x_{n} - x^{*} ) + (1 - \gamma_{n})( y_{n} - x^{*} ) + \alpha_{n}( \rho f(x_{n}) - \mu T(y_{n}) ) \|^{2} \\ &\leq \| \gamma_{n}( x_{n} - x^{*} ) + (1 - \gamma_{n})( y_{n} - x^{*} ) \|^{2} + 2 \alpha_{n} \langle \rho f(x_{n}) - \mu T(y_{n}), x_{n+1} - x^{*} \rangle \\ &\leq \gamma_{n} \| x_{n} - x^{*} \|^{2} + (1 - \gamma_{n}) \| y_{n} - x^{*} \|^{2} + 2 \alpha_{n} \langle \rho f(x_{n}) - \mu T(y_{n}), x_{n+1} - x^{*} \rangle \\ &\leq \gamma_{n} \| x_{n} - x^{*} \|^{2} + (1 - \gamma_{n}) \| z_{n} - x^{*} \|^{2} + 2 \alpha_{n} \langle \rho f(x_{n}) - \mu T(y_{n}), x_{n+1} - x^{*} \rangle \\ &\leq \gamma_{n} \| x_{n} - x^{*} \|^{2} + (1 - \gamma_{n}) \big\{ \| x_{n} - x^{*} \|^{2} - \mu_{2}( 2\theta_{2} - \mu_{2} ) \| B_{2} x_{n} - B_{2} x^{*} \|^{2} - \mu_{1}( 2\theta_{1} - \mu_{1} ) \| B_{1} v_{n} - B_{1} y^{*} \|^{2} \big\} + 2 \alpha_{n} \langle \rho f(x_{n}) - \mu T(y_{n}), x_{n+1} - x^{*} \rangle \\ &= \| x_{n} - x^{*} \|^{2} - (1 - \gamma_{n}) \big\{ \mu_{2}( 2\theta_{2} - \mu_{2} ) \| B_{2} x_{n} - B_{2} x^{*} \|^{2} + \mu_{1}( 2\theta_{1} - \mu_{1} ) \| B_{1} v_{n} - B_{1} y^{*} \|^{2} \big\} + 2 \alpha_{n} \langle \rho f(x_{n}) - \mu T(y_{n}), x_{n+1} - x^{*} \rangle, \end{aligned}$$
(3.9)
which implies that
$$(1 - \gamma_{n}) \big\{ \mu_{2}( 2\theta_{2} - \mu_{2} ) \| B_{2} x_{n} - B_{2} x^{*} \|^{2} + \mu_{1}( 2\theta_{1} - \mu_{1} ) \| B_{1} v_{n} - B_{1} y^{*} \|^{2} \big\} \leq \| x_{n} - x^{*} \|^{2} - \| x_{n+1} - x^{*} \|^{2} + 2 \alpha_{n} \langle \rho f(x_{n}) - \mu T(y_{n}), x_{n+1} - x^{*} \rangle \leq ( \| x_{n} - x^{*} \| + \| x_{n+1} - x^{*} \| ) \| x_{n+1} - x_{n} \| + 2 \alpha_{n} \langle \rho f(x_{n}) - \mu T(y_{n}), x_{n+1} - x^{*} \rangle.$$
Since $\limsup_{n \to \infty} \gamma_{n} < 1$, $2\theta_{1} - \mu_{1} > 0$, $2\theta_{2} - \mu_{2} > 0$, $\lim_{n \to \infty} \| x_{n+1} - x_{n} \| = 0$, and $\alpha_{n} \to 0$, we obtain
$$\lim_{n \to \infty} \| B_{2} x_{n} - B_{2} x^{*} \| = 0 \quad \text{and} \quad \lim_{n \to \infty} \| B_{1} v_{n} - B_{1} y^{*} \| = 0.$$
Since $T_{\mu_{2}}^{F_{2}}$ is firmly nonexpansive, we have
$$\begin{aligned} \| v_{n} - y^{*} \|^{2} &= \| T_{\mu_{2}}^{F_{2}}[ x_{n} - \mu_{2} B_{2} x_{n} ] - T_{\mu_{2}}^{F_{2}}[ x^{*} - \mu_{2} B_{2} x^{*} ] \|^{2} \\ &\leq \langle v_{n} - y^{*}, ( x_{n} - \mu_{2} B_{2} x_{n} ) - ( x^{*} - \mu_{2} B_{2} x^{*} ) \rangle \\ &= \tfrac{1}{2} \big\{ \| v_{n} - y^{*} \|^{2} + \| x_{n} - x^{*} - \mu_{2}( B_{2} x_{n} - B_{2} x^{*} ) \|^{2} - \| x_{n} - x^{*} - \mu_{2}( B_{2} x_{n} - B_{2} x^{*} ) - ( v_{n} - y^{*} ) \|^{2} \big\} \\ &\leq \tfrac{1}{2} \big\{ \| v_{n} - y^{*} \|^{2} + \| x_{n} - x^{*} \|^{2} - \mu_{2}( 2\theta_{2} - \mu_{2} ) \| B_{2} x_{n} - B_{2} x^{*} \|^{2} - \| x_{n} - v_{n} - \mu_{2}( B_{2} x_{n} - B_{2} x^{*} ) - ( x^{*} - y^{*} ) \|^{2} \big\} \\ &\leq \tfrac{1}{2} \big\{ \| v_{n} - y^{*} \|^{2} + \| x_{n} - x^{*} \|^{2} - \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \|^{2} + 2 \mu_{2} \langle x_{n} - v_{n} - ( x^{*} - y^{*} ), B_{2} x_{n} - B_{2} x^{*} \rangle - \mu_{2}^{2} \| B_{2} x_{n} - B_{2} x^{*} \|^{2} \big\} \\ &\leq \tfrac{1}{2} \big\{ \| v_{n} - y^{*} \|^{2} + \| x_{n} - x^{*} \|^{2} - \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \|^{2} + 2 \mu_{2} \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \| \, \| B_{2} x_{n} - B_{2} x^{*} \| \big\}. \end{aligned}$$
Hence, we get
$$\| v_{n} - y^{*} \|^{2} \leq \| x_{n} - x^{*} \|^{2} - \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \|^{2} + 2 \mu_{2} \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \| \, \| B_{2} x_{n} - B_{2} x^{*} \|.$$
(3.10)
On the other hand, from (3.1) and Lemma 2.1(ii), we obtain
$$\begin{aligned} \| z_{n} - x^{*} \|^{2} &= \| T_{\mu_{1}}^{F_{1}}[ v_{n} - \mu_{1} B_{1} v_{n} ] - T_{\mu_{1}}^{F_{1}}[ y^{*} - \mu_{1} B_{1} y^{*} ] \|^{2} \\ &\leq \langle z_{n} - x^{*}, ( v_{n} - \mu_{1} B_{1} v_{n} ) - ( y^{*} - \mu_{1} B_{1} y^{*} ) \rangle \\ &= \tfrac{1}{2} \big\{ \| z_{n} - x^{*} \|^{2} + \| v_{n} - y^{*} - \mu_{1}( B_{1} v_{n} - B_{1} y^{*} ) \|^{2} - \| v_{n} - y^{*} - \mu_{1}( B_{1} v_{n} - B_{1} y^{*} ) - ( z_{n} - x^{*} ) \|^{2} \big\} \\ &\leq \tfrac{1}{2} \big\{ \| z_{n} - x^{*} \|^{2} + \| v_{n} - y^{*} \|^{2} - \mu_{1}( 2\theta_{1} - \mu_{1} ) \| B_{1} v_{n} - B_{1} y^{*} \|^{2} - \| v_{n} - z_{n} - \mu_{1}( B_{1} v_{n} - B_{1} y^{*} ) + ( x^{*} - y^{*} ) \|^{2} \big\} \\ &\leq \tfrac{1}{2} \big\{ \| z_{n} - x^{*} \|^{2} + \| v_{n} - y^{*} \|^{2} - \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \|^{2} + 2 \mu_{1} \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \| \, \| B_{1} v_{n} - B_{1} y^{*} \| \big\}, \end{aligned}$$
which implies that
$$\begin{aligned} \| z_{n} - x^{*} \|^{2} &\leq \| v_{n} - y^{*} \|^{2} - \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \|^{2} + 2 \mu_{1} \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \| \, \| B_{1} v_{n} - B_{1} y^{*} \| \\ &\leq \| x_{n} - x^{*} \|^{2} - \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \|^{2} + 2 \mu_{2} \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \| \, \| B_{2} x_{n} - B_{2} x^{*} \| - \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \|^{2} + 2 \mu_{1} \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \| \, \| B_{1} v_{n} - B_{1} y^{*} \|, \end{aligned}$$
where the last inequality follows from (3.10). From (3.9) and the above inequality, we have
$$\begin{aligned} \| x_{n+1} - x^{*} \|^{2} &\leq \gamma_{n} \| x_{n} - x^{*} \|^{2} + (1 - \gamma_{n}) \| z_{n} - x^{*} \|^{2} + 2 \alpha_{n} \langle \rho f(x_{n}) - \mu T(y_{n}), x_{n+1} - x^{*} \rangle \\ &\leq \| x_{n} - x^{*} \|^{2} + 2 \alpha_{n} \langle \rho f(x_{n}) - \mu T(y_{n}), x_{n+1} - x^{*} \rangle - (1 - \gamma_{n}) \big( \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \|^{2} - 2 \mu_{2} \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \| \, \| B_{2} x_{n} - B_{2} x^{*} \| \big) \\ &\quad - (1 - \gamma_{n}) \big( \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \|^{2} - 2 \mu_{1} \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \| \, \| B_{1} v_{n} - B_{1} y^{*} \| \big), \end{aligned}$$
which implies that
$$\begin{aligned} (1 - \gamma_{n}) \big( \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \|^{2} + \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \|^{2} \big) &\leq 2 \alpha_{n} \langle \rho f(x_{n}) - \mu T(y_{n}), x_{n+1} - x^{*} \rangle + ( \| x_{n} - x^{*} \| + \| x_{n+1} - x^{*} \| ) \| x_{n+1} - x_{n} \| \\ &\quad + 2 (1 - \gamma_{n}) \mu_{2} \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \| \, \| B_{2} x_{n} - B_{2} x^{*} \| + 2 (1 - \gamma_{n}) \mu_{1} \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \| \, \| B_{1} v_{n} - B_{1} y^{*} \|. \end{aligned}$$
Since $\lim_{n \to \infty} \| x_{n+1} - x_{n} \| = 0$, $\lim_{n \to \infty} \alpha_{n} = 0$, $0 < \liminf_{n \to \infty} \gamma_{n} \leq \limsup_{n \to \infty} \gamma_{n} < 1$, $\lim_{n \to \infty} \| B_{2} x_{n} - B_{2} x^{*} \| = 0$, and $\lim_{n \to \infty} \| B_{1} v_{n} - B_{1} y^{*} \| = 0$, we obtain
$$\lim_{n \to \infty} \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \| = 0 \quad \text{and} \quad \lim_{n \to \infty} \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \| = 0.$$
Since
$$\| x_{n} - z_{n} \| \leq \| x_{n} - v_{n} - ( x^{*} - y^{*} ) \| + \| v_{n} - z_{n} + ( x^{*} - y^{*} ) \|,$$
we get
$$\lim_{n \to \infty} \| x_{n} - z_{n} \| = 0.$$
(3.11)
It follows from (3.8) and (3.11) that
$$\lim_{n \to \infty} \| y_{n} - z_{n} \| = 0.$$
(3.12)
We define a mapping $W : C \to H$ by $W x = \beta S x + (1 - \beta) x$ with $\sigma \leq \beta < 1$. It follows from Lemma 2.5 that W is a nonexpansive mapping and $F(W) = F(S)$. Note that
$$\| W z_{n} - z_{n} \| \leq \| W z_{n} - y_{n} \| + \| z_{n} - y_{n} \| \leq | \beta_{n} - \beta | \, \| S z_{n} - z_{n} \| + \| z_{n} - y_{n} \|.$$
Since $\lim_{n \to \infty} \beta_{n} = \beta$ and $\lim_{n \to \infty} \| y_{n} - z_{n} \| = 0$, we obtain
$$\lim_{n \to \infty} \| W z_{n} - z_{n} \| = 0.$$

Since $\{x_{n}\}$ is bounded, without loss of generality we can assume that $x_{n} \rightharpoonup \hat{x} \in C$; from (3.11), it is easy to observe that $z_{n} \rightharpoonup \hat{x}$. It follows from Lemma 2.3 that $\hat{x} \in F(W) = F(S)$. Therefore $w_{w}(x_{n}) \subset F(S)$. □

Theorem 3.1 The sequence $\{x_{n}\}$ generated by Algorithm 3.1 converges strongly to z, which is the unique solution of the variational inequality
$$\langle \rho f(z) - \mu T(z), x - z \rangle \leq 0, \quad \forall x \in \Omega \cap F(S).$$
(3.13)
Proof Since $\{x_{n}\}$ is bounded, we may assume $x_{n} \rightharpoonup w$, and from Lemma 3.2 we have $w \in F(S)$. Next, we show that $w \in \Omega$. Since $\lim_{n \to \infty} \| x_{n} - z_{n} \| = 0$ and there exists a subsequence $\{x_{n_{k}}\}$ of $\{x_{n}\}$ such that $x_{n_{k}} \rightharpoonup w$, it is easy to observe that $z_{n_{k}} \rightharpoonup w$. For any $x, y \in C$, using (2.1), we have
$$\begin{aligned} \| Q(x) - Q(y) \|^{2} &= \big\| T_{\mu_{1}}^{F_{1}}\big[ T_{\mu_{2}}^{F_{2}}[x - \mu_{2} B_{2} x] - \mu_{1} B_{1} T_{\mu_{2}}^{F_{2}}[x - \mu_{2} B_{2} x] \big] - T_{\mu_{1}}^{F_{1}}\big[ T_{\mu_{2}}^{F_{2}}[y - \mu_{2} B_{2} y] - \mu_{1} B_{1} T_{\mu_{2}}^{F_{2}}[y - \mu_{2} B_{2} y] \big] \big\|^{2} \\ &\leq \big\| \big( T_{\mu_{2}}^{F_{2}}[x - \mu_{2} B_{2} x] - T_{\mu_{2}}^{F_{2}}[y - \mu_{2} B_{2} y] \big) - \mu_{1}\big( B_{1} T_{\mu_{2}}^{F_{2}}[x - \mu_{2} B_{2} x] - B_{1} T_{\mu_{2}}^{F_{2}}[y - \mu_{2} B_{2} y] \big) \big\|^{2} \\ &\leq \big\| T_{\mu_{2}}^{F_{2}}[x - \mu_{2} B_{2} x] - T_{\mu_{2}}^{F_{2}}[y - \mu_{2} B_{2} y] \big\|^{2} - \mu_{1}( 2\theta_{1} - \mu_{1} ) \big\| B_{1} T_{\mu_{2}}^{F_{2}}[x - \mu_{2} B_{2} x] - B_{1} T_{\mu_{2}}^{F_{2}}[y - \mu_{2} B_{2} y] \big\|^{2} \\ &\leq \big\| T_{\mu_{2}}^{F_{2}}[x - \mu_{2} B_{2} x] - T_{\mu_{2}}^{F_{2}}[y - \mu_{2} B_{2} y] \big\|^{2} \\ &\leq \| ( x - \mu_{2} B_{2} x ) - ( y - \mu_{2} B_{2} y ) \|^{2} \\ &\leq \| x - y \|^{2} - \mu_{2}( 2\theta_{2} - \mu_{2} ) \| B_{2} x - B_{2} y \|^{2} \\ &\leq \| x - y \|^{2}. \end{aligned}$$
This implies that Q : C C is nonexpansive. On the other hand
$$\| z_{n} - Q(z_{n}) \|^{2} = \big\| T_{\mu_{1}}^{F_{1}}\big[ T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] - \mu_{1} B_{1} T_{\mu_{2}}^{F_{2}}[x_{n} - \mu_{2} B_{2} x_{n}] \big] - Q(z_{n}) \big\|^{2} = \| Q(x_{n}) - Q(z_{n}) \|^{2} \leq \| x_{n} - z_{n} \|^{2}.$$
Since $\lim_{n \to \infty} \| x_{n} - z_{n} \| = 0$ (see (3.11)), we have $\lim_{n \to \infty} \| z_{n} - Q(z_{n}) \| = 0$. It follows from Lemma 2.3 that $w = Q(w)$, which, by Lemma 2.2, implies that $w \in \Omega$. Thus we have
$$w \in \Omega \cap F(S).$$

Since $0 \leq \rho \tau < \mu \eta$, it follows from Lemma 2.4 that the operator $\mu T - \rho f$ is $(\mu\eta - \rho\tau)$-strongly monotone, so the variational inequality (3.13) has a unique solution, which we denote by $z \in \Omega \cap F(S)$.

Next, we claim that $\limsup_{n \to \infty} \langle \rho f(z) - \mu T(z), x_{n} - z \rangle \leq 0$. Since $\{x_{n}\}$ is bounded, there exists a subsequence $\{x_{n_{k}}\}$ of $\{x_{n}\}$ such that
$$\limsup_{n \to \infty} \langle \rho f(z) - \mu T(z), x_{n} - z \rangle = \limsup_{k \to \infty} \langle \rho f(z) - \mu T(z), x_{n_{k}} - z \rangle = \langle \rho f(z) - \mu T(z), w - z \rangle \leq 0.$$
Next, we show that $x_{n} \to z$. We have
$$\begin{aligned} \| x_{n+1} - z \|^{2} &= \langle \alpha_{n} \rho f(x_{n}) + \gamma_{n} x_{n} + ((1 - \gamma_{n}) I - \alpha_{n} \mu T)(y_{n}) - z, x_{n+1} - z \rangle \\ &= \alpha_{n} \langle \rho f(x_{n}) - \mu T(z), x_{n+1} - z \rangle + \gamma_{n} \langle x_{n} - z, x_{n+1} - z \rangle + \langle ((1 - \gamma_{n}) I - \alpha_{n} \mu T)(y_{n}) - ((1 - \gamma_{n}) I - \alpha_{n} \mu T)(z), x_{n+1} - z \rangle \\ &\leq \alpha_{n} \rho \langle f(x_{n}) - f(z), x_{n+1} - z \rangle + \alpha_{n} \langle \rho f(z) - \mu T(z), x_{n+1} - z \rangle + \gamma_{n} \| x_{n} - z \| \, \| x_{n+1} - z \| + (1 - \gamma_{n} - \alpha_{n} \nu) \| y_{n} - z \| \, \| x_{n+1} - z \| \\ &\leq \alpha_{n} \rho \tau \| x_{n} - z \| \, \| x_{n+1} - z \| + \alpha_{n} \langle \rho f(z) - \mu T(z), x_{n+1} - z \rangle + \gamma_{n} \| x_{n} - z \| \, \| x_{n+1} - z \| + (1 - \gamma_{n} - \alpha_{n} \nu) \| x_{n} - z \| \, \| x_{n+1} - z \| \\ &= (1 - \alpha_{n}(\nu - \rho \tau)) \| x_{n} - z \| \, \| x_{n+1} - z \| + \alpha_{n} \langle \rho f(z) - \mu T(z), x_{n+1} - z \rangle \\ &\leq \frac{1 - \alpha_{n}(\nu - \rho \tau)}{2} \big( \| x_{n} - z \|^{2} + \| x_{n+1} - z \|^{2} \big) + \alpha_{n} \langle \rho f(z) - \mu T(z), x_{n+1} - z \rangle \\ &\leq \frac{1 - \alpha_{n}(\nu - \rho \tau)}{2} \| x_{n} - z \|^{2} + \frac{1}{2} \| x_{n+1} - z \|^{2} + \alpha_{n} \langle \rho f(z) - \mu T(z), x_{n+1} - z \rangle, \end{aligned}$$
(3.14)
which implies that
$$\| x_{n+1} - z \|^{2} \leq (1 - \alpha_{n}(\nu - \rho \tau)) \| x_{n} - z \|^{2} + 2 \alpha_{n} \langle \rho f(z) - \mu T(z), x_{n+1} - z \rangle.$$

Let $\upsilon_{n} = \alpha_{n}(\nu - \rho \tau)$ and $\delta_{n} = 2 \alpha_{n} \langle \rho f(z) - \mu T(z), x_{n+1} - z \rangle$.

We have
$$\sum_{n=1}^{\infty} \alpha_{n} = \infty \quad \text{and} \quad \limsup_{n \to \infty} \Big\{ \frac{1}{\nu - \rho \tau} \langle \rho f(z) - \mu T(z), x_{n+1} - z \rangle \Big\} \leq 0.$$
It follows that
$$\sum_{n=1}^{\infty} \upsilon_{n} = \infty \quad \text{and} \quad \limsup_{n \to \infty} \frac{\delta_{n}}{\upsilon_{n}} \leq 0.$$

Thus all the conditions of Lemma 2.8 are satisfied. Hence we deduce that $x_{n} \to z$. This completes the proof. □

4 Applications

To verify the theoretical assertions, we consider the following example.

Example 4.1 Let $\alpha_{n} = \frac{1}{3n}$, $\beta_{n} = \frac{1}{n^{3}}$, and $\gamma_{n} = \frac{2n - 1}{3n}$.

Since $\gamma_{n} = \frac{2n - 1}{3n} \to \frac{2}{3} \in (0, 1)$, the sequence $\{\gamma_{n}\}$ satisfies condition (a).

We have
$$\lim_{n \to \infty} \alpha_{n} = \frac{1}{3} \lim_{n \to \infty} \frac{1}{n} = 0 \quad \text{and} \quad \sum_{n=1}^{\infty} \alpha_{n} = \frac{1}{3} \sum_{n=1}^{\infty} \frac{1}{n} = \infty.$$

The sequence { α n } satisfies condition (b).
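The original text verifies conditions (a) and (b) only; for completeness (this check is ours), condition (c) also holds: the mapping S chosen below is a 0-strict pseudo-contraction, so $\sigma = 0$, and
$$\beta_{n} = \frac{1}{n^{3}} \in [0, 1) \ \text{for all } n \geq 1, \qquad \lim_{n \to \infty} \beta_{n} = 0 = \beta < 1.$$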

Let ℝ be the set of real numbers, let $B_{1} = B_{2} = 0$, and define the mappings $T, S, f : \mathbb{R} \to \mathbb{R}$ by
$$T(x) = \frac{2x + 5}{7}, \qquad S(x) = \frac{x}{3}, \qquad f(x) = \frac{x}{14}, \quad \forall x \in \mathbb{R}.$$
It is easy to show that T is a 1-Lipschitzian and $\frac{1}{7}$-strongly monotone mapping (indeed, $T(x) - T(y) = \frac{2}{7}(x - y)$, so $|T(x) - T(y)| = \frac{2}{7}|x - y| \leq |x - y|$ and $\langle T(x) - T(y), x - y \rangle = \frac{2}{7}|x - y|^{2} \geq \frac{1}{7}|x - y|^{2}$), S is a 0-strict pseudo-contraction mapping (being nonexpansive), and f is $\frac{1}{7}$-Lipschitzian. Let the mapping $F_{2} : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be defined by
$$F_{2}(x, y) = -5x^{2} + x y + 4y^{2}, \quad \forall (x, y) \in \mathbb{R} \times \mathbb{R}.$$
By the definition of $F_{2}$, the point $u_{n} = T_{\mu_{2}}^{F_{2}}(x_{n})$ satisfies
$$0 \leq F_{2}(u_{n}, y) + \frac{1}{\mu_{2}} \langle y - u_{n}, u_{n} - x_{n} \rangle = -5u_{n}^{2} + u_{n} y + 4y^{2} + \frac{1}{\mu_{2}}( y - u_{n} )( u_{n} - x_{n} ), \quad \forall y \in \mathbb{R}.$$
Then
$$0 \leq \mu_{2}( -5u_{n}^{2} + u_{n} y + 4y^{2} ) + ( y u_{n} - y x_{n} - u_{n}^{2} + u_{n} x_{n} ) = 4\mu_{2} y^{2} + ( \mu_{2} u_{n} + u_{n} - x_{n} ) y - 5\mu_{2} u_{n}^{2} - u_{n}^{2} + u_{n} x_{n}, \quad \forall y \in \mathbb{R}.$$
Let $A(y) = 4\mu_{2} y^{2} + ( \mu_{2} u_{n} + u_{n} - x_{n} ) y - 5\mu_{2} u_{n}^{2} - u_{n}^{2} + u_{n} x_{n}$. Then $A(y)$ is a quadratic function of y with coefficients $a = 4\mu_{2}$, $b = \mu_{2} u_{n} + u_{n} - x_{n}$, and $c = -5\mu_{2} u_{n}^{2} - u_{n}^{2} + u_{n} x_{n}$. We determine the discriminant Δ of A as follows:
$$\begin{aligned} \Delta &= b^{2} - 4ac = ( \mu_{2} u_{n} + u_{n} - x_{n} )^{2} - 16\mu_{2}( -5\mu_{2} u_{n}^{2} - u_{n}^{2} + u_{n} x_{n} ) \\ &= 81\mu_{2}^{2} u_{n}^{2} + 18\mu_{2} u_{n}^{2} + u_{n}^{2} - 18\mu_{2} u_{n} x_{n} - 2u_{n} x_{n} + x_{n}^{2} \\ &= x_{n}^{2} + ( 81\mu_{2}^{2} + 18\mu_{2} + 1 ) u_{n}^{2} - 2x_{n} u_{n}( 9\mu_{2} + 1 ) \\ &= \big( x_{n} - u_{n}( 9\mu_{2} + 1 ) \big)^{2}. \end{aligned}$$
Since $A(y) \geq 0$ for all $y \in \mathbb{R}$ and $a = 4\mu_{2} > 0$, the quadratic A has at most one real root, so $\Delta \leq 0$; as Δ is a perfect square, $\Delta = 0$, and we obtain
$$u_{n} = \frac{x_{n}}{1 + 9\mu_{2}}.$$
(4.1)
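As a quick sanity check (this verification is ours, not part of the original text), substituting $x_{n} = (1 + 9\mu_{2}) u_{n}$ back into A gives
$$A(y) = 4\mu_{2} y^{2} - 8\mu_{2} u_{n} y + 4\mu_{2} u_{n}^{2} = 4\mu_{2}( y - u_{n} )^{2} \geq 0, \quad \forall y \in \mathbb{R},$$
so (4.1) indeed solves the resolvent inequality; the same computation, with $4\mu_{2}$ replaced by $2\mu_{1}$, applies to the quadratic $B(y)$ below and confirms (4.2).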
Let the mapping $F_{1} : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be defined by
$$F_{1}(x, y) = -3x^{2} + x y + 2y^{2}, \quad \forall (x, y) \in \mathbb{R} \times \mathbb{R}.$$
By the definition of $F_{1}$, the point $u_{n} = T_{\mu_{1}}^{F_{1}}(x_{n})$ satisfies
$$0 \leq F_{1}(u_{n}, y) + \frac{1}{\mu_{1}} \langle y - u_{n}, u_{n} - x_{n} \rangle = -3u_{n}^{2} + u_{n} y + 2y^{2} + \frac{1}{\mu_{1}}( y - u_{n} )( u_{n} - x_{n} ), \quad \forall y \in \mathbb{R}.$$
Then
$$0 \leq \mu_{1}( -3u_{n}^{2} + u_{n} y + 2y^{2} ) + ( y u_{n} - y x_{n} - u_{n}^{2} + u_{n} x_{n} ) = 2\mu_{1} y^{2} + ( \mu_{1} u_{n} + u_{n} - x_{n} ) y - 3\mu_{1} u_{n}^{2} - u_{n}^{2} + u_{n} x_{n}, \quad \forall y \in \mathbb{R}.$$
Let $B(y) = 2\mu_{1} y^{2} + ( \mu_{1} u_{n} + u_{n} - x_{n} ) y - 3\mu_{1} u_{n}^{2} - u_{n}^{2} + u_{n} x_{n}$. Then $B(y)$ is a quadratic function of y with coefficients $a = 2\mu_{1}$, $b = \mu_{1} u_{n} + u_{n} - x_{n}$, and $c = -3\mu_{1} u_{n}^{2} - u_{n}^{2} + u_{n} x_{n}$. We determine the discriminant Δ of B as follows:
$$\begin{aligned} \Delta &= b^{2} - 4ac = ( \mu_{1} u_{n} + u_{n} - x_{n} )^{2} - 8\mu_{1}( -3\mu_{1} u_{n}^{2} - u_{n}^{2} + u_{n} x_{n} ) \\ &= u_{n}^{2} + 10\mu_{1} u_{n}^{2} + 25\mu_{1}^{2} u_{n}^{2} - 2x_{n} u_{n} - 10\mu_{1} x_{n} u_{n} + x_{n}^{2} \\ &= ( u_{n} + 5\mu_{1} u_{n} )^{2} - 2x_{n}( u_{n} + 5\mu_{1} u_{n} ) + x_{n}^{2} \\ &= ( u_{n} + 5\mu_{1} u_{n} - x_{n} )^{2}. \end{aligned}$$
Since $B(y) \geq 0$ for all $y \in \mathbb{R}$ and $a = 2\mu_{1} > 0$, we must have $\Delta \leq 0$; as Δ is a perfect square, $\Delta = 0$, and we obtain
$$u_{n} = \frac{x_{n}}{1 + 5\mu_{1}}.$$
(4.2)
For every $n \geq 1$, from (4.1) and (4.2), we rewrite (3.1) as follows:
$$\begin{cases} z_{n} = \dfrac{x_{n}}{(1 + 5\mu_{1})(1 + 9\mu_{2})}, \\[2mm] y_{n} = \dfrac{z_{n}}{3n^{3}} + \Big( 1 - \dfrac{1}{n^{3}} \Big) z_{n}, \\[2mm] x_{n+1} = \Big( \dfrac{\rho}{42n} + \dfrac{2n - 1}{3n} \Big) x_{n} + \dfrac{(n + 1) y_{n}}{3n} - \dfrac{\mu( 2y_{n} + 5 )}{21n}. \end{cases}$$
In all tests we take $\rho = \frac{1}{15}$ and $\mu = \frac{1}{7}$. In our example $\eta = \frac{1}{7}$, $k = 1$, and $\tau = \frac{1}{7}$. It is easy to show that the parameters satisfy $0 < \mu < \frac{2\eta}{k^{2}}$ and $0 \leq \rho \tau < \nu$, where $\nu = \mu( \eta - \frac{\mu k^{2}}{2} )$. All codes were written in Matlab; the values of $\{z_{n}\}$, $\{y_{n}\}$, and $\{x_{n}\}$ for different n are reported in Tables 1 and 2.
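The original experiments were run in Matlab; the following is a small Python sketch of our own that reproduces the Algorithm 3.1 column of Table 1. The step sizes $\mu_{1}$ and $\mu_{2}$ are not stated explicitly in the surviving text; the n-dependent choices $\mu_{1} = \frac{n}{n+1}$ and $\mu_{2} = \frac{n}{2n+1}$ used below are an assumption that is consistent with the reported values of $z_{n}$ (the Algorithm 3.2 columns appear to correspond to replacing the first line by $z_{n} = x_{n}/(1 + 5\mu_{1})$ or $z_{n} = x_{n}/(1 + 9\mu_{2})$, respectively).

```python
# Schematic reproduction of the scalar example (Algorithm 3.1 column of Table 1).
# mu1 = n/(n+1) and mu2 = n/(2n+1) are assumed values, not stated in the surviving text.
rho, mu = 1/15, 1/7

def iterate(x1=10.0, steps=10):
    x = x1
    for n in range(1, steps + 1):
        mu1, mu2 = n/(n + 1), n/(2*n + 1)
        z = x / ((1 + 5*mu1) * (1 + 9*mu2))                  # z_n, cf. (4.1)-(4.2)
        y = z / (3*n**3) + (1 - 1/n**3) * z                  # y_n
        print(f"n = {n}: z_n = {z:.6f}, y_n = {y:.6f}, x_n = {x:.6f}")
        x = (rho/(42*n) + (2*n - 1)/(3*n)) * x \
            + (n + 1)*y/(3*n) - mu*(2*y + 5)/(21*n)          # x_{n+1}
    return x

iterate()            # initial value x_1 = 10 (Table 1); use iterate(-10.0) for Table 2
```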
Table 1 The values of {z_n}, {y_n}, and {x_n} with initial value x_1 = 10

        |          Algorithm 3.1            |     Algorithm 3.2 with F = F_1    |     Algorithm 3.2 with F = F_2
   n    |   z_n        y_n        x_n       |   z_n        y_n        x_n       |   z_n        y_n        x_n
   1    | 0.714286   0.238095   10.000000   | 2.857143   0.952381   10.000000   | 2.500000   0.833333   10.000000
   2    | 0.174115   0.159605    3.470684   | 0.908574   0.832860    3.937156   | 0.839002   0.769085    3.859410
   3    | 0.078010   0.076084    1.799806   | 0.497992   0.485696    2.365460   | 0.472515   0.460848    2.295072
   4    | 0.040919   0.040493    1.022977   | 0.303544   0.300382    1.517720   | 0.293530   0.290472    1.467648
   5    | 0.023015   0.022893    0.605373   | 0.193853   0.192819    1.001573   | 0.190192   0.189177    0.968249
   6    | 0.013425   0.013383    0.365708   | 0.126958   0.126566    0.671062   | 0.126044   0.125655    0.649610
   7    | 0.007982   0.007966    0.223090   | 0.084379   0.084215    0.453535   | 0.084628   0.084464    0.440068
   8    | 0.004782   0.004776    0.136314   | 0.056557   0.056483    0.307922   | 0.057242   0.057167    0.299676
   9    | 0.002859   0.002856    0.082755   | 0.038063   0.038028    0.209346   | 0.038845   0.038809    0.204446
  10    | 0.001685   0.001684    0.049394   | 0.025624   0.025607    0.142095   | 0.026353   0.026336    0.139297

Table 2 The values of {z_n}, {y_n}, and {x_n} with initial value x_1 = -10

        |          Algorithm 3.1               |     Algorithm 3.2 with F = F_1       |     Algorithm 3.2 with F = F_2
   n    |   z_n         y_n         x_n        |   z_n         y_n         x_n        |   z_n         y_n         x_n
   1    | -0.714286   -0.238095   -10.000000   | -2.857143   -0.952381   -10.000000   | -2.500000   -0.833333   -10.000000
   2    | -0.177527   -0.162733    -3.538711   | -0.924273   -0.847250    -4.005183   | -0.853791   -0.782642    -3.927438
   3    | -0.081028   -0.079027    -1.869430   | -0.513819   -0.501132    -2.440639   | -0.487908   -0.475861    -2.369839
   4    | -0.043427   -0.042974    -1.085664   | -0.317798   -0.314488    -1.588992   | -0.307701   -0.304496    -1.538505
   5    | -0.025092   -0.024958    -0.659998   | -0.206325   -0.205225    -1.066013   | -0.202795   -0.201714    -1.032413
   6    | -0.015158   -0.015111    -0.412926   | -0.137783   -0.137358    -0.728280   | -0.137124   -0.136701    -0.706715
   7    | -0.009444   -0.009426    -0.263964   | -0.093772   -0.093590    -0.504027   | -0.094344   -0.094161    -0.490588
   8    | -0.006031   -0.006023    -0.171898   | -0.064738   -0.064654    -0.352462   | -0.065776   -0.065690    -0.344356
   9    | -0.003937   -0.003934    -0.113970   | -0.045226   -0.045185    -0.248745   | -0.046372   -0.046330    -0.244064
  10    | -0.002627   -0.002626    -0.077009   | -0.031937   -0.031916    -0.177107   | -0.033029   -0.033007    -0.174582

Remark 4.1 Tables 1 and 2 and Figures 1 and 2 show that the sequences $\{z_{n}\}$, $\{y_{n}\}$, and $\{x_{n}\}$ converge to 0, where $\{0\} = \Omega \cap F(S)$. Tables 1 and 2 also show that the convergence of Algorithm 3.1 is faster than that of Algorithm 3.2.
Figure 1

The convergence of $\{z_{n}\}$, $\{y_{n}\}$, and $\{x_{n}\}$ with initial value $x_{1} = 10$ for Algorithm 3.1 and for Algorithm 3.2 with $F = F_{1}$ and with $F = F_{2}$.

Figure 2

The convergence of $\{z_{n}\}$, $\{y_{n}\}$, and $\{x_{n}\}$ with initial value $x_{1} = -10$ for Algorithm 3.1 and for Algorithm 3.2 with $F = F_{1}$ and with $F = F_{2}$.

5 Conclusions

In this paper, we have suggested and analyzed an iterative method for finding an approximate element of the common set of solutions of (1.1) and (1.6) in a real Hilbert space, which can be viewed as a refinement and improvement of some existing methods for solving an equilibrium problem and a hierarchical fixed point problem. Strong convergence of the proposed method is proved under mild assumptions. Furthermore, some preliminary numerical results are reported; they verify the theoretical assertions and show that our algorithm for the system of generalized equilibrium problems is more attractive in practice than our algorithm for a single generalized equilibrium problem.

Declarations

Acknowledgements

The author would like to thank Prof. Xindan Li, Dean of School of Management and Engineering of Nanjing University, for providing excellent research facilities.

Authors’ Affiliations

(1)
School of Management Science and Engineering, Nanjing University
(2)
ENSA, Ibn Zohr University

References

  1. Ceng LC, Yao JC: A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 2010, 72: 1922–1937. 10.1016/j.na.2009.09.033
  2. Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033. 10.1016/j.na.2008.02.042
  3. Ceng LC, Wang CY, Yao JC: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67: 375–390. 10.1007/s00186-007-0207-4
  4. Ceng LC, Ansari QH, Schaible S, Yao JC: Iterative methods for generalized equilibrium problems, systems of general generalized equilibrium problems and fixed point problems for nonexpansive mappings in Hilbert spaces. Fixed Point Theory 2011, 12(2): 293–308.
  5. Ansari QH: Metric Spaces: Including Fixed Point Theory and Set-Valued Maps. Narosa Publishing House, New Delhi; 2010.
  6. Reich S, Sabach S: Three strong convergence theorems regarding iterative methods for solving equilibrium problems in reflexive Banach spaces. Contemp. Math. 2012, 568: 225–240.
  7. Latif A, Al-Mazrooei AE, Alofi AS, Yao JC: Hybrid iterative method for systems of generalized equilibria with constraints of variational inclusion and fixed point problems. Fixed Point Theory Appl. 2014, 2014: Article ID 164.
  8. Ceng LC, Al-Mezel SA, Ansari QH: Implicit and explicit iterative methods for systems of variational inequalities and zeros of accretive operators. Abstr. Appl. Anal. 2013, 2013: Article ID 631382.
  9. Ansari QH, Yao JC: Systems of generalized variational inequalities and their applications. Appl. Anal. 2000, 76: 203–217. 10.1080/00036810008840877
  10. Aubin JP: Mathematical Methods of Game and Economic Theory. North-Holland, Amsterdam; 1979.
  11. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. II. Springer, New York; 2003.
  12. Verma RU: Projection methods, algorithms, and a new system of nonlinear variational inequalities. Comput. Math. Appl. 2001, 41: 1025–1031. 10.1016/S0898-1221(00)00336-9
  13. Verma RU: General convergence analysis for two-step projection methods and applications to variational problems. Appl. Math. Lett. 2005, 18: 1286–1292. 10.1016/j.aml.2005.02.026
  14. Ansari QH, Wong NC, Yao JC: The existence of nonlinear inequalities. Appl. Math. Lett. 1999, 12(5): 89–92. 10.1016/S0893-9659(99)00062-2
  15. Bnouhachem A: A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 2005, 309(1): 136–150. 10.1016/j.jmaa.2004.12.023
  16. Bnouhachem A: A new projection and contraction method for linear variational inequalities. J. Math. Anal. Appl. 2006, 314(2): 513–525. 10.1016/j.jmaa.2005.03.095
  17. Ansari QH, Lalitha CS, Mehta M: Generalized Convexity, Nonsmooth Variational Inequalities and Nonsmooth Optimization. CRC Press, Boca Raton; 2014.
  18. Ansari QH, Rehan A: Split feasibility and fixed point problems. In Nonlinear Analysis: Approximation Theory, Optimization and Applications. Edited by Ansari QH. Birkhäuser, Basel; 2014: 281–322.
  19. Zhou H: Convergence theorems of fixed points for k-strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2008, 69: 456–462. 10.1016/j.na.2007.05.032
  20. Yao Y, Cho YJ, Liou YC: Iterative algorithms for hierarchical fixed points problems and variational inequalities. Math. Comput. Model. 2010, 52(9–10): 1697–1705. 10.1016/j.mcm.2010.06.038
  21. Crombez G: A hierarchical presentation of operators with fixed points on Hilbert spaces. Numer. Funct. Anal. Optim. 2006, 27: 259–277. 10.1080/01630560600569957
  22. Mainge PE, Moudafi A: Strong convergence of an iterative method for hierarchical fixed-point problems. Pac. J. Optim. 2007, 3(3): 529–538.
  23. Moudafi A: Krasnoselski-Mann iteration for hierarchical fixed-point problems. Inverse Problems 2007, 23(4): 1635–1640. 10.1088/0266-5611/23/4/015
  24. Cianciaruso F, Marino G, Muglia L, Yao Y: On a two-steps algorithm for hierarchical fixed point problems and variational inequalities. J. Inequal. Appl. 2009, 2009: Article ID 208692.
  25. Gu G, Wang S, Cho YJ: Strong convergence algorithms for hierarchical fixed points problems and variational inequalities. J. Appl. Math. 2011, 2011: Article ID 164978.
  26. Marino G, Xu HK: Explicit hierarchical fixed point approach to variational inequalities. J. Optim. Theory Appl. 2011, 149(1): 61–78. 10.1007/s10957-010-9775-1
  27. Ceng LC, Ansari QH, Yao JC: Iterative methods for triple hierarchical variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 151: 489–512. 10.1007/s10957-011-9882-7
  28. Bnouhachem A, Noor MA: An iterative method for approximating the common solutions of a variational inequality, a mixed equilibrium problem and a hierarchical fixed point problem. J. Inequal. Appl. 2013, 2013: Article ID 490.
  29. Bnouhachem A: Algorithms of common solutions for a variational inequality, a split equilibrium problem and a hierarchical fixed point problem. Fixed Point Theory Appl. 2013, 2013: Article ID 278.
  30. Bnouhachem A: An iterative method for a common solution of generalized mixed equilibrium problems, variational inequalities, and hierarchical fixed point problems. Fixed Point Theory Appl. 2014, 2014: Article ID 155.
  31. Bnouhachem A: A hybrid iterative method for a combination of equilibrium problem, a combination of variational inequality problem and a hierarchical fixed point problem. Fixed Point Theory Appl. 2014, 2014: Article ID 163.
  32. Bnouhachem A, Al-Homidan S, Ansari QH: An iterative method for common solutions of equilibrium problems and hierarchical fixed point problems. Fixed Point Theory Appl. 2014, 2014: Article ID 194.
  33. Bnouhachem A: A modified projection method for a common solution of a system of variational inequalities, a split equilibrium problem and a hierarchical fixed-point problem. Fixed Point Theory Appl. 2014, 2014: Article ID 22.
  34. Bnouhachem A: Strong convergence algorithm for approximating the common solutions of a variational inequality, a mixed equilibrium problem and a hierarchical fixed-point problem. J. Inequal. Appl. 2014, 2014: Article ID 154.
  35. Ceng LC, Al-Mezel SA, Latif A: Hybrid viscosity approaches to general systems of variational inequalities with hierarchical fixed point problem constraints in Banach spaces. Abstr. Appl. Anal. 2014, 2014: Article ID 945985.
  36. Ceng LC, Khan AR, Ansari QH, Yao JC: Viscosity approximation methods for strongly positive and monotone operators. Fixed Point Theory 2009, 10(1): 35–71.
  37. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74(16): 5286–5302. 10.1016/j.na.2011.05.005
  38. Wang Y, Xu W: Strong convergence of a modified iterative algorithm for hierarchical fixed point problems and variational inequalities. Fixed Point Theory Appl. 2013, 2013: Article ID 121.
  39. Ansari QH, Ceng LC, Gupta H: Triple hierarchical variational inequalities. In Nonlinear Analysis: Approximation Theory, Optimization and Applications. Edited by Ansari QH. Birkhäuser, Basel; 2014: 231–280.
  40. Censor Y, Gibali A, Reich S: Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59(2): 301–323. 10.1007/s11075-011-9490-5
  41. Moudafi A: Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150: 275–283.
  42. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
  43. Combettes PL, Hirstoaga SA: Equilibrium programming using proximal-like algorithms. Math. Program. 1997, 78: 29–41.
  44. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Stud. Adv. Math. 28. Cambridge University Press, Cambridge; 1990.
  45. Zhou HY: Convergence theorems of fixed points for κ-strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2008, 69: 456–462. 10.1016/j.na.2007.05.032
  46. Deng BC, Chen T, Li ZF: Cyclic iterative method for strictly pseudononspreading in Hilbert space. J. Appl. Math. 2012, 2012: Article ID 435676.
  47. Suzuki T: Strong convergence of Krasnoselskii and Mann's type sequences for one parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 2005, 305(1): 227–239. 10.1016/j.jmaa.2004.11.017
  48. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332
  49. Acedo GL, Xu HK: Iterative methods for strict pseudo-contractions in Hilbert space. Nonlinear Anal. 2007, 67(7): 2258–2271. 10.1016/j.na.2006.08.036
  50. Marino G, Xu HK: Convergence of generalized proximal point algorithms. Commun. Pure Appl. Anal. 2004, 3: 791–808.

Copyright

© Bnouhachem; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.