
Iterative methods for triple hierarchical variational inequalities and common fixed point problems

Abstract

The purpose of this paper is to introduce a new iterative scheme for approximating the solution of a triple hierarchical variational inequality problem. Under suitable conditions on the parameters, we establish strong convergence of the proposed scheme to the solution of the triple hierarchical variational inequality problem, which is defined over the set of solutions of a variational inequality problem posed over the intersection of the set of common fixed points of a sequence of nearly nonexpansive mappings and the set of solutions of a classical variational inequality. Our strong convergence theorems extend and improve some known corresponding results in the contemporary literature for a wider class of nonexpansive type mappings in Hilbert spaces.

MSC:47J20, 47J25.

1 Introduction

The classical variational inequality problem, initially studied by Stampacchia [1] for a nonlinear operator $A:C\to H$, is the problem of finding $x^* \in D$ such that

\[ \langle Ax^*, y - x^* \rangle \ge 0, \quad \forall y \in D, \]
(1.1)

where $C$ is a nonempty closed convex subset of a real Hilbert space $H$ and $D$ is a nonempty closed convex subset of $C$. The variational inequality (1.1) is denoted by $\mathrm{VI}_D(C,A)$, and its set of solutions by $\Omega_D(C,A)$, that is,

\[ \Omega_D(C,A) = \{ x^* \in D : \langle Ax^*, y - x^* \rangle \ge 0,\ \forall y \in D \}. \]

For $C = D$, we write $\mathrm{VI}(C,A) := \mathrm{VI}_D(C,A)$ and $\Omega(C,A) := \Omega_D(C,A)$.

Many problems arising in various branches of pure and applied sciences can be studied in the framework of variational inequalities (see [2, 3]).

The equivalence between variational inequality and fixed point problems can be seen via the projection technique, which plays an important role in developing efficient methods for solving variational inequality problems and related optimization problems. The problem of finding fixed points of a nonexpansive mapping is a subject of current interest closely related to variational inequality problems in functional analysis.

Several authors (see [4–7] and the references therein) have studied the variational inequality problem defined over the set of fixed points of a nonexpansive mapping. Such a variational inequality is called a hierarchical variational inequality; it is defined as follows:

\[ \text{Find } x^* \in F(S) \text{ such that } \langle (I-T)x^*, y - x^* \rangle \ge 0, \quad \forall y \in F(S), \]

where $T$ and $S$ are two nonexpansive mappings from a nonempty closed convex subset $C$ of a real Hilbert space $H$ into itself, and $F(S)$ denotes the set of fixed points of the mapping $S$. One can easily observe that $\mathrm{VI}_{F(S)}(C, I-T)$ is equivalent to the fixed point problem $x^* = P_{F(S)}(Tx^*)$, that is, $x^*$ is a fixed point of the nonexpansive mapping $P_{F(S)}T$, where $P_{F(S)}$ is the metric projection from $H$ onto the nonempty closed convex subset $F(S)$ of $H$.

We now turn to a variational inequality problem that is defined over the set of solutions of a variational inequality and the set of fixed points of a nonexpansive mapping, and which therefore has a triple structure, in contrast with bilevel programming problems, hierarchical constrained optimization problems, or hierarchical fixed point problems. This kind of variational inequality is called a triple hierarchical variational inequality (see [8, 9]); it is also called a triple hierarchical constrained optimization problem (see [8]) and is defined as follows:

\[ \text{Find } x^* \in \Omega_{F(S)}(C,A) \text{ such that } \langle F x^*, y - x^* \rangle \ge 0, \quad \forall y \in \Omega_{F(S)}(C,A), \]
(1.2)

where $\Omega_{F(S)}(C,A)$ is the set of solutions of $\mathrm{VI}_{F(S)}(C,A)$, and the mappings $A$, $F$, and $S$ from a nonempty closed convex subset $C$ of a real Hilbert space $H$ into itself are, respectively, inverse strongly monotone, strongly monotone and Lipschitz continuous, and nonexpansive.

If $\Omega_{F(S)}(C, I-T)$ is nonempty, then the metric projection $P_{\Omega_{F(S)}(C,I-T)}$ is well defined. The minimum norm solution $x^*$ of $\mathrm{VI}_{F(S)}(C, I-T)$ exists, is unique, and is exactly the nearest point projection of the origin onto $\Omega_{F(S)}(C,I-T)$, that is, $x^* = P_{\Omega_{F(S)}(C,I-T)}(0)$. Equivalently, $x^*$ is the unique solution of the quadratic minimization problem

\[ \|x^*\|^2 = \min\{ \|x\|^2 : x \in \Omega_{F(S)}(C,I-T) \}. \]

Finding this minimum norm solution $x^*$ is an interesting problem. In this context, Yao et al. [10] proposed an implicit and an explicit iterative scheme to find the minimum norm solution $x^*$ of $\mathrm{VI}_{F(S)}(C,I-T)$, and proved two strong convergence results by regularizing the nonexpansive mapping $T$ with contractions.

Recently, motivated by the results of Yao et al. [10], Ceng et al. [11] introduced and studied two iterative schemes, one implicit and one explicit, and proved strong convergence results for both schemes under suitable conditions on the parameters for the considered triple hierarchical variational inequalities. Hybrid steepest-descent-like methods with variable parameters for triple hierarchical variational inequalities are studied in Ceng et al. [12]. The importance of triple hierarchical variational inequalities and a nice survey of the topic are given in [13].

In 2005, the first author introduced the class of nearly nonexpansive mappings [14, 15], an important generalization of the class of nonexpansive mappings. Let $C$ be a nonempty subset of a Banach space $X$. Fix a sequence $\{a_n\}$ in $[0,\infty)$ with $a_n \to 0$. A mapping $T:C\to C$ is said to be nearly nonexpansive with respect to the sequence $\{a_n\}$ if for each $n \in \mathbb{N}$,

\[ \|T^n x - T^n y\| \le \|x - y\| + a_n \quad \text{for all } x, y \in C. \]

We now recall the notion of a sequence of nearly nonexpansive mappings.

Let $C$ be a nonempty subset of a Banach space $X$ and let $\mathcal{T} := \{T_n\}_{n=1}^{\infty}$ be a sequence of mappings from $C$ into itself. We denote by $F(\mathcal{T})$ the set of common fixed points of the sequence $\mathcal{T}$, that is, $F(\mathcal{T}) = \bigcap_{n=1}^{\infty} F(T_n)$. Fix a sequence $\{a_n\}$ in $[0,\infty)$ with $a_n \to 0$, and let $\{T_n\}$ be a sequence of mappings from $C$ into $X$. Then the sequence $\mathcal{T} := \{T_n\}$ is called a sequence of nearly nonexpansive mappings (see [16]) with respect to the sequence $\{a_n\}$ if

\[ \|T_n x - T_n y\| \le \|x - y\| + a_n \quad \text{for all } x, y \in C \text{ and } n \in \mathbb{N}. \]

Clearly, the class of sequences of nearly nonexpansive mappings is wider than the class of sequences of nonexpansive mappings.

Motivated and inspired by the works mentioned above, we introduce an explicit iterative scheme and prove that the generated sequence converges strongly to the unique solution of the considered triple hierarchical variational inequality problem, which is defined over the set of solutions of a variational inequality problem posed over the intersection of the set of common fixed points of a sequence of nearly nonexpansive mappings and the set of solutions of a classical variational inequality problem. Our results generalize the result of Ceng et al. [11] to sequences of nearly nonexpansive mappings and in several other respects. Our results also extend the result of Yao et al. [10] and many other related works.

2 Preliminaries

Throughout this paper, we denote by $\to$ and $\rightharpoonup$ strong and weak convergence, respectively. The symbol $\mathbb{N}$ stands for the set of all natural numbers, and $\omega_w(\{x_n\})$ denotes the set of all weak limit points of the sequence $\{x_n\}$.

Let $C$ be a nonempty subset of a real Hilbert space $H$ with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. A mapping $T:C\to H$ is called

(1) monotone if
\[ \langle Tx - Ty, x - y\rangle \ge 0 \quad \text{for all } x, y \in C; \]

(2) $\eta$-strongly monotone if there exists a positive real number $\eta$ such that
\[ \langle Tx - Ty, x - y\rangle \ge \eta\|x - y\|^2 \quad \text{for all } x, y \in C; \]

(3) $\alpha$-inverse strongly monotone if there exists a positive real number $\alpha$ such that
\[ \langle Tx - Ty, x - y\rangle \ge \alpha\|Tx - Ty\|^2 \quad \text{for all } x, y \in C; \]

(4) $k$-Lipschitzian if there exists a constant $k > 0$ such that
\[ \|Tx - Ty\| \le k\|x - y\| \quad \text{for all } x, y \in C; \]

(5) $\rho$-contraction if there exists a constant $\rho \in (0,1)$ such that
\[ \|Tx - Ty\| \le \rho\|x - y\| \quad \text{for all } x, y \in C; \]

(6) nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$;

(7) $\lambda$-strictly pseudocontractive if there exists $\lambda \in [0,1)$ such that
\[ \|Tx - Ty\|^2 \le \|x - y\|^2 + \lambda\|(I - T)x - (I - T)y\|^2 \quad \text{for all } x, y \in C, \]

where $I$ is the identity mapping. Note that if $T:C\to H$ is $\lambda$-strictly pseudocontractive, then the mapping $A := I - T$ is $\frac{1-\lambda}{2}$-inverse strongly monotone.

Let $C$ be a nonempty closed convex subset of $H$. Then, for any $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C(x)$, such that

\[ \|x - P_C(x)\| = \inf\{\|x - y\| : y \in C\} =: d(x, C). \]

The mapping $P_C$ is called the metric projection from $H$ onto $C$ (see Agarwal et al. [14] for further information related to $P_C$).
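
For intuition, here is a minimal Python sketch (not from the paper) of metric projections onto two simple closed convex sets, a box and a Euclidean ball in $\mathbb{R}^n$; the function names and test values are illustrative assumptions only.

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection of x onto the box [lo, hi]^n (componentwise clipping)."""
    return np.clip(x, lo, hi)

def project_ball(x, center, radius):
    """Metric projection of x onto the closed ball B(center, radius)."""
    d = x - center
    norm = np.linalg.norm(d)
    if norm <= radius:
        return x.copy()                    # x is already in the ball
    return center + radius * d / norm      # rescale onto the boundary

# The projection is the unique nearest point of C to x.
x = np.array([2.0, -3.0])
print(project_box(x, 0.0, 1.0))            # -> [1. 0.]
print(project_ball(x, np.zeros(2), 1.0))   # -> x / ||x||
```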

Let $A:C\to H$ be a monotone and $k$-Lipschitz continuous mapping and let $N_C(v)$ be the normal cone to $C$ at $v \in C$, i.e.,

\[ N_C(v) = \{ w \in H : \langle v - y, w\rangle \ge 0 \text{ for all } y \in C \}. \]

Define

\[ Tv = \begin{cases} Av + N_C(v), & \text{if } v \in C, \\ \emptyset, & \text{if } v \notin C. \end{cases} \]

Then $T$ is maximal monotone, and $0 \in Tv$ if and only if $v \in \Omega(C,A)$.

Let $C$ be a nonempty subset of a real Hilbert space $H$ and let $T_1, T_2 : C \to H$ be two mappings. We denote by $\mathcal{B}(C)$ the collection of all bounded subsets of $C$. The deviation between $T_1$ and $T_2$ on $B \in \mathcal{B}(C)$ [16], denoted by $D_B(T_1, T_2)$, is defined by

\[ D_B(T_1, T_2) = \sup\{ \|T_1x - T_2x\| : x \in B \}. \]

The following lemmas will be needed to prove our main results.

Lemma 2.1 ([17])

The metric projection mapping $P_C$ is characterized by the following properties:

(i) $P_C(x) \in C$ for all $x \in H$;

(ii) $\langle x - P_C(x), P_C(x) - y\rangle \ge 0$ for all $x \in H$ and $y \in C$;

(iii) $\|x - y\|^2 \ge \|x - P_C(x)\|^2 + \|y - P_C(x)\|^2$ for all $x \in H$ and $y \in C$;

(iv) $\langle P_C(x) - P_C(y), x - y\rangle \ge \|P_C(x) - P_C(y)\|^2$ for all $x, y \in H$.

Lemma 2.2 ([18])

Let $C$ be a nonempty subset of a real Hilbert space $H$. Suppose that $\lambda \in (0,1)$ and $\mu > 0$. Let $F:C\to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator on $C$. Define the mapping $W:C\to H$ by

\[ Wx = x - \lambda\mu F(x) \quad \text{for all } x \in C. \]

Then $W$ is a contraction provided $\mu < \frac{2\eta}{k^2}$. More precisely, for $\mu \in (0, \frac{2\eta}{k^2})$,

\[ \|Wx - Wy\| \le (1 - \lambda\tau)\|x - y\| \quad \text{for all } x, y \in C, \]

where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)} \in (0,1]$.
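
As a quick numerical sanity check (not part of the paper), the following Python snippet verifies the contraction bound of Lemma 2.2 for one particular linear operator $F(x) = Ax$; the matrix $A$ and the parameter choices are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric positive definite F(x) = Ax is k-Lipschitzian and eta-strongly
# monotone with k = lambda_max(A) and eta = lambda_min(A).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
eigs = np.linalg.eigvalsh(A)
eta, k = eigs[0], eigs[-1]

mu = 0.9 * 2 * eta / k**2            # any mu in (0, 2*eta/k**2)
lam = 0.5                            # any lambda in (0, 1)
tau = 1 - np.sqrt(1 - mu * (2 * eta - mu * k**2))

W = lambda x: x - lam * mu * (A @ x)

# Check ||Wx - Wy|| <= (1 - lam*tau) ||x - y|| on random pairs.
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert np.linalg.norm(W(x) - W(y)) <= (1 - lam * tau) * np.linalg.norm(x - y) + 1e-12
print("contraction bound holds, 1 - lam*tau =", 1 - lam * tau)
```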

Lemma 2.3 ([14])

Let $T$ be a nonexpansive self-mapping of a nonempty closed convex subset $C$ of a real Hilbert space $H$. Then $I - T$ is demiclosed at zero, i.e., if $\{x_n\}$ is a sequence in $C$ converging weakly to some $x \in C$ and the sequence $\{(I-T)x_n\}$ converges strongly to $0$, then $x \in F(T)$.

Lemma 2.4 ([19])

Assume that $\{s_n\}$ is a sequence of nonnegative real numbers such that

\[ s_{n+1} \le (1 - \alpha_n)s_n + \alpha_n\beta_n \quad \text{for all } n \in \mathbb{N}, \]

where $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences of real numbers which satisfy the conditions:

(i) $\{\alpha_n\}_{n=1}^{\infty} \subset (0,1)$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;

(ii) $\limsup_{n\to\infty}\beta_n \le 0$, or

(ii)′ $\sum_{n=1}^{\infty}\alpha_n\beta_n$ is convergent.

Then $\lim_{n\to\infty}s_n = 0$.

Lemma 2.5 ([20])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $\lambda_i > 0$ ($i = 1, 2, 3, \ldots, N$) be such that $\sum_{i=1}^{N}\lambda_i = 1$. Let $T_1, T_2, T_3, \ldots, T_N:C\to C$ be nonexpansive mappings with $\bigcap_{i=1}^{N}F(T_i) \neq \emptyset$ and let $T = \sum_{i=1}^{N}\lambda_iT_i$. Then $T$ is a nonexpansive mapping from $C$ into itself and $F(T) = \bigcap_{i=1}^{N}F(T_i)$.

Proposition 2.1 ([21])

Let $C$ be a nonempty subset of a real Hilbert space $H$ and let $A:C\to H$ be an $\alpha$-inverse strongly monotone mapping. Then the mapping $I - tA$ is nonexpansive from $C$ into $H$ whenever $0 \le t \le 2\alpha$.
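
For illustration (again not from the paper), the scalar mapping $A(x) = cx$ with $c > 0$ is $\frac{1}{c}$-inverse strongly monotone, and the sketch below checks numerically that $I - tA$ is nonexpansive for $t \in [0, 2\alpha]$; the constant $c$ and the test points are arbitrary assumptions.

```python
import numpy as np

c = 2.0                     # A(x) = c*x is (1/c)-inverse strongly monotone
alpha = 1.0 / c
A = lambda x: c * x

rng = np.random.default_rng(1)
for t in np.linspace(0.0, 2 * alpha, 11):    # any t in [0, 2*alpha]
    G = lambda x: x - t * A(x)               # the mapping I - tA
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert np.linalg.norm(G(x) - G(y)) <= np.linalg.norm(x - y) + 1e-12
print("I - tA is nonexpansive for all t in [0, 2*alpha]")
```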

3 Main results

Theorem 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F:C\to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator and let $g:C\to H$ be a $\rho$-contraction mapping. Let $S:C\to C$ be a nonexpansive mapping and $A:C\to H$ an $\alpha$-inverse strongly monotone mapping. Let $\mathcal{T} = \{T_n\}$ be a sequence of nearly nonexpansive mappings from $C$ into itself with respect to a sequence $\{a_n\}$ such that $\sum_{n=1}^{\infty}D_B(T_n, T_{n+1}) < \infty$ for all $B \in \mathcal{B}(C)$ and $F(\mathcal{T}) \cap \Omega(C,A) \neq \emptyset$, and let $T$ be the mapping from $C$ into itself defined by $Tx = \lim_{n\to\infty}T_nx$ for all $x \in C$. Suppose that $F(T) = F(\mathcal{T})$, $0 < \mu < \frac{2\eta}{k^2}$, and $0 < \gamma \le \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$. Assume that $\Omega$, the set of solutions of the hierarchical variational inequality of finding $z^* \in F(\mathcal{T}) \cap \Omega(C,A)$ such that

\[ \langle(\mu F - \gamma S)z^*, z - z^*\rangle \ge 0, \quad \forall z \in F(\mathcal{T}) \cap \Omega(C,A), \]
(3.1)

is nonempty. Consider the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the following iterative process:

\[ \begin{cases} x_1 \in C, \\ y_n = T_nP_C[x_n - t_nAx_n], \\ x_{n+1} = P_C\bigl[\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n\bigr] \end{cases} \]
(3.2)

for all $n \in \mathbb{N}$, where $\{\alpha_n\}$, $\{\lambda_n\}$ are sequences in $(0,1)$ and $\{t_n\}$ is a sequence in $[a,b]$ (for some $a$, $b$ with $0 < a < b < 2\alpha$) satisfying the following conditions:

(i) $\lim_{n\to\infty}\lambda_n = 0$, $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n\lambda_n = \infty$;

(ii) $\lim_{n\to\infty}\frac{|\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1}|}{\alpha_n\lambda_n^2} = 0$ and $\lim_{n\to\infty}\frac{|\lambda_n - \lambda_{n-1}|}{\alpha_n\lambda_n^2\lambda_{n-1}} = 0$;

(iii) $\lim_{n\to\infty}\frac{D_B(T_n, T_{n+1})}{\alpha_{n+1}\lambda_{n+1}^2} = 0$ for each $B \in \mathcal{B}(C)$ and $\sum_{n=1}^{\infty}|t_{n+1} - t_n| < \infty$;

(iv) $\lim_{n\to\infty}\frac{\lambda_n^{1/\theta}}{\alpha_n} = 0$, $\lim_{n\to\infty}\frac{a_n}{\alpha_n\lambda_n^2} = 0$ and $\lim_{n\to\infty}\frac{|t_n - t_{n-1}|}{\alpha_n\lambda_n^2} = 0$;

(v) there exist constants $\bar{k} > 0$ and $\theta > 0$ satisfying
\[ \|x - T_nx\| \ge \bar{k}\bigl[d\bigl(x, F(\mathcal{T}) \cap \Omega(C,A)\bigr)\bigr]^{\theta} \quad \text{for all } x \in C \text{ and } n \in \mathbb{N}. \]

If the generated sequence $\{x_n\}$ is bounded and $\lim_{n\to\infty}\frac{\|x_n - P_C[x_n - t_nAx_n]\|}{\lambda_n} = 0$, then it converges strongly to a point $x^* \in F(\mathcal{T}) \cap \Omega(C,A)$, where $x^*$ is the unique solution of the triple hierarchical variational inequality of finding $x^* \in \Omega$ such that

\[ \langle(\mu F - \gamma g)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]
(3.3)
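
Before turning to the proof, the following is a minimal Python sketch (not part of the paper) of how the explicit scheme (3.2) might be implemented for user-supplied operators on $\mathbb{R}^d$; the function signature, the parameter callables, and the fixed iteration count are illustrative assumptions rather than part of the theorem.

```python
import numpy as np

def triple_hierarchical_iteration(x1, project_C, A, S, F, g, T_seq,
                                  alphas, lams, ts, mu, gamma, n_iters=200):
    """One possible implementation of scheme (3.2):
        y_n     = T_n( P_C[x_n - t_n * A(x_n)] )
        x_{n+1} = P_C[ lam_n*gamma*(alpha_n*g(x_n) + (1-alpha_n)*S(x_n))
                       + y_n - lam_n*mu*F(y_n) ].
    T_seq(n, z) applies T_n to z; alphas, lams, ts are callables of n.
    """
    x = np.asarray(x1, dtype=float)
    for n in range(1, n_iters + 1):
        z = project_C(x - ts(n) * A(x))
        y = T_seq(n, z)
        u = lams(n) * gamma * (alphas(n) * g(x) + (1 - alphas(n)) * S(x)) \
            + y - lams(n) * mu * F(y)
        x = project_C(u)
    return x
```

For the particular data of Example 5.1 in Section 5, this recursion reduces to a scalar iteration converging to $\frac{1}{2}$.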

Proof First of all, we assume that $\{x_n\}$ is bounded and $\lim_{n\to\infty}\frac{\|x_n - P_C[x_n - t_nAx_n]\|}{\lambda_n} = 0$. We divide the proof into several steps.

Step 1. $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$.

Set $u_n = \lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n$ and $\gamma_n = (1-\rho)\gamma\lambda_n\alpha_n$. Then we have

\[
\begin{aligned}
u_n - u_{n-1} ={}& \alpha_n\lambda_n\gamma\bigl[g(x_n) - g(x_{n-1})\bigr] + \lambda_n(1-\alpha_n)\gamma(Sx_n - Sx_{n-1}) + \bigl[(I - \lambda_n\mu F)y_n - (I - \lambda_n\mu F)y_{n-1}\bigr] \\
&+ (\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1})\gamma\bigl[g(x_{n-1}) - Sx_{n-1}\bigr] + (\lambda_n - \lambda_{n-1})(\gamma Sx_{n-1} - \mu Fy_{n-1}).
\end{aligned}
\]

From (3.2), we have

\[
\begin{aligned}
\|x_{n+1} - x_n\| &= \|P_C(u_n) - P_C(u_{n-1})\| \le \|u_n - u_{n-1}\| \\
&\le \alpha_n\lambda_n\gamma\|g(x_n) - g(x_{n-1})\| + \lambda_n(1-\alpha_n)\gamma\|Sx_n - Sx_{n-1}\| + \|(I - \lambda_n\mu F)y_n - (I - \lambda_n\mu F)y_{n-1}\| \\
&\quad + |\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1}|\,\gamma\|g(x_{n-1}) - Sx_{n-1}\| + |\lambda_n - \lambda_{n-1}|\,\|\gamma Sx_{n-1} - \mu Fy_{n-1}\| \\
&\le \alpha_n\lambda_n\gamma\rho\|x_n - x_{n-1}\| + \lambda_n(1-\alpha_n)\gamma\|x_n - x_{n-1}\| + (1 - \lambda_n\tau)\|y_n - y_{n-1}\| \\
&\quad + |\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1}|M + |\lambda_n - \lambda_{n-1}|M,
\end{aligned}
\]
(3.4)

where $M$ is a constant such that

\[ M = \sup_{n\in\mathbb{N}}\bigl\{\gamma\|g(x_n) - S(x_n)\| + \|\gamma Sx_n - \mu Fy_n\|\bigr\}. \]

Set $z_n := P_C(x_n - t_nAx_n)$ and $B = \{z_n\}$. Since $\{x_n\}$ is bounded, it follows that $B \in \mathcal{B}(C)$. Now we have

\[
\begin{aligned}
\|y_{n+1} - y_n\| &= \|T_{n+1}P_C(x_{n+1} - t_{n+1}Ax_{n+1}) - T_nP_C(x_n - t_nAx_n)\| \\
&\le \|T_{n+1}P_C(x_{n+1} - t_{n+1}Ax_{n+1}) - T_{n+1}P_C(x_n - t_nAx_n)\| + \|T_{n+1}P_C(x_n - t_nAx_n) - T_nP_C(x_n - t_nAx_n)\| \\
&\le \|P_C(x_{n+1} - t_{n+1}Ax_{n+1}) - P_C(x_n - t_nAx_n)\| + D_B(T_{n+1}, T_n) + a_{n+1} \\
&\le \|(x_{n+1} - t_{n+1}Ax_{n+1}) - (x_n - t_nAx_n)\| + D_B(T_{n+1}, T_n) + a_{n+1} \\
&\le \|x_{n+1} - x_n\| + |t_{n+1} - t_n|\,\|Ax_n\| + D_B(T_{n+1}, T_n) + a_{n+1}.
\end{aligned}
\]
(3.5)

From (3.4) and (3.5), we obtain

\[
\begin{aligned}
\|x_{n+1} - x_n\| &\le \alpha_n\lambda_n\gamma\rho\|x_n - x_{n-1}\| + \lambda_n(1-\alpha_n)\gamma\|x_n - x_{n-1}\| + |\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1}|M + |\lambda_n - \lambda_{n-1}|M \\
&\quad + (1 - \lambda_n\tau)\bigl[\|x_n - x_{n-1}\| + D_B(T_n, T_{n-1}) + |t_n - t_{n-1}|\,\|Ax_{n-1}\| + a_n\bigr] \\
&\le \bigl(1 - (1-\rho)\gamma\lambda_n\alpha_n\bigr)\|x_n - x_{n-1}\| + M\bigl(|\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1}| + |\lambda_n - \lambda_{n-1}|\bigr) \\
&\quad + (1 - \lambda_n\tau)\bigl[D_B(T_n, T_{n-1}) + |t_n - t_{n-1}|\,\|Ax_{n-1}\| + a_n\bigr] \\
&\le (1 - \gamma_n)\|x_n - x_{n-1}\| + M\bigl(|\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1}| + |\lambda_n - \lambda_{n-1}|\bigr) + D_B(T_n, T_{n-1}) + N|t_n - t_{n-1}| + a_n \\
&\le (1 - \gamma_n)\|x_n - x_{n-1}\| + \gamma_n\Bigl[\frac{M(|\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1}| + |\lambda_n - \lambda_{n-1}|)}{\gamma_n} + \frac{D_B(T_n, T_{n-1})}{\gamma_n} + \frac{N|t_n - t_{n-1}|}{\gamma_n} + \frac{a_n}{\gamma_n}\Bigr],
\end{aligned}
\]
(3.6)

where $N = \sup_{n\in\mathbb{N}}\{\|Ax_n\|\}$. Note that $\lim_{n\to\infty}\frac{a_n}{\alpha_n\lambda_n} = 0$ and $\sum_{n=1}^{\infty}\alpha_n\lambda_n = \infty$. Therefore, from conditions (ii), (iii), and Lemma 2.4, we have $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$.

Step 2. $\|Ax_n - Au\| \to 0$ for $u \in F(\mathcal{T}) \cap \Omega(C,A)$ and $\frac{\|x_{n+1} - x_n\|}{\lambda_n} \to 0$ as $n \to \infty$.

Set $d_n = 2a_n\|z_n - u\| + a_n^2$ and $\varepsilon_n = 2\lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\|\,\|y_n - u\| + d_n$. One can observe that

\[
\begin{aligned}
\|z_n - u\|^2 &= \|P_C(x_n - t_nAx_n) - P_C(u - t_nAu)\|^2 \le \|(x_n - t_nAx_n) - (u - t_nAu)\|^2 = \|(x_n - u) - t_n(Ax_n - Au)\|^2 \\
&= \|x_n - u\|^2 - 2t_n\langle x_n - u, Ax_n - Au\rangle + t_n^2\|Ax_n - Au\|^2 \\
&\le \|x_n - u\|^2 - t_n(2\alpha - t_n)\|Ax_n - Au\|^2 \le \|x_n - u\|^2 - a(2\alpha - b)\|Ax_n - Au\|^2.
\end{aligned}
\]

We also have

\[ \|y_n - u\|^2 = \|T_nz_n - T_nu\|^2 \le \bigl(\|z_n - u\| + a_n\bigr)^2 = \|z_n - u\|^2 + 2a_n\|z_n - u\| + a_n^2 = \|z_n - u\|^2 + d_n. \]

From (3.2), we have

\[
\begin{aligned}
\|x_{n+1} - u\|^2 &= \|P_C[\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n] - P_C(u)\|^2 \\
&\le \|\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n - u\|^2 \\
&= \|\lambda_n\bigl(\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\bigr) + (I - \lambda_n\mu F)(y_n) - (I - \lambda_n\mu F)(u)\|^2 \\
&\le \bigl[\lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\| + (1 - \lambda_n\tau)\|y_n - u\|\bigr]^2 \\
&\le \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\|^2 + (1 - \lambda_n\tau)\bigl(\|z_n - u\|^2 + d_n\bigr) \\
&\quad + 2\lambda_n(1 - \lambda_n\tau)\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\|\,\|y_n - u\| \\
&\le \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\|^2 + \|z_n - u\|^2 + \varepsilon_n \\
&\le \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\|^2 + \|x_n - u\|^2 - a(2\alpha - b)\|Ax_n - Au\|^2 + \varepsilon_n.
\end{aligned}
\]
(3.7)

Thus, we get

\[
\begin{aligned}
a(2\alpha - b)\|Ax_n - Au\|^2 &\le \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\|^2 + \bigl(\|x_n - u\|^2 - \|x_{n+1} - u\|^2\bigr) + \varepsilon_n \\
&\le \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\|^2 + \|x_n - x_{n+1}\|\bigl(\|x_n - u\| + \|x_{n+1} - u\|\bigr) + \varepsilon_n.
\end{aligned}
\]

Since $\lambda_n \to 0$, $\varepsilon_n \to 0$, and $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$, we obtain $\|Ax_n - Au\| \to 0$ as $n \to \infty$. From (3.6), we have

\[
\begin{aligned}
\frac{\|x_{n+1} - x_n\|}{\lambda_n} &\le (1 - \gamma_n)\frac{\|x_n - x_{n-1}\|}{\lambda_n} + \frac{M(|\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1}| + |\lambda_n - \lambda_{n-1}|)}{\lambda_n} + \frac{D_B(T_n, T_{n-1})}{\lambda_n} + \frac{N|t_n - t_{n-1}|}{\lambda_n} + \frac{a_n}{\lambda_n} \\
&= (1 - \gamma_n)\frac{\|x_n - x_{n-1}\|}{\lambda_{n-1}} + (1 - \gamma_n)\Bigl(\frac{\|x_n - x_{n-1}\|}{\lambda_n} - \frac{\|x_n - x_{n-1}\|}{\lambda_{n-1}}\Bigr) \\
&\quad + \frac{M(|\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1}| + |\lambda_n - \lambda_{n-1}|)}{\lambda_n} + \frac{D_B(T_n, T_{n-1})}{\lambda_n} + \frac{N|t_n - t_{n-1}|}{\lambda_n} + \frac{a_n}{\lambda_n} \\
&\le (1 - \gamma_n)\frac{\|x_n - x_{n-1}\|}{\lambda_{n-1}} + \alpha_n\lambda_n\|x_n - x_{n-1}\|\frac{1}{\alpha_n\lambda_n}\Bigl|\frac{1}{\lambda_n} - \frac{1}{\lambda_{n-1}}\Bigr| \\
&\quad + M\alpha_n\lambda_n\frac{|\alpha_n\lambda_n - \alpha_{n-1}\lambda_{n-1}| + |\lambda_n - \lambda_{n-1}|}{\alpha_n\lambda_n^2} + \alpha_n\lambda_n\Bigl(\frac{D_B(T_n, T_{n-1})}{\alpha_n\lambda_n^2} + \frac{N|t_n - t_{n-1}|}{\alpha_n\lambda_n^2} + \frac{a_n}{\alpha_n\lambda_n^2}\Bigr).
\end{aligned}
\]

Notice that $\lim_{n\to\infty}\frac{a_n}{\alpha_n\lambda_n^2} = 0$, $\lim_{n\to\infty}\frac{|t_n - t_{n-1}|}{\alpha_n\lambda_n^2} = 0$, and $\sum_{n=1}^{\infty}\alpha_n\lambda_n = \infty$. Thus, using conditions (ii) and (iii) and applying Lemma 2.4, we have

\[ \lim_{n\to\infty}\frac{\|x_{n+1} - x_n\|}{\lambda_n} = 0. \]
(3.8)

Step 3. $\|x_n - z_n\| \to 0$ as $n \to \infty$.

Let $u \in F(\mathcal{T}) \cap \Omega(C,A)$. Then, using Lemma 2.1(iv), we have

\[
\begin{aligned}
\|z_n - u\|^2 &= \|P_C(x_n - t_nAx_n) - P_C(u - t_nAu)\|^2 \le \langle(x_n - t_nAx_n) - (u - t_nAu), z_n - u\rangle \\
&= \tfrac{1}{2}\bigl[\|(x_n - t_nAx_n) - (u - t_nAu)\|^2 + \|z_n - u\|^2 - \|(x_n - t_nAx_n) - (u - t_nAu) - (z_n - u)\|^2\bigr] \\
&\le \tfrac{1}{2}\bigl[\|x_n - u\|^2 + \|z_n - u\|^2 - \|(x_n - z_n) - t_n(Ax_n - Au)\|^2\bigr].
\end{aligned}
\]

It follows that

\[ \|z_n - u\|^2 \le \|x_n - u\|^2 - \|x_n - z_n\|^2 + 2t_n\langle x_n - z_n, Ax_n - Au\rangle - t_n^2\|Ax_n - Au\|^2. \]
(3.9)

From (3.7) and (3.9), we have

\[ \|x_{n+1} - u\|^2 \le \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\|^2 + \|x_n - u\|^2 - \|x_n - z_n\|^2 + 2t_n\langle x_n - z_n, Ax_n - Au\rangle - t_n^2\|Ax_n - Au\|^2 + \varepsilon_n, \]

which gives

\[
\begin{aligned}
\|x_n - z_n\|^2 &\le \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\|^2 + \bigl(\|x_n - u\|^2 - \|x_{n+1} - u\|^2\bigr) + 2t_n\langle x_n - z_n, Ax_n - Au\rangle - t_n^2\|Ax_n - Au\|^2 + \varepsilon_n \\
&\le \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fu\|^2 + \bigl(\|x_n - u\| + \|x_{n+1} - u\|\bigr)\|x_n - x_{n+1}\| + 2t_n\|x_n - z_n\|\,\|Ax_n - Au\| - t_n^2\|Ax_n - Au\|^2 + \varepsilon_n.
\end{aligned}
\]

We have $\|x_{n+1} - x_n\| \to 0$, $\lambda_n \to 0$, $\varepsilon_n \to 0$, and $\|Ax_n - Au\| \to 0$ as $n \to \infty$. Therefore, $\|x_n - z_n\| \to 0$ as $n \to \infty$.

Step 4. $\|x_n - Tx_n\| \to 0$ as $n \to \infty$.

Since $y_n = T_nz_n$, we get

\[
\begin{aligned}
\|x_{n+1} - T_nz_n\| &= \|P_C[\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n] - P_C(T_nz_n)\| \\
&\le \|\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n - T_nz_n\| \\
&= \|\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + y_n - \lambda_n\mu Fy_n - T_nz_n\| \\
&= \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fy_n\| \to 0 \quad \text{as } n \to \infty.
\end{aligned}
\]

It follows that

\[ \|x_n - y_n\| = \|x_n - T_nz_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - T_nz_n\| \to 0 \quad \text{as } n \to \infty. \]
(3.10)

Also, we get

\[ \|z_n - T_nz_n\| \le \|z_n - x_n\| + \|x_n - T_nz_n\| \to 0 \quad \text{as } n \to \infty. \]

Note that

\[ \|x_n - T_nx_n\| \le \|x_n - T_nz_n\| + \|T_nz_n - T_nx_n\| \le \|x_n - T_nz_n\| + \|z_n - x_n\| + a_n \to 0 \quad \text{as } n \to \infty. \]

Thus,

\[ \|Tx_n - x_n\| \le \|Tx_n - Tz_n\| + \|Tz_n - T_nz_n\| + \|T_nz_n - x_n\| \le \|x_n - z_n\| + D_B(T_n, T) + \|T_nz_n - x_n\| \to 0 \quad \text{as } n \to \infty. \]

Step 5. $\omega_w(\{x_n\}) \subset F(\mathcal{T}) \cap \Omega(C,A)$.

Note that $A$ is an $\alpha$-inverse strongly monotone mapping, so it is $\frac{1}{\alpha}$-Lipschitz continuous. Therefore, we have

\[ \lim_{n\to\infty}\|Az_n - Ax_n\| = 0. \]

Since $\{x_n\}$ is a bounded sequence in $C$, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ which converges weakly to some $\hat{x} \in C$. Since $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$, it follows from the demiclosedness principle for nonexpansive mappings (Lemma 2.3) that $\hat{x} \in F(T) = F(\mathcal{T})$. Now let us show that $\hat{x} \in \Omega(C,A)$. Let

\[ \mathcal{A}v = \begin{cases} Av + N_C(v), & \text{if } v \in C, \\ \emptyset, & \text{if } v \notin C. \end{cases} \]

Note that $\mathcal{A}$ is maximal monotone and $0 \in \mathcal{A}v$ if and only if $v \in \Omega(C,A)$. Let $(v,w) \in G(\mathcal{A})$, the graph of $\mathcal{A}$. Then we have $w \in \mathcal{A}v = Av + N_C(v)$ and hence $w - Av \in N_C(v)$. Thus, we have

\[ \langle w - Av, v - u\rangle \ge 0 \quad \text{for all } u \in C. \]

On the other hand, from $z_n = P_C(x_n - t_nAx_n)$ and $v \in C$, we have

\[ \langle x_n - t_nAx_n - z_n, z_n - v\rangle \ge 0, \]

and hence

\[ \Bigl\langle v - z_n, \frac{z_n - x_n}{t_n} + Ax_n\Bigr\rangle \ge 0. \]

Therefore, from $w - Av \in N_C(v)$ and $z_{n_k} \in C$, we have

\[
\begin{aligned}
\langle v - z_{n_k}, w\rangle &\ge \langle v - z_{n_k}, Av\rangle \\
&\ge \langle v - z_{n_k}, Av\rangle - \Bigl\langle v - z_{n_k}, \frac{z_{n_k} - x_{n_k}}{t_{n_k}} + Ax_{n_k}\Bigr\rangle \\
&= \langle v - z_{n_k}, Av - Az_{n_k}\rangle + \langle v - z_{n_k}, Az_{n_k} - Ax_{n_k}\rangle - \Bigl\langle v - z_{n_k}, \frac{z_{n_k} - x_{n_k}}{t_{n_k}}\Bigr\rangle \\
&\ge \langle v - z_{n_k}, Az_{n_k} - Ax_{n_k}\rangle - \Bigl\langle v - z_{n_k}, \frac{z_{n_k} - x_{n_k}}{t_{n_k}}\Bigr\rangle.
\end{aligned}
\]

Letting $k \to \infty$, we obtain $\langle v - \hat{x}, w\rangle \ge 0$. By the maximal monotonicity of $\mathcal{A}$, this implies $\hat{x} \in \mathcal{A}^{-1}0$, and hence $\hat{x} \in \Omega(C,A)$.

Step 6. $\limsup_{n\to\infty}\langle(\mu F - \gamma g)x^*, x_n - x^*\rangle \ge 0$.

From (3.2), we have

\[ x_{n+1} = P_C(u_n) - u_n + \lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n. \]

Therefore, we have

\[
\begin{aligned}
x_n - x_{n+1} ={}& u_n - P_C(u_n) + \alpha_n\lambda_n(\mu F - \gamma g)x_n + \lambda_n(1-\alpha_n)(\mu F - \gamma S)x_n \\
&+ (1 - \lambda_n)(x_n - y_n) + \lambda_n\bigl[(I - \mu F)x_n - (I - \mu F)y_n\bigr].
\end{aligned}
\]

Set $v_n := \frac{x_n - x_{n+1}}{\lambda_n(1-\alpha_n)}$, $n \in \mathbb{N}$. Note that $x_n = P_C(u_{n-1})$. Then we have

\[
\begin{aligned}
v_n ={}& \frac{u_n - P_C(u_n)}{\lambda_n(1-\alpha_n)} + \frac{\alpha_n}{1-\alpha_n}(\mu F - \gamma g)x_n + (\mu F - \gamma S)x_n \\
&+ \frac{1 - \lambda_n}{\lambda_n(1-\alpha_n)}(x_n - y_n) + \frac{1}{1-\alpha_n}\bigl[(I - \mu F)x_n - (I - \mu F)y_n\bigr].
\end{aligned}
\]

Let $w \in F(\mathcal{T}) \cap \Omega(C,A)$. Observe that

\[
\begin{aligned}
\langle v_n, x_n - w\rangle ={}& \frac{1}{\lambda_n(1-\alpha_n)}\langle u_n - P_C(u_n), P_C(u_{n-1}) - w\rangle + \frac{\alpha_n}{1-\alpha_n}\langle(\mu F - \gamma g)x_n, x_n - w\rangle + \langle(\mu F - \gamma S)x_n, x_n - w\rangle \\
&+ \frac{1 - \lambda_n}{\lambda_n(1-\alpha_n)}\langle x_n - y_n, x_n - w\rangle + \frac{1}{1-\alpha_n}\langle(I - \mu F)x_n - (I - \mu F)y_n, x_n - w\rangle \\
={}& \frac{1}{\lambda_n(1-\alpha_n)}\langle u_n - P_C(u_n), P_C(u_n) - w\rangle + \frac{1}{\lambda_n(1-\alpha_n)}\langle u_n - P_C(u_n), P_C(u_{n-1}) - P_C(u_n)\rangle \\
&+ \langle(\mu F - \gamma S)w, x_n - w\rangle + \langle(\mu F - \gamma S)x_n - (\mu F - \gamma S)w, x_n - w\rangle + \frac{1 - \lambda_n}{\lambda_n(1-\alpha_n)}\langle x_n - y_n, x_n - w\rangle \\
&+ \frac{\alpha_n}{1-\alpha_n}\langle(\mu F - \gamma g)x_n, x_n - w\rangle + \frac{1}{1-\alpha_n}\langle(I - \mu F)x_n - (I - \mu F)y_n, x_n - w\rangle.
\end{aligned}
\]
(3.11)

The first and fourth terms of the last expression in (3.11) are nonnegative owing to the property of the projection operator given in Lemma 2.1(ii) and the monotonicity of $\mu F - \gamma S$, respectively. Note that $x_{n+1} = P_C(u_n)$. Thus, from (3.11), we have

\[
\begin{aligned}
\langle v_n, x_n - w\rangle \ge{}& \frac{1}{\lambda_n(1-\alpha_n)}\langle u_n - P_C(u_n), P_C(u_{n-1}) - P_C(u_n)\rangle + \langle(\mu F - \gamma S)w, x_n - w\rangle \\
&+ \frac{\alpha_n}{1-\alpha_n}\langle(\mu F - \gamma g)x_n, x_n - w\rangle + \frac{1}{1-\alpha_n}\langle(I - \mu F)x_n - (I - \mu F)y_n, x_n - w\rangle + \frac{1 - \lambda_n}{\lambda_n(1-\alpha_n)}\langle x_n - y_n, x_n - w\rangle \\
={}& \langle u_n - P_C(u_n), v_n\rangle + \langle(\mu F - \gamma S)w, x_n - w\rangle + \frac{\alpha_n}{1-\alpha_n}\langle(\mu F - \gamma g)x_n, x_n - w\rangle \\
&+ \frac{1}{1-\alpha_n}\langle(I - \mu F)x_n - (I - \mu F)y_n, x_n - w\rangle + \frac{1 - \lambda_n}{\lambda_n(1-\alpha_n)}\langle x_n - y_n, x_n - w\rangle.
\end{aligned}
\]
(3.12)

Noticing from (3.10) that $\|x_n - y_n\| \to 0$, we have $\|(I - \mu F)x_n - (I - \mu F)y_n\| \to 0$. It is clear from (3.8) that $v_n \to 0$. Since $\alpha_n \to 0$ and the sequence $\{x_n\}$ is bounded, we see that $\{u_n\}$ is bounded. Thus, from (3.12), we have

\[ \limsup_{n\to\infty}\langle(\mu F - \gamma S)w, x_n - w\rangle \le 0, \quad \forall w \in F(\mathcal{T}) \cap \Omega(C,A). \]
(3.13)

This is sufficient to guarantee that $\omega_w(\{x_n\}) \subset \Omega$, i.e., every weak limit point of the sequence $\{x_n\}$ solves the hierarchical variational inequality (3.1). In fact, if $\{x_{n_k}\}$ is a subsequence of $\{x_n\}$ such that $x_{n_k} \rightharpoonup \tilde{x} \in \omega_w(\{x_n\})$, then, from (3.13), we have

\[ \langle(\mu F - \gamma S)w, \tilde{x} - w\rangle = \lim_{k\to\infty}\langle(\mu F - \gamma S)w, x_{n_k} - w\rangle \le \limsup_{n\to\infty}\langle(\mu F - \gamma S)w, x_n - w\rangle \le 0, \quad \forall w \in F(\mathcal{T}) \cap \Omega(C,A), \]

that is,

\[ \langle(\mu F - \gamma S)w, w - \tilde{x}\rangle \ge 0, \quad \forall w \in F(\mathcal{T}) \cap \Omega(C,A). \]
(3.14)

Note that $\omega_w(\{x_n\}) \subset F(\mathcal{T}) \cap \Omega(C,A)$. Moreover, $\mu F - \gamma S$ is monotone and Lipschitz continuous, and $F(\mathcal{T}) \cap \Omega(C,A)$ is closed and convex. Therefore, the inequality (3.14) is equivalent to the inequality (3.1) by the Minty lemma (see [22]). Thus we have $\tilde{x} \in \Omega$.

Now we choose a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ satisfying

\[ \limsup_{n\to\infty}\langle(\mu F - \gamma g)x^*, x_n - x^*\rangle = \lim_{k\to\infty}\langle(\mu F - \gamma g)x^*, x_{n_k} - x^*\rangle. \]

Without loss of generality, we may further assume that $x_{n_k} \rightharpoonup \tilde{x}$. Note that $\tilde{x} \in \Omega$. Since $x^*$ is a solution of the triple hierarchical variational inequality (3.3), we obtain

\[ \limsup_{n\to\infty}\langle(\mu F - \gamma g)x^*, x_n - x^*\rangle = \langle(\mu F - \gamma g)x^*, \tilde{x} - x^*\rangle \ge 0. \]

Step 7. $x_n \to x^*$ as $n \to \infty$.

Recall that $y_n = T_nz_n$, $\gamma_n = (1-\rho)\gamma\lambda_n\alpha_n$, and $x_{n+1} = P_C(u_n)$. Set $\chi_n = \alpha_n\lambda_n\chi_n' + \chi_n''$, where $\chi_n' := \langle(\gamma g - \mu F)x^*, x_{n+1} - x^*\rangle$ and $\chi_n'' := \lambda_n(1-\alpha_n)\langle(\gamma S - \mu F)x^*, x_{n+1} - x^*\rangle$. From (3.2), we have

\[
\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \langle u_n - x^*, x_{n+1} - x^*\rangle + \langle P_C(u_n) - u_n, P_C(u_n) - x^*\rangle \le \langle u_n - x^*, x_{n+1} - x^*\rangle \\
&= \langle\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n - x^*, x_{n+1} - x^*\rangle \\
&= \langle(I - \lambda_n\mu F)y_n - (I - \lambda_n\mu F)x^*, x_{n+1} - x^*\rangle + \alpha_n\lambda_n\gamma\langle g(x_n) - g(x^*), x_{n+1} - x^*\rangle + \lambda_n(1-\alpha_n)\gamma\langle Sx_n - Sx^*, x_{n+1} - x^*\rangle \\
&\quad + \alpha_n\lambda_n\langle(\gamma g - \mu F)x^*, x_{n+1} - x^*\rangle + \lambda_n(1-\alpha_n)\langle(\gamma S - \mu F)x^*, x_{n+1} - x^*\rangle \\
&\le (1 - \lambda_n\tau)\bigl(\|z_n - x^*\| + a_n\bigr)\|x_{n+1} - x^*\| + \alpha_n\lambda_n\gamma\rho\|x_n - x^*\|\,\|x_{n+1} - x^*\| + \lambda_n(1-\alpha_n)\gamma\|x_n - x^*\|\,\|x_{n+1} - x^*\| + \chi_n \\
&\le (1 - \lambda_n\tau)\|x_n - x^*\|\,\|x_{n+1} - x^*\| + \alpha_n\lambda_n\gamma\rho\|x_n - x^*\|\,\|x_{n+1} - x^*\| + \lambda_n(1-\alpha_n)\gamma\|x_n - x^*\|\,\|x_{n+1} - x^*\| \\
&\quad + (1 - \lambda_n\tau)a_n\|x_{n+1} - x^*\| + \chi_n \\
&\le \bigl[1 - \lambda_n\tau + \alpha_n\lambda_n\gamma\rho + \lambda_n(1-\alpha_n)\gamma\bigr]\|x_n - x^*\|\,\|x_{n+1} - x^*\| + a_n\|x_{n+1} - x^*\| + \chi_n \\
&\le \bigl[1 - \alpha_n\lambda_n\gamma(1-\rho)\bigr]\|x_n - x^*\|\,\|x_{n+1} - x^*\| + a_n\|x_{n+1} - x^*\| + \chi_n \\
&\le \bigl[1 - \alpha_n\lambda_n\gamma(1-\rho)\bigr]\tfrac{1}{2}\bigl(\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2\bigr) + a_n\|x_{n+1} - x^*\| + \chi_n.
\end{aligned}
\]

It follows that

\[ \|x_{n+1} - x^*\|^2 \le \frac{1 - \alpha_n\lambda_n\gamma(1-\rho)}{1 + \alpha_n\lambda_n\gamma(1-\rho)}\|x_n - x^*\|^2 + \frac{2}{1+\gamma_n}\chi_n + \frac{2a_n}{1+\gamma_n}R \le \bigl[1 - \alpha_n\lambda_n\gamma(1-\rho)\bigr]\|x_n - x^*\|^2 + \frac{2\chi_n}{1+\gamma_n} + \frac{2a_nR}{1+\gamma_n} \]
(3.15)

for some $R > 0$. Since $x^* \in \Omega$, by using condition (v) we have

\[
\begin{aligned}
\langle(\gamma S - \mu F)x^*, x_{n+1} - x^*\rangle &= \langle(\gamma S - \mu F)x^*, x_{n+1} - P_{F(\mathcal{T})\cap\Omega(C,A)}(x_{n+1})\rangle + \langle(\gamma S - \mu F)x^*, P_{F(\mathcal{T})\cap\Omega(C,A)}(x_{n+1}) - x^*\rangle \\
&\le \langle(\gamma S - \mu F)x^*, x_{n+1} - P_{F(\mathcal{T})\cap\Omega(C,A)}(x_{n+1})\rangle \\
&\le \|(\gamma S - \mu F)x^*\|\,d\bigl(x_{n+1}, F(\mathcal{T})\cap\Omega(C,A)\bigr) \\
&\le \|(\gamma S - \mu F)x^*\|\Bigl(\frac{1}{\bar{k}}\|x_{n+1} - T_nx_{n+1}\|\Bigr)^{1/\theta}.
\end{aligned}
\]
(3.16)

Note that

\[
\begin{aligned}
\|x_{n+1} - T_nx_n\| &= \|P_C(u_n) - P_C(T_nx_n)\| \le \|u_n - T_nx_n\| = \|\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n - T_nx_n\| \\
&\le \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fy_n\| + \|T_nz_n - T_nx_n\| \\
&\le \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fy_n\| + \|z_n - x_n\| + a_n.
\end{aligned}
\]

We observe that

\[
\begin{aligned}
\|x_{n+1} - T_nx_{n+1}\| &\le \|x_{n+1} - T_nx_n\| + \|T_nx_n - T_nx_{n+1}\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - T_nx_n\| + a_n \\
&\le \|x_n - x_{n+1}\| + \lambda_n\|\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) - \mu Fy_n\| + \|z_n - x_n\| + 2a_n \\
&\le \|x_n - x_{n+1}\| + \lambda_nM + \|z_n - x_n\| + 2a_n.
\end{aligned}
\]

Hence from (3.16), we get

\[ \langle(\gamma S - \mu F)x^*, x_{n+1} - x^*\rangle \le \Bigl(\frac{1}{\bar{k}}\Bigr)^{1/\theta}\|(\gamma S - \mu F)x^*\|\bigl(\|x_n - x_{n+1}\| + M\lambda_n + \|z_n - x_n\| + 2a_n\bigr)^{1/\theta} \le \lambda_n^{1/\theta}M'\Bigl(1 + \frac{\|x_n - x_{n+1}\|}{\lambda_n} + \frac{\|z_n - x_n\|}{\lambda_n} + \frac{a_n}{\lambda_n}\Bigr)^{1/\theta} \]
(3.17)

for some constant $M' > 0$. Therefore, from (3.15) and (3.17), we have

\[
\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le \bigl[1 - \gamma(1-\rho)\alpha_n\lambda_n\bigr]\|x_n - x^*\|^2 + \frac{2\alpha_n\lambda_n}{1+\gamma_n}\Bigl[\chi_n' + M'\frac{\lambda_n^{1/\theta}}{\alpha_n}\Bigl(1 + \frac{\|x_n - x_{n+1}\|}{\lambda_n} + \frac{\|z_n - x_n\|}{\lambda_n} + \frac{a_n}{\lambda_n}\Bigr)^{1/\theta}\Bigr] + \frac{2a_nR}{1+\gamma_n} \\
&= (1 - \gamma_n)\|x_n - x^*\|^2 + \sigma_n + \frac{2a_nR}{1+\gamma_n},
\end{aligned}
\]

where

\[ \sigma_n = \frac{2\alpha_n\lambda_n}{1+\gamma_n}\Bigl[\chi_n' + M'\frac{\lambda_n^{1/\theta}}{\alpha_n}\Bigl(1 + \frac{\|x_n - x_{n+1}\|}{\lambda_n} + \frac{\|z_n - x_n\|}{\lambda_n} + \frac{a_n}{\lambda_n}\Bigr)^{1/\theta}\Bigr]. \]

Note that $\lim_{n\to\infty}\frac{a_n}{\lambda_n} = 0$ and $\sum_{n=1}^{\infty}\alpha_n\lambda_n = \infty$. Using Lemma 2.4, we obtain $x_n \to x^*$. This completes the proof. □

If we put $g = 0$ in (3.3), then the triple hierarchical variational inequality (3.3) reduces to the variational inequality (3.18). Thus, the following result is a direct consequence of Theorem 3.1.

Theorem 3.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F:C\to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator and let $S:C\to C$ be a nonexpansive mapping. Let $A:C\to H$ be an $\alpha$-inverse strongly monotone mapping and let $\mathcal{T} = \{T_n\}$ be a sequence of nearly nonexpansive mappings from $C$ into itself with respect to a sequence $\{a_n\}$ such that $\sum_{n=1}^{\infty}D_B(T_n, T_{n+1}) < \infty$ for all $B \in \mathcal{B}(C)$ and $F(\mathcal{T}) \cap \Omega(C,A) \neq \emptyset$, and let $T$ be the mapping from $C$ into itself defined by $Tx = \lim_{n\to\infty}T_nx$ for all $x \in C$. Suppose that $F(T) = F(\mathcal{T})$, $0 < \mu < \frac{2\eta}{k^2}$, and $0 < \gamma \le \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$. Assume that $\Omega$, the set of solutions of the hierarchical variational inequality (3.1), is nonempty. Consider the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the iterative process

\[ \begin{cases} x_1 \in C, \\ y_n = T_nP_C[x_n - t_nAx_n], \\ x_{n+1} = P_C\bigl[\lambda_n(1-\alpha_n)\gamma Sx_n + (I - \lambda_n\mu F)y_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, and assume that $\{x_n\}$ is bounded and $\lim_{n\to\infty}\frac{\|x_n - P_C[x_n - t_nAx_n]\|}{\lambda_n} = 0$, where $\{\alpha_n\}$, $\{\lambda_n\}$, and $\{t_n\}$ are the sequences mentioned in Theorem 3.1 and satisfy all the conditions of Theorem 3.1. Then the sequence $\{x_n\}$ converges strongly to the unique solution $x^*$ of the variational inequality of finding $x^* \in \Omega$ such that

\[ \langle Fx^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]
(3.18)

Taking $T_n = T$ and $A = 0$ in Theorem 3.1, we have the following.

Corollary 3.1 (Ceng et al. [[11], Theorem 4.1])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F:C\to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator and let $g:C\to H$ be a $\rho$-contraction mapping. Let $S$ and $T$ be nonexpansive mappings from $C$ into itself such that $F(T) \neq \emptyset$. Suppose that $0 < \mu < \frac{2\eta}{k^2}$ and $0 < \gamma \le \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$. Assume that $\Omega$, the set of solutions of the hierarchical variational inequality of finding $z^* \in F(T)$ such that

\[ \langle(\mu F - \gamma S)z^*, z - z^*\rangle \ge 0, \quad \forall z \in F(T), \]

is nonempty. Consider the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the following iterative process:

\[ \begin{cases} x_1 \in C, \\ x_{n+1} = P_C\bigl[\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)Tx_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, where $\{\alpha_n\}$ and $\{\lambda_n\}$ are sequences in $(0,1)$ satisfying conditions (i)-(ii) of Theorem 3.1. Suppose that $\lim_{n\to\infty}\frac{\lambda_n^{1/\theta}}{\alpha_n} = 0$ and $\|x - Tx\| \ge \bar{k}\bigl[d(x, F(T))\bigr]^{\theta}$ for all $x \in C$, where $\bar{k} > 0$ and $\theta > 0$ are constants. Then the following hold:

(a) If the generated sequence $\{x_n\}$ is bounded, then it converges strongly to a point $x^* \in F(T)$, where $x^*$ is the unique solution of the triple hierarchical variational inequality of finding $x^* \in \Omega$ such that

\[ \langle(\mu F - \gamma g)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]

(b) If the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the iterative process

\[ \begin{cases} x_1 \in C, \\ x_{n+1} = P_C\bigl[\lambda_n(1-\alpha_n)\gamma Sx_n + (I - \lambda_n\mu F)Tx_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, is bounded, then it converges strongly to the unique solution $x^*$ of the variational inequality of finding $x^* \in \Omega$ such that

\[ \langle Fx^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]

We now derive the result of Yao et al. [[10], Theorem 4.1] as a corollary.

Corollary 3.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f:C\to H$ be a $\rho$-contraction mapping. Let $S$ and $T$ be nonexpansive mappings from $C$ into itself such that $F(T) \neq \emptyset$. Assume that $\Omega$, the set of solutions of the hierarchical variational inequality of finding $x^* \in F(T)$ such that

\[ \langle(I - S)x^*, x - x^*\rangle \ge 0, \quad \forall x \in F(T), \]
(3.19)

is nonempty. Consider the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the following iterative process:

\[ \begin{cases} x_1 \in C, \\ x_{n+1} = P_C\bigl[\lambda_n(\alpha_n f(x_n) + (1-\alpha_n)Sx_n) + (1 - \lambda_n)Tx_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, where $\{\alpha_n\}$ and $\{\lambda_n\}$ are sequences in $(0,1)$ satisfying conditions (i)-(ii) of Theorem 3.1. Suppose that $\lim_{n\to\infty}\frac{\lambda_n^{1/\theta}}{\alpha_n} = 0$ and $\|x - Tx\| \ge \bar{k}\bigl[d(x, F(T))\bigr]^{\theta}$ for all $x \in C$, where $\bar{k} > 0$ and $\theta > 0$ are constants. Then:

(a) If the generated sequence $\{x_n\}$ is bounded, then it converges strongly to a point $x^* \in F(T)$, where $x^*$ is the unique solution of the variational inequality of finding $x^* \in \Omega$ such that

\[ \langle(I - f)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]

(b) If the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the iterative process

\[ \begin{cases} x_1 \in C, \\ x_{n+1} = P_C\bigl[\lambda_n(1-\alpha_n)Sx_n + (1 - \lambda_n)Tx_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, is bounded, then it converges strongly to a minimum norm solution of the hierarchical variational inequality (3.19).

We also derive the following corollary for the case where $S$ and $T$ are nonexpansive mappings.

Corollary 3.3 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F:C\to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator and let $g:C\to H$ be a $\rho$-contraction mapping. Let $S$ and $T$ be nonexpansive mappings from $C$ into itself and let $A:C\to H$ be an $\alpha$-inverse strongly monotone mapping such that $F(T) \cap \Omega(C,A) \neq \emptyset$. Suppose that $0 < \mu < \frac{2\eta}{k^2}$ and $0 < \gamma \le \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$. Assume that $\Omega$, the set of solutions of the hierarchical variational inequality of finding $z^* \in F(T) \cap \Omega(C,A)$ such that

\[ \langle(\mu F - \gamma S)z^*, z - z^*\rangle \ge 0, \quad \forall z \in F(T) \cap \Omega(C,A), \]

is nonempty. Consider the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the following iterative process:

\[ \begin{cases} x_1 \in C, \\ y_n = TP_C[x_n - t_nAx_n], \\ x_{n+1} = P_C\bigl[\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, where $\{\alpha_n\}$, $\{\lambda_n\}$ are sequences in $(0,1)$ and $\{t_n\}$ is a sequence in $[a,b]$ (for some $a$, $b$ with $0 < a < b < 2\alpha$) satisfying conditions (i)-(ii) of Theorem 3.1. Suppose that $\lim_{n\to\infty}\frac{\lambda_n^{1/\theta}}{\alpha_n} = 0$, $\sum_{n=1}^{\infty}|t_{n+1} - t_n| < \infty$, $\lim_{n\to\infty}\frac{|t_n - t_{n-1}|}{\alpha_n\lambda_n^2} = 0$, and $\|x - Tx\| \ge \bar{k}\bigl[d(x, F(T) \cap \Omega(C,A))\bigr]^{\theta}$ for all $x \in C$, where $\bar{k} > 0$ and $\theta > 0$ are constants. Then the following hold:

(a) If the generated sequence $\{x_n\}$ is bounded and $\lim_{n\to\infty}\frac{\|x_n - P_C[x_n - t_nAx_n]\|}{\lambda_n} = 0$, then it converges strongly to a point $x^* \in F(T) \cap \Omega(C,A)$, where $x^*$ is the unique solution of the triple hierarchical variational inequality of finding $x^* \in \Omega$ such that

\[ \langle(\mu F - \gamma g)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]

(b) If the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the iterative process

\[ \begin{cases} x_1 \in C, \\ y_n = TP_C[x_n - t_nAx_n], \\ x_{n+1} = P_C\bigl[\lambda_n(1-\alpha_n)\gamma Sx_n + (I - \lambda_n\mu F)y_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, is bounded and $\lim_{n\to\infty}\frac{\|x_n - P_C[x_n - t_nAx_n]\|}{\lambda_n} = 0$, then it converges strongly to the unique solution $x^*$ of the variational inequality of finding $x^* \in \Omega$ such that

\[ \langle Fx^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]

4 Applications

In this section, we present two applications of Theorem 3.1. The first application is concerned with the image recovery problem, which is equivalent to finding a common fixed point of finitely many nonexpansive self-mappings; it improves a number of results in this context. The second application deals with a strictly pseudocontractive mapping.

Theorem 4.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F:C\to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator and let $g:C\to H$ be a $\rho$-contraction mapping. Let $S:C\to C$ be a nonexpansive mapping and $A:C\to H$ an $\alpha$-inverse strongly monotone mapping. Let $t_1, t_2, t_3, \ldots, t_N > 0$ be such that $\sum_{i=1}^{N}t_i = 1$. Let $T_1, T_2, T_3, \ldots, T_N:C\to C$ be nonexpansive mappings such that $\bigcap_{i=1}^{N}F(T_i) \cap \Omega(C,A) \neq \emptyset$ and let $T = \sum_{i=1}^{N}t_iT_i$. Suppose that $0 < \mu < \frac{2\eta}{k^2}$ and $0 < \gamma \le \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$. Assume that $\Omega$, the set of solutions of the hierarchical variational inequality of finding $z^* \in \bigcap_{i=1}^{N}F(T_i) \cap \Omega(C,A)$ such that

\[ \langle(\mu F - \gamma S)z^*, z - z^*\rangle \ge 0, \quad \forall z \in \bigcap_{i=1}^{N}F(T_i) \cap \Omega(C,A), \]

is nonempty. Consider the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the following iterative process:

\[ \begin{cases} x_1 \in C, \\ y_n = \sum_{i=1}^{N}t_iT_iP_C[x_n - t_nAx_n], \\ x_{n+1} = P_C\bigl[\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, where $\{\alpha_n\}$, $\{\lambda_n\}$ are sequences in $(0,1)$ and $\{t_n\}$ is a sequence in $[a,b]$ (for some $a$, $b$ with $0 < a < b < 2\alpha$) satisfying all the conditions of Corollary 3.3. Then the following hold:

(a) If the generated sequence $\{x_n\}$ is bounded and $\lim_{n\to\infty}\frac{\|x_n - P_C[x_n - t_nAx_n]\|}{\lambda_n} = 0$, then it converges strongly to a point $x^* \in \bigcap_{i=1}^{N}F(T_i) \cap \Omega(C,A)$, where $x^*$ is the unique solution of the triple hierarchical variational inequality of finding $x^* \in \Omega$ such that

\[ \langle(\mu F - \gamma g)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]

(b) If the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the iterative process

\[ \begin{cases} x_1 \in C, \\ y_n = \sum_{i=1}^{N}t_iT_iP_C[x_n - t_nAx_n], \\ x_{n+1} = P_C\bigl[\lambda_n(1-\alpha_n)\gamma Sx_n + (I - \lambda_n\mu F)y_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, is bounded and $\lim_{n\to\infty}\frac{\|x_n - P_C[x_n - t_nAx_n]\|}{\lambda_n} = 0$, then it converges strongly to the unique solution $x^*$ of the variational inequality of finding $x^* \in \Omega$ such that

\[ \langle Fx^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]

Proof Lemma 2.5 implies that $T$ is a nonexpansive mapping from $C$ into itself and $F(T) = \bigcap_{i=1}^{N}F(T_i)$. Hence the result follows from Corollary 3.3. □

Theorem 4.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F:C\to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator and let $g:C\to H$ be a $\rho$-contraction mapping. Let $S:C\to C$ be a nonexpansive mapping. Let $U:C\to C$ be a $\lambda$-strictly pseudocontractive mapping and let $\mathcal{T} = \{T_n\}$ be a sequence of nearly nonexpansive mappings from $C$ into itself with respect to a sequence $\{a_n\}$ such that $\sum_{n=1}^{\infty}D_B(T_n, T_{n+1}) < \infty$ for all $B \in \mathcal{B}(C)$ and $F(\mathcal{T}) \cap F(U) \neq \emptyset$, and let $T$ be the mapping from $C$ into itself defined by $Tx = \lim_{n\to\infty}T_nx$ for all $x \in C$. Suppose that $F(T) = F(\mathcal{T})$, $0 < \mu < \frac{2\eta}{k^2}$, and $0 < \gamma \le \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$. Assume that $\Omega$, the set of solutions of the hierarchical variational inequality of finding $z^* \in F(\mathcal{T}) \cap F(U)$ such that

\[ \langle(\mu F - \gamma S)z^*, z - z^*\rangle \ge 0, \quad \forall z \in F(\mathcal{T}) \cap F(U), \]

is nonempty. Consider the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the following iterative process:

\[ \begin{cases} x_1 \in C, \\ y_n = T_n\bigl[(1 - t_n)x_n + t_nUx_n\bigr], \\ x_{n+1} = P_C\bigl[\lambda_n\gamma(\alpha_n g(x_n) + (1-\alpha_n)Sx_n) + (I - \lambda_n\mu F)y_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, where $\{\alpha_n\}$, $\{\lambda_n\}$ are sequences in $(0,1)$ and $\{t_n\}$ is a sequence in $[a,b]$ (for some $a$, $b$ with $0 < a < b < 1 - \lambda$) satisfying conditions (i)-(iv) of Theorem 3.1. Suppose that $\|x - T_nx\| \ge \bar{k}\bigl[d(x, F(\mathcal{T}) \cap F(U))\bigr]^{\theta}$ for all $x \in C$ and $n \in \mathbb{N}$, where $\bar{k} > 0$ and $\theta > 0$ are constants. Then the following hold:

(a) If the generated sequence $\{x_n\}$ is bounded and $\lim_{n\to\infty}\frac{\|x_n - Ux_n\|}{\lambda_n} = 0$, then it converges strongly to a point $x^* \in F(\mathcal{T}) \cap F(U)$, where $x^*$ is the unique solution of the triple hierarchical variational inequality of finding $x^* \in \Omega$ such that

\[ \langle(\mu F - \gamma g)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]

(b) If the sequence $\{x_n\}$ in $C$, for arbitrary $x_1 \in C$, generated by the iterative process

\[ \begin{cases} x_1 \in C, \\ y_n = T_n\bigl[(1 - t_n)x_n + t_nUx_n\bigr], \\ x_{n+1} = P_C\bigl[\lambda_n(1-\alpha_n)\gamma Sx_n + (I - \lambda_n\mu F)y_n\bigr] \end{cases} \]

for all $n \in \mathbb{N}$, is bounded and $\lim_{n\to\infty}\frac{\|x_n - Ux_n\|}{\lambda_n} = 0$, then it converges strongly to the unique solution $x^*$ of the variational inequality of finding $x^* \in \Omega$ such that

\[ \langle Fx^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \]

Proof Put $A = I - U$ in Theorem 3.1; then $A$ is $\frac{1-\lambda}{2}$-inverse strongly monotone. We also have $F(U) = \Omega(C,A)$ and $P_C(x_n - t_nAx_n) = (1 - t_n)x_n + t_nUx_n$. Therefore, the conclusion follows from Theorems 3.1 and 3.2. □

5 Numerical example

In this section, we discuss an example which illustrates the effectiveness and convergence of the sequence $\{x_n\}$ generated by the scheme (3.2) of Theorem 3.1.

Example 5.1 Let $H = \mathbb{R}$ and $C = [0,1]$. Let $A$, $S$, and $T$ be mappings defined by $A(x) = 2x - 1$, $S(x) = x$, and $T(x) = 1 - x$ for all $x \in C$.

Let $F, g:C\to H$ be mappings defined by $F(x) = 2x$ and $g(x) = \frac{x}{2} + 1$ for all $x \in C$. Define $\{t_n\}$, $\{\alpha_n\}$, and $\{\lambda_n\}$ in $(0,1)$ by $t_n = \frac{1}{2}$, $\alpha_n = \frac{1}{(n+2)^p}$, and $\lambda_n = \frac{1}{(n+2)^q}$, where $0 < p + 2q < 1$. It is clear that $S$ and $T$ are nonexpansive self-mappings and that $A$ is $\frac{1}{2}$-inverse strongly monotone. Note that $F$ is $2$-Lipschitzian and $2$-strongly monotone, and $g$ is a $\frac{1}{2}$-contraction mapping. Here $k = 2$, $\eta = 2$, and $\rho = \frac{1}{2}$. We take $\mu = \frac{1}{4}$, $\gamma = \tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)} = \frac{1}{2}$, $p = \frac{1}{2}$, $q = \frac{1}{6}$, and $\theta = \frac{1}{4}$. Note that $0 < \mu < \frac{2\eta}{k^2}$. Observe that $T_n = T$ with $a_n = 0$ for all $n \in \mathbb{N}$ and $F(T) \cap \Omega(C,A) = \{\frac{1}{2}\}$. The iterative algorithm (3.2) can be written as

\[ x_{n+1} = P_C(u_n) \quad \text{for all } n \in \mathbb{N}, \]

where

\[ u_n = \lambda_n\gamma\bigl(\alpha_n g(x_n) + (1-\alpha_n)Sx_n\bigr) + (I - \lambda_n\mu F)y_n, \qquad y_n = T_nz_n, \qquad z_n = P_C(x_n - t_nAx_n). \]

We observe that

\[ z_n = P_C(x_n - t_nAx_n) = P_C\Bigl(x_n - \tfrac{1}{2}(2x_n - 1)\Bigr) = P_C\Bigl(\tfrac{1}{2}\Bigr) = \tfrac{1}{2}, \qquad y_n = T_nz_n = T\Bigl(\tfrac{1}{2}\Bigr) = \tfrac{1}{2}, \]

and

\[ u_n = \lambda_n\gamma\bigl(\alpha_n g(x_n) + (1-\alpha_n)Sx_n\bigr) + (I - \lambda_n\mu F)y_n = \frac{\lambda_n}{2}\Bigl(\alpha_n\Bigl(\frac{x_n}{2} + 1\Bigr) + (1-\alpha_n)x_n\Bigr) + \frac{1}{2} - \frac{\lambda_n}{4}F\Bigl(\frac{1}{2}\Bigr) = \frac{\lambda_n}{2}\Bigl(\alpha_n + \Bigl(1 - \frac{\alpha_n}{2}\Bigr)x_n\Bigr) + \frac{1}{2} - \frac{\lambda_n}{4}. \]

Let $x_1 \in C$. For $n = 1$, we have

\[ u_1 = \frac{\lambda_1}{2}\Bigl(\alpha_1 + \Bigl(1 - \frac{\alpha_1}{2}\Bigr)x_1\Bigr) + \frac{1}{2} - \frac{\lambda_1}{4} \le \frac{\lambda_1}{2}\Bigl(\alpha_1 + 1 - \frac{\alpha_1}{2}\Bigr) + \frac{1}{2} - \frac{\lambda_1}{4} = \frac{1}{2(3)^{1/6}}\Bigl(\frac{1}{2(3)^{1/2}} + 1\Bigr) + \frac{1}{2} - \frac{1}{4(3)^{1/6}} = \frac{1}{4(3)^{1/6}} + \frac{1}{4(3)^{2/3}} + \frac{1}{2} \approx 0.8284 < 1. \]

Since also $u_1 \ge \frac{1}{2} - \frac{\lambda_1}{4} > 0$, we have $u_1 \in C$.

Next we show that $x_n \in C$ for all $n \in \mathbb{N}$. Note that $u_1 \in C$. Suppose that $u_k \in C$ for some $k \in \mathbb{N}$. Then, for $n = k+1$, we have

\[
\begin{aligned}
u_{k+1} &= \frac{\lambda_{k+1}}{2}\Bigl(\alpha_{k+1} + \Bigl(1 - \frac{\alpha_{k+1}}{2}\Bigr)x_{k+1}\Bigr) + \frac{1}{2} - \frac{\lambda_{k+1}}{4} = \frac{\lambda_{k+1}}{2}\Bigl(\alpha_{k+1} + \Bigl(1 - \frac{\alpha_{k+1}}{2}\Bigr)P_C(u_k)\Bigr) + \frac{1}{2} - \frac{\lambda_{k+1}}{4} \\
&= \frac{\lambda_{k+1}}{2}\Bigl(\alpha_{k+1} + \Bigl(1 - \frac{\alpha_{k+1}}{2}\Bigr)u_k\Bigr) + \frac{1}{2} - \frac{\lambda_{k+1}}{4} \le \frac{\lambda_{k+1}}{2}\Bigl(\alpha_{k+1} + 1 - \frac{\alpha_{k+1}}{2}\Bigr) + \frac{1}{2} - \frac{\lambda_{k+1}}{4} \\
&= \frac{\lambda_{k+1}}{2}\Bigl(\frac{\alpha_{k+1}}{2} + 1\Bigr) + \frac{1}{2} - \frac{\lambda_{k+1}}{4} = \frac{\lambda_{k+1}}{4} + \frac{1}{2} + \frac{\lambda_{k+1}\alpha_{k+1}}{4} < \frac{1}{4} + \frac{1}{4} + \frac{1}{2} = 1.
\end{aligned}
\]

Since also $u_{k+1} \ge \frac{1}{2} - \frac{\lambda_{k+1}}{4} > 0$, mathematical induction gives $u_n \in C$ for all $n \in \mathbb{N}$, and therefore $x_n \in C$ for all $n \in \mathbb{N}$. It can be seen from Table 1 and Figure 1 that $\{\frac{|x_n - z_n|}{\lambda_n}\}$ converges to $0$ (see also (5.3)). Thus all the assumptions of Theorem 3.1 are satisfied. Therefore, the iteratively generated sequence $\{x_n\}$ defined by (3.2) converges strongly to $\frac{1}{2}$, which is the unique solution of the triple hierarchical variational inequality (3.3).

Figure 1. Convergence of the sequence $\{\frac{|x_n - z_n|}{\lambda_n}\}$.

Table 1. The numerical values of $\frac{|x_n - z_n|}{\lambda_n}$ up to $n = 10{,}000$.
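
For readers who wish to reproduce Table 1 and Figure 1, here is a small Python sketch (not part of the original paper) of the scalar recursion derived above; the starting point and the printing schedule are illustrative choices.

```python
import numpy as np

p, q = 0.5, 1.0 / 6.0              # parameters of Example 5.1
alpha = lambda n: 1.0 / (n + 2) ** p
lam   = lambda n: 1.0 / (n + 2) ** q

x = 1.0                             # arbitrary x_1 in C = [0, 1]
for n in range(1, 10001):
    z = 0.5                         # z_n = P_C(x_n - t_n*A(x_n)) = 1/2
    y = 0.5                         # y_n = T(z_n) = 1/2
    u = (lam(n) / 2) * (alpha(n) + (1 - alpha(n) / 2) * x) + 0.5 - lam(n) / 4
    x = min(max(u, 0.0), 1.0)       # x_{n+1} = P_C(u_n)
    if n % 2000 == 0:
        print(n, x, abs(x - z) / lam(n + 1))
# x_n -> 1/2 and |x_n - z_n| / lam_n -> 0, in agreement with (5.2) and (5.3).
```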

The numerical values of $x_n$ up to $n = 10{,}000$ are given in Table 2, and the convergence of the sequence $\{x_n\}$ is shown in Figure 2. Finally, we show analytically that $x_n \to \frac{1}{2}$ and $\frac{|x_n - z_n|}{\lambda_n} \to 0$ as $n \to \infty$. Note that

\[ x_{n+1} = P_C(u_n) = u_n = \frac{\lambda_n}{2}\Bigl(\alpha_n + \Bigl(1 - \frac{\alpha_n}{2}\Bigr)x_n\Bigr) + \frac{1}{2} - \frac{\lambda_n}{4} \le \frac{\lambda_n}{2}\Bigl(\alpha_n + 1 - \frac{\alpha_n}{2}\Bigr) + \frac{1}{2} - \frac{\lambda_n}{4} = \frac{\alpha_n\lambda_n}{4} + \frac{\lambda_n}{4} + \frac{1}{2}, \]
(5.1)

which implies that

\[ \Bigl|x_{n+1} - \frac{1}{2}\Bigr| \le \frac{\alpha_n\lambda_n}{4} + \frac{\lambda_n}{4} \to 0 \quad \text{as } n \to \infty. \]
(5.2)

Therefore, from (5.2) we get $x_n \to \frac{1}{2}$ as $n \to \infty$.

Figure 2. Convergence of the sequence $\{x_n\}$.

Table 2. The numerical values of $x_n$ up to $n = 10{,}000$.

From (5.1), we have

\[ x_{n+1} - \frac{1}{2} = \frac{\lambda_n}{2}\Bigl[\alpha_n + x_n - \frac{1}{2} - \frac{\alpha_nx_n}{2}\Bigr], \]

which implies that

\[ \frac{|x_{n+1} - \frac{1}{2}|}{\lambda_{n+1}} = \frac{\lambda_n}{2\lambda_{n+1}}\Bigl|\alpha_n + x_n - \frac{1}{2} - \frac{\alpha_nx_n}{2}\Bigr|. \]

We have $\alpha_n \to 0$, $x_n \to \frac{1}{2}$, and $\frac{\lambda_n}{\lambda_{n+1}} \to 1$ as $n \to \infty$. Thus, since $z_{n+1} = \frac{1}{2}$, we obtain

\[ \frac{|x_{n+1} - z_{n+1}|}{\lambda_{n+1}} = \frac{\lambda_n}{2\lambda_{n+1}}\Bigl|\alpha_n + x_n - \frac{1}{2} - \frac{\alpha_nx_n}{2}\Bigr| \to 0 \quad \text{as } n \to \infty. \]
(5.3)

References

  1. Stampacchia G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258: 4413–4416.

  2. Kazmi KR, Ahmad N, Rizvi SH: System of implicit nonconvex variational inequality problems: a projection method approach. J. Nonlinear Sci. Appl. 2013, 6: 170–180.

  3. Noor MA: Some developments in general variational inequalities. Appl. Math. Comput. 2004, 152: 199–277. 10.1016/S0096-3003(03)00558-7

  4. Cianciaruso F, Marino G, Muglia L, Yao Y: On a two-step algorithm for hierarchical fixed point problems and variational inequalities. J. Inequal. Appl. 2009, 2009: Article ID 208692. 10.1155/2009/208692

  5. Mainge PE, Moudafi A: Strong convergence of an iterative method for hierarchical fixed point problems. Pac. J. Optim. 2007, 3: 529–538.

  6. Moudafi A, Mainge PE: Towards viscosity approximations of hierarchical fixed points problems. Fixed Point Theory Appl. 2006, 2006: Article ID 95453. 10.1155/FPTA/2006/95453

  7. Sahu DR, Kang SM, Sagar V: Iterative methods for hierarchical common fixed point problems and variational inequalities. Fixed Point Theory Appl. 2013, 2013: Article ID 299. 10.1186/1687-1812-2013-299

  8. Iiduka H: Strong convergence for an iterative method for the triple hierarchical constrained optimization problem. Nonlinear Anal. 2009, 71: e1292–e1297. 10.1016/j.na.2009.01.133

  9. Iiduka H: Iterative algorithm for solving triple hierarchical constrained optimization problem. J. Optim. Theory Appl. 2011, 148: 580–592. 10.1007/s10957-010-9769-z

  10. Yao Y, Chen R, Xu HK: Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 2010, 72: 3447–3456. 10.1016/j.na.2009.12.029

  11. Ceng LC, Ansari QH, Yao JC: Iterative methods for triple hierarchical variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 151: 489–512. 10.1007/s10957-011-9882-7

  12. Ceng LC, Ansari QH, Yao JC: Relaxed hybrid steepest-descent methods with variable parameters for triple-hierarchical variational inequalities. Appl. Anal. 2012, 91: 1793–1810. 10.1080/00036811.2011.614602

  13. Ansari QH, Ceng LC, Gupta H: Triple hierarchical variational inequalities. In Nonlinear Analysis: Approximation Theory, Optimization and Applications. Edited by Ansari QH. Springer, New York; 2014: 231–280.

  14. Agarwal RP, O'Regan D, Sahu DR: Fixed Point Theory for Lipschitzian-Type Mappings with Applications. Topological Fixed Point Theory and Its Applications. Springer, New York; 2009.

  15. Sahu DR: Fixed points of demicontinuous nearly Lipschitzian mappings in Banach spaces. Comment. Math. Univ. Carol. 2005, 46: 653–666.

  16. Sahu DR, Wong NC, Yao JC: A generalized hybrid steepest-descent method for variational inequalities in Banach spaces. Fixed Point Theory Appl. 2011, 2011: Article ID 754702. 10.1155/2011/754702

  17. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.

  18. Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications (Haifa, 2000). Stud. Comput. Math. 8. Edited by Butnariu D, Censor Y, Reich S. North-Holland, Amsterdam; 2001: 473–504.

  19. Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119: 185–201. 10.1023/B:JOTA.0000005048.79379.b6

  20. Wong NC, Sahu DR, Yao JC: Solving variational inequalities involving nonexpansive type mappings. Nonlinear Anal. 2008, 69: 4732–4753. 10.1016/j.na.2007.11.025

  21. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560

  22. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1980.


Acknowledgements

The authors would like to thank the editor and referees for the useful comments and suggestions.

Author information

Correspondence to Shin Min Kang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.


Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Sahu, D., Kang, S.M., Sagar, V. et al. Iterative methods for triple hierarchical variational inequalities and common fixed point problems. Fixed Point Theory Appl 2014, 244 (2014). https://doi.org/10.1186/1687-1812-2014-244



  • DOI: https://doi.org/10.1186/1687-1812-2014-244
