
  • Research
  • Open Access

Convergence results for a common solution of a finite family of variational inequality problems for monotone mappings with Bregman distance function

Fixed Point Theory and Applications 2013, 2013:343

https://doi.org/10.1186/1687-1812-2013-343

  • Received: 27 August 2013
  • Accepted: 14 November 2013
  • Published:

Abstract

In this paper, we introduce an iterative process which converges strongly to a common solution of a finite family of variational inequality problems for monotone mappings with Bregman distance function. Our convergence theorem is applied to the convex minimization problem. Our theorems extend and unify most of the results that have been proved for the class of monotone mappings.

MSC: 47H05, 47J05, 47J25.

Keywords

  • Bregman projection
  • monotone mappings
  • strong convergence
  • variational inequality problems

1 Introduction

Throughout this paper, E is a real reflexive Banach space with E* as its dual and f : E → (−∞, +∞] is a proper, lower semicontinuous and convex function. We denote by dom f the domain of f, defined by dom f := {x ∈ E : f(x) < +∞}. For any x ∈ int(dom f) and y ∈ E, the right-hand derivative of f at x in the direction of y is defined by
f°(x, y) := lim_{t→0⁺} (f(x + ty) − f(x))/t.
(1.1)

The function f is said to be Gâteaux differentiable at x if lim_{t→0⁺} (f(x + ty) − f(x))/t exists for any y ∈ E. In this case, f°(x, y) coincides with ⟨∇f(x), y⟩, where ∇f(x) is the value of the gradient ∇f of f at x. The function f is said to be Gâteaux differentiable if it is Gâteaux differentiable at any x ∈ int(dom f). The function f is said to be Fréchet differentiable at x if this limit is attained uniformly in ‖y‖ = 1. We say that f is uniformly Fréchet differentiable on a subset C of E if the limit is attained uniformly for x ∈ C and ‖y‖ = 1.

Let f : E → (−∞, +∞] be a Gâteaux differentiable function. The function D_f : dom f × int(dom f) → [0, +∞) defined by
D_f(x, y) := f(x) − f(y) − ⟨∇f(y), x − y⟩

is called the Bregman distance with respect to f [1].
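To make the definition concrete, here is a small numerical illustration (a finite-dimensional sketch, not part of the paper's framework): for f(x) = ‖x‖² the Bregman distance reduces to the squared Euclidean distance, while for the negative entropy f(x) = ∑ x_i log x_i it gives the unnormalized Kullback-Leibler divergence.

```python
import math

# Hedged illustration in E = R^n: D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>
# for two classical choices of f.  Finite-dimensional sketch only.

def bregman(f, grad_f, x, y):
    inner = sum(g * (a - b) for g, a, b in zip(grad_f(y), x, y))
    return f(x) - f(y) - inner

# f(x) = ||x||^2: grad f(y) = 2y, so D_f(x, y) = ||x - y||^2.
sq = lambda v: sum(t * t for t in v)
grad_sq = lambda v: [2 * t for t in v]

# f(x) = sum x_i log x_i on the positive orthant: grad f(y)_i = 1 + log y_i,
# and D_f is the unnormalized Kullback-Leibler divergence.
negent = lambda v: sum(t * math.log(t) for t in v)
grad_negent = lambda v: [1 + math.log(t) for t in v]

x, y = [1.0, 2.0], [3.0, 1.0]
print(bregman(sq, grad_sq, x, y))   # equals ||x - y||^2 = 5.0
kl = sum(a * math.log(a / b) - a + b for a, b in zip(x, y))
print(abs(bregman(negent, grad_negent, x, y) - kl) < 1e-12)  # True
```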

The Bregman projection with respect to f [1] of x ∈ int(dom f) onto the nonempty, closed and convex set C ⊆ int(dom f) is the unique vector P_C^f(x) ∈ C satisfying
D_f(P_C^f(x), x) = inf{D_f(y, x) : y ∈ C}.
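One illustrative special case (an assumption for the example, not a claim from the paper): when f is the negative entropy on the positive orthant of ℝⁿ and C is the probability simplex, the Bregman projection P_C^f has the closed form y ↦ y/∑_i y_i.

```python
import math

# Bregman (KL) projection onto the probability simplex: a special case in
# which P_C^f is available in closed form (assumption: f(x) = sum x_i log x_i
# on the positive orthant of R^n).

def kl_div(z, y):
    # D_f(z, y) for the negative entropy: the unnormalized KL divergence.
    return sum(a * math.log(a / b) - a + b for a, b in zip(z, y))

def bregman_project_simplex(y):
    s = sum(y)
    return [t / s for t in y]      # argmin of D_f(z, y) over the simplex

y = [3.0, 1.0, 2.0]
p = bregman_project_simplex(y)     # [0.5, 1/6, 1/3]
q = [0.4, 0.3, 0.3]                # any other simplex point is farther
print(kl_div(p, y) < kl_div(q, y))  # True
```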
Remark 1.1 If E is a smooth and strictly convex Banach space and f(x) = ‖x‖² for all x ∈ E, then we have that ∇f(x) = 2Jx for all x ∈ E, where J is the normalized duality mapping from E into E*, and hence D_f(x, y) becomes D_f(x, y) = ‖x‖² − 2⟨x, Jy⟩ + ‖y‖² = φ(x, y) for all x, y ∈ E, which is the Lyapunov function introduced by Alber [2] and studied by many authors (see, e.g., [3–9] and the references therein). In addition, under the same condition, the Bregman projection P_C^f(x) reduces to the generalized projection Π_C(x) (see, e.g., [2]), which is defined by
φ(Π_C(x), x) = min_{y∈C} φ(y, x).
(1.2)

If E = H, a Hilbert space, then J is the identity mapping, and hence the Bregman projection P_C^f(x) reduces to the metric projection P_C(x) of H onto C.

A mapping A : D(A) ⊆ E → E* is said to be γ-inverse strongly monotone if there exists a positive real number γ such that
⟨Ax − Ay, x − y⟩ ≥ γ‖Ax − Ay‖² for all x, y ∈ D(A).
(1.3)
A is said to be monotone if, for each x, y ∈ D(A), the following inequality holds:
⟨Ax − Ay, x − y⟩ ≥ 0.
(1.4)

Clearly, the class of monotone mappings includes the class of γ-inverse strongly monotone mappings.

Let C be a nonempty, closed and convex subset of E and let A : C → E* be a monotone mapping. The problem of finding
a point u ∈ C such that ⟨Au, v − u⟩ ≥ 0 for all v ∈ C
(1.5)

is called the variational inequality problem. The set of solutions of the variational inequality is denoted by VI ( C , A ) .
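For orientation, in a Hilbert space problem (1.5) is equivalent to the fixed-point equation u = P_C(u − αAu) for any α > 0. A one-dimensional sanity check with illustrative choices of C and A (assumptions for the demo, not from the paper):

```python
# Illustrative check of the fixed-point characterization of (1.5) in R:
# with C = [0, 1] and A(x) = x - 2 (monotone), the solution is u = 1, since
# <Au, v - u> = (1 - 2)(v - 1) = 1 - v >= 0 for all v in [0, 1].

def P_C(x):                     # metric projection onto C = [0, 1]
    return min(1.0, max(0.0, x))

def A(x):
    return x - 2.0

u, alpha = 1.0, 0.5
print(P_C(u - alpha * A(u)) == u)                          # True: fixed point
print(all(A(u) * (v / 100 - u) >= 0 for v in range(101)))  # True: solves the VI
```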

Variational inequality problems are related to the convex minimization problem, the problem of finding zeros of monotone mappings and the complementarity problem. Consequently, many researchers (see, e.g., [3, 5, 10–15]) have made efforts to obtain iterative methods for approximating solutions of variational inequality problems.

If E = H, a Hilbert space, Iiduka et al. [16] introduced the following projection algorithm:
x₁ = x ∈ C, x_{n+1} = P_C(x_n − α_n A x_n) for any n ≥ 1,
(1.6)

where P_C is the metric projection from H onto C and {α_n} is a sequence of positive real numbers. They proved that the sequence {x_n} generated by (1.6) converges weakly to some element of VI(C, A) provided that A is a γ-inverse strongly monotone mapping.
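Scheme (1.6) can be sketched numerically; the specific C, A and step size below are illustrative assumptions, not taken from [16]:

```python
# Sketch of (1.6) with H = R, C = [0, 1] and A(x) = x - 2, which is
# 1-inverse strongly monotone (since <Ax - Ay, x - y> = |Ax - Ay|^2),
# so VI(C, A) = {1}.  Constant step alpha_n = 0.5 lies in (0, 2*gamma).

def P_C(x):                     # metric projection onto C = [0, 1]
    return min(1.0, max(0.0, x))

x = 0.0                         # x_1 = x in C
for n in range(50):
    x = P_C(x - 0.5 * (x - 2.0))
print(x)                        # converges to the solution 1.0
```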

If E is a 2-uniformly convex and uniformly smooth Banach space and A is γ-inverse strongly monotone, Iiduka and Takahashi [17] introduced the following iteration scheme for finding a solution of the variational inequality problem:
x_{n+1} = Π_C J⁻¹(Jx_n − α_n A x_n) for any n ≥ 1,
(1.7)

where Π_C is the generalized projection from E onto C, J is the normalized duality mapping from E into E* and {α_n} is a sequence of positive real numbers. They proved that the sequence {x_n} generated by (1.7) converges weakly to some element of VI(C, A).

It is worth mentioning that the convergence obtained above is weak convergence. Our concern now is to look for an iteration scheme which converges strongly to a solution of the variational inequality problem for a monotone mapping A.

In this regard, when E is a 2-uniformly convex and uniformly smooth Banach space and A is a γ-inverse strongly monotone mapping satisfying ‖Au‖ ≤ ‖Ay − Au‖ for all y ∈ C and u ∈ VI(C, A) ≠ ∅, Iiduka and Takahashi [10] studied the following iterative scheme for a solution of the variational inequality problem:
x₀ ∈ C chosen arbitrarily,
y_n = Π_C J⁻¹(Jx_n − α_n A x_n),
C_n = {z ∈ E : φ(z, y_n) ≤ φ(z, x_n)},
Q_n = {z ∈ E : ⟨x_n − z, Jx₀ − Jx_n⟩ ≥ 0},
x_{n+1} = Π_{C_n ∩ Q_n}(x₀), n ≥ 1,
(1.8)

where {α_n} is a positive real sequence satisfying certain mild conditions, Π_{C_n ∩ Q_n} is the generalized projection from E onto C_n ∩ Q_n, and J is the normalized duality mapping from E into E*. They proved that the sequence {x_n} converges strongly to an element of VI(C, A).

Recently, Zegeye and Shahzad [18] studied the following iterative scheme for a common solution of two variational inequality problems for continuous monotone mappings in a uniformly smooth and strictly convex real Banach space E which also enjoys the Kadec-Klee property:
x₀ ∈ C₀ = C chosen arbitrarily,
u_n = T_{1,γ_n} x_n; v_n = T_{2,γ_n} x_n,
w_n = J⁻¹(βJu_n + (1 − β)Jv_n),
C_{n+1} = {z ∈ C_n : φ(z, w_n) ≤ φ(z, x_n)},
x_{n+1} = Π_{C_{n+1}}(x₀), n ≥ 0,
(1.9)

where T_{i,γ} x := {z ∈ C : ⟨A_i z, y − z⟩ + (1/γ)⟨y − z, Jz − Jx⟩ ≥ 0, ∀y ∈ C} for all x ∈ E, i = 1, 2, and β, γ_n ∈ (0, 1) satisfy certain mild conditions. They proved that the sequence {x_n} converges strongly to Π_F(x₀), where Π_F is the generalized projection from E onto F := ∩_{i=1}^{2} VI(C, A_i).

In 1967, Bregman [1] discovered an elegant and effective technique based on the so-called Bregman distance function D_f(·,·) for designing and analyzing feasibility and optimization algorithms. Using Bregman's distance function and its properties, many authors have opened a growing area of research, not only in iterative algorithms for solving feasibility and optimization problems but also in algorithms for solving nonlinear problems such as equilibrium, variational inequality and fixed point problems (see, e.g., [19–25] and the references therein).

In 2010, Reich and Sabach [25] proposed an algorithm for finding a common zero of a finite family of maximal monotone mappings A_i : E → 2^{E*} (i = 1, 2, …, N) in a general reflexive Banach space E as follows:
x₀ ∈ E chosen arbitrarily,
y_n^i = Res^f_{λ_n^i A_i}(x_n + e_n^i),
C_n^i = {z ∈ E : D_f(z, y_n^i) ≤ D_f(z, x_n + e_n^i)},
C_n := ∩_{i=1}^{N} C_n^i,
Q_n = {z ∈ E : ⟨∇f(x₀) − ∇f(x_n), z − x_n⟩ ≤ 0},
x_{n+1} = P^f_{C_n ∩ Q_n}(x₀), n ≥ 0,
(1.10)

where {λ_n^i}_{i=1}^{N} ⊂ (0, ∞), the {e_n^i}_{i=1}^{N} are error sequences in E with e_n^i → 0, and P_C^f is the Bregman projection with respect to f from E onto a closed and convex subset C of E. The authors showed that the sequence {x_n} defined by (1.10) converges strongly to a common element of ∩_{i=1}^{N} A_i⁻¹(0*) = ∩_{i=1}^{N} VI(E, A_i) under some mild conditions. Similar results are also available in [26, 27].

Remark 1.2 It is worth mentioning, however, that the iteration processes (1.8)-(1.10) are computationally demanding: at each stage of the iteration the set(s) C_n and/or Q_n must be computed, and the next iterate is taken as the Bregman projection of x₀ onto the intersection of C_n and Q_n (or onto Q_n). This is difficult to carry out in applications.

It is our purpose in this paper to introduce an iterative scheme {x_n} which converges strongly to a common solution of a finite family of variational inequality problems for monotone mappings in real reflexive Banach spaces. Our scheme does not involve the computation of C_n or Q_n for each n ≥ 1. Furthermore, we apply our convergence theorem to a convex minimization problem. Our theorems extend and unify most of the results that have been proved for this important class of nonlinear operators.

2 Preliminaries

Let x ∈ int(dom f). The subdifferential of f at x is the convex set defined by ∂f(x) = {x* ∈ E* : f(x) + ⟨x*, y − x⟩ ≤ f(y), ∀y ∈ E}. The Fenchel conjugate of f is the function f* : E* → (−∞, +∞] defined by f*(x*) = sup{⟨x*, x⟩ − f(x) : x ∈ E}.
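As a quick one-dimensional sanity check of the conjugate (illustrative, not part of the paper's development): for f(x) = x²/2 one has f*(x*) = (x*)²/2, which a grid approximation of the supremum recovers.

```python
# Numerical illustration: f*(x*) = sup_x { x* x - f(x) }, approximated by a
# sup over a fine grid on [-10, 10].  For f(x) = x^2/2 the supremum is
# attained at x = x*, giving f*(x*) = (x*)^2 / 2.

def conjugate(f, x_star, lo=-10.0, hi=10.0, steps=20000):
    return max(x_star * (lo + k * (hi - lo) / steps)
               - f(lo + k * (hi - lo) / steps) for k in range(steps + 1))

f = lambda x: 0.5 * x * x
print(abs(conjugate(f, 3.0) - 4.5) < 1e-3)   # True: f*(3) = 9/2
```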

The function f is said to be:
  (i) essentially smooth if ∂f is both locally bounded and single-valued on its domain;
  (ii) essentially strictly convex if (∂f)⁻¹ is locally bounded on its domain and f is strictly convex on every convex subset of dom ∂f;
  (iii) Legendre if it is both essentially smooth and essentially strictly convex.
We remark that the following hold:
  (i) f is essentially smooth if and only if f* is essentially strictly convex (see [19], Theorem 5.4);
  (ii) (∂f)⁻¹ = ∂f* (see [28]);
  (iii) f is Legendre if and only if f* is Legendre (see [19], Corollary 5.5);
  (iv) if f is Legendre, then ∇f is a bijection satisfying ∇f = (∇f*)⁻¹, ran ∇f = dom ∇f* = int(dom f*) and ran ∇f* = dom ∇f = int(dom f) (see [19], Theorem 5.10).

When the subdifferential of f is single-valued, it coincides with the gradient: ∂f = ∇f (see [29]).

A function f on E is coercive [30] if its sublevel sets are bounded; equivalently, lim_{‖x‖→+∞} f(x) = +∞.

Let f : E → (−∞, +∞] be a convex and Gâteaux differentiable function. The modulus of total convexity of f at x ∈ dom f is the function ν_f(x, ·) : [0, +∞) → [0, +∞] defined by
ν_f(x, t) := inf{D_f(y, x) : y ∈ dom f, ‖y − x‖ = t}.
The function f is called totally convex at x if ν_f(x, t) > 0 whenever t > 0. The function f is called totally convex if it is totally convex at every point x ∈ int(dom f), and it is said to be totally convex on bounded sets if ν_f(B, t) > 0 for any nonempty bounded subset B of E and t > 0, where the modulus of total convexity of the function f on the set B is the function ν_f : int(dom f) × [0, +∞) → [0, +∞] defined by
ν_f(B, t) := inf{ν_f(x, t) : x ∈ B ∩ dom f}.

We know that f is totally convex on bounded sets if and only if f is uniformly convex on bounded sets (see [22], Theorem 2.10). The following lemmas will be useful in the proof of our main result.

Lemma 2.1 [31]

The function f : E → (−∞, +∞] is totally convex on bounded subsets of E if and only if, for any two sequences {x_n} and {y_n} in int(dom f) and dom f, respectively, such that the first one is bounded,
lim_{n→∞} D_f(y_n, x_n) = 0 implies lim_{n→∞} ‖y_n − x_n‖ = 0.

Lemma 2.2 [22]

Let C be a nonempty, closed and convex subset of E. Let f : E → (−∞, +∞] be a Gâteaux differentiable and totally convex function, and let x ∈ E. Then:
  (i) z = P_C^f(x) if and only if ⟨∇f(x) − ∇f(z), y − z⟩ ≤ 0 for all y ∈ C;
  (ii) D_f(y, P_C^f(x)) + D_f(P_C^f(x), x) ≤ D_f(y, x) for all y ∈ C.

Lemma 2.3 [29]

Let f : E → (−∞, +∞] be a proper, lower semicontinuous and convex function. Then f* : E* → (−∞, +∞] is a proper, weak* lower semicontinuous and convex function. Thus, for all z ∈ E and {t_i}_{i=1}^{N} ⊂ (0, 1) with ∑_{i=1}^{N} t_i = 1, we have
D_f(z, ∇f*(∑_{i=1}^{N} t_i ∇f(x_i))) ≤ ∑_{i=1}^{N} t_i D_f(z, x_i).
(2.1)

Lemma 2.4 [32]

Let f : E → ℝ be Gâteaux differentiable on int(dom f) such that ∇f* is bounded on bounded subsets of dom f*. Let x ∈ E and {x_n} ⊂ E. If {D_f(x, x_n)} is bounded, then so is the sequence {x_n}.

Let f : E → ℝ be a Legendre and Gâteaux differentiable function. Following [2] and [33], we make use of the function V_f : E × E* → [0, +∞) associated with f, which is defined by
V_f(x, x*) = f(x) − ⟨x, x*⟩ + f*(x*), ∀x ∈ E, x* ∈ E*.
(2.2)
Then V_f is nonnegative and
V_f(x, x*) = D_f(x, ∇f*(x*)) for all x ∈ E and x* ∈ E*.
(2.3)
Moreover, by the subdifferential inequality,
V_f(x, x*) + ⟨y*, ∇f*(x*) − x⟩ ≤ V_f(x, x* + y*),
(2.4)

∀x ∈ E and x*, y* ∈ E* (see [34]).

Lemma 2.5 [35]

Let {a_n} be a sequence of nonnegative real numbers satisfying the following relation:
a_{n+1} ≤ (1 − α_n) a_n + α_n δ_n, n ≥ n₀,

where {α_n} ⊂ (0, 1) and {δ_n} ⊂ ℝ satisfy the following conditions: lim_{n→∞} α_n = 0, ∑_{n=1}^{∞} α_n = ∞, and lim sup_{n→∞} δ_n ≤ 0. Then lim_{n→∞} a_n = 0.
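Lemma 2.5 is the tool that forces the Bregman distances to the solution to vanish in the main proof. A numerical illustration (the choices of α_n and δ_n are illustrative):

```python
# Illustration of Lemma 2.5: a_{n+1} = (1 - alpha_n) a_n + alpha_n delta_n
# with alpha_n = 1/(n+1) and delta_n = 1/(n+1).  These satisfy alpha_n -> 0,
# sum alpha_n = inf and limsup delta_n <= 0, so the lemma gives a_n -> 0.

a = 100.0
for n in range(1, 200000):
    alpha = 1.0 / (n + 1)
    delta = 1.0 / (n + 1)
    a = (1 - alpha) * a + alpha * delta
print(a < 1e-2)   # True: a_n has decayed toward 0
```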

Lemma 2.6 [4]

Let {a_n} be a sequence of real numbers such that there exists a subsequence {n_i} of {n} with a_{n_i} < a_{n_i + 1} for all i ∈ ℕ. Then there exists a nondecreasing sequence {m_k} ⊂ ℕ such that m_k → ∞ and the following properties are satisfied by all (sufficiently large) numbers k ∈ ℕ:
a_{m_k} ≤ a_{m_k + 1} and a_k ≤ a_{m_k + 1}.

In fact, m_k is the largest number n in the set {1, 2, …, k} such that the condition a_n < a_{n+1} holds.
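The index m_k of Lemma 2.6 can be computed directly; the sample sequence below is illustrative:

```python
# Construction of m_k in Lemma 2.6: m_k = max{ n in {1,...,k} : a_n < a_{n+1} },
# computed for an illustrative non-monotone sequence (1-indexed as a[1], ...).

a = [None, 5.0, 3.0, 4.0, 2.0, 6.0, 1.0, 1.5]   # a[1..7]

def m(k):
    return max(n for n in range(1, k + 1) if a[n] < a[n + 1])

for k in range(2, 7):
    mk = m(k)
    # the two properties guaranteed by the lemma:
    assert a[mk] <= a[mk + 1] and a[k] <= a[mk + 1]
print([m(k) for k in range(2, 7)])   # prints [2, 2, 4, 4, 6]
```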

Following the argument in [26], we have the following lemma.

Lemma 2.7 Let f : E → (−∞, +∞] be a coercive Legendre function and let C be a nonempty, closed and convex subset of E. Let A : C → E* be a continuous monotone mapping. For r > 0 and x ∈ E, define the mapping F_r : E → C by
F_r x := {z ∈ C : ⟨Az, y − z⟩ + (1/r)⟨∇f(z) − ∇f(x), y − z⟩ ≥ 0, ∀y ∈ C}.
Then the following hold:
  (1) F_r is single-valued;
  (2) F(F_r) = VI(C, A);
  (3) D_f(p, F_r x) + D_f(F_r x, x) ≤ D_f(p, x) for p ∈ F(F_r);
  (4) VI(C, A) is closed and convex.

3 Main result

Let C be a nonempty, closed and convex subset of E. Let A_i : C → E*, for i = 1, 2, …, N, be continuous monotone mappings. For r > 0, define T_{i,r} x := {z ∈ C : ⟨A_i z, y − z⟩ + (1/r)⟨∇f(z) − ∇f(x), y − z⟩ ≥ 0, ∀y ∈ C} for all x ∈ E and i ∈ {1, 2, …, N}, where f : E → ℝ is a Legendre and convex function. Then, in what follows, we shall study the following iteration process:
x₀ = u ∈ C chosen arbitrarily,
w_n = T_{N,r_n} T_{N−1,r_n} ⋯ T_{1,r_n} x_n,
x_{n+1} = P_C^f ∇f*(α_n ∇f(u) + (1 − α_n) ∇f(w_n)), n ≥ 0,
(3.1)

where {α_n} ⊂ (0, 1) satisfies lim_{n→∞} α_n = 0 and ∑_{n=1}^{∞} α_n = ∞, and {r_n} ⊂ [c₁, ∞) for some c₁ > 0.
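Before stating the theorem, here is a minimal numerical sketch of (3.1) under strong simplifying assumptions (not the paper's generality): E = ℝ, f(x) = x²/2 so that ∇f = ∇f* = id and the Bregman projection is the metric projection, C = E, and A_i(x) = m_i(x − x̄) with m_i > 0, which are continuous monotone mappings whose common solution set is F = {x̄}. In this setting T_{i,r} x is the unique z with A_i z + (z − x)/r = 0, a classical resolvent.

```python
# Sketch of scheme (3.1), 1-D illustration: with f(x) = x^2/2 and C = E = R,
# T_{i,r} x solves m_i*(z - xbar) + (z - x)/r = 0 in closed form.

xbar = 1.5                      # common solution of all the VIs (assumption)
ms = [2.0, 0.5]                 # slopes of the monotone mappings A_i

def T(m, r, x):
    return (x + r * m * xbar) / (1.0 + r * m)

u = 10.0                        # anchor x_0 = u
x, r = u, 1.0                   # r_n constant in [c_1, inf)
for n in range(1, 5000):
    alpha = 1.0 / (n + 1)       # alpha_n -> 0, sum alpha_n = inf
    w = x
    for m in ms:                # w_n = T_{N,r_n} ... T_{1,r_n} x_n
        w = T(m, r, w)
    x = alpha * u + (1 - alpha) * w   # grad f = id: the f-convex combination
print(round(x, 2))              # 1.5, i.e. x* = P_F^f(u) = xbar
```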

Theorem 3.1 Let f : E → ℝ be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of int(dom f) and let A_i : C → E*, for i = 1, 2, …, N, be a finite family of continuous monotone mappings with F := ∩_{i=1}^{N} VI(C, A_i) ≠ ∅. Let {x_n}_{n≥0} be a sequence defined by (3.1). Then {x_n} converges strongly to x* = P_F^f(u).

Proof By Lemma 2.7 we have that each VI(C, A_i), i ∈ {1, 2, …, N}, is closed and convex, and hence so is F. Thus, we can take x* := P_F^f u. Let u_{n,1} = T_{1,r_n} x_n, u_{n,2} = T_{2,r_n} u_{n,1}, …, u_{n,N−1} = T_{N−1,r_n} u_{n,N−2} and u_{n,N} = T_{N,r_n} u_{n,N−1} = w_n. Then, from (3.1), Lemmas 2.2, 2.3 and Lemma 2.7(3), we get that
D_f(x*, x_{n+1}) = D_f(x*, P_C^f ∇f*(α_n ∇f(u) + (1 − α_n) ∇f(w_n)))
≤ D_f(x*, ∇f*(α_n ∇f(u) + (1 − α_n) ∇f(w_n)))
≤ α_n D_f(x*, u) + (1 − α_n) D_f(x*, w_n)
= α_n D_f(x*, u) + (1 − α_n) D_f(x*, T_{N,r_n} T_{N−1,r_n} ⋯ T_{1,r_n} x_n)
≤ α_n D_f(x*, u) + (1 − α_n) D_f(x*, x_n).
(3.2)
Thus, by induction,
D_f(x*, x_{n+1}) ≤ max{D_f(x*, x₀), D_f(x*, u)}, ∀n ≥ 0,
which implies, by Lemma 2.4, that {x_n} and hence {w_n} are bounded. Now let z_n = ∇f*(α_n ∇f(u) + (1 − α_n) ∇f(w_n)). Then we have from (3.1) that x_{n+1} = P_C^f z_n. Using Lemmas 2.2, 2.3, 2.7(3), (2.3) and (2.4), we obtain that
D_f(x*, x_{n+1}) = D_f(x*, P_C^f z_n) ≤ D_f(x*, z_n) = V_f(x*, ∇f(z_n))
≤ V_f(x*, ∇f(z_n) − α_n(∇f(u) − ∇f(x*))) + α_n⟨∇f(u) − ∇f(x*), z_n − x*⟩
= D_f(x*, ∇f*(α_n ∇f(x*) + (1 − α_n) ∇f(w_n))) + α_n⟨∇f(u) − ∇f(x*), z_n − x*⟩
≤ α_n D_f(x*, x*) + (1 − α_n) D_f(x*, w_n) + α_n⟨∇f(u) − ∇f(x*), z_n − x*⟩
≤ (1 − α_n) D_f(x*, T_{N,r_n} u_{n,N−1}) + α_n⟨∇f(u) − ∇f(x*), z_n − x*⟩,
which implies that
D_f(x*, x_{n+1}) ≤ (1 − α_n)[D_f(x*, u_{n,N−1}) − D_f(w_n, u_{n,N−1})] + α_n⟨∇f(u) − ∇f(x*), z_n − x*⟩
≤ (1 − α_n)[D_f(x*, u_{n,N−2}) − D_f(u_{n,N−1}, u_{n,N−2})] − (1 − α_n) D_f(w_n, u_{n,N−1}) + α_n⟨∇f(u) − ∇f(x*), z_n − x*⟩
⋮
≤ (1 − α_n) D_f(x*, x_n) − (1 − α_n)[D_f(u_{n,1}, x_n) + D_f(u_{n,2}, u_{n,1}) + ⋯ + D_f(u_{n,N−1}, u_{n,N−2}) + D_f(w_n, u_{n,N−1})] + α_n⟨∇f(u) − ∇f(x*), z_n − x*⟩
≤ (1 − α_n) D_f(x*, x_n) + α_n⟨∇f(u) − ∇f(x*), z_n − x*⟩.
(3.3)

Now, we consider two possible cases.

Case 1. Suppose that there exists n₀ ∈ ℕ such that {D_f(x*, x_n)} is decreasing for all n ≥ n₀. Then {D_f(x*, x_n)} is convergent. Thus, from (3.3) we have that D_f(u_{n,1}, x_n), D_f(u_{n,2}, u_{n,1}), …, D_f(w_n, u_{n,N−1}) → 0 as n → ∞, and hence by Lemma 2.1 we get that
‖u_{n,1} − x_n‖ → 0, ‖u_{n,2} − u_{n,1}‖ → 0, …, ‖w_n − u_{n,N−1}‖ → 0 as n → ∞.
(3.4)
Furthermore, from the properties of D_f(·,·) and the fact that α_n → 0 as n → ∞, we have that
D_f(w_n, z_n) = D_f(w_n, ∇f*(α_n ∇f(u) + (1 − α_n) ∇f(w_n)))
≤ α_n D_f(w_n, u) + (1 − α_n) D_f(w_n, w_n) = α_n D_f(w_n, u) → 0 as n → ∞,
and hence from Lemma 2.1 we have that ‖w_n − z_n‖ → 0, and this together with (3.4) implies that
‖z_n − u_{n,N−1}‖ → 0, ‖z_n − u_{n,N−2}‖ → 0, …, ‖z_n − u_{n,1}‖ → 0 as n → ∞.
(3.5)

Since {z_n} is bounded and E is reflexive, we can choose a subsequence {z_{n_k}} of {z_n} such that z_{n_k} ⇀ z and lim sup_{n→∞}⟨∇f(u) − ∇f(x*), z_n − x*⟩ = lim_{k→∞}⟨∇f(u) − ∇f(x*), z_{n_k} − x*⟩. Then, from (3.5) and (3.4), we get that u_{n_k,i} ⇀ z for each i ∈ {1, 2, …, N}.

Now, we show that z ∈ VI(C, A_i) for each i ∈ {1, 2, …, N}. From the definition of u_{n,i}, we have that
⟨A_i u_{n,i}, y − u_{n,i}⟩ + ⟨(∇f(u_{n,i}) − ∇f(x_n))/r_n, y − u_{n,i}⟩ ≥ 0, ∀y ∈ C,
and hence
⟨A_i u_{n_k,i}, y − u_{n_k,i}⟩ + ⟨(∇f(u_{n_k,i}) − ∇f(x_{n_k}))/r_{n_k}, y − u_{n_k,i}⟩ ≥ 0, ∀y ∈ C
(3.6)
for each i ∈ {1, 2, …, N}. Set v_t = ty + (1 − t)z for all t ∈ (0, 1] and y ∈ C. Consequently, we get that v_t ∈ C. Now, from (3.6) it follows that
⟨A_i v_t, v_t − u_{n_k,i}⟩ ≥ ⟨A_i v_t, v_t − u_{n_k,i}⟩ − ⟨A_i u_{n_k,i}, v_t − u_{n_k,i}⟩ − ⟨(∇f(u_{n_k,i}) − ∇f(x_{n_k}))/r_{n_k}, v_t − u_{n_k,i}⟩
= ⟨A_i v_t − A_i u_{n_k,i}, v_t − u_{n_k,i}⟩ − ⟨(∇f(u_{n_k,i}) − ∇f(x_{n_k}))/r_{n_k}, v_t − u_{n_k,i}⟩.
In addition, since f is uniformly Fréchet differentiable and bounded on bounded subsets of E, ∇f is uniformly continuous on bounded subsets of E (see [36]). Thus, from (3.4) and the uniform continuity of ∇f, we obtain that
‖∇f(u_{n_k,i}) − ∇f(x_{n_k})‖/r_{n_k} → 0 as k → ∞,
and since A_i is monotone, we also have that ⟨A_i v_t − A_i u_{n_k,i}, v_t − u_{n_k,i}⟩ ≥ 0. Thus, it follows that
0 ≤ lim_{k→∞}⟨A_i v_t, v_t − u_{n_k,i}⟩ = ⟨A_i v_t, v_t − z⟩,
and hence
⟨A_i v_t, y − z⟩ ≥ 0, ∀y ∈ C, for all i ∈ {1, 2, …, N}.
Letting t → 0, the continuity of A_i implies that
⟨A_i z, y − z⟩ ≥ 0, ∀y ∈ C.

This implies that z ∈ VI(C, A_i) for all i ∈ {1, 2, …, N}.

Therefore, we obtain that z ∈ ∩_{i=1}^{N} VI(C, A_i) = F. Thus, by Lemma 2.2(i), we immediately obtain that lim sup_{n→∞}⟨∇f(u) − ∇f(x*), z_n − x*⟩ = ⟨∇f(u) − ∇f(x*), z − x*⟩ ≤ 0. It then follows from Lemma 2.5 and (3.3) that D_f(x*, x_n) → 0 as n → ∞. Consequently, by Lemma 2.1, x_n → x*.

Case 2. Suppose that there exists a subsequence {n_j} of {n} such that
D_f(x*, x_{n_j}) < D_f(x*, x_{n_j + 1})
for all j ∈ ℕ. Then, by Lemma 2.6, there exists a nondecreasing sequence {m_k} ⊂ ℕ such that m_k → ∞, D_f(x*, x_{m_k}) ≤ D_f(x*, x_{m_k + 1}) and D_f(x*, x_k) ≤ D_f(x*, x_{m_k + 1}) for all k ∈ ℕ. From (3.3) and the fact that α_n → 0, we have
(1 − α_{m_k})(D_f(u_{m_k,1}, x_{m_k}) + ⋯ + D_f(w_{m_k}, u_{m_k,N−1}))
≤ (D_f(x*, x_{m_k}) − D_f(x*, x_{m_k + 1})) + α_{m_k} D_f(x*, x_{m_k}) + α_{m_k}⟨∇f(u) − ∇f(x*), z_{m_k} − x*⟩ → 0 as k → ∞,
which implies that D_f(u_{m_k,1}, x_{m_k}), …, D_f(w_{m_k}, u_{m_k,N−1}) → 0, and hence ‖u_{m_k,1} − x_{m_k}‖ → 0, …, ‖w_{m_k} − u_{m_k,N−1}‖ → 0 as k → ∞. Thus, as in Case 1, we obtain that
lim sup_{k→∞}⟨∇f(u) − ∇f(x*), z_{m_k} − x*⟩ ≤ 0.
(3.7)
Furthermore, from (3.3) we have that
D_f(x*, x_{m_k + 1}) ≤ (1 − α_{m_k}) D_f(x*, x_{m_k}) + α_{m_k}⟨∇f(u) − ∇f(x*), z_{m_k} − x*⟩.
(3.8)
Thus, since D_f(x*, x_{m_k}) ≤ D_f(x*, x_{m_k + 1}), we get that
α_{m_k} D_f(x*, x_{m_k}) ≤ D_f(x*, x_{m_k}) − D_f(x*, x_{m_k + 1}) + α_{m_k}⟨∇f(u) − ∇f(x*), z_{m_k} − x*⟩
≤ α_{m_k}⟨∇f(u) − ∇f(x*), z_{m_k} − x*⟩.
Moreover, since α_{m_k} > 0, we obtain that
D_f(x*, x_{m_k}) ≤ ⟨∇f(u) − ∇f(x*), z_{m_k} − x*⟩.

It follows from (3.7) that D_f(x*, x_{m_k}) → 0 as k → ∞. This together with (3.8) implies that D_f(x*, x_{m_k + 1}) → 0. Therefore, since D_f(x*, x_k) ≤ D_f(x*, x_{m_k + 1}) for all k ∈ ℕ, we conclude that x_k → x* as k → ∞. Hence, both cases imply that {x_n} converges strongly to x* = P_F^f u, and the proof is complete. □

If in Theorem 3.1 N = 1 , then we get the following corollary.

Corollary 3.2 Let f : E → ℝ be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of int(dom f), and let A : C → E* be a continuous monotone mapping with VI(C, A) ≠ ∅. Let {x_n}_{n≥0} be a sequence defined by (3.9):
x₀ = u ∈ C chosen arbitrarily,
w_n = T_{r_n} x_n,
x_{n+1} = P_C^f ∇f*(α_n ∇f(u) + (1 − α_n) ∇f(w_n)), n ≥ 0,
(3.9)

where T_γ x := {z ∈ C : ⟨Az, y − z⟩ + (1/γ)⟨∇f(z) − ∇f(x), y − z⟩ ≥ 0, ∀y ∈ C} for all x ∈ E, {α_n} ⊂ (0, 1) satisfies lim_{n→∞} α_n = 0 and ∑_{n=1}^{∞} α_n = ∞, and {r_n} ⊂ [c₁, ∞) for some c₁ > 0. Then the sequence {x_n}_{n≥0} converges strongly to x* = P^f_{VI(C,A)}(u).

If C = E, then VI(E, A) = A⁻¹(0*), and hence the following corollary holds.

Corollary 3.3 Let f : E → ℝ be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let A_i : E → E*, for i = 1, 2, …, N, be a finite family of continuous monotone mappings with F := ∩_{i=1}^{N} VI(E, A_i) = ∩_{i=1}^{N} A_i⁻¹(0*) ≠ ∅. Let {x_n}_{n≥0} be a sequence defined by (3.1) with C = E. Then {x_n} converges strongly to x* = P_F^f(u).

If in Theorem 3.1 we take u = 0, then the scheme converges strongly to the common solution of the variational inequality problems nearest to 0 with respect to the Bregman distance. In fact, we have the following corollary.

Corollary 3.4 Let f : E → ℝ be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of int(dom f), and let A_i : C → E*, for i = 1, 2, …, N, be a finite family of continuous monotone mappings with F := ∩_{i=1}^{N} VI(C, A_i) ≠ ∅. Let {x_n}_{n≥0} be a sequence defined by (3.1) with u = 0. Then {x_n} converges strongly to x* = P_F^f(0), the common solution of the variational inequalities nearest to 0 with respect to the Bregman distance.

4 Application

In this section, we study the problem of finding a minimizer of a continuously Fréchet differentiable convex functional in Banach spaces.

Let g_i, for i = 1, 2, …, N, be continuously Fréchet differentiable convex functionals such that the gradients of g_i restricted to C, (∇g_i)|_C, are continuous and monotone. For r > 0, let K_{i,r} x := {z ∈ C : ⟨∇g_i(z), y − z⟩ + (1/r)⟨∇f(z) − ∇f(x), y − z⟩ ≥ 0, ∀y ∈ C} for all x ∈ E and for each i ∈ {1, 2, …, N}. Then the following theorem holds.

Theorem 4.1 Let f : E → ℝ be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let g_i, i = 1, 2, …, N, be continuously Fréchet differentiable convex functionals such that the gradients (∇g_i)|_C are continuous and monotone and F := ∩_{i=1}^{N} arg min_{y∈C} g_i(y) ≠ ∅, where arg min_{y∈C} g_i(y) := {z ∈ C : g_i(z) = min_{y∈C} g_i(y)}. Let {x_n}_{n≥0} be a sequence defined by
x₀ = u ∈ C chosen arbitrarily,
w_n = K_{N,r_n} K_{N−1,r_n} ⋯ K_{1,r_n} x_n,
x_{n+1} = P_C^f ∇f*(α_n ∇f(u) + (1 − α_n) ∇f(w_n)), n ≥ 0,
(4.1)

where {α_n} ⊂ (0, 1) satisfies lim_{n→∞} α_n = 0 and ∑_{n=1}^{∞} α_n = ∞, and {r_n} ⊂ [c₁, ∞) for some c₁ > 0. Then the sequence {x_n} converges strongly to p = P_F^f(u).

Proof We note that, by the convexity and Fréchet differentiability of each g_i, we have VI(C, (∇g_i)|_C) = arg min_{y∈C} g_i(y) for each i ∈ {1, 2, …, N}. Thus, by Theorem 3.1, {x_n} converges strongly to p = P_F^f(u). □
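A numerical sketch of Theorem 4.1 under illustrative assumptions (E = ℝ, f(x) = x²/2, C = E, and two convex functionals with common minimizer 1.5; the bisection bracket exploits knowledge of the minimizer purely for brevity):

```python
import math

# Sketch of (4.1), illustrative 1-D case: with f(x) = x^2/2 and C = E = R,
# K_{i,r} x is the unique z with g_i'(z) + (z - x)/r = 0, found by bisection
# since each g_i' is continuous and increasing.

grads = [lambda z: 2.0 * (z - 1.5),        # g_1(x) = (x - 1.5)^2
         lambda z: math.sinh(z - 1.5)]     # g_2(x) = cosh(x - 1.5) - 1

def K(grad, r, x, xbar=1.5):
    lo, hi = min(x, xbar) - 1.0, max(x, xbar) + 1.0
    for _ in range(80):                    # bisection on an increasing function
        mid = 0.5 * (lo + hi)
        if grad(mid) + (mid - x) / r < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u = 10.0
x, r = u, 1.0
for n in range(1, 3000):
    alpha = 1.0 / (n + 1)
    w = x
    for g in grads:                        # w_n = K_{N,r_n} ... K_{1,r_n} x_n
        w = K(g, r, w)
    x = alpha * u + (1 - alpha) * w
print(round(x, 2))                         # 1.5: the common minimizer
```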

Remark 4.2 Our results are new even if the convex function f is chosen to be f(x) = (1/p)‖x‖^p (1 < p < ∞) in uniformly smooth and uniformly convex spaces.

Remark 4.3 Our theorems extend and unify most of the results that have been proved for this important class of nonlinear operators. In particular, Theorem 3.1 extends Theorem 3.3 of [16], Theorem 3.1 of [17], Theorem 3.3 of [10] and Theorem 4.2 of [25], either to a more general class of continuous monotone operators or to a more general Banach space E. Moreover, in all our theorems and corollaries, the computation of C_n or Q_n for each n ≥ 1 is not required.

Declarations

Acknowledgements

This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The first and third authors acknowledge with thanks DSR for financial support. The second author undertook this work when he was visiting the Abdus Salam International Center for Theoretical Physics (ICTP), Trieste, Italy, as a regular associate.

Authors’ Affiliations

(1)
Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah, 21589, Saudi Arabia
(2)
Department of Mathematics, University of Botswana, Pvt. Bag, 00704 Gaborone, Botswana

References

  1. Bregman LM: The relaxation method for finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7: 200–217.
  2. Alber YI: Metric and generalized projection operators in Banach spaces: properties and applications. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lect. Notes Pure Appl. Math. 1996, 15–50.
  3. Kamimura S, Takahashi W: Strong convergence of proximal-type algorithm in a Banach space. SIAM J. Optim. 2002, 13: 938–945. 10.1137/S105262340139611X
  4. Maingé PE: Strong convergence of projected subgradient methods for nonsmooth and non-strictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. 10.1007/s11228-008-0102-z
  5. Zegeye H, Ofoedu EU, Shahzad N: Convergence theorems for equilibrium problem, variational inequality problem and countably infinite relatively quasi-nonexpansive mappings. Appl. Math. Comput. 2010, 216: 3439–3449. 10.1016/j.amc.2010.02.054
  6. Zegeye H, Shahzad N: Strong convergence theorems for a finite family of nonexpansive mappings and semi-groups via the hybrid method. Nonlinear Anal. 2010, 72: 325–329. 10.1016/j.na.2009.06.056
  7. Zegeye H, Shahzad N: A hybrid approximation method for equilibrium, variational inequality and fixed point problems. Nonlinear Anal. Hybrid Syst. 2010, 4: 619–630. 10.1016/j.nahs.2010.03.005
  8. Zegeye H, Shahzad N: A hybrid scheme for finite families of equilibrium, variational inequality and fixed point problems. Nonlinear Anal. 2011, 74: 263–272. 10.1016/j.na.2010.08.040
  9. Zegeye H, Shahzad N: Convergence theorems for a common point of solutions of equilibrium and fixed point of relatively nonexpansive multi-valued mapping problems. Abstr. Appl. Anal. 2012, 2012: Article ID 859598.
  10. Iiduka H, Takahashi W: Strong convergence studied by a hybrid type method for monotone operators in a Banach space. Nonlinear Anal. 2008, 68: 3679–3688. 10.1016/j.na.2007.04.010
  11. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1990.
  12. Lions JL, Stampacchia G: Variational inequalities. Commun. Pure Appl. Math. 1967, 20: 493–517. 10.1002/cpa.3160200302
  13. Zegeye H, Shahzad N: Strong convergence for monotone mappings and relatively weak nonexpansive mappings. Nonlinear Anal. 2009, 70: 2707–2716. 10.1016/j.na.2008.03.058
  14. Zegeye H, Shahzad N, Alghamdi MA: Strong convergence theorems for a common point of solution of variational inequality, solutions of equilibrium and fixed point problems. Fixed Point Theory Appl. 2012, 2012: Article ID 119.
  15. Zegeye H, Shahzad N: An iteration to a common point of solution of variational inequality and fixed point problems in Banach spaces. J. Appl. Math. 2012, 2012: Article ID 504503.
  16. Iiduka H, Takahashi W, Toyoda M: Approximation of solutions of variational inequalities for monotone mappings. Panam. Math. J. 2004, 14: 49–61.
  17. Iiduka H, Takahashi W: Weak convergence of projection algorithm for variational inequalities in Banach spaces. J. Math. Anal. Appl. 2008, 339: 668–679. 10.1016/j.jmaa.2007.07.019
  18. Zegeye H, Shahzad N: Approximating common solution of variational inequality problems for two monotone mappings in Banach spaces. Optim. Lett. 2011, 5: 691–704. 10.1007/s11590-010-0235-5
  19. Bauschke HH, Borwein JM, Combettes PL: Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun. Contemp. Math. 2001, 3: 615–647. 10.1142/S0219199701000524
  20. Bauschke HH, Borwein JM, Combettes PL: Bregman monotone optimization algorithms. SIAM J. Control Optim. 2003, 42: 596–636. 10.1137/S0363012902407120
  21. Butnariu D, Iusem AN, Zalinescu C: On uniform convexity, total convexity and convergence of the proximal point and outer Bregman projection algorithms in Banach spaces. J. Convex Anal. 2003, 10: 35–61.
  22. Butnariu D, Resmerita E: Bregman distances, totally convex functions and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006, 2006: Article ID 84919.
  23. Reich S: A weak convergence theorem for the alternating method with Bregman distances. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Dekker, New York; 1996:313–318.
  24. Reich S, Sabach S: A projection method for solving nonlinear problems in reflexive Banach spaces. J. Fixed Point Theory Appl. 2011, 9: 101–116. 10.1007/s11784-010-0037-5
  25. Reich S, Sabach S: Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 2010, 31: 22–44. 10.1080/01630560903499852
  26. Reich S, Sabach S: Two strong convergence theorems for Bregman strongly nonexpansive operators in reflexive Banach spaces. Nonlinear Anal. TMA 2010, 73: 122–135. 10.1016/j.na.2010.03.005
  27. Reich S, Sabach S: Existence and approximation of fixed points of Bregman firmly nonexpansive mappings in reflexive Banach spaces. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer, New York; 2011:301–316.
  28. Bonnans JF, Shapiro A: Perturbation Analysis of Optimization Problems. Springer, New York; 2000.
  29. Phelps RR: Convex Functions, Monotone Operators, and Differentiability. 2nd edition. Lecture Notes in Mathematics 1364. Springer, Berlin; 1993.
  30. Hiriart-Urruty J-B, Lemaréchal C: Convex Analysis and Minimization Algorithms II. Grundlehren der Mathematischen Wissenschaften 306. Springer, Berlin; 1993.
  31. Butnariu D, Iusem AN: Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization. Kluwer Academic, Dordrecht; 2000.
  32. Martin-Marquez V, Reich S, Sabach S: Bregman strongly nonexpansive operators in reflexive Banach spaces. J. Math. Anal. Appl. 2013, 400: 597–614. 10.1016/j.jmaa.2012.11.059
  33. Censor Y, Lent A: An iterative row-action method for interval convex programming. J. Optim. Theory Appl. 1981, 34: 321–353. 10.1007/BF00934676
  34. Kohsaka F, Takahashi W: Proximal point algorithms with Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 2005, 6: 505–523.
  35. Xu HK: Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 2002, 65: 109–113. 10.1017/S0004972700020116
  36. Reich S, Sabach S: A strong convergence theorem for a proximal-type algorithm in reflexive Banach spaces. J. Nonlinear Convex Anal. 2009, 10: 471–485.

Copyright

© Shahzad et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.