Convergence of an extragradient-like iterative algorithm for monotone mappings and nonexpansive mappings

Abstract

In this paper, we investigate the problem of finding a common element of the set of common fixed points of an infinite family of nonexpansive mappings and the set of solutions of variational inequalities by means of an extragradient-like iterative algorithm. Strong convergence of the proposed iterative algorithm is established.

MSC:47H05, 47H09, 47J25, 90C33.

1 Introduction

Iterative algorithms play an important role in the approximate solvability of nonlinear variational inequalities and nonlinear equations arising in fields such as mechanics, traffic, economics, information science, and medicine. The well-known convex feasibility problem, which captures applications in various disciplines such as image restoration and radiation therapy treatment planning, is to find a point in the intersection of the fixed point sets of a family of nonlinear mappings; see, for example, [1–11]. The Mann iterative algorithm is an efficient method for studying the class of nonexpansive mappings. Indeed, the Picard iteration may fail to converge even when the fixed point set of a nonexpansive mapping is nonempty.

It is known that the Mann iterative algorithm enjoys only weak convergence for nonexpansive mappings in infinite-dimensional Hilbert spaces; see [12] for more details and the references therein. In many disciplines, including economics [13], image recovery [14], quantum physics [15–20], and control theory [21], problems arise in infinite-dimensional spaces. To improve on the weak convergence of the Mann iterative algorithm, many authors have used contractions to approximate nonexpansive mappings; for more details, see [22] and [23] and the references therein.

In this paper, we focus on the problem of finding a common element of the set of common fixed points of an infinite family of nonexpansive mappings and the set of solutions of variational inequalities by means of an extragradient-like iterative algorithm. Several corollaries and applications are also obtained.

2 Preliminaries

Throughout this paper, we assume that $H$ is a real Hilbert space whose inner product and norm are denoted by $\langle \cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $K$ be a nonempty, closed, and convex subset of $H$, and let $P_K$ be the metric projection from $H$ onto $K$.

Recall that a mapping $B: K \to H$ is said to be inverse-strongly monotone iff there exists a positive real number $\mu$ such that

\[ \langle Bx - By, x - y\rangle \ge \mu \|Bx - By\|^2, \quad \forall x, y \in K. \]

For such a case, $B$ is also said to be $\mu$-inverse-strongly monotone.
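
To make the definition concrete, here is a small numerical illustration; the particular mapping below (a symmetric positive definite matrix acting on $\mathbb{R}^2$) is our own illustrative choice and is not taken from the paper. For such a matrix $M$, the linear mapping $B(x) = Mx$ is $\frac{1}{\|M\|}$-inverse-strongly monotone, and the sketch simply checks the defining inequality on random samples.

import numpy as np

# Hypothetical example (not from the paper): B(x) = M x with M symmetric
# positive definite is (1/||M||)-inverse-strongly monotone, because
# <Mz, z> >= ||Mz||^2 / lambda_max(M) for every z.

rng = np.random.default_rng(1)
M = np.array([[3.0, 1.0], [1.0, 2.0]])        # symmetric positive definite
mu = 1.0 / np.linalg.norm(M, 2)               # candidate modulus mu = 1/||M||
B = lambda x: M @ x

# check <Bx - By, x - y> - mu * ||Bx - By||^2 >= 0 on random pairs (x, y)
worst = min(
    float((B(x) - B(y)) @ (x - y) - mu * np.linalg.norm(B(x) - B(y)) ** 2)
    for x, y in (rng.normal(size=(2, 2)) for _ in range(10000))
)
print(worst >= -1e-10)                        # True: the inequality holds on all samples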

Recall that a mapping $T: K \to K$ is said to be nonexpansive iff

\[ \|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in K. \]

In this paper, we use F(T) to denote the fixed point set of the mapping T.

Recall that a mapping $f: K \to K$ is said to be a contraction iff there exists a coefficient $\alpha \in (0,1)$ such that

\[ \|f(x) - f(y)\| \le \alpha\|x - y\|, \quad \forall x, y \in K. \]

For such a case, f is also said to be an α-contraction.

Recall that a linear bounded operator $A: K \to K$ is strongly positive iff there exists a constant $\bar{\gamma} > 0$ such that

\[ \langle Ax, x\rangle \ge \bar{\gamma}\|x\|^2, \quad \forall x \in K. \]

Recall that a set-valued mapping $S: H \to 2^H$ is said to be monotone iff $f \in Sx$ and $g \in Sy$ imply

\[ \langle x - y, f - g\rangle \ge 0, \quad \forall x, y \in H. \]

A monotone mapping $S: H \to 2^H$ is maximal iff its graph $G(S)$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $S$ is maximal iff, for $(x,f) \in H \times H$, $\langle x - y, f - g\rangle \ge 0$ for every $(y,g) \in G(S)$ implies $f \in Sx$. Let $Q: K \to H$ be a monotone mapping and let $N_K v$ be the normal cone to $K$ at $v \in K$, i.e., $N_K v = \{w \in H : \langle v - u, w\rangle \ge 0, \forall u \in K\}$, and define

\[ Sv = \begin{cases} Qv + N_K v, & v \in K,\\ \emptyset, & v \notin K. \end{cases} \]

Then $S$ is maximal monotone and $0 \in Sv$ iff $v \in VI(K, Q)$; see [24] for more details.

Recall that the classical variational inequality is to find $u \in K$ such that

\[ \langle Bu, v - u\rangle \ge 0, \quad \forall v \in K, \]
(2.1)

where $B: K \to H$ is a monotone mapping. It is known that $u \in K$ is a solution to (2.1) iff $u$ is a fixed point of the mapping $P_K(I - \lambda B)$, where $\lambda > 0$ is a constant and $I$ stands for the identity mapping. In this paper, we use $VI(K, B)$ to denote the solution set of the variational inequality (2.1).
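
The fixed-point characterization above already suggests a computational scheme: iterate $u \mapsto P_K(u - \lambda Bu)$ and test the inequality (2.1) at the limit. The following sketch does this in $\mathbb{R}^2$; the set $K$ (the closed unit ball), the affine monotone mapping $B$, and the step size $\lambda$ are illustrative assumptions of ours, not data from the paper.

import numpy as np

# Sketch of the fixed-point characterization u = P_K(u - lam * B(u)) of (2.1).
# K = closed unit ball in R^2, B(x) = M x + q with M positive definite
# (so B is monotone); all concrete choices are illustrative.

def P_K(x, radius=1.0):                       # metric projection onto the ball
    nx = np.linalg.norm(x)
    return x if nx <= radius else radius * x / nx

M = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([1.0, -3.0])
B = lambda x: M @ x + q

lam = 0.3
u = np.zeros(2)
for _ in range(5000):                         # u <- P_K(u - lam * B(u))
    u = P_K(u - lam * B(u))

# sanity check of (2.1): <B(u), v - u> >= 0 for sampled v in K
rng = np.random.default_rng(0)
checks = [float(B(u) @ (P_K(rng.normal(size=2)) - u)) for _ in range(200)]
print(u, min(checks) >= -1e-8)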

Iterative algorithms for nonexpansive mappings have recently been applied to solve convex minimization problems. A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping $T$ on a real Hilbert space $H$:

\[ \min_{x \in F(T)} \frac{1}{2}\langle Ax, x\rangle - \langle x, u\rangle, \]
(2.2)

where $A$ is a linear bounded self-adjoint operator on $H$ and $u$ is a given point in $H$. In [25], it is proved that the sequence $\{x_n\}$ defined by the iterative algorithm

\[ x_0 \in H, \quad x_{n+1} = (I - \alpha_n A)Tx_n + \alpha_n u, \quad n \ge 0, \]

converges strongly to the unique solution of the minimization problem (2.2) provided that the sequence $\{\alpha_n\}$ satisfies certain restrictions.
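
A minimal numerical sketch of this iteration may be helpful. All concrete data below (the operator $A$, the point $u$, and the nonexpansive mapping $T$, taken to be the projection onto the closed unit ball so that $F(T)$ is that ball) are our own illustrative assumptions; $u$ is chosen so that the unconstrained minimizer $A^{-1}u$ already lies in $F(T)$ and can therefore serve as a reference solution of (2.2).

import numpy as np

# Sketch of x_{n+1} = (I - alpha_n * A) T x_n + alpha_n * u for problem (2.2).
# Illustrative data: A = diag(2, 1), T = projection onto the unit ball,
# u chosen so that A^{-1} u lies inside the ball and solves (2.2).

A = np.diag([2.0, 1.0])
u = np.array([1.0, 0.5])

def T(x):                                     # nonexpansive, F(T) = closed unit ball
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

x = np.array([5.0, -5.0])
for n in range(50000):
    alpha = 1.0 / (n + 2)                     # alpha_n -> 0, sum alpha_n = infinity
    x = (np.eye(2) - alpha * A) @ T(x) + alpha * u

print(x)                                      # approaches the reference solution
print(np.linalg.solve(A, u))                  # A^{-1} u = (0.5, 0.5)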

Recently, Marino and Xu [26] reconsidered the problem by the viscosity approximation method. They investigated the following iterative algorithm:

\[ x_0 \in H, \quad x_{n+1} = (I - \alpha_n A)Tx_n + \alpha_n \gamma f(x_n), \quad n \ge 0, \]

where $A$ is a linear bounded self-adjoint operator on $H$, $T: H \to H$ is a nonexpansive mapping, and $f: H \to H$ is a contraction. They proved that the sequence $\{x_n\}$ generated by the above iterative process converges strongly to the unique solution $x^*$ of the following variational inequality:

\[ \langle (A - \gamma f)x^*, x - x^*\rangle \ge 0, \quad \forall x \in F(T), \]

which is the optimality condition for the minimization problem

\[ \min_{x \in F(T)} \frac{1}{2}\langle Ax, x\rangle - h(x), \]

where $h$ is a potential function for $\gamma f$, that is, $h'(x) = \gamma f(x)$ for $x \in H$.

Recently, the problem of finding a common element of the fixed point set of a nonexpansive mapping and the solution set of a variational inequality has been considered by many authors; see, for example, [27–40] and the references therein. In 2003, Takahashi and Toyoda [35] considered the following iterative algorithm:

\[ x_1 \in K, \quad x_{n+1} = \alpha_n x_n + (1 - \alpha_n)T P_K(x_n - \lambda_n B x_n), \quad n \ge 1, \]
(2.3)

where $T: K \to K$ is a nonexpansive mapping, $B: K \to H$ is a $\mu$-inverse-strongly monotone mapping, $\{\alpha_n\}$ is a sequence in $(0,1)$, and $\{\lambda_n\}$ is a sequence in $(0, 2\mu)$. They showed that the sequence $\{x_n\}$ generated by (2.3) converges weakly to some point $z \in F(T) \cap VI(K, B)$.

Iiduka and Takahashi [36] reconsidered the common element problem via the following iterative algorithm:

\[ x_1 = x \in K, \quad x_{n+1} = \alpha_n x + (1 - \alpha_n)T P_K(x_n - \lambda_n B x_n), \quad n \ge 1, \]
(2.4)

where $T: K \to K$ is a nonexpansive mapping, $B: K \to H$ is a $\mu$-inverse-strongly monotone mapping, $\{\alpha_n\}$ is a sequence in $(0,1)$, and $\{\lambda_n\}$ is a sequence in $(0, 2\mu)$. They proved that the sequence $\{x_n\}$ converges strongly to some point $z \in F(T) \cap VI(K, B)$.
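
The following sketch runs iteration (2.4) on a toy problem in $\mathbb{R}^2$. The concrete ingredients, $K = [-2,2]^2$, $B(x) = x - c$ (which is $1$-inverse-strongly monotone, with $VI(K,B) = \{P_K(c)\}$), and $T$ the projection onto the half-space $\{x : x_1 \le x_2\}$, are illustrative assumptions of ours; with $c = (1,3)$ the common element is the single point $(1,2)$, which the iterates should approach.

import numpy as np

# Sketch of (2.4): x_{n+1} = alpha_n * x + (1 - alpha_n) * T(P_K(x_n - lam * B(x_n))).
# Illustrative data: K = [-2,2]^2, B(x) = x - c (1-inverse-strongly monotone),
# T = projection onto {x : x1 <= x2}; then F(T) ∩ VI(K, B) = {(1, 2)}.

P_K = lambda x: np.clip(x, -2.0, 2.0)

def T(x):                                     # projection onto the half-space x1 <= x2
    if x[0] <= x[1]:
        return x
    m = 0.5 * (x[0] + x[1])
    return np.array([m, m])

c = np.array([1.0, 3.0])
B = lambda x: x - c

anchor = np.array([-2.0, -2.0])               # the fixed vector x = x_1 in (2.4)
x = anchor.copy()
lam = 0.5                                     # lambda_n in (0, 2*mu) = (0, 2)
for n in range(1, 20000):
    alpha = 1.0 / (n + 1)
    x = alpha * anchor + (1 - alpha) * T(P_K(x - lam * B(x)))
print(x)                                      # should approach (1, 2)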

In this paper, we consider an infinite family of nonexpansive mappings. More precisely, we consider the mapping $W_n$ defined by

\[ \begin{aligned} U_{n,n+1} &= I,\\ U_{n,n} &= \gamma_n T_n U_{n,n+1} + (1 - \gamma_n)I,\\ U_{n,k} &= \gamma_k T_k U_{n,k+1} + (1 - \gamma_k)I, \quad 2 \le k \le n-1,\\ W_n &= U_{n,1} = \gamma_1 T_1 U_{n,2} + (1 - \gamma_1)I, \end{aligned} \]
(2.5)

where $\gamma_1, \gamma_2, \ldots$ are real numbers such that $0 \le \gamma_n \le 1$ and $T_1, T_2, \ldots$ is an infinite family of mappings of $K$ into itself. The nonexpansivity of each $T_i$ ensures the nonexpansivity of $W_n$.
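
For readers who prefer a computational view, the recursion (2.5) can be evaluated backwards, from $U_{n,n+1} = I$ down to $W_n = U_{n,1}$. The sketch below does exactly this; the specific mappings $T_i$ (projections onto balls of radius $i$) and the weights $\gamma_i = 1/2$ are illustrative assumptions, chosen so that $\bigcap_{i} F(T_i)$ is nonempty.

import numpy as np

# Evaluating W_n x via the backward recursion of (2.5):
# U_{n,n+1} x = x, U_{n,k} x = gamma_k * T_k(U_{n,k+1} x) + (1 - gamma_k) * x.
# Illustrative family: T_i = projection onto the ball of radius i, gamma_i = 1/2.

def proj_ball(x, r):
    nx = np.linalg.norm(x)
    return x if nx <= r else r * x / nx

def W(n, x, gammas, Ts):
    u = x                                     # U_{n,n+1} x
    for k in range(n, 0, -1):                 # k = n, n-1, ..., 1
        u = gammas[k - 1] * Ts[k - 1](u) + (1 - gammas[k - 1]) * x
    return u

Ts = [lambda x, r=i: proj_ball(x, r) for i in range(1, 21)]   # T_1, ..., T_20
gammas = [0.5] * 20
x = np.array([3.0, 4.0])
print(W(5, x, gammas, Ts))
print(W(20, x, gammas, Ts))                   # W_n x stabilizes as n grows (cf. Lemma 2.1 below)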

Regarding W n , we have the following lemmas which are important to prove our main results.

Lemma 2.1 [41]

Let $K$ be a nonempty, closed, and convex subset of a strictly convex Banach space $E$. Let $T_1, T_2, \ldots$ be nonexpansive mappings of $K$ into itself such that $\bigcap_{n=1}^{\infty} F(T_n)$ is nonempty, and let $\gamma_1, \gamma_2, \ldots$ be real numbers such that $0 < \gamma_n \le b < 1$ for any $n \ge 1$. Then, for every $x \in K$ and $k \in \mathbb{N}$, the limit $\lim_{n\to\infty} U_{n,k}x$ exists.

Using Lemma 2.1, one can define the mapping $W$ as follows:

\[ Wx = \lim_{n\to\infty} W_n x = \lim_{n\to\infty} U_{n,1}x, \quad x \in K. \]
(2.6)

Such a mapping $W$ is called the $W$-mapping generated by $T_1, T_2, \ldots$ and $\gamma_1, \gamma_2, \ldots$.

Throughout this paper, we will assume that $0 < \gamma_n \le b < 1$ for each $n \ge 1$.

Lemma 2.2 [41]

Let $K$ be a nonempty, closed, and convex subset of a strictly convex Banach space $E$. Let $T_1, T_2, \ldots$ be nonexpansive mappings of $K$ into itself such that $\bigcap_{n=1}^{\infty} F(T_n)$ is nonempty, and let $\gamma_1, \gamma_2, \ldots$ be real numbers such that $0 < \gamma_n \le b < 1$ for each $n \ge 1$. Then $F(W) = \bigcap_{n=1}^{\infty} F(T_n)$.

In this paper, motivated by the above results, we investigate the problem of approximating a common element in the solution set of variational inequalities and in the common fixed point set of a family of nonexpansive mappings based on an extragradient-like iterative algorithm. Strong convergence theorems of common elements are established in the framework of Hilbert spaces.

In order to prove our main results, we also need the following lemmas.

Lemma 2.3 In a real Hilbert space H, the following inequality holds:

\[ \|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle, \quad \forall x, y \in H. \]

Lemma 2.4 [26]

Assume that $A$ is a strongly positive linear bounded self-adjoint operator on a Hilbert space $H$ with coefficient $\bar{\gamma} > 0$ and $0 < \rho \le \|A\|^{-1}$. Then $\|I - \rho A\| \le 1 - \rho\bar{\gamma}$.

Lemma 2.5 [26]

Let $H$ be a Hilbert space. Let $A$ be a strongly positive linear bounded self-adjoint operator with coefficient $\bar{\gamma} > 0$, let $f$ be an $\alpha$-contraction, and assume that $0 < \gamma < \bar{\gamma}/\alpha$. Let $T$ be a nonexpansive mapping with $F(T) \ne \emptyset$, and let $x_t \in H$ denote the fixed point of the contraction $x \mapsto t\gamma f(x) + (I - tA)Tx$. Then $\{x_t\}$ converges strongly as $t \to 0$ to a fixed point $\bar{x}$ of $T$, which solves the variational inequality

\[ \langle (A - \gamma f)\bar{x}, z - \bar{x}\rangle \ge 0, \quad \forall z \in F(T). \]

Equivalently, we have $P_{F(T)}(I - A + \gamma f)\bar{x} = \bar{x}$.

Lemma 2.6 [42]

Assume that $\{\alpha_n\}$ is a sequence of nonnegative real numbers such that

\[ \alpha_{n+1} \le (1 - \gamma_n)\alpha_n + \delta_n, \]

where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

(a) $\sum_{n=1}^{\infty} \gamma_n = \infty$;

(b) $\limsup_{n\to\infty} \delta_n/\gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$.

Then $\lim_{n\to\infty} \alpha_n = 0$.
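
A quick numerical illustration of Lemma 2.6 (the sequences are made up only to satisfy conditions (a) and (b)): take $\gamma_n = 1/(n+1)$ and $\delta_n = \gamma_n/(n+1)$, so that $\sum_n \gamma_n = \infty$ and $\delta_n/\gamma_n \to 0$; the recursion then drives $\alpha_n$ to zero, albeit slowly.

# Illustration of Lemma 2.6 with gamma_n = 1/(n+1), delta_n = gamma_n/(n+1)
# (illustrative choices); the worst case a_{n+1} = (1 - gamma_n) a_n + delta_n
# is simulated directly and tends to 0 roughly like 1/n.

a = 10.0
for n in range(1, 200001):
    gamma = 1.0 / (n + 1)
    delta = gamma / (n + 1)
    a = (1 - gamma) * a + delta
print(a)   # small (of order 1e-4), and -> 0 as the number of steps grows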

Lemma 2.7 [39]

Let $K$ be a nonempty closed convex subset of a Hilbert space $H$, let $\{T_i : K \to K\}$ be a family of infinitely many nonexpansive mappings with $\bigcap_{i=1}^{\infty} F(T_i) \ne \emptyset$, and let $\{\gamma_n\}$ be a real sequence such that $0 < \gamma_n \le b < 1$ for each $n \ge 1$. If $C$ is any bounded subset of $K$, then $\lim_{n\to\infty} \sup_{x \in C}\|Wx - W_n x\| = 0$.

3 Main results

Theorem 3.1 Let $K$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $B_i: K \to H$ be a $\mu_i$-inverse-strongly monotone mapping for each $i = 1, 2$, and let $f: K \to K$ be an $\alpha$-contraction. Let $A: K \to K$ be a strongly positive linear bounded self-adjoint operator with coefficient $\bar{\gamma} > 0$. Let $\{x_n\}$ be a sequence generated by the following extragradient-like iterative algorithm:

\[ \begin{cases} x_1 \in K,\\ y_n = P_K(x_n - \eta_n B_2 x_n),\\ x_{n+1} = P_K\bigl(\alpha_n \gamma f(x_n) + (I - \alpha_n A)W_n P_K(I - \lambda_n B_1)y_n\bigr), \quad n \ge 1, \end{cases} \]
(3.1)

where $P_K$ is the metric projection from $H$ onto $K$, $W_n$ is the mapping defined by (2.5), $\{\alpha_n\}$ is a real number sequence in $(0,1)$, and $\{\lambda_n\}$, $\{\eta_n\}$ are two positive real number sequences. Assume that $F = \bigcap_{i=1}^{\infty} F(T_i) \cap VI(K, B_1) \cap VI(K, B_2) \ne \emptyset$, $0 < \gamma < \bar{\gamma}/\alpha$, and the following restrictions are satisfied:

(a) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$, and $\sum_{n=1}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$;

(b) $\sum_{n=1}^{\infty}|\eta_{n+1} - \eta_n| < \infty$, $\sum_{n=1}^{\infty}|\lambda_{n+1} - \lambda_n| < \infty$;

(c) $\{\eta_n\}, \{\lambda_n\} \subset [u, v]$, where $0 < u < v < 2\min\{\mu_1, \mu_2\}$.

Then the sequence $\{x_n\}$ converges strongly to $x^* \in F$, where $x^* = P_F(\gamma f + (I - A))x^*$.
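
Before turning to the proof, we record a small numerical sketch of algorithm (3.1) in $\mathbb{R}^2$. Every concrete ingredient below ($K$, $B_1$, $B_2$, $f$, $A$, the family $T_i$, and the parameter sequences) is an illustrative assumption of ours satisfying the hypotheses, chosen so that $F$ reduces to the single point $(1,1)$, to which the iterates should converge.

import numpy as np

# Sketch of algorithm (3.1) in R^2 with illustrative data (not from the paper):
# K = [-2,2]^2, B_1(x) = B_2(x) = x - c (1-inverse-strongly monotone, VI = {c}),
# T_i = projection onto the half-space D = {x : x1 <= x2}, f(x) = x/2, A = I,
# gamma = 1 < bar_gamma/alpha = 2, alpha_n = 1/(n+1), lambda_n = eta_n = 1/2.
# With these choices F = {(1, 1)}.

P_K = lambda x: np.clip(x, -2.0, 2.0)

def P_D(x):                                   # projection onto D = {x : x1 <= x2}
    if x[0] <= x[1]:
        return x
    m = 0.5 * (x[0] + x[1])
    return np.array([m, m])

c = np.array([1.0, 1.0])
B1 = B2 = lambda x: x - c
f = lambda x: 0.5 * x                         # 1/2-contraction
A = np.eye(2)
gamma = 1.0

def W(n, x, b=0.5):                           # W_n of (2.5) with T_i = P_D, gamma_i = b
    u = x
    for _ in range(n):
        u = b * P_D(u) + (1 - b) * x
    return u

x = np.array([-1.5, 2.0])
lam = eta = 0.5
for n in range(1, 3000):
    alpha = 1.0 / (n + 1)
    y = P_K(x - eta * B2(x))
    rho = P_K(y - lam * B1(y))
    x = P_K(alpha * gamma * f(x) + (np.eye(2) - alpha * A) @ W(n, rho))
print(x)                                      # should approach (1, 1)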

Proof First, we show that $I - \lambda_n B_1$ and $I - \eta_n B_2$ are nonexpansive. Indeed, we see from the restriction (c) that

\[ \begin{aligned} \|(I - \lambda_n B_1)x - (I - \lambda_n B_1)y\|^2 &= \|x - y - \lambda_n(B_1 x - B_1 y)\|^2\\ &= \|x - y\|^2 - 2\lambda_n\langle x - y, B_1 x - B_1 y\rangle + \lambda_n^2\|B_1 x - B_1 y\|^2\\ &\le \|x - y\|^2 + \lambda_n(\lambda_n - 2\mu_1)\|B_1 x - B_1 y\|^2\\ &\le \|x - y\|^2, \quad \forall x, y \in K. \end{aligned} \]

This shows that $I - \lambda_n B_1$ is nonexpansive, and so is $I - \eta_n B_2$. In view of the restriction (a), we may assume, without loss of generality, that $\alpha_n \le \|A\|^{-1}$ for each $n \ge 1$. It follows from Lemma 2.4 that $\|I - \alpha_n A\| \le 1 - \alpha_n\bar{\gamma}$.

Next, we show that the sequence $\{x_n\}$ is bounded. Letting $p \in F$, we see that

\[ \|y_n - p\| = \|P_K(I - \eta_n B_2)x_n - P_K(I - \eta_n B_2)p\| \le \|x_n - p\|. \]

It follows that

\[ \begin{aligned} \|x_{n+1} - p\| &\le \|\alpha_n(\gamma f(x_n) - Ap) + (I - \alpha_n A)(W_n P_K(I - \lambda_n B_1)y_n - p)\|\\ &\le \alpha_n\|\gamma f(x_n) - Ap\| + (1 - \alpha_n\bar{\gamma})\|W_n P_K(I - \lambda_n B_1)y_n - p\|\\ &\le \alpha_n\gamma\|f(x_n) - f(p)\| + \alpha_n\|\gamma f(p) - Ap\| + (1 - \alpha_n\bar{\gamma})\|y_n - p\|\\ &\le \bigl(1 - \alpha_n(\bar{\gamma} - \gamma\alpha)\bigr)\|x_n - p\| + \alpha_n\|\gamma f(p) - Ap\|. \end{aligned} \]

By simple induction, we have

\[ \|x_n - p\| \le \max\Bigl\{\|x_1 - p\|, \frac{\|Ap - \gamma f(p)\|}{\bar{\gamma} - \gamma\alpha}\Bigr\}, \]

which yields that the sequence $\{x_n\}$ is bounded, and so is $\{y_n\}$. Notice that

\[ \|y_{n+1} - y_n\| \le \|x_{n+1} - x_n\| + |\eta_{n+1} - \eta_n|\,\|B_2 x_n\|. \]
(3.2)

Putting $\rho_n = P_K(I - \lambda_n B_1)y_n$, we have

\[ \|\rho_{n+1} - \rho_n\| \le \|y_{n+1} - y_n\| + |\lambda_{n+1} - \lambda_n|\,\|B_1 y_n\|. \]
(3.3)

Substituting (3.2) into (3.3), we arrive at

\[ \|\rho_{n+1} - \rho_n\| \le \|x_{n+1} - x_n\| + M_1\bigl(|\eta_{n+1} - \eta_n| + |\lambda_{n+1} - \lambda_n|\bigr), \]
(3.4)

where $M_1$ is an appropriate constant such that

\[ M_1 \ge \max\Bigl\{\sup_{n \ge 1}\|B_1 y_n\|, \sup_{n \ge 1}\|B_2 x_n\|\Bigr\}. \]

Notice that

\[ \begin{aligned} \|x_{n+2} - x_{n+1}\| \le{}& \alpha_{n+1}\gamma\alpha\|x_{n+1} - x_n\| + (1 - \alpha_{n+1}\bar{\gamma})\bigl(\|\rho_{n+1} - \rho_n\| + \|W_{n+1}\rho_n - W_n\rho_n\|\bigr)\\ &+ |\alpha_{n+1} - \alpha_n|\bigl(\gamma\|f(x_n)\| + \|AW_n\rho_n\|\bigr). \end{aligned} \]
(3.5)

Since $T_i$ and $U_{n,i}$ are nonexpansive, we have from (2.5) that

\[ \begin{aligned} \|W_{n+1}\rho_n - W_n\rho_n\| &= \gamma_1\|T_1 U_{n+1,2}\rho_n - T_1 U_{n,2}\rho_n\|\\ &\le \gamma_1\|U_{n+1,2}\rho_n - U_{n,2}\rho_n\|\\ &= \gamma_1\gamma_2\|T_2 U_{n+1,3}\rho_n - T_2 U_{n,3}\rho_n\|\\ &\le \gamma_1\gamma_2\|U_{n+1,3}\rho_n - U_{n,3}\rho_n\|\\ &\le \cdots\\ &\le \gamma_1\gamma_2\cdots\gamma_n\|U_{n+1,n+1}\rho_n - U_{n,n+1}\rho_n\|\\ &\le M_2\prod_{i=1}^{n}\gamma_i, \end{aligned} \]
(3.6)

where $M_2 \ge 0$ is an appropriate constant such that $\|U_{n+1,n+1}\rho_n - U_{n,n+1}\rho_n\| \le M_2$ for each $n \ge 1$. Substituting (3.4) and (3.6) into (3.5), we arrive at

\[ \|x_{n+2} - x_{n+1}\| \le \bigl(1 - \alpha_{n+1}(\bar{\gamma} - \alpha\gamma)\bigr)\|x_{n+1} - x_n\| + M_3\Bigl(\prod_{i=1}^{n}\gamma_i + |\alpha_{n+1} - \alpha_n| + |\lambda_{n+1} - \lambda_n| + |\eta_{n+1} - \eta_n|\Bigr), \]
(3.7)

where $M_3$ is an appropriate constant such that

\[ M_3 = \max\Bigl\{M_1, M_2, \sup_{n \ge 1}\bigl\{\gamma\|f(x_n)\| + \|AW_n\rho_n\|\bigr\}\Bigr\}. \]

Since $0 < \gamma_i \le b < 1$, we have $\prod_{i=1}^{n}\gamma_i \le b^n$; hence, from the restrictions (a) and (b), we obtain from Lemma 2.6 that

\[ \lim_{n\to\infty}\|x_{n+1} - x_n\| = 0. \]
(3.8)

Notice that

\[ \begin{aligned} \|x_{n+1} - W_n\rho_n\| &= \|P_K\bigl(\alpha_n\gamma f(x_n) + (I - \alpha_n A)W_n P_K(I - \lambda_n B_1)y_n\bigr) - P_K(W_n\rho_n)\|\\ &\le \alpha_n\|\gamma f(x_n) - AW_n\rho_n\|. \end{aligned} \]

It follows from the restriction (a) that

\[ \lim_{n\to\infty}\|W_n\rho_n - x_{n+1}\| = 0. \]
(3.9)

Notice that

\[ \begin{aligned} \|y_n - p\|^2 &\le \|(x_n - p) - \eta_n(B_2 x_n - B_2 p)\|^2\\ &\le \|x_n - p\|^2 - 2\eta_n\mu_2\|B_2 x_n - B_2 p\|^2 + \eta_n^2\|B_2 x_n - B_2 p\|^2\\ &= \|x_n - p\|^2 + \eta_n(\eta_n - 2\mu_2)\|B_2 x_n - B_2 p\|^2. \end{aligned} \]
(3.10)

In a similar way, we find that

\[ \|\rho_n - p\|^2 \le \|x_n - p\|^2 + \lambda_n(\lambda_n - 2\mu_1)\|B_1 y_n - B_1 p\|^2. \]
(3.11)

On the other hand, we have

\[ \begin{aligned} \|x_{n+1} - p\|^2 &\le \|\alpha_n(\gamma f(x_n) - Ap) + (I - \alpha_n A)(W_n\rho_n - p)\|^2\\ &\le \bigl(\alpha_n\|\gamma f(x_n) - Ap\| + (1 - \alpha_n\bar{\gamma})\|\rho_n - p\|\bigr)^2\\ &\le \alpha_n\|\gamma f(x_n) - Ap\|^2 + \|\rho_n - p\|^2 + 2\alpha_n\|\gamma f(x_n) - Ap\|\,\|\rho_n - p\|. \end{aligned} \]
(3.12)

Substituting (3.11) into (3.12) gives

\[ \begin{aligned} \|x_{n+1} - p\|^2 \le{}& \alpha_n\|\gamma f(x_n) - Ap\|^2 + \|x_n - p\|^2 + \lambda_n(\lambda_n - 2\mu_1)\|B_1 y_n - B_1 p\|^2\\ &+ 2\alpha_n\|\gamma f(x_n) - Ap\|\,\|\rho_n - p\|. \end{aligned} \]

It follows from the restriction (c) that

\[ u(2\mu_1 - v)\|B_1 y_n - B_1 p\|^2 \le \alpha_n\|\gamma f(x_n) - Ap\|^2 + \bigl(\|x_n - p\| + \|x_{n+1} - p\|\bigr)\|x_{n+1} - x_n\| + 2\alpha_n\|\gamma f(x_n) - Ap\|\,\|\rho_n - p\|. \]

In view of the restriction (a), we obtain from (3.8) that

\[ \lim_{n\to\infty}\|B_1 y_n - B_1 p\| = 0. \]
(3.13)

From (3.12), we also have

\[ \|x_{n+1} - p\|^2 \le \alpha_n\|\gamma f(x_n) - Ap\|^2 + \|y_n - p\|^2 + 2\alpha_n\|\gamma f(x_n) - Ap\|\,\|\rho_n - p\|. \]
(3.14)

Combining (3.10) with (3.14), we arrive at

\[ \begin{aligned} \|x_{n+1} - p\|^2 \le{}& \alpha_n\|\gamma f(x_n) - Ap\|^2 + \|x_n - p\|^2 + \eta_n(\eta_n - 2\mu_2)\|B_2 x_n - B_2 p\|^2\\ &+ 2\alpha_n\|\gamma f(x_n) - Ap\|\,\|\rho_n - p\|, \end{aligned} \]

which implies from the restriction (c) that

\[ u(2\mu_2 - v)\|B_2 x_n - B_2 p\|^2 \le \alpha_n\|\gamma f(x_n) - Ap\|^2 + \bigl(\|x_n - p\| + \|x_{n+1} - p\|\bigr)\|x_{n+1} - x_n\| + 2\alpha_n\|\gamma f(x_n) - Ap\|\,\|\rho_n - p\|. \]

In view of the restriction (a), we obtain from (3.8) that

\[ \lim_{n\to\infty}\|B_2 x_n - B_2 p\| = 0. \]
(3.15)

On the other hand, we see from the firm nonexpansivity of $P_K$ that

\[ \begin{aligned} \|y_n - p\|^2 &= \|P_K(I - \eta_n B_2)x_n - P_K(I - \eta_n B_2)p\|^2\\ &\le \langle (I - \eta_n B_2)x_n - (I - \eta_n B_2)p, y_n - p\rangle\\ &= \tfrac{1}{2}\Bigl(\|(I - \eta_n B_2)x_n - (I - \eta_n B_2)p\|^2 + \|y_n - p\|^2 - \|(I - \eta_n B_2)x_n - (I - \eta_n B_2)p - (y_n - p)\|^2\Bigr)\\ &\le \tfrac{1}{2}\Bigl(\|x_n - p\|^2 + \|y_n - p\|^2 - \|(x_n - y_n) - \eta_n(B_2 x_n - B_2 p)\|^2\Bigr)\\ &= \tfrac{1}{2}\Bigl(\|x_n - p\|^2 + \|y_n - p\|^2 - \|x_n - y_n\|^2 - \eta_n^2\|B_2 x_n - B_2 p\|^2 + 2\eta_n\langle x_n - y_n, B_2 x_n - B_2 p\rangle\Bigr), \end{aligned} \]

which yields

\[ \|y_n - p\|^2 \le \|x_n - p\|^2 - \|x_n - y_n\|^2 + 2\eta_n\|x_n - y_n\|\,\|B_2 x_n - B_2 p\|. \]
(3.16)

In the same way, we can obtain that

\[ \|\rho_n - p\|^2 \le \|x_n - p\|^2 - \|\rho_n - y_n\|^2 + 2\lambda_n\|\rho_n - y_n\|\,\|B_1 y_n - B_1 p\|. \]
(3.17)

Substituting (3.16) into (3.14) yields

\[ \begin{aligned} \|x_{n+1} - p\|^2 \le{}& \alpha_n\|\gamma f(x_n) - Ap\|^2 + \|x_n - p\|^2 - \|x_n - y_n\|^2 + 2\eta_n\|x_n - y_n\|\,\|B_2 x_n - B_2 p\|\\ &+ 2\alpha_n\|\gamma f(x_n) - Ap\|\,\|\rho_n - p\|. \end{aligned} \]

It follows that

\[ \begin{aligned} \|x_n - y_n\|^2 \le{}& \alpha_n\|\gamma f(x_n) - Ap\|^2 + \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + 2\eta_n\|x_n - y_n\|\,\|B_2 x_n - B_2 p\|\\ &+ 2\alpha_n\|\gamma f(x_n) - Ap\|\,\|\rho_n - p\|\\ \le{}& \alpha_n\|\gamma f(x_n) - Ap\|^2 + \bigl(\|x_n - p\| + \|x_{n+1} - p\|\bigr)\|x_{n+1} - x_n\| + 2\eta_n\|x_n - y_n\|\,\|B_2 x_n - B_2 p\|\\ &+ 2\alpha_n\|\gamma f(x_n) - Ap\|\,\|\rho_n - p\|. \end{aligned} \]

In view of (3.8) and (3.15), we see from the restriction (a) that

\[ \lim_{n\to\infty}\|x_n - y_n\| = 0. \]
(3.18)

Similarly, we can obtain that

\[ \lim_{n\to\infty}\|\rho_n - y_n\| = 0. \]
(3.19)

Observe that

\[ \|\rho_n - W_n\rho_n\| \le \|y_n - \rho_n\| + \|x_n - y_n\| + \|x_n - x_{n+1}\| + \|x_{n+1} - W_n\rho_n\|. \]

It follows from (3.8), (3.9), (3.18), and (3.19) that

\[ \lim_{n\to\infty}\|W_n\rho_n - \rho_n\| = 0. \]
(3.20)

From Lemma 2.7, we have $\|W\rho_n - W_n\rho_n\| \to 0$ as $n \to \infty$. This in turn implies that

\[ \lim_{n\to\infty}\|W\rho_n - \rho_n\| = 0. \]
(3.21)

Next, we show that

\[ \limsup_{n\to\infty}\langle \gamma f(x^*) - Ax^*, x_n - x^*\rangle \le 0. \]
(3.22)

To show it, we choose a subsequence { x n i } of { x n } such that

\[ \limsup_{n\to\infty}\langle \gamma f(x^*) - Ax^*, x_n - x^*\rangle = \lim_{i\to\infty}\langle \gamma f(x^*) - Ax^*, x_{n_i} - x^*\rangle. \]

Since $\{x_{n_i}\}$ is bounded, there exists a subsequence $\{x_{n_{i_j}}\}$ of $\{x_{n_i}\}$ which converges weakly to some point $p$. We may assume, without loss of generality, that $x_{n_i} \rightharpoonup p$. From (3.18) and (3.19), we also have $y_{n_i} \rightharpoonup p$ and $\rho_{n_i} \rightharpoonup p$, respectively. We claim that $p \in F$. Indeed, let us first show that $p \in VI(K, B_1)$. Put

\[ Sv = \begin{cases} B_1 v + N_K v, & v \in K,\\ \emptyset, & v \notin K. \end{cases} \]

Then $S$ is maximal monotone. Let $(v, w) \in G(S)$. Since $w - B_1 v \in N_K v$ and $\rho_n \in K$, we have

\[ \langle v - \rho_n, w - B_1 v\rangle \ge 0. \]

On the other hand, we have from $\rho_n = P_K(I - \lambda_n B_1)y_n$ that

\[ \langle v - \rho_n, \rho_n - (I - \lambda_n B_1)y_n\rangle \ge 0 \]

and hence

\[ \Bigl\langle v - \rho_n, \frac{\rho_n - y_n}{\lambda_n} + B_1 y_n\Bigr\rangle \ge 0. \]

It follows that

\[ \begin{aligned} \langle v - \rho_{n_i}, w\rangle &\ge \langle v - \rho_{n_i}, B_1 v\rangle\\ &\ge \langle v - \rho_{n_i}, B_1 v\rangle - \Bigl\langle v - \rho_{n_i}, \frac{\rho_{n_i} - y_{n_i}}{\lambda_{n_i}} + B_1 y_{n_i}\Bigr\rangle\\ &= \langle v - \rho_{n_i}, B_1 v - B_1\rho_{n_i}\rangle + \langle v - \rho_{n_i}, B_1\rho_{n_i} - B_1 y_{n_i}\rangle - \Bigl\langle v - \rho_{n_i}, \frac{\rho_{n_i} - y_{n_i}}{\lambda_{n_i}}\Bigr\rangle\\ &\ge \langle v - \rho_{n_i}, B_1\rho_{n_i} - B_1 y_{n_i}\rangle - \Bigl\langle v - \rho_{n_i}, \frac{\rho_{n_i} - y_{n_i}}{\lambda_{n_i}}\Bigr\rangle, \end{aligned} \]

which implies, letting $i \to \infty$, that $\langle v - p, w\rangle \ge 0$. Since $S$ is maximal monotone, we have $0 \in Sp$ and hence $p \in VI(K, B_1)$. In a similar way, we can show that $p \in VI(K, B_2)$. Next, let us show that $p \in \bigcap_{i=1}^{\infty} F(T_i)$. Suppose that $p \ne Wp$. Since Hilbert spaces satisfy Opial's condition, we see from (3.21) that

\[ \begin{aligned} \liminf_{i\to\infty}\|\rho_{n_i} - p\| &< \liminf_{i\to\infty}\|\rho_{n_i} - Wp\|\\ &\le \liminf_{i\to\infty}\bigl(\|\rho_{n_i} - W\rho_{n_i}\| + \|W\rho_{n_i} - Wp\|\bigr)\\ &\le \liminf_{i\to\infty}\|W\rho_{n_i} - Wp\|\\ &\le \liminf_{i\to\infty}\|\rho_{n_i} - p\|, \end{aligned} \]

which is a contradiction. Thus, we have $p \in F(W) = \bigcap_{i=1}^{\infty} F(T_i)$, and hence $p \in F$. From Lemma 2.5, we see that there exists a unique $x^*$ such that $x^* = P_F(\gamma f + (I - A))x^*$. It follows that

\[ \limsup_{n\to\infty}\langle \gamma f(x^*) - Ax^*, x_n - x^*\rangle = \langle \gamma f(x^*) - Ax^*, p - x^*\rangle \le 0. \]

That is, (3.22) holds. It follows from Lemma 2.3 that

\[ \begin{aligned} \|x_{n+1} - x^*\|^2 &\le \|\alpha_n(\gamma f(x_n) - Ax^*) + (I - \alpha_n A)(W_n\rho_n - x^*)\|^2\\ &\le (1 - \alpha_n\bar{\gamma})^2\|W_n\rho_n - x^*\|^2 + 2\alpha_n\langle \gamma f(x_n) - Ax^*, x_{n+1} - x^*\rangle\\ &\le (1 - \alpha_n\bar{\gamma})^2\|x_n - x^*\|^2 + \alpha\gamma\alpha_n\bigl(\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2\bigr) + 2\alpha_n\langle \gamma f(x^*) - Ax^*, x_{n+1} - x^*\rangle. \end{aligned} \]

It follows that

\[ \begin{aligned} \|x_{n+1} - x^*\|^2 &\le \frac{(1 - \alpha_n\bar{\gamma})^2 + \alpha_n\gamma\alpha}{1 - \alpha_n\gamma\alpha}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 - \alpha_n\gamma\alpha}\langle \gamma f(x^*) - Ax^*, x_{n+1} - x^*\rangle\\ &= \frac{1 - 2\alpha_n\bar{\gamma} + \alpha_n\alpha\gamma}{1 - \alpha_n\gamma\alpha}\|x_n - x^*\|^2 + \frac{\alpha_n^2\bar{\gamma}^2}{1 - \alpha_n\gamma\alpha}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 - \alpha_n\gamma\alpha}\langle \gamma f(x^*) - Ax^*, x_{n+1} - x^*\rangle\\ &\le \Bigl[1 - \frac{2\alpha_n(\bar{\gamma} - \alpha\gamma)}{1 - \alpha_n\gamma\alpha}\Bigr]\|x_n - x^*\|^2 + \frac{2\alpha_n(\bar{\gamma} - \alpha\gamma)}{1 - \alpha_n\gamma\alpha}\Bigl(\frac{1}{\bar{\gamma} - \alpha\gamma}\langle \gamma f(x^*) - Ax^*, x_{n+1} - x^*\rangle + \frac{\alpha_n\bar{\gamma}^2}{2(\bar{\gamma} - \alpha\gamma)}M_4\Bigr), \end{aligned} \]

where $M_4$ is an appropriate constant such that $M_4 \ge \sup_{n\ge 1}\|x_n - x^*\|^2$. Put $b_n = \frac{2\alpha_n(\bar{\gamma} - \alpha\gamma)}{1 - \alpha_n\alpha\gamma}$ and $c_n = \frac{1}{\bar{\gamma} - \alpha\gamma}\langle \gamma f(x^*) - Ax^*, x_{n+1} - x^*\rangle + \frac{\alpha_n\bar{\gamma}^2}{2(\bar{\gamma} - \alpha\gamma)}M_4$. That is,

\[ \|x_{n+1} - x^*\|^2 \le (1 - b_n)\|x_n - x^*\|^2 + b_n c_n. \]
(3.23)

In view of the restrictions (a) and (b), we see from (3.22) that

\[ \lim_{n\to\infty} b_n = 0, \qquad \sum_{n=1}^{\infty} b_n = \infty, \qquad \limsup_{n\to\infty} c_n \le 0. \]

Applying Lemma 2.6 to (3.23), we conclude that $x_n \to x^*$ as $n \to \infty$. This completes the proof. □

If B 2 =0, the zero mapping, then Theorem 3.1 is reduced to the following.

Corollary 3.2 Let $K$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $B_1: K \to H$ be a $\mu_1$-inverse-strongly monotone mapping and $f: K \to K$ be an $\alpha$-contraction. Let $A: K \to K$ be a strongly positive linear bounded self-adjoint operator with coefficient $\bar{\gamma} > 0$. Assume that $0 < \gamma < \bar{\gamma}/\alpha$. Let $\{x_n\}$ be a sequence generated by the following iterative algorithm:

\[ \begin{cases} x_1 \in K,\\ x_{n+1} = P_K\bigl(\alpha_n\gamma f(x_n) + (I - \alpha_n A)W_n P_K(I - \lambda_n B_1)x_n\bigr), \quad n \ge 1, \end{cases} \]

where $P_K$ is the metric projection from $H$ onto $K$, $W_n$ is the mapping defined by (2.5), $\{\alpha_n\}$ is a real number sequence in $(0,1)$, and $\{\lambda_n\}$ is a positive real number sequence. Assume that $F = \bigcap_{i=1}^{\infty} F(T_i) \cap VI(K, B_1) \ne \emptyset$ and the following restrictions are satisfied:

(a) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$, and $\sum_{n=1}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$;

(b) $\sum_{n=1}^{\infty}|\lambda_{n+1} - \lambda_n| < \infty$;

(c) $\{\lambda_n\} \subset [u, v]$, where $0 < u < v < 2\mu_1$.

Then the sequence $\{x_n\}$ converges strongly to $x^* \in F$, where $x^* = P_F(\gamma f + (I - A))x^*$.

Remark 3.3 Corollary 3.2 includes the corresponding results in Iiduka and Takahashi [36] as a special case.

As an application of our main results, we consider another class of important nonlinear operators: strict pseudocontractions.

Recall that a mapping $S: K \to K$ is said to be a $\kappa$-strict pseudocontraction if there exists a constant $\kappa \in [0,1)$ such that

\[ \|Sx - Sy\|^2 \le \|x - y\|^2 + \kappa\|(I - S)x - (I - S)y\|^2, \quad \forall x, y \in K. \]

It is easy to see that the class of $\kappa$-strict pseudocontractions properly includes the class of nonexpansive mappings, which corresponds to the case $\kappa = 0$.

Putting $B = I - S$, where $S: K \to K$ is a $\kappa$-strict pseudocontraction, we know that $B$ is $\frac{1-\kappa}{2}$-inverse-strongly monotone; see [43] and the references therein.
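
For the reader's convenience, here is the short, standard computation behind that modulus (a sketch of the argument, not a new result): writing $B = I - S$, we have

\[ \|Sx - Sy\|^2 = \|(x - y) - (Bx - By)\|^2 = \|x - y\|^2 - 2\langle x - y, Bx - By\rangle + \|Bx - By\|^2, \]

so the defining inequality of a $\kappa$-strict pseudocontraction is equivalent to

\[ \langle x - y, Bx - By\rangle \ge \frac{1 - \kappa}{2}\|Bx - By\|^2, \quad \forall x, y \in K, \]

that is, $B$ is $\frac{1-\kappa}{2}$-inverse-strongly monotone.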

Corollary 3.4 Let $H$ be a real Hilbert space and $K$ be a nonempty closed convex subset of $H$. Let $S_i: K \to K$ be a $\kappa_i$-strict pseudocontraction for each $i = 1, 2$ and $f: K \to K$ be an $\alpha$-contraction. Let $A: K \to K$ be a strongly positive linear bounded self-adjoint operator with coefficient $\bar{\gamma} > 0$. Assume that $0 < \gamma < \bar{\gamma}/\alpha$. Let $\{x_n\}$ be a sequence generated by the following iterative process:

\[ \begin{cases} x_1 \in K,\\ y_n = (1 - \eta_n)x_n + \eta_n S_2 x_n,\\ x_{n+1} = P_K\bigl(\alpha_n\gamma f(x_n) + (I - \alpha_n A)W_n\bigl((1 - \lambda_n)y_n + \lambda_n S_1 y_n\bigr)\bigr), \quad n \ge 1, \end{cases} \]

where $P_K$ is the metric projection from $H$ onto $K$, $W_n$ is the mapping defined by (2.5), $\{\alpha_n\}$ is a real number sequence in $(0,1)$, and $\{\lambda_n\}$, $\{\eta_n\}$ are two positive real number sequences. Assume that $F = \bigcap_{i=1}^{\infty} F(T_i) \cap F(S_1) \cap F(S_2) \ne \emptyset$ and the following restrictions are satisfied:

(a) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$, and $\sum_{n=1}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$;

(b) $\sum_{n=1}^{\infty}|\eta_{n+1} - \eta_n| < \infty$, $\sum_{n=1}^{\infty}|\lambda_{n+1} - \lambda_n| < \infty$;

(c) $\{\eta_n\}, \{\lambda_n\} \subset [u, v]$, where $0 < u < v < \min\{1 - \kappa_1, 1 - \kappa_2\}$.

Then the sequence $\{x_n\}$ converges strongly to $x^* \in F$, where $x^* = P_F(\gamma f + (I - A))x^*$.

Proof Put $B_1 = I - S_1$ and $B_2 = I - S_2$. Then $B_1$ is $\frac{1-\kappa_1}{2}$-inverse-strongly monotone and $B_2$ is $\frac{1-\kappa_2}{2}$-inverse-strongly monotone, respectively. We have $F(S_1) = VI(K, B_1)$, $F(S_2) = VI(K, B_2)$, $P_K(I - \lambda_n B_1)y_n = (1 - \lambda_n)y_n + \lambda_n S_1 y_n$, and $P_K(I - \eta_n B_2)x_n = (1 - \eta_n)x_n + \eta_n S_2 x_n$. The desired conclusion then follows immediately from Theorem 3.1. □

References

  1. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. 10.1137/S0036144593251710

  2. Qin X, Cho SY, Kang SM: Common fixed points of a pair of non-expansive mappings with applications to convex feasibility problems. Glasg. Math. J. 2010, 52: 241–252. 10.1017/S0017089509990309

  3. Kotzer T, Cohen N, Shamir J: Image restoration by a novel method of parallel projection onto constraint sets. Optim. Lett. 1995, 20: 1772–1774.

  4. Hao Y, Wang X, Tong A: Weak and strong convergence theorems for two finite families of asymptotically nonexpansive mappings in Banach spaces. Adv. Fixed Point Theory 2012, 2: 417–432.

  5. Cho SY, Kang SM: Some results on asymptotically hemi-pseudocontractive mappings in the intermediate sense. Fixed Point Theory Appl. 2012, 13: 153–164.

  6. Qin X, Cho SY, Kang SM: On hybrid projection methods for asymptotically quasi- ϕ -nonexpansive mappings. Appl. Math. Comput. 2010, 215: 3874–3883. 10.1016/j.amc.2009.11.031

  7. Manro S, Kumar S, Bhatia SS: Common fixed point theorems in intuitionistic fuzzy metric spaces using occasionally weakly compatible maps. J. Math. Comput. Sci. 2012, 2: 73–81.

  8. Lv S, Wu C: Convergence of iterative algorithms for a generalized variational inequality and a nonexpansive mapping. Eng. Math. Lett. 2012, 1: 44–57.

  9. Qin X, Su Y: Strong convergence theorems for relatively nonexpansive mappings in a Banach space. Nonlinear Anal. 2007, 67: 1958–1965. 10.1016/j.na.2006.08.021

  10. Cho SY, Kang SM: Hybrid projection algorithms for treating common fixed points of a family of demicontinuous pseudocontractions. Appl. Math. Lett. 2012, 25: 854–857. 10.1016/j.aml.2011.10.031

  11. Qin X, Cho YJ, Kang SM, Zhou H: Convergence of a modified Halpern-type iteration algorithm for quasi- ϕ -nonexpansive mappings. Appl. Math. Lett. 2009, 22: 1051–1055. 10.1016/j.aml.2009.01.015

  12. Genel A, Lindenstrauss J: An example concerning fixed points. Isr. J. Math. 1975, 22: 81–86. 10.1007/BF02757276

  13. Khan MA, Yannelis NC: Equilibrium Theory in Infinite Dimensional Spaces. Springer, New York; 1991.

  14. Combettes PL: The Convex Feasibility Problem in Image Recovery. 95. In Advances in Imaging and Electron Physics. Edited by: Hawkes P. Academic Press, New York; 1996:155–270.

  15. Dautray R, Lions JL 1. In Mathematical Analysis and Numerical Methods for Science and Technology. Springer, New York; 1988.

  16. Dautray R, Lions JL 2. In Mathematical Analysis and Numerical Methods for Science and Technology. Springer, New York; 1989.

  17. Dautray R, Lions JL 3. In Mathematical Analysis and Numerical Methods for Science and Technology. Springer, New York; 1990.

  18. Dautray R, Lions JL 4. In Mathematical Analysis and Numerical Methods for Science and Technology. Springer, New York; 1991.

  19. Dautray R, Lions JL 5. In Mathematical Analysis and Numerical Methods for Science and Technology. Springer, New York; 1992.

  20. Dautray R, Lions JL 6. In Mathematical Analysis and Numerical Methods for Science and Technology. Springer, New York; 1993.

  21. Fattorini HO: Infinite-Dimensional Optimization and Control Theory. Cambridge University Press, Cambridge; 1999.

  22. Reich S: Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 1980, 75: 287–292. 10.1016/0022-247X(80)90323-6

  23. Qin X, Su Y: Approximation of a zero point of accretive operator in Banach spaces. J. Math. Anal. Appl. 2007, 329: 415–424. 10.1016/j.jmaa.2006.06.067

  24. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88. 10.1090/S0002-9947-1970-0282272-5

  25. Xu HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116: 659–678. 10.1023/A:1023073621589

  26. Marino G, Xu HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. 10.1016/j.jmaa.2005.05.028

  27. Qin X, Shang M, Su Y: Strong convergence of a general iterative algorithm for equilibrium problems and variational inequality problems. Math. Comput. Model. 2008, 48: 1033–1046. 10.1016/j.mcm.2007.12.008

  28. Cho SY, Kang SM: Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24: 224–228. 10.1016/j.aml.2010.09.008

  29. Zegeye H, Shahzad N: Strong convergence theorem for a common point of solution of variational inequality and fixed point problem. Adv. Fixed Point Theory 2012, 4: 374–397.

  30. Luo H, Wang Y: Iterative approximation for the common solutions of a infinite variational inequality system for inverse-strongly accretive mappings. J. Math. Comput. Sci. 2012, 2: 1660–1670.

  31. Cho SY, Kang SM: Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32: 1607–1618.

  32. Ye J, Huang J: Strong convergence theorems for fixed point problems and generalized equilibrium problems of three relatively quasi-nonexpansive mappings in Banach spaces. J. Math. Comput. Sci. 2011, 1: 1–18.

  33. Qin X, Cho SY, Kang SM: Strong convergence of shrinking projection methods for quasi- ϕ -nonexpansive mappings and equilibrium problems. J. Comput. Appl. Math. 2010, 234: 750–760. 10.1016/j.cam.2010.01.015

  34. Yang S, Li W: Iterative solutions of a system of equilibrium problems in Hilbert spaces. Adv. Fixed Point Theory 2011, 1: 15–26.

  35. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560

  36. Iiduka H, Takahashi W: Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal. 2005, 61: 341–350. 10.1016/j.na.2003.07.023

  37. Hao Y: On variational inclusion and common fixed point problems in Hilbert spaces with applications. Appl. Math. Comput. 2010, 217: 3000–3010. 10.1016/j.amc.2010.08.033

  38. Qin X, Cho SY, Kang SM: Convergence theorems of common elements for equilibrium problems and fixed point problems in Banach spaces. J. Comput. Appl. Math. 2009, 225: 20–30. 10.1016/j.cam.2008.06.011

  39. Chang SS, Lee HWJ, Chan CK: A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization. Nonlinear Anal. 2009, 70: 3307–3319. 10.1016/j.na.2008.04.035

  40. Qin X, Cho YJ, Kang SM: Viscosity approximation methods for generalized equilibrium problems and fixed point problems with applications. Nonlinear Anal. 2010, 72: 99–112. 10.1016/j.na.2009.06.042

  41. Shimoji K, Takahashi W: Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwan. J. Math. 2001, 5: 387–404.

  42. Liu L: Ishikawa-type and Mann-type iterative processes with errors for constructing solutions of nonlinear equations involving m -accretive operators in Banach spaces. Nonlinear Anal. 1998, 34: 307–317. 10.1016/S0362-546X(97)00579-8

  43. Browder FE, Petryshyn WV: Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20: 197–228. 10.1016/0022-247X(67)90085-6


Acknowledgements

This research was supported by the Natural Science Foundation of Hebei Province (A2010001943), the Science Foundation of Shijiazhuang Science and Technology Bureau (121130971) and the Science Foundation of Beijing Jiaotong University (2011YJS075).

Author information

Corresponding author

Correspondence to Meijuan Shang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Both authors participated in the design of this work and performed equally. Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Qing, Y., Shang, M. Convergence of an extragradient-like iterative algorithm for monotone mappings and nonexpansive mappings. Fixed Point Theory Appl 2013, 67 (2013). https://doi.org/10.1186/1687-1812-2013-67
