
On weak convergence of an iterative algorithm for common solutions of inclusion problems and fixed point problems in Hilbert spaces

Abstract

In this paper, a monotone inclusion problem and a fixed point problem for nonexpansive mappings are investigated via a Mann-type iterative algorithm with mixed errors. Weak convergence theorems for common elements are established in the framework of Hilbert spaces.

MSC:47H05, 47H09, 47J25.

1 Introduction

Variational inclusions have become a rich source of inspiration in pure and applied mathematics. In recent years, classical variational inclusion problems have been extended and generalized to cover a large variety of problems arising in image recovery, economics, and signal processing; for more details, see [1–14]. Based on the projection technique, it has been shown that variational inclusion problems are equivalent to fixed point problems. This alternative formulation has played a fundamental and significant part in developing several numerical methods for solving variational inclusion problems and related optimization problems.

The purpose of this paper is to study the zero point problem of the sum of a maximal monotone mapping and an inverse-strongly monotone mapping, together with the fixed point problem of a nonexpansive mapping. The organization of this paper is as follows. In Section 2, we provide some necessary preliminaries. In Section 3, a Mann-type iterative algorithm with mixed errors is investigated and a weak convergence theorem is established. Applications of the main results are also discussed in that section.

2 Preliminaries

Throughout this paper, we always assume that H is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let C be a nonempty closed convex subset of H and let $P_C$ be the metric projection from H onto C.

Let $S:C\to C$ be a mapping. $F(S)$ stands for the fixed point set of S; that is, $F(S):=\{x\in C : x=Sx\}$.

Recall that S is said to be nonexpansive iff

$$ \|Sx - Sy\| \le \|x - y\|, \quad \forall x, y \in C. $$

If C is a bounded, closed, and convex subset of H, then F(S) is nonempty, closed, and convex; see [15].

Let $A:C\to H$ be a mapping. Recall that A is said to be monotone iff

$$ \langle Ax - Ay, x - y\rangle \ge 0, \quad \forall x, y \in C. $$

A is said to be strongly monotone iff there exists a constant α>0 such that

$$ \langle Ax - Ay, x - y\rangle \ge \alpha\|x - y\|^2, \quad \forall x, y \in C. $$

For such a case, A is also said to be α-strongly monotone. A is said to be inverse-strongly monotone iff there exists a constant α>0 such that

$$ \langle Ax - Ay, x - y\rangle \ge \alpha\|Ax - Ay\|^2, \quad \forall x, y \in C. $$

For such a case, A is also said to be α-inverse-strongly monotone. It is not hard to see that an α-inverse-strongly monotone mapping is monotone and Lipschitz continuous with constant $1/\alpha$.

Recall that the classical variational inequality is to find an $x\in C$ such that

$$ \langle Ax, y - x\rangle \ge 0, \quad \forall y \in C. $$
(2.1)

In this paper, we use VI(C,A) to denote the solution set of (2.1). It is known that $\omega\in C$ is a solution to (2.1) iff ω is a fixed point of the mapping $P_C(I-\lambda A)$, where $\lambda>0$ is a constant, and I stands for the identity mapping. If A is α-inverse-strongly monotone and $\lambda\in(0,2\alpha]$, then the mapping $P_C(I-\lambda A)$ is nonexpansive. Indeed, we have

$$ \begin{aligned} \|(I-\lambda A)x-(I-\lambda A)y\|^2 &= \|(x-y)-\lambda(Ax-Ay)\|^2 \\ &= \|x-y\|^2 - 2\lambda\langle x-y, Ax-Ay\rangle + \lambda^2\|Ax-Ay\|^2 \\ &\le \|x-y\|^2 - \lambda(2\alpha-\lambda)\|Ax-Ay\|^2. \end{aligned} $$

This shows that $I-\lambda A$ is nonexpansive; since $P_C$ is nonexpansive, so is the composition $P_C(I-\lambda A)$. It follows that VI(C,A), being the fixed point set of a nonexpansive mapping, is closed and convex.
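To make the fixed-point reformulation concrete, the following minimal Python sketch (illustrative only, not part of the paper) iterates $x\mapsto P_C(x-\lambda Ax)$ for the hypothetical choices $A(x)=x-b$, which is 1-inverse-strongly monotone, and $C=[0,1]^n$, for which $P_C$ is a componentwise clip; the VI solution in this case is $P_C(b)$.

```python
import numpy as np

# Illustrative data: A(x) = x - b is the gradient of 0.5*||x - b||^2, hence
# 1-inverse-strongly monotone (alpha = 1); C = [0, 1]^n is a box.
rng = np.random.default_rng(0)
b = rng.normal(size=5)

def A(x):
    return x - b

def P_C(x):
    return np.clip(x, 0.0, 1.0)      # metric projection onto the box C

lam = 1.0                            # any lam in (0, 2*alpha] keeps P_C(I - lam*A) nonexpansive
x = np.zeros(5)
for _ in range(100):
    x = P_C(x - lam * A(x))          # fixed-point iteration for VI(C, A)

print(np.allclose(x, P_C(b)))        # the unique VI solution here is P_C(b); expect True
```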

A multivalued operator $T:H\to 2^H$ with the domain $D(T)=\{x\in H : Tx\neq\emptyset\}$ and the range $R(T)=\{Tx : x\in D(T)\}$ is said to be monotone if for $x_1\in D(T)$, $x_2\in D(T)$, $y_1\in Tx_1$, and $y_2\in Tx_2$, we have $\langle x_1-x_2, y_1-y_2\rangle\ge 0$. A monotone operator T is said to be maximal if its graph $G(T)=\{(x,y) : y\in Tx\}$ is not properly contained in the graph of any other monotone operator. Let I denote the identity operator on H and let $T:H\to 2^H$ be a maximal monotone operator. Then we can define, for each $\lambda>0$, a nonexpansive single-valued mapping $J_\lambda:H\to H$ by $J_\lambda=(I+\lambda T)^{-1}$. It is called the resolvent of T. We know that $T^{-1}(0)=F(J_\lambda)$ for all $\lambda>0$ and $J_\lambda$ is firmly nonexpansive, that is,

$$ \|J_\lambda x - J_\lambda y\|^2 \le \langle J_\lambda x - J_\lambda y, x - y\rangle, \quad \forall x, y \in H; $$

for more details, see [16–22] and the references therein.
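As a quick illustration of the resolvent (a sketch under the hypothetical choice $T=\partial|\cdot|$ on the real line, whose resolvent is the well-known soft-thresholding map), the firm nonexpansiveness inequality can be checked numerically:

```python
import numpy as np

def J(lam, x):
    # Resolvent (I + lam*T)^{-1} for T = subdifferential of |.|: soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(1)
x, y = rng.normal(size=1000), rng.normal(size=1000)
lam = 0.5
lhs = (J(lam, x) - J(lam, y)) ** 2            # |J x - J y|^2, componentwise
rhs = (J(lam, x) - J(lam, y)) * (x - y)       # (J x - J y)(x - y), componentwise
print(bool(np.all(lhs <= rhs + 1e-12)))       # firm nonexpansiveness; expect True
```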

In [19], Kamimura and Takahashi investigated the problem of finding zero points of a maximal monotone operator based on the following algorithm:

$$ x_0 \in H, \qquad x_{n+1} = \alpha_n x_n + (1-\alpha_n) J_{\lambda_n} x_n, \quad n = 0, 1, 2, \ldots, $$

where $\{\alpha_n\}$ is a sequence in (0,1), $\{\lambda_n\}$ is a positive sequence, $T:H\to 2^H$ is maximal monotone and $J_{\lambda_n}=(I+\lambda_n T)^{-1}$. They showed that the sequence $\{x_n\}$ converges weakly to some $z\in T^{-1}(0)$ provided that the control sequences satisfy some restrictions. Further, using this result, they also investigated the case $T=\partial f$, where $f:H\to(-\infty,\infty]$ is a proper lower semicontinuous convex function. Convergence theorems are established in the framework of real Hilbert spaces.
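A minimal numerical sketch of this scheme (with the illustrative operator $T=\partial|\cdot|$, whose only zero is the origin, and constant control sequences chosen only for simplicity; none of these choices are prescribed by [19]):

```python
import numpy as np

def J(lam, x):
    # Resolvent of T = subdifferential of |.| (soft-thresholding); T^{-1}(0) = {0}.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -2.0, 0.5])       # x_0 in H = R^3
for n in range(200):
    alpha, lam = 0.5, 1.0            # constant control sequences, for illustration only
    x = alpha * x + (1 - alpha) * J(lam, x)

print(float(np.max(np.abs(x))))      # x_n approaches the zero point of T (the origin)
```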

In [16], Takahashi and Toyoda investigated the problem of finding common solutions of the variational inequality problem (2.1) and a fixed point problem of nonexpansive mappings based on the following algorithm:

$$ x_0 \in C, \qquad x_{n+1} = \alpha_n x_n + (1-\alpha_n) S P_C(x_n - \lambda_n Ax_n), \quad n \ge 0, $$

where $\{\alpha_n\}$ is a sequence in (0,1), $\{\lambda_n\}$ is a positive sequence, $S:C\to C$ is a nonexpansive mapping and $A:C\to H$ is an inverse-strongly monotone mapping. They showed that the sequence $\{x_n\}$ converges weakly to some $z\in VI(C,A)\cap F(S)$ provided that the control sequences satisfy some restrictions.
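The following sketch runs this iteration on illustrative data for which the common solution is known in advance: $A(x)=x-b$ with $b=(2,0.3)$ on $C=[0,1]^2$, and the nonexpansive self-map $S(x)=(x+p)/2$ whose only fixed point is the VI solution $p=P_C(b)$. These choices are assumptions made for the example, not part of [16].

```python
import numpy as np

b = np.array([2.0, 0.3])
p = np.clip(b, 0.0, 1.0)                 # P_C(b): the unique solution of VI(C, A)

def A(x):   return x - b                 # 1-inverse-strongly monotone
def P_C(x): return np.clip(x, 0.0, 1.0)  # projection onto the box C = [0, 1]^2
def S(x):   return 0.5 * (x + p)         # nonexpansive self-map of C with F(S) = {p}

x = np.array([0.0, 1.0])                 # x_0 in C
for n in range(100):
    alpha, lam = 0.5, 1.0                # constant control sequences, for illustration only
    x = alpha * x + (1 - alpha) * S(P_C(x - lam * A(x)))

print(np.allclose(x, p))                 # x_n -> p, the common solution; expect True
```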

In [23], Tada and Takahashi investigated the problem of finding common solutions of an equilibrium problem and a fixed point problem of nonexpansive mappings based on the following algorithm: $x_0\in H$ and

$$ \begin{cases} u_n \in C \text{ such that } F(u_n, u) + \dfrac{1}{r_n}\langle u - u_n, u_n - x_n\rangle \ge 0, & \forall u \in C, \\ x_{n+1} = \alpha_n x_n + (1-\alpha_n) S u_n, & n \ge 0, \end{cases} $$

where $\{\alpha_n\}$ is a sequence in (0,1), $\{r_n\}$ is a positive sequence, $S:C\to C$ is a nonexpansive mapping and $F:C\times C\to\mathbb{R}$ is a bifunction. They showed that the sequence $\{x_n\}$ converges weakly to some $z\in EP(F)\cap F(S)$, the set of common solutions of the equilibrium problem and the fixed point problem, provided that the control sequences satisfy some restrictions.
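For a toy illustration of this scheme, take the hypothetical bifunction $F(x,y)=y^2-x^2$ on $C=\mathbb{R}$: the regularized subproblem then has the closed-form solution $u_n=x_n/(1+2r_n)$, the equilibrium-problem solution set is $\{0\}$, and $S(x)=-x$ is nonexpansive with $F(S)=\{0\}$. A minimal sketch under these assumptions:

```python
# Sketch of the scheme of Tada and Takahashi for the illustrative bifunction
# F(x, y) = y**2 - x**2 on C = R; then u_n = x_n / (1 + 2*r_n) in closed form,
# and S(x) = -x is nonexpansive with fixed point 0.
x, alpha, r = 5.0, 0.5, 1.0
for n in range(80):
    u = x / (1.0 + 2.0 * r)              # u_n: F(u, y) + (1/r)(y - u)(u - x) >= 0 for all y
    x = alpha * x + (1 - alpha) * (-u)   # Mann step with S(u) = -u

print(abs(x))                            # x_n -> 0, the common solution
```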

Recently, fixed point and zero point problems have been studied by many authors based on iterative methods; see, for example, [23–34] and the references therein. In this paper, motivated by the above results, we consider the problem of finding a common solution to the zero point problem and the fixed point problem based on Mann-type iterative methods with errors. Weak convergence theorems are established in the framework of Hilbert spaces.

To obtain our main results in this paper, we need the following lemmas.

Recall that a space is said to satisfy Opial’s condition [35] if, for any sequence $\{x_n\}\subset H$ with $x_n\rightharpoonup x$, where $\rightharpoonup$ denotes weak convergence, the inequality

$$ \liminf_{n\to\infty}\|x_n - x\| < \liminf_{n\to\infty}\|x_n - y\| $$

holds for every $y\in H$ with $y\neq x$. Indeed, the above inequality is equivalent to the following:

$$ \limsup_{n\to\infty}\|x_n - x\| < \limsup_{n\to\infty}\|x_n - y\|. $$

It is well known that every Hilbert space satisfies Opial’s condition.

Lemma 2.1 [34]

Let C be a nonempty, closed, and convex subset of H, let $A:C\to H$ be a mapping, and let $B:H\to 2^H$ be a maximal monotone operator. Then $F(J_\lambda(I-\lambda A))=(A+B)^{-1}(0)$, where $J_\lambda=(I+\lambda B)^{-1}$ is the resolvent of B for $\lambda>0$.

Lemma 2.2 [36]

Let { a n }, { b n }, and { c n } be three nonnegative sequences satisfying the following condition:

$$ a_{n+1} \le (1+b_n)a_n + c_n, \quad \forall n \ge n_0, $$

where $n_0$ is some nonnegative integer, $\sum_{n=1}^{\infty} b_n < \infty$ and $\sum_{n=1}^{\infty} c_n < \infty$. Then the limit $\lim_{n\to\infty} a_n$ exists.

Lemma 2.3 [37]

Suppose that H is a real Hilbert space and $0 < p \le t_n \le q < 1$ for all $n\ge 1$. Suppose further that $\{x_n\}$ and $\{y_n\}$ are sequences in H such that

$$ \limsup_{n\to\infty}\|x_n\| \le r, \qquad \limsup_{n\to\infty}\|y_n\| \le r $$

and

$$ \lim_{n\to\infty}\|t_n x_n + (1-t_n)y_n\| = r $$

hold for some $r\ge 0$. Then $\lim_{n\to\infty}\|x_n - y_n\| = 0$.

Lemma 2.4 [15]

Let C be a nonempty, closed, and convex subset of H. Let $S:C\to C$ be a nonexpansive mapping. Then the mapping $I-S$ is demiclosed at zero, that is, if $\{x_n\}$ is a sequence in C such that $x_n\rightharpoonup\bar{x}$ and $x_n - Sx_n\to 0$, then $\bar{x}\in F(S)$.

3 Main results

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H, let $A:C\to H$ be an α-inverse-strongly monotone mapping, let $S:C\to C$ be a nonexpansive mapping and let B be a maximal monotone operator on H such that the domain of B is included in C. Assume that $\mathcal{F}:=F(S)\cap(A+B)^{-1}(0)\neq\emptyset$. Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be real number sequences in (0,1) such that $\alpha_n+\beta_n+\gamma_n=1$. Let $\{\lambda_n\}$ be a positive real number sequence and let $\{e_n\}$ be a bounded error sequence in C. Let $\{x_n\}$ be a sequence in C generated in the following iterative process:

$$ x_1\in C, \qquad x_{n+1} = \alpha_n Sx_n + \beta_n J_{\lambda_n}(x_n - \lambda_n Ax_n) + \gamma_n e_n $$
(3.1)

for all $n\in\mathbb{N}$, where $J_{\lambda_n}=(I+\lambda_n B)^{-1}$. Assume that the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\lambda_n\}$ satisfy the following restrictions:

(a) $0 < a \le \beta_n \le b < 1$;

(b) $0 < c \le \lambda_n \le d < 2\alpha$;

(c) $\sum_{n=1}^{\infty} \gamma_n < \infty$,

where a, b, c, and d are some real numbers. Then the sequence $\{x_n\}$ generated in (3.1) converges weakly to some point in $\mathcal{F}$.

Proof Notice that $I-\lambda_n A$ is nonexpansive. Indeed, we have

$$ \begin{aligned} \|(I-\lambda_n A)x-(I-\lambda_n A)y\|^2 &= \|(x-y)-\lambda_n(Ax-Ay)\|^2 \\ &= \|x-y\|^2 - 2\lambda_n\langle x-y, Ax-Ay\rangle + \lambda_n^2\|Ax-Ay\|^2 \\ &\le \|x-y\|^2 - \lambda_n(2\alpha-\lambda_n)\|Ax-Ay\|^2. \end{aligned} $$

In view of the restriction (b), we find that $I-\lambda_n A$ is nonexpansive. Fixing $p\in\mathcal{F}$, we find from Lemma 2.1 that

$$ p = Sp = J_{\lambda_n}(p - \lambda_n Ap). $$

Put $y_n = J_{\lambda_n}(x_n - \lambda_n Ax_n)$. Since $J_{\lambda_n}$ and $I-\lambda_n A$ are nonexpansive, we have

$$ \|y_n - p\| \le \|(x_n - \lambda_n Ax_n) - (p - \lambda_n Ap)\| \le \|x_n - p\|. $$
(3.2)

On the other hand, we have

$$ \|x_{n+1} - p\| \le \alpha_n\|x_n - p\| + \beta_n\|y_n - p\| + \gamma_n\|e_n - p\| \le \|x_n - p\| + \gamma_n\|e_n - p\|. $$
(3.3)

We find from Lemma 2.2 (applied with $b_n = 0$ and $c_n = \gamma_n\|e_n - p\|$, which is summable by restriction (c) and the boundedness of $\{e_n\}$) that $\lim_{n\to\infty}\|x_n - p\|$ exists. This in turn implies that $\{x_n\}$ and $\{y_n\}$ are bounded. Put $\lim_{n\to\infty}\|x_n - p\| = L \ge 0$. Notice that

$$ \|Sx_n - p + \gamma_n(e_n - Sx_n)\| \le \|Sx_n - p\| + \gamma_n\|e_n - Sx_n\| \le \|x_n - p\| + \gamma_n\|e_n - Sx_n\|. $$

In view of the restriction (c), this implies that

$$ \limsup_{n\to\infty}\|Sx_n - p + \gamma_n(e_n - Sx_n)\| \le L. $$

We also have

$$ \|y_n - p + \gamma_n(e_n - Sx_n)\| \le \|y_n - p\| + \gamma_n\|e_n - Sx_n\| \le \|x_n - p\| + \gamma_n\|e_n - Sx_n\|. $$

In view of the restriction (c), this implies that

$$ \limsup_{n\to\infty}\|y_n - p + \gamma_n(e_n - Sx_n)\| \le L. $$

On the other hand, we have

$$ x_{n+1} - p = (1-\beta_n)\bigl(Sx_n - p + \gamma_n(e_n - Sx_n)\bigr) + \beta_n\bigl(y_n - p + \gamma_n(e_n - Sx_n)\bigr). $$

It follows from Lemma 2.3 that

$$ \lim_{n\to\infty}\|Sx_n - y_n\| = 0. $$
(3.4)

Notice that

$$ \begin{aligned} \|y_n - p\|^2 &\le \|(x_n - \lambda_n Ax_n) - (p - \lambda_n Ap)\|^2 \\ &\le \|x_n - p\|^2 - 2\alpha\lambda_n\|Ax_n - Ap\|^2 + \lambda_n^2\|Ax_n - Ap\|^2 \\ &= \|x_n - p\|^2 - \lambda_n(2\alpha - \lambda_n)\|Ax_n - Ap\|^2. \end{aligned} $$
(3.5)

This implies that

$$ \begin{aligned} \|x_{n+1} - p\|^2 &\le \alpha_n\|x_n - p\|^2 + \beta_n\|y_n - p\|^2 + \gamma_n\|e_n - p\|^2 \\ &\le \alpha_n\|x_n - p\|^2 + \beta_n\|x_n - p\|^2 - \beta_n\lambda_n(2\alpha - \lambda_n)\|Ax_n - Ap\|^2 + \gamma_n\|e_n - p\|^2 \\ &\le \|x_n - p\|^2 - \beta_n\lambda_n(2\alpha - \lambda_n)\|Ax_n - Ap\|^2 + \gamma_n\|e_n - p\|^2. \end{aligned} $$
(3.6)

It follows that

$$ \beta_n\lambda_n(2\alpha - \lambda_n)\|Ax_n - Ap\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \gamma_n\|e_n - p\|^2. $$

In view of the restrictions (a), (b), and (c), we obtain that

$$ \lim_{n\to\infty}\|Ax_n - Ap\| = 0. $$
(3.7)

Notice that

$$ \begin{aligned} \|y_n - p\|^2 &= \|J_{\lambda_n}(x_n - \lambda_n Ax_n) - J_{\lambda_n}(p - \lambda_n Ap)\|^2 \\ &\le \langle (x_n - \lambda_n Ax_n) - (p - \lambda_n Ap), y_n - p\rangle \\ &= \tfrac{1}{2}\bigl(\|(x_n - \lambda_n Ax_n) - (p - \lambda_n Ap)\|^2 + \|y_n - p\|^2 - \|(x_n - \lambda_n Ax_n) - (p - \lambda_n Ap) - (y_n - p)\|^2\bigr) \\ &\le \tfrac{1}{2}\bigl(\|x_n - p\|^2 + \|y_n - p\|^2 - \|x_n - y_n - \lambda_n(Ax_n - Ap)\|^2\bigr) \\ &= \tfrac{1}{2}\bigl(\|x_n - p\|^2 + \|y_n - p\|^2 - \|x_n - y_n\|^2 - \lambda_n^2\|Ax_n - Ap\|^2 + 2\lambda_n\langle x_n - y_n, Ax_n - Ap\rangle\bigr) \\ &\le \tfrac{1}{2}\bigl(\|x_n - p\|^2 + \|y_n - p\|^2 - \|x_n - y_n\|^2 + 2\lambda_n\|x_n - y_n\|\,\|Ax_n - Ap\|\bigr). \end{aligned} $$

It follows that

$$ \|y_n - p\|^2 \le \|x_n - p\|^2 - \|x_n - y_n\|^2 + 2\lambda_n\|x_n - y_n\|\,\|Ax_n - Ap\|. $$
(3.8)

On the other hand, we have

$$ \|x_{n+1} - p\|^2 \le \alpha_n\|x_n - p\|^2 + \beta_n\|y_n - p\|^2 + \gamma_n\|e_n - p\|^2. $$
(3.9)

Substituting (3.8) into (3.9), we arrive at

$$ \begin{aligned} \|x_{n+1} - p\|^2 &\le \alpha_n\|x_n - p\|^2 + \beta_n\|x_n - p\|^2 - \beta_n\|x_n - y_n\|^2 + 2\beta_n\lambda_n\|x_n - y_n\|\,\|Ax_n - Ap\| + \gamma_n\|e_n - p\|^2 \\ &\le \|x_n - p\|^2 - \beta_n\|x_n - y_n\|^2 + 2\beta_n\lambda_n\|x_n - y_n\|\,\|Ax_n - Ap\| + \gamma_n\|e_n - p\|^2. \end{aligned} $$

It follows that

$$ \beta_n\|x_n - y_n\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + 2\beta_n\lambda_n\|x_n - y_n\|\,\|Ax_n - Ap\| + \gamma_n\|e_n - p\|^2. $$

In view of the restrictions (a) and (c), we find from (3.7) that

$$ \lim_{n\to\infty}\|x_n - y_n\| = 0. $$
(3.10)

Notice that

$$ \|Sx_n - x_n\| \le \|Sx_n - y_n\| + \|y_n - x_n\|. $$

It follows from (3.4) and (3.10) that

$$ \lim_{n\to\infty}\|Sx_n - x_n\| = 0. $$
(3.11)

Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that $x_{n_i}\rightharpoonup\omega\in C$. From (3.11) and Lemma 2.4, we find that $\omega\in F(S)$. In view of (3.10), the corresponding subsequence $\{y_{n_i}\}$ of $\{y_n\}$ also satisfies $y_{n_i}\rightharpoonup\omega$. Notice that

$$ y_n = J_{\lambda_n}(x_n - \lambda_n Ax_n). $$

This implies that

$$ x_n - \lambda_n Ax_n \in (I + \lambda_n B)y_n. $$

That is,

$$ \frac{x_n - y_n}{\lambda_n} - Ax_n \in By_n. $$

Since B is monotone, we get for any $(u,v)\in B$ that

$$ \Bigl\langle y_n - u, \frac{x_n - y_n}{\lambda_n} - Ax_n - v\Bigr\rangle \ge 0. $$
(3.12)

Replacing n by $n_i$ and letting $i\to\infty$, note that $(x_{n_i} - y_{n_i})/\lambda_{n_i}\to 0$ by (3.10) and restriction (b), that $y_{n_i}\rightharpoonup\omega$, and that $Ax_{n_i}\to A\omega$ in norm (by (3.7) together with the inverse-strong monotonicity of A). Hence we obtain that

$$ \langle \omega - u, -A\omega - v\rangle \ge 0. $$

Since B is maximal monotone, this means $-A\omega\in B\omega$; that is, $0\in(A+B)(\omega)$. Hence, we get $\omega\in(A+B)^{-1}(0)$. This completes the proof that $\omega\in\mathcal{F}$.

Suppose that there is another subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j}\rightharpoonup\omega'$. Then we can show that $\omega'\in\mathcal{F}$ in exactly the same way. Assume that $\omega\neq\omega'$. Since $\lim_{n\to\infty}\|x_n - p\|$ exists for any $p\in\mathcal{F}$, we may put $\lim_{n\to\infty}\|x_n - \omega\| = d$. Since the space satisfies Opial’s condition, we see that

$$ \begin{aligned} d &= \liminf_{i\to\infty}\|x_{n_i} - \omega\| < \liminf_{i\to\infty}\|x_{n_i} - \omega'\| = \lim_{n\to\infty}\|x_n - \omega'\| \\ &= \liminf_{j\to\infty}\|x_{n_j} - \omega'\| < \liminf_{j\to\infty}\|x_{n_j} - \omega\| = d. \end{aligned} $$

This is a contradiction, which shows that $\omega=\omega'$. Therefore, the sequence $\{x_n\}$ converges weakly to $\omega\in\mathcal{F}$. This completes the proof. □
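The following Python sketch runs iteration (3.1) on illustrative data satisfying the hypotheses of Theorem 3.1; the operator $B=\partial\|\cdot\|_1$, the mapping $A(x)=x-b$, the map S, the constant $\beta_n$, and the error sequence are choices made only for this example, not prescribed by the theorem.

```python
import numpy as np

# Illustrative setting for (3.1): H = C = R^2, B = subdifferential of the l1-norm
# (its resolvent is soft-thresholding), A(x) = x - b (1-inverse-strongly monotone),
# and S(x) = (x + z)/2, a nonexpansive map whose only fixed point z is also the
# unique zero of A + B, so the common solution set is {z}.
b = np.array([3.0, -0.2])
z = np.sign(b) * np.maximum(np.abs(b) - 1.0, 0.0)   # the point in (A + B)^{-1}(0)

def J(lam, x):  return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # (I + lam*B)^{-1}
def A(x):       return x - b
def S(x):       return 0.5 * (x + z)

x = np.array([5.0, 5.0])                 # x_1
for n in range(1, 300):
    gamma = 1.0 / (n + 2) ** 2           # summable error weights, restriction (c)
    beta  = 0.5                          # restriction (a)
    alpha = 1.0 - beta - gamma
    lam   = 1.0                          # restriction (b): 0 < lam < 2 (here alpha = 1 for A)
    e     = np.array([1.0, -1.0])        # bounded error term
    x = alpha * S(x) + beta * J(lam, x - lam * A(x)) + gamma * e

print(np.allclose(x, z, atol=1e-3))      # x_n approaches the common solution z; expect True
```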

From Theorem 3.1, we obtain the following result for the inclusion problem.

Corollary 3.2 Let C be a nonempty closed convex subset of a real Hilbert space H, let $A:C\to H$ be an α-inverse-strongly monotone mapping, and let B be a maximal monotone operator on H such that the domain of B is included in C. Assume that $(A+B)^{-1}(0)\neq\emptyset$. Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be real number sequences in (0,1) such that $\alpha_n+\beta_n+\gamma_n=1$. Let $\{\lambda_n\}$ be a positive real number sequence and let $\{e_n\}$ be a bounded error sequence in C. Let $\{x_n\}$ be a sequence in C generated in the following iterative process:

$$ x_1\in C, \qquad x_{n+1} = \alpha_n x_n + \beta_n J_{\lambda_n}(x_n - \lambda_n Ax_n) + \gamma_n e_n $$

for all $n\in\mathbb{N}$, where $J_{\lambda_n}=(I+\lambda_n B)^{-1}$. Assume that the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\lambda_n\}$ satisfy the following restrictions:

(a) $0 < a \le \beta_n \le b < 1$;

(b) $0 < c \le \lambda_n \le d < 2\alpha$;

(c) $\sum_{n=1}^{\infty} \gamma_n < \infty$,

where a, b, c, and d are some real numbers. Then the sequence $\{x_n\}$ converges weakly to some point in $(A+B)^{-1}(0)$.

Let $f:H\to(-\infty,\infty]$ be a proper lower semicontinuous convex function. Define the subdifferential

$$ \partial f(x) = \bigl\{z\in H : f(x) + \langle y - x, z\rangle \le f(y), \ \forall y\in H\bigr\} $$

for all $x\in H$. Then ∂f is a maximal monotone operator on H; for more details, see [38]. Let C be a nonempty closed convex subset of H and let $i_C$ be the indicator function of C, that is,

$$ i_C x = \begin{cases} 0, & x \in C, \\ \infty, & x \notin C. \end{cases} $$

Furthermore, we define the normal cone $N_C(v)$ of C at v as follows:

$$ N_C(v) = \bigl\{z\in H : \langle z, y - v\rangle \le 0, \ \forall y\in C\bigr\} $$

for any $v\in C$. Then $i_C:H\to(-\infty,\infty]$ is a proper lower semicontinuous convex function on H and $\partial i_C$ is a maximal monotone operator. Let $J_\lambda x = (I+\lambda\,\partial i_C)^{-1}x$ for any $\lambda>0$ and $x\in H$. From $\partial i_C x = N_C(x)$ for $x\in C$, we get

$$ v = J_\lambda x \iff x \in v + \lambda N_C(v) \iff \langle x - v, y - v\rangle \le 0, \ \forall y \in C \iff v = P_C x, $$

where $P_C$ is the metric projection from H onto C. Similarly, we can get that $x\in(A+\partial i_C)^{-1}(0)$ iff $x\in VI(C,A)$. Putting $B=\partial i_C$ in Theorem 3.1, we find that $J_{\lambda_n}=P_C$. The following results are then not hard to derive.
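A quick numerical check of the chain of equivalences above (a sketch with the illustrative box $C=[0,1]^3$, for which $P_C$ is a componentwise clip): $v=P_C x$ exactly when $x-v$ lies in the normal cone $N_C(v)$, i.e., $\langle x-v, y-v\rangle\le 0$ for all $y\in C$.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(scale=3.0, size=3)
v = np.clip(x, 0.0, 1.0)                      # P_C(x) for the box C = [0, 1]^3

ys = rng.uniform(0.0, 1.0, size=(1000, 3))    # sample points y in C
# <x - v, y - v> <= 0 for every sampled y, i.e. x - v lies in N_C(v): expect True.
print(bool(np.all((ys - v) @ (x - v) <= 1e-12)))
```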

Theorem 3.3 Let C be a nonempty closed convex subset of a real Hilbert space H, let $A:C\to H$ be an α-inverse-strongly monotone mapping and let $S:C\to C$ be a nonexpansive mapping. Assume that $\mathcal{F}:=F(S)\cap VI(C,A)\neq\emptyset$. Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be real number sequences in (0,1) such that $\alpha_n+\beta_n+\gamma_n=1$. Let $\{\lambda_n\}$ be a positive real number sequence and let $\{e_n\}$ be a bounded error sequence in C. Let $\{x_n\}$ be a sequence in C generated in the following iterative process:

$$ x_1\in C, \qquad x_{n+1} = \alpha_n Sx_n + \beta_n P_C(x_n - \lambda_n Ax_n) + \gamma_n e_n $$

for all $n\in\mathbb{N}$. Assume that the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\lambda_n\}$ satisfy the following restrictions:

(a) $0 < a \le \beta_n \le b < 1$;

(b) $0 < c \le \lambda_n \le d < 2\alpha$;

(c) $\sum_{n=1}^{\infty} \gamma_n < \infty$,

where a, b, c, and d are some real numbers. Then the sequence $\{x_n\}$ converges weakly to some point in $\mathcal{F}$.

In view of Theorem 3.3, we have the following result.

Corollary 3.4 Let C be a nonempty closed convex subset of a real Hilbert space H and let $A:C\to H$ be an α-inverse-strongly monotone mapping such that $VI(C,A)\neq\emptyset$. Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be real number sequences in (0,1) such that $\alpha_n+\beta_n+\gamma_n=1$. Let $\{\lambda_n\}$ be a positive real number sequence and let $\{e_n\}$ be a bounded error sequence in C. Let $\{x_n\}$ be a sequence in C generated in the following iterative process:

$$ x_1\in C, \qquad x_{n+1} = \alpha_n x_n + \beta_n P_C(x_n - \lambda_n Ax_n) + \gamma_n e_n $$

for all $n\in\mathbb{N}$. Assume that the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\lambda_n\}$ satisfy the following restrictions:

(a) $0 < a \le \beta_n \le b < 1$;

(b) $0 < c \le \lambda_n \le d < 2\alpha$;

(c) $\sum_{n=1}^{\infty} \gamma_n < \infty$,

where a, b, c, and d are some real numbers. Then the sequence { x n } converges weakly to some point in VI(C,A).

Let F be a bifunction of $C\times C$ into $\mathbb{R}$, where $\mathbb{R}$ denotes the set of real numbers. Recall the following equilibrium problem.

Find $x\in C$ such that $F(x,y)\ge 0$, $\forall y\in C$.

In this paper, we use EP(F) to denote the solution set of the equilibrium problem.

To study the equilibrium problems, we may assume that F satisfies the following conditions:

(A1) $F(x,x)=0$ for all $x\in C$;

(A2) F is monotone, i.e., $F(x,y)+F(y,x)\le 0$ for all $x,y\in C$;

(A3) for each $x,y,z\in C$, $\limsup_{t\downarrow 0} F(tz+(1-t)x, y) \le F(x,y)$;

(A4) for each $x\in C$, $y\mapsto F(x,y)$ is convex and weakly lower semicontinuous.

Putting $F(x,y)=\langle Ax, y-x\rangle$ for every $x,y\in C$, we see that the equilibrium problem is reduced to the variational inequality (2.1).

The following lemma can be found in [39].

Lemma 3.5 Let C be a nonempty closed convex subset of H and let $F:C\times C\to\mathbb{R}$ be a bifunction satisfying (A1)-(A4). Then, for any $r>0$ and $x\in H$, there exists $z\in C$ such that

$$ F(z,y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \quad \forall y\in C. $$

Further, define

$$ T_r x = \Bigl\{z\in C : F(z,y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \ \forall y\in C\Bigr\} $$
(3.13)

for all $r>0$ and $x\in H$. Then the following hold:

(a) $T_r$ is single-valued;

(b) $T_r$ is firmly nonexpansive, i.e., for any $x,y\in H$, $\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y\rangle$;

(c) $F(T_r) = EP(F)$;

(d) EP(F) is closed and convex.

Lemma 3.6 [5]

Let C be a nonempty closed convex subset of a real Hilbert space H, let F be a bifunction from $C\times C$ to $\mathbb{R}$ which satisfies (A1)-(A4) and let $A_F$ be a multivalued mapping of H into itself defined by

$$ A_F x = \begin{cases} \{z\in H : F(x,y) \ge \langle y - x, z\rangle, \ \forall y\in C\}, & x \in C, \\ \emptyset, & x \notin C. \end{cases} $$

Then $A_F$ is a maximal monotone operator with the domain $D(A_F)\subset C$, $EP(F)=A_F^{-1}(0)$ and

$$ T_r x = (I + rA_F)^{-1}x, \quad \forall x\in H,\ r>0, $$

where T r is defined as in (3.13).

Theorem 3.7 Let C be a nonempty closed convex subset of a real Hilbert space H, let $S:C\to C$ be a nonexpansive mapping and let F be a bifunction from $C\times C$ to $\mathbb{R}$ which satisfies (A1)-(A4). Assume that $\mathcal{F}:=F(S)\cap EP(F)\neq\emptyset$. Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be real number sequences in (0,1) such that $\alpha_n+\beta_n+\gamma_n=1$. Let $\{\lambda_n\}$ be a positive real number sequence and let $\{e_n\}$ be a bounded error sequence in C. Let $\{x_n\}$ be a sequence in C generated in the following iterative process:

$$ x_1\in C, \qquad x_{n+1} = \alpha_n Sx_n + \beta_n y_n + \gamma_n e_n $$

for all $n\in\mathbb{N}$, where $y_n\in C$ is such that

$$ F(y_n, u) + \frac{1}{\lambda_n}\langle u - y_n, y_n - x_n\rangle \ge 0, \quad \forall u\in C. $$

Assume that the sequences { α n }, { β n }, { γ n }, and { λ n } satisfy the following restrictions:

(a) $0 < a \le \beta_n \le b < 1$;

(b) $0 < c \le \lambda_n \le d < \infty$;

(c) $\sum_{n=1}^{\infty} \gamma_n < \infty$,

where a, b, c, and d are some real numbers. Then the sequence $\{x_n\}$ converges weakly to some point in $\mathcal{F}$.

References

1. Douglas J, Rachford HH: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 1956, 82: 421–439. 10.1090/S0002-9947-1956-0084194-4
2. Shen J, Pang LP: An approximate bundle method for solving variational inequalities. Commun. Optim. Theory 2012, 1: 1–18.
3. Lions PL, Mercier B: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16: 964–979. 10.1137/0716071
4. Qin X, Chang SS, Cho YJ: Iterative methods for generalized equilibrium problems and fixed point problems with applications. Nonlinear Anal. 2010, 11: 2963–2972. 10.1016/j.nonrwa.2009.10.017
5. Takahashi S, Takahashi W, Toyoda M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147: 27–41. 10.1007/s10957-010-9713-2
6. Abdel-Salam HS, Al-Khaled K: Variational iteration method for solving optimization problems. J. Math. Comput. Sci. 2012, 2: 1475–1497.
7. Cho SY, Kang SM: Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32: 1607–1618.
8. Noor MA, Noor KI, Waseem M: Decomposition method for solving system of linear equations. Eng. Math. Lett. 2013, 2: 31–41.
9. Cho SY, Li W, Kang SM: Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013, 2013: Article ID 199.
10. Zegeye H, Shahzad N: Strong convergence theorem for a common point of solution of variational inequality and fixed point problem. Adv. Fixed Point Theory 2012, 2: 374–397.
11. Cho SY, Kang SM: Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24: 224–228. 10.1016/j.aml.2010.09.008
12. He RH: Coincidence theorem and existence theorems of solutions for a system of Ky Fan type minimax inequalities in FC-spaces. Adv. Fixed Point Theory 2012, 2: 47–57.
13. Kellogg RB: Nonlinear alternating direction algorithm. Math. Comput. 1969, 23: 23–27. 10.1090/S0025-5718-1969-0238507-3
14. Cho SY, Kang SM: Zero point theorems of m-accretive operators in a Banach space. Fixed Point Theory 2012, 13: 49–58.
15. Browder FE: Nonlinear operators and nonlinear equations of evolution in Banach spaces. Proc. Symp. Pure Math. 1976, 18: 78–81.
16. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560
17. Qin X, Cho YJ, Kang SM: Approximating zeros of monotone operators by proximal point algorithms. J. Glob. Optim. 2010, 46: 75–87. 10.1007/s10898-009-9410-6
18. Eshita K, Takahashi W: Approximating zero points of accretive operators in general Banach spaces. Fixed Point Theory Appl. 2007, 2: 105–116.
19. Kamimura S, Takahashi W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106: 226–240. 10.1006/jath.2000.3493
20. Qin X, Su Y: Approximation of a zero point of accretive operator in Banach spaces. J. Math. Anal. Appl. 2007, 329: 415–424. 10.1016/j.jmaa.2006.06.067
21. Yuan Q, Shang M: Convergence of an extragradient-like iterative algorithm for monotone mappings and nonexpansive mappings. Fixed Point Theory Appl. 2013, 2013: Article ID 67.
22. Luo H, Wang Y: Iterative approximation for the common solutions of an infinite variational inequality system for inverse-strongly accretive mappings. J. Math. Comput. Sci. 2012, 2: 1660–1670.
23. Tada A, Takahashi W: Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem. J. Optim. Theory Appl. 2007, 133: 359–370. 10.1007/s10957-007-9187-z
24. Noor MA, Huang Z: Some resolvent iterative methods for variational inclusions and nonexpansive mappings. Appl. Math. Comput. 2007, 194: 267–275. 10.1016/j.amc.2007.04.037
25. Qin X, Cho YJ, Kang SM: Convergence theorems of common elements for equilibrium problems and fixed point problems in Banach spaces. J. Comput. Appl. Math. 2009, 225: 20–30. 10.1016/j.cam.2008.06.011
26. Qin X, Cho SY, Kang SM: Strong convergence of shrinking projection methods for quasi-ϕ-nonexpansive mappings and equilibrium problems. J. Comput. Appl. Math. 2010, 234: 750–760. 10.1016/j.cam.2010.01.015
27. Shehu Y: An iterative method for fixed point problems, variational inclusions and generalized equilibrium problems. Math. Comput. Model. 2011, 54: 1394–1404. 10.1016/j.mcm.2011.04.008
28. Saejung S, Yotkaew P: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75: 742–750. 10.1016/j.na.2011.09.005
29. Saejung S, Wongchan K, Yotkaew P: Another weak convergence theorems for accretive mappings in Banach spaces. Fixed Point Theory Appl. 2011, 2011: Article ID 26.
30. Cho YJ, Kang SM, Zhou H: Approximate proximal point algorithms for finding zeroes of maximal monotone operators in Hilbert spaces. J. Inequal. Appl. 2008, 2008: Article ID 598191.
31. Wu C, Liu A: Strong convergence of a hybrid projection iterative algorithm for common solutions of operator equations and of inclusion problems. Fixed Point Theory Appl. 2012, 2012: Article ID 90.
32. Qin X, Cho SY, Kang SM: On hybrid projection methods for asymptotically quasi-ϕ-nonexpansive mappings. Appl. Math. Comput. 2010, 215: 3874–3883. 10.1016/j.amc.2009.11.031
33. Kim JK: Strong convergence theorems by hybrid projection methods for equilibrium problems and fixed point problems of the asymptotically quasi-ϕ-nonexpansive mappings. Fixed Point Theory Appl. 2011, 2011: Article ID 10.
34. Aoyama K, et al.: On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007, 8: 471–489.
35. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597. 10.1090/S0002-9904-1967-11761-0
36. Tan KK, Xu HK: The nonlinear ergodic theorem for asymptotically nonexpansive mappings in Banach spaces. Proc. Am. Math. Soc. 1992, 114: 399–404. 10.1090/S0002-9939-1992-1068133-2
37. Schu J: Weak and strong convergence to fixed points of asymptotically nonexpansive mappings. Bull. Aust. Math. Soc. 1991, 43: 153–159. 10.1017/S0004972700028884
38. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056
39. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.


Acknowledgements

The author is grateful to the reviewers for useful suggestions which improved the contents of the article.

Author information


Correspondence to Yuan Hecai.


Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
