
Open Access

A regularization algorithm for zero points of accretive operators

Fixed Point Theory and Applications 2013, 2013:341

https://doi.org/10.1186/1687-1812-2013-341

Received: 10 November 2013

Accepted: 3 December 2013

Published: 13 December 2013

Abstract

A regularization algorithm with a computational error for treating accretive operators is investigated. A strong convergence theorem for zero points of accretive operators is established in a reflexive Banach space.

Keywords

accretive operator; fixed point; nonexpansive mapping; regularization algorithm; zero point

1 Introduction

In this paper, we are concerned with the problem of finding zero points of a mapping $A: E \to 2^E$; that is, finding a point $x$ in the domain of $A$ such that $0 \in Ax$. The domain of $A$ is the set $D(A) = \{x \in E : Ax \neq \emptyset\}$. Many important problems can be reformulated as zero point problems, for instance, evolution equations, complementarity problems, min-max problems, variational inequalities, and optimization problems. It is well known that minimizing a convex function $f$ can be reduced to finding zero points of the subdifferential mapping $A = \partial f$. One of the most popular techniques for solving the inclusion problem goes back to the work of Browder [1]. One of the basic ideas in the case of a Hilbert space $H$ is to reduce the inclusion problem to a fixed point problem for the operator $R_A = (I + A)^{-1}$, which is called the classical resolvent of $A$. If $A$ satisfies suitable monotonicity conditions, the classical resolvent of $A$ has full domain and is firmly nonexpansive, that is,
\[
\|R_A x - R_A y\|^2 \le \langle R_A x - R_A y, x - y\rangle, \quad \forall x, y \in H.
\]
This property of the resolvent ensures that the Picard iteration $x_{n+1} = R_A x_n$ converges weakly to a fixed point of $R_A$, which is necessarily a zero point of $A$. Rockafellar introduced this iteration method and called it the proximal point algorithm; for more detail, see [2–4] and the references therein. Methods for finding zero points of monotone mappings in the framework of Hilbert spaces rely on these good properties of the resolvent $R_A$, but such properties are not available in general Banach spaces.
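As a concrete illustration (added here and not part of the original text), the following Python sketch runs the Picard/proximal point iteration $x_{n+1} = R_A x_n$ in the Hilbert space $\mathbb{R}^2$ for the monotone operator $Ax = Qx$ arising from the convex quadratic $f(x) = \tfrac{1}{2}x^{\top}Qx$; the matrix $Q$ and the starting point are illustrative assumptions, and the resolvent reduces to a linear solve.

```python
import numpy as np

# Illustrative setting: A x = Q x with Q symmetric positive definite, so that the
# classical resolvent R_A = (I + A)^{-1} is the linear map x -> (I + Q)^{-1} x.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])           # assumed operator, not from the paper
I = np.eye(2)

def resolvent(x):
    """R_A x = (I + Q)^{-1} x, computed by a linear solve."""
    return np.linalg.solve(I + Q, x)

x = np.array([5.0, -3.0])             # arbitrary starting point
for _ in range(50):                    # Picard / proximal point iteration
    x = resolvent(x)

print(x)  # approaches the origin, the unique zero point of A
```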

In this paper, we study a viscosity algorithm with a computational error. A strong convergence theorem for zero points of accretive operators is established in a reflexive Banach space. The organization of this paper is as follows. In Section 2, we provide some necessary preliminaries. In Section 3, a strong convergence theorem is established in a reflexive Banach space. Two applications of the main results are also discussed in this section.

2 Preliminaries

In what follows, we always assume that $E$ is a Banach space with dual space $E^*$. Let $U_E = \{x \in E : \|x\| = 1\}$. $E$ is said to be smooth, or to have a Gâteaux differentiable norm, if the limit
\[
\lim_{t \to 0} \frac{\|x + t y\| - \|x\|}{t}
\]
exists for each $x, y \in U_E$. $E$ is said to have a uniformly Gâteaux differentiable norm if for each $y \in U_E$ the limit is attained uniformly for all $x \in U_E$. $E$ is said to be uniformly smooth, or to have a uniformly Fréchet differentiable norm, if the limit is attained uniformly for $x, y \in U_E$. Let $\langle\cdot,\cdot\rangle$ denote the pairing between $E$ and $E^*$. The normalized duality mapping $J: E \to 2^{E^*}$ is defined by
\[
J(x) = \{f \in E^* : \langle x, f\rangle = \|x\|^2 = \|f\|^2\}, \quad x \in E.
\]
In the sequel, we use $j$ to denote the single-valued normalized duality mapping. It is known that if the norm of $E$ is uniformly Gâteaux differentiable, then the duality mapping $j$ is single-valued and uniformly norm-to-weak$^*$ continuous on each bounded subset of $E$.

Let $C$ be a nonempty closed convex subset of $E$, and let $T: C \to C$ be a mapping. In this paper, we use $F(T)$ to denote the set of fixed points of $T$. Recall that $T$ is said to be $\alpha$-contractive if there exists a constant $\alpha \in (0,1)$ such that $\|Tx - Ty\| \le \alpha\|x - y\|$ for all $x, y \in C$; $T$ is said to be nonexpansive if this inequality holds with $\alpha = 1$. $T$ is said to be pseudocontractive if for each $x, y \in C$ there exists some $j(x - y) \in J(x - y)$ such that $\langle Tx - Ty, j(x - y)\rangle \le \|x - y\|^2$.

Recall that a closed convex subset $C$ of a Banach space $E$ is said to have normal structure if for each bounded closed convex subset $K$ of $C$ which contains at least two points, there exists an element $x$ of $K$ which is not a diametral point of $K$, i.e., $\sup\{\|x - y\| : y \in K\} < d(K)$, where $d(K)$ is the diameter of $K$. Let $D$ be a nonempty subset of $C$ and let $Q: C \to D$. $Q$ is said to be a retraction if $Q^2 = Q$; sunny if for each $x \in C$ and $t \in (0,1)$ we have $Q(tx + (1-t)Qx) = Qx$; and a sunny nonexpansive retraction if $Q$ is sunny, nonexpansive, and a retraction. $D$ is said to be a nonexpansive retract of $C$ if there exists a nonexpansive retraction from $C$ onto $D$; for more details, see [5] and the references therein.

Let $I$ denote the identity operator on $E$. An operator $A \subset E \times E$ with domain $D(A) = \{z \in E : Az \neq \emptyset\}$ and range $R(A) = \{Az : z \in D(A)\}$ is said to be accretive if for each $x_i \in D(A)$ and $y_i \in Ax_i$, $i = 1, 2$, there exists $j(x_1 - x_2) \in J(x_1 - x_2)$ such that $\langle y_1 - y_2, j(x_1 - x_2)\rangle \ge 0$. An accretive operator $A$ is said to be $m$-accretive if $R(I + rA) = E$ for all $r > 0$. In a real Hilbert space, an operator $A$ is $m$-accretive if and only if $A$ is maximal monotone. In this paper, we use $A^{-1}(0)$ to denote the set of zero points of $A$. For an accretive operator $A$, one can define a nonexpansive single-valued mapping $J_r : R(I + rA) \to D(A)$ by $J_r = (I + rA)^{-1}$ for each $r > 0$, which is called the resolvent of $A$.

One of the classical methods for studying the problem $0 \in Ax$, where $A \subset E \times E$ is an accretive operator, is the proximal point algorithm (PPA), which was initiated by Martinet [6] and further developed by Rockafellar [3]. It is known that the PPA is in general only weakly convergent; see Güler [7]. In many disciplines, including economics, image recovery, quantum physics, and control theory, problems arise in infinite-dimensional spaces. In such problems, strong convergence (norm convergence) is often much more desirable than weak convergence, since it translates into the physically tangible property that the energy $\|x_n - x\|$ of the error between the iterate $x_n$ and the solution $x$ eventually becomes arbitrarily small. The importance of strong convergence is also underlined in [7], where a convex function $f$ is minimized via the proximal point algorithm: it is shown that the rate of convergence of the value sequence $\{f(x_n)\}$ is better when $\{x_n\}$ converges strongly than when it converges weakly. Such properties have a direct impact when the process is executed directly in the underlying infinite-dimensional space.

Regularization methods have recently been investigated for treating zero points of accretive operators; see [8–22] and the references therein. In this paper, zero points of $m$-accretive operators are investigated based on a viscosity iterative algorithm with a computational error. A strong convergence theorem for zero points of $m$-accretive operators is established in a reflexive Banach space.

In order to state our main results, we need the following lemmas.

Lemma 2.1 [23]

Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space $E$, and let $\{\beta_n\}$ be a sequence in $(0,1)$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that $x_{n+1} = (1 - \beta_n)y_n + \beta_n x_n$ for all $n \ge 1$ and
\[
\limsup_{n\to\infty}\bigl(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.
\]
Then $\lim_{n\to\infty}\|y_n - x_n\| = 0$.

Lemma 2.2 [21]

Let $E$ be a real reflexive Banach space with a uniformly Gâteaux differentiable norm and normal structure, and let $C$ be a nonempty closed convex subset of $E$. Let $T: C \to C$ be a nonexpansive mapping with a fixed point, and let $f: C \to C$ be a fixed contraction with coefficient $\alpha \in (0,1)$. Let $\{x_t\}$ be the net generated by $x_t = t f(x_t) + (1 - t)Tx_t$, where $t \in (0,1)$. Then $\{x_t\}$ converges strongly as $t \to 0$ to a fixed point $x^*$ of $T$, which is the unique solution in $F(T)$ to the variational inequality
\[
\langle f(x^*) - x^*, j(x^* - p)\rangle \ge 0, \quad \forall p \in F(T).
\]
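For intuition only (an added numerical sketch, not from the original), the net $x_t = t f(x_t) + (1-t)Tx_t$ can be approximated in the Hilbert space $\mathbb{R}^2$: the right-hand side is a contraction with constant $1 - t(1-\alpha)$, so the implicit equation can be solved by fixed-point iteration. Here $T$ is taken to be the metric projection onto the closed unit ball (nonexpansive) and $f$ is an assumed $\tfrac12$-contraction.

```python
import numpy as np

def T(x):
    """Nonexpansive map: metric projection onto the closed unit ball of R^2."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def f(x):
    """An assumed 0.5-contraction whose fixed point lies outside the unit ball."""
    return 0.5 * x + np.array([1.5, 0.0])

def x_t(t, tol=1e-8):
    """Solve x = t f(x) + (1 - t) T x by fixed-point iteration;
    the map is a contraction with constant 1 - 0.5 * t."""
    x = np.zeros(2)
    while True:
        x_new = t * f(x) + (1.0 - t) * T(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new

for t in [0.5, 0.1, 0.01, 0.001]:
    print(t, x_t(t))   # x_t tends to (1, 0), a fixed point of T, as t -> 0
```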

Lemma 2.3 [24]

Let $E$ be a Banach space, and let $A$ be an $m$-accretive operator. For $\lambda > 0$, $\mu > 0$, and $x \in E$, we have
\[
J_\lambda x = J_\mu\Bigl(\frac{\mu}{\lambda}x + \Bigl(1 - \frac{\mu}{\lambda}\Bigr)J_\lambda x\Bigr),
\]
where $J_\lambda = (I + \lambda A)^{-1}$ and $J_\mu = (I + \mu A)^{-1}$.
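As a quick sanity check (an added illustration using the assumed linear monotone operator $Ax = Qx$ with $Q$ positive definite, so that $J_\lambda = (I + \lambda Q)^{-1}$), the resolvent identity of Lemma 2.3 can be verified numerically:

```python
import numpy as np

Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])      # assumed symmetric positive definite matrix
I = np.eye(2)

def J(lam, x):
    """Resolvent J_lambda x = (I + lam * Q)^{-1} x of the linear operator A x = Q x."""
    return np.linalg.solve(I + lam * Q, x)

x = np.array([1.0, -2.0])
lam, mu = 2.0, 0.5

lhs = J(lam, x)
rhs = J(mu, (mu / lam) * x + (1.0 - mu / lam) * J(lam, x))
print(np.allclose(lhs, rhs))    # True: the resolvent identity holds
```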

Lemma 2.4 [25]

Let $\{a_n\}$ be a sequence of nonnegative numbers satisfying
\[
a_{n+1} \le (1 - t_n)a_n + t_n b_n + c_n, \quad \forall n \ge 0,
\]
where $\{t_n\}$ is a sequence in $(0,1)$ such that $\lim_{n\to\infty}t_n = 0$ and $\sum_{n=0}^{\infty}t_n = \infty$, $\{b_n\}$ is a sequence such that $\limsup_{n\to\infty}b_n \le 0$, and $\{c_n\}$ is a positive sequence such that $\sum_{n=0}^{\infty}c_n < \infty$. Then $\lim_{n\to\infty}a_n = 0$.
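A small numerical illustration (added here, with the sample sequences $t_n = 1/(n+2)$, $b_n = 1/(n+1)$, $c_n = 1/(n+1)^2$, which satisfy the hypotheses of the lemma) shows the behaviour in the worst case where the recursion holds with equality:

```python
a = 10.0                          # any nonnegative starting value
for n in range(200000):
    t = 1.0 / (n + 2)             # t_n -> 0 and sum t_n = infinity
    b = 1.0 / (n + 1)             # limsup b_n <= 0
    c = 1.0 / (n + 1) ** 2        # sum c_n < infinity
    a = (1.0 - t) * a + t * b + c # equality case of the recursion in Lemma 2.4
print(a)                          # small and still decreasing: a_n -> 0, as the lemma predicts
```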

3 Main results

Theorem 3.1 Let $E$ be a real reflexive Banach space with a uniformly Gâteaux differentiable norm, and let $A$ be an $m$-accretive operator in $E$. Assume that $C := \overline{D(A)}$ is convex and has normal structure. Let $f: C \to C$ be a fixed $\alpha$-contraction. Let $\{x_n\}$ be a sequence generated in the following manner: $x_0 \in C$ and
\[
x_{n+1} = \beta_n x_n + (1 - \beta_n)J_{r_n}\bigl(\alpha_n f(x_n) + (1 - \alpha_n)x_n + e_{n+1}\bigr), \quad \forall n \ge 0,
\]
where $\{\alpha_n\}$ and $\{\beta_n\}$ are real sequences in $(0,1)$, $\{e_n\}$ is a sequence in $E$, $\{r_n\}$ is a positive real sequence, and $J_{r_n} = (I + r_n A)^{-1}$. Assume that $A^{-1}(0)$ is nonempty and that the control sequences satisfy the following restrictions:
(a) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;

(b) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(c) $\sum_{n=1}^{\infty}\|e_n\| < \infty$;

(d) $r_n \ge r > 0$ and $\lim_{n\to\infty}|r_n - r_{n+1}| = 0$.

Then the sequence $\{x_n\}$ converges strongly to $\bar{x} \in A^{-1}(0)$, which is the unique solution to the variational inequality
\[
\langle f(\bar{x}) - \bar{x}, j(p - \bar{x})\rangle \le 0, \quad \forall p \in A^{-1}(0).
\]
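Before turning to the proof, here is a minimal numerical sketch of the iteration (an added illustration, not part of the original paper): it runs the scheme of Theorem 3.1 in the Hilbert space $\mathbb{R}$ for the $m$-accretive operator $A = \partial g$ with $g(x) = |x|$, whose resolvent is the soft-thresholding map, together with an assumed contraction $f$ and control sequences chosen to satisfy (a)-(d).

```python
import numpy as np

def resolvent(r, x):
    """J_r x = (I + r * d|.|)^{-1} x: soft-thresholding, the resolvent of A = subdifferential of |x|."""
    return np.sign(x) * max(abs(x) - r, 0.0)

def f(x):
    """An assumed 0.5-contraction on R."""
    return 0.5 * x + 0.3

x = 4.0                               # x_0
for n in range(2000):
    alpha = 1.0 / (n + 2)             # (a): alpha_n -> 0, sum alpha_n = infinity
    beta = 0.5                        # (b): 0 < liminf beta_n <= limsup beta_n < 1
    e = 1.0 / (n + 2) ** 2            # (c): summable computational errors
    r = 1.0                           # (d): r_n >= r > 0, |r_n - r_{n+1}| -> 0
    y = alpha * f(x) + (1 - alpha) * x + e
    x = beta * x + (1 - beta) * resolvent(r, y)

print(x)  # approaches 0, the unique zero point of A = subdifferential of |x|
```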

Proof Fixing $p \in A^{-1}(0)$, we find that
\[
\begin{aligned}
\|x_{n+1} - p\| &\le \beta_n\|x_n - p\| + (1 - \beta_n)\bigl\|J_{r_n}\bigl(\alpha_n f(x_n) + (1 - \alpha_n)x_n + e_{n+1}\bigr) - p\bigr\| \\
&\le \beta_n\|x_n - p\| + (1 - \beta_n)\bigl(\alpha_n\|f(x_n) - p\| + (1 - \alpha_n)\|x_n - p\| + \|e_{n+1}\|\bigr) \\
&\le \bigl(1 - \alpha_n(1 - \beta_n)(1 - \alpha)\bigr)\|x_n - p\| + \alpha_n(1 - \beta_n)\|f(p) - p\| + \|e_{n+1}\| \\
&\le \max\Bigl\{\|x_n - p\|, \frac{\|f(p) - p\|}{1 - \alpha}\Bigr\} + \|e_{n+1}\| \\
&\le \cdots \le \max\Bigl\{\|x_0 - p\|, \frac{\|f(p) - p\|}{1 - \alpha}\Bigr\} + \sum_{i=1}^{n+1}\|e_i\| \\
&\le \max\Bigl\{\|x_0 - p\|, \frac{\|f(p) - p\|}{1 - \alpha}\Bigr\} + \sum_{i=1}^{\infty}\|e_i\| < \infty.
\end{aligned}
\]
This proves that the sequence $\{x_n\}$ is bounded. Put $y_n = \alpha_n f(x_n) + (1 - \alpha_n)x_n + e_{n+1}$. It follows that
\[
\begin{aligned}
\|y_{n+1} - y_n\| &\le \alpha_{n+1}\|f(x_{n+1}) - f(x_n)\| + |\alpha_{n+1} - \alpha_n|\,\|f(x_n) - x_n\| + (1 - \alpha_{n+1})\|x_{n+1} - x_n\| + \|e_{n+2}\| + \|e_{n+1}\| \\
&\le \|x_{n+1} - x_n\| + |\alpha_{n+1} - \alpha_n|\,\|f(x_n) - x_n\| + \|e_{n+2}\| + \|e_{n+1}\|.
\end{aligned}
\]
(3.1)
In view of Lemma 2.3, we find that
\[
\begin{aligned}
\|J_{r_{n+1}}y_{n+1} - J_{r_n}y_n\| &= \Bigl\|J_{r_n}\Bigl(\frac{r_n}{r_{n+1}}y_{n+1} + \Bigl(1 - \frac{r_n}{r_{n+1}}\Bigr)J_{r_{n+1}}y_{n+1}\Bigr) - J_{r_n}y_n\Bigr\| \\
&\le \Bigl\|\frac{r_n}{r_{n+1}}y_{n+1} + \Bigl(1 - \frac{r_n}{r_{n+1}}\Bigr)J_{r_{n+1}}y_{n+1} - y_n\Bigr\| \\
&\le \|y_{n+1} - y_n\| + \frac{|r_{n+1} - r_n|}{r}M,
\end{aligned}
\]
(3.2)
where $M$ is an appropriate constant such that $M \ge \sup_{n \ge 0}\|J_{r_{n+1}}y_{n+1} - y_{n+1}\|$. Substituting (3.1) into (3.2), we find that
\[
\|J_{r_{n+1}}y_{n+1} - J_{r_n}y_n\| \le \|x_{n+1} - x_n\| + |\alpha_{n+1} - \alpha_n|\,\|f(x_n) - x_n\| + \|e_{n+2}\| + \|e_{n+1}\| + \frac{|r_{n+1} - r_n|}{r}M.
\]
In view of the restrictions (a), (c) and (d), we find that
\[
\limsup_{n\to\infty}\bigl(\|J_{r_{n+1}}y_{n+1} - J_{r_n}y_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.
\]
It follows from Lemma 2.1 that
\[
\lim_{n\to\infty}\|J_{r_n}y_n - x_n\| = 0.
\]
(3.3)
Notice that $\|y_n - x_n\| \le \alpha_n\|f(x_n) - x_n\| + \|e_{n+1}\|$. It follows from the restrictions (a) and (c) that
\[
\lim_{n\to\infty}\|y_n - x_n\| = 0.
\]
(3.4)
In view of $\|J_{r_n}y_n - y_n\| \le \|J_{r_n}y_n - x_n\| + \|x_n - y_n\|$, we find from (3.3) and (3.4) that
\[
\lim_{n\to\infty}\|J_{r_n}y_n - y_n\| = 0.
\]
(3.5)
Take a fixed number $s$ such that $r > s > 0$. It follows from Lemma 2.3 that
\[
\begin{aligned}
\|y_n - J_s y_n\| &\le \|y_n - J_{r_n}y_n\| + \Bigl\|J_s\Bigl(\frac{s}{r_n}y_n + \Bigl(1 - \frac{s}{r_n}\Bigr)J_{r_n}y_n\Bigr) - J_s y_n\Bigr\| \\
&\le \|y_n - J_{r_n}y_n\| + \Bigl(1 - \frac{s}{r_n}\Bigr)\|J_{r_n}y_n - y_n\| \\
&\le 2\|y_n - J_{r_n}y_n\|.
\end{aligned}
\]
This implies from (3.5) that
\[
\lim_{n\to\infty}\|y_n - J_s y_n\| = 0.
\]
(3.6)
Now, we are in a position to claim that $\limsup_{n\to\infty}\langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x})\rangle \le 0$, where $\bar{x} = \lim_{t\to 0}x_t$ and $x_t$ solves the fixed point equation $x_t = t f(x_t) + (1 - t)J_s x_t$, $t \in (0,1)$. It follows that
\[
\begin{aligned}
\|x_t - y_n\|^2 &\le (1 - t)\bigl(\|x_t - y_n\|^2 + \|J_s y_n - y_n\|\,\|x_t - y_n\|\bigr) + t\langle f(x_t) - x_t, j(x_t - y_n)\rangle + t\|x_t - y_n\|^2 \\
&\le \|x_t - y_n\|^2 + \|J_s y_n - y_n\|\,\|x_t - y_n\| + t\langle f(x_t) - x_t, j(x_t - y_n)\rangle.
\end{aligned}
\]
This implies that
\[
\langle x_t - f(x_t), j(x_t - y_n)\rangle \le \frac{1}{t}\|J_s y_n - y_n\|\,\|x_t - y_n\|, \quad t \in (0,1).
\]
In view of (3.6), we find that
\[
\limsup_{n\to\infty}\langle x_t - f(x_t), j(x_t - y_n)\rangle \le 0.
\]
(3.7)
Since $x_t \to \bar{x}$ as $t \to 0$ and $j$ is uniformly norm-to-weak$^*$ continuous on bounded subsets of $E$, we see that
\[
\begin{aligned}
&\bigl|\langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x})\rangle - \langle x_t - f(x_t), j(x_t - y_n)\rangle\bigr| \\
&\quad \le \bigl|\langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x})\rangle - \langle f(\bar{x}) - \bar{x}, j(y_n - x_t)\rangle\bigr| + \bigl|\langle f(\bar{x}) - \bar{x}, j(y_n - x_t)\rangle - \langle x_t - f(x_t), j(x_t - y_n)\rangle\bigr| \\
&\quad \le \bigl|\langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x}) - j(y_n - x_t)\rangle\bigr| + \bigl|\langle f(\bar{x}) - \bar{x} + x_t - f(x_t), j(y_n - x_t)\rangle\bigr| \\
&\quad \le \|f(\bar{x}) - \bar{x}\|\,\bigl\|j(y_n - \bar{x}) - j(y_n - x_t)\bigr\| + \|f(\bar{x}) - \bar{x} + x_t - f(x_t)\|\,\|y_n - x_t\| \to 0 \quad \text{as } t \to 0.
\end{aligned}
\]

Hence, for any $\epsilon > 0$, there exists $\lambda > 0$ such that, for every $t \in (0, \lambda)$ and all $n$, the following inequality holds:
\[
\langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x})\rangle \le \langle x_t - f(x_t), j(x_t - y_n)\rangle + \epsilon.
\]
Taking the $\limsup$ as $n \to \infty$ in the above inequality, we find that
\[
\limsup_{n\to\infty}\langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x})\rangle \le \limsup_{n\to\infty}\langle x_t - f(x_t), j(x_t - y_n)\rangle + \epsilon.
\]
Since $\epsilon$ is arbitrary, we obtain from (3.7) that $\limsup_{n\to\infty}\langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x})\rangle \le 0$.

Finally, we prove that $x_n \to \bar{x}$ as $n \to \infty$. Note that
\[
\|y_n - \bar{x}\|^2 \le 2\alpha_n\langle f(x_n) - \bar{x}, j(y_n - \bar{x})\rangle + (1 - \alpha_n)\|x_n - \bar{x}\|^2 + 2\|e_{n+1}\|\,\|y_n - \bar{x}\|.
\]
(3.8)
On the other hand, we have
\[
\begin{aligned}
\|x_{n+1} - \bar{x}\|^2 &\le \beta_n\langle x_n - \bar{x}, j(x_{n+1} - \bar{x})\rangle + (1 - \beta_n)\langle J_{r_n}y_n - \bar{x}, j(x_{n+1} - \bar{x})\rangle \\
&\le \frac{\beta_n}{2}\bigl(\|x_n - \bar{x}\|^2 + \|x_{n+1} - \bar{x}\|^2\bigr) + \frac{1 - \beta_n}{2}\bigl(\|y_n - \bar{x}\|^2 + \|x_{n+1} - \bar{x}\|^2\bigr).
\end{aligned}
\]
It follows from (3.8) that
\[
\|x_{n+1} - \bar{x}\|^2 \le \bigl(1 - \alpha_n(1 - \beta_n)\bigr)\|x_n - \bar{x}\|^2 + 2\alpha_n(1 - \beta_n)\langle f(x_n) - \bar{x}, j(y_n - \bar{x})\rangle + 2\|e_{n+1}\|\,\|y_n - \bar{x}\|.
\]

In view of Lemma 2.4, we find the desired conclusion immediately. □

4 Applications

In this section, we give two applications of our main result in the framework of Hilbert spaces.

First, we consider, in the framework of Hilbert spaces, solutions of a Ky Fan inequality, which is known as an equilibrium problem in the terminology of Blum and Oettli; see [26] and [27] and the references therein.

Let $C$ be a nonempty closed convex subset of a Hilbert space $H$, and let $F$ be a bifunction from $C \times C$ into $\mathbb{R}$, where $\mathbb{R}$ denotes the set of real numbers. Recall the following equilibrium problem:
Find $x \in C$ such that $F(x, y) \ge 0$, $\forall y \in C$.
(4.1)
To study equilibrium problem (4.1), we may assume that $F$ satisfies the following restrictions:

(A1) $F(x, x) = 0$ for all $x \in C$;

(A2) $F$ is monotone, i.e., $F(x, y) + F(y, x) \le 0$ for all $x, y \in C$;

(A3) for each $x, y, z \in C$, $\limsup_{t \downarrow 0}F(tz + (1 - t)x, y) \le F(x, y)$;

(A4) for each $x \in C$, $y \mapsto F(x, y)$ is convex and lower semicontinuous.

The following lemma can be found in [27].

Lemma 4.1 Let $C$ be a nonempty, closed, and convex subset of $H$, and let $F: C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A4). Then, for any $s > 0$ and $x \in H$, there exists $z \in C$ such that
\[
F(z, y) + \frac{1}{s}\langle y - z, z - x\rangle \ge 0, \quad \forall y \in C.
\]
Further, define
\[
T_s x = \Bigl\{z \in C : F(z, y) + \frac{1}{s}\langle y - z, z - x\rangle \ge 0, \ \forall y \in C\Bigr\}
\]
(4.2)

for all $s > 0$ and $x \in H$. Then (1) $T_s$ is single-valued and firmly nonexpansive; (2) $F(T_s) = \mathrm{EP}(F)$ is closed and convex.

Lemma 4.2 [28]

Let $F$ be a bifunction from $C \times C$ to $\mathbb{R}$ which satisfies (A1)-(A4), and let $A_F$ be a multivalued mapping of $H$ into itself defined by
\[
A_F x =
\begin{cases}
\{z \in H : F(x, y) \ge \langle y - x, z\rangle, \ \forall y \in C\}, & x \in C, \\
\emptyset, & x \notin C.
\end{cases}
\]
(4.3)
Then $A_F$ is a maximal monotone operator with domain $D(A_F) \subset C$ and $\mathrm{EP}(F) = A_F^{-1}(0)$, where $\mathrm{EP}(F)$ stands for the solution set of (4.1), and
\[
T_s x = (I + sA_F)^{-1}x, \quad \forall x \in H,\ s > 0,
\]

where $T_s$ is defined as in (4.2).
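For a concrete (added) example of Lemma 4.2, take $C = H = \mathbb{R}^2$ and the assumed bifunction $F(x, y) = \langle Mx, y - x\rangle$ with $M$ positive semidefinite, which satisfies (A1)-(A4); then $A_F x = \{Mx\}$ and $T_s x = (I + sM)^{-1}x$, and the firm nonexpansiveness asserted in Lemma 4.1 can be checked numerically:

```python
import numpy as np

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])            # assumed positive semidefinite matrix
I = np.eye(2)
s = 0.7

def T(x):
    """T_s x = (I + s * M)^{-1} x, the resolvent associated with F(x, y) = <Mx, y - x>."""
    return np.linalg.solve(I + s * M, x)

rng = np.random.default_rng(0)
x, y = rng.normal(size=2), rng.normal(size=2)
lhs = np.linalg.norm(T(x) - T(y)) ** 2   # ||T_s x - T_s y||^2
rhs = np.dot(T(x) - T(y), x - y)         # <T_s x - T_s y, x - y>
print(lhs <= rhs + 1e-12)                # True: T_s is firmly nonexpansive
```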

Theorem 4.3 Let $F: C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A4). Let $f: C \to C$ be a fixed $\alpha$-contraction. Let $\{x_n\}$ be a sequence generated in the following manner: $x_0 \in C$ and
\[
x_{n+1} = \beta_n x_n + (1 - \beta_n)T_{r_n}\bigl(\alpha_n f(x_n) + (1 - \alpha_n)x_n + e_{n+1}\bigr), \quad \forall n \ge 0,
\]

where $\{\alpha_n\}$ and $\{\beta_n\}$ are real sequences in $(0,1)$, $\{e_n\}$ is a sequence in $H$, $\{r_n\}$ is a positive real sequence, and $T_{r_n} = (I + r_n A_F)^{-1}$. Assume that $\mathrm{EP}(F)$ is nonempty and that the control sequences satisfy restrictions (a), (b), (c) and (d) of Theorem 3.1. Then the sequence $\{x_n\}$ converges strongly to $\bar{x} \in \mathrm{EP}(F)$, which is the unique solution to the variational inequality $\langle f(\bar{x}) - \bar{x}, p - \bar{x}\rangle \le 0$, $\forall p \in \mathrm{EP}(F)$.
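Continuing the assumed example $F(x, y) = \langle Mx, y - x\rangle$ from above (an added illustration), the iteration of Theorem 4.3 can be run directly, since $T_r = (I + rM)^{-1}$ and $\mathrm{EP}(F) = \{0\}$ when $M$ is positive definite:

```python
import numpy as np

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])
I = np.eye(2)

def T(r, x):
    """T_r x = (I + r * A_F)^{-1} x = (I + r * M)^{-1} x for F(x, y) = <Mx, y - x>."""
    return np.linalg.solve(I + r * M, x)

def f(x):
    """An assumed 0.5-contraction."""
    return 0.5 * x + np.array([0.1, -0.2])

x = np.array([3.0, -4.0])                      # x_0
for n in range(3000):
    alpha = 1.0 / (n + 2)                      # (a)
    beta = 0.5                                 # (b)
    e = np.array([1.0, 1.0]) / (n + 2) ** 2    # (c): summable errors
    r = 1.0                                    # (d)
    x = beta * x + (1 - beta) * T(r, alpha * f(x) + (1 - alpha) * x + e)

print(x)  # approaches the zero vector, the unique element of EP(F)
```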

Next, we consider the problem of finding a minimizer of a proper convex lower semicontinuous function.

For a proper lower semicontinuous convex function $g: H \to (-\infty, +\infty]$, the subdifferential mapping $\partial g$ of $g$ is defined by
\[
\partial g(x) = \{x^* \in H : g(x) + \langle y - x, x^*\rangle \le g(y), \ \forall y \in H\}, \quad x \in H.
\]

Rockafellar [2] proved that $\partial g$ is a maximal monotone operator. It is easy to verify that $0 \in \partial g(v)$ if and only if $g(v) = \min_{x \in H}g(x)$.

Theorem 4.4 Let $g: H \to (-\infty, +\infty]$ be a proper convex lower semicontinuous function such that $(\partial g)^{-1}(0)$ is nonempty. Let $f: H \to H$ be a $\kappa$-contraction, and let $\{x_n\}$ be a sequence generated in $H$ by the following process: $x_0 \in H$ and
\[
\begin{cases}
y_n = \operatorname*{arg\,min}_{z \in H}\Bigl\{g(z) + \dfrac{\|z - \alpha_n f(x_n) - (1 - \alpha_n)x_n - e_{n+1}\|^2}{2r_n}\Bigr\}, \\
x_{n+1} = \beta_n x_n + (1 - \beta_n)y_n, \quad \forall n \ge 0,
\end{cases}
\]

where $\{\alpha_n\}$ and $\{\beta_n\}$ are real sequences in $(0,1)$, $\{e_n\}$ is a sequence in $H$, and $\{r_n\}$ is a positive real sequence. Assume that the control sequences satisfy the restrictions in Theorem 3.1. Then the sequence $\{x_n\}$ converges strongly to $\bar{x} \in (\partial g)^{-1}(0)$, which is the unique solution to the variational inequality $\langle f(\bar{x}) - \bar{x}, p - \bar{x}\rangle \le 0$, $\forall p \in (\partial g)^{-1}(0)$.

Proof Since $g: H \to (-\infty, +\infty]$ is a proper convex lower semicontinuous function, the subdifferential $\partial g$ of $g$ is maximal monotone. Note that
\[
y_n = \operatorname*{arg\,min}_{z \in H}\Bigl\{g(z) + \frac{\|z - \alpha_n f(x_n) - (1 - \alpha_n)x_n - e_{n+1}\|^2}{2r_n}\Bigr\}
\]
is equivalent to
\[
0 \in \partial g(y_n) + \frac{1}{r_n}\bigl(y_n - \alpha_n f(x_n) - (1 - \alpha_n)x_n - e_{n+1}\bigr).
\]
It follows that
\[
\alpha_n f(x_n) + (1 - \alpha_n)x_n + e_{n+1} \in y_n + r_n\,\partial g(y_n),
\]
that is, $y_n = (I + r_n\,\partial g)^{-1}\bigl(\alpha_n f(x_n) + (1 - \alpha_n)x_n + e_{n+1}\bigr)$.

Following the proof of Theorem 3.1, we draw the desired conclusion immediately. □
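To make the equivalence used in this proof concrete (an added illustration with the assumed choice $g(z) = |z|$ on $H = \mathbb{R}$), one can compare the minimization step with the resolvent $(I + r\,\partial g)^{-1}$, which in this case is soft-thresholding:

```python
import numpy as np

def argmin_step(u, r):
    """Brute-force evaluation of argmin_z { |z| + (z - u)^2 / (2r) } on a fine grid."""
    grid = np.linspace(-10.0, 10.0, 2_000_001)
    vals = np.abs(grid) + (grid - u) ** 2 / (2.0 * r)
    return grid[np.argmin(vals)]

def resolvent(u, r):
    """(I + r * d|.|)^{-1} u = soft-thresholding, the closed form of the same minimization."""
    return np.sign(u) * max(abs(u) - r, 0.0)

u, r = 2.3, 0.8
print(argmin_step(u, r), resolvent(u, r))   # both approximately 1.5: the two steps agree
```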

Declarations

Authors’ Affiliations

(1)
Department of Mathematics, Hangzhou Normal University, Hangzhou, China
(2)
Department of Mathematics, Gyeongsang National University, Jinju, Korea

References

  1. Browder FE: Existence and approximation of solutions of nonlinear variational inequalities. Proc. Natl. Acad. Sci. USA 1966, 56: 1080–1086. 10.1073/pnas.56.4.1080
  2. Rockafellar RT: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1976, 1: 97–116. 10.1287/moor.1.2.97
  3. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056
  4. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.
  5. Bruck RE: Nonexpansive projections on subsets of Banach spaces. Pac. J. Math. 1973, 47: 341–355. 10.2140/pjm.1973.47.341
  6. Martinet B: Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Opér. 1970, 4: 154–158.
  7. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022
  8. Cho SY, Kang SM: Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24: 224–228. 10.1016/j.aml.2010.09.008
  9. Cho SY, Kang SM: Zero point theorems for m-accretive operators in a Banach space. Fixed Point Theory 2012, 13: 49–58.
  10. Yang S: Zero theorems of accretive operators in reflexive Banach spaces. J. Nonlinear Funct. Anal. 2013, 2013: Article ID 2
  11. Wen M, Hu C: Strong convergence of an new iterative method for a zero of accretive operator and nonexpansive mapping. Fixed Point Theory Appl. 2012, 2012: Article ID 98
  12. Luo H, Wang Y: Iterative approximation for the common solutions of a infinite variational inequality system for inverse-strongly accretive mappings. J. Math. Comput. Sci. 2012, 2: 1660–1670.
  13. Cho SY, Qin X, Kang SM: Iterative processes for common fixed points of two different families of mappings with applications. J. Glob. Optim. 2013, 57: 1429–1446. 10.1007/s10898-012-0017-y
  14. Cho SY: Strong convergence of an iterative algorithm for sums of two monotone operators. J. Fixed Point Theory 2013, 2013: Article ID 6
  15. Jung JS: Strong convergence of viscosity approximation methods for finding zeros of accretive operators in Banach spaces. Nonlinear Anal. 2010, 72: 449–459. 10.1016/j.na.2009.06.079
  16. Zegeye H, Shahzad N: Strong convergence theorem for a common point of solution of variational inequality and fixed point problem. Adv. Fixed Point Theory 2012, 2: 374–397.
  17. Yang S: A proximal point algorithm for zeros of monotone operators. Math. Finance Lett. 2013, 2013: Article ID 7
  18. Cho SY, Qin X, Kang SM: Hybrid projection algorithms for treating common fixed points of a family of demicontinuous pseudocontractions. Appl. Math. Lett. 2012, 25: 854–857. 10.1016/j.aml.2011.10.031
  19. Cho SY, Kang SM: Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32: 1607–1618. 10.1016/S0252-9602(12)60127-1
  20. Wu C, Lv S: Bregman projection methods for zeros of monotone operators. J. Fixed Point Theory 2013, 2013: Article ID 7
  21. Qin X, Cho S, Wang L: Iterative algorithms with errors for zero points of m-accretive operators. Fixed Point Theory Appl. 2013, 2013: Article ID 148
  22. Qin X, Su Y: Approximation of a zero point of accretive operator in Banach spaces. J. Math. Anal. Appl. 2007, 329: 415–424. 10.1016/j.jmaa.2006.06.067
  23. Suzuki T: Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 2005, 305: 227–239. 10.1016/j.jmaa.2004.11.017
  24. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Groningen; 1976.
  25. Liu L: Ishikawa-type and Mann-type iterative processes with errors for constructing solutions of nonlinear equations involving m-accretive operators in Banach spaces. Nonlinear Anal. 1998, 34: 307–317. 10.1016/S0362-546X(97)00579-8
  26. Fan K: A minimax inequality and applications. In Inequality III. Edited by: Shisha O. Academic Press, New York; 1972: 103–113.
  27. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
  28. Takahashi S, Takahashi W, Toyoda M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147: 27–41. 10.1007/s10957-010-9713-2

Copyright

© Qing and Cho; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.