
Iterative Approaches to Find Zeros of Maximal Monotone Operators by Hybrid Approximate Proximal Point Methods

Abstract

The purpose of this paper is to introduce and investigate two kinds of iterative algorithms for the problem of finding zeros of maximal monotone operators. Weak and strong convergence theorems are established in a real Hilbert space. As applications, we consider the problem of finding a minimizer of a convex function.

1. Introduction

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. In this paper, we always assume that $T : C \to 2^H$ is a maximal monotone operator. A classical method to solve the following set-valued equation:

(1.1)  $0 \in T(x)$

is the proximal point method. To be more precise, start with any point $x_0 \in H$, and update $x_{n+1}$ iteratively conforming to the following recursion:

(1.2)  $x_{n+1} = (I + \beta_n T)^{-1} x_n, \quad n \ge 0,$

where $\{\beta_n\} \subset (0, \infty)$ is a sequence of real numbers. However, as pointed out in [1], the ideal form of the method is often impractical since, in many cases, solving the problem (1.2) exactly is either impossible or as difficult as the original problem (1.1). Therefore, one of the most interesting and important problems in the theory of maximal monotone operators is to find an efficient iterative algorithm to compute approximate zeros of $T$.

In 1976, Rockafellar [2] gave an inexact variant of the method

(1.3)  $x_{n+1} = (I + \beta_n T)^{-1}(x_n + e_n), \quad n \ge 0,$

where $\{e_n\}$ is regarded as an error sequence. This is an inexact proximal point method. It was shown that, if

(1.4)  $\sum_{n=0}^{\infty} \|e_n\| < \infty,$

the sequence $\{x_n\}$ defined by (1.3) converges weakly to a zero of $T$ provided that $\liminf_{n \to \infty} \beta_n > 0$. In [3], Güler constructed an example showing that Rockafellar's inexact proximal point method (1.3) does not, in general, converge strongly.
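To make the discussion concrete, the following is a minimal numerical sketch (ours, not taken from the paper) of the inexact proximal point iteration (1.3) for the illustrative choice $T = \partial f$ with $f(x) = |x|$ on the real line; in this case the resolvent $(I + \beta T)^{-1}$ is the soft-thresholding map, and all function names and parameter values below are assumptions made for illustration.

```python
import numpy as np

def soft_threshold(x, beta):
    # Resolvent (I + beta*T)^(-1) of T = subdifferential of f(x) = |x|.
    return np.sign(x) * np.maximum(np.abs(x) - beta, 0.0)

def inexact_ppa(x0, betas, errors):
    """Inexact proximal point iteration x_{n+1} = (I + beta_n T)^(-1)(x_n + e_n), cf. (1.3)."""
    x = x0
    for beta, e in zip(betas, errors):
        x = soft_threshold(x + e, beta)  # resolvent step applied to the perturbed point
    return x

n_iters = 50
betas = np.full(n_iters, 1.0)               # liminf beta_n > 0
errors = 0.5 ** np.arange(1, n_iters + 1)   # summable errors, cf. (1.4)
print(inexact_ppa(5.0, betas, errors))      # settles at the unique zero x* = 0 of T
```

With summable errors and step sizes bounded away from zero, the iterates settle at the unique zero $x^* = 0$ of $T$, in line with Rockafellar's convergence result (weak and strong convergence coincide here, since the space is one-dimensional).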

Recently, many authors have studied modifications of Rockafellar's inexact proximal point method (1.3) so that strong convergence is guaranteed. In 2008, Ceng et al. [4] gave new accuracy criteria for modified approximate proximal point algorithms in Hilbert spaces; that is, they established strong and weak convergence theorems for modified approximate proximal point algorithms for finding zeros of maximal monotone operators in Hilbert spaces. In the meantime, Cho et al. [5] proved the following strong convergence result.

Theorem CKZ1.

Let be a real Hilbert space, a nonempty closed convex subset of , and a maximal monotone operator with . Let be the metric projection of onto . Suppose that, for any given , , and , there exists conforming to the following set-valued mapping equation:

(1.5)

where with as and

(1.6)

Let be a real sequence in such that

(i) as ,

(ii).

For any fixed , define the sequence iteratively as follows:

(1.7)

Then converges strongly to a zero of , where .

They also derived the following weak convergence theorem.

Theorem CKZ2.

Let be a real Hilbert space, a nonempty closed convex subset of , and a maximal monotone operator with . Let be the metric projection of onto . Suppose that, for any given , , and , there exists conforming to the following set-valued mapping equation:

(1.8)

where and

(1.9)

Let be a real sequence in with , and define a sequence iteratively as follows:

(1.10)

where for all . Then the sequence converges weakly to a zero of .

Very recently, Qin et al. [6] extended (1.7) and (1.10) to the iterative scheme

(1.11)

and the iterative scheme

(1.12)

respectively, where , , and with . Under appropriate conditions, they established a strong convergence theorem for (1.11) and a weak convergence theorem for (1.12). In addition, for other recent research on approximate proximal point methods and their variants for finding zeros of maximal monotone operators, see, for example, [7–10] and the references therein.

In this paper, motivated by the research going on in this direction, we continue to consider the problem of finding a zero of the maximal monotone operator $T$. The iterative algorithms (1.7) and (1.10) are extended to the following new iterative schemes:

(1.13)
(1.14)

respectively, where is any fixed point in , , , , and with . Under mild conditions, we establish a strong convergence theorem for (1.13) and a weak convergence theorem for (1.14). The results presented in this paper improve the corresponding results announced by many authors. It is easy to see that in the case when and for all , the iterative algorithms (1.13) and (1.14) reduce to (1.7) and (1.10), respectively. Moreover, the iterative algorithms (1.13) and (1.14) differ substantially from (1.11) and (1.12), respectively. Indeed, it is clear that the iterative algorithm (1.13) is equivalent to the following:

(1.15)

Here, the first iteration step computes the prediction value of approximate zeros of $T$, and the second iteration step computes the correction value of approximate zeros of $T$. Similarly, it is obvious that the iterative algorithm (1.14) is equivalent to the following:

(1.16)

Here, again, the first iteration step computes the prediction value and the second iteration step computes the correction value of approximate zeros of $T$. Therefore, the iterative algorithms (1.13) and (1.14) have a natural prediction-correction structure and are quite reasonable.
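For intuition only, the following sketch illustrates a prediction-correction iteration of this general type: a resolvent prediction step applied to a perturbed point, followed by a Halpern-type convex-combination correction anchored at a fixed point $u$. It is a simplified stand-in for (1.13)-(1.16), not the paper's exact scheme: the actual algorithms involve further convex-combination coefficients and a bounded perturbation sequence as specified in Section 3, and the model operator $T(x) = x$ and all names below are our own illustrative choices.

```python
import numpy as np

def resolvent(x, beta):
    # (I + beta*T)^(-1) for the illustrative operator T(x) = x, so J_beta(x) = x/(1+beta).
    return x / (1.0 + beta)

def hybrid_ppa(u, x0, alphas, betas, errors):
    """Prediction: y_n = J_{beta_n}(x_n + e_n); correction: x_{n+1} = alpha_n*u + (1-alpha_n)*y_n."""
    x = x0
    for alpha, beta, e in zip(alphas, betas, errors):
        y = resolvent(x + e, beta)           # prediction value of an approximate zero
        x = alpha * u + (1.0 - alpha) * y    # Halpern-type correction toward the anchor u
    return x

n = 200
alphas = 1.0 / (np.arange(n) + 2.0)          # alpha_n -> 0 with sum alpha_n = infinity
betas = np.full(n, 1.0)
errors = 0.5 ** np.arange(1, n + 1)          # summable errors
print(hybrid_ppa(u=1.0, x0=3.0, alphas=alphas, betas=betas, errors=errors))  # approaches the zero 0 of T
```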

In this paper, we consider the problem of finding zeros of maximal monotone operators by hybrid approximate proximal point methods. To be more precise, we introduce two kinds of iterative schemes, namely (1.13) and (1.14). Weak and strong convergence theorems are established in a real Hilbert space. As applications, we also consider the problem of finding a minimizer of a convex function.

2. Preliminaries

In this section, we give some preliminaries which will be used in the rest of this paper. Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Let $T : H \to 2^H$ be a set-valued mapping. The set $D(T)$ defined by

(2.1)  $D(T) = \{x \in H : T(x) \neq \emptyset\}$

is called the effective domain of $T$. The set $R(T)$ defined by

(2.2)  $R(T) = \bigcup_{x \in D(T)} T(x)$

is called the range of $T$. The set $G(T)$ defined by

(2.3)  $G(T) = \{(x, y) \in H \times H : x \in D(T),\ y \in T(x)\}$

is called the graph of $T$. A mapping $T$ is said to be monotone if

(2.4)  $\langle x_1 - x_2, y_1 - y_2 \rangle \ge 0 \quad \text{whenever } (x_1, y_1), (x_2, y_2) \in G(T).$

$T$ is said to be maximal monotone if its graph $G(T)$ is not properly contained in the graph of any other monotone operator.
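As a standard illustration of these definitions (not specific to this paper), take $H = \mathbb{R}$ and let $T$ be the subdifferential of the absolute value function:

$T(x) = \partial|x| = \begin{cases} \{-1\}, & x < 0, \\ [-1, 1], & x = 0, \\ \{+1\}, & x > 0. \end{cases}$

Then $D(T) = \mathbb{R}$, $R(T) = [-1, 1]$, the monotonicity inequality (2.4) is easily verified on $G(T)$, and $T$ is in fact maximal monotone with $T^{-1}(0) = \{0\}$.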

The class of monotone mappings is one of the most important classes of nonlinear mappings. Within the past several decades, many authors have devoted themselves to the study of the existence of zeros of maximal monotone mappings and of iterative algorithms for approximating them; see [1–5, 7, 11–30]. In order to prove our main results, we need the following lemmas. The first lemma can be obtained from Eckstein [1, Lemma 2] immediately.

Lemma 2.1.

Let $C$ be a nonempty, closed, and convex subset of a Hilbert space $H$. For any given $x \in H$, $\beta > 0$, and $e \in H$, there exists $\tilde{x} \in C$ conforming to the following set-valued mapping equation (SVME):

(2.5)  $x + e \in \tilde{x} + \beta T(\tilde{x}).$

Furthermore, for any , we have

(2.6)

Lemma 2.2 (see [30, Lemma 2.5, page 243]).

Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the inequality

(2.7)  $a_{n+1} \le (1 - \lambda_n) a_n + \lambda_n \sigma_n + \gamma_n, \quad n \ge 0,$

where $\{\lambda_n\}$, $\{\sigma_n\}$, and $\{\gamma_n\}$ satisfy the conditions

(i) $\{\lambda_n\} \subset [0, 1]$ and $\sum_{n=0}^{\infty} \lambda_n = \infty$, or equivalently $\prod_{n=0}^{\infty} (1 - \lambda_n) = 0$,

(ii) $\limsup_{n \to \infty} \sigma_n \le 0$,

(iii) $\gamma_n \ge 0$ and $\sum_{n=0}^{\infty} \gamma_n < \infty$.

Then $\lim_{n \to \infty} a_n = 0$.

Lemma 2.3 (see [28, Lemma 1, page 303]).

Let $\{a_n\}$ and $\{b_n\}$ be sequences of nonnegative real numbers satisfying the inequality

(2.8)  $a_{n+1} \le a_n + b_n, \quad n \ge 0.$

If $\sum_{n=0}^{\infty} b_n < \infty$, then $\lim_{n \to \infty} a_n$ exists.

Lemma 2.4 (see [11]).

Let $E$ be a uniformly convex Banach space, let $C$ be a nonempty closed convex subset of $E$, and let $S : C \to C$ be a nonexpansive mapping. Then $I - S$ is demiclosed at zero.

Lemma 2.5 (see [31]).

Let $E$ be a uniformly convex Banach space, and let $B_r = \{x \in E : \|x\| \le r\}$, $r > 0$, be a closed ball of $E$. Then there exists a continuous strictly increasing convex function $g : [0, \infty) \to [0, \infty)$ with $g(0) = 0$ such that

(2.9)

for all and with .

It is clear that the following lemma is valid.

Lemma 2.6.

Let be a real Hilbert space. Then there holds

(2.10)

3. Main Results

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. We always assume that $T : C \to 2^H$ is a maximal monotone operator. Then, for each $r > 0$, the resolvent $J_r = (I + rT)^{-1}$ is a single-valued nonexpansive mapping whose domain is all of $H$. Recall also that the Yosida approximation of $T$ is defined by

(3.1)  $T_r = \frac{1}{r}(I - J_r), \quad r > 0.$

Assume that $T^{-1}(0) \neq \emptyset$, where $T^{-1}(0)$ is the set of zeros of $T$. Then $T^{-1}(0) = F(J_r)$ for all $r > 0$, where $F(J_r)$ is the set of fixed points of the resolvent $J_r$.
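As a quick sanity check (a standard example, not taken from the paper), let $H = \mathbb{R}$ and $T(x) = x$. Then

$J_r x = (I + rT)^{-1} x = \frac{x}{1 + r}, \qquad T_r x = \frac{1}{r}(x - J_r x) = \frac{x}{1 + r}, \qquad r > 0,$

so $J_r$ is indeed single-valued and nonexpansive on all of $H$, and $T^{-1}(0) = \{0\} = F(J_r)$ for every $r > 0$, as asserted above.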

Theorem 3.1.

Let be a real Hilbert space, a nonempty, closed, and convex subset of , and a maximal monotone operator with . Let be a metric projection from onto . For any given , , and , find conforming to SVME (2.5), where with as and with . Let , , , and be real sequences in satisfying the following control conditions:

(i) and ,

(ii) and ,

(iii) and .

Let $\{x_n\}$ be a sequence generated in the following manner:

(3.2)

where is a fixed point and is a bounded sequence in . Then the sequence generated by (3.2) converges strongly to a zero of , where , if and only if as .

Proof.

First, let us show the necessity. Assume that as , where . It follows from (2.5) that

(3.3)

and hence

(3.4)

This implies that as . Note that

(3.5)

This shows that as .

Next, let us show the sufficiency. The proof is divided into several steps.

Step 1 ( is bounded).

Indeed, from the assumptions and , it follows that

(3.6)

Take an arbitrary . Then it follows from Lemma 2.1 that

(3.7)

and hence

(3.8)

This implies that

(3.9)

Putting

(3.10)

we show that for all . It is easy to see that the result holds for . Assume that the result holds for some . Next, we prove that . As a matter of fact, from (3.9), we see that

(3.11)

This shows that the sequence is bounded.

Step 2 (, where ).

The existence of is guaranteed by Lemma 1 of Bruck [12].

Since is maximal monotone, and , we deduce that

(3.12)

Since as , for each , we have

(3.13)

On the other hand, by the nonexpansivity of , we obtain that

(3.14)

From the assumption as and (3.13), we get

(3.15)

From (2.5), we see that

(3.16)

Since and , we conclude from and the boundedness of that

(3.17)

Combining (3.15) with (3.17), we have

(3.18)

In the meantime, from algorithm (3.2) and assumption , it follows that

(3.19)

Thus, from the condition , we have

(3.20)

This together with (3.18) implies that

(3.21)

From and (3.21), we can obtain that

(3.22)

Step 3 ( as ).

Indeed, utilizing (3.8), we deduce from algorithm (3.2) that

(3.23)

Note that and is bounded. Hence it is known that . Since , , and , in view of Lemma 2.2, we conclude that

(3.24)

This completes the proof.

Remark 3.2.

The maximal monotonicity of is only used to guarantee the existence of solutions of SVME , for any given , , and . If we assume that is monotone (not necessarily maximal) and satisfies the range condition

(3.25)

we can see that Theorem 3.1 still holds.

Corollary 3.3.

Let be a real Hilbert space, a nonempty, closed, and convex subset of , and a demicontinuous pseudocontraction with a fixed point in . Let be a metric projection from onto . For any , , and , find such that

(3.26)

where with as and with . Let , , , and be real sequences in satisfying the following control conditions:

(i) and ,

(ii) and ,

(iii) and .

Let $\{x_n\}$ be a sequence generated in the following manner:

(3.27)

where is a fixed point and is a bounded sequence in . If the sequence satisfies the condition as , then the sequence converges strongly to a fixed point of , where .

Proof.

Let . Then is demicontinuous, monotone, and satisfies the range condition:

(3.28)

For any , define an operator by

(3.29)

Then is demicontinuous and strongly pseudocontractive. By a result of Lan and Wu [21, Theorem 2.2], we see that has a unique fixed point ; that is,

(3.30)

This implies that for all . In particular, for any given , , and , there exists such that

(3.31)

that is,

(3.32)

Finally, from the proof of Theorem 3.1, we can derive the desired conclusion immediately.
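The reduction used in this proof can be summarized as follows (a standard observation; we write $S$ for the demicontinuous pseudocontraction and set $A = I - S$, which may differ from the paper's notation). In a Hilbert space, $S$ is pseudocontractive if and only if

$\langle Sx - Sy, x - y \rangle \le \|x - y\|^2 \quad \text{for all } x, y,$

which is precisely the monotonicity of $A = I - S$, and moreover $F(S) = A^{-1}(0)$. Thus finding a fixed point of $S$ amounts to finding a zero of the monotone operator $A$, so Theorem 3.1, in the form described in Remark 3.2, applies.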

From Theorem 3.1, we also have the following result immediately.

Corollary 3.4.

Let be a real Hilbert space, a nonempty, closed, and convex subset of , and a maximal monotone operator with . Let be a metric projection from onto . For any , and , find conforming to SVME (2.5), where with as and with . Let , , and be real sequences in satisfying the following control conditions:

(i),

(ii) and ,

(iii).

Let $\{x_n\}$ be a sequence generated in the following manner:

(3.33)

where is a fixed point. Then the sequence converges strongly to a zero of , where , if and only if as .

Proof.

In Theorem 3.1, put for all . Then, from Theorem 3.1, we obtain the desired result immediately.

Next, we present a hybrid Mann-type iterative algorithm and study its weak convergence.

Theorem 3.5.

Let be a real Hilbert space, a nonempty, closed, and convex subset of , and a maximal monotone operator with . Let be a metric projection from onto . For any given , , and , find conforming to SVME (2.5), where and with . Let , , , and be real sequences in satisfying the following control conditions:

(i) and ,

(ii),

(iii) and .

Let $\{x_n\}$ be a sequence generated in the following manner:

(3.34)

where is a bounded sequence in . Then the sequence generated by (3.34) converges weakly to a zero of .

Proof.

Take an arbitrary . Utilizing Lemma 2.1, from the assumption with , we conclude that

(3.35)

It follows from Lemma 2.5 that

(3.36)

Utilizing Lemma 2.3, we know that exists. We, therefore, obtain that the sequence is bounded. It follows from (3.36) that

(3.37)

From the conditions , , and , we conclude that

(3.38)

Note that

(3.39)

In view of (3.38), we obtain that

(3.40)

Also, note that

(3.41)

In view of the assumption and (3.40), we see that

(3.42)

Let be a weak subsequential limit of such that converges weakly to as . From (3.40), we see that also converges weakly to . Since is nonexpansive, we can obtain that by Lemma 2.4. Opial's condition (see [23]) guarantees that the sequence converges weakly to . This completes the proof.

By a careful analysis of the proofs of Corollary 3.3 and Theorem 3.5, it is not hard to derive the following result.

Corollary 3.6.

Let be a real Hilbert space, a nonempty, closed, and convex subset of , and a demicontinuous pseudocontraction with a fixed point in . Let be a metric projection from onto . For any , , and , find such that

(3.43)

where and with . Let , , , and be real sequences in satisfying the following control conditions:

(i) and ,

(ii),

(iii) and .

Let $\{x_n\}$ be a sequence generated in the following manner:

(3.44)

where is a bounded sequence in . Then the sequence converges weakly to a fixed point of .

Utilizing Theorem 3.5, we also obtain the following result immediately.

Corollary 3.7.

Let be a real Hilbert space, a nonempty, closed, and convex subset of , and a maximal monotone operator with . Let be a metric projection from onto . For any , , and , find conforming to SVME (2.5), where and with . Let , , and be real sequences in satisfying the following control conditions:

(i),

(ii),

(iii).

Let $\{x_n\}$ be a sequence generated in the following manner:

(3.45)

Then the sequence converges weakly to a zero of .

4. Applications

In this section, as applications of the main theorems (Theorems 3.1 and 3.5), we consider the problem of finding a minimizer of a convex function $f$.

Let $H$ be a real Hilbert space, and let $f : H \to (-\infty, +\infty]$ be a proper convex lower semi-continuous function. Then the subdifferential $\partial f$ of $f$ is defined as follows:

(4.1)  $\partial f(x) = \{ z \in H : f(y) \ge f(x) + \langle y - x, z \rangle \ \text{for all } y \in H \}, \quad x \in H.$

Theorem 4.1.

Let be a real Hilbert space and a proper convex lower semi-continuous function such that . Let be a sequence in with as and a sequence in such that with . Let be the solution of SVME (2.5) with replaced by ; that is, for any given ,

(4.2)

Let , , , and be real sequences in satisfying the following control conditions:

(i) and ,

(ii) and ,

(iii) and .

Let $\{x_n\}$ be a sequence generated in the following manner:

(4.3)

where is a fixed point and is a bounded sequence in . If the sequence satisfies the condition as , then the sequence converges strongly to a minimizer of nearest to .

Proof.

Since $f$ is a proper convex lower semi-continuous function, the subdifferential $\partial f$ of $f$ is maximal monotone by a result of Rockafellar [2]. Notice that

(4.4)

is equivalent to the following:

(4.5)

It follows that

(4.6)

By using Theorem 3.1, we can obtain the desired result immediately.
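For the reader's convenience, the equivalence used in this step can be written out explicitly (a reconstruction of the standard argument; $\tilde{x}_n$, $\beta_n$, and $e_n$ denote the quantities appearing in SVME (2.5) with $T$ replaced by $\partial f$):

$x_n + e_n \in \tilde{x}_n + \beta_n \partial f(\tilde{x}_n) \iff 0 \in \partial\Big( f(\cdot) + \frac{1}{2\beta_n}\|\cdot - x_n - e_n\|^2 \Big)(\tilde{x}_n) \iff \tilde{x}_n = \arg\min_{y \in H} \Big\{ f(y) + \frac{1}{2\beta_n}\|y - x_n - e_n\|^2 \Big\},$

so each prediction step amounts to a regularized minimization of $f$.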

Theorem 4.2.

Let be a real Hilbert space and a proper convex lower semi-continuous function such that . Let be a sequence in with and a sequence in such that with . Let be the solution of SVME (2.5) with replaced by ; that is, for any given ,

(4.7)

Let , , , and be real sequences in satisfying the following control conditions:

(i) and ,

(ii),

(iii) and .

Let $\{x_n\}$ be a sequence generated in the following manner:

(4.8)

where is a bounded sequence in . Then the sequence converges weakly to a minimizer of .

Proof.

We can obtain the desired result readily from the proofs of Theorems 3.5 and 4.1.
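As a small numerical illustration of this application (ours, not the paper's), the sketch below runs a Mann-type averaged inexact proximal iteration for $f(x) = |x|$ on the real line, whose proximal map (the resolvent of $\partial f$) is soft thresholding; the averaging weights, step sizes, and error sequence are illustrative choices only.

```python
import numpy as np

def prox_abs(x, beta):
    # Resolvent of the subdifferential of f(x) = |x|, i.e. soft thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - beta, 0.0)

def mann_inexact_prox(x0, alphas, betas, errors):
    """Prediction: y_n = prox_{beta_n f}(x_n + e_n); Mann correction: x_{n+1} = (1-alpha_n)*x_n + alpha_n*y_n."""
    x = x0
    for alpha, beta, e in zip(alphas, betas, errors):
        y = prox_abs(x + e, beta)             # inexact proximal (prediction) step
        x = (1.0 - alpha) * x + alpha * y     # Mann-type averaging (correction) step
    return x

n = 100
alphas = np.full(n, 0.5)                      # averaging weights bounded away from 0 and 1
betas = np.full(n, 1.0)                       # step sizes bounded below
errors = 0.5 ** np.arange(1, n + 1)           # summable error sequence
print(mann_inexact_prox(4.0, alphas, betas, errors))  # close to the minimizer x* = 0 of f
```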

References

1. Eckstein J: Approximate iterations in Bregman-function-based proximal algorithms. Mathematical Programming 1998, 83(1, Ser. A): 113–123. 10.1007/BF02680553

2. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization 1976, 14(5): 877–898. 10.1137/0314056

3. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM Journal on Control and Optimization 1991, 29(2): 403–419. 10.1137/0329022

4. Ceng L-C, Wu S-Y, Yao J-C: New accuracy criteria for modified approximate proximal point algorithms in Hilbert spaces. Taiwanese Journal of Mathematics 2008, 12(7): 1691–1705.

5. Cho YJ, Kang SM, Zhou H: Approximate proximal point algorithms for finding zeroes of maximal monotone operators in Hilbert spaces. Journal of Inequalities and Applications 2008, 2008: 10 pages.

6. Qin X, Kang SM, Cho YJ: Approximating zeros of monotone operators by proximal point algorithms. Journal of Global Optimization 2010, 46(1): 75–87. 10.1007/s10898-009-9410-6

7. Ceng L-C, Yao J-C: On the convergence analysis of inexact hybrid extragradient proximal point algorithms for maximal monotone operators. Journal of Computational and Applied Mathematics 2008, 217(2): 326–338. 10.1016/j.cam.2007.02.010

8. Ceng L-C, Yao J-C: Generalized implicit hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. Taiwanese Journal of Mathematics 2008, 12(3): 753–766.

9. Ceng LC, Petruşel A, Wu SY: On hybrid proximal-type algorithms in Banach spaces. Taiwanese Journal of Mathematics 2008, 12(8): 2009–2029.

10. Ceng L-C, Huang S, Liou Y-C: Hybrid proximal point algorithms for solving constrained minimization problems in Banach spaces. Taiwanese Journal of Mathematics 2009, 13(2): 805–820.

11. Browder FE: Nonlinear operators and nonlinear equations of evolution in Banach spaces. Proceedings of Symposia in Pure Mathematics 1976, 18: 78–81.

12. Bruck RE Jr.: A strongly convergent iterative solution of $0 \in U(x)$ for a maximal monotone operator $U$ in Hilbert space. Journal of Mathematical Analysis and Applications 1974, 48: 114–126. 10.1016/0022-247X(74)90219-4

13. Burachik RS, Iusem AN, Svaiter BF: Enlargement of monotone operators with applications to variational inequalities. Set-Valued Analysis 1997, 5(2): 159–180. 10.1023/A:1008615624787

14. Censor Y, Zenios SA: Proximal minimization algorithm with D-functions. Journal of Optimization Theory and Applications 1992, 73(3): 451–464. 10.1007/BF00940051

15. Cohen G: Auxiliary problem principle extended to variational inequalities. Journal of Optimization Theory and Applications 1988, 59(2): 325–333.

16. Deimling K: Zeros of accretive operators. Manuscripta Mathematica 1974, 13: 365–374. 10.1007/BF01171148

17. Dembo RS, Eisenstat SC, Steihaug T: Inexact Newton methods. SIAM Journal on Numerical Analysis 1982, 19(2): 400–408. 10.1137/0719025

18. Han D, He B: A new accuracy criterion for approximate proximal point algorithms. Journal of Mathematical Analysis and Applications 2001, 263(2): 343–354. 10.1006/jmaa.2001.7535

19. Kamimura S, Takahashi W: Approximating solutions of maximal monotone operators in Hilbert spaces. Journal of Approximation Theory 2000, 106(2): 226–240. 10.1006/jath.2000.3493

20. Kamimura S, Takahashi W: Strong convergence of a proximal-type algorithm in a Banach space. SIAM Journal on Optimization 2002, 13(3): 938–945. 10.1137/S105262340139611X

21. Lan KQ, Wu JH: Convergence of approximants for demicontinuous pseudo-contractive maps in Hilbert spaces. Nonlinear Analysis 2002, 49(6): 737–746. 10.1016/S0362-546X(01)00130-4

22. Nevanlinna O, Reich S: Strong convergence of contraction semigroups and of iterative methods for accretive operators in Banach spaces. Israel Journal of Mathematics 1979, 32(1): 44–58. 10.1007/BF02761184

23. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bulletin of the American Mathematical Society 1967, 73: 591–597. 10.1090/S0002-9904-1967-11761-0

24. Pazy A: Remarks on nonlinear ergodic theory in Hilbert space. Nonlinear Analysis 1979, 3(6): 863–871. 10.1016/0362-546X(79)90053-1

25. Qin X, Su Y: Approximation of a zero point of accretive operator in Banach spaces. Journal of Mathematical Analysis and Applications 2007, 329(1): 415–424. 10.1016/j.jmaa.2006.06.067

26. Solodov MV, Svaiter BF: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Mathematics of Operations Research 2000, 25(2): 214–230. 10.1287/moor.25.2.214.12222

27. Teboulle M: Convergence of proximal-like algorithms. SIAM Journal on Optimization 1997, 7(4): 1069–1083. 10.1137/S1052623495292130

28. Tan K-K, Xu HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. Journal of Mathematical Analysis and Applications 1993, 178(2): 301–308. 10.1006/jmaa.1993.1309

29. Verma RU: Rockafellar's celebrated theorem based on A-maximal monotonicity design. Applied Mathematics Letters 2008, 21(4): 355–360. 10.1016/j.aml.2007.05.004

30. Xu H-K: Iterative algorithms for nonlinear operators. Journal of the London Mathematical Society 2002, 66(1): 240–256. 10.1112/S0024610702003332

31. Cho YJ, Zhou H, Guo G: Weak and strong convergence theorems for three-step iterations with errors for asymptotically nonexpansive mappings. Computers & Mathematics with Applications 2004, 47(4–5): 707–717. 10.1016/S0898-1221(04)90058-2


Acknowledgment

This research was partially supported by the Teaching and Research Award Fund for Outstanding Young Teachers in Higher Education Institutions of MOE, China and the Dawn Program Foundation in Shanghai.

Author information

Correspondence to Eskandar Naraghirad.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

