Open Access

On Two Iterative Methods for Mixed Monotone Variational Inequalities

Fixed Point Theory and Applications 2010, 2010:291851

DOI: 10.1155/2010/291851

Received: 22 September 2009

Accepted: 23 November 2009

Published: 7 December 2009

Abstract

A mixed monotone variational inequality (MMVI) problem in a Hilbert space H is formulated as finding a point u* ∈ H such that ⟨Tu*, v − u*⟩ + φ(v) − φ(u*) ≥ 0 for all v ∈ H, where T is a monotone operator and φ is a proper, convex, and lower semicontinuous function on H. Iterative algorithms are usually applied to find a solution of an MMVI problem. We show that the iterative algorithm introduced in the work of Wang et al. (2001) has in general weak convergence in an infinite-dimensional space, and that the algorithm introduced in the paper of Noor (2001) fails in general to converge to a solution.

1. Introduction

Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖, and let T be an operator with domain D(T) and range R(T) in H. Recall that T is monotone if its graph G(T) = {(u, Tu) : u ∈ D(T)} is a monotone set in H × H. This means that T is monotone if and only if

(1.1) ⟨Tu − Tv, u − v⟩ ≥ 0 for all u, v ∈ D(T).

A monotone operator T is maximal monotone if its graph is not properly contained in the graph of any other monotone operator on H.

Let φ : H → (−∞, +∞] be a proper, convex, and lower semicontinuous functional. The subdifferential ∂φ of φ is defined by

(1.2) ∂φ(x) = {ξ ∈ H : φ(y) ≥ φ(x) + ⟨ξ, y − x⟩ for all y ∈ H}.

It is well known (cf. [1]) that ∂φ is a maximal monotone operator.

The mixed monotone variational inequality (MMVI) problem is to find a point u* ∈ H with the property

(1.3) ⟨Tu*, v − u*⟩ + φ(v) − φ(u*) ≥ 0 for all v ∈ H,

where T is a monotone operator and φ is a proper, convex, and lower semicontinuous function on H.

If one takes φ to be the indicator function of a closed convex subset C of H,

(1.4) φ(u) = δ_C(u) = 0 if u ∈ C, and +∞ otherwise,

then the MMVI (1.3) is reduced to the classical variational inequality (VI):

(1.5) find u* ∈ C such that ⟨Tu*, v − u*⟩ ≥ 0 for all v ∈ C.

Recall that the resolvent of a monotone operator T is defined as

(1.6) J_ρ^T = (I + ρT)^{−1}, ρ > 0.

If T = ∂φ, we write J_ρ^φ for J_ρ^{∂φ}. It is known that T is monotone if and only if, for each ρ > 0, the resolvent J_ρ^T is nonexpansive, and T is maximal monotone if and only if, for each ρ > 0, the resolvent J_ρ^T is nonexpansive and defined on the entire space H. Recall that a self-mapping S of a closed convex subset C of H is said to be

(i) nonexpansive if ‖Sx − Sy‖ ≤ ‖x − y‖ for all x, y ∈ C;

(ii) firmly nonexpansive if ‖Sx − Sy‖² ≤ ⟨Sx − Sy, x − y⟩ for all x, y ∈ C. Equivalently, S is firmly nonexpansive if and only if 2S − I is nonexpansive. It is known that each resolvent of a monotone operator is firmly nonexpansive.
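As a concrete illustration (not taken from the paper), on H = ℝ the resolvent of ∂φ for φ(u) = |u| is the familiar soft-thresholding map, and its firm nonexpansiveness can be checked numerically. A minimal sketch, assuming this particular φ:

```python
# Sketch: the resolvent (I + rho * d|.|)^(-1) of phi(u) = |u| on R is
# soft-thresholding, J_rho(x) = sign(x) * max(|x| - rho, 0).
# We verify firm nonexpansiveness: |Sx - Sy|^2 <= <Sx - Sy, x - y>.
import math
import random

def soft_threshold(x, rho):
    """Resolvent of the subdifferential of the absolute-value function."""
    return math.copysign(max(abs(x) - rho, 0.0), x)

random.seed(0)
rho = 0.7
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    sx, sy = soft_threshold(x, rho), soft_threshold(y, rho)
    # firm nonexpansiveness in one dimension: (Sx - Sy)^2 <= (Sx - Sy)(x - y)
    assert (sx - sy) ** 2 <= (sx - sy) * (x - y) + 1e-12
print("firmly nonexpansive on all sampled pairs")
```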

We use Fix(S) to denote the set of fixed points of S; that is, Fix(S) = {x ∈ C : Sx = x}.

Variational inequalities have been studied extensively; see the monographs by Baiocchi and Capelo [2], Cottle et al. [3], Glowinski et al. [4], Giannessi and Maugeri [5], and Kinderlehrer and Stampacchia [6].

Iterative methods play an important role in solving variational inequalities. For example, if T is a single-valued, strongly monotone (i.e., ⟨Tu − Tv, u − v⟩ ≥ η‖u − v‖² for all u, v and some η > 0), and Lipschitzian (i.e., ‖Tu − Tv‖ ≤ L‖u − v‖ for some L > 0 and all u, v) operator on H, then the sequence {u_n} generated by the iterative algorithm

(1.7) u_{n+1} = P_C(I − ρT)u_n, n ≥ 0,

where I is the identity operator, P_C is the metric projection of H onto C, and the initial guess u_0 ∈ C is chosen arbitrarily, converges strongly to the unique solution of VI (1.5) provided ρ > 0 is small enough (namely, 0 < ρ < 2η/L²).
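The projection algorithm (1.7) can be sketched in the simplest setting H = ℝ; the operator T(u) = 2u − 1 and the set C = [1, 2] below are illustrative choices, not from the paper (here η = L = 2, so any 0 < ρ < 1 works):

```python
# Sketch of the projection algorithm u_{n+1} = P_C(u_n - rho * T(u_n))
# on H = R with C = [1, 2] and the strongly monotone, Lipschitz T(u) = 2u - 1.

def project(u, lo=1.0, hi=2.0):
    """Metric projection of R onto the interval C = [lo, hi]."""
    return min(max(u, lo), hi)

def T(u):
    return 2.0 * u - 1.0

u = 5.0          # arbitrary initial guess
rho = 0.4        # step size below 2*eta/L**2 = 1
for _ in range(100):
    u = project(u - rho * T(u))

# The unique VI solution on C = [1, 2] is u* = 1, since T(1) = 1 > 0
# and v - 1 >= 0 for every v in C.
print(round(u, 6))
```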

2. An Inexact Implicit Method

In this section we study the convergence of an inexact implicit method for solving the MMVI (1.3) introduced by Wang et al. [7] (see also [8, 9] for related work).

Let and be two sequences of nonnegative numbers such that

(2.1)

Let u_0 ∈ H be an arbitrary initial guess. The inexact implicit method introduced in [7] generates a sequence {u_n} defined in the following way. Once u_n has been constructed, the next iterate u_{n+1} is implicitly constructed satisfying the equation

(2.2)

where is a sequence of nonnegative numbers such that

(2.3)

for , and for and ,

(2.4)

and where is such that

(2.5)
with given as follows:
(2.6)
We note that u* is a solution of the MMVI (1.3) if and only if, for each ρ > 0, u* satisfies the fixed point equation
(2.7) u* = J_ρ^φ(u* − ρTu*).

Before discussing the convergence of the implicit algorithm (2.2), we look at a special case of (2.2), where T = 0. In this case, the MMVI (1.3) reduces to the problem of finding a u* ∈ H such that

(2.8) φ(u*) ≤ φ(v) for all v ∈ H;

in other words, finding an absolute minimizer of φ over H. This is equivalent to solving the inclusion

(2.9) 0 ∈ ∂φ(u*),

and the algorithm (2.2) is thus reduced to a special case of the Eckstein-Bertsekas algorithm [10]

(2.10) x_{n+1} = (1 − ρ_n)x_n + ρ_n w_n, where ‖w_n − (I + c_n ∂φ)^{−1} x_n‖ ≤ ε_n,

with relaxation parameters ρ_n ∈ (0, 2) and Σ_n ε_n < ∞. If ρ_n ≡ 1 and ε_n ≡ 0, then algorithm (2.2) is reduced to a special case of Rockafellar's proximal point algorithm [11]

(2.11) x_{n+1} = (I + c_n ∂φ)^{−1} x_n, n ≥ 0.

Rockafellar's proximal point algorithm for finding a zero of a maximal monotone operator has been investigated extensively; see [12–14] and the references therein.
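A minimal sketch of the proximal point iteration (2.11), under the illustrative assumption that the operator is ∂φ for φ(u) = |u| on ℝ (so the resolvent is soft-thresholding and the unique zero is 0):

```python
# Sketch of Rockafellar's proximal point algorithm x_{n+1} = (I + c*A)^(-1) x_n
# for the maximal monotone operator A = d|.| on R, whose unique zero is 0.
import math

def resolvent(x, c):
    """(I + c * d|.|)^(-1), i.e. soft-thresholding with threshold c."""
    return math.copysign(max(abs(x) - c, 0.0), x)

x = 3.0
c = 1.0
history = [x]
for _ in range(5):
    x = resolvent(x, c)     # x_{n+1} = J_c x_n
    history.append(x)

print(history)   # the iterates step down to the zero: 3.0, 2.0, 1.0, 0.0, ...
```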

Remark 2.1.

Theorem 5.1 of Wang et al. [7] holds true only in the finite-dimensional setting. This is because, in the infinite-dimensional setting, a bounded sequence fails, in general, to have a norm-convergent subsequence. As a matter of fact, in the infinite-dimensional case, the special case of (2.2) in which T = 0 corresponds to Rockafellar's proximal point algorithm (2.11), which in general fails to converge in the norm topology; see Güler's counterexample [15]. This infinite-dimensionality problem occurred in several papers by Noor (see, e.g., [16–26]).

In the infinite-dimensional setting, whether or not Wang et al.'s implicit algorithm (2.2) converges even in the weak topology remains an open question. We will provide a partial answer by showing that if the operator T is weak-to-strong continuous (i.e., T maps weakly convergent sequences to strongly convergent sequences), then the implicit algorithm (2.2) does converge weakly.

We next collect the (correct) results proved in [7].

Proposition 2.2.

Assume that {u_n} is generated by the implicit algorithm (2.2).

(a) For , is a nondecreasing function of .

(b) If is a solution to the MMVI (1.3), and , then

(2.12)

(c) For any solution to the MMVI (1.3),

(2.13)

where satisfies .

(d) The sequence {u_n} is bounded.

(e) There is a such that

Since algorithm (2.2) is, in general, not strongly convergent, we turn to investigate its weak convergence. It is, however, unclear whether the algorithm is weakly convergent when the space is infinite dimensional. We present a partial answer below. But first recall that an operator T is said to be weak-to-strong continuous if the weak convergence of a sequence {x_n} to a point x implies the strong convergence of {Tx_n} to Tx.
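The gap between weak and strong convergence that makes this assumption nontrivial can be seen concretely in ℓ²: the orthonormal basis {e_n} converges weakly to 0 (its inner products against any fixed vector vanish) but not strongly (‖e_n‖ = 1 for every n). A small sketch, modeling ℓ² vectors by finite lists padded with zeros:

```python
# Sketch: in l^2, e_n -> 0 weakly but ||e_n|| = 1, so not strongly.

def e(n, dim):
    """n-th standard basis vector, truncated to `dim` coordinates."""
    v = [0.0] * dim
    v[n] = 1.0
    return v

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return inner(u, u) ** 0.5

dim = 200
y = [1.0 / (k + 1) for k in range(dim)]   # a fixed vector with decaying entries

# <e_n, y> = 1/(n+1) -> 0, witnessing weak convergence to 0 ...
pairings = [inner(e(n, dim), y) for n in (10, 50, 150)]
# ... while ||e_n - 0|| = 1 for every n, so there is no strong convergence.
norms = [norm(e(n, dim)) for n in (10, 50, 150)]
print(pairings, norms)
```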

Theorem 2.3.

Assume that {u_n} is generated by algorithm (2.2). If T is weak-to-strong continuous, then {u_n} converges weakly to a solution of the MMVI (1.3).

Proof.

Putting
(2.14)
we have
(2.15)
It follows that
(2.16)
This implies that
(2.17)
So, if a subsequence converges weakly (hence its image under T converges strongly, since T is weak-to-strong continuous), it follows that
(2.18)

Thus, the weak limit is a solution.

To prove that the entire sequence {u_n} is weakly convergent, it suffices to show that any two weakly convergent subsequences have the same weak limit. Passing to further subsequences if necessary, we may assume that and both exist.

For , since strongly and since and are bounded, there exists an integer such that, for
(2.19)
It follows that for ,
(2.20)
This implies
(2.21)
However,
(2.22)
It follows that
(2.23)
Similarly, by repeating the argument above we obtain
(2.24)

Adding these inequalities, we conclude that the two weak limits coincide.

3. A Counterexample

It is not hard to see that u* solves MMVI (1.3) if and only if u* solves the inclusion

(3.1) 0 ∈ Tu* + ∂φ(u*),

which is in turn equivalent to the fixed point equation

(3.2) u* = J_ρ(u* − ρTu*), ρ > 0,

where J_ρ is the resolvent of ∂φ defined by

(3.3) J_ρ = (I + ρ∂φ)^{−1}.
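The fixed point characterization (3.2) can be tested numerically on H = ℝ; the choices T(u) = u − 2 and φ(u) = |u| below are illustrative, not from the paper. Here 0 ∈ Tu* + ∂φ(u*) gives u* = 1 (since 1 − 2 + 1 = 0 and ∂|u|(1) = {1}):

```python
# Sketch: iterate the map in (3.2), u <- J_rho(u - rho*T(u)), with
# T(u) = u - 2 (monotone) and phi(u) = |u|, so J_rho is soft-thresholding.
import math

def J(x, rho):
    """Resolvent of d|.|: soft-thresholding with threshold rho."""
    return math.copysign(max(abs(x) - rho, 0.0), x)

def T(u):
    return u - 2.0

rho = 0.5
u = -7.0                          # arbitrary initial guess
for _ in range(60):
    u = J(u - rho * T(u), rho)    # one step of the fixed point iteration

print(round(u, 6))                # converges to the MMVI solution u* = 1
```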

Recall that if φ is the indicator function of a closed convex subset C of H,

(3.4) φ(u) = δ_C(u) = 0 if u ∈ C, and +∞ otherwise,

then MMVI (1.3) is reduced to the classical variational inequality (VI)

(3.5) find u* ∈ C such that ⟨Tu*, v − u*⟩ ≥ 0 for all v ∈ C.

In [27], Noor introduced a new iterative algorithm [27, Algorithm  3.3, page 36] as follows. Given , compute by the iterative scheme

(3.6)

where and are constant, and is given by

(3.7)

Noor [27] proved a convergence result for his algorithm (3.6) as follows.

Theorem 3.1 (see [27, page 38]).

Let be a finite-dimensional Hilbert space. Then the sequence generated by algorithm (3.6) converges to a solution of MMVI (1.3).

We found, however, that the conclusion stated in the above theorem is incorrect. It is true that u* solves MMVI (1.3) if and only if u* solves the fixed point equation (3.2). The mistake in Noor's argument is the claim that u* solves MMVI (1.3) if and only if u* solves the following iterated fixed point equation:

(3.8)

As a matter of fact, the two fixed point equations (3.2) and (3.8) are not equivalent, as shown in the following counterexample, which also shows that the convergence result of Noor [27] is incorrect.
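Before the concrete counterexample, a toy illustration of the pitfall (with a map unrelated to Noor's setting): for the nonexpansive map S(u) = −u on ℝ, the equation u = S(u) has only the solution 0, while u = S(S(u)) holds for every u, so a fixed point equation and its iterated version can have very different solution sets.

```python
# Toy sketch: Fix(S) versus Fix(S o S) for the nonexpansive map S(u) = -u.

def S(u):
    return -u

fixed_of_S = [u for u in range(-3, 4) if S(u) == u]
fixed_of_SS = [u for u in range(-3, 4) if S(S(u)) == u]
print(fixed_of_S)    # only 0 solves u = S(u)
print(fixed_of_SS)   # every u solves u = S(S(u))
```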

Example 3.2.

Take . Define and by
(3.9)
Notice that (Clarke [28])
(3.10)
It is easily seen that is the unique solution to the MMVI
(3.11)
Observe that equation (3.8) is equivalent to the fixed point equation
(3.12)
Now since for all , we get that solves (3.12) if and only if
(3.13)
where
(3.14)
It follows from (3.13) that . Hence
(3.15)
But, since
(3.16)
we deduce that the solution set of the fixed point equation (3.12) is given by
(3.17)

(We therefore conclude that equation (3.8) is not equivalent to MMVI (1.3), as claimed by Noor [27].)

Now take the initial guess for . Then and we have that algorithm (3.6) generates a constant sequence for all . However, is not a solution of MMVI (3.11). This shows that algorithm (3.6) may generate a sequence that fails to converge to a solution of MMVI (1.3) and Noor's result in [27] is therefore false.

Remark 3.3.

Noor has repeated the above mistake in a number of his recent articles. A partial search found that articles [20, 21, 26, 29–32] contain the same error.

Declarations

Acknowledgments

The authors are grateful to the anonymous referees for their comments and suggestions which improved the presentation of this manuscript. This paper is dedicated to Professor Wataru Takahashi on the occasion of his retirement. The second author was supported in part by NSC 97-2628-M-110-003-MY3 and by DGES MTM2006-13997-C02-01.

Authors’ Affiliations

(1)
Department of Mathematics, East China University of Science and Technology
(2)
Department of Applied Mathematics, National Sun Yat-Sen University

References

  1. Brezis H: Operateurs Maximaux Monotones et Semi-Groupes de Contraction dans les Espaces de Hilbert. North-Holland, Amsterdam, The Netherlands; 1973.
  2. Baiocchi C, Capelo A: Variational and Quasivariational Inequalities: Applications to Free Boundary Problems, A Wiley-Interscience Publication. John Wiley & Sons, New York, NY, USA; 1984:ix+452.
  3. Cottle RW, Giannessi F, Lions JL: Variational Inequalities and Complementarity Problems: Theory and Applications. John Wiley & Sons, New York, NY, USA; 1980.
  4. Glowinski R, Lions J-L, Trémolières R: Numerical Analysis of Variational Inequalities, Studies in Mathematics and Its Applications. Volume 8. North-Holland, Amsterdam, The Netherlands; 1981:xxix+776.
  5. Giannessi F, Maugeri A: Variational Inequalities and Network Equilibrium Problems. Plenum Press, New York, NY, USA; 1995.
  6. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications, Pure and Applied Mathematics. Volume 88. Academic Press, New York, NY, USA; 1980:xiv+313.
  7. Wang SL, Yang H, He B: Inexact implicit method with variable parameter for mixed monotone variational inequalities. Journal of Optimization Theory and Applications 2001, 111(2):431–443. 10.1023/A:1011942620208
  8. He B: Inexact implicit methods for monotone general variational inequalities. Mathematical Programming 1999, 86(1):199–217. 10.1007/s101070050086
  9. Han D, He B: A new accuracy criterion for approximate proximal point algorithms. Journal of Mathematical Analysis and Applications 2001, 263(2):343–354. 10.1006/jmaa.2001.7535
  10. Eckstein J, Bertsekas DP: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming 1992, 55(3):293–318. 10.1007/BF01581204
  11. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization 1976, 14(5):877–898. 10.1137/0314056
  12. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Mathematical Programming, Series A 2000, 87(1):189–202.
  13. Xu H-K: Iterative algorithms for nonlinear operators. Journal of the London Mathematical Society 2002, 66(1):240–256. 10.1112/S0024610702003332
  14. Marino G, Xu H-K: Convergence of generalized proximal point algorithms. Communications on Pure and Applied Analysis 2004, 3(4):791–808.
  15. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM Journal on Control and Optimization 1991, 29(2):403–419. 10.1137/0329022
  16. Noor MA: Monotone mixed variational inequalities. Applied Mathematics Letters 2001, 14(2):231–236. 10.1016/S0893-9659(00)00141-5
  17. Noor MA: An implicit method for mixed variational inequalities. Applied Mathematics Letters 1998, 11(4):109–113. 10.1016/S0893-9659(98)00066-4
  18. Noor MA: A modified projection method for monotone variational inequalities. Applied Mathematics Letters 1999, 12(5):83–87. 10.1016/S0893-9659(99)00061-0
  19. Noor MA: Some iterative techniques for general monotone variational inequalities. Optimization 1999, 46(4):391–401. 10.1080/02331939908844464
  20. Noor MA: Some algorithms for general monotone mixed variational inequalities. Mathematical and Computer Modelling 1999, 29(7):1–9. 10.1016/S0895-7177(99)00058-8
  21. Noor MA: Splitting algorithms for general pseudomonotone mixed variational inequalities. Journal of Global Optimization 2000, 18(1):75–89. 10.1023/A:1008322118873
  22. Noor MA: An iterative method for general mixed variational inequalities. Computers & Mathematics with Applications 2000, 40(2–3):171–176. 10.1016/S0898-1221(00)00151-6
  23. Noor MA: Splitting methods for pseudomonotone mixed variational inequalities. Journal of Mathematical Analysis and Applications 2000, 246(1):174–188. 10.1006/jmaa.2000.6776
  24. Noor MA: A class of new iterative methods for general mixed variational inequalities. Mathematical and Computer Modelling 2000, 31(13):11–19. 10.1016/S0895-7177(00)00108-4
  25. Noor MA: Solvability of multivalued general mixed variational inequalities. Journal of Mathematical Analysis and Applications 2001, 261(1):390–402. 10.1006/jmaa.2001.7533
  26. Noor MA, Al-Said EA: Wiener-Hopf equations technique for quasimonotone variational inequalities. Journal of Optimization Theory and Applications 1999, 103(3):705–714. 10.1023/A:1021796326831
  27. Noor MA: Iterative schemes for quasimonotone mixed variational inequalities. Optimization 2001, 50(1–2):29–44. 10.1080/02331930108844552
  28. Clarke FH: Optimization and Nonsmooth Analysis, Classics in Applied Mathematics. Volume 5. 2nd edition. SIAM, Philadelphia, Pa, USA; 1990:xii+308.
  29. Noor MA: An extraresolvent method for monotone mixed variational inequalities. Mathematical and Computer Modelling 1999, 29(3):95–100. 10.1016/S0895-7177(99)00033-3
  30. Noor MA: A modified extragradient method for general monotone variational inequalities. Computers & Mathematics with Applications 1999, 38(1):19–24. 10.1016/S0898-1221(99)00164-9
  31. Noor MA: Projection type methods for general variational inequalities. Soochow Journal of Mathematics 2002, 28(2):171–178.
  32. Noor MA: Modified projection method for pseudomonotone variational inequalities. Applied Mathematics Letters 2002, 15(3):315–320. 10.1016/S0893-9659(01)00137-9

Copyright

© Xiwen Lu et al. 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.