Conversion of algorithms by releasing projection for minimization problems
© Yao et al.; licensee Springer 2013
Received: 24 September 2012
Accepted: 15 April 2013
Published: 29 April 2013
Projection methods for solving minimization problems have been considered extensively in many practical settings, for example, the least-squares problem. However, the computational cost of the projection may seriously affect the efficiency of the method. The purpose of this paper is to construct two algorithms, by releasing the projection, for solving the minimization problem whose feasibility set is the intersection of the set of fixed points of a nonexpansive mapping and the solution set of an equilibrium problem.
MSC: 47J05, 47J25, 47H09, 65J15.
where C is a nonempty closed convex subset of a real Hilbert space H, F : C × C → ℝ is a bifunction and A : C → H is an α-inverse-strongly monotone mapping. The reasons why we focus on the minimization problem (1.1) are mainly twofold.
Reason 2 The problem (1.2) is very general in the sense that it includes optimization problems, variational inequalities, minimax problems and the Nash equilibrium problem in noncooperative games as special cases. At the same time, fixed point algorithms for nonexpansive mappings have been investigated extensively due to their applications in a variety of applied areas such as inverse problems, partial differential equations, image recovery and signal processing.
Based on the above facts, it is an interesting topic to construct algorithms for solving these problems. We next briefly review some historical approaches related to the problems (1.2) and (1.4).
Subsequently, algorithms constructed for solving the equilibrium problems and fixed point problems have been further developed by some authors. For some works related to the equilibrium problem, fixed point problems and the variational inequality problem, please see Blum and Oettli, Chang et al., Chantarangsi et al., Cianciaruso et al., Colao et al. [12, 13], Fang et al., Jung, Mainge, Mainge and Moudafi, Moudafi and Théra, Nadezhkina and Takahashi, Noor et al., Peng et al., Peng and Yao, Plubtieng and Punpaeng, Takahashi and Takahashi, Yao et al., Yao and Liou and the references therein.
Remark 1.1 It is well known that projection methods are used extensively in a variety of methods in optimization theory. Apart from theoretical interest, the main advantage of projection methods, which makes them successful in real-world applications, is computational. The field of projection methods is vast; see, e.g., Bauschke and Borwein, Combettes, and Combettes and Pesquet. If the set C is simple enough that the projection onto it is easily computed, then such methods are particularly useful; but if C is a general closed convex set, then a minimal distance problem has to be solved in order to obtain the next iterate. This may seriously affect the efficiency of the method. Hence, it is of considerable interest to solve (1.1) without involving the projection.
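The cost contrast in the remark can be made concrete. In the sketch below (our illustration, not from the paper, with helper names of our own choosing), projecting onto a single half-space is a one-line closed form, while projecting onto an intersection of half-spaces already requires an iterative scheme, such as the alternating projections (POCS) method surveyed by Bauschke and Borwein:

```python
import numpy as np

# Closed-form projection onto the half-space {x : <a, x> <= b}.
def project_halfspace(x, a, b):
    a = np.asarray(a, dtype=float)
    viol = np.dot(a, x) - b
    if viol <= 0.0:
        return x                        # already feasible
    return x - viol * a / np.dot(a, a)  # step back along the normal

# For an intersection of half-spaces there is no closed form in general:
# one must iterate, paying one projection per set per sweep.
def pocs(x, halfspaces, sweeps=100):
    x = np.asarray(x, dtype=float)
    for _ in range(sweeps):
        for a, b in halfspaces:
            x = project_halfspace(x, a, b)
    return x
```

For instance, `pocs([2, 2], [((1, 0), 1), ((0, 1), 0)])` lands on the point (1, 0) of the intersection {x1 ≤ 1} ∩ {x2 ≤ 0}, but only after sweeping through the constraints; this per-iterate subproblem is exactly what the algorithms of this paper avoid.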
In particular, for a special choice in the algorithms, the net and the sequences converge in norm to a solution of the minimization problem (1.1). It should be pointed out that our suggested algorithms solve the minimization problem (1.1) without involving the metric projection.
Let C be a nonempty closed convex subset of a real Hilbert space H. Recall that a mapping A : C → H is called α-inverse-strongly monotone if there exists a positive real number α such that ⟨Ax − Ay, x − y⟩ ≥ α‖Ax − Ay‖² for all x, y ∈ C. It is clear that any α-inverse-strongly monotone mapping is monotone and (1/α)-Lipschitz continuous. A mapping S : C → C is said to be nonexpansive if ‖Sx − Sy‖ ≤ ‖x − y‖ for all x, y ∈ C. Denote the set of fixed points of S by Fix(S).
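A quick numerical sanity check of these definitions, using the simple example A = βI (our choice of example, not from the paper): βI is (1/β)-inverse-strongly monotone, and the implied (1/α)-Lipschitz bound then holds as well.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.5
A = lambda x: beta * x   # A = beta*I is (1/beta)-inverse-strongly monotone
alpha = 1.0 / beta

def ism_holds(A, alpha, x, y):
    # <Ax - Ay, x - y> >= alpha * ||Ax - Ay||^2 (small slack for rounding)
    d = A(x) - A(y)
    return np.dot(d, x - y) >= alpha * np.dot(d, d) - 1e-12

def lipschitz_bound(A, alpha, x, y):
    # inverse-strong monotonicity implies ||Ax - Ay|| <= (1/alpha) ||x - y||
    return np.linalg.norm(A(x) - A(y)) <= (1.0 / alpha) * np.linalg.norm(x - y) + 1e-12

checks = all(
    ism_holds(A, alpha, x, y) and lipschitz_bound(A, alpha, x, y)
    for x, y in (rng.standard_normal((2, 3)) for _ in range(100))
)
```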
(H1) F(x, x) = 0 for all x ∈ C;
(H2) F is monotone, i.e., F(x, y) + F(y, x) ≤ 0 for all x, y ∈ C;
(H3) for each x, y, z ∈ C, lim sup_{t→0+} F(tz + (1 − t)x, y) ≤ F(x, y);
(H4) for each x ∈ C, y ↦ F(x, y) is convex and lower semicontinuous.
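As a concrete illustration of these conditions (an example of ours, not from the paper), the bifunction F(x, y) = y² − x² on C = ℝ satisfies all four, and its equilibrium point is x = 0; a numerical spot-check on random samples:

```python
import numpy as np

# Example bifunction on C = R: F(x, y) = y^2 - x^2.
F = lambda x, y: y**2 - x**2
rng = np.random.default_rng(1)
xs, ys, zs = rng.uniform(-5, 5, (3, 200))

h1 = np.allclose(F(xs, xs), 0.0)               # (H1): F(x, x) = 0
h2 = np.all(F(xs, ys) + F(ys, xs) <= 1e-12)    # (H2): monotonicity
# (H3): F(t*z + (1-t)*x, y) <= F(x, y) as t -> 0+, checked at small t
t = 1e-8
h3 = np.all(F(t * zs + (1 - t) * xs, ys) <= F(xs, ys) + 1e-5)
# (H4): y -> F(x, y) convex, checked via the midpoint inequality
h4 = np.all(F(xs, (ys + zs) / 2) <= (F(xs, ys) + F(xs, zs)) / 2 + 1e-12)
```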
We need the following lemmas for proving our main results.
Lemma 2.1 (Combettes and Hirstoaga) Let F : C × C → ℝ satisfy (H1)-(H4). For r > 0 and x ∈ H, define the resolvent T_r : H → C by T_r(x) = {z ∈ C : F(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C}. Then:
T_r is single-valued and firmly nonexpansive, i.e., for any x, y ∈ H, ‖T_r x − T_r y‖² ≤ ⟨T_r x − T_r y, x − y⟩;
EP(F) is closed and convex and EP(F) = Fix(T_r).
Lemma 2.2 ()
In particular, if λ ∈ (0, 2α], then I − λA is nonexpansive.
Lemma 2.3 ()
Let C be a closed convex subset of a real Hilbert space H, and let S : C → C be a nonexpansive mapping. Then the mapping I − S is demiclosed. That is, if {x_n} is a sequence in C such that x_n → x weakly and (I − S)x_n → y strongly, then (I − S)x = y.
Lemma 2.4 ()
3 Main results
In this section, we convert algorithms (1.5) and (1.6) by releasing the projection and construct two algorithms for finding the minimum norm element of Γ.
This indicates that the mapping is a contraction. By the Banach contraction principle, it has a unique fixed point in C. Hence, (3.2) is well defined.
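The Banach contraction principle invoked here is constructive: iterating any ρ-contraction (ρ < 1) from any starting point converges to its unique fixed point. A minimal numerical sketch with an example map of our own (T(x) = 0.5 cos x, a 0.5-contraction on ℝ, not the paper's mapping):

```python
import numpy as np

# Picard iteration for a contraction: stops when successive iterates agree.
def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

T = lambda x: 0.5 * np.cos(x)   # |T'| <= 0.5, so T is a 0.5-contraction
p1 = fixed_point(T, 0.0)
p2 = fixed_point(T, 100.0)      # different start, same limit: uniqueness
```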
C is a nonempty closed convex subset of a real Hilbert space H;
S : C → C is a nonexpansive mapping, A : C → H is an α-inverse-strongly monotone mapping and f : C → C is a ρ-contraction;
F : C × C → ℝ is a bifunction which satisfies conditions (H1)-(H4);
In order to prove our first main result, we need the following propositions.
Proposition 3.1 The net generated by the implicit method (3.2) is bounded.
So, the net under consideration is bounded; hence the related nets appearing in (3.2) are also bounded. This completes the proof. □
Proposition 3.2 The net generated by the implicit method (3.2) is relatively norm compact as t → 0+.
Since the net is bounded, without loss of generality, we may assume that it converges weakly to a point in C. Noticing (3.9), we can use Lemma 2.3 to conclude that this weak limit lies in Fix(S).
Consequently, the weak convergence of the net actually implies strong convergence. This proves the relative norm-compactness of the net as t → 0+. This completes the proof. □
Now we show our first main result.
In particular, for a special choice in the algorithm, the net converges in norm, as t → 0+, to a solution of the minimization problem (1.1).
That is, the limit is the unique fixed point in Γ of the associated contraction. Clearly, this is sufficient to conclude that the entire net converges in norm to it as t → 0+.
Therefore, the limit is a solution of the minimization problem (1.1). This completes the proof. □
Next, we introduce an explicit algorithm for finding a solution of the minimization problem (1.1).
where {α_n} is a real number sequence in (0, 1).
Next, we give our second main result.
Theorem 3.5 Assume that the sequence {α_n} satisfies the conditions α_n → 0, ∑_n α_n = ∞ and ∑_n |α_{n+1} − α_n| < ∞. Then the sequence generated by (3.15) converges strongly to the unique solution of the variational inequality (3.13). In particular, for a special choice in the algorithm, the sequence converges strongly to a solution of the minimization problem (1.1).
Therefore, the sequence is bounded; hence the related sequences appearing in (3.15) are also bounded.
Put , where is the net defined by (3.2). We will finally show that .
We can therefore apply Lemma 2.4 to conclude that .
Finally, for the same special choice, by an argument similar to that in Theorem 3.3, we deduce immediately that the limit is a minimum norm element of Γ. This completes the proof. □
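The flavor of such explicit schemes and their parameter conditions can be illustrated numerically. The sketch below (an illustrative stand-in of ours, not the paper's algorithm (3.15)) runs a Halpern-type iteration x_{n+1} = α_n u + (1 − α_n) S(x_n) with anchor u = 0 and α_n = 1/(n + 2), which satisfies α_n → 0, ∑ α_n = ∞ and ∑ |α_{n+1} − α_n| < ∞. Taking S to be the nonexpansive projection onto the line {x : x1 + x2 = 2}, whose fixed point set is the whole line, the iteration singles out the minimum-norm fixed point (1, 1):

```python
import numpy as np

# Nonexpansive map with many fixed points: projection onto the line x1 + x2 = 2.
def S(x):
    e = np.array([1.0, 1.0])
    return x - ((x @ e - 2.0) / (e @ e)) * e

x = np.array([5.0, -3.0])   # arbitrary starting point
u = np.zeros(2)             # anchor 0 selects the minimum-norm fixed point
for n in range(20000):
    a = 1.0 / (n + 2)       # a_n -> 0, sum a_n = inf, sum |a_{n+1} - a_n| < inf
    x = a * u + (1.0 - a) * S(x)
```

Convergence here is slow (error on the order of 1/n), which is consistent with the role of the step-size conditions: they trade speed for guaranteed strong convergence to the selected point.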
The first author was supported in part by NSFC 11071279 and NSFC 71161001-G0105. The second author was supported in part by NSC 101-2628-E-230-001-MY3.
- Reich S, Xu HK: An iterative approach to a constrained least squares problem. Abstr. Appl. Anal. 2003, 8: 503–512.
- Sabharwal A, Potter LC: Convexly constrained linear inverse problems: iterative least-squares and regularization. IEEE Trans. Signal Process. 1998, 46: 2345–2352. doi:10.1109/78.709518
- Xu HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116: 659–678. doi:10.1023/A:1023073621589
- Yao Y, Liou YC, Yao JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007, 2007: Article ID 64363
- Combettes PL, Hirstoaga A: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.
- Moudafi A: Weak convergence theorems for nonexpansive mappings and equilibrium problems. J. Nonlinear Convex Anal. 2008, 9: 37–43.
- Takahashi S, Takahashi W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331: 506–515. doi:10.1016/j.jmaa.2006.08.036
- Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
- Chang SS, Lee HWJ, Chan CK: A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization. Nonlinear Anal. 2009, 70: 3307–3319. doi:10.1016/j.na.2008.04.035
- Chantarangsi W, Jaiboon C, Kumam P: A viscosity hybrid steepest descent method for generalized mixed equilibrium problems and variational inequalities for relaxed cocoercive mapping in Hilbert spaces. Abstr. Appl. Anal. 2010, 2010: Article ID 390972
- Cianciaruso F, Marino G, Muglia L, Yao Y: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010, 2010: Article ID 383740
- Colao V, Acedo GL, Marino G: An implicit method for finding common solutions of variational inequalities and systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings. Nonlinear Anal. 2009, 71: 2708–2715. doi:10.1016/j.na.2009.01.115
- Colao V, Marino G, Xu HK: An iterative method for finding common solutions of equilibrium and fixed point problems. J. Math. Anal. Appl. 2008, 344: 340–352. doi:10.1016/j.jmaa.2008.02.041
- Fang YP, Huang NJ, Yao JC: Well-posedness by perturbations of mixed variational inequalities in Banach spaces. Eur. J. Oper. Res. 2010, 201: 682–692. doi:10.1016/j.ejor.2009.04.001
- Jung JS: Strong convergence of composite iterative methods for equilibrium problems and fixed point problems. Appl. Math. Comput. 2009, 213: 498–505. doi:10.1016/j.amc.2009.03.048
- Mainge PE: Projected subgradient techniques and viscosity methods for optimization with variational inequality constraints. Eur. J. Oper. Res. 2010, 205: 501–506. doi:10.1016/j.ejor.2010.01.042
- Mainge PE, Moudafi A: Coupling viscosity methods with the extragradient algorithm for solving equilibrium problems. J. Nonlinear Convex Anal. 2008, 9: 283–294.
- Moudafi A, Théra M: Proximal and dynamical approaches to equilibrium problems. Lecture Notes in Economics and Mathematical Systems 477. Springer, Berlin; 1999: 187–201.
- Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. doi:10.1007/s10957-005-7564-z
- Noor MA, Yao Y, Chen R, Liou YC: An iterative method for fixed point problems and variational inequality problems. Math. Commun. 2007, 12: 121–132.
- Peng JW, Wu SY, Yao JC: A new iterative method for finding common solutions of a system of equilibrium problems, fixed-point problems, and variational inequalities. Abstr. Appl. Anal. 2010, 2010: Article ID 428293
- Peng JW, Yao JC: A new hybrid-extragradient method for generalized mixed equilibrium problems and fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12: 1401–1433.
- Plubtieng S, Punpaeng R: A new iterative method for equilibrium problems and fixed point problems of nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2008, 197: 548–558. doi:10.1016/j.amc.2007.07.075
- Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033. doi:10.1016/j.na.2008.02.042
- Yao Y, Cho YJ, Liou YC: Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 2011, 212: 242–250. doi:10.1016/j.ejor.2011.01.042
- Yao Y, Liou YC: Composite algorithms for minimization over the solutions of equilibrium problems and fixed point problems. Abstr. Appl. Anal. 2010, 2010: Article ID 763506
- Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. doi:10.1137/S0036144593251710
- Combettes PL: Strong convergence of block-iterative outer approximation methods for convex optimization. SIAM J. Control Optim. 2000, 38: 538–565. doi:10.1137/S036301299732626X
- Combettes PL, Pesquet JC: Proximal thresholding algorithm for minimization over orthonormal bases. SIAM J. Optim. 2007, 18: 1351–1376.
- Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. doi:10.1023/A:1025407607560
- Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.
- Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. doi:10.1112/S0024610702003332
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.