Iterative methods for constrained convex minimization problem in Hilbert spaces
© Tian and Huang; licensee Springer 2013
- Received: 14 December 2012
- Accepted: 4 April 2013
- Published: 18 April 2013
In this paper, based on Yamada’s hybrid steepest descent method, a general iterative method is proposed for solving a constrained convex minimization problem. It is proved that the sequences generated by the proposed implicit and explicit schemes converge strongly to a solution of the constrained convex minimization problem, which also solves a certain variational inequality.
MSC: 58E35, 47H09, 65J15.
- iterative algorithm
- constrained convex minimization
- nonexpansive mapping
- fixed point
- variational inequality
Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let $C$ be a nonempty, closed and convex subset of $H$. We need some nonlinear operators, which are introduced below.
Let $T, F \colon H \to H$ be nonlinear operators.
$T$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$.
$T$ is Lipschitz continuous if there exists a constant $L > 0$ such that $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in H$.
$F$ is monotone if $\langle x - y, Fx - Fy \rangle \ge 0$ for all $x, y \in H$.
Given a number $\eta > 0$, $F$ is $\eta$-strongly monotone if $\langle x - y, Fx - Fy \rangle \ge \eta\|x - y\|^2$ for all $x, y \in H$.
Given a number $\nu > 0$, $F$ is $\nu$-inverse strongly monotone ($\nu$-ism) if $\langle x - y, Fx - Fy \rangle \ge \nu\|Fx - Fy\|^2$ for all $x, y \in H$.
Inverse strongly monotone operators have been studied widely (see [1–3]) and applied to practical problems in various fields, for instance to traffic assignment problems (see [4, 5]).
$T \colon H \to H$ is said to be an averaged mapping if $T = (1-\alpha)I + \alpha S$, where $\alpha$ is a number in $(0,1)$ and $S \colon H \to H$ is nonexpansive. In particular, projections are $\frac{1}{2}$-averaged mappings.
Consider the constrained convex minimization problem
$$\min_{x \in C} f(x), \qquad (1.1)$$
where $f \colon C \to \mathbb{R}$ is a real-valued convex function. Assume that the minimization problem (1.1) is consistent, and let $S$ denote its solution set. It is known that the gradient-projection algorithm (GPA) is one of the powerful methods for solving the minimization problem (1.1) (see [11–18]). However, (1.1) may have more than one solution, so regularization is needed. We can use the idea of regularization to design an iterative algorithm for finding the minimum-norm solution of (1.1).
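To make the GPA concrete, here is a minimal numerical sketch of its standard form $x_{n+1} = P_C\bigl(x_n - \gamma\nabla f(x_n)\bigr)$ on a toy instance; the quadratic objective, the unit-ball constraint and the step size below are our own illustrative choices, not data from the paper.

```python
import numpy as np

# Toy instance of problem (1.1): minimize f(x) = 0.5*||x - b||^2
# over the closed unit ball C = {x : ||x|| <= 1}. Both choices are
# illustrative; the paper treats a general convex f and convex C.

b = np.array([2.0, 1.0])           # grad f(x) = x - b
L = 1.0                            # Lipschitz constant of grad f

def grad_f(x):
    return x - b

def proj_C(x):
    """Metric projection onto the unit ball."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

gamma = 1.0 / L                    # any gamma in (0, 2/L)
x = np.zeros(2)
for _ in range(200):
    x = proj_C(x - gamma * grad_f(x))   # gradient-projection step

print(x, b / np.linalg.norm(b))    # both approach the minimizer b/||b||
```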
The minimum-norm solution can be obtained in two steps. First, observing that the regularized gradient $\nabla f_\alpha = \nabla f + \alpha I$ of the regularized problem
$$\min_{x \in C} f_\alpha(x) := f(x) + \frac{\alpha}{2}\|x\|^2 \qquad (1.2)$$
is $(L+\alpha)$-Lipschitzian and $\alpha$-strongly monotone, the mapping $P_C(I - \gamma\nabla f_\alpha)$ is a contraction for suitably small $\gamma > 0$. So, the regularized problem (1.2) has a unique solution, which is denoted $x_\alpha$ and which can be obtained via the Banach contraction principle. Secondly, letting $\alpha \to 0$ yields $x_\alpha \to \hat{x}$ in norm, where $\hat{x}$ is the minimum-norm element of $S$. The following result shows that for suitable choices of $\gamma$ and $\alpha$, the minimum-norm solution can be obtained by a single step.
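The contraction property can be checked by a standard estimate; the display below is our reconstruction from the Lipschitz and strong monotonicity constants above (the paper's exact coefficient did not survive extraction):
$$\|(I - \gamma\nabla f_\alpha)x - (I - \gamma\nabla f_\alpha)y\|^2 \le \bigl(1 - 2\gamma\alpha + \gamma^2(L+\alpha)^2\bigr)\|x - y\|^2,$$
so that, $P_C$ being nonexpansive, $P_C(I - \gamma\nabla f_\alpha)$ is a contraction whenever $0 < \gamma < 2\alpha/(L+\alpha)^2$.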
Theorem 1.1 
for all n;
(and ) as ;
Then $x_n \to \hat{x}$, the minimum-norm solution of (1.1), as $n \to \infty$.
Under the assumptions of Theorem 1.1, the parameter sequence is forced to tend to zero. If instead it is kept constant, only weak convergence is guaranteed, as follows.
Theorem 1.2 
Assume that the minimization problem (1.1) is consistent and that $0 < \gamma < 2/L$. Then $\{x_n\}$ converges weakly to a solution of the minimization problem (1.1).
In this paper, we introduce a modification of algorithm (1.4) based on Yamada’s hybrid steepest descent method [10]. It is proved that the sequence generated by our proposed algorithm converges strongly to a minimizer of (1.1), which is also a solution of a certain variational inequality.
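For orientation, Yamada's hybrid steepest descent method [10] generates, in its basic form,
$$x_{n+1} = Tx_n - \mu\lambda_{n+1}F(Tx_n), \qquad n \ge 0,$$
where $T$ is nonexpansive with $\operatorname{Fix}(T) \neq \emptyset$, $F$ is $k$-Lipschitzian and $\eta$-strongly monotone, $0 < \mu < 2\eta/k^2$, and $\lambda_n \to 0$ with $\sum_n \lambda_n = \infty$ (under additional mild conditions on $\{\lambda_n\}$); the iterates then converge strongly to the unique solution of the variational inequality $\langle F\tilde{x}, x - \tilde{x}\rangle \ge 0$ for all $x \in \operatorname{Fix}(T)$.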
In this section, we introduce some useful properties and lemmas which will be used in the proofs of the main results in the next section.
If $T = (1-\alpha)S + \alpha V$ for some $\alpha \in (0,1)$, and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged.
The composition of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0,1)$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$ (a numerical check follows this lemma).
- (iii) If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then $\bigcap_{i=1}^{N}\operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \cdots T_N)$.
Here, $\operatorname{Fix}(T)$ denotes the set of fixed points of the mapping $T$; that is, $\operatorname{Fix}(T) = \{x \in H : Tx = x\}$.
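As a quick numerical check of the composition rule above: two $\frac{1}{2}$-averaged mappings (for instance, two metric projections) compose to an averaged mapping with constant
$$\alpha = \tfrac{1}{2} + \tfrac{1}{2} - \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{3}{4},$$
so the composite is $\frac{3}{4}$-averaged.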
$T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism;
If $T$ is $\nu$-ism, then for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism;
$T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > \frac{1}{2}$; indeed, for $\alpha \in (0,1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.
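As an illustration of the last equivalence: the projection $P_C$ is $\frac{1}{2}$-averaged, so taking $\alpha = \frac{1}{2}$ shows that $I - P_C$ is $\frac{1}{2\cdot(1/2)}$-ism, that is, $1$-ism:
$$\langle x - y, (I-P_C)x - (I-P_C)y\rangle \ \ge\ \|(I-P_C)x - (I-P_C)y\|^2,$$
which is the familiar fact that the complement of a projection is firmly nonexpansive.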
The so-called demiclosed principle for nonexpansive mappings will often be used.
Lemma 2.3 (Demiclosed Principle)
Let $C$ be a closed and convex subset of a Hilbert space $H$ and let $T \colon C \to C$ be a nonexpansive mapping with $\operatorname{Fix}(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I-T)x_n\}$ converges strongly to $y$, then $(I-T)x = y$. In particular, if $y = 0$, then $x \in \operatorname{Fix}(T)$.
The metric projection $P_C$ is characterized as follows: given $x \in H$, $z = P_C x$ if and only if $z \in C$ and $\langle x - z, y - z\rangle \le 0$ for all $y \in C$.
Lemma 2.5 Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that $a_{n+1} \le (1-\gamma_n)a_n + \gamma_n\delta_n$ for all $n \ge 0$, where $\{\gamma_n\} \subset (0,1)$ and $\{\delta_n\}$ satisfy (i) $\sum_{n=1}^{\infty}\gamma_n = \infty$; (ii) either $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=1}^{\infty}|\gamma_n\delta_n| < \infty$. Then $\lim_{n\to\infty}a_n = 0$.
We adopt the following notation:
$x_n \to x$ means that $\{x_n\}$ converges strongly to $x$;
$x_n \rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$.
Recall that throughout this paper we use $S$ to denote the solution set of the constrained convex minimization problem (1.1).
is continuous with respect to s and .
The following proposition summarizes the properties of the net $\{x_s\}$.
$\{x_s\}$ is bounded for all sufficiently small $s > 0$;
$s \mapsto x_s$ defines a continuous curve from its parameter interval into $C$.
where is a constant. It is clear that , i.e., .
where and .
Hence, $\{x_s\}$ is bounded.
where and .
Since $\{x_s\}$ is bounded and depends continuously on $s$, the map $s \mapsto x_s$ defines a continuous curve from its parameter interval into $C$. □
The following theorem shows that the net $\{x_s\}$ converges strongly, as $s \to 0$, to a minimizer of (1.1) which solves a certain variational inequality.
is continuous with respect to s and .
Equivalently, we have .
The strong monotonicity of $F$ implies that the two solutions coincide, and the uniqueness is proved. Below we use $x^*$ to denote the unique solution of the variational inequality (3.4).
Note that and , so we get , i.e., .
Since $\{x_s\}$ is bounded, there is a sequence $\{s_n\}$ in $(0,1)$ with $s_n \to 0$ such that $\{x_{s_n}\}$ converges weakly to some point $\tilde{x}$.
So, by Lemma 2.3, we get $\tilde{x} \in S$.
Since , we obtain from (3.8) that .
Since the mapping is nonexpansive, its complement is monotone. Note that, for any given , and .
So $\tilde{x}$ is a solution of the variational inequality (3.4). We get $\tilde{x} = x^*$ by uniqueness. Therefore, $x_s \to x^*$ as $s \to 0$.
This proves that the net $\{x_s\}$ converges strongly to a minimizer of (1.1), which also solves the variational inequality (3.4). □
for all n;
Then the sequence $\{x_n\}$ generated by the explicit scheme (3.10) converges strongly to a minimizer of (1.1), which is also a solution of the variational inequality (3.4).
- (a) $x^*$ solves the minimization problem (1.1) if and only if $x^*$ solves the fixed-point equation $x^* = P_C\bigl(x^* - \gamma\nabla f(x^*)\bigr)$, where $\gamma > 0$ is an arbitrary fixed constant;
- (b) the gradient $\nabla f$ is $\frac{1}{L}$-ism;
- (c) $P_C(I - \gamma\nabla f)$ is averaged for $0 < \gamma < \frac{2}{L}$; in particular, the following relation holds: $P_C(I - \gamma\nabla f)$ is $\frac{2+\gamma L}{4}$-averaged.
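The averagedness constant in (c) can be recovered by combining the ism/averagedness correspondences with the composition rule recalled above; the following chain is our reconstruction of the standard computation:
$$\nabla f \ \text{is}\ \tfrac{1}{L}\text{-ism} \;\Longrightarrow\; \gamma\nabla f \ \text{is}\ \tfrac{1}{\gamma L}\text{-ism} \;\Longrightarrow\; I - \gamma\nabla f \ \text{is}\ \tfrac{\gamma L}{2}\text{-averaged} \quad (0 < \gamma < 2/L),$$
and since $P_C$ is $\tfrac{1}{2}$-averaged, the composite $P_C(I - \gamma\nabla f)$ is $\alpha$-averaged with
$$\alpha = \tfrac{1}{2} + \tfrac{\gamma L}{2} - \tfrac{1}{2}\cdot\tfrac{\gamma L}{2} = \tfrac{2+\gamma L}{4} \in \bigl(\tfrac{1}{2}, 1\bigr).$$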
Consequently, $\{x_n\}$ is bounded; it follows that the related sequences in the scheme are also bounded.
By Lemma 2.5, we obtain .
where $x^*$ is the unique solution of the variational inequality (3.4).
Without loss of generality, we may assume that .
By (3.14), we get .
By Lemma 2.3, we get .
Finally, we show that $x_n \to x^*$.
By (3.15) and the conditions on the parameters, we get the required estimate. Now applying Lemma 2.5 to (3.17) concludes that $x_n \to x^*$ as $n \to \infty$. □
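Since the explicit scheme (3.10) did not survive extraction here, the following sketch illustrates the flavour of the result with a generic Yamada-type iteration built from the GPA mapping $T = P_C(I - \gamma\nabla f)$, not the paper's exact scheme. Taking $F = I$ (which is $1$-Lipschitzian and $1$-strongly monotone), the limit solves the variational inequality over $S$, i.e., it is the minimum-norm minimizer. The objective, constraint set and parameters are illustrative choices of ours.

```python
import numpy as np

# Yamada-type hybrid steepest descent applied to T = P_C(I - gamma*grad f),
# with F = I, so the limit is the minimum-norm minimizer.

def grad_f(x):
    # f(x) = 0.5*(x[0] - 0.5)**2 is minimized on the segment {0.5} x [-1, 1];
    # the minimum-norm minimizer is (0.5, 0).
    return np.array([x[0] - 0.5, 0.0])

def proj_C(x):
    # C = [-1, 1]^2
    return np.clip(x, -1.0, 1.0)

gamma, mu = 1.0, 1.0           # gamma in (0, 2/L) with L = 1; mu in (0, 2)
x = np.array([-1.0, 1.0])
for n in range(1, 20001):
    theta = 1.0 / n            # theta_n -> 0, sum theta_n = infinity
    Tx = proj_C(x - gamma * grad_f(x))
    x = Tx - mu * theta * Tx   # F = I: x_{n+1} = (I - mu*theta_n*F) T x_n

print(x)   # approaches (0.5, 0), the minimum-norm minimizer
```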
In this section, we give an application of Theorem 3.3 to the split feasibility problem (SFP for short), which was introduced by Censor and Elfving [22]. Since its inception in 1994, the SFP has received much attention (see [7, 23, 24]) due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy.
The SFP is to find a point $x^*$ such that
$$x^* \in C \quad \text{and} \quad Ax^* \in Q, \qquad (4.1)$$
where $C$ and $Q$ are nonempty, closed and convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $A \colon H_1 \to H_2$ is a bounded linear operator.
where $0 < \gamma < 2/\|A\|^2$. The sequence generated by (4.3) was shown to converge weakly to a solution of the SFP.
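To make the CQ iteration $x_{n+1} = P_C\bigl(x_n - \gamma A^*(I - P_Q)Ax_n\bigr)$ concrete, here is a minimal numerical sketch; the matrix $A$, the box $C$, the ball $Q$ and the step size are illustrative choices of ours, not data from the paper.

```python
import numpy as np

# Toy SFP: find x in C with A x in Q, where C is a box in R^2
# and Q is a closed ball in R^2. All data below are illustrative.

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def proj_C(x):
    """Projection onto the box C = [-1, 1]^2."""
    return np.clip(x, -1.0, 1.0)

def proj_Q(y):
    """Projection onto the ball Q = {y : ||y - c|| <= 1}, c = (1, 0)."""
    c = np.array([1.0, 0.0])
    d = y - c
    nrm = np.linalg.norm(d)
    return y if nrm <= 1.0 else c + d / nrm

# CQ algorithm: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n),
# with 0 < gamma < 2/||A||^2 (here ||A||^2 = 4).
gamma = 0.4
x = np.array([-1.0, 1.0])
for _ in range(500):
    Ax = A @ x
    x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))

print(x, A @ x)   # on convergence, x is in C and A x is (nearly) in Q
```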
for all n,
where $F$ is a $k$-Lipschitzian and $\eta$-strongly monotone operator with constant $\mu$ such that $0 < \mu < 2\eta/k^2$. We can show that the sequence generated by (4.4) converges strongly to a solution of the SFP (4.1) if the parameter sequences satisfy appropriate conditions.
Applying Theorem 3.3, we obtain the following result.
Theorem 4.1 Assume that the split feasibility problem (4.1) is consistent, and let the sequence $\{x_n\}$ be generated by (4.4), where the parameter sequences satisfy the conditions (C3)-(C5). Then $\{x_n\}$ converges strongly to a solution of the split feasibility problem (4.1).
for all n.
By Theorem 3.3, the conclusion follows immediately. □
The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported in part by The Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. ZXH2012K001), and by the Science Research Foundation of Civil Aviation University of China (No. 2012KYM03).
- Brezis H: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam; 1973.
- Jaiboon C, Kumam P: A hybrid extragradient viscosity approximation method for solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Fixed Point Theory Appl. 2009. doi:10.1155/2009/374815
- Jitpeera T, Katchang P, Kumam P: A viscosity of Cesàro mean approximation methods for a mixed equilibrium, variational inequalities, and fixed point problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/945051
- Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. doi:10.1007/BFb0120965
- Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. doi:10.1016/S0377-2217(03)00423-5
- Bauschke H, Borwein J: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. doi:10.1137/S0036144593251710
- Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. doi:10.1088/0266-5611/20/1/006
- Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53: 475–504. doi:10.1080/02331930412331327157
- Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. doi:10.1007/s10957-011-9837-z
- Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, New York; 2001:473–504.
- Levitin ES, Polyak BT: Constrained minimization methods. Zh. Vychisl. Mat. Mat. Fiz. 1966, 6: 787–823.
- Calamai PH, Moré JJ: Projected gradient methods for linearly constrained problems. Math. Program. 1987, 39: 93–116. doi:10.1007/BF02592073
- Polyak BT: Introduction to Optimization. Optimization Software, New York; 1987.
- Su M, Xu HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1: 35–43.
- Jung JS: A general iterative approach to variational inequality problems and optimization problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/284363
- Jung JS: A general composite iterative method for generalized mixed equilibrium problems, variational inequality problems and optimization problems. J. Inequal. Appl. 2011. doi:10.1186/1029-242X-2011-51
- Jitpeera T, Kumam P: A new iterative algorithm for solving common solutions of generalized mixed equilibrium problems, fixed point problems and variational inclusion problems with minimization problems. Fixed Point Theory Appl. 2012, 2012: Article ID 111. doi:10.1186/1687-1812-2012-111
- Witthayarat U, Jitpeera T, Kumam P: A new modified hybrid steepest-descent by using a viscosity approximation method with a weakly contractive mapping for a system of equilibrium problems and fixed point problems with minimization problems. Abstr. Appl. Anal. 2012, 2012: Article ID 206345
- Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018
- Martinez-Yanes C, Xu HK: Strong convergence of the CQ method for fixed-point iteration processes. Nonlinear Anal. 2006, 64: 2400–2411. doi:10.1016/j.na.2005.08.018
- Hundal H: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35–61. doi:10.1016/j.na.2003.11.004
- Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. doi:10.1007/BF02142692
- López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004
- Zhao JL, Zhang YJ, Yang QZ: Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 2012, 219: 1644–1653. doi:10.1016/j.amc.2012.08.005
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.