The hybrid steepest descent method for solutions of equilibrium problems and other problems in fixed point theory
© Osilike et al.; licensee Springer. 2014
Received: 10 April 2014
Accepted: 14 June 2014
Published: 22 July 2014
In this paper, we combine the gradient projection algorithm and the hybrid steepest descent method and prove strong convergence to a common element of the solution set of an equilibrium problem, the null space of an inverse strongly monotone operator, the set of fixed points of a continuous pseudocontractive mapping, and the set of minimizers of a convex function. This common element is proved to be the unique solution of a variational inequality problem.
MSC: 47H06, 47H09, 47J05, 47J25.
The problem of finding $u \in C$ such that $\langle Au, v - u \rangle \geq 0$ for all $v \in C$ is called the variational inequality problem, and the set of its solutions is denoted by $VI(C, A)$.
Given a mapping $A: C \to H$, let $F(x, y) = \langle Ax, y - x \rangle$ for all $x, y \in C$; then $z \in EP(F)$ if and only if $\langle Az, y - z \rangle \geq 0$ for all $y \in C$, that is, z is a solution of the variational inequality (1.2).
The set of fixed points of a mapping T is denoted by $F(T)$.
In what follows, we shall use → for strong convergence and ⇀ for weak convergence.
where the parameters $\mu$ and $\lambda_n$ are positive real numbers known as step-sizes. The scheme (1.9) has been considered with several step-size rules:
Constant step-size, where $\lambda_n = \lambda$ for some $\lambda > 0$ and all n.
Diminishing step-size, where $\lambda_n \to 0$ and $\sum_{n=1}^{\infty} \lambda_n = \infty$.
Polyak's step-size, where $\lambda_n = \frac{f(x_n) - f^*}{\|\nabla f(x_n)\|^2}$, where $f^*$ is the optimal value of (1.6).
Modified Polyak's step-size, where the unknown optimal value $f^*$ is replaced by an estimate $f_n = \min_{k \leq n} f(x_k) - \delta$ for some scalar $\delta > 0$.
The constant step-size rule is suitable when we are interested in finding an approximate solution to the problem (1.6). The diminishing step-size rule is an off-line rule and is typically used, for example, in distributed implementations of the method.
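For concreteness, the gradient projection scheme under the constant and diminishing step-size rules can be sketched numerically. The objective $f(x) = \frac{1}{2}\|x - b\|^2$, the box constraint, and the data below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Illustrative problem (not from the paper): minimize
# f(x) = 0.5*||x - b||^2 over the box C = [0, 1]^2, whose
# constrained minimizer is the projection of b onto C.
b = np.array([2.0, -0.5])

def grad_f(x):
    return x - b                           # gradient of f

def proj_C(x):
    return np.clip(x, 0.0, 1.0)           # closed-form projection onto the box

def gradient_projection(step_rule, n_iter=200):
    x = np.zeros(2)
    for n in range(1, n_iter + 1):
        x = proj_C(x - step_rule(n) * grad_f(x))
    return x

constant    = lambda n: 0.5                # constant step-size rule
diminishing = lambda n: 1.0 / n            # lam_n -> 0 and sum lam_n = infinity

print(gradient_projection(constant))       # -> approx. [1. 0.] = P_C(b)
print(gradient_projection(diminishing))    # -> approx. [1. 0.]
```

Both rules drive the iterates to the constrained minimizer $P_C(b) = (1, 0)$; the constant rule gives geometric convergence here because this particular ∇f is strongly monotone.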
These schemes are the well-known gradient projection algorithms. However, the convergence of these schemes requires that the operator ∇f be Lipschitz continuous and strongly monotone, which is a strong condition and restrictive in applications. If ∇f is Lipschitz continuous and strongly monotone on H, then the map $P_C(I - \lambda \nabla f)$ is a strict contraction for a suitable choice of $\lambda > 0$, and by the Banach contraction principle the sequence defined by (1.8) converges strongly to the unique minimizer of (1.6), which is the solution of the variational inequality problem (1.7). Another limitation of the scheme in (1.8) is that it assumes that a closed-form expression for the projection $P_C$ is known, whereas in many situations it is not.
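The contraction argument can be checked numerically on a strongly convex quadratic, where ∇f is L-Lipschitz and η-strongly monotone with L and η the extreme eigenvalues of the Hessian; the data and the ball constraint below are illustrative assumptions:

```python
import numpy as np

# Illustrative data: f(x) = 0.5 x^T Q x - b^T x with Q symmetric positive
# definite, so grad f(x) = Q x - b is L-Lipschitz (L = max eigenvalue of Q)
# and eta-strongly monotone (eta = min eigenvalue of Q).
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
L, eta = 2.0, 1.0

def grad_f(x):
    return Q @ x - b

def proj_C(x):                              # C = closed unit ball (closed-form projection)
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

lam = eta / L**2                            # any lam in (0, 2*eta/L**2) works
rho = np.sqrt(1 - 2 * lam * eta + (lam * L) ** 2)   # contraction factor of I - lam*grad f

x = np.zeros(2)
for _ in range(500):
    x = proj_C(x - lam * grad_f(x))         # T = P_C(I - lam*grad f) is a strict contraction

print(rho)                                  # approx. 0.866 < 1
print(x)                                    # the unique constrained minimizer (on the boundary)
```

The unconstrained minimizer $(0.5, 1)$ lies outside the unit ball, so the iterates converge to a boundary point, the unique fixed point of $P_C(I - \lambda \nabla f)$.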
The iterative approximation of fixed points and zeros of nonlinear operators has been studied extensively by many authors to solve nonlinear operator equations as well as variational inequality problems (see [1, 2] and the references therein).
where the parameter sequences satisfy appropriate conditions, and they proved that the sequence converges strongly to a minimizer of a constrained convex minimization problem which also solves a certain variational inequality.
and proved that if H is a real Hilbert space; is a continuous pseudocontractive mapping; , , is a countably infinite family of nonexpansive mappings; is a bifunction satisfying (A1)-(A4); a proper lower semicontinuous convex function; a continuous monotone mapping; is a fixed vector; is a strongly positive bounded linear operator with coefficient γ; is an η-inverse strongly monotone mapping; and the sequences , , satisfy appropriate conditions, then the sequence converges strongly to the unique solution of a certain variational inequality.
and proved the following theorem.
Theorem IY 
$\lambda_n \to 0$ as $n \to \infty$,
Take $x_1 \in H$ arbitrary and define $\{x_n\}$ by (1.9); then $\{x_n\}$ converges strongly to the unique solution of the variational inequality over K, where K is the set of fixed points of T.
The scheme (1.9) minimizes certain convex functions over the intersection of fixed point sets of nonexpansive mappings if the operator in (1.9) equals ∇f, say, where f is a continuously Fréchet differentiable convex function. The scheme solves the variational inequality and does not require a closed-form expression for $P_K$ but, instead, requires a closed-form expression for a nonexpansive mapping T whose set of fixed points is K.
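A minimal numerical sketch of this idea (illustrative data, not the paper's scheme verbatim): we minimize a convex f over $K = C_1 \cap C_2$, for which $P_K$ has no simple closed form, using only the nonexpansive map $T = P_{C_1} P_{C_2}$, whose fixed point set equals K when the intersection is nonempty:

```python
import numpy as np

# Illustrative setup (not the paper's scheme verbatim): minimize
# f(x) = 0.5*||x - a||^2 over K = C1 ∩ C2 with C1 = {x : x[0] <= 0.5}
# and C2 the closed unit ball.  P_K has no simple closed form, but
# T = P_C1 ∘ P_C2 is nonexpansive and (since C1 ∩ C2 is nonempty)
# its fixed point set is exactly K, so the hybrid steepest descent
# iteration x_{n+1} = T(x_n) - mu*lam_n*grad_f(T(x_n)) applies.
a = np.array([2.0, 0.0])                    # the solution of this instance is (0.5, 0)

def grad_f(x):
    return x - a                            # 1-Lipschitz and 1-strongly monotone

def P_C1(x):
    return np.array([min(x[0], 0.5), x[1]])

def P_C2(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def T(x):
    return P_C1(P_C2(x))

mu, x = 1.0, np.zeros(2)
for n in range(1, 2001):
    y = T(x)
    x = y - mu * (1.0 / n) * grad_f(y)      # diminishing step-size lam_n = 1/n

print(x)                                    # approx. (0.5, 0) = P_K(a)
```

Note that only the closed-form projections $P_{C_1}$ and $P_{C_2}$ are evaluated; the projection onto the intersection K is never computed.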
and he proved that if the parameter sequences satisfy certain conditions, then the sequence given by (1.10) converges strongly to a point which solves a certain variational inequality.
where , , and they proved that if C is a nonempty, closed, and convex subset of a real Hilbert space H; Φ is a bifunction from into ℝ satisfying (A1)-(A4); a real-valued convex function; ∇f is an L-Lipschitzian mapping with ; where Ω is the solution set of a minimization problem; a k-Lipschitzian and η-strongly monotone operator with constants ; and the sequences , , satisfy appropriate conditions, then the generated sequence converges strongly to a point which solves a certain variational inequality.
In this paper, motivated by the results of Ofoedu, Yamada, Tian, and Tian and Liu, we shall study a new iterative scheme and prove strong convergence to a common element of the solution set of an equilibrium problem, the null space of an inverse strongly monotone operator, the set of fixed points of a continuous pseudocontractive mapping, and the set of minimizers of a convex function. This common element is proved to be the unique solution of a variational inequality problem.
For solving the equilibrium problem for a bifunction $F: C \times C \to \mathbb{R}$, let us assume that F satisfies the following conditions:
(A1) $F(x, x) = 0$ for all $x \in C$.
(A2) F is monotone, i.e., $F(x, y) + F(y, x) \leq 0$ for all $x, y \in C$.
(A3) For each $x, y, z \in C$, $\limsup_{t \downarrow 0} F(tz + (1 - t)x, y) \leq F(x, y)$.
(A4) For each $x \in C$, the function $y \mapsto F(x, y)$ is convex and lower semicontinuous.
Lemma 2.1 (Blum and Oettli )
Lemma 2.2 (Zegeye )
, then the following hold:
(C1) is single-valued;
(C4) is closed and convex.
Lemma 2.3 (Combettes and Hirstoaga )
then the following hold:
(B1) is single-valued;
(B4) is closed and convex.
Lemma 2.4 (Ofoedu )
η-strongly monotone over K if there exists $\eta > 0$ such that $\langle Ax - Ay, x - y \rangle \geq \eta \|x - y\|^2$ for all $x, y \in K$;
α-inverse strongly monotone over K if there exists $\alpha > 0$ such that $\langle Ax - Ay, x - y \rangle \geq \alpha \|Ax - Ay\|^2$ for all $x, y \in K$.
A point $u \in K$ is a solution of $VI(K, A)$ if $\langle Au, v - u \rangle \geq 0$ for all $v \in K$.
For fixed , .
Lemma 2.8 (Demiclosedness Principle)
Let $T: C \to C$ be a nonexpansive mapping with $F(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in C such that $x_n \rightharpoonup x$ and $(I - T)x_n \to 0$, then $x = Tx$, that is, $x \in F(T)$.
A map $T: H \to H$ is said to be averaged if $T = (1 - \alpha)I + \alpha S$, where $\alpha \in (0, 1)$ and $S: H \to H$ is nonexpansive, and we say that T is α-averaged.
Firmly nonexpansive maps are $\frac{1}{2}$-averaged. Thus, a map T is firmly nonexpansive if and only if $T = \frac{1}{2}(I + S)$, where S is nonexpansive and I is the identity mapping on H.
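This characterization can be spot-checked numerically: for the (firmly nonexpansive) projection P onto the unit ball, the map $S = 2P - I$ should be nonexpansive. The choice of set and the random test points below are illustrative:

```python
import numpy as np

# Spot check (illustrative): P = projection onto the unit ball is firmly
# nonexpansive, so S = 2P - I should be nonexpansive, i.e. P = 0.5*(I + S).
rng = np.random.default_rng(0)

def P(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def S(x):
    return 2 * P(x) - x

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    # firm nonexpansiveness: <Px - Py, x - y> >= ||Px - Py||^2
    assert np.dot(P(x) - P(y), x - y) >= np.linalg.norm(P(x) - P(y)) ** 2 - 1e-12
    # hence S = 2P - I is nonexpansive
    assert np.linalg.norm(S(x) - S(y)) <= np.linalg.norm(x - y) + 1e-12
```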
Every averaged mapping is nonexpansive.
A map S is nonexpansive if and only if $I - S$ is $\frac{1}{2}$-inverse strongly monotone.
If A is η-inverse strongly monotone and $\lambda > 0$, then λA is $\frac{\eta}{\lambda}$-inverse strongly monotone.
Lemma 2.11 A map T is averaged if and only if $I - T$ is η-inverse strongly monotone for some $\eta > \frac{1}{2}$. In particular, for $\alpha \in (0, 1)$, T is α-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-inverse strongly monotone.
Lemma 2.12 Let $T = (1 - \alpha)A + \alpha S$, where $\alpha \in (0, 1)$. If A is averaged and S is nonexpansive, then T is averaged.
A map N is firmly nonexpansive if and only if it is 1-inverse strongly monotone.
N is firmly nonexpansive if and only if $I - N$ is firmly nonexpansive.
Every firmly nonexpansive map is averaged.
If $T = (1 - \alpha)N + \alpha S$, $\alpha \in (0, 1)$, where N is firmly nonexpansive and S is nonexpansive, then T is averaged.
If $T_1, \ldots, T_N$ is a family of nonexpansive mappings, then the composite mapping $T_1 T_2 \cdots T_N$ is nonexpansive.
If $T_1, \ldots, T_N$ is a family of averaged mappings, then the composite mapping $T_1 T_2 \cdots T_N$ is averaged. If $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged for some $\alpha_1, \alpha_2 \in (0, 1)$, then $T_1 T_2$ is α-averaged with $\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2$.
When $\alpha = 1$, (2.1) implies that A is firmly nonexpansive and, hence, A is nonexpansive. Thus, a map A is firmly nonexpansive if and only if it is 1-inverse strongly monotone. From the Schwarz inequality, we find that α-inverse strong monotonicity implies $\frac{1}{\alpha}$-Lipschitz continuity. However, the converse is not true. For instance, $-I$ (where I is the identity mapping on H) is nonexpansive (hence, 1-Lipschitz) but not firmly nonexpansive, hence not 1-inverse strongly monotone. In 1977, Baillon and Haddad showed that if A is the gradient of a convex function, say f, i.e., $A = \nabla f$, then $\frac{1}{\alpha}$-Lipschitz continuity implies α-inverse strong monotonicity and vice versa.
If ∇f is L-Lipschitz, then ∇f is $\frac{1}{L}$-inverse strongly monotone and $\lambda \nabla f$ is $\frac{1}{\lambda L}$-inverse strongly monotone. Then, by Lemma 2.11, $I - \lambda \nabla f$ is $\frac{\lambda L}{2}$-averaged. The projection map $P_C$ is firmly nonexpansive and hence is $\frac{1}{2}$-averaged. The composition $P_C(I - \lambda \nabla f)$ is α-averaged (from Remark 2.13) with $\alpha = \frac{1}{2} + \frac{\lambda L}{2} - \frac{1}{2} \cdot \frac{\lambda L}{2} = \frac{2 + \lambda L}{4}$. Now, for $\lambda \in (0, \frac{2}{L})$, $P_C(I - \lambda \nabla f)$ is $\frac{2 + \lambda L}{4}$-averaged, so that from Remark 2.13 we have $P_C(I - \lambda \nabla f) = (1 - \alpha)I + \alpha S$, where S is nonexpansive and $\alpha = \frac{2 + \lambda L}{4}$ (see [14–22] and the references therein).
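The averagedness constant computed above can be verified numerically: with $\lambda = \frac{1}{L}$ we get $\alpha = \frac{3}{4}$, and the map S recovered from $P_C(I - \lambda \nabla f) = (1 - \alpha)I + \alpha S$ should be nonexpansive. The least-squares objective and the ball constraint below are illustrative assumptions:

```python
import numpy as np

# Numerical check (illustrative data): f(x) = 0.5*||A x - b||^2, so
# grad f(x) = A^T (A x - b) is L-Lipschitz with L = max eigenvalue of A^T A.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))
b = rng.normal(size=4)
L = np.linalg.eigvalsh(A.T @ A).max()

lam = 1.0 / L                               # lam in (0, 2/L)
alpha = (2 + lam * L) / 4                   # averagedness constant; here alpha = 3/4

def grad_f(x):
    return A.T @ (A @ x - b)

def P_C(x):                                 # projection onto the unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def G(x):                                   # G = P_C(I - lam*grad f)
    return P_C(x - lam * grad_f(x))

def S(x):                                   # S from G = (1 - alpha)I + alpha*S
    return (G(x) - (1 - alpha) * x) / alpha

for _ in range(1000):                       # S should be nonexpansive
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert np.linalg.norm(S(x) - S(y)) <= np.linalg.norm(x - y) + 1e-10
```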
3 Main result
, , , ,
, , ,
, , ,
and let ε be a real constant such that . For , , are as in Lemma 2.2 and Lemma 2.3.
we shall study the strong convergence of the iteration scheme to a unique solution, which solves the variational inequality, and we have the decomposition , where is nonexpansive.
Lemma 3.2 Suppose the conditions of Remark 3.1 are satisfied, then defined by (3.1) is bounded.
and hence is nonexpansive.
where . Hence, is a strict contraction, and by the Banach contraction principle it has a unique fixed point in H.
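As a minimal illustration of the Banach contraction principle invoked here (with a hypothetical contraction, unrelated to the paper's operator):

```python
import math

# A hypothetical contraction, unrelated to the paper's operator:
# g(x) = 0.5*cos(x) satisfies |g(x) - g(y)| <= 0.5*|x - y|, so by the
# Banach contraction principle the Picard iteration x_{n+1} = g(x_n)
# converges to the unique fixed point of g.
def g(x):
    return 0.5 * math.cos(x)

x = 0.0
for _ in range(100):
    x = g(x)

print(x)                        # the unique solution of x = 0.5*cos(x), approx. 0.450
assert abs(x - g(x)) < 1e-12    # fixed-point residual vanishes
```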
Therefore is bounded. Consequently we find that , , , are bounded. □
which shows that is bounded.
Hence, is bounded.
That is, . It follows from (A3) that , . Since m is taken arbitrarily, it follows that .
which implies that .
by (3.23). Let as . If and , then by the nonexpansiveness of and Lemma 2.8, , where ; hence, .