 Research
 Open Access
Approximate solutions to variational inequality over the fixed point set of a strongly nonexpansive mapping
 Shigeru Iemoto^{1},
 Kazuhiro Hishinuma^{2} and
 Hideaki Iiduka^{2}
https://doi.org/10.1186/1687-1812-2014-51
© Iemoto et al.; licensee Springer. 2014
Received: 3 September 2013
Accepted: 13 February 2014
Published: 25 February 2014
Abstract
Variational inequality problems over fixed point sets of nonexpansive mappings include many practical problems in engineering and applied mathematics, and a number of iterative methods have been presented to solve them. In this paper, we discuss a variational inequality problem for a monotone, hemicontinuous operator over the fixed point set of a strongly nonexpansive mapping on a real Hilbert space. We then present an iterative algorithm, which uses the strongly nonexpansive mapping at each iteration, for solving the problem. We show that the algorithm potentially converges in the fixed point set faster than algorithms using firmly nonexpansive mappings. We also prove that, under certain assumptions, the algorithm with slowly diminishing stepsize sequences converges to a solution to the problem in the sense of the weak topology of a Hilbert space. Numerical results demonstrate that the algorithm converges to a solution to a concrete variational inequality problem faster than the previous algorithm.
MSC: 47H06, 47J20, 47J25.
Keywords
 variational inequality problem
 fixed point set
 strongly nonexpansive mapping
 monotone operator
1 Introduction
when $A:H\to H$ is strongly monotone and Lipschitz continuous. Problem (2) contains many applications such as signal recovery problems [11], beamforming problems [12], power-control problems [13, 14], bandwidth allocation problems [15–17], and optimal control problems [18]. References [11, 19], and [20] presented acceleration methods for solving Problem (2) when A is strongly monotone and Lipschitz continuous. Algorithms were presented to solve Problem (2) when A is (strictly) monotone and Lipschitz continuous [15, 17]. When $H={\mathbb{R}}^{N}$ and $A:{\mathbb{R}}^{N}\to {\mathbb{R}}^{N}$ is continuous (and is not necessarily monotone), a simple algorithm, ${x}_{n+1}:={\alpha}_{n}{x}_{n}+(1-{\alpha}_{n})(1/2)(I+T)({x}_{n}-{r}_{n}A{x}_{n})$ (${\alpha}_{n},{r}_{n}\in [0,1]$), was presented in [14], and this algorithm converges to a solution to Problem (2) under certain conditions.
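The simple algorithm from [14] quoted above is easy to try numerically. Below is a minimal, illustrative sketch in which T is the metric projection onto the closed unit ball of ${\mathbb{R}}^{2}$ (so $Fix(T)$ is the ball) and $A:=\mathrm{\nabla}g$ for $g(x):=(1/2){\parallel x-d\parallel}^{2}$; the point d, the stepsizes, and the dimension are assumptions made for this example only, not choices from the paper.

```python
import numpy as np

# Illustrative instance of Problem (2): T is the projection onto the
# closed unit ball (Fix(T) = the ball), and A(x) = x - d is the gradient
# of g(x) = (1/2)||x - d||^2, which is strongly monotone and Lipschitz
# continuous.  The data d and the stepsizes are assumptions for this demo.

def T(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

d = np.array([3.0, 4.0])        # a point outside the unit ball
A = lambda x: x - d             # A := grad g

x = np.zeros(2)
for n in range(1, 5001):
    alpha_n, r_n = 0.5, 1.0 / n                 # alpha_n, r_n in [0, 1]
    half = 0.5 * (x + T(x - r_n * A(x)))        # (1/2)(I + T)(x_n - r_n A x_n)
    x = alpha_n * x + (1.0 - alpha_n) * half    # x_{n+1}

# The unique solution of VI(Fix(T), A) is the projection of d onto the
# ball, d/||d|| = (0.6, 0.8), and the iterates approach it.
print(x)
```

Here the iterates move onto the ray through d and then contract geometrically toward the projection of d onto the ball, which is the unique solution of this toy variational inequality.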
Algorithm (5) potentially converges in the fixed point set faster than algorithm (4). Here, we can see that the mapping $S:=(1-\alpha )I+\alpha T$ satisfies the strong nonexpansivity condition [22], which is a weaker condition than firm nonexpansivity. This implies that the previous algorithms in [14, 21], which can be applied to Problem (2) when T is firmly nonexpansive, cannot solve Problem (2) when T is merely strongly nonexpansive.
In this paper, we present an iterative algorithm for solving the variational inequality problem with a monotone, hemicontinuous operator over the fixed point set of a strongly nonexpansive mapping and show that the algorithm weakly converges to a solution to the problem under certain assumptions.
The rest of the paper is organized as follows. Section 2 covers the mathematical preliminaries. Section 3 presents the algorithm for solving the variational inequality problem for a monotone, hemicontinuous operator over the fixed point set of a strongly nonexpansive mapping, and its convergence analyses. Section 4 provides numerical comparisons of the algorithm with the previous algorithm in [21] and shows that the algorithm converges to a solution to a concrete variational inequality problem faster than the previous algorithm. Section 5 concludes the paper.
2 Preliminaries
Throughout this paper, we will denote the set of all positive integers by ℕ and the set of all real numbers by ℝ. Let H be a real Hilbert space with inner product $\langle \cdot ,\cdot \rangle $ and its induced norm $\parallel \cdot \parallel $. We denote the strong convergence and weak convergence of $\{{x}_{n}\}$ to $x\in H$ by ${x}_{n}\to x$ and ${x}_{n}\rightharpoonup x$, respectively. It is well known that H satisfies the following condition, called Opial’s condition [23]: for any $\{{x}_{n}\}\subset H$ satisfying ${x}_{n}\rightharpoonup {x}_{0}$, ${lim\hspace{0.17em}inf}_{n\to \mathrm{\infty}}\parallel {x}_{n}-{x}_{0}\parallel <{lim\hspace{0.17em}inf}_{n\to \mathrm{\infty}}\parallel {x}_{n}-y\parallel $ holds for all $y\in H$ with $y\ne {x}_{0}$; see also [5, 6, 24]. To prove our main theorems, we need the following lemma, which was proven in [25]; see also [5, 6, 26].
Lemma 2.1 ([25])
Assume that $\{{s}_{n}\}$ and $\{{e}_{n}\}$ are sequences of nonnegative numbers such that ${s}_{n+1}\le {s}_{n}+{e}_{n}$ for all $n\in \mathbb{N}$. If ${\sum}_{n=1}^{\mathrm{\infty}}{e}_{n}<\mathrm{\infty}$, then ${lim}_{n\to \mathrm{\infty}}{s}_{n}$ exists.
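A quick numerical illustration of Lemma 2.1 (the particular sequences below are assumptions chosen for this demo): the increments are bounded by the summable errors ${e}_{n}=1/{n}^{2}$, so $\{{s}_{n}\}$ settles down to a limit even though it is not monotone.

```python
import math

# Demo of Lemma 2.1: s_{n+1} <= s_n + e_n with nonnegative s_n, e_n and
# sum e_n < infinity forces lim s_n to exist.  Here the increment is
# e_n * cos(n) <= e_n, so the hypothesis holds while {s_n} oscillates.
# The particular sequences are illustrative assumptions.

s = 2.0
tail = []                          # record the last 1,000 values of s_n
for n in range(1, 200001):
    e_n = 1.0 / n**2               # summable error terms
    s += e_n * math.cos(n)         # s_{n+1} <= s_n + e_n
    assert s >= 0.0                # s_n stays nonnegative here
    if n > 199000:
        tail.append(s)

# The tail values agree to within the remaining error mass
# sum_{n > 199000} e_n, so the sequence is Cauchy and the limit exists.
print(max(tail) - min(tail))
```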
2.1 Strong nonexpansivity and fixed point set
Let T be a mapping of H into itself. We denote the fixed point set of T by $Fix(T)$; i.e., $Fix(T)=\{z\in H:Tz=z\}$. A mapping $T:H\to H$ is said to be nonexpansive if $\parallel Tx-Ty\parallel \le \parallel x-y\parallel $ for all $x,y\in H$. $Fix(T)$ is closed and convex when T is nonexpansive [5, 6, 24, 27]. $T:H\to H$ is said to be strongly nonexpansive [22] if T is nonexpansive and if, for bounded sequences $\{{x}_{n}\},\{{y}_{n}\}\subset H$, $\parallel {x}_{n}-{y}_{n}\parallel -\parallel T{x}_{n}-T{y}_{n}\parallel \to 0$ implies $\parallel {x}_{n}-{y}_{n}-(T{x}_{n}-T{y}_{n})\parallel \to 0$. The following properties of strongly nonexpansive mappings were shown in [22]:

$Fix(T)$ is closed and convex when $T:H\to H$ is strongly nonexpansive because T is also nonexpansive.

A strongly nonexpansive mapping, $T:H\to H$, with $Fix(T)\ne \mathrm{\varnothing}$ is asymptotically regular [24, 28]; i.e., for each $x\in H$, ${lim}_{n\to \mathrm{\infty}}\parallel {T}^{n}x-{T}^{n+1}x\parallel =0$.

If $S,T:H\to H$ are strongly nonexpansive, then ST is also strongly nonexpansive, and $Fix(ST)=Fix(S)\cap Fix(T)$ when $Fix(S)\cap Fix(T)\ne \mathrm{\varnothing}$.

If $S:H\to H$ is strongly nonexpansive and $T:H\to H$ is nonexpansive, then $\alpha S+(1-\alpha )T$ is strongly nonexpansive for $\alpha \in (0,1)$. If $Fix(S)\cap Fix(T)\ne \mathrm{\varnothing}$, then $Fix(\alpha S+(1-\alpha )T)=Fix(S)\cap Fix(T)$ [29]. In particular, since the identity mapping I is strongly nonexpansive, the mapping $U:=\alpha I+(1-\alpha )T$ is strongly nonexpansive. Such a U is said to be averaged nonexpansive.
Then T is strongly nonexpansive and $Fix(T)=\{x\in D:f(x)={min}_{y\in D}f(y)\}$.
Then S is nonexpansive [[10], Proposition 4.2] and $Fix(S)={C}_{\mathrm{\Phi}}:=\{x\in {D}_{0}:\mathrm{\Phi}(x)={min}_{y\in {D}_{0}}\mathrm{\Phi}(y)\}$. Hence, T is strongly nonexpansive and $Fix(T)={C}_{\mathrm{\Phi}}$. ${C}_{\mathrm{\Phi}}$ is referred to as a generalized convex feasible set [10, 32] and is defined as the subset of ${D}_{0}$ that is closest to ${D}_{1},{D}_{2},\dots ,{D}_{m}$ in the mean square sense. Even if ${\bigcap}_{i=0}^{m}{D}_{i}=\mathrm{\varnothing}$, ${C}_{\mathrm{\Phi}}$ is well defined. ${C}_{\mathrm{\Phi}}={\bigcap}_{i=0}^{m}{D}_{i}$ holds when ${\bigcap}_{i=0}^{m}{D}_{i}\ne \mathrm{\varnothing}$. Accordingly, ${C}_{\mathrm{\Phi}}$ is a generalization of ${\bigcap}_{i=0}^{m}{D}_{i}$.
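The composition property above underlies projection methods. As a small sanity check (the two sets and the starting point below are assumptions for this demo), the code composes the metric projections S and T onto two intersecting half-planes of ${\mathbb{R}}^{2}$; metric projections are firmly, hence strongly, nonexpansive, so iterating ST finds a point of $Fix(ST)=Fix(S)\cap Fix(T)$:

```python
import numpy as np

# S and T are metric projections onto the half-planes {x : x[0] <= 1}
# and {x : x[1] >= 0}; both are firmly (hence strongly) nonexpansive,
# and Fix(S) ∩ Fix(T) is nonempty, so Fix(ST) = Fix(S) ∩ Fix(T).
# The sets and the starting point are illustrative assumptions.

S = lambda x: np.array([min(x[0], 1.0), x[1]])   # projection onto {x[0] <= 1}
T = lambda x: np.array([x[0], max(x[1], 0.0)])   # projection onto {x[1] >= 0}

x = np.array([5.0, -3.0])
for _ in range(50):
    x = S(T(x))            # iterate the strongly nonexpansive composition

# x is now a common fixed point, i.e., it lies in both half-planes.
print(x)                   # -> [1. 0.]
```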
A mapping $F:H\to H$ is said to be firmly nonexpansive [33] if ${\parallel Fx-Fy\parallel}^{2}\le \langle x-y,Fx-Fy\rangle $ for all $x,y\in H$ (see also [24, 27, 34]). Every firmly nonexpansive mapping F can be expressed as $F=(1/2)(I+T)$ for some nonexpansive mapping T [24, 27, 34]. Hence, the class of averaged nonexpansive mappings includes the class of firmly nonexpansive mappings.
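The metric projection onto a closed convex set is the standard example of a firmly nonexpansive mapping, and the defining inequality can be checked numerically. The sketch below (the ball, the dimension, and the random sample points are assumptions for this demo) tests ${\parallel Fx-Fy\parallel}^{2}\le \langle x-y,Fx-Fy\rangle $ on random pairs:

```python
import numpy as np

# Check firm nonexpansivity, ||Fx - Fy||^2 <= <x - y, Fx - Fy>, for F the
# metric projection onto the closed unit ball of R^3.  The ball and the
# random sample points are illustrative assumptions for this demo.

def F(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = np.linalg.norm(F(x) - F(y)) ** 2
    rhs = float(np.dot(x - y, F(x) - F(y)))
    ok = ok and lhs <= rhs + 1e-12   # small tolerance for rounding
print(ok)                            # -> True
```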
2.2 Variational inequality
We denote the solution set of the variational inequality problem by $VI(C,A)$. The monotonicity and hemicontinuity of A imply that $VI(C,A)=\{z\in C:\langle y-z,Ay\rangle \ge 0\phantom{\rule{0.25em}{0ex}}\text{for all}\phantom{\rule{0.25em}{0ex}}y\in C\}$ [[5], Subsection 7.1]. This means that $VI(C,A)$ is closed and convex. $VI(C,A)$ is nonempty when $A:H\to H$ is monotone and hemicontinuous, and $C\subset H$ is nonempty, compact, and convex [[5], Theorem 7.1.8].
Example 2.3 Let $g:H\to \mathbb{R}$ be convex and continuously Fréchet differentiable and $A:=\mathrm{\nabla}g$. Then A is monotone and hemicontinuous.
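Example 2.3 can be checked numerically. In the sketch below (the quadratic g and the random samples are assumptions for this demo), $g(x):=(1/2)\langle x,Qx\rangle $ with Q symmetric positive semidefinite gives $A=\mathrm{\nabla}g=Q(\cdot )$, and the monotonicity $\langle Ax-Ay,x-y\rangle =\langle Q(x-y),x-y\rangle \ge 0$ holds on every sampled pair:

```python
import numpy as np

# For convex, continuously differentiable g(x) = (1/2) x^T Q x with Q
# symmetric positive semidefinite, A := grad g = Q x is monotone:
# <Ax - Ay, x - y> = (x - y)^T Q (x - y) >= 0.  The matrix and sample
# points are illustrative assumptions for this demo.

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
Q = M.T @ M                        # symmetric positive semidefinite
A = lambda x: Q @ x                # gradient of g

mono = True
for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    mono = mono and float(np.dot(A(x) - A(y), x - y)) >= -1e-9
print(mono)                        # -> True
```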
A solution of this problem is a minimizer of g over the set of all minimizers of f over D. Therefore, the problem has a triple-hierarchical structure [16, 31, 35].
This problem is to find a minimizer of g over the generalized convex feasible set [10, 13, 14, 16, 18].
3 Optimization of variational inequality over fixed point set
In this section, we present an iterative algorithm for solving the variational inequality problem for a monotone, hemicontinuous operator over the fixed point set of a strongly nonexpansive mapping and its convergence analyses. We assume that $T:H\to H$ is a strongly nonexpansive mapping with $Fix(T)\ne \mathrm{\varnothing}$ and that $A:H\to H$ is a monotone, hemicontinuous operator.
Algorithm 3.1
Step 0. Choose ${x}_{1}\in H$, ${r}_{1}\in (0,1)$, and ${\alpha}_{1}\in [0,1)$ arbitrarily, and let $n:=1$.
Step 1. Given ${x}_{n}\in H$, compute ${z}_{n}:={x}_{n}-{r}_{n}A{x}_{n}$ and ${y}_{n}:=T{z}_{n}$, and update ${x}_{n+1}:={\alpha}_{n}{x}_{n}+(1-{\alpha}_{n}){y}_{n}$.
Step 2. Update $n:=n+1$, and go to Step 1.
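A minimal sketch of Algorithm 3.1 under assumed data: here T is the averaged (hence strongly nonexpansive) mapping $(1-a)I+aP$ built from the projection P onto the unit ball of ${\mathbb{R}}^{2}$ with the large averaging parameter $a=9/10$, and $A(x):=x-d$ is monotone and Lipschitz continuous. The iteration computes ${z}_{n}:={x}_{n}-{r}_{n}A{x}_{n}$ and ${y}_{n}:=T{z}_{n}$ and averages (cf. the quantities ${z}_{n}$ and ${y}_{n}$ in the proof of Theorem 3.1). All data and stepsizes are illustrative assumptions, not the paper's experiment; ${r}_{n}:=1/n$ is a slowly diminishing stepsize in the sense of the abstract.

```python
import numpy as np

# Sketch of Algorithm 3.1 on an assumed toy problem: T := (1 - a)I + aP
# with P the projection onto the closed unit ball (so Fix(T) = the ball
# and T is averaged, hence strongly nonexpansive), and A(x) := x - d.
# All data and stepsizes are illustrative assumptions.

def P(x):                                  # projection onto the unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

a = 0.9                                    # large averaging parameter
T = lambda x: (1.0 - a) * x + a * P(x)     # Fix(T) = unit ball
d = np.array([3.0, 4.0])
A = lambda x: x - d                        # monotone, Lipschitz continuous

x = np.zeros(2)
for n in range(1, 5001):
    r_n, alpha_n = 1.0 / n, 0.5            # slowly diminishing stepsize
    z = x - r_n * A(x)                     # z_n := x_n - r_n A x_n
    y = T(z)                               # y_n := T z_n
    x = alpha_n * x + (1.0 - alpha_n) * y  # x_{n+1}

# The solution of VI(Fix(T), A) is the projection of d onto the ball.
print(x)
```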
To prove our main theorems, we need the following lemma.
 (A)
${\sum}_{n=1}^{\mathrm{\infty}}{r}_{n}<\mathrm{\infty}$, or
 (B)
${\sum}_{n=1}^{\mathrm{\infty}}{r}_{n}^{2}<\mathrm{\infty}$, $VI(Fix(T),A)\ne \mathrm{\varnothing}$, and the existence of an ${n}_{0}\in \mathbb{N}$ satisfying $VI(Fix(T),A)\subset \mathrm{\Omega}:={\bigcap}_{n={n}_{0}}^{\mathrm{\infty}}\{x\in Fix(T):\langle {x}_{n}-x,A{x}_{n}\rangle \ge 0\}$.
Then $\{{x}_{n}\}$ is bounded.
From ${\sum}_{n=1}^{\mathrm{\infty}}{r}_{n}<\mathrm{\infty}$, the boundedness of $\{A{x}_{n}\}$, and Lemma 2.1, the limit of $\{\parallel {x}_{n}-u\parallel \}$ exists for all $u\in Fix(T)$, which implies that $\{{x}_{n}\}$ is bounded.
Hence, the condition, ${\sum}_{n=1}^{\mathrm{\infty}}{r}_{n}^{2}<\mathrm{\infty}$, and Lemma 2.1 guarantee that the limit of $\{\parallel {x}_{n}-u\parallel \}$ exists for all $u\in VI(Fix(T),A)$. We thus conclude that $\{{x}_{n}\}$ is bounded. □
Now, we are in the position to perform the convergence analysis on Algorithm 3.1 under condition (A) in Lemma 3.1.
Then Algorithm 3.1 converges weakly to a point in $VI(Fix(T),A)$.
 (a)
Prove that $\{{x}_{n}\}$ and $\{{z}_{n}\}$ are bounded.
 (b)
Prove that ${lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-{y}_{n}\parallel =0$ and ${lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-T{x}_{n}\parallel =0$ hold.
 (c)
Prove that $\{{x}_{n}\}$ converges weakly to a point in $\mathrm{VI}(Fix(T),A)$.
 (a)
Choose $u\in Fix(T)$ arbitrarily. From the inequality, $\parallel {z}_{n}-u\parallel =\parallel ({x}_{n}-{r}_{n}A{x}_{n})-u\parallel \le \parallel {x}_{n}-u\parallel +{r}_{n}\parallel A{x}_{n}\parallel $, and Lemma 3.1, we deduce that $\{{z}_{n}\}$ is bounded.
 (b) Put $c:={lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-u\parallel $ for any $u\in Fix(T)$. Then, from ${\sum}_{n=1}^{\mathrm{\infty}}{r}_{n}<\mathrm{\infty}$, for any $\epsilon >0$, we can choose $m\in \mathbb{N}$ such that $\parallel {x}_{n}-u\parallel -c\le \epsilon $ and ${r}_{n}\le \epsilon $ for all $n\ge m$. Also, there exists $a>0$ such that ${\alpha}_{n}<a<1$ for all $n\ge m$ because of ${lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\alpha}_{n}<1$. Since ${y}_{n}=(1/(1-{\alpha}_{n})){x}_{n+1}-({\alpha}_{n}/(1-{\alpha}_{n})){x}_{n}$, we have $\parallel {y}_{n}-u\parallel \ge \frac{1}{1-{\alpha}_{n}}\parallel {x}_{n+1}-u\parallel -\frac{{\alpha}_{n}}{1-{\alpha}_{n}}\parallel {x}_{n}-u\parallel $
 (c) From the boundedness of $\{{x}_{n}\}$, there exists a subsequence $\{{x}_{{n}_{i}}\}$ of $\{{x}_{n}\}$ such that $\{{x}_{{n}_{i}}\}$ converges weakly to a point $v\in H$. From the nonexpansivity of T and (12), it is guaranteed that T is demiclosed (i.e., ${x}_{n}\rightharpoonup u$ and $\parallel {x}_{n}-T{x}_{n}\parallel \to 0$ imply $u\in Fix(T)$). Hence, we have $v\in Fix(T)$. From (9), we get, for any $u\in Fix(T)$ and for any $n\in \mathbb{N}$,$\begin{array}{rl}0\le & (\parallel {x}_{n}-u\parallel +\parallel {x}_{n+1}-u\parallel )(\parallel {x}_{n}-u\parallel -\parallel {x}_{n+1}-u\parallel )\\ & +2{r}_{n}(1-{\alpha}_{n})\langle u-{x}_{n},Au\rangle +K{r}_{n}^{2},\end{array}$
This is a contradiction. Thus, $v=w$. This implies that every subsequence of $\{{x}_{n}\}$ converges weakly to the same point in $VI(Fix(T),A)$. Therefore, $\{{x}_{n}\}$ converges weakly to $v\in VI(Fix(T),A)$. This completes the proof. □
Remark 3.1 The numerical examples in [14, 16, 21] show that Algorithm 3.1 satisfies ${lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-{y}_{n}\parallel /{r}_{n}=0$ when T is firmly nonexpansive and ${r}_{n}:=1/{n}^{\alpha}$ ($1\le \alpha <2$). However, when $\alpha \ge 2$, there are counterexamples that do not satisfy ${lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-{y}_{n}\parallel /{r}_{n}=0$ [14, 16, 21].
Remark 3.2 If the sequence $\{{x}_{n}\}$ satisfies the assumptions in Theorem 3.1, we need not assume that $VI(Fix(T),A)\ne \mathrm{\varnothing}$ or that ${n}_{0}\in \mathbb{N}$ exists such that $VI(Fix(T),A)\subset \mathrm{\Omega}$ in condition (B) (see also [[14], Remark 7(c)]).
instead of ${x}_{n+1}$ in Algorithm 3.1. Since $\{{x}_{n}\}\subset V$ and V is bounded, $\{{x}_{n}\}$ is bounded. The Lipschitz continuity of A means that $\parallel A{x}_{n}-Ax\parallel \le L\parallel {x}_{n}-x\parallel $ ($x\in H$), where $L>0$ is a constant; hence, $\{A{x}_{n}\}$ is bounded. We can prove that Algorithm 3.1 with Equation (13), and with $\{{\alpha}_{n}\}$ and $\{{r}_{n}\}$ satisfying the conditions in Theorem 3.1 (or Theorem 3.2), weakly converges to a point in $VI(Fix(T),A)$ by referring to the proof of Theorem 3.1 (or Theorem 3.2).
We prove the following theorem under condition (B) in Lemma 3.1. The essential parts of the proof are similar to those of Lemma 3.1 and Theorem 3.1, so we give only an outline of the proof below.
If $VI(Fix(T),A)\ne \mathrm{\varnothing}$ and if there exists ${n}_{0}\in \mathbb{N}$ such that $VI(Fix(T),A)\subset {\bigcap}_{n={n}_{0}}^{\mathrm{\infty}}\{x\in Fix(T):\langle {x}_{n}-x,A{x}_{n}\rangle \ge 0\}$, then the sequence $\{{x}_{n}\}$ converges weakly to a point in $VI(Fix(T),A)$.
 (a)
Prove that $\{{x}_{n}\}$ and $\{{z}_{n}\}$ are bounded.
 (b)
Prove that ${lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-T{x}_{n}\parallel =0$ holds.
 (c)
Prove that $\{{x}_{n}\}$ converges weakly to a point in $\mathrm{VI}(Fix(T),A)$.
 (a)
From Lemma 3.1, it follows that the limit of $\{\parallel {x}_{n}-u\parallel \}$ exists for all $u\in VI(Fix(T),A)$, and hence $\{{x}_{n}\}$ and $\{{z}_{n}\}$ are bounded.
 (b) Let $u\in VI(Fix(T),A)$ and put $c:={lim}_{n\to \mathrm{\infty}}\parallel {x}_{n}-u\parallel $. Since ${\sum}_{n=1}^{\mathrm{\infty}}{r}_{n}^{2}<\mathrm{\infty}$, the condition, ${r}_{n}\to 0$, holds. As in the proof of Theorem 3.1(b), for any $\epsilon >0$, there exists $m\in \mathbb{N}$ such that $\parallel {x}_{n}-u\parallel -c\le \epsilon ,\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\parallel {y}_{n}-u\parallel \ge c-\frac{1+a}{1-a}\epsilon $
 (c)
Following the proof of Theorem 3.1(c), there exists a subsequence $\{{x}_{{n}_{i}}\}\subset \{{x}_{n}\}$ such that $\{{x}_{{n}_{i}}\}$ converges weakly to $v\in VI(Fix(T),A)$. Assume that another subsequence $\{{x}_{{n}_{j}}\}$ of $\{{x}_{n}\}$ converges weakly to w. Then we also have $w\in VI(Fix(T),A)$. Since the limit of $\{\parallel {x}_{n}-u\parallel \}$ exists for $u\in VI(Fix(T),A)$, Opial’s theorem [23] guarantees that $v=w$. This implies that every subsequence of $\{{x}_{n}\}$ converges weakly to the same point in $VI(Fix(T),A)$, and hence, $\{{x}_{n}\}$ converges weakly to $v\in VI(Fix(T),A)$. This completes the proof. □
As we mentioned in Section 1, to solve constrained optimization problems whose feasible set is the fixed point set of a nonexpansive mapping T, Algorithm 3.1 must converge in $Fix(T)$ early in its execution. Therefore, it is useful to use a large parameter $\alpha \in (0,1)$ when a strongly nonexpansive mapping is represented as $(1-\alpha )I+\alpha T$. Theorem 3.1 has the following consequences.
Then $\{{x}_{n}\}$ converges weakly to a point in $VI(Fix(T),A)$.
Proof Since every averaged nonexpansive mapping is strongly nonexpansive and $Fix((1-\alpha )I+\alpha T)=Fix(T)$ for $\alpha \in (0,1)$, Theorem 3.1 implies Corollary 3.1. □
By following the proof of Theorem 3.2 and Corollary 3.1, we get the following.
If $VI(Fix(T),A)\ne \mathrm{\varnothing}$ and if there exists ${n}_{0}\in \mathbb{N}$ such that $VI(Fix(T),A)\subset {\bigcap}_{n={n}_{0}}^{\mathrm{\infty}}\{x\in Fix(T):\langle {x}_{n}-x,A{x}_{n}\rangle \ge 0\}$, then $\{{x}_{n}\}$ converges weakly to a point in $VI(Fix(T),A)$.
4 Numerical examples
Let us apply Algorithm 3.1 and the algorithm in [21] to the following variational inequality problem.
where $Q\in {\mathbb{R}}^{1,000\times 1,000}$ is positive semidefinite, ${a}_{i}:=({a}_{i}^{(1)},{a}_{i}^{(2)},\dots ,{a}_{i}^{(1,000)})\in {\mathbb{R}}^{1,000}$, and ${b}_{i}\in {\mathbb{R}}_{+}$ ($i=1,2$). Find $z\in VI({C}_{1}\cap {C}_{2},\mathrm{\nabla}f)$.
We set Q as a diagonal matrix with diagonal components $0,1,\dots ,999$ and choose ${a}_{i}^{(j)}\in (0,100)$ ($i=1,2$, $j=1,2,\dots ,1,000$) to be Mersenne Twister pseudorandom numbers generated by the random-real function of srfi-27 in Gauche.^{a} We also set ${b}_{1}:=5,000$ and ${b}_{2}:=4,000$. The programs were written in C and compiled with gcc,^{b} and double-precision floating-point arithmetic was used for real numbers.
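The experiment can be reproduced in outline in any numerical language. The sketch below is a scaled-down Python analogue under assumed specifics: we take $f(x):=(1/2)\langle x,Qx\rangle $ and ${C}_{i}:=\{x:\langle {a}_{i},x\rangle \le {b}_{i}\}$ as half-spaces (this excerpt does not state these forms, so they are assumptions), use dimension 100 instead of 1,000, build T as the averaged composition of the two half-space projections with $\alpha =9/10$, and run Algorithm 3.1 with a stepsize scaled by an assumed Lipschitz constant of $\mathrm{\nabla}f$.

```python
import numpy as np

# Scaled-down sketch of the experiment, under assumed specifics:
# f(x) = (1/2)<x, Qx>, C_i = {x : <a_i, x> <= b_i}, T = averaged
# composition of the two half-space projections, A = grad f = Qx.
# Dimension 100 (instead of 1,000) and the stepsize scaling 1/(N n)
# (to respect the Lipschitz constant of A) are assumptions.

N = 100
rng = np.random.default_rng(0)
Q = np.diag(np.arange(N, dtype=float))     # diagonal PSD matrix, as in the paper
a1 = rng.uniform(0.0, 100.0, N)            # analogue of the pseudorandom a_i
a2 = rng.uniform(0.0, 100.0, N)
b1, b2 = 5000.0, 4000.0

def proj(x, a, b):                         # metric projection onto {<a, x> <= b}
    v = float(np.dot(a, x)) - b
    return x if v <= 0.0 else x - (v / float(np.dot(a, a))) * a

alpha = 0.9                                # large averaging parameter
T = lambda x: (1.0 - alpha) * x + alpha * proj(proj(x, a1, b1), a2, b2)
A = lambda x: Q @ x                        # grad f for the assumed f

x = rng.normal(size=N)
f0 = 0.5 * float(x @ Q @ x)                # initial objective value
for n in range(1, 2001):
    r_n, alpha_n = 1.0 / (N * n), 0.5
    x = alpha_n * x + (1.0 - alpha_n) * T(x - r_n * A(x))

# Feasibility residual: 0 means x lies in C_1 ∩ C_2.
D = max(0.0, float(np.dot(a1, x)) - b1) + max(0.0, float(np.dot(a2, x)) - b2)
print(D)
```

The residual D plays the role of the quantity ${D}_{n}$ monitored below: it vanishes exactly when the iterate is feasible, and the objective value decreases along the run.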
The convergence of $\{{D}_{n}\}$ to 0 implies that algorithm (15) converges to a point in ${C}_{1}\cap {C}_{2}$.
to check that algorithm (15) is stable.
5 Conclusion
We studied a variational inequality problem for a monotone, hemicontinuous operator over the fixed point set of a strongly nonexpansive mapping in a Hilbert space and devised an iterative algorithm for solving it. Our convergence analyses guarantee that the algorithm weakly converges to a solution under certain assumptions. We gave numerical results to support the convergence analyses on the algorithm. The results showed that the algorithm converges to a solution to a concrete variational inequality problem faster than the previous algorithm.
6 Endnotes
We used the Gauche Scheme shell, version 0.9.3.3 [utf-8,pthreads], x86_64-apple-darwin12.4.1.
We used gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00).
For example, we set a large parameter, i.e., one much larger than $1/2$: $\alpha =9/10$.
Declarations
Acknowledgements
We are sincerely grateful to the Lead Guest Editor, Qamrul Hasan Ansari, of the Special Issue on Variational Analysis and Fixed Point Theory: Dedicated to Professor Wataru Takahashi on the occasion of his 70th birthday, and the two anonymous referees for helping us improve the original manuscript. This work was supported by the Japan Society for the Promotion of Science through a Grant-in-Aid for JSPS Fellows (08J08592) and a Grant-in-Aid for Young Scientists (B) (23760077).
Authors’ Affiliations
References
 Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1980.
 Lions JL, Stampacchia G: Variational inequalities. Commun. Pure Appl. Math. 1967, 20: 493–519. 10.1002/cpa.3160200302
 Rockafellar RT, Wets RJB: Variational Analysis. Springer, Berlin; 1998.
 Stampacchia G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 1964, 258: 4413–4416.
 Takahashi W: Nonlinear Functional Analysis. Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.
 Takahashi W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama; 2009.
 Zeidler E: Nonlinear Functional Analysis and Its Applications. II/B. Springer, New York; 1990.
 Zeidler E: Nonlinear Functional Analysis and Its Applications. III. Springer, New York; 1985.
 Goldstein AA: Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70: 709–710. 10.1090/S0002-9904-1964-11178-2
 Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Stud. Comput. Math. 8. North-Holland, Amsterdam; 2001:473–504.
 Combettes PL: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process. 2003, 51: 1771–1782. 10.1109/TSP.2003.812846
 Slavakis K, Yamada I: Robust wideband beamforming by the hybrid steepest descent method. IEEE Trans. Signal Process. 2007, 55: 4511–4522.
 Iiduka H, Yamada I: An ergodic algorithm for the power-control games for CDMA data networks. J. Math. Model. Algorithms 2009, 8: 1–18. 10.1007/s10852-008-9099-4
 Iiduka H: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math. Program. 2012, 133: 227–242. 10.1007/s10107-010-0427-x
 Iiduka H: Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 2012, 236: 1733–1742. 10.1016/j.cam.2011.10.004
 Iiduka H: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 2012, 22: 862–878. 10.1137/110849456
 Iiduka H: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 2013, 23: 1–26. 10.1137/120866877
 Iiduka H, Yamada I: Computational method for solving a stochastic linear-quadratic control problem given an unsolvable stochastic algebraic Riccati equation. SIAM J. Control Optim. 2012, 50: 2173–2192. 10.1137/110850542
 Iiduka H: Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping. Math. Program. 2014. 10.1007/s10107-013-0741-1
 Iiduka H, Yamada I: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 2009, 19: 1881–1893. 10.1137/070702497
 Iiduka H: A new iterative algorithm for the variational inequality problem over the fixed point set of a firmly nonexpansive mapping. Optimization 2010, 59: 873–885. 10.1080/02331930902884158
 Bruck RE, Reich S: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 1977, 3: 459–470.
 Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597. 10.1090/S0002-9904-1967-11761-0
 Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
 Tan KK, Xu HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178: 301–308. 10.1006/jmaa.1993.1309
 Berinde V: Iterative Approximation of Fixed Points. Lecture Notes in Mathematics. Springer, Berlin; 2007.
 Goebel K, Reich S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York; 1984.
 Browder FE, Petryshyn WV: The solution by iteration of linear functional equations in Banach spaces. Bull. Am. Math. Soc. 1966, 72: 566–570. 10.1090/S0002-9904-1966-11543-4
 Aoyama K, Kimura Y, Takahashi W, Toyoda M: On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007, 8: 471–489.
 Baillon JB, Haddad G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. 10.1007/BF03007664
 Iiduka H: Strong convergence for an iterative method for the triple-hierarchical constrained optimization problem. Nonlinear Anal. 2009, 71: 1292–1297. 10.1016/j.na.2009.01.133
 Combettes PL, Bondon P: Hard-constrained inconsistent signal feasibility problems. IEEE Trans. Signal Process. 1999, 47: 2460–2468. 10.1109/78.782189
 Browder FE: Convergence theorems for sequences of nonlinear operators in Banach spaces. Math. Z. 1967, 100: 201–225. 10.1007/BF01109805
 Reich S, Shoikhet D: Nonlinear Semigroups, Fixed Points, and Geometry of Domains in Banach Spaces. Imperial College Press, London; 2005.
 Iiduka H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. 2011, 148: 580–592. 10.1007/s10957-010-9769-z
 Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. 10.1137/S0036144593251710
Copyright
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.