Krasnoselskii-Mann method for non-self mappings

Abstract

Let H be a Hilbert space and let C be a closed, convex and nonempty subset of H. If \(T:C\to H\) is a non-self and non-expansive mapping, we can define a map \(h:C\to\mathbb{R}\) by \(h(x):=\inf\{\lambda\geq 0:\lambda x+(1-\lambda)Tx\in C\}\). Then, for a fixed \(x_{0}\in C\) and for \(\alpha_{0}:=\max\{1/2, h(x_{0})\}\), we define the Krasnoselskii-Mann algorithm \(x_{n+1}=\alpha _{n}x_{n}+(1-\alpha_{n})Tx_{n}\), where \(\alpha_{n+1}=\max\{\alpha_{n},h(x_{n+1})\}\). We will prove both weak and strong convergence results when C is a strictly convex set and T is an inward mapping.

1 Introduction

Let C be a closed, convex and nonempty subset of a Hilbert space H and let \(T:C\to H\) be a non-expansive mapping such that the fixed point set \(\operatorname{Fix}(T):=\{x\in C:Tx=x\}\) is not empty.

For a real sequence \(\{\alpha_{n}\}\subset(0,1)\), we will consider the iterations

$$ \begin{cases} x_{0}\in C, \\ x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}. \end{cases} $$
(1)
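For orientation, here is a minimal numerical sketch of iteration (1) in the classical self-mapping setting; the concrete choices \(H=\mathbb{R}\), \(C=[0,1]\), \(Tx=\cos x\) (a non-expansive self-mapping of \([0,1]\)) and \(\alpha_{n}\equiv\frac{1}{2}\) are ours, for illustration only.

```python
import math

def km_self(T, x0, alpha=0.5, n_iter=100):
    """Classical Krasnoselskii-Mann iteration (1) for a self-mapping T of C."""
    x = x0
    for _ in range(n_iter):
        x = alpha * x + (1 - alpha) * T(x)
    return x

# T = cos is non-expansive on [0, 1] and maps [0, 1] into itself;
# the iterates approach its unique fixed point (about 0.739).
print(km_self(math.cos, x0=1.0))
```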

If T is a self-mapping, the iterative scheme (1) has been studied in an impressive number of papers over the last decades (see [1] and the references therein) and it is often called the ‘segmenting Mann’ [2–4] or ‘Krasnoselskii-Mann’ (e.g., [5, 6]) iteration.

A general result on algorithm (1) is due to Reich [7] and states that the sequence \(\{x_{n}\}\) weakly converges to a fixed point of the operator T under the following assumptions:

  1. (C1)

    T is a self-mapping, i.e., \(T:C\to C\) and

  2. (C2)

    \(\{\alpha_{n}\}\) is such that \(\sum_{n}\alpha _{n}(1-\alpha_{n})=+\infty\).

In this paper, we are interested in relaxing condition (C1) by allowing T to be non-self, at the price of strengthening the requirements on the sequence \(\{\alpha_{n}\}\) and on the set C. Indeed, we will assume that C is a strictly convex set and that the non-expansive map \(T:C\to H\) is inward.

Historically, the inward condition and its generalizations were widely used to prove convergence results for both implicit [8–11] and explicit (see, e.g., [1, 12–14]) algorithms. However, we point out that the explicit case was only studied in conjunction with processes involving the calculation of a projection or a retraction \(P:H\to C\) at each step.

As an example, in [12], the following algorithm is studied:

$$x_{n+1}=P\bigl(\alpha_{n}f(x_{n})+(1- \alpha_{n})Tx_{n}\bigr), $$

where \(T:C\to H\) satisfies the weakly inward condition, f is a contraction and \(P:H\to C\) is a non-expansive retraction.

We point out that in many real world applications, computing P can be a resource-consuming task and may require an approximation algorithm by itself, even when P is the nearest point projection.

To avoid the use of an auxiliary mapping P, we will introduce, for an inward and non-expansive mapping \(T:C\to H\), a new search strategy for the coefficients \(\{\alpha_{n}\}\), and we will prove that the Krasnoselskii-Mann algorithm

$$x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n} $$

is well defined for this particular choice of the sequence \(\{\alpha _{n}\}\). Also we will prove both weak and strong convergence results for the above algorithm when C is a strictly convex set.

We stress that the main difference between the classical Krasnoselskii-Mann algorithm and ours is that the choice of the coefficient \(\alpha_{n}\) is not made a priori in the latter, but is constructed step by step and determined by the values of the map T and the geometry of the set C.

2 Main result

We will make use of the following.

Definition 1

A map \(T:C\to H\) is said to be inward (or to satisfy the inward condition) if, for any \(x\in C\), it holds

$$ Tx\in I_{C}(x):=\bigl\{ x+c(u-x):c\geq1\mbox{ and }u\in C\bigr\} . $$
(2)

We refer to [15] for a comprehensive survey on the properties of the inward mappings.

Definition 2

A set \(C\subset H\) is said to be strictly convex if it is convex and has the property that \(x,y\in\partial C\) with \(x\neq y\) and \(t\in(0,1)\) imply

$$tx+(1-t)y\in\mathring{C}. $$

In other words, the boundary ∂C does not contain any segment.

Definition 3

A sequence \(\{y_{n}\}\subset C\) is Fejér-monotone with respect to a set \(D\subset C\) if, for any element \(y\in D\),

$$\|y_{n+1}-y\|\leq\|y_{n}-y\| \quad \forall n\in\mathbb{N}. $$

For a closed and convex set C and a map \(T:C\to H\), we define a mapping \(h:C\to\mathbb{R}\) as

$$ h(x):=\inf\bigl\{ \lambda\geq0:\lambda x+(1-\lambda)Tx\in C\bigr\} . $$
(3)

Note that the above quantity is a minimum since C is closed. In the following lemma, we group the properties of the function defined above.

Lemma 1

Let C be a nonempty, closed and convex set, let \(T:C\to H\) be a mapping and define \(h:C\to\mathbb{R}\) as in (3). Then the following properties hold:

  1. (P1)

    for any \(x\in C\), \(h(x)\in[0,1]\) and \(h(x)=0\) if and only if \(Tx\in C\);

  2. (P2)

    for any \(x\in C\) and any \(\alpha\in[h(x),1]\), \(\alpha x+(1-\alpha)Tx\in C\);

  3. (P3)

    if T is an inward mapping, then \(h(x)<1\) for any \(x\in C\);

  4. (P4)

    whenever \(Tx\notin C\), \(h(x)x+(1-h(x))Tx\in\partial C\).

Proof

Properties (P1) and (P2) follow directly from the definition of h and the convexity of C. To prove (P3), observe that (2) implies

$$\frac{1}{c}Tx+\biggl(1-\frac{1}{c}\biggr)x\in C $$

for some \(c\geq1\). As a consequence,

$$h(x)=\inf\bigl\{ \lambda\geq0:\lambda x+(1-\lambda)Tx\in C\bigr\} \leq\biggl(1- \frac{1}{c}\biggr)< 1. $$

In order to verify (P4), we first note that \(h(x)>0\) by property (P1) and that \(h(x)x+(1-h(x))Tx\in C\). Let \(\{\eta_{n}\}\subset(0,h(x))\) be a sequence of real numbers converging to \(h(x)\) and note that, by the definition of h, it holds

$$z_{n}:=\eta_{n}x+(1-\eta_{n})Tx\notin C $$

for any \(n\in\mathbb{N}\). Since \(\eta_{n}\to h(x)\) and

$$\bigl\Vert z_{n}-h(x)x-\bigl(1-h(x)\bigr)Tx\bigr\Vert =\bigl\vert \eta_{n}-h(x)\bigr\vert \|x-Tx\|, $$

it follows that \(z_{n}\to h(x)x+(1-h(x))Tx\). Since each \(z_{n}\) lies outside C while the limit belongs to C, the limit must belong to ∂C. □

Our main result is the following.

Theorem 1

Let C be a convex, closed and nonempty subset of a Hilbert space H and let \(T:C\to H\) be a mapping. Then the algorithm

$$ \begin{cases} x_{0}\in C, \\ \alpha_{0}:=\max\{\frac{1}{2},h(x_{0})\}, \\ x_{n+1}:=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \\ \alpha_{n+1}:=\max\{\alpha_{n},h(x_{n+1})\} \end{cases} $$
(4)

is well defined.

If we further assume that

  1. 1.

    C is strictly convex and

  2. 2.

    T is a non-expansive mapping, which satisfies the inward condition (2) and such that \(\operatorname{Fix}(T)\neq\emptyset\),

then \(\{x_{n}\}\) weakly converges to a point \(p\in \operatorname{Fix}(T)\). Moreover, if \(\sum_{n=0}^{\infty}(1-\alpha_{n})<\infty\), then the convergence is strong.

Proof

To prove that the algorithm is well defined, it is sufficient to note that \(\alpha_{n}\in[h(x_{n}),1]\) for any \(n\in\mathbb{N}\); then, by recalling property (P2) from Lemma 1, it immediately follows that

$$x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n} \in C. $$

Assume now that T satisfies the inward condition. In this case, by property (P3) of the previous lemma, we obtain that the non-decreasing sequence \(\{\alpha_{n}\}\) is contained in \([\frac{1}{2},1)\). Also, since T is non-expansive and has at least one fixed point, it follows by standard arguments that \(\{x_{n}\}\) is Fejér-monotone with respect to \(\operatorname{Fix}(T)\) and, as a consequence, both \(\{x_{n}\}\) and \(\{Tx_{n}\}\) are bounded.
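For the reader's convenience, we recall the standard computation: for any \(p\in\operatorname{Fix}(T)\), the triangle inequality and the non-expansiveness of T give

$$\|x_{n+1}-p\|\leq\alpha_{n}\|x_{n}-p\|+(1-\alpha_{n})\|Tx_{n}-Tp\|\leq\|x_{n}-p\|, $$

so that \(\{\|x_{n}-p\|\}\) is non-increasing.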

Firstly, assume that \(\sum_{n=0}^{\infty}(1-\alpha_{n})=\infty\). Then, since \(\alpha_{n}\geq\frac{1}{2}\), we derive that \(\sum_{n=0}^{\infty}\alpha_{n}(1-\alpha_{n})=\infty\) and from Lemma 2 of [16] we obtain that

$$\|x_{n}-Tx_{n}\|\to0. $$

This fact, together with the Fejér-monotonicity of \(\{x_{n}\}\), proves that the sequence weakly converges to a point of \(\operatorname{Fix}(T)\) (see [17], Proposition 2.1).

Suppose now that

$$ \sum_{n=0}^{\infty}(1-\alpha_{n})< \infty. $$
(5)

Since

$$\|x_{n+1}-x_{n}\|=(1-\alpha_{n}) \|Tx_{n}-x_{n}\|, $$

and since \(\{x_{n}\}\) and \(\{Tx_{n}\}\) are bounded, it readily follows from (5) that

$$\sum_{n=0}^{\infty}\|x_{n+1}-x_{n} \|< \infty, $$

i.e., \(\{x_{n}\}\) is a Cauchy sequence (since \(\|x_{n+m}-x_{n}\|\leq\sum_{k=n}^{n+m-1}\|x_{k+1}-x_{k}\|\to0\) as \(n\to\infty\)) and hence \(x_{n}\to x^{*}\in C\).

Since T satisfies the inward condition, properties (P2) and (P3) of Lemma 1 yield \(h(x^{*})<1\) and, for any \(\mu\in(h(x^{*}),1)\),

$$ \mu x^{*}+(1-\mu)Tx^{*}\in C. $$
(6)

On the other hand, (5) implies that \(\lim_{n\to\infty}\alpha_{n}=1\). Since the sequence \(\{\alpha_{n}\}\) is non-decreasing and \(\alpha_{n}=\max\{\alpha_{n-1}, h(x_{n})\}\), it must increase infinitely many times, and \(\alpha_{n}=h(x_{n})\) at every such step. Hence we can choose a subsequence \(\{x_{n_{k}}\}\) with the property that \(\{h(x_{n_{k}})\}\) is non-decreasing and \(h(x_{n_{k}})\to1\). In particular, for any \(\mu<1\),

$$ \mu x_{n_{k}}+(1-\mu)Tx_{n_{k}}\notin C $$
(7)

eventually holds.

Choose \(\mu_{1},\mu_{2}\in(h(x^{*}),1)\) with \(\mu_{1}<\mu_{2}\) and set \(v_{1}:=\mu_{1}x^{*}+(1-\mu_{1})Tx^{*}\) and \(v_{2}:=\mu_{2}x^{*}+(1-\mu_{2})Tx^{*}\). Then, whenever \(\mu\in[\mu_{1},\mu_{2}]\), by (6) we have that \(v:=\mu x^{*}+(1-\mu)Tx^{*}\in C\). Moreover,

$$\mu x_{n_{k}}+(1-\mu)Tx_{n_{k}}\to v $$

since \(x_{n}\to x^{*}\) and T is continuous. Together with (7), this implies that \(v\in\partial C\) and, since \(\mu\in[\mu_{1},\mu_{2}]\) is arbitrary, that \([v_{1},v_{2}]\subset\partial C\).

By the strict convexity of C, the segment \([v_{1},v_{2}]\subset\partial C\) must reduce to a single point, so that

$$\mu_{1}x^{*}+(1-\mu_{1})Tx^{*}= \mu_{2}x^{*}+(1-\mu_{2})Tx^{*} $$

and, since \(\mu_{1}\neq\mu_{2}\), the equality \(x^{*}=Tx^{*}\) must hold, i.e., \(\{x_{n}\}\) strongly converges to a fixed point of T. □

Remark 1

Following the same line of proof, it is easily seen that the same results hold true if the starting coefficient \(\alpha_{0}=\max\{\frac{1}{2},h(x_{0})\}\) is replaced by \(\alpha_{0}=\max\{b,h(x_{0})\}\), where \(b\in(0,1)\) is a fixed but arbitrary value. In the statement of Theorem 1, the value \(b=\frac{1}{2}\) was chosen to ease the notation.

We also note that, in practice, the value \(h(x_{n})\) can be replaced by \(h_{n}=1-\frac{1}{2^{j_{n}}}\), where \(j_{n}:=\min\{j\in\mathbb{N}:(1-\frac{1}{2^{j}})x_{n}+\frac{1}{2^{j}}Tx_{n}\in C\}\).
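A minimal computational sketch of this dyadic search, assuming only that a membership oracle for C is available (the names dyadic_h and in_C below are illustrative, not part of the paper):

```python
def dyadic_h(x, Tx, in_C, j_max=60):
    """Return h_n = 1 - 1/2**j_n as in Remark 1.

    x and Tx are points of the space (e.g. floats or numpy arrays) and
    in_C(y) is a membership oracle for the closed convex set C.  If T is
    inward, then h(x) < 1 and the search terminates for some finite j.
    """
    for j in range(j_max + 1):
        lam = 1.0 - 0.5 ** j          # candidate coefficient 1 - 1/2**j
        if in_C(lam * x + (1.0 - lam) * Tx):
            return lam
    raise RuntimeError("no admissible dyadic coefficient up to j_max")
```

Since \(h_{n}\geq h(x_{n})\), property (P2) guarantees that the iterate built with any \(\alpha_{n}\geq h_{n}\) still belongs to C.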

Remark 2

As a by-product of the proof, the condition \(\sum_{n}(1-\alpha_{n})<\infty\) provides a localization result for the fixed point \(x^{*}\): in this case, \(x^{*}=v_{1}=v_{2}\) belongs to the boundary ∂C of the set C.

Remark 3

In [18], for a closed and convex set C, the map

$$f(x):=\inf\bigl\{ \lambda\in[0,1]:x\in\lambda C\bigr\} $$

was introduced and used in conjunction with an iterative scheme to approximate a fixed point of minimum norm (see also [19]). Indeed, in the above-mentioned paper, it is proved that the iterative scheme

$$\begin{cases} \lambda_{n}=\max\{f(x_{n}),\lambda_{n-1}\}, \\ y_{n}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \\ x_{n+1}=\alpha_{n}\lambda_{n}x_{n}+(1-\alpha_{n})y_{n} \end{cases} $$

strongly converges under the assumptions that \(\{\alpha_{n}\}\) is a sequence in \((0,1)\) such that \(\lim_{n}\frac{\alpha_{n}}{1-\lambda_{n}}=0\) and that \(\sum_{n}(1-\lambda_{n})\alpha_{n}=\infty\). We point out that these conditions appear to be difficult to check, as they involve the geometry of the set C.

We illustrate the statement of our results with a brief example.

Example 1

Let \(H=l^{2}(\mathbb{R})\) and let \(C:=B_{1}\cap B_{2}\), where \(B_{1}:=\{(t_{i})_{i\in\mathbb{N}}:(t_{1}-49.995)^{2}+\sum_{i=2}^{\infty}t_{i}^{2}\leq(50.005)^{2}\}\) and \(B_{2}:=\{(t_{i})_{i\in\mathbb{N}}:\sum_{i=1}^{\infty}t_{i}^{2}\leq 1\}\). Then C is a nonempty, closed and strictly convex subset of H. Let \(T:C\to H\) be the map defined by \(T(t_{1},t_{2},\ldots,t_{i},\ldots):=(-t_{1},t_{2},\ldots,t_{i},\ldots)\). Then T is a non-expansive inward map with \(\operatorname{Fix}(T)=\{(0,t_{2},\ldots,t_{i},\ldots):\sum_{i=2}^{\infty}t_{i}^{2}\leq1\}\). If we use the algorithm

$$\begin{cases} x_{0}=(t_{i})_{i\in\mathbb{N}}\in C, \\ \alpha_{0}:=\max\{\frac{1}{2},h(x_{0})\}, \\ x_{n+1}:=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \\ \alpha_{n+1}:=\max\{\alpha_{n},h(x_{n+1})\}, \end{cases} $$

then, by the natural symmetry of the problem, we obtain the constant sequence

$$x_{1}=\cdots=x_{n}=(0,t_{2}, \ldots,t_{i},\ldots)\in \operatorname{Fix}(T). $$

If we use the algorithm

$$\begin{cases} x_{0}=(t_{i})_{i\in\mathbb{N}}\in C, \\ \alpha_{0}:=\max\{0.01,h(x_{0})\}, \\ x_{n+1}:=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \\ \alpha_{n+1}:=\max\{\alpha_{n},h(x_{n+1})\}, \end{cases} $$

then \(\{x_{n}\}\) still converges to a point of \(\operatorname{Fix}(T)\), but \(\{x_{n}\}\cap \operatorname{Fix}(T)=\emptyset\) whenever \(t_{1}\neq0\).
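The example can also be checked numerically. The sketch below is ours: it works with starting points supported on the first three coordinates (the orbit of such a point stays in that subspace, since T only changes the first coordinate), replaces the exact value \(h(x)\) by a bisection on the segment between Tx and x using a membership oracle for \(C=B_{1}\cap B_{2}\), and runs algorithm (4) with \(b=\frac{1}{2}\) and \(b=0.01\); all function names are illustrative.

```python
import numpy as np

def in_C(t):
    """Membership oracle for C = B1 ∩ B2 of Example 1 (restricted to R^3)."""
    in_B1 = (t[0] - 49.995) ** 2 + np.sum(t[1:] ** 2) <= 50.005 ** 2
    in_B2 = np.sum(t ** 2) <= 1.0
    return in_B1 and in_B2

def T(t):
    """T(t1, t2, ...) := (-t1, t2, ...)."""
    s = t.copy()
    s[0] = -s[0]
    return s

def h_approx(x, Tx, tol=1e-12):
    """Approximate h(x) from above by bisection.

    By (P1)-(P2), {lam in [0, 1] : lam*x + (1-lam)*Tx in C} is the interval
    [h(x), 1]; bisection keeps the upper endpoint feasible, so the returned
    value is >= h(x) and the next iterate stays in C.
    """
    if in_C(Tx):
        return 0.0
    lo, hi = 0.0, 1.0                     # lo infeasible, hi feasible (x is in C)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if in_C(mid * x + (1.0 - mid) * Tx):
            hi = mid
        else:
            lo = mid
    return hi

def km_nonself(x0, b, n_iter=50):
    """Algorithm (4) with starting coefficient alpha_0 = max{b, h(x0)}."""
    x = x0.copy()
    alpha = max(b, h_approx(x, T(x)))
    for _ in range(n_iter):
        x = alpha * x + (1.0 - alpha) * T(x)
        alpha = max(alpha, h_approx(x, T(x)))
    return x

x0 = np.array([0.5, 0.6, 0.3])            # a point of C
print(km_nonself(x0, b=0.5))              # first coordinate is exactly 0 after one step
print(km_nonself(x0, b=0.01))             # first coordinate decays but never vanishes
```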

We conclude the paper with a few questions that, to the best of our knowledge, are still open.

Question 1

It has been proved that the Krasnoselskii-Mann algorithm converges for more general classes of mappings (see, e.g., [20] and [21]). Keeping the same assumptions on the set C and the inward condition on the involved map, it appears natural to ask for which classes of mappings the conclusion of Theorem 1 still holds.

Question 2

Under which assumptions can algorithm (4) be adapted to produce a sequence converging to a common fixed point of a family of mappings? In other words, does the algorithm

$$\begin{cases} x_{0}\in C, \\ \alpha_{0}:=\max\{\frac{1}{2},h_{n}(x_{0})\}, \\ x_{n+1}:=\alpha_{n}x_{n}+(1-\alpha_{n})T_{n}x_{n}, \\ \alpha_{n+1}:=\max\{\alpha_{n},h_{n+1}(x_{n+1})\} \end{cases} $$

converge to a common fixed point of the family \(\{T_{n}\}\), where

$$h_{n}(x):=\inf\bigl\{ \lambda\geq0:\lambda x+(1-\lambda)T_{n}x \in C\bigr\} $$

under suitable hypotheses on \(\{T_{n}\}\)?

We refer to [22] and [23] for two examples regarding the classical Krasnoselskii-Mann algorithm.

Question 3

In the classical literature, it has been proved that the inward condition can often be replaced by a weaker one. For example, a mapping \(T:C\to H\) is said to be weakly inward (or to satisfy the weakly inward condition) if

$$Tx\in\overline{I_{C}(x)}\quad \forall x\in C. $$

Does Theorem 1 hold even for weakly inward mappings?

On the other hand, we observe that the strict convexity of the set C appears to be an unusual assumption for results regarding the convergence of Krasnoselskii-Mann iterations. We do not know whether our result still holds for a merely convex and closed set C, even at the price of strengthening the requirements on the map T.

References

  1. Chidume, C: Geometric Properties of Banach Spaces and Nonlinear Iterations. Lecture Notes in Mathematics, vol. 1965. Springer, Berlin (2009)

  2. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4(3), 506-510 (1953)

  3. Groetsch, CW: A note on segmenting Mann iterates. J. Math. Anal. Appl. 40(2), 369-372 (1972)

  4. Hicks, TL, Kubicek, JD: On the Mann iteration process in a Hilbert space. J. Math. Anal. Appl. 59(3), 498-504 (1977)

  5. Edelstein, M, O’Brien, RC: Nonexpansive mappings, asymptotic regularity and successive approximations. J. Lond. Math. Soc. 2(3), 547-554 (1978)

  6. Hillam, BP: A generalization of Krasnoselski’s theorem on the real line. Math. Mag. 48(3), 167-168 (1975)

  7. Reich, S: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 67(2), 274-276 (1979)

  8. Xu, H-K, Yin, X-M: Strong convergence theorems for nonexpansive nonself-mappings. Nonlinear Anal. 24(2), 223-228 (1995)

  9. Xu, H-K: Approximating curves of nonexpansive nonself-mappings in Banach spaces. C. R. Acad. Sci. Paris Sér. I Math. 325(2), 151-156 (1997)

  10. Marino, G, Trombetta, G: On approximating fixed points for nonexpansive mappings. Indian J. Math. 34, 91-98 (1992)

  11. Takahashi, W, Kim, G-E: Strong convergence of approximants to fixed points of nonexpansive nonself-mappings in Banach spaces. Nonlinear Anal. 32(3), 447-454 (1998)

  12. Song, Y, Chen, R: Viscosity approximation methods for nonexpansive nonself-mappings. J. Math. Anal. Appl. 321(1), 316-326 (2006)

  13. Song, YS, Cho, YJ: Averaged iterates for non-expansive nonself mappings in Banach spaces. J. Comput. Anal. Appl. 11, 451-460 (2009)

  14. Zhou, H, Wang, P: Viscosity approximation methods for nonexpansive nonself-mappings without boundary conditions. Fixed Point Theory Appl. 2014, 61 (2014)

  15. Kirk, W, Sims, B: Handbook of Metric Fixed Point Theory. Springer, Berlin (2001)

  16. Ishikawa, S: Fixed points and iteration of a nonexpansive mapping in a Banach space. Proc. Am. Math. Soc. 59(1), 65-71 (1976)

  17. Bauschke, HH, Combettes, PL: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248-264 (2001)

  18. He, S, Zhu, W: A modified Mann iteration by boundary point method for finding minimum-norm fixed point of nonexpansive mappings. Abstr. Appl. Anal. 2013, Article ID 768595 (2013)

  19. He, S, Yang, C: Boundary point algorithms for minimum norm fixed points of nonexpansive mappings. Fixed Point Theory Appl. 2014, 56 (2014)

  20. Schu, J: Iterative construction of fixed points of asymptotically nonexpansive mappings. J. Math. Anal. Appl. 158(2), 407-413 (1991)

  21. Marino, G, Xu, H-K: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329(1), 336-346 (2007)

  22. Bauschke, HH: The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space. J. Math. Anal. Appl. 202(1), 150-159 (1996)

  23. Suzuki, T: Strong convergence of Krasnoselskii and Mann’s type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 305(1), 227-239 (2005)

Acknowledgements

This project was funded by Ministero dell’Istruzione, dell’Università e della Ricerca (MIUR).

Author information

Correspondence to Giuseppe Marino.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing the article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

About this article

Cite this article

Colao, V., Marino, G.: Krasnoselskii-Mann method for non-self mappings. Fixed Point Theory Appl. 2015, 39 (2015). https://doi.org/10.1186/s13663-015-0287-4