Open Access

Boundary point algorithms for minimum norm fixed points of nonexpansive mappings

Fixed Point Theory and Applications 2014, 2014:56

https://doi.org/10.1186/1687-1812-2014-56

Received: 11 October 2013

Accepted: 19 February 2014

Published: 4 March 2014

Abstract

Let H be a real Hilbert space and C be a closed convex subset of H. Let $T : C \to C$ be a nonexpansive mapping with a nonempty set of fixed points $\operatorname{Fix}(T)$. If $0 \notin C$, then Halpern's iteration process $x_{n+1} = (1 - t_n) T x_n$ cannot be used for finding a minimum norm fixed point of T, since $x_n$ may not belong to C. To overcome this weakness, Wang and Xu introduced the iteration process $x_{n+1} = P_C\bigl((1 - t_n) T x_n\bigr)$ for finding the minimum norm fixed point of T, where the sequence $\{t_n\} \subset (0,1)$, $x_0 \in C$ is arbitrary and $P_C$ is the metric projection from H onto C. However, it is difficult to implement this iteration process in actual computing programs because the specific expression of $P_C$ cannot be obtained, in general. In this paper, three new algorithms (called boundary point algorithms because they use certain boundary points of C at each iterative step) for finding the minimum norm fixed point of T are proposed and strong convergence theorems are proved under some assumptions. Since the algorithms in this paper do not involve $P_C$, they are easy to implement in actual computing programs.

MSC: 47H09, 47H10, 65K10.

Keywords

minimum norm fixed point; nonexpansive mapping; metric projection; boundary point algorithm; Hilbert space

1 Introduction and preliminaries

Let H be a real Hilbert space with the inner product $\langle\cdot,\cdot\rangle$ and the norm $\|\cdot\|$, and let C be a nonempty closed convex subset of H. Recall that a mapping $T : C \to C$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$. We use $\operatorname{Fix}(T)$ to denote the set of fixed points of T, i.e., $\operatorname{Fix}(T) \triangleq \{x \in C : Tx = x\}$. Throughout this article, $\operatorname{Fix}(T)$ is always assumed to be nonempty.

For every nonempty closed convex subset K of H, the metric (or nearest point) projection, denoted by $P_K$, from H onto K can be defined; that is, for each $x \in H$, $P_K x$ is the unique point in K such that $\|x - P_K x\| = \inf\{\|x - z\| : z \in K\}$. It is well known (e.g., see [1]) that $P_K$ is nonexpansive and that the following characteristic inequality holds.

Lemma 1.1 Let K be a closed convex subset of a real Hilbert space H, and let $x \in H$ and $z \in K$ be given. Then $z = P_K x$ if and only if
$$\langle x - z, y - z\rangle \le 0, \quad \forall y \in K.$$
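
To make the characterization in Lemma 1.1 concrete, here is a small numerical illustration in Python (ours, not part of the original text): for K a closed ball, $P_K$ has the explicit form $P_K x = c + r(x - c)/\|x - c\|$ whenever x lies outside the ball, and the characteristic inequality can then be checked on sampled points of K.

    import numpy as np

    def project_ball(x, c, r):
        # Metric projection onto the closed ball K = {z : ||z - c|| <= r}.
        d = np.linalg.norm(x - c)
        return x if d <= r else c + r * (x - c) / d

    rng = np.random.default_rng(0)
    c, r = np.array([1.0, -2.0, 0.5]), 1.5
    x = rng.normal(size=3) * 5            # a point of H = R^3, typically outside K
    z = project_ball(x, c, r)

    # Lemma 1.1: z = P_K x  iff  <x - z, y - z> <= 0 for every y in K.
    for _ in range(1000):
        y = project_ball(c + rng.normal(size=3), c, r)   # a sample point of K
        assert np.dot(x - z, y - z) <= 1e-10
    print("characteristic inequality verified on 1000 sample points of K")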

Since $\operatorname{Fix}(T)$ is a closed convex subset of H, the metric projection $P_{\operatorname{Fix}(T)}$ is well defined, and thus there exists a unique element, denoted by $x^{*}$, in $\operatorname{Fix}(T)$ such that $\|x^{*}\| = \inf_{x \in \operatorname{Fix}(T)}\|x\|$, that is, $x^{*} = P_{\operatorname{Fix}(T)} 0$. $x^{*}$ is called the minimum norm fixed point of T. Because the minimum norm fixed point of a nonexpansive mapping is closely related to convex optimization problems, it has attracted considerable attention.

An extensive literature on iteration methods for fixed point problems of nonexpansive mappings has been published (for example, see [1-17]). Many iteration processes are often used to approximate a fixed point of a nonexpansive mapping in a Hilbert space or a Banach space. One of them is now known as Halpern's iteration process [2] and is defined as follows: take an initial guess $x_0 \in C$ arbitrarily and define $\{x_n\}$ recursively by
$$x_{n+1} = t_n u + (1 - t_n) T x_n, \quad n = 0, 1, 2, \ldots,$$
(1.1)

where $\{t_n\}$ is a sequence in the interval $[0, 1]$ and u is some given element in C. For Halpern's iteration process, a classical result is as follows.

Theorem 1.2 ([13, 14])

If $\{t_n\}$ satisfies the conditions:

(i) $t_n \to 0$ (as $n \to \infty$);

(ii) $\sum_{n=1}^{\infty} t_n = \infty$;

(iii) $\lim_{n \to \infty} t_{n+1}/t_n = 1$ or $\sum_{n=1}^{\infty} |t_{n+1} - t_n| < \infty$;

then the sequence $\{x_n\}$ generated by (1.1) converges strongly to a fixed point $x^{*}$ of T such that $x^{*} = P_{\operatorname{Fix}(T)} u$, that is,
$$\|u - x^{*}\| = \inf_{x \in \operatorname{Fix}(T)}\|u - x\|.$$
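
To see the scheme (1.1) and the conditions of Theorem 1.2 at work, the following Python sketch (our illustration, not taken from the paper) applies Halpern's iteration to a toy nonexpansive mapping, a rotation of the plane about the point (1, 1); its only fixed point is the center, so with $t_n = 1/(n+1)$ the iterates should approach $(1, 1) = P_{\operatorname{Fix}(T)} u$ for any anchor u.

    import numpy as np

    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    center = np.array([1.0, 1.0])

    def T(x):
        # Rotation about `center`: an isometry, hence nonexpansive; Fix(T) = {center}.
        return center + R @ (x - center)

    u = np.array([3.0, 0.0])   # anchor point u
    x = np.array([2.0, 2.0])   # initial guess x_0
    for n in range(5000):
        t = 1.0 / (n + 1)      # t_n -> 0, sum t_n = infinity, sum |t_{n+1} - t_n| < infinity
        x = t * u + (1 - t) * T(x)

    print(x)                   # close to the unique fixed point (1, 1)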
Now we consider how to get the minimum norm fixed point of T. In the case where $0 \in C$, taking $u = 0$ in (1.1), we assert by using Theorem 1.2 that $\{x_n\}$ generated by (1.1) converges strongly to $x^{*}$ under conditions (i)-(iii) above. But in the case where $0 \notin C$, the iteration process $x_{n+1} = (1 - t_n) T x_n$ becomes invalid because $x_n$ may not belong to C. In order to overcome this weakness, Wang and Xu [15] introduced the iteration process
$$x_{n+1} = P_C\bigl((1 - t_n) T x_n\bigr), \quad n = 1, 2, \ldots.$$
(1.2)

They proved that if $\{t_n\}$ satisfies the same conditions as in Theorem 1.2, then the sequence $\{x_n\}$ generated by (1.2) converges strongly to $x^{*}$.

However, it is difficult to implement the iteration process (1.2) in actual computing programs because the specific expression of $P_C$ cannot be obtained, in general.

The purpose of this paper is to propose three new algorithms for finding the minimum norm fixed point of T. Strong convergence theorems are proved under some assumptions. The main advantage of the algorithms in this paper is that they do not involve the metric projection $P_C$, and thus they are easy to implement in actual computing programs. Because the key idea of our algorithms is to replace the fixed element u in (1.1) by a certain sequence $\{u_n\}$ in the boundary of C, they are called boundary point algorithms.

We will use the following notations:

1. $\rightharpoonup$ for weak convergence and $\to$ for strong convergence.

2. $\omega_w(x_n) = \{x : \exists\, \{x_{n_k}\} \subset \{x_n\} \text{ such that } x_{n_k} \rightharpoonup x\}$ denotes the weak ω-limit set of $\{x_n\}$.

3. $A \triangleq B$ means that B is the definition of A.


We need some facts and tools in a real Hilbert space H which are listed as lemmas below.

Lemma 1.3 ([18])

Let C be a closed convex subset of a real Hilbert space H, and let $T : C \to C$ be a nonexpansive mapping such that $\operatorname{Fix}(T) \neq \emptyset$. If a sequence $\{x_n\}$ in C is such that $x_n \rightharpoonup z$ and $x_n - T x_n \to 0$, then $z = Tz$.

Lemma 1.4 There holds the following identity in a real Hilbert space H:
$$\|u - v\|^{2} = \|u\|^{2} - \|v\|^{2} - 2\langle v, u - v\rangle, \quad \forall u, v \in H.$$
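
The identity follows by direct expansion of the inner product; for completeness (our one-line verification):
$$\|u - v\|^{2} = \langle u - v, u - v\rangle = \|u\|^{2} - 2\langle u, v\rangle + \|v\|^{2} = \|u\|^{2} - \|v\|^{2} - 2\bigl(\langle u, v\rangle - \|v\|^{2}\bigr) = \|u\|^{2} - \|v\|^{2} - 2\langle v, u - v\rangle.$$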

Lemma 1.5 ([12, 19])

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers satisfying the property
$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n + \sigma_n, \quad n = 0, 1, 2, \ldots.$$
If $\{\gamma_n\}_{n=1}^{\infty} \subset (0, 1)$, $\{\delta_n\}_{n=1}^{\infty}$ and $\{\sigma_n\}_{n=1}^{\infty}$ satisfy the conditions:

(i) $\sum_{n=1}^{\infty} \gamma_n = \infty$,

(ii) $\limsup_{n \to \infty} \delta_n \le 0$,

(iii) $\sum_{n=1}^{\infty} |\sigma_n| < \infty$,

then $\lim_{n \to \infty} a_n = 0$.

2 Main results

In this section, C is always assumed to be a nonempty closed convex subset of H such that $0 \notin C$. We use $\partial C$ to denote the boundary of C. In order to give our main results, we first introduce a function $h : C \to (0, 1]$ by the definition
$$h(x) = \inf\{\lambda \in (0, 1] : \lambda x \in C\}, \quad x \in C.$$

It is easy to see that $h(x) x \in \partial C$ and $h(x) > 0$ hold for each $x \in C$ due to the assumption $0 \notin C$.

Since our iteration processes will involve the function $h(x)$, it is necessary to explain how to calculate $h(x)$ for any given $x \in C$ in actual computing programs. In order to get the value $h(x)$ for a given $x \in C$, we often need to solve an algebraic equation; but solving an algebraic equation is, in general, easier than calculating the metric projection $P_C$. To illustrate this viewpoint, let us consider the following simple example.

Example 1 Let H be a real Hilbert space. Define a convex function $\varphi : H \to \mathbb{R}^{1}$ by
$$\varphi(x) = \|x - x_0\|^{2} + \langle x, u\rangle, \quad x \in H,$$
where $x_0$ and u are two given points in H such that $\langle x_0, u\rangle < 0$. Setting $C = \{x \in H : \varphi(x) \le 0\}$, it is easy to show that C is a nonempty closed convex subset of H such that $0 \notin C$ (note that $\varphi(x_0) = \langle x_0, u\rangle < 0$ and $\varphi(0) = \|x_0\|^{2} > 0$). For a given $x \in C$, we have $\varphi(x) \le 0$. In order to get $h(x)$, let $\varphi(\lambda x) = 0$, where $\lambda \in (0, 1]$ is an unknown number. Thus we obtain the algebraic equation
$$\|x\|^{2}\lambda^{2} + \bigl(\langle x, u\rangle - 2\langle x, x_0\rangle\bigr)\lambda + \|x_0\|^{2} = 0.$$
Consequently, we get
$$\lambda = \frac{2\langle x, x_0\rangle - \langle x, u\rangle \pm \sqrt{\bigl(\langle x, u\rangle - 2\langle x, x_0\rangle\bigr)^{2} - 4\|x\|^{2}\|x_0\|^{2}}}{2\|x\|^{2}}.$$
By the definition of h, we have
$$h(x) = \frac{2\langle x, x_0\rangle - \langle x, u\rangle - \sqrt{\bigl(\langle x, u\rangle - 2\langle x, x_0\rangle\bigr)^{2} - 4\|x\|^{2}\|x_0\|^{2}}}{2\|x\|^{2}}.$$
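
For concreteness, the following Python sketch (ours; the particular $x_0$, u and x are arbitrary choices satisfying $\langle x_0, u\rangle < 0$ and $\varphi(x) \le 0$) evaluates $h(x)$ for this set C by the root formula above and checks that $h(x)x$ lies on the boundary of C.

    import numpy as np

    x0 = np.array([1.0, 0.0])
    u  = np.array([-2.0, 0.5])            # <x0, u> = -2 < 0, as required
    phi = lambda x: np.dot(x - x0, x - x0) + np.dot(x, u)

    def h(x):
        # Smaller root of ||x||^2 t^2 + (<x,u> - 2<x,x0>) t + ||x0||^2 = 0.
        a = np.dot(x, x)
        b = np.dot(x, u) - 2.0 * np.dot(x, x0)
        c = np.dot(x0, x0)
        return (-b - np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

    x = np.array([1.5, 0.2])              # phi(x) <= 0, so x lies in C
    lam = h(x)
    print(lam, phi(lam * x))              # phi(h(x) x) is (numerically) zero: h(x)x is a boundary point of C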
Next we give our first iteration process for finding the minimum norm fixed point of T: take $u_0 \in C$ arbitrarily and define $\{x_n\}$ recursively by
$$\begin{cases} x_n = P_{\operatorname{Fix}(T)} u_n, \\ u_n = \lambda_n x_{n-1}, \end{cases}$$
(2.1)

where $\lambda_n = h(x_{n-1})$ ($n \ge 1$).

Remark 1 How can the iteration process (2.1) be implemented? In actual computing programs, we can use the standard Halpern iteration process to get $x_n$ from $u_n$ for each $n \ge 0$. Indeed, taking $x_n^{(0)} = u_n$ and generating $\{x_n^{(m)}\}$ inductively by
$$x_n^{(m+1)} = t_m u_n + (1 - t_m) T x_n^{(m)}, \quad m \ge 0,$$

then, using Theorem 1.2, $x_n^{(m)} \to x_n \triangleq P_{\operatorname{Fix}(T)} u_n$ as $m \to \infty$. Thus we can take $x_n = x_n^{(M_n)}$ approximately for a sufficiently large integer $M_n$ in actual computing programs.
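
A minimal Python sketch of this nested procedure (ours, with all specifics assumed: a user-supplied nonexpansive mapping T, a boundary function h such as the one in Example 1, an initial point u0 in C, and iteration counts N and M fixed in advance rather than chosen adaptively):

    def halpern(T, u, x, M):
        # Inner loop of Remark 1: x^(m+1) = t_m u + (1 - t_m) T x^(m) approximates P_{Fix(T)} u for large M.
        for m in range(M):
            t = 1.0 / (m + 1)
            x = t * u + (1 - t) * T(x)
        return x

    def boundary_point_algorithm_2_1(T, h, u0, N=50, M=2000):
        # Iteration (2.1): u_n = h(x_{n-1}) x_{n-1} is a boundary point of C,
        # and x_n = P_{Fix(T)} u_n is approximated by the inner Halpern loop.
        u = u0
        x = halpern(T, u, u, M)        # x_0 ~ P_{Fix(T)} u_0
        for _ in range(N):
            u = h(x) * x               # boundary point u_n
            x = halpern(T, u, u, M)    # x_n ~ P_{Fix(T)} u_n
        return x                       # expected to approximate the minimum norm fixed point of T

Note that no metric projection onto C is ever computed; only T and h are evaluated.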

Geometric intuition suggests that $x_n \to P_{\operatorname{Fix}(T)} 0$ as $n \to \infty$ under certain assumptions. As a matter of fact, this is true.

Theorem 2.1 If $\{\lambda_n\}$ satisfies $\sum_{n=1}^{\infty}(1 - \lambda_n) = \infty$, then $\{x_n\}$ generated by (2.1) converges strongly to $x^{*} = P_{\operatorname{Fix}(T)} 0$.

Proof Noticing the fact that $x^{*} = P_{\operatorname{Fix}(T)} 0 = P_{\operatorname{Fix}(T)}(\lambda x^{*})$ holds for all $\lambda \in [0, 1]$, we have from (2.1) that
$$\|x_n - x^{*}\| = \|P_{\operatorname{Fix}(T)} u_n - x^{*}\| = \|P_{\operatorname{Fix}(T)}(\lambda_n x_{n-1}) - P_{\operatorname{Fix}(T)}(\lambda_n x^{*})\| \le \lambda_n\|x_{n-1} - x^{*}\|,$$
and consequently,
$$\|x_n - x^{*}\| \le \lambda_n\lambda_{n-1}\cdots\lambda_2\lambda_1\|x_0 - x^{*}\|.$$
(2.2)

Since $\lambda_n \in (0, 1]$ and $\sum_{n=1}^{\infty}(1 - \lambda_n) = \infty$ together imply $\prod_{k=1}^{n}\lambda_k \to 0$ as $n \to \infty$, (2.2) leads to the conclusion. □

Remark 2 Is the condition $\sum_{n=1}^{\infty}(1 - \lambda_n) = \infty$ reasonable? In other words, can we find an example which satisfies this condition? The answer is yes. The following result implies that this condition is not harsh.

Corollary 1 If $d(\operatorname{Fix}(T), \partial C) \triangleq \inf\{\|x - y\| : x \in \operatorname{Fix}(T), y \in \partial C\} > 0$, then $\{x_n\}$ generated by (2.1) converges strongly to $x^{*} = P_{\operatorname{Fix}(T)} 0$.

Proof Obviously, it suffices to verify that if $d(\operatorname{Fix}(T), \partial C) > 0$, then $\sum_{n=1}^{\infty}(1 - \lambda_n) = \infty$. In fact, setting $d \triangleq d(\operatorname{Fix}(T), \partial C) > 0$, we have from (2.1) and (2.2) that
$$\lambda_n = \frac{\|u_n\|}{\|x_{n-1}\|} = \frac{\|x_{n-1}\| - \|x_{n-1} - u_n\|}{\|x_{n-1}\|} \le 1 - \frac{d}{\|x_0\| + \|x_0 - x^{*}\|},$$
hence
$$1 - \lambda_n \ge \frac{d}{\|x_0\| + \|x_0 - x^{*}\|}.$$

This implies that $\sum_{n=1}^{\infty}(1 - \lambda_n) = \infty$ holds. □

Our second iteration process for finding the minimum norm fixed point of T is defined by
$$x_n = t_n\lambda_n x_{n-1} + (1 - t_n) T x_n, \quad n \ge 1,$$
(2.3)

where $\{t_n\} \subset (0, 1)$, $\lambda_n = h(x_{n-1})$ ($n \ge 1$) and $x_0$ is taken in C arbitrarily.

Remark 3 Equation (2.3) is an implicit iteration process. A natural question is how to get $x_n$ from $x_{n-1}$. Indeed, suppose that we have obtained $x_{n-1}$ and define the mapping $T_n : C \to C$ by $T_n : x \mapsto t_n\lambda_n x_{n-1} + (1 - t_n) T x$ ($x \in C$); then $T_n$ is $(1 - t_n)$-contractive and $x_n$ is just its unique fixed point. So we can use Picard's iteration process
$$x_n^{(m+1)} = t_n\lambda_n x_{n-1} + (1 - t_n) T x_n^{(m)}, \quad m \ge 0,$$

to calculate $x_n$ approximately, since $x_n^{(m)} \to x_n$ as $m \to \infty$, where $x_n^{(0)}$ can be taken in C arbitrarily, for example, $x_n^{(0)} = x_{n-1}$.
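
In the same spirit, here is a minimal Python sketch of the implicit scheme (2.3) (ours; T, h and the initial point x0 are assumed to be supplied by the user, $t_n = 1/(n+1)^{2}$ is one summable choice, and the number M of Picard steps for the contraction $T_n$ is fixed rather than driven by an error estimate):

    def boundary_point_algorithm_2_3(T, h, x0, N=50, M=500):
        # Iteration (2.3): x_n solves x = t_n * lam_n * x_{n-1} + (1 - t_n) T x,
        # obtained here by Picard iteration on the (1 - t_n)-contraction T_n of Remark 3.
        x_prev = x0
        for n in range(1, N + 1):
            t = 1.0 / (n + 1) ** 2     # sum of t_n is finite, as required in Theorem 2.2 below
            lam = h(x_prev)            # lambda_n = h(x_{n-1}), so lam * x_prev lies on the boundary of C
            x = x_prev                 # Picard starting point x_n^(0) = x_{n-1}
            for _ in range(M):
                x = t * lam * x_prev + (1 - t) * T(x)
            x_prev = x
        return x_prev                  # expected to approximate the minimum norm fixed point of T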

Theorem 2.2 Assume that $\sum_{n=1}^{\infty}(1 - \lambda_n) = \infty$ and $\sum_{n=1}^{\infty} t_n < \infty$. Then $\{x_n\}$ generated by (2.3) converges strongly to $x^{*} = P_{\operatorname{Fix}(T)} 0$.

Proof We first show that $\{x_n\}$ is bounded. Indeed, take $p \in \operatorname{Fix}(T)$ to derive that
$$\|x_n - p\| = \bigl\|t_n\lambda_n(x_{n-1} - p) + (1 - t_n)(T x_n - p) - t_n(1 - \lambda_n)p\bigr\| \le t_n\lambda_n\|x_{n-1} - p\| + (1 - t_n)\|x_n - p\| + t_n(1 - \lambda_n)\|p\|.$$
It follows that
$$\|x_n - p\| \le \lambda_n\|x_{n-1} - p\| + (1 - \lambda_n)\|p\|.$$
By induction,
$$\|x_n - p\| \le \max\{\|x_1 - p\|, \|p\|\}$$
(2.4)

and $\{x_n\}$ is bounded, and so is $\{T x_n\}$. This together with (2.3) implies that $x_n - T x_n \to 0$ ($n \to \infty$). Thus it follows from Lemma 1.3 that $\omega_w(x_n) \subset \operatorname{Fix}(T)$.

Next we show that
$$\limsup_{n \to \infty}\langle -x^{*}, x_n - x^{*}\rangle \le 0.$$
(2.5)
Indeed, take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
$$\limsup_{n \to \infty}\langle -x^{*}, x_n - x^{*}\rangle = \lim_{k \to \infty}\langle -x^{*}, x_{n_k} - x^{*}\rangle;$$
without loss of generality, we may assume that $x_{n_k} \rightharpoonup \bar{x}$. Noticing $x^{*} = P_{\operatorname{Fix}(T)} 0$, we obtain from $\bar{x} \in \operatorname{Fix}(T)$ and Lemma 1.1 that
$$\limsup_{n \to \infty}\langle -x^{*}, x_n - x^{*}\rangle = \langle -x^{*}, \bar{x} - x^{*}\rangle \le 0.$$
Finally, we show that $\|x_n - x^{*}\| \to 0$ ($n \to \infty$). As a matter of fact, we have by using Lemma 1.4 that
$$\begin{aligned} \|x_n - x^{*}\|^{2} &= \bigl\|t_n\lambda_n(x_{n-1} - x^{*}) + (1 - t_n)(T x_n - x^{*}) - t_n(1 - \lambda_n)x^{*}\bigr\|^{2} \\ &\le \bigl\|t_n\lambda_n(x_{n-1} - x^{*}) + (1 - t_n)(T x_n - x^{*})\bigr\|^{2} + 2 t_n(1 - \lambda_n)\langle -x^{*}, x_n - x^{*}\rangle \\ &\le t_n^{2}\lambda_n^{2}\|x_{n-1} - x^{*}\|^{2} + (1 - t_n)^{2}\|x_n - x^{*}\|^{2} + 2 t_n\lambda_n(1 - t_n)\|x_{n-1} - x^{*}\|\,\|x_n - x^{*}\| + 2 t_n(1 - \lambda_n)\langle -x^{*}, x_n - x^{*}\rangle. \end{aligned}$$
Hence,
$$\begin{aligned} (2 - t_n)\|x_n - x^{*}\|^{2} &\le t_n\lambda_n^{2}\|x_{n-1} - x^{*}\|^{2} + 2\lambda_n(1 - t_n)\|x_{n-1} - x^{*}\|\,\|x_n - x^{*}\| + 2(1 - \lambda_n)\langle -x^{*}, x_n - x^{*}\rangle \\ &\le t_n\lambda_n^{2}\|x_{n-1} - x^{*}\|^{2} + \lambda_n^{2}\|x_{n-1} - x^{*}\|^{2} + (1 - t_n)^{2}\|x_n - x^{*}\|^{2} + 2(1 - \lambda_n)\langle -x^{*}, x_n - x^{*}\rangle. \end{aligned}$$
Consequently,
$$\|x_n - x^{*}\|^{2} \le \bigl[1 - (1 - \lambda_n)\bigr]\|x_{n-1} - x^{*}\|^{2} + 2(1 - \lambda_n)\langle -x^{*}, x_n - x^{*}\rangle + t_n\|x_{n-1} - x^{*}\|^{2}.$$

Using Lemma 1.5, we conclude from (2.5) and the conditions $\sum_{n=1}^{\infty}(1 - \lambda_n) = \infty$ and $\sum_{n=1}^{\infty} t_n < \infty$ that $x_n \to x^{*}$. □

By an argument similar to the one above, we easily get the following result.

Corollary 2 If $d(R(T), \partial C) \triangleq \inf\{\|x - y\| : x \in R(T), y \in \partial C\} > 0$ and $\sum_{n=1}^{\infty} t_n < \infty$, then $\{x_n\}$ generated by (2.3) converges strongly to $x^{*} = P_{\operatorname{Fix}(T)} 0$, where $R(T)$ is the range of T.

Proof It suffices to verify that $d(R(T), \partial C) > 0$ implies $\sum_{n=1}^{\infty}(1 - \lambda_n) = \infty$. Indeed, writing $u_n \triangleq \lambda_n x_{n-1} \in \partial C$, we have
$$\lambda_n = \frac{\|u_n\|}{\|x_{n-1}\|} = \frac{\|x_{n-1}\| - \|x_{n-1} - u_n\|}{\|x_{n-1}\|} = 1 - \frac{\|x_{n-1} - T x_{n-1} + T x_{n-1} - u_n\|}{\|x_{n-1}\|}.$$
Setting $d \triangleq d(R(T), \partial C) > 0$, we have from (2.4) that
$$1 - \lambda_n \ge \frac{\|T x_{n-1} - u_n\|}{\|x_{n-1}\|} - \frac{\|x_{n-1} - T x_{n-1}\|}{\|x_{n-1}\|} \ge \frac{d}{\|x_1 - x^{*}\| + 2\|x^{*}\|} - \frac{\|x_{n-1} - T x_{n-1}\|}{d(0, \partial C)}.$$

Noting that $x_{n-1} - T x_{n-1} \to 0$, it follows that $\sum_{n=1}^{\infty}(1 - \lambda_n) = \infty$. □

Finally, we propose an explicit iteration process for finding the minimum norm fixed point of T, which is defined by
$$x_{n+1} = t_n\lambda_n x_n + (1 - t_n) T x_n, \quad n \ge 0,$$
(2.6)

where $\{t_n\} \subset (0, 1)$, $\lambda_n = h(x_n)$ ($n \ge 0$) and $x_0$ is taken in C arbitrarily.
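
The explicit scheme is the simplest to run. A Python sketch (ours, under the same assumptions on T, h and x0 as above, with $t_n = 1/(n+2)$ so that, in particular, conditions (i) and (iii) of Theorem 2.3 below are satisfied):

    def boundary_point_algorithm_2_6(T, h, x0, N=10000):
        # Explicit iteration (2.6): x_{n+1} = t_n * lam_n * x_n + (1 - t_n) T x_n with lam_n = h(x_n).
        x = x0
        for n in range(N):
            t = 1.0 / (n + 2)          # t_n -> 0, sum t_n = infinity, sum |t_n - t_{n-1}| < infinity
            x = t * h(x) * x + (1 - t) * T(x)
        return x                       # expected to approximate x* = P_{Fix(T)} 0

Combining this routine with the function h from the sketch after Example 1 and any nonexpansive self-mapping of that set C gives a complete, projection-free implementation of the method.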

Theorem 2.3 Assume that $\{t_n\}$ and $\{\lambda_n\}$ satisfy the following conditions:

(i) $t_n \to 0$ and $\sum_{n=0}^{\infty} t_n = \infty$;

(ii) $\limsup_{n \to \infty}\lambda_n \triangleq \bar{\lambda} < 1$;

(iii) $\sum_{n=1}^{\infty}|t_n - t_{n-1}| < \infty$ or $\lim_{n \to \infty} t_n/t_{n-1} = 1$;

(iv) $\sum_{n=1}^{\infty} t_n|\lambda_n - \lambda_{n-1}| < \infty$ or $\lim_{n \to \infty}\lambda_n/\lambda_{n-1} = 1$.

Then $\{x_n\}$ generated by (2.6) converges strongly to $x^{*} = P_{\operatorname{Fix}(T)} 0$.

Proof We first show that $\{x_n\}$ is bounded. Indeed, taking $p \in \operatorname{Fix}(T)$ arbitrarily, we have
$$\begin{aligned} \|x_{n+1} - p\| &\le t_n\|\lambda_n x_n - p\| + (1 - t_n)\|T x_n - p\| \\ &\le t_n\bigl[\lambda_n\|x_n - p\| + (1 - \lambda_n)\|p\|\bigr] + (1 - t_n)\|x_n - p\| \\ &\le t_n\max\{\|x_n - p\|, \|p\|\} + (1 - t_n)\|x_n - p\| \\ &\le \max\{\|x_n - p\|, \|p\|\}. \end{aligned}$$
Inductively,
$$\|x_n - p\| \le \max\{\|x_0 - p\|, \|p\|\}, \quad n \ge 0.$$

This means that $\{x_n\}$ is bounded, and so is $\{T x_n\}$.

We next show that $x_{n+1} - x_n \to 0$. Using (2.6), it follows from a direct calculation that
$$\begin{aligned} \|x_{n+1} - x_n\| &= \bigl\|[t_n\lambda_n x_n + (1 - t_n) T x_n] - [t_{n-1}\lambda_{n-1} x_{n-1} + (1 - t_{n-1}) T x_{n-1}]\bigr\| \\ &= \bigl\|(1 - t_n)(T x_n - T x_{n-1}) - (t_n - t_{n-1}) T x_{n-1} + t_n\lambda_n(x_n - x_{n-1}) + (t_n\lambda_n - t_{n-1}\lambda_{n-1}) x_{n-1}\bigr\| \\ &\le \bigl[1 - t_n(1 - \lambda_n)\bigr]\|x_n - x_{n-1}\| + |t_n - t_{n-1}|\bigl(\|T x_{n-1}\| + \lambda_{n-1}\|x_{n-1}\|\bigr) + t_n|\lambda_n - \lambda_{n-1}|\,\|x_{n-1}\|. \end{aligned}$$

Using Lemma 1.5, we conclude from conditions (i)-(iv) that $x_{n+1} - x_n \to 0$. Noticing the boundedness of $\{x_n\}$ and $\{T x_n\}$ and condition (i), we have from (2.6) that $x_{n+1} - T x_n \to 0$. Consequently, $x_n - T x_n \to 0$. Using Lemma 1.3, we derive that $\omega_w(x_n) \subset \operatorname{Fix}(T)$.

Then we show that
$$\limsup_{n \to \infty}\langle -x^{*}, x_{n+1} - x^{*}\rangle \le 0.$$
(2.7)

As a matter of fact, this is derived by the same argument as in the proof of Theorem 2.2.

Finally, we show that $\|x_n - x^{*}\| \to 0$. Using Lemma 1.4 and (2.6), it is easy to verify that
$$\begin{aligned} \|x_{n+1} - x^{*}\|^{2} &= \bigl\|t_n(\lambda_n x_n - x^{*}) + (1 - t_n)(T x_n - x^{*})\bigr\|^{2} \\ &\le (1 - t_n)^{2}\|T x_n - x^{*}\|^{2} + 2 t_n\langle\lambda_n x_n - x^{*}, x_{n+1} - x^{*}\rangle \\ &\le (1 - t_n)^{2}\|x_n - x^{*}\|^{2} + 2 t_n\lambda_n\langle x_n - x^{*}, x_{n+1} - x^{*}\rangle + 2 t_n(1 - \lambda_n)\langle -x^{*}, x_{n+1} - x^{*}\rangle \\ &\le (1 - t_n)^{2}\|x_n - x^{*}\|^{2} + 2 t_n\lambda_n\|x_n - x^{*}\|\,\|x_{n+1} - x^{*}\| + 2 t_n(1 - \lambda_n)\langle -x^{*}, x_{n+1} - x^{*}\rangle. \end{aligned}$$
Hence,
$$\|x_{n+1} - x^{*}\|^{2} \le (1 - \gamma_n)\|x_n - x^{*}\|^{2} + \gamma_n\sigma_n,$$
where
$$\gamma_n = \frac{\bigl[2(1 - \lambda_n) - t_n\bigr] t_n}{1 - t_n\lambda_n}, \qquad \sigma_n = \frac{2(1 - \lambda_n)}{2(1 - \lambda_n) - t_n}\langle -x^{*}, x_{n+1} - x^{*}\rangle.$$

It is easily seen that $\gamma_n \to 0$ and $\sum_{n=0}^{\infty}\gamma_n = \infty$ by conditions (i) and (ii), and $\limsup_{n \to \infty}\sigma_n \le 0$ by (2.7). By Lemma 1.5, we conclude that $x_n \to x^{*}$. □

Declarations

Acknowledgements

This work was supported in part by the Fundamental Research Funds for the Central Universities (3122013k004) and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.

Authors’ Affiliations

(1)
College of Science, Civil Aviation University of China
(2)
Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China

References

  1. Goebel K, Reich S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York; 1984.
  2. Halpern B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73: 957–961. 10.1090/S0002-9904-1967-11864-0
  3. Ishikawa S: Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44: 147–150. 10.1090/S0002-9939-1974-0336469-5
  4. Mann WR: Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4: 506–510. 10.1090/S0002-9939-1953-0054846-3
  5. Nakajo K, Takahashi W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279: 372–379. 10.1016/S0022-247X(02)00458-4
  6. Reich S: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67: 274–276. 10.1016/0022-247X(79)90024-6
  7. Reich S: Strong convergence theorem for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 1980, 75: 287–292. 10.1016/0022-247X(80)90323-6
  8. Reich S, Shemen L: Two algorithms for nonexpansive mappings. Fixed Point Theory 2011, 12: 443–448.
  9. Shioji N, Takahashi W: Strong convergence of approximated sequences for nonexpansive mappings in Banach spaces. Proc. Am. Math. Soc. 1997, 125: 3641–3645. 10.1090/S0002-9939-97-04033-1
  10. Tan KK, Xu HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178(2): 301–308. 10.1006/jmaa.1993.1309
  11. Wittmann R: Approximation of fixed points of nonexpansive mappings. Arch. Math. 1992, 58: 486–491. 10.1007/BF01190119
  12. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332
  13. Moudafi A: Viscosity approximation methods for fixed points problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615
  14. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059
  15. Wang F, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 2010: Article ID 102085. 10.1155/2010/102085
  16. Diestel J: Geometry of Banach Spaces - Selected Topics. Lecture Notes in Mathematics 485. Springer, Berlin; 1975.
  17. Beauzamy B: Introduction to Banach Spaces and Their Geometry. North-Holland, Amsterdam; 1982.
  18. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.
  19. Liu LS: Iterative processes with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 1995, 194: 114–125. 10.1006/jmaa.1995.1289

Copyright

© He and Yang; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.