# Iterative algorithms for minimum-norm fixed point of non-expansive mapping in Hilbert space

Yong Cai^{1}, Yuchao Tang^{1, 2} (Email author) and Liwei Liu^{1}

**2012**:49

https://doi.org/10.1186/1687-1812-2012-49

© Cai et al; licensee Springer. 2012

**Received: **12 January 2012

**Accepted: **26 March 2012

**Published: **26 March 2012

## Abstract

The purpose of this article is to introduce two iterative algorithms for finding the least-norm fixed point of nonexpansive mappings. We provide two algorithms, one implicit and one explicit, for which strong convergence theorems are obtained in Hilbert spaces. We then apply these algorithms to solve some convex optimization problems, and we use them to solve some split feasibility problems. The results of this article extend and improve several recent results in the literature.

**Mathematics Subject Classification (2000):** 47H09; 47H10; 90C25.


## 1 Introduction

Throughout, *H* is a real Hilbert space with inner product 〈·,·〉 and induced norm ║·║, and *C* is a nonempty closed convex subset of *H*. A mapping *T* from *C* into itself is said to be nonexpansive if ║*Tx* - *Ty*║ ≤ ║*x* - *y*║ for all *x, y* ∈ *C*. Fix(*T*) denotes the fixed point set of *T*, that is, Fix(*T*) = {*x* ∈ *C* : *Tx* = *x*}. Iterative methods for finding fixed points of nonexpansive mappings are an important topic in the theory of nonexpansive mappings and have wide applications in a number of applied areas, such as image reconstruction in computerized tomography [1], optics and neural networks [2], image deblurring [3], and image denoising and deblurring [4]. However, the Picard sequence ${\left\{{T}^{n}x\right\}}_{n=0}^{\infty}$ often fails to converge, even in the weak topology. To overcome this difficulty, the Krasnoselskii-Mann iteration algorithm became prevalent. Starting from an arbitrary initial guess *x*_{0} ∈ *C*, this algorithm generates a sequence {*x*_{ n }} by the recursive formula

${x}_{n+1}=\left(1-{\alpha}_{n}\right){x}_{n}+{\alpha}_{n}T{x}_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$(1.1)

where {*α*_{ n }} is a sequence in (0, 1). Reich [5] proved that if *X* is a uniformly convex Banach space with a Fréchet differentiable norm and if {*α*_{ n }} is chosen such that ${\sum}_{n=0}^{\infty}{\alpha}_{n}\left(1-{\alpha}_{n}\right)=+\infty $, then the sequence {*x*_{ n }} defined by (1.1) converges weakly to a fixed point of *T*. On the other hand, Maingé [6] proposed the so-called inertial Krasnoselskii-Mann-type algorithm as follows:

${\upsilon}_{n}={x}_{n}+{\theta}_{n}\left({x}_{n}-{x}_{n-1}\right),\phantom{\rule{1em}{0ex}}{x}_{n+1}=\left(\left(1-{\alpha}_{n}\right)I+{\alpha}_{n}T\right){\upsilon}_{n},\phantom{\rule{1em}{0ex}}n\ge 1,$(1.2)

where *I* : *H* → *H* is the identity operator, *x*_{0}, *x*_{1} ∈ *H*, and {*θ*_{ n }} ⊂ [0, 1], {*α*_{ n }} ⊂ (0, 1) are the inertial and relaxation factors, respectively. The proposed algorithm unifies the Krasnoselskii-Mann iteration and inertial-type extrapolation. He established some weak convergence theorems for the sequence {*x*_{ n }} generated by (1.2). It is clear that if *θ*_{ n } = 0 for all *n*, then the algorithm (1.2) reduces to the Krasnoselskii-Mann iteration (1.1). The sequence {*υ*_{ n }} is intended to speed up the convergence of the algorithm.

As a matter of fact, the above algorithms (1.1) and (1.2) enjoy only weak convergence except in finite dimensional spaces. To obtain strong convergence in the setting of infinite dimensional Hilbert or Banach spaces, several iterative algorithms for nonexpansive mappings have been proposed (e.g., the viscosity iteration algorithm [7], the hybrid projection algorithm [8], the hybrid steepest descent algorithm [9], Halpern-type iteration algorithms [10, 11], the shrinking projection algorithm [12], etc.). In general, a nonexpansive mapping may have more than one fixed point. Throughout, we assume that $\mathsf{\text{Fix}}\left(T\right)\ne \varnothing $ (this is guaranteed if, in addition, *C* is bounded); then Fix(*T*) is closed and convex. (It is worth mentioning that Ferreira [13] proved that Fix(*T*) is closed and convex even in a strictly convex Banach space, a class which includes Hilbert spaces as a special case.) So there exists a unique *x** ∈ Fix(*T*) satisfying

$∥{x}^{*}∥=\mathrm{min}\left\{∥x∥:x\in \mathsf{\text{Fix}}\left(T\right)\right\},$

that is, *x** is the minimum-norm fixed point of *T*. In other words, *x** is the metric projection of the origin onto Fix(*T*), i.e., *x** = *P*_{Fix(T)}0. It is an interesting problem to construct iterative sequences that find the minimum-norm fixed point of a nonexpansive mapping *T*, i.e., the minimum-norm solution of *x* = *Tx*. Recently, Yao and Xu [14] and Cui and Liu [15] independently introduced two iterative methods (one implicit and one explicit) for finding the minimum-norm fixed point of a nonexpansive mapping defined on a closed convex subset *C* of *H*. The proposed algorithms are based on the well-known Browder iterative method [16] and Halpern iterative method [17], which we briefly recall next. Browder [16] introduced an implicit scheme as follows. Let

*u* ∈ *C* and *t* ∈ (0, 1), and let *x*_{ t } be the unique fixed point in *C* of the contraction *T*_{ t } from *C* into *C* defined by

${T}_{t}x=tu+\left(1-t\right)Tx,\phantom{\rule{1em}{0ex}}x\in C.$(1.3)

Browder proved that the strong limit of {*x*_{ t }} as *t* → 0^{+} is the fixed point of *T* which is nearest in Fix(*T*) to *u*, i.e., lim_{t→0+}*x*_{ t } = *P*_{Fix(T)}*u*. Besides, Halpern [17] introduced an explicit scheme. Let *x*_{0} ∈ *C*, and define a sequence {*x*_{ n }} by the following:

${x}_{n+1}={\alpha}_{n}u+\left(1-{\alpha}_{n}\right)T{x}_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$(1.4)

where {*α*_{ n }} ⊂ (0, 1). It is known that the sequence {*x*_{ n }} generated by (1.4) converges in norm to the same limit *P*_{Fix(T)}*u* as Browder's implicit scheme (1.3) if the sequence {*α*_{ n }} satisfies the following conditions:

(C1) lim_{n→∞}*α*_{ n } = 0;

(C2) ${\sum}_{n=0}^{\infty}{\alpha}_{n}=+\infty $;

(C3) either ${\sum}_{n=0}^{\infty}\left|{\alpha}_{n+1}-{\alpha}_{n}\right|<+\infty $ or ${\mathrm{lim}}_{n\to \infty}\left({\alpha}_{n}/{\alpha}_{n+1}\right)=1$.
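For intuition, the Halpern scheme (1.4) is easy to simulate numerically. The sketch below is illustrative only: it takes *T* to be the metric projection onto the closed unit ball of R² (a nonexpansive mapping whose fixed point set is the ball itself), with *α*_{ n } = 1/(*n* + 1), which satisfies (C1)-(C3); the iterates approach *P*_{Fix(T)}*u*.

```python
import math

def proj_unit_ball(x):
    # Metric projection onto the closed unit ball of R^2:
    # an example of a nonexpansive mapping whose fixed point set is the ball.
    r = math.hypot(x[0], x[1])
    return x if r <= 1.0 else (x[0] / r, x[1] / r)

def halpern(T, u, x0, n_iter):
    # Halpern iteration (1.4): x_{n+1} = a_n * u + (1 - a_n) * T x_n,
    # with a_n = 1/(n+1), satisfying conditions (C1)-(C3).
    x = x0
    for n in range(n_iter):
        a = 1.0 / (n + 1)
        tx = T(x)
        x = (a * u[0] + (1 - a) * tx[0], a * u[1] + (1 - a) * tx[1])
    return x

# With u = (3, 0), the limit P_{Fix(T)} u is the nearest ball point (1, 0).
x = halpern(proj_unit_ball, (3.0, 0.0), (0.0, 2.0), 5000)
```

On this toy instance the error after *n* steps is of order *α*_{ n }, which illustrates why condition (C1) is needed for convergence at all.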

It is noticed that Browder's and Halpern's iterative methods do find the minimum-norm fixed point *x** of *T* if 0 ∈ *C*. However, if 0 ∉ *C*, then neither Browder's nor Halpern's method works to find the minimum-norm element *x**. The reason is simple: if 0 ∉ *C*, we cannot take *u* = 0 in either (1.3) or (1.4), since the contraction *T*_{ t }*x* = (1 - *t*)*Tx* is no longer a self-mapping of *C*, or the point (1 - *α*_{ n })*Tx*_{ n } may not belong to *C* and consequently {*x*_{n+1}} may be undefined. In order to overcome these difficulties, caused by the possible exclusion of the origin from *C*, Yao and Xu [14] and Cui and Liu [15] proposed applying the metric projection *P*_{ C } to the right-hand sides of (1.3) and (1.4) when *u* = 0. The role of the metric projection *P*_{ C } is to pull the iterates back into *C*, so that the iterative sequences are well defined.
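To see the role of *P*_{ C } concretely, here is a minimal sketch of the projected Halpern-type step *x*_{n+1} = *P*_{ C }((1 - *α*_{ n })*Tx*_{ n }) with *u* = 0. The set *C* = {*x* ∈ R² : *x*₂ ≥ 1} (so 0 ∉ *C*) and *T* = projection onto the line {(*s*, 1) : *s* ∈ R} are illustrative choices; the iterates stay in *C* and approach the minimum-norm fixed point (0, 1).

```python
def T(x):
    # T = projection onto the line {(s, 1)}: nonexpansive on C, Fix(T) = the line.
    return (x[0], 1.0)

def proj_C(x):
    # Projection onto C = {x in R^2 : x_2 >= 1}; note 0 is NOT in C.
    return (x[0], max(1.0, x[1]))

x = (5.0, 3.0)
for n in range(5000):
    a = 1.0 / (n + 2)
    tx = T(x)
    # Projected Halpern step with u = 0: without P_C, (1-a)*T x would leave C.
    x = proj_C(((1 - a) * tx[0], (1 - a) * tx[1]))
```

Without the projection, (1 - *α*_{ n })*Tx*_{ n } has second coordinate (1 - *α*_{ n }) < 1 and so falls outside *C* at every step.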

In this article, we replace the closed convex subset *C* by a closed convex cone *C* (*C* is said to be a closed convex cone if (i) *C* is closed and convex; (ii) *αx* ∈ *C* for all *α* ≥ 0 and *x* ∈ *C*; (iii) *C* ≠ {0}). We present new strongly convergent methods for approximating the minimum-norm fixed point of nonexpansive mappings. The proposed algorithms are of two types, generated as follows. For each λ ∈ (0, 1):

- (i) The implicit method${x}_{t}=\left(1-t\right)\left(\lambda T{x}_{t}+\left(1-\lambda \right){x}_{t}\right),\phantom{\rule{1em}{0ex}}t\in \left(0,1\right);$(1.5)

- (ii) The explicit method${x}_{n+1}=\left(1-{\alpha}_{n}\right)\left(\lambda T{x}_{n}+\left(1-\lambda \right){x}_{n}\right),\phantom{\rule{1em}{0ex}}n\ge 0,$(1.6)

where {*α*_{ n }} ⊂ (0, 1).

We prove that the sequences {*x*_{ t }} and {*x*_{ n }} generated by (1.5) and (1.6) converge strongly to the minimum-norm fixed point of nonexpansive mappings. As applications, we provide iterative processes for solving constrained convex optimization problems, and we use them to solve some split feasibility problems, which have attracted great attention in recent years. Our results improve and generalize the corresponding results of Cui and Liu [15], Yao and Xu [14], and Wang and Xu [18], et al.

## 2 Preliminaries

Let *H* be a Hilbert space with inner product 〈·,·〉 and norm ║·║, and let *C* be a nonempty closed convex subset of *H*.

We shall use the following notation:

- (i) *⇀* for weak convergence and → for strong convergence;

- (ii) ${w}_{w}\left({x}_{n}\right)=\left\{x:\exists {x}_{{n}_{j}}\rightharpoonup x\right\}$ denotes the weak *ω*-limit set of {*x*_{ n }}.

Recall that the metric projection *P*_{ C }*x* of *x* onto *C* is defined by

${P}_{C}x=\mathrm{arg}\phantom{\rule{0.3em}{0ex}}{\mathrm{min}}_{y\in C}∥x-y∥,\phantom{\rule{1em}{0ex}}x\in H.$

It is well known that the metric projection has the following properties:

- (i) 〈*x - P*_{ C }*x*, *z - P*_{ C }*x*〉 ≤ 0, for all *z* ∈ *C*;

- (ii) ║*P*_{ C }*x* - *P*_{ C }*y*║^{2} ≤ 〈*P*_{ C }*x* - *P*_{ C }*y, x* - *y*〉, for all *x, y* ∈ *H*.
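The variational characterization (i) is easy to verify numerically. A minimal sketch (the box *C* = [0,1]² is an illustrative choice) checks 〈*x* - *P*_{ C }*x*, *z* - *P*_{ C }*x*〉 ≤ 0 over sampled *z* ∈ *C*:

```python
def proj_box(x):
    # Metric projection onto C = [0,1]^2 (componentwise clipping).
    return tuple(min(1.0, max(0.0, c)) for c in x)

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

x = (2.0, -0.5)
px = proj_box(x)           # the projected point (1.0, 0.0)
# Property (i): <x - P_C x, z - P_C x> <= 0 for every z in C.
ok = all(
    inner((x[0] - px[0], x[1] - px[1]), (z1 - px[0], z2 - px[1])) <= 1e-12
    for z1 in [i / 10 for i in range(11)]
    for z2 in [j / 10 for j in range(11)]
)
```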

We shall make use of the following results.

**Lemma 2.1**. (*Demiclosedness principle for nonexpansive mappings*) *Let T* : *C* → *C* *be a nonexpansive mapping with* $\mathsf{\text{Fix}}\left(T\right)\ne \varnothing $. *If x*_{ n } *⇀ x and* (*I* - *T*)*x*_{ n } → 0, *then x* = *Tx*.

**Lemma 2.2**. (*see* [19]) *Let* {*x*_{ n }} *and* {*y*_{ n }} *be bounded sequences in a Banach space E, and let* {*β*_{ n }} *be a sequence in* [0, 1] *with* 0 < lim inf_{n→∞}*β*_{ n } ≤ lim sup_{n→∞}*β*_{ n } < 1. *Suppose that x*_{n+1} = *β*_{ n }*y*_{ n } + (1 - *β*_{ n })*x*_{ n } *for all n* ≥ 0 *and*

$\underset{n\to \infty}{\mathrm{lim}\phantom{\rule{0.3em}{0ex}}\mathrm{sup}}\left(∥{y}_{n+1}-{y}_{n}∥-∥{x}_{n+1}-{x}_{n}∥\right)\le 0.$

*Then* lim_{n→∞}║*y*_{ n } - *x*_{ n }║ = 0.

**Lemma 2.3**. (*see* [20]) *Let* {*a*_{ n }} *be a nonnegative real sequence satisfying the inequality*

${a}_{n+1}\le \left(1-{\gamma}_{n}\right){a}_{n}+{\gamma}_{n}{\delta}_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$

*where* {*γ*_{ n }} ⊂ (0, 1) *such that* ${\sum}_{n=0}^{\infty}{\gamma}_{n}=+\infty $, *and* lim sup_{n→∞}*δ*_{ n } ≤ 0. *Then* lim_{n→∞}*a*_{ n } = 0.
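A quick numerical sanity check of Lemma 2.3 (the choices *γ*_{ n } = *δ*_{ n } = 1/(*n* + 1) are illustrative, with Σ*γ*_{ n } = +∞ and lim sup *δ*_{ n } = 0): even though each *γ*_{ n } is small, the recursion drives *a*_{ n } to 0.

```python
a = 10.0  # a_0: any nonnegative start
for n in range(100000):
    gamma = 1.0 / (n + 1)   # sum of gamma_n diverges
    delta = 1.0 / (n + 1)   # lim sup delta_n = 0
    # The recursion of Lemma 2.3 (with equality, the worst case):
    a = (1 - gamma) * a + gamma * delta
```

Here *a*_{ n } behaves roughly like (log *n*)/*n*, so it vanishes, but slowly; this matches the typical rates seen for Halpern-type schemes.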

## 3 Main results

First, we prove the following strong convergence theorem by using the implicit method (1.5) for finding the minimum-norm fixed point of a nonexpansive mapping *T*.

**Theorem 3.1**. *Let C be a closed convex cone of a real Hilbert space H. Let T* : *C* → *C be a nonexpansive mapping with* $\mathsf{\text{Fix}}\left(T\right)\ne \varnothing $. *For each t* ∈ (0, 1), *let x*_{ t } *be the unique fixed point in C of the contraction T*_{ t } := (1 - *t*)(λ*T* + (1 - λ)*I*), *where* λ ∈ (0, 1) *is a constant. Then x*_{ t } *converges strongly to the minimum-norm fixed point of T as t* → 0^{+}.

*Proof*. Take *p* ∈ Fix(*T*). From (1.5), we have

$∥{x}_{t}-p∥=∥\left(1-t\right)\left(\lambda T{x}_{t}+\left(1-\lambda \right){x}_{t}-p\right)-tp∥\le \left(1-t\right)∥{x}_{t}-p∥+t∥p∥,$

which implies ║*x*_{ t } - *p*║ ≤ ║*p*║. Hence {*x*_{ t }} is bounded, and so is {*Tx*_{ t }}. Next, we prove that ║*x*_{ t } - *Tx*_{ t }║ → 0 as *t* → 0^{+}. In fact, from (1.5) we have λ(*x*_{ t } - *Tx*_{ t }) = -*t*(λ*Tx*_{ t } + (1 - λ)*x*_{ t }), and hence

$∥{x}_{t}-T{x}_{t}∥\le \frac{t}{\lambda }\left(\lambda ∥T{x}_{t}∥+\left(1-\lambda \right)∥{x}_{t}∥\right)\to 0\phantom{\rule{1em}{0ex}}\left(t\to {0}^{+}\right).$(3.1)

Moreover, writing *S* := λ*T* + (1 - λ)*I*, which is nonexpansive with Fix(*S*) = Fix(*T*), we obtain for any $\stackrel{\u0303}{x}\in \mathsf{\text{Fix}}\left(T\right)$:

${∥{x}_{t}-\stackrel{\u0303}{x}∥}^{2}=\left(1-t\right)⟨S{x}_{t}-\stackrel{\u0303}{x},{x}_{t}-\stackrel{\u0303}{x}⟩+t⟨-\stackrel{\u0303}{x},{x}_{t}-\stackrel{\u0303}{x}⟩\le \left(1-t\right){∥{x}_{t}-\stackrel{\u0303}{x}∥}^{2}+t⟨-\stackrel{\u0303}{x},{x}_{t}-\stackrel{\u0303}{x}⟩,$

so that

${∥{x}_{t}-\stackrel{\u0303}{x}∥}^{2}\le ⟨-\stackrel{\u0303}{x},{x}_{t}-\stackrel{\u0303}{x}⟩,\phantom{\rule{1em}{0ex}}\stackrel{\u0303}{x}\in \mathsf{\text{Fix}}\left(T\right).$(3.2)

Next we show that {*x*_{ t }} is relatively norm-compact as *t* → 0^{+}. Since {*x*_{ t }} is bounded, there exists a null sequence {*t*_{ n }} ⊂ (0, 1) such that ${x}_{{t}_{n}}\rightharpoonup \stackrel{\u0304}{x}$. By Lemma 2.1 and (3.1), $\stackrel{\u0304}{x}\in \mathsf{\text{Fix}}\left(T\right)$.

Since $\stackrel{\u0304}{x}\in \mathsf{\text{Fix}}\left(T\right)$, we may substitute $\stackrel{\u0304}{x}$ for $\stackrel{\u0303}{x}$ and *t*_{ n } for *t* in (3.2); together with ${x}_{{t}_{n}}\rightharpoonup \stackrel{\u0304}{x}$, this yields ${x}_{{t}_{n}}\to \stackrel{\u0304}{x}$. Hence, {*x*_{ t }} is indeed relatively compact (as *t* → 0^{+}) in the norm topology.

Finally, letting *n* → ∞ in (3.2) gives ${∥\stackrel{\u0304}{x}-\stackrel{\u0303}{x}∥}^{2}\le ⟨-\stackrel{\u0303}{x},\stackrel{\u0304}{x}-\stackrel{\u0303}{x}⟩$, which is equivalent to $⟨-\stackrel{\u0304}{x},\stackrel{\u0303}{x}-\stackrel{\u0304}{x}⟩\le 0$ for all $\stackrel{\u0303}{x}\in \mathsf{\text{Fix}}\left(T\right)$. Therefore, $\stackrel{\u0304}{x}={x}^{*}$, where *x** is the minimum-norm fixed point of *T*, and we conclude that *x*_{ t } → *x** as *t* → 0^{+}. This completes the proof. □
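Although (1.5) is implicit, *x*_{ t } can be computed for each fixed *t* by Picard iteration, since (1 - *t*)(λ*T* + (1 - λ)*I*) is a (1 - *t*)-contraction. A minimal sketch (*T* = projection onto the ray *M* = {(*s*, *s* + 1) : *s* ≥ 0} in the cone *C* = R²₊ is an illustrative choice, with minimum-norm fixed point (0, 1)):

```python
def proj_ray(x):
    # T = projection onto M = {(s, s+1) : s >= 0}; Fix(T) = M, min-norm point (0, 1).
    s = max(0.0, (x[0] + x[1] - 1.0) / 2.0)
    return (s, s + 1.0)

def x_t(t, lam=0.5, iters=2000):
    # Solve x = (1-t)(lam*T x + (1-lam)*x) by Picard iteration:
    # the right-hand side is a (1-t)-contraction, so the iteration converges.
    x = (1.0, 1.0)
    for _ in range(iters):
        tx = proj_ray(x)
        x = ((1 - t) * (lam * tx[0] + (1 - lam) * x[0]),
             (1 - t) * (lam * tx[1] + (1 - lam) * x[1]))
    return x

x = x_t(0.01)   # as t -> 0+, x_t approaches the minimum-norm fixed point
```

Since the inner contraction factor is (1 - *t*), the inner iteration slows down as *t* → 0^{+}; this is the practical motivation for the explicit scheme (1.6).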

Now, we are in a position to prove the strong convergence of the explicit method (1.6). Our proof of this theorem closely follows the proofs given in [11] for some related results.

**Theorem 3.2**. *Let C be a closed convex cone of a real Hilbert space H. Let T : C → C be a nonexpansive mapping with* Fix(*T*) *nonempty. Assume that the sequence* {*α*_{ n }} ⊂ (0, 1) *satisfies the following conditions:*

(*i*) lim_{n→∞}*α*_{ n } = 0;

(*ii*) ${\sum}_{n=0}^{\infty}{\alpha}_{n}=+\infty $.

*Then the sequence* {*x*_{ n }} *generated by the algorithm (1.6) converges strongly to the fixed point of T which is of minimal norm*.

*Proof*. First we prove that the sequence {*x*_{ n }} is bounded. Let *p* ∈ Fix(*T*). By (1.6), we have

$∥{x}_{n+1}-p∥\le \left(1-{\alpha}_{n}\right)∥{x}_{n}-p∥+{\alpha}_{n}∥p∥\le \mathrm{max}\left\{∥{x}_{n}-p∥,∥p∥\right\}\le \cdots \le \mathrm{max}\left\{∥{x}_{0}-p∥,∥p∥\right\}$

for all *n* ≥ 0. Then {*x*_{ n }} is bounded. Therefore, {*Tx*_{ n }} is also bounded.

Next, we show that lim_{n→∞}║*x*_{ n } - *Tx*_{ n }║ = 0. To this end, rewrite (1.6) as *x*_{n+1} = *β*_{ n }*y*_{ n } + (1 - *β*_{ n })*x*_{ n }, where *β*_{ n } := *α*_{ n } + (1 - *α*_{ n })λ and ${y}_{n}:=\frac{\left(1-{\alpha}_{n}\right)\lambda }{{\alpha}_{n}+\left(1-{\alpha}_{n}\right)\lambda }T{x}_{n}$. Since lim_{n→∞}(*α*_{ n } + (1 - *α*_{ n })λ) = λ ∈ (0, 1), we have 0 < lim inf_{n→∞}*β*_{ n } ≤ lim sup_{n→∞}*β*_{ n } < 1, and {*y*_{ n }} is bounded. Since {*x*_{ n }} and {*Tx*_{ n }} are bounded sequences and lim_{n→∞}*α*_{ n } = 0, a direct computation shows that lim sup_{n→∞}(║*y*_{n+1} - *y*_{ n }║ - ║*x*_{n+1} - *x*_{ n }║) ≤ 0. By Lemma 2.2, lim_{n→∞}║*y*_{ n } - *x*_{ n }║ = 0. Therefore, ║*x*_{n+1} - *x*_{ n }║ = *β*_{ n }║*y*_{ n } - *x*_{ n }║ → 0, and consequently lim_{n→∞}║*x*_{ n } - *Tx*_{ n }║ = 0.

Next, we show that lim sup_{n→∞}〈*x** - *x*_{ n }*, x**〉 ≤ 0, where *x** = *P*_{Fix(T)}0. To achieve this, we take a subsequence $\left\{{x}_{{n}_{i}}\right\}$ of {*x*_{ n }} such that

$\underset{n\to \infty}{\mathrm{lim}\phantom{\rule{0.3em}{0ex}}\mathrm{sup}}⟨{x}^{*}-{x}_{n},{x}^{*}⟩=\underset{i\to \infty}{\mathrm{lim}}⟨{x}^{*}-{x}_{{n}_{i}},{x}^{*}⟩.$

Since {*x*_{ n }} is bounded, without loss of generality, we may assume that ${x}_{{n}_{i}}\rightharpoonup {x}^{\prime}$. Since lim_{n→∞}║*x*_{ n } - *Tx*_{ n }║ = 0, the demiclosedness principle for the nonexpansive mapping *T* (Lemma 2.1) gives *x*' ∈ Fix(*T*). Since *x** = *P*_{Fix(T)}0, it follows from the properties of the projection operator that

$\underset{n\to \infty}{\mathrm{lim}\phantom{\rule{0.3em}{0ex}}\mathrm{sup}}⟨{x}^{*}-{x}_{n},{x}^{*}⟩=⟨{x}^{*}-{x}^{\prime},{x}^{*}⟩=⟨0-{x}^{*},{x}^{\prime}-{x}^{*}⟩\le 0.$

Finally, from (1.6) we estimate

${∥{x}_{n+1}-{x}^{*}∥}^{2}\le \left(1-{\alpha}_{n}\right){∥{x}_{n}-{x}^{*}∥}^{2}+2{\alpha}_{n}⟨{x}^{*}-{x}_{n+1},{x}^{*}⟩.$

By condition (ii) and the last two inequalities, we can apply Lemma 2.3 (with *γ*_{ n } = *α*_{ n } and *δ*_{ n } = 2〈*x** - *x*_{n+1}*, x**〉) and conclude that {*x*_{ n }} converges strongly to *x**, the minimum-norm fixed point of *T*, as *n* → ∞. This completes the proof. □

*Remark* 3.1. (i) If the closed convex cone *C* in Theorems 3.1 and 3.2 is replaced by a closed convex set *C* with 0 ∈ *C*, then Theorems 3.1 and 3.2 remain true, because the iterative sequences (1.5) and (1.6) are still well-defined.

- (ii) Theorem 3.2 also improves [[14], Theorem 3.2] and [[15], Theorem 3.3], in which the restriction "${\sum}_{n=0}^{\infty}\left|{\alpha}_{n+1}-{\alpha}_{n}\right|<+\infty \phantom{\rule{0.3em}{0ex}}\mathsf{\text{or}}\phantom{\rule{0.3em}{0ex}}{\mathrm{lim}}_{n\to \infty}{\alpha}_{n}/{\alpha}_{n+1}=1$" is removed.

## 4 Some applications

Consider the constrained convex minimization problem

$\underset{x\in C}{\mathrm{min}}f\left(x\right),$(4.1)

where *f* : *C* → *R* is a convex, Fréchet differentiable function and *C* is a closed convex subset of *H*. It is well known that *x** ∈ *C* solves (4.1) if and only if it solves the variational inequality

$⟨\nabla f\left({x}^{*}\right),x-{x}^{*}⟩\ge 0,\phantom{\rule{1em}{0ex}}x\in C,$

where ∇*f* : *H* → *H* is the gradient of *f*. This, in turn, is equivalent to the fixed point equation

${x}^{*}={P}_{C}\left(I-\mu \nabla f\right){x}^{*},$

where *P*_{ C } is the metric projection onto *C* and *μ* > 0 is a positive constant. Based on this fixed point formulation, one deduces the projected gradient method.
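The fixed point formulation leads directly to the projected gradient iteration *x*_{n+1} = *P*_{ C }(*x*_{ n } - *μ*∇*f*(*x*_{ n })). A minimal sketch, where *f*(*x*) = ½║*x* - *b*║² with *b* = (2, -1) and *C* = [0,1]² are illustrative choices (so ∇*f*(*x*) = *x* - *b*, *L* = 1, and the constrained minimizer is *P*_{ C }*b* = (1, 0)):

```python
def proj_box(x):
    # Metric projection onto C = [0,1]^2.
    return tuple(min(1.0, max(0.0, c)) for c in x)

def grad_f(x, b=(2.0, -1.0)):
    # Gradient of f(x) = 0.5 * ||x - b||^2; Lipschitz constant L = 1.
    return (x[0] - b[0], x[1] - b[1])

mu = 0.5            # step size in (0, 2/L)
x = (0.0, 1.0)
for _ in range(200):
    g = grad_f(x)
    # One projected gradient step: x <- P_C(x - mu * grad f(x)).
    x = proj_box((x[0] - mu * g[0], x[1] - mu * g[1]))
```

The iteration is exactly the Picard sequence of the nonexpansive mapping *P*_{ C }(*I* - *μ*∇*f*) discussed below.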

Using Theorems 3.1 and 3.2, we immediately obtain the following result.

**Theorem 4.1**. *Suppose that the solution set of* (*4.1*) *is nonempty. Let the objective function f be convex and Fréchet differentiable with gradient* ∇*f Lipschitz continuous with Lipschitz constant L. In addition, assume that* 0 ∈ *C or that C is a closed convex cone, and let μ* ∈ (0, 2/*L*).

*(i) For each t* ∈ (0, 1), *let x*_{ t } *be the unique solution of the fixed point equation*

${x}_{t}=\left(1-t\right)\left(\lambda {P}_{C}\left(I-\mu \nabla f\right){x}_{t}+\left(1-\lambda \right){x}_{t}\right).$

*Then* {*x*_{ t }} *converges in norm as t* → 0^{+} *to the minimum-norm solution of the minimization* (*4.1*).

*(ii) Define a sequence* {*x*_{ n }} *by*

${x}_{n+1}=\left(1-{\alpha}_{n}\right)\left(\lambda {P}_{C}\left(I-\mu \nabla f\right){x}_{n}+\left(1-\lambda \right){x}_{n}\right),\phantom{\rule{1em}{0ex}}n\ge 0,$

*where λ* ∈ (0, 1) *and the sequence* {*α*_{ n }} ⊂ (0, 1) *satisfies the conditions in Theorem 3.2. Then the sequence* {*x*_{ n }} *converges strongly to the minimum-norm solution of the minimization* (*4.1*).

*Proof*. Since ∇*f* is Lipschitz continuous with Lipschitz constant *L* and *μ* ∈ (0, 2/*L*), the mapping *P*_{ C }(*I* - *μ*∇*f*) is nonexpansive (see [21], Sect. 4). Replacing the mapping *T* in (1.5) and (1.6) with *P*_{ C }(*I* - *μ*∇*f*), the conclusion of Theorem 4.1 follows from Theorems 3.1 and 3.2 immediately. □

The split feasibility problem (SFP) [22] is to find a point

$x\in C\phantom{\rule{0.3em}{0ex}}\mathsf{\text{such that}}\phantom{\rule{0.3em}{0ex}}Ax\in Q,$(4.5)

where *C* and *Q* are nonempty closed convex subsets of Hilbert spaces *H*_{1} and *H*_{2}, respectively, and *A* : *H*_{1} → *H*_{2} is a bounded linear operator.

It is known that *x** is a solution to the split feasibility problem (4.5) if and only if *x** ∈ *C* and *Ax** - *P*_{ Q }*Ax** = 0. We define the proximity function *f* by

$f\left(x\right)=\frac{1}{2}{∥Ax-{P}_{Q}Ax∥}^{2}$

and consider the minimization problem

$\underset{x\in C}{\mathrm{min}}f\left(x\right).$(4.6)

Then *x** solves the split feasibility problem (4.5) if and only if *x** solves the minimization (4.6) with minimum value equal to 0. Byrne [21] introduced the so-called CQ algorithm to solve the (SFP):

${x}_{n+1}={P}_{C}\left({x}_{n}-\mu {A}^{*}\left(I-{P}_{Q}\right)A{x}_{n}\right),\phantom{\rule{1em}{0ex}}n\ge 0,$(4.7)

where 0 < *μ* < 2/*ρ*(${A}^{*}A$), *P*_{ C } denotes the projection onto *C*, and *ρ*(${A}^{*}A$) is the spectral radius of the self-adjoint operator ${A}^{*}A$. He obtained that the sequence {*x*_{ n }} generated by (4.7) converges weakly to a solution of the (SFP).
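The CQ iteration (4.7) needs only matrix-vector products and two projections. A minimal sketch, where *A* = diag(2, 1) and *C* = *Q* = [0,1]² are illustrative choices (the SFP then asks for *x* ∈ [0,1]² with (2*x*₁, *x*₂) ∈ [0,1]²):

```python
def clip01(v):
    # Projection onto the box [0,1]^2 (used for both P_C and P_Q here).
    return tuple(min(1.0, max(0.0, c)) for c in v)

def A(x):      return (2.0 * x[0], x[1])       # A = diag(2, 1)
def A_adj(y):  return (2.0 * y[0], y[1])       # A* = A^T = diag(2, 1)

mu = 0.4   # 0 < mu < 2 / rho(A*A) = 2/4
x = (1.0, 1.0)
for _ in range(100):
    ax = A(x)
    r = (ax[0] - clip01(ax)[0], ax[1] - clip01(ax)[1])   # (I - P_Q) A x
    g = A_adj(r)
    # One CQ step (4.7): x <- P_C(x - mu * A*(I - P_Q) A x).
    x = clip01((x[0] - mu * g[0], x[1] - mu * g[1]))
```

The step *A**(*I* - *P*_{ Q })*A* is precisely the gradient of the proximity function (4.6), so the CQ algorithm is the projected gradient method for (4.6).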

In order to obtain strong convergence, Xu [23] considered a Halpern-type variant of the CQ algorithm:

${x}_{n+1}={\alpha}_{n}u+\left(1-{\alpha}_{n}\right){P}_{C}\left({x}_{n}-\mu {A}^{*}\left(I-{P}_{Q}\right)A{x}_{n}\right),\phantom{\rule{1em}{0ex}}n\ge 0,$(4.8)

where 0 < *μ* < 2/*ρ*(${A}^{*}A$). He showed that when the sequence {*α*_{ n }} satisfies the conditions (C1)-(C3), {*x*_{ n }} converges strongly to the projection of *u* onto the solution set of the (SFP). In particular, if *u* = 0 in the algorithm (4.8), then the corresponding algorithm converges strongly to the minimal norm solution of the (SFP). Lately, Wang and Xu [18] introduced a modification of the CQ algorithm (4.7) with strong convergence by introducing an approximating curve for the (SFP) in an infinite dimensional Hilbert space, and obtained the minimum-norm solution of the (SFP) as the strong limit of the approximating curve. The sequence {*x*_{ n }} is generated by the iterative algorithm

${x}_{n+1}={P}_{C}\left(\left(1-{\alpha}_{n}\right)\left({x}_{n}-\mu {A}^{*}\left(I-{P}_{Q}\right)A{x}_{n}\right)\right),\phantom{\rule{1em}{0ex}}n\ge 0,$(4.9)

where {*α*_{ n }} ⊂ (0, 1) satisfies (C1)-(C3).

Applying Theorem 4.1, we obtain the following result, which improves the corresponding results of Xu [23] and Wang and Xu [18].

**Theorem 4.2**. *Assume that the split feasibility problem* (*4.5*) *is consistent. In addition, assume that* 0 ∈ *C or that C is a closed convex cone. Let the sequence* {*x*_{ n }} *be generated by*

${x}_{n+1}=\left(1-{\alpha}_{n}\right)\left(\lambda {P}_{C}\left({x}_{n}-\mu {A}^{*}\left(I-{P}_{Q}\right)A{x}_{n}\right)+\left(1-\lambda \right){x}_{n}\right),\phantom{\rule{1em}{0ex}}n\ge 0,$(4.10)

*where the sequence* {*α*_{ n }} ⊂ (0, 1) *satisfies the conditions: (i)* lim_{n→∞}*α*_{ n } = 0; (*ii*) ${\sum}_{n=0}^{\infty}{\alpha}_{n}=+\infty $; *λ* ∈ (0, 1) *and μ* ∈ (0, 2/*ρ*(${A}^{*}A$)), *where ρ*(${A}^{*}A$) *denotes the spectral radius of the self-adjoint operator* ${A}^{*}A$. *Then the sequence* {*x*_{ n }} *converges strongly to the minimum-norm solution of the split feasibility problem (4.5)*.

*Proof*. By the definition of the proximity function *f*, we have

$\nabla f\left(x\right)={A}^{*}\left(I-{P}_{Q}\right)Ax,$

and ∇*f* is Lipschitz continuous (Lemma 8.1 of [21]) with Lipschitz constant *L* = *ρ*(${A}^{*}A$). Then the iterative scheme (4.10) is equivalent to

${x}_{n+1}=\left(1-{\alpha}_{n}\right)\left(\lambda {P}_{C}\left(I-\mu \nabla f\right){x}_{n}+\left(1-\lambda \right){x}_{n}\right),\phantom{\rule{1em}{0ex}}n\ge 0.$

Due to Theorem 4.1, we have the conclusion immediately. □
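A toy run of the scheme (4.10) (here *A* = *I*, the cone *C* = R²₊, and *Q* = [1,2]² are illustrative choices, for which the solution set of the SFP is [1,2]² and the minimum-norm solution is (1, 1)):

```python
def proj_cone(x):
    # Projection onto the closed convex cone C = R^2_+.
    return (max(0.0, x[0]), max(0.0, x[1]))

def proj_Q(y):
    # Projection onto Q = [1,2]^2.
    return tuple(min(2.0, max(1.0, c)) for c in y)

lam, mu = 0.5, 0.5     # lam in (0,1); mu in (0, 2/rho(A*A)) with A = I
x = (4.0, 0.0)
for n in range(5000):
    a = 1.0 / (n + 2)                      # alpha_n: lim = 0, sum = +inf
    # grad f(x) = A*(I - P_Q)Ax, which reduces to x - P_Q x since A = I.
    q = proj_Q(x)
    g = (x[0] - q[0], x[1] - q[1])
    t = proj_cone((x[0] - mu * g[0], x[1] - mu * g[1]))   # P_C(I - mu grad f)x
    # One step of (4.10): x <- (1 - a)(lam * T x + (1 - lam) * x).
    x = ((1 - a) * (lam * t[0] + (1 - lam) * x[0]),
         (1 - a) * (lam * t[1] + (1 - lam) * x[1]))
```

Among all points of the solution set [1,2]², the iterates single out the one of least norm, as Theorem 4.2 predicts.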

*Remark* 4.1. Theorem 4.2 extends the corresponding results of Wang and Xu [18] and Xu [23] by discarding the assumption "${\sum}_{n=0}^{\infty}\left|{\alpha}_{n+1}-{\alpha}_{n}\right|<+\infty $ or lim_{n→∞}(*α*_{ n }/*α*_{n+1}) = 1".

## Declarations

### Acknowledgements

The authors would like to thank the anonymous referees for their constructive comments and helpful suggestions, which greatly improved the original manuscript of this article. The authors are also deeply grateful to Prof. Hongkun Xu (Editor) for managing the review process. This study was supported partly by the National Natural Science Foundations of China (11101204, 11102078), the Natural Science Foundations of Jiangxi Province (2009GZS0021, CA201107114) and the Youth Science Funds of The Education Department of Jiangxi Province (GJJ12141).

## References

1. Herman GT: *Fundamentals of Computerized Tomography: Image Reconstruction from Projections*. 2nd edition. Springer, New York; 2009.
2. Stark H, Yang Y: *Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets, and Optics*. Wiley-Interscience, New York; 1998.
3. Li X: Fine-granularity and spatially adaptive regularization for projection based image deblurring. *IEEE Trans Image Process* 2011, 20(4):971-983.
4. Beck A, Teboulle M: Fast gradient-based algorithms for constrained total variation de-noising and deblurring problems. *IEEE Trans Image Process* 2009, 18(11):2419-2434.
5. Reich S: Weak convergence theorems for nonexpansive mappings in Banach spaces. *J Math Anal Appl* 1979, 67:274-276.
6. Maingé PE: Convergence theorems for inertial KM-type algorithms. *J Comput Appl Math* 2008, 219:223-236.
7. Moudafi A: Viscosity approximation methods for fixed-points problems. *J Math Anal Appl* 2000, 241:46-55.
8. Nakajo K, Takahashi W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. *J Math Anal Appl* 2003, 279:372-379.
9. Yamada I, Ogura N: Hybrid steepest descent method for the variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. *Numer Funct Anal Optim* 2004, 25(7-8):619-655.
10. Chidume CE, Chidume CO: Iterative approximation of fixed points of nonexpansive mappings. *J Math Anal Appl* 2006, 318:288-295.
11. Suzuki T: A sufficient and necessary condition for Halpern-type strong convergence to fixed points of nonexpansive mappings. *Proc Am Math Soc* 2007, 135(1):99-106.
12. Takahashi W, Takeuchi Y, Kubota R: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. *J Math Anal Appl* 2008, 341:276-286.
13. Ferreira PJSG: The existence and uniqueness of the minimum-norm solution to certain linear and nonlinear problems. *Signal Process* 1996, 55(1):137-139.
14. Yao YH, Xu HK: Iterative methods for finding minimum-norm fixed points of nonexpansive mappings with applications. *Optimization* 2011, 60(6):645-658.
15. Cui YL, Liu X: Notes on Browder's and Halpern's methods for nonexpansive mappings. *Fixed Point Theory* 2009, 10(1):89-98.
16. Browder FE: Convergence theorems for sequences of nonlinear operators in Banach spaces. *Math Z* 1967, 100:201-225.
17. Halpern B: Fixed points of nonexpanding maps. *Bull Am Math Soc* 1967, 73:957-961.
18. Wang F, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. *J Inequal Appl* 2010, 2010:102085.
19. Suzuki T: Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals. *J Math Anal Appl* 2005, 305:227-239.
20. Xu HK: Another control condition in an iterative method for nonexpansive mappings. *Bull Aust Math Soc* 2002, 65:109-113.
21. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. *Inverse Probl* 2004, 20:103-120.
22. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. *Numer Alg* 1994, 8:221-239.
23. Xu HK: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. *Inverse Probl* 2006, 22:2021-2034.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.