
Alternating Mann iterative algorithms for the split common fixed-point problem of quasi-nonexpansive mappings

Abstract

Very recently, Moudafi (Alternating CQ-algorithms for convex feasibility and split fixed-point problems, J. Nonlinear Convex Anal.) introduced an alternating CQ-algorithm with weak convergence for the following split common fixed-point problem. Let $H_1$, $H_2$, $H_3$ be real Hilbert spaces, and let $A: H_1 \to H_3$, $B: H_2 \to H_3$ be two bounded linear operators.

Find $x \in F(U)$, $y \in F(T)$ such that $Ax = By$,
(1)

where $U: H_1 \to H_1$ and $T: H_2 \to H_2$ are two firmly quasi-nonexpansive operators with nonempty fixed-point sets $F(U) = \{x \in H_1 : Ux = x\}$ and $F(T) = \{x \in H_2 : Tx = x\}$. Note that by taking $H_2 = H_3$ and $B = I$, we recover the split common fixed-point problem originally introduced in Censor and Segal (J. Convex Anal. 16:587-600, 2009) and used to model many significant real-world inverse problems in sensor networks and radiation therapy treatment planning. In this paper, we continue to consider the split common fixed-point problem (1) governed by the general class of quasi-nonexpansive operators. We introduce two alternating Mann iterative algorithms and prove their weak convergence. Finally, we provide some applications. Our results improve and extend the corresponding results announced by many others.

MSC:47H09, 47H10, 47J05, 54H25.

1 Introduction

Throughout this paper, we always assume that $H$ is a real Hilbert space with the inner product $\langle\cdot, \cdot\rangle$ and the norm $\|\cdot\|$. Let $I$ denote the identity operator on $H$. Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. The split feasibility problem (SFP) is to find a point

$x \in C$ such that $Ax \in Q$,
(1.1)

where $A: H_1 \to H_2$ is a bounded linear operator. The SFP in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrievals and in medical image reconstruction [2]. The SFP has attracted many authors' attention due to its applications in signal processing. Various algorithms have been invented to solve it (see [3–12] and the references therein).

Note that if the split feasibility problem (1.1) is consistent (i.e., (1.1) has a solution), then (1.1) can be formulated as a fixed point equation by using the fact

$$P_C\bigl(I - \gamma A^{*}(I - P_Q)A\bigr)x^{*} = x^{*}.$$
(1.2)

That is, $x^{*}$ solves the SFP (1.1) if and only if $x^{*}$ solves the fixed point equation (1.2) (see [13] for the details). This implies that we can use fixed point algorithms (see [6, 13–15]) to solve the SFP. A popular algorithm for solving the SFP (1.1) is Byrne's CQ algorithm [2], which turns out to be a gradient-projection method (GPM) in convex minimization. Subsequently, Byrne [3] applied the Krasnosel'skiĭ-Mann (KM) iteration to the CQ algorithm, and Zhao and Yang [16] applied the KM iteration to a perturbed CQ algorithm to solve the SFP.
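To make the fixed point formulation (1.2) and the CQ iteration concrete, here is a minimal finite-dimensional sketch in Python/NumPy. The set C (a box), the set Q (the unit ball), the random matrix A and the step size are purely illustrative assumptions; proj_C and proj_Q stand in for whatever projections are available in a given application.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, n_iter=500):
    """Byrne's CQ iteration x_{k+1} = P_C(x_k - gamma * A^T (I - P_Q) A x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))      # gradient of (1/2)||(I - P_Q) A x||^2
        x = proj_C(x - gamma * grad)        # gradient-projection step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))                       # illustrative operator A: H1 -> H2
    proj_C = lambda x: np.clip(x, -1.0, 1.0)              # C: a box (illustrative)
    proj_Q = lambda y: y / max(1.0, np.linalg.norm(y))    # Q: the unit ball (illustrative)
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2               # gamma in (0, 2/lambda), lambda = rho(A^T A)
    x = cq_algorithm(A, proj_C, proj_Q, rng.standard_normal(3), gamma)
    print(np.linalg.norm(A @ x - proj_Q(A @ x)))          # distance of Ax from Q, expected small
```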

Recently, Moudafi [17] introduced a new convex feasibility problem (CFP). Let $H_1$, $H_2$, $H_3$ be real Hilbert spaces, let $C \subseteq H_1$, $Q \subseteq H_2$ be two nonempty closed convex sets, and let $A: H_1 \to H_3$, $B: H_2 \to H_3$ be two bounded linear operators. The convex feasibility problem in [17] is to find

$x \in C$, $y \in Q$ such that $Ax = By$,
(1.3)

which allows asymmetric and partial relations between the variables $x$ and $y$. The interest is to cover many situations, for instance, decomposition methods for PDEs, applications in game theory, and intensity-modulated radiation therapy (IMRT). In decision sciences, this allows one to consider agents who interplay only via some components of their decision variables (see [18]). In IMRT, this amounts to envisaging a weak coupling between the vector of doses absorbed in all voxels and that of the radiation intensity (see [19]). If $H_2 = H_3$ and $B = I$, then the convex feasibility problem (1.3) reduces to the split feasibility problem (1.1).

For solving the CFP (1.3), Moudafi [17] studied the fixed point formulation of its solutions. Assuming that the CFP (1.3) is consistent (i.e., (1.3) has a solution), if $(x, y)$ solves (1.3), then it solves the following fixed point equation system

$$\begin{cases} x = P_C\bigl(x - \gamma A^{*}(Ax - By)\bigr), \\ y = P_Q\bigl(y + \beta B^{*}(Ax - By)\bigr), \end{cases}$$
(1.4)

where $\gamma, \beta > 0$ are any positive constants. Moudafi [17] introduced the following alternating CQ algorithm:

$$\begin{cases} x_{k+1} = P_C\bigl(x_k - \gamma_k A^{*}(Ax_k - By_k)\bigr), \\ y_{k+1} = P_Q\bigl(y_k + \gamma_k B^{*}(Ax_{k+1} - By_k)\bigr), \end{cases}$$
(1.5)

where $\gamma_k \in \bigl(\varepsilon, \min\bigl(\frac{1}{\lambda_A}, \frac{1}{\lambda_B}\bigr) - \varepsilon\bigr)$, and $\lambda_A$ and $\lambda_B$ are the spectral radii of $A^{*}A$ and $B^{*}B$, respectively.
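In a finite-dimensional implementation, the admissible range for $\gamma_k$ can be read off directly from the spectral radii $\lambda_A$ of $A^{*}A$ and $\lambda_B$ of $B^{*}B$. A brief hedged sketch (the matrices and the margin $\varepsilon$ below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))     # illustrative A: H1 -> H3
B = rng.standard_normal((6, 5))     # illustrative B: H2 -> H3

lam_A = np.linalg.norm(A, 2) ** 2   # spectral radius of A^T A (largest singular value squared)
lam_B = np.linalg.norm(B, 2) ** 2   # spectral radius of B^T B

eps = 1e-3                                           # small illustrative margin
lo, hi = eps, min(1.0 / lam_A, 1.0 / lam_B) - eps    # gamma_k must stay in (lo, hi)
gamma_k = 0.5 * (lo + hi)           # e.g. a constant (hence non-decreasing) admissible choice
```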

The split common fixed-point problem (SCFP) is a generalization of the split feasibility problem (SFP) and the convex feasibility problem (CFP); see [20]. The SCFP is in itself at the core of the modeling of many inverse problems in various areas of mathematics and the physical sciences and has been used to model significant real-world inverse problems in sensor networks, in radiation therapy treatment planning, in resolution enhancement, in wavelet-based denoising, in antenna design, in computerized tomography, in materials science, in watermarking, in data compression, in magnetic resonance imaging, in holography, in color imaging, in optics and neural networks, in graph matching, etc. (see [21]). Censor and Segal [20] considered the following problem:

find $x \in C$ such that $Ax \in Q$,
(1.6)

where $A: H_1 \to H_2$ is a bounded linear operator, and $U: H_1 \to H_1$ and $T: H_2 \to H_2$ are two nonexpansive operators with nonempty fixed-point sets $F(U) = C$ and $F(T) = Q$. Denote the solution set of the two-operator SCFP by

$$\Gamma = \{y \in C : Ay \in Q\}.$$

To solve (1.6), Censor and Segal [20] proposed and proved, in finite-dimensional spaces, the convergence of the following algorithm:

$$x_{k+1} = U\bigl(x_k + \gamma A^{t}(T - I)Ax_k\bigr), \quad k \in \mathbb{N},$$

where $\gamma \in (0, \frac{2}{\lambda})$, with $\lambda$ being the largest eigenvalue of the matrix $A^{t}A$ ($A^{t}$ stands for matrix transposition). For solving the SCFP for quasi-nonexpansive operators, Moudafi [22] introduced the following relaxed algorithm:

$$x_{k+1} = \alpha_k u_k + (1 - \alpha_k)U(u_k), \quad k \in \mathbb{N},$$
(1.7)

where $u_k = x_k + \gamma\beta A^{*}(T - I)Ax_k$, $\beta \in (0, 1)$, $\alpha_k \in (0, 1)$ and $\gamma \in (0, \frac{1}{\lambda\beta})$, with $\lambda$ being the spectral radius of the operator $A^{*}A$. Moudafi proved a weak convergence result for this algorithm in Hilbert spaces.

In [17], Moudafi introduced the following SCFP

find $x \in F(U)$, $y \in F(T)$ such that $Ax = By$,
(1.8)

and considered the following alternating SCFP-algorithm

$$\begin{cases} x_{k+1} = U\bigl(x_k - \gamma_k A^{*}(Ax_k - By_k)\bigr), \\ y_{k+1} = T\bigl(y_k + \gamma_k B^{*}(Ax_{k+1} - By_k)\bigr) \end{cases}$$
(1.9)

for firmly quasi-nonexpansive operators U and T. Moudafi [17] obtained the following result.

Theorem 1.1 Let $H_1$, $H_2$, $H_3$ be real Hilbert spaces, and let $U: H_1 \to H_1$, $T: H_2 \to H_2$ be two firmly quasi-nonexpansive operators such that $I - U$, $I - T$ are demiclosed at 0. Let $A: H_1 \to H_3$, $B: H_2 \to H_3$ be two bounded linear operators. Assume that the solution set $\Gamma$ is nonempty and that $(\gamma_k)$ is a positive non-decreasing sequence such that $\gamma_k \in \bigl(\varepsilon, \min\bigl(\frac{1}{\lambda_A}, \frac{1}{\lambda_B}\bigr) - \varepsilon\bigr)$, where $\lambda_A$, $\lambda_B$ stand for the spectral radii of $A^{*}A$ and $B^{*}B$, respectively. Then the sequence $(x_k, y_k)$ generated by (1.9) weakly converges to a solution $(\bar{x}, \bar{y})$ of (1.8). Moreover, $\|Ax_k - By_k\| \to 0$, $\|x_k - x_{k+1}\| \to 0$, and $\|y_k - y_{k+1}\| \to 0$ as $k \to \infty$.

In this paper, inspired and motivated by the works mentioned above, we first introduce the following alternating Mann iterative algorithm for solving the SCFP (1.8) for the general class of quasi-nonexpansive operators.

Algorithm 1.1 Let $x_0 \in H_1$, $y_0 \in H_2$ be arbitrary.

$$\begin{cases} u_k = x_k - \gamma_k A^{*}(Ax_k - By_k), \\ x_{k+1} = \alpha_k u_k + (1 - \alpha_k)U(u_k), \\ v_{k+1} = y_k + \gamma_k B^{*}(Ax_{k+1} - By_k), \\ y_{k+1} = \beta_k v_{k+1} + (1 - \beta_k)T(v_{k+1}). \end{cases}$$

By taking $B = I$, (1.8) clearly recovers the classical SCFP (1.6). In addition, if $\gamma_k = 1$ and $\beta_k = \beta \in (0, 1)$ in Algorithm 1.1, we have $v_{k+1} = Ax_{k+1}$ and $y_{k+1} = \beta Ax_{k+1} + (1 - \beta)T(Ax_{k+1})$. Thus, Algorithm 1.1 reduces to $u_k = x_k + (1 - \beta)A^{*}(T - I)Ax_k$ and $x_{k+1} = \alpha_k u_k + (1 - \alpha_k)U(u_k)$, which is algorithm (1.7) proposed by Moudafi [22].

The CQ algorithm is a special case of the KM algorithm. Due to the fixed point formulation (1.4) of the CFP (1.3), we can apply the KM algorithm to obtain the following alternative Mann iterative sequence for solving the SCFP (1.8) for quasi-nonexpansive operators.

Algorithm 1.2 Let $x_0 \in H_1$, $y_0 \in H_2$ be arbitrary.

$$\begin{cases} u_k = x_k - \gamma_k A^{*}(Ax_k - By_k), \\ x_{k+1} = \alpha_k x_k + (1 - \alpha_k)U(u_k), \\ v_{k+1} = y_k + \gamma_k B^{*}(Ax_{k+1} - By_k), \\ y_{k+1} = \alpha_k y_k + (1 - \alpha_k)T(v_{k+1}). \end{cases}$$
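For concreteness, the following Python/NumPy sketch implements Algorithms 1.1 and 1.2 for finite-dimensional operators. It assumes that U and T are supplied as quasi-nonexpansive callables and, only for simplicity, takes the parameter sequences $\gamma_k$, $\alpha_k$, $\beta_k$ constant; these simplifications are ours, not part of the algorithms' statements.

```python
import numpy as np

def alternating_mann(A, B, U, T, x0, y0, gamma, alpha=0.5, beta=0.5,
                     n_iter=1000, variant=1):
    """Alternating Mann iterations for: find x in F(U), y in F(T) with Ax = By.

    variant=1 follows Algorithm 1.1 (relaxation anchored at u_k and v_{k+1});
    variant=2 follows Algorithm 1.2 (relaxation anchored at x_k and y_k, alpha in both updates).
    U and T are assumed to be quasi-nonexpansive callables (not checked here).
    """
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(n_iter):
        u = x - gamma * (A.T @ (A @ x - B @ y))     # u_k
        if variant == 1:
            x = alpha * u + (1 - alpha) * U(u)      # Algorithm 1.1: x_{k+1}
        else:
            x = alpha * x + (1 - alpha) * U(u)      # Algorithm 1.2: x_{k+1}
        v = y + gamma * (B.T @ (A @ x - B @ y))     # v_{k+1}, uses the updated x_{k+1}
        if variant == 1:
            y = beta * v + (1 - beta) * T(v)        # Algorithm 1.1: y_{k+1}
        else:
            y = alpha * y + (1 - alpha) * T(v)      # Algorithm 1.2: y_{k+1}
    return x, y
```

With $B = I$ and $\gamma_k = 1$, variant 1 matches the reduction to algorithm (1.7) noted in the remark following Algorithm 1.1.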

The organization of this paper is as follows. In Section 2, we list some useful definitions and results needed for the convergence analysis of the iterative algorithms. In Section 3, we prove the weak convergence of the alternating Mann iterative Algorithms 1.1 and 1.2. Finally, in Section 4, we provide some applications of Algorithms 1.1 and 1.2.

2 Preliminaries

Let $T: H \to H$ be a mapping. A point $x \in H$ is said to be a fixed point of $T$ provided $Tx = x$. In this paper, we use $F(T)$ to denote the fixed point set of $T$, and we use $\to$ and $\rightharpoonup$ to denote strong convergence and weak convergence, respectively. We use $\omega_w(x_k) = \{x : \exists\, x_{k_j} \rightharpoonup x\}$ to stand for the weak $\omega$-limit set of $\{x_k\}$ and $\Gamma$ to stand for the solution set of the SCFP (1.8).

  • A mapping $T: H \to H$ belongs to the general class $\Phi_Q$ of (possibly discontinuous) quasi-nonexpansive mappings if

    $\|Tx - q\| \le \|x - q\|, \quad \forall (x, q) \in H \times F(T).$

  • A mapping $T: H \to H$ belongs to the set $\Phi_N$ of nonexpansive mappings if

    $\|Tx - Ty\| \le \|x - y\|, \quad \forall (x, y) \in H \times H.$

  • A mapping $T: H \to H$ belongs to the set $\Phi_{FN}$ of firmly nonexpansive mappings if

    $\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(x - y) - (Tx - Ty)\|^2, \quad \forall (x, y) \in H \times H.$

  • A mapping $T: H \to H$ belongs to the set $\Phi_{FQ}$ of firmly quasi-nonexpansive mappings if

    $\|Tx - q\|^2 \le \|x - q\|^2 - \|x - Tx\|^2, \quad \forall (x, q) \in H \times F(T).$

It is easily observed that $\Phi_{FN} \subset \Phi_N \subset \Phi_Q$ and that $\Phi_{FN} \subset \Phi_{FQ} \subset \Phi_Q$. Furthermore, $\Phi_{FN}$ is well known to include resolvents and projection operators, while $\Phi_{FQ}$ contains subgradient projection operators (see, for instance, [23] and the references therein).
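As a quick numerical sanity check of these definitions, the snippet below verifies at random points that the metric projection onto the Euclidean unit ball (a member of $\Phi_{FN}$, as noted above) satisfies the firmly nonexpansive inequality; this is only an illustrative check, not a proof.

```python
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto the closed Euclidean unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    Tx, Ty = proj_unit_ball(x), proj_unit_ball(y)
    lhs = np.linalg.norm(Tx - Ty) ** 2
    rhs = np.linalg.norm(x - y) ** 2 - np.linalg.norm((x - y) - (Tx - Ty)) ** 2
    assert lhs <= rhs + 1e-12   # the Phi_FN inequality from the definition above
```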

A mapping $T: H \to H$ is said to be demiclosed at the origin if, for any sequence $\{x_n\}$ which converges weakly to $x$ such that the sequence $\{Tx_n\}$ converges strongly to 0, we have $Tx = 0$.

In a real Hilbert space, we easily get the following equality:

$$2\langle x, y\rangle = \|x\|^2 + \|y\|^2 - \|x - y\|^2 = \|x + y\|^2 - \|x\|^2 - \|y\|^2, \quad \forall x, y \in H.$$
(2.1)

In what follows, we give some key properties of the relaxed operator $T_\alpha = \alpha I + (1 - \alpha)T$, which will be needed in the convergence analysis of our algorithms.

Lemma 2.1 ([22])

Let $H$ be a real Hilbert space, and let $T: H \to H$ be a quasi-nonexpansive mapping. Set $T_\alpha = \alpha I + (1 - \alpha)T$ for $\alpha \in [0, 1)$. Then the following properties hold for all $(x, q) \in H \times F(T)$:

  (i) $\langle x - Tx, x - q\rangle \ge \frac{1}{2}\|x - Tx\|^2$ and $\langle x - Tx, q - Tx\rangle \le \frac{1}{2}\|x - Tx\|^2$;

  (ii) $\|T_\alpha x - q\|^2 \le \|x - q\|^2 - \alpha(1 - \alpha)\|Tx - x\|^2$;

  (iii) $\langle x - T_\alpha x, x - q\rangle \ge \frac{1 - \alpha}{2}\|x - Tx\|^2$.

Remark 2.2 Let $T_\alpha = \alpha I + (1 - \alpha)T$, where $T: H \to H$ is a quasi-nonexpansive mapping and $\alpha \in [0, 1)$. We have $F(T_\alpha) = F(T)$ and $\|T_\alpha x - x\|^2 = (1 - \alpha)^2\|Tx - x\|^2$. It follows from (ii) of Lemma 2.1 that $\|T_\alpha x - q\|^2 \le \|x - q\|^2 - \frac{\alpha}{1 - \alpha}\|T_\alpha x - x\|^2$, which implies that $T_\alpha$ is firmly quasi-nonexpansive when $\alpha = \frac{1}{2}$. On the other hand, if $\hat{T}$ is a firmly quasi-nonexpansive mapping, we can write $\hat{T} = \frac{1}{2}I + \frac{1}{2}T$, where $T = 2\hat{T} - I$ is quasi-nonexpansive. This is proved by the following inequalities.

For all xH and qF( T ˆ )=F(T),

$$\begin{aligned} \|Tx - q\|^2 &= \bigl\|(2\hat{T} - I)x - q\bigr\|^2 = \bigl\|(\hat{T}x - q) + (\hat{T}x - x)\bigr\|^2 \\ &= \|\hat{T}x - q\|^2 + \|\hat{T}x - x\|^2 + 2\langle\hat{T}x - q, \hat{T}x - x\rangle \\ &= \|\hat{T}x - q\|^2 + \|\hat{T}x - x\|^2 + \|\hat{T}x - q\|^2 + \|\hat{T}x - x\|^2 - \|x - q\|^2 \\ &= 2\|\hat{T}x - q\|^2 + 2\|\hat{T}x - x\|^2 - \|x - q\|^2 \\ &\le 2\|x - q\|^2 - 2\|\hat{T}x - x\|^2 + 2\|\hat{T}x - x\|^2 - \|x - q\|^2 = \|x - q\|^2, \end{aligned}$$

where the inequality follows from the firm quasi-nonexpansiveness of $\hat{T}$.

Lemma 2.3 ([24])

Let $H$ be a real Hilbert space. Then for all $t \in [0, 1]$ and $x, y \in H$,

$$\|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2.$$

3 Convergence of the alternating Mann iterative Algorithms 1.1 and 1.2

Theorem 3.1 Let $H_1$, $H_2$, $H_3$ be real Hilbert spaces. Given two bounded linear operators $A: H_1 \to H_3$, $B: H_2 \to H_3$, let $U: H_1 \to H_1$ and $T: H_2 \to H_2$ be quasi-nonexpansive mappings with nonempty fixed point sets $F(U)$ and $F(T)$. Assume that $U - I$, $T - I$ are demiclosed at the origin and that the solution set $\Gamma$ of (1.8) is nonempty. Let $\{\gamma_k\}$ be a positive non-decreasing sequence such that $\gamma_k \in \bigl(\varepsilon, \min\bigl(\frac{1}{\lambda_A}, \frac{1}{\lambda_B}\bigr) - \varepsilon\bigr)$, where $\lambda_A$, $\lambda_B$ stand for the spectral radii of $A^{*}A$ and $B^{*}B$, respectively, and $\varepsilon$ is small enough. Then the sequence $\{(x_k, y_k)\}$ generated by Algorithm 1.1 weakly converges to a solution $(x^{*}, y^{*})$ of (1.8), provided that $\{\alpha_k\} \subset (\delta, 1 - \delta)$ and $\{\beta_k\} \subset (\sigma, 1 - \sigma)$ for small enough $\delta, \sigma > 0$. Moreover, $\|Ax_k - By_k\| \to 0$, $\|x_k - x_{k+1}\| \to 0$ and $\|y_k - y_{k+1}\| \to 0$ as $k \to \infty$.

Proof Take $(x, y) \in \Gamma$, i.e., $x \in F(U)$, $y \in F(T)$ and $Ax = By$. We have

$$\begin{aligned} \|u_k - x\|^2 &= \bigl\|x_k - \gamma_k A^{*}(Ax_k - By_k) - x\bigr\|^2 \\ &= \|x_k - x\|^2 - 2\gamma_k\bigl\langle x_k - x, A^{*}(Ax_k - By_k)\bigr\rangle + \gamma_k^2\bigl\|A^{*}(Ax_k - By_k)\bigr\|^2. \end{aligned}$$
(3.1)

From the definition of $\lambda_A$, it follows that

$$\begin{aligned} \gamma_k^2\bigl\|A^{*}(Ax_k - By_k)\bigr\|^2 &= \gamma_k^2\bigl\langle A^{*}(Ax_k - By_k), A^{*}(Ax_k - By_k)\bigr\rangle \\ &= \gamma_k^2\bigl\langle Ax_k - By_k, AA^{*}(Ax_k - By_k)\bigr\rangle \\ &\le \lambda_A\gamma_k^2\langle Ax_k - By_k, Ax_k - By_k\rangle = \lambda_A\gamma_k^2\|Ax_k - By_k\|^2. \end{aligned}$$
(3.2)

Using equality (2.1), we have

$$\begin{aligned} 2\bigl\langle x_k - x, A^{*}(Ax_k - By_k)\bigr\rangle &= 2\langle Ax_k - Ax, Ax_k - By_k\rangle \\ &= \|Ax_k - Ax\|^2 + \|Ax_k - By_k\|^2 - \|By_k - Ax\|^2. \end{aligned}$$
(3.3)

By (3.1)-(3.3), we obtain

$$\begin{aligned} \|u_k - x\|^2 \le{} & \|x_k - x\|^2 - \gamma_k(1 - \lambda_A\gamma_k)\|Ax_k - By_k\|^2 \\ & - \gamma_k\|Ax_k - Ax\|^2 + \gamma_k\|By_k - Ax\|^2. \end{aligned}$$
(3.4)

Similarly, by Algorithm 1.1, we have

$$\begin{aligned} \|v_{k+1} - y\|^2 \le{} & \|y_k - y\|^2 - \gamma_k(1 - \lambda_B\gamma_k)\|Ax_{k+1} - By_k\|^2 \\ & - \gamma_k\|By_k - By\|^2 + \gamma_k\|Ax_{k+1} - By\|^2. \end{aligned}$$
(3.5)

By adding the last two inequalities, and by taking into account the assumption on $\{\gamma_k\}$ and the fact that $Ax = By$, we obtain

$$\begin{aligned} \|u_k - x\|^2 + \|v_{k+1} - y\|^2 \le{} & \|x_k - x\|^2 + \|y_k - y\|^2 - \gamma_k\|Ax_k - Ax\|^2 + \gamma_{k+1}\|Ax_{k+1} - Ax\|^2 \\ & - \gamma_k(1 - \lambda_A\gamma_k)\|Ax_k - By_k\|^2 - \gamma_k(1 - \lambda_B\gamma_k)\|Ax_{k+1} - By_k\|^2. \end{aligned}$$
(3.6)

Using the fact that $U$ and $T$ are quasi-nonexpansive mappings, it follows from property (ii) of Lemma 2.1 that

$$\|x_{k+1} - x\|^2 \le \|u_k - x\|^2 - \alpha_k(1 - \alpha_k)\bigl\|U(u_k) - u_k\bigr\|^2$$

and

$$\|y_{k+1} - y\|^2 \le \|v_{k+1} - y\|^2 - \beta_k(1 - \beta_k)\bigl\|T(v_{k+1}) - v_{k+1}\bigr\|^2.$$

So, by (3.6), we have

$$\begin{aligned} \|x_{k+1} - x\|^2 + \|y_{k+1} - y\|^2 \le{} & \|x_k - x\|^2 + \|y_k - y\|^2 - \gamma_k\|Ax_k - Ax\|^2 + \gamma_{k+1}\|Ax_{k+1} - Ax\|^2 \\ & - \gamma_k(1 - \lambda_A\gamma_k)\|Ax_k - By_k\|^2 - \gamma_k(1 - \lambda_B\gamma_k)\|Ax_{k+1} - By_k\|^2 \\ & - \alpha_k(1 - \alpha_k)\bigl\|U(u_k) - u_k\bigr\|^2 - \beta_k(1 - \beta_k)\bigl\|T(v_{k+1}) - v_{k+1}\bigr\|^2. \end{aligned}$$
(3.7)

Now, by setting $\rho_k(x, y) := \|x_k - x\|^2 + \|y_k - y\|^2 - \gamma_k\|Ax_k - Ax\|^2$, we obtain the following inequality:

$$\begin{aligned} \rho_{k+1}(x, y) \le{} & \rho_k(x, y) - \gamma_k(1 - \lambda_A\gamma_k)\|Ax_k - By_k\|^2 - \gamma_k(1 - \lambda_B\gamma_k)\|Ax_{k+1} - By_k\|^2 \\ & - \alpha_k(1 - \alpha_k)\bigl\|U(u_k) - u_k\bigr\|^2 - \beta_k(1 - \beta_k)\bigl\|T(v_{k+1}) - v_{k+1}\bigr\|^2. \end{aligned}$$
(3.8)

On the other hand, noting that

$$\gamma_k\|Ax_k - Ax\|^2 = \gamma_k\bigl\langle x_k - x, A^{*}(Ax_k - Ax)\bigr\rangle \le \gamma_k\lambda_A\|x_k - x\|^2,$$

we have

$$\rho_k(x, y) \ge (1 - \lambda_A\gamma_k)\|x_k - x\|^2 + \|y_k - y\|^2 \ge 0.$$
(3.9)

The sequence $\{\rho_k(x, y)\}$ is decreasing and bounded below by 0; consequently, it converges to some finite limit, say $\rho(x, y)$. Again from (3.8), we have $\rho_{k+1}(x, y) \le \rho_k(x, y) - \gamma_k(1 - \lambda_A\gamma_k)\|Ax_k - By_k\|^2$, and hence

$$\lim_{k \to \infty}\|Ax_k - By_k\| = 0$$

by the assumption on $\{\gamma_k\}$. Similarly, by the conditions on $\{\gamma_k\}$, $\{\alpha_k\}$ and $\{\beta_k\}$, we obtain

$$\lim_{k \to \infty}\|Ax_{k+1} - By_k\| = \lim_{k \to \infty}\bigl\|U(u_k) - u_k\bigr\| = \lim_{k \to \infty}\bigl\|T(v_{k+1}) - v_{k+1}\bigr\| = 0.$$

Since

$$\|u_k - x_k\| = \gamma_k\bigl\|A^{*}(Ax_k - By_k)\bigr\|,$$

and $\{\gamma_k\}$ is bounded, we have $\lim_{k \to \infty}\|u_k - x_k\| = 0$. It follows from $\lim_{k \to \infty}\|U(u_k) - u_k\| = 0$ that $\lim_{k \to \infty}\|U(u_k) - x_k\| = 0$. So,

$$\|x_{k+1} - x_k\| \le \alpha_k\|u_k - x_k\| + (1 - \alpha_k)\bigl\|U(u_k) - x_k\bigr\| \to 0$$

as $k \to \infty$, which implies that $\{x_k\}$ is asymptotically regular, namely $\lim_{k \to \infty}\|x_{k+1} - x_k\| = 0$. Similarly, $\lim_{k \to \infty}\|v_{k+1} - y_k\| = 0$, and $\{y_k\}$ is asymptotically regular, too. Now, relation (3.9) and the assumption on $\{\gamma_k\}$ imply that

$$\rho_k(x, y) \ge \varepsilon\lambda_A\|x_k - x\|^2 + \|y_k - y\|^2,$$

which ensures that both sequences $\{x_k\}$ and $\{y_k\}$ are bounded, thanks to the fact that $\{\rho_k(x, y)\}$ converges to a finite limit.

Take $x^{*} \in \omega_w(x_k)$ and $y^{*} \in \omega_w(y_k)$. From $\lim_{k \to \infty}\|u_k - x_k\| = 0$ and $\lim_{k \to \infty}\|v_{k+1} - y_k\| = 0$, we have $x^{*} \in \omega_w(u_k)$ and $y^{*} \in \omega_w(v_{k+1})$. Combined with the demiclosedness of $U - I$ and $T - I$ at 0,

$$\lim_{k \to \infty}\bigl\|U(u_k) - u_k\bigr\| = \lim_{k \to \infty}\bigl\|T(v_{k+1}) - v_{k+1}\bigr\| = 0$$

yields $Ux^{*} = x^{*}$ and $Ty^{*} = y^{*}$. So, $x^{*} \in F(U)$ and $y^{*} \in F(T)$. On the other hand, $Ax^{*} - By^{*} \in \omega_w(Ax_k - By_k)$ and the weak lower semicontinuity of the norm imply that

$$\bigl\|Ax^{*} - By^{*}\bigr\| \le \liminf_{k \to \infty}\|Ax_k - By_k\| = 0,$$

hence $(x^{*}, y^{*}) \in \Gamma$.

Next, we show the uniqueness of the weak cluster points of $\{x_k\}$ and $\{y_k\}$. Indeed, let $\bar{x}$, $\bar{y}$ be other weak cluster points of $\{x_k\}$ and $\{y_k\}$, respectively; then $(\bar{x}, \bar{y}) \in \Gamma$. From the definition of $\rho_k(x, y)$, we have

$$\begin{aligned} \rho_k\bigl(x^{*}, y^{*}\bigr) ={} & \bigl\|x_k - x^{*}\bigr\|^2 + \bigl\|y_k - y^{*}\bigr\|^2 - \gamma_k\bigl\|Ax_k - Ax^{*}\bigr\|^2 \\ ={} & \|x_k - \bar{x}\|^2 + \bigl\|\bar{x} - x^{*}\bigr\|^2 + 2\bigl\langle x_k - \bar{x}, \bar{x} - x^{*}\bigr\rangle + \|y_k - \bar{y}\|^2 + \bigl\|\bar{y} - y^{*}\bigr\|^2 + 2\bigl\langle y_k - \bar{y}, \bar{y} - y^{*}\bigr\rangle \\ & - \gamma_k\bigl(\|Ax_k - A\bar{x}\|^2 + \bigl\|A\bar{x} - Ax^{*}\bigr\|^2 + 2\bigl\langle Ax_k - A\bar{x}, A\bar{x} - Ax^{*}\bigr\rangle\bigr) \\ ={} & \rho_k(\bar{x}, \bar{y}) + \bigl\|\bar{x} - x^{*}\bigr\|^2 + \bigl\|\bar{y} - y^{*}\bigr\|^2 - \gamma_k\bigl\|A\bar{x} - Ax^{*}\bigr\|^2 \\ & + 2\bigl\langle x_k - \bar{x}, \bar{x} - x^{*}\bigr\rangle + 2\bigl\langle y_k - \bar{y}, \bar{y} - y^{*}\bigr\rangle - 2\gamma_k\bigl\langle Ax_k - A\bar{x}, A\bar{x} - Ax^{*}\bigr\rangle. \end{aligned}$$
(3.10)

Without loss of generality, we may assume that $x_k \rightharpoonup \bar{x}$, $y_k \rightharpoonup \bar{y}$ and $\gamma_k \to \gamma$ because of the boundedness of the sequence $\{\gamma_k\}$. By passing to the limit in relation (3.10), we obtain

$$\rho\bigl(x^{*}, y^{*}\bigr) = \rho(\bar{x}, \bar{y}) + \bigl\|\bar{x} - x^{*}\bigr\|^2 + \bigl\|\bar{y} - y^{*}\bigr\|^2 - \gamma\bigl\|A\bar{x} - Ax^{*}\bigr\|^2.$$

Reversing the roles of $(x^{*}, y^{*})$ and $(\bar{x}, \bar{y})$, we also have

$$\rho(\bar{x}, \bar{y}) = \rho\bigl(x^{*}, y^{*}\bigr) + \bigl\|x^{*} - \bar{x}\bigr\|^2 + \bigl\|y^{*} - \bar{y}\bigr\|^2 - \gamma\bigl\|Ax^{*} - A\bar{x}\bigr\|^2.$$

By adding the last two equalities, and having in mind that $\{\gamma_k\}$ is a non-decreasing sequence satisfying $1 - \gamma_k\lambda_A > \varepsilon\lambda_A$, we obtain

$$\varepsilon\lambda_A\bigl\|x^{*} - \bar{x}\bigr\|^2 + \bigl\|y^{*} - \bar{y}\bigr\|^2 \le 0.$$

Hence $x^{*} = \bar{x}$ and $y^{*} = \bar{y}$, which implies that the whole sequence $\{(x_k, y_k)\}$ weakly converges to a solution of problem (1.8). This completes the proof. □

Remark 3.2 Taking $\alpha_k = \beta_k = \frac{1}{2}$ in Algorithm 1.1, it follows from Remark 2.2 that Theorem 3.1 reduces to Theorem 1.1, which was proved by Moudafi [17].

Theorem 3.3 Let $H_1$, $H_2$, $H_3$ be real Hilbert spaces. Given two bounded linear operators $A: H_1 \to H_3$, $B: H_2 \to H_3$, let $U: H_1 \to H_1$ and $T: H_2 \to H_2$ be quasi-nonexpansive mappings with nonempty fixed point sets $F(U)$ and $F(T)$. Assume that $U - I$, $T - I$ are demiclosed at the origin and that the solution set $\Gamma$ of (1.8) is nonempty. Let $\{\gamma_k\}$ be a positive non-decreasing sequence such that $\gamma_k \in \bigl(\varepsilon, \min\bigl(\frac{1}{\lambda_A}, \frac{1}{\lambda_B}\bigr) - \varepsilon\bigr)$, where $\lambda_A$, $\lambda_B$ stand for the spectral radii of $A^{*}A$ and $B^{*}B$, respectively, and $\varepsilon$ is small enough. Then the sequence $\{(x_k, y_k)\}$ generated by Algorithm 1.2 weakly converges to a solution $(x^{*}, y^{*})$ of (1.8), provided that $\{\alpha_k\}$ is a non-increasing sequence such that $\{\alpha_k\} \subset (\delta, 1 - \delta)$ for small enough $\delta > 0$. Moreover, $\|Ax_k - By_k\| \to 0$, $\|x_k - x_{k+1}\| \to 0$ and $\|y_k - y_{k+1}\| \to 0$ as $k \to \infty$.

Proof Take $(x, y) \in \Gamma$, i.e., $x \in F(U)$, $y \in F(T)$ and $Ax = By$. By repeating the proof of Theorem 3.1, we see that (3.6) holds.

Using the fact that U and T are quasi-nonexpansive mappings, it follows from Lemma 2.3 that

$$\begin{aligned} \|x_{k+1} - x\|^2 &= \alpha_k\|x_k - x\|^2 + (1 - \alpha_k)\bigl\|U(u_k) - x\bigr\|^2 - \alpha_k(1 - \alpha_k)\bigl\|U(u_k) - x_k\bigr\|^2 \\ &\le \alpha_k\|x_k - x\|^2 + (1 - \alpha_k)\|u_k - x\|^2 - \alpha_k(1 - \alpha_k)\bigl\|U(u_k) - x_k\bigr\|^2 \end{aligned}$$

and

$$\|y_{k+1} - y\|^2 \le \alpha_k\|y_k - y\|^2 + (1 - \alpha_k)\|v_{k+1} - y\|^2 - \alpha_k(1 - \alpha_k)\bigl\|T(v_{k+1}) - y_k\bigr\|^2.$$

So, by (3.6) and the assumption on $\{\alpha_k\}$, we have

$$\begin{aligned} \|x_{k+1} - x\|^2 + \|y_{k+1} - y\|^2 \le{} & \|x_k - x\|^2 + \|y_k - y\|^2 - \gamma_k(1 - \alpha_k)\|Ax_k - Ax\|^2 + \gamma_{k+1}(1 - \alpha_{k+1})\|Ax_{k+1} - Ax\|^2 \\ & - \gamma_k(1 - \alpha_k)(1 - \lambda_A\gamma_k)\|Ax_k - By_k\|^2 - \gamma_k(1 - \alpha_k)(1 - \lambda_B\gamma_k)\|Ax_{k+1} - By_k\|^2 \\ & - \alpha_k(1 - \alpha_k)\bigl\|U(u_k) - x_k\bigr\|^2 - \alpha_k(1 - \alpha_k)\bigl\|T(v_{k+1}) - y_k\bigr\|^2. \end{aligned}$$
(3.11)

Now, by setting $\rho_k(x, y) := \|x_k - x\|^2 + \|y_k - y\|^2 - \gamma_k(1 - \alpha_k)\|Ax_k - Ax\|^2$, we obtain the following inequality:

$$\begin{aligned} \rho_{k+1}(x, y) \le{} & \rho_k(x, y) - \gamma_k(1 - \alpha_k)(1 - \lambda_A\gamma_k)\|Ax_k - By_k\|^2 - \gamma_k(1 - \alpha_k)(1 - \lambda_B\gamma_k)\|Ax_{k+1} - By_k\|^2 \\ & - \alpha_k(1 - \alpha_k)\bigl\|U(u_k) - x_k\bigr\|^2 - \alpha_k(1 - \alpha_k)\bigl\|T(v_{k+1}) - y_k\bigr\|^2. \end{aligned}$$
(3.12)

Following the lines of the proof of Theorem 3.1, by the conditions on $\{\gamma_k\}$ and $\{\alpha_k\}$, we have that the sequence $\{\rho_k(x, y)\}$ converges to some finite limit, say $\rho(x, y)$. Furthermore, we obtain

$$\lim_{k \to \infty}\|Ax_k - By_k\| = \lim_{k \to \infty}\|Ax_{k+1} - By_k\| = \lim_{k \to \infty}\bigl\|U(u_k) - x_k\bigr\| = \lim_{k \to \infty}\bigl\|T(v_{k+1}) - y_k\bigr\| = 0.$$

Since

$$\|u_k - x_k\| = \gamma_k\bigl\|A^{*}(Ax_k - By_k)\bigr\|,$$

and $\{\gamma_k\}$ is bounded, we have $\lim_{k \to \infty}\|u_k - x_k\| = 0$. It follows from

$$\lim_{k \to \infty}\|x_{k+1} - x_k\| = \lim_{k \to \infty}(1 - \alpha_k)\bigl\|U(u_k) - x_k\bigr\| = 0$$

that $\{x_k\}$ is asymptotically regular. Similarly, $\lim_{k \to \infty}\|v_{k+1} - y_k\| = 0$, and $\{y_k\}$ is asymptotically regular, too.

The rest of the proof is analogous to that of Theorem 3.1. □

4 Applications

We now turn our attention to providing some applications relying on convex and nonlinear analysis notions; see, for example, [25].

4.1 Convex feasibility problem (1.3)

Taking $U = P_C$ and $T = P_Q$, we have the following alternative Mann iterative algorithms for the CFP (1.3).

Algorithm 4.1 Let $x_0 \in H_1$, $y_0 \in H_2$ be arbitrary.

$$\begin{cases} u_k = x_k - \gamma_k A^{*}(Ax_k - By_k), \\ x_{k+1} = \alpha_k u_k + (1 - \alpha_k)P_C(u_k), \\ v_{k+1} = y_k + \gamma_k B^{*}(Ax_{k+1} - By_k), \\ y_{k+1} = \beta_k v_{k+1} + (1 - \beta_k)P_Q(v_{k+1}). \end{cases}$$

Algorithm 4.2 Let $x_0 \in H_1$, $y_0 \in H_2$ be arbitrary.

$$\begin{cases} u_k = x_k - \gamma_k A^{*}(Ax_k - By_k), \\ x_{k+1} = \alpha_k x_k + (1 - \alpha_k)P_C(u_k), \\ v_{k+1} = y_k + \gamma_k B^{*}(Ax_{k+1} - By_k), \\ y_{k+1} = \alpha_k y_k + (1 - \alpha_k)P_Q(v_{k+1}). \end{cases}$$
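Assuming the hypothetical alternating_mann sketch given after Algorithm 1.2 is in scope, Algorithm 4.1 can be tried on simple illustrative sets, here a box C and a Euclidean ball Q whose projections have closed forms; the matrices, radii and parameters below are arbitrary choices made only for the demonstration.

```python
import numpy as np
# assumes alternating_mann from the sketch after Algorithm 1.2 is in scope

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
B = rng.standard_normal((6, 5))

P_C = lambda x: np.clip(x, -1.0, 1.0)                    # projection onto the box C = [-1, 1]^4
P_Q = lambda y: y / max(1.0, np.linalg.norm(y) / 2.0)    # projection onto the ball Q of radius 2

lam = max(np.linalg.norm(A, 2) ** 2, np.linalg.norm(B, 2) ** 2)
gamma = 0.9 / lam                                        # within the admissible range of Theorem 3.1

x, y = alternating_mann(A, B, P_C, P_Q,
                        rng.standard_normal(4), rng.standard_normal(5), gamma)   # Algorithm 4.1
print(np.linalg.norm(A @ x - B @ y))                     # residual ||Ax - By||, expected to be small
```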

4.2 Variational problems via resolvent mappings

Given a maximal monotone operator $M: H_1 \to 2^{H_1}$, it is well known that its associated resolvent mapping $J_\mu^M := (I + \mu M)^{-1}$ is quasi-nonexpansive and that $0 \in M(x) \Leftrightarrow x = J_\mu^M(x)$. In other words, zeros of $M$ are exactly the fixed points of its resolvent mapping. By taking $U = J_\mu^M$ and $T = J_\nu^N$, where $N: H_2 \to 2^{H_2}$ is another maximal monotone operator, the problem under consideration is nothing but

find $x^{*} \in M^{-1}(0)$, $y^{*} \in N^{-1}(0)$ such that $Ax^{*} = By^{*}$,
(4.1)

and the algorithms take the following equivalent form.

Algorithm 4.3 Let $x_0 \in H_1$, $y_0 \in H_2$ be arbitrary.

$$\begin{cases} u_k = x_k - \gamma_k A^{*}(Ax_k - By_k), \\ x_{k+1} = \alpha_k u_k + (1 - \alpha_k)J_\mu^M(u_k), \\ v_{k+1} = y_k + \gamma_k B^{*}(Ax_{k+1} - By_k), \\ y_{k+1} = \beta_k v_{k+1} + (1 - \beta_k)J_\nu^N(v_{k+1}). \end{cases}$$

Algorithm 4.4 Let $x_0 \in H_1$, $y_0 \in H_2$ be arbitrary.

$$\begin{cases} u_k = x_k - \gamma_k A^{*}(Ax_k - By_k), \\ x_{k+1} = \alpha_k x_k + (1 - \alpha_k)J_\mu^M(u_k), \\ v_{k+1} = y_k + \gamma_k B^{*}(Ax_{k+1} - By_k), \\ y_{k+1} = \alpha_k y_k + (1 - \alpha_k)J_\nu^N(v_{k+1}). \end{cases}$$
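As a concrete, hedged instance of Algorithm 4.3, recall that when $M = \partial f$ is the subdifferential of a proper convex lower semicontinuous function $f$, the resolvent $J_\mu^M$ is the proximal mapping of $\mu f$. The sketch below takes $f = \|\cdot\|_1$, whose resolvent is the classical soft-thresholding operator, and reuses the hypothetical alternating_mann helper from the sketch after Algorithm 1.2; the operators, the parameters $\mu$, $\nu$ and the data are illustrative assumptions only.

```python
import numpy as np
# assumes alternating_mann from the sketch after Algorithm 1.2 is in scope

def soft_threshold(x, mu):
    """Resolvent (I + mu * d||.||_1)^(-1), i.e. the prox of mu * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4))
B = rng.standard_normal((6, 5))

J_M = lambda x: soft_threshold(x, 0.1)    # U = J_mu^M with mu = 0.1 (illustrative)
J_N = lambda y: soft_threshold(y, 0.2)    # T = J_nu^N with nu = 0.2 (illustrative)

lam = max(np.linalg.norm(A, 2) ** 2, np.linalg.norm(B, 2) ** 2)
x, y = alternating_mann(A, B, J_M, J_N,
                        rng.standard_normal(4), rng.standard_normal(5),
                        gamma=0.9 / lam)                  # Algorithm 4.3
print(np.linalg.norm(A @ x - B @ y))                      # residual ||Ax - By||
```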

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692

  2. Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310

  3. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

  4. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. 10.1088/0266-5611/21/5/009

  5. Thakur BS, Dewangan R, Postolache M: Strong convergence of new iteration process for a strongly continuous semigroup of asymptotically pseudocontractive mappings. Numer. Funct. Anal. Optim. 2013. 10.1080/01630563.2013.808667

  6. Xu HK: A variable Krasnosel'skiĭ-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007

  7. Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266. 10.1088/0266-5611/20/4/014

  8. Yang Q, Zhao J: Generalized KM theorems and their applications. Inverse Probl. 2006, 22: 833–844. 10.1088/0266-5611/22/3/006

  9. Yao Y, Postolache M, Liou YC: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 2013: Article ID 201

  10. Yao Y, Postolache M, Liou YC: Variant extragradient-type method for monotone variational inequalities. Fixed Point Theory Appl. 2013, 2013: Article ID 185

  11. Yao Y, Postolache M, Liou YC: Coupling Ishikawa algorithms with hybrid techniques for pseudocontractive mappings. Fixed Point Theory Appl. 2013, 2013: Article ID 211. 10.1186/1687-1812-2013-211

  12. Yao Y, Postolache M: Iterative methods for pseudomonotone variational inequalities and fixed point problems. J. Optim. Theory Appl. 2012, 155(1): 273–287. 10.1007/s10957-012-0055-0

  13. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26(10): Article ID 105018

  14. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38(3): 367–426. 10.1137/S0036144593251710

  15. Yao Y, Wu J, Liou YC: Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012, 2012: Article ID 140679

  16. Zhao J, Yang Q: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791–1799. 10.1088/0266-5611/21/5/017

  17. Moudafi A: Alternating CQ-algorithm for convex feasibility and split fixed-point problems. J. Nonlinear Convex Anal. (submitted for publication)

  18. Attouch H, Bolte J, Redont P, Soubeyran A: Alternating proximal algorithms for weakly coupled minimization problems. Applications to dynamical games and PDEs. J. Convex Anal. 2008, 15: 485–506.

  19. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001

  20. Censor Y, Segal A: The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16: 587–600.

  21. Censor Y, Gibali A, Reich S: Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59: 301–323. 10.1007/s11075-011-9490-5

  22. Moudafi A: A note on the split common fixed-point problem for quasi-nonexpansive operators. Nonlinear Anal. 2011, 74: 4083–4087. 10.1016/j.na.2011.03.041

  23. Maruster S, Popirlan C: On the Mann-type iteration and convex feasibility problem. J. Comput. Appl. Math. 2008, 212: 390–396. 10.1016/j.cam.2006.12.012

  24. Martinez-Yanes C, Xu HK: Strong convergence of the CQ method for fixed point processes. Nonlinear Anal. 2006, 64: 2400–2411. 10.1016/j.na.2005.08.018

  25. Rockafellar RT, Wets R: Variational Analysis. Grundlehren der Mathematischen Wissenschaften 317. Springer, Berlin; 1998.


Acknowledgements

The research was supported by the Fundamental Research Funds for the Central Universities (Program No. 3122013k004) and by the science research foundation program of the Civil Aviation University of China (2012KYM04). The authors would also like to thank the referees for their careful reading of the manuscript.

Author information

Corresponding author

Correspondence to Jing Zhao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors carried out the algorithm design and drafted the manuscript. The authors completed the proof. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhao, J., He, S. Alternating Mann iterative algorithms for the split common fixed-point problem of quasi-nonexpansive mappings. Fixed Point Theory Appl 2013, 288 (2013). https://doi.org/10.1186/1687-1812-2013-288
