General split equality problems in Hilbert spaces

Abstract

A new convex feasibility problem, the split equality problem (SEP), was proposed by Moudafi and Byrne and solved through the ACQA and RACQA algorithms. In this paper the SEP is extended to the general split equality problem (GSEP) in infinite-dimensional Hilbert spaces, and we establish the strong convergence of a proposed algorithm to a solution of the GSEP.

1 Introduction

In the present paper, we are concerned with the general split equality problem (GSEP), which is formulated as finding points $x$ and $y$ with the property

$$x \in \bigcap_{i=1}^{\infty} C_i \quad \text{and} \quad y \in \bigcap_{j=1}^{\infty} Q_j, \quad \text{such that } Ax = By,$$
(1.1)

where $\{C_i\}_{i=1}^{\infty}$ and $\{Q_j\}_{j=1}^{\infty}$ are families of nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, $H_3$ is also a real Hilbert space, and $A \colon H_1 \to H_3$, $B \colon H_2 \to H_3$ are bounded linear operators.

It generalizes the split equality problem (SEP), which is to find $x \in C$, $y \in Q$ such that $Ax = By$ [1], as well as the split feasibility problem (SFP): when $B = I$, the SEP reduces to an SFP. The SEP has received much attention due to its applications in image reconstruction, signal processing, and intensity-modulated radiation therapy; see for instance [2–5].

To solve the SEP, Byrne and Moudafi put forward the alternating CQ-algorithm (ACQA) and the relaxed alternating CQ-algorithm (RACQA). For an exhaustive study of the ACQA and RACQA, see for instance [6, 7]. The approximate SEP (ASEP), which seeks only approximate solutions of the SEP, was also proposed by Byrne and Moudafi and solved through the simultaneous iterative algorithm (SSEA), the relaxed SSEA (RSSEA), and the perturbed SSEA (PSSEA); see for example [1, 8].

This paper studies an iterative algorithm, adapted from Eslamian and Latif [9], for the GSEP in Hilbert spaces. We show the strong convergence of the presented algorithm to a solution of the GSEP, and we obtain an algorithm which converges strongly to the minimum norm solution of the GSEP.

2 Preliminaries

For the sake of simplicity, we denote by $H$ a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Let $C$ be a nonempty closed convex subset of $H$, and let $T \colon H \to H$ be an operator on $H$. Recall that $T$ is said to be nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$. A typical example of a nonexpansive mapping is the orthogonal projection $P_C$ from $H$ onto a nonempty closed convex subset $C \subseteq H$, defined by $\|x - P_C x\| = \min_{y \in C} \|x - y\|$. It is well known that $P_C x$ is characterized by the relation

$$P_C x \in C, \qquad \langle x - P_C x,\; y - P_C x \rangle \le 0, \quad \forall y \in C.$$
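This characterization of the projection is easy to check numerically. The following sketch (Python with NumPy; the box-shaped set is an assumption made for illustration) projects a point onto a box, where $P_C$ is componentwise clipping, and verifies $\langle x - P_C x, y - P_C x \rangle \le 0$ for sampled $y \in C$:

```python
import numpy as np

def project_box(x, lo, hi):
    """Orthogonal projection onto the box C = [lo, hi]: componentwise clipping."""
    return np.clip(x, lo, hi)

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
x = np.array([2.0, -0.5])            # a point outside C
px = project_box(x, lo, hi)          # P_C x

# Variational characterization: <x - P_C x, y - P_C x> <= 0 for every y in C
rng = np.random.default_rng(0)
for _ in range(200):
    y = rng.uniform(lo, hi)          # random point of C
    assert np.dot(x - px, y - px) <= 1e-12
```

Clipping realizes $P_C$ exactly only for boxes; for a general closed convex set, the projection must be computed by its own formula or by solving a small optimization problem.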

Lemma 2.1 Let $S = C \times Q$ in $\mathbb{R}^N \times \mathbb{R}^M = \mathbb{R}^I$, where $I = N + M$. Define

$$G = [A \;\; {-B}], \qquad w = \begin{bmatrix} x \\ y \end{bmatrix}, \qquad \text{so that} \qquad G^*G = \begin{bmatrix} A^*A & -A^*B \\ -B^*A & B^*B \end{bmatrix};$$

then $w = \begin{bmatrix} x \\ y \end{bmatrix}$ solves the SEP if and only if $w$ solves the fixed point equation $P_S(I - \gamma G^*G)w = w$.
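The operator identities in Lemma 2.1 can be sanity-checked numerically. The sketch below (Python/NumPy; the particular matrices are assumptions chosen so that a solution exists) builds $G = [A \; {-B}]$ for a small SEP instance and confirms that a solution $w$ with $Ax = By$ is left fixed by $I - \gamma G^*G$:

```python
import numpy as np

# Small SEP instance with a known solution: choose x*, then solve B y* = A x*.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
x_star = np.array([1.0, 1.0])
y_star = np.linalg.solve(B, A @ x_star)   # B invertible here, so A x* = B y*

w = np.concatenate([x_star, y_star])      # w = [x; y]
G = np.hstack([A, -B])                    # G w = A x - B y
GtG = G.T @ G

gamma = 1.0 / np.linalg.norm(GtG, 2)      # any gamma in (0, 2 / rho(G^T G))
# Since G w = 0, we have G^T G w = 0, hence (I - gamma G^T G) w = w;
# if the sets C, Q contain x*, y*, then P_S leaves w fixed as well.
assert np.allclose(G @ w, 0.0)
assert np.allclose(w - gamma * (GtG @ w), w)
```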

Lemma 2.2 Let $H$ be a Hilbert space. Then for any sequence $\{x_n\}$ in $H$, any sequence $\{\lambda_n\}_{n=1}^{\infty}$ of positive numbers with $\sum_{n=1}^{\infty} \lambda_n = 1$, and any positive integers $i$, $j$ with $i < j$,

$$\Bigl\| \sum_{n=1}^{\infty} \lambda_n x_n \Bigr\|^2 \le \sum_{n=1}^{\infty} \lambda_n \|x_n\|^2 - \lambda_i \lambda_j \|x_i - x_j\|^2.$$
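For a finite convex combination, the inequality of Lemma 2.2 can be verified numerically. The sketch below (Python/NumPy; the random data are an assumption for illustration) checks it for four vectors and one pair $i < j$:

```python
import numpy as np

rng = np.random.default_rng(1)
xs = rng.normal(size=(4, 3))                  # four vectors x_1..x_4 in R^3
lam = rng.uniform(0.1, 1.0, size=4)
lam /= lam.sum()                              # positive weights summing to 1

lhs = np.linalg.norm((lam[:, None] * xs).sum(axis=0)) ** 2
i, j = 0, 2                                   # any fixed pair i < j
rhs = (lam * np.linalg.norm(xs, axis=1) ** 2).sum() \
      - lam[i] * lam[j] * np.linalg.norm(xs[i] - xs[j]) ** 2
assert lhs <= rhs + 1e-12                     # the inequality of Lemma 2.2
```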

Lemma 2.3 Let $H$ be a Hilbert space. For every $x$ and $y$ in $H$, the following inequality holds:

$$\|x + y\|^2 \le \|x\|^2 + 2\langle y,\; x + y \rangle.$$

Lemma 2.4 Let $C$ be a nonempty closed convex subset of $H$, and let $T \colon C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. Then $T$ is demiclosed on $C$; that is, if $x_n \rightharpoonup x \in C$ and $x_n - Tx_n \to 0$, then $x = Tx$.

Lemma 2.5 Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that $a_{n+1} \le (1 - \gamma_n) a_n + \delta_n$, where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

(a) $\sum_{n=1}^{\infty} \gamma_n = \infty$;

(b) $\limsup_{n \to \infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$.

Then $\lim_{n \to \infty} a_n = 0$.

Lemma 2.6 Let $\{t_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{t_{n_j}\}_{j \ge 0}$ of $\{t_n\}$ such that

$$t_{n_j} < t_{n_j + 1} \quad \text{for all } j \ge 0.$$

Also consider the sequence of integers $\{\tau(n)\}_{n \ge n_0}$ defined by

$$\tau(n) = \max\{k \le n \mid t_k < t_{k+1}\}.$$

Then $\{\tau(n)\}_{n \ge n_0}$ is a nondecreasing sequence verifying $\lim_{n \to \infty} \tau(n) = \infty$, and for all $n \ge n_0$, the following two estimates hold:

$$t_{\tau(n)} \le t_{\tau(n)+1}, \qquad t_n \le t_{\tau(n)+1}.$$
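The integer sequence $\tau(n)$ of Lemma 2.6 is simple to compute on a finite prefix of $\{t_n\}$. The sketch below (Python; the sample sequence is an assumption for illustration) shows that $\tau$ is nondecreasing and that $t_{\tau(n)} < t_{\tau(n)+1}$ by construction:

```python
def tau(t, n):
    """tau(n) = max{k <= n : t_k < t_{k+1}}; None if no such k <= n exists yet."""
    candidates = [k for k in range(n + 1) if t[k] < t[k + 1]]
    return max(candidates) if candidates else None

# A bounded, non-monotone sequence that "does not decrease at infinity".
t = [3, 1, 2, 1.5, 2.5, 2.0, 3.0, 2.8, 3.5]
taus = [tau(t, n) for n in range(len(t) - 1)]
# tau is defined from n0 = 1 onward and is nondecreasing thereafter.
assert taus == [None, 1, 1, 3, 3, 5, 5, 7]
assert all(t[k] < t[k + 1] for k in taus if k is not None)
```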

3 Main results

Let $C_i \subseteq \mathbb{R}^N$ and $Q_i \subseteq \mathbb{R}^M$ ($i = 1, 2, \ldots$) be nonempty closed convex sets, and let $A$, $B$ be $J \times N$ and $J \times M$ real matrices, respectively. Let $S_i = C_i \times Q_i$. Define

$$G = [A \;\; {-B}], \qquad w = \begin{bmatrix} x \\ y \end{bmatrix},$$

then

$$G^*G = \begin{bmatrix} A^*A & -A^*B \\ -B^*A & B^*B \end{bmatrix}.$$

Problem (1.1) can also be formulated as finding $w \in S = \bigcap_{i=1}^{+\infty} S_i$ with $Gw = 0$, or as minimizing $\|Gw\|$ over $w \in S$ [1].

Proposition 3.1 $w^* = \begin{bmatrix} x^* \\ y^* \end{bmatrix}$ solves GSEP (1.1) if and only if, for every $i \in [1, +\infty)$,

$$w^* = P_{S_i}(I - \lambda_{n,i} G^*G)\, w^*.$$

Proof Assume there exists $w^*$ satisfying $w^* = P_{S_i}(I - \lambda_{n,i} G^*G) w^*$ for every $i \in [1, +\infty)$. Writing this in terms of $x^*$ and $y^*$, it reads

$$x^* = P_{C_i}\bigl(x^* - \lambda_{n,i} A^*(Ax^* - By^*)\bigr),$$
(3.1)
$$y^* = P_{Q_i}\bigl(y^* + \lambda_{n,i} B^*(Ax^* - By^*)\bigr).$$
(3.2)

By Lemma 2.1, for any $i \in [1, +\infty)$ we have $x^* \in C_i$ and $y^* \in Q_i$ with $Ax^* = By^*$. Therefore $x^* \in \bigcap_{i=1}^{+\infty} C_i$ and $y^* \in \bigcap_{j=1}^{+\infty} Q_j$ with $Ax^* = By^*$; that is, $w^*$ solves GSEP (1.1).

Conversely, assume that $w^*$ solves GSEP (1.1), so that $Gw^* = 0$; that is, for any $i \in [1, +\infty)$ we have $x^* \in C_i$ and $y^* \in Q_i$ with $Ax^* = By^*$. Substituting $Ax^* = By^*$ into (3.1) and (3.2), we obtain $w^* = P_{S_i}(I - \lambda_{n,i} G^*G) w^*$ for every $i \in [1, +\infty)$. □

Theorem 3.2 Assume that GSEP (1.1) has a nonempty solution set Ω. Suppose that $f$ is a self $k$-contraction mapping of $H$ with $k \in (0,1)$, and let $\{w_n\}$ be the sequence generated by

$$w_{n+1} = \alpha_n w_n + \beta_n f(w_n) + \sum_{i=1}^{\infty} \gamma_{n,i} P_{S_i}(I - \lambda_{n,i} G^*G)\, w_n, \quad n \ge 0,$$
(3.3)

where $\alpha_n + \beta_n + \sum_{i=1}^{\infty} \gamma_{n,i} = 1$. If the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_{n,i}\}$, and $\{\lambda_{n,i}\}$ satisfy the following conditions:

(i) $\lim_{n \to \infty} \beta_n = 0$ and $\sum_{n=0}^{\infty} \beta_n = \infty$;

(ii) $\liminf_{n \to \infty} \alpha_n \gamma_{n,i} > 0$ for each $i \in \mathbb{N}$;

(iii) $\{\lambda_{n,i}\} \subset (0, 2/L)$ for each $i \in \mathbb{N}$, where $L = \rho(G^*G)$ is the spectral radius of $G^*G$;

then the sequence $\{w_n\}$ converges strongly to $w^* = P_\Omega f(w^*)$, $w^* = \begin{bmatrix} x^* \\ y^* \end{bmatrix}$.

Proof We first prove that $\{w_n\}$ is bounded. Let $z \in \Omega$; by Lemma 2.1, $z \in \Omega$ is equivalent to the fixed point equation $z = P_{S_i}(I - \lambda_{n,i} G^*G) z$. Note that for each $i \in \mathbb{N}$, $\{\lambda_{n,i}\} \subset (0, 2/L)$ with $L = \rho(G^*G)$, so the operator $P_{S_i}(I - \lambda_{n,i} G^*G)$ is nonexpansive. Since $f$ is a $k$-contraction,

$$\begin{aligned}
\|w_{n+1} - z\| &= \Bigl\| \alpha_n w_n + \beta_n f(w_n) + \sum_{i=1}^{\infty} \gamma_{n,i} P_{S_i}(I - \lambda_{n,i} G^*G) w_n - z \Bigr\| \\
&\le \alpha_n \|w_n - z\| + \beta_n \|f(w_n) - z\| + \sum_{i=1}^{\infty} \gamma_{n,i} \bigl\| P_{S_i}(I - \lambda_{n,i} G^*G) w_n - P_{S_i}(I - \lambda_{n,i} G^*G) z \bigr\| \\
&\le \alpha_n \|w_n - z\| + \beta_n \|f(w_n) - z\| + \sum_{i=1}^{\infty} \gamma_{n,i} \|w_n - z\| \\
&= (1 - \beta_n) \|w_n - z\| + \beta_n \|f(w_n) - z\| \\
&\le (1 - \beta_n) \|w_n - z\| + \beta_n \|f(w_n) - f(z)\| + \beta_n \|f(z) - z\| \\
&\le (1 - \beta_n) \|w_n - z\| + \beta_n k \|w_n - z\| + \beta_n \|f(z) - z\| \\
&= \bigl(1 - (1-k)\beta_n\bigr) \|w_n - z\| + (1-k)\beta_n \cdot \frac{1}{1-k} \|f(z) - z\| \\
&\le \max\Bigl\{ \|w_n - z\|,\; \frac{1}{1-k} \|f(z) - z\| \Bigr\}.
\end{aligned}$$

By induction,

$$\|w_{n+1} - z\| \le \max\Bigl\{ \|w_n - z\|,\; \frac{1}{1-k} \|f(z) - z\| \Bigr\} \le \cdots \le \max\Bigl\{ \|w_0 - z\|,\; \frac{1}{1-k} \|f(z) - z\| \Bigr\}.$$

We conclude that $\{w_n\}$ and $\{f(w_n)\}$ are bounded.

Furthermore, from (3.3) and Lemma 2.2 we get

$$\begin{aligned}
\|w_{n+1} - z\|^2 &= \Bigl\| \alpha_n (w_n - z) + \beta_n \bigl(f(w_n) - z\bigr) + \sum_{i=1}^{\infty} \gamma_{n,i} \bigl(P_{S_i}(I - \lambda_{n,i} G^*G) w_n - z\bigr) \Bigr\|^2 \\
&\le \alpha_n \|w_n - z\|^2 + \beta_n \|f(w_n) - z\|^2 + \sum_{i=1}^{\infty} \gamma_{n,i} \bigl\|P_{S_i}(I - \lambda_{n,i} G^*G) w_n - z\bigr\|^2 - \alpha_n \gamma_{n,i} \bigl\|P_{S_i}(I - \lambda_{n,i} G^*G) w_n - w_n\bigr\|^2 \\
&\le (1 - \beta_n) \|w_n - z\|^2 + \beta_n \|f(w_n) - z\|^2 - \alpha_n \gamma_{n,i} \bigl\|P_{S_i}(I - \lambda_{n,i} G^*G) w_n - w_n\bigr\|^2.
\end{aligned}$$

It follows that

$$\alpha_n \gamma_{n,i} \bigl\|P_{S_i}(I - \lambda_{n,i} G^*G) w_n - w_n\bigr\|^2 \le \|w_n - z\|^2 - \|w_{n+1} - z\|^2 + \beta_n \|f(w_n) - z\|^2.$$
(3.4)

In order to show that $w_n \to w^*$, we consider two cases.

Case 1: Suppose that $\{\|w_n - w^*\|\}$ is a monotone sequence. Since $\{\|w_n - w^*\|\}$ is bounded, it is convergent. Taking the limit on both sides of (3.4), and using $\lim_{n \to \infty} \beta_n = 0$ and $\liminf_{n \to \infty} \alpha_n \gamma_{n,i} > 0$, we get $\lim_{n \to \infty} \|P_{S_i}(I - \lambda_{n,i} G^*G) w_n - w_n\| = 0$ for each $i \in \mathbb{N}$.

We next prove that there exists a unique $w^* \in \Omega$ such that $w^* = P_\Omega f(w^*)$. Since $P_\Omega$ is nonexpansive and $f$ is a self $k$-contraction mapping, we get

$$\bigl\| P_\Omega f(w_1) - P_\Omega f(w_2) \bigr\| \le \|f(w_1) - f(w_2)\| \le k \|w_1 - w_2\|;$$

therefore $P_\Omega f$ is a contraction, and there exists a unique $w^* \in \Omega$ such that $w^* = P_\Omega f(w^*)$.

Next, we show that $w_n \to w^*$. Using Lemma 2.3, we get

$$\begin{aligned}
\|w_{n+1} - w^*\|^2 &= \Bigl\| \alpha_n (w_n - w^*) + \beta_n \bigl(f(w_n) - w^*\bigr) + \sum_{i=1}^{\infty} \gamma_{n,i} \bigl(P_{S_i}(I - \lambda_{n,i} G^*G) w_n - w^*\bigr) \Bigr\|^2 \\
&\le \Bigl\| \alpha_n (w_n - w^*) + \sum_{i=1}^{\infty} \gamma_{n,i} \bigl(P_{S_i}(I - \lambda_{n,i} G^*G) w_n - w^*\bigr) \Bigr\|^2 + 2\beta_n \bigl\langle f(w_n) - w^*,\; w_{n+1} - w^* \bigr\rangle \\
&\le (1 - \beta_n)^2 \|w_n - w^*\|^2 + 2\beta_n \bigl\langle f(w_n) - f(w^*),\; w_{n+1} - w^* \bigr\rangle + 2\beta_n \bigl\langle f(w^*) - w^*,\; w_{n+1} - w^* \bigr\rangle \\
&\le (1 - \beta_n)^2 \|w_n - w^*\|^2 + 2\beta_n k \|w_n - w^*\| \|w_{n+1} - w^*\| + 2\beta_n \bigl\langle f(w^*) - w^*,\; w_{n+1} - w^* \bigr\rangle \\
&\le (1 - \beta_n)^2 \|w_n - w^*\|^2 + \beta_n k \bigl( \|w_n - w^*\|^2 + \|w_{n+1} - w^*\|^2 \bigr) + 2\beta_n \bigl\langle f(w^*) - w^*,\; w_{n+1} - w^* \bigr\rangle.
\end{aligned}$$
Rearranging, we obtain

$$\begin{aligned}
\|w_{n+1} - w^*\|^2 &\le \frac{(1 - \beta_n)^2 + \beta_n k}{1 - \beta_n k} \|w_n - w^*\|^2 + \frac{2\beta_n}{1 - \beta_n k} \bigl\langle f(w^*) - w^*,\; w_{n+1} - w^* \bigr\rangle \\
&= \frac{1 - 2\beta_n + \beta_n k}{1 - \beta_n k} \|w_n - w^*\|^2 + \frac{\beta_n^2}{1 - \beta_n k} \|w_n - w^*\|^2 + \frac{2\beta_n}{1 - \beta_n k} \bigl\langle f(w^*) - w^*,\; w_{n+1} - w^* \bigr\rangle \\
&\le \Bigl( 1 - \frac{2(1-k)\beta_n}{1 - \beta_n k} \Bigr) \|w_n - w^*\|^2 + \frac{2(1-k)\beta_n}{1 - \beta_n k} \Bigl\{ \frac{\beta_n M}{2(1-k)} + \frac{1}{1-k} \bigl\langle f(w^*) - w^*,\; w_{n+1} - w^* \bigr\rangle \Bigr\} \\
&= (1 - \eta_n) \|w_n - w^*\|^2 + \eta_n \delta_n,
\end{aligned}$$

where $\eta_n = \frac{2(1-k)\beta_n}{1 - \beta_n k}$, $\delta_n = \frac{\beta_n M}{2(1-k)} + \frac{1}{1-k} \langle f(w^*) - w^*, w_{n+1} - w^* \rangle$, and $M = \sup\{ \|w_n - w^*\|^2 : n \ge 0 \}$.

Since $\lim_{n \to \infty} \beta_n = 0$ and $\sum_{n=1}^{\infty} \beta_n = \infty$, we have $\sum_{n=1}^{\infty} \eta_n = \infty$. Next we prove $\limsup_{n \to \infty} \delta_n \le 0$. Since $\frac{\beta_n M}{2(1-k)} \to 0$ (because $\lim_{n \to \infty} \beta_n = 0$), it suffices to prove $\limsup_{n \to \infty} \langle f(w^*) - w^*, w_n - w^* \rangle \le 0$. Take a subsequence $\{w_{n_k}\}$ of $\{w_n\}$ such that

$$\lim_{k \to \infty} \bigl\langle f(w^*) - w^*,\; w_{n_k} - w^* \bigr\rangle = \limsup_{n \to \infty} \bigl\langle f(w^*) - w^*,\; w_n - w^* \bigr\rangle.$$

Since $\{w_{n_k}\}$ is bounded, it has a subsequence converging weakly to some $v$; without loss of generality, assume $w_{n_k} \rightharpoonup v$ and $\lambda_{n,i} \to \lambda_i \in (0, 2/\|G^*G\|)$. By Lemma 2.4, $v \in \Omega$. Since $v \in \Omega$ and $w^* = P_\Omega f(w^*)$, the characterization of the projection gives

$$\limsup_{n \to \infty} \bigl\langle f(w^*) - w^*,\; w_n - w^* \bigr\rangle = \lim_{k \to \infty} \bigl\langle f(w^*) - w^*,\; w_{n_k} - w^* \bigr\rangle = \bigl\langle f(w^*) - w^*,\; v - w^* \bigr\rangle \le 0,$$

as desired.

Therefore $\sum_{n=1}^{\infty} \eta_n = \infty$ and $\limsup_{n \to \infty} \delta_n \le 0$ hold, so all conditions of Lemma 2.5 are satisfied. Hence $\|w_{n+1} - w^*\| \to 0$, that is, $w_n \to w^*$.

Case 2: If $\{\|w_n - w^*\|\}$ is not a monotone sequence, we define an integer sequence $\{\tau(n)\}$ by

$$\tau(n) = \max\bigl\{ k \le n : \|w_k - w^*\| \le \|w_{k+1} - w^*\| \bigr\}.$$

It is easy to see that $\{\tau(n)\}$ is nondecreasing and $\tau(n) \to \infty$ as $n \to \infty$. For all $n \ge n_0$ we have $\|w_{\tau(n)} - w^*\| \le \|w_{\tau(n)+1} - w^*\|$. Then $\{\|w_{\tau(n)} - w^*\|\}$ is a monotone sequence, and arguing as in Case 1, we have $\lim_{n \to \infty} \|w_{\tau(n)} - w^*\| = 0$ and $\lim_{n \to \infty} \|w_{\tau(n)+1} - w^*\| = 0$. Finally, from Lemma 2.6 we get

$$0 \le \|w_n - w^*\| \le \max\bigl\{ \|w_n - w^*\|,\; \|w_{\tau(n)} - w^*\| \bigr\} \le \|w_{\tau(n)+1} - w^*\| \to 0, \quad n \to \infty.$$

Therefore the sequence $\{w_n\}$ converges strongly to $w^*$.

Finally, recall that $w^* \in \Omega$ solves the GSEP if and only if $w^*$ solves the fixed point equations $w^* = P_{S_i}(I - \lambda_{n,i} G^*G) w^*$, $i \in \mathbb{N}$. We have proved $\lim_{n \to \infty} \|w_n - P_{S_i}(I - \lambda_{n,i} G^*G) w_n\| = 0$ and $w_n \to w^*$; hence $w^* = P_{S_i}(I - \lambda_{n,i} G^*G) w^*$ for every $i \in \mathbb{N}$, that is, $w^* \in \Omega$ solves the GSEP.

Therefore the sequence $\{w_n\}$ converges strongly to $w^* = P_\Omega f(w^*)$. This completes the proof. □
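The iteration (3.3) can be sketched numerically once the infinite family is truncated to finitely many sets. The toy instance below (Python/NumPy) uses $A = B = I$ on $\mathbb{R}^2$, two boxes for $S_1$, $S_2$, the contraction $f(w) = w/2$ (so $k = 1/2$), and parameter choices satisfying (i)-(iii); all of these specific choices are assumptions made for illustration, not taken from the paper:

```python
import numpy as np

A = np.eye(2)
B = np.eye(2)
G = np.hstack([A, -B])              # G w = A x - B y for w = [x; y]
GtG = G.T @ G
L = np.linalg.norm(GtG, 2)          # spectral radius of G^T G (here L = 2)

boxes = [                           # S_i = C_i x Q_i, stacked as boxes in R^4
    (np.array([0.0, 0.0, 0.0, 0.0]), np.array([2.0, 2.0, 2.0, 2.0])),
    (np.array([0.5, 0.0, 0.0, 0.0]), np.array([3.0, 1.5, 3.0, 1.5])),
]
m = len(boxes)

w = np.array([2.5, -1.0, 0.3, 2.2])
for n in range(1, 2001):
    beta = 1.0 / (n + 1)              # beta_n -> 0 and sum beta_n = infinity
    alpha = (1.0 - beta) / 2.0        # liminf alpha_n gamma_{n,i} > 0
    gamma = (1.0 - beta - alpha) / m  # alpha_n + beta_n + sum_i gamma_{n,i} = 1
    lam = 1.0 / L                     # lambda_{n,i} in (0, 2/L)
    step = w - lam * (GtG @ w)
    proj_sum = sum(np.clip(step, lo, hi) for lo, hi in boxes)
    w = alpha * w + beta * (0.5 * w) + gamma * proj_sum   # f(w) = w / 2

x, y = w[:2], w[2:]
assert np.linalg.norm(A @ x - B @ y) < 1e-2   # A x = B y holds approximately
```

With $A = B = I$ the constraint $Ax = By$ forces $x = y$, so the limit lies in the intersection of all the boxes; the viscosity term $\beta_n f(w_n)$ selects the particular solution $w^* = P_\Omega f(w^*)$.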

Corollary 3.3 Define a sequence $\{w_n\}$ iteratively by

$$w_{n+1} = \alpha_n w_n + \sum_{i=1}^{\infty} \gamma_{n,i} P_{S_i}(I - \lambda_{n,i} G^*G)\, w_n, \quad n \ge 0,$$
(3.5)

where $\alpha_n + \sum_{i=1}^{\infty} \gamma_{n,i} \in (0,1)$. If $\{\alpha_n\}$, $\{\gamma_{n,i}\}$, $\{\lambda_{n,i}\}$ satisfy the following conditions:

(i) $\lim_{n \to \infty} (\alpha_n + \sum_{i=1}^{\infty} \gamma_{n,i}) = 1$ and $\sum_{n=0}^{\infty} (1 - \alpha_n - \sum_{i=1}^{\infty} \gamma_{n,i}) = \infty$;

(ii) $\liminf_{n \to \infty} \alpha_n \gamma_{n,i} > 0$ for each $i \in \mathbb{N}$;

(iii) $\{\lambda_{n,i}\} \subset (0, 2/L)$ for each $i \in \mathbb{N}$, where $L = \rho(G^*G)$;

then $\{w_n\}$ converges strongly to a point $w^*$ which is the minimum norm solution of GSEP (1.1).

Proof Let $f = 0$ in (3.3); then (3.3) reduces to (3.5). By Theorem 3.2, $w_n \to w^* = P_\Omega f(w^*)$. Then, by the characterization of the projection, for all $z \in \Omega$,

$$\bigl\langle f(w^*) - w^*,\; z - w^* \bigr\rangle = \bigl\langle f(w^*) - P_\Omega f(w^*),\; z - P_\Omega f(w^*) \bigr\rangle \le 0.$$

Since $f = 0$, this gives $\langle -w^*, z - w^* \rangle \le 0$, i.e., $\langle w^*, z - w^* \rangle \ge 0$ for all $z \in \Omega$; that is,

$$\|w^*\|^2 \le \bigl|\langle w^*, z \rangle\bigr| \le \|w^*\| \|z\| \quad \Longrightarrow \quad \|w^*\| \le \|z\|.$$

Thus $w^*$ is the minimum norm solution of GSEP (1.1). This completes the proof. □
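Setting $f \equiv 0$ in a numerical sketch of (3.3) gives the minimum-norm iteration (3.5). In the toy instance below (Python/NumPy; the single box, the starting point, and all parameter choices are assumptions for illustration), the solution set is $\{x = y \in [1,3] \times [-2,2]\}$, whose minimum-norm element has $x = y = (1, 0)$, and the iterates approach it:

```python
import numpy as np

G = np.hstack([np.eye(2), -np.eye(2)])   # A = B = I on R^2
GtG = G.T @ G
L = np.linalg.norm(GtG, 2)

# One box S_1 = C_1 x Q_1 with C_1 = Q_1 = [1,3] x [-2,2]
lo = np.array([1.0, -2.0, 1.0, -2.0])
hi = np.array([3.0, 2.0, 3.0, 2.0])

w = np.array([5.0, 4.0, -1.0, 3.0])
for n in range(1, 5001):
    beta = 1.0 / (n + 1)                 # the weight that f(w_n) = 0 would carry
    alpha = (1.0 - beta) / 2.0
    gamma = 1.0 - beta - alpha           # alpha_n + gamma_n = 1 - beta_n < 1
    step = w - (1.0 / L) * (GtG @ w)
    w = alpha * w + gamma * np.clip(step, lo, hi)

w_min = np.array([1.0, 0.0, 1.0, 0.0])   # minimum norm solution of this instance
assert np.linalg.norm(w - w_min) < 0.05
```

Because the sum of the weights is $1 - \beta_n < 1$, each step shrinks the iterate slightly toward the origin, which is exactly how the scheme singles out the minimum-norm element of Ω.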

Let $\{T_i\}_{i=1}^{\infty} \colon H \to H$ be a countable family of nonexpansive mappings with $\bigcap_{i=1}^{\infty} F(T_i) \ne \emptyset$, and let $T \colon H \to H$ be a nonexpansive mapping. The variational inequality problem of finding a common fixed point of $\{T_i\}$ with respect to the nonexpansive mapping $T$ is to

$$\text{find } x^* \in \bigcap_{i=1}^{\infty} F(T_i) \text{ such that } \bigl\langle x^* - Tx^*,\; x - x^* \bigr\rangle \ge 0, \quad \forall x \in \bigcap_{i=1}^{\infty} F(T_i).$$
(3.6)

It is easy to see that (3.6) is equivalent to the following fixed point problem:

$$\text{find } x^* \in \bigcap_{i=1}^{\infty} F(T_i) \text{ such that } x^* = P_{\bigcap_{i=1}^{\infty} F(T_i)}\, T x^*.$$
(3.7)

Letting $C_i = F(T_i)$, $Q_j = F(P_{F(T_j)} T)$, $A = I$, $B = I$, problem (3.7) is transformed into GSEP (1.1):

$$\text{find } x \in \bigcap_{i=1}^{\infty} C_i \text{ and } y \in \bigcap_{j=1}^{\infty} Q_j, \text{ such that } Ax = By.$$

Therefore GSEP (1.1) is equivalent to (3.6). Hence we have the following result.

Theorem 3.4 If $\alpha_n + \beta_n + \sum_{i=1}^{\infty} \gamma_{n,i} = 1$ and the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_{n,i}\}$, and $\{\lambda_{n,i}\}$ satisfy the following conditions:

(i) $\lim_{n \to \infty} \beta_n = 0$ and $\sum_{n=0}^{\infty} \beta_n = \infty$;

(ii) $\liminf_{n \to \infty} \alpha_n \gamma_{n,i} > 0$ for each $i \in \mathbb{N}$;

(iii) $\{\lambda_{n,i}\} \subset (0, 2/L)$ for each $i \in \mathbb{N}$, where $L = \rho(G^*G)$;

then the sequence $\{w_n\}$ defined by (3.3) converges strongly to a point $w^* \in \Omega$ which solves the following variational inequality:

$$\bigl\langle f(w^*) - w^*,\; z - w^* \bigr\rangle \le 0 \quad \text{for all } z \in \Omega.$$

Proof We know from the proof of Theorem 3.2 that the sequence $\{w_n\}$ defined by (3.3) converges strongly to $w^* = P_\Omega f(w^*)$, which solves the GSEP. Since GSEP (1.1) is equivalent to (3.6), $w^*$ also solves the variational inequality problem (3.6). Moreover, since $w^* = P_\Omega f(w^*)$, the characterization of the projection together with (3.6) and (3.7) yields $\langle f(w^*) - w^*, z - w^* \rangle \le 0$ for all $z \in \Omega$. Finally, since $f$ is a self $k$-contraction mapping with $k \in (0,1)$, $f$ is in particular nonexpansive, so the requirement in (3.6) that $T$ be nonexpansive is satisfied. Therefore $\{w_n\}$ defined by (3.3) converges strongly to a solution of $\langle f(w^*) - w^*, z - w^* \rangle \le 0$. This completes the proof. □

References

1. Byrne, C, Moudafi, A: Extensions of the CQ algorithm for the split feasibility and split equality problems. hal-00776640, version 1 (2013)

2. Combettes, PL, Wajs, VR: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005). 10.1137/050626090

3. Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006). 10.1088/0031-9155/51/10/001

4. Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071–2084 (2005). 10.1088/0266-5611/21/6/017

5. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20(1), 103–120 (2004). 10.1088/0266-5611/20/1/006

6. Moudafi, A: Alternating CQ-algorithms for convex feasibility and split fixed-point problems. J. Nonlinear Convex Anal. (2012)

7. Moudafi, A: A relaxed alternating CQ-algorithm for convex feasibility problems. Nonlinear Anal., Theory Methods Appl. 79, 117–121 (2013)

8. Chen, RD, Li, J, Ren, Y: Regularization method for the approximate split equality problem in infinite-dimensional Hilbert spaces. Abstr. Appl. Anal. 2013, Article ID 813635 (2013)

9. Eslamian, M, Latif, A: General split feasibility problems in Hilbert spaces. Abstr. Appl. Anal. 2013, Article ID 805104 (2013)

Acknowledgements

We wish to thank the referees for their helpful comments and suggestions. This research was supported by NSFC Grant No. 11071279.

Author information

Correspondence to Rudong Chen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Chen, R., Wang, J., Zhang, H.: General split equality problems in Hilbert spaces. Fixed Point Theory Appl. 2014, 35 (2014). doi:10.1186/1687-1812-2014-35

Keywords

  • general split equality problem
  • strong convergence
  • the minimum norm solution