
Strong convergence theorems for the split variational inclusion problem in Hilbert spaces

Abstract

In this paper, we consider a split variational inclusion problem and establish several strong convergence theorems in Hilbert spaces, including a Halpern-Mann type iteration method and a regularized iteration method. As applications, we consider algorithms for a split feasibility problem and a split optimization problem and give strong convergence theorems for these problems in Hilbert spaces. Our results for the split feasibility problem improve the related results in the literature.

MSC:47H10, 49J40, 54H25.

1 Introduction

In 1994, the split feasibility problem in finite dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from medical image reconstruction. Since then, the split feasibility problem has received much attention due to its applications in signal processing, image reconstruction, approximation theory, control theory, biomedical engineering, communications, and geophysics. For examples, one can refer to [1–5] and the related literature.

We know that the split feasibility problem can be formulated as the following problem:

\[ \text{(SFP)}\quad \text{Find } \bar{x} \in H_1 \text{ such that } \bar{x} \in C \text{ and } A\bar{x} \in Q, \]

where $C$ and $Q$ are nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $A: H_1 \to H_2$ is an operator. It is worth noting that a special case of problem (SFP) is the convexly constrained linear inverse problem in finite dimensional Hilbert spaces [6]:

\[ \text{(CLIP)}\quad \text{Find } \bar{x} \in C \text{ such that } A\bar{x} = b, \text{ where } b \in H_2. \]

Originally, problem (SFP) was considered in Euclidean spaces. (Note that if $H_1$ and $H_2$ are two Euclidean spaces, then $A$ is a matrix.) Since Censor and Elfving [1] introduced it in 1994, many researchers have studied (SFP) in finite dimensional or infinite dimensional Hilbert spaces. For example, one can see [2, 7–16] and the related literature.

In 2002, Byrne [2] first introduced the following recursive procedure:

\[ x_{n+1} = P_C\bigl(x_n - \rho_n A^{*}(I - P_Q)Ax_n\bigr), \]
(1.1)

where the stepsize $\rho_n$ is chosen in the interval $(0, 2/\|A\|^2)$, and $P_C$ and $P_Q$ are the metric projections onto $C \subseteq \mathbb{R}^n$ and $Q \subseteq \mathbb{R}^m$, respectively. This algorithm is called the CQ algorithm. Note that $A$ need not be invertible. In 2010, Wang and Xu [11] modified Byrne's CQ algorithm and gave a weak convergence theorem in infinite dimensional Hilbert spaces.
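
To make (1.1) concrete, the following is a small numerical sketch of the CQ algorithm in Python/NumPy. The sets $C$ (a box) and $Q$ (an interval), the matrix $A$, and the constant stepsize are illustrative choices only; they are not part of the original formulation.

```python
import numpy as np

# Hypothetical instance of (SFP): C is a box in R^2, Q is an interval in R.
A = np.array([[1.0, 2.0]])                      # A : R^2 -> R (a matrix)
proj_C = lambda x: np.clip(x, -1.0, 1.0)        # metric projection onto C = [-1,1]^2
proj_Q = lambda y: np.clip(y, 2.0, 3.0)         # metric projection onto Q = [2,3]

rho = 1.0 / np.linalg.norm(A, 2) ** 2           # stepsize in (0, 2/||A||^2)
x = np.zeros(2)
for _ in range(200):
    # CQ iteration: x_{n+1} = P_C(x_n - rho * A^T (I - P_Q) A x_n)
    x = proj_C(x - rho * A.T @ (A @ x - proj_Q(A @ x)))

print(x, A @ x)   # x should lie in C with A x (approximately) in Q
```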

In 2004, motivated by the works on CQ algorithm (1.1), Yang [14] considered (SFP) under the following conditions:

\[ C := \{x \in \mathbb{R}^n : c(x) \le 0\} \quad\text{and}\quad Q := \{x \in \mathbb{R}^m : q(x) \le 0\}, \]

where $c: \mathbb{R}^n \to \mathbb{R}$ and $q: \mathbb{R}^m \to \mathbb{R}$ are convex and lower semicontinuous functions. In fact, Yang [14] studied the following problem, which we call the relaxed split feasibility problem:

\[ \text{(RSFP)}\quad \text{Find } \bar{x} \in \mathbb{R}^n \text{ such that } c(\bar{x}) \le 0 \text{ and } q(A\bar{x}) \le 0. \]

In 2010, Xu [13] modified and extended Yang’s algorithm and gave a weak convergence theorem in infinite dimensional Hilbert spaces.

On the other hand, let $H$ be a real Hilbert space, and let $B$ be a set-valued mapping with domain $D(B) := \{x \in H : B(x) \neq \emptyset\}$. Recall that $B$ is called monotone if $\langle u - v, x - y\rangle \ge 0$ for any $u \in Bx$ and $v \in By$; $B$ is maximal monotone if its graph $\{(x, y) : x \in D(B),\ y \in Bx\}$ is not properly contained in the graph of any other monotone mapping. An important problem for set-valued monotone mappings is to find $\bar{x} \in H$ such that $0 \in B\bar{x}$. Here, $\bar{x}$ is called a zero point of $B$. A well-known method for approximating a zero point of a maximal monotone mapping defined on a real Hilbert space is the proximal point algorithm, first introduced by Martinet [17] and further developed by Rockafellar [18]. This iterative procedure generates $\{x_n\}$ by $x_1 = x \in H$ and

\[ x_{n+1} = J_{\beta_n}^{B}x_n, \quad n \in \mathbb{N}, \]
(1.2)

where $\{\beta_n\} \subseteq (0,\infty)$, $B$ is a maximal monotone mapping on a real Hilbert space, and $J_r^{B}$ is the resolvent mapping of $B$ defined by $J_r^{B} = (I + rB)^{-1}$ for each $r > 0$. In 1976, Rockafellar [18] proved the following in the Hilbert space setting: if the solution set $B^{-1}(0)$ is nonempty and $\liminf_{n\to\infty}\beta_n > 0$, then the sequence $\{x_n\}$ in (1.2) converges weakly to an element of $B^{-1}(0)$. In particular, if $B$ is the subdifferential $\partial f$ of a proper lower semicontinuous and convex function $f: H \to \mathbb{R}$, then (1.2) reduces to

\[ x_{n+1} = \arg\min_{y \in H}\Bigl\{ f(y) + \frac{1}{2\beta_n}\|y - x_n\|^2 \Bigr\}, \quad n \in \mathbb{N}. \]
(1.3)

In this case, $\{x_n\}$ converges weakly to a minimizer of $f$. Later, many researchers have studied convergence theorems for the proximal point algorithm in Hilbert spaces. For examples, one can refer to [19–24] and the references therein.
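
As an illustration of (1.2) and (1.3), the sketch below runs the proximal point algorithm for $f(x) = |x|$ on $H = \mathbb{R}$, where each proximal step has the closed-form soft-thresholding solution; the function and the parameter values are only an example, not part of the original discussion.

```python
import numpy as np

def prox_abs(x, beta):
    # argmin_y { |y| + (1/(2*beta)) * (y - x)^2 } = soft-thresholding of x at level beta
    return np.sign(x) * max(abs(x) - beta, 0.0)

x = 5.0
beta = 0.5
for n in range(30):
    x = prox_abs(x, beta)       # x_{n+1} = J_{beta}^{(∂f)} x_n
print(x)                        # converges to 0, the unique minimizer of |x|
```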

Let $H_1$ and $H_2$ be two real Hilbert spaces, let $B_1: H_1 \to H_1$ and $B_2: H_2 \to H_2$ be two set-valued maximal monotone mappings, let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ be the adjoint of $A$. In this paper, motivated by the works in [13, 14] and the related literature, we consider the following split variational inclusion problem:

\[ \text{(SFVIP)}\quad \text{Find } \bar{x} \in H_1 \text{ such that } 0 \in B_1(\bar{x}) \text{ and } 0 \in B_2(A\bar{x}). \]

Clearly, problem (SFVIP) is a generalization of the variational inclusion problem. We observe that problem (SFVIP) was introduced by Moudafi [25], who gave a weak convergence theorem for it. The following is the iteration process given by Moudafi [25]:

\[ x_{n+1} := J_{\lambda}^{B_1}\bigl(x_n + \gamma A^{*}(J_{\lambda}^{B_2} - I)Ax_n\bigr). \]

It is worth noting that λ and γ are fixed numbers. Hence, it is important to establish generalized iteration processes and the related strong convergence theorems for problem (SFVIP).

Besides, we know that the following problems are special cases of problem (SFVIP).

(SFOP) Find $\bar{x} \in H_1$ such that $f(\bar{x}) = \min_{y \in H_1} f(y)$ and $g(A\bar{x}) = \min_{z \in H_2} g(z)$, where $f: H_1 \to \mathbb{R}$ and $g: H_2 \to \mathbb{R}$ are two proper, lower semicontinuous, and convex functions.

(SFP) Find $\bar{x} \in H_1$ such that $\bar{x} \in C$ and $A\bar{x} \in Q$, where $C$ and $Q$ are two nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively.

In this paper, we consider a split variational inclusion problem and establish several strong convergence theorems in Hilbert spaces, including a Halpern-Mann type iteration method and a regularized iteration method. As applications, we consider algorithms for a split feasibility problem and a split optimization problem and give strong convergence theorems for these problems in Hilbert spaces. Our results for the split feasibility problem improve the related results in the literature.

2 Preliminaries

Throughout this paper, let $\mathbb{N}$ be the set of positive integers and let $\mathbb{R}$ be the set of real numbers. Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. We denote the strong convergence and the weak convergence of $\{x_n\}$ to $x \in H$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. From [26], for each $x, y \in H$ and $\lambda \in [0,1]$, we have

\[ \|\lambda x + (1-\lambda)y\|^2 = \lambda\|x\|^2 + (1-\lambda)\|y\|^2 - \lambda(1-\lambda)\|x - y\|^2. \]

Hence, we also have

\[ 2\langle x - y, u - v\rangle = \|x - v\|^2 + \|y - u\|^2 - \|x - u\|^2 - \|y - v\|^2 \]

for all $x, y, u, v \in H$. Furthermore, we know that

\[ \|\alpha x + \beta y + \gamma z\|^2 = \alpha\|x\|^2 + \beta\|y\|^2 + \gamma\|z\|^2 - \alpha\beta\|x - y\|^2 - \alpha\gamma\|x - z\|^2 - \beta\gamma\|y - z\|^2 \]

for each $x, y, z \in H$ and $\alpha, \beta, \gamma \in [0,1]$ with $\alpha + \beta + \gamma = 1$ [27].

Lemma 2.1 [28]

Let $H$ be a real Hilbert space, and let $x, y \in H$. Then $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$.

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T: C \to H$ be a mapping. Let $\mathrm{Fix}(T) := \{x \in C : Tx = x\}$. Then $T$ is said to be nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for every $x, y \in C$; $T$ is said to be quasi-nonexpansive if $\mathrm{Fix}(T) \neq \emptyset$ and $\|Tx - y\| \le \|x - y\|$ for every $x \in C$ and $y \in \mathrm{Fix}(T)$. It is easy to see that $\mathrm{Fix}(T)$ is a closed convex subset of $C$ if $T$ is a quasi-nonexpansive mapping. Besides, $T$ is said to be firmly nonexpansive if $\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty\rangle$ for every $x, y \in C$, that is, $\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2$ for every $x, y \in C$.

Lemma 2.2 [29]

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T: C \to H$ be a nonexpansive mapping, and let $\{x_n\}$ be a sequence in $C$. If $x_n \rightharpoonup w$ and $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$, then $Tw = w$.

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Then, for each $x \in H$, there is a unique element $\bar{x} \in C$ such that $\|x - \bar{x}\| = \min_{y \in C}\|x - y\|$. Here, we set $P_C x = \bar{x}$, and $P_C$ is said to be the metric projection from $H$ onto $C$.
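
For intuition, here are closed-form metric projections onto two standard sets, together with a numerical check of the firm nonexpansiveness inequality from the previous definitions; the specific sets and test points are illustrative assumptions only.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Metric projection onto the box C = {x : lo <= x <= hi} (componentwise)."""
    return np.clip(x, lo, hi)

def proj_ball(x, center, radius):
    """Metric projection onto the closed ball C = {x : ||x - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

# Numerical check that P_C is firmly nonexpansive:
#   ||P_C x - P_C y||^2 <= <x - y, P_C x - P_C y>
rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
Px, Py = proj_ball(x, np.zeros(3), 1.0), proj_ball(y, np.zeros(3), 1.0)
print(np.linalg.norm(Px - Py) ** 2 <= np.dot(x - y, Px - Py) + 1e-12)   # True
```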

Lemma 2.3 [30]

Let $C$ be a nonempty closed convex subset of a Hilbert space $H$, and let $P_C$ be the metric projection from $H$ onto $C$. Then, for each $x \in H$ and $z \in C$, we know that $z = P_C x$ if and only if $\langle x - z, z - y\rangle \ge 0$ for all $y \in C$.

The following result is an important tool in this paper. For similar results, one can see [31].

Lemma 2.4 Let $H$ be a real Hilbert space, let $B: H \to H$ be a set-valued maximal monotone mapping, let $\beta > 0$, and let $J_\beta^{B}$ be the resolvent mapping of $B$. Then:

(i) For each $\beta > 0$, $J_\beta^{B}$ is a single-valued and firmly nonexpansive mapping;

(ii) $D(J_\beta^{B}) = H$ and $\mathrm{Fix}(J_\beta^{B}) = \{x \in D(B) : 0 \in Bx\}$;

(iii) $\|x - J_\beta^{B}x\| \le \|x - J_\gamma^{B}x\|$ for all $0 < \beta \le \gamma$ and for all $x \in H$;

(iv) $(I - J_\beta^{B})$ is a firmly nonexpansive mapping for each $\beta > 0$;

(v) Suppose that $B^{-1}(0) \neq \emptyset$. Then $\|x - J_\beta^{B}x\|^2 + \|J_\beta^{B}x - \bar{x}\|^2 \le \|x - \bar{x}\|^2$ for each $x \in H$, each $\bar{x} \in B^{-1}(0)$, and each $\beta > 0$;

(vi) Suppose that $B^{-1}(0) \neq \emptyset$. Then $\langle x - J_\beta^{B}x,\ J_\beta^{B}x - w\rangle \ge 0$ for each $x \in H$, each $w \in B^{-1}(0)$, and each $\beta > 0$.
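
A small numerical illustration of Lemma 2.4: taking $B = \partial f$ with $f(x) = |x|$ on $H = \mathbb{R}$ (a maximal monotone mapping), the resolvent $J_\beta^{B} = (I + \beta B)^{-1}$ is the soft-thresholding map, $\mathrm{Fix}(J_\beta^{B}) = B^{-1}(0) = \{0\}$, and property (v) can be checked directly. The choice of $B$ and of the test points is only an example.

```python
import numpy as np

def resolvent(x, beta):
    # J_beta^B x = (I + beta * ∂|.|)^{-1} x  (soft-thresholding)
    return np.sign(x) * max(abs(x) - beta, 0.0)

beta, xbar = 0.7, 0.0          # xbar = 0 is the unique zero of B = ∂|.|
for x in [3.0, -1.2, 0.3]:
    Jx = resolvent(x, beta)
    # Lemma 2.4(v): ||x - Jx||^2 + ||Jx - xbar||^2 <= ||x - xbar||^2
    print((x - Jx) ** 2 + (Jx - xbar) ** 2 <= (x - xbar) ** 2 + 1e-12)
print(resolvent(0.0, beta) == 0.0)   # Fix(J) = B^{-1}(0) = {0}
```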

Lemma 2.5 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear operator with adjoint $A^{*}$, and let $\beta > 0$ be fixed. Let $B: H_2 \to H_2$ be a set-valued maximal monotone mapping, and let $J_\beta^{B}$ be the resolvent mapping of $B$. Let $T: H_1 \to H_1$ be defined by $Tx := A^{*}(I - J_\beta^{B})Ax$ for each $x \in H_1$. Then:

(i) $\|(I - J_\beta^{B})Ax - (I - J_\beta^{B})Ay\|^2 \le \langle Tx - Ty, x - y\rangle$ for all $x, y \in H_1$;

(ii) $\|A^{*}(I - J_\beta^{B})Ax - A^{*}(I - J_\beta^{B})Ay\|^2 \le \|A\|^2\langle Tx - Ty, x - y\rangle$ for all $x, y \in H_1$.

Proof (i) By Lemma 2.4,

\[
\begin{aligned}
\langle Tx - Ty, x - y\rangle &= \langle A^{*}(I - J_\beta^{B})Ax - A^{*}(I - J_\beta^{B})Ay,\ x - y\rangle \\
&= \langle (I - J_\beta^{B})Ax - (I - J_\beta^{B})Ay,\ Ax - Ay\rangle \\
&\ge \|(I - J_\beta^{B})Ax - (I - J_\beta^{B})Ay\|^2
\end{aligned}
\]

for all $x, y \in H_1$. (ii) Further, we have

\[
\begin{aligned}
\|A^{*}(I - J_\beta^{B})Ax - A^{*}(I - J_\beta^{B})Ay\|^2 &\le \|A\|^2\|(I - J_\beta^{B})Ax - (I - J_\beta^{B})Ay\|^2 \\
&\le \|A\|^2\langle Tx - Ty, x - y\rangle
\end{aligned}
\]

for all $x, y \in H_1$. Therefore, the proof is completed. □

Lemma 2.6 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear operator with adjoint $A^{*}$, let $\beta > 0$ be fixed, and let $\rho \in (0, \frac{2}{\|A\|^2})$. Let $B_2: H_2 \to H_2$ be a set-valued maximal monotone mapping, and let $J_\beta^{B_2}$ be the resolvent mapping of $B_2$. Then

\[ \|[x - \rho A^{*}(I - J_\beta^{B_2})Ax] - [y - \rho A^{*}(I - J_\beta^{B_2})Ay]\|^2 \le \|x - y\|^2 - (2\rho - \rho^2\|A\|^2)\|(I - J_\beta^{B_2})Ax - (I - J_\beta^{B_2})Ay\|^2 \]

for all $x, y \in H_1$. Furthermore, $I - \rho A^{*}(I - J_\beta^{B_2})A$ is a nonexpansive mapping.

Proof For all $x, y \in H_1$, we have

\[
\begin{aligned}
&\|[x - \rho A^{*}(I - J_\beta^{B_2})Ax] - [y - \rho A^{*}(I - J_\beta^{B_2})Ay]\|^2 \\
&\quad = \|x - y\|^2 - 2\rho\langle x - y,\ A^{*}(I - J_\beta^{B_2})Ax - A^{*}(I - J_\beta^{B_2})Ay\rangle + \rho^2\|A^{*}(I - J_\beta^{B_2})Ax - A^{*}(I - J_\beta^{B_2})Ay\|^2 \\
&\quad = \|x - y\|^2 - 2\rho\langle Ax - Ay,\ (I - J_\beta^{B_2})Ax - (I - J_\beta^{B_2})Ay\rangle + \rho^2\|A^{*}(I - J_\beta^{B_2})Ax - A^{*}(I - J_\beta^{B_2})Ay\|^2.
\end{aligned}
\]
(2.1)

Hence, it follows from (2.1) and Lemma 2.4 that

\[
\begin{aligned}
&\|[x - \rho A^{*}(I - J_\beta^{B_2})Ax] - [y - \rho A^{*}(I - J_\beta^{B_2})Ay]\|^2 \\
&\quad \le \|x - y\|^2 - 2\rho\|(I - J_\beta^{B_2})Ax - (I - J_\beta^{B_2})Ay\|^2 + \rho^2\|A^{*}(I - J_\beta^{B_2})Ax - A^{*}(I - J_\beta^{B_2})Ay\|^2 \\
&\quad \le \|x - y\|^2 - (2\rho - \rho^2\|A\|^2)\|(I - J_\beta^{B_2})Ax - (I - J_\beta^{B_2})Ay\|^2
\end{aligned}
\]

for all $x, y \in H_1$. Therefore, the proof is completed. □

The following is a very important result for establishing strong convergence theorems. Recently, many researchers have studied Halpern-type strong convergence theorems by using the following lemma and obtained many generalized results. For examples, one can see [32, 33]. In this paper, we also use this result to obtain our strong convergence theorems, and our results for the split feasibility problem improve the related results in the literature.

Lemma 2.7 [34]

Let $\{a_n\}$ be a sequence of real numbers such that there exists a subsequence $\{n_i\}$ of $\{n\}$ such that $a_{n_i} < a_{n_i+1}$ for all $i \in \mathbb{N}$. Then there exists a nondecreasing sequence $\{m_k\} \subseteq \mathbb{N}$ such that $m_k \to \infty$, and $a_{m_k} \le a_{m_k+1}$ and $a_k \le a_{m_k+1}$ are satisfied by all (sufficiently large) numbers $k \in \mathbb{N}$. In fact, $m_k = \max\{j \le k : a_j < a_{j+1}\}$.

Lemma 2.8 [35]

Let $\{a_n\}_{n\in\mathbb{N}}$ be a sequence of nonnegative real numbers, $\{\alpha_n\}$ be a sequence of real numbers in $[0,1]$ with $\sum_{n=1}^{\infty}\alpha_n = \infty$, $\{u_n\}$ be a sequence of nonnegative real numbers with $\sum_{n=1}^{\infty}u_n < \infty$, and $\{t_n\}$ be a sequence of real numbers with $\limsup_{n\to\infty} t_n \le 0$. Suppose that $a_{n+1} \le (1 - \alpha_n)a_n + \alpha_n t_n + u_n$ for each $n \in \mathbb{N}$. Then $\lim_{n\to\infty} a_n = 0$.

3 Halpern-Mann type algorithm with perturbations

In this section, we first give the following result.

Lemma 3.1 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $B_1: H_1 \to H_1$ and $B_2: H_2 \to H_2$ be two set-valued maximal monotone mappings, and let $\beta > 0$ and $\gamma > 0$. Given any $\bar{x} \in H_1$:

(i) If $\bar{x}$ is a solution of (SFVIP), then $J_\beta^{B_1}(\bar{x} - \gamma A^{*}(I - J_\beta^{B_2})A\bar{x}) = \bar{x}$.

(ii) Suppose that $J_\beta^{B_1}(\bar{x} - \gamma A^{*}(I - J_\beta^{B_2})A\bar{x}) = \bar{x}$ and the solution set of (SFVIP) is nonempty. Then $\bar{x}$ is a solution of (SFVIP).

Proof (i) Suppose that $\bar{x} \in H_1$ is a solution of (SFVIP). Then $\bar{x} \in B_1^{-1}(0)$ and $A\bar{x} \in B_2^{-1}(0)$. By Lemma 2.4, it is easy to see that

\[ J_\beta^{B_1}\bigl(\bar{x} - \gamma A^{*}(I - J_\beta^{B_2})A\bar{x}\bigr) = J_\beta^{B_1}\bigl(\bar{x} - \gamma A^{*}(A\bar{x} - J_\beta^{B_2}A\bar{x})\bigr) = J_\beta^{B_1}(\bar{x}) = \bar{x}. \]

(ii) Suppose that $\bar{w}$ is a solution of (SFVIP) and $J_\beta^{B_1}(\bar{x} - \gamma A^{*}(I - J_\beta^{B_2})A\bar{x}) = \bar{x}$. By Lemma 2.4,

\[ \bigl\langle (\bar{x} - \gamma A^{*}(I - J_\beta^{B_2})A\bar{x}) - \bar{x},\ \bar{x} - w\bigr\rangle \ge 0 \quad\text{for each } w \in B_1^{-1}(0). \]

That is,

\[ \bigl\langle A^{*}(I - J_\beta^{B_2})A\bar{x},\ \bar{x} - w\bigr\rangle \le 0 \quad\text{for each } w \in B_1^{-1}(0). \]
(3.1)

By (3.1) and the fact that $A^{*}$ is the adjoint of $A$,

\[ \bigl\langle A\bar{x} - J_\beta^{B_2}A\bar{x},\ A\bar{x} - Aw\bigr\rangle \le 0 \quad\text{for each } w \in B_1^{-1}(0). \]
(3.2)

On the other hand, by Lemma 2.4 again,

\[ \bigl\langle A\bar{x} - J_\beta^{B_2}A\bar{x},\ v - J_\beta^{B_2}A\bar{x}\bigr\rangle \le 0 \quad\text{for each } v \in B_2^{-1}(0). \]
(3.3)

By (3.2) and (3.3),

\[ \bigl\langle A\bar{x} - J_\beta^{B_2}A\bar{x},\ v - J_\beta^{B_2}A\bar{x} + A\bar{x} - Aw\bigr\rangle \le 0 \]
(3.4)

for each $w \in B_1^{-1}(0)$ and each $v \in B_2^{-1}(0)$. That is,

\[ \|A\bar{x} - J_\beta^{B_2}A\bar{x}\|^2 \le \bigl\langle A\bar{x} - J_\beta^{B_2}A\bar{x},\ Aw - v\bigr\rangle \]
(3.5)

for each $w \in B_1^{-1}(0)$ and each $v \in B_2^{-1}(0)$. Since $\bar{w}$ is a solution of (SFVIP), $\bar{w} \in B_1^{-1}(0)$ and $A\bar{w} \in B_2^{-1}(0)$. So, taking $w = \bar{w}$ and $v = A\bar{w}$ in (3.5), it follows that $A\bar{x} = J_\beta^{B_2}A\bar{x}$. So, $A\bar{x} \in \mathrm{Fix}(J_\beta^{B_2}) = B_2^{-1}(0)$. Further,

\[ \bar{x} = J_\beta^{B_1}\bigl(\bar{x} - \gamma A^{*}(I - J_\beta^{B_2})A\bar{x}\bigr) = J_\beta^{B_1}(\bar{x}). \]

Then $\bar{x} \in \mathrm{Fix}(J_\beta^{B_1}) = B_1^{-1}(0)$. Therefore, $\bar{x}$ is a solution of (SFVIP). □

Theorem 3.1 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $B_1: H_1 \to H_1$ and $B_2: H_2 \to H_2$ be two set-valued maximal monotone mappings. Let $\{a_n\}$, $\{b_n\}$, $\{c_n\}$, and $\{d_n\}$ be sequences of real numbers in $[0,1]$ with $a_n + b_n + c_n + d_n = 1$ and $0 < a_n < 1$ for each $n \in \mathbb{N}$. Let $\{\beta_n\}$ be a sequence in $(0,\infty)$. Let $\{v_n\}$ be a bounded sequence in $H_1$. Let $u \in H_1$ be fixed. Let $\{\rho_n\} \subseteq (0, \frac{2}{\|A\|^2+1})$. Let $\Omega$ be the solution set of (SFVIP) and suppose that $\Omega \neq \emptyset$. Let $\{x_n\}$ be defined by

\[ x_{n+1} := a_n u + b_n x_n + c_n J_{\beta_n}^{B_1}\bigl[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n\bigr] + d_n v_n \]

for each $n \in \mathbb{N}$. Assume that:

(i) $\lim_{n\to\infty} a_n = \lim_{n\to\infty}\frac{d_n}{a_n} = 0$; $\sum_{n=1}^{\infty} a_n = \infty$; $\sum_{n=1}^{\infty} d_n < \infty$;

(ii) $\liminf_{n\to\infty} c_n\rho_n > 0$, $\liminf_{n\to\infty} b_n c_n > 0$, $\liminf_{n\to\infty}\beta_n > 0$.

Then $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega u$.

Proof Let $\bar{x} = P_\Omega u$, where $P_\Omega$ is the metric projection from $H_1$ onto $\Omega$. Then, for each $n \in \mathbb{N}$, it follows from Lemma 2.6 that

\[
\begin{aligned}
\|x_{n+1} - \bar{x}\| &\le a_n\|u - \bar{x}\| + b_n\|x_n - \bar{x}\| + d_n\|v_n - \bar{x}\| + c_n\|J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - \bar{x}\| \\
&\le a_n\|u - \bar{x}\| + b_n\|x_n - \bar{x}\| + d_n\|v_n - \bar{x}\| + c_n\|[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - [\bar{x} - \rho_n A^{*}(I - J_{\beta_n}^{B_2})A\bar{x}]\| \\
&\le a_n\|u - \bar{x}\| + (b_n + c_n)\|x_n - \bar{x}\| + d_n\|v_n - \bar{x}\|.
\end{aligned}
\]

This implies that { x n } is a bounded sequence. Besides, by Lemmas 2.4 and 2.6, we have

\[
\begin{aligned}
\|J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - \bar{x}\|^2 &\le \|[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - [\bar{x} - \rho_n A^{*}(I - J_{\beta_n}^{B_2})A\bar{x}]\|^2 \\
&\le \|x_n - \bar{x}\|^2 - (2\rho_n - \rho_n^2\|A\|^2)\|(I - J_{\beta_n}^{B_2})Ax_n - (I - J_{\beta_n}^{B_2})A\bar{x}\|^2 \\
&= \|x_n - \bar{x}\|^2 - (2\rho_n - \rho_n^2\|A\|^2)\|(I - J_{\beta_n}^{B_2})Ax_n\|^2.
\end{aligned}
\]
(3.6)

Hence, it follows from Lemma 2.1 that

\[
\begin{aligned}
\|x_{n+1} - \bar{x}\|^2 &= \|a_n u + b_n x_n + c_n J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] + d_n v_n - \bar{x}\|^2 \\
&\le \|b_n(x_n - \bar{x}) + c_n(J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - \bar{x}) + d_n(v_n - \bar{x})\|^2 + 2a_n\langle u - \bar{x},\ x_{n+1} - \bar{x}\rangle \\
&= (1 - a_n)^2\|b_n'(x_n - \bar{x}) + c_n'(J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - \bar{x}) + d_n'(v_n - \bar{x})\|^2 + 2a_n\langle u - \bar{x},\ x_{n+1} - \bar{x}\rangle,
\end{aligned}
\]
(3.7)

where $b_n' := \frac{b_n}{b_n + c_n + d_n}$, $c_n' := \frac{c_n}{b_n + c_n + d_n}$, and $d_n' := \frac{d_n}{b_n + c_n + d_n}$. Further, by (3.6) and (3.7), we have

\[
\begin{aligned}
\|x_{n+1} - \bar{x}\|^2 &\le b_n\|x_n - \bar{x}\|^2 + c_n\|J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - \bar{x}\|^2 + d_n\|v_n - \bar{x}\|^2 \\
&\quad + 2a_n\langle u - \bar{x},\ x_{n+1} - \bar{x}\rangle - b_n c_n\|x_n - J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n]\|^2 \\
&\le b_n\|x_n - \bar{x}\|^2 + c_n\bigl(\|x_n - \bar{x}\|^2 - (2\rho_n - \rho_n^2\|A\|^2)\|Ax_n - J_{\beta_n}^{B_2}Ax_n\|^2\bigr) + d_n\|v_n - \bar{x}\|^2 \\
&\quad + 2a_n\langle u - \bar{x},\ x_{n+1} - \bar{x}\rangle - b_n c_n\|x_n - J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n]\|^2 \\
&= (b_n + c_n)\|x_n - \bar{x}\|^2 + d_n\|v_n - \bar{x}\|^2 + 2a_n\langle u - \bar{x},\ x_{n+1} - \bar{x}\rangle \\
&\quad - c_n(2\rho_n - \rho_n^2\|A\|^2)\|Ax_n - J_{\beta_n}^{B_2}Ax_n\|^2 - b_n c_n\|x_n - J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n]\|^2.
\end{aligned}
\]
(3.8)

Since $\liminf_{n\to\infty}\beta_n > 0$, we may assume that $\beta_n \ge \beta > 0$ for each $n \in \mathbb{N}$. Next, we consider two cases.

Case 1: There exists a natural number $N$ such that $\|x_{n+1} - \bar{x}\| \le \|x_n - \bar{x}\|$ for each $n \ge N$. So, $\lim_{n\to\infty}\|x_n - \bar{x}\|$ exists. Hence, it follows from (3.8) and (i) that

\[ \lim_{n\to\infty} c_n(2\rho_n - \rho_n^2\|A\|^2)\|Ax_n - J_{\beta_n}^{B_2}Ax_n\|^2 = 0. \]

Clearly, $c_n(2\rho_n - \rho_n^2\|A\|^2) \ge \frac{c_n\rho_n}{\|A\|^2 + 1}$. Since $\liminf_{n\to\infty} c_n\rho_n > 0$, we have

\[ \lim_{n\to\infty}\|Ax_n - J_{\beta_n}^{B_2}Ax_n\| = 0. \]
(3.9)

By (3.9) and Lemma 2.4,

\[ \lim_{n\to\infty}\|Ax_n - J_{\beta}^{B_2}Ax_n\| = 0. \]
(3.10)

Similarly, we know that

\[ \lim_{n\to\infty}\|x_n - J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n]\| = 0. \]
(3.11)

Further, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup z$ for some $z \in H_1$ and

\[ \limsup_{n\to\infty}\langle u - \bar{x},\ x_{n+1} - \bar{x}\rangle = \lim_{k\to\infty}\langle u - \bar{x},\ x_{n_k} - \bar{x}\rangle = \langle u - \bar{x},\ z - \bar{x}\rangle. \]
(3.12)

Clearly, $Ax_{n_k} \rightharpoonup Az$. By (3.10) and Lemmas 2.2 and 2.4, we know that $Az \in B_2^{-1}(0)$. Besides, it follows from Lemma 2.4 that

\[ \|J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - J_{\beta_n}^{B_1}x_n\| \le \rho_n\|A^{*}\|\,\|Ax_n - J_{\beta_n}^{B_2}Ax_n\|. \]
(3.13)

By (3.9) and (3.13),

\[ \lim_{n\to\infty}\|J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - J_{\beta_n}^{B_1}x_n\| = 0. \]
(3.14)

By (3.11) and (3.14),

\[ \lim_{n\to\infty}\|x_n - J_{\beta_n}^{B_1}x_n\| = 0. \]
(3.15)

By (3.15) and Lemma 2.4,

\[ \lim_{n\to\infty}\|x_n - J_{\beta}^{B_1}x_n\| = 0. \]
(3.16)

Then it follows from (3.16) and Lemma 2.2 that $z \in B_1^{-1}(0)$. So, $z$ is a solution of (SFVIP). By (3.12) and Lemma 2.3,

\[ \limsup_{n\to\infty}\langle u - \bar{x},\ x_{n+1} - \bar{x}\rangle \le 0. \]
(3.17)

By the assumptions, (3.8), (3.17), and Lemma 2.8, we know that $\lim_{n\to\infty} x_n = \bar{x}$.

Case 2: Suppose that there exists a subsequence $\{n_i\}$ of $\{n\}$ such that $\|x_{n_i} - \bar{x}\| \le \|x_{n_i+1} - \bar{x}\|$ for all $i \in \mathbb{N}$. By Lemma 2.7, there exists a nondecreasing sequence $\{m_k\}$ in $\mathbb{N}$ such that $m_k \to \infty$,

\[ \|x_{m_k} - \bar{x}\| \le \|x_{m_k+1} - \bar{x}\| \quad\text{and}\quad \|x_k - \bar{x}\| \le \|x_{m_k+1} - \bar{x}\| \]
(3.18)

for all $k \in \mathbb{N}$. By (3.8) and (3.18), we have

\[
\begin{aligned}
\|x_{m_k} - \bar{x}\|^2 \le \|x_{m_k+1} - \bar{x}\|^2 &\le (b_{m_k} + c_{m_k})\|x_{m_k} - \bar{x}\|^2 + d_{m_k}\|v_{m_k} - \bar{x}\|^2 + 2a_{m_k}\langle u - \bar{x},\ x_{m_k+1} - \bar{x}\rangle \\
&\quad - c_{m_k}(2\rho_{m_k} - \rho_{m_k}^2\|A\|^2)\|Ax_{m_k} - J_{\beta_{m_k}}^{B_2}Ax_{m_k}\|^2 \\
&\quad - b_{m_k}c_{m_k}\|x_{m_k} - J_{\beta_{m_k}}^{B_1}[x_{m_k} - \rho_{m_k}A^{*}(I - J_{\beta_{m_k}}^{B_2})Ax_{m_k}]\|^2.
\end{aligned}
\]
(3.19)

Following a similar argument to the proof of Case 1, we have

\[ \lim_{k\to\infty}\|x_{m_k} - J_{\beta_{m_k}}^{B_1}[x_{m_k} - \rho_{m_k}A^{*}(I - J_{\beta_{m_k}}^{B_2})Ax_{m_k}]\| = 0, \]
(3.20)
\[ \lim_{k\to\infty}\|Ax_{m_k} - J_{\beta}^{B_2}Ax_{m_k}\| = \lim_{k\to\infty}\|x_{m_k} - J_{\beta}^{B_1}x_{m_k}\| = 0 \]
(3.21)

and

\[ \limsup_{k\to\infty}\langle u - \bar{x},\ x_{m_k+1} - \bar{x}\rangle \le 0. \]
(3.22)

By (3.19),

\[ \|x_{m_k} - \bar{x}\|^2 \le \frac{d_{m_k}}{a_{m_k}}\|v_{m_k} - \bar{x}\|^2 + 2\langle u - \bar{x},\ x_{m_k+1} - \bar{x}\rangle. \]
(3.23)

By assumption, (3.22), and (3.23),

\[ \lim_{k\to\infty}\|x_{m_k} - \bar{x}\| = 0. \]
(3.24)

Besides, we have

\[ \|x_{m_k+1} - x_{m_k}\| \le a_{m_k}\|u - x_{m_k}\| + c_{m_k}\|x_{m_k} - J_{\beta_{m_k}}^{B_1}[x_{m_k} - \rho_{m_k}A^{*}(I - J_{\beta_{m_k}}^{B_2})Ax_{m_k}]\| + d_{m_k}\|v_{m_k} - x_{m_k}\|. \]
(3.25)

By assumptions, (3.20), and (3.25),

\[ \lim_{k\to\infty}\|x_{m_k+1} - x_{m_k}\| = 0. \]
(3.26)

By (3.24) and (3.26),

\[ \lim_{k\to\infty}\|x_{m_k+1} - \bar{x}\| = 0. \]
(3.27)

By (3.18) and (3.27),

\[ \lim_{k\to\infty}\|x_k - \bar{x}\| = 0. \]

Therefore, the proof is completed. □
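
To illustrate Theorem 3.1, the sketch below runs the iteration on a small instance where $B_1 = \partial\|\cdot\|_1$ on $\mathbb{R}^2$ and $B_2 = \partial\|\cdot\|_1$ on $\mathbb{R}$, so both resolvents are soft-thresholding maps and the solution set is $\Omega = \{0\}$. The operator $A$, the parameter sequences, and the perturbation sequence are illustrative choices consistent with conditions (i)-(ii); they are not prescribed by the theorem.

```python
import numpy as np

def soft(x, t):
    # resolvent of ∂||.||_1 with parameter t (componentwise soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

A = np.array([[1.0, -2.0]])                 # A : R^2 -> R, A* = A.T
u = np.array([3.0, -1.0])                   # Halpern anchor point
rho = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1) # rho_n in (0, 2/(||A||^2 + 1)), held constant
beta = 1.0                                  # beta_n, held constant

x = np.array([4.0, 5.0])
for n in range(2000):
    a = 1.0 / (n + 2)                       # a_n -> 0, sum a_n = infinity
    d = 1.0 / (n + 2) ** 2                  # d_n/a_n -> 0, sum d_n < infinity
    b = c = (1.0 - a - d) / 2.0             # a_n + b_n + c_n + d_n = 1
    v = np.array([1.0, 1.0])                # bounded perturbation sequence v_n
    inner = x - rho * A.T @ (A @ x - soft(A @ x, beta))
    x = a * u + b * x + c * soft(inner, beta) + d * v

print(x)   # approaches the solution P_Omega(u) = 0 of this (SFVIP) instance
```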

In Theorem 3.1, if we set $v_n = 0$ and $d_n = 0$ for each $n \in \mathbb{N}$, then we get the following result.

Corollary 3.1 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $B_1: H_1 \to H_1$ and $B_2: H_2 \to H_2$ be two set-valued maximal monotone mappings. Let $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ be sequences of real numbers in $[0,1]$ with $a_n + b_n + c_n = 1$ and $0 < a_n < 1$ for each $n \in \mathbb{N}$. Let $\{\beta_n\}$ be a sequence in $(0,\infty)$. Let $u \in H_1$ be fixed. Let $\{\rho_n\} \subseteq (0, \frac{2}{\|A\|^2+1})$. Let $\Omega$ be the solution set of (SFVIP) and suppose that $\Omega \neq \emptyset$. Let $\{x_n\}$ be defined by

\[ x_{n+1} := a_n u + b_n x_n + c_n J_{\beta_n}^{B_1}\bigl[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n\bigr] \]

for each $n \in \mathbb{N}$. Assume that $\lim_{n\to\infty} a_n = 0$, $\sum_{n=1}^{\infty} a_n = \infty$, $\liminf_{n\to\infty} c_n\rho_n > 0$, $\liminf_{n\to\infty} b_n c_n > 0$, and $\liminf_{n\to\infty}\beta_n > 0$. Then $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega u$.

Further, we can get the following result by Corollary 3.1 and Lemma 2.8. In fact, Corollary 3.1 and Theorem 3.2 are equivalent.

Theorem 3.2 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $B_1: H_1 \to H_1$ and $B_2: H_2 \to H_2$ be two set-valued maximal monotone mappings. Let $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ be sequences of real numbers in $[0,1]$ with $a_n + b_n + c_n = 1$ and $0 < a_n < 1$ for each $n \in \mathbb{N}$. Let $\{\beta_n\}$ be a sequence in $(0,\infty)$. Let $\{v_n\}$ be a bounded sequence in $H_1$. Let $u \in H_1$ be fixed. Let $\{\rho_n\} \subseteq (0, \frac{2}{\|A\|^2+1})$. Let $\Omega$ be the solution set of (SFVIP) and suppose that $\Omega \neq \emptyset$. Let $\{x_n\}$ be defined by

\[ x_{n+1} := a_n u + b_n x_n + c_n J_{\beta_n}^{B_1}\bigl[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n\bigr] + v_n \]

for each $n \in \mathbb{N}$. Assume that $\lim_{n\to\infty} a_n = 0$, $\sum_{n=1}^{\infty} a_n = \infty$, $\sum_{n=1}^{\infty}\|v_n\| < \infty$, $\liminf_{n\to\infty} c_n\rho_n > 0$, $\liminf_{n\to\infty} b_n c_n > 0$, and $\liminf_{n\to\infty}\beta_n > 0$. Then $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega u$.

Proof Let $\{y_n\}$ be defined by

\[ y_{n+1} := a_n u + b_n y_n + c_n J_{\beta_n}^{B_1}\bigl[y_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ay_n\bigr]. \]

By Corollary 3.1, $\lim_{n\to\infty} y_n = \bar{x}$, where $\bar{x} = P_\Omega u$. Besides, we know that

\[
\begin{aligned}
\|x_{n+1} - y_{n+1}\| &\le c_n\|J_{\beta_n}^{B_1}[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - J_{\beta_n}^{B_1}[y_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ay_n]\| + b_n\|x_n - y_n\| + \|v_n\| \\
&\le (b_n + c_n)\|x_n - y_n\| + \|v_n\| = (1 - a_n)\|x_n - y_n\| + \|v_n\|.
\end{aligned}
\]
(3.28)

By (3.28) and Lemma 2.8, $\lim_{n\to\infty}\|x_n - y_n\| = 0$. So, $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega u$. Therefore, the proof is completed. □

4 Regularized method for (SFVIP)

Lemma 4.1 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $B_1: H_1 \to H_1$ and $B_2: H_2 \to H_2$ be two set-valued maximal monotone mappings. Let $\beta > 0$, $a \in (0,1)$, and $\rho \in (0, \frac{2}{\|A\|^2+2})$. Then

\[ \|J_\beta^{B_1}[(1 - a\rho)x - \rho A^{*}(I - J_\beta^{B_2})Ax] - J_\beta^{B_1}[(1 - a\rho)y - \rho A^{*}(I - J_\beta^{B_2})Ay]\| \le (1 - a\rho)\|x - y\| \]

for all $x, y \in H_1$.

Proof For each $x, y \in H_1$, it follows from Lemma 2.4 and Lemma 2.5 that

\[
\begin{aligned}
&\|J_\beta^{B_1}\bigl((1 - a\rho)x - \rho A^{*}(I - J_\beta^{B_2})Ax\bigr) - J_\beta^{B_1}\bigl((1 - a\rho)y - \rho A^{*}(I - J_\beta^{B_2})Ay\bigr)\|^2 \\
&\quad \le \|(1 - a\rho)(x - y) - \rho\bigl(A^{*}(I - J_\beta^{B_2})Ax - A^{*}(I - J_\beta^{B_2})Ay\bigr)\|^2 \\
&\quad = (1 - a\rho)^2\|x - y\|^2 - 2(1 - a\rho)\rho\langle x - y,\ A^{*}(I - J_\beta^{B_2})Ax - A^{*}(I - J_\beta^{B_2})Ay\rangle + \rho^2\|A^{*}(I - J_\beta^{B_2})Ax - A^{*}(I - J_\beta^{B_2})Ay\|^2 \\
&\quad \le (1 - a\rho)^2\|x - y\|^2 - 2(1 - a\rho)\rho\frac{1}{\|A\|^2}\|A^{*}(I - J_\beta^{B_2})Ax - A^{*}(I - J_\beta^{B_2})Ay\|^2 + \rho^2\|A^{*}(I - J_\beta^{B_2})Ax - A^{*}(I - J_\beta^{B_2})Ay\|^2.
\end{aligned}
\]

If $\rho \in (0, \frac{2}{\|A\|^2+2})$, then $2(1 - a\rho)\rho\frac{1}{\|A\|^2} \ge \rho^2$. This implies that the conclusion of Lemma 4.1 holds. □

Theorem 4.1 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $B_1: H_1 \to H_1$ and $B_2: H_2 \to H_2$ be two set-valued maximal monotone mappings. Let $\{\beta_n\}$ be a sequence in $(0,\infty)$, $\{a_n\} \subseteq (0,1)$, and $\{\rho_n\} \subseteq (0, \frac{2}{\|A\|^2+2})$. Let $\Omega$ be the solution set of (SFVIP) and suppose that $\Omega \neq \emptyset$. Let $\{x_n\}$ be defined by

\[ x_{n+1} := J_{\beta_n}^{B_1}\bigl[(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n\bigr] \]

for each $n \in \mathbb{N}$. Assume that:

\[ \lim_{n\to\infty} a_n = 0,\qquad \sum_{n=1}^{\infty} a_n\rho_n = \infty,\qquad \liminf_{n\to\infty}\rho_n > 0,\quad\text{and}\quad \liminf_{n\to\infty}\beta_n > 0. \]

Then $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega 0$, i.e., $\bar{x}$ is the minimal norm solution of (SFVIP).

Proof Let $\bar{x} = P_\Omega 0$. Take any $w \in \Omega$ and let $w$ be fixed. Then we know that

\[
\begin{aligned}
\|x_{n+1} - w\| &= \|J_{\beta_n}^{B_1}[(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - w\| \\
&= \|J_{\beta_n}^{B_1}[(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - J_{\beta_n}^{B_1}[w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw]\| \\
&\le \|J_{\beta_n}^{B_1}[(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - J_{\beta_n}^{B_1}[(1 - a_n\rho_n)w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw]\| \\
&\quad + \|J_{\beta_n}^{B_1}[(1 - a_n\rho_n)w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw] - J_{\beta_n}^{B_1}[w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw]\| \\
&\le (1 - a_n\rho_n)\|x_n - w\| + a_n\rho_n\|w\|
\end{aligned}
\]

for each $n \in \mathbb{N}$. Then $\{x_n\}$ is a bounded sequence. Further, we have

\[
\begin{aligned}
\|x_{n+1} - w\|^2 &= \|J_{\beta_n}^{B_1}[(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - J_{\beta_n}^{B_1}[w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw]\|^2 \\
&\le \|[(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - [w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw]\|^2 \\
&= \|[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - [w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw] - a_n\rho_n x_n\|^2 \\
&= \|[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - [w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw]\|^2 \\
&\quad - 2a_n\rho_n\bigl\langle [x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - [w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw],\ x_n\bigr\rangle + a_n^2\rho_n^2\|x_n\|^2
\end{aligned}
\]
(4.1)

for each nN. By (4.1) and Lemma 2.6,

\[
\begin{aligned}
\|x_{n+1} - w\|^2 &\le \|x_n - w\|^2 - (2\rho_n - \rho_n^2\|A\|^2)\|(I - J_{\beta_n}^{B_2})Ax_n - (I - J_{\beta_n}^{B_2})Aw\|^2 \\
&\quad - 2a_n\rho_n\bigl\langle [x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - [w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw],\ x_n\bigr\rangle + a_n\rho_n\|x_n\|^2 \\
&\le \|x_n - w\|^2 - (2\rho_n - \rho_n^2\|A\|^2)\|Ax_n - J_{\beta_n}^{B_2}Ax_n\|^2 + a_n\rho_n\|x_n\|^2 \\
&\quad + 2a_n\rho_n\|[x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - [w - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Aw]\|\,\|x_n\| \\
&\le \|x_n - w\|^2 - (2\rho_n - \rho_n^2\|A\|^2)\|Ax_n - J_{\beta_n}^{B_2}Ax_n\|^2 + a_n\rho_n\|x_n\|^2 + 2a_n\rho_n\|x_n - w\|\,\|x_n\|
\end{aligned}
\]
(4.2)

for each nN. By (4.1)-(4.2), Lemma 2.4, we know that

( 1 a n ρ n ) x n ρ n A ( I J β n B 2 ) A x n x n + 1 2 + x n + 1 w 2 ( 1 a n ρ n ) x n ρ n A ( I J β n B 2 ) A x n w 2 = ( 1 a n ρ n ) x n ρ n A ( I J β n B 2 ) A x n w + ρ n A ( I J β n B 2 ) A w 2 x n w 2 + 2 a n ρ n x n w x n + a n ρ n x n 2
(4.3)

for each nN. Next, we know that

\[
\begin{aligned}
&\|(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n - x_{n+1}\|^2 \\
&\quad = \|x_n - x_{n+1}\|^2 + \|a_n\rho_n x_n + \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n\|^2 - 2\bigl\langle x_n - x_{n+1},\ a_n\rho_n x_n + \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n\bigr\rangle
\end{aligned}
\]
(4.4)

for each $n \in \mathbb{N}$, and

\[
\begin{aligned}
\|x_{n+1} - J_{\beta_n}^{B_1}x_n\| &= \|J_{\beta_n}^{B_1}[(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - J_{\beta_n}^{B_1}x_n\| \\
&\le \|[(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - x_n\| \\
&\le a_n\rho_n\|x_n\| + \rho_n\|A^{*}\|\,\|Ax_n - J_{\beta_n}^{B_2}Ax_n\|
\end{aligned}
\]
(4.5)

for each nN. Further, we have

\[
\begin{aligned}
\|x_{n+1} - \bar{x}\|^2 &= \|J_{\beta_n}^{B_1}[(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n] - \bar{x}\|^2 \\
&\le \bigl\langle (1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n - \bar{x} + \rho_n A^{*}(I - J_{\beta_n}^{B_2})A\bar{x},\ x_{n+1} - \bar{x}\bigr\rangle \\
&= \bigl\langle (1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n - (1 - a_n\rho_n)\bar{x} + \rho_n A^{*}(I - J_{\beta_n}^{B_2})A\bar{x},\ x_{n+1} - \bar{x}\bigr\rangle + a_n\rho_n\langle -\bar{x},\ x_{n+1} - \bar{x}\rangle
\end{aligned}
\]
(4.6)

for each $n \in \mathbb{N}$. Hence,

\[
\begin{aligned}
\|x_{n+1} - \bar{x}\|^2 &\le \|(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n - (1 - a_n\rho_n)\bar{x} + \rho_n A^{*}(I - J_{\beta_n}^{B_2})A\bar{x}\|\,\|x_{n+1} - \bar{x}\| + a_n\rho_n\langle -\bar{x},\ x_{n+1} - \bar{x}\rangle \\
&\le (1 - a_n\rho_n)\|x_n - \bar{x}\|\,\|x_{n+1} - \bar{x}\| + a_n\rho_n\langle -\bar{x},\ x_{n+1} - \bar{x}\rangle \\
&\le \frac{(1 - a_n\rho_n)^2}{2}\|x_n - \bar{x}\|^2 + \frac{1}{2}\|x_{n+1} - \bar{x}\|^2 + a_n\rho_n\langle -\bar{x},\ x_{n+1} - \bar{x}\rangle \\
&\le \frac{1 - a_n\rho_n}{2}\|x_n - \bar{x}\|^2 + \frac{1}{2}\|x_{n+1} - \bar{x}\|^2 + a_n\rho_n\langle -\bar{x},\ x_{n+1} - \bar{x}\rangle
\end{aligned}
\]

for each $n \in \mathbb{N}$. This implies that

\[ \|x_{n+1} - \bar{x}\|^2 \le (1 - a_n\rho_n)\|x_n - \bar{x}\|^2 + 2a_n\rho_n\langle -\bar{x},\ x_{n+1} - \bar{x}\rangle \]
(4.7)

for each $n \in \mathbb{N}$.

Case 1: There exists a natural number $N$ such that $\|x_{n+1} - \bar{x}\| \le \|x_n - \bar{x}\|$ for each $n \ge N$. So, $\lim_{n\to\infty}\|x_n - \bar{x}\|$ exists.

Hence, since $\lim_{n\to\infty}\|x_n - \bar{x}\|$ exists and $a_n\rho_n \to 0$, it follows from (4.2) (with $w = \bar{x}$) that

\[ \lim_{n\to\infty}(2\rho_n - \rho_n^2\|A\|^2)\|Ax_n - J_{\beta_n}^{B_2}Ax_n\|^2 = 0. \]
(4.8)

Clearly,

\[ 2\rho_n - \rho_n^2\|A\|^2 = \rho_n(2 - \rho_n\|A\|^2) \ge \rho_n\Bigl(2 - \frac{2\|A\|^2}{\|A\|^2 + 2}\Bigr) = \frac{4\rho_n}{\|A\|^2 + 2}. \]
(4.9)

By assumption, (4.8), and (4.9),

\[ \lim_{n\to\infty}\|Ax_n - J_{\beta_n}^{B_2}Ax_n\| = 0. \]
(4.10)

Without loss of generality, we may assume that $\beta_n \ge \beta > 0$ for each $n \in \mathbb{N}$. By (4.10) and Lemma 2.4,

\[ \lim_{n\to\infty}\|Ax_n - J_{\beta}^{B_2}Ax_n\| = 0. \]
(4.11)

By assumption, (4.5), and (4.10),

\[ \lim_{n\to\infty}\|x_{n+1} - J_{\beta_n}^{B_1}x_n\| = 0. \]
(4.12)

Since $\lim_{n\to\infty}\|x_n - \bar{x}\|$ exists and $\{x_n\}$ is a bounded sequence, it follows from the assumptions and (4.3) that

\[ \lim_{n\to\infty}\|(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n - x_{n+1}\| = 0. \]
(4.13)

Clearly,

\[ \|a_n\rho_n x_n + \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n\| \le a_n\rho_n\|x_n\| + \rho_n\|A^{*}\|\,\|Ax_n - J_{\beta_n}^{B_2}Ax_n\| \]
(4.14)

for each $n \in \mathbb{N}$. By assumption, (4.10), and (4.14),

\[ \lim_{n\to\infty}\|a_n\rho_n x_n + \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n\| = 0. \]
(4.15)

By (4.15),

\[ \lim_{n\to\infty}\bigl\langle x_n - x_{n+1},\ a_n\rho_n x_n + \rho_n A^{*}(I - J_{\beta_n}^{B_2})Ax_n\bigr\rangle = 0. \]
(4.16)

By (4.4), (4.13), (4.15), and (4.16), we know that

\[ \lim_{n\to\infty}\|x_{n+1} - x_n\| = 0. \]
(4.17)

By (4.12) and (4.17),

\[ \lim_{n\to\infty}\|x_n - J_{\beta_n}^{B_1}x_n\| = 0. \]
(4.18)

Since $\{x_n\}$ is a bounded sequence, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j} \rightharpoonup z$ for some $z \in H_1$ and

\[ \limsup_{n\to\infty}\langle -\bar{x},\ x_{n+1} - \bar{x}\rangle = \lim_{j\to\infty}\langle -\bar{x},\ x_{n_j} - \bar{x}\rangle = \langle -\bar{x},\ z - \bar{x}\rangle. \]

Then $Ax_{n_j} \rightharpoonup Az \in H_2$. By (4.11), (4.18), Lemma 2.2, and Lemma 2.4, we know that $z \in B_1^{-1}(0)$ and $Az \in B_2^{-1}(0)$. That is, $z \in \Omega$. By Lemma 2.3,

\[ \limsup_{n\to\infty}\langle -\bar{x},\ x_{n+1} - \bar{x}\rangle = \langle -\bar{x},\ z - \bar{x}\rangle \le 0. \]
(4.19)

By (4.7), (4.19), and Lemma 2.8, we know that $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega 0$.

Case 2: Suppose that there exists a subsequence $\{n_i\}$ of $\{n\}$ such that $\|x_{n_i} - \bar{x}\| \le \|x_{n_i+1} - \bar{x}\|$ for all $i \in \mathbb{N}$. By Lemma 2.7, there exists a nondecreasing sequence $\{m_k\}$ in $\mathbb{N}$ such that $m_k \to \infty$,

\[ \|x_{m_k} - \bar{x}\| \le \|x_{m_k+1} - \bar{x}\| \quad\text{and}\quad \|x_k - \bar{x}\| \le \|x_{m_k+1} - \bar{x}\| \]
(4.20)

for each $k \in \mathbb{N}$. By (4.2), we have

\[
\begin{aligned}
\|x_{m_k+1} - \bar{x}\|^2 &\le \|x_{m_k} - \bar{x}\|^2 - (2\rho_{m_k} - \rho_{m_k}^2\|A\|^2)\|Ax_{m_k} - J_{\beta_{m_k}}^{B_2}Ax_{m_k}\|^2 \\
&\quad + a_{m_k}\rho_{m_k}\|x_{m_k}\|^2 + 2a_{m_k}\rho_{m_k}\|x_{m_k} - \bar{x}\|\,\|x_{m_k}\|
\end{aligned}
\]
(4.21)

for each kN. By (4.20) and (4.21),

\[
\begin{aligned}
(2\rho_{m_k} - \rho_{m_k}^2\|A\|^2)\|Ax_{m_k} - J_{\beta_{m_k}}^{B_2}Ax_{m_k}\|^2 &\le \|x_{m_k} - \bar{x}\|^2 - \|x_{m_k+1} - \bar{x}\|^2 + a_{m_k}\rho_{m_k}\|x_{m_k}\|^2 + 2a_{m_k}\rho_{m_k}\|x_{m_k} - \bar{x}\|\,\|x_{m_k}\| \\
&\le a_{m_k}\rho_{m_k}\|x_{m_k}\|^2 + 2a_{m_k}\rho_{m_k}\|x_{m_k} - \bar{x}\|\,\|x_{m_k}\|
\end{aligned}
\]
(4.22)

for each $k \in \mathbb{N}$. Then, following the same argument as above, we know that

\[ \lim_{k\to\infty}\|Ax_{m_k} - J_{\beta_{m_k}}^{B_2}Ax_{m_k}\| = 0, \]
(4.23)
\[ \lim_{k\to\infty}\|Ax_{m_k} - J_{\beta}^{B_2}Ax_{m_k}\| = 0, \]
(4.24)
\[ \lim_{k\to\infty}\|x_{m_k+1} - J_{\beta_{m_k}}^{B_1}x_{m_k}\| = 0. \]
(4.25)

By (4.3),

\[
\begin{aligned}
\|(1 - a_{m_k}\rho_{m_k})x_{m_k} - \rho_{m_k}A^{*}(I - J_{\beta_{m_k}}^{B_2})Ax_{m_k} - x_{m_k+1}\|^2 &\le \|x_{m_k} - \bar{x}\|^2 - \|x_{m_k+1} - \bar{x}\|^2 + 2a_{m_k}\rho_{m_k}\|x_{m_k} - \bar{x}\|\,\|x_{m_k}\| + a_{m_k}\rho_{m_k}\|x_{m_k}\|^2 \\
&\le 2a_{m_k}\rho_{m_k}\|x_{m_k} - \bar{x}\|\,\|x_{m_k}\| + a_{m_k}\rho_{m_k}\|x_{m_k}\|^2
\end{aligned}
\]
(4.26)

for each $k \in \mathbb{N}$. This implies that

\[ \lim_{k\to\infty}\|(1 - a_{m_k}\rho_{m_k})x_{m_k} - \rho_{m_k}A^{*}(I - J_{\beta_{m_k}}^{B_2})Ax_{m_k} - x_{m_k+1}\|^2 = 0. \]
(4.27)

Following the same argument as the above, we know that

\[ \lim_{k\to\infty}\|x_{m_k+1} - x_{m_k}\| = \lim_{k\to\infty}\|x_{m_k} - J_{\beta_{m_k}}^{B_1}x_{m_k}\| = 0 \]
(4.28)

and

\[ \limsup_{k\to\infty}\langle -\bar{x},\ x_{m_k+1} - \bar{x}\rangle = \langle -\bar{x},\ z - \bar{x}\rangle \le 0. \]
(4.29)

By (4.7) and (4.20),

\[
\begin{aligned}
a_{m_k}\rho_{m_k}\|x_{m_k} - \bar{x}\|^2 &\le \|x_{m_k} - \bar{x}\|^2 - \|x_{m_k+1} - \bar{x}\|^2 + 2a_{m_k}\rho_{m_k}\langle -\bar{x},\ x_{m_k+1} - \bar{x}\rangle \\
&\le 2a_{m_k}\rho_{m_k}\langle -\bar{x},\ x_{m_k+1} - \bar{x}\rangle
\end{aligned}
\]

for each $k \in \mathbb{N}$. This implies that

\[ \|x_{m_k} - \bar{x}\|^2 \le 2\langle -\bar{x},\ x_{m_k+1} - \bar{x}\rangle \]
(4.30)

for each $k \in \mathbb{N}$. By (4.29) and (4.30),

\[ \lim_{k\to\infty}\|x_{m_k} - \bar{x}\| = 0. \]
(4.31)

By (4.28) and (4.31),

\[ \lim_{k\to\infty}\|x_{m_k+1} - \bar{x}\| \le \lim_{k\to\infty}\|x_{m_k} - x_{m_k+1}\| + \lim_{k\to\infty}\|x_{m_k} - \bar{x}\| = 0. \]
(4.32)

By (4.20) and (4.32),

\[ \lim_{k\to\infty}\|x_k - \bar{x}\| = 0. \]
(4.33)

Therefore, the proof is completed. □
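
The sketch below illustrates the regularized iteration of Theorem 4.1 on a small instance with $B_1 = \partial\|\cdot - p\|_1$ and $B_2 = \partial\|\cdot - q\|_1$, where $q = Ap$, so that $\Omega = \{p\}$ and the minimal norm solution is $p$ itself. The data and parameter choices are illustrative assumptions, not part of the theorem.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# B1 = ∂||. - p||_1 on R^2 and B2 = ∂||. - q||_1 on R^2, so B1^{-1}(0) = {p},
# B2^{-1}(0) = {q}; with A p = q the solution set of (SFVIP) is Omega = {p}.
p = np.array([2.0, -1.0])
A = np.array([[1.0, 1.0], [0.0, 1.0]])
q = A @ p

J1 = lambda x, beta: p + soft(x - p, beta)      # resolvent of B1
J2 = lambda y, beta: q + soft(y - q, beta)      # resolvent of B2

rho = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2)     # rho_n in (0, 2/(||A||^2 + 2))
beta = 1.0
x = np.array([10.0, 10.0])
for n in range(5000):
    a = 1.0 / (n + 2)                           # a_n -> 0, sum a_n * rho_n = infinity
    x = J1((1 - a * rho) * x - rho * A.T @ (A @ x - J2(A @ x, beta)), beta)

print(x, p)   # x_n tends to the minimal norm solution of Omega = {p}, i.e. p itself
```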

5 Applications: (SFOP) and (SFP)

We get the following results by Theorems 3.1 and 3.2, respectively.

Theorem 5.1 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $f: H_1 \to \mathbb{R}$ and $g: H_2 \to \mathbb{R}$ be two proper lower semicontinuous and convex functions. Let $\{a_n\}$, $\{b_n\}$, $\{c_n\}$, and $\{d_n\}$ be sequences of real numbers in $[0,1]$ with $a_n + b_n + c_n + d_n = 1$ and $0 < a_n < 1$ for each $n \in \mathbb{N}$. Let $\{\beta_n\}$ be a sequence in $(0,\infty)$. Let $\{v_n\}$ be a bounded sequence in $H_1$. Let $u \in H_1$ be fixed. Let $\{\rho_n\} \subseteq (0, \frac{2}{\|A\|^2+1})$. Let $\Omega$ be the solution set of (SFOP) and suppose that $\Omega \neq \emptyset$. Let $\{x_n\}$ be defined by

\[
\begin{cases}
y_n = \arg\min_{z \in H_2}\bigl\{ g(z) + \frac{1}{2\beta_n}\|z - Ax_n\|^2 \bigr\}, \\
z_n = x_n - \rho_n A^{*}(Ax_n - y_n), \\
w_n = \arg\min_{y \in H_1}\bigl\{ f(y) + \frac{1}{2\beta_n}\|y - z_n\|^2 \bigr\}, \\
x_{n+1} := a_n u + b_n x_n + c_n w_n + d_n v_n, \quad n \in \mathbb{N}.
\end{cases}
\]

Assume that:

(i) $\lim_{n\to\infty} a_n = \lim_{n\to\infty}\frac{d_n}{a_n} = 0$; $\sum_{n=1}^{\infty} a_n = \infty$; $\sum_{n=1}^{\infty} d_n < \infty$;

(ii) $\liminf_{n\to\infty} c_n\rho_n > 0$; $\liminf_{n\to\infty} b_n c_n > 0$; $\liminf_{n\to\infty}\beta_n > 0$.

Then $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega u$.

Theorem 5.2 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $f: H_1 \to \mathbb{R}$ and $g: H_2 \to \mathbb{R}$ be two proper lower semicontinuous and convex functions. Let $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ be sequences of real numbers in $[0,1]$ with $a_n + b_n + c_n = 1$ and $0 < a_n < 1$ for each $n \in \mathbb{N}$. Let $\{\beta_n\}$ be a sequence in $(0,\infty)$. Let $\{v_n\}$ be a bounded sequence in $H_1$. Let $u \in H_1$ be fixed. Let $\{\rho_n\} \subseteq (0, \frac{2}{\|A\|^2+1})$. Let $\Omega$ be the solution set of (SFOP) and suppose that $\Omega \neq \emptyset$. Let $\{x_n\}$ be defined by

\[
\begin{cases}
y_n = \arg\min_{z \in H_2}\bigl\{ g(z) + \frac{1}{2\beta_n}\|z - Ax_n\|^2 \bigr\}, \\
z_n = x_n - \rho_n A^{*}(Ax_n - y_n), \\
w_n = \arg\min_{y \in H_1}\bigl\{ f(y) + \frac{1}{2\beta_n}\|y - z_n\|^2 \bigr\}, \\
x_{n+1} := a_n u + b_n x_n + c_n w_n + v_n, \quad n \in \mathbb{N}.
\end{cases}
\]

Assume that $\lim_{n\to\infty} a_n = 0$, $\sum_{n=1}^{\infty} a_n = \infty$, $\sum_{n=1}^{\infty}\|v_n\| < \infty$, $\liminf_{n\to\infty} c_n\rho_n > 0$, $\liminf_{n\to\infty} b_n c_n > 0$, and $\liminf_{n\to\infty}\beta_n > 0$. Then $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega u$.

By Theorem 4.1, we get the following result.

Theorem 5.3 Let $H_1$ and $H_2$ be two real Hilbert spaces, let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $f: H_1 \to \mathbb{R}$ and $g: H_2 \to \mathbb{R}$ be two proper lower semicontinuous and convex functions. Let $\{\beta_n\}$ be a sequence in $(0,\infty)$, $\{a_n\} \subseteq (0,1)$, and $\{\rho_n\} \subseteq (0, \frac{2}{\|A\|^2+2})$. Let $\Omega$ be the solution set of (SFOP) and suppose that $\Omega \neq \emptyset$. Let $\{x_n\}$ be defined by

\[
\begin{cases}
y_n = \arg\min_{z \in H_2}\bigl\{ g(z) + \frac{1}{2\beta_n}\|z - Ax_n\|^2 \bigr\}, \\
z_n = (1 - a_n\rho_n)x_n - \rho_n A^{*}(Ax_n - y_n), \\
x_{n+1} = \arg\min_{y \in H_1}\bigl\{ f(y) + \frac{1}{2\beta_n}\|y - z_n\|^2 \bigr\}, \quad n \in \mathbb{N}.
\end{cases}
\]

Assume that $\lim_{n\to\infty} a_n = 0$, $\sum_{n=1}^{\infty} a_n\rho_n = \infty$, $\liminf_{n\to\infty}\rho_n > 0$, and $\liminf_{n\to\infty}\beta_n > 0$. Then $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega 0$, i.e., $\bar{x}$ is the minimal norm solution of (SFOP).
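
As a concrete check of the scheme in Theorem 5.3, the following sketch uses quadratic functions $f(y) = \frac{1}{2}\|y - p\|^2$ and $g(z) = \frac{1}{2}\|z - Ap\|^2$, for which both argmin steps have closed forms; the instance and parameter values are illustrative assumptions only.

```python
import numpy as np

# Quadratic instance of (SFOP): f(y) = 0.5*||y - p||^2 on R^2, g(z) = 0.5*||z - q||^2
# on R^2 with q = A p, so the solution set is Omega = {p}. Each argmin step has the
# closed form prox_h(z) = (beta*m + z)/(beta + 1) when h = 0.5*||. - m||^2.
p = np.array([1.0, 2.0])
A = np.array([[2.0, 0.0], [1.0, 1.0]])
q = A @ p

prox = lambda z, m, beta: (beta * m + z) / (beta + 1.0)

beta = 1.0
rho = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2)      # rho_n in (0, 2/(||A||^2 + 2))
x = np.array([-5.0, 7.0])
for n in range(3000):
    a = 1.0 / (n + 2)                            # a_n -> 0, sum a_n * rho_n = infinity
    y = prox(A @ x, q, beta)                     # y_n
    z = (1 - a * rho) * x - rho * A.T @ (A @ x - y)
    x = prox(z, p, beta)                         # x_{n+1}

print(x, p)   # x_n tends to the minimal norm solution of Omega = {p}, i.e. p itself
```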

Let $H$ be a Hilbert space, and let $g$ be a proper lower semicontinuous convex function from $H$ into $(-\infty, \infty]$. Then the subdifferential $\partial g$ of $g$ is defined as follows:

\[ \partial g(x) = \{ z \in H : g(x) + \langle z, y - x\rangle \le g(y),\ \forall y \in H \} \]

for all $x \in H$. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $i_C$ be the indicator function of $C$, i.e.,

\[ i_C x = \begin{cases} 0 & \text{if } x \in C, \\ \infty & \text{if } x \notin C. \end{cases} \]

Further, we also define the normal cone $N_C u$ of $C$ at $u$ as follows:

\[ N_C u = \{ z \in H : \langle z, v - u\rangle \le 0,\ \forall v \in C \}. \]

Then $i_C$ is a proper lower semicontinuous convex function on $H$, and the subdifferential $\partial i_C$ of $i_C$ is a maximal monotone operator. So, we can define the resolvent $J_\lambda^{\partial i_C}$ of $\partial i_C$ for $\lambda > 0$, i.e.,

\[ J_\lambda^{\partial i_C}x = (I + \lambda\,\partial i_C)^{-1}x \]

for all $x \in H$. By the definitions, we know that

\[ \partial i_C x = \{ z \in H : i_C x + \langle z, y - x\rangle \le i_C y,\ \forall y \in H \} = \{ z \in H : \langle z, y - x\rangle \le 0,\ \forall y \in C \} = N_C x \]

for all $x \in C$. Hence, for each $\beta > 0$, we have

\[ u = J_\beta^{\partial i_C}x \iff x \in u + \beta\,\partial i_C u \iff x - u \in \beta N_C u \iff \langle x - u, y - u\rangle \ge 0,\ \forall y \in C \iff u = P_C x. \]

Hence, we have the following result by Theorem 3.2.

Theorem 5.4 Let $C$ and $Q$ be two nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ be sequences of real numbers in $[0,1]$ with $a_n + b_n + c_n = 1$ and $0 < a_n < 1$ for each $n \in \mathbb{N}$. Let $\{\beta_n\}$ be a sequence in $(0,\infty)$. Let $\{v_n\}$ be a bounded sequence in $H_1$. Let $u \in H_1$ be fixed. Let $\{\rho_n\} \subseteq (0, \frac{2}{\|A\|^2+1})$. Let $\Omega$ be the solution set of (SFP) and suppose that $\Omega \neq \emptyset$. Let $\{x_n\}$ be defined by

\[ x_{n+1} := a_n u + b_n x_n + c_n P_C\bigl[x_n - \rho_n A^{*}(I - P_Q)Ax_n\bigr] + v_n \]

for each $n \in \mathbb{N}$. Assume that $\lim_{n\to\infty} a_n = 0$, $\sum_{n=1}^{\infty} a_n = \infty$, $\sum_{n=1}^{\infty}\|v_n\| < \infty$, $\liminf_{n\to\infty} c_n\rho_n > 0$, and $\liminf_{n\to\infty} b_n c_n > 0$. Then $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega u$.
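
A minimal numerical sketch of the iteration in Theorem 5.4 for a two-dimensional instance of (SFP); the sets, the operator, and the parameter sequences are illustrative choices satisfying the stated conditions.

```python
import numpy as np

# (SFP) instance: C = [0,1]^2, Q = [2,3], A = [1 2]; with u = 0 the limit P_Omega(u)
# is the point of Omega = {x in C : A x in Q} closest to the origin, namely (0.4, 0.8).
A = np.array([[1.0, 2.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, 2.0, 3.0)

u = np.zeros(2)
rho = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1)      # rho_n in (0, 2/(||A||^2 + 1))
x = np.array([5.0, -3.0])
for n in range(5000):
    a = 1.0 / (n + 2)                            # a_n -> 0, sum a_n = infinity
    b = c = (1.0 - a) / 2.0                      # a_n + b_n + c_n = 1
    v = np.zeros(2)                              # summable perturbations (here zero)
    x = a * u + b * x + c * proj_C(x - rho * A.T @ (A @ x - proj_Q(A @ x))) + v

print(x)   # x_n tends to P_Omega(u) = (0.4, 0.8)
```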

By Theorem 4.1, we get the following result.

Theorem 5.5 Let $C$ and $Q$ be two nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let $A: H_1 \to H_2$ be a linear and bounded operator, and let $A^{*}$ denote the adjoint of $A$. Let $\{\beta_n\}$ be a sequence in $(0,\infty)$, $\{a_n\} \subseteq (0,1)$, and $\{\rho_n\} \subseteq (0, \frac{2}{\|A\|^2+2})$. Let $\Omega$ be the solution set of (SFP) and suppose that $\Omega \neq \emptyset$. Let $\{x_n\}$ be defined by

\[ x_{n+1} := P_C\bigl[(1 - a_n\rho_n)x_n - \rho_n A^{*}(I - P_Q)Ax_n\bigr] \]

for each $n \in \mathbb{N}$. Assume that $\lim_{n\to\infty} a_n = 0$, $\sum_{n=1}^{\infty} a_n\rho_n = \infty$, $\liminf_{n\to\infty}\rho_n > 0$, and $\liminf_{n\to\infty}\beta_n > 0$. Then $\lim_{n\to\infty} x_n = \bar{x}$, where $\bar{x} = P_\Omega 0$, i.e., $\bar{x}$ is the minimal norm solution of (SFP).

Remark 5.1 Theorem 5.5 improves some conditions of [13, Theorem 5.5].

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projection in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692


  2. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310


  3. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity modulated radiation therapy. Phys. Med. Biol. 2003, 51: 2353–2365.


  4. López G, Martín-Márquez V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison; 2010:243–279.


  5. Stark H: Image Recovery: Theory and Applications. Academic Press, San Diego; 1987.


  6. Eicke B: Iteration methods for convexly constrained ill-posed problems in Hilbert space. Numer. Funct. Anal. Optim. 1992, 13: 413–429. 10.1080/01630569208816489


  7. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006


  8. Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011., 27: Article ID 015007


  9. Masad E, Reich S: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 2008, 8: 367–371.


  10. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. 10.1088/0266-5611/21/5/009


  11. Wang F, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010., 2010: Article ID 102085


  12. Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007


  13. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018


  14. Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266. 10.1088/0266-5611/20/4/014


  15. Yang Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166–179. 10.1016/j.jmaa.2004.07.048


  16. Zhao J, Yang Q: Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 2011., 27: Article ID 035009


  17. Martinet B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Autom. Inform. Rech. Opér. 1970, 4: 154–158.


  18. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056


  19. Brézis H, Lions PL: Produits infinis de résolvantes. Isr. J. Math. 1978, 29: 329–345. 10.1007/BF02761171


  20. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022


  21. Kamimura S, Takahashi W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106: 226–240. 10.1006/jath.2000.3493


  22. Kamimura S, Takahashi W: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 2002, 13: 938–945. 10.1137/S105262340139611X


  23. Nakajo K, Takahashi W: Strong convergence theorems for nonexpansive mapping and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279: 372–379. 10.1016/S0022-247X(02)00458-4


  24. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.


  25. Moudafi A: Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150: 275–283. 10.1007/s10957-011-9814-6


  26. Takahashi W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama; 2009.


  27. Osilike MO, Igbokwe DI: Weak and strong convergence theorems for fixed points of pseudocontractions and solutions of monotone type operator equations. Comput. Math. Appl. 2000, 40: 559–567. 10.1016/S0898-1221(00)00179-6


  28. Xu HK: Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 2002, 2: 240–256.


  29. Browder FE: Fixed point theorems for noncompact mappings in Hilbert spaces. Proc. Natl. Acad. Sci. USA 1965, 53: 1272–1276. 10.1073/pnas.53.6.1272


  30. Takahashi W: Nonlinear Functional Analysis-Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.


  31. He Z, Du W-S: Nonlinear algorithms approach to split common solution problems. Fixed Point Theory Appl. 2012., 2012: Article ID 130


  32. Chuang CS, Lin LJ, Takahashi W: Halpern’s type iterations with perturbations in a Hilbert space: equilibrium solutions and fixed points. J. Glob. Optim. 2013, 56: 1591–1601. 10.1007/s10898-012-9911-6


  33. Yu, ZT, Lin, LJ, Chuang, CS: A unified study of the split feasible problems with applications. J. Nonlinear Convex Anal. (accepted)

  34. Maingé PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. 10.1007/s11228-008-0102-z


  35. Aoyama K, Kimura Y, Takahashi W, Toyoda M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. 2007, 67: 2350–2360. 10.1016/j.na.2006.08.032



Acknowledgements

The author was supported by the National Science Council of Republic of China. Also, the author is grateful to an anonymous referee for his fruitful comments.


Competing interests

The author declares that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.