Research | Open Access

Solutions for a variational inclusion problem with applications to multiple sets split feasibility problems

Fixed Point Theory and Applications 2013, 2013:333

https://doi.org/10.1186/1687-1812-2013-333

  • Received: 3 April 2013
  • Accepted: 7 November 2013
  • Published:

Abstract

In this paper, we first study the set of common solutions of two variational inclusion problems in a real Hilbert space and establish a strong convergence theorem for this problem. As applications, we study unique minimum norm solutions of the following problems: multiple sets split feasibility problems, systems of convex constrained linear inverse problems, convex constrained linear inverse problems, split feasibility problems, and convex feasibility problems. We establish iteration processes for these problems and show strong convergence theorems for these iteration processes.

MSC: 47J20, 47J25, 47H05, 47H09.

Keywords

  • firmly nonexpansive mapping
  • proximal-contraction
  • multiple sets split feasibility problem
  • maximal monotone
  • convex feasibility problem

1 Introduction

Let $C_1, C_2, \ldots, C_m$ be nonempty closed convex subsets of a real Hilbert space $H$. The well-known convex feasibility problem (CFP) is to find $x \in H$ such that
$$x \in C_1 \cap C_2 \cap \cdots \cap C_m.$$

The convex feasibility problem has received a lot of attention due to its diverse applications in mathematics, approximation theory, communications, geophysics, control theory, and biomedical engineering; see, for example, [1, 2].

If $C_1$ and $C_2$ are closed subspaces of a real Hilbert space $H$, von Neumann showed in 1933 that the sequence $\{x_n\}$ generated by the method of alternating projections, $x_0 \in H$, $x_1 = P_{C_1}x_0$, $x_2 = P_{C_2}x_1$, $x_3 = P_{C_1}x_2, \ldots$, converges strongly to some $\bar{x} \in C_1 \cap C_2$. If $C_1$ and $C_2$ are nonempty closed convex subsets of $H$, Bregman [3] showed that the sequence $\{x_n\}$ generated by the method of alternating projections converges weakly to a point in $C_1 \cap C_2$. Hundal [4] showed that strong convergence can fail when $C_1$ and $C_2$ are merely nonempty closed convex subsets of $H$. Recently, Boikanyo and Moroşanu [5] proposed the following process:
$$x_{2n+1} = J_{\beta_n}^{G_1}\bigl(\alpha_n u + (1-\alpha_n)x_{2n} + e_n\bigr), \quad n = 0, 1, \ldots, \tag{1.1}$$
and
$$x_{2n} = J_{\rho_n}^{G_2}\bigl(\lambda_n u + (1-\lambda_n)x_{2n-1} + e'_n\bigr), \quad n = 1, 2, \ldots, \tag{1.2}$$

where $u, x_0 \in H$ are given arbitrarily, $G_1$ and $G_2$ are two set-valued maximal monotone operators with $J_\beta^{G_1} = (I + \beta G_1)^{-1}$, and $\{\alpha_n\}, \{\lambda_n\}, \{\beta_n\}, \{\rho_n\}, \{e_n\}, \{e'_n\}$ are sequences. Boikanyo and Moroşanu [5] proved that the sequence $\{x_n\}$ converges strongly to a point $\bar{x} \in G_1^{-1}(0) \cap G_2^{-1}(0)$ under suitable conditions.
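To make the alternating scheme above concrete, here is a minimal numerical sketch of the method of alternating projections (entirely our own illustration: the two sets, the starting point, and the iteration count are assumptions, not data from the paper):

```python
import numpy as np

# von Neumann's method of alternating projections in R^2 (illustrative).
# C1 is the horizontal axis and C2 is the closed unit ball centered at (2, 0);
# their intersection is the segment [1, 3] x {0}.

def proj_C1(x):
    # Metric projection onto the line {(t, 0) : t in R}.
    return np.array([x[0], 0.0])

def proj_C2(x):
    # Metric projection onto the closed ball of radius 1 centered at (2, 0).
    c = np.array([2.0, 0.0])
    d = x - c
    return c + d / max(1.0, np.linalg.norm(d))

x = np.array([5.0, 4.0])       # x_0
for _ in range(50):
    x = proj_C2(proj_C1(x))    # x_{2n+1} = P_{C1} x_{2n}, x_{2n+2} = P_{C2} x_{2n+1}
print(x)                       # reaches (3, 0), a point of C1 ∩ C2
```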

The split feasibility problem (SFP) is to find a point
$$x \in C \text{ such that } Ax \in Q,$$
where $C$, $Q$ are nonempty closed convex subsets of real Hilbert spaces $H_1$, $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator. The split feasibility problem (SFP) in finite dimensional real Hilbert spaces was first introduced by Censor and Elfving [6] for modeling inverse problems which arise from medical image reconstruction. Since then, the split feasibility problem (SFP) has received much attention due to its applications in signal processing, image reconstruction, approximation theory, control theory, biomedical engineering, communications, and geophysics. For examples, one can refer to [1, 2, 6–17] and the related literature. A special case of problem (SFP) is the convexly constrained linear inverse problem in a finite dimensional real Hilbert space [18]:
$$(\text{CLIP}) \quad \text{Find } \bar{x} \in C \text{ such that } A\bar{x} = b,$$
where $C$ is a nonempty closed convex subset of a real Hilbert space $H_1$ and $b$ is a given element of a real Hilbert space $H_2$; it has been extensively investigated by using the Landweber iterative method [19]:
$$x_{n+1} := x_n + \gamma A^T(b - Ax_n), \quad n \in \mathbb{N}.$$
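The Landweber iteration is straightforward to run; the following sketch is our own toy instance (the matrix $A$, the data $b$, and the step size $\gamma \in (0, 2/\|A\|^2)$ are illustrative assumptions):

```python
import numpy as np

# Landweber iteration x_{n+1} = x_n + gamma * A^T (b - A x_n) for A x = b.
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])
b = np.array([2.0, 2.0])                 # the exact solution is x = (1, 1)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # any gamma in (0, 2/||A||^2) converges
x = np.zeros(2)                          # x_0
for _ in range(500):
    x = x + gamma * A.T @ (b - A @ x)
print(x)                                 # approximately [1.0, 1.0]
```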
Let $C_1, C_2, \ldots, C_m$ be nonempty closed convex subsets of $H_1$, let $Q_1, Q_2, \ldots, Q_m$ be nonempty closed convex subsets of $H_2$, and let $A_1, A_2, \ldots, A_m : H_1 \to H_2$ be bounded linear operators. The well-known multiple sets split feasibility problem (MSSFP) is to find $x \in H_1$ such that
$$x \in C_i \text{ and } A_i x \in Q_i \quad \text{for all } i = 1, 2, \ldots, m.$$
The multiple sets split feasibility problem (MSSFP) contains the convex feasibility problem (CFP) and the split feasibility problem (SFP) as special cases [6, 10, 13]. Indeed, Censor et al. [11] first studied this type of problem. Xu [16] and López et al. [13] also studied this type of problem. In 2011, Boikanyo and Moroşanu [20] gave the following algorithm:
$$\begin{cases} v_{2n+1} := a_n u + b_n v_{2n} + c_n J_{\alpha_n}^{G_1} v_{2n}, & n \in \mathbb{N} \cup \{0\}, \\ v_{2n} := f_n u + g_n v_{2n-1} + h_n J_{\beta_n}^{G_2} v_{2n-1}, & n \in \mathbb{N}, \end{cases} \tag{1.3}$$

where $u$ is given in $H$, $G_1, G_2$ are two set-valued maximal monotone mappings on $H$, and $\{a_n\}, \{b_n\}, \{c_n\}, \{f_n\}, \{g_n\}$ and $\{h_n\}$ are sequences in $[0,1]$. Boikanyo and Moroşanu [20] proved that $\{v_n\}$ in (1.3) converges strongly to some $\bar{x} \in G_1^{-1}(0) \cap G_2^{-1}(0)$ under suitable conditions.

Motivated by the above works, we consider the following algorithm:
$$\begin{cases} v_0 \in H_1 \text{ is chosen arbitrarily}, \\ v_{2n+1} := a_n u + b_n v_{2n} + c_n J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n}, & n \in \mathbb{N} \cup \{0\}, \\ v_{2n} := f_n u + g_n v_{2n-1} + h_n J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1}, & n \in \mathbb{N}, \end{cases} \tag{1.4}$$

where $G_1, G_2$ are two set-valued maximal monotone mappings on a real Hilbert space $H_1$, $B_1, B_2 : C \to H_1$ are two mappings, and $\{a_n\}, \{b_n\}, \{c_n\}, \{f_n\}, \{g_n\}, \{h_n\}$ are sequences in $[0,1]$. We show that the sequence $\{v_n\}$ generated by (1.4) converges strongly to some $\bar{x} \in (B_1+G_1)^{-1}(0) \cap (B_2+G_2)^{-1}(0)$ under suitable conditions.
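As a toy illustration of how (1.4) is run (our own construction, not from the paper: $H = \mathbb{R}$, $G_1 = G_2 = \partial|\cdot|$, whose resolvent is the soft-thresholding map, $B_1 = B_2 = 0$, and parameter schedules chosen to satisfy the conditions of Theorem 3.1 below), the iterates should tend to $0$, the unique common zero of $G_1$ and $G_2$:

```python
import numpy as np

# Toy instance of iteration (1.4) on H = R with G1 = G2 = the subdifferential
# of |.| and B1 = B2 = 0. The resolvent J_t of that operator is soft-
# thresholding, and (B1+G1)^{-1}(0) ∩ (B2+G2)^{-1}(0) = {0}.

def resolvent_abs(x, t):
    # J_t(x) = (I + t d|.|)^{-1}(x) = sign(x) * max(|x| - t, 0)
    return np.sign(x) * max(abs(x) - t, 0.0)

u, v = 5.0, 3.0          # anchor u and v_0, chosen arbitrarily
delta = gamma = 1.0      # resolvent parameters delta_n, gamma_n (constant here)
for n in range(1, 2000):
    a = f = 1.0 / (n + 1)            # a_n = f_n -> 0 with sum a_n = infinity
    b = g = c = h = (1.0 - a) / 2    # b_n + c_n = g_n + h_n = 1 - a_n
    v = a * u + b * v + c * resolvent_abs(v, delta)   # odd-indexed step
    v = f * u + g * v + h * resolvent_abs(v, gamma)   # even-indexed step
print(v)  # close to 0 = projection of u onto the common zero set {0}
```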

Algorithm (1.4) contains as special cases the inexact proximal point algorithm (e.g., [21, 22]) and the generalized contraction proximal point algorithm (e.g., [23]). Our conclusion extends and unifies Boikanyo and Moroşanu’s result in [20], and Wang and Cui’s result in [24] becomes a special case. For details, see Section 3.

In this paper, we first study the set of common solutions of two variational inclusion problems in a Hilbert space and establish a strong convergence theorem for this problem. As applications, we study unique minimum norm solutions of the following problems: multiple sets split feasibility problems, systems of convex constrained linear inverse problems, convex constrained linear inverse problems, split feasibility problems, and convex feasibility problems. We establish iteration processes for these problems and show strong convergence theorems for these iteration processes.

2 Preliminaries

Throughout this paper, let $H, H_1, H_2, H_3$ denote real Hilbert spaces with the inner product $\langle \cdot, \cdot \rangle$ and the norm $\| \cdot \|$; let $\mathbb{N}$ be the set of all natural numbers and $\mathbb{R}_+$ be the set of all positive real numbers. A set-valued mapping $A$ with domain $D(A)$ on $H$ is called monotone if $\langle u - v, x - y \rangle \geq 0$ for any $u \in Ax$, $v \in Ay$ and for all $x, y \in D(A)$. A monotone operator $A$ is called maximal monotone if its graph $\{(x, y) : x \in D(A), y \in Ax\}$ is not properly contained in the graph of any other monotone mapping. The set of all zero points of $A$ is denoted by $A^{-1}(0)$, i.e., $A^{-1}(0) = \{x \in H : 0 \in Ax\}$. In what follows, we denote the strong convergence and the weak convergence of $\{x_n\}$ to $x \in H$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. In order to facilitate our discussion, we recall some facts. The following equality is easy to check:
$$\|\alpha x + \beta y + \gamma z\|^2 = \alpha\|x\|^2 + \beta\|y\|^2 + \gamma\|z\|^2 - \alpha\beta\|x-y\|^2 - \alpha\gamma\|x-z\|^2 - \beta\gamma\|y-z\|^2 \tag{2.1}$$
for each $x, y, z \in H$ and $\alpha, \beta, \gamma \in [0,1]$ with $\alpha + \beta + \gamma = 1$. Besides, we also have
$$\|x+y\|^2 \leq \|x\|^2 + 2\langle y, x+y \rangle \tag{2.2}$$
for each $x, y \in H$. Let $C$ be a nonempty closed convex subset of $H$, and let $T : C \to H$ be a mapping. We denote the set of all fixed points of $T$ by $\operatorname{Fix}(T)$. A mapping $T : C \to H$ is said to be nonexpansive if $\|Tx - Ty\| \leq \|x - y\|$ for every $x, y \in C$. A mapping $T : C \to H$ is said to be quasi-nonexpansive if $\operatorname{Fix}(T) \neq \emptyset$ and $\|Tx - y\| \leq \|x - y\|$ for all $x \in C$ and $y \in \operatorname{Fix}(T)$. A mapping $T : C \to H$ is said to be firmly nonexpansive if
$$\|Tx - Ty\|^2 \leq \|x - y\|^2 - \|(I-T)x - (I-T)y\|^2$$
for every $x, y \in C$. Besides, it is easy to see that $\operatorname{Fix}(T)$ is a closed convex subset of $C$ if $T : C \to H$ is a quasi-nonexpansive mapping. A mapping $T : C \to H$ is said to be $\alpha$-inverse-strongly monotone ($\alpha$-ism) if
$$\langle x - y, Tx - Ty \rangle \geq \alpha \|Tx - Ty\|^2$$

for all $x, y \in C$ and some $\alpha > 0$.

The following lemmas are needed in this paper.

Lemma 2.1 [25]

Let $A : H_1 \to H_2$ be a bounded linear operator, and let $A^*$ be the adjoint of $A$. Suppose that $C$ is a nonempty closed convex subset of $H_2$, and that $G : C \to H_2$ is a firmly nonexpansive mapping. Then $A^*(I-G)A$ is $\frac{1}{\|A\|^2}$-ism, that is,
$$\frac{1}{\|A\|^2}\|A^*(I-G)Ax - A^*(I-G)Ay\|^2 \leq \langle x - y, A^*(I-G)Ax - A^*(I-G)Ay \rangle$$

for all $x, y \in H_1$.

Lemma 2.2 Let $C$ be a nonempty closed convex subset of $H$, and let $G : C \to H$ be a firmly nonexpansive mapping. Suppose that $\operatorname{Fix}(G)$ is nonempty. Then $\langle x - Gx, Gx - w \rangle \geq 0$ for each $x \in C$ and each $w \in \operatorname{Fix}(G)$.

Let $C$ be a nonempty closed convex subset of $H$. Then, for each $x \in H$, there is a unique element $\bar{x} \in C$ such that $\|x - \bar{x}\| = \min_{y \in C} \|x - y\|$. Here, we set $P_C x = \bar{x}$, and $P_C$ is said to be the metric projection from $H$ onto $C$.
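For instance (a standard example, not from the text), when $C$ is the closed ball of radius $r$ centered at $c$, the metric projection has the explicit form
$$P_C x = c + \frac{r}{\max\{r, \|x - c\|\}}(x - c),$$
so points of the ball are left fixed and points outside are pulled radially onto the boundary.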

Lemma 2.3 [26]

Let $C$ be a nonempty closed convex subset of $H$, and let $P_C$ be the metric projection from $H$ onto $C$. Then $\langle x - P_C x, P_C x - y \rangle \geq 0$ for each $x \in H$ and each $y \in C$.

For a set-valued maximal monotone operator $G$ on $H$ and $r > 0$, we may define an operator $J_r^G : H \to H$ with $J_r^G = (I + rG)^{-1}$, which is called the resolvent mapping of $G$ for $r$.

Lemma 2.4 [26, 27]

Let $G : H \to 2^H$ be a set-valued maximal monotone mapping. Then we have that:

(i) for each $\alpha > 0$, $J_\alpha^G$ is a single-valued and firmly nonexpansive mapping;

(ii) $D(J_\alpha^G) = H$ and $\operatorname{Fix}(J_\alpha^G) = G^{-1}(0)$;

(iii) $\|x - J_\alpha^G x\| \leq 2\|x - J_\beta^G x\|$ for each $x \in H$ and all $\alpha, \beta \in (0, \infty)$ with $0 < \alpha \leq \beta$.

Lemma 2.5 [26]

Let $G$ be a set-valued maximal monotone operator on $H$. For $a > 0$, define the resolvent $J_a^G = (I + aG)^{-1}$. Then the following holds:
$$\|J_\alpha^G x - J_\beta^G x\|^2 \leq \frac{\alpha - \beta}{\alpha} \langle J_\alpha^G x - J_\beta^G x, J_\alpha^G x - x \rangle$$
for all $\alpha, \beta > 0$ and $x \in H$. In fact,
$$\|J_\alpha^G x - J_\beta^G x\| \leq \frac{|\alpha - \beta|}{\alpha} \|J_\alpha^G x - x\|$$

for all $\alpha, \beta > 0$ and $x \in H$.

Lemma 2.6 [28]

Let $\{S_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{S_{n_i}\}_{i \geq 0}$ of $\{S_n\}$ such that $S_{n_i} < S_{n_i+1}$ for all $i \in \mathbb{N}$. Consider the sequence $\{\tau(n)\}_{n \geq n_0}$ defined by $\tau(n) = \max\{n_0 \leq k \leq n : S_k < S_{k+1}\}$ for some sufficiently large number $n_0 \in \mathbb{N}$. Then $\{\tau(n)\}_{n \geq n_0}$ is a nondecreasing sequence with $\tau(n) \to \infty$ as $n \to \infty$, and for all $n \geq n_0$,
$$S_{\tau(n)} \leq S_{\tau(n)+1} \quad \text{and} \quad S_n \leq S_{\tau(n)+1}.$$

In fact, $\max\{S_{\tau(n)}, S_n\} \leq S_{\tau(n)+1}$.

Lemma 2.7 [5]

Let $\{a_n\}$ and $\{b_n\}$ be two sequences in $[0,1]$. Let $\{e_n\}$ be a sequence of nonnegative real numbers. Let $\{t_n\}$ and $\{k_n\}$ be two sequences of real numbers. Let $\{S_n\}_{n \in \mathbb{N}}$ be a sequence of nonnegative real numbers with
$$S_{n+1} \leq (1-a_n)(1-b_n)S_n + a_n t_n + b_n k_n + e_n$$
for each $n \in \mathbb{N}$. Assume that
$$\sum_{n=1}^{\infty} a_n = \infty, \qquad \limsup_{n \to \infty} t_n \leq 0, \qquad \limsup_{n \to \infty} k_n \leq 0, \quad \text{and} \quad \sum_{n=1}^{\infty} e_n < \infty.$$

Then $\lim_{n \to \infty} S_n = 0$.

A mapping $T : H \to H$ is said to be averaged if $T = (1-\alpha)I + \alpha S$, where $S : H \to H$ is a nonexpansive mapping and $\alpha \in (0,1)$.

Lemma 2.8 [29]

Let $C$ be a nonempty closed convex subset of $H$ and $T : C \to H$ be a mapping. Then the following hold:

(i) $T$ is a nonexpansive mapping if and only if $I - T$ is $\frac{1}{2}$-inverse-strongly monotone ($\frac{1}{2}$-ism).

(ii) If $S$ is $\nu$-ism, then $\gamma S$ is $\frac{\nu}{\gamma}$-ism.

(iii) $S$ is averaged if and only if $I - S$ is $\nu$-ism for some $\nu > \frac{1}{2}$. Indeed, $S$ is $\alpha$-averaged if and only if $I - S$ is $\frac{1}{2\alpha}$-ism for $\alpha \in (0,1)$.

(iv) If $S$ and $T$ are averaged, then the composition $ST$ is also averaged.

(v) If the mappings $\{T_i\}_{i=1}^n$ are averaged and have a common fixed point, then $\bigcap_{i=1}^n \operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \cdots T_n)$ for each $n \in \mathbb{N}$.

Lemma 2.9 [30]

Let $T$ be a nonexpansive self-mapping on a nonempty closed convex subset $C$ of $H$, and let $\{x_n\}$ be a sequence in $C$. If $x_n \rightharpoonup w$ and $\lim_{n \to \infty} \|x_n - Tx_n\| = 0$, then $Tw = w$.

Let $C$ be a nonempty closed convex subset of $H$. The indicator function $\iota_C$ defined by
$$\iota_C x = \begin{cases} 0, & x \in C, \\ \infty, & x \notin C, \end{cases}$$
is a proper lower semicontinuous convex function, and its subdifferential $\partial\iota_C$ defined by
$$\partial\iota_C x = \{z \in H : \langle y - x, z \rangle \leq \iota_C(y) - \iota_C(x), \forall y \in H\}$$
is a maximal monotone operator [31]. Furthermore, we also define the normal cone $N_C u$ of $C$ at $u$ as follows:
$$N_C u = \{z \in H : \langle z, v - u \rangle \leq 0, \forall v \in C\}.$$
We can define the resolvent $J_\lambda^{\partial\iota_C}$ of $\partial\iota_C$ for $\lambda > 0$, i.e.,
$$J_\lambda^{\partial\iota_C} x = (I + \lambda\partial\iota_C)^{-1} x$$
for all $x \in H$. Since
$$\partial\iota_C x = \{z \in H : \iota_C x + \langle z, y - x \rangle \leq \iota_C y, \forall y \in H\} = \{z \in H : \langle z, y - x \rangle \leq 0, \forall y \in C\} = N_C x$$
for all $x \in C$, we have that
$$u = J_\lambda^{\partial\iota_C} x \Leftrightarrow x \in u + \lambda\partial\iota_C u \Leftrightarrow x - u \in \lambda N_C u \Leftrightarrow \langle x - u, y - u \rangle \leq 0, \forall y \in C \Leftrightarrow u = P_C x.$$
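The identity $J_\lambda^{\partial\iota_C} = P_C$ can be checked numerically; the sketch below is our own illustration (the box $C = [0,1]^2$, the test point, and the grid search standing in for the resolvent are all assumptions):

```python
import numpy as np

# For C = [0,1]^2, the resolvent (I + lambda * d iota_C)^{-1} x solves
#   min_y  iota_C(y) + ||y - x||^2 / (2 * lambda),
# whose minimizer is P_C x for every lambda > 0 (the indicator makes the
# resolvent independent of lambda). Here P_C is componentwise clipping.

def proj_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

x = np.array([1.7, -0.3])

# A brute-force search over a fine grid of C stands in for the resolvent.
g = np.linspace(0.0, 1.0, 1001)
ys = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
y_star = ys[np.argmin(((ys - x) ** 2).sum(axis=1))]

assert np.allclose(y_star, proj_box(x), atol=1e-3)
print(y_star, proj_box(x))  # both print [1. 0.]
```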

3 Main results

Let $C$, $Q$ and $Q'$ be nonempty closed convex subsets of $H_1$, $H_2$ and $H_3$, respectively. For each $i = 1, 2$ and $\kappa_i > 0$, let $B_i$ be a $\kappa_i$-inverse-strongly monotone mapping of $C$ into $H_1$, and let $G_i$ be a set-valued maximal monotone mapping on $H_1$ such that the domain of $G_i$ is included in $C$ for each $i = 1, 2$. Let $F_1$ be a firmly nonexpansive mapping of $H_2$ into $H_2$ and $F_2$ be a firmly nonexpansive mapping of $H_3$ into $H_3$. Set $J_\lambda^{G_1} = (I + \lambda G_1)^{-1}$ and $J_r^{G_2} = (I + rG_2)^{-1}$ for each $\lambda > 0$ and $r > 0$. Let $A_1 : H_1 \to H_2$ and $A_2 : H_1 \to H_3$ be bounded linear operators, and let $A_i^*$ be the adjoint of $A_i$ for $i = 1, 2$. Throughout this paper, we use these notations unless specified otherwise.

Theorem 3.1 Suppose that $(B_1+G_1)^{-1}(0) \cap (B_2+G_2)^{-1}(0)$ is nonempty, and $\{a_n\}, \{b_n\}, \{c_n\}, \{f_n\}, \{g_n\}, \{h_n\}$ are sequences in $[0,1]$ such that $a_n + b_n + c_n = 1$, $f_n + g_n + h_n = 1$, $0 < a_n < 1$, and $0 < f_n < 1$ for each $n \in \mathbb{N}$. For an arbitrarily fixed $u \in H_1$, define a sequence $\{v_n\}$ by
$$\begin{cases} v_0 \in H_1 \text{ is chosen arbitrarily}, \\ v_{2n+1} := a_n u + b_n v_{2n} + c_n J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n}, & n \in \mathbb{N} \cup \{0\}, \\ v_{2n} := f_n u + g_n v_{2n-1} + h_n J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1}, & n \in \mathbb{N}. \end{cases}$$
Then $\lim_{n \to \infty} v_n = P_{(B_1+G_1)^{-1}(0) \cap (B_2+G_2)^{-1}(0)} u$ provided the following conditions are satisfied:

(i) $\lim_{n \to \infty} a_n = \lim_{n \to \infty} f_n = 0$;

(ii) either $\sum_{n=1}^{\infty} a_n = \infty$ or $\sum_{n=1}^{\infty} f_n = \infty$;

(iii) $0 < a \leq \delta_n \leq b < 2\kappa_1$ and $0 < f \leq \gamma_n \leq g < 2\kappa_2$ for each $n \in \mathbb{N}$ and for some $a, b, f, g \in \mathbb{R}_+$;

(iv) $\liminf_{n \to \infty} c_n > 0$ and $\liminf_{n \to \infty} h_n > 0$.
Proof Let $\bar{v} := P_{(B_1+G_1)^{-1}(0) \cap (B_2+G_2)^{-1}(0)} u$. Then $\bar{v} = J_{\delta_n}^{G_1}(I - \delta_n B_1)\bar{v}$ and $\bar{v} = J_{\gamma_n}^{G_2}(I - \gamma_n B_2)\bar{v}$. Since $B_1$ is $\kappa_1$-ism and $\kappa_1/\delta_n > \frac{1}{2}$, it follows from Lemma 2.8 that $I - \delta_n B_1$ is averaged, and hence nonexpansive. By Lemma 2.8, $J_{\delta_n}^{G_1}(I - \delta_n B_1)$ and $J_{\gamma_n}^{G_2}(I - \gamma_n B_2)$ are averaged, and hence nonexpansive. By condition (iv), we may assume that there exist positive real numbers $c$ and $h$ such that $c_n \geq c > 0$ and $h_n \geq h > 0$ for each $n \in \mathbb{N}$. For each $n \in \mathbb{N}$, we have that
$$\begin{aligned} \|v_{2n} - \bar{v}\| &= \|f_n u + g_n v_{2n-1} + h_n J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - \bar{v}\| \\ &\leq f_n\|u - \bar{v}\| + g_n\|v_{2n-1} - \bar{v}\| + h_n\|J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - \bar{v}\| \\ &\leq f_n\|u - \bar{v}\| + g_n\|v_{2n-1} - \bar{v}\| + h_n\|v_{2n-1} - \bar{v}\| \\ &= f_n\|u - \bar{v}\| + (g_n + h_n)\|v_{2n-1} - \bar{v}\| = f_n\|u - \bar{v}\| + (1 - f_n)\|v_{2n-1} - \bar{v}\| \end{aligned} \tag{3.1}$$
and
$$\begin{aligned} \|v_{2n+1} - \bar{v}\| &= \|a_n u + b_n v_{2n} + c_n J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - \bar{v}\| \\ &\leq a_n\|u - \bar{v}\| + b_n\|v_{2n} - \bar{v}\| + c_n\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - \bar{v}\| \\ &\leq a_n\|u - \bar{v}\| + (b_n + c_n)\|v_{2n} - \bar{v}\| \\ &\leq (1 - a_n)\bigl\{f_n\|u - \bar{v}\| + (1 - f_n)\|v_{2n-1} - \bar{v}\|\bigr\} + a_n\|u - \bar{v}\| \quad \text{by (3.1)} \\ &= \bigl(1 - (1 - a_n)(1 - f_n)\bigr)\|u - \bar{v}\| + (1 - a_n)(1 - f_n)\|v_{2n-1} - \bar{v}\| \\ &\leq \max\bigl\{\|u - \bar{v}\|, \|v_{2n-1} - \bar{v}\|\bigr\} \leq \cdots \leq \max\bigl\{\|u - \bar{v}\|, \|v_1 - \bar{v}\|\bigr\}. \end{aligned}$$
By mathematical induction, $\{v_{2n-1}\}$, $\{v_{2n}\}$, and hence $\{v_n\}$ are bounded sequences. By Lemma 2.4, we have that
$$\begin{aligned} \|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - \bar{v}\|^2 &= \|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - J_{\delta_n}^{G_1}(I - \delta_n B_1)\bar{v}\|^2 \\ &\leq \|(I - \delta_n B_1)v_{2n} - (I - \delta_n B_1)\bar{v}\|^2 - \|(I - J_{\delta_n}^{G_1}(I - \delta_n B_1))v_{2n} - (I - J_{\delta_n}^{G_1}(I - \delta_n B_1))\bar{v}\|^2 \\ &\leq \|v_{2n} - \bar{v}\|^2 - \|(I - J_{\delta_n}^{G_1}(I - \delta_n B_1))v_{2n} - (I - J_{\delta_n}^{G_1}(I - \delta_n B_1))\bar{v}\|^2 \\ &= \|v_{2n} - \bar{v}\|^2 - \|v_{2n} - J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n}\|^2 \end{aligned}$$
for each $n \in \mathbb{N}$. Hence, by (2.1) and (2.2),
$$\begin{aligned} \|v_{2n+1} - \bar{v}\|^2 &= \|a_n(u - \bar{v}) + b_n(v_{2n} - \bar{v}) + c_n(J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - \bar{v})\|^2 \\ &\leq \|b_n(v_{2n} - \bar{v}) + c_n(J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - \bar{v})\|^2 + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle \\ &= (1 - a_n)^2\|b_n'(v_{2n} - \bar{v}) + c_n'(J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - \bar{v})\|^2 + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle \\ &= (1 - a_n)^2\bigl\{b_n'\|v_{2n} - \bar{v}\|^2 + c_n'\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - \bar{v}\|^2 - b_n'c_n'\|v_{2n} - J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n}\|^2\bigr\} + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle \\ &\leq b_n\|v_{2n} - \bar{v}\|^2 + c_n\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - \bar{v}\|^2 + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle \\ &\leq b_n\|v_{2n} - \bar{v}\|^2 + c_n\bigl(\|v_{2n} - \bar{v}\|^2 - \|v_{2n} - J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n}\|^2\bigr) + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle \\ &= (b_n + c_n)\|v_{2n} - \bar{v}\|^2 + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle - c_n\|v_{2n} - J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n}\|^2, \end{aligned} \tag{3.2}$$
where
$$b_n' := \frac{b_n}{b_n + c_n}, \qquad c_n' := \frac{c_n}{b_n + c_n}.$$
Similarly, we have
$$\|v_{2n} - \bar{v}\|^2 \leq (g_n + h_n)\|v_{2n-1} - \bar{v}\|^2 + 2f_n\langle u - \bar{v}, v_{2n} - \bar{v}\rangle - h_n\|J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - v_{2n-1}\|^2. \tag{3.3}$$
Consequently, it follows from (3.2) and (3.3) that
$$\begin{aligned} \|v_{2n+1} - \bar{v}\|^2 &\leq (b_n + c_n)\|v_{2n} - \bar{v}\|^2 + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle - c_n\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - v_{2n}\|^2 \\ &\leq (1 - a_n)\bigl\{(1 - f_n)\|v_{2n-1} - \bar{v}\|^2 + 2f_n\langle u - \bar{v}, v_{2n} - \bar{v}\rangle - h_n\|J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - v_{2n-1}\|^2\bigr\} \\ &\quad + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle - c_n\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - v_{2n}\|^2 \\ &= (1 - a_n)(1 - f_n)\|v_{2n-1} - \bar{v}\|^2 + 2f_n(1 - a_n)\langle u - \bar{v}, v_{2n} - \bar{v}\rangle - h_n(1 - a_n)\|J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - v_{2n-1}\|^2 \\ &\quad + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle - c_n\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - v_{2n}\|^2. \end{aligned} \tag{3.4}$$
For each $n \in \mathbb{N}$, set $S_n := \|v_{2n-1} - \bar{v}\|^2$. Then $S_{n+1} = \|v_{2n+1} - \bar{v}\|^2$ and (3.4) becomes
$$\begin{aligned} S_{n+1} &\leq (1 - a_n)(1 - f_n)S_n + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle + 2f_n(1 - a_n)\langle u - \bar{v}, v_{2n} - \bar{v}\rangle \\ &\quad - c_n\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - v_{2n}\|^2 - h_n(1 - a_n)\|J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - v_{2n-1}\|^2 \\ &\leq (1 - a_n)(1 - f_n)S_n + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle + 2f_n(1 - a_n)\langle u - \bar{v}, v_{2n} - \bar{v}\rangle. \end{aligned} \tag{3.5}$$
Case 1: $\{S_n\}$ is eventually decreasing, i.e., there exists $N \in \mathbb{N}$ such that $\|v_{2n+1} - \bar{v}\| \leq \|v_{2n-1} - \bar{v}\|$ for each $n \geq N$. Then $\{S_n\}$ is convergent and $\lim_{n \to \infty}\|v_{2n-1} - \bar{v}\|$ exists. Since $c_n \geq c$ and $h_n \geq h$ for all $n \in \mathbb{N}$, (3.5) gives
$$\begin{aligned} 0 &\leq c\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - v_{2n}\|^2 + h(1 - a_n)\|J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - v_{2n-1}\|^2 \\ &\leq (1 - a_n)(1 - f_n)S_n - S_{n+1} + 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle + 2f_n(1 - a_n)\langle u - \bar{v}, v_{2n} - \bar{v}\rangle. \end{aligned} \tag{3.6}$$
Noting, via condition (i) and the boundedness of $\{v_n\}$, that
$$\lim_{n \to \infty}\bigl[(1 - a_n)(1 - f_n)S_n - S_{n+1}\bigr] = 0, \qquad \lim_{n \to \infty} 2a_n\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle = \lim_{n \to \infty} 2f_n(1 - a_n)\langle u - \bar{v}, v_{2n} - \bar{v}\rangle = 0,$$
we conclude from (3.6) that
$$\lim_{n \to \infty}\bigl(c\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - v_{2n}\|^2 + h(1 - a_n)\|J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - v_{2n-1}\|^2\bigr) = 0.$$
Therefore,
$$\lim_{n \to \infty}\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - v_{2n}\| = \lim_{n \to \infty}\|J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - v_{2n-1}\| = 0. \tag{3.7}$$
Since $\{v_{2n}\}$ is a bounded sequence in $H_1$, there is a subsequence $\{v_{2n_k}\}$ of $\{v_{2n}\}$ such that $v_{2n_k} \rightharpoonup \bar{x} \in H_1$ and
$$\limsup_{n \to \infty}\langle u - \bar{v}, v_{2n} - \bar{v}\rangle = \lim_{k \to \infty}\langle u - \bar{v}, v_{2n_k} - \bar{v}\rangle = \langle u - \bar{v}, \bar{x} - \bar{v}\rangle.$$
On the other hand, since $0 < a \leq \delta_n \leq b < 2\kappa_1$, there exists a subsequence $\{\delta_{n_{k_j}}\}$ of $\{\delta_{n_k}\}$ converging to a number $\bar{\delta} \in [a, b]$. By Lemma 2.5, we have
$$\begin{aligned} \|v_{2n_{k_j}} - J_{\bar{\delta}}^{G_1}(I - \bar{\delta} B_1)v_{2n_{k_j}}\| &\leq \|v_{2n_{k_j}} - J_{\delta_{n_{k_j}}}^{G_1}(I - \delta_{n_{k_j}} B_1)v_{2n_{k_j}}\| + \|J_{\delta_{n_{k_j}}}^{G_1}(I - \delta_{n_{k_j}} B_1)v_{2n_{k_j}} - J_{\delta_{n_{k_j}}}^{G_1}(I - \bar{\delta} B_1)v_{2n_{k_j}}\| \\ &\quad + \|J_{\delta_{n_{k_j}}}^{G_1}(I - \bar{\delta} B_1)v_{2n_{k_j}} - J_{\bar{\delta}}^{G_1}(I - \bar{\delta} B_1)v_{2n_{k_j}}\| \\ &\leq \|v_{2n_{k_j}} - J_{\delta_{n_{k_j}}}^{G_1}(I - \delta_{n_{k_j}} B_1)v_{2n_{k_j}}\| + |\delta_{n_{k_j}} - \bar{\delta}|\,\|B_1 v_{2n_{k_j}}\| \\ &\quad + \frac{|\delta_{n_{k_j}} - \bar{\delta}|}{\bar{\delta}}\|J_{\bar{\delta}}^{G_1}(I - \bar{\delta} B_1)v_{2n_{k_j}} - (I - \bar{\delta} B_1)v_{2n_{k_j}}\| \to 0. \end{aligned} \tag{3.8}$$
By (3.8) and Lemma 2.9, $\bar{x} \in \operatorname{Fix}(J_{\bar{\delta}}^{G_1}(I - \bar{\delta} B_1)) = (B_1 + G_1)^{-1}(0)$. Since $0 < f \leq \gamma_n \leq g < 2\kappa_2$, there exists a subsequence $\{\gamma_{n_{k_j}}\}$ of $\{\gamma_{n_k}\}$ converging to a number $\bar{\gamma} \in [f, g]$. We have that
$$\|v_{2n+1} - v_{2n}\| = \|a_n u + b_n v_{2n} + c_n J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - v_{2n}\| \leq a_n\|u - v_{2n}\| + c_n\|J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n} - v_{2n}\| \tag{3.9}$$
and
$$\|v_{2n} - v_{2n-1}\| = \|f_n u + g_n v_{2n-1} + h_n J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - v_{2n-1}\| \leq f_n\|u - v_{2n-1}\| + h_n\|J_{\gamma_n}^{G_2}(I - \gamma_n B_2)v_{2n-1} - v_{2n-1}\|. \tag{3.10}$$
Since $\{v_n\}$ is bounded, we conclude from (3.7), (3.9), (3.10), and conditions (i), (ii) that
$$\lim_{n \to \infty}\|v_{n+1} - v_n\| = 0. \tag{3.11}$$
By Lemma 2.5, we have
$$\begin{aligned} \|v_{2n_{k_j}+1} - J_{\bar{\gamma}}^{G_2}(I - \bar{\gamma} B_2)v_{2n_{k_j}+1}\| &\leq \|v_{2n_{k_j}+1} - J_{\gamma_{n_{k_j}}}^{G_2}(I - \gamma_{n_{k_j}} B_2)v_{2n_{k_j}+1}\| + \|J_{\gamma_{n_{k_j}}}^{G_2}(I - \gamma_{n_{k_j}} B_2)v_{2n_{k_j}+1} - J_{\gamma_{n_{k_j}}}^{G_2}(I - \bar{\gamma} B_2)v_{2n_{k_j}+1}\| \\ &\quad + \|J_{\gamma_{n_{k_j}}}^{G_2}(I - \bar{\gamma} B_2)v_{2n_{k_j}+1} - J_{\bar{\gamma}}^{G_2}(I - \bar{\gamma} B_2)v_{2n_{k_j}+1}\| \\ &\leq \|v_{2n_{k_j}+1} - J_{\gamma_{n_{k_j}}}^{G_2}(I - \gamma_{n_{k_j}} B_2)v_{2n_{k_j}+1}\| + |\gamma_{n_{k_j}} - \bar{\gamma}|\,\|B_2 v_{2n_{k_j}+1}\| \\ &\quad + \frac{|\gamma_{n_{k_j}} - \bar{\gamma}|}{\bar{\gamma}}\|J_{\bar{\gamma}}^{G_2}(I - \bar{\gamma} B_2)v_{2n_{k_j}+1} - (I - \bar{\gamma} B_2)v_{2n_{k_j}+1}\| \to 0. \end{aligned} \tag{3.12}$$
Since $\lim_{n \to \infty}\|v_{2n+1} - v_{2n}\| = 0$, we know that $v_{2n_{k_j}+1} \rightharpoonup \bar{x}$. By (3.12) and Lemma 2.9, we know that $J_{\bar{\gamma}}^{G_2}(I - \bar{\gamma} B_2)\bar{x} = \bar{x}$. So $\bar{x} \in (B_2 + G_2)^{-1}(0)$. This shows that $\bar{x} \in (B_1 + G_1)^{-1}(0) \cap (B_2 + G_2)^{-1}(0)$. It follows from $\bar{v} := P_{(B_1+G_1)^{-1}(0) \cap (B_2+G_2)^{-1}(0)} u$ and Lemma 2.3 that
$$\limsup_{n \to \infty}\langle u - \bar{v}, v_{2n} - \bar{v}\rangle = \langle u - \bar{v}, \bar{x} - \bar{v}\rangle \leq 0. \tag{3.13}$$
By (3.11) and (3.13),
$$\limsup_{n \to \infty}\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle \leq \limsup_{n \to \infty}\langle u - \bar{v}, v_{2n+1} - v_{2n}\rangle + \limsup_{n \to \infty}\langle u - \bar{v}, v_{2n} - \bar{v}\rangle \leq 0. \tag{3.14}$$

Applying Lemma 2.7 to inequality (3.5) with $t_n = 2\langle u - \bar{v}, v_{2n+1} - \bar{v}\rangle$ and $k_n = 2(1 - a_n)\langle u - \bar{v}, v_{2n} - \bar{v}\rangle$, we obtain from (3.13), (3.14) and conditions (i), (ii) that $\lim_{n \to \infty} S_n = 0$, that is, $\lim_{n \to \infty} v_{2n-1} = \bar{v}$. It then follows from (3.11) that $\lim_{n \to \infty} v_{2n} = \bar{v}$. Thus, $\lim_{n \to \infty} v_n = \bar{v}$.

Case 2: Suppose that $\{S_n\}$ is not eventually decreasing. Then there is a subsequence $\{S_{n_i}\}$ of $\{S_n\}$ such that $S_{n_i} < S_{n_i+1}$ for all $i \geq 0$. Consider the sequence of integers $\{\tau(n)\}_{n \geq n_0}$ defined by $\tau(n) = \max\{k \leq n : S_k < S_{k+1}\}$ for some sufficiently large $n_0$. By Lemma 2.6, $\{\tau(n)\}_{n \geq n_0}$ is a nondecreasing sequence with $\lim_{n \to \infty}\tau(n) = \infty$, and for all $n \geq n_0$ one has
$$S_{\tau(n)} \leq S_{\tau(n)+1} \quad \text{and} \quad S_n \leq S_{\tau(n)+1}. \tag{3.15}$$
That is, $\max\{S_{\tau(n)}, S_n\} \leq S_{\tau(n)+1}$. For such $n \geq n_0$, it follows from (3.15) that
$$\|v_{2\tau(n)-1} - \bar{v}\| \leq \|v_{2\tau(n)+1} - \bar{v}\| \tag{3.16}$$
and
$$\|v_{2n-1} - \bar{v}\| \leq \|v_{2\tau(n)+1} - \bar{v}\|. \tag{3.17}$$
From (3.15) and (3.5), we obtain
$$\begin{aligned} S_{\tau(n)} \leq S_{\tau(n)+1} &\leq (1 - a_{\tau(n)})(1 - f_{\tau(n)})S_{\tau(n)} + 2a_{\tau(n)}\langle u - \bar{v}, v_{2\tau(n)+1} - \bar{v}\rangle + 2f_{\tau(n)}(1 - a_{\tau(n)})\langle u - \bar{v}, v_{2\tau(n)} - \bar{v}\rangle \\ &\quad - c_{\tau(n)}\|J_{\delta_{\tau(n)}}^{G_1}(I - \delta_{\tau(n)} B_1)v_{2\tau(n)} - v_{2\tau(n)}\|^2 - h_{\tau(n)}(1 - a_{\tau(n)})\|J_{\gamma_{\tau(n)}}^{G_2}(I - \gamma_{\tau(n)} B_2)v_{2\tau(n)-1} - v_{2\tau(n)-1}\|^2 \\ &\leq (1 - a_{\tau(n)})(1 - f_{\tau(n)})S_{\tau(n)} + 2a_{\tau(n)}\langle u - \bar{v}, v_{2\tau(n)+1} - \bar{v}\rangle + 2f_{\tau(n)}(1 - a_{\tau(n)})\langle u - \bar{v}, v_{2\tau(n)} - \bar{v}\rangle. \end{aligned} \tag{3.18}$$
Just as in the argument of Case 1, we have
$$\begin{cases} \lim_{n \to \infty}\|v_{2\tau(n)} - J_{\delta_{\tau(n)}}^{G_1}(I - \delta_{\tau(n)} B_1)v_{2\tau(n)}\| = 0, \\ \lim_{n \to \infty}\|v_{2\tau(n)-1} - J_{\gamma_{\tau(n)}}^{G_2}(I - \gamma_{\tau(n)} B_2)v_{2\tau(n)-1}\| = 0, \\ \limsup_{n \to \infty}\langle u - \bar{v}, v_{2\tau(n)+1} - \bar{v}\rangle \leq 0, \qquad \limsup_{n \to \infty}\langle u - \bar{v}, v_{2\tau(n)} - \bar{v}\rangle \leq 0, \\ \lim_{n \to \infty}\|v_{2\tau(n)+1} - v_{2\tau(n)}\| = 0. \end{cases} \tag{3.19}$$
By (3.15) and (3.18), we have
$$S_{\tau(n)+1} \leq (1 - a_{\tau(n)})(1 - f_{\tau(n)})S_{\tau(n)+1} + 2a_{\tau(n)}\langle u - \bar{v}, v_{2\tau(n)+1} - \bar{v}\rangle + 2f_{\tau(n)}(1 - a_{\tau(n)})\langle u - \bar{v}, v_{2\tau(n)} - \bar{v}\rangle$$
for all $n \geq n_0$. This implies that
$$S_{\tau(n)+1} \leq (1 - a_{\tau(n)})(1 - f_{\tau(n)})S_{\tau(n)+1} + a_{\tau(n)}K\|v_{2\tau(n)+1} - v_{2\tau(n)}\| + 2\bigl[f_{\tau(n)}(1 - a_{\tau(n)}) + a_{\tau(n)}\bigr]\langle u - \bar{v}, v_{2\tau(n)} - \bar{v}\rangle$$
for all $n \geq n_0$, where $K = 2\|u - \bar{v}\|$. Furthermore, we have
$$S_{\tau(n)+1} \leq \frac{a_{\tau(n)}K\|v_{2\tau(n)+1} - v_{2\tau(n)}\|}{a_{\tau(n)} + f_{\tau(n)}(1 - a_{\tau(n)})} + 2\langle u - \bar{v}, v_{2\tau(n)} - \bar{v}\rangle \leq K\|v_{2\tau(n)+1} - v_{2\tau(n)}\| + 2\langle u - \bar{v}, v_{2\tau(n)} - \bar{v}\rangle. \tag{3.20}$$
Hence, it follows from (3.19) and (3.20) that
$$\lim_{n \to \infty} S_{\tau(n)+1} = 0. \tag{3.21}$$

By (3.15) and (3.21), we know that $\lim_{n \to \infty} S_n = 0$. Then, just as in the argument of Case 1, we obtain $\lim_{n \to \infty} v_n = \bar{v}$. Therefore, the proof is completed. □

Remark 3.1

(i) If we put $B_1 = B_2 = 0$ in Theorem 3.1, then Theorem 3.1 reduces to Theorem 7 in [20].

(ii) Boikanyo and Moroşanu [20] showed that a strong convergence theorem for a proximal point algorithm with errors can be obtained from the strong convergence of a proximal point algorithm without errors. Therefore, in Theorem 3.1, we study strong convergence for variational inclusion problems without error terms.

As a simple consequence of Theorem 3.1, we have the following theorem.

Theorem 3.2 Suppose that $(B_1 + G_1)^{-1}(0)$ is nonempty, and $\{a_n\}, \{b_n\}, \{c_n\}$ are sequences in $[0,1]$ such that $a_n + b_n + c_n = 1$ and $0 < a_n < 1$ for each $n \in \mathbb{N}$. For an arbitrarily fixed $u \in H_1$, define a sequence $\{x_n\}$ by
$$\begin{cases} x_0 \in H_1 \text{ is chosen arbitrarily}, \\ x_{n+1} := a_n u + b_n x_n + c_n J_{\delta_n}^{G_1}(I - \delta_n B_1)x_n, & n \in \mathbb{N} \cup \{0\}. \end{cases}$$
Then $\lim_{n \to \infty} x_n = P_{(B_1+G_1)^{-1}(0)} u$ if the following conditions are satisfied:

(i) $\lim_{n \to \infty} a_n = 0$ and $\sum_{n=1}^{\infty} a_n = \infty$;

(ii) $0 < a \leq \delta_n \leq b < 2\kappa_1$ for each $n \in \mathbb{N}$ and for some $a, b \in \mathbb{R}_+$;

(iii) $\liminf_{n \to \infty} c_n > 0$.
Proof Set $f_n = 0$, $B_2 = 0$, and $G_2 = \partial\iota_{H_1}$, where $\{g_n\}$ and $\{h_n\}$ are sequences in $[0,1]$ with $g_n + h_n = 1$. Define a sequence $\{v_n\}$ by
$$v_{2n+1} = a_n u + b_n x_n + c_n J_{\delta_n}^{G_1}(I - \delta_n B_1)x_n, \quad n \in \mathbb{N} \cup \{0\},$$
and
$$v_{2n} = v_{2n-1} = x_n.$$
Then
$$v_{2n+1} = a_n u + b_n v_{2n} + c_n J_{\delta_n}^{G_1}(I - \delta_n B_1)v_{2n}, \quad n \in \mathbb{N} \cup \{0\},$$
and
$$v_{2n} = f_n u + g_n v_{2n-1} + h_n v_{2n-1} = f_n u + g_n v_{2n-1} + h_n J_{\gamma_n}^{\partial\iota_{H_1}} v_{2n-1}, \quad n \in \mathbb{N}.$$
Since
$$G_2^{-1}(0) = (\partial\iota_{H_1})^{-1}(0) = \operatorname{Fix}(J_{\gamma_n}^{\partial\iota_{H_1}}) = \operatorname{Fix}(P_{H_1}) = H_1,$$
it is easy to see that
$$(B_1 + G_1)^{-1}(0) = (B_1 + G_1)^{-1}(0) \cap H_1 = (B_1 + G_1)^{-1}(0) \cap (B_2 + G_2)^{-1}(0).$$

Then Theorem 3.2 follows from Theorem 3.1. □

Remark 3.2 Following the same argument as in Remark 12 in [20], we see that Theorems 3.1 and 3.2 contain [23, Theorem 3.3], [24, Theorem 1], [32, Theorems 1-4], and many other recent results as special cases.

4 Applications

Now, we recall the following multiple sets split feasibility problem (MSSFP-A1):
$$\text{Find } \bar{x} \in H_1 \text{ such that } \bar{x} \in G_1^{-1}(0) \cap G_2^{-1}(0), \ A_1\bar{x} \in \operatorname{Fix}(F_1) \text{ and } A_2\bar{x} \in \operatorname{Fix}(F_2).$$

Theorem 4.1 [33]

Given any $\bar{x} \in H_1$, let $\{\lambda_n\}, \{r_n\}, \{\rho_n\}, \{\sigma_n\}$ be sequences in $(0, \infty)$.

(i) If $\bar{x}$ is a solution of (MSSFP-A1), then $J_{\lambda_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)J_{r_n}^{G_2}(I - \sigma_n A_2^*(I - F_2)A_2)\bar{x} = \bar{x}$ for each $n \in \mathbb{N}$.

(ii) Suppose that $J_{\lambda_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)J_{r_n}^{G_2}(I - \sigma_n A_2^*(I - F_2)A_2)\bar{x} = \bar{x}$ with $0 < \rho_n < \frac{2}{\|A_1\|^2 + 2}$ and $0 < \sigma_n < \frac{2}{\|A_2\|^2 + 2}$ for each $n \in \mathbb{N}$, and that the solution set of (MSSFP-A1) is nonempty. Then $\bar{x}$ is a solution of (MSSFP-A1).
In order to study convergence theorems for the solution set of the multiple sets split feasibility problem (MSSFP-A1), we need the following problem and the following essential tool, which is a special case of Theorem 3.2 in [33]:
$$(\text{SFP-1}) \quad \text{Find } \bar{x} \in H_1 \text{ such that } \bar{x} \in \operatorname{Fix}(J_{\rho_n}^{G_1}) \text{ and } A_1\bar{x} \in \operatorname{Fix}(F_1).$$
Lemma 4.1 Given any $\bar{x} \in H_1$.

(i) If $\bar{x}$ is a solution of (SFP-1), then $J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)\bar{x} = \bar{x}$ for each $n \in \mathbb{N}$.

(ii) Suppose that $J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)\bar{x} = \bar{x}$ with $0 < \rho_n < \frac{2}{\|A_1\|^2 + 2}$ for each $n \in \mathbb{N}$, and the solution set of (SFP-1) is nonempty. Then $I - \rho_n A_1^*(I - F_1)A_1$ and $J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)$ are averaged, and $\bar{x}$ is a solution of (SFP-1).
Proof (i) Suppose that $\bar{x} \in H_1$ is a solution of (SFP-1). Then $\bar{x} \in \operatorname{Fix}(J_{\rho_n}^{G_1})$ and $A_1\bar{x} \in \operatorname{Fix}(F_1)$ for each $n \in \mathbb{N}$. It is easy to see that
$$J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)\bar{x} = \bar{x}, \quad n \in \mathbb{N}.$$
(ii) Since the solution set of (SFP-1) is nonempty, there exists $\bar{w} \in H_1$ such that $\bar{w} \in \operatorname{Fix}(J_{\rho_n}^{G_1})$ and $A_1\bar{w} \in \operatorname{Fix}(F_1)$. Then $\bar{w} \in G_1^{-1}(0)$. If we put $G_2 = G_1$ and $F_2 = F_1$, we get that the solution set of (MSSFP-A1) is nonempty. By Lemma 2.1, we have that
$$A_1^*(I - F_1)A_1 \text{ is } \tfrac{1}{\|A_1\|^2}\text{-ism}. \tag{4.1}$$
By (4.1), $0 < \rho_n < \frac{2}{\|A_1\|^2 + 2}$, and Lemma 2.8(ii), (iii), we know that
$$I - \rho_n A_1^*(I - F_1)A_1 \text{ is averaged for each } n \in \mathbb{N}. \tag{4.2}$$
On the other hand, for each $n \in \mathbb{N}$, $J_{\rho_n}^{G_1}$ is a firmly nonexpansive mapping, so it is easy to see that
$$J_{\rho_n}^{G_1} \text{ is } \tfrac{1}{2}\text{-averaged}. \tag{4.3}$$
Hence, by (4.2), (4.3) and Lemma 2.8(iv), (v), we see that
$$J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1) \text{ is averaged}.$$
Since
$$J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)\bar{x} = \bar{x},$$
we have
$$J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)\bar{x} = \bar{x}.$$

Then Lemma 4.1 follows from Theorem 4.1 by taking $G_1 = G_2$, $F_1 = F_2$, and $\rho_n = r_n$. □

Remark 4.1 The following result shows that Lemma 4.1 is more useful than Theorem 4.1.

Theorem 4.2 Suppose that the solution set $\Omega_{A1}$ of (MSSFP-A1) is nonempty and $\{a_n\}, \{b_n\}, \{c_n\}, \{f_n\}, \{g_n\}, \{h_n\}$ are sequences in $[0,1]$ such that $a_n + b_n + c_n = 1$, $f_n + g_n + h_n = 1$, $0 < a_n < 1$, and $0 < f_n < 1$ for each $n \in \mathbb{N}$. For an arbitrarily fixed $u \in H_1$, a sequence $\{v_n\}$ is defined by
$$\begin{cases} v_0 \in H_1 \text{ is chosen arbitrarily}, \\ v_{2n+1} := a_n u + b_n v_{2n} + c_n J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)v_{2n}, & n \in \mathbb{N} \cup \{0\}, \\ v_{2n} := f_n u + g_n v_{2n-1} + h_n J_{\sigma_n}^{G_2}(I - \sigma_n A_2^*(I - F_2)A_2)v_{2n-1}, & n \in \mathbb{N}. \end{cases}$$
Then $\lim_{n \to \infty} v_n$ is the unique solution of the following optimization problem:
$$\min_{q \in \Omega_{A1}} \|q - u\|$$
provided the following conditions are satisfied:

(i) $\lim_{n \to \infty} a_n = \lim_{n \to \infty} f_n = 0$;

(ii) either $\sum_{n=1}^{\infty} a_n = \infty$ or $\sum_{n=1}^{\infty} f_n = \infty$;

(iii) $0 < a \leq \rho_n \leq b < \frac{2}{\|A_1\|^2 + 2}$ and $0 < c \leq \sigma_n \leq d < \frac{2}{\|A_2\|^2 + 2}$ for each $n \in \mathbb{N}$ and for some $a, b, c, d \in \mathbb{R}_+$;

(iv) $\liminf_{n \to \infty} c_n > 0$ and $\liminf_{n \to \infty} h_n > 0$.
Proof Since $F_i$ is firmly nonexpansive, it follows from Lemma 2.1 that $A_i^*(I - F_i)A_i$ is $\frac{1}{\|A_i\|^2}$-ism for each $i = 1, 2$. For each $i = 1, 2$, put $B_i = A_i^*(I - F_i)A_i$ in Theorem 3.1. Then the algorithm in Theorem 4.2 becomes the algorithm in Theorem 3.1. Since $\Omega_{A1}$ is nonempty, by Lemma 4.1 we have that
$$\bar{w} \in \operatorname{Fix}\bigl(J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)\bigr) \cap \operatorname{Fix}\bigl(J_{\sigma_n}^{G_2}(I - \sigma_n A_2^*(I - F_2)A_2)\bigr) \tag{4.4}$$
for each $n \in \mathbb{N}$ and each $\bar{w} \in \Omega_{A1}$. This implies that
$$\bar{w} \in \operatorname{Fix}\bigl(J_{\rho_n}^{G_1}(I - \rho_n B_1)\bigr) \cap \operatorname{Fix}\bigl(J_{\sigma_n}^{G_2}(I - \sigma_n B_2)\bigr) \tag{4.5}$$
for each $n \in \mathbb{N}$. Hence,
$$\bar{w} \in (B_1 + G_1)^{-1}(0) \cap (B_2 + G_2)^{-1}(0).$$
By Theorem 3.1, $\lim_{n \to \infty} v_n = \bar{x}$, where $\bar{x} = P_{(B_1+G_1)^{-1}(0) \cap (B_2+G_2)^{-1}(0)} u$. That is,
$$\bar{x} \in (B_1 + G_1)^{-1}(0) \cap (B_2 + G_2)^{-1}(0)$$
and
$$\|\bar{x} - u\| \leq \|q - u\|$$
for all $q \in (B_1 + G_1)^{-1}(0) \cap (B_2 + G_2)^{-1}(0)$. Since
$$\bar{x} \in (B_1 + G_1)^{-1}(0) \cap (B_2 + G_2)^{-1}(0),$$
we know that
$$\bar{x} \in \operatorname{Fix}\bigl(J_{\rho_n}^{G_1}(I - \rho_n B_1)\bigr) \cap \operatorname{Fix}\bigl(J_{\sigma_n}^{G_2}(I - \sigma_n B_2)\bigr).$$
That is,
$$\bar{x} \in \operatorname{Fix}\bigl(J_{\rho_n}^{G_1}(I - \rho_n A_1^*(I - F_1)A_1)\bigr) \quad \text{and} \quad \bar{x} \in \operatorname{Fix}\bigl(J_{\sigma_n}^{G_2}(I - \sigma_n A_2^*(I - F_2)A_2)\bigr).$$
By Lemma 4.1, we get that $\bar{x} \in \Omega_{A1}$. Similarly, if $q \in (B_1 + G_1)^{-1}(0) \cap (B_2 + G_2)^{-1}(0)$, then $q \in \Omega_{A1}$. Therefore $\bar{x} = P_{\Omega_{A1}} u$. This shows that $\lim_{n \to \infty} v_n$ is the unique solution of the optimization problem
$$\min_{q \in \Omega_{A1}} \|q - u\|.$$

Therefore, the proof is completed. □

In the following theorem, we study the following multiple sets split feasibility problem (MSSMVIP-A2):
$$\text{Find } \bar{x} \in H_1 \text{ such that } \bar{x} \in C, \ A_1\bar{x} \in Q \text{ and } A_2\bar{x} \in Q'.$$

Let $\Omega_{A2}$ denote the solution set of (MSSMVIP-A2). The following theorem is a special case of Theorem 4.2.

Theorem 4.3 Suppose that $\Omega_{A2}$ is nonempty, and that $\{a_n\}, \{b_n\}, \{c_n\}, \{f_n\}, \{g_n\}, \{h_n\}$ are sequences in $[0,1]$ with $a_n + b_n + c_n = 1$, $f_n + g_n + h_n = 1$, $0 < a_n < 1$, and $0 < f_n < 1$ for each $n \in \mathbb{N}$. For an arbitrarily fixed $u \in H_1$, a sequence $\{v_n\}$ is defined by
$$\begin{cases} v_0 \in H_1 \text{ is chosen arbitrarily}, \\ v_{2n+1} := a_n u + b_n v_{2n} + c_n P_C(I - \rho_n A_1^*(I - P_Q)A_1)v_{2n}, & n \in \mathbb{N} \cup \{0\}, \\ v_{2n} := f_n u + g_n v_{2n-1} + h_n P_C(I - \sigma_n A_2^*(I - P_{Q'})A_2)v_{2n-1}, & n \in \mathbb{N}. \end{cases}$$
Then $\lim_{n \to \infty} v_n$ is the unique solution of the following optimization problem:
$$\min_{q \in \Omega_{A2}} \|q - u\|$$
provided the following conditions are satisfied:

(i) $\lim_{n \to \infty} a_n = \lim_{n \to \infty} f_n = 0$;

(ii) either $\sum_{n=1}^{\infty} a_n = \infty$ or $\sum_{n=1}^{\infty} f_n = \infty$;

(iii) $0 < a \leq \rho_n \leq b < \frac{2}{\|A_1\|^2 + 2}$ and $0 < c \leq \sigma_n \leq d < \frac{2}{\|A_2\|^2 + 2}$ for each $n \in \mathbb{N}$ and for some $a, b, c, d \in \mathbb{R}_+$;

(iv) $\liminf_{n \to \infty} c_n > 0$ and $\liminf_{n \to \infty} h_n > 0$.

Proof Put $G_1 = G_2 = \partial\iota_C$, $F_1 = P_Q$, and $F_2 = P_{Q'}$. Then $G_1, G_2$ are set-valued maximal monotone mappings, and $F_1$, $F_2$ are firmly nonexpansive mappings. Since $J_{\rho_n}^{\partial\iota_C} = P_C$ and $J_{\sigma_n}^{\partial\iota_C} = P_C$, we have $\operatorname{Fix}(F_1) = \operatorname{Fix}(P_Q) = Q$, $\operatorname{Fix}(F_2) = \operatorname{Fix}(P_{Q'}) = Q'$, and $\operatorname{Fix}(J_{\rho_n}^{\partial\iota_C}) = \operatorname{Fix}(J_{\sigma_n}^{\partial\iota_C}) = \operatorname{Fix}(P_C) = C$. Then Theorem 4.3 follows from Theorem 4.2. □
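To see the projection form of this algorithm in action, here is a small numerical sketch; all data are our own illustrative choices ($H_1 = \mathbb{R}^2$, $H_2 = H_3 = \mathbb{R}$, coordinate functionals for $A_1, A_2$, interval constraints, and schedules satisfying (i)-(iv)), and by Theorem 4.3 the iterates should approach $P_{\Omega_{A2}} u$:

```python
import numpy as np

# Sketch of the algorithm of Theorem 4.3 for: find x in C with A1 x in Q
# and A2 x in Q'. Illustrative data: C = [0,2]^2, A1 x = x[0], Q = [1,3],
# A2 x = x[1], Q' = [0.5,1]. The solution set is [1,2] x [0.5,1] and, for
# u = 0, the limit should be its minimum-norm point (1, 0.5).

P_C  = lambda x: np.clip(x, 0.0, 2.0)
P_Q  = lambda t: np.clip(t, 1.0, 3.0)   # projection in H2 = R
P_Qp = lambda t: np.clip(t, 0.5, 1.0)   # projection in H3 = R
A1 = np.array([1.0, 0.0])
A2 = np.array([0.0, 1.0])

u = np.zeros(2)
v = np.array([2.0, 2.0])                # v_0
rho = sigma = 0.5                       # inside (0, 2/(||A_i||^2 + 2)) = (0, 2/3)
for n in range(1, 5000):
    a = f = 1.0 / (n + 1)
    b = g = c = h = (1 - a) / 2
    t = A1 @ v                          # A1 v in H2
    v = a*u + b*v + c*P_C(v - rho * A1 * (t - P_Q(t)))     # odd-indexed step
    t = A2 @ v                          # A2 v in H3
    v = f*u + g*v + h*P_C(v - sigma * A2 * (t - P_Qp(t)))  # even-indexed step
print(v)  # approximately [1.0, 0.5] = P_{Omega_A2} u
```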

In the following theorem, we study the following split feasibility problem (MSSMVIP-A3):
$$\text{Find } \bar{x} \in H_1 \text{ such that } \bar{x} \in C \cap Q' \text{ and } A_1\bar{x} \in Q, \text{ where } Q' \text{ is a nonempty closed convex subset of } H_1.$$

Let $\Omega_{A3}$ denote the solution set of problem (MSSMVIP-A3). The following is also a special case of Theorem 4.3.

Theorem 4.4 Suppose that $Q'$ is a nonempty closed convex subset of $H_1$, $\Omega_{A3}$ is nonempty, and $\{a_n\}, \{b_n\}, \{c_n\}, \{f_n\}, \{g_n\}, \{h_n\}$ are sequences in $[0,1]$ with $a_n + b_n + c_n = 1$, $f_n + g_n + h_n = 1$, $0 < a_n < 1$, and $0 < f_n < 1$ for each $n \in \mathbb{N}$. For an arbitrarily fixed $u \in H_1$, a sequence $\{v_n\}$ is defined by
$$\begin{cases} v_0 \in H_1 \text{ is chosen arbitrarily}, \\ v_{2n+1} := a_n u + b_n v_{2n} + c_n P_C(I - \rho_n A_1^*(I - P_Q)A_1)v_{2n}, & n \in \mathbb{N} \cup \{0\}, \\ v_{2n} := f_n u + g_n v_{2n-1} + h_n P_C(I - \sigma_n(I - P_{Q'}))v_{2n-1}, & n \in \mathbb{N}. \end{cases}$$
Then $\lim_{n \to \infty} v_n$ is the unique solution of the following optimization problem:
$$\min_{q \in \Omega_{A3}} \|q - u\|$$
provided the following conditions are satisfied:

(i) $\lim_{n \to \infty} a_n = \lim_{n \to \infty} f_n = 0$;

(ii) either $\sum_{n=1}^{\infty} a_n = \infty$ or $\sum_{n=1}^{\infty} f_n = \infty$;

(iii) $0 < a \leq \rho_n \leq b < \frac{2}{\|A_1\|^2 + 2}$ and $0 < c < \sigma_n < d < \frac{2}{3}$ for each $n \in \mathbb{N}$ and for some $a, b, c, d \in \mathbb{R}_+$;

(iv) $\liminf_{n \to \infty} c_n > 0$ and $\liminf_{n \to \infty} h_n > 0$.

Proof Put $A_2 = I$ and $H_1 = H_3$ in Theorem 4.3. Then Theorem 4.4 follows from Theorem 4.3. □

In the following theorem, we study the following convex feasibility problem (MSSMVIP-A4):
$$\text{Find } \bar{x} \in H_1 \text{ such that } \bar{x} \in C \cap Q \cap Q', \text{ where } Q, Q' \text{ are nonempty closed convex subsets of } H_1.$$

Let $\Omega_{A4}$ denote the solution set of (MSSMVIP-A4). The following is a special case of Theorem 4.3.

Theorem 4.5 Suppose that $Q$ and $Q'$ are nonempty closed convex subsets of $H_1$, $\Omega_{A4}$ is nonempty, and $\{a_n\}, \{b_n\}, \{c_n\}, \{f_n\}, \{g_n\}, \{h_n\}$ are sequences in $[0,1]$ with $a_n + b_n + c_n = 1$, $f_n + g_n + h_n = 1$, $0 < a_n < 1$, and $0 < f_n < 1$ for each $n \in \mathbb{N}$. For an arbitrarily fixed $u \in H_1$, a sequence $\{v_n\}$ is defined by
$$\begin{cases} v_0 \in H_1 \text{ is chosen arbitrarily}, \\ v_{2n+1} := a_n u + b_n v_{2n} + c_n P_C(I - \rho_n(I - P_Q))v_{2n}, & n \in \mathbb{N} \cup \{0\}, \\ v_{2n} := f_n u + g_n v_{2n-1} + h_n P_C(I - \sigma_n(I - P_{Q'}))v_{2n-1}, & n \in \mathbb{N}. \end{cases}$$
Then $\lim_{n \to \infty} v_n$ is the unique solution of the following optimization problem:
$$\min_{q \in \Omega_{A4}} \|q - u\|$$
provided the following conditions are satisfied:

(i) $\lim_{n \to \infty} a_n = \lim_{n \to \infty} f_n = 0$;

(ii) either $\sum_{n=1}^{\infty} a_n = \infty$ or $\sum_{n=1}^{\infty} f_n = \infty$;

(iii) $0 < a < \rho_n < b < \frac{2}{3}$ and $0 < c < \sigma_n < d < \frac{2}{3}$ for each $n \in \mathbb{N}$ and for some $a, b, c, d \in \mathbb{R}_+$;

(iv) $\liminf_{n \to \infty} c_n > 0$ and $\liminf_{n \to \infty} h_n > 0$.

Proof Put $A_1 = A_2 = I$ and $H_1 = H_2 = H_3$ in Theorem 4.3. Then Theorem 4.5 follows from Theorem 4.3. □

In the following theorem, we study the following convex feasibility problem (MSSMVIP-A5):
$$\text{Find } \bar{x} \in H_1 \text{ such that } \bar{x} \in Q \cap Q', \text{ where } Q \text{ and } Q' \text{ are nonempty closed convex subsets of } H_1.$$

Let $\Omega_{A5}$ denote the solution set of (MSSMVIP-A5).

The following convergence theorem for the convex feasibility problem follows immediately from Theorem 4.5.

Theorem 4.6 Suppose that $Q$ and $Q'$ are nonempty closed convex subsets of $H_1$, $\Omega_{A5}$ is nonempty, and $\{a_n\}, \{b_n\}, \{c_n\}, \{f_n\}, \{g_n\}, \{h_n\}$ are sequences in $[0,1]$ with $a_n + b_n + c_n = 1$, $f_n + g_n + h_n = 1$, $0 < a_n < 1$, and $0 < f_n < 1$ for each $n \in \mathbb{N}$. For an arbitrarily fixed $u \in H_1$, define a sequence $\{v_n\}$ by
$$\begin{cases} v_0 \in H_1 \text{ is chosen arbitrarily}, \\ v_{2n+1} := a_n u + b_n v_{2n} + c_n(I - \rho_n(I - P_Q))v_{2n}, & n \in \mathbb{N} \cup \{0\}, \\ v_{2n} := f_n u + g_n v_{2n-1} + h_n(I - \sigma_n(I - P_{Q'}))v_{2n-1}, & n \in \mathbb{N}. \end{cases}$$
Then $\lim_{n \to \infty} v_n$ is the unique solution of the following optimization problem:
$$\min_{q \in \Omega_{A5}} \|q - u\|$$
provided the following conditions are satisfied:

(i) $\lim_{n \to \infty} a_n = \lim_{n \to \infty} f_n = 0$;

(ii) either $\sum_{n=1}^{\infty} a_n = \infty$ or $\sum_{n=1}^{\infty} f_n = \infty$;

(iii) $0 < a < \rho_n < b < \frac{2}{3}$ and $0 < c < \sigma_n < d < \frac{2}{3}$ for each $n \in \mathbb{N}$ and for some $a, b, c, d \in \mathbb{R}_+$;

(iv) $\liminf_{n \to \infty} c_n > 0$ and $\liminf_{n \to \infty} h_n > 0$.

Proof Put $C = H_1$; then $P_C = P_{H_1} = I$. Then Theorem 4.6 follows from Theorem 4.5. □

In the following theorem, we study the following system of convexly constrained linear inverse problems (SCCLIP):
$$\text{Find } \bar{x} \in H_1 \text{ such that } \bar{x} \in C, \ A_1\bar{x} = b \text{ and } A_2\bar{x} = b', \text{ where } b \in H_2 \text{ and } b' \in H_3.$$

Let $\Omega_{A6}$ denote the solution set of (SCCLIP).

Theorem 4.7 Suppose that $\Omega_{A6}$ is nonempty, and $b \in H_2$, $b' \in H_3$. Let $\{a_n\}, \{b_n\}, \{c_n\}, \{f_n\}, \{g_n\}, \{h_n\}$ be sequences in $[0,1]$ with $a_n + b_n + c_n = 1$, $f_n + g_n + h_n = 1$, $0 < a_n < 1$, and $0 < f_n < 1$ for each $n \in \mathbb{N}$. For an arbitrarily fixed $u \in H_1$, a sequence $\{v_n\}$ is defined by
$$\begin{cases} v_0 \in H_1 \text{ is chosen arbitrarily}, \\ v_{2n+1} := a_n u + b_n v_{2n} + c_n P_C(v_{2n} - \rho_n A_1^*(A_1 v_{2n} - b)), & n \in \mathbb{N} \cup \{0\}, \\ v_{2n} := f_n u + g_n v_{2n-1} + h_n P_C(v_{2n-1} - \sigma_n A_2^*(A_2 v_{2n-1} - b')), & n \in \mathbb{N}. \end{cases}$$
Then $\lim_{n \to \infty} v_n$ is the unique solution of the following optimization problem:
$$\min_{q \in \Omega_{A6}} \|q - u\|$$
provided the following conditions are satisfied:

(i) $\lim_{n \to \infty} a_n = \lim_{n \to \infty} f_n = 0$;

(ii) either $\sum_{n=1}^{\infty} a_n = \infty$ or $\sum_{n=1}^{\infty} f_n = \infty$;

(iii) $0 < a \leq \rho_n \leq b < \frac{2}{\|A_1\|^2 + 2}$ and $0 < c \leq \sigma_n \leq d < \frac{2}{\|A_2\|^2 + 2}$ for each $n \in \mathbb{N}$ and for some $a, b, c, d \in \mathbb{R}_+$;

(iv) $\liminf_{n \to \infty} c_n > 0$ and $\liminf_{n \to \infty} h_n > 0$.

Proof Put $Q = \{b\}$ and $Q' = \{b'\}$ in Theorem 4.3. Then Theorem 4.7 follows from Theorem 4.3. □
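A small numerical sketch of the iteration in Theorem 4.7 (the matrices, right-hand sides, constraint set, and schedules are our own illustrative assumptions):

```python
import numpy as np

# Sketch of Theorem 4.7's iteration for the system A1 x = b, A2 x = b',
# x in C. Illustrative data: C = nonnegative orthant of R^2, A1 = [1 1]
# with b = 2, and A2 = [1 -1] with b' = 0, so the unique solution is (1, 1).

P_C = lambda x: np.maximum(x, 0.0)
A1 = np.array([[1.0,  1.0]]); b1 = np.array([2.0])
A2 = np.array([[1.0, -1.0]]); b2 = np.array([0.0])

u = np.zeros(2)
v = np.array([3.0, 0.0])   # v_0
rho = sigma = 0.4          # inside (0, 2/(||A_i||^2 + 2)) = (0, 1/2) here
for n in range(1, 5000):
    a = f = 1.0 / (n + 1)
    b = g = c = h = (1 - a) / 2
    v = a*u + b*v + c*P_C(v - rho * A1.T @ (A1 @ v - b1))    # odd-indexed step
    v = f*u + g*v + h*P_C(v - sigma * A2.T @ (A2 @ v - b2))  # even-indexed step
print(v)  # approximately [1.0, 1.0]
```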

In the following theorem, we study the following convexly constrained linear inverse problem (CCLIP):
$$\text{Find } \bar{x} \in H_1 \text{ such that } \bar{x} \in C \cap Q' \text{ and } A_1\bar{x} = b, \text{ where } b \in H_2 \text{ and } Q' \text{ is a nonempty closed convex subset of } H_1.$$

Let $\Omega_{A7}$ denote the solution set of (CCLIP).

Theorem 4.8 Suppose that $Q'$ is a nonempty closed convex subset of $H_1$, $\Omega_{A7}$ is nonempty, $b \in H_2$, and $\{a_n\}, \{b_n\}, \{c_n\}, \{f_n\}, \{g_n\}, \{h_n\}$ are sequences in $[0,1]$ with $a_n + b_n + c_n = 1$, $f_n + g_n + h_n = 1$, $0 < a_n < 1$, and $0 < f_n < 1$ for each $n \in \mathbb{N}$. For an arbitrarily fixed $u \in H_1$, a sequence $\{v_n\}$ is defined by
$$\begin{cases} v_0 \in H_1 \text{ is chosen arbitrarily}, \\ v_{2n+1} := a_n u + b_n v_{2n} + c_n P_C(v_{2n} - \rho_n A_1^*(A_1 v_{2n} - b)), & n \in \mathbb{N} \cup \{0\}, \\ v_{2n} := f_n u + g_n v_{2n-1} + h_n P_C(I - \sigma_n(I - P_{Q'}))v_{2n-1}, & n \in \mathbb{N}. \end{cases}$$
Then $\lim_{n \to \infty} v_n$ is the unique solution of the following optimization problem:
$$\min_{q \in \Omega_{A7}} \|q - u\|$$
provided the following conditions are satisfied:

(i) $\lim_{n \to \infty} a_n = \lim_{n \to \infty} f_n = 0$;

(ii) either $\sum_{n=1}^{\infty} a_n = \infty$ or $\sum_{n=1}^{\infty} f_n = \infty$;

(iii) $0 < a \leq \rho_n \leq b < \frac{2}{\|A_1\|^2 + 2}$ and $0 < c < \sigma_n < d < \frac{2}{3}$ for each $n \in \mathbb{N}$ and for some $a, b, c, d \in \mathbb{R}_+$;

(iv) $\liminf_{n \to \infty} c_n > 0$ and $\liminf_{n \to \infty} h_n > 0$.
Remark 4.2 The iteration in Theorem 4.8 is different from the Landweber iterative method [19]:
$$x_{n+1} := x_n + \gamma A^T(b - Ax_n), \quad n \in \mathbb{N}.$$

Declarations

Acknowledgements

Prof. C.-S. Chuang was supported by the National Science Council of the Republic of China while he worked on this paper, and Y.-D. Chen was supported by Southern Taiwan University of Science and Technology.

Authors’ Affiliations

(1)
Department of Mathematics, National Changhua University of Education, Changhua, 50058, Taiwan
(2)
Department of Chemical and Materials Engineering, Southern Taiwan University of Science and Technology, Tainan, 71005, Taiwan
(3)
Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung, Taiwan

References

  1. Combettes PL: The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics, vol. 95. Edited by: Hawkes P. Academic Press, New York; 1996:155–270.
  2. Stark H: Image Recovery: Theory and Applications. Academic Press, New York; 1987.
  3. Bregman LM: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7: 200–217.
  4. Hundal H: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35–61.
  5. Boikanyo OA, Moroşanu G: Strong convergence of the method of alternating resolvents. J. Nonlinear Convex Anal. 2013, 14: 221–229.
  6. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239.
  7. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 376–426.
  8. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453.
  9. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120.
  10. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365.
  11. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084.
  12. López G, Martín-Márquez V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison; 2010:243–279.
  13. López G, Martín-Márquez V, Wang F, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004.
  14. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665.
  15. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.
  16. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.
  17. Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266.
  18. Eicke B: Iteration methods for convexly constrained ill-posed problems in Hilbert space.