Solutions for a variational inclusion problem with applications to multiple sets split feasibility problems

Abstract

In this paper, we first study the set of common solutions for two variational inclusion problems in a real Hilbert space and establish a strong convergence theorem of this problem. As applications, we study unique minimum norm solutions of the following problems: multiple sets split feasibility problems, system of convex constrained linear inverse problems, convex constrained linear inverse problems, split feasibility problems, convex feasibility problems. We establish iteration processes of these problems and show the strong convergence theorems of these iteration processes.

MSC:47J20, 47J25, 47H05, 47H09.

1 Introduction

Let C_1, C_2, …, C_m be nonempty closed convex subsets of a real Hilbert space H. The well-known convex feasibility problem (CFP) is to find x ∈ H such that

x ∈ C_1 ∩ C_2 ∩ ⋯ ∩ C_m.

The convex feasibility problem has received a lot of attention due to its diverse applications in mathematics, approximation theory, communications, geophysics, control theory, and biomedical engineering. One can refer to [1, 2].

If C_1 and C_2 are closed subspaces of a real Hilbert space H, then, as von Neumann showed in 1933, the sequence {x_n} generated by the method of alternating projections, x_0 ∈ H, x_1 = P_{C_1}x_0, x_2 = P_{C_2}x_1, x_3 = P_{C_1}x_2, …, converges strongly to some x̄ ∈ C_1 ∩ C_2. If C_1 and C_2 are nonempty closed convex subsets of H, Bregman [3] showed that the sequence {x_n} generated by the method of alternating projections converges weakly to a point in C_1 ∩ C_2. Hundal [4] showed that strong convergence can fail when C_1 and C_2 are merely nonempty closed convex subsets of H. Recently, Boikanyo et al. [5] proposed the following process:
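The method of alternating projections is easy to try numerically. The sketch below (Python; the two half-planes in ℝ² and the starting point are illustrative assumptions, not sets from the paper) alternates the two metric projections and lands in the intersection:

```python
import numpy as np

# Alternating projections x_{k+1} = P_{C1} x_k, x_{k+2} = P_{C2} x_{k+1}
# for two illustrative half-planes in R^2.
def proj_halfspace(x, a, b):
    """Metric projection of x onto the half-space {y : <a, y> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

a1, b1 = np.array([1.0, 0.0]), 1.0   # C1 = {x : x[0] <= 1}
a2, b2 = np.array([0.0, 1.0]), 2.0   # C2 = {x : x[1] <= 2}

x = np.array([5.0, 7.0])             # x_0
for _ in range(50):
    x = proj_halfspace(x, a1, b1)    # project onto C1
    x = proj_halfspace(x, a2, b2)    # project onto C2
```

For closed subspaces the limit is P_{C_1 ∩ C_2} x_0; for general convex sets, as the Bregman and Hundal results above indicate, only weak convergence is guaranteed in infinite dimensions.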

x_{2n+1} = J_{β_n}^{G_1}(α_n u + (1 − α_n)x_{2n} + e_n), n = 0, 1, …,
(1.1)

and

x_{2n} = J_{ρ_n}^{G_2}(λ_n u + (1 − λ_n)x_{2n−1} + e′_n), n = 0, 1, …,
(1.2)

where u, x_0 ∈ H are given arbitrarily, G_1 and G_2 are two set-valued maximal monotone operators with J_β^{G_i} = (I + βG_i)^{−1}, and {α_n}, {λ_n}, {β_n}, {ρ_n}, {e_n}, {e′_n} are sequences. Boikanyo et al. [5] proved that the sequence {x_n} converges strongly to a point x̄ ∈ G_1^{−1}(0) ∩ G_2^{−1}(0) under suitable conditions.

The split feasibility problem (SFP) is to find a point

x̄ ∈ C such that Ax̄ ∈ Q,

where C, Q are nonempty closed convex subsets of real Hilbert spaces H_1, H_2, respectively, and A: H_1 → H_2 is a bounded linear operator. The split feasibility problem (SFP) in finite dimensional real Hilbert spaces was first introduced by Censor and Elfving [6] for modeling inverse problems which arise from medical image reconstruction. Since then, the split feasibility problem (SFP) has received much attention due to its applications in signal processing, image reconstruction, approximation theory, control theory, biomedical engineering, communications, and geophysics. For example, one can refer to [1, 2, 6–17] and the related literature. A special case of problem (SFP) is the convexly constrained linear inverse problem in a finite dimensional real Hilbert space [18]:

(CLIP)  Find x̄ ∈ C such that Ax̄ = b,

where C is a nonempty closed convex subset of a real Hilbert space H_1 and b is a given element of a real Hilbert space H_2. This problem has been extensively investigated by means of the Landweber iterative method [19]:

x_{n+1} := x_n + γA^T(b − Ax_n), n ∈ ℕ.
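A minimal numeric sketch of the Landweber iteration (Python; the matrix A, the consistent right-hand side b, and the step size γ = 1/‖A‖² are illustrative assumptions; any 0 < γ < 2/‖A‖² works for this consistent system):

```python
import numpy as np

# Landweber iteration x_{n+1} = x_n + γ A^T (b - A x_n) for A x = b.
A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true                           # consistent right-hand side

gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / ||A||^2 (spectral norm)
x = np.zeros(2)
for _ in range(2000):
    x = x + gamma * A.T @ (b - A @ x)
```

Since A has full column rank and the system is consistent, the iterates converge to the exact solution; with an inconsistent b they would converge to a least-squares solution instead.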

Let C_1, C_2, …, C_m be nonempty closed convex subsets of H_1, let Q_1, Q_2, …, Q_m be nonempty closed convex subsets of H_2, and let A_1, A_2, …, A_m: H_1 → H_2 be bounded linear operators. The well-known multiple sets split feasibility problem (MSSFP) is to find x ∈ H_1 such that

x ∈ C_i and A_i x ∈ Q_i for all i = 1, 2, …, m.

The multiple sets split feasibility problem (MSSFP) contains the convex feasibility problem (CFP) and the split feasibility problem (SFP) as special cases [6, 10, 13]. Indeed, Censor et al. [11] first studied this type of problem, and Xu [16] and Lopez et al. [13] studied it as well. In 2011, Boikanyo and Moroşanu [20] gave the following algorithm:

v_{2n+1} := a_n u + b_n v_{2n} + c_n J_{α_n}^{G_1} v_{2n}, n ∈ ℕ ∪ {0},
v_{2n} := f_n u + g_n v_{2n−1} + h_n J_{β_n}^{G_2} v_{2n−1}, n ∈ ℕ,
(1.3)

where u is given in H, G_1, G_2 are two set-valued maximal monotone mappings on H, and {a_n}, {b_n}, {c_n}, {f_n}, {g_n}, {h_n} are sequences in [0,1]. Boikanyo and Moroşanu [20] proved that {v_n} in (1.3) converges strongly to some x̄ ∈ G_1^{−1}(0) ∩ G_2^{−1}(0) under suitable conditions.

Motivated by the above works, we consider the following algorithm:

v_0 ∈ H is chosen arbitrarily,
v_{2n+1} := a_n u + b_n v_{2n} + c_n J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n}, n ∈ ℕ ∪ {0},
v_{2n} := f_n u + g_n v_{2n−1} + h_n J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1}, n ∈ ℕ,
(1.4)

where G_1, G_2 are two set-valued maximal monotone mappings on a real Hilbert space H_1, B_1, B_2: C → H_1 are two mappings, and {a_n}, {b_n}, {c_n}, {f_n}, {g_n}, {h_n} are sequences in [0,1]. We show that the sequence {v_n} generated by (1.4) converges strongly to some x̄ ∈ (B_1 + G_1)^{−1}(0) ∩ (B_2 + G_2)^{−1}(0) under suitable conditions.
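To see the shape of (1.4) in action, the following Python sketch runs the special case B_1 = B_2 = 0 with G_i = ∂ι_{C_i}, so each resolvent J_{δ_n}^{G_i}(I − δ_n B_i) reduces to a metric projection P_{C_i}. The disk, the half-plane, the anchor u, and the parameter choices a_n = f_n = 1/(n+2), b_n = g_n = 0 are all illustrative assumptions:

```python
import numpy as np

# Algorithm (1.4) with B1 = B2 = 0 and resolvents equal to metric
# projections: J = P_{C1} (a disk) and J = P_{C2} (a half-plane).
def proj_disk(x, r=2.0):                  # C1 = {x : ||x|| <= 2}
    n = np.linalg.norm(x)
    return x if n <= r else r * x / n

def proj_halfplane(x):                    # C2 = {x : x[1] >= 1}
    return np.array([x[0], max(x[1], 1.0)])

u = np.array([3.0, 5.0])                  # fixed anchor point
v = np.zeros(2)                           # v_0
for n in range(20000):
    a = 1.0 / (n + 2)                     # a_n = f_n -> 0, sum divergent
    v = a * u + (1 - a) * proj_halfplane(v)   # even step (uses P_{C2})
    v = a * u + (1 - a) * proj_disk(v)        # odd step (uses P_{C1})

# For this geometry P_{C1 ∩ C2} u = 2u/||u||.
expected = 2 * u / np.linalg.norm(u)
```

The iterates approach the point of C_1 ∩ C_2 nearest to the anchor u, which is exactly the minimum-norm-type behavior exploited in the applications below.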

Algorithm (1.4) contains as special cases the inexact proximal point algorithm (e.g., [21, 22]) and the generalized contraction proximal point algorithm (e.g., [23]). Our conclusion extends and unifies Boikanyo and Moroşanu’s result in [20], and Wang and Cui’s result in [24] becomes a special case. For details, one can see Section 3.

In this paper, we first study the set of common solutions for two variational inclusion problems in a Hilbert space and establish a strong convergence theorem of this problem. As applications, we study unique minimum norm solutions of the following problems: multiple sets split feasibility problems, system of convex constrained linear inverse problems, convex constrained linear inverse problems, split feasibility problems, convex feasibility problems. We establish iteration processes of these problems and show strong convergence theorems of these iteration processes.

2 Preliminaries

Throughout this paper, let H, H_1, H_2, H_3 denote real Hilbert spaces with inner product ⟨·,·⟩ and norm ‖·‖; let ℕ be the set of all natural numbers and ℝ⁺ the set of all positive real numbers. A set-valued mapping A with domain D(A) on H is called monotone if ⟨u − v, x − y⟩ ≥ 0 for any u ∈ Ax, v ∈ Ay and for all x, y ∈ D(A). A monotone operator A is called maximal monotone if its graph {(x, y): x ∈ D(A), y ∈ Ax} is not properly contained in the graph of any other monotone mapping. The set of all zero points of A is denoted by A^{−1}(0), i.e., A^{−1}(0) = {x ∈ H: 0 ∈ Ax}. In what follows, we denote the strong convergence and the weak convergence of {x_n} to x ∈ H by x_n → x and x_n ⇀ x, respectively. In order to facilitate our discussion, we recall some facts. The following equality is easy to check:

‖αx + βy + γz‖² = α‖x‖² + β‖y‖² + γ‖z‖² − αβ‖x − y‖² − αγ‖x − z‖² − βγ‖y − z‖²
(2.1)

for each x, y, z ∈ H and α, β, γ ∈ [0,1] with α + β + γ = 1. Besides, we also have

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩
(2.2)

for each x, y ∈ H. Let C be a nonempty closed convex subset of H, and let T: C → H be a mapping. We denote the set of all fixed points of T by Fix(T). A mapping T: C → H is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for every x, y ∈ C. A mapping T: C → H is said to be quasi-nonexpansive if Fix(T) ≠ ∅ and ‖Tx − y‖ ≤ ‖x − y‖ for all x ∈ C and y ∈ Fix(T). A mapping T: C → H is said to be firmly nonexpansive if

‖Tx − Ty‖² ≤ ‖x − y‖² − ‖(I − T)x − (I − T)y‖²

for every x, y ∈ C. Besides, it is easy to see that Fix(T) is a closed convex subset of C if T: C → H is a quasi-nonexpansive mapping. A mapping T: C → H is said to be α-inverse-strongly monotone (α-ism) if

⟨x − y, Tx − Ty⟩ ≥ α‖Tx − Ty‖²

for all x, y ∈ C, where α > 0.

The following lemmas are needed in this paper.

Lemma 2.1 [25]

Let A: H_1 → H_2 be a bounded linear operator, and let A^* be the adjoint of A. Suppose that C is a nonempty closed convex subset of H_2, and that G: C → H_2 is a firmly nonexpansive mapping. Then A^*(I − G)A is (1/‖A‖²)-ism, that is,

(1/‖A‖²)‖A^*(I − G)Ax − A^*(I − G)Ay‖² ≤ ⟨x − y, A^*(I − G)Ax − A^*(I − G)Ay⟩

for all x, y ∈ H_1.

Lemma 2.2 Let C be a nonempty closed convex subset of H, and let G: C → H be a firmly nonexpansive mapping. Suppose that Fix(G) is nonempty. Then ⟨x − Gx, Gx − w⟩ ≥ 0 for each x ∈ C and each w ∈ Fix(G).

Let C be a nonempty closed convex subset of H. Then, for each x ∈ H, there is a unique element x̄ ∈ C such that ‖x − x̄‖ = min_{y∈C} ‖x − y‖. Here, we set P_C x = x̄, and P_C is said to be the metric projection from H onto C.

Lemma 2.3 [26]

Let C be a nonempty closed convex subset of H, and let P_C be the metric projection from H onto C. Then ⟨x − P_C x, P_C x − y⟩ ≥ 0 for each x ∈ H and each y ∈ C.

For a set-valued maximal monotone operator G on H and r > 0, we may define an operator J_r^G: H → H with J_r^G = (I + rG)^{−1}, which is called the resolvent mapping of G for r.

Lemma 2.4 [26, 27]

Let G: H → 2^H be a set-valued maximal monotone mapping. Then we have that:

(i) for each α > 0, J_α^G is a single-valued and firmly nonexpansive mapping;

(ii) D(J_α^G) = H and Fix(J_α^G) = G^{−1}(0);

(iii) ‖x − J_α^G x‖ ≤ 2‖x − J_β^G x‖ for each x ∈ H and all α, β ∈ (0, ∞) with 0 < α ≤ β.

Lemma 2.5 [26]

Let G be a set-valued maximal monotone operator on H. For a > 0, define the resolvent J_a^G = (I + aG)^{−1}. Then the following holds:

‖J_α^G x − J_β^G x‖² ≤ ((α − β)/α)⟨J_α^G x − J_β^G x, J_α^G x − x⟩

for all α, β > 0 and x ∈ H. In fact,

‖J_α^G x − J_β^G x‖ ≤ (|α − β|/α)‖J_α^G x − x‖

for all α, β > 0 and x ∈ H.
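For a concrete one-dimensional illustration (an assumption for this sketch, not an example from the paper): with G = ∂f for f(x) = |x| on ℝ, the resolvent J_a^G is the soft-thresholding map, and the second inequality of Lemma 2.5 can be checked numerically:

```python
import numpy as np

# For G = ∂|·| on R, the resolvent J_a^G = (I + aG)^{-1} is soft
# thresholding: J_a(x) = sign(x) * max(|x| - a, 0).
def J(x, a):
    return np.sign(x) * max(abs(x) - a, 0.0)

# Check ||J_a x - J_b x|| <= (|a - b| / a) ||J_a x - x|| on a grid.
violations = 0
for x in np.linspace(-4.0, 4.0, 81):
    for a in (0.5, 1.0, 2.0):
        for b in (0.25, 1.5, 3.0):
            lhs = abs(J(x, a) - J(x, b))
            rhs = (abs(a - b) / a) * abs(J(x, a) - x)
            if lhs > rhs + 1e-12:
                violations += 1
```

Since ∂|·| is maximal monotone, Lemma 2.5 guarantees the inequality at every grid point; for |x| larger than both thresholds it holds with equality.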

Lemma 2.6 [28]

Let {S_n} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence {S_{n_i}}_{i≥0} of {S_n} such that S_{n_i} < S_{n_i+1} for all i ∈ ℕ. Consider the sequence of integers {τ(n)}_{n≥n_0} defined by τ(n) = max{n_0 ≤ k ≤ n: S_k < S_{k+1}} for some sufficiently large number n_0 ∈ ℕ. Then {τ(n)}_{n≥n_0} is a nondecreasing sequence with τ(n) → ∞ as n → ∞, and for all n ≥ n_0,

S_{τ(n)} ≤ S_{τ(n)+1} and S_n ≤ S_{τ(n)+1}.

In fact, max{S_{τ(n)}, S_n} ≤ S_{τ(n)+1}.

Lemma 2.7 [5]

Let {a_n} and {b_n} be two sequences in [0,1]. Let {e_n} be a sequence of nonnegative real numbers. Let {t_n} and {k_n} be two sequences of real numbers. Let {S_n}_{n∈ℕ} be a sequence of nonnegative real numbers with

S_{n+1} ≤ (1 − a_n)(1 − b_n)S_n + a_n t_n + b_n k_n + e_n

for each n ∈ ℕ. Assume that

∑_{n=1}^∞ a_n = ∞,  limsup_{n→∞} t_n ≤ 0,  limsup_{n→∞} k_n ≤ 0,  and  ∑_{n=1}^∞ e_n < ∞.

Then lim_{n→∞} S_n = 0.
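A quick numeric illustration of Lemma 2.7 (the choices b_n = k_n = 0, a_n = 1/(n+1), t_n = 1/(n+2), e_n = 1/(n+1)² are illustrative and satisfy the hypotheses):

```python
# Lemma 2.7 sketch: S_{n+1} = (1 - a_n)(1 - b_n) S_n + a_n t_n + b_n k_n + e_n
# with b_n = k_n = 0, a_n = 1/(n+1)   (so sum a_n diverges),
# t_n = 1/(n+2)                        (so limsup t_n = 0 <= 0),
# e_n = 1/(n+1)^2                      (summable).
# The lemma predicts S_n -> 0 even from a large starting value.
S = 10.0
for n in range(1, 100001):
    a = 1.0 / (n + 1)
    t = 1.0 / (n + 2)
    e = 1.0 / (n + 1) ** 2
    S = (1 - a) * S + a * t + e
```

Running the recursion with equality (the worst case allowed by the inequality) drives S below 10⁻² well within the loop.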

A mapping T: H → H is said to be averaged if T = (1 − α)I + αS, where S: H → H is a nonexpansive mapping and α ∈ (0,1).

Lemma 2.8 [29]

Let C be a nonempty closed convex subset of H, and let T: C → H be a mapping. Then the following hold:

(i) T is a nonexpansive mapping if and only if I − T is 1/2-inverse-strongly monotone (1/2-ism).

(ii) If S is ν-ism, then γS is (ν/γ)-ism.

(iii) S is averaged if and only if I − S is ν-ism for some ν > 1/2. Indeed, S is α-averaged if and only if I − S is 1/(2α)-ism for α ∈ (0,1).

(iv) If S and T are averaged, then the composition ST is also averaged.

(v) If the mappings {T_i}_{i=1}^n are averaged and have a common fixed point, then ⋂_{i=1}^n Fix(T_i) = Fix(T_1 ⋯ T_n) for each n ∈ ℕ.

Lemma 2.9 [30]

Let T be a nonexpansive self-mapping on a nonempty closed convex subset C of H, and let {x_n} be a sequence in C. If x_n ⇀ w and lim_{n→∞} ‖x_n − Tx_n‖ = 0, then Tw = w.

Let C be a nonempty closed convex subset of H. The indicator function ι_C defined by

ι_C x = 0 if x ∈ C, and ι_C x = ∞ if x ∉ C,

is a proper lower semicontinuous convex function, and its subdifferential ∂ι_C defined by

∂ι_C x = {z ∈ H: ⟨y − x, z⟩ ≤ ι_C(y) − ι_C(x), ∀y ∈ H}

is a maximal monotone operator [31]. Furthermore, we define the normal cone N_C u of C at u as follows:

N_C u = {z ∈ H: ⟨z, v − u⟩ ≤ 0, ∀v ∈ C}.

We can define the resolvent J_λ^{∂ι_C} of ∂ι_C for λ > 0, i.e.,

J_λ^{∂ι_C} x = (I + λ∂ι_C)^{−1} x

for all x ∈ H. Since

∂ι_C x = {z ∈ H: ι_C x + ⟨z, y − x⟩ ≤ ι_C y, ∀y ∈ H} = {z ∈ H: ⟨z, y − x⟩ ≤ 0, ∀y ∈ C} = N_C x

for all x ∈ C, we have that

u = J_λ^{∂ι_C} x  ⇔  x ∈ u + λ∂ι_C u  ⇔  x − u ∈ λN_C u  ⇔  ⟨x − u, y − u⟩ ≤ 0, ∀y ∈ C  ⇔  u = P_C x.
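The chain of equivalences above says that the resolvent of ∂ι_C is the metric projection P_C, independently of λ. A small numeric check in Python (the box C = [−1,1]³ and the test point x are illustrative assumptions) verifies the variational characterization ⟨x − P_C x, y − P_C x⟩ ≤ 0 for sampled y ∈ C:

```python
import numpy as np

# For the box C = [-1, 1]^3 the projection P_C is coordinatewise
# clipping, and J_λ^{∂ι_C} x = P_C x for every λ > 0.
rng = np.random.default_rng(0)
x = np.array([2.0, -3.0, 0.5])
u = np.clip(x, -1.0, 1.0)            # u = P_C x = J_λ^{∂ι_C} x

# Variational characterization: <x - u, y - u> <= 0 for all y in C.
max_inner = max((x - u) @ (rng.uniform(-1.0, 1.0, 3) - u)
                for _ in range(200))
```

Only the coordinates that violate the box are moved, exactly as the normal-cone inclusion x − u ∈ λN_C u requires.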

3 Main results

Let C, Q, and Q′ be nonempty closed convex subsets of H_1, H_2, and H_3, respectively. For each i = 1, 2 and κ_i > 0, let B_i be a κ_i-inverse-strongly monotone mapping of C into H_1, and let G_i be a set-valued maximal monotone mapping on H_1 such that the domain of G_i is included in C. Let F_1 be a firmly nonexpansive mapping of H_2 into H_2, and let F_2 be a firmly nonexpansive mapping of H_3 into H_3. Denote J_λ^{G_1} = (I + λG_1)^{−1} and J_r^{G_2} = (I + rG_2)^{−1} for each λ > 0 and r > 0. Let A_1: H_1 → H_2 and A_2: H_1 → H_3 be bounded linear operators, and let A_i^* be the adjoint of A_i for i = 1, 2. Throughout this paper, we use these notations unless specified otherwise.

Theorem 3.1 Suppose that (B_1 + G_1)^{−1}(0) ∩ (B_2 + G_2)^{−1}(0) is nonempty, and {a_n}, {b_n}, {c_n}, {f_n}, {g_n}, {h_n} are sequences in [0,1] such that a_n + b_n + c_n = 1, f_n + g_n + h_n = 1, 0 < a_n < 1, and 0 < f_n < 1 for each n ∈ ℕ. For an arbitrarily fixed u ∈ H_1, define a sequence {v_n} by

v_0 ∈ H_1 is chosen arbitrarily,
v_{2n+1} := a_n u + b_n v_{2n} + c_n J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n}, n ∈ ℕ ∪ {0},
v_{2n} := f_n u + g_n v_{2n−1} + h_n J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1}, n ∈ ℕ.

Then lim_{n→∞} v_n = P_{(B_1+G_1)^{−1}(0) ∩ (B_2+G_2)^{−1}(0)} u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} f_n = 0;

(ii) either ∑_{n=1}^∞ a_n = ∞ or ∑_{n=1}^∞ f_n = ∞;

(iii) 0 < a ≤ δ_n ≤ b < 2κ_1 and 0 < f ≤ γ_n ≤ g < 2κ_2 for each n ∈ ℕ and for some a, b, f, g ∈ ℝ⁺;

(iv) liminf_{n→∞} c_n > 0 and liminf_{n→∞} h_n > 0.

Proof Fix v̄ := P_{(B_1+G_1)^{−1}(0) ∩ (B_2+G_2)^{−1}(0)} u. Then v̄ = J_{δ_n}^{G_1}(I − δ_n B_1)v̄ and v̄ = J_{γ_n}^{G_2}(I − γ_n B_2)v̄. Since B_1 is κ_1-ism and κ_1/δ_n > 1/2, it follows from Lemma 2.8 that I − δ_n B_1 is averaged; hence I − δ_n B_1 is nonexpansive. By Lemma 2.8, J_{δ_n}^{G_1}(I − δ_n B_1) and J_{γ_n}^{G_2}(I − γ_n B_2) are averaged, and hence nonexpansive. By condition (iv), we may assume that there exist positive real numbers c and h such that c_n ≥ c > 0 and h_n ≥ h > 0 for each n ∈ ℕ. For each n ∈ ℕ, we have that

‖v_{2n} − v̄‖ = ‖f_n u + g_n v_{2n−1} + h_n J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v̄‖
≤ f_n‖u − v̄‖ + g_n‖v_{2n−1} − v̄‖ + h_n‖J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v̄‖
≤ f_n‖u − v̄‖ + (g_n + h_n)‖v_{2n−1} − v̄‖
= f_n‖u − v̄‖ + (1 − f_n)‖v_{2n−1} − v̄‖
(3.1)

and

‖v_{2n+1} − v̄‖ = ‖a_n u + b_n v_{2n} + c_n J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v̄‖
≤ a_n‖u − v̄‖ + b_n‖v_{2n} − v̄‖ + c_n‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v̄‖
≤ a_n‖u − v̄‖ + (b_n + c_n)‖v_{2n} − v̄‖
≤ a_n‖u − v̄‖ + (1 − a_n){f_n‖u − v̄‖ + (1 − f_n)‖v_{2n−1} − v̄‖}   (by (3.1))
= (1 − (1 − a_n)(1 − f_n))‖u − v̄‖ + (1 − a_n)(1 − f_n)‖v_{2n−1} − v̄‖
≤ max{‖u − v̄‖, ‖v_{2n−1} − v̄‖} ≤ ⋯ ≤ max{‖u − v̄‖, ‖v_1 − v̄‖}.

By mathematical induction, {v_{2n−1}}, {v_{2n}}, and hence {v_n} are bounded sequences. By Lemma 2.4, we have that

‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v̄‖² = ‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − J_{δ_n}^{G_1}(I − δ_n B_1)v̄‖²
≤ ‖(I − δ_n B_1)v_{2n} − (I − δ_n B_1)v̄‖² − ‖(I − J_{δ_n}^{G_1}(I − δ_n B_1))v_{2n} − (I − J_{δ_n}^{G_1}(I − δ_n B_1))v̄‖²
≤ ‖v_{2n} − v̄‖² − ‖(I − J_{δ_n}^{G_1}(I − δ_n B_1))v_{2n} − (I − J_{δ_n}^{G_1}(I − δ_n B_1))v̄‖²
= ‖v_{2n} − v̄‖² − ‖v_{2n} − J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n}‖²

for each n ∈ ℕ. Hence,

‖v_{2n+1} − v̄‖² = ‖a_n(u − v̄) + b_n(v_{2n} − v̄) + c_n(J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v̄)‖²
≤ ‖b_n(v_{2n} − v̄) + c_n(J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v̄)‖² + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩
= (1 − a_n)²‖b_n′(v_{2n} − v̄) + c_n′(J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v̄)‖² + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩
= (1 − a_n)²{b_n′‖v_{2n} − v̄‖² + c_n′‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v̄‖² − b_n′c_n′‖v_{2n} − J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n}‖²} + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩
≤ b_n‖v_{2n} − v̄‖² + c_n‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v̄‖² + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩
≤ b_n‖v_{2n} − v̄‖² + c_n(‖v_{2n} − v̄‖² − ‖v_{2n} − J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n}‖²) + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩
= (b_n + c_n)‖v_{2n} − v̄‖² + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩ − c_n‖v_{2n} − J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n}‖²,
(3.2)

where

b_n′ := b_n/(b_n + c_n),  c_n′ := c_n/(b_n + c_n).

Similarly, we have

‖v_{2n} − v̄‖² ≤ (g_n + h_n)‖v_{2n−1} − v̄‖² + 2f_n⟨u − v̄, v_{2n} − v̄⟩ − h_n‖J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v_{2n−1}‖².
(3.3)

Consequently, it follows from (3.2) and (3.3) that

‖v_{2n+1} − v̄‖² ≤ (b_n + c_n)‖v_{2n} − v̄‖² + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩ − c_n‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v_{2n}‖²
≤ (1 − a_n){(1 − f_n)‖v_{2n−1} − v̄‖² + 2f_n⟨u − v̄, v_{2n} − v̄⟩ − h_n‖J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v_{2n−1}‖²} + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩ − c_n‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v_{2n}‖²
= (1 − a_n)(1 − f_n)‖v_{2n−1} − v̄‖² + 2f_n(1 − a_n)⟨u − v̄, v_{2n} − v̄⟩ − h_n(1 − a_n)‖J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v_{2n−1}‖² + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩ − c_n‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v_{2n}‖².
(3.4)

For each n ∈ ℕ, set S_n := ‖v_{2n−1} − v̄‖². Then S_{n+1} = ‖v_{2n+1} − v̄‖², and (3.4) becomes

S_{n+1} ≤ (1 − a_n)(1 − f_n)S_n + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩ + 2f_n(1 − a_n)⟨u − v̄, v_{2n} − v̄⟩ − c_n‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v_{2n}‖² − h_n(1 − a_n)‖J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v_{2n−1}‖²
≤ (1 − a_n)(1 − f_n)S_n + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩ + 2f_n(1 − a_n)⟨u − v̄, v_{2n} − v̄⟩.
(3.5)

Case 1: {S_n} is eventually decreasing, i.e., there exists a natural number N such that ‖v_{2n+1} − v̄‖ ≤ ‖v_{2n−1} − v̄‖ for each n ≥ N. Then {S_n} is convergent and lim_{n→∞} ‖v_{2n−1} − v̄‖ exists. Since c_n ≥ c and h_n ≥ h for all n ∈ ℕ, we obtain from (3.5) that

0 ≤ c‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v_{2n}‖² + h(1 − a_n)‖J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v_{2n−1}‖²
≤ (1 − a_n)(1 − f_n)S_n − S_{n+1} + 2a_n⟨u − v̄, v_{2n+1} − v̄⟩ + 2f_n(1 − a_n)⟨u − v̄, v_{2n} − v̄⟩.
(3.6)

Noting, via condition (i) and the boundedness of {v_n}, that

lim_{n→∞} [(1 − a_n)(1 − f_n)S_n − S_{n+1}] = 0,
lim_{n→∞} 2a_n⟨u − v̄, v_{2n+1} − v̄⟩ = lim_{n→∞} 2f_n(1 − a_n)⟨u − v̄, v_{2n} − v̄⟩ = 0,

we conclude from (3.6) that

lim_{n→∞} (c‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v_{2n}‖² + h(1 − a_n)‖J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v_{2n−1}‖²) = 0.

Therefore,

lim_{n→∞} ‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v_{2n}‖ = lim_{n→∞} ‖J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v_{2n−1}‖ = 0.
(3.7)

Since {v_{2n}} is a bounded sequence in H_1, there is a subsequence {v_{(2n)_k}} of {v_{2n}} such that v_{(2n)_k} ⇀ x̄ ∈ H_1 and

limsup_{n→∞} ⟨u − v̄, v_{2n} − v̄⟩ = lim_{k→∞} ⟨u − v̄, v_{(2n)_k} − v̄⟩ = ⟨u − v̄, x̄ − v̄⟩.

On the other hand, since 0 < a ≤ δ_n ≤ b < 2κ_1, there exists a subsequence {δ_{n_{k_j}}} of {δ_n} such that {δ_{n_{k_j}}} converges to a number δ̄ ∈ [a, b]. By Lemma 2.5, we have

‖v_{(2n)_{k_j}} − J_{δ̄}^{G_1}(I − δ̄B_1)v_{(2n)_{k_j}}‖ ≤ ‖v_{(2n)_{k_j}} − J_{δ_{n_{k_j}}}^{G_1}(I − δ_{n_{k_j}}B_1)v_{(2n)_{k_j}}‖ + ‖J_{δ_{n_{k_j}}}^{G_1}(I − δ_{n_{k_j}}B_1)v_{(2n)_{k_j}} − J_{δ_{n_{k_j}}}^{G_1}(I − δ̄B_1)v_{(2n)_{k_j}}‖ + ‖J_{δ_{n_{k_j}}}^{G_1}(I − δ̄B_1)v_{(2n)_{k_j}} − J_{δ̄}^{G_1}(I − δ̄B_1)v_{(2n)_{k_j}}‖
≤ ‖v_{(2n)_{k_j}} − J_{δ_{n_{k_j}}}^{G_1}(I − δ_{n_{k_j}}B_1)v_{(2n)_{k_j}}‖ + |δ_{n_{k_j}} − δ̄|·‖B_1 v_{(2n)_{k_j}}‖ + (|δ_{n_{k_j}} − δ̄|/δ̄)·‖J_{δ̄}^{G_1}(I − δ̄B_1)v_{(2n)_{k_j}} − (I − δ̄B_1)v_{(2n)_{k_j}}‖ → 0.
(3.8)

By (3.8) and Lemma 2.9, x̄ ∈ Fix(J_{δ̄}^{G_1}(I − δ̄B_1)) = (B_1 + G_1)^{−1}(0). Since 0 < f ≤ γ_n ≤ g < 2κ_2, there exists a subsequence {γ_{n_{k_j}}} of {γ_n} such that {γ_{n_{k_j}}} converges to a number γ̄ ∈ [f, g]. We have that

‖v_{2n+1} − v_{2n}‖ = ‖a_n u + b_n v_{2n} + c_n J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v_{2n}‖ ≤ a_n‖u − v_{2n}‖ + c_n‖J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n} − v_{2n}‖
(3.9)

and

‖v_{2n} − v_{2n−1}‖ = ‖f_n u + g_n v_{2n−1} + h_n J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v_{2n−1}‖ ≤ f_n‖u − v_{2n−1}‖ + h_n‖J_{γ_n}^{G_2}(I − γ_n B_2)v_{2n−1} − v_{2n−1}‖.
(3.10)

Since {v_n} is bounded, we conclude from (3.7), (3.9), (3.10), and condition (i) that

lim_{n→∞} ‖v_{n+1} − v_n‖ = 0.
(3.11)

By Lemma 2.5, we have

‖v_{(2n)_{k_j}+1} − J_{γ̄}^{G_2}(I − γ̄B_2)v_{(2n)_{k_j}+1}‖ ≤ ‖v_{(2n)_{k_j}+1} − J_{γ_{n_{k_j}}}^{G_2}(I − γ_{n_{k_j}}B_2)v_{(2n)_{k_j}+1}‖ + ‖J_{γ_{n_{k_j}}}^{G_2}(I − γ_{n_{k_j}}B_2)v_{(2n)_{k_j}+1} − J_{γ_{n_{k_j}}}^{G_2}(I − γ̄B_2)v_{(2n)_{k_j}+1}‖ + ‖J_{γ_{n_{k_j}}}^{G_2}(I − γ̄B_2)v_{(2n)_{k_j}+1} − J_{γ̄}^{G_2}(I − γ̄B_2)v_{(2n)_{k_j}+1}‖
≤ ‖v_{(2n)_{k_j}+1} − J_{γ_{n_{k_j}}}^{G_2}(I − γ_{n_{k_j}}B_2)v_{(2n)_{k_j}+1}‖ + |γ_{n_{k_j}} − γ̄|·‖B_2 v_{(2n)_{k_j}+1}‖ + (|γ_{n_{k_j}} − γ̄|/γ̄)·‖J_{γ̄}^{G_2}(I − γ̄B_2)v_{(2n)_{k_j}+1} − (I − γ̄B_2)v_{(2n)_{k_j}+1}‖ → 0.
(3.12)

Since lim_{n→∞} ‖v_{2n+1} − v_{2n}‖ = 0, we know that v_{(2n)_{k_j}+1} ⇀ x̄. By (3.12) and Lemma 2.9, we know that J_{γ̄}^{G_2}(I − γ̄B_2)x̄ = x̄. So, x̄ ∈ (B_2 + G_2)^{−1}(0). This shows that x̄ ∈ (B_1 + G_1)^{−1}(0) ∩ (B_2 + G_2)^{−1}(0). It follows from v̄ := P_{(B_1+G_1)^{−1}(0) ∩ (B_2+G_2)^{−1}(0)} u and Lemma 2.3 that

limsup_{n→∞} ⟨u − v̄, v_{2n} − v̄⟩ = ⟨u − v̄, x̄ − v̄⟩ ≤ 0.
(3.13)

By (3.11) and (3.13),

limsup_{n→∞} ⟨u − v̄, v_{2n+1} − v̄⟩ = limsup_{n→∞} (⟨u − v̄, v_{2n+1} − v_{2n}⟩ + ⟨u − v̄, v_{2n} − v̄⟩) ≤ limsup_{n→∞} ⟨u − v̄, v_{2n+1} − v_{2n}⟩ + limsup_{n→∞} ⟨u − v̄, v_{2n} − v̄⟩ ≤ 0.
(3.14)

Applying Lemma 2.7 to inequality (3.5) with t_n = 2⟨u − v̄, v_{2n+1} − v̄⟩ and k_n = 2(1 − a_n)⟨u − v̄, v_{2n} − v̄⟩, we obtain from (3.13), (3.14), and conditions (i), (ii) that lim_{n→∞} S_n = 0. That is, lim_{n→∞} v_{2n−1} = v̄. It then follows from (3.11) that lim_{n→∞} v_{2n} = v̄. Thus, lim_{n→∞} v_n = v̄.

Case 2: Suppose that {S_n} is not eventually decreasing. Then there is a subsequence {S_{n_i}} of {S_n} such that S_{n_i} < S_{n_i+1} for all i ≥ 0. Consider the sequence of integers {τ(n)}_{n≥n_0} defined by τ(n) = max{k ≤ n: S_k < S_{k+1}} for some sufficiently large n_0 ∈ ℕ. By Lemma 2.6, {τ(n)}_{n≥n_0} is a nondecreasing sequence with lim_{n→∞} τ(n) = ∞, and for all n ≥ n_0 one has

S_{τ(n)} ≤ S_{τ(n)+1} and S_n ≤ S_{τ(n)+1}.
(3.15)

That is, max{S_{τ(n)}, S_n} ≤ S_{τ(n)+1}. For such n ≥ n_0, it follows from (3.15) that

‖v_{2τ(n)−1} − v̄‖ ≤ ‖v_{2τ(n)+1} − v̄‖
(3.16)

and

‖v_{2n−1} − v̄‖ ≤ ‖v_{2τ(n)+1} − v̄‖.
(3.17)

From (3.15) and (3.5), we obtain

S_{τ(n)} ≤ S_{τ(n)+1} ≤ (1 − a_{τ(n)})(1 − f_{τ(n)})S_{τ(n)} + 2a_{τ(n)}⟨u − v̄, v_{2τ(n)+1} − v̄⟩ + 2f_{τ(n)}(1 − a_{τ(n)})⟨u − v̄, v_{2τ(n)} − v̄⟩ − c_{τ(n)}‖J_{δ_{τ(n)}}^{G_1}(I − δ_{τ(n)}B_1)v_{2τ(n)} − v_{2τ(n)}‖² − h_{τ(n)}(1 − a_{τ(n)})‖J_{γ_{τ(n)}}^{G_2}(I − γ_{τ(n)}B_2)v_{2τ(n)−1} − v_{2τ(n)−1}‖²
≤ (1 − a_{τ(n)})(1 − f_{τ(n)})S_{τ(n)} + 2a_{τ(n)}⟨u − v̄, v_{2τ(n)+1} − v̄⟩ + 2f_{τ(n)}(1 − a_{τ(n)})⟨u − v̄, v_{2τ(n)} − v̄⟩.
(3.18)

Just as in the argument of Case 1, we have

lim_{n→∞} ‖v_{2τ(n)} − J_{δ_{τ(n)}}^{G_1}(I − δ_{τ(n)}B_1)v_{2τ(n)}‖ = 0,
lim_{n→∞} ‖v_{2τ(n)−1} − J_{γ_{τ(n)}}^{G_2}(I − γ_{τ(n)}B_2)v_{2τ(n)−1}‖ = 0,
limsup_{n→∞} ⟨u − v̄, v_{2τ(n)+1} − v̄⟩ ≤ 0,  limsup_{n→∞} ⟨u − v̄, v_{2τ(n)} − v̄⟩ ≤ 0,
lim_{n→∞} ‖v_{2τ(n)+1} − v_{2τ(n)}‖ = 0.
(3.19)

By (3.15) and (3.18), we have

S_{τ(n)+1} ≤ (1 − a_{τ(n)})(1 − f_{τ(n)})S_{τ(n)+1} + 2a_{τ(n)}⟨u − v̄, v_{2τ(n)+1} − v̄⟩ + 2f_{τ(n)}(1 − a_{τ(n)})⟨u − v̄, v_{2τ(n)} − v̄⟩

for all n ≥ n_0. This implies that

S_{τ(n)+1} ≤ (1 − a_{τ(n)})(1 − f_{τ(n)})S_{τ(n)+1} + a_{τ(n)}K‖v_{2τ(n)+1} − v_{2τ(n)}‖ + 2[f_{τ(n)}(1 − a_{τ(n)}) + a_{τ(n)}]⟨u − v̄, v_{2τ(n)} − v̄⟩

for all n ≥ n_0, where K = 2‖u − v̄‖. Furthermore, we have

S_{τ(n)+1} ≤ (a_{τ(n)}K‖v_{2τ(n)+1} − v_{2τ(n)}‖)/(a_{τ(n)} + f_{τ(n)}(1 − a_{τ(n)})) + 2⟨u − v̄, v_{2τ(n)} − v̄⟩ ≤ K‖v_{2τ(n)+1} − v_{2τ(n)}‖ + 2⟨u − v̄, v_{2τ(n)} − v̄⟩.
(3.20)

Hence, it follows from (3.19) and (3.20) that

lim_{n→∞} S_{τ(n)+1} = 0.
(3.21)

By (3.15) and (3.21), we know that lim_{n→∞} S_n = 0. Then, just as in the argument of Case 1, we obtain lim_{n→∞} v_n = v̄. Therefore, the proof is completed. □

Remark 3.1

(i) If we put B_1 = B_2 = 0 in Theorem 3.1, then Theorem 3.1 reduces to Theorem 7 in [20].

(ii) Boikanyo and Moroşanu [20] showed that a strong convergence theorem for a proximal point algorithm with errors can be obtained from the strong convergence of the corresponding algorithm without errors. Therefore, in Theorem 3.1, we study strong convergence for variational inclusion problems without errors.

As a simple consequence of Theorem 3.1, we have the following theorem.

Theorem 3.2 Suppose that (B_1 + G_1)^{−1}(0) is nonempty, and {a_n}, {b_n}, {c_n} are sequences in [0,1] such that a_n + b_n + c_n = 1 and 0 < a_n < 1 for each n ∈ ℕ. For an arbitrarily fixed u ∈ H_1, define a sequence {x_n} by

x_0 ∈ H_1 is chosen arbitrarily,
x_{n+1} := a_n u + b_n x_n + c_n J_{δ_n}^{G_1}(I − δ_n B_1)x_n, n ∈ ℕ ∪ {0}.

Then lim_{n→∞} x_n = P_{(B_1+G_1)^{−1}(0)} u if the following conditions are satisfied:

(i) lim_{n→∞} a_n = 0 and ∑_{n=1}^∞ a_n = ∞;

(ii) 0 < a ≤ δ_n ≤ b < 2κ_1 for each n ∈ ℕ and for some a, b ∈ ℝ⁺;

(iii) liminf_{n→∞} c_n > 0.

Proof Set f_n = 0 and B_2 = 0, let {g_n} and {h_n} be sequences in [0,1] with g_n + h_n = 1, and let G_2 = ∂ι_{H_1}. Define a sequence {v_n} by

v_{2n+1} = a_n u + b_n x_n + c_n J_{δ_n}^{G_1}(I − δ_n B_1)x_n, n ∈ ℕ ∪ {0},

and

v_{2n} = v_{2n−1} = x_n.

Then

v_{2n+1} = a_n u + b_n v_{2n} + c_n J_{δ_n}^{G_1}(I − δ_n B_1)v_{2n}, n ∈ ℕ ∪ {0},

and

v_{2n} = f_n u + g_n v_{2n−1} + h_n v_{2n−1} = f_n u + g_n v_{2n−1} + h_n J_{γ_n}^{∂ι_{H_1}} v_{2n−1}, n ∈ ℕ.

Since

G_2^{−1}(0) = (∂ι_{H_1})^{−1}(0) = Fix(J_{γ_n}^{∂ι_{H_1}}) = Fix(P_{H_1}) = H_1,

it is easy to see that

(B_1 + G_1)^{−1}(0) = (B_1 + G_1)^{−1}(0) ∩ H_1 = (B_1 + G_1)^{−1}(0) ∩ (B_2 + G_2)^{−1}(0).

Then Theorem 3.2 follows from Theorem 3.1. □

Remark 3.2 Following the same argument as in Remark 12 in [20], we see that Theorems 3.1 and 3.2 contain [23, Theorem 3.3], [24, Theorem 1], [32, Theorems 1–4] and many other recent results as special cases.

4 Applications

Now, we recall the following multiple sets split feasibility problem (MSSFP-A1):

Find x̄ ∈ H_1 such that x̄ ∈ G_1^{−1}(0) ∩ G_2^{−1}(0), A_1x̄ ∈ Fix(F_1), and A_2x̄ ∈ Fix(F_2).

Theorem 4.1 [33]

Given any x̄ ∈ H_1, let {ρ_n} and {σ_n} be sequences in (0, ∞).

(i) If x̄ is a solution of (MSSFP-A1), then J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1) J_{σ_n}^{G_2}(I − σ_n A_2^*(I − F_2)A_2) x̄ = x̄ for each n ∈ ℕ.

(ii) Suppose that J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1) J_{σ_n}^{G_2}(I − σ_n A_2^*(I − F_2)A_2) x̄ = x̄ with 0 < ρ_n < 2/(‖A_1‖² + 2) and 0 < σ_n < 2/(‖A_2‖² + 2) for each n ∈ ℕ, and that the solution set of (MSSFP-A1) is nonempty. Then x̄ is a solution of (MSSFP-A1).

In order to study convergence theorems for the solution set of the multiple sets split feasibility problem (MSSFP-A1), we need the following problem and the following essential tool, which is a special case of Theorem 3.2 in [33]:

(SFP-1)  Find x̄ ∈ H_1 such that x̄ ∈ Fix(J_{ρ_n}^{G_1}) and A_1x̄ ∈ Fix(F_1).

Lemma 4.1 Given any x̄ ∈ H_1.

(i) If x̄ is a solution of (SFP-1), then J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1)x̄ = x̄ for each n ∈ ℕ.

(ii) Suppose that J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1)x̄ = x̄ with 0 < ρ_n < 2/(‖A_1‖² + 2) for each n ∈ ℕ, and that the solution set of (SFP-1) is nonempty. Then I − ρ_n A_1^*(I − F_1)A_1 and J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1) are averaged, and x̄ is a solution of (SFP-1).

Proof (i) Suppose that x̄ ∈ H_1 is a solution of (SFP-1). Then x̄ ∈ Fix(J_{ρ_n}^{G_1}) and A_1x̄ ∈ Fix(F_1) for each n ∈ ℕ. It is easy to see that

J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1)x̄ = x̄, n ∈ ℕ.

(ii) Since the solution set of (SFP-1) is nonempty, there exists w̄ ∈ H_1 such that w̄ ∈ Fix(J_{ρ_n}^{G_1}) and A_1w̄ ∈ Fix(F_1). Then w̄ ∈ G_1^{−1}(0). If we put G_2 = G_1 and F_2 = F_1, we get that the solution set of (MSSFP-A1) is nonempty. By Lemma 2.1, we have that

A_1^*(I − F_1)A_1 is (1/‖A_1‖²)-ism.
(4.1)

By (4.1), 0 < ρ_n < 2/(‖A_1‖² + 2), and Lemma 2.8(ii), (iii), we know that

I − ρ_n A_1^*(I − F_1)A_1 is averaged for each n ∈ ℕ.
(4.2)

On the other hand, for each n ∈ ℕ, J_{ρ_n}^{G_1} is a firmly nonexpansive mapping, so it is easy to see that

J_{ρ_n}^{G_1} is (1/2)-averaged.
(4.3)

Hence, by (4.2), (4.3), and Lemma 2.8(iv) and (v), we see that

J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1) is averaged.

Since

J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1)x̄ = x̄,

it follows that

J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1)J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1)x̄ = x̄.

Then Lemma 4.1 follows from Theorem 4.1 by taking G_1 = G_2, F_1 = F_2, and ρ_n = σ_n. □

Remark 4.1 The following result shows that Lemma 4.1 is more useful than Theorem 4.1.

Theorem 4.2 Suppose that the solution set Ω_{A1} of (MSSFP-A1) is nonempty, and {a_n}, {b_n}, {c_n}, {f_n}, {g_n}, {h_n} are sequences in [0,1] such that a_n + b_n + c_n = 1, f_n + g_n + h_n = 1, 0 < a_n < 1, and 0 < f_n < 1 for each n ∈ ℕ. For an arbitrarily fixed u ∈ H_1, a sequence {v_n} is defined by

v_0 ∈ H_1 is chosen arbitrarily,
v_{2n+1} := a_n u + b_n v_{2n} + c_n J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1)v_{2n}, n ∈ ℕ ∪ {0},
v_{2n} := f_n u + g_n v_{2n−1} + h_n J_{σ_n}^{G_2}(I − σ_n A_2^*(I − F_2)A_2)v_{2n−1}, n ∈ ℕ.

Then lim_{n→∞} v_n is the unique solution of the following optimization problem:

min_{q ∈ Ω_{A1}} ‖q − u‖,

provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} f_n = 0;

(ii) either ∑_{n=1}^∞ a_n = ∞ or ∑_{n=1}^∞ f_n = ∞;

(iii) 0 < a ≤ ρ_n ≤ b < 2/(‖A_1‖² + 2) and 0 < c ≤ σ_n ≤ d < 2/(‖A_2‖² + 2) for each n ∈ ℕ and for some a, b, c, d ∈ ℝ⁺;

(iv) liminf_{n→∞} c_n > 0 and liminf_{n→∞} h_n > 0.

Proof Since F_i is firmly nonexpansive, it follows from Lemma 2.1 that A_i^*(I − F_i)A_i is (1/‖A_i‖²)-ism for each i = 1, 2. For each i = 1, 2, put B_i = A_i^*(I − F_i)A_i. Then the algorithm in Theorem 4.2 becomes the algorithm in Theorem 3.1. Since Ω_{A1} is nonempty, take w̄ ∈ Ω_{A1}; by Lemma 4.1, we have that

w̄ ∈ Fix(J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1)) ∩ Fix(J_{σ_n}^{G_2}(I − σ_n A_2^*(I − F_2)A_2))
(4.4)

for each n ∈ ℕ. This implies that

w̄ ∈ Fix(J_{ρ_n}^{G_1}(I − ρ_n B_1)) ∩ Fix(J_{σ_n}^{G_2}(I − σ_n B_2))
(4.5)

for each n ∈ ℕ. Hence,

w̄ ∈ (B_1 + G_1)^{−1}(0) ∩ (B_2 + G_2)^{−1}(0).

By Theorem 3.1, lim_{n→∞} v_n = x̄, where x̄ = P_{(B_1+G_1)^{−1}(0) ∩ (B_2+G_2)^{−1}(0)} u. That is,

x̄ ∈ (B_1 + G_1)^{−1}(0) ∩ (B_2 + G_2)^{−1}(0)

and

‖x̄ − u‖ ≤ ‖q − u‖

for all q ∈ (B_1 + G_1)^{−1}(0) ∩ (B_2 + G_2)^{−1}(0). Since

x̄ ∈ (B_1 + G_1)^{−1}(0) ∩ (B_2 + G_2)^{−1}(0),

we know that

x̄ ∈ Fix(J_{ρ_n}^{G_1}(I − ρ_n B_1)) ∩ Fix(J_{σ_n}^{G_2}(I − σ_n B_2)).

That is,

x̄ ∈ Fix(J_{ρ_n}^{G_1}(I − ρ_n A_1^*(I − F_1)A_1))

and

x̄ ∈ Fix(J_{σ_n}^{G_2}(I − σ_n A_2^*(I − F_2)A_2)).

By Lemma 4.1, we get that x̄ ∈ Ω_{A1}. Similarly, if q ∈ (B_1 + G_1)^{−1}(0) ∩ (B_2 + G_2)^{−1}(0), then q ∈ Ω_{A1}. Therefore x̄ = P_{Ω_{A1}} u. This shows that lim_{n→∞} v_n is the unique solution of the optimization problem

min_{q ∈ Ω_{A1}} ‖q − u‖.

Therefore, the proof is completed. □

In the following theorem, we study the multiple sets split feasibility problem (MSSMVIP-A2):

Find x̄ ∈ H_1 such that x̄ ∈ C, A_1x̄ ∈ Q, and A_2x̄ ∈ Q′.

Let Ω_{A2} denote the solution set of (MSSMVIP-A2). The following theorem is a special case of Theorem 4.2.

Theorem 4.3 Suppose that Ω_{A2} is nonempty, and that {a_n}, {b_n}, {c_n}, {f_n}, {g_n}, {h_n} are sequences in [0,1] with a_n + b_n + c_n = 1, f_n + g_n + h_n = 1, 0 < a_n < 1, and 0 < f_n < 1 for each n ∈ ℕ. For an arbitrarily fixed u ∈ H_1, a sequence {v_n} is defined by

v_0 ∈ H_1 is chosen arbitrarily,
v_{2n+1} := a_n u + b_n v_{2n} + c_n P_C(I − ρ_n A_1^*(I − P_Q)A_1)v_{2n}, n ∈ ℕ ∪ {0},
v_{2n} := f_n u + g_n v_{2n−1} + h_n P_C(I − σ_n A_2^*(I − P_{Q′})A_2)v_{2n−1}, n ∈ ℕ.

Then lim_{n→∞} v_n is the unique solution of the following optimization problem:

min_{q ∈ Ω_{A2}} ‖q − u‖,

provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} f_n = 0;

(ii) either ∑_{n=1}^∞ a_n = ∞ or ∑_{n=1}^∞ f_n = ∞;

(iii) 0 < a ≤ ρ_n ≤ b < 2/(‖A_1‖² + 2) and 0 < c ≤ σ_n ≤ d < 2/(‖A_2‖² + 2) for each n ∈ ℕ and for some a, b, c, d ∈ ℝ⁺;

(iv) liminf_{n→∞} c_n > 0 and liminf_{n→∞} h_n > 0.

Proof Put G_1 = G_2 = ∂ι_C, F_1 = P_Q, and F_2 = P_{Q′}. Then G_1, G_2 are set-valued maximal monotone mappings, and F_1, F_2 are firmly nonexpansive mappings. Since J_{ρ_n}^{∂ι_C} = P_C and J_{σ_n}^{∂ι_C} = P_C, we have Fix(F_1) = Fix(P_Q) = Q, Fix(F_2) = Fix(P_{Q′}) = Q′, and Fix(J_{ρ_n}^{∂ι_C}) = Fix(J_{σ_n}^{∂ι_C}) = Fix(P_C) = C. Then Theorem 4.3 follows from Theorem 4.2. □
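A small numeric run of the iteration in Theorem 4.3 on ℝ² (everything below is an illustrative assumption: A_1 = A_2 = I, C the right half-plane, Q the upper half-plane, Q′ = {x: x_0 + x_1 ≤ 4}, b_n = g_n = 0, a_n = f_n = 1/(n+2), and ρ_n = σ_n = 0.5, which satisfies ρ_n < 2/(‖A_1‖² + 2) = 2/3):

```python
import numpy as np

# Theorem 4.3 with A1 = A2 = I on R^2: each half-step is
#   v <- a_n u + (1 - a_n) P_C(v - step * (v - P_S v)),  S in {Q, Q'}.
def P_C(x):                              # C  = {x : x[0] >= 0}
    return np.array([max(x[0], 0.0), x[1]])

def P_Q(x):                              # Q  = {x : x[1] >= 0}
    return np.array([x[0], max(x[1], 0.0)])

def P_Qp(x):                             # Q' = {x : x[0] + x[1] <= 4}
    s = x[0] + x[1] - 4.0
    return x if s <= 0 else x - 0.5 * s * np.array([1.0, 1.0])

u = np.array([-2.0, -3.0])               # fixed anchor
rho = sigma = 0.5
v = np.array([5.0, 5.0])                 # v_0
for n in range(20000):
    a = 1.0 / (n + 2)
    v = a * u + (1 - a) * P_C(v - sigma * (v - P_Qp(v)))
    v = a * u + (1 - a) * P_C(v - rho * (v - P_Q(v)))
```

Here the solution set is {x_0 ≥ 0, x_1 ≥ 0, x_0 + x_1 ≤ 4}, and the point of it nearest to u is the origin, which is where the iterates settle.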

In the following theorem, we study the split feasibility problem (MSSMVIP-A3):

Find x̄ ∈ H_1 such that x̄ ∈ C ∩ Q′ and A_1x̄ ∈ Q, where Q′ is a nonempty closed convex subset of H_1.

Let Ω A 3 denote the solution set of problem (MSSMVIP-A3). The following is also a special case of Theorem 4.3.

Theorem 4.4 Suppose that Q′ is a nonempty closed convex subset of H_1, Ω_{A_3} is nonempty, and {a_n}, {b_n}, {c_n}, {f_n}, {g_n}, {h_n} are sequences in [0,1] with a_n + b_n + c_n = 1, f_n + g_n + h_n = 1, 0 < a_n < 1, and 0 < f_n < 1 for each n ∈ N. For an arbitrary fixed u ∈ H_1, a sequence {v_n} is defined by

v_0 ∈ H_1 chosen arbitrarily,
v_{2n+1} := a_n u + b_n v_{2n} + c_n P_C(I − ρ_n A_1^*(I − P_Q)A_1) v_{2n}, n ∈ N ∪ {0},
v_{2n} := f_n u + g_n v_{2n−1} + h_n P_C(I − σ_n(I − P_{Q′})) v_{2n−1}, n ∈ N.

Then lim_{n→∞} v_n is the unique solution of the following optimization problem:

min_{q ∈ Ω_{A_3}} ∥q − u∥,

provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} f_n = 0;

(ii) either ∑_{n=1}^∞ a_n = ∞ or ∑_{n=1}^∞ f_n = ∞;

(iii) 0 < a ≤ ρ_n ≤ b < 2/(∥A_1∥² + 2) and 0 < c < σ_n < d < 2/3 for each n ∈ N and for some a, b, c, d ∈ R_+;

(iv) lim inf_{n→∞} c_n > 0 and lim inf_{n→∞} h_n > 0.

Proof Put A_2 = I and H_1 = H_3 in Theorem 4.3. Then Theorem 4.4 follows from Theorem 4.3. □

In the following theorem, we study the following convex feasibility problem (MSSMVIP-A4):

Find x̄ ∈ H_1 such that x̄ ∈ C ∩ Q ∩ Q′, where Q and Q′ are nonempty closed subsets of H_1.

Let Ω_{A_4} denote the solution set of (MSSMVIP-A4). The following is a special case of Theorem 4.3.

Theorem 4.5 Suppose that Q and Q′ are nonempty closed convex subsets of H_1, Ω_{A_4} is nonempty, and {a_n}, {b_n}, {c_n}, {f_n}, {g_n}, {h_n} are sequences in [0,1] with a_n + b_n + c_n = 1, f_n + g_n + h_n = 1, 0 < a_n < 1, and 0 < f_n < 1 for each n ∈ N. For an arbitrary fixed u ∈ H_1, a sequence {v_n} is defined by

v_0 ∈ H_1 chosen arbitrarily,
v_{2n+1} := a_n u + b_n v_{2n} + c_n P_C(I − ρ_n(I − P_Q)) v_{2n}, n ∈ N ∪ {0},
v_{2n} := f_n u + g_n v_{2n−1} + h_n P_C(I − σ_n(I − P_{Q′})) v_{2n−1}, n ∈ N.

Then lim_{n→∞} v_n is the unique solution of the following optimization problem:

min_{q ∈ Ω_{A_4}} ∥q − u∥,

provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} f_n = 0;

(ii) either ∑_{n=1}^∞ a_n = ∞ or ∑_{n=1}^∞ f_n = ∞;

(iii) 0 < a < ρ_n < b < 2/3 and 0 < c < σ_n < d < 2/3 for each n ∈ N and for some a, b, c, d ∈ R_+;

(iv) lim inf_{n→∞} c_n > 0 and lim inf_{n→∞} h_n > 0.

Proof Put A_1 = A_2 = I and H_1 = H_2 = H_3 in Theorem 4.3. Then Theorem 4.5 follows from Theorem 4.3. □

In the following theorem, we study the following convex feasibility problem (MSSMVIP-A5):

Find x̄ ∈ H_1 such that x̄ ∈ Q ∩ Q′, where Q and Q′ are nonempty closed convex subsets of H_1.

Let Ω_{A_5} denote the solution set of (MSSMVIP-A5).

The following existence theorem for a convex feasibility problem follows immediately from Theorem 4.5.

Theorem 4.6 Suppose that Q and Q′ are nonempty closed convex subsets of H_1, Ω_{A_5} is nonempty, and {a_n}, {b_n}, {c_n}, {f_n}, {g_n}, {h_n} are sequences in [0,1] with a_n + b_n + c_n = 1, f_n + g_n + h_n = 1, 0 < a_n < 1, and 0 < f_n < 1 for each n ∈ N. For an arbitrary fixed u ∈ H_1, define a sequence {v_n} by

v_0 ∈ H_1 chosen arbitrarily,
v_{2n+1} := a_n u + b_n v_{2n} + c_n (I − ρ_n(I − P_Q)) v_{2n}, n ∈ N ∪ {0},
v_{2n} := f_n u + g_n v_{2n−1} + h_n (I − σ_n(I − P_{Q′})) v_{2n−1}, n ∈ N.

Then lim_{n→∞} v_n is the unique solution of the following optimization problem:

min_{q ∈ Ω_{A_5}} ∥q − u∥,

provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} f_n = 0;

(ii) either ∑_{n=1}^∞ a_n = ∞ or ∑_{n=1}^∞ f_n = ∞;

(iii) 0 < a < ρ_n < b < 2/3 and 0 < c < σ_n < d < 2/3 for each n ∈ N and for some a, b, c, d ∈ R_+;

(iv) lim inf_{n→∞} c_n > 0 and lim inf_{n→∞} h_n > 0.

Proof Put C = H_1; then P_C = P_{H_1} = I, the identity mapping on H_1. Then Theorem 4.6 follows from Theorem 4.5. □
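The scheme of Theorem 4.6 is easy to test on the real line. The sketch below uses assumed intervals Q = [0, 2] and Q′ = [1, 3] (so Q ∩ Q′ = [1, 2]); with anchor u = 10, the limit predicted by the theorem is the point of Q ∩ Q′ nearest to u, namely 2.

```python
import numpy as np

# Assumed 1-D instance of Theorem 4.6 (illustrative sets, not from the paper):
# Q = [0, 2], Q' = [1, 3], so Q ∩ Q' = [1, 2] and P_{Q ∩ Q'}(10) = 2.
proj_Q  = lambda x: np.clip(x, 0.0, 2.0)
proj_Qp = lambda x: np.clip(x, 1.0, 3.0)

rho = sigma = 0.6              # constant step sizes in (0, 2/3), condition (iii)
u, v = 10.0, -4.0              # anchor u and arbitrary v_0

for n in range(1, 20001):
    a = 1.0 / (n + 1)          # a_n = f_n -> 0 with divergent sum; b_n = c_n = (1 - a_n)/2
    v = a * u + (1 - a) / 2 * v + (1 - a) / 2 * (v - rho * (v - proj_Q(v)))
    v = a * u + (1 - a) / 2 * v + (1 - a) / 2 * (v - sigma * (v - proj_Qp(v)))

print(round(v, 2))             # → 2.0
```

Note the role of the anchor u: the iterates do not merely reach some feasible point, they single out the feasible point closest to u, which is how the minimum-norm-type solution of the optimization problem is obtained.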

In the following theorem, we study the following system of convexly constrained linear inverse problems (SCCLIP):

Find x̄ ∈ H_1 such that x̄ ∈ C, A_1 x̄ = b, and A_2 x̄ = b′, where b ∈ H_2 and b′ ∈ H_3.

Let Ω_{A_6} denote the solution set of (SCCLIP).

Theorem 4.7 Suppose that Ω_{A_6} is nonempty, b ∈ H_2, and b′ ∈ H_3. Let {a_n}, {b_n}, {c_n}, {f_n}, {g_n}, and {h_n} be sequences in [0,1] with a_n + b_n + c_n = 1, f_n + g_n + h_n = 1, 0 < a_n < 1, and 0 < f_n < 1 for each n ∈ N. For an arbitrary fixed u ∈ H_1, a sequence {v_n} is defined by

v_0 ∈ H_1 chosen arbitrarily,
v_{2n+1} := a_n u + b_n v_{2n} + c_n P_C(v_{2n} − ρ_n A_1^*(A_1 v_{2n} − b)), n ∈ N ∪ {0},
v_{2n} := f_n u + g_n v_{2n−1} + h_n P_C(v_{2n−1} − σ_n A_2^*(A_2 v_{2n−1} − b′)), n ∈ N.

Then lim_{n→∞} v_n is the unique solution of the following optimization problem:

min_{q ∈ Ω_{A_6}} ∥q − u∥,

provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} f_n = 0;

(ii) either ∑_{n=1}^∞ a_n = ∞ or ∑_{n=1}^∞ f_n = ∞;

(iii) 0 < a ≤ ρ_n ≤ b < 2/(∥A_1∥² + 2) and 0 < c ≤ σ_n ≤ d < 2/(∥A_2∥² + 2) for each n ∈ N and for some a, b, c, d ∈ R_+;

(iv) lim inf_{n→∞} c_n > 0 and lim inf_{n→∞} h_n > 0.

Proof Put Q = {b} and Q′ = {b′}. Then Theorem 4.7 follows from Theorem 4.2. □
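A minimal numerical sketch of Theorem 4.7, under assumed data: the two linear equations below have the unique common solution (1, 1), which lies in the assumed constraint set C = [0, 2]², so Ω_{A_6} = {(1, 1)} and the iterates should approach it for any anchor u.

```python
import numpy as np

# Assumed toy instance of (SCCLIP); the matrices, right-hand sides and the
# box C are illustrative, not from the paper.
A1, b  = np.array([[1.0, 1.0]]),  np.array([2.0])     # x1 + x2 = 2
A2, bp = np.array([[1.0, -1.0]]), np.array([0.0])     # x1 - x2 = 0
proj_C = lambda x: np.clip(x, 0.0, 2.0)               # C = [0, 2]^2

rho   = 1.0 / (np.linalg.norm(A1, 2) ** 2 + 2)        # condition (iii)
sigma = 1.0 / (np.linalg.norm(A2, 2) ** 2 + 2)

u = np.array([7.0, -3.0])                             # anchor, chosen arbitrarily
v = np.zeros(2)                                       # v_0

for n in range(1, 20001):
    a = 1.0 / (n + 1)                                 # a_n -> 0, sum a_n = infinity
    v = a * u + (1 - a) / 2 * v + (1 - a) / 2 * proj_C(v - rho * A1.T @ (A1 @ v - b))
    v = a * u + (1 - a) / 2 * v + (1 - a) / 2 * proj_C(v - sigma * A2.T @ (A2 @ v - bp))

print(np.round(v, 2))          # approaches the unique solution (1, 1)
```

Each inner step is a projected Landweber-type step for one of the two equations; the scheme alternates between them while the vanishing anchor term enforces strong convergence.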

In the following theorem, we study the following convexly constrained linear inverse problem (CCLIP):

Find x̄ ∈ H_1 such that x̄ ∈ C ∩ Q′ and A_1 x̄ = b, where b ∈ H_2 and Q′ is a nonempty closed convex subset of H_1.

Let Ω_{A_7} denote the solution set of (CCLIP).

Theorem 4.8 Suppose that Q′ is a nonempty closed convex subset of H_1, Ω_{A_7} is nonempty, b ∈ H_2, and {a_n}, {b_n}, {c_n}, {f_n}, {g_n}, {h_n} are sequences in [0,1] with a_n + b_n + c_n = 1, f_n + g_n + h_n = 1, 0 < a_n < 1, and 0 < f_n < 1 for each n ∈ N. For an arbitrary fixed u ∈ H_1, a sequence {v_n} is defined by

v_0 ∈ H_1 chosen arbitrarily,
v_{2n+1} := a_n u + b_n v_{2n} + c_n P_C(v_{2n} − ρ_n A_1^*(A_1 v_{2n} − b)), n ∈ N ∪ {0},
v_{2n} := f_n u + g_n v_{2n−1} + h_n P_C(I − σ_n(I − P_{Q′})) v_{2n−1}, n ∈ N.

Then lim_{n→∞} v_n is the unique solution of the following optimization problem:

min_{q ∈ Ω_{A_7}} ∥q − u∥,

provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} f_n = 0;

(ii) either ∑_{n=1}^∞ a_n = ∞ or ∑_{n=1}^∞ f_n = ∞;

(iii) 0 < a ≤ ρ_n ≤ b < 2/(∥A_1∥² + 2) and 0 < c < σ_n < d < 2/3 for each n ∈ N and for some a, b, c, d ∈ R_+;

(iv) lim inf_{n→∞} c_n > 0 and lim inf_{n→∞} h_n > 0.

Remark 4.2 The iteration in Theorem 4.8 is different from the Landweber iterative method [19]:

x_{n+1} := x_n + γ A^T (b − A x_n), n ∈ N.
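For comparison, here is a minimal sketch of the classical Landweber iteration itself, on an assumed 2×2 system; unlike the scheme of Theorem 4.8 it has no constraint set C and no anchor term, so it converges (weakly, in general) to a solution of A x = b rather than to a selected minimum-distance solution.

```python
import numpy as np

# Classical (unconstrained) Landweber iteration x_{n+1} = x_n + gamma A^T (b - A x_n)
# for A x = b [19]; the system below is an assumed toy example with exact
# solution x = (1, 2).
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])
b = np.array([2.0, 3.0])

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # 0 < gamma < 2 / ||A||^2 ensures convergence
x = np.zeros(2)
for _ in range(2000):
    x = x + gamma * A.T @ (b - A @ x)

print(np.round(x, 4))                     # → [1. 2.]
```

The iteration of Theorem 4.8 differs in two ways: each Landweber-type step is composed with the projection P_C, and the convex combination with the fixed anchor u (weights a_n, f_n) is what upgrades weak convergence to strong convergence toward P_{Ω_{A_7}} u.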

References

  1. Combettes PL: The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics, vol. 95. Edited by: Hawkes P. Academic Press, New York; 1996:155–270.
  2. Stark H: Image Recovery: Theory and Applications. Academic Press, New York; 1987.
  3. Bregman LM: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7: 200–217.
  4. Hundal H: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35–61. doi:10.1016/j.na.2003.11.004
  5. Boikanyo OA, Morosanu G: Strong convergence of the method of alternating resolvents. J. Nonlinear Convex Anal. 2013, 14: 221–229.
  6. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. doi:10.1007/BF02142692
  7. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426.
  8. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. doi:10.1088/0266-5611/18/2/310
  9. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. doi:10.1088/0266-5611/20/1/006
  10. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365.
  11. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. doi:10.1088/0266-5611/21/6/017
  12. López G, Martín-Márquez V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison; 2010:243–279.
  13. López G, Martín-Márquez V, Wang F, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004.
  14. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. doi:10.1088/0266-5611/21/5/009
  15. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.
  16. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.
  17. Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266. doi:10.1088/0266-5611/20/4/014
  18. Eicke B: Iteration methods for convexly constrained ill-posed problems in Hilbert space. Numer. Funct. Anal. Optim. 1992, 13: 413–429. doi:10.1080/01630569208816489
  19. Landweber L: An iteration formula for Fredholm integral equations of the first kind. Am. J. Math. 1951, 73: 615–624. doi:10.2307/2372313
  20. Boikanyo OA, Morosanu G: A contraction proximal point algorithm with two monotone operators. Nonlinear Anal. 2012, 75: 5686–5692. doi:10.1016/j.na.2012.05.016
  21. Kamimura S, Takahashi W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106: 226–240. doi:10.1006/jath.2000.3493
  22. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256.
  23. Yao Y, Noor MA: On convergence criteria of generalized proximal point algorithms. J. Comput. Appl. Math. 2008, 217: 46–55. doi:10.1016/j.cam.2007.06.013
  24. Wang F, Cui H: On the contraction-proximal point algorithms with multi-parameters. J. Glob. Optim. 2012, 54: 485–491. doi:10.1007/s10898-011-9772-4
  25. Yu ZT, Lin LJ, Chuang CS: A unified study of the split feasible problems with applications. J. Nonlinear Convex Anal. (to appear).
  26. Takahashi W: Nonlinear Functional Analysis - Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.
  27. Marino G, Xu HK: Convergence of generalized proximal point algorithms. Commun. Pure Appl. Anal. 2004, 3: 791–808.
  28. Maingé PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. doi:10.1007/s11228-008-0102-z
  29. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53: 475–504. doi:10.1080/02331930412331327157
  30. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
  31. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. doi:10.1137/0314056
  32. Boikanyo OA, Morosanu G: Inexact Halpern-type proximal point algorithms. J. Glob. Optim. 2011, 51: 11–26.
  33. Yu ZT, Lin LJ: Hierarchical problem with applications to mathematical programming with multiple sets split feasibility constraints. Fixed Point Theory Appl. 2013, 2013: Article ID 283. doi:10.1186/1687-1812-2013-283


Acknowledgements

Prof. C-S Chuang was supported by the National Science Council of the Republic of China while he worked on this paper, and Y-D Chen was supported by Southern Taiwan University of Science and Technology.

Author information

Corresponding author: Correspondence to Lai-Jiu Lin.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

L-JL designed and coordinated this research project and revised the paper. Y-DC carried out the project, drafted and revised the manuscript. C-SC coordinated the project and revised the paper.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Lin, LJ., Chen, YD. & Chuang, CS. Solutions for a variational inclusion problem with applications to multiple sets split feasibility problems. Fixed Point Theory Appl 2013, 333 (2013). https://doi.org/10.1186/1687-1812-2013-333