
Viscosity iteration methods for a split feasibility problem and a mixed equilibrium problem in a Hilbert space

Abstract

In this paper, we consider and analyze two viscosity iteration algorithms (one implicit and one explicit) for finding a common element of the solution set MEP(F_1, F_2) of a mixed equilibrium problem and the solution set Γ of a split feasibility problem in a real Hilbert space. Furthermore, we derive the strong convergence of the viscosity iteration algorithms to an element of MEP(F_1, F_2) ∩ Γ under mild assumptions.

1 Introduction

The split feasibility problem (SFP) in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise in phase retrieval and in medical image reconstruction [2]. In this paper we work in the framework of infinite-dimensional Hilbert spaces. In this setting, the SFP is formulated as finding a point x with the property

x ∈ C and Ax ∈ Q,
(1.1)

where C and Q are nonempty closed convex subsets of the infinite-dimensional real Hilbert spaces H_1 and H_2, respectively, and A : H_1 → H_2 is a bounded linear operator. For related works, please refer to [2–8].
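The SFP (1.1) can be illustrated with a small finite-dimensional sketch. The example below is a hedged illustration, not this paper's method: it runs the classical CQ iteration of Byrne, x_{k+1} = P_C(x_k − γA*(Ax_k − P_Q Ax_k)) with 0 < γ < 2/‖A‖², on assumed toy data (box sets C = [−1,1]², Q = [0,1]² and a diagonal matrix A):

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=500):
    """Byrne's CQ iteration: x_{k+1} = P_C(x_k - gamma * A^T (A x_k - P_Q(A x_k)))."""
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
box = lambda lo, hi: (lambda v: np.clip(v, lo, hi))   # projection onto a box
gamma = 0.9 * 2 / np.linalg.norm(A, 2) ** 2           # step size in (0, 2/||A||^2)
x = cq_algorithm(A, box(-1, 1), box(0, 1), np.array([5.0, -5.0]), gamma)
# on exit, x lies in C with A x in Q (a solution of this toy SFP)
```

The step-size bound 2/‖A‖² is exactly the nonexpansivity threshold discussed for P_C(I − λ∇f) in Remark 2.13 below.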

Let H be a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively. Let C be a nonempty closed convex subset of H, and let F be a bifunction from C×C into R, the set of real numbers. The equilibrium problem for F : C×C → R is to find x ∈ C such that

F(x, y) ≥ 0, ∀y ∈ C.
(1.2)

The set of solutions of (1.2) is denoted by EP(F). The theory of equilibrium problems has emerged as an interesting and fascinating branch of applicable mathematics. The mixed equilibrium problem is as follows:

Find x ∈ C such that F_1(x, y) + F_2(x, y) + ⟨Ax, y − x⟩ ≥ 0, ∀y ∈ C.
(1.3)

In the sequel, we denote by MEP(F_1, F_2, A) the set of solutions of this mixed equilibrium problem. If A = 0, we write MEP(F_1, F_2, 0) = MEP(F_1, F_2). The mixed equilibrium problem (1.3) has become a rich source of inspiration and motivation for the study of a large number of problems arising in economics, optimization, variational inequalities, minimax problems, the Nash equilibrium problem in noncooperative games, and others (e.g., [9–14]).

It is our purpose in this paper to consider and analyze two viscosity iteration algorithms (one implicit and one explicit) for finding a common element of the solution set Γ of the split feasibility problem (1.1) and the solution set MEP(F_1, F_2) of the mixed equilibrium problem (1.3) in a real Hilbert space. Furthermore, we prove that the proposed viscosity iteration methods converge strongly to a particular solution of the mixed equilibrium problem (1.3) and the split feasibility problem (1.1).

2 Preliminaries

Assume H is a Hilbert space and C is a nonempty closed convex subset of H. The projection P_C from H onto C assigns to each x ∈ H the unique point P_C x ∈ C such that

‖x − P_C x‖ = inf{‖x − y‖ : y ∈ C}.

Proposition 2.1 (Basic properties of projections [15])

  1. (i)

    ⟨x − P_C x, y − P_C x⟩ ≤ 0 for all x ∈ H and y ∈ C;

  2. (ii)

    ⟨x − y, P_C x − P_C y⟩ ≥ ‖P_C x − P_C y‖² for all x, y ∈ H;

  3. (iii)

    ‖x − P_C x‖² ≤ ‖x − y‖² − ‖y − P_C x‖² for all x ∈ H and y ∈ C.
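The variational characterization in Proposition 2.1(i) is easy to verify numerically. The sketch below (the choice of C as the closed unit ball of R³ is an illustrative assumption) checks it for many random pairs:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Euclidean projection onto the closed ball of radius r centered at the origin."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

rng = np.random.default_rng(0)
for _ in range(100):
    x = 3.0 * rng.normal(size=3)             # arbitrary point of H = R^3
    y = proj_ball(3.0 * rng.normal(size=3))  # arbitrary point of C (the unit ball)
    px = proj_ball(x)
    # Proposition 2.1(i): <x - P_C x, y - P_C x> <= 0 for every y in C
    assert np.dot(x - px, y - px) <= 1e-9
```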

We also consider some nonlinear operators which are introduced in the following.

Definition 2.2 Let A : C → H be a nonlinear mapping. A is said to be

  1. (i)

    monotone if

    ⟨Ax − Ay, x − y⟩ ≥ 0, ∀x, y ∈ C;

  2. (ii)

    strongly monotone if there exists a constant α > 0 such that

    ⟨Ax − Ay, x − y⟩ ≥ α‖x − y‖², ∀x, y ∈ C.

For such a case, A is said to be α-strongly monotone.

  3. (iii)

    inverse-strongly monotone (ism) if there exists a constant α > 0 such that

    ⟨Ax − Ay, x − y⟩ ≥ α‖Ax − Ay‖², ∀x, y ∈ C.

For such a case, A is said to be α-inverse-strongly monotone (α-ism).

  4. (iv)

    k-Lipschitz continuous if there exists a constant k ≥ 0 such that

    ‖Ax − Ay‖ ≤ k‖x − y‖, ∀x, y ∈ C.

Remark 2.3 Let F = I − γf, where f is an L-Lipschitz mapping on H with coefficient L > 0 and 0 < γ < 1/L. It is a simple matter to see that the operator F is (1 − γL)-strongly monotone over H, i.e.,

⟨Fx − Fy, x − y⟩ ≥ (1 − γL)‖x − y‖², ∀(x, y) ∈ H × H.

Definition 2.4 A mapping T : H → H is said to be

  1. (a)

    nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ H;

  2. (b)

    firmly nonexpansive if 2T − I is nonexpansive, or equivalently T = (I + S)/2, where S : H → H is nonexpansive. Alternatively, T is firmly nonexpansive if and only if

    ‖Tx − Ty‖² ≤ ⟨Tx − Ty, x − y⟩, ∀x, y ∈ H;

  3. (c)

    averaged if T = (1 − ϵ)I + ϵS, where ϵ ∈ (0, 1) and S : H → H is nonexpansive. In this case, we also say that T is ϵ-averaged. A firmly nonexpansive mapping is 1/2-averaged.

Proposition 2.5 ([3])

Let T : H → H be a given mapping.

  1. (i)

    T is nonexpansive if and only if the complement I − T is 1/2-ism.

  2. (ii)

    If T is ν-ism, then for γ > 0, γT is (ν/γ)-ism.

  3. (iii)

    T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2. Indeed, for α ∈ (0, 1), T is α-averaged if and only if I − T is 1/(2α)-ism.

Proposition 2.6 ([3])

Given operators S, T, V : H → H.

  1. (i)

    If T = (1 − α)S + αV for some α ∈ (0, 1) and if S is averaged and V is nonexpansive, then T is averaged.

  2. (ii)

    T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.

  3. (iii)

    If T = (1 − α)S + αV for some α ∈ (0, 1), S is firmly nonexpansive and V is nonexpansive, then T is averaged.

  4. (iv)

    The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1 ∘ ⋯ ∘ T_N. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1, α_2 ∈ (0, 1), then the composite T_1 T_2 is α-averaged, where α = α_1 + α_2 − α_1 α_2.

  5. (v)

    If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then

    ⋂_{i=1}^N Fix(T_i) = Fix(T_1 ⋯ T_N).

(Here the notation Fix(T) denotes the set of fixed points of the mapping T, that is, Fix(T) = {x ∈ H : Tx = x}.)

Definition 2.7 A bifunction g : C×C → R is monotone if g(x, y) + g(y, x) ≤ 0, ∀x, y ∈ C. A function G : C → R is upper hemicontinuous if

lim sup_{t→0⁺} G(tx + (1 − t)y) ≤ G(y).

For solving the mixed equilibrium problem for a bifunction F : C×C → R, let us assume that F satisfies the following conditions:

(A1) F(x, x) = 0 for all x ∈ C;

(A2) F is monotone, that is, F(x, y) + F(y, x) ≤ 0 for all x, y ∈ C;

(A3) for each x, y, z ∈ C,

lim_{t↓0} F(tz + (1 − t)x, y) ≤ F(x, y);

(A4) for each x ∈ C, y ↦ F(x, y) is convex and lower semicontinuous.

Lemma 2.8 ([16])

Let C be a closed convex subset of a Hilbert space H. Let F_1 : C×C → R be a bifunction such that

(f1) F_1(x, x) = 0, ∀x ∈ C;

(f2) F_1(x, ·) is monotone and upper hemicontinuous;

(f3) F_1(·, x) is lower semicontinuous and convex.

Let F_2 : C×C → R be a bifunction such that

(h1) F_2(x, x) = 0, ∀x ∈ C;

(h2) F_2(x, ·) is monotone and upper semicontinuous;

(h3) F_2(·, x) is convex.

Moreover, let us suppose that

  1. (H)

    for fixed r > 0 and x ∈ C, there exist a bounded set K ⊂ C and a ∈ K such that for all z ∈ C∖K, F_1(z, a) + F_2(z, a) + (1/r)⟨a − z, z − x⟩ < 0. For r > 0 and x ∈ H, let T_r : H → C be the mapping defined by

    T_r x = {z ∈ C : F_1(z, y) + F_2(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C},
    (2.1)

called the resolvent of F_1 and F_2.

Then

  1. (i)

    T_r x ≠ ∅;

  2. (ii)

    T_r is single-valued;

  3. (iii)

    T_r is firmly nonexpansive;

  4. (iv)

    MEP(F_1, F_2) = Fix(T_r), and it is closed and convex.
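In one concrete case the resolvent (2.1) admits a closed form, which makes Lemma 2.8 easy to check numerically. In the hedged example below we take F_1(z, y) = ⟨Mz, y − z⟩ with M symmetric positive semidefinite, F_2 = 0 and C = Rⁿ (all illustrative assumptions, not this paper's general setting); the defining inequality then reduces to Mz + (z − x)/r = 0, i.e., T_r x = (I + rM)⁻¹x, with Fix(T_r) = ker M = EP(F_1):

```python
import numpy as np

def resolvent(M, r, x):
    """T_r x = (I + r M)^{-1} x: the resolvent (2.1) for F1(z,y) = <M z, y - z>,
    F2 = 0 and C = R^n."""
    return np.linalg.solve(np.eye(len(x)) + r * M, x)

M = np.array([[2.0, 0.0], [0.0, 0.0]])  # symmetric PSD; ker M = EP(F1) = Fix(T_r)
r = 1.5
x = np.array([4.0, -3.0])
z = resolvent(M, r, x)                  # z = (1.0, -3.0): shrunk toward ker M
```

Firm nonexpansiveness (Lemma 2.8(iii)) can be verified directly: ‖T_r a − T_r b‖² ≤ ⟨T_r a − T_r b, a − b⟩ holds for any sample pair a, b.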

Definition 2.9 Let H be a real Hilbert space and f : H → R a function.

  1. (i)

    Minimization problem:

    min_{x∈C} f(x) = (1/2)‖Ax − P_Q Ax‖².

  2. (ii)

    Tikhonov regularization problem:

    min_{x∈C} f_α(x) = (1/2)‖Ax − P_Q Ax‖² + (1/2)α‖x‖²,
    (2.2)

where α > 0 is the regularization parameter.

Proposition 2.10 ([17])

If the SFP is consistent, then the strong limit lim_{α→0} x_α exists and is the minimum-norm solution of the SFP, where x_α denotes the unique minimizer of the regularization problem (2.2).

Proposition 2.11 ([17])

A necessary and sufficient condition for x_α to converge in norm as α → 0 is that the minimum

min_{u∈cl A(C)} dist(u, Q) = min_{u∈cl A(C)} ‖u − P_Q u‖
(2.3)

is attained at a point in the set A(C), where cl A(C) denotes the closure of A(C).

Remark 2.12 ([17])

Assume that the SFP is consistent, and let x_min denote its minimum-norm solution, namely, x_min ∈ Γ has the property

‖x_min‖ = min{‖x‖ : x ∈ Γ}.

From (2.2), observing that the gradient

∇f_α = ∇f + αI = A*(I − P_Q)A + αI

is (α + ‖A‖²)-Lipschitzian and α-strongly monotone, the mapping P_C(I − λ∇f_α) is a contraction with coefficient

√(1 − λ(2α − λ(‖A‖² + α)²)) ≤ 1 − (1/2)αλ,

where

0 < λ < α/(‖A‖² + α)².
(2.4)

Remark 2.13 The mapping T = P_C(I − λ∇f) is nonexpansive for λ ∈ (0, 2/‖A‖²).

In fact, ∇f = A*(I − P_Q)A is (1/‖A‖²)-inverse strongly monotone, so λ∇f is (1/(λ‖A‖²))-inverse strongly monotone, and by Proposition 2.5(iii) the complement I − λ∇f is (λ‖A‖²/2)-averaged. Therefore, noting that P_C is 1/2-averaged and applying Proposition 2.6(iv), we know that for each λ ∈ (0, 2/‖A‖²), T = P_C(I − λ∇f) is α-averaged with

α = 1/2 + λ‖A‖²/2 − (1/2)·(λ‖A‖²/2) = (2 + λ‖A‖²)/4 ∈ (0, 1).

Hence, T is nonexpansive; the same argument applies to P_C(I − λ∇f_α) with ‖A‖² replaced by ‖A‖² + α.

Lemma 2.14 ([17])

Assume that the SFP (1.1) is consistent. Define a sequence {x_n} by the iterative algorithm

x_{n+1} = P_C(I − γ_n ∇f_{α_n})x_n = P_C((1 − γ_n α_n)x_n − γ_n A*(I − P_Q)Ax_n),
(2.5)

where {α_n} and {γ_n} satisfy the following conditions:

  1. (i)

    0 < γ_n ≤ α_n/(‖A‖² + α_n)² for all n;

  2. (ii)

    lim_{n→∞} α_n = 0 and lim_{n→∞} γ_n = 0;

  3. (iii)

    ∑_{n=1}^∞ α_n γ_n = ∞;

  4. (iv)

    lim_{n→∞} (|γ_{n+1} − γ_n| + γ_n|α_{n+1} − α_n|)/(α_{n+1} γ_{n+1})² = 0.

Then { x n } converges in norm to the minimum-norm solution of the SFP (1.1).
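A minimal numerical sketch of iteration (2.5) follows. All concrete data are illustrative assumptions (a diagonal A and box sets C = [−1,1]², Q = [0,1]²), as are the parameter choices α_n = n^(−1/4) and γ_n = α_n/(‖A‖² + α_n)², which are picked to satisfy conditions (i)–(iii); for this instance the minimum-norm solution of the SFP is the origin:

```python
import numpy as np

def regularized_sfp(A, proj_C, proj_Q, x0, iters=20000):
    """Iteration (2.5) with alpha_n = n^(-1/4) and
    gamma_n = alpha_n / (||A||^2 + alpha_n)^2 (condition (i))."""
    A2 = np.linalg.norm(A, 2) ** 2
    x = x0.astype(float)
    for n in range(1, iters + 1):
        alpha = n ** -0.25
        gamma = alpha / (A2 + alpha) ** 2
        Ax = A @ x
        # x_{n+1} = P_C((1 - gamma_n alpha_n) x_n - gamma_n A^T (A x_n - P_Q(A x_n)))
        x = proj_C((1 - gamma * alpha) * x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
box = lambda lo, hi: (lambda v: np.clip(v, lo, hi))
x = regularized_sfp(A, box(-1, 1), box(0, 1), np.array([5.0, -5.0]))
# x is driven toward the minimum-norm solution (0, 0) by the vanishing regularization
```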

Lemma 2.15 ([18])

Let {x_n} and {z_n} be bounded sequences in a Banach space X, and let {β_n} be a sequence in [0, 1] with 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1. Suppose that x_{n+1} = (1 − β_n)z_n + β_n x_n for all n ≥ 0 and lim sup_{n→∞}(‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖) ≤ 0. Then lim_{n→∞} ‖z_n − x_n‖ = 0.

Lemma 2.16 ([19])

Let K be a nonempty closed convex subset of a real Hilbert space H and T : K → K be a nonexpansive mapping with Fix(T) ≠ ∅. If {x_n} is a sequence in K weakly converging to x and if {(I − T)x_n} converges strongly to y, then (I − T)x = y; in particular, if y = 0, then x ∈ Fix(T).

Lemma 2.17 ([20, 21])

Assume {α_n} is a sequence of nonnegative real numbers such that

α_{n+1} ≤ (1 − σ_n)α_n + δ_n σ_n,

where {σ_n} is a sequence in (0, 1) and {δ_n} is a sequence such that

  1. (1)

    ∑_{n=1}^∞ σ_n = ∞,

  2. (2)

    lim sup_{n→∞} δ_n ≤ 0 or ∑_{n=1}^∞ |δ_n σ_n| < ∞.

Then lim n α n =0.

3 Main results

In this section, we introduce two algorithms for solving the mixed equilibrium problem (1.3). Namely, we want to find a solution x* of the mixed equilibrium problem (1.3) which also solves the following variational inequality:

x* ∈ Γ, ⟨(γg − μB)x*, x − x*⟩ ≤ 0, ∀x ∈ Γ,
(3.1)

where B is a k-Lipschitz and η-strongly monotone operator on H with k > 0, η > 0 and 0 < μ < 2η/k², and g : C → H is a β-contraction, β ∈ (0, 1). Let F_1, F_2 : C×C → R be two bifunctions. In order to find a particular solution of the variational inequality (3.1), we construct the following implicit algorithm.

Algorithm 3.1 For an arbitrary initial point x_0, we define a sequence {x_n}_{n≥0} iteratively by

x_n = (I − tμB)T_r P_C(I − λ_n ∇f_{α_n})x_n, t ∈ (0, 1),
(3.2)

for all n ≥ 0, where {α_n} is a real sequence in [0, 1], T_r is defined in Lemma 2.8 and ∇f_{α_n} is introduced in Remark 2.12.

We show that the sequence {x_n} defined by (3.2) converges to a particular solution of the variational inequality (3.1). In fact, in this paper we study a more general algorithm for solving the variational inequality (3.1).

Let g : C → H be a β-contraction. For each t ∈ (0, 1), we consider the mapping S_t given by

S_t x = [tγg + (I − tμB)T_r P_C(I − λ_n ∇f_{α_n})]x, x ∈ C.
(3.3)

Lemma 3.2 S_t is a contraction. Indeed,

‖S_t x − S_t y‖ ≤ [1 − (τ − γβ)t]‖x − y‖, ∀x, y ∈ H,

where t ∈ (0, 1/(τ − γβ)), and the sequences {α_n} and {γ_n} satisfy conditions (i)-(iv) in Lemma 2.14.

Proof It is clear that S_t is a self-mapping. Observe that

‖(I − tμB)x − (I − tμB)y‖² = ‖x − y‖² − 2tμ⟨Bx − By, x − y⟩ + t²μ²‖Bx − By‖² ≤ (1 − 2tμη + t²μ²k²)‖x − y‖².
(3.4)

Letting τ = μ(η − μk²/2) and t ∈ (0, 1), we obtain

‖(I − tμB)x − (I − tμB)y‖ ≤ [1 − tμ(η − μk²/2)]‖x − y‖ = (1 − tτ)‖x − y‖.

Note that P_C and T_r are nonexpansive and I − λ_n ∇f_{α_n} is a contraction with coefficient 1 − λ_n α_n. Hence, for all x, y ∈ C, we obtain

‖S_t x − S_t y‖ = ‖[tγg + (I − tμB)T_r P_C(I − λ_n ∇f_{α_n})]x − [tγg + (I − tμB)T_r P_C(I − λ_n ∇f_{α_n})]y‖
≤ (1 − tτ)‖(I − λ_n ∇f_{α_n})x − (I − λ_n ∇f_{α_n})y‖ + tγ‖g(x) − g(y)‖
≤ (1 − tτ)(1 − λ_n α_n)‖x − y‖ + tγβ‖x − y‖
≤ [1 − (τ − γβ)t]‖x − y‖.

Therefore, S_t is a contraction mapping when t ∈ (0, 1/(τ − γβ)). □

By Lemma 3.2 and the Banach contraction principle, S_t has a unique fixed point x_t in C; i.e., we obtain the following algorithm.

Algorithm 3.3 For an arbitrary initial point x_0, we define a sequence {x_n}_{n≥0} iteratively by

x_n = [ε_n γg + (I − ε_n μB)T_r P_C(I − λ_n ∇f_{α_n})]x_n,
(3.5)

for all n ≥ 0, where {α_n} and {ε_n} are two real sequences in [0, 1], T_r is defined in Lemma 2.8 and ∇f_{α_n} is introduced in Remark 2.12.
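Since S_t is a contraction (Lemma 3.2), the implicit iterate of Algorithm 3.3 can be computed in practice by an inner Picard (Banach) iteration. The toy instance below uses heavily hedged, illustrative choices that are not this paper's general setting: B = I (so k = η = 1), μ = 1, γ = 0.5, g(x) = 0.5(a − x) with an assumed anchor a, T_r = P_C (the case F_1 = F_2 = 0), A = I, Q = {0}, and C the unit ball:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Projection onto the closed unit ball (our choice of C)."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def S(x, t, lam=0.1, alpha=0.1, mu=1.0, gam=0.5, a=np.array([2.0, 0.0])):
    """One application of the mapping S_t from (3.3) under the toy choices above."""
    g = 0.5 * (a - x)              # beta-contraction with beta = 0.5
    grad = (1 + alpha) * x         # grad f_alpha = A*(I - P_Q)A x + alpha x = (1 + alpha) x here
    u = proj_ball(x - lam * grad)  # T_r P_C (I - lam grad f_alpha) x with T_r = P_C
    return t * gam * g + (1 - t * mu) * u

# Picard iteration for the unique fixed point x_t of the contraction S_t
x = np.zeros(2)
for _ in range(200):
    x = S(x, t=0.05)
# x now satisfies x = S_t(x) to numerical precision
```

This is exactly the constructive content of the Banach contraction principle invoked above: the inner loop converges geometrically for any starting point.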

At this point, we would like to point out that Algorithm 3.3 includes Algorithm 3.1 as a special case (take g = 0), and that the contraction g is allowed to be a nonself-mapping.

Theorem 3.4 Let C be a nonempty closed convex subset of a real Hilbert space H. Let B be a k-Lipschitz and η-strongly monotone operator on H with k > 0, η > 0 and 0 < μ < 2η/k², and let the sequences {α_n} and {γ_n} satisfy conditions (i)-(iv) in Lemma 2.14. Let F_1, F_2 : C×C → R be two bifunctions which satisfy conditions (f1)-(f3), (h1)-(h3) and (H) in Lemma 2.8. Let g : C → H be a β-contraction. Assume Ω := Γ ∩ MEP(F_1, F_2) ≠ ∅. Then the sequence {x_n} generated by implicit Algorithm 3.3 converges in norm, as ε_n → 0, to the unique solution x* of the variational inequality (3.1). In particular, if we take g = 0, then the sequence {x_n} defined by Algorithm 3.1 converges in norm, as ε_n → 0, to the unique solution x* of the following variational inequality:

⟨μBx*, x − x*⟩ ≥ 0, ∀x ∈ Ω.

Proof We divide the proof into several steps.

Step 1. We prove that the sequence { x n } is bounded.

Set u_n = P_C(I − λ_n ∇f_{α_n})x_n for all n ≥ 0. Take q ∈ Ω. It is clear that q = P_C(I − λ_n ∇f_{α_n})q. From Remark 2.13, we know that P_C(I − λ_n ∇f_{α_n}) is nonexpansive, and thus

‖u_n − q‖ = ‖P_C(I − λ_n ∇f_{α_n})x_n − P_C(I − λ_n ∇f_{α_n})q‖ ≤ ‖x_n − q‖.
(3.6)

From (3.5), (3.6) and the fact that T_r is nonexpansive, it follows that

‖x_n − q‖ = ‖ε_n γ(g(x_n) − g(q)) + (I − ε_n μB)(T_r u_n − q) + ε_n(γg(q) − μBq)‖
≤ ε_n γβ‖x_n − q‖ + (1 − ε_n τ)‖T_r u_n − q‖ + ε_n‖μBq − γg(q)‖
≤ (1 − ε_n(τ − γβ))‖x_n − q‖ + ε_n‖μBq − γg(q)‖.

It follows that

‖x_n − q‖ ≤ ‖μBq − γg(q)‖/(τ − γβ).

This indicates that {x_n} is bounded. It is easy to deduce that {g(x_n)} and {u_n} are also bounded.

Now we can choose a constant M > 0 such that

sup_n {2‖γg(x_n) − μBq‖ ‖T_r u_n − q‖ + ‖γg(x_n) − μBq‖²} ≤ M.

Step 2. We prove that lim n x n u n =0.

From (3.5), (3.6) and the fact that T_r is nonexpansive, we have

‖x_n − q‖² = ‖(I − ε_n μB)(T_r u_n − q) + ε_n(γg(x_n) − μBq)‖²
≤ (1 − ε_n τ)‖T_r u_n − q‖² + 2ε_n⟨γg(x_n) − μBq, T_r u_n − q⟩ + ε_n²‖γg(x_n) − μBq‖²
≤ (1 − ε_n τ)‖T_r u_n − q‖² + 2ε_n‖γg(x_n) − μBq‖ ‖T_r u_n − q‖ + ε_n‖γg(x_n) − μBq‖²
≤ (1 − ε_n τ)‖u_n − q‖² + ε_n M.
(3.7)

Note that ∇f_{α_n} is (α_n + ‖A‖²)-Lipschitzian and α_n-strongly monotone. From Lemma 2.8, (3.5) and (3.6), we have

‖u_n − q‖² = ‖P_C(I − λ_n ∇f_{α_n})x_n − P_C(I − λ_n ∇f_{α_n})q‖²
≤ ⟨(I − λ_n ∇f_{α_n})x_n − (I − λ_n ∇f_{α_n})q, u_n − q⟩
= (1/2)(‖(I − λ_n ∇f_{α_n})x_n − (I − λ_n ∇f_{α_n})q‖² + ‖u_n − q‖² − ‖x_n − u_n − λ_n(∇f_{α_n}(x_n) − ∇f_{α_n}(q))‖²)
≤ (1/2)((1 − λ_n α_n)‖x_n − q‖² + ‖u_n − q‖² − ‖x_n − u_n − λ_n(∇f_{α_n}(x_n) − ∇f_{α_n}(q))‖²)
≤ (1/2)(‖x_n − q‖² + ‖u_n − q‖² − ‖x_n − u_n‖² + 2λ_n⟨x_n − u_n, ∇f_{α_n}(x_n) − ∇f_{α_n}(q)⟩ − λ_n²‖∇f_{α_n}(x_n) − ∇f_{α_n}(q)‖²),

which implies that

‖u_n − q‖² ≤ ‖x_n − q‖² − ‖x_n − u_n‖² + 2λ_n⟨x_n − u_n, ∇f_{α_n}(x_n) − ∇f_{α_n}(q)⟩ − λ_n²‖∇f_{α_n}(x_n) − ∇f_{α_n}(q)‖²
≤ ‖x_n − q‖² − ‖x_n − u_n‖² + 2λ_n‖x_n − u_n‖ ‖∇f_{α_n}(x_n) − ∇f_{α_n}(q)‖.
(3.8)

By (3.7) and (3.8), we obtain

‖x_n − q‖² ≤ (1 − ε_n τ)[‖x_n − q‖² − ‖x_n − u_n‖² + 2λ_n‖x_n − u_n‖ ‖∇f_{α_n}(x_n) − ∇f_{α_n}(q)‖] + ε_n M.

It follows that

‖x_n − u_n‖² ≤ ε_n M + 2λ_n‖x_n − u_n‖ ‖∇f_{α_n}(x_n) − ∇f_{α_n}(q)‖.

Together with lim_{n→∞} ε_n = 0 and lim_{n→∞} λ_n = 0, this implies that

lim_{n→∞} ‖x_n − u_n‖ = 0.
(3.9)

Setting y_n = T_r u_n, we have

‖x_n − y_n‖ = ‖ε_n γg(x_n) + (I − ε_n μB)y_n − y_n‖
= ε_n‖γg(x_n) − μB T_r u_n‖
≤ ε_n γ‖g(x_n) − g(u_n)‖ + ε_n‖γg(u_n) − μB T_r u_n‖
≤ ε_n γβ‖x_n − u_n‖ + ε_n‖γg(u_n) − μB T_r u_n‖.

Since lim_{n→∞} ε_n = 0, {u_n} is bounded and (3.9) holds, we obtain

lim_{n→∞} ‖x_n − y_n‖ = 0.
(3.10)

By (3.9) and (3.10), we also have

lim_{n→∞} ‖u_n − y_n‖ ≤ lim_{n→∞}(‖u_n − x_n‖ + ‖x_n − y_n‖) = 0.
(3.11)

Step 3. We prove that {x_n} converges weakly along a subsequence to a point x* ∈ Ω := Γ ∩ MEP(F_1, F_2).

By (3.4) and (3.5), we deduce

‖x_n − x*‖² = ‖[ε_n γg(x_n) + (I − ε_n μB)y_n] − x*‖²
≤ ‖y_n − x*‖² + 2ε_n⟨γg(x_n), y_n − x*⟩ − 2ε_n⟨μB y_n, y_n − x*⟩ + ε_n²‖γg(x_n) − μB y_n‖²
≤ ‖u_n − x*‖² + 2ε_n⟨γg(x_n) − γg(x*), y_n − x*⟩ + 2ε_n⟨γg(x*) − μBx*, y_n − x*⟩ − 2ε_n⟨μB y_n − μBx*, y_n − x*⟩ + ε_n²‖γg(x_n) − μB y_n‖²
≤ (1 − 2ε_n(τ − γβ))‖x_n − x*‖² + 2ε_n⟨γg(x*) − μBx*, y_n − x*⟩ + ε_n²‖γg(x_n) − μB y_n‖².

It follows that

‖x_n − x*‖² ≤ (1/(τ − γβ))⟨γg(x*) − μBx*, y_n − x*⟩ + (ε_n/(2(τ − γβ)))‖γg(x_n) − μB y_n‖²
≤ (1/(τ − γβ))⟨γg(x*) − μBx*, y_n − x*⟩ + (ε_n/(2(τ − γβ)))M.
(3.12)

Since {x_n} is bounded, without loss of generality we may assume that {x_n} converges weakly to a point x* ∈ C. Hence, u_n ⇀ x* and y_n ⇀ x*.

Step 4. We show ω_w(x_n) ⊂ Ω := Γ ∩ MEP(F_1, F_2).

Since y_n = T_r u_n, for any y ∈ C we obtain

F_1(y_n, y) + F_2(y_n, y) + (1/r)⟨y − y_n, y_n − u_n⟩ ≥ 0.

From the monotonicity of F_1 and F_2, we get

(1/r)⟨y − y_n, y_n − u_n⟩ ≥ F_1(y, y_n) + F_2(y, y_n), ∀y ∈ C.

Hence,

⟨y − y_{n_i}, (y_{n_i} − u_{n_i})/r⟩ ≥ F_1(y, y_{n_i}) + F_2(y, y_{n_i}), ∀y ∈ C.
(3.13)

Since (y_{n_i} − u_{n_i})/r → 0 and y_{n_i} ⇀ x*, from (A4) it follows that F_1(y, x*) + F_2(y, x*) ≤ 0 for all y ∈ C. Put z_t = ty + (1 − t)x* for t ∈ (0, 1] and y ∈ C; then z_t ∈ C and hence F_1(z_t, x*) + F_2(z_t, x*) ≤ 0. So, from (A1) and (A4), we have

0 = F_1(z_t, z_t) + F_2(z_t, z_t)
≤ t[F_1(z_t, y) + F_2(z_t, y)] + (1 − t)[F_1(z_t, x*) + F_2(z_t, x*)]
≤ t[F_1(z_t, y) + F_2(z_t, y)],

and hence 0 ≤ F_1(z_t, y) + F_2(z_t, y). Letting t ↓ 0, from (A3) we have 0 ≤ F_1(x*, y) + F_2(x*, y) for all y ∈ C. Therefore, x* ∈ MEP(F_1, F_2).

Next, we prove x Γ.

From Remark 2.13, we know that T = P_C(I − λ_n ∇f) is nonexpansive, and we have

‖x_n − Tx_n‖ ≤ ‖x_n − u_n‖ + ‖u_n − Tx_n‖
= ‖x_n − u_n‖ + ‖P_C(I − λ_n ∇f_{α_n})x_n − P_C(I − λ_n ∇f)x_n‖
≤ ‖x_n − u_n‖ + ‖(I − λ_n ∇f_{α_n})x_n − (I − λ_n ∇f)x_n‖
= ‖x_n − u_n‖ + λ_n α_n‖x_n‖.

So, from lim_{n→∞} ‖x_n − u_n‖ = 0, lim_{n→∞} α_n = 0, lim_{n→∞} λ_n = 0 and the boundedness of {x_n}, it follows that

lim_{n→∞} ‖x_n − Tx_n‖ = 0.
(3.14)

Thus, taking into account x_{n_j} ⇀ x* and u_{n_j} ⇀ x*, from Lemma 2.16 we get x* ∈ Γ. Therefore, x* ∈ Ω := Γ ∩ MEP(F_1, F_2). This shows that

ω_w(x_n) ⊂ Ω := Γ ∩ MEP(F_1, F_2).

Step 5. lim_{n→∞} x_n = x*.

Returning to (3.12), we get

‖x_n − x*‖² ≤ (1/(τ − γβ))⟨γg(x*) − μBx*, y_n − x*⟩ + (ε_n/(2(τ − γβ)))M.

Hence, the weak convergence y_n ⇀ x* implies that x_n → x* strongly.

Now we return to (3.12) and take the limit as n → ∞ to obtain

‖x* − z‖² ≤ (1/(τ − γβ))⟨μBz − γg(z), z − x*⟩, ∀z ∈ Ω.
(3.15)

In particular, x* solves the following variational inequality:

x* ∈ Ω, ⟨μBz − γg(z), z − x*⟩ ≥ 0, ∀z ∈ Ω,

or the equivalent dual variational inequality

x* ∈ Ω, ⟨μBx* − γg(x*), z − x*⟩ ≥ 0, ∀z ∈ Ω.

Therefore, x* = P_Ω(I − μB + γg)x*; that is, x* is the unique fixed point in Ω of the contraction P_Ω(I − μB + γg). □

Remark 3.5 If we take g = 0, then (3.15) reduces to

‖x* − z‖² ≤ ⟨μBz, z − x*⟩, ∀z ∈ Ω.

In particular, when B = I and μ = 1, this becomes ‖x* − z‖² ≤ ⟨z, z − x*⟩, which is equivalent to

‖x*‖² ≤ ⟨z, x*⟩, ∀z ∈ Ω.

This clearly implies that

‖x*‖ ≤ ‖z‖, ∀z ∈ Ω.

Therefore, x* is a particular (minimum-norm) solution of the variational inequality (3.1).

Next, we introduce an explicit algorithm for finding a solution of the variational inequality (3.1). This scheme is obtained by discretizing the implicit scheme (3.5). We show the strong convergence of this algorithm.

Theorem 3.6 Let C be a nonempty closed convex subset of a real Hilbert space H. Let B be a k-Lipschitz and η-strongly monotone operator on H with k > 0, η > 0 and 0 < μ < 2η/k², and let the sequences {α_n} and {γ_n} satisfy conditions (i)-(iv) in Lemma 2.14. Let F_1, F_2 : C×C → R be two bifunctions which satisfy conditions (f1)-(f3), (h1)-(h3) and (H) in Lemma 2.8. Let g : C → H be a β-contraction. Assume Ω := Γ ∩ MEP(F_1, F_2) ≠ ∅. For given x_0 ∈ C, let the sequence {x_n} be generated by

x_{n+1} = θ_n x_n + (1 − θ_n)[ε_n γg + (I − ε_n μB)T_r P_C(I − λ_n ∇f_{α_n})]x_n, n ≥ 0,
(3.16)

where {ε_n} and {θ_n} are two sequences in [0, 1] satisfying the following conditions:

  1. (i)

    lim_{n→∞} ε_n = 0 and ∑_{n=1}^∞ ε_n = ∞;

  2. (ii)

    0 < lim inf_{n→∞} θ_n ≤ lim sup_{n→∞} θ_n < 1;

  3. (iii)

    lim_{n→∞} λ_n = 0.

Then the sequence {x_n} converges strongly to x*, the unique solution of the variational inequality (3.1). In particular, if g = 0, then the sequence {x_n} generated by

x_{n+1} = θ_n x_n + (1 − θ_n)(I − ε_n μB)T_r P_C(I − λ_n ∇f_{α_n})x_n, n ≥ 0,

converges strongly to a solution of the following variational inequality:

⟨μBx*, x − x*⟩ ≥ 0, ∀x ∈ Ω.
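A toy run of scheme (3.16) is sketched below. Every concrete choice is an illustrative assumption, not taken from the paper: B = I, μ = 1, γ = 0.25, g(x) = 0.5(a − x) with an assumed anchor a, T_r = P_C (the case F_1 = F_2 = 0), A = I, Q = {0}, C the unit ball, θ_n = 1/2, ε_n = n^(−1/2) and λ_n within bound (2.4). For this instance Ω = Γ = {0}, so the iterates should drift toward the origin as ε_n, λ_n → 0:

```python
import numpy as np

def proj_ball(x, r=1.0):
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def explicit_viscosity(x0, iters=20000, mu=1.0, gam=0.25, a=np.array([2.0, 0.0])):
    x = x0.astype(float)
    theta = 0.5                                   # condition (ii)
    for n in range(1, iters + 1):
        eps = n ** -0.5                           # condition (i)
        alpha = n ** -0.25
        lam = 0.2 * alpha / (1.0 + alpha) ** 2    # within bound (2.4); lam -> 0 (condition (iii))
        u = proj_ball(x - lam * (1 + alpha) * x)  # T_r P_C (I - lam grad f_alpha) x, T_r = P_C
        v = eps * gam * 0.5 * (a - x) + (1 - eps * mu) * u
        x = theta * x + (1 - theta) * v           # scheme (3.16)
    return x

x = explicit_viscosity(np.array([5.0, -5.0]))
# x drifts toward the unique point 0 of Omega as the viscosity term fades
```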

Proof First, we prove that the sequence {x_n} is bounded. Indeed, pick z ∈ Ω; then z = P_C(I − λ_n ∇f_{α_n})z. Set u_n = P_C(I − λ_n ∇f_{α_n})x_n for all n ≥ 0. From (3.16), we have

‖u_n − z‖ = ‖P_C(I − λ_n ∇f_{α_n})x_n − P_C(I − λ_n ∇f_{α_n})z‖ ≤ ‖x_n − z‖,

and

‖x_{n+1} − z‖ = ‖θ_n x_n + (1 − θ_n)[ε_n γg(x_n) + (I − ε_n μB)T_r u_n] − z‖
≤ θ_n‖x_n − z‖ + (1 − θ_n)‖(I − ε_n μB)(T_r u_n − z) + ε_n(γg(x_n) − μBz)‖
≤ θ_n‖x_n − z‖ + (1 − θ_n)[(1 − ε_n τ)‖u_n − z‖ + ε_n γβ‖x_n − z‖ + ε_n‖γg(z) − μBz‖]
≤ θ_n‖x_n − z‖ + (1 − θ_n)[(1 − ε_n τ)‖x_n − z‖ + ε_n γβ‖x_n − z‖ + ε_n‖γg(z) − μBz‖]
= [1 − (τ − γβ)(1 − θ_n)ε_n]‖x_n − z‖ + ε_n(1 − θ_n)‖γg(z) − μBz‖
≤ max{‖x_n − z‖, ‖γg(z) − μBz‖/(τ − γβ)}.

By induction, we have, for all n ≥ 0,

‖x_n − z‖ ≤ max{‖x_0 − z‖, ‖γg(z) − μBz‖/(τ − γβ)}.

Hence, {x_n} is bounded. Consequently, {u_n}, {g(x_n)} and {∇f_{α_n}(x_n)} are also bounded. Let M > 0 be a constant such that

sup_n {2‖μB T_r u_n − γg(x_n)‖ ‖T_r u_n − z‖ + ‖μB T_r u_n − γg(x_n)‖², 2‖x_n − u_n‖} ≤ M.

Next, we show lim_{n→∞} ‖x_n − u_n‖ = 0.

Define v_n by x_{n+1} = θ_n x_n + (1 − θ_n)v_n for all n ≥ 0, that is, v_n = ε_n γg(x_n) + (I − ε_n μB)T_r u_n. From (3.16), the boundedness of the sequences involved and condition (i), it follows that

lim sup_{n→∞}(‖v_{n+1} − v_n‖ − ‖x_{n+1} − x_n‖) ≤ 0.

Hence, by Lemma 2.15, we get lim_{n→∞} ‖v_n − x_n‖ = 0. Consequently,

lim_{n→∞} ‖x_{n+1} − x_n‖ = lim_{n→∞}(1 − θ_n)‖v_n − x_n‖ = 0.

By the convexity of the norm ‖·‖, we obtain

‖x_{n+1} − z‖² = ‖θ_n(x_n − z) + (1 − θ_n)(v_n − z)‖²
≤ θ_n‖x_n − z‖² + (1 − θ_n)‖v_n − z‖²
= θ_n‖x_n − z‖² + (1 − θ_n)‖T_r u_n − z − ε_n(μB T_r u_n − γg(x_n))‖²
= θ_n‖x_n − z‖² + (1 − θ_n)[‖T_r u_n − z‖² − 2ε_n⟨μB T_r u_n − γg(x_n), T_r u_n − z⟩ + ε_n²‖μB T_r u_n − γg(x_n)‖²]
≤ θ_n‖x_n − z‖² + (1 − θ_n)‖u_n − z‖² + ε_n M.
(3.17)

Let y_n = T_r u_n. Since u_n = P_C(I − λ_n ∇f_{α_n})x_n, we obtain

‖y_n − z‖² = ‖T_r u_n − T_r z‖² ≤ ‖u_n − z‖²
= ‖P_C(I − λ_n ∇f_{α_n})x_n − P_C(I − λ_n ∇f_{α_n})z‖²
≤ ⟨(I − λ_n ∇f_{α_n})x_n − (I − λ_n ∇f_{α_n})z, u_n − z⟩
= (1/2)(‖(I − λ_n ∇f_{α_n})x_n − (I − λ_n ∇f_{α_n})z‖² + ‖u_n − z‖² − ‖(x_n − z) − λ_n(∇f_{α_n}(x_n) − ∇f_{α_n}(z)) − (u_n − z)‖²)
≤ (1/2)(‖x_n − z‖² + ‖u_n − z‖² − ‖(x_n − u_n) − λ_n(∇f_{α_n}(x_n) − ∇f_{α_n}(z))‖²)
≤ (1/2)(‖x_n − z‖² + ‖u_n − z‖² − ‖x_n − u_n‖² + 2λ_n⟨x_n − u_n, ∇f_{α_n}(x_n) − ∇f_{α_n}(z)⟩ − λ_n²‖∇f_{α_n}(x_n) − ∇f_{α_n}(z)‖²).
(3.18)

Thus, we deduce

‖u_n − z‖² ≤ ‖x_n − z‖² − ‖x_n − u_n‖² + 2λ_n‖x_n − u_n‖ ‖∇f_{α_n}(x_n) − ∇f_{α_n}(z)‖
≤ ‖x_n − z‖² − ‖x_n − u_n‖² + λ_n M‖∇f_{α_n}(x_n) − ∇f_{α_n}(z)‖.
(3.19)

By (3.17) and (3.19), we obtain

‖x_{n+1} − z‖² ≤ θ_n‖x_n − z‖² + (1 − θ_n)‖u_n − z‖² + ε_n M
≤ θ_n‖x_n − z‖² + (1 − θ_n)[‖x_n − z‖² − ‖x_n − u_n‖² + λ_n M‖∇f_{α_n}(x_n) − ∇f_{α_n}(z)‖] + ε_n M
= ‖x_n − z‖² − (1 − θ_n)‖x_n − u_n‖² + (λ_n‖∇f_{α_n}(x_n) − ∇f_{α_n}(z)‖ + ε_n)M.

It follows that

(1 − θ_n)‖x_n − u_n‖² ≤ (‖x_n − z‖ + ‖x_{n+1} − z‖)‖x_{n+1} − x_n‖ + (λ_n‖∇f_{α_n}(x_n) − ∇f_{α_n}(z)‖ + ε_n)M.

Since lim inf_{n→∞}(1 − θ_n) > 0, lim_{n→∞} ε_n = 0, lim_{n→∞} ‖x_{n+1} − x_n‖ = 0, {∇f_{α_n}(x_n)} is bounded and lim_{n→∞} λ_n = 0, we derive that

lim_{n→∞} ‖x_n − u_n‖ = 0.
(3.20)

Setting y_n = T_r u_n, from (3.16) we have

‖x_n − y_n‖ ≤ ‖x_{n+1} − x_n‖ + ‖x_{n+1} − y_n‖
= ‖x_{n+1} − x_n‖ + ‖θ_n x_n + (1 − θ_n)[ε_n γg(x_n) + (I − ε_n μB)y_n] − y_n‖
≤ ‖x_{n+1} − x_n‖ + θ_n‖x_n − y_n‖ + (1 − θ_n)ε_n‖γg(x_n) − μB T_r u_n‖
≤ ‖x_{n+1} − x_n‖ + θ_n‖x_n − y_n‖ + (1 − θ_n)(ε_n γβ‖x_n − u_n‖ + ε_n‖γg(u_n) − μB T_r u_n‖).

Thus,

‖x_n − y_n‖ ≤ (1/(1 − θ_n))‖x_{n+1} − x_n‖ + ε_n γβ‖x_n − u_n‖ + ε_n‖γg(u_n) − μB T_r u_n‖.

From lim_{n→∞} ε_n = 0 and the boundedness of {u_n}, we obtain

lim_{n→∞} ‖x_n − y_n‖ = 0.
(3.21)

By (3.20) and (3.21), we also have

lim_{n→∞} ‖y_n − u_n‖ ≤ lim_{n→∞}(‖y_n − x_n‖ + ‖x_n − u_n‖) = 0.

Next, we prove

lim sup_{n→∞} ⟨(γg − μB)x*, y_n − x*⟩ ≤ 0, where x* is the unique solution of the variational inequality (3.1).

Indeed, we can choose a subsequence {y_{n_i}} of {y_n} such that

lim sup_{n→∞} ⟨(γg − μB)x*, y_n − x*⟩ = lim_{i→∞} ⟨(γg − μB)x*, y_{n_i} − x*⟩.

Without loss of generality, we may further assume that y_{n_i} ⇀ x̃. By the same argument as in Step 4 of the proof of Theorem 3.4, we deduce that x̃ ∈ Ω. Therefore,

lim sup_{n→∞} ⟨(γg − μB)x*, y_n − x*⟩ = ⟨(γg − μB)x*, x̃ − x*⟩ ≤ 0.

From (3.16) and an argument similar to that leading to (3.12), we deduce

‖x_{n+1} − x*‖² ≤ (1 − σ_n)‖x_n − x*‖² + δ_n σ_n,

where σ_n = 2ε_n(1 − θ_n)(τ − γβ) and

δ_n = (1/(τ − γβ))⟨γg(x*) − μBx*, y_n − x*⟩ + (ε_n(1 − θ_n)/(τ − γβ))M.

It is clear that ∑_{n=0}^∞ σ_n = ∞ and lim sup_{n→∞} δ_n ≤ 0. Hence, all the conditions of Lemma 2.17 are satisfied, and we immediately deduce that lim_{n→∞} x_n = x*.

Remark 3.7 If we take g = 0, then, by an argument similar to that in Theorem 3.6, we deduce immediately that x* is a particular solution of the variational inequality (3.1). This completes the proof. □

4 Application in the multiple-set split feasibility problem

Recall that the multiple-set split feasibility problem (MSSFP) [4] is to find a point x* such that

x* ∈ C = ⋂_{i=1}^N C_i and Ax* ∈ Q = ⋂_{j=1}^M Q_j,
(4.1)

where N, M ≥ 1 are integers, the C_i and Q_j are closed convex subsets of Hilbert spaces H_1 and H_2, respectively, and A : H_1 → H_2 is a bounded linear operator. The special case N = M = 1 is the split feasibility problem (1.1), introduced by Censor and Elfving [1] for modeling phase retrieval and other image restoration problems.

Let Γ be the solution set of the SFP, and let γ > 0. Assume x* ∈ Γ. Then Ax* ∈ Q_1, which implies (I − P_{Q_1})Ax* = 0, which in turn implies γA*(I − P_{Q_1})Ax* = 0, and hence the fixed point equation (I − γA*(I − P_{Q_1})A)x* = x*. Requiring that x* ∈ C_1, we consider the fixed point equation

P_{C_1}(I − γA*(I − P_{Q_1})A)x* = x*.
(4.2)

The solutions of the fixed point equation (4.2) are exactly the solutions of the corresponding SFP. Following Byrne [2] and Xu [17], we obtain the following proposition.

Proposition 4.1 Given x* ∈ H_1, x* solves the SFP if and only if x* solves the fixed point equation (4.2).

From this proposition, we can easily obtain that MSSFP (4.1) is equivalent to a common fixed point problem of finitely many nonexpansive mappings, as we show in the following.

Decompose the MSSFP into N subproblems (1 ≤ i ≤ N):

x_i* ∈ C_i, Ax_i* ∈ Q = ⋂_{j=1}^M Q_j.

Next, we define a mapping T_i as follows:

T_i x = P_{C_i}(I − γ_i ∇g)x = P_{C_i}(I − γ_i ∑_{j=1}^M β_j A*(I − P_{Q_j})A)x,

where the proximity function g is defined by

g(x) = (1/2)∑_{j=1}^M β_j ‖Ax − P_{Q_j}Ax‖²,

with weights {β_j}_{j=1}^M such that β_j > 0. Consider the minimization of g over C:

min_{x∈C} g(x) = min_{x∈C} (1/2)∑_{j=1}^M β_j ‖Ax − P_{Q_j}Ax‖².

Observe that the gradient of g is

∇g(x) = ∑_{j=1}^M β_j A*(I − P_{Q_j})Ax,
(4.3)

which is L-Lipschitz continuous with constant L = ∑_{j=1}^M β_j ‖A‖², and thus ∇g is (1/L)-ism. If 0 < γ_i ≤ 2/L, then T_i is nonexpansive. Therefore, fixed point algorithms for nonexpansive mappings can be applied to MSSFP (4.1).
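The cyclic scheme T_N ∘ ⋯ ∘ T_1 built from the mappings T_i = P_{C_i}(I − γ_i ∇g) can be tested on a small instance. The data below (two boxes C_1, C_2, two boxes Q_1, Q_2, a 2×2 matrix A and equal weights β_j) are illustrative assumptions; the step size γ_i = 1.5/L respects the bound 2/L stated above:

```python
import numpy as np

def mssfp_sweep(x, Cs, Qs, A, betas, gamma):
    """One sweep T_N ... T_1, where T_i = P_{C_i}(I - gamma * grad g)."""
    for proj_C in Cs:
        Ax = A @ x
        grad = sum(b * (A.T @ (Ax - pq(Ax))) for b, pq in zip(betas, Qs))
        x = proj_C(x - gamma * grad)
    return x

A = np.array([[1.0, 1.0], [0.0, 1.0]])
box = lambda lo, hi: (lambda v: np.clip(v, lo, hi))
Cs = [box(-2, 2), box(-1, 1)]               # C = C1 ∩ C2 = [-1, 1]^2
Qs = [box(-1, 1), box(0, 2)]                # Q = Q1 ∩ Q2 = [0, 1]^2
betas = [0.5, 0.5]
L = sum(betas) * np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad g
x = np.array([3.0, -3.0])
for _ in range(3000):
    x = mssfp_sweep(x, Cs, Qs, A, betas, 1.5 / L)
# on exit x is (approximately) in every C_i with A x in every Q_j
```

Each T_i is averaged, so by Proposition 2.6(iv)-(v) the sweep converges to a common fixed point, i.e., to a solution of this toy MSSFP.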

From Algorithm 3.1, Algorithm 3.3 and Proposition 4.1, we consider our results on the optimization method for solving MSSFP (4.1), and obtain the following two algorithms.

Algorithm 4.2 For an arbitrary initial point x_0, we define a sequence {x_n}_{n≥0} iteratively by

x_n = (I − tμB)T_r [P_{C_N}(I − γ∇g)] ⋯ [P_{C_1}(I − γ∇g)]x_n, t ∈ (0, 1),
(4.4)

for all n ≥ 0, where T_r is defined in Lemma 2.8 and ∇g is introduced in (4.3).

Algorithm 4.3 For an arbitrary initial point x_0, we define a sequence {x_n}_{n≥0} iteratively by

x_n = {ε_n γf + (I − ε_n μB)T_r [P_{C_N}(I − γ∇g)] ⋯ [P_{C_1}(I − γ∇g)]}x_n,
(4.5)

for all n ≥ 0, where {ε_n} is a real sequence in [0, 1], T_r is defined in Lemma 2.8 and ∇g is introduced in (4.3).

In addition, we would like to point out that Algorithm 4.3 includes Algorithm 4.2 as a special case (f = 0), and that the contraction f may be a nonself-mapping. According to Theorem 3.4, we obtain the following theorem.

Theorem 4.4 Let C be a nonempty closed convex subset of a real Hilbert space H. Let B be a k-Lipschitz and η-strongly monotone operator on H with k > 0, η > 0 and 0 < μ < 2η/k², and let the sequences {α_n} and {γ_n} satisfy conditions (i)-(iv) in Lemma 2.14. Let F_1, F_2 : C×C → R be two bifunctions which satisfy conditions (f1)-(f3), (h1)-(h3) and (H) in Lemma 2.8. Let f : C → H be a β-contraction. Assume Ω := Γ ∩ MEP(F_1, F_2) ≠ ∅, where Γ is the solution set of MSSFP (4.1). Then the sequence {x_n} generated by implicit Algorithm 4.3 converges in norm, as ε_n → 0, to the unique solution x* of the variational inequality (3.1). In particular, if we take f = 0, then the sequence {x_n} defined by Algorithm 4.2 converges in norm, as ε_n → 0, to the unique solution x* of the variational inequality (3.1).

Proof Let

U = T_N ⋯ T_1 = [P_{C_N}(I − γ∇g)] ⋯ [P_{C_1}(I − γ∇g)].
(4.6)

Then, as the composition of finitely many nonexpansive mappings, U is nonexpansive, and Algorithm 4.3 can be written as

x_n = [ε_n γf + (I − ε_n μB)T_r U]x_n.
(4.7)

Since T_r and U are nonexpansive, following the proof of Theorem 3.4 we obtain that the sequence {x_n} converges strongly to a common fixed point of T_1, …, T_N, that is, to a solution of MSSFP (4.1). □

From Theorem 3.6, we introduce an explicit algorithm for finding a common fixed point, which solves the variational inequality (3.1) and the multiple-set split feasibility problem (4.1). This scheme is obtained by discretizing the implicit scheme (4.5).

Theorem 4.5 Let C be a nonempty closed convex subset of a real Hilbert space H. Let B be a k-Lipschitz and η-strongly monotone operator on H with k > 0, η > 0 and 0 < μ < 2η/k², and let the sequences {α_n} and {γ_n} satisfy conditions (i)-(iv) in Lemma 2.14. Let F_1, F_2 : C×C → R be two bifunctions which satisfy conditions (f1)-(f3), (h1)-(h3) and (H) in Lemma 2.8. Let f : C → H be a β-contraction. Assume Ω := Γ ∩ MEP(F_1, F_2) ≠ ∅, where Γ is the solution set of MSSFP (4.1). For given x_0 ∈ C, let the sequence {x_n} be generated by

x_{n+1} = θ_n x_n + (1 − θ_n){ε_n γf + (I − ε_n μB)T_r [P_{C_N}(I − γ∇g)] ⋯ [P_{C_1}(I − γ∇g)]}x_n, n ≥ 0,
(4.8)

where {ε_n} and {θ_n} are two sequences in [0, 1] satisfying the following conditions:

  1. (i)

    lim_{n→∞} ε_n = 0 and ∑_{n=1}^∞ ε_n = ∞;

  2. (ii)

    0 < lim inf_{n→∞} θ_n ≤ lim sup_{n→∞} θ_n < 1;

  3. (iii)

    lim_{n→∞} λ_n = 0.

Then the sequence {x_n} converges strongly to x*, the unique solution of the variational inequality (3.1). In particular, if f = 0, then the sequence {x_n} generated by

x_{n+1} = θ_n x_n + (1 − θ_n)(I − ε_n μB)T_r [P_{C_N}(I − γ∇g)] ⋯ [P_{C_1}(I − γ∇g)]x_n, n ≥ 0,

converges strongly to a solution of the following variational inequality:

⟨μBx*, x − x*⟩ ≥ 0, ∀x ∈ Γ.

Proof With U as in (4.6), (4.8) can be written as

x_{n+1} = θ_n x_n + (1 − θ_n){ε_n γf + (I − ε_n μB)T_r U}x_n, n ≥ 0.

Since T_r and U are nonexpansive, following the proof of Theorem 3.6 we conclude that the sequence {x_n} converges strongly to a point which is a fixed point of T_r (hence a solution of the mixed equilibrium problem MEP(F_1, F_2)) and a fixed point of U (hence a solution of MSSFP (4.1)). □

According to [22], we can obtain the following proposition.

Proposition 4.6 x* is a solution of MSSFP (4.1) if and only if f(x*) = 0.

Observe that if MSSFP (4.1) is consistent, then any solution x* is a minimizer of f with minimum value zero. Here the proximity function f is as follows:

f(x)= 1 2 i = 1 N α i x P C i x 2 + 1 2 j = 1 M β j A x P Q j A x 2 ,

where α_i > 0 for all 1 ≤ i ≤ N and β_j > 0 for all 1 ≤ j ≤ M. Then the gradient of f is

∇f(x) = ∑_{i=1}^{N} α_i (I − P_{C_i})x + ∑_{j=1}^{M} β_j A*(I − P_{Q_j})Ax.
(4.9)

It is claimed that the gradient ∇f is Lipschitz with the constant L = ∑_{i=1}^{N} α_i + ‖A‖² ∑_{j=1}^{M} β_j.

To see this, notice that projections and their complements are nonexpansive; thus both I − P_{C_i} and I − P_{Q_j} are nonexpansive for each i and j, and it follows that (1/L)∇f is a nonexpansive mapping. Therefore, we can use the gradient projection method to solve the minimization problem

min_{x∈Ω} f(x),

where Ω is a closed convex subset of H_1 whose intersection with the solution set of MSSFP (4.1) is nonempty, and thereby obtain a solution of the so-called constrained multiple-set split feasibility problem (CMSSFP):

find x ∈ Ω such that x solves (4.1).
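The two claims above (the Lipschitz bound for ∇f and the gradient projection method for the CMSSFP) can be sketched together numerically. All data below (N = M = 1, a ball C, a box Q, a half-plane Ω, a fixed matrix A) are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.5], [0.5, 1.0]])
alpha, beta = 1.0, 1.0
L = alpha + np.linalg.norm(A, 2) ** 2 * beta   # claimed Lipschitz constant of grad f

def proj_C(x, r=2.0):                           # C = closed ball of radius 2
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_Q(y):                                  # Q = [-1, 1]^2
    return np.clip(y, -1.0, 1.0)

def proj_Omega(x):                              # Omega = {x : x[1] >= 0}
    return np.array([x[0], max(x[1], 0.0)])

def grad_f(x):                                  # the gradient (4.9) with N = M = 1
    return alpha * (x - proj_C(x)) + beta * A.T @ (A @ x - proj_Q(A @ x))

# spot-check the Lipschitz claim: ||grad f(x) - grad f(y)|| <= L ||x - y||
worst = max(
    np.linalg.norm(grad_f(x) - grad_f(y)) / np.linalg.norm(x - y)
    for x, y in (rng.normal(size=(2, 2)) * 5 for _ in range(1000))
)

# gradient projection: x_{n+1} = P_Omega(x_n - (1/L) grad f(x_n))
x = np.array([4.0, -3.0])
for _ in range(500):
    x = proj_Omega(x - grad_f(x) / L)

residual = np.linalg.norm(x - proj_C(x)) + np.linalg.norm(A @ x - proj_Q(A @ x))
print(worst, L, residual)
```

The empirical ratio `worst` should not exceed L, and `residual` measures how far the final iterate is from solving the CMSSFP; for a consistent toy instance such as this one it decays to essentially zero.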

From Proposition 4.6 and Algorithm 3.3, we obtain the corresponding algorithm and the convergence theorems for MSSFP (4.1).

Algorithm 4.7 For an arbitrary initial point x_0, define the sequence {x_n}_{n≥0} iteratively by

x_{n+1} = θ_n x_n + (1 − θ_n){ ε_n γ g + (1/L)(I − ε_n μ B) T_r ∇f } x_n
(4.10)

for all n ≥ 0, where {ε_n} and {θ_n} are two real sequences in [0,1], T_r is defined in Lemma 2.8 and ∇f is given by (4.9).

Theorem 4.8 Let C be a nonempty closed convex subset of a real Hilbert space H. Let B be a k-Lipschitz and η-strongly monotone operator on H with k > 0, η > 0 and 0 < μ < 2η/k², and let the sequences {α_n} and {γ_n} satisfy conditions (i)-(iv) in Lemma 2.14. Let F_1, F_2 : C × C → R be two bifunctions satisfying conditions (f1)-(f4), (h1)-(h3) and (H) in Lemma 2.8. Let g : C → H be a β-contraction. Assume Ω := Γ ∩ MEP(F_1, F_2) ≠ ∅, where Γ is the solution set of MSSFP (4.1). For a given x_0 ∈ C, let the sequence {x_n} be generated by Algorithm 4.7, where {ε_n} and {θ_n} are two sequences in [0,1] satisfying the following conditions:

  (i) lim_{n→∞} ε_n = 0 and ∑_{n=1}^{∞} ε_n = ∞;

  (ii) 0 < lim inf_{n→∞} θ_n ≤ lim sup_{n→∞} θ_n < 1;

  (iii) lim_{n→∞} λ_n = 0.

Then the sequence {x_n} converges strongly to x*, which is the unique solution of the variational inequality (3.1). In particular, if g = 0, then the sequence {x_n} generated by

x_{n+1} = θ_n x_n + (1 − θ_n){ (1/L)(I − ε_n μ B) T_r ∇f } x_n, n ≥ 0,

converges strongly to a solution of the following variational inequality:

⟨μ B x*, x − x*⟩ ≥ 0, ∀x ∈ Γ.

Proof Since ∇f is Lipschitz with constant L, the mapping (1/L)∇f is nonexpansive. Thus, following the proof of Theorem 3.4, we obtain that the sequence {x_n} converges strongly to a point that solves MSSFP (4.1) and belongs to the solution set MEP(F_1, F_2) of the mixed equilibrium problem (1.3). □

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. doi:10.1007/BF02142692

  2. Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. doi:10.1088/0266-5611/18/2/310

  3. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120.

  4. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. doi:10.1088/0266-5611/21/6/017

  5. Lopez G, Martin V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison; 2009:243–279.

  6. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. doi:10.1088/0266-5611/21/5/009

  7. Wang F, Xu HK: Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 2011. doi:10.1016/j.na.2011.03.044

  8. Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. doi:10.1088/0266-5611/22/6/007

  9. Jaiboon C, Kumam P: A general iterative method for addressing mixed equilibrium problems and optimization problems. Nonlinear Anal. 2010, 73: 1180–1202. doi:10.1016/j.na.2010.04.041

  10. Kumam P, Jaiboon C: Approximation of common solutions to system of mixed equilibrium problems, variational inequality problem, and strict pseudo-contractive mappings. Fixed Point Theory Appl. 2011, 2011: Article ID 347204

  11. Saewan S, Kumam P: A modified hybrid projection method for solving generalized mixed equilibrium problems and fixed point problems in Banach spaces. Comput. Math. Appl. 2011, 62: 1723–1735. doi:10.1016/j.camwa.2011.06.014

  12. Saewan S, Kumam P: Convergence theorems for mixed equilibrium problems, variational inequality problem and uniformly quasi-asymptotically nonexpansive mappings. Appl. Math. Comput. 2011, 218: 3522–3538. doi:10.1016/j.amc.2011.08.099

  13. Xie DP: Auxiliary principle and iterative algorithm for a new system of generalized mixed equilibrium problems in Banach spaces. Appl. Math. Comput. 2011, 218: 3507–3514. doi:10.1016/j.amc.2011.08.097

  14. Yao YH, Noor MA, Liou YC, Kang SM: Some new algorithms for solving mixed equilibrium problems. Comput. Math. Appl. 2010, 60: 1351–1359. doi:10.1016/j.camwa.2010.06.016

  15. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.

  16. Cianciaruso F, Marino G, Muglia L, Hong Y: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010, 2010: Article ID 383740

  17. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018

  18. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. doi:10.1007/s10957-005-7564-z

  19. Browder FE: Fixed point theorems for noncompact mappings in Hilbert spaces. Proc. Natl. Acad. Sci. USA 1965, 53: 1272–1276.

  20. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256.

  21. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. doi:10.1016/j.jmaa.2004.04.059

  22. Yao YH, Chen RD, Marino G, Liou YC: Applications of fixed-point and optimization methods to the multiple-set split feasibility problem. J. Appl. Math. 2012, 2012: Article ID 927530


Acknowledgements

This work is supported in part by National Natural Science Foundation of China (71272148), the Ph.D. Programs Foundation of Ministry of Education of China (20120032110039) and China Postdoctoral Science Foundation (Grant No. 20100470783).

Author information


Corresponding author

Correspondence to Bin-Chao Deng.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Deng, BC., Chen, T. & Dong, QL. Viscosity iteration methods for a split feasibility problem and a mixed equilibrium problem in a Hilbert space. Fixed Point Theory Appl 2012, 226 (2012). https://doi.org/10.1186/1687-1812-2012-226

