
Iterative methods for constrained convex minimization problem in Hilbert spaces

Abstract

In this paper, based on Yamada's hybrid steepest descent method, a general iterative method is proposed for solving a constrained convex minimization problem. It is proved that the sequences generated by the proposed implicit and explicit schemes converge strongly to a solution of the constrained convex minimization problem, which also solves a certain variational inequality.

MSC:58E35, 47H09, 65J15.

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. Let $C$ be a nonempty, closed and convex subset of $H$. We need some nonlinear operators, which are introduced below.

Let $T, A\colon H \to H$ be nonlinear operators.

  • $T$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$.

  • $T$ is Lipschitz continuous if there exists a constant $L > 0$ such that $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in H$.

  • $A\colon H \to H$ is monotone if $\langle x - y, Ax - Ay \rangle \ge 0$ for all $x, y \in H$.

  • Given a number $\eta > 0$, $A\colon H \to H$ is $\eta$-strongly monotone if $\langle x - y, Ax - Ay \rangle \ge \eta\|x - y\|^2$ for all $x, y \in H$.

  • Given a number $\upsilon > 0$, $A\colon H \to H$ is $\upsilon$-inverse strongly monotone ($\upsilon$-ism) if $\langle x - y, Ax - Ay \rangle \ge \upsilon\|Ax - Ay\|^2$ for all $x, y \in H$.

It is known that inverse strongly monotone operators have been studied widely (see [1–3]) and applied to solve practical problems in various fields, for instance, traffic assignment problems (see [4, 5]).

  • $T\colon H \to H$ is said to be an averaged mapping if $T = (1 - \alpha)I + \alpha S$, where $\alpha$ is a number in $(0,1)$ and $S\colon H \to H$ is nonexpansive. In particular, projections are $(1/2)$-averaged mappings.

Averaged mappings have been investigated extensively; see [6–10].

Consider the following constrained convex minimization problem:

$$\min_{x \in C} f(x),$$
(1.1)

where $f\colon C \to \mathbb{R}$ is a real-valued convex function. Assume that the minimization problem (1.1) is consistent, and let $S$ denote its solution set. It is known that the gradient-projection algorithm is one of the powerful methods for solving the minimization problem (1.1) (see [11–18]). However, the minimization problem (1.1) may have more than one solution, so regularization is needed. We can use the idea of regularization to design an iterative algorithm for finding the minimum-norm solution of (1.1).

We consider the regularized minimization problem:

$$\min_{x \in C} f_\alpha(x) = f(x) + \frac{\alpha}{2}\|x\|^2.$$
(1.2)

Here, $\alpha > 0$ is the regularization parameter and $f$ is a convex function with an $L$-Lipschitz continuous gradient $\nabla f$. Let $x_{\min}$ be the minimum-norm solution of (1.1); namely, $x_{\min}$ satisfies the property:

$$\|x_{\min}\| = \min\{\|x\| : x \in S\}.$$

$x_{\min}$ can be obtained in two steps. First, observing that the gradient $\nabla f_\alpha = \nabla f + \alpha I$ is $(L + \alpha)$-Lipschitzian and $\alpha$-strongly monotone, the mapping $\mathrm{Proj}_C(I - \gamma\nabla f_\alpha)$ is a contraction with coefficient $\sqrt{1 - \gamma(2\alpha - \gamma(L + \alpha)^2)} \le 1 - \frac{1}{2}\alpha\gamma$, where $0 < \gamma \le \alpha/(L + \alpha)^2$. So, the regularized problem (1.2) has a unique solution, denoted $x_\alpha \in C$, which can be obtained via the Banach contraction principle. Second, letting $\alpha \to 0$ yields $x_\alpha \to x_{\min}$ in norm. The following result shows that for suitable choices of $\gamma$ and $\alpha$, the minimum-norm solution $x_{\min}$ can be obtained in a single step.
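To make the two-step procedure concrete, here is a minimal numerical sketch in Python. Everything concrete is an illustrative assumption, not the paper's data: $C$ is a box, $f$ is a quadratic, and for speed we iterate with $\gamma = 1/(L + \alpha)$ rather than the more conservative bound above (by cocoercivity of $\nabla f$ this still yields a contraction, with factor $\sqrt{1 - \alpha/(L + \alpha)}$).

```python
import numpy as np

# Toy data (assumed): C = [0,1]^n, f(x) = 0.5*||Ax - b||^2, so L = ||A^T A||.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
L = np.linalg.norm(A.T @ A, 2)

proj_C = lambda x: np.clip(x, 0.0, 1.0)   # metric projection onto the box C
grad_f = lambda x: A.T @ (A @ x - b)

def x_alpha(alpha, tol=1e-9):
    """Step 1: unique solution of the regularized problem (1.2), by Banach iteration."""
    gamma = 1.0 / (L + alpha)             # assumed step; contraction by cocoercivity
    x = np.zeros(n)
    while True:
        x_new = proj_C(x - gamma * (grad_f(x) + alpha * x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new

# Step 2: let alpha -> 0; x_alpha converges in norm to the minimum-norm solution.
for alpha in [1.0, 0.1, 0.01]:
    xa = x_alpha(alpha)
    print(f"alpha={alpha:<5} x_alpha={np.round(xa, 4)} norm={np.linalg.norm(xa):.4f}")
```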

Theorem 1.1 [9]

Assume that the minimization problem (1.1) is consistent and let $S$ denote its solution set. Assume that the gradient $\nabla f$ is $L$-Lipschitz continuous. Let $\{x_n\}_{n=0}^\infty$ be generated by the following iterative algorithm:

$$x_{n+1} = \mathrm{Proj}_C(I - \gamma_n\nabla f_{\alpha_n})x_n = \mathrm{Proj}_C\bigl(I - \gamma_n(\nabla f + \alpha_n I)\bigr)x_n, \quad n \ge 0.$$
(1.3)

Let $\{\gamma_n\}$ and $\{\alpha_n\}$ satisfy the following conditions:

  (i) $0 < \gamma_n \le \alpha_n/(L + \alpha_n)^2$ for all $n$;

  (ii) $\alpha_n \to 0$ (and $\gamma_n \to 0$) as $n \to \infty$;

  (iii) $\sum_{n=1}^\infty \alpha_n\gamma_n = \infty$;

  (iv) $\bigl(|\gamma_n - \gamma_{n-1}| + |\alpha_n\gamma_n - \alpha_{n-1}\gamma_{n-1}|\bigr)/(\alpha_n\gamma_n)^2 \to 0$ as $n \to \infty$.

Then $x_n \to x_{\min}$ as $n \to \infty$.
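A hedged sketch of scheme (1.3) on the same assumed toy problem as above. The parameter choice $\alpha_n = (n+1)^{-1/4}$ with $\gamma_n = \alpha_n/(L + \alpha_n)^2$ is one admissible option we supply for illustration: (i) holds with equality, (ii) and (iii) are immediate, and the difference quotients in (iv) are of order $n^{-1/4}$ and $n^{-1/2}$, so they tend to zero.

```python
import numpy as np

# Same assumed toy setup as the previous sketch (box C, quadratic f).
rng = np.random.default_rng(0)
n_dim = 5
A = rng.standard_normal((n_dim, n_dim)); b = rng.standard_normal(n_dim)
L = np.linalg.norm(A.T @ A, 2)
proj_C = lambda x: np.clip(x, 0.0, 1.0)
grad_f = lambda x: A.T @ (A @ x - b)

x = np.zeros(n_dim)
for k in range(1, 100000):
    alpha = (k + 1) ** -0.25
    gamma = alpha / (L + alpha) ** 2      # condition (i), with equality
    x = proj_C(x - gamma * (grad_f(x) + alpha * x))   # scheme (1.3)
print(np.round(x, 4))                      # approximates the minimum-norm solution
```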

Under the assumptions of Theorem 1.1, the sequence $\{\gamma_n\}$ is forced to tend to zero. If instead it is kept constant, only weak convergence is obtained, as follows.

Theorem 1.2 [19]

Assume that the minimization problem (1.1) is consistent and let $S$ denote its solution set. Assume that the gradient $\nabla f$ is $L$-Lipschitz continuous. Let $\{x_n\}_{n=0}^\infty$ be generated by the following iterative algorithm:

$$x_{n+1} = \mathrm{Proj}_C(I - \gamma\nabla f_{\alpha_n})x_n = \mathrm{Proj}_C\bigl(I - \gamma(\nabla f + \alpha_n I)\bigr)x_n, \quad n \ge 0.$$
(1.4)

Assume that $0 < \gamma < 2/L$ and $\sum_{n=1}^\infty \alpha_n < \infty$. Then $\{x_n\}_{n=0}^\infty$ converges weakly to a solution of the minimization problem (1.1).
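For contrast, a short sketch of scheme (1.4) with a constant step; in a finite-dimensional toy problem weak and strong convergence coincide, so the distinction is invisible numerically, but the limit need not be the minimum-norm solution. The parameter choices are our illustrative assumptions satisfying Theorem 1.2's hypotheses.

```python
import numpy as np

# Scheme (1.4): constant gamma in (0, 2/L), summable alpha_n.
def gpa_constant_step(proj_C, grad_f, L, x0, iters=50000):
    gamma = 1.0 / L                        # any constant in (0, 2/L)
    x = x0.copy()
    for k in range(iters):
        alpha = 1.0 / (k + 1) ** 2         # summable, as Theorem 1.2 requires
        x = proj_C(x - gamma * (grad_f(x) + alpha * x))
    return x                               # some solution of (1.1), possibly not x_min
```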

In 2001, Yamada [10] introduced the following hybrid steepest descent method:

$$x_{n+1} = (I - s_n\mu F)Tx_n,$$
(1.5)

where $F\colon H \to H$ is $k$-Lipschitzian and $\eta$-strongly monotone, and $0 < \mu < 2\eta/k^2$. It is proved that, under appropriate conditions on $\{s_n\}$, the sequence $\{x_n\}_{n=0}^\infty$ generated by (1.5) converges strongly to $x^* \in \mathrm{Fix}(T)$, which solves the variational inequality:

$$\langle F(x^*), x^* - z \rangle \le 0, \quad \forall z \in \mathrm{Fix}(T).$$
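A hedged sketch of Yamada's method (1.5) on assumed toy data: $T$ is the projection onto a box (nonexpansive, with $\mathrm{Fix}(T)$ equal to the box) and $F(x) = x - a$, which is $1$-Lipschitzian and $1$-strongly monotone; for this $F$ the variational inequality picks out the point of the box closest to $a$. The step choice $s_n = 1/n$ is our assumption.

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5])
T = lambda x: np.clip(x, 0.0, 1.0)        # nonexpansive; Fix(T) = [0,1]^3
F = lambda x: x - a                        # k = eta = 1
mu = 1.0                                   # 0 < mu < 2*eta/k^2 = 2

x = np.zeros(3)
for k in range(1, 10000):
    s = 1.0 / k                            # assumed admissible step sequence
    Tx = T(x)
    x = Tx - s * mu * F(Tx)                # x_{n+1} = (I - s_n mu F) T x_n
print(x)                                   # ~ Proj_{[0,1]^3}(a) = [1, 0, 0.5]
```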

In this paper, we introduce a modification of algorithm (1.4) which is based on Yamada’s method. It is proved that the sequence generated by our proposed algorithm converges strongly to a minimizer of (1.1), which is also a solution of a certain variational inequality.

2 Preliminaries

In this section, we introduce some useful properties and lemmas which will be used in the proofs for the main results in the next section.

Proposition 2.1 [7, 8]

Let the operators $S, T, V\colon H \to H$ be given:

  (i) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0,1)$, and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged.

  (ii) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1\cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0,1)$, then the composite $T_1T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$ (a short verification is sketched after this proposition).

  (iii) If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then

$$\bigcap_{i=1}^N \mathrm{Fix}(T_i) = \mathrm{Fix}(T_1\cdots T_N).$$

Here, $\mathrm{Fix}(T)$ denotes the set of fixed points of the mapping $T$; that is, $\mathrm{Fix}(T) := \{x \in H : Tx = x\}$.
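For the reader's convenience, here is a verification of the two-mapping case of (ii); this is a standard computation that we sketch ourselves, not a quotation from [7, 8].

```latex
% Write T_1 = (1-\alpha_1)I + \alpha_1 S_1 and T_2 = (1-\alpha_2)I + \alpha_2 S_2
% with S_1, S_2 nonexpansive. Expanding the composite,
\begin{aligned}
T_1T_2 &= (1-\alpha_1)T_2 + \alpha_1 S_1T_2 \\
       &= (1-\alpha_1)(1-\alpha_2)I + (1-\alpha_1)\alpha_2 S_2 + \alpha_1 S_1T_2 \\
       &= (1-\alpha)I + \alpha S, \qquad \alpha := \alpha_1 + \alpha_2 - \alpha_1\alpha_2,
\end{aligned}
% where S := \alpha^{-1}[(1-\alpha_1)\alpha_2 S_2 + \alpha_1 S_1T_2] is a convex
% combination of the nonexpansive mappings S_2 and S_1T_2 (the weights
% (1-\alpha_1)\alpha_2/\alpha and \alpha_1/\alpha sum to 1), hence nonexpansive.
```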

Proposition 2.2 [7, 20]

Let $T\colon H \to H$ be given. We have:

  (i) $T$ is nonexpansive if and only if the complement $I - T$ is $(1/2)$-ism;

  (ii) if $T$ is $\upsilon$-ism, then, for $\gamma > 0$, $\gamma T$ is $(\upsilon/\gamma)$-ism;

  (iii) $T$ is averaged if and only if the complement $I - T$ is $\upsilon$-ism for some $\upsilon > 1/2$; indeed, for $\alpha \in (0,1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.

The so-called demiclosed principle for nonexpansive mappings will often be used.

Lemma 2.3 (Demiclosed Principle [21])

Let $C$ be a closed and convex subset of a Hilbert space $H$ and let $T\colon C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. If $\{x_n\}_{n=1}^\infty$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}_{n=1}^\infty$ converges strongly to $y$, then $(I - T)x = y$. In particular, if $y = 0$, then $x \in \mathrm{Fix}(T)$.

Recall that the metric (nearest-point) projection $\mathrm{Proj}_C$ from a real Hilbert space $H$ onto a closed convex subset $C$ of $H$ is defined as follows: given $x \in H$, $\mathrm{Proj}_C x$ is the unique point in $C$ with the property

$$\|x - \mathrm{Proj}_C x\| = \inf\{\|x - y\| : y \in C\}.$$

$\mathrm{Proj}_C$ is characterized as follows.

Lemma 2.4 Let $C$ be a closed and convex subset of a real Hilbert space $H$. Given $x \in H$ and $y \in C$, then $y = \mathrm{Proj}_C x$ if and only if the following inequality holds:

$$\langle x - y, y - z \rangle \ge 0, \quad \forall z \in C.$$
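As a quick numerical illustration of Lemma 2.4 (our toy example, not from the paper): take $C$ to be the closed unit ball in $\mathbb{R}^3$, whose projection has the closed form $x/\max(1, \|x\|)$, and check the characterizing inequality against random points of $C$.

```python
import numpy as np

proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))   # Proj_C for the unit ball

x = np.array([3.0, -4.0, 0.0])
y = proj_ball(x)                          # y = Proj_C(x) = (0.6, -0.8, 0)

rng = np.random.default_rng(1)
for _ in range(1000):
    z = rng.standard_normal(3)
    z /= max(1.0, np.linalg.norm(z))      # force z into C
    assert np.dot(x - y, y - z) >= -1e-12 # <x - y, y - z> >= 0 for all z in C
```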

Lemma 2.5 Assume that $\{a_n\}_{n=0}^\infty$ is a sequence of nonnegative real numbers such that

$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n + \beta_n, \quad n \ge 0,$$

where $\{\gamma_n\}_{n=0}^\infty$ and $\{\beta_n\}_{n=0}^\infty$ are sequences in $(0,1)$ and $\{\delta_n\}_{n=0}^\infty$ is a sequence in $\mathbb{R}$ such that

  (i) $\sum_{n=0}^\infty \gamma_n = \infty$;

  (ii) either $\limsup_{n\to\infty} \delta_n \le 0$ or $\sum_{n=0}^\infty \gamma_n|\delta_n| < \infty$;

  (iii) $\sum_{n=0}^\infty \beta_n < \infty$.

Then $\lim_{n\to\infty} a_n = 0$.

We adopt the following notation:

  • $x_n \to x$ means that $x_n$ converges strongly to $x$;

  • $x_n \rightharpoonup x$ means that $x_n$ converges weakly to $x$.

3 Main results

Recall that throughout this paper, we use $S$ to denote the solution set of the constrained convex minimization problem (1.1).

Let $H$ be a real Hilbert space and $C$ a nonempty, closed and convex subset of $H$. Let $F\colon C \to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator with constants $k > 0$, $\eta > 0$, and let $0 < \mu < 2\eta/k^2$. Suppose that the gradient $\nabla f$ is $L$-Lipschitz continuous. We now consider the mapping $Q_s$ on $C$ defined by

$$Q_s(x) = \mathrm{Proj}_C(I - s\mu F)T_{\lambda_s}(x), \quad x \in C,$$

where $s \in (0,1)$ and $T_{\lambda_s}$ is nonexpansive. Let $T_{\lambda_s}$ and $\lambda_s$ satisfy the following conditions:

  (i) $\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_s}) = (1 - \theta_s)I + \theta_s T_{\lambda_s}$ and $\gamma \in (0, 2/L)$;

  (ii) $\theta_s = \frac{2 + \gamma(L + \lambda_s)}{4}$;

  (iii) $\lambda_s$ is continuous with respect to $s$ and $\lambda_s = o(s)$.

It is easy to see that $Q_s$ is a contraction. Indeed, for each $x, y \in C$ we have

$$\begin{aligned}
\|Q_s(x) - Q_s(y)\| &= \|\mathrm{Proj}_C(I - s\mu F)T_{\lambda_s}(x) - \mathrm{Proj}_C(I - s\mu F)T_{\lambda_s}(y)\| \\
&\le \|(I - s\mu F)T_{\lambda_s}(x) - (I - s\mu F)T_{\lambda_s}(y)\| \\
&\le (1 - s\tau)\|x - y\|,
\end{aligned}$$

where $\tau = \frac{1}{2}\mu(2\eta - \mu k^2)$. Hence, $Q_s$ has a unique fixed point in $C$, denoted by $x_s$, which uniquely solves the fixed-point equation

$$x_s = \mathrm{Proj}_C(I - s\mu F)T_{\lambda_s}(x_s).$$
(3.1)
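A hedged numerical sketch of the implicit scheme (3.1). Everything concrete here is an assumption for illustration: the toy problem (box $C$, quadratic $f$), the choice $F = I$ (so $k = \eta = 1$ and any $\mu \in (0,2)$ is admissible), and $\lambda_s = s^2 = o(s)$. The mapping $T_{\lambda_s}$ is recovered from condition (i), and each $x_s$ is computed by iterating the contraction $Q_s$.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
A = rng.standard_normal((m, m)); b = rng.standard_normal(m)
L = np.linalg.norm(A.T @ A, 2)
proj_C = lambda x: np.clip(x, 0.0, 1.0)
grad_f = lambda x: A.T @ (A @ x - b)

gamma = 1.0 / L                            # gamma in (0, 2/L)
mu = 1.0                                   # assumed F = I, so mu in (0, 2)
F = lambda x: x

def T_lam(x, lam):
    theta = (2.0 + gamma * (L + lam)) / 4.0        # condition (ii)
    # condition (i) solved for T: T = (Proj_C(I - gamma grad f_lam) - (1-theta)I)/theta
    return (proj_C(x - gamma * (grad_f(x) + lam * x)) - (1.0 - theta) * x) / theta

def x_s(s, lam, tol=1e-10):
    """Unique fixed point of the contraction Q_s, by Banach iteration."""
    x = np.zeros(m)
    while True:
        y = T_lam(x, lam)
        x_new = proj_C(y - s * mu * F(y))  # Q_s(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new

for s in [0.5, 0.1, 0.01]:
    print(s, np.round(x_s(s, lam=s ** 2), 4))      # lambda_s = s^2 = o(s)
```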

The following proposition summarizes the properties of the net $\{x_s\}$.

Proposition 3.1 Let $x_s$ be defined by (3.1). Then the following properties of the net $\{x_s\}$ hold:

  (a) $\{x_s\}$ is bounded for $s \in (0,1)$;

  (b) $\lim_{s\to 0}\|x_s - T_{\lambda_s}x_s\| = 0$;

  (c) $x_s$ defines a continuous curve from $(0,1)$ into $C$.

Proof It is well known that $\tilde{x} \in C$ solves the minimization problem (1.1) if and only if $\tilde{x}$ solves the fixed-point equation

$$\tilde{x} = \mathrm{Proj}_C(I - \gamma\nabla f)\tilde{x} = \frac{2 - \gamma L}{4}\tilde{x} + \frac{2 + \gamma L}{4}T\tilde{x},$$

where $0 < \gamma < 2/L$ is a constant. It is clear that $\tilde{x} = T\tilde{x}$, i.e., $\tilde{x} \in S = \mathrm{Fix}(T)$.
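The decomposition just used follows from Propositions 2.1 and 2.2; since the paper invokes it repeatedly, we sketch the computation (our gloss, with $T$ the nonexpansive mapping furnished by the definition of averagedness).

```latex
% Why Proj_C(I - \gamma\nabla f) = \tfrac{2-\gamma L}{4}I + \tfrac{2+\gamma L}{4}T:
% \nabla f is \tfrac1L-ism, so \gamma\nabla f is \tfrac{1}{\gamma L}-ism
% (Proposition 2.2(ii)), hence I - \gamma\nabla f is \tfrac{\gamma L}{2}-averaged
% (Proposition 2.2(iii); this needs \gamma L/2 \in (0,1), i.e. \gamma \in (0,2/L)).
% Proj_C is \tfrac12-averaged, so Proposition 2.1(ii) makes the composite
% \alpha-averaged with
\alpha = \frac12 + \frac{\gamma L}{2} - \frac12\cdot\frac{\gamma L}{2}
       = \frac{2 + \gamma L}{4},
% i.e. Proj_C(I - \gamma\nabla f) = (1-\alpha)I + \alpha T for some nonexpansive T.
% The same computation with \nabla f_{\lambda_s} in place of \nabla f gives
% \theta_s = \tfrac{2 + \gamma(L + \lambda_s)}{4}.
```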

(a) Take a fixed $p \in S$. We obtain that

$$\|x_s - p\| \le \|(I - s\mu F)T_{\lambda_s}(x_s) - p\| \le (1 - s\tau)\|x_s - p\| + (1 + s\mu k)\|T_{\lambda_s}(p) - T(p)\| + s\mu\|F(p)\|.$$

It follows that

$$\|x_s - p\| \le \frac{(1 + s\mu k)\|T_{\lambda_s}(p) - T(p)\|}{s\tau} + \frac{\mu}{\tau}\|F(p)\|.$$
(3.2)

For $x \in C$, note that

$$\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_s})x = (1 - \theta_s)x + \theta_s T_{\lambda_s}x$$

and

$$\mathrm{Proj}_C(I - \gamma\nabla f)x = (1 - \theta)x + \theta Tx,$$

where $\theta_s = \frac{2 + \gamma(L + \lambda_s)}{4}$ and $\theta = \frac{2 + \gamma L}{4}$.

Then we get

$$\|(\theta - \theta_s)x + \theta_s T_{\lambda_s}x - \theta Tx\| = \|\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_s})x - \mathrm{Proj}_C(I - \gamma\nabla f)x\| \le \gamma\lambda_s\|x\|.$$

Since $\theta_s = \frac{2 + \gamma(L + \lambda_s)}{4}$ and $\theta = \frac{2 + \gamma L}{4}$, there exists a real positive number $M > 0$ such that

$$\|T_{\lambda_s}x - Tx\| \le \frac{\lambda_s\gamma(5\|x\| + \|Tx\|)}{2 + \gamma(L + \lambda_s)} \le \lambda_s M\|x\|.$$
(3.3)

It follows from (3.2) and (3.3) that

$$\|x_s - p\| \le \frac{1 + s\mu k}{\tau}\cdot\frac{\lambda_s}{s}M\|p\| + \frac{\mu}{\tau}\|F(p)\|.$$

Since $\lambda_s = o(s)$, there exists a real positive number $M' > 0$ such that $\lambda_s/s \le M'$, and hence

$$\|x_s - p\| \le \frac{(1 + \mu k)M'M\|p\| + \mu\|F(p)\|}{\tau}.$$

Hence, $\{x_s\}$ is bounded.

(b) Note that the boundedness of $\{x_s\}$ implies that $\{FT_{\lambda_s}(x_s)\}$ is also bounded. Hence, by the definition of $\{x_s\}$, we have

$$\|x_s - T_{\lambda_s}x_s\| = \|\mathrm{Proj}_C(I - s\mu F)T_{\lambda_s}(x_s) - \mathrm{Proj}_C T_{\lambda_s}(x_s)\| \le s\mu\|FT_{\lambda_s}(x_s)\| \to 0 \quad (s \to 0).$$

(c) For $\gamma \in (0, 2/L)$, we have

$$\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_s}) = (1 - \theta_s)I + \theta_s T_{\lambda_s}$$

and

$$\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_{s_0}}) = (1 - \theta_{s_0})I + \theta_{s_0}T_{\lambda_{s_0}},$$

where $\theta_s = \frac{2 + \gamma(L + \lambda_s)}{4}$ and $\theta_{s_0} = \frac{2 + \gamma(L + \lambda_{s_0})}{4}$.

So, for $x_s \in C$, we get

$$\|T_{\lambda_s}(x_s) - T_{\lambda_{s_0}}(x_s)\| \le N|\lambda_s - \lambda_{s_0}|$$

for some appropriate constant $N > 0$ such that

$$N \le \gamma\|\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_s})x_s\| + 5\gamma\|x_s\|.$$

Now take $s, s_0 \in (0,1)$ and calculate

$$\begin{aligned}
\|x_s - x_{s_0}\| &\le \|(I - s\mu F)T_{\lambda_s}(x_s) - (I - s_0\mu F)T_{\lambda_{s_0}}(x_{s_0})\| \\
&\le \mu|s - s_0|\,\|FT_{\lambda_s}(x_s)\| + (1 + s_0\mu k)\|T_{\lambda_s}(x_s) - T_{\lambda_{s_0}}(x_s)\| + (1 - s_0\tau)\|x_s - x_{s_0}\|.
\end{aligned}$$

It follows that

$$\|x_s - x_{s_0}\| \le \frac{\mu\|FT_{\lambda_s}(x_s)\|}{s_0\tau}|s - s_0| + \frac{(1 + s_0\mu k)N}{s_0\tau}|\lambda_s - \lambda_{s_0}|.$$

Since $\{FT_{\lambda_s}(x_s)\}$ is bounded and $\lambda_s$ is continuous with respect to $s$, $x_s$ defines a continuous curve from $(0,1)$ into $C$. □

The following theorem shows that the net $\{x_s\}$ converges strongly as $s \to 0$ to a minimizer of (1.1), which solves a certain variational inequality.

Theorem 3.2 Let $H$ be a real Hilbert space and $C$ a nonempty, closed and convex subset of $H$. Let $F\colon C \to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator with constants $k > 0$, $\eta > 0$ such that $0 < \mu < 2\eta/k^2$. Suppose that the minimization problem (1.1) is consistent and let $S$ denote its solution set. Assume that the gradient $\nabla f$ is Lipschitzian with constant $L > 0$. Let $x_s$ be defined by (3.1), where the parameter $s \in (0,1)$ and $T_{\lambda_s}$ is nonexpansive. Let $T_{\lambda_s}$ and $\lambda_s$ satisfy the following conditions:

  (i) $\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_s}) = (1 - \theta_s)I + \theta_s T_{\lambda_s}$ and $\gamma \in (0, 2/L)$;

  (ii) $\theta_s = \frac{2 + \gamma(L + \lambda_s)}{4}$;

  (iii) $\lambda_s$ is continuous with respect to $s$ and $\lambda_s = o(s)$.

Then the net $\{x_s\}$ converges strongly as $s \to 0$ to a minimizer $x^*$ of (1.1), which solves the variational inequality

$$\langle Fx^*, x^* - z \rangle \le 0, \quad \forall z \in S.$$
(3.4)

Equivalently, we have $\mathrm{Proj}_S(I - \mu F)x^* = x^*$.

Proof It is easy to see the uniqueness of a solution of the variational inequality (3.4). Indeed, suppose that both $\tilde{x} \in S$ and $\hat{x} \in S$ are solutions to (3.4); then

$$\langle F\tilde{x}, \tilde{x} - \hat{x} \rangle \le 0$$
(3.5)

and

$$\langle F\hat{x}, \hat{x} - \tilde{x} \rangle \le 0.$$
(3.6)

Adding up (3.5) and (3.6) yields

$$\langle F\tilde{x} - F\hat{x}, \tilde{x} - \hat{x} \rangle \le 0.$$

The strong monotonicity of $F$ implies that $\tilde{x} = \hat{x}$, and the uniqueness is proved. Below we use $x^* \in S$ to denote the unique solution of the variational inequality (3.4).

Let us show that $x_s \to x^*$ as $s \to 0$. Set

$$y_s = (I - s\mu F)T_{\lambda_s}(x_s).$$

Then we have $x_s = \mathrm{Proj}_C\, y_s$. For any given $z \in S$, we get

$$\|x_s - z\|^2 = \langle x_s - y_s, x_s - z \rangle + \langle y_s - z, x_s - z \rangle.$$

Since $\mathrm{Proj}_C$ is the metric projection from $H$ onto $C$, we have

$$\langle y_s - x_s, z - x_s \rangle \le 0.$$

Note that $\mathrm{Proj}_C(I - \gamma\nabla f)z = z$ and $\mathrm{Proj}_C(I - \gamma\nabla f) = \frac{2 - \gamma L}{4}I + \frac{2 + \gamma L}{4}T$, so we get $z = Tz$, i.e., $z \in S = \mathrm{Fix}(T)$.

It follows from (3.7) that

$$\|x_s - z\|^2 \le \langle y_s - z, x_s - z \rangle \le (1 - s\tau)\|x_s - z\|^2 + (1 + s\mu k)\|T_{\lambda_s}(z) - T(z)\|\,\|x_s - z\| + s\mu\langle F(z), z - x_s \rangle.$$

By (3.3), we obtain that

$$\|x_s - z\|^2 \le \frac{1 + s\mu k}{\tau}\cdot\frac{\lambda_s}{s}M\|z\|\,\|x_s - z\| + \frac{\mu}{\tau}\langle F(z), z - x_s \rangle.$$
(3.8)

Since $\{x_s\}$ is bounded, we may take a sequence $\{s_n\}$ in $(0,1)$ such that $s_n \to 0$ and, passing to a subsequence if necessary, $x_{s_n} \rightharpoonup \bar{x}$.

By Proposition 3.1(b) and (3.3), we have

$$\|x_{s_n} - Tx_{s_n}\| \le \|x_{s_n} - T_{\lambda_{s_n}}x_{s_n}\| + \|T_{\lambda_{s_n}}x_{s_n} - Tx_{s_n}\| \le \|x_{s_n} - T_{\lambda_{s_n}}x_{s_n}\| + \lambda_{s_n}M\|x_{s_n}\| \to 0.$$

So, by Lemma 2.3, we get $\bar{x} \in \mathrm{Fix}(T) = S$.

Since $\lambda_s = o(s)$, we obtain from (3.8) that $x_{s_n} \to \bar{x} \in S$.

Next, we show that $\bar{x}$ solves the variational inequality (3.4). Observe that

$$x_s = \mathrm{Proj}_C y_s = \mathrm{Proj}_C y_s - y_s + (I - s\mu F)T_{\lambda_s}(x_s).$$

Hence, we conclude that

$$\mu F(x_s) = \frac{1}{s}(\mathrm{Proj}_C y_s - y_s) + \frac{1}{s}\bigl[(I - s\mu F)T_{\lambda_s}(x_s) - (I - s\mu F)(x_s)\bigr].$$

Since $T_{\lambda_s}$ is nonexpansive, $I - T_{\lambda_s}$ is monotone. Note that, for any given $z \in S$, $z = Tz$ and $\langle \mathrm{Proj}_C y_s - y_s, \mathrm{Proj}_C y_s - z \rangle \le 0$.

By (3.3), it follows that

$$\langle \mu F(x_s), x_s - z \rangle \le \frac{\lambda_s}{s}M\|z\|\,\|x_s - z\| + \mu k\|T_{\lambda_s}(x_s) - x_s\|\,\|x_s - z\|.$$
(3.9)

Since $\lambda_s = o(s)$, by Proposition 3.1(b), we obtain from (3.9) that

$$\langle \mu F(\bar{x}), \bar{x} - z \rangle \le 0.$$

So $\bar{x} \in S$ is a solution of the variational inequality (3.4), and $\bar{x} = x^*$ by uniqueness. Therefore, $x_s \to x^*$ as $s \to 0$.

The variational inequality (3.4) can be rewritten as

$$\langle (I - \mu F)x^* - x^*, x^* - z \rangle \ge 0, \quad \forall z \in S.$$

So, in terms of Lemma 2.4, it is equivalent to the following fixed-point equation:

$$\mathrm{Proj}_S(I - \mu F)x^* = x^*.$$

□

Next, we study the following iterative method. For an arbitrary initial guess $x_0 \in C$, we propose the following explicit scheme, which generates a sequence $\{x_n\}_{n=0}^\infty$:

$$x_{n+1} = \mathrm{Proj}_C(I - s_n\mu F)T_{\lambda_n}(x_n),$$
(3.10)

where the parameters $\{s_n\} \subset (0,1)$. Let $T_{\lambda_n}$ and $\lambda_n$ satisfy the following conditions:

  (i) $\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_n}) = (1 - \theta_n)I + \theta_n T_{\lambda_n}$ and $\gamma \in (0, 2/L)$;

  (ii) $\theta_n = \frac{2 + \gamma(L + \lambda_n)}{4}$;

  (iii) $\lambda_n = o(s_n)$.

We shall prove that the sequence $\{x_n\}_{n=0}^\infty$ converges strongly to a minimizer $x^* \in S$ of (1.1), which also solves the variational inequality (3.4).
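Before the formal statement, here is a hedged end-to-end sketch of scheme (3.10) in Python. The toy data (box $C$, quadratic $f$), the choice $F = I$ (so $k = \eta = 1$ and $\mu \in (0,2)$), and the parameters $s_n = 1/(n+1)$, $\lambda_n = 1/(n+1)^2$ are our illustrative assumptions; the latter satisfy conditions (C3)-(C5) of Theorem 3.3 below. With $F = I$, the variational inequality (3.4) reads $\langle x^*, x^* - z \rangle \le 0$ for all $z \in S$, so the limit is the minimum-norm minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
A = rng.standard_normal((m, m)); b = rng.standard_normal(m)
L = np.linalg.norm(A.T @ A, 2)
proj_C = lambda x: np.clip(x, 0.0, 1.0)
grad_f = lambda x: A.T @ (A @ x - b)
gamma, mu = 1.0 / L, 1.0                   # gamma in (0, 2/L); assumed F = I
F = lambda x: x

def T(x, lam):
    theta = (2.0 + gamma * (L + lam)) / 4.0        # (C2)
    # T_{lambda_n} recovered from (C1)
    return (proj_C(x - gamma * (grad_f(x) + lam * x)) - (1.0 - theta) * x) / theta

x = np.zeros(m)
for k in range(1, 100000):
    s_n, lam_n = 1.0 / (k + 1), 1.0 / (k + 1) ** 2 # satisfy (C3)-(C5)
    y = T(x, lam_n)
    x = proj_C(y - s_n * mu * F(y))                # scheme (3.10)
print(np.round(x, 4))                               # ~ minimum-norm minimizer
```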

Theorem 3.3 Let $H$ be a real Hilbert space and $C$ a nonempty, closed and convex subset of $H$. Let $F\colon C \to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator with constants $k > 0$, $\eta > 0$ such that $0 < \mu < 2\eta/k^2$. Suppose that the minimization problem (1.1) is consistent and let $S$ denote its solution set. Assume that the gradient $\nabla f$ is Lipschitzian with constant $L > 0$. Let $\{x_n\}_{n=0}^\infty$ be generated by algorithm (3.10) with parameters $\{s_n\} \subset (0,1)$. Let $T_{\lambda_n}$, $\lambda_n$ and $s_n$ satisfy the following conditions:

  (C1) $\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_n}) = (1 - \theta_n)I + \theta_n T_{\lambda_n}$ and $\gamma \in (0, 2/L)$;

  (C2) $\theta_n = \frac{2 + \gamma(L + \lambda_n)}{4}$ for all $n$;

  (C3) $\lim_{n\to\infty} s_n = 0$ and $\sum_{n=0}^\infty s_n = \infty$;

  (C4) $\sum_{n=0}^\infty |s_{n+1} - s_n| < \infty$;

  (C5) $\lambda_n = o(s_n)$ and $\sum_{n=0}^\infty |\lambda_{n+1} - \lambda_n| < \infty$.

Then the sequence $\{x_n\}$ generated by the explicit scheme (3.10) converges strongly to a minimizer $x^*$ of (1.1), which is also a solution of the variational inequality (3.4).

Proof It is well known that:

  (a) $\tilde{x} \in C$ solves the minimization problem (1.1) if and only if $\tilde{x}$ solves the fixed-point equation

$$\tilde{x} = \mathrm{Proj}_C(I - \gamma\nabla f)\tilde{x} = \frac{2 - \gamma L}{4}\tilde{x} + \frac{2 + \gamma L}{4}T\tilde{x},$$

  where $0 < \gamma < 2/L$ is a constant. It is clear that $\tilde{x} = T\tilde{x}$, i.e., $\tilde{x} \in S = \mathrm{Fix}(T)$.

  (b) The gradient $\nabla f$ is $(1/L)$-ism.

  (c) $\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_n})$ is $\frac{2 + \gamma(L + \lambda_n)}{4}$-averaged for $\gamma \in (0, 2/L)$; in particular, the following relation holds:

$$\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_n}) = \frac{2 - \gamma(L + \lambda_n)}{4}I + \frac{2 + \gamma(L + \lambda_n)}{4}T_{\lambda_n} = (1 - \theta_n)I + \theta_n T_{\lambda_n}.$$

We first observe that $\{x_n\}$ is bounded. Indeed, taking a fixed $p \in S$, we get

$$\|x_{n+1} - p\| \le \|(I - s_n\mu F)T_{\lambda_n}(x_n) - (I - s_n\mu F)T_{\lambda_n}(p)\| + \|(I - s_n\mu F)T_{\lambda_n}(p) - (I - s_n\mu F)T(p)\| + s_n\mu\|F(p)\|.$$

It follows that

$$\|x_{n+1} - p\| \le (1 - s_n\tau)\|x_n - p\| + (1 + s_n\mu k)\|T_{\lambda_n}(p) - T(p)\| + s_n\mu\|F(p)\|.$$

Note that, by using the same argument as in the proof of (3.3), there exists a real positive number $M > 0$ such that

$$\|T_{\lambda_n}p - Tp\| \le \frac{\lambda_n\gamma(5\|p\| + \|Tp\|)}{2 + \gamma(L + \lambda_n)} \le \lambda_n M\|p\|.$$
(3.11)

Since $\lambda_n = o(s_n)$, there exists a real positive number $M' > 0$ such that $\lambda_n/s_n \le M'$, and by (3.11) we get

$$\|x_{n+1} - p\| \le (1 - s_n\tau)\|x_n - p\| + s_n\bigl[(1 + \mu k)M'M\|p\| + \mu\|F(p)\|\bigr].$$

It follows by induction that

$$\|x_n - p\| \le \max\Bigl\{\|x_0 - p\|, \frac{\mu\|F(p)\| + (1 + \mu k)M'M\|p\|}{\tau}\Bigr\}, \quad n \ge 0.$$
(3.12)

Consequently, $\{x_n\}$ is bounded, and hence $\{T_{\lambda_n}(x_n)\}$ is also bounded.

We claim that

$$\|x_{n+1} - x_n\| \to 0.$$
(3.13)

Indeed, since

$$\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_n}) = \frac{2 - \gamma(L + \lambda_n)}{4}I + \frac{2 + \gamma(L + \lambda_n)}{4}T_{\lambda_n},$$

we obtain that

$$T_{\lambda_n} = \frac{4\,\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_n}) - [2 - \gamma(L + \lambda_n)]I}{2 + \gamma(L + \lambda_n)}.$$

By using the same argument as in the proof of Proposition 3.1(c), we obtain that

$$\|T_{\lambda_n}(x_{n-1}) - T_{\lambda_{n-1}}(x_{n-1})\| \le K|\lambda_n - \lambda_{n-1}|$$

for some appropriate constant $K > 0$ such that

$$K \le \gamma\|\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_n})(x_{n-1})\| + 5\gamma\|x_{n-1}\|, \quad n \ge 1.$$

Thus, we get

$$\|x_{n+1} - x_n\| \le (1 - s_n\tau)\|x_n - x_{n-1}\| + (1 + s_n\mu k)\|T_{\lambda_n}(x_{n-1}) - T_{\lambda_{n-1}}(x_{n-1})\| + \mu|s_n - s_{n-1}|\,\|FT_{\lambda_{n-1}}(x_{n-1})\|$$

for some appropriate constant $E > 0$ such that

$$E \ge \|FT_{\lambda_{n-1}}(x_{n-1})\|, \quad n \ge 1.$$

Consequently, we get

$$\|x_{n+1} - x_n\| \le (1 - s_n\tau)\|x_n - x_{n-1}\| + \mu E|s_n - s_{n-1}| + |\lambda_n - \lambda_{n-1}|(K + \mu kK).$$

By Lemma 2.5, we obtain $\|x_{n+1} - x_n\| \to 0$.

Next, we show that

$$\|x_n - T_{\lambda_n}x_n\| \to 0.$$
(3.14)

Indeed, it follows from (3.13) that

$$\|x_n - T_{\lambda_n}x_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - T_{\lambda_n}x_n\| \le \|x_n - x_{n+1}\| + s_n\mu\|FT_{\lambda_n}(x_n)\| \to 0.$$

Now we show that

$$\limsup_{n\to\infty}\langle x_n - x^*, -\mu F(x^*) \rangle \le 0,$$
(3.15)

where $x^* \in S$ is the unique solution of the variational inequality (3.4).

Indeed, take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that

$$\limsup_{n\to\infty}\langle x_n - x^*, -\mu F(x^*) \rangle = \lim_{k\to\infty}\langle x_{n_k} - x^*, -\mu F(x^*) \rangle.$$
(3.16)

Without loss of generality, we may assume that $x_{n_k} \rightharpoonup \tilde{x}$.

We observe that

$$\|x_n - Tx_n\| \le \|x_n - T_{\lambda_n}(x_n)\| + \|T_{\lambda_n}(x_n) - Tx_n\|.$$

It follows from (3.11) that

$$\|x_n - Tx_n\| \le \|x_n - T_{\lambda_n}(x_n)\| + \lambda_n M\|x_n\|.$$

By (3.14), we get $\|x_n - Tx_n\| \to 0$.

In terms of Lemma 2.3, we get $\tilde{x} \in \mathrm{Fix}(T) = S$.

Consequently, from (3.16) and the variational inequality (3.4), it follows that

$$\limsup_{n\to\infty}\langle x_n - x^*, -\mu F(x^*) \rangle = \langle \tilde{x} - x^*, -\mu F(x^*) \rangle \le 0.$$

Finally, we show that $x_n \to x^*$.

As a matter of fact, set

$$y_n = (I - s_n\mu F)T_{\lambda_n}(x_n), \quad n \ge 0.$$

Then $x_{n+1} = \mathrm{Proj}_C y_n - y_n + y_n$.

In terms of Lemma 2.4 and (3.11), we obtain

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le \langle y_n - x^*, x_{n+1} - x^* \rangle \\
&\le (1 - s_n\tau)\|x_n - x^*\|\,\|x_{n+1} - x^*\| + (1 + s_n\mu k)\lambda_n M\|x^*\|\,\|x_{n+1} - x^*\| + s_n\langle -\mu F(x^*), x_{n+1} - x^* \rangle.
\end{aligned}$$

It follows that

$$\|x_{n+1} - x^*\|^2 \le \frac{1 - s_n\tau}{2}\bigl(\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2\bigr) + \lambda_n L' + s_n\langle -\mu F(x^*), x_{n+1} - x^* \rangle;$$

since $\{x_n\}$ is bounded, we can take a constant $L' > 0$ such that

$$L' \ge (M + \mu kM)\|x^*\|\,\|x_{n+1} - x^*\|, \quad n \ge 0.$$

It then follows that

$$\|x_{n+1} - x^*\|^2 \le (1 - s_n\tau)\|x_n - x^*\|^2 + s_n\delta_n,$$
(3.17)

where $\delta_n = \frac{2}{1 + s_n\tau}\langle -\mu F(x^*), x_{n+1} - x^* \rangle + \frac{2\lambda_n}{s_n}L'$.

By (3.15) and $\lambda_n = o(s_n)$, we get $\limsup_{n\to\infty}\delta_n \le 0$. Now, applying Lemma 2.5 to (3.17), we conclude that $x_n \to x^*$ as $n \to \infty$. □

4 Application

In this section, we give an application of Theorem 3.3 to the split feasibility problem (SFP, for short), which was introduced by Censor and Elfving [22]. Since its inception in 1994, the SFP has received much attention (see [7, 23, 24]) due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy.

The SFP can be formulated mathematically as the problem of finding a point $x$ with the property

$$x \in C \quad\text{and}\quad Bx \in Q,$$
(4.1)

where $C$ and $Q$ are nonempty, closed and convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $B\colon H_1 \to H_2$ is a bounded linear operator.

It is clear that $x^*$ is a solution to the split feasibility problem (4.1) if and only if $x^* \in C$ and $\|Bx^* - \mathrm{Proj}_Q Bx^*\| = 0$. We define the proximity function $f$ by

$$f(x) = \frac{1}{2}\|Bx - \mathrm{Proj}_Q Bx\|^2,$$

and consider the constrained convex minimization problem

$$\min_{x \in C} f(x) = \min_{x \in C}\frac{1}{2}\|Bx - \mathrm{Proj}_Q Bx\|^2.$$
(4.2)

Then $x^*$ solves the split feasibility problem (4.1) if and only if $x^*$ solves the minimization problem (4.2) with the minimum value equal to 0. Byrne [7] introduced the so-called CQ algorithm to solve the SFP:

$$x_{n+1} = \mathrm{Proj}_C\bigl(I - \gamma B^*(I - \mathrm{Proj}_Q)B\bigr)x_n, \quad n \ge 0,$$
(4.3)

where $0 < \gamma < 2/\|B\|^2$. He proved that the sequence $\{x_n\}$ generated by (4.3) converges weakly to a solution of the SFP.
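A hedged sketch of the CQ algorithm (4.3) on assumed toy data; the sets ($C$ a box, $Q$ the closed unit ball), dimensions, and seed are our choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 6))
proj_C = lambda x: np.clip(x, 0.0, 1.0)               # C = [0,1]^6
proj_Q = lambda y: y / max(1.0, np.linalg.norm(y))    # Q = unit ball in R^4

gamma = 1.0 / np.linalg.norm(B, 2) ** 2               # 0 < gamma < 2/||B||^2
x = np.zeros(6)
for _ in range(5000):
    Bx = B @ x
    x = proj_C(x - gamma * B.T @ (Bx - proj_Q(Bx)))   # CQ iteration (4.3)
print(x, np.linalg.norm(B @ x - proj_Q(B @ x)))       # residual ~ 0 if consistent
```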

In order to obtain an iterative sequence that converges strongly to a solution of the SFP, we propose the following algorithm:

$$x_{n+1} = \mathrm{Proj}_C(I - s_n\mu F)T_{\lambda_n}(x_n),$$
(4.4)

where the parameters $\{s_n\} \subset (0,1)$ and $\{T_{\lambda_n}\}$ satisfy the following conditions:

  (C1) $\mathrm{Proj}_C\bigl(I - \gamma(B^*(I - \mathrm{Proj}_Q)B + \lambda_n I)\bigr) = (1 - \theta_n)I + \theta_n T_{\lambda_n}$ and $\gamma \in (0, 2/L)$;

  (C2) $\theta_n = \frac{2 + \gamma(L + \lambda_n)}{4}$ for all $n$,

and where $F\colon C \to H_1$ is a $k$-Lipschitzian and $\eta$-strongly monotone operator with constants $k > 0$, $\eta > 0$ such that $0 < \mu < 2\eta/k^2$. We can show that the sequence $\{x_n\}$ generated by (4.4) converges strongly to a solution of the SFP (4.1) if the sequence $\{s_n\} \subset (0,1)$ and the sequence $\{\lambda_n\}$ of parameters satisfy appropriate conditions.

Applying Theorem 3.3, we obtain the following result.

Theorem 4.1 Assume that the split feasibility problem (4.1) is consistent. Let the sequence $\{x_n\}$ be generated by (4.4), where the sequence $\{s_n\} \subset (0,1)$ and the sequence $\{\lambda_n\}$ satisfy conditions (C3)-(C5). Then the sequence $\{x_n\}$ converges strongly to a solution of the split feasibility problem (4.1).

Proof By the definition of the proximity function $f$, we have

$$\nabla f(x) = B^*(I - \mathrm{Proj}_Q)Bx,$$

and $\nabla f$ is Lipschitz continuous, i.e.,

$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|,$$

where $L = \|B\|^2$.

Set $f_{\lambda_n}(x) = f(x) + \frac{\lambda_n}{2}\|x\|^2$; consequently,

$$\nabla f_{\lambda_n}(x) = \nabla f(x) + \lambda_n x = B^*(I - \mathrm{Proj}_Q)Bx + \lambda_n x.$$

Then the iterative scheme (4.4) is equivalent to

$$x_{n+1} = \mathrm{Proj}_C(I - s_n\mu F)T_{\lambda_n}(x_n),$$

where the parameters $\{s_n\} \subset (0,1)$ and $\{T_{\lambda_n}\}$ satisfy the following conditions:

  (C1) $\mathrm{Proj}_C(I - \gamma\nabla f_{\lambda_n}) = (1 - \theta_n)I + \theta_n T_{\lambda_n}$ and $\gamma \in (0, 2/L)$;

  (C2) $\theta_n = \frac{2 + \gamma(L + \lambda_n)}{4}$ for all $n$.

The conclusion now follows immediately from Theorem 3.3. □
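Finally, a hedged sketch of scheme (4.4) for the SFP, reusing the assumed toy data of the CQ sketch above, with $F = I$ (so $k = \eta = 1$, $\mu \in (0,2)$) and the assumed parameters $s_n = 1/(n+1)$, $\lambda_n = 1/(n+1)^2$ satisfying (C3)-(C5).

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 6))
L = np.linalg.norm(B, 2) ** 2              # Lipschitz constant of grad f, L = ||B||^2
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: y / max(1.0, np.linalg.norm(y))
grad_f = lambda x: B.T @ (B @ x - proj_Q(B @ x))   # grad f = B*(I - Proj_Q)B
gamma, mu = 1.0 / L, 1.0                   # gamma in (0, 2/L); assumed F = I

def T(x, lam):                             # T_{lambda_n} recovered from (C1)-(C2)
    theta = (2.0 + gamma * (L + lam)) / 4.0
    return (proj_C(x - gamma * (grad_f(x) + lam * x)) - (1.0 - theta) * x) / theta

x = np.zeros(6)
for k in range(1, 100000):
    s_n, lam_n = 1.0 / (k + 1), 1.0 / (k + 1) ** 2
    y = T(x, lam_n)
    x = proj_C(y - s_n * mu * y)           # scheme (4.4) with F = I
print(np.round(x, 4))                      # strong-convergence analogue of the CQ run
```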

References

  1. Brézis H: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam; 1973.


  2. Jaiboon C, Kumam P: A hybrid extragradient viscosity approximation method for solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Fixed Point Theory Appl. 2009. doi:10.1155/2009/374815


  3. Jitpeera T, Katchang P, Kumam P: A viscosity of Cesàro mean approximation methods for a mixed equilibrium, variational inequalities, and fixed point problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/945051


  4. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965


  5. Han D, Lo HK: Solving non-additive traffic assignment problems, a descent method for cocoercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5


  6. Bauschke H, Borwein J: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. 10.1137/S0036144593251710


  7. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006


  8. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53: 475–504. 10.1080/02331930412331327157


  9. Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z


  10. Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, New York; 2001:473–504.


  11. Levitin ES, Polyak BT: Constrained minimization methods. Zh. Vychisl. Mat. Mat. Fiz. 1966, 6: 787–823.


  12. Calamai PH, Moré JJ: Projected gradient methods for linearly constrained problems. Math. Program. 1987, 39: 93–116. 10.1007/BF02592073


  13. Polyak BT: Introduction to Optimization. Optimization Software, New York; 1987.


  14. Su M, Xu HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1: 35–43.


  15. Jung JS: A general iterative approach to variational inequality problems and optimization problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/284363


  16. Jung JS: A general composite iterative method for generalized mixed equilibrium problems, variational inequality problems and optimization problems. J. Inequal. Appl. 2011. doi:10.1186/1029-242X-2011-51


  17. Jitpeera T, Kumam P: A new iterative algorithm for solving common solutions of generalized mixed equilibrium problems, fixed point problems and variational inclusion problems with minimization problems. Fixed Point Theory Appl. 2012, 2012: Article ID 111. doi:10.1186/1687-1812-2012-111


  18. Witthayarat U, Jitpeera T, Kumam P: A new modified hybrid steepest-descent by using a viscosity approximation method with a weakly contractive mapping for a system of equilibrium problems and fixed point problems with minimization problems. Abstr. Appl. Anal. 2012., 2012: Article ID 206345


  19. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018


  20. Martinez-Yanes C, Xu HK: Strong convergence of the CQ method for fixed-point iteration processes. Nonlinear Anal. 2006, 64: 2400–2411. 10.1016/j.na.2005.08.018


  21. Hundal H: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35–61. 10.1016/j.na.2003.11.004


  22. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692


  23. López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004


  24. Zhao JL, Zhang YJ, Yang QZ: Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 2012, 219: 1644–1653. 10.1016/j.amc.2012.08.005



Acknowledgements

The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported in part by The Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. ZXH2012K001), and by the Science Research Foundation of Civil Aviation University of China (No. 2012KYM03).

Author information



Corresponding author

Correspondence to Ming Tian.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All the authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Tian, M., Huang, LH. Iterative methods for constrained convex minimization problem in Hilbert spaces. Fixed Point Theory Appl 2013, 105 (2013). https://doi.org/10.1186/1687-1812-2013-105

