Open Access

Iterative algorithm of common solutions for a constrained convex minimization problem, a quasi-variational inclusion problem and the fixed point problem of a strictly pseudo-contractive mapping

Fixed Point Theory and Applications 2014, 2014:54

https://doi.org/10.1186/1687-1812-2014-54

Received: 8 October 2013

Accepted: 19 February 2014

Published: 4 March 2014

Abstract

In this paper, a new iterative algorithm is proposed for finding a common solution to a constrained convex minimization problem, a quasi-variational inclusion problem and the fixed point problem of a strictly pseudo-contractive mapping in a real Hilbert space. It is proved that the sequence generated by the proposed algorithm converges strongly to a common solution of the three problems described above. By applying this result to some special cases, several interesting results can be obtained.

MSC: 47H09, 47H10, 49J40.

Keywords

iterative algorithm; constrained convex minimization; quasi-variational inclusion; strictly pseudo-contractive; fixed point; convergence analysis

1 Introduction

Variational inequalities, introduced by Hartman and Stampacchia [1] in the early sixties, form one of the most interesting and intensively studied classes of mathematical problems. They are a powerful tool of modern mathematics and have been applied to a wide range of problems arising in mechanics, physics, optimization and control, nonlinear programming, transportation equilibrium and the engineering sciences (see, e.g., [2-4]). As a useful and important generalization of variational inequalities, quasi-variational inclusion problems have also been studied extensively (see, e.g., [5-14] and the references therein).

Throughout this paper, we assume that $H$ is a real Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and induced norm $\| \cdot \|$, and we let $C$ be a nonempty closed convex subset of $H$. $F(T)$ denotes the fixed point set of a mapping $T$.

Let $\Phi$ be a single-valued mapping of $C$ into $H$ and let $M$ be a multi-valued mapping with domain $D(M) = C$. The quasi-variational inclusion problem is to find $u \in C$ such that
$$0 \in \Phi(u) + Mu. \tag{1.1}$$
The solution set of the quasi-variational inclusion problem (1.1) is denoted by $\operatorname{VI}(C, \Phi, M)$. In particular, if $M = \partial\delta_C$, the subdifferential of the indicator function $\delta_C : H \to [0, \infty]$ of $C$, i.e.,
$$\delta_C(x) = \begin{cases} 0, & x \in C, \\ +\infty, & x \notin C, \end{cases}$$
then the variational inclusion problem (1.1) is equivalent to finding $u \in C$ such that
$$\langle \Phi(u), v - u \rangle \ge 0, \quad \forall v \in C. \tag{1.2}$$

This problem is called the Hartman-Stampacchia variational inequality problem [1]. The solution set of problem (1.2) is denoted by $\operatorname{VI}(C, \Phi)$.
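To make the equivalence explicit, here is a short derivation (a standard argument not spelled out above), using the fact that $\partial\delta_C(u)$ equals the normal cone $N_C(u) = \{ z \in H : \langle z, v - u \rangle \le 0, \forall v \in C \}$:
$$0 \in \Phi(u) + \partial\delta_C(u) \iff -\Phi(u) \in N_C(u) \iff \langle \Phi(u), v - u \rangle \ge 0, \quad \forall v \in C.$$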

Recall that $T : C \to C$ is called a $k$-strictly pseudo-contractive mapping if there exists a constant $k \in [0, 1)$ such that
$$\|Tx - Ty\|^2 \le \|x - y\|^2 + k\|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C, \tag{1.3}$$
and $T$ is called a pseudo-contractive mapping if
$$\|Tx - Ty\|^2 \le \|x - y\|^2 + \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C.$$
It is obvious that if $k = 0$, then the mapping $T$ is nonexpansive, that is,
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C.$$

It is well known that finding fixed points of nonexpansive mappings is an important topic in the theory of nonexpansive mappings and has wide applications in a number of applied areas, such as the convex feasibility problem [15, 16], the split feasibility problem [17], and image recovery and signal processing [18]. As an important generalization of nonexpansive mappings, strictly pseudo-contractive mappings have in turn become one of the most intensively studied classes of nonlinear mappings (see, e.g., [19-22]). In fact, strictly pseudo-contractive mappings have wider applications than nonexpansive mappings, for instance in solving inverse problems [23].

Consider the constrained convex minimization problem
$$\min_{x \in C} f(x), \tag{1.4}$$
where $f : C \to \mathbb{R}$ is a real-valued convex function, and assume that problem (1.4) is consistent (i.e., its solution set, denoted by $\Omega$, is nonempty). In order to find a common element of the solution set of the quasi-variational inclusion problem (1.1), the fixed point set of a $k$-strictly pseudo-contractive mapping (1.3) and the solution set $\Omega$ of (1.4), Ceng et al. [24] studied the following algorithm: take $x_0 = x \in C$ arbitrarily and
$$\begin{cases} y_n = P_C(x_n - \lambda_n \nabla f(x_n)), \\ t_n = P_C(x_n - \lambda_n \nabla f(y_n)), \\ z_n = (1 - \alpha_n - \hat{\alpha}_n)x_n + \alpha_n J_{M,\mu_n}(t_n - \mu_n \Phi(t_n)) + \hat{\alpha}_n S J_{M,\mu_n}(t_n - \mu_n \Phi(t_n)), \\ C_n = \{ z \in C : \|z_n - z\| \le \|x_n - z\| \}, \\ Q_n = \{ z \in C : \langle x_n - z, x - x_n \rangle \ge 0 \}, \\ x_{n+1} = P_{C_n \cap Q_n} x, \quad \forall n \ge 0. \end{cases}$$

Under appropriate conditions, they obtained a strong convergence theorem.

In this paper, motivated and inspired by the above facts, we propose a new algorithm as follows: take $x_0 \in C$ arbitrarily, set $C_0 = C$, and
$$\begin{cases} y_n = P_C(x_n - \lambda_n \nabla f(x_n)), \\ z_n = P_C(x_n - \lambda_n \nabla f(y_n)), \\ t_n = J_{M,\mu_n}(z_n - \mu_n \Phi(z_n)), \\ w_n = \alpha_n t_n + (1 - \alpha_n) S t_n, \\ C_{n+1} = \{ w \in C_n : \|w_n - w\| \le \|x_n - w\| \}, \\ x_{n+1} = P_{C_{n+1}} x_0, \quad \forall n \ge 0, \end{cases}$$

and we also establish a strong convergence theorem under certain conditions.

The remainder of this paper is organized as follows. In Section 2, some definitions and lemmas are provided for establishing the main results of this paper. In Section 3, we state and prove a strong convergence theorem for the proposed algorithm. Finally, in Section 4, we apply our conclusion to some special cases.

2 Preliminaries

Let $H$ be a real Hilbert space. It is well known that
$$\|x - y\|^2 = \|x\|^2 - 2\langle x, y \rangle + \|y\|^2$$
and
$$\|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2$$

for all $x, y \in H$ and $t \in [0, 1]$.

Now, we recall some definitions and useful results which will be used in the next section.

Definition 2.1 Let $T : C \to H$ be a nonlinear operator.
  (1) $T$ is Lipschitz continuous if there exists a constant $L > 0$ such that
      $$\|Tx - Ty\| \le L\|x - y\|, \quad \forall x, y \in C.$$
  (2) $T$ is monotone if
      $$\langle Tx - Ty, x - y \rangle \ge 0, \quad \forall x, y \in C.$$
  (3) $T$ is $\rho$-strongly monotone if there exists a number $\rho > 0$ such that
      $$\langle Tx - Ty, x - y \rangle \ge \rho\|x - y\|^2, \quad \forall x, y \in C.$$
  (4) $T$ is $\eta$-inverse-strongly monotone if there exists a number $\eta > 0$ such that
      $$\langle Tx - Ty, x - y \rangle \ge \eta\|Tx - Ty\|^2, \quad \forall x, y \in C.$$

It is easy to see that the following results hold: (i) a strongly monotone mapping is monotone; (ii) an $\eta$-inverse-strongly monotone mapping is monotone and $\frac{1}{\eta}$-Lipschitz continuous; (iii) if $T$ is $k$-strictly pseudo-contractive, then $I - T$ is $\frac{1-k}{2}$-inverse-strongly monotone.
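As a concrete illustration of Definition 2.1(4), the following sketch numerically checks that the gradient of the quadratic $f(x) = \frac{1}{2}\|Ax - b\|^2$ is $\eta$-inverse-strongly monotone with $\eta = 1/\|A\|^2$ (a consequence of the Baillon-Haddad theorem); the matrix $A$, the vector $b$ and the random test points are illustrative choices, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

# Gradient of f(x) = 0.5 * ||Ax - b||^2.
def grad(x):
    return A.T @ (A @ x - b)

# grad f is eta-inverse-strongly monotone with eta = 1 / ||A||^2,
# where ||A|| is the spectral norm (largest singular value).
eta = 1.0 / np.linalg.norm(A, 2) ** 2

for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lhs = (grad(x) - grad(y)) @ (x - y)             # <grad f(x) - grad f(y), x - y>
    rhs = eta * np.linalg.norm(grad(x) - grad(y)) ** 2
    assert lhs >= rhs - 1e-10                       # Definition 2.1(4)
print("inverse-strong monotonicity verified with eta =", eta)
```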

Definition 2.2 A multi-valued mapping $M : D(M) \subset H \to 2^H$ is called monotone if its graph $G(M) = \{(x, f) \in H \times H : x \in D(M), f \in Mx\}$ is a monotone set in $H \times H$, that is, $M$ is monotone if and only if
$$(x, f), (y, g) \in G(M) \implies \langle x - y, f - g \rangle \ge 0.$$

A monotone multi-valued mapping $M$ is called maximal if, for any $(x, f) \in H \times H$, $\langle x - y, f - g \rangle \ge 0$ for every $(y, g) \in G(M)$ implies $f \in Mx$.

Remark 2.1 [24]

The following statements are equivalent:
  (1) a multi-valued mapping $M$ is maximal monotone;
  (2) $M$ is monotone and $(I + \lambda M)D(M) = H$ for each $\lambda > 0$;
  (3) $M$ is monotone and its graph $G(M)$ is not properly contained in the graph of any other monotone mapping in $H$.
Definition 2.3 $P_C : H \to C$ is called the metric projection if, for every point $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$$
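For intuition, here is a minimal numerical sketch (the box set and the random test points are illustrative assumptions): for the box $C = [-1, 1]^4$ the metric projection is a componentwise clip, and the nonexpansiveness and variational characterization recorded in Lemma 2.1 below can be checked directly.

```python
import numpy as np

rng = np.random.default_rng(1)

def proj_C(x):
    # Metric projection onto the box C = [-1, 1]^4: componentwise clip.
    return np.clip(x, -1.0, 1.0)

for _ in range(1000):
    x, z = 3.0 * rng.standard_normal(4), 3.0 * rng.standard_normal(4)
    y = proj_C(z)  # some point of C
    # Nonexpansiveness (Lemma 2.1(2)): ||P_C x - P_C z|| <= ||x - z||.
    assert np.linalg.norm(proj_C(x) - proj_C(z)) <= np.linalg.norm(x - z) + 1e-12
    # Variational characterization (Lemma 2.1(3)): <x - P_C x, y - P_C x> <= 0.
    assert (x - proj_C(x)) @ (y - proj_C(x)) <= 1e-12
```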
Lemma 2.1 Let $C$ be a nonempty closed convex subset of $H$ and let $P_C : H \to C$ be the metric projection, then
  (1) $\|P_C x - P_C y\|^2 \le \langle x - y, P_C x - P_C y \rangle$, $\forall x, y \in H$;
  (2) moreover, $P_C$ is a nonexpansive mapping, i.e., $\|P_C x - P_C y\| \le \|x - y\|$, $\forall x, y \in H$;
  (3) $\langle x - P_C x, y - P_C x \rangle \le 0$, $\forall x \in H$, $y \in C$;
  (4) $\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2$, $\forall x \in H$, $y \in C$.
Definition 2.4 Let $M : D(M) \subset H \to 2^H$ be a multi-valued maximal monotone mapping, then the single-valued mapping $J_{M,\mu} : H \to H$ defined by
$$J_{M,\mu} x = (I + \mu M)^{-1} x, \quad \forall x \in H,$$

is called the resolvent operator associated with $M$, where $\mu$ is any positive number and $I$ is the identity mapping.

Lemma 2.2 [5, 24, 25]

The resolvent operator $J_{M,\mu}$ associated with $M$ is single-valued and firmly nonexpansive, i.e.,
$$\|J_{M,\mu} x - J_{M,\mu} y\|^2 \le \langle J_{M,\mu} x - J_{M,\mu} y, x - y \rangle, \quad \forall x, y \in H.$$

Consequently, $J_{M,\mu}$ is nonexpansive and monotone.
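As a concrete instance (our illustrative choice, not from the paper): if $M = \partial g$ with $g(x) = \|x\|_1$ on $\mathbb{R}^n$, then $M$ is maximal monotone and the resolvent $J_{M,\mu} = (I + \mu \partial g)^{-1}$ is the componentwise soft-thresholding operator, whose firm nonexpansiveness can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 0.7  # any positive number

def resolvent(x):
    # J_{M,mu} x = (I + mu * subdiff ||.||_1)^{-1} x: soft-thresholding by mu.
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    jx, jy = resolvent(x), resolvent(y)
    # Firm nonexpansiveness (Lemma 2.2): ||Jx - Jy||^2 <= <Jx - Jy, x - y>.
    assert np.linalg.norm(jx - jy) ** 2 <= (jx - jy) @ (x - y) + 1e-12
```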

Lemma 2.3 [24]

Let $M$ be a multi-valued maximal monotone mapping with $D(M) = C$. Then, for any given $\mu > 0$, $u \in C$ is a solution of problem (1.1) if and only if $u \in C$ satisfies
$$u = J_{M,\mu}(u - \mu\Phi(u)).$$
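This characterization follows directly from Definition 2.4 (a worked restatement of the standard argument):
$$0 \in \Phi(u) + Mu \iff u - \mu\Phi(u) \in (I + \mu M)u \iff u = (I + \mu M)^{-1}\big(u - \mu\Phi(u)\big) = J_{M,\mu}\big(u - \mu\Phi(u)\big),$$
where the last equivalence uses that the resolvent is single-valued (Lemma 2.2).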

Lemma 2.4 [24]

Let $M$ be a multi-valued maximal monotone mapping with $D(M) = C$ and let $\Phi : C \to H$ be a monotone, continuous and single-valued mapping. Then $M + \Phi$ is maximal monotone.

In the sequel, we use $x_n \rightharpoonup x$ and $x_n \to x$ to denote the weak convergence and strong convergence of the sequence $\{x_n\}$ in $H$, respectively.

Lemma 2.5 [26]

Let $C$ be a nonempty closed convex subset of $H$ and let $T : C \to C$ be a $k$-strictly pseudo-contractive mapping, then the following results hold:
  (1) inequality (1.3) is equivalent to
      $$\langle Tx - Ty, x - y \rangle \le \|x - y\|^2 - \frac{1-k}{2}\|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C;$$
  (2) $T$ is Lipschitz continuous with constant $\frac{1+k}{1-k}$, i.e.,
      $$\|Tx - Ty\| \le \frac{1+k}{1-k}\|x - y\|, \quad \forall x, y \in C;$$
  (3) (Demi-closedness principle) $I - T$ is demi-closed on $C$, that is,
      $$\text{if } x_n \rightharpoonup x \in C \text{ and } (I - T)x_n \to 0, \text{ then } x = Tx.$$
Lemma 2.6 Let $C$ be a nonempty closed convex subset of $H$ and let $T : C \to H$ be an $\eta$-inverse-strongly monotone mapping, then for all $x, y \in C$ and $\lambda > 0$, we have
$$\begin{aligned} \|(I - \lambda T)x - (I - \lambda T)y\|^2 &= \|(x - y) - \lambda(Tx - Ty)\|^2 \\ &= \|x - y\|^2 - 2\lambda\langle Tx - Ty, x - y \rangle + \lambda^2\|Tx - Ty\|^2 \\ &\le \|x - y\|^2 + \lambda(\lambda - 2\eta)\|Tx - Ty\|^2. \end{aligned}$$

So, if $0 < \lambda \le 2\eta$, then $I - \lambda T$ is a nonexpansive mapping from $C$ to $H$.

Lemma 2.7 [24, 27]

Let $A : C \to H$ be a monotone, Lipschitz continuous mapping, and let $N_C v$ be the normal cone to $C$ at $v \in C$, i.e.,
$$N_C v = \{ z \in H : \langle v - u, z \rangle \ge 0, \forall u \in C \}.$$
Define
$$Tv = \begin{cases} Av + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$

Then $T$ is maximal monotone and $0 \in Tv$ if and only if $v \in \operatorname{VI}(C, A)$.

For the minimization problem (1.4), if $f$ is (Fréchet) differentiable, then we have the following lemma.

Lemma 2.8 [28] (Optimality condition)

A necessary condition of optimality for a point $x^* \in C$ to be a solution of the minimization problem (1.4) is that $x^*$ solves the variational inequality
$$\langle \nabla f(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{2.1}$$
Equivalently, $x^* \in C$ solves the fixed point equation
$$x^* = P_C(x^* - \lambda\nabla f(x^*))$$

for every constant $\lambda > 0$. If, in addition, $f$ is convex, then the optimality condition (2.1) is also sufficient.
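A quick numerical sanity check of this fixed point characterization (the quadratic objective, the box constraint and the step size below are our illustrative choices): run the gradient-projection iteration $x_{n+1} = P_C(x_n - \lambda\nabla f(x_n))$ and verify that its limit satisfies the fixed point equation of Lemma 2.8.

```python
import numpy as np

# Minimize f(x) = 0.5 * ||x - c||^2 over the box C = [-1, 1]^3.
c = np.array([2.0, -0.3, 0.5])
grad = lambda x: x - c                    # grad f; 1-inverse-strongly monotone
proj_C = lambda x: np.clip(x, -1.0, 1.0)  # metric projection onto the box

lam = 0.8
x = np.zeros(3)
for _ in range(200):
    x = proj_C(x - lam * grad(x))         # gradient-projection iteration

# The limit satisfies x* = P_C(x* - lam * grad f(x*)) (Lemma 2.8).
assert np.allclose(x, proj_C(x - lam * grad(x)))
print(x)  # [1.0, -0.3, 0.5]: the projection of c onto C
```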

3 Main results

In this section, we prove a strong convergence theorem by an iterative algorithm for finding a solution of the constrained convex minimization problem (1.4), which is also a common solution of the quasi-variational inclusion problem (1.1) and the fixed point problem of a k-strictly pseudo-contractive mapping (1.3) in a real Hilbert space.

Theorem 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. For the minimization problem (1.4), assume that $f$ is (Fréchet) differentiable and the gradient $\nabla f$ is a $\rho$-inverse-strongly monotone mapping. Let $\Phi : C \to H$ be an $\eta$-inverse-strongly monotone mapping and $M$ be a maximal monotone mapping with $D(M) = C$, and let $S : C \to C$ be a $k$-strictly pseudo-contractive mapping such that $F = F(S) \cap \Omega \cap \operatorname{VI}(C, \Phi, M) \ne \emptyset$. Pick any $x_0 \in C$ and set $C_0 = C$. Let $\{x_n\} \subset C$ be a sequence generated by
$$\begin{cases} y_n = P_C(x_n - \lambda_n \nabla f(x_n)), \\ z_n = P_C(x_n - \lambda_n \nabla f(y_n)), \\ t_n = J_{M,\mu_n}(z_n - \mu_n \Phi(z_n)), \\ w_n = \alpha_n t_n + (1 - \alpha_n) S t_n, \\ C_{n+1} = \{ w \in C_n : \|w_n - w\| \le \|x_n - w\| \}, \\ x_{n+1} = P_{C_{n+1}} x_0, \quad \forall n \ge 0, \end{cases} \tag{3.1}$$
where the following conditions hold:
  (i) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < \rho$;
  (ii) $\epsilon \le \mu_n \le 2\eta$ for some $\epsilon \in (0, 2\eta]$;
  (iii) $k < \liminf_{n\to\infty} \alpha_n \le \limsup_{n\to\infty} \alpha_n < 1$.

Then the sequence $\{x_n\}$ converges strongly to $P_F x_0$.
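To make the scheme concrete, here is a minimal finite-dimensional sketch of (3.1). All problem data are illustrative assumptions, not from the paper: $C = [-1, 1]^3$, $f(x) = \frac{1}{2}\|x - c\|^2$ (so $\nabla f(x) = x - c$ and $\rho = 1$), $M = 0$ (so $J_{M,\mu_n} = I$, as in the proof of Theorem 4.1 below), $\Phi(x) = x - x^*$ with zero at the minimizer $x^* = P_C c$ (so $\eta = 1$), and $S$ a nonexpansive mapping with $F(S) = \{x^*\}$, so that $F = \{x^*\} \ne \emptyset$ as the theorem requires. Since $\|w_n - w\| \le \|x_n - w\|$ is affine in $w$, $C_{n+1}$ is the intersection of $C$ with accumulated halfspaces, and $P_{C_{n+1}} x_0$ is computed here by a small convex solve with scipy.

```python
import numpy as np
from scipy.optimize import minimize

# --- illustrative data (assumptions, not from the paper) ---
c      = np.array([2.0, -0.5, 0.4])             # f(x) = 0.5||x - c||^2, grad f(x) = x - c
x_star = np.array([1.0, -0.5, 0.4])             # = P_C c, minimizer of f over C = [-1, 1]^3
Phi    = lambda x: x - x_star                   # 1-inverse-strongly monotone, Phi(x_star) = 0
S      = lambda x: x_star + 0.5 * (x - x_star)  # nonexpansive (k = 0), F(S) = {x_star}
proj_C = lambda x: np.clip(x, -1.0, 1.0)
lam, mu, alpha = 0.5, 0.5, 0.5                  # satisfy conditions (i)-(iii)

def proj_Cn(x0, halfspaces):
    """P_{C_{n+1}} x0: nearest point of C satisfying all accumulated halfspaces."""
    cons = [{"type": "ineq", "fun": lambda w, a=a, b=b: b - a @ w} for a, b in halfspaces]
    res = minimize(lambda w: np.sum((w - x0) ** 2), x0, method="SLSQP",
                   bounds=[(-1.0, 1.0)] * 3, constraints=cons)
    return res.x

x0 = np.zeros(3)
x, halfspaces = x0.copy(), []
for _ in range(30):
    y = proj_C(x - lam * (x - c))         # y_n = P_C(x_n - lam_n grad f(x_n))
    z = proj_C(x - lam * (y - c))         # z_n = P_C(x_n - lam_n grad f(y_n))
    t = z - mu * Phi(z)                   # t_n = J_{M,mu_n}(z_n - mu_n Phi(z_n)), M = 0
    w = alpha * t + (1 - alpha) * S(t)    # w_n
    # ||w_n - u|| <= ||x_n - u||  <=>  2(x_n - w_n).u <= ||x_n||^2 - ||w_n||^2
    halfspaces.append((2.0 * (x - w), x @ x - w @ w))
    x = proj_Cn(x0, halfspaces)           # x_{n+1} = P_{C_{n+1}} x_0

print(x)  # approaches x_star = P_F x_0, since F = {x_star}
```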

Proof It is obvious that $C_n$ is closed for each $n \in \mathbb{N}$. Since $\|w_n - w\|^2 = \|w_n - x_n\|^2 + 2\langle w_n - x_n, x_n - w \rangle + \|x_n - w\|^2$, the inequality $\|w_n - w\| \le \|x_n - w\|$ is equivalent to
$$\|w_n - x_n\|^2 + 2\langle w_n - x_n, x_n - w \rangle \le 0,$$

whose left-hand side is affine in $w$; hence $C_n$ is convex for each $n \in \mathbb{N}$. Therefore, $\{x_n\}$ is well defined. We divide the proof into five steps.

Step 1. We show by induction that $F \subset C_n$ for each $n \in \mathbb{N}$.

It is obvious that $F \subset C_0 = C$. Suppose that $F \subset C_n$ for some $n \in \mathbb{N}$. Let $p \in F$; we have
$$\begin{aligned} \|w_n - p\|^2 &= \|\alpha_n t_n + (1 - \alpha_n) S t_n - p\|^2 = \|\alpha_n(t_n - p) + (1 - \alpha_n)(S t_n - p)\|^2 \\ &= \alpha_n\|t_n - p\|^2 + (1 - \alpha_n)\|S t_n - p\|^2 - \alpha_n(1 - \alpha_n)\|t_n - S t_n\|^2 \\ &\le \alpha_n\|t_n - p\|^2 + (1 - \alpha_n)\big[\|t_n - p\|^2 + k\|t_n - S t_n\|^2\big] - \alpha_n(1 - \alpha_n)\|t_n - S t_n\|^2 \\ &= \|t_n - p\|^2 + (1 - \alpha_n)(k - \alpha_n)\|t_n - S t_n\|^2 \\ &\le \|t_n - p\|^2. \end{aligned} \tag{3.2}$$
According to Lemma 2.3, Lemma 2.2 and Lemma 2.6, we get
$$\begin{aligned} \|t_n - p\|^2 &= \|J_{M,\mu_n}(z_n - \mu_n\Phi(z_n)) - J_{M,\mu_n}(p - \mu_n\Phi(p))\|^2 \\ &\le \|z_n - \mu_n\Phi(z_n) - (p - \mu_n\Phi(p))\|^2 \\ &\le \|z_n - p\|^2 + \mu_n(\mu_n - 2\eta)\|\Phi(z_n) - \Phi(p)\|^2 \\ &\le \|z_n - p\|^2. \end{aligned} \tag{3.3}$$
Since the gradient $\nabla f$ is a $\rho$-inverse-strongly monotone mapping and $p \in F \subset \Omega$, from Lemma 2.8 we have
$$\langle \nabla f(y_n) - \nabla f(p), y_n - p \rangle \ge 0 \quad \text{and} \quad \langle \nabla f(p), y_n - p \rangle \ge 0. \tag{3.4}$$
From Lemma 2.1(4) and (3.4), we obtain
$$\begin{aligned} \|z_n - p\|^2 &= \|P_C(x_n - \lambda_n\nabla f(y_n)) - p\|^2 \\ &\le \|x_n - \lambda_n\nabla f(y_n) - p\|^2 - \|x_n - \lambda_n\nabla f(y_n) - z_n\|^2 \\ &= \|x_n - p\|^2 - \|x_n - z_n\|^2 + 2\lambda_n\langle \nabla f(y_n), p - z_n \rangle \\ &= \|x_n - p\|^2 - \|x_n - z_n\|^2 + 2\lambda_n\big[\langle \nabla f(y_n) - \nabla f(p), p - y_n \rangle + \langle \nabla f(p), p - y_n \rangle + \langle \nabla f(y_n), y_n - z_n \rangle\big] \\ &\le \|x_n - p\|^2 - \|x_n - z_n\|^2 + 2\lambda_n\langle \nabla f(y_n), y_n - z_n \rangle \\ &= \|x_n - p\|^2 - \|x_n - y_n\|^2 - 2\langle x_n - y_n, y_n - z_n \rangle - \|y_n - z_n\|^2 + 2\lambda_n\langle \nabla f(y_n), y_n - z_n \rangle \\ &= \|x_n - p\|^2 - \|x_n - y_n\|^2 - \|y_n - z_n\|^2 + 2\langle x_n - \lambda_n\nabla f(y_n) - y_n, z_n - y_n \rangle. \end{aligned} \tag{3.5}$$
It is easy to see that the $\rho$-inverse-strongly monotone mapping $\nabla f$ is $\frac{1}{\rho}$-Lipschitz continuous. Further, since $y_n = P_C(x_n - \lambda_n\nabla f(x_n))$, by Lemma 2.1(3) we have
$$\begin{aligned} \langle x_n - \lambda_n\nabla f(y_n) - y_n, z_n - y_n \rangle &= \langle x_n - \lambda_n\nabla f(x_n) - y_n, z_n - y_n \rangle + \langle \lambda_n\nabla f(x_n) - \lambda_n\nabla f(y_n), z_n - y_n \rangle \\ &\le \langle \lambda_n\nabla f(x_n) - \lambda_n\nabla f(y_n), z_n - y_n \rangle \\ &\le \lambda_n\|\nabla f(x_n) - \nabla f(y_n)\|\|z_n - y_n\| \\ &\le \frac{\lambda_n}{\rho}\|x_n - y_n\|\|z_n - y_n\|. \end{aligned} \tag{3.6}$$
Substituting (3.6) into (3.5), we obtain
$$\begin{aligned} \|z_n - p\|^2 &\le \|x_n - p\|^2 - \|x_n - y_n\|^2 - \|y_n - z_n\|^2 + 2\langle x_n - \lambda_n\nabla f(y_n) - y_n, z_n - y_n \rangle \\ &\le \|x_n - p\|^2 - \|x_n - y_n\|^2 - \|y_n - z_n\|^2 + \frac{2\lambda_n}{\rho}\|x_n - y_n\|\|z_n - y_n\| \\ &\le \|x_n - p\|^2 - \|x_n - y_n\|^2 - \|y_n - z_n\|^2 + \frac{\lambda_n^2}{\rho^2}\|x_n - y_n\|^2 + \|z_n - y_n\|^2 \\ &= \|x_n - p\|^2 + \Big(\frac{\lambda_n^2}{\rho^2} - 1\Big)\|x_n - y_n\|^2 \\ &\le \|x_n - p\|^2. \end{aligned} \tag{3.7}$$
From (3.2), (3.3) and (3.7), we have
$$\|w_n - p\| \le \|x_n - p\|. \tag{3.8}$$

Hence $p \in C_{n+1}$, and since $p \in F$ was arbitrary, $F \subset C_{n+1}$. By induction, $F \subset C_n$ for all $n \in \mathbb{N}$.

Step 2. We prove that $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$ and $\lim_{n\to\infty}\|x_n - w_n\| = 0$.

Let $x^* = P_F x_0$. From $x_n = P_{C_n} x_0$ and $x^* \in F \subset C_n$, we obtain
$$\|x_n - x_0\| \le \|x^* - x_0\|. \tag{3.9}$$
Then $\{x_n\}$ is bounded. This implies that $\{z_n\}$, $\{t_n\}$ and $\{w_n\}$ are also bounded. Since $x_n = P_{C_n} x_0$ and $x_{n+1} \in C_{n+1} \subset C_n$, we have
$$\|x_n - x_0\| \le \|x_{n+1} - x_0\|.$$
Therefore $\lim_{n\to\infty}\|x_n - x_0\|$ exists. From $x_n = P_{C_n} x_0$, $x_{n+1} \in C_{n+1} \subset C_n$ and Lemma 2.1(4), we obtain
$$0 \le \|x_{n+1} - x_n\|^2 \le \|x_0 - x_{n+1}\|^2 - \|x_0 - x_n\|^2,$$
which implies
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0. \tag{3.10}$$
It follows from $x_{n+1} \in C_{n+1}$ that $\|w_n - x_{n+1}\| \le \|x_n - x_{n+1}\|$, and hence
$$\|x_n - w_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - w_n\| \le 2\|x_n - x_{n+1}\|. \tag{3.11}$$
From (3.10) and (3.11), we have
$$\lim_{n\to\infty}\|x_n - w_n\| = 0. \tag{3.12}$$

Step 3. We show that $\lim_{n\to\infty}\|t_n - S t_n\| = 0$, $\lim_{n\to\infty}\|x_n - z_n\| = 0$ and $\lim_{n\to\infty}\|x_n - t_n\| = 0$.

For $p \in F$, from (3.2), (3.3) and (3.7), we have
$$\begin{aligned} \|w_n - p\|^2 &\le \|t_n - p\|^2 + (1 - \alpha_n)(k - \alpha_n)\|t_n - S t_n\|^2 \\ &\le \|z_n - p\|^2 + (1 - \alpha_n)(k - \alpha_n)\|t_n - S t_n\|^2 \\ &\le \|x_n - p\|^2 + \Big(\frac{\lambda_n^2}{\rho^2} - 1\Big)\|x_n - y_n\|^2 + (1 - \alpha_n)(k - \alpha_n)\|t_n - S t_n\|^2. \end{aligned}$$
Then
$$\begin{aligned} \Big(1 - \frac{\lambda_n^2}{\rho^2}\Big)\|x_n - y_n\|^2 + (1 - \alpha_n)(\alpha_n - k)\|t_n - S t_n\|^2 &\le \|x_n - p\|^2 - \|w_n - p\|^2 \\ &\le \big(\|x_n - p\| + \|w_n - p\|\big)\|x_n - w_n\|. \end{aligned} \tag{3.13}$$
Since $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < \rho$ and $k < \liminf_{n\to\infty}\alpha_n \le \limsup_{n\to\infty}\alpha_n < 1$, from (3.12) and (3.13) we get
$$\lim_{n\to\infty}\|x_n - y_n\| = 0 \tag{3.14}$$
and
$$\lim_{n\to\infty}\|t_n - S t_n\| = 0. \tag{3.15}$$
As $\nabla f$ is $\frac{1}{\rho}$-Lipschitz continuous, we have
$$\begin{aligned} \|z_n - y_n\| &= \|P_C(x_n - \lambda_n\nabla f(y_n)) - P_C(x_n - \lambda_n\nabla f(x_n))\| \\ &\le \|x_n - \lambda_n\nabla f(y_n) - (x_n - \lambda_n\nabla f(x_n))\| \\ &= \lambda_n\|\nabla f(y_n) - \nabla f(x_n)\| \le \frac{\lambda_n}{\rho}\|x_n - y_n\|. \end{aligned}$$
Hence
$$\lim_{n\to\infty}\|z_n - y_n\| = 0. \tag{3.16}$$
From (3.14) and (3.16), we obtain
$$\lim_{n\to\infty}\|x_n - z_n\| = 0. \tag{3.17}$$
We observe that
$$\|w_n - t_n\| = \|\alpha_n t_n + (1 - \alpha_n)S t_n - t_n\| = (1 - \alpha_n)\|S t_n - t_n\| \le \|S t_n - t_n\|. \tag{3.18}$$
From (3.15), we get
$$\lim_{n\to\infty}\|w_n - t_n\| = 0. \tag{3.19}$$
Combining (3.12) and (3.19), we have
$$\lim_{n\to\infty}\|x_n - t_n\| = 0. \tag{3.20}$$
Step 4. Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ which converges weakly to some $u$. We show that
$$u \in F = F(S) \cap \Omega \cap \operatorname{VI}(C, \Phi, M).$$

Indeed, firstly we show $u \in F(S)$. Since $\|x_n - t_n\| \to 0$ and $x_{n_i} \rightharpoonup u$, we have $t_{n_i} \rightharpoonup u$. From $\|S t_n - t_n\| \to 0$, we obtain $\|S t_{n_i} - t_{n_i}\| \to 0$ as $i \to \infty$. By Lemma 2.5 (demi-closedness principle), we can conclude that $u \in F(S)$.

Secondly, we show $u \in \Omega$. Since $z_n = P_C(x_n - \lambda_n\nabla f(y_n))$, by Lemma 2.1(3) we have
$$\langle x_n - \lambda_n\nabla f(y_n) - z_n, z_n - v \rangle \ge 0, \quad \forall v \in C,$$
that is,
$$\Big\langle v - z_n, \frac{z_n - x_n}{\lambda_n} + \nabla f(y_n) \Big\rangle \ge 0, \quad \forall v \in C.$$
Let
$$Tv = \begin{cases} \nabla f(v) + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$
Then, from Lemma 2.7, we know that $T$ is maximal monotone and $0 \in Tv$ if and only if $v \in \operatorname{VI}(C, \nabla f)$. Let $G(T)$ be the graph of $T$ and let $(v, h) \in G(T)$. Then we have $h \in Tv = \nabla f(v) + N_C v$, and hence $h - \nabla f(v) \in N_C v$. So we have
$$\langle v - z, h - \nabla f(v) \rangle \ge 0, \quad \forall z \in C.$$
Therefore,
$$\begin{aligned} \langle v - z_{n_i}, h \rangle &\ge \langle v - z_{n_i}, \nabla f(v) \rangle \\ &\ge \langle v - z_{n_i}, \nabla f(v) \rangle - \Big\langle v - z_{n_i}, \frac{z_{n_i} - x_{n_i}}{\lambda_{n_i}} + \nabla f(y_{n_i}) \Big\rangle \\ &= \langle v - z_{n_i}, \nabla f(v) - \nabla f(z_{n_i}) \rangle + \langle v - z_{n_i}, \nabla f(z_{n_i}) - \nabla f(y_{n_i}) \rangle - \Big\langle v - z_{n_i}, \frac{z_{n_i} - x_{n_i}}{\lambda_{n_i}} \Big\rangle \\ &\ge \langle v - z_{n_i}, \nabla f(z_{n_i}) - \nabla f(y_{n_i}) \rangle - \Big\langle v - z_{n_i}, \frac{z_{n_i} - x_{n_i}}{\lambda_{n_i}} \Big\rangle. \end{aligned} \tag{3.21}$$

Since $\|x_n - y_n\| \to 0$ and $\|x_n - z_n\| \to 0$, we have $y_{n_i} \rightharpoonup u$ and $z_{n_i} \rightharpoonup u$. Moreover, $\|z_{n_i} - y_{n_i}\| \to 0$, $\nabla f$ is Lipschitz continuous and $\{\lambda_{n_i}\}$ is bounded away from zero, so both terms on the right-hand side of (3.21) tend to zero; hence, letting $i \to \infty$ in (3.21), we obtain $\langle v - u, h \rangle \ge 0$. Since $T$ is maximal monotone, we have $0 \in Tu$ and hence $u \in \operatorname{VI}(C, \nabla f)$. According to Lemma 2.8, we obtain $u \in \Omega$.

Finally, let us show $u \in \operatorname{VI}(C, \Phi, M)$. Since $\Phi : C \to H$ is $\eta$-inverse-strongly monotone and $M$ is maximal monotone, by Lemma 2.4 we know that $M + \Phi$ is maximal monotone. Take a fixed $(y, g) \in G(M + \Phi)$ arbitrarily. Then we have $g \in \Phi(y) + My$, that is,
$$g - \Phi(y) \in My.$$
Since $t_{n_i} = J_{M,\mu_{n_i}}(z_{n_i} - \mu_{n_i}\Phi(z_{n_i}))$, we have
$$\frac{1}{\mu_{n_i}}\big(z_{n_i} - \mu_{n_i}\Phi(z_{n_i}) - t_{n_i}\big) \in M t_{n_i}.$$
Therefore, by the monotonicity of $M$,
$$\Big\langle y - t_{n_i}, g - \Phi(y) - \frac{1}{\mu_{n_i}}\big(z_{n_i} - \mu_{n_i}\Phi(z_{n_i}) - t_{n_i}\big) \Big\rangle \ge 0,$$
which hence yields
$$\begin{aligned} \langle y - t_{n_i}, g \rangle &\ge \Big\langle y - t_{n_i}, \Phi(y) + \frac{1}{\mu_{n_i}}\big(z_{n_i} - \mu_{n_i}\Phi(z_{n_i}) - t_{n_i}\big) \Big\rangle \\ &= \langle y - t_{n_i}, \Phi(y) - \Phi(z_{n_i}) \rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle \\ &\ge \eta\|\Phi(y) - \Phi(t_{n_i})\|^2 + \langle y - t_{n_i}, \Phi(t_{n_i}) - \Phi(z_{n_i}) \rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle \\ &\ge \langle y - t_{n_i}, \Phi(t_{n_i}) - \Phi(z_{n_i}) \rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle. \end{aligned} \tag{3.22}$$
Observe that
$$\begin{aligned} \Big| \langle y - t_{n_i}, \Phi(t_{n_i}) - \Phi(z_{n_i}) \rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle \Big| &\le \|y - t_{n_i}\|\|\Phi(t_{n_i}) - \Phi(z_{n_i})\| + \frac{1}{\mu_{n_i}}\|y - t_{n_i}\|\|z_{n_i} - t_{n_i}\| \\ &\le \frac{1}{\eta}\|y - t_{n_i}\|\|t_{n_i} - z_{n_i}\| + \frac{1}{\epsilon}\|y - t_{n_i}\|\|z_{n_i} - t_{n_i}\| \\ &= \Big(\frac{1}{\eta} + \frac{1}{\epsilon}\Big)\|y - t_{n_i}\|\|z_{n_i} - t_{n_i}\|. \end{aligned}$$
By $\|x_n - z_n\| \to 0$ and $\|x_n - t_n\| \to 0$, we have $\|z_n - t_n\| \to 0$. Then
$$\lim_{i\to\infty}\Big| \langle y - t_{n_i}, \Phi(t_{n_i}) - \Phi(z_{n_i}) \rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle \Big| = 0.$$
Letting $i \to \infty$ in (3.22) and using $t_{n_i} \rightharpoonup u$, we get
$$\langle y - u, g - 0 \rangle = \langle y - u, g \rangle \ge 0.$$
Since $M + \Phi$ is maximal monotone, this implies that $0 \in \Phi(u) + Mu$, and hence $u \in \operatorname{VI}(C, \Phi, M)$. Therefore,
$$u \in F = F(S) \cap \Omega \cap \operatorname{VI}(C, \Phi, M).$$

Step 5. We show that $x_n \to x^*$, where $x^* = P_F x_0$.

Indeed, from $x^* = P_F x_0$, $u \in F = F(S) \cap \Omega \cap \operatorname{VI}(C, \Phi, M)$, the weak lower semicontinuity of the norm and (3.9), we have
$$\|x^* - x_0\| \le \|u - x_0\| \le \liminf_{i\to\infty}\|x_{n_i} - x_0\| \le \limsup_{i\to\infty}\|x_{n_i} - x_0\| \le \|x^* - x_0\|.$$
Then
$$\lim_{i\to\infty}\|x_{n_i} - x_0\| = \|u - x_0\|.$$
From $x_{n_i} - x_0 \rightharpoonup u - x_0$ and the Kadec-Klee property of $H$, we have $x_{n_i} - x_0 \to u - x_0$, and hence $x_{n_i} \to u$. Since $x_{n_i} = P_{C_{n_i}} x_0$ and $x^* \in F \subset C_{n_i}$, we have, by Lemma 2.1(3),
$$\|x^* - x_{n_i}\|^2 = \langle x^* - x_{n_i}, x_0 - x_{n_i} \rangle + \langle x^* - x_{n_i}, x^* - x_0 \rangle \le \langle x^* - x_{n_i}, x^* - x_0 \rangle.$$
Letting $i \to \infty$, by $u \in F$, $x^* = P_F x_0$ and Lemma 2.1(3), we have
$$\|x^* - u\|^2 \le \langle x^* - u, x^* - x_0 \rangle \le 0.$$

Hence $u = x^*$, which implies that $x_n \to x^*$. This completes the proof. □

4 Applications

From Theorem 3.1, we can obtain the following theorems.

Theorem 4.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. For the minimization problem (1.4), assume that $f$ is (Fréchet) differentiable and the gradient $\nabla f$ is a $\rho$-inverse-strongly monotone mapping. Let $S : C \to C$ be a $k$-strictly pseudo-contractive mapping such that $F = F(S) \cap \Omega \ne \emptyset$. Pick any $x_0 \in C$ and set $C_0 = C$. Let $\{x_n\} \subset C$ be a sequence generated by
$$\begin{cases} y_n = P_C(x_n - \lambda_n\nabla f(x_n)), \\ z_n = P_C(x_n - \lambda_n\nabla f(y_n)), \\ w_n = \alpha_n z_n + (1 - \alpha_n) S z_n, \\ C_{n+1} = \{ w \in C_n : \|w_n - w\| \le \|x_n - w\| \}, \\ x_{n+1} = P_{C_{n+1}} x_0, \quad \forall n \ge 0, \end{cases} \tag{4.1}$$
where the following conditions hold:
  (i) $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < \rho$;
  (ii) $k < \liminf_{n\to\infty}\alpha_n \le \limsup_{n\to\infty}\alpha_n < 1$.

Then the sequence $\{x_n\}$ converges strongly to $P_F x_0$.

Proof Let $\Phi = M = 0$ in Theorem 3.1; then $\operatorname{VI}(C, 0, 0) = C$ and $F = F(S) \cap \Omega \cap \operatorname{VI}(C, 0, 0) = F(S) \cap \Omega$. Let $\eta$ be any positive number and take any sequence $\{\mu_n\} \subset [\epsilon, 2\eta]$ for some $\epsilon \in (0, 2\eta]$. In addition, we have
$$t_n = J_{M,\mu_n}(z_n - \mu_n\Phi(z_n)) = (I + \mu_n M)^{-1} z_n = z_n.$$

Therefore, by Theorem 3.1 we obtain the expected result. □

Theorem 4.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $S : C \to C$ be a nonexpansive mapping such that $F(S) \ne \emptyset$. Pick any $x_0 \in C$ and set $C_0 = C$. Let $\{x_n\} \subset C$ be a sequence generated by
$$\begin{cases} w_n = \alpha_n x_n + (1 - \alpha_n) S x_n, \\ C_{n+1} = \{ w \in C_n : \|w_n - w\| \le \|x_n - w\| \}, \\ x_{n+1} = P_{C_{n+1}} x_0, \quad \forall n \ge 0, \end{cases} \tag{4.2}$$

where the following condition holds: $0 < \liminf_{n\to\infty}\alpha_n \le \limsup_{n\to\infty}\alpha_n < 1$. Then the sequence $\{x_n\}$ converges strongly to $P_{F(S)} x_0$.

Proof Let $f = \Phi = M = 0$ and $k = 0$ in Theorem 3.1. Let $\rho$ and $\eta$ be any positive numbers. Take any sequence $\{\lambda_n\}$ which satisfies $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < \rho$ and take any sequence $\{\mu_n\} \subset [\epsilon, 2\eta]$ for some $\epsilon \in (0, 2\eta]$. In this case, we have
$$\begin{cases} y_n = P_C(x_n - \lambda_n\nabla f(x_n)) = x_n, \\ z_n = P_C(x_n - \lambda_n\nabla f(y_n)) = x_n, \\ t_n = J_{M,\mu_n}(z_n - \mu_n\Phi(z_n)) = z_n, \\ w_n = \alpha_n t_n + (1 - \alpha_n)S t_n = \alpha_n x_n + (1 - \alpha_n)S x_n. \end{cases} \tag{4.3}$$

Therefore, by Theorem 3.1 we obtain the expected result. □

Theorem 4.3 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. For the minimization problem (1.4), assume that $f$ is (Fréchet) differentiable and the gradient $\nabla f$ is a $\rho$-inverse-strongly monotone mapping. Let $\Gamma : C \to C$ be a $\gamma$-strictly pseudo-contractive mapping and let $S : C \to C$ be a $k$-strictly pseudo-contractive mapping such that $F = F(S) \cap \Omega \cap F(\Gamma) \ne \emptyset$. Pick any $x_0 \in C$ and set $C_0 = C$. Let $\{x_n\} \subset C$ be a sequence generated by
$$\begin{cases} y_n = P_C(x_n - \lambda_n\nabla f(x_n)), \\ z_n = P_C(x_n - \lambda_n\nabla f(y_n)), \\ t_n = (1 - \mu_n)z_n + \mu_n\Gamma(z_n), \\ w_n = \alpha_n t_n + (1 - \alpha_n)S t_n, \\ C_{n+1} = \{ w \in C_n : \|w_n - w\| \le \|x_n - w\| \}, \\ x_{n+1} = P_{C_{n+1}} x_0, \quad \forall n \ge 0, \end{cases} \tag{4.4}$$
where the following conditions hold:
  (i) $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < \rho$;
  (ii) $\epsilon \le \mu_n \le 1 - \gamma$ for some $\epsilon \in (0, 1 - \gamma]$;
  (iii) $k < \liminf_{n\to\infty}\alpha_n \le \limsup_{n\to\infty}\alpha_n < 1$.

Then the sequence $\{x_n\}$ converges strongly to $P_F x_0$.

Proof Let $\Phi = I - \Gamma$ and $M = 0$ in Theorem 3.1; then $\Phi$ is $\eta$-inverse-strongly monotone with $\eta = \frac{1-\gamma}{2}$. Now, we show that $\operatorname{VI}(C, \Phi, M) = F(\Gamma)$. In fact, since $\Phi = I - \Gamma$ and $M = 0$, we obtain
$$u \in \operatorname{VI}(C, \Phi, M) \iff 0 \in \Phi(u) + Mu \iff 0 = \Phi(u) \iff 0 = u - \Gamma(u) \iff u \in F(\Gamma).$$
Thus,
$$F = F(S) \cap \Omega \cap \operatorname{VI}(C, \Phi, M) = F(S) \cap \Omega \cap F(\Gamma).$$
Note that $\mu_n \in [\epsilon, 1 - \gamma] \subset [0, 1]$, hence $(1 - \mu_n)z_n + \mu_n\Gamma(z_n) \in C$. In this case, we have
$$t_n = J_{M,\mu_n}(z_n - \mu_n\Phi(z_n)) = (I + \mu_n M)^{-1}(z_n - \mu_n\Phi(z_n)) = z_n - \mu_n\Phi(z_n) = z_n - \mu_n(I - \Gamma)(z_n) = (1 - \mu_n)z_n + \mu_n\Gamma(z_n).$$

Therefore, by Theorem 3.1 we obtain the expected result. □

Declarations

Acknowledgements

Changfeng Ma is grateful for the hospitality and support during his research at Chern Mathematics Institute in Nankai University in June 2013. The project is supported by the National Natural Science Foundation of China (11071041), Fujian Natural Science Foundation (2013J01006) and R&D of Key Instruments and Technologies for Deep Resources Prospecting (the National R&D Projects for Key Scientific Instruments) under grant number ZDYZ2012-1-02-04.

Authors’ Affiliations

(1)
School of Mathematics and Computer Science, Fujian Normal University

References

1. Hartman P, Stampacchia G: On some nonlinear elliptic differential equations. Acta Math. 1966, 115: 271–310. 10.1007/BF02392210
2. Hlaváček I, Haslinger J, Nečas J, Lovišek J: Solution of Variational Inequalities in Mechanics. Applied Mathematical Sciences 66. Springer, New York; 1988.
3. Fukushima M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 1992, 53: 99–110. 10.1007/BF01585696
4. Dafermos S: Traffic equilibrium and variational inequalities. Transp. Sci. 1980, 14: 42–54. 10.1287/trsc.14.1.42
5. Zhang SS, Lee JHW, Chan CK: Algorithms of common solutions to quasi variational inclusion and fixed point problems. Appl. Math. Mech. 2008, 29: 571–581. 10.1007/s10483-008-0502-y
6. Zhang SS, Kim JK, Kim KH: On the existence and iterative approximation problems of solutions for set-valued variational inclusions in Banach spaces. J. Math. Anal. Appl. 2002, 268: 89–108. 10.1006/jmaa.2001.7800
7. Zhang SS: Set-valued variational inclusions in Banach spaces. J. Math. Anal. Appl. 2000, 248: 438–454. 10.1006/jmaa.2000.6919
8. Huang NJ: Generalized nonlinear variational inclusions with noncompact valued mappings. Appl. Math. Lett. 1996, 9: 25–29.
9. Noor MA: Implicit resolvent dynamical systems for quasi variational inclusions. J. Math. Anal. Appl. 2002, 269: 216–226. 10.1016/S0022-247X(02)00014-8
10. Noor MA: Three-step iterative algorithms for multivalued quasi variational inclusions. J. Math. Anal. Appl. 2001, 255: 589–604. 10.1006/jmaa.2000.7298
11. Noor MA, Noor KI: Sensitivity analysis for quasi-variational inclusions. J. Math. Anal. Appl. 1999, 236: 290–299. 10.1006/jmaa.1999.6424
12. Verma RU: Approximation solvability of a class of nonlinear set-valued variational inclusions involving $(A, \eta)$-monotone mappings. J. Math. Anal. Appl. 2008, 337: 969–975. 10.1016/j.jmaa.2007.01.114
13. Verma RU: $A$-monotonicity and its role in nonlinear variational inclusions. J. Optim. Theory Appl. 2006, 129: 457–467. 10.1007/s10957-006-9079-7
14. Verma RU: General nonlinear variational inclusion problems involving $A$-monotone mappings. Appl. Math. Lett. 2006, 19: 960–963. 10.1016/j.aml.2005.11.010
15. Butnariu D, Censor Y, Gurfil P, Hadar E: On the behavior of subgradient projections methods for convex feasibility problems in Euclidean spaces. SIAM J. Optim. 2008, 19: 786–807. 10.1137/070689127
16. Maruster S, Popirlan C: On the Mann-type iteration and the convex feasibility problem. J. Comput. Appl. Math. 2008, 212: 390–396. 10.1016/j.cam.2006.12.012
17. Xu HK: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007
18. Youla D: Mathematical theory of image restoration by the method of convex projections. In Image Recovery: Theory and Applications. Edited by: Stark H. Academic Press, Orlando; 1987.
19. Ke YF, Ma CF: A new relaxed extragradient-like algorithm for approaching common solutions of generalized mixed equilibrium problems, a more general system of variational inequalities and a fixed point problem. Fixed Point Theory Appl. 2013, 2013: Article ID 126.
20. Ke YF, Ma CF: The convergence analysis of the projection methods for a system of generalized relaxed cocoercive variational inequalities in Hilbert spaces. Fixed Point Theory Appl. 2013, 2013: Article ID 189.
21. Zegeye H, Shahzad N, Alghamdi MA: Convergence of Ishikawa's iteration method for pseudocontractive mappings. Nonlinear Anal. 2011, 74: 7304–7311. 10.1016/j.na.2011.07.048
22. Tang YC, Peng JG, Liu LW: Strong convergence theorem for pseudo-contractive mappings in Hilbert spaces. Nonlinear Anal. 2011, 74: 380–385. 10.1016/j.na.2010.08.048
23. Scherzer O: Convergence criteria of iterative methods based on Landweber iteration for solving nonlinear problems. J. Math. Anal. Appl. 1991, 194: 911–933.
24. Ceng LC, Chou CY: On the relaxed hybrid-extragradient method for solving constrained convex minimization problems in Hilbert spaces. Taiwan. J. Math. 2013, 17: 911–936.
25. Brezis H: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland, Amsterdam; 1973.
26. Acedo GL, Xu HK: Iterative methods for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2007, 67: 2258–2271. 10.1016/j.na.2006.08.036
27. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88. 10.1090/S0002-9947-1970-0282272-5
28. Su M, Xu HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1: 35–43.

Copyright

© Ke and Ma; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.