
Iterative algorithms for monotone inclusion problems, fixed point problems and minimization problems

Abstract

We introduce new implicit and explicit iterative schemes for finding a common element of the set of fixed points of a k-strictly pseudocontractive mapping and the set of zeros of the sum of two monotone operators in a Hilbert space. We then establish strong convergence of the sequences generated by the proposed schemes to a common point of the two sets, which is a solution of a certain variational inequality. Further, we find the unique solution of the quadratic minimization problem whose constraint is the intersection of the two sets mentioned above. As applications, we consider iterative schemes for the Hartman-Stampacchia variational inequality problem and for the equilibrium problem coupled with a fixed point problem.

MSC:47H05, 47H09, 47H10, 47J05, 47J07, 47J25, 47J20, 49M05.

1 Introduction

Let H be a real Hilbert space with inner product ⟨·,·⟩ and induced norm ‖·‖. Let C be a nonempty closed convex subset of H, and let T : C → C be a self-mapping on C. We denote by F(T) the set of fixed points of T, that is, F(T) := {x ∈ C : Tx = x}.

Let A : C → H be a single-valued nonlinear mapping, and let B : H → 2^H be a multivalued mapping. Then we consider the monotone inclusion problem (MIP) of finding x ∈ H such that

0 ∈ Ax + Bx.
(1.1)

The set of solutions of the MIP (1.1) is denoted by (A + B)⁻¹0; that is, (A + B)⁻¹0 is the set of zeros of A + B. The MIP (1.1) provides a convenient framework for studying a number of problems arising in structural analysis, mechanics, economics and elsewhere; see, for instance, [1, 2]. Various types of inclusion problems have also been extended and generalized, and there are many algorithms for solving variational inclusions. For more details, see [3–5] and the references therein.
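The MIP (1.1) is the setting of forward-backward splitting: each step applies the single-valued mapping A forward and the resolvent of B backward. The following sketch (our own one-dimensional toy instance, not taken from the paper) uses A x = x − b, the gradient of the smooth term ½(x − b)², and B = ∂(μ|·|), whose resolvent is soft-thresholding.

```python
import math

def soft_threshold(v, t):
    # Resolvent of B = ∂(t|·|): J_B(v) = argmin_x t|x| + (1/2)(x - v)^2
    return math.copysign(max(abs(v) - t, 0.0), v)

def forward_backward(b, mu, lam=0.5, n_iter=200, x=0.0):
    # Iterate x_{k+1} = J_{lam*B}(x_k - lam * A(x_k)) with A(x) = x - b,
    # which is 1-inverse-strongly monotone, so lam in (0, 2) is admissible.
    for _ in range(n_iter):
        x = soft_threshold(x - lam * (x - b), lam * mu)
    return x

# The unique zero of A + B here is soft_threshold(b, mu).
x_star = forward_backward(b=1.0, mu=0.3)
```

For b = 1.0 and μ = 0.3 the iterates converge geometrically to 0.7, the unique point with 0 ∈ Ax + Bx.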

The class of pseudocontractive mappings is one of the most important classes of mappings among nonlinear mappings. We recall that a mapping T : C → H is said to be k-strictly pseudocontractive if there exists a constant k ∈ [0,1) such that

‖Tx − Ty‖² ≤ ‖x − y‖² + k‖(I − T)x − (I − T)y‖², ∀x, y ∈ C.

Note that the class of k-strictly pseudocontractive mappings includes the class of nonexpansive mappings as a subclass. That is, T is nonexpansive (i.e., ‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ C) if and only if T is 0-strictly pseudocontractive. The mapping T is said to be pseudocontractive if the above inequality holds with k = 1, and T is said to be strongly pseudocontractive if there exists a constant λ ∈ (0,1) such that T − λI is pseudocontractive. Clearly, the class of k-strictly pseudocontractive mappings lies between the class of nonexpansive mappings and the class of pseudocontractive mappings. We also remark that the class of strongly pseudocontractive mappings is independent of the class of k-strictly pseudocontractive mappings (see [6]). Recently, many authors have devoted their studies to the problem of finding fixed points of pseudocontractive mappings (see, for example, [7–10] and the references therein).

Recently, in order to study the MIP (1.1) coupled with the fixed point problem, many authors have introduced some iterative schemes for finding a common element of the set of solutions of the MIP (1.1) and the set of fixed points of a countable family of nonexpansive mappings (see [4, 5, 11] and the references therein).

Inspired and motivated by the above-mentioned recent works, in this paper we introduce new implicit and explicit iterative schemes for finding a common element of the set of solutions of the MIP (1.1), with a set-valued maximal monotone operator B and an inverse-strongly monotone mapping A, and the set of fixed points of a k-strictly pseudocontractive mapping T. We then establish strong convergence of the sequences generated by the proposed schemes to a common point of the two sets, which is a solution of a certain variational inequality. As a direct consequence, we find the unique solution of the quadratic minimization problem:

‖x̃‖² = min{‖x‖² : x ∈ F(T) ∩ (A + B)⁻¹0}.

Moreover, as applications, we consider iterative algorithms for the Hartman-Stampacchia variational inequality problem and for the equilibrium problem coupled with a fixed point problem of nonexpansive mappings.

2 Preliminaries and lemmas

Let H be a real Hilbert space, and let C be a nonempty closed convex subset of H. In the following, we write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x, and x_n → x to indicate that {x_n} converges strongly to x.

Recall that a mapping f : C → C is said to be contractive if there exists l ∈ [0,1) such that

‖f(x) − f(y)‖ ≤ l‖x − y‖, ∀x, y ∈ C.

A mapping A of C into H is called inverse-strongly monotone if there exists a positive real number α such that

⟨x − y, Ax − Ay⟩ ≥ α‖Ax − Ay‖²

for all x, y ∈ C. In such a case, A is called α-inverse-strongly monotone. If A is an α-inverse-strongly monotone mapping of C into H, then it is obvious that A is (1/α)-Lipschitz continuous. Let B be a mapping of H into 2^H. The effective domain of B is denoted by dom(B), that is, dom(B) = {x ∈ H : Bx ≠ ∅}. A multivalued mapping B is said to be a monotone operator on H if ⟨x − y, u − v⟩ ≥ 0 for all x, y ∈ dom(B), u ∈ Bx and v ∈ By. A monotone operator B on H is said to be maximal if its graph is not properly contained in the graph of any other monotone operator on H. For a maximal monotone operator B on H and r > 0, we may define the single-valued operator J_r = (I + rB)⁻¹ : H → dom(B), which is called the resolvent of B for r. Let B be a maximal monotone operator on H, and let B⁻¹0 = {x ∈ H : 0 ∈ Bx}. It is well known that B⁻¹0 = F(J_r) for all r > 0 and that the resolvent J_r is firmly nonexpansive, i.e.,

‖J_r x − J_r y‖² ≤ ⟨x − y, J_r x − J_r y⟩, ∀x, y ∈ H,
(2.1)

and that the resolvent identity

J_λ x = J_μ((μ/λ)x + (1 − μ/λ)J_λ x)
(2.2)

holds for all λ, μ > 0 and x ∈ H. It is worth mentioning that the resolvent operator J_λ is nonexpansive and 1-inverse-strongly monotone, and that x solves the MIP (1.1) if and only if x is a fixed point of the operator J_λ(I − λA) for each λ > 0 (see [11]).
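For a concrete check of the resolvent identity (2.2), take B = ∂|·| on ℝ, whose resolvent J_λ is soft-thresholding at level λ. The numerical sketch below (an illustration we add; the operator choice is ours) verifies the identity on a grid of points.

```python
import math

def J(lam, x):
    # Resolvent of the maximal monotone operator B = ∂|·| on the real line:
    # J_lam(x) = (I + lam*B)^{-1} x = soft-thresholding at level lam
    return math.copysign(max(abs(x) - lam, 0.0), x)

lam, mu = 0.8, 0.3
for k in range(-40, 41):
    x = k / 10.0
    lhs = J(lam, x)
    # Resolvent identity (2.2): J_lam x = J_mu((mu/lam)x + (1 - mu/lam) J_lam x)
    rhs = J(mu, (mu / lam) * x + (1.0 - mu / lam) * J(lam, x))
    assert abs(lhs - rhs) < 1e-12
```

The identity holds for any maximal monotone operator; soft-thresholding merely makes both sides explicitly computable.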

In a real Hilbert space H, we have

‖x − y‖² = ‖x‖² + ‖y‖² − 2⟨x, y⟩
(2.3)

for all x, y ∈ H. For every point x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that

‖x − P_C x‖ = inf{‖x − y‖ : y ∈ C}.

P_C is called the metric projection of H onto C. It is well known that P_C is nonexpansive, and P_C is characterized by the property

u = P_C x ⟺ ⟨x − u, u − y⟩ ≥ 0, ∀x ∈ H, y ∈ C.
(2.4)

It is also well known that H satisfies the Opial condition; that is, for any sequence {x_n} with x_n ⇀ x, the inequality

lim inf_{n→∞} ‖x_n − x‖ < lim inf_{n→∞} ‖x_n − y‖

holds for every y ∈ H with y ≠ x. For these facts, see [12].

We need the following lemmas for the proof of our main results.

Lemma 2.1 In a real Hilbert space H, the following inequality holds:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀x, y ∈ H.

Lemma 2.2 [12]

For all x, y, z ∈ H and α, β, γ ∈ [0,1] with α + β + γ = 1, the following equality holds:

‖αx + βy + γz‖² = α‖x‖² + β‖y‖² + γ‖z‖² − αβ‖x − y‖² − βγ‖y − z‖² − γα‖z − x‖².

Lemma 2.3 [13]

Let H be a Hilbert space, and let C be a closed convex subset of H. If T is a k-strictly pseudocontractive mapping on C, then the fixed point set F(T) is closed and convex, so that the projection P_{F(T)} is well defined, and F(P_C T) = F(T).

Lemma 2.4 [13]

Let H be a real Hilbert space, let C be a closed convex subset of H, and let T : C → H be a k-strictly pseudocontractive mapping. Define a mapping S : C → H by Sx = λx + (1 − λ)Tx for all x ∈ C. Then, for λ ∈ [k, 1), S is a nonexpansive mapping such that F(S) = F(T).

Lemma 2.5 [14]

Let C be a nonempty closed convex subset of a real Hilbert space H. Let the mapping A : C → H be α-inverse-strongly monotone, and let r > 0 be a constant. Then we have

‖(I − rA)x − (I − rA)y‖² ≤ ‖x − y‖² + r(r − 2α)‖Ax − Ay‖², ∀x, y ∈ C.

In particular, if 0 < r ≤ 2α, then I − rA is nonexpansive.
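Lemma 2.5 follows by expanding the square: ‖(I − rA)x − (I − rA)y‖² = ‖x − y‖² − 2r⟨x − y, Ax − Ay⟩ + r²‖Ax − Ay‖², and then bounding the cross term by inverse-strong monotonicity. A quick numerical illustration (our own example, with A the gradient of a quadratic):

```python
# A(x) = (2*x1, 0.5*x2) is the gradient of x1^2 + 0.25*x2^2; it is
# alpha-inverse-strongly monotone with alpha = 0.5 (alpha = 1/L, L = largest eigenvalue).
ALPHA = 0.5

def A(x):
    return (2.0 * x[0], 0.5 * x[1])

def check(x, y, r):
    # Verify ||(I - rA)x - (I - rA)y||^2 <= ||x - y||^2 + r(r - 2*ALPHA)||Ax - Ay||^2
    ax, ay = A(x), A(y)
    d = (x[0] - y[0], x[1] - y[1])
    ad = (ax[0] - ay[0], ax[1] - ay[1])
    lhs = (d[0] - r * ad[0]) ** 2 + (d[1] - r * ad[1]) ** 2
    rhs = d[0] ** 2 + d[1] ** 2 + r * (r - 2 * ALPHA) * (ad[0] ** 2 + ad[1] ** 2)
    return lhs <= rhs + 1e-12

pts = [(0.0, 0.0), (1.0, -2.0), (-3.0, 0.5), (2.5, 4.0)]
for x in pts:
    for y in pts:
        for r in (0.1, 0.5, 1.0, 1.5):  # the lemma's inequality holds for every r > 0
            assert check(x, y, r)
```

Note that the displayed inequality holds for all r > 0; the restriction r ≤ 2α is needed only for the nonexpansiveness conclusion.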

Lemma 2.6 [15]

Let B : H → 2^H be a maximal monotone operator, and let A : H → H be a Lipschitz continuous mapping. Then the mapping A + B : H → 2^H is a maximal monotone operator.

Remark 2.1 Lemma 2.6 implies that (A + B)⁻¹0 is closed and convex if B : H → 2^H is a maximal monotone operator and A : H → H is an inverse-strongly monotone mapping.

The following lemma is a variant of Minty's lemma (see [16]).

Lemma 2.7 Let C be a nonempty closed convex subset of a real Hilbert space H. Assume that the mapping G : C → H is monotone and weakly continuous along segments, that is, G(x + ty) → G(x) weakly as t → 0. Then the variational inequality

x̃ ∈ C, ⟨Gx̃, p − x̃⟩ ≥ 0, ∀p ∈ C,

is equivalent to the dual variational inequality

x̃ ∈ C, ⟨Gp, p − x̃⟩ ≥ 0, ∀p ∈ C.

Lemma 2.8 [17]

Let {x_n} and {z_n} be bounded sequences in a real Banach space E, and let {γ_n} be a sequence in [0,1] which satisfies the following condition:

0 < lim inf_{n→∞} γ_n ≤ lim sup_{n→∞} γ_n < 1.

Suppose that x_{n+1} = γ_n x_n + (1 − γ_n)z_n for all n ≥ 1, and

lim sup_{n→∞} (‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖) ≤ 0.

Then lim_{n→∞} ‖z_n − x_n‖ = 0.

Lemma 2.9 [18]

Let {s_n} be a sequence of non-negative real numbers satisfying

s_{n+1} ≤ (1 − ξ_n)s_n + ξ_n δ_n, ∀n ≥ 1,

where {ξ_n} and {δ_n} satisfy the following conditions:

  1. (i)

    {ξ_n} ⊂ [0,1] and ∑_{n=1}^∞ ξ_n = ∞;

  2. (ii)

    lim sup_{n→∞} δ_n ≤ 0 or ∑_{n=1}^∞ ξ_n δ_n < ∞.

Then lim_{n→∞} s_n = 0.
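Lemma 2.9 is the standard tool for closing strong-convergence arguments. As a sanity check (a numerical illustration we add, with parameter choices of our own), take ξ_n = δ_n = 1/(n+1) and drive the recursion with equality; then (n+1)s_{n+1} = n·s_n + δ_n, so s_n behaves like (log n)/n and tends to 0.

```python
def run(s0, n_steps):
    # Drive s_{n+1} = (1 - xi_n) s_n + xi_n * delta_n with xi_n = delta_n = 1/(n+1).
    # Lemma 2.9 applies: sum xi_n = infinity and delta_n -> 0, so s_n -> 0.
    s = s0
    for n in range(n_steps):
        xi = 1.0 / (n + 1)
        delta = 1.0 / (n + 1)
        s = (1.0 - xi) * s + xi * delta
    return s

final = run(s0=5.0, n_steps=100_000)
```

Note that the first step has ξ_0 = 1, so the initial value s_0 is forgotten immediately in this particular instance; in general the lemma only needs ∑ξ_n = ∞ to wash out s_0.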

3 Iterative schemes

Throughout the rest of this paper, we always assume the following: H is a real Hilbert space, and C is a nonempty closed convex subset of H; A : C → H is an α-inverse-strongly monotone mapping, and B is a maximal monotone operator on H such that the domain of B is included in C; J_λ = (I + λB)⁻¹ is the resolvent of B for λ > 0; f : C → C is a contractive mapping with constant l ∈ (0,1), and T : C → C is a k-strictly pseudocontractive mapping. Define a mapping S : C → C by Sx = λx + (1 − λ)Tx, x ∈ C, where λ ∈ [k, 1). Then, by Lemma 2.4, S is nonexpansive.

In this section, we introduce the following iterative scheme, which generates a net {x_t} in an implicit way:

x_t = t f(x_t) + (1 − t) S J_{λ_t}(x_t − λ_t A x_t), t ∈ (0,1),
(3.1)

where 0 < a ≤ λ_t ≤ b < 2α. We prove strong convergence of {x_t}, as t → 0, to a point x̃ in F(T) ∩ (A + B)⁻¹0, which is a solution of the following variational inequality:

⟨(I − f)x̃, p − x̃⟩ ≥ 0, ∀p ∈ F(T) ∩ (A + B)⁻¹0.
(3.2)

Equivalently, x̃ = P_{F(T) ∩ (A + B)⁻¹0}(2I − f)x̃.

If we take f ≡ 0 in (3.1), then we have

x_t = (1 − t) S J_{λ_t}(x_t − λ_t A x_t), t ∈ (0,1).
(3.3)

We also propose the following iterative scheme, which generates a sequence {x_n} in an explicit way:

x_{n+1} = α_n f(x_n) + β_n x_n + (1 − α_n − β_n) S J_{λ_n}(x_n − λ_n A x_n), n ≥ 0,
(3.4)

where {α_n}, {β_n} ⊂ (0,1), {λ_n} ⊂ (0, 2α) and x_0 ∈ C is an arbitrary initial guess, and establish the strong convergence of this sequence to a point x̃ ∈ F(T) ∩ (A + B)⁻¹0, which is also a solution of the variational inequality (3.2). If we take f ≡ 0 in (3.4), then we have

x_{n+1} = β_n x_n + (1 − α_n − β_n) S J_{λ_n}(x_n − λ_n A x_n), n ≥ 0.
(3.5)
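To make the data in (3.4) concrete, here is a small one-dimensional instance (all choices below are illustrative and ours, not taken from the paper): C = H = ℝ, T = sin (nonexpansive, hence 0-strictly pseudocontractive, so S = T with λ = 0 and F(T) = {0}), A x = x (1-inverse-strongly monotone), B = ∂i_{[−1,1]} so that J_λ is the projection onto [−1, 1], and f(x) = (x + 1)/2, a contraction with constant ½. The common solution set is {0}, and the iterates of (3.4) with α_n = 1/(n+2), β_n = ½, λ_n = ½ approach it.

```python
import math

def clamp(v, lo=-1.0, hi=1.0):
    # J_lam = resolvent of B = indicator subdifferential of [-1, 1] = projection
    return min(max(v, lo), hi)

def explicit_scheme(x, n_iter):
    lam = 0.5                     # lambda_n in (0, 2*alpha), alpha = 1  (C4)
    beta = 0.5                    # beta_n bounded away from 0 and 1    (C3)
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)     # alpha_n -> 0, sum alpha_n = inf     (C1), (C2)
        f_x = 0.5 * (x + 1.0)     # contraction with constant l = 1/2
        s_j = math.sin(clamp(x - lam * x))  # S J_lam(x - lam * A x), A = I, S = sin
        x = alpha * f_x + beta * x + (1.0 - alpha - beta) * s_j
    return x

x_inf = explicit_scheme(x=0.9, n_iter=100_000)
```

The residual after n steps is of order α_n, reflecting the Halpern-type averaging in the scheme.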

3.1 Strong convergence of the implicit algorithm

For t ∈ (0,1), consider the following mapping Q_t on C defined by

Q_t x = t f(x) + (1 − t) S J_{λ_t}(x − λ_t Ax), x ∈ C.

By Lemma 2.5, we have

‖Q_t x − Q_t y‖ = ‖t f(x) + (1 − t) S J_{λ_t}(x − λ_t Ax) − (t f(y) + (1 − t) S J_{λ_t}(y − λ_t Ay))‖
≤ t‖f(x) − f(y)‖ + (1 − t)‖S J_{λ_t}(x − λ_t Ax) − S J_{λ_t}(y − λ_t Ay)‖
≤ tl‖x − y‖ + (1 − t)‖(I − λ_t A)x − (I − λ_t A)y‖
≤ tl‖x − y‖ + (1 − t)‖x − y‖
= [1 − (1 − l)t]‖x − y‖.

Since 0 < 1 − (1 − l)t < 1, Q_t is a contractive mapping. Therefore, by the Banach contraction principle, Q_t has a unique fixed point x_t ∈ C, which uniquely solves the fixed point equation

x_t = t f(x_t) + (1 − t) S J_{λ_t}(x_t − λ_t A x_t), t ∈ (0,1).
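Since Q_t is a [1 − (1 − l)t]-contraction, x_t can be computed for each fixed t by Picard iteration. The sketch below reuses the illustrative toy instance introduced earlier (H = ℝ, S = sin, A = I, J_λ = projection onto [−1, 1], f(x) = (x + 1)/2; all our own choices); for small t, the computed x_t is close to the common point 0.

```python
import math

def Q(t, x, lam=0.5):
    # Q_t x = t f(x) + (1 - t) S J_lam(x - lam * A x) on the toy instance
    f_x = 0.5 * (x + 1.0)                        # contraction, l = 1/2
    proj = min(max(x - lam * x, -1.0), 1.0)      # J_lam = projection onto [-1, 1]
    return t * f_x + (1.0 - t) * math.sin(proj)  # S = sin, nonexpansive

def implicit_point(t, x=0.0, tol=1e-12, max_iter=10**6):
    # Picard iteration: converges because Q_t is a contraction with
    # constant at most 1 - (1 - l)t < 1 (Banach contraction principle)
    for _ in range(max_iter):
        x_new = Q(t, x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x_t = implicit_point(t=1e-3)
```

As t → 0 the fixed points x_t trace out the implicit net of (3.1) and drift toward the common solution 0, in line with Theorem 3.1 below.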

Now, we prove strong convergence of the sequence { x t }, and show the existence of x ˜ F(T) ( A + B ) 1 0, which solves the variational inequality (3.2).

Theorem 3.1 Suppose that F(T) ∩ (A + B)⁻¹0 ≠ ∅. Then the net {x_t} defined by the implicit method (3.1) converges strongly, as t → 0, to a point x̃ ∈ F(T) ∩ (A + B)⁻¹0, which is the unique solution of the variational inequality (3.2).

Proof First, we show the uniqueness of a solution of the variational inequality (3.2). Suppose that x̃ ∈ F(T) ∩ (A + B)⁻¹0 and x̂ ∈ F(T) ∩ (A + B)⁻¹0 are both solutions of (3.2). Then we have

⟨(I − f)x̃, x̂ − x̃⟩ ≥ 0,
(3.6)
⟨(I − f)x̂, x̃ − x̂⟩ ≥ 0.
(3.7)

Adding up (3.6) and (3.7) yields

⟨(I − f)x̃ − (I − f)x̂, x̃ − x̂⟩ ≤ 0.

Since ⟨(I − f)x̃ − (I − f)x̂, x̃ − x̂⟩ ≥ (1 − l)‖x̃ − x̂‖², this implies that (1 − l)‖x̃ − x̂‖² ≤ 0. So x̃ = x̂, and the uniqueness is proved. Below, we use x̃ ∈ F(T) ∩ (A + B)⁻¹0 to denote the unique solution of the variational inequality (3.2).

Now, we prove that {x_t} is bounded. Set y_t = J_{λ_t}(x_t − λ_t A x_t) for all t ∈ (0,1), and take p ∈ F(T) ∩ (A + B)⁻¹0. It is clear that p = J_{λ_t}(p − λ_t Ap) = S J_{λ_t}(p − λ_t Ap) and p = Sp (by Lemma 2.4). Since J_{λ_t} is nonexpansive and A is α-inverse-strongly monotone, we have from Lemma 2.5 that

‖y_t − p‖² = ‖J_{λ_t}(x_t − λ_t A x_t) − J_{λ_t}(p − λ_t Ap)‖²
≤ ‖x_t − λ_t A x_t − (p − λ_t Ap)‖²
≤ ‖x_t − p‖² + λ_t(λ_t − 2α)‖A x_t − Ap‖²
≤ ‖x_t − p‖².
(3.8)

So, we have

‖y_t − p‖ ≤ ‖x_t − p‖.
(3.9)

Moreover, from (3.1), it follows that

‖x_t − p‖ = ‖t f(x_t) + (1 − t) S J_{λ_t}(x_t − λ_t A x_t) − p‖
≤ t‖f(x_t) − f(p)‖ + t‖f(p) − p‖ + (1 − t)‖S J_{λ_t}(x_t − λ_t A x_t) − Sp‖
≤ tl‖x_t − p‖ + t‖f(p) − p‖ + (1 − t)‖y_t − p‖
≤ tl‖x_t − p‖ + t‖f(p) − p‖ + (1 − t)‖x_t − p‖
= [1 − t(1 − l)]‖x_t − p‖ + t‖f(p) − p‖,
(3.10)

that is,

‖x_t − p‖ ≤ ‖f(p) − p‖/(1 − l).

Hence, { x t } is bounded, and so are { y t }, {f( x t )}, {A x t } and {S y t }.

From (3.8) and (3.10), we have

(1 − tl)²‖x_t − p‖² ≤ [(1 − t)‖y_t − p‖ + t‖f(p) − p‖]²
= (1 − t)²‖y_t − p‖² + t²‖f(p) − p‖² + 2(1 − t)t‖f(p) − p‖‖y_t − p‖
≤ ‖y_t − p‖² + tM₁
≤ ‖x_t − p‖² + λ_t(λ_t − 2α)‖A x_t − Ap‖² + tM₁,
(3.11)

where M 1 >0 is an appropriate constant. This means that

a(2α − b)‖A x_t − Ap‖² ≤ λ_t(2α − λ_t)‖A x_t − Ap‖² ≤ t(2l − tl²)‖x_t − p‖² + tM₁ → 0 as t → 0.

Since a(2α − b) > 0, we deduce that

lim_{t→0} ‖A x_t − Ap‖ = 0.
(3.12)

From (2.1) and (2.3), we also obtain

‖y_t − p‖² = ‖J_{λ_t}(x_t − λ_t A x_t) − J_{λ_t}(p − λ_t Ap)‖²
≤ ⟨(x_t − λ_t A x_t) − (p − λ_t Ap), y_t − p⟩
= ½(‖(x_t − λ_t A x_t) − (p − λ_t Ap)‖² + ‖y_t − p‖² − ‖(x_t − p) − λ_t(A x_t − Ap) − (y_t − p)‖²)
≤ ½(‖x_t − p‖² + ‖y_t − p‖² − ‖(x_t − y_t) − λ_t(A x_t − Ap)‖²)
= ½(‖x_t − p‖² + ‖y_t − p‖² − ‖x_t − y_t‖² + 2λ_t⟨x_t − y_t, A x_t − Ap⟩ − λ_t²‖A x_t − Ap‖²).

So, we get

‖y_t − p‖² ≤ ‖x_t − p‖² − ‖x_t − y_t‖² + 2λ_t⟨x_t − y_t, A x_t − Ap⟩ − λ_t²‖A x_t − Ap‖².
(3.13)

Since ‖·‖² is convex, by (3.13), we have

‖x_t − p‖² = ‖t(f(x_t) − p) + (1 − t)(S J_{λ_t}(x_t − λ_t A x_t) − p)‖²
≤ t(‖f(x_t) − f(p)‖ + ‖f(p) − p‖)² + (1 − t)‖S y_t − Sp‖²
≤ t(l‖x_t − p‖ + ‖f(p) − p‖)² + (1 − t)‖y_t − p‖²
≤ t(l‖x_t − p‖ + ‖f(p) − p‖)² + (1 − t)(‖x_t − p‖² − ‖x_t − y_t‖² + 2λ_t⟨x_t − y_t, A x_t − Ap⟩).
(3.14)

We deduce from (3.14) that

(1 − t)‖x_t − y_t‖² ≤ (t + ‖A x_t − Ap‖)M₂,
(3.15)

where M₂ > 0 is an appropriate constant. Since t → 0 and ‖A x_t − Ap‖ → 0, we have

lim_{t→0} ‖x_t − y_t‖ = 0.
(3.16)

Observing that

‖S y_t − x_t‖ = ‖S y_t − (t f(x_t) + (1 − t)S y_t)‖ = t‖S y_t − f(x_t)‖ → 0 as t → 0,

by (3.16), we obtain

‖S x_t − x_t‖ ≤ ‖S x_t − S y_t‖ + ‖S y_t − x_t‖ ≤ ‖x_t − y_t‖ + ‖S y_t − x_t‖ → 0 as t → 0.
(3.17)

Let {t_n} ⊂ (0,1) be a sequence such that t_n → 0 as n → ∞. Put x_n := x_{t_n}, y_n := y_{t_n} and λ_n := λ_{t_n}. Since {x_n} is bounded, there exists a subsequence {x_{n_i}} of {x_n} which converges weakly to x̃.

Next, we show that x̃ ∈ F(T) ∩ (A + B)⁻¹0. Since C is closed and convex, C is weakly closed, so x̃ ∈ C. Let us show x̃ ∈ F(T). Assume that x̃ ∉ F(T) (= F(S)). Since x_{n_i} ⇀ x̃ and x̃ ≠ Sx̃, it follows from the Opial condition and (3.17) that

lim inf_{i→∞} ‖x_{n_i} − x̃‖ < lim inf_{i→∞} ‖x_{n_i} − Sx̃‖ ≤ lim inf_{i→∞} (‖x_{n_i} − S x_{n_i}‖ + ‖S x_{n_i} − Sx̃‖) ≤ lim inf_{i→∞} ‖x_{n_i} − x̃‖,

which is a contradiction. So we get x̃ ∈ F(T).

We shall show that x̃ ∈ (A + B)⁻¹0. Since ‖x_t − y_t‖ → 0 as t → 0, it follows that {y_{n_i}} also converges weakly to x̃. We choose a subsequence {λ_{n_i}} of {λ_n} such that λ_{n_i} → λ. Let (u, v) be in the graph of B, that is, v ∈ Bu. Noting that

y_t = J_{λ_t}(x_t − λ_t A x_t) = (I + λ_t B)⁻¹(x_t − λ_t A x_t),

we have that

x_t − λ_t A x_t ∈ y_t + λ_t B y_t,

and so,

(x_t − y_t)/λ_t − A x_t ∈ B y_t.

Since B is monotone, we have, for (u, v) ∈ B,

⟨(x_t − y_t)/λ_t − A x_t − v, y_t − u⟩ ≥ 0.
(3.18)

Since ⟨x_t − x̃, A x_t − Ax̃⟩ ≥ α‖A x_t − Ax̃‖² and x_{n_i} ⇀ x̃, we have A x_{n_i} → Ax̃. Then, by (3.16) and (3.18), we obtain

⟨−Ax̃ − v, x̃ − u⟩ ≥ 0.

Since B is maximal monotone, this implies that −Ax̃ ∈ Bx̃, that is, 0 ∈ (A + B)x̃. Hence, we have x̃ ∈ (A + B)⁻¹0. Thus, we conclude that x̃ ∈ F(T) ∩ (A + B)⁻¹0.

On the other hand, we note that, for p ∈ F(T) ∩ (A + B)⁻¹0,

x_t − p = t(f(x_t) − f(p)) + t(f(p) − p) + (1 − t)(S J_{λ_t}(x_t − λ_t A x_t) − p).

Then it follows that

‖x_t − p‖² = ⟨x_t − p, x_t − p⟩
= t⟨f(x_t) − f(p), x_t − p⟩ + t⟨f(p) − p, x_t − p⟩ + (1 − t)⟨S J_{λ_t}(x_t − λ_t A x_t) − p, x_t − p⟩
≤ tl‖x_t − p‖² + t⟨f(p) − p, x_t − p⟩ + (1 − t)‖x_t − p‖²
= (1 − (1 − l)t)‖x_t − p‖² + t⟨f(p) − p, x_t − p⟩.

Hence, we have

‖x_t − p‖² ≤ (1/(1 − l))⟨f(p) − p, x_t − p⟩.
(3.19)

In particular,

‖x_{n_i} − p‖² ≤ (1/(1 − l))⟨f(p) − p, x_{n_i} − p⟩.
(3.20)

Since x̃ ∈ F(T) ∩ (A + B)⁻¹0, by (3.20), we obtain

‖x_{n_i} − x̃‖² ≤ (1/(1 − l))⟨f(x̃) − x̃, x_{n_i} − x̃⟩.
(3.21)

Since x_{n_i} ⇀ x̃, it follows from (3.21) that x_{n_i} → x̃ as i → ∞.

Now, we return to (3.20) and take the limit as i → ∞ to get

‖x̃ − p‖² ≤ (1/(1 − l))⟨(I − f)p, p − x̃⟩.
(3.22)

In particular, x̃ solves the following variational inequality:

x̃ ∈ F(T) ∩ (A + B)⁻¹0, ⟨(I − f)p, p − x̃⟩ ≥ 0, ∀p ∈ F(T) ∩ (A + B)⁻¹0,

or the equivalent dual variational inequality (see Lemma 2.7)

x̃ ∈ F(T) ∩ (A + B)⁻¹0, ⟨(I − f)x̃, p − x̃⟩ ≥ 0, ∀p ∈ F(T) ∩ (A + B)⁻¹0.
(3.23)

Finally, we show that the net {x_t} converges strongly to x̃ as t → 0. To this end, let {s_k} ⊂ (0,1) be another sequence such that s_k → 0 as k → ∞. Put x_k := x_{s_k}, y_k := y_{s_k} and λ_k := λ_{s_k}. Let {x_{k_j}} be a subsequence of {x_k}, and assume that x_{k_j} → x̂. By the same proof as above, we have x̂ ∈ F(T) ∩ (A + B)⁻¹0. Moreover, it follows from (3.23) that

⟨(I − f)x̃, x̃ − x̂⟩ ≤ 0.
(3.24)

Interchanging x̃ and x̂, we obtain

⟨(I − f)x̂, x̂ − x̃⟩ ≤ 0.
(3.25)

Adding up (3.24) and (3.25) yields

⟨(I − f)x̃ − (I − f)x̂, x̃ − x̂⟩ ≤ 0.

Hence,

‖x̃ − x̂‖² ≤ ⟨f(x̃) − f(x̂), x̃ − x̂⟩ ≤ l‖x̃ − x̂‖²,

that is, (1 − l)‖x̃ − x̂‖² ≤ 0. Since l ∈ (0,1), we have x̃ = x̂. Therefore, we conclude that x_t → x̃ as t → 0.

Note that P_{F(T) ∩ (A + B)⁻¹0} is well defined by Lemma 2.3 and Remark 2.1. The variational inequality (3.2) can be rewritten as

⟨(2I − f)x̃ − x̃, p − x̃⟩ ≥ 0, ∀p ∈ F(T) ∩ (A + B)⁻¹0.

By (2.4), this is equivalent to the fixed point equation

x̃ = P_{F(T) ∩ (A + B)⁻¹0}(2I − f)x̃.

This completes the proof. □

From Theorem 3.1, we can deduce the following result.

Corollary 3.1 Suppose that F(T) ∩ (A + B)⁻¹0 ≠ ∅. Then the net {x_t} defined by the implicit method (3.3) converges strongly, as t → 0, to x̃, which solves the following minimum-norm problem: find x̃ ∈ F(T) ∩ (A + B)⁻¹0 such that

‖x̃‖ = min{‖x‖ : x ∈ F(T) ∩ (A + B)⁻¹0}.
(3.26)

Proof From (3.22) with f ≡ 0 and l = 0, we have

‖x̃ − p‖² ≤ ⟨p, p − x̃⟩, ∀p ∈ F(T) ∩ (A + B)⁻¹0.

Equivalently,

⟨x̃, p − x̃⟩ ≥ 0, ∀p ∈ F(T) ∩ (A + B)⁻¹0.

This obviously implies that

‖x̃‖² ≤ ⟨p, x̃⟩ ≤ ‖p‖‖x̃‖, ∀p ∈ F(T) ∩ (A + B)⁻¹0.

It turns out that ‖x̃‖ ≤ ‖p‖ for all p ∈ F(T) ∩ (A + B)⁻¹0. Therefore, x̃ is the minimum-norm point of F(T) ∩ (A + B)⁻¹0. □

3.2 Strong convergence of the explicit algorithm

Now, using Theorem 3.1, we establish the strong convergence of an explicit iterative scheme for finding a solution of the variational inequality (3.2), where the constraint set is the intersection of the fixed point set F(T) of the k-strictly pseudocontractive mapping T and the solution set (A + B)⁻¹0 of the MIP (1.1).

Theorem 3.2 Suppose that F(T) ∩ (A + B)⁻¹0 ≠ ∅. Let {α_n}, {β_n} ⊂ (0,1) and {λ_n} ⊂ (0, 2α) satisfy the following conditions:

  1. (C1)

    lim_{n→∞} α_n = 0;

  2. (C2)

    ∑_{n=0}^∞ α_n = ∞;

  3. (C3)

    0 < c ≤ β_n ≤ d < 1;

  4. (C4)

    0 < a ≤ λ_n ≤ b < 2α and lim_{n→∞}(λ_n − λ_{n+1}) = 0.

Let the sequence {x_n} be generated iteratively by the scheme (3.4), where x_0 ∈ C is an arbitrary initial guess. Then the sequence {x_n} converges strongly to a point x̃ in F(T) ∩ (A + B)⁻¹0, which is the unique solution of the variational inequality (3.2).

Proof First, from condition (C1), without loss of generality, we may assume that 2(1 − l)α_n/(1 − α_n l) < 1 for all n, and we note that F(T) = F(S). From now on, we put y_n = J_{λ_n}(x_n − λ_n A x_n).

We divide the proof into several steps.

Step 1. We show that ‖x_n − p‖ ≤ max{‖x_0 − p‖, ‖f(p) − p‖/(1 − l)} for all n ≥ 0 and all p ∈ F(T) ∩ (A + B)⁻¹0 (= F(S) ∩ (A + B)⁻¹0). Indeed, let p ∈ F(T) ∩ (A + B)⁻¹0. From p = J_{λ_n}(p − λ_n Ap) = S J_{λ_n}(p − λ_n Ap), Sp = p and Lemma 2.5, we get

‖y_n − p‖² = ‖J_{λ_n}(x_n − λ_n A x_n) − J_{λ_n}(p − λ_n Ap)‖²
≤ ‖(x_n − λ_n A x_n) − (p − λ_n Ap)‖²
= ‖(x_n − p) − λ_n(A x_n − Ap)‖²
= ‖x_n − p‖² − 2λ_n⟨x_n − p, A x_n − Ap⟩ + λ_n²‖A x_n − Ap‖²
≤ ‖x_n − p‖² − 2λ_n α‖A x_n − Ap‖² + λ_n²‖A x_n − Ap‖²
= ‖x_n − p‖² + λ_n(λ_n − 2α)‖A x_n − Ap‖²
≤ ‖x_n − p‖².
(3.27)

Using (3.27), we have

‖x_{n+1} − p‖ = ‖α_n f(x_n) + β_n x_n + (1 − α_n − β_n) S J_{λ_n}(x_n − λ_n A x_n) − p‖
= ‖α_n(f(x_n) − p) + β_n(x_n − p) + (1 − α_n − β_n)(S y_n − Sp)‖
≤ α_n‖f(x_n) − f(p)‖ + α_n‖f(p) − p‖ + β_n‖x_n − p‖ + (1 − α_n − β_n)‖y_n − p‖
≤ α_n l‖x_n − p‖ + α_n‖f(p) − p‖ + β_n‖x_n − p‖ + (1 − α_n − β_n)‖x_n − p‖
= (1 − (1 − l)α_n)‖x_n − p‖ + (1 − l)α_n · ‖f(p) − p‖/(1 − l).

By induction, we have

‖x_n − p‖ ≤ max{‖x_0 − p‖, ‖f(p) − p‖/(1 − l)}.

Hence, { x n } is bounded, and so are { y n }, {A x n }, {f( x n )} and {S y n }.

Step 2. We show that lim_{n→∞} ‖x_{n+1} − x_n‖ = 0. Put u_n = x_n − λ_n A x_n, and define z_n by

x_{n+1} = β_n x_n + (1 − β_n)z_n, n ≥ 0.
(3.28)

Then we have

z_{n+1} − z_n = (x_{n+2} − β_{n+1}x_{n+1})/(1 − β_{n+1}) − (x_{n+1} − β_n x_n)/(1 − β_n)
= [α_{n+1}f(x_{n+1}) + β_{n+1}x_{n+1} + (1 − α_{n+1} − β_{n+1})S J_{λ_{n+1}}u_{n+1} − β_{n+1}x_{n+1}]/(1 − β_{n+1}) − [α_n f(x_n) + β_n x_n + (1 − α_n − β_n)S J_{λ_n}u_n − β_n x_n]/(1 − β_n)
= [α_{n+1}f(x_{n+1}) + (1 − α_{n+1} − β_{n+1})S J_{λ_{n+1}}u_{n+1}]/(1 − β_{n+1}) − [α_n f(x_n) + (1 − α_n − β_n)S J_{λ_n}u_n]/(1 − β_n)
= (α_{n+1}/(1 − β_{n+1}))(f(x_{n+1}) − S J_{λ_{n+1}}u_{n+1}) + (α_n/(1 − β_n))(S J_{λ_n}u_n − f(x_n)) + S J_{λ_{n+1}}u_{n+1} − S J_{λ_n}u_n.
(3.29)

Since I − λ_{n+1}A is nonexpansive for λ_{n+1} ∈ (0, 2α) (by Lemma 2.5), we have

‖(I − λ_{n+1}A)x_{n+1} − (I − λ_{n+1}A)x_n‖ ≤ ‖x_{n+1} − x_n‖.
(3.30)

By the resolvent identity (2.2) and (3.30), we get

‖J_{λ_{n+1}}u_{n+1} − J_{λ_n}u_n‖ = ‖J_{λ_n}((λ_n/λ_{n+1})u_{n+1} + (1 − λ_n/λ_{n+1})J_{λ_{n+1}}u_{n+1}) − J_{λ_n}u_n‖
≤ ‖(λ_n/λ_{n+1})u_{n+1} + (1 − λ_n/λ_{n+1})J_{λ_{n+1}}u_{n+1} − u_n‖
= ‖(u_{n+1} − u_n) + (1 − λ_n/λ_{n+1})(J_{λ_{n+1}}u_{n+1} − u_{n+1})‖
≤ ‖u_{n+1} − u_n‖ + (|λ_{n+1} − λ_n|/a)‖J_{λ_{n+1}}u_{n+1} − u_{n+1}‖
≤ ‖(I − λ_{n+1}A)x_{n+1} − (I − λ_{n+1}A)x_n‖ + |λ_{n+1} − λ_n|‖A x_n‖ + (|λ_{n+1} − λ_n|/a)‖J_{λ_{n+1}}u_{n+1} − u_{n+1}‖
≤ ‖x_{n+1} − x_n‖ + |λ_{n+1} − λ_n|M₁,
(3.31)

where M 1 >0 is an appropriate constant. Hence, from (3.29) and (3.31), we obtain

‖z_{n+1} − z_n‖ ≤ (α_{n+1}/(1 − β_{n+1}))(‖f(x_{n+1})‖ + ‖S J_{λ_{n+1}}u_{n+1}‖) + (α_n/(1 − β_n))(‖S J_{λ_n}u_n‖ + ‖f(x_n)‖) + ‖S J_{λ_{n+1}}u_{n+1} − S J_{λ_n}u_n‖
≤ (α_{n+1}/(1 − β_{n+1}))(‖f(x_{n+1})‖ + ‖S J_{λ_{n+1}}u_{n+1}‖) + (α_n/(1 − β_n))(‖S J_{λ_n}u_n‖ + ‖f(x_n)‖) + ‖J_{λ_{n+1}}u_{n+1} − J_{λ_n}u_n‖
≤ (α_{n+1}/(1 − β_{n+1}))(‖f(x_{n+1})‖ + ‖S J_{λ_{n+1}}u_{n+1}‖) + (α_n/(1 − β_n))(‖S J_{λ_n}u_n‖ + ‖f(x_n)‖) + ‖x_{n+1} − x_n‖ + |λ_{n+1} − λ_n|M₁.
(3.32)

It follows from conditions (C1) and (C4) that

lim sup_{n→∞} (‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖) ≤ 0.

Thus, by Lemma 2.8, we have

lim_{n→∞} ‖z_n − x_n‖ = 0.
(3.33)

Consequently, we obtain

lim_{n→∞} ‖x_{n+1} − x_n‖ = lim_{n→∞} (1 − β_n)‖z_n − x_n‖ = 0.

Step 3. We show that lim_{n→∞} ‖A x_n − Ap‖ = 0 for p ∈ F(T) ∩ (A + B)⁻¹0. From (3.4), (3.27) and Lemma 2.2, we have

‖x_{n+1} − p‖² = ‖α_n f(x_n) + β_n x_n + (1 − α_n − β_n) S J_{λ_n}(x_n − λ_n A x_n) − p‖²
= ‖α_n(f(x_n) − p) + β_n(x_n − p) + (1 − α_n − β_n)(S y_n − p)‖²
= α_n‖f(x_n) − p‖² + β_n‖x_n − p‖² + (1 − α_n − β_n)‖S y_n − p‖² − α_nβ_n‖f(x_n) − x_n‖² − β_n(1 − α_n − β_n)‖x_n − S y_n‖² − α_n(1 − α_n − β_n)‖S y_n − f(x_n)‖²
≤ α_n(‖f(x_n) − f(p)‖ + ‖f(p) − p‖)² + β_n‖x_n − p‖² + (1 − α_n − β_n)‖y_n − p‖²
≤ α_n(l²‖x_n − p‖² + 2l‖x_n − p‖‖f(p) − p‖ + ‖f(p) − p‖²) + β_n‖x_n − p‖² + (1 − α_n − β_n)‖x_n − p‖² + (1 − α_n − β_n)λ_n(λ_n − 2α)‖A x_n − Ap‖²
≤ (1 − (1 − l)α_n)‖x_n − p‖² + (1 − α_n − β_n)λ_n(λ_n − 2α)‖A x_n − Ap‖² + α_n(2l‖x_n − p‖‖f(p) − p‖ + ‖f(p) − p‖²)
≤ ‖x_n − p‖² + (1 − α_n − β_n)λ_n(λ_n − 2α)‖A x_n − Ap‖² + α_n M₂,
(3.34)

where M₂ > 0 is an appropriate constant. From (3.34) and conditions (C3) and (C4), we deduce that

(1 − α_n − d)a(2α − b)‖A x_n − Ap‖² ≤ (1 − α_n − β_n)λ_n(2α − λ_n)‖A x_n − Ap‖² ≤ ‖x_n − x_{n+1}‖(‖x_n − p‖ + ‖x_{n+1} − p‖) + α_n M₂.

Since α_n → 0 (by condition (C1)) and ‖x_{n+1} − x_n‖ → 0 (by Step 2), we conclude that

lim_{n→∞} ‖A x_n − Ap‖ = 0.

Step 4. We show that lim_{n→∞} ‖x_n − y_n‖ = 0. First, from (2.1) and (2.3), we get, for p ∈ F(T) ∩ (A + B)⁻¹0,

‖y_n − p‖² = ‖J_{λ_n}(x_n − λ_n A x_n) − p‖²
= ‖J_{λ_n}(x_n − λ_n A x_n) − J_{λ_n}(p − λ_n Ap)‖²
≤ ⟨(x_n − λ_n A x_n) − (p − λ_n Ap), y_n − p⟩
= ½(‖(x_n − λ_n A x_n) − (p − λ_n Ap)‖² + ‖y_n − p‖² − ‖(x_n − λ_n A x_n) − (p − λ_n Ap) − (y_n − p)‖²)
≤ ½(‖x_n − p‖² + ‖y_n − p‖² − ‖x_n − y_n − λ_n(A x_n − Ap)‖²)
= ½(‖x_n − p‖² + ‖y_n − p‖² − ‖x_n − y_n‖² + 2λ_n⟨x_n − y_n, A x_n − Ap⟩ − λ_n²‖A x_n − Ap‖²).

So, we have

‖y_n − p‖² ≤ ‖x_n − p‖² − ‖x_n − y_n‖² + 2λ_n⟨x_n − y_n, A x_n − Ap⟩ − λ_n²‖A x_n − Ap‖²
≤ ‖x_n − p‖² − ‖x_n − y_n‖² + 2λ_n⟨x_n − y_n, A x_n − Ap⟩.
(3.35)

Using (3.34) and (3.35), we obtain

‖x_{n+1} − p‖² ≤ α_n(l²‖x_n − p‖² + 2l‖x_n − p‖‖f(p) − p‖ + ‖f(p) − p‖²) + β_n‖x_n − p‖² + (1 − α_n − β_n)‖y_n − p‖²
≤ α_n l‖x_n − p‖² + α_n M₂ + β_n‖x_n − p‖² + (1 − α_n − β_n)(‖x_n − p‖² − ‖x_n − y_n‖² + 2λ_n⟨x_n − y_n, A x_n − Ap⟩)
= (1 − (1 − l)α_n)‖x_n − p‖² − (1 − α_n − β_n)‖x_n − y_n‖² + 2λ_n⟨x_n − y_n, A x_n − Ap⟩ + α_n M₂
≤ ‖x_n − p‖² − (1 − α_n − β_n)‖x_n − y_n‖² + 2bM₃‖A x_n − Ap‖ + α_n M₂,
(3.36)

where M₂, M₃ > 0 are appropriate constants. This implies that

(1 − α_n − d)‖x_n − y_n‖² ≤ (1 − α_n − β_n)‖x_n − y_n‖² ≤ ‖x_n − x_{n+1}‖(‖x_{n+1} − p‖ + ‖x_n − p‖) + 2bM₃‖A x_n − Ap‖ + α_n M₂.

Thus, from condition (C1), Step 2 and Step 3, we deduce that

lim_{n→∞} ‖x_n − y_n‖ = 0.

Step 5. We show that lim_{n→∞} ‖S x_n − x_n‖ = 0. First, by (3.4), we have

‖S y_n − x_n‖ ≤ ‖S y_n − x_{n+1}‖ + ‖x_{n+1} − x_n‖
= ‖S y_n − (α_n f(x_n) + β_n x_n + (1 − α_n − β_n)S y_n)‖ + ‖x_{n+1} − x_n‖
≤ α_n‖S y_n − f(x_n)‖ + β_n‖x_n − S y_n‖ + ‖x_{n+1} − x_n‖,

and so,

‖S y_n − x_n‖ ≤ (1/(1 − β_n))(α_n‖S y_n − f(x_n)‖ + ‖x_{n+1} − x_n‖).

By conditions (C1) and (C3) and Step 2, we obtain

lim_{n→∞} ‖S y_n − x_n‖ = 0.

This together with Step 4 yields

‖S x_n − x_n‖ ≤ ‖S x_n − S y_n‖ + ‖S y_n − x_n‖ ≤ ‖x_n − y_n‖ + ‖S y_n − x_n‖ → 0 as n → ∞.

Step 6. We show that

lim sup_{n→∞} ⟨f(x̃) − x̃, x_n − x̃⟩ ≤ 0,

where x̃ = lim_{t→0} x_t with x_t defined by (3.1). We note from Theorem 3.1 that x̃ ∈ F(T) ∩ (A + B)⁻¹0 and that x̃ is the unique solution of the variational inequality (3.2). To show this, we choose a subsequence {x_{n_i}} of {x_n} such that

lim_{i→∞} ⟨f(x̃) − x̃, x_{n_i} − x̃⟩ = lim sup_{n→∞} ⟨f(x̃) − x̃, x_n − x̃⟩.

Since {x_{n_i}} is bounded, there exists a subsequence {x_{n_{i_j}}} of {x_{n_i}} which converges weakly to w. Without loss of generality, we may assume that x_{n_i} ⇀ w. By the same argument as in the proof of Theorem 3.1 together with Step 5, we have w ∈ F(T) ∩ (A + B)⁻¹0. Since x̃ = P_{F(T) ∩ (A + B)⁻¹0}(2I − f)x̃ is the unique solution of the variational inequality (3.2), we deduce that

lim sup_{n→∞} ⟨f(x̃) − x̃, x_n − x̃⟩ = lim_{i→∞} ⟨f(x̃) − x̃, x_{n_i} − x̃⟩ = ⟨f(x̃) − x̃, w − x̃⟩ ≤ 0.

Step 7. We show that lim_{n→∞} ‖x_n − x̃‖ = 0, where x̃ = lim_{t→0} x_t with x_t defined by (3.1), the unique solution of the variational inequality (3.2). Indeed, from (3.4), we note that

x_{n+1} − x̃ = α_n f(x_n) + β_n x_n + (1 − α_n − β_n) S J_{λ_n}(x_n − λ_n A x_n) − x̃
= α_n(f(x_n) − x̃) + β_n(x_n − x̃) + (1 − α_n − β_n)(S y_n − x̃).

Applying Lemma 2.1, we have

‖x_{n+1} − x̃‖² ≤ ‖β_n(x_n − x̃) + (1 − α_n − β_n)(S y_n − x̃)‖² + 2α_n⟨f(x_n) − f(x̃), x_{n+1} − x̃⟩ + 2α_n⟨f(x̃) − x̃, x_{n+1} − x̃⟩
≤ (β_n‖x_n − x̃‖ + (1 − α_n − β_n)‖y_n − x̃‖)² + 2α_n l‖x_n − x̃‖‖x_{n+1} − x̃‖ + 2α_n⟨f(x̃) − x̃, x_{n+1} − x̃⟩
≤ (1 − α_n)²‖x_n − x̃‖² + α_n l(‖x_n − x̃‖² + ‖x_{n+1} − x̃‖²) + 2α_n⟨f(x̃) − x̃, x_{n+1} − x̃⟩.

This implies that

‖x_{n+1} − x̃‖² ≤ ((1 − (2 − l)α_n + α_n²)/(1 − α_n l))‖x_n − x̃‖² + (2α_n/(1 − α_n l))⟨f(x̃) − x̃, x_{n+1} − x̃⟩
= ((1 − (2 − l)α_n)/(1 − α_n l))‖x_n − x̃‖² + (α_n²/(1 − α_n l))‖x_n − x̃‖² + (2α_n/(1 − α_n l))⟨f(x̃) − x̃, x_{n+1} − x̃⟩
= (1 − 2(1 − l)α_n/(1 − α_n l))‖x_n − x̃‖² + (α_n²/(1 − α_n l))‖x_n − x̃‖² + (2α_n/(1 − α_n l))⟨f(x̃) − x̃, x_{n+1} − x̃⟩
≤ (1 − 2(1 − l)α_n/(1 − α_n l))‖x_n − x̃‖² + (2(1 − l)α_n/(1 − α_n l))(α_n M₄/(2(1 − l)) + (1/(1 − l))⟨f(x̃) − x̃, x_{n+1} − x̃⟩)
= (1 − ξ_n)‖x_n − x̃‖² + ξ_n δ_n,

where M₄ > 0 is an appropriate constant, ξ_n = 2(1 − l)α_n/(1 − α_n l) and

δ_n = α_n M₄/(2(1 − l)) + (1/(1 − l))⟨f(x̃) − x̃, x_{n+1} − x̃⟩.

From conditions (C1) and (C2) and Step 6, it is easy to see that ξ_n → 0, ∑_{n=0}^∞ ξ_n = ∞ and lim sup_{n→∞} δ_n ≤ 0. Hence, by Lemma 2.9, we conclude that x_n → x̃ as n → ∞. This completes the proof. □

From Theorem 3.2, we deduce immediately the following result.

Corollary 3.2 Suppose that F(T) ∩ (A + B)⁻¹0 ≠ ∅. Let {α_n}, {β_n} ⊂ (0,1) and {λ_n} ⊂ (0, 2α) satisfy the following conditions:

  1. (C1)

    lim_{n→∞} α_n = 0;

  2. (C2)

    ∑_{n=0}^∞ α_n = ∞;

  3. (C3)

    0 < c ≤ β_n ≤ d < 1;

  4. (C4)

    0 < a ≤ λ_n ≤ b < 2α.

Let the sequence {x_n} be generated iteratively by the scheme (3.5), where x_0 ∈ C is an arbitrary initial guess. Then the sequence {x_n} converges strongly to a point x̃ in F(T) ∩ (A + B)⁻¹0, which is the unique solution of the minimum-norm problem (3.26).

Proof With f ≡ 0, the variational inequality (3.2) reduces to the inequality

⟨x̃, p − x̃⟩ ≥ 0, ∀p ∈ F(T) ∩ (A + B)⁻¹0.

This is equivalent to ‖x̃‖² ≤ ⟨p, x̃⟩ ≤ ‖p‖‖x̃‖ for all p ∈ F(T) ∩ (A + B)⁻¹0. It turns out that ‖x̃‖ ≤ ‖p‖ for all p ∈ F(T) ∩ (A + B)⁻¹0, and x̃ is the minimum-norm point of F(T) ∩ (A + B)⁻¹0. □
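A toy run of the minimum-norm scheme (3.5) (an illustrative setup of ours, not from the paper): take T = I (so S = I and F(T) = ℝ), A ≡ 0 (α-inverse-strongly monotone for every α > 0), and B = ∂i_{[1,3]}, so that J_λ is the projection onto [1, 3] and F(T) ∩ (A + B)⁻¹0 = [1, 3]. The minimum-norm point of [1, 3] is 1, and the scheme selects it.

```python
def min_norm_scheme(x, n_iter):
    # Scheme (3.5): x_{n+1} = beta_n x_n + (1 - alpha_n - beta_n) S J_lam(x_n - lam A x_n).
    # With S = I and A = 0 this is x_{n+1} = beta x_n + (1 - alpha_n - beta) P_[1,3](x_n);
    # the weights sum to 1 - alpha_n < 1, which is what pulls the iterates toward 0
    # and hence toward the minimum-norm point of [1, 3].
    beta = 0.5
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)          # alpha_n -> 0, sum alpha_n = infinity
        proj = min(max(x, 1.0), 3.0)   # J_lam = projection onto [1, 3]
        x = beta * x + (1.0 - alpha - beta) * proj
    return x

x_inf = min_norm_scheme(x=5.0, n_iter=100_000)
```

Starting from either side of the interval, the iterates settle near 1, the point of [1, 3] closest to the origin.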

Remark 3.1 It is worth pointing out that our iterative schemes (3.1) and (3.4) are new and differ from those in the literature, as are the schemes (3.3) and (3.5) (see [5, 11] and the references therein).

4 Applications

Let H be a real Hilbert space, and let g : H → (−∞, ∞] be a proper lower semicontinuous convex function. Then the subdifferential ∂g of g is defined as follows:

∂g(x) = {z ∈ H : g(x) + ⟨z, y − x⟩ ≤ g(y), ∀y ∈ H}

for all x ∈ H. From Rockafellar [19], we know that ∂g is maximal monotone. Let C be a closed convex subset of H, and let i_C be the indicator function of C, i.e.,

i_C(x) = 0 if x ∈ C, and i_C(x) = ∞ if x ∉ C.
(4.1)

Since i_C is a proper lower semicontinuous convex function on H, the subdifferential ∂i_C of i_C is a maximal monotone operator. It is well known that, if B = ∂i_C, then the MIP (1.1) is equivalent to finding u ∈ C such that

⟨Au, v − u⟩ ≥ 0, ∀v ∈ C.
(4.2)

This problem is called the Hartman-Stampacchia variational inequality (see [20]). The set of solutions of the variational inequality (4.2) is denoted by VI(C, A).

The following result was proved by Takahashi et al. [11].

Lemma 4.1 [11]

Let C be a nonempty closed convex subset of a real Hilbert space H, let P_C be the metric projection from H onto C, let ∂i_C be the subdifferential of i_C, where i_C is defined by (4.1), and let J_λ = (I + λ∂i_C)⁻¹ be the resolvent of ∂i_C for λ > 0. Then

u = J_λ x ⟺ u = P_C x, ∀x ∈ H.

Applying Theorem 3.2, we can obtain a strong convergence theorem for finding a common element of the set of solutions to the variational inequality (4.2) and the set of fixed points of a nonexpansive mapping.

Theorem 4.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let A be an α-inverse-strongly monotone mapping of C into H, and let S be a nonexpansive mapping of C into itself such that F(S) ∩ VI(C, A) ≠ ∅. Let {α_n}, {β_n} ⊂ (0,1) and {λ_n} ⊂ (0, 2α) satisfy the following conditions:

  1. (C1)

    lim_{n→∞} α_n = 0;

  2. (C2)

    ∑_{n=0}^∞ α_n = ∞;

  3. (C3)

    0 < c ≤ β_n ≤ d < 1;

  4. (C4)

    0 < a ≤ λ_n ≤ b < 2α and lim_{n→∞}(λ_n − λ_{n+1}) = 0.

Let the sequence {x_n} be generated iteratively by

x_{n+1} = α_n f(x_n) + β_n x_n + (1 − α_n − β_n) S P_C(x_n − λ_n A x_n), n ≥ 0,

where x_0 ∈ C is an arbitrary initial guess. Then the sequence {x_n} converges strongly to a point x̃ in F(S) ∩ VI(C, A).

Proof Put B = ∂i_C. It is easy to show that VI(C, A) = (A + ∂i_C)⁻¹0. In fact,

x ∈ (A + ∂i_C)⁻¹0 ⟺ 0 ∈ Ax + ∂i_C x ⟺ −Ax ∈ ∂i_C x ⟺ ⟨Ax, u − x⟩ ≥ 0 (∀u ∈ C) ⟺ x ∈ VI(C, A).

From Lemma 4.1, we get J λ n = P C for all λ n . Hence, the desired result follows from Theorem 3.2. □
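A numerical sketch of the scheme in Theorem 4.1 (toy data of ours, not from the paper): C = [0, 1], A x = x − 0.25 (1-inverse-strongly monotone), S = I, and f ≡ 0.9 (a constant map is a contraction with constant 0), so that F(S) ∩ VI(C, A) = {0.25}.

```python
def theorem_4_1(x, n_iter):
    # x_{n+1} = alpha_n f(x_n) + beta_n x_n + (1 - alpha_n - beta_n) S P_C(x_n - lam_n A x_n)
    lam = 0.5                      # lam_n in (0, 2*alpha), alpha = 1
    beta = 0.5
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)
        # P_C(x - lam * A(x)) with A(x) = x - 0.25 and C = [0, 1]
        proj = min(max(x - lam * (x - 0.25), 0.0), 1.0)
        x = alpha * 0.9 + beta * x + (1.0 - alpha - beta) * proj  # S = I
    return x

x_vi = theorem_4_1(x=1.0, n_iter=100_000)
```

The limit approximates the solution x* of ⟨Ax*, v − x*⟩ ≥ 0 for all v ∈ [0, 1], which here is the interior zero x* = 0.25 of A.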

As in [11, 21], we consider the problem of finding a common element of the set of solutions of a mathematical model related to equilibrium problems and the set of fixed points of a nonexpansive mapping in a Hilbert space.

Let C be a nonempty closed convex subset of a Hilbert space H, and let us assume that a bifunction Θ : C × C → ℝ satisfies the following conditions:

(A1) Θ(x, x) = 0 for all x ∈ C;

(A2) Θ is monotone, that is, Θ(x, y) + Θ(y, x) ≤ 0 for all x, y ∈ C;

(A3) for each x, y, z ∈ C, lim_{t↓0} Θ(tz + (1 - t)x, y) ≤ Θ(x, y);

(A4) for each x ∈ C, y ↦ Θ(x, y) is convex and lower semicontinuous.

Then the mathematical model related to the equilibrium problem (with respect to C) is to find x̂ ∈ C such that

Θ(x̂, y) ≥ 0
(4.3)

for all y ∈ C. The set of such solutions x̂ is denoted by EP(Θ). The following lemma was given in [22, 23].

Lemma 4.2 [22, 23]

Let C be a nonempty closed convex subset of H, and let Θ be a bifunction of C × C into ℝ satisfying (A1)-(A4). Then for any r > 0 and x ∈ H, there exists z ∈ C such that

Θ(z, y) + (1/r) ⟨y - z, z - x⟩ ≥ 0, ∀y ∈ C.

Moreover, if we define T_r : H → C as follows:

T_r x = { z ∈ C : Θ(z, y) + (1/r) ⟨y - z, z - x⟩ ≥ 0, ∀y ∈ C }

for all x ∈ H, then the following hold:

(1) T_r is single-valued;

(2) T_r is firmly nonexpansive, that is, for any x, y ∈ H, ‖T_r x - T_r y‖² ≤ ⟨T_r x - T_r y, x - y⟩;

(3) F(T_r) = EP(Θ);

(4) EP(Θ) is closed and convex.
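Property (2) can be sanity-checked numerically on a toy bifunction. For Θ(x, y) = x(y - x) on C = H = ℝ (an illustrative example, not from the paper), solving the defining inequality Θ(z, y) + (1/r)(y - z)(z - x) ≥ 0 for all y forces z + (z - x)/r = 0, giving the closed form T_r x = x/(1 + r):

```python
import numpy as np

# Toy bifunction Theta(x, y) = x * (y - x) on C = H = R (illustrative,
# not from the paper). Its resolvent has the closed form below.
def T(r, x):
    return x / (1.0 + r)

r = 2.0
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=2)
    lhs = (T(r, x) - T(r, y)) ** 2        # ||T_r x - T_r y||^2
    rhs = (T(r, x) - T(r, y)) * (x - y)   # <T_r x - T_r y, x - y>
    assert lhs <= rhs + 1e-12             # firm nonexpansiveness, property (2)

print("property (2) holds on all samples")
```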

We call such T r the resolvent of Θ for r>0. The following lemma was given in Takahashi et al. [11].

Lemma 4.3 [11]

Let C be a nonempty closed convex subset of a real Hilbert space H, and let Θ be a bifunction of C × C into ℝ satisfying (A1)-(A4). Let A_Θ be a multivalued mapping of H into itself defined by

A_Θ x = { z ∈ H : Θ(x, y) ≥ ⟨y - x, z⟩, ∀y ∈ C } for x ∈ C, and A_Θ x = ∅ for x ∉ C.

Then EP(Θ) = A_Θ^{-1} 0, and A_Θ is a maximal monotone operator with dom(A_Θ) ⊂ C. Moreover, for any x ∈ H and r > 0, the resolvent T_r of Θ coincides with the resolvent of A_Θ; i.e.,

T_r x = (I + r A_Θ)^{-1} x.

Applying Lemma 4.3 and Theorem 3.2, we can obtain the following results.

Theorem 4.2 Let C be a nonempty closed convex subset of a real Hilbert space H, and let Θ be a bifunction of C × C into ℝ satisfying (A1)-(A4). Let A_Θ be a maximal monotone operator with dom(A_Θ) ⊂ C defined as in Lemma 4.3, and let T_λ be the resolvent of Θ for λ > 0. Let A be an α-inverse strongly monotone mapping of C into H, and let S be a nonexpansive mapping from C into itself such that F(S) ∩ (A + A_Θ)^{-1} 0 ≠ ∅. Let {α_n}, {β_n} ⊂ (0, 1) and {λ_n} ⊂ (0, 2α) satisfy the following conditions:

(C1) lim_{n→∞} α_n = 0;

(C2) ∑_{n=0}^∞ α_n = ∞;

(C3) 0 < c ≤ β_n ≤ d < 1;

(C4) 0 < a ≤ λ_n ≤ b < 2α and lim_{n→∞} (λ_n - λ_{n+1}) = 0.

Let the sequence { x n } be generated iteratively by

x_{n+1} = α_n f(x_n) + β_n x_n + (1 - α_n - β_n) S T_{λ_n} (x_n - λ_n A x_n), n ≥ 0,

where x_0 ∈ C is an arbitrary initial guess. Then the sequence {x_n} converges strongly to a point x̃ ∈ F(S) ∩ (A + A_Θ)^{-1} 0.

Theorem 4.3 Let C be a nonempty closed convex subset of a real Hilbert space H, and let Θ be a bifunction of C × C into ℝ satisfying (A1)-(A4). Let A_Θ be a maximal monotone operator with dom(A_Θ) ⊂ C defined as in Lemma 4.3, let T_λ be the resolvent of Θ for λ > 0, and let S be a nonexpansive mapping from C into itself such that F(S) ∩ EP(Θ) ≠ ∅. Let {α_n}, {β_n} ⊂ (0, 1) and {λ_n} ⊂ (0, 2α) satisfy the following conditions:

(C1) lim_{n→∞} α_n = 0;

(C2) ∑_{n=0}^∞ α_n = ∞;

(C3) 0 < c ≤ β_n ≤ d < 1;

(C4) 0 < a ≤ λ_n ≤ b < 2α and lim_{n→∞} (λ_n - λ_{n+1}) = 0.

Let the sequence { x n } be generated iteratively by

x_{n+1} = α_n f(x_n) + β_n x_n + (1 - α_n - β_n) S T_{λ_n} (x_n), n ≥ 0,

where x_0 ∈ C is an arbitrary initial guess. Then the sequence {x_n} converges strongly to a point x̃ ∈ F(S) ∩ EP(Θ).

Proof Put A = 0 in Theorem 4.2. From Lemma 4.3, we also know that J_{λ_n} = T_{λ_n} for all n ≥ 0. Hence, the desired result follows from Theorem 4.2. □
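A minimal numerical sketch of the scheme in Theorem 4.3, with all data assumed for illustration rather than taken from the paper: the bifunction Θ(x, y) = x(y - x) on C = H = ℝ satisfies (A1)-(A4), has EP(Θ) = {0}, and its resolvent admits the closed form T_r x = x/(1 + r); we take S the identity and f(x) = x/2.

```python
# Illustrative data (assumed, not from the paper):
# Theta(x, y) = x * (y - x) on C = H = R satisfies (A1)-(A4);
# EP(Theta) = {0} and its resolvent is T_r x = x / (1 + r).
def T(r, x):
    return x / (1.0 + r)

S = lambda x: x          # nonexpansive, F(S) = R
f = lambda x: 0.5 * x    # contraction

x = 5.0                  # arbitrary initial guess x_0
for n in range(200):
    alpha, beta, lam = 1.0 / (n + 2), 0.5, 1.0   # (C1)-(C4) hold
    x = alpha * f(x) + beta * x + (1 - alpha - beta) * S(T(lam, x))

print(abs(x) < 1e-10)    # True: x_n -> 0, the unique point of F(S) ∩ EP(Theta)
```

For this particular toy data the map happens to contract geometrically, so convergence is much faster than the worst-case rate implied by the diminishing steps α_n.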

Remark 4.1 (1) As in Corollary 3.2, if we take f ≡ 0 in Theorems 4.1, 4.2 and 4.3, then we can obtain the minimum-norm point of F(S) ∩ VI(C, A), F(S) ∩ (A + A_Θ)^{-1} 0 and F(S) ∩ EP(Θ), respectively.

(2) For several iterative schemes for zeros of monotone operators, variational inequality problems, generalized equilibrium problems, convex minimization problems, and fixed point problems, we refer to [24–29] and the references therein. By combining the methods in this paper with those in [24–29], one can develop new iterative schemes for the above-mentioned problems coupled with fixed point problems of nonlinear operators.

References

1. Chang SS: Existence and approximation of solutions for set-valued variational inclusions in Banach spaces. Nonlinear Anal. 2001, 47: 583–594. 10.1016/S0362-546X(01)00203-6
2. Demyanov VF, Stavroulakis GE, Polyakova LN, Panagiotopoulos PD: Quasi-Differentiability and Nonsmooth Modelling in Mechanics, Engineering and Economics, vol. 10. Kluwer Academic, Dordrecht; 1996.
3. Lin LJ, Wang SY, Chuang CS: Existence theorems of systems of variational inclusion problems with applications. J. Glob. Optim. 2008, 40: 751–764. 10.1007/s10898-007-9160-2
4. Peng JW, Wang Y, Shyu DS, Yao JC: Common solutions of an iterative scheme for variational inclusions, equilibrium problems, and fixed point problems. J. Inequal. Appl. 2008, Article ID 720371. 10.1155/2008/720371
5. Zhang SS, Lee JHW, Chan CK: Algorithms of common solutions to quasi variational inclusion and fixed point problems. Appl. Math. Mech. 2008, 29: 571–581. 10.1007/s10483-008-0502-y
6. Browder FE, Petryshyn WV: Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20: 197–228. 10.1016/0022-247X(67)90085-6
7. Acedo GL, Xu HK: Iterative methods for strictly pseudo-contractions in Hilbert space. Nonlinear Anal. 2007, 67: 2258–2271. 10.1016/j.na.2006.08.036
8. Jung JS: Strong convergence of iterative methods for k-strictly pseudo-contractive mappings in Hilbert spaces. Appl. Math. Comput. 2010, 215: 3736–3753.
9. Jung JS: Some results on a general iterative method for k-strictly pseudo-contractive mappings. Fixed Point Theory Appl. 2011, Article ID 24. 10.1186/1687-1812-2011-24
10. Morales CH, Jung JS: Convergence of paths for pseudo-contractive mappings in Banach spaces. Proc. Am. Math. Soc. 2000, 128: 3411–3419. 10.1090/S0002-9939-00-05573-8
11. Takahashi S, Takahashi W, Toyoda M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147: 27–41. 10.1007/s10957-010-9713-2
12. Takahashi W: Nonlinear Functional Analysis: Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.
13. Zhou H: Convergence theorems of fixed points for k-strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2008, 69: 456–462. 10.1016/j.na.2007.05.032
14. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z
15. Brézis H: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland Mathematics Studies 5, Notas de Matemática 50. North-Holland, Amsterdam; 1973.
16. Minty GJ: On the generalization of a direct method of the calculus of variations. Bull. Am. Math. Soc. 1967, 73: 315–321. 10.1090/S0002-9904-1967-11732-4
17. Suzuki T: Strong convergence of Krasnoselskii and Mann's type sequences for one parameter nonexpansive semigroups without Bochner integral. J. Math. Anal. Appl. 2005, 305: 227–239. 10.1016/j.jmaa.2004.11.017
18. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332
19. Rockafellar RT: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33: 209–216. 10.2140/pjm.1970.33.209
20. Hartman P, Stampacchia G: On some non-linear elliptic differential-functional equations. Acta Math. 1966, 115: 271–310. 10.1007/BF02392210
21. Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in Hilbert spaces. Nonlinear Anal. 2008, 69: 1025–1033. 10.1016/j.na.2008.02.042
22. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
23. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.
24. Ceng LC, Ansari QH, Khan AR, Yao JC: Strong convergence on composite iterative schemes for zeros of m-accretive operators in Banach spaces. Nonlinear Anal. 2009, 70: 1830–1840. 10.1016/j.na.2008.02.083
25. Ceng LC, Ansari QH, Khan AR, Yao JC: Viscosity approximation methods for strongly positive and monotone operators. Fixed Point Theory 2009, 10: 35–71.
26. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed point and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74(16): 5286–5302. 10.1016/j.na.2011.05.005
27. Ceng LC, Ansari QH, Yao JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 2011, 1(3): 341–359.
28. Ceng LC, Ansari QH, Wong MM, Yao JC: Mann type hybrid extragradient method for variational inequalities, variational inclusion and fixed point problems. Fixed Point Theory 2012, 13(2): 403–422.
29. Zeng LC, Ansari QH, Shyu DS, Yao JC: Strong and weak convergence theorems for common solutions of generalized equilibrium problems and zeros of maximal monotone operators. Fixed Point Theory Appl. 2010, Article ID 590278.


Acknowledgements

The author would like to thank the anonymous referees for their careful reading and valuable suggestions, which improved the presentation of this manuscript, and the editor for his valuable comments along with providing some recent related papers. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2013021600).

Author information


Corresponding author

Correspondence to Jong Soo Jung.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Jung, J.S. Iterative algorithms for monotone inclusion problems, fixed point problems and minimization problems. Fixed Point Theory Appl 2013, 272 (2013). https://doi.org/10.1186/1687-1812-2013-272
