
Parallel algorithms for variational inclusions and fixed points with applications

Abstract

In this paper, we introduce two parallel algorithms for finding a zero of the sum of two monotone operators and a fixed point of a nonexpansive mapping in Hilbert spaces, and we prove some strong convergence theorems for the proposed algorithms. As special cases, we can approximate the minimum-norm common element of the set of zeros of the sum of two monotone operators and the set of fixed points of a nonexpansive mapping without using the metric projection. Further, we give some applications of our main results.

MSC:49J40, 47J20, 47H09, 65J15.

1 Introduction

Let $H$ be a real Hilbert space. Let $A:H\to H$ be a single-valued nonlinear mapping and $B:H\to 2^H$ a set-valued mapping.

Now, we are concerned with the following variational inclusion:

Find a zero $x\in H$ of the sum of two monotone operators $A$ and $B$ such that

$$0\in Ax+Bx,$$
(1.1)

where $0$ is the zero vector in $H$.

The set of solutions of the problem (1.1) is denoted by $(A+B)^{-1}0$. If $H=\mathbb{R}^m$, then the problem (1.1) becomes the generalized equation introduced by Robinson [1]. If $A=0$, then the problem (1.1) becomes the inclusion problem introduced by Rockafellar [2]. It is well known that the problem (1.1) is among the most interesting and intensively studied classes of mathematical problems and has wide applications in the fields of optimization and control, economics and transportation equilibrium, and engineering science, among many others. In recent years, many existence results and iterative algorithms for various variational inequality and variational inclusion problems have been extended and generalized in various directions using novel and innovative techniques. A useful and important generalization is called the general variational inclusion involving the sum of two nonlinear operators. Moudafi and Noor [3] studied the sensitivity analysis of variational inclusions by using the technique of resolvent equations. Recently, much attention has been given to developing iterative algorithms for solving variational inclusions. Dong et al. [4] analyzed the solution's sensitivity for variational inequalities and variational inclusions by using a resolvent operator technique. By using the concept and technique of resolvent operators, Agarwal et al. [5] and Jeong [6] introduced and studied a new system of parametric generalized nonlinear mixed quasi-variational inclusions in a Hilbert space. Lan [7] introduced and studied a stable iteration procedure for a class of generalized mixed quasi-variational inclusion systems in Hilbert spaces. Recently, Zhang et al. [8] introduced a new iterative scheme for finding a common element of the set of solutions to the problem (1.1) and the set of fixed points of nonexpansive mappings in Hilbert spaces. Peng et al. [9] introduced another iterative scheme by the viscosity approximation method for finding a common element of the set of solutions of a variational inclusion with a set-valued maximal monotone mapping and an inverse strongly monotone mapping, the set of solutions of an equilibrium problem, and the set of fixed points of a nonexpansive mapping. For some related work, see [9–23] and the references therein.

Recently, Takahashi et al. [24] introduced the following iterative algorithm for finding a zero of the sum of two monotone operators and a fixed point of a nonexpansive mapping:

$$x_{n+1}=\beta_n x_n+(1-\beta_n)S\bigl(\alpha_n x+(1-\alpha_n)J_{\lambda_n}^B(x_n-\lambda_n Ax_n)\bigr)$$
(1.2)

for all $n\ge 0$. Under some assumptions, they proved that the sequence $\{x_n\}$ converges strongly to a point of $F(S)\cap(A+B)^{-1}0$.

Remark 1.1 We note that the algorithm (1.2) cannot be used to find the minimum-norm element due to the facts that $x\in C$ and $S$ is a self-mapping of $C$. However, there exist a large number of problems for which one needs to find the minimum-norm solution (see, for example, [25–29]). A useful path to circumvent this difficulty is to use projections; Bauschke and Borwein [30] and Censor and Zenios [31] provide reviews of the field. The main difficulty lies in the computation of the projection. Hence it is an interesting problem to find the minimum-norm element without using the projection.

Motivated and inspired by the works in this field, we first suggest the following two algorithms without using projection:

$$x_t=(1-\kappa)Sx_t+\kappa J_\lambda^B\bigl(t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t\bigr)$$

for all $t\in(0,1)$ and

$$x_{n+1}=(1-\kappa)Sx_n+\kappa J_{\lambda_n}^B\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n Ax_n\bigr)$$

for all $n\ge 0$. Notice that these two algorithms are indeed well defined (see the next section). We show that the suggested algorithms converge strongly to a point $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$ which solves the following variational inequality:

$$\langle\gamma f(\tilde{x})-\tilde{x},\ \tilde{x}-z\rangle\ge 0$$

for all $z\in F(S)\cap(A+B)^{-1}0$.

As special cases, we can approach the minimum-norm element of $F(S)\cap(A+B)^{-1}0$ without using the metric projection and give some applications.

2 Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$.

(1) A mapping $S:C\to C$ is said to be nonexpansive if

$$\|Sx-Sy\|\le\|x-y\|$$

for all $x,y\in C$. We denote by $F(S)$ the set of fixed points of $S$.

(2) A mapping $A:C\to H$ is said to be $\alpha$-inverse strongly monotone if there exists $\alpha>0$ such that

$$\langle Ax-Ay,\ x-y\rangle\ge\alpha\|Ax-Ay\|^2$$

for all $x,y\in C$.

It is well known that, if $A$ is $\alpha$-inverse strongly monotone, then $\|Ax-Ay\|\le\frac{1}{\alpha}\|x-y\|$ for all $x,y\in C$.

Let $B$ be a mapping from $H$ into $2^H$. The effective domain of $B$ is denoted by $\mathrm{dom}(B)$, that is, $\mathrm{dom}(B)=\{x\in H: Bx\neq\emptyset\}$.

(3) A multi-valued mapping $B$ is said to be a monotone operator on $H$ if

$$\langle x-y,\ u-v\rangle\ge 0$$

for all $x,y\in\mathrm{dom}(B)$, $u\in Bx$, and $v\in By$.

(4) A monotone operator $B$ on $H$ is said to be maximal if its graph is not strictly contained in the graph of any other monotone operator on $H$.

Let $B$ be a maximal monotone operator on $H$ and $B^{-1}0=\{x\in H: 0\in Bx\}$. For a maximal monotone operator $B$ on $H$ and $\lambda>0$, we may define a single-valued operator $J_\lambda^B=(I+\lambda B)^{-1}:H\to\mathrm{dom}(B)$, which is called the resolvent of $B$ for $\lambda$. It is well known that the resolvent $J_\lambda^B$ is firmly nonexpansive, i.e.,

$$\|J_\lambda^B x-J_\lambda^B y\|^2\le\langle J_\lambda^B x-J_\lambda^B y,\ x-y\rangle$$

for all $x,y\in H$, and that $B^{-1}0=F(J_\lambda^B)$ for all $\lambda>0$. The following resolvent identity is well known: for any $\lambda>0$ and $\mu>0$,

$$J_\lambda^B x=J_\mu^B\Bigl(\frac{\mu}{\lambda}x+\Bigl(1-\frac{\mu}{\lambda}\Bigr)J_\lambda^B x\Bigr)$$
(2.1)

for all $x\in H$.
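As a concrete illustration (ours, not from the paper), the identity (2.1) can be checked numerically on $H=\mathbb{R}$ with $B=\partial|\cdot|$, whose resolvent is the soft-thresholding map; the function names below are illustrative.

```python
# Numerical check of the resolvent identity (2.1) for B = the subdifferential
# of the absolute value on R. For this B the resolvent J_lam is soft-thresholding:
#   J_lam(x) = (I + lam*B)^{-1}(x) = sign(x) * max(|x| - lam, 0).
import math

def resolvent(lam, x):
    """Resolvent J_lam of B = subdifferential of |.| (soft-thresholding)."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def identity_rhs(lam, mu, x):
    """Right-hand side of (2.1): J_mu((mu/lam)*x + (1 - mu/lam)*J_lam(x))."""
    return resolvent(mu, (mu / lam) * x + (1.0 - mu / lam) * resolvent(lam, x))

# J_lam(x) should equal the right-hand side of (2.1) for all lam, mu > 0.
for x in (-5.0, -0.3, 0.0, 1.0, 3.7):
    for lam, mu in ((2.0, 1.0), (0.5, 1.5), (3.0, 0.1)):
        assert abs(resolvent(lam, x) - identity_rhs(lam, mu, x)) < 1e-12
print("resolvent identity (2.1) verified for B = d|.|")
```

The same check works for any operator whose resolvent is available in closed form, e.g. the projection onto a closed convex set (resolvent of its normal cone).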

We use the notation $x_n\rightharpoonup x$ for the weak convergence of $\{x_n\}$ to $x$ and $x_n\to x$ for the strong convergence of $\{x_n\}$ to $x$, respectively.

We need the following lemmas for the next section.

Lemma 2.1 ([32])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A:C\to H$ be an $\alpha$-inverse strongly monotone mapping and $\lambda>0$ a constant. Then we have

$$\|(I-\lambda A)x-(I-\lambda A)y\|^2\le\|x-y\|^2+\lambda(\lambda-2\alpha)\|Ax-Ay\|^2$$

for all $x,y\in C$. In particular, if $0\le\lambda\le 2\alpha$, then $I-\lambda A$ is nonexpansive.
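The inequality of Lemma 2.1 can be sanity-checked numerically. In the sketch below (our illustration, not from the paper) we take $A$ to be a symmetric positive semidefinite matrix $M$, which is $\alpha$-inverse strongly monotone with $\alpha=1/L$, where $L$ is the largest eigenvalue of $M$; all concrete values are assumptions made for the demo.

```python
# Numerical sanity check of Lemma 2.1. For A = M (symmetric PSD matrix),
# A is alpha-inverse strongly monotone with alpha = 1/L, L = largest eigenvalue:
#   <Mx - My, x - y> >= (1/L) ||Mx - My||^2.
import random

M = [[2.0, 1.0], [1.0, 2.0]]   # symmetric PSD, eigenvalues 1 and 3
L = 3.0                        # largest eigenvalue of M
alpha = 1.0 / L

def apply_M(x):
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

def norm2(x):
    return x[0]*x[0] + x[1]*x[1]

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]
    y = [random.uniform(-5, 5), random.uniform(-5, 5)]
    lam = random.uniform(0.0, 2 * alpha)
    d = [x[0] - y[0], x[1] - y[1]]
    Mx, My = apply_M(x), apply_M(y)
    Ad = [Mx[0] - My[0], Mx[1] - My[1]]
    lhs = norm2([d[0] - lam*Ad[0], d[1] - lam*Ad[1]])   # ||(I-lam A)x - (I-lam A)y||^2
    rhs = norm2(d) + lam*(lam - 2*alpha)*norm2(Ad)      # bound from Lemma 2.1
    assert lhs <= rhs + 1e-9
    assert lhs <= norm2(d) + 1e-9   # hence I - lam*A is nonexpansive for lam <= 2*alpha
print("Lemma 2.1 verified on random samples")
```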

Lemma 2.2 ([33])

Let $C$ be a closed convex subset of a Hilbert space $H$ and $S:C\to C$ a nonexpansive mapping. Then $F(S)$ is a closed convex subset of $C$ and the mapping $I-S$ is demiclosed at $0$, i.e., whenever $\{x_n\}\subset C$ is such that $x_n\rightharpoonup x$ and $(I-S)x_n\to 0$, then $(I-S)x=0$.

Lemma 2.3 ([1])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Assume that the mapping $F:C\to H$ is monotone and weakly continuous along segments, that is, $F(x+ty)\to F(x)$ weakly as $t\to 0$. Then the variational inequality

$$x^*\in C,\qquad \langle Fx^*,\ x-x^*\rangle\ge 0$$

for all $x\in C$ is equivalent to the dual variational inequality

$$x^*\in C,\qquad \langle Fx,\ x-x^*\rangle\ge 0$$

for all $x\in C$.

Lemma 2.4 ([34])

Let $\{x_n\}$, $\{y_n\}$ be bounded sequences in a Banach space $X$ and $\{\beta_n\}$ a sequence in $[0,1]$ with

$$0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1.$$

Suppose that $x_{n+1}=(1-\beta_n)y_n+\beta_n x_n$ for all $n\ge 0$ and

$$\limsup_{n\to\infty}\bigl(\|y_{n+1}-y_n\|-\|x_{n+1}-x_n\|\bigr)\le 0.$$

Then $\lim_{n\to\infty}\|y_n-x_n\|=0$.

Lemma 2.5 ([35])

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

$$a_{n+1}\le(1-\gamma_n)a_n+\delta_n\gamma_n$$

for all $n\ge 1$, where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

(a) $\sum_{n=1}^{\infty}\gamma_n=\infty$;

(b) $\limsup_{n\to\infty}\delta_n\le 0$ or $\sum_{n=1}^{\infty}|\delta_n\gamma_n|<\infty$.

Then $\lim_{n\to\infty}a_n=0$.

3 Main results

In this section, we prove our main results.

Theorem 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$. Let $f:C\to H$ be a $\rho$-contraction and $\gamma$ a constant such that $0<\gamma<\frac{1}{\rho}$. Let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_\lambda^B=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ and $S$ a nonexpansive mapping from $C$ into itself such that $F(S)\cap(A+B)^{-1}0\neq\emptyset$. Let $\lambda$ and $\kappa$ be two constants satisfying $a\le\lambda\le b$, where $[a,b]\subset(0,2\alpha)$, and $\kappa\in(0,1)$. For any $t\in(0,1-\frac{\lambda}{2\alpha})$, let $\{x_t\}\subset C$ be a net generated by

$$x_t=(1-\kappa)Sx_t+\kappa J_\lambda^B\bigl(t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t\bigr).$$
(3.1)

Then the net $\{x_t\}$ converges strongly as $t\to 0{+}$ to a point $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$, which solves the following variational inequality:

$$\langle\gamma f(\tilde{x})-\tilde{x},\ \tilde{x}-z\rangle\ge 0$$

for all $z\in F(S)\cap(A+B)^{-1}0$.
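Before the proof, the implicit net (3.1) can be sketched numerically. The example below is ours, not from the paper: we take $H=\mathbb{R}^2$, $A(x)=x-p$ (which is $1$-inverse strongly monotone), $B=\partial\|\cdot\|_1$ (whose resolvent is componentwise soft-thresholding), $S=I$, $f\equiv 0$, $\gamma=1$, $\lambda=1$, $\kappa=0.5$; then $F(S)\cap(A+B)^{-1}0$ is the singleton $\{\mathrm{soft}(p,1)\}$. For each fixed $t$ the point $x_t$ is computed by inner Banach iteration, which converges since the defining map is a contraction with constant $1-(1-\gamma\rho)\kappa t$ (here even better, since $f\equiv 0$).

```python
# A numeric sketch of the implicit net (3.1); all problem data are illustrative.
# H = R^2, A(x) = x - p (alpha = 1), B = subdifferential of the l1-norm
# (resolvent = componentwise soft-thresholding), S = I, f = 0, gamma = 1,
# lambda = 1, kappa = 0.5.  Then F(S) n (A+B)^{-1}0 = {soft(p, 1)}.

def soft(x, lam):
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1) for v in x]

p = [3.0, -0.5]
kappa, lam = 0.5, 1.0

def net_point(t, iters=200):
    """Solve x = (1-kappa)Sx + kappa*J_lam(t*gamma*f(x) + (1-t)x - lam*Ax)
    by Banach iteration (here f = 0 and S = I)."""
    x = [0.0, 0.0]
    for _ in range(iters):
        inner = [(1 - t) * xi - lam * (xi - pi) for xi, pi in zip(x, p)]
        x = [(1 - kappa) * xi + kappa * ji for xi, ji in zip(x, soft(inner, lam))]
    return x

solution = soft(p, 1.0)                 # (2, 0): the unique zero of A + B
x_t = net_point(1e-4)
err = max(abs(a - b) for a, b in zip(x_t, solution))
assert err < 1e-3                       # x_t is close to (2, 0) for small t
```

As $t$ decreases, the computed $x_t$ approaches the solution, matching the strong convergence asserted by the theorem.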

Proof First, we show that the net $\{x_t\}$ is well defined. For any $t\in(0,1-\frac{\lambda}{2\alpha})$, define a mapping $W:=(1-\kappa)S+\kappa J_\lambda^B(t\gamma f+(1-t)I-\lambda A)$. Note that $J_\lambda^B$, $S$, and $I-\frac{\lambda}{1-t}A$ (see Lemma 2.1) are nonexpansive. For any $x,y\in C$, we have

$$\begin{aligned}
\|Wx-Wy\| &= \Bigl\|(1-\kappa)(Sx-Sy)+\kappa\Bigl(J_\lambda^B\Bigl(t\gamma f(x)+(1-t)\Bigl(I-\tfrac{\lambda}{1-t}A\Bigr)x\Bigr)-J_\lambda^B\Bigl(t\gamma f(y)+(1-t)\Bigl(I-\tfrac{\lambda}{1-t}A\Bigr)y\Bigr)\Bigr)\Bigr\| \\
&\le (1-\kappa)\|Sx-Sy\|+\kappa\Bigl\|t\gamma\bigl(f(x)-f(y)\bigr)+(1-t)\Bigl[\Bigl(I-\tfrac{\lambda}{1-t}A\Bigr)x-\Bigl(I-\tfrac{\lambda}{1-t}A\Bigr)y\Bigr]\Bigr\| \\
&\le (1-\kappa)\|x-y\|+\kappa t\gamma\|f(x)-f(y)\|+(1-t)\kappa\Bigl\|\Bigl(I-\tfrac{\lambda}{1-t}A\Bigr)x-\Bigl(I-\tfrac{\lambda}{1-t}A\Bigr)y\Bigr\| \\
&\le (1-\kappa)\|x-y\|+t\kappa\gamma\rho\|x-y\|+(1-t)\kappa\|x-y\| \\
&= \bigl[1-(1-\gamma\rho)\kappa t\bigr]\|x-y\|,
\end{aligned}$$

which implies that the mapping $W$ is a contraction on $C$. We use $x_t$ to denote the unique fixed point of $W$ in $C$; therefore, $\{x_t\}$ is well defined. Set $y_t=J_\lambda^B u_t$ and $u_t=t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t$ for all $t$. Taking $z\in F(S)\cap(A+B)^{-1}0$, it is obvious that $z=Sz=J_\lambda^B(z-\lambda Az)$ for all $\lambda>0$ and so

$$z=Sz=J_\lambda^B(z-\lambda Az)=J_\lambda^B\Bigl(tz+(1-t)\Bigl(I-\tfrac{\lambda}{1-t}A\Bigr)z\Bigr)$$

for all $t\in(0,1-\frac{\lambda}{2\alpha})$. From (3.1), it follows that

$$\begin{aligned}
\|x_t-z\| &= \|(1-\kappa)(Sx_t-z)+\kappa(y_t-z)\| \\
&\le (1-\kappa)\|Sx_t-z\|+\kappa\|y_t-z\| \\
&\le (1-\kappa)\|x_t-z\|+\kappa\|y_t-z\|.
\end{aligned}$$

Hence we get $\|x_t-z\|\le\|y_t-z\|$. Since $J_\lambda^B$ is nonexpansive, we have

$$\begin{aligned}
\|y_t-z\| &= \Bigl\|J_\lambda^B\Bigl(t\gamma f(x_t)+(1-t)\Bigl(x_t-\tfrac{\lambda}{1-t}Ax_t\Bigr)\Bigr)-J_\lambda^B\Bigl(tz+(1-t)\Bigl(z-\tfrac{\lambda}{1-t}Az\Bigr)\Bigr)\Bigr\| \\
&\le \Bigl\|\Bigl(t\gamma f(x_t)+(1-t)\Bigl(x_t-\tfrac{\lambda}{1-t}Ax_t\Bigr)\Bigr)-\Bigl(tz+(1-t)\Bigl(z-\tfrac{\lambda}{1-t}Az\Bigr)\Bigr)\Bigr\| \\
&= \Bigl\|(1-t)\Bigl(\Bigl(x_t-\tfrac{\lambda}{1-t}Ax_t\Bigr)-\Bigl(z-\tfrac{\lambda}{1-t}Az\Bigr)\Bigr)+t\bigl(\gamma f(x_t)-z\bigr)\Bigr\| \\
&\le (1-t)\Bigl\|\Bigl(I-\tfrac{\lambda}{1-t}A\Bigr)x_t-\Bigl(I-\tfrac{\lambda}{1-t}A\Bigr)z\Bigr\|+t\gamma\|f(x_t)-f(z)\|+t\|\gamma f(z)-z\| \\
&\le (1-t)\|x_t-z\|+t\gamma\rho\|x_t-z\|+t\|\gamma f(z)-z\|.
\end{aligned}$$
(3.2)

Thus it follows that

$$\|x_t-z\|\le\frac{1}{1-\gamma\rho}\|\gamma f(z)-z\|.$$

Therefore, $\{x_t\}$ is bounded. We deduce immediately that $\{f(x_t)\}$, $\{Ax_t\}$, $\{Sx_t\}$, $\{u_t\}$, and $\{y_t\}$ are also bounded. By using the convexity of $\|\cdot\|^2$ and the $\alpha$-inverse strong monotonicity of $A$, from (3.2), we derive

$$\begin{aligned}
\|x_t-z\|^2 \le \|y_t-z\|^2 &\le \Bigl\|(1-t)\Bigl(\Bigl(x_t-\tfrac{\lambda}{1-t}Ax_t\Bigr)-\Bigl(z-\tfrac{\lambda}{1-t}Az\Bigr)\Bigr)+t\bigl(\gamma f(x_t)-z\bigr)\Bigr\|^2 \\
&\le (1-t)\Bigl\|\Bigl(x_t-\tfrac{\lambda}{1-t}Ax_t\Bigr)-\Bigl(z-\tfrac{\lambda}{1-t}Az\Bigr)\Bigr\|^2+t\|\gamma f(x_t)-z\|^2 \\
&= (1-t)\Bigl\|(x_t-z)-\tfrac{\lambda}{1-t}(Ax_t-Az)\Bigr\|^2+t\|\gamma f(x_t)-z\|^2 \\
&= (1-t)\Bigl(\|x_t-z\|^2-\tfrac{2\lambda}{1-t}\langle Ax_t-Az,\ x_t-z\rangle+\tfrac{\lambda^2}{(1-t)^2}\|Ax_t-Az\|^2\Bigr)+t\|\gamma f(x_t)-z\|^2 \\
&\le (1-t)\Bigl(\|x_t-z\|^2-\tfrac{2\alpha\lambda}{1-t}\|Ax_t-Az\|^2+\tfrac{\lambda^2}{(1-t)^2}\|Ax_t-Az\|^2\Bigr)+t\|\gamma f(x_t)-z\|^2 \\
&= (1-t)\Bigl(\|x_t-z\|^2+\tfrac{\lambda}{(1-t)^2}\bigl(\lambda-2(1-t)\alpha\bigr)\|Ax_t-Az\|^2\Bigr)+t\|\gamma f(x_t)-z\|^2 \\
&\le (1-t)\|x_t-z\|^2+\tfrac{\lambda}{1-t}\bigl(\lambda-2(1-t)\alpha\bigr)\|Ax_t-Az\|^2+t\|\gamma f(x_t)-z\|^2
\end{aligned}$$
(3.3)

and so

$$\frac{\lambda}{1-t}\bigl(2(1-t)\alpha-\lambda\bigr)\|Ax_t-Az\|^2\le t\|\gamma f(x_t)-z\|^2-t\|x_t-z\|^2\to 0\quad(t\to 0{+}).$$

By the assumption, we have $2(1-t)\alpha-\lambda>0$ for all $t\in(0,1-\frac{\lambda}{2\alpha})$ and so we obtain

$$\lim_{t\to 0+}\|Ax_t-Az\|=0.$$
(3.4)

Next, we show $\|x_t-Sx_t\|\to 0$. By using the firm nonexpansivity of $J_\lambda^B$, we have

$$\begin{aligned}
\|y_t-z\|^2 &= \bigl\|J_\lambda^B\bigl(t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t\bigr)-z\bigr\|^2 \\
&= \bigl\|J_\lambda^B\bigl(t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t\bigr)-J_\lambda^B(z-\lambda Az)\bigr\|^2 \\
&\le \bigl\langle t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t-(z-\lambda Az),\ y_t-z\bigr\rangle \\
&= \tfrac12\Bigl(\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t-(z-\lambda Az)\bigr\|^2+\|y_t-z\|^2-\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda(Ax_t-Az)-y_t\bigr\|^2\Bigr).
\end{aligned}$$

Thus it follows that

$$\|y_t-z\|^2\le\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t-(z-\lambda Az)\bigr\|^2-\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda(Ax_t-Az)-y_t\bigr\|^2.$$

By the nonexpansivity of $I-\frac{\lambda}{1-t}A$, we have

$$\begin{aligned}
\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t-(z-\lambda Az)\bigr\|^2 &= \Bigl\|(1-t)\Bigl(\Bigl(x_t-\tfrac{\lambda}{1-t}Ax_t\Bigr)-\Bigl(z-\tfrac{\lambda}{1-t}Az\Bigr)\Bigr)+t\bigl(\gamma f(x_t)-z\bigr)\Bigr\|^2 \\
&\le (1-t)\Bigl\|\Bigl(x_t-\tfrac{\lambda}{1-t}Ax_t\Bigr)-\Bigl(z-\tfrac{\lambda}{1-t}Az\Bigr)\Bigr\|^2+t\|\gamma f(x_t)-z\|^2 \\
&\le (1-t)\|x_t-z\|^2+t\|\gamma f(x_t)-z\|^2
\end{aligned}$$

and thus

$$\|x_t-z\|^2\le\|y_t-z\|^2\le(1-t)\|x_t-z\|^2+t\|\gamma f(x_t)-z\|^2-\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda(Ax_t-Az)-y_t\bigr\|^2.$$

Hence it follows that

$$\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda(Ax_t-Az)-y_t\bigr\|^2\le t\|\gamma f(x_t)-z\|^2\to 0\quad(t\to 0{+}).$$

Since $\|Ax_t-Az\|\to 0$, we deduce $\lim_{t\to 0+}\|x_t-y_t\|=0$, which implies that

$$\lim_{t\to 0+}\|x_t-Sx_t\|=0.$$
(3.5)

From (3.2), we have

$$\begin{aligned}
\|y_t-z\|^2 &\le \Bigl\|(1-t)\Bigl(\Bigl(x_t-\tfrac{\lambda}{1-t}Ax_t\Bigr)-\Bigl(z-\tfrac{\lambda}{1-t}Az\Bigr)\Bigr)+t\bigl(\gamma f(x_t)-z\bigr)\Bigr\|^2 \\
&= (1-t)^2\Bigl\|\Bigl(x_t-\tfrac{\lambda}{1-t}Ax_t\Bigr)-\Bigl(z-\tfrac{\lambda}{1-t}Az\Bigr)\Bigr\|^2+2t(1-t)\Bigl\langle\gamma f(x_t)-z,\ \Bigl(x_t-\tfrac{\lambda}{1-t}Ax_t\Bigr)-\Bigl(z-\tfrac{\lambda}{1-t}Az\Bigr)\Bigr\rangle+t^2\|\gamma f(x_t)-z\|^2 \\
&\le (1-t)^2\|x_t-z\|^2+2t(1-t)\Bigl\langle\gamma f(x_t)-z,\ x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\Bigr\rangle+t^2\|\gamma f(x_t)-z\|^2 \\
&= (1-t)^2\|x_t-z\|^2+2t(1-t)\Bigl\langle\gamma f(x_t)-\gamma f(z),\ x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\Bigr\rangle \\
&\qquad +2t(1-t)\Bigl\langle\gamma f(z)-z,\ x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\Bigr\rangle+t^2\|\gamma f(x_t)-z\|^2.
\end{aligned}$$

Note that $\|x_t-z\|\le\|y_t-z\|$. Then we obtain

$$\begin{aligned}
\|x_t-z\|^2 &\le (1-t)^2\|x_t-z\|^2+2t(1-t)\gamma\|f(x_t)-f(z)\|\Bigl(\|x_t-z\|+\tfrac{\lambda}{1-t}\|Ax_t-Az\|\Bigr) \\
&\qquad +2t(1-t)\Bigl\langle\gamma f(z)-z,\ x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\Bigr\rangle+t^2\|\gamma f(x_t)-z\|^2 \\
&\le (1-t)^2\|x_t-z\|^2+2t(1-t)\gamma\rho\|x_t-z\|^2+2t\lambda\gamma\rho\|x_t-z\|\|Ax_t-Az\| \\
&\qquad +2t(1-t)\Bigl\langle\gamma f(z)-z,\ x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\Bigr\rangle+t^2\|\gamma f(x_t)-z\|^2 \\
&\le \bigl[1-2(1-\gamma\rho)t\bigr]\|x_t-z\|^2+2t\Bigl[(1-t)\Bigl\langle\gamma f(z)-z,\ x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\Bigr\rangle \\
&\qquad +\tfrac{t}{2}\bigl(\|\gamma f(x_t)-z\|^2+\|x_t-z\|^2\bigr)+\lambda\gamma\rho\|x_t-z\|\|Ax_t-Az\|\Bigr].
\end{aligned}$$

Thus it follows that

$$\begin{aligned}
\|x_t-z\|^2 &\le \frac{1}{1-\gamma\rho}\Bigl(\Bigl\langle\gamma f(z)-z,\ x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\Bigr\rangle+\tfrac{t}{2}\bigl(\|\gamma f(x_t)-z\|^2+\|x_t-z\|^2\bigr) \\
&\qquad +t\|\gamma f(z)-z\|\Bigl\|x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\Bigr\|+\lambda\gamma\rho\|x_t-z\|\|Ax_t-Az\|\Bigr) \\
&\le \frac{1}{1-\gamma\rho}\langle\gamma f(z)-z,\ x_t-z\rangle+\bigl(t+\|Ax_t-Az\|\bigr)M,
\end{aligned}$$
(3.6)

where $M$ is some constant such that

$$\sup_{t\in(0,1-\frac{\lambda}{2\alpha})}\frac{1}{1-\gamma\rho}\Bigl\{\tfrac12\bigl(\|\gamma f(x_t)-z\|^2+\|x_t-z\|^2\bigr)+\|\gamma f(z)-z\|\Bigl\|x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\Bigr\|,\ \lambda\gamma\rho\|x_t-z\|\Bigr\}\le M.$$

Next, we show that $\{x_t\}$ is relatively norm-compact as $t\to 0{+}$. Assume that $\{t_n\}\subset(0,1-\frac{\lambda}{2\alpha})$ is such that $t_n\to 0{+}$ as $n\to\infty$. Put $x_n:=x_{t_n}$. From (3.6), we have

$$\|x_n-z\|^2\le\frac{1}{1-\gamma\rho}\langle\gamma f(z)-z,\ x_n-z\rangle+\bigl(t_n+\|Ax_n-Az\|\bigr)M.$$
(3.7)

Since $\{x_n\}$ is bounded, without loss of generality, we may assume that $x_{n_j}\rightharpoonup\tilde{x}\in C$. Hence $y_{n_j}\rightharpoonup\tilde{x}$ because of $\|x_n-y_n\|\to 0$. From (3.5), we have

$$\lim_{n\to\infty}\|x_n-Sx_n\|=0.$$
(3.8)

We can apply Lemma 2.2 to (3.8) to deduce $\tilde{x}\in F(S)$. Further, we show that $\tilde{x}$ is also in $(A+B)^{-1}0$. Let $v\in Bu$. Note that $y_n=J_\lambda^B(t_n\gamma f(x_n)+(1-t_n)x_n-\lambda Ax_n)$ for all $n\ge 1$. Then we have

$$t_n\gamma f(x_n)+(1-t_n)x_n-\lambda Ax_n\in(I+\lambda B)y_n\quad\Longrightarrow\quad \frac{t_n}{\lambda}\gamma f(x_n)+\frac{1-t_n}{\lambda}x_n-Ax_n-\frac{y_n}{\lambda}\in By_n.$$

Since $B$ is monotone, we have, for all $(u,v)\in B$,

$$\begin{aligned}
&\Bigl\langle\frac{t_n}{\lambda}\gamma f(x_n)+\frac{1-t_n}{\lambda}x_n-Ax_n-\frac{y_n}{\lambda}-v,\ y_n-u\Bigr\rangle\ge 0 \\
&\quad\Longrightarrow\quad \bigl\langle t_n\gamma f(x_n)+(1-t_n)x_n-\lambda Ax_n-y_n-\lambda v,\ y_n-u\bigr\rangle\ge 0 \\
&\quad\Longrightarrow\quad \langle Ax_n+v,\ y_n-u\rangle\le\frac{1}{\lambda}\langle x_n-y_n,\ y_n-u\rangle-\frac{t_n}{\lambda}\bigl\langle x_n-\gamma f(x_n),\ y_n-u\bigr\rangle \\
&\quad\Longrightarrow\quad \langle A\tilde{x}+v,\ y_n-u\rangle\le\frac{1}{\lambda}\langle x_n-y_n,\ y_n-u\rangle-\frac{t_n}{\lambda}\bigl\langle x_n-\gamma f(x_n),\ y_n-u\bigr\rangle+\langle A\tilde{x}-Ax_n,\ y_n-u\rangle \\
&\quad\Longrightarrow\quad \langle A\tilde{x}+v,\ y_n-u\rangle\le\frac{1}{\lambda}\|x_n-y_n\|\|y_n-u\|+\frac{t_n}{\lambda}\|x_n-\gamma f(x_n)\|\|y_n-u\|+\|A\tilde{x}-Ax_n\|\|y_n-u\|.
\end{aligned}$$

Thus it follows that

$$\langle A\tilde{x}+v,\ \tilde{x}-u\rangle\le\frac{1}{\lambda}\|x_{n_j}-y_{n_j}\|\|y_{n_j}-u\|+\frac{t_{n_j}}{\lambda}\|x_{n_j}-\gamma f(x_{n_j})\|\|y_{n_j}-u\|+\|A\tilde{x}-Ax_{n_j}\|\|y_{n_j}-u\|+\langle A\tilde{x}+v,\ \tilde{x}-y_{n_j}\rangle.$$
(3.9)

Since $\langle x_{n_j}-\tilde{x},\ Ax_{n_j}-A\tilde{x}\rangle\ge\alpha\|Ax_{n_j}-A\tilde{x}\|^2$, $Ax_{n_j}\to Az$, and $x_{n_j}\rightharpoonup\tilde{x}$, it follows that $Ax_{n_j}\to A\tilde{x}$. We also observe that $t_n\to 0$ and $\|y_n-x_n\|\to 0$. Then, from (3.9), we can derive $\langle A\tilde{x}+v,\ \tilde{x}-u\rangle\le 0$, that is, $\langle -A\tilde{x}-v,\ \tilde{x}-u\rangle\ge 0$. Since $B$ is maximal monotone, we have $-A\tilde{x}\in B\tilde{x}$. This shows that $0\in(A+B)\tilde{x}$. Hence we have $\tilde{x}\in F(S)\cap(A+B)^{-1}0$. Therefore, substituting $\tilde{x}$ for $z$ in (3.7), we get

$$\|x_n-\tilde{x}\|^2\le\frac{1}{1-\gamma\rho}\langle\gamma f(\tilde{x})-\tilde{x},\ x_n-\tilde{x}\rangle+\bigl(t_n+\|Ax_n-A\tilde{x}\|\bigr)M.$$

Consequently, the weak convergence of $\{x_n\}$ to $\tilde{x}$ actually implies that $x_n\to\tilde{x}$ strongly. This proves the relative norm-compactness of the net $\{x_t\}$ as $t\to 0{+}$.

Now, we return to (3.7) and, taking the limit as $n\to\infty$, we have

$$\|\tilde{x}-z\|^2\le\frac{1}{1-\gamma\rho}\langle\gamma f(z)-z,\ \tilde{x}-z\rangle$$

for all $z\in F(S)\cap(A+B)^{-1}0$. In particular, $\tilde{x}$ solves the following variational inequality:

$$\tilde{x}\in F(S)\cap(A+B)^{-1}0,\qquad \langle\gamma f(z)-z,\ \tilde{x}-z\rangle\ge 0$$

for all $z\in F(S)\cap(A+B)^{-1}0$, or the equivalent dual variational inequality (see Lemma 2.3):

$$\tilde{x}\in F(S)\cap(A+B)^{-1}0,\qquad \langle\gamma f(\tilde{x})-\tilde{x},\ \tilde{x}-z\rangle\ge 0$$

for all $z\in F(S)\cap(A+B)^{-1}0$. Hence $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$. Clearly, this is sufficient to conclude that the entire net $\{x_t\}$ converges to $\tilde{x}$. This completes the proof. □

Theorem 3.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$. Let $f:C\to H$ be a $\rho$-contraction and $\gamma$ a constant such that $0<\gamma<\frac{1}{\rho}$. Let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_\lambda^B=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ and $S$ a nonexpansive mapping from $C$ into itself such that $F(S)\cap(A+B)^{-1}0\neq\emptyset$. For any $x_0\in C$, let $\{x_n\}\subset C$ be a sequence generated by

$$x_{n+1}=(1-\kappa)Sx_n+\kappa J_{\lambda_n}^B\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n Ax_n\bigr)$$
(3.10)

for all $n\ge 0$, where $\kappa\in(0,1)$, $\{\lambda_n\}\subset(0,2\alpha)$, and $\{\alpha_n\}\subset(0,1)$ satisfy the following conditions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\frac{\alpha_{n+1}}{\alpha_n}=1$, and $\sum_n\alpha_n=\infty$;

(b) $a(1-\alpha_n)\le\lambda_n\le b(1-\alpha_n)$, where $[a,b]\subset(0,2\alpha)$, and $\lim_{n\to\infty}\frac{\lambda_{n+1}-\lambda_n}{\alpha_{n+1}}=0$.

Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$, which solves the following variational inequality:

$$\langle\gamma f(\tilde{x})-\tilde{x},\ \tilde{x}-z\rangle\ge 0$$

for all $z\in F(S)\cap(A+B)^{-1}0$.
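The explicit scheme (3.10) is straightforward to run. The sketch below uses illustrative data chosen by us, not taken from the paper: $H=\mathbb{R}^2$, $A(x)=x-p$ ($\alpha=1$), $B=\partial\|\cdot\|_1$ (resolvent = soft-thresholding), $S=I$, $f\equiv 0$, $\gamma=1$, $\kappa=0.5$, $\alpha_n=1/(n+2)$, and $\lambda_n=\tfrac12(1-\alpha_n)$, which satisfies condition (b) with $a=b=\tfrac12$.

```python
# A numeric sketch of algorithm (3.10) of Theorem 3.2 (illustrative data).
# H = R^2, A(x) = x - p (alpha = 1), B = subdifferential of the l1-norm,
# S = I, f = 0, gamma = 1.  The iterates should converge strongly to the
# unique point of (A+B)^{-1}0, namely soft(p, 1).

def soft(x, lam):
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1) for v in x]

p = [3.0, -0.5]
kappa = 0.5
x = [0.0, 0.0]
for n in range(20000):
    a_n = 1.0 / (n + 2)                 # alpha_n -> 0, sum alpha_n = infinity
    lam_n = 0.5 * (1.0 - a_n)           # a(1-alpha_n) <= lam_n <= b(1-alpha_n)
    # u_n = alpha_n*gamma*f(x_n) + (1-alpha_n)*x_n - lam_n*A(x_n), with f = 0
    u = [(1 - a_n) * xi - lam_n * (xi - pi) for xi, pi in zip(x, p)]
    x = [(1 - kappa) * xi + kappa * ji for xi, ji in zip(x, soft(u, lam_n))]

solution = soft(p, 1.0)                 # (2, 0)
err = max(abs(a - b) for a, b in zip(x, solution))
assert err < 1e-2
```

With $f\equiv 0$ this is exactly the minimum-norm scheme of Corollary 3.6 below; a nonzero contraction $f$ steers the limit toward $P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$ instead.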

Proof Set $y_n=J_{\lambda_n}^B u_n$ and $u_n=\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n Ax_n$ for all $n\ge 0$. Pick $z\in F(S)\cap(A+B)^{-1}0$. It is obvious that

$$z=Sz=J_{\lambda_n}^B(z-\lambda_n Az)=J_{\lambda_n}^B\Bigl(\alpha_n z+(1-\alpha_n)\Bigl(z-\tfrac{\lambda_n}{1-\alpha_n}Az\Bigr)\Bigr)$$

for all $n\ge 0$. Since $J_{\lambda_n}^B$, $S$, and $I-\frac{\lambda_n}{1-\alpha_n}A$ are nonexpansive for all $n\ge 0$, we have

$$\begin{aligned}
\|y_n-z\| &= \bigl\|J_{\lambda_n}^B\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n Ax_n\bigr)-z\bigr\| \\
&= \Bigl\|J_{\lambda_n}^B\Bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)\Bigl(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Bigr)\Bigr)-J_{\lambda_n}^B\Bigl(\alpha_n z+(1-\alpha_n)\Bigl(z-\tfrac{\lambda_n}{1-\alpha_n}Az\Bigr)\Bigr)\Bigr\| \\
&\le \Bigl\|(1-\alpha_n)\Bigl(\Bigl(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Bigr)-\Bigl(z-\tfrac{\lambda_n}{1-\alpha_n}Az\Bigr)\Bigr)+\alpha_n\bigl(\gamma f(x_n)-z\bigr)\Bigr\| \\
&\le (1-\alpha_n)\|x_n-z\|+\alpha_n\gamma\|f(x_n)-f(z)\|+\alpha_n\|\gamma f(z)-z\| \\
&\le \bigl[1-(1-\gamma\rho)\alpha_n\bigr]\|x_n-z\|+\alpha_n\|\gamma f(z)-z\|.
\end{aligned}$$
(3.11)

Hence we have

$$\begin{aligned}
\|x_{n+1}-z\| &\le (1-\kappa)\|Sx_n-z\|+\kappa\|y_n-z\| \\
&\le (1-\kappa)\|x_n-z\|+\kappa\bigl[1-(1-\gamma\rho)\alpha_n\bigr]\|x_n-z\|+\kappa\alpha_n\|\gamma f(z)-z\| \\
&= \bigl[1-(1-\gamma\rho)\kappa\alpha_n\bigr]\|x_n-z\|+\kappa\alpha_n\|\gamma f(z)-z\|.
\end{aligned}$$

By induction, we have

$$\|x_{n+1}-z\|\le\max\Bigl\{\|x_0-z\|,\ \frac{1}{1-\gamma\rho}\|\gamma f(z)-z\|\Bigr\}.$$

Therefore, $\{x_n\}$ is bounded. Since $A$ is $\alpha$-inverse strongly monotone, it is $\frac{1}{\alpha}$-Lipschitz continuous. We deduce immediately that $\{f(x_n)\}$, $\{Sx_n\}$, $\{Ax_n\}$, $\{u_n\}$, and $\{y_n\}$ are also bounded. By using the convexity of $\|\cdot\|^2$ and the $\alpha$-inverse strong monotonicity of $A$, it follows from (3.11) that

$$\begin{aligned}
&\Bigl\|(1-\alpha_n)\Bigl(\Bigl(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Bigr)-\Bigl(z-\tfrac{\lambda_n}{1-\alpha_n}Az\Bigr)\Bigr)+\alpha_n\bigl(\gamma f(x_n)-z\bigr)\Bigr\|^2 \\
&\quad\le (1-\alpha_n)\Bigl\|\Bigl(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Bigr)-\Bigl(z-\tfrac{\lambda_n}{1-\alpha_n}Az\Bigr)\Bigr\|^2+\alpha_n\|\gamma f(x_n)-z\|^2 \\
&\quad= (1-\alpha_n)\Bigl\|(x_n-z)-\tfrac{\lambda_n}{1-\alpha_n}(Ax_n-Az)\Bigr\|^2+\alpha_n\|\gamma f(x_n)-z\|^2 \\
&\quad= (1-\alpha_n)\Bigl(\|x_n-z\|^2-\tfrac{2\lambda_n}{1-\alpha_n}\langle Ax_n-Az,\ x_n-z\rangle+\tfrac{\lambda_n^2}{(1-\alpha_n)^2}\|Ax_n-Az\|^2\Bigr)+\alpha_n\|\gamma f(x_n)-z\|^2 \\
&\quad\le (1-\alpha_n)\Bigl(\|x_n-z\|^2-\tfrac{2\alpha\lambda_n}{1-\alpha_n}\|Ax_n-Az\|^2+\tfrac{\lambda_n^2}{(1-\alpha_n)^2}\|Ax_n-Az\|^2\Bigr)+\alpha_n\|\gamma f(x_n)-z\|^2 \\
&\quad= (1-\alpha_n)\Bigl(\|x_n-z\|^2+\tfrac{\lambda_n}{(1-\alpha_n)^2}\bigl(\lambda_n-2(1-\alpha_n)\alpha\bigr)\|Ax_n-Az\|^2\Bigr)+\alpha_n\|\gamma f(x_n)-z\|^2.
\end{aligned}$$
(3.12)

By the condition (b), we get $\lambda_n-2(1-\alpha_n)\alpha\le 0$ for all $n\ge 0$. Then, from (3.11) and (3.12), we obtain

$$\|J_{\lambda_n}^B u_n-z\|^2\le(1-\alpha_n)\Bigl(\|x_n-z\|^2+\tfrac{\lambda_n}{(1-\alpha_n)^2}\bigl(\lambda_n-2(1-\alpha_n)\alpha\bigr)\|Ax_n-Az\|^2\Bigr)+\alpha_n\|\gamma f(x_n)-z\|^2.$$
(3.13)

From (3.10), it follows that

$$\|x_{n+1}-z\|^2=\bigl\|(1-\kappa)(Sx_n-z)+\kappa(J_{\lambda_n}^B u_n-z)\bigr\|^2\le(1-\kappa)\|x_n-z\|^2+\kappa\|J_{\lambda_n}^B u_n-z\|^2.$$
(3.14)

Next, we estimate $\|x_{n+1}-x_n\|$. In fact, we have

$$\|x_{n+2}-x_{n+1}\|=\bigl\|(1-\kappa)(Sx_{n+1}-Sx_n)+\kappa(y_{n+1}-y_n)\bigr\|\le(1-\kappa)\|x_{n+1}-x_n\|+\kappa\|y_{n+1}-y_n\|$$

and

$$\begin{aligned}
\|y_{n+1}-y_n\| &= \|J_{\lambda_{n+1}}^B u_{n+1}-J_{\lambda_n}^B u_n\| \\
&\le \|J_{\lambda_{n+1}}^B u_{n+1}-J_{\lambda_{n+1}}^B u_n\|+\|J_{\lambda_{n+1}}^B u_n-J_{\lambda_n}^B u_n\| \\
&\le \bigl\|\bigl(\alpha_{n+1}\gamma f(x_{n+1})+(1-\alpha_{n+1})x_{n+1}-\lambda_{n+1}Ax_{n+1}\bigr)-\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n Ax_n\bigr)\bigr\|+\|J_{\lambda_{n+1}}^B u_n-J_{\lambda_n}^B u_n\| \\
&= \Bigl\|\alpha_{n+1}\gamma\bigl(f(x_{n+1})-f(x_n)\bigr)+(\alpha_{n+1}-\alpha_n)\gamma f(x_n)+(1-\alpha_{n+1})\Bigl[\Bigl(I-\tfrac{\lambda_{n+1}}{1-\alpha_{n+1}}A\Bigr)x_{n+1}-\Bigl(I-\tfrac{\lambda_{n+1}}{1-\alpha_{n+1}}A\Bigr)x_n\Bigr] \\
&\qquad +(\alpha_n-\alpha_{n+1})x_n+(\lambda_n-\lambda_{n+1})Ax_n\Bigr\|+\|J_{\lambda_{n+1}}^B u_n-J_{\lambda_n}^B u_n\| \\
&\le \alpha_{n+1}\gamma\rho\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_n)\|+\|x_n\|\bigr)+(1-\alpha_{n+1})\|x_{n+1}-x_n\|+|\lambda_n-\lambda_{n+1}|\|Ax_n\|+\|J_{\lambda_{n+1}}^B u_n-J_{\lambda_n}^B u_n\| \\
&= \bigl[1-(1-\gamma\rho)\alpha_{n+1}\bigr]\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_n)\|+\|x_n\|\bigr)+|\lambda_n-\lambda_{n+1}|\|Ax_n\|+\|J_{\lambda_{n+1}}^B u_n-J_{\lambda_n}^B u_n\|.
\end{aligned}$$

By the resolvent identity (2.1), we have

$$J_{\lambda_{n+1}}^B u_n=J_{\lambda_n}^B\Bigl(\frac{\lambda_n}{\lambda_{n+1}}u_n+\Bigl(1-\frac{\lambda_n}{\lambda_{n+1}}\Bigr)J_{\lambda_{n+1}}^B u_n\Bigr).$$

Thus it follows that

$$\begin{aligned}
\|J_{\lambda_{n+1}}^B u_n-J_{\lambda_n}^B u_n\| &= \Bigl\|J_{\lambda_n}^B\Bigl(\frac{\lambda_n}{\lambda_{n+1}}u_n+\Bigl(1-\frac{\lambda_n}{\lambda_{n+1}}\Bigr)J_{\lambda_{n+1}}^B u_n\Bigr)-J_{\lambda_n}^B u_n\Bigr\| \\
&\le \Bigl\|\Bigl(\frac{\lambda_n}{\lambda_{n+1}}u_n+\Bigl(1-\frac{\lambda_n}{\lambda_{n+1}}\Bigr)J_{\lambda_{n+1}}^B u_n\Bigr)-u_n\Bigr\| \\
&= \frac{|\lambda_{n+1}-\lambda_n|}{\lambda_{n+1}}\|u_n-J_{\lambda_{n+1}}^B u_n\|
\end{aligned}$$

and so

$$\begin{aligned}
\|x_{n+2}-x_{n+1}\| &\le (1-\kappa)\|x_{n+1}-x_n\|+\kappa\bigl[1-(1-\gamma\rho)\alpha_{n+1}\bigr]\|x_{n+1}-x_n\|+\kappa|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_n)\|+\|x_n\|\bigr) \\
&\qquad +\kappa|\lambda_n-\lambda_{n+1}|\|Ax_n\|+\kappa\frac{|\lambda_{n+1}-\lambda_n|}{\lambda_{n+1}}\|u_n-J_{\lambda_{n+1}}^B u_n\| \\
&\le \bigl[1-(1-\gamma\rho)\kappa\alpha_{n+1}\bigr]\|x_{n+1}-x_n\|+(1-\gamma\rho)\kappa\alpha_{n+1}\Bigl[\frac{|\alpha_{n+1}-\alpha_n|}{\alpha_{n+1}}\cdot\frac{\gamma\|f(x_n)\|+\|x_n\|}{1-\gamma\rho} \\
&\qquad +\frac{|\lambda_n-\lambda_{n+1}|}{\alpha_{n+1}}\cdot\frac{\|Ax_n\|}{1-\gamma\rho}+\frac{|\lambda_{n+1}-\lambda_n|}{\alpha_{n+1}\lambda_{n+1}}\cdot\frac{\|u_n-J_{\lambda_{n+1}}^B u_n\|}{1-\gamma\rho}\Bigr].
\end{aligned}$$

By the assumptions, we know that $\frac{|\alpha_{n+1}-\alpha_n|}{\alpha_{n+1}}\to 0$ and $\frac{|\lambda_{n+1}-\lambda_n|}{\alpha_{n+1}}\to 0$. Then, from Lemma 2.5, we get

$$\lim_{n\to\infty}\|x_{n+1}-x_n\|=0.$$
(3.15)

Thus, from (3.13) and (3.14), it follows that

$$\begin{aligned}
\|x_{n+1}-z\|^2 &\le (1-\kappa)\|x_n-z\|^2+\kappa\|J_{\lambda_n}^B u_n-z\|^2 \\
&\le \kappa\Bigl[(1-\alpha_n)\Bigl(\|x_n-z\|^2+\tfrac{\lambda_n}{(1-\alpha_n)^2}\bigl(\lambda_n-2(1-\alpha_n)\alpha\bigr)\|Ax_n-Az\|^2\Bigr)+\alpha_n\|\gamma f(x_n)-z\|^2\Bigr]+(1-\kappa)\|x_n-z\|^2 \\
&= [1-\kappa\alpha_n]\|x_n-z\|^2+\tfrac{\kappa\lambda_n}{1-\alpha_n}\bigl(\lambda_n-2(1-\alpha_n)\alpha\bigr)\|Ax_n-Az\|^2+\kappa\alpha_n\|\gamma f(x_n)-z\|^2 \\
&\le \|x_n-z\|^2+\tfrac{\kappa\lambda_n}{1-\alpha_n}\bigl(\lambda_n-2(1-\alpha_n)\alpha\bigr)\|Ax_n-Az\|^2+\kappa\alpha_n\|\gamma f(x_n)-z\|^2
\end{aligned}$$

and so

$$\frac{\kappa\lambda_n}{1-\alpha_n}\bigl(2(1-\alpha_n)\alpha-\lambda_n\bigr)\|Ax_n-Az\|^2\le\|x_n-z\|^2-\|x_{n+1}-z\|^2+\kappa\alpha_n\|\gamma f(x_n)-z\|^2\le\bigl(\|x_n-z\|+\|x_{n+1}-z\|\bigr)\|x_{n+1}-x_n\|+\kappa\alpha_n\|\gamma f(x_n)-z\|^2.$$

Since $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$, and $\liminf_{n\to\infty}\frac{\kappa\lambda_n}{1-\alpha_n}(2(1-\alpha_n)\alpha-\lambda_n)>0$, we have

$$\lim_{n\to\infty}\|Ax_n-Az\|=0.$$
(3.16)

Next, we show $\|x_n-Sx_n\|\to 0$. By using the firm nonexpansivity of $J_{\lambda_n}^B$, we have

$$\begin{aligned}
\|J_{\lambda_n}^B u_n-z\|^2 &= \bigl\|J_{\lambda_n}^B\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n Ax_n\bigr)-J_{\lambda_n}^B(z-\lambda_n Az)\bigr\|^2 \\
&\le \bigl\langle\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n Ax_n-(z-\lambda_n Az),\ J_{\lambda_n}^B u_n-z\bigr\rangle \\
&= \tfrac12\Bigl(\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n Ax_n-(z-\lambda_n Az)\bigr\|^2+\|J_{\lambda_n}^B u_n-z\|^2 \\
&\qquad -\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n(Ax_n-Az)-J_{\lambda_n}^B u_n\bigr\|^2\Bigr).
\end{aligned}$$

From the condition (b) and the $\alpha$-inverse strong monotonicity of $A$, we know that $I-\frac{\lambda_n}{1-\alpha_n}A$ is nonexpansive. Hence it follows that

$$\begin{aligned}
\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n Ax_n-(z-\lambda_n Az)\bigr\|^2 &= \Bigl\|(1-\alpha_n)\Bigl(\Bigl(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Bigr)-\Bigl(z-\tfrac{\lambda_n}{1-\alpha_n}Az\Bigr)\Bigr)+\alpha_n\bigl(\gamma f(x_n)-z\bigr)\Bigr\|^2 \\
&\le (1-\alpha_n)\Bigl\|\Bigl(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Bigr)-\Bigl(z-\tfrac{\lambda_n}{1-\alpha_n}Az\Bigr)\Bigr\|^2+\alpha_n\|\gamma f(x_n)-z\|^2 \\
&\le (1-\alpha_n)\|x_n-z\|^2+\alpha_n\|\gamma f(x_n)-z\|^2
\end{aligned}$$

and thus

$$\|J_{\lambda_n}^B u_n-z\|^2\le\tfrac12\Bigl((1-\alpha_n)\|x_n-z\|^2+\alpha_n\|\gamma f(x_n)-z\|^2+\|J_{\lambda_n}^B u_n-z\|^2-\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n-\lambda_n(Ax_n-Az)\bigr\|^2\Bigr),$$

that is,

$$\begin{aligned}
\|J_{\lambda_n}^B u_n-z\|^2 &\le (1-\alpha_n)\|x_n-z\|^2+\alpha_n\|\gamma f(x_n)-z\|^2-\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n-\lambda_n(Ax_n-Az)\bigr\|^2 \\
&= (1-\alpha_n)\|x_n-z\|^2+\alpha_n\|\gamma f(x_n)-z\|^2-\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|^2 \\
&\qquad +2\lambda_n\bigl\langle\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n,\ Ax_n-Az\bigr\rangle-\lambda_n^2\|Ax_n-Az\|^2 \\
&\le (1-\alpha_n)\|x_n-z\|^2+\alpha_n\|\gamma f(x_n)-z\|^2-\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|^2 \\
&\qquad +2\lambda_n\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|\|Ax_n-Az\|.
\end{aligned}$$

This together with (3.14) implies that

$$\begin{aligned}
\|x_{n+1}-z\|^2 &\le (1-\kappa)\|x_n-z\|^2+\kappa(1-\alpha_n)\|x_n-z\|^2+\kappa\alpha_n\|\gamma f(x_n)-z\|^2-\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|^2 \\
&\qquad +2\lambda_n\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|\|Ax_n-Az\| \\
&= [1-\kappa\alpha_n]\|x_n-z\|^2+\kappa\alpha_n\|\gamma f(x_n)-z\|^2-\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|^2 \\
&\qquad +2\lambda_n\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|\|Ax_n-Az\|
\end{aligned}$$

and hence

$$\begin{aligned}
\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|^2 &\le \|x_n-z\|^2-\|x_{n+1}-z\|^2-\kappa\alpha_n\|x_n-z\|^2+\kappa\alpha_n\|\gamma f(x_n)-z\|^2 \\
&\qquad +2\lambda_n\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|\|Ax_n-Az\| \\
&\le \bigl(\|x_n-z\|+\|x_{n+1}-z\|\bigr)\|x_{n+1}-x_n\|+\kappa\alpha_n\|\gamma f(x_n)-z\|^2 \\
&\qquad +2\lambda_n\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|\|Ax_n-Az\|.
\end{aligned}$$

Since $\|x_{n+1}-x_n\|\to 0$, $\alpha_n\to 0$, and $\|Ax_n-Az\|\to 0$ (by (3.16)), we deduce

$$\lim_{n\to\infty}\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^B u_n\bigr\|=0.$$

This implies that

$$\lim_{n\to\infty}\|x_n-J_{\lambda_n}^B u_n\|=0.$$
(3.17)

Combining (3.10), (3.15), and (3.17), we get

$$\lim_{n\to\infty}\|x_n-Sx_n\|=0.$$
(3.18)

Put $\tilde{x}=\lim_{t\to 0+}x_t=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$, where $\{x_t\}$ is the net defined by (3.1).

Finally, we show that $x_n\to\tilde{x}$. Taking $z=\tilde{x}$ in (3.16), we get $\|Ax_n-A\tilde{x}\|\to 0$. First, we prove $\limsup_{n\to\infty}\langle\gamma f(\tilde{x})-\tilde{x},\ x_n-\tilde{x}\rangle\le 0$. We take a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that

$$\limsup_{n\to\infty}\langle\gamma f(\tilde{x})-\tilde{x},\ x_n-\tilde{x}\rangle=\lim_{i\to\infty}\langle\gamma f(\tilde{x})-\tilde{x},\ x_{n_i}-\tilde{x}\rangle.$$

There exists a subsequence $\{x_{n_{i_j}}\}$ of $\{x_{n_i}\}$ which converges weakly to a point $w\in C$. Hence $\{y_{n_{i_j}}\}$ also converges weakly to $w$ because of $\|x_{n_{i_j}}-y_{n_{i_j}}\|\to 0$. By the demiclosedness principle for nonexpansive mappings (see Lemma 2.2) and (3.18), we deduce $w\in F(S)$. Furthermore, by an argument similar to that of Theorem 3.1, we can show that $w$ is also in $(A+B)^{-1}0$. Hence we have $w\in F(S)\cap(A+B)^{-1}0$. This implies that

$$\limsup_{n\to\infty}\langle\gamma f(\tilde{x})-\tilde{x},\ x_n-\tilde{x}\rangle=\lim_{j\to\infty}\langle\gamma f(\tilde{x})-\tilde{x},\ x_{n_{i_j}}-\tilde{x}\rangle=\langle\gamma f(\tilde{x})-\tilde{x},\ w-\tilde{x}\rangle.$$

Note that $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$. Then we have

$$\langle\gamma f(\tilde{x})-\tilde{x},\ w-\tilde{x}\rangle\le 0$$

for all $w\in F(S)\cap(A+B)^{-1}0$. Therefore, it follows that

$$\limsup_{n\to\infty}\langle\gamma f(\tilde{x})-\tilde{x},\ x_n-\tilde{x}\rangle\le 0.$$

From (3.10), we have

$$\begin{aligned}
\|x_{n+1}-\tilde{x}\|^2 &\le (1-\kappa)\|x_n-\tilde{x}\|^2+\kappa\|J_{\lambda_n}^B u_n-\tilde{x}\|^2 \\
&= (1-\kappa)\|x_n-\tilde{x}\|^2+\kappa\bigl\|J_{\lambda_n}^B u_n-J_{\lambda_n}^B(\tilde{x}-\lambda_n A\tilde{x})\bigr\|^2 \\
&\le (1-\kappa)\|x_n-\tilde{x}\|^2+\kappa\bigl\|u_n-(\tilde{x}-\lambda_n A\tilde{x})\bigr\|^2 \\
&= (1-\kappa)\|x_n-\tilde{x}\|^2+\kappa\Bigl\|(1-\alpha_n)\Bigl(\Bigl(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Bigr)-\Bigl(\tilde{x}-\tfrac{\lambda_n}{1-\alpha_n}A\tilde{x}\Bigr)\Bigr)+\alpha_n\bigl(\gamma f(x_n)-\tilde{x}\bigr)\Bigr\|^2 \\
&= (1-\kappa)\|x_n-\tilde{x}\|^2+\kappa\Bigl((1-\alpha_n)^2\Bigl\|\Bigl(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Bigr)-\Bigl(\tilde{x}-\tfrac{\lambda_n}{1-\alpha_n}A\tilde{x}\Bigr)\Bigr\|^2 \\
&\qquad +2\alpha_n(1-\alpha_n)\Bigl\langle\gamma f(x_n)-\tilde{x},\ \Bigl(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Bigr)-\Bigl(\tilde{x}-\tfrac{\lambda_n}{1-\alpha_n}A\tilde{x}\Bigr)\Bigr\rangle+\alpha_n^2\|\gamma f(x_n)-\tilde{x}\|^2\Bigr) \\
&\le (1-\kappa)\|x_n-\tilde{x}\|^2+\kappa\Bigl((1-\alpha_n)^2\|x_n-\tilde{x}\|^2-2\alpha_n\lambda_n\bigl\langle\gamma f(x_n)-\tilde{x},\ Ax_n-A\tilde{x}\bigr\rangle \\
&\qquad +2\alpha_n(1-\alpha_n)\bigl\langle\gamma f(x_n)-\gamma f(\tilde{x}),\ x_n-\tilde{x}\bigr\rangle+2\alpha_n(1-\alpha_n)\bigl\langle\gamma f(\tilde{x})-\tilde{x},\ x_n-\tilde{x}\bigr\rangle+\alpha_n^2\|\gamma f(x_n)-\tilde{x}\|^2\Bigr) \\
&\le (1-\kappa)\|x_n-\tilde{x}\|^2+\kappa\Bigl((1-\alpha_n)^2\|x_n-\tilde{x}\|^2+2\alpha_n\lambda_n\|\gamma f(x_n)-\tilde{x}\|\|Ax_n-A\tilde{x}\| \\
&\qquad +2\alpha_n(1-\alpha_n)\gamma\rho\|x_n-\tilde{x}\|^2+2\alpha_n(1-\alpha_n)\bigl\langle\gamma f(\tilde{x})-\tilde{x},\ x_n-\tilde{x}\bigr\rangle+\alpha_n^2\|\gamma f(x_n)-\tilde{x}\|^2\Bigr) \\
&\le \bigl[1-2\kappa(1-\gamma\rho)\alpha_n\bigr]\|x_n-\tilde{x}\|^2+2\alpha_n\kappa\lambda_n\|\gamma f(x_n)-\tilde{x}\|\|Ax_n-A\tilde{x}\|+2\alpha_n\kappa(1-\alpha_n)\bigl\langle\gamma f(\tilde{x})-\tilde{x},\ x_n-\tilde{x}\bigr\rangle \\
&\qquad +\kappa\alpha_n^2\bigl(\|\gamma f(x_n)-\tilde{x}\|^2+\|x_n-\tilde{x}\|^2\bigr) \\
&= \bigl[1-2\kappa(1-\gamma\rho)\alpha_n\bigr]\|x_n-\tilde{x}\|^2+2\kappa(1-\gamma\rho)\alpha_n\Bigl[\frac{\lambda_n}{1-\gamma\rho}\|\gamma f(x_n)-\tilde{x}\|\|Ax_n-A\tilde{x}\| \\
&\qquad +\frac{1-\alpha_n}{1-\gamma\rho}\bigl\langle\gamma f(\tilde{x})-\tilde{x},\ x_n-\tilde{x}\bigr\rangle+\frac{\alpha_n}{2(1-\gamma\rho)}\bigl(\|\gamma f(x_n)-\tilde{x}\|^2+\|x_n-\tilde{x}\|^2\bigr)\Bigr].
\end{aligned}$$

It is clear that $\sum_n 2\kappa(1-\gamma\rho)\alpha_n=\infty$ and

$$\limsup_{n\to\infty}\Bigl\{\frac{\lambda_n}{1-\gamma\rho}\|\gamma f(x_n)-\tilde{x}\|\|Ax_n-A\tilde{x}\|+\frac{1-\alpha_n}{1-\gamma\rho}\bigl\langle\gamma f(\tilde{x})-\tilde{x},\ x_n-\tilde{x}\bigr\rangle+\frac{\alpha_n}{2(1-\gamma\rho)}\bigl(\|\gamma f(x_n)-\tilde{x}\|^2+\|x_n-\tilde{x}\|^2\bigr)\Bigr\}\le 0.$$

Therefore, we can apply Lemma 2.5 to conclude that x n x ˜ . This completes the proof. □

Remark 3.3 One quite often seeks a particular solution of a given nonlinear problem, in particular, the minimum-norm solution. For instance, given a closed convex subset $C$ of a Hilbert space $H_1$ and a bounded linear operator $W:H_1\to H_2$, where $H_2$ is another Hilbert space, the $C$-constrained pseudoinverse of $W$, $W_C$, is defined via the minimum-norm solution of the constrained minimization problem

$$W_C(b):=\arg\min_{x\in C}\|Wx-b\|,$$

which is equivalent to the fixed point problem

$$u=\mathrm{proj}_C\bigl(u-\mu W^*(Wu-b)\bigr),$$

where $W^*$ is the adjoint of $W$, $\mu>0$ is a constant, and $b\in H_2$ is such that $P_{\overline{W(C)}}(b)\in W(C)$. From Theorems 3.1 and 3.2, we obtain the following corollaries, which find the minimum-norm element of $F(S)\cap(A+B)^{-1}0$; that is, they find $\tilde{x}\in F(S)\cap(A+B)^{-1}0$ such that

$$\tilde{x}=\arg\min_{x\in F(S)\cap(A+B)^{-1}0}\|x\|.$$
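The fixed point formulation in Remark 3.3 can be made concrete with a tiny example of our own choosing (not from the paper): $Wx=x_1+x_2$ on $\mathbb{R}^2$, $b=4$, $C=[0,3]^2$, and the projected-gradient iteration $u\leftarrow\mathrm{proj}_C(u-\mu W^*(Wu-b))$ with $\mu=0.5<2/\|W\|^2$.

```python
# A small numeric illustration of the fixed point problem in Remark 3.3
# (example data chosen here): W : R^2 -> R, Wx = x1 + x2, b = 4,
# C = [0,3] x [0,3], iteration u <- proj_C(u - mu * W*(Wu - b)).
# Any fixed point solves min_{x in C} ||Wx - b||.

def proj_C(u):
    return [min(max(v, 0.0), 3.0) for v in u]   # projection onto the box C

mu, b = 0.5, 4.0
u = [0.0, 0.0]
for _ in range(100):
    r = u[0] + u[1] - b                          # residual Wu - b
    u = proj_C([u[0] - mu * r, u[1] - mu * r])   # W* maps r to (r, r)

# The residual vanishes: u solves the constrained least-squares problem.
assert abs(u[0] + u[1] - b) < 1e-9
assert all(0.0 <= v <= 3.0 for v in u)
```

Note that this iteration finds *some* solution of the constrained problem, which depends on the starting point; the damped schemes of the corollaries below are designed to single out the minimum-norm one.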

Corollary 3.4 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$. Let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_\lambda^B=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ and $S$ a nonexpansive mapping from $C$ into itself such that $F(S)\cap(A+B)^{-1}0\neq\emptyset$. Let $\lambda$ and $\kappa$ be two constants satisfying $a\le\lambda\le b$, where $[a,b]\subset(0,2\alpha)$, and $\kappa\in(0,1)$. For any $t\in(0,1-\frac{\lambda}{2\alpha})$, let $\{x_t\}\subset C$ be a net generated by

$$x_t=(1-\kappa)Sx_t+\kappa J_\lambda^B\bigl((1-t)x_t-\lambda Ax_t\bigr).$$

Then the net $\{x_t\}$ converges strongly as $t\to 0{+}$ to a point $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(0)$, which is the minimum-norm element of $F(S)\cap(A+B)^{-1}0$.

Corollary 3.5 Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$ and let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_\lambda^B=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ such that $(A+B)^{-1}0\neq\emptyset$. Let $\lambda$ be a constant satisfying $a\le\lambda\le b$, where $[a,b]\subset(0,2\alpha)$. For any $t\in(0,1-\frac{\lambda}{2\alpha})$, let $\{x_t\}\subset C$ be a net generated by

$$x_t=J_\lambda^B\bigl((1-t)x_t-\lambda Ax_t\bigr).$$

Then the net $\{x_t\}$ converges strongly as $t\to 0{+}$ to a point $\tilde{x}=P_{(A+B)^{-1}0}(0)$, which is the minimum-norm element of $(A+B)^{-1}0$.

Corollary 3.6 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$. Let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_\lambda^B=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ and let $S$ be a nonexpansive mapping from $C$ into itself such that $F(S)\cap(A+B)^{-1}0\neq\emptyset$. For any $x_0\in C$, let $\{x_n\}\subset C$ be a sequence generated by

$$x_{n+1}=(1-\kappa)Sx_n+\kappa J_{\lambda_n}^B\bigl((1-\alpha_n)x_n-\lambda_n Ax_n\bigr)$$

for all $n\ge 0$, where $\kappa\in(0,1)$, $\{\lambda_n\}\subset(0,2\alpha)$, and $\{\alpha_n\}\subset(0,1)$ satisfy the following conditions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\frac{\alpha_{n+1}}{\alpha_n}=1$, and $\sum_n\alpha_n=\infty$;

(b) $a(1-\alpha_n)\le\lambda_n\le b(1-\alpha_n)$, where $[a,b]\subset(0,2\alpha)$, and $\lim_{n\to\infty}\frac{\lambda_{n+1}-\lambda_n}{\alpha_n}=0$.

Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(0)$, which is the minimum-norm element of $F(S)\cap(A+B)^{-1}0$.

Corollary 3.7 Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$ and let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_\lambda^B=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ such that $(A+B)^{-1}0\neq\emptyset$. For any $x_0\in C$, let $\{x_n\}\subset C$ be a sequence generated by

$$x_{n+1}=(1-\kappa)x_n+\kappa J_{\lambda_n}^B\bigl((1-\alpha_n)x_n-\lambda_n Ax_n\bigr)$$

for all $n\ge 0$, where $\kappa\in(0,1)$, $\{\lambda_n\}\subset(0,2\alpha)$, and $\{\alpha_n\}\subset(0,1)$ satisfy the following conditions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\frac{\alpha_{n+1}}{\alpha_n}=1$, and $\sum_n\alpha_n=\infty$;

(b) $a(1-\alpha_n)\le\lambda_n\le b(1-\alpha_n)$, where $[a,b]\subset(0,2\alpha)$, and $\lim_{n\to\infty}\frac{\lambda_{n+1}-\lambda_n}{\alpha_n}=0$.

Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x}=P_{(A+B)^{-1}0}(0)$, which is the minimum-norm element of $(A+B)^{-1}0$.
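A numeric sketch of Corollary 3.7, with illustrative choices of ours (not from the paper): $H=\mathbb{R}^2$, $A\equiv 0$ (which is $\alpha$-inverse strongly monotone for every $\alpha>0$), and $B$ the normal cone of $C=\{x:x_1\ge 1\}$, so that $J_\lambda^B=\mathrm{proj}_C$ and $(A+B)^{-1}0=C$. The damping factor $(1-\alpha_n)$ steers the iterates to the minimum-norm element $(1,0)$ of $C$, whereas the undamped iteration would simply stall at whatever point of $C$ it first reaches.

```python
# A numeric sketch of Corollary 3.7 (illustrative data). H = R^2, A = 0,
# B = normal cone of C = {x : x1 >= 1}, so J_lam^B = proj_C and the
# solution set is C; its minimum-norm element is (1, 0).
#   x_{n+1} = (1 - kappa) x_n + kappa * proj_C((1 - alpha_n) x_n)

def proj_C(x):
    return [max(x[0], 1.0), x[1]]      # projection onto the half-space C

kappa = 0.5
x = [3.0, 4.0]                         # arbitrary starting point
for n in range(4000):
    a_n = (n + 2) ** -0.5              # alpha_n -> 0, sum alpha_n = infinity
    y = proj_C([(1 - a_n) * v for v in x])
    x = [(1 - kappa) * xi + kappa * yi for xi, yi in zip(x, y)]

# The iterates drift to the minimum-norm element of C.
assert abs(x[0] - 1.0) < 1e-6 and abs(x[1]) < 1e-6
```

Here $\alpha_n=1/\sqrt{n+2}$ satisfies conditions (a) and (b); slower schedules such as $1/(n+2)$ also work but converge more slowly.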

Remark 3.8 The present paper provides some methods for finding the minimum-norm solution which do not use the metric projection onto the solution set.

4 Applications

Next, we consider the problem of finding the minimum-norm solution of a mathematical model related to equilibrium problems. Let $C$ be a nonempty closed convex subset of a Hilbert space and $G:C\times C\to\mathbb{R}$ a bifunction satisfying the following conditions:

(E1) $G(x,x)=0$ for all $x\in C$;

(E2) $G$ is monotone, i.e., $G(x,y)+G(y,x)\le 0$ for all $x,y\in C$;

(E3) for all $x,y,z\in C$, $\limsup_{t\downarrow 0}G(tz+(1-t)x,\ y)\le G(x,y)$;

(E4) for all $x\in C$, $G(x,\cdot)$ is convex and lower semicontinuous.

Then the mathematical model related to the equilibrium problem (with respect to $C$) is as follows:

Find $\tilde{x}\in C$ such that

$$G(\tilde{x},y)\ge 0$$
(4.1)

for all $y\in C$. The set of such solutions $\tilde{x}$ is denoted by $EP(G)$.

The following lemma appears implicitly in Blum and Oettli [36].

Lemma 4.1 Let C be a nonempty closed convex subset of a Hilbert space H. Let G be a bifunction from C × C into R satisfying the conditions (E1)-(E4). Then, for any r > 0 and x ∈ H, there exists z ∈ C such that

G(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0

for all y ∈ C.

The following lemma was given in Combettes and Hirstoaga [37].

Lemma 4.2 Assume that G is a bifunction from C × C into R satisfying the conditions (E1)-(E4). For any r > 0 and x ∈ H, define a mapping T_r : H → C as follows:

T_r(x) = {z ∈ C : G(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C}

for all x ∈ H. Then the following hold:

(a) T_r is single-valued;

(b) T_r is a firmly nonexpansive mapping, i.e., for all x,y ∈ H,

‖T_r x − T_r y‖² ≤ ⟨T_r x − T_r y, x − y⟩;

(c) F(T_r) = EP(G);

(d) EP(G) is closed and convex.

We call such a T r the resolvent of G for any r>0. Using Lemmas 4.1 and 4.2, we have the following lemma (see [38] for a more general result).

Lemma 4.3 Let C be a nonempty closed convex subset of a Hilbert space H. Let G be a bifunction from C × C into R satisfying the conditions (E1)-(E4). Let A_G be a multi-valued mapping from H into itself defined by

A_G x = {z ∈ H : G(x, y) ≥ ⟨y − x, z⟩ for all y ∈ C} if x ∈ C, and A_G x = ∅ if x ∉ C.

Then EP(G) = A_G^{-1}(0) and A_G is a maximal monotone operator with dom(A_G) ⊂ C. Further, for any x ∈ H and r > 0, the resolvent T_r of G coincides with the resolvent of A_G, i.e.,

T_r x = (I + r A_G)^{-1} x.
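As a sanity check on Lemma 4.3 (an illustration, not part of the paper), take H = C = R and the hypothetical bifunction G(x, y) = x(y − x), which satisfies (E1)-(E4). Then A_G x = {x}, so the lemma predicts T_r x = (I + rA_G)^{-1}x = x/(1 + r). The sketch below verifies numerically that this z satisfies the defining inequality of T_r and that T_r is firmly nonexpansive, as Lemma 4.2(b) asserts.

```python
import random

r = 2.0


def G(x, y):                # hypothetical bifunction G(x, y) = x (y - x)
    return x * (y - x)


def T(x):                   # predicted resolvent T_r x = x / (1 + r)
    return x / (1.0 + r)


random.seed(0)
for _ in range(1000):
    x = random.uniform(-10, 10)
    z = T(x)
    # defining inequality G(z, y) + (1/r)<y - z, z - x> >= 0, sampled over y
    for _ in range(20):
        y = random.uniform(-10, 10)
        assert G(z, y) + (y - z) * (z - x) / r >= -1e-9
    # firm nonexpansiveness (Lemma 4.2(b)) against a second point w
    w = random.uniform(-10, 10)
    d = T(x) - T(w)
    assert d * d <= d * (x - w) + 1e-9

print("checks passed")
```

In this example the inequality holds with equality (the coefficient of (y − z) vanishes at z = x/(1 + r)), which is exactly how the resolvent is characterized.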

From Lemma 4.3 and Theorems 3.1 and 3.2, we have the following results.

Theorem 4.4 Let C be a nonempty closed convex subset of a real Hilbert space H. Let G be a bifunction from C × C into R satisfying the conditions (E1)-(E4) and T_r be the resolvent of G for any r > 0. Let S be a nonexpansive mapping from C into itself such that F(S) ∩ EP(G) ≠ ∅. For any t ∈ (0,1), let {x_t} ⊂ C be a net generated by

x_t = (1 − κ)S x_t + κ T_r((1 − t)x_t).

Then the net {x_t} converges strongly as t → 0+ to a point x̃ = P_{F(S)∩EP(G)}(0), which is the minimum-norm element in F(S) ∩ EP(G).

Proof From Lemma 4.3, we know that A_G is maximal monotone. Thus, in Theorem 3.1, we can set J_λ^B = T_r. At the same time, in Theorem 3.1, we can choose f = 0 and A = 0, and (3.1) reduces to

x_t = (1 − κ)S x_t + κ T_r((1 − t)x_t).

Consequently, from Theorem 3.1, we get the desired result. This completes the proof. □

Corollary 4.5 Let C be a nonempty closed convex subset of a real Hilbert space H. Let G be a bifunction from C × C into R satisfying the conditions (E1)-(E4) and T_r be the resolvent of G for any r > 0. Suppose that EP(G) ≠ ∅. For any t ∈ (0,1), let {x_t} ⊂ C be a net generated by

x_t = T_r((1 − t)x_t).

Then the net {x_t} converges strongly as t → 0+ to a point x̃ = P_{EP(G)}(0), which is the minimum-norm element in EP(G).

Theorem 4.6 Let C be a nonempty closed and convex subset of a real Hilbert space H. Let G be a bifunction from C × C into R satisfying the conditions (E1)-(E4) and T_λ be the resolvent of G for any λ > 0. Let S be a nonexpansive mapping from C into itself such that F(S) ∩ EP(G) ≠ ∅. For any x_0 ∈ C, let {x_n} ⊂ C be a sequence generated by

x_{n+1} = (1 − κ)S x_n + κ T_{λ_n}((1 − α_n)x_n)

for all n ≥ 0, where κ ∈ (0,1), {λ_n} ⊂ (0,∞), and {α_n} ⊂ (0,1) satisfy the conditions:

(a) lim_{n→∞} α_n = 0, lim_{n→∞} α_{n+1}/α_n = 1, and Σ_n α_n = ∞;

(b) a ≤ λ_n ≤ b, where [a,b] ⊂ (0,∞) and lim_{n→∞} (λ_{n+1} − λ_n)/α_n = 0.

Then the sequence {x_n} converges strongly to a point x̃ = P_{F(S)∩EP(G)}(0), which is the minimum-norm element in F(S) ∩ EP(G).

Proof From Lemma 4.3, we know that A_G is maximal monotone. Thus, in Theorem 3.2, we can set J_{λ_n}^B = T_{λ_n}. At the same time, in Theorem 3.2, we can choose f = 0 and A = 0, and (3.10) reduces to

x_{n+1} = (1 − κ)S x_n + κ T_{λ_n}((1 − α_n)x_n)

for all n ≥ 0. Consequently, from Theorem 3.2, we get the desired result. This completes the proof. □
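The minimum-norm selection in Theorem 4.6 can be observed numerically. The following sketch is an illustration, not part of the paper: it uses the hypothetical bifunction G(x, y) = F(x)(y − x) on H = C = R with the monotone function F(x) = max(x − 1, 0), so that EP(G) = (−∞, 1] and its minimum-norm element is 0. Solving z + λ·max(z − 1, 0) = x gives the resolvent explicitly: T_λ x = x if x ≤ 1 and T_λ x = (x + λ)/(1 + λ) otherwise. With S = I (the setting of Corollary 4.7), the iteration drives x_n toward 0 rather than merely toward some point of EP(G).

```python
kappa = 0.5                        # κ ∈ (0,1)


def T(lam, x):                     # resolvent of A_G for F(x) = max(x - 1, 0)
    return x if x <= 1.0 else (x + lam) / (1.0 + lam)


x = 5.0                            # x_0, well outside EP(G) = (-∞, 1]
for n in range(100000):
    alpha = 1.0 / (n + 2)          # α_n → 0, Σ α_n = ∞, α_{n+1}/α_n → 1
    lam = 1.0                      # constant λ_n satisfies condition (b)
    x = (1 - kappa) * x + kappa * T(lam, (1 - alpha) * x)

print(x)                           # slowly decays toward 0, the minimum-norm equilibrium
```

Since Σ α_n = ∞ is what forces the drift to the origin, the decay is slow (roughly like 1/√n for this α_n); the regularizing factor (1 − α_n) is precisely what distinguishes the limit 0 from the other equilibria in (−∞, 1].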

Corollary 4.7 Let C be a nonempty closed convex subset of a real Hilbert space H. Let G be a bifunction from C × C into R satisfying the conditions (E1)-(E4) and T_λ be the resolvent of G for any λ > 0. Suppose that EP(G) ≠ ∅. For any x_0 ∈ C, let {x_n} ⊂ C be a sequence generated by

x_{n+1} = (1 − κ)x_n + κ T_{λ_n}((1 − α_n)x_n)

for all n ≥ 0, where κ ∈ (0,1), {λ_n} ⊂ (0,∞), and {α_n} ⊂ (0,1) satisfy the following conditions:

(a) lim_{n→∞} α_n = 0, lim_{n→∞} α_{n+1}/α_n = 1, and Σ_n α_n = ∞;

(b) a ≤ λ_n ≤ b, where [a,b] ⊂ (0,∞) and lim_{n→∞} (λ_{n+1} − λ_n)/α_n = 0.

Then the sequence {x_n} converges strongly to a point x̃ = P_{EP(G)}(0), which is the minimum-norm element in EP(G).

References

  1. Lu X, Xu HK, Yin X: Hybrid methods for a class of monotone variational inequalities. Nonlinear Anal. 2009, 71: 1032–1041. doi:10.1016/j.na.2008.11.067

  2. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. doi:10.1137/0314056

  3. Moudafi A, Noor MA: Sensitivity analysis of variational inclusions by the Wiener-Hopf equations technique. J. Appl. Math. Stoch. Anal. 1999, 12: 223–232. doi:10.1155/S1048953399000210

  4. Dong H, Lee BS, Huang NJ: Sensitivity analysis for generalized parametric implicit quasi-variational inequalities. Nonlinear Anal. Forum 2001, 6: 313–320.

  5. Agarwal RP, Huang NJ, Tan MY: Sensitivity analysis for a new system of generalized nonlinear mixed quasi-variational inclusions. Appl. Math. Lett. 2004, 17: 345–352. doi:10.1016/S0893-9659(04)90073-0

  6. Jeong JU: A system of parametric generalized nonlinear mixed quasi-variational inclusions in Lp spaces. J. Appl. Math. Comput. 2005, 19: 493–506.

  7. Lan HY: A stable iteration procedure for relaxed cocoercive variational inclusion systems based on (A,η)-monotone operators. J. Comput. Anal. Appl. 2007, 9: 147–157.

  8. Zhang SS, Lee JHW, Chan CK: Algorithms of common solutions for quasi variational inclusion and fixed point problems. Appl. Math. Mech. 2008, 29: 571–581. doi:10.1007/s10483-008-0502-y

  9. Peng JW, Wang Y, Shyu DS, Yao JC: Common solutions of an iterative scheme for variational inclusions, equilibrium problems, and fixed point problems. J. Inequal. Appl. 2008, 2008: Article ID 720371

  10. Ansari QH, Balooee J, Yao J-C: Iterative algorithms for systems of extended regularized nonconvex variational inequalities and fixed point problems. Appl. Anal. 2014, 93: 972–993. doi:10.1080/00036811.2013.809067

  11. Bauschke HH, Combettes PL: A Dykstra-like algorithm for two monotone operators. Pac. J. Optim. 2008, 4: 383–391.

  12. Bauschke HH, Combettes PL, Reich S: The asymptotic behavior of the composition of two resolvents. Nonlinear Anal. 2005, 60: 283–301. doi:10.1016/j.na.2004.07.054

  13. Ceng L-C, Al-Otaibi A, Ansari QH, Latif A: Relaxed and composite viscosity methods for variational inequalities, fixed points of nonexpansive mappings and zeros of accretive operators. Fixed Point Theory Appl. 2014, 2014: Article ID 29

  14. Ceng L-C, Ansari QH, Schaible S, Yao J-C: Iterative methods for generalized equilibrium problems, systems of general generalized equilibrium problems and fixed point problems for nonexpansive mappings in Hilbert spaces. Fixed Point Theory 2011, 12: 293–308.

  15. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53: 475–504. doi:10.1080/02331930412331327157

  16. Combettes PL, Hirstoaga SA: Approximating curves for nonexpansive and monotone operators. J. Convex Anal. 2006, 13: 633–646.

  17. Combettes PL, Hirstoaga SA: Visco-penalization of the sum of two monotone operators. Nonlinear Anal. 2008, 69: 579–591. doi:10.1016/j.na.2007.06.003

  18. Eckstein J, Bertsekas DP: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55: 293–318. doi:10.1007/BF01581204

  19. Fang YP, Huang NJ: H-monotone operator resolvent operator technique for quasi-variational inclusions. Appl. Math. Comput. 2003, 145: 795–803. doi:10.1016/S0096-3003(03)00275-3

  20. Lin LJ: Variational inclusions problems with applications to Ekeland's variational principle, fixed point and optimization problems. J. Glob. Optim. 2007, 39: 509–527. doi:10.1007/s10898-007-9153-1

  21. Lions P-L, Mercier B: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16: 964–979. doi:10.1137/0716071

  22. Moudafi A: On the regularization of the sum of two maximal monotone operators. Nonlinear Anal. 2009, 42: 1203–1208.

  23. Passty GB: Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 1979, 72: 383–390. doi:10.1016/0022-247X(79)90234-8

  24. Takahashi S, Takahashi W, Toyoda M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147: 27–41. doi:10.1007/s10957-010-9713-2

  25. Ding F, Chen T: Iterative least squares solutions of coupled Sylvester matrix equations. Syst. Control Lett. 2005, 54: 95–107. doi:10.1016/j.sysconle.2004.06.008

  26. Liu X, Cui Y: Common minimal-norm fixed point of a finite family of nonexpansive mappings. Nonlinear Anal. 2010, 73: 76–83. doi:10.1016/j.na.2010.02.041

  27. Sabharwal A, Potter LC: Convexly constrained linear inverse problems: iterative least-squares and regularization. IEEE Trans. Signal Process. 1998, 46: 2345–2352. doi:10.1109/78.709518

  28. Yao Y, Chen R, Xu HK: Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 2010, 72: 3447–3456. doi:10.1016/j.na.2009.12.029

  29. Yao Y, Liou YC: Composite algorithms for minimization over the solutions of equilibrium problems and fixed point problems. Abstr. Appl. Anal. 2010, 2010: Article ID 763506

  30. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. doi:10.1137/S0036144593251710

  31. Censor Y, Zenios SA: Parallel Optimization: Theory, Algorithms and Applications. Oxford University Press, New York; 1997.

  32. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. doi:10.1023/A:1025407607560

  33. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.

  34. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 2005: 103–123.

  35. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 2: 1–17.

  36. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

  37. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.

  38. Aoyama K, Kimura Y, Takahashi W, Toyoda M: On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007, 8: 471–489.


Acknowledgements

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, under Grant No. (31-130-35-HiCi) and the fifth author was supported in part by NNSF of China (61362033) and NGY2012097.

Author information


Corresponding author

Correspondence to Afrah AN Abdou.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Abdou, A.A., Alamri, B.A., Cho, Y.J. et al. Parallel algorithms for variational inclusions and fixed points with applications. Fixed Point Theory Appl 2014, 174 (2014). https://doi.org/10.1186/1687-1812-2014-174

