
Projection methods of iterative solutions in Hilbert spaces

Abstract

In this paper, zero points of the sum of two monotone mappings, solutions of a monotone variational inequality, and fixed points of a nonexpansive mapping are investigated based on a hybrid projection iterative algorithm. Strong convergence of the proposed iterative algorithm is obtained in the framework of real Hilbert spaces without any compactness assumptions.

MSC:47H05, 47H09, 47J25, 90C33.

1 Introduction and preliminaries

Throughout this paper, we always assume that H is a real Hilbert space with an inner product 〈⋅,⋅〉 and a norm ∥⋅∥. Let C be a nonempty, closed, and convex subset of H. Let S:C→C be a nonlinear mapping. F(S) stands for the fixed point set of S; that is, F(S):={x∈C:x=Sx}.

Recall that S is said to be nonexpansive iff

∥Sx−Sy∥≤∥x−y∥,∀x,y∈C.

If C is a bounded, closed, and convex subset of H, then F(S) is nonempty, closed, and convex; see [1].

Let A:C→H be a mapping. Recall that A is said to be inverse-strongly monotone iff there exists a constant α>0 such that

〈Ax−Ay, x−y〉 ≥ α∥Ax−Ay∥², ∀x,y∈C.

For such a case, A is also said to be α-inverse-strongly monotone.

A is said to be monotone iff

〈Ax−Ay,x−y〉≥0,∀x,y∈C.

Recall that the classical variational inequality is to find an x∈C such that

〈Ax,y−x〉≥0,∀y∈C.
(1.1)

In this paper, we use VI(C,A) to denote the solution set of (1.1). It is known that x∈C is a solution of (1.1) if and only if x is a fixed point of the mapping Proj_C(I−rA), where r>0 is a constant, I stands for the identity mapping, and Proj_C stands for the metric projection from H onto C. If A is α-inverse-strongly monotone and r∈(0,2α], then the mapping Proj_C(I−rA) is nonexpansive; see [2] for more details. It follows that VI(C,A) is closed and convex.
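
As a toy numerical illustration of this fixed-point characterization, the following sketch uses the hypothetical choices H = R, C = [0,1], and A(x) = x − 2 (which is 1-inverse-strongly monotone); none of these choices come from the paper, they only make the iteration of Proj_C(I − rA) concrete:

```python
# Sketch: a solution of (1.1) is exactly a fixed point of Proj_C(I - rA).
# Illustrative data: H = R, C = [0, 1], A(x) = x - 2 (1-inverse-strongly
# monotone), and r in (0, 2*alpha] with alpha = 1.

def proj_C(x, lo=0.0, hi=1.0):
    """Metric projection of H = R onto C = [lo, hi]."""
    return min(max(x, lo), hi)

def A(x):
    return x - 2.0

def fixed_point_iteration(x=0.0, r=1.0, iters=50):
    """Iterate x <- Proj_C(x - r*A(x))."""
    for _ in range(iters):
        x = proj_C(x - r * A(x))
    return x

x_star = fixed_point_iteration()
# Here x_star = 1: since A(1) = -1 < 0, the VI solution sits on the right
# endpoint of C, and <A(x_star), y - x_star> >= 0 for all y in [0, 1].
```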

Monotone variational inequality theory has emerged as a fascinating branch of the mathematical and engineering sciences with a wide range of applications in industry, finance, economics, and ecology, as well as in the social, regional, pure, and applied sciences. In recent years, much attention has been given to developing efficient numerical methods for solving monotone variational inequalities. The gradient-projection method is a powerful tool for solving constrained convex optimization problems and has been studied extensively; see [3–5] and the references therein. It has recently been applied to split feasibility problems, which find applications in image reconstruction and intensity-modulated radiation therapy; see [6–9] and the references therein. However, the gradient-projection method requires the operator to be strongly monotone and Lipschitz continuous, and these strong conditions rule out many applications. The extragradient method, which was first introduced by Korpelevich [10] in finite-dimensional Euclidean spaces, has recently been studied as a way of relaxing the strong monotonicity of operators; see [11–13] and the references therein.

Recall that a set-valued mapping M:H⇉H is said to be monotone iff, for all x,y∈H, f∈Mx and g∈My imply 〈x−y, f−g〉≥0. A monotone mapping M:H⇉H is maximal iff the graph Graph(M) of M is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping M is maximal if and only if, for any (x,f)∈H×H, 〈x−y, f−g〉≥0 for all (y,g)∈Graph(M) implies f∈Mx.

For a maximal monotone operator M on H and r>0, we may define the single-valued resolvent J_r := (I + rM)⁻¹ : H→D(M), where D(M) denotes the domain of M. It is known that J_r is firmly nonexpansive and M⁻¹(0) = F(J_r), where F(J_r) := {x∈D(M) : x = J_r x} and M⁻¹(0) := {x∈H : 0∈Mx}.
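
A minimal sketch of a resolvent, under the illustrative (not from the paper) choice M = ∂f with f(x) = |x| on H = R; in that case J_r is the soft-thresholding map, and the snippet checks firm nonexpansiveness numerically together with M⁻¹(0) = F(J_r) = {0}:

```python
# Sketch: resolvent J_r = (I + rM)^{-1} for the maximal monotone operator
# M = the subdifferential of f(x) = |x| on H = R (an illustrative choice).

def J(x, r=0.5):
    """Resolvent of M: the unique z with x in z + r*(subgradient of |z|)."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

# Firm nonexpansiveness: |J(x) - J(y)|^2 <= (J(x) - J(y))*(x - y).
pairs = [(-2.0, 1.3), (0.2, -0.1), (3.0, 0.4)]
firmly = all((J(x) - J(y))**2 <= (J(x) - J(y)) * (x - y) + 1e-12
             for x, y in pairs)
zero_set_fixed = (J(0.0) == 0.0)   # 0 in M(0) = [-1, 1], hence 0 = J_r(0)
```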

Recently, variational inequalities, fixed point problems, and zero point problems have been investigated by many authors based on iterative methods; see, for example, [14–32] and the references therein. In this paper, zero point problems of the sum of a maximal monotone operator and an inverse-strongly monotone mapping, solution problems of a monotone variational inequality, and fixed point problems of a nonexpansive mapping are investigated. A hybrid iterative algorithm is considered for analyzing the convergence of iterative sequences. Strong convergence theorems are established in the framework of real Hilbert spaces without any compact assumptions.

In order to prove our main results, we also need the following definitions and lemmas.

Lemma 1.1 Let C be a nonempty, closed, and convex subset of H. Then the following inequality holds:

∥x − Proj_C x∥² + ∥y − Proj_C x∥² ≤ ∥x − y∥², ∀x∈H, y∈C.

Lemma 1.2 [1]

Let C be a nonempty, closed, and convex subset of H. Let S:C→C be a nonexpansive mapping. Then the mapping I−S is demiclosed at zero; that is, if {x_n} is a sequence in C such that x_n ⇀ x̄ and x_n − Sx_n → 0, then x̄ ∈ F(S).

Lemma 1.3 Let C be a nonempty, closed, and convex subset of H, B:C→H be a mapping, and M:H⇉H be a maximal monotone operator. Then F(J_s(I − sB)) = (B + M)⁻¹(0) for every s>0, where J_s = (I + sM)⁻¹.

Proof Notice that

p ∈ F(J_s(I − sB)) ⟺ p = J_s(p − sBp) ⟺ p − sBp ∈ p + sMp ⟺ 0 ∈ (B + M)p ⟺ p ∈ (B + M)⁻¹(0).

This completes the proof. □

Lemma 1.4 [33]

Let C be a nonempty, closed, and convex subset of H, A:C→H be a Lipschitz continuous monotone mapping, and N_C x be the normal cone to C at x∈C; that is, N_C x = {y∈H : 〈x−u, y〉 ≥ 0, ∀u∈C}. Define

Wx = Ax + N_C x if x ∈ C, and Wx = ∅ if x ∉ C.

Then W is maximal monotone and 0 ∈ Wx if and only if x ∈ VI(C,A).

2 Main results

Now, we are in a position to give our main results.

Theorem 2.1 Let C be a nonempty, closed, and convex subset of H. Let S:C→C be a nonexpansive mapping with a nonempty fixed point set, A:C→H be an α-Lipschitz continuous and monotone mapping, and B:C→H be a β-inverse-strongly monotone mapping. Let M:H⇉H be a maximal monotone operator such that D(M)⊂C. Assume that F := F(S) ∩ (B+M)⁻¹(0) ∩ VI(C,A) is not empty. Let {x_n} be a sequence generated by the following iterative process:

x_1 ∈ C, C_1 = C,
z_n = Proj_C(J_{s_n}(x_n − s_nBx_n) − r_nAJ_{s_n}(x_n − s_nBx_n)),
y_n = α_nx_n + (1 − α_n)SProj_C(J_{s_n}(x_n − s_nBx_n) − r_nAz_n),
C_{n+1} = {v ∈ C_n : ∥y_n − v∥ ≤ ∥x_n − v∥},
x_{n+1} = Proj_{C_{n+1}}x_1, n ≥ 1,
(2.1)

where J_{s_n} = (I + s_nM)⁻¹, {r_n} is a sequence in (0, 1/α), {s_n} is a sequence in (0, 2β), and {α_n} is a sequence in (0,1). Assume that the following restrictions are satisfied:

(a) 0 < a ≤ r_n ≤ b < 1/α;

(b) 0 < c ≤ s_n ≤ d < 2β;

(c) 0 ≤ α_n ≤ e < 1,

where a, b, c, d, and e are real constants. Then the sequence {x_n} converges strongly to Proj_F x_1.
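
In H = R each set C_n is an interval, since C_{n+1} cuts C_n with a half-line, so the scheme (2.1) can be simulated exactly. The following sketch uses purely illustrative data, not the paper's general setting: C = [0,3], S = I, A(x) = B(x) = x − 1 (1-Lipschitz, monotone, and 1-inverse-strongly monotone), and M = ∂I_C, so that J_{s_n} = Proj_C and F = {1}:

```python
# Sketch of the hybrid projection scheme (2.1) in H = R with illustrative data.
LO, HI = 0.0, 3.0   # C = [0, 3]

def proj(x, lo, hi):
    return min(max(x, lo), hi)

def hybrid(x1=3.0, r=0.5, s=0.5, alpha=0.0, iters=60):
    lo, hi = LO, HI              # C_1 = C, tracked as an interval [lo, hi]
    x = x1
    for _ in range(iters):
        v = proj(x - s * (x - 1.0), LO, HI)   # v_n = J_{s_n}(x_n - s_n B x_n)
        z = proj(v - r * (v - 1.0), LO, HI)   # z_n
        u = proj(v - r * (z - 1.0), LO, HI)   # Proj_C(v_n - r_n A z_n)
        y = alpha * x + (1.0 - alpha) * u     # y_n (with S = I)
        # C_{n+1} = {w in C_n : |y - w| <= |x - w|}: the half-line cut at the
        # midpoint m = (x + y)/2, on the side containing y.
        m = 0.5 * (x + y)
        if y < x:
            hi = min(hi, m)
        elif y > x:
            lo = max(lo, m)
        x = proj(x1, lo, hi)                  # x_{n+1} = Proj_{C_{n+1}} x_1
    return x

limit = hybrid()   # should approach the unique point of F, namely 1
```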

Proof First, we show that C_n is closed and convex for each n≥1. From the assumption, we see that C_1 = C is closed and convex. Suppose that C_m is closed and convex for some m≥1. We show that C_{m+1} is closed and convex for the same m. Let v_1, v_2 ∈ C_{m+1} and v = tv_1 + (1−t)v_2, where t∈(0,1). Notice that

∥y_m − v∥ ≤ ∥x_m − v∥

is equivalent to

∥y_m∥² − ∥x_m∥² − 2〈v, y_m − x_m〉 ≤ 0.

Since this inequality is affine in v, it is clear that v ∈ C_{m+1}. This shows that C_n is closed and convex for each n≥1.

Next, we show that F ⊂ C_n for each n≥1. Put u_n = Proj_C(v_n − r_nAz_n), where v_n = J_{s_n}(x_n − s_nBx_n). From the assumption, we see that F ⊂ C = C_1. Suppose that F ⊂ C_m for some m≥1. For any p ∈ F ⊂ C_m, we see from Lemma 1.1 that

∥u_m − p∥² ≤ ∥v_m − r_mAz_m − p∥² − ∥v_m − r_mAz_m − u_m∥²
= ∥v_m − p∥² − ∥v_m − u_m∥² + 2r_m〈Az_m, p − u_m〉
= ∥v_m − p∥² − ∥v_m − u_m∥² + 2r_m(〈Az_m − Ap, p − z_m〉 + 〈Ap, p − z_m〉 + 〈Az_m, z_m − u_m〉)
≤ ∥v_m − p∥² − ∥v_m − z_m + z_m − u_m∥² + 2r_m〈Az_m, z_m − u_m〉
= ∥v_m − p∥² − ∥v_m − z_m∥² − ∥z_m − u_m∥² + 2〈v_m − z_m − r_mAz_m, u_m − z_m〉.
(2.2)

Notice that A is Lipschitz continuous. In view of z_m = Proj_C(v_m − r_mAv_m), we find that

〈v_m − z_m − r_mAz_m, u_m − z_m〉 = 〈v_m − z_m − r_mAv_m, u_m − z_m〉 + 〈r_mAv_m − r_mAz_m, u_m − z_m〉 ≤ r_mα∥v_m − z_m∥∥u_m − z_m∥.
(2.3)

Substituting (2.3) into (2.2), we obtain that

∥u_m − p∥² ≤ ∥v_m − p∥² − ∥v_m − z_m∥² − ∥z_m − u_m∥² + 2r_mα∥v_m − z_m∥∥u_m − z_m∥
≤ ∥v_m − p∥² − (1 − r_m²α²)∥v_m − z_m∥².
(2.4)

This in turn implies from restriction (a) that

∥y_m − p∥² ≤ α_m∥x_m − p∥² + (1 − α_m)∥Su_m − p∥²
≤ α_m∥x_m − p∥² + (1 − α_m)∥u_m − p∥²
≤ α_m∥x_m − p∥² + (1 − α_m)(∥v_m − p∥² − (1 − r_m²α²)∥v_m − z_m∥²)
≤ ∥x_m − p∥² − (1 − α_m)(1 − r_m²α²)∥v_m − z_m∥²
≤ ∥x_m − p∥².
(2.5)

This shows that p ∈ C_{m+1}. This proves that F ⊂ C_n for each n≥1. Notice that x_n = Proj_{C_n}x_1. For each p ∈ F ⊂ C_n, we have ∥x_1 − x_n∥ ≤ ∥x_1 − p∥. Since B is inverse-strongly monotone, we see from Lemma 1.3 that (B+M)⁻¹(0) is closed and convex. Since A is Lipschitz continuous, we find that VI(C,A) is closed and convex. This proves that F is closed and convex. It follows that

∥x_1 − x_n∥ ≤ ∥x_1 − Proj_F x_1∥.
(2.6)

This implies that {x_n} is bounded. Since x_n = Proj_{C_n}x_1 and x_{n+1} = Proj_{C_{n+1}}x_1 ∈ C_{n+1} ⊂ C_n, we have

0 ≤ 〈x_1 − x_n, x_n − x_{n+1}〉
= 〈x_1 − x_n, x_n − x_1 + x_1 − x_{n+1}〉
≤ −∥x_1 − x_n∥² + ∥x_1 − x_n∥∥x_1 − x_{n+1}∥.

It follows that

∥x_n − x_1∥ ≤ ∥x_{n+1} − x_1∥.

This proves that lim_{n→∞} ∥x_n − x_1∥ exists. Notice that

∥x_n − x_{n+1}∥² = ∥x_n − x_1∥² + 2〈x_n − x_1, x_1 − x_{n+1}〉 + ∥x_1 − x_{n+1}∥²
= ∥x_n − x_1∥² − 2∥x_n − x_1∥² + 2〈x_n − x_1, x_n − x_{n+1}〉 + ∥x_1 − x_{n+1}∥²
≤ ∥x_1 − x_{n+1}∥² − ∥x_n − x_1∥².

It follows that

lim_{n→∞} ∥x_n − x_{n+1}∥ = 0.
(2.7)

In view of x_{n+1} = Proj_{C_{n+1}}x_1 ∈ C_{n+1}, we see that

∥y_n − x_{n+1}∥ ≤ ∥x_n − x_{n+1}∥.

This implies that

∥y_n − x_n∥ ≤ ∥y_n − x_{n+1}∥ + ∥x_n − x_{n+1}∥ ≤ 2∥x_n − x_{n+1}∥.

From (2.7), we find that

lim_{n→∞} ∥x_n − y_n∥ = 0.
(2.8)

Since B is β-inverse-strongly monotone, we see from restriction (b) that

∥(I − s_nB)x − (I − s_nB)y∥² = ∥x − y∥² − 2s_n〈x − y, Bx − By〉 + s_n²∥Bx − By∥²
≤ ∥x − y∥² − s_n(2β − s_n)∥Bx − By∥²
≤ ∥x − y∥², ∀x, y ∈ C.

This implies from (2.5) that

∥y_n − p∥² ≤ α_n∥x_n − p∥² + (1 − α_n)∥v_n − p∥²
= α_n∥x_n − p∥² + (1 − α_n)∥J_{s_n}(x_n − s_nBx_n) − J_{s_n}(p − s_nBp)∥²
≤ ∥x_n − p∥² − (1 − α_n)s_n(2β − s_n)∥Bx_n − Bp∥².

It follows that

(1 − α_n)s_n(2β − s_n)∥Bx_n − Bp∥² ≤ ∥x_n − p∥² − ∥y_n − p∥² ≤ ∥x_n − y_n∥(∥x_n − p∥ + ∥y_n − p∥).

In view of restrictions (b) and (c), we find from (2.8) that

lim_{n→∞} ∥Bx_n − Bp∥ = 0.
(2.9)

Since J_{s_n} is firmly nonexpansive, we find that

∥v_n − p∥² = ∥J_{s_n}(x_n − s_nBx_n) − J_{s_n}(p − s_nBp)∥²
≤ 〈v_n − p, (x_n − s_nBx_n) − (p − s_nBp)〉
= (1/2)(∥v_n − p∥² + ∥(x_n − s_nBx_n) − (p − s_nBp)∥² − ∥(v_n − p) − ((x_n − s_nBx_n) − (p − s_nBp))∥²)
≤ (1/2)(∥v_n − p∥² + ∥x_n − p∥² − ∥v_n − x_n + s_n(Bx_n − Bp)∥²)
= (1/2)(∥v_n − p∥² + ∥x_n − p∥² − ∥v_n − x_n∥² − s_n²∥Bx_n − Bp∥² − 2s_n〈v_n − x_n, Bx_n − Bp〉)
≤ (1/2)(∥v_n − p∥² + ∥x_n − p∥² − ∥v_n − x_n∥² + 2s_n∥v_n − x_n∥∥Bx_n − Bp∥).

This in turn implies that

∥v_n − p∥² ≤ ∥x_n − p∥² − ∥v_n − x_n∥² + 2s_n∥v_n − x_n∥∥Bx_n − Bp∥.
(2.10)

Combining (2.5) with (2.10), we arrive at

∥y_n − p∥² ≤ α_n∥x_n − p∥² + (1 − α_n)∥v_n − p∥²
≤ ∥x_n − p∥² − (1 − α_n)∥v_n − x_n∥² + 2s_n∥v_n − x_n∥∥Bx_n − Bp∥.

It follows that

(1 − α_n)∥v_n − x_n∥² ≤ ∥x_n − p∥² − ∥y_n − p∥² + 2s_n∥v_n − x_n∥∥Bx_n − Bp∥
≤ ∥x_n − y_n∥(∥x_n − p∥ + ∥y_n − p∥) + 2s_n∥v_n − x_n∥∥Bx_n − Bp∥.

In view of (2.8) and (2.9), we see from restriction (c) that

lim_{n→∞} ∥v_n − x_n∥ = 0.
(2.11)

On the other hand, we find from (2.5) that

(1 − α_n)(1 − r_n²α²)∥v_n − z_n∥² ≤ ∥x_n − p∥² − ∥y_n − p∥² ≤ ∥x_n − y_n∥(∥x_n − p∥ + ∥y_n − p∥).

In view of restrictions (a) and (c), we obtain from (2.8) that

lim_{n→∞} ∥v_n − z_n∥ = 0.
(2.12)

Notice that

∥u_n − z_n∥² = ∥Proj_C(v_n − r_nAz_n) − Proj_C(v_n − r_nAv_n)∥² ≤ ∥(v_n − r_nAz_n) − (v_n − r_nAv_n)∥² ≤ r_n²α²∥z_n − v_n∥².

Thanks to (2.12), we arrive at

lim_{n→∞} ∥u_n − z_n∥ = 0.
(2.13)

Notice that

∥x_n − Sx_n∥ ≤ ∥x_n − Su_n∥ + ∥Su_n − Sx_n∥
≤ ∥x_n − y_n∥/(1 − α_n) + ∥u_n − x_n∥
≤ ∥x_n − y_n∥/(1 − α_n) + ∥u_n − z_n∥ + ∥z_n − v_n∥ + ∥v_n − x_n∥.

In view of (2.8), (2.11), (2.12), and (2.13), we find from restriction (c) that

lim_{n→∞} ∥x_n − Sx_n∥ = 0.

Since {x_n} is bounded, there exists a subsequence {x_{n_i}} of {x_n} such that x_{n_i} ⇀ q ∈ C. From Lemma 1.2, we easily conclude that q ∈ F(S).

Now, we are in a position to show that q ∈ VI(C,A). Define

Wx = Ax + N_C x if x ∈ C, and Wx = ∅ if x ∉ C.

For any given (x,y) ∈ Graph(W), we have y − Ax ∈ N_C x. Since u_n ∈ C, we see from the definition of N_C that

〈x − u_n, y − Ax〉 ≥ 0.
(2.14)

In view of u_n = Proj_C(v_n − r_nAz_n), we obtain that

〈x − u_n, u_n + r_nAz_n − v_n〉 ≥ 0

and hence

〈x − u_n, (u_n − v_n)/r_n + Az_n〉 ≥ 0.
(2.15)

In view of (2.14) and (2.15), we find that

〈x − u_{n_i}, y〉 ≥ 〈x − u_{n_i}, Ax〉
≥ 〈x − u_{n_i}, Ax〉 − 〈x − u_{n_i}, (u_{n_i} − v_{n_i})/r_{n_i} + Az_{n_i}〉
= 〈x − u_{n_i}, Ax − Au_{n_i}〉 + 〈x − u_{n_i}, Au_{n_i} − Az_{n_i}〉 − 〈x − u_{n_i}, (u_{n_i} − v_{n_i})/r_{n_i}〉
≥ 〈x − u_{n_i}, Au_{n_i} − Az_{n_i}〉 − 〈x − u_{n_i}, (u_{n_i} − v_{n_i})/r_{n_i}〉.
(2.16)

Notice that

∥x_n − u_n∥ ≤ ∥x_n − v_n∥ + ∥v_n − z_n∥ + ∥z_n − u_n∥.

It follows from (2.11), (2.12), and (2.13) that

lim_{n→∞} ∥x_n − u_n∥ = 0.

Since A is Lipschitz continuous, we find from (2.16) that

〈x−q,y〉≥0.

Since W is maximal monotone, we conclude that q ∈ W⁻¹(0). This proves that q ∈ VI(C,A).

Finally, we prove that q ∈ (B + M)⁻¹(0). Notice that

x_n − s_nBx_n ∈ v_n + s_nMv_n;

that is,

(x_n − v_n)/s_n − Bx_n ∈ Mv_n.
(2.17)

Let μ ∈ Mν. Since M is monotone, we find from (2.17) that

〈(x_n − v_n)/s_n − Bx_n − μ, v_n − ν〉 ≥ 0.

In view of restriction (b), we see that

〈−Bq−μ,q−ν〉≥0.

This implies that −Bq ∈ Mq, that is, q ∈ (B + M)⁻¹(0). This proves that q ∈ F. Assume that there exists another subsequence {x_{n_j}} of {x_n} which converges weakly to q′ ∈ F. We can easily conclude from Opial’s condition that q = q′.

Finally, we show that q = Proj_F x_1 and that {x_n} converges strongly to q. In view of the weak lower semicontinuity of the norm, we obtain from (2.6) that

∥x_1 − Proj_F x_1∥ ≤ ∥x_1 − q∥ ≤ lim inf_{n→∞} ∥x_1 − x_n∥ ≤ lim sup_{n→∞} ∥x_1 − x_n∥ ≤ ∥x_1 − Proj_F x_1∥,

which yields that lim_{n→∞} ∥x_1 − x_n∥ = ∥x_1 − Proj_F x_1∥ = ∥x_1 − q∥. It follows that {x_n} converges strongly to Proj_F x_1. This completes the proof. □

If B=0, then Theorem 2.1 is reduced to the following.

Corollary 2.2 Let C be a nonempty, closed, and convex subset of H. Let S:C→C be a nonexpansive mapping with a nonempty fixed point set and A:C→H be an α-Lipschitz continuous and monotone mapping. Let M:H⇉H be a maximal monotone operator such that D(M)⊂C. Assume that F := F(S) ∩ M⁻¹(0) ∩ VI(C,A) is not empty. Let {x_n} be a sequence generated by the following iterative process:

x_1 ∈ C, C_1 = C,
z_n = Proj_C(J_{s_n}x_n − r_nAJ_{s_n}x_n),
y_n = α_nx_n + (1 − α_n)SProj_C(J_{s_n}x_n − r_nAz_n),
C_{n+1} = {v ∈ C_n : ∥y_n − v∥ ≤ ∥x_n − v∥},
x_{n+1} = Proj_{C_{n+1}}x_1, n ≥ 1,

where J_{s_n} = (I + s_nM)⁻¹, {r_n} is a sequence in (0, 1/α), {s_n} is a sequence in (0, +∞), and {α_n} is a sequence in (0,1). Assume that the following restrictions are satisfied:

(a) 0 < a ≤ r_n ≤ b < 1/α;

(b) 0 < c ≤ s_n < ∞;

(c) 0 ≤ α_n ≤ d < 1,

where a, b, c, and d are real constants. Then the sequence {x_n} converges strongly to Proj_F x_1.

If M = 0, then J_{s_n} = I, and Corollary 2.2 reduces to the following.

Corollary 2.3 Let C be a nonempty, closed, and convex subset of H. Let S:C→C be a nonexpansive mapping with a nonempty fixed point set and A:C→H be an α-Lipschitz continuous and monotone mapping. Assume that F := F(S) ∩ VI(C,A) is not empty. Let {x_n} be a sequence generated by the following iterative process:

x_1 ∈ C, C_1 = C,
z_n = Proj_C(x_n − r_nAx_n),
y_n = α_nx_n + (1 − α_n)SProj_C(x_n − r_nAz_n),
C_{n+1} = {v ∈ C_n : ∥y_n − v∥ ≤ ∥x_n − v∥},
x_{n+1} = Proj_{C_{n+1}}x_1, n ≥ 1,

where {r_n} is a sequence in (0, 1/α) and {α_n} is a sequence in (0,1). Assume that the following restrictions are satisfied:

(a) 0 < a ≤ r_n ≤ b < 1/α;

(b) 0 ≤ α_n ≤ c < 1,

where a, b, and c are real constants. Then the sequence {x_n} converges strongly to Proj_F x_1.

If A=0, then Theorem 2.1 is reduced to the following.

Corollary 2.4 Let C be a nonempty, closed, and convex subset of H. Let S:C→C be a nonexpansive mapping with a nonempty fixed point set and B:C→H be a β-inverse-strongly monotone mapping. Let M:H⇉H be a maximal monotone operator such that D(M)⊂C. Assume that F := F(S) ∩ (B+M)⁻¹(0) is not empty. Let {x_n} be a sequence generated by the following iterative process:

x_1 ∈ C, C_1 = C,
y_n = α_nx_n + (1 − α_n)SJ_{s_n}(x_n − s_nBx_n),
C_{n+1} = {v ∈ C_n : ∥y_n − v∥ ≤ ∥x_n − v∥},
x_{n+1} = Proj_{C_{n+1}}x_1, n ≥ 1,

where J_{s_n} = (I + s_nM)⁻¹, {s_n} is a sequence in (0, 2β), and {α_n} is a sequence in (0,1). Assume that the following restrictions are satisfied:

(a) 0 < a ≤ s_n ≤ b < 2β;

(b) 0 ≤ α_n ≤ c < 1,

where a, b, and c are real constants. Then the sequence {x_n} converges strongly to Proj_F x_1.

Let f:H→(−∞,+∞] be a proper convex lower semicontinuous function. Then the subdifferential ∂f of f is defined as follows:

∂f(x) = {y ∈ H : f(z) ≥ f(x) + 〈z − x, y〉, ∀z ∈ H}, ∀x ∈ H.

From Rockafellar [34], we know that ∂f is maximal monotone. It is not hard to verify that 0 ∈ ∂f(x) if and only if f(x) = min_{y∈H} f(y).

Let I_C be the indicator function of C; that is, I_C(x) = 0 if x ∈ C, and I_C(x) = +∞ if x ∉ C.

Since I_C is a proper lower semicontinuous convex function on H, we see that the subdifferential ∂I_C of I_C is a maximal monotone operator. It is clear that J_s x = Proj_C x, ∀x ∈ H, where J_s = (I + s∂I_C)⁻¹. Notice that (B + ∂I_C)⁻¹(0) = VI(C,B). Indeed,

x ∈ (B + ∂I_C)⁻¹(0) ⟺ 0 ∈ Bx + ∂I_C x ⟺ −Bx ∈ ∂I_C x ⟺ 〈Bx, y − x〉 ≥ 0, ∀y ∈ C ⟺ x ∈ VI(C,B).
(2.18)

In the light of the above, the following is not hard to derive from Corollary 2.4.

Corollary 2.5 Let C be a nonempty, closed, and convex subset of H. Let S:C→C be a nonexpansive mapping with a nonempty fixed point set and B:C→H be a β-inverse-strongly monotone mapping. Assume that F := F(S) ∩ VI(C,B) is not empty. Let {x_n} be a sequence generated by the following iterative process:

x_1 ∈ C, C_1 = C,
y_n = α_nx_n + (1 − α_n)SProj_C(x_n − s_nBx_n),
C_{n+1} = {v ∈ C_n : ∥y_n − v∥ ≤ ∥x_n − v∥},
x_{n+1} = Proj_{C_{n+1}}x_1, n ≥ 1,

where {s_n} is a sequence in (0, 2β) and {α_n} is a sequence in (0,1). Assume that the following restrictions are satisfied:

(a) 0 < a ≤ s_n ≤ b < 2β;

(b) 0 ≤ α_n ≤ c < 1,

where a, b, and c are real constants. Then the sequence {x_n} converges strongly to Proj_F x_1.
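
A hedged 1-D sketch of the scheme in Corollary 2.5, with illustrative (not prescribed by the paper) data: C = [0,3], B(x) = x − 1 (1-inverse-strongly monotone), and the nonexpansive map S(x) = (x + 1)/2, so that F = F(S) ∩ VI(C,B) = {1}; in H = R each C_n is an interval and can be tracked exactly:

```python
# Sketch of Corollary 2.5 in H = R with illustrative data.

def clip(x, lo, hi):
    return min(max(x, lo), hi)

def corollary_2_5(x1=3.0, s=0.5, alpha=0.25, iters=80):
    lo, hi = 0.0, 3.0            # C_n tracked as an interval [lo, hi]
    x = x1
    for _ in range(iters):
        w = clip(x - s * (x - 1.0), 0.0, 3.0)       # P_C(x_n - s_n B x_n)
        y = alpha * x + (1 - alpha) * 0.5 * (w + 1.0)  # y_n with S(x)=(x+1)/2
        m = 0.5 * (x + y)        # boundary of the half-space C_{n+1}
        if y < x:
            hi = min(hi, m)
        elif y > x:
            lo = max(lo, m)
        x = clip(x1, lo, hi)     # x_{n+1} = Proj_{C_{n+1}} x_1
    return x

limit = corollary_2_5()          # should approach 1, the only point of F
```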

3 Applications

First, we consider the problem of finding a minimizer of a proper convex lower semicontinuous function.

Theorem 3.1 Let f:H→(−∞,+∞] be a proper convex lower semicontinuous function such that (∂f)⁻¹(0) is not empty. Let {x_n} be a sequence generated by the following iterative process:

x_1 ∈ H, C_1 = H,
y_n = arg min_{z∈H} {f(z) + ∥z − x_n∥²/(2s_n)},
C_{n+1} = {v ∈ C_n : ∥y_n − v∥ ≤ ∥x_n − v∥},
x_{n+1} = Proj_{C_{n+1}}x_1, n ≥ 1,

where {s_n} is a positive sequence such that 0 < a ≤ s_n, where a is a real constant. Then the sequence {x_n} converges strongly to Proj_{(∂f)⁻¹(0)}x_1.

Proof Putting A=B=0, S=I, M=∂f, and α_n ≡ 0, and noting that y_n = J_{s_n}x_n = arg min_{z∈H}{f(z) + ∥z − x_n∥²/(2s_n)}, we can immediately draw the desired conclusion from Theorem 2.1. □
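
Under the illustrative choice f(z) = (z − 2)² (so (∂f)⁻¹(0) = {2}; this function is an assumption made for the sketch, not taken from the paper), the argmin step has the closed form y_n = (4s_n + x_n)/(2s_n + 1), and the iteration of Theorem 3.1 can be simulated in H = R:

```python
# Sketch of the proximal-type iteration in Theorem 3.1 for f(z) = (z - 2)^2.

def prox_step(x, s):
    """y = argmin_z { f(z) + |z - x|^2 / (2s) }, solved in closed form."""
    return (4.0 * s + x) / (2.0 * s + 1.0)

def theorem_3_1(x1=0.0, s=1.0, iters=60):
    lo, hi = float("-inf"), float("inf")   # C_1 = H
    x = x1
    for _ in range(iters):
        y = prox_step(x, s)
        m = 0.5 * (x + y)                  # half-space boundary of C_{n+1}
        if y < x:
            hi = min(hi, m)
        elif y > x:
            lo = max(lo, m)
        x = min(max(x1, lo), hi)           # x_{n+1} = Proj_{C_{n+1}} x_1
    return x

minimizer = theorem_3_1()                  # should approach 2
```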

Second, we consider the problem of approximating a common fixed point of a pair of nonexpansive mappings.

Theorem 3.2 Let C be a nonempty, closed, and convex subset of H. Let S:C→C and T:C→C be a pair of nonexpansive mappings with a nonempty common fixed point set. Let {x_n} be a sequence generated by the following iterative process:

x_1 ∈ C, C_1 = C,
z_n = (1 − s_n)x_n + s_nTx_n,
y_n = α_nx_n + (1 − α_n)Sz_n,
C_{n+1} = {v ∈ C_n : ∥y_n − v∥ ≤ ∥x_n − v∥},
x_{n+1} = Proj_{C_{n+1}}x_1, n ≥ 1,

where {s_n} and {α_n} are sequences in (0,1). Assume that the following restrictions are satisfied:

(a) 0 < a ≤ s_n ≤ b < 1;

(b) 0 ≤ α_n ≤ c < 1,

where a, b, and c are real constants. Then the sequence {x_n} converges strongly to Proj_{F(S)∩F(T)}x_1.

Proof Putting A=0, M=∂I_C, and B=I−T, we see that B is (1/2)-inverse-strongly monotone. We also have F(T)=VI(C,B) and Proj_C(x_n − s_nBx_n) = (1 − s_n)x_n + s_nTx_n. In view of (2.18), we can immediately obtain the desired result. □
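
A hedged 1-D sketch of the iteration in Theorem 3.2, using the illustrative nonexpansive maps S(x) = (x + 2)/2 and T(x) = 3 − x/2 on C = [0,3] (hypothetical choices made only for the sketch), whose unique common fixed point is 2:

```python
# Sketch of Theorem 3.2 in H = R: two nonexpansive self-maps of C = [0, 3]
# with common fixed point 2.

def S(x): return (x + 2.0) / 2.0     # nonexpansive, F(S) = {2}
def T(x): return 3.0 - x / 2.0       # nonexpansive, F(T) = {2}

def theorem_3_2(x1=0.0, s=0.5, alpha=0.0, iters=60):
    lo, hi = 0.0, 3.0                # C_1 = C, tracked as an interval
    x = x1
    for _ in range(iters):
        z = (1 - s) * x + s * T(x)   # z_n
        y = alpha * x + (1 - alpha) * S(z)   # y_n
        m = 0.5 * (x + y)            # half-space boundary of C_{n+1}
        if y < x:
            hi = min(hi, m)
        elif y > x:
            lo = max(lo, m)
        x = min(max(x1, lo), hi)     # x_{n+1} = Proj_{C_{n+1}} x_1
    return x

common = theorem_3_2()               # should approach 2
```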

Let F be a bifunction of C×C into R, where R denotes the set of real numbers. Recall the following equilibrium problem in the terminology of Blum and Oettli [35] (see also Fan [36]):

Find x ∈ C such that F(x,y) ≥ 0, ∀y ∈ C.
(3.1)

To study the equilibrium problem (3.1), we may assume that F satisfies the following conditions:

(A1) F(x,x)=0 for all x∈C;

(A2) F is monotone, i.e., F(x,y)+F(y,x)≤0 for all x,y∈C;

(A3) for each x,y,z∈C,

lim sup_{t↓0} F(tz + (1 − t)x, y) ≤ F(x,y);

(A4) for each x∈C, y↦F(x,y) is convex and lower semi-continuous.

Putting F(x,y) = 〈Ax, y − x〉 for every x,y ∈ C, we see that the equilibrium problem (3.1) reduces to the variational inequality (1.1).

The following lemma can be found in [35] and [37].

Lemma 3.3 Let C be a nonempty, closed, and convex subset of H and F:C×C→R be a bifunction satisfying (A1)-(A4). Then, for any s>0 and x∈H, there exists z∈C such that

F(z,y) + (1/s)〈y − z, z − x〉 ≥ 0, ∀y ∈ C.

Further, define

T_s x = {z ∈ C : F(z,y) + (1/s)〈y − z, z − x〉 ≥ 0, ∀y ∈ C}
(3.2)

for all s>0 and x∈H. Then the following hold:

(a) T_s is single-valued;

(b) T_s is firmly nonexpansive; that is, ∥T_sx − T_sy∥² ≤ 〈T_sx − T_sy, x − y〉, ∀x,y ∈ H;

(c) F(T_s) = EP(F);

(d) EP(F) is closed and convex.

Lemma 3.4 [30]

Let C be a nonempty, closed, and convex subset of H, F be a bifunction from C×C to R which satisfies (A1)-(A4), and A_F be a multivalued mapping of H into itself defined by

A_F x = {z ∈ H : F(x,y) ≥ 〈y − x, z〉, ∀y ∈ C} if x ∈ C, and A_F x = ∅ if x ∉ C.
(3.3)

Then A_F is a maximal monotone operator with the domain D(A_F) ⊂ C and EP(F) = A_F⁻¹(0), where EP(F) stands for the solution set of (3.1), and

T_s x = (I + sA_F)⁻¹x, ∀x ∈ H, s > 0,

where T_s is defined as in (3.2).

Finally, we consider finding a solution of the equilibrium problem.

Theorem 3.5 Let C be a nonempty, closed, and convex subset of H. Let F:C×C→R be a bifunction satisfying (A1)-(A4) such that EP(F) ≠ ∅. Let {x_n} be a sequence generated by the following iterative process:

x_1 ∈ C, C_1 = C,
y_n = (I + s_nA_F)⁻¹x_n,
C_{n+1} = {v ∈ C_n : ∥y_n − v∥ ≤ ∥x_n − v∥},
x_{n+1} = Proj_{C_{n+1}}x_1, n ≥ 1,

where A_F is defined by (3.3) and {s_n} is a positive sequence such that 0 < a ≤ s_n < ∞, where a is a real constant. Then the sequence {x_n} converges strongly to Proj_{EP(F)}x_1.

Proof Putting A=B=0, S=I, M=A_F, and α_n ≡ 0, we immediately reach the desired conclusion from Theorem 2.1 and Lemma 3.4. □


References

1. Browder FE: Nonlinear operators and nonlinear equations of evolution in Banach spaces. Proc. Symp. Pure Math. 1976, 18: 78–81.
2. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. doi:10.1023/A:1025407607560
3. Polyak BT: Introduction to Optimization. Optimization Software, New York; 1987.
4. Calamai PH, Moré JJ: Projected gradient methods for linearly constrained problems. Math. Program. 1987, 39: 93–116. doi:10.1007/BF02592073
5. Zhang M: Iterative algorithms for common elements in fixed point sets and zero point sets with applications. Fixed Point Theory Appl. 2012, 2012: Article ID 21.
6. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2008, 20: 103–120.
7. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. doi:10.1088/0266-5611/21/6/017
8. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. doi:10.1088/0031-9155/51/10/001
9. Lopez G, Martin V, Xu HK: Perturbation techniques for nonexpansive mappings with applications. Nonlinear Anal. 2009, 10: 2369–2383. doi:10.1016/j.nonrwa.2008.04.020
10. Korpelevich GM: An extragradient method for finding saddle points and for other problems. Ekonom. i Mat. Metody 1976, 12: 747–756.
11. Qin X, Cho SY, Kang SM: An extragradient-type method for generalized equilibrium problems involving strictly pseudocontractive mappings. J. Glob. Optim. 2011, 49: 679–693. doi:10.1007/s10898-010-9556-2
12. Anh PN, Kim JK, Muu LD: An extragradient algorithm for solving bilevel pseudomonotone variational inequalities. J. Glob. Optim. 2012, 52: 627–639. doi:10.1007/s10898-012-9870-y
13. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. doi:10.1007/s10957-005-7564-z
14. Cho SY, Kang SM: Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24: 224–228. doi:10.1016/j.aml.2010.09.008
15. Cho SY, Kang SM: Zero point theorems of m-accretive operators in a Banach space. Fixed Point Theory 2012, 13: 49–58.
16. Kim JK, Cho SY, Qin X: Hybrid projection algorithms for generalized equilibrium problems and strictly pseudocontractive mappings. J. Inequal. Appl. 2010, 2010: Article ID 312602.
17. Kang SM, Cho SY, Liu Z: Convergence of iterative sequences for generalized equilibrium problems involving inverse-strongly monotone mappings. J. Inequal. Appl. 2010, 2010: Article ID 827082.
18. Ye J, Huang J: Strong convergence theorems for fixed point problems and generalized equilibrium problems of three relatively quasi-nonexpansive mappings in Banach spaces. J. Math. Comput. Sci. 2011, 1: 1–18.
19. Qin X, Cho YJ, Kang SM: Convergence theorems of common elements for equilibrium problems and fixed point problems in Banach spaces. J. Comput. Appl. Math. 2009, 225: 20–30. doi:10.1016/j.cam.2008.06.011
20. Qin X, Cho SY, Kang SM: Strong convergence of shrinking projection methods for quasi-ϕ-nonexpansive mappings and equilibrium problems. J. Comput. Appl. Math. 2010, 234: 750–760. doi:10.1016/j.cam.2010.01.015
21. Husain S, Gupta S: A resolvent operator technique for solving generalized system of nonlinear relaxed cocoercive mixed variational inequalities. Adv. Fixed Point Theory 2012, 2: 18–28.
22. Qin X, Cho SY, Kang SM: On hybrid projection methods for asymptotically quasi-ϕ-nonexpansive mappings. Appl. Math. Comput. 2010, 215: 3874–3883. doi:10.1016/j.amc.2009.11.031
23. Kim JK: Strong convergence theorems by hybrid projection methods for equilibrium problems and fixed point problems of the asymptotically quasi-ϕ-nonexpansive mappings. Fixed Point Theory Appl. 2011, 2011: Article ID 10.
24. Kim JK, Cho SY, Qin X: Some results on generalized equilibrium problems involving strictly pseudocontractive mappings. Acta Math. Sci. 2011, 31: 2041–2057.
25. Chang SS, Lee HWJ, Chan CK: A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization. Nonlinear Anal. 2009, 70: 3307–3319. doi:10.1016/j.na.2008.04.035
26. Qin X, Kang JI, Cho YJ: On quasi-variational inclusions and asymptotically strict pseudo-contractions. J. Nonlinear Convex Anal. 2010, 11: 441–453.
27. Tada A, Takahashi W: Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem. J. Optim. Theory Appl. 2007, 133: 359–370. doi:10.1007/s10957-007-9187-z
28. Cho SY, Kang SM: Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32: 1607–1618.
29. Cho SY, Qin X, Kang SM: Hybrid projection algorithms for treating common fixed points of a family of demicontinuous pseudocontractions. Appl. Math. Lett. 2012, 25: 854–857. doi:10.1016/j.aml.2011.10.031
30. Takahashi S, Takahashi W, Toyoda M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147: 27–41. doi:10.1007/s10957-010-9713-2
31. Lv S: Generalized systems of variational inclusions involving (A,η)-monotone mappings. Adv. Fixed Point Theory 2011, 1: 1–14.
32. Kamimura S, Takahashi W: Weak and strong convergence of solutions to accretive operator inclusions and applications. Set-Valued Anal. 2000, 8: 361–374. doi:10.1023/A:1026592623460
33. Rockafellar RT: Characterization of the subdifferentials of convex functions. Pac. J. Math. 1966, 17: 497–510. doi:10.2140/pjm.1966.17.497
34. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88. doi:10.1090/S0002-9947-1970-0282272-5
35. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
36. Fan K: A minimax inequality and applications. III. In Inequality. Edited by: Shisha O. Academic Press, New York; 1972:103–113.
37. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.


Acknowledgements

The first author was supported by the National Natural Science Foundation of China (11071169, 11271105), the Natural Science Foundation of Zhejiang Province (Y6110287, Y12A010095).

Author information

Correspondence to Jing Lu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Both authors contributed equally to this work. Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Gu, F., Lu, J. Projection methods of iterative solutions in Hilbert spaces. Fixed Point Theory Appl 2012, 162 (2012). https://doi.org/10.1186/1687-1812-2012-162
