Open Access

Strong convergence of a splitting algorithm for treating monotone operators

Fixed Point Theory and Applications 2014, 2014:94

https://doi.org/10.1186/1687-1812-2014-94

Received: 5 January 2014

Accepted: 24 March 2014

Published: 9 April 2014

Abstract

In this paper, we investigate a splitting algorithm for treating monotone operators. Strong convergence theorems are established in the framework of Hilbert spaces.

Keywords

maximal monotone operator; fixed point; nonexpansive mapping; proximal point algorithm; zero point

1 Introduction and preliminaries

In this article, we always assume that H is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, and that C is a nonempty, closed, and convex subset of H.

Let $S: C\to C$ be a mapping. $F(S)$ stands for the fixed point set of S. S is said to be contractive iff there exists a constant $\alpha\in(0,1)$ such that
\[
\|Sx - Sy\| \le \alpha\|x - y\|, \quad \forall x, y \in C.
\]
It is well known that every contractive mapping on a complete metric space has a unique fixed point. S is said to be nonexpansive iff
\[
\|Sx - Sy\| \le \|x - y\|, \quad \forall x, y \in C.
\]
If C is a bounded, closed, and convex subset of H, then $F(S)$ is nonempty, closed, and convex; see [1] and the references therein. S is said to be strictly pseudocontractive iff there exists a constant $\kappa\in[0,1)$ such that
\[
\|Sx - Sy\|^2 \le \|x - y\|^2 + \kappa\|(x - y) - (Sx - Sy)\|^2, \quad \forall x, y \in C.
\]

The class of strictly pseudocontractive mappings was introduced by Browder and Petryshyn [2]. It is clear that the class of strictly pseudocontractive mappings includes the class of nonexpansive mappings as a special case. It is also not hard to see that every strictly pseudocontractive mapping is continuous.

Let $A: C\to H$ be a mapping. Recall that A is said to be monotone iff
\[
\langle Ax - Ay, x - y\rangle \ge 0, \quad \forall x, y \in C.
\]
A is said to be strongly monotone iff there exists a constant $\kappa > 0$ such that
\[
\langle Ax - Ay, x - y\rangle \ge \kappa\|x - y\|^2, \quad \forall x, y \in C.
\]
A is said to be inverse-strongly monotone iff there exists a constant $\kappa > 0$ such that
\[
\langle Ax - Ay, x - y\rangle \ge \kappa\|Ax - Ay\|^2, \quad \forall x, y \in C.
\]

A is inverse-strongly monotone iff the inverse of A is strongly monotone. It is not hard to see that every inverse-strongly monotone mapping is monotone and continuous. Let I be the identity mapping on H. From [2], we know that $I - S$ is inverse-strongly monotone iff S is strictly pseudocontractive; for more details, see [2] and the references therein.
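
For completeness, here is the standard computation behind this equivalence, with the constant made explicit (this quantitative form is not spelled out above and follows [2]). Writing $Sx - Sy = (x - y) - \big((I-S)x - (I-S)y\big)$ and expanding the square, the defining inequality of a κ-strictly pseudocontractive mapping is equivalent to
\[
\langle (I-S)x - (I-S)y, x - y\rangle \ge \frac{1-\kappa}{2}\|(I-S)x - (I-S)y\|^2, \quad \forall x, y \in C,
\]
that is, $I - S$ is inverse-strongly monotone with constant $\frac{1-\kappa}{2}$.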

The classical variational inequality problem is formulated as finding a point $x \in C$ such that
\[
\langle y - x, Ax\rangle \ge 0, \quad \forall y \in C.
\]

Such a point $x \in C$ is called a solution of the variational inequality. In this paper, we use $VI(C,A)$ to denote the solution set of the variational inequality. It is known that x is a solution of the variational inequality iff x is a fixed point of the mapping $\mathrm{Proj}_C(I - rA)$, where $\mathrm{Proj}_C$ is the metric projection from H onto C, I is the identity, and r is some positive real number. Recently, many authors have studied solutions of inverse-strongly monotone variational inequalities based on this equivalence; see [3–13].
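
To make this fixed-point characterization concrete, the following is a minimal finite-dimensional sketch, not taken from the paper: it assumes $C = [0,1]^2$ and $A(x) = Qx - q$ with a symmetric positive definite matrix Q (so A is α-inverse-strongly monotone with $\alpha = 1/\lambda_{\max}(Q)$), and simply iterates the mapping $\mathrm{Proj}_C(I - rA)$.

```python
import numpy as np

# Hypothetical finite-dimensional instance (not from the paper):
# C = [0, 1]^2 and A(x) = Qx - q with Q symmetric positive definite,
# so A is alpha-inverse-strongly monotone with alpha = 1 / lambda_max(Q).
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
q = np.array([1.0, -0.5])

def A(x):
    return Q @ x - q

def proj_C(x):
    # metric projection onto the box C = [0, 1]^2
    return np.clip(x, 0.0, 1.0)

r = 0.5                       # assumed to satisfy 0 < r < 2*alpha
x = np.zeros(2)
for _ in range(200):
    x = proj_C(x - r * A(x))  # Picard iteration of Proj_C(I - rA)

# At a fixed point, <y - x, A(x)> >= 0 for every y in C,
# i.e. x solves the variational inequality VI(C, A).
print(x)
```

Here the Picard iteration happens to converge because this particular A is strongly monotone; for a merely inverse-strongly monotone A one uses regularized schemes such as the one studied in Section 2.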

Recall that a set-valued mapping $B: H\to 2^H$ is said to be monotone iff, for all $x, y \in H$, $f \in Bx$ and $g \in By$ imply $\langle x - y, f - g\rangle \ge 0$. In this paper, we use $B^{-1}(0)$ to stand for the zero point set of B. A monotone mapping B is maximal iff the graph $\mathrm{Graph}(B)$ of B is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping B is maximal if and only if, for any $(x,f)\in H\times H$, $\langle x - y, f - g\rangle \ge 0$ for all $(y,g)\in\mathrm{Graph}(B)$ implies $f \in Bx$. For a maximal monotone operator B on H and $r > 0$, we may define the single-valued resolvent $J_r = (I + rB)^{-1}: H\to\mathrm{Dom}(B)$, where $\mathrm{Dom}(B)$ denotes the domain of B. It is known that $J_r$ is firmly nonexpansive and $B^{-1}(0) = F(J_r)$.

One of the most important techniques for solving the zero point problem for monotone operators goes back to the work of Browder [14]. Many important problems have reformulations which require finding zero points, for instance, evolution equations, complementarity problems, mini-max problems, variational inequalities, and fixed point problems. It is well known that minimizing a convex function f can be reduced to finding zero points of the subdifferential mapping $A = \partial f$. One of the basic ideas in the case of a Hilbert space H is to reduce the above inclusion problem to a fixed point problem of the operator $R_A$ defined by $R_A = (I + A)^{-1}$, which is called the classical resolvent of A. If A satisfies some monotonicity conditions, the classical resolvent of A has full domain and is firmly nonexpansive. This property of the resolvent ensures that the Picard iterative algorithm $x_{n+1} = R_A x_n$ converges weakly to a fixed point of $R_A$, which is necessarily a zero point of A. Rockafellar introduced this iteration method and called it the proximal point algorithm (PPA); for more details, see [15] and [16] and the references therein.
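
As a toy illustration of the PPA, here is a minimal sketch, not from the paper: it assumes the specific convex function $f(y) = \|y\|_1 + \frac12\|y - b\|^2$ on $\mathbb{R}^3$ (b is a hypothetical data vector), for which the classical resolvent $(I + \partial f)^{-1}$ reduces to a soft-thresholding formula.

```python
import numpy as np

b = np.array([3.0, -0.2, 0.7])   # hypothetical data vector

def resolvent(x):
    # Classical resolvent R_A = (I + A)^(-1) with A = ∂f and
    # f(y) = ||y||_1 + 0.5 * ||y - b||^2.  Solving 0 ∈ y - x + ∂||y||_1(y) + (y - b)
    # gives y = soft_threshold((x + b) / 2, 1/2).
    v = (x + b) / 2.0
    return np.sign(v) * np.maximum(np.abs(v) - 0.5, 0.0)

x = np.zeros(3)
for _ in range(100):
    x = resolvent(x)             # proximal point iteration x_{n+1} = R_A x_n

print(x)   # approximates the zero of ∂f, i.e. the minimizer of f
```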

It is known that the PPA is in general only weakly convergent, and it was also pointed out in [17] that it is often impractical since, in many cases, solving the fixed point problem exactly is either impossible or as difficult as the original zero point problem. Therefore, one of the most interesting and important problems in the theory of monotone operators is to find an efficient iterative algorithm to compute their zero points. In many disciplines, including economics [18], image recovery [19], quantum physics [20], and control theory [21], problems arise in infinite-dimensional spaces. In such problems, strong convergence (norm convergence) is often much more desirable than weak convergence, for it translates into the physically tangible property that the energy $\|x_n - x\|$ of the error between the iterate $x_n$ and the solution x eventually becomes arbitrarily small. The importance of strong convergence is also underlined in [22], where a convex function f is minimized via the proximal point algorithm: it is shown that the rate of convergence of the value sequence $\{f(x_n)\}$ is better when $\{x_n\}$ converges strongly than when it converges weakly. Such properties have a direct impact when the process is executed directly in the underlying infinite-dimensional space.

To improve on the weak convergence of the PPA, many authors have considered various modifications; see [23–36] and the references therein. One of the classic results was established by Solodov and Svaiter [33]. They obtained strong convergence theorems in Hilbert spaces without any compactness assumption but with the aid of the metric projection.

In this paper, we are concerned with the problem of finding an element in the zero point set of the sum of an inverse-strongly monotone operator and a maximal monotone operator, and in the fixed point set of a strictly pseudocontractive mapping. Strong convergence theorems are established without the aid of metric projections. The organization of this paper is as follows. In Section 1, we provide an introduction and some necessary preliminaries. In Section 2, a regularization iterative algorithm is investigated and a strong convergence theorem is established without the aid of metric projections. In Section 3, applications of the main results are discussed.

In order to prove our main results, we also need the following lemmas.

Lemma 1.1 [36]

Let $A: C\to H$ be a mapping and $B: H\to 2^H$ a maximal monotone operator. Then $F\big(J_r(I - rA)\big) = (A + B)^{-1}(0)$ for every $r > 0$, where $J_r = (I + rB)^{-1}$.

Lemma 1.2 [37]

Let E be a Banach space and let A be an m-accretive operator. For $\lambda > 0$, $\mu > 0$, and $x \in E$, we have
\[
J_\lambda x = J_\mu\Big(\frac{\mu}{\lambda}x + \Big(1 - \frac{\mu}{\lambda}\Big)J_\lambda x\Big),
\]
where $J_\lambda = (I + \lambda A)^{-1}$ and $J_\mu = (I + \mu A)^{-1}$.

Lemma 1.3 [38]

Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space E, and let $\{\beta_n\}$ be a sequence in $(0,1)$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that $x_{n+1} = (1 - \beta_n)y_n + \beta_n x_n$ for all $n \ge 1$ and
\[
\limsup_{n\to\infty}\big(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|\big) \le 0.
\]

Then $\lim_{n\to\infty}\|y_n - x_n\| = 0$.

Lemma 1.4 [39]

Let $\{a_n\}$ be a sequence of nonnegative numbers satisfying the condition $a_{n+1} \le (1 - t_n)a_n + t_n b_n + c_n$, $n \ge 0$, where $\{t_n\}$ is a number sequence in $(0,1)$ such that $\lim_{n\to\infty}t_n = 0$ and $\sum_{n=0}^{\infty}t_n = \infty$, $\{b_n\}$ is a number sequence such that $\limsup_{n\to\infty}b_n \le 0$, and $\{c_n\}$ is a positive number sequence such that $\sum_{n=0}^{\infty}c_n < \infty$. Then $\lim_{n\to\infty}a_n = 0$.

Lemma 1.5 [40]

Let $S: C\to C$ be a strictly pseudocontractive mapping with the constant $\kappa\in[0,1)$. Then S is Lipschitz continuous and $I - S$ is demiclosed at zero. Define a mapping $T: C\to C$ by $Tx := ax + (1 - a)Sx$ for each $x \in C$. Then, for $a \in [\kappa, 1)$, T is nonexpansive with $F(T) = F(S)$.

2 Convergence analysis

Theorem 2.1 Let $A: C\to H$ be an α-inverse-strongly monotone mapping and let B be a maximal monotone operator on H. Let $S: C\to C$ be a strictly pseudocontractive mapping with the constant $\kappa\in[0,1)$ and let $f: C\to C$ be a contractive mapping with the constant $\beta\in[0,1)$. Assume that $\mathrm{Dom}(B)\subset C$ and that $F(S)\cap(A+B)^{-1}(0)$ is not empty. Let $J_{r_n} = (I + r_n B)^{-1}$ and let $\{x_n\}$ be a sequence generated in the following process: $x_0 \in C$ and
\[
\begin{cases}
z_n = \alpha_n f(x_n) + (1 - \alpha_n)x_n,\\
y_n = J_{r_n}(z_n - r_n A z_n + e_n),\\
x_{n+1} = \beta_n x_n + (1 - \beta_n)\big(\gamma_n y_n + (1 - \gamma_n)S y_n\big), \quad n \ge 0,
\end{cases}
\]
where $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are real number sequences in $(0,1)$ and $\{r_n\}$ is a positive real number sequence in $(0, 2\alpha)$. Assume that the control sequences satisfy the following restrictions:
(a) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(b) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(c) $\kappa \le \gamma_n \le a < 1$ and $\lim_{n\to\infty}|\gamma_{n+1} - \gamma_n| = 0$;

(d) $0 < b \le r_n \le c < 2\alpha$ and $\sum_{n=1}^{\infty}|r_n - r_{n-1}| < \infty$;

(e) $\sum_{n=0}^{\infty}\|e_n\| < \infty$,

where a, b, and c are three real numbers. Then $\{x_n\}$ converges strongly to a point $\bar{x}\in F(S)\cap(A+B)^{-1}(0)$, where $\bar{x} = \mathrm{Proj}_{F(S)\cap(A+B)^{-1}(0)}f(\bar{x})$.

Proof First, we show that $\{x_n\}$ is bounded. Notice that $I - r_n A$ is nonexpansive. Indeed, we have
\[
\begin{aligned}
\|(I - r_n A)x - (I - r_n A)y\|^2 &= \|x - y\|^2 - 2 r_n\langle x - y, Ax - Ay\rangle + r_n^2\|Ax - Ay\|^2\\
&\le \|x - y\|^2 - r_n(2\alpha - r_n)\|Ax - Ay\|^2.
\end{aligned}
\]
In view of the restriction (d), we find that $I - r_n A$ is nonexpansive. Fixing $p \in F(S)\cap(A+B)^{-1}(0)$, we find that
\[
\|z_n - p\| \le \alpha_n\|f(x_n) - p\| + (1 - \alpha_n)\|x_n - p\| \le (1 - \alpha_n(1 - \beta))\|x_n - p\| + \alpha_n\|f(p) - p\|.
\]
Putting $T_n x := \gamma_n x + (1 - \gamma_n)Sx$ for each $x \in C$, we see from Lemma 1.5 that $T_n$ is nonexpansive with $F(T_n) = F(S)$ for each $n \ge 0$. It follows that
\[
\begin{aligned}
\|x_{n+1} - p\| &\le \beta_n\|x_n - p\| + (1 - \beta_n)\|T_n J_{r_n}(z_n - r_n A z_n + e_n) - p\|\\
&\le \beta_n\|x_n - p\| + (1 - \beta_n)\|z_n - p\| + (1 - \beta_n)\|e_n\|\\
&\le \beta_n\|x_n - p\| + (1 - \alpha_n(1 - \beta))(1 - \beta_n)\|x_n - p\| + \alpha_n(1 - \beta_n)\|f(p) - p\| + \|e_n\|\\
&\le \big(1 - \alpha_n(1 - \beta)(1 - \beta_n)\big)\|x_n - p\| + \alpha_n(1 - \beta_n)\|f(p) - p\| + \|e_n\|\\
&\le \max\Big\{\|x_n - p\|, \frac{\|f(p) - p\|}{1 - \beta}\Big\} + \|e_n\|\\
&\le \max\Big\{\|x_{n-1} - p\|, \frac{\|f(p) - p\|}{1 - \beta}\Big\} + \|e_{n-1}\| + \|e_n\|\\
&\;\;\vdots\\
&\le \max\Big\{\|x_0 - p\|, \frac{\|f(p) - p\|}{1 - \beta}\Big\} + \sum_{i=0}^{n}\|e_i\|\\
&\le \max\Big\{\|x_0 - p\|, \frac{\|f(p) - p\|}{1 - \beta}\Big\} + \sum_{i=0}^{\infty}\|e_i\| < \infty.
\end{aligned}
\]
This proves that the sequence $\{x_n\}$ is bounded, and so are $\{y_n\}$ and $\{z_n\}$. Notice that
\[
\|z_n - z_{n-1}\| \le (1 - \alpha_n(1 - \beta))\|x_n - x_{n-1}\| + |\alpha_n - \alpha_{n-1}|\,\|f(x_{n-1}) - x_{n-1}\|.
\]
Putting $\rho_n = z_n - r_n A z_n + e_n$, we find that
\[
\begin{aligned}
\|\rho_n - \rho_{n-1}\| &\le \|z_n - z_{n-1}\| + |r_n - r_{n-1}|\,\|A z_{n-1}\| + \|e_n\| + \|e_{n-1}\|\\
&\le (1 - \alpha_n(1 - \beta))\|x_n - x_{n-1}\| + |\alpha_n - \alpha_{n-1}|\,\|f(x_{n-1}) - x_{n-1}\|\\
&\qquad + |r_n - r_{n-1}|\,\|A z_{n-1}\| + \|e_n\| + \|e_{n-1}\|.
\end{aligned}
\]
It follows from Lemma 1.2 that
\[
\begin{aligned}
\|y_n - y_{n-1}\| &= \Big\|J_{r_{n-1}}\Big(\frac{r_{n-1}}{r_n}\rho_n + \Big(1 - \frac{r_{n-1}}{r_n}\Big)J_{r_n}\rho_n\Big) - J_{r_{n-1}}\rho_{n-1}\Big\|\\
&\le \Big\|\frac{r_{n-1}}{r_n}(\rho_n - \rho_{n-1}) + \Big(1 - \frac{r_{n-1}}{r_n}\Big)(J_{r_n}\rho_n - \rho_{n-1})\Big\|\\
&\le \|\rho_n - \rho_{n-1}\| + \Big|1 - \frac{r_{n-1}}{r_n}\Big|\,\|J_{r_n}\rho_n - \rho_n\|\\
&\le \|\rho_n - \rho_{n-1}\| + \frac{|r_n - r_{n-1}|}{b}\|J_{r_n}\rho_n - \rho_n\|\\
&\le (1 - \alpha_n(1 - \beta))\|x_n - x_{n-1}\| + f_n\\
&\le \|x_n - x_{n-1}\| + f_n,
\end{aligned}
\]
where
\[
f_n = |\alpha_n - \alpha_{n-1}|\,\|f(x_{n-1}) - x_{n-1}\| + |r_n - r_{n-1}|\Big(\|A z_{n-1}\| + \frac{\|J_{r_n}\rho_n - \rho_n\|}{b}\Big) + \|e_n\| + \|e_{n-1}\|.
\]
This implies that
\[
\begin{aligned}
\|T_n y_n - T_{n-1} y_{n-1}\| &\le \|T_n y_n - T_n y_{n-1}\| + \|T_n y_{n-1} - T_{n-1} y_{n-1}\|\\
&\le \|x_n - x_{n-1}\| + f_n + |\gamma_n - \gamma_{n-1}|\,\|S y_{n-1} - y_{n-1}\|.
\end{aligned}
\]
In view of the restrictions (a), (c), (d), and (e), we find that
\[
\limsup_{n\to\infty}\big(\|T_n y_n - T_{n-1} y_{n-1}\| - \|x_n - x_{n-1}\|\big) \le 0.
\]
It follows from Lemma 1.3 that
\[
\lim_{n\to\infty}\|T_n y_n - x_n\| = 0.
\tag{2.1}
\]
This in turn implies that
\[
\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0.
\tag{2.2}
\]
Notice that
\[
\begin{aligned}
\|x_{n+1} - p\|^2 &\le \beta_n\|x_n - p\|^2 + (1 - \beta_n)\|T_n J_{r_n}\rho_n - p\|^2\\
&\le \beta_n\|x_n - p\|^2 + (1 - \beta_n)\|J_{r_n}(z_n - r_n A z_n + e_n) - p\|^2\\
&\le \beta_n\|x_n - p\|^2 + (1 - \beta_n)\|(z_n - r_n A z_n) - (I - r_n A)p\|^2\\
&\qquad + \|e_n\|\big(\|e_n\| + 2\|(z_n - r_n A z_n) - (I - r_n A)p\|\big)\\
&\le \beta_n\|x_n - p\|^2 + (1 - \beta_n)\|z_n - p\|^2 - r_n(1 - \beta_n)(2\alpha - r_n)\|A z_n - A p\|^2\\
&\qquad + \|e_n\|\big(\|e_n\| + 2\|(z_n - r_n A z_n) - (I - r_n A)p\|\big)\\
&\le \beta_n\|x_n - p\|^2 + \alpha_n(1 - \beta_n)\|f(x_n) - p\|^2 + (1 - \beta_n)(1 - \alpha_n)\|x_n - p\|^2\\
&\qquad - r_n(1 - \beta_n)(2\alpha - r_n)\|A z_n - A p\|^2 + g_n,
\end{aligned}
\]
where $g_n = \|e_n\|\big(\|e_n\| + 2\|(z_n - r_n A z_n) - (I - r_n A)p\|\big)$. It follows that
\[
\begin{aligned}
r_n(1 - \beta_n)(2\alpha - r_n)\|A z_n - A p\|^2 &\le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n(1 - \beta_n)\|f(x_n) - p\|^2 + g_n\\
&\le \big(\|x_n - p\| + \|x_{n+1} - p\|\big)\|x_{n+1} - x_n\| + \alpha_n\|f(x_n) - p\|^2 + g_n.
\end{aligned}
\]
In view of the restrictions (a), (b), (c), (d), and (e), we find from (2.2) that
\[
\lim_{n\to\infty}\|A z_n - A p\| = 0.
\tag{2.3}
\]
Since $J_{r_n}$ is firmly nonexpansive, we find that
\[
\begin{aligned}
\|J_{r_n}\rho_n - p\|^2 &\le \big\langle J_{r_n}\rho_n - p, (z_n - r_n A z_n + e_n) - (p - r_n A p)\big\rangle\\
&= \frac{1}{2}\Big(\|J_{r_n}\rho_n - p\|^2 + \|(z_n - r_n A z_n + e_n) - (p - r_n A p)\|^2\\
&\qquad - \big\|(J_{r_n}\rho_n - p) - \big((z_n - r_n A z_n + e_n) - (p - r_n A p)\big)\big\|^2\Big)\\
&\le \frac{1}{2}\Big(\|J_{r_n}\rho_n - p\|^2 + \|(I - r_n A)z_n - (I - r_n A)p\|^2 + g_n\\
&\qquad - \|J_{r_n}\rho_n - z_n - e_n + r_n A z_n - r_n A p\|^2\Big)\\
&\le \frac{1}{2}\Big(\|J_{r_n}\rho_n - p\|^2 + \|z_n - p\|^2 + g_n - \|J_{r_n}\rho_n - z_n - e_n\|^2 - r_n^2\|A z_n - A p\|^2\\
&\qquad + 2 r_n\|A z_n - A p\|\,\|J_{r_n}\rho_n - z_n - e_n\|\Big).
\end{aligned}
\]
It follows that
\[
\begin{aligned}
\|J_{r_n}\rho_n - p\|^2 &\le \|z_n - p\|^2 + g_n - \|J_{r_n}\rho_n - z_n - e_n\|^2 - r_n^2\|A z_n - A p\|^2\\
&\qquad + 2 r_n\|A z_n - A p\|\,\|J_{r_n}\rho_n - z_n - e_n\|\\
&\le \alpha_n\|f(x_n) - p\|^2 + (1 - \alpha_n)\|x_n - p\|^2 + g_n - \|J_{r_n}\rho_n - z_n - e_n\|^2\\
&\qquad + 2 r_n\|A z_n - A p\|\,\|J_{r_n}\rho_n - z_n - e_n\|.
\end{aligned}
\]
This implies that
\[
\begin{aligned}
\|x_{n+1} - p\|^2 &\le \beta_n\|x_n - p\|^2 + (1 - \beta_n)\|T_n J_{r_n}\rho_n - p\|^2\\
&\le \beta_n\|x_n - p\|^2 + (1 - \beta_n)\|J_{r_n}\rho_n - p\|^2\\
&\le \|x_n - p\|^2 + \alpha_n\|f(x_n) - p\|^2 + g_n - (1 - \beta_n)\|J_{r_n}\rho_n - z_n - e_n\|^2\\
&\qquad + 2 r_n\|A z_n - A p\|\,\|J_{r_n}\rho_n - z_n - e_n\|.
\end{aligned}
\]
It follows that
\[
\begin{aligned}
(1 - \beta_n)\|J_{r_n}\rho_n - z_n - e_n\|^2 &\le \big(\|x_n - p\| + \|x_{n+1} - p\|\big)\|x_n - x_{n+1}\| + \alpha_n\|f(x_n) - p\|^2 + g_n\\
&\qquad + 2 r_n\|A z_n - A p\|\,\|J_{r_n}\rho_n - z_n - e_n\|.
\end{aligned}
\]
In view of the restrictions (a), (b), and (e), we find from (2.2) and (2.3) that $\lim_{n\to\infty}\|J_{r_n}\rho_n - z_n - e_n\| = 0$. This in turn implies that
\[
\lim_{n\to\infty}\|J_{r_n}\rho_n - z_n\| = 0.
\tag{2.4}
\]
Notice that
\[
\|y_n - x_n\| \le \|y_n - z_n\| + \|z_n - x_n\| \le \|y_n - z_n\| + \alpha_n\|f(x_n) - x_n\|.
\]
It follows from (2.4) that
\[
\lim_{n\to\infty}\|y_n - x_n\| = 0.
\tag{2.5}
\]
On the other hand, we have
\[
\begin{aligned}
\|T_n x_n - x_n\| &\le \big\|\big(\gamma_n x_n + (1 - \gamma_n)S x_n\big) - \big(\gamma_n y_n + (1 - \gamma_n)S y_n\big)\big\| + \big\|\big(\gamma_n y_n + (1 - \gamma_n)S y_n\big) - x_n\big\|\\
&\le \gamma_n\|y_n - x_n\| + (1 - \gamma_n)\|S y_n - S x_n\| + \|T_n y_n - x_n\|.
\end{aligned}
\]
Since S is Lipschitz continuous, we find from (2.1) and (2.5) that
\[
\lim_{n\to\infty}\|T_n x_n - x_n\| = 0.
\tag{2.6}
\]
Notice that
\[
\begin{aligned}
\|S y_n - y_n\| &\le \|S y_n - T_n y_n\| + \|T_n y_n - T_n x_n\| + \|T_n x_n - x_n\| + \|x_n - y_n\|\\
&\le \gamma_n\|y_n - S y_n\| + 2\|y_n - x_n\| + \|T_n x_n - x_n\|.
\end{aligned}
\]
That is,
\[
(1 - \gamma_n)\|S y_n - y_n\| \le 2\|y_n - x_n\| + \|T_n x_n - x_n\|.
\]
In view of (2.5) and (2.6), we find from the restriction (c) that
\[
\lim_{n\to\infty}\|S y_n - y_n\| = 0.
\tag{2.7}
\]
Since $\mathrm{Proj}_{F(S)\cap(A+B)^{-1}(0)}f$ is contractive, it has a unique fixed point, say $\bar{x}$. Next, we show that $\limsup_{n\to\infty}\langle f(\bar{x}) - \bar{x}, z_n - \bar{x}\rangle \le 0$. To show this, we choose a subsequence $\{z_{n_i}\}$ of $\{z_n\}$ such that
\[
\limsup_{n\to\infty}\langle f(\bar{x}) - \bar{x}, z_n - \bar{x}\rangle = \lim_{i\to\infty}\langle f(\bar{x}) - \bar{x}, z_{n_i} - \bar{x}\rangle.
\]

Since $\{z_{n_i}\}$ is bounded, we can choose a subsequence $\{z_{n_{i_j}}\}$ of $\{z_{n_i}\}$ which converges weakly to some point x. We may assume, without loss of generality, that $z_{n_i}$ converges weakly to x. In view of (2.4), we find that $y_{n_i}$ also converges weakly to x. Since $I - S$ is demiclosed at zero (Lemma 1.5), it follows from (2.7) that $x \in F(S)$.

Now we are in a position to show that $x \in (A+B)^{-1}(0)$. Notice that $y_n = J_{r_n}(z_n - r_n A z_n + e_n)$. It follows that
\[
z_n - r_n A z_n + e_n \in (I + r_n B)y_n.
\]
That is,
\[
\frac{z_n - y_n - r_n A z_n + e_n}{r_n} \in B y_n.
\]
Since B is monotone, we get, for any $(\mu, \nu)\in\mathrm{Graph}(B)$,
\[
\Big\langle y_n - \mu, \frac{z_n - y_n - r_n A z_n + e_n}{r_n} - \nu\Big\rangle \ge 0.
\]
Replacing n by $n_i$ and letting $i\to\infty$, we obtain from (2.4) that
\[
\langle x - \mu, -Ax - \nu\rangle \ge 0.
\]
Since B is maximal monotone, this gives $-Ax \in Bx$, that is, $0 \in (A + B)(x)$. This proves that $x \in (A+B)^{-1}(0)$, and hence $x \in F(S)\cap(A+B)^{-1}(0)$. It follows that
\[
\limsup_{n\to\infty}\langle f(\bar{x}) - \bar{x}, z_n - \bar{x}\rangle = \langle f(\bar{x}) - \bar{x}, x - \bar{x}\rangle \le 0,
\]
where the last inequality holds because $\bar{x} = \mathrm{Proj}_{F(S)\cap(A+B)^{-1}(0)}f(\bar{x})$.
Finally, we show that $x_n \to \bar{x}$. Notice that
\[
\begin{aligned}
\|z_n - \bar{x}\|^2 &\le \alpha_n\langle f(x_n) - \bar{x}, z_n - \bar{x}\rangle + (1 - \alpha_n)\|x_n - \bar{x}\|\,\|z_n - \bar{x}\|\\
&\le \alpha_n\|f(x_n) - f(\bar{x})\|\,\|z_n - \bar{x}\| + \alpha_n\langle f(\bar{x}) - \bar{x}, z_n - \bar{x}\rangle + (1 - \alpha_n)\|x_n - \bar{x}\|\,\|z_n - \bar{x}\|\\
&\le \frac{1 - \alpha_n(1 - \beta)}{2}\big(\|x_n - \bar{x}\|^2 + \|z_n - \bar{x}\|^2\big) + \alpha_n\langle f(\bar{x}) - \bar{x}, z_n - \bar{x}\rangle.
\end{aligned}
\]
This implies that
\[
\|z_n - \bar{x}\|^2 \le (1 - \alpha_n(1 - \beta))\|x_n - \bar{x}\|^2 + 2\alpha_n\langle f(\bar{x}) - \bar{x}, z_n - \bar{x}\rangle.
\]
It follows that
\[
\begin{aligned}
\|y_n - \bar{x}\|^2 &\le \|(z_n - r_n A z_n) - (\bar{x} - r_n A\bar{x}) + e_n\|^2\\
&\le \|(z_n - r_n A z_n) - (\bar{x} - r_n A\bar{x})\|^2 + \|e_n\|^2 + 2\|e_n\|\,\|(z_n - r_n A z_n) - (\bar{x} - r_n A\bar{x})\|\\
&\le \|z_n - \bar{x}\|^2 + h_n\\
&\le (1 - \alpha_n(1 - \beta))\|x_n - \bar{x}\|^2 + 2\alpha_n\langle f(\bar{x}) - \bar{x}, z_n - \bar{x}\rangle + h_n,
\end{aligned}
\tag{2.8}
\]
where $h_n = \|e_n\|\big(\|e_n\| + 2\|(z_n - r_n A z_n) - (I - r_n A)\bar{x}\|\big)$. It follows from (2.8) that
\[
\begin{aligned}
\|x_{n+1} - \bar{x}\|^2 &\le \beta_n\|x_n - \bar{x}\|^2 + (1 - \beta_n)\|T_n y_n - \bar{x}\|^2\\
&\le \beta_n\|x_n - \bar{x}\|^2 + (1 - \beta_n)\|y_n - \bar{x}\|^2\\
&\le \big(1 - \alpha_n(1 - \beta_n)(1 - \beta)\big)\|x_n - \bar{x}\|^2 + 2\alpha_n(1 - \beta_n)\langle f(\bar{x}) - \bar{x}, z_n - \bar{x}\rangle + h_n.
\end{aligned}
\]

In view of the restrictions (a), (b), and (e), we find from Lemma 1.4 that $x_n \to \bar{x}$. This completes the proof. □
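
To see the scheme of Theorem 2.1 in action, the following is a minimal finite-dimensional sketch; the concrete choices below are hypothetical and are not part of the theorem. It takes $A(x) = x - a$ (which is 1-inverse-strongly monotone), $B = \partial I_{\mathbb{R}^3_+}$ (so $J_{r_n}$ is the projection onto the nonnegative orthant), S the projection onto a ball around a (nonexpansive, i.e. strictly pseudocontractive with $\kappa = 0$), $f(x) = x/2$, and zero error terms, so that $F(S)\cap(A+B)^{-1}(0) = \{a\}$.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 0.5])        # hypothetical target: the unique zero of A + B

def A(x):                            # A(x) = x - a is 1-inverse-strongly monotone
    return x - a

def J(x, r):                         # resolvent of B = ∂I_{R^3_+}: projection onto the orthant
    return np.maximum(x, 0.0)

def S(x):                            # projection onto the ball of radius 2 around a (kappa = 0)
    d = x - a
    nd = np.linalg.norm(d)
    return x if nd <= 2.0 else a + 2.0 * d / nd

def f(x):                            # contractive mapping with constant beta = 1/2
    return 0.5 * x

x = rng.normal(size=3)               # x_0
for n in range(2000):
    alpha_n, beta_n, gamma_n, r_n = 1.0 / (n + 2), 0.5, 0.5, 1.0
    e_n = np.zeros(3)                # summable error terms, taken to be zero here
    z = alpha_n * f(x) + (1 - alpha_n) * x
    y = J(z - r_n * A(z) + e_n, r_n)
    x = beta_n * x + (1 - beta_n) * (gamma_n * y + (1 - gamma_n) * S(y))

print(x)   # the iterates approach a, the unique point of F(S) ∩ (A+B)^{-1}(0) here
```

The parameter choices satisfy the restrictions (a)–(e): $\alpha_n = 1/(n+2)$, $\beta_n = \gamma_n = 1/2$, $r_n = 1 \in (0, 2\alpha) = (0,2)$, and $e_n = 0$.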

If S is nonexpansive and $\gamma_n \equiv 0$, then we have the following result immediately.

Corollary 2.2 Let $A: C\to H$ be an α-inverse-strongly monotone mapping and let B be a maximal monotone operator on H. Let $S: C\to C$ be a nonexpansive mapping and let $f: C\to C$ be a contractive mapping with the constant $\beta\in[0,1)$. Assume that $\mathrm{Dom}(B)\subset C$ and that $F(S)\cap(A+B)^{-1}(0)$ is not empty. Let $J_{r_n} = (I + r_n B)^{-1}$ and let $\{x_n\}$ be a sequence generated in the following process: $x_0 \in C$ and
\[
\begin{cases}
y_n = \alpha_n f(x_n) + (1 - \alpha_n)x_n,\\
x_{n+1} = \beta_n x_n + (1 - \beta_n)S J_{r_n}(y_n - r_n A y_n + e_n), \quad n \ge 0,
\end{cases}
\]
where $\{\alpha_n\}$ and $\{\beta_n\}$ are real number sequences in $(0,1)$ and $\{r_n\}$ is a positive real number sequence in $(0, 2\alpha)$. Assume that the control sequences satisfy the following restrictions:
(a) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(b) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(c) $0 < b \le r_n \le c < 2\alpha$ and $\sum_{n=1}^{\infty}|r_n - r_{n-1}| < \infty$;

(d) $\sum_{n=0}^{\infty}\|e_n\| < \infty$,

where b and c are two real numbers. Then $\{x_n\}$ converges strongly to a point $\bar{x}\in F(S)\cap(A+B)^{-1}(0)$, where $\bar{x} = \mathrm{Proj}_{F(S)\cap(A+B)^{-1}(0)}f(\bar{x})$.

3 Applications

Many nonlinear problems arising in applied areas such as image recovery, signal processing, and machine learning are mathematically modeled as a nonlinear operator equation in which the operator is decomposed as the sum of two nonlinear operators. The central problem is to iteratively find a zero point of the sum of two monotone operators, that is,
\[
0 \in (A + B)(x).
\]
Many real-world problems can be formulated as problems of the above form. For instance, a stationary solution to the initial value problem of the evolution equation
\[
\begin{cases}
0 \in F u + \dfrac{\partial u}{\partial t},\\
u_0 = u(0)
\end{cases}
\]

can be recast as the inclusion problem when the governing maximal monotone operator F is of the form $F = A + B$; for more details, see [41] and the references therein.

First, we give the following result.

Theorem 3.1 Let $A: C\to H$ be an α-inverse-strongly monotone mapping and let B be a maximal monotone operator on H. Let $f: C\to C$ be a contractive mapping with the constant $\beta\in[0,1)$. Assume that $\mathrm{Dom}(B)\subset C$ and that $(A+B)^{-1}(0)$ is not empty. Let $J_{r_n} = (I + r_n B)^{-1}$ and let $\{x_n\}$ be a sequence generated in the following process: $x_0 \in C$ and
\[
\begin{cases}
y_n = \alpha_n f(x_n) + (1 - \alpha_n)x_n,\\
x_{n+1} = \beta_n x_n + (1 - \beta_n)J_{r_n}(y_n - r_n A y_n + e_n), \quad n \ge 0,
\end{cases}
\]
where $\{\alpha_n\}$ and $\{\beta_n\}$ are real number sequences in $(0,1)$ and $\{r_n\}$ is a positive real number sequence in $(0, 2\alpha)$. Assume that the control sequences satisfy the following restrictions:
(a) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(b) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(c) $0 < b \le r_n \le c < 2\alpha$ and $\sum_{n=1}^{\infty}|r_n - r_{n-1}| < \infty$;

(d) $\sum_{n=0}^{\infty}\|e_n\| < \infty$,

where b and c are two real numbers. Then $\{x_n\}$ converges strongly to a point $\bar{x}\in(A+B)^{-1}(0)$, where $\bar{x} = \mathrm{Proj}_{(A+B)^{-1}(0)}f(\bar{x})$.

Proof Put $S = I$, the identity on C, in Corollary 2.2. The desired conclusion follows immediately. □
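
For a concrete instance of Theorem 3.1, the sketch below applies the iteration to a lasso-type problem; the data M, d and the parameter choices are hypothetical and are only meant to illustrate the splitting $0 \in Ax + Bx$ with $A = \nabla\big(\frac12\|Mx - d\|^2\big)$ and $B = \partial(\lambda\|\cdot\|_1)$, whose resolvent is soft-thresholding.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(20, 5))         # hypothetical data
d = rng.normal(size=20)
lam = 0.1

def A(x):                            # gradient of (1/2)||Mx - d||^2, inverse-strongly
    return M.T @ (M @ x - d)         # monotone with alpha = 1 / ||M^T M||

def J(x, r):                         # resolvent of B = ∂(lam * ||.||_1): soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - r * lam, 0.0)

def f(x):                            # contractive mapping (beta = 0), anchoring at the origin
    return np.zeros_like(x)

alpha_ism = 1.0 / np.linalg.norm(M.T @ M, 2)
r = alpha_ism                        # fixed r_n in (0, 2 * alpha_ism)

x = np.zeros(5)
for n in range(500):
    alpha_n, beta_n = 1.0 / (n + 2), 0.5
    e_n = np.zeros(5)                # summable errors, zero here
    y = alpha_n * f(x) + (1 - alpha_n) * x
    x = beta_n * x + (1 - beta_n) * J(y - r * A(y) + e_n, r)

print(x)   # approximates a zero of A + B, i.e. a minimizer of (1/2)||Mx - d||^2 + lam*||x||_1
```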

Let H be a Hilbert space and $f: H\to(-\infty, +\infty]$ a proper convex lower semicontinuous function. Then the subdifferential ∂f of f is defined as follows:
\[
\partial f(x) = \{y \in H : f(z) \ge f(x) + \langle z - x, y\rangle, \ \forall z \in H\}, \quad \forall x \in H.
\]
From Rockafellar [16], we know that ∂f is maximal monotone. It is easy to verify that $0 \in \partial f(x)$ if and only if $f(x) = \min_{y\in H}f(y)$. Let $I_C$ be the indicator function of C, i.e.,
\[
I_C(x) =
\begin{cases}
0, & x \in C,\\
+\infty, & x \notin C.
\end{cases}
\tag{3.1}
\]

Since $I_C$ is a proper lower semicontinuous convex function on H, we see that the subdifferential $\partial I_C$ of $I_C$ is a maximal monotone operator. Moreover, for $\lambda > 0$, $x \in H$, and $y \in C$, we have $y = (I + \lambda\partial I_C)^{-1}x$ if and only if $y = \mathrm{Proj}_C x$.
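
A quick numerical sanity check of this identification, sketched under the hypothetical choice $C = [0,1]^4$: $y = (I + \lambda\partial I_C)^{-1}x$ means $x - y \in \lambda\partial I_C(y)$, i.e. $\langle x - y, z - y\rangle \le 0$ for all $z \in C$, which is exactly the variational characterization of $\mathrm{Proj}_C x$.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(scale=2.0, size=4)
y = np.clip(x, 0.0, 1.0)                   # Proj_C x for the box C = [0, 1]^4

# x - y should lie in ∂I_C(y): <x - y, z - y> <= 0 for every z in C.
z = rng.uniform(0.0, 1.0, size=(1000, 4))  # random points of C
print(np.max((z - y) @ (x - y)))           # nonpositive, up to rounding
```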

Theorem 3.2 Let $A: C\to H$ be an α-inverse-strongly monotone mapping. Let $S: C\to C$ be a strictly pseudocontractive mapping with the constant $\kappa\in[0,1)$ and let $f: C\to C$ be a contractive mapping with the constant $\beta\in[0,1)$. Assume that $F(S)\cap VI(C,A)$ is not empty. Let $\{x_n\}$ be a sequence generated in the following process: $x_0 \in C$ and
\[
\begin{cases}
z_n = \alpha_n f(x_n) + (1 - \alpha_n)x_n,\\
y_n = \mathrm{Proj}_C(z_n - r_n A z_n + e_n),\\
x_{n+1} = \beta_n x_n + (1 - \beta_n)\big(\gamma_n y_n + (1 - \gamma_n)S y_n\big), \quad n \ge 0,
\end{cases}
\]
where $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are real number sequences in $(0,1)$ and $\{r_n\}$ is a positive real number sequence in $(0, 2\alpha)$. Assume that the control sequences satisfy the following restrictions:
(a) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(b) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(c) $\kappa \le \gamma_n \le a < 1$ and $\lim_{n\to\infty}|\gamma_{n+1} - \gamma_n| = 0$;

(d) $0 < b \le r_n \le c < 2\alpha$ and $\sum_{n=1}^{\infty}|r_n - r_{n-1}| < \infty$;

(e) $\sum_{n=0}^{\infty}\|e_n\| < \infty$,

where a, b, and c are three real numbers. Then $\{x_n\}$ converges strongly to a point $\bar{x}\in F(S)\cap VI(C,A)$, where $\bar{x} = \mathrm{Proj}_{F(S)\cap VI(C,A)}f(\bar{x})$.

Proof Put $B = \partial I_C$ in Theorem 2.1. Next, we show that $VI(C,A) = (A + \partial I_C)^{-1}(0)$. Notice that
\[
\begin{aligned}
x \in (A + \partial I_C)^{-1}(0) &\iff 0 \in Ax + \partial I_C x\\
&\iff -Ax \in \partial I_C x\\
&\iff \langle Ax, y - x\rangle \ge 0, \quad \forall y \in C\\
&\iff x \in VI(C,A).
\end{aligned}
\]

The desired conclusion follows immediately. □

If S = I, the identity, then we find from Theorem 3.2 the following result immediately.

Corollary 3.3 Let $A: C\to H$ be an α-inverse-strongly monotone mapping and let $f: C\to C$ be a contractive mapping with the constant $\beta\in[0,1)$. Assume that $VI(C,A)$ is not empty. Let $\{x_n\}$ be a sequence generated in the following process: $x_0 \in C$ and
\[
\begin{cases}
y_n = \alpha_n f(x_n) + (1 - \alpha_n)x_n,\\
x_{n+1} = \beta_n x_n + (1 - \beta_n)\mathrm{Proj}_C(y_n - r_n A y_n + e_n), \quad n \ge 0,
\end{cases}
\]
where $\{\alpha_n\}$ and $\{\beta_n\}$ are real number sequences in $(0,1)$ and $\{r_n\}$ is a positive real number sequence in $(0, 2\alpha)$. Assume that the control sequences satisfy the following restrictions:
(a) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(b) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(c) $0 < b \le r_n \le c < 2\alpha$ and $\sum_{n=1}^{\infty}|r_n - r_{n-1}| < \infty$;

(d) $\sum_{n=0}^{\infty}\|e_n\| < \infty$,

where b and c are two real numbers. Then $\{x_n\}$ converges strongly to a point $\bar{x}\in VI(C,A)$, where $\bar{x} = \mathrm{Proj}_{VI(C,A)}f(\bar{x})$.

Let F be a bifunction of $C\times C$ into $\mathbb{R}$, where $\mathbb{R}$ denotes the set of real numbers. Recall the following equilibrium problem:
\[
\text{Find } x \in C \text{ such that } F(x, y) \ge 0, \quad \forall y \in C.
\tag{3.2}
\]

In this paper, we use $EP(F)$ to denote the solution set of the equilibrium problem (3.2).

To study the equilibrium problem (3.2), we may assume that F satisfies the following conditions:

(A1) $F(x, x) = 0$ for all $x \in C$;

(A2) F is monotone, i.e., $F(x, y) + F(y, x) \le 0$ for all $x, y \in C$;

(A3) for each $x, y, z \in C$,
\[
\limsup_{t\downarrow 0} F\big(tz + (1 - t)x, y\big) \le F(x, y);
\]

(A4) for each $x \in C$, $y \mapsto F(x, y)$ is convex and weakly lower semicontinuous.

Putting $F(x, y) = \langle Ax, y - x\rangle$ for every $x, y \in C$, we see that the equilibrium problem (3.2) reduces to a variational inequality.

Lemma 3.4 [42, 43]

Let C be a nonempty closed convex subset of H and let $F: C\times C\to\mathbb{R}$ be a bifunction satisfying (A1)-(A4). Then, for any $r > 0$ and $x \in H$, there exists $z \in C$ such that
\[
F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \quad \forall y \in C.
\]
Further, define
\[
T_r x = \Big\{z \in C : F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \ \forall y \in C\Big\}
\tag{3.3}
\]
for all $r > 0$ and $x \in H$. Then the following hold:
(a) $T_r$ is single-valued;

(b) $T_r$ is firmly nonexpansive, i.e., for any $x, y \in H$,
\[
\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y\rangle;
\]

(c) $F(T_r) = EP(F)$;

(d) $EP(F)$ is closed and convex.

Lemma 3.5 [44]

Let C be a nonempty closed convex subset of a real Hilbert space H, F a bifunction from $C\times C$ to $\mathbb{R}$ which satisfies (A1)-(A4), and $A_F$ a multivalued mapping of H into itself defined by
\[
A_F x =
\begin{cases}
\{z \in H : F(x, y) \ge \langle y - x, z\rangle, \ \forall y \in C\}, & x \in C,\\
\emptyset, & x \notin C.
\end{cases}
\tag{3.4}
\]
Then $A_F$ is a maximal monotone operator with the domain $D(A_F)\subset C$, $EP(F) = A_F^{-1}(0)$, and
\[
T_r x = (I + r A_F)^{-1}x, \quad \forall x \in H, \ r > 0,
\]

where $T_r$ is defined as in (3.3).

Theorem 3.6 Let $A: C\to H$ be an α-inverse-strongly monotone mapping and let F be a bifunction from $C\times C$ to $\mathbb{R}$ which satisfies (A1)-(A4). Let $S: C\to C$ be a strictly pseudocontractive mapping with the constant $\kappa\in[0,1)$ and let $f: C\to C$ be a contractive mapping with the constant $\beta\in[0,1)$. Assume that $F(S)\cap EP(F)$ is not empty. Let $\{x_n\}$ be a sequence generated in the following process: $x_0 \in C$ and
\[
\begin{cases}
z_n = \alpha_n f(x_n) + (1 - \alpha_n)x_n,\\
y_n = T_{r_n}(z_n - r_n A z_n + e_n),\\
x_{n+1} = \beta_n x_n + (1 - \beta_n)\big(\gamma_n y_n + (1 - \gamma_n)S y_n\big), \quad n \ge 0,
\end{cases}
\]
where $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are real number sequences in $(0,1)$ and $\{r_n\}$ is a positive real number sequence in $(0, 2\alpha)$. Assume that the control sequences satisfy the following restrictions:
(a) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(b) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(c) $\kappa \le \gamma_n \le a < 1$ and $\lim_{n\to\infty}|\gamma_{n+1} - \gamma_n| = 0$;

(d) $0 < b \le r_n \le c < 2\alpha$ and $\sum_{n=1}^{\infty}|r_n - r_{n-1}| < \infty$;

(e) $\sum_{n=0}^{\infty}\|e_n\| < \infty$,

where a, b, and c are three real numbers. Then $\{x_n\}$ converges strongly to a point $\bar{x}\in F(S)\cap EP(F)$, where $\bar{x} = \mathrm{Proj}_{F(S)\cap EP(F)}f(\bar{x})$.

If S = I, the identity, then we find from Theorem 3.6 the following result on the equilibrium problem immediately.

Corollary 3.7 Let $A: C\to H$ be an α-inverse-strongly monotone mapping and let F be a bifunction from $C\times C$ to $\mathbb{R}$ which satisfies (A1)-(A4). Let $f: C\to C$ be a contractive mapping with the constant $\beta\in[0,1)$. Assume that $EP(F)$ is not empty. Let $\{x_n\}$ be a sequence generated in the following process: $x_0 \in C$ and
\[
\begin{cases}
y_n = \alpha_n f(x_n) + (1 - \alpha_n)x_n,\\
x_{n+1} = \beta_n x_n + (1 - \beta_n)T_{r_n}(y_n - r_n A y_n + e_n), \quad n \ge 0,
\end{cases}
\]
where $\{\alpha_n\}$ and $\{\beta_n\}$ are real number sequences in $(0,1)$ and $\{r_n\}$ is a positive real number sequence in $(0, 2\alpha)$. Assume that the control sequences satisfy the following restrictions:
(a) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(b) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;

(c) $0 < b \le r_n \le c < 2\alpha$ and $\sum_{n=1}^{\infty}|r_n - r_{n-1}| < \infty$;

(d) $\sum_{n=0}^{\infty}\|e_n\| < \infty$,

where b and c are two real numbers. Then $\{x_n\}$ converges strongly to a point $\bar{x}\in EP(F)$, where $\bar{x} = \mathrm{Proj}_{EP(F)}f(\bar{x})$.

Declarations

Acknowledgements

The authors are grateful to the referees for useful suggestions which improved the contents of the paper.

Authors’ Affiliations

(1)
Department of Mathematics, Gyeongsang National University
(2)
Department of Mathematics, Hangzhou Normal University
(3)
Department of Mathematics, Faculty of Science, King Abdulaziz University
(4)
College of Statistics and Mathematics, Yunnan University of Finance and Economics

References

1. Browder FE: Nonlinear operators and nonlinear equations of evolution in Banach spaces. Proc. Symp. Pure Math. 1976, 18: 78–81.
2. Browder FE, Petryshyn WV: Construction of fixed points of nonlinear mappings in Hilbert spaces. J. Math. Anal. Appl. 1967, 20: 197–228. 10.1016/0022-247X(67)90085-6
3. Cho SY, Kang SM: Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2011, 32: 1607–1618.
4. Cho SY: Approximation of solutions of a generalized variational inequality problem based on iterative methods. Commun. Korean Math. Soc. 2010, 25: 207–214. 10.4134/CKMS.2010.25.2.207
5. Cho SY, Qin X, Kang SM: Iterative processes for common fixed points of two different families of mappings with applications. J. Glob. Optim. 2013, 57: 1429–1446. 10.1007/s10898-012-0017-y
6. Cho SY, Kang SM: Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24: 224–228. 10.1016/j.aml.2010.09.008
7. Luo H, Wang Y: Iterative approximation for the common solutions of an infinite variational inequality system for inverse-strongly accretive mappings. J. Math. Comput. Sci. 2012, 2: 1660–1670.
8. Cho SY, Li W, Kang SM: Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013, 2013: Article ID 199.
9. Zegeye H, Shahzad N: Strong convergence theorem for a common point of solution of variational inequality and fixed point problem. Adv. Fixed Point Theory 2012, 2: 374–397.
10. Rodjanadid B, Sompong S: A new iterative method for solving a system of generalized equilibrium problems, generalized mixed equilibrium problems and common fixed point problems in Hilbert spaces. Adv. Fixed Point Theory 2013, 3: 675–705.
11. Qin X, Cho SY, Kang SM: An extragradient-type method for generalized equilibrium problems involving strictly pseudocontractive mappings. J. Glob. Optim. 2011, 49: 679–693. 10.1007/s10898-010-9556-2
12. Qin X, Cho SY, Kang SM: Convergence of an iterative algorithm for systems of variational inequalities and nonexpansive mappings with applications. J. Comput. Appl. Math. 2009, 233: 231–240. 10.1016/j.cam.2009.07.018
13. Qin X, Cho SY, Kang SM: Iterative algorithms for variational inequality and equilibrium problems with applications. J. Glob. Optim. 2010, 48: 423–445. 10.1007/s10898-009-9498-8
14. Browder FE: Existence and approximation of solutions of nonlinear variational inequalities. Proc. Natl. Acad. Sci. USA 1966, 56: 1080–1086. 10.1073/pnas.56.4.1080
15. Rockafellar RT: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1976, 1: 97–116. 10.1287/moor.1.2.97
16. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056
17. Eckstein J: Approximate iterations in Bregman-function-based proximal algorithms. Math. Program. 1998, 83: 113–123.
18. Khan MA, Yannelis NC: Equilibrium Theory in Infinite Dimensional Spaces. Springer, New York; 1991.
19. Combettes PL: The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics. Volume 95. Edited by: Hawkes P. Academic Press, New York; 1996:155–270.
20. Dautray R, Lions JL: Mathematical Analysis and Numerical Methods for Science and Technology. Volume 1. Springer, New York; 1990.
21. Fattorini HO: Infinite-Dimensional Optimization and Control Theory. Cambridge University Press, Cambridge; 1999.
22. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29: 403–409. 10.1137/0329022
23. Qin X, Kang SM, Cho YJ: Approximating zeros of monotone operators by proximal point algorithms. J. Glob. Optim. 2010, 46: 75–87. 10.1007/s10898-009-9410-6
24. Song J, Chen M: A modified Mann iteration for zero points of accretive operators. Fixed Point Theory Appl. 2013, 2013: Article ID 347.
25. Qing Y, Cho SY: Proximal point algorithms for zero points of nonlinear operators. Fixed Point Theory Appl. 2014, 2014: Article ID 42.
26. Qing Y, Cho SY: A regularization algorithm for zero points of accretive operators. Fixed Point Theory Appl. 2013, 2013: Article ID 341.
27. Cho SY, Kang SM: Zero point theorems for m-accretive operators in a Banach space. Fixed Point Theory 2012, 13: 49–58.
28. Wu C, Lv S: Bregman projection methods for zeros of monotone operators. J. Fixed Point Theory 2013, 2013: Article ID 7.
29. Zhang M: Iterative algorithms for common elements in fixed point sets and zero point sets with applications. Fixed Point Theory Appl. 2012, 2012: Article ID 21.
30. Qin X, Su Y: Approximation of a zero point of accretive operator in Banach spaces. J. Math. Anal. Appl. 2007, 329: 415–424. 10.1016/j.jmaa.2006.06.067
31. Kim JK, Anh PN, Nam YM: Strong convergence of an extended extragradient method for equilibrium problems and fixed point problems. J. Korean Math. Soc. 2012, 49: 187–200. 10.4134/JKMS.2012.49.1.187
32. Qin X, Cho SY, Wang L: Iterative algorithms with errors for zero points of m-accretive operators. Fixed Point Theory Appl. 2013, 2013: Article ID 148.
33. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.
34. Zhang M: Strong convergence of a viscosity iterative algorithm in Hilbert spaces. J. Nonlinear Funct. Anal. 2014, 2014: Article ID 1.
35. Qin X, Shang M, Su Y: Strong convergence of a general iterative algorithm for equilibrium problems and variational inequality problems. Math. Comput. Model. 2008, 48: 1033–1046. 10.1016/j.mcm.2007.12.008
36. Cho SY: Strong convergence of an iterative algorithm for sums of two monotone operators. J. Fixed Point Theory 2013, 2013: Article ID 6.
37. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Groningen; 1976.
38. Suzuki T: Strong convergence of Krasnoselskii and Mann’s type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 2005, 305: 227–239. 10.1016/j.jmaa.2004.11.017
39. Xue Z, Zhou H, Cho YJ: Iterative solutions of nonlinear equations for m-accretive operators in Banach spaces. J. Nonlinear Convex Anal. 2000, 1: 313–320.
40. Zhou H: Convergence theorems of fixed points for κ-strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2008, 69: 456–462. 10.1016/j.na.2007.05.032
41. Lions PL, Mercier B: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16: 964–979. 10.1137/0716071
42. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
43. Fan K: A minimax inequality and applications. In Inequalities III. Edited by: Shisha O. Academic Press, New York; 1972:103–113.
44. Takahashi S, Takahashi W, Toyoda M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147: 27–41. 10.1007/s10957-010-9713-2

Copyright

© Cho et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.