# Strong convergence of a proximal-type algorithm for an occasionally pseudomonotone operator in Banach spaces

## Abstract

It is known that the proximal point algorithm converges weakly to a zero of a maximal monotone operator, but it fails to converge strongly. In (Math. Program. 87:189-202, 2000), Solodov and Svaiter introduced a new proximal-type algorithm that generates a strongly convergent sequence and established a convergence property for the algorithm in Hilbert spaces. Further, Kamimura and Takahashi (SIAM J. Optim. 13:938-945, 2003) extended Solodov and Svaiter’s result to more general Banach spaces and obtained strong convergence of a proximal-type algorithm in Banach spaces. In this paper, by introducing the concept of an occasionally pseudomonotone operator, we investigate the strong convergence of the proximal point algorithm in Banach spaces, and so our results extend the results of Kamimura and Takahashi.

MSC:47H05, 47J25.

## 1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$, and let $T:H\to 2^H$ be a maximal monotone operator (or a multifunction) on $H$. We consider the classical problem:

Find $x\in H$ such that

$0\in Tx.$
(1.1)

A wide variety of problems, such as optimization problems and related fields, min-max problems, complementarity problems, variational inequalities, equilibrium problems and fixed point problems, fall within this general framework. For example, if $T$ is the subdifferential $\partial f$ of a proper lower semicontinuous convex function $f:H\to(-\infty,\infty]$, then $T$ is a maximal monotone operator and the inclusion $0\in\partial f(x)$ reduces to $f(x)=\min\{f(z):z\in H\}$. One method of solving $0\in Tx$ is the proximal point algorithm. Let $I$ denote the identity operator on $H$. Rockafellar’s proximal point algorithm generates, for any starting point $x_0=x\in H$, a sequence $\{x_n\}$ in $H$ by the rule

$x_{n+1}=(I+r_nT)^{-1}x_n,\quad\forall n\ge 0,$
(1.2)

where $\{r_n\}$ is a sequence of positive real numbers. Note that (1.2) is equivalent to

$0\in Tx_{n+1}+\frac{1}{r_n}(x_{n+1}-x_n),\quad\forall n\ge 0.$

This algorithm was first introduced by Martinet [1] and studied in a general setting by Rockafellar [2] in the framework of Hilbert spaces. Later, many authors studied the convergence of (1.2) in Hilbert spaces (see Agarwal et al. [3], Brezis and Lions [4], Cho et al. [5], Cholamjiak et al. [6], Güler [7], Lions [8], Passty [9], Qin et al. [10], Song et al. [11], Solodov and Svaiter [12], Wei and Cho [13] and the references therein). Rockafellar [2] proved that, if $T^{-1}0\ne\emptyset$ and $\liminf_{n\to\infty}r_n>0$, then the sequence $\{x_n\}$ generated by (1.2) converges weakly to an element of $T^{-1}0$. Further, Rockafellar [2] posed the open question of whether the sequence $\{x_n\}$ generated by (1.2) converges strongly or not. This question was resolved by Güler [7], who gave an example for which the sequence $\{x_n\}$ generated by (1.2) converges weakly but not strongly.
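To make the iteration (1.2) concrete, here is a minimal numeric sketch (our illustration, not from the paper): we take $T$ to be the monotone linear operator $Tx=Ax$ on $\mathbb{R}^2$ with $A$ positive semidefinite, for which the resolvent $(I+r_nT)^{-1}$ can be computed exactly by solving a 2×2 linear system. All function names here are ours.

```python
# Proximal point iteration x_{n+1} = (I + r_n T)^{-1} x_n for T x = A x on R^2.

def resolvent_step(x, r, a):
    """Solve (I + r*A) y = x exactly for a 2x2 matrix A (the resolvent)."""
    m = [[1 + r * a[0][0], r * a[0][1]],
         [r * a[1][0], 1 + r * a[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [( m[1][1] * x[0] - m[0][1] * x[1]) / det,
            (-m[1][0] * x[0] + m[0][0] * x[1]) / det]

A = [[2.0, 0.0], [0.0, 1.0]]   # T x = A x is monotone since A is PSD
x = [4.0, -3.0]                # starting point x_0
for n in range(60):
    x = resolvent_step(x, 1.0, A)   # r_n = 1 satisfies liminf r_n > 0

print(x)  # approaches the unique zero (0, 0) of T
```

In this finite-dimensional example the convergence is in fact strong; the weak-but-not-strong behavior in Güler’s example requires an infinite-dimensional space.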

On the other hand, about a decade ago, Kamimura and Takahashi [14, 15] and Solodov and Svaiter [16] modified the proximal point algorithm to generate a strongly convergent sequence. In 1999, Solodov and Svaiter [12] introduced the following algorithm $\{x_n\}$:

$$\begin{cases} x_0\in H,\\ 0=v_n+\frac{1}{r_n}(y_n-x_n),\quad v_n\in Ty_n,\\ H_n=\{z\in H:\langle z-y_n,v_n\rangle\le 0\},\\ W_n=\{z\in H:\langle z-x_n,x_0-x_n\rangle\le 0\},\\ x_{n+1}=P_{H_n\cap W_n}x_0,\quad\forall n\ge 0.\end{cases}$$
(1.3)

To explain how the sequence ${ y n }$ is generated, we formally state the above algorithm as follows.

Choose any $x_0\in H$ and $\sigma\in[0,1)$. At iteration $n$, having $x_n$, choose $r_n>0$ and find $(y_n,v_n)$, an inexact solution of $0=v_n+\frac{1}{r_n}(y_n-x_n)$, $v_n\in Ty_n$, with tolerance $\sigma$. Define $H_n$ and $W_n$ as in (1.3). Take $x_{n+1}=P_{H_n\cap W_n}x_0$. Note that, at each iteration, there are two subproblems to be solved: finding an inexact solution of the proximal point subproblem and finding the projection of $x_0$ onto $H_n\cap W_n$, the intersection of two half-spaces. By a classical result of Minty [17], the proximal subproblem always has an exact solution, which is unique; computing only an approximate solution makes this step easier. Hence this part of the method is well defined. Regarding the projection step, it is easy to prove that $H_n\cap W_n$ is never empty, even when the solution set is empty. Therefore, the whole algorithm is well defined in the sense that it generates an infinite sequence $\{x_n\}$ and an associated sequence of pairs $\{(y_n,v_n)\}$.
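The two subproblems above can be sketched in one dimension, where each half-space is a half-line and the projection of $x_0$ onto their intersection is a clamp. The following toy script (our illustration, not the paper’s code) runs the hybrid algorithm (1.3) for $Tx=x$ on $\mathbb{R}$ with the proximal subproblem solved exactly:

```python
# Toy 1-D hybrid projection method: T x = x, exact proximal step, and
# projection of x_0 onto H_n ∩ W_n (an intersection of two half-lines).

import math

def halfline(normal, anchor):
    """{z : normal*(z - anchor) <= 0} represented as an interval (lo, hi)."""
    if normal > 0:
        return (-math.inf, anchor)
    if normal < 0:
        return (anchor, math.inf)
    return (-math.inf, math.inf)

T = lambda x: x
x0, x = 8.0, 8.0
for n in range(30):
    r = 1.0
    y = x / (1.0 + r)          # exact solution of 0 = v + (y - x)/r with v = T y
    v = T(y)
    lo1, hi1 = halfline(v, y)          # H_n = {z : (z - y_n) v_n <= 0}
    lo2, hi2 = halfline(x0 - x, x)     # W_n = {z : (z - x_n)(x_0 - x_n) <= 0}
    lo, hi = max(lo1, lo2), min(hi1, hi2)
    x = min(max(x0, lo), hi)           # x_{n+1} = P_{H_n ∩ W_n} x_0

print(x)  # tends to 0, the unique zero of T
```

Here $x_n=8/2^n$, so the iterates converge (strongly, trivially, in $\mathbb{R}$) to the zero of $T$.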

In 2003, Kamimura and Takahashi [18] extended Solodov and Svaiter’s result to more general Banach spaces, such as the spaces $L_p$ ($1<p<\infty$), by further modifying the proximal point algorithm (1.2) into the following form in a smooth Banach space $E$:

$$\begin{cases} x_0\in E,\\ 0=v_n+\frac{1}{r_n}\big(J_2(y_n)-J_2(x_n)\big),\quad v_n\in Ty_n,\\ H_n=\{z\in E:\langle z-y_n,v_n\rangle\le 0\},\\ W_n=\{z\in E:\langle z-x_n,J_2(x_0)-J_2(x_n)\rangle\le 0\},\\ x_{n+1}=P_{H_n\cap W_n}x_0,\quad\forall n\ge 0,\end{cases}$$
(1.4)

to generate a strongly convergent sequence. They proved that if $T^{-1}0\ne\emptyset$ and $\liminf_{n\to\infty}r_n>0$, then the sequence $\{x_n\}$ generated by (1.4) converges strongly to the point $P_{T^{-1}0}x_0$.

In this paper, by introducing the concept of an occasionally pseudomonotone operator, we investigate the strong convergence of the proximal point algorithm in Banach spaces, and so our results extend the results of Kamimura and Takahashi.

## 2 Preliminaries and definitions

Let $E$ be a real Banach space with norm $\|\cdot\|$, and let $E^*$ denote the dual space of $E$. Let $\langle x,f\rangle$ denote the value of $f\in E^*$ at $x\in E$. Let $\{x_n\}$ be a sequence in $E$. We denote the strong convergence of $\{x_n\}$ to $x\in E$ by $x_n\to x$ and the weak convergence by $x_n\rightharpoonup x$.

Definition 2.1 A multivalued operator $T:E\to 2^{E^*}$ with domain $D(T)=\{z\in E:Tz\ne\emptyset\}$ and range $R(T)=\bigcup\{Tz:z\in D(T)\}$ is said to be monotone if $\langle x_1-x_2,y_1-y_2\rangle\ge 0$ for any $x_i\in D(T)$ and $y_i\in Tx_i$, $i=1,2$. A monotone operator $T$ is said to be maximal if its graph $G(T)=\{(x,y):y\in Tx\}$ is not properly contained in the graph of any other monotone operator.

Definition 2.2 A multivalued operator $T:E\to 2^{E^*}$ with domain $D(T)$ and range $R(T)$ is said to be pseudomonotone (see also Karamardian [19]) if $\langle x_1-x_2,y_2\rangle\ge 0$ implies $\langle x_1-x_2,y_1\rangle\ge 0$ for any $x_i\in D(T)$ and $y_i\in Tx_i$, $i=1,2$.

It is obvious that each monotone operator is pseudomonotone, but the converse is not true.

We now introduce the concept of an occasionally pseudomonotone operator as follows.

Definition 2.3 A multivalued operator $T:E\to 2^{E^*}$ is said to be occasionally pseudomonotone if, for any $x_i\in D(T)$, there exist $y_i\in Tx_i$, $i=1,2$, such that $\langle x_1-x_2,y_2\rangle\ge 0$ implies $\langle x_1-x_2,y_1\rangle\ge 0$.

It is clear that every monotone operator is pseudomonotone and every pseudomonotone operator is occasionally pseudomonotone, but the converse implications need not hold, as the following examples show.

Example 2.1 Let $E=\mathbb{R}^3$ and let $T:E\to 2^{E^*}$ be the multivalued operator defined by

$Tx=\{y=A_rx:r\in\mathbb{R}\},\quad\forall x\in E,$

where

$$A_r=\begin{pmatrix}0&0&-1\\0&-r&0\\1&0&0\end{pmatrix}.$$

Then, for any $x_1=(x_1^{(1)},x_2^{(1)},x_3^{(1)})^T$ and $x_2=(x_1^{(2)},x_2^{(2)},x_3^{(2)})^T$ in $\mathbb{R}^3$, if $y_1=A_rx_1$ and $y_2=A_rx_2$, then we have

$\langle x_1-x_2,y_1-y_2\rangle=-r\big(x_2^{(1)}-x_2^{(2)}\big)^2.$

Thus, if $r\le 0$, then $T$ is monotone. However, if $r>0$, then $T$ is neither monotone nor pseudomonotone. Indeed, for $x_1=(0,1,0)^T$ and $x_2=(0,0,0)^T$, we have $y_1=A_rx_1=(0,-r,0)^T$ and $y_2=(0,0,0)^T$, so $\langle x_1-x_2,y_2\rangle=0\ge 0$, but $\langle x_1-x_2,y_1\rangle=-r<0$.

Further, we see that $T$ is occasionally pseudomonotone. Indeed, for any $x_1=(x_1^{(1)},x_2^{(1)},x_3^{(1)})^T$ and $x_2=(x_1^{(2)},x_2^{(2)},x_3^{(2)})^T$ in $\mathbb{R}^3$, choose $y_i=A_0x_i$, $i=1,2$. Then $\langle x_1-x_2,y_1-y_2\rangle=0$, so

$\langle x_1-x_2,y_1\rangle=\langle x_1-x_2,y_2\rangle,$

and hence $\langle x_1-x_2,y_2\rangle\ge 0$ implies $\langle x_1-x_2,y_1\rangle\ge 0$.
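The dichotomy in Example 2.1 can be checked numerically. The short script below (our sketch; the helper names `apply_Ar` and `dot` are ours) verifies that for $r>0$ the selection $y_i=A_rx_i$ violates the pseudomonotone implication, while the selection $y_i=A_0x_i$ makes both sides of the implication equal:

```python
# Numerical check of Example 2.1 with r = 2 > 0.

def apply_Ar(r, x):
    # A_r = [[0, 0, -1], [0, -r, 0], [1, 0, 0]] acting on a 3-vector
    return (-x[2], -r * x[1], x[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

r = 2.0
x1, x2 = (0.0, 1.0, 0.0), (0.0, 0.0, 0.0)
d = tuple(a - b for a, b in zip(x1, x2))

# Selection y_i = A_r x_i: premise <x1-x2, y2> >= 0 holds, conclusion fails.
y1, y2 = apply_Ar(r, x1), apply_Ar(r, x2)
print(dot(d, y2), dot(d, y1))    # 0.0 and -2.0

# Selection y_i = A_0 x_i: <x1-x2, y1> = <x1-x2, y2>, so the implication holds.
z1, z2 = apply_Ar(0.0, x1), apply_Ar(0.0, x2)
print(dot(d, z1) == dot(d, z2))  # True
```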

Example 2.2 The rotation operator on $\mathbb{R}^2$ given by

$$A=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$$

is monotone and hence pseudomonotone. Thus, it follows that $A$ is also occasionally pseudomonotone.

Maximality of pseudomonotone and of occasionally pseudomonotone operators is defined analogously to the maximality of a monotone operator. We denote by $L[x_1,x_2]$ the ray passing through $x_1$ and $x_2$.

A Banach space $E$ is said to be strictly convex if $\|\frac{x+y}{2}\|<1$ for all $x,y\in E$ with $\|x\|=\|y\|=1$ and $x\ne y$. It is said to be uniformly convex if $\lim_{n\to\infty}\|x_n-y_n\|=0$ for any two sequences $\{x_n\}$, $\{y_n\}$ in $E$ such that $\|x_n\|=\|y_n\|=1$ and $\lim_{n\to\infty}\|\frac{x_n+y_n}{2}\|=1$.

It is known that a uniformly convex Banach space is reflexive and strictly convex. The spaces $\ell_1$ and $L_1$ are neither reflexive nor strictly convex. Note also that a reflexive Banach space is not necessarily uniformly convex. For example, consider a finite-dimensional Banach space in which the surface of the unit ball has a ‘flat’ part. Such a space is reflexive because it is finite dimensional, but the ‘flat’ portion of the surface of the ball prevents it from being uniformly convex. It is also well known that a Banach space $E$ is reflexive if and only if every bounded sequence of elements of $E$ contains a weakly convergent subsequence.

Let $U={x∈E:∥x∥=1}$. A Banach space E is said to be smooth if the limit

$\lim_{t\to 0}\frac{\|x+ty\|-\|x\|}{t}$
(2.1)

exists for all $x,y∈U$. It is also said to be uniformly smooth if the limit (2.1) is attained uniformly for $x,y∈U$.

It is well known that the spaces $\ell_p$, $L_p$ and the Sobolev spaces $W_p^m$ ($1<p<\infty$, $m$ a positive integer) are uniformly convex and uniformly smooth Banach spaces.

For any $p\in(1,\infty)$, the mapping $J_p:E\to 2^{E^*}$ defined by

$J_px=\{f\in E^*:\langle x,f\rangle=\|f\|\cdot\|x\|,\ \|f\|=\|x\|^{p-1}\},\quad\forall x\in E,$

is called the duality mapping with the gauge function $\varphi(t)=t^{p-1}$. In particular, for $p=2$, the duality mapping $J_2$ with the gauge function $\varphi(t)=t$ is called the normalized duality mapping.
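As a concrete instance of this definition (our illustration): in $E=(\mathbb{R}^n,\|\cdot\|_p)$ the duality mapping with gauge $\varphi(t)=t^{p-1}$ is single-valued and is given coordinatewise by $(J_px)_i=\operatorname{sign}(x_i)|x_i|^{p-1}$, since this choice satisfies both defining identities. The helper names below are ours:

```python
# Verify the two defining identities of J_p in (R^n, ||.||_p).

def p_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def J_p(x, p):
    """Coordinatewise candidate for the duality mapping in (R^n, ||.||_p)."""
    return [abs(t) ** (p - 1) * (1 if t > 0 else -1 if t < 0 else 0) for t in x]

p = 3.0
q = p / (p - 1)                 # conjugate exponent; ||f|| is the l_q norm
x = [1.0, -2.0, 0.5]
f = J_p(x, p)

pairing = sum(a * b for a, b in zip(x, f))           # <x, f>
print(abs(pairing - p_norm(x, p) * p_norm(f, q)))    # ~0: <x,f> = ||f||.||x||
print(abs(p_norm(f, q) - p_norm(x, p) ** (p - 1)))   # ~0: ||f|| = ||x||^{p-1}
```

Both residuals are zero up to floating-point error, confirming that this formula lands in $J_px$ as defined above.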

The following proposition gives some basic properties of the duality mapping.

Proposition 2.1 Let $E$ be a real Banach space. For $1<p<\infty$, the duality mapping $J_p:E\to 2^{E^*}$ has the following properties:

1. $J_p(x)\ne\emptyset$ for all $x\in E$ and $D(J_p)=E$, where $D(J_p)$ denotes the domain of $J_p$;

2. $J_p(x)=\|x\|^{p-2}J_2x$ for all $x\in E$ with $x\ne 0$;

3. $J_p(\alpha x)=\alpha^{p-1}J_px$ for all $\alpha\in[0,\infty)$;

4. $J_p(-x)=-J_p(x)$;

5. $\|x\|^p-\|y\|^p\ge p\langle x-y,j\rangle$ for all $x,y\in E$ and $j\in J_py$;

6. if $E$ is smooth, then $J_p$ is norm-to-weak continuous;

7. if $E$ is uniformly smooth, then $J_p$ is uniformly norm-to-norm continuous on each bounded subset of $E$;

8. $J_p$ is bounded, i.e., for any bounded subset $A\subset E$, $J_p(A)$ is a bounded subset of $E^*$;

9. $J_p$ can be equivalently defined as the subdifferential of the functional $\psi(x)=p^{-1}\|x\|^p$ (Asplund [20]), i.e.,

   $J_p(x)=\partial\psi(x)=\{f\in E^*:\psi(y)-\psi(x)\ge\langle y-x,f\rangle,\ \forall y\in E\};$

10. $E$ is a uniformly smooth Banach space (equivalently, $E^*$ is a uniformly convex Banach space) if and only if $J_p$ is single-valued and uniformly continuous on any bounded subset of $E$ (see, for instance, Xu and Roach [21] and Browder [22]).

Proposition 2.2 Let $E$ be a real Banach space, and let $J_p:E\to 2^{E^*}$, $1<p<\infty$, be the duality mapping. Then, for any $x,y\in E$,

$\|x+y\|^p\le\|x\|^p+p\langle y,j_p\rangle,\quad\forall j_p\in J_p(x+y).$

Proof It is a straightforward consequence of assertion (5) of Proposition 2.1 applied to $x$ and $x+y$. Alternatively, from Proposition 2.1(9), it follows that $J_p(x)=\partial\psi(x)$ (the subdifferential of the functional $\psi$), where $\psi(x)=p^{-1}\|x\|^p$. From the definition of the subdifferential of $\psi$, we have

$\psi(x)-\psi(x+y)\ge\langle x-(x+y),j_p\rangle,\quad\forall j_p\in J_p(x+y).$

Substituting $\psi(x)=p^{-1}\|x\|^p$, we obtain

$\|x+y\|^p\le\|x\|^p+p\langle y,j_p\rangle,\quad\forall j_p\in J_p(x+y).$

This completes the proof. □

Remark 2.1 If $E$ is a uniformly smooth Banach space, it follows from Proposition 2.1(10) that $J_p$ ($1<p<\infty$) is a single-valued mapping. We now define the functions $\Psi,\phi:E\times E\to\mathbb{R}$ by

$\Psi(x,y)=\|x\|^p-p\langle x-y,J_p(y)\rangle-\|y\|^p,\quad\forall x,y\in E,$

where $\phi$ is the support function satisfying the following condition:

$\Psi(x,y)=\phi(x,y)+\|x-y\|^p,\quad\forall x,y\in E.$
(2.2)

It is obvious from the definition of $\Psi$ and Proposition 2.1(5) that

$\Psi(x,y)\ge 0,\quad\forall x,y\in E.$
(2.3)

Also, we see that

$$\begin{aligned}\Psi(x,y)&=\|x\|^p-p\langle x,J_p(y)\rangle+(p-1)\|y\|^p\\&\ge\|x\|^p-p\|x\|\|J_p(y)\|+(p-1)\|y\|^p\\&=\|x\|^p-p\|x\|\|y\|^{p-1}+(p-1)\|y\|^p.\end{aligned}$$
(2.4)

In particular, for $p=2$, we have $\Psi(x,y)\ge(\|x\|-\|y\|)^2$.
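For $p=2$ in a Hilbert space, $J_2$ is the identity and $\Psi(x,y)=\|x\|^2-2\langle x-y,y\rangle-\|y\|^2$ collapses to $\|x-y\|^2$, which makes the lower bound $(\|x\|-\|y\|)^2$ transparent. A small numeric check in $\mathbb{R}^2$ (our sketch, with our helper names):

```python
# Check that Psi(x, y) = ||x - y||^2 when p = 2 and J_2 is the identity.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def psi(x, y):
    # Psi(x, y) = ||x||^2 - 2 <x - y, y> - ||y||^2 with J_2 = identity on R^n
    return dot(x, x) - 2 * dot([a - b for a, b in zip(x, y)], y) - dot(y, y)

x, y = [3.0, -1.0], [1.0, 2.0]
diff = [a - b for a, b in zip(x, y)]
d2 = dot(diff, diff)             # ||x - y||^2
print(psi(x, y), d2)             # both 13.0
```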

Further, we can show the following two propositions.

Proposition 2.3 Let $E$ be a smooth Banach space, and let $\{y_n\}$, $\{z_n\}$ be two sequences in $E$. If $\Psi(y_n,z_n)\to 0$, then $y_n-z_n\to 0$.

Proof It follows from $\Psi(y_n,z_n)\to 0$, because of (2.2) and (2.3), that

$\phi(y_n,z_n)\to 0,\qquad \|y_n-z_n\|\to 0.$

In particular, $|\,\|y_n\|-\|z_n\|\,|\le\|y_n-z_n\|\to 0$, so if one of the sequences $\{y_n\}$, $\{z_n\}$ is bounded, then so is the other, and $y_n-z_n\to 0$. This completes the proof. □

Proposition 2.4 Let $E$ be a reflexive, strictly convex and smooth Banach space, let $C$ be a nonempty closed convex subset of $E$ and let $x\in E$. Then there exists a unique element $x_0\in C$ such that

$\Psi(x_0,x)=\inf\{\Psi(z,x):z\in C\}.$
(2.5)

Proof Since $E$ is reflexive and $\|z_n\|\to\infty$ implies $\Psi(z_n,x)\to\infty$, there exists $x_0\in C$ such that $\Psi(x_0,x)=\inf\{\Psi(z,x):z\in C\}$. Since $E$ is strictly convex, $\|\cdot\|^p$ is a strictly convex function, that is,

$\|\lambda x_1+(1-\lambda)x_2\|^p<\lambda\|x_1\|^p+(1-\lambda)\|x_2\|^p$

for all $x_1,x_2\in E$ with $x_1\ne x_2$, $1<p<\infty$ and $\lambda\in(0,1)$. Then the function $\Psi(\cdot,x)$ is also strictly convex, and therefore $x_0\in C$ is unique. This completes the proof. □

For each nonempty closed convex subset $C$ of a reflexive, strictly convex and smooth Banach space $E$, we define the mapping $R_C$ of $E$ onto $C$ by $R_Cx=x_0$, where $x_0$ is given by (2.5). For the case $p=2$, it is easy to see that this mapping coincides with the metric projection in the setting of Hilbert spaces. In our discussion, instead of the metric projection, we make use of the mapping $R_C$. Finally, we prove two results concerning Proposition 2.4 and the mapping $R_C$. The first one is the usual analogue of a characterization of the metric projection in a Hilbert space.
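Since $\Psi(z,x)=\|z-x\|^2$ for $p=2$ in a Hilbert space, $R_C$ then minimizes the distance to $x$ over $C$, i.e., it is the metric projection $P_C$. A one-line sketch on $\mathbb{R}$ with $C=[lo,hi]$ (our illustration; the minimizer of $(z-x)^2$ over an interval is the clamp of $x$):

```python
# R_C for p = 2 on the real line with C = [lo, hi]: the metric projection.

def R_C(x, lo, hi):
    """Unique minimizer of Psi(z, x) = (z - x)^2 over [lo, hi]: clamp x."""
    return min(max(x, lo), hi)

print(R_C(5.0, -1.0, 2.0))  # 2.0: outside points project to the boundary
print(R_C(0.5, -1.0, 2.0))  # 0.5: interior points are fixed
```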

Proposition 2.5 Let $E$ be a smooth Banach space, let $C$ be a convex subset of $E$, let $x\in E$ and $x_0\in C$. Then

$\Psi(x_0,x)=\inf\{\Psi(z,x):z\in C\}$
(2.6)

if and only if

$\langle z-x_0,J_p(x_0)-J_p(x)\rangle\ge 0,\quad\forall z\in C.$
(2.7)

Proof First, we show that (2.6) implies (2.7). Let $z\in C$ and $\lambda\in(0,1)$. It follows from $\Psi(x_0,x)\le\Psi((1-\lambda)x_0+\lambda z,x)$ that

$$\begin{aligned}0&\le\|(1-\lambda)x_0+\lambda z\|^p-p\langle(1-\lambda)x_0+\lambda z-x,J_p(x)\rangle-\|x\|^p-\|x_0\|^p+p\langle x_0-x,J_p(x)\rangle+\|x\|^p\\&=\|(1-\lambda)x_0+\lambda z\|^p-\|x_0\|^p-p\lambda\langle z-x_0,J_p(x)\rangle\\&\le p\lambda\langle z-x_0,J_p((1-\lambda)x_0+\lambda z)\rangle-p\lambda\langle z-x_0,J_p(x)\rangle\\&=p\lambda\langle z-x_0,J_p((1-\lambda)x_0+\lambda z)-J_p(x)\rangle,\end{aligned}$$

which implies

$\langle z-x_0,J_p((1-\lambda)x_0+\lambda z)-J_p(x)\rangle\ge 0.$

Taking $\lambda\downarrow 0$ and using the norm-to-weak continuity of $J_p$, we obtain

$\langle z-x_0,J_p(x_0)-J_p(x)\rangle\ge 0,$

which shows (2.7).

Next, we show that (2.7) implies (2.6). For any $z\in C$, we have

$$\begin{aligned}\Psi(z,x)-\Psi(x_0,x)&=\|z\|^p-p\langle z-x,J_p(x)\rangle-\|x\|^p-\|x_0\|^p+p\langle x_0-x,J_p(x)\rangle+\|x\|^p\\&=\|z\|^p-\|x_0\|^p-p\langle z-x_0,J_p(x)\rangle\\&\ge p\langle z-x_0,J_p(x_0)\rangle-p\langle z-x_0,J_p(x)\rangle\\&=p\langle z-x_0,J_p(x_0)-J_p(x)\rangle\ge 0,\end{aligned}$$

which proves (2.6). This completes the proof. □

Proposition 2.6 Let $E$ be a reflexive, strictly convex and smooth Banach space, let $C$ be a nonempty closed convex subset of $E$, let $x\in E$ and let $R_Cx\in C$ satisfy

$\|y-x\|=\|y-R_Cx\|+\|R_Cx-x\|,\quad\forall y\in L[x,R_Cx]\cap C.$

Then we have

$\Psi(y,R_Cx)+\Psi(R_Cx,x)\le\Psi(y,x),\quad\forall y\in L[x,R_Cx]\cap C.$

Proof A direct computation with the definition of $\Psi$ gives, for all $y\in L[x,R_Cx]\cap C$,

$\Psi(y,x)-\Psi(y,R_Cx)-\Psi(R_Cx,x)=p\langle y-R_Cx,J_p(R_Cx)-J_p(x)\rangle,$

and the right-hand side is nonnegative by Proposition 2.5. This completes the proof. □

## 3 Main results

Throughout this section, unless otherwise stated, we assume that $T:E\to 2^{E^*}$ is a maximal occasionally pseudomonotone operator. In this section, we study the following algorithm $\{x_n\}$ in a smooth Banach space $E$, which is an extension of (1.4):

$$\begin{cases} x_0\in E,\\ 0=v_n+\frac{1}{r_n}\big(J_p(y_n)-J_p(x_n)\big),\quad v_n\in Ty_n,\\ H_n=\{z\in E:\langle z-y_n,v_n\rangle\le 0\},\\ W_n=\{z\in E:\langle z-x_n,J_p(x_0)-J_p(x_n)\rangle\le 0\},\\ x_{n+1}=R_{H_n\cap W_n}x_0,\quad\forall n\ge 0,\end{cases}$$
(3.1)

where ${ r n }$ is a sequence of positive real numbers.

First, we investigate the condition under which the algorithm (3.1) is well defined. Rockafellar proved the following theorem.

Theorem 3.1 Let $E$ be a reflexive, strictly convex and smooth Banach space, and let $T:E\to 2^{E^*}$ be a monotone operator. Then $T$ is maximal if and only if $R(J_p+rT)=E^*$ for all $r>0$.

By the appropriate modification of arguments in Theorem 3.1, we can prove the following.

Theorem 3.2 Let $E$ be a reflexive, strictly convex and smooth Banach space, and let $T:E\to 2^{E^*}$ be an occasionally pseudomonotone operator. Then $T$ is maximal if and only if $R(J_p+rT)=E^*$ for all $r>0$.

Using Theorem 3.2, we can show the following result.

Proposition 3.3 Let $E$ be a reflexive, strictly convex and smooth Banach space. If $T^{-1}0\ne\emptyset$, then the sequence $\{x_n\}$ generated by (3.1) is well defined.

Proof From the definition of the sequence $\{x_n\}$, it is obvious that both $H_n$ and $W_n$ are closed convex sets. Let $w\in T^{-1}0$. From Theorem 3.2, there exists $(y_0,v_0)\in E\times E^*$ such that

$0=v_0+\frac{1}{r_0}\big(J_p(y_0)-J_p(x_0)\big),\quad v_0\in Ty_0.$

Since $T$ is occasionally pseudomonotone and $\langle y_0-w,0\rangle=0\ge 0$, from $Tw\ni 0$ it follows that

$\langle y_0-w,v_0\rangle\ge 0$

for some $v_0\in Ty_0$, and hence $w\in H_0$. On the other hand, it is clear that $w\in W_0=E$. Then $w\in H_0\cap W_0$, and so $x_1=R_{H_0\cap W_0}x_0$ is well defined. Suppose that $x_n=R_{H_{n-1}\cap W_{n-1}}x_0$ is well defined and $w\in H_{n-1}\cap W_{n-1}$ for some $n\ge 1$. Again, by Theorem 3.2, we obtain $(y_n,v_n)\in E\times E^*$ such that

$0=v_n+\frac{1}{r_n}\big(J_p(y_n)-J_p(x_n)\big),\quad v_n\in Ty_n.$

Then, since $T$ is occasionally pseudomonotone and $\langle y_n-w,0\rangle=0\ge 0$, from $Tw\ni 0$ it follows that

$\langle y_n-w,v_n\rangle\ge 0$

for some $v_n\in Ty_n$, and so $w\in H_n$. It follows from Proposition 2.5 that

$\langle w-x_n,J_p(x_0)-J_p(x_n)\rangle=\langle w-R_{H_{n-1}\cap W_{n-1}}x_0,J_p(x_0)-J_p(R_{H_{n-1}\cap W_{n-1}}x_0)\rangle\le 0,$

which implies $w\in W_n$. Therefore, $w\in H_n\cap W_n$, and hence $x_{n+1}=R_{H_n\cap W_n}x_0$ is well defined. By induction, the sequence $\{x_n\}$ generated by (3.1) is well defined for each $n\ge 0$. This completes the proof. □

Remark 3.1 From the above proof, we obtain

$T^{-1}0\subset H_n\cap W_n,\quad\forall n\ge 0.$

Now, we are ready to prove our main theorem.

Theorem 3.4 Let $E$ be a reflexive, strictly convex and uniformly smooth Banach space. If $T^{-1}0\ne\emptyset$, $\phi$ satisfies condition (2.2) and $\{r_n\}\subset(0,\infty)$ satisfies $\liminf_{n\to\infty}r_n>0$, then the sequence $\{x_n\}$ generated by (3.1) converges strongly to $R_{T^{-1}0}x_0$.

Proof It follows from the definition of $W_{n+1}$ and Proposition 2.5 that $x_{n+1}=R_{W_{n+1}}x_0$. Further, from $x_n\in L[x_0,R_{W_{n+1}}x_0]\cap W_{n+1}$ and Proposition 2.6, we have

$\Psi(x_n,R_{W_{n+1}}x_0)+\Psi(R_{W_{n+1}}x_0,x_0)\le\Psi(x_n,x_0)$

and hence

$\Psi(x_n,x_{n+1})+\Psi(x_{n+1},x_0)\le\Psi(x_n,x_0).$
(3.2)

Since the sequence $\{\Psi(x_n,x_0)\}$ is decreasing and bounded below by 0, $\lim_{n\to\infty}\Psi(x_n,x_0)$ exists; in particular, $\{\Psi(x_n,x_0)\}$ is bounded. Then by (2.4), $\{x_n\}$ is also bounded. This implies that there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that $x_{n_i}\rightharpoonup w$ for some $w\in E$.

Now, we show that $w\in T^{-1}0$. It follows from (3.2) that $\Psi(x_n,x_{n+1})\to 0$. On the other hand, since $R_{H_n}x_n\in H_n$ and $0=v_n+\frac{1}{r_n}(J_p(y_n)-J_p(x_n))$, it follows that

$\langle y_n-R_{H_n}x_n,J_p(x_n)-J_p(y_n)\rangle\ge 0,$

and so $\Psi(R_{H_n}x_n,x_n)\ge\Psi(y_n,x_n)$. Further, since $x_{n+1}\in H_n$, we have

$\Psi(x_{n+1},x_n)\ge\Psi(R_{H_n}x_n,x_n),$

which yields

$\Psi(x_{n+1},x_n)\ge\Psi(R_{H_n}x_n,x_n)\ge\Psi(y_n,x_n).$

Then it follows from $\Psi(x_n,x_{n+1})\to 0$ that $\Psi(y_n,x_n)\to 0$. Consequently, by Proposition 2.3, we have $y_n-x_n\to 0$, which implies $y_{n_i}\rightharpoonup w$. Moreover, since $J_p$ is uniformly norm-to-norm continuous on bounded subsets and $\liminf_{n\to\infty}r_n>0$, we obtain

$v_n=-\frac{1}{r_n}\big(J_p(y_n)-J_p(x_n)\big)\to 0.$

It follows from $v_n\in Ty_n$ with $v_n\to 0$ and $y_{n_i}\rightharpoonup w$ that

$\lim_{i\to\infty}\langle z-y_{n_i},v_{n_i}\rangle=\langle z-w,0\rangle=0,\quad\forall z\in D(T).$

Then, since $T$ is occasionally pseudomonotone, it follows that $\langle z-w,z'\rangle\ge 0$ for some $z'\in Tz$. Therefore, from the maximality of $T$, we obtain $w\in T^{-1}0$. Let $w^*=R_{T^{-1}0}x_0$. Now, from $x_{n+1}=R_{H_n\cap W_n}x_0$ and $w^*\in T^{-1}0\subset H_n\cap W_n$, we have

$\Psi(x_{n+1},x_0)\le\Psi(w^*,x_0).$

Then, for all $n\ge 0$, a direct computation with the definition of $\Psi$ gives

$\Psi(x_n,w^*)=\Psi(x_n,x_0)+\Psi(x_0,w^*)-p\langle x_n-x_0,J_p(w^*)-J_p(x_0)\rangle\le\Psi(w^*,x_0)+\Psi(x_0,w^*)-p\langle x_n-x_0,J_p(w^*)-J_p(x_0)\rangle,$

which, together with $x_{n_i}\rightharpoonup w$ and the identity $\Psi(w^*,x_0)+\Psi(x_0,w^*)=p\langle w^*-x_0,J_p(w^*)-J_p(x_0)\rangle$, yields

$\limsup_{i\to\infty}\Psi(x_{n_i},w^*)\le-p\langle w-w^*,J_p(w^*)-J_p(x_0)\rangle.$

Since $w\in T^{-1}0$ and $w^*=R_{T^{-1}0}x_0$, Proposition 2.5 gives $\langle w-w^*,J_p(w^*)-J_p(x_0)\rangle\ge 0$. Then we obtain $\limsup_{i\to\infty}\Psi(x_{n_i},w^*)\le 0$, and hence $\Psi(x_{n_i},w^*)\to 0$. It follows from Proposition 2.3 that $x_{n_i}\to w^*$. Thus every weakly convergent subsequence of the bounded sequence $\{x_n\}$ converges strongly to $w^*$, and therefore the whole sequence $\{x_n\}$ converges strongly to $w^*=R_{T^{-1}0}x_0$. This completes the proof. □

## 4 An application

Let $f:E→(−∞,∞]$ be a proper convex lower semicontinuous function. Then the subdifferential ∂f of f is defined by

$\partial f(x)=\{v\in E^*:f(y)-f(x)\ge\langle y-x,v\rangle,\ \forall y\in E\}.$

Using Theorem 3.4, we consider the problem of finding a minimizer of the function f.

Theorem 4.1 Let $E$ be a reflexive, strictly convex and uniformly smooth Banach space, and let $f:E\to(-\infty,\infty]$ be a proper convex lower semicontinuous function. Assume that $\{r_n\}\subset(0,\infty)$ satisfies $\liminf_{n\to\infty}r_n>0$ and that $\{x_n\}$ is the sequence generated by

$$\begin{cases} x_0\in E,\\ y_n=\operatorname{argmin}_{z\in E}\big\{f(z)+\frac{1}{pr_n}\|z\|^p-\frac{1}{r_n}\langle z,J_p(x_n)\rangle\big\},\\ 0=v_n+\frac{1}{r_n}\big(J_p(y_n)-J_p(x_n)\big),\quad v_n\in\partial f(y_n),\\ H_n=\{z\in E:\langle z-y_n,v_n\rangle\le 0\},\\ W_n=\{z\in E:\langle z-x_n,J_p(x_0)-J_p(x_n)\rangle\le 0\},\\ x_{n+1}=R_{H_n\cap W_n}x_0,\quad\forall n\ge 0.\end{cases}$$
(4.1)

If $(\partial f)^{-1}0\ne\emptyset$, then the sequence $\{x_n\}$ generated by (4.1) converges strongly to a minimizer of $f$.

Proof Since $f:E\to(-\infty,\infty]$ is a proper convex lower semicontinuous function, by Rockafellar [23], the subdifferential $\partial f$ of $f$ is a maximal monotone operator, and so it is also a maximal occasionally pseudomonotone operator. We also know that

$y_n=\operatorname{argmin}_{z\in E}\Big\{f(z)+\frac{1}{pr_n}\|z\|^p-\frac{1}{r_n}\langle z,J_p(x_n)\rangle\Big\}$

is equivalent to

$0\in\partial f(y_n)+\frac{1}{r_n}J_p(y_n)-\frac{1}{r_n}J_p(x_n),$

that is,

$0\in\partial f(y_n)+\frac{1}{r_n}\big(J_p(y_n)-J_p(x_n)\big).$

Thus, there exists $v_n\in\partial f(y_n)$ such that $0=v_n+\frac{1}{r_n}(J_p(y_n)-J_p(x_n))$. Therefore, using Theorem 3.4, we get the conclusion. This completes the proof. □
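For $p=2$ in a Hilbert space, the argmin subproblem in (4.1) reduces to the classical proximal map of $f$, since $f(z)+\frac{1}{2r_n}\|z\|^2-\frac{1}{r_n}\langle z,x_n\rangle$ equals $f(z)+\frac{1}{2r_n}\|z-x_n\|^2$ up to a constant. A minimal sketch (our example, not the paper's code) with $f(z)=|z|$ on $\mathbb{R}$, whose proximal map is soft thresholding:

```python
# Subproblem of (4.1) for p = 2, f(z) = |z| on R: soft thresholding, and a
# check that v_n = (x_n - y_n)/r_n is a subgradient of f at y_n.

def prox_abs(x, r):
    """argmin_z { |z| + (1/(2r)) (z - x)^2 }: soft threshold of x by r."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

x_n, r_n = 2.0, 0.5
y_n = prox_abs(x_n, r_n)      # 1.5
v_n = (x_n - y_n) / r_n       # 1.0, and indeed ∂|y_n| = {1} since y_n > 0
print(y_n, v_n)
```

So $0=v_n+\frac{1}{r_n}(y_n-x_n)$ with $v_n\in\partial f(y_n)$ holds exactly, as the proof requires.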

## 5 Concluding remarks

We presented a modified proximal-type algorithm, whose rate of convergence varies with the choice of $p$ ($1<p<\infty$), for an occasionally pseudomonotone operator, which is a generalization of a monotone operator, thereby extending Kamimura and Takahashi’s result to more general Banach spaces that are not necessarily uniformly convex, such as locally uniformly convex Banach spaces. As an application, we considered the problem of finding a minimizer of a convex function in a more general Banach space setting than that considered by Kamimura and Takahashi.

## References

1. Martinet B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Autom. Inform. Rech. Opér. 1970, 4: 154–159.

2. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056

3. Agarwal RP, Zhou HY, Cho YJ, Kang SM: Zeros and mappings theorems for perturbations of m-accretive operators in Banach spaces. Comput. Math. Appl. 2005, 49: 147–155. 10.1016/j.camwa.2005.01.012

4. Brezis H, Lions PL: Produits infinis de résolvantes. Isr. J. Math. 1978, 29: 329–345. 10.1007/BF02761171

5. Cho YJ, Kang SM, Zhou H: Approximate proximal point algorithms for finding zeroes of maximal monotone operators in Hilbert spaces. J. Inequal. Appl. 2008, 2008: Article ID 598191

6. Cholamjiak P, Cho YJ, Suantai S: Composite iterative schemes for maximal monotone operators in reflexive Banach spaces. Fixed Point Theory Appl. 2011, 2011: Article ID 7

7. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022

8. Lions PL: Une méthode itérative de résolution d’une inéquation variationnelle. Isr. J. Math. 1978, 31: 204–208. 10.1007/BF02760552

9. Passty GB: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72: 383–390. 10.1016/0022-247X(79)90234-8

10. Qin X, Cho YJ, Kang SM: Approximating zeros of monotone operators by proximal point algorithms. J. Glob. Optim. 2010, 46: 75–87. 10.1007/s10898-009-9410-6

11. Song Y, Kang JI, Cho YJ: On iteration methods for zeros of accretive operators in Banach spaces. Appl. Math. Comput. 2010, 216: 1007–1017. 10.1016/j.amc.2010.01.124

12. Solodov MV, Svaiter BF: A hybrid projection proximal point algorithm. J. Convex Anal. 1999, 6: 59–70.

13. Wei L, Cho YJ: Iterative schemes for zero points of maximal monotone operators and fixed points of nonexpansive mappings and their applications. Fixed Point Theory Appl. 2008, 2008: Article ID 168468

14. Kamimura S, Takahashi W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106: 226–240. 10.1006/jath.2000.3493

15. Kamimura S, Takahashi W: Weak and strong convergence of solutions to accretive operator inclusions and applications. Set-Valued Anal. 2000, 8: 361–374. 10.1023/A:1026592623460

16. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.

17. Minty GJ: Monotone (nonlinear) operators in Hilbert space. Duke Math. J. 1962, 29: 341–346. 10.1215/S0012-7094-62-02933-2

18. Kamimura S, Takahashi W: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 2003, 13: 938–945.

19. Karamardian S: Complementarity problems over cones with monotone or pseudomonotone maps. J. Optim. Theory Appl. 1976, 18: 445–454. 10.1007/BF00932654

20. Asplund E: Positivity of duality mappings. Bull. Am. Math. Soc. 1967, 73: 200–203. 10.1090/S0002-9904-1967-11678-1

21. Xu ZB, Roach GF: Characteristic inequalities in uniformly convex and uniformly smooth Banach spaces. J. Math. Anal. Appl. 1991, 157: 189–210. 10.1016/0022-247X(91)90144-O

22. Browder FE: Nonlinear operators and nonlinear equations of evolution in Banach spaces. Proc. Sympos. Pure Math. 18, 1976.

23. Rockafellar RT: Characterization of the subdifferentials of convex functions. Pac. J. Math. 1966, 17: 497–510. 10.2140/pjm.1966.17.497

## Acknowledgements

The second author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant number: 2012-0008170).

## Author information


### Corresponding author

Correspondence to Yeol Je Cho.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.

## Rights and permissions


Pathak, H.K., Cho, Y.J. Strong convergence of a proximal-type algorithm for an occasionally pseudomonotone operator in Banach spaces. Fixed Point Theory Appl 2012, 190 (2012). https://doi.org/10.1186/1687-1812-2012-190


### Keywords

• proximal point algorithm
• monotone operator
• maximal monotone operator
• pseudomonotone operator
• occasionally pseudomonotone operator
• maximal pseudomonotone operator
• maximal occasionally pseudomonotone operator
• Banach space
• strong convergence 