Approximate solutions to variational inequality over the fixed point set of a strongly nonexpansive mapping


Abstract

Variational inequality problems over fixed point sets of nonexpansive mappings include many practical problems in engineering and applied mathematics, and a number of iterative methods have been presented to solve them. In this paper, we discuss a variational inequality problem for a monotone, hemicontinuous operator over the fixed point set of a strongly nonexpansive mapping on a real Hilbert space. We then present an iterative algorithm, which uses the strongly nonexpansive mapping at each iteration, for solving the problem. We show that the algorithm potentially converges in the fixed point set faster than algorithms using firmly nonexpansive mappings. We also prove that, under certain assumptions, the algorithm with slowly diminishing step-size sequences converges to a solution to the problem in the sense of the weak topology of a Hilbert space. Numerical results demonstrate that the algorithm converges to a solution to a concrete variational inequality problem faster than the previous algorithm.

MSC: 47H06, 47J20, 47J25.

1 Introduction

The paper presents an iterative algorithm for the variational inequality problem [1–8] for a monotone, hemicontinuous operator $A$ over a nonempty, closed convex subset $C$ of a real Hilbert space $H$ with inner product $\langle\cdot,\cdot\rangle$ and its induced norm $\|\cdot\|$:

find $z \in C$ such that $\langle y - z, Az\rangle \ge 0$ for all $y \in C$.
(1)

Problem (1) can be solved by using convex optimization techniques. A typical iterative procedure for Problem (1) is the projected gradient method [7, 9], expressed as $x_1 \in C$ and $x_{n+1} = P_C(I - r_n A)x_n$ for $n = 1, 2, \ldots$, where $P_C$ stands for the metric projection onto $C$, $I$ is the identity mapping on $H$, and $\{r_n\} \subset (0, \infty)$. However, as the method requires repeated use of $P_C$, it can only be applied when the explicit form of $P_C$ is known (e.g., when $C$ is a closed ball or a closed cone). The following method, called the hybrid steepest descent method (HSDM) [10], enables us to consider the case in which $C$ has a more complicated form: $x_1 \in H$ and

$$x_{n+1} = (I - r_n A) T x_n$$

for all $n = 1, 2, \ldots$, where $\{r_n\} \subset (0,1]$ and $T : H \to H$ is an easily implemented nonexpansive mapping satisfying $\operatorname{Fix}(T) := \{x \in H : Tx = x\} = C$. HSDM converges strongly to the unique solution to the variational inequality problem over $\operatorname{Fix}(T)$,

find $z \in \operatorname{Fix}(T)$ such that $\langle y - z, Az\rangle \ge 0$ for all $y \in \operatorname{Fix}(T)$,
(2)

when $A : H \to H$ is strongly monotone and Lipschitz continuous. Problem (2) contains many applications, such as signal recovery problems [11], beam-forming problems [12], power-control problems [13, 14], bandwidth allocation problems [15–17], and optimal control problems [18]. References [11, 19], and [20] presented acceleration methods for solving Problem (2) when $A$ is strongly monotone and Lipschitz continuous. Algorithms were presented to solve Problem (2) when $A$ is (strictly) monotone and Lipschitz continuous [15, 17]. When $H = \mathbb{R}^N$ and $A : \mathbb{R}^N \to \mathbb{R}^N$ is continuous (and not necessarily monotone), a simple algorithm, $x_{n+1} := \alpha_n x_n + (1-\alpha_n)\frac{1}{2}(I+T)(x_n - r_n A x_n)$ ($\alpha_n, r_n \in [0,1]$), was presented in [14], and it converges to a solution to Problem (2) under some conditions.
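For concreteness, the projected gradient step $x_{n+1} = P_C(I - r_n A)x_n$ mentioned above can be sketched as follows. This is a minimal illustration under assumptions of our own: $C$ is the closed unit ball (so $P_C$ is explicit), $A := \nabla f$ for the quadratic $f(x) = \frac{1}{2}\langle x, Qx\rangle$ with a small hypothetical diagonal $Q$, and the step sizes are illustrative.

```python
import numpy as np

# Minimal sketch of the projected gradient method x_{n+1} = P_C((I - r_n A) x_n),
# assuming C is the closed unit ball (so P_C is explicit) and A := grad f for an
# illustrative quadratic f(x) = 0.5 <x, Qx>; Q and the step sizes are assumptions.

def project_ball(x, radius=1.0):
    """Metric projection onto the closed ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

Q = np.diag([1.0, 2.0, 3.0])     # positive definite, so A is (strongly) monotone
A = lambda x: Q @ x

x = project_ball(np.array([2.0, -2.0, 1.0]))   # x_1 in C
for n in range(1, 2001):
    r_n = 1.0 / n                               # diminishing step sizes
    x = project_ball(x - r_n * A(x))

# The unique minimizer of f over the unit ball is the origin.
print(np.linalg.norm(x))  # ≈ 0
```

The explicit ball projection is exactly what HSDM removes as a requirement: when $C$ lacks such a formula, the projection is replaced by an easily implemented nonexpansive mapping $T$ with $\operatorname{Fix}(T) = C$.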

Reference [21] proposed an iterative algorithm for solving Problem (2) when $A : H \to H$ is monotone and hemicontinuous and showed that the algorithm weakly converges to a solution to the problem under certain assumptions. The results in [21] are summarized as follows: suppose that $F : H \to H$ is a firmly nonexpansive mapping with $\operatorname{Fix}(F) \ne \emptyset$ and that $A : H \to H$ is a monotone, hemicontinuous mapping with

$$\operatorname{VI}(\operatorname{Fix}(F), A) := \bigl\{z \in \operatorname{Fix}(F) : \langle y - z, Az\rangle \ge 0 \text{ for all } y \in \operatorname{Fix}(F)\bigr\} \ne \emptyset.$$

Define a sequence $\{x_n\} \subset H$ by $x_1 \in H$ and

$$y_n = F(I - r_n A)x_n, \qquad x_{n+1} = \alpha_n x_n + (1-\alpha_n)y_n$$
(3)

for all $n = 1, 2, \ldots$, where $\{\alpha_n\} \subset [0,1)$ and $\{r_n\} \subset (0,1)$. Assume that $\{Ax_n\}$ in algorithm (3) is bounded and that there exists $n_0 \in \mathbb{N}$ such that $\operatorname{VI}(\operatorname{Fix}(F), A) \subset \Omega := \bigcap_{n=n_0}^{\infty}\{x \in \operatorname{Fix}(F) : \langle x_n - x, Ax_n\rangle \ge 0\}$. If $\{\alpha_n\}$ and $\{r_n\}$ satisfy $\limsup_{n\to\infty}\alpha_n < 1$, $\sum_{n=1}^{\infty} r_n^2 < \infty$, and $\lim_{n\to\infty}\|x_n - y_n\|/r_n = 0$, then $\{x_n\}$ weakly converges to a point in $\operatorname{VI}(\operatorname{Fix}(F), A)$. To relax the strong monotonicity condition on $A$ considered in [10], a firmly nonexpansive mapping $F$ is used in algorithm (3) in place of a nonexpansive mapping $T$. Since any firmly nonexpansive mapping $F$ can be represented in the form $F = \frac{1}{2}(I+T)$ for some nonexpansive mapping $T$, algorithm (3) with $\alpha_n := 0$ and $F := \frac{1}{2}(I+T)$ simplifies to $x_1 \in H$ and

$$x_{n+1} = \frac{1}{2}(I+T)(x_n - r_n A x_n) = \frac{1}{2}(x_n - r_n A x_n) + \frac{1}{2}T(x_n - r_n A x_n).$$
(4)

In constrained optimization problems, one is required to satisfy the constraint conditions early in the execution of an iterative algorithm. From this viewpoint, we introduce the following algorithm with a weight parameter $\alpha$ greater than $1/2$:

$$x_{n+1} = (1-\alpha)(x_n - r_n A x_n) + \alpha T(x_n - r_n A x_n) = \bigl[(1-\alpha)I + \alpha T\bigr](x_n - r_n A x_n) =: S(x_n - r_n A x_n).$$
(5)

Algorithm (5) potentially converges in the fixed point set faster than algorithm (4). The mapping $S := (1-\alpha)I + \alpha T$ satisfies the strong nonexpansivity condition [22], which is weaker than firm nonexpansivity. This implies that the previous algorithms in [14, 21], which apply to Problem (2) when the underlying mapping is firmly nonexpansive, cannot solve Problem (2) when it is merely strongly nonexpansive.
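A small numerical sketch suggests why a larger $\alpha$ pulls the iterates into the constraint set sooner. Everything concrete here is an assumption for illustration: $T$ is the projection onto a hypothetical halfspace (so $\operatorname{Fix}(T)$ is that halfspace), $A$ is the identity (the gradient of $\frac{1}{2}\|x\|^2$), and the step sizes are chosen ad hoc.

```python
import numpy as np

# Sketch of iteration (5): x_{n+1} = S(x_n - r_n A x_n), S := (1-alpha) I + alpha T.
# T is the (nonexpansive) projection onto the halfspace {x : <a, x> <= b}, and
# A x := x; the halfspace, start point, and step sizes are illustrative assumptions.

a = np.array([1.0, 1.0]); b = 1.0

def T(x):  # metric projection onto the halfspace
    return x + min(0.0, b - a @ x) / (a @ a) * a

def S(x, alpha):
    return (1 - alpha) * x + alpha * T(x)

def run(alpha, steps=5):
    x = np.array([5.0, 5.0])
    for n in range(1, steps + 1):
        r_n = 1.0 / (n + 1) ** 1.5     # diminishing steps in (0, 1)
        x = S(x - r_n * x, alpha)      # A x = x
    return np.linalg.norm(x - T(x))    # distance-to-Fix(T) proxy

# A larger weight alpha leaves the iterate much closer to Fix(T) after few steps.
print(run(0.9), run(0.5))
```

After five steps the $\alpha = 9/10$ run already sits inside the halfspace, while the $\alpha = 1/2$ run (the firmly nonexpansive case) is still outside; this is the behavior exploited in Section 4.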

In this paper, we present an iterative algorithm for solving the variational inequality problem with a monotone, hemicontinuous operator over the fixed point set of a strongly nonexpansive mapping and show that the algorithm weakly converges to a solution to the problem under certain assumptions.

The rest of the paper is organized as follows. Section 2 covers the mathematical preliminaries. Section 3 presents the algorithm for solving the variational inequality problem for a monotone, hemicontinuous operator over the fixed point set of a strongly nonexpansive mapping, and its convergence analyses. Section 4 provides numerical comparisons of the algorithm with the previous algorithm in [21] and shows that the algorithm converges to a solution to a concrete variational inequality problem faster than the previous algorithm. Section 5 concludes the paper.

2 Preliminaries

Throughout this paper, we denote the set of all positive integers by $\mathbb{N}$ and the set of all real numbers by $\mathbb{R}$. Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and its induced norm $\|\cdot\|$. We denote the strong and weak convergence of $\{x_n\}$ to $x \in H$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. It is well known that $H$ satisfies the following condition, called Opial's condition [23]: for any $\{x_n\} \subset H$ satisfying $x_n \rightharpoonup x_0$, $\liminf_{n\to\infty}\|x_n - x_0\| < \liminf_{n\to\infty}\|x_n - y\|$ holds for all $y \in H$ with $y \ne x_0$; see also [5, 6, 24]. To prove our main theorems, we need the following lemma, which was proven in [25]; see also [5, 6, 26].

Lemma 2.1 ([25])

Assume that $\{s_n\}$ and $\{e_n\}$ are sequences of non-negative numbers such that $s_{n+1} \le s_n + e_n$ for all $n \in \mathbb{N}$. If $\sum_{n=1}^{\infty} e_n < \infty$, then $\lim_{n\to\infty} s_n$ exists.

2.1 Strong nonexpansivity and fixed point set

Let $T$ be a mapping of $H$ into itself. We denote the fixed point set of $T$ by $\operatorname{Fix}(T)$; i.e., $\operatorname{Fix}(T) = \{z \in H : Tz = z\}$. A mapping $T : H \to H$ is said to be nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$. $\operatorname{Fix}(T)$ is closed and convex when $T$ is nonexpansive [5, 6, 24, 27]. $T : H \to H$ is said to be strongly nonexpansive [22] if $T$ is nonexpansive and if, for bounded sequences $\{x_n\}, \{y_n\} \subset H$, $\|x_n - y_n\| - \|Tx_n - Ty_n\| \to 0$ implies $(x_n - y_n) - (Tx_n - Ty_n) \to 0$. The following properties of strongly nonexpansive mappings were shown in [22]:

  • $\operatorname{Fix}(T)$ is closed and convex when $T : H \to H$ is strongly nonexpansive, because $T$ is also nonexpansive.

  • A strongly nonexpansive mapping $T : H \to H$ with $\operatorname{Fix}(T) \ne \emptyset$ is asymptotically regular [24, 28]; i.e., for each $x \in H$, $\lim_{n\to\infty}\|T^n x - T^{n+1} x\| = 0$.

  • If $S, T : H \to H$ are strongly nonexpansive, then $ST$ is also strongly nonexpansive, and $\operatorname{Fix}(ST) = \operatorname{Fix}(S) \cap \operatorname{Fix}(T)$ when $\operatorname{Fix}(S) \cap \operatorname{Fix}(T) \ne \emptyset$.

  • If $S : H \to H$ is strongly nonexpansive and $T : H \to H$ is nonexpansive, then $\alpha S + (1-\alpha)T$ is strongly nonexpansive for $\alpha \in (0,1)$. If $\operatorname{Fix}(S) \cap \operatorname{Fix}(T) \ne \emptyset$, then $\operatorname{Fix}(\alpha S + (1-\alpha)T) = \operatorname{Fix}(S) \cap \operatorname{Fix}(T)$ [29]. In particular, since the identity mapping $I$ is strongly nonexpansive, the mapping $U := \alpha I + (1-\alpha)T$ is strongly nonexpansive. Such a $U$ is said to be averaged nonexpansive.

Example 2.1 Let $D \subset H$ be a closed convex set that is simple in the sense that $P_D$ can be computed explicitly. Furthermore, let $f : H \to \mathbb{R}$ be Fréchet differentiable and let $\nabla f : H \to H$ be Lipschitz continuous; i.e., there exists $L > 0$ such that $\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|$ for all $x, y \in H$. Then, for $r \in (0, 2/L]$, $S_r := P_D(I - r\nabla f)$ is nonexpansive [30], [[31], Lemma 2.1]. Define $T : H \to H$ by

$$T := \alpha I + (1-\alpha) S_r \quad (\alpha \in (0,1)).$$
(6)

Then $T$ is strongly nonexpansive and $\operatorname{Fix}(T) = \{x \in D : f(x) = \min_{y \in D} f(y)\}$.

Example 2.2 Let $D_i \subset H$ ($i = 0, 1, \ldots, m$) be closed convex sets that are simple in the sense that $P_{D_i}$ can be computed explicitly. Define $\Phi(x) := \frac{1}{2}\sum_{i=1}^{m}\omega_i\, d(x, D_i)^2$ for all $x \in H$, where $\omega_i \in (0,1)$ with $\sum_{i=1}^{m}\omega_i = 1$ and $d(x, D_i) := \min\{\|x - y\| : y \in D_i\}$ ($i = 1, 2, \ldots, m$). Also, define $S : H \to H$ and $T : H \to H$ by

$$S := P_{D_0}\left[\sum_{i=1}^{m}\omega_i P_{D_i}\right], \qquad T := \alpha I + (1-\alpha)S \quad (\alpha \in (0,1)).$$
(7)

Then $S$ is nonexpansive [[10], Proposition 4.2] and $\operatorname{Fix}(S) = C_\Phi := \{x \in D_0 : \Phi(x) = \min_{y \in D_0}\Phi(y)\}$. Hence, $T$ is strongly nonexpansive and $\operatorname{Fix}(T) = C_\Phi$. $C_\Phi$ is referred to as a generalized convex feasible set [10, 32] and is the subset of $D_0$ that is closest to $D_1, D_2, \ldots, D_m$ in the mean-square sense. $C_\Phi$ is well defined even if $\bigcap_{i=0}^{m} D_i = \emptyset$, and $C_\Phi = \bigcap_{i=0}^{m} D_i$ holds when $\bigcap_{i=0}^{m} D_i \ne \emptyset$. Accordingly, $C_\Phi$ is a generalization of $\bigcap_{i=0}^{m} D_i$.
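The composition in Example 2.2 can be sketched numerically. The halfspaces $D_0, D_1, D_2$, the weights, and the start point below are assumptions for illustration; since their intersection is nonempty here, $\operatorname{Fix}(S) = C_\Phi = D_0 \cap D_1 \cap D_2$, and iterating the averaged mapping $T$ drives the point into that intersection.

```python
import numpy as np

# Sketch of Example 2.2: S := P_{D0}( sum_i w_i P_{Di} ), T := alpha I + (1-alpha) S.
# D0, D1, D2 are illustrative halfspaces with nonempty intersection, so Fix(S) is
# exactly D0 ∩ D1 ∩ D2; all data below are assumptions, not taken from the paper.

def proj_halfspace(a, b):
    a = np.asarray(a, dtype=float)
    def P(x):
        return x + min(0.0, b - a @ x) / (a @ a) * a
    return P

P0 = proj_halfspace([1.0, 0.0], 2.0)   # D0 = {x : x1 <= 2}
P1 = proj_halfspace([0.0, 1.0], 1.0)   # D1 = {x : x2 <= 1}
P2 = proj_halfspace([1.0, 1.0], 2.0)   # D2 = {x : x1 + x2 <= 2}
w = [0.5, 0.5]

def S(x):
    return P0(w[0] * P1(x) + w[1] * P2(x))

def T(x, alpha=0.5):
    return alpha * x + (1 - alpha) * S(x)

x = np.array([5.0, 5.0])
for _ in range(300):
    x = T(x)

# The limit (approximately) satisfies all three constraints.
print(x[0] <= 2 + 1e-6 and x[1] <= 1 + 1e-6 and x[0] + x[1] <= 2 + 1e-6)  # True
```

With an empty intersection the same iteration would still converge to a point of $C_\Phi$, i.e., a weighted least-squares compromise inside $D_0$.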

A mapping $F : H \to H$ is said to be firmly nonexpansive [33] if $\|Fx - Fy\|^2 \le \langle x - y, Fx - Fy\rangle$ for all $x, y \in H$ (see also [24, 27, 34]). Every firmly nonexpansive mapping $F$ can be expressed as $F = \frac{1}{2}(I+T)$ for some nonexpansive mapping $T$ [24, 27, 34]. Hence, the class of averaged nonexpansive mappings includes the class of firmly nonexpansive mappings.
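The representation $F = \frac{1}{2}(I+T)$ can be checked numerically on a toy example. Here $T$ is a plane rotation (an isometry, hence nonexpansive); this choice, the dimension, and the sample size are assumptions for the sketch.

```python
import numpy as np

# Numerical check (illustrative) that F := (1/2)(I + T) satisfies the firm
# nonexpansivity inequality ||Fx - Fy||^2 <= <x - y, Fx - Fy> when T is
# nonexpansive; here T is a plane rotation, which is an isometry.

theta = 1.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x
F = lambda x: 0.5 * (x + T(x))

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d = F(x) - F(y)
    ok = ok and (d @ d <= (x - y) @ d + 1e-9)
print(ok)  # True
```

For this particular $T$ the inequality in fact holds with equality, since $\|Fz\|^2 = \frac{1+\cos\theta}{2}\|z\|^2 = \langle z, Fz\rangle$ for the linear map $F = \frac{1}{2}(I+R)$.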

2.2 Variational inequality

An operator $A : H \to H$ is said to be monotone if $\langle x - y, Ax - Ay\rangle \ge 0$ for all $x, y \in H$. $A : H \to H$ is said to be hemicontinuous [[5], p.204] if, for any $x, y \in H$, the mapping $g : [0,1] \to H$ defined by $g(t) := A(tx + (1-t)y)$ is continuous when $H$ is equipped with the weak topology. Let $C$ be a nonempty, closed convex subset of $H$. The variational inequality problem [2, 4] for a monotone operator $A : H \to H$ is as follows (see also [1, 3, 5–8]):

find $z \in C$ such that $\langle y - z, Az\rangle \ge 0$ for all $y \in C$.

We denote the solution set of the variational inequality problem by $\operatorname{VI}(C, A)$. The monotonicity and hemicontinuity of $A$ imply that $\operatorname{VI}(C, A) = \{z \in C : \langle y - z, Ay\rangle \ge 0 \text{ for all } y \in C\}$ [[5], Subsection 7.1]. This means that $\operatorname{VI}(C, A)$ is closed and convex. $\operatorname{VI}(C, A)$ is nonempty when $A : H \to H$ is monotone and hemicontinuous and $C \subset H$ is nonempty, compact, and convex [[5], Theorem 7.1.8].

Example 2.3 Let $g : H \to \mathbb{R}$ be convex and continuously Fréchet differentiable and let $A := \nabla g$. Then $A$ is monotone and hemicontinuous.

(i) Suppose that $f : H \to \mathbb{R}$ is as in Example 2.1, $T : H \to H$ is defined as in (6), and $G := \{z \in D : f(z) = \min_{w \in D} f(w)\}$. Then

$$\operatorname{VI}(\operatorname{Fix}(T), A) = \Bigl\{x \in G : g(x) = \min_{y \in G} g(y)\Bigr\}.$$

A solution of this problem is a minimizer of $g$ over the set of all minimizers of $f$ over $D$. Therefore, the problem has a triplex structure [16, 31, 35].

(ii) Suppose that $T : H \to H$ is defined as in (7). Then

$$\operatorname{VI}(\operatorname{Fix}(T), A) = \Bigl\{x \in C_\Phi : g(x) = \min_{y \in C_\Phi} g(y)\Bigr\}.$$

This problem is to find a minimizer of $g$ over the generalized convex feasible set [10, 13, 14, 16, 18].

3 Optimization of variational inequality over fixed point set

In this section, we present an iterative algorithm for solving the variational inequality problem for a monotone, hemicontinuous operator over the fixed point set of a strongly nonexpansive mapping, together with its convergence analyses. We assume that $T : H \to H$ is a strongly nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$ and that $A : H \to H$ is a monotone, hemicontinuous operator.

Algorithm 3.1

Step 0. Choose $x_1 \in H$, $r_1 \in (0,1)$, and $\alpha_1 \in [0,1)$ arbitrarily, and let $n := 1$.

Step 1. Given $x_n \in H$, choose $r_n \in (0,1)$ and $\alpha_n \in [0,1)$ and compute $x_{n+1} \in H$ as

$$y_n := T(x_n - r_n A x_n), \qquad x_{n+1} := \alpha_n x_n + (1-\alpha_n) y_n.$$

Step 2. Update $n := n + 1$, and go to Step 1.
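Algorithm 3.1 can be sketched end to end on small illustrative data. Every concrete choice below is an assumption for the sketch, not taken from the paper's experiments: $T := (1-\alpha)I + \alpha P_C$ with a hypothetical halfspace $C$ (strongly nonexpansive, $\operatorname{Fix}(T) = C$), $A := \nabla\frac{1}{2}\|x - p\|^2 = x - p$ (monotone and continuous), $\alpha_n := 1/2$, and $r_n := 1/(n+1)^{1.001}$.

```python
import numpy as np

# Runnable sketch of Algorithm 3.1 on illustrative data (all choices below are
# assumptions): T := (1-alpha) I + alpha P_C with C = {x : <a, x> <= b}, and
# A x := x - p, the gradient of 0.5 ||x - p||^2.

a = np.array([1.0, 1.0]); b = 1.0            # C = {x : <a, x> <= b}
p = np.array([2.0, 0.0])

def P_C(x):
    return x + min(0.0, b - a @ x) / (a @ a) * a

alpha = 0.9
def T(x):
    return (1 - alpha) * x + alpha * P_C(x)  # strongly nonexpansive, Fix(T) = C

A = lambda x: x - p

x = np.array([3.0, 3.0])                     # Step 0: x_1
for n in range(1, 100001):
    r_n = 1.0 / (n + 1) ** 1.001             # slowly diminishing, summable steps
    y = T(x - r_n * A(x))                    # Step 1: y_n := T(x_n - r_n A x_n)
    x = 0.5 * x + 0.5 * y                    # x_{n+1} := alpha_n x_n + (1-alpha_n) y_n

# For this A, VI(Fix(T), A) is the single point P_C(p) = (1.5, -0.5).
print(np.linalg.norm(x - np.array([1.5, -0.5])))  # small
```

Here the solution of the variational inequality is simply the projection of $p$ onto $C$, which gives a cheap way to eyeball the convergence.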

To prove our main theorems, we need the following lemma.

Lemma 3.1 Suppose that $\{x_n\}$ is a sequence generated by Algorithm 3.1 and that $\{Ax_n\}$ is bounded. Moreover, assume that either

(A) $\sum_{n=1}^{\infty} r_n < \infty$, or

(B) $\sum_{n=1}^{\infty} r_n^2 < \infty$, $\operatorname{VI}(\operatorname{Fix}(T), A) \ne \emptyset$, and there exists $n_0 \in \mathbb{N}$ satisfying $\operatorname{VI}(\operatorname{Fix}(T), A) \subset \Omega := \bigcap_{n=n_0}^{\infty}\{x \in \operatorname{Fix}(T) : \langle x_n - x, Ax_n\rangle \ge 0\}$.

Then $\{x_n\}$ is bounded.

Proof Put $z_n := x_n - r_n A x_n$ for all $n \in \mathbb{N}$. We first assume that condition (A) is satisfied and choose $u \in \operatorname{Fix}(T)$ arbitrarily. Accordingly, we see that, for any $n \in \mathbb{N}$,

$$\begin{aligned}\|x_{n+1} - u\| &= \|\alpha_n x_n + (1-\alpha_n)y_n - u\| \le \alpha_n\|x_n - u\| + (1-\alpha_n)\|z_n - u\|\\ &= \alpha_n\|x_n - u\| + (1-\alpha_n)\|(x_n - u) - r_n A x_n\| \le \|x_n - u\| + r_n\|Ax_n\|.\end{aligned}$$
(8)

From $\sum_{n=1}^{\infty} r_n < \infty$, the boundedness of $\{Ax_n\}$, and Lemma 2.1, the limit of $\{\|x_n - u\|\}$ exists for all $u \in \operatorname{Fix}(T)$, which implies that $\{x_n\}$ is bounded.

Next, suppose that condition (B) is satisfied, and let $u \in \operatorname{Fix}(T)$. Then, from the monotonicity of $A$, we find that, for any $n \in \mathbb{N}$,

$$\begin{aligned}\|x_{n+1} - u\|^2 &= \|\alpha_n x_n + (1-\alpha_n)y_n - u\|^2 \le \alpha_n\|x_n - u\|^2 + (1-\alpha_n)\|y_n - u\|^2\\ &\le \alpha_n\|x_n - u\|^2 + (1-\alpha_n)\|z_n - u\|^2 = \alpha_n\|x_n - u\|^2 + (1-\alpha_n)\|(x_n - u) - r_n A x_n\|^2\\ &= \alpha_n\|x_n - u\|^2 + (1-\alpha_n)\bigl(\|x_n - u\|^2 + 2r_n\langle u - x_n, Ax_n\rangle + r_n^2\|Ax_n\|^2\bigr)\\ &\le \|x_n - u\|^2 + (1-\alpha_n)\bigl(2r_n\langle u - x_n, Ax_n\rangle + Kr_n^2\bigr)\\ &= \|x_n - u\|^2 + (1-\alpha_n)\bigl(2r_n\langle u - x_n, Ax_n - Au\rangle + 2r_n\langle u - x_n, Au\rangle + Kr_n^2\bigr)\\ &\le \|x_n - u\|^2 + 2r_n(1-\alpha_n)\langle u - x_n, Au\rangle + Kr_n^2,\end{aligned}$$
(9)

where $K := \sup\{\|Ax_n\|^2 : n \in \mathbb{N}\} < \infty$. In particular, for $u \in \operatorname{VI}(\operatorname{Fix}(T), A) \subset \Omega$, it follows from condition (B) that, for any $n \ge n_0$,

$$\|x_{n+1} - u\|^2 \le \|x_n - u\|^2 + 2r_n(1-\alpha_n)\langle u - x_n, Ax_n\rangle + Kr_n^2 \le \|x_n - u\|^2 + Kr_n^2.$$

Hence, the condition $\sum_{n=1}^{\infty} r_n^2 < \infty$ and Lemma 2.1 guarantee that the limit of $\{\|x_n - u\|\}$ exists for all $u \in \operatorname{VI}(\operatorname{Fix}(T), A)$. We thus conclude that $\{x_n\}$ is bounded. □

Now we are in a position to analyze the convergence of Algorithm 3.1 under condition (A) in Lemma 3.1.

Theorem 3.1 Let $\{x_n\}$ be a sequence generated by Algorithm 3.1 and assume that $\{Ax_n\}$ is bounded and that the sequences $\{\alpha_n\} \subset [0,1)$ and $\{r_n\} \subset (0,1)$ satisfy

$$\limsup_{n\to\infty}\alpha_n < 1, \qquad \sum_{n=1}^{\infty} r_n < \infty, \qquad \text{and} \qquad \lim_{n\to\infty}\frac{\|x_n - y_n\|}{r_n} = 0.$$

Then Algorithm 3.1 converges weakly to a point in $\operatorname{VI}(\operatorname{Fix}(T), A)$.

Proof Put $z_n := x_n - r_n A x_n$ for all $n \in \mathbb{N}$. The proof consists of the following steps:

(a) Prove that $\{x_n\}$ and $\{z_n\}$ are bounded.

(b) Prove that $\lim_{n\to\infty}\|x_n - y_n\| = 0$ and $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$ hold.

(c) Prove that $\{x_n\}$ converges weakly to a point in $\operatorname{VI}(\operatorname{Fix}(T), A)$.

(a) Choose $u \in \operatorname{Fix}(T)$ arbitrarily. From the inequality $\|z_n - u\| = \|(x_n - r_n A x_n) - u\| \le \|x_n - u\| + r_n\|Ax_n\|$ and Lemma 3.1, we deduce that $\{z_n\}$ is bounded.

(b) Put $c := \lim_{n\to\infty}\|x_n - u\|$ for $u \in \operatorname{Fix}(T)$. From $\sum_{n=1}^{\infty} r_n < \infty$, for any $\varepsilon > 0$ we can choose $m \in \mathbb{N}$ such that $\bigl|\|x_n - u\| - c\bigr| \le \varepsilon$ and $r_n \le \varepsilon$ for all $n \ge m$. Also, there exists $a > 0$ such that $\alpha_n < a < 1$ for all $n \ge m$ because of $\limsup_{n\to\infty}\alpha_n < 1$. Since $y_n = \frac{1}{1-\alpha_n}x_{n+1} - \frac{\alpha_n}{1-\alpha_n}x_n$, we have

$$\|y_n - u\| \ge \frac{1}{1-\alpha_n}\|x_{n+1} - u\| - \frac{\alpha_n}{1-\alpha_n}\|x_n - u\|$$

for all $n \in \mathbb{N}$. We find that, for any $n \ge m$,

$$\|y_n - u\| \ge \frac{1}{1-\alpha_n}(c - \varepsilon) - \frac{\alpha_n}{1-\alpha_n}(c + \varepsilon) = c - \frac{1+\alpha_n}{1-\alpha_n}\varepsilon \ge c - \frac{1+a}{1-a}\varepsilon.$$

Hence, for any $u \in \operatorname{Fix}(T)$ and any $n \ge m$, we have

$$0 \le \|z_n - u\| - \|Tz_n - Tu\| \le \|x_n - u\| + r_n\|Ax_n\| - \|y_n - u\| \le c + \varepsilon + M\varepsilon - \left(c - \frac{1+a}{1-a}\varepsilon\right) = \left(\frac{2}{1-a} + M\right)\varepsilon,$$

where $M := \sup\{\|Ax_n\| : n \in \mathbb{N}\} < \infty$; this implies $\lim_{n\to\infty}(\|z_n - u\| - \|Tz_n - Tu\|) = 0$. Since $T$ is strongly nonexpansive, $Tu = u$, and $Tz_n = y_n$, we get

$$\lim_{n\to\infty}\bigl\|(z_n - u) - (Tz_n - Tu)\bigr\| = \lim_{n\to\infty}\|z_n - Tz_n\| = \lim_{n\to\infty}\|z_n - y_n\| = 0.$$
(10)

From (10) and $\|x_n - z_n\| = r_n\|Ax_n\| \to 0$ as $n \to \infty$, we also get

$$\lim_{n\to\infty}\|x_n - y_n\| = 0.$$
(11)

From $\|x_n - Tx_n\| \le \|x_n - y_n\| + \|Tz_n - Tx_n\| \le \|x_n - y_n\| + \|z_n - x_n\|$ and (11), we deduce that

$$\lim_{n\to\infty}\|x_n - Tx_n\| = 0.$$
(12)
(c) From the boundedness of $\{x_n\}$, there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ that converges weakly to a point $v \in H$. From the nonexpansivity of $T$ and (12), it is guaranteed that $T$ is demiclosed (i.e., $x_n \rightharpoonup u$ and $x_n - Tx_n \to 0$ imply $u \in \operatorname{Fix}(T)$). Hence, $v \in \operatorname{Fix}(T)$. From (9), we get, for any $u \in \operatorname{Fix}(T)$ and any $n \in \mathbb{N}$,

$$0 \le \bigl(\|x_n - u\| + \|x_{n+1} - u\|\bigr)\bigl(\|x_n - u\| - \|x_{n+1} - u\|\bigr) + 2r_n(1-\alpha_n)\langle u - x_n, Au\rangle + Kr_n^2,$$

which means

$$\begin{aligned}0 &\le L\,\frac{\|x_n - x_{n+1}\|}{r_n} + 2(1-\alpha_n)\langle u - x_n, Au\rangle + Kr_n\\ &= L(1-\alpha_n)\frac{\|x_n - y_n\|}{r_n} + 2(1-\alpha_n)\langle u - x_n, Au\rangle + Kr_n\\ &\le L\,\frac{\|x_n - y_n\|}{r_n} + 2(1-\alpha_n)\langle u - x_n, Au\rangle + Kr_n,\end{aligned}$$

where $L := \sup\{\|x_n - u\| + \|x_{n+1} - u\| : n \in \mathbb{N}\} < \infty$ and we used $x_n - x_{n+1} = (1-\alpha_n)(x_n - y_n)$. From $\|x_n - y_n\|/r_n \to 0$, $x_{n_i} \rightharpoonup v$, $\limsup_{n\to\infty}\alpha_n < 1$, and $r_n \to 0$, we have

$$0 \le \langle u - v, Au\rangle \quad \text{for all } u \in \operatorname{Fix}(T).$$

The monotonicity and hemicontinuity of $A$ imply that $v \in \operatorname{VI}(\operatorname{Fix}(T), A)$. Finally, we show that $\{x_n\}$ converges weakly to $v$. Assume that another subsequence $\{x_{n_j}\}$ of $\{x_n\}$ converges weakly to $w$. From the discussion above, we also get $w \in \operatorname{VI}(\operatorname{Fix}(T), A)$. If $v \ne w$, Opial's theorem [23] guarantees that

$$\lim_{n\to\infty}\|x_n - v\| = \lim_{i\to\infty}\|x_{n_i} - v\| < \lim_{i\to\infty}\|x_{n_i} - w\| = \lim_{n\to\infty}\|x_n - w\| = \lim_{j\to\infty}\|x_{n_j} - w\| < \lim_{j\to\infty}\|x_{n_j} - v\| = \lim_{n\to\infty}\|x_n - v\|.$$

This is a contradiction. Thus, $v = w$, so every subsequence of $\{x_n\}$ converges weakly to the same point in $\operatorname{VI}(\operatorname{Fix}(T), A)$. Therefore, $\{x_n\}$ converges weakly to $v \in \operatorname{VI}(\operatorname{Fix}(T), A)$. This completes the proof. □

Remark 3.1 The numerical examples in [14, 16, 21] show that Algorithm 3.1 satisfies $\lim_{n\to\infty}\|x_n - y_n\|/r_n = 0$ when $T$ is firmly nonexpansive and $r_n := 1/n^a$ ($1 \le a < 2$). However, when $a \ge 2$, there are counterexamples that do not satisfy $\lim_{n\to\infty}\|x_n - y_n\|/r_n = 0$ [14, 16, 21].

Remark 3.2 If the sequence $\{x_n\}$ satisfies the assumptions in Theorem 3.1, we need not assume that $\operatorname{VI}(\operatorname{Fix}(T), A) \ne \emptyset$ or that there exists $n_0 \in \mathbb{N}$ such that $\operatorname{VI}(\operatorname{Fix}(T), A) \subset \Omega$ as in condition (B) (see also [[14], Remark 7(c)]).

Remark 3.3 Let us provide a sufficient condition for the boundedness of $\{Ax_n\}$. Suppose that $\operatorname{Fix}(T)$ is bounded and $A$ is Lipschitz continuous. Then we can choose a bounded set $V$ with $\operatorname{Fix}(T) \subset V$ onto which the projection can be computed within a finite number of arithmetic operations (e.g., $V$ is a closed ball with a large enough radius). Accordingly, we can compute

$$x_{n+1} := P_V\bigl(\alpha_n x_n + (1-\alpha_n) y_n\bigr) \quad (n = 1, 2, \ldots)$$
(13)

instead of $x_{n+1}$ in Algorithm 3.1. Since $\{x_n\} \subset V$ and $V$ is bounded, $\{x_n\}$ is bounded. The Lipschitz continuity of $A$ means that $\|Ax_n - Ax\| \le L\|x_n - x\|$ ($x \in H$), where $L > 0$ is a constant; hence, $\{Ax_n\}$ is bounded. We can prove that Algorithm 3.1 with equation (13) and $\{\alpha_n\}$ and $\{r_n\}$ satisfying the conditions in Theorem 3.1 (or Theorem 3.2) weakly converges to a point in $\operatorname{VI}(\operatorname{Fix}(T), A)$ by referring to the proof of Theorem 3.1 (or Theorem 3.2).

We prove the following theorem under condition (B) in Lemma 3.1. The essential parts of the proof are similar to those of Lemma 3.1 and Theorem 3.1, so we only give an outline of the proof below.

Theorem 3.2 Let $\{x_n\}$ be a sequence generated by Algorithm 3.1. Assume that $\{Ax_n\}$ is bounded and that $\{\alpha_n\} \subset [0,1)$ and $\{r_n\} \subset (0,1)$ satisfy

$$\limsup_{n\to\infty}\alpha_n < 1, \qquad \sum_{n=1}^{\infty} r_n^2 < \infty, \qquad \text{and} \qquad \lim_{n\to\infty}\frac{\|x_n - y_n\|}{r_n} = 0.$$

If $\operatorname{VI}(\operatorname{Fix}(T), A) \ne \emptyset$ and if there exists $n_0 \in \mathbb{N}$ such that $\operatorname{VI}(\operatorname{Fix}(T), A) \subset \bigcap_{n=n_0}^{\infty}\{x \in \operatorname{Fix}(T) : \langle x_n - x, Ax_n\rangle \ge 0\}$, then the sequence $\{x_n\}$ converges weakly to a point in $\operatorname{VI}(\operatorname{Fix}(T), A)$.

Proof Put $z_n := x_n - r_n A x_n$ for all $n \in \mathbb{N}$. As in the proof of Theorem 3.1, we proceed with the following steps:

(a) Prove that $\{x_n\}$ and $\{z_n\}$ are bounded.

(b) Prove that $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$ holds.

(c) Prove that $\{x_n\}$ converges weakly to a point in $\operatorname{VI}(\operatorname{Fix}(T), A)$.

(a) From Lemma 3.1, it follows that the limit of $\{\|x_n - u\|\}$ exists for all $u \in \operatorname{VI}(\operatorname{Fix}(T), A)$, and hence $\{x_n\}$ and $\{z_n\}$ are bounded.

(b) Let $u \in \operatorname{VI}(\operatorname{Fix}(T), A)$ and put $c := \lim_{n\to\infty}\|x_n - u\|$. Since $\sum_{n=1}^{\infty} r_n^2 < \infty$, the condition $r_n \to 0$ holds, and by $\limsup_{n\to\infty}\alpha_n < 1$, there exists $a > 0$ such that $\alpha_n < a < 1$ for all large $n$. As in the proof of Theorem 3.1(b), for any $\varepsilon > 0$, there exists $m \in \mathbb{N}$ such that

$$\bigl|\|x_n - u\| - c\bigr| \le \varepsilon \qquad \text{and} \qquad \|y_n - u\| \ge c - \frac{1+a}{1-a}\varepsilon$$

for all $n \ge m$. Since the inequality $\|z_n - u\| = \|(x_n - r_n A x_n) - u\| \le \|x_n - u\| + r_n\|Ax_n\|$ holds, we have

$$0 \le \|z_n - u\| - \|Tz_n - Tu\| \le \|x_n - u\| + r_n\|Ax_n\| - \|y_n - u\| \le c + \varepsilon + M\varepsilon - \left(c - \frac{1+a}{1-a}\varepsilon\right) = \left(\frac{2}{1-a} + M\right)\varepsilon,$$

where $M := \sup\{\|Ax_n\| : n \in \mathbb{N}\} < \infty$. This implies that $\lim_{n\to\infty}(\|z_n - u\| - \|Tz_n - Tu\|) = 0$. From the strong nonexpansivity of $T$, we get $\lim_{n\to\infty}\|z_n - Tz_n\| = 0$. The rest of the proof is the same as that of Theorem 3.1(b). Accordingly, we obtain $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$.

(c) Following the proof of Theorem 3.1(c), there exists a subsequence $\{x_{n_i}\} \subset \{x_n\}$ that converges weakly to $v \in \operatorname{VI}(\operatorname{Fix}(T), A)$. Assume that another subsequence $\{x_{n_j}\}$ of $\{x_n\}$ converges weakly to $w$. Then we also have $w \in \operatorname{VI}(\operatorname{Fix}(T), A)$. Since the limit of $\{\|x_n - u\|\}$ exists for $u \in \operatorname{VI}(\operatorname{Fix}(T), A)$, Opial's theorem [23] guarantees that $v = w$. This implies that every subsequence of $\{x_n\}$ converges weakly to the same point in $\operatorname{VI}(\operatorname{Fix}(T), A)$, and hence $\{x_n\}$ converges weakly to $v \in \operatorname{VI}(\operatorname{Fix}(T), A)$. This completes the proof. □

As we mentioned in Section 1, to solve constrained optimization problems whose feasible set is the fixed point set of a nonexpansive mapping $T$, Algorithm 3.1 must converge in $\operatorname{Fix}(T)$ early in its execution. Therefore, it is useful to use a large parameter $\alpha \in (0,1)$ when a strongly nonexpansive mapping is represented as $(1-\alpha)I + \alpha T$. Theorems 3.1 and 3.2 have the following consequences.

Corollary 3.1 Let $T : H \to H$ be a nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$ and let $A : H \to H$ be a monotone, hemicontinuous mapping. Let $\{x_n\}$ be a sequence generated by $x_1 \in H$ and

$$y_n = \bigl((1-\alpha)I + \alpha T\bigr)(x_n - r_n A x_n), \qquad x_{n+1} = \alpha_n x_n + (1-\alpha_n)y_n$$
(14)

for all $n \in \mathbb{N}$, where $\{\alpha_n\} \subset [0,1)$, $\alpha \in (0,1)$, and $\{r_n\} \subset (0,1)$. Assume that $\{Ax_n\}$ is a bounded sequence and that

$$\limsup_{n\to\infty}\alpha_n < 1, \qquad \sum_{n=1}^{\infty} r_n < \infty, \qquad \text{and} \qquad \lim_{n\to\infty}\frac{\|x_n - y_n\|}{r_n} = 0.$$

Then $\{x_n\}$ converges weakly to a point in $\operatorname{VI}(\operatorname{Fix}(T), A)$.

Proof Since every averaged nonexpansive mapping is strongly nonexpansive and $\operatorname{Fix}((1-\alpha)I + \alpha T) = \operatorname{Fix}(T)$ for $\alpha \in (0,1)$, Theorem 3.1 implies Corollary 3.1. □

By following the proof of Theorem 3.2 and Corollary 3.1, we get the following.

Corollary 3.2 Let $T : H \to H$ be a nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$ and let $A : H \to H$ be a monotone, hemicontinuous mapping. Let $\{x_n\}$ be a sequence generated by algorithm (14). Assume that $\{Ax_n\}$ is a bounded sequence and that

$$\limsup_{n\to\infty}\alpha_n < 1, \qquad \sum_{n=1}^{\infty} r_n^2 < \infty, \qquad \text{and} \qquad \lim_{n\to\infty}\frac{\|x_n - y_n\|}{r_n} = 0.$$

If $\operatorname{VI}(\operatorname{Fix}(T), A) \ne \emptyset$ and if there exists $n_0 \in \mathbb{N}$ such that $\operatorname{VI}(\operatorname{Fix}(T), A) \subset \bigcap_{n=n_0}^{\infty}\{x \in \operatorname{Fix}(T) : \langle x_n - x, Ax_n\rangle \ge 0\}$, then $\{x_n\}$ converges weakly to a point in $\operatorname{VI}(\operatorname{Fix}(T), A)$.

4 Numerical examples

Let us apply Algorithm 3.1 and the algorithm in [21] to the following variational inequality problem.

Problem 4.1 Define $f : \mathbb{R}^{1000} \to \mathbb{R}$ and $C_i \subset \mathbb{R}^{1000}$ ($i = 1, 2$) by

$$f(x) := \frac{1}{2}\langle x, Qx\rangle \quad (x \in \mathbb{R}^{1000}), \qquad C_i := \bigl\{x \in \mathbb{R}^{1000} : \langle a_i, x\rangle \le b_i\bigr\} \quad (i = 1, 2),$$

where $Q \in \mathbb{R}^{1000 \times 1000}$ is positive semidefinite, $a_i := (a_i^{(1)}, a_i^{(2)}, \ldots, a_i^{(1000)}) \in \mathbb{R}^{1000}$, and $b_i \in \mathbb{R}_+$ ($i = 1, 2$). Find $z \in \operatorname{VI}(C_1 \cap C_2, \nabla f)$.

We set $Q$ as a diagonal matrix with diagonal components $0, 1, \ldots, 999$ and choose $a_i^{(j)} \in (0, 100)$ ($i = 1, 2$, $j = 1, 2, \ldots, 1000$) to be Mersenne Twister pseudo-random numbers given by the random-real function of srfi-27, Gauche.^a We also set $b_1 := 5000$ and $b_2 := 4000$. The compiler used in this experiment was gcc.^b Double-precision floating-point arithmetic was used for real numbers. The language was C.

In the experiment, we used the following algorithm:

$$y_n := \bigl((1-\alpha)I + \alpha P_{C_1} P_{C_2}\bigr)\left(x_n - \frac{10^{-3}}{(n+1)^{1.001}}\,\nabla f(x_n)\right), \qquad x_{n+1} := \frac{1}{2}x_n + \frac{1}{2}y_n \quad (n \in \mathbb{N}),$$
(15)

where $\alpha \in (0,1)$. Note that the projection $P_{C_i}$ ($i = 1, 2$) can be computed within a finite number of arithmetic operations [[36], p.406] because $C_i$ ($i = 1, 2$) is a halfspace. More precisely,

$$P_{C_i}(x) = x + \frac{\min\{0, b_i - \langle a_i, x\rangle\}}{\|a_i\|^2}\,a_i \quad (x \in \mathbb{R}^{1000},\ i = 1, 2).$$
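The halfspace projection formula above is easy to verify on toy data. The vector $a$ and bound $b$ below are small illustrative values, not the paper's random 1000-dimensional instances.

```python
import numpy as np

# Check of the halfspace projection P_C(x) = x + min(0, b - <a, x>)/||a||^2 * a
# on small illustrative data (a and b are assumptions, not the paper's instances).

def proj(x, a, b):
    a = np.asarray(a, dtype=float)
    return x + min(0.0, b - a @ x) / (a @ a) * a

a, b = np.array([3.0, 4.0]), 10.0
inside = np.array([0.0, 0.0])          # <a, x> = 0 <= 10: already in C, unchanged
outside = np.array([3.0, 4.0])         # <a, x> = 25 > 10: moved onto the boundary

print(proj(inside, a, b))              # unchanged: [0. 0.]
print(a @ proj(outside, a, b))         # ≈ 10, i.e., lands on the hyperplane <a, x> = b
```

Points already in $C_i$ are left fixed, and violating points are mapped to the nearest point of the bounding hyperplane, which is exactly what makes $P_{C_1}P_{C_2}$ cheap enough to apply at every iteration.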

We can see that algorithm (15) with $\alpha := 1/2$ coincides with the previous algorithm in [21]. Hence, we compare^c algorithm (15) with $\alpha := 9/10$ against algorithm (15) with $\alpha := 1/2$ and verify that the former converges in $C_1 \cap C_2 = \operatorname{Fix}(P_{C_1} P_{C_2})$ faster than the latter. We selected one hundred initial points $x = x(k) \in \mathbb{R}^{1000}$ ($k = 1, 2, \ldots, 100$) as pseudo-random numbers generated by the rand function of the C Standard Library and executed algorithm (15) with $\alpha := 9/10$ and with $\alpha := 1/2$ for these initial points. Let $\{x_n(k)\}$ be the sequence generated by $x(k)$ and algorithm (15). Here, we define

$$D_n := \frac{1}{100}\sum_{k=1}^{100}\bigl\|x_n(k) - P_{C_1} P_{C_2}\bigl(x_n(k)\bigr)\bigr\| \quad (n \in \mathbb{N}).$$

The convergence of $\{D_n\}$ to 0 implies that algorithm (15) converges to a point in $C_1 \cap C_2$.

Corollary 3.1 guarantees that algorithm (15) converges to a solution to Problem 4.1 if $\{\nabla f(x_n)\}$ is bounded and if

$$\lim_{n\to\infty}(n+1)^{1.001}\|x_n - y_n\| = 0.$$
(16)

To verify whether algorithm (15) satisfies condition (16), we employed

$$X_n := \frac{1}{100}\sum_{k=1}^{100}(n+1)^{1.001}\bigl\|x_n(k) - y_n(k)\bigr\| \quad (n \in \mathbb{N}),$$

where $y_n(k) := ((1-\alpha)I + \alpha P_{C_1} P_{C_2})(x_n(k) - (10^{-3}/(n+1)^{1.001})\nabla f(x_n(k)))$ ($k = 1, 2, \ldots, 100$, $n \in \mathbb{N}$). The convergence of $\{X_n\}$ to 0 implies that algorithm (15) satisfies condition (16). We also used

$$F_n := \frac{1}{100}\sum_{k=1}^{100} f\bigl(x_n(k)\bigr) \quad (n \in \mathbb{N})$$

to check that algorithm (15) is stable.

Figure 1 shows the behavior of $D_n$ for algorithm (15) with $\alpha := 9/10$ and with $\alpha := 1/2$. The figure shows that $\{D_n\}$ for $\alpha := 9/10$ converges to 0 faster than $\{D_n\}$ for $\alpha := 1/2$; i.e., algorithm (15) with $\alpha := 9/10$ converges in $C_1 \cap C_2$ faster than the previous algorithm in [21].

Figure 1. Behavior of $D_n$ for algorithm (15) with $\alpha := 9/10$ and with $\alpha := 1/2$.

Figure 2 compares the behaviors of $X_n$ for algorithm (15) with $\alpha := 9/10$ and with $\alpha := 1/2$ and shows that the $\{X_n\}$ generated by each algorithm converges to 0; i.e., each satisfies (16). Therefore, from Corollary 3.1, we can conclude that both variants find a solution to Problem 4.1.

Figure 2. Behavior of $X_n$ for algorithm (15) with $\alpha := 9/10$ and with $\alpha := 1/2$.

We can see from Figure 3 that the $\{F_n\}$ generated by the two algorithms converge to the same value. Figures 1, 2, and 3 indicate that algorithm (15) with $\alpha := 9/10$ converges to a solution to Problem 4.1 faster than the previous algorithm in [21]. This is because algorithm (15) uses a parameter ($\alpha := 9/10$) larger than $1/2$, and algorithm (15) with $\alpha > 1/2$ potentially converges in the constraint set $C_1 \cap C_2$ faster than the previous algorithm in [21] with $\alpha := 1/2$.

Figure 3. Behavior of $F_n$ for algorithm (15) with $\alpha := 9/10$ and with $\alpha := 1/2$.

5 Conclusion

We studied a variational inequality problem for a monotone, hemicontinuous operator over the fixed point set of a strongly nonexpansive mapping in a Hilbert space and devised an iterative algorithm for solving it. Our convergence analyses guarantee that the algorithm weakly converges to a solution under certain assumptions. We gave numerical results to support the convergence analyses on the algorithm. The results showed that the algorithm converges to a solution to a concrete variational inequality problem faster than the previous algorithm.

6 Endnotes

a. We used the Gauche Scheme shell, 0.9.3.3 [utf-8,pthreads], x86_64-apple-darwin12.4.1.

b. We used gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00).

c. For example, we set a large parameter, i.e., much greater than $1/2$: $\alpha = 9/10$.

References

1. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1980.
2. Lions J-L, Stampacchia G: Variational inequalities. Commun. Pure Appl. Math. 1967, 20: 493–519. doi:10.1002/cpa.3160200302
3. Rockafellar RT, Wets RJ-B: Variational Analysis. Springer, Berlin; 1998.
4. Stampacchia G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 1964, 258: 4413–4416.
5. Takahashi W: Nonlinear Functional Analysis. Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.
6. Takahashi W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama; 2009.
7. Zeidler E: Nonlinear Functional Analysis and Its Applications. II/B. Springer, New York; 1990.
8. Zeidler E: Nonlinear Functional Analysis and Its Applications. III. Springer, New York; 1985.
9. Goldstein AA: Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70: 709–710. doi:10.1090/S0002-9904-1964-11178-2
10. Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Stud. Comput. Math. 8. North-Holland, Amsterdam; 2001:473–504.
11. Combettes PL: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process. 2003, 51: 1771–1782. doi:10.1109/TSP.2003.812846
12. Slavakis K, Yamada I: Robust wideband beamforming by the hybrid steepest descent method. IEEE Trans. Signal Process. 2007, 55: 4511–4522.
13. Iiduka H, Yamada I: An ergodic algorithm for the power-control games for CDMA data networks. J. Math. Model. Algorithms 2009, 8: 1–18. doi:10.1007/s10852-008-9099-4
14. Iiduka H: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math. Program. 2012, 133: 227–242. doi:10.1007/s10107-010-0427-x
15. Iiduka H: Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 2012, 236: 1733–1742. doi:10.1016/j.cam.2011.10.004
16. Iiduka H: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 2012, 22: 862–878. doi:10.1137/110849456
17. Iiduka H: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 2013, 23: 1–26. doi:10.1137/120866877
18. Iiduka H, Yamada I: Computational method for solving a stochastic linear-quadratic control problem given an unsolvable stochastic algebraic Riccati equation. SIAM J. Control Optim. 2012, 50: 2173–2192. doi:10.1137/110850542
19. Iiduka H: Acceleration method for convex optimization over the fixed point set of a nonexpansive mapping. Math. Program. 2014. doi:10.1007/s10107-013-0741-1
20. Iiduka H, Yamada I: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 2009, 19: 1881–1893. doi:10.1137/070702497
21. Iiduka H: A new iterative algorithm for the variational inequality problem over the fixed point set of a firmly nonexpansive mapping. Optimization 2010, 59: 873–885. doi:10.1080/02331930902884158
22. Bruck RE, Reich S: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 1977, 3: 459–470.
23. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597. doi:10.1090/S0002-9904-1967-11761-0
24. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
25. Tan KK, Xu HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178: 301–308. doi:10.1006/jmaa.1993.1309
26. Berinde V: Iterative Approximation of Fixed Points. Lecture Notes in Mathematics. Springer, Berlin; 2007.
27. Goebel K, Reich S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York; 1984.
28. Browder FE, Petryshyn WV: The solution by iteration of linear functional equations in Banach spaces. Bull. Am. Math. Soc. 1966, 72: 566–570. doi:10.1090/S0002-9904-1966-11543-4
29. Aoyama K, Kimura Y, Takahashi W, Toyoda M: On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007, 8: 471–489.
30. Baillon J-B, Haddad G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. doi:10.1007/BF03007664
31. Iiduka H: Strong convergence for an iterative method for the triple-hierarchical constrained optimization problem. Nonlinear Anal. 2009, 71: 1292–1297. doi:10.1016/j.na.2009.01.133
32. Combettes PL, Bondon P: Hard-constrained inconsistent signal feasibility problems. IEEE Trans. Signal Process. 1999, 47: 2460–2468. doi:10.1109/78.782189
33. Browder FE: Convergence theorems for sequences of nonlinear operators in Banach spaces. Math. Z. 1967, 100: 201–225. doi:10.1007/BF01109805
34. Reich S, Shoikhet D: Nonlinear Semigroups, Fixed Points, and Geometry of Domains in Banach Spaces. Imperial College Press, London; 2005.
35. Iiduka H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. 2011, 148: 580–592. doi:10.1007/s10957-010-9769-z
36. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. doi:10.1137/S0036144593251710


Acknowledgements

We are sincerely grateful to the Lead Guest Editor, Qamrul Hasan Ansari, of Special Issue on Variational Analysis and Fixed Point Theory: Dedicated to Professor Wataru Takahashi on the occasion of his 70th birthday, and the two anonymous referees for helping us improve the original manuscript. This work was supported by the Japan Society for the Promotion of Science through a Grant-in-Aid for JSPS Fellows (08J08592) and a Grant-in-Aid for Young Scientists (B) (23760077).

Author information

Correspondence to Hideaki Iiduka.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.



Cite this article
Iemoto, S., Hishinuma, K. & Iiduka, H. Approximate solutions to variational inequality over the fixed point set of a strongly nonexpansive mapping. Fixed Point Theory Appl. 2014, 51 (2014). doi:10.1186/1687-1812-2014-51


Keywords

  • variational inequality problem
  • fixed point set
  • strongly nonexpansive mapping
  • monotone operator