Strong convergence of relaxed hybrid steepest-descent methods for triple hierarchical constrained optimization

Abstract

Many practical problems, such as signal processing and network resource allocation, have been formulated as a monotone variational inequality over the fixed point set of a nonexpansive mapping, and iterative algorithms for solving these problems have been proposed. The purpose of this article is to investigate a monotone variational inequality with a variational inequality constraint over the fixed point set of one or finitely many nonexpansive mappings, which is called triple-hierarchical constrained optimization. Two relaxed hybrid steepest-descent algorithms for solving the triple-hierarchical constrained optimization problem are proposed, and their strong convergence is proven. Applications of these results to the constrained generalized pseudoinverse are included.

AMS Subject Classifications: 49J40; 65K05; 47H09.

1 Introduction

Let H be a real Hilbert space with inner product 〈·, ·〉 and norm ∥ · ∥, let C be a nonempty closed convex subset of H, and let R be the set of all real numbers. For a given nonlinear operator A : H → H, the classical variational inequality problem is to find a point x* ∈ C such that

〈Ax*, x − x*〉 ≥ 0, ∀x ∈ C.
(1.1)

The set of solutions of problem (1.1) is denoted by VI(C, A). Variational inequalities were initially studied by Stampacchia [1] and have since been widely studied, since they arise in disciplines as diverse as partial differential equations, optimal control, optimization, mathematical programming, mechanics, and finance. On the other hand, a number of mathematical programs and iterative algorithms have been developed to solve complex real-world problems. In particular, monotone variational inequalities with a fixed point constraint [2–4] include such practical problems as signal recovery [3], beamforming [5], and power control [6], and many iterative algorithms for solving them have been presented.

The constraint set has been defined in [3, 5] as the intersection of finitely many closed convex subsets, C0 and C_i (i = 1, 2, ..., m), of a real Hilbert space, and is represented as the fixed point set of the direct product mapping composed of the metric projections onto the C_i's. The case in which the intersection of the C_i's is empty has been considered in [2, 6]. When C0 is the absolute set, whose condition must be satisfied, the constraint set is defined as the subset of C0 whose elements are closest to the C_i's (i = 1, 2, ..., m) in terms of the norm. This set is represented as the fixed point set of the mapping composed of the metric projections onto the C_i's [[2], Proposition 4.2]. Iterative algorithms have been presented in [2–4] for the convex optimization problem with a fixed point constraint, along with proofs that these algorithms converge strongly to the unique solution of problems with a strongly monotone operator. The strong monotonicity condition guarantees the uniqueness of the solution. A hierarchical fixed point problem, equivalent to the variational inequality for a monotone operator over the fixed point set, has been discussed in [7, 8], along with iterative algorithms for solving it. The solution presented in [7, 8] is not always unique, so there may be many solutions to the problem. In that case, a solution that makes practical systems and networks more stable and reliable must be found from among the candidate solutions. Hence, it is reasonable to identify the unique minimizer of an appropriate objective function over the hierarchical fixed point constraint. Very recently, related iterative methods and their convergence analyses for solving hierarchical fixed point problems, hierarchical optimization problems, and hierarchical variational inequality problems can be found in [9–16].

Let T : H → H be a self-mapping on H. We denote by Fix(T) the set of fixed points of T. A mapping T : H → H is called L-Lipschitz continuous if there exists a constant L ≥ 0 such that

∥Tx − Ty∥ ≤ L∥x − y∥, ∀x, y ∈ H.
(1.2)

In particular, if L ∈ [0,1), T is called a contraction; if L = 1, T is called a nonexpansive mapping. A mapping A : H → H is called α-inverse strongly monotone if there exists α > 0 such that

〈Ax − Ay, x − y〉 ≥ α∥Ax − Ay∥², ∀x, y ∈ H.
(1.3)

Obviously, every inverse strongly monotone mapping is a monotone and Lipschitz continuous mapping; see, e.g., [17].

In 2001, Yamada [2] introduced a hybrid steepest-descent method for finding an element of VI(C, F). His idea is stated now. Assume that C is the fixed point set of a nonexpansive mapping T : H → H; that is,

C := Fix(T) = {x ∈ H : Tx = x}.

Suppose that F is η-strongly monotone and κ-Lipschitz continuous with constants η, κ > 0. Take a fixed number μ ∈ (0, 2η/κ2) and a sequence {λ n } ⊂ (0,1) satisfying the conditions below:

(L1) limn→∞λ n = 0;

(L2) ∑ n = 0 ∞ λ n =∞;

(L3) lim_{n→∞} (λ_n − λ_{n+1})/λ_{n+1}² = 0.

Starting with an arbitrary initial guess x0 ∈ H, one can generate a sequence {u n } by the following algorithm:

u_{n+1} := Tu_n − λ_{n+1}μF(Tu_n), ∀n ≥ 0.
(1.4)

Then, Yamada [2] proved that {u n } converges strongly to the unique element of VI(C, F). In the case where C is expressed as the intersection of the fixed-point sets of N nonexpansive mappings T i : H → H with N ≥ 1 an integer, Yamada [2] proposed another algorithm,

u_{n+1} := T_{[n+1]}u_n − λ_{n+1}μF(T_{[n+1]}u_n), ∀n ≥ 0,
(1.5)

where T_{[k]} := T_{k mod N} for integer k ≥ 1, with the mod function taking values in the set {1, 2, ..., N} (i.e., if k = jN + q for some integers j ≥ 0 and 0 ≤ q < N, then T_{[k]} = T_N if q = 0 and T_{[k]} = T_q if 1 ≤ q < N; for example, with N = 3 one has T_{[1]} = T_1, T_{[2]} = T_2, T_{[3]} = T_3, T_{[4]} = T_1, and so on), where μ ∈ (0, 2η/κ2) and where the sequence {λ n } of parameters satisfies conditions (L1), (L2), and (L4):

(L4) ∑_{n=0}^∞ |λ_n − λ_{n+N}| is convergent.

Under these conditions, Yamada [2] proved the strong convergence of {u n } to the unique element of VI(C,F).
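
To make iteration (1.4) concrete, the following minimal numerical sketch (ours, not taken from [2] or [18]) runs it on a toy instance in which T is the metric projection onto the closed unit ball of R², so that Fix(T) is the ball, and F(u) = u − b, which is 1-strongly monotone and 1-Lipschitz; by Proposition 2.1 (v) below, the unique element of VI(Fix(T), F) is then the projection of b onto the ball. The step sizes λ_n = (n + 2)^{−0.9} satisfy (L1)-(L3).

```python
# Minimal numerical sketch of Yamada's hybrid steepest-descent iteration (1.4).
# Hypothetical toy instance (not from the paper): H = R^2, T = projection onto
# the closed unit ball (so Fix(T) is the ball), and F(u) = u - b, which is
# 1-strongly monotone and 1-Lipschitz; the unique point of VI(Fix(T), F) is the
# projection of b onto the ball.
import numpy as np

def T(u):                            # metric projection onto the closed unit ball
    r = np.linalg.norm(u)
    return u if r <= 1.0 else u / r

b = np.array([3.0, 4.0])
F = lambda u: u - b                  # eta = kappa = 1, so mu must lie in (0, 2)
mu = 1.0
lam = lambda n: (n + 2.0) ** -0.9    # step sizes satisfying (L1)-(L3)

u = np.zeros(2)
for n in range(20000):
    Tu = T(u)
    u = Tu - lam(n + 1) * mu * F(Tu)   # iteration (1.4)

print(u, T(b))   # both approach the projection of b onto the ball, here (0.6, 0.8)
```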

In 2003, Xu and Kim [18] continued the convergence study of the hybrid steepest-descent algorithms (1.4) and (1.5). The major contribution is that the strong convergence of the algorithms (1.4) and (1.5) holds with the condition (L3) replaced by the condition

(L3)' limn→∞λ n /λn+1= 1, or equivalently, limn→∞(λ n - λn+1)/λn+1= 0, and with condition (L4) replaced by the condition

(L4)' limn→∞λ n /λn+N= 1, or equivalently, limn→∞(λ n - λn+N)/λn+N= 0.

Theorem XK1 (see [[18], Theorem 3.1]). Assume that 0 < μ < 2η/κ2. Assume also that the control conditions (L1), (L2), and (L3)' hold for {λ n }. Then, the sequence {u n } generated by algorithm (1.4) converges strongly to the unique element u* of VI(C, F).

Theorem XK2 (see [[18], Theorem 3.2]). Let μ ∈ (0,2η/κ2) and let conditions (L1), (L2), and (L4)' be satisfied. Assume in addition that

C = ⋂_{i=1}^N Fix(T_i) = Fix(T_1T_2⋯T_N) = Fix(T_NT_1⋯T_{N−1}) = ⋯ = Fix(T_2T_3⋯T_NT_1).

Then, the sequence {u n } generated by algorithm (1.5) converges in norm to the unique element u* of VI(C,F).

Recall the variational inequality for a monotone operator A1 : H → H over the fixed point set of a nonexpansive mapping T : H → H:

Find x̄ ∈ VI(Fix(T), A_1) := {x̄ ∈ Fix(T) : 〈A_1x̄, y − x̄〉 ≥ 0, ∀y ∈ Fix(T)},

where Fix(T) := {x ∈ H : Tx = x} ≠ ∅. Very recently, Iiduka [19] introduced the following monotone variational inequality with a variational inequality constraint over the fixed point set of a nonexpansive mapping:

Problem I (see [[19], Problem 3.1]). Assume that

  1. (i)

    T : H → H is a nonexpansive mapping with Fix ( T ) ≠∅;

  2. (ii)

    A1 : H → H is α-inverse strongly monotone;

  3. (iii)

    A2: H → H is β-strongly monotone and L-Lipschitz continuous, that is, there are constants β, L > 0 such that

    〈A_2x − A_2y, x − y〉 ≥ β∥x − y∥² and ∥A_2x − A_2y∥ ≤ L∥x − y∥

for all x, y ∈ H;

  1. (iv)

    VI(Fix(T), A_1) ≠ ∅.

Then the objective is to

find x* ∈ VI(VI(Fix(T), A_1), A_2) := {x* ∈ VI(Fix(T), A_1) : 〈A_2x*, v − x*〉 ≥ 0, ∀v ∈ VI(Fix(T), A_1)}.

Since this problem has a triple structure, in contrast with bilevel programming problems, hierarchical constrained optimization problems, and hierarchical fixed point problems, it is referred to as a triple-hierarchical constrained optimization problem (THCOP). He presented some examples of the THCOP and proposed an iterative algorithm for finding solutions of such a problem.

Algorithm I (see [[19], Algorithm 4.1]). Let T : H → H and A i : H → H (i = 1, 2) satisfy Assumptions (i)-(iv) in Problem I. The following steps are presented for solving Problem I; a small numerical sketch is given after the steps.

Step 0. Take { α n } n = 0 ∞ , { λ n } n = 0 ∞ ⊂ ( 0 , ∞ ) , and μ > 0, choose x0 ∈ H arbitrarily, and let n := 0.

Step 1. Given x n ∈ H, compute xn+1∈ H as

y_n := T(x_n − λ_nA_1x_n),    x_{n+1} := y_n − μα_nA_2y_n.

Update n := n + 1 and go to Step 1.
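
For illustration, here is a minimal sketch of Algorithm I in which the mapping T, the operators A_1, A_2, and the parameter sequences are our own assumptions, chosen so that the solution of Problem I can be verified by hand; it is not an implementation taken from [19].

```python
# Illustrative sketch of Algorithm I (relaxed hybrid steepest-descent); the
# particular T, A1, A2 and parameter sequences are assumptions chosen so that
# the solution can be checked by hand, they are not from [19].
import numpy as np

def T(x):                        # nonexpansive: projection onto the closed unit ball
    r = np.linalg.norm(x)
    return x if r <= 1.0 else x / r

A1 = lambda x: np.array([x[0] - 0.5, 0.0])  # 1-inverse strongly monotone (gradient of a convex quadratic)
c  = np.array([0.0, 2.0])
A2 = lambda x: x - c                        # 1-strongly monotone, 1-Lipschitz, so mu in (0, 2)

mu = 1.0
alpha = lambda n: 1.0 / (n + 1)             # conditions (i)-(iii) of Theorem I
lam   = lambda n: 0.5 / (n + 1)             # conditions (iv)-(v): lam_n <= alpha_n

x = np.zeros(2)
for n in range(200000):
    y = T(x - lam(n) * A1(x))               # Step 1 of Algorithm I
    x = y - mu * alpha(n) * A2(y)

# VI(Fix(T), A1) is the chord {x in the ball : x_1 = 0.5}; minimizing the
# distance to c over that chord gives x* = (0.5, sqrt(0.75)) ~ (0.5, 0.866).
print(x)
```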

The convergence analysis of the proposed algorithm was also studied in [19]. The following strong convergence theorem is established for Algorithm I.

Theorem I (see [[19], Theorem 4.1]). Assume that {y_n}_{n=0}^∞ in Algorithm I is bounded. If μ ∈ (0, 2β/L²) is used and if {α_n}_{n=0}^∞ ⊂ (0, 1] and {λ_n}_{n=0}^∞ ⊂ (0, 2α] satisfying (i) lim_{n→∞} α_n = 0, (ii) ∑_{n=0}^∞ α_n = ∞, (iii) ∑_{n=0}^∞ |α_{n+1} − α_n| < ∞, (iv) ∑_{n=0}^∞ |λ_{n+1} − λ_n| < ∞, and (v) λ_n ≤ α_n for all n ≥ 0 are used, then the sequence {x_n}_{n=0}^∞ generated by Algorithm I satisfies the following properties.

  1. (a)

    { x n } n = 0 ∞ is bounded;

  2. (b)

    limn→∞∥x n - y n ∥ = 0 and limn→∞∥x n - Tx n ∥ = 0 hold;

  3. (c)

    If ∥x n - y n ∥ = o(λ n ), { x n } n = 0 ∞ converges strongly to the unique solution of Problem I.

Motivated and inspired by the above research work, we continue the convergence study of Iiduka's relaxed hybrid steepest-descent Algorithm I. It is proven that, without the boundedness assumption on {y_n}_{n=0}^∞, the sequence {x_n}_{n=0}^∞ converges strongly to the unique solution of Problem I.

On the other hand, we introduce the following monotone variational inequality with a variational inequality constraint over the intersection of the fixed point sets of N nonexpansive mappings T i : H → H, with N ≥ 1 an integer.

Problem II. Assume that

  1. (i)

    each T i : H → H is a nonexpansive mapping with ⋂ i = 1 N Fix ( T i ) ≠∅;

  2. (ii)

    A1 : H → H is α-inverse strongly monotone;

  3. (iii)

    A2: H → H is β-strongly monotone and L-Lipschitz continuous;

  4. (iv)

    VI(⋂_{i=1}^N Fix(T_i), A_1) ≠ ∅.

Then the objective is to

find x* ∈ VI(VI(⋂_{i=1}^N Fix(T_i), A_1), A_2) := {x* ∈ VI(⋂_{i=1}^N Fix(T_i), A_1) : 〈A_2x*, v − x*〉 ≥ 0, ∀v ∈ VI(⋂_{i=1}^N Fix(T_i), A_1)}.

Another algorithm is proposed for Problem II.

Algorithm II. Let T i : H → H (i = 1,2,..., N) and A i : H → H (i = 1,2) satisfy Assumptions (i)-(iv) in Problem II. The following steps are presented for solving Problem II.

Step 0. Take {α_n}_{n=0}^∞ ⊂ (0, 1], {λ_n}_{n=0}^∞ ⊂ (0, 2α], μ ∈ (0, 2β/L²), choose x0 ∈ H arbitrarily, and let n := 0.

Step 1. Given x n ∈ H, compute xn+1∈ H as

y_n := T_{[n+1]}(x_n − λ_nA_1x_n),    x_{n+1} := y_n − μα_nA_2y_n.

Update n := n + 1 and go to Step 1.

In this article, suppose first that the following conditions hold (a concrete admissible choice of the sequences is given after the list):

(A1) limn→∞α n = 0;

(A2) ∑ n = 0 ∞ α n =∞;

(A3) lim_{n→∞}(α_n − α_{n+1})/α_{n+1} = 0 or ∑_{n=0}^∞ |α_{n+1} − α_n| < ∞;

(A4) lim_{n→∞}(λ_n − λ_{n+1})/λ_{n+1} = 0 or ∑_{n=0}^∞ |λ_{n+1} − λ_n| < ∞;

(A5) λ n ≤ α n for all n ≥ 0.
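
For illustration (this particular choice is ours, not from the text), one admissible selection is α_n = 1/(n + 1) and λ_n = c/(n + 1) with a constant 0 < c < min{1, 2α}: Conditions (A1), (A2), and (A5) are then immediate, while ∑_{n=0}^∞ |α_{n+1} − α_n| = ∑_{n=0}^∞ 1/((n + 1)(n + 2)) < ∞ yields (A3), and similarly for (A4). Since |α_{n+N} − α_n| ≤ ∑_{k=0}^{N−1} |α_{n+k+1} − α_{n+k}|, the same choice also satisfies Conditions (B1)-(B5) below.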

It is proven that under Conditions (A1)-(A5), the sequence { x n } n = 0 ∞ generated by Algorithm I converges strongly to the unique solution of Problem I.

Second, assume that there hold the following conditions:

(B1) limn→∞α n = 0;

(B2) ∑ n = 0 ∞ α n =∞;

(B3) lim_{n→∞}(α_n − α_{n+N})/α_{n+N} = 0 or ∑_{n=0}^∞ |α_{n+N} − α_n| < ∞;

(B4) lim_{n→∞}(λ_n − λ_{n+N})/λ_{n+N} = 0 or ∑_{n=0}^∞ |λ_{n+N} − λ_n| < ∞;

(B5) λ n ≤ α n for all n ≥ 0.

It is proven that under Conditions (B1)-(B5), the sequence {x_n}_{n=0}^∞ generated by Algorithm II converges strongly to the unique solution of Problem II. It is worth pointing out that in our results no boundedness assumption is imposed on the sequences {x_n} and {y_n} generated by Algorithm I or II.

In addition, if N = 1, then Algorithm II reduces to Algorithm I above. Hence, Algorithm II is more general and more flexible than Algorithm I. Obviously, our problem of finding the unique element of VI(VI(⋂_{i=1}^N Fix(T_i), A_1), A_2) is more general and more subtle than the problem of finding the unique element of VI(VI(Fix(T), A_1), A_2). Beyond question, our results represent the modification, supplement, extension, and development of Theorem I above.

The rest of the article is organized as follows. After some preliminaries in Section 2, we introduce two relaxed hybrid steepest-descent algorithms for solving Problems I and II in Section 3, respectively. Strong convergence for them is proven. Applications of these results to constrained generalized pseudoinverse are given in the last section, Section 4.

2 Preliminaries

Let H be a real Hilbert space with an inner product 〈·,·〉 and its induced norm ∥ · ∥. Throughout this article, we write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x, and x_n → x to indicate that {x_n} converges strongly to x. A function f : H → R is said to be convex iff, for any x, y ∈ H and for any λ ∈ [0,1], f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y). It is said to be strongly convex iff there exists α > 0 such that, for all x, y ∈ H and for all λ ∈ [0, 1], f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) − (1/2)αλ(1 − λ)∥x − y∥².

A : H → H is referred to as a strongly monotone operator with α > 0 [[20], Definition 25.2(iii)] iff 〈Ax - Ay, x - y〉 ≥ α∥x - y∥2 for all x, y ∈ H. It is said to be inverse-strongly monotone with α > 0 (α-inverse-strongly monotone) [[17], Definition, p. 200] (see [[21], Definition 2.3.9(e)] for the definition of this operator, called a co-coercive operator, on the finite dimensional spaces) iff 〈Ax -Ay, x - y〉 ≥ α∥Ax - Ay∥2 for all x,y ∈ H.

A : H → H is said to be hemicontinuous [[22], p. 204], [[20], Definition 27.14] iff, for any x,y ∈ H, the mapping g : [0,1] → H, defined by g(t) := A(tx + (1 - t)y) (t ∈ [0,1]), is continuous, where H has a weak topology. A : H → H is referred to as a Lipschitz continuous (L-Lipschitz continuous) operator [[23], Sect. 1.1], [[20], Definition 27.14] iff L > 0 exists such that ∥Ax - Ay∥ ≤ L∥x - y∥ for all x,y ∈ H. The fixed point set of the mapping A : H → H is denoted by Fix(A) := {x ∈ H : Ax = x}.

Let f : H → R be a Frechet differentiable function. Then f is convex (resp. strongly convex) iff ∇f : H → H is monotone (resp. strongly monotone) [[20], Proposition 25.10], [[24], Sect. IV, Theorem 4.1.4]. If f : H → R is convex and if ∇f : H → H is 1/L-Lipschitz continuous, then ∇f is L-inverse-strongly monotone [[25], Theorem 5].

The metric projection onto the nonempty, closed and convex set C (⊂ H), denoted by P C , is defined by, for all x ∈ H, P C x ∈ C and ∥x - P C x∥ = infy∈C∥x - y∥.
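
For example (these standard closed forms are not spelled out in the text), if C is the closed ball {x ∈ H : ∥x∥ ≤ r}, then P_C x = x when ∥x∥ ≤ r and P_C x = (r/∥x∥)x otherwise; and if C = [a, b] ⊂ R, then P_C x = min{max{x, a}, b}. Such explicit projections are what make the algorithms of Sections 3 and 4 implementable.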

The variational inequality [1, 26] for a monotone operator A : H → H over a nonempty, closed, and convex set C (⊂ H), is to find a point in

VI(C, A) := {x* ∈ C : 〈Ax*, y − x*〉 ≥ 0, ∀y ∈ C}.

Some properties of the solution set of the monotone variational inequality are as follows:

Proposition 2.1. Let C (⊂ H) be nonempty, closed and convex, A : H → H be monotone and hemicontinuous, and f : H → R be convex and Frechet differentiable. Then,

  1. (i)

    [[22], Lemma 7.1.7] VI(C,A) = {x* ∈ C : 〈Ay, y - x*〉 ≥ 0, ∀y ∈ C}.

  2. (ii)

    [[20], Theorem 25.C] VI ( C , A ) ≠∅ when C is bounded.

  3. (iii)

    [[27], Lemma 2.24] VI(C, A) = Fix(P C (I - λA)) for all λ > 0, where I stands for the identity mapping on H.

  4. (iv)

    [[27], Theorem 2.31] VI(C, A) consists of one point, if A is strongly monotone and Lipschitz continuous.

  5. (v)

    [[26], Chap. II, Proposition 2.1 (2.1) and (2.2)] VI(C, ∇f) = Argminx∈Cf(x) := {x* ∈ C: f(x*) = minx∈Cf(x)}.

On the other hand, the mapping T : H → H is referred to as a nonexpansive mapping [22, 23, 28–30] iff, ∥Tx - Ty∥ ≤ ∥x - y∥ for all x,y ∈ H. The metric projection P C onto a given nonempty, closed, and convex set C (⊂ H), satisfies the nonexpansivity with Fix(P C ) = C [[22], Theorem 3.1.4(i)], [[29], p. 371], [[30], Theorem 2.4-3]. The fixed point set of a nonexpansive mapping has the following properties:

Proposition 2.2. Let C (⊂ H) be nonempty, closed, and convex, and T : C → C be nonexpansive. Then,

  1. (i)

    [[23], Proposition 5.3] Fix(T) is closed and convex;

  2. (ii)

    [[23], Theorem 5.1] Fix ( T ) ≠∅ when C is bounded.

The following proposition provides an example of a nonexpansive mapping in which the fixed point set is equal to the solution set of the monotone variational inequality.

Proposition 2.3 (see [[19], Proposition 2.3]). Let C (⊂ H) be nonempty, closed, and convex, and A : H → H be α-inverse-strongly monotone. Then, for any given λ ∈ [0, 2α], S λ : H → H defined by

S_λx := P_C(I − λA)x, ∀x ∈ H,

satisfies the nonexpansivity and Fix(S λ ) = VI(C,A).

The following proposition is needed to prove the main theorems in this article.

Proposition 2.4 (see [[2], Lemma 3.1]). Let A : H → H be β-strongly monotone and L-Lipschitz continuous, let T : H → H be a nonexpansive mapping and let μ ∈ (0,2β/L2). For λ ∈ [0,1], define Tλ: H → H by Tλx := Tx - λμATx for all x ∈ H. Then, for all x, y ∈ H,

∥T^λx − T^λy∥ ≤ (1 − λτ)∥x − y∥

holds, where τ := 1 − √(1 − μ(2β − μL²)) ∈ (0, 1].
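
As a quick numerical illustration (ours), if β = 1, L = 2, and μ = 1/4 ∈ (0, 2β/L²) = (0, 1/2), then τ = 1 − √(1 − μ(2β − μL²)) = 1 − √(1 − (1/4)(2 − 1)) = 1 − √(3/4) ≈ 0.134, so each application of T^λ contracts distances by the factor 1 − λτ ≈ 1 − 0.134λ.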

The following lemmas will be used for the proof of our main results in this article.

Lemma 2.1 (see [31]). Let {a n } be a sequence of nonnegative real numbers satisfying the property

a_{n+1} ≤ (1 − s_n)a_n + s_nt_n + δ_n, ∀n ≥ 0,

where {s n } ⊂ (0,1] and {t n } are such that

  1. (i)

    ∑ n = 0 ∞ s n =∞;

  2. (ii)

    either lim sup_{n→∞} t_n ≤ 0 or ∑_{n=0}^∞ |s_nt_n| < ∞;

  3. (iii)

    ∑ n = 0 ∞ δ n <∞.

Then lim_{n→∞} a_n = 0.

Lemma 2.2 (see [[23], Demiclosedness Principle]). Assume that T is a nonexpansive self-mapping of a closed convex subset C of a Hilbert space H. If T has a fixed point, then I - T is demiclosed. That is, whenever {x n } is a sequence in C weakly converging to some x ∈ C and the sequence {(I - T)x n } strongly converges to some y, it follows that (I - T)x = y. Here I is the identity operator of H.

The following lemma is an immediate consequence of an inner product.

Lemma 2.3. In a real Hilbert space H, there holds the inequality

∥x + y∥² ≤ ∥x∥² + 2〈y, x + y〉, ∀x, y ∈ H.
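
Indeed (a standard one-line expansion), ∥x + y∥² = ∥x∥² + 2〈x, y〉 + ∥y∥² = ∥x∥² + 2〈y, x + y〉 − ∥y∥² ≤ ∥x∥² + 2〈y, x + y〉.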

Lemma 2.4. Let { a n } n = 0 ∞ be a bounded sequence of nonnegative real numbers and { b n } n = 0 ∞ be a sequence of real numbers such that lim supn→ ∞b n ≤ 0. Then, lim supn→ ∞a n b n ≤ 0.

Proof. Since { a n } n = 0 ∞ is a bounded sequence of nonnegative real numbers, there is a constant a > 0 such that 0 ≤ a n ≤ a for all n ≥ 0. Note that lim supn→ ∞b n ≤ 0. Hence, given ε > 0 arbitrarily, there exists an integer n0 ≥ 1 such that b n < ε for all n ≥ n0. This implies that

a n b n ≤ a n ε ≤ a ε , ∀ n ≥ n 0 .

Therefore, we have

lim sup n → ∞ a n b n ≤ a ε .

From the arbitrariness of ε > 0, it follows that lim supn→ ∞a n b n ≤ 0.

3 Relaxed hybrid steepest-descent algorithms

In this section, T : H → H and A i : H → H (i = 1, 2) are assumed to satisfy Assumptions (i)-(iv) in Problem I. First the following algorithm is presented for Problem I.

Algorithm 3.1.

Step 0. Take {α_n}_{n=0}^∞ ⊂ (0, 1], {λ_n}_{n=0}^∞ ⊂ (0, 2α], μ ∈ (0, 2β/L²), choose x0 ∈ H arbitrarily, and let n := 0.

Step 1. Given x n ∈ H, compute xn+1∈ H as

y_n := T(x_n − λ_nA_1x_n),    x_{n+1} := y_n − μα_nA_2y_n.

Update n := n + 1 and go to Step 1.

The following convergence analysis is presented for Algorithm 3.1:

Theorem 3.1. Let μ ∈ (0, 2β/L²), {α_n}_{n=0}^∞ ⊂ (0, 1], and {λ_n}_{n=0}^∞ ⊂ (0, 2α] be such that

  1. (i)

    limn→ ∞α n = 0;

  2. (ii)

    ∑ n = 0 ∞ α n =∞;

  3. (iii)

    lim_{n→∞}(α_n − α_{n+1})/α_{n+1} = 0 or ∑_{n=0}^∞ |α_{n+1} − α_n| < ∞;

  4. (iv)

    lim_{n→∞}(λ_n − λ_{n+1})/λ_{n+1} = 0 or ∑_{n=0}^∞ |λ_{n+1} − λ_n| < ∞;

  5. (v)

    λ n ≤ α n for all n ≥ 0.

Then the sequence { x n } n = 0 ∞ generated by Algorithm 3.1 satisfies the following properties:

  1. (a)

    { x n } n = 0 ∞ is bounded;

  2. (b)

    limn→ ∞∥x n - y n ∥ = 0 and limn→ ∞∥x n - Tx n ∥ = 0 hold;

  3. (c)

    { x n } n = 0 ∞ converges strongly to the unique solution of Problem I provided ∥x n - y n ∥ = o(λ n ).

Proof. Let {x*} = VI(VI(Fix(T), A1), A2). Assumption (iii) in Problem I guarantees that

∥A_2y_n − A_2x*∥ ≤ L∥y_n − x*∥, ∀n ≥ 0.

Putting z n = x n - λ n A1x n for all n ≥ 0, we have

x_{n+1} = y_n − μα_nA_2y_n = Tz_n − μα_nA_2Tz_n = T^{α_n}z_n, ∀n ≥ 0.

We divide the rest of the proof into several steps.

Step 1. {x n } is bounded. Indeed, since A1 is α-inverse strongly monotone and { λ n } n = 0 ∞ ⊂ 0 , 2 α , we have

∥x_n − x* − λ_n(A_1x_n − A_1x*)∥² = ∥x_n − x*∥² − 2λ_n〈A_1x_n − A_1x*, x_n − x*〉 + λ_n²∥A_1x_n − A_1x*∥²
 ≤ ∥x_n − x*∥² − λ_n(2α − λ_n)∥A_1x_n − A_1x*∥² ≤ ∥x_n − x*∥².

Utilizing Proposition 2.4 and Condition (v) we have (note that T^{α_n}x* = x* − α_nμA_2x*)

∥x_{n+1} − x*∥ = ∥T^{α_n}z_n − x*∥ ≤ ∥T^{α_n}z_n − T^{α_n}x*∥ + ∥T^{α_n}x* − x*∥
 ≤ (1 − α_nτ)∥z_n − x*∥ + α_nμ∥A_2x*∥
 = (1 − α_nτ)∥x_n − x* − λ_nA_1x_n∥ + α_nμ∥A_2x*∥
 = (1 − α_nτ)∥x_n − x* − λ_n(A_1x_n − A_1x*) − λ_nA_1x*∥ + α_nμ∥A_2x*∥
 ≤ (1 − α_nτ)∥x_n − x* − λ_n(A_1x_n − A_1x*)∥ + λ_n∥A_1x*∥ + α_nμ∥A_2x*∥
 ≤ (1 − α_nτ)∥x_n − x*∥ + λ_n∥A_1x*∥ + α_nμ∥A_2x*∥
 ≤ (1 − α_nτ)∥x_n − x*∥ + α_n(∥A_1x*∥ + μ∥A_2x*∥)
 = (1 − α_nτ)∥x_n − x*∥ + α_nτ·(∥A_1x*∥ + μ∥A_2x*∥)/τ
 ≤ max{∥x_n − x*∥, (∥A_1x*∥ + μ∥A_2x*∥)/τ},
(3.1)

where τ := 1 − √(1 − μ(2β − μL²)). By induction, it is easy to see that

∥x_{n+1} − x*∥ ≤ max{∥x_0 − x*∥, (∥A_1x*∥ + μ∥A_2x*∥)/τ}, ∀n ≥ 0.

This implies that { x n } n = 0 ∞ is bounded. Assumption (ii) in Problem I guarantees that A1 is 1/α-Lipschitz continuous; that is,

∥A_1x_n − A_1x*∥ ≤ (1/α)∥x_n − x*∥, ∀n ≥ 0.

Thus, the boundedness of {x n } ensures the boundedness of {A1x n }. From y n = T(x n - λ n A1x n ) and the nonexpansivity of T, it follows that { y n } n = 0 ∞ is bounded. Since A2 is L-Lipschitz continuous, {A2y n } is also bounded.

Step 2. limn→ ∞∥x n - y n ∥ = limn→ ∞∥x n - Tx n ∥ = 0. Indeed, utilizing Proposition 2.4, we obtain from the α-inversely strong monotonicity of A1 that

∥x_{n+1} − x_n∥ = ∥T^{α_n}z_n − T^{α_{n−1}}z_{n−1}∥ ≤ ∥T^{α_n}z_n − T^{α_n}z_{n−1}∥ + ∥T^{α_n}z_{n−1} − T^{α_{n−1}}z_{n−1}∥
 ≤ (1 − α_nτ)∥z_n − z_{n−1}∥ + |α_n − α_{n−1}|μ∥A_2Tz_{n−1}∥
 = (1 − α_nτ)∥x_n − λ_nA_1x_n − x_{n−1} + λ_{n−1}A_1x_{n−1}∥ + |α_n − α_{n−1}|μ∥A_2y_{n−1}∥
 = (1 − α_nτ)∥x_n − x_{n−1} − λ_n(A_1x_n − A_1x_{n−1}) + (λ_{n−1} − λ_n)A_1x_{n−1}∥ + |α_n − α_{n−1}|μ∥A_2y_{n−1}∥
 ≤ (1 − α_nτ)∥x_n − x_{n−1} − λ_n(A_1x_n − A_1x_{n−1})∥ + |λ_n − λ_{n−1}|∥A_1x_{n−1}∥ + |α_n − α_{n−1}|μ∥A_2y_{n−1}∥
 ≤ (1 − α_nτ)∥x_n − x_{n−1}∥ + |λ_n − λ_{n−1}|∥A_1x_{n−1}∥ + |α_n − α_{n−1}|μ∥A_2y_{n−1}∥.

Since both {A1x n } and {A2y n } are bounded, from Lemma 2.1 and Conditions (iii), (iv) it follows that

lim_{n→∞} ∥x_{n+1} − x_n∥ = 0.
(3.2)

In the meantime, from ∥xn+1-y n ∥ = α n μ∥ A2y n ∥ and Condition (i), we get limn→ ∞∥xn+1-y n ∥ = 0. Since ∥x n - y n ∥ ≤ ∥x n - xn+1∥ + ∥xn+1- y n ∥,

lim_{n→∞} ∥x_n − y_n∥ = 0
(3.3)

is obtained from (3.2). Moreover, the nonexpansivity of T guarantees that

∥y_n − Tx_n∥ = ∥T(x_n − λ_nA_1x_n) − Tx_n∥ ≤ λ_n∥A_1x_n∥.

Hence, Conditions (i) and (v) lead to limn→ ∞∥y n - Tx n ∥ = 0. Therefore,

lim_{n→∞} ∥x_n − Tx_n∥ = 0
(3.4)

is obtained from (3.3).

Step 3. lim supn→ ∞〈A1x*,x* - x n 〉 ≤ 0. Indeed, choose a subsequence { x n i } of {x n } such that

lim sup_{n→∞} 〈A_1x*, x* − x_n〉 = lim_{i→∞} 〈A_1x*, x* − x_{n_i}〉.

The boundedness of {x_{n_i}} implies the existence of a subsequence {x_{n_{i_j}}} of {x_{n_i}} and a point x̂ ∈ H such that x_{n_{i_j}} ⇀ x̂. We may assume without loss of generality that x_{n_i} ⇀ x̂, that is, lim_{i→∞} 〈w, x_{n_i}〉 = 〈w, x̂〉 for all w ∈ H.

First, we can readily see that x̂ ∈ Fix(T). As a matter of fact, utilizing Lemma 2.2 we deduce immediately from (3.4) and x_{n_i} ⇀ x̂ that x̂ ∈ Fix(T). From x* ∈ VI(Fix(T), A_1), we derive

lim sup_{n→∞} 〈A_1x*, x* − x_n〉 = lim_{i→∞} 〈A_1x*, x* − x_{n_i}〉 = 〈A_1x*, x* − x̂〉 ≤ 0.
(3.5)

Step 4. lim supn→∞〈A2x*,x* - x n 〉 ≤ 0. Indeed, choose a subsequence { x n k } of {x n } such that

lim sup_{n→∞} 〈A_2x*, x* − x_n〉 = lim_{k→∞} 〈A_2x*, x* − x_{n_k}〉.

The boundedness of {x_{n_k}} implies that there is a subsequence of {x_{n_k}} which converges weakly to a point x̄ ∈ H. Without loss of generality, we may assume that x_{n_k} ⇀ x̄. Utilizing Lemma 2.2 we conclude immediately from (3.4) and x_{n_k} ⇀ x̄ that x̄ ∈ Fix(T).

Let y ∈ Fix(T) be fixed arbitrarily. Then, in terms of Lemma 2.3, we conclude from the nonexpansivity of T and monotonicity of A1 that for all n ≥ 0,

∥y_n − y∥² = ∥T(x_n − λ_nA_1x_n) − Ty∥² ≤ ∥(x_n − y) − λ_nA_1x_n∥²
 = ∥x_n − y∥² + 2λ_n〈A_1x_n, y − x_n〉 + λ_n²∥A_1x_n∥²
 ≤ ∥x_n − y∥² + 2λ_n〈A_1y, y − x_n〉 + λ_n²∥A_1x_n∥²,
(3.6)

which implies that for all n ≥ 0,

0 ≤ (1/λ_n)(∥x_n − y∥² − ∥y_n − y∥²) + 2〈A_1y, y − x_n〉 + λ_n∥A_1x_n∥²
 = (∥x_n − y∥ + ∥y_n − y∥)·(∥x_n − y∥ − ∥y_n − y∥)/λ_n + 2〈A_1y, y − x_n〉 + λ_n∥A_1x_n∥²
 ≤ M_0(∥x_n − y_n∥/λ_n + λ_n) + 2〈A_1y, y − x_n〉,

where M_0 := sup{∥x_n − y∥ + ∥y_n − y∥ + ∥A_1x_n∥² : n ≥ 0} < ∞. From ∥x_n − y_n∥ = o(λ_n) and Conditions (i) and (v), for any ε > 0 there exists an integer m_0 ≥ 0 such that M_0(∥x_n − y_n∥/λ_n + λ_n) ≤ ε for all n ≥ m_0. Hence, 0 ≤ ε + 2〈A_1y, y − x_n〉 for all n ≥ m_0. Putting n := n_k and letting k → ∞, we derive ε + 2〈A_1y, y − x̄〉 ≥ 0 from x_{n_k} ⇀ x̄ ∈ Fix(T). Since ε > 0 is arbitrary, it is clear that 〈A_1y, y − x̄〉 ≥ 0 for all y ∈ Fix(T). Accordingly, utilizing Proposition 2.1 (i) we deduce from the α-inverse strong monotonicity of A_1 that x̄ ∈ VI(Fix(T), A_1). Therefore, from {x*} = VI(VI(Fix(T), A_1), A_2), we have

lim sup_{n→∞} 〈A_2x*, x* − x_n〉 = lim_{k→∞} 〈A_2x*, x* − x_{n_k}〉 = 〈A_2x*, x* − x̄〉 ≤ 0.
(3.7)

Step 5. limn→∞∥x n - x*∥ = 0. Indeed, observe first that for all n ≥ 0,

∥z_n − x*∥² = ∥x_n − x* − λ_nA_1x_n∥² = ∥x_n − x*∥² − 2λ_n〈A_1x_n, x_n − x*〉 + λ_n²∥A_1x_n∥²
 = ∥x_n − x*∥² − 2λ_n〈A_1x_n − A_1x*, x_n − x*〉 + 2λ_n〈A_1x*, x* − x_n〉 + λ_n²∥A_1x_n∥²
 ≤ ∥x_n − x*∥² + 2λ_n〈A_1x*, x* − x_n〉 + λ_n²M_0.

Utilizing Lemma 2.3 and Proposition 2.4, we deduce from Inequality (3.6) that for all n ≥ 0,

∥x_{n+1} − x*∥² = ∥T^{α_n}z_n − T^{α_n}x* + T^{α_n}x* − x*∥²
 ≤ ∥T^{α_n}z_n − T^{α_n}x*∥² + 2〈T^{α_n}x* − x*, x_{n+1} − x*〉
 ≤ (1 − α_nτ)²∥z_n − x*∥² + 2μα_n〈A_2x*, x* − x_{n+1}〉
 ≤ (1 − α_nτ)∥z_n − x*∥² + 2μα_n〈A_2x*, x* − x_{n+1}〉
 ≤ (1 − α_nτ)[∥x_n − x*∥² + 2λ_n〈A_1x*, x* − x_n〉 + λ_n²M_0] + 2μα_n〈A_2x*, x* − x_{n+1}〉
 ≤ (1 − α_nτ)∥x_n − x*∥² + 2λ_n(1 − α_nτ)〈A_1x*, x* − x_n〉 + λ_n²M_0 + 2μα_n〈A_2x*, x* − x_{n+1}〉
 = (1 − α_nτ)∥x_n − x*∥² + α_nτ·(1/τ)[(2λ_n/α_n)(1 − α_nτ)〈A_1x*, x* − x_n〉 + M_0(λ_n/α_n)λ_n + 2μ〈A_2x*, x* − x_{n+1}〉].
(3.8)

It is easy to see that both {(2λ_n/α_n)(1 − α_nτ)} and {M_0λ_n/α_n} are bounded and nonnegative sequences. Since ∑_{n=0}^∞ α_n = ∞, λ_n ≤ α_n → 0 (n → ∞), lim sup_{n→∞} 〈A_1x*, x* − x_n〉 ≤ 0 and lim sup_{n→∞} 〈A_2x*, x* − x_{n+1}〉 ≤ 0, we conclude that ∑_{n=0}^∞ α_nτ = ∞ and

lim sup_{n→∞} (1/τ)[(2λ_n/α_n)(1 − α_nτ)〈A_1x*, x* − x_n〉 + M_0(λ_n/α_n)λ_n + 2μ〈A_2x*, x* − x_{n+1}〉]
 ≤ (1/τ)[lim sup_{n→∞} (2λ_n/α_n)(1 − α_nτ)〈A_1x*, x* − x_n〉 + lim sup_{n→∞} M_0(λ_n/α_n)λ_n + lim sup_{n→∞} 2μ〈A_2x*, x* − x_{n+1}〉] ≤ 0.

(according to Lemma 2.4.) Therefore, utilizing Lemma 2.1 we have

lim_{n→∞} ∥x_n − x*∥ = 0.

This completes the proof.

On the other hand, T i : H → H (i = 1,2,... ,N) and A i : H → H (i = 1,2) are assumed to satisfy Assumptions (i)-(iv) in Problem II. Then the following algorithm is presented for Problem II.

Algorithm 3.2.

Step 0. Take {α_n}_{n=0}^∞ ⊂ (0, 1], {λ_n}_{n=0}^∞ ⊂ (0, 2α], μ ∈ (0, 2β/L²), choose x0 ∈ H arbitrarily, and let n := 0.

Step 1. Given x n ∈ H, compute xn+1∈ H as

y_n := T_{[n+1]}(x_n − λ_nA_1x_n),    x_{n+1} := y_n − μα_nA_2y_n.

Update n := n + 1 and go to Step 1.

The following convergence analysis is presented for Algorithm 3.2:

Theorem 3.2. Let μ ∈ (0, 2β/L²), {α_n}_{n=0}^∞ ⊂ (0, 1], and {λ_n}_{n=0}^∞ ⊂ (0, 2α] be such that

  1. (i)

    limn→∞α n = 0;

  2. (ii)

    ∑ n = 0 ∞ α n =∞;

  3. (iii)

    lim_{n→∞}(α_n − α_{n+N})/α_{n+N} = 0 or ∑_{n=0}^∞ |α_{n+N} − α_n| < ∞;

  4. (iv)

    lim_{n→∞}(λ_n − λ_{n+N})/λ_{n+N} = 0 or ∑_{n=0}^∞ |λ_{n+N} − λ_n| < ∞;

  5. (v)

    λ n ≤ α n for all n ≥ 0.

Assume in addition that

⋂_{i=1}^N Fix(T_i) = Fix(T_1T_2⋯T_N) = Fix(T_NT_1⋯T_{N−1}) = ⋯ = Fix(T_2T_3⋯T_NT_1).
(3.9)

Then the sequence { x n } n = 0 ∞ generated by Algorithm 3.2 satisfies the following properties:

  1. (a)

    { x n } n = 0 ∞ is bounded;

  2. (b)

    limn→∞∥xn+N-x n ∥ = 0 and limn→∞∥x n - T[n+N]... T[n+1]x n ∥ = 0 hold;

  3. (c)

    { x n } n = 0 ∞ converges strongly to the unique solution of Problem II provided ∥x n - y n ∥ = o(λ n ).

Proof. Let {x*} = VI(VI(⋂_{i=1}^N Fix(T_i), A_1), A_2). Assumption (iii) in Problem II guarantees that

∥A_2y_n − A_2x*∥ ≤ L∥y_n − x*∥, ∀n ≥ 0.

Putting z n = x n - λ n A1x n for all n ≥ 0, we have

x_{n+1} = y_n − μα_nA_2y_n = T_{[n+1]}z_n − μα_nA_2T_{[n+1]}z_n = T_{[n+1]}^{α_n}z_n, ∀n ≥ 0.

We divide the rest of the proof into several steps.

Step 1. {x n } is bounded. Indeed, since A1 is α-inverse strongly monotone and { λ n } n = 0 ∞ ⊂ 0 , 2 α , we have

∥x_n − x* − λ_n(A_1x_n − A_1x*)∥² = ∥x_n − x*∥² − 2λ_n〈A_1x_n − A_1x*, x_n − x*〉 + λ_n²∥A_1x_n − A_1x*∥²
 ≤ ∥x_n − x*∥² − λ_n(2α − λ_n)∥A_1x_n − A_1x*∥² ≤ ∥x_n − x*∥².

Utilizing Proposition 2.4 and Condition (v) we have (note that T_{[n+1]}^{α_n}x* = x* − α_nμA_2x* for all n ≥ 0)

∥x_{n+1} − x*∥ = ∥T_{[n+1]}^{α_n}z_n − x*∥ ≤ ∥T_{[n+1]}^{α_n}z_n − T_{[n+1]}^{α_n}x*∥ + ∥T_{[n+1]}^{α_n}x* − x*∥
 ≤ (1 − α_nτ)∥z_n − x*∥ + α_nμ∥A_2x*∥
 = (1 − α_nτ)∥x_n − x* − λ_nA_1x_n∥ + α_nμ∥A_2x*∥
 = (1 − α_nτ)∥x_n − x* − λ_n(A_1x_n − A_1x*) − λ_nA_1x*∥ + α_nμ∥A_2x*∥
 ≤ (1 − α_nτ)∥x_n − x* − λ_n(A_1x_n − A_1x*)∥ + λ_n∥A_1x*∥ + α_nμ∥A_2x*∥
 ≤ (1 − α_nτ)∥x_n − x*∥ + λ_n∥A_1x*∥ + α_nμ∥A_2x*∥
 ≤ (1 − α_nτ)∥x_n − x*∥ + α_n(∥A_1x*∥ + μ∥A_2x*∥)
 = (1 − α_nτ)∥x_n − x*∥ + α_nτ·(∥A_1x*∥ + μ∥A_2x*∥)/τ
 ≤ max{∥x_n − x*∥, (∥A_1x*∥ + μ∥A_2x*∥)/τ},

where τ := 1 − √(1 − μ(2β − μL²)). From this, we get by induction

∥x_{n+1} − x*∥ ≤ max{∥x_0 − x*∥, (∥A_1x*∥ + μ∥A_2x*∥)/τ}, ∀n ≥ 0.

Hence { x n } n = 0 ∞ is bounded. Assumption (ii) in Problem II guarantees that A1 is 1/α-Lipschitz continuous; that is,

∥A_1x_n − A_1x*∥ ≤ (1/α)∥x_n − x*∥, ∀n ≥ 0.

Thus, the boundedness of {x n } ensures the boundedness of {A1x n }. From y n = T[n+1](x n -λ n A1x n ) and the nonexpansivity of T[n+1], it follows that { y n } n = 0 ∞ is bounded. Since A2 is L-Lipschitz continuous, {A2y n } is also bounded.

Step 2. lim_{n→∞}∥x_{n+N} − x_n∥ = lim_{n→∞}∥x_n − T_{[n+N]}⋯T_{[n+1]}x_n∥ = 0. Indeed, from the nonexpansivity of each T i (i = 1, 2, ..., N), Proposition 2.3, and the condition λ_n ≤ 2α (∀n ≥ 0) we conclude that for all n ≥ 0,

∥z_{n+N} − z_n∥ = ∥x_{n+N} − λ_{n+N}A_1x_{n+N} − (x_n − λ_nA_1x_n)∥
 = ∥x_{n+N} − λ_{n+N}A_1x_{n+N} − (x_n − λ_{n+N}A_1x_n) + (λ_n − λ_{n+N})A_1x_n∥
 ≤ ∥x_{n+N} − x_n − λ_{n+N}(A_1x_{n+N} − A_1x_n)∥ + M_1|λ_n − λ_{n+N}|
 ≤ ∥x_{n+N} − x_n∥ + M_1|λ_n − λ_{n+N}|,

where M1 := sup{∥A1x n ∥ : n ≥ 0} < ∞. From Proposition 2.4, it is found that

∥x_{n+N} − x_n∥ = ∥y_{n+N−1} − μα_{n+N−1}A_2y_{n+N−1} − (y_{n−1} − μα_{n−1}A_2y_{n−1})∥ = ∥T_{[n+N]}^{α_{n+N−1}}z_{n+N−1} − T_{[n]}^{α_{n−1}}z_{n−1}∥
 ≤ ∥T_{[n+N]}^{α_{n+N−1}}z_{n+N−1} − T_{[n+N]}^{α_{n+N−1}}z_{n−1}∥ + ∥T_{[n+N]}^{α_{n+N−1}}z_{n−1} − T_{[n]}^{α_{n−1}}z_{n−1}∥
 ≤ (1 − α_{n+N−1}τ)∥z_{n+N−1} − z_{n−1}∥ + μ|α_{n+N−1} − α_{n−1}|∥A_2T_{[n]}z_{n−1}∥
 = (1 − α_{n+N−1}τ)∥z_{n+N−1} − z_{n−1}∥ + μ|α_{n+N−1} − α_{n−1}|∥A_2y_{n−1}∥
 ≤ (1 − α_{n+N−1}τ)(∥x_{n+N−1} − x_{n−1}∥ + M_1|λ_{n+N−1} − λ_{n−1}|) + μM_2|α_{n+N−1} − α_{n−1}|
 = (1 − α_{n+N−1}τ)∥x_{n+N−1} − x_{n−1}∥ + (1 − α_{n+N−1}τ)M_1|λ_{n+N−1} − λ_{n−1}| + μM_2|α_{n+N−1} − α_{n−1}|
 ≤ (1 − α_{n+N−1}τ)∥x_{n+N−1} − x_{n−1}∥ + M_1|λ_{n+N−1} − λ_{n−1}| + μM_2|α_{n+N−1} − α_{n−1}|,

where M_2 := sup{∥A_2y_n∥ : n ≥ 0} < ∞. Utilizing Lemma 2.1 we deduce from Conditions (iii) and (iv) that

lim_{n→∞} ∥x_{n+N} − x_n∥ = 0.
(3.10)

From ∥xn+1-y n ∥ = μα n ∥A2y n ∥ ≤ μM2α n and Condition (i), we get limn→∞∥xn+1-y n ∥ = 0. Now we observe that the following relation holds:

x_{n+N} − x_n = x_{n+N} − T_{[n+N]}(x_{n+N−1} − λ_{n+N−1}A_1x_{n+N−1})
 + T_{[n+N]}(x_{n+N−1} − λ_{n+N−1}A_1x_{n+N−1}) − T_{[n+N]}T_{[n+N−1]}(x_{n+N−2} − λ_{n+N−2}A_1x_{n+N−2})
 + ⋯ + T_{[n+N]}⋯T_{[n+2]}(x_{n+1} − λ_{n+1}A_1x_{n+1}) − T_{[n+N]}⋯T_{[n+1]}(x_n − λ_nA_1x_n)
 + T_{[n+N]}⋯T_{[n+1]}(x_n − λ_nA_1x_n) − x_n.
(3.11)

Since ∥xn+1- y n ∥ → 0 and λ n → 0 as n → ∞, from the nonexpansivity of each T i (i = 1,2,..., N) and boundedness of {A1x n } it follows that as n → ∞ we have

∥x_{n+N} − T_{[n+N]}(x_{n+N−1} − λ_{n+N−1}A_1x_{n+N−1})∥ → 0,
 ∥T_{[n+N]}(x_{n+N−1} − λ_{n+N−1}A_1x_{n+N−1}) − T_{[n+N]}T_{[n+N−1]}(x_{n+N−2} − λ_{n+N−2}A_1x_{n+N−2})∥ → 0,
 ⋯
 ∥T_{[n+N]}⋯T_{[n+2]}(x_{n+1} − λ_{n+1}A_1x_{n+1}) − T_{[n+N]}⋯T_{[n+1]}(x_n − λ_nA_1x_n)∥ → 0.

Hence from (3.10) and (3.11) it follows that

lim_{n→∞} ∥T_{[n+N]}⋯T_{[n+1]}(x_n − λ_nA_1x_n) − x_n∥ = 0.

Note that

∥T_{[n+N]}⋯T_{[n+1]}x_n − x_n∥ ≤ ∥T_{[n+N]}⋯T_{[n+1]}x_n − T_{[n+N]}⋯T_{[n+1]}(x_n − λ_nA_1x_n)∥ + ∥T_{[n+N]}⋯T_{[n+1]}(x_n − λ_nA_1x_n) − x_n∥
 ≤ λ_n∥A_1x_n∥ + ∥T_{[n+N]}⋯T_{[n+1]}(x_n − λ_nA_1x_n) − x_n∥ → 0 (n → ∞).

That is,

lim_{n→∞} ∥T_{[n+N]}⋯T_{[n+1]}x_n − x_n∥ = 0.
(3.12)

Step 3. lim supn→∞〈A1x*,x* - x n 〉 ≤ 0. Indeed, choose a subsequence { x n i } of {x n } such that

lim sup_{n→∞} 〈A_1x*, x* − x_n〉 = lim_{i→∞} 〈A_1x*, x* − x_{n_i}〉.

The boundedness of {x_{n_i}} implies the existence of a subsequence {x_{n_{i_j}}} of {x_{n_i}} and a point x̂ ∈ H such that x_{n_{i_j}} ⇀ x̂. We may assume without loss of generality that x_{n_i} ⇀ x̂, that is, lim_{i→∞} 〈w, x_{n_i}〉 = 〈w, x̂〉 for all w ∈ H.

First, we can readily see that x̂ ∈ ⋂_{i=1}^N Fix(T_i). As a matter of fact, since the pool of mappings {T_i : 1 ≤ i ≤ N} is finite, we may further assume (passing to a further subsequence if necessary) that, for some integer k ∈ {1, 2, ..., N},

T_{[n_i]} ≡ T_k, ∀i ≥ 1.

Then, it follows from (3.12) that

∥x_{n_i} − T_{[n_i+N]}⋯T_{[n_i+1]}x_{n_i}∥ → 0.

Since T_{[n_i]} ≡ T_k, this composition is the same mapping T_{[k+N]}⋯T_{[k+1]} for every i; hence, by Lemma 2.2, we conclude that

x̂ ∈ Fix(T_{[k+N]}⋯T_{[k+1]}).

Together with Assumption (3.9) this implies that x̂ ∈ ⋂_{i=1}^N Fix(T_i). Now, since

x* ∈ VI(⋂_{i=1}^N Fix(T_i), A_1),

we obtain

lim sup_{n→∞} 〈A_1x*, x* − x_n〉 = lim_{i→∞} 〈A_1x*, x* − x_{n_i}〉 = 〈A_1x*, x* − x̂〉 ≤ 0.
(3.13)

Step 4. lim supn→∞〈A2x*,x* - x n 〉 ≤ 0. Indeed, choose a subsequence { x n k } of {x n } such that

lim sup_{n→∞} 〈A_2x*, x* − x_n〉 = lim_{k→∞} 〈A_2x*, x* − x_{n_k}〉.

The boundedness of {x_{n_k}} implies that there is a subsequence of {x_{n_k}} which converges weakly to a point x̄ ∈ H. Without loss of generality, we may assume that x_{n_k} ⇀ x̄. Repeating the same argument as in the proof of x̂ ∈ ⋂_{i=1}^N Fix(T_i), we have x̄ ∈ ⋂_{i=1}^N Fix(T_i).

Let y ∈ ⋂_{i=1}^N Fix(T_i) be fixed arbitrarily. Then, it follows from the nonexpansivity of each T i (i = 1, 2, ..., N) and the monotonicity of A_1 that for all n ≥ 0,

∥y_n − y∥² = ∥T_{[n+1]}(x_n − λ_nA_1x_n) − T_{[n+1]}y∥² ≤ ∥(x_n − y) − λ_nA_1x_n∥²
 = ∥x_n − y∥² + 2λ_n〈A_1x_n, y − x_n〉 + λ_n²∥A_1x_n∥²
 ≤ ∥x_n − y∥² + 2λ_n〈A_1y, y − x_n〉 + λ_n²M_1²,
(3.14)

which implies that for all n ≥ 0,

0 ≤ (1/λ_n)(∥x_n − y∥² − ∥y_n − y∥²) + 2〈A_1y, y − x_n〉 + λ_nM_1²
 = (∥x_n − y∥ + ∥y_n − y∥)·(∥x_n − y∥ − ∥y_n − y∥)/λ_n + 2〈A_1y, y − x_n〉 + λ_nM_1²
 ≤ M_3∥x_n − y_n∥/λ_n + 2〈A_1y, y − x_n〉 + λ_nM_1²,

where M_3 := sup{∥x_n − y∥ + ∥y_n − y∥ : n ≥ 0} < ∞. From ∥x_n − y_n∥ = o(λ_n) and Conditions (i) and (v), for any ε > 0, there exists an integer m_0 > 0 such that M_3∥x_n − y_n∥/λ_n + M_1²λ_n ≤ ε for all n ≥ m_0. Hence, 0 ≤ ε + 2〈A_1y, y − x_n〉 for all n ≥ m_0. Putting n := n_k and letting k → ∞, we derive ε + 2〈A_1y, y − x̄〉 ≥ 0 from x_{n_k} ⇀ x̄ ∈ ⋂_{i=1}^N Fix(T_i). Since ε > 0 is arbitrary, it is clear that 〈A_1y, y − x̄〉 ≥ 0 for all y ∈ ⋂_{i=1}^N Fix(T_i). Accordingly, utilizing Proposition 2.1 (i) we deduce from the α-inverse strong monotonicity of A_1 that x̄ ∈ VI(⋂_{i=1}^N Fix(T_i), A_1). Therefore, from {x*} = VI(VI(⋂_{i=1}^N Fix(T_i), A_1), A_2), we have

lim sup_{n→∞} 〈A_2x*, x* − x_n〉 = lim_{k→∞} 〈A_2x*, x* − x_{n_k}〉 = 〈A_2x*, x* − x̄〉 ≤ 0.
(3.15)

Step 5. limn→∞∥x n - x*∥ = 0. Indeed, repeating the same argument as in Step 5 of the proof of Theorem 3.1, from (3.14) we can derive

lim_{n→∞} ∥x_n − x*∥ = 0.

This completes the proof.

Remark 3.1. If we set N = 1 in Theorem 3.2, then the limit limn→∞∥xn+N- x n ∥ = 0 reduces to the one limn→∞∥xn+1- x n ∥ = 0. In this case, we have

∥x_n − y_n∥ ≤ ∥x_n − x_{n+1}∥ + ∥x_{n+1} − y_n∥ = ∥x_{n+1} − x_n∥ + μα_n∥A_2y_n∥ → 0 (n → ∞),

that is, limn→∞∥x n -y n ∥ = 0.

Remark 3.2. Recall that a self-mapping T of a nonempty closed convex subset K of a real Hilbert space H is called attracting nonexpansive [32, 33] if T is nonexpansive and if, for x, p ∈ K with x ∉ Fix(T) and p ∈ Fix(T),

∥Tx − p∥ < ∥x − p∥.

Recall also that T is firmly nonexpansive [32, 33] if

〈x − y, Tx − Ty〉 ≥ ∥Tx − Ty∥², ∀x, y ∈ K.

It is known that Assumption (3.9) in Theorem 3.2 is automatically satisfied if each T i is attracting nonexpansive. Since a projection is firmly nonexpansive, we have the following consequence of Theorem 3.2.

Corollary 3.1. Let μ ∈ (0, 2β/L²), {α_n}_{n=0}^∞ ⊂ (0, 1], and {λ_n}_{n=0}^∞ ⊂ (0, 2α] be such that

  1. (i)

    limn→∞α n = 0;

  2. (ii)

    ∑ n = 0 ∞ α n =∞;

  3. (iii)

    lim_{n→∞}(α_n − α_{n+N})/α_{n+N} = 0 or ∑_{n=0}^∞ |α_{n+N} − α_n| < ∞;

  4. (iv)

    lim_{n→∞}(λ_n − λ_{n+N})/λ_{n+N} = 0 or ∑_{n=0}^∞ |λ_{n+N} − λ_n| < ∞;

  5. (v)

    λ n ≤ α n for all n ≥ 0.

Take x0 ∈ H arbitrarily and let the sequence { x n } n = 0 ∞ be generated by the iterative algorithm

y_n := P_{[n+1]}(x_n − λ_nA_1x_n),    x_{n+1} := y_n − μα_nA_2y_n, ∀n ≥ 0,

where

P_i = P_{C_i}, ∀i ∈ {1, 2, ..., N},

the C_i (i = 1, 2, ..., N) are nonempty closed convex subsets of H such that VI(⋂_{i=1}^N C_i, A_1) ≠ ∅, and A_1 is the same as in Problem I. Then the sequence {x_n}_{n=0}^∞ satisfies the following properties:

  1. (a)

    { x n } n = 0 ∞ is bounded;

  2. (b)

    limn→∞∥xn+N-x n ∥ = 0 and limn→∞∥x n - P[n+N]... P[n+1]x n ∥ = 0 hold;

  3. (c)

    {x_n}_{n=0}^∞ converges strongly to the unique element of VI(VI(⋂_{i=1}^N C_i, A_1), A_2) provided ∥x_n − y_n∥ = o(λ_n).

Proof. In Theorem 3.2, putting T i = P i (i = 1, 2,..., N), we have

Fix(T_i) = Fix(P_i) = C_i, ∀i ∈ {1, 2, ..., N}.

It is easy to see that Assumption (3.9) is automatically satisfied and that

VI(VI(⋂_{i=1}^N Fix(T_i), A_1), A_2) = VI(VI(⋂_{i=1}^N C_i, A_1), A_2).

Therefore, in terms of Theorem 3.2 we obtain the desired result.
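
As a concrete illustration of Corollary 3.1 (the sets, operators, and parameters below are our own assumptions, not from the text), take H = R², N = 2 halfplanes, A_1 ≡ 0 (which, as in the proof of Corollary 4.1 below, is α-inverse strongly monotone for every α > 0), and A_2 = I; the iterates should then approach the minimum-norm point of C_1 ∩ C_2.

```python
# Sketch of Corollary 3.1 with N = 2 halfplanes; the specific sets, operators,
# and parameters are illustrative assumptions.  With A1 = 0 and A2 = I the
# iterates should approach the minimum-norm point of C1 ∩ C2.
import numpy as np

def P1(x):                       # projection onto C1 = {x : x1 + x2 >= 2}
    s = x[0] + x[1]
    return x if s >= 2.0 else x + 0.5 * (2.0 - s) * np.ones(2)

def P2(x):                       # projection onto C2 = {x : x1 >= 0}
    return np.array([max(x[0], 0.0), x[1]])

P = [P1, P2]
A1 = lambda x: np.zeros(2)       # alpha-inverse strongly monotone for every alpha > 0
A2 = lambda x: x                 # beta = L = 1, so mu in (0, 2)

mu = 1.0
alpha = lambda n: 1.0 / (n + 1)
lam   = lambda n: 0.5 / (n + 1)

x = np.array([5.0, -3.0])
for n in range(200000):
    y = P[n % 2](x - lam(n) * A1(x))     # y_n := P_[n+1](x_n - lam_n A1 x_n)
    x = y - mu * alpha(n) * A2(y)

print(x)   # approaches (1, 1), the minimum-norm point of C1 ∩ C2
```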

4 Applications to constrained pseudoinverse

Let K be a nonempty closed convex subset of a real Hilbert space H. Let A be a bounded linear operator on H. Given an element b ∈ H, consider the minimization problem

min_{x∈K} ∥Ax − b∥².
(3.16)

Let S_b denote the solution set. Then S_b is closed and convex. It is known that S_b is nonempty if and only if

P_{cl A(K)}(b) ∈ A(K), where cl A(K) denotes the closure of A(K).

In this case, S b has a unique element with minimum norm; that is, there exists a unique point x† ∈ S b satisfying

∥x†∥² = min{∥x∥² : x ∈ S_b}.
(3.17)

Definition 4.1 (see [34]). The K-constrained pseudoinverse of A (symbol A K † ) is defined as

D(A_K†) = {b ∈ H : P_{cl A(K)}(b) ∈ A(K)},    A_K†(b) = x†, ∀b ∈ D(A_K†),

where x† ∈ S b is the unique solution to (3.17).

We introduce now the K-constrained generalized pseudoinverse of A (see [2]).

Let θ : H → R be a differentiable convex function such that θ′ is an L-Lipschitz continuous and β-strongly monotone operator for some L > 0 and β > 0. Under these assumptions, there exists a unique point x̃† ∈ S_b for each b ∈ D(A_K†) such that

θ(x̃†) = min{θ(x) : x ∈ S_b}.
(3.18)

Definition 4.2. The K-constrained generalized pseudoinverse of A associated with θ (symbol A_{K,θ}†) is defined as

D(A_{K,θ}†) = D(A_K†),    A_{K,θ}†(b) = x̃†, ∀b ∈ D(A_{K,θ}†),

where x ̃ † ∈ S b is the unique solution to (3.18). Note that, if

θ(x) = (1/2)∥x∥²,

then the K-constrained generalized pseudoinverse A K , θ † of A associated with θ reduces to the K-constrained pseudoinverse A K † of A in Definition 4.1.

Now we apply the results in Section 3 to construct the K-constrained generalized pseudoinverse A_{K,θ}† of A. But first, observe that x̂ ∈ K solves the minimization problem (3.16) if and only if the following optimality condition holds:

〈A*(Ax̂ − b), x − x̂〉 ≥ 0, ∀x ∈ K,

where A* is the adjoint of A. This is equivalent to, for each λ > 0,

〈λA*b + (I − λA*A)x̂ − x̂, x − x̂〉 ≤ 0, ∀x ∈ K,

or

P_K[λA*b + (I − λA*A)x̂] = x̂.
(3.19)

Define a mapping T : H → H by

Tx = P_K[λA*b + (I − λA*A)x], ∀x ∈ H.
(3.20)

Lemma 4.1 (see [[18], Lemma 4.1]). If λ ∈ (0, 2∥A∥⁻²) and if b ∈ D(A_K†), then T is attracting nonexpansive and Fix(T) = S_b.

Theorem 4.1. Let μ ∈ (0, 2β/L²), {α_n}_{n=0}^∞ ⊂ (0, 1], and {λ_n}_{n=0}^∞ ⊂ (0, 2α] be such that

  1. (i)

    limn→∞α n = 0;

  2. (ii)

    ∑ n = 0 ∞ α n =∞;

  3. (iii)

    lim_{n→∞}(α_n − α_{n+1})/α_{n+1} = 0 or ∑_{n=0}^∞ |α_{n+1} − α_n| < ∞;

  4. (iv)

    lim_{n→∞}(λ_n − λ_{n+1})/λ_{n+1} = 0 or ∑_{n=0}^∞ |λ_{n+1} − λ_n| < ∞;

  5. (v)

    λ n ≤ α n for all n ≥ 0.

Take x0 ∈ H arbitrarily and let { x n } n = 0 ∞ be the sequence generated by the algorithm

y_n := T(x_n − λ_nA_1x_n),    x_{n+1} := y_n − μα_nθ′(y_n), ∀n ≥ 0,
(3.21)

where T is given in (3.20) and A1 is the same as in Problem I. Then the sequence { x n } n = 0 ∞ satisfies the following properties:

  1. (a)

    { x n } n = 0 ∞ is bounded;

  2. (b)

    limn→∞∥x n - y n ∥ = 0 and limn→∞∥x n - Tx n ∥ = 0 hold;

  3. (c)

    { x n } n = 0 ∞ converges strongly to the unique element of VI(VI(S b , A1), θ') provided ∥x n - y n ∥ = o(λ n ).

Proof. In Theorem 3.1, put A2 := θ'. Since Fix(T) = S b and θ' is L-Lipschitz continuous and β-strongly monotone, utilizing Theorem 3.1 we obtain the desired result.

Corollary 4.1 (see [[18], Theorem 4.1]). Let μ ∈ (0, 2β/L²) and {α_n}_{n=0}^∞ ⊂ (0, 1] be such that

  1. (i)

    limn→∞α n = 0;

  2. (ii)

    ∑ n = 0 ∞ α n =∞;

  3. (iii)

    lim_{n→∞}(α_n − α_{n+1})/α_{n+1} = 0 or ∑_{n=0}^∞ |α_{n+1} − α_n| < ∞.

Take x0 ∈ H arbitrarily and let { x n } n = 0 ∞ be the sequence generated by the algorithm

x_{n+1} = Tx_n − μα_nθ′(Tx_n), ∀n ≥ 0,

where T is given in (3.20). Then the sequence { x n } n = 0 ∞ satisfies the following properties:

  1. (a)

    { x n } n = 0 ∞ is bounded;

  2. (b)

    limn→∞∥x n - Tx n ∥ = 0 holds;

  3. (c)

    { x n } n = 0 ∞ converges strongly to A K , θ † ( b ) .

Proof. Note that the minimization problem (3.18) is equivalent to the following variational inequality problem

〈θ′(x̃†), x − x̃†〉 ≥ 0, ∀x ∈ S_b,
(3.22)

where S b = Fix(T) and θ' is L-Lipschitz continuous and β-strongly monotone. In Theorem 4.1, put A1 = 0. Then we have

VI(VI(S_b, A_1), θ′) = VI(S_b, θ′) = {x̃†} = {A_{K,θ}†(b)}.

Take a number α ∈ (0, ∞) arbitrarily. Then A_1 is α-inverse strongly monotone. Now, choose a sequence {λ_n}_{n=0}^∞ ⊂ (0, 2α] such that Conditions (iv), (v) in Theorem 4.1 hold, that is,

  1. (iv)

    lim_{n→∞}(λ_n − λ_{n+1})/λ_{n+1} = 0 or ∑_{n=0}^∞ |λ_{n+1} − λ_n| < ∞;

  2. (v)

    λ n ≤ α n for all n ≥ 0.

In this case, Algorithm (3.21) reduces to the following

y_n := T(x_n − λ_nA_1x_n) = Tx_n,    x_{n+1} := y_n − μα_nθ′(y_n), ∀n ≥ 0,

which is equivalent to

x_{n+1} := Tx_n − μα_nθ′(Tx_n), ∀n ≥ 0.

Therefore, all conditions in Theorem 4.1 are satisfied. Consequently, utilizing Theorem 4.1 we derive the desired result.
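
The iteration of Corollary 4.1 can be tried out numerically; the matrix A, the vector b, the set K, and the parameters below are our own illustrative choices, not data from the text. With θ(x) = (1/2)∥x∥² we have θ′ = I (so β = L = 1 and μ ∈ (0, 2)), K is the nonnegative orthant, and T is the mapping (3.20); the iterates should then approach the minimum-norm element of S_b, i.e., A_K†(b).

```python
# Sketch of Corollary 4.1 for the K-constrained pseudoinverse; A, b, K, and the
# parameters are illustrative assumptions.  Here theta(x) = 0.5*||x||^2, so
# theta' = I (beta = L = 1, mu in (0, 2)), K is the nonnegative orthant, and
# the limit should be the minimum-norm solution of Ax = b over K, namely (1, 1).
import numpy as np

A = np.array([[1.0, 1.0]])
b = np.array([2.0])
lam_T = 0.5                                  # lam in (0, 2/||A||^2) = (0, 1)

def T(x):                                    # T x = P_K[lam A* b + (I - lam A* A) x], as in (3.20)
    z = lam_T * A.T @ b + x - lam_T * (A.T @ (A @ x))
    return np.maximum(z, 0.0)                # P_K for K = nonnegative orthant

theta_prime = lambda x: x                    # gradient of 0.5*||x||^2
mu = 1.0
alpha = lambda n: 1.0 / (n + 1)

x = np.array([5.0, 0.0])
for n in range(200000):
    Tx = T(x)
    x = Tx - mu * alpha(n) * theta_prime(Tx)   # iteration of Corollary 4.1

print(x)   # approaches A_K^dagger(b) = (1, 1)
```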

Lemma 4.2 (see [32, 33]). Assume that N is a positive integer and that {T_i}_{i=1}^N are N attracting nonexpansive mappings on H having a common fixed point. Then,

⋂_{i=1}^N Fix(T_i) = Fix(T_1T_2⋯T_N).

Now, assume that S_b^1, ..., S_b^N is a family of N closed convex subsets of K such that

S_b = ⋂_{i=1}^N S_b^i.
(3.23)

For each 1 ≤ i ≤ N, we define T i : H → H by

T_ix = P_{S_b^i}[λA*b + (I − λA*A)x], ∀x ∈ H,

where P_{S_b^i} is the metric projection from H onto S_b^i.

Theorem 4.2. Let μ ∈ (0, 2β/L²), {α_n}_{n=0}^∞ ⊂ (0, 1], and {λ_n}_{n=0}^∞ ⊂ (0, 2α] be such that

  1. (i)

    limn→∞α n = 0;

  2. (ii)

    ∑ n = 0 ∞ α n =∞;

  3. (iii)

    lim_{n→∞}(α_n − α_{n+N})/α_{n+N} = 0 or ∑_{n=0}^∞ |α_{n+N} − α_n| < ∞;

  4. (iv)

    lim_{n→∞}(λ_n − λ_{n+N})/λ_{n+N} = 0 or ∑_{n=0}^∞ |λ_{n+N} − λ_n| < ∞;

  5. (v)

    λ n ≤ α n for all n ≥ 0.

Take x0 ∈ H arbitrarily and let { x n } n = 0 ∞ be the sequence generated by the algorithm

y_n := T_{[n+1]}(x_n − λ_nA_1x_n),    x_{n+1} := y_n − μα_nθ′(y_n), ∀n ≥ 0,
(3.24)

where each T i (1 ≤ i ≤ N) is given as above and A1 is the same as in Problem II. Then the sequence { x n } n = 0 ∞ satisfies the following properties:

  1. (a)

    { x n } n = 0 ∞ is bounded;

  2. (b)

    limn→∞∥xn+N-x n ∥ = 0 and limn→∞∥x n - T[n+N]... T[n+1]x n ∥ = 0 hold;

  3. (c)

    { x n } n = 0 ∞ converges strongly to the unique element of VI(VI(S b , A1), θ') provided ∥x n - y n ∥ = o(λ n ).

Proof. We observe first that

S_b = Fix(T) = ⋂_{i=1}^N Fix(T_i).
(3.25)

Indeed,

⋂_{i=1}^N Fix(T_i) ⊂ ⋂_{i=1}^N S_b^i = S_b.

Conversely, if x̄ ∈ S_b, then for all x ∈ K we have

〈A*(Ax̄ − b), x − x̄〉 ≥ 0.
(3.26)

Since each S_b^i is a subset of K, (3.26) holds over S_b^i. This implies that

x̄ ∈ Fix(T_i), ∀i ∈ {1, 2, ..., N},

and hence

x̄ ∈ ⋂_{i=1}^N Fix(T_i).

By Lemmas 4.1 and 4.2, we see that Assumption (3.9) in Theorem 3.2 holds. In Theorem 3.2, put A2 := θ'. Since θ' is L-Lipschitz continuous and β-strongly monotone, utilizing Theorem 3.2 we obtain the desired result.

Corollary 4.2 (see [[18], Theorem 4.2]). Let μ ∈ (0, 2β/L²) and {α_n}_{n=0}^∞ ⊂ (0, 1] be such that

  1. (i)

    limn →∞α n = 0;

  2. (ii)

    ∑ n = 0 ∞ α n =∞;

  3. (iii)

    lim_{n→∞}(α_n − α_{n+N})/α_{n+N} = 0 or ∑_{n=0}^∞ |α_{n+N} − α_n| < ∞.

Take x0 ∈ H arbitrarily and let { x n } n = 0 ∞ be the sequence generated by the algorithm

x_{n+1} = T_{[n+1]}x_n − μα_nθ′(T_{[n+1]}x_n), ∀n ≥ 0,

where each T i (1 ≤ i ≤ N) is given as above. Then the sequence { x n } n = 0 ∞ satisfies the following properties:

  1. (a)

    { x n } n = 0 ∞ is bounded;

  2. (b)

    limn→∞∥xn+N-x n ∥ = 0 and limn→∞∥x n - T[n+N]... T[n+1]x n ∥ = 0 hold;

  3. (c)

    { x n } n = 0 ∞ converges strongly to the unique solution x ̃ † = A K , θ † ( b ) of (3.18).

Proof. Note that the minimization problem (3.18) is equivalent to the following variational inequality problem

〈θ′(x̃†), x − x̃†〉 ≥ 0, ∀x ∈ S_b,

where S b = Fix(T) and θ' is L-Lipschitz continuous and β-strongly monotone. In Theorem 4.2, put A1 = 0. Then we have

VI(VI(S_b, A_1), θ′) = VI(S_b, θ′) = {A_{K,θ}†(b)}.

Take a number α ∈ (0, ∞) arbitrarily. Then A_1 is α-inverse strongly monotone. Now, choose a sequence {λ_n}_{n=0}^∞ ⊂ (0, 2α] such that Conditions (iv), (v) in Theorem 4.2 hold, that is,

  1. (iv)

    lim_{n→∞}(λ_n − λ_{n+N})/λ_{n+N} = 0 or ∑_{n=0}^∞ |λ_{n+N} − λ_n| < ∞;

  2. (v)

    λ n ≤ α n for all n ≥ 0.

In this case, Algorithm (3.24) reduces to the following

y_n := T_{[n+1]}(x_n − λ_nA_1x_n) = T_{[n+1]}x_n,    x_{n+1} := y_n − μα_nθ′(y_n), ∀n ≥ 0,

which is equivalent to

x_{n+1} = T_{[n+1]}x_n − μα_nθ′(T_{[n+1]}x_n), ∀n ≥ 0.

Therefore, all conditions in Theorem 4.2 are satisfied. Consequently, utilizing Theorem 4.2 we derive the desired result.

References

  1. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. In Classics Appl Math. Volume 31. SIAM, Philadelphia; 2000.

  2. Yamada I: The hybrid steepest-descent method for the variational inequality problem over the intersection of fixed-point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Kluwer Academic Publishers, Dordrecht, Netherlands; 2001:473–504.

  3. Combettes PL: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans Signal Process 2003, 51(7):1771–1782. 10.1109/TSP.2003.812846

  4. Iiduka H, Yamada I: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J Optim 2009, 19: 1881–1893. 10.1137/070702497

  5. Slavakis K, Yamada I: Robust wideband beamforming by the hybrid steepest descent method. IEEE Trans Signal Process 2007, 55: 4511–4522.

  6. Iiduka H: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math Program 2010.

  7. Mainge PE, Moudafi A: Strong convergence of an iterative method for hierarchical fixed-point problems. Pac J Optim 2007, 3: 529–538.

  8. Moudafi A: Krasnoselski-Mann iteration for hierarchical fixed-point problems. Inverse Probl 2007, 23: 1635–1640. 10.1088/0266-5611/23/4/015

  9. Cabot A: Proximal point algorithm controlled by a slowly vanishing term: applications to hierarchical minimization. SIAM J Optim 2005, 15: 555–572. 10.1137/S105262340343467X

  10. Luo ZQ, Pang JS, Ralph D: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, New York; 1996.

  11. Izmaelov AF, Solodov MV: An active set Newton method for mathematical program with complementarity constraints. SIAM J Optim 2008, 19: 1003–1027. 10.1137/070690882

  12. Hirstoaga SA: Iterative selection methods for common fixed point problems. J Math Anal Appl 2006, 324: 1020–1035. 10.1016/j.jmaa.2005.12.064

  13. Iiduka H: Strong convergence for an iterative method for the triple-hierarchical constrained optimization problem. Nonlinear Anal 2009, 71: 1292–1297. 10.1016/j.na.2009.01.133

  14. Iiduka H: A new iterative algorithm for the variational inequality problem over the fixed point set of a firmly nonexpansive mapping. Optimization 2010, 59: 873–885. 10.1080/02331930902884158

  15. Zeng LC, Wong NC, Yao JC: Convergence analysis of modified hybrid steepest-descent methods with variable parameters for variational inequalities. J Optim Theory Appl 2007, 132: 51–69. 10.1007/s10957-006-9068-x

  16. Ceng LC, Ansari QH, Yao JC: Iterative methods for triple hierarchical variational inequalities in Hilbert spaces. J Optim Theory Appl 2011, 151: 489–512. 10.1007/s10957-011-9882-7

  17. Browder FE, Petryshyn WV: Construction of fixed points of nonlinear mappings in Hilbert spaces. J Math Anal Appl 1967, 20: 197–228. 10.1016/0022-247X(67)90085-6

  18. Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J Optim Theory Appl 2003, 119: 185–201.

  19. Iiduka H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J Optim Theory Appl 2011, 148: 580–592. 10.1007/s10957-010-9769-z

  20. Zeidler E: Nonlinear Functional Analysis and Its Applications II/B: Nonlinear Monotone Operators. Springer, New York; 1985.

  21. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems I. Springer, New York; 2003.

  22. Takahashi W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama; 2000.

  23. Goebel K, Kirk WA: Topics on Metric Fixed-Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge; 1990.

  24. Hiriart-Urruty JB, Lemarechal C: Convex Analysis and Minimization Algorithms I. Springer, New York; 1993.

  25. Baillon JB, Haddad G: Quelques proprietes des operateurs angle-bornes et n-cycliquement monotones. Isr J Math 1977, 26: 137–150. 10.1007/BF03007664

  26. Ekeland I, Temam R: Convex Analysis and Variational Problems. In Classics Appl Math. Volume 28. SIAM, Philadelphia; 1999.

  27. Vasin VV, Ageev AL: Ill-Posed Problems with A Priori Information. V.S.P. Intl Science, Utrecht; 1995.

  28. Goebel K, Reich S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York; 1984.

  29. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev 1996, 38: 367–426. 10.1137/S0036144593251710

  30. Stark H, Yang Y: Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets, and Optics. Wiley, New York; 1998.

  31. Xu HK: Iterative algorithms for nonlinear operators. J Lond Math Soc 2002, 66: 240–256. 10.1112/S0024610702003332

  32. Bauschke HH: The approximation of fixed points of compositions of nonexpansive mappings in Hilbert spaces. J Math Anal Appl 1996, 202: 150–159. 10.1006/jmaa.1996.0308

  33. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev 1996, 38: 367–426. 10.1137/S0036144593251710

  34. Engl HW, Hanke M, Neubauer A: Regularization of Inverse Problems. Kluwer, Dordrecht, Holland; 2000.

Acknowledgements

The authors are grateful to the referees for their careful reading and noting several misprints, and their helpful and useful comments.

The research of the first author was partially supported by the National Science Foundation of China (11071169), Innovation Program of Shanghai Municipal Education Commission (09ZZ133) and Leading Academic Discipline Project of Shanghai Normal University (DZL707), the research of the second author was partially supported by a grant from NSC 100-2115-M-033-001-, and the research of the third author was partially supported by the grant NSC 99-2221-E-037-007-MY3.

Author information

Corresponding author: M M Wong.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

The authors declare that their contributions are very much alike.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Zeng, L.C., Wong, M.M. & Yao, J.C. Strong convergence of relaxed hybrid steepest-descent methods for triple hierarchical constrained optimization. Fixed Point Theory Appl 2012, 29 (2012). https://doi.org/10.1186/1687-1812-2012-29
