
General iterative methods for monotone mappings and pseudocontractive mappings related to optimization problems

Abstract

In this paper, we introduce two general iterative methods for a certain optimization problem whose constraint set is the intersection of the solution set of the variational inequality problem for a continuous monotone mapping and the fixed point set of a continuous pseudocontractive mapping in a Hilbert space. Under some control conditions, we establish the strong convergence of the proposed methods to a common element of the solution set and the fixed point set, which is the unique solution of a certain optimization problem. As a direct consequence, we obtain the unique minimum-norm common point of the solution set and the fixed point set.

1 Introduction

Let H be a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and the induced norm \(\Vert \cdot \Vert \). Let C be a nonempty closed convex subset of H, and let \(S : C \to C\) be a self-mapping on C. We denote by \(\operatorname {Fix}(S)\) the set of fixed points of S and by \(P_{C}\) the metric projection of H onto C.

A mapping F of C into H is called monotone if

$$\langle x - y,Fx - Fy\rangle\ge 0, \quad \forall x, y \in C. $$

A mapping F of C into H is called α-inverse-strongly monotone (see [1, 2]) if there exists a positive real number α such that

$$\langle x - y, Fx - Fy\rangle\ge\alpha \Vert Fx - Fy\Vert ^{2}, \quad \forall x, y \in C. $$

If F is an α-inverse-strongly monotone mapping of C into H, then it is obvious that F is \(\frac{1}{\alpha}\)-Lipschitz continuous, that is, \(\Vert Fx - Fy\Vert \le\frac{1}{\alpha} \Vert x - y\Vert \) for all \(x, y \in C\). Clearly, the class of monotone mappings includes the class of α-inverse-strongly monotone mappings.

An operator A is said to be strongly positive on H if there exists a constant \(\overline{\gamma} > 0\) such that

$$\langle Ax,x\rangle\ge\overline{\gamma} \Vert x\Vert ^{2},\quad \forall x \in H. $$

A mapping F of C into H is called γ̅-strongly monotone if there exists a positive real number γ̅ such that

$$\langle x - y,Fx -Fy\rangle\ge\overline{\gamma} \Vert x - y\Vert ^{2}, \quad \forall x, y \in C. $$

Clearly, the class of monotone mappings includes the class of strongly monotone mappings.

Let F be a nonlinear mapping of C into H. The variational inequality problem is to find \(u \in C\) such that

$$ \langle v - u, Fu\rangle\ge0,\quad \forall v \in C. $$
(1.1)

We denote the set of solutions of the variational inequality problem (1.1) by \(\operatorname {VI}(C,F)\). The variational inequality problem has been extensively studied in the literature; see [2–6] and the references therein.
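As a concrete illustration (not taken from this paper), recall the standard fact that \(u \in \operatorname {VI}(C,F)\) if and only if \(u = P_{C}(u - \lambda Fu)\) for any \(\lambda > 0\), which suggests the classical projection method. The sketch below solves a hypothetical finite-dimensional instance: the affine monotone mapping \(F(x) = Mx + q\) on the box \(C = [0,1]^{2}\); all numerical choices (M, q, the step size) are illustrative assumptions.

```python
import numpy as np

# Hypothetical example: solve the variational inequality (1.1) for the affine
# strongly monotone mapping F(x) = M x + q on the box C = [0,1]^2 via the
# classical projection method u_{k+1} = P_C(u_k - lam * F(u_k)).
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite => F monotone
q = np.array([-1.0, -4.0])
F = lambda x: M @ x + q
P_C = lambda x: np.clip(x, 0.0, 1.0)     # metric projection onto the box

lam, u = 0.2, np.zeros(2)
for _ in range(500):
    u = P_C(u - lam * F(u))

# Verify the defining condition <v - u, F(u)> >= 0 on a grid of points v in C.
Fu = F(u)
grid = np.linspace(0.0, 1.0, 11)
worst = min(float(np.dot(np.array([a, b]) - u, Fu)) for a in grid for b in grid)
print(u, worst)   # u is approximately (0, 1); worst >= 0 up to rounding
```

For this data the solution is \(u = (0,1)\): there \(F(u) = (0,-2)\), so \(\langle v - u, Fu\rangle = 2(1 - v_{2}) \ge 0\) for every \(v \in C\).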

The class of pseudocontractive mappings is one of the most important classes of nonlinear mappings. We recall that a mapping \(T : C \to H\) is said to be pseudocontractive if

$$\Vert Tx - Ty\Vert ^{2} \le \Vert x - y\Vert ^{2} + \bigl\Vert (I - T)x - (I - T)y\bigr\Vert ^{2},\quad \forall x, y \in C, $$

and T is said to be k-strictly pseudocontractive if there exists a constant \(k \in[0,1)\) such that

$$\Vert Tx - Ty\Vert ^{2} \le \Vert x - y\Vert ^{2} + k \bigl\Vert (I - T)x - (I - T)y\bigr\Vert ^{2},\quad \forall x, y \in C, $$

where I is the identity mapping. Note that the class of k-strictly pseudocontractive mappings includes the class of nonexpansive mappings as a subclass. That is, T is nonexpansive (i.e., \(\Vert Tx - Ty\Vert \le \Vert x - y\Vert \), \(\forall x, y \in C\)) if and only if T is 0-strictly pseudocontractive. Clearly, the class of pseudocontractive mappings includes the class of strictly pseudocontractive mappings as a subclass, so the class of k-strictly pseudocontractive mappings lies between the class of nonexpansive mappings and the class of pseudocontractive mappings. Moreover, these inclusions are strict due to an example in [7] (see also Example 5.7.1 and Example 5.7.2 in [8]). Recently, many authors have studied the problem of finding fixed points of pseudocontractive mappings; see, for example, [9–15] and the references therein.

The following optimization problem has been studied extensively by many authors:

$$\min_{x \in\Omega}\frac{\mu}{2}\langle Ax, x\rangle+ \frac{1}{2}\Vert x - u\Vert ^{2} - h(x), $$

where \(\Omega= \bigcap_{i=1}^{\infty}C_{i}\), \(C_{1}, C_{2}, \ldots \) are infinitely many closed convex subsets of H such that \(\bigcap_{i=1}^{\infty}C_{i} \neq\emptyset\); \(u \in H\); \(\mu\ge0\) is a real number; A is a strongly positive bounded linear self-adjoint operator on H; and h is a potential function for γf (i.e., \(h' = \gamma f\) for \(\gamma> 0\) and a function f on H). For this kind of minimization problem, see, for example, Bauschke and Borwein [16], Combettes [17], Deutsch and Yamada [18], Jung [19], and Xu [20, 21] when \(\Omega= \bigcap_{i=1}^{N}C_{i}\) and \(h(x) = \langle x, b\rangle\) for a given point b in H.

Iterative methods for nonexpansive mappings and strictly pseudocontractive mappings have recently been applied to solve optimization problems in which the constraint set is the fixed point set of the mapping; see, for instance, [6, 10, 11, 18, 22–25] and the references therein. Iterative methods for equilibrium problems, variational inequality problems, and fixed point problems that solve optimization problems in which the constraint set is the intersection of the solution sets of the problems and the fixed point sets of the mappings have also been investigated by many authors; see, for instance, [26, 27] and the references therein. We refer to [28] for certain iterative methods for integral boundary value problems with causal operators, and to [29] for iterative methods for solving certain random operator equations.

In particular, in 2006, combining Moudafi’s method [30] with Xu’s method [21], Marino and Xu [24] introduced the following general iterative method for a nonexpansive mapping S:

$$ x_{n+1} = \alpha_{n}\gamma fx_{n} + (I - \alpha_{n}A)Sx_{n}, \quad \forall n \ge0, $$
(1.2)

where \(\gamma> 0\) and f is a contractive mapping on H. Under well-known control conditions on the sequence \(\{\alpha_{n}\} \subset[0,1]\), they proved the strong convergence of the sequence \(\{x_{n}\}\) generated by (1.2) to a point \(\widetilde{x} \in \operatorname {Fix}(S)\), which is the unique solution of the variational inequality

$$\bigl\langle (A - \gamma f )\widetilde{x},\widetilde{x} - p\bigr\rangle \le0,\quad \forall p \in \operatorname {Fix}(S), $$

which is the optimality condition for the optimization problem

$$\min_{x \in \operatorname {Fix}(S)}\frac{1}{2}\langle Ax,x\rangle- h(x), $$

where h is a potential function for γf. Very recently, Jung [23] proposed the following general iterative method for a k-strictly pseudocontractive mapping T for some \(0 \le k <1\):

$$ x_{n+1} = \alpha_{n}(u + \gamma fx_{n}) + \bigl(I - \alpha_{n}(I + \mu A)\bigr)P_{C}Sx_{n}, \quad \forall n \ge0, $$
(1.3)

where \(u \in C\); \(\mu\ge0\) is a real number; and \(S : C \to H\) is a mapping defined by \(Sx = kx + {(1 - k)}Tx\). Under different control conditions on the sequence \(\{\alpha_{n}\} \subset [0,1]\), he showed the strong convergence of the sequence \(\{x_{n}\}\) generated by (1.3) to a point \(\widetilde{x} \in \operatorname {Fix}(T)\), which is the unique solution of the optimization problem

$$\min_{x \in \operatorname {Fix}(T)}\frac{\mu}{2}\langle Ax,x\rangle+ \frac{1}{2}\Vert x - u\Vert ^{2} - h(x), $$

where h is a potential function for γf.
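Schemes of type (1.2)–(1.3) can be illustrated numerically. The sketch below runs the Marino–Xu iteration (1.2) in \(\mathbb{R}^{2}\) with hypothetical choices of S, A, f, γ, and \(\alpha_{n}\); none of these concrete data come from the paper.

```python
import numpy as np

# Hedged sketch of the Marino-Xu scheme (1.2) in R^2:
#     x_{n+1} = alpha_n * gamma * f(x_n) + (I - alpha_n * A) S x_n.
# Hypothetical data: S = projection onto the x-axis (nonexpansive,
# Fix(S) = the axis), A = diag(2, 3) (strongly positive), f(x) = 0.5 x + (1, 0)
# (a contraction with k = 0.5), gamma = 1, alpha_n = 1/(n+1) -> 0,
# sum alpha_n = infinity.
I2 = np.eye(2)
A = np.diag([2.0, 3.0])
S = lambda x: np.array([x[0], 0.0])
f = lambda x: 0.5 * x + np.array([1.0, 0.0])
gamma = 1.0

x = np.array([5.0, -3.0])
for n in range(1, 100000):
    alpha = 1.0 / (n + 1)
    x = alpha * gamma * f(x) + (I2 - alpha * A) @ S(x)

# On Fix(S), the optimality condition <(A - gamma f) x~, x~ - p> <= 0 forces
# the axis component of A x~ - gamma f(x~) to vanish, giving x~ = (2/3, 0).
print(x)
```

The computed limit agrees with the unique solution \(\widetilde{x} = (2/3, 0)\) of the associated variational inequality over \(\operatorname {Fix}(S)\) for this toy data.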

On the other hand, in order to study the variational inequality problem (1.1) coupled with the fixed point problem, many authors have introduced some iterative methods for finding an element of \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(S)\), where F is an α-inverse-strongly monotone mapping and S is a nonexpansive mapping; see [1, 31–34] and the references therein. Some iterative methods for finding an element of \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\) were also presented by many authors, where F is a continuous monotone mapping and T is a continuous pseudocontractive mapping; see [35–37] and the references therein. In the case that E is a Banach space with the dual \(E^{*}\), we can refer to [38] for iterative methods for finding an element of \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\), where \(F : C \to E^{\ast}\) is an α-inverse-strongly monotone mapping and T is a relatively weak nonexpansive mapping, and we can refer to [39] for iterative methods for finding an element of \(\bigcap_{i=1}^{N}\operatorname {Fix}(T_{i}) \cap \operatorname {VI}(C,F)\), where F is an α-inverse-strongly accretive mapping and \(T_{i}\), \(i = 1, \ldots, N\), are \(k_{i}\)-strictly pseudocontractive mappings. We can also consult [40] for iterative methods for finding a common element of \(\operatorname {VI}(C,F_{1}) \cap \operatorname {VI}(C,F_{2})\), where \(F_{1}, F_{2} : C \to E^{*}\) are two continuous monotone mappings.

Recently, researchers have also introduced iterative methods for finding the minimum-norm element in the solution set of certain problems (for instance, the variational inequality problem, the minimization problem, the split feasibility problem, etc.) and the fixed point set of nonlinear mappings (for instance, nonexpansive mappings, strictly pseudocontractive mappings, Lipschitzian pseudocontractive mappings, etc.); see, for instance, [41–43] and the references therein.

In this paper, as a continuation of the study of the above-mentioned optimization problems, we consider the following optimization problem, whose constraint set is \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\):

$$ \min_{x \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)}\frac{\mu}{2}\langle Ax, x\rangle+ \frac{1}{2}\Vert x - u\Vert ^{2} - h(x), $$
(1.4)

where F is a continuous monotone mapping; T is a continuous pseudocontractive mapping; \(u \in C\); \(\mu\ge0\) is a real number; and h is a potential function for γf, where f is a contractive mapping and \(\gamma > 0\). We present two general iterative methods for solving the optimization problem (1.4). First, we introduce an implicit general iterative method. Then, by discretizing the implicit method, we provide an explicit general iterative method. Under some control conditions, we show the strong convergence of the proposed methods to an element of \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\), which is the unique solution of the optimization problem (1.4). As special cases, we obtain two iterative methods which converge strongly to the minimum-norm point of \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\). Our results unify, complement, develop, and improve upon the corresponding results of Jung [22, 23], Yao et al. [27], and some recent results in the literature.

2 Preliminaries and lemmas

Let H be a real Hilbert space, and let C be a nonempty closed convex subset of H. We write \(x_{n} \rightharpoonup x\) to indicate that the sequence \(\{x_{n}\}\) converges weakly to x, and \(x_{n} \to x\) to indicate that \(\{x_{n}\}\) converges strongly to x.

For every point \(x \in H\), there exists a unique nearest point in C, denoted by \(P_{C}(x)\), such that

$$\bigl\Vert x - P_{C}(x)\bigr\Vert \le \Vert x - y\Vert ,\quad \forall y \in C. $$

\(P_{C}\) is called the metric projection of H onto C. It is well known that \(P_{C}\) is nonexpansive and is characterized by the property

$$ u = P_{C}(x) \quad \Longleftrightarrow\quad \langle x - u,u - y\rangle\ge0,\quad \forall x \in H, y \in C. $$
(2.1)
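The characterization (2.1) can be checked numerically on a simple example (illustrative, not part of the paper): with C the closed unit ball in \(\mathbb{R}^{3}\), the projection is \(P_{C}(x) = x / \max(1, \Vert x\Vert )\), and \(\langle x - u, u - y\rangle \ge 0\) should hold for every \(y \in C\).

```python
import numpy as np

# Check the projection characterization (2.1) with C = closed unit ball in R^3,
# where P_C(x) = x / max(1, ||x||).
rng = np.random.default_rng(0)
P_C = lambda x: x / max(1.0, np.linalg.norm(x))

x = np.array([2.0, -1.0, 2.0])          # a point outside C (||x|| = 3)
u = P_C(x)                              # here u = x / 3

# Sample points y in C and verify <x - u, u - y> >= 0 for each.
ys = rng.normal(size=(1000, 3))
ys = ys / np.maximum(1.0, np.linalg.norm(ys, axis=1, keepdims=True))
worst = min(float(np.dot(x - u, u - y)) for y in ys)
print(u, worst)   # worst >= 0, as (2.1) predicts
```

Analytically, for this x one has \(\langle x - u, u - y\rangle = 2 - \frac{2}{3}\langle x, y\rangle \ge 0\) whenever \(\Vert y\Vert \le 1\), consistent with the sampled check.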

In a Hilbert space H, the following equality holds:

$$ \Vert x - y\Vert ^{2} = \Vert x\Vert ^{2} + \Vert y\Vert ^{2} - 2\langle x,y\rangle, \quad \forall x, y \in H. $$
(2.2)

We need the following lemmas for the proof of our main results.

Lemma 2.1

In a real Hilbert space H, there holds the following inequality:

$$\Vert x + y\Vert ^{2} \le \Vert x\Vert ^{2} + 2 \langle y,x + y\rangle,\quad \forall x, y \in H. $$

Lemma 2.2

([20])

Let \(\{s_{n}\}\) be a sequence of nonnegative real numbers satisfying

$$s_{n+1} \le(1 - w_{n})s_{n} + w_{n} \delta_{n} + \nu_{n},\quad \forall n \ge0, $$

where \(\{w_{n}\}\), \(\{\delta_{n}\}\), and \(\{\nu_{n}\}\) satisfy the following conditions:

  1. (i)

    \(\{w_{n}\} \subset[0,1]\) and \(\sum_{n = 0}^{\infty}w_{n} = \infty\) or, equivalently, \(\prod_{n=0}^{\infty}(1 - w_{n}) = 0\);

  2. (ii)

    \(\limsup_{n \to\infty}\delta_{n} \le0\) or \(\sum_{n = 0}^{\infty}w_{n}\vert \delta_{n}\vert < \infty\);

  3. (iii)

    \(\nu_{n} \ge0\) (\(n \ge0\)), \(\sum_{n=0}^{\infty}\nu_{n} < \infty\).

Then \(\lim_{n \to\infty}s_{n} = 0\).
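A quick simulation (illustrative only) makes Lemma 2.2 concrete. The sequences below are hypothetical choices satisfying conditions (i)–(iii): \(w_{n} = \delta_{n} = \frac{1}{n+1}\) and \(\nu_{n} = \frac{1}{(n+1)^{2}}\), so \(s_{n}\) should tend to 0.

```python
# Simulate the recursion of Lemma 2.2 with hypothetical sequences
# w_n = 1/(n+1), delta_n = 1/(n+1), nu_n = 1/(n+1)^2.
# Conditions (i)-(iii) hold: sum w_n diverges, delta_n -> 0, sum nu_n < inf.
s = 1.0
for n in range(1_000_000):
    w = 1.0 / (n + 1)
    delta = 1.0 / (n + 1)
    nu = 1.0 / (n + 1) ** 2
    s = (1.0 - w) * s + w * delta + nu
print(s)   # close to 0 (roughly 2*log(n)/n for this choice of sequences)
```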

The following lemmas can be easily proven, and therefore, we omit the proofs.

Lemma 2.3

Let H be a real Hilbert space, and let \(A : H \to H\) be a strongly positive bounded linear operator with a constant \(\overline{\gamma} > 0\). Let \(f : H \to H\) be a contractive mapping with a constant \(k \in (0,1)\). Let \(\mu\ge0\) and \(0 < \gamma< \frac{1 + \mu\overline{\gamma}}{k}\). Then

$$\bigl\langle x - y,\bigl((I + \mu A) - \gamma f\bigr)x - \bigl((I + \mu A) - \gamma f\bigr)y\bigr\rangle \ge (1 + \mu\overline{\gamma} - \gamma k)\Vert x - y \Vert ^{2}, \quad \forall x, y \in H. $$

That is, \((I + \mu A) - \gamma f\) is strongly monotone with a constant \(1 + \mu\overline{\gamma} - \gamma k\).

Lemma 2.4

([24])

Let \(\mu> 0\), and let \(A : H \to H\) be a strongly positive bounded linear self-adjoint operator on a Hilbert space H with a constant \(\overline{\gamma} > 0\). Let \(0 < \xi\le(1 + \mu \Vert A\Vert )^{-1}\). Then \(\Vert I - \xi(I + \mu A)\Vert \le 1 - \xi(1 + \mu\overline{\gamma})\).
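Lemma 2.4 admits a direct numerical sanity check (not from the paper) in finite dimensions: for a symmetric positive definite matrix A, take \(\overline{\gamma}\) to be its smallest eigenvalue and compare both sides of the norm bound.

```python
import numpy as np

# Sanity check of Lemma 2.4 for a concrete strongly positive self-adjoint
# operator on R^3: with gamma_bar the smallest eigenvalue of A and
# 0 < xi <= (1 + mu*||A||)^{-1}, the operator norm of I - xi*(I + mu*A)
# is at most 1 - xi*(1 + mu*gamma_bar).
rng = np.random.default_rng(1)
B = rng.normal(size=(3, 3))
A = B @ B.T + np.eye(3)                  # self-adjoint, eigenvalues >= 1
gamma_bar = np.linalg.eigvalsh(A)[0]     # strong positivity constant
mu = 2.0
xi = 1.0 / (1.0 + mu * np.linalg.norm(A, 2))

lhs = np.linalg.norm(np.eye(3) - xi * (np.eye(3) + mu * A), 2)
rhs = 1.0 - xi * (1.0 + mu * gamma_bar)
print(lhs, rhs)   # lhs <= rhs
```

For a self-adjoint A the eigenvalues of \(I - \xi(I + \mu A)\) are \(1 - \xi(1 + \mu\lambda_{i}) \ge 0\), so the spectral norm equals \(1 - \xi(1 + \mu\lambda_{\min})\), which is exactly the right-hand side when \(\overline{\gamma} = \lambda_{\min}\).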

The following lemmas are Lemma 2.3 and Lemma 2.4 of Zegeye [44], respectively.

Lemma 2.5

([44])

Let C be a closed convex subset of a real Hilbert space H. Let \(F : C \to H\) be a continuous monotone mapping. Then, for \(r > 0\) and \(x \in H\), there exists \(z \in C\) such that

$$\langle y - z,Fz\rangle+ \frac{1}{r}\langle y - z,z- x\rangle\ge0, \quad \forall y \in C. $$

For \(r > 0\) and \(x \in H\), define \(F_{r} : H \to C\) by

$$F_{r}x = \biggl\{ z \in C: \langle y - z,Fz\rangle+ \frac{1}{r} \langle y - z,z- x\rangle\ge0, \forall y \in C \biggr\} . $$

Then the following hold:

  1. (i)

    \(F_{r}\) is single-valued;

  2. (ii)

    \(F_{r}\) is firmly nonexpansive, that is,

    $$\Vert F_{r}x - F_{r}y\Vert ^{2} \le\langle x - y,F_{r}x - F_{r}y\rangle,\quad \forall x, y \in H; $$
  3. (iii)

    \(\operatorname {Fix}(F_{r}) = \operatorname {VI}(C,F)\);

  4. (iv)

    \(\operatorname {VI}(C,F)\) is a closed convex subset of C.
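In the special case \(C = H = \mathbb{R}^{n}\), the defining inequality of \(F_{r}\) reduces to \(Fz + \frac{1}{r}(z - x) = 0\), so \(F_{r} = (I + rF)^{-1}\). The sketch below uses this reduction for a hypothetical linear monotone mapping \(F(z) = Mz\) and checks properties (i)–(ii) of Lemma 2.5 numerically.

```python
import numpy as np

# Illustrative example with C = H = R^3: for F(z) = M z with M positive
# semidefinite (so F is monotone), the resolvent F_r of Lemma 2.5 is the
# single-valued linear map (I + r M)^{-1}.
rng = np.random.default_rng(2)
B = rng.normal(size=(3, 3))
M = B @ B.T                              # positive semidefinite => F monotone
r = 0.7
F_r = np.linalg.inv(np.eye(3) + r * M)   # single-valued resolvent, property (i)

x, y = rng.normal(size=3), rng.normal(size=3)
u, v = F_r @ x, F_r @ y
# Firm nonexpansiveness, property (ii):
#   ||F_r x - F_r y||^2 <= <x - y, F_r x - F_r y>.
lhs = float(np.dot(u - v, u - v))
rhs = float(np.dot(x - y, u - v))
print(lhs, rhs)   # lhs <= rhs
```

In the eigenbasis of M the resolvent has eigenvalues \(\frac{1}{1 + r\lambda_{i}} \in (0,1]\), which is exactly why firm nonexpansiveness holds here.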

Lemma 2.6

([44])

Let C be a closed convex subset of a real Hilbert space H. Let \(T : C \to H\) be a continuous pseudocontractive mapping. Then, for \(r > 0\) and \(x \in H\), there exists \(z \in C\) such that

$$\langle y - z,Tz\rangle- \frac{1}{r}\bigl\langle y - z, (1 + r)z- x\bigr\rangle \le0, \quad \forall y \in C. $$

For \(r > 0\) and \(x \in H\), define \(T_{r} : H \to C\) by

$$T_{r}x = \biggl\{ z \in C: \langle y - z,Tz\rangle- \frac{1}{r} \bigl\langle y - z,(1 + r)z- x\bigr\rangle \le0, \forall y \in C \biggr\} . $$

Then the following hold:

  1. (i)

    \(T_{r}\) is single-valued;

  2. (ii)

    \(T_{r}\) is firmly nonexpansive, that is,

    $$\Vert T_{r}x - T_{r}y\Vert ^{2} \le\langle x - y,T_{r}x - T_{r}y\rangle,\quad \forall x, y \in H; $$
  3. (iii)

    \(\operatorname {Fix}(T_{r}) = \operatorname {Fix}(T)\);

  4. (iv)

    \(\operatorname {Fix}(T)\) is a closed convex subset of C.

The following lemma can be found in [26, 45].

Lemma 2.7

Let C be a nonempty closed convex subset of a real Hilbert space H, and let \(g : C \to\mathbb{R} \cup\{\infty\}\) be a proper lower semicontinuous differentiable convex function. If \(x^{*}\) is a solution of the minimization problem

$$g\bigl(x^{*}\bigr) = \inf_{x \in C}g(x), $$

then

$$\bigl\langle g'\bigl(x^{*}\bigr),p - x^{*}\bigr\rangle \ge0,\quad p \in C. $$

In particular, if \(x^{*}\) solves the optimization problem

$$\min_{x \in C}\frac{\mu}{2}\langle Ax,x\rangle+ \frac{1}{2}\Vert x - u\Vert ^{2} - h(x), $$

where A is a bounded linear self-adjoint operator on H, then

$$\bigl\langle u + \bigl(\gamma f - (I + \mu A)\bigr)x^{*},p - x^{*}\bigr\rangle \le0,\quad p \in C, $$

where h is a potential function of γf.

3 Main results

Throughout the rest of this paper, we always assume the following:

  • H is a real Hilbert space;

  • C is a nonempty closed convex subset of H;

  • \(A : C \to C\) is a strongly positive linear bounded self-adjoint operator with a constant \(\overline{\gamma} > 0\);

  • \(f : C \to C\) is a contractive mapping with a constant \(k \in(0,1)\);

  • Constants \(\mu\ge0\) and \(0 < \gamma< \frac{1 + \mu \overline{\gamma}}{k}\);

  • \(F : C \to H\) is a continuous monotone mapping;

  • \(\operatorname {VI}(C,F)\) is the solution set of the variational inequality problem (1.1) for F;

  • \(T : C \to C\) is a continuous pseudocontractive mapping such that \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T) \neq\emptyset\);

  • \(F_{r_{t}} : H \to C\) is a mapping defined by

    $$F_{r_{t}}x = \biggl\{ z \in C: \langle y - z,Fz\rangle+ \frac {1}{r_{t}} \langle y - z,z- x\rangle\ge0, \forall y \in C \biggr\} $$

    for \(r_{t} \in(0,\infty)\), \(t \in(0,1)\), and \(\liminf_{t \to 0}r_{t} > 0\);

  • \(T_{r_{t}} : H \to C\) is a mapping defined by

    $$T_{r_{t}}x = \biggl\{ z \in C: \langle y - z,Tz\rangle- \frac {1}{r_{t}} \bigl\langle y - z,(1 + r_{t})z- x\bigr\rangle \le0, \forall y \in C \biggr\} $$

    for \(r_{t} \in(0,\infty)\), \(t \in(0,1)\), and \(\liminf_{t \to 0}r_{t} > 0\);

  • \(F_{r_{n}} : H \to C\) is a mapping defined by

    $$F_{r_{n}}x = \biggl\{ z \in C: \langle y - z,Fz\rangle+ \frac {1}{r_{n}} \langle y - z,z- x\rangle\ge0, \forall y \in C \biggr\} $$

    for \(r_{n} \in(0,\infty)\) and \(\liminf_{n \to\infty}r_{n} > 0\);

  • \(T_{r_{n}} : H \to C\) is a mapping defined by

    $$T_{r_{n}}x = \biggl\{ z \in C: \langle y - z,Tz\rangle- \frac {1}{r_{n}} \bigl\langle y - z,(1 + r_{n})z- x\bigr\rangle \le0, \forall y \in C \biggr\} $$

    for \(r_{n} \in(0,\infty)\) and \(\liminf_{n \to\infty}r_{n} > 0\);

  • \(u \in C\).

By Lemma 2.5 and Lemma 2.6, we note that \(F_{r_{t}}\), \(T_{r_{t}}\), \(F_{r_{n}}\), and \(T_{r_{n}}\) are nonexpansive, \(\operatorname {Fix}(F_{r_{n}}) = \operatorname {VI}(C,F)= \operatorname {Fix}(F_{r_{t}})\), and \(\operatorname {Fix}(T_{r_{n}}) = \operatorname {Fix}(T) = \operatorname {Fix}(T_{r_{t}})\).

In this section, first, we introduce the following general iterative method that generates a net \(\{x_{t}\}_{t \in(0, \min\{1,\frac{1}{1 + \mu \Vert A\Vert }\})}\) in an implicit way:

$$ x_{t} = t(u + \gamma fx_{t}) + \bigl(I - t(I + \mu A) \bigr)T_{r_{t}}F_{r_{t}}x_{t}. $$
(3.1)

Now, for \(t \in(0, \min\{1,\frac{1}{1 + \mu \Vert A\Vert }\})\), consider a mapping \(Q_{t} : C \to C\) defined by

$$Q_{t}x = t(u + \gamma fx) + \bigl(I - t(I + \mu A) \bigr)T_{r_{t}}F_{r_{t}}x,\quad \forall x \in C. $$

It is easy to see that \(Q_{t}\) is a contractive mapping with a constant \(1 - t(1 + \mu\overline{\gamma} -\gamma k)\). Indeed, since \(T_{r_{t}}F_{r_{t}}\) is nonexpansive, by Lemma 2.4, we have

$$\begin{aligned} \Vert Q_{t}x - Q_{t}y\Vert \le{}&\bigl\Vert \bigl(I - t(I + \mu A)\bigr)T_{r_{t}}F_{r_{t}}x - \bigl(I - t(I + \mu A)\bigr)T_{r_{t}}F_{r_{t}}y\bigr\Vert \\ &{}+ t\bigl\Vert (u + \gamma fx) -(u + \gamma fy)\bigr\Vert \\ \le{}&\bigl(1 - t(1 + \mu\overline{\gamma})\bigr)\Vert x - y\Vert + t\gamma k \Vert x - y\Vert \\ = {}&\bigl(1 - t(1 + \mu\overline{\gamma} - \gamma k)\bigr)\Vert x - y\Vert . \end{aligned} $$

Since \(0 < t < \min\{1, \frac{1}{1 + \mu \Vert A\Vert }\}\), it follows that

$$0 < 1 - t(1 + \mu\overline{\gamma} - \gamma k) < 1. $$

Hence \(Q_{t}\) is a contractive mapping. By the Banach contraction principle, \(Q_{t}\) has a unique fixed point, denoted by \(x_{t}\), which uniquely solves the fixed point equation (3.1).
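The Banach contraction argument above is easy to realize numerically. The sketch below solves the implicit equation (3.1) for one fixed t in a hypothetical finite-dimensional setting \(C = H = \mathbb{R}^{2}\), where the resolvents reduce to \(F_{r} = (I + rM)^{-1}\) for \(F(z) = Mz\) and \(T_{r} = ((1+r)I - rT)^{-1}\) for a linear pseudocontractive T; all concrete data are assumptions for illustration.

```python
import numpy as np

# Solve the implicit scheme (3.1) for fixed t by iterating the contraction
# Q_t x = t(u + gamma*f(x)) + (I - t(I + mu*A)) T_r F_r x
# until its unique fixed point x_t is reached (Banach contraction principle).
theta = 0.5
T = np.array([[np.cos(theta), -np.sin(theta)],    # a rotation: nonexpansive,
              [np.sin(theta),  np.cos(theta)]])   # hence pseudocontractive
M = np.array([[1.0, 0.0], [0.0, 2.0]])            # F(z) = M z, monotone
r, t, mu, gamma = 1.0, 0.1, 1.0, 1.0              # t < 1/(1 + mu*||A||)
A = np.eye(2)                                     # strongly positive, gamma_bar = 1
f = lambda x: 0.3 * x                             # contraction with k = 0.3
u = np.array([1.0, -2.0])

F_r = np.linalg.inv(np.eye(2) + r * M)            # resolvent of F (C = H)
T_r = np.linalg.inv((1.0 + r) * np.eye(2) - r * T)  # resolvent of T (C = H)
Q_t = lambda x: t * (u + gamma * f(x)) \
    + (np.eye(2) - t * (np.eye(2) + mu * A)) @ (T_r @ (F_r @ x))

x = np.zeros(2)
for _ in range(200):                              # contraction factor <= 0.83
    x = Q_t(x)

residual = float(np.linalg.norm(x - Q_t(x)))
print(x, residual)   # residual is essentially 0: x solves (3.1) for this t
```

Here the contraction constant is \(1 - t(1 + \mu\overline{\gamma} - \gamma k) = 0.83\), so 200 iterations drive the fixed-point residual to machine precision.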

We summarize the basic properties of \(\{x_{t}\}\).

Proposition 3.1

Let \(\{x_{t}\}\) be defined via (3.1). Then

  1. (i)

    \(\{x_{t}\}\) is bounded for \(t \in(0,\min\{1, \frac {1}{1 + \mu \Vert A\Vert }\})\);

  2. (ii)

    \(\lim_{t \to0}\Vert x_{t} - T_{r_{t}}F_{r_{t}}x_{t}\Vert = 0\);

  3. (iii)

    \(x_{t} : (0,\min\{1, \frac{1}{1 + \mu \Vert A\Vert }\}) \to H\) is locally Lipschitzian, provided \(r_{t} : (0,\min\{1, \frac{1}{1 + \mu \Vert A\Vert }\}) \to(0, \infty)\) is locally Lipschitzian;

  4. (iv)

    \(x_{t}\) defines a continuous path from \((0,\min\{1, \frac{1}{1 + \mu \Vert A\Vert }\})\) into H, provided \(r_{t} : (0,\min\{1, \frac{1}{1 + \mu \Vert A\Vert }\})\to(0, \infty)\) is continuous.

Proof

(i) Let \(z_{t} = F_{r_{t}}x_{t}\), and let \(u_{t} = T_{r_{t}}z_{t}\). Let \(p \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\). Then \(p = F_{r_{t}}p\) by Lemma 2.5(iii) and \(p = T_{r_{t}}p\) (\(= Tp\)) by Lemma 2.6(iii), and from the nonexpansivity of \(T_{r_{t}}\) and \(F_{r_{t}}\), it follows that

$$ \Vert u_{t} - p\Vert = \Vert T_{r_{t}}z_{t} - T_{r_{t}}p\Vert \le \Vert z_{t} - p\Vert , $$
(3.2)

and

$$ \Vert z_{t} - p\Vert = \Vert F_{r_{t}}x_{t} - F_{r_{t}}p\Vert \le \Vert x_{t} - p\Vert . $$
(3.3)

Let \(\overline{A} = I + \mu A\). By (3.2) and (3.3), we have

$$\begin{aligned} \Vert x_{t} - p\Vert = {}&\bigl\Vert t(u + \gamma fx_{t}) + (I - t\overline{A})T_{r_{t}}z_{t} - p\bigr\Vert \\ ={} &\bigl\Vert (I - t\overline{A})T_{r_{t}}z_{t} - (I - t \overline{A})T_{r_{t}}p + t\gamma fx_{t} - t\gamma fp + t\bigl(u + (\gamma f - \overline{A})p\bigr)\bigr\Vert \\ \le{}&\bigl\Vert (I - t\overline{A})T_{r_{t}}z_{t} - (I - t \overline{A})T_{r_{t}}p\bigr\Vert + t\gamma k\Vert x_{t} - p \Vert \\ &{}+ t\bigl(\Vert u\Vert + \bigl\Vert (\gamma f - \overline{A})p\bigr\Vert \bigr) \\ \le{}&\bigl(1 - t(1 + \mu\overline{\gamma})\bigr)\Vert z_{t} - p \Vert + t\gamma k\Vert x_{t} - p\Vert + t\bigl(\Vert u\Vert + \bigl\Vert (\gamma f - \overline {A})p\bigr\Vert \bigr) \\ \le{}&\bigl(1 - t(1 + \mu\overline{\gamma})\bigr)\Vert x_{t} - p \Vert + t\gamma k\Vert x_{t} - p\Vert + t\bigl(\Vert u\Vert + \bigl\Vert (\gamma f - \overline {A})p\bigr\Vert \bigr) \\ = {}&\bigl(1 - t(1 + \mu\overline{\gamma} - \gamma k)\bigr)\Vert x_{t} - p\Vert + t\bigl(\Vert u\Vert + \bigl\Vert (\gamma f - \overline{A})p\bigr\Vert \bigr). \end{aligned}$$

So, it follows that

$$\Vert x_{t} - p\Vert \le\frac{\Vert u \Vert + \Vert (\gamma f - \overline{A})p\Vert }{1 + \mu\overline{\gamma} - \gamma k}. $$

Hence \(\{x_{t}\}\) is bounded, and so are \(\{z_{t}\} = \{F_{r_{t}}x_{t}\}\), \(\{u_{t}\} = \{T_{r_{t}}z_{t}\}\), \(\{\overline{A}T_{r_{t}}z_{t}\}= \{\overline{A}T_{r_{t}}F_{r_{t}}x_{t}\}\), and \(\{fx_{t}\}\).

(ii) Let \(z_{t} = F_{r_{t}}x_{t}\). By the definition of \(\{x_{t}\}\) and the boundedness of \(\{fx_{t}\}\) and \(\{\overline{A}T_{r_{t}}z_{t}\}\) in (i), we have

$$\begin{aligned} \Vert x_{t} - T_{r_{t}}F_{r_{t}}x_{t} \Vert &=t\bigl\Vert \overline {A}T_{r_{t}}F_{r_{t}}x_{t} - (u + \gamma fx_{t})\bigr\Vert \\ &\le t\bigl(\Vert \overline{A}T_{r_{t}}z_{t}\Vert + \Vert u \Vert + \gamma \Vert fx_{t}\Vert \bigr) \to0 \quad \text{as } t \to0. \end{aligned} $$

(iii) Let \(t, t_{0} \in(0,\min\{1,\frac{1}{1 + \mu \Vert A\Vert }\})\), and let \(z_{t} = F_{r_{t}}x_{t}\) and \(z_{t_{0}} = F_{r_{t_{0}}}x_{t_{0}}\). Let \(u_{t} = T_{r_{t}}z_{t}\) and \(u_{t_{0}} = T_{r_{t_{0}}}z_{t_{0}}\). Then we get

$$ \langle y - u_{t_{0}},Tu_{t_{0}}\rangle- \frac{1}{r_{t_{0}}}\bigl\langle y - u_{t_{0}},(1 + r_{t_{0}})u_{t_{0}} - z_{t_{0}}\bigr\rangle \le0, \quad \forall y \in C, $$
(3.4)

and

$$ \langle y - u_{t},Tu_{t}\rangle- \frac{1}{r_{t}}\bigl\langle y - u_{t},(1 + r_{t})u_{t} - z_{t}\bigr\rangle \le0,\quad \forall y \in C. $$
(3.5)

Putting \(y = u_{t}\) in (3.4) and \(y = u_{t_{0}}\) in (3.5), we obtain

$$ \langle u_{t} - u_{t_{0}},Tu_{t_{0}}\rangle- \frac{1}{r_{t_{0}}}\bigl\langle u_{t} - u_{t_{0}},(1 + r_{t_{0}})u_{t_{0}} - z_{t_{0}}\bigr\rangle \le0, $$
(3.6)

and

$$ \langle u_{t_{0}} - u_{t},Tu_{t}\rangle- \frac{1}{r_{t}}\bigl\langle u_{t_{0}} - u_{t},(1 + r_{t})u_{t} - z_{t}\bigr\rangle \le0. $$
(3.7)

Adding up (3.6) and (3.7), we have

$$\langle u_{t} - u_{t_{0}},Tu_{t_{0}}- Tu_{t} \rangle- \biggl\langle u_{t} - u_{t_{0}},\frac{(1 + r_{t_{0}})u_{t_{0}} - z_{t_{0}}}{r_{t_{0}}} - \frac{(1 + r_{t})u_{t} - z_{t}}{r_{t}}\biggr\rangle \le0, $$

which implies that

$$\bigl\langle u_{t} - u_{t_{0}},(u_{t} - Tu_{t}) - (u_{t_{0}} - Tu_{t_{0}})\bigr\rangle - \biggl\langle u_{t} - u_{t_{0}},\frac{u_{t_{0}} - z_{t_{0}}}{r_{t_{0}}} - \frac{u_{t} - z_{t}}{r_{t}}\biggr\rangle \le0. $$

Now, using the fact that T is pseudocontractive, we get

$$\biggl\langle u_{t} - u_{t_{0}},\frac{u_{t_{0}} - z_{t_{0}}}{r_{t_{0}}} - \frac{u_{t} - z_{t}}{r_{t}}\biggr\rangle \ge0, $$

and hence

$$ \biggl\langle u_{t} - u_{t_{0}},u_{t_{0}} - u_{t} + u_{t} - z_{t_{0}} - \frac{r_{t_{0}}}{r_{t}}(u_{t} - z_{t})\biggr\rangle \ge0. $$
(3.8)

Without loss of generality, let us assume that there exists a real number \(b > 0\) such that \(r_{t} > b\) for all \(t \in(0,\min\{1,\frac{1}{1 + \mu \Vert A\Vert }\})\). Then, by (3.8), we have

$$ \begin{aligned}[b]\Vert u_{t} - u_{t_{0}}\Vert ^{2} &\le \biggl\langle u_{t} - u_{t_{0}},z_{t} - z_{t_{0}} + \biggl(1 - \frac{r_{t_{0}}}{r_{t}}\biggr) (u_{t} - z_{t})\biggr\rangle \\ &\le \Vert u_{t} -u_{t_{0}}\Vert \biggl\{ \Vert z_{t} - z_{t_{0}}\Vert + \biggl\vert 1 - \frac{r_{t_{0}}}{r_{t}} \biggr\vert \Vert u_{t} - z_{t}\Vert \biggr\} . \end{aligned} $$
(3.9)

Hence, from (3.9) we obtain

$$ \begin{aligned}[b]\Vert u_{t} - u_{t_{0}}\Vert &\le \Vert z_{t} - z_{t_{0}}\Vert + \frac{1}{r_{t}}\vert r_{t} - r_{t_{0}}\vert \Vert u_{t} - z_{t}\Vert \\ &\le \Vert z_{t} - z_{t_{0}}\Vert + \frac{1}{b} \vert r_{t} - r_{t_{0}}\vert L, \end{aligned} $$
(3.10)

where \(L = \sup\{\Vert u_{t} - z_{t}\Vert : t \in(0,\min\{1,\frac{1}{1 + \mu \Vert A\Vert }\})\}\).

Moreover, since \(z_{t} = F_{r_{t}}x_{t}\) and \(z_{t_{0}} = F_{r_{t_{0}}}x_{t_{0}}\), we get

$$ \langle y - z_{t},Fz_{t}\rangle+ \frac{1}{r_{t}}\langle y - z_{t},z_{t} - x_{t}\rangle\ge0,\quad \forall y \in C, $$
(3.11)

and

$$ \langle y - z_{t_{0}},Fz_{t_{0}}\rangle+ \frac{1}{r_{t_{0}}}\langle y - z_{t_{0}},z_{t_{0}} - x_{t_{0}}\rangle\ge0,\quad \forall y \in C. $$
(3.12)

Putting \(y = z_{t_{0}}\) in (3.11) and \(y = z_{t}\) in (3.12), we obtain

$$ \langle z_{t_{0}} - z_{t},Fz_{t}\rangle+ \frac{1}{r_{t}}\langle z_{t_{0}} - z_{t},z_{t} - x_{t}\rangle\ge0, $$
(3.13)

and

$$ \langle z_{t} - z_{t_{0}},Fz_{t_{0}}\rangle+ \frac{1}{r_{t_{0}}}\langle z_{t} - z_{t_{0}},z_{t_{0}} - x_{t_{0}}\rangle\ge0. $$
(3.14)

Adding up (3.13) and (3.14), we have

$$- \langle z_{t} - z_{t_{0}},Fz_{t} - Fz_{t_{0}}\rangle+ \biggl\langle z_{t_{0}} - z_{t}, \frac{z_{t} - x_{t}}{r_{t}} - \frac{z_{t_{0}} - x_{t_{0}}}{r_{t_{0}}}\biggr\rangle \ge0. $$

Since F is monotone, we get

$$\biggl\langle z_{t_{0}} - z_{t},\frac{z_{t} - x_{t}}{r_{t}} - \frac{z_{t_{0}} - x_{t_{0}}}{r_{t_{0}}}\biggr\rangle \ge0, $$

and hence

$$ \biggl\langle z_{t} - z_{t_{0}},z_{t_{0}} - z_{t} + z_{t} - x_{t_{0}} - \frac{r_{t_{0}}}{r_{t}}(z_{t} - x_{t})\biggr\rangle \ge0. $$
(3.15)

Then, using the method in (3.8) and (3.9), from (3.15) we have

$$\begin{aligned} \Vert z_{t} - z_{t_{0}}\Vert ^{2} &\le \biggl\langle z_{t} - z_{t_{0}},z_{t} - x_{t} + x_{t} - x_{t_{0}} - \frac{r_{t_{0}}}{r_{t}}(z_{t} - x_{t})\biggr\rangle \\ &=\biggl\langle z_{t} - z_{t_{0}},x_{t} - x_{t_{0}} + \biggl(1 - \frac{r_{t_{0}}}{r_{t}}\biggr) (z_{t} - x_{t})\biggr\rangle \\ &\le \Vert z_{t} - z_{t_{0}}\Vert \biggl\{ \Vert x_{t} - x_{t_{0}}\Vert + \frac{1}{b}\vert r_{t} - r_{t_{0}}\vert \Vert z_{t} - x_{t}\Vert \biggr\} . \end{aligned} $$

This implies that

$$ \Vert z_{t} - z_{t_{0}}\Vert \le \Vert x_{t} - x_{t_{0}}\Vert + \frac{1}{b}\vert r_{t} - r_{t_{0}}\vert M, $$
(3.16)

where \(M = \sup\{\Vert z_{t} - x_{t}\Vert : t \in(0,\min\{1,\frac{1}{1 + \mu \Vert A\Vert }\})\}\). Combining (3.10) with (3.16), we get

$$ \Vert u_{t} - u_{t_{0}}\Vert = \Vert T_{r_{t}}z_{t} - T_{r_{t_{0}}}z_{t_{0}}\Vert \le \Vert x_{t} - x_{t_{0}}\Vert + \frac{1}{b}\vert r_{t} - r_{t_{0}}\vert (L + M). $$
(3.17)

Now, using (3.17), we calculate

$$\begin{aligned} &\Vert x_{t} - x_{t_{0}}\Vert \\ &\quad =\bigl\Vert (I - t\overline{A})T_{r_{t}}F_{r_{t}}x_{t} + t(u + \gamma fx_{t}) - (I - t_{0}\overline{A})T_{r_{t_{0}}}F_{r_{t_{0}}}x_{t_{0}} - t_{0}(u + \gamma fx_{t_{0}})\bigr\Vert \\ &\quad \le\bigl\Vert (I - t\overline{A})T_{r_{t}}z_{t} - (I - t_{0}\overline {A})T_{r_{t}}z_{t}\bigr\Vert + \bigl\Vert (I - t_{0}\overline{A})T_{r_{t}}z_{t} - (I - t_{0}\overline {A})T_{r_{t_{0}}}z_{t_{0}}\bigr\Vert \\ &\qquad {}+ \gamma \vert t - t_{0}\vert \Vert fx_{t}\Vert + \vert t - t_{0}\vert \Vert u\Vert + \gamma t_{0} \Vert fx_{t} - fx_{t_{0}}\Vert \\ &\quad \le \vert t - t_{0}\vert \Vert \overline{A}\Vert \Vert T_{r_{t}}z_{t}\Vert + \bigl(1 - t_{0}(1 + \mu \overline{\gamma})\bigr)\Vert T_{r_{t}}z_{t} - T_{r_{t_{0}}}z_{t_{0}}\Vert \\ &\qquad {}+ \gamma \vert t - t_{0}\vert \Vert fx_{t}\Vert + \vert t - t_{0}\vert \Vert u\Vert + t_{0}\gamma k \Vert x_{t} - x_{t_{0}}\Vert \\ &\quad \le \vert t - t_{0}\vert \Vert \overline{A}\Vert \Vert T_{r_{t}}z_{t}\Vert + \bigl(1 - t_{0}(1 + \mu \overline{\gamma})\bigr)\biggl[\Vert x_{t} - x_{t_{0}}\Vert + \frac{1}{b}\vert r_{t} - r_{t_{0}}\vert (L + M) \biggr] \\ &\qquad {}+ \gamma \vert t - t_{0}\vert \Vert fx_{t}\Vert + \vert t - t_{0}\vert \Vert u\Vert + t_{0}\gamma k \Vert x_{t} - x_{t_{0}}\Vert . \end{aligned} $$

This implies that

$$\begin{aligned} \Vert x_{t} - x_{t_{0}}\Vert \le{}& \frac{\Vert \overline{A}\Vert \Vert T_{r_{t}}z_{t}\Vert + \gamma \Vert fx_{t}\Vert + \Vert u\Vert }{t_{0}(1 + \mu\overline{\gamma} - \gamma k)}\vert t - t_{0}\vert \\ &{}+ \frac{(1 - t_{0}(1 + \mu\overline{\gamma}))\frac{1}{b}(L + M)}{t_{0}(1 + \mu\overline{\gamma}- \gamma k)}\vert r_{t} - r_{t_{0}}\vert . \end{aligned} $$

Since \(r_{t} : (0,\min\{1,\frac{1}{1 + \mu \Vert A\Vert }\}) \to (0,\infty)\) is locally Lipschitzian, \(x_{t}\) is also locally Lipschitzian.

(iv) From the last inequality in (iii), the result follows immediately. □

We prove the following theorem for strong convergence of the net \(\{x_{t}\}\) as \(t \to0\), which guarantees the existence of solutions of the optimization problem (1.4).

Theorem 3.1

Let the net \(\{x_{t}\}\) be defined via (3.1). Then \(x_{t}\) converges strongly to a point \(\widetilde{x} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\) as \(t \to0\), which solves the variational inequality

$$ \bigl\langle u + \bigl(\gamma f - (I + \mu A)\bigr)\widetilde{x},p - \widetilde {x}\bigr\rangle \le 0,\quad \forall p \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T). $$
(3.18)

This x̃ is the unique solution of the optimization problem (1.4).

Proof

We first show the uniqueness of a solution of the variational inequality (3.18), which is indeed a consequence of the strong monotonicity of \((I + \mu A) -\gamma f\). In fact, since A is a strongly positive bounded linear operator with a constant \(\overline{\gamma} > 0\), we know from Lemma 2.3 that \((I + \mu A) - \gamma f\) is strongly monotone with a constant \(1 + \mu\overline{\gamma} - \gamma k > 0\). Suppose that \(\widetilde{x} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\) and \(\widehat{x} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\) are both solutions of (3.18). Then we have

$$ \bigl\langle u + \bigl(\gamma f - (I + \mu A)\bigr)\widetilde{x},\widetilde{x} - \widehat{x} \bigr\rangle \le0 $$
(3.19)

and

$$ \bigl\langle u + \bigl(\gamma f - (I + \mu A)\bigr)\widehat{x},\widehat{x} - \widetilde {x}\bigr\rangle \le0. $$
(3.20)

Adding up (3.19) and (3.20) yields

$$\bigl\langle \bigl((I + \mu A) - \gamma f\bigr)\widetilde{x} - \bigl((I + \mu A) - \gamma f\bigr)\widehat{x},\widetilde{x} - \widehat{x}\bigr\rangle \le0. $$

The strong monotonicity of \((I + \mu A) - \gamma f\) (Lemma 2.3) gives \((1 + \mu\overline{\gamma} - \gamma k)\Vert \widetilde{x} - \widehat{x}\Vert ^{2} \le \langle((I + \mu A) - \gamma f)\widetilde{x} - ((I + \mu A) - \gamma f)\widehat{x},\widetilde{x} - \widehat{x}\rangle \le 0\), which implies that \(\widetilde{x} = \widehat{x}\), and the uniqueness is proved.

Next, we prove that \(x_{t} \to\widetilde{x}\) as \(t \to0\). Let \(\overline{A} = (I + \mu A)\), and let \(z_{t} = F_{r_{t}}x_{t}\). Observing \(\operatorname {Fix}(T) = \operatorname {Fix}(T_{r_{t}})\) (by Lemma 2.6(iii)) and \(\operatorname {Fix}(F_{r_{t}}) = \operatorname {VI}(C,F)\) (by Lemma 2.5(iii)), from (3.1) we write, for given \(p \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\),

$$\begin{aligned} x_{t} - p &=(I - t\overline{A})T_{r_{t}}z_{t} - (I - t\overline{A})T_{r_{t}}p + t\gamma(fx_{t} - fp) + t \bigl(u + (\gamma f - \overline{A})p\bigr) \\ &=(I - t\overline{A}) (T_{r_{t}}z_{t} - T_{r_{t}}p) + t\gamma(fx_{t} - fp) + t\bigl(u + (\gamma f - \overline{A})p\bigr) \end{aligned} $$

to derive that

$$\begin{aligned} \Vert x_{t} - p\Vert ^{2} = {}&\bigl\langle (I - t\overline{A}) (T_{r_{t}}z_{t} - T_{r_{t}}p),x_{t} - p\bigr\rangle + t\gamma\langle fx_{t} - fp,x_{t} - p \rangle \\ &{}+ t\bigl\langle u + (\gamma f - \overline{A})p,x_{t} - p\bigr\rangle \\ \le{}&\bigl(1 - t(1 + \mu\overline{\gamma})\bigr)\Vert z_{t} - p \Vert \Vert x_{t} - p\Vert + t\gamma k \Vert x_{t} - p\Vert ^{2} \\ &{}+ t\bigl\langle u + (\gamma f - \overline{A})p,x_{t} - p\bigr\rangle \\ \le{}&\bigl(1 - t(1 + \mu\overline{\gamma})\bigr)\Vert x_{t} - p \Vert ^{2}+ t\gamma k \Vert x_{t} - p\Vert ^{2} \\ &{}+ t\bigl\langle u + (\gamma f - \overline{A})p,x_{t} - p\bigr\rangle . \end{aligned} $$

Therefore we have

$$ \Vert x_{t} - p\Vert ^{2} \le\frac{1}{1 + \mu\overline{\gamma} - \gamma k} \bigl\langle u + (\gamma f - \overline{A})p,x_{t} - p\bigr\rangle . $$
(3.21)

Since \(\{x_{t}\}\) is bounded as \(t \to0\) (by Proposition 3.1(i)), there exists a subsequence \(\{t_{n}\}\) in \((0,\min\{1,\frac{1}{1 + \mu \Vert A\Vert }\})\) such that \(t_{n} \to0\) and \(x_{t_{n}} \rightharpoonup x^{\ast}\). First of all, we prove that \(x^{\ast} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\). To this end, we divide its proof into four steps.

Step 1. We show that \(\lim_{n \to\infty} \Vert x_{t_{n}} - z_{t_{n}}\Vert = 0\), where \(z_{t_{n}} = F_{r_{t_{n}}}x_{t_{n}}\). To show this, let \(p \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\). Since \(p = F_{r_{t_{n}}}p\), from (2.2) we deduce

$$\begin{aligned} \Vert z_{t_{n}} - p\Vert ^{2} & = \Vert F_{r_{t_{n}}}x_{t_{n}} - F_{r_{t_{n}}}p\Vert ^{2} \\ &\le\langle z_{t_{n}} - p,x_{t_{n}} - p\rangle \\ &= \frac{1}{2}\bigl[\Vert x_{t_{n}} - p\Vert ^{2} + \Vert z_{t_{n}} - p\Vert ^{2} - \Vert x_{t_{n}} - z_{t_{n}}\Vert ^{2}\bigr], \end{aligned} $$

and hence

$$\Vert z_{t_{n}} - p\Vert ^{2} \le \Vert x_{t_{n}} - p\Vert ^{2} - \Vert x_{t_{n}} - z_{t_{n}}\Vert ^{2} . $$

Thus, from (3.2) we have

$$\Vert T_{r_{t_{n}}}z_{t_{n}} - p\Vert ^{2} \le \Vert z_{t_{n}} - p\Vert ^{2} \le \Vert x_{t_{n}} - p\Vert ^{2} - \Vert x_{t_{n}} - z_{t_{n}}\Vert ^{2} . $$

This implies

$$\begin{aligned} \Vert x_{t_{n}} - z_{t_{n}}\Vert ^{2} &\le \Vert x_{t_{n}} - p\Vert ^{2} - \Vert T_{r_{t_{n}}}z_{t_{n}} - p\Vert ^{2} \\ &\le\bigl(\Vert x_{t_{n}} - p\Vert + \Vert T_{r_{t_{n}}}z_{t_{n}} - p\Vert \bigr) \bigl(\Vert x_{t_{n}} - p\Vert - \Vert T_{r_{t_{n}}}z_{t_{n}} - p\Vert \bigr) \\ &\le\bigl(\Vert x_{t_{n}} - p\Vert + \Vert T_{r_{t_{n}}}z_{t_{n}} - p\Vert \bigr)\Vert x_{t_{n}} - T_{r_{t_{n}}}z_{t_{n}} \Vert \\ &=\bigl(\Vert x_{t_{n}} - p\Vert + \Vert T_{r_{t_{n}}}z_{t_{n}} - p\Vert \bigr)\Vert x_{t_{n}} - T_{r_{t_{n}}}F_{r_{t_{n}}}x_{t_{n}} \Vert . \end{aligned} $$

Since \(t_{n} \to0\) and \(\Vert x_{t_{n}} - T_{r_{t_{n}}}F_{r_{t_{n}}}x_{t_{n}}\Vert \to0\) by Proposition 3.1(ii), we get \(\Vert x_{t_{n}} - z_{t_{n}}\Vert \to0\) by the boundedness of \(\{x_{t}\}\) and \(\{T_{r_{t}}z_{t}\}\).

Step 2. We show that \(\lim_{n \to\infty} \Vert u_{t_{n}} - z_{t_{n}}\Vert = 0\), where \(u_{t_{n}} = T_{r_{t_{n}}}z_{t_{n}}\). Indeed, from Proposition 3.1(ii) and Step 1 it follows that

$$\Vert u_{t_{n}} - z_{t_{n}}\Vert \le \Vert u_{t_{n}} - x_{t_{n}}\Vert + \Vert x_{t_{n}} - z_{t_{n}}\Vert \to0 \quad (\text{as } n \to\infty). $$

Step 3. We show that \(x^{\ast} \in \operatorname {VI}(C,F)\). In fact, from the definition of \(z_{t_{n}} = F_{r_{t_{n}}}x_{t_{n}}\) we have

$$ \langle y - z_{t_{n}},Fz_{t_{n}}\rangle+ \biggl\langle y - z_{t_{n}},\frac{z_{t_{n}} - x_{t_{n}}}{r_{t_{n}}}\biggr\rangle \ge0, \quad \forall y \in C. $$
(3.22)

Set \(w_{t} = tv + (1 - t)x^{\ast}\) for all \(t \in(0,1]\) and \(v \in C\). Then \(w_{t} \in C\). From (3.22) it follows that

$$ \begin{aligned}[b] \langle w_{t} - z_{t_{n}},Fw_{t}\rangle &\ge\langle w_{t} - z_{t_{n}},Fw_{t}\rangle- \langle w_{t} - z_{t_{n}},Fz_{t_{n}}\rangle- \biggl\langle w_{t} - z_{t_{n}},\frac{z_{t_{n}} - x_{t_{n}}}{r_{t_{n}}}\biggr\rangle \\ &=\langle w_{t} - z_{t_{n}},Fw_{t} - Fz_{t_{n}}\rangle- \biggl\langle w_{t} - z_{t_{n}}, \frac{z_{t_{n}} - x_{t_{n}}}{r_{t_{n}}}\biggr\rangle . \end{aligned} $$
(3.23)

By Step 1, we have \(\frac{z_{t_{n}} - x_{t_{n}}}{r_{t_{n}}} \to0\) as \(n \to\infty\). Moreover, since \(x_{t_{n}} \rightharpoonup x^{\ast}\), by Step 1, we have \(z_{t_{n}} \rightharpoonup x^{\ast}\) as \(n \to \infty\). Since F is monotone, we also have that \(\langle w_{t} - z_{t_{n}},Fw_{t} - Fz_{t_{n}}\rangle\ge0\). Thus, from (3.23) it follows that

$$0 \le\lim_{n \to\infty}\langle w_{t} - z_{t_{n}},Fw_{t} \rangle= \bigl\langle w_{t} - x^{\ast},Fw_{t}\bigr\rangle , $$

and hence

$$\bigl\langle v - x^{\ast},Fw_{t}\bigr\rangle \ge0,\quad \forall v \in C. $$

Letting \(t \to0\), the continuity of F yields that

$$\bigl\langle v - x^{\ast},Fx^{\ast}\bigr\rangle \ge0,\quad \forall v \in C. $$

This implies that \(x^{\ast} \in \operatorname {VI}(C,F)\).

Step 4. We show that \(x^{\ast} \in \operatorname {Fix}(T)\). In fact, from the definition of \(u_{t_{n}} = T_{r_{t_{n}}}z_{t_{n}}\), we have

$$ \langle y - u_{t_{n}},Tu_{t_{n}}\rangle- \frac{1}{r_{t_{n}}}\bigl\langle y - u_{t_{n}},(1 + r_{t_{n}})u_{t_{n}} - z_{t_{n}}\bigr\rangle \le0,\quad \forall y \in C. $$
(3.24)

Put \(w_{t} = tv + (1 - t)x^{\ast}\) for all \(t \in(0,1]\) and \(v \in C\). Then \(w_{t} \in C\), and from (3.24) and pseudocontractivity of T it follows that

$$ \begin{aligned}[b] \langle u_{t_{n}} - w_{t},Tw_{t}\rangle \ge{}&\langle u_{t_{n}} - w_{t},Tw_{t}\rangle+ \langle w_{t} - u_{t_{n}},Tu_{t_{n}}\rangle \\ &{}- \frac{1}{r_{t_{n}}}\bigl\langle w_{t} - u_{t_{n}},(1 + r_{t_{n}})u_{t_{n}} - z_{t_{n}}\bigr\rangle \\ = {}&{-} \langle w_{t} - u_{t_{n}},Tw_{t} - Tu_{t_{n}}\rangle- \frac{1}{r_{t_{n}}}\langle w_{t} - u_{t_{n}},u_{t_{n}} - z_{t_{n}}\rangle \\ &{}- \langle w_{t} - u_{t_{n}},u_{t_{n}}\rangle \\ \ge{}&{-} \Vert w_{t} - u_{t_{n}}\Vert ^{2} - \frac{1}{r_{t_{n}}}\langle w_{t} - u_{t_{n}},u_{t_{n}} - z_{t_{n}}\rangle- \langle w_{t} - u_{t_{n}},u_{t_{n}} \rangle \\ ={} &{- }\langle w_{t} - u_{t_{n}},w_{t}\rangle- \biggl\langle w_{t} - u_{t_{n}}, \frac{u_{t_{n}} - z_{t_{n}}}{r_{t_{n}}}\biggr\rangle . \end{aligned} $$
(3.25)

By Step 2, we get \(\frac{u_{t_{n}} - z_{t_{n}}}{r_{t_{n}}} \to0\) as \(n \to\infty\). Moreover, since \(x_{t_{n}} \rightharpoonup x^{\ast}\), by Step 1 and Step 2, we have \(u_{t_{n}} \rightharpoonup x^{\ast}\) as \(n \to\infty\). Therefore, from (3.25), as \(n \to\infty\), it follows that

$$\bigl\langle x^{\ast} - w_{t},Tw_{t}\bigr\rangle \ge \bigl\langle x^{\ast} - w_{t},w_{t}\bigr\rangle , $$

and hence

$$- \bigl\langle v - x^{\ast},Tw_{t}\bigr\rangle \ge- \bigl\langle v - x^{\ast},w_{t}\bigr\rangle ,\quad \forall v \in C. $$

Letting \(t \to0\) and using the fact that T is continuous, we get

$$- \bigl\langle v - x^{\ast},Tx^{\ast}\bigr\rangle \ge- \bigl\langle v - x^{\ast},x^{\ast}\bigr\rangle ,\quad \forall v \in C. $$

Now, let \(v = Tx^{\ast}\). Then we obtain \(x^{\ast} = Tx^{\ast}\) and hence \(x^{\ast} \in \operatorname {Fix}(T)\). Therefore, \(x^{\ast} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\).

Now, we substitute \(x^{\ast}\) for p in (3.21) to obtain

$$ \bigl\Vert x_{t_{n}} - x^{\ast}\bigr\Vert ^{2} \le \frac{1}{1 + \mu\overline{\gamma} - \gamma k}\bigl\langle u + (\gamma f - \overline{A})x^{\ast},x_{t_{n}} - x^{\ast}\bigr\rangle . $$
(3.26)

Note that \(x_{t_{n}} \rightharpoonup x^{\ast}\) and \(\lim_{n \to \infty}t_{n} = 0\). These facts and inequality (3.26) imply that \(x_{t_{n}} \to x^{\ast}\) strongly.

Finally, we prove that \(x^{\ast}\) is a solution of the variational inequality (3.18). In fact, putting \(x_{t_{n}}\) in place of \(x_{t}\) in (3.21) and taking the limit as \(n \to\infty\), we obtain

$$\bigl\Vert x^{\ast} - p\bigr\Vert ^{2} \le \frac{1}{1 + \mu\overline{\gamma} - \gamma k}\bigl\langle u + (\gamma f - \overline{A})p, x^{\ast} - p \bigr\rangle ,\quad \forall p \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T). $$

In particular, \(x^{\ast}\) solves the following variational inequality:

$$x^{\ast} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T), \quad \bigl\langle u + (\gamma f - \overline {A})p,p - x^{\ast} \bigr\rangle \le0,\quad \forall p \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T), $$

or the equivalent dual variational inequality (see [46])

$$x^{\ast} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T),\quad \bigl\langle u + (\gamma f - \overline {A})x^{\ast},p - x^{\ast}\bigr\rangle \le0,\quad \forall p \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T). $$

That is, \(x^{\ast} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\) is a solution of the variational inequality (3.18); hence \(x^{\ast} = \widetilde{x}\) by uniqueness. In summary, we have shown that each cluster point of \(\{x_{t}\}\) as \(t \to0\) equals x̃. Therefore \(x_{t} \to \widetilde{x}\) as \(t \to0\). By (3.18) and Lemma 2.7, we immediately deduce the desired result. This completes the proof. □

If we take \(\mu= 0\), \(u = 0\) and \(f \equiv0\) in Theorem 3.1, then we have the following corollary.

Corollary 3.1

Let \(\{x_{t}\}\) be defined by

$$x_{t} = (I - t)T_{r_{t}}F_{r_{t}}x_{t}. $$

Then \(\{x_{t}\}\) converges strongly as \(t \to0\) to a point \(\widetilde{x} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\), which is the minimum norm point of \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\).

Taking \(T \equiv I\) in Theorem 3.1, we have the following corollary.

Corollary 3.2

Let \(\{x_{t}\}\) be defined by

$$x_{t} = t(u + \gamma fx_{t}) +\bigl(I - t(I + \mu A) \bigr)F_{r_{t}}x_{t}. $$

Then \(\{x_{t}\}\) converges strongly as \(t \to0\) to a point \(\widetilde{x} \in \operatorname {VI}(C,F)\), which is the unique solution of the optimization problem

$$ \min_{x \in \operatorname {VI}(C,F)}\frac{\mu}{2}\langle Ax, x\rangle+ \frac {1}{2}\Vert x - u\Vert ^{2} - h(x). $$
(3.27)

Proof

If \(T \equiv I\), then \(T_{r}\) in Lemma 2.6 is the identity mapping. Thus the result follows from Theorem 3.1. □

Taking \(F \equiv0\) in Theorem 3.1, we get the following corollary.

Corollary 3.3

Let \(\{x_{t}\}\) be defined by

$$x_{t} = t(u + \gamma fx_{t}) + \bigl(I - t(I + \mu A) \bigr)T_{r_{t}}x_{t}. $$

Then \(\{x_{t}\}\) converges strongly as \(t \to0\) to a point \(\widetilde{x} \in \operatorname {Fix}(T)\), which is the unique solution of the optimization problem

$$ \min_{x \in \operatorname {Fix}(T)}\frac{\mu}{2}\langle Ax, x\rangle+ \frac{1}{2}\Vert x - u\Vert ^{2} - h(x). $$
(3.28)

Proof

If \(F \equiv0\), then \(F_{r}\) in Lemma 2.5 is the identity mapping. Thus the result follows from Theorem 3.1. □

If, in Theorem 3.1, we take \(C \equiv H\), then we obtain the following corollary.

Corollary 3.4

Let \(T : H \to H\) be a continuous pseudocontractive mapping, and let \(F : H \to H\) be a continuous monotone mapping. Let \(\{x_{t}\}\) be defined by (3.1). Then \(\{x_{t}\}\) converges strongly as \(t \to0\) to a point \(\widetilde{x} \in F^{-1}(0) \cap \operatorname {Fix}(T)\), which is the unique solution of the optimization problem

$$ \min_{x \in F^{-1}(0) \cap \operatorname {Fix}(T)}\frac{\mu}{2}\langle Ax, x\rangle+ \frac{1}{2}\Vert x - u\Vert ^{2} - h(x). $$
(3.29)

Proof

Since \(D(F) = H\), we have \(\operatorname {VI}(H,F) = F^{-1}(0)\). So, by Theorem 3.1, we obtain the desired result. □
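To make the identification \(\operatorname {VI}(H,F) = F^{-1}(0)\) concrete, here is a minimal numerical sketch. The mapping is our own illustrative choice: \(F(x) = (x_{1}, 0)\) on \(\mathbb{R}^{2}\) is linear and monotone, since \(\langle Fx - Fy, x - y\rangle = (x_{1} - y_{1})^{2} \ge 0\). When \(C = H\), the defining inequality of \(z = F_{r}x\) reduces to the resolvent equation \(z + rF(z) = x\), i.e. \(F_{r} = (I + rF)^{-1}\), whose fixed points are exactly the zeros of F.

```python
# Toy check that Fix(F_r) = F^{-1}(0) when C = H = R^2.
# F(x) = (x1, 0) is our own illustrative linear monotone mapping.

def F(x):
    return (x[0], 0.0)

def F_r(x, r):
    # Solve z + r*F(z) = x componentwise: (1 + r) z1 = x1 and z2 = x2.
    return (x[0] / (1.0 + r), x[1])

r = 2.0
kernel_pt = (0.0, 3.5)   # F(kernel_pt) = 0: a zero of F, hence fixed by F_r
other_pt = (1.0, 1.0)    # F(other_pt) != 0: moved toward the kernel {x1 = 0}

print(F_r(kernel_pt, r), F_r(other_pt, r))
```

Any point with \(x_{1} = 0\) is left unchanged by \(F_{r}\), while every other point is pulled toward the kernel \(\{x_{1} = 0\}\).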

Now, we propose the following general iterative method which generates a sequence in an explicit way:

$$ x_{n+1} = \alpha_{n}(u + \gamma fx_{n}) + \bigl(I - \alpha_{n}(I + \mu A)\bigr)T_{r_{n}}F_{r_{n}}x_{n}, \quad \forall n \ge0, $$
(3.30)

where \(x_{0} \in H\) is an arbitrary initial guess, \(\{\alpha_{n}\} \subset[0,1]\), and \(\{r_{n}\} \subset(0,\infty)\); and we establish the strong convergence of this sequence to a point \(\widetilde{x} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\), which is the unique solution of the optimization problem (1.4).

Theorem 3.2

Let \(\{x_{n}\}\) be the sequence generated by the explicit scheme (3.30). Let \(\{\alpha_{n}\}\) and \(\{r_{n}\} \subset(0,\infty)\) satisfy the following conditions:

  1. (C1)

    \(\{\alpha_{n}\} \subset[0,1]\) and \(\alpha_{n} \to0\) as \(n \to\infty\);

  2. (C2)

    \(\sum_{n = 0}^{\infty}\alpha_{n} = \infty\);

  3. (C3)

    \(\vert \alpha_{n+1} - \alpha_{n}\vert \le o(\alpha_{n+1}) + \sigma_{n}\), \(\sum_{n=0}^{\infty}\sigma_{n} <\infty\) (the perturbed control condition);

  4. (C4)

    \(\liminf_{n\to\infty}r_{n} > 0\) and \(\sum_{n = 0}^{\infty} \vert r_{n+1} - r_{n}\vert < \infty\).

Then \(\{x_{n}\}\) converges strongly to a point \(\widetilde{x} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\), which is the unique solution of the variational inequality (3.18). This x̃ is the unique solution of the optimization problem (1.4).

Proof

First, note that from condition (C1), without loss of generality, we assume that \(\alpha_{n}(1 + \mu\overline{\gamma} - \gamma k) < 1\) and \(\frac{2\alpha_{n}(1 + \mu\overline{\gamma} - \gamma k)}{1 - \alpha_{n}\gamma k} < 1\) for all \(n \ge0\). Let \(\widetilde{x} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\) be the unique solution of the variational inequality (3.18). (The existence of x̃ follows from Theorem 3.1.)

From now on, we put \(\overline{A} = I + \mu A\), \(z_{n} = F_{r_{n}}x_{n}\) and \(u_{n} = T_{r_{n}}z_{n}\). Let \(p \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\). Then \(p = T_{r_{n}}p\) by Lemma 2.6(iii) and \(p = F_{r_{n}}p\) by Lemma 2.5(iii). Moreover, from the nonexpansivity of \(F_{r_{n}}\) it follows that

$$ \Vert z_{n} - p\Vert = \Vert F_{r_{n}}x_{n} - F_{r_{n}}p\Vert \le \Vert x_{n} - p\Vert . $$
(3.31)

We divide the proof into several steps as follows.

Step 1. We show that \(\{x_{n}\}\) is bounded. First of all, by (3.31), we deduce

$$\begin{aligned} &\Vert x_{n+1} - p\Vert \\ &\quad =\bigl\Vert \alpha_{n}(u + \gamma fx_{n}) + (I - \alpha_{n}\overline{A})T_{r_{n}}z_{n} - p\bigr\Vert \\ &\quad =\bigl\Vert (I - \alpha_{n}\overline{A})T_{r_{n}}z_{n} - (I - \alpha_{n}\overline{A})T_{r_{n}}p + \alpha_{n}\gamma(fx_{n} - fp) + \alpha_{n}\bigl(u + (\gamma f - \overline{A})p\bigr)\bigr\Vert \\ &\quad \le\bigl\Vert (I - \alpha_{n}\overline{A})T_{r_{n}}z_{n} - (I - \alpha_{n}\overline{A})T_{r_{n}}p\bigr\Vert + \alpha_{n}\gamma k\Vert x_{n} - p\Vert + \alpha_{n}\bigl(\Vert u\Vert + \bigl\Vert (\gamma f - \overline {A})p\bigr\Vert \bigr) \\ &\quad \le\bigl(1 - \alpha_{n}(1 + \mu\overline{\gamma})\bigr)\Vert z_{n} - p\Vert + \alpha_{n}\gamma k\Vert x_{n} - p\Vert + \alpha_{n}\bigl(\Vert u\Vert + \bigl\Vert (\gamma f - \overline {A})p\bigr\Vert \bigr) \\ &\quad \le\bigl(1 - \alpha_{n}(1 + \mu\overline{\gamma})\bigr)\Vert x_{n} - p\Vert + \alpha_{n}\gamma k\Vert x_{n} - p\Vert + \alpha_{n}\bigl(\Vert u\Vert + \bigl\Vert (\gamma f - \overline {A})p\bigr\Vert \bigr) \\ &\quad =\bigl(1 - \alpha_{n}(1 + \mu\overline{\gamma} - \gamma k)\bigr) \Vert x_{n} - p\Vert + \alpha_{n}\bigl(\Vert u\Vert + \bigl\Vert (\gamma f - \overline{A})p\bigr\Vert \bigr) \\ &\quad \le\max \biggl\{ \Vert x_{n} - p\Vert ,\frac{\Vert u\Vert + \Vert (\gamma f - \overline{A})p\Vert }{1 + \mu\overline{\gamma} - \gamma k} \biggr\} . \end{aligned} $$

By induction, we derive

$$\Vert x_{n} - p\Vert \le\max \biggl\{ \Vert x_{0} - p \Vert ,\frac{ \Vert u\Vert + \Vert (\gamma f - \overline{A})p\Vert }{1 + \mu\overline{\gamma} - \gamma k} \biggr\} , \quad \forall n \ge0. $$

This implies that \(\{x_{n}\}\) is bounded and so are \(\{z_{n}\} = \{F_{r_{n}}x_{n}\}\), \(\{u_{n}\}=\{T_{r_{n}}z_{n}\}\), \(\{fx_{n}\}\), and \(\{\overline{A}T_{r_{n}}z_{n}\}\). As a consequence, with the control condition (C1), we get

$$ \Vert x_{n+1} - T_{r_{n}}z_{n}\Vert \le \alpha_{n}\bigl(\Vert u\Vert + \Vert \gamma fx_{n} - \overline{A}T_{r_{n}}z_{n}\Vert \bigr) \to0\quad (n \to \infty). $$
(3.32)

Step 2. We show that \(\lim_{n \to\infty} \Vert x_{n+1} - x_{n}\Vert = 0\). In fact, by using the same method as in the proof of Proposition 3.1(iii) together with \(z_{n} = F_{r_{n}}x_{n}\), \(z_{n-1} = F_{r_{n-1}}x_{n-1}\), \(u_{n} = T_{r_{n}}z_{n}\), and \(u_{n-1} = T_{r_{n-1}}z_{n-1}\) instead of \(z_{t} = F_{r_{t}}x_{t}\), \(z_{t_{0}} = F_{r_{t_{0}}}x_{t_{0}}\), \(u_{t} = T_{r_{t}}z_{t}\), and \(u_{t_{0}} = T_{r_{t_{0}}}z_{t_{0}}\), respectively, we have

$$ \Vert T_{r_{n}}z_{n} - T_{r_{n-1}}z_{n-1} \Vert \le \Vert x_{n} - x_{n-1}\Vert + \frac{1}{b} \vert r_{n} - r_{n-1}\vert (M_{1} + M_{2}), $$
(3.33)

where \(M_{1} = \sup\{\Vert u_{n} - z_{n}\Vert : n \ge0\}\), \(M_{2} = \sup\{\Vert z_{n} - x_{n}\Vert : n\ge0\}\), and \(b > 0\) is a constant with \(r_{n} > b\) for all \(n \ge 0\) (such b exists by condition (C4)). Thus, by (3.33) and Lemma 2.4, we derive

$$ \begin{aligned}[b] &\Vert x_{n + 1} - x_{n}\Vert \\ &\quad =\bigl\Vert \alpha_{n}(u + \gamma fx_{n}) + (I - \alpha_{n}\overline{A})T_{r_{n}}z_{n} \\ & \qquad {}- \alpha_{n-1}(u + \gamma fx_{n-1}) - (I - \alpha_{n-1}\overline {A})T_{r_{n-1}}z_{n-1}\bigr\Vert \\ &\quad \le\bigl\Vert (I - \alpha_{n}\overline{A}) (T_{r_{n}}z_{n} - T_{r_{n-1}}z_{n-1})\bigr\Vert + \vert \alpha_{n} - \alpha_{n-1}\vert \Vert \overline{A}\Vert \Vert T_{r_{n-1}}z_{n-1}\Vert \\ &\qquad {}+ \alpha_{n}\gamma \Vert fx_{n} - fx_{n-1} \Vert + \vert \alpha_{n} - \alpha_{n-1}\vert \Vert u \Vert \\ &\quad \le\bigl(1 - \alpha_{n}(1 + \mu\overline{\gamma})\bigr)\Vert T_{r_{n}}z_{n} - T_{r_{n-1}}z_{n-1}\Vert \\ &\qquad {}+ \alpha_{n}\gamma k\Vert x_{n} - x_{n-1}\Vert + \vert \alpha_{n} - \alpha_{n-1}\vert \bigl(\Vert \overline{A}\Vert \Vert T_{r_{n-1}}z_{n-1}\Vert +\Vert u\Vert \bigr) \\ &\quad \le\bigl(1 - \alpha_{n}(1 + \mu\overline{\gamma})\bigr)\biggl[ \Vert x_{n} - x_{n-1}\Vert + \frac{1}{b}\vert r_{n} - r_{n-1}\vert (M_{1} + M_{2}) \biggr] \\ &\qquad {}+ \alpha_{n}\gamma k\Vert x_{n} - x_{n-1} \Vert + \vert \alpha_{n} - \alpha_{n-1}\vert M_{3} \\ &\quad \le\bigl(1 - \alpha_{n}(1 + \mu\overline{\gamma} - \gamma k)\bigr) \Vert x_{n} - x_{n-1}\Vert \\ &\qquad {}+ \vert \alpha_{n} - \alpha_{n-1}\vert M_{3} + \frac{1}{b}\vert r_{n} - r_{n-1}\vert (M_{1} + M_{2}) \\ &\quad \le\bigl(1 - \alpha_{n}(1 + \mu\overline{\gamma} - \gamma k)\bigr) \Vert x_{n} - x_{n-1}\Vert \\ &\qquad {} + \bigl(o(\alpha_{n}) + \sigma_{n-1}\bigr)M_{3}+ \frac{1}{b}\vert r_{n} - r_{n-1}\vert (M_{1} + M_{2}),\end{aligned} $$
(3.34)

where \(M_{3} = \sup\{\Vert \overline{A}\Vert \Vert T_{r_{n}}z_{n}\Vert + \Vert u\Vert : n \ge0\}\). By taking \(s_{n +1} = \Vert x_{n + 1} - x_{n}\Vert \), \(w_{n} = \alpha_{n}(1 + \mu\overline{\gamma} - \gamma k)\), \(w_{n}\delta_{n} = M_{3}o(\alpha_{n})\) and \(\nu_{n} = \sigma_{n-1}M_{3} + \frac{1}{b}\vert r_{n} - r_{n-1}\vert (M_{1} + M_{2})\), from (3.34) we deduce

$$s_{n+1} \le(1 - w_{n})s_{n} + w_{n} \delta_{n} + \nu_{n}. $$

Hence, by conditions (C2), (C3), (C4) and Lemma 2.2, we obtain

$$\lim_{n \to\infty} \Vert x_{n + 1} - x_{n}\Vert = 0. $$
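The convergence in Step 2 rests on the standard recursion lemma invoked here (Lemma 2.2): if \(s_{n+1} \le (1 - w_{n})s_{n} + w_{n}\delta_{n} + \nu_{n}\) with \(w_{n} \in (0,1]\), \(\sum w_{n} = \infty\), \(\limsup\delta_{n} \le 0\), and \(\sum\nu_{n} < \infty\), then \(s_{n} \to 0\). A quick numerical check of this pattern, with illustrative sequences of our own choosing:

```python
# Run the recursion bound of Lemma 2.2 with equality (the worst case)
# for sequences satisfying its hypotheses, and watch s_n decay to 0.

s = 5.0
for n in range(100000):
    w = 1.0 / (n + 1)            # sum w_n diverges
    d = 1.0 / (n + 1)            # delta_n -> 0
    v = 1.0 / (n + 1) ** 2       # sum nu_n converges
    s = (1 - w) * s + w * d + v

print(s)   # driven close to 0
```

Even though each step only contracts by the vanishing factor \(1 - w_{n}\), the divergence of \(\sum w_{n}\) makes the accumulated contraction unbounded, which is exactly what forces \(s_{n} \to 0\).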

Step 3. We show that \(\lim_{n \to\infty} \Vert x_{n} - z_{n}\Vert = 0\). By taking \(x_{n}\) and \(z_{n}\) instead of \(x_{t_{n}}\) and \(z_{t_{n}}\) in Step 1 of the proof of Theorem 3.1, the result follows from Step 1 in the proof of Theorem 3.1, (3.32) and Step 2.

Step 4. We show that \(\lim_{n \to\infty} \Vert x_{n} - u_{n}\Vert = 0\), where \(u_{n} = T_{r_{n}}z_{n}\). In fact, from (3.32) and Step 2, we have

$$\begin{aligned} \Vert x_{n} - u_{n}\Vert &= \Vert x_{n} - T_{r_{n}}z_{n}\Vert \\ &\le \Vert x_{n} - x_{n+1}\Vert + \Vert x_{n+1} - T_{r_{n}}z_{n}\Vert \to0\quad (\text{as } n \to \infty). \end{aligned} $$

Step 5. We show that \(\lim_{n \to\infty} \Vert u_{n} - z_{n}\Vert = 0\), where \(u_{n} = T_{r_{n}}z_{n}\). In fact, from Step 3 and Step 4, we have

$$\Vert u_{n} - z_{n}\Vert \le \Vert u_{n} - x_{n}\Vert + \Vert x_{n} - z_{n}\Vert \to0\quad (\text{as } n \to\infty). $$

Step 6. We show that \(\limsup_{n\to\infty}\langle u + (\gamma f - \overline{A})\widetilde{x},x_{n} - \widetilde{x}\rangle\le0\). To this end, take a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that

$$\limsup_{n \to\infty}\bigl\langle u + (\gamma f - \overline{A}) \widetilde{x},x_{n} - \widetilde{x}\bigr\rangle = \lim _{k \to \infty}\bigl\langle u + (\gamma f - \overline{A}) \widetilde{x},x_{n_{k}} - \widetilde{x}\bigr\rangle . $$

Without loss of generality, we may assume that \(x_{n_{k}} \rightharpoonup p\). Take \(x_{n_{k}}\) and \(z_{n_{k}}\) in place of \(x_{t_{n}}\) and \(z_{t_{n}}\) in Step 3 and Step 4 of the proof of Theorem 3.1. Then, from Step 3 and Step 4 in the proof of Theorem 3.1 along with Step 5, we derive \(p \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\). Hence, from (3.18) we conclude

$$\begin{aligned} \limsup_{n \to\infty}\bigl\langle u + (\gamma f - \overline{A})\widetilde{x},x_{n} - \widetilde{x}\bigr\rangle &= \lim _{k \to\infty}\bigl\langle u + (\gamma f - \overline{A}) \widetilde{x},x_{n_{k}} - \widetilde{x}\bigr\rangle \\ &=\bigl\langle u + (\gamma f - \overline{A})\widetilde{x},p - \widetilde{x} \bigr\rangle \le0.\end{aligned} $$

Step 7. We show that \(\lim_{n \to\infty} \Vert x_{n} - \widetilde{x} \Vert = 0\). Note that \(\widetilde{x} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\). Let \(z_{n} = F_{r_{n}}x_{n}\). By (3.30), \(\widetilde{x} = F_{r_{n}}\widetilde{x}\), and \(\widetilde{x} = T_{r_{n}}\widetilde{x}\), we deduce

$$x_{n+1} - \widetilde{x} = (I - \alpha_{n}\overline{A}) (T_{r_{n}}z_{n} - T_{r_{n}}\widetilde{x}) + \alpha_{n}\gamma(fx_{n} - f\widetilde{x}) + \alpha_{n}\bigl(u + (\gamma f - \overline{A})\widetilde{x}\bigr). $$

Applying Lemma 2.1 and Lemma 2.4, we obtain

$$\begin{aligned} &\Vert x_{n+1} - \widetilde{x}\Vert ^{2} \\ &\quad =\bigl\Vert (I - \alpha_{n}\overline{A}) (T_{r_{n}}z_{n} - T_{r_{n}}\widetilde{x}) + \alpha_{n}\gamma(fx_{n} - f\widetilde{x}) + \alpha_{n}\bigl(u + (\gamma f - \overline{A}) \widetilde{x}\bigr)\bigr\Vert ^{2} \\ &\quad \le\bigl\Vert (I - \alpha_{n}\overline{A}) (T_{r_{n}}z_{n} - T_{r_{n}}\widetilde{x})\bigr\Vert ^{2} + 2 \alpha_{n}\gamma\langle fx_{n} - f\widetilde{x},x_{n+1} - \widetilde {x}\rangle \\ &\qquad {}+ 2\alpha_{n}\bigl\langle u + (\gamma f - \overline{A}) \widetilde{x},x_{n+1} - \widetilde{x}\bigr\rangle \\ &\quad \le\bigl(1 - \alpha_{n}(1 + \mu\overline{\gamma}) \bigr)^{2}\Vert z_{n} - \widetilde{x}\Vert ^{2} + 2\alpha_{n}\gamma k\Vert x_{n} - \widetilde{x}\Vert \Vert x_{n+1} - \widetilde{x}\Vert \\ &\qquad {}+ 2\alpha_{n}\bigl\langle u + (\gamma f - \overline{A}) \widetilde{x},x_{n+1} - \widetilde{x}\bigr\rangle \\ &\quad \le\bigl(1 - \alpha_{n}(1 + \mu\overline{\gamma}) \bigr)^{2}\Vert x_{n} - \widetilde{x}\Vert ^{2} + \alpha_{n}\gamma k\bigl(\Vert x_{n} - \widetilde{x} \Vert ^{2} + \Vert x_{n+1} - \widetilde{x}\Vert ^{2}\bigr) \\ &\qquad {}+ 2\alpha_{n}\bigl\langle u + (\gamma f - \overline{A}) \widetilde{x},x_{n+1} - \widetilde{x}\bigr\rangle \\ &\quad \le\bigl(1 - 2\alpha_{n}(1 + \mu\overline{\gamma}) + \alpha_{n}\gamma k\bigr)\Vert x_{n} - \widetilde{x}\Vert ^{2} + \alpha_{n}^{2}(1 + \mu\overline{\gamma })^{2}\Vert x_{n} - \widetilde{x}\Vert ^{2} + \alpha_{n}\gamma k\Vert x_{n+1} - \widetilde{x}\Vert ^{2} \\ &\qquad {} + 2\alpha_{n}\bigl\langle u + (\gamma f - \overline{A}) \widetilde{x},x_{n+1} - \widetilde{x}\bigr\rangle . \end{aligned}$$
(3.35)

It then follows from (3.35) that

$$\begin{aligned} \Vert x_{n+1} - \widetilde{x}\Vert ^{2} \le{}&\frac{1 - 2\alpha_{n}(1 + \mu\overline{\gamma}) + \alpha_{n}\gamma k}{1 - \alpha_{n}\gamma k}\Vert x_{n} - \widetilde{x} \Vert ^{2} \\ &{}+ \frac{\alpha_{n}^{2}(1 + \mu\overline{\gamma})^{2}}{1 - \alpha_{n}\gamma k}\Vert x_{n} - \widetilde{x}\Vert ^{2} + \frac{2\alpha_{n}}{1 - \alpha_{n}\gamma k}\bigl\langle u + (\gamma f - \overline{A}) \widetilde{x},x_{n+1} - \widetilde{x}\bigr\rangle \\ ={} & \biggl(1 - \frac{2\alpha_{n}(1 + \mu\overline{\gamma} - \gamma k)}{1 - \alpha_{n}\gamma k} \biggr)\Vert x_{n} - \widetilde{x}\Vert ^{2} \\ &{}+ \frac{\alpha_{n}^{2}(1 + \mu\overline{\gamma})^{2}}{1 - \alpha_{n}\gamma k}\Vert x_{n} - \widetilde{x}\Vert ^{2} + \frac{2\alpha_{n}}{1 - \alpha_{n}\gamma k}\bigl\langle u + (\gamma f - \overline{A}) \widetilde{x},x_{n+1} - \widetilde{x}\bigr\rangle \\ \le{}&(1 - w_{n})\Vert x_{n} - \widetilde{x}\Vert ^{2} + w_{n}\delta_{n}, \end{aligned}$$

where

$$\begin{aligned}& w_{n} = \frac{2\alpha_{n}(1 + \mu\overline{\gamma} - \gamma k)}{1 - \alpha _{n}\gamma k}\quad \text{and} \\ &\delta_{n} = \frac{1}{2(1 + \mu\overline{\gamma} - \gamma k)} \bigl[\alpha_{n}(1 + \mu \overline{\gamma})^{2}M_{4} + 2\bigl\langle u + (\gamma f - \overline{A})\widetilde{x},x_{n+1} - \widetilde{x}\bigr\rangle \bigr], \end{aligned} $$

where \(M_{4} = \sup\{\Vert x_{n} - \widetilde{x}\Vert ^{2} : n\ge0\}\). It can be easily seen from conditions (C1) and (C2) and Step 6 that \(w_{n} \to0\), \(\sum_{n = 0}^{\infty}w_{n} = \infty\) and \(\limsup_{n \to \infty}\delta_{n} \le0 \). From Lemma 2.2 with \(\nu_{n} = 0\), we conclude that \(\lim_{n \to\infty} \Vert x_{n} - \widetilde{x}\Vert = 0\). This completes the proof. □
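As a sanity check of Theorem 3.2, the following toy computation runs the explicit scheme (3.30). All mappings and parameters below are our own illustrative choices, picked so every object is available in closed form: \(H = C = \mathbb{R}^{2}\), \(F(x) = (x_{1}, 0)\) (continuous monotone), \(T(x) = (-x_{1}, x_{2})\) (continuous pseudocontractive, since \(\langle Tx - Ty, x - y\rangle \le \Vert x - y\Vert ^{2}\)), \(A = I\) (so \(\overline{\gamma} = 1\)), \(\mu = 0.5\), \(f(x) = 0.5x\) (so \(k = 0.5\)), \(\gamma = 1.5\), \(u = (1,1)\), \(r_{n} = 1\), and \(\alpha_{n} = \frac{1}{2(n+1)}\), which satisfy (C1)-(C4). Then \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T) = \{x : x_{1} = 0\}\), and solving (3.18) on this line predicts the limit \(\widetilde{x} = (0, u_{2}/(1 + \mu\overline{\gamma} - \gamma k)) = (0, 4/3)\).

```python
# Explicit scheme (3.30) on the toy problem described above (our own choices).
# Closed-form resolvents with r_n = 1 and C = H = R^2:
#   F_1(x) = (x1/2, x2)  from  z + F(z) = x   with F(x) = (x1, 0),
#   T_1(z) = (z1/3, z2)  from  2u - T(u) = z  with T(x) = (-x1, x2).

mu, gamma, k = 0.5, 1.5, 0.5      # note 1 + mu - gamma*k = 0.75 in (0, 1)
u = (1.0, 1.0)

x = (2.0, -1.0)                    # arbitrary initial guess x_0
for n in range(100000):
    a = 1.0 / (2 * (n + 1))        # alpha_n: satisfies (C1)-(C3)
    z = (x[0] / 2.0, x[1])         # z_n = F_{r_n} x_n
    y = (z[0] / 3.0, z[1])         # T_{r_n} z_n
    fx = (k * x[0], k * x[1])      # f x_n
    x = tuple(a * (u[i] + gamma * fx[i]) + (1 - a * (1 + mu)) * y[i]
              for i in range(2))

print(x)   # approaches the predicted limit (0, 4/3)
```

The second coordinate satisfies the exact recursion \(x_{2}^{(n+1)} = (1 - 0.75\alpha_{n})x_{2}^{(n)} + \alpha_{n}u_{2}\), whose fixed point is \(u_{2}/0.75 = 4/3\); since \(\sum\alpha_{n} = \infty\), the iterates converge to it, matching the theorem.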

If we take \(\mu= 0\), \(u =0\) and \(f \equiv0\) in Theorem 3.2, then we have the following corollary.

Corollary 3.5

Let \(\{x_{n}\}\) be defined by

$$x_{n+1} = (1 - \alpha_{n})T_{r_{n}}F_{r_{n}}x_{n}. $$

Assume that the sequences \(\{\alpha_{n}\}\) and \(\{r_{n}\}\) satisfy conditions (C1)-(C4) in Theorem  3.2. Then \(\{x_{n}\}\) converges strongly to a point \(\widetilde{x} \in \operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\), which is the minimum norm point of \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\).
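A minimum-norm sketch for Corollary 3.5, reusing the same illustrative toy setup (our own choices, not from the paper): \(H = C = \mathbb{R}^{2}\), \(F(x) = (x_{1}, 0)\), \(T(x) = (-x_{1}, x_{2})\), \(r_{n} = 1\), so \(F_{1}(x) = (x_{1}/2, x_{2})\), \(T_{1}(z) = (z_{1}/3, z_{2})\), and \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T) = \{x : x_{1} = 0\}\), whose minimum-norm point is the origin.

```python
# Minimum-norm scheme of Corollary 3.5 (mu = 0, u = 0, f = 0) on the toy
# problem above: x_{n+1} = (1 - alpha_n) T_{r_n} F_{r_n} x_n.

x = (3.0, 5.0)                              # arbitrary initial guess x_0
for n in range(100000):
    a = 1.0 / (2 * (n + 1))                 # alpha_n: satisfies (C1)-(C3)
    z = (x[0] / 2.0, x[1])                  # z_n = F_{r_n} x_n
    y = (z[0] / 3.0, z[1])                  # T_{r_n} z_n
    x = ((1 - a) * y[0], (1 - a) * y[1])

print(x)   # approaches (0, 0), the minimum-norm common point
```

Here \(x_{2}^{(n+1)} = (1 - \alpha_{n})x_{2}^{(n)}\), and \(\prod(1 - \alpha_{n}) \to 0\) because \(\sum\alpha_{n} = \infty\), so the iterates select the zero vector among all common points.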

Taking \(T \equiv I\) in Theorem 3.2, we have the following corollary.

Corollary 3.6

Let \(\{x_{n}\}\) be generated by the following iterative scheme:

$$x_{n+1} = \alpha_{n}(u + \gamma fx_{n}) + \bigl(I - \alpha_{n}(I + \mu A)\bigr)F_{r_{n}}x_{n}, \quad \forall n \ge0. $$

Assume that the sequences \(\{\alpha_{n}\}\) and \(\{r_{n}\}\) satisfy conditions (C1)-(C4) in Theorem  3.2. Then \(\{x_{n}\}\) converges strongly to a point \(\widetilde{x} \in \operatorname {VI}(C,F)\), which is the unique solution of the optimization problem (3.27).

Taking \(F \equiv0\) in Theorem 3.2, we get the following corollary.

Corollary 3.7

Let \(\{x_{n}\}\) be generated by the following iterative scheme:

$$x_{n+1} = \alpha_{n}(u + \gamma fx_{n}) + \bigl(I - \alpha_{n}(I + \mu A)\bigr)T_{r_{n}}x_{n},\quad \forall n \ge0. $$

Assume that the sequences \(\{\alpha_{n}\}\) and \(\{r_{n}\}\) satisfy conditions (C1)-(C4) in Theorem  3.2. Then \(\{x_{n}\}\) converges strongly to a point \(\widetilde{x} \in \operatorname {Fix}(T)\), which is the unique solution of the optimization problem (3.28).

Taking \(C \equiv H\), we have the following corollary.

Corollary 3.8

Let \(T : H \to H\) be a continuous pseudocontractive mapping, and let \(F : H \to H\) be a continuous monotone mapping. Let \(\{x_{n}\}\) be generated by (3.30). Assume that the sequences \(\{\alpha_{n}\}\) and \(\{r_{n}\}\) satisfy conditions (C1)-(C4) in Theorem  3.2. Then \(\{x_{n}\}\) converges strongly to a point \(\widetilde{x} \in F^{-1}(0) \cap \operatorname {Fix}(T)\), which is the unique solution of the optimization problem (3.29).

Remark 3.1

(1) For finding an element of \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\), \(\operatorname {VI}(C,F)\), \(\operatorname {Fix}(T)\), and \(F^{-1}(0) \cap \operatorname {Fix}(T)\), which is the unique solution of the optimization problems (1.4), (3.27), (3.28), and (3.29), respectively, where T is a continuous pseudocontractive mapping and F is a continuous monotone mapping, our results are new and differ from those previously introduced by several authors. Consequently, our results supplement, develop, and improve upon the corresponding results given recently by several authors in this direction (for example, see [35–37] for \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(T)\) in the case of a continuous monotone mapping F and a continuous pseudocontractive mapping T; see [1, 31–34] for \(\operatorname {VI}(C,F) \cap \operatorname {Fix}(S)\) in the case of an α-inverse-strongly monotone mapping F and a nonexpansive mapping S; see [11, 22, 23] for \(\operatorname {Fix}(T)\) of a strictly pseudocontractive mapping T; and see [24, 27] for \(\operatorname {Fix}(S)\) of a nonexpansive mapping S).

(2) As in Corollary 3.1 and Corollary 3.5, from Corollaries 3.2, 3.3, 3.4, 3.6, 3.7, and 3.8, we can obtain the minimum norm point of \(\operatorname {VI}(C,F)\), \(\operatorname {Fix}(T)\), and \(F^{-1}(0) \cap \operatorname {Fix}(T)\) for the continuous monotone mapping F and the continuous pseudocontractive mapping T, respectively.

(3) We can replace the perturbed control condition \(\vert \alpha_{n+1} - \alpha_{n}\vert \le o(\alpha_{n+1}) + \sigma_{n}\), \(\sum_{n=0}^{\infty}\sigma_{n} <\infty\) on the control parameter \(\{\alpha_{n}\}\) in (C3) of Theorem 3.2 by the following conditions [20, 21]:

  1. (a)

    \(\sum_{n = 0}^{\infty} \vert \alpha_{n+1} -\alpha_{n}\vert < \infty\); or

  2. (b)

    \(\lim_{n \to\infty}\frac{\alpha_{n}}{\alpha_{n+1}} = 1\) or, equivalently, \(\lim_{n \to\infty}\frac{\alpha_{n} -\alpha_{n+1}}{\alpha_{n+1}} = 0\).
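Both replacement conditions in (3) above are easy to check numerically for a standard choice such as \(\alpha_{n} = \frac{1}{n+1}\): the differences telescope, so \(\sum_{n}\vert \alpha_{n+1} - \alpha_{n}\vert = \alpha_{0} - \lim_{n}\alpha_{n} = 1 < \infty\), and \(\frac{\alpha_{n}}{\alpha_{n+1}} = \frac{n+2}{n+1} \to 1\).

```python
# Check conditions (a) and (b) of Remark 3.1(3) for alpha_n = 1/(n+1).

alpha = lambda n: 1.0 / (n + 1)

partial = sum(abs(alpha(n + 1) - alpha(n)) for n in range(10**6))
ratio = alpha(10**6) / alpha(10**6 + 1)

print(partial, ratio)   # partial sums stay below 1; ratio tends to 1
```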

References

  1. Iiduka, H, Takahashi, W: Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal. 61, 341-350 (2005)

    Article  MATH  MathSciNet  Google Scholar 

  2. Liu, F, Nashed, MZ: Regularization of nonlinear ill-posed variational inequalities and convergence rates. Set-Valued Anal. 6, 313-344 (1998)

    Article  MATH  MathSciNet  Google Scholar 

  3. Browder, FE: Nonlinear monotone operators and convex sets in Banach spaces. Bull. Am. Math. Soc. 71, 780-785 (1965)

    Article  MATH  MathSciNet  Google Scholar 

  4. Bruck, RE: On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. J. Math. Anal. Appl. 61, 159-164 (1977)

    Article  MATH  MathSciNet  Google Scholar 

  5. Lions, PL, Stampacchia, G: Variational inequalities. Commun. Pure Appl. Math. 20, 493-517 (1967)

    Article  MATH  MathSciNet  Google Scholar 

  6. Yamada, I: The hybrid steepest descent method for the variational inequality of the intersection of fixed point sets of nonexpansive mappings. In: Butnariu, D, Censor, Y, Reich, S (eds.) Inherently Parallel Algorithm for Feasibility and Optimization, and Their Applications, pp. 473-504. Kluwer Academic, Dordrecht (2001)

    Chapter  Google Scholar 

  7. Chidume, CE, Mutangadura, S: An example on the Mann iteration method for Lipschitz pseudocontractions. Proc. Am. Math. Soc. 129, 2359-2363 (2001)

    Article  MATH  MathSciNet  Google Scholar 

  8. Agarwal, RP, O’Regan, D, Sahu, DR: Fixed Point Theory for Lipschitzian-Type Mappings with Applications. Springer, Berlin (2009)

    MATH  Google Scholar 

  9. Acedo, GL, Xu, HK: Iterative methods for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 67, 2258-2271 (2007)

    Article  MATH  MathSciNet  Google Scholar 

  10. Cho, YJ, Kang, SM, Qin, X: Some results on k-strictly pseudo-contractive mappings in Hilbert spaces. Nonlinear Anal. 70, 1956-1964 (2009)

    Article  MATH  MathSciNet  Google Scholar 

  11. Jung, JS: Strong convergence of iterative methods for k-strictly pseudo-contractive mappings in Hilbert spaces. Appl. Math. Comput. 215, 3746-3753 (2010)

  12. Jung, JS: Iterative methods for pseudocontractive mappings in Banach spaces. Abstr. Appl. Anal. 2013, Article ID 643602 (2013). doi:10.1155/2013/643602

  13. Morales, CH: Strong convergence of path for continuous pseudo-contractive mappings. Proc. Am. Math. Soc. 135, 2831-2838 (2007)

  14. Morales, CH, Jung, JS: Convergence of paths for pseudo-contractive mappings in Banach spaces. Proc. Am. Math. Soc. 128, 3411-3419 (2000)

  15. Zhou, H: Convergence theorems of fixed points for k-strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 69, 456-462 (2008)

  16. Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367-426 (1997)

  17. Combettes, PL: Hilbertian convex feasibility problem: convergence of projection methods. Appl. Math. Optim. 35, 311-330 (1997)

  18. Deutsch, F, Yamada, I: Minimizing certain convex functions over the intersection of the fixed point sets of nonexpansive mappings. Numer. Funct. Anal. Optim. 19, 33-56 (1998)

  19. Jung, JS: Iterative algorithms with some control conditions for quadratic optimizations. Panam. Math. J. 16(4), 13-25 (2006)

  20. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)

  21. Xu, HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659-678 (2003)

  22. Jung, JS: A general iterative scheme for k-strictly pseudo-contractive mappings and optimization problems. Appl. Math. Comput. 217, 5581-5588 (2011)

  23. Jung, JS: Some algorithms for solving optimization problems in Hilbert spaces. J. Nonlinear Convex Anal. 16(1), 22-35 (2015)

  24. Marino, G, Xu, HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 318, 43-52 (2006)

  25. Yamada, I, Ogura, N, Yamashita, Y, Sakaniwa, K: Quadratic approximation of fixed points of nonexpansive mappings in Hilbert spaces. Numer. Funct. Anal. Optim. 19, 165-190 (1998)

  26. Yao, YH, Aslam Noor, M, Zainab, S, Liou, YC: Mixed equilibrium problems and optimization problems. J. Math. Anal. Appl. 354, 319-329 (2009)

  27. Yao, Y, Kang, SM, Liou, YC: Algorithms for approximating minimization problems in Hilbert spaces. J. Comput. Appl. Math. 235, 3515-3526 (2011)

  28. Wang, W, Tian, J: Generalized monotone iterative method for integral boundary value problems with causal operators. J. Nonlinear Sci. Appl. 8(5), 600-609 (2015)

  29. Bocşan, G: Convergence of iterative methods for solving random operator equations. J. Nonlinear Sci. Appl. 6(1), 2-6 (2013)

  30. Moudafi, A: Viscosity approximation methods for fixed-points problem. J. Math. Anal. Appl. 241, 46-55 (2000)

  31. Chen, J, Zhang, L, Fan, T: Viscosity approximation methods for nonexpansive mappings and monotone mappings. J. Math. Anal. Appl. 334, 1450-1461 (2007)

  32. Jung, JS: A new iteration method for nonexpansive mappings and monotone mappings in Hilbert spaces. J. Inequal. Appl. 2010, Article ID 251761 (2010). doi:10.1155/2010/251761

  33. Su, Y, Shang, M, Qin, X: An iterative method of solution for equilibrium and optimization problems. Nonlinear Anal. 69, 2709-2719 (2008)

  34. Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118(2), 417-428 (2003)

  35. Chamnarnpan, T, Kumam, P: A new iterative method for a common solution of fixed points for pseudo-contractive mappings and variational inequalities. Fixed Point Theory Appl. 2012, Article ID 67 (2012). doi:10.1186/1687-1812-2012-67

  36. Wangkeeree, R, Nammanee, K: New iterative methods for a common solution of fixed points for pseudo-contractive mappings and variational inequalities. Fixed Point Theory Appl. 2013, Article ID 233 (2013). doi:10.1186/1687-1812-2013-233

  37. Zegeye, H, Shahzad, N: Strong convergence of an iterative method for pseudo-contractive and monotone mappings. J. Glob. Optim. 54, 173-184 (2012)

  38. Zegeye, H, Shahzad, N: Strong convergence theorems for monotone mappings and relatively weak nonexpansive mappings. Nonlinear Anal. 70(7), 2707-2716 (2009)

  39. Zegeye, H, Shahzad, N: Algorithms for solutions of variational inequalities in the set of common fixed points of finite family λ-strictly pseudocontractive mappings. Numer. Funct. Anal. Optim. 36(6), 799-816 (2015)

  40. Zegeye, H, Shahzad, N: Approximating common solution of variational inequality problems for two monotone mappings in Banach spaces. Optim. Lett. 5(5), 691-704 (2011)

  41. Jung, JS: Some algorithms for finding fixed points and solutions of variational inequalities. Abstr. Appl. Anal. 2012, Article ID 153456 (2012). doi:10.1155/2012/153456

  42. Sunthrayuth, P, Cho, YJ, Kumam, P: General iterative algorithms approach to variational inequalities and minimum-norm fixed point for minimization and split feasibility problems. Opsearch 51(3), 400-415 (2014)

  43. Yao, Y, Marino, G, Xu, HK, Liou, YC: Construction of minimum-norm fixed points of pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2014, Article ID 206 (2014)

  44. Zegeye, H: An iterative approximation method for a common fixed point of two pseudocontractive mappings. ISRN Math. Anal. 2011, Article ID 621901 (2011)

  45. Oden, JT: Qualitative Methods on Nonlinear Mechanics. Prentice-Hall, Englewood Cliffs (1986)

  46. Minty, GJ: On the generalization of a direct method of the calculus of variations. Bull. Am. Math. Soc. 73, 315-321 (1967)

Acknowledgements

The author would like to thank the anonymous reviewers for their valuable suggestions and comments along with providing some recent related papers.

This study was supported by research funds from Dong-A University.

Author information

Corresponding author

Correspondence to Jong Soo Jung.

Additional information

Competing interests

The author declares no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Jung, J.S. General iterative methods for monotone mappings and pseudocontractive mappings related to optimization problems. Fixed Point Theory Appl 2015, 202 (2015). https://doi.org/10.1186/s13663-015-0451-x

