
Regularized gradient-projection methods for the constrained convex minimization problem and the zero points of maximal monotone operator

Abstract

In this paper, based on the viscosity approximation method and the regularized gradient-projection algorithm, we find a common element of the solution set of a constrained convex minimization problem and the set of zero points of the maximal monotone operator problem. In particular, the set of zero points of the maximal monotone operator problem can be transformed into the equilibrium problem. Under suitable conditions, new strong convergence theorems are obtained, which are useful in nonlinear analysis and optimization. As an application, we apply our algorithm to solving the split feasibility problem and the constrained convex minimization problem in Hilbert spaces.

1 Introduction

Throughout this paper, let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively. Let H be a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle \) and norm \(\|\cdot\|\). Let C be a nonempty, closed, and convex subset of H. We introduce some operators which will be used in this paper.

A mapping \(f:C\rightarrow C\) is a contraction if there exists \(k\in (0,1)\) such that \(\|f(x)-f(y)\|\leq k\|x-y\|\) for all \(x,y\in C\). A nonlinear operator \(T:C\rightarrow C\) is called nonexpansive if \(\|Tx-Ty\|\leq\|x-y\|\) for all \(x,y\in C\). The set of fixed points of T is denoted by \(\operatorname{Fix}(T)\). A nonlinear mapping \(A:H\rightarrow H\) is called monotone if \(\langle x-y, Ax-Ay\rangle\geq0\) for all \(x,y\in H\).

Firstly, consider the following constrained convex minimization problem:

$$ \min_{x\in C}g(x), $$
(1.1)

where \(g:C\rightarrow\mathbb{R}\) is a real-valued convex function. Assume that the constrained convex minimization problem (1.1) is solvable, and let U denote the solution set of (1.1). The gradient-projection algorithm (GPA) generates a sequence \(\{x_{n}\}_{n=0}^{\infty}\) according to the recursive formula

$$ x_{n+1}=P_{C}(I-\beta_{n}\nabla g)x_{n},\quad \forall n\geq0, $$
(1.2)

where the parameters \(\beta_{n}\) are real positive numbers, and \(P_{C}\) is the metric projection from H onto C. It is well known that the convergence of the algorithm (1.2) depends on the behavior of the gradient \(\nabla g\). If the gradient \(\nabla g\) is only assumed to be inverse strongly monotone, then the sequence \(\{x_{n}\}\) defined by the algorithm (1.2) can only converge weakly to a minimizer of (1.1). If the gradient \(\nabla g\) is Lipschitz continuous and strongly monotone, then the sequence generated by (1.2) converges strongly to a minimizer of (1.1), provided the parameters \(\beta_{n}\) satisfy appropriate conditions.
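To make the recursion (1.2) concrete, the following minimal Python sketch runs the GPA on an illustrative problem of our own choosing (the objective g, the set C, and the fixed step size are assumptions, not data from the paper): \(g(x)=\frac{1}{2}\|x-b\|^{2}\) over the closed unit ball C, so that \(\nabla g(x)=x-b\) is 1-ism.

```python
import numpy as np

# Minimal sketch of the gradient-projection algorithm (1.2).
# Illustrative assumptions: g(x) = ||x - b||^2 / 2, C = closed unit ball,
# so grad g(x) = x - b is 1-Lipschitz (L = 1) and 1-ism.

def project_ball(x, radius=1.0):
    """Metric projection P_C onto the closed ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

b = np.array([2.0, 1.0])
x = np.zeros(2)
beta = 1.0                                   # constant step in (0, 2/L)
for n in range(200):
    x = project_ball(x - beta * (x - b))     # x_{n+1} = P_C(I - beta grad g) x_n

print(x, b / np.linalg.norm(b))              # both approach the minimizer b/||b||
```

In this finite-dimensional example the iterates converge to the minimizer; the point of [1] is that in infinite dimensions only weak convergence can be guaranteed without further assumptions.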

Xu [1] gave an averaged mapping approach to the gradient-projection method and constructed a counterexample showing that the sequence generated by the gradient-projection method need not converge strongly in an infinite-dimensional space. Moreover, he presented two modifications of the gradient-projection method which are shown to have strong convergence.

In 2011, motivated by Xu, Ceng et al. [2] proposed the following iterative algorithm:

$$ x_{n+1}=P_{C}\bigl[\theta_{n}\gamma f(x_{n})+(I-\theta_{n}\mu F)T_{n}(x_{n}) \bigr],\quad n\geq0, $$
(1.3)

where \(f:C\rightarrow H\) is an l-Lipschitzian mapping with a constant \(l>0\), and \(F:C\rightarrow H\) is a k-Lipschitzian and η-strongly monotone operator with constants \(k,\eta>0\). Let \(0<\mu<2\eta/k^{2}\), \(0\leq\gamma l<\tau\), and \(\tau=1-\sqrt{1-\mu(2\eta-\mu k^{2})}\). Let \(T_{n}\) and \(\theta_{n}\) satisfy \(\theta_{n}=\frac{2-\lambda_{n}L}{4}\), \(P_{C}(I-\lambda_{n}\nabla g)=\theta_{n}I+(1-\theta_{n})T_{n}\). Under suitable conditions, it is proved that the sequence \(\{x_{n}\}_{n=0}^{\infty}\) generated by (1.3) converges strongly to a minimizer \(x^{*}\) of (1.1). There are also many other methods for solving constrained convex minimization problems, such as extragradient-projection method (see [3]) and so on.

However, the minimization problem (1.1) may have more than one solution, so regularization is needed to select a unique solution of the minimization problem (1.1). Now, we consider the following regularized minimization problem:

$$ \min_{x\in C}g_{\alpha}(x):=g(x)+\frac{\alpha}{2}\|x \|^{2}, $$

where \(\alpha>0\) is the regularization parameter and g is a convex function whose gradient \(\nabla g\) is \(1/L\)-ism (inverse strongly monotone). Then the regularized gradient-projection algorithm (RGPA) generates a sequence \(\{x_{n}\}_{n=0}^{\infty}\) by the following recursive formula:

$$ x_{n+1}=P_{C}(I-\beta\nabla g_{\alpha_{n}})x_{n}=P_{C}\bigl[x_{n}-\beta ( \nabla g+\alpha_{n}I) (x_{n})\bigr], $$
(1.4)

where the parameter \(\alpha_{n}>0\), β is a constant with \(0<\beta<2/L\), and \(P_{C}\) is the metric projection from H onto C. It is known that the sequence \(\{x_{n}\}_{n=0}^{\infty}\) generated by algorithm (1.4) converges weakly to a minimizer of (1.1) in the setting of infinite-dimensional spaces (see [4]). In 2013, however, Ceng et al. [5] established a strong convergence result via an implicit hybrid method with regularization for solving constrained convex minimization problems and fixed point problems in Hilbert spaces. This method is based on the RGPA.
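Under the same illustrative data as the GPA sketch above, a minimal reading of the regularized step (1.4) looks as follows (the choice \(\alpha_{n}=1/(n+2)\) is our assumption, made only so that \(\alpha_{n}\rightarrow0\)):

```python
import numpy as np

# Sketch of the RGPA step (1.4) on the illustrative problem used above:
# g(x) = ||x - b||^2 / 2, C = closed unit ball, alpha_n = 1/(n+2) assumed.

def project_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

b = np.array([2.0, 1.0])
x = np.zeros(2)
beta = 1.0                                   # fixed beta in (0, 2/L), L = 1
for n in range(200):
    alpha_n = 1.0 / (n + 2)
    grad = (x - b) + alpha_n * x             # grad g_{alpha_n} = grad g + alpha_n I
    x = project_ball(x - beta * grad)        # the step in (1.4)

print(x)                                     # again approaches b/||b||
```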

Secondly, consider the problem of zero points of maximal monotone operator:

$$ B^{-1}0=\{x\in H: 0\in Bx\}, $$
(1.5)

where B is a mapping of H into \(2^{H}\). The effective domain of B is denoted by \(\operatorname{dom}B\) or \(D(B)\), that is, \(\operatorname{dom}B=\{x\in H: Bx\neq\emptyset\}\). A multi-valued mapping B is said to be a monotone operator on H if \(\langle x-y, u-v\rangle\geq0\) for all \(x,y\in \operatorname{dom}B\), \(u\in Bx\), \(v\in By\). A monotone operator B on H is said to be maximal if its graph is not properly contained in the graph of any other monotone operator on H. For a maximal monotone operator B on H and \(r>0\), we may define a single-valued operator \(J_{r}=(I+rB)^{-1}: H\rightarrow \operatorname{dom}B\), which is called the resolvent of B for r. We denote by \(A_{r}=\frac{1}{r}(I-J_{r})\) the Yosida approximation of B for \(r>0\). We know from [6] that

$$ A_{r}x\in BJ_{r}x,\quad \forall x\in H, r>0. $$
(1.6)

Let B be a maximal monotone operator on H and define the set of zero points of B as follows:

$$ B^{-1}0=\{x\in H: 0\in Bx\}. $$

It is well known that \(B^{-1}0=\operatorname{Fix}(J_{r})\) for all \(r>0\) and the resolvent \(J_{r}\) is firmly nonexpansive, i.e.,

$$ \|J_{r}x-J_{r}y\|^{2}\leq\langle x-y, J_{r}x-J_{r}y\rangle,\quad \forall x,y\in H. $$
(1.7)
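As a concrete illustration (our own example, not one from the paper), take \(B=\partial|\cdot|\) on \(H=\mathbb{R}\), which is maximal monotone; its resolvent is the soft-thresholding map, \(B^{-1}0=\{0\}=\operatorname{Fix}(J_{r})\), and the firm nonexpansiveness (1.7) can be checked numerically:

```python
import numpy as np

# For B = subdifferential of |t| on R, the resolvent J_r = (I + rB)^{-1}
# is soft-thresholding (an illustrative, standard example).

def J_r(x, r):
    """Resolvent of B = d|t|: soft-thresholding with parameter r."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

x, y, r = 3.0, -0.2, 1.0
# firm nonexpansiveness (1.7): |J_r x - J_r y|^2 <= <x - y, J_r x - J_r y>
lhs = (J_r(x, r) - J_r(y, r)) ** 2
rhs = (x - y) * (J_r(x, r) - J_r(y, r))
print(J_r(0.0, r), lhs <= rhs)               # 0.0 (the zero point of B), True
```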

Thirdly, consider the equilibrium problem (EP) which is to find \(z\in C\) such that

$$ F(z,y)\geq0, \quad \forall y\in C, $$
(1.8)

where F is a bifunction of \(C\times C\) into \(\mathbb{R}\), the set of real numbers. We denote the set of solutions of EP by \(\operatorname{EP}(F)\). Given a mapping \(T:C\rightarrow H\), let \(F(x,y)=\langle Tx,y-x\rangle\) for all \(x,y\in C\); then \(z\in \operatorname{EP}(F)\) if and only if \(\langle Tz,y-z\rangle\geq0\) for all \(y\in C\), i.e., z is a solution of the variational inequality. Numerous problems in physics, optimization, and economics reduce to finding a solution of (1.8). Some methods have been proposed to solve the equilibrium problem; see, for instance, [7-11].

In 2000, Moudafi [12] introduced the viscosity approximation method for nonexpansive mappings, which was extended in [13]. Let f be a contraction on H. Starting with an arbitrary initial \(x_{0}\in H\), define a sequence \(\{x_{n}\}\) recursively by

$$ x_{n+1} = \alpha_{n}f(x_{n})+(1- \alpha_{n})Tx_{n},\quad n\geq0, $$
(1.9)

where T is a nonexpansive mapping and \(\operatorname{Fix}(T)\) denotes the set of fixed points of T, i.e., \(\operatorname{Fix}(T)=\{x\in H: x=Tx\}\).

For finding the common solution of \(\operatorname{EP}(F)\) and a fixed point problem, Takahashi and Takahashi [8] introduced the following iterative scheme by the viscosity approximation method in a Hilbert space: \(x_{1}\in H\) and

$$ \left \{ \begin{array}{l} F(u_{n}, y)+\frac{1}{r_{n}}\langle y-u_{n}, u_{n}-x_{n}\rangle\geq0,\quad \forall y\in C, \\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T(u_{n}),\quad \forall n\in \mathbb{N}, \end{array} \right . $$
(1.10)

where \(\{\alpha_{n}\}\subset(0,1)\) and \(\{r_{n}\}\subset(0,\infty)\) satisfy some appropriate conditions. Further, they proved that \(\{x_{n}\}\) and \(\{u_{n}\}\) converge strongly to \(z\in \operatorname{Fix}(T)\cap \operatorname{EP}(F)\), where \(z=P_{\operatorname{Fix}(T)\cap \operatorname{EP}(F)}f(z)\).

In 2010, Zeng et al. [14] proved a strong convergence theorem for finding a common element of the solution set EP of a generalized equilibrium problem and the set \(T^{-1}0\cap\widetilde{T}^{-1}0\) for two maximal monotone operators T and \(\widetilde{T}\) defined on a Banach space X: \(x_{0}\in X\) and

$$ x_{n+1}=\prod_{H_{n}\cap W_{n}}x_{0},\quad n=0,1,2,\ldots, $$

where \(H_{n}=\{z\in C: \phi(z,K_{r_{n}}y_{n})\leq(\alpha_{n}+\tilde{\alpha}_{n}-\alpha _{n}\tilde{\alpha}_{n})\phi(z,x_{0})+(1-\alpha_{n})(1-\tilde{\alpha }_{n})\phi(z,x_{n})\}\), \(W_{n}=\{z\in C: \langle x_{n}-z, Jx_{0}-Jx_{n}\rangle\geq0\}\), \(\tilde{x}_{n}=J^{-1}(\alpha_{n}Jx_{0}+(1-\alpha_{n})(\beta _{n}Jx_{n}+(1-\beta_{n})JJ_{r_{n}}x_{n}))\), \(y_{n}=J^{-1}(\tilde{\alpha}_{n}Jx_{0}+(1-\tilde{\alpha}_{n})(\tilde {\beta}_{n}J\tilde{x}_{n}+(1-\tilde{\beta}_{n})J\tilde{J}_{r_{n}}\tilde {x}_{n}))\), and \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), \(\{\tilde{\alpha}_{n}\}\), \(\{\tilde{\beta}_{n}\}\), \(\{r_{n}\}\) satisfy some appropriate conditions. Then the sequence \(\{x_{n}\}\) converges strongly to \(\prod_{T^{-1}0\cap\widetilde{T}^{-1}0\cap \mathrm{EP}}x_{0}\), where \(\prod_{T^{-1}0\cap\widetilde{T}^{-1}0\cap \mathrm{EP}}\) is the generalized projection of X onto \(T^{-1}0\cap\widetilde{T}^{-1}0\cap \mathrm{EP}\).

In 2012, Tian and Liu [15] introduced the following iterative method in a Hilbert space: \(x_{1}\in C\) and

$$ \left \{ \begin{array}{l} F(u_{n}, y)+\frac{1}{r_{n}}\langle y-u_{n}, u_{n}-x_{n}\rangle\geq0,\quad \forall y\in C, \\ x_{n+1}=\alpha_{n}\gamma f(u_{n})+(I-\alpha_{n}A)T_{n}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array} \right . $$
(1.11)

where \(F: C\times C\rightarrow\mathbb{R}\), \(u_{n}=Q_{\beta_{n}}(x_{n})\), \(P_{C}(I-\lambda_{n}\nabla g)=\theta_{n}I+(1-\theta_{n})T_{n}\), \(\theta_{n}=\frac{2-\lambda_{n}L}{4}\), \(\{\lambda_{n}\}\subset(0,2/L)\), and \(\{\alpha_{n}\}\), \(\{r_{n}\}\), \(\{\theta_{n}\}\) satisfy appropriate conditions. Further, they proved that the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in U\cap \operatorname{EP}(F)\), which solves the variational inequality

$$ \bigl\langle (A-\gamma f)q,q-z\bigr\rangle \leq0, \quad z\in U\cap \operatorname{EP}(F). $$

This was the first time that an equilibrium problem and a constrained convex minimization problem were solved by a common iterative scheme.

Defining a set-valued mapping \(A_{F}\subset H\times H\) by

$$ A_{F}x= \left \{ \begin{array}{l@{\quad}l} \{z\in H: F(x,y)\geq\langle y-x,z\rangle, \forall y\in C\},& \forall x\in C, \\ \emptyset,& \forall x\notin C, \end{array} \right . $$

we find from [16] that \(A_{F}\) is a maximal monotone operator whose domain is included in C; see Lemma 2.5 in Section 2 for more details.

In this paper, motivated and inspired by the above results, we introduce a new iterative algorithm: \(x_{1}\in C\) and

$$ \left \{ \begin{array}{l} u_{n}=J_{r_{n}}(x_{n}), \\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}), \quad \forall n\in\mathbb{N} \end{array} \right . $$
(1.12)

for finding an element of \(U\cap B^{-1}0\), where \(F: C\times C\rightarrow\mathbb{R}\), \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+ \lambda_{n}I\), \(\beta\in(0,2/L)\). Under appropriate conditions, it is proved that the sequence \(\{x_{n}\}\) generated by (1.12) converges strongly to a point \(q\in U\cap B^{-1}0\), which solves the variational inequality

$$ \bigl\langle (I-f)q,q-z\bigr\rangle \leq0,\quad \forall z\in U\cap B^{-1}0. $$

Equivalently, \(q=P_{U\cap B^{-1}0}f(q)\).
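Before turning to the formal results, the following one-dimensional sketch shows how the two steps of (1.12) interact. All data are illustrative assumptions of ours: \(B=\partial i_{[0,2]}\) (so \(J_{r}=P_{[0,2]}\) by Lemma 2.8 below and \(B^{-1}0=[0,2]\)), \(g(x)=(x-1)^{2}/2\) (so \(L=1\) and \(U=\{1\}\)), and \(f(x)=x/4\).

```python
# 1-D sketch of scheme (1.12) under illustrative assumptions (ours):
# C = [0, 2], B = d i_C so J_r = P_C, g(x) = (x-1)^2/2 so grad g(x) = x - 1.

P_C = lambda t: min(max(t, 0.0), 2.0)        # projection onto C = [0, 2]

x, beta = 2.0, 1.0                            # x_1 in C, beta in (0, 2/L)
for n in range(1, 2000):
    alpha_n = 1.0 / (n + 2)                   # alpha_n -> 0, sum = infinity
    lam_n = 1.0 / (n + 2) ** 2                # lambda_n = o(alpha_n)
    u = P_C(x)                                # u_n = J_{r_n}(x_n)
    grad = (u - 1.0) + lam_n * u              # grad g_{lambda_n}(u_n)
    x = alpha_n * (x / 4.0) + (1 - alpha_n) * P_C(u - beta * grad)

print(x)   # converges to q = 1, the unique point of U ∩ B^{-1}0
```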

Finally, in Section 4, we apply the above algorithm to the split feasibility problem, and we give concrete examples and the numerical result in Section 5.

2 Preliminaries

In this section we introduce some properties and lemmas which will be useful in the proofs for the main results in the next section.

Throughout this paper, we always assume that C is a nonempty closed convex subset of a real Hilbert space H. We denote the strong convergence of \(\{x_{n}\}\) to \(x\in C\) by \(x_{n}\rightarrow x\) and the weak convergence by \(x_{n}\rightharpoonup x\). Let \(\operatorname{Fix}(T)\) denote the set of fixed points of the mapping T, \(\operatorname{EP}(F)\) denote the solution set of the equilibrium problem (1.8), and U denote the solution set of (1.1). Recall that the following inequality holds in an inner product space X:

$$ \|x+y\|^{2}\leq\|x\|^{2}+2\langle y,x+y \rangle,\quad \forall x,y\in X. $$
(2.1)

Firstly, we recall that the metric (nearest point) projection from H onto C is the mapping \(P_{C}: H\rightarrow C\) defined as follows: given \(x\in H\), \(P_{C}x\) is the unique point in C with the property

$$ \|x-P_{C}x\|=\inf_{y\in C}\|x-y\|=: d(x,C). $$

\(P_{C}\) is characterized as follows.

Lemma 2.1

Given \(x\in H\) and \(y\in C\). Then \(y=P_{C}x\) if and only if the following inequality holds:

$$ \langle x-y,y-z\rangle\geq0, \quad \forall z\in C. $$
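A quick numerical sanity check of this characterization on an illustrative set of our choosing (the closed unit ball in \(\mathbb{R}^{2}\)):

```python
import numpy as np

# Check Lemma 2.1 for C = closed unit ball in R^2: with y = P_C x,
# the inequality <x - y, y - z> >= 0 should hold for every z in C.

rng = np.random.default_rng(0)
x = np.array([3.0, 4.0])
y = x / np.linalg.norm(x)                    # P_C x for the unit ball, ||x|| > 1
for _ in range(1000):
    z = rng.normal(size=2)
    if np.linalg.norm(z) > 1:                # pull random points into C
        z = z / np.linalg.norm(z)
    assert np.dot(x - y, y - z) >= -1e-12
print("Lemma 2.1 inequality verified on random samples")
```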

Secondly, we introduce the following lemma, which is about the resolvent of the maximal monotone operator.

Lemma 2.2

([16, 17]; see also [18])

Let H be a real Hilbert space and let B be a maximal monotone operator on H. For \(r>0\) and \(x\in H\), define the resolvent \(J_{r}x\). Then the following holds:

$$ \frac{s-t}{s}\langle J_{s}x-J_{t}x, J_{s}x-x\rangle\geq\|J_{s}x-J_{t}x \|^{2} $$

for all \(s,t>0\) and \(x\in H\). In particular,

$$ \|J_{s}x-J_{t}x\|\leq\bigl(\vert s-t\vert /s\bigr) \|x-J_{s}x\| $$

for all \(s,t>0\) and \(x\in H\).

Thirdly, for solving the equilibrium problem for a bifunction \(F: C\times C\rightarrow\mathbb{R}\), let us assume that F satisfies the following conditions:

  1. (A1)

    \(F(x,x)=0\) for all \(x\in C\);

  2. (A2)

    F is monotone, i.e., \(F(x,y)+F(y,x)\leq0\) for all \(x,y\in C\);

  3. (A3)

    for each \(x, y, z\in C\), \(\lim_{t\downarrow0}F(tz+(1-t)x,y)\leq F(x,y)\);

  4. (A4)

    for each \(x\in C\), \(y\mapsto F(x,y)\) is convex and lower semicontinuous.

Then we have the following lemma, which appears implicitly in Blum and Oettli [19].

Lemma 2.3

([19])

Let F be a bifunction of \(C\times C\) into \(\mathbb{R}\) satisfying (A1)-(A4). Let \(r>0\) and \(x\in H\). Then there exists \(z\in C\) such that

$$ F(z,y)+\frac{1}{r}\langle y-z,z-x\rangle\geq0, \quad \forall y\in C. $$

The following lemma was also given in Combettes and Hirstoaga [11].

Lemma 2.4

([11])

Assume that \(F: C\times C\rightarrow\mathbb{R}\) satisfies (A1)-(A4). For \(r>0\) and \(x\in H\), define a mapping \(J_{r}: H\rightarrow C\) as follows:

$$ J_{r}(x)=\biggl\{ z\in C: F(z,y)+\frac{1}{r}\langle y-z,z-x \rangle\geq0, \forall y\in C\biggr\} $$

for all \(x\in H\). Then the following hold:

  1. (1)

    \(J_{r}\) is single-valued;

  2. (2)

    \(J_{r}\) is a firmly nonexpansive mapping, i.e., for all \(x,y\in H\),

    $$ \|J_{r}x-J_{r}y\|^{2}\leq\langle J_{r}x-J_{r}y,x-y\rangle; $$
  3. (3)

    \(\operatorname{Fix}(J_{r})=\operatorname{EP}(F)\);

  4. (4)

    \(\operatorname{EP}(F)\) is closed and convex.

We call such \(J_{r}\) the resolvent of F for \(r>0\). Using Lemma 2.3 and Lemma 2.4, Takahashi obtained the following lemma. See [20] for a more general result.

Lemma 2.5

([16])

Let H be a Hilbert space and let C be a nonempty, closed, and convex subset of H. Let \(F: C\times C\rightarrow\mathbb{R}\) satisfy (A1)-(A4). Let \(A_{F}\) be a set-valued mapping of H into itself defined by

$$ A_{F}x= \left \{ \begin{array}{l@{\quad}l} \{z\in H: F(x,y)\geq\langle y-x,z\rangle, \forall y\in C\},& \forall x\in C, \\ \emptyset,& \forall x\notin C. \end{array} \right . $$

Then \(\operatorname{EP}(F)=A^{-1}_{F}0\) and \(A_{F}\) is a maximal monotone operator with \(\operatorname{dom} A_{F}\subset C\). Furthermore, for any \(x\in H\) and \(r>0\), the resolvent \(J_{r}\) of F coincides with the resolvent of \(A_{F}\), i.e.,

$$ J_{r}x=(I+rA_{F})^{-1}x. $$

Besides, the following two lemmas are extremely important in the proof of the theorems.

Lemma 2.6

([1])

Assume that \(\{a_{n}\}_{n=0}^{\infty}\) is a sequence of nonnegative real numbers such that

$$a_{n+1} \leq(1-\gamma_{n})a_{n} + \gamma_{n}\delta_{n} +\beta_{n}, \quad n \geq0, $$

where \(\{\gamma_{n}\}_{n=0}^{\infty}\) and \(\{\beta_{n}\}_{n=0}^{\infty}\) are sequences in \((0,1)\) and \(\{\delta_{n}\}_{n=0}^{\infty}\) is a sequence in \(\mathbb{R}\) such that

  1. (i)

    \(\sum_{n=0}^{\infty}\gamma_{n} = \infty\);

  2. (ii)

    either \(\limsup_{n\rightarrow\infty}\delta_{n} \leq0\) or \(\sum_{n=0}^{\infty}\gamma_{n}|\delta_{n}| < \infty\);

  3. (iii)

    \(\sum_{n=0}^{\infty}\beta_{n} < \infty\).

Then \(\lim_{n\rightarrow\infty}a_{n} = 0\).
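A small numerical illustration of Lemma 2.6 with parameter choices of our own: \(\gamma_{n}=1/(n+2)\) is non-summable, \(\delta_{n}=1/(n+1)\) satisfies \(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\), and \(\beta_{n}=1/(n+2)^{2}\) is summable, so the lemma forces \(a_{n}\rightarrow0\).

```python
# Illustrative check of Lemma 2.6 (parameter choices are ours).

a = 5.0
for n in range(200000):
    gamma = 1.0 / (n + 2)        # non-summable
    delta = 1.0 / (n + 1)        # limsup delta_n <= 0
    beta = 1.0 / (n + 2) ** 2    # summable
    a = (1 - gamma) * a + gamma * delta + beta

print(a)                         # decays toward 0 (slowly, roughly like 1/n)
```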

The so-called demiclosed principle for nonexpansive mappings will often be used.

Lemma 2.7

(Demiclosed principle [21])

Let \(T : C\rightarrow C\) be a nonexpansive mapping with \(\operatorname{Fix}(T)\neq\emptyset\). If \(\{x_{n}\}_{n=1}^{\infty}\) is a sequence in C weakly converging to x and if \(\{(I-T)x_{n}\}_{n=1}^{\infty}\) converges strongly to y, then \((I-T)x = y\). In particular, if \(y = 0\), then \(x\in \operatorname{Fix}(T)\).

Lemma 2.8

([22])

Let C be a nonempty, closed, and convex subset of H and let \(i_{C}\) be the indicator function of C, then \(i_{C}\) is a proper lower semicontinuous convex function on H and the subdifferential \(\partial i_{C}\) of \(i_{C}\) is a maximal monotone operator. Define \(J_{\lambda}x=(I+\lambda\partial i_{C})^{-1}x\), for all \(x\in H\). We see that, for any \(x\in H\) and \(u\in C\), \(u=J_{\lambda}x\Longleftrightarrow u=P_{C}x\).
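To see Lemma 2.8 at work on an interval (an illustrative choice of ours), one can evaluate the resolvent of \(\partial i_{C}\) by direct minimization of \(u\mapsto i_{C}(u)+\frac{1}{2\lambda}|u-x|^{2}\) and observe that it agrees with \(P_{C}\) for every \(\lambda\):

```python
import numpy as np

# Check of Lemma 2.8 for C = [0, 2]: the resolvent of d i_C equals P_C,
# independently of lambda (evaluated here by brute-force minimization).

P_C = lambda t: min(max(t, 0.0), 2.0)
GRID = np.linspace(0.0, 2.0, 200001)         # fine grid over C

def J(x, lam):
    """Resolvent of d i_C: argmin over u in C of |u - x|^2 / (2 lam)."""
    return GRID[np.argmin((GRID - x) ** 2 / (2 * lam))]

for x in (-1.0, 0.5, 3.7):
    print(x, P_C(x), J(x, 0.1), J(x, 10.0))  # all three values agree
```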

3 Main results

In this section, we will give our main results of this paper. Let B be a maximal monotone operator on H such that the domain of B is included in C, and define the set of zero points of B as follows:

$$ B^{-1}0=\{x\in H: 0\in Bx\}. $$

We always denote \(\operatorname{Fix}(T)\) as the fixed point set of the nonexpansive mapping T, denote U as the solution set of the constrained convex minimization problem (1.1), and denote \(\operatorname{EP}(F)\) as the solution set of the equilibrium problem (1.8).

Let f be a contraction on C with the constant \(k\in(0,1)\). Suppose that the gradient \(\nabla g\) is \(1/L\)-ism. Let \(\{J_{r_{n}}\}\) be the sequence of resolvents of B, \(J_{r_{n}}=(I+r_{n}B)^{-1}\). Consider the mapping \(G_{n}\) on C defined by

$$ G_{n}(x)=\alpha_{n}f(x)+(1-\alpha_{n})T_{\lambda_{n}}J_{r_{n}}(x),\quad \forall x\in C, n\in\mathbb{N}, $$

where \(P_{C}(I-\beta\nabla g_{\lambda_{n}})=T_{\lambda_{n}}\), \(\nabla g_{\lambda_{n}}=\nabla g+ \lambda_{n}I\), \(\{\lambda_{n}\}\subset (0, 2/\beta-L)\), \(\beta\in(0,2/L)\), \(\{\alpha_{n}\}\subset(0,1)\). It is easy to prove that \(\nabla g_{\lambda_{n}}\) is \(\frac{1}{L+\lambda_{n}}\)-ism and that \(T_{\lambda_{n}}\) is nonexpansive. It is also easy to see that \(G_{n}\) is a contraction. Indeed, since \(J_{r_{n}}\) is nonexpansive by (1.7), we have, for each \(x,y\in C\),

$$\begin{aligned} \bigl\Vert G_{n}(x)-G_{n}(y)\bigr\Vert =&\bigl\Vert \bigl(\alpha_{n}f(x)-\alpha_{n}f(y)\bigr)+(1- \alpha_{n}) \bigl(T_{\lambda _{n}}J_{r_{n}}(x)-T_{\lambda_{n}}J_{r_{n}}(y) \bigr)\bigr\Vert \\ \leq&\alpha_{n}k\Vert x-y\Vert +(1-\alpha_{n})\Vert x-y\Vert \\ =&\bigl(1-\alpha_{n}(1-k)\bigr)\Vert x-y\Vert . \end{aligned}$$

Since \(0<1-\alpha_{n}(1-k)<1\), it follows that \(G_{n}\) is a contraction. Therefore, by the Banach contraction principle, \(G_{n}\) has a unique fixed point \(x_{n}^{f}\in C\) such that

$$ x_{n}^{f}=\alpha_{n}f\bigl(x_{n}^{f} \bigr)+(1-\alpha_{n})T_{\lambda _{n}}J_{r_{n}} \bigl(x_{n}^{f}\bigr). $$

For simplicity, we will write \(x_{n}\) for \(x_{n}^{f}\) provided no confusion occurs. We then prove the convergence of \(\{x_{n}\}\), establishing the existence of the point \(q\in U\cap B^{-1}0\) which solves the variational inequality

$$ \bigl\langle (I-f)q,p-q\bigr\rangle \geq0, \quad \forall p\in U\cap B^{-1}0. $$
(3.1)

Equivalently, \(q=P_{U\cap B^{-1}0}f(q)\).

Theorem 3.1

Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let B be a maximal monotone operator on H such that the domain of B is included in C. Let \(J_{r}=(I+rB)^{-1}\) be the resolvent of B for \(r>0\). Let g be a real-valued convex function of C into \(\mathbb{R}\), and let the gradient \(\nabla g\) be \(1/L\)-ism with \(L>0\). Let f be a contraction with the constant \(k\in(0,1)\). Assume that \(U\cap B^{-1}0\neq\emptyset\). Let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated by

$$ \left \{ \begin{array}{l} u_{n}=J_{r_{n}}(x_{n}), \\ x_{n}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array} \right . $$
(3.2)

where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+ \lambda_{n}I\), \(\beta\in(0,2/L)\). Let \(\{r_{n}\}\), \(\{\alpha_{n}\}\), \(\{\lambda_{n}\}\) satisfy the following conditions:

  1. (i)

    \(\{r_{n}\}\subset(0,\infty)\), \(\liminf_{n\rightarrow\infty}r_{n}>0\);

  2. (ii)

    \(\{\alpha_{n}\}\subset(0,1)\), \(\lim_{n\rightarrow\infty}\alpha_{n}=0\);

  3. (iii)

    \(\{\lambda_{n}\}\subset(0,2/\beta-L)\), \(\lambda_{n}=o(\alpha_{n})\).

Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in U\cap B^{-1}0\), which solves the variational inequality (3.1).

Proof

It is well known that \(\hat{x}\in C\) solves the minimization problem (1.1) if and only if for each fixed \(0<\beta <2/L\), \(\hat{x}\) solves the fixed point equation

$$ \hat{x}=P_{C}(I-\beta\nabla g)\hat{x}=T\hat{x}. $$

It is clear that \(\hat{x}=T\hat{x}\), i.e., \(\hat{x}\in U=\operatorname{Fix}(T)\).

First, we claim that \(\{x_{n}\}\) is bounded. Indeed, pick any \(p\in U\cap B^{-1}0\); since \(u_{n}=J_{r_{n}}(x_{n})\) and \(p=J_{r_{n}}(p)\), we know that, for any \(n\in \mathbb{N}\),

$$ \|u_{n}-p\|=\bigl\Vert J_{r_{n}}(x_{n})-J_{r_{n}}(p) \bigr\Vert \leq\|x_{n}-p\|. $$
(3.3)

Thus, we derive that

$$\begin{aligned} \Vert x_{n}-p\Vert =&\bigl\Vert \alpha_{n}f(x_{n})+(1- \alpha_{n})T_{\lambda_{n}}(u_{n})-p\bigr\Vert \\ \leq&\bigl\Vert \alpha_{n}f(x_{n})- \alpha_{n}f(p)\bigr\Vert +\bigl\Vert \alpha_{n}f(p)- \alpha _{n}p\bigr\Vert +(1-\alpha_{n})\bigl\Vert T_{\lambda_{n}}(u_{n})-T_{\lambda_{n}}(p)\bigr\Vert \\ &{}+(1-\alpha_{n})\bigl\Vert T_{\lambda_{n}}(p)-T(p)\bigr\Vert \\ \leq&\alpha_{n}k\Vert x_{n}-p\Vert + \alpha_{n}\bigl\Vert (I-f)p\bigr\Vert +(1-\alpha_{n}) \Vert u_{n}-p\Vert \\ &{}+(1-\alpha_{n})\bigl\Vert T_{\lambda_{n}}(p)-T(p)\bigr\Vert \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\Vert x_{n}-p\Vert +\alpha_{n}\bigl\Vert (I-f)p\bigr\Vert +(1-\alpha _{n}) \bigl\Vert T_{\lambda_{n}}(p)-T(p)\bigr\Vert . \end{aligned}$$

It follows that

$$ \|x_{n}-p\|\leq\frac{1}{1-k}\bigl\Vert (I-f)p \bigr\Vert +\frac{1-\alpha_{n}}{\alpha _{n}(1-k)}\bigl\Vert T_{\lambda_{n}}(p)-T(p)\bigr\Vert . $$
(3.4)

For \(x\in C\), note that

$$ P_{C}(I-\beta\nabla g_{\lambda_{n}})x=T_{\lambda_{n}}x $$

and

$$ P_{C}(I-\beta\nabla g)x=Tx. $$

Then we get

$$\begin{aligned} \|T_{\lambda_{n}}x-Tx\| =&\bigl\Vert P_{C}(I-\beta \nabla g_{\lambda_{n}})x-P_{C}(I-\beta\nabla g)x\bigr\Vert \\ \leq&\lambda_{n}\beta\|x\|. \end{aligned}$$
(3.5)

It follows from (3.4) and (3.5) that

$$ \|x_{n}-p\|\leq\frac{1}{1-k}\bigl\Vert (I-f)p\bigr\Vert + \frac{(1-\alpha_{n})\beta }{1-k}\cdot\frac{\lambda_{n}}{\alpha_{n}}\|p\|. $$

Since \(\lambda_{n}=o(\alpha_{n})\), there exists a real number \(M>0\) such that \(\frac{\lambda_{n}}{\alpha_{n}}\leq M\), and

$$\begin{aligned} \Vert x_{n}-p\Vert \leq&\frac{1}{1-k}\bigl\Vert (I-f)p \bigr\Vert +\frac{(1-\alpha_{n})\beta}{1-k}M\Vert p\Vert \\ =&\frac{\Vert (I-f)p\Vert +(1-\alpha_{n})\beta M\Vert p\Vert }{1-k}. \end{aligned}$$

Hence \(\{x_{n}\}\) is bounded and we also find that \(\{u_{n}\}\) is bounded.

Next, we show that \(\|x_{n}-u_{n}\|\rightarrow0\). Indeed, for any \(p\in U\cap B^{-1}0\), by Lemma 2.4, we have

$$\begin{aligned} \|u_{n}-p\|^{2} =&\bigl\Vert J_{r_{n}}(x_{n})-J_{r_{n}}(p) \bigr\Vert ^{2} \\ \leq&\langle x_{n}-p,u_{n}-p\rangle \\ =&\frac{1}{2}\bigl(\|x_{n}-p\|^{2}+ \|u_{n}-p\|^{2}-\|u_{n}-x_{n} \|^{2}\bigr). \end{aligned}$$

This implies that

$$ \|u_{n}-p\|^{2}\leq\|x_{n}-p \|^{2}-\|u_{n}-x_{n}\|^{2}. $$
(3.6)

Then from (3.5), (3.6), and (2.1), we derive that

$$\begin{aligned} \Vert x_{n}-p\Vert ^{2} =&\bigl\Vert \alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n})-p \bigr\Vert ^{2} \\ =&\bigl\Vert \alpha_{n}f(x_{n})-\alpha_{n}p+(1- \alpha_{n}) \bigl(T_{\lambda _{n}}(u_{n})-p\bigr)\bigr\Vert ^{2} \\ \leq&(1-\alpha_{n})^{2}\bigl\Vert T_{\lambda_{n}}(u_{n})-T_{\lambda _{n}}(p)+T_{\lambda_{n}}(p)-T(p) \bigr\Vert ^{2}+2\alpha_{n}\bigl\langle f(x_{n})-p,x_{n}-p\bigr\rangle \\ \leq&(1-\alpha_{n})^{2}\bigl(\Vert u_{n}-p \Vert ^{2}+2\Vert u_{n}-p\Vert \cdot\bigl\Vert T_{\lambda _{n}}(p)-T(p)\bigr\Vert +\bigl\Vert T_{\lambda_{n}}(p)-T(p)\bigr\Vert ^{2}\bigr) \\ &{}+2\alpha_{n}\bigl\Vert f(x_{n})-p\bigr\Vert \cdot \Vert x_{n}-p\Vert \\ \leq&(1-\alpha_{n}) \bigl(\Vert x_{n}-p\Vert ^{2}-\Vert u_{n}-x_{n}\Vert ^{2}+2 \Vert u_{n}-p\Vert \cdot \lambda_{n}\beta \Vert p \Vert +\lambda_{n}^{2}\beta^{2}\Vert p\Vert ^{2}\bigr) \\ &{}+2\alpha_{n}\bigl(k\Vert x_{n}-p\Vert +\bigl\Vert (I-f)p\bigr\Vert \bigr)\cdot \Vert x_{n}-p\Vert . \end{aligned}$$

Hence, we obtain

$$\begin{aligned} (1-\alpha_{n})\|u_{n}-x_{n}\|^{2} \leq&(2\alpha_{n}k-\alpha_{n})\|x_{n}-p \|^{2}+2(1-\alpha_{n})\lambda _{n}\beta \|u_{n}-p\|\cdot\|p\| \\ &{}+(1-\alpha_{n})\lambda_{n}^{2} \beta^{2}\|p\|^{2}+2\alpha_{n}\bigl\Vert (I-f)p \bigr\Vert \cdot \|x_{n}-p\|. \end{aligned}$$

Since both \(\{x_{n}\}\) and \(\{u_{n}\}\) are bounded and \(\alpha_{n}\rightarrow0\), \(\lambda_{n}\rightarrow0\), it follows that \(\|u_{n}-x_{n}\|\rightarrow0\).

Then we show that \(\|x_{n}-T_{\lambda_{n}}(x_{n})\|\rightarrow0\). Indeed,

$$\begin{aligned} \bigl\Vert x_{n}-T_{\lambda_{n}}(x_{n})\bigr\Vert =& \bigl\Vert x_{n}-T_{\lambda_{n}}(u_{n})+T_{\lambda_{n}}(u_{n})-T_{\lambda _{n}}(x_{n}) \bigr\Vert \\ \leq&\bigl\Vert x_{n}-T_{\lambda_{n}}(u_{n})\bigr\Vert +\bigl\Vert T_{\lambda _{n}}(u_{n})-T_{\lambda_{n}}(x_{n}) \bigr\Vert \\ \leq&\bigl\Vert \alpha_{n}f(x_{n})+(1- \alpha_{n})T_{\lambda _{n}}(u_{n})-T_{\lambda_{n}}(u_{n}) \bigr\Vert +\Vert u_{n}-x_{n}\Vert \\ \leq&\alpha_{n}\bigl\Vert f(x_{n})-T_{\lambda_{n}}(u_{n}) \bigr\Vert +\Vert u_{n}-x_{n}\Vert . \end{aligned}$$

Since \(\alpha_{n}\rightarrow0\) and \(\|u_{n}-x_{n}\|\rightarrow0\), we obtain \(\|x_{n}-T_{\lambda_{n}}(x_{n})\|\rightarrow0\).

Thus,

$$\begin{aligned} \bigl\Vert u_{n}-T_{\lambda_{n}}(u_{n})\bigr\Vert =& \bigl\Vert u_{n}-x_{n}+x_{n}-T_{\lambda_{n}}(x_{n})+T_{\lambda _{n}}(x_{n})-T_{\lambda_{n}}(u_{n}) \bigr\Vert \\ \leq&\Vert u_{n}-x_{n}\Vert +\bigl\Vert x_{n}-T_{\lambda_{n}}(x_{n})\bigr\Vert +\bigl\Vert T_{\lambda _{n}}(x_{n})-T_{\lambda_{n}}(u_{n})\bigr\Vert \\ \leq&\Vert u_{n}-x_{n}\Vert +\bigl\Vert x_{n}-T_{\lambda_{n}}(x_{n})\bigr\Vert +\Vert x_{n}-u_{n}\Vert \end{aligned}$$

and

$$ \bigl\Vert x_{n}-T_{\lambda_{n}}(u_{n})\bigr\Vert \leq\|u_{n}-x_{n}\|+\bigl\Vert T_{\lambda _{n}}(u_{n})-u_{n} \bigr\Vert , $$

we have \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\) and \(\|x_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\).

Since \(\{u_{n}\}\) is bounded, without loss of generality, we can assume that \(u_{n_{i}}\rightharpoonup q\). Next, we show that \(q\in U\cap B^{-1}0\).

By (3.5), we have

$$\begin{aligned} \bigl\Vert u_{n}-T(u_{n})\bigr\Vert \leq&\bigl\Vert u_{n}-T_{\lambda_{n}}(u_{n})\bigr\Vert +\bigl\Vert T_{\lambda _{n}}(u_{n})-T(u_{n})\bigr\Vert \\ \leq&\bigl\Vert u_{n}-T_{\lambda_{n}}(u_{n})\bigr\Vert +\lambda_{n}\beta \Vert u_{n}\Vert . \end{aligned}$$

Since \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\) and \(\lambda_{n}\rightarrow0\), we have \(\|u_{n}-T(u_{n})\|\rightarrow 0\).

So, by Lemma 2.7, we get \(q\in \operatorname{Fix}(T)=U\).

Next, we show that \(q\in B^{-1}0\). Since \(u_{n}=J_{r_{n}}(x_{n})\) and B is a maximal monotone operator, we have from (1.6) that \(A_{r_{n_{i}}}x_{n_{i}}=\frac{x_{n_{i}}-u_{n_{i}}}{r_{n_{i}}}\in BJ_{r_{n_{i}}}x_{n_{i}}=Bu_{n_{i}}\), where \(A_{r}\) is the Yosida approximation of B for \(r>0\). Hence, by the monotonicity of B, we have, for any \((u,v)\in B\),

$$ \biggl\langle u-u_{n_{i}}, v-\frac{x_{n_{i}}-u_{n_{i}}}{r_{n_{i}}}\biggr\rangle \geq0. $$

Since \(\liminf_{n\rightarrow\infty}r_{n}>0\), \(u_{n_{i}}\rightharpoonup q\), and \(x_{n_{i}}-u_{n_{i}}\rightarrow0\), we have

$$ \langle u-q, v\rangle\geq0. $$

Since B is a maximal monotone operator, we have \(0\in Bq\) and hence \(q\in B^{-1}0\). Thus we have \(q\in U\cap B^{-1}0\).

On the other hand, we note that

$$\begin{aligned} x_{n}-q =&\alpha_{n}f(x_{n})+(1- \alpha_{n})T_{\lambda_{n}}(u_{n})-q \\ =&\alpha_{n}f(x_{n})-\alpha_{n}f(q)+ \alpha_{n}f(q)-\alpha _{n}q+(1-\alpha_{n}) \bigl(T_{\lambda_{n}}(u_{n})-q\bigr). \end{aligned}$$

Hence, we obtain from (3.3) and (3.5)

$$\begin{aligned} \Vert x_{n}-q\Vert ^{2} =&\alpha_{n}\bigl\langle (f-I)q,x_{n}-q\bigr\rangle \\ &{}+\bigl\langle \alpha_{n}\bigl(f(x_{n})-f(q)\bigr)+(1- \alpha_{n}) \bigl(T_{\lambda _{n}}(u_{n})-T(q) \bigr),x_{n}-q\bigr\rangle \\ \leq&\alpha_{n}\bigl\langle (f-I)q,x_{n}-q\bigr\rangle \\ &{}+\bigl(\alpha_{n}k\Vert x_{n}-q\Vert +(1- \alpha_{n})\bigl\Vert T_{\lambda _{n}}(u_{n})-T_{\lambda_{n}}(q)+T_{\lambda_{n}}(q)-T(q) \bigr\Vert \bigr)\cdot \Vert x_{n}-q\Vert \\ \leq&\alpha_{n}\bigl\langle (f-I)q,x_{n}-q\bigr\rangle + \alpha_{n}k\Vert x_{n}-q\Vert ^{2} \\ &{}+(1-\alpha_{n})\Vert u_{n}-q\Vert \cdot \Vert x_{n}-q\Vert +(1-\alpha_{n})\lambda _{n}\beta \Vert q\Vert \cdot \Vert x_{n}-q\Vert \\ \leq&\alpha_{n}\bigl\langle (f-I)q,x_{n}-q\bigr\rangle +(1- \alpha_{n}+\alpha _{n}k)\Vert x_{n}-q\Vert ^{2} \\ &{}+(1-\alpha_{n})\lambda_{n}\beta \Vert q\Vert \cdot \Vert x_{n}-q\Vert . \end{aligned}$$

It follows that

$$ \|x_{n}-q\|^{2} \leq\frac{\langle (f-I)q,x_{n}-q\rangle}{1-k}+\frac{(1-\alpha_{n})\lambda_{n}\beta\|q\| \cdot\|x_{n}-q\|}{(1-k)\alpha_{n}}. $$

In particular,

$$ \|x_{n_{i}}-q\|^{2} \leq\frac{(1-\alpha_{n})\beta}{1-k}\cdot \frac{\lambda_{n_{i}}}{\alpha _{n_{i}}}\|q\|\cdot\|x_{n_{i}}-q\|+\frac{1}{1-k}\bigl\langle (f-I)q,x_{n_{i}}-q\bigr\rangle . $$
(3.7)

Since \(u_{n_{i}}\rightharpoonup q\) and \(\|x_{n}-u_{n}\|\rightarrow0\), we have \(x_{n_{i}}\rightharpoonup q\); together with \(\lambda_{n}=o(\alpha_{n})\), it follows from (3.7) that \(x_{n_{i}}\rightarrow q\) as \(i\rightarrow \infty\).

Next, we show that q solves the variational inequality (3.1).

Observe that

$$\begin{aligned} x_{n} &=\alpha_{n}f(x_{n})+(1- \alpha_{n})T_{\lambda_{n}}(u_{n}) \\ &=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}J_{r_{n}}(x_{n}). \end{aligned}$$

Hence, we conclude that

$$ (I-f) (x_{n})=-\frac{1}{\alpha_{n}}(I-T_{\lambda _{n}}J_{r_{n}}) (x_{n})-T_{\lambda_{n}}J_{r_{n}}(x_{n})+x_{n}. $$

Since \(T_{\lambda_{n}}J_{r_{n}}\) is nonexpansive, we find that \(I-T_{\lambda_{n}}J_{r_{n}}\) is monotone. Note that, for any given \(z\in U\cap B^{-1}0\),

$$\begin{aligned} \bigl\langle (I-f) (x_{n}),x_{n}-z\bigr\rangle =&- \frac{1}{\alpha_{n}}\bigl\langle (I-T_{\lambda _{n}}J_{r_{n}}) (x_{n})-(I-T_{\lambda_{n}}J_{r_{n}})z,x_{n}-z\bigr\rangle \\ &{}-\frac{1}{\alpha_{n}}\bigl\langle (I-T_{\lambda _{n}}J_{r_{n}})z,x_{n}-z \bigr\rangle -\bigl\langle T_{\lambda_{n}}(u_{n})-x_{n},x_{n}-z \bigr\rangle \\ \leq&-\frac{1}{\alpha_{n}}\bigl\langle (I-T_{\lambda _{n}}J_{r_{n}})z,x_{n}-z \bigr\rangle -\bigl\langle T_{\lambda_{n}}(u_{n})-x_{n},x_{n}-z \bigr\rangle \\ \leq&\frac{1}{\alpha_{n}}\bigl\Vert z-T_{\lambda_{n}}(z)\bigr\Vert \cdot \Vert x_{n}-z\Vert +\bigl\Vert T_{\lambda_{n}}(u_{n})-x_{n} \bigr\Vert \cdot \Vert x_{n}-z\Vert \\ \leq&\frac{\lambda_{n}}{\alpha_{n}}\beta \Vert z\Vert \cdot \Vert x_{n}-z \Vert +\bigl\Vert T_{\lambda_{n}}(u_{n})-x_{n}\bigr\Vert \cdot \Vert x_{n}-z\Vert . \end{aligned}$$

Now, replacing n with \(n_{i}\) in the above inequality, and letting \(i\rightarrow\infty\), since \(\lambda_{n}=o(\alpha_{n})\), \(\|T_{\lambda_{n}}(u_{n})-x_{n}\|\rightarrow0\), we have

$$ \bigl\langle (I-f)q,q-z\bigr\rangle =\lim_{i\rightarrow\infty}\bigl\langle (I-f)x_{n_{i}},x_{n_{i}}-z\bigr\rangle \leq0. $$

From the arbitrariness of \(z\in U\cap B^{-1}0\), it follows that \(q\in U\cap B^{-1}0\) is a solution of the variational inequality (3.1). Further, by the uniqueness of the solution of the variational inequality (3.1), we conclude that \(x_{n}\rightarrow q\) as \(n\rightarrow\infty\).

The variational inequality (3.1) can be rewritten as

$$ \bigl\langle f(q)-q,q-z\bigr\rangle \geq0, \quad \forall z\in U\cap B^{-1}0. $$

By Lemma 2.1, it is equivalent to the following fixed point equation:

$$ P_{U\cap B^{-1}0}f(q)=q. $$

This completes the proof. □

Theorem 3.2

Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let B be a maximal monotone operator on H such that the domain of B is included in C. Let \(J_{r}=(I+rB)^{-1}\) be the resolvent of B for \(r>0\). Let g be a real-valued convex function of C into \(\mathbb{R}\), and let the gradient \(\nabla g\) be \(1/L\)-ism with \(L>0\). Let f be a contraction with the constant \(k\in(0,1)\). Assume that \(U\cap B^{-1}0\neq\emptyset\). Let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated by \(x_{1}\in C\) and

$$ \left \{ \begin{array}{l} u_{n}=J_{r_{n}}(x_{n}), \\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array} \right . $$
(3.8)

where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+ \lambda_{n}I\), \(\beta\in(0,2/L)\). Let \(\{r_{n}\}\), \(\{\alpha_{n}\}\), \(\{\lambda_{n}\}\) satisfy the following conditions:

  1. (C1)

    \(\{r_{n}\}\subset(0,\infty)\), \(\liminf_{n\rightarrow\infty}r_{n}>0\), \(\sum_{n=1}^{\infty}|r_{n+1}-r_{n}|<\infty\);

  2. (C2)

    \(\{\alpha_{n}\}\subset(0,1)\), \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\);

  3. (C3)

    \(\{\lambda_{n}\}\subset(0,2/\beta-L)\), \(\lambda_{n}=o(\alpha_{n})\), \(\sum_{n=1}^{\infty}|\lambda_{n+1}-\lambda _{n}|<\infty\).

Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in U\cap B^{-1}0\), which solves the variational inequality (3.1).

Proof

It is clear that \(\hat{x}\in C\) solves the minimization problem (1.1) if and only if for each fixed \(0<\beta<2/L\), \(\hat{x}\) solves the fixed point equation

$$ \hat{x}=P_{C}(I-\beta\nabla g)\hat{x}=T\hat{x}, $$

and \(\hat{x}=T\hat{x}\), i.e., \(\hat{x}\in U=\operatorname{Fix}(T)\).

Now, we first show that \(\{x_{n}\}\) is bounded. Indeed, pick any \(p\in U\cap B^{-1}0\), since \(u_{n}=J_{r_{n}}(x_{n})\), by Lemma 2.4, we know that

$$ \|u_{n}-p\|=\bigl\Vert J_{r_{n}}(x_{n})-J_{r_{n}}(p) \bigr\Vert \leq\|x_{n}-p\|. $$
(3.9)

Thus, we derive from (3.5) that

$$\begin{aligned} \Vert x_{n+1}-p\Vert =&\bigl\Vert \alpha_{n}f(x_{n})+(1- \alpha_{n})T_{\lambda_{n}}(u_{n})-p\bigr\Vert \\ =&\bigl\Vert (1-\alpha_{n}) \bigl(T_{\lambda_{n}}(u_{n})-p \bigr)+\alpha_{n}\bigl(f(x_{n})-p\bigr)\bigr\Vert \\ \leq&(1-\alpha_{n})\bigl\Vert T_{\lambda_{n}}(u_{n})-T_{\lambda _{n}}(p)+T_{\lambda_{n}}(p)-T(p) \bigr\Vert \\ &{}+\alpha_{n}\bigl\Vert f(x_{n})-f(p)+f(p)-p\bigr\Vert \\ \leq&(1-\alpha_{n}) \bigl(\Vert u_{n}-p\Vert +\bigl\Vert T_{\lambda_{n}}(p)-T(p)\bigr\Vert \bigr) \\ &{}+\alpha_{n}\bigl(k\Vert x_{n}-p\Vert +\bigl\Vert f(p)-p\bigr\Vert \bigr) \\ \leq&(1-\alpha_{n}) \bigl(\Vert x_{n}-p\Vert + \lambda_{n}\beta \Vert p\Vert \bigr) \\ &{}+\alpha_{n}\bigl(k\Vert x_{n}-p\Vert +\bigl\Vert f(p)-p\bigr\Vert \bigr) \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\Vert x_{n}-p\Vert \\ &{}+\alpha_{n}(1-k)\biggl[\frac{\lambda_{n}\beta(1-\alpha_{n})}{\alpha _{n}(1-k)}\Vert p\Vert + \frac{\alpha_{n}}{\alpha_{n}(1-k)}\cdot\bigl\Vert f(p)-p\bigr\Vert \biggr]. \end{aligned}$$

Since \(\lambda_{n}=o(\alpha_{n})\), there exists a real number \(E>0\) such that \(\frac{\lambda_{n}}{\alpha_{n}}\leq E\). Thus,

$$ \|x_{n+1}-p\|\leq\bigl(1-\alpha_{n}(1-k)\bigr) \|x_{n}-p\|+\alpha_{n}(1-k)\frac {E\beta\|p\|+\|f(p)-p\|}{1-k}. $$

By induction, we have

$$ \|x_{n}-p\|\leq\max\biggl\{ \|x_{1}-p\|,\frac{1}{1-k} \bigl(E\beta\|p\|+\bigl\Vert f(p)-p\bigr\Vert \bigr)\biggr\} ,\quad n\geq1. $$

Hence, \(\{x_{n}\}\) is bounded. From (3.9), we also find that \(\{u_{n}\}\) is bounded.

Next, we show that \(\|x_{n+1}-x_{n}\|\rightarrow0\).

Indeed, since \(\nabla g\) is \(1/L\)-ism, \(P_{C}(I-\beta\nabla g_{\lambda_{n}})=T_{\lambda_{n}}\) is nonexpansive, and we derive that

$$\begin{aligned} \bigl\Vert T_{\lambda_{n}}(u_{n-1})-T_{\lambda_{n-1}}(u_{n-1}) \bigr\Vert =&\bigl\Vert P_{C}(I-\beta\nabla g_{\lambda_{n}})u_{n-1}-P_{C}(I- \beta\nabla g_{\lambda_{n-1}})u_{n-1}\bigr\Vert \\ \leq&\bigl\Vert (I-\beta\nabla g_{\lambda_{n}})u_{n-1}-(I-\beta \nabla g_{\lambda _{n-1}})u_{n-1}\bigr\Vert \\ =&\beta\bigl\Vert \nabla g(u_{n-1})+\lambda_{n-1}u_{n-1}- \nabla g(u_{n-1})-\lambda_{n}u_{n-1}\bigr\Vert \\ =&\beta|\lambda_{n}-\lambda_{n-1}| \Vert u_{n-1} \Vert . \end{aligned}$$

Thus, we get

$$\begin{aligned} \Vert x_{n+1}-x_{n}\Vert =&\bigl\Vert \bigl(\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}) \bigr)-\bigl(\alpha _{n-1}f(x_{n-1}) \\ &{}+(1-\alpha_{n-1})T_{\lambda_{n-1}}(u_{n-1})\bigr)\bigr\Vert \\ \leq&\bigl\Vert \alpha_{n}f(x_{n})- \alpha_{n}f(x_{n-1})\bigr\Vert \\ &{}+\bigl\Vert \alpha_{n}f(x_{n-1})-\alpha_{n-1}f(x_{n-1}) \bigr\Vert \\ &{}+\bigl\Vert (1-\alpha_{n}) \bigl(T_{\lambda_{n}}(u_{n})-T_{\lambda_{n}}(u_{n-1}) \bigr)\bigr\Vert \\ &{}+\bigl\Vert (1-\alpha_{n})T_{\lambda_{n}}(u_{n-1})-(1- \alpha_{n})T_{\lambda _{n-1}}(u_{n-1})\bigr\Vert \\ &{}+\bigl\Vert (1-\alpha_{n})T_{\lambda_{n-1}}(u_{n-1})-(1- \alpha _{n-1})T_{\lambda_{n-1}}(u_{n-1})\bigr\Vert \\ \leq&\alpha_{n}k\Vert x_{n}-x_{n-1}\Vert +| \alpha_{n}-\alpha_{n-1}|\cdot\bigl\Vert f(x_{n-1}) \bigr\Vert \\ &{}+(1-\alpha_{n})\Vert u_{n}-u_{n-1}\Vert +(1-\alpha_{n})\bigl\Vert T_{\lambda _{n}}(u_{n-1})-T_{\lambda_{n-1}}(u_{n-1}) \bigr\Vert \\ &{}+|\alpha_{n}-\alpha_{n-1}|\cdot\bigl\Vert T_{\lambda_{n-1}}(u_{n-1})\bigr\Vert \\ \leq&\alpha_{n}k\Vert x_{n}-x_{n-1}\Vert +(1-\alpha_{n})\Vert u_{n}-u_{n-1}\Vert \\ &{}+(1-\alpha_{n})\beta|\lambda_{n}-\lambda_{n-1}| \cdot \Vert u_{n-1}\Vert \\ &{}+|\alpha_{n}-\alpha_{n-1}|\bigl(\bigl\Vert f(x_{n-1})\bigr\Vert +\bigl\Vert T_{\lambda _{n-1}}(u_{n-1}) \bigr\Vert \bigr) \\ \leq&\alpha_{n}k\Vert x_{n}-x_{n-1}\Vert +(1-\alpha_{n})\Vert u_{n}-u_{n-1}\Vert \\ &{}+M_{1}\bigl(\vert \lambda_{n}-\lambda_{n-1} \vert +|\alpha_{n}-\alpha_{n-1}|\bigr) \end{aligned}$$
(3.10)

for some appropriate constant \(M_{1}>0\) such that

$$ M_{1}\geq\max\bigl\{ \beta\|u_{n-1}\|,\bigl\| f(x_{n-1}) \bigr\| +\bigl\Vert T_{\lambda _{n-1}}(u_{n-1})\bigr\Vert \bigr\} , \quad \forall n\geq1. $$

Since \(u_{n+1}=J_{r_{n+1}}(x_{n+1})\) and \(u_{n}=J_{r_{n}}(x_{n})\), we have from Lemma 2.2

$$\begin{aligned} \Vert u_{n+1}-u_{n}\Vert =&\Vert J_{r_{n+1}}x_{n+1}-J_{r_{n}}x_{n} \Vert \\ \leq&\Vert J_{r_{n+1}}x_{n+1}-J_{r_{n+1}}x_{n} \Vert +\Vert J_{r_{n+1}}x_{n}-J_{r_{n}}x_{n} \Vert \\ \leq&\Vert x_{n+1}-x_{n}\Vert +\frac{|r_{n+1}-r_{n}|}{r_{n+1}} \Vert J_{r_{n+1}}x_{n}-x_{n}\Vert . \end{aligned}$$

Since \(\liminf_{n\rightarrow\infty}r_{n}>0\), without loss of generality, we may assume that there exists a real number a such that \(r_{n}\geq a>0\) for all \(n\in\mathbb{N}\).

Thus, we have

$$\begin{aligned} \|u_{n+1}-u_{n}\| \leq&\|x_{n+1}-x_{n} \|+\frac{|r_{n+1}-r_{n}|}{a}\| J_{r_{n+1}}x_{n}-x_{n}\| \\ \leq&\|x_{n+1}-x_{n}\|+|r_{n+1}-r_{n}|M_{2}, \end{aligned}$$
(3.11)

where \(M_{2}=\sup\{\frac{1}{a}\|J_{r_{n+1}}x_{n}-x_{n}\|: n\in \mathbb{N}\}\). From (3.10) and (3.11), we obtain

$$\begin{aligned} \Vert x_{n+1}-x_{n}\Vert \leq&\alpha_{n}k \Vert x_{n}-x_{n-1}\Vert +(1-\alpha_{n})\Vert u_{n}-u_{n-1}\Vert \\ &{}+M_{1}\bigl(\vert \lambda_{n}-\lambda_{n-1} \vert +|\alpha_{n}-\alpha_{n-1}|\bigr) \\ \leq&\alpha_{n}k\Vert x_{n}-x_{n-1}\Vert +(1-\alpha_{n}) \bigl(\Vert x_{n}-x_{n-1}\Vert +|r_{n}-r_{n-1}|M_{2}\bigr) \\ &{}+M_{1}\bigl(\vert \lambda_{n}-\lambda_{n-1} \vert +|\alpha_{n}-\alpha_{n-1}|\bigr) \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\Vert x_{n}-x_{n-1} \Vert +M_{2}|r_{n}-r_{n-1}| \\ &{}+M_{1}\bigl(\vert \lambda_{n}-\lambda_{n-1} \vert +|\alpha_{n}-\alpha_{n-1}|\bigr) \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\Vert x_{n}-x_{n-1} \Vert \\ &{}+M_{3}\bigl(\vert r_{n}-r_{n-1}\vert +| \lambda_{n}-\lambda_{n-1}|+|\alpha_{n}-\alpha _{n-1}|\bigr), \end{aligned}$$

where \(M_{3}=\max\{M_{1},M_{2}\}\). Hence by Lemma 2.6, we have

$$ \lim_{n\rightarrow\infty}\|x_{n+1}-x_{n} \|=0. $$
(3.12)

Then, from (3.11), (3.12), and \(|r_{n+1}-r_{n}|\rightarrow0\), we have

$$ \lim_{n\rightarrow\infty}\|u_{n+1}-u_{n} \|=0. $$
(3.13)

For any \(p\in U\cap B^{-1}0\), in the same way as in the proof of Theorem 3.1, we have

$$ \|u_{n}-p\|^{2}\leq\|x_{n}-p \|^{2}-\|u_{n}-x_{n}\|^{2}. $$
(3.14)

Then, from (3.5) and (3.14), by the same argument as in the proof of Theorem 3.1, we derive that

$$\begin{aligned} \Vert x_{n+1}-p\Vert ^{2} =&\bigl\Vert (1- \alpha_{n}) \bigl(T_{\lambda_{n}}(u_{n})-p\bigr)+ \alpha_{n}\bigl(f(x_{n})-p\bigr)\bigr\Vert ^{2} \\ \leq&\bigl\Vert T_{\lambda_{n}}(u_{n})-T_{\lambda_{n}}(p)+T_{\lambda _{n}}(p)-T(p) \bigr\Vert ^{2} \\ &{}+2\alpha_{n}\bigl\Vert T_{\lambda_{n}}(u_{n})-p \bigr\Vert \cdot\bigl\Vert f(x_{n})-p\bigr\Vert +\alpha _{n}\bigl\Vert f(x_{n})-p\bigr\Vert ^{2} \\ \leq&\Vert x_{n}-p\Vert ^{2}-\Vert u_{n}-x_{n}\Vert ^{2}+2\Vert u_{n}-p\Vert \cdot\lambda_{n}\beta \Vert p\Vert + \lambda_{n}^{2}\beta^{2}\Vert p\Vert ^{2} \\ &{}+\alpha_{n}\bigl(2\bigl(\Vert u_{n}-p\Vert + \lambda_{n}\beta \Vert p\Vert \bigr)\cdot\bigl\Vert f(x_{n})-p\bigr\Vert +\bigl\Vert f(x_{n})-p\bigr\Vert ^{2}\bigr) \end{aligned}$$

and hence

$$\begin{aligned} \Vert x_{n}-u_{n}\Vert ^{2} \leq&\Vert x_{n}-p\Vert ^{2}-\Vert x_{n+1}-p\Vert ^{2}+2\beta\lambda_{n}\Vert u_{n}-p\Vert \cdot \Vert p\Vert +\lambda_{n}^{2}\beta^{2} \Vert p\Vert ^{2} \\ &{}+\alpha_{n}\bigl(2\bigl(\Vert u_{n}-p\Vert +\beta \lambda_{n}\Vert p\Vert \bigr)\cdot\bigl\Vert f(x_{n})-p\bigr\Vert +\bigl\Vert f(x_{n})-p\bigr\Vert ^{2}\bigr) \\ \leq&\Vert x_{n+1}-x_{n}\Vert \bigl(\Vert x_{n}-p\Vert +\Vert x_{n+1}-p\Vert \bigr) \\ &{}+2\beta\lambda_{n}\Vert u_{n}-p\Vert \cdot \Vert p\Vert +\lambda_{n}^{2}\beta^{2}\Vert p\Vert ^{2} \\ &{}+\alpha_{n}\bigl(2\bigl(\Vert u_{n}-p\Vert +\beta \lambda_{n}\Vert p\Vert \bigr)\cdot\bigl\Vert f(x_{n})-p\bigr\Vert +\bigl\Vert f(x_{n})-p\bigr\Vert ^{2}\bigr). \end{aligned}$$

Since \(\{x_{n}\}\), \(\{f(x_{n})\}\), and \(\{u_{n}\}\) are bounded, \(\alpha_{n}\rightarrow0\), \(\lambda_{n}\rightarrow0\), and \(\|x_{n+1}-x_{n}\|\rightarrow0\), we have

$$ \lim_{n\rightarrow\infty}\|x_{n}-u_{n} \|=0. $$
(3.15)

Next, we derive that

$$\begin{aligned} \bigl\Vert x_{n}-T_{\lambda_{n}}(x_{n})\bigr\Vert =& \bigl\Vert x_{n}-x_{n+1}+x_{n+1}-T_{\lambda_{n}}(u_{n})+T_{\lambda _{n}}(u_{n})-T_{\lambda_{n}}(x_{n}) \bigr\Vert \\ \leq&\Vert x_{n}-x_{n+1}\Vert +\bigl\Vert x_{n+1}-T_{\lambda_{n}}(u_{n})\bigr\Vert +\bigl\Vert T_{\lambda _{n}}(u_{n})-T_{\lambda_{n}}(x_{n})\bigr\Vert \\ \leq&\Vert x_{n}-x_{n+1}\Vert +\alpha_{n} \bigl\Vert f(x_{n})-T_{\lambda_{n}}(u_{n})\bigr\Vert + \Vert u_{n}-x_{n}\Vert . \end{aligned}$$

From (3.12), (3.15), and \(\alpha_{n}\rightarrow0\), we have

$$ \bigl\Vert x_{n}-T_{\lambda_{n}}(x_{n})\bigr\Vert \rightarrow0. $$

It follows that \(\|u_{n}-T_{\lambda_{n}}(u_{n})\|\rightarrow0\).

Now we show that

$$ \limsup_{n\rightarrow\infty}\bigl\langle x_{n}-q,-(I-f)q\bigr\rangle \leq0, $$

where \(q\in U\cap B^{-1}0\) is the unique solution of the variational inequality (3.1).

Indeed, take a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that

$$ \limsup_{n\rightarrow\infty}\bigl\langle x_{n}-q,-(I-f)q \bigr\rangle =\lim_{k\rightarrow\infty}\bigl\langle x_{n_{k}}-q,-(I-f)q \bigr\rangle . $$
(3.16)

Since \(\{x_{n}\}\) is bounded, without loss of generality, we may assume that \(x_{n_{k}}\rightharpoonup\tilde{x}\).

By the same argument as in the proof of Theorem 3.1, we have \(\tilde {x}\in U\cap B^{-1}0\).

Since \(q=P_{U\cap B^{-1}0}f(q)\), it follows that

$$ \limsup_{n\rightarrow\infty}\bigl\langle (I-f)q,q-x_{n} \bigr\rangle =\bigl\langle (I-f)q,q-\tilde{x}\bigr\rangle \leq0. $$
(3.17)

Finally, we show that \(x_{n}\rightarrow q\).

In fact,

$$\begin{aligned} x_{n+1}-q =&\alpha_{n}f(x_{n})+(1- \alpha_{n})T_{\lambda_{n}}(u_{n})-q \\ =&\alpha_{n}\bigl(f(x_{n})-f(q)\bigr)+ \alpha_{n}\bigl(f(q)-q\bigr) \\ &{}+(1-\alpha _{n}) \bigl(T_{\lambda_{n}}(u_{n})-T_{\lambda_{n}}(q)\bigr) \\ &{}+(1-\alpha_{n}) \bigl(T_{\lambda_{n}}(q)-T(q)\bigr). \end{aligned}$$

So, from (3.5) and (3.9), we derive

$$\begin{aligned} \Vert x_{n+1}-q\Vert ^{2} =&(1-\alpha_{n}) \bigl\langle \bigl(T_{\lambda_{n}}(u_{n})-T_{\lambda _{n}}(q)\bigr)+ \bigl(T_{\lambda_{n}}(q)-T(q)\bigr),x_{n+1}-q\bigr\rangle \\ &{}+\alpha_{n}\bigl\langle f(x_{n})-f(q),x_{n+1}-q \bigr\rangle +\alpha_{n}\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle \\ \leq&(1-\alpha_{n}) \bigl(\Vert u_{n}-q\Vert + \lambda_{n}\beta \Vert q\Vert \bigr)\Vert x_{n+1}-q \Vert \\ &{}+\alpha_{n}k\Vert x_{n}-q\Vert \cdot \Vert x_{n+1}-q\Vert +\alpha_{n}\bigl\langle -(I-f)q,x_{n+1}-q \bigr\rangle \\ \leq&(1-\alpha_{n})\Vert x_{n}-q\Vert \cdot \Vert x_{n+1}-q\Vert +(1-\alpha_{n})\lambda _{n}\beta \Vert q\Vert \cdot \Vert x_{n+1}-q\Vert \\ &{}+\alpha_{n}k\Vert x_{n}-q\Vert \cdot \Vert x_{n+1}-q\Vert +\alpha_{n}\bigl\langle -(I-f)q,x_{n+1}-q \bigr\rangle \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\Vert x_{n}-q\Vert \cdot \Vert x_{n+1}-q\Vert +\lambda_{n}\beta \Vert q \Vert \cdot \Vert x_{n+1}-q\Vert \\ &{}+\alpha_{n}\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\frac{1}{2}\bigl(\Vert x_{n}-q\Vert ^{2}+\Vert x_{n+1}-q\Vert ^{2}\bigr) \\ &{}+\alpha_{n}\biggl[\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle +\frac{\lambda_{n}}{\alpha_{n}}\beta \Vert q\Vert \cdot \Vert x_{n+1}-q \Vert \biggr]. \end{aligned}$$

It follows that

$$\begin{aligned} \|x_{n+1}-q\|^{2} \leq&\frac{1-\alpha_{n}(1-k)}{1+\alpha_{n}(1-k)} \|x_{n}-q\|^{2} \\ &{}+\frac{2\alpha_{n}}{1+\alpha_{n}(1-k)}\biggl[\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle +\frac{\lambda_{n}}{\alpha_{n}}\beta\|q\|\cdot\| x_{n+1}-q\|\biggr] \\ \leq&\bigl(1-\alpha_{n}(1-k)\bigr)\|x_{n}-q \|^{2} \\ &{}+\frac{2\alpha_{n}}{1+\alpha_{n}(1-k)}\biggl[\bigl\langle -(I-f)q,x_{n+1}-q\bigr\rangle +\frac{\lambda_{n}}{\alpha_{n}}\beta\|q\|\cdot\| x_{n+1}-q\|\biggr]; \end{aligned}$$

Since \(\{x_{n}\}\) is bounded, we can take a constant \(M>0\) such that

$$ M\geq\|x_{n+1}-q\|, \quad n\geq1. $$

Then we obtain

$$ \|x_{n+1}-q\|^{2}\leq\bigl(1- \alpha_{n}(1-k)\bigr)\|x_{n}-q\|^{2}+ \alpha_{n}\delta_{n}, $$
(3.18)

where \(\delta_{n}=\frac{2}{1+\alpha_{n}(1-k)}[\langle -(I-f)q,x_{n+1}-q\rangle+\frac{\lambda_{n}}{\alpha_{n}}\beta\|q\|M]\).

By (3.17) and \(\lambda_{n}=o(\alpha_{n})\), we get \(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\). Now applying Lemma 2.6 to (3.18) one concludes that \(x_{n}\rightarrow q\) as \(n\rightarrow\infty\). The variational inequality (3.1) can be rewritten as

$$ \bigl\langle f(q)-q,q-z\bigr\rangle \geq0,\quad \forall z\in U\cap B^{-1}0. $$

By Lemma 2.1, it is equivalent to the following fixed point equation:

$$ P_{U\cap B^{-1}0}f(q)=q. $$

This completes the proof. □

4 Applications

In this section, we will give some applications, which are useful in nonlinear analysis and optimization.

Theorem 4.1

Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let F be a bifunction of \(C\times C\) into \(\mathbb{R}\) satisfying (A1)-(A4). Let \(J_{r}\) be the resolvent of F for \(r>0\). Let g be a real-valued convex function of C into \(\mathbb{R}\), and let the gradient \(\nabla g\) be \(1/L\)-ism with \(L>0\). Let f be a contraction with the constant \(k\in(0,1)\). Assume that \(U\cap \operatorname{EP}(F)\neq\emptyset\). Let the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) be generated by \(x_{1}\in C\) and

$$ \left \{ \begin{array}{l} F(u_{n},y)+\frac{1}{r_{n}}\langle y-u_{n},u_{n}-x_{n}\rangle\geq0,\quad \forall y\in C, \\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array} \right . $$
(4.1)

where \(T_{\lambda_{n}}=P_{C}(I-\beta\nabla g_{\lambda_{n}})\), \(\nabla g_{\lambda_{n}}=\nabla g+ \lambda_{n}I\), \(\beta\in(0,2/L)\). Let \(\{r_{n}\}\), \(\{\alpha_{n}\}\), \(\{\lambda_{n}\}\) satisfy the conditions (C1)-(C3). Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in U\cap \operatorname{EP}(F)\), where \(q=P_{U\cap \operatorname{EP}(F)}f(q)\).

Proof

For a bifunction F of \(C\times C\) into \(\mathbb{R}\) satisfying (A1)-(A4), we can define a maximal monotone operator \(A_{F}\) with \(\operatorname{dom}A_{F}\subset C\). Put \(B=A_{F}\) in Theorem 3.2. Then by Lemma 2.5, we have \(u_{n}=J_{r_{n}}(x_{n})\). Thus we obtain the desired result by Theorem 3.2. □

On the other hand, based on Theorem 3.2 and Theorem 4.1, we will give another two applications. In 1994, Censor and Elfving [23] introduced the split feasibility problem (SFP). Various algorithms were then introduced to solve it (see [24] and [25-29]). Recently, many authors have paid attention to the SFP due to its applications in signal processing and image reconstruction (see [4, 30, 31]).

Let C and Q be nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Then the SFP under consideration in this paper can mathematically be formulated as finding a point x satisfying the following property:

$$ x\in C\quad \mbox{and}\quad Ax\in Q, $$
(4.2)

where \(A:H_{1}\rightarrow H_{2}\) is a bounded linear operator. It is clear that \(x^{*}\) is a solution to the split feasibility problem (4.2) if and only if \(x^{*}\in C\) and \(Ax^{*}-P_{Q}Ax^{*}=0\). We define the proximity function g by

$$g(x)=\frac{1}{2}\|Ax-P_{Q}Ax\|^{2}. $$

Then consider the constrained convex minimization problem

$$ \min_{x\in C}g(x)=\min_{x\in C} \frac{1}{2}\|Ax-P_{Q}Ax\|^{2}. $$
(4.3)

Then \(x^{*}\) solves the SFP (4.2) if and only if \(x^{*}\) solves the minimization problem (4.3) with the minimum equal to 0.

In particular, the so-called CQ algorithm was introduced by Byrne [24]. Take the initial guess \(x_{0}\in H_{1}\) arbitrarily, and define \(\{x_{n}\}\) recursively as follows:

$$ x_{n+1}=P_{C}\bigl(I-\beta A^{*}(I-P_{Q})A\bigr)x_{n},\quad n\geq0, $$
(4.4)

where \(0<\beta<2/\|A\|^{2}\) and \(P_{C}\) denotes the projector onto C. Then the sequence \(\{x_{n}\}\) generated by (4.4) converges weakly to a solution of the SFP.
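A minimal sketch of the CQ iteration (4.4) under illustrative data of our own (A and b below are assumptions): with \(C=H_{1}\) and \(Q=\{b\}\) we have \(P_{C}=I\) and \(P_{Q}\equiv b\), so \(A^{*}(I-P_{Q})Ax=A^{*}(Ax-b)\) and (4.4) reduces to a Landweber-type iteration.

```python
import numpy as np

# Sketch of the CQ algorithm (4.4) with C = R^2, Q = {b} (illustrative).

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, 1.0])
beta = 1.0 / np.linalg.norm(A, 2) ** 2        # beta in (0, 2/||A||^2)

x = np.zeros(2)
for n in range(5000):
    x = x - beta * A.T @ (A @ x - b)          # P_C = I here

print(x, np.linalg.solve(A, b))               # x_n approaches the solution of Ax = b
```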

Let B be a maximal monotone operator on Hilbert space H. Let \(J_{r}=(I+rB)^{-1}\) be the resolvent of B for \(r>0\). In order to obtain a strong convergence iterative sequence to solve the SFP, we propose a new algorithm as follows: \(x_{1}\in C\),

$$ \left \{ \begin{array}{l} u_{n}=J_{r_{n}}(x_{n}), \\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array} \right . $$
(4.5)

where \(f:C\rightarrow C\) is a contraction with the constant \(k\in(0,1)\), and \(\{T_{\lambda_{n}}\}\) satisfy \(P_{C}(I-\beta(A^{*}(I-P_{Q})A+\lambda_{n}I))=T_{\lambda_{n}}\) for all n, and \(\beta\in(0,2/\|A\|^{2})\). We can show that the sequence \(\{x_{n}\}\) generated by (4.5) converges strongly to a solution of the SFP (4.2) if the sequence \(\{\alpha_{n}\}\subset(0,1)\) and the sequence \(\{\lambda_{n}\}\) of parameters satisfy appropriate conditions. Applying Theorem 3.2, we obtain the following result.

Theorem 4.2

Assume that the split feasibility problem (4.2) is consistent. Let the sequence \(\{x_{n}\}\) be generated by (4.5). Here the sequences \(\{r_{n}\}\) and \(\{\alpha_{n}\}\subset(0,1)\), and the sequence \(\{\lambda_{n}\}\) satisfy the conditions (C1)-(C3). Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in W\cap B^{-1}0\), where W denotes the solution set of SFP (4.2).

Proof

By the definition of the proximity function g, we have

$$\nabla g(x)=A^{*}(I-P_{Q})Ax, $$

Since \(P_{Q}\) is a \(1/2\)-averaged mapping, \(I-P_{Q}\) is 1-ism. Thus, for all \(x,y\in C\), we obtain

$$\begin{aligned}& \bigl\langle \nabla g(x)-\nabla g(y),x-y\bigr\rangle -1/\Vert A\Vert ^{2}\cdot\bigl\Vert \nabla g(x)-\nabla g(y)\bigr\Vert ^{2} \\& \quad = \bigl\langle A^{*}(I-P_{Q})Ax-A^{*}(I-P_{Q})Ay,x-y \bigr\rangle \\& \qquad {} -1/\Vert A\Vert ^{2}\cdot\bigl\Vert A^{*}(I-P_{Q})Ax-A^{*}(I-P_{Q})Ay\bigr\Vert ^{2} \\& \quad = \bigl\langle A^{*}\bigl[(I-P_{Q})Ax-(I-P_{Q})Ay \bigr],x-y\bigr\rangle \\& \qquad {} -1/\Vert A\Vert ^{2}\cdot\bigl\Vert A^{*} \bigl[(I-P_{Q})Ax-(I-P_{Q})Ay\bigr]\bigr\Vert ^{2} \\& \quad = \bigl\langle (I-P_{Q})Ax-(I-P_{Q})Ay,Ax-Ay\bigr\rangle \\& \qquad {} -1/\Vert A\Vert ^{2}\cdot\bigl\Vert A^{*} \bigl[(I-P_{Q})Ax-(I-P_{Q})Ay\bigr]\bigr\Vert ^{2} \\& \quad \geq \bigl\Vert (I-P_{Q})Ax-(I-P_{Q})Ay\bigr\Vert ^{2} \\& \qquad {} -\bigl\Vert (I-P_{Q})Ax-(I-P_{Q})Ay\bigr\Vert ^{2} \\& \quad = 0. \end{aligned}$$

So \(\nabla g\) is \(1/\|A\|^{2}\)-ism.

Set \(g_{\lambda_{n}}(x)=g(x)+\frac{\lambda_{n}}{2}\|x\|^{2}\), consequently,

$$\begin{aligned} \nabla g_{\lambda_{n}}(x) =&\nabla g(x)+\lambda_{n}I(x) \\ =&A^{*}(I-P_{Q})Ax+\lambda_{n}x. \end{aligned}$$

Then the iterative scheme (4.5) is equivalent to

$$ \left \{ \begin{array}{l} u_{n}=J_{r_{n}}(x_{n}), \\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array} \right . $$

where \(\{T_{\lambda_{n}}\}\) satisfy \(P_{C}(I-\beta(A^{*}(I-P_{Q})A+\lambda_{n}I))=T_{\lambda_{n}}\) for all n, and \(\beta\in(0,2/\|A\|^{2})\). Hence the conclusion follows from Theorem 3.2. □

However, in order to obtain a strong convergence iterative sequence to solve the SFP, we propose another new algorithm as follows: \(x_{1}\in C\),

$$ \left \{ \begin{array}{l} F(u_{n},y)+\frac{1}{r_{n}}\langle y-u_{n},u_{n}-x_{n}\rangle\geq0,\quad \forall y\in C, \\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})T_{\lambda_{n}}(u_{n}),\quad \forall n\in\mathbb{N}, \end{array} \right . $$
(4.6)

where \(f:C\rightarrow C\) is a contraction with the constant \(k\in(0,1)\), and \(\{T_{\lambda_{n}}\}\) satisfy \(P_{C}(I-\beta(A^{*}(I-P_{Q})A+\lambda_{n}I))=T_{\lambda_{n}}\) for all n, and \(\beta\in(0,2/\|A\|^{2})\). We can show that the sequence \(\{x_{n}\}\) generated by (4.6) converges strongly to a solution of the SFP (4.2) if the sequence \(\{\alpha_{n}\}\subset(0,1)\) and the sequence \(\{\lambda_{n}\}\) of parameters satisfy appropriate conditions. Applying Theorem 3.2, we obtain the following result.

Theorem 4.3

Assume that the split feasibility problem (4.2) is consistent. Let the sequence \(\{x_{n}\}\) be generated by (4.6). Here the sequences \(\{r_{n}\}\), \(\{\alpha_{n}\}\subset(0,1)\), and \(\{\lambda_{n}\}\) satisfy the conditions (C1)-(C3). Then the sequence \(\{x_{n}\}\) converges strongly to a point \(q\in W\cap \operatorname{EP}(F)\), where W denotes the solution set of SFP (4.2).

Proof

For a bifunction F of \(C\times C\) into \(\mathbb{R}\) satisfying (A1)-(A4), we can define a maximal monotone operator \(A_{F}\) with \(\operatorname{dom}A_{F}\subset C\). Put \(B=A_{F}\) in Theorem 3.2. Then by Lemma 2.5, we have \(u_{n}=J_{r_{n}}(x_{n})\). Thus we obtain the desired result by Theorem 3.2 and Theorem 4.2. □

5 Numerical results

In this section, we present two concrete examples to illustrate the numerical performance of our algorithm. Using the algorithms of Theorem 4.2 and Theorem 3.2, we demonstrate their realization, effectiveness, and convergence in solving a system of linear equations and a constrained convex minimization problem.

The first example is a \(4\times4\) system of linear equations, solved by the algorithm of Theorem 4.2.

Example 1

In Theorem 4.2, we assume that \(H_{1}=H_{2}=\mathbb{R}^{4}\). Take \(f=\frac{1}{4}I\), where I denotes the \(4\times4\) identity matrix, and set the parameters \(\alpha_{n}=\frac{1}{n+2}\) and \(\lambda_{n}=\frac{1}{(n+2)^{2}}\) for every \(n\geq0\), with \(\beta=\frac{1}{100}\). Take

$$\begin{aligned}& A=\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} 1&-1&2&-1\\ 2&-2&3&-3\\ 1&1&1&0\\ 1&-1&4&3 \end{array} \right ), \end{aligned}$$
(5.1)
$$\begin{aligned}& b=\left ( \begin{array}{@{}c@{}} -2\\ -10\\ 6\\ 18 \end{array} \right ). \end{aligned}$$
(5.2)

The SFP can be formulated as the problem of finding a point \(x^{*}\) with the property

$$x^{*}\in C \quad \mbox{and}\quad Ax^{*}\in Q, $$

where \(C=H_{1}\), \(Q=\{b\}\subset H_{2}\). That is, \(x^{*}\) is the solution of the system of linear equations \(Ax=b\), and

$$ x^{*}=\left ( \begin{array}{@{}c@{}} 1\\ 3\\ 2\\ 4 \end{array} \right ).$$
(5.3)

Then by Theorem 4.2 and Lemma 2.8, the sequence \(\{x_{n}\}\) is generated by

$$x_{n+1} =\frac{1}{4(n+2)}x_{n}+\frac{n+1}{n+2} \biggl(x_{n}-\frac {1}{100}A^{*}Ax_{n}+ \frac{1}{100}A^{*}b-\frac{1}{100(n+2)^{2}}x_{n}\biggr). $$

As \(n\rightarrow\infty\), we have \(x_{n}\rightarrow x^{*}=(1,3,2,4)^{T}\).
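A minimal Python sketch of this recursion follows; it is our own illustration, and since the paper does not specify the starting point \(x_{1}\) or the iteration count, the zero start and the loop length below are assumptions.

```python
import numpy as np

A = np.array([[1., -1., 2., -1.],
              [2., -2., 3., -3.],
              [1.,  1., 1.,  0.],
              [1., -1., 4.,  3.]])
b = np.array([-2., -10., 6., 18.])
beta = 1.0 / 100.0

x = np.zeros(4)             # x_1 is not specified in the paper; zero start is an assumption
for n in range(200000):
    alpha = 1.0 / (n + 2)
    lam = 1.0 / (n + 2) ** 2
    step = x - beta * (A.T @ (A @ x) - A.T @ b) - beta * lam * x
    x = alpha * 0.25 * x + (1.0 - alpha) * step

print(x)                    # drifts toward x* = (1, 3, 2, 4)^T
```

As a sanity check, `A @ x` can be compared with `b` after the loop; the viscosity term decays like \(1/n\), so convergence is slow but steady.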

From Table 1, we can easily see that, as the number of iterations increases, \(x_{n}\) approaches the exact solution \(x^{*}\) and the errors gradually approach zero.

Table 1 Numerical results as regards Example 1

The second example is a constrained convex minimization problem, which uses the algorithm of Theorem 3.2.

Example 2

In Theorem 3.2, we assume that \(H=\mathbb{R}\) and \(C=[0,2]\). Take \(f=\frac{1}{4}I\), where I denotes the identity mapping. The parameters are \(\alpha_{n}=\frac{1}{n+2}\) and \(\lambda_{n}=\frac{1}{(n+2)^{2}}\) for every \(n\geq0\), and \(\beta=\frac{1}{2}\). Consider the problem (1.1) with the function

$$ g(x)=\frac{-x}{e^{x}}, \quad \forall x\in C. $$
(5.4)

The problem (1.1) can be written as

$$ \min_{x\in[0,2]}\frac{-x}{e^{x}}. $$
(5.5)

It is easy to see that \(\nabla g\) is \(1/2\)-ism, that is, \(L=2\). Since \(g'(x)=(x-1)e^{-x}\) vanishes only at \(x=1\), the function g attains its minimum over \([0,2]\) at the unique point \(x^{*}=1\).

Then by Theorem 3.2 and Lemma 2.8, the sequence \(\{x_{n}\}\) is generated by

$$x_{n+1} =\frac{1}{4(n+2)}x_{n}+\frac{n+1}{n+2}P_{C} \biggl(x_{n}-\frac {1}{2}\biggl(\frac{x_{n}}{e^{x_{n}}}- \frac{1}{e^{x_{n}}}+\frac{1}{(n+2)^{2}}x_{n}\biggr)\biggr). $$

As \(n\rightarrow\infty\), we have \(x_{n}\rightarrow x^{*}=1\).
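A minimal Python sketch of this scalar recursion, under the same caveats as in Example 1 (the starting point \(x_{1}\) is not specified in the paper, so the zero start is an assumption):

```python
import math

def proj_C(t):                        # projection onto C = [0, 2]
    return min(max(t, 0.0), 2.0)

x = 0.0                               # x_1 is not specified; zero start is an assumption
for n in range(100000):
    alpha = 1.0 / (n + 2)
    lam = 1.0 / (n + 2) ** 2
    grad = (x - 1.0) * math.exp(-x)   # g'(x) = (x - 1) e^{-x}
    x = alpha * 0.25 * x + (1.0 - alpha) * proj_C(x - 0.5 * (grad + lam * x))

print(x)                              # approaches x* = 1
```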

From Table 2, we can easily see that, by using the regularization method, \(x_{n}\) approaches \(x^{*}\) and the errors gradually approach zero as the number of iterations increases.

Table 2 Numerical results as regards Example 2

From an implementation point of view, the algorithms in the above examples are straightforward to program.

6 Conclusion

In a real Hilbert space, methods for solving the equilibrium problem and the constrained convex minimization problem have been studied extensively, but mostly separately. Recently, Tian and Liu [15] were the first to propose composite iterative algorithms for finding a common solution of an equilibrium problem and a constrained convex minimization problem. In this paper, we use the regularized gradient-projection algorithm to find the unique common solution of a constrained convex minimization problem and the problem of finding zero points of a maximal monotone operator; this common solution also solves a certain variational inequality. In particular, under suitable conditions, the zero point problem for a maximal monotone operator can be transformed into an equilibrium problem, and new strong convergence theorems and applications are obtained. Finally, we apply the algorithm to the split feasibility problem and the constrained convex minimization problem, and we illustrate its realization, effectiveness, and convergence with concrete examples and numerical results.

References

  1. Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)

  2. Ceng, LC, Ansari, QH, Yao, JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286-5302 (2011)

  3. Ceng, LC, Ansari, QH, Yao, JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 1(3), 341-359 (2011)

  4. Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)

  5. Ceng, LC, Ansari, QH, Wen, CF: Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems. J. Inequal. Appl. 2013, Article ID 240 (2013)

  6. Takahashi, W: Convex Analysis and Approximation of Fixed Points. Yokohama Publishers, Yokohama (2000)

  7. Flam, SD, Antipin, AS: Equilibrium programming using proximal-like algorithms. Math. Program. 78, 29-41 (1997)

  8. Takahashi, S, Takahashi, W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331, 506-515 (2007)

  9. He, HM, Liu, SY, Cho, YJ: An explicit method for systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings. J. Comput. Appl. Math. 235, 4128-4139 (2011)

  10. Qin, XL, Cho, YJ, Kang, SM: Convergence analysis on hybrid projection algorithms for equilibrium problems and variational inequality problems. Math. Model. Anal. 14, 335-351 (2009)

  11. Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117-136 (2005)

  12. Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)

  13. Marino, G, Xu, HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 318, 43-52 (2006)

  14. Zeng, LC, Ansari, QH, Shyu, DS, Yao, JC: Strong and weak convergence theorems for common solutions of generalized equilibrium problems and zeros of maximal monotone operators. Fixed Point Theory Appl. 2010, Article ID 590278 (2010)

  15. Tian, M, Liu, L: General iterative methods for equilibrium and constrained convex minimization problem. Optimization 63(9), 1367-1385 (2014)

  16. Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)

  17. Eshita, K, Takahashi, W: Approximating zero points of accretive operators in general Banach spaces. JP J. Fixed Point Theory Appl. 2, 105-116 (2007)

  18. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)

  19. Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-145 (1994)

  20. Aoyama, K, Kimura, Y, Takahashi, W: Maximal monotone operators and maximal monotone functions for equilibrium problems. J. Convex Anal. 15, 395-409 (2008)

  21. Hundal, H: An alternating projection that does not converge in norm. Nonlinear Anal. 57, 35-61 (2004)

  22. Lin, LJ, Takahashi, W: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 16, 429-453 (2012)

  23. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)

  24. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)

  25. Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18(2), 441-453 (2002)

  26. Qu, B, Xiu, N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21(5), 1655-1662 (2005)

  27. Xu, HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22(6), 2021-2034 (2006)

  28. Yang, Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 20(4), 1261-1266 (2004)

  29. Yang, Q, Zhao, J: Generalized KM theorems and their applications. Inverse Probl. 22(3), 833-844 (2006)

  30. Lopez, G, Martin-Marquez, V, Wang, FH, Xu, HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 085004 (2012)

  31. Zhao, JL, Zhang, YJ, Yang, QZ: Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 219, 1644-1653 (2012)


Acknowledgements

The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing.

Author information


Correspondence to Ming Tian.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All the authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.



Cite this article

Tian, M., Jiao, SW. Regularized gradient-projection methods for the constrained convex minimization problem and the zero points of maximal monotone operator. Fixed Point Theory Appl 2015, 11 (2015). https://doi.org/10.1186/s13663-015-0258-9
