
An iterative algorithm for fixed point problem and convex minimization problem with applications

Abstract

In this paper we prove the strong convergence of an iterative sequence for finding a common element of the fixed points set of a strictly pseudocontractive mapping and the solution set of the constrained convex minimization problem for a convex and continuously Fréchet differentiable functional in a real Hilbert space. We apply our result to solving the split feasibility problem and the convexly constrained linear inverse problem involving the fixed point problem for a strictly pseudocontractive mapping in a real Hilbert space.

1 Introduction

Let H be a real Hilbert space and C a nonempty, closed, and convex subset of H. A mapping \(T:C\rightarrow C\) is said to be nonexpansive if

$$\begin{aligned} \| Tx-Ty\| \leq\| x-y\|, \quad\forall x,y\in C, \end{aligned}$$
(1.1)

and \(T:C\rightarrow C\) is said to be k-strictly pseudocontractive (see [1]) if for \(0\leq k <1\),

$$\begin{aligned} \| Tx-Ty\|^{2} \leq\| x-y\|^{2}+k \bigl\| (I-T)x-(I-T)y\bigr\| ^{2},\quad \forall x,y\in C. \end{aligned}$$
(1.2)

It is well known that every nonexpansive mapping is strictly pseudocontractive. In a real Hilbert space H, we can show that (1.2) is equivalent to

$$\begin{aligned} \langle Tx-Ty,x-y\rangle\leq\| x-y\|^{2}- \frac{1-k}{2}\bigl\| (I-T)x-(I-T)y\bigr\| ^{2}. \end{aligned}$$
(1.3)

A point \(x \in C\) is called a fixed point of T if \(Tx=x\). The set of fixed points of T is denoted by \(F(T)\). The iterative approximation of fixed points for k-strictly pseudocontractive mappings has been studied extensively by many authors (see, for example, [2-13] and the references contained therein).

For any point \(u \in H\), there exists a unique point \(P_{C} u \in C\) such that

$$\|u-P_{C} u\| \leq\|u-y\|,\quad \forall y \in C. $$

\(P_{C}\) is called the metric projection of H onto C. We know that \(P_{C}\) is a nonexpansive mapping of H onto C. It is also well known that \(P_{C}\) satisfies

$$\begin{aligned} \langle x-y, P_{C} x-P_{C} y \rangle\geq \|P_{C} x-P_{C} y\|^{2}, \end{aligned}$$
(1.4)

for all \(x,y \in H\). Furthermore, \(P_{C} x\) is characterized by the properties \(P_{C} x \in C\) and

$$\begin{aligned} \langle x-P_{C} x, P_{C} x-y \rangle\geq0, \end{aligned}$$
(1.5)

for all \(y \in C\).
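As a numerical illustration of the characterization (1.5), the following minimal Python sketch checks the inequality for the projection onto the closed unit ball, a hypothetical choice of C for which \(P_{C}\) has the closed form \(x/\max(1,\|x\|)\).

```python
# A minimal numerical check of the variational characterization (1.5), using
# the closed unit ball as a hypothetical C, where P_C(x) = x / max(1, ||x||).
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto the closed unit ball of R^n."""
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=5)                  # a point of H = R^5, outside the ball
px = proj_unit_ball(x)
for _ in range(100):
    y = proj_unit_ball(rng.normal(size=5))    # an arbitrary point y in C
    assert np.dot(x - px, px - y) >= -1e-12   # inequality (1.5) holds
```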

Definition 1.1

A mapping \(T:H\rightarrow H\) is said to be firmly nonexpansive if and only if \(2T-I\) is nonexpansive, or equivalently

$$\langle x-y,Tx-Ty\rangle\geq\|Tx-Ty\|^{2}, \quad\forall x,y \in H. $$

Alternatively, T is firmly nonexpansive if and only if T can be expressed as

$$T=\frac{1}{2}(I+S), $$

where \(S:H\rightarrow H\) is nonexpansive. For example, the projections are firmly nonexpansive.

Definition 1.2

A mapping \(T:H\rightarrow H\) is said to be an averaged mapping if and only if it can be written as the average of the identity mapping I and a nonexpansive mapping; that is,

$$\begin{aligned} T=(1-\alpha)I+\alpha S, \end{aligned}$$
(1.6)

where \(\alpha\in(0,1)\) and \(S:H\rightarrow H\) is nonexpansive. More precisely, when (1.6) holds, we say that T is α-averaged. Thus, firmly nonexpansive mappings (in particular, projections) are \(\frac{1}{2}\)-averaged mappings.

Definition 1.3

A nonlinear operator T with domain \(D(T)\subset H\) and range \(R(T)\subset H\) is said to be:

  1. (a)

    monotone if

    $$\langle x-y, Tx-Ty\rangle\geq0,\quad \forall x, y \in D(T), $$
  2. (b)

    β-strongly monotone if there exists \(\beta> 0\) such that

    $$\langle x-y, Tx-Ty\rangle\geq\beta\|x-y\|^{2}, \quad\forall x, y \in D(T), $$
  3. (c)

    ν-inverse strongly monotone (for short, ν-ism) if there exists \(\nu> 0\) such that

    $$\langle x-y, Tx-Ty\rangle\geq\nu\|Tx-Ty\|^{2},\quad \forall x, y \in D(T). $$

It can easily be seen that (i) if T is nonexpansive, then \(I-T\) is monotone; (ii) the projection mapping \(P_{C}\) is a 1-ism. Inverse strongly monotone (also referred to as co-coercive) operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems (see, for example, [14, 15] and the references therein).

Consider the following constrained convex minimization problem:

$$\begin{aligned} \operatorname{minimize}\bigl\{ f(x):x \in C\bigr\} , \end{aligned}$$
(1.7)

where \(f:C\rightarrow\mathbb{R}\) is a real-valued convex function. We say that the minimization problem (1.7) is consistent if it has a solution. In the sequel, we shall denote the set of solutions of problem (1.7) by Γ. If f is (Fréchet) differentiable, then the gradient-projection method (for short, GPM) generates a sequence \(\{x_{n}\}\) using the following recursive formula:

$$\begin{aligned} x_{n+1} = P_{C} \bigl(x_{n}-\lambda \nabla f(x_{n})\bigr), \quad\forall n \geq1, \end{aligned}$$
(1.8)

or more generally,

$$\begin{aligned} x_{n+1} = P_{C} \bigl(x_{n}- \lambda_{n} \nabla f(x_{n})\bigr),\quad \forall n \geq1, \end{aligned}$$
(1.9)

where in both (1.8) and (1.9) the initial guess \(x_{1}\) is taken from C arbitrarily, and the parameters, λ or \(\lambda _{n}\), are positive real numbers. The convergence of the algorithms (1.8) and (1.9) depends on the behavior of the gradient \(\nabla f\). As a matter of fact, it is known that if \(\nabla f\) is α-strongly monotone and L-Lipschitzian with constants \(\alpha, L > 0\), then the operator

$$\begin{aligned} T := P_{C} (I -\lambda\nabla f ) \end{aligned}$$
(1.10)

is a contraction; hence, the sequence \(\{x_{n}\}\) defined by the algorithm (1.8) converges in norm to the unique solution of the minimization problem (1.7). More generally, if the sequence \(\{ \lambda_{n}\}\) is chosen to satisfy the property

$$\begin{aligned} 0 < \liminf\lambda_{n}\leq\limsup\lambda_{n} <\frac{2\alpha}{L^{2}}, \end{aligned}$$
(1.11)

then the sequence \(\{x_{n}\}\) defined by the algorithm (1.9) converges in norm to the unique minimizer of (1.7). However, if the gradient \(\nabla f\) fails to be strongly monotone, the operator T defined by (1.10) would fail to be contractive; consequently, the sequence \(\{x_{n}\}\) generated by the algorithm (1.9) may fail to converge strongly (see [16, Section 4]). If \(\nabla f\) is Lipschitzian, then the algorithms (1.8) and (1.9) can still converge in the weak topology under certain conditions.
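For illustration, here is a minimal Python sketch of the gradient-projection algorithm (1.9); the objective \(f(x)=\frac{1}{2}\|x-z\|^{2}\) (so that \(\nabla f(x)=x-z\) and \(L=1\)) and the choice of C as the closed unit ball are assumptions made only for the demonstration.

```python
# A minimal sketch of the gradient-projection algorithm (1.9); the objective
# f(x) = ||x - z||^2 / 2 (grad f(x) = x - z, L = 1) and C = closed unit ball
# are hypothetical choices made only for the demonstration.
import numpy as np

def proj_unit_ball(x):
    return x / max(1.0, np.linalg.norm(x))

def gradient_projection(grad_f, proj, x1, lam, n_iter=200):
    """Iterates x_{n+1} = P_C(x_n - lambda_n * grad_f(x_n))."""
    x = x1
    for n in range(1, n_iter + 1):
        x = proj(x - lam(n) * grad_f(x))
    return x

z = np.array([2.0, 0.0, 1.0])
x_star = gradient_projection(grad_f=lambda x: x - z, proj=proj_unit_ball,
                             x1=np.zeros(3), lam=lambda n: 1.0)  # lam in (0, 2/L)
print(x_star)  # approximately z / ||z||, the projection of z onto C
```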

The gradient-projection method for finding the approximate solutions of the constrained convex minimization problem is well known; see, for example, [17] and the references therein. The convergence of the sequence generated by this method depends on the behavior of the gradient of the objective function. If the gradient fails to be strongly monotone, then the strong convergence of the sequence generated by gradient-projection method may fail. Recently, Xu [16] gave an alternative operator-oriented approach to algorithm (1.9); namely, an averaged mapping approach. He gave his averaged mapping approach to the gradient-projection algorithm (1.9) and the relaxed gradient-projection algorithm. Moreover, he constructed a counterexample which shows that algorithm (1.8) does not converge in norm in an infinite-dimensional space, and he also presented two modifications of gradient-projection algorithms which are shown to have strong convergence. Further, he regularized the minimization problem (1.7) to devise an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of (1.7) in the consistent case.

Very recently, motivated by the work of Xu [16], Ceng et al. [18] proposed the following implicit iterative scheme:

$$x_{\lambda}=P_{C}\bigl(s\gamma Vx_{\lambda}+(I-s\mu F)T_{\lambda}x_{\lambda}\bigr) $$

and the following explicit iterative scheme:

$$x_{n+1}=P_{C}\bigl(s_{n}\gamma Vx_{n}+(I-s_{n}\mu F)T_{n} x_{n}\bigr) $$

for finding the approximate minimizer of a constrained convex minimization problem, and proved that the sequences generated by their schemes converge strongly to a solution of the constrained convex minimization problem (see [18] for more details). Such a solution is also a solution of a variational inequality defined over the set of fixed points of a nonexpansive mapping.

Motivated by the aforementioned results, we introduce an iterative algorithm for finding a fixed point of a strictly pseudocontractive mapping which is also a solution to a constrained convex minimization problem for a convex and continuously Fréchet differentiable functional in a real Hilbert space and prove strong convergence of the sequences generated by our scheme in a real Hilbert space. We apply our result to the split feasibility problem and the convexly constrained linear inverse problem involving the fixed point problem for a strictly pseudocontractive mapping in a real Hilbert space.

We shall adopt the following notations in this paper:

  • \(x_{n}\rightarrow x\) means that \(\{x_{n}\}\) converges strongly to x;

  • \(x_{n}\rightharpoonup x\) means that \(\{x_{n}\}\) converges weakly to x;

  • \(w_{w}(x_{n}):= \{x: \exists x_{n_{j}}\rightharpoonup x\}\) is the weak w-limit set of the sequence \(\{x_{n}\}_{n=1}^{\infty}\).

2 Main results

We first state some known results which will be used in the sequel.

Lemma 2.1

Let H be a real Hilbert space. Then the following result holds:

$$\| x+y\|^{2}\leq\| x\|^{2}+2\langle y,x+y\rangle,\quad\forall x,y \in H. $$

Lemma 2.2

([13])

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\rightarrow C\) be a k-strictly pseudocontractive mapping. Then \(I-T\) is demiclosed at 0, i.e., if \(x_{n}\rightharpoonup x\in C\) and \(x_{n}-Tx_{n}\rightarrow0\), then \(x=Tx\).

Lemma 2.3

([19])

Assume \(\{a_{n}\}\) is a sequence of nonnegative real numbers such that

$$a_{n+1}\leq(1-\gamma_{n})a_{n}+ \gamma_{n}\delta_{n},\quad n\geq0, $$

where \(\{\gamma_{n}\}\) is a sequence in \((0,1)\) and \(\{\delta_{n}\}\) is a sequence in \(\mathbb{R}\) such that

  1. (i)

    \(\sum_{n=0}^{\infty}\gamma_{n}=\infty\);

  2. (ii)

    \(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\) or \(\sum _{n=0}^{\infty}| \delta_{n}\gamma_{n}|<\infty\).

Then \(\lim_{n\rightarrow\infty}a_{n}=0\).

Following the method of proof of Li and Yao [20] and Maingé [21], we now prove the following theorem.

Theorem 2.4

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Suppose that the minimization problem (1.7) is consistent and let Γ denote its solution set. Assume that the gradient \(\nabla f\) is Lipschitzian with constant \(L>0\). Let T be a k-strictly pseudocontractive mapping of C into itself such that \(F(T)\cap\Gamma\neq\emptyset\). Let \(\{t_{n}\}\) be a sequence in \((0,1)\), \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset (0,1)\), and \(\{\lambda_{n}\}\) a sequence in \((0,\frac{2}{L} )\) satisfying the following conditions:

  1. (i)

    \(\lim_{n\rightarrow\infty}t_{n}=0\);

  2. (ii)

    \(\sum_{n=1}^{\infty}t_{n}=\infty\);

  3. (iii)

    \(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\);

  4. (iv)

    \(0 < \liminf_{n\rightarrow\infty} \lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n} <\frac{2}{L}\).

Then the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) generated for fixed \(u\in C\) by \(u_{1}\in C\),

$$\begin{aligned} \left \{ \begin{array}{@{}l} x_{n}=P_{C}(u_{n}-\lambda_{n}\nabla f(u_{n})), \\ u_{n+1}=(1-\alpha_{n})x_{n}+ \alpha_{n}Tx_{n}-t_{n}(x_{n}-u) ,\quad n \geq1, \end{array} \right . \end{aligned}$$
(2.1)

converge strongly to \(x^{*}\in F(T)\cap\Gamma\), where \(x^{*}:=P_{F(T)\cap\Gamma}u\).

Proof

Following the method of proof of [16], we note that \(x^{*} \in C\) solves the minimization problem (1.7) if and only if \(x^{*}\) solves the fixed point equation

$$x^{*}=P_{C}(I-\lambda\nabla f)x^{*}, $$

where \(\lambda>0\) is any fixed positive number. For the sake of simplicity, we may assume that (due to condition (iv))

$$0< a \leq\lambda_{n}\leq b<\frac{2}{L},\quad n\geq1, $$

where a and b are constants. Furthermore, it is also well known from the proof of [16] that the gradient \(\nabla f\) is \(\frac{1}{L}\)-ism, \((I-\lambda_{n}\nabla f)\) is nonexpansive (see also [22]), and that \(P_{C}(I-\lambda\nabla f )\) is \(\frac{2 + \lambda L}{4}\)-averaged for \(0 < \lambda< \frac{2}{L}\). Hence we find that, for each n, \(P_{C}(I-\lambda_{n} \nabla f )\) is \(\frac{2 + \lambda_{n} L}{4}\)-averaged. Therefore, we can write

$$\begin{aligned} P_{C}(I-\lambda_{n} \nabla f ) = \frac{2 - \lambda_{n} L}{4}I+\frac{2 + \lambda_{n} L}{4}S_{n}=(1-\gamma_{n})I+ \gamma_{n}S_{n}, \end{aligned}$$
(2.2)

where \(S_{n}\) is nonexpansive and \(\gamma_{n} = \frac{2 + \lambda _{n} L}{4} \in[a_{1}, b_{1}]\subset(0, 1)\), where \(a_{1} = \frac{2 + a L}{4}\) and \(b_{1} = \frac{2 + b L}{4} < 1\). Then we can rewrite (2.1) as

$$\begin{aligned} \left \{ \begin{array}{@{}l} x_{n}=(1-\gamma_{n})u_{n}+\gamma_{n}S_{n}u_{n}, \\ u_{n+1}=(1-\alpha_{n})x_{n}+ \alpha_{n}Tx_{n}-t_{n}(x_{n}-u) ,\quad n \geq1. \end{array} \right . \end{aligned}$$
(2.3)

For any \(x^{*}\in F(T)\cap\Gamma\), noticing that \(S_{n}x^{*}=x^{*}\), we have

$$\begin{aligned} \bigl\| x_{n}-x^{*}\bigr\| =&\bigl\| (1-\gamma_{n}) \bigl(u_{n}-x^{*}\bigr)+\gamma _{n} \bigl(S_{n}u_{n}-x^{*}\bigr)\bigr\| \\ \leq&\bigl\| u_{n}-x^{*}\bigr\| \end{aligned}$$
(2.4)

and

$$\begin{aligned} \bigl\| u_{n+1}-x^{*}\bigr\| =& \bigl\| (1- \alpha_{n}-t_{n}) \bigl(x_{n}-x^{*} \bigr)+\alpha _{n}\bigl(Tx_{n}-x^{*}\bigr)+ t_{n}\bigl(u-x^{*}\bigr)\bigr\| \\ \leq& \bigl\| (1-\alpha_{n}-t_{n}) \bigl(x_{n}-x^{*} \bigr)+\alpha _{n}\bigl(Tx_{n}-x^{*}\bigr)\bigr\| + t_{n}\bigl\| u-x^{*}\bigr\| . \end{aligned}$$
(2.5)

But from (1.2) and (1.3), we obtain

$$\begin{aligned} &\bigl\| (1-\alpha_{n}-t_{n}) \bigl(x_{n}-x^{*}\bigr)+\alpha _{n} \bigl(Tx_{n}-x^{*}\bigr)\bigr\| ^{2} \\ &\quad=(1- \alpha_{n}-t_{n})^{2} \bigl\| x_{n}-x^{*} \bigr\| ^{2}+\alpha_{n}^{2}\bigl\| Tx_{n}-x^{*} \bigr\| ^{2} \\ &\qquad{}+2(1-\alpha_{n}-t_{n})\alpha_{n}\bigl\langle Tx_{n}-x^{*},x_{n}-x^{*}\bigr\rangle \\ &\quad\leq (1-\alpha_{n}-t_{n})^{2} \bigl\| x_{n}-x^{*}\bigr\| ^{2} +\alpha_{n}^{2}\bigl[\bigl\| x_{n}-x^{*} \bigr\| ^{2}+k \| x_{n}-Tx_{n}\|^{2}\bigr] \\ &\qquad{}+2(1-\alpha_{n}-t_{n})\alpha_{n} \biggl[\bigl\| x_{n}-x^{*}\bigr\| ^{2}-\frac {1-k}{2}\| x_{n}-Tx_{n}\|^{2} \biggr] \\ &\quad= (1-t_{n})^{2} \bigl\| x_{n}-x^{*} \bigr\| ^{2} +\bigl[k\alpha_{n}^{2}-(1-k) (1-\alpha_{n}-t_{n}) \alpha_{n}\bigr]\| x_{n}-Tx_{n}\|^{2} \\ &\quad= (1-t_{n})^{2} \bigl\| x_{n}-x^{*} \bigr\| ^{2} + \alpha_{n}\bigl[\alpha_{n}-(1-t_{n}) (1-k) \bigr]\| x_{n}-Tx_{n}\|^{2} \\ &\quad\leq (1-t_{n})^{2} \bigl\| x_{n}-x^{*} \bigr\| ^{2}, \end{aligned}$$
(2.6)

which implies

$$\begin{aligned} \bigl\| (1-\alpha_{n}-t_{n}) \bigl(x_{n}-x^{*}\bigr)+\alpha_{n} \bigl(Tx_{n}-x^{*}\bigr)\bigr\| \leq (1-t_{n}) \bigl\| x_{n}-x^{*}\bigr\| . \end{aligned}$$
(2.7)

Therefore, it follows from (2.5), (2.7), and (2.4) that

$$\begin{aligned} \bigl\| u_{n+1}-x^{*}\bigr\| \leq& (1-t_{n}) \bigl\| x_{n}-x^{*}\bigr\| + t_{n}\bigl\| u-x^{*}\bigr\| \\ \leq& (1-t_{n}) \bigl\| u_{n}-x^{*}\bigr\| + t_{n}\bigl\| u-x^{*}\bigr\| \\ \leq& \max\bigl\{ \bigl\| u_{n}-x^{*}\bigr\| , \bigl\| u-x^{*}\bigr\| \bigr\} . \end{aligned}$$
(2.8)

By induction, we have

$$\bigl\| u_{n}-x^{*}\bigr\| \leq\max\bigl\{ \bigl\| u_{1}-x^{*} \bigr\| , \bigl\| u-x^{*}\bigr\| \bigr\} . $$

Hence, \(\{u_{n}\}\) is bounded and so is \(\{x_{n}\}\). Now, using (1.2), we have

$$\begin{aligned} &\bigl\| Tx-x^{*}\bigr\| ^{2} \leq \bigl\| x-x^{*} \bigr\| ^{2}+k \| x-Tx\|^{2} \\ &\quad\Rightarrow\quad \bigl\langle Tx-x^{*},Tx-x^{*}\bigr\rangle \leq \bigl\langle x-x^{*},x-Tx\bigr\rangle +\bigl\langle x-x^{*},Tx-x^{*} \bigr\rangle + k \| x-Tx\|^{2} \\ &\quad\Rightarrow\quad\bigl\langle Tx-x^{*},Tx-x\bigr\rangle \leq \bigl\langle x-x^{*},x-Tx\bigr\rangle + k \| x-Tx\|^{2} \\ &\quad\Rightarrow\quad\langle Tx-x,Tx-x\rangle+\bigl\langle x-x^{*},Tx-x\bigr\rangle \leq\bigl\langle x-x^{*},x-Tx\bigr\rangle + k \| x-Tx \|^{2} \\ &\quad\Rightarrow\quad (1-k)\| x-Tx\|^{2}\leq2\bigl\langle x-x^{*},x-Tx \bigr\rangle . \end{aligned}$$
(2.9)

Therefore, by (2.4) and Lemma 2.1, we obtain

$$\begin{aligned} \bigl\| x_{n+1}-x^{*}\bigr\| ^{2} \leq&\bigl\| u_{n+1}-x^{*}\bigr\| ^{2}=\bigl\| (1-\alpha_{n})x_{n}+ \alpha_{n}Tx_{n}-t_{n}(x_{n}-u)-x^{*} \bigr\| ^{2} \\ =&\bigl\| \bigl(x_{n}-x^{*}\bigr)-\alpha _{n}(x_{n}-Tx_{n})-t_{n}(x_{n}-u) \bigr\| ^{2} \\ \leq&\bigl\| \bigl(x_{n}-x^{*}\bigr)-\alpha_{n}(x_{n}-Tx_{n}) \bigr\| ^{2}-2t_{n}\bigl\langle x_{n}-u,u_{n+1}-x^{*} \bigr\rangle \\ =&\bigl\| x_{n}-x^{*}\bigr\| ^{2}-2\alpha_{n} \bigl\langle x_{n}-Tx_{n},x_{n}-x^{*} \bigr\rangle +\alpha_{n}^{2}\| x_{n}-Tx_{n} \|^{2} \\ &{}-2t_{n}\bigl\langle x_{n}-u,u_{n+1}-x^{*} \bigr\rangle \\ \leq&\bigl\| x_{n}-x^{*}\bigr\| ^{2}-\alpha_{n}(1-k) \| x_{n}-Tx_{n}\|^{2}+\alpha _{n}^{2} \| x_{n}-Tx_{n}\|^{2} \\ &{} -2t_{n}\bigl\langle x_{n}-u,u_{n+1}-x^{*} \bigr\rangle \\ =& \bigl\| x_{n}-x^{*}\bigr\| ^{2}-\alpha_{n} \bigl[(1-k)-\alpha_{n}\bigr]\| x_{n}-Tx_{n} \|^{2} \\ &{} -2t_{n}\bigl\langle x_{n}-u,u_{n+1}-x^{*} \bigr\rangle \\ \leq&\bigl\| u_{n}-x^{*}\bigr\| ^{2}-\alpha_{n} \bigl[(1-k)-\alpha_{n}\bigr]\| x_{n}-Tx_{n} \|^{2} \\ &{} -2t_{n}\bigl\langle x_{n}-u,u_{n+1}-x^{*} \bigr\rangle . \end{aligned}$$
(2.10)

Since \(\{x_{n}\}\) and \(\{u_{n}\}\) are bounded, \(\exists M>0\) such that \(-2\langle x_{n}-u,u_{n+1}-x^{*}\rangle\leq M\) for all \(n\geq1\). Therefore,

$$\begin{aligned} \bigl\| x_{n+1}-x^{*}\bigr\| ^{2}-\bigl\| x_{n}-x^{*}\bigr\| ^{2}+\alpha_{n} \bigl[(1-k)-\alpha _{n}\bigr]\| x_{n}-Tx_{n} \|^{2}\leq t_{n}M. \end{aligned}$$
(2.11)

Now we divide the rest of the proof into two cases.

Case 1.

Assume that \(\{\| x_{n}-x^{*}\|\}\) is a monotonically decreasing sequence. Then \(\{\|x_{n}-x^{*}\|\}\) is convergent and obviously

$$\begin{aligned} \bigl\| x_{n+1}-x^{*}\bigr\| -\bigl\| x_{n}-x^{*} \bigr\| \rightarrow0, \quad n\rightarrow\infty. \end{aligned}$$
(2.12)

This together with (2.11) and the condition that \(t_{n}\rightarrow 0\) implies that

$$\begin{aligned} \| x_{n}-Tx_{n}\|\rightarrow0,\quad n\rightarrow \infty. \end{aligned}$$
(2.13)

From (2.1) and (1.4), we obtain (noting that \((I-\lambda_{n}\nabla f)\) is nonexpansive)

$$\begin{aligned}[b] \bigl\| x_{n}-x^{*}\bigr\| ^{2}={}& \bigl\| P_{C} \bigl(u_{n}-\lambda_{n}\nabla f(u_{n}) \bigr)-P_{C}\bigl(x^{*}-\lambda _{n}\nabla f \bigl(x^{*}\bigr)\bigr)\bigr\| ^{2} \\ \leq{}&\bigl\langle \bigl(u_{n}-\lambda_{n}\nabla f(u_{n})\bigr)-\bigl(x^{*}-\lambda_{n}\nabla f \bigl(x^{*}\bigr)\bigr), x_{n}-x^{*}\bigr\rangle \\ ={}&\frac{1}{2} \bigl[ \bigl\| \bigl(u_{n}-\lambda_{n} \nabla f(u_{n})\bigr)-\bigl(x^{*}-\lambda _{n} \nabla f\bigl(x^{*}\bigr)\bigr)\bigr\| ^{2} +\bigl\| x_{n}-x^{*} \bigr\| ^{2} \\ &{}-\bigl\| \bigl(u_{n}-\lambda_{n}\nabla f(u_{n}) \bigr)-\bigl(x^{*}-\lambda_{n}\nabla f\bigl(x^{*} \bigr)\bigr)-\bigl(x_{n}-x^{*}\bigr)\bigr\| ^{2} \bigr] \\ \leq{}& \frac{1}{2} \bigl[\bigl\| u_{n}-x^{*}\bigr\| ^{2}+\bigl\| x_{n}-x^{*}\bigr\| ^{2}-\bigl\| (u_{n}-x_{n})- \lambda_{n}\bigl(\nabla f(u_{n})-\nabla f \bigl(x^{*}\bigr)\bigr)\bigr\| ^{2} \bigr] \\ ={}&\frac{1}{2} \bigl[\bigl\| u_{n}-x^{*}\bigr\| ^{2}+\bigl\| x_{n}-x^{*}\bigr\| ^{2}-\|u_{n}-x_{n} \|^{2}+2\lambda _{n} \bigl\langle u_{n}-x_{n}, \nabla f(u_{n})-\nabla f\bigl(x^{*}\bigr)\bigr\rangle \\ &{}-\lambda_{n}^{2}\bigl\| \nabla f(u_{n})-\nabla f \bigl(x^{*}\bigr)\bigr\| ^{2} \bigr]. \end{aligned} $$

Therefore,

$$\begin{aligned} \bigl\| x_{n}-x^{*}\bigr\| ^{2} \leq& \bigl\| u_{n}-x^{*}\bigr\| ^{2}-\|u_{n}-x_{n} \|^{2}+2\lambda_{n} \bigl\langle u_{n}-x_{n}, \nabla f(u_{n})-\nabla f\bigl(x^{*}\bigr)\bigr\rangle \\ &{}-\lambda_{n}^{2}\bigl\| \nabla f(u_{n})-\nabla f \bigl(x^{*}\bigr)\bigr\| ^{2}. \end{aligned}$$
(2.14)

Also, from (2.1) and (2.8), we obtain

$$\begin{aligned} \bigl\| x_{n}-x^{*}\bigr\| ^{2} \leq&\bigl\| \bigl(u_{n}- \lambda_{n}\nabla f(u_{n})\bigr)-\bigl(x^{*}- \lambda _{n}\nabla f\bigl(x^{*}\bigr)\bigr)\bigr\| ^{2} \\ =&\bigl\| u_{n}-x^{*}\bigr\| ^{2}-2\lambda_{n} \bigl\langle u_{n}-x^{*},\nabla f(u_{n})-\nabla f\bigl(x^{*} \bigr)\bigr\rangle +\lambda_{n}^{2}\bigl\| \nabla f(u_{n})-\nabla f\bigl(x^{*}\bigr)\bigr\| ^{2} \\ \leq&\bigl\| u_{n}-x^{*}\bigr\| ^{2}-\frac{2 \lambda_{n}}{L}\bigl\| \nabla f(u_{n})-\nabla f\bigl(x^{*}\bigr)\bigr\| ^{2}+ \lambda_{n}^{2}\bigl\| \nabla f(u_{n})-\nabla f \bigl(x^{*}\bigr)\bigr\| ^{2} \\ =&\bigl\| u_{n}-x^{*}\bigr\| ^{2}-\lambda_{n} \biggl( \frac{2}{L}-\lambda_{n} \biggr)\bigl\| \nabla f(u_{n})- \nabla f\bigl(x^{*}\bigr)\bigr\| ^{2} \\ \leq& \bigl(\bigl\| x_{n-1}-x^{*}\bigr\| +t_{n-1}\bigl\| u-x^{*}\bigr\| \bigr)^{2}-\lambda_{n} \biggl(\frac {2}{L}- \lambda_{n} \biggr)\bigl\| \nabla f(u_{n})-\nabla f \bigl(x^{*}\bigr)\bigr\| ^{2} \\ =&\bigl\| x_{n-1}-x^{*}\bigr\| ^{2}+t_{n-1}\bigl(2 \bigl\| x_{n-1}-x^{*}\bigr\| \bigl\| u-x^{*}\bigr\| +t_{n-1}\bigl\| u-x^{*}\bigr\| ^{2}\bigr) \\ &{}-\lambda_{n} \biggl(\frac{2}{L}-\lambda_{n} \biggr)\bigl\| \nabla f(u_{n})-\nabla f\bigl(x^{*}\bigr) \bigr\| ^{2}. \end{aligned}$$

This implies that for some \(M^{*}>0\), we have

$$\begin{aligned} a \biggl(\frac{2}{L}-b \biggr)\bigl\| \nabla f(u_{n})- \nabla f\bigl(x^{*}\bigr)\bigr\| ^{2} \leq &\lambda_{n} \biggl(\frac{2}{L}-\lambda_{n} \biggr)\bigl\| \nabla f(u_{n})-\nabla f\bigl(x^{*}\bigr)\bigr\| ^{2} \\ \leq& \bigl\| x_{n-1}-x^{*}\bigr\| ^{2}-\bigl\| x_{n}-x^{*} \bigr\| ^{2}+t_{n-1}M^{*} \\ =&\bigl(\bigl\| x_{n-1}-x^{*}\bigr\| -\bigl\| x_{n}-x^{*}\bigr\| \bigr) \bigl( \bigl\| x_{n-1}-x^{*}\bigr\| +\bigl\| x_{n}-x^{*}\bigr\| \bigr) \\ &{}+t_{n-1}M^{*}. \end{aligned}$$
(2.15)

Using condition (i), condition (iv), and (2.12) in (2.15), we obtain

$$\begin{aligned} \underset{n\rightarrow\infty}{\lim}\bigl\| \nabla f(u_{n})- \nabla f\bigl(x^{*}\bigr)\bigr\| =0. \end{aligned}$$
(2.16)

Using (2.8) in (2.14), we have

$$\begin{aligned} \|u_{n}-x_{n}\|^{2} \leq& \bigl\| u_{n}-x^{*}\bigr\| ^{2}-\bigl\| x_{n}-x^{*} \bigr\| ^{2}+2\lambda_{n} \bigl\langle u_{n}-x_{n}, \nabla f(u_{n})-\nabla f\bigl(x^{*}\bigr)\bigr\rangle \\ &-\lambda_{n}^{2}\bigl\| \nabla f(u_{n})-\nabla f \bigl(x^{*}\bigr)\bigr\| ^{2} \\ \leq& \bigl(\bigl\| x_{n-1}-x^{*}\bigr\| + t_{n-1}\bigl\| u-x^{*}\bigr\| \bigr)^{2}-\bigl\| x_{n}-x^{*} \bigr\| ^{2} \\ &{}+2\lambda_{n} \bigl\langle u_{n}-x_{n}, \nabla f(u_{n})-\nabla f\bigl(x^{*}\bigr)\bigr\rangle -\lambda_{n}^{2}\bigl\| \nabla f(u_{n})-\nabla f \bigl(x^{*}\bigr)\bigr\| ^{2} \\ =&\bigl\| x_{n-1}-x^{*}\bigr\| ^{2}-\bigl\| x_{n}-x^{*} \bigr\| ^{2}+2t_{n-1}\bigl\| u-x^{*}\bigr\| \bigl\| x_{n-1}-x^{*} \bigr\| +t^{2}_{n-1}\bigl\| u-x^{*}\bigr\| ^{2} \\ &{}+2\lambda_{n} \bigl\langle u_{n}-x_{n}, \nabla f(u_{n})-\nabla f\bigl(x^{*}\bigr)\bigr\rangle - \lambda_{n}^{2}\bigl\| \nabla f(u_{n})-\nabla f \bigl(x^{*}\bigr)\bigr\| ^{2} \\ \leq&\bigl\| x_{n-1}-x^{*}\bigr\| ^{2}-\bigl\| x_{n}-x^{*}\bigr\| ^{2}+2t_{n-1}\bigl\| u-x^{*}\bigr\| \bigl\| x_{n-1}-x^{*}\bigr\| +t^{2}_{n-1} \bigl\| u-x^{*}\bigr\| ^{2} \\ &{}+2\lambda_{n} \bigl\langle u_{n}-x_{n}, \nabla f(u_{n})-\nabla f\bigl(x^{*}\bigr)\bigr\rangle . \end{aligned}$$
(2.17)

By using (2.16) in (2.17), we see that

$$\underset{n\rightarrow\infty}{\lim}\bigl\| P_{C}\bigl(u_{n}- \lambda_{n}\nabla f(u_{n})\bigr)-u_{n}\bigr\| = \underset{n\rightarrow\infty}{\lim}\|x_{n}-u_{n}\|=0. $$

Suppose that \(p \in w_{w}(u_{n})\) and \(\{u_{n_{j}}\}\) is a subsequence of \(\{u_{n}\}\) such that \(u_{n_{j}}\rightharpoonup p\). Observe that since \(\lim_{n\rightarrow\infty}\|x_{n}-u_{n}\|=0\), we also have \(x_{n_{j}}\rightharpoonup p\). Using Lemma 2.2 and (2.13), we have \(p \in F(T)\).

We next prove that \(p \in\Gamma\). We may assume that \(\lambda _{n_{j}}\rightarrow\lambda\); then we have \(0<\lambda<\frac{2}{L}\). Set \(S:=P_{C}(I-\lambda\nabla f)\); then S is nonexpansive. Then we get

$$\begin{aligned} &\bigl\| P_{C}(I-\lambda\nabla f)u_{n_{j}}-u_{n_{j}}\bigr\| \\ &\quad\leq \bigl\| P_{C}(I-\lambda\nabla f)u_{n_{j}}-P_{C}(I- \lambda_{n_{j}} \nabla f)u_{n_{j}}\bigr\| +\bigl\| P_{C}(I- \lambda_{n_{j}} \nabla f)u_{n_{j}}-u_{n_{j}}\bigr\| \\ &\quad\leq\bigl\| (I-\lambda\nabla f)u_{n_{j}}-(I-\lambda_{n_{j}} \nabla f)u_{n_{j}}\bigr\| +\bigl\| P_{C}(I-\lambda_{n_{j}} \nabla f)u_{n_{j}}-u_{n_{j}}\bigr\| \\ &\quad=|\lambda_{n_{j}}-\lambda|\bigl\| \nabla f(u_{n_{j}})\bigr\| + \bigl\| P_{C}(I-\lambda_{n_{j}} \nabla f)u_{n_{j}}-u_{n_{j}} \bigr\| \rightarrow0. \end{aligned}$$

It then follows from Lemma 2.2 that \(p \in F(S)\). But \(F(S)=\Gamma \), therefore, we have \(p \in\Gamma\). Hence, \(p \in F(T)\cap\Gamma\).

Set \(y_{n}=(1-\alpha_{n})x_{n}+\alpha_{n}Tx_{n}\), \(n\geq1\); then from (2.1) we have

$$u_{n+1}=y_{n}-t_{n}(x_{n}-u). $$

It then follows that

$$\begin{aligned} u_{n+1} =&(1-t_{n})y_{n}-t_{n}(x_{n}-y_{n}-u) \\ =& (1-t_{n})y_{n}-t_{n}\alpha_{n}(x_{n}-Tx_{n})+t_{n}u. \end{aligned}$$
(2.18)

Also,

$$\begin{aligned} \bigl\| y_{n}-x^{*}\bigr\| ^{2} =&\bigl\| x_{n}-x^{*}- \alpha_{n}(x_{n}-Tx_{n})\bigr\| ^{2} \\ =&\bigl\| x_{n}-x^{*}\bigr\| ^{2}-2\alpha_{n}\bigl\langle x_{n}-Tx_{n},x_{n}-x^{*}\bigr\rangle +\alpha _{n}^{2}\| x_{n}-Tx_{n} \|^{2} \\ \leq& \bigl\| x_{n}-x^{*}\bigr\| ^{2}-\alpha_{n}\bigl[(1-k)- \alpha_{n}\bigr]\| x_{n}-Tx_{n}\|^{2} \\ \leq& \bigl\| x_{n}-x^{*}\bigr\| ^{2}. \end{aligned}$$
(2.19)

By (2.4) and applying Lemma 2.1 to (2.18), we have

$$\begin{aligned} \bigl\| x_{n+1}-x^{*}\bigr\| ^{2} \leq& \bigl\| u_{n+1}-x^{*}\bigr\| ^{2} =\bigl\| (1-t_{n}) \bigl(y_{n}-x^{*}\bigr)-t_{n}\alpha_{n}(x_{n}-Tx_{n})-t_{n} \bigl(x^{*}-u\bigr)\bigr\| ^{2} \\ \leq& (1-t_{n})^{2}\bigl\| y_{n}-x^{*} \bigr\| ^{2}-2t_{n}\bigl\langle \alpha _{n}(x_{n}-Tx_{n})+ \bigl(x^{*}-u\bigr),u_{n+1}-x^{*}\bigr\rangle \\ = & (1-t_{n})^{2}\bigl\| y_{n}-x^{*} \bigr\| ^{2}-2t_{n}\alpha_{n}\bigl\langle x_{n}-Tx_{n},u_{n+1}-x^{*}\bigr\rangle -2t_{n} \bigl\langle x^{*}-u,u_{n+1}-x^{*}\bigr\rangle \\ \leq& (1-t_{n})^{2}\bigl\| x_{n}-x^{*} \bigr\| ^{2}-2t_{n}\alpha_{n}\bigl\langle x_{n}-Tx_{n},u_{n+1}-x^{*}\bigr\rangle -2t_{n} \bigl\langle x^{*}-u,u_{n+1}-x^{*}\bigr\rangle \\ \leq& (1-t_{n})\bigl\| x_{n}-x^{*}\bigr\| ^{2}+t_{n} \bigl[-2\alpha_{n}\bigl\langle x_{n}-Tx_{n},u_{n+1}-x^{*} \bigr\rangle -2\bigl\langle x^{*}-u,u_{n+1}-x^{*}\bigr\rangle \bigr]. \end{aligned}$$
(2.20)

We observe that \(\limsup_{n\rightarrow\infty} \{-2\langle x^{*}-u,u_{n+1}-x^{*}\rangle \} \leq-2 \langle x^{*}-u,p-x^{*}\rangle\leq0\) (since \(x^{*}=P_{F(T)\cap\Gamma}u\)) and \(2\alpha_{n}\langle x_{n}-Tx_{n},u_{n+1}-x^{*}\rangle\rightarrow0\). Therefore by Lemma 2.3, \(\| x_{n}-x^{*}\|\rightarrow0\) and consequently \(\| u_{n}-x^{*}\| \rightarrow0\). That is, \(x_{n}\rightarrow x^{*}\), \(n\rightarrow\infty\).

Case 2.

Assume that \(\{\| x_{n}-x^{*}\|\}\) is not a monotonically decreasing sequence. Set \(\Gamma_{n}=\| x_{n}-x^{*}\|^{2}\) and let \(\tau:\mathbb {N}\rightarrow\mathbb{N}\) be the mapping defined for all \(n\geq n_{0}\) (for some \(n_{0}\) large enough) by

$$\tau(n):=\max\{k\in\mathbb{N}:k\leq n,\Gamma_{k}< \Gamma_{k+1}\}. $$

Clearly, τ is a non-decreasing sequence such that \(\tau (n)\rightarrow\infty\) as \(n \rightarrow\infty\) and

$$\Gamma_{\tau(n)+1}-\Gamma_{\tau(n)}\geq0, \quad\forall n\geq n_{0}. $$

Reasoning as in (2.10) and (2.11), it is easy to see that

$$\| x_{\tau(n)} - Tx_{\tau(n)} \|^{2} \leq\frac{t_{\tau(n)} M}{\alpha _{\tau(n)}[(1-k) -\alpha_{\tau(n)}]} \rightarrow0,\quad n\rightarrow\infty . $$

Thus,

$$\| x_{\tau(n)} - Tx_{\tau(n)} \| \rightarrow0,\quad n\rightarrow\infty. $$

By a similar argument as above in Case 1, we conclude immediately that

$$\underset{n\rightarrow\infty}{\lim}\bigl\| P_{C}\bigl(u_{\tau(n)}- \lambda_{\tau (n)}\nabla f(u_{\tau(n)})\bigr)-u_{\tau(n)}\bigr\| = \underset{n\rightarrow\infty }{\lim}\|x_{\tau(n)}-u_{\tau(n)}\|=0. $$

Since \(\{u_{\tau(n)}\}\) is bounded, there exists a subsequence of \(\{u_{\tau(n)}\}\), still denoted by \(\{u_{\tau(n)}\}\) which converges weakly to \(p\in C\). Observe that since \(\lim_{n\rightarrow\infty}\|x_{\tau(n)}-u_{\tau(n)}\|=0\), we also have \(x_{\tau(n)}\rightharpoonup p\). Using Lemma 2.2 and the fact that \(\| x_{\tau(n)} - Tx_{\tau(n)} \| \rightarrow0\), \(n\rightarrow\infty\), we have \(p \in F(T)\). Similarly, we can show that \(p \in\Gamma\). Therefore, \(p \in F(T)\cap\Gamma\). At the same time, we note from (2.20) that, for all \(n\geq n_{0}\) (noting that \(0\leq\Gamma _{\tau(n)+1}-\Gamma_{\tau(n)}\), \(\forall n\geq n_{0}\)),

$$\begin{aligned}[b] 0\leq{}& \bigl\| x_{\tau(n)+1}-x^{*}\bigr\| ^{2}-\bigl\| x_{\tau(n)}-x^{*}\bigr\| ^{2} \\ \leq{}& t_{\tau(n)}\bigl[-2\bigl\langle \alpha_{\tau(n)}(x_{\tau(n)}-Tx_{\tau (n)}),u_{\tau(n)+1}-x^{*} \bigr\rangle -2\bigl\langle x^{*}-u,u_{\tau (n)+1}-x^{*}\bigr\rangle - \bigl\| x_{\tau(n)}-x^{*}\bigr\| ^{2}\bigr], \\ &{}\forall n\geq n_{0}, \end{aligned} $$

which implies

$$\begin{aligned} &\bigl\| x_{\tau(n)}-x^{*}\bigr\| ^{2} \leq-2\bigl\langle \alpha_{\tau(n)}(x_{\tau (n)}-Tx_{\tau(n)}),u_{\tau(n)+1}-x^{*} \bigr\rangle -2\bigl\langle x^{*}-u,u_{\tau (n)+1}-x^{*}\bigr\rangle , \\ &\quad\forall n\geq n_{0}. \end{aligned}$$
(2.21)

Since \(\{u_{\tau(n)+1}\}\) converges weakly to p as \(\tau (n)\rightarrow\infty\) and \(\| x_{\tau(n)} - Tx_{\tau(n)} \|\rightarrow 0\) as \(\tau(n)\rightarrow\infty\), we deduce from (2.21) (noting that \(x^{*}=P_{F(T)\cap\Gamma}u\)) that

$$\underset{n\rightarrow\infty}{\limsup}\bigl\| x_{\tau(n)}-x^{*} \bigr\| ^{2}\leq -2\bigl\langle x^{*}-u,p-x^{*}\bigr\rangle \leq0, $$

which implies that

$$\underset{n\rightarrow\infty}{\lim}\bigl\| x_{\tau(n)}-x^{*}\bigr\| =0. $$

Therefore,

$$\lim_{n\rightarrow\infty}\Gamma_{\tau(n)}=\lim_{n\rightarrow\infty} \Gamma_{\tau(n)+1}= 0. $$

Furthermore, if \(n\geq n_{0}\) and \(n\neq\tau(n)\), it follows from the definition of \(\tau(n)\) that \(\tau(n)< n\) and \(\Gamma_{\tau(n)}\leq\Gamma_{\tau(n)+1}\), while \(\Gamma_{j}\geq\Gamma_{j+1}\) for \(\tau(n)+1\leq j\leq n\). Therefore we obtain, for all \(n\geq n_{0}\),

$$0\leq\Gamma_{n}\leq\max\{\Gamma_{\tau(n)},\Gamma_{\tau(n)+1} \}=\Gamma _{\tau(n)+1}. $$

Hence \(\lim\Gamma_{n}=0\), that is, \(\{x_{n}\}\) converges strongly to \(x^{*}\). Furthermore, \(\{u_{n}\}\) converges strongly to \(x^{*}\). This completes the proof. □
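As a toy numerical illustration of scheme (2.1) (not part of the proof), the following minimal Python sketch runs the iteration under the hypothetical choices \(C=\mathbb{R}^{m}\) (so \(P_{C}=I\)), \(f(x)=\frac{1}{2}\|x\|^{2}\), and the \(\frac{1}{3}\)-strictly pseudocontractive map \(Tx=-2x\), for which \(F(T)\cap\Gamma=\{0\}\) and hence \(x^{*}=0\).

```python
# A minimal sketch of scheme (2.1) under illustrative assumptions:
# C = R^m (P_C = identity), f(x) = ||x||^2 / 2 (grad f = x, L = 1),
# and T x = -2x, which is 1/3-strictly pseudocontractive with F(T) = {0}.
import numpy as np

def scheme_2_1(grad_f, proj, T, u, u1, t, alpha, lam, n_iter=2000):
    u_n = u1
    for n in range(1, n_iter + 1):
        x_n = proj(u_n - lam(n) * grad_f(u_n))                       # gradient-projection step
        u_n = (1 - alpha(n)) * x_n + alpha(n) * T(x_n) - t(n) * (x_n - u)
    return x_n

m = 3
x_star = scheme_2_1(grad_f=lambda x: x, proj=lambda x: x,
                    T=lambda x: -2.0 * x,
                    u=np.ones(m), u1=np.full(m, 5.0),
                    t=lambda n: 1.0 / (n + 1),   # t_n -> 0, sum t_n = infinity
                    alpha=lambda n: 0.3,         # in (0, (1-k)(1-t_n)) for n >= 1
                    lam=lambda n: 0.5)           # in (0, 2/L) = (0, 2)
print(x_star)  # approaches x* = P_{F(T) cap Gamma} u = 0
```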

Corollary 2.5

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Suppose that the minimization problem (1.7) is consistent and let Γ denote its solution set. Assume that the gradient \(\nabla f\) is Lipschitzian with constant \(L>0\). Let \(\{ t_{n}\}\) be a sequence in \((0,1)\) and \(\{\lambda_{n}\}\) a sequence in \((0,\frac{2}{L} )\) satisfying the following conditions:

  1. (i)

    \(\lim_{n\rightarrow\infty}t_{n}=0\);

  2. (ii)

    \(\sum_{n=1}^{\infty}t_{n}=\infty\);

  3. (iii)

    \(0 < \liminf\lambda_{n}\leq\limsup\lambda_{n} <\frac{2}{L}\).

Then the sequence \(\{x_{n}\}\) generated for fixed \(u\in C\) by \(x_{1}\in C\),

$$\begin{aligned} x_{n+1}=t_{n}u +(1-t_{n})P_{C} \bigl(x_{n}-\lambda_{n}\nabla f(x_{n})\bigr),\quad n \geq1, \end{aligned}$$
(2.22)

converges strongly to \(x^{*}\in\Gamma\), where \(x^{*}:=P_{\Gamma}u\).

Proof

Taking \(T=I\) in Theorem 2.4, we obtain the desired conclusion. □

Remark 2.6

Corollary 2.5 improves on Corollary 5.3 of Xu [16] in the sense that the conditions \(\sum_{n=1}^{\infty}|t_{n+1}-t_{n}|<\infty\) and \(\sum_{n=1}^{\infty }|\gamma_{n+1}-\gamma_{n}|<\infty\) assumed in Xu [16] are dispensed with in our Corollary 2.5.

Corollary 2.7

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\rightarrow C\) be a k-strictly pseudocontractive mapping such that \(F(T)\neq\emptyset\). Let \(\{t_{n}\}\) be a sequence in \((0,1)\) and \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset (0,1)\) satisfying the following conditions:

  1. (i)

    \(\lim_{n\rightarrow\infty}t_{n}=0\);

  2. (ii)

    \(\sum_{n=0}^{\infty}t_{n}=\infty\);

  3. (iii)

    \(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\).

Then the sequence \(\{x_{n}\}\) generated for fixed \(u\in C\) by \(x_{1}\in C\),

$$x_{n+1}=t_{n} u+(1-t_{n}-\alpha_{n})x_{n}+ \alpha_{n}Tx_{n}, \quad n\geq1, $$

strongly converges to a fixed point \(x^{*}\) of T, where \(x^{*}:=P_{F(T)}u\).

Proof

Taking \(f\equiv0\) in Theorem 2.4, we obtain the desired conclusion. □
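A minimal numerical sketch of the iteration in Corollary 2.7, with the hypothetical map \(Tx=-2x\) on \(\mathbb{R}\) (which is \(\frac{1}{3}\)-strictly pseudocontractive with \(F(T)=\{0\}\)):

```python
# A minimal sketch of the iteration of Corollary 2.7 for the hypothetical map
# T x = -2x on R, which is 1/3-strictly pseudocontractive with F(T) = {0}.
def corollary_2_7(T, u, x1, t, alpha, n_iter=5000):
    x = x1
    for n in range(1, n_iter + 1):
        x = t(n) * u + (1 - t(n) - alpha(n)) * x + alpha(n) * T(x)
    return x

x_star = corollary_2_7(T=lambda x: -2.0 * x, u=1.0, x1=4.0,
                       t=lambda n: 1.0 / (n + 1),   # t_n -> 0, sum t_n = infinity
                       alpha=lambda n: 0.2)         # in (0, (1-k)(1-t_n))
print(x_star)  # approaches P_{F(T)} u = 0
```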

Remark 2.8

Corollary 2.7 complements Theorem 3.1 of Li and Yao [20].

We next apply the result in Theorem 2.4 to approximate a common fixed point of a finite family of strictly pseudocontractive mappings which is also a solution to the minimization problem (1.7) in a real Hilbert space.

Theorem 2.9

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Suppose that the minimization problem (1.7) is consistent and let Γ denote its solution set. Assume that the gradient \(\nabla f\) is Lipschitzian with constant \(L>0\). For each \(i=1,2,\ldots,N\), let \(T_{i}:C\rightarrow C\) be a \(k_{i}\)-strictly pseudocontractive mapping such that \(\bigcap_{i=1}^{N} F(T_{i})\cap\Gamma\neq \emptyset\). Assume that \(\{\delta_{i}\}_{i=1}^{N}\) is a finite sequence of positive numbers such that \(\sum_{i=1}^{N}\delta_{i}=1\). Let \(\{ t_{n}\}\) be a sequence in \((0,1)\), \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset(0,1)\), where \(k:=\max\{k_{i}:i=1,2,\ldots,N\}\), and \(\{ \lambda_{n}\}\) a sequence in \((0,\frac{2}{L} )\) satisfying the following conditions:

  1. (i)

    \(\lim_{n\rightarrow\infty}t_{n}=0\);

  2. (ii)

    \(\sum_{n=0}^{\infty}t_{n}=\infty\);

  3. (iii)

    \(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\);

  4. (iv)

    \(0 < \liminf_{n\rightarrow\infty} \lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n} <\frac{2}{L}\).

Then the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) generated for fixed \(u\in C\) by \(u_{1}\in C\),

$$\begin{aligned} \left \{ \begin{array}{@{}l} x_{n}=P_{C}(u_{n}-\lambda_{n}\nabla f(u_{n})), \\ u_{n+1}=(1-\alpha_{n})x_{n}+ \alpha_{n}\sum_{i=1}^{N}\delta _{i}T_{i}x_{n}-t_{n}(x_{n}-u) ,\quad n \geq1, \end{array} \right . \end{aligned}$$
(2.23)

converge strongly to \(x^{*}\in\bigcap_{i=1}^{N} F(T_{i})\cap\Gamma\), where \(x^{*}:=P_{\bigcap_{i=1}^{N} F(T_{i})\cap\Gamma}u\).

Proof

Define \(A:=\sum_{i=1}^{N}\delta_{i}T_{i}\). Then, by the results in [2, 10], A is a k-strictly pseudocontractive mapping and \(F(A)=\bigcap_{i=1}^{N} F(T_{i})\). We can rewrite the scheme (2.23) as

$$\begin{aligned} \left \{ \begin{array}{@{}l} x_{n}=P_{C}(u_{n}-\lambda_{n}\nabla f(u_{n})), \\ u_{n+1}=(1-\alpha_{n})x_{n}+ \alpha_{n}Ax_{n}-t_{n}(x_{n}-u) ,\quad n \geq1. \end{array} \right . \end{aligned}$$

Now, Theorem 2.4 guarantees that \(\{x_{n}\}\) and \(\{u_{n}\}\) converge strongly to a common fixed point of the family \(\{T_{i}\}_{i=1}^{N}\), which is also a solution to minimization problem (1.7). □
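For instance, under the illustrative assumption of two strictly pseudocontractive maps on \(\mathbb{R}\), the convex combination A of the proof can be formed as below and substituted for T in the scheme (2.1) sketch given after Theorem 2.4.

```python
# A minimal sketch of forming A = sum_i delta_i T_i with two hypothetical
# strictly pseudocontractive maps on R; here k = max k_i = 1/3 and F(A) = {0}.
T1 = lambda x: -2.0 * x           # 1/3-strictly pseudocontractive
T2 = lambda x: -0.5 * x           # nonexpansive, hence 0-strictly pseudocontractive
delta = (0.4, 0.6)                # positive weights summing to 1
A = lambda x: delta[0] * T1(x) + delta[1] * T2(x)
```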

3 Applications

In this section, we give an application of Theorem 2.4 to the split feasibility problem and the convexly constrained linear inverse problem.

3.1 Split feasibility problem

The split feasibility problem (SFP, for short) was introduced by Censor and Elfving [23]. The SFP has gained much attention from several authors due to its applications to image reconstruction, signal processing, and intensity-modulated radiation therapy (see [24-26]).

The SFP can be mathematically formulated as the problem of finding a point x with the property

$$ x\in C \quad\mbox{and}\quad Bx\in Q, $$
(3.1)

where C and Q are nonempty, closed, and convex subsets of Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and \(B : H_{1}\rightarrow H_{2}\) is a bounded linear operator.

Clearly, \(x^{*}\) is a solution to the split feasibility problem (3.1) if and only if \(x^{*}\in C\) and \(Bx^{*}-P_{Q} Bx^{*} = 0\). The proximity function f is defined by

$$ f(x) =\frac{1}{2}\|Bx-P_{Q}Bx\|^{2} $$
(3.2)

and we consider the constrained convex minimization problem

$$ \min_{x\in C}f(x)=\min_{x\in C} \frac{1}{2}\|Bx-P_{Q}Bx\|^{2}. $$
(3.3)

Then \(x^{*}\) solves the split feasibility problem (3.1) if and only if \(x^{*}\) solves the minimization problem (3.3). In [24], the CQ algorithm was introduced to solve the SFP,

$$ x_{n+1} = P_{C} \bigl(I-\lambda B^{*}(I-P_{Q})B\bigr)x_{n},\quad n\geq0, $$
(3.4)

where \(0<\lambda<\frac{2}{\|B\|^{2}}\) and \(B^{*}\) is the adjoint of B. It was proved that the sequence generated by (3.4) converges weakly to a solution of the SFP.
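A minimal Python sketch of the CQ algorithm (3.4) on a small hypothetical SFP instance, with C the closed unit ball in \(\mathbb{R}^{3}\) and Q the nonnegative orthant in \(\mathbb{R}^{2}\) (both sets and the matrix B are illustrative assumptions):

```python
# A minimal sketch of the CQ algorithm (3.4) on a hypothetical small SFP:
# C = closed unit ball in R^3, Q = nonnegative orthant in R^2.
import numpy as np

def proj_ball(x):                      # P_C
    return x / max(1.0, np.linalg.norm(x))

def proj_orthant(y):                   # P_Q
    return np.maximum(y, 0.0)

B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
lam = 1.0 / np.linalg.norm(B, 2) ** 2  # lambda in (0, 2 / ||B||^2)

x = np.array([1.0, -1.0, 0.5])
for _ in range(500):
    Bx = B @ x
    x = proj_ball(x - lam * (B.T @ (Bx - proj_orthant(Bx))))
print(x, B @ x)  # a point x in C with B x (approximately) in Q
```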

We propose the following algorithm to obtain a strongly convergent iterative sequence solving the SFP and the fixed point problem for a k-strictly pseudocontractive mapping T. For any given \(u \in C\), let the sequences \(\{x_{n}\}\) and \(\{u_{n}\}\) be generated iteratively by \(u_{1} \in C\),

$$\begin{aligned} \left \{ \begin{array}{@{}l} x_{n}=P_{C} \bigl(\bigl(I-\lambda_{n} B^{*}(I-P_{Q})B\bigr)u_{n}\bigr), \\ u_{n+1}=(1-\alpha_{n})x_{n}+ \alpha_{n}Tx_{n}-t_{n}(x_{n}-u) ,\quad n \geq1, \end{array} \right . \end{aligned}$$
(3.5)

where \(\{t_{n}\}\) is a sequence in \((0,1)\), \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset(0,1)\), and \(\{\lambda_{n}\}\) a sequence in \((0,\frac{2}{\|B\|^{2}} )\) satisfying the following conditions:

  1. (i)

    \(\lim_{n\rightarrow\infty}t_{n}=0\);

  2. (ii)

    \(\sum_{n=0}^{\infty}t_{n}=\infty\);

  3. (iii)

    \(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\);

  4. (iv)

    \(0 < \liminf_{n\rightarrow\infty} \lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n} <\frac{2}{\|B\|^{2}}\).

By applying Theorem 2.4, we obtain the following convergence result for solving the split feasibility problem (3.1) and the fixed point problem for a k-strictly pseudocontractive mapping.

Theorem 3.1

Let C and Q be nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and let \(B : H_{1}\rightarrow H_{2}\) be a bounded linear operator. Let \(f(x) =\frac{1}{2}\|Bx-P_{Q}Bx\|^{2}\) and let \(\Gamma=\arg \min_{x\in C}f(x)\). Let T be a k-strictly pseudocontractive mapping of C into itself such that \(F(T)\cap\Gamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\) and \(\{u_{n}\}\) be generated by (3.5), where \(\{t_{n}\}\), \(\{\alpha_{n}\}\), and \(\{\lambda_{n}\}\) satisfy the conditions (i)-(iv) above. Then the sequences \(\{ x_{n}\}\) and \(\{u_{n}\}\) converge strongly to a solution \(x^{*}\) of the split feasibility problem (3.1) which is also a fixed point of the k-strictly pseudocontractive mapping T, where \(x^{*}:=P_{F(T)\cap \Gamma}u\).

Proof

Using the definition of the proximity function f, we have

$$ \nabla f(x)=B^{*}(I-P_{Q})Bx, $$
(3.6)

and \(\nabla f\) is Lipschitz continuous, that is,

$$ \bigl\| \nabla f(x)-\nabla f(y)\bigr\| \leqslant L\|x-y\|, $$
(3.7)

where \(L=\|B\|^{2}\).

Hence the iterative scheme (3.5) is equivalent to

$$\begin{aligned} \left \{ \begin{array}{@{}l} x_{n}=P_{C}(u_{n}-\lambda_{n} \nabla f(u_{n})), \\ u_{n+1}=(1-\alpha_{n})x_{n}+ \alpha_{n}Tx_{n}-t_{n}(x_{n}-u) ,\quad n \geq1, \end{array} \right . \end{aligned}$$
(3.8)

where \(\{t_{n}\}\), \(\{\alpha_{n}\}\), and \(\{\lambda_{n}\}\) satisfy conditions (i)-(iv) above with \(L=\|B\|^{2}\).

The desired conclusion follows from Theorem 2.4. □

3.2 Convexly constrained linear inverse problem

Consider the convexly constrained linear inverse problem (cf. [27])

$$\begin{aligned} \left \{ \begin{array}{@{}l} Ax=b, \\ x \in C, \end{array} \right . \end{aligned}$$
(3.9)

where \(H_{1}\) and \(H_{2}\) are real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) is a bounded linear mapping, and \(b\in H_{2}\). To solve (3.9), we consider the following convexly constrained minimization problem:

$$\begin{aligned} \min_{x \in C} f(x):= \min_{x \in C} \frac{1}{2}\|Ax-b\|^{2}. \end{aligned}$$
(3.10)

In general, every solution to (3.9) is a solution to (3.10). However, a solution to (3.10) need not satisfy (3.9). Moreover, if the solution set of (3.9) is nonempty, then it follows from Lemma 4.2 of [28] that

$$C \cap(\nabla f)^{-1}(0) \neq\emptyset. $$

It is well known that the projected Landweber method (see [29]) given by

$$\begin{aligned} \left \{ \begin{array}{@{}l} x_{1} \in C, \\ x_{n+1}=P_{C}[x_{n}-\lambda A^{*}(Ax_{n}-b)],\quad n\geq1, \end{array} \right . \end{aligned}$$

where \(A^{*}\) is the adjoint of A and \(0<\lambda<2\alpha\) with \(\alpha =\frac{1}{\|A\|^{2}}\), converges weakly to a solution of (3.9). In what follows, we present an algorithm with strong convergence for solving (3.9) and the fixed point problem for a strictly pseudocontractive mapping.
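A minimal Python sketch of the projected Landweber iteration on a toy consistent instance of (3.9), with C the nonnegative orthant (the matrix and right-hand side are illustrative assumptions):

```python
# A minimal sketch of the projected Landweber iteration for a toy consistent
# instance of (3.9): A x = b with C the nonnegative orthant (illustrative data).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = A @ np.array([1.0, 2.0])               # consistent: the solution (1, 2) lies in C
lam = 1.0 / np.linalg.norm(A, 2) ** 2      # 0 < lambda < 2 alpha, alpha = 1/||A||^2

x = np.zeros(2)
for _ in range(2000):
    x = np.maximum(x - lam * (A.T @ (A @ x - b)), 0.0)   # P_C = componentwise max(., 0)
print(x)  # approaches the solution (1, 2)
```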

Corollary 3.2

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Suppose that the convexly constrained linear inverse problem (3.9) is consistent and let Γ denote its solution set. Let T be a k-strictly pseudocontractive mapping of C into itself such that \(F(T)\cap\Gamma\neq\emptyset\). Let \(\{t_{n}\}\) be a sequence in \((0,1)\), \(\{\alpha_{n}\}\) a sequence in \((0,(1-k)(1-t_{n}))\subset(0,1)\), and \(\{\lambda_{n}\}\) a sequence in \((0,\frac{2}{\|A\|^{2}} )\) satisfying the following conditions:

  1. (i)

    \(\lim_{n\rightarrow\infty}t_{n}=0\);

  2. (ii)

    \(\sum_{n=0}^{\infty}t_{n}=\infty\);

  3. (iii)

    \(0<\liminf_{n\rightarrow\infty} \alpha_{n} \leq\limsup_{n\rightarrow\infty} \alpha_{n}<1-k\);

  4. (iv)

    \(0 < \liminf_{n\rightarrow\infty} \lambda_{n}\leq\limsup_{n\rightarrow\infty}\lambda_{n} <\frac{2}{\|A\|^{2}}\).

Then the sequences \(\{u_{n}\}\) and \(\{x_{n}\}\) generated for fixed \(u\in C\) by \(u_{1}\in C\),

$$\begin{aligned} \left \{ \begin{array}{@{}l} x_{n}=P_{C}(u_{n}-\lambda_{n}A^{*}(Au_{n}-b)), \\ u_{n+1}=(1-\alpha_{n})x_{n}+ \alpha_{n}Tx_{n}-t_{n}(x_{n}-u) ,\quad n \geq1, \end{array} \right . \end{aligned}$$

converge strongly to \(x^{*}\in F(T)\cap\Gamma\), where \(x^{*}:=P_{F(T)\cap\Gamma}u\).

Proof

Let f be defined by (3.10). Then from [24] we have \(\nabla f (x) = A^{*}(Ax-b)\), \(x \in H_{1}\), which is Lipschitzian with constant \(L=\|A\|^{2}\). Thus, by Theorem 2.4 we obtain the required assertion. □

Remark 3.3

The convergence rate of the projection gradient method is at best linear. The linear convergence is attained with Polyak’s stepsize and for an objective function with a sharp set of minima. The linear convergence rate is not the best known rate. There are methods with a superlinear (quadratic) rate such as the Newton method and the interior point method, which uses Newton’s directions. These methods, however, require f to be twice differentiable among other conditions. The potential drawback of the projection gradient method is that it can be very slow when the gradient directions are almost perpendicular to the directions pointing toward the optimal set Γ, corresponding to

$$\bigl\langle \nabla f(x_{n}), x_{n} - x^{*}\bigr\rangle \approx0. $$

In this case, the method may exhibit zig-zag behavior, depending on the initial iterate \(x_{1}\). To overcome a possibly slow convergence of the gradient-projection method, the gradient is often scaled. In this case, the method takes the form

$$x_{n+1}=P_{C}\bigl(x_{n}-\lambda_{n} \Lambda_{n} \nabla f(x_{n})\bigr), $$

where \(\Lambda_{n}\) is a diagonal matrix with positive entries on its diagonal, i.e., \([\Lambda_{n}]_{ii} > 0\) for all \(i = 1, \ldots,m\) and all n. For more details, see [30, pp. 91-105].
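The following minimal Python sketch illustrates a scaled gradient-projection step of this form; the ill-conditioned quadratic objective and the diagonal scaling \(\Lambda_{n}\) are illustrative assumptions.

```python
# A minimal sketch of the scaled gradient-projection step; the quadratic
# objective and the diagonal scaling matrix are illustrative assumptions.
import numpy as np

def proj_ball(x):
    return x / max(1.0, np.linalg.norm(x))

H = np.diag([100.0, 1.0])              # f(x) = x^T H x / 2 is badly conditioned
grad_f = lambda x: H @ x
Lam = np.diag(1.0 / np.diag(H))        # diagonal scaling with [Lambda]_{ii} > 0

x = np.array([0.9, 0.9])
for n in range(100):
    x = proj_ball(x - 0.9 * (Lam @ grad_f(x)))   # x_{n+1} = P_C(x_n - lam_n Lam_n grad f(x_n))
print(x)  # the scaled steps home in on the minimizer 0 far faster than unscaled ones
```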

References

  1. Browder, FE, Petryshyn, WV: Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 20, 197-228 (1967)

  2. Acedo, GL, Xu, H-K: Iterative methods for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 67, 2258-2271 (2007)

  3. Ceng, LC, Al-Homidan, S, Ansari, QH, Yao, JC: An iterative scheme for equilibrium problems and fixed point problems of strict pseudo-contraction mappings. J. Comput. Appl. Math. 223, 967-974 (2009)

  4. Chidume, CE, Abbas, M, Ali, B: Convergence of the Mann iteration algorithm for a class of pseudo-contractive mappings. Appl. Math. Comput. 194, 1-6 (2007)

  5. Cholamjiak, P, Suantai, S: Strong convergence for a countable family of strict pseudocontractions in q-uniformly smooth Banach spaces. Comput. Math. Appl. 62, 787-796 (2011)

  6. Hao, Y, Cho, SY: Some results on strictly pseudocontractive nonself mapping and equilibrium problems in Hilbert spaces. Abstr. Appl. Anal. 2012, Article ID 543040 (2012)

  7. Jaiboon, C, Kumam, P: Strong convergence theorems for solving equilibrium problems and fixed point problems of strictly pseudocontraction mappings by two hybrid projection methods. J. Comput. Appl. Math. 234(3), 722-732 (2010)

  8. Jung, JS: Iterative methods for mixed equilibrium problems and pseudocontractive mappings. Fixed Point Theory Appl. 2012, Article ID 184 (2012)

  9. Liu, Y: A general iterative method for equilibrium problems and strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 71, 4852-4861 (2009)

  10. Marino, G, Xu, H-K: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329(1), 336-346 (2007)

  11. Qin, L-J, Wang, L: An iteration method for solving equilibrium problems, common fixed point problems of pseudocontractive mappings of Browder-Petryshyn type in Hilbert spaces. Int. Math. Forum 6(2), 63-74 (2011)

  12. Shehu, Y: Iterative methods for family of strictly pseudocontractive mappings and system of generalized mixed equilibrium problems and variational inequality problems. Fixed Point Theory Appl. 2011, Article ID 852789 (2011)

  13. Zhou, HY: Convergence theorems of fixed points for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 69(2), 456-462 (2008)

  14. Bertsekas, DP, Gafni, EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 17, 139-159 (1982)

  15. Han, D, Lo, HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 159, 529-544 (2004)

  16. Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150(2), 360-378 (2011)

  17. Su, M, Xu, HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 1(1), 35-43 (2010)

  18. Ceng, L-C, Ansari, QH, Yao, J-C: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286-5302 (2011)

  19. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 1-17 (2002)

  20. Li, M, Yao, Y: Strong convergence of an iterative algorithm for λ-strictly pseudo-contractive mappings in Hilbert spaces. An. Ştiinţ. Univ. ‘Ovidius’ Constanţa 18(1), 219-228 (2010)

  21. Maingé, PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899-912 (2008)

  22. Baillon, JB, Haddad, G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26, 137-150 (1977)

  23. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)

  24. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems 20, 103-120 (2004)

  25. Lopez, G, Martin-Marquez, V, Wang, FH, Xu, HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Problems 28, 085004 (2012)

  26. Tian, M, Huang, L: Iterative methods for constrained convex minimization problem in Hilbert spaces. Fixed Point Theory Appl. (2013). doi:10.1186/1687-1812-2013-105

  27. Eicke, B: Iteration methods for convexly constrained ill-posed problems in Hilbert space. Numer. Funct. Anal. Optim. 13, 413-429 (1992)

  28. Wang, F, Xu, HK: Approximating curve and strong convergence of the CQ algorithm for the SFP. J. Inequal. Appl. 2010, Article ID 102085 (2010)

  29. Engl, HW, Hanke, M, Neubauer, A: Regularization of Inverse Problems. Kluwer Academic, Dordrecht (1996)

  30. Nedić, A: Lecture Notes Optimization I. Network Mathematics Graduate Programme. Hamilton Institute, Maynooth, Ireland (2008)


Acknowledgements

This work was supported by the NSF of China (No. 11401063) and Natural Science Foundation of Chongqing (cstc2014jcyjA00016).

Author information


Corresponding author

Correspondence to Yekini Shehu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Cai, G., Shehu, Y. An iterative algorithm for fixed point problem and convex minimization problem with applications. Fixed Point Theory Appl 2015, 7 (2015). https://doi.org/10.1186/s13663-014-0253-6
