Open Access

On the convergence of a generalized modified Krasnoselskii iterative process for generalized strictly pseudocontractive mappings in uniformly convex Banach spaces

Fixed Point Theory and Applications 2016, 2016:60

Received: 25 August 2015

Accepted: 25 April 2016

Published: 10 May 2016


This paper aims to study the strong convergence of generalized modified Krasnoselskii iterative process for finding the minimum norm solutions of certain nonlinear equations with generalized strictly pseudocontractive, demiclosed, coercive, bounded, and potential mappings in uniformly convex Banach spaces. An application to nonlinear pseudomonotone equations is provided. The results extend and improve recent work in this direction.


Keywords: generalized modified Krasnoselskii iterative process; generalized strictly pseudocontractive mappings; minimum norm solutions; uniformly convex Banach spaces


MSC: 47H05; 47H10

1 Introduction and preliminaries

Let H be a real Hilbert space with norm \(\|\cdot\|_{H}\) and inner product \((\cdot,\cdot)\). Let C be a nonempty closed and convex subset of H. Let T be a nonlinear mapping of H into itself. Let I denote the identity mapping on H. Denote by \(\mathfrak{F}(T)\) the set of fixed points of T.

Moreover, the symbols ⇀ and → stand for weak and strong convergence, respectively.

We say that T is generalized Lipschitzian iff there exists a nonnegative real-valued function \(r(x,y)\) satisfying \(\sup_{x,y\in H} \{r(x,y)\}=\lambda<\infty\) such that
$$ \|Tx-Ty\|_{H} \leq r(x,y) \|x-y\|_{H},\quad \forall x, y \in H. $$
Recently, this class of mappings has been studied by Saddeek and Ahmed [1], and Saddeek [2].

For \(r(x,y)=\lambda\in(0,1)\) (resp., \(r(x,y)=1\)) such mappings are said to be λ-contractive (resp., nonexpansive) mappings.

If \(r(x,y)=\lambda> 0\), then the class of generalized Lipschitzian mappings coincides with the class of λ-Lipschitzian mappings.

We say that T is generalized strictly pseudocontractive iff for each pair of points x, y in H there exist nonnegative real-valued functions \(r_{i}(x,y)\), \(i=1,2\), satisfying
$$\sup_{x,y\in H} \Biggl\{ \sum_{i=1}^{2}r_{i}(x, y)\Biggr\} =\lambda' < \infty $$
such that
$$ \Vert Tx-Ty\Vert _{H}^{2} \leq r_{1}(x,y) \Vert x-y\Vert _{H}^{2}+r_{2}(x,y) \bigl\Vert (I-T) (x)-(I-T) (y)\bigr\Vert _{H}^{2}. $$
By letting \(r_{1}(x,y)=1\) and \(r_{2}(x,y)=\lambda\in[0,1)\) (resp., \(r_{i}(x,y)=1\), \(i=1,2\)) in (1.2), we may derive the class of λ-strictly pseudocontractive (resp., pseudocontractive) mappings, which is due to Browder and Petryshyn [3].

The class of λ-strictly pseudocontractive mappings has been studied recently by various authors (see, for example, [4–9]).

It is worth noting that the class of generalized strictly pseudocontractive mappings includes generalized Lipschitzian mappings, λ-strictly pseudocontractive mappings, λ-Lipschitzian mappings, pseudocontractive mappings, and nonexpansive (or 0-strictly pseudocontractive) mappings.

These mappings appear in nonlinear analysis and its applications.

Definition 1.1

For any \(x, y, z \in H\) the mapping T is said to be
  1. (i)

    demiclosed at 0 (see, for example, [10]) if \(Tx=0\) whenever \(\{x_{n}\}\subset H\) with \(x_{n}\rightharpoonup x\) and \(Tx_{n}\rightarrow0\), as \(n \rightarrow \infty\);

  2. (ii)
    pseudomonotone (see, for example, [11]) if it is bounded and, whenever \(x_{n}\rightharpoonup x \in H\),
    $$ \limsup_{n\rightarrow \infty} ( Tx_{n},x_{n}-x) \leq0 \quad \Longrightarrow\quad \liminf_{n\rightarrow\infty} ( Tx_{n},x_{n}-y) \geq (Tx,x-y), \quad \forall y \in H; $$
  3. (iii)
    coercive (see, for example, [12]) if
    $$ (Tx,x) \geq \rho\bigl(\Vert x\Vert _{H}\bigr)\|x \|_{H},\qquad \lim_{\xi\rightarrow+\infty} \rho (\xi)=+\infty; $$
  4. (iv)
    potential (see, for example, [13]) if
    $$ \int^{1}_{0}\bigl(\bigl(T\bigl(t(x+y)\bigr),x+y\bigr)-\bigl(T(tx),x\bigr)\bigr) \,dt= \int^{1}_{0}\bigl( T(x+ty),y\bigr) \,dt; $$
  5. (v)
    hemicontinuous (see, for example, [12]) if
    $$ \lim_{t\rightarrow 0} \bigl( T(x+ty),z\bigr)=(Tx,z); $$
  6. (vi)
    demicontinuous (see, for example, [12]) if
    $$ \lim_{\|x_{n}-x\|_{H}\rightarrow 0} ( Tx_{n},y)=(Tx,y); $$
  7. (vii)
    uniformly monotone (see, for example, [11]) if there exist \(p\geq2\), \(\alpha>0\) such that
    $$ ( Tx-Ty,x-y) \geq\alpha\|x-y\|_{H}^{p}; $$
  8. (viii)
    bounded Lipschitz continuous (see, for example, [13]) if there exist \(p\geq2\), \(M>0\) such that
    $$ \|Tx-Ty\|_{H} \leq M \bigl(\Vert x\Vert _{H}+\|y \|_{H}\bigr)^{p-2} \|x-y\|_{H}. $$

It should be noted that every demicontinuous mapping is hemicontinuous, every uniformly monotone mapping is monotone (i.e., \((Tx-Ty,x-y)\geq 0\), \(\forall x, y\in H\)), and every monotone hemicontinuous mapping is pseudomonotone.

If T is uniformly monotone (resp. bounded Lipschitz continuous) with \(p=2\), then T is called strongly monotone (resp. M-Lipschitzian).

For \(x_{0}\in C\) the Krasnoselskii iterative process (see, for example, [14]) starting at \(x_{0}\) is defined by
$$ x_{n+1}=(1-\tau)x_{n}+ \tau T x_{n}, \quad n\geq0, $$
where \(\tau\in(0,1)\).
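To see why the averaging in (1.3) matters, consider a small finite-dimensional illustration (hypothetical, not from the paper): for the nonexpansive map \(Tx=-x\) on \(\mathbb{R}^{2}\), the plain Picard iterates \(x_{n+1}=Tx_{n}\) oscillate forever, while the Krasnoselskii iterates contract to the unique fixed point 0. A minimal Python sketch:

```python
# Hypothetical illustration of the Krasnoselskii iteration (1.3):
# x_{n+1} = (1 - tau) x_n + tau T x_n, here in H = R^2 for the
# nonexpansive map T(x) = -x, whose only fixed point is the origin.

def krasnoselskii(T, x0, tau=0.5, steps=100):
    x = x0
    for _ in range(steps):
        Tx = T(x)
        x = [(1 - tau) * xi + tau * ti for xi, ti in zip(x, Tx)]
    return x

T = lambda x: [-xi for xi in x]            # nonexpansive, Fix(T) = {0}
x = krasnoselskii(T, [1.0, -2.0], tau=0.4)
print(x)  # each coordinate shrinks by the factor |1 - 2*tau| per step
```

With \(Tx=-x\) the update reduces to \(x_{n+1}=(1-2\tau)x_{n}\), so any \(\tau\in(0,1)\) forces convergence, whereas the unaveraged iterates \(x_{n+1}=-x_{n}\) never settle.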

Recently, in a real Hilbert space setting, Saddeek and Ahmed [1] proved that the Krasnoselskii iterative sequence given by (1.3) converges weakly to a fixed point of T under the basic assumptions that \(I-T\) is generalized Lipschitzian, demiclosed at 0, coercive, bounded, and potential. Moreover, they also applied their result to the stationary filtration problem with a discontinuous law.

However, the convergence in [1] is in general not strong. Very recently, motivated and inspired by the work of He and Zhu [15], Saddeek [2] introduced the following modified Krasnoselskii iterative algorithm by the boundary point method:
$$ x_{n+1}=\bigl(1-\tau h(x_{n})\bigr) x_{n}+ \tau T_{\tau} x_{n},\quad n\geq0, $$
where \(x_{0}=x \in C\), \(\tau\in(0,1)\), \(T_{\tau} = (1- \tau) I + \tau T\), and \(h: C\rightarrow[0,1]\) is a function defined by
$$ h(x) =\inf\bigl\{ \alpha\in[0,1] : \alpha x \in C\bigr\} ,\quad \forall x \in C. $$
By replacing \(T_{\tau}\) by T and taking \(h(x_{n})=1\), \(\forall n\geq0\) in (1.4), we can obtain (1.3).
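For a concrete feel for the boundary function h in (1.4), here is a hypothetical finite-dimensional example (not from the paper): take C to be a closed ball in \(\mathbb{R}^{2}\) that does not contain the origin. Then \(\alpha x\in C\) is a quadratic condition in α, and \(h(x)\) is its smaller root:

```python
import math

# Hypothetical example of h(x) = inf{alpha in [0,1] : alpha*x in C}
# for C = {y in R^2 : ||y - c|| <= r} with ||c|| > r (so 0 is not in C).
# alpha*x in C  <=>  alpha^2 ||x||^2 - 2 alpha <x,c> + ||c||^2 - r^2 <= 0,
# and h(x) is the smaller root of this quadratic, clipped to [0, 1].

def h(x, c, r):
    dot = x[0] * c[0] + x[1] * c[1]
    nx2 = x[0] ** 2 + x[1] ** 2
    disc = dot ** 2 - nx2 * (c[0] ** 2 + c[1] ** 2 - r ** 2)
    assert disc >= 0, "the ray through x never meets C"
    alpha = (dot - math.sqrt(disc)) / nx2   # first point where alpha*x enters C
    return min(max(alpha, 0.0), 1.0)

c, r = (2.0, 0.0), 1.0
print(h((2.0, 1.0), c, r))  # 0.6 -- 0.6*(2, 1) = (1.2, 0.6) lies on bd(C)
print(h(c, c, r))           # 0.5 -- 0.5*(2, 0) = (1.0, 0.0) lies on bd(C)
```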

Saddeek [2] obtained some strong convergence theorems of the iterative algorithm (1.4) for finding the minimum norm solutions of certain nonlinear operator equations.

The class of uniformly convex Banach spaces play an important role in both the geometry of Banach spaces and relative topics in nonlinear functional analysis (see, for example, [16, 17]).

Let X be a real Banach space with its dual \(X^{\ast}\). Denote by \(\langle\cdot,\cdot\rangle\) the duality pairing between \(X^{\ast}\) and X. Let \(\|\cdot\|_{X}\) be a norm in X, and \(\|\cdot\|_{X^{\ast}}\) be a norm in \(X^{\ast}\).

A Banach space X is said to be strictly convex if \(\|x+y\|_{X}<2\) for every \(x, y \in X\) with \(\|x\|_{X}\leq1\), \(\|y\|_{X}\leq1\) and \(x\neq y\).

A Banach space X is said to be uniformly convex if for every \(\varepsilon>0\), there exists an increasing positive function \(\delta(\varepsilon)\) with \(\delta(0)=0\) such that \(\|x\|_{X}\leq 1\), \(\|y\|_{X}\leq1\) with \(\|x-y\|_{X}\geq\varepsilon\) imply \(\|x+y\|_{X}\leq2(1-\delta(\varepsilon))\) for every \(x, y \in X\).

It is well known that every Hilbert space is uniformly convex and every uniformly convex Banach space is reflexive and strictly convex.
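The first of these claims admits a short modulus computation, included here for illustration (a standard fact, not from the paper): if \(\|x\|_{H}\leq1\), \(\|y\|_{H}\leq1\), and \(\|x-y\|_{H}\geq\varepsilon\), the parallelogram law gives
$$ \|x+y\|_{H}^{2}=2\|x\|_{H}^{2}+2\|y\|_{H}^{2}-\|x-y\|_{H}^{2}\leq4-\varepsilon^{2}, $$
so \(\|x+y\|_{H}\leq2(1-\delta(\varepsilon))\) with \(\delta(\varepsilon)=1-\sqrt{1-\varepsilon^{2}/4}\).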

A Banach space X is said to have a Gateaux differentiable norm (see, for example, [10], p.69) if for every \(x, y \in X\) with \(\|x\|_{X}= 1\), \(\|y\|_{X}= 1\) the following limit exists:
$$ \lim_{t\rightarrow0^{+}} \frac{[\|x+ty\|_{X}-\|x\|_{X}]}{t}. $$
X is said to have a uniformly Gateaux differentiable norm if for all \(y \in X\) with \(\|y\|_{X}= 1\), the limit is attained uniformly for \(\|x\|_{X}= 1\).

Hilbert spaces, \(L^{p}\) (or \(l_{p}\)) spaces, and Sobolev spaces \(W_{p}^{1}\) (\(1< p<\infty\)) are uniformly convex and have a uniformly Gateaux differentiable norm.

The generalized duality mapping \(J_{p}\), \(p>1\) from X to \(2^{X^{\ast}}\) is defined by
$$ J_{p}(x)=\bigl\{ x^{\ast} \in X^{\ast}: \bigl\langle x^{\ast},x\bigr\rangle = \Vert x\Vert _{X}^{p}, \bigl\Vert x^{\ast}\bigr\Vert _{X^{\ast}} =\Vert x\Vert _{X}^{p-1}\bigr\} , \quad \forall x \in X. $$
It is well known (see, for example, [18, 19]) that if the uniformly convex Banach space X has a uniformly Gateaux differentiable norm, then \(J_{p}\) is single valued (we denote it by \(j_{p}\)), one-to-one, and onto. In this case the inverse of \(j_{p}\) will be denoted by \(j_{p}^{-1}\).

Definition 1.1 above can easily be stated for mappings T from C to \(X^{\ast}\). The only change here is that one replaces the inner product \((\cdot,\cdot)\) by the bilinear form \(\langle\cdot,\cdot\rangle\).

Given a nonlinear mapping A of C into \(X^{\ast}\), the variational inequality problem associated with C and A is to find
$$ x \in C : \langle Ax-f, y-x\rangle\geq0,\quad \forall y\in C, f\in X^{\ast}. $$
The set of solutions of the variational inequality (1.5) is denoted by \(\operatorname{VI}(C,A)\).

It is well known (see, for example, [12, 20, 21]) that if A is pseudomonotone and coercive, then \(\operatorname{VI}(C,A)\) is a nonempty, closed, and convex subset of X. Further, if \(A=j_{p}-T\), then \(\tilde{\mathfrak{F}}(j_{p},T)=\{x \in C: j_{p}x=Tx\}=A^{-1}0\). In addition, there exists also a unique element \(z=\operatorname{proj}_{A^{-1}0}(0) \in \operatorname{VI}(A^{-1}0,j_{p})\), called the minimum norm solution of variational inequality (1.5) (or the metric projection of the origin onto \(A^{-1}0\)). If \(X=H\), then \(j_{p}=I\) and hence \(\tilde{\mathfrak{F}}=\mathfrak{F}\).

Example 1.1

Let Ω be a bounded domain in \(\mathbb{R}^{n}\) with Lipschitz continuous boundary. Let us consider \(p\geq2\), \(\frac{1}{p}+\frac{1}{q}=1\), and \(X=\mathring{W}_{p}^{(1)}(\Omega)\), \(X^{\ast }={W}_{q}^{(-1)}(\Omega)\). The p-Laplacian is the mapping \(-\Delta_{p}:\mathring{W}_{p}^{(1)}(\Omega)\rightarrow {W}_{q}^{(-1)}(\Omega)\), \(\Delta_{p}u=\operatorname{div}(|\nabla u|^{p-2}\nabla u)\) for \(u\in \mathring{W}_{p}^{(1)}(\Omega)\).

It is well known that the p-Laplacian is in fact the generalized duality mapping \(j_{p}\) (more specifically, \(j_{p}=-\Delta_{p}\)), i.e., \(\langle j_{p}u,v\rangle=\int_{\Omega}{|\nabla u|^{p-2}(\nabla u, \nabla v) \,dx}\), \(\forall u, v \in \mathring{W}_{p}^{(1)}(\Omega)\).

From [22], p.312, we have
$$\begin{aligned} \langle j_{p}u-j_{p}v,u-v\rangle =& \int_{\Omega}\bigl(\bigl(|\nabla u|^{p-2}\nabla u-|\nabla v|^{p-2}\nabla v\bigr), \nabla (u-v)\bigr) \,dx \\ \geq& M \int_{\Omega}|\nabla u-\nabla v|^{p} \,dx \quad \text{for some } M>0, \end{aligned}$$
which implies that \(j_{p}\) is uniformly monotone.
By [22], p.314, we have
$$ \bigl\vert \langle j_{p}u-j_{p}v,w\rangle\bigr\vert \leq M\|u-v\|_{\mathring{W}_{p}^{(1)}(\Omega)}\bigl(\Vert u\Vert _{\mathring{W}_{p}^{(1)}(\Omega)} +\|v \|_{\mathring{W}_{p}^{(1)}(\Omega)}\bigr)^{p-2}\|w\|_{\mathring{W}_{p}^{(1)}(\Omega)}, $$
$$\begin{aligned} \|j_{p}u-j_{p}v\|_{{W}_{q}^{(-1)}(\Omega)} =&\sup_{w\in \mathring{W}_{p}^{(1)}(\Omega)} \frac{|\langle j_{p}u-j_{p}v,w\rangle|}{\|w\|_{\mathring{W}_{p}^{(1)}(\Omega)}} \\ \leq& M \|u-v\|_{\mathring{W}_{p}^{(1)}(\Omega)}\bigl(\Vert u\Vert _{\mathring{W}_{p}^{(1)}(\Omega)} +\|v \|_{\mathring{W}_{p}^{(1)}(\Omega)}\bigr)^{p-2}, \end{aligned}$$
this shows that \(j_{p}\) is bounded Lipschitz continuous.

The generalized duality mapping \(j_{p}=-\Delta_{p}\) is bounded, demicontinuous (and hence hemicontinuous) and monotone, and hence \(j_{p}\) is pseudomonotone.

From the definition of \(j_{p}\), it follows that \(j_{p}\) is coercive.

Since \(j_{p}u\in{W}_{q}^{(-1)}(\Omega)\), \(\forall u\in \mathring{W}_{p}^{(1)}(\Omega)\) is the subgradient of \(\frac{1}{p}\|u\|_{\mathring{W}_{p}^{(1)}(\Omega)}^{p}\), it follows that \(j_{p}\) is potential.

Since \(j_{p}\) is pseudomonotone and coercive (hence surjective), \(j_{p}\) is demiclosed at 0 (see Saddeek [2] for an explanation).

The mapping \(j_{p}\) is generalized strictly pseudocontractive with \(r_{1}(x,y)=1\).

The following two lemmas play an important role in the sequel.

Lemma 1.1 (see, for example, [23])


Let \(\{a_{n}\}\), \(\{b_{n}\}\), and \(\{c_{n}\}\) be nonnegative real sequences satisfying
$$ a_{n+1} \leq(1-\gamma_{n}) a_{n}+ b_{n}+ c_{n}, \quad \forall n\geq0, $$
where \(\{\gamma_{n}\}\subset(0,1)\), \(\sum_{n=0}^{\infty} \gamma_{n} = \infty\), \(\limsup_{n \rightarrow\infty} \frac{b_{n}}{\gamma_{n}} \leq0\), and \(\sum_{n=0}^{\infty} c_{n} < \infty\). Then \(\lim_{n\rightarrow\infty}a_{n}=0\).
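Lemma 1.1 can be sanity-checked numerically with hypothetical sequences meeting its hypotheses (an illustration, not part of any proof): take \(\gamma_{n}=\frac{1}{n+2}\) (divergent sum), \(b_{n}=\frac{\gamma_{n}}{n+1}\) (so \(b_{n}/\gamma_{n}\rightarrow0\)), and \(c_{n}=\frac{1}{(n+1)^{2}}\) (summable), and run the recursion with equality, the worst case the lemma allows:

```python
# Numerical sanity check of Lemma 1.1 with hypothetical sequences:
# gamma_n = 1/(n+2), b_n = gamma_n/(n+1), c_n = 1/(n+1)^2 satisfy the
# hypotheses, and iterating a_{n+1} = (1 - gamma_n) a_n + b_n + c_n
# (the extreme case of the inequality) drives a_n to 0, slowly.

def lemma_sequence(a0, steps):
    a = a0
    for n in range(steps):
        gamma = 1.0 / (n + 2)
        a = (1 - gamma) * a + gamma / (n + 1) + 1.0 / (n + 1) ** 2
    return a

print(lemma_sequence(5.0, 10))      # still far from 0 after a few steps
print(lemma_sequence(5.0, 100000))  # small: a_n -> 0 as the lemma predicts
```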

Lemma 1.2


Let X be a real uniformly convex Banach space with a uniformly Gateaux differentiable norm, and let \(X^{\ast}\) be its dual. Then, for all \(x^{\ast}, y^{\ast} \in X^{\ast}\), the following inequality holds:
$$ \bigl\Vert x^{\ast}+y^{\ast}\bigr\Vert ^{2}_{X^{\ast}} \leq \bigl\Vert x^{\ast}\bigr\Vert ^{2}_{X^{\ast}}+2 \bigl\langle y^{\ast}, j_{p}^{-1}x^{\ast}-y\bigr\rangle , \quad y \in X, $$
where \(j_{p}^{-1}\) is the inverse of the duality mapping \(j_{p}\).
Let us now generalize the algorithm (1.4) for a pair of mappings as follows:
$$ j_{p}x_{n+1}=\bigl(1-\tau h(x_{n}) \bigr) j_{p}x_{n}+ \tau T_{\tau}^{j_{p}} x_{n}, \quad n\geq0, $$
where \(x_{0}=x \in C\), \(\tau\in(0,1)\), \(T_{\tau}^{j_{p}} = (1- \tau) j_{p} + \tau T\), \(T:C\rightarrow X^{\ast}\) is a suitable mapping, and \(j_{p}: X\rightarrow X^{\ast}\) is the generalized duality mapping.

This algorithm can also be regarded as a modification of algorithm (3) in [1]. We shall call this algorithm the generalized modified Krasnoselskii iterative algorithm.

In the case when X is a uniformly convex Banach space, the generalized strictly pseudocontractive condition (1.2) can be written as follows:
$$\begin{aligned} \Vert Tx-Ty\Vert _{X^{\ast}}^{p} \leq& r_{1}(x,y) \Vert j_{p}x-j_{p}y\Vert _{X^{\ast}}^{p} \\ &{}+r_{2}(x,y) \bigl\Vert (j_{p}-T) (x)-(j_{p}-T) (y)\bigr\Vert _{X^{\ast}}^{p}, \quad p\in[2,\infty), \end{aligned}$$
where \(r_{1}(x,y)\) and \(r_{2}(x,y)\) satisfy the same conditions as above.

Obviously, (1.6) and (1.7) reduce to (1.4) and (1.2), respectively, when X is a Hilbert space.

The main purpose of this paper is to extend the results in [2] to uniformly convex Banach spaces and to generalized modified iterative processes with generalized strictly pseudocontractive mappings.

2 Main results

Now we are ready to state and prove the results of this paper.

Theorem 2.1

Let X be a real uniformly convex Banach space with a uniformly Gateaux differentiable norm and \(X^{\ast}\) be its dual. Let C be a nonempty closed convex subset of X. Let \(j_{p}: X\rightarrow X^{\ast}\) be the generalized duality mapping and let \(T: C\rightarrow X^{\ast}\) be a bounded Lipschitz continuous nonlinear mapping. Define \(S_{h(x)} : C\rightarrow X^{\ast}\) by
$$ S_{h(x)}x= \bigl(h(x)+\tau-1\bigr)j_{p}x-\tau Tx, \quad \forall x \in C, $$
where the function \(h(x)\) is defined as above and \(\tau\in(0,1)\).
Assume that \(S_{h(x)}\) is demiclosed at 0, coercive, potential, bounded, and generalized strictly pseudocontractive in the sense of (1.7), where \(r_{i}=r_{i}(x,y)\), \(i=1,2\), satisfy the following condition:
$$ \sup_{x, y \in C} \bigl[r_{1}+\bigl(2-h(x) \bigr)^{p}r_{2}\bigr]= \bigl(\lambda' \bigr)^{p} < \infty, \quad p\geq2. $$
Suppose that the constant α appearing in the uniform monotonicity condition of Definition 1.1(vii) (applied to \(j_{p}\)) is as follows:
$$ \alpha= \sup_{x, y \in C}\Bigl[\|x-y\|_{X}+2 \sup _{x \in C}\|x\|_{X}\Bigr]^{p-2}\|x-y \|_{X}^{2-p}, \quad p\geq2. $$
Then the iterative sequence \(\{x_{n}\}\) generated by algorithm (1.6) with \(\sum_{n=0}^{\infty} h(x_{n})= \infty\) and \(0< \tau= \min \{1, \frac{1}{\lambda' M}\}\), converges strongly to \(\bar{x} \in \operatorname{VI}(S_{{h(\bar{x})}}^{-1}0,j_{p})\), \(\bar{x}=\operatorname{proj}_{S_{{h(\bar{x})}}^{-1}0}(0)\), where \(S_{{h(\bar{x})}}^{-1}0=\tilde{\mathfrak{F}}(h(\bar{x})j_{p},T_{\tau}^{j_{p}})\).


Proof

First observe that \(\{x_{n}\}\) is well defined because \(S_{h(x)}\) is bounded and \(\lambda'<\infty\). Next, we show that the sequence \(\{x_{n}\}\) is bounded. Since \(S_{h(x)}\) is coercive, it is sufficient (see proof of Theorem 4.1 in [2]) to show that
$$ \{x_{n}\} \subset S_{0}, \qquad \|x_{n}\|_{X}\leq R_{0}, \quad n \geq0, $$
where \(S_{0}= \{x \in C : F(x)\leq F(x_{0})\}\), \(R_{0}=\sup_{x \in S_{0}} \|x\|_{X}\), and \(F: X\rightarrow(-\infty, \infty]\) is a real function defined as follows:
$$ F(x)= \int^{1}_{0} \bigl\langle S_{h(x)}(tx), x \bigr\rangle \,dt,\quad \forall x \in X. $$
From the definition of \(S_{0}\), it follows immediately that \(x_{0}\in S_{0}\). Suppose, for \(n\geq1\), that \(x_{n}\in S_{0}\). We now claim that \(x_{n+1}\in S_{0}\). Indeed, from (1.7), the bounded Lipschitz continuity of \(j_{p}\) and T, and the definition of \(S_{h(x)}\), we obtain
$$\begin{aligned}& \bigl\Vert S_{h(x_{n})}\bigl(x_{n+1}+t(x_{n}-x_{n+1}) \bigr) -S_{h(x_{n})}(x_{n})\bigr\Vert _{X^{\ast}}^{p} \\& \quad \leq r_{1}\bigl\Vert j_{p}\bigl(x_{n+1}+t(x_{n}-x_{n+1}) \bigr)-j_{p}x_{n}\bigr\Vert _{X^{\ast}}^{p} \\& \qquad {} + r_{2}\bigl\Vert (j_{p}-S_{h(x_{n})}) \bigl(x_{n+1}+t(x_{n}-x_{n+1})\bigr)-(j_{p}-S_{h(x_{n})}) (x_{n})\bigr\Vert _{X^{\ast}}^{p} \\& \quad \leq r_{1}\bigl\Vert j_{p}\bigl(x_{n+1}+t(x_{n}-x_{n+1}) \bigr)-j_{p}(x_{n})\bigr\Vert _{X^{\ast }}^{p} \\& \qquad {}+ r_{2}\bigl[\bigl(2-\tau-h(x_{n})\bigr)\bigl\Vert j_{p}\bigl(x_{n+1}+t(x_{n}-x_{n+1}) \bigr)-j_{p}(x_{n})\bigr\Vert _{X^{\ast}} \\& \qquad {}+\tau \bigl\Vert T\bigl(x_{n+1}+t(x_{n}-x_{n+1}) \bigr)-T(x_{n})\bigr\Vert _{X^{\ast}}\bigr]^{p} \\& \quad \leq (1-t)^{p}M^{p}\bigl[r_{1}+ \bigl(2-h(x_{n})\bigr)^{p}r_{2}\bigr] \\& \qquad {}\times\bigl(\bigl\Vert x_{n+1}+t(x_{n}-x_{n+1}) \bigr\Vert _{X}+\Vert x_{n}\Vert _{X} \bigr)^{p(p-2)}\Vert x_{n}-x_{n+1}\Vert _{X}^{p} \\& \quad = (1-t)^{p}M^{p}\bigl[r_{1}+ \bigl(2-h(x_{n})\bigr)^{p}r_{2}\bigr] \\& \qquad {}\times\bigl(\bigl\Vert x_{n+1}+t(x_{n}-x_{n+1}) \bigr\Vert _{X}-\Vert x_{n}\Vert _{X}+2 \Vert x_{n}\Vert _{X}\bigr)^{p(p-2)}\Vert x_{n}-x_{n+1}\Vert _{X}^{p} \\& \quad \leq(1-t)^{p}M^{p}\bigl[r_{1}+ \bigl(2-h(x_{n})\bigr)^{p}r_{2}\bigr] \\& \qquad {}\times\bigl[(1-t)\Vert x_{n}-x_{n+1}\Vert _{X}+2\Vert x_{n}\Vert _{X} \bigr]^{p(p-2)}\Vert x_{n}-x_{n+1}\Vert _{X}^{p} \\& \quad \leq M^{p}\bigl[r_{1}+\bigl(2-h(x_{n}) \bigr)^{p}r_{2}\bigr] \\& \qquad {}\times\bigl[\Vert x_{n}-x_{n+1}\Vert _{X}+2R_{0}\bigr]^{p(p-2)}\Vert x_{n}-x_{n+1}\Vert _{X}^{p}\quad \text{for } t \in[0,1]. \end{aligned}$$
Hence, using the condition on \(r_{1}\) and \(r_{2}\) and taking pth roots, we obtain
$$\begin{aligned}& \bigl\Vert S_{h(x_{n})}\bigl(x_{n+1}+t(x_{n}-x_{n+1}) \bigr)-S_{h(x_{n})}(x_{n})\bigr\Vert _{X^{\ast }} \\& \quad \leq M \lambda' \bigl[\Vert x_{n}-x_{n+1} \Vert _{X}+2R_{0}\bigr]^{p-2}\Vert x_{n}-x_{n+1}\Vert _{X}. \end{aligned}$$
This implies that
$$\begin{aligned}& \bigl\vert \bigl\langle S_{h(x_{n})}\bigl(x_{n+1}+t(x_{n}-x_{n+1}) \bigr)-S_{h(x_{n})}(x_{n}), x_{n}-x_{n+1}\bigr\rangle \bigr\vert \\& \quad \leq M \lambda' \bigl[\|x_{n}-x_{n+1} \|_{X}+2R_{0}\bigr]^{p-2}\|x_{n}-x_{n+1} \|_{X}^{2}. \end{aligned}$$
Since \(S_{h(x)}\) is potential and \(j_{p}\) is uniformly monotone, by (2.2), (1.6), and (2.4) it follows that
$$\begin{aligned} F(x_{n})-F(x_{n+1}) =& \int^{1}_{0} \bigl(\bigl\langle S_{h(x_{n})}(tx_{n}),x_{n} \bigr\rangle -\bigl\langle S_{h(x_{n})}(tx_{n+1}),x_{n+1} \bigr\rangle \bigr) \,dt \\ =& \int^{1}_{0} \bigl(\bigl\langle S_{h(x_{n})} \bigl(x_{n+1}+t(x_{n}-x_{n+1})\bigr),x_{n}-x_{n+1} \bigr\rangle \bigr) \,dt \\ =& \int^{1}_{0} \bigl(\bigl\langle S_{h(x_{n})} \bigl(x_{n+1}+t(x_{n}-x_{n+1})\bigr),x_{n}-x_{n+1} \bigr\rangle \bigr) \,dt \\ &{}- \int^{1}_{0} \bigl\langle S_{h(x_{n})}(x_{n}),x_{n}-x_{n+1} \bigr\rangle \,dt+\bigl\langle S_{h(x_{n})}(x_{n}),x_{n}-x_{n+1} \bigr\rangle \\ \geq& - \int^{1}_{0} \bigl\vert \bigl\langle S_{h(x_{n})}\bigl(x_{n+1}+t(x_{n}-x_{n+1}) \bigr)-S_{h(x_{n})}(x_{n}),x_{n}-x_{n+1}\bigr\rangle \bigr\vert \,dt \\ &{}+\bigl\langle S_{h(x_{n})}(x_{n}),x_{n}-x_{n+1} \bigr\rangle \\ \geq& -M \lambda'\bigl[\Vert x_{n}-x_{n+1} \Vert _{X}+2R_{0}\bigr]^{p-2}\Vert x_{n}-x_{n+1}\Vert _{X}^{2} \\ &{}+ \frac{1}{\tau} \langle j_{p}x_{n}-j_{p}x_{n+1},x_{n}-x_{n+1} \rangle \\ \geq& -M \lambda'\bigl[\Vert x_{n}-x_{n+1} \Vert _{X}+2R_{0}\bigr]^{p-2}\Vert x_{n}-x_{n+1}\Vert _{X}^{2}+ \frac{\alpha}{\tau} \Vert x_{n}-x_{n+1}\Vert _{X}^{p}, \end{aligned}$$
which together with the restriction on α implies that
$$ F(x_{n})-F(x_{n+1})\geq\mu \bigl[\Vert x_{n}-x_{n+1}\Vert _{X}+2R_{0} \bigr]^{p-2}\Vert x_{n}-x_{n+1}\Vert _{X}^{2},\quad \mu=\frac{1}{\tau}-M \lambda'>0. $$
Therefore, \(F(x_{n+1})\leq F(x_{n})\leq F(x_{0})\), which implies that \(x_{n+1} \in S_{0}\). Thus, by mathematical induction we get \(x_{n} \in S_{0}\) for all \(n\geq0\). This shows that \(x_{n}\) is bounded. This, together with the definition of \(j_{p}\), the boundedness of \(S_{h(x_{n})}\), and (1.6), (2.2), implies that the sequences \(\{S_{h(x_{n})}(x_{n})\}\), \(\{j_{p}(x_{n})\}\), \(\{T_{\tau}^{j_{p}}(x_{n})\}\), and \(\{F(x_{n})\}\) are also bounded.
Further, it follows from (2.5) that the sequence \(\{F(x_{n})\}\) is monotonically decreasing and therefore convergent. Consequently, from (2.5), we have
$$ \lim_{n\rightarrow\infty} \|x_{n}-x_{n+1} \|_{X}= 0. $$
Hence, by the bounded Lipschitz continuity of \(j_{p}\), we obtain
$$ \lim_{n\rightarrow\infty} \|j_{p}x_{n}-j_{p}x_{n+1} \|_{X^{\ast}}= 0. $$
Therefore, by (1.6) and the definition of \(S_{h(x)}\), we then have
$$ \lim_{n\rightarrow\infty} \|S_{h(x_{n})}x_{n} \|_{X^{\ast}}= 0. $$
Let \(\bar{x}\) be a weak limit point of \(\{x_{n}\}\); then there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that
$$ \lim_{k\rightarrow\infty} \|x_{n_{k}}-\bar{x} \|_{X} = \sigma_{\bar{x}}. $$
Since \(S_{h(x)}\) is demiclosed at 0, it follows from (2.8) and (2.9) that \(S_{h(\bar{x})}\bar{x}=0\), and hence
$$ \bar{x}\in S_{{h(\bar{x})}}^{-1}0. $$
Now, we show that
$$ \limsup_{n\rightarrow\infty} \langle S_{h(x_{n})}x_{n}, x_{n+1}-\tilde{x}\rangle\leq0, \quad \forall\tilde{x}\in S_{{h(\tilde{x})}}^{-1}0. $$
Using (1.6) and the definition of \(S_{h(x)}\), we get
$$\begin{aligned} \bigl\langle S_{h(x_{n})}(x_{n}), x_{n+1}- \tilde{x}\bigr\rangle \leq& \bigl\langle S_{h(x_{n})}(x_{n}), x_{n+1}-x_{n}\bigr\rangle + \tau^{-1}\bigl\langle j_{p}(x_{n})-j_{p}(x_{n+1}), x_{n}-\tilde {x}\bigr\rangle \\ \leq& \bigl\Vert S_{h(x_{n})}(x_{n})\bigr\Vert _{X^{\ast}} \Vert x_{n}-x_{n+1}\Vert _{X} \\ &{}+ \tau^{-1}\Vert j_{p}x_{n}-j_{p}x_{n+1} \Vert _{X^{\ast}} \Vert x_{n}-\tilde {x}\Vert _{X}. \end{aligned}$$
Taking the lim sup as \(n\rightarrow\infty\) in (2.12) and using (2.6), (2.7), and (2.8) yield the desired inequality (2.11).
Now, let us show that
$$ \limsup_{n\rightarrow\infty} \bigl\langle -j_{p}( \bar{x}), x_{n+1}-\bar{x}\bigr\rangle \leq0, $$
where \(\bar{x}\) is the metric projection of the origin onto \(S_{{h(\bar{x})}}^{-1}0\).
Let \(\{x_{n_{k}}\}\) be a subsequence of \(\{x_{n}\}\) such that \(x_{n_{k+1}}\rightharpoonup\tilde{x}\in S_{{h(\tilde{x})}}^{-1}0\) and
$$ \limsup_{n\rightarrow\infty} \bigl\langle -j_{p}(\bar{x}), x_{n+1}-\bar{x}\bigr\rangle =\limsup_{k\rightarrow\infty} \bigl\langle -j_{p}(\bar{x}), x_{n_{k+1}}-\bar{x}\bigr\rangle . $$
It follows from Kato [21] that
$$ \limsup_{n\rightarrow\infty} \bigl\langle -j_{p}( \bar{x}), x_{n+1}-\bar{x}\bigr\rangle = \bigl\langle -j_{p}( \bar{x}), \tilde{x}-\bar{x}\bigr\rangle \leq0. $$
This proves the desired inequality (2.13), and, hence by (2.10) and (2.14), we obtain
$$ \bar{x}\in S_{{h(\bar{x})}}^{-1}0\cap \operatorname{VI} \bigl(S_{{h(\bar{x})}}^{-1}0,j_{p}\bigr). $$
Now, we prove that \(j_{p}x_{n}\rightarrow j_{p}\bar{x}\) as \(n\rightarrow\infty\).
By using (1.6) and Lemma 1.2, we get
$$\begin{aligned} \Vert j_{p}x_{n+1}-j_{p}\bar{x} \Vert _{X^{\ast}}^{2} =& \bigl\Vert \bigl(1-\tau h(x_{n})\bigr) (j_{p}x_{n}-j_{p} \bar{x})+ \tau \bigl(T_{\tau}^{j_{p}} x_{n}-h(x_{n})j_{p} \bar{x}\bigr)\bigr\Vert _{X^{\ast}}^{2} \\ \leq& \bigl(1-\tau h(x_{n})\bigr)^{2}\Vert j_{p}x_{n}-j_{p}\bar{x}\Vert _{X^{\ast}}^{2}+2\tau \bigl\langle T_{\tau}^{j_{p}} x_{n}-h(x_{n})j_{p}\bar{x}, x_{n+1}- \bar{x} \bigr\rangle \\ \leq& \bigl(1-\tau h(x_{n})\bigr)^{2}\Vert j_{p}x_{n}-j_{p}\bar{x}\Vert _{X^{\ast}}^{2}+2\tau \bigl[\bigl\langle T_{\tau}^{j_{p}} x_{n}-h(x_{n})j_{p}x_{n}, x_{n+1}-\bar{x} \bigr\rangle \\ &{}+ h(x_{n})\bigl\langle -j_{p}(\bar{x}), x_{n+1}-\bar{x}\bigr\rangle \bigr] +2\tau h(x_{n})\bigl\Vert j_{p}(x_{n})\bigr\Vert _{X^{\ast}} \Vert x_{n+1}-\bar{x} \Vert _{X}. \end{aligned}$$
Set \(\gamma_{n}=\tau h(x_{n})(2-\tau h(x_{n}))\), \(a_{n}=\|j_{p}x_{n}-j_{p}\bar{x}\|_{X^{\ast}}^{2}\), \(b_{n}=2\tau [\langle T_{\tau}^{j_{p}} x_{n}-h(x_{n})j_{p}x_{n}, x_{n+1}-\bar{x} \rangle +h(x_{n})\langle-j_{p}(\bar{x}), x_{n+1}-\bar{x} \rangle]\), and \(c_{n}=2\tau h(x_{n})\|j_{p}(x_{n})\|_{X^{\ast}} \|x_{n+1}-\bar{x} \|_{X}\).
Then inequality (2.16) becomes
$$ a_{n+1} \leq(1-\gamma_{n}) a_{n}+ b_{n}+ c_{n},\quad \forall n\geq0. $$
From \(\sum_{n=0}^{\infty} h(x_{n})= \infty\), and (2.13), it follows that \(\sum_{n=0}^{\infty} \gamma_{n}= \infty\), \(\limsup_{n\rightarrow\infty} \frac{b_{n}}{\gamma_{n}}\leq0\), and \(\sum_{n=0}^{\infty} c_{n}< \infty\). Consequently, applying Lemma 1.1 to (2.17), we conclude that
$$ \lim_{n\rightarrow\infty}\|j_{p}x_{n}-j_{p} \bar{x}\|_{X^{\ast}}=0. $$
Finally, we show that \(x_{n}\rightarrow\bar{x}\) as \(n\rightarrow\infty\).
From the uniform monotonicity of \(j_{p}\), we have
$$\begin{aligned} \Vert x_{n}-\bar{x}\Vert _{X}^{p} \leq& \frac{1}{\alpha}\langle j_{p}x_{n}-j_{p} \bar{x}, x_{n}-\bar{x}\rangle \\ \leq& \frac{1}{\alpha} \Vert j_{p}x_{n}-j_{p} \bar{x}\Vert _{X^{\ast}}\bigl[\Vert x_{n}\Vert _{X}+\Vert \bar{x}\Vert _{X}\bigr]. \end{aligned}$$
Letting \(n\rightarrow\infty\) in (2.19) and using (2.18) and the boundedness of \(\{x_{n}\}\), we obtain \(x_{n}\rightarrow \bar{x}\), as \(n\rightarrow\infty\). This completes the proof. □

An immediate consequence of Theorem 2.1 is the following corollary.

Corollary 2.1

Let \(X=H\) be a real Hilbert space, and let C be a nonempty closed convex subset of H. Let \(T: C\rightarrow H\) be an M-Lipschitzian mapping. Define \(\hat{S}_{h(x)} : C\rightarrow H\) by
$$ \hat{S}_{h(x)}x= h(x)x-T_{\tau}x, \quad \forall x \in C, $$
where \(T_{\tau}=(1-\tau)I+\tau T\). Let \(\hat{S}_{h(x)}\), τ, and \(h(x)\) be as in Theorem  2.1 and \(p=2\) (i.e., \(j_{p}=I\), \(\alpha=1\), and \(\sup_{x, y \in C} [r_{1}+(2-h(x))^{2}r_{2}]= (\lambda')^{2}<\infty\)). Then the sequence \(\{x_{n}\}\) defined by
$$ x_{n+1}=\bigl(1-\tau h(x_{n})\bigr) x_{n}+ \tau T_{\tau} x_{n}, \quad n\geq0, $$
with \(\sum_{n=0}^{\infty} h(x_{n})= \infty\) converges strongly to \(\bar{x} \in \operatorname{VI}({\hat{S}_{h(\bar{x})}}^{-1}0,I)\), \(\bar{x}=\operatorname{proj}_{{\hat{S}_{h(\bar{x})}}^{-1}0}(0)\), where \({\hat{S}_{h(\bar{x})}}^{-1}0=\tilde{\mathfrak{F}}(h(\bar {x})I,T_{\tau})\).

A special case of Corollary 2.1 is the following result, due to Saddeek [2], who proved it under the condition that T is generalized Lipschitzian; it is, in turn, a generalization of Theorem 2 of Saddeek and Ahmed [1].

Corollary 2.2

Except for the M-Lipschitzian condition on the mapping T, let all the other assumptions of Corollary 2.1 be satisfied, with \(r_{2}=0\). Then the sequence \(\{x_{n}\}\) defined by (2.20) with \(\sum_{n=0}^{\infty} h(x_{n})= \infty\), \(0<\tau=\min\{1,\frac{1}{\lambda}\}\), and \(\sup_{x, y \in C} [r_{1}(x,y)]= \lambda^{2}<\infty\) converges strongly to \(\bar{x}=\operatorname{proj}_{\hat{S}_{{h(\bar{x})}}^{-1}0}(0)\).

Remark 2.1

All conditions imposed in Theorem 2.1 on the mapping \(S_{h(x)}\) are essential to the proof of the main theorem, more precisely for the existence of a solution of \(S_{h(x)}x=0\), and to ensure the strong convergence of the generalized modified Krasnoselskii iterative algorithm.

3 Application to nonlinear pseudomonotone equations

In this section, we study nonlinear equations for pseudomonotone mappings; that is, we seek \(x\in C\) such that
$$ Ax=f, \quad f\in X^{\ast}, $$
where \(A:C\rightarrow X^{\ast}\) is a nonlinear pseudomonotone mapping.

To ensure the existence of solutions of (3.1), we shall assume that A is pseudomonotone and coercive on \(\mathring{W}_{p}^{(1)}(\Omega)\) (\(1< p<\infty\)) (see, for example, [12]). Such nonlinear equations occur, in particular, in descriptions of a stabilized filtration and in problems of finding the equilibria of soft shells (see, for example, [25, 26]).

Theorem 3.1

In addition to the above assumptions on A, let A be potential and satisfy the following condition:
$$ \|Ax-Ay\|_{X^{\ast}}\leq\|j_{p}x-j_{p}y \|_{X^{\ast}}, \quad \forall x, y \in C. $$
Then the sequence \(\{x_{n}\}\) generated by \(x_{0}=x \in C\),
$$ j_{p}x_{n+1}=j_{p}x_{n}- \tau\bigl(A(x_{n})-f\bigr),\quad n \geq0, $$
where \(0 <\tau=\min\{1, \frac{1}{M}\}\), converges strongly to the minimum norm solution of equation (3.1), provided that \(\sum_{n=0}^{\infty} h(x_{n})= \infty\).


Proof

Define \(S_{h(x)} : C\rightarrow X^{\ast}\) by \(S_{h(x)}x= Ax-f\), \(\forall x \in C\). Since (3.1) has at least one solution, \(S_{{h(x)}}^{-1}0\neq\emptyset\). On the other hand, condition (3.2) together with the bounded Lipschitz continuity of \(j_{p}\) clearly implies that A is bounded Lipschitz continuous, and the potentiality of A implies that \(S_{h(x)}\) is potential.

Now, we show that condition (1.7) is implied by (3.2). Indeed by (3.2) and the definition of \(S_{h(x)}\), we get
$$ \Vert S_{h(x)}x-S_{h(x)}y\Vert _{X^{\ast}}^{p}= \Vert Ax-Ay\Vert _{X^{\ast}}^{p}\leq \Vert j_{p}x-j_{p}y\Vert _{X^{\ast}}^{p}. $$
Hence \(S_{{h(x)}}\) satisfies condition (1.7) with \(r_{1}(x,y)=1\), \(r_{2}(x,y)=0\), and \(\lambda'=1\).

Finally, that the pseudomonotonicity of A implies the demiclosedness of \(S_{{h(x)}}\) at 0 can be proved by proceeding as in the proof of Theorem 5.1 of [2]. Now we apply Theorem 2.1 to yield the desired result. □

Remark 3.1

If we set \(X=H\) (i.e., \(j_{p}=I\) and \(p=2\)), then the condition (3.2) reduces to the M-Lipschitzian condition of the operator A. Hence from Theorem 3.1 we obtain Theorem 5.1 of [2], which in turn is a generalization of Theorem 3 of [1].
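In the Hilbert space setting of Remark 3.1 (\(j_{p}=I\), \(p=2\)), iteration (3.3) reads \(x_{n+1}=x_{n}-\tau(Ax_{n}-f)\). The following is a minimal finite-dimensional sketch, a hypothetical example with a symmetric positive definite (hence monotone, coercive, and M-Lipschitzian) matrix A:

```python
# Hypothetical R^2 sketch of iteration (3.3) with j_p = I:
#   x_{n+1} = x_n - tau * (A x_n - f).
# A is symmetric positive definite with eigenvalues 1 and 3, so ||A|| = M = 3
# and tau = min(1, 1/M) = 1/3; the solution of Ax = f is (1/3, 1/3).

A = [[2.0, 1.0], [1.0, 2.0]]
f = [1.0, 1.0]
tau = 1.0 / 3.0

def iterate(A, f, tau, steps=200):
    x = [0.0, 0.0]
    for _ in range(steps):
        Ax = [A[0][0] * x[0] + A[0][1] * x[1],
              A[1][0] * x[0] + A[1][1] * x[1]]
        x = [x[i] - tau * (Ax[i] - f[i]) for i in range(2)]
    return x

print(iterate(A, f, tau))  # converges to [0.333..., 0.333...]
```

Since the eigenvalues of \(I-\tau A\) lie in \([0,1)\), the error contracts at every step; this mirrors, in the simplest possible setting, the role of the step-size restriction \(\tau=\min\{1,\frac{1}{M}\}\) in Theorem 3.1.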

4 Conclusion

In this work, we introduce a generalized modified Krasnoselskii iterative process involving a pair of a generalized strictly pseudocontractive mapping and a generalized duality mapping and prove some strong convergence theorems of the proposed iterative process to the minimum norm solutions of certain nonlinear equations in the framework of uniformly convex Banach spaces. These results improve and generalize recent work in this direction.



The author would like to thank the editor and the reviewers for their valuable suggestions and comments.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

Department of Mathematics, Faculty of Science, Assiut University


  1. Saddeek, AM, Ahmed, SA: Iterative solution of nonlinear equations of the pseudo-monotone type in Banach spaces. Arch. Math. 44, 273-281 (2008)
  2. Saddeek, AM: A strong convergence theorem for a modified Krasnoselskii iteration method and its application to seepage theory in Hilbert spaces. J. Egypt. Math. Soc. 22, 476-480 (2014)
  3. Browder, FE, Petryshyn, WV: Construction of fixed points of nonlinear mappings in Hilbert spaces. J. Math. Anal. Appl. 20, 197-228 (1967)
  4. Marino, G, Xu, HK: Weak and strong convergence theorems for k-strict pseudocontractions in Hilbert spaces. J. Math. Anal. Appl. 329, 336-349 (2007)
  5. Zhou, H: Convergence theorems of fixed points for k-strict pseudocontractions in Hilbert spaces. Nonlinear Anal. 69, 456-462 (2008)
  6. Enyi, CD, Iyiola, OS: A new iterative scheme for common solution of equilibrium problems, variational inequalities and fixed point of k-strictly pseudocontractive mappings in Hilbert spaces. Br. J. Math. Comput. Sci. 4(4), 512-527 (2014)
  7. Hao, Y: A strong convergence theorem on generalized equilibrium problems and strictly pseudocontractive mappings. Proc. Est. Acad. Sci. 60(1), 12-24 (2011)
  8. He, Z: Strong convergence of the new modified composite iterative method for strict pseudocontractions in Hilbert spaces. Note Mat. 31(2), 67-78 (2011)
  9. Li, M, Yao, Y: Strong convergence of an iterative algorithm for λ-strictly pseudocontractive mappings in Hilbert spaces. An. Ştiinţ. Univ. ‘Ovidius’ Constanţa 18(1), 219-228 (2010)
  10. Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge (1990)
  11. Zeidler, E: Nonlinear Functional Analysis and Its Applications. II/B. Nonlinear Monotone Operators. Springer, Berlin (1990)
  12. Lions, JL: Quelques Methodes de Resolution des Problemes aux Limites Nonlineaires. Dunod/Gauthier-Villars, Paris (1969)
  13. Gajewski, H, Groger, K, Zacharias, K: Nichtlineare Operatorgleichungen und Operatordifferentialgleichungen. Akademie Verlag, Berlin (1974)
  14. Krasnoselskii, MA: Two observations about the method of successive approximations. Usp. Mat. Nauk 10, 123-127 (1955)
  15. He, S, Zhu, W: A modified Mann iteration by boundary point method for finding minimum-norm fixed point of nonexpansive mappings. Abstr. Appl. Anal. 2013, Article ID 768595 (2013)
  16. Benyamini, Y, Lindenstrauss, J: Geometric Nonlinear Functional Analysis, vol. 1. Amer. Math. Soc. Colloq. Publ., vol. 48. Am. Math. Soc., Providence (2000)
  17. Diestel, J: Sequences and Series in Banach Spaces. Graduate Texts in Mathematics, vol. 92. Springer, New York (1984)
  18. Reich, S: On the asymptotic behavior of nonlinear semigroups and the range of accretive operators. J. Math. Anal. Appl. 79(1), 123-126 (1981)
  19. Takahashi, W: Nonlinear Functional Analysis - Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama (2000)
  20. Alber, YI: Metric and generalized projection operators in Banach spaces. In: Kartsatos, A (ed.) Properties and Applications: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, pp. 15-50. Dekker, New York (1996)
  21. Kato, T: Nonlinear semigroups and evolution equations. J. Math. Soc. Jpn. 19, 508-520 (1967)
  22. Ciarlet, P: The Finite Element Method for Elliptic Problems. North-Holland, New York (1978)
  23. Liu, LS: Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 194, 114-125 (1995)
  24. Cioranescu, I: Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Mathematics and Its Applications, vol. 62. Kluwer Academic, Dordrecht (1990)
  25. Badriev, IB, Shagidullin, RR: Investigating one dimensional equations of the soft shell statistical condition and algorithm of their solution. Izv. Vysš. Učebn. Zaved., Mat. 6, 8-16 (1992)
  26. Lapin, AV: On the research of some problems of nonlinear filtration theory. Ž. Vyčisl. Mat. Mat. Fiz. 19(3), 689-700 (1979)


© Saddeek 2016