Open Access

Strong convergence of a general iterative algorithm for a finite family of accretive operators in Banach spaces

Fixed Point Theory and Applications 2015, 2015:90

https://doi.org/10.1186/s13663-015-0335-0

Received: 10 September 2014

Accepted: 19 May 2015

Published: 17 June 2015

Abstract

The purpose of this paper is to present a new iterative scheme for finding a common solution to a variational inclusion problem with a finite family of accretive operators and a modified system of variational inequalities in infinite-dimensional Banach spaces. Under mild conditions, a strong convergence theorem for approximating this common solution is proved. The methods used in this paper are novel and differ from those in the earlier and recent literature.

Keywords

iterative algorithm; strong convergence; fixed point; q-uniformly smooth Banach space; inverse-strongly accretive operator

MSC

47H09 47H10 47H17

1 Introduction

The theory of variational inequalities, introduced by Stampacchia [1] in the early 1960s, has emerged as an interesting and fascinating branch of applicable mathematics with a wide range of applications in industry, finance, economics, and the social, pure, and applied sciences. It has been shown that this theory provides a natural, direct, simple, unified, and efficient framework for the general treatment of a wide class of otherwise unrelated linear and nonlinear problems; see, for example, [2–5] and the references therein. Variational inequalities have been extended and generalized in several directions using novel techniques.

In 1968, Brézis [6] initiated the study of the existence theory of a class of variational inequalities, later known as variational inclusions, using proximal-point mappings due to Moreau [7]. Variational inclusions include variational, quasi-variational, and variational-like inequalities as special cases. Variational inclusions can be viewed as an innovative extension of the variational principles and thus have wide applications in optimization, control, economics, and the engineering sciences.

In recent years, much attention has been given to the study of systems of variational inclusions/inequalities, which occupy a central and significant role in interdisciplinary research among analysis, geometry, biology, elasticity, optimization, image processing, biomedical sciences, and mathematical physics. These works display both the breadth of the mathematics involved and its simplicity. A number of problems leading to systems of variational inclusions/inequalities arise in applications to variational problems and engineering. It is well known that systems of variational inclusions/inequalities can provide new insight into the problems being studied and can stimulate new and innovative ideas for problem solving.

In 2000, Ansari and Yao [8] introduced a system of generalized implicit variational inequalities and proved the existence of its solution. They derived existence results for a solution of a system of generalized variational inequalities, from which they established the existence of a solution of a system of optimization problems.

Ansari et al. [9] introduced the system of vector equilibrium problems and proved the existence of its solution. Moreover, they applied their results to the system of vector variational inequalities. The results of [8] and [9] were used as tools to solve the Nash equilibrium problem for non-differentiable and (non)convex vector-valued functions.

Let \(A,B : C \rightarrow E\) be two nonlinear mappings. In 2010, Yao et al. [10] introduced the following system of general variational inequalities: find \((x^{*}, y^{*}) \in C \times C\) such that
$$ \left \{ \textstyle\begin{array}{l} \langle Ay^{*}+x^{*}-y^{*}, j(x-x^{*}) \rangle\geq0,\quad \forall x \in C, \\ \langle Bx^{*}+y^{*}-x^{*}, j(x-y^{*} ) \rangle\geq0, \quad \forall x \in C. \end{array}\displaystyle \right . $$
(1.1)
Recently, in 2-uniformly smooth Banach spaces, Kangtunyakarn [11] introduced a new system of variational inequalities: find \((x^{*}, y^{*}) \in C \times C\) such that
$$ \left \{ \textstyle\begin{array}{l} \langle x^{*}-(I-\lambda A)(ax^{*}+(1-a)y^{*}), j(x-x^{*}) \rangle\geq0,\quad \forall x \in C,\\ \langle y^{*}-(I-\mu B)x^{*}, j(x-y^{*} ) \rangle\geq0, \quad \forall x \in C. \end{array}\displaystyle \right . $$
(1.2)
If \(a=0\), then problem (1.2) reduces to the problem of finding \((x^{*}, y^{*}) \in C \times C\) such that
$$ \left \{ \textstyle\begin{array}{l} \langle\lambda Ay^{*}+x^{*}-y^{*}, j(x-x^{*}) \rangle\geq0, \quad \forall x \in C,\\ \langle \mu Bx^{*}+y^{*}-x^{*}, j(x-y^{*} ) \rangle\geq0,\quad \forall x \in C, \end{array}\displaystyle \right . $$
(1.3)
which was introduced by Cai and Bu [12]. In Hilbert spaces, problem (1.3) reduces to the problem of finding \((x^{*}, y^{*}) \in C \times C\) such that
$$ \left \{ \textstyle\begin{array}{l} \langle\lambda Ay^{*}+x^{*}-y^{*}, x-x^{*} \rangle\geq0,\quad \forall x \in C, \\ \langle \mu Bx^{*}+y^{*}-x^{*}, x-y^{*} \rangle\geq0,\quad \forall x \in C, \end{array}\displaystyle \right . $$
(1.4)
which was introduced by Ceng et al. [13]. If \(A = B\), then problem (1.4) collapses to the problem of finding \((x^{*}, y^{*}) \in C\times C\) such that
$$ \left \{ \textstyle\begin{array}{l} \langle\lambda Ay^{*}+x^{*}-y^{*}, x-x^{*} \rangle\geq0, \quad \forall x \in C, \\ \langle \mu Ax^{*}+y^{*}-x^{*}, x-y^{*} \rangle\geq0, \quad \forall x \in C, \end{array}\displaystyle \right . $$
(1.5)
which was introduced by Verma [14]. In particular, if we let \(x^{*} = y^{*}\) in (1.5), then problem (1.5) reduces to the classical variational inequality problem: find \(x^{*}\in C\) such that
$$ \bigl\langle Ax^{*}, x-x^{*} \bigr\rangle \geq0,\quad \forall x \in C. $$
(1.6)
The set of solutions of problem (1.6) is denoted by \(\operatorname{VI}(C,A)\).
Motivated by the works mentioned above, we shall consider the following problem in q-uniformly smooth Banach spaces: find \((x^{*}, y^{*}) \in C\times C\) such that
$$ \left \{ \textstyle\begin{array}{l} \langle x^{*}-(I-\lambda A)(ax^{*}+(1-a)y^{*}), j_{q}(x-x^{*}) \rangle\geq0,\quad \forall x \in C, \\ \langle y^{*}-(I-\mu B)x^{*}, j_{q}(x-y^{*} ) \rangle\geq0,\quad \forall x \in C, \end{array}\displaystyle \right . $$
(1.7)
where \(a\in[0,1]\), \(\lambda> 0\) and \(\mu> 0\) are three constants. This problem is called a modified system of variational inequalities, which clearly includes problems (1.1)-(1.6) as special cases.
In order to find a common element of the set of solutions of problem (1.2) and the set of fixed points of nonlinear operators, Kangtunyakarn [11] also studied the following algorithm in a 2-uniformly smooth Banach space:
$$ x_{n+1}=G\bigl(\alpha_{n}u+\beta_{n}x_{n}+ \gamma_{n}S^{A}x_{n}\bigr),\quad \forall n \geq1, $$
(1.8)
where \(S^{A}\) is the \(S^{A}\)-mapping generated by \(S_{1}, S_{2},\ldots, S_{N}\), \(T_{1}, T_{2},\ldots, T_{N}\), \(G:C \rightarrow C\) is the mapping defined by \(Gx= Q_{C}(I-\lambda A)(aI+(1-a)Q_{C}(I-\mu B))x\), and \(Q_{C}\) is a sunny nonexpansive retraction from E onto C. Then, under mild conditions, a strong convergence theorem was established.
On the other hand, the quasi-variational inclusion problem in the setting of Hilbert spaces has been extensively studied in the literature; see, for instance, [15–23]. There is, however, little work in the existing literature on this problem in the setting of Banach spaces. The main difficulty is that Banach spaces lack the inner-product structure of Hilbert spaces. To overcome this difficulty, López et al. [24] employed new techniques to initiate the investigation of splitting methods for accretive operators in Banach spaces. They considered the following algorithms with errors in Banach spaces:
$$ x_{n+1}=(1-\alpha_{n})x_{n}+\alpha_{n} \bigl(J_{r_{n}}\bigl(x_{n}-r_{n}(Ax_{n}+a_{n}) \bigr)+b_{n}\bigr) $$
(1.9)
and
$$ x_{n+1}=\alpha_{n} u +(1-\alpha_{n}) \bigl(J_{r_{n}}\bigl(x_{n}-r_{n}(Ax_{n}+a_{n}) \bigr)+b_{n}\bigr), $$
(1.10)
where \(u\in E\), \(\{a_{n}\}, \{b_{n}\}\subset E\) and \(J_{r_{n}} = (I +r_{n}B)^{-1}\) is the resolvent of B. Then they established the weak and strong convergence of algorithms (1.9) and (1.10), respectively.
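As a concrete illustration, algorithm (1.9) can be sketched in the scalar case \(E=\mathbb{R}\) with zero error terms \(a_{n}=b_{n}=0\); the particular operator \(A\), the choice \(B=\partial|\cdot|\), and the constant parameters below are illustrative assumptions, chosen so that the resolvent \(J_{r}=(I+rB)^{-1}\) is the elementary soft-thresholding map.

```python
# Sketch of algorithm (1.9) on E = R with errors a_n = b_n = 0 and
# constant alpha_n, r_n (illustrative assumptions).  Here Ax = x - 2 and
# B is the subdifferential of |.|, whose resolvent J_r = (I + rB)^{-1}
# is soft-thresholding; the unique zero of A + B is x = 1.

def soft_threshold(u, r):
    """Resolvent (I + r*B)^{-1}(u) for B = subdifferential of |.|."""
    if u > r:
        return u - r
    if u < -r:
        return u + r
    return 0.0

def algorithm_1_9(x0, steps=200, alpha=0.5, r=0.5):
    A = lambda x: x - 2.0
    x = x0
    for _ in range(steps):
        # x_{n+1} = (1 - alpha_n) x_n + alpha_n J_{r_n}(x_n - r_n A x_n)
        x = (1 - alpha) * x + alpha * soft_threshold(x - r * A(x), r)
    return x

print(round(algorithm_1_9(5.0), 6))  # prints 1.0
```

The iterates approach the unique zero \(x=1\) of \(A+B\), consistent with the convergence asserted for (1.9).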
Recently, Khuangsatung and Kangtunyakarn [25] introduced the following algorithm in Hilbert spaces for finding a common element of the set of fixed points of a k-strictly pseudononspreading mapping, the set of solutions of a finite family of variational inclusion problems and the set of solutions of a finite family of equilibrium problems:
$$ \left \{ \textstyle\begin{array}{l} \sum_{i=1}^{N}\alpha_{i}\Psi_{i}(z_{n},y)+\frac{1}{r_{n}}\langle y-z_{n}, z_{n}-w_{n}\rangle\geq0,\quad \forall y\in C, \\ w_{n+1}=\alpha_{n}\mu+\beta_{n}w_{n}+\gamma_{n}J_{M,\lambda}(I-\lambda \sum_{i=1}^{N}b_{i}A_{i})w_{n} \\ \hphantom{w_{n+1}=}{}+\eta_{n}(I-\rho_{n}(I-S))w_{n}+\delta_{n}z_{n},\quad \forall n\geq1. \end{array}\displaystyle \right . $$
(1.11)
Under suitable conditions, they proved the strong convergence of the sequence \(\{w_{n}\}\).

Motivated and inspired by Zhang et al. [19], Qin et al. [26], López et al. [24], Takahashi et al. [27] and Khuangsatung and Kangtunyakarn [25], we suggest and analyze a new iterative algorithm for finding a common solution to a variational inclusion problem with a finite family of accretive operators and a modified system of variational inequalities in infinite-dimensional Banach spaces. We also establish the convergence of the proposed algorithm under suitable conditions. The results obtained in this paper improve and extend the corresponding results announced by many others.

2 Preliminaries

Throughout this paper, we denote by E and \(E^{*}\) a real Banach space and the dual space of E, respectively. We use \(\operatorname{Fix}(T )\) to denote the set of fixed points of T and \(\mathscr{B}_{r}\) to denote the closed ball with center zero and radius r. Let C be a subset of E and \(q > 1\) be a real number. The (generalized) duality mapping \(J_{q} : E \rightarrow2^{E^{*}}\) is defined by
$$ J_{q} (x)=\bigl\{ x^{*}\in E^{*}:\bigl\langle x,x^{*}\bigr\rangle =\Vert x \Vert ^{q}, \bigl\Vert x^{*}\bigr\Vert =\Vert x\Vert ^{q-1} \bigr\} $$
for all \(x \in E\), where \(\langle \cdot,\cdot\rangle\) denotes the generalized duality pairing between E and \(E^{*}\). It is well known that if E is smooth, then \(J_{q}\) is single-valued, which is denoted by \(j_{q}\).
Let C be a nonempty closed convex subset of a real Banach space E. Let \(A : E\rightarrow E\) be a single-valued nonlinear mapping, and let \(M : E\rightarrow2^{E}\) be a multivalued mapping. The so-called quasi-variational inclusion problem is to find a \(z \in E\) such that
$$ 0\in(A + M)z. $$
(2.1)
The set of solutions of (2.1) is denoted by \(\operatorname{VI}(E, A, M)\).

Definition 2.1

Let E be a Banach space. Then a function \(\delta_{E} : [0, 2]\rightarrow[0,1]\) is said to be the modulus of convexity of E if
$$ \delta_{E}(\epsilon)=\inf\biggl\{ 1-\frac{\Vert x+y\Vert }{2}:\Vert x\Vert \leq1, \Vert y\Vert \leq 1, \Vert x-y\Vert \geq\epsilon\biggr\} . $$

If \(\delta_{E}(\epsilon)>0\) for all \(\epsilon\in(0, 2]\), then E is uniformly convex.

Definition 2.2

The function \(\rho_{E}: [0, 1) \rightarrow[0, 1)\) is said to be the modulus of smoothness of E if
$$ \rho_{E}(t)=\sup\biggl\{ \frac{1}{2}\bigl(\Vert x+ty\Vert + \Vert x-ty\Vert \bigr)-1:\Vert x\Vert =\Vert y\Vert =1\biggr\} . $$
A Banach space E is said to be:
(1) uniformly smooth if \(\frac{\rho_{E}(t)}{t}\rightarrow0\) as \(t\rightarrow0\);
(2) q-uniformly smooth if there exists a fixed constant \(c > 0\) such that \(\rho_{E}(t)\leq ct^{q}\), where \(q\in(1,2]\).

It is known that a uniformly convex Banach space is reflexive and strictly convex.

Definition 2.3

A mapping \(T : C\rightarrow E\) is said to be:
(1) nonexpansive if
$$ \Vert Tx-Ty\Vert \leq \Vert x-y\Vert \quad \text{for all } x, y \in C; $$
(2) r-contractive if there exists \(r\in[0, 1)\) such that
$$ \Vert Tx-Ty\Vert \leq r\Vert x-y\Vert \quad \text{for all } x, y \in C; $$
(3) accretive if for all \(x, y \in C\), there exists \(j_{q}(x- y) \in J_{q}(x- y)\) such that
$$ \bigl\langle Tx-Ty, j_{q}(x-y)\bigr\rangle \geq0; $$
(4) η-strongly accretive if for all \(x, y \in C\), there exist \(\eta> 0\) and \(j_{q}(x-y) \in J_{q} (x - y)\) such that
$$ \bigl\langle Tx-Ty, j_{q}(x-y)\bigr\rangle \geq\eta \Vert x-y\Vert ^{q}; $$
(5) μ-inverse-strongly accretive if for all \(x, y \in C\), there exist \(\mu> 0\) and \(j_{q}(x-y) \in J_{q} (x - y)\) such that
$$ \bigl\langle Tx-Ty, j_{q}(x-y)\bigr\rangle \geq\mu \Vert Tx-Ty\Vert ^{q}. $$

Definition 2.4

A set-valued mapping \(T: \operatorname{Dom}(T) \rightarrow2^{E}\) is said to be:
(1) accretive if for any \(x, y \in \operatorname{Dom}(T)\), there exists \(j_{q}(x-y)\in J_{q}(x-y)\) such that for all \(u\in T(x)\) and \(v \in T(y)\),
$$ \bigl\langle u-v,j_{q}(x - y)\bigr\rangle \geq0; $$
(2) m-accretive if T is accretive and \((I + \rho T)(\operatorname{Dom}(T))= E\) for every (equivalently, for some) \(\rho> 0\), where I is the identity mapping.
Let \(M : \operatorname{Dom}(M)\rightarrow2^{E}\) be m-accretive. The mapping \(J_{M,\rho}: E \rightarrow \operatorname{Dom}(M)\) defined by
$$ J_{M,\rho}(u)=(I+\rho M)^{-1}(u),\quad \forall u\in E, $$
is called the resolvent operator associated with M, where ρ is any positive number and I is the identity mapping. It is well known that \(J_{M,\rho}\) is single-valued and nonexpansive.
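This fact can be illustrated numerically. Under the illustrative assumption \(E=\mathbb{R}\) with \(M(x)=x^{3}\) (which is m-accretive), the resolvent \(J_{M,\rho}(u)\) is the unique root of \(x+\rho x^{3}=u\); the sketch below computes it by bisection and checks nonexpansiveness on random pairs.

```python
# Illustration (assumed setting): on E = R the operator M(x) = x**3 is
# m-accretive, and J_{M,rho}(u) is the unique root of x + rho*x**3 = u,
# computed here by bisection.  We then check the nonexpansiveness
# |J(u) - J(v)| <= |u - v| on random pairs.
import random

def resolvent(u, rho=1.0, tol=1e-12):
    lo, hi = -abs(u) - 1.0, abs(u) + 1.0      # the root lies in [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid + rho * mid**3 < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(0)
for _ in range(1000):
    u, v = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(resolvent(u) - resolvent(v)) <= abs(u - v) + 1e-9
print("nonexpansiveness verified on 1000 random pairs")
```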

We need some facts and tools which are listed as lemmas below.

Lemma 2.5

([28])

Let E be a Banach space and \(J_{q}\) be a generalized duality mapping. Then, for any given \(x, y\in E\), the following inequality holds:
$$ \Vert x+y\Vert ^{q}\leq \Vert x\Vert ^{q}+q\bigl\langle y, j_{q}(x+y)\bigr\rangle ,\quad j_{q}(x+y)\in J_{q}(x+y). $$

Lemma 2.6

([29])

Let \(\{\alpha_{n}\}\) be a sequence of nonnegative numbers satisfying the property
$$ \alpha_{n+1} \leq(1- \gamma_{n} )\alpha_{n} + b_{n}+ \gamma_{n}c_{n}, \quad n\in \mathbb{N}, $$
where \(\{\gamma_{n} \}\), \(\{b_{n} \}\), \(\{c_{n} \}\) satisfy the restrictions:
(i) \(\sum_{n=1}^{\infty}\gamma_{n}=\infty\), \(\lim_{n\rightarrow \infty}\gamma_{n} =0\);
(ii) \(b_{n}\geq0\), \(\sum_{n=1}^{\infty}b_{n}<\infty\);
(iii) \(\limsup_{n\rightarrow\infty}c_{n} \leq0\).
Then \(\lim_{n\rightarrow\infty}\alpha_{n} =0\).

Lemma 2.7

([28])

Let \(q \in(1, 2]\) be given. If E is a real q-uniformly smooth Banach space, then there exists a constant \(C_{q} > 0\) such that
$$ \Vert x+y\Vert ^{q}\leq \Vert x\Vert ^{q}+q\bigl\langle y, j_{q}(x)\bigr\rangle +C_{q}\Vert y\Vert ^{q},\quad \forall x,y\in E. $$

Lemma 2.8

([30])

Let C be a nonempty closed convex subset of a real q-uniformly smooth Banach space E. Let the mapping \(A : C\rightarrow E\) be an α-inverse-strongly accretive operator. Then the following inequality holds:
$$ \bigl\Vert (I-\lambda A)x- (I-\lambda A)y\bigr\Vert ^{q}\leq \Vert x-y\Vert ^{q}-\lambda\bigl(q\alpha-C_{q} \lambda^{q-1}\bigr) \Vert Ax-Ay\Vert ^{q}. $$

In particular, if \(0 <\lambda\leq(\frac{q\alpha}{C_{q} })^{\frac{1}{q-1}}\), then \(I-\lambda A\) is nonexpansive.

Recall that if C and D are nonempty subsets of a Banach space E such that C is closed convex and \(D\subset C\), then a mapping \(Q : C\rightarrow D\) is sunny [31] provided
$$ Q\bigl(x + t \bigl(x-Q(x)\bigr)\bigr) = Q(x) $$
for all \(x \in C\) and \(t \geq0\), whenever \(Qx + t(x -Q(x)) \in C\). A mapping \(Q : C\rightarrow D\) is called a retraction if \(Qx = x\) for all \(x \in D\). Furthermore, Q is a sunny nonexpansive retraction from C onto D if Q is a retraction from C onto D which is also sunny and nonexpansive. A subset D of C is called a sunny nonexpansive retract of C if there exists a sunny nonexpansive retraction from C onto D. The following lemma collects some properties of the sunny nonexpansive retraction.

Lemma 2.9

([30, 31])

Let C be a closed convex subset of a smooth Banach space E. Let D be a nonempty subset of C. Let \(Q : C \rightarrow D\) be a retraction and let j, \(j_{q}\) be the normalized duality mapping and generalized duality mapping on E, respectively. Then the following are equivalent:
(i) Q is sunny and nonexpansive;
(ii) \(\Vert Qx-Qy\Vert ^{2}\leq\langle x-y, j(Qx-Qy)\rangle\), \(\forall x, y \in C\);
(iii) \(\langle x - Qx, j(y- Qx)\rangle\leq0\), \(\forall x \in C\), \(y \in D\);
(iv) \(\langle x - Qx, j_{q} (y- Qx)\rangle\leq0\), \(\forall x \in C\), \(y \in D\).

Lemma 2.10

Let \(A: C\rightarrow E\) and \(M: C\supseteq \operatorname{Dom}(M)\rightarrow2^{E}\) be two nonlinear operators. Define \(J_{r}\) by
$$ J_{r}:=J_{M, r}=(I+rM)^{-1} $$
and \(T_{r}\) by
$$ T_{r}:=J_{r}(I-rA)=(I+rM)^{-1}(I-rA). $$

Then it holds for all \(r > 0\) that \(\operatorname{Fix}(T_{r}) = \operatorname {VI}(E, A, M)\).

Proof

From the definition of \(T_{r}\), it follows that
$$\begin{aligned} x=T_{r}x \quad \Longleftrightarrow\quad & x=(I+rM)^{-1}(I-rA)x \\ \quad \Longleftrightarrow\quad &(I-rA)x\in(I+rM)x \\ \quad \Longleftrightarrow\quad &0\in(A+M)x \\ \quad \Longleftrightarrow\quad & x\in\operatorname{VI}(E, A, M). \end{aligned}$$
This completes the proof. □

Lemma 2.11

([24])

Assume that C is a nonempty closed subset of a real uniformly convex and q-uniformly smooth Banach space E. Suppose that \(A: C \rightarrow E\) is α-inverse-strongly accretive and M is an m-accretive operator in E, with \(\operatorname{Dom}(M)\subseteq C\). Then it holds that:
(i) Given \(0 < s\leq r\) and \(x\in E\),
$$ \Vert T_{s}x-T_{r}x\Vert \leq\biggl\vert 1- \frac{s}{r} \biggr\vert \Vert x-T_{r}x\Vert \quad \textit{and} \quad \Vert x-T_{s}x\Vert \leq2\Vert x-T_{r}x\Vert . $$
(ii) Given \(k> 0\), there exists a continuous, strictly increasing and convex function \(\phi_{q}: [0, \infty)\rightarrow[0, \infty)\) with \(\phi_{q} (0) = 0\) such that for all \(x, y \in\mathscr{B}_{k}\),
$$\begin{aligned} \|{T_{r}x-T_{r}y}\|^{q} \leq& \|{x-y} \|^{q}-r\bigl(\alpha q-r^{q-1}C_{q}\bigr)\|{Ax-Ay} \|^{q} \\ &{}-\phi_{q} \bigl(\bigl\Vert {(I-J_{r}) (I-rA)x -(I-J_{r}) (I-rA)y}\bigr\Vert \bigr). \end{aligned}$$

3 Main results

For every \(i = 1, 2,\ldots,N\), let \(A_{i}:C\rightarrow E\) and \(M:C\supseteq \operatorname{Dom}(M)\rightarrow2^{E}\) be nonlinear mappings. From (2.1), we introduce the combination of variational inclusion problems in Banach spaces as follows: find a point \(x^{*}\in C\) such that
$$ 0\in\Biggl(\sum_{i=1}^{N} \lambda_{i}A_{i}+M\Biggr)x^{*}, $$
(3.1)
where \(\lambda_{i}\) is a real positive number for all \(i = 1, 2,\ldots,N\) with \(\sum_{i=1}^{N}\lambda_{i}=1\). The set of solutions of (3.1) in Banach spaces is denoted by \(\operatorname{VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\).

To prove the strong convergence results, we also need the following four lemmas.

Lemma 3.1

Let C be a nonempty closed convex subset of a real q-uniformly smooth Banach space E. Let \(N\geq1\) be some positive integer, \(A_{i}: C\rightarrow E\) be \(\eta_{i}\)-inverse-strongly accretive with \(\eta =\min\{\eta_{1}, \eta_{2},\ldots, \eta_{N}\}\), and M be m-accretive in E with \(\operatorname{Dom}(M)\subseteq C\). Let \(\{\lambda_{i}\}\) be a real number sequence in \((0,1)\) with \(\sum_{i=1}^{N}\lambda_{i}=1\) and \(\operatorname {VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\neq\emptyset\). Then
$$ \operatorname{VI}\Biggl(E,\sum_{i=1}^{N} \lambda_{i}A_{i}, M\Biggr)=\bigcap _{i=1}^{N} \operatorname {VI}(E, A_{i}, M). $$

Proof

It is obvious that \(\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\subseteq\operatorname {VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\). Next we prove that
$$ \operatorname{VI}\Biggl(E,\sum_{i=1}^{N} \lambda_{i}A_{i}, M\Biggr)\subseteq\bigcap _{i=1}^{N} \operatorname{VI}(E, A_{i}, M). $$
Suppose that \(x_{1}\in \operatorname{VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\) and \(x_{2}\in\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\). We have from Lemma 2.10 that
$$ x_{1}\in\operatorname{Fix}\Biggl(J_{r}\Biggl(I-r\sum _{i=1}^{N}\lambda_{i}A_{i} \Biggr)\Biggr). $$
Since \(\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M) \subseteq\operatorname {VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\), we have \(x_{2}\in\operatorname {VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\). Again, from Lemma 2.10, we have
$$ x_{2}\in\operatorname{Fix}\Biggl(J_{r}\Biggl(I-r\sum _{i=1}^{N}\lambda_{i}A_{i} \Biggr)\Biggr). $$
In light of the nonexpansiveness of \(J_{r}\), we deduce that
$$\begin{aligned} \Vert x_{1}-x_{2} \Vert ^{q} =&\Biggl\Vert J_{r}\Biggl(I-r\sum_{i=1}^{N} \lambda_{i}A_{i}\Biggr)x_{1}-J_{r} \Biggl(I-r\sum_{i=1}^{N}\lambda_{i}A_{i} \Biggr)x_{2} \Biggr\Vert ^{q} \\ \leq&\Biggl\Vert \Biggl(I-r\sum_{i=1}^{N} \lambda_{i}A_{i}\Biggr)x_{1}- \Biggl(I-r\sum _{i=1}^{N}\lambda_{i}A_{i} \Biggr)x_{2}\Biggr\Vert ^{q} \\ =&\Biggl\Vert ( x_{1}-x_{2})-r\Biggl(\sum _{i=1}^{N}\lambda_{i}A_{i}x_{1}- \sum_{i=1}^{N}\lambda_{i}A_{i}x_{2} \Biggr)\Biggr\Vert ^{q} \\ \leq&\Vert x_{1}-x_{2}\Vert ^{q}-qr \sum _{i=1}^{N}\lambda _{i}\bigl\langle A_{i}x_{1}-A_{i}x_{2}, j_{q}( x_{1}-x_{2})\bigr\rangle \\ &{}+C_{q}r^{q}\sum_{i=1}^{N} \lambda_{i}\Vert A_{i}x_{1}-A_{i}x_{2} \Vert ^{q} \\ \leq&\Vert x_{1}-x_{2}\Vert ^{q}-qr\sum _{i=1}^{N}\lambda_{i} \eta_{i}\Vert A_{i}x_{1}-A_{i}x_{2} \Vert ^{q}+C_{q}r^{q}\sum _{i=1}^{N}\lambda_{i}\Vert A_{i}x_{1}-A_{i}x_{2}\Vert ^{q} \\ \leq&\Vert x_{1}-x_{2}\Vert ^{q}-q r\eta\sum _{i=1}^{N}\lambda _{i} \Vert A_{i}x_{1}-A_{i}x_{2}\Vert ^{q}+C_{q}r^{q}\sum_{i=1}^{N} \lambda _{i}\Vert A_{i}x_{1}-A_{i}x_{2} \Vert ^{q} \\ \leq&\Vert x_{1}-x_{2}\Vert ^{q}-r\sum _{i=1}^{N}\lambda_{i}\bigl(q \eta-C_{q}r^{q-1} \bigr)\Vert A_{i}x_{1}-A_{i}x_{2} \Vert ^{q}, \end{aligned}$$
which means that
$$ r\sum_{i=1}^{N}\lambda_{i}\bigl(q \eta-C_{q}r^{q-1} \bigr)\Vert A_{i}x_{1}-A_{i}x_{2} \Vert ^{q}\leq0. $$
By Lemma 2.10, without loss of generality, we may assume \(r\in (0,(\frac{q\eta}{C_{q} })^{\frac{1}{q-1}} )\). We then deduce that
$$ A_{i}x_{1}=A_{i}x_{2},\quad \forall i=1,2,\ldots,N. $$
(3.2)
Again since \(x_{1}\in \operatorname{VI}(E,\sum_{i=1}^{N}\lambda_{i}A_{i}, M)\) and \(x_{2}\in\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M) \), we find that
$$ 0\in\sum_{i=1}^{N}\lambda_{i}A_{i}x_{1}+Mx_{1} $$
(3.3)
and
$$ 0\in \sum_{i=1}^{N}\lambda_{i}A_{i}x_{2}+Mx_{2}. $$
(3.4)
We derive from (3.3) and (3.4) that
$$ 0\in\sum_{i=1}^{N}\lambda_{i}A_{i}x_{1}+Mx_{1} -\sum_{i=1}^{N}\lambda _{i}A_{i}x_{2}-Mx_{2}. $$
(3.5)
It then follows from (3.2) and (3.5) that
$$ 0\in Mx_{1} -Mx_{2}. $$
By virtue of \(x_{2}\in\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\) and (3.2), we see
$$ 0\in Mx_{1} -Mx_{2}+Mx_{2}+ A_{i}x_{2}=Mx_{1}+ A_{i}x_{2}=Mx_{1}+ A_{i}x_{1} $$
(3.6)
for all \(i =1,2,\ldots,N\), which yields that
$$ x_{1}\in\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M). $$
Hence, we obtain the desired result. □
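Lemma 3.1 can also be checked numerically in a simple scalar setting; the operators below are illustrative assumptions, with \(M=\partial|\cdot|\) so that \(J_{M,r}\) is the soft-thresholding map.

```python
# Scalar check of Lemma 3.1 (an illustration, not the general proof):
# M = d|.| with soft-thresholding resolvent, A1 x = x - 2 and
# A2 x = 2x - 3 are inverse-strongly accretive, lambda_1 = lambda_2 = 1/2.
# Each inclusion 0 in A_i x + Mx, and the combined one, has solution x = 1.

def soft(u, r):
    return max(u - r, 0.0) if u >= 0 else min(u + r, 0.0)

def solve(F, r=0.4, x=10.0, steps=500):
    # fixed-point iteration x = J_{M,r}((I - rF)x), cf. Lemma 2.10
    for _ in range(steps):
        x = soft(x - r * F(x), r)
    return x

A1 = lambda x: x - 2.0
A2 = lambda x: 2.0 * x - 3.0
V = lambda x: 0.5 * A1(x) + 0.5 * A2(x)   # combined operator

print(round(solve(A1), 6), round(solve(A2), 6), round(solve(V), 6))
# prints 1.0 1.0 1.0
```

All three inclusions return the same solution, matching the asserted equality of solution sets.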

Lemma 3.2

Let E, C, M, η, \(\lambda_{i}\) and \(A_{i}\) be the same as those in Lemma  3.1. Then the mapping \(\sum_{i=1}^{N}\lambda_{i}A_{i}\) is η-inverse-strongly accretive.

Proof

Let \(x, y \in C\). It follows that
$$\begin{aligned}& \Biggl\langle \sum_{i=1}^{N} \lambda_{i}A_{i}x -\sum_{i=1}^{N} \lambda_{i}A_{i}y, j_{q}(x-y) \Biggr\rangle \\& \quad = \sum _{i=1}^{N}\lambda_{i} \bigl\langle A_{i}x-A_{i}y, j_{q}(x -y) \bigr\rangle \\& \quad \geq\sum_{i=1}^{N}\lambda_{i} \eta_{i} \Vert A_{i}x -A_{i}y \Vert ^{q} \\& \quad \geq\sum_{i=1}^{N}\lambda_{i} \eta \Vert A_{i}x - A_{i}y\Vert ^{q} \\& \quad \geq \eta\Biggl\Vert \sum_{i=1}^{N} \lambda_{i} A_{i}x -\sum_{i=1}^{N} \lambda_{i} A_{i}y\Biggr\Vert ^{q}. \end{aligned}$$
Consequently, the mapping \(\sum_{i=1}^{N}\lambda_{i}A_{i}\) is η-inverse-strongly accretive. □

Lemma 3.3

Assume that C is a nonempty closed subset of a real uniformly convex and q-uniformly smooth Banach space E. Let \(S:C\rightarrow C\) be nonexpansive, \(A:C\rightarrow E\) be η-inverse-strongly accretive, and \(M:\operatorname{Dom}(M)\rightarrow2^{E}\) be m-accretive with \(\operatorname{Dom}(M)\subseteq C\). Assume \(r\in(0, (\frac{q\eta}{C_{q} })^{\frac{1}{q-1}})\) and \(\operatorname{Fix}(S)\cap\operatorname {Fix}(T_{r})\neq\emptyset\). Then \(\operatorname{Fix}(ST_{r})=\operatorname{Fix}(T_{r}S)=\operatorname{Fix}(S)\cap\operatorname{Fix}(T_{r})\).

Proof

It is easy to check that \(\operatorname{Fix}(S)\cap\operatorname {Fix}(T_{r})\subseteq\operatorname{Fix}(ST_{r})\) and \(\operatorname{Fix}(S)\cap \operatorname{Fix}(T_{r})\subseteq\operatorname{Fix}(T_{r}S)\). We are left to show that \(\operatorname{Fix}(ST_{r})\subseteq\operatorname{Fix}(S)\cap\operatorname {Fix}(T_{r}) \) and \(\operatorname{Fix}(T_{r}S)\subseteq\operatorname{Fix}(S)\cap \operatorname{Fix}(T_{r})\).

We first prove \(\operatorname{Fix}(ST_{r})\subseteq\operatorname{Fix}(S)\cap \operatorname{Fix}(T_{r})\). Suppose that \(\hat{x}\in \operatorname{Fix}(ST_{r})\) and \(\tilde{x}\in\operatorname{Fix}(S)\cap\operatorname{Fix}(T_{r})\). We have by Lemma 2.11 that
$$\begin{aligned} \Vert \hat{x}-\tilde{x}\Vert ^{q} =&\Vert ST_{r} \hat{x}-ST_{r}\tilde{x}\Vert ^{q} \\ \leq&\Vert T_{r}\hat{x}-T_{r}\tilde{x}\Vert ^{q} \\ \leq&\Vert \hat{x}-\tilde{x}\Vert ^{q}-r\bigl(\eta q-r^{q-1}C_{q}\bigr)\Vert A\hat{x}-A\tilde{x}\Vert ^{q} \\ &{}-\phi_{q}\bigl(\bigl\Vert (I-J_{r}) (I-rA) \hat{x}-(I-J_{r}) (I-rA)\tilde{x}\bigr\Vert \bigr). \end{aligned}$$
Hence, we have from \(r\in(0,(\frac{q\eta}{C_{q} })^{\frac{1}{q-1}})\) and the property of \(\phi_{q}\) that
$$ \Vert A\hat{x}-A\tilde{x}\Vert =\bigl\Vert (I-J_{r}) (I-rA) \hat{x}-(I-J_{r}) (I-rA)\tilde{x}\bigr\Vert =0. $$
It follows that
$$ \Vert \hat{x}-T_{r} \hat{x}-\tilde{x}+T_{r} \tilde{x} \Vert =0. $$
Hence, we have
$$ T_{r} \hat{x} = \hat{x}. $$
By the assumption of \(\hat{x}\in\operatorname{Fix}(ST_{r})\), we have \(\hat {x}=S\hat{x}\). This means that \(\hat{x}\in\operatorname{Fix}(S)\cap\operatorname{Fix}(T_{r})\).
We now prove \(\operatorname{Fix}( T_{r}S)\subseteq\operatorname{Fix}(S)\cap \operatorname{Fix}(T_{r})\). Suppose that \(\tilde{u}\in \operatorname{Fix}(T_{r}S)\) and \(\hat{u}\in\operatorname{Fix}(S)\cap\operatorname{Fix}(T_{r})\). Repeating the above argument, we get that
$$ \Vert AS\tilde{u}-AS\hat{u}\Vert =\bigl\Vert (I-J_{r}) (I-rA)S \tilde{u}-(I-J_{r}) (I-rA)S\hat{u}\bigr\Vert =0. $$
It follows that
$$ \Vert S\tilde{u}-T_{r}S \tilde{u}-\hat{u}+T_{r}S \hat{u} \Vert =0. $$
Hence, we have
$$ S\tilde{u} =T_{r} S\tilde{u}. $$
By the assumption of \(\tilde{u}\in\operatorname{Fix}(T_{r}S)\), we have \(\tilde{u}=S\tilde{u}\) and \(\tilde{u}=T_{r} \tilde{u}\). This means that \(\tilde{u}\in\operatorname{Fix}(S)\cap\operatorname{Fix}(T_{r})\), which implies \(\operatorname{Fix}( T_{r}S)\subseteq\operatorname{Fix}(S)\cap\operatorname {Fix}(T_{r})\). □

Lemma 3.4

Let C be a nonempty closed convex subset of a q-uniformly smooth Banach space E, and let \(A,B : C\rightarrow E\) be two nonlinear mappings. Let \(Q_{C}\) be a sunny nonexpansive retraction from E onto C. For all \(\lambda, \mu>0\) and \(a \in[0, 1]\), define a mapping
$$ Gx:= Q_{C}(I-\lambda A) \bigl(aI+(1-a)Q_{C}(I-\mu B) \bigr)x,\quad \forall x\in C. $$

Then \((x^{*}, y^{*})\) is a solution of problem (1.7) if and only if \(x^{*} = Gx^{*}\), where \(y^{*}= Q_{C}(I-\mu B) x^{*}\).

Proof

First, we prove ‘\(\Rightarrow\)’.

Let \((x^{*}, y^{*})\) be a solution of (1.7), and we have
$$ \left \{ \textstyle\begin{array}{l} \langle x^{*}-(I-\lambda A)(ax^{*}+(1-a)y^{*}), j_{q}(x-x^{*}) \rangle\geq0,\quad \forall x \in C, \\ \langle y^{*}-(I-\mu B)x^{*}, j_{q}(x-y^{*} ) \rangle\geq0, \quad \forall x \in C. \end{array}\displaystyle \right . $$
From Lemma 2.9, we have
$$ x^{*}=Q_{C}(I-\lambda A) \bigl(ax^{*}+(1-a)y^{*}\bigr) $$
and \(y^{*}= Q_{C}(I-\mu B)x^{*}\).
It follows that
$$ x^{*}= Q_{C}(I-\lambda A) \bigl(aI+(1-a)Q_{C}(I-\mu B) \bigr)x^{*}=Gx^{*}, $$
which implies that \(x^{*}\in\operatorname{Fix}(G)\), where \(y^{*}= Q_{C}(I-\mu B) x^{*}\).

Next we prove ‘\(\Leftarrow\)’.

Let \(x^{*}\in\operatorname{Fix}(G)\) and \(y^{*}= Q_{C}(I-\mu B) x^{*}\). Then
$$x^{*}=Gx^{*}= Q_{C}(I-\lambda A) \bigl(aI+(1-a)Q_{C}(I-\mu B) \bigr)x^{*}=Q_{C}(I-\lambda A) \bigl(ax^{*}+(1-a)y^{*}\bigr). $$
It follows from Lemma 2.9 that
$$ \left \{ \textstyle\begin{array}{l} \langle x^{*}-(I-\lambda A)(ax^{*}+(1-a)y^{*}), j_{q}(x-x^{*}) \rangle\geq0,\quad \forall x \in C, \\ \langle y^{*}-(I-\mu B)x^{*}, j_{q}(x-y^{*} ) \rangle\geq0, \quad \forall x \in C. \end{array}\displaystyle \right . $$
Then we find that \((x^{*}, y^{*})\) is a solution of problem (1.7). □

Example 3.5

([11])

Let \(\mathbb{R}\) be a real line with the Euclidean norm and let \(A,B: \mathbb{R}\rightarrow\mathbb{R}\) be defined by \(Ax=\frac{x-1}{4}\) and \(Bx =\frac{x-1}{2}\) for all \(x \in\mathbb {R}\). The mapping \(G : \mathbb{R}\rightarrow\mathbb{R}\) is defined by
$$ Gx:=(I-2A) \biggl(\frac{1}{2}I+\frac{1}{2}(I-3B)\biggr)x $$
for all \(x\in\mathbb{R}\). Then \(1\in\operatorname{Fix}(G)\) and \((1, 1)\) is a solution of problem (1.7).
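This example is easy to verify numerically; the sketch below assumes \(Q_{C}=I\) on \(E=\mathbb{R}\), with \(\lambda=2\), \(\mu=3\) and \(a=\frac{1}{2}\) read off from the definition of G.

```python
# Numerical check of Example 3.5: with Ax = (x-1)/4, Bx = (x-1)/2,
# the mapping Gx = (I - 2A)(0.5*I + 0.5*(I - 3B))x has x = 1 as a
# fixed point, so (1, 1) solves problem (1.7).
A = lambda x: (x - 1.0) / 4.0
B = lambda x: (x - 1.0) / 2.0

def G(x):
    y = 0.5 * x + 0.5 * (x - 3.0 * B(x))   # aI + (1-a)(I - mu*B), a = 1/2
    return y - 2.0 * A(y)                  # apply I - lambda*A, lambda = 2

print(G(1.0))  # prints 1.0
```

In fact G simplifies to \(Gx=0.125x+0.875\), a contraction whose unique fixed point is 1.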

Theorem 3.6

Let E be a uniformly convex and q-uniformly smooth Banach space. Let \(N\geq1\) be some positive integer and let \(A_{i}: C\rightarrow E\) be \(\eta_{i}\)-inverse-strongly accretive with \(\eta =\min\{\eta_{1}, \eta_{2},\ldots, \eta_{N}\}\). Let M be m-accretive on E with \(\operatorname{Dom}(M)\subseteq C\), and let \(f: C\rightarrow C\) be r-contractive. Let \(A,B:C\rightarrow E\) be α- and β-inverse-strongly accretive, respectively. Define a mapping \(Gx:= Q_{C}(I-\lambda A)(aI+(1-a)Q_{C}(I-\mu B))x\) for all \(x\in C\) and \(a\in[0,1]\). Assume \(\lambda\in(0,(\frac{q\alpha}{C_{q} })^{\frac{1}{q-1}} )\), \(\mu\in(0,(\frac{q\beta}{C_{q} })^{\frac{1}{q-1}} )\) and \(\operatorname{Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\neq\emptyset\). For arbitrarily given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated iteratively by
$$ x_{n+1}=\alpha_{n} f(x_{n})+(1- \alpha_{n})J_{r_{n}} \Biggl(\Biggl(I-r_{n}\sum _{i=1}^{N}\lambda_{i}A_{i} \Biggr)Gx_{n}+e_{n}\Biggr), $$
(3.7)
where \(\{e_{n}\}_{n=1}^{\infty}\subset E\), \(\{\alpha_{n}\}_{n=1}^{\infty}\subset [0,1]\), \(\{\lambda_{i}\}_{i=1}^{N}\subset(0,1)\) and \(\{r_{n}\}_{n=1}^{\infty}\subset (0, +\infty)\) satisfy the following conditions:
(i) \(\sum_{n=1}^{\infty} \Vert e_{n}\Vert < \infty\);
(ii) \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\lim_{n\rightarrow \infty}\alpha_{n}=0\) and \(\sum_{n=1}^{\infty} \vert \alpha_{n+1}-\alpha_{n}\vert <\infty\);
(iii) \(0<\liminf_{n\rightarrow\infty}r_{n}\leq\limsup_{n\rightarrow\infty}r_{n}<(\frac{q\eta}{C_{q}})^{\frac{1}{q-1}}\) and \(\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty\);
(iv) \(\sum_{i=1}^{N} \lambda_{i}=1\).
Then \(\{x_{n}\}\) converges strongly to some point \(x\in\operatorname {Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\), which solves the variational inequality
$$ \bigl\langle f(x)- x, j_{q}(p-x) \bigr\rangle \leq0,\quad p\in \operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M). $$
Moreover, \((x,y)\) solves problem (1.7), where \(y=Q_{C}(I-\mu B)x\).
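Iteration (3.7) can be sketched numerically in the scalar case. All concrete choices below are illustrative assumptions: \(Q_{C}=I\) on \(E=\mathbb{R}\), \(e_{n}=0\), constant \(r_{n}\), \(M=\partial|\cdot|\) (soft-thresholding resolvent), G as in Example 3.5, and two inverse-strongly accretive maps \(A_{1},A_{2}\) sharing the common solution \(x=1\).

```python
# Scalar sketch of iteration (3.7): Q_C = I, e_n = 0, r_n = 0.4,
# alpha_n = 1/(n+1) (illustrative assumptions).  M = d|.| (resolvent =
# soft-thresholding), A1 x = x - 2 and A2 x = 2x - 3 with weights 1/2,
# G as in Example 3.5, and the 1/2-contraction f(x) = (x + 1)/2.
# Every subproblem has the common solution x = 1.

def soft(u, r):
    return max(u - r, 0.0) if u >= 0 else min(u + r, 0.0)

def G(x):                                # mapping of Example 3.5
    y = 0.5 * x + 0.5 * (x - 1.5 * (x - 1.0))
    return y - 0.5 * (y - 1.0)

V = lambda x: 0.5 * (x - 2.0) + 0.5 * (2.0 * x - 3.0)  # sum lambda_i A_i
f = lambda x: (x + 1.0) / 2.0

x, r = 10.0, 0.4
for n in range(1, 2001):
    alpha = 1.0 / (n + 1)
    x = alpha * f(x) + (1 - alpha) * soft(G(x) - r * V(G(x)), r)
print(round(x, 6))  # prints 1.0
```

The iterates converge to the common solution \(x=1\), as the theorem predicts in this toy setting.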

Proof

Let \(\{y_{n}\}\) be a sequence generated by
$$ y_{n+1}=\alpha_{n} f(y_{n})+(1- \alpha_{n})T_{n}Gy_{n}, $$
(3.8)
where \(T_{n}:=J_{r_{n}}(I-r_{n}\sum_{i=1}^{N}\lambda_{i}A_{i})\). Hence, to show the desired result, it suffices to prove that \(y_{n}\rightarrow x\). Indeed, by virtue of Lemma 2.8, Lemma 3.2, condition (iii), \(\lambda \in(0,(\frac{q\alpha}{C_{q} })^{\frac{1}{q-1}} )\) and \(\mu\in(0,(\frac{q\beta}{C_{q} })^{\frac{1}{q-1}} )\), we find that \(T_{n}:C\rightarrow C\) and \(G:C\rightarrow C\) are nonexpansive. Hence,
$$\begin{aligned}& \Vert y_{n+1}-x_{n+1}\Vert \\& \quad \leq\alpha_{n}r\|{ y_{n}-x_{n}}\|+(1- \alpha_{n})\Biggl\| J_{r_{n}}\Biggl(I-r_{n}\sum _{i=1}^{N}\lambda_{i}A_{i} \Biggr)Gy_{n} \\& \qquad {} -J_{r_{n}}\Biggl(\Biggl(I-r_{n}\sum _{i=1}^{N}\lambda_{i}A_{i} \Biggr)Gx_{n}+e_{n}\Biggr)\Biggr\| \\& \quad \leq\alpha_{n}r\Vert y_{n}-x_{n} \Vert +(1-\alpha_{n})\Biggl\Vert \Biggl(I-r_{n}\sum _{i=1}^{N}\lambda_{i}A_{i} \Biggr)Gy_{n} -\Biggl(I-r_{n}\sum_{i=1}^{N} \lambda_{i}A_{i}\Biggr)Gx_{n}\Biggr\Vert +\Vert e_{n}\Vert \\& \quad \leq\bigl[1-\alpha_{n}(1-r)\bigr]\Vert y_{n}-x_{n} \Vert +\Vert e_{n}\Vert . \end{aligned}$$
(3.9)
By virtue of Lemma 2.6 and (3.9), we see \(\lim_{n\rightarrow\infty} \Vert y_{n}-x_{n}\Vert =0\).

First, we prove that the sequence \(\{y_{n}\}\) is bounded.

Taking \(x\in \operatorname{Fix}(G)\cap\bigcap_{i=1}^{N}(A_{i}+M)^{-1}(0)\), we find \(x\in \operatorname{Fix}(G)\cap\operatorname{Fix}(T_{n})\) by Lemma 2.10 and Lemma 3.1. It follows from (3.8) and Lemma 3.3 that
$$\begin{aligned} \Vert y_{n+1}-x\Vert &=\bigl\Vert \alpha _{n}f(y_{n})+(1- \alpha_{n})T_{n}Gy_{n}-x\bigr\Vert \\ &\leq\alpha_{n}\bigl\Vert f(y_{n})-x\bigr\Vert +(1- \alpha_{n}) \Vert T_{n}Gy_{n}-x\Vert \\ &\leq\alpha_{n}\bigl\Vert f(y_{n})-f(x)\bigr\Vert + \alpha_{n} \bigl\Vert f(x)-x\bigr\Vert +(1-\alpha_{n}) \Vert T_{n}Gy_{n}-x \Vert \\ &\leq\alpha_{n} r\Vert y_{n} - x \Vert + \alpha_{n}\bigl\Vert f(x)-x\bigr\Vert +(1-\alpha_{n}) \Vert y_{n}-x\Vert \\ &=\bigl[1-\alpha_{n}(1-r)\bigr]\Vert y_{n} - x \Vert + \alpha_{n} \bigl\Vert f(x)-x\bigr\Vert \\ &\leq\max\biggl\{ \frac{\Vert f(x)-x\Vert }{1-r},\Vert y_{n}-x\Vert \biggr\} . \end{aligned}$$
By induction, we have
$$ \Vert y_{n}-x\Vert \leq\max\biggl\{ \frac{\Vert f(x)-x\Vert }{1-r}, \Vert y_{1}-x\Vert \biggr\} , \quad \forall n\geq1. $$
Hence, \(\{y_{n}\}\) is bounded, and so are \(\{f(y_{n})\}\), \(\{T_{n}(y_{n})\}\) and \(\{T_{n}G(y_{n})\}\).
Next, we prove that
$$ \lim_{n\rightarrow\infty} \Vert y_{n+1}- y_{n}\Vert =0. $$
(3.10)
Write \(V=\sum_{i=1}^{N}\lambda_{i}A_{i}\). Noticing Lemma 3.2, we get that the mapping V is η-inverse-strongly accretive. Putting \(z_{n}=T_{n}Gy_{n}\), we derive from Lemma 2.11 that
$$\begin{aligned}& \Vert z_{n+1}-z_{n}\Vert \\& \quad =\Vert T_{n+1}Gy_{n+1}-T_{n}Gy_{n} \Vert \\& \quad \leq \Vert T_{n+1}Gy_{n+1}-T_{n}Gy_{n+1} \Vert +\Vert T_{n}Gy_{n+1}-T_{n}Gy_{n} \Vert \\& \quad \leq\biggl\vert 1-\frac{r_{\alpha_{n}}}{r_{\beta_{n}}}\biggr\vert \bigl\Vert Gy_{n+1}-J_{r_{\beta_{n}}}(I-r_{\beta_{n}}V)Gy_{n+1} \bigr\Vert +\Vert Gy_{n+1}-Gy_{n}\Vert \\& \quad \leq \vert r_{\beta_{n}}-r_{\alpha_{n}}\vert \frac{ \Vert Gy_{n+1}-J_{r_{\beta_{n}}}(I-r_{\beta_{n}} V)Gy_{n+1}\Vert }{r_{\beta_{n}}} + \Vert y_{n+1}-y_{n}\Vert \\& \quad \leq \vert r_{n+1}-r_{n}\vert M_{1} + \Vert y_{n+1}-y_{n}\Vert , \end{aligned}$$
(3.11)
where \(M_{1}>\sup_{n\geq1}\{\frac{\Vert Gy_{n+1}-J_{r_{\beta _{n}}}(I-r_{\beta_{n}} V)Gy_{n+1}\Vert }{r_{\beta_{n}}}\}\), \(r_{\alpha_{n}}=\min\{ r_{n+1},r_{n}\}\) and \(r_{\beta_{n}}=\max\{ r_{n+1},r_{n}\}\).
Combining (3.8) and (3.11), we find that
$$\begin{aligned}& \Vert y_{n+1}-y_{n}\Vert \\& \quad =\bigl\Vert \alpha_{n}f(y_{n})+(1- \alpha_{n})z_{n}-\alpha _{n-1}f(y_{n-1})-(1- \alpha_{n-1})z_{n-1} \bigr\Vert \\& \quad =\bigl\Vert (\alpha_{n}-\alpha_{n-1}) \bigl(f(y_{n-1})-z_{n-1}\bigr)+(1-\alpha _{n}) (z_{n}-z_{n-1})\bigr\Vert \\& \qquad {} +\alpha_{n}\bigl\Vert f(y_{n})-f(y_{n-1}) \bigr\Vert \\& \quad \leq \vert \alpha_{n}-\alpha_{n-1}\vert \bigl\Vert f(y_{n-1})-z_{n-1}\bigr\Vert +(1-\alpha_{n}) \Vert z_{n}-z_{n-1}\Vert +\alpha_{n}r\Vert y_{n}- y_{n-1}\Vert \\& \quad \leq \vert \alpha_{n}-\alpha_{n-1}\vert M_{2}+(1-\alpha _{n})\Vert z_{n}-z_{n-1} \Vert +\alpha_{n}r\Vert y_{n}- y_{n-1}\Vert \\& \quad \leq \vert \alpha_{n}-\alpha_{n-1}\vert M_{2}+\vert r_{n}-r_{n-1}\vert M_{1} +\bigl[1-\alpha_{n}(1-r)\bigr]\Vert y_{n}-y_{n-1} \Vert , \end{aligned}$$
where \(M_{2}>\sup_{n\geq1}\{\Vert f(y_{n})-z_{n}\Vert \}\). It follows from Lemma 2.6, (ii) and (iii) that \(\lim_{n\rightarrow\infty} \Vert y_{n+1}-y_{n}\Vert =0\).
Again, using Lemma 2.5, Lemma 2.11 and Lemma 3.3, we obtain
$$\begin{aligned}& \Vert y_{n+1}-x\Vert ^{q} \\& \quad =\bigl\Vert \alpha_{n}\bigl(f(y_{n})-x\bigr)+(1- \alpha_{n}) (T_{n}Gy_{n}-x)\bigr\Vert ^{q} \\& \quad \leq(1-\alpha_{n})\Vert T_{n}Gy_{n}-x \Vert ^{q}+q\alpha_{n} \bigl\langle f(y_{n})-x, j_{q}(y_{n+1}-x)\bigr\rangle \\& \quad \leq \Vert T_{n}Gy_{n}-x\Vert ^{q}+q \alpha_{n} M_{3} \\& \quad \leq \Vert y_{n}-x\Vert ^{q} -r_{n} \bigl(\eta q-r_{n}^{q-1}C_{q}\bigr)\Vert VGy_{n}-VGx\Vert ^{q} \\& \qquad {} -\phi_{q} \bigl(\Vert Gy_{n}-r_{n}VGy_{n}-T_{n}Gy_{n}+r_{n}VGx \Vert \bigr)+q\alpha_{n} M_{3}, \end{aligned}$$
where \(M_{3}>\sup_{n\geq1}\{ \langle f(y_{n})-x, j_{q}(y_{n+1}-x)\rangle\}\). Meanwhile, by the fact that \(a^{r}-b^{r}\leq ra^{r-1}(a-b)\) for all \(r\geq1\), we find that
$$\begin{aligned}& r_{n}\bigl(\eta q-r_{n}^{q-1}C_{q} \bigr)\Vert VGy_{n}-VGx\Vert ^{q} \\& \qquad {} +\phi_{q} \bigl(\Vert Gy_{n}-r_{n}VGy_{n}-T_{n}Gy_{n}+r_{n}VGx \Vert \bigr) \\& \quad \leq \Vert y_{n}-x\Vert ^{q} -\Vert y_{n+1}-x \Vert ^{q}+q\alpha_{n} M_{3} \\& \quad \leq q\Vert y_{n}-x\Vert ^{q-1}\bigl(\Vert y_{n}-x \Vert -\Vert y_{n+1}-x\Vert \bigr)+q \alpha_{n} M_{3}. \end{aligned}$$
(3.12)
It follows immediately from (ii), (iii), (3.12) and the property of \(\phi_{q}\) that
$$ \lim_{n\rightarrow\infty} \Vert VGy_{n}-VGx\Vert =\lim _{n\rightarrow\infty} \Vert Gy_{n}-r_{n}VGy_{n}-T_{n}Gy_{n}+r_{n}VGx \Vert =0, $$
which implies that
$$ \lim_{n\rightarrow\infty} \Vert T_{n}Gy_{n}-Gy_{n} \Vert =0. $$
(3.13)
In view of condition (iii), there exists \(\varepsilon> 0\) such that \(r_{n}\geq\varepsilon\) for all \(n\geq1\). Then we get, by Lemma 2.11, that
$$ \lim_{n\rightarrow\infty} \Vert T_{\varepsilon}G y_{n}-Gy_{n} \Vert \leq \lim_{n\rightarrow\infty}2\Vert T_{n} Gy_{n}-Gy_{n}\Vert =0. $$
(3.14)

We show \(\lim_{n\rightarrow\infty} \Vert T_{\varepsilon}Gy_{n}-y_{n}\Vert =0\).

Thanks to (3.10), (3.13), (3.14) and (ii), we see
$$\begin{aligned}& \Vert T_{\varepsilon}G y_{n}-y_{n}\Vert \\& \quad \leq \Vert T_{\varepsilon}Gy_{n}- T_{n} Gy_{n}\Vert +\Vert T_{n}Gy_{n}-y_{n} \Vert \\& \quad \leq \Vert T_{\varepsilon}Gy_{n}-Gy_{n}\Vert + \Vert Gy_{n}-T_{n}Gy_{n}\Vert +\Vert T_{n}Gy_{n}-y_{n+1}\Vert +\Vert y_{n+1}-y_{n}\Vert \\& \quad \leq \Vert T_{\varepsilon}Gy_{n}-Gy_{n}\Vert + \Vert Gy_{n}-T_{n}Gy_{n}\Vert + \alpha_{n}\bigl\Vert f(y_{n})- T_{n}Gy_{n} \bigr\Vert +\Vert y_{n+1}-y_{n}\Vert \\& \quad \rightarrow0. \end{aligned}$$
(3.15)
Next we prove that
$$ \limsup_{n\rightarrow\infty} \bigl\langle f(x)-x, j_{q}(y_{n}-x) \bigr\rangle \leq0. $$
Equivalently (provided \(\Vert y_{n}-x\Vert \neq0\)), it suffices to prove that
$$ \limsup_{n\rightarrow\infty} \bigl\langle f(x)-x, j(y_{n}-x)\bigr\rangle \leq0. $$
To this end, let \(x_{t}\) satisfy \(x_{t} = tf(x_{t}) +(1-t) T_{\varepsilon}G x_{t}\). By Theorem 4.1 of Xu [32], we get \(x_{t} \rightarrow x\in \operatorname{Fix}(T_{\varepsilon}G)=\operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\) (by Lemma 2.10, Lemma 3.1 and Lemma 3.3) as \(t\rightarrow0\), where x solves the variational inequality
$$ \bigl\langle f(x)-x, j(p-x)\bigr\rangle \leq0,\quad \forall p\in\operatorname {Fix}(T_{\varepsilon}G). $$
Using the subdifferential inequality, we deduce that
$$\begin{aligned}& \Vert x_{t}-y_{n}\Vert ^{2} \\& \quad =t\bigl\langle f(x_{t})-y_{n}, j(x_{t}-y_{n}) \bigr\rangle +(1-t)\bigl\langle T_{\varepsilon}Gx_{t}-y_{n}, j(x_{t}-y_{n})\bigr\rangle \\& \quad =t\bigl\langle f(x_{t})-x_{t}, j(x_{t}-y_{n}) \bigr\rangle +t\bigl\langle x_{t}-y_{n}, j(x_{t}-y_{n}) \bigr\rangle \\& \qquad {} +(1-t)\bigl\langle T_{\varepsilon}Gx_{t}- T_{\varepsilon}G y_{n}, j(x_{t}-y_{n})\bigr\rangle +(1-t)\bigl\langle T_{\varepsilon}Gy_{n}-y_{n}, j(x_{t}-y_{n})\bigr\rangle \\& \quad \leq t\bigl\langle f(x_{t})-x_{t}, j(x_{t}-y_{n})\bigr\rangle +t\Vert x_{t}-y_{n} \Vert ^{2}+(1-t)\Vert x_{t}-y_{n}\Vert ^{2} \\& \qquad {} +(1-t)\Vert T_{\varepsilon}Gy_{n}-y_{n} \Vert \Vert x_{t}-y_{n}\Vert \\& \quad \leq t\bigl\langle f(x_{t})-x_{t}, j(x_{t}-y_{n})\bigr\rangle + \Vert x_{t}-y_{n} \Vert ^{2} +\Vert T_{\varepsilon}Gy_{n}-y_{n} \Vert \Vert x_{t}-y_{n}\Vert , \end{aligned}$$
which implies that
$$ \bigl\langle f(x_{t})-x_{t}, j(y_{n}-x_{t}) \bigr\rangle \leq\frac{\Vert T_{\varepsilon}Gy_{n}-y_{n}\Vert }{t}\Vert x_{t}-y_{n}\Vert . $$
(3.16)
Using (3.15), taking the upper limit as \(n\rightarrow\infty\) firstly, and then as \(t\rightarrow0\) in (3.16), we have
$$ \limsup_{t\rightarrow0}\limsup_{n\rightarrow\infty} \bigl\langle f(x_{t})-x_{t}, j(y_{n}-x_{t})\bigr\rangle \leq0. $$
Since E is a uniformly smooth Banach space, the duality mapping j is norm-to-norm uniformly continuous on bounded subsets of E, which ensures that the limits \(\limsup_{t\rightarrow0}\) and \(\limsup_{n\rightarrow\infty}\) are interchangeable. Then we have
$$ \limsup_{n\rightarrow\infty} \bigl\langle f(x)-x, j(y_{n}-x)\bigr\rangle \leq0. $$

Finally, we show \(\Vert y_{n}-x\Vert \rightarrow0\).

By Lemma 3.3 and the fact that \(ab\leq\frac{1}{q}a^{q} +\frac{q-1}{q}b^{\frac{q}{q-1}}\), we get
$$\begin{aligned}& \Vert y_{n+1}-x\Vert ^{q} \\& \quad =\bigl\Vert \alpha_{n}f(y_{n})+(1- \alpha_{n}) T_{n}Gy_{n}-x\bigr\Vert ^{q} \\& \quad = \bigl\langle \alpha_{n}f(y_{n})+(1- \alpha_{n}) T_{n}Gy_{n}-x,j_{q}(y_{n+1}-x) \bigr\rangle \\& \quad = \alpha_{n}\bigl\langle f(y_{n})-f(x),j_{q}(y_{n+1}-x) \bigr\rangle +\alpha_{n}\bigl\langle f(x)-x,j_{q}(y_{n+1}-x) \bigr\rangle \\& \qquad{} +(1-\alpha_{n})\bigl\langle T_{n}Gy_{n}-x,j_{q}(y_{n+1}-x) \bigr\rangle \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n})-f(x)\bigr\Vert \Vert y_{n+1}-x\Vert ^{q-1}+\alpha_{n}\bigl\langle f(x)-x,j_{q}(y_{n+1}-x)\bigr\rangle \\& \qquad {} +(1-\alpha_{n})\Vert y_{n}-x\Vert \Vert y_{n+1}-x\Vert ^{q-1} \\& \quad \leq\alpha_{n}r\Vert y_{n} -x\Vert \Vert y_{n+1}-x\Vert ^{q-1}+\alpha_{n}\bigl\langle f(x)-x,j_{q}(y_{n+1}-x)\bigr\rangle \\& \qquad {} +(1-\alpha_{n})\Vert y_{n}-x\Vert \Vert y_{n+1}-x\Vert ^{q-1} \\& \quad \leq\bigl[1-\alpha_{n}(1- r)\bigr]\Vert y_{n}-x \Vert \Vert y_{n+1}-x\Vert ^{q-1}+\alpha_{n} \bigl\langle f(x)-x,j_{q}(y_{n+1}-x)\bigr\rangle \\& \quad \leq\bigl[1-\alpha_{n}(1- r)\bigr]\frac{1}{q}\Vert y_{n}-x\Vert ^{q}+\frac{q-1}{q} \Vert y_{n+1}-x\Vert ^{q} \\& \qquad {} +\alpha_{n}\bigl\langle f(x)-x,j_{q}(y_{n+1}-x) \bigr\rangle , \end{aligned}$$
which implies that
$$ \Vert y_{n+1}-x\Vert ^{q}\leq\bigl[1- \alpha_{n}(1- r)\bigr]\Vert y_{n}-x\Vert ^{q} +q\alpha_{n} \bigl\langle f(x)-x,j_{q}(y_{n+1}-x) \bigr\rangle . $$
(3.17)
Apply Lemma 2.6 to (3.17) to conclude \(y_{n} \rightarrow x\in \operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)\) as \(n \rightarrow\infty\), which solves the variational inequality
$$ \bigl\langle f(x)- x, j_{q}(p-x) \bigr\rangle \leq0,\quad p\in \operatorname{Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M). $$
Moreover, by Lemma 3.4, \((x, y)\) is a solution of the modified system of variational inequalities problem (1.7), where \(y=Q_{C}(I-\mu B)x\). This completes the proof. □

Remark 3.7

Theorem 3.6 improves and extends Theorem 3.7 of López et al. [24] in the following sense:
  • From the problem of finding a solution of a variational inclusion problem with two accretive operators to the problem of finding a common solution of a variational inclusion problem with a finite family of accretive operators and a modified system of variational inequalities.

Remark 3.8

Theorem 3.6 improves and extends Theorem 2.1 of Zhang et al. [19], Theorem 3.1 of Qin et al. [26], Theorem 3.1 of Takahashi et al. [27] and Theorem 3.1 of Khuangsatung and Kangtunyakarn [25] in the following senses:
  • From Hilbert spaces to uniformly convex and q-uniformly smooth Banach spaces.

  • From finding a common element of the set of solutions for the variational inclusion problem with two accretive operators and the set of fixed points of nonexpansive mappings to finding a common solution to a variational inclusion problem with a finite family of accretive operators and a modified system of variational inequalities.

As a direct consequence of Theorem 3.6, we obtain the following corollary.

Corollary 3.9

Let C be a nonempty, closed and convex subset of a Hilbert space H. Let \(N\geq1\) be some positive integer and let \(A_{i}: C\rightarrow H\) be \(\eta_{i}\)-inverse-strongly monotone with \(\eta=\min \{\eta_{1}, \eta_{2},\ldots, \eta_{N}\}\). Let M be maximal monotone in H with \(\operatorname{Dom}(M)\subseteq C\), \(f: C\rightarrow C\) be r-contractive. Let \(A,B:C\rightarrow H\) be α- and β-inverse-strongly monotone, respectively. Define a mapping \(Gx:= \operatorname{Proj}_{C}(I-\lambda A)(aI+(1-a)\operatorname{Proj}_{C}(I-\mu B))x\) for all \(x\in C\) and \(a\in[0,1]\), where \(\operatorname{Proj}_{C}\) is the metric projection from H onto C. Assume that \(\lambda\in(0, 2\alpha)\), \(\mu\in(0, 2\beta)\) and \(\operatorname{Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(H, A_{i}, M)\neq\emptyset\). For arbitrarily given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated iteratively by
$$ x_{n+1}=\alpha_{n} f(x_{n})+(1- \alpha_{n})J_{r_{n}} \Biggl(\Biggl(I-r_{n}\sum _{i=1}^{N}\lambda_{i}A_{i} \Biggr)Gx_{n}+e_{n}\Biggr), $$
where \(\{e_{n}\}_{1}^{\infty}\subset H\), \(\{\alpha_{n}\}_{1}^{\infty}\subset [0,1]\), \(\{\lambda_{i}\}_{1}^{N}\subset[0,1]\) and \(\{r_{n}\}_{1}^{\infty}\subset (0, +\infty)\) satisfy the following conditions:
  1. (i)

    \(\sum_{n=1}^{\infty} \Vert e_{n}\Vert < \infty\);

     
  2. (ii)

    \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\lim_{n\rightarrow \infty}\alpha_{n}=0\) and \(\sum_{n=1}^{\infty} \vert \alpha _{n+1}-\alpha_{n}\vert <\infty\);

     
  3. (iii)

    \(0<\liminf_{n\rightarrow\infty}r_{n}\leq\limsup_{n\rightarrow\infty}r_{n}< 2\eta\) and \(\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty\);

     
  4. (iv)

    \(\sum_{i=1}^{N} \lambda_{i}=1\).

     
Then \(\{x_{n}\}\) converges strongly to some point \(x\in\operatorname {Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(H, A_{i}, M)\), which solves the variational inequality
$$ \bigl\langle f(x)- x, p-x \bigr\rangle \leq0,\quad p\in \operatorname{Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(H, A_{i}, M). $$

4 Applications

In this section, we give some applications of our main results in the framework of Hilbert spaces. Let C be a nonempty, closed and convex subset of a Hilbert space H, and let \(f : C \times C \rightarrow\mathbb{R}\) be a bifunction satisfying the following conditions:
  1. (A1)

    \(f(x, x) = 0\) for all \(x \in C\);

     
  2. (A2)

    f is monotone, i.e., \(f (x, y) +f (y, x)\leq0\) for all \(x,y \in C\);

     
  3. (A3)
    for all \(x, y, z \in C\),
    $$ \limsup_{t\downarrow0}f\bigl(tz +(1 -t)x, y\bigr)\leq f (x, y); $$
     
  4. (A4)

    for all \(x \in C, f(x,\cdot)\) is convex and lower semi-continuous.

     
Then the equilibrium problem (with respect to C) is to find \(\hat{x }\in C\) such that
$$ f (\hat{x }, y)\geq0 $$
for all \(y \in C\). The set of such solutions \(\hat{x }\) is denoted by \(\operatorname{EP}(f)\). The following lemma appears implicitly in Blum and Oettli [33].

Lemma 4.1

Let C be a nonempty, closed and convex subset of H and let \(f : C \times C \rightarrow\mathbb{R}\) be a bifunction satisfying (A1)-(A4). Let \(r >0\) and \(x\in H\). Then there exists \(z \in C\) such that
$$ f (z,y) + \frac{1}{r}\langle y-z, z-x\rangle\geq0,\quad \forall y \in C. $$

The following lemma is given in Combettes and Hirstoaga [34].

Lemma 4.2

Assume that \(f : C \times C \rightarrow \mathbb{R}\) satisfies (A1)-(A4). For \(r >0\) and \(x \in H\), define a mapping \(S_{r}: H\rightarrow C\) as follows:
$$ S_{r}x = \biggl\{ z \in C: f (z,y) + \frac{1}{r}\langle y-z, z-x\rangle\geq0, \forall y \in C\biggr\} $$
for all \(x \in H\). Then the following hold:
  1. (i)

    \(S_{r}\) is single-valued;

     
  2. (ii)

    \(S_{r}\) is a firmly nonexpansive mapping, i.e., for all \(x,y \in H\), \(\Vert S_{r}x-S_{r}y\Vert ^{2}\leq\langle S_{r}x-S_{r}y, x-y\rangle\);

     
  3. (iii)

    \(\operatorname{Fix}(S_{r}) = \operatorname{EP}(f)\);

     
  4. (iv)

    \(\operatorname{EP}(f)\) is closed and convex.

     

We call such \(S_{r}\) the resolvent of f for \(r > 0\). Using Lemma 4.1 and Lemma 4.2, Takahashi et al. [27] proved the following result.

Lemma 4.3

Let H be a Hilbert space and let C be a nonempty, closed and convex subset of H. Let \(f : C\times C\rightarrow\mathbb{R}\) satisfy (A1)-(A4). Let \(A_{f}\) be a multivalued mapping of H into itself defined by
$$ A_{f}x=\left \{ \textstyle\begin{array}{l@{\quad}l} \{z\in H: f (x, y)\geq\langle y-x, z\rangle,\forall y \in C\},& x\in C,\\ \emptyset, &x\notin C. \end{array}\displaystyle \right . $$
Then \(\operatorname{EP}(f) = A_{f}^{-1}0\) and \(A_{f}\) is a maximal monotone operator. Further, for any \(x\in H\) and \(r >0\), the resolvent \(S_{r}\) of f coincides with the resolvent of \(A_{f}\); i.e., \(S_{r}x = (I +rA_{f} )^{-1}x\).

Theorem 4.4

Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let \(f:C\times C\rightarrow\mathbb{R}\) be a bifunction satisfying (A1)-(A4) and let \(S_{\delta}\) be the resolvent of f for \(\delta> 0\). Let \(\psi: C\rightarrow C\) be r-contractive, \(A,B:C\rightarrow H\) be α- and β-inverse-strongly monotone, respectively. Define a mapping \(Gx:= \operatorname{Proj}_{C}(I-\lambda A)(aI+(1-a)\operatorname{Proj}_{C}(I-\mu B))x\) for all \(x\in C\) and \(a\in[0,1]\). Assume that \(\lambda\in(0, 2\alpha)\), \(\mu\in(0, 2\beta)\) and \(\operatorname{Fix}(G)\cap \operatorname{EP}(f)\neq\emptyset\). For arbitrarily given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated iteratively by
$$ x_{n+1}=\alpha_{n} \psi(x_{n})+(1- \alpha_{n})S_{r_{n}}(Gx_{n}+e_{n}), $$
where \(\{e_{n}\}_{1}^{\infty}\subset H\), \(\{\alpha_{n}\}_{1}^{\infty}\subset [0,1]\) and \(\{r_{n}\}_{1}^{\infty}\subset(0, +\infty)\) satisfy the following conditions:
  1. (i)

    \(\sum_{n=1}^{\infty} \Vert e_{n}\Vert <\infty\);

     
  2. (ii)

    \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\lim_{n\rightarrow \infty}\alpha_{n}=0\) and \(\sum_{n=1}^{\infty} \vert \alpha _{n+1}-\alpha_{n}\vert <\infty\);

     
  3. (iii)

    \(\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty\).

     
Then \(\{x_{n}\}\) converges strongly to some point \(x\in\operatorname {Fix}(G)\cap \operatorname{EP}(f)\), which solves the variational inequality
$$ \bigl\langle \psi(x)- x, p-x \bigr\rangle \leq0,\quad p\in\operatorname{Fix}(G)\cap \operatorname{EP}(f). $$

Proof

Put \(A_{i}= 0\) for \(i=1,2,\ldots,N\) and \(M=A_{f}\) in Corollary 3.9 (with the contraction f there replaced by ψ). From Lemma 4.3, we know that \(J_{r_{n}} = S_{r_{n}}\) for all \(n \in\mathbb{N}\). So, we obtain the desired result by Corollary 3.9. □

Let \(g : H \rightarrow(-\infty, +\infty]\) be a proper convex lower semi-continuous function. Then, the subdifferential ∂g of g is defined as follows:
$$ \partial g(x)=\bigl\{ y\in H: g(z)\geq g(x)+\langle z-x,y\rangle, \forall z\in H\bigr\} , \quad \forall x\in H. $$
From Rockafellar [35], we know that ∂g is maximal monotone. It is easy to verify that \(0\in\partial g(x)\) if and only if \(g(x) = \min_{ y\in H} g(y)\). Let \(I_{C}\) be the indicator function of C, i.e.,
$$ I_{C}(x)= \left \{ \textstyle\begin{array}{l@{\quad}l} 0, &x\in C, \\ +\infty,& x \notin C. \end{array}\displaystyle \right . $$
Then \(I_{C}\) is a proper lower semi-continuous convex function on H, and the subdifferential \(\partial I_{C}\) of \(I_{C}\) is a maximal monotone operator. Furthermore, suppose that C is a nonempty closed convex subset. Then
$$ (I + \lambda\partial I_{C})^{-1}x=\operatorname{Proj}_{C} x,\quad \forall x\in H, \lambda>0. $$
For more details, one can refer to [27].
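The identity \((I + \lambda\partial I_{C})^{-1}x=\operatorname{Proj}_{C} x\) can be checked numerically in a simple one-dimensional setting. The sketch below is our own illustration (the interval \(C=[0,1]\), the test points and the grid search are assumptions made for this example): it compares the brute-force minimizer of \(z\mapsto\frac{1}{2}(z-x)^{2}\) over C, which is exactly what the resolvent computes, with the closed-form projection.

```python
# Illustrative check (our example, C = [0, 1] assumed): the resolvent
# (I + lambda * dI_C)^{-1} x minimizes 0.5*(z - x)^2 over C, which equals
# the metric projection Proj_C x, independently of lambda > 0.

def proj_interval(x, a=0.0, b=1.0):
    """Closed-form projection of x onto C = [a, b]."""
    return min(max(x, a), b)

def resolvent_indicator(x, a=0.0, b=1.0, grid=100000):
    """Resolvent of dI_C at x via brute-force minimization of
    0.5*(z - x)^2 over a fine grid on C = [a, b]."""
    best_z, best_val = a, 0.5 * (a - x) ** 2
    for k in range(1, grid + 1):
        z = a + (b - a) * k / grid
        val = 0.5 * (z - x) ** 2
        if val < best_val:
            best_z, best_val = z, val
    return best_z

for x in (-0.7, 0.3, 2.5):
    assert abs(resolvent_indicator(x) - proj_interval(x)) < 1e-4
```

The agreement holds for points inside, below and above the interval, matching the identity above.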

Theorem 4.5

Let C be a nonempty, closed and convex subset of a Hilbert space H. Let \(N\geq1\) be some positive integer and let \(A_{i}: C\rightarrow H\) be \(\eta_{i}\)-inverse-strongly monotone with \(\eta=\min \{\eta_{1}, \eta_{2},\ldots, \eta_{N}\}\) for each \(i\in\{1,2,\ldots, N\}\). Let \(f: C\rightarrow C\) be r-contractive. Let \(A,B:C\rightarrow H\) be α- and β-inverse-strongly monotone, respectively. Define a mapping \(Gx:= \operatorname{Proj}_{C}(I-\lambda A)(aI+(1-a)\operatorname{Proj}_{C}(I-\mu B))x\) for all \(x\in C\) and \(a\in[0,1]\). Assume that \(\lambda\in(0, 2\alpha)\), \(\mu\in(0, 2\beta)\) and \(\operatorname{Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(C, A_{i})\neq\emptyset\). For arbitrarily given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated iteratively by
$$ x_{n+1}=\alpha_{n} f(x_{n})+(1- \alpha_{n})\operatorname{Proj}_{C}\Biggl( \Biggl(I-r_{n}\sum_{i=1}^{N} \lambda_{i}A_{i}\Biggr)Gx_{n}+e_{n} \Biggr), $$
where \(\{e_{n}\}_{1}^{\infty}\subset H\), \(\{\alpha_{n}\}_{1}^{\infty}\subset [0,1]\), \(\{\lambda_{i}\}_{1}^{N}\subset[0,1]\) and \(\{r_{n}\}_{1}^{\infty}\subset (0, +\infty)\) satisfy the following conditions:
  1. (i)

    \(\sum_{n=1}^{\infty} \Vert e_{n}\Vert < \infty\);

     
  2. (ii)

    \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\lim_{n\rightarrow \infty}\alpha_{n}=0\) and \(\sum_{n=1}^{\infty} \vert \alpha _{n+1}-\alpha_{n}\vert <\infty\);

     
  3. (iii)

    \(0<\liminf_{n\rightarrow\infty}r_{n}\leq\limsup_{n\rightarrow\infty}r_{n}< 2\eta\) and \(\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty\);

     
  4. (iv)

    \(\sum_{i=1}^{N} \lambda_{i}=1\).

     
Then \(\{x_{n}\}\) converges strongly to some point \(x\in\operatorname {Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(C, A_{i})\), which solves the variational inequality
$$ \bigl\langle f(x)- x, p-x \bigr\rangle \leq0,\quad p\in \operatorname{Fix}(G)\cap \bigcap_{i=1}^{N} \operatorname{VI}(C, A_{i}). $$

Proof

Put \(M= \partial I_{C}\) in Theorem 3.6. Next, we show that \(\operatorname{VI}(C,A_{i}) = \operatorname{VI}(H, A_{i}, \partial I_{C})\). Notice that
$$\begin{aligned} x\in\operatorname{VI}(H, A_{i}, \partial I_{C})&\quad \Longleftrightarrow\quad 0\in A_{i}x+ \partial I_{C}x \\ &\quad \Longleftrightarrow \quad -A_{i}x\in\partial I_{C}x \\ &\quad \Longleftrightarrow\quad \langle A_{i}x, y-x\rangle\geq0,\quad \forall y\in C \\ &\quad \Longleftrightarrow\quad x\in\operatorname{VI}(C,A_{i}). \end{aligned}$$
In view of Theorem 3.6, we find the desired result immediately. □

Let \(W : H\rightarrow\mathbb{R}\) be a convex and differentiable function and \(M : H \rightarrow\mathbb{R}\) be a convex function. Consider the convex minimization problem \(\min_{x\in H}(Wx + Mx)\). From [35], we know that if \(\nabla W\) is \(\frac{1}{L}\)-Lipschitz continuous, then \(\nabla W\) is L-inverse-strongly monotone. Hence, we have the following theorem.
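As a small illustration of the resulting scheme (not taken from the paper: the choices \(W(x)=\frac{1}{2}(x-3)^{2}\), \(M(x)=\vert x\vert\), \(f(x)=x/2\), \(\alpha_{n}=1/n\), \(r_{n}\equiv1/2\), \(e_{n}=0\), \(N=1\) and \(G'=I\) are ours), the iteration can be run in plain Python. Here \((I+r\,\partial M)^{-1}\) is the soft-thresholding map, and the unique minimizer of \(W+M\) is \(x^{*}=2\).

```python
# Minimal sketch of the forward-backward/viscosity iteration with the
# illustrative choices above (ours, not from the paper):
#   W(x) = 0.5*(x - 3)^2  (grad W(x) = x - 3, 1-Lipschitz),
#   M(x) = |x|            ((I + r*dM)^{-1} is soft-thresholding),
#   f(x) = x/2, alpha_n = 1/n, r_n = 1/2, e_n = 0, G' = I.

def soft_threshold(u, t):
    """Resolvent (I + t*d|.|)^{-1} u of the subdifferential of |.|."""
    if u > t:
        return u - t
    if u < -t:
        return u + t
    return 0.0

def iterate(x1=0.0, steps=10000, r=0.5):
    x = x1
    for n in range(1, steps + 1):
        alpha = 1.0 / n
        forward = x - r * (x - 3.0)          # (I - r*grad W) x
        x = alpha * (x / 2.0) + (1.0 - alpha) * soft_threshold(forward, r)
    return x

x = iterate()
assert abs(x - 2.0) < 1e-2   # converges to the minimizer x* = 2
```

The contraction term \(\alpha_{n}f(x_{n})\) vanishes as \(\alpha_{n}\rightarrow0\), so the iterates settle at the fixed point of the forward-backward map, i.e. the minimizer.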

Theorem 4.6

Let C be a nonempty, closed and convex subset of a Hilbert space H. Let \(N\geq1\) be some positive integer. Let \(W_{i}: H\rightarrow\mathbb{R}\) be a convex and differentiable function with \(\nabla W_{i}\) \(\frac{1}{L_{i}}\)-Lipschitz continuous for each \(i\in\{1,2,\ldots, N\}\), and let \(L=\min\{L_{1}, L_{2},\ldots, L_{N}\}\). Let M be a convex and lower semi-continuous function and \(f: C\rightarrow C\) be r-contractive. Let \(A,B: H\rightarrow\mathbb{R}\) be convex and differentiable functions such that \(\nabla A\) and \(\nabla B\) are \(\frac{1}{\alpha}\)- and \(\frac{1}{\beta}\)-Lipschitz continuous, respectively. Define a mapping
$$ G'x:= \operatorname{Proj}_{C}(I-\lambda\nabla A) \bigl(aI+(1-a)\operatorname {Proj}_{C}(I-\mu\nabla B)\bigr)x, \quad \forall x\in C, a\in[0,1]. $$
Assume that \(\lambda\in(0, 2\alpha)\), \(\mu\in(0, 2\beta)\) and \(\operatorname{Fix}(G')\cap \bigcap_{i=1}^{N} \operatorname{VI}(H, \nabla W_{i}, \partial M)\neq\emptyset \). For arbitrarily given \(x_{1}\in C\), let \(\{x_{n}\}\) be the sequence generated iteratively by
$$ x_{n+1}=\alpha_{n} f(x_{n})+(1- \alpha_{n}) (I+r_{n}\partial M)^{-1}\Biggl( \Biggl(I-r_{n}\sum_{i=1}^{N} \lambda_{i}\nabla W_{i}\Biggr)G'x_{n}+e_{n} \Biggr), $$
where \(\{e_{n}\}_{1}^{\infty}\subset H\), \(\{\alpha_{n}\}_{1}^{\infty}\subset [0,1]\), \(\{\lambda_{i}\}_{1}^{N}\subset[0,1]\) and \(\{r_{n}\}_{1}^{\infty}\subset (0, +\infty)\) satisfy the following conditions:
  1. (i)

    \(\sum_{n=1}^{\infty} \Vert e_{n}\Vert < \infty\);

     
  2. (ii)

    \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\lim_{n\rightarrow \infty}\alpha_{n}=0\) and \(\sum_{n=1}^{\infty} \vert \alpha _{n+1}-\alpha_{n}\vert <\infty\);

     
  3. (iii)

    \(0<\liminf_{n\rightarrow\infty}r_{n}\leq\limsup_{n\rightarrow\infty}r_{n}< 2L\) and \(\sum_{n=1}^{\infty} \vert r_{n+1}-r_{n}\vert <\infty\);

     
  4. (iv)

    \(\sum_{i=1}^{N} \lambda_{i}=1\).

     
Then \(\{x_{n}\}\) converges strongly to some point \(x\in\operatorname {Fix}(G')\cap \bigcap_{i=1}^{N} \operatorname{VI}(H, \nabla W_{i}, \partial M)\), which solves the variational inequality
$$ \bigl\langle f(x)- x, p-x \bigr\rangle \leq0,\quad p\in \operatorname{Fix} \bigl(G'\bigr)\cap \bigcap_{i=1}^{N} \operatorname{VI}(H, \nabla W_{i}, \partial M). $$

Proof

Apply Theorem 3.6 with M replaced by ∂M, A by ∇A, B by ∇B, and \(A_{i}\) by \(\nabla W_{i}\) for each \(i\in\{1,2,\ldots, N\}\). Then we obtain the desired conclusions immediately. □

5 Numerical examples

The purpose of this section is to give two numerical examples supporting Theorem 3.6.

Example 5.1

Let \(\mathbb{R}\) be the real line with the Euclidean norm. For all \(x \in\mathbb{R}\), let \(A,B, M, f: \mathbb{R}\rightarrow\mathbb{R}\) be defined by \(Ax=\frac{1}{3}x\), \(Bx =\frac{1}{6}x\), \(Mx =x\) and \(f(x)=\frac {1}{2}x\), respectively. For each \(i\in\{1,2,\ldots, N\}\), let \(A_{i}: \mathbb{R}\rightarrow\mathbb{R}\) be defined by \(A_{i}x=\frac{i}{6}x\) for all \(x\in\mathbb{R}\). Let \(a=\frac{1}{2}\), \(\lambda=2\), \(\mu =3\), \(\lambda_{i}=\frac{2}{3^{i}}+\frac{1}{N3^{N}}\) for each \(i\in\{ 1,2,\ldots, N\}\), and \(e_{n}=\frac{e_{1}}{n^{2}}\) (\(n=1,2,\ldots\)), where \(\vert e_{1}\vert <\infty\). Let the sequence \(\{x_{n}\}\) be generated iteratively by (3.7), where \(\alpha_{n}=\frac{1}{n}\) and \(r_{n}=\frac{1}{n+2}+\frac{1}{N}\). Then the sequence \(\{x_{n}\}\) converges strongly to 0.

Solution: It can be observed that all the assumptions of Theorem 3.6 are satisfied. It is also easy to check that
$$ \operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)=\{0\}. $$
We rewrite (3.7) as follows:
$$\begin{aligned} x_{n+1} =&\frac{1}{2n}x_{n}+ \biggl(1- \frac{1}{n} \biggr)\frac{(n+2)N}{n(N+1)+3N+2} \\ &{}\times\Biggl[\frac{1}{4}x_{n} - \biggl(\frac{1}{n+2}+ \frac{1}{N} \biggr)\sum_{i=1}^{N} \biggl(\frac{2}{3^{i}} +\frac{1}{N3^{N}} \biggr)\frac{i}{24}x_{n}+ \frac{e_{1}}{n^{2}}\Biggr]. \end{aligned}$$
(5.1)
Using algorithm (5.1) with \(e_{1}=x_{1}=5\) for \(N=1\) and \(N=100\), the numerical results in Table 1 and Figure 1 demonstrate Theorem 3.6.
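The values in Table 1 below can be reproduced with a short script; this is a sketch of iteration (5.1) (function and variable names are ours).

```python
# Sketch reproducing iteration (5.1) from Example 5.1 (names ours).
def example_5_1(N, x1=5.0, e1=5.0, steps=100):
    x = x1
    # lambda_i = 2/3^i + 1/(N*3^N), i = 1, ..., N
    lam = [2.0 / 3**i + 1.0 / (N * 3**N) for i in range(1, N + 1)]
    for n in range(1, steps):
        r_n = 1.0 / (n + 2) + 1.0 / N
        S = sum(lam[i - 1] * i / 24.0 for i in range(1, N + 1)) * x
        coeff = (n + 2) * N / (n * (N + 1) + 3 * N + 2)
        # x becomes x_{n+1}
        x = x / (2 * n) + (1 - 1.0 / n) * coeff * (x / 4.0 - r_n * S + e1 / n**2)
    return x  # x_steps

# Matches Table 1: x_100 is about 0.000282 for N = 1 and 0.000662 for N = 100.
assert abs(example_5_1(1)) < 1e-3 and abs(example_5_1(100)) < 1e-3
```

Running it reproduces, for instance, \(x_{3}=1.012731481\ldots\) for \(N=1\), in agreement with Table 1.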
Figure 1. The convergence of \(\{x_{n}\}\) with initial values \(e_{1}=x_{1}=5\).

Table 1. The values of the sequence \(\{x_{n}\}\)

| n   | \(x_{n}\) (N = 1)  | \(x_{n}\) (N = 100) |
|-----|--------------------|---------------------|
| 1   | 5.000000000000000  | 5.000000000000000   |
| 2   | 2.500000000000000  | 2.500000000000000   |
| 3   | 1.012731481481481  | 1.352926587301587   |
| 4   | 0.398516414141414  | 0.708148944121956   |
| 5   | 0.185768821022727  | 0.395562727289485   |
| 50  | 0.001141959256001  | 0.002664953434053   |
| 96  | 0.000306349806735  | 0.000718002934223   |
| 97  | 0.000300029191229  | 0.000703224190461   |
| 98  | 0.000293902199966  | 0.000688897141470   |
| 99  | 0.000287961003866  | 0.000675003563404   |
| 100 | 0.000282198165612  | 0.000661526142407   |

Next, we present a numerical example in \(\mathbb{R}^{3}\) that also supports our result.

Example 5.2

Let the inner product \(\langle\cdot, \cdot\rangle: \mathbb{R}^{3} \times\mathbb{R}^{3} \rightarrow\mathbb{R}\) be defined by \(\langle \mathbf{x}, \mathbf{y}\rangle=\mathbf{x} \cdot\mathbf{y}= x_{1}\cdot y_{1}+x_{2}\cdot y_{2}+x_{3}\cdot y_{3}\) and the usual norm \(\Vert \cdot \Vert : \mathbb{R}^{3}\rightarrow\mathbb{R}\) be defined by \(\Vert \mathbf{x}\Vert =\sqrt{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}}\) for all \(\mathbf{x}=(x_{1}, x_{2}, x_{3})\), \(\mathbf{y}=(y_{1}, y_{2}, y_{3})\in \mathbb{R}^{3}\). Let \(A,B, M, f: \mathbb{R}^{3}\rightarrow\mathbb{R}^{3}\) be defined by \(A\mathbf{x}=\frac{1}{4}\mathbf{x}\), \(B\mathbf{x}=f\mathbf {x}=\frac{1}{6}\mathbf{x}\) and \(M\mathbf{x}=\mathbf{x}\) for all \(\mathbf{x}\in\mathbb{R}^{3}\), respectively. For each \(i\in\{ 1,2,\ldots, N\}\), let \(A_{i}: \mathbb{R}^{3}\rightarrow\mathbb{R}^{3}\) be defined by \(A_{i}\mathbf{x}=\frac{i}{6}\mathbf{x}\) for all \(\mathbf{x}\in\mathbb{R}^{3}\). Let \(a=\frac{1}{2}\), \(\lambda =3\), \(\mu=6\), \(\lambda_{i}=\frac{5}{6^{i}}+\frac{1}{N6^{N}}\) for each \(i\in\{1,2,\ldots, N\}\) and \(e_{n}=\frac{e_{1}}{n^{2}}\) (\(n = 1, 2, \ldots \)), where \(e_{1}\in\mathbb{R}^{3}\) and \(\Vert e_{1}\Vert <\infty \). Let the sequence \(\{\mathbf{x}_{n}\}\) be generated iteratively by (3.7), where \(\alpha_{n}=\frac{1}{n}\) and \(r_{n}=\frac{1}{n+2}+\frac{1}{N}\). Then the sequence \(\{\mathbf {x}_{n}\}\) converges strongly to 0.

Solution: It can be observed that all the assumptions of Theorem 3.6 are satisfied. It is also easy to check \(\operatorname{Fix}(G)\cap\bigcap_{i=1}^{N} \operatorname{VI}(E, A_{i}, M)=\{0\}\).

We rewrite (3.7) as follows:
$$\begin{aligned} \mathbf{x}_{n+1} =&\frac{1}{6n}\mathbf{x}_{n}+ \biggl(1-\frac{1}{n} \biggr)\frac{(n+2)N}{n(N+1)+3N+2} \\ &{}\times\Biggl[\frac{1}{6}\mathbf{x}_{n} - \biggl( \frac{1}{n+2}+\frac{1}{N} \biggr)\sum_{i=1}^{N} \biggl(\frac{5}{6^{i}} +\frac{1}{N6^{N}} \biggr)\frac{i}{36} \mathbf{x}_{n}+\frac {e_{1}}{n^{2}}\Biggr]. \end{aligned}$$
(5.2)
Utilizing algorithm (5.2) and choosing \(\mathbf{x}_{1}=e_{1}=(1,6,12)\) with \(N=100\), we report the numerical results in Table 2. In addition, Figure 2 also demonstrates Theorem 3.6.
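Similarly, iteration (5.2) can be sketched componentwise in \(\mathbb{R}^{3}\) (an illustrative reproduction; function and variable names are ours), and the result agrees with Table 2 below.

```python
# Sketch reproducing iteration (5.2) from Example 5.2 in R^3 (names ours).
def example_5_2(N=100, x1=(1.0, 6.0, 12.0), e1=(1.0, 6.0, 12.0), steps=100):
    x = list(x1)
    # lambda_i = 5/6^i + 1/(N*6^N), i = 1, ..., N
    lam = [5.0 / 6**i + 1.0 / (N * 6**N) for i in range(1, N + 1)]
    s = sum(lam[i - 1] * i / 36.0 for i in range(1, N + 1))
    for n in range(1, steps):
        r_n = 1.0 / (n + 2) + 1.0 / N
        coeff = (n + 2) * N / (n * (N + 1) + 3 * N + 2)
        # componentwise update: x becomes x_{n+1}
        x = [xk / (6 * n)
             + (1 - 1.0 / n) * coeff * (xk / 6.0 - r_n * s * xk + ek / n**2)
             for xk, ek in zip(x, e1)]
    return x  # x_steps

# Matches Table 2: x_100 is about (0.000119, 0.000713, 0.001425) for N = 100.
assert all(abs(c) < 1e-2 for c in example_5_2())
```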
Figure 2. The convergence of \(\{\mathbf{x}_{n}\}\) with initial values \(\mathbf{x}_{1}=e_{1}=(1,6,12)\) and \(N=100\).

Table 2. The values of the sequence \(\{\mathbf{x}_{n}\}\)

| n   | \(x_{n}^{1}\)     | \(x_{n}^{2}\)     | \(x_{n}^{3}\)      |
|-----|-------------------|-------------------|--------------------|
| 1   | 1.000000000000000 | 6.000000000000000 | 12.000000000000000 |
| 2   | 0.166666666666667 | 1.000000000000000 | 2.000000000000000  |
| 3   | 0.123544973544974 | 0.741269841269841 | 1.482539682539682  |
| 4   | 0.078950180010786 | 0.473701080064716 | 0.947402160129433  |
| 5   | 0.051217417354718 | 0.307304504128309 | 0.614609008256618  |
| 50  | 0.000476097239794 | 0.002856583438761 | 0.005713166877522  |
| 96  | 0.000128869584585 | 0.000773217507511 | 0.001546435015021  |
| 97  | 0.000126223335925 | 0.000757340015550 | 0.001514680031100  |
| 98  | 0.000123657771457 | 0.000741946628743 | 0.001483893257487  |
| 99  | 0.000121169643936 | 0.000727017863616 | 0.001454035727232  |
| 100 | 0.000118755867858 | 0.000712535207149 | 0.001425070414298  |

Declarations

Acknowledgements

This research was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002) and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
College of Science, Zhongyuan University of Technology
(2)
Department of Mathematics, Shanghai Normal University

References

  1. Stampacchia, G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413-4416 (1964)
  2. Peng, J, Wang, Y, Shyu, D, Yao, JC: Common solutions of an iterative scheme for variational inclusions, equilibrium problems, and fixed point problems. J. Inequal. Appl. 2008, Article ID 720371 (2008)
  3. Qin, X, Chang, SS, Cho, YJ, Kang, SM: Approximation of solutions to a system of variational inclusions in Banach spaces. J. Inequal. Appl. 2010, Article ID 916806 (2010)
  4. Hao, Y: On variational inclusion and common fixed point problems in Hilbert spaces with applications. Appl. Math. Comput. 217, 3000-3010 (2010)
  5. Yao, Y, Yao, J: On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 186, 1551-1558 (2007)
  6. Brézis, H: Équations et inéquations non linéaires dans les espaces vectoriels en dualité. Ann. Inst. Fourier (Grenoble) 18, 115-175 (1968)
  7. Moreau, JJ: Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. Fr. 93, 273-299 (1965)
  8. Ansari, QH, Yao, JC: Systems of generalized variational inequalities and their applications. Appl. Anal. 76, 203-217 (2000)
  9. Ansari, QH, Schaible, S, Yao, JC: The system of generalized vector equilibrium problems with applications. J. Glob. Optim. 22, 3-16 (2002)
  10. Yao, Y, Noor, MA, Noor, K, Liou, YC, Yaqoob, H: Modified extragradient methods for a system of variational inequalities in Banach spaces. Acta Appl. Math. 110, 1211-1224 (2010)
  11. Kangtunyakarn, A: The modification of system of variational inequalities for fixed point theory in Banach spaces. Fixed Point Theory Appl. 2014, Article ID 123 (2014)
  12. Cai, G, Bu, S: Modified extragradient methods for variational inequality problems and fixed point problems for an infinite family of nonexpansive mappings in Banach spaces. J. Glob. Optim. 55, 437-457 (2013)
  13. Ceng, LC, Wang, C, Yao, YC: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 67, 375-390 (2008)
  14. Verma, RU: On a new system of nonlinear variational inequalities and associated iterative algorithms. Math. Sci. Res. Hot-Line 3, 65-68 (1999)
  15. Chang, SS: Existence and approximation of solutions for set-valued variational inclusions in Banach space. Nonlinear Anal. 47(1), 583-594 (2001)
  16. Chang, SS: Set-valued variational inclusions in Banach spaces. J. Math. Anal. Appl. 248(2), 438-454 (2000)
  17. Noor, MA: Generalized set-valued variational inclusions and resolvent equations. J. Math. Anal. Appl. 228(1), 206-220 (1998)
  18. Hartman, P, Stampacchia, G: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271-310 (1966)
  19. Zhang, SS, Lee, J, Chan, CK: Algorithms of common solutions for quasi-variational inclusion and fixed point problems. Appl. Math. Mech. 29, 571-581 (2008)
  20. Manaka, H, Takahashi, W: Weak convergence theorems for maximal monotone operators with nonspreading mappings in a Hilbert space. CUBO 13, 11-24 (2011)
  21. Kamimura, S, Takahashi, W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 106, 226-240 (2000)
  22. Aoyama, K, Iiduka, H, Takahashi, W: Weak convergence of an iterative sequence for accretive operators in Banach spaces. Fixed Point Theory Appl. 2006, Article ID 35390 (2006)
  23. Zegeye, H, Shahzad, N: Strong convergence theorems for a common zero of a finite family of m-accretive mappings. Nonlinear Anal. 66(5), 1161-1169 (2007)
  24. López, G, Martín-Márquez, V, Wang, FH, Xu, HK: Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, Article ID 109236 (2012)
  25. Khuangsatung, W, Kangtunyakarn, A: Algorithm of a new variational inclusion problem and strictly pseudononspreading mapping with application. Fixed Point Theory Appl. 2014, Article ID 209 (2014)
  26. Qin, XL, Cho, SY, Wang, L: A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, Article ID 75 (2014)
  27. Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)
  28. Xu, HK: Inequalities in Banach spaces with applications. Nonlinear Anal. 16, 1127-1138 (1991)
  29. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. (2) 66, 240-256 (2002)
  30. Song, YL, Ceng, LC: A general iteration scheme for variational inequality problem and common fixed point problems of nonexpansive mappings in q-uniformly smooth Banach spaces. J. Glob. Optim. 57, 1327-1348 (2013)
  31. Reich, S: Asymptotic behavior of contractions in Banach spaces. J. Math. Anal. Appl. 44, 57-70 (1973)
  32. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
  33. Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-145 (1994)
  34. Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117-136 (2005)
  35. Rockafellar, R: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1, 97-116 (1976)

Copyright

© Song and Ceng 2015