Open Access

Fixed point problems and a system of generalized nonlinear mixed variational inequalities

Fixed Point Theory and Applications 2013, 2013:186

https://doi.org/10.1186/1687-1812-2013-186

Received: 9 October 2012

Accepted: 19 June 2013

Published: 16 July 2013

Abstract

In this paper, we introduce and consider a new system of generalized nonlinear mixed variational inequalities involving six different nonlinear operators and discuss the existence and uniqueness of a solution of the aforesaid system. We use three nearly uniformly Lipschitzian mappings $S_i$ ($i=1,2,3$) to suggest and analyze some new three-step resolvent iterative algorithms with mixed errors for finding an element of the set of fixed points of the nearly uniformly Lipschitzian mapping $Q=(S_1,S_2,S_3)$, which is the unique solution of the system of generalized nonlinear mixed variational inequalities. The convergence analysis of the suggested iterative algorithms under suitable conditions is studied. In the final section, an important remark on a class of relaxed cocoercive mappings is discussed.

MSC: 47H05, 47J20, 49J40, 90C33.

Keywords

system of generalized mixed variational inequalities; fixed point problems; nearly uniformly Lipschitzian mapping; three-step resolvent iterative algorithm; convergence

1 Introduction

Variational inequality theory, which was initially introduced by Stampacchia [1] in 1964, is a branch of applicable mathematics with a wide range of applications in industrial, physical, regional, social, pure, and applied sciences. This field is dynamic and is experiencing explosive growth in both theory and applications; as a consequence, research techniques and problems are drawn from various fields. Variational inequalities have been generalized and extended in different directions using novel and innovative techniques. An important and useful generalization is called the mixed variational inequality, or the variational inequality of the second kind, involving a nonlinear term. For applications, numerical methods, and other aspects of variational inequalities, see, for example, [1–22] and the references therein. In recent years, much attention has been given to developing efficient and implementable numerical methods, including the projection method and its variant forms, Wiener-Hopf (normal) equations, linear approximation, the auxiliary principle, and descent frameworks for solving variational inequalities and related optimization problems. It is well known that the projection method and its variant forms and the Wiener-Hopf equations technique cannot be used to suggest and analyze iterative methods for solving mixed variational inequalities, due to the presence of the nonlinear term. These facts motivated us to use the technique of resolvent operators, the origin of which can be traced back to Martinet [11] and Brezis [4]. In this technique, the given operator is decomposed into the sum of two (or more) maximal monotone operators whose resolvents are easier to evaluate than the resolvent of the original operator. Such a method is known as the operator splitting method. This can lead to the development of very efficient methods, since one can treat each part of the original operator independently. Operator splitting methods and related techniques have been analyzed and studied by many authors, including Peaceman and Rachford [15], Lions and Mercier [9], Glowinski and Tallec [7], and Tseng [18]. For an excellent account of the alternating direction implicit (splitting) methods, see [2]. A useful feature of the forward-backward splitting method for solving mixed variational inequalities is that the resolvent step involves only the subdifferential of the proper, convex and lower semicontinuous part, while the other part facilitates the problem decomposition.

Equally important is the area of mathematical sciences known as the resolvent equations, which were introduced by Noor [12]. Noor [12] established the equivalence between the mixed variational inequalities and the resolvent equations using essentially the resolvent operator technique. The resolvent equations are being used to develop powerful and efficient numerical methods for solving the mixed variational inequalities and related optimization problems. It is worth mentioning that if the nonlinear term involved in the mixed variational inequalities is the indicator function of a closed convex set in a Hilbert space, then the resolvent operator reduces to the projection operator.

On the other hand, related to the variational inequalities, we have the problem of finding the fixed points of nonexpansive mappings, which is a subject of current interest in functional analysis. It is natural to consider a unified approach to these two different problems. Motivated and inspired by the research going on in this direction, Noor and Huang [14] considered the problem of finding a common element of the set of solutions of variational inequalities and the set of fixed points of nonexpansive mappings. It is well known that every nonexpansive mapping is a Lipschitzian mapping. Lipschitzian mappings have been generalized by various authors. Sahu [23] introduced and investigated nearly uniformly Lipschitzian mappings as a generalization of Lipschitzian mappings.

In the present paper, we introduce and consider a new system of generalized nonlinear mixed variational inequalities involving six different nonlinear operators (SGNMVID). We first verify the equivalence between the SGNMVID and fixed point problems, and then, by this equivalent formulation, we discuss the existence and uniqueness of the solution of the SGNMVID. Applying nearly uniformly Lipschitzian mappings $S_i$ ($i=1,2,3$) and the aforesaid equivalent alternative formulation, we suggest and analyze some new three-step resolvent iterative algorithms with mixed errors for finding an element of the set of fixed points of the nearly uniformly Lipschitzian mapping $Q=(S_1,S_2,S_3)$, which is the unique solution of the SGNMVID. Also, the convergence analysis of the suggested iterative algorithms under suitable conditions is studied. In the final section, some comments on the results related to a class of strongly monotone mappings are discussed. The results presented in this paper extend and improve some known results in the literature.

2 Preliminaries and basic results

Throughout this article, we let $H$ be a real Hilbert space equipped with an inner product $\langle\cdot,\cdot\rangle$ and the corresponding norm $\|\cdot\|$. Let $T_i\colon H\times H\times H\rightarrow H$ and $g_i\colon H\rightarrow H$ ($i=1,2,3$) be six nonlinear single-valued operators such that for each $i=1,2,3$, $g_i$ is an onto operator, and let $\partial\varphi_i$ denote the subdifferential of the function $\varphi_i$ ($i=1,2,3$), where for each $i=1,2,3$, $\varphi_i\colon H\rightarrow\mathbb{R}\cup\{+\infty\}$ is a proper, convex and lower semicontinuous function on $H$. For any given constants $\rho,\eta,\gamma>0$, we consider the problem of finding $x^*,y^*,z^*\in H$ such that
$$\begin{cases}\langle\rho T_1(y^*,z^*,x^*)+x^*-g_1(y^*),\,g_1(x)-x^*\rangle\ge\rho\varphi_1(x^*)-\rho\varphi_1(g_1(x)),&\forall x\in H,\\ \langle\eta T_2(z^*,x^*,y^*)+y^*-g_2(z^*),\,g_2(x)-y^*\rangle\ge\eta\varphi_2(y^*)-\eta\varphi_2(g_2(x)),&\forall x\in H,\\ \langle\gamma T_3(x^*,y^*,z^*)+z^*-g_3(x^*),\,g_3(x)-z^*\rangle\ge\gamma\varphi_3(z^*)-\gamma\varphi_3(g_3(x)),&\forall x\in H,\end{cases}$$
(2.1)

which is called the system of generalized nonlinear mixed variational inequalities involving six different nonlinear operators (SGNMVID).

If for each $i=1,2,3$, $g_i\equiv I$, the identity operator, and $\varphi_i(x)=\delta_K(x)$ for all $x\in K$, where $\delta_K$ is the indicator function of a nonempty closed convex set $K$ in $H$ defined by
$$\delta_K(y)=\begin{cases}0,&y\in K,\\ +\infty,&y\notin K,\end{cases}$$
then problem (2.1) reduces to the following system:
$$\begin{cases}\langle\rho T_1(y^*,z^*,x^*)+x^*-y^*,\,x-x^*\rangle\ge0,&\forall x\in K,\\ \langle\eta T_2(z^*,x^*,y^*)+y^*-z^*,\,x-y^*\rangle\ge0,&\forall x\in K,\\ \langle\gamma T_3(x^*,y^*,z^*)+z^*-x^*,\,x-z^*\rangle\ge0,&\forall x\in K,\end{cases}$$
(2.2)

which was introduced and studied by Cho and Qin [6].
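
For orientation, the mechanism behind this reduction is the standard identification of the resolvent of an indicator function with a projection. In the display below, $N_K$ and $P_K$ (symbols not used elsewhere in this paper) denote the normal cone to $K$ and the metric projection onto $K$, respectively:
$$\partial\delta_K=N_K,\qquad (I+\lambda N_K)^{-1}(u)=P_K(u),\quad\forall u\in H,\ \lambda>0.$$
This is why system (2.2) can be handled by projection methods, whereas the general problem (2.1), with nonsmooth terms $\varphi_i$, calls for the resolvent operators recalled in Definition 2.2 below.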

For different choices of operators and constants, we obtain different systems and problems considered and studied in [1, 5, 8, 13, 17, 19–21] and the references therein.

Definition 2.1 A set-valued operator $T\colon H\rightarrow 2^{H}$ is said to be monotone if, for any $x,y\in H$,
$$\langle u-v,\,x-y\rangle\ge0,\quad\forall u\in T(x),\ \forall v\in T(y).$$

A monotone set-valued operator $T$ is called maximal if its graph, $\operatorname{Gph}(T):=\{(x,y)\in H\times H: y\in T(x)\}$, is not properly contained in the graph of any other monotone operator. It is well known that $T$ is a maximal monotone operator if and only if $(I+\lambda T)(H)=H$ for all $\lambda>0$, where $I$ denotes the identity operator on $H$.

Definition 2.2 [4]

For any maximal monotone operator T, the resolvent operator associated with T of parameter λ is defined as
$$J_T^{\lambda}(u)=(I+\lambda T)^{-1}(u),\quad\forall u\in H.$$
It is single-valued and nonexpansive, that is,
$$\|J_T^{\lambda}(u)-J_T^{\lambda}(v)\|\le\|u-v\|,\quad\forall u,v\in H.$$
If φ is a proper, convex and lower-semicontinuous function, then its subdifferential ∂φ is a maximal monotone operator, see Theorem 4 in [24]. In this case, we can define the resolvent operator associated with the subdifferential ∂φ of parameter λ as follows:
$$J_{\partial\varphi}^{\lambda}(u)=(I+\lambda\partial\varphi)^{-1}(u),\quad\forall u\in H.$$

The resolvent operator $J_{\partial\varphi}^{\lambda}$ has the following useful characterization.

Lemma 2.1 [13]

For a given $z\in H$, $x\in H$ satisfies the inequality
$$\langle x-z,\,y-x\rangle+\lambda\varphi(y)-\lambda\varphi(x)\ge0,\quad\forall y\in H,$$

if and only if $x=J_{\partial\varphi}^{\lambda}(z)$, where $J_{\partial\varphi}^{\lambda}$ is the resolvent operator associated with $\partial\varphi$ of parameter $\lambda>0$.

It is well known that $J_{\partial\varphi}^{\lambda}$ is nonexpansive, that is,
$$\|J_{\partial\varphi}^{\lambda}(u)-J_{\partial\varphi}^{\lambda}(v)\|\le\|u-v\|,\quad\forall u,v\in H.$$
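
As a concrete illustration (ours, not part of the original paper), take $H=\mathbb{R}$ and $\varphi(x)=|x|$; the resolvent $J_{\partial\varphi}^{\lambda}$ is then the soft-thresholding map. The following minimal Python sketch, with hypothetical helper names, evaluates this resolvent and numerically checks both the characterization of Lemma 2.1 and the nonexpansivity bound:

```python
import numpy as np

def resolvent_abs(u, lam):
    """Resolvent J_{d phi}^{lam}(u) = (I + lam * d phi)^{-1}(u) for phi(x) = |x| on R.
    Solving u in x + lam * d|x| gives soft-thresholding with threshold lam."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

lam = 0.5
u, v = 1.3, -0.2
Ju, Jv = resolvent_abs(u, lam), resolvent_abs(v, lam)

# Nonexpansivity: |J(u) - J(v)| <= |u - v|
assert abs(Ju - Jv) <= abs(u - v) + 1e-12

# Characterization of Lemma 2.1: <x - z, y - x> + lam*phi(y) - lam*phi(x) >= 0 for all y,
# with z = u and x = J(u); checked here on a grid of test points y.
x, z = Ju, u
for y in np.linspace(-3.0, 3.0, 601):
    assert (x - z) * (y - x) + lam * abs(y) - lam * abs(x) >= -1e-10
print("resolvent value:", Ju)
```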

Let us recall that a mapping $T\colon H\rightarrow H$ is nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in H$. In recent years, nonexpansive mappings have been generalized and investigated by various authors. In the next definitions, several generalizations of nonexpansive mappings are stated.

Definition 2.3 A nonlinear mapping $T\colon H\rightarrow H$ is called
(a) $L$-Lipschitzian if there exists a constant $L>0$ such that
$$\|Tx-Ty\|\le L\|x-y\|,\quad\forall x,y\in H;$$
(b) generalized Lipschitzian [25] if there exists a constant $L>0$ such that
$$\|Tx-Ty\|\le L\bigl(\|x-y\|+1\bigr),\quad\forall x,y\in H;$$
(c) generalized $(L,M)$-Lipschitzian [23] if there exist two constants $L,M>0$ such that
$$\|Tx-Ty\|\le L\bigl(\|x-y\|+M\bigr),\quad\forall x,y\in H;$$
(d) asymptotically nonexpansive [26] if there exists a sequence $\{k_n\}\subset[1,\infty)$ with $\lim_{n\to\infty}k_n=1$ such that for each $n\in\mathbb{N}$,
$$\|T^nx-T^ny\|\le k_n\|x-y\|,\quad\forall x,y\in H;$$
(e) pointwise asymptotically nonexpansive [27] if, for each integer $n\ge1$,
$$\|T^nx-T^ny\|\le\alpha_n(x)\|x-y\|,\quad\forall x,y\in H,$$
where $\alpha_n\to1$ pointwise on $X$;
(f) uniformly $L$-Lipschitzian if there exists a constant $L>0$ such that for each $n\in\mathbb{N}$,
$$\|T^nx-T^ny\|\le L\|x-y\|,\quad\forall x,y\in H.$$

Definition 2.4 [23]

A nonlinear mapping $T\colon H\rightarrow H$ is said to be nearly Lipschitzian with respect to the sequence $\{a_n\}$ if for each $n\in\mathbb{N}$, there exists a constant $k_n>0$ such that
$$\|T^nx-T^ny\|\le k_n\bigl(\|x-y\|+a_n\bigr),\quad\forall x,y\in H,$$
    (2.3)
     

where $\{a_n\}$ is a fixed sequence in $[0,\infty)$ with $a_n\to0$ as $n\to\infty$.

For an arbitrary, but fixed $n\in\mathbb{N}$, the infimum of the constants $k_n$ in (2.3) is called the nearly Lipschitz constant and is denoted by $\eta(T^n)$. Notice that
$$\eta(T^n)=\sup\left\{\frac{\|T^nx-T^ny\|}{\|x-y\|+a_n}: x,y\in H,\ x\ne y\right\}.$$

Definition 2.5 [23]

A nearly Lipschitzian mapping $T$ with the sequence $\{(a_n,\eta(T^n))\}$ is said to be
(a) nearly nonexpansive if $\eta(T^n)=1$ for all $n\in\mathbb{N}$, that is,
$$\|T^nx-T^ny\|\le\|x-y\|+a_n,\quad\forall x,y\in H;$$
(b) nearly asymptotically nonexpansive if $\eta(T^n)\ge1$ for all $n\in\mathbb{N}$ and $\lim_{n\to\infty}\eta(T^n)=1$, in other words, $k_n\ge1$ for all $n\in\mathbb{N}$ with $\lim_{n\to\infty}k_n=1$;
(c) nearly uniformly $L$-Lipschitzian if $\eta(T^n)\le L$ for all $n\in\mathbb{N}$, in other words, one can take $k_n=L$ for all $n\in\mathbb{N}$.
Remark 2.2 It should be pointed out that:
(a) Every nonexpansive mapping is an asymptotically nonexpansive mapping, and every asymptotically nonexpansive mapping is a pointwise asymptotically nonexpansive mapping. Also, the class of Lipschitzian mappings properly includes the class of pointwise asymptotically nonexpansive mappings.
(b) It is obvious that every Lipschitzian mapping is a generalized Lipschitzian mapping. Furthermore, every mapping with a bounded range is a generalized Lipschitzian mapping. It is easy to see that the class of generalized $(L,M)$-Lipschitzian mappings is more general than the class of generalized Lipschitzian mappings.
(c) Clearly, the class of nearly uniformly $L$-Lipschitzian mappings properly includes the class of generalized $(L,M)$-Lipschitzian mappings and that of uniformly $L$-Lipschitzian mappings. Note that every nearly asymptotically nonexpansive mapping is nearly uniformly $L$-Lipschitzian.

Some interesting examples to investigate relations between the mappings given in Definitions 2.3, 2.4 and 2.5 can be found in [3].
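
To make Definitions 2.4 and 2.5 concrete, the following small numerical check (our own illustration, in the spirit of the examples in [3]) uses the discontinuous, hence non-Lipschitzian, self-map $T$ of $[0,1]$ with $Tx=1/2$ for $x\le1/2$ and $Tx=0$ for $x>1/2$; since $T^nx=1/2$ for every $x$ and every $n\ge2$, the map is nearly nonexpansive with $a_1=1/2$ and $a_n=0$ for $n\ge2$:

```python
import itertools
import numpy as np

def T(x):
    """Discontinuous self-map of [0, 1]: not Lipschitzian, yet nearly nonexpansive."""
    return 0.5 if x <= 0.5 else 0.0

def T_iter(x, n):
    """n-fold composition T^n applied to x."""
    for _ in range(n):
        x = T(x)
    return x

a = {1: 0.5}                       # a_1 = 1/2, a_n = 0 for n >= 2, so a_n -> 0
grid = np.linspace(0.0, 1.0, 201)
for n in range(1, 5):
    a_n = a.get(n, 0.0)
    for x, y in itertools.product(grid, repeat=2):
        # nearly nonexpansive bound: |T^n x - T^n y| <= |x - y| + a_n
        assert abs(T_iter(x, n) - T_iter(y, n)) <= abs(x - y) + a_n + 1e-12
print("nearly nonexpansive on the grid, but |T(0.5) - T(0.5 + eps)| = 0.5 for tiny eps")
```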

3 Existence and uniqueness of a solution

In this section, we prove an existence and uniqueness theorem for a solution of the system of generalized nonlinear mixed variational inequalities (2.1). To this end, we need the following lemma, in which, by using the resolvent operator technique and Lemma 2.1, the equivalence between the system of generalized nonlinear mixed variational inequalities (2.1) and fixed point problems is established.

Lemma 3.1 Let $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ be the same as in SGNMVID (2.1). Then $(x^*,y^*,z^*)\in H\times H\times H$ is a solution of SGNMVID (2.1) if and only if
$$\begin{cases}x^*=J_{\partial\varphi_1}^{\rho}(g_1(y^*)-\rho T_1(y^*,z^*,x^*)),\\ y^*=J_{\partial\varphi_2}^{\eta}(g_2(z^*)-\eta T_2(z^*,x^*,y^*)),\\ z^*=J_{\partial\varphi_3}^{\gamma}(g_3(x^*)-\gamma T_3(x^*,y^*,z^*)),\end{cases}$$
(3.1)

where $J_{\partial\varphi_1}^{\rho}$ is the resolvent operator associated with $\partial\varphi_1$ of parameter $\rho$, $J_{\partial\varphi_2}^{\eta}$ is the resolvent operator associated with $\partial\varphi_2$ of parameter $\eta$, and $J_{\partial\varphi_3}^{\gamma}$ is the resolvent operator associated with $\partial\varphi_3$ of parameter $\gamma$.

Proof $(x^*,y^*,z^*)\in H\times H\times H$ is a solution of SGNMVID (2.1) if and only if
$$\begin{cases}\langle x^*-(g_1(y^*)-\rho T_1(y^*,z^*,x^*)),\,g_1(x)-x^*\rangle+\rho\varphi_1(g_1(x))-\rho\varphi_1(x^*)\ge0,&\forall x\in H,\\ \langle y^*-(g_2(z^*)-\eta T_2(z^*,x^*,y^*)),\,g_2(x)-y^*\rangle+\eta\varphi_2(g_2(x))-\eta\varphi_2(y^*)\ge0,&\forall x\in H,\\ \langle z^*-(g_3(x^*)-\gamma T_3(x^*,y^*,z^*)),\,g_3(x)-z^*\rangle+\gamma\varphi_3(g_3(x))-\gamma\varphi_3(z^*)\ge0,&\forall x\in H.\end{cases}$$
(3.2)
Since for each $i=1,2,3$, $g_i$ is an onto operator, Lemma 2.1 implies that $(x^*,y^*,z^*)\in H\times H\times H$ is a solution of (3.2) if and only if
$$\begin{cases}x^*=J_{\partial\varphi_1}^{\rho}(g_1(y^*)-\rho T_1(y^*,z^*,x^*)),\\ y^*=J_{\partial\varphi_2}^{\eta}(g_2(z^*)-\eta T_2(z^*,x^*,y^*)),\\ z^*=J_{\partial\varphi_3}^{\gamma}(g_3(x^*)-\gamma T_3(x^*,y^*,z^*)).\end{cases}$$

This completes the proof. □

Definition 3.1 Let $T\colon H\times H\times H\rightarrow H$ and $g\colon H\rightarrow H$ be two single-valued operators. Then:
(a) $T$ is called monotone in the first variable if
$$\langle T(x,\cdot,\cdot)-T(y,\cdot,\cdot),\,x-y\rangle\ge0,\quad\forall x,y\in H;$$
(b) $T$ is called $r$-strongly monotone in the first variable if there exists a constant $r>0$ such that
$$\langle T(x,\cdot,\cdot)-T(y,\cdot,\cdot),\,x-y\rangle\ge r\|x-y\|^2,\quad\forall x,y\in H;$$
(c) $T$ is called $(\kappa,\theta)$-relaxed cocoercive in the first variable if there exist two constants $\kappa,\theta>0$ such that
$$\langle T(x,\cdot,\cdot)-T(y,\cdot,\cdot),\,x-y\rangle\ge-\kappa\|T(x,\cdot,\cdot)-T(y,\cdot,\cdot)\|^2+\theta\|x-y\|^2,\quad\forall x,y\in H;$$
(d) $T$ is said to be $\mu$-Lipschitz continuous in the first variable if there exists a constant $\mu>0$ such that
$$\|T(x,\cdot,\cdot)-T(y,\cdot,\cdot)\|\le\mu\|x-y\|,\quad\forall x,y\in H;$$
(e) $g$ is called $\gamma$-Lipschitz continuous if there exists a constant $\gamma>0$ such that
$$\|g(x)-g(y)\|\le\gamma\|x-y\|,\quad\forall x,y\in H;$$
(f) $g$ is said to be $\nu$-strongly monotone if there exists a constant $\nu>0$ such that
$$\langle g(x)-g(y),\,x-y\rangle\ge\nu\|x-y\|^2,\quad\forall x,y\in H.$$
Theorem 3.2 Let $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ be the same as in SGNMVID (2.1) such that for each $i=1,2,3$, $T_i$ is $\varsigma_i$-strongly monotone and $\sigma_i$-Lipschitz continuous in the first variable, and $g_i$ is $\pi_i$-strongly monotone and $\delta_i$-Lipschitz continuous. If the constants $\rho$, $\eta$ and $\gamma$ satisfy the following conditions:
$$\begin{cases}\left|\rho-\dfrac{\varsigma_1}{\sigma_1^2}\right|<\dfrac{\sqrt{\varsigma_1^2-\sigma_1^2\mu_1(2-\mu_1)}}{\sigma_1^2},\\[2mm] \left|\eta-\dfrac{\varsigma_2}{\sigma_2^2}\right|<\dfrac{\sqrt{\varsigma_2^2-\sigma_2^2\mu_2(2-\mu_2)}}{\sigma_2^2},\\[2mm] \left|\gamma-\dfrac{\varsigma_3}{\sigma_3^2}\right|<\dfrac{\sqrt{\varsigma_3^2-\sigma_3^2\mu_3(2-\mu_3)}}{\sigma_3^2},\\[2mm] \varsigma_i>\sigma_i\sqrt{\mu_i(2-\mu_i)}\quad(i=1,2,3),\\ \mu_i=\sqrt{1-(2\pi_i-\delta_i^2)}<1\quad(i=1,2,3),\\ 2\pi_i<1+\delta_i^2\quad(i=1,2,3),\end{cases}$$
(3.3)

then SGNMVID (2.1) admits a unique solution.

Proof Define the mappings $\Psi,\Phi,\Theta\colon H\times H\times H\rightarrow H$ by
$$\begin{aligned}\Psi(x,y,z)&=J_{\partial\varphi_1}^{\rho}(g_1(y)-\rho T_1(y,z,x)),\\ \Phi(x,y,z)&=J_{\partial\varphi_2}^{\eta}(g_2(z)-\eta T_2(z,x,y)),\\ \Theta(x,y,z)&=J_{\partial\varphi_3}^{\gamma}(g_3(x)-\gamma T_3(x,y,z)),\end{aligned}$$
(3.4)
for all $(x,y,z)\in H\times H\times H$. Define the norm $\|\cdot\|_*$ on $H\times H\times H$ by
$$\|(x,y,z)\|_*=\|x\|+\|y\|+\|z\|,\quad\forall(x,y,z)\in H\times H\times H.$$
It is obvious that $(H\times H\times H,\|\cdot\|_*)$ is a Banach space. Moreover, define $F\colon H\times H\times H\rightarrow H\times H\times H$ as follows:
$$F(x,y,z)=\bigl(\Psi(x,y,z),\Phi(x,y,z),\Theta(x,y,z)\bigr),\quad\forall(x,y,z)\in H\times H\times H.$$
(3.5)
Now, we prove that $F$ is a contraction mapping. To this end, let $(x,y,z),(\hat{x},\hat{y},\hat{z})\in H\times H\times H$ be given. By using the nonexpansivity of the resolvent operator $J_{\partial\varphi_1}^{\rho}$, we get
$$\begin{aligned}\|\Psi(x,y,z)-\Psi(\hat{x},\hat{y},\hat{z})\|&=\|J_{\partial\varphi_1}^{\rho}(g_1(y)-\rho T_1(y,z,x))-J_{\partial\varphi_1}^{\rho}(g_1(\hat{y})-\rho T_1(\hat{y},\hat{z},\hat{x}))\|\\ &\le\|g_1(y)-g_1(\hat{y})-\rho(T_1(y,z,x)-T_1(\hat{y},\hat{z},\hat{x}))\|\\ &\le\|y-\hat{y}-(g_1(y)-g_1(\hat{y}))\|+\|y-\hat{y}-\rho(T_1(y,z,x)-T_1(\hat{y},\hat{z},\hat{x}))\|.\end{aligned}$$
(3.6)
Because $g_1$ is $\pi_1$-strongly monotone and $\delta_1$-Lipschitz continuous, we have
$$\begin{aligned}\|y-\hat{y}-(g_1(y)-g_1(\hat{y}))\|^2&=\|y-\hat{y}\|^2-2\langle g_1(y)-g_1(\hat{y}),\,y-\hat{y}\rangle+\|g_1(y)-g_1(\hat{y})\|^2\\ &\le(1-2\pi_1)\|y-\hat{y}\|^2+\|g_1(y)-g_1(\hat{y})\|^2\\ &\le(1-2\pi_1+\delta_1^2)\|y-\hat{y}\|^2.\end{aligned}$$
(3.7)
Since $T_1$ is $\varsigma_1$-strongly monotone and $\sigma_1$-Lipschitz continuous in the first variable, we conclude that
$$\begin{aligned}\|y-\hat{y}-\rho(T_1(y,z,x)-T_1(\hat{y},\hat{z},\hat{x}))\|^2&=\|y-\hat{y}\|^2-2\rho\langle T_1(y,z,x)-T_1(\hat{y},\hat{z},\hat{x}),\,y-\hat{y}\rangle\\ &\quad+\rho^2\|T_1(y,z,x)-T_1(\hat{y},\hat{z},\hat{x})\|^2\\ &\le(1-2\rho\varsigma_1)\|y-\hat{y}\|^2+\rho^2\|T_1(y,z,x)-T_1(\hat{y},\hat{z},\hat{x})\|^2\\ &\le(1-2\rho\varsigma_1+\rho^2\sigma_1^2)\|y-\hat{y}\|^2.\end{aligned}$$
(3.8)
Substituting (3.7) and (3.8) in (3.6), we deduce that
$$\|\Psi(x,y,z)-\Psi(\hat{x},\hat{y},\hat{z})\|\le\Bigl(\sqrt{1-2\pi_1+\delta_1^2}+\sqrt{1-2\rho\varsigma_1+\rho^2\sigma_1^2}\Bigr)\|y-\hat{y}\|.$$
(3.9)
As in the proof of (3.9), we can establish that
$$\|\Phi(x,y,z)-\Phi(\hat{x},\hat{y},\hat{z})\|\le\Bigl(\sqrt{1-2\pi_2+\delta_2^2}+\sqrt{1-2\eta\varsigma_2+\eta^2\sigma_2^2}\Bigr)\|z-\hat{z}\|$$
(3.10)
and
$$\|\Theta(x,y,z)-\Theta(\hat{x},\hat{y},\hat{z})\|\le\Bigl(\sqrt{1-2\pi_3+\delta_3^2}+\sqrt{1-2\gamma\varsigma_3+\gamma^2\sigma_3^2}\Bigr)\|x-\hat{x}\|.$$
(3.11)
From (3.9)-(3.11), it follows that
$$\|\Psi(x,y,z)-\Psi(\hat{x},\hat{y},\hat{z})\|+\|\Phi(x,y,z)-\Phi(\hat{x},\hat{y},\hat{z})\|+\|\Theta(x,y,z)-\Theta(\hat{x},\hat{y},\hat{z})\|\le\vartheta\|x-\hat{x}\|+\theta\|y-\hat{y}\|+\varrho\|z-\hat{z}\|,$$
(3.12)
where
$$\begin{aligned}\vartheta&=\sqrt{1-2\pi_3+\delta_3^2}+\sqrt{1-2\gamma\varsigma_3+\gamma^2\sigma_3^2},\\ \theta&=\sqrt{1-2\pi_1+\delta_1^2}+\sqrt{1-2\rho\varsigma_1+\rho^2\sigma_1^2},\\ \varrho&=\sqrt{1-2\pi_2+\delta_2^2}+\sqrt{1-2\eta\varsigma_2+\eta^2\sigma_2^2}.\end{aligned}$$
(3.13)
Applying (3.5) and (3.12), we conclude that
$$\|F(x,y,z)-F(\hat{x},\hat{y},\hat{z})\|_*\le\lambda\|(x,y,z)-(\hat{x},\hat{y},\hat{z})\|_*,$$
(3.14)

where $\lambda=\max\{\vartheta,\theta,\varrho\}$. Condition (3.3) implies that $0\le\lambda<1$, and so (3.14) guarantees that the mapping $F$ is a contraction. According to the Banach fixed point theorem, there exists a unique point $(x^*,y^*,z^*)\in H\times H\times H$ such that $F(x^*,y^*,z^*)=(x^*,y^*,z^*)$. It follows from (3.4) and (3.5) that $x^*=J_{\partial\varphi_1}^{\rho}(g_1(y^*)-\rho T_1(y^*,z^*,x^*))$, $y^*=J_{\partial\varphi_2}^{\eta}(g_2(z^*)-\eta T_2(z^*,x^*,y^*))$ and $z^*=J_{\partial\varphi_3}^{\gamma}(g_3(x^*)-\gamma T_3(x^*,y^*,z^*))$. Now, it follows from Lemma 3.1 that $(x^*,y^*,z^*)\in H\times H\times H$ is the unique solution of SGNMVID (2.1). This completes the proof. □
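
The role of condition (3.3) is simply to force the contraction constant $\lambda=\max\{\vartheta,\theta,\varrho\}$ of (3.13)-(3.14) below 1. The following Python sketch (ours, with an arbitrarily chosen set of sample constants) evaluates the contraction factors and checks one admissible choice of $\rho$, $\eta$, $\gamma$:

```python
import math

def contraction_factor(pi, delta, varsigma, sigma, step):
    """One of theta / varrho / vartheta in (3.13):
    sqrt(1 - 2*pi + delta^2) + sqrt(1 - 2*step*varsigma + step^2*sigma^2)."""
    return math.sqrt(1 - 2 * pi + delta ** 2) + math.sqrt(1 - 2 * step * varsigma + step ** 2 * sigma ** 2)

def condition_33(pi, delta, varsigma, sigma, step):
    """Check one block of condition (3.3) for the given constants and step size."""
    mu = math.sqrt(1 - (2 * pi - delta ** 2))
    ok = (2 * pi < 1 + delta ** 2) and (mu < 1) and (varsigma > sigma * math.sqrt(mu * (2 - mu)))
    return ok and abs(step - varsigma / sigma ** 2) < math.sqrt(varsigma ** 2 - sigma ** 2 * mu * (2 - mu)) / sigma ** 2

# sample constants: g_i are pi_i-strongly monotone and delta_i-Lipschitz;
# T_i are varsigma_i-strongly monotone and sigma_i-Lipschitz in the first variable
params = [  # (pi_i, delta_i, varsigma_i, sigma_i, step) with step in {rho, eta, gamma}
    (0.9, 1.0, 0.9, 1.0, 0.9),
    (0.9, 1.0, 0.9, 1.0, 0.9),
    (0.9, 1.0, 0.9, 1.0, 0.9),
]
factors = [contraction_factor(*p) for p in params]
lam = max(factors)
print("theta, varrho, vartheta =", factors, " lambda =", lam)
assert all(condition_33(*p) for p in params) and lam < 1
```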

4 Some new three-step resolvent iterative algorithms

In this section, applying nearly uniformly Lipschitzian mappings $S_i$ ($i=1,2,3$) and using the equivalent alternative formulation (3.1), we suggest and analyze some new three-step resolvent iterative algorithms with mixed errors for finding an element of the set of fixed points of $Q=(S_1,S_2,S_3)$, which is the unique solution of SGNMVID (2.1).

Let $S_1\colon H\rightarrow H$ be a nearly uniformly $L_1$-Lipschitzian mapping with the sequence $\{a_n\}_{n=1}^{\infty}$, let $S_2\colon H\rightarrow H$ be a nearly uniformly $L_2$-Lipschitzian mapping with the sequence $\{b_n\}_{n=1}^{\infty}$, and let $S_3\colon H\rightarrow H$ be a nearly uniformly $L_3$-Lipschitzian mapping with the sequence $\{c_n\}_{n=1}^{\infty}$. We define the self-mapping $Q$ of $H\times H\times H$ as follows:
$$Q(x,y,z)=(S_1x,S_2y,S_3z),\quad\forall x,y,z\in H.$$
(4.1)
Then $Q=(S_1,S_2,S_3)\colon H\times H\times H\rightarrow H\times H\times H$ is a nearly uniformly $\max\{L_1,L_2,L_3\}$-Lipschitzian mapping with the sequence $\{a_n+b_n+c_n\}_{n=1}^{\infty}$ with respect to the norm $\|\cdot\|_*$ in $H\times H\times H$. To see this fact, let $(x,y,z),(x',y',z')\in H\times H\times H$ be arbitrary. Then, for any $n\in\mathbb{N}$, we have
$$\begin{aligned}\|Q^n(x,y,z)-Q^n(x',y',z')\|_*&=\|(S_1^nx,S_2^ny,S_3^nz)-(S_1^nx',S_2^ny',S_3^nz')\|_*\\ &=\|(S_1^nx-S_1^nx',S_2^ny-S_2^ny',S_3^nz-S_3^nz')\|_*\\ &=\|S_1^nx-S_1^nx'\|+\|S_2^ny-S_2^ny'\|+\|S_3^nz-S_3^nz'\|\\ &\le L_1(\|x-x'\|+a_n)+L_2(\|y-y'\|+b_n)+L_3(\|z-z'\|+c_n)\\ &\le\max\{L_1,L_2,L_3\}\bigl(\|x-x'\|+\|y-y'\|+\|z-z'\|+a_n+b_n+c_n\bigr)\\ &=\max\{L_1,L_2,L_3\}\bigl(\|(x,y,z)-(x',y',z')\|_*+a_n+b_n+c_n\bigr).\end{aligned}$$
We denote the sets of all the fixed points of $S_i$ ($i=1,2,3$) and $Q$ by $\operatorname{Fix}(S_i)$ and $\operatorname{Fix}(Q)$, respectively, and the set of all the solutions of system (2.1) by $\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)$. It is clear that for any $(x,y,z)\in H\times H\times H$, $(x,y,z)\in\operatorname{Fix}(Q)$ if and only if $x\in\operatorname{Fix}(S_1)$, $y\in\operatorname{Fix}(S_2)$ and $z\in\operatorname{Fix}(S_3)$, that is, $\operatorname{Fix}(Q)=\operatorname{Fix}(S_1,S_2,S_3)=\operatorname{Fix}(S_1)\times\operatorname{Fix}(S_2)\times\operatorname{Fix}(S_3)$. We now characterize the problem. Let $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ be the same as in SGNMVID (2.1). If $(x^*,y^*,z^*)\in\operatorname{Fix}(Q)\cap\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)$, then $x^*\in\operatorname{Fix}(S_1)$, $y^*\in\operatorname{Fix}(S_2)$, $z^*\in\operatorname{Fix}(S_3)$ and $(x^*,y^*,z^*)\in\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)$. Therefore, it follows from Lemma 3.1 that for each $n\in\mathbb{N}$,
$$\begin{cases}x^*=S_1^nx^*=J_{\partial\varphi_1}^{\rho}(g_1(y^*)-\rho T_1(y^*,z^*,x^*))=S_1^nJ_{\partial\varphi_1}^{\rho}(g_1(y^*)-\rho T_1(y^*,z^*,x^*)),\\ y^*=S_2^ny^*=J_{\partial\varphi_2}^{\eta}(g_2(z^*)-\eta T_2(z^*,x^*,y^*))=S_2^nJ_{\partial\varphi_2}^{\eta}(g_2(z^*)-\eta T_2(z^*,x^*,y^*)),\\ z^*=S_3^nz^*=J_{\partial\varphi_3}^{\gamma}(g_3(x^*)-\gamma T_3(x^*,y^*,z^*))=S_3^nJ_{\partial\varphi_3}^{\gamma}(g_3(x^*)-\gamma T_3(x^*,y^*,z^*)).\end{cases}$$
(4.2)

The fixed point formulation (4.2) is used to suggest the following three-step resolvent iterative algorithm with mixed errors for finding an element of the set of fixed points of the nearly uniformly Lipschitzian mapping $Q=(S_1,S_2,S_3)$, which is the unique solution of SGNMVID (2.1).

Algorithm 4.1 Let $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ be the same as in SGNMVID (2.1). For an arbitrarily chosen initial point $(x_1,y_1,z_1)\in H\times H\times H$, compute the iterative sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ by the iterative processes
$$\begin{cases}x_{n+1}=(1-\alpha_n-\beta_n)x_n+\alpha_nS_1^nJ_{\partial\varphi_1}^{\rho}(g_1(y_{n+1})-\rho T_1(y_{n+1},z_{n+1},x_n))+\alpha_ne_n+\beta_nj_n+r_n,\\ y_{n+1}=(1-\alpha_n'-\beta_n')x_n+\alpha_n'S_2^nJ_{\partial\varphi_2}^{\eta}(g_2(z_{n+1})-\eta T_2(z_{n+1},x_n,y_n))+\alpha_n'p_n+\beta_n'q_n+k_n,\\ z_{n+1}=(1-\alpha_n''-\beta_n'')x_n+\alpha_n''S_3^nJ_{\partial\varphi_3}^{\gamma}(g_3(x_n)-\gamma T_3(x_n,y_n,z_n))+\alpha_n''s_n+\beta_n''t_n+l_n,\end{cases}$$
(4.3)
where $S_i\colon H\rightarrow H$ ($i=1,2,3$) are three nearly uniformly Lipschitzian mappings, $\{\alpha_n\}_{n=1}^{\infty}$, $\{\alpha_n'\}_{n=1}^{\infty}$, $\{\alpha_n''\}_{n=1}^{\infty}$, $\{\beta_n\}_{n=1}^{\infty}$, $\{\beta_n'\}_{n=1}^{\infty}$ and $\{\beta_n''\}_{n=1}^{\infty}$ are sequences in the interval $[0,1]$ such that $\sum_{n=1}^{\infty}\alpha_n=\infty$, $\sum_{n=1}^{\infty}\beta_n<\infty$, $\sum_{n=1}^{\infty}\beta_n'<\infty$, $\sum_{n=1}^{\infty}\beta_n''<\infty$, $\alpha_n+\beta_n\le1$, $\alpha_n'+\beta_n'\le1$, $\alpha_n''+\beta_n''\le1$, $\lim_{n\to\infty}\alpha_n'=1$, $\lim_{n\to\infty}\alpha_n''=1$, and $\{e_n\}_{n=1}^{\infty}$, $\{p_n\}_{n=1}^{\infty}$, $\{s_n\}_{n=1}^{\infty}$, $\{j_n\}_{n=1}^{\infty}$, $\{q_n\}_{n=1}^{\infty}$, $\{t_n\}_{n=1}^{\infty}$, $\{r_n\}_{n=1}^{\infty}$, $\{k_n\}_{n=1}^{\infty}$, $\{l_n\}_{n=1}^{\infty}$ are nine sequences in $H$ introduced to take into account a possible inexact computation of the resolvent operator point, satisfying the following conditions:
$$\begin{cases}e_n=e_n'+e_n'',\qquad p_n=p_n'+p_n'',\qquad s_n=s_n'+s_n'',\\ \lim_{n\to\infty}\|e_n'\|=\lim_{n\to\infty}\|p_n'\|=\lim_{n\to\infty}\|s_n'\|=0,\\ \sum_{n=1}^{\infty}\|e_n''\|<\infty,\qquad\sum_{n=1}^{\infty}\|p_n''\|<\infty,\qquad\sum_{n=1}^{\infty}\|s_n''\|<\infty,\\ \sum_{n=1}^{\infty}\|r_n\|<\infty,\qquad\sum_{n=1}^{\infty}\|k_n\|<\infty,\qquad\sum_{n=1}^{\infty}\|l_n\|<\infty.\end{cases}$$
(4.4)

If for each $i=1,2,3$, $S_i\equiv I$, the identity operator, then Algorithm 4.1 reduces to the following algorithm.

Algorithm 4.2 Let $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ be the same as in SGNMVID (2.1). For an arbitrarily chosen initial point $(x_1,y_1,z_1)\in H\times H\times H$, compute the iterative sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ in the following way:
$$\begin{cases}x_{n+1}=(1-\alpha_n-\beta_n)x_n+\alpha_nJ_{\partial\varphi_1}^{\rho}(g_1(y_{n+1})-\rho T_1(y_{n+1},z_{n+1},x_n))+\alpha_ne_n+\beta_nj_n+r_n,\\ y_{n+1}=(1-\alpha_n'-\beta_n')x_n+\alpha_n'J_{\partial\varphi_2}^{\eta}(g_2(z_{n+1})-\eta T_2(z_{n+1},x_n,y_n))+\alpha_n'p_n+\beta_n'q_n+k_n,\\ z_{n+1}=(1-\alpha_n''-\beta_n'')x_n+\alpha_n''J_{\partial\varphi_3}^{\gamma}(g_3(x_n)-\gamma T_3(x_n,y_n,z_n))+\alpha_n''s_n+\beta_n''t_n+l_n,\end{cases}$$

where the sequences $\{\alpha_n\}_{n=1}^{\infty}$, $\{\alpha_n'\}_{n=1}^{\infty}$, $\{\alpha_n''\}_{n=1}^{\infty}$, $\{\beta_n\}_{n=1}^{\infty}$, $\{\beta_n'\}_{n=1}^{\infty}$, $\{\beta_n''\}_{n=1}^{\infty}$, $\{e_n\}_{n=1}^{\infty}$, $\{p_n\}_{n=1}^{\infty}$, $\{s_n\}_{n=1}^{\infty}$, $\{j_n\}_{n=1}^{\infty}$, $\{q_n\}_{n=1}^{\infty}$, $\{t_n\}_{n=1}^{\infty}$, $\{r_n\}_{n=1}^{\infty}$, $\{k_n\}_{n=1}^{\infty}$ and $\{l_n\}_{n=1}^{\infty}$ are the same as in Algorithm 4.1.
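
To see the structure of Algorithm 4.2 in action, here is a minimal Python sketch of a toy instance (our own choice, not from the paper): $H=\mathbb{R}$, $g_i=I$, $\varphi_i=|\cdot|$ (so each resolvent is soft-thresholding), $T_i(a,b,c)=0.9a$, $\beta_n\equiv0$ and all error terms zero, for which the unique solution of (2.1) is $(0,0,0)$; the helper names are hypothetical:

```python
import numpy as np

def soft(u, lam):
    """Resolvent of lam * d|.| on R (soft-thresholding)."""
    return np.sign(u) * max(abs(u) - lam, 0.0)

# toy data: H = R, g_i = I, phi_i = |.|, T_i(a, b, c) = 0.9 * a
rho = eta = gam = 0.9
T = lambda a, b, c: 0.9 * a
g = lambda t: t

x, y, z = 5.0, -3.0, 2.0                       # arbitrary starting point
for n in range(1, 200):
    alpha = n / (n + 1.0)                      # alpha_n = alpha_n' = alpha_n'', beta_n = 0, no errors
    z_new = (1 - alpha) * x + alpha * soft(g(x) - gam * T(x, y, z), gam)
    y_new = (1 - alpha) * x + alpha * soft(g(z_new) - eta * T(z_new, x, y), eta)
    x_new = (1 - alpha) * x + alpha * soft(g(y_new) - rho * T(y_new, z_new, x), rho)
    x, y, z = x_new, y_new, z_new

print(x, y, z)   # all three components tend to the unique solution (0, 0, 0)
```

Note the order of the updates inside one iteration: $z_{n+1}$ is computed first, then $y_{n+1}$ (which uses $z_{n+1}$), and finally $x_{n+1}$ (which uses $y_{n+1}$ and $z_{n+1}$), exactly as prescribed by the three-step scheme.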

Remark 4.3 Equality (4.2) can be written as follows:
$$\begin{cases}x^*=S_1^nJ_{\partial\varphi_1}^{\rho}(u^*),\qquad y^*=S_2^nJ_{\partial\varphi_2}^{\eta}(v^*),\qquad z^*=S_3^nJ_{\partial\varphi_3}^{\gamma}(w^*),\\ u^*=g_1(y^*)-\rho T_1(y^*,z^*,x^*),\\ v^*=g_2(z^*)-\eta T_2(z^*,x^*,y^*),\\ w^*=g_3(x^*)-\gamma T_3(x^*,y^*,z^*).\end{cases}$$
(4.5)

The fixed point formulation (4.5) enables us to suggest the following iterative algorithms.

Algorithm 4.4 Let $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ be the same as in SGNMVID (2.1). For an arbitrarily chosen initial point $(u_1,v_1,w_1)\in H\times H\times H$, compute the iterative sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ in the following way:
$$\begin{cases}x_n=S_1^nJ_{\partial\varphi_1}^{\rho}(u_n),\qquad y_n=S_2^nJ_{\partial\varphi_2}^{\eta}(v_n),\qquad z_n=S_3^nJ_{\partial\varphi_3}^{\gamma}(w_n),\\ u_{n+1}=(1-\alpha_n-\beta_n)u_n+\alpha_n(g_1(y_n)-\rho T_1(y_n,z_n,x_n))+\alpha_ne_n+\beta_nj_n+r_n,\\ v_{n+1}=(1-\alpha_n-\beta_n)v_n+\alpha_n(g_2(z_n)-\eta T_2(z_n,x_n,y_n))+\alpha_np_n+\beta_nq_n+k_n,\\ w_{n+1}=(1-\alpha_n-\beta_n)w_n+\alpha_n(g_3(x_n)-\gamma T_3(x_n,y_n,z_n))+\alpha_ns_n+\beta_nt_n+l_n,\end{cases}$$
(4.6)

where $S_i\colon H\rightarrow H$ ($i=1,2,3$) are three nearly uniformly Lipschitzian mappings, $\{\alpha_n\}_{n=1}^{\infty}$, $\{\beta_n\}_{n=1}^{\infty}$ are sequences in $[0,1]$ such that $\sum_{n=1}^{\infty}\alpha_n=\infty$, $\sum_{n=1}^{\infty}\beta_n<\infty$, $\alpha_n+\beta_n\le1$, and the sequences $\{e_n\}_{n=1}^{\infty}$, $\{p_n\}_{n=1}^{\infty}$, $\{s_n\}_{n=1}^{\infty}$, $\{j_n\}_{n=1}^{\infty}$, $\{q_n\}_{n=1}^{\infty}$, $\{t_n\}_{n=1}^{\infty}$, $\{r_n\}_{n=1}^{\infty}$, $\{k_n\}_{n=1}^{\infty}$, $\{l_n\}_{n=1}^{\infty}$ are the same as in Algorithm 4.1, satisfying (4.4).

If $\beta_n=0$ for all $n\in\mathbb{N}$, then Algorithm 4.4 reduces to the following algorithm.

Algorithm 4.5 Let $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ be the same as in SGNMVID (2.1). For an arbitrarily chosen initial point $(u_1,v_1,w_1)\in H\times H\times H$, compute the iterative sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ in the following way:
$$\begin{cases}x_n=S_1^nJ_{\partial\varphi_1}^{\rho}(u_n),\qquad y_n=S_2^nJ_{\partial\varphi_2}^{\eta}(v_n),\qquad z_n=S_3^nJ_{\partial\varphi_3}^{\gamma}(w_n),\\ u_{n+1}=(1-\alpha_n)u_n+\alpha_n(g_1(y_n)-\rho T_1(y_n,z_n,x_n))+\alpha_ne_n+r_n,\\ v_{n+1}=(1-\alpha_n)v_n+\alpha_n(g_2(z_n)-\eta T_2(z_n,x_n,y_n))+\alpha_np_n+k_n,\\ w_{n+1}=(1-\alpha_n)w_n+\alpha_n(g_3(x_n)-\gamma T_3(x_n,y_n,z_n))+\alpha_ns_n+l_n,\end{cases}$$

where S i ( i = 1 , 2 , 3 ), { α n } n = 1 , { e n } n = 1 , { p n } n = 1 , { s n } n = 1 , { r n } n = 1 , { k n } n = 1 and { l n } n = 1 are the same as in Algorithm 4.1.

If S i I ( i = 1 , 2 , 3 ), then Algorithm 4.4 collapses to the following algorithm.

Algorithm 4.6 Let $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ be the same as in SGNMVID (2.1). For an arbitrarily chosen initial point $(u_1,v_1,w_1)\in H\times H\times H$, compute the iterative sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ in the following way:
$$\begin{cases}x_n=J_{\partial\varphi_1}^{\rho}(u_n),\qquad y_n=J_{\partial\varphi_2}^{\eta}(v_n),\qquad z_n=J_{\partial\varphi_3}^{\gamma}(w_n),\\ u_{n+1}=(1-\alpha_n-\beta_n)u_n+\alpha_n(g_1(y_n)-\rho T_1(y_n,z_n,x_n))+\alpha_ne_n+\beta_nj_n+r_n,\\ v_{n+1}=(1-\alpha_n-\beta_n)v_n+\alpha_n(g_2(z_n)-\eta T_2(z_n,x_n,y_n))+\alpha_np_n+\beta_nq_n+k_n,\\ w_{n+1}=(1-\alpha_n-\beta_n)w_n+\alpha_n(g_3(x_n)-\gamma T_3(x_n,y_n,z_n))+\alpha_ns_n+\beta_nt_n+l_n,\end{cases}$$

where the sequences $\{\alpha_n\}_{n=1}^{\infty}$, $\{\beta_n\}_{n=1}^{\infty}$, $\{e_n\}_{n=1}^{\infty}$, $\{p_n\}_{n=1}^{\infty}$, $\{s_n\}_{n=1}^{\infty}$, $\{j_n\}_{n=1}^{\infty}$, $\{q_n\}_{n=1}^{\infty}$, $\{t_n\}_{n=1}^{\infty}$, $\{r_n\}_{n=1}^{\infty}$, $\{k_n\}_{n=1}^{\infty}$ and $\{l_n\}_{n=1}^{\infty}$ are the same as in Algorithm 4.1.

5 Main results

In this section, we discuss the convergence analysis of the suggested three-step resolvent iterative algorithms under suitable conditions. To this end, we need the following lemma.

Lemma 5.1 Let { a n } , { b n } and { c n } be three nonnegative real sequences satisfying the following condition: There exists a natural number n 0 such that
$$a_{n+1}\le(1-t_n)a_n+b_nt_n+c_n,\quad\forall n\ge n_0,$$

where $t_n\in[0,1]$, $\sum_{n=0}^{\infty}t_n=\infty$, $\lim_{n\to\infty}b_n=0$ and $\sum_{n=0}^{\infty}c_n<\infty$. Then $\lim_{n\to\infty}a_n=0$.

Proof The proof directly follows from Lemma 2 in Liu [10]. □

Theorem 5.2 Let $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ be the same as in Theorem 3.2, and let all the conditions of Theorem 3.2 hold. Suppose that $S_1\colon H\rightarrow H$ is a nearly uniformly $L_1$-Lipschitzian mapping with the sequence $\{b_n\}_{n=1}^{\infty}$, that $S_2\colon H\rightarrow H$ is a nearly uniformly $L_2$-Lipschitzian mapping with the sequence $\{c_n\}_{n=1}^{\infty}$, that $S_3\colon H\rightarrow H$ is a nearly uniformly $L_3$-Lipschitzian mapping with the sequence $\{d_n\}_{n=1}^{\infty}$, and that the self-mapping $Q$ of $H\times H\times H$ defined by (4.1) satisfies $\operatorname{Fix}(Q)\cap\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)\ne\emptyset$. Further, let $L_i\lambda<1$ ($i=1,2,3$), where $\lambda$ is the same as in (3.14). Then the iterative sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ generated by Algorithm 4.1 converges strongly to the only element of $\operatorname{Fix}(Q)\cap\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)$.

Proof According to Theorem 3.2, SGNMVID (2.1) has a unique solution $(x^*,y^*,z^*)\in H\times H\times H$. Accordingly, in view of Lemma 3.1, $(x^*,y^*,z^*)$ satisfies (3.1). Since $\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)$ is a singleton set, it follows from $\operatorname{Fix}(Q)\cap\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)\ne\emptyset$ that $(x^*,y^*,z^*)\in\operatorname{Fix}(Q)$, and so $x^*\in\operatorname{Fix}(S_1)$, $y^*\in\operatorname{Fix}(S_2)$ and $z^*\in\operatorname{Fix}(S_3)$. Hence, for each $n\in\mathbb{N}$, we can write
$$\begin{cases}x^*=(1-\alpha_n-\beta_n)x^*+\alpha_nS_1^nJ_{\partial\varphi_1}^{\rho}(g_1(y^*)-\rho T_1(y^*,z^*,x^*))+\beta_nx^*,\\ y^*=(1-\alpha_n'-\beta_n')y^*+\alpha_n'S_2^nJ_{\partial\varphi_2}^{\eta}(g_2(z^*)-\eta T_2(z^*,x^*,y^*))+\beta_n'y^*,\\ z^*=(1-\alpha_n''-\beta_n'')z^*+\alpha_n''S_3^nJ_{\partial\varphi_3}^{\gamma}(g_3(x^*)-\gamma T_3(x^*,y^*,z^*))+\beta_n''z^*,\end{cases}$$
(5.1)
where the sequences $\{\alpha_n\}_{n=1}^{\infty}$, $\{\alpha_n'\}_{n=1}^{\infty}$, $\{\alpha_n''\}_{n=1}^{\infty}$, $\{\beta_n\}_{n=1}^{\infty}$, $\{\beta_n'\}_{n=1}^{\infty}$ and $\{\beta_n''\}_{n=1}^{\infty}$ are the same as in Algorithm 4.1. Let $\Gamma=\sup_{n\ge1}\{\|j_n-x^*\|,\|q_n-y^*\|,\|t_n-z^*\|\}$. It follows from (4.3), (5.1) and the assumptions that
$$\begin{aligned}\|x_{n+1}-x^*\|&\le(1-\alpha_n-\beta_n)\|x_n-x^*\|\\ &\quad+\alpha_n\|S_1^nJ_{\partial\varphi_1}^{\rho}(g_1(y_{n+1})-\rho T_1(y_{n+1},z_{n+1},x_n))-S_1^nJ_{\partial\varphi_1}^{\rho}(g_1(y^*)-\rho T_1(y^*,z^*,x^*))\|\\ &\quad+\beta_n\|j_n-x^*\|+\alpha_n\|e_n\|+\|r_n\|\\ &\le(1-\alpha_n-\beta_n)\|x_n-x^*\|+\alpha_nL_1\bigl(\|g_1(y_{n+1})-g_1(y^*)-\rho(T_1(y_{n+1},z_{n+1},x_n)-T_1(y^*,z^*,x^*))\|+b_n\bigr)\\ &\quad+\alpha_n(\|e_n'\|+\|e_n''\|)+\|r_n\|+\beta_n\Gamma\\ &\le(1-\alpha_n-\beta_n)\|x_n-x^*\|+\alpha_nL_1\bigl(\|y_{n+1}-y^*-(g_1(y_{n+1})-g_1(y^*))\|\\ &\quad+\|y_{n+1}-y^*-\rho(T_1(y_{n+1},z_{n+1},x_n)-T_1(y^*,z^*,x^*))\|+b_n\bigr)+\alpha_n\|e_n'\|+\|e_n''\|+\|r_n\|+\beta_n\Gamma.\end{aligned}$$
(5.2)
Since $g_1$ is $\pi_1$-strongly monotone and $\delta_1$-Lipschitz continuous, and $T_1$ is $\varsigma_1$-strongly monotone and $\sigma_1$-Lipschitz continuous in the first variable, similar to the proofs of (3.7) and (3.8), one can prove that
$$\|y_{n+1}-y^*-(g_1(y_{n+1})-g_1(y^*))\|\le\sqrt{1-2\pi_1+\delta_1^2}\,\|y_{n+1}-y^*\|$$
(5.3)
and
$$\|y_{n+1}-y^*-\rho(T_1(y_{n+1},z_{n+1},x_n)-T_1(y^*,z^*,x^*))\|\le\sqrt{1-2\rho\varsigma_1+\rho^2\sigma_1^2}\,\|y_{n+1}-y^*\|.$$
(5.4)
Substituting (5.3) and (5.4) in (5.2), we get
$$\|x_{n+1}-x^*\|\le(1-\alpha_n-\beta_n)\|x_n-x^*\|+\alpha_nL_1\theta\|y_{n+1}-y^*\|+\alpha_nL_1b_n+\alpha_n\|e_n'\|+\|e_n''\|+\|r_n\|+\beta_n\Gamma,$$
(5.5)
where θ is the same as in (3.13). It follows from (4.3) and (5.1) that
$$\begin{aligned}\|y_{n+1}-y^*\|&\le(1-\alpha_n'-\beta_n')\|x_n-y^*\|\\ &\quad+\alpha_n'\|S_2^nJ_{\partial\varphi_2}^{\eta}(g_2(z_{n+1})-\eta T_2(z_{n+1},x_n,y_n))-S_2^nJ_{\partial\varphi_2}^{\eta}(g_2(z^*)-\eta T_2(z^*,x^*,y^*))\|\\ &\quad+\beta_n'\|q_n-y^*\|+\alpha_n'\|p_n\|+\|k_n\|\\ &\le(1-\alpha_n'-\beta_n')\|x_n-y^*\|+\alpha_n'L_2\bigl(\|g_2(z_{n+1})-g_2(z^*)-\eta(T_2(z_{n+1},x_n,y_n)-T_2(z^*,x^*,y^*))\|+c_n\bigr)\\ &\quad+\alpha_n'(\|p_n'\|+\|p_n''\|)+\|k_n\|+\beta_n'\Gamma\\ &\le(1-\alpha_n'-\beta_n')\|x_n-y^*\|+\alpha_n'L_2\bigl(\|z_{n+1}-z^*-(g_2(z_{n+1})-g_2(z^*))\|\\ &\quad+\|z_{n+1}-z^*-\eta(T_2(z_{n+1},x_n,y_n)-T_2(z^*,x^*,y^*))\|+c_n\bigr)+\alpha_n'\|p_n'\|+\|p_n''\|+\|k_n\|+\beta_n'\Gamma.\end{aligned}$$
(5.6)
Since $g_2$ is $\pi_2$-strongly monotone and $\delta_2$-Lipschitz continuous, and $T_2$ is $\varsigma_2$-strongly monotone and $\sigma_2$-Lipschitz continuous in the first variable, we can get
$$\|z_{n+1}-z^*-(g_2(z_{n+1})-g_2(z^*))\|\le\sqrt{1-2\pi_2+\delta_2^2}\,\|z_{n+1}-z^*\|$$
(5.7)
and
$$\|z_{n+1}-z^*-\eta(T_2(z_{n+1},x_n,y_n)-T_2(z^*,x^*,y^*))\|\le\sqrt{1-2\eta\varsigma_2+\eta^2\sigma_2^2}\,\|z_{n+1}-z^*\|.$$
(5.8)
Combining (5.6)-(5.8), we conclude that
$$\|y_{n+1}-y^*\|\le(1-\alpha_n'-\beta_n')\|x_n-y^*\|+\alpha_n'L_2\varrho\|z_{n+1}-z^*\|+\alpha_n'L_2c_n+\alpha_n'\|p_n'\|+\|p_n''\|+\|k_n\|+\beta_n'\Gamma,$$
(5.9)
where ϱ is the same as in (3.13). From (4.3) and (5.1), it follows that
$$\begin{aligned}\|z_{n+1}-z^*\|&\le(1-\alpha_n''-\beta_n'')\|x_n-z^*\|\\ &\quad+\alpha_n''\|S_3^nJ_{\partial\varphi_3}^{\gamma}(g_3(x_n)-\gamma T_3(x_n,y_n,z_n))-S_3^nJ_{\partial\varphi_3}^{\gamma}(g_3(x^*)-\gamma T_3(x^*,y^*,z^*))\|\\ &\quad+\beta_n''\|t_n-z^*\|+\alpha_n''\|s_n\|+\|l_n\|\\ &\le(1-\alpha_n''-\beta_n'')\|x_n-z^*\|+\alpha_n''L_3\bigl(\|g_3(x_n)-g_3(x^*)-\gamma(T_3(x_n,y_n,z_n)-T_3(x^*,y^*,z^*))\|+d_n\bigr)\\ &\quad+\alpha_n''(\|s_n'\|+\|s_n''\|)+\|l_n\|+\beta_n''\Gamma\\ &\le(1-\alpha_n''-\beta_n'')\|x_n-z^*\|+\alpha_n''L_3\bigl(\|x_n-x^*-(g_3(x_n)-g_3(x^*))\|\\ &\quad+\|x_n-x^*-\gamma(T_3(x_n,y_n,z_n)-T_3(x^*,y^*,z^*))\|+d_n\bigr)+\alpha_n''\|s_n'\|+\|s_n''\|+\|l_n\|+\beta_n''\Gamma.\end{aligned}$$
(5.10)
Because $g_3$ is $\pi_3$-strongly monotone and $\delta_3$-Lipschitz continuous, and $T_3$ is $\varsigma_3$-strongly monotone and $\sigma_3$-Lipschitz continuous in the first variable, we can obtain
$$\|x_n-x^*-(g_3(x_n)-g_3(x^*))\|\le\sqrt{1-2\pi_3+\delta_3^2}\,\|x_n-x^*\|$$
(5.11)
and
$$\|x_n-x^*-\gamma(T_3(x_n,y_n,z_n)-T_3(x^*,y^*,z^*))\|\le\sqrt{1-2\gamma\varsigma_3+\gamma^2\sigma_3^2}\,\|x_n-x^*\|.$$
(5.12)
Substituting (5.11) and (5.12) in (5.10), we deduce that
$$\|z_{n+1}-z^*\|\le(1-\alpha_n''-\beta_n'')\|x_n-z^*\|+\alpha_n''L_3\vartheta\|x_n-x^*\|+\alpha_n''L_3d_n+\alpha_n''\|s_n'\|+\|s_n''\|+\|l_n\|+\beta_n''\Gamma,$$
(5.13)

where ϑ is the same as in (3.13).

By using (5.13) and the fact that $L_3\vartheta<1$, we have
$$\begin{aligned}\|z_{n+1}-z^*\|&\le(1-\alpha_n''-\beta_n'')\|x_n-z^*\|+\alpha_n''L_3\vartheta\|x_n-x^*\|+\alpha_n''L_3d_n+\alpha_n''\|s_n'\|+\|s_n''\|+\|l_n\|+\beta_n''\Gamma\\ &\le(1-\alpha_n''-\beta_n'')\|x_n-x^*\|+\alpha_n''L_3\vartheta\|x_n-x^*\|+\alpha_n''L_3d_n+(1-\alpha_n''-\beta_n'')\|x^*-z^*\|\\ &\quad+\alpha_n''\|s_n'\|+\|s_n''\|+\|l_n\|+\beta_n''\Gamma\\ &\le(1-\alpha_n''-\beta_n'')\|x_n-x^*\|+\alpha_n''\|x_n-x^*\|+\alpha_n''L_3d_n+(1-\alpha_n''-\beta_n'')\|x^*-z^*\|\\ &\quad+\alpha_n''\|s_n'\|+\|s_n''\|+\|l_n\|+\beta_n''\Gamma\\ &\le\|x_n-x^*\|+\alpha_n''L_3d_n+(1-\alpha_n''-\beta_n'')\|x^*-z^*\|+\alpha_n''\|s_n'\|+\|s_n''\|+\|l_n\|+\beta_n''\Gamma.\end{aligned}$$
(5.14)
It follows from (5.9), (5.14) and the fact that $L_2\varrho<1$ that
$$\begin{aligned}\|y_{n+1}-y^*\|&\le(1-\alpha_n'-\beta_n')\|x_n-y^*\|+\alpha_n'L_2\varrho\|z_{n+1}-z^*\|+\alpha_n'L_2c_n+\alpha_n'\|p_n'\|+\|p_n''\|+\|k_n\|+\beta_n'\Gamma\\ &\le(1-\alpha_n'-\beta_n')\|x_n-x^*\|+(1-\alpha_n'-\beta_n')\|x^*-y^*\|+\alpha_n'L_2\varrho\|z_{n+1}-z^*\|+\alpha_n'L_2c_n\\ &\quad+\alpha_n'\|p_n'\|+\|p_n''\|+\|k_n\|+\beta_n'\Gamma\\ &\le(1-\alpha_n'-\beta_n')\|x_n-x^*\|+(1-\alpha_n'-\beta_n')\|x^*-y^*\|\\ &\quad+\alpha_n'L_2\varrho\bigl(\|x_n-x^*\|+\alpha_n''L_3d_n+(1-\alpha_n''-\beta_n'')\|x^*-z^*\|+\alpha_n''\|s_n'\|+\|s_n''\|+\|l_n\|+\beta_n''\Gamma\bigr)\\ &\quad+\alpha_n'L_2c_n+\alpha_n'\|p_n'\|+\|p_n''\|+\|k_n\|+\beta_n'\Gamma\\ &\le\|x_n-x^*\|+(1-\alpha_n'-\beta_n')\|x^*-y^*\|+\alpha_n'(1-\alpha_n''-\beta_n'')L_2\varrho\|x^*-z^*\|+\alpha_n'\alpha_n''L_2L_3\varrho d_n\\ &\quad+\alpha_n'\alpha_n''L_2\varrho\|s_n'\|+\alpha_n'L_2\varrho\|s_n''\|+\alpha_n'L_2\varrho\|l_n\|+\alpha_n'L_2\varrho\beta_n''\Gamma+\alpha_n'L_2c_n+\alpha_n'\|p_n'\|+\|p_n''\|+\|k_n\|+\beta_n'\Gamma.\end{aligned}$$
(5.15)
Applying (5.5) and (5.15), it follows that
$$\begin{aligned}\|x_{n+1}-x^*\|&\le(1-\alpha_n-\beta_n)\|x_n-x^*\|+\alpha_nL_1\theta\|y_{n+1}-y^*\|+\alpha_nL_1b_n+\alpha_n\|e_n'\|+\|e_n''\|+\|r_n\|+\beta_n\Gamma\\ &\le(1-\alpha_n-\beta_n)\|x_n-x^*\|+\alpha_nL_1\theta\bigl(\|x_n-x^*\|+(1-\alpha_n'-\beta_n')\|x^*-y^*\|\\ &\quad+\alpha_n'(1-\alpha_n''-\beta_n'')L_2\varrho\|x^*-z^*\|+\alpha_n'\alpha_n''L_2L_3\varrho d_n+\alpha_n'\alpha_n''L_2\varrho\|s_n'\|+\alpha_n'L_2\varrho\|s_n''\|\\ &\quad+\alpha_n'L_2\varrho\|l_n\|+\alpha_n'L_2\varrho\beta_n''\Gamma+\alpha_n'L_2c_n+\alpha_n'\|p_n'\|+\|p_n''\|+\|k_n\|+\beta_n'\Gamma\bigr)\\ &\quad+\alpha_nL_1b_n+\alpha_n\|e_n'\|+\|e_n''\|+\|r_n\|+\beta_n\Gamma\\ &\le(1-\alpha_n-\beta_n)\|x_n-x^*\|+\alpha_nL_1\theta\|x_n-x^*\|+\alpha_n(1-\alpha_n'-\beta_n')L_1\theta\|x^*-y^*\|\\ &\quad+\alpha_n\alpha_n'(1-\alpha_n''-\beta_n'')L_1L_2\theta\varrho\|x^*-z^*\|+\alpha_n\alpha_n'\alpha_n''L_1L_2L_3\theta\varrho d_n+\alpha_n\alpha_n'L_1L_2\theta c_n+\alpha_nL_1b_n\\ &\quad+\alpha_n\alpha_n'\alpha_n''L_1L_2\theta\varrho\|s_n'\|+\alpha_n\alpha_n'L_1L_2\theta\varrho\|s_n''\|+\alpha_n\alpha_n'L_1L_2\theta\varrho\|l_n\|+\alpha_n\alpha_n'L_1\theta\|p_n'\|+\alpha_nL_1\theta\|p_n''\|\\ &\quad+\alpha_nL_1\theta\|k_n\|+\alpha_n\|e_n'\|+\|e_n''\|+\|r_n\|+\bigl(\alpha_n\alpha_n'L_1L_2\theta\varrho\beta_n''+\alpha_nL_1\theta\beta_n'+\beta_n\bigr)\Gamma\\ &\le\bigl(1-\alpha_n(1-L_1\theta)\bigr)\|x_n-x^*\|\\ &\quad+\alpha_n(1-L_1\theta)\biggl[\frac{(1-\alpha_n'-\beta_n')L_1\theta\|x^*-y^*\|+\alpha_n'(1-\alpha_n''-\beta_n'')L_1L_2\theta\varrho\|x^*-z^*\|}{1-L_1\theta}\\ &\quad+\frac{\alpha_n'\alpha_n''L_1L_2L_3\theta\varrho d_n+\alpha_n'L_1L_2\theta c_n+L_1b_n+\alpha_n'\alpha_n''L_1L_2\theta\varrho\|s_n'\|+\alpha_n'L_1\theta\|p_n'\|+\|e_n'\|}{1-L_1\theta}\biggr]\\ &\quad+\alpha_n\alpha_n'L_1L_2\theta\varrho\|s_n''\|+\alpha_n\alpha_n'L_1L_2\theta\varrho\|l_n\|+\alpha_nL_1\theta\|p_n''\|+\alpha_nL_1\theta\|k_n\|+\|e_n''\|+\|r_n\|\\ &\quad+\bigl(\alpha_n\alpha_n'L_1L_2\theta\varrho\beta_n''+\alpha_nL_1\theta\beta_n'+\beta_n\bigr)\Gamma.\end{aligned}$$
(5.16)

From $\sum_{n=1}^{\infty}\beta_n<\infty$, $\sum_{n=1}^{\infty}\beta_n'<\infty$ and $\sum_{n=1}^{\infty}\beta_n''<\infty$, we infer that $\lim_{n\to\infty}\beta_n=\lim_{n\to\infty}\beta_n'=\lim_{n\to\infty}\beta_n''=0$. Since $L_1\theta<1$, $\lim_{n\to\infty}\alpha_n'=\lim_{n\to\infty}\alpha_n''=1$ and $\lim_{n\to\infty}b_n=\lim_{n\to\infty}c_n=\lim_{n\to\infty}d_n=0$, in view of (4.4), it is evident that the conditions of Lemma 5.1 are satisfied, and so Lemma 5.1 and (5.16) guarantee that $x_n\to x^*$ as $n\to\infty$. Because $\sum_{n=1}^{\infty}\|l_n\|<\infty$, $\sum_{n=1}^{\infty}\|k_n\|<\infty$, $\sum_{n=1}^{\infty}\|s_n''\|<\infty$ and $\sum_{n=1}^{\infty}\|p_n''\|<\infty$, we have $\|l_n\|\to0$, $\|k_n\|\to0$, $\|s_n''\|\to0$ and $\|p_n''\|\to0$ as $n\to\infty$. Now, it follows from (4.4), (5.14) and (5.15) that $y_n\to y^*$ and $z_n\to z^*$ as $n\to\infty$. Therefore, the sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ generated by Algorithm 4.1 converges strongly to the unique solution $(x^*,y^*,z^*)$ of SGNMVID (2.1), that is, the only element of $\operatorname{Fix}(Q)\cap\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)$. This completes the proof. □

Theorem 5.3 Suppose that $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ are the same as in Theorem 3.2, and let all the conditions of Theorem 3.2 hold. Then the iterative sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ generated by Algorithm 4.2 converges strongly to the unique solution of SGNMVID (2.1).

Theorem 5.4 Let $T_i$, $g_i$, $\varphi_i$, $S_i$ ($i=1,2,3$), $Q$, $\rho$, $\eta$ and $\gamma$ be the same as in Theorem 5.2, and let all the conditions of Theorem 5.2 hold. Then the iterative sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ generated by Algorithm 4.4 converges strongly to the only element of $\operatorname{Fix}(Q)\cap\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)$.

Proof Theorem 3.2 guarantees the existence of a unique solution $(x^*,y^*,z^*)\in H\times H\times H$ for SGNMVID (2.1). Hence, Lemma 3.1 implies that $x^*=J_{\partial\varphi_1}^{\rho}(g_1(y^*)-\rho T_1(y^*,z^*,x^*))$, $y^*=J_{\partial\varphi_2}^{\eta}(g_2(z^*)-\eta T_2(z^*,x^*,y^*))$ and $z^*=J_{\partial\varphi_3}^{\gamma}(g_3(x^*)-\gamma T_3(x^*,y^*,z^*))$. Since $\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)$ is a singleton set, by using $\operatorname{Fix}(Q)\cap\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)\ne\emptyset$, we conclude that $(x^*,y^*,z^*)\in\operatorname{Fix}(Q)$, and so $x^*\in\operatorname{Fix}(S_1)$, $y^*\in\operatorname{Fix}(S_2)$ and $z^*\in\operatorname{Fix}(S_3)$. Hence, in view of Remark 4.3, for each $n\in\mathbb{N}$, we can write
$$\begin{cases}x^*=S_1^nJ_{\partial\varphi_1}^{\rho}(u^*),\qquad y^*=S_2^nJ_{\partial\varphi_2}^{\eta}(v^*),\qquad z^*=S_3^nJ_{\partial\varphi_3}^{\gamma}(w^*),\\ u^*=(1-\alpha_n-\beta_n)u^*+\alpha_n(g_1(y^*)-\rho T_1(y^*,z^*,x^*))+\beta_nu^*,\\ v^*=(1-\alpha_n-\beta_n)v^*+\alpha_n(g_2(z^*)-\eta T_2(z^*,x^*,y^*))+\beta_nv^*,\\ w^*=(1-\alpha_n-\beta_n)w^*+\alpha_n(g_3(x^*)-\gamma T_3(x^*,y^*,z^*))+\beta_nw^*,\end{cases}$$
(5.17)
where the sequences $\{\alpha_n\}_{n=1}^{\infty}$ and $\{\beta_n\}_{n=1}^{\infty}$ are the same as in Algorithm 4.4. Let $\hat{\Gamma}=\sup_{n\ge1}\{\|j_n-u^*\|,\|q_n-v^*\|,\|t_n-w^*\|\}$. By using (4.6), (5.17) and the assumptions, we have
$$\begin{aligned}\|u_{n+1}-u^*\|&\le(1-\alpha_n-\beta_n)\|u_n-u^*\|+\alpha_n\|g_1(y_n)-g_1(y^*)-\rho(T_1(y_n,z_n,x_n)-T_1(y^*,z^*,x^*))\|\\ &\quad+\beta_n\|j_n-u^*\|+\alpha_n(\|e_n'\|+\|e_n''\|)+\|r_n\|\\ &\le(1-\alpha_n-\beta_n)\|u_n-u^*\|+\alpha_n\|y_n-y^*-(g_1(y_n)-g_1(y^*))\|\\ &\quad+\alpha_n\|y_n-y^*-\rho(T_1(y_n,z_n,x_n)-T_1(y^*,z^*,x^*))\|+\alpha_n\|e_n'\|+\|e_n''\|+\|r_n\|+\beta_n\hat{\Gamma}.\end{aligned}$$
(5.18)
Since $g_1$ is $\pi_1$-strongly monotone and $\delta_1$-Lipschitz continuous, and $T_1$ is $\varsigma_1$-strongly monotone and $\sigma_1$-Lipschitz continuous in the first variable, similar to the proofs of (3.7) and (3.8), one can prove that
$$\|y_n-y^*-(g_1(y_n)-g_1(y^*))\|\le\sqrt{1-2\pi_1+\delta_1^2}\,\|y_n-y^*\|$$
(5.19)
and
$$\|y_n-y^*-\rho(T_1(y_n,z_n,x_n)-T_1(y^*,z^*,x^*))\|\le\sqrt{1-2\rho\varsigma_1+\rho^2\sigma_1^2}\,\|y_n-y^*\|.$$
(5.20)
Combining (5.18)-(5.20), we get
$$\|u_{n+1}-u^*\|\le(1-\alpha_n-\beta_n)\|u_n-u^*\|+\alpha_n\theta\|y_n-y^*\|+\alpha_n\|e_n'\|+\|e_n''\|+\|r_n\|+\beta_n\hat{\Gamma},$$
(5.21)
where θ is the same as in (3.13). It follows from (4.6) and (5.17) that
$$\|y_n-y^*\|=\|S_2^nJ_{\partial\varphi_2}^{\eta}(v_n)-S_2^nJ_{\partial\varphi_2}^{\eta}(v^*)\|\le L_2\bigl(\|J_{\partial\varphi_2}^{\eta}(v_n)-J_{\partial\varphi_2}^{\eta}(v^*)\|+c_n\bigr)\le L_2\bigl(\|v_n-v^*\|+c_n\bigr).$$
(5.22)
Substituting (5.22) in (5.21), we conclude that
$$\|u_{n+1}-u^*\|\le(1-\alpha_n-\beta_n)\|u_n-u^*\|+\alpha_nL_2\theta\|v_n-v^*\|+\alpha_nL_2\theta c_n+\alpha_n\|e_n'\|+\|e_n''\|+\|r_n\|+\beta_n\hat{\Gamma}.$$
(5.23)
As in the proofs of (5.18)-(5.23), we can verify that
$$\|v_{n+1}-v^*\|\le(1-\alpha_n-\beta_n)\|v_n-v^*\|+\alpha_nL_3\varrho\|w_n-w^*\|+\alpha_nL_3\varrho d_n+\alpha_n\|p_n'\|+\|p_n''\|+\|k_n\|+\beta_n\hat{\Gamma}$$
(5.24)
and
$$\|w_{n+1}-w^*\|\le(1-\alpha_n-\beta_n)\|w_n-w^*\|+\alpha_nL_1\vartheta\|u_n-u^*\|+\alpha_nL_1\vartheta b_n+\alpha_n\|s_n'\|+\|s_n''\|+\|l_n\|+\beta_n\hat{\Gamma},$$
(5.25)

where ϱ and ϑ are the same as in (3.13).

Let $L=\max\{L_i: i=1,2,3\}$. Then, applying (5.23)-(5.25), we obtain
$$\begin{aligned}\|(u_{n+1},v_{n+1},w_{n+1})-(u^*,v^*,w^*)\|_*&\le(1-\alpha_n-\beta_n)\|(u_n,v_n,w_n)-(u^*,v^*,w^*)\|_*\\ &\quad+\alpha_nL\lambda\|(u_n,v_n,w_n)-(u^*,v^*,w^*)\|_*+\alpha_nL\lambda(b_n+c_n+d_n)\\ &\quad+\alpha_n\|(e_n',p_n',s_n')\|_*+\|(e_n'',p_n'',s_n'')\|_*+\|(r_n,k_n,l_n)\|_*+3\beta_n\hat{\Gamma}\\ &\le\bigl(1-\alpha_n(1-L\lambda)\bigr)\|(u_n,v_n,w_n)-(u^*,v^*,w^*)\|_*\\ &\quad+\alpha_n(1-L\lambda)\frac{\|(e_n',p_n',s_n')\|_*+L\lambda(b_n+c_n+d_n)}{1-L\lambda}\\ &\quad+\|(e_n'',p_n'',s_n'')\|_*+\|(r_n,k_n,l_n)\|_*+3\beta_n\hat{\Gamma},\end{aligned}$$
(5.26)
where $\lambda$ is the same as in (3.14). Since $\sum_{n=1}^{\infty}\alpha_n=\infty$, $\sum_{n=1}^{\infty}\beta_n<\infty$, $L\lambda<1$ and $\lim_{n\to\infty}b_n=\lim_{n\to\infty}c_n=\lim_{n\to\infty}d_n=0$, in view of (4.4), we note that all the conditions of Lemma 5.1 are satisfied. Hence, Lemma 5.1 and (5.26) guarantee that $(u_n,v_n,w_n)\to(u^*,v^*,w^*)$ as $n\to\infty$. By using (4.6) and (5.17), we have
$$\|x_n-x^*\|=\|S_1^nJ_{\partial\varphi_1}^{\rho}(u_n)-S_1^nJ_{\partial\varphi_1}^{\rho}(u^*)\|\le L_1\bigl(\|J_{\partial\varphi_1}^{\rho}(u_n)-J_{\partial\varphi_1}^{\rho}(u^*)\|+b_n\bigr)\le L_1\bigl(\|u_n-u^*\|+b_n\bigr)$$
(5.27)
and
$$\|z_n-z^*\|=\|S_3^nJ_{\partial\varphi_3}^{\gamma}(w_n)-S_3^nJ_{\partial\varphi_3}^{\gamma}(w^*)\|\le L_3\bigl(\|J_{\partial\varphi_3}^{\gamma}(w_n)-J_{\partial\varphi_3}^{\gamma}(w^*)\|+d_n\bigr)\le L_3\bigl(\|w_n-w^*\|+d_n\bigr).$$
(5.28)

Since $\lim_{n\to\infty}u_n=u^*$, $\lim_{n\to\infty}v_n=v^*$, $\lim_{n\to\infty}w_n=w^*$ and $\lim_{n\to\infty}b_n=\lim_{n\to\infty}c_n=\lim_{n\to\infty}d_n=0$, it follows from inequalities (5.22), (5.27) and (5.28) that $y_n\to y^*$, $x_n\to x^*$ and $z_n\to z^*$ as $n\to\infty$. Hence, the sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ generated by Algorithm 4.4 converges strongly to the unique solution $(x^*,y^*,z^*)$ of SGNMVID (2.1), that is, the only element of $\operatorname{Fix}(Q)\cap\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)$. This completes the proof. □

As in the proof of Theorem 5.4, one can prove the convergence of the iterative sequences generated by Algorithms 4.5 and 4.6; we omit the details.

Theorem 5.5 Suppose that $T_i$, $g_i$, $\varphi_i$, $S_i$ ($i=1,2,3$), $Q$, $\rho$, $\eta$ and $\gamma$ are the same as in Theorem 5.2, and let all the conditions of Theorem 5.2 hold. Then the iterative sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ generated by Algorithm 4.5 converges strongly to the only element of $\operatorname{Fix}(Q)\cap\operatorname{SGNMVID}(H,T_i,g_i,\varphi_i,i=1,2,3)$.

Theorem 5.6 Assume that $T_i$, $g_i$, $\varphi_i$ ($i=1,2,3$), $\rho$, $\eta$ and $\gamma$ are the same as in Theorem 3.2, and let all the conditions of Theorem 3.2 hold. Then the iterative sequence $\{(x_n,y_n,z_n)\}_{n=1}^{\infty}$ generated by Algorithm 4.6 converges strongly to the unique solution of SGNMVID (2.1).

6 An important remark on a relaxed cocoercive mapping

In view of Definition 3.1, we note that the relaxed cocoercivity condition is weaker than the strong monotonicity condition. In other words, the class of relaxed cocoercive mappings is more general than the class of strongly monotone mappings. However, it is worth pointing out that if the considered mapping $T$ is $(\kappa,\theta)$-relaxed cocoercive and $\gamma$-Lipschitz continuous with $\theta>\kappa\gamma^2$, then it must be a $(\theta-\kappa\gamma^2)$-strongly monotone mapping. Hence, the results that appear in this paper can also be applied to a class of relaxed cocoercive mappings. In fact, one may restate results obtained under relaxed cocoercivity and Lipschitz continuity assumptions and recover them from the corresponding results formulated under the strong monotonicity condition. Below, we present an example of the mentioned situation.
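
For completeness, the verification is a one-line consequence of Definition 3.1(c), (d): if $T$ is $(\kappa,\theta)$-relaxed cocoercive and $\gamma$-Lipschitz continuous in the first variable, then for all $x,y\in H$,
$$\langle T(x,\cdot,\cdot)-T(y,\cdot,\cdot),\,x-y\rangle\ge-\kappa\|T(x,\cdot,\cdot)-T(y,\cdot,\cdot)\|^2+\theta\|x-y\|^2\ge(\theta-\kappa\gamma^2)\|x-y\|^2,$$
so $T$ is $(\theta-\kappa\gamma^2)$-strongly monotone in the first variable whenever $\theta>\kappa\gamma^2$.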

For three given different nonlinear operators $T_1,T_2\colon H\times H\rightarrow H$, $g\colon H\rightarrow H$ and a continuous function $\varphi\colon H\rightarrow\mathbb{R}\cup\{+\infty\}$, Noor [13] introduced and considered the problem of finding $(x^*,y^*)\in H\times H$ such that
$$\begin{cases}\langle\rho T_1(y^*,x^*)+x^*-g(y^*),\,g(x)-x^*\rangle\ge\rho\varphi(x^*)-\rho\varphi(g(x)),&\forall x\in H,\\ \langle\eta T_2(x^*,y^*)+y^*-g(x^*),\,g(x)-y^*\rangle\ge\eta\varphi(y^*)-\eta\varphi(g(x)),&\forall x\in H,\end{cases}$$