
Existence and approximation of solutions for system of generalized mixed variational inequalities

Abstract

The aim of this work is to study a system of generalized mixed variational inequalities and the existence and approximation of its solution using the resolvent operator technique. We further propose an algorithm which converges to its solution and to common fixed points of two Lipschitzian mappings. The proposed algorithms are parallel in nature and are therefore suited to simultaneous computation on multiprocessor computers. The results presented in this work are more general and include many previously known results as special cases.

MSC:47J20, 65K10, 65K15, 90C33.

1 Introduction and preliminaries

Variational inequality theory was introduced by Stampacchia [1] in the early 1960s. The birth of the variational inequality problem coincides with the Signorini problem; see [[2], p.282]. The Signorini problem consists of finding the equilibrium of a spherically shaped elastic body resting on a rigid frictionless plane. Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. A variational inequality involving a nonlinear bifunction, which characterizes the Signorini problem with nonlocal friction, is: find $x\in H$ such that

$$\langle Tx,\,y-x\rangle+\varphi(y,x)-\varphi(x,x)\ge 0,\quad\forall y\in H,$$

where $T:H\to H$ is a nonlinear operator and $\varphi(\cdot,\cdot):H\times H\to\mathbb{R}\cup\{+\infty\}$ is a continuous bifunction.

The inequality above is called the mixed variational inequality problem. It is a useful and important generalization of variational inequalities. This type of variational inequality arises in the study of elasticity with nonlocal friction laws, fluid flow through porous media, and structural analysis. Mixed variational inequalities have been generalized and extended in many directions using novel and innovative techniques. One interesting problem is to find a common solution of a system of variational inequalities. The existence problem for solutions of a system of variational inequalities has been studied by Husain and Tarafdar [3]. Systems of variational inequalities arise in double porosity models, diffusion through composite media, the description of parallel membranes, etc.; see [4] for details.

In this paper, we consider the following system of generalized mixed variational inequalities (SGMVI): find $x^*,y^*\in H$ such that

$$\begin{cases}\langle\rho_1T_1(y^*,x^*)+g_1(x^*)-g_1(y^*),\,x-g_1(x^*)\rangle+\varphi(x)-\varphi(g_1(x^*))\ge 0,\\ \langle\rho_2T_2(x^*,y^*)+g_2(y^*)-g_2(x^*),\,x-g_2(y^*)\rangle+\varphi(x)-\varphi(g_2(y^*))\ge 0\end{cases}$$
(1.1)

for all $x\in H$ and $\rho_1,\rho_2>0$, where $T_1,T_2:H\times H\to H$ are nonlinear mappings and $g_1,g_2:H\to H$ are any mappings.

If $T_1,T_2:H\to H$ are univariate mappings, then the problem (SGMVI) reduces to the following: find $x^*,y^*\in H$ such that

$$\begin{cases}\langle\rho_1T_1(y^*)+g_1(x^*)-g_1(y^*),\,x-g_1(x^*)\rangle+\varphi(x)-\varphi(g_1(x^*))\ge 0,\\ \langle\rho_2T_2(x^*)+g_2(y^*)-g_2(x^*),\,x-g_2(y^*)\rangle+\varphi(x)-\varphi(g_2(y^*))\ge 0\end{cases}$$
(1.2)

for all $x\in H$ and $\rho_1,\rho_2>0$.

If $T_1=T_2=T$ and $g_1=g_2=I$, then the problem (SGMVI) reduces to the following system of mixed variational inequalities considered in [5, 6]: find $x^*,y^*\in H$ such that

$$\begin{cases}\langle\rho_1T(y^*,x^*)+x^*-y^*,\,x-x^*\rangle+\varphi(x)-\varphi(x^*)\ge 0,\\ \langle\rho_2T(x^*,y^*)+y^*-x^*,\,x-y^*\rangle+\varphi(x)-\varphi(y^*)\ge 0\end{cases}$$
(1.3)

for all $x\in H$ and $\rho_1,\rho_2>0$.

If $K$ is a closed convex set in $H$ and $\varphi(x)=\delta_K(x)$ for all $x\in K$, where $\delta_K$ is the indicator function of $K$ defined by

$$\delta_K(x)=\begin{cases}0,&\text{if }x\in K,\\ +\infty,&\text{otherwise},\end{cases}$$

then the problem (1.1) reduces to the following system of general variational inequalities: find $x^*,y^*\in K$ such that

$$\begin{cases}\langle\rho_1T_1(y^*,x^*)+g_1(x^*)-g_1(y^*),\,x-g_1(x^*)\rangle\ge 0,\\ \langle\rho_2T_2(x^*,y^*)+g_2(y^*)-g_2(x^*),\,x-g_2(y^*)\rangle\ge 0\end{cases}$$
(1.4)

for all $x\in K$ and $\rho_1,\rho_2>0$. The problem (1.4) with $g_1=g_2$ has been studied in [7].

If $T_1=T_2=T$ and $g_1=g_2=I$, then the problem (1.4) reduces to the following system of general variational inequalities: find $x^*,y^*\in K$ such that

$$\begin{cases}\langle\rho_1T(y^*,x^*)+x^*-y^*,\,x-x^*\rangle\ge 0,\\ \langle\rho_2T(x^*,y^*)+y^*-x^*,\,x-y^*\rangle\ge 0\end{cases}$$
(1.5)

for all $x\in K$ and $\rho_1,\rho_2>0$. The problem (1.5) was studied by Verma [8, 9] and Chang et al. [10].

In the study of variational inequalities, projection methods and their variant forms have played an important role. Due to the presence of the nonlinear term $\varphi$, the projection method and its variant forms cannot be extended to suggest iterative methods for solving mixed variational inequalities. If the nonlinear term $\varphi$ in the mixed variational inequalities is a proper, convex, and lower semicontinuous function, then the variational inequalities involving $\varphi$ are equivalent to fixed point problems and resolvent equations. Hassouni and Moudafi [11] used the resolvent operator technique to study a new class of mixed variational inequalities.

For a multivalued operator $T:H\to 2^H$, the domain, the range, and the graph of $T$ are denoted by

$$D(T)=\{u\in H: T(u)\neq\emptyset\},\qquad R(T)=\bigcup_{u\in H}T(u)$$

and

$$\operatorname{Graph}(T)=\{(u,u^*)\in H\times H: u\in D(T)\text{ and }u^*\in T(u)\},$$

respectively.

Definition 1.1 An operator $T$ is called monotone if and only if for each $u\in D(T)$, $v\in D(T)$ and $u^*\in T(u)$, $v^*\in T(v)$, we have

$$\langle v^*-u^*,\,v-u\rangle\ge 0.$$

$T$ is maximal monotone if it is monotone and its graph is not properly contained in the graph of any other monotone operator.

$T^{-1}$ is the operator defined by $v\in T^{-1}(u)\iff u\in T(v)$.

Definition 1.2 ([12])

For a maximal monotone operator $T$, the resolvent operator associated with $T$, for any $\sigma>0$, is defined as

$$J_T(u)=(I+\sigma T)^{-1}(u),\quad\forall u\in H.$$

It is known that a monotone operator is maximal if and only if its resolvent operator is defined everywhere. Furthermore, the resolvent operator is single-valued and nonexpansive, i.e., $\|J_T(x)-J_T(y)\|\le\|x-y\|$ for all $x,y\in H$. In particular, it is well known that the subdifferential $\partial\varphi$ of a proper, convex, and lower semicontinuous function $\varphi$ is a maximal monotone operator; see [13].
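As a concrete illustration (not from the paper): for $H=\mathbb{R}$ and $T=\partial\varphi$ with $\varphi(x)=|x|$, the resolvent $J_T=(I+\sigma T)^{-1}$ is the soft-thresholding map. The sketch below, with the illustrative name `resolvent_abs` and arbitrary sample points, checks the nonexpansiveness property numerically.

```python
import random

def resolvent_abs(z, sigma):
    """Resolvent (I + sigma*T)^(-1) of T = subdifferential of |x| on the
    real line, i.e. the soft-thresholding operator."""
    if z > sigma:
        return z - sigma
    if z < -sigma:
        return z + sigma
    return 0.0

# Nonexpansiveness: |J(x) - J(y)| <= |x - y| on randomly sampled pairs.
random.seed(0)
sigma = 0.7
pairs = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
assert all(
    abs(resolvent_abs(u, sigma) - resolvent_abs(v, sigma)) <= abs(u - v) + 1e-12
    for u, v in pairs
)
print("soft-thresholding passed the nonexpansiveness check")
```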

Lemma 1.3 ([12])

For a given $z\in H$, a point $u\in H$ satisfies the inequality

$$\langle u-z,\,x-u\rangle+\sigma\varphi(x)-\sigma\varphi(u)\ge 0,\quad\forall x\in H,$$

if and only if $u=J_\varphi(z)$, where $J_\varphi=(I+\sigma\partial\varphi)^{-1}$ is the resolvent operator and $\sigma>0$ is a constant.
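A standard observation, implicit in the indicator-function special case above but not stated explicitly in the paper: when $\varphi=\delta_K$ for a closed convex set $K$, Lemma 1.3 reduces to the classical characterization of the metric projection, so projection methods are a special case of resolvent methods.

```latex
u = J_{\delta_K}(z) = (I + \sigma\,\partial\delta_K)^{-1}(z)
\iff \langle u - z,\; x - u\rangle \ge 0,\quad \forall x \in K
\iff u = P_K(z).
```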

Using Lemma 1.3, we establish the following important relation.

Lemma 1.4 The variational inequality problem (1.1) is equivalent to finding $x^*,y^*\in H$ such that

$$\begin{cases}x^*=x^*-g_1(x^*)+J_\varphi(g_1(y^*)-\rho_1T_1(y^*,x^*)),\\ y^*=y^*-g_2(y^*)+J_\varphi(g_2(x^*)-\rho_2T_2(x^*,y^*)),\end{cases}$$
(1.6)

where $J_\varphi=(I+\partial\varphi)^{-1}$ is the resolvent operator and $\rho_1,\rho_2>0$.

Proof Let $x^*,y^*\in H$ be a solution of (1.1). Then for all $x\in H$, we have

$$\begin{cases}\langle\rho_1T_1(y^*,x^*)+g_1(x^*)-g_1(y^*),\,x-g_1(x^*)\rangle+\varphi(x)-\varphi(g_1(x^*))\ge 0,\\ \langle\rho_2T_2(x^*,y^*)+g_2(y^*)-g_2(x^*),\,x-g_2(y^*)\rangle+\varphi(x)-\varphi(g_2(y^*))\ge 0,\end{cases}$$

which can be written as

$$\begin{cases}\langle g_1(x^*)-(g_1(y^*)-\rho_1T_1(y^*,x^*)),\,x-g_1(x^*)\rangle+\varphi(x)-\varphi(g_1(x^*))\ge 0,\\ \langle g_2(y^*)-(g_2(x^*)-\rho_2T_2(x^*,y^*)),\,x-g_2(y^*)\rangle+\varphi(x)-\varphi(g_2(y^*))\ge 0.\end{cases}$$

Applying Lemma 1.3 with $\sigma=1$, this is equivalent to

$$\begin{cases}g_1(x^*)=J_\varphi(g_1(y^*)-\rho_1T_1(y^*,x^*)),\\ g_2(y^*)=J_\varphi(g_2(x^*)-\rho_2T_2(x^*,y^*)),\end{cases}$$

i.e.,

$$\begin{cases}x^*=x^*-g_1(x^*)+J_\varphi(g_1(y^*)-\rho_1T_1(y^*,x^*)),\\ y^*=y^*-g_2(y^*)+J_\varphi(g_2(x^*)-\rho_2T_2(x^*,y^*)).\end{cases}$$

This completes the proof. □

Definition 1.5 An operator $g:H\to H$ is said to be

(1) $\zeta$-strongly monotone if there exists a constant $\zeta>0$ such that

$$\langle g(x)-g(x'),\,x-x'\rangle\ge\zeta\|x-x'\|^2$$

for all $x,x'\in H$;

(2) $\eta$-Lipschitz continuous if there exists a constant $\eta>0$ such that

$$\|g(x)-g(x')\|\le\eta\|x-x'\|$$

for all $x,x'\in H$.

An operator $T:H\times H\to H$ is said to be

(3) relaxed $(\omega,t)$-cocoercive with respect to the first argument if there exist constants $\omega,t>0$ such that

$$\langle T(x,\cdot)-T(x',\cdot),\,x-x'\rangle\ge-\omega\|T(x,\cdot)-T(x',\cdot)\|^2+t\|x-x'\|^2$$

for all $x,x'\in H$;

(4) $\mu$-Lipschitz continuous with respect to the first argument if there exists a constant $\mu>0$ such that

$$\|T(x,\cdot)-T(x',\cdot)\|\le\mu\|x-x'\|$$

for all $x,x'\in H$;

(5) $\gamma$-Lipschitz continuous with respect to the second argument if there exists a constant $\gamma>0$ such that

$$\|T(\cdot,y)-T(\cdot,y')\|\le\gamma\|y-y'\|$$

for all $y,y'\in H$.
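A minimal numerical check of these definitions (an illustration, not from the paper): the mapping $T(x)=x$ on $\mathbb{R}$ is relaxed $(\omega,t)$-cocoercive for, e.g., $\omega=0.05$ and $t=1$, and is $1$-Lipschitz continuous. All constants below are illustrative choices.

```python
import random

# Illustrative parameters: T(x) = x is relaxed (omega, t)-cocoercive whenever
# 1 >= t - omega (here 1 >= 1 - 0.05) and mu-Lipschitz with mu = 1.
omega, t_const, mu = 0.05, 1.0, 1.0

def T(x):
    return x

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    lhs = (T(x) - T(y)) * (x - y)                        # <T(x)-T(y), x-y>
    rhs = -omega * (T(x) - T(y)) ** 2 + t_const * (x - y) ** 2
    assert lhs >= rhs - 1e-9                             # relaxed cocoercivity
    assert abs(T(x) - T(y)) <= mu * abs(x - y) + 1e-12   # Lipschitz continuity
print("T(x) = x satisfies both sampled inequalities")
```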

Lemma 1.6 ([14])

Let $\{a_n\}$ and $\{b_n\}$ be two nonnegative real sequences satisfying

$$a_{n+1}\le(1-d_n)a_n+b_n,\quad\forall n\ge n_0,$$

where $n_0$ is some nonnegative integer, $d_n\in(0,1)$ with $\sum_{n=0}^\infty d_n=\infty$ and $b_n=o(d_n)$. Then $a_n\to 0$ as $n\to\infty$.
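Lemma 1.6 can be illustrated numerically; the choices $d_n=1/(n+2)$ (so that $d_n\in(0,1)$ and $\sum d_n=\infty$) and $b_n=d_n^2=o(d_n)$ below are a hypothetical example, not from the paper.

```python
# Iterate a_{n+1} = (1 - d_n) a_n + b_n with d_n = 1/(n+2), b_n = d_n^2.
a = 1.0
for n in range(200000):
    d = 1.0 / (n + 2)      # d_n in (0,1); harmonic series, so the sum diverges
    b = d * d              # b_n = o(d_n)
    a = (1.0 - d) * a + b
print(a)                   # close to 0, consistent with a_n -> 0
```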

Several iterative algorithms have been devised to study the existence and approximation of solutions of different classes of variational inequalities. Most of them are sequential iterative methods; when such algorithms are implemented on a computer, only one processor is used at a time. The availability of multiprocessor computers has enabled researchers to develop iterative algorithms with parallel characteristics. Lions [15] studied a parallel algorithm for the solution of parabolic variational inequalities. Bertsekas and Tsitsiklis [16, 17] developed parallel algorithms using the metric projection. Recently, Yang et al. [7] studied a parallel projection algorithm for a system of nonlinear variational inequalities.

2 Existence and convergence

Lemma 1.4 establishes the equivalence between the variational inequality problem (1.1) and a fixed point problem. Using this equivalence, in this section we construct a parallel iterative algorithm to approximate the solution of the problem (1.1) and study the convergence of the sequences generated by the algorithm.

Algorithm 2.1 For arbitrarily chosen starting points $x_0,y_0\in H$, compute the sequences $\{x_n\}$ and $\{y_n\}$ by

$$\begin{cases}x_{n+1}=x_n-g_1(x_n)+J_\varphi(g_1(y_n)-\rho_1T_1(y_n,x_n)),\\ y_{n+1}=y_n-g_2(y_n)+J_\varphi(g_2(x_n)-\rho_2T_2(x_n,y_n)),\end{cases}$$
(2.1)

where $J_\varphi=(I+\partial\varphi)^{-1}$ is the resolvent operator and $\rho_1,\rho_2$ are positive real numbers.
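A toy instance of Algorithm 2.1 on $H=\mathbb{R}$ (an illustrative sketch, not from the paper): take $\varphi=\delta_K$ with $K=[1,3]$, so that $J_\varphi$ is the projection (clip) onto $K$, together with the hypothetical choices $T_1(y,x)=y-2$, $T_2(x,y)=x-2$, $g_1=g_2=I$, and $\rho_1=\rho_2=0.5$. The iteration then converges to the solution $x^*=y^*=2$.

```python
def J_phi(z):
    """Resolvent of the indicator of K = [1, 3], i.e. the projection onto K."""
    return min(max(z, 1.0), 3.0)

def g(x):                  # g1 = g2 = identity
    return x

def T1(y, x):              # illustrative choice T1(y, x) = y - 2
    return y - 2.0

def T2(x, y):              # illustrative choice T2(x, y) = x - 2
    return x - 2.0

rho1 = rho2 = 0.5
x, y = 0.0, 0.0            # arbitrary starting points
for _ in range(60):
    # both updates read only (x_n, y_n), so they could run in parallel
    x_new = x - g(x) + J_phi(g(y) - rho1 * T1(y, x))
    y_new = y - g(y) + J_phi(g(x) - rho2 * T2(x, y))
    x, y = x_new, y_new
print(x, y)                # both approach the solution 2
```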

Theorem 2.2 Let $H$ be a real Hilbert space. Let $T_i:H\times H\to H$ and $g_i:H\to H$ be mappings such that $T_i$ is relaxed $(\omega_i,t_i)$-cocoercive and $\mu_i$-Lipschitz continuous with respect to the first argument, $\gamma_i$-Lipschitz continuous with respect to the second argument, and $g_i$ is $\eta_i$-Lipschitz continuous and $\zeta_i$-strongly monotone for $i=1,2$. Assume that

$$\rho_1\gamma_1+\theta_2<1-\kappa\quad\text{and}\quad\rho_2\gamma_2+\theta_1<1-\kappa,$$

where $\theta_i=\sqrt{1+2\rho_i\omega_i\mu_i^2-2\rho_it_i+\rho_i^2\mu_i^2}$ for $i=1,2$ and $\kappa=\sum_{i=1}^2\sqrt{1-2\zeta_i+\eta_i^2}<1$.

Then there exist $x^*,y^*\in H$ which solve the problem (1.1). Moreover, the iterative sequences $\{x_n\}$ and $\{y_n\}$ generated by Algorithm 2.1 converge to $x^*$ and $y^*$, respectively.

Proof Using (2.1) and the nonexpansiveness of $J_\varphi$, we have

$$\begin{aligned}\|x_{n+1}-x_n\|\le{}&\|x_n-x_{n-1}-(g_1(x_n)-g_1(x_{n-1}))\|+\|y_n-y_{n-1}-(g_1(y_n)-g_1(y_{n-1}))\|\\&+\|y_n-y_{n-1}-\rho_1(T_1(y_n,x_n)-T_1(y_{n-1},x_n))\|+\rho_1\|T_1(y_{n-1},x_n)-T_1(y_{n-1},x_{n-1})\|.\end{aligned}$$
(2.2)

Since $T_1$ is relaxed $(\omega_1,t_1)$-cocoercive and $\mu_1$-Lipschitz continuous in the first argument, we have

$$\begin{aligned}\|y_n-y_{n-1}-\rho_1(T_1(y_n,x_n)-T_1(y_{n-1},x_n))\|^2={}&\|y_n-y_{n-1}\|^2-2\rho_1\langle T_1(y_n,x_n)-T_1(y_{n-1},x_n),\,y_n-y_{n-1}\rangle\\&+\rho_1^2\|T_1(y_n,x_n)-T_1(y_{n-1},x_n)\|^2\\\le{}&\bigl(1+2\rho_1\omega_1\mu_1^2-2\rho_1t_1+\rho_1^2\mu_1^2\bigr)\|y_n-y_{n-1}\|^2.\end{aligned}$$
(2.3)

Since $g_1$ is $\eta_1$-Lipschitz continuous and $\zeta_1$-strongly monotone,

$$\|x_n-x_{n-1}-(g_1(x_n)-g_1(x_{n-1}))\|^2\le\bigl(1-2\zeta_1+\eta_1^2\bigr)\|x_n-x_{n-1}\|^2.$$
(2.4)

Similarly,

$$\|y_n-y_{n-1}-(g_1(y_n)-g_1(y_{n-1}))\|^2\le\bigl(1-2\zeta_1+\eta_1^2\bigr)\|y_n-y_{n-1}\|^2.$$
(2.5)

By the $\gamma_1$-Lipschitz continuity of $T_1$ with respect to the second argument,

$$\|T_1(y_{n-1},x_n)-T_1(y_{n-1},x_{n-1})\|\le\gamma_1\|x_n-x_{n-1}\|.$$
(2.6)

It follows from (2.2)-(2.6) that

$$\|x_{n+1}-x_n\|\le(\psi_1+\rho_1\gamma_1)\|x_n-x_{n-1}\|+(\psi_1+\theta_1)\|y_n-y_{n-1}\|,$$
(2.7)

where $\psi_1=\sqrt{1-2\zeta_1+\eta_1^2}$ and $\theta_1=\sqrt{1+2\rho_1\omega_1\mu_1^2-2\rho_1t_1+\rho_1^2\mu_1^2}$.

Similarly, we get

$$\|y_{n+1}-y_n\|\le(\psi_2+\theta_2)\|x_n-x_{n-1}\|+(\psi_2+\rho_2\gamma_2)\|y_n-y_{n-1}\|,$$
(2.8)

where $\psi_2=\sqrt{1-2\zeta_2+\eta_2^2}$ and $\theta_2=\sqrt{1+2\rho_2\omega_2\mu_2^2-2\rho_2t_2+\rho_2^2\mu_2^2}$.

Now (2.7) and (2.8) imply

$$\begin{aligned}\|x_{n+1}-x_n\|+\|y_{n+1}-y_n\|&\le(\psi_1+\psi_2+\theta_2+\rho_1\gamma_1)\|x_n-x_{n-1}\|+(\psi_1+\psi_2+\theta_1+\rho_2\gamma_2)\|y_n-y_{n-1}\|\\&\le\Theta\bigl(\|x_n-x_{n-1}\|+\|y_n-y_{n-1}\|\bigr),\end{aligned}$$

where $\Theta=\max\{\psi_1+\psi_2+\theta_2+\rho_1\gamma_1,\ \psi_1+\psi_2+\theta_1+\rho_2\gamma_2\}<1$ by assumption. Hence $\{x_n\}$ and $\{y_n\}$ are both Cauchy sequences in $H$, so $\{x_n\}$ converges to some $x^*\in H$ and $\{y_n\}$ converges to some $y^*\in H$. Since $g_1$, $g_2$, $T_1$, $T_2$, and $J_\varphi$ are all continuous, we have

$$\begin{cases}x^*=x^*-g_1(x^*)+J_\varphi(g_1(y^*)-\rho_1T_1(y^*,x^*)),\\ y^*=y^*-g_2(y^*)+J_\varphi(g_2(x^*)-\rho_2T_2(x^*,y^*)).\end{cases}$$

The result follows from Lemma 1.4. This completes the proof. □

If T 1 , T 2 :HH are univariate mappings, then the Algorithm 2.1 reduces to the following.

Algorithm 2.3 For arbitrarily chosen starting points $x_0,y_0\in H$, compute the sequences $\{x_n\}$ and $\{y_n\}$ by

$$\begin{cases}x_{n+1}=x_n-g_1(x_n)+J_\varphi(g_1(y_n)-\rho_1T_1(y_n)),\\ y_{n+1}=y_n-g_2(y_n)+J_\varphi(g_2(x_n)-\rho_2T_2(x_n)),\end{cases}$$

where $J_\varphi=(I+\partial\varphi)^{-1}$ is the resolvent operator and $\rho_1,\rho_2$ are positive real numbers.

Theorem 2.4 Let $H$ be a real Hilbert space. Let $T_i,g_i:H\to H$ be mappings such that $T_i$ is relaxed $(\omega_i,t_i)$-cocoercive and $\mu_i$-Lipschitz continuous, and $g_i$ is $\eta_i$-Lipschitz continuous and $\zeta_i$-strongly monotone for $i=1,2$. Assume that

$$\theta_i<1-\kappa\quad\text{for }i=1,2,$$

where $\theta_i=\sqrt{1+2\rho_i\omega_i\mu_i^2-2\rho_it_i+\rho_i^2\mu_i^2}$ and $\kappa=\sum_{i=1}^2\sqrt{1-2\zeta_i+\eta_i^2}<1$.

Then there exist $x^*,y^*\in H$ which solve the problem (1.2). Moreover, the iterative sequences $\{x_n\}$ and $\{y_n\}$ generated by Algorithm 2.3 converge to $x^*$ and $y^*$, respectively.

3 Relaxed algorithm and approximation solvability

Lemma 1.4 implies that the system of generalized mixed variational inequalities (1.1) is equivalent to a fixed point problem. This alternative equivalent formulation is very useful from a numerical point of view. In this section, we construct a relaxed iterative algorithm for solving the problem (1.1) and study the convergence of the iterative sequences generated by the algorithm.

Algorithm 3.1 For arbitrarily chosen starting points $x_0,y_0\in H$, compute the sequences $\{x_n\}$ and $\{y_n\}$ by

$$\begin{cases}x_{n+1}=(1-\alpha_n)x_n+\alpha_n\bigl(x_n-g_1(x_n)+J_\varphi(g_1(y_n)-\rho_1T_1(y_n,x_n))\bigr),\\ y_{n+1}=(1-\beta_n)y_n+\beta_n\bigl(y_n-g_2(y_n)+J_\varphi(g_2(x_n)-\rho_2T_2(x_n,y_n))\bigr),\end{cases}$$
(3.1)

where $J_\varphi=(I+\partial\varphi)^{-1}$ is the resolvent operator, $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $[0,1]$, and $\rho_1,\rho_2$ are positive real numbers.
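The relaxed iteration (3.1) can be illustrated on a toy instance of (1.1) on $H=\mathbb{R}$; all choices below are hypothetical, not from the paper: $\varphi=\delta_{[1,3]}$ (so $J_\varphi$ is the clip onto $[1,3]$), $T_1(y,x)=y-2$, $T_2(x,y)=x-2$, $g_1=g_2=I$, $\rho_1=\rho_2=0.5$, and constant relaxation sequences $\alpha_n=\beta_n=0.8$ (constant sequences make divergent-sum conditions hold trivially).

```python
def J_phi(z):              # projection onto K = [1, 3]
    return min(max(z, 1.0), 3.0)

rho1 = rho2 = 0.5
alpha = beta = 0.8         # constant relaxation sequences
x, y = 0.0, 0.0            # arbitrary starting points
for _ in range(200):
    x_new = (1 - alpha) * x + alpha * J_phi(y - rho1 * (y - 2.0))  # g1 = I
    y_new = (1 - beta) * y + beta * J_phi(x - rho2 * (x - 2.0))    # g2 = I
    x, y = x_new, y_new
print(x, y)                # both approach the solution x* = y* = 2
```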

We first prove a result which will be helpful in proving the main result of this section.

Lemma 3.2 Let $H$ be a real Hilbert space. Let $\{x_n\}$ and $\{y_n\}$ be sequences in $H$ such that

$$\|x_{n+1}-x^*\|+\|y_{n+1}-y^*\|\le\max\{(1-t_n),(1-s_n)\}\bigl(\|x_n-x^*\|+\|y_n-y^*\|\bigr)$$
(3.2)

for some $x^*,y^*\in H$, where $\{s_n\}$ and $\{t_n\}$ are sequences in $(0,1)$ such that $\sum_{n=0}^\infty t_n=\infty$ and $\sum_{n=0}^\infty s_n=\infty$. Then $\{x_n\}$ and $\{y_n\}$ converge to $x^*$ and $y^*$, respectively.

Proof Define the norm $\|\cdot\|_1$ on $H\times H$ by

$$\|(x,y)\|_1=\|x\|+\|y\|,\quad\forall(x,y)\in H\times H.$$

Then $(H\times H,\|\cdot\|_1)$ is a Banach space, and (3.2) implies that

$$\|(x_{n+1},y_{n+1})-(x^*,y^*)\|_1\le\max\{(1-t_n),(1-s_n)\}\|(x_n,y_n)-(x^*,y^*)\|_1.$$

Using Lemma 1.6, we get

$$\lim_{n\to\infty}\|(x_n,y_n)-(x^*,y^*)\|_1=0.$$

Therefore, the sequences $\{x_n\}$ and $\{y_n\}$ converge to $x^*$ and $y^*$, respectively. This completes the proof. □

We now present the approximation solvability of the problem (1.1).

Theorem 3.3 Let $H$ be a real Hilbert space. Let $T_i:H\times H\to H$ and $g_i:H\to H$ be mappings such that $T_i$ is relaxed $(\omega_i,t_i)$-cocoercive and $\mu_i$-Lipschitz continuous with respect to the first argument, $\gamma_i$-Lipschitz continuous with respect to the second argument, and $g_i$ is $\eta_i$-Lipschitz continuous and $\zeta_i$-strongly monotone for $i=1,2$. Suppose that $x^*,y^*\in H$ is a solution of the problem (1.1) and $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $[0,1]$. Assume that the following assumptions hold:

(i) $0<\Theta_{1n}=\alpha_n\bigl(1-(\psi_1+\rho_1\gamma_1)\bigr)-\beta_n(\psi_2+\theta_2)<1$,

(ii) $0<\Theta_{2n}=\beta_n\bigl(1-(\psi_2+\rho_2\gamma_2)\bigr)-\alpha_n(\psi_1+\theta_1)<1$,

(iii) $\sum_{n=0}^\infty\Theta_{1n}=\infty$ and $\sum_{n=0}^\infty\Theta_{2n}=\infty$,

where

$$\theta_i=\sqrt{1+2\rho_i\omega_i\mu_i^2-2\rho_it_i+\rho_i^2\mu_i^2},\qquad\psi_i=\sqrt{1-2\zeta_i+\eta_i^2},\quad i=1,2.$$

Then the sequences $\{x_n\}$ and $\{y_n\}$ generated by Algorithm 3.1 converge to $x^*$ and $y^*$, respectively.

Proof By Theorem 2.2, the problem (1.1) has a solution $(x^*,y^*)$. By Lemma 1.4, we have

$$\begin{cases}x^*=x^*-g_1(x^*)+J_\varphi(g_1(y^*)-\rho_1T_1(y^*,x^*)),\\ y^*=y^*-g_2(y^*)+J_\varphi(g_2(x^*)-\rho_2T_2(x^*,y^*)).\end{cases}$$
(3.3)

To prove the result, we first estimate $\|x_{n+1}-x^*\|$ for all $n\ge 0$. Using (3.1), (3.3), and the nonexpansiveness of $J_\varphi$, we obtain

$$\begin{aligned}\|x_{n+1}-x^*\|\le{}&(1-\alpha_n)\|x_n-x^*\|+\alpha_n\bigl[\|x_n-x^*-(g_1(x_n)-g_1(x^*))\|+\|y_n-y^*-(g_1(y_n)-g_1(y^*))\|\\&+\|y_n-y^*-\rho_1(T_1(y_n,x_n)-T_1(y^*,x_n))\|+\rho_1\|T_1(y^*,x_n)-T_1(y^*,x^*)\|\bigr].\end{aligned}$$
(3.4)

Since $T_1$ is relaxed $(\omega_1,t_1)$-cocoercive and $\mu_1$-Lipschitz continuous with respect to the first argument, we have

$$\|y_n-y^*-\rho_1(T_1(y_n,x_n)-T_1(y^*,x_n))\|^2\le\bigl(1+2\rho_1\omega_1\mu_1^2-2\rho_1t_1+\rho_1^2\mu_1^2\bigr)\|y_n-y^*\|^2.$$
(3.5)

Since $g_1$ is $\eta_1$-Lipschitz continuous and $\zeta_1$-strongly monotone,

$$\|x_n-x^*-(g_1(x_n)-g_1(x^*))\|^2\le\bigl(1-2\zeta_1+\eta_1^2\bigr)\|x_n-x^*\|^2.$$
(3.6)

Similarly, we have

$$\|y_n-y^*-(g_1(y_n)-g_1(y^*))\|^2\le\bigl(1-2\zeta_1+\eta_1^2\bigr)\|y_n-y^*\|^2.$$
(3.7)

By the $\gamma_1$-Lipschitz continuity of $T_1$ with respect to the second argument,

$$\|T_1(y^*,x_n)-T_1(y^*,x^*)\|\le\gamma_1\|x_n-x^*\|.$$
(3.8)

By (3.4)-(3.8), we have

$$\|x_{n+1}-x^*\|\le\bigl[1-\alpha_n+\alpha_n(\psi_1+\rho_1\gamma_1)\bigr]\|x_n-x^*\|+\alpha_n(\psi_1+\theta_1)\|y_n-y^*\|,$$
(3.9)

where $\psi_1=\sqrt{1-2\zeta_1+\eta_1^2}$ and $\theta_1=\sqrt{1+2\rho_1\omega_1\mu_1^2-2\rho_1t_1+\rho_1^2\mu_1^2}$.

Similarly, we have

$$\|y_{n+1}-y^*\|\le\beta_n(\psi_2+\theta_2)\|x_n-x^*\|+\bigl[1-\beta_n+\beta_n(\psi_2+\rho_2\gamma_2)\bigr]\|y_n-y^*\|,$$
(3.10)

where $\psi_2=\sqrt{1-2\zeta_2+\eta_2^2}$ and $\theta_2=\sqrt{1+2\rho_2\omega_2\mu_2^2-2\rho_2t_2+\rho_2^2\mu_2^2}$.

Now (3.9) and (3.10) imply

$$\begin{aligned}\|x_{n+1}-x^*\|+\|y_{n+1}-y^*\|&\le(1-\Theta_{1n})\|x_n-x^*\|+(1-\Theta_{2n})\|y_n-y^*\|\\&\le\max\{(1-\Theta_{1n}),(1-\Theta_{2n})\}\bigl(\|x_n-x^*\|+\|y_n-y^*\|\bigr),\end{aligned}$$

where

$$\Theta_{1n}=\alpha_n\bigl(1-(\psi_1+\rho_1\gamma_1)\bigr)-\beta_n(\psi_2+\theta_2),\qquad\Theta_{2n}=\beta_n\bigl(1-(\psi_2+\rho_2\gamma_2)\bigr)-\alpha_n(\psi_1+\theta_1).$$

By the assumptions and Lemma 3.2, the sequences $\{x_n\}$ and $\{y_n\}$ converge to $x^*$ and $y^*$, respectively. This completes the proof. □

Remark 3.4 Theorem 3.3 extends and generalizes the main result of [5], which itself is an extension and improvement of the main result of Chang et al. [10].

If T 1 , T 2 :HH are univariate mappings, then the Algorithm 3.1 reduces to the following.

Algorithm 3.5 For arbitrarily chosen starting points $x_0,y_0\in H$, compute the sequences $\{x_n\}$ and $\{y_n\}$ by

$$\begin{cases}x_{n+1}=(1-\alpha_n)x_n+\alpha_n\bigl(x_n-g_1(x_n)+J_\varphi(g_1(y_n)-\rho_1T_1(y_n))\bigr),\\ y_{n+1}=(1-\beta_n)y_n+\beta_n\bigl(y_n-g_2(y_n)+J_\varphi(g_2(x_n)-\rho_2T_2(x_n))\bigr),\end{cases}$$

where $J_\varphi=(I+\partial\varphi)^{-1}$ is the resolvent operator, $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $[0,1]$, and $\rho_1,\rho_2$ are positive real numbers.

As a consequence of Theorem 3.3, we have the following result.

Corollary 3.6 Let $H$ be a real Hilbert space. Let $T_i,g_i:H\to H$ be mappings such that $T_i$ is relaxed $(\omega_i,t_i)$-cocoercive and $\mu_i$-Lipschitz continuous, and $g_i$ is $\eta_i$-Lipschitz continuous and $\zeta_i$-strongly monotone for $i=1,2$. Suppose that $x^*,y^*\in H$ is a solution of the problem (1.2) and $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $[0,1]$. Assume that the following assumptions hold:

(i) $0<\Theta_{1n}=\alpha_n(1-\psi_1)-\beta_n(\psi_2+\theta_2)<1$,

(ii) $0<\Theta_{2n}=\beta_n(1-\psi_2)-\alpha_n(\psi_1+\theta_1)<1$,

(iii) $\sum_{n=0}^\infty\Theta_{1n}=\infty$ and $\sum_{n=0}^\infty\Theta_{2n}=\infty$,

where

$$\theta_i=\sqrt{1+2\rho_i\omega_i\mu_i^2-2\rho_it_i+\rho_i^2\mu_i^2},\qquad\psi_i=\sqrt{1-2\zeta_i+\eta_i^2},\quad i=1,2.$$

Then the sequences $\{x_n\}$ and $\{y_n\}$ generated by Algorithm 3.5 converge to $x^*$ and $y^*$, respectively.
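As a sanity check on conditions (i)–(iii), one can verify numerically that admissible constants exist. The symmetric sample values below ($\zeta_i=0.95$, $\eta_i=1$, $\omega_i=0.05$, $t_i=1$, $\mu_i=1$, $\rho_i=0.95$, $\alpha_n=\beta_n=0.8$) are illustrative, not taken from the paper; constant sequences make condition (iii) automatic.

```python
import math

# Illustrative constants, chosen identical for i = 1, 2.
zeta, eta, omega, t, mu, rho = 0.95, 1.0, 0.05, 1.0, 1.0, 0.95
alpha = beta = 0.8

psi = math.sqrt(1 - 2 * zeta + eta ** 2)                                # psi_i
theta = math.sqrt(1 + 2 * rho * omega * mu ** 2
                  - 2 * rho * t + rho ** 2 * mu ** 2)                   # theta_i

Theta1 = alpha * (1 - psi) - beta * (psi + theta)   # condition (i)
Theta2 = beta * (1 - psi) - alpha * (psi + theta)   # condition (ii)
print(Theta1, Theta2)      # both lie strictly between 0 and 1
```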

4 Algorithms for common element

Now we consider the approximation of a solution of the system (1.1) which is also a common fixed point of two Lipschitzian mappings. We propose a relaxed two-step algorithm which can be applied to approximate solutions of the problem (1.1) that are common fixed points of two Lipschitzian mappings.

Algorithm 4.1 For arbitrarily chosen starting points $x_0,y_0\in H$, compute the sequences $\{x_n\}$ and $\{y_n\}$ by

$$\begin{cases}x_{n+1}=(1-\alpha_n)x_n+\alpha_nS_1\bigl(x_n-g_1(x_n)+J_\varphi(g_1(y_n)-\rho_1T_1(y_n,x_n))\bigr),\\ y_{n+1}=(1-\beta_n)y_n+\beta_nS_2\bigl(y_n-g_2(y_n)+J_\varphi(g_2(x_n)-\rho_2T_2(x_n,y_n))\bigr),\end{cases}$$
(4.1)

where $J_\varphi=(I+\partial\varphi)^{-1}$ is the resolvent operator, $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $[0,1]$, and $\rho_1,\rho_2$ are positive real numbers.

Let $F(S_i)$ denote the set of fixed points of the mapping $S_i$, i.e., $F(S_i)=\{x\in H: S_ix=x\}$, and let $\operatorname{Fix}(S)=\bigcap_{i=1}^2F(S_i)$.

Theorem 4.2 Let $H$ be a real Hilbert space. Let $T_i:H\times H\to H$ and $g_i:H\to H$ be mappings such that $T_i$ is relaxed $(\omega_i,t_i)$-cocoercive and $\mu_i$-Lipschitz continuous with respect to the first argument, $\gamma_i$-Lipschitz continuous with respect to the second argument, and $g_i$ is $\eta_i$-Lipschitz continuous and $\zeta_i$-strongly monotone for $i=1,2$. Let $S_i:H\to H$ be a $\vartheta_i$-Lipschitzian mapping for $i=1,2$ with $\operatorname{Fix}(S)\neq\emptyset$, and let $\{\alpha_n\}$, $\{\beta_n\}$ be sequences in $[0,1]$. Assume that the following assumptions hold:

(i) $0<\Theta_{1n}=\alpha_n\vartheta\bigl(1-(\psi_1+\rho_1\gamma_1)\bigr)-\beta_n\vartheta(\psi_2+\theta_2)<1$,

(ii) $0<\Theta_{2n}=\beta_n\vartheta\bigl(1-(\psi_2+\rho_2\gamma_2)\bigr)-\alpha_n\vartheta(\psi_1+\theta_1)<1$,

(iii) $\sum_{n=0}^\infty\Theta_{1n}=\infty$ and $\sum_{n=0}^\infty\Theta_{2n}=\infty$,

where $\vartheta=\max\{\vartheta_1,\vartheta_2\}$ and

$$\theta_i=\sqrt{1+2\rho_i\omega_i\mu_i^2-2\rho_it_i+\rho_i^2\mu_i^2},\qquad\psi_i=\sqrt{1-2\zeta_i+\eta_i^2},\quad i=1,2.$$

If the problem (1.1) has a solution $(x^*,y^*)$ with $\{x^*,y^*\}\subset\operatorname{Fix}(S)$, then the sequences $\{x_n\}$ and $\{y_n\}$ generated by Algorithm 4.1 converge to $x^*$ and $y^*$, respectively.

Proof Let $(x^*,y^*)$ be a solution of the problem (1.1) with $\{x^*,y^*\}\subset\operatorname{Fix}(S)$. By Lemma 1.4, we have

$$\begin{cases}x^*=x^*-g_1(x^*)+J_\varphi(g_1(y^*)-\rho_1T_1(y^*,x^*)),\\ y^*=y^*-g_2(y^*)+J_\varphi(g_2(x^*)-\rho_2T_2(x^*,y^*)).\end{cases}$$

Also, since $\{x^*,y^*\}\subset\operatorname{Fix}(S)$, we have

$$\begin{cases}x^*=S_1\bigl(x^*-g_1(x^*)+J_\varphi(g_1(y^*)-\rho_1T_1(y^*,x^*))\bigr),\\ y^*=S_2\bigl(y^*-g_2(y^*)+J_\varphi(g_2(x^*)-\rho_2T_2(x^*,y^*))\bigr).\end{cases}$$

To prove the result, we first estimate $\|x_{n+1}-x^*\|$ for all $n\ge 0$. Using (4.1), the nonexpansiveness of $J_\varphi$, and the fact that $S_1$ is $\vartheta_1$-Lipschitzian, we obtain

$$\begin{aligned}\|x_{n+1}-x^*\|\le{}&(1-\alpha_n)\|x_n-x^*\|+\alpha_n\vartheta_1\bigl[\|x_n-x^*-(g_1(x_n)-g_1(x^*))\|+\|y_n-y^*-(g_1(y_n)-g_1(y^*))\|\\&+\|y_n-y^*-\rho_1(T_1(y_n,x_n)-T_1(y^*,x_n))\|+\rho_1\|T_1(y^*,x_n)-T_1(y^*,x^*)\|\bigr].\end{aligned}$$
(4.2)

Using the same arguments as in the proof of Theorem 3.3, from (4.2) we get

$$\|x_{n+1}-x^*\|\le\bigl[1-\alpha_n+\alpha_n\vartheta_1(\psi_1+\rho_1\gamma_1)\bigr]\|x_n-x^*\|+\alpha_n\vartheta_1(\psi_1+\theta_1)\|y_n-y^*\|,$$
(4.3)

where $\psi_1=\sqrt{1-2\zeta_1+\eta_1^2}$ and $\theta_1=\sqrt{1+2\rho_1\omega_1\mu_1^2-2\rho_1t_1+\rho_1^2\mu_1^2}$.

Similarly, we get

$$\|y_{n+1}-y^*\|\le\beta_n\vartheta_2(\psi_2+\theta_2)\|x_n-x^*\|+\bigl[1-\beta_n+\beta_n\vartheta_2(\psi_2+\rho_2\gamma_2)\bigr]\|y_n-y^*\|,$$
(4.4)

where $\psi_2=\sqrt{1-2\zeta_2+\eta_2^2}$ and $\theta_2=\sqrt{1+2\rho_2\omega_2\mu_2^2-2\rho_2t_2+\rho_2^2\mu_2^2}$.

Adding (4.3) and (4.4) and taking $\vartheta=\max\{\vartheta_1,\vartheta_2\}$, we get

$$\|x_{n+1}-x^*\|+\|y_{n+1}-y^*\|\le\max\{(1-\Theta_{1n}),(1-\Theta_{2n})\}\bigl(\|x_n-x^*\|+\|y_n-y^*\|\bigr),$$

where

$$\Theta_{1n}=\alpha_n\vartheta\bigl(1-(\psi_1+\rho_1\gamma_1)\bigr)-\beta_n\vartheta(\psi_2+\theta_2),\qquad\Theta_{2n}=\beta_n\vartheta\bigl(1-(\psi_2+\rho_2\gamma_2)\bigr)-\alpha_n\vartheta(\psi_1+\theta_1).$$

By the assumptions and Lemma 3.2, the sequences $\{x_n\}$ and $\{y_n\}$ converge to $x^*$ and $y^*$, respectively. This completes the proof. □

A mapping $S:H\to H$ is said to be asymptotically $\lambda$-strictly pseudocontractive [18] if there exists a sequence $\{k_n\}\subset[1,\infty)$ with $\lim_{n\to\infty}k_n=1$ such that

$$\|S^nx-S^ny\|^2\le k_n^2\|x-y\|^2+\lambda\|(x-S^nx)-(y-S^ny)\|^2$$

for some $\lambda\in(0,1)$, for all $x,y\in H$ and $n\ge 1$.

Kim and Xu [19] proved that if $S:H\to H$ is an asymptotically $\lambda$-strictly pseudocontractive mapping, then $S^n$ is a Lipschitzian mapping with Lipschitz constant

$$L_n=\frac{\lambda+\sqrt{1+(k_n^2-1)(1-\lambda)}}{1-\lambda}$$

for each integer $n\ge 1$. Also, if $x^*\in F(S)$, then $x^*\in F(S^n)$ for every integer $n\ge 1$.
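A quick numerical check of this constant (an illustration, not from the paper): for $k_n=1$ the formula reduces to $L_n=(\lambda+1)/(1-\lambda)$, e.g. $L_n=3$ for $\lambda=0.5$, and $L_n$ grows as $k_n$ increases.

```python
import math

def lipschitz_constant(k_n, lam):
    """Kim-Xu Lipschitz constant L_n for the iterate S^n of an asymptotically
    lam-strictly pseudocontractive mapping."""
    return (lam + math.sqrt(1 + (k_n ** 2 - 1) * (1 - lam))) / (1 - lam)

print(lipschitz_constant(1.0, 0.5))   # (0.5 + 1)/0.5 = 3.0
```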

Assume that $S_i:H\to H$ is an asymptotically $\lambda_i$-strictly pseudocontractive mapping for $i=1,2$ with $\bigcap_{i=1}^2F(S_i)\neq\emptyset$. Now generate the sequences $\{x_n\}$ and $\{y_n\}$ by Algorithm 4.1 with $S_1:=S_1^j$ and $S_2:=S_2^k$ for some integers $j,k>1$. Theorem 4.2 can then be applied to study the approximate solvability of the problem (1.1) and common fixed points of two asymptotically strictly pseudocontractive mappings.

A mapping $S:H\to H$ is said to be asymptotically nonexpansive [20] if there exists a sequence $\{k_n\}\subset[1,\infty)$ with $\lim_{n\to\infty}k_n=1$ such that $\|S^nx-S^ny\|\le k_n\|x-y\|$ for all $x,y\in H$ and $n\ge 1$. Clearly, every asymptotically nonexpansive mapping is an asymptotically 0-strictly pseudocontractive mapping. Theorem 4.2 can be applied to study the approximate solvability of the problem (1.1) and common fixed points of two asymptotically nonexpansive mappings.

Remark 4.3 An important feature of the algorithms used in this paper is their suitability for implementation on multiprocessor computers. Given $x_n$ and $y_n$, one processor can compute $x_{n+1}$ while another computes $y_{n+1}$; that is, $x_{n+1}$ and $y_{n+1}$ are computed in parallel, which takes less time than computing $x_{n+1}$ and $y_{n+1}$ sequentially on a single processor. We refer the reader to [16, 17, 21–23] and the references therein for more examples and ideas of parallel iterative methods.
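A minimal sketch of this two-processor scheme, reusing the toy instance from Section 2 ($\varphi=\delta_{[1,3]}$, $T_1(y,x)=y-2$, $T_2(x,y)=x-2$, $g=I$; all illustrative choices, not from the paper). Python threads are used only to show the structure of the parallel update; for true parallelism one would use processes or a compiled language.

```python
from concurrent.futures import ThreadPoolExecutor

def J_phi(z):                        # projection onto K = [1, 3]
    return min(max(z, 1.0), 3.0)

def next_x(x, y, rho1=0.5):          # x-update of Algorithm 2.1 with g1 = I
    return J_phi(y - rho1 * (y - 2.0))

def next_y(x, y, rho2=0.5):          # y-update of Algorithm 2.1 with g2 = I
    return J_phi(x - rho2 * (x - 2.0))

x, y = 0.0, 0.0
with ThreadPoolExecutor(max_workers=2) as pool:
    for _ in range(60):
        fx = pool.submit(next_x, x, y)   # worker 1 computes x_{n+1}
        fy = pool.submit(next_y, x, y)   # worker 2 computes y_{n+1}
        x, y = fx.result(), fy.result()
print(x, y)                          # both approach the solution 2
```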

References

  1. Stampacchia G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 1964, 258: 4413–4416.
  2. Antman S: The influence of elasticity in analysis: modern developments. Bull. Am. Math. Soc. 1983, 9(3): 267–291. 10.1090/S0273-0979-1983-15185-6
  3. Husain T, Tarafdar E: Simultaneous variational inequalities, minimization problems and related results. Math. Jpn. 1994, 39(2): 221–231.
  4. Showalter RE: Monotone Operators in Banach Spaces and Nonlinear Partial Differential Equations. Mathematical Surveys and Monographs 49. Am. Math. Soc., Providence; 1997.
  5. He Z, Gu F: Generalized systems for relaxed cocoercive mixed variational inequalities in Hilbert spaces. Appl. Math. Comput. 2009, 214: 26–30. 10.1016/j.amc.2009.03.056
  6. Petrot N: A resolvent operator technique for approximate solving of generalized system mixed variational inequality and fixed point problems. Appl. Math. Lett. 2010, 23: 440–445.
  7. Yang H, Zhou L, Li Q: A parallel projection method for a system of nonlinear variational inequalities. Appl. Math. Comput. 2010, 217: 1971–1975. 10.1016/j.amc.2010.06.053
  8. Verma RU: Projection methods, algorithms and a new system of nonlinear variational inequalities. Comput. Math. Appl. 2001, 41: 1025–1031. 10.1016/S0898-1221(00)00336-9
  9. Verma RU: General convergence analysis for two-step projection methods and applications to variational problems. Appl. Math. Lett. 2005, 18: 1286–1292. 10.1016/j.aml.2005.02.026
  10. Chang SS, Lee HWJ, Chan CK: Generalized system for relaxed cocoercive variational inequalities in Hilbert spaces. Appl. Math. Lett. 2007, 20: 329–334. 10.1016/j.aml.2006.04.017
  11. Hassouni A, Moudafi A: A perturbed algorithm for variational inclusions. J. Math. Anal. Appl. 1994, 185: 706–712.
  12. Brezis H: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland Mathematics Studies 5, Notas de Matemática 50. North-Holland, Amsterdam; 1973.
  13. Minty GJ: On the monotonicity of the gradient of a convex function. Pac. J. Math. 1964, 14: 243–247. 10.2140/pjm.1964.14.243
  14. Weng XL: Fixed point iteration for local strictly pseudocontractive mapping. Proc. Am. Math. Soc. 1991, 113: 727–731.
  15. Lions JL: Parallel algorithms for the solution of variational inequalities. Interfaces Free Bound. 1999, 1: 13–16.
  16. Bertsekas DP, Tsitsiklis JN: Parallel and Distributed Computation: Numerical Methods. Prentice Hall, Upper Saddle River; 1989.
  17. Bertsekas DP, Tsitsiklis JN: Some aspects of parallel and distributed iterative algorithms - a survey. Automatica 1991, 27(1): 3–21. 10.1016/0005-1098(91)90003-K
  18. Liu Q: Convergence theorems of the sequence of iterates for asymptotically demicontractive and hemicontractive mappings. Nonlinear Anal. 1996, 26: 1835–1842. 10.1016/0362-546X(94)00351-H
  19. Kim TH, Xu HK: Convergence of the modified Mann's iteration method for asymptotically strict pseudo-contractions. Nonlinear Anal. 2008, 68: 2828–2836. 10.1016/j.na.2007.02.029
  20. Goebel K, Kirk WA: A fixed point theorem for asymptotically nonexpansive mappings. Proc. Am. Math. Soc. 1972, 35: 171–174. 10.1090/S0002-9939-1972-0298500-3
  21. Baudet GM: Asynchronous iterative methods for multiprocessors. J. Assoc. Comput. Mach. 1978, 25: 226–244. 10.1145/322063.322067
  22. Hoffmann KH, Zou J: Parallel algorithms of Schwarz variant for variational inequalities. Numer. Funct. Anal. Optim. 1992, 13: 449–462. 10.1080/01630569208816491
  23. Hoffmann KH, Zou J: Parallel solution of variational inequality problems with nonlinear source terms. IMA J. Numer. Anal. 1996, 16: 31–45. 10.1093/imanum/16.1.31

Author information

Correspondence to Shin Min Kang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Thakur, B.S., Khan, M.S. & Kang, S.M. Existence and approximation of solutions for system of generalized mixed variational inequalities. Fixed Point Theory Appl 2013, 108 (2013). https://doi.org/10.1186/1687-1812-2013-108


Keywords

  • system of generalized mixed variational inequality
  • fixed point problem
  • resolvent operator technique
  • relaxed cocoercive mapping
  • maximal monotone operator
  • parallel iterative algorithm