
Graph-convergent analysis of over-relaxed (A,η,m)-proximal point iterative methods with errors for general nonlinear operator equations

Abstract

In this paper, we introduce and study a new class of over-relaxed (A,η,m)-proximal point iterative methods with errors for solving general nonlinear operator equations in Hilbert spaces. By using Liu’s inequality and the generalized resolvent operator technique associated with (A,η,m)-monotone operators, we prove the existence of solutions of the nonlinear operator inclusions and analyze the graphical convergence of the iterative sequences generated by the algorithm. Furthermore, we give some examples and an application solving the open question (2) due to Li and Lan (Adv. Nonlinear Var. Inequal. 15(1):99-109, 2012). Numerical simulation examples are given to illustrate the validity of our results.

MSC:49J40, 47H05, 65B05.

1 Introduction

It is well known that, as a mathematical programming tool, variational inequalities have been extended and generalized in various directions using novel and innovative techniques for solving a wide class of problems arising in different branches of pure and applied sciences. Nonlinear variational (operator) inclusions, complementarity problems and equilibrium problems are useful and important generalizations, which provide a general and unified framework for studying a wide range of interesting and important problems arising in mathematics, physics, engineering, economics, finance and corresponding optimization problems. In this setting, the proximal point algorithm has been studied by many authors; for the recent state of the art, see, for example, [1–28] and the references therein.

Recently, Verma [26] introduced a general framework for the over-relaxed A-proximal point algorithm based on the A-maximal monotonicity and pointed out that ‘the over-relaxed A-proximal point algorithm is of interest in the sense that it is quite application-oriented, but nontrivial in nature’. Pan et al. [19] introduced a general nonlinear mixed set-valued inclusion framework for the over-relaxed A-proximal point algorithm based on the (A,η)-accretive mapping and studied the approximation solvability of a general class of inclusion problems using the generalized resolvent operator technique associated with an (A,η)-accretive mapping. They also discussed the convergence of iterative sequences generated by the algorithm in q-uniformly smooth Banach spaces.

On the other hand, in order to generalize the (H,η)-monotonicity, A-monotonicity and other existing monotone operators, Lan [10] first introduced the concept of (A,η)-monotone (so-called (A,η,m)-maximal monotone [15]) operators, studied some properties of (A,η)-monotone operators and defined the resolvent operators associated with them. In 2008, Verma [25] developed a general framework for a hybrid proximal point algorithm using the notion of (A,η)-monotonicity and explored convergence analysis for this algorithm in the context of solving a class of nonlinear inclusion problems, along with some results on the resolvent operator corresponding to (A,η)-monotonicity. Very recently, Lan [13] introduced and studied a new class of hybrid (A,η,m)-proximal point algorithms with errors for solving general nonlinear operator inclusion problems in Hilbert spaces based on the (A,η,m)-monotonicity framework. Furthermore, by using the generalized resolvent operator technique associated with (A,η,m)-monotone operators, the approximate solvability of operator inclusion problems and the convergence rate of the iterative sequences generated by the algorithm were discussed. Li and Lan [17] introduced and studied the over-relaxed (A,η)-proximal point algorithm framework for approximating the solutions of operator inclusions by using the generalized resolvent operator technique associated with (A,η)-monotone operators and by means of two different methods; some special cases and some open questions were also given. In [14], we introduced and studied a new general class of hybrid (A,η,m)-proximal point algorithm frameworks for finding the common solutions of nonlinear operator equations and fixed point problems of Lipschitz continuous operators in Hilbert spaces.
Further, by using the generalized resolvent operator technique associated with (A,η,m)-maximal monotone operators, we discussed the approximation solvability of operator equation problems and the convergence of iterative sequences generated by the algorithm frameworks.

Motivated and inspired by the above works, in this paper, we shall introduce and study a new class of over-relaxed proximal point algorithms for approximating the solvability of the following general nonlinear operator equation in a Hilbert space $\mathcal{H}$:

Find $x \in \mathcal{H}$ such that

$$0 \in A(f(x)) - g(x) + \rho M(f(x)), \tag{1.1}$$

where $A, f, g: \mathcal{H} \to \mathcal{H}$ are three nonlinear operators, $M: \mathcal{H} \to 2^{\mathcal{H}}$ is a set-valued monotone operator with $f(\mathcal{H}) \cap \operatorname{dom} M(\cdot) \neq \emptyset$ and $f(\mathcal{H}) \cap \operatorname{dom} A(\cdot) \neq \emptyset$, $2^{\mathcal{H}}$ denotes the family of all nonempty subsets of $\mathcal{H}$, and $\rho$ is a positive constant.

Problem (1.1) can be written as

$$f(x) - R^{A,\eta}_{\rho,M}\big(g(x)\big) = 0, \tag{1.2}$$

where the resolvent operator $R^{A,\eta}_{\rho,M} = (A + \rho M)^{-1}$ and $\eta: \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ is a nonlinear operator.
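The equivalence of (1.1) and (1.2) can be checked numerically in a simple linear model. In the Python sketch below (all matrices are illustrative choices, not data from the paper), the residual of the inclusion (1.1) is exactly $A + \rho M$ applied to the residual of (1.2), so both vanish at the same points:

```python
import numpy as np

# Minimal sketch in H = R^2 with single-valued linear operators
# (all matrices below are illustrative choices, not from the paper).
rng = np.random.default_rng(0)
Sa = np.array([[2.0, 0.3], [0.1, 2.0]])   # A(x) = Sa @ x
Sm = np.array([[3.0, 1.0], [2.0, 3.0]])   # M(x) = {Sm @ x}, single-valued
F  = np.array([[1.5, 0.0], [0.0, 1.2]])   # f(x) = F @ x
G  = np.array([[0.4, 0.1], [0.0, 0.5]])   # g(x) = G @ x
rho = 0.5

x = rng.standard_normal(2)
# residual of inclusion (1.1): A(f(x)) - g(x) + rho*M(f(x))
r1 = Sa @ (F @ x) - G @ x + rho * (Sm @ (F @ x))
# residual of the resolvent form (1.2): f(x) - (A + rho*M)^{-1}(g(x))
r2 = F @ x - np.linalg.solve(Sa + rho * Sm, G @ x)
# r1 equals (A + rho*M) applied to r2, so both vanish simultaneously
assert np.allclose(r1, (Sa + rho * Sm) @ r2)
```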

Remark 1.1 For appropriate and suitable choices of $A$, $f$, $g$, $M$, $\eta$ and $\mathcal{H}$, one can see that a number of general classes of problems of variational character, including minimization or maximization (whether constrained or not) of functions, variational problems and minimax problems, are special cases of problems (1.1) and (1.2). For more details, see [1–5, 14, 16–18, 22–25, 28] and the references therein, and the following examples.

Example 1.1 If $g = A$, then problem (1.1) is equivalent to finding $x \in \mathcal{H}$ such that

$$0 \in A(f(x)) - A(x) + \rho M(f(x)), \tag{1.3}$$

which was studied by Li [9].

Further, problem (1.3) was considered by Lan [13], Li and Lan [17] and Verma [25, 26] when $f \equiv I$, the identity operator, in (1.3).

Example 1.2 Suppose that $A: \mathcal{H} \to \mathcal{H}$ is $r$-strongly $\eta$-monotone, and that $F: \mathcal{H} \to \mathbb{R}$ is locally Lipschitz such that $\partial F$, the subdifferential, is $m$-relaxed $\eta$-monotone with $r - m > 0$. It is easy to see that

$$\big\langle x^* - y^*, \eta(x, y) \big\rangle \geq (r - m)\|x - y\|^2,$$

where $x^* \in A(x) + \partial F(x)$ and $y^* \in A(y) + \partial F(y)$ for all $x, y \in \mathcal{H}$. Thus, $A + \partial F$ is $\eta$-pseudomonotone, which is indeed $\eta$-maximal monotone. This is equivalent to stating that $A + \partial F$ is $(A,\eta,m)$-maximal monotone (see [3]), and with $M = A + \partial F$ problem (1.1) becomes finding $x \in \mathcal{H}$ such that

$$g(x) \in (1 + \rho)A(f(x)) + \rho\,\partial F(f(x)).$$

Moreover, by using the generalized resolvent operator technique associated with $(A,\eta,m)$-monotone operators, the Lipschitz continuity of the generalized resolvent operator and Liu’s inequality [29], we also discuss the existence of solutions of the nonlinear operator inclusion (1.1) and the graphical convergence of the iterative sequences generated by the algorithm. Furthermore, we give some (numerical simulation) examples and applications for solving the open question (2) in [17] and for illustrating the validity of the main results presented in this paper, using Matlab 7.0.

2 Preliminaries

In order to obtain our main results, we first recall some preliminaries.

Definition 2.1 Let $A, f: \mathcal{H} \to \mathcal{H}$ and $\eta: \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be single-valued operators, and let $M: \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued operator. Then

(i) $f$ is $\delta$-strongly monotone if there exists a constant $\delta > 0$ such that

$$\big\langle f(x) - f(y), x - y \big\rangle \geq \delta \|x - y\|^2 \quad \forall x, y \in \mathcal{H},$$

which implies that $f$ is $\delta$-expanding, i.e.,

$$\|f(x) - f(y)\| \geq \delta \|x - y\| \quad \forall x, y \in \mathcal{H};$$

(ii) $A$ is $r$-strongly $\eta$-monotone if there exists a positive constant $r$ such that

$$\big\langle A(x) - A(y), \eta(x, y) \big\rangle \geq r \|x - y\|^2 \quad \forall x, y \in \mathcal{H};$$

(iii) $A$ is $\beta$-Lipschitz continuous if there exists a constant $\beta > 0$ such that

$$\|A(x) - A(y)\| \leq \beta \|x - y\| \quad \forall x, y \in \mathcal{H};$$

(iv) $\eta$ is $\tau$-Lipschitz continuous if there exists a constant $\tau > 0$ such that

$$\|\eta(x, y)\| \leq \tau \|x - y\| \quad \forall x, y \in \mathcal{H};$$

(v) $M$ is $m$-relaxed $\eta$-monotone if there exists a constant $m > 0$ such that for all $x, y \in \mathcal{H}$, $u \in M(x)$ and $v \in M(y)$,

$$\langle u - v, \eta(x, y) \rangle \geq -m \|x - y\|^2;$$

(vi) $M$ is said to be $(A,\eta,m)$-monotone if $M$ is $m$-relaxed $\eta$-monotone and $R(A + \rho M) = \mathcal{H}$ for every $\rho > 0$.
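As a concrete check of (i) with $\eta(x, y) = x - y$, the following Python sketch verifies the strong monotonicity constant of a linear operator; the matrix is an illustrative choice, not from the paper:

```python
import numpy as np

# Sketch: for a linear operator A(x) = S @ x, <A(x)-A(y), x-y> equals
# (x-y)^T S (x-y) >= lambda_min((S+S^T)/2) * ||x-y||^2, so A is r-strongly
# monotone (with eta(x,y) = x-y) for r = smallest eigenvalue of the
# symmetric part of S.  The matrix S is an illustrative choice.
S = np.array([[3.0, 1.0], [2.0, 3.0]])
r = np.linalg.eigvalsh((S + S.T) / 2).min()   # = 1.5 here
rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    lhs = (S @ (x - y)) @ (x - y)
    assert lhs >= r * np.linalg.norm(x - y) ** 2 - 1e-9
```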

Remark 2.1 (1) For appropriate and suitable choices of $m$, $A$, $\eta$ and $\mathcal{H}$, one can see that the $(A,\eta,m)$-monotonicity (so-called $(A,\eta)$-monotonicity [10], $(A,\eta)$-maximal relaxed monotonicity [3], $(A,\eta,m)$-maximal monotonicity [15]) includes the $(H,\eta)$-monotonicity, $H$-monotonicity, $A$-monotonicity, maximal $\eta$-monotonicity and classical maximal monotonicity (see [1–3, 9–11, 13–17, 23–27]). Further, we note that the idea of this extension is close to the idea of extending convexity to invexity introduced by Hanson in [30], and the problem studied in this paper can be used in invex optimization and also for solving variational-like inequalities as a direction for further applied research; see related works in [21, 22] and the references therein.

(2) Moreover, the operator $M$ is said to be generalized maximal monotone (GMM-monotone, in short) if:

(i) $M$ is monotone;

(ii) $A + \rho M$ is maximal monotone or pseudomonotone for $\rho > 0$.

Example 2.1 ([3])

Suppose that $A: \mathcal{H} \to \mathcal{H}$ is $r$-strongly $\eta$-monotone, and that $f: \mathcal{H} \to \mathbb{R}$ is locally Lipschitz such that $\partial f$, the subdifferential, is $m$-relaxed $\eta$-monotone with $r - m > 0$. Clearly, we have

$$\big\langle x^* - y^*, \eta(x, y) \big\rangle \geq (r - m)\|x - y\|^2,$$

where $x^* \in A(x) + \partial f(x)$ and $y^* \in A(y) + \partial f(y)$ for all $x, y \in \mathcal{H}$. Thus, $A + \partial f$ is $\eta$-pseudomonotone, which is indeed maximal $\eta$-monotone. This is equivalent to stating that $A + \partial f$ is $(A,\eta,m)$-monotone.

Lemma 2.1 ([10])

Let $\eta: \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $\tau$-Lipschitz continuous, let $A: \mathcal{H} \to \mathcal{H}$ be an $r$-strongly $\eta$-monotone operator, and let $M: \mathcal{H} \to 2^{\mathcal{H}}$ be an $(A,\eta,m)$-monotone operator with $m < r$. Then the resolvent operator $R^{A,\eta}_{\rho,M}: \mathcal{H} \to \mathcal{H}$ defined by

$$R^{A,\eta}_{\rho,M}(x) = (A + \rho M)^{-1}(x) \quad \forall x \in \mathcal{H}$$

is $\frac{\tau}{r - \rho m}$-Lipschitz continuous for $0 < \rho < \frac{r}{m}$.
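The Lipschitz estimate of Lemma 2.1 can be observed numerically. The following Python sketch takes $\eta(x, y) = x - y$ (so $\tau = 1$), $A = rI$, and a linear $m$-relaxed monotone $M$; all constants and matrices are illustrative choices:

```python
import numpy as np

# Sketch of Lemma 2.1 with eta(x,y) = x - y (tau = 1): take A = r*I and
# M(x) = (S - m*I) x with S symmetric positive semidefinite, which makes M
# m-relaxed monotone.  The resolvent (A + rho*M)^{-1} should then be
# 1/(r - rho*m)-Lipschitz.  All data below are illustrative.
r, m, rho = 2.0, 1.0, 0.5                # rho in (0, r/m)
Q = np.array([[1.0, 0.5], [0.5, 2.0]])   # symmetric positive definite
S = Q @ Q.T                              # symmetric PSD
J = np.linalg.inv(r * np.eye(2) + rho * (S - m * np.eye(2)))  # resolvent

rng = np.random.default_rng(2)
bound = 1.0 / (r - rho * m)
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert np.linalg.norm(J @ (x - y)) <= bound * np.linalg.norm(x - y) + 1e-9
```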

3 Graph-convergent analysis

In this section, we introduce and study a new class of over-relaxed proximal point algorithms for solving the general nonlinear operator equation (1.1) within the $(A,\eta,m)$-monotonicity framework in Hilbert spaces. Further, by using the generalized resolvent operator technique associated with $(A,\eta,m)$-monotone operators, the Lipschitz continuity of the resolvent operator and Liu’s inequality, we discuss the existence of solutions of the nonlinear operator inclusion problem and the graphical convergence of the iterative sequences generated by the algorithm.

Definition 3.1 Let $\mathcal{H}$ be a real Hilbert space and let $M_n, M: \mathcal{H} \to 2^{\mathcal{H}}$ be $(A,\eta,m)$-monotone operators on $\mathcal{H}$ for $n = 0, 1, 2, \ldots$. Let $A: \mathcal{H} \to \mathcal{H}$ be $r$-strongly $\eta$-monotone and $\tau$-Lipschitz continuous. The sequence $\{M_n\}$ is graph-convergent to $M$, denoted by $M_n \xrightarrow{A\text{-}G} M$, if for every $(x, y) \in \operatorname{graph}(M)$ there exists a sequence $(x_n, y_n) \in \operatorname{graph}(M_n)$ such that

$$x_n \to x, \quad y_n \to y \quad \text{as } n \to \infty.$$

By the same method as in Theorem 2.1 of [31], we have the following result.

Lemma 3.1 Let $M_n, M: \mathcal{H} \to 2^{\mathcal{H}}$ be $(A,\eta,m)$-monotone operators on $\mathcal{H}$ for $n = 0, 1, 2, \ldots$. Then $M_n \xrightarrow{A\text{-}G} M$ if and only if

$$R^{A,\eta}_{\rho,M_n}(x) \to R^{A,\eta}_{\rho,M}(x) \quad \forall x \in \mathcal{H},$$

where $R^{A,\eta}_{\rho,M_n} = (A + \rho M_n)^{-1}$, $R^{A,\eta}_{\rho,M} = (A + \rho M)^{-1}$, $\rho > 0$ is a constant, and $A: \mathcal{H} \to \mathcal{H}$ is $r$-strongly $\eta$-monotone and $\tau$-Lipschitz continuous.
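For linear operators, the resolvent characterization of Lemma 3.1 is easy to observe. In the Python sketch below (illustrative data, not from the paper), $M_n = (1 + 1/n)M \to M$ and the resolvents converge pointwise, with error decaying like $O(1/n)$:

```python
import numpy as np

# Sketch of Lemma 3.1 for linear operators: with M_n = (1 + 1/n) M -> M,
# the resolvents (A + rho*M_n)^{-1}(x) converge pointwise to
# (A + rho*M)^{-1}(x).  All data below are illustrative choices.
A = 2.0 * np.eye(2)
M = np.array([[3.0, 1.0], [2.0, 3.0]])
rho = 0.5
x = np.array([1.0, -2.0])

R = np.linalg.solve(A + rho * M, x)      # limit resolvent value
errs = []
for n in (10, 100, 1000):
    Mn = (1 + 1.0 / n) * M
    errs.append(np.linalg.norm(np.linalg.solve(A + rho * Mn, x) - R))
# errors shrink roughly like 1/n
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-3
```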

Lemma 3.2 ([29])

Let $\{a_n\}$, $\{b_n\}$, $\{c_n\}$ be three nonnegative real sequences satisfying the following condition: there exists a natural number $n_0$ such that

$$a_{n+1} \leq (1 - t_n) a_n + b_n t_n + c_n \quad \forall n \geq n_0,$$

where $t_n \in [0, 1]$, $\sum_{n=0}^{\infty} t_n = \infty$, $\lim_{n \to \infty} b_n = 0$ and $\sum_{n=0}^{\infty} c_n < \infty$. Then $a_n \to 0$ as $n \to \infty$.
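A quick numerical illustration of Lemma 3.2, with illustrative sequences $t_n = 1/2$, $b_n = 1/(n+1)$ and $c_n = 2^{-n}$ that satisfy the stated conditions:

```python
# Sketch of Lemma 3.2 (Liu's inequality): a sequence obeying
# a_{n+1} <= (1 - t_n) a_n + b_n t_n + c_n with t_n in [0,1],
# sum t_n = infinity, b_n -> 0 and sum c_n < infinity must tend to 0.
# The concrete sequences below are illustrative choices.
a = 1.0
for n in range(2000):
    t = 0.5                 # constant t_n: the series sum t_n diverges
    b = 1.0 / (n + 1)       # b_n -> 0
    c = 0.5 ** n            # summable c_n
    a = (1 - t) * a + b * t + c   # worst case: equality in the recursion
assert a < 1e-2             # a_n has (slowly) decayed toward 0
```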

Algorithm 3.1 Step 1. Choose an arbitrary initial point $x_0 \in \mathcal{H}$.

Step 2. Choose sequences $\{\alpha_n\}$, $\{\varepsilon_n\}$, $\{\rho_n\}$ and $\{e_n\}$ such that for $n \geq 0$, $\{\alpha_n\}, \{\rho_n\} \subset [0, \infty)$ and $\{\varepsilon_n\} \subset (0, 1)$ are three sequences satisfying

$$\alpha = \limsup_{n \to \infty} \alpha_n < 1, \quad \sum_{n=0}^{\infty} \varepsilon_n < \infty, \quad \rho_n \to \rho \in \Big(0, \frac{r}{m}\Big),$$

and $\{e_n\}$ is an error sequence in $\mathcal{H}$, taking into account a possible inexact computation of the resolvent operator point, which satisfies $\sum_{n=0}^{\infty} \|e_n\| < \infty$.

Step 3. Let $\{x_n\} \subset \mathcal{H}$ be a sequence generated by the following iterative procedure:

$$A(f(x_{n+1})) = (1 - \alpha_n) A(f(x_n)) + \alpha_n y_n + e_n, \tag{3.1}$$

and let $y_n$ satisfy

$$\big\|y_n - A\big(R^{A,\eta}_{\rho_n,M_n}(g(x_n))\big)\big\| \leq \varepsilon_n \|y_n - A(f(x_n))\|,$$

where $n \geq 0$, $R^{A,\eta}_{\rho_n,M_n} = (A + \rho_n M_n)^{-1}$ and $\rho_n > 0$ is a constant.

Step 4. If $x_n$ and $y_n$ ($n = 0, 1, 2, \ldots$) satisfy (3.1) to sufficient accuracy, stop; otherwise, set $n := n + 1$ and return to Step 2.
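The steps above can be sketched as a short, runnable Python routine for the special case $f = I$, $g = A$ (cf. Example 4.1 below), with $y_n$ computed exactly so the accuracy test on $y_n$ holds trivially; the operators, constants and error sequence are illustrative choices, not data from the paper:

```python
import numpy as np

# Minimal runnable sketch of iteration (3.1) in H = R^2 with f = I, g = A.
# A, M, rho, alpha_n and e_n are illustrative choices.
A  = 2.0 * np.eye(2)                     # A(x) = 2x: 2-strongly monotone
M  = np.array([[3.0, 1.0], [2.0, 3.0]])  # monotone linear M
rho, alpha = 0.5, 0.7
x = np.array([0.5, 1.2])                 # Step 1: initial point
for n in range(200):
    e_n = (0.5 ** n) * np.array([0.1, 0.1])        # summable error terms
    # y_n = A(R_{rho,M}^{A}(A(x_n))), computed exactly
    y_n = A @ np.linalg.solve(A + rho * M, A @ x)
    # over-relaxed step: A(x_{n+1}) = (1-alpha)A(x_n) + alpha*y_n + e_n
    x = np.linalg.solve(A, (1 - alpha) * (A @ x) + alpha * y_n + e_n)
# x_n converges to the solution x* = 0 of the inclusion 0 in M(x)
assert np.linalg.norm(x) < 1e-8
```

The update is a fixed-point contraction here, so the iterates shrink geometrically toward the zero of $M$.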

Algorithm 3.2 For an arbitrary initial point $x_0 \in \mathcal{H}$, the sequence $\{x_n\} \subset \mathcal{H}$ can be generated by the following iterative procedure:

$$A(f(x_{n+1})) = (1 - \alpha_n) A(f(x_n)) + \alpha_n y_n + e_n, \qquad \big\|y_n - A\big(R^{A,\eta}_{\rho_n,M_n}(A(x_n))\big)\big\| \leq \varepsilon_n \|y_n - A(f(x_n))\|,$$

where $n \geq 0$, $\{\alpha_n\}, \{\rho_n\} \subset [0, \infty)$ and $\{\varepsilon_n\} \subset (0, 1)$ are three sequences satisfying

$$\alpha = \limsup_{n \to \infty} \alpha_n < 1, \quad \sum_{n=0}^{\infty} \varepsilon_n < \infty, \quad \rho_n \to \rho \in \Big(0, \frac{r}{m}\Big),$$

and $\{e_n\}$ is an error sequence in $\mathcal{H}$, taking into account a possible inexact computation of the resolvent operator point, which satisfies $\sum_{n=0}^{\infty} \|e_n\| < \infty$.

Remark 3.1 Indeed, Algorithm 3.2 becomes Algorithm 3.1 of [9] when $M_n = M$, $e_n \equiv 0$ and $\alpha_n \equiv 1$ for all $n \geq 0$, which includes the algorithm of Theorem 3.2 in [26].

Theorem 3.1 Assume that $\mathcal{H}$ is a real Hilbert space, $\eta: \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ is $\tau$-Lipschitz continuous, $g: \mathcal{H} \to \mathcal{H}$ is $\kappa$-Lipschitz continuous, $A: \mathcal{H} \to \mathcal{H}$ is $\varsigma$-Lipschitz continuous and $r$-strongly $\eta$-monotone, and $f: \mathcal{H} \to \mathcal{H}$ is $\beta$-Lipschitz continuous and $\delta$-strongly monotone. Let $M_n, M: \mathcal{H} \to 2^{\mathcal{H}}$ be $(A,\eta,m)$-monotone operators with $m < r$, $f(\mathcal{H}) \cap \operatorname{dom} M(\cdot) \neq \emptyset$, $f(\mathcal{H}) \cap \operatorname{dom} A(\cdot) \neq \emptyset$, $f(\mathcal{H}) \cap \operatorname{dom} M_n(\cdot) \neq \emptyset$ for $n = 0, 1, 2, \ldots$, and $M_n \xrightarrow{A\text{-}G} M$. In addition, suppose that

(i) the iterative sequence $\{x_n\}$ generated by Algorithm 3.1 is bounded;

(ii) there exists a constant $\rho > 0$ such that

$$\begin{cases} \sqrt{1 - 2\delta + \beta^2} + \dfrac{\kappa\tau}{r - \rho m} < 1, \\[2mm] r\delta > \beta\varsigma\tau(1 - \alpha), \\[2mm] \rho < \dfrac{r}{m} - \dfrac{\alpha\kappa\varsigma\tau^2}{m[r\delta - \beta\varsigma\tau(1 - \alpha)]}. \end{cases}$$

Then

(1) the general nonlinear operator equation (1.1) based on the $(A,\eta,m)$-monotonicity framework has a unique solution $x^*$ in $\mathcal{H}$;

(2) the sequence $\{x_n\}$ converges linearly to the solution $x^*$.

Proof Firstly, for any given $\rho > 0$, define $F: \mathcal{H} \to \mathcal{H}$ by

$$F(x) = x - f(x) + R^{A,\eta}_{\rho,M}\big(g(x)\big) \quad \forall x \in \mathcal{H}.$$

By the assumptions of the theorem and Lemma 2.1, for all $x, y \in \mathcal{H}$ we have

$$\|F(x) - F(y)\| \leq \big\|x - y - [f(x) - f(y)]\big\| + \big\|R^{A,\eta}_{\rho,M}(g(x)) - R^{A,\eta}_{\rho,M}(g(y))\big\| \leq \vartheta \|x - y\|,$$

where $\vartheta = \sqrt{1 - 2\delta + \beta^2} + \frac{\kappa\tau}{r - \rho m}$. Indeed, the $\delta$-strong monotonicity and $\beta$-Lipschitz continuity of $f$ give $\|x - y - [f(x) - f(y)]\|^2 \leq (1 - 2\delta + \beta^2)\|x - y\|^2$, while the $\frac{\tau}{r - \rho m}$-Lipschitz continuity of the resolvent and the $\kappa$-Lipschitz continuity of $g$ bound the second term. It follows from condition (ii) that $0 < \vartheta < 1$, and so $F$ is a contraction, which shows that $F$ has a unique fixed point in $\mathcal{H}$.

Next, we prove conclusion (2). Let $x^*$ be a solution of problem (1.1). Then, for all $\rho_n > 0$ and $n \geq 0$, it follows from Lemma 3.1 that

$$A(f(x^*)) = (1 - \alpha_n) A(f(x^*)) + \alpha_n A\big(R^{A,\eta}_{\rho_n,M}(g(x^*))\big), \tag{3.2}$$

$$\begin{aligned} \big\|A\big(R^{A,\eta}_{\rho_n,M_n}(g(x_n))\big) - A\big(R^{A,\eta}_{\rho_n,M}(g(x^*))\big)\big\| &\leq \big\|A\big(R^{A,\eta}_{\rho_n,M_n}(g(x_n))\big) - A\big(R^{A,\eta}_{\rho_n,M_n}(g(x^*))\big)\big\| \\ &\quad + \big\|A\big(R^{A,\eta}_{\rho_n,M_n}(g(x^*))\big) - A\big(R^{A,\eta}_{\rho_n,M}(g(x^*))\big)\big\| \\ &\leq \frac{\varsigma\tau}{r - \rho_n m}\|g(x_n) - g(x^*)\| + \varsigma h_n, \end{aligned} \tag{3.3}$$

where

$$h_n = \big\|R^{A,\eta}_{\rho_n,M_n}(g(x^*)) - R^{A,\eta}_{\rho_n,M}(g(x^*))\big\| \to 0. \tag{3.4}$$

Let

$$A(f(z_{n+1})) = (1 - \alpha_n) A(f(x_n)) + \alpha_n A\big(R^{A,\eta}_{\rho_n,M_n}(g(x_n))\big) + e_n \quad \forall n \geq 0.$$

By the assumptions of the theorem, (3.2) and (3.3), we now find the estimate

$$\begin{aligned} \|A(f(z_{n+1})) - A(f(x^*))\| &\leq (1 - \alpha_n)\|A(f(x_n)) - A(f(x^*))\| \\ &\quad + \alpha_n \big\|A\big(R^{A,\eta}_{\rho_n,M_n}(g(x_n))\big) - A\big(R^{A,\eta}_{\rho_n,M}(g(x^*))\big)\big\| + \|e_n\| \\ &\leq (1 - \alpha_n)\|A(f(x_n)) - A(f(x^*))\| + \frac{\alpha_n \varsigma\tau}{r - \rho_n m}\|g(x_n) - g(x^*)\| + \alpha_n \varsigma h_n + \|e_n\| \\ &\leq (1 - \alpha_n)\|A(f(x_n)) - A(f(x^*))\| + \frac{\alpha_n \varsigma\tau\kappa}{r - \rho_n m}\|x_n - x^*\| + \alpha_n \varsigma h_n + \|e_n\| \\ &\leq \theta_n \|x_n - x^*\| + \alpha_n \varsigma h_n + \|e_n\|, \end{aligned} \tag{3.5}$$

where

$$\theta_n = \beta\varsigma(1 - \alpha_n) + \frac{\alpha_n \varsigma\tau\kappa}{r - \rho_n m}.$$

Since

$$A(f(x_{n+1})) = (1 - \alpha_n) A(f(x_n)) + \alpha_n y_n + e_n$$

and

$$A(f(x_{n+1})) - A(f(x_n)) = \alpha_n\big(y_n - A(f(x_n))\big) + e_n,$$

it follows that

$$\begin{aligned} \|A(f(x_{n+1})) - A(f(z_{n+1}))\| &= \alpha_n \big\|y_n - A\big(R^{A,\eta}_{\rho_n,M_n}(g(x_n))\big)\big\| \leq \alpha_n \varepsilon_n \|y_n - A(f(x_n))\| \\ &\leq \varepsilon_n \|A(f(x_{n+1})) - A(f(x_n))\| + \varepsilon_n \|e_n\|. \end{aligned} \tag{3.6}$$

Now, using (3.5) and (3.6), we estimate

$$\begin{aligned} \|A(f(x_{n+1})) - A(f(x^*))\| &\leq \|A(f(x_{n+1})) - A(f(z_{n+1}))\| + \|A(f(z_{n+1})) - A(f(x^*))\| \\ &\leq \varepsilon_n \|A(f(x_{n+1})) - A(f(x_n))\| + \theta_n \|x_n - x^*\| + \alpha_n \varsigma h_n + (1 + \varepsilon_n)\|e_n\|. \end{aligned}$$

This implies that

$$\|A(f(x_{n+1})) - A(f(x^*))\| \leq \frac{\theta_n}{1 - \varepsilon_n}\|x_n - x^*\| + \frac{\alpha_n \varsigma h_n}{1 - \varepsilon_n} + \frac{1 + \varepsilon_n}{1 - \varepsilon_n}\|e_n\|. \tag{3.7}$$

It follows from the $\tau$-Lipschitz continuity of $\eta$, the $r$-strong $\eta$-monotonicity of $A$ and the $\delta$-strong monotonicity (hence $\delta$-expansiveness) of $f$ that

$$\begin{aligned} \|A(f(x_{n+1})) - A(f(x^*))\| \cdot \tau\|f(x_{n+1}) - f(x^*)\| &\geq \|A(f(x_{n+1})) - A(f(x^*))\| \cdot \big\|\eta\big(f(x_{n+1}), f(x^*)\big)\big\| \\ &\geq \big\langle A(f(x_{n+1})) - A(f(x^*)),\, \eta\big(f(x_{n+1}), f(x^*)\big) \big\rangle \\ &\geq r\|f(x_{n+1}) - f(x^*)\|^2, \end{aligned}$$

i.e.,

$$\|A(f(x_{n+1})) - A(f(x^*))\| \geq \frac{r}{\tau}\|f(x_{n+1}) - f(x^*)\| \geq \frac{r\delta}{\tau}\|x_{n+1} - x^*\|.$$

Combining this with (3.7), we obtain

$$\begin{aligned} \|x_{n+1} - x^*\| &\leq \frac{\tau\theta_n}{r\delta(1 - \varepsilon_n)}\|x_n - x^*\| + \frac{\alpha_n \tau\varsigma h_n}{r\delta(1 - \varepsilon_n)} + \frac{\tau(1 + \varepsilon_n)}{r\delta(1 - \varepsilon_n)}\|e_n\| \\ &= (1 - t_n)\|x_n - x^*\| + b_n t_n + c_n, \end{aligned} \tag{3.8}$$

where

$$t_n = 1 - \frac{\tau\theta_n}{r\delta(1 - \varepsilon_n)}, \quad b_n = \frac{\alpha_n \tau\varsigma h_n}{r\delta(1 - \varepsilon_n) - \tau\theta_n}, \quad c_n = \frac{\tau(1 + \varepsilon_n)}{r\delta(1 - \varepsilon_n)}\|e_n\|.$$

Thus, it follows from (3.4), condition (ii), Lemma 3.2 and (3.8) that $\{x_n\}$ converges linearly to the solution $x^*$. This completes the proof. □

Remark 3.2 Condition (ii) of Theorem 3.1 holds for suitable values of the constants, for example, $\delta = 1.90$, $\beta = 1.9017$, $\kappa = 0.05$, $\tau = 0.3317$, $r = 2.24$, $m = 2.2015$, $\alpha = 0.35$, $\varsigma = 6.7831$ and $\rho = 0.8687$.
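These constants can be checked directly; a minimal Python verification (the variable `sigma` stands for $\varsigma$) is:

```python
from math import sqrt

# Numerical check of condition (ii) of Theorem 3.1 with the constants of
# Remark 3.2 (values taken from the remark itself).
delta, beta, kappa, tau = 1.90, 1.9017, 0.05, 0.3317
r, m, alpha, sigma, rho = 2.24, 2.2015, 0.35, 6.7831, 0.8687

assert 0 < rho < r / m
# first inequality of condition (ii)
assert sqrt(1 - 2 * delta + beta ** 2) + kappa * tau / (r - rho * m) < 1
# second inequality
assert r * delta > beta * sigma * tau * (1 - alpha)
# third inequality
assert rho < r / m - alpha * kappa * sigma * tau ** 2 / (
    m * (r * delta - beta * sigma * tau * (1 - alpha)))
```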

From Theorem 3.1, we have the following results.

Theorem 3.2 Let $A$, $f$, $M$, $\eta$ and $\mathcal{H}$ be the same as in Theorem 3.1. If the iterative sequence $\{x_n\}$ generated by Algorithm 3.2 is bounded and there exists a constant $\rho > 0$ such that

$$\begin{cases} \sqrt{1 - 2\delta + \beta^2} + \dfrac{\varsigma\tau}{r - \rho m} < 1, \\[2mm] r\delta > \beta\varsigma\tau(1 - \alpha), \\[2mm] \rho < \dfrac{r}{m} - \dfrac{\alpha\varsigma^2\tau^2}{m[r\delta - \beta\varsigma\tau(1 - \alpha)]}, \end{cases}$$

then the sequence $\{x_n\}$ converges linearly to the unique solution $x^*$ of problem (1.3).

Remark 3.3 The conditions in Theorem 3.2 are weaker than those in Theorem 3.1 of [9], and the (graphical) convergence analysis is carried out in line with (ii) of Remark 3.2 in [9]. That is, the Lipschitz continuity of the inverse operator $M^{-1}$ and the inequality condition (the so-called relative cocoercivity, see [4]) are replaced by inequality (3.3).

Remark 3.4 It is easy to see that the corresponding results can be obtained if $f \equiv I$, or $e_n \equiv 0$, or $M_n = M$ in Algorithms 3.1 and 3.2, or if $M$ and $M_n$ for all $n \geq 0$ are $(H,\eta)$-monotone, $H$-monotone, $A$-monotone, maximal $\eta$-monotone or classical maximal monotone, respectively. Therefore, the main results presented in this paper improve and generalize the corresponding results of [1, 2, 9, 13, 17, 25, 26].

Remark 3.5 Clearly, it follows from Algorithms 3.1 and 3.2 that the sequence $\{y_n\}$ serves to control the iterative sequence $\{x_n\}$ and can be optimized to increase the convergence rate, which is worth studying in the future.

4 Some examples with applications

In this section, we shall give the following examples to illustrate the validity of our main results.

Example 4.1 Let $\mathcal{H}$ be a real Hilbert space, let $\eta: \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $\tau$-Lipschitz continuous, let $A: \mathcal{H} \to \mathcal{H}$ be $\sigma$-Lipschitz continuous and $r$-strongly $\eta$-monotone, and for $n = 0, 1, 2, \ldots$, let $M_n, M: \mathcal{H} \to 2^{\mathcal{H}}$ be $(A,\eta,m)$-monotone operators with $m < r$ and $M_n \xrightarrow{A\text{-}G} M$. For an arbitrary initial point $x_0 \in \mathcal{H}$, suppose that the sequence $\{x_n\} \subset \mathcal{H}$ generated by the following iterative procedure is bounded:

$$A(x_{n+1}) = (1 - \alpha_n) A(x_n) + \alpha_n y_n + e_n, \qquad \big\|y_n - A\big(R^{A,\eta}_{\rho_n,M_n}(A(x_n))\big)\big\| \leq \varepsilon_n \|y_n - A(x_n)\|, \tag{4.1}$$

where $n \geq 0$, $\{\alpha_n\}, \{\rho_n\} \subset [0, \infty)$ and $\{\varepsilon_n\} \subset (0, 1)$ are three sequences satisfying

$$\alpha = \limsup_{n \to \infty} \alpha_n < 1, \quad \sum_{n=0}^{\infty} \varepsilon_n < \infty, \quad \rho_n \to \rho \in \Big(0, \frac{r}{m}\Big),$$

and $\{e_n\}$ is an error sequence in $\mathcal{H}$, taking into account a possible inexact computation of the resolvent operator point, which satisfies $\sum_{n=0}^{\infty} \|e_n\| < \infty$. In addition, if there exists a constant $\rho > 0$ such that

$$\rho < \frac{r}{m} - \max\Big\{\frac{\sigma\tau}{m},\ \frac{\alpha\sigma^2\tau^2}{m[r - \sigma\tau(1 - \alpha)]}\Big\}, \quad r > \sigma\tau(1 - \alpha),$$

then the sequence $\{x_n\}$ converges linearly to a solution $x^*$ of the following nonlinear inclusion problem:

Find $x \in \mathcal{H}$ such that

$$0 \in M(x).$$

Proof The result follows as in the proof of Theorem 3.1, and so the details are omitted. □

Remark 4.1 The corresponding results can be obtained if $e_n \equiv 0$ in (4.1) for all $n \geq 0$, or if the element $y_n$ in (4.1) satisfies, respectively, one of the following inequalities:

$$\big\|y_n - A\big(J^{M_n}_{\rho_n,A}(A(x_n))\big)\big\| \leq \varepsilon_n \|y_n - A(x_n)\|$$

and

$$\big\|y_n - J^{M_n}_{\rho_n}(x_n)\big\| \leq \varepsilon_n \|y_n - x_n\|,$$

where $M_n$ has the same monotonicity as $M$ under suitable conditions for all $n \geq 0$, $J^{M_n}_{\rho_n,A}(A(x_n)) = (A + \rho_n M_n)^{-1}(A(x_n))$ is the resolvent associated with $A$-maximal monotonicity, and $J^{M_n}_{\rho_n}(x_n) = (I + \rho_n M_n)^{-1}(x_n)$ is the resolvent associated with classical maximal monotonicity. Furthermore, it follows from Example 4.1 that the open question (2) in [17] is solved.

Example 4.2 Let $\mathcal{H} = \mathbb{R}^2$ and take the constants $\delta = 1.8984$, $\beta = 1.9022$, $\kappa = 0.05$, $\tau = 0.3302$, $r = 2.2241$, $m = 2.169$, $\varsigma = 6.814$, $\alpha = 0.38$ and $\rho = 0.8782$. Suppose that for any $n \geq 0$, $\rho_n = \frac{\rho n}{1 + n}$, $\alpha_n = \frac{\alpha n^2}{1 + n^2}$, $\varepsilon_n = \frac{1}{n^{1.4}}$, $e_n = \frac{e}{1 + n^{5.2}}$, $M_n = \frac{n^3}{n^3 + 2n - 1} M$ and

$$e = \begin{pmatrix} 0.29 \\ 0.37 \end{pmatrix}, \quad M = \begin{pmatrix} 3 & 1 \\ 2 & 3 \end{pmatrix}, \quad x_0 = \begin{pmatrix} 0.5 \\ 1.2 \end{pmatrix}, \quad f(x) = \begin{pmatrix} 3x_1 \\ 4x_2 \end{pmatrix},$$

$$g(x) = \begin{pmatrix} (2x_1 + x_2^2)\arctan x_1 \\ \dfrac{\arctan x_2}{3 + 2x_1^2} \end{pmatrix}, \quad A(x) = \begin{pmatrix} 2x_1 - 0.48(x_2 + \arctan x_1) \\ 3x_2 - 0.75(x_1 - \arctan x_2) \end{pmatrix}.$$

Then $A$ is $2.2241$-strongly $\eta$-monotone and the conditions in Theorem 3.1 hold. Further, the sequence $\{x_n\}$ converges linearly to a solution $x^* = (0.0000, 0.0000)^T$ of problem (1.1) under the termination tolerance $10^{-9}$.

Moreover, $x^*$ is also a fixed point of $I - f + R^{A,\eta}_{\rho,M} \circ g$, and the numerical simulation graphs of the sequences $\{x_n\}$ and $\{y_n\}$ generated by Algorithm 3.1 are given in Figure 1, obtained after 76 iterations. Further, when the controlling sequence $\{y_n\}$ is partly optimized, the corresponding graphs are shown in Figure 2, obtained after 46 iterations. The acceleration efficiency is thus 39.47%.
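As a hedged companion to this example, the following Python sketch applies the classical proximal iteration to the matrix $M$ of Example 4.2, viewed as a linear monotone operator on $\mathbb{R}^2$, and recovers the reported zero solution; the iteration count is an illustrative choice, and the nonlinear maps $f$, $g$, $A$ of the example are deliberately omitted:

```python
import numpy as np

# Companion sketch to Example 4.2: with M viewed as a linear monotone
# operator, the classical proximal iteration x_{n+1} = (I + rho*M)^{-1} x_n
# already drives x_n to the zero of M, matching x* = (0, 0).
M = np.array([[3.0, 1.0], [2.0, 3.0]])
rho = 0.8782                              # rho from the example
x = np.array([0.5, 1.2])                  # x_0 from the example
J = np.linalg.inv(np.eye(2) + rho * M)    # resolvent (I + rho*M)^{-1}
for _ in range(100):
    x = J @ x
# both eigenvalues of M have positive real part, so J is a contraction
assert np.linalg.norm(x) < 1e-9
```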

Figure 1. Numerical simulation with non-optimized condition.

Figure 2. Numerical simulation with optimized condition.

5 Conclusions

In this paper, we introduced and studied a new class of over-relaxed proximal point perturbed iterative algorithms for solving the following general nonlinear operator equation within the $(A,\eta,m)$-monotonicity framework in a Hilbert space $\mathcal{H}$:

$$0 \in A(f(x)) - g(x) + \rho M(f(x)),$$

where $A, f, g: \mathcal{H} \to \mathcal{H}$ are three nonlinear operators, $M: \mathcal{H} \to 2^{\mathcal{H}}$ is an $(A,\eta,m)$-monotone operator with $f(\mathcal{H}) \cap \operatorname{dom} M(\cdot) \neq \emptyset$ and $f(\mathcal{H}) \cap \operatorname{dom} A(\cdot) \neq \emptyset$, $2^{\mathcal{H}}$ denotes the family of all nonempty subsets of $\mathcal{H}$, and $\rho$ is a positive constant.

Further, by using the generalized resolvent operator technique associated with $(A,\eta,m)$-monotone operators, the Lipschitz continuity of the generalized resolvent operator and Liu’s inequality [29], we discussed the existence of a solution of the nonlinear operator equation and the graphical convergence of the iterative sequences generated by the algorithm.

Finally, we gave some examples with applications for solving the open question (2) in [17], together with numerical simulation examples, implemented in Matlab 7.0, illustrating the validity of the main results presented in this paper.

References

1. Agarwal RP, Verma RU: Role of relative A-maximal monotonicity in overrelaxed proximal point algorithm with applications. J. Optim. Theory Appl. 2009, 143(1):1–15. doi:10.1007/s10957-009-9554-z
2. Agarwal RP, Verma RU: General implicit variational inclusion problems based on A-maximal (m)-relaxed monotonicity (AMRM) frameworks. Appl. Math. Comput. 2009, 215:367–379. doi:10.1016/j.amc.2009.04.078
3. Agarwal RP, Verma RU: General system of (A,η)-maximal relaxed monotone variational inclusion problems based on generalized hybrid algorithms. Commun. Nonlinear Sci. Numer. Simul. 2010, 15(2):238–251. doi:10.1016/j.cnsns.2009.03.037
4. Agarwal RP, Verma RU: Relatively maximal monotone mappings and applications to general inclusions. Appl. Anal. 2012, 91(1):105–120. doi:10.1080/00036811.2010.538687
5. Agarwal RP, Verma RU: Super-relaxed (η)-proximal point algorithms, relaxed (η)-proximal point algorithms, linear convergence analysis, and nonlinear variational inclusions. Fixed Point Theory Appl. 2009, Article ID 957407
6. Cai LC, Lan HY, Zou YZ: Perturbed algorithms for solving nonlinear relaxed cocoercive operator equations with general A-monotone operators in Banach spaces. Commun. Nonlinear Sci. Numer. Simul. 2011, 16(10):3923–3932. doi:10.1016/j.cnsns.2011.01.024
7. Chen PW, Gui CF: Linear convergence analysis of the use of gradient projection methods on total variation problems. Comput. Optim. Appl. 2013, 54(2):283–315. doi:10.1007/s10589-011-9412-4
8. Ke YF, Ma CF: The convergence analysis of the projection methods for a system of generalized relaxed cocoercive variational inequalities in Hilbert spaces. Fixed Point Theory Appl. 2013, Article ID 189
9. Li F: On over-relaxed proximal point algorithms for generalized nonlinear operator equation with (A,η,m)-monotonicity framework. Int. J. Mod. Nonlinear Theory Appl. 2012, 1(3):67–72. doi:10.4236/ijmnta.2012.13009
10. Lan HY: A class of nonlinear (A,η)-monotone operator inclusion problems with relaxed cocoercive mappings. Adv. Nonlinear Var. Inequal. 2006, 9(2):1–11.
11. Lan HY, Cui YS, Fu Y: New approximation-solvability of general nonlinear operator inclusion couples involving (A,η,m)-resolvent operators and relaxed cocoercive type operators. Commun. Nonlinear Sci. Numer. Simul. 2012, 17(4):1844–1851. doi:10.1016/j.cnsns.2011.09.005
12. Lan HY, Cai LC: Variational convergence of a new proximal algorithm for nonlinear general A-monotone operator equation systems in Banach spaces. Nonlinear Anal. TMA 2009, 71(12):6194–6201. doi:10.1016/j.na.2009.06.012
13. Lan HY: On hybrid (A,η,m)-proximal point algorithm frameworks for solving general operator inclusion problems. J. Appl. Funct. Anal. 2012, 7(3):258–266.
14. Lan HY, Cai LC, Wu SL: General hybrid (A,η,m)-proximal point algorithm frameworks for finding common solutions of nonlinear operator equations and fixed point problems. Commun. Nonlinear Sci. Numer. Simul. 2013, 18(4):895–904. doi:10.1016/j.cnsns.2012.08.026
15. Lan HY: Sensitivity analysis for generalized nonlinear parametric (A,η,m)-maximal monotone operator inclusion systems with relaxed cocoercive type operators. Nonlinear Anal. TMA 2011, 74(2):386–395. doi:10.1016/j.na.2010.08.049
16. Lan HY: New proximal algorithms for a class of (A,η)-accretive variational inclusion problems with non-accretive set-valued mappings. J. Appl. Math. Comput. 2007, 25(1–2):255–267. doi:10.1007/BF02832351
17. Li F, Lan HY: Over-relaxed (A,η)-proximal point algorithm framework for approximating the solutions of operator inclusions. Adv. Nonlinear Var. Inequal. 2012, 15(1):99–109.
18. Li F, Lan HY, Cho YJ: Graphical approximation of common solutions to generalized nonlinear relaxed cocoercive operator equation systems with (A,η)-accretive mappings. Fixed Point Theory Appl. 2012, Article ID 14
19. Pan XB, Li HG, Xu AJ: The over-relaxed A-proximal point algorithm for general nonlinear mixed set-valued inclusion framework. Fixed Point Theory Appl. 2011, Article ID 840978
20. Salzo S, Villa S: Convergence analysis of a proximal Gauss-Newton method. Comput. Optim. Appl. 2012, 53(2):557–589. doi:10.1007/s10589-012-9476-9
21. Soleimani-damaneh M: Generalized invexity in separable Hilbert spaces. Topology 2009, 48(2–4):66–79. doi:10.1016/j.top.2009.11.004
22. Soleimani-damaneh M: Infinite (semi-infinite) problems to characterize the optimality of nonlinear optimization problems. Eur. J. Oper. Res. 2008, 188(1):49–56. doi:10.1016/j.ejor.2007.04.026
23. Verma RU: Generalized over-relaxed proximal algorithm based on A-maximal monotonicity framework and applications to inclusion problems. Math. Comput. Model. 2009, 49(7–8):1587–1594. doi:10.1016/j.mcm.2008.05.045
24. Verma RU: General over-relaxed proximal point algorithm involving A-maximal relaxed monotone mappings with applications. Nonlinear Anal. TMA 2009, 71(12):e1461–e1472. doi:10.1016/j.na.2009.01.184
25. Verma RU: A hybrid proximal point algorithm based on the (A,η)-maximal monotonicity framework. Appl. Math. Lett. 2008, 21:142–147. doi:10.1016/j.aml.2007.02.017
26. Verma RU: A general framework for the over-relaxed A-proximal point algorithm and applications to inclusion problems. Appl. Math. Lett. 2009, 22:698–703. doi:10.1016/j.aml.2008.05.001
27. Verma RU: General class of implicit variational inclusions and graph convergence on A-maximal relaxed monotonicity. J. Optim. Theory Appl. 2012, 155(1):196–214. doi:10.1007/s10957-012-0030-9
28. Wen DJ, Long XJ, Gong QF: Convergence analysis of projection methods for a new system of general nonconvex variational inequalities. Fixed Point Theory Appl. 2012, Article ID 59
29. Liu LS: Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 1995, 194:114–135. doi:10.1006/jmaa.1995.1289
30. Hanson MA: On sufficiency of Kuhn-Tucker conditions. J. Math. Anal. Appl. 1981, 80(2):545–550. doi:10.1016/0022-247X(81)90123-2
31. Verma RU: A generalization to variational convergence for operators. Adv. Nonlinear Var. Inequal. 2008, 11(2):97–101.


Acknowledgements

This work has been partially supported by the Open Research Fund of Artificial Intelligence of Key Laboratory of Sichuan Province (2012RYY04), Sichuan Province Youth Fund project (2011JTD0031), and the Cultivation Project of Sichuan University of Science and Engineering (2011PY01).

Author information

Correspondence to Heng-you Lan.

Additional information

Competing interests

The author declares that he has no competing interests.

Author’s contributions

H-yL conceived of the study, its design and coordination. The author read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Lan, Hy. Graph-convergent analysis of over-relaxed (A,η,m)-proximal point iterative methods with errors for general nonlinear operator equations. Fixed Point Theory Appl 2014, 161 (2014). https://doi.org/10.1186/1687-1812-2014-161
