The Wiener-Hopf Equation Technique for Solving General Nonlinear Regularized Nonconvex Variational Inequalities
Fixed Point Theory and Applications, volume 2011, Article number: 86 (2011)
Abstract
In this paper, we introduce and study some new classes of extended general nonlinear regularized nonconvex variational inequalities and extended general nonconvex Wiener-Hopf equations, and by the projection operator technique, we establish the equivalence between the extended general nonlinear regularized nonconvex variational inequalities and the fixed point problems as well as the extended general nonconvex Wiener-Hopf equations. Then, by using this equivalent formulation, we discuss the existence and uniqueness of the solution of the problem of extended general nonlinear regularized nonconvex variational inequalities. We apply the equivalent alternative formulation and a nearly uniformly Lipschitzian mapping S to construct some new p-step projection iterative algorithms with mixed errors for finding an element of the set of fixed points of S which is the unique solution of the problem of extended general nonlinear regularized nonconvex variational inequalities. We also consider the convergence analysis of the suggested iterative schemes under some suitable conditions.
Mathematics Subject Classification (2010)
Primary 47H05; Secondary 47J20, 49J40
1 Introduction
The theory of variational inequalities, initially introduced by Stampacchia [1] in 1964, is a branch of the mathematical sciences dealing with general equilibrium problems. It has a wide range of applications in economics, optimization, operations research, industry, physics, and the engineering sciences. Many research papers have been written lately, both on the theory and applications of this field. Important connections with major areas of pure and applied sciences have been made; see, for example, [2, 3] and the references cited therein. The development of variational inequality theory can be viewed as the simultaneous pursuit of two different lines of research. On the one hand, it reveals fundamental facts about the qualitative aspects of the solutions to important classes of problems; on the other hand, it enables us to develop highly efficient and powerful new numerical methods to solve, for example, obstacle, unilateral, free, moving, and complex equilibrium problems. One of the most interesting and important problems in variational inequality theory is the development of efficient numerical methods. There is a substantial number of numerical methods, including the projection method and its variant forms, the Wiener-Hopf (normal) equations, the auxiliary principle, and the descent framework, for solving variational inequalities and complementarity problems. For applications, physical formulations, numerical methods and other aspects of variational inequalities, see [1–52] and the references therein.
The projection method and its variant forms represent an important tool for finding approximate solutions of various types of variational and quasi-variational inequalities, the origin of which can be traced back to Lions and Stampacchia [31]. The projection-type methods were developed in the 1970s and 1980s. The main idea in this technique is to establish the equivalence between the variational inequalities and the fixed point problems using the concept of projection. This alternative formulation enables us to suggest some iterative methods for computing the approximate solution. Shi [50, 51] and Robinson [48] considered the problem of solving a system of equations called the Wiener-Hopf equations or normal maps. Shi [50] and Robinson [48] proved that the variational inequalities and the Wiener-Hopf equations are equivalent by using the projection technique. It turned out that this alternative equivalent formulation is more general and flexible. It has been shown in [48–53] that the Wiener-Hopf equations provide a simple, elegant and convenient device for developing some efficient numerical methods for solving variational inequalities and complementarity problems.
It should be pointed out that almost all the results regarding the existence of solutions and iterative schemes for solving variational inequalities and related optimization problems have been considered in the convexity setting. Consequently, all these techniques are based on the properties of the projection operator over convex sets, which may not hold in general when the sets are nonconvex. It is known that the uniformly prox-regular sets are nonconvex and include the convex sets as special cases; for more details, see, for example, [23, 28, 29, 46]. In recent years, Bounkhel et al. [23], Noor [36, 41] and Pang et al. [45] have considered variational inequalities in the context of uniformly prox-regular sets.
On the other hand, related to the variational inequalities, we have the problem of finding the fixed points of nonexpansive mappings, which is a subject of current interest in functional analysis. It is natural to consider a unified approach to these two different problems. Motivated and inspired by the research going on in this direction, Noor and Huang [43] considered the problem of finding a common element of the set of solutions of variational inequalities and the set of fixed points of nonexpansive mappings. Noor [38] suggested and analyzed some three-step iterative algorithms for finding common elements of the set of solutions of the Noor variational inequalities and the set of fixed points of nonexpansive mappings. He also discussed the convergence analysis of the suggested iterative algorithms under some conditions.
Recently, Qin and Noor [47] established the equivalence between general variational inequalities and general Wiener-Hopf equations. They proposed and analyzed a new iterative method for solving variational inequalities and related optimization problems. They also considered the problem of finding a common element of the set of fixed points of nonexpansive mappings and the set of solutions of the general variational inequalities.
It is well known that every nonexpansive mapping is a Lipschitzian mapping. Lipschitzian mappings have been generalized by various authors. Sahu [53] introduced and investigated nearly uniformly Lipschitzian mappings as a generalization of Lipschitzian mappings.
Motivated and inspired by the above works, in the present paper some new classes of extended general nonlinear regularized nonconvex variational inequalities and extended general nonconvex Wiener-Hopf equations are introduced and studied, and by the projection technique, the equivalence between the extended general nonlinear regularized nonconvex variational inequalities and the fixed point problems as well as the extended general nonconvex Wiener-Hopf equations is proved. Then, by using this equivalent formulation, the existence and uniqueness of the solution of the problem of extended general nonlinear regularized nonconvex variational inequalities are discussed. Applying the equivalent alternative formulation and a nearly uniformly Lipschitzian mapping S, some new p-step projection iterative algorithms with mixed errors for finding an element of the set of fixed points of S which is the unique solution of the problem of extended general nonlinear regularized nonconvex variational inequalities are defined. The convergence analysis of the suggested iterative schemes under some suitable conditions is discussed. Some remarks about statements established by Noor [38], Noor et al. [44] and Qin and Noor [47] are presented, and it is shown that their statements are special cases of our results. The results obtained in this paper may be viewed as a refinement and improvement of the previously known results.
2 Preliminaries and basic results
Throughout this article, we let $\mathcal{H}$ be a real Hilbert space equipped with an inner product $\langle \cdot ,\cdot \rangle$ and corresponding norm $\|\cdot\|$, and let K be a nonempty convex subset of $\mathcal{H}$. We denote by $d_{K}(\cdot)$ or $d(\cdot ,K)$ the usual distance function to the subset K, i.e., $d_{K}(u)=\inf_{v\in K}\|v-u\|$. Let us recall the following well-known definitions and some auxiliary results of nonlinear convex analysis and nonsmooth analysis [27–29, 46].
Definition 2.1. Let $u\in \mathcal{H}$ be a point not lying in K. A point v ∈ K is called a closest point or a projection of u onto K if $d_{K}(u)=\|u-v\|$. The set of all such closest points is denoted by P_{ K }(u), i.e.,
Definition 2.2. The proximal normal cone of K at a point $u\in \mathcal{H}$ with u ∉ K is given by
Clarke et al. [28], in Proposition 1.1.5, give a characterization of ${N}_{K}^{P}\left(u\right)$ as follows:
Lemma 2.3. Let K be a nonempty closed subset of $\mathcal{H}$. Then $\xi \in {N}_{K}^{P}\left(u\right)$ if and only if there exists a constant α = α(ξ, u) > 0 such that $\langle \xi ,v-u\rangle \le \alpha \|v-u\|^{2}$ for all v ∈ K.
The above inequality is called the proximal normal inequality. The special case in which K is closed and convex is an important one. In Proposition 1.1.10 of [28], the authors give the following characterization of the proximal normal cone of a closed and convex subset $K\subset \mathcal{H}$:
Lemma 2.4. Let K be a nonempty closed and convex subset in $\mathcal{H}$. Then $\xi \in {N}_{K}^{P}\left(u\right)$ if and only if 〈ξ, v  u〉 ≤ 0 for all v ∈ K.
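Lemma 2.4 can be checked numerically for a simple convex set. The following Python sketch is an illustration added here, not part of the original text; the set K = [0, 1] × [0, 1] and the helper names are our own choices. It projects a point x outside K onto K and verifies that ξ = x − P_K(x) satisfies ⟨ξ, v − u⟩ ≤ 0 on a grid of points of K.

```python
# Numeric check of Lemma 2.4 (illustration, not from the paper): for a
# closed convex K, if u = P_K(x) with x outside K, then xi = x - u is a
# proximal normal at u and <xi, v - u> <= 0 for every v in K.
# Here K = [0, 1] x [0, 1].

def proj_box(x):
    """Projection onto K = [0, 1]^2, computed coordinatewise."""
    return tuple(min(max(t, 0.0), 1.0) for t in x)

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

x = (2.0, 0.5)                            # a point outside K
u = proj_box(x)                           # its unique projection (1.0, 0.5)
xi = tuple(s - t for s, t in zip(x, u))   # proximal normal direction at u

# Sample K on a grid and verify <xi, v - u> <= 0.
grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
assert all(dot(xi, (v[0] - u[0], v[1] - u[1])) <= 1e-12 for v in grid)
print(u, xi)                              # (1.0, 0.5) (1.0, 0.0)
```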
Definition 2.5. Let X be a real Banach space and f : X → ℝ be Lipschitzian with constant τ near a given point x ∈ X; that is, for some ε > 0, we have $|f(y)-f(z)|\le \tau \|y-z\|$ for all y, z ∈ B(x; ε), where B(x; ε) denotes the open ball of radius ε centered at x. The generalized directional derivative of f at x in the direction v, denoted by f°(x; v), is defined as follows:
where y is a vector in X and t is a positive scalar.
The generalized directional derivative defined earlier can be used to develop a notion of tangency that does not require K to be smooth or convex.
Definition 2.6. The tangent cone T_{ K }(x) to K at a point x in K is defined as follows:
Having defined a tangent cone, the likely candidate for the normal cone is the one obtained from T_{ K }(x) by polarity. Accordingly, we define the normal cone of K at x by polarity with T_{ K }(x) as follows:
Definition 2.7. The Clarke normal cone, denoted by ${N}_{K}^{C}\left(x\right)$, is given by ${N}_{K}^{C}\left(x\right)=\overline{co}\left[{N}_{K}^{P}\left(x\right)\right]$, where $\overline{co}\left[S\right]$ means the closure of the convex hull of S. It is clear that one always has ${N}_{K}^{P}\left(x\right)\subseteq {N}_{K}^{C}\left(x\right)$. The converse is not true in general. Note that ${N}_{K}^{C}\left(x\right)$ is always a closed and convex cone, whereas ${N}_{K}^{P}\left(x\right)$ is always convex, but may not be closed (see [27, 28, 46]).
In 1995, Clarke et al. [29] introduced and studied a new class of nonconvex sets called proximally smooth sets; subsequently, Poliquin et al. [46] investigated these sets under the name of uniformly prox-regular sets. They have been used successfully in many nonconvex applications in areas such as optimization, economic models, dynamical systems, differential inclusions, etc. For such applications, see [20–22, 24]. This class seems particularly well suited to overcome the difficulties which arise from the nonconvexity assumptions on K. We take the following characterization, proved in [29], as a definition of this class. We point out that the original definition was given in terms of the differentiability of the distance function (see [29]).
Definition 2.8. For any r ∈ (0, +∞], a subset K_{ r } of $\mathcal{H}$ is called normalized uniformly prox-regular (or uniformly r-prox-regular [29]) if every nonzero proximal normal to K_{ r } can be realized by an r-ball.
This means that for all $\bar{x}\in {K}_{r}$ and all $0\ne \xi \in {N}_{{K}_{r}}^{P}\left(\bar{x}\right)$ with $\|\xi\|=1$,
Obviously, the class of normalized uniformly prox-regular sets is sufficiently large to include the class of convex sets, p-convex sets, C^{1,1} submanifolds (possibly with boundary) of $\mathcal{H}$, the images under a C^{1,1} diffeomorphism of convex sets and many other nonconvex sets; see [25, 29].
Lemma 2.9. [29] A closed set $K\subseteq \mathcal{H}$ is convex if and only if it is proximally smooth of radius r for every r > 0.
If r = +∞, then, in view of Definition 2.8 and Lemma 2.9, the uniform r-prox-regularity of K_{ r } is equivalent to the convexity of K_{ r }, which makes this class of great importance. For the case r = +∞, we set K_{ r } = K.
The following proposition summarizes some important consequences of uniform prox-regularity needed in the sequel. The proofs of these results can be found in [29, 46].
Proposition 2.10. Let r > 0 and K_{ r } be a nonempty closed and uniformly r-prox-regular subset of $\mathcal{H}$. Set $U\left(r\right)=\left\{u\in \mathcal{H}:0<{d}_{{K}_{r}}\left(u\right)<r\right\}$. Then the following statements hold:

(a)
For all x ∈ U(r), one has ${P}_{{K}_{r}}\left(x\right)\ne \varnothing $;

(b)
For all r' ∈ (0, r), ${P}_{{K}_{r}}$ is Lipschitzian continuous with constant $\frac{r}{r-r'}$ on $U\left({r}^{\prime}\right)=\left\{u\in \mathcal{H}:0<{d}_{{K}_{r}}\left(u\right)<{r}^{\prime}\right\}$;

(c)
The proximal normal cone is closed as a set-valued mapping.
As a direct consequence of part (c) of Proposition 2.10, we have ${N}_{{K}_{r}}^{C}\left(x\right)={N}_{{K}_{r}}^{P}\left(x\right)$. Therefore, we will define ${N}_{{K}_{r}}\left(x\right):={N}_{{K}_{r}}^{C}\left(x\right)={N}_{{K}_{r}}^{P}\left(x\right)$ for such a class of sets.
In order to make the concept of r-prox-regular sets clear, we state the following concrete example: the union of two disjoint intervals [a, b] and [c, d] is r-prox-regular with $r=\frac{c-b}{2}$. The finite union of disjoint intervals is also r-prox-regular, and r depends on the distances between the intervals.
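This interval example can be made concrete numerically. The following Python sketch is our illustration (the helper names are assumptions, not from the paper): for K = [0, 1] ∪ [3, 4] we have r = (c − b)/2 = 1, points u with 0 < d_K(u) < r have a unique closest point, and at the midpoint u = 2 the projection set contains two points.

```python
# The union of two disjoint intervals K = [a, b] U [c, d] is uniformly
# r-prox-regular with r = (c - b)/2 (illustration; helper names are ours).
# Inside U(r) = {u : 0 < d_K(u) < r} the projection is unique; at the
# midpoint (b + c)/2 the distance equals r and P_K(u) has two elements.

def proj_interval(u, lo, hi):
    """Closest point of [lo, hi] to u."""
    return min(max(u, lo), hi)

def proj_union(u, a, b, c, d, tol=1e-12):
    """The set of closest points of [a, b] U [c, d] to u."""
    p1, p2 = proj_interval(u, a, b), proj_interval(u, c, d)
    d1, d2 = abs(u - p1), abs(u - p2)
    if abs(d1 - d2) < tol:
        return [p1, p2]             # two closest points: projection not unique
    return [p1] if d1 < d2 else [p2]

a, b, c, d = 0.0, 1.0, 3.0, 4.0     # here r = (3 - 1)/2 = 1
print(proj_union(1.5, a, b, c, d))  # d_K = 0.5 < r: unique projection [1.0]
print(proj_union(2.0, a, b, c, d))  # the midpoint: two closest points [1.0, 3.0]
```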
Definition 2.11. Let $T,g:\mathcal{H}\to \mathcal{H}$ be two single-valued operators. Then the operator T is said to be:

(a)
monotone if
$$\langle T(x)-T(y),\; x-y\rangle \ge 0,\qquad \forall x,y\in \mathcal{H};$$ 
(b)
r-strongly monotone if there exists a constant r > 0 such that
$$\langle T(x)-T(y),\; x-y\rangle \ge r\|x-y\|^{2},\qquad \forall x,y\in \mathcal{H};$$ 
(c)
κ-strongly monotone with respect to g if there exists a constant κ > 0 such that
$$\langle T(x)-T(y),\; g(x)-g(y)\rangle \ge \kappa \|x-y\|^{2},\qquad \forall x,y\in \mathcal{H};$$ 
(d)
(ξ, ς)-relaxed cocoercive if there exist constants ξ, ς > 0 such that
$$\langle T(x)-T(y),\; x-y\rangle \ge -\xi \|T(x)-T(y)\|^{2}+\varsigma \|x-y\|^{2},\qquad \forall x,y\in \mathcal{H};$$ 
(e)
γ-Lipschitzian continuous if there exists a constant γ > 0 such that
$$\|T(x)-T(y)\|\le \gamma \|x-y\|,\qquad \forall x,y\in \mathcal{H}.$$
The next definitions state several generalizations of nonexpansive mappings that have been introduced by various authors in recent years.
Definition 2.12. A nonlinear mapping $T:\mathcal{H}\to \mathcal{H}$ is said to be:

(a)
nonexpansive if
$$\|Tx-Ty\|\le \|x-y\|,\qquad \forall x,y\in \mathcal{H};$$ 
(b)
L-Lipschitzian if there exists a constant L > 0 such that
$$\|Tx-Ty\|\le L\|x-y\|,\qquad \forall x,y\in \mathcal{H};$$ 
(c)
generalized Lipschitzian if there exists a constant L > 0 such that
$$\|Tx-Ty\|\le L\left(\|x-y\|+1\right),\qquad \forall x,y\in \mathcal{H};$$ 
(d)
generalized (L, M)-Lipschitzian [53] if there exist two constants L, M > 0 such that
$$\|Tx-Ty\|\le L\left(\|x-y\|+M\right),\qquad \forall x,y\in \mathcal{H};$$ 
(e)
asymptotically nonexpansive [54] if there exists a sequence {k_{ n }} ⊆ [1, ∞) with $\lim_{n\to \infty}{k}_{n}=1$ such that, for each n ∈ ℕ,
$$\|T^{n}x-T^{n}y\|\le k_{n}\|x-y\|,\qquad \forall x,y\in \mathcal{H};$$ 
(f)
pointwise asymptotically nonexpansive [55] if, for each integer n ∈ ℕ,
$$\|T^{n}x-T^{n}y\|\le \alpha_{n}(x)\|x-y\|,\qquad \forall x,y\in \mathcal{H},$$
where α_{ n } → 1 pointwise on $\mathcal{H}$;

(g)
uniformly L-Lipschitzian if there exists a constant L > 0 such that, for each n ∈ ℕ,
$$\|T^{n}x-T^{n}y\|\le L\|x-y\|,\qquad \forall x,y\in \mathcal{H}.$$
Definition 2.13. [53] A nonlinear mapping $T:\mathcal{H}\to \mathcal{H}$ is said to be:

(a)
nearly Lipschitzian with respect to the sequence {a_{ n }} if for each n ∈ ℕ, there exists a constant k_{ n } > 0 such that
$$\|T^{n}x-T^{n}y\|\le k_{n}\left(\|x-y\|+a_{n}\right),\qquad \forall x,y\in \mathcal{H},$$(2.1)
where {a_{ n }} is a fixed sequence in [0, ∞) with a_{ n } → 0 as n → ∞.
The infimum of the constants k_{ n } in (2.1) is called the nearly Lipschitz constant and is denoted by η(T ^{n}). Notice that
A nearly Lipschitzian mapping T with the sequence {(a_{ n }, η(T ^{n}))} is said to be:

(b)
nearly nonexpansive if η(T^{n}) = 1 for all n ∈ ℕ, that is,
$$\|T^{n}x-T^{n}y\|\le \|x-y\|+a_{n},\qquad \forall x,y\in \mathcal{H};$$ 
(c)
nearly asymptotically nonexpansive if η(T^{n}) ≥ 1 for all n ∈ ℕ and $\lim_{n\to \infty}\eta \left({T}^{n}\right)=1$; in other words, k_{ n } ≥ 1 for all n ∈ ℕ with $\lim_{n\to \infty}{k}_{n}=1$;

(d)
nearly uniformly L-Lipschitzian if η(T^{n}) ≤ L for all n ∈ ℕ; in other words, (2.1) holds with k_{ n } = L for all n ∈ ℕ.
Remark 2.14. It should be pointed out that

(1)
Every nonexpansive mapping is an asymptotically nonexpansive mapping, and every asymptotically nonexpansive mapping is a pointwise asymptotically nonexpansive mapping. Also, the class of Lipschitzian mappings properly includes the class of pointwise asymptotically nonexpansive mappings.

(2)
It is obvious that every Lipschitzian mapping is a generalized Lipschitzian mapping. Furthermore, every mapping with a bounded range is a generalized Lipschitzian mapping. It is easy to see that the class of generalized (L, M)-Lipschitzian mappings is more general than the class of generalized Lipschitzian mappings.

(3)
Clearly, the class of nearly uniformly L-Lipschitzian mappings properly includes the class of generalized (L, M)-Lipschitzian mappings and that of uniformly L-Lipschitzian mappings. Note that every nearly asymptotically nonexpansive mapping is nearly uniformly L-Lipschitzian.
Now, we present some new examples to investigate relations between these mappings.
Example 2.15. Let $\mathcal{H}=\mathbb{R}$ and define a mapping $T:\mathcal{H}\to \mathcal{H}$ as follows:
where γ > 1 is a constant real number. Evidently, the mapping T is discontinuous at the points x = 0, γ. Since every Lipschitzian mapping is continuous, it follows that T is not Lipschitzian. For each n ∈ ℕ, take ${a}_{n}=\frac{1}{{\gamma}^{n}}$. Then,
Since ${T}^{n}z=\frac{1}{\gamma}$ for all z ∈ ℝ and n ≥ 2, it follows that for all x, y ∈ ℝ and n ≥ 2,
Hence T is a nearly nonexpansive mapping with respect to the sequence $\left\{{a}_{n}\right\}=\left\{\frac{1}{{\gamma}^{n}}\right\}$.
The following example shows that nearly uniformly L-Lipschitzian mappings are not necessarily continuous.
Example 2.16. Let $\mathcal{H}=\left[0,b\right]$, where b ∈ (0, 1] is an arbitrary constant real number, and let the self-mapping T of $\mathcal{H}$ be defined as below:
$$Tx=\begin{cases}\gamma x, & x\in [0,b),\\ 0, & x=b,\end{cases}$$
where γ ∈ (0, 1) is also an arbitrary constant real number. It is plain that the mapping T is discontinuous at the point b. Hence T is not a Lipschitzian mapping. For each n ∈ ℕ, take a_{ n } = γ^{n-1}. Then, for all n ∈ ℕ and x, y ∈ [0, b), we have
If x ∈ [0, b) and y = b, then, for each n ∈ ℕ, we have T^{n}x = γ^{n}x and T^{n}y = 0. Since 0 < |x − y| ≤ b ≤ 1, it follows that, for all n ∈ ℕ,
Hence T is a nearly uniformly γ-Lipschitzian mapping with respect to the sequence {a_{ n }} = {γ^{n-1}}.
Obviously, every nearly nonexpansive mapping is a nearly uniformly Lipschitzian mapping. In the following example, we show that the class of nearly uniformly Lipschitzian mappings properly includes the class of nearly nonexpansive mappings.
Example 2.17. Let $\mathcal{H}=\mathbb{R}$ and let the self-mapping T of $\mathcal{H}$ be defined as follows:
Evidently, the mapping T is discontinuous at the points x = 0, 1, 2. Hence T is not a Lipschitzian mapping. For each n ∈ ℕ, take ${a}_{n}=\frac{1}{{2}^{n}}$. Then T is not a nearly nonexpansive mapping with respect to the sequence $\left\{\frac{1}{{2}^{n}}\right\}$ because, taking x = 1 and $y=\frac{1}{2}$, we have Tx = 2, $Ty=\frac{1}{2}$ and
However,
and for all n ≥ 2,
since ${T}^{n}z=\frac{1}{2}$ for all z ∈ ℝ and n ≥ 2. Hence, for each L ≥ 4, T is a nearly uniformly L-Lipschitzian mapping with respect to the sequence $\left\{\frac{1}{{2}^{n}}\right\}$.
It is clear that every uniformly L-Lipschitzian mapping is a nearly uniformly L-Lipschitzian mapping. In the next example, we show that the class of nearly uniformly L-Lipschitzian mappings properly includes the class of uniformly L-Lipschitzian mappings.
Example 2.18. Let $\mathcal{H}=\mathbb{R}$ and let the self-mapping T of $\mathcal{H}$ be defined as in Example 2.17. Then T is not a uniformly 4-Lipschitzian mapping. In fact, if x = 1 and $y\in \left(1,\frac{3}{2}\right)$, then we have |Tx − Ty| > 4|x − y| because $0<|x-y|<\frac{1}{2}$. But, in view of Example 2.17, T is a nearly uniformly 4-Lipschitzian mapping.
The following example shows that the class of generalized Lipschitzian mappings properly includes the class of Lipschitzian mappings and that of mappings with bounded range.
Example 2.19. [26] Let $\mathcal{H}=\mathbb{R}$ and a mapping $T:\mathcal{H}\to \mathcal{H}$ be defined by
Then T is a generalized Lipschitzian mapping which is not Lipschitzian and whose range is not bounded.
3 Extended general regularized nonconvex variational inequality
In this section, we introduce a new problem of extended general nonlinear regularized nonconvex variational inequality and some special cases of the problem in Hilbert spaces and investigate their relations.
Let $T,f,g:\mathcal{H}\to \mathcal{H}$ be three nonlinear single-valued operators such that ${K}_{r}\subseteq f\left(\mathcal{H}\right)$. We consider the problem of finding $u\in \mathcal{H}$ such that g(u) ∈ K_{ r } and
where ρ > 0 is a constant. The problem (3.1) is called the extended general nonlinear regularized nonconvex variational inequality involving three different nonlinear operators (EGNRNVID).
Proposition 3.1. If K_{ r } is a uniformly proxregular set, then the problem (3.1) is equivalent to that of finding $u\in \mathcal{H}$ such that g(u) ∈ K_{ r } and
where ${N}_{{K}_{r}}^{P}\left(s\right)$ denotes the P-normal cone of K_{ r } at s in the sense of nonconvex analysis.
Proof. Let $u\in \mathcal{H}$ with g(u) ∈ K_{ r } be a solution of the problem (3.1). If ρT(u) + g(u) − f(u) = 0, then, because the zero vector always belongs to any normal cone, we have $0\in \rho T\left(u\right)+g\left(u\right)-f\left(u\right)+{N}_{{K}_{r}}^{P}\left(g\left(u\right)\right)$. If ρT(u) + g(u) − f(u) ≠ 0, then for all $v\in \mathcal{H}$ with f(v) ∈ K_{ r }, one has
Now, by using Lemma 2.3, we conclude that $-\left(\rho T\left(u\right)+g\left(u\right)-f\left(u\right)\right)\in {N}_{{K}_{r}}^{P}\left(g\left(u\right)\right)$ and so
Conversely, if $u\in \mathcal{H}$ with g(u) ∈ K_{ r } is a solution of the problem (3.2), then Definition 2.8 guarantees that $u\in \mathcal{H}$ with g(u) ∈ K_{ r } is a solution of the problem (3.1). This completes the proof.
The problem (3.2) is called the extended general nonconvex variational inclusion associated with EGNRNVID problem.
Some special cases of the problem (3.1) are as follows:

(1)
If g ≡ I (the identity operator), then the problem (3.1) collapses to the following problem: Find u ∈ K_{ r } such that
$$\langle \rho T(u)+u-f(u),\; f(v)-u\rangle +\frac{1}{2r}\|f(v)-u\|^{2}\ge 0,\qquad \forall v\in \mathcal{H}:f(v)\in {K}_{r},$$(3.3)
which is a new problem of general nonlinear regularized nonconvex variational inequality involving two nonlinear operators (GNRNVID).

(2)
If f = g, then the problem (3.1) reduces to the following problem: Find $u\in \mathcal{H}$ such that g(u) ∈ K_{ r } and
$$\langle \rho T(u),\; g(v)-g(u)\rangle +\frac{1}{2r}\|g(v)-g(u)\|^{2}\ge 0,\qquad \forall v\in \mathcal{H}:g(v)\in {K}_{r},$$(3.4)
which is also a new problem of general nonlinear regularized nonconvex variational inequality involving two nonlinear operators (GNRNVID).

(3)
If g ≡ I, then the problem (3.4) collapses to the following problem: Find u ∈ K_{ r } such that
$$\langle \rho T(u),\; v-u\rangle +\frac{1}{2r}\|v-u\|^{2}\ge 0,\qquad \forall v\in {K}_{r},$$(3.5)
which is a new problem of nonlinear regularized nonconvex variational inequality (NRNVI).

(4)
If r = ∞, i.e., K_{ r } = K, the convex set in $\mathcal{H}$, then the problem (3.1) changes into that of finding $u\in \mathcal{H}$ such that g(u) ∈ K and
$$\langle \rho T(u)+g(u)-f(u),\; f(v)-g(u)\rangle \ge 0,\qquad \forall v\in \mathcal{H}:f(v)\in K.$$(3.6)
The inequality of type (3.6) was introduced and studied by Noor [33, 39].

(5)
If r = ∞, then the problem (3.3) is equivalent to the problem: Find u ∈ K such that
$$\langle \rho T(u)+u-f(u),\; f(v)-u\rangle \ge 0,\qquad \forall v\in \mathcal{H}:f(v)\in K.$$(3.7)
The problem (3.7) was introduced and studied by Noor [34].

(6)
If r = ∞, then the problem (3.4) reduces to the following problem: Find $u\in \mathcal{H}$ such that g(u) ∈ K and
$$\langle T(u),\; g(v)-g(u)\rangle \ge 0,\qquad \forall v\in \mathcal{H}:g(v)\in K,$$(3.8)
which is known as the general nonlinear variational inequality introduced and studied by Noor [37] in 1988.

(7)
If r = ∞, then the problem (3.5) changes into the problem: Find u ∈ K such that
$$\langle Tu,\; v-u\rangle \ge 0,\qquad \forall v\in K.$$(3.9)
The inequality of type (3.9) is called variational inequality, which was introduced and studied by Stampacchia [1] in 1964.
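For the classical inequality (3.9), the projection method mentioned in the introduction iterates u_{k+1} = P_K(u_k − ρT(u_k)). The following Python sketch is our illustration, with T(u) = u − 1 and K = [0, 0.5] chosen by us (T is 1-strongly monotone and 1-Lipschitz, so the iteration contracts for 0 < ρ < 2); it shows the iteration settling on the boundary solution u = 0.5.

```python
# Projection method for the Stampacchia inequality (3.9):
#     u_{k+1} = P_K(u_k - rho * T(u_k)).
# Illustrative data (ours, not the paper's): H = R, K = [0, 0.5],
# T(u) = u - 1, which is 1-strongly monotone and 1-Lipschitz.

def P_K(u, lo=0.0, hi=0.5):
    """Projection onto the convex set K = [lo, hi]."""
    return min(max(u, lo), hi)

def T(u):
    return u - 1.0

def solve_vi(u0=0.0, rho=0.5, iters=100):
    u = u0
    for _ in range(iters):
        u = P_K(u - rho * T(u))
    return u

u_star = solve_vi()
print(u_star)   # 0.5: since T < 0 on K, the solution is the right endpoint
# Check <T(u*), v - u*> >= 0 at the extreme points of K.
assert all(T(u_star) * (v - u_star) >= -1e-9 for v in (0.0, 0.5))
```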
Now, we prove the existence and uniqueness theorem for the solution of the problem of extended general nonlinear regularized nonconvex variational inequality (3.1). To this end, we need the following lemma, in which, by using the projection operator technique, we verify the equivalence between the problem (3.1) and a fixed point problem.
Lemma 3.2. Let T, f, g and ρ > 0 be the same as in the problem (3.1). Then $u\in \mathcal{H}$ with g(u) ∈ K_{ r } is a solution of the problem (3.1) if and only if
where ${P}_{{K}_{r}}$ is the projection of $\mathcal{H}$ onto K_{ r }.
Proof. Let $u\in \mathcal{H}$ with g(u) ∈ K_{ r } be a solution of the problem (3.1). Then, by using Proposition 3.1, we have
where I is the identity operator and we have used the well-known fact that ${P}_{{K}_{r}}={\left(I+{N}_{{K}_{r}}^{P}\right)}^{-1}$.
Theorem 3.3. Let T, f, g and ρ be the same as in the problem (3.1) such that

(a)
T is κ-strongly monotone with respect to f and σ-Lipschitz continuous;

(b)
g is τ-strongly monotone and ι-Lipschitz continuous;

(c)
f is ϖ-Lipschitz continuous.
If the constant ρ > 0 satisfies the following condition:
where r' ∈ (0, r), then the problem (3.1) admits a unique solution.
Proof. Define the mapping $\varphi :\mathcal{H}\to \mathcal{H}$ by
Now, we establish that ϕ is a contraction mapping. Let $x,\widehat{x}\in \mathcal{H}$ with g(x), $g\left(\widehat{x}\right)\in {K}_{r}$ be given. It follows from Proposition 2.10 that
$$\begin{array}{rl}\|\varphi(x)-\varphi(\widehat{x})\| &\le \|x-\widehat{x}-(g(x)-g(\widehat{x}))\|+\|{P}_{{K}_{r}}(f(x)-\rho T(x))-{P}_{{K}_{r}}(f(\widehat{x})-\rho T(\widehat{x}))\| \\ &\le \|x-\widehat{x}-(g(x)-g(\widehat{x}))\|+\frac{r}{r-r'}\|f(x)-f(\widehat{x})-\rho (T(x)-T(\widehat{x}))\|.\end{array}$$ (3.13)
By using the τ-strong monotonicity and ι-Lipschitzian continuity of g, we have
Since T is κ-strongly monotone with respect to f and σ-Lipschitzian continuous, and f is ϖ-Lipschitzian continuous, we obtain
Substituting (3.14) and (3.15) into (3.13), we obtain
where
In view of the condition (3.11), we note that 0 ≤ γ < 1, and so from (3.16) we conclude that the mapping φ is a contraction. By the Banach fixed point theorem, φ has a unique fixed point in $\mathcal{H}$; that is, there exists a unique point $u\in \mathcal{H}$ with g(u) ∈ K_{ r } such that φ(u) = u. It follows from (3.12) that $g\left(u\right)={P}_{{K}_{r}}\left(f\left(u\right)-\rho T\left(u\right)\right)$. Now, Lemma 3.2 guarantees that $u\in \mathcal{H}$ with g(u) ∈ K_{ r } is a solution of the problem (3.1). This completes the proof.
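The contraction argument of Theorem 3.3 can be observed numerically in the convex case r = +∞, where the Lipschitz factor r/(r − r′) reduces to 1. The following Python sketch uses illustrative operators of our own choosing, not the paper's (H = ℝ, K = [0, 1], T(x) = x − 0.3, f(x) = x, g(x) = 0.8x), and iterates the mapping φ(x) = x − g(x) + P_K(f(x) − ρT(x)) from the proof:

```python
# Banach iteration on phi(x) = x - g(x) + P_K(f(x) - rho*T(x)), the
# contraction from the proof of Theorem 3.3, in the convex case r = +inf
# (so K_r = K and r/(r - r') = 1).  All operators are illustrative
# choices of ours: H = R, K = [0, 1], T(x) = x - 0.3, f(x) = x, g(x) = 0.8x.

def P_K(x, lo=0.0, hi=1.0):
    return min(max(x, lo), hi)

T = lambda x: x - 0.3
f = lambda x: x
g = lambda x: 0.8 * x
rho = 0.5

def phi(x):
    return x - g(x) + P_K(f(x) - rho * T(x))

x = 0.0
for _ in range(200):            # x_{k+1} = phi(x_k)
    x = phi(x)

print(round(x, 6))              # the unique fixed point u = 0.5
# At the fixed point, g(u) = P_K(f(u) - rho*T(u)), as in Lemma 3.2.
assert abs(g(x) - P_K(f(x) - rho * T(x))) < 1e-9
```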
As in the proof of Theorem 3.3, one can prove existence and uniqueness theorems for the solutions of the problems (3.3)-(3.5); we omit the proofs.
Theorem 3.4. Assume that T, f and ρ are the same as in the problem (3.3) such that

(a)
T is κ-strongly monotone with respect to f and σ-Lipschitz continuous;

(b)
f is ϖ-Lipschitz continuous.
If the constant ρ > 0 satisfies the following condition:
where r' ∈ (0, r), then the problem (3.3) admits a unique solution.
Theorem 3.5. Let T, g and ρ be the same as in the problem (3.4) such that

(a)
T is κ-strongly monotone with respect to g and σ-Lipschitz continuous;

(b)
g is τ-strongly monotone and ι-Lipschitz continuous.
If the constant ρ > 0 satisfies the following condition:
where r' ∈ (0, r), then the problem (3.4) admits a unique solution.
Theorem 3.6. Suppose that T and ρ are the same as in the problem (3.5) such that T is κ-strongly monotone and σ-Lipschitz continuous. If the constant ρ > 0 satisfies the following condition:
where r' ∈ (0, r), then the problem (3.5) admits a unique solution.
4 Nearly uniformly Lipschitzian mappings and finite step projection iterative algorithms
In this section, applying a nearly uniformly Lipschitzian mapping S and using the fixed point formulation (3.10), we suggest and analyze some new p-step projection iterative algorithms with mixed errors for finding an element of the set of fixed points of S which is the unique solution of the problem of extended general nonlinear regularized nonconvex variational inequality (3.1).
Let S : K_{ r } → K_{ r } be a nearly uniformly Lipschitzian mapping. We denote by Fix(S) the set of all fixed points of S and by EGNRNVID(K_{ r }, T, f, g) the set of all solutions of the problem (3.1). We now characterize the problem: if u ∈ Fix(S) ∩ EGNRNVID(K_{ r }, T, f, g), then it follows from Lemma 3.2 that, for each n ≥ 0,
The fixed point formulation (4.1) enables us to define the following p-step projection iterative algorithms with mixed errors for finding a common element of the set of fixed points of the nearly uniformly Lipschitzian mapping S and the set of solutions of the extended general nonlinear regularized nonconvex variational inequalities (3.1).
Algorithm 4.1. Let T, f, g and ρ be the same as in the problem (3.1). For an arbitrarily chosen initial point x_{0} ∈ K_{ r }, compute the iterative sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\infty}$ by the iterative process
where
S : K_{ r } → K_{ r } is a nearly uniformly Lipschitzian mapping, ${\left\{{\alpha}_{n,i}\right\}}_{n=0}^{\infty}$ and ${\left\{{\beta}_{n,i}\right\}}_{n=0}^{\infty}\left(i=1,2,\dots ,p\right)$ are 2p sequences in the interval [0, 1] such that ${\sum}_{n=0}^{\infty}{\prod}_{i=1}^{p}{\alpha}_{n,i}=\infty $, ${\alpha}_{n,i}+{\beta}_{n,i}\le 1$ and ${\sum}_{n=0}^{\infty}{\beta}_{n,i}<\infty $, and ${\left\{{e}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{l}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{r}_{n,i}\right\}}_{n=0}^{\infty}\left(i=1,2,\dots ,p\right)$ are 3p sequences in $\mathcal{H}$ introduced to take into account a possible inexact computation of the projection operator point, satisfying the following conditions: ${\left\{{l}_{n,i}\right\}}_{n=0}^{\infty}\left(i=1,2,\dots ,p\right)$ are p bounded sequences in $\mathcal{H}$, and ${\left\{{e}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{r}_{n,i}\right\}}_{n=0}^{\infty}\left(i=1,2,\dots ,p\right)$ are 2p sequences in $\mathcal{H}$ such that
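Since the displayed iteration of Algorithm 4.1 is not reproduced here, the following sketch only illustrates the general shape of a p-step projection scheme of this kind, specialized to S = f = g = I and vanishing error sequences (e_{n,i} = l_{n,i} = r_{n,i} = 0), with constant relaxation parameters; all names and numerical values are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def project_ball(x, r):
    """Projection P_{K_r} onto the closed ball of radius r."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def p_step_projection(T, rho, r, x0, p=3, alpha=0.5, n_iters=200):
    """Illustrative p-step projection iteration (errors set to zero).

    Each stage relaxes toward the projection step y <- P_{K_r}(y - rho*T(y)),
    mirroring the multi-step structure with S = f = g = I.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        y = x
        for _i in range(p):  # p stages per outer iteration
            y = (1 - alpha) * x + alpha * project_ball(y - rho * T(y), r)
        x = y
    return x

A = np.array([[2.0, 0.5], [0.5, 1.5]])   # positive definite: strongly monotone T
b = np.array([1.0, -1.0])
T = lambda u: A @ u + b

x_star = p_step_projection(T, rho=0.3, r=5.0, x0=np.zeros(2))
# A limit point satisfies the fixed-point equation x = P_{K_r}(x - rho*T(x)).
residual = np.linalg.norm(x_star - project_ball(x_star - 0.3 * T(x_star), 5.0))
```

Each outer pass nests p relaxation stages, which is the structural feature distinguishing these p-step schemes from one-step (Mann-type) projection iterations.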
Algorithm 4.2. Assume that T, f and ρ are the same as in the problem (3.3). For an arbitrarily chosen initial point x_{0} ∈ K_{ r }, compute the iterative sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\infty}$ by the iterative process
where S, ${\left\{{\alpha}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{\beta}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{e}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{l}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{r}_{n,i}\right\}}_{n=0}^{\infty}\left(i=1,2,\dots ,p\right)$ are the same as in Algorithm 4.1.
Algorithm 4.3. Let T, g and ρ be the same as in the problem (3.4). For an arbitrarily chosen initial point x_{0} ∈ K_{ r }, compute the iterative sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\infty}$ as follows:
where
and S, ${\left\{{\alpha}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{\beta}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{e}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{l}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{r}_{n,i}\right\}}_{n=0}^{\infty}\left(i=1,2,\dots ,p\right)$ are the same as in Algorithm 4.1.
Algorithm 4.4. Let T and ρ be the same as in the problem (3.5). For an arbitrarily chosen initial point x_{0} ∈ K_{ r }, compute the iterative sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\infty}$ by using
where S, ${\left\{{\alpha}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{\beta}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{e}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{l}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{r}_{n,i}\right\}}_{n=0}^{\infty}\left(i=1,2,\dots ,p\right)$ are the same as in Algorithm 4.1.
Remark 4.5. It should be pointed out that

(1)
If e_{n,i} = r_{n,i} = 0 for all n ≥ 0 and i = 1, 2,..., p, then Algorithms 4.1-4.4 reduce to the perturbed iterative processes with mean errors.

(2)
When e_{n,i} = l_{n,i} = r_{n,i} = 0 for all n ≥ 0 and i = 1, 2,..., p, Algorithms 4.1-4.4 reduce to the perturbed iterative processes without errors.
Remark 4.6. Algorithms 2.1-2.6 in [38] and Algorithm 2.1 in [44] are special cases of Algorithms 4.1-4.4. In brief, for a suitable and appropriate choice of the operators T, f, g, the constant ρ, and the sequences ${\left\{{\alpha}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{\beta}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{e}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{l}_{n,i}\right\}}_{n=0}^{\infty}$, ${\left\{{r}_{n,i}\right\}}_{n=0}^{\infty}\left(i=1,2,\dots ,p\right)$, one can obtain a number of new and previously known iterative schemes for solving the problems (3.1) and (3.3)-(3.5) and related problems. This clearly shows that Algorithms 4.1-4.4 are quite general and unifying.
Now, we discuss the convergence analysis of the suggested iterative Algorithms 4.1-4.4 under some suitable conditions. To this end, we need the following lemma:
Lemma 4.7. Let {a_{ n }}, {b_{ n }} and {c_{ n }} be three nonnegative real sequences satisfying the following condition: there exists a natural number n_{0} such that
where t_{ n } ∈ [0, 1], ${\sum}_{n=0}^{\infty}{t}_{n}=\infty $, lim_{ n→∞ }b_{ n } = 0 and ${\sum}_{n=0}^{\infty}{c}_{n}<\infty $. Then lim_{ n→∞ }a_{ n } = 0.
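A quick numerical sanity check of Lemma 4.7 (an illustration, not a proof; the particular sequences t_{ n }, b_{ n }, c_{ n } below are arbitrary choices satisfying its hypotheses):

```python
import math

# Run the recursion a_{n+1} = (1 - t_n) a_n + b_n t_n + c_n with
# t_n in [0, 1], sum t_n = infinity, b_n -> 0 and sum c_n < infinity;
# Lemma 4.7 predicts a_n -> 0.
a = 1.0
for n in range(1, 200_001):
    t = 1.0 / math.sqrt(n)   # sum of t_n diverges
    b = 1.0 / n              # b_n tends to 0
    c = 1.0 / n ** 2         # sum of c_n converges
    a = (1 - t) * a + b * t + c
print(a)  # driven close to 0
```

Intuitively, the divergence of Σt_{ n } forces the geometric part to die out, while b_{ n } → 0 and the summability of c_{ n } make the forcing terms negligible in the limit.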
Proof. The proof directly follows from Lemma 2 in Liu [32].
Theorem 4.8. Let T, f, g and ρ be the same as in Theorem 3.3 such that the conditions (a)-(c) and (3.11) in Theorem 3.3 hold. Assume that S : K_{ r } → K_{ r } is a nearly uniformly L-Lipschitzian mapping with the sequence ${\left\{{b}_{n}\right\}}_{n=0}^{\infty}$ such that Fix(S) ∩ EGNRNVID(K_{ r }, T, f, g) ≠ ∅. Further, let Lγ < 1, where γ is the same as in (3.17). If there exists a constant α > 0 such that ${\prod}_{i=1}^{p}{\alpha}_{n,i}>\alpha $ for each n ≥ 0, then the iterative sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\infty}$ generated by Algorithm 4.1 converges strongly to the only element of Fix(S) ∩ EGNRNVID(K_{ r }, T, f, g).
Proof. According to Theorem 3.3, the problem (3.1) has a unique solution ${x}^{*}\in \mathcal{H}$ with g(x*) ∈ K_{ r }. Hence, in view of Lemma 3.2, $g\left({x}^{*}\right)={P}_{{K}_{r}}\left(f\left({x}^{*}\right)-\rho T\left({x}^{*}\right)\right)$. Since EGNRNVID(K_{ r }, T, f, g) is a singleton set, it follows from Fix(S) ∩ EGNRNVID(K_{ r }, T, f, g) ≠ ∅ that x* ∈ Fix(S). Accordingly, for each n ≥ 0 and i ∈ {1, 2,..., p}, we can write
where the sequences ${\left\{{\alpha}_{n,i}\right\}}_{n=0}^{\infty}$ and ${\left\{{\beta}_{n,i}\right\}}_{n=0}^{\infty}\left(i=1,2,\dots ,p\right)$ are the same as in Algorithm 4.1. Let Γ = sup_{n≥0}{∥l_{n,i} - x*∥ : i = 1, 2,..., p}. It follows from (4.2), (4.4), Proposition 2.10 and the assumptions that
Since T is κ-strongly monotone with respect to f and σ-Lipschitz continuous, and g is τ-strongly monotone and ι-Lipschitz continuous, in a similar way to the proofs of (3.14) and (3.15), we can prove that
and
Substituting (4.6) and (4.7) into (4.5), we obtain
As in the proofs of (4.5)-(4.8), we can establish that, for each i ∈ {1, 2,..., p - 2},
and
By using (4.9) and (4.10), we get
As in the proof of (4.11), applying (4.9) and (4.11), we have
Continuing this procedure in (4.10)-(4.12), we obtain
It follows from (4.8) and (4.13) that