
New Tseng’s extragradient methods for pseudomonotone variational inequality problems in Hadamard manifolds

Abstract

We propose Tseng’s extragradient methods for finding a solution of variational inequality problems associated with pseudomonotone vector fields in Hadamard manifolds. Under standard assumptions, namely pseudomonotonicity and Lipschitz continuity of the vector field, we prove that any sequence generated by the proposed methods converges to a solution of the variational inequality problem, whenever it exists. Moreover, we give some numerical experiments to illustrate our main results.

1 Introduction

The theory of variational inequality problems, first introduced by Stampacchia [1], has significant applications in numerous fields, for example, optimal control, boundary value problems, network equilibrium problems, and so forth. It has been widely studied in finite- and infinite-dimensional linear spaces; see, for instance, [2–7] and the bibliography therein.

Let K be a nonempty, closed, and convex subset of a real Hilbert space H and \(T : K \to H\) a single-valued operator. The variational inequality problem (VIP) is to find \(x^{*} \in K\) such that

$$ \bigl\langle Tx^{*}, y-x^{*} \bigr\rangle \geq 0, \quad \forall y \in K. $$
(1)

Many iterative algorithms have been proposed and analyzed for approximating a solution of the variational inequality problem (1), such as the simplest projection method [4, 8], the extragradient method [9], and the subgradient extragradient method [10–12].

One popular method is the well-known Tseng’s extragradient method presented by Tseng [13]. The algorithm is described as follows:

$$ \textstyle\begin{cases} y_{n} = \pi _{K}(x_{n} - \mu Tx_{n}), \\ x_{n+1} = y_{n} - \mu (Ty_{n} - Tx_{n}), \quad \forall n \geq 1. \end{cases} $$

The weak convergence of this method was established in [13] under the additional assumption that T is monotone and Lipschitz continuous, and further studied in [14] for the case when T is pseudomonotone and Lipschitz continuous. Recently, Tseng’s extragradient method has been studied by many authors; see, for instance, [6, 7, 15–18] and the references therein.
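To make the scheme concrete, the following is a minimal MATLAB sketch of Tseng’s method in the Euclidean setting; the affine monotone operator, the set \(K = \mathbb{R}^{n}_{+}\) (so that \(\pi _{K}\) is a componentwise clamp), and the step size are illustrative choices of ours and are not taken from the cited works.

% Minimal sketch of Tseng's extragradient method in R^n (illustrative data).
n  = 5;
A  = gallery('lehmer', n);       % symmetric positive definite, so T is monotone
b  = ones(n, 1);
T  = @(x) A*x - b;               % Lipschitz continuous with constant norm(A)
PK = @(x) max(x, 0);             % projection onto K = R^n_+
mu = 0.9/norm(A);                % fixed step size mu in (0, 1/L)
x  = zeros(n, 1);
for k = 1:200
    y = PK(x - mu*T(x));         % projection (forward-backward) step
    x = y - mu*(T(y) - T(x));    % Tseng's correction step
end
disp(x')                         % approximate solution of the VIP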

Recently, many nonlinear problems arising in fixed point theory, variational inclusions, equilibrium problems, and optimization have been transferred from linear spaces to nonlinear settings, because certain problems cannot be posed in a linear space and require a manifold structure. Extending concepts, techniques, and methods from linear spaces to Riemannian manifolds has some significant advantages. For example, some constrained optimization problems can be viewed as unconstrained ones from the Riemannian geometry perspective; moreover, some optimization problems with nonconvex objective functions become convex through the introduction of an appropriate Riemannian metric. The extension and development of techniques for nonlinear problems has therefore received a lot of attention, and many authors have focused on the Riemannian framework; see, for example, [19–28] and the references therein.

Let M be an Hadamard manifold, TM the tangent bundle of M, K a nonempty, closed, geodesic convex subset of M and exp an exponential mapping. In 2003, Németh [22] introduced the variational inequality problem on an Hadamard manifold which is to find \(x^{*} \in K\) such that

$$ \bigl\langle Tx^{*}, \exp ^{-1}_{x^{*}}y \bigr\rangle \geq 0, \quad \forall y \in K, $$
(2)

where \(T : K \to TM\) is a single-valued vector field. The author generalized some basic existence and uniqueness theorems of the classical theory of variational inequality problems from Euclidean spaces to Hadamard manifolds. We use \(VIP(T,K)\) to denote the set of solutions of the variational inequality problem (2). Inspired by [22], many authors further studied this problem in Riemannian manifolds. For instance, Li et al. [29] studied variational inequality problems on general Riemannian manifolds. Tang et al. [30] introduced the proximal point method for variational inequalities with pseudomonotone vector fields. Ferreira et al. [31] suggested an extragradient-type algorithm for solving the variational inequality problem (2) on Hadamard manifolds. Korpelevich’s method for solving variational inequality problems was presented by Tang and Huang [32]. Tang et al. [33] extended a projection-type method for variational inequalities from Euclidean spaces to Hadamard manifolds. In 2019, Chen et al. [23] proposed two Tseng’s extragradient methods to solve the variational inequality problem (2) in Hadamard manifolds. Under the assumption that T is pseudomonotone and Lipschitz continuous, the authors proved that the sequences generated by the proposed methods converge to solutions of variational inequality problems on Hadamard manifolds. The step sizes in the first algorithm are obtained by using a line search, and in the second they are found just by using two previous iterates, so it is unnecessary to know the Lipschitz constant.

Motivated by the above results, in this article we propose three effective Tseng’s extragradient methods for solving the variational inequality (2) on Hadamard manifolds. The step sizes in the first algorithm depend on the Lipschitz constant, in the second they are obtained by using a line search, and in the last one they are updated using only two previous iterates. For the last two algorithms, the Lipschitz constant need not be known. Under appropriate assumptions, we prove that any sequence generated by the proposed methods converges to a solution of the variational inequality (2).

The rest of the paper is organized as follows: In Sect. 2, we recall some basic notions of geometry and nonlinear analysis on Riemannian manifolds, which can be found in any standard book on manifolds, such as [34–37], and will be needed in the sequel. In Sect. 3, our three algorithms based on Tseng’s extragradient method for variational inequalities are presented, and we analyze their convergence on Hadamard manifolds. In Sect. 4, we provide numerical examples to show the efficiency of the proposed algorithms. In Sect. 5, we give remarks and conclusions.

2 Preliminaries

Let M be a connected finite-dimensional manifold. For \(p \in M\), let \(T_{p}M\) be the tangent space of M at p, which is a vector space of the same dimension as M. The tangent bundle of M is denoted by \(TM = \bigcup_{p \in M}T_{p}M\). A smooth mapping \(\langle \cdot , \cdot \rangle : TM \times TM \to \mathbb{R}\) is said to be a Riemannian metric on M if \(\langle \cdot , \cdot \rangle _{p} : T_{p}M \times T_{p}M \to \mathbb{R}\) is an inner product for all \(p \in M\). We denote by \(\Vert \cdot \Vert _{p}\) the norm corresponding to the inner product \(\langle \cdot , \cdot \rangle _{p}\) on \(T_{p}M\). If there is no confusion, we omit the subscript p. A differentiable manifold M endowed with a Riemannian metric \(\langle \cdot , \cdot \rangle \) is said to be a Riemannian manifold.

The length of a piecewise smooth curve \(\omega : [a,b] \to M\) joining \(\omega (a) = p\) to \(\omega (b) = q\) is defined by \(L(\omega ) = \int _{a}^{b} \Vert \omega {'}(t)\Vert \,dt\), where \(\omega {'}(t)\) is the tangent vector at \(\omega (t)\) in the tangent space \(T_{\omega (t)}M\). Minimizing this length functional over the set of all such curves, we obtain a Riemannian distance \(d(p,q)\), which induces the original topology on M.

Let ∇ be the Levi-Civita connection associated with the Riemannian manifold M. Given a smooth curve ω, a smooth vector field X along ω is said to be parallel if \(\nabla _{\omega {'}}X ={\mathbf{0}}\), where 0 denotes the zero section of TM. If \(\omega {'}\) itself is parallel, we say that ω is a geodesic, and in this case \(\Vert \omega {'}\Vert \) is constant. When \(\Vert \omega {'}\Vert = 1\), ω is said to be normalized. A geodesic joining p to q in M is said to be a minimizing geodesic if its length equals \(d(p,q)\).

The parallel transport \({\mathrm{P}}_{{\omega },{\omega (b)},{\omega (a)}}: T_{\omega (a)}M \to T_{ \omega (b)}M\) on the tangent bundle TM along \(\omega : [a,b] \to M\) with respect to ∇ is defined by

$$ {\mathrm{P}}_{{\omega },{\omega (b)},{\omega (a)}}(v) = V \bigl(\omega (b) \bigr), \quad \forall a,b \in \mathbb{R} \text{ and } v \in T_{\omega (a)}M, $$

where V is the unique vector field such that \(\nabla _{\omega {'}(t)}V = {\mathbf{0}}\) for all \(t \in [a,b]\) and \(V(\omega (a)) = v\). If ω is a minimizing geodesic joining p to q, then we write \({\mathrm{P}}_{q,p}\) instead of \({\mathrm{P}}_{\omega ,q,p}\). Note that, for every \(a,b,b_{1},b_{2} \in \mathbb{R}\), we have

$$ {\mathrm{P}}_{\omega (b_{2}),\omega (b_{1})} \circ {\mathrm{P}}_{\omega (b_{1}), \omega (a)} = { \mathrm{P}}_{\omega (b_{2}),\omega (a)} \quad \text{and} \quad {\mathrm{P}}^{-1}_{ \omega (b),\omega (a)} = {\mathrm{P}}_{\omega (a),\omega (b)}. $$

Also \({\mathrm{P}}_{\omega (b),\omega (a)}\) is an isometry from \(T_{\omega (a)}M\) to \(T_{\omega (b)}M\), that is, the parallel transport preserves the inner product

$$ \bigl\langle {\mathrm{P}}_{\omega (b),\omega (a)}(u), { \mathrm{P}}_{\omega (b), \omega (a)}(v) \bigr\rangle _{\omega (b)} = \langle u,v\rangle _{\omega (a)}, \quad \forall u,v \in T_{\omega (a)}M. $$
(3)

A Riemannian manifold M is said to be complete if for all \(p \in M\), all geodesics emanating from p are defined for all \(t \in \mathbb{R}\). The Hopf–Rinow theorem asserts that if M is complete then any pair of points in M can be joined by a minimizing geodesic; moreover, \((M,d)\) is a complete metric space and every closed bounded subset is compact. If M is a complete Riemannian manifold, then the exponential map \(\exp _{p} : T_{p}M \to M\) at \(p \in M\) is defined by

$$ \exp _{p}\nu =\omega _{\nu }(1,p), \quad \forall \nu \in T_{p}M, $$

where \(\omega _{\nu }(\cdot ,p)\) is the geodesic starting from p with velocity ν (i.e., \(\omega _{\nu }(0,p) = p\) and \(\omega {'}_{\nu }(0,p) = \nu \)). Then, for any value of t, we have \(\exp _{p}t\nu =\omega _{\nu }(t,p)\) and \(\exp _{p}{\mathbf{0}} =\omega _{\nu }(0,p) =p\). Note that the mapping \(\exp _{p}\) is differentiable on \(T_{p}M\) for every \(p \in M\). On an Hadamard manifold (defined below), the exponential map has an inverse \(\exp ^{-1}_{p} : M \to T_{p}M\). Moreover, for any \(p,q \in M\), we have \(d(p,q) = \Vert \exp _{p}^{-1}q\Vert \).

A complete simply connected Riemannian manifold of nonpositive sectional curvature is said to be an Hadamard manifold. In the remaining part of this paper, M will denote a finite-dimensional Hadamard manifold.

The following proposition is well known and will be helpful.

Proposition 1

([34])

Let \(p \in M\). The exponential mapping \(\exp _{p} : T_{p}M \to M\) is a diffeomorphism, and for any two points \(p,q \in M\) there exists a unique normalized geodesic joining p to q, which can be expressed by the formula

$$ \omega (t) = \exp _{p} t \exp _{p}^{-1}q, \quad \forall t \in [0,1]. $$

A geodesic triangle \(\triangle (p_{1},p_{2},p_{3})\) of a Riemannian manifold M is a set consisting of three points \(p_{1}\), \(p_{2}\), and \(p_{3}\), and three minimizing geodesics joining these points.

Proposition 2

([34])

Let \(\triangle (p_{1},p_{2},p_{3})\) be a geodesic triangle in M. Then

$$ d^{2}(p_{1},p_{2}) + d^{2}(p_{2},p_{3}) - 2 \bigl\langle \exp ^{-1}_{p_{2}}p_{1} , \exp ^{-1}_{p_{2}}p_{3} \bigr\rangle \leq d^{2}(p_{3},p_{1}), $$
(4)

and

$$ d^{2}(p_{1},p_{2}) \leq \bigl\langle \exp ^{-1}_{p_{1}}p_{3}, \exp ^{-1}_{p_{1}}p_{2} \bigr\rangle + \bigl\langle \exp ^{-1}_{p_{2}}p_{3}, \exp ^{-1}_{p_{2}}p_{1} \bigr\rangle . $$
(5)

Moreover, if α is the angle at \(p_{1}\), then we have

$$ \bigl\langle \exp ^{-1}_{p_{1}}p_{2}, \exp ^{-1}_{p_{1}}p_{3} \bigr\rangle = d(p_{2},p_{1})d(p_{1},p_{3}) \cos \alpha . $$

The following relation between geodesic triangles in Riemannian manifolds and triangles in \(\mathbb{R}^{2}\) can be found in [37].

Lemma 1

([37])

Let \(\triangle (p_{1},p_{2},p_{3})\) be a geodesic triangle in M. Then there exists a triangle \(\triangle (\overline{p_{1}}, \overline{p_{2}}, \overline{p_{3}})\) in \(\mathbb{R}^{2}\) for \(\triangle (p_{1},p_{2},p_{3})\) such that \(d(p_{i},p_{i+1}) = \Vert \overline{p_{i}} - \overline{p_{i+1}}\Vert \), with the indices taken modulo 3; it is unique up to an isometry of \(\mathbb{R}^{2}\).

The triangle \(\triangle (\overline{p_{1}}, \overline{p_{2}}, \overline{p_{3}})\) in Lemma 1 is said to be a comparison triangle for \(\triangle (p_{1},p_{2},p_{3})\). The points \(\overline{p_{1}}\), \(\overline{p_{2}}\), \(\overline{p_{3}}\) are called comparison points to the points \(p_{1}\), \(p_{2}\), \(p_{3}\), respectively.

Lemma 2

Let \(\triangle (p_{1},p_{2},p_{3})\) be a geodesic triangle in M and \(\triangle (\overline{p_{1}}, \overline{p_{2}}, \overline{p_{3}})\) be its comparison triangle.

  1. (i)

    Let \(\alpha _{1}\), \(\alpha _{2}\), \(\alpha _{3}\) (respectively, \(\overline{\alpha _{1}}\), \(\overline{\alpha _{2}}\), \(\overline{\alpha _{3}}\)) be the angles of \(\triangle (p_{1},p_{2},p_{3})\) (respectively, \(\triangle (\overline{p_{1}}, \overline{p_{2}}, \overline{p_{3}})\)) at the vertices \(p_{1}\), \(p_{2}\), \(p_{3}\) (respectively, \(\overline{p_{1}}\), \(\overline{p_{2}}\), \(\overline{p_{3}}\)). Then

    $$ \alpha _{1} \leq \overline{\alpha _{1}}, \quad \quad \alpha _{2} \leq \overline{\alpha _{2}}, \quad \textit{and} \quad \alpha _{3} \leq \overline{\alpha _{3}}. $$
  2. (ii)

    Let q be a point on the geodesic joining \(p_{1}\) to \(p_{2}\) and \(\overline{q}\) its comparison point in the interval \([\overline{p_{1}}, \overline{p_{2}}]\). If \(d(p_{1},q) = \Vert \overline{p_{1}} -\overline{q} \Vert \) and \(d(p_{2},q) = \Vert \overline{p_{2}} - \overline{q} \Vert \), then \(d(p_{3},q) \leq \Vert \overline{p_{3}} - \overline{q}\Vert \).

Definition 1

A subset K in an Hadamard manifold M is called geodesic convex if for all p and q in K, and for any geodesic \(\omega : [a,b] \to M\), \(a,b \in \mathbb{R}\) such that \(p =\omega (a)\) and \(q =\omega (b)\), one has \(\omega ((1-t)a + tb) \in K\), for all \(t \in [0,1]\).

Definition 2

A function \(f: M \to \mathbb{R}\) is called geodesic convex if for any geodesic ω in M, the composition function \(f \circ \omega : [a,b] \to \mathbb{R}\) is convex, that is,

$$ (f\circ \omega ) \bigl(ta + (1-t)b \bigr) \leq t(f \circ \omega ) (a) + (1-t) (f \circ \omega ) (b), \quad a,b \in \mathbb{R}\text{, and } \forall t \in [0,1] . $$

The following remarks and lemma will be helpful in the sequel.

Remark 1

([25])

If \(x,y \in M\) and \(v \in T_{x}M\), then

$$ \bigl\langle v, - \exp ^{-1}_{x}y \bigr\rangle = \bigl\langle v , { \mathrm{P}}_{x,y} \exp ^{-1}_{y}x \bigr\rangle = \bigl\langle {\mathrm{P}}_{y,x} v , \exp ^{-1}_{y}x \bigr\rangle . $$
(6)

Remark 2

([23])

Let \(x,y,z \in M\) and \(v \in T_{x}M\). By using (5) and Remark 1,

$$ \bigl\langle v, \exp ^{-1}_{x}y \bigr\rangle \leq \bigl\langle v, \exp ^{-1}_{x}z \bigr\rangle + \bigl\langle v, {\mathrm{P}}_{x,z}\exp ^{-1}_{z}y \bigr\rangle . $$
(7)

Lemma 3

([25])

Let \(x_{0} \in M\) and \(\{x_{n}\} \subset M\) with \(x_{n} \to x_{0}\). Then the following assertions hold:

  1. (i)

    For any \(y \in M\), we have \(\exp ^{-1}_{x_{n}}y \to \exp ^{-1}_{x_{0}}y\) and \(\exp ^{-1}_{y}x_{n} \to \exp ^{-1}_{y}x_{0}\);

  2. (ii)

    If \(v_{n} \in T_{x_{n}}M\) and \(v_{n} \to v_{0}\), then \(v_{0} \in T_{x_{0}}M\);

  3. (iii)

    Let \(u_{n}, v_{n} \in T_{x_{n}}M\) and \(u_{0}, v_{0} \in T_{x_{0}}M\), if \(u_{n} \to u_{0}\) and \(v_{n} \to v_{0}\), then \(\langle u_{n}, v_{n}\rangle \to \langle u_{0},v_{0}\rangle \);

  4. (iv)

    For every \(u \in T_{x_{0}}M\), the function \(V: M \to TM\), defined by \(V(x) = {\mathrm{P}}_{x,x_{0}}u\) for all \(x \in M\), is continuous on M.

Next, we present some concepts of monotonicity and Lipschitz continuity for single-valued vector fields. Let K be a nonempty subset of M and let \(\mathcal{X}(K)\) denote the set of all single-valued vector fields \(T : K \to TM\) such that \(Tx \in T_{x}M\) for each \(x \in K\).

Definition 3

([26, 32])

A vector field \(T \in \mathcal{X}(K)\) is called

  1. (i)

    monotone if

    $$ \bigl\langle Tx, \exp ^{-1}_{x}y \bigr\rangle + \bigl\langle Ty, \exp ^{-1}_{y}x \bigr\rangle \leq 0, \quad \forall x,y \in K; $$
  2. (ii)

    pseudomonotone if

    $$ \bigl\langle Tx, \exp ^{-1}_{x}y \bigr\rangle \geq 0 \quad \Rightarrow \quad \bigl\langle Ty, \exp ^{-1}_{y}x \bigr\rangle \leq 0, \quad \forall x,y \in K; $$
  3. (iii)

    Γ-Lipschitz continuous if there is \(\Gamma > 0\) such that

    $$ \Vert {\mathrm{P}}_{x,y}Ty-Tx \Vert \leq \Gamma d(x,y), \quad \forall x,y \in K. $$

Let us end this section with the following results which are essential in establishing our main convergence theorems.

Definition 4

([19])

Let K be a nonempty subset of M and \(\{x_{n}\}\) be a sequence in M. Then \(\{x_{n}\}\) is said to be Fejér monotone with respect to K if for all \(p \in K\) and \(n \in \mathbb{N}\),

$$ d(x_{n+1},p) \leq d(x_{n},p). $$

Lemma 4

([19])

Let K be a nonempty subset of M and \(\{x_{n}\} \subset M\) be a sequence in M such that \(\{x_{n}\}\) is Fejér monotone with respect to K. Then the following hold:

  1. (i)

    For every \(p \in K\), \(d(x_{n},p)\) converges;

  2. (ii)

    \(\{x_{n}\}\) is bounded;

  3. (iii)

    Assume that every cluster point of \(\{x_{n}\}\) belongs to K, then \(\{x_{n}\}\) converges to a point in K.

3 Main results

In this section, we discuss three algorithms for solving pseudomonotone variational inequality problems. Throughout the remainder of this paper, unless explicitly stated otherwise, K always denotes a nonempty, closed, geodesic convex subset of an Hadamard manifold M. Consider a vector field \(T \in \mathcal{X}(K)\). In order to solve the variational inequality problem (2), we consider the following assumptions:

  1. (H1)

    \(VIP(T,K)\) is nonempty.

  2. (H2)

    The vector field \(T \in \mathcal{X}(K)\) is pseudomonotone and Γ-Lipschitz continuous.

First, we introduce a Tseng’s extragradient method for the variational inequality (2) on Hadamard manifolds. The step sizes in this algorithm are obtained by using the Lipschitz constant. The algorithm is described as Algorithm 1.

Algorithm 1

Tseng’s extragradient method
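The steps of the algorithm are displayed as a figure in the original article. Based on the characterization of \(y_{n}\) used in Remark 3 and the explicit formula for \(x_{n+1}\) quoted in the proof of Lemma 5, they can be sketched as follows, where \(\pi _{K}\) denotes the metric projection onto K and \(0 < \mu {'} \leq \mu _{n} \leq \mu {''} < \frac{1}{\Gamma }\) as in Theorem 1:

$$ \textstyle\begin{cases} y_{n} = \pi _{K} \bigl(\exp _{x_{n}}(-\mu _{n} Tx_{n}) \bigr), \\ x_{n+1} = \exp _{y_{n}}\mu _{n} \bigl({\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} \bigr), \quad \forall n \geq 1. \end{cases} $$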

The following remark gives us a stopping criterion.

Remark 3

If \(x_{n} = y_{n}\), then \(x_{n}\) is a solution. Indeed, in view of the characterization of \(y_{n}\) in Algorithm 1, we get

$$\begin{aligned} 0 \leq & \biggl\langle {\mathrm{P}}_{y_{n},x_{n}} Tx_{n} - \frac{1}{\mu _{n}} \exp ^{-1}_{y_{n}}x_{n}, \exp ^{-1}_{y_{n}}y \biggr\rangle \\ =& \bigl\langle Tx_{n} , \exp ^{-1}_{x_{n}}y \bigr\rangle , \quad \forall y \in K, \end{aligned}$$

then \(x_{n} \in VIP(T,K)\).

To prove the convergence of Algorithm 1, we need the following lemma.

Lemma 5

Suppose that assumptions (H1)–(H2) hold. Let \(\{x_{n}\}\) be a sequence generated by Algorithm 1. Then

$$ d^{2}(x_{n+1},x) \leq d^{2}(x_{n},x) - \bigl(1-\Gamma ^{2} \mu _{n}^{2} \bigr)d^{2}(x_{n},y_{n}), \quad \forall x \in VIP(T,K). $$
(8)

Proof

Let \(x \in VIP(T,K)\). Then, from the characterization of \(y_{n}\) in Algorithm 1, we obtain

$$ \biggl\langle {\mathrm{P}}_{y_{n},x_{n}} Tx_{n} - \frac{1}{\mu _{n}} \exp ^{-1}_{y_{n}}x_{n}, \exp ^{-1}_{y_{n}}x \biggr\rangle \geq 0, $$

that is,

$$ \bigl\langle \exp ^{-1}_{y_{n}}x_{n}, \exp ^{-1}_{y_{n}}x \bigr\rangle \leq \mu _{n} \bigl\langle {\mathrm{P}}_{y_{n},x_{n}} Tx_{n} , \exp ^{-1}_{y_{n}}x \bigr\rangle . $$
(9)

As \(x \in VIP(T,K)\), we have \(\langle Tx,\exp ^{-1}_{x}y_{n}\rangle \geq 0\). Since T is pseudomonotone, we get \(\langle Ty_{n},\exp ^{-1}_{y_{n}}x\rangle \leq 0\). Now,

$$\begin{aligned} \bigl\langle Ty_{n} - {\mathrm{P}}_{y_{n},x_{n}}Tx_{n}, \exp ^{-1}_{y_{n}}x \bigr\rangle =& \bigl\langle Ty_{n} , \exp ^{-1}_{y_{n}}x \bigr\rangle - \bigl\langle { \mathrm{P}}_{y_{n},x_{n}}Tx_{n}, \exp ^{-1}_{y_{n}}x \bigr\rangle \\ \leq & - \bigl\langle {\mathrm{P}}_{y_{n},x_{n}}Tx_{n}, \exp ^{-1}_{y_{n}}x \bigr\rangle . \end{aligned}$$
(10)

Fix \(n \in \mathbb{N}\). Let \(\triangle (y_{n},x_{n},x) \subseteq M\) be a geodesic triangle with vertices \(y_{n}\), \(x_{n}\), and x, and \(\triangle (\overline{y_{n}},\overline{x_{n}},\overline{x}) \subseteq \mathbb{R}^{2}\) be the corresponding comparison triangle. Then, we have

$$ d(y_{n},x) = \Vert \overline{y_{n}} - \overline{x} \Vert , \quad \quad d(x_{n},x) = \Vert \overline{x_{n}} - \overline{x} \Vert , \quad {\textrm{and}} \quad d(y_{n},x_{n}) = \Vert \overline{y_{n}} - \overline{x_{n}} \Vert . $$

Again, letting \(\triangle (x_{n+1},y_{n},x) \subseteq M\) be a geodesic triangle with vertices \(x_{n+1}\), \(y_{n}\), and x, and \(\triangle (\overline{x_{n+1}},\overline{y_{n}},\overline{x}) \subseteq \mathbb{R}^{2}\) be the corresponding comparison triangle, one obtains

$$ d(x_{n+1},x) = \Vert \overline{x_{n+1}} - \overline{x} \Vert , \quad \quad d(y_{n},x) = \Vert \overline{y_{n}} - \overline{x} \Vert , \quad \text{and} \quad d(x_{n+1},y_{n}) = \Vert \overline{x_{n+1}} - \overline{y_{n}} \Vert . $$

Now,

$$\begin{aligned} d^{2}(x_{n+1},x) =& \Vert \overline{x_{n+1}} - \overline{x} \Vert ^{2} \\ =& \bigl\Vert (\overline{x_{n+1}} - \overline{y_{n}}) + ( \overline{y_{n}} - \overline{x}) \bigr\Vert ^{2} \\ =& \Vert \overline{y_{n}} - \overline{x} \Vert ^{2} + \Vert \overline{x_{n+1}} - \overline{y_{n}} \Vert ^{2} + 2 \langle \overline{x_{n+1}} - \overline{y_{n}} , \overline{y_{n}} - \overline{x} \rangle \\ =& \bigl\Vert (\overline{y_{n}} - \overline{x_{n}}) + ( \overline{x_{n}} - \overline{x}) \bigr\Vert ^{2} + \Vert \overline{x_{n+1}} - \overline{y_{n}} \Vert ^{2} + 2 \langle \overline{x_{n+1}} - \overline{y_{n}} , \overline{y_{n}} - \overline{x} \rangle \\ =& \Vert \overline{y_{n}} - \overline{x_{n}} \Vert ^{2} + \Vert \overline{x_{n}} - \overline{x} \Vert ^{2} + 2 \langle \overline{y_{n}} - \overline{x_{n}}, \overline{x_{n}} - \overline{x} \rangle + \Vert \overline{x_{n+1}} - \overline{y_{n}} \Vert ^{2} \\ & {} + 2 \langle \overline{x_{n+1}} - \overline{y_{n}} , \overline{y_{n}} - \overline{x} \rangle \\ =& \Vert \overline{y_{n}} - \overline{x_{n}} \Vert ^{2} + \Vert \overline{x_{n}} - \overline{x} \Vert ^{2} - 2 \langle \overline{y_{n}} - \overline{x_{n}}, \overline{y_{n}} - \overline{x_{n}} \rangle + 2 \langle \overline{y_{n}} - \overline{x_{n}}, \overline{y_{n}} - \overline{x} \rangle \\ & {} + \Vert \overline{x_{n+1}} - \overline{y_{n}} \Vert ^{2} + 2 \langle \overline{x_{n+1}} - \overline{y_{n}} , \overline{y_{n}} - \overline{x} \rangle +2 \Vert \overline{y_{n}} - \overline{x} \Vert ^{2} -2 \Vert \overline{y_{n}} - \overline{x} \Vert ^{2} \\ =& \Vert \overline{x_{n}} - \overline{x} \Vert ^{2} - \Vert \overline{y_{n}} - \overline{x_{n}} \Vert ^{2} + \Vert \overline{x_{n+1}} - \overline{y_{n}} \Vert ^{2} + 2 \langle \overline{x_{n}} - \overline{y_{n}}, \overline{x} - \overline{y_{n}} \rangle \\ & {} + 2 \langle \overline{x_{n+1}} - \overline{y_{n}} , \overline{y_{n}} - \overline{x} \rangle + 2 \langle \overline{y_{n}} - \overline{x} , \overline{y_{n}} - \overline{x} \rangle -2 \Vert \overline{y_{n}} - \overline{x} \Vert ^{2} \\ =& d^{2}(x_{n},x) - d^{2}(y_{n},x_{n}) + \Vert \overline{x_{n+1}} - \overline{y_{n}} \Vert ^{2} + 2 \langle \overline{x_{n}} - \overline{y_{n}}, \overline{x} - \overline{y_{n}} \rangle \\ & {} + 2 \langle \overline{x_{n+1}} - \overline{x}, \overline{y_{n}} - \overline{x} \rangle - 2d^{2}(y_{n},x) . \end{aligned}$$
(11)

If α and α̅ are the angles at the vertices \(y_{n}\) and \(\overline{y_{n}}\), respectively, then in view of Lemma 2 we get \(\alpha \leq \overline{\alpha }\). In addition, by Proposition 2, we have

$$\begin{aligned} \langle \overline{x_{n}} - \overline{y_{n}}, \overline{x} - \overline{y_{n}}\rangle =& \Vert \overline{x_{n}} - \overline{y_{n}} \Vert \Vert \overline{x} - \overline{y_{n}} \Vert \cos \overline{\alpha } \\ =& d(x_{n},y_{n})d(y_{n},x)\cos \overline{ \alpha } \\ \leq & d(x_{n},y_{n})d(y_{n},x)\cos \alpha \\ =& \bigl\langle \exp _{y_{n}}^{-1}x_{n}, \exp ^{-1}_{y_{n}}x \bigr\rangle . \end{aligned}$$
(12)

Repeating the same argument as above yields

$$ \langle \overline{x_{n+1}} - \overline{x}, \overline{y_{n}} - \overline{x}\rangle \leq \bigl\langle \exp _{x}^{-1}x_{n+1}, \exp ^{-1}_{x}y_{n} \bigr\rangle $$
(13)

and

$$\begin{aligned} \Vert \overline{x_{n+1}} - \overline{y_{n}} \Vert ^{2} =& \langle \overline{x_{n+1}} - \overline{y_{n}} , \overline{x_{n+1}} - \overline{y_{n}} \rangle \\ \leq & \bigl\langle \exp ^{-1}_{y_{n}}x_{n+1} ,\exp ^{-1}_{y_{n}}x_{n+1} \bigr\rangle \\ =& \bigl\Vert \exp ^{-1}_{y_{n}}x_{n+1} \bigr\Vert ^{2} \\ =& \mu _{n}^{2} \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} \Vert ^{2}. \end{aligned}$$
(14)

Substituting (12), (13), and (14) into (11), we get

$$\begin{aligned} d^{2}(x_{n+1},x) \leq & d^{2}(x_{n},x) - d^{2}(y_{n},x_{n}) + \mu _{n}^{2} \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} \Vert ^{2} \\ & {} +2 \bigl\langle \exp _{y_{n}}^{-1}x_{n}, \exp ^{-1}_{y_{n}}x \bigr\rangle + 2 \bigl\langle \exp _{x}^{-1}x_{n+1}, \exp ^{-1}_{x}y_{n} \bigr\rangle -2d^{2}(y_{n},x). \end{aligned}$$

It follows from Remark 2 that the last inequality becomes

$$\begin{aligned} d^{2}(x_{n+1},x) \leq & d^{2}(x_{n},x) - d^{2}(y_{n},x_{n}) + \mu _{n}^{2} \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} \Vert ^{2}\\ & {} +2 \bigl\langle \exp _{y_{n}}^{-1}x_{n}, \exp ^{-1}_{y_{n}}x \bigr\rangle + 2 \bigl\langle \exp _{x}^{-1}y_{n}, \exp ^{-1}_{x}y_{n} \bigr\rangle \\ & {} + 2 \bigl\langle {\mathrm{P}}_{x,y_{n}} \exp _{y_{n}}^{-1}x_{n+1}, \exp ^{-1}_{x}y_{n} \bigr\rangle -2d^{2}(y_{n},x) \\ =& d^{2}(x_{n},x) - d^{2}(y_{n},x_{n}) + \mu _{n}^{2} \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} \Vert ^{2} \\ & {} +2 \bigl\langle \exp _{y_{n}}^{-1}x_{n}, \exp ^{-1}_{y_{n}}x \bigr\rangle - 2 \bigl\langle \exp _{y_{n}}^{-1}x_{n+1}, \exp ^{-1}_{y_{n}}x \bigr\rangle . \end{aligned}$$

From the definition of \(x_{n+1} = \exp _{y_{n}}\mu _{n} ({\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n})\), we get \(\exp _{y_{n}}^{-1}x_{n+1} = \mu _{n} ({\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n})\). From the above inequality, we obtain

$$\begin{aligned} d^{2}(x_{n+1},x) \leq & d^{2}(x_{n},x) - d^{2}(y_{n},x_{n}) + \mu _{n}^{2} \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} \Vert ^{2} \\ & {} +2 \bigl\langle \exp _{y_{n}}^{-1}x_{n}, \exp ^{-1}_{y_{n}}x \bigr\rangle + 2\mu _{n} \bigl\langle Ty_{n} - {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} , \exp ^{-1}_{y_{n}}x \bigr\rangle . \end{aligned}$$
(15)

Substituting (9) and (10) into (15) and using the fact that T is Γ-Lipschitz continuous, we deduce that

$$\begin{aligned} d^{2}(x_{n+1},x) \leq & d^{2}(x_{n},x) - d^{2}(y_{n},x_{n}) + \Gamma ^{2} \mu _{n}^{2} d^{2}(x_{n},y_{n}) \\ & {} + 2 \mu _{n} \bigl\langle {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} , \exp ^{-1}_{y_{n}}x \bigr\rangle - 2 \mu _{n} \bigl\langle {\mathrm{P}}_{y_{n},x_{n}}Tx_{n}, \exp ^{-1}_{y_{n}}x \bigr\rangle . \end{aligned}$$

Thus,

$$ d^{2}(x_{n+1},x) \leq d^{2}(x_{n},x) - \bigl(1 - \Gamma ^{2} \mu _{n}^{2} \bigr) d^{2}(x_{n},y_{n}), \quad \forall x \in VIP(T,K). $$

Therefore, the proof is completed. □

In the light of the above lemma, we have the following result on the convergence of Algorithm 1.

Theorem 1

Suppose that assumptions (H1)–(H2) hold. Then the sequence \(\{x_{n}\}\) generated by Algorithm 1 converges to a solution of the variational inequality problem (2).

Proof

Since \(0 < \mu {'} \leq \mu _{n} \leq \mu {''} < \frac{1}{\Gamma }\), we deduce that \(0 < \Gamma \mu _{n} < 1\). This implies that \(0 < 1 - \Gamma ^{2} \mu _{n}^{2} < 1\). Let \(x^{*} \in VIP(T,K)\). In view of Lemma 5, we obtain

$$\begin{aligned} d^{2} \bigl(x_{n+1},x^{*} \bigr) \leq & d^{2} \bigl(x_{n},x^{*} \bigr) - \bigl(1-\Gamma ^{2} \mu _{n}^{2} \bigr)d^{2}(x_{n},y_{n}) \\ \leq & d^{2} \bigl(x_{n},x^{*} \bigr). \end{aligned}$$
(16)

Therefore, \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\).

Next, we show that \(\lim_{n \to \infty }d(x_{n},y_{n}) = 0\). By rearranging (16), and using \(\mu _{n} \in [\mu {'}, \mu {''} ]\), we have

$$\begin{aligned} d^{2}(x_{n},y_{n}) \leq & \frac{1}{1-\Gamma ^{2} \mu _{n}^{2}} \bigl(d^{2} \bigl(x_{n},x^{*} \bigr) - d^{2} \bigl(x_{n+1},x^{*} \bigr) \bigr) \\ \leq & \frac{1}{1-\Gamma ^{2} \mu ^{\prime \prime 2}} \bigl(d^{2} \bigl(x_{n},x^{*} \bigr) - d^{2} \bigl(x_{n+1},x^{*} \bigr) \bigr). \end{aligned}$$
(17)

Since \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\), by (i) of Lemma 4, \(\lim_{n \to \infty }d(x_{n},x^{*})\) exists. Letting \(n \to \infty \) in (17), we obtain

$$ \lim_{n \to \infty }d(x_{n},y_{n})= 0. $$
(18)

Since the sequence \(\{x_{n}\}\) is Fejér monotone, by (ii) of Lemma 4, \(\{x_{n}\}\) is bounded. Hence, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) which converges to a cluster point u of \(\{x_{n}\}\). In view of (18), we get \(y_{n_{k}} \to u\) as \(k \to \infty \). Next, we show that \(u \in VIP(T,K)\). From the characterization of \(y_{n_{k}}\) in Algorithm 1, we get

$$ \biggl\langle {\mathrm{P}}_{y_{n_{k}},x_{n_{k}}} Tx_{n_{k}} - \frac{1}{\mu _{n_{k}}} \exp ^{-1}_{y_{n_{k}}}x_{n_{k}}, \exp ^{-1}_{y_{n_{k}}}y \biggr\rangle \geq 0, \quad \forall y \in K, $$

and we further have

$$ \bigl\langle \exp ^{-1}_{y_{n_{k}}}x_{n_{k}} - \mu _{n_{k}} {\mathrm{P}}_{y_{n_{k}},x_{n_{k}}}Tx_{n_{k}}, \exp ^{-1}_{y_{n_{k}}}y \bigr\rangle \leq 0, \quad \forall y \in K. $$

Applying Remark 2 and the isometry of parallel transport (3) to the last inequality gives

$$\begin{aligned} 0 \geq & \bigl\langle \exp ^{-1}_{y_{n_{k}}}x_{n_{k}} - \mu _{n_{k}} {\mathrm{P}}_{y_{n_{k}},x_{n_{k}}}Tx_{n_{k}}, \exp ^{-1}_{y_{n_{k}}}y \bigr\rangle \\ =& \bigl\langle \exp ^{-1}_{y_{n_{k}}}x_{n_{k}} , \exp ^{-1}_{y_{n_{k}}}y \bigr\rangle - \mu _{n_{k}} \bigl\langle { \mathrm{P}}_{y_{n_{k}},x_{n_{k}}}Tx_{n_{k}}, \exp ^{-1}_{y_{n_{k}}}y \bigr\rangle \\ \geq & \bigl\langle \exp ^{-1}_{y_{n_{k}}}x_{n_{k}} , \exp ^{-1}_{y_{n_{k}}}y \bigr\rangle - \mu _{n_{k}} \bigl\langle { \mathrm{P}}_{y_{n_{k}},x_{n_{k}}}Tx_{n_{k}}, \exp ^{-1}_{y_{n_{k}}}x_{n_{k}} \bigr\rangle - \mu _{n_{k}} \bigl\langle {\mathrm{P}}_{y_{n_{k}},x_{n_{k}}}Tx_{n_{k}}, {\mathrm{P}}_{y_{n_{k}},x_{n_{k}}}\exp ^{-1}_{x_{n_{k}}}y \bigr\rangle \\ =& \bigl\langle \exp ^{-1}_{y_{n_{k}}}x_{n_{k}} , \exp ^{-1}_{y_{n_{k}}}y \bigr\rangle - \mu _{n_{k}} \bigl\langle { \mathrm{P}}_{y_{n_{k}},x_{n_{k}}}Tx_{n_{k}}, \exp ^{-1}_{y_{n_{k}}}x_{n_{k}} \bigr\rangle {} - \mu _{n_{k}} \bigl\langle Tx_{n_{k}}, \exp ^{-1}_{x_{n_{k}}}y \bigr\rangle . \end{aligned}$$
(19)

It follows from (19) that

$$\begin{aligned} \bigl\langle Tx_{n_{k}}, \exp ^{-1}_{x_{n_{k}}}y \bigr\rangle \geq \frac{1}{\mu _{n_{k}}} \bigl\langle \exp ^{-1}_{y_{n_{k}}}x_{n_{k}} , \exp ^{-1}_{y_{n_{k}}}y \bigr\rangle + \bigl\langle Tx_{n_{k}}, \exp ^{-1}_{x_{n_{k}}}y_{n_{k}} \bigr\rangle . \end{aligned}$$

Since \(\mu _{n} \in [\mu {'}, \mu {''} ]\), we further have

$$\begin{aligned} \bigl\langle Tx_{n_{k}}, \exp ^{-1}_{x_{n_{k}}}y \bigr\rangle \geq \frac{1}{\mu ^{\prime }} \bigl\langle \exp ^{-1}_{y_{n_{k}}}x_{n_{k}} , \exp ^{-1}_{y_{n_{k}}}y \bigr\rangle + \bigl\langle Tx_{n_{k}}, \exp ^{-1}_{x_{n_{k}}}y_{n_{k}} \bigr\rangle . \end{aligned}$$

Utilizing Lemma 3 and letting \(k \to \infty \), we get

$$ \bigl\langle Tu, \exp ^{-1}_{u} y \bigr\rangle \geq 0, \quad \forall y \in K, $$

which implies that \(u \in VIP(T,K)\). By (iii) of Lemma 4, the sequence \(\{x_{n}\}\) generated by Algorithm 1 converges to a solution of the problem (2). This completes the proof. □

The step sizes in Algorithm 1 rely on the Lipschitz constant. Unfortunately, this constant is often unknown or difficult to estimate. Next, we present a Tseng’s extragradient method for the case when the Lipschitz constant is unknown. The algorithm reads as Algorithm 2.

Algorithm 2

Tseng’s extragradient method with line search
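The precise line search is displayed as a figure in the original article. In a sketch consistent with Lemma 6 and the estimate used in the proof of Lemma 7, it can be read as follows: given \(\eta > 0\) and \(l, \tau \in (0,1)\), take \(\mu _{n} = \eta l^{m_{n}}\), where \(m_{n}\) is the smallest nonnegative integer m such that, with \(\mu = \eta l^{m}\) and \(y_{n} = \pi _{K} (\exp _{x_{n}}(-\mu Tx_{n}) )\),

$$ \mu \bigl\Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} \bigr\Vert \leq \tau d(x_{n},y_{n}), $$

and then set \(x_{n+1} = \exp _{y_{n}}\mu _{n} ({\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} )\) as in Algorithm 1.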

To prove the convergence of Algorithm 2, we need the following lemmas.

Lemma 6

([23])

The Armijo-like search rule in Algorithm 2 is well defined and

$$ \min \biggl\{ \eta , \frac{\tau l}{\Gamma } \biggr\} \leq \mu _{n} \leq \eta . $$

Lemma 7

Let \(\{x_{n}\}\) be a sequence generated by Algorithm 2. Then

$$ d^{2}(x_{n+1},x) \leq d^{2}(x_{n},x) - \bigl(1-\tau ^{2} \bigr)d^{2}(x_{n},y_{n}), \quad \forall x \in VIP(T,K). $$
(20)

Proof

Let \(x \in VIP(T,K)\). Then, according to the proof of Lemma 5, we can obtain the following inequality:

$$ d^{2}(x_{n+1},x) \leq d^{2}(x_{n},x) -d^{2}(x_{n},y_{n}) + \mu _{n}^{2} \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} \Vert ^{2}. $$

Using the line search rule in Algorithm 2, one obtains

$$\begin{aligned} d^{2}(x_{n+1},x) \leq & d^{2}(x_{n},x) -d^{2}(x_{n},y_{n}) + \tau ^{2} d^{2}(x_{n},y_{n}) \\ =& d^{2}(x_{n},x) - \bigl(1-\tau ^{2} \bigr)d^{2}(x_{n},y_{n}). \end{aligned}$$
(21)

Therefore, the proof is completed. □

Based on the above two lemmas, we have the following result on the convergence of Algorithm 2.

Theorem 2

Suppose that assumptions (H1)–(H2) hold. Then the sequence \(\{x_{n}\}\) generated by Algorithm 2 converges to a solution of the variational inequality problem (2).

Proof

Let \(x^{*} \in VIP(T,K)\). Since \(\tau \in (0,1)\), by (20), we get

$$\begin{aligned} d^{2} \bigl(x_{n+1},x^{*} \bigr) \leq & d^{2} \bigl(x_{n},x^{*} \bigr) - \bigl(1-\tau ^{2} \bigr)d^{2}(x_{n},y_{n}) \\ \leq & d^{2} \bigl(x_{n},x^{*} \bigr). \end{aligned}$$
(22)

Therefore, \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\).

Next, we show that \(\lim_{n \to \infty }d(x_{n},y_{n}) = 0\). In view of (22) and \(\tau \in (0,1)\), we have

$$ \bigl(1-\tau ^{2} \bigr)d^{2}(x_{n},y_{n}) \leq d^{2} \bigl(x_{n},x^{*} \bigr) - d^{2} \bigl(x_{n+1},x^{*} \bigr), $$

and we further have

$$ d^{2}(x_{n},y_{n}) \leq \frac{1}{1-\tau ^{2}} \bigl(d^{2} \bigl(x_{n},x^{*} \bigr) - d^{2} \bigl(x_{n+1},x^{*} \bigr) \bigr). $$
(23)

Since \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\), using Lemma 4, \(\lim_{n \to \infty }d(x_{n},x^{*})\) exists. By letting \(n \to \infty \) in (23), we obtain

$$ \lim_{n \to \infty }d(x_{n},y_{n})= 0. $$

Since the sequence \(\{x_{n}\}\) is Fejér monotone, by (ii) of Lemma 4, \(\{x_{n}\}\) is bounded. Hence, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) which converges to a cluster point u of \(\{x_{n}\}\). Now, using an argument similar to the proof of the fact that \(u \in VIP(T,K)\) in Theorem 1, we have \(u \in VIP(T,K)\), as required. By (iii) of Lemma 4, the sequence \(\{x_{n}\}\) generated by Algorithm 2 converges to a solution of the problem (2). Therefore, the proof is completed. □

Next, we present a modified Tseng’s extragradient method to solve the variational inequality (2). The step sizes in this algorithm are obtained by a simple update rule rather than a line search, which results in a lower computational cost per iteration. More precisely, the algorithm is designed as Algorithm 3.

Algorithm 3

Modified Tseng’s extragradient method
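The iteration is displayed as a figure in the original article. Combining the update rule for \(\mu _{n+1}\) quoted in the proof of Lemma 9 with the steps of Algorithm 1, a sketch of it reads: given \(x_{0} \in K\), \(\mu _{0} > 0\), and \(\tau \in (0,1)\), compute \(y_{n} = \pi _{K} (\exp _{x_{n}}(-\mu _{n} Tx_{n}) )\), \(x_{n+1} = \exp _{y_{n}}\mu _{n} ({\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} )\), and

$$ \mu _{n+1} = \textstyle\begin{cases} \min \bigl\{ \frac{\tau d(x_{n},y_{n})}{ \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n}- Ty_{n} \Vert }, \mu _{n} \bigr\} , & \text{if } {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} \neq Ty_{n}, \\ \mu _{n}, & \text{otherwise}. \end{cases} $$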

To prove the convergence of Algorithm 3, we need the following results.

Lemma 8

([23])

The sequence \(\{\mu _{n}\}\) generated by Algorithm 3 is monotonically decreasing with lower bound \(\min \left \{ \frac{\tau }{\Gamma },\mu _{0} \right \} \).

Remark 4

([23])

By Lemma 8, the limit of \(\{\mu _{n}\}\) exists. We denote \(\mu =\lim_{n \to \infty }\mu _{n}\). Then \(\mu >0\) and \(\lim_{n \to \infty } \left(\frac{\mu _{n}}{\mu _{n+1}} \right ) =1 \).

Lemma 9

Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3. Then

$$ d^{2}(x_{n+1},x) \leq d^{2}(x_{n},x) - \biggl(1- \mu ^{2}_{n} \frac{\tau ^{2}}{\mu ^{2}_{n+1}} \biggr)d^{2}(x_{n},y_{n}), \quad \forall x \in VIP(T,K). $$
(24)

Proof

It is easy to see that, by the definition of \(\{\mu _{n}\}\), we have

$$ \Vert P_{y_{n},x_{n}}Tx_{n} - Ty_{n} \Vert \leq \frac{\tau }{\mu _{n+1}}d(x_{n},y_{n}), \quad \forall n \in \mathbb{N}. $$
(25)

Indeed, if \({\mathrm{P}}_{y_{n},x_{n}}Tx_{n} = Ty_{n}\), then inequality (25) holds trivially. Otherwise, we have

$$ \mu _{n+1} = \min \biggl\{ \frac{\tau d(x_{n},y_{n})}{ \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n}- Ty_{n} \Vert }, \mu _{n} \biggr\} \leq \frac{\tau d(x_{n},y_{n})}{ \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n}- Ty_{n} \Vert }. $$

This implies that

$$ \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n}- Ty_{n} \Vert \leq \frac{\tau }{\mu _{n+1}}d(x_{n},y_{n}). $$

Therefore, inequality (25) holds in both cases.

Let \(x \in VIP(T,K)\). Arguing as in the proof of Lemma 5, we deduce the following inequality:

$$ d^{2}(x_{n+1},x) \leq d^{2}(x_{n},x) -d^{2}(x_{n},y_{n}) + \mu _{n}^{2} \Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n} \Vert ^{2}. $$

Substituting (25) into the above inequality, we get

$$\begin{aligned} d^{2}(x_{n+1},x) \leq & d^{2}(x_{n},x) -d^{2}(x_{n},y_{n}) + \mu ^{2}_{n} \frac{\tau ^{2}}{\mu _{n+1}^{2}}d^{2}(x_{n},y_{n}) \\ =& d^{2}(x_{n},x) - \biggl(1- \mu ^{2}_{n} \frac{\tau ^{2}}{\mu ^{2}_{n+1}} \biggr)d^{2}(x_{n},y_{n}). \end{aligned}$$

Therefore, the proof is completed. □

With the above results in hand, we are now ready to prove the convergence of Algorithm 3.

Theorem 3

Suppose that assumptions (H1)–(H2) hold. Then the sequence \(\{x_{n}\}\) generated by Algorithm 3 converges to a solution of the variational inequality problem (2).

Proof

Let \(x^{*} \in VIP(T,K)\). Then by (24) we have

$$\begin{aligned} d^{2} \bigl(x_{n+1},x^{*} \bigr) \leq d^{2} \bigl(x_{n},x^{*} \bigr) - \biggl(1- \mu ^{2}_{n} \frac{\tau ^{2}}{\mu ^{2}_{n+1}} \biggr)d^{2}(x_{n},y_{n}). \end{aligned}$$
(26)

From Remark 4, we have

$$ \lim_{n \to \infty } \biggl(1 - \mu _{n}^{2} \frac{\tau ^{2}}{\mu _{n+1}^{2}} \biggr) = 1 - \tau ^{2} >0, $$
(27)

that is, there exists \(N \geq 0\) such that \(1 - \mu _{n}^{2} \frac{\tau ^{2}}{\mu _{n+1}^{2}} >0\) for all \(n \geq N\). This implies that \(d(x_{n+1},x^{*}) \leq d(x_{n},x^{*})\) for all \(n \geq N\). Therefore, \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\).

Next, we show that \(\lim_{n \to \infty }d(x_{n},y_{n}) = 0\). From (26), we have

$$ \biggl(1 - \mu _{n}^{2} \frac{\tau ^{2}}{\mu _{n+1}^{2}} \biggr)d^{2}(x_{n},y_{n}) \leq d^{2} \bigl(x_{n},x^{*} \bigr) - d^{2} \bigl(x_{n+1},x^{*} \bigr), $$

and we further have

$$ d^{2}(x_{n},y_{n}) \leq \frac{1}{1 - \mu _{n}^{2} \frac{\tau ^{2}}{\mu _{n+1}^{2}}} \bigl(d^{2} \bigl(x_{n},x^{*} \bigr) - d^{2} \bigl(x_{n+1},x^{*} \bigr) \bigr). $$
(28)

Since \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\), using Lemma 4, \(\lim_{n \to \infty }d(x_{n},x^{*})\) exists. By letting \(n \to \infty \) in (28), we obtain

$$ \lim_{n \to \infty }d(x_{n},y_{n})= 0. $$

Since the sequence \(\{x_{n}\}\) is Fejér monotone, by (ii) of Lemma 4, \(\{x_{n}\}\) is bounded. Hence, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) which converges to a cluster point u of \(\{x_{n}\}\). Now, using an argument similar to the proof of the fact that \(u \in VIP(T,K)\) in Theorem 1, we have \(u \in VIP(T,K)\), as required. By (iii) of Lemma 4, the sequence \(\{x_{n}\}\) generated by Algorithm 3 converges to a solution of the problem (2). Therefore, the proof is completed. □

4 Numerical experiments

In this section, we present three numerical examples in the framework of Hadamard manifolds to illustrate the convergence of Algorithms 1, 2, and 3. All programs are written in Matlab R2016b and run on a PC with an Intel(R) Core(TM) i7 @ 1.80 GHz and 8.00 GB RAM.

Let \(M := \mathbb{R}^{++} = \{x \in \mathbb{R} : x >0\}\), and let \((\mathbb{R}^{++},\langle \cdot , \cdot \rangle )\) be the Riemannian manifold with the Riemannian metric \(\langle \cdot , \cdot \rangle \) defined by

$$ \langle u, v \rangle := \frac{1}{x^{2}}uv, $$

for all vectors \(u,v \in T_{x}M\), where the tangent space \(T_{x}M\) at \(x \in M\) equals \(\mathbb{R}\). In addition, the parallel transport is the identity mapping. The Riemannian distance \(d: M \times M \to \mathbb{R}^{+} \) is defined by

$$ d(x,y) : = \biggl\vert \ln \frac{x}{y} \biggr\vert , \quad \forall x, y \in M; $$

see, for instance, [35]. Then, \((\mathbb{R}^{++},\langle \cdot , \cdot \rangle )\) is an Hadamard manifold, and the unique geodesic \(\omega : \mathbb{R} \to M\) starting from \(\omega (0) = x\) with \(v = \omega {'}(0) \in T_{x}M\) is defined by \(\omega (t) := x e^{(vt/x)}\). Therefore,

$$ \exp _{x}tv = x e^{(vt/x)}. $$

The inverse exponential map is defined by

$$ \exp ^{-1}_{x}y =\omega {'}(0) = x \ln \frac{y}{x}. $$
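For the experiments below, these formulas translate into a few one-line MATLAB helpers; this is a small sketch and the function names are ours:

% Riemannian toolbox for M = R^{++} with metric <u,v>_x = uv/x^2 (names are ours).
expmap = @(x, v) x.*exp(v./x);      % exp_x(v) = x e^{(v/x)}
logmap = @(x, y) x.*log(y./x);      % exp_x^{-1}(y) = x ln(y/x)
dist   = @(x, y) abs(log(x./y));    % d(x,y) = |ln(x/y)|
ptrans = @(x, y, v) v;              % parallel transport is the identity mapping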

Example 1

Let \(K = [1,+\infty )\) be a geodesic convex subset of \(\mathbb{R}^{++}\), and \(T: K \to \mathbb{R}\) be a single-valued vector field defined by

$$ Tx := \frac{x}{2}\ln x, \quad \forall x \in K. $$

Let \(x,y \in K\) and \(\langle Tx, \exp ^{-1}_{x}y \rangle \geq 0\), then we have

$$\begin{aligned} \bigl\langle Ty, \exp ^{-1}_{y}x \bigr\rangle \leq & \bigl\langle Ty, \exp ^{-1}_{y}x \bigr\rangle + \bigl\langle Tx, \exp ^{-1}_{x}y \bigr\rangle \\ =& \frac{1}{y^{2}} \biggl(\frac{y}{2} \ln y \biggr) \biggl(y \ln \frac{x}{y} \biggr) + \frac{1}{x^{2}} \biggl(\frac{x}{2} \ln x \biggr) \biggl(x \ln \frac{y}{x} \biggr) \\ =& \frac{\ln y}{2} \ln \frac{x}{y} + \frac{\ln x}{2} \ln \frac{y}{x} \\ =& -\frac{1}{2} \ln ^{2} \frac{x}{y} \\ \leq & 0. \end{aligned}$$

Hence, T is pseudomonotone. Next, we show that T is Lipschitz continuous. Given \({x,y \in K}\),

$$\begin{aligned} \Vert {\mathrm{P}}_{x,y}Tx - Ty \Vert ^{2} =& \biggl\Vert \frac{x}{2}\ln x - \frac{y}{2}\ln y \biggr\Vert ^{2} \\ =& \biggl\Vert \frac{x}{2}\ln x \biggr\Vert ^{2} -2 \biggl\Vert \frac{x}{2} \ln x \biggr\Vert \biggl\Vert \frac{y}{2}\ln y \biggr\Vert + \biggl\Vert \frac{y}{2}\ln y \biggr\Vert ^{2} \\ =& \frac{1}{4} \ln ^{2} x - \frac{2}{4} \ln x \ln y + \frac{1}{4} \ln ^{2} y \\ =& \frac{1}{4} \ln ^{2} \frac{x}{y} \\ =& \frac{1}{4}d^{2}(x,y), \end{aligned}$$

and thus, T is \(1/2\)-Lipschitz continuous. So, T is pseudomonotone and \(1/2\)-Lipschitz continuous. Clearly, the variational inequality has a unique solution. Hence,

$$\begin{aligned}& \begin{aligned} \bigl\langle Tx^{*}, \exp ^{-1}_{x^{*}}y \bigr\rangle ={}& \frac{1}{x^{*^{2}}} \biggl(\frac{x^{*}}{2} \ln x^{*} \biggr) \biggl(x^{*} \ln \frac{x^{*}}{y} \biggr) \\ ={}& \frac{\ln x^{*}}{2} \ln \frac{x^{*}}{y} \geq 0, \quad y \in K \end{aligned} \\& \quad \Leftrightarrow \quad x^{*} = 1. \end{aligned}$$

We deduce that \(VIP(T,K) = \{1\}\). Choose \(\eta = l = \tau =0.5\) and let \(\mu _{n} = \frac{1}{2} - \frac{1}{n+3}\). With the initial point \(x_{0} = 2\), the numerical results of Algorithms 1, 2, and 3 are shown in Table 1 and Fig. 1.

Figure 1

Iterative process of Example 1

Table 1 The numerical results for Example 1
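For reproducibility, the following is a minimal MATLAB sketch of Algorithm 1 applied to Example 1, using the helpers above and the step sizes \(\mu _{n} = \frac{1}{2} - \frac{1}{n+3}\); the tolerance in the stopping test and the iteration cap are our own choices.

% Sketch: Algorithm 1 on Example 1 (Tx = (x/2) ln x, K = [1, +infty), x_0 = 2).
T      = @(x) (x/2).*log(x);
PK     = @(x) max(x, 1);                 % metric projection onto K = [1, +infty)
expmap = @(x, v) x.*exp(v./x);
dist   = @(x, y) abs(log(x./y));
x = 2;
for n = 1:100
    mu = 1/2 - 1/(n+3);                  % step sizes used in the experiments
    y  = PK(expmap(x, -mu*T(x)));
    if dist(x, y) <= 1e-8, break; end    % x_n = y_n means x_n solves (2) (Remark 3)
    x  = expmap(y, mu*(T(x) - T(y)));    % parallel transport is the identity here
end
fprintf('approximate solution: %.8f\n', x)   % converges to x* = 1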

Example 2

Let \(K = [1,2]\) be a geodesic convex subset of \(\mathbb{R}^{++}\), and \(T: K \to \mathbb{R}\) be a single-valued vector field defined by

$$ Tx := -x \ln \frac{2}{x}, \quad \forall x \in K. $$

Let \(x,y \in K\) and \(\langle Tx, \exp ^{-1}_{x}y \rangle \geq 0\). Then we have

$$\begin{aligned} \bigl\langle Ty, \exp ^{-1}_{y}x \bigr\rangle \leq & \bigl\langle Ty, \exp ^{-1}_{y}x \bigr\rangle + \bigl\langle Tx, \exp ^{-1}_{x}y \bigr\rangle \\ =& \frac{1}{y^{2}} \biggl(- y \ln \frac{2}{y} \biggr) \biggl(y \ln \frac{x}{y} \biggr) + \frac{1}{x^{2}} \biggl(-x \ln \frac{2}{x} \biggr) \biggl(x \ln \frac{y}{x} \biggr) \\ =& \biggl(- \ln \frac{2}{y} \biggr) \biggl(\ln \frac{x}{y} \biggr) + \biggl(- \ln \frac{2}{x} \biggr) \biggl( \ln \frac{y}{x} \biggr) \\ =& -\ln 2 \ln \frac{x}{y} + \ln y \ln \frac{x}{y} - \ln 2 \ln \frac{y}{x} + \ln x \ln \frac{y}{x} \\ =& -\ln 2 \ln 1 + \ln y \ln \frac{x}{y} + \ln x \ln \frac{y}{x} \\ =& \bigl( \ln x \ln y - \ln ^{2} y + \ln x \ln y - \ln ^{2} x \bigr) \\ =& - \ln ^{2} \frac{x}{y} \\ \leq & 0. \end{aligned}$$

Hence, T is pseudomonotone, and it is easy to see that T is 1-Lipschitz continuous. Thereby, T is pseudomonotone and 1-Lipschitz continuous. Clearly, the variational inequality has a unique solution. Hence,

$$\begin{aligned}& \begin{aligned} \bigl\langle Tx^{*}, \exp ^{-1}_{x^{*}}y \bigr\rangle ={}& \frac{1}{x^{*^{2}}} \biggl( - x^{*} \ln \frac{2}{x^{*}} \biggr) \biggl(x^{*} \ln \frac{x^{*}}{y} \biggr) \\ ={}& - \ln \frac{2}{x^{*}} \ln \frac{x^{*}}{y} \geq 0, \quad y \in K \end{aligned} \\& \quad \Leftrightarrow \quad x^{*} = 2. \end{aligned}$$

We deduce that \(VIP(T,K) = \{2\}\). Choose \(\eta = l = \tau =0.5\) and let \(\mu _{n} = \frac{1}{2} - \frac{1}{n+3}\). With the initial point \(x_{0} = 1\), the numerical results of Algorithms 1, 2, and 3 are shown in Table 2 and Fig. 2.

Figure 2

Iterative process of Example 2

Table 2 The numerical results for Example 2

Example 3

Following [38], let \(\mathbb{R}^{++}_{n}\) be the product space of \(\mathbb{R}^{++}\), that is, \(\mathbb{R}^{++}_{n} = \{x = (x_{1},x_{2},\ldots ,x_{n})\in \mathbb{R}^{n} : x_{i} >0, \ i = 1,\ldots ,n\}\). Let \(M = (\mathbb{R}^{++}_{n}, \langle \cdot , \cdot \rangle )\) with the metric defined by \(\langle u, v \rangle := u^{T}V(x)v\), for \(x \in \mathbb{R}_{n}^{++}\) and \(u,v \in T_{x}\mathbb{R}_{n}^{++} \), where \(V(x)\) is the diagonal matrix defined by \(V(x) = \operatorname{diag} (x_{1}^{-2},x_{2}^{-2},\ldots ,x_{n}^{-2} )\). In addition, the Riemannian distance is defined by \(d(x,y) := \sqrt{\sum_{i=1}^{n}\ln ^{2}\frac{x_{i}}{y_{i}}} \), for all \(x, y \in \mathbb{R}_{n}^{++}\).

Let \(K = \{x = (x_{1},x_{2},\ldots ,x_{n}) : 1 \leq x_{i} \leq 10, i = 1,2,\ldots ,n\} \) be a closed, geodesic convex subset of \(\mathbb{R}_{n}^{++}\) and \(T: K \to \mathbb{R}^{n}\) be a single-valued vector field defined by

$$ (Tx)_{i} := x_{i}\ln x_{i}, \quad i = 1,2, \ldots , n. $$

This vector field is monotone and 1-Lipschitz continuous on \(\mathbb{R}_{n}^{++}\); see [39, Example 1]. We choose \(\eta = l = \tau =0.5\) and let \(\mu _{n} = \frac{1}{2} - \frac{1}{n+3}\). The starting points are randomly generated by the MATLAB built-in function rand:

$$ x_{0} = 1+{\mathtt{rand}}(n,1). $$

The stopping criterion is \(d(x_{n},y_{n}) \leq \epsilon \). For the numerical experiment, we take \(\epsilon = 10^{-5}\), and \(n = 50, 100, 500\). The number of iterations (Iteration) and the computing time (Time), measured in seconds, are reported in Table 3.

Table 3 The numerical results of Algorithms 1, 2, and 3 for the number of iterations (Iteration) and the computing time (Time) measured in seconds with \(n = 20, 50, 100, 200\)
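As an illustration of the step-size update in Algorithm 3, the following MATLAB sketch runs it on Example 3. We assume that the metric projection onto the box K acts componentwise as a clamp (the metric is a product metric), and \(\mu _{0} = 0.5\) is an illustrative choice.

% Sketch: Algorithm 3 on Example 3 ((Tx)_i = x_i ln x_i, K = [1,10]^n).
n      = 50;
T      = @(x) x.*log(x);
PK     = @(x) min(max(x, 1), 10);        % componentwise clamp onto the box K
expmap = @(x, v) x.*exp(v./x);
dist   = @(x, y) norm(log(x./y));        % d(x,y) = sqrt(sum_i ln^2(x_i/y_i))
tau = 0.5;  mu = 0.5;  eps0 = 1e-5;
x = 1 + rand(n, 1);                      % random starting point as in the text
for k = 1:10000
    Tx = T(x);
    y  = PK(expmap(x, -mu*Tx));
    if dist(x, y) <= eps0, break; end    % stopping criterion d(x_n, y_n) <= eps
    Ty = T(y);
    xn = expmap(y, mu*(Tx - Ty));        % Tseng step (parallel transport = identity)
    nrm = norm((Tx - Ty)./y);            % ||P_{y,x} Tx - Ty|| in T_y M
    if nrm > 0
        mu = min(tau*dist(x, y)/nrm, mu);% step-size update of Algorithm 3
    end
    x = xn;
end
fprintf('n = %d, iterations = %d\n', n, k)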

The above results show that Algorithm 1 is much faster than Algorithms 2 and 3; in particular, when the Lipschitz constant is known, Algorithm 1 works well. When the Lipschitz constant is not available, we observe that Algorithm 2 is faster than Algorithm 3; however, Algorithm 3 has a lower computational cost per iteration than Algorithm 2.

5 Conclusions

In this paper, we focus on the variational inequality problem in Hadamard manifolds. Three Tseng-type methods are proposed to solve pseudomonotone variational inequality problems. The convergence of the proposed algorithms is established under standard conditions. Moreover, numerical experiments are supplied to illustrate the effectiveness of our algorithms.

Availability of data and materials

Not applicable.

References

  1. Stampacchia, G.: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413–4416 (1964)


  2. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Classics in Applied Mathematics, vol. 31. SIAM, Philadelphia (2000). https://doi.org/10.1137/1.9780898719451. Reprint of the 1980 original


  3. Iusem, A.N., Nasri, M.: Korpelevich’s method for variational inequality problems in Banach spaces. J. Glob. Optim. 50(1), 59–76 (2011). https://doi.org/10.1007/s10898-010-9613-x


  4. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Vol. II. Springer Series in Operations Research. Springer, New York (2003)


  5. Hu, Y.H., Song, W.: Weak sharp solutions for variational inequalities in Banach spaces. J. Math. Anal. Appl. 374(1), 118–132 (2011). https://doi.org/10.1016/j.jmaa.2010.08.062


  6. Thong, D.V., Hieu, D.V.: Inertial extragradient algorithms for strongly pseudomonotone variational inequalities. J. Comput. Appl. Math. 341, 80–98 (2018). https://doi.org/10.1016/j.cam.2018.03.019


  7. Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 78(4), 1045–1060 (2018). https://doi.org/10.1007/s11075-017-0412-z


  8. Khanh, P.D., Vuong, P.T.: Modified projection method for strongly pseudomonotone variational inequalities. J. Glob. Optim. 58(2), 341–350 (2014). https://doi.org/10.1007/s10898-013-0042-5


  9. Korpelevič, G.M.: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12(4), 747–756 (1976)


  10. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148(2), 318–335 (2011). https://doi.org/10.1007/s10957-010-9757-3


  11. Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 26(4–5), 827–845 (2011). https://doi.org/10.1080/10556788.2010.551536


  12. Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61(9), 1119–1132 (2012). https://doi.org/10.1080/02331934.2010.539689


  13. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431–446 (2000). https://doi.org/10.1137/S0363012998338806


  14. Boţ, R.I., Csetnek, E.R., Vuong, P.T.: The forward-backward-forward method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces. Eur. J. Oper. Res. 287(1), 49–60 (2020). https://doi.org/10.1016/j.ejor.2020.04.035


  15. Boţ, R.I., Csetnek, E.R.: An inertial Tseng’s type proximal algorithm for nonsmooth and nonconvex optimization problems. J. Optim. Theory Appl. 171(2), 600–616 (2016). https://doi.org/10.1007/s10957-015-0730-z


  16. Wang, F., Xu, H.K.: Weak and strong convergence theorems for variational inequality and fixed point problems with Tseng’s extragradient method. Taiwan. J. Math. 16(3), 1125–1136 (2012). https://doi.org/10.11650/twjm/1500406682


  17. Thong, D.V., Van Hieu, D.: Strong convergence of extragradient methods with a new step size for solving variational inequality problems. Comput. Appl. Math. 38(3), Article ID 136 (2019). https://doi.org/10.1007/s40314-019-0899-0


  18. Thong, D.V., Van Hieu, D.: New extragradient methods for solving variational inequality problems and fixed point problems. J. Fixed Point Theory Appl. 20(3), Article ID 129 (2018). https://doi.org/10.1007/s11784-018-0610-x


  19. Ferreira, O.P., Oliveira, P.R.: Proximal point algorithm on Riemannian manifolds. Optimization 51(2), 257–270 (2002). https://doi.org/10.1080/02331930290019413


  20. Li, C., López, G., Martín-Márquez, V., Wang, J.H.: Resolvents of set-valued monotone vector fields in Hadamard manifolds. Set-Valued Var. Anal. 19(3), 361–383 (2011). https://doi.org/10.1007/s11228-010-0169-1


  21. Ansari, Q.H., Babu, F., Li, X.B.: Variational inclusion problems in Hadamard manifolds. J. Nonlinear Convex Anal. 19(2), 219–237 (2018)


  22. Németh, S.Z.: Variational inequalities on Hadamard manifolds. Nonlinear Anal. 52(5), 1491–1498 (2003). https://doi.org/10.1016/S0362-546X(02)00266-3


  23. Chen, J., Liu, S., Chang, X.: Tseng’s extragradient methods for variational inequality on Hadamard manifolds. Appl. Anal. (2019). https://doi.org/10.1080/00036811.2019.1695783


  24. Ansari, Q.H., Babu, F.: Existence and boundedness of solutions to inclusion problems for maximal monotone vector fields in Hadamard manifolds. Optim. Lett. 14(3), 711–727 (2020). https://doi.org/10.1007/s11590-018-01381-x


  25. Li, C., López, G., Martín-Márquez, V.: Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 79(3), 663–683 (2009). https://doi.org/10.1112/jlms/jdn087


  26. Wang, J.H., López, G., Martín-Márquez, V., Li, C.: Monotone and accretive vector fields on Riemannian manifolds. J. Optim. Theory Appl. 146(3), 691–708 (2010). https://doi.org/10.1007/s10957-010-9688-z


  27. Li, C., Yao, J.C.: Variational inequalities for set-valued vector fields on Riemannian manifolds: convexity of the solution set and the proximal point algorithm. SIAM J. Control Optim. 50(4), 2486–2514 (2012). https://doi.org/10.1137/110834962


  28. Fan, J., Qin, X., Tan, B.: Tseng’s extragradient algorithm for pseudomonotone variational inequalities on Hadamard manifolds. Appl. Anal., 1–14 (2020)

  29. Li, S.L., Li, C., Liou, Y.C., Yao, J.C.: Existence of solutions for variational inequalities on Riemannian manifolds. Nonlinear Anal. 71(11), 5695–5706 (2009). https://doi.org/10.1016/j.na.2009.04.048


  30. Tang, Gj., Zhou, Lw., Huang, Nj.: The proximal point algorithm for pseudomonotone variational inequalities on Hadamard manifolds. Optim. Lett. 7(4), 779–790 (2013). https://doi.org/10.1007/s11590-012-0459-7


  31. Ferreira, O.P., Pérez, L.R.L., Németh, S.Z.: Singularities of monotone vector fields and an extragradient-type algorithm. J. Glob. Optim. 31(1), 133–151 (2005). https://doi.org/10.1007/s10898-003-3780-y


  32. Tang, Gj., Huang, Nj.: Korpelevich’s method for variational inequality problems on Hadamard manifolds. J. Glob. Optim. 54(3), 493–509 (2012). https://doi.org/10.1007/s10898-011-9773-3


  33. Tang, Gj., Wang, X., Liu, Hw.: A projection-type method for variational inequalities on Hadamard manifolds and verification of solution existence. Optimization 64(5), 1081–1096 (2015). https://doi.org/10.1080/02331934.2013.840622


  34. Sakai, T.: Riemannian Geometry. Translations of Mathematical Monographs, vol. 149. Am. Math. Soc., Providence (1996). Translated from the 1992 Japanese original by the author


  35. do Carmo, M.Pa.: Riemannian Geometry. Mathematics: Theory & Applications. Birkhäuser Boston, Boston (1992). https://doi.org/10.1007/978-1-4757-2201-7. Translated from the second Portuguese edition by Francis Flaherty


  36. Udrişte, C.: Convex Functions and Optimization Methods on Riemannian Manifolds. Mathematics and Its Applications, vol. 297. Kluwer Academic, Dordrecht (1994). https://doi.org/10.1007/978-94-015-8390-9


  37. Bridson, M.R., Haefliger, A.: Metric Spaces of Non-positive Curvature. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 319. Springer, Berlin (1999). https://doi.org/10.1007/978-3-662-12494-9


  38. Da Cruz Neto, J.X., Ferreira, O.P., Pérez, L.R.L., Németh, S.Z.: Convex- and monotone-transformable mathematical programming problems and a proximal-like point method. J. Glob. Optim. 35(1), 53–69 (2006). https://doi.org/10.1007/s10898-005-6741-9


  39. Ansari, Q.H., Babu, F.: Proximal point algorithm for inclusion problems in Hadamard manifolds with applications. Optim. Lett. (2019). https://doi.org/10.1007/s11590-019-01483-0



Acknowledgements

This research is supported by Postdoctoral Fellowship from King Mongkut’s University of Technology Thonburi (KMUTT), Thailand. Moreover, this research project is supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2021 under project number 64A306000005. The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT.

Funding

Postdoctoral Fellowship from King Mongkut’s University of Technology Thonburi (KMUTT), Thailand. Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2021 under project number 64A306000005. The Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT, Thailand.

Author information


Contributions

All authors contributed equally in writing this article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Parin Chaipunya.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Khammahawong, K., Kumam, P., Chaipunya, P. et al. New Tseng’s extragradient methods for pseudomonotone variational inequality problems in Hadamard manifolds. Fixed Point Theory Algorithms Sci Eng 2021, 5 (2021). https://doi.org/10.1186/s13663-021-00689-1


  • DOI: https://doi.org/10.1186/s13663-021-00689-1
