
Generalized proximal-type methods for weak vector variational inequality problems in Banach spaces

Abstract

In this paper, we propose a class of generalized proximal-type methods, based on Bregman functions, for solving weak vector variational inequality problems in Banach spaces. We carry out a convergence analysis of the method and prove weak convergence of the generated sequence to a solution of the weak vector variational inequality problem under some mild conditions. Our results extend some known results to more general cases.

1 Introduction

The well-known proximal point algorithm (PPA, for short) is a powerful tool for solving optimization problems and variational inequality problems; it was first introduced by Martinet [1], and celebrated progress was attained in the work of Rockafellar [2]. Starting from an initial point \(z^{0}\), the classical proximal point algorithm generates a sequence \(\{z^{k}\}\subset H\) through the following iteration:

$$ z^{k+1}=(I+c_{k}T)^{-1}z^{k}, $$
(1.1)

where \(\{c_{k}\}\) is a sequence of positive real numbers bounded away from zero. Rockafellar [2] proved that for a maximal monotone operator T, the sequence \(\{z^{k}\}\) weakly converges to a zero of T under some mild conditions. Since then, many works have been devoted to the investigation of proximal point algorithms, their applications, and their generalizations (see [3–8] and the references therein for scalar-valued optimization problems and variational inequality problems; see [9–14] for vector-valued optimization problems). The notion of a Bregman function has its origin in [15], and the name was first used in optimization problems and related topics by Censor and Lent in [16]. Bregman function algorithms have been extensively used for various optimization problems and variational inequality problems (e.g. [17–20] in finite-dimensional spaces, [21, 22] in infinite-dimensional spaces).

The concept of vector variational inequality was first introduced by Giannessi [23] in finite-dimensional spaces. Vector variational inequality problems have found many important applications in multiobjective decision making problems, network equilibrium problems, traffic equilibrium problems, and so on. Because of these significant applications, the study of vector variational inequalities has attracted wide attention. Chen and Yang [24] investigated general vector variational inequality problems and vector complementarity problems in infinite-dimensional spaces. Chen [25] considered vector variational inequality problems with a variable ordering structure. Over the last 20 years, existence results for solutions, duality theorems, and topological properties of solution sets of several kinds of vector variational inequalities have been derived. One can find a fairly complete review of the main results on vector variational inequalities in the monograph [26]. Very recently, Chen [27] constructed a class of vector-valued proximal-type methods for solving a weak vector variational inequality problem in finite-dimensional spaces, generalizing some of the classical results of Rockafellar [2] from the scalar-valued case to the vector-valued case.

An important motivation for analyzing the convergence properties of both proximal point algorithms and Bregman function algorithms is the mesh independence principle [28–30]. The mesh independence principle relies on infinite-dimensional convergence results for predicting the convergence properties of the discrete finite-dimensional method. Furthermore, it provides a theoretical foundation for the justification of refinement strategies and helps to design the refinement process. Many practical problems in economics and engineering are modeled in infinite-dimensional spaces, so it is necessary to analyze the convergence properties of such algorithms in abstract spaces. Motivated by [22, 27], in this paper we propose a class of generalized proximal-type methods, by virtue of the Bregman function, for solving weak vector variational inequality problems in Banach spaces. We carry out a convergence analysis of the method and prove convergence of the generated sequence to a solution of the weak vector variational inequality problems under some mild conditions.

The paper is organized as follows. In Section 2, we present some basic concepts, assumptions, and preliminary results. In Section 3, we introduce the proximal-type method and carry out a convergence analysis of it. In Section 4, we draw conclusions and make some remarks.

2 Preliminaries

In this section, we present some basic definitions and propositions for the proof of our main results.

Let X be a smooth reflexive Banach space with norm \(\Vert \cdot \Vert _{X}\), and denote by \(X^{*}\) the dual space of X. Let Y also be a smooth reflexive Banach space, ordered by a nontrivial, closed, and convex cone C with nonempty interior intC. The cone C defines a partial order \(\leq_{C}\) in Y by \(y_{1}\leq_{C} y_{2}\) if and only if \(y_{2}-y_{1}\in C\) for any \(y_{1}, y_{2}\in Y\), and a relation \(\leq_{\operatorname {int}C}\) by \(y_{1}\leq_{\operatorname {int}C} y_{2}\) if and only if \(y_{2}-y_{1}\in \operatorname {int}C\) for any \(y_{1}, y_{2}\in Y\). We denote by \(Y^{*}\) the dual space of Y and by \(C^{*}\) the positive polar cone of C, i.e.,

$$C^{*}= \bigl\{ z\in Y^{*}: \langle z, y\rangle\geq0, \forall y\in C \bigr\} . $$

We denote \(C^{*+}=\{z\in Y^{*}: \langle y, z\rangle>0, \forall y\in C\backslash\{0\}\}\).

Lemma 2.1

[31]

Let \(e\in \operatorname {int}C\) be fixed and \(C^{0}=\{z\in C^{*}\mid \langle z, e\rangle=1\}\), then \(C^{0}\) is a \(weak^{*}\)-compact subset of \(C^{*}\).
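For example, when \(Y=R^{m}\), \(C=R^{m}_{+}\), and \(e=(1,\ldots,1)\), the set \(C^{0}\) is the standard unit simplex \(\{z\in R^{m}_{+}: \sum_{i=1}^{m}z_{i}=1\}\), which is compact.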

Denote by \(L(X, Y)\) the set of all continuous linear operators from X to Y. Denote by \(\langle \varphi , x\rangle\) the value \(\varphi (x)\) of φ at \(x\in X\) if \(\varphi \in L(X,Y)\). Let \(X_{0}\subset X\) be nonempty, closed, and convex, consider the weak vector variational inequality problem of finding \(\bar {x}\in X_{0}\) such that

$$\begin{aligned} \mbox{(WVVI)}\quad \bigl\langle T(\bar {x}), x-\bar {x}\bigr\rangle \nleq_{\operatorname {int}C} 0\quad \forall x\in X_{0}, \end{aligned}$$

where \(T: X_{0}\rightarrow L(X, Y)\) is a mapping. Denote by X̄ the solution set of (WVVI). For \(\lambda\in C^{0}\), let \(\lambda(T): X_{0}\rightarrow X^{*}\) denote the scalarized mapping \(x\mapsto\lambda\circ T(x)\), and consider the corresponding scalar-valued variational inequality problem of finding \(x^{*}\in X_{0}\) such that

$$\mbox{(VIP$_{\lambda}$)}\quad \bigl\langle \lambda(T) \bigl(x^{*} \bigr), x-x^{*}\bigr\rangle \geq0\quad \forall x\in X_{0}. $$

Denote by \(\bar{X}_{\lambda}\) the solution set of (VIP\(_{\lambda}\)).
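For example, when \(Y=R^{m}\) and \(C=R^{m}_{+}\), writing \(T=(T_{1},\ldots,T_{m})\) componentwise, the scalarized mapping is \(\lambda(T)(x)=\sum_{i=1}^{m}\lambda_{i}T_{i}(x)\), so (VIP\(_{\lambda}\)) is an ordinary variational inequality for a convex combination of the components of T.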

Definition 2.1

[32]

Let \(X_{0}\subset X\) be nonempty, closed, and convex, and \(F: X_{0}\rightarrow X^{*}\) be a single-valued mapping.

  1. (i)

    F is said to be monotone on \(X_{0}\) if, for any \(x_{1}, x_{2}\in X_{0}\), we have

    $$\bigl\langle F(x_{1})-F(x_{2}), x_{1}-x_{2} \bigr\rangle \geq0. $$
  2. (ii)

    F is said to be pseudomonotone on \(X_{0}\) if, for any \(x_{1}, x_{2}\in X_{0}\), we have

    $$\bigl\langle F(x_{2}), x_{1}-x_{2}\bigr\rangle \geq0\quad \Rightarrow\quad \bigl\langle F(x_{1}), x_{1}-x_{2} \bigr\rangle \geq0. $$

Definition 2.2

[26]

Let \(X_{0}\subset X\) be nonempty, closed, and convex. A mapping \(T: X_{0}\rightarrow L(X, Y)\) is said to be C-monotone on \(X_{0}\) if, for any \(x_{1}, x_{2}\in X_{0}\), we have

$$\bigl\langle T(x_{1})-T(x_{2}), x_{1}-x_{2} \bigr\rangle \geq_{C} 0. $$

Proposition 2.1

[26]

Let \(X_{0}\) and T be defined as in Definition  2.2. Then the following statements hold:

  1. (i)

    T is C-monotone if and only if, for any \(\lambda\in C^{0}\), the mapping \(\lambda(T): X_{0}\rightarrow X^{*}\) is monotone;

  2. (ii)

    if T is C-monotone, then for any \(\lambda\in C^{0}\), \(\lambda(T): X_{0}\rightarrow X^{*}\) is pseudomonotone.

Definition 2.3

[33]

Let \(L\subset L(X, Y)\) be a nonempty set. The weak and strong C-polar cones of L are defined, respectively, by

$$ L^{w_{0}}_{C}:= \bigl\{ x\in X: \langle l, x\rangle \ngeq_{C} 0, \forall l\in L \bigr\} $$
(2.1)

and

$$ L^{s_{0}}_{C}:= \bigl\{ x\in X: \langle l, x\rangle \leq_{C} 0, \forall l\in L \bigr\} . $$
(2.2)

Definition 2.4

[26]

Let \(K\subset X\) be nonempty, closed, and convex, and let \(F: K\rightarrow Y\) be a vector-valued mapping. A linear operator \(V\in L(X, Y)\) is said to be a strong subgradient of F at \(\bar{x}\in K\) if

$$F(x)-F(\bar{x}) - \langle V, x-\bar{x}\rangle\geq_{C}0\quad \forall x\in K. $$

A linear operator V is said to be a weak subgradient of F at \(\bar{x}\in K\) if

$$F(x)-F(\bar{x}) - \langle V, x-\bar{x}\rangle\nleq_{\operatorname {int}C}0\quad \forall x\in K. $$

Denote by \(\partial^{w}_{C}F(\bar{x})\) the set of weak subgradients of F on K at x̄.

Let \(K\subset X\) be nonempty, closed, and convex. A vector-valued indicator function \(\delta(x\mid K)\) of K at x is defined by

$$\delta(x\mid K)= \textstyle\begin{cases} 0\in Y,& \mbox{if } x\in K;\\ +\infty_{C}, & \mbox{if }x\notin K. \end{cases} $$

An important special case in the theory of weak subgradients occurs when \(F(x)=\delta(x\mid K)\) is the vector-valued indicator function of K. By Definition 2.4, we obtain \(V\in \partial^{w}_{C}\delta(x^{*}\mid K)\) if and only if

$$ \bigl\langle V, x-x^{*} \bigr\rangle \ngeq _{\operatorname {int}C}0\quad \forall x \in K. $$
(2.3)

Definition 2.5

A set \(VN^{w}_{K}(x^{*})\subset L(X, Y)\) is said to be a weak normality operator set to K at \(x^{*}\), if for every \(V\in VN^{w}_{K}(x^{*})\), the inequality (2.3) holds.

Clearly, \(VN^{w}_{K}(x^{*})=\partial^{w}_{C}\delta(x^{*}\mid K)\). As for the scalar-valued case, from [34] we know that \(v^{*}\in\partial\delta_{K}(x^{*})=N_{K}(x^{*})\) if and only if

$$ \bigl\langle {v^{*}}, x-x^{*}\bigr\rangle \leq0 \quad \forall x\in K, $$
(2.4)

where \(\delta_{K}(x)\) is the scalar-valued indicator function of K. The inequality (2.4) means that \(v^{*}\) is normal to K at \({x^{*}}\).
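For example, if \(K=R^{n}_{+}\) and \(x^{*}\in K\), then \(N_{K}(x^{*})=\{v\in R^{n}: v_{i}\leq0 \mbox{ if } x^{*}_{i}=0, v_{i}=0 \mbox{ if } x^{*}_{i}>0\}\); in particular, \(N_{K}(x^{*})=\{0\}\) at any interior point of K.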

Definition 2.6

A set-valued mapping \(VN^{w}_{K}(\cdot): X\rightrightarrows L(X, Y)\) is said to be a weak normal mapping for K if, for any \(y\in K\) and any \(V\in VN^{w}_{K}(y)\),

$$ \langle V, x-y\rangle\ngeq_{\operatorname {int}C}0\quad \forall x\in K. $$
(2.5)

\(VN^{s}_{K}(\cdot)\) is said to be a strong normal mapping for K if, for any \(y\in K\) and any \(V\in VN^{s}_{K}(y)\),

$$ \langle V, x-y\rangle\leq_{C}0 \quad \forall x\in K. $$
(2.6)

As in [35], the normal mapping for K is the set-valued mapping defined by requiring, for any \(y\in K\) and \(v\in N_{K}(y)\),

$$\langle v, x-y\rangle\leq0\quad \forall x\in K. $$

Next we will introduce the definition and some basic results as regards the maximal monotone mapping.

Definition 2.7

[32]

A set-valued map \(G: X_{0}\subset X\rightrightarrows X^{*}\) is said to be monotone if

$$\langle z-\bar{z}, w-\bar{w}\rangle\geq0 $$

for all z and z̄ in \(X_{0}\), all w in \(G(z)\) and w̄ in \(G(\bar{z})\). It is said to be maximal monotone if, in addition, the graph

$$\operatorname {gph}(G)= \bigl\{ (z,w)\in X\times X^{*}\mid w\in G(z) \bigr\} $$

is not properly contained in the graph of any other monotone operator from X to \(X^{*}\).

Definition 2.8

[36]

An operator \(A : X\rightarrow X^{*}\) is said to be regular if for any \(u\in R(A)\) and for any \(y\in \operatorname {dom}(A)\),

$$ \sup_{(z, v) \in \operatorname {gph}(A)}\langle v-u, y-z\rangle < \infty. $$
(2.7)

Proposition 2.2

[37]

The subdifferential of a proper lower semicontinuous convex function Ï• is a regular operator.

Lemma 2.2

[35]

Let K be a nonempty, closed, and convex subset of X.

  1. (a)

    If \(T_{1} : X \rightrightarrows X^{*}\) is the normal mapping to K and \(T_{2}: X\rightarrow X^{*}\) is a single-valued monotone operator such that \(\operatorname {int}K\cap \operatorname {dom}(T_{2})\neq \emptyset\) and \(T_{2}\) is continuous on K, then \(T_{1}+T_{2}\) is a maximal monotone operator.

  2. (b)

    If \(T_{1} , T_{2} : X \rightrightarrows X^{*}\) are maximal monotone operators, and \(\operatorname {int}(\operatorname {dom}T_{1}) \cap \operatorname {dom}T_{2}\neq\emptyset\), then \(T_{1}+T_{2}\) is a maximal monotone operator.

Lemma 2.3

[22]

Assume that X is a reflexive Banach space and J its normalized duality mapping. Let \(T_{0}, T_{1}: X\rightrightarrows X^{*}\) be maximal monotone operators such that

  1. (a)

    \(T_{1}\) is regular.

  2. (b)

    \(\operatorname {dom}T_{1}\cap \operatorname {dom}T_{0}\neq\emptyset\) and \(R(T_{1})=X^{*}\).

  3. (c)

    \(T_{1}+T_{0}\) is maximal monotone.

Then we have \(R(T_{1}+T_{0})=X^{*}\).

Proposition 2.3

[38]

Any maximal monotone operator \(S : X \rightrightarrows X^{*}\) is demiclosed; that is, the following hold:

  • If \(\{x_{k}\}\) converges weakly to x and \(\{w_{k} \in S(x_{k})\}\) converges strongly to w, then \(w \in S(x)\).

  • If \(\{x_{k}\}\) converges strongly to x and \(\{w_{k} \in S(x_{k})\}\) converges weakly to w, then \(w \in S(x)\).

Definition 2.9

[22]

Let X be a smooth reflexive Banach space and \(f : X\rightarrow R \) a strictly convex, proper, and lower semicontinuous function with closed domain \(D := \operatorname {dom}(f)\). Assume from now on that \(\operatorname {int}D\neq\emptyset\) and that f is Gâteaux differentiable on intD. The Bregman distance with respect to f is the function \(B_{f}: D\times \operatorname {int}D\rightarrow R\) defined by

$$ B_{f}(z, x) := f(z)-f(x)- \bigl\langle \nabla f(x), z-x \bigr\rangle , $$
(2.8)

where \(\nabla f(x)\) is the Gâteaux derivative of f at \(x\in \operatorname {int}D\).
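To make Definition 2.9 concrete, here is a minimal numerical sketch in Python for the finite-dimensional case \(X=R^{n}\); the helper names are ours, and the two choices of f (the energy \(\frac{1}{2}\Vert x\Vert ^{2}\) and the negative entropy) are standard illustrations, not part of the method studied below.

```python
import numpy as np

def bregman(f, grad_f, z, x):
    """Bregman distance B_f(z, x) = f(z) - f(x) - <grad f(x), z - x>."""
    return f(z) - f(x) - np.dot(grad_f(x), z - x)

# Choice 1: f(x) = 0.5*||x||^2 gives B_f(z, x) = 0.5*||z - x||^2.
f_energy = lambda x: 0.5 * np.dot(x, x)
grad_energy = lambda x: x

# Choice 2: the negative entropy f(x) = sum_i x_i log x_i (on the positive
# orthant) gives the Kullback-Leibler divergence for probability vectors.
f_entropy = lambda x: np.sum(x * np.log(x))
grad_entropy = lambda x: np.log(x) + 1.0

z = np.array([0.2, 0.8])
x = np.array([0.5, 0.5])
print(bregman(f_energy, grad_energy, z, x))    # 0.5*||z - x||^2 = 0.09
print(bregman(f_entropy, grad_entropy, z, x))  # KL(z || x) ≈ 0.1927
```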

Consider the following set of assumptions on f.

A1::

The right level sets of \(B_{f} (y,\cdot )\),

$$ S_{y, \alpha}= \bigl\{ z\in \operatorname {int}D: B_{f} (y, z)\leq \alpha \bigr\} , $$
(2.9)

are bounded for all \(\alpha\geq0\) and for all \(y\in D\).

A2::

If \(\{x^{k}\}\subset \operatorname {int}D\) and \(\{y^{k}\}\subset \operatorname {int}D\) are bounded sequences such that \(\lim_{k\rightarrow+\infty} \Vert x^{k}-y^{k}\Vert =0\), then

$$ \lim_{k\rightarrow+\infty} \bigl(\nabla f \bigl(x^{k} \bigr)- \nabla f \bigl(y^{k} \bigr) \bigr)=0. $$
(2.10)
A3::

Assume that \(\{x^{k}\}\subset D\) is bounded, that \(\{y^{k}\}\subset \operatorname {int}D\) satisfies \(w\mbox{-} \lim_{k\rightarrow +\infty}y^{k}= y\), and that \(\lim_{k\rightarrow +\infty} B_{f}(x^{k}, y^{k})=0\). Then

$$w\mbox{-} \lim_{k\rightarrow+\infty}x^{k}= y. $$
A4::

For every \(y\in X^{*}\), there exists \(x\in \operatorname {int}D\) such that \(\nabla f(x) = y\).

We say that f is a Bregman function if it satisfies A1-A3, and that f is a coercive Bregman function if it satisfies A1-A4. Without loss of generality, we assume \(X_{0}\subset \operatorname {dom}f\).

Definition 2.10

[39]

Let \(X_{0}\subset X\) be a closed and convex set with \(\operatorname {int}X_{0}\neq\emptyset\), \(f: X_{0}\rightarrow R\) a convex function which is Gâteaux differentiable in \(\operatorname {int}X_{0}\). We define the modulus of convexity of f, \(v_{f}: \operatorname {int}X_{0}\times R_{+}\rightarrow R_{+}\) by

$$ v_{f}(z,t):=\inf \bigl\{ B_{f}(x,z): \Vert x-z\Vert =t \bigr\} , $$
(2.11)

where \(B_{f}(x,z)\) is the Bregman distance with respect to f.

  1. (a)

    The function f is said to be totally convex in \(\operatorname {int}X_{0}\) if and only if \(v_{f}(z, t)>0\) for all \(z\in \operatorname {int}X_{0}\) and \(t>0\).

  2. (b)

    The function f is said to be uniformly totally convex if for any bounded set \(K\subset \operatorname {int}X_{0}\) and \(t>0\), we have

    $$ \inf_{x\in K}v_{f}(x,t)>0. $$
    (2.12)
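For instance, if f is strongly convex with modulus \(\sigma>0\), then \(B_{f}(x,z)\geq\frac{\sigma}{2}\Vert x-z\Vert ^{2}\), so \(v_{f}(z,t)\geq\frac{\sigma}{2}t^{2}>0\); hence every strongly convex f is uniformly totally convex.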

Next we introduce some fundamental definitions from asymptotic analysis in infinite-dimensional spaces.

Definition 2.11

[40]

Let K be a nonempty set in X. The asymptotic cone of the set K, denoted by \(K^{\infty}\), is the set of all vectors \(d\in X\) that arise as weak limits of sequences \(x_{k}/t_{k}\) with \(x_{k}\in K\) and \(t_{k}\rightarrow+\infty\), namely

$$ K^{\infty}= \biggl\{ d\in X\Bigm| \exists t_{k} \rightarrow+\infty, \mbox{ and } x_{k}\in K, w\mbox{-} \lim_{k\rightarrow +\infty} \frac{x_{k}}{t_{k}}=d \biggr\} . $$
(2.13)

If K is convex and closed, then, for any \(x_{0}\in K\),

$$ K^{\infty}=\{d\in X \mid x_{0}+td\in K, \forall t>0 \}. $$
(2.14)
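For example, in \(X=R^{2}\) the closed convex set \(K=\{(x,y): y\geq x^{2}\}\) has \(K^{\infty}=\{0\}\times R_{+}\) by (2.14), while any bounded set K has \(K^{\infty}=\{0\}\).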

3 Main results

Proposition 3.1

Let \(X_{0}\subset X\) be nonempty, closed, and convex, and let \(VN^{w}_{X_{0}}(\cdot)\) be a weak normal mapping for \(X_{0}\). Then for any \(x^{*}\in X_{0}\) and \(\varphi\in VN^{w}_{X_{0}}(x^{*})\), there exists a \(\lambda\in C^{0}\) such that \(\lambda(\varphi)\in N_{X_{0}}(x^{*})\).

Proof

By the definition of the weak normal mapping, we know that

$$\bigl\langle \varphi, x-x^{*} \bigr\rangle \ngeq_{\operatorname {int}C}0\quad \forall x\in X_{0}. $$

It follows that

$$\bigl\langle \varphi, x-x^{*} \bigr\rangle \in Y \backslash \operatorname {int}C\quad \forall x\in X_{0}, $$

and

$$\bigl\langle \varphi, X_{0}-x^{*} \bigr\rangle \subset Y \backslash \operatorname {int}C. $$

That is,

$$\bigl\langle \varphi, X_{0}-x^{*} \bigr\rangle \cap \operatorname {int}C=\emptyset. $$

By virtue of the convexity of \(X_{0}\) and the Hahn-Banach separation theorem, there exists a \(\lambda^{*}\in C^{*}\backslash\{0\}\) such that

$$\bigl\langle \lambda^{*}(\varphi), x-x^{*} \bigr\rangle \leq0 \quad \forall x\in X_{0}. $$

Since \(e\in \operatorname {int}C\) and \(\lambda^{*}\in C^{*}\backslash\{0\}\), we have \(\langle\lambda^{*}, e\rangle>0\), and hence

$$\biggl\langle \frac{ \lambda^{*}}{\langle\lambda^{*}, e\rangle}(\varphi ), x-x^{*} \biggr\rangle \leq0 \quad \forall x\in X_{0}. $$

Clearly, we have \(\frac{\lambda^{*}}{\langle\lambda^{*}, e\rangle}\in C^{0}\). Setting \(\lambda=\frac{\lambda^{*}}{\langle\lambda^{*}, e\rangle}\), one has

$$\bigl\langle \lambda(\varphi), x-x^{*} \bigr\rangle \leq0\quad \forall x\in X_{0}. $$

That is, \(\lambda(\varphi)\in N_{X_{0}}(x^{*})\). The proof is complete. □

We propose the following exact generalized proximal-type method (GPTM, for short) for solving the problem (WVVI):

Step (1) take \(x_{0}\in X_{0}\);

Step (2) given any \(x_{k}\in X_{0}\), if \(x_{k}\in\bar{X}\), then the algorithm stops; otherwise go to Step (3);

Step (3) since \(x_{k}\notin\bar{X}\), define \(x_{k+1}\) by the following inclusion:

$$ 0\in T(x_{k+1})\lambda_{k}+VN^{w}_{X_{0}}(x_{k+1}) \lambda_{k}+ \varepsilon_{k} \bigl(\nabla f(x_{k+1})- \nabla f(x_{k}) \bigr), $$
(3.1)

where \(\lambda_{k}\in C^{0}\), \(\varepsilon_{k}\in(0, \varepsilon]\) with \(\varepsilon>0\), \(VN^{w}_{X_{0}}(\cdot)\) is the weak normal mapping to \(X_{0}\), and f is a coercive Bregman function; go to Step (2).

Remark 3.1

The algorithm GPTM is actually a kind of exact proximal point algorithm, where the sequence \(\{\lambda_{k}\}\subset C^{0}\) is called a sequence of scalarization parameters and the bounded exogenous sequence of positive real numbers \(\{\varepsilon_{k}\}\) is called a sequence of regularization parameters. For every \(x_{k}\notin\bar{X}\), we seek an \(x_{k+1}\) satisfying the inclusion (3.1); see the sketch below for a finite-dimensional illustration.
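To illustrate the structure of (GPTM), the following minimal Python sketch specializes the method under assumptions of ours chosen only for illustration: \(X=R^{n}\), \(Y=R^{2}\), \(C=R^{2}_{+}\), \(f=\frac{1}{2}\Vert \cdot \Vert ^{2}\) (so \(\nabla f=I\) and \(B_{f}\) is half the squared Euclidean distance), a fixed scalarization parameter \(\lambda_{k}\equiv\lambda\in C^{0}\), and a box constraint set \(X_{0}\). With these choices, the inclusion (3.1) reduces to a strongly monotone variational inequality for \(F(x)=\lambda(T)(x)+\varepsilon_{k}(x-x_{k})\), which the inner loop solves by the standard projected fixed-point iteration \(x\leftarrow P_{X_{0}}(x-\gamma F(x))\). All function names are hypothetical, not from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box X_0 = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def solve_subproblem(scalar_T, x_k, eps_k, lo, hi, gamma=0.1, iters=5000):
    """Solve (3.1) for f = 0.5*||.||^2: find x with
    0 in scalar_T(x) + N_{X_0}(x) + eps_k*(x - x_k),
    via x <- P_{X_0}(x - gamma*F(x)); gamma must be small enough,
    since F is Lipschitz and strongly monotone here."""
    x = x_k.copy()
    for _ in range(iters):
        F = scalar_T(x) + eps_k * (x - x_k)
        x_new = project_box(x - gamma * F, lo, hi)
        if np.linalg.norm(x_new - x) < 1e-12:
            break
        x = x_new
    return x

def gptm(T_components, lam, x0, lo, hi, eps=1.0, outer_iters=50):
    """Minimal GPTM sketch with a fixed scalarization parameter lam in C^0;
    the stopping test of Step (2) is replaced by a fixed iteration budget."""
    scalar_T = lambda x: sum(l * Ti(x) for l, Ti in zip(lam, T_components))
    x = x0.copy()
    for _ in range(outer_iters):
        x = solve_subproblem(scalar_T, x, eps, lo, hi)
    return x

# Example data: T(x) = (A1 x + b1, A2 x) with A1, A2 positive semidefinite,
# so T is C-monotone for C = R^2_+; lam = (0.5, 0.5) lies in C^0 for e = (1, 1).
A1 = np.array([[2.0, 0.0], [0.0, 1.0]])
A2 = np.array([[1.0, 0.5], [0.5, 1.0]])
T_components = [lambda x: A1 @ x + np.array([1.0, 0.0]),
                lambda x: A2 @ x]
x_sol = gptm(T_components, lam=(0.5, 0.5), x0=np.array([1.0, 1.0]),
             lo=-1.0, hi=1.0)
print(x_sol)  # approximates a solution of the scalarized problem (VIP_lam)
```

In this finite-dimensional, Euclidean specialization the weak convergence established in Theorems 3.2-3.4 below is ordinary convergence of the iterates.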

Next we will show the main results of this paper.

Theorem 3.1

Let \(X_{0}\subset X\) be nonempty, closed, and convex, let \(T: X_{0} \rightarrow L(X, Y)\) be continuous and C-monotone on \(X_{0}\), and suppose that \(\operatorname {dom}T\cap \operatorname {int}X_{0}\neq\emptyset\). Then the sequence \(\{x_{k}\}\) generated by the method (GPTM) is well defined.

Proof

Let \(x_{0}\in X_{0}\) be an initial point and suppose that the method (GPTM) has produced the iterate \(x_{k}\). We show that the next iterate \(x_{k+1}\) exists. By the assumptions, \(T(\cdot)\) is continuous and C-monotone on \(X_{0}\). By virtue of Proposition 2.1, the mapping \(\lambda(T)\) is monotone and continuous on \(X_{0}\) for any \(\lambda\in C^{0}\). From Proposition 3.1, there exists a \(\bar{\lambda}\in C^{0}\) such that the mapping \(VN^{w}_{X_{0}}(\cdot)\bar{\lambda}\) is a normal mapping on \(X_{0}\). Thus, by the assumption \(\operatorname {dom}T\cap \operatorname {int}X_{0}\neq\emptyset\) and statement (a) of Lemma 2.2, the mapping \((VN^{w}_{X_{0}}(\cdot)+ T(\cdot))\bar{\lambda}\) is maximal monotone. Without loss of generality, let \(\lambda_{k}=\bar{\lambda}\). Again by statement (b) of Lemma 2.2, we see that \(T(\cdot)\lambda_{k}+VN^{w}_{X_{0}}(\cdot) \lambda_{k}+ \varepsilon_{k} \nabla f(\cdot)\) is maximal monotone. Taking \(\varepsilon_{k} \nabla f\) as \(T_{1}\) in Lemma 2.3 (it is regular by Proposition 2.2, and \(R(\varepsilon_{k}\nabla f)=X^{*}\) by assumption A4, since f is coercive), it is easy to check that all assumptions are satisfied. It follows that \(R(T(\cdot)\lambda_{k}+VN^{w}_{X_{0}}(\cdot) \lambda_{k}+ \varepsilon_{k} \nabla f(\cdot))=X^{*}\). Hence, for the given \(\varepsilon_{k} \nabla f(x_{k})\in X^{*}\), there exists an \(x_{k+1}\) such that

$$ \varepsilon_{k} \nabla f(x_{k}) \in T(x_{k+1})\lambda_{k}+VN^{w}_{X_{0}}(x_{k+1}) \lambda_{k}+ \varepsilon_{k} \nabla f(x_{k+1}). $$
(3.2)

The proof is complete. □

Theorem 3.2

Let the same assumptions as those in Theorem  3.1 hold. Further suppose that \(X_{0}^{\infty}\cap[T(X_{0})]^{w_{0}}_{C}=\{0\}\) and XÌ„ is nonempty and bounded. Then the sequence \(\{x_{k}\}\) generated by the method (GPTM) is bounded and

$$ \sum_{k=0}^{\infty}B_{f}(x_{k+1}, x_{k})< \infty. $$
(3.3)

Proof

From the method (GPTM), we know that if the algorithm stops at some iteration, then \(x_{k}\in\bar{X}\) and the sequence is constant thereafter. Now we assume that the algorithm does not stop after a finite number of iterations. From Proposition 3.1, we know that there exist \(\lambda_{k}\in C^{0}\) and \(\varphi_{k+1}\in VN^{w}_{X_{0}}(x_{k+1})\) such that \(\lambda_{k}(\varphi_{k+1})\in N_{X_{0}}(x_{k+1})\). From the inclusion (3.1), one has

$$0=T(x_{k+1})\lambda_{k}+ \lambda_{k}( \varphi_{k+1})+\varepsilon_{k} \bigl(\nabla f(x_{k+1})- \nabla f(x_{k}) \bigr). $$

By the fact that \(\lambda_{k}(\varphi_{k+1})\in N_{X_{0}}(x_{k+1})\), we obtain

$$ \bigl\langle \lambda_{k}(\varphi_{k+1}), x-x_{k+1} \bigr\rangle \leq0\quad \forall x\in X_{0}. $$
(3.4)

It follows that

$$ \bigl\langle T(x_{k+1})\lambda_{k} + \varepsilon_{k} \bigl(\nabla f(x_{k+1})-\nabla f(x_{k}) \bigr), x-x_{k+1} \bigr\rangle \geq0\quad \forall x\in X_{0}. $$
(3.5)

On the other hand, since X̄ is nonempty and bounded, we may take \(x^{*}\in\bar{X}\). By virtue of Theorem 3.3 in [41], we know that \(x^{*}\) is also a solution of the problem (VIP\(_{\lambda_{k}}\)), where

$$\mbox{(VIP$_{\lambda_{k}}$)} \quad \bigl\langle T \bigl(x^{*} \bigr)\lambda_{k}, x-x^{*} \bigr\rangle \geq0\quad \forall x\in X_{0}. $$

It follows that

$$\bigl\langle T \bigl(x^{*} \bigr)\lambda_{k}, x^{*}-x_{k+1}\bigr\rangle \leq0. $$

By the C-monotonicity of T, one has

$$ \bigl\langle T(x_{k+1})\lambda_{k}, x^{*}-x_{k+1} \bigr\rangle \leq0. $$
(3.6)

Combining (3.5) with (3.6), we obtain

$$\bigl\langle \varepsilon_{k} \bigl(\nabla f(x_{k+1})-\nabla f(x_{k}) \bigr), x^{*}-x_{k+1}\bigr\rangle \geq0. $$

By Property 2.1 of [22] (‘three points property’), we have

$$ B_{f}(x, x_{k+1})=B_{f}(x, x_{k})-B_{f}(x_{k+1}, x_{k})+ \bigl\langle \nabla f(x_{k})-\nabla f(x_{k+1}), x-x_{k+1} \bigr\rangle $$
(3.7)

for any \(x\in X_{0}\). Hence from the inclusion (3.1), we know there exists \(\varphi_{k+1}\in VN^{w}_{X_{0}}(x_{k+1})\) such that

$$ \frac{1}{\varepsilon_{k}}\bigl(T(x_{k+1})\lambda_{k}+ \lambda_{k}(\varphi_{k+1})\bigr)=\nabla f(x_{k})-\nabla f(x_{k+1}). $$
(3.8)

Combining (3.7) with (3.8), we obtain

$$ B_{f}(x, x_{k+1})=B_{f}(x, x_{k})-B_{f}(x_{k+1}, x_{k})+ \frac {1}{\varepsilon_{k}} \bigl\langle T(x_{k+1})\lambda_{k}+ \lambda_{k}(\varphi_{k+1}), x-x_{k+1} \bigr\rangle . $$
(3.9)

Since \(\lambda_{k}(\varphi_{k+1})\in N_{X_{0}}(x_{k+1})\), we have

$$ \bigl\langle \lambda_{k}(\varphi_{k+1}), x^{*}-x_{k+1} \bigr\rangle \leq0. $$
(3.10)

Setting \(x=x^{*}\) in (3.9) and using (3.10), it follows that

$$ B_{f} \bigl(x^{*}, x_{k+1} \bigr)\leq B_{f} \bigl(x^{*}, x_{k} \bigr)-B_{f}(x_{k+1}, x_{k})-\frac{1}{\varepsilon_{k}} \bigl\langle T(x_{k+1}) \lambda_{k}, x_{k+1}-x^{*} \bigr\rangle . $$
(3.11)

From (3.6) and \(\varepsilon_{k}>0\), we have \(\frac{1}{\varepsilon_{k}}\langle T(x_{k+1})\lambda _{k}, x_{k+1}-x^{*}\rangle\geq0\). It follows that

$$ B_{f} \bigl(x^{*}, x_{k+1} \bigr)\leq B_{f} \bigl(x^{*}, x_{k} \bigr)-B_{f}(x_{k+1}, x_{k}). $$
(3.12)

Hence the sequence \(B_{f}(x^{*}, x_{k})\) is nonincreasing. Taking \(\alpha= B_{f}(x^{*}, x_{0})\), we see that \(\{x_{k}\}\subset S_{x^{*}, \alpha}\), which is a bounded set by assumption A1 on the Bregman function.

Since the sequence \(B_{f}(x^{*}, x_{k})\) is nonincreasing and bounded below by zero, it converges to a limit \(l^{*}\). From (3.12), we have

$$ B_{f}(x_{k+1}, x_{k})\leq B_{f} \bigl(x^{*}, x_{k} \bigr)-B_{f} \bigl(x^{*}, x_{k+1} \bigr). $$
(3.13)

Summing up the above inequality, we obtain

$$ \sum_{k=0}^{\infty}B_{f}(x_{k+1}, x_{k})\leq\sum _{k=0}^{\infty}\bigl\{ B_{f} \bigl(x^{*}, x_{k} \bigr)-B_{f} \bigl(x^{*}, x_{k+1} \bigr) \bigr\} =B_{f} \bigl(x^{*}, x_{0} \bigr)-l^{*}< \infty. $$
(3.14)

The proof is complete. □

Theorem 3.3

Let the same assumptions as those in Theorem  3.2 hold. If we assume that the Bregman function f is uniformly totally convex, then any weak accumulation point of \(\{x_{k}\}\) is a solution of the problem (WVVI).

Proof

If there exists \(k_{0} \geq1\) such that \(x_{k_{0}+p}=x_{k_{0}}\) for all \(p\geq1\), then \(x_{k_{0}}\) is the unique weak cluster point of \(\{x_{k}\}\) and it is a solution of problem (WVVI). Suppose now that the algorithm does not terminate finitely. Then, by Theorem 3.2, \(\{x_{k}\}\) is bounded and hence has weak cluster points. We show that every weak cluster point is a solution of problem (WVVI). Let x̂ be a weak cluster point of \(\{x_{k}\}\) and let \(\{x_{k_{j}}\}\) be a subsequence of \(\{x_{k}\}\) weakly converging to x̂. From the inequality (3.3), we know that

$$ \lim_{j\rightarrow+\infty}B_{f}(x_{k_{j}+1}, x_{k_{j}})=0. $$
(3.15)

Hence by assumption A3 on f (applied with \(x^{k}=x_{k_{j}+1}\) and \(y^{k}=x_{k_{j}}\)), we obtain

$$ w\mbox{-} \lim_{j\rightarrow+\infty}x_{k_{j}+1}=\hat{x}. $$
(3.16)

Next we show that \(\lim_{j\rightarrow+\infty} \Vert x_{k_{j}+1}- x_{k_{j}}\Vert =0\). Suppose to the contrary that (without loss of generality) there exists a \(\beta>0\) such that

$$ \Vert x_{k_{j}+1}-x_{k_{j}}\Vert >\beta, $$
(3.17)

for all \(j\geq0\). Let \(D_{0}\) be a bounded set which contains the sequence \(\{x_{k_{j}}\}\). We have

$$ B_{f}(x_{k_{j}+1}, x_{k_{j}})\geq\inf \bigl\{ B_{f}(z, x_{k_{j}}): \Vert x_{k_{j}+1}-x_{k_{j}} \Vert =\Vert z-x_{k_{j}}\Vert \bigr\} =v_{f} \bigl(x_{k_{j}}, \Vert x_{k_{j}+1}-x_{k_{j}}\Vert \bigr). $$
(3.18)

From the assumption (3.17) and Property 2.2 in [22], we obtain

$$ v_{f} \bigl(x_{k_{j}}, \Vert x_{k_{j}+1}-x_{k_{j}}\Vert \bigr)>v_{f}(x_{k_{j}}, \beta)\geq\inf_{x\in D_{0}}v_{f}(x, \beta). $$
(3.19)

Since f is assumed to be uniformly totally convex, it follows that \(\inf_{x\in D_{0}}v_{f}(x, \beta)>0\). One has

$$ B_{f}(x_{k_{j}+1}, x_{k_{j}})\geq\inf _{x\in D_{0}}v_{f}(x, \beta)>0. $$
(3.20)

Clearly, (3.20) contradicts (3.15); hence we have

$$\lim_{j\rightarrow+\infty} \Vert x_{k_{j}+1}- x_{k_{j}}\Vert =0. $$

By assumption A2 on the Bregman function f, one has

$$ \lim_{j\rightarrow+\infty} \bigl(\nabla f(x_{k_{j}+1})- \nabla f(x_{k_{j}}) \bigr)=0. $$
(3.21)

By the inclusion (3.1), one sees that there exist \(\lambda_{k_{j}}\in C^{0}\) and \(\varphi_{k_{j}+1}\in VN^{w}_{X_{0}}(x_{k_{j}+1})\), such that

$$0=T(x_{k_{j}+1})\lambda_{k_{j}}+\lambda_{k_{j}}(\varphi _{k_{j}+1})+\varepsilon_{k_{j}} \bigl(\nabla f(x_{{k_{j}}+1})- \nabla f(x_{{k_{j}}}) \bigr). $$

It follows that

$$ \lim_{j\rightarrow+\infty}\frac{1}{\varepsilon _{k_{j}}} \bigl\{ T(x_{k_{j}+1})\lambda_{k_{j}}+\lambda_{k_{j}}( \varphi_{k_{j}+1}) \bigr\} =0. $$
(3.22)

Since \(\{\lambda_{k_{j}}\}\subset C^{0}\) and \(C^{0}\) is weak*-compact, without loss of generality we may assume that \(w^{*}\mbox{-}\lim_{j\rightarrow+\infty}\lambda_{k_{j}}=\bar{\lambda}\). From Proposition 3.1, for every j we have

$$\lambda_{k_{j}}(\varphi_{k_{j}+1}) \in N_{X_{0}}(x_{k_{j}+1}). $$

By the demiclosedness of the graph of \(T+N_{X_{0}}\) (see Proposition 2.3), letting \(j\rightarrow+\infty\) in (3.22), we conclude that

$$0\in T(\hat{x})\bar{\lambda}+VN^{w}_{X_{0}}(\hat{x})\bar{\lambda}. $$

Hence there exists \(\bar{\omega}\in VN^{w}_{X_{0}}(\hat{x})\bar{\lambda}\) with \(T(\hat{x})\bar{\lambda}+\bar{\omega}=0\), and from Proposition 3.1 we have \(\bar{\omega}\in N_{X_{0}}(\hat{x})\). By the definition of \(N_{X_{0}}(\hat{x})\), we know that

$$\langle\bar{\omega},x-\hat{x}\rangle\leq0 \quad \forall x\in X_{0}. $$

That is,

$$ \bigl\langle T(\hat{x})\bar{\lambda}, x-\hat{x} \bigr\rangle \geq0\quad \forall x\in X_{0}. $$
(3.23)

Since \(\langle\bar{\lambda}, c\rangle>0\) for every \(c\in \operatorname {int}C\), it follows that

$$ \bigl\langle T(\hat{x}), x-\hat{x} \bigr\rangle \notin -\operatorname {int}C \quad \forall x \in X_{0}. $$
(3.24)

We conclude that x̂ is a solution of problem (WVVI). The proof is complete. □

Theorem 3.4

Let the same assumptions as those in Theorem  3.3 hold. If ∇f is a weak to weak continuous mapping, then the whole sequence \(\{x_{k}\}\) converges weakly to a solution of problem (WVVI).

Proof

We show that \(\{x_{k}\}\) has exactly one weak cluster point. Suppose, to the contrary, that x̂ and x̄ are two distinct weak cluster points of \(\{x_{k}\}\), say

$$w\mbox{-} \lim_{j\rightarrow +\infty}x_{k_{j}}=\hat{x},\qquad w\mbox{-} \lim _{i\rightarrow +\infty}x_{k_{i}}=\bar{x} $$

and xÌ‚ and xÌ„ are also solutions of (WVVI). From the proof of Theorem 3.2, we see that the sequences \(B_{f}(\hat {x}, x_{k})\) and \(B_{f} (\bar{x}, x_{k})\) are bounded and let \(l_{1}\) and \(l_{2}\) be their limits, then

$$ \lim_{k\rightarrow+\infty} \bigl( B_{f}(\hat{x}, x_{k})- B_{f} (\bar{x}, x_{k})\bigr)=l_{1}-l_{2}=f( \hat{x})-f(\bar{x})+\lim_{k\rightarrow+\infty} \bigl\langle \nabla f(x_{k}), \bar{x}-\hat{x} \bigr\rangle . $$
(3.25)

Let \(\lim_{k\rightarrow+\infty}\langle\nabla f(x_{k}), \bar{x}-\hat{x}\rangle=l\) (this limit exists by (3.25)). Taking \(k=k_{j}\) in (3.25) and using the weak-to-weak continuity of \(\nabla f(\cdot)\), we obtain

$$ l= \bigl\langle \nabla f(\hat{x}), \bar{x}-\hat{x} \bigr\rangle . $$
(3.26)

Taking instead \(k=k_{i}\) in (3.25) and again using the weak-to-weak continuity of \(\nabla f(\cdot)\), we obtain

$$ l= \bigl\langle \nabla f(\bar{x}), \bar{x}-\hat{x} \bigr\rangle . $$
(3.27)

Combining the equality (3.26) with the equality (3.27), one has

$$ \bigl\langle \nabla f(\bar{x})- \nabla f(\hat{x}), \bar{x}-\hat{x} \bigr\rangle =0. $$
(3.28)

Since f is strictly convex, \(\nabla f\) is strictly monotone, so (3.28) yields \(\bar{x}=\hat{x}\), a contradiction. Hence \(\{x_{k}\}\) has a unique weak cluster point, and the whole sequence converges weakly to a solution of (WVVI). The proof is complete. □

4 Conclusions

In this paper, we proposed a class of generalized proximal-type methods, based on Bregman distance functions, for solving weak vector variational inequality problems in Banach spaces. We carried out a convergence analysis of the method and proved weak convergence of the generated sequence to a solution of the weak vector variational inequality problem under some mild conditions. Our results extend some well-known results to more general cases.

References

  1. Martinet, B: Regularisation d’inéquations variationelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 2, 154-159 (1970)

  2. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877-898 (1976)

  3. Auslender, A, Haddou, M: An interior proximal point method for convex linearly constrained problems and its extension to variational inequalities. Math. Program. 71, 77-100 (1995)

  4. Güler, O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403-419 (1991)

  5. Kamimura, S, Takahashi, W: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938-945 (2003)

  6. Pathak, HK, Cho, YJ: Strong convergence of a proximal-type algorithm for an occasionally pseudomonotone operator in Banach spaces. Fixed Point Theory Appl. 2012, 190 (2012)

  7. Pennanen, T: Local convergence of the proximal point algorithm and multiplier methods without monotonicity. Math. Oper. Res. 27(1), 170-191 (2002)

  8. Solodov, MV, Svaiter, BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program., Ser. A 87, 189-202 (2000)

  9. Bonnel, H, Iusem, AN, Svaiter, BF: Proximal methods in vector optimization. SIAM J. Optim. 15(4), 953-970 (2005)

  10. Ceng, LC, Yao, JC: Approximate proximal methods in vector optimization. Eur. J. Oper. Res. 181(1), 1-19 (2007)

  11. Chen, Z, Huang, XX, Yang, XQ: Generalized proximal point algorithms for multiobjective optimization problems. Appl. Anal. 90, 935-949 (2011)

  12. Chen, Z, Zhao, KQ: A proximal-type method for convex vector optimization problem in Banach spaces. Numer. Funct. Anal. Optim. 30, 70-81 (2009)

  13. Chen, Z, Huang, HQ, Zhao, KQ: Approximate generalized proximal-type method for convex vector optimization problem in Banach spaces. Comput. Math. Appl. 57, 1196-1203 (2009)

  14. Chen, Z: Asymptotic analysis in convex composite multiobjective optimization problems. J. Glob. Optim. 55, 507-520 (2013)

  15. Bregman, LM: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200-217 (1967). doi:10.1016/0041-5553(67)90040-7

  16. Censor, Y, Lent, A: An iterative row-action method for interval convex programming. J. Optim. Theory Appl. 34, 321-353 (1981)

  17. Bauschke, HH, Borwein, JM: Legendre function and the method of random Bregman functions. J. Convex Anal. 4, 27-67 (1997)

  18. Censor, Y, Zenios, SA: The proximal minimization algorithm with D-function. J. Optim. Theory Appl. 73, 451-464 (1992)

  19. Chen, G, Teboulle, M: Convergence analysis of a proximal-like minimization algorithm using Bregman function. SIAM J. Optim. 3(3), 538-543 (1993)

  20. Kiwiel, KC: Proximal minimization methods with generalized Bregman functions. SIAM J. Control Optim. 35, 326-349 (1997)

  21. Burachik, RS, Iusem, AN: A generalized proximal point algorithm for the variational inequality problem in a Hilbert space. SIAM J. Optim. 8(1), 197-216 (1998)

  22. Burachik, RS, Scheimberg, S: A proximal point algorithm for the variational inequality problem in Banach space. SIAM J. Control Optim. 39(5), 1633-1649 (2001)

  23. Giannessi, F: Theorems of alternative, quadratic programs and complementarity problems. In: Cottle, RW, Giannessi, F, Lions, JL (eds.) Variational Inequality and Complementarity Problems. Wiley, New York (1980)

  24. Chen, GY, Yang, XQ: The vector complementary problem and its equivalences with vector minimal element in ordered spaces. J. Math. Anal. Appl. 153, 136-158 (1990)

  25. Chen, GY: Existence of solutions for a vector variational inequality: an extension of the Hartman-Stampacchia theorem. J. Optim. Theory Appl. 74, 445-456 (1992)

  26. Chen, GY, Huang, XX, Yang, XQ: Vector Optimization, Set-Valued and Variational Analysis. Lecture Notes in Economics and Mathematical Systems, vol. 541. Springer, Berlin (2005)

  27. Chen, Z: Asymptotic analysis for proximal-type methods in vector variational inequality problems. Oper. Res. Lett. 43, 226-230 (2015)

  28. Allgower, EL, Böhmer, K, Potra, FA, Rheinboldt, WC: A mesh independence principle for operator equations and their discretizations. SIAM J. Numer. Anal. 23, 160-169 (1986)

  29. Heinkenschloss, M: Mesh independence for nonlinear least squares problems with norm constraints. SIAM J. Optim. 3, 81-117 (1993)

  30. Laumen, M: Newton’s mesh independence principle for a class of optimal shape design problems. SIAM J. Control Optim. 37, 1142-1168 (1999)

  31. Huang, XX, Fang, YP, Yang, XQ: Characterizing the nonemptiness and compactness of the solution set of a vector variational inequality by scalarization. J. Optim. Theory Appl. 162(2), 548-558 (2014)

  32. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Volume I, II. Springer Series in Operations Research. Springer, Berlin (2004)

  33. Hu, R, Fang, YP: On the nonemptiness and compactness of the solution sets for vector variational inequalities. Optimization 59, 1107-1116 (2010)

  34. Rockafellar, RT: Convex Analysis. Princeton University Press, Princeton (1970)

  35. Rockafellar, RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75-88 (1970)

  36. Auslender, A: Optimisation. Methodes Numeriques. Masson, Paris (1976)

  37. Brezis, H, Haraux, A: Image d’une somme d’operateurs monotones et applications. Isr. J. Math. 23, 165-186 (1976)

  38. Pascali, D, Sburlan, S: Nonlinear Mappings of Monotone Type. Nijhoff, Dordrecht (1978)

  39. Butnariu, D, Iusem, AN: Local moduli of convexity and their applications to finding almost common points of measurable families of operators. In: Censor, Y, Reich, S (eds.) Recent Developments in Optimization Theory and Nonlinear Analysis. Contemp. Math., vol. 204, pp. 61-91. AMS, Providence (1997)

  40. Attouch, H, Buttazzo, G, Michaille, G: Variational Analysis in Sobolev and BV Spaces: Applications to PDEs and Optimization. MPS/SIAM, Philadelphia (2006)

  41. Fan, JH, Jing, Y, Zhong, RY: Nonemptiness and boundedness of solution sets for vector variational inequalities via topological method. J. Glob. Optim. 63, 181-193 (2015). doi:10.1007/s10898-015-0279-2


Acknowledgements

This work is partially supported by the National Science Foundation of China (No. 11001289, 71471122). The authors thank the anonymous referees for their helpful suggestions, which have contributed to a major improvement of the manuscript.

Author information

Corresponding author

Correspondence to Zhe Chen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Pu, Lc., Wang, Xy. & Chen, Z. Generalized proximal-type methods for weak vector variational inequality problems in Banach spaces. Fixed Point Theory Appl 2015, 191 (2015). https://doi.org/10.1186/s13663-015-0440-0
