
New descent LQP alternating direction methods for solving a class of structured variational inequalities

Abstract

In this paper, we propose two descent alternating direction methods based on a logarithmic-quadratic proximal method for structured variational inequalities. The first method can be viewed as an extension of the method proposed by Bnouhachem and Xu (Comput. Math. Appl. 67:671-680, 2014) by performing an additional step at each iteration. The second method generates the new iterate by searching for the optimal step size along a new descent direction, and can be viewed as a refinement and improvement of the first one. Under certain conditions, the global convergence of both methods is proved.

1 Introduction

We consider the constrained convex programming problem with the following separable structure:

$$ \min \bigl\{ \theta_{1}(x) +\theta_{2}(y)\vert Ax+By=b, x\in\mathcal {R}_{+}^{n}, y\in\mathcal{R}_{+}^{m} \bigr\} , $$
(1.1)

where \(\theta_{1}:\mathcal{R}_{+}^{n}\rightarrow\mathcal{R}\) and \(\theta_{2}:\mathcal{R}_{+}^{m}\rightarrow\mathcal{R}\) are closed proper convex functions, \(A\in{ \mathcal{R}}^{l\times n}\), \({B \in{ \mathcal{R}}^{l\times m}}\) are given matrices, and \(b\in{ \mathcal{R}}^{l}\) is a given vector.

A large number of problems can be modeled as problem (1.1). In practice, such problems are typically of very large size and, due to their practical importance, they have received a great deal of attention from many researchers. Various methods have been suggested to find the solution of problem (1.1). A popular approach is the alternating direction method (ADM), which was proposed by Gabay and Mercier [2] and Gabay [3]. The ADM reduces the scale of the variational inequality by decomposing the original problem into a series of subproblems of lower scale, as recalled below. To make the ADM more efficient and practical, several strategies have been studied; for further details, we refer to [4–13] and the references therein.
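To fix ideas, we recall the classical ADM for (1.1) (see [2, 3]): given \((y^{k},\lambda^{k})\) and a penalty parameter \(\beta>0\), it generates

$$\begin{aligned}& x^{k+1}=\operatorname{argmin}_{x\in\mathcal{R}_{+}^{n}} \biggl\{ \theta_{1}(x)-\bigl(\lambda^{k}\bigr)^{\top}\bigl(Ax+By^{k}-b\bigr)+\frac{\beta}{2}\bigl\Vert Ax+By^{k}-b\bigr\Vert ^{2} \biggr\} , \\& y^{k+1}=\operatorname{argmin}_{y\in\mathcal{R}_{+}^{m}} \biggl\{ \theta_{2}(y)-\bigl(\lambda^{k}\bigr)^{\top}\bigl(Ax^{k+1}+By-b\bigr)+\frac{\beta}{2}\bigl\Vert Ax^{k+1}+By-b\bigr\Vert ^{2} \biggr\} , \\& \lambda^{k+1}=\lambda^{k}-\beta\bigl(Ax^{k+1}+By^{k+1}-b\bigr), \end{aligned}$$

so that \(\theta_{1}\) and \(\theta_{2}\) are never minimized jointly, but only in two smaller alternating subproblems.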

Let \(\partial(\cdot)\) denote the subdifferential operator of a convex function, and let \(f(x) \in\partial\theta_{1}(x)\) and \(g(y)\in\partial\theta_{2}(y)\) be subgradients of \(\theta_{1}(x)\) and \(\theta_{2}(y)\), respectively. By attaching a Lagrange multiplier vector \(\lambda\in{ \mathcal{R}}^{l}\) to the linear constraint \(Ax + By =b\), problem (1.1) can be written in terms of finding \(w \in{ \mathcal{W}}\) such that

$$ \bigl(w'-w\bigr)^{\top} Q(w)\geq0, \quad \forall w' \in{ \mathcal{W}}, $$
(1.2)

where

$$ w=\left ( \begin{matrix} x \\ y \\ \lambda \end{matrix} \right ), \qquad Q(w)=\left ( \begin{matrix} f(x)-A^{\top} \lambda\\ g(y)-B^{\top} \lambda\\ Ax+By-b \end{matrix} \right ), \qquad{\mathcal{W}} = \mathcal{R}_{+}^{n} \times \mathcal{R}_{+}^{m} \times {\mathcal{R}}^{l}. $$
(1.3)

Problem (1.2)-(1.3) is referred to as the structured variational inequality (SVI for short).
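The equivalence between (1.1) and (1.2)-(1.3) is the standard first-order optimality argument: \((x^{*},y^{*})\) solves (1.1) if and only if there exists a multiplier \(\lambda^{*}\in\mathcal{R}^{l}\) such that

$$\bigl(x'-x^{*}\bigr)^{\top}\bigl(f\bigl(x^{*}\bigr)-A^{\top}\lambda^{*}\bigr)\geq0, \qquad \bigl(y'-y^{*}\bigr)^{\top}\bigl(g\bigl(y^{*}\bigr)-B^{\top}\lambda^{*}\bigr)\geq0, \qquad Ax^{*}+By^{*}=b $$

for all \(x'\in\mathcal{R}_{+}^{n}\) and \(y'\in\mathcal{R}_{+}^{m}\). Since the λ-component of \(\mathcal{W}\) is the whole space \(\mathcal{R}^{l}\), the third row of (1.3) enforces the equality constraint, and stacking the three conditions gives (1.2).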

Very recently, Yuan and Li [14] developed the following logarithmic-quadratic proximal (LQP)-based decomposition method by applying the LQP terms to regularize the ADM subproblems: For a given \(w^{k}=(x^{k},y^{k},\lambda^{k})\in\mathcal{R}^{n}_{++}\times \mathcal{R}^{m}_{++}\times\mathcal{R}^{l}\), and \(\mu\in(0,1)\), the new iterate \((x^{k+1}, y^{k+1}, \lambda^{k+1})\) is obtained via solving the following system:

$$\begin{aligned}& f(x) - A^{\top} \bigl[ \lambda^{k} - H \bigl(A x + B y^{k} - b\bigr) \bigr] + R \bigl[ \bigl(x-x^{k}\bigr) + \mu\bigl(x^{k} - X_{k}^{2} x^{-1}\bigr) \bigr] = 0, \end{aligned}$$
(1.4)
$$\begin{aligned}& g(y) - B^{\top} \bigl[ \lambda^{k} - H (A {x} + B y - b) \bigr] + S \bigl[ \bigl(y-y^{k}\bigr) + \mu\bigl(y^{k} - Y_{k}^{2} y^{-1}\bigr) \bigr] = 0, \end{aligned}$$
(1.5)
$$\begin{aligned}& \lambda^{k+1} = \lambda^{k} - H \bigl(A x^{k+1} + B y^{k+1} - b\bigr), \end{aligned}$$
(1.6)

where \(H \in{ \mathcal{R}}^{l \times l}\), \(R \in{ \mathcal{R}}^{n \times n}\), and \(S \in{ \mathcal{R}}^{m \times m}\) are symmetric positive definite.
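A key feature of the LQP regularization, established in [15], is that the resulting equations admit positive solutions in closed form, which is why such systems are easier to handle than sub-variational inequalities. As a minimal illustration (assuming, for this sketch only, \(R=rI\) with \(r>0\) and freezing the remaining terms of (1.4) at a constant vector \(q\)), each component \(j\) satisfies

$$q_{j} + r \bigl[ \bigl(x_{j}-x_{j}^{k}\bigr) + \mu\bigl(x_{j}^{k} - \bigl(x_{j}^{k}\bigr)^{2} x_{j}^{-1}\bigr) \bigr] = 0, \quad x_{j}>0. $$

Multiplying by \(x_{j}/r\) turns this into the quadratic \(x_{j}^{2}+(q_{j}/r-(1-\mu)x_{j}^{k})x_{j}-\mu(x_{j}^{k})^{2}=0\), whose unique positive root is

$$x_{j}=\frac{(1-\mu)x_{j}^{k}-q_{j}/r+\sqrt{ ((1-\mu)x_{j}^{k}-q_{j}/r )^{2}+4\mu (x_{j}^{k} )^{2}}}{2}>0, $$

so the iterates automatically stay in the interior of the positive orthant.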

Note that the LQP method was presented originally in [15]. In many cases it is easier to solve a series of systems of nonlinear equations than a series of sub-variational inequalities. Later, Bnouhachem et al. [16], Bnouhachem and Ansari [17], and Li [18] proposed some LQP alternating direction methods and made the LQP alternating direction method more practical. Each iteration of these methods contains a prediction and a correction: the predictor is obtained by solving (1.4)-(1.6), and in [17, 18] the new iterate is obtained as a convex combination of the previous point and the point generated by a projection-type method along a descent direction. In 2014, Bnouhachem and Xu [1] proposed the following LQP alternating direction method: For a given \(w^{k}=(x^{k},y^{k},\lambda^{k})\in\mathcal{R}^{n}_{++}\times \mathcal{R}^{m}_{++}\times\mathcal{R}^{l}\), and \(\mu\in(0,1)\), the predictor \(\tilde{w}^{k}=(\tilde{x}^{k},\tilde{y}^{k},\tilde{\lambda }^{k})\in \mathcal{R}^{n}_{++}\times\mathcal{R}^{m}_{++}\times\mathcal{R}^{l}\) is obtained via solving the following system:

$$\begin{aligned}& f(x) - A^{\top}\bigl[\lambda^{k} - H (A x + B y - b) \bigr] + R\bigl[\bigl(x-x^{k}\bigr) + \mu\bigl(x^{k} - X_{k}^{2} x^{-1}\bigr)\bigr]=:\xi^{k}_{x} \approx0, \end{aligned}$$
(1.7a)
$$\begin{aligned}& g(y) - B^{\top}\bigl[\lambda^{k} - H (A {x} + B y - b)\bigr]+ S\bigl[\bigl(y-y^{k}\bigr) + \mu\bigl(y^{k} - Y_{k}^{2} y^{-1}\bigr)\bigr]=:\xi^{k}_{y} \approx0, \end{aligned}$$
(1.7b)
$$\begin{aligned}& \tilde{\lambda}^{k} = \lambda^{k} - H \bigl(A \tilde{x}^{k} + B \tilde{y}^{k} - b\bigr), \end{aligned}$$
(1.7c)

where

$$\begin{aligned}& \bigl\Vert G^{-1}\xi^{k}\bigr\Vert ^{2}_{G}\leq\frac{1-\mu}{1+\mu}\eta^{2} \bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert ^{2}_{G}, \quad\eta\in(0,1), \end{aligned}$$
(1.8)
$$\begin{aligned}& \xi^{k}=\left ( \begin{matrix} \xi^{k}_{x}\\ \xi^{k}_{y} \\ 0 \end{matrix} \right ) , \end{aligned}$$
(1.9)

and

$$ G=\left ( \begin{matrix} (1+\mu) R & & \\ & (1+\mu)S & \\ & & H^{-1} \end{matrix} \right ) $$
(1.10)

is a positive definite (block diagonal) matrix.

Take the new iterate \(w^{k+1}=(x^{k+1},y^{k+1},\lambda^{k+1})\) as the solution of the following system:

$$\begin{aligned}& \tau\bigl[f\bigl(\tilde{x}^{k}\bigr)-A^{\top}\tilde{\lambda}^{k}\bigr] + R\bigl[\bigl(x-x^{k}\bigr) + \mu \bigl(x^{k} - X_{k}^{2} x^{-1}\bigr) \bigr]= 0, \end{aligned}$$
(1.11a)
$$\begin{aligned}& \tau\bigl[g\bigl(\tilde{y}^{k}\bigr)-B^{\top}\tilde{\lambda}^{k}\bigr] + S\bigl[\bigl(y-y^{k}\bigr) + \mu \bigl(y^{k} - Y_{k}^{2} y^{-1}\bigr)\bigr] = 0, \end{aligned}$$
(1.11b)
$$\begin{aligned}& \lambda^{k+1} = \lambda^{k} - \tau H \bigl(A \tilde{x}^{k} + B \tilde{y}^{k} - b\bigr). \end{aligned}$$
(1.11c)

Each iteration of the above method contains a prediction and a correction: the predictor is obtained by solving (1.7a)-(1.7c), and the new iterate is obtained from (1.11a)-(1.11c), where \(\tau>0\) is a step size. Since the operators in (1.11a)-(1.11b) are evaluated at the known predictor, this system can be solved directly by the explicit componentwise formula of the original LQP method.

Inspired and motivated by the above research, we present two methods to solve SVI. The first one can be viewed as an extension of the method proposed in [1], obtained by performing an additional step at each iteration. The second method can be viewed as an extension of the first one, using a new descent direction that provides a significant refinement and improvement of the first method. We also study the global convergence of both methods under certain conditions.

2 Iterative methods and convergence results

In this section, we suggest and analyze two new modified logarithmic-quadratic proximal alternating direction methods for solving structured variational inequalities. The following lemma provides some basic properties of the projection onto Ω.

Lemma 2.1

Let G be a symmetric positive definite matrix and Ω be a nonempty closed convex subset of \(R^{l}\). We denote by \(P_{\Omega,G}(\cdot)\) the projection onto Ω under the G-norm, that is,

$$P_{\Omega,G}(v)=\operatorname{argmin} \bigl\{ \Vert v-u\Vert _{G} : u \in\Omega\bigr\} . $$

Then

$$\begin{aligned}& \bigl(z-P_{\Omega,G}[z]\bigr)^{\top} G \bigl(P_{\Omega,G}[z]-v\bigr) \geq0, \quad \forall z \in R^{l}, v \in\Omega; \end{aligned}$$
(2.1)
$$\begin{aligned}& \bigl\Vert P_{\Omega,G}[u]-P_{\Omega,G}[v]\bigr\Vert _{G} \leq \Vert u-v\Vert _{G}, \quad\forall u,v\in R^{l}; \end{aligned}$$
(2.2)
$$\begin{aligned}& \bigl\Vert u-P_{\Omega,G}[z]\bigr\Vert _{G}^{2} \leq \Vert z-u\Vert _{G}^{2}-\bigl\Vert z-P_{\Omega,G}[z]\bigr\Vert _{G}^{2}, \quad\forall z \in R^{l}, u\in\Omega. \end{aligned}$$
(2.3)

For convenience, we make the following standard assumptions to guarantee that the problem under consideration is solvable and the proposed methods are well defined.

Assumption A

\(f(x)\) is monotone with respect to \(\mathcal{R}^{n}_{++}\) and \(g(y)\) is monotone with respect to \(\mathcal{R}^{m}_{++}\).

Assumption B

The solution set of SVI, denoted by \({\mathcal{W}}^{*}\), is nonempty.

Then the iterative scheme of the first method is given as follows.

Algorithm 2.1

  1. Step 0.

    The initial step:

    Given \(\varepsilon>0\), \(\mu\in(0,1)\) and \(w^{1}=(x^{1}, y^{1}, \lambda^{1}) \in\mathcal{R}^{n}_{++}\times \mathcal{R}^{m}_{++}\times\mathcal{R}^{l}\). Set \(k=1\).

  2. Step 1.

    Prediction step:

    Compute \(\tilde{w}^{k}=(\tilde{x}^{k},\tilde{y}^{k},\tilde{\lambda }^{k})\in\mathcal{R}^{n}_{++}\times \mathcal{R}^{m}_{++}\times\mathcal{R}^{l}\) by solving the system (1.7a)-(1.7c).

  3. Step 2.

    Convergence verification:

If \(\max \{ \Vert x^{k}-\tilde{x}^{k}\Vert _{\infty}, \Vert y^{k}-\tilde{y}^{k}\Vert _{\infty}, \Vert \lambda^{k}-\tilde{\lambda}^{k}\Vert _{\infty} \}<\varepsilon\), then stop.

  4. Step 3.

    Correction step:

    Compute \(\bar{w}^{k}=(\bar{x}^{k},\bar{y}^{k},\bar{\lambda}^{k})\) by solving the following system:

    $$\begin{aligned}& \frac{1-\mu}{1+\mu}\alpha_{k}\bigl[f\bigl( \tilde{x}^{k}\bigr)-A^{\top}\tilde{\lambda }^{k}\bigr] + R\bigl[\bigl(x-x^{k}\bigr) + \mu\bigl(x^{k} - X_{k}^{2} x^{-1}\bigr)\bigr]= 0, \end{aligned}$$
    (2.4a)
    $$\begin{aligned}& \frac{1-\mu}{1+\mu}\alpha_{k}\bigl[g\bigl( \tilde{y}^{k}\bigr)-B^{\top}\tilde{\lambda }^{k}\bigr] + S\bigl[\bigl(y-y^{k}\bigr) + \mu\bigl(y^{k} - Y_{k}^{2} y^{-1}\bigr)\bigr] = 0, \end{aligned}$$
    (2.4b)
    $$\begin{aligned}& \bar{\lambda}^{k} = \lambda^{k} - \frac{1-\mu}{1+\mu}\alpha_{k} H \bigl(A \tilde{x}^{k} + B \tilde{y}^{k} - b\bigr), \end{aligned}$$
    (2.4c)

    where

    $$\begin{aligned}& \alpha_{k}={\frac{\varphi_{k}}{\Vert d_{k}\Vert _{G}^{2}}}, \end{aligned}$$
    (2.5)
    $$\begin{aligned}& \varphi_{k}:= \bigl\Vert x^{k} - \tilde{x}^{k}\bigr\Vert _{R}^{2}+ \bigl\Vert y^{k} -\tilde{y}^{k}\bigr\Vert _{S}^{2}+ \bigl\Vert \lambda^{k} -\tilde{\lambda}^{k}\bigr\Vert _{H^{-1}}^{2}+\bigl(w^{k}- \tilde{w}^{k} \bigr)^{\top}\xi^{k}, \end{aligned}$$
    (2.6)

    and

    $$ d_{k}:=w^{k}- \tilde{w}^{k} +G^{-1}\xi^{k}. $$
    (2.7)

    The new iterate \(w^{k+1}(\tau_{k})=(x^{k+1},y^{k+1},\lambda^{k+1})\) is given by

    $$ w^{k+1}(\tau_{k})=\rho w^{k}+(1- \rho)P_{\mathcal{W}} \bigl[w^{k}-\tau_{k} d'_{k} \bigr], \quad\rho\in(0,1), $$
    (2.8)

    where

    $$ d'_{k}:=w^{k}-\bar{w}^{k}. $$
    (2.9)

    Set \(k:= k + 1\) and go to Step 1.

Here \(\tau_{k}\) is a positive scalar; how to choose a suitable step size \(\tau_{k}\) to force convergence will be discussed below.
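To make the flow of Algorithm 2.1 concrete, the following minimal numpy sketch records the loop structure on flattened iterates; it is our illustration, not the authors' code. The callables `solve_prediction`, `solve_correction`, and `project_W` are hypothetical problem-specific routines for (1.7a)-(1.7c), (2.4a)-(2.4c), and \(P_{\mathcal{W}}\); the helpers `step_sizes` and `tau_star` are sketched after (2.24) below.

```python
import numpy as np

# Skeleton of Algorithm 2.1 (a sketch). w stacks (x, y, lambda) in one vector;
# solve_prediction, solve_correction, project_W are hypothetical placeholders.
def algorithm_2_1(w, solve_prediction, solve_correction, project_W,
                  step_sizes, tau_star, rho=0.1, eps=1e-8, max_it=1000):
    for _ in range(max_it):
        w_tilde, xi = solve_prediction(w)             # Step 1: predictor (1.7a)-(1.7c)
        if np.max(np.abs(w - w_tilde)) < eps:         # Step 2: stopping criterion
            break
        alpha, phi, d = step_sizes(w, w_tilde, xi)    # alpha_k, varphi_k, d_k of (2.5)-(2.7)
        w_bar = solve_correction(w, w_tilde, alpha)   # Step 3: correction (2.4a)-(2.4c)
        d_prime = w - w_bar                           # d'_k in (2.9)
        tau = tau_star(d_prime, phi, alpha, d)        # optimal step length tau*_k
        w = rho * w + (1 - rho) * project_W(w - tau * d_prime)   # update (2.8)
    return w
```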

Theorem 2.1

[1]

For given \(w^{k}=(x^{k}, y^{k},\lambda^{k}) \in\mathcal{R}_{++}^{n} \times \mathcal{R}_{++}^{m}\times{ \mathcal{R}}^{l}\), let \(\bar{w}^{k}=(\bar{x}^{k},\bar{y}^{k},\bar{\lambda}^{k})\) be generated by (2.4a)-(2.4c). Then for any \(w^{*}=(x^{*}, y^{*}, \lambda^{*}) \in{ \mathcal{W}}^{*}\), we have

$$\begin{aligned} \bigl\Vert w^{k} - w^{*}\bigr\Vert _{G}^{2}- \bigl\Vert \bar{w}^{k}- w^{*}\bigr\Vert _{G}^{2} \geq\frac{1-\mu}{1+\mu }\Phi(\alpha_{k}), \end{aligned}$$
(2.10)

where

$$ \Phi(\alpha_{k}):=2\alpha_{k} \varphi_{k}-\alpha_{k}^{2}\Vert d_{k} \Vert ^{2}_{G}. $$
(2.11)

Theorem 2.2

[1]

For given \(w^{k}\in\mathcal{R}_{++}^{n} \times\mathcal{R}_{++}^{m}\times {\mathcal{R}}^{l}\), let \(\tilde{w}^{k}\) be generated by (1.7a)-(1.7c). Then we have the following:

$$ \alpha_{k}\geq\frac{1}{2} $$
(2.12)

and

$$ \Phi(\alpha_{k})\geq\frac{(1-\eta^{2})(1-\mu)}{4(1+\mu)}\bigl\Vert w^{k} - \tilde{w}^{k}\bigr\Vert _{G}^{2}. $$
(2.13)

How should one choose values of \(\tau_{k}\) to ensure that \(w^{k+1}(\tau_{k})\) is closer to the solution set than \(w^{k}\)? For this purpose, we define

$$ \Theta_{1}(\tau_{k})=\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}-\bigl\Vert w^{k+1}(\tau_{k})-w^{*}\bigr\Vert _{G}^{2}. $$
(2.14)

Theorem 2.3

Let \(w^{*}=(x^{*},y^{*}, \lambda^{*}) \in{ \mathcal{W}}^{*}\). Then we have

$$ \Theta_{1}(\tau_{k})\geq (1-\rho) \bigl( \tau_{k}\bigl\{ \bigl\Vert d'_{k}\bigr\Vert _{G}^{2}+\bigl\Vert w^{k} - w^{*}\bigr\Vert _{G}^{2} - \bigl\Vert \bar {w}^{k}- w^{*}\bigr\Vert _{G}^{2}\bigr\} -\tau_{k}^{2}\bigl\Vert d'_{k}\bigr\Vert _{G}^{2} \bigr). $$
(2.15)

Proof

Since \(w^{*}=(x^{*},y^{*}, \lambda^{*}) \in{ \mathcal{W}}^{*}\) and \(w_{*}^{k}(\tau_{k})=P_{\mathcal{W}} [w^{k}-\tau_{k} d'_{k} ]\), it follows from (2.3) that

$$ \bigl\Vert w_{*}^{k}(\tau_{k})-w^{*} \bigr\Vert _{G}^{2}\leq\bigl\Vert w^{k}- \tau_{k} d'_{k}-w^{*}\bigr\Vert _{G}^{2}-\bigl\Vert w^{k}-\tau_{k} d'_{k}-w_{*}^{k}( \tau_{k})\bigr\Vert _{G}^{2}. $$
(2.16)

On the other hand, we have

$$\begin{aligned} \bigl\Vert w^{k+1}(\tau_{k})-w^{*}\bigr\Vert _{G}^{2} =&\bigl\Vert \rho\bigl(w^{k}-w^{*} \bigr)+(1-\rho) \bigl(w_{*}^{k}(\tau _{k})-w^{*} \bigr)\bigr\Vert _{G}^{2} \\ =& \rho^{2}\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}+(1-\rho)^{2}\bigl\Vert w_{*}^{k}(\tau_{k})-w^{*}\bigr\Vert _{G}^{2} \\ &{}+2\rho(1-\rho) \bigl(w^{k}-w^{*}\bigr)^{\top}G \bigl(w_{*}^{k}(\tau_{k})-w^{*}\bigr). \end{aligned}$$

Using the identity

$$2(a+b)^{\top}Gb=\Vert a+b\Vert _{G}^{2}-\Vert a\Vert _{G}^{2}+\Vert b\Vert _{G}^{2}, $$

for \(a=w^{k}-w^{k}_{*}(\tau_{k})\), \(b=w^{k}_{*}(\tau_{k})-w^{*}\), and (2.16), we obtain

$$\begin{aligned} \bigl\Vert w^{k+1}(\tau_{k})-w^{*}\bigr\Vert _{G}^{2} =&\rho^{2}\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}+(1- \rho)^{2}\bigl\Vert w_{*}^{k}(\tau_{k}) -w^{*}\bigr\Vert _{G}^{2} \\ &{}+\rho (1-\rho)\bigl\{ \bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}-\bigl\Vert w^{k}-w_{*}^{k}( \tau_{k})\bigr\Vert _{G}^{2}+\bigl\Vert w_{*}^{k}( \tau_{k})-w^{*}\bigr\Vert _{G}^{2}\bigr\} \\ =& \rho\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}+(1- \rho)\bigl\Vert w_{*}^{k}(\tau_{k})-w^{*}\bigr\Vert _{G}^{2}-\rho (1-\rho)\bigl\Vert w^{k}-w_{*}^{k}(\tau_{k})\bigr\Vert _{G}^{2} \\ \leq&\rho\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}+(1- \rho)\bigl\Vert w^{k}-\tau_{k} d'_{k}-w^{*} \bigr\Vert _{G}^{2} \\ &{}-(1-\rho)\bigl\Vert w^{k}-\tau_{k} d'_{k}-w_{*}^{k}( \tau_{k})\bigr\Vert _{G}^{2}-\rho(1-\rho )\bigl\Vert w^{k}-w_{*}^{k}(\tau_{k})\bigr\Vert _{G}^{2} \\ =&\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}-(1- \rho)\bigl\{ \bigl\Vert w^{k}-w_{*}^{k}( \tau_{k})-\tau_{k} d'_{k}\bigr\Vert _{G}^{2} +\rho\bigl\Vert w^{k}-w_{*}^{k}( \tau_{k})\bigr\Vert _{G}^{2} \\ &{}-\tau_{k}^{2}\bigl\Vert d'_{k} \bigr\Vert _{G}^{2}+2\tau_{k} \bigl(w^{k}-w^{*}\bigr)^{\top}Gd'_{k}\bigr\} \\ \leq&\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}-(1- \rho)\bigl\{ 2\tau_{k}\bigl(w^{k}-w^{*}\bigr)^{\top}Gd'_{k}- \tau_{k}^{2}\bigl\Vert d'_{k}\bigr\Vert _{G}^{2}\bigr\} . \end{aligned}$$

Using the definition of \(\Theta_{1}(\tau_{k})\), we get

$$\begin{aligned} \Theta_{1}(\tau_{k}) \geq& (1-\rho)\bigl\{ 2 \tau_{k}\bigl(w^{k}-w^{*}\bigr)^{\top}Gd'_{k}- \tau _{k}^{2}\bigl\Vert d'_{k}\bigr\Vert _{G}^{2}\bigr\} \\ =&(1-\rho) \bigl(2\tau_{k}\bigl\{ \bigl\Vert d'_{k} \bigr\Vert _{G}^{2}-\bigl(w^{*}-\bar{w}^{k} \bigr)^{\top}Gd'_{k}\bigr\} -\tau_{k}^{2} \bigl\Vert d'_{k}\bigr\Vert _{G}^{2} \bigr). \end{aligned}$$
(2.17)

The identity

$$ \bigl(w^{*}-\bar{w}^{k}\bigr)^{\top}Gd'_{k}={ \frac{1}{2}} \bigl(\bigl\Vert \bar{w}^{k}-w^{*}\bigr\Vert _{G}^{2}-\bigl\Vert w^{k}-w^{*} \bigr\Vert _{G}^{2} \bigr)+{\frac{1}{2}}\bigl\Vert d'_{k}\bigr\Vert _{G}^{2} $$
(2.18)

implies

$$ \bigl\Vert d'_{k}\bigr\Vert _{G}^{2}-2\bigl(w^{*}-\bar{w}^{k} \bigr)^{\top}Gd'_{k}=\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}-\bigl\Vert \bar{w}^{k}-w^{*}\bigr\Vert _{G}^{2}. $$
(2.19)

Substituting (2.19) in (2.17), we get the assertion of the theorem. □

Using Theorem 2.1 and Theorem 2.3, we get

$$ \Theta_{1}(\tau_{k})\geq(1-\rho) \Lambda_{1}(\tau_{k}), $$
(2.20)

where

$$ \Lambda_{1}(\tau_{k})=\tau_{k}\biggl\{ \bigl\Vert d'_{k}\bigr\Vert _{G}^{2}+ \biggl(\frac{1-\mu}{1+\mu } \biggr) \Phi(\alpha_{k})\biggr\} - \tau_{k}^{2}\bigl\Vert d'_{k}\bigr\Vert _{G}^{2}. $$
(2.21)

\(\Lambda_{1}(\tau_{k})\) measures the progress obtained in the kth iteration. It is natural to choose a step length \(\tau_{k}\) which maximizes the progress. Note that \(\Lambda_{1}(\tau_{k})\) is a quadratic function of \(\tau_{k}\) and it reaches its maximum at

$$\tau^{*}_{k}=\frac{\Vert d'_{k}\Vert _{G}^{2}+ (\frac{1-\mu}{1+\mu} ) \Phi(\alpha_{k})}{2\Vert d'_{k}\Vert _{G}^{2}} $$

and

$$ \Lambda_{1}\bigl(\tau^{*}_{k}\bigr)= \frac{\tau^{*}_{k}\{\Vert d'_{k}\Vert _{G}^{2}+ (\frac {1-\mu}{1+\mu} ) \Phi(\alpha_{k})\}}{2}. $$
(2.22)

Using (2.13), we have

$$\begin{aligned} \tau^{*}_{k} \geq&\frac{(1-\mu)^{2}(1-\eta^{2})}{4(1+\mu)^{2}} \biggl(\frac {\Vert d'_{k}\Vert _{G}^{2}+\Vert w^{k}-\tilde{w}^{k}\Vert _{G}^{2}}{2\Vert d'_{k}\Vert _{G}^{2}} \biggr) \\ \geq&\frac{(1-\mu)^{2}(1-\eta^{2})}{8(1+\mu)^{2}}, \end{aligned}$$
(2.23)

which implies

$$\begin{aligned} \Lambda_{1}\bigl(\tau^{*}_{k}\bigr) \geq& \frac{\tau^{*}_{k} (\frac{1-\mu}{1+\mu } ) \Phi(\alpha_{k})}{2} \\ \geq& \frac{(1-\mu)^{4}(1-\eta^{2})^{2}}{64(1+\mu)^{4}}\bigl\Vert w^{k}-\tilde{w}^{k} \bigr\Vert _{G}^{2}. \end{aligned}$$
(2.24)
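For completeness, here is one possible realization of the step-size helpers assumed in the sketch after Algorithm 2.1 (again our illustration; the block matrices R, S, H, the parameter μ, and the dimensions n, m of x and y are taken as given, and w is stored as one flat vector).

```python
import numpy as np

# Step-size quantities of Algorithm 2.1 under the paper's notation (a sketch).
def make_step_helpers(R, S, H, mu, n, m):
    l = H.shape[0]
    Hinv = np.linalg.inv(H)
    G = np.zeros((n + m + l, n + m + l))            # block diagonal G of (1.10)
    G[:n, :n] = (1 + mu) * R
    G[n:n + m, n:n + m] = (1 + mu) * S
    G[n + m:, n + m:] = Hinv

    def sq(v, M):                                   # ||v||_M^2
        return float(v @ M @ v)

    def step_sizes(w, w_tilde, xi):
        dw = w - w_tilde                            # w^k - tilde w^k
        d = dw + np.linalg.solve(G, xi)             # d_k in (2.7)
        phi = (sq(dw[:n], R) + sq(dw[n:n + m], S)   # varphi_k in (2.6)
               + sq(dw[n + m:], Hinv) + dw @ xi)
        alpha = phi / sq(d, G)                      # alpha_k in (2.5)
        return alpha, phi, d

    def tau_star(d_prime, phi, alpha, d):
        Phi = 2 * alpha * phi - alpha**2 * sq(d, G)          # Phi(alpha_k), (2.11)
        return (sq(d_prime, G)
                + (1 - mu) / (1 + mu) * Phi) / (2 * sq(d_prime, G))

    return G, step_sizes, tau_star
```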

Remark 2.1

If \(\rho=0\) and \(\tau_{k}=1\), Algorithm 2.1 reduces to the method proposed in [1]. Since \(\tau^{*}_{k}\) maximizes the profit function \(\Lambda_{1}(\tau_{k})\), we have

$$ \Lambda_{1}\bigl(\tau^{*}_{k}\bigr)\geq \Lambda_{1}(1). $$
(2.25)

Inequalities (2.20) and (2.25) show that Algorithm 2.1 is expected to make more progress than the method proposed in [1] at each iteration, which explains theoretically why Algorithm 2.1 outperforms the method proposed in [1].

Inspired and motivated by the first method, we next propose the following new LQP alternating direction method for solving SVI.

Algorithm 2.2

  1. Step 0.

    The initial step:

    Given \(\varepsilon>0\), \(\mu\in(0,1)\), and \(w^{1}=(x^{1}, y^{1}, \lambda^{1}) \in\mathcal{R}^{n}_{++}\times\mathcal {R}^{m}_{++}\times\mathcal{R}^{l}\) and \(d''_{0}=(0,0,0)\). Set \(k=1\).

  2. Step 1.

    Prediction step:

    Compute \(\tilde{w}^{k}=(\tilde{x}^{k},\tilde{y}^{k},\tilde{\lambda }^{k})\in\mathcal{R}^{n}_{++}\times \mathcal{R}^{m}_{++}\times\mathcal{R}^{l}\) by solving the system (1.7a)-(1.7c).

  3. Step 2.

    Convergence verification:

If \(\max \{ \Vert x^{k}-\tilde{x}^{k}\Vert _{\infty}, \Vert y^{k}-\tilde{y}^{k}\Vert _{\infty}, \Vert \lambda^{k}-\tilde{\lambda}^{k}\Vert _{\infty} \}<\varepsilon\), then stop.

  4. Step 3.

    Correction step:

    Compute \(\bar{w}^{k}=(\bar{x}^{k},\bar{y}^{k},\bar{\lambda}^{k})\) by solving the system (2.4a)-(2.4c). The new iterate \(w^{k+1}(\beta_{k} )\) is given by

    $$ w^{k+1}(\beta_{k})=\rho w^{k}+(1- \rho)P_{\mathcal{W}} \bigl[w^{k}-\beta_{k} d''_{k} \bigr] , \quad\rho\in(0,1), $$
    (2.26)

    where

    $$\begin{aligned}& d''_{k}:=d'_{k}+ \sigma_{k}{d''_{k-1}}, \\& \sigma_{k}=\operatorname{max} \biggl(0,\frac{-{d^{\prime\top}_{k}}G{d''_{k-1}}}{\Vert {d''_{k-1}}\Vert _{G}^{2}} \biggr) , \end{aligned}$$
    (2.27)

    (with the convention \(\sigma_{k}=0\) when \(d''_{k-1}=0\), as is the case for \(k=1\)), and

    $$ \beta_{k} =\frac{\Vert d'_{k}\Vert _{G}^{2}+ (\frac{1-\mu}{1+\mu} ) \Phi(\alpha_{k})}{2\Vert d''_{k}\Vert _{G}^{2}}. $$
    (2.28)

    Set \(k:= k + 1\) and go to Step 1.
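Relative to Algorithm 2.1, the only new ingredients are the direction \(d''_{k}\) and the step size \(\beta_{k}\); a minimal sketch (our illustration, with G the matrix of (1.10)) reads:

```python
import numpy as np

# Direction and step size of Algorithm 2.2 (a sketch).
def descent_direction(d_prime, d_pp_prev, G):
    # d''_k in (2.27); the convention sigma_k = 0 covers d''_{k-1} = 0 (k = 1)
    denom = d_pp_prev @ G @ d_pp_prev
    if denom == 0.0:
        return d_prime.copy()
    sigma = max(0.0, -(d_prime @ G @ d_pp_prev) / denom)
    return d_prime + sigma * d_pp_prev

def beta_step(d_prime, d_pp, Phi, G, mu):
    # beta_k in (2.28), with Phi = Phi(alpha_k) as in (2.11)
    num = d_prime @ G @ d_prime + (1 - mu) / (1 + mu) * Phi
    return num / (2.0 * (d_pp @ G @ d_pp))
```

Since \(\Vert d''_{k}\Vert _{G}\leq\Vert d'_{k}\Vert _{G}\) (Remark 2.3 below), the returned \(\beta_{k}\) is never smaller than \(\tau^{*}_{k}\), which is the source of the improvement in (2.35).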

The following lemma motivates the choice of the new descent direction \(d''_{k}\).

Lemma 2.2

For any \(k\geq1\), we have

$$ {d^{\prime\prime\top}_{k-1}}G \bigl(w^{k}-w^{*}\bigr)\geq0. $$
(2.29)

Proof

From (2.18) and (2.10) it is easy to show that

$$\begin{aligned} \bigl(w^{k}-w^{*}\bigr)^{\top}Gd'_{k} \geq&\frac{\Vert d'_{k}\Vert _{G}^{2}+ (\frac{1-\mu}{1+\mu } ) \Phi(\alpha_{k})}{2} \\ \geq&\frac{(1-\mu)^{2}(1-\eta^{2})}{8(1+\mu)^{2}}\bigl\Vert w^{k} - \tilde{w}^{k} \bigr\Vert _{G}^{2}. \end{aligned}$$
(2.30)

Note that (2.29) is trivially true for \(k=1\) since \(d''_{0}=(0,0,0)\). By induction, consider any \(k\geq2\) and assume that

$${d^{\prime\prime\top}_{k-2}}G \bigl(w^{k-1}-w^{*}\bigr)\geq0. $$

Using the definition of \(d''_{k-1}\), we have

$$\begin{aligned} {d^{\prime\prime\top}_{k-1}}G \bigl(w^{k}-w^{*}\bigr) =&{d^{\prime\prime\top}_{k-1}}G \bigl(w^{k-1}-w^{*}\bigr)+{d^{\prime\prime\top}_{k-1}}G \bigl(w^{k}-w^{k-1}\bigr) \\ =&{d^{\prime\top}_{k-1}}G\bigl(w^{k-1}-w^{*} \bigr)+\sigma _{k-1}{d^{\prime\prime\top}_{k-2}}G \bigl(w^{k-1}-w^{*}\bigr) \\ &{}+{d^{\prime\prime\top}_{k-1}}G \bigl(w^{k}-w^{k-1}\bigr) \\ \geq &{d^{\prime\top}_{k-1}}G\bigl(w^{k-1}-w^{*} \bigr)+{d^{\prime\prime\top}_{k-1}}G \bigl(w^{k}-w^{k-1}\bigr) \\ \geq&\frac{\Vert {d'_{k-1}}\Vert _{G}^{2}+ (\frac{1-\mu}{1+\mu} ) \Phi(\alpha_{k-1})}{2} \\ &{}-\bigl\Vert {d''_{k-1}}\bigr\Vert _{G}\bigl\Vert P_{\mathcal{W}} \bigl[w^{k-1}- \beta_{{k-1}} {d''_{k-1}} \bigr] -w^{k-1}\bigr\Vert _{G} \\ \geq&\frac{\Vert {d'_{k-1}}\Vert _{G}^{2}+ (\frac{1-\mu}{1+\mu} ) \Phi(\alpha_{k-1})}{2}-\beta_{{k-1}}\bigl\Vert {d''_{k-1}} \bigr\Vert _{G}^{2} \\ =&0, \end{aligned}$$

where the first inequality follows from the induction hypothesis, the second one from (2.30) and the Cauchy-Schwarz inequality, and the last one from the nonexpansiveness property (2.2). Hence, the lemma is proved. □

Remark 2.2

From (2.30) and Lemma 2.2, we get

$$\bigl(w^{k}-w^{*}\bigr)^{\top}Gd''_{k} \geq\frac{(1-\mu)^{2}(1-\eta^{2})}{8(1+\mu)^{2}}\bigl\Vert w^{k} - \tilde{w}^{k}\bigr\Vert _{G}^{2}. $$

Then \(-d''_{k}\) is a descent direction of \(\frac{1}{2}\Vert w-w^{*}\Vert _{G}^{2}\) at the point \(w=w^{k}\): the gradient of this function at \(w^{k}\) is \(G(w^{k}-w^{*})\), and the above inequality shows that \({d^{\prime\prime\top}_{k}}G(w^{k}-w^{*})>0\) whenever \(w^{k}\neq\tilde{w}^{k}\). Since \(-d''_{k}\) is a descent direction of the distance function at \(w^{k}\), one can find along it a new iterate which is closer to the solution set. This fact motivates the construction (2.26).

Lemma 2.3

Let \(w^{*}=(x^{*},y^{*}, \lambda^{*}) \in{ \mathcal{W}}^{*}\). Then we have

$$ \Theta_{2}(\beta_{k})\geq (1-\rho) \bigl( \beta_{k}\bigl\{ \bigl\Vert d'_{k}\bigr\Vert _{G}^{2}+\bigl\Vert w^{k} - w^{*}\bigr\Vert _{G}^{2}-\bigl\Vert \bar {w}^{k}- w^{*}\bigr\Vert _{G}^{2}\bigr\} -\beta_{k}^{2} \bigl\Vert d''_{k}\bigr\Vert _{G}^{2} \bigr), $$
(2.31)

where

$$ \Theta_{2}(\beta_{k})=\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}-\bigl\Vert w^{k+1}(\beta_{k})-w^{*}\bigr\Vert _{G}^{2}. $$
(2.32)

Proof

Similar to (2.17), we have

$$\begin{aligned} \bigl\Vert w^{k+1}(\beta_{k})-w^{*}\bigr\Vert _{G}^{2} \leq&\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}-(1-\rho)\bigl\{ 2\beta _{k} \bigl(w^{k}-w^{*}\bigr)^{\top}Gd''_{k}- \beta_{k}^{2}\bigl\Vert d''_{k} \bigr\Vert _{G}^{2}\bigr\} \\ =&\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2} -(1- \rho)\bigl\{ 2\beta_{k}\bigl(w^{k}-w^{*}\bigr)^{\top}G \bigl(d'_{k}+\sigma _{k}{d''_{k-1}} \bigr)-\beta_{k}^{2}\bigl\Vert d''_{k} \bigr\Vert _{G}^{2}\bigr\} \\ \leq&\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}^{2}-(1- \rho)\bigl\{ 2\beta_{k}\bigl(w^{k}-w^{*}\bigr)^{\top}Gd'_{k}- \beta_{k}^{2}\bigl\Vert d''_{k} \bigr\Vert _{G}^{2}\bigr\} . \end{aligned}$$

Using (2.19) and the definition of \(\Theta_{2}(\beta_{k})\), we get the assertion of this lemma. □

Using Theorem 2.1 and Lemma 2.3, we get

$$ \Theta_{2}(\beta_{k})\geq(1-\rho) \Lambda_{2}(\beta_{k}), $$
(2.33)

where

$$\begin{aligned} \Lambda_{2}(\beta_{k}) =&\beta_{k} \biggl\{ \bigl\Vert d'_{k}\bigr\Vert _{G}^{2}+ \biggl(\frac{1-\mu }{1+\mu} \biggr) \Phi( \alpha_{k})\biggr\} -\beta_{k}^{2}\bigl\Vert d''_{k}\bigr\Vert _{G}^{2} \\ =&\frac{\beta_{k}\{\Vert d'_{k}\Vert _{G}^{2}+ (\frac{1-\mu}{1+\mu} ) \Phi(\alpha_{k})\}}{2}. \end{aligned}$$
(2.34)

Remark 2.3

From (2.27), if \(\sigma_{k}>0\) then \(\Vert d''_{k}\Vert _{G}^{2}=\Vert d'_{k}\Vert _{G}^{2}-({d^{\prime\top}_{k}}G{d''_{k-1}})^{2}/\Vert d''_{k-1}\Vert _{G}^{2}\), while \(d''_{k}=d'_{k}\) otherwise; in either case,

$$\bigl\Vert d''_{k}\bigr\Vert _{G}^{2}\leq\bigl\Vert d'_{k}\bigr\Vert _{G}^{2}, $$

which implies that

$$\tau^{*}_{k}\leq\beta_{k}. $$

From (2.20), (2.22), (2.33), and (2.34), we have

$$\begin{aligned} \Theta_{1}\bigl(\tau^{*}_{k}\bigr)\geq(1-\rho) \Lambda_{1}\bigl(\tau^{*}_{k}\bigr)=\frac{\tau ^{*}_{k}(1-\rho)\{\Vert d'_{k}\Vert _{G}^{2}+ (\frac{1-\mu}{1+\mu} ) \Phi(\alpha_{k})\}}{2} \end{aligned}$$

and

$$\begin{aligned} \Theta_{2}(\beta_{k})\geq(1-\rho)\Lambda_{2}( \beta_{k})=\frac{\beta _{k}(1-\rho)\{\Vert d'_{k}\Vert _{G}^{2} + (\frac{1-\mu}{1+\mu} ) \Phi(\alpha_{k})\}}{2}. \end{aligned}$$

From the above inequalities, we obtain

$$ \Lambda_{2}(\beta_{k})\geq \Lambda_{1}\bigl(\tau^{*}_{k}\bigr). $$
(2.35)

Inequality (2.35) shows that Algorithm 2.2 can be expected to make more progress than Algorithm 2.1 at each iteration, which explains theoretically why Algorithm 2.2 outperforms Algorithm 2.1.

Now, we mainly focus on investigating the convergence of the proposed methods. The following theorem plays a crucial role in the convergence of the proposed methods.

Theorem 2.4

Let \(w^{k+1}\) be the new iterate generated by

$$w^{k+1} =w^{k+1}\bigl( \tau^{*}_{k}\bigr) \quad \textit{or}\quad w^{k+1}=w^{k+1}(\beta_{k}). $$

Then, for any \(w^{*} \in{ \mathcal{W}}^{*}\), \(w^{k}\) and \(\tilde{w}^{k}\) are bounded, and

$$ \bigl\Vert w^{k+1} - w^{*}\bigr\Vert _{G}^{2} \le \bigl\Vert w^{k} - w^{*}\bigr\Vert _{G}^{2} -c\bigl\Vert w^{k} - \tilde{w}^{k}\bigr\Vert _{G}^{2}, $$
(2.36)

where

$$c:=\frac{(1-\mu)^{4}(1-\eta^{2})^{2}}{64(1+\mu)^{4}}>0. $$

Proof

Inequality (2.36) follows from (2.20), (2.24), (2.33), and (2.35) immediately. It follows from (2.36) that

$$\bigl\Vert w^{k+1}-w^{*}\bigr\Vert _{G}\leq\bigl\Vert w^{k}-w^{*}\bigr\Vert _{G}\leq\cdots\leq\bigl\Vert w^{1}-w^{*} \bigr\Vert _{G}, $$

and thus, \(\{ w^{k}\}\) is a bounded sequence.

It follows from (2.36) that

$$\sum_{k=1}^{\infty}c\bigl\Vert w^{k}-\tilde{w}^{k}\bigr\Vert ^{2}_{G}< + \infty, $$

which means that

$$ \lim_{k\to\infty}\bigl\Vert w^{k}- \tilde{w}^{k}\bigr\Vert _{G}=0. $$

Since \(\{ w^{k}\}\) is a bounded sequence, we conclude that \(\{\tilde {w}^{k}\}\) is also bounded. □

The convergence of the proposed methods can be proved by arguments similar to those in [1]; hence the proof is omitted.

Theorem 2.5

[1]

The sequence \(\{w^{k}\}\) generated by the proposed methods converges to some \(w^{\infty}\) which is a solution of SVI.

3 Conclusions

In this paper, we proposed two new modified logarithmic-quadratic proximal alternating direction methods for solving structured variational inequalities. The first one can be viewed as an extension of the method proposed in [1]: it performs an additional step at each iteration and employs an optimal step size to achieve substantial progress per iteration, while the second method generates the new iterate by searching for the optimal step size along a new descent direction. It is proved theoretically that the lower bound of the progress obtained by the second method is greater than that obtained by the first one. Global convergence of the proposed methods is proved under mild assumptions.

References

  1. Bnouhachem, A, Xu, MH: An inexact LQP alternating direction method for solving a class of structured variational inequalities. Comput. Math. Appl. 67, 671-680 (2014)

  2. Gabay, D, Mercier, B: A dual algorithm for the solution of nonlinear variational problems via finite-element approximations. Comput. Math. Appl. 2, 17-40 (1976)

  3. Gabay, D: Applications of the method of multipliers to variational inequalities. In: Fortin, M, Glowinski, R (eds.) Augmented Lagrange Methods: Applications to the Solution of Boundary-Valued Problems, pp. 299-331. North-Holland, Amsterdam (1983)

  4. Chen, G, Teboulle, M: A proximal-based decomposition method for convex minimization problems. Math. Program. 64, 81-101 (1994)

  5. Eckstein, J: Some saddle-function splitting methods for convex programming. Optim. Methods Softw. 4, 75-83 (1994)

  6. Kontogiorgis, S, Meyer, RR: A variable-penalty alternating directions method for convex optimization. Math. Program. 83, 29-53 (1998)

  7. He, BS, Yang, H, Wang, SL: Alternating directions method with self-adaptive penalty parameters for monotone variational inequalities. J. Optim. Theory Appl. 106, 349-368 (2000)

  8. He, BS, Zhou, J: A modified alternating direction method for convex minimization problems. Appl. Math. Lett. 13, 123-130 (2000)

  9. He, BS, Liao, LZ, Han, DR, Yang, H: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 92, 103-118 (2002)

  10. Jiang, ZK, Bnouhachem, A: A projection-based prediction-correction method for structured monotone variational inequalities. Appl. Math. Comput. 202, 747-759 (2008)

  11. Yang, J, Zhang, Y: Alternating direction algorithms for \(l_{1}\) problems in compressive sensing. SIAM J. Sci. Comput. 33(1), 250-278 (2011)

  12. Tao, M, Yuan, XM: On the \(O(1/t)\) convergence rate of alternating direction method with logarithmic-quadratic proximal regularization. SIAM J. Optim. 22(4), 1431-1448 (2012)

  13. Wang, K, Xu, LL, Han, DR: A new parallel splitting descent method for structured variational inequalities. J. Ind. Manag. Optim. 10(2), 461-476 (2014)

  14. Yuan, XM, Li, M: An LQP-based decomposition method for solving a class of variational inequalities. SIAM J. Optim. 21(4), 1309-1318 (2011)

  15. Auslender, A, Teboulle, M, Ben-Tiba, S: A logarithmic-quadratic proximal method for variational inequalities. Comput. Optim. Appl. 12, 31-40 (1999)

  16. Bnouhachem, A, Benazza, H, Khalfaoui, M: An inexact alternating direction method for solving a class of structured variational inequalities. Appl. Math. Comput. 219, 7837-7846 (2013)

  17. Bnouhachem, A, Ansari, QH: A descent LQP alternating direction method for solving variational inequality problems with separable structure. Appl. Math. Comput. 246, 519-532 (2014)

  18. Li, M: A hybrid LQP-based method for structured variational inequalities. Int. J. Comput. Math. 89(10), 1412-1425 (2012)

Acknowledgements

The authors are grateful to King Fahd University of Petroleum & Minerals for providing excellent research facilities.

Author information

Correspondence to Suliman Al-Homidan.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Bnouhachem, A., Al-Homidan, S. & Ansari, Q.H. New descent LQP alternating direction methods for solving a class of structured variational inequalities. Fixed Point Theory Appl 2015, 137 (2015). https://doi.org/10.1186/s13663-015-0387-1
