Solving systems of nonlinear matrix equations involving Lipschitzian mappings

Abstract

In this study, we derive both theoretical results and numerical methods for solving several classes of systems of nonlinear matrix equations involving Lipschitzian mappings.

2000 Mathematics Subject Classifications: 15A24; 65H05.

1 Introduction

Fixed point theory is an attractive subject that has recently drawn much attention from the physics, engineering, and mathematics communities. The Banach contraction principle [1] is one of the most important theorems in fixed point theory and has applications in many diverse areas.

Definition 1.1 Let M be a nonempty set and f : M → M be a given mapping. We say that x* ∈ M is a fixed point of f if f x* = x*.

Theorem 1.1 (Banach contraction principle [1]). Let (M, d) be a complete metric space and f : M → M be a contractive mapping, i.e., there exists λ ∈ [0, 1) such that for all x, y ∈ M,

$$d(fx, fy) \le \lambda\, d(x, y).$$
(1)

Then the mapping f has a unique fixed point x* ∈ M. Moreover, for every x_0 ∈ M, the sequence (x_k) defined by x_{k+1} = f x_k for all k = 0, 1, 2, ... converges to x*, and the error estimate is given by:

$$d(x_k, x^*) \le \frac{\lambda^k}{1 - \lambda}\, d(x_0, x_1), \quad \text{for all } k = 0, 1, 2, \dots$$
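As a concrete illustration of this error estimate (our own example, not part of the original paper), the map f(x) = cos x sends [-1, 1] into itself and is a contraction there with constant λ = sin 1 ≈ 0.8415, so the Picard iterates converge to the unique fixed point and the a priori bound λ^k/(1 − λ) d(x_0, x_1) dominates the true error:

```python
import math

# Illustrative example (not from the paper): f(x) = cos x maps [-1, 1] into itself
# and is a contraction there with constant lam = sin(1) ~ 0.8415 (mean value theorem).
f = math.cos
lam = math.sin(1.0)

x0 = 1.0
x1 = f(x0)
x_star = 0.7390851332151607          # fixed point of cos, for reference only

x = x0
for k in range(20):
    bound = lam**k / (1.0 - lam) * abs(x1 - x0)   # a priori estimate from Theorem 1.1
    print(f"k={k:2d}  x_k={x:.10f}  error={abs(x - x_star):.3e}  bound={bound:.3e}")
    x = f(x)
```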

Many generalizations of the Banach contraction principle exist in the literature. For more details, we refer the reader to [2–4].

When applying the Banach fixed point theorem, the choice of the metric plays a crucial role. In this study, we use the Thompson metric, introduced by Thompson [5], to study solutions of systems of nonlinear matrix equations involving contractive mappings.

We first review the Thompson metric on the open convex cone P(n) (n ≥ 2), the set of all n×n Hermitian positive definite matrices. We endow P(n) with the Thompson metric defined by:

$$d(A, B) = \max\{\log M(A/B),\ \log M(B/A)\},$$

where M(A/B) = inf{λ > 0 : A ≤ λB} = λ^+(B^{-1/2} A B^{-1/2}), the maximal eigenvalue of B^{-1/2} A B^{-1/2}. Here, X ≤ Y means that Y − X is positive semidefinite, and X < Y means that Y − X is positive definite. Thompson [5] (cf. [6, 7]) proved that P(n) is a complete metric space with respect to the Thompson metric d and that d(A, B) = ‖log(A^{-1/2} B A^{-1/2})‖, where ‖·‖ stands for the spectral norm. The Thompson metric exists on any open normal convex cone of a real Banach space [5, 6]; in particular, on the open convex cone of positive definite operators on a Hilbert space. It is invariant under matrix inversion and congruence transformations, that is,

$$d(A, B) = d(A^{-1}, B^{-1}) = d(MAM^*, MBM^*)$$
(2)

for any nonsingular matrix M. The other useful result is the nonpositive curvature property of the Thompson metric, that is,

$$d(X^r, Y^r) \le r\, d(X, Y), \quad r \in [0, 1].$$
(3)

By the invariance properties of the metric, we then have

$$d(MX^rM^*, MY^rM^*) \le |r|\, d(X, Y), \quad r \in [-1, 1]$$
(4)

for any X, Y ∈ P(n) and any nonsingular matrix M.

Lemma 1.1 (see [8]). For all A, B, C, D ∈ P(n), we have

$$d(A + B, C + D) \le \max\{d(A, C),\ d(B, D)\}.$$

In particular,

$$d(A + B, A + C) \le d(B, C).$$
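Readers who want to experiment with these properties numerically can use the following Python sketch (our own illustration, assuming NumPy is available; none of the helper names come from the paper). It computes the Thompson metric via d(A, B) = ‖log(A^{-1/2} B A^{-1/2})‖ and spot-checks the invariance property (2), the nonpositive curvature property (3), and Lemma 1.1 on random symmetric positive definite matrices:

```python
import numpy as np

def thompson(A, B):
    """Thompson metric d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||  (spectral norm)."""
    w, V = np.linalg.eigh(A)
    S = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T       # S = A^{-1/2}
    lam = np.linalg.eigvalsh(S @ B @ S.conj().T)          # eigenvalues of A^{-1/2} B A^{-1/2}
    return np.max(np.abs(np.log(lam)))

def random_hpd(n, rng):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)                        # symmetric positive definite

def power_hpd(A, r):
    w, V = np.linalg.eigh(A)
    return V @ np.diag(w ** r) @ V.conj().T

rng = np.random.default_rng(0)
n = 5
A, B, C, D = (random_hpd(n, rng) for _ in range(4))
M = rng.standard_normal((n, n))                           # nonsingular with probability 1

# Invariance (2): d(A,B) = d(A^{-1},B^{-1}) = d(MAM^*, MBM^*)
print(thompson(A, B),
      thompson(np.linalg.inv(A), np.linalg.inv(B)),
      thompson(M @ A @ M.T, M @ B @ M.T))

# Nonpositive curvature (3): d(A^r, B^r) <= r d(A, B) for r in [0, 1]
r = 0.3
print(thompson(power_hpd(A, r), power_hpd(B, r)), "<=", r * thompson(A, B))

# Lemma 1.1: d(A+B, C+D) <= max{d(A,C), d(B,D)}
print(thompson(A + B, C + D), "<=", max(thompson(A, C), thompson(B, D)))
```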

2 Main result

In the last few years, there has been steadily increasing interest in developing the theory and numerical methods for HPD (Hermitian positive definite) solutions to different classes of nonlinear matrix equations (see [8–21]). In this study, we consider the following problem: find (X_1, X_2, ..., X_m) ∈ (P(n))^m, a solution to the system of nonlinear matrix equations

$$X_i^{r_i} = Q_i + \sum_{j=1}^{m} \left( A_j^* F_{ij}(X_j) A_j \right)^{\alpha_{ij}}, \quad i = 1, 2, \dots, m,$$
(5)

where r_i ≥ 1, 0 < |α_ij| ≤ 1, Q_i ≥ 0, the A_i are nonsingular matrices, and F_ij : P(n) → P(n) are Lipschitzian mappings, that is,

$$\sup_{X, Y \in P(n),\, X \neq Y} \frac{d(F_{ij}(X), F_{ij}(Y))}{d(X, Y)} = k_{ij} < \infty.$$
(6)

If m = 1 and α_11 = 1, then (5) reduces to finding X ∈ P(n) such that X^r = Q + A^* F(X) A. Such a problem was studied by Liao et al. [15]. Now, we introduce the following definition.

Definition 2.1 We say that Problem (5) is Banach admissible if the following hypothesis is satisfied:

$$\max_{1 \le i \le m}\, \max_{1 \le j \le m} \left\{ \frac{|\alpha_{ij}|\, k_{ij}}{r_i} \right\} < 1.$$
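In practice, Banach admissibility is a one-line check once the parameters are collected in arrays. Below is a minimal sketch (ours, not from the paper; the array names alpha, k and r are our own):

```python
import numpy as np

def is_banach_admissible(alpha, k, r):
    """Check max_{i,j} |alpha_ij| * k_ij / r_i < 1 for m x m arrays alpha, k and vector r."""
    alpha, k, r = np.asarray(alpha), np.asarray(k), np.asarray(r)
    q = np.max(np.abs(alpha) * k / r[:, None])   # r_i divides row i
    return q < 1.0, q

# Example 3.1 below: m = 1, alpha_11 = 1/3, k_11 <= 1/4, r_1 = 2  ->  q = 1/24
print(is_banach_admissible([[1/3]], [[1/4]], [2.0]))   # (True, 0.041666...)
```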

Our main result is the following.

Theorem 2.1 If Problem (5) is Banach admissible, then it has one and only one solution (X_1^*, X_2^*, ..., X_m^*) ∈ (P(n))^m. Moreover, for any (X_1(0), X_2(0), ..., X_m(0)) ∈ (P(n))^m, the sequences (X_i(k))_{k≥0}, 1 ≤ i ≤ m, defined by:

$$X_i(k+1) = \left( Q_i + \sum_{j=1}^{m} \left( A_j^* F_{ij}(X_j(k)) A_j \right)^{\alpha_{ij}} \right)^{1/r_i},$$
(7)

converge respectively to X_1^*, X_2^*, ..., X_m^*, and the error estimate is

$$\max\{ d(X_1(k), X_1^*), d(X_2(k), X_2^*), \dots, d(X_m(k), X_m^*) \} \le \frac{q_m^k}{1 - q_m}\, \max\{ d(X_1(1), X_1(0)), d(X_2(1), X_2(0)), \dots, d(X_m(1), X_m(0)) \},$$
(8)

where

$$q_m = \max_{1 \le i \le m}\, \max_{1 \le j \le m} \left\{ \frac{|\alpha_{ij}|\, k_{ij}}{r_i} \right\}.$$

Proof. Define the mapping G : (P(n))^m → (P(n))^m by:

$$G(X_1, X_2, \dots, X_m) = \big( G_1(X_1, X_2, \dots, X_m),\ G_2(X_1, X_2, \dots, X_m),\ \dots,\ G_m(X_1, X_2, \dots, X_m) \big),$$

for all X = (X_1, X_2, ..., X_m) ∈ (P(n))^m, where

$$G_i(X) = \left( Q_i + \sum_{j=1}^{m} \left( A_j^* F_{ij}(X_j) A_j \right)^{\alpha_{ij}} \right)^{1/r_i},$$

for all i = 1, 2, ..., m. We endow (P(n))^m with the metric d_m defined by:

$$d_m\big( (X_1, X_2, \dots, X_m),\ (Y_1, Y_2, \dots, Y_m) \big) = \max\{ d(X_1, Y_1), d(X_2, Y_2), \dots, d(X_m, Y_m) \},$$

for all X = (X_1, X_2, ..., X_m), Y = (Y_1, Y_2, ..., Y_m) ∈ (P(n))^m. Since (P(n), d) is complete, ((P(n))^m, d_m) is a complete metric space.

We claim that

$$d_m(G(X), G(Y)) \le q_m\, d_m(X, Y), \quad \text{for all } X, Y \in (P(n))^m.$$
(9)

For all X, Y ∈ (P(n))^m, we have

$$d_m(G(X), G(Y)) = \max_{1 \le i \le m} \{ d(G_i(X), G_i(Y)) \}.$$
(10)

On the other hand, using the properties of the Thompson metric (see Section 1), for all i = 1, 2, ..., m, we have

$$\begin{aligned}
d(G_i(X), G_i(Y)) &= d\left( \Big( Q_i + \sum_{j=1}^{m} (A_j^* F_{ij}(X_j) A_j)^{\alpha_{ij}} \Big)^{1/r_i},\ \Big( Q_i + \sum_{j=1}^{m} (A_j^* F_{ij}(Y_j) A_j)^{\alpha_{ij}} \Big)^{1/r_i} \right) \\
&\le \frac{1}{r_i}\, d\left( Q_i + \sum_{j=1}^{m} (A_j^* F_{ij}(X_j) A_j)^{\alpha_{ij}},\ Q_i + \sum_{j=1}^{m} (A_j^* F_{ij}(Y_j) A_j)^{\alpha_{ij}} \right) \\
&\le \frac{1}{r_i}\, d\left( \sum_{j=1}^{m} (A_j^* F_{ij}(X_j) A_j)^{\alpha_{ij}},\ \sum_{j=1}^{m} (A_j^* F_{ij}(Y_j) A_j)^{\alpha_{ij}} \right) \\
&= \frac{1}{r_i}\, d\left( (A_1^* F_{i1}(X_1) A_1)^{\alpha_{i1}} + \sum_{j=2}^{m} (A_j^* F_{ij}(X_j) A_j)^{\alpha_{ij}},\ (A_1^* F_{i1}(Y_1) A_1)^{\alpha_{i1}} + \sum_{j=2}^{m} (A_j^* F_{ij}(Y_j) A_j)^{\alpha_{ij}} \right) \\
&\le \frac{1}{r_i} \max\left\{ d\big( (A_1^* F_{i1}(X_1) A_1)^{\alpha_{i1}},\ (A_1^* F_{i1}(Y_1) A_1)^{\alpha_{i1}} \big),\ d\Big( \sum_{j=2}^{m} (A_j^* F_{ij}(X_j) A_j)^{\alpha_{ij}},\ \sum_{j=2}^{m} (A_j^* F_{ij}(Y_j) A_j)^{\alpha_{ij}} \Big) \right\} \\
&\le \frac{1}{r_i} \max\left\{ d\big( (A_1^* F_{i1}(X_1) A_1)^{\alpha_{i1}},\ (A_1^* F_{i1}(Y_1) A_1)^{\alpha_{i1}} \big),\ \dots,\ d\big( (A_m^* F_{im}(X_m) A_m)^{\alpha_{im}},\ (A_m^* F_{im}(Y_m) A_m)^{\alpha_{im}} \big) \right\} \\
&\le \frac{1}{r_i} \max\left\{ |\alpha_{i1}|\, d\big( A_1^* F_{i1}(X_1) A_1,\ A_1^* F_{i1}(Y_1) A_1 \big),\ \dots,\ |\alpha_{im}|\, d\big( A_m^* F_{im}(X_m) A_m,\ A_m^* F_{im}(Y_m) A_m \big) \right\} \\
&= \frac{1}{r_i} \max\left\{ |\alpha_{i1}|\, d\big( F_{i1}(X_1), F_{i1}(Y_1) \big),\ \dots,\ |\alpha_{im}|\, d\big( F_{im}(X_m), F_{im}(Y_m) \big) \right\} \\
&\le \frac{1}{r_i} \max\left\{ |\alpha_{i1}|\, k_{i1}\, d(X_1, Y_1),\ \dots,\ |\alpha_{im}|\, k_{im}\, d(X_m, Y_m) \right\} \\
&\le \max_{1 \le j \le m}\left\{ \frac{|\alpha_{ij}|\, k_{ij}}{r_i} \right\} \max\{ d(X_1, Y_1), \dots, d(X_m, Y_m) \} \\
&= \max_{1 \le j \le m}\left\{ \frac{|\alpha_{ij}|\, k_{ij}}{r_i} \right\} d_m(X, Y).
\end{aligned}$$

Thus, we have shown that, for all i = 1, 2, ..., m,

$$d(G_i(X), G_i(Y)) \le \max_{1 \le j \le m}\left\{ \frac{|\alpha_{ij}|\, k_{ij}}{r_i} \right\} d_m(X, Y).$$
(11)

Now, (9) follows immediately from (10) and (11). Applying the Banach contraction principle (Theorem 1.1) to the mapping G, we get the desired result. □
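For completeness, here is how the iteration (7) might look in code. This is a sketch of our own (not the authors' implementation): the data of Problem (5) are supplied as NumPy arrays and callables, and the stopping rule uses a simple spectral-norm difference between successive iterates rather than the Thompson-metric bound (8).

```python
import numpy as np

def hpd_power(X, p):
    """X^p for a Hermitian positive definite matrix X, via the spectral decomposition."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(w ** p) @ V.conj().T

def solve_system(Q, A, F, alpha, r, X0, tol=1e-12, max_iter=200):
    """
    Iterate (7): X_i(k+1) = (Q_i + sum_j (A_j^* F_ij(X_j(k)) A_j)^{alpha_ij})^{1/r_i}.
    Q, A, X0 : lists of matrices (Q_i >= 0, A_j nonsingular, X0_i in P(n));
    F        : F[i][j] is a callable P(n) -> P(n);
    alpha    : m x m array of exponents; r : list of exponents r_i >= 1.
    """
    m = len(Q)
    X = [Xi.copy() for Xi in X0]
    for k in range(max_iter):
        X_new = []
        for i in range(m):
            S = Q[i].copy()
            for j in range(m):
                S = S + hpd_power(A[j].conj().T @ F[i][j](X[j]) @ A[j], alpha[i][j])
            X_new.append(hpd_power(S, 1.0 / r[i]))
        res = max(np.linalg.norm(Xn - Xo, 2) for Xn, Xo in zip(X_new, X))
        X = X_new
        if res < tol:
            break
    return X, k + 1, res
```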

3 Examples and numerical results

3.1 The matrix equation X = (((X^{1/2} + B_1)^{-1/2} + B_2)^{1/3} + B_3)^{1/2}

We consider the problem: find X ∈ P(n), a solution to

$$X = \left( \left( \left( X^{1/2} + B_1 \right)^{-1/2} + B_2 \right)^{1/3} + B_3 \right)^{1/2},$$
(12)

where B_i ≥ 0 for all i = 1, 2, 3.

Problem (12) is equivalent to: find X_1 ∈ P(n), a solution to

$$X_1^{r_1} = Q_1 + \left( A_1^* F_{11}(X_1) A_1 \right)^{\alpha_{11}},$$
(13)

where r_1 = 2, Q_1 = B_3, A_1 = I_n (the identity matrix), α_11 = 1/3, and F_11 : P(n) → P(n) is given by:

$$F_{11}(X) = \left( X^{1/2} + B_1 \right)^{-1/2} + B_2.$$

Proposition 3.1 F_11 is a Lipschitzian mapping with k_11 ≤ 1/4.

Proof. Using the properties of the Thompson metric, for all X, Y ∈ P(n), we have

$$\begin{aligned}
d(F_{11}(X), F_{11}(Y)) &= d\left( \left( X^{1/2} + B_1 \right)^{-1/2} + B_2,\ \left( Y^{1/2} + B_1 \right)^{-1/2} + B_2 \right) \\
&\le d\left( \left( X^{1/2} + B_1 \right)^{-1/2},\ \left( Y^{1/2} + B_1 \right)^{-1/2} \right) \\
&\le \tfrac{1}{2}\, d\left( X^{1/2} + B_1,\ Y^{1/2} + B_1 \right) \\
&\le \tfrac{1}{2}\, d\left( X^{1/2},\ Y^{1/2} \right) \\
&\le \tfrac{1}{4}\, d(X, Y).
\end{aligned}$$

Thus, we have k_11 ≤ 1/4. □

Proposition 3.2 Problem (13) is Banach admissible.

Proof. We have

$$\frac{|\alpha_{11}|\, k_{11}}{r_1} \le \frac{1}{3} \cdot \frac{1}{4} \cdot \frac{1}{2} = \frac{1}{24} < 1.$$

This implies that Problem (13) is Banach admissible. □

Theorem 3.1 Problem (13) has one and only one solution X_1^* ∈ P(n). Moreover, for any X_1(0) ∈ P(n), the sequence (X_1(k))_{k≥0} defined by:

$$X_1(k+1) = \left( \left( \left( X_1(k)^{1/2} + B_1 \right)^{-1/2} + B_2 \right)^{1/3} + B_3 \right)^{1/2},$$
(14)

converges to X_1^*, and the error estimate is

$$d(X_1(k), X_1^*) \le \frac{q_1^k}{1 - q_1}\, d(X_1(1), X_1(0)),$$
(15)

where q_1 = 1/24.

Proof. Follows from Propositions 3.1, 3.2 and Theorem 2.1. □
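Before presenting the numerical data, here is a sketch of iteration (14) in Python (our own illustration, not the authors' code). The function g is the right-hand side of (12), and the stopping criterion mirrors the residual R(X) = ‖X − g(X)‖ reported below, using the spectral norm as our choice of norm; the matrices B1, B2, B3 must be supplied (e.g. the 5 × 5 matrices listed next).

```python
import numpy as np

def hpd_power(X, p):
    """X^p for a Hermitian positive definite matrix X."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(w ** p) @ V.conj().T

def g(X, B1, B2, B3):
    """Right-hand side of (12): (((X^{1/2} + B1)^{-1/2} + B2)^{1/3} + B3)^{1/2}."""
    return hpd_power(hpd_power(hpd_power(hpd_power(X, 0.5) + B1, -0.5) + B2, 1/3) + B3, 0.5)

def solve_eq12(B1, B2, B3, X0, tol=1e-12, max_iter=100):
    X = X0.copy()
    for k in range(1, max_iter + 1):
        X = g(X, B1, B2, B3)
        res = np.linalg.norm(X - g(X, B1, B2, B3), 2)   # residual R(X(k)); extra call, kept simple
        if res < tol:
            break
    return X, k, res
```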

Now, we give a numerical example to illustrate the result of Theorem 3.1.

We consider the 5 × 5 positive matrices B1, B2, and B3 given by:

$$B_1 = \begin{pmatrix}
1.0000 & 0.5000 & 0.3333 & 0.2500 & 0 \\
0.5000 & 1.0000 & 0.6667 & 0.5000 & 0 \\
0.3333 & 0.6667 & 1.0000 & 0.7500 & 0 \\
0.2500 & 0.5000 & 0.7500 & 1.0000 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}, \quad
B_2 = \begin{pmatrix}
1.4236 & 1.3472 & 1.1875 & 1.0000 & 0 \\
1.3472 & 1.9444 & 1.8750 & 1.6250 & 0 \\
1.1875 & 1.8750 & 2.1181 & 1.9167 & 0 \\
1.0000 & 1.6250 & 1.9167 & 1.8750 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}$$

and

$$B_3 = \begin{pmatrix}
2.7431 & 3.3507 & 3.3102 & 2.9201 & 0 \\
3.3507 & 4.6806 & 4.8391 & 4.3403 & 0 \\
3.3102 & 4.8391 & 5.2014 & 4.7396 & 0 \\
2.9201 & 4.3403 & 4.7396 & 4.3750 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}.$$

We use the iterative algorithm (14) to solve (12) for different values of X_1(0):

$$X_1(0) = M_1 = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 \\
0 & 0 & 3 & 0 & 0 \\
0 & 0 & 0 & 4 & 0 \\
0 & 0 & 0 & 0 & 5
\end{pmatrix}, \quad
X_1(0) = M_2 = \begin{pmatrix}
0.02 & 0.01 & 0 & 0 & 0 \\
0.01 & 0.02 & 0.01 & 0 & 0 \\
0 & 0.01 & 0.02 & 0.01 & 0 \\
0 & 0 & 0.01 & 0.02 & 0.01 \\
0 & 0 & 0 & 0.01 & 0.02
\end{pmatrix}$$

and

$$X_1(0) = M_3 = \begin{pmatrix}
30 & 15 & 10 & 7.5 & 6 \\
15 & 30 & 20 & 15 & 12 \\
10 & 20 & 30 & 22.5 & 18 \\
7.5 & 15 & 22.5 & 30 & 24 \\
6 & 12 & 18 & 24 & 30
\end{pmatrix}.$$

For X_1(0) = M_1, after 9 iterations, we obtain the following approximation of the unique positive definite solution:

$$X_1(9) = \begin{pmatrix}
1.6819 & 0.69442 & 0.61478 & 0.51591 & 0 \\
0.69442 & 1.9552 & 0.96059 & 0.84385 & 0 \\
0.61478 & 0.96059 & 2.0567 & 0.9785 & 0 \\
0.51591 & 0.84385 & 0.9785 & 1.9227 & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}$$

and its residual error

$$R(X_1(9)) = \left\| X_1(9) - \left( \left( \left( X_1(9)^{1/2} + B_1 \right)^{-1/2} + B_2 \right)^{1/3} + B_3 \right)^{1/2} \right\| = 6.346 \times 10^{-13}.$$

For X_1(0) = M_2, after 9 iterations, the residual error is

$$R(X_1(9)) = 1.5884 \times 10^{-12}.$$

For X_1(0) = M_3, after 9 iterations, the residual error is

$$R(X_1(9)) = 1.1123 \times 10^{-12}.$$

The convergence history of the algorithm for the different values of X_1(0) is given in Figure 1, where c1 corresponds to X_1(0) = M_1, c2 corresponds to X_1(0) = M_2, and c3 corresponds to X_1(0) = M_3.

Figure 1. Convergence history for Eq. (12).

3.2 System of three nonlinear matrix equations

We consider the problem: find (X_1, X_2, X_3) ∈ (P(n))^3, a solution to

$$\begin{aligned}
X_1 &= I_n + A_1^* \left( X_1^{1/3} + B_1 \right)^{1/2} A_1 + A_2^* \left( X_2^{1/4} + B_2 \right)^{1/3} A_2 + A_3^* \left( X_3^{1/5} + B_3 \right)^{1/4} A_3, \\
X_2 &= I_n + A_1^* \left( X_1^{1/5} + B_1 \right)^{1/4} A_1 + A_2^* \left( X_2^{1/3} + B_2 \right)^{1/2} A_2 + A_3^* \left( X_3^{1/4} + B_3 \right)^{1/3} A_3, \\
X_3 &= I_n + A_1^* \left( X_1^{1/4} + B_1 \right)^{1/3} A_1 + A_2^* \left( X_2^{1/5} + B_2 \right)^{1/4} A_2 + A_3^* \left( X_3^{1/3} + B_3 \right)^{1/2} A_3,
\end{aligned}$$
(16)

where the A_i are n × n nonsingular matrices.

Problem (16) is equivalent to: find (X_1, X_2, X_3) ∈ (P(n))^3, a solution to

$$X_i^{r_i} = Q_i + \sum_{j=1}^{3} \left( A_j^* F_{ij}(X_j) A_j \right)^{\alpha_{ij}}, \quad i = 1, 2, 3,$$
(17)

where r_1 = r_2 = r_3 = 1, Q_1 = Q_2 = Q_3 = I_n, and for all i, j ∈ {1, 2, 3}, α_ij = 1,

$$F_{ij}(X_j) = \left( X_j^{\theta_{ij}} + B_j \right)^{\gamma_{ij}}, \quad
\theta = (\theta_{ij}) = \begin{pmatrix} 1/3 & 1/4 & 1/5 \\ 1/5 & 1/3 & 1/4 \\ 1/4 & 1/5 & 1/3 \end{pmatrix}, \quad
\gamma = (\gamma_{ij}) = \begin{pmatrix} 1/2 & 1/3 & 1/4 \\ 1/4 & 1/2 & 1/3 \\ 1/3 & 1/4 & 1/2 \end{pmatrix}.$$

Proposition 3.3 For all i, j ∈ {1, 2, 3}, F_ij : P(n) → P(n) is a Lipschitzian mapping with k_ij ≤ γ_ij θ_ij.

Proof. For all X, Y ∈ P(n), since θ_ij, γ_ij ∈ (0, 1), we have

$$\begin{aligned}
d(F_{ij}(X), F_{ij}(Y)) &= d\left( \left( X^{\theta_{ij}} + B_j \right)^{\gamma_{ij}},\ \left( Y^{\theta_{ij}} + B_j \right)^{\gamma_{ij}} \right) \\
&\le \gamma_{ij}\, d\left( X^{\theta_{ij}} + B_j,\ Y^{\theta_{ij}} + B_j \right) \\
&\le \gamma_{ij}\, d\left( X^{\theta_{ij}},\ Y^{\theta_{ij}} \right) \\
&\le \gamma_{ij} \theta_{ij}\, d(X, Y).
\end{aligned}$$

Thus, F_ij is a Lipschitzian mapping with k_ij ≤ γ_ij θ_ij. □

Proposition 3.4 Problem (17) is Banach admissible.

Proof. We have

$$\max_{1 \le i \le 3}\, \max_{1 \le j \le 3} \left\{ \frac{|\alpha_{ij}|\, k_{ij}}{r_i} \right\} = \max_{1 \le i, j \le 3} k_{ij} \le \max_{1 \le i, j \le 3} \gamma_{ij} \theta_{ij} = \frac{1}{6} < 1.$$

This implies that Problem (17) is Banach admissible. □

Theorem 3.2 Problem (16) has one and only one solution (X_1^*, X_2^*, X_3^*) ∈ (P(n))^3. Moreover, for any (X_1(0), X_2(0), X_3(0)) ∈ (P(n))^3, the sequences (X_i(k))_{k≥0}, 1 ≤ i ≤ 3, defined by:

$$X_i(k+1) = I_n + \sum_{j=1}^{3} A_j^* F_{ij}(X_j(k)) A_j,$$
(18)

converge respectively to X_1^*, X_2^*, X_3^*, and the error estimate is

$$\max\{ d(X_1(k), X_1^*), d(X_2(k), X_2^*), d(X_3(k), X_3^*) \} \le \frac{q_3^k}{1 - q_3}\, \max\{ d(X_1(1), X_1(0)), d(X_2(1), X_2(0)), d(X_3(1), X_3(0)) \},$$
(19)

where q_3 = 1/6.

Proof. Follows from Propositions 3.3, 3.4 and Theorem 2.1. □
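A sketch of iteration (18) in Python follows (again our own illustration rather than the authors' code). It assumes the matrices A_j, B_j are supplied, for instance those listed below, and it stops when the residual R defined later in this subsection drops below a tolerance; the spectral norm is our choice.

```python
import numpy as np

def hpd_power(X, p):
    """X^p for a Hermitian positive definite matrix X."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(w ** p) @ V.conj().T

# Exponent tables theta = (theta_ij) and gamma = (gamma_ij) from (17)
theta = np.array([[1/3, 1/4, 1/5], [1/5, 1/3, 1/4], [1/4, 1/5, 1/3]])
gamma = np.array([[1/2, 1/3, 1/4], [1/4, 1/2, 1/3], [1/3, 1/4, 1/2]])

def step(X, A, B):
    """One sweep of (18): X_i <- I + sum_j A_j^* (X_j^{theta_ij} + B_j)^{gamma_ij} A_j."""
    n = X[0].shape[0]
    X_new = []
    for i in range(3):
        S = np.eye(n)
        for j in range(3):
            Fij = hpd_power(hpd_power(X[j], theta[i, j]) + B[j], gamma[i, j])
            S = S + A[j].conj().T @ Fij @ A[j]
        X_new.append(S)
    return X_new

def solve_system16(A, B, X0, tol=1e-14, max_iter=100):
    X = [Xi.copy() for Xi in X0]
    for k in range(1, max_iter + 1):
        X = step(X, A, B)
        G = step(X, A, B)                      # recomputed only to evaluate the residual R
        res = max(np.linalg.norm(Xi - Gi, 2) for Xi, Gi in zip(X, G))
        if res < tol:
            break
    return X, k, res
```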

Now, we give a numerical example to illustrate the result of Theorem 3.2.

We consider the 3 × 3 positive matrices B1, B2 and B3 given by:

$$B_1 = \begin{pmatrix} 1 & 0.5 & 0 \\ 0.5 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
B_2 = \begin{pmatrix} 1.25 & 1 & 0 \\ 1 & 1.25 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \text{and} \quad
B_3 = \begin{pmatrix} 1.75 & 1.625 & 0 \\ 1.625 & 1.75 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

We consider the 3 × 3 nonsingular matrices A1, A2 and A3 given by:

$$A_1 = \begin{pmatrix} 0.3107 & -0.5972 & 0.7395 \\ 0.9505 & 0.1952 & -0.2417 \\ 0 & -0.7780 & -0.6282 \end{pmatrix}, \quad
A_2 = \begin{pmatrix} 1.5 & -2 & 0.5 \\ 0.5 & 0 & -0.5 \\ -0.5 & 2 & -1.5 \end{pmatrix}$$

and

$$A_3 = \begin{pmatrix} -1 & -1 & 1 \\ 1 & -1 & 1 \\ -1 & -1 & -1 \end{pmatrix}.$$

We use the iterative algorithm (18) to solve Problem (16) for different values of (X_1(0), X_2(0), X_3(0)):

$$X_1(0) = X_2(0) = X_3(0) = M_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}, \quad
X_1(0) = X_2(0) = X_3(0) = M_2 = \begin{pmatrix} 0.02 & 0.01 & 0 \\ 0.01 & 0.02 & 0.01 \\ 0 & 0.01 & 0.02 \end{pmatrix}$$

and

$$X_1(0) = X_2(0) = X_3(0) = M_3 = \begin{pmatrix} 30 & 15 & 10 \\ 15 & 30 & 20 \\ 10 & 20 & 30 \end{pmatrix}.$$

The residual error at iteration k is given by:

$$R(X_1(k), X_2(k), X_3(k)) = \max_{1 \le i \le 3} \left\| X_i(k) - I_3 - \sum_{j=1}^{3} A_j^* F_{ij}(X_j(k)) A_j \right\|.$$

For X_1(0) = X_2(0) = X_3(0) = M_1, after 15 iterations, we obtain

$$X_1(15) = \begin{pmatrix} 10.565 & -4.4081 & 2.7937 \\ -4.4081 & 16.883 & -6.6118 \\ 2.7937 & -6.6118 & 9.7152 \end{pmatrix}, \quad
X_2(15) = \begin{pmatrix} 11.512 & -5.8429 & 3.1922 \\ -5.8429 & 19.485 & -7.9308 \\ 3.1922 & -7.9308 & 10.68 \end{pmatrix}$$

and

$$X_3(15) = \begin{pmatrix} 11.235 & -3.5241 & 3.2712 \\ -3.5241 & 17.839 & -7.8035 \\ 3.2712 & -7.8035 & 11.618 \end{pmatrix}.$$

The residual error is given by:

$$R(X_1(15), X_2(15), X_3(15)) = 4.722 \times 10^{-15}.$$

For X_1(0) = X_2(0) = X_3(0) = M_2, after 15 iterations, the residual error is given by:

$$R(X_1(15), X_2(15), X_3(15)) = 4.911 \times 10^{-15}.$$

For X_1(0) = X_2(0) = X_3(0) = M_3, after 15 iterations, the residual error is given by:

$$R(X_1(15), X_2(15), X_3(15)) = 8.869 \times 10^{-15}.$$

The convergence history of the algorithm for the different values of X_1(0), X_2(0), and X_3(0) is given in Figure 2, where c1 corresponds to X_1(0) = X_2(0) = X_3(0) = M_1, c2 corresponds to X_1(0) = X_2(0) = X_3(0) = M_2, and c3 corresponds to X_1(0) = X_2(0) = X_3(0) = M_3.

Figure 2. Convergence history for system (16).

References

  1. Banach S: Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fund Math 1922, 3: 133–181.


  2. Agarwal R, Meehan M, O'Regan D: Fixed Point Theory and Applications. Cambridge Tracts in Mathematics, vol. 141. Cambridge University Press, Cambridge, UK; 2001.

  3. Ćirić L: A generalization of Banach's contraction principle. Proc Am Math Soc 1974, 45(2): 267–273.

  4. Kirk W, Sims B: Handbook of Metric Fixed Point Theory. Kluwer, Dordrecht; 2001.


  5. Thompson A: On certain contraction mappings in a partially ordered vector space. Proc Am Math Soc 1963, 14: 438–443.


  6. Nussbaum R: Hilbert's projective metric and iterated nonlinear maps. Mem Amer Math Soc 1988,75(391):1–137.


  7. Nussbaum R: Finsler structures for the part metric and Hilbert's projective metric and applications to ordinary differential equations. Differ Integral Equ 1994, 7: 1649–1707.

  8. Lim Y: Solving the nonlinear matrix equation X = Q + ∑_{i=1}^{m} M_i X^{δ_i} M_i^* via a contraction principle. Linear Algebra Appl 2009, 430: 1380–1383. 10.1016/j.laa.2008.10.034

  9. Duan X, Liao A: On Hermitian positive definite solution of the matrix equation X − ∑_{i=1}^{m} A_i^* X^r A_i = Q. J Comput Appl Math 2009, 229: 27–36. 10.1016/j.cam.2008.10.018

  10. Duan X, Liao A, Tang B: On the nonlinear matrix equation X − ∑_{i=1}^{m} A_i^* X^{δ_i} A_i = Q. Linear Algebra Appl 2008, 429: 110–121. 10.1016/j.laa.2008.02.014

  11. Duan X, Peng Z, Duan F: Positive definite solution of two kinds of nonlinear matrix equations. Surv Math Appl 2009, 4: 179–190.

  12. Hasanov V: Positive definite solutions of the matrix equations X ± A^* X^{-q} A = Q. Linear Algebra Appl 2005, 404: 166–182.

  13. Ivanov I, Hasanov V, Uhlig F: Improved methods and starting values to solve the matrix equations X ± A^* X^{-1} A = I iteratively. Math Comput 2004, 74: 263–278. 10.1090/S0025-5718-04-01636-9

  14. Ivanov I, Minchev B, Hasanov V: Positive definite solutions of the equation X − A^* X^{-1} A = I. In Application of Mathematics in Engineering'24, Proceedings of the XXIV Summer School Sozopol'98. Heron Press; 1999: 113–116.

  15. Liao A, Yao G, Duan X: Thompson metric method for solving a class of nonlinear matrix equation. Appl Math Comput 2010, 216: 1831–1836. 10.1016/j.amc.2009.12.022


  16. Liu X, Gao H: On the positive definite solutions of the matrix equations X^s ± A^T X^{-t} A = I_n. Linear Algebra Appl 2003, 368: 83–97.

  17. Ran A, Reurings M, Rodman L: A perturbation analysis for nonlinear selfadjoint operator equations. SIAM J Matrix Anal Appl 2006, 28: 89–104. 10.1137/05062873

  18. Shi X, Liu F, Umoh H, Gibson F: Two kinds of nonlinear matrix equations and their corresponding matrix sequences. Linear Multilinear Algebra 2004, 52: 1–15. 10.1080/0308108031000112606


  19. Zhan X, Xie J: On the matrix equation X + A^T X^{-1} A = I. Linear Algebra Appl 1996, 247: 337–345.

  20. Dehghan M, Hajarian M: An efficient algorithm for solving general coupled matrix equations and its application. Math Comput Model 2010, 51: 1118–1134. 10.1016/j.mcm.2009.12.022

  21. Zhou B, Duan G, Li Z: Gradient based iterative algorithm for solving coupled matrix equations. Syst Control Lett 2009, 58: 327–333. 10.1016/j.sysconle.2008.12.004


Author information

Corresponding author

Correspondence to Maher Berzig.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Berzig, M., Samet, B. Solving systems of nonlinear matrix equations involving Lipshitzian mappings. Fixed Point Theory Appl 2011, 89 (2011). https://doi.org/10.1186/1687-1812-2011-89

Download citation

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/1687-1812-2011-89

Keywords