- Research Article
- Open Access

# Nonexpansive Matrices with Applications to Solutions of Linear Systems by Fixed Point Iterations

*Fixed Point Theory and Applications*
**volume 2010**, Article number: 821928 (2009)

## Abstract

We characterize (i) matrices which are nonexpansive with respect to some matrix norms, and (ii) matrices whose average iterates approach zero or are bounded. Then we apply these results to iterative solutions of a system of linear equations.

Throughout this paper, $\mathbb{R}$ will denote the set of real numbers, $\mathbb{C}$ the set of complex numbers, and $M_n$ the complex vector space of $n \times n$ complex matrices. A function $\|\cdot\| : M_n \to \mathbb{R}$ is a *matrix norm* if for all $A, B \in M_n$, it satisfies the following five axioms:

(1) $\|A\| \ge 0$;

(2) $\|A\| = 0$ if and only if $A = 0$;

(3) $\|cA\| = |c|\,\|A\|$ for all complex scalars $c$;

(4) $\|A + B\| \le \|A\| + \|B\|$;

(5) $\|AB\| \le \|A\|\,\|B\|$.

Let $\|\cdot\|$ be a norm on $\mathbb{C}^n$. Define $\|\cdot\|$ on $M_n$ by

$$\|A\| = \max_{\|x\| = 1} \|Ax\|.$$

This norm on $M_n$ is a matrix norm, called the *matrix norm induced by* $\|\cdot\|$. A matrix norm on $M_n$ is called an *induced matrix norm* if it is induced by some norm on $\mathbb{C}^n$. If $N(\cdot)$ is a matrix norm on $M_n$, there exists an induced matrix norm $\|\cdot\|$ on $M_n$ such that $\|A\| \le N(A)$ for all $A \in M_n$ (cf. [1, page 297]). Indeed one can take $\|\cdot\|$ to be the matrix norm induced by the norm on $\mathbb{C}^n$ defined by

$$\|x\| = N([\,x\ x\ \cdots\ x\,]),$$

where $[\,x\ x\ \cdots\ x\,]$ is the matrix in $M_n$ whose columns are all equal to $x$. For $A \in M_n$, $\rho(A)$ denotes the spectral radius of $A$.

Let $\|\cdot\|$ be a norm on $\mathbb{C}^n$. A matrix $A \in M_n$ is a *contraction* relative to $\|\cdot\|$ if it is a contraction as a transformation from $\mathbb{C}^n$ into $\mathbb{C}^n$; that is, there exists $c < 1$ such that

$$\|Ax - Ay\| \le c\,\|x - y\| \quad \text{for all } x, y \in \mathbb{C}^n.$$

Evidently this means that for the matrix norm induced by $\|\cdot\|$, $\|A\| < 1$. The following theorem is well known (cf. [1, Sections 5.6.9–5.6.12]).

Theorem 1.

For a matrix $A \in M_n$, the following are equivalent:

(a) $A$ is a contraction relative to a norm on $\mathbb{C}^n$;

(b) $\|A\| < 1$ for some induced matrix norm $\|\cdot\|$;

(c) $N(A) < 1$ for some matrix norm $N(\cdot)$;

(d) $\lim_{m \to \infty} A^m = 0$;

(e) $\rho(A) < 1$.

That (b) follows from (c) is a consequence of the previous remark about an induced matrix norm being less than a matrix norm. Since all norms on $M_n$ are equivalent, the limit in (d) can be taken relative to any norm on $M_n$, so that (d) is equivalent to the statement that all the entries of $A^m$ converge to zero as $m \to \infty$, which in turn is equivalent to $A^m x \to 0$ for all $x \in \mathbb{C}^n$.
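The equivalence of (d) and (e) is easy to check numerically. A minimal sketch in Python with NumPy (the matrix is an illustrative choice, not from the paper):

```python
import numpy as np

# Example matrix with spectral radius < 1 (condition (e) of Theorem 1).
A = np.array([[0.5, 0.3],
              [0.1, 0.4]])
rho = max(abs(np.linalg.eigvals(A)))   # spectral radius rho(A)
assert rho < 1

# Condition (d): the powers A^m converge to the zero matrix.
Am = np.linalg.matrix_power(A, 200)
print(rho)                # about 0.63
print(np.abs(Am).max())   # essentially 0
```

Because every matrix norm dominates $\rho(A)$, the decay of $A^m$ is geometric with ratio approaching $\rho(A)$.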

In this paper, we first characterize matrices $A$ in $M_n$ that are *nonexpansive* relative to some norm $\|\cdot\|$ on $\mathbb{C}^n$, that is,

$$\|Ax - Ay\| \le \|x - y\| \quad \text{for all } x, y \in \mathbb{C}^n.$$

Then we characterize those $A$ such that the average

$$A_m = \frac{A + A^2 + \cdots + A^m}{m}$$

converges to zero as $m \to \infty$, and those $A$ for which $(A_m)$ is bounded.

Finally we apply our theory to the approximation of solutions of $Ax = b$ using iterative methods (fixed point iteration methods).

Theorem 2.

For a matrix $A \in M_n$, the following are equivalent:

(a) $A$ is nonexpansive relative to some norm on $\mathbb{C}^n$;

(b) $\|A\| \le 1$ for some induced matrix norm $\|\cdot\|$;

(c) $N(A) \le 1$ for some matrix norm $N(\cdot)$;

(d) the sequence $(A^m)_{m \ge 1}$ is bounded;

(e) $\rho(A) \le 1$, and for any eigenvalue $\lambda$ of $A$ with $|\lambda| = 1$, the geometric multiplicity of $\lambda$ is equal to its algebraic multiplicity.

Proof.

As in the previous theorem, (a), (b), and (c) are equivalent. Assume that (b) holds. Let the matrix norm $\|\cdot\|$ be induced by a vector norm $\|\cdot\|$ of $\mathbb{C}^n$. Then

$$\|A^m x\| \le \|A^m\|\,\|x\| \le \|A\|^m \|x\| \le \|x\|,$$

proving that $(A^m x)$ is bounded in norm for every $x \in \mathbb{C}^n$. Taking $x = e_1, \ldots, e_n$, the standard basis vectors, we see that the set of all columns of the matrices $A^m$ is bounded. This proves that $(A^m)$ is bounded in the maximum column sum matrix norm ([1, page 294]), and hence in any norm on $M_n$. Note that the last part of the proof also follows from the Uniform Boundedness Principle (see, e.g., [2, Corollary, page 66]).

Now we prove that (d) implies (e). Suppose that $A$ has an eigenvalue $\lambda$ with $|\lambda| > 1$. Let $x$ be an eigenvector corresponding to $\lambda$. Then

$$\|A^m x\| = |\lambda|^m \|x\| \to \infty$$

as $m \to \infty$, where $\|\cdot\|$ is any vector norm on $\mathbb{C}^n$. This contradicts (d). Hence $\rho(A) \le 1$. Now suppose that $\lambda$ is an eigenvalue with $|\lambda| = 1$ and the Jordan block corresponding to $\lambda$ is not diagonal. Then there exist nonzero vectors $x, y$ such that $Ax = \lambda x$ and $Ay = \lambda y + x$. Then

$$A^m y = \lambda^m y + m\lambda^{m-1} x,$$

and $\|A^m y\| \ge m\|x\| - \|y\| \to \infty$. It follows that $(A^m)$ is unbounded, contradicting (d). Hence (d) implies (e).

Lastly we prove that (e) implies (c). Assume that (e) holds. $A$ is similar to its Jordan canonical form, whose nonzero off-diagonal entries can be made arbitrarily small by a further similarity ([1, page 128]). Since the Jordan block for each eigenvalue with modulus 1 is diagonal, we see that there is an invertible matrix $S$ such that the absolute sum of each row of $SAS^{-1}$ is less than or equal to 1, that is, $\|SAS^{-1}\|_\infty \le 1$, where $\|\cdot\|_\infty$ is the maximum row sum matrix norm ([1, page 295]). Define a matrix norm $N(\cdot)$ by $N(B) = \|SBS^{-1}\|_\infty$. Then we have $N(A) \le 1$.

Let $\lambda$ be an eigenvalue of a matrix $A$. The *index* of $\lambda$, denoted by $\operatorname{index}(\lambda)$, is the smallest value of $k$ for which $\operatorname{rank}((A - \lambda I)^k) = \operatorname{rank}((A - \lambda I)^{k+1})$ ([1, pages 148 and 131]). Thus condition (e) above can be restated as: $\rho(A) \le 1$, and $\operatorname{index}(\lambda) = 1$ for any eigenvalue $\lambda$ of $A$ with $|\lambda| = 1$.
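The rank condition above translates directly into code. A sketch in Python with NumPy (the helper name and test matrices are illustrative, not from the paper):

```python
import numpy as np

def index_of_eigenvalue(A, lam, tol=1e-9):
    """Smallest k with rank((A - lam*I)^k) == rank((A - lam*I)^(k+1))."""
    n = A.shape[0]
    B = A - lam * np.eye(n)
    k, P = 0, np.eye(n)
    while True:
        r_k = np.linalg.matrix_rank(P, tol=tol)
        r_next = np.linalg.matrix_rank(P @ B, tol=tol)
        if r_k == r_next:
            return k
        P = P @ B
        k += 1

# A 2x2 Jordan block at eigenvalue 1 has index 2; by Theorem 2(e),
# its powers are unbounded even though rho(J) = 1.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(index_of_eigenvalue(J, 1.0))          # 2
print(index_of_eigenvalue(np.eye(2), 1.0))  # 1
```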

Let $A \in M_n$. Consider

$$A_m = \frac{A + A^2 + \cdots + A^m}{m}, \quad m = 1, 2, \ldots.$$

We call $A_m$ the *$m$-average* of $A$. As with the powers $A^m$, we have $A_m x \to 0$ for every $x \in \mathbb{C}^n$ if and only if $A_m \to 0$ in $M_n$, and $(A_m x)$ is bounded for every $x \in \mathbb{C}^n$ if and only if $(A_m)$ is bounded in $M_n$. We have the following theorem.

Theorem 3.

Let $A \in M_n$. Then

(a) $A_m$ converges to $0$ if and only if $\|A\| \le 1$ for some matrix norm $\|\cdot\|$ and $1$ is not an eigenvalue of $A$;

(b) $(A_m)$ is bounded if and only if $\rho(A) \le 1$, $\operatorname{index}(\lambda) \le 2$ for every eigenvalue $\lambda$ with $|\lambda| = 1$, and $\operatorname{index}(1) = 1$ if $1$ is an eigenvalue of $A$.

Proof.

First we prove the sufficiency part of (a). Assume that $\|A\| \le 1$ for some matrix norm and that $1$ is not an eigenvalue of $A$.

By Theorem 2, for any eigenvalue $\lambda$ of $A$, either $|\lambda| < 1$, or $|\lambda| = 1$ and $\operatorname{index}(\lambda) = 1$.

If $A$ is written in its Jordan canonical form $A = SJS^{-1}$, then the $m$-average of $A$ is $SJ_mS^{-1}$, where $J_m$ is the $m$-average of $J$. $J_m$ is in turn composed of the $m$-averages of each of the Jordan blocks of $J$. For a nondiagonal Jordan block of $J$ of the form

$$\begin{pmatrix}
\lambda & 1 & & \\
 & \lambda & \ddots & \\
 & & \ddots & 1 \\
 & & & \lambda
\end{pmatrix},$$

$|\lambda|$ must be less than 1. Its $m$-average has constant diagonal and upper diagonals. Let $d_j(m)$ be the constant value of its $j$th upper diagonal ($j = 0$ being the diagonal). Then (with $\binom{k}{j} = 0$ for $k < j$)

$$d_j(m) = \frac{1}{m}\sum_{k=1}^{m}\binom{k}{j}\lambda^{k-j}.$$

Since $|\lambda| < 1$, the series $\sum_{k=1}^{\infty}\binom{k}{j}\lambda^{k-j}$ converges absolutely, so its partial sums are bounded. Dividing by $m$, we obtain $d_j(m) \to 0$ as $m \to \infty$ for every $j$. Therefore the $m$-average of the block converges to $0$ as $m \to \infty$.

If the Jordan block is diagonal of constant value $\lambda$ with $|\lambda| = 1$, then $\lambda \ne 1$, and the $m$-average of the block is diagonal of constant value

$$\frac{1}{m}\sum_{k=1}^{m}\lambda^k = \frac{\lambda(1 - \lambda^m)}{m(1 - \lambda)} \longrightarrow 0.$$

We conclude that $J_m \to 0$ and hence $A_m = SJ_mS^{-1} \to 0$ as $m \to \infty$.

Now we prove the necessity part of (a). If $1$ is an eigenvalue of $A$ and $x$ is a corresponding eigenvector, then $A_m x = x$ for every $m$, and of course $A_m x$ fails to converge to $0$. If $\lambda$ is an eigenvalue of $A$ with $|\lambda| > 1$ and $x$ is a corresponding eigenvector, then

$$\|A_m x\| = \frac{1}{m}\left|\sum_{k=1}^{m}\lambda^k\right|\|x\| = \frac{|\lambda|\,|\lambda^m - 1|}{m\,|\lambda - 1|}\,\|x\|,$$

which approaches $\infty$ as $m \to \infty$. If $\lambda$ is an eigenvalue of $A$ with $|\lambda| = 1$ and $\operatorname{index}(\lambda) \ge 2$, then there exist nonzero vectors $x, y$ such that $Ax = \lambda x$ and $Ay = \lambda y + x$. Then by using the identity

$$A^k y = \lambda^k y + k\lambda^{k-1}x,$$

we get

$$A_m y = \left(\frac{1}{m}\sum_{k=1}^{m}\lambda^k\right)y + \left(\frac{1}{m}\sum_{k=1}^{m}k\lambda^{k-1}\right)x.$$

The coefficient of $y$ tends to $0$, while the coefficient of $x$,

$$\frac{1}{m}\sum_{k=1}^{m}k\lambda^{k-1} = \frac{1 - (m+1)\lambda^m + m\lambda^{m+1}}{m(1-\lambda)^2},$$

differs from $-\lambda^m/(1-\lambda)$ by a quantity tending to $0$; since $(\lambda^m)$ does not converge, it follows that $\lim_{m\to\infty} A_m y$ does not exist. This completes the proof of part (a).

Suppose that $A$ satisfies the conditions in (b) and that $A = SJS^{-1}$ is the Jordan canonical form of $A$. Let $\lambda$ be an eigenvalue of $A$ and let $y$ be a column vector of $S$ corresponding to $\lambda$. If $|\lambda| < 1$, then the restriction of $A$ to the subspace spanned by the corresponding Jordan chain is a contraction, and we have $A_m y \to 0$. If $|\lambda| = 1$ and $\lambda \ne 1$, then by the conditions in (b) either $Ay = \lambda y$, or there exists $x \ne 0$ with $Ax = \lambda x$ such that $Ay = \lambda y + x$. In the former case, $A_m y = \left(\frac{1}{m}\sum_{k=1}^m \lambda^k\right)y$ is bounded, and in the latter case, we see from the identity $A^k y = \lambda^k y + k\lambda^{k-1}x$ and the boundedness of $\frac{1}{m}\sum_{k=1}^m k\lambda^{k-1}$ that $(A_m y)$ is bounded. Finally, if $\lambda = 1$, then since $\operatorname{index}(1) = 1$, we have $Ay = y$ and hence $A_m y = y$. In all cases, we proved that $(A_m y)$ is bounded. Since the column vectors of $S$ form a basis for $\mathbb{C}^n$, the sufficiency part of (b) follows.

Now we prove the necessity part of (b). If $A$ has an eigenvalue $\lambda$ with $|\lambda| > 1$ and eigenvector $x$, then as shown above $\|A_m x\| \to \infty$ as $m \to \infty$. If $A$ has $1$ as an eigenvalue with $\operatorname{index}(1) \ge 2$, then there exist nonzero vectors $x, y$ such that $Ax = x$ and $Ay = y + x$. Then $A_m y = y + \frac{m+1}{2}x$, which is unbounded. If $\lambda$ is an eigenvalue of $A$ with $|\lambda| = 1$, $\lambda \ne 1$, and $\operatorname{index}(\lambda) \ge 3$, then there exist nonzero vectors $x$, $y$, and $z$ such that $Ax = \lambda x$, $Ay = \lambda y + x$, and $Az = \lambda z + y$. By expanding $A^k z$ and using the identity

$$A^k z = \lambda^k z + k\lambda^{k-1}y + \binom{k}{2}\lambda^{k-2}x,$$

we obtain

$$A_m z = \left(\frac{1}{m}\sum_{k=1}^{m}\lambda^k\right)z + \left(\frac{1}{m}\sum_{k=1}^{m}k\lambda^{k-1}\right)y + \left(\frac{1}{m}\sum_{k=1}^{m}\binom{k}{2}\lambda^{k-2}\right)x,$$

whose coefficient of $x$ approaches $\infty$ in modulus as $m \to \infty$; hence $(A_m z)$ is unbounded. This completes the proof.

We now consider applications of the preceding theorems to the approximation of solutions of a linear system $Ax = b$, where $A \in M_n$ and $b$ is a given vector in $\mathbb{C}^n$. Let $Q$ be a given invertible matrix in $M_n$. $x$ is a solution of $Ax = b$ if and only if $x$ is a fixed point of the mapping $T : \mathbb{C}^n \to \mathbb{C}^n$ defined by

$$Tx = (I - Q^{-1}A)x + Q^{-1}b.$$

$T$ is a contraction if and only if the matrix $I - Q^{-1}A$ is. In this case, by the well-known Contraction Mapping Theorem, given any initial vector $x_0$, the sequence of iterates $x_{m+1} = Tx_m$ converges to the unique solution of $Ax = b$. In practice, given $x_m$, each successive $x_{m+1}$ is obtained from $x_m$ by solving the equation

$$Qx_{m+1} = (Q - A)x_m + b.$$

The classical methods of Richardson, Jacobi, and Gauss-Seidel (see, e.g., [3]) have $Q = I$, $Q = D$, and $Q = L$, respectively, where $I$ is the identity matrix, $D$ the diagonal matrix containing the diagonal of $A$, and $L$ the lower triangular matrix containing the lower triangular portion of $A$. Thus by Theorem 1 we have the following known theorem.

Theorem 4.

Let $A, Q \in M_n$, with $Q$ invertible. Let $b \in \mathbb{C}^n$. If $\rho(I - Q^{-1}A) < 1$, then $A$ is invertible and the sequence $(x_m)$ defined recursively by

$$x_{m+1} = (I - Q^{-1}A)x_m + Q^{-1}b$$

converges to the unique solution of $Ax = b$.

Theorem 4 fails if $\rho(I - Q^{-1}A) = 1$. For a simple example, let $A = 0$, $Q = I$, and $b$ any nonzero vector: the iteration becomes $x_{m+1} = x_m + b$, which diverges.
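For a concrete instance of Theorem 4, the sketch below runs the Jacobi choice $Q = D$ on a small strictly diagonally dominant system (the matrix and right-hand side are illustrative, not from the paper):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 3.0])

Q = np.diag(np.diag(A))               # Jacobi: Q = D
P = np.eye(2) - np.linalg.inv(Q) @ A  # iteration matrix I - Q^{-1}A
assert max(abs(np.linalg.eigvals(P))) < 1   # hypothesis of Theorem 4

x = np.zeros(2)
for _ in range(100):
    # Solve Q x_{m+1} = (Q - A) x_m + b; Q is diagonal, so this is cheap.
    x = np.linalg.solve(Q, (Q - A) @ x + b)

print(np.allclose(x, np.linalg.solve(A, b)))  # True
```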

We need the following lemma in the proof of the next two theorems. For a matrix $B$, we will denote by $R(B)$ and $N(B)$ the range and the null space of $B$, respectively.

Lemma 5.

Let $B$ be a singular matrix in $M_n$ such that the geometric multiplicity and the algebraic multiplicity of the eigenvalue 0 are equal, that is, $N(B) = N(B^2)$. Then there is a unique projection $E$ whose range is the range of $B$ and whose null space is the null space of $B$, or equivalently, $\mathbb{C}^n = R(B) \oplus N(B)$. Moreover, $B$ restricted to $R(B)$ is an invertible transformation from $R(B)$ onto $R(B)$.

Proof.

If $B = SJS^{-1}$ is a Jordan canonical form of $B$, where the eigenvalues 0 appear at the end portion of the diagonal of $J$, then the matrix

$$E = S\begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}S^{-1}, \qquad r = \operatorname{rank}(B),$$

is the required projection. Obviously $B$ maps $R(B)$ into $R(B)$. If $x \in R(B)$ and $Bx = 0$, then $x \in R(B) \cap N(B) = \{0\}$ and so $x = 0$. This proves that $B$ is invertible on $R(B)$.

Remark 6.

Under the assumptions of Lemma 5, we will call $Ex$ the component of a vector $x$ in $R(B)$, the projection of $x$ on $R(B)$ along $N(B)$. Note that by the definition of index, the condition in the lemma is equivalent to $\operatorname{index}(0) = 1$.
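For a diagonalizable singular matrix, the projection of Lemma 5 can be read off from an eigendecomposition, mirroring the Jordan-form construction in the proof. A sketch in Python with NumPy (the example matrix is illustrative, not from the paper):

```python
import numpy as np

# Singular matrix whose eigenvalue 0 has index 1 (here B is diagonalizable;
# N(B) is spanned by (1, -1), R(B) by (1, 0)).
B = np.array([[1.0, 1.0],
              [0.0, 0.0]])

w, S = np.linalg.eig(B)                    # B = S diag(w) S^{-1}
d = np.where(np.abs(w) > 1e-9, 1.0, 0.0)   # keep nonzero eigenvalues, kill 0
E = S @ np.diag(d) @ np.linalg.inv(S)      # projection onto R(B) along N(B)

print(np.allclose(E @ E, E))                      # E is idempotent: True
print(np.allclose(E @ B, B))                      # E acts as identity on R(B)
print(np.allclose(E @ np.array([1.0, -1.0]), 0))  # E vanishes on N(B): True
```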

Theorem 7.

Let $A$ be a matrix in $M_n$ and $b$ a vector in $\mathbb{C}^n$. Let $Q$ be an invertible matrix in $M_n$ and let $P = I - Q^{-1}A$. Assume that $\rho(P) \le 1$ and that $\operatorname{index}(\lambda) = 1$ for every eigenvalue $\lambda$ of $P$ with modulus 1, that is, $P$ is nonexpansive relative to a matrix norm. Starting with an initial vector $x_0$ in $\mathbb{C}^n$, define $(x_m)$ recursively by

$$x_{m+1} = Px_m + Q^{-1}b$$

for $m = 0, 1, 2, \ldots$. Let

$$\bar{x}_m = \frac{x_1 + x_2 + \cdots + x_m}{m}, \quad m = 1, 2, \ldots.$$

If $Ax = b$ is consistent, that is, has a solution, then $(\bar{x}_m)$ converges to a solution vector with rate of convergence $O(1/m)$. If $Ax = b$ is inconsistent, then $\|\bar{x}_m\| \to \infty$. More precisely, $\bar{x}_m/m \to v/2$ and $\left(\bar{x}_m - \frac{m+1}{2}v\right)$ is bounded, where $v = Q^{-1}b - E(Q^{-1}b)$ and $E(Q^{-1}b)$ is the projection of $Q^{-1}b$ on $R(Q^{-1}A)$ along $N(Q^{-1}A)$.

Proof.

First we assume that $A$ is invertible so that $B = Q^{-1}A$ is also invertible. Let $T$ be the mapping defined by $Tx = Px + Q^{-1}b$. Then $x_{m+1} = Tx_m$. Let $\hat{x} = A^{-1}b$. $\hat{x}$ is the unique solution of $Ax = b$ and $T\hat{x} = \hat{x}$, so that

$$x_m - \hat{x} = P^m(x_0 - \hat{x}).$$

Since the sequence $(\bar{x}_m)$ in the theorem is the average of $x_1, \ldots, x_m$, we have

$$\bar{x}_m - \hat{x} = \frac{1}{m}\left(P + P^2 + \cdots + P^m\right)(x_0 - \hat{x}) = P_m(x_0 - \hat{x}),$$

where $P_m$ is the $m$-average of $P$. Since $B$ is invertible, 1 is not an eigenvalue of $P$, and by Theorem 3 part (a), $P_m \to 0$ as $m \to \infty$. Moreover, from the proof of the same theorem, $\|P_m\| = O(1/m)$.

Next we consider the case when $A$ is not invertible. Let $B = Q^{-1}A$. Since $Q$ is invertible, we have $N(B) = N(A)$, and $Ax = b$ is consistent if and only if $Q^{-1}b \in R(B)$. The index of the eigenvalue 0 of $B$ is the index of the eigenvalue 1 of $P$, which is 1 by hypothesis. Thus by Lemma 5, $\mathbb{C}^n = R(B) \oplus N(B)$. For every vector $x$, let $x'$ and $x''$ denote the components of $x$ in the subspaces $R(B)$ and $N(B)$, respectively.

Assume that $Ax = b$ is consistent, that is, $Q^{-1}b \in R(B)$. Then $(Q^{-1}b)'' = 0$. By Lemma 5, the restriction of $B$ to its range is invertible, so there exists a unique $\hat{x}$ in $R(B)$ such that $B\hat{x} = Q^{-1}b$, or equivalently, $T\hat{x} = \hat{x}$, where $Tx = Px + Q^{-1}b$. For any vector $x$, we have

$$Tx - \hat{x} = P(x - \hat{x}).$$

Since $P$ maps $R(B)$ into $R(B)$ and $B = I - P$ restricted to $R(B)$ is invertible, we can apply the preceding proof on $R(B)$ and conclude that the sequence $(\bar{x}_m)$ as defined before converges to $\hat{x} + x_0''$ and $\|\bar{x}_m - (\hat{x} + x_0'')\| = O(1/m)$. Now $A(\hat{x} + x_0'') = A\hat{x} = b$, showing that $\hat{x} + x_0''$ is a solution of $Ax = b$.

Assume now that $(Q^{-1}b)'' \ne 0$, that is, $Ax = b$ is inconsistent. Then $Q^{-1}b = u + v$ with $u \in R(B)$ and $v = (Q^{-1}b)'' \ne 0$. As in the preceding case there exists a unique $\hat{x} \in R(B)$ such that $B\hat{x} = u$. Note that $P^k v = v$ for all $k \ge 0$. Thus for any vector $x_0$ and any positive integer $m$,

$$x_m = \hat{x} + P^m(x_0' - \hat{x}) + x_0'' + mv,$$

where $\hat{x} + P^m(x_0' - \hat{x}) \in R(B)$. As in the preceding case, $\left(P^m(x_0' - \hat{x})\right)$ is bounded and its average $P_m(x_0' - \hat{x})$ converges to 0. Thus $\left(\bar{x}_m - \frac{m+1}{2}v\right)$ is bounded and $\bar{x}_m/m \to v/2$, and hence $\|\bar{x}_m\| \to \infty$. This completes the proof.
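The following sketch illustrates Theorem 7 with $Q = I$ and a rotation-type iteration matrix (an illustrative system, not from the paper): the plain iterates orbit the solution without converging, while their averages converge to it.

```python
import numpy as np

P = np.array([[0.0, -1.0],   # P = I - A with eigenvalues +/- i, each of index 1
              [1.0,  0.0]])
A = np.eye(2) - P            # rho(P) = 1, so Theorem 4 does not apply
b = np.array([1.0, 1.0])
x_true = np.linalg.solve(A, b)

x = np.array([5.0, -3.0])    # arbitrary starting vector
total = np.zeros(2)
m = 2000
for _ in range(m):
    x = P @ x + b            # plain iteration x_{k+1} = P x_k + b
    total += x
x_bar = total / m            # the averaged iterate of Theorem 7

print(np.linalg.norm(x - x_true) > 1)         # True: plain iterates never settle
print(np.linalg.norm(x_bar - x_true) < 0.01)  # True: averages converge
```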

Next we consider another kind of iteration, the nonlinear case of which was considered by Ishikawa [4]. Note that the class of mappings in this case is slightly weaker than the nonexpansive ones (see condition (c) in the next lemma).

Lemma 8.

Let $A$ be an $n \times n$ matrix. The following are equivalent:

(a) for every $\lambda \in (0, 1)$, there exists a matrix norm $N(\cdot)$ such that $N((1-\lambda)I + \lambda A) \le 1$;

(b) for every $\lambda \in (0, 1)$, there exists an induced matrix norm $\|\cdot\|$ such that $\|(1-\lambda)I + \lambda A\| \le 1$;

(c) $\rho(A) \le 1$, and $\operatorname{index}(1) = 1$ if $1$ is an eigenvalue of $A$.

Proof.

As in the proof of Theorem 2, (a) and (b) are equivalent. For $\lambda \in (0, 1)$, denote $(1-\lambda)I + \lambda A$ by $A_\lambda$. Suppose now that (a) holds. Let $\mu$ be an eigenvalue of $A$. Then $(1-\lambda) + \lambda\mu$ is an eigenvalue of $A_\lambda$. By Theorem 2, $|(1-\lambda) + \lambda\mu| \le 1$ for every $\lambda \in (0, 1)$ and hence $|\mu| \le 1$. If 1 is an eigenvalue of $A$, then it is also an eigenvalue of $A_\lambda$. By Theorem 2, the index of 1, as an eigenvalue of $A_\lambda$, is 1. Since obviously $A$ and $A_\lambda$ have the same generalized eigenvectors corresponding to the eigenvalue 1 (indeed $A_\lambda - I = \lambda(A - I)$), the index of 1, as an eigenvalue of $A$, is also 1. This proves (c).

Now assume (c) holds. Since $|(1-\lambda) + \lambda\mu| < 1$ for $|\mu| \le 1$, $\mu \ne 1$, and $\lambda \in (0, 1)$, every eigenvalue of $A_\lambda$, except possibly 1, has modulus less than 1. Reasoning as above, if 1 is an eigenvalue of $A_\lambda$, then its index is 1. Therefore by Theorem 2, (a) holds. This completes the proof.

Theorem 9.

Let $A \in M_n$ and $b \in \mathbb{C}^n$. Let $Q$ be an invertible matrix in $M_n$, and $P = I - Q^{-1}A$. Suppose $\rho(P) \le 1$ and that $\operatorname{index}(1) = 1$ if 1 is an eigenvalue of $P$. Let $\lambda \in (0, 1)$ be fixed. Starting with an initial vector $x_0$, define $(x_m)$ recursively by

$$x_{m+1} = (1-\lambda)x_m + \lambda\left(Px_m + Q^{-1}b\right).$$

If $Ax = b$ is consistent, then $(x_m)$ converges to a solution vector $\hat{x}$ of $Ax = b$ with rate of convergence given by

$$\|x_m - \hat{x}\| = O(\theta^m),$$

where $\theta$ is any number satisfying

$$\max\{|(1-\lambda) + \lambda\mu| : \mu \ne 1 \text{ an eigenvalue of } P\} < \theta < 1.$$

If $Ax = b$ is inconsistent, then $\|x_m\| \to \infty$; more precisely,

$$\lim_{m\to\infty}\frac{x_m}{m} = \lambda v,$$

where $v$ is the projection of $Q^{-1}b$ on $N(Q^{-1}A)$ along $R(Q^{-1}A)$.

Proof.

Let $B = Q^{-1}A$, $P = I - B$, and $P_\lambda = (1-\lambda)I + \lambda P$. Then $x_{m+1} = P_\lambda x_m + \lambda Q^{-1}b$.

First we assume that $A$ is invertible. Then $B$ is invertible and 1 is not an eigenvalue of $P$; thus $\rho(P_\lambda) < 1$. Let $\hat{x} = A^{-1}b$. We have

$$x_m - \hat{x} = P_\lambda^m(x_0 - \hat{x}).$$

By a well-known theorem (see, e.g., [1]), $\|P_\lambda^m\| = O(\theta^m)$ for every $\theta \in (\rho(P_\lambda), 1)$.

Assume now that $A$ is not invertible and $Q^{-1}b \in R(B)$. Since $P$ satisfies the condition in Lemma 8, $B$ satisfies the condition in Lemma 5. Thus the restriction of $B$ to its range is invertible and there exists $\hat{x}$ in $R(B)$ such that $B\hat{x} = Q^{-1}b$, or equivalently, $P_\lambda\hat{x} + \lambda Q^{-1}b = \hat{x}$. For any vector $x$, we have

$$P_\lambda x + \lambda Q^{-1}b - \hat{x} = P_\lambda(x - \hat{x}).$$

Since $P_\lambda$ maps $R(B)$ into $R(B)$ and $B$ restricted to $R(B)$ is invertible, we can apply the preceding proof and conclude that the sequence $(x_m)$ converges to $\hat{x} + x_0''$ and $\|x_m - (\hat{x} + x_0'')\| = O(\theta^m)$, where $x_0''$ is the component of $x_0$ in $N(B)$. $\hat{x} + x_0''$ solves $Ax = b$ since $Ax_0'' = 0$.

Assume lastly that $Q^{-1}b \notin R(B)$, that is, $Ax = b$ is inconsistent. Then $Q^{-1}b = u + v$ with $u \in R(B)$ and $v \ne 0$ in $N(B)$. As before there exists $\hat{x} \in R(B)$ such that $B\hat{x} = u$. Note that $P_\lambda^k v = v$ for $k \ge 0$. Then

$$x_m = \hat{x} + P_\lambda^m(x_0' - \hat{x}) + x_0'' + m\lambda v.$$

Since $P_\lambda^m(x_0' - \hat{x})$ converges to 0, we have

$$\lim_{m\to\infty}\frac{x_m}{m} = \lambda v,$$

and hence $\|x_m\| \to \infty$. This completes the proof.
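A sketch of the relaxed iteration of Theorem 9 on the same kind of rotation example (illustrative, not from the paper): relaxation pulls every eigenvalue $(1-\lambda) + \lambda\mu$ with $\mu \ne 1$ strictly inside the unit circle, so the iterates themselves converge geometrically even though the plain iteration never converges.

```python
import numpy as np

P = np.array([[0.0, -1.0],   # eigenvalues +/- i: plain iteration never settles
              [1.0,  0.0]])
A = np.eye(2) - P
b = np.array([1.0, 1.0])
x_true = np.linalg.solve(A, b)

lam = 0.5
x = np.array([5.0, -3.0])
for _ in range(200):
    # x_{m+1} = (1 - lam) x_m + lam (P x_m + b)
    x = (1 - lam) * x + lam * (P @ x + b)

# |(1 - lam) + lam * mu| = sqrt(0.25 + 0.25) ~ 0.707 for mu = +/- i,
# so the error shrinks by about 0.707 per step.
print(np.linalg.norm(x - x_true) < 1e-8)   # True
```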

By taking $Q = I$ in Theorems 7 and 9, with the nonexpansive matrix $A$ playing the role of $I - Q^{-1}A$, we obtain the following corollary.

Corollary 10.

Let $A$ be an $n \times n$ matrix such that $\|A\| \le 1$ for some matrix norm $\|\cdot\|$. Let $b$ be a vector in $\mathbb{C}^n$, and consider the equation $x = Ax + b$. Then:

(a) starting with an initial vector $x_0$ in $\mathbb{C}^n$, define $(x_m)$ recursively as follows:

$$x_{m+1} = Ax_m + b$$

for $m = 0, 1, 2, \ldots$. Let

$$\bar{x}_m = \frac{x_1 + x_2 + \cdots + x_m}{m}$$

for $m = 1, 2, \ldots$. If $x = Ax + b$ is consistent, then $(\bar{x}_m)$ converges to a solution vector $\hat{x}$ with rate of convergence given by

$$\|\bar{x}_m - \hat{x}\| = O(1/m).$$

If $x = Ax + b$ is inconsistent, then $\|\bar{x}_m\| \to \infty$.

(b) let $\lambda \in (0, 1)$ be a fixed number. Starting with an initial vector $x_0$, let

$$x_{m+1} = (1-\lambda)x_m + \lambda(Ax_m + b).$$

If $x = Ax + b$ is consistent, then $(x_m)$ converges to a solution vector $\hat{x}$ with rate of convergence given by

$$\|x_m - \hat{x}\| = O(\theta^m),$$

where $\theta$ is any number satisfying

$$\max\{|(1-\lambda) + \lambda\mu| : \mu \ne 1 \text{ an eigenvalue of } A\} < \theta < 1.$$

If $x = Ax + b$ is inconsistent, then $\|x_m\| \to \infty$.

Remark 11.

If in the previous corollary $\|A\| < 1$, then taking $\lambda = 1$ in part (b) gives the iteration $x_{m+1} = Ax_m + b$, and the sequence converges to the solution. This is the Richardson method, see for example, [3]. Even in this case, our method in part (b) with $\lambda < 1$ may yield a better approximation. For example, if the eigenvalues of $A$ lie near $-0.9$, the Richardson error decreases like $0.9^m$, while with $\lambda = 0.5$ the error of the iteration in part (b) decreases like $|0.5 + 0.5\mu|^m \approx 0.05^m$.

An $n \times n$ matrix $A = (a_{ij})$ is called *diagonally dominant* if

$$|a_{ii}| \ge \sum_{j \ne i} |a_{ij}|$$

for all $i$. If $A$ is diagonally dominant with $a_{ii} \ne 0$ for every $i$ and if $Q = D$ or $Q = L$, where $D$ is the diagonal matrix containing the diagonal of $A$, and $L$ the lower triangular matrix containing the lower triangular entries of $A$, then it is easy to prove that $\|I - Q^{-1}A\|_\infty \le 1$, where $\|\cdot\|_\infty$ denotes the maximum row sum matrix norm; see, for example, [1, 3]. The following follows from Theorems 7 and 9.

Corollary 12.

Let $A$ be a diagonally dominant matrix with $a_{ii} \ne 0$ for all $i$. Let $Q = D$ or $Q = L$, where $D$ is the diagonal matrix containing the diagonal of $A$, and $L$ the lower triangular matrix containing the lower triangular entries of $A$. Let $b$ be a vector in $\mathbb{C}^n$. Then:

(a) starting with an initial vector $x_0$ in $\mathbb{C}^n$, define $(x_m)$ recursively as follows:

$$Qx_{m+1} = (Q - A)x_m + b$$

for $m = 0, 1, 2, \ldots$. Let

$$\bar{x}_m = \frac{x_1 + x_2 + \cdots + x_m}{m}$$

for $m = 1, 2, \ldots$. If $Ax = b$ is consistent, then $(\bar{x}_m)$ converges to a solution vector $\hat{x}$ with rate of convergence given by

$$\|\bar{x}_m - \hat{x}\| = O(1/m).$$

If $Ax = b$ is inconsistent, then $\|\bar{x}_m\| \to \infty$.

(b) Let $\lambda \in (0, 1)$ be a fixed number. Starting with an initial vector $x_0$, let

$$x_{m+1} = (1-\lambda)x_m + \lambda Q^{-1}\left((Q - A)x_m + b\right).$$

If $Ax = b$ is consistent, then $(x_m)$ converges to a solution vector $\hat{x}$ of $Ax = b$ with rate of convergence given by

$$\|x_m - \hat{x}\| = O(\theta^m),$$

where $\theta$ is any number satisfying

$$\max\{|(1-\lambda) + \lambda\mu| : \mu \ne 1 \text{ an eigenvalue of } I - Q^{-1}A\} < \theta < 1.$$

If $Ax = b$ is inconsistent, then $\|x_m\| \to \infty$.
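The hypothesis of Corollary 12 is easy to verify numerically. A sketch that checks (non-strict) diagonal dominance and the resulting bound $\|I - D^{-1}A\|_\infty \le 1$ (the matrix and helper name are illustrative, not from the paper):

```python
import numpy as np

def is_diagonally_dominant(A):
    """|a_ii| >= sum_{j != i} |a_ij| for every row i."""
    d = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - d
    return bool(np.all(d >= off))

# Weakly diagonally dominant, nonzero diagonal (equality holds in row 0).
A = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  3.0,  1.0],
              [ 0.0,  1.0,  2.0]])
assert is_diagonally_dominant(A)

D = np.diag(np.diag(A))
P = np.eye(3) - np.linalg.inv(D) @ A
row_sum_norm = np.abs(P).sum(axis=1).max()  # maximum row sum matrix norm
print(row_sum_norm <= 1)                    # True: I - D^{-1}A is nonexpansive
```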

## References

1. Horn RA, Johnson CR: *Matrix Analysis*. Cambridge University Press, Cambridge, UK; 1985.

2. Dunford N, Schwartz JT: *Linear Operators, Part I*. Interscience Publishers, New York, NY, USA; 1957.

3. Kincaid D, Cheney W: *Numerical Analysis*. Brooks/Cole, Pacific Grove, Calif, USA; 1991.

4. Ishikawa S: Fixed points and iteration of a nonexpansive mapping in a Banach space. *Proceedings of the American Mathematical Society* 1976, **59**(1):65–71. doi:10.1090/S0002-9939-1976-0412909-X

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


### Keywords

- Triangular Matrix
- Iterative Solution
- Matrix Norm
- Solution Vector
- Initial Vector