
# Several numerical methods for computing unitary polar factor of a matrix

*Advances in Difference Equations* **volume 2016**, Article number: 4 (2016)

## Abstract

We present several numerical schemes for computing the unitary polar factor of rectangular complex matrices. Error analysis shows high orders of convergence. Numerical experiments reporting iteration counts and elapsed times demonstrate the efficiency of the new methods in comparison with existing ones.

## 1 Preliminaries

Let \(\mathbb{C}^{m\times n}\) (\(m\geq n\)) denote the linear space of all \(m\times n\) complex matrices. The polar decomposition of a complex matrix \(A\in\mathbb{C}^{m\times n}\) is defined as

$$ A=UH, \tag{1} $$
where *H* is a Hermitian positive semi-definite matrix of order *n* and \(U\in\mathbb{C}^{m\times n}\) is a sub-unitary matrix [1]. A matrix *U* is sub-unitary if \(\|Ux\|_{2}=\|x\|_{2}\) for every \(x\in\mathcal{R}(U^{H})=\mathcal{N}(U)^{\bot}\), where \(\mathcal{R}(X)\) and \(\mathcal{N}(X)\) denote the range (the linear space spanned by the columns) and the null space of a matrix *X*, respectively. Note that if \(\operatorname{rank}(A)= n\), then \(U^{*}U=I_{n}\) and *U* is an orthonormal Stiefel matrix.

The Hermitian factor *H* is always unique and can be written as \((A^{*}A)^{\frac{1}{2}}\), while the unitary factor *U* is unique if *A* is nonsingular; see [2] for more details.
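For concreteness, both factors can be computed by a reference method via the SVD (a standard approach, shown here as a NumPy sketch rather than one of the iterations studied in this paper): for the economy SVD \(A=P\Sigma Q^{*}\), one has \(U=PQ^{*}\) and \(H=Q\Sigma Q^{*}\).

```python
import numpy as np

def polar_via_svd(A):
    """Reference polar decomposition A = U H via the SVD.

    For the economy SVD A = P diag(s) Q^*, the unitary polar factor is
    U = P Q^* and the Hermitian factor is H = Q^* diag(s)-conjugated:
    H = Qh^H diag(s) Qh.
    """
    P, s, Qh = np.linalg.svd(A, full_matrices=False)
    U = P @ Qh
    H = Qh.conj().T @ np.diag(s) @ Qh
    return U, H

# Small demonstration on a random 5x3 complex matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
U, H = polar_via_svd(A)
```

Here *U* has orthonormal columns (\(U^{*}U=I_{3}\)) and *H* is Hermitian positive semi-definite, as in the definition above.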

It is worth remarking that the polar and matrix sign decompositions are intimately connected [3]. For example, Roberts' integral formula [4],

$$ \operatorname{sign}(A)=\frac{2}{\pi}A\int_{0}^{\infty} \bigl(t^{2}I+A^{2} \bigr)^{-1}\,dt, \tag{2} $$

has an analog in

$$ U=\frac{2}{\pi}A\int_{0}^{\infty} \bigl(t^{2}I+A^{*}A \bigr)^{-1}\,dt. \tag{3} $$
These integral formulas reveal that any property of, or iterative method for, the matrix sign function can be transformed into one for the polar decomposition by replacing \(A^{2}\) with \(A^{*}A\), and *vice versa*.
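The integral representation \(U=\frac{2}{\pi}A\int_{0}^{\infty}(t^{2}I+A^{*}A)^{-1}\,dt\) can be checked numerically. The sketch below (an illustration, not from the paper) uses the substitution \(t=\tan\theta\), under which the integrand becomes the smooth function \((\sin^{2}\theta\,I+\cos^{2}\theta\,A^{*}A)^{-1}\) on \([0,\pi/2]\), and compares the quadrature result with the SVD-based polar factor.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
AhA = A.conj().T @ A

# Midpoint rule on [0, pi/2]; t = tan(theta) turns the improper integral
# int_0^inf (t^2 I + A^*A)^{-1} dt into
# int_0^{pi/2} (sin^2(th) I + cos^2(th) A^*A)^{-1} dth.
m = 2000
h = (np.pi / 2) / m
S = np.zeros((n, n), dtype=complex)
for th in (np.arange(m) + 0.5) * h:
    S += np.linalg.inv(np.sin(th) ** 2 * np.eye(n) + np.cos(th) ** 2 * AhA)
U_int = (2.0 / np.pi) * A @ (h * S)

# Reference unitary polar factor via the SVD.
P, s, Qh = np.linalg.svd(A)
U_svd = P @ Qh
```

With 2000 midpoint nodes the quadrature reproduces the unitary polar factor to roughly four decimal places for a well-conditioned random matrix.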

Practical interest in the polar decomposition stems mainly from the fact that the unitary polar factor of *A* is the nearest unitary matrix to *A* in any unitarily invariant norm. The polar decomposition is therefore of interest whenever a matrix must be orthogonalized [5]. For more background on this topic, one may refer to [6–9].

Now we briefly review some of the most important iterative matrix methods for computing the polar decomposition. Among the many iterations available for finding *U* (see, *e.g.*, [10] and the references therein), the most practically useful is Newton's. The Newton method introduced for the polar decomposition in [5] reads

$$ U_{k+1}=\frac{1}{2} \bigl(U_{k}+U_{k}^{-*} \bigr),\quad U_{0}=A, \tag{4} $$

for the square nonsingular case, with the following alternative for general rectangular cases [11]:

$$ U_{k+1}=\frac{1}{2} \bigl(U_{k}+ \bigl(U_{k}^{\dagger} \bigr)^{*} \bigr),\quad U_{0}=A, \tag{5} $$

wherein \(U^{\dagger}\) stands for the Moore-Penrose generalized inverse. Note that, throughout this work, \(U_{k}^{-*}\) stands for \((U_{k}^{-1})^{*}\); similar notation is used elsewhere.
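For illustration, Newton's iteration \(U_{k+1}=\frac{1}{2}(U_{k}+U_{k}^{-*})\) for a square nonsingular matrix can be coded directly; this is a NumPy sketch (the paper's own experiments use Mathematica), with a relative-difference stopping test of the kind used in Section 5.

```python
import numpy as np

def newton_polar(A, tol=1e-12, maxit=100):
    """Newton iteration U_{k+1} = (U_k + U_k^{-*})/2 for the unitary
    polar factor of a square nonsingular matrix A."""
    U = A.astype(complex)
    for k in range(1, maxit + 1):
        U_new = 0.5 * (U + np.linalg.inv(U).conj().T)
        if np.linalg.norm(U_new - U, np.inf) <= tol * np.linalg.norm(U, np.inf):
            return U_new, k
        U = U_new
    return U, maxit

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
U, iters = newton_polar(A)

P, s, Qh = np.linalg.svd(A)   # reference factor: U = P Q^*
```

For a well-conditioned random matrix the iteration reaches the tolerance in roughly ten steps, reflecting its quadratic convergence.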

### Remark 1.1

We point out that we focus mainly on computing the unitary polar factor of rectangular matrices, since the high-order methods discussed in this work do not require the computation of a pseudo-inverse and are therefore preferable to the corresponding Newton iteration (5), which requires one pseudo-inverse per computing cycle.

Recently, an efficient cubically convergent method has been introduced in [12] as follows:

where \(Y_{k}=U_{k}^{*}U_{k}\), \(Z_{k}=Y_{k}Y_{k}\).

An initial matrix \(U_{0}\) sufficiently close to the solution must be employed in such fixed-point-type matrix methods to ensure convergence. Such an approximation for the unitary factor of a rectangular complex matrix can be constructed by

$$ U_{0}=\frac{1}{\alpha}A, \tag{7} $$

where \(\alpha>0\) is an estimate of \(\|A\|_{2}\). This is one of the standard ways in the literature of constructing an initial value that ensures the convergence of iterative Newton-type methods for finding the unitary polar factor of *A*.
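A quick numerical check (an illustration, here with the exact choice \(\alpha=\|A\|_{2}\)) confirms that this initialization places every singular value of \(U_{0}\) in the interval \((0,1]\) for a full-rank matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((7, 4)) + 1j * rng.standard_normal((7, 4))

alpha = np.linalg.norm(A, 2)   # exact spectral norm ||A||_2
U0 = A / alpha                 # initial matrix U_0 = A / alpha

sigma = np.linalg.svd(U0, compute_uv=False)
```

The largest singular value of \(U_{0}\) equals 1 up to rounding, so the iteration starts from singular values already inside the region that the methods drive toward unity.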

The remainder of this paper is organized as follows. In Section 2, we derive an iteration function for the polar decomposition. Section 3 discusses the convergence properties of this method: the rate of convergence is six, since the proposed formulation maps the singular values of the matrices produced per cycle to unity with sixth order, showing that the method is quite fast. Several other new iterative methods are constructed in Section 4. Numerical experiments supporting the theoretical results are provided in Section 5. Finally, conclusions are drawn in Section 6.

## 2 A numerical method

The usual procedure for constructing a new iterative method for *U* is to apply a zero-finder to a particular map [13], that is, to solve the nonlinear (matrix) equation

$$ F(U):=U^{2}-I=0, \tag{8} $$

where *I* is the identity matrix; an appropriate root-finding method can then yield novel schemes.

To that end, we first introduce the following iterative expression for finding the simple zeros of nonlinear equations:

with \(L(u_{k})=\frac{f''(u_{k}) f(u_{k})}{f'(u_{k})^{2}}\). This is a combination of the cubically convergent method proposed in [12] and the quadratically convergent Newton method.

### Theorem 2.1

*Let* \(\alpha\in D\) *be a simple zero of a sufficiently differentiable function* \(f:D\subseteq\mathbb{C}\rightarrow\mathbb{C}\) *on an open set D containing* \(x_{0}\), *an initial approximation of α*. *Then the iterative expression* (9) *has sixth order of convergence*.

### Proof

The proof is based on Taylor expansions of the function *f* about the appropriate points and is similar to those in [14]; it is therefore omitted. □

Using (9) to solve \(u^{2}-1=0\), we obtain the following iteration in reciprocal form:

Either the iteration obtained by applying a nonlinear equation solver to the mapping (8), or its reciprocal, could be used for the polar decomposition. Our experiments show, however, that the reciprocal form (10) is more stable in the presence of round-off errors.

Drawing the attraction basins [15] of (10) for the solutions of the polynomial equation \(u^{2}-1=0\) in the complex plane reveals that applying (9) to the matrix sign function, and consequently to the unitary polar factor, yields global convergence. This is shown in Figure 1 on the rectangle \([-2,2]\times[-2,2]\).
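Since the displayed formulas (9)-(10) are referenced in the text rather than reproduced here, the following sketch illustrates the basin-drawing procedure itself, using Newton's map \(u\mapsto(u+1/u)/2\) for \(u^{2}-1=0\) as a stand-in iteration on the same rectangle \([-2,2]\times[-2,2]\):

```python
import numpy as np

def basin(u, maxit=50, tol=1e-8):
    """Classify a starting point: +1 / -1 for the root reached,
    0 if the iteration does not converge within maxit steps."""
    for _ in range(maxit):
        if u == 0:
            return 0
        u = 0.5 * (u + 1.0 / u)   # Newton's map for u^2 - 1 = 0
        if abs(u - 1.0) < tol:
            return 1
        if abs(u + 1.0) < tol:
            return -1
    return 0

xs = np.linspace(-2, 2, 81)
ys = np.linspace(-2, 2, 81)
grid = np.array([[basin(x + 1j * y) for x in xs] for y in ys])
```

For this stand-in map the basins are the open half-planes \(\operatorname{Re}u>0\) and \(\operatorname{Re}u<0\); only the imaginary axis fails to converge, which is the kind of "global convergence" picture the figure conveys.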

By taking into account this global convergence behavior, we extend (10) as follows:

where \(U_{0}\) is chosen by (7) (or, in its simplest form, \(U_{0}=A\)) and \(Y_{k}=U_{k}^{*}U_{k}\), \(Z_{k}=Y_{k}Y_{k}\), \(W_{k}=Y_{k}Z_{k}\), and \(L_{k}=Y_{k}W_{k}\). The iteration (11) converges to the unitary polar factor under conditions discussed in the next section.

## 3 Convergence properties

This section is dedicated to the convergence properties of (11) for finding the unitary polar factor of *A*.

### Theorem 3.1

*Assume that* \(A\in\mathbb{C}^{m\times n}\) *is an arbitrary matrix*. *Then the matrix iterates* \(\{U_{k}\}_{k=0}^{\infty}\) *of* (11) *converge to U*.

### Proof

The proof of this theorem follows the lines of the proofs given in [16] and is therefore omitted. □

### Theorem 3.2

*Let* \(A\in\mathbb{C}^{m\times n}\) *be an arbitrary matrix*. *Then the new method* (11) *is of sixth order for finding the unitary polar factor of A*.

### Proof

The proposed scheme (11) transforms the singular values of \(U_{k}\) according to the following map:

and it leaves the singular vectors invariant. From equation (12), it suffices to show that the singular values converge to unity with sixth order for \(k\geq1\). Thus, we arrive at

Taking absolute values on both sides of (13), one obtains

This demonstrates the sixth order of convergence of the proposed numerical algorithm (11), and the proof is complete. □

### Remark 3.1

The presented method is not a member of the Padé family of iterations given in [17] (and discussed in depth in [18]) with global convergence. As a result, it is interesting from both theoretical and computational points of view.

The new formulation (11) is quite fast, but the whole process can still be accelerated via a technique introduced for Newton's method in [5], known as scaling. Several important scaling factors have been derived in different norms, as follows. We have

$$ \gamma_{k}= \biggl(\frac{ \Vert U_{k}^{-1} \Vert _{2}}{ \Vert U_{k} \Vert _{2}} \biggr)^{1/2}, \tag{15} $$

where \(\|\cdot\|_{2}\) is the spectral norm. This scale factor is optimal for the given \(U_{k}\), since (15) minimizes the next error \(\|U_{k+1}-U\|_{2}\). Unfortunately, determining the scale factor (15) requires the two extreme singular values of \(U_{k}\) at each iteration. To save this cost, one may approximate the scaling parameter as follows [19]:

$$ \gamma_{k}= \biggl(\frac{ \Vert U_{k}^{-1} \Vert _{F}}{ \Vert U_{k} \Vert _{F}} \biggr)^{1/2} \tag{16} $$

or

$$ \gamma_{k}= \biggl(\frac{ \Vert U_{k}^{-1} \Vert _{1} \Vert U_{k}^{-1} \Vert _{\infty}}{ \Vert U_{k} \Vert _{1} \Vert U_{k} \Vert _{\infty}} \biggr)^{1/4}. \tag{17} $$
Another relatively inexpensive scaling factor is [20]

$$ \gamma_{k}= \bigl\vert \det (U_{k}) \bigr\vert ^{-1/n}. \tag{18} $$

The complex modulus of the determinant in this choice is obtained inexpensively from the same matrix factorization used to compute \(U_{k}^{-1}\).
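As an illustration of determinantal scaling (again a NumPy sketch, not code from the paper), Newton's iteration for a square nonsingular matrix with the scale factor \(\gamma_{k}=|\det(U_{k})|^{-1/n}\) can be written as:

```python
import numpy as np

def scaled_newton_polar(A, tol=1e-12, maxit=100):
    """Newton iteration with determinantal scaling
    gamma_k = |det U_k|^{-1/n}, for square nonsingular A."""
    n = A.shape[0]
    U = A.astype(complex)
    for k in range(1, maxit + 1):
        g = abs(np.linalg.det(U)) ** (-1.0 / n)     # scale factor
        U_new = 0.5 * (g * U + np.linalg.inv(U).conj().T / g)
        if np.linalg.norm(U_new - U, np.inf) <= tol * np.linalg.norm(U, np.inf):
            return U_new, k
        U = U_new
    return U, maxit

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
U, iters = scaled_newton_polar(A)

P, s, Qh = np.linalg.svd(A)
U_ref = P @ Qh
```

As the iterates approach a unitary matrix, \(|\det U_{k}|\to1\) and \(\gamma_{k}\to1\), so the scaling fades out automatically and does not disturb the quadratic terminal convergence.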

Finally in this section, the new scheme can be expressed in the following accelerated form as well:

## 4 Some other iterative methods

As discussed in the preceding sections, the construction of iterative methods for finding the unitary polar factor of a matrix mainly relies on the nonlinear equation solver applied to the mapping (8).

One may object that the construction (9) is straightforward, since it combines two already known methods. We stress that the main goal is to obtain a scheme for the polar decomposition that has *global convergence behavior* and is *new*, *i.e.*, not a member of the Padé family of iterations (or its reciprocal). The novelty and usefulness of (9) as a nonlinear equation solver is therefore not the main interest here; the emphasis is on providing a novel and useful scheme for finding the unitary polar factor.

To construct other new and useful iterative methods for finding the unitary polar factor of a matrix, we can again use the first sub-step of (9) together with different approximations of the first derivative appearing in the second sub-step. In this way, we derive the following nonlinear equation solver:

wherein \(f[x_{k},y_{k}]\) is the two-point divided difference. Note again that the optimality conjecture of Kung and Traub, or the usefulness of the iterative method as a nonlinear equation solver, is not the decisive factor here; the crucial point is the design of a new, globally convergent scheme for the unitary polar factor. Applying (20) to equation (8) results in the following *fourth-order* scheme for the unitary polar factor:

Next, by applying a similar secant-like strategy in a third sub-step after (20), one may design the following *seventh-order* scheme:

and subsequently the following iterative method:

The attraction basins of these two new iterative methods are provided in Figure 2, which demonstrates their global convergence behavior. A theoretical proof of this global behavior is also possible, using a strategy similar to that of [16].

The error analysis of the new schemes (21) and (23) is similar to that given in Section 3 and is therefore omitted.

## 5 Numerical results

We have tested the contributed methods (11), (21), and (23), denoted by PM1, PM2, and PM3, respectively, using the programming package Mathematica 10 in double precision [21]. In addition, several existing iterative methods have been tested and compared: (5), denoted by NM; (6), denoted by KHM; and the scaled Newton method, denoted by ANM, given by

$$ U_{k+1}=\frac{1}{2} \bigl(\gamma_{k}U_{k}+\gamma_{k}^{-1} \bigl(U_{k}^{\dagger} \bigr)^{*} \bigr), $$

with \(\gamma_{k}\) a scaling factor such as (15).

have been tested and compared. We used the following stopping criterion: \(R_{k+1}=\frac{\|U_{k+1}-U_{k}\|_{\infty}}{\|U_{k}\|_{\infty}}\leq \epsilon\), wherein \(\epsilon=10^{-10}\) is the tolerance.

We now apply the various numerical methods to find the unitary polar factors of many randomly generated rectangular matrices with complex entries. To help readers re-run the experiments, we used \(\mathtt{SeedRandom[12345]}\) to produce the pseudo-random (complex) numbers.

The random matrices for the various dimensions \(m\times n\) are constructed by the following piece of Mathematica code (\(\mathtt{I}=\sqrt{-1}\)):

```mathematica
SeedRandom[12345]; number = 15;
Table[A[l] = RandomComplex[{-10 - 10 I, 10 + 10 I}, {m, n}];, {l, number}];
```

The numerical results of the experiments are collected in Tables 1-6. The initial approximation is constructed as \(U_{0}=\frac{1}{\|A\|_{2}}A\). Only for the cases \(m\times n=110\times100\) and \(m\times n=510\times500\) do we report the required numbers of iterations; otherwise we focus on the elapsed CPU time (in seconds) to show clearly that the proposed schemes are efficient in most cases. The comparison results for the square nonsingular case \(m\times n=600\times600\) are included in Table 7; they are in complete agreement with the CPU times recorded for PM2.

To answer the key question of whether the increased order of convergence is worth the additional matrix multiplications per iteration, we use the notion of efficiency index, \(p^{1/\theta}\), where *p* and *θ* denote the rate of convergence and the computational cost per cycle, respectively. We assume that each matrix-matrix multiplication costs 1 unit, a regular matrix inverse 1.5 units, and a Moore-Penrose inverse 3 units. Consequently, the efficiency indices of the discussed methods are \(E(\mbox{4})\simeq1.2599\), \(E(\mbox{6})\simeq1.2210\), \(E(\mbox{11})\simeq1.2698\), \(E(\mbox{21})\simeq1.2866\), and \(E(\mbox{23})\simeq1.2962\).
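The efficiency indices quoted above can be reproduced from \(E=p^{1/\theta}\); the per-cycle costs \(\theta\) below are inferred from the stated values (an assumption, since the cost counts themselves are not tabulated in the text):

```python
# E = p ** (1/theta): p = convergence order, theta = cost per cycle in
# matrix-multiplication units (1 per product, 1.5 per inverse, 3 per
# Moore-Penrose inverse).  theta values inferred from the quoted indices.
methods = {
    "(4)":  (2, 3.0),   # Newton (NM)
    "(6)":  (3, 5.5),   # KHM
    "(11)": (6, 7.5),   # PM1
    "(21)": (4, 5.5),   # PM2
    "(23)": (7, 7.5),   # PM3
}
E = {name: p ** (1.0 / theta) for name, (p, theta) in methods.items()}
```

Under this cost model, PM3 attains the largest index, matching the ranking reported in the text.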

However, for square matrices, as can be seen in Table 8, NM and ANM are better choices, since in that case they use regular inverses in their iterative structures rather than pseudo-inverses. Furthermore, computing a scaling factor for the proposed method does not appear attractive, since it would require an extra pseudo-inverse per cycle.

The numerical results agree with the theoretical discussion of Sections 2 and 3. We can therefore state that PM1-PM3 reduce both the number of iterations and the time needed to find the polar decomposition.

## 6 Concluding remarks

In this paper, we developed high-order methods for the matrix polar decomposition. The convergence has been shown to be global. Numerical tests of various dimensions demonstrate the performance of the new methods.

In 1991, Kenney and Laub [17] proposed a family of rational iterative methods, based on Padé approximation, for the matrix sign function (and subsequently for the polar decomposition). Their principal Padé iterations are globally convergent, so globally convergent methods of arbitrary order already exist for the sign function and the polar decomposition. Here, however, we have proposed new methods that are interesting from a theoretical point of view and are not members of the Padé family. Numerical results have demonstrated the behavior of the new algorithms.

## References

1. Higham, NJ: Functions of Matrices: Theory and Computation. SIAM, Philadelphia (2008)
2. Laszkiewicz, B, Ziętak, K: Approximation of matrices and family of Gander methods for polar decomposition. BIT Numer. Math. **46**, 345-366 (2006)
3. Higham, NJ: The matrix sign decomposition and its relation to the polar decomposition. Linear Algebra Appl. **212/213**, 3-20 (1994)
4. Roberts, JD: Linear model reduction and solution of the algebraic Riccati equation by use of the sign function. Int. J. Control **32**, 677-687 (1980)
5. Higham, NJ: Computing the polar decomposition - with applications. SIAM J. Sci. Stat. Comput. **7**, 1160-1174 (1986)
6. Byers, R: Solving the algebraic Riccati equation with the matrix sign function. Linear Algebra Appl. **85**, 267-279 (1987)
7. Gander, W: Algorithms for the polar decomposition. SIAM J. Sci. Stat. Comput. **11**, 1102-1115 (1990)
8. Soheili, AR, Toutounian, F, Soleymani, F: A fast convergent numerical method for matrix sign function with application in SDEs. J. Comput. Appl. Math. **282**, 167-178 (2015)
9. Soleymani, F, Stanimirović, PS, Stojanović, I: A novel iterative method for polar decomposition and matrix sign function. Discrete Dyn. Nat. Soc. **2015**, Article ID 649423 (2015)
10. Nakatsukasa, Y, Bai, Z, Gygi, F: Optimizing Halley's iteration for computing the matrix polar decomposition. SIAM J. Matrix Anal. Appl. **31**, 2700-2720 (2010)
11. Du, K: The iterative methods for computing the polar decomposition of rank-deficient matrix. Appl. Math. Comput. **162**, 95-102 (2005)
12. Khaksar Haghani, F: A third-order Newton-type method for finding polar decomposition. Adv. Numer. Anal. **2014**, Article ID 576325 (2014)
13. Soleymani, F, Stanimirović, PS, Shateyi, S, Haghani, FK: Approximating the matrix sign function using a novel iterative method. Abstr. Appl. Anal. **2014**, Article ID 105301 (2014)
14. Soleymani, F: Some high-order iterative methods for finding all the real zeros. Thai J. Math. **12**, 313-327 (2014)
15. Cordero, A, Soleymani, F, Torregrosa, JR, Shateyi, S: Basins of attraction for various Steffensen-type methods. J. Appl. Math. **2014**, Article ID 539707 (2014)
16. Khaksar Haghani, F, Soleymani, F: On a fourth-order matrix method for computing polar decomposition. Comput. Appl. Math. **34**, 389-399 (2015)
17. Kenney, C, Laub, AJ: Rational iterative methods for the matrix sign function. SIAM J. Matrix Anal. Appl. **12**, 273-291 (1991)
18. Kiełbasiński, A, Zieliński, P, Ziętak, K: On iterative algorithms for the polar decomposition of a matrix. Appl. Math. Comput. **270**, 483-495 (2015)
19. Dubrulle, AA: Frobenius iteration for the matrix polar decomposition. Technical report HPL-94-117, Hewlett-Packard Company (1994)
20. Byers, R, Xu, H: A new scaling for Newton's iteration for the polar decomposition and its backward stability. SIAM J. Matrix Anal. Appl. **30**, 822-843 (2008)
21. Wolfram Research, Inc.: Mathematica, Version 10.0. Champaign, IL (2015)

## Acknowledgements

The authors thank the anonymous referees for their suggestions which helped to improve the quality of the paper.

## Author information

### Authors and Affiliations

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

All authors jointly worked on deriving the results and approved the final manuscript.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Soleymani, F., Khaksar Haghani, F. & Shateyi, S. Several numerical methods for computing unitary polar factor of a matrix.
*Adv Differ Equ* **2016**, 4 (2016). https://doi.org/10.1186/s13662-015-0732-z


### MSC

- 65F30

### Keywords

- iterative methods
- polar decomposition
- numerical methods
- polar factor
- Hermitian
- order of convergence