Polar decomposition of real matrices
The theory of the polar decomposition is described on Wikipedia.
For the sake of simplicity we shall restrict our attention to real-valued, square, invertible (non-singular) matrices.
Theorem. Every such matrix $B$ can be decomposed uniquely as follows:
$$
B = U Q
$$
where $U$ is an orthogonal matrix and $Q$ is a symmetric positive definite matrix.
Instead of providing yet another proof of the above theorem, I have decided to be practical and describe how to obtain the decomposition numerically.
First define the transpose of the matrix multiplied with the original:
$$
P = B^T B = \left(B^TB\right)^T = P^T \quad \mbox{: symmetric}
$$
It follows that $P$ is symmetric. It is also positive definite: for any $x \neq 0$ we have $x^T P x = \left(Bx\right)^T\left(Bx\right) = ||Bx||^2 > 0$, because $B$ is invertible.
Then what we need is the square root of the matrix $P$. To understand how to obtain it, let's take a look at
Newton's method for finding the square root
of a real positive number $p$:
$$
f(x) = x^2 - p\quad \Longrightarrow \\ x_{n+1} = x_n - \frac{x_n^2-p}{2x_n} = \left(x_n + p\,x_n^{-1}\right)/2
$$
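As a quick sanity check, the scalar iteration can be coded in a few lines. The article's own software is Delphi Pascal; the sketch below uses Python purely for illustration, and the function name is mine:

```python
# Newton's method for the square root of a positive real p:
#     x_{n+1} = (x_n + p / x_n) / 2
def newton_sqrt(p, iterations=20):
    x = 1.0  # start at 1, analogous to the unit matrix used later
    for _ in range(iterations):
        x = (x + p / x) / 2.0
    return x

print(newton_sqrt(2.0))  # converges to 1.41421356...
```

Convergence is quadratic, so twenty iterations are far more than enough for machine precision.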
Let's do the same for our matrix $P$, starting the iterations with the unit matrix:
$$
X_0 = I \quad ; \quad X_{n+1} = \left(X_n + P X_n^{-1}\right)/2 \quad ; \quad \sqrt{P} = \lim_{n\to\infty} X_n
$$
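The same iteration carries over to matrices almost verbatim. Below is a minimal Python sketch for the $2 \times 2$ case (again Python instead of the article's Pascal; the matrix $P$ and the helper names are my own choices, using plain lists so the example stays self-contained):

```python
# Newton iteration X_{n+1} = (X_n + P X_n^{-1}) / 2 for the square root
# of a symmetric positive definite 2x2 matrix P.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    # inverse of a 2x2 matrix via the adjugate / determinant formula
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

def matsqrt(P, iterations=30):
    X = [[1.0, 0.0], [0.0, 1.0]]  # X_0 = I
    for _ in range(iterations):
        PXinv = matmul(P, inv2(X))
        X = [[(X[i][j] + PXinv[i][j]) / 2.0 for j in range(2)]
             for i in range(2)]
    return X

P = [[2.0, 1.0], [1.0, 2.0]]  # an arbitrary symmetric positive definite example
X = matsqrt(P)
# X should satisfy X * X = P; here X ≈ [[1.366, 0.366], [0.366, 1.366]]
```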
According to an MSE (Mathematics Stack Exchange) reference,
$X_n^{-1}$ is positive definite and symmetric, provided that $X_n$ is positive definite and symmetric. Moreover, every iterate $X_n$ is a rational function of $P$, so all the matrices involved commute; for commuting matrices, sums and products of symmetric positive definite matrices are symmetric positive definite again (without commutativity the product of two symmetric matrices need not even be symmetric). Therefore,
by induction on $n$, every $X_n$ is symmetric positive definite, and so is $\sqrt{P}$, whose eigenvalues are the square roots of the positive eigenvalues of $P$. Now define $Q=\sqrt{P}=Q^T$.
At last define $U = B Q^{-1}$ and prove that $U$ is orthogonal:
$$
U^{T}U = \left(B Q^{-1}\right)^T\left(B Q^{-1}\right) = \left(Q^{-1}\right)^T\left(B^TB\right)Q^{-1} = \\
\left(Q^T\right)^{-1} Q^2\, Q^{-1} = \left(Q^{-1}Q\right)\left(QQ^{-1}\right) = I \cdot I = I
$$
The conclusion now looks almost trivial:
$$
B = \left[B\left(B^TB\right)^{-1/2}\right]\left[\left(B^TB\right)^{1/2}\right]
$$
So far so good for the theory. I want to talk about practice now, which in modern times means computer programming.
My favorite programming language is (Delphi) Pascal.
First we need some standard matrix manipulation routines and a Newton-Raphson routine for calculating the matrix
square root. These are implemented in a Unit called 'Wiskunde' (Dutch for 'mathematics'). The software is completed by testing whether the theory
works in practice as expected. The number of iterations in the procedure 'Newton' has been determined as follows.
A suitable stopping criterion is:
$$
\left|\left| X^2 - B^TB\,\right|\right| < \epsilon
$$
with $\epsilon = 10^{-10}$ and the norm of an $n \times n$ matrix $A$ defined as the Frobenius norm:
$$
\left|\left|A\right|\right| = \sqrt{\sum_{i=1}^n \sum_{j=1}^n A_{ij}^2}
$$
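Putting the pieces together, the whole decomposition $B = UQ$ with this stopping criterion can be sketched as follows. This is again an illustrative Python version, not the author's Pascal unit 'Wiskunde'; the test matrix $B$ is an arbitrary invertible example of my own:

```python
# Polar decomposition B = U * Q for an invertible real 2x2 matrix,
# with stopping criterion ||X^2 - B^T B|| < eps in the Frobenius norm.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def frobenius(A):
    # ||A|| = sqrt(sum of squares of all entries)
    return math.sqrt(sum(a * a for row in A for a in row))

def inv2(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

def polar(B, eps=1e-10):
    P = matmul(transpose(B), B)      # P = B^T B, symmetric positive definite
    X = [[1.0, 0.0], [0.0, 1.0]]     # X_0 = I
    while frobenius(sub(matmul(X, X), P)) >= eps:
        PXinv = matmul(P, inv2(X))
        X = [[(X[i][j] + PXinv[i][j]) / 2.0 for j in range(2)]
             for i in range(2)]
    Q = X                            # Q = sqrt(B^T B)
    U = matmul(B, inv2(Q))           # U = B Q^{-1}
    return U, Q

B = [[2.0, 1.0], [0.0, 1.0]]  # arbitrary invertible test matrix
U, Q = polar(B)
# U should be orthogonal (U^T U = I) and U * Q should reconstruct B
```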
Convergence is very fast. When cast in MathJax format the output looks like this:
$$ \small
\begin{bmatrix} 0.361048 & -0.297419 & -0.227079 \\ 0.171654 & -0.181309 & -0.338205 \\ -0.127762 & -0.074326 & -0.417988 \end{bmatrix} = \\ \small
\begin{bmatrix} 0.491571 & -0.868969 & -0.057015 \\ 0.599413 & 0.385125 & -0.701701 \\ -0.631714 & -0.310760 & -0.710187 \end{bmatrix}
\begin{bmatrix} 0.361082 & -0.207928 & -0.050301 \\ -0.207928 & 0.211719 & 0.196967 \\ -0.050301 & 0.196967 & 0.547115 \end{bmatrix}
$$