This question is triggered by another one:
Suppose we seek a $2\times2$ matrix equivalent of
the imaginary unit, that is a matrix $\,i\,$ such that $\,i^2 = -1\,$ , or,
with $\;a,b,c,d \in \mathbb{R}$ :
$$
i^2 = \begin{bmatrix} a&b\\c&d \end{bmatrix}
\begin{bmatrix} a&b\\c&d \end{bmatrix} =
\begin{bmatrix} a^2+bc&ab+bd\\ac+cd&bc+d^2 \end{bmatrix} =
- \begin{bmatrix} 1&0\\0&1 \end{bmatrix}
$$
Leading to the following equations:
$$
a^2+bc=-1 \quad ; \quad b(a+d)=0 \quad ; \quad c(a+d)=0 \quad ; \quad bc+d^2=-1
$$
Subtracting the first from the last one gives two possible solutions:
$$
a^2-d^2=0 \quad \Longrightarrow \quad a = \pm\, d
$$
The solution $\,a=d\ne0\,$ gives $\,a+d\ne0\,$, hence $\,b=0\,$ and $\,c=0\,$, so that $\,a^2 = -1$ ,
which is impossible in the reals. The remaining possibility is $\,d = -a$ :
$$
i = \begin{bmatrix} a&b\\c&-a \end{bmatrix}
\quad \mbox{with} \quad a^2+bc = -1 \quad \mbox{or} \quad
\begin{vmatrix} a&b\\c&-a \end{vmatrix} = 1
$$
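As a numerical aside (a small NumPy sketch of my own, not part of the derivation): picking any $a$ and $b\ne0$ and solving $a^2+bc=-1$ for $c$ should indeed produce a square root of $-1$:

```python
import numpy as np

# Pick arbitrary a, b (b != 0) and solve a^2 + b*c = -1 for c.
a, b = 3.0, 2.0
c = -(1.0 + a**2) / b          # here c = -5.0
i_mat = np.array([[a, b],
                  [c, -a]])

# The square should be minus the identity matrix.
print(np.allclose(i_mat @ i_mat, -np.eye(2)))  # True
```

The choice $a=3$, $b=2$ is arbitrary; any pair with $b\ne0$ works.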
But here I'm stuck. IMO it does not follow that $\,i\,$ must be
the special case of the above matrix with
$\,a=0\,$, $\,b=-1\,$, $\,c=1$ :
$$
i = \begin{bmatrix} 0&-1\\1&0 \end{bmatrix}
$$
Am I missing something obvious?
Note. In a comment
by Meelo it is remarked that: "One can notice that this representation is not unique; though one must always treat $1$ as the matrix
$$ \pmatrix{1&0\\0&1} $$ any matrix with characteristic polynomial $x^2+1$ suffices to represent $i$ . For instance, you could use
$$\pmatrix{1&-2\\1&-1}$$ for $i$." That's right, because
$$
\begin{vmatrix}a-\lambda&b\\c&-a-\lambda\end{vmatrix}=(a-\lambda)(-a-\lambda)-bc=\lambda^2+1
$$
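Meelo's example matrix can be checked numerically as well (a NumPy sketch of my own):

```python
import numpy as np

M = np.array([[1.0, -2.0],
              [1.0, -1.0]])

# Characteristic polynomial lambda^2 - tr(M)*lambda + det(M) = lambda^2 + 1:
print(np.trace(M))          # 0.0
print(np.linalg.det(M))     # approximately 1.0

# Hence M behaves as an imaginary unit: M @ M == -I.
print(np.allclose(M @ M, -np.eye(2)))  # True
```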
I don't see, however, how this can be matched with my answer at the same place:
$$
e^{\begin{bmatrix} 1 & -2 \\ 1 & -1 \end{bmatrix}\theta} = \mbox{?} =
\begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}
$$
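A quick numeric check (my own) suggests the question mark is justified: since $M=\bigl[\begin{smallmatrix}1&-2\\1&-1\end{smallmatrix}\bigr]$ satisfies $M^2=-1$, its exponential collapses to $\cos(\theta)\,I+\sin(\theta)\,M$, which is *not* the standard rotation matrix. A NumPy sketch, with a truncated Taylor series standing in for a library matrix exponential:

```python
import numpy as np

M = np.array([[1.0, -2.0],
              [1.0, -1.0]])
theta = 0.7

# Matrix exponential via a truncated Taylor series
# (enough terms for float precision at this theta).
E = np.zeros((2, 2))
term = np.eye(2)
for k in range(1, 30):
    E += term
    term = term @ (M * theta) / k

# Because M @ M = -I, the series collapses to cos(theta)*I + sin(theta)*M ...
closed = np.cos(theta) * np.eye(2) + np.sin(theta) * M
print(np.allclose(E, closed))   # True

# ... which is NOT the standard rotation matrix (they are merely similar):
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(E, R))        # False
```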
Allow me to repeat a piece of theory which will be well known to many readers
of Mathematics Stack Exchange, but which is rather unfamiliar to myself and probably
to some others as well; some keywords are
Matrix similarity
and Equivalence relation.
In the sequel, all matrices are $2\times 2$, real valued and non-singular.
Two matrices $A$ and $B$ are called similar, written as $A \sim B$,
if there exists an invertible matrix $P$ such that:
$$
B = P^{-1}\, A\, P \qquad \mbox{with:} \quad P = \begin{bmatrix}p&q\\r&s\end{bmatrix}
$$
Example:
$$
\begin{bmatrix}0&-1\\1&0\end{bmatrix}
\sim \begin{bmatrix}0&1\\-1&0\end{bmatrix}
\quad \mbox{because} \quad
\begin{bmatrix}0&1\\1&0\end{bmatrix}
\begin{bmatrix}0&-1\\1&0\end{bmatrix}
\begin{bmatrix}0&1\\1&0\end{bmatrix}=
\begin{bmatrix}0&1\\-1&0\end{bmatrix}
$$
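This example is easy to verify numerically (a NumPy sketch of my own):

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # this P is its own inverse
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
B = np.array([[0.0,  1.0],
              [-1.0, 0.0]])

# Similarity: P^{-1} A P == B
print(np.allclose(np.linalg.inv(P) @ A @ P, B))  # True
```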
In other words: $\;i \sim -i$ , which is somehow relevant to what follows.
Similarity is a sort of equality (an equivalence relation), because it is easy to prove that:
$$
A \sim A \quad ; \quad (A \sim B) \; \Longleftrightarrow \; (B \sim A)
\quad ; \quad (A \sim B) \; \wedge \; (B \sim C) \; \Longrightarrow \; (A \sim C)
$$
Now let $\left[i\right]$ denote the "standard" matrix representation of the imaginary unit.
Then we have for an arbitrary $2\times 2$ matrix $P$ :
$$ P^{-1}\,\left[i\right]\,P =
\begin{bmatrix}s&-q\\-r&p\end{bmatrix}/D
\begin{bmatrix}0&-1\\1&0\end{bmatrix}
\begin{bmatrix}p&q\\r&s\end{bmatrix}
\qquad\mbox{with:}\quad
D = \begin{vmatrix}p&q\\r&s\end{vmatrix} = (ps-qr)
$$ $$
\Longrightarrow \qquad P^{-1}\,\left[i\right]\,P =
\begin{bmatrix}-(pq+rs)&-(q^2+s^2)\\
(p^2+r^2)&(pq+rs)\end{bmatrix}/(ps-qr)
$$
The determinant is $1$, as it should be (there is a shortcut: similar matrices have the same determinant, and $\det\left[i\right]=1$):
$$
\begin{vmatrix}-(pq+rs)/(ps-qr)&-(q^2+s^2)/(ps-qr)\\
(p^2+r^2)/(ps-qr)&(pq+rs)/(ps-qr)\end{vmatrix} =
\frac{-(pq+rs)^2+(q^2+s^2)(p^2+r^2)}{(ps-qr)^2} = 1
$$
Hence $\;P^{-1}\,\left[i\right]\,P\;$ has the required form from my question:
$$
\begin{bmatrix}a&b\\c&-a\end{bmatrix} \qquad\mbox{with:}\quad a^2+bc=-1
$$
It is concluded that every matrix of the form $P^{-1}\left[i\right]P$ represents
the imaginary unit. Conversely, any real matrix with $a^2+bc=-1$ has characteristic
polynomial $\lambda^2+1$ and is therefore similar to $\left[i\right]$ (its real
canonical form). Hence all matrix representations of the imaginary unit
are similar (but not equal).
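A numerical spot check of this conclusion (my own NumPy sketch: random invertible $P$'s are conjugated and tested for trace $0$ and determinant $1$):

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.array([[0.0, -1.0],
               [1.0,  0.0]])        # the "standard" [i]

for _ in range(100):
    P = rng.normal(size=(2, 2))
    if abs(np.linalg.det(P)) < 0.1:  # skip (near-)singular P
        continue
    J = np.linalg.inv(P) @ I2 @ P
    # Required form [[a, b], [c, -a]] with a^2 + b*c = -1,
    # i.e. trace 0 and determinant 1:
    assert abs(np.trace(J)) < 1e-8
    assert abs(np.linalg.det(J) - 1.0) < 1e-8
print("all conjugates have trace 0 and det 1")
```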
If we do the above for powers of $(P^{-1}\left[ i \right]P)$ , then:
$$
\left(P^{-1}\left[ i \right]P\right)^2 = P^{-1}\left[ i \right]^2 P \\
\left(P^{-1}\left[ i \right]P\right)^3 = P^{-1}\left[ i \right]^3 P \\
\cdots \\
\left(P^{-1}\left[ i \right]P\right)^n = P^{-1}\left[ i \right]^n P
$$
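The telescoping of the powers is easy to confirm numerically (my own sketch, with an arbitrary matrix $A$ and an invertible example $P$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # an invertible example P
Pinv = np.linalg.inv(P)

# (P^{-1} A P)^n equals P^{-1} A^n P, shown here for n = 5:
lhs = np.linalg.matrix_power(Pinv @ A @ P, 5)
rhs = Pinv @ np.linalg.matrix_power(A, 5) @ P
print(np.allclose(lhs, rhs))  # True
```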
In this way, it's easy to see how the result in the comment by Meelo
can be generalized:
$$
e^{\left(P^{-1}\,\left[i\right]\,P\right)\theta} = P^{-1}\, e^{\left[i\right]\theta}\, P
$$
Or, whatever matrix equivalent of $\,i\,$ may be preferred:
$$
e^{\left(P^{-1}\left[i\right]P\right)\,\theta} = P^{-1}\begin{bmatrix} \cos(\theta) & -\sin(\theta) \\
\sin(\theta) & \cos(\theta) \end{bmatrix} P \sim
\begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}
$$
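To close the circle with Meelo's matrix: one particular $P$ with $P^{-1}\left[i\right]P = \bigl[\begin{smallmatrix}1&-2\\1&-1\end{smallmatrix}\bigr]$ is $P = \bigl[\begin{smallmatrix}1&-1\\0&1\end{smallmatrix}\bigr]$ (my own choice, found by solving the conjugation equations above), and the exponential identity can then be verified numerically:

```python
import numpy as np

theta = 1.2
M = np.array([[1.0, -2.0],
              [1.0, -1.0]])
# One particular P with P^{-1} [i] P = M (this specific choice is mine,
# obtained from the conjugation formula; it is checked below):
P = np.array([[1.0, -1.0],
              [0.0,  1.0]])
I2 = np.array([[0.0, -1.0],
               [1.0,  0.0]])
assert np.allclose(np.linalg.inv(P) @ I2 @ P, M)

# Since M @ M = -I, the exponential has the closed form cos(theta)*I + sin(theta)*M:
expM = np.cos(theta) * np.eye(2) + np.sin(theta) * M
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(expM, np.linalg.inv(P) @ R @ P))  # True
```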
This answers all my questions. But it feels like walking on thin ice.
So please correct me if I have contributed more confusion than
clarification somewhere.