Is a matrix multiplied with its transpose something special?

Special? Yes! The matrix $A^TA$ is ubiquitous in the Least Squares Finite Element Method in Numerical Analysis.

EDIT. Another interesting application of the special nature of $A^TA$ is perhaps the following. Suppose that we have a dedicated matrix inversion routine at our disposal: one that works only for matrices whose pivots all lie on the main diagonal. How can we use this routine to invert an arbitrary matrix $A$, assuming only that the inverse $A^{-1}$ exists? As follows.
  1. Multiply $A$ on the left with $A^T$, giving $B = A^T A$ .
  2. The inverse of $B$ can be determined by employing our special matrix inversion routine.
    The reason is that the pivots of $B$ always lie on the main diagonal, because $B = A^TA$ is symmetric positive definite: see the first reference.
  3. The inverse matrix is $B^{-1} = (A^TA)^{-1} = A^{-1}A^{-T}$ .
    Therefore multiply $B^{-1}$ on the right with $A^T$ , giving: $A^{-1}A^{-T}A^T = A^{-1}$ .
    This recovers the inverse that was sought.
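As a sanity check, the three steps can be sketched in NumPy (Python rather than the Pascal of the demo below; `np.linalg.inv` merely stands in for the special diagonal-pivot routine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 5))           # generic matrix, assumed invertible

B = A.T @ A                      # step 1: B = A^T A (symmetric positive definite)
B_inv = np.linalg.inv(B)         # step 2: stand-in for the special routine
A_inv = B_inv @ A.T              # step 3: (A^T A)^{-1} A^T = A^{-1}

print(np.allclose(A_inv @ A, np.eye(5)))  # True
```

Note that forming $A^TA$ squares the condition number of $A$, so this detour is numerically less robust than inverting $A$ directly with pivoting.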
Software demo (Delphi Pascal):
program Demo;

type
  matrix = array of array of double;

procedure Inverteren(var b : matrix);
{
  In-place Gauss-Jordan matrix inversion;
  pivots are taken from the main diagonal
  (no row exchanges)
}
var
  pe : double;
  NDM,i,j,k : integer;
begin
  NDM := Length(b);
  for k := 0 to NDM-1 do
  begin
    pe := b[k,k];
    b[k,k] := 1;
    for i := 0 to NDM-1 do
    begin
      b[i,k] := b[i,k]/pe;
      if (i = k) then Continue;
      for j := 0 to NDM-1 do
      begin
        if (j = k) then Continue;
        b[i,j] := b[i,j] - b[i,k]*b[k,j];
      end;
    end;
    for j := 0 to NDM-1 do
      b[k,j] := - b[k,j]/pe;
    b[k,k] := 1/pe;
  end;
end;

procedure inverse(b : matrix; var q : matrix);
{
  Inversion of an arbitrary invertible matrix
  via B^(-1) = (B^T*B)^(-1)*B^T ,
  so that no row exchanges are needed
}
var
  NDM,i,j,k : integer;
  p : matrix;
  h : double;
begin
  NDM := Length(b);
  SetLength(p,NDM,NDM);
{ Transpose * original }
  for i := 0 to NDM-1 do
  begin
    for j := 0 to NDM-1 do
    begin
      h := 0;
      for k := 0 to NDM-1 do
        h := h + b[k,i]*b[k,j];
      p[i,j] := h;
    end;
  end;
  Inverteren(p);
  SetLength(q,NDM,NDM);
{ (B^T*B)^(-1)*B^T = B^(-1) }
  for i := 0 to NDM-1 do
  begin
    for j := 0 to NDM-1 do
    begin
      h := 0;
      for k := 0 to NDM-1 do
        h := h + p[i,k]*b[j,k];
      q[i,j] := h;
    end;
  end;
end;

procedure test(NDM : integer);
var
  i,j,k : integer;
  b,q : matrix;
  h : double;
begin
  SetLength(b,NDM,NDM);
  Random; Random; { discard the first two values of the (unseeded) random generator }
  for i := 0 to NDM-1 do
  begin
    for j := 0 to NDM-1 do
    begin
      b[i,j] := Random;
    end;
  end;
  inverse(b,q);
  for i := 0 to NDM-1 do
  begin
    for j := 0 to NDM-1 do
    begin
      h := 0;
      for k := 0 to NDM-1 do
        h := h + q[i,k]*b[k,j];
      Write(h:5:2,' ');
    end;
    Writeln;
  end;
end;

BEGIN
  test(9);
END.
Output:
 1.00  0.00  0.00 -0.00  0.00  0.00  0.00  0.00 -0.00 
 0.00  1.00  0.00 -0.00  0.00  0.00  0.00  0.00 -0.00 
-0.00  0.00  1.00  0.00 -0.00 -0.00 -0.00 -0.00  0.00 
-0.00 -0.00 -0.00  1.00 -0.00 -0.00 -0.00 -0.00  0.00 
 0.00  0.00  0.00  0.00  1.00  0.00  0.00  0.00  0.00 
 0.00  0.00  0.00 -0.00 -0.00  1.00  0.00  0.00 -0.00 
-0.00 -0.00 -0.00 -0.00 -0.00 -0.00  1.00 -0.00  0.00 
-0.00 -0.00 -0.00  0.00 -0.00 -0.00 -0.00  1.00  0.00 
-0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00  1.00 
LATE EDIT.
The theory of Polar Decomposition is described in Wikipedia. If attention is restricted to real-valued, square, invertible matrices, then an appropriate question and some answers are found in Polar decomposition of real matrices. Especially the following formula over there leaves no doubt that a matrix multiplied with its transpose IS something special: $$ B = \left[B\left(B^TB\right)^{-1/2}\right]\left[\left(B^TB\right)^{1/2}\right] $$ Least Squares methods (employing a matrix multiplied with its transpose) are also very useful for the Automated Balancing of Chemical Equations.
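The polar decomposition formula can likewise be verified numerically. A minimal NumPy sketch, in which the symmetric square root $(B^TB)^{1/2}$ is computed via an eigendecomposition (one standard way, not the only one):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.random((4, 4))                   # real matrix, assumed invertible

# Symmetric square root of B^T B via its eigendecomposition
w, V = np.linalg.eigh(B.T @ B)           # eigenvalues are positive for invertible B
P = V @ np.diag(np.sqrt(w)) @ V.T        # P = (B^T B)^(1/2)
U = B @ np.linalg.inv(P)                 # U = B (B^T B)^(-1/2)

print(np.allclose(U @ P, B))             # True: B = U P, the polar decomposition
print(np.allclose(U.T @ U, np.eye(4)))   # True: the factor U is orthogonal
```

Here $U$ is orthogonal and $P$ is symmetric positive definite, so $B = UP$ is exactly the polar decomposition stated above.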