
Upper and Lower case

$ \def \EN {\quad \mbox{and} \quad} \def \XOR {\qquad \mbox{xor} \qquad} \def \hieruit {\quad \Longrightarrow \quad} \def \slechts {\quad \Longleftrightarrow \quad} $ There are two cases in which a tri-diagonal system may be reduced to a system with only two diagonals, resulting in either an Upper two-diagonal matrix $U$ or a Lower two-diagonal matrix $L$: $$ \left[ \begin{array}{ccccccc} . & . & . & & & & \\ & -b & 1 & 0 & & & \\ & & -b & 1 & 0 & & \\ & & & -b & 1 & 0 & \\ & & & & . & . & . \end{array} \right] = L \qquad \left[ \begin{array}{ccccccc} . & . & . & & & & \\ & 0 & 1 & -a & & & \\ & & 0 & 1 & -a & & \\ & & & 0 & 1 & -a & \\ & & & & . & . & . \end{array} \right] = U $$ The properties of being an Upper or a Lower matrix are Persistent Properties.
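For readers who prefer to see the two shapes in concrete form, here is a small sketch (not part of the original derivation; the function names and the chosen values of $a$, $b$ and $N$ are merely illustrative) that builds a Lower and an Upper two-diagonal matrix; the first row of $L$ and the last row of $U$ are simply left with a 1 on the diagonal here.
```python
import numpy as np

def lower_matrix(b, N):
    """Unit diagonal with sub-diagonal entries -b: row i reads -b*T[i-1] + T[i] = 0."""
    return np.eye(N) + np.diag(-b * np.ones(N - 1), k=-1)

def upper_matrix(a, N):
    """Unit diagonal with super-diagonal entries -a: row i reads T[i] - a*T[i+1] = 0."""
    return np.eye(N) + np.diag(-a * np.ones(N - 1), k=+1)

print(lower_matrix(0.5, 4))   # two diagonals below/on the main diagonal
print(upper_matrix(0.5, 4))   # two diagonals on/above the main diagonal
```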
Use $(a',b') = (a^2,b^2)/(1-2.a.b)$ to prove: $$ a' = a^2 \EN b' = b = 0 \XOR b' = b^2 \EN a' = a = 0 $$ $$ a = \sqrt{a'} \EN b = b' = 0 \XOR b = \sqrt{b'} \EN a' = a = 0 $$ Here "xor" stands for: eXclusive OR. We can also write: \begin{eqnarray*} a(2.dx) = a^2(dx) & \EN & a(dx/2) = a^{1/2}(dx) \\ b(2.dx) = b^2(dx) & \EN & b(dx/2) = b^{1/2}(dx) \end{eqnarray*} Subsequently, another piece of theory can be developed, which would then be analogous to the theory of the "Quotient Function".
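The first of these statements is easily checked numerically. The following sketch assumes only the coarsening rule quoted above; the helper name `coarsen` and the chosen value of $a$ are hypothetical.
```python
# Sketch: with b = 0, the rule (a',b') = (a^2, b^2)/(1 - 2*a*b) gives a' = a^2 and b' = 0,
# i.e. a(2*dx) = a(dx)^2, and read backwards a(dx/2) = a(dx)^(1/2).
from math import isclose, sqrt

def coarsen(a, b):
    """One grid-coarsening step: (a, b) -> (a^2, b^2) / (1 - 2*a*b)."""
    d = 1.0 - 2.0 * a * b
    return a * a / d, b * b / d

a, b = 0.7, 0.0            # an Upper matrix: b is identically zero
a2, b2 = coarsen(a, b)     # coefficients on the grid with spacing 2*dx
assert isclose(a2, a * a) and b2 == 0.0   # a' = a^2  and  b' = b = 0
assert isclose(sqrt(a2), a)               # refinement: a = sqrt(a')
```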
It's easy to find the Finite Difference equations which are associated with the Lower and Upper matrix, respectively: \begin{eqnarray*} -b.T_{i-1} + T_i = 0 & \XOR & T_i - a.T_{i+1} = 0 \\ T_i = b.T_{i-1} & \XOR & T_i = a.T_{i+1} \end{eqnarray*} But the equations can also be written as follows, assigning to $i$ an increment or a decrement, as appropriate: \begin{eqnarray*} -b.T_i + T_{i+1} = 0 & \XOR & T_{i-1} - a.T_i = 0 \\ T_i = 1/b.T_{i+1} & \XOR & T_i = 1/a.T_{i-1} \end{eqnarray*} Herewith we see that an Upper matrix can be made equivalent to a Lower matrix, and vice versa, as follows: $$ \left[ \begin{array}{ccccccc} . & . & . & & & & \\ & -b & 1 & 0 & & & \\ & & -b & 1 & 0 & & \\ & & & -b & 1 & 0 & \\ & & & & . & . & . \end{array} \right] \equiv \left[ \begin{array}{ccccccc} . & . & . & & & & \\ & 0 & 1 & -1/b & & & \\ & & 0 & 1 & -1/b & & \\ & & & 0 & 1 & -1/b & \\ & & & & . & . & . \end{array} \right] $$ $$ \left[ \begin{array}{ccccccc} . & . & . & & & & \\ & 0 & 1 & -a & & & \\ & & 0 & 1 & -a & & \\ & & & 0 & 1 & -a & \\ & & & & . & . & . \end{array} \right] \equiv \left[ \begin{array}{ccccccc} . & . & . & & & & \\ & -1/a & 1 & 0 & & & \\ & & -1/a & 1 & 0 & & \\ & & & -1/a & 1 & 0 & \\ & & & & . & . & . \end{array} \right] $$ This is quite analogous to $(b/a)^{-1} = (a/b)$ and traversing the grid in the reverse direction.
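The equivalence between a Lower matrix with coefficient $b$ and an Upper matrix with coefficient $1/b$ can be verified with a few lines of code; this is only a sketch, with arbitrarily chosen values for $b$, $N$ and the boundary value.
```python
import numpy as np

b, N, T1 = 0.8, 6, 1.0

# Lower matrix with coefficient b: forward recurrence T[i] = b * T[i-1], starting at T_1.
T_lower = np.empty(N)
T_lower[0] = T1
for i in range(1, N):
    T_lower[i] = b * T_lower[i - 1]

# Upper matrix with coefficient 1/b: backward recurrence T[i] = (1/b) * T[i+1],
# started from the other end with the consistent boundary value T_N = b^(N-1) * T_1.
T_upper = np.empty(N)
T_upper[-1] = T1 * b ** (N - 1)
for i in range(N - 2, -1, -1):
    T_upper[i] = (1.0 / b) * T_upper[i + 1]

assert np.allclose(T_lower, T_upper)   # same solution, grid traversed in reverse direction
```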
A natural boundary condition for the Lower matrix is at $(i = 1)$, because, while going from the top to the bottom, the Lower matrix corresponds with the following F.D. equations: $$ T_1 = T_1 \; ; \; -b.T_1 + T_2 = 0 \; ; \; ... \; ; \; -b.T_{i-1} + T_i = 0 \; ; \; ... \; ; \; -b.T_{N-1} + T_N = 0 $$ The system can be solved directly, by forward substitution: $$ T_2 = b.T_1 \; ; \; T_3 = b.T_2 = b^2.T_1 \; ; \; ... \; ; \; T_i = b^{i-1}.T_1 \; ; \; ... \; ; \; T_N = b^{N-1}.T_1 $$ A natural boundary condition for the Upper matrix is at $(i = N)$, because, while going from the bottom to the top, the Upper matrix corresponds with the following F.D. equations: $$ T_N = T_N \; ; \; T_{N-1} - a.T_N = 0 \; ; \; ... \; ; \; T_i - a.T_{i+1} = 0 \; ; \; ... \; ; \; T_1 - a.T_2 = 0 $$ The system can be solved directly, by backward substitution: $$ T_{N-1} = a.T_N \; , \; T_{N-2} = a^2.T_N \; , \; ... \; , \; T_{N-i} = a^i.T_N \; , \; ... \; , \; T_1 = a^{N-1}.T_N $$
Herewith the general F.D. solutions for the L/U matrices become: $$ T_i = T_1.b^{i-1} \XOR T_{N-i} = T_N.a^i $$ Rewrite the Upper solution: $$ T_{N-i} = T_N.a^i \hieruit T_1 = T_{N-(N-1)} = T_N.a^{N-1} $$ $$ \hieruit T_i = T_{N-(N-i)} = T_N.a^{N-i} = T_N.a^{N-1}.a^{1-i} = T_1.(1/a)^{i-1} $$ Rewrite the Lower solution: $$ T_i = T_1.b^{i-1} \hieruit T_N = T_1.b^{N-1} $$ $$ \hieruit T_{N-i} = T_1.b^{N-i-1} = T_1.b^{N-1}.b^{-i} = T_N.(1/b)^i $$ We conclude that there exists quite some resemblance between the Lower and the Upper solutions; each expresses $T_i$ in terms of either boundary value: $$ T_N.a^{N-i} = T_1.(1/a)^{i-1} \XOR T_1.b^{i-1} = T_N.(1/b)^{N-i} $$
Let $L$ denote the total length of the mesh, $L = (N-1).dx$. (Herewith we take care of the fact that real exponents should preferably be dimensionless.) Substituting $x = (i-1).dx$, and reading $a$ and $b$ as the coefficients that belong to the mesh length $L$ itself (in agreement with $a(2.dx) = a^2(dx)$ and $b(2.dx) = b^2(dx)$), then leads to the accompanying Analytical solutions: \begin{eqnarray*} & T(x) = T(0).b^{x/L} \slechts T(L-x) = T(L).b^{-x/L} \\ & T(x) = T(0).a^{-x/L} \slechts T(L-x) = T(L).a^{x/L} \end{eqnarray*} The boundary conditions should, of course, be consistent herewith: $$ T(L)=T(0).b \XOR T(0)=T(L).a $$
It can be demonstrated that the Rule of Positive Coefficients must remain valid for Upper and Lower matrices, irrespective of any laws for grid coarsening and grid refinement. Take a closer look at the Lower solution $T_i = T_1.b^{i-1}$. For negative $b$, the solution $T_i$ would change sign with every increment of $i$. If $i-1$ were even, hence $b^{i-1}$ positive, then $i$ would be odd, hence $b^i$ negative. But then $i+1$ would be even again and $b^{i+1}$ positive. The solution would exhibit a strongly oscillatory behaviour. Even worse, the period of the oscillations would be proportional to the grid-spacing. If a numerical solution is assumed to converge to a smooth analytical solution, then such "unstable" behaviour clearly does not lead to the desired result. Thus unstable numerical solutions are commonly considered unacceptable. We conclude that acceptable solutions are only obtained for $b > 0$. Very much the same argument, applied to the Upper solution, proves once more that $a > 0$.
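Both the forward substitution and the sign argument at the end can be illustrated numerically. The script below is a sketch, not part of the original text, with arbitrarily chosen values of $b$ and $N$.
```python
import numpy as np

def lower_solution(b, N, T1=1.0):
    """Forward substitution for the Lower system: T_1 given, then T_i = b * T_{i-1}."""
    T = np.empty(N)
    T[0] = T1
    for i in range(1, N):
        T[i] = b * T[i - 1]
    return T

N = 8
# Positive coefficient: a smooth, monotone solution T_i = T_1 * b^(i-1).
T_pos = lower_solution(0.5, N)
assert np.allclose(T_pos, 0.5 ** np.arange(N))

# Negative coefficient: the sign flips with every increment of i,
# an oscillation whose period is proportional to the grid spacing.
T_neg = lower_solution(-0.5, N)
print(np.sign(T_neg))   # signs alternate: +1, -1, +1, -1, ...
```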