Improved Error Analysis

$ \def \half {\frac{1}{2}} \def \norm {\frac{1}{\sigma \sqrt{2\pi}} \; } \def \MET {\qquad \mbox{where} \quad} \def \hieruit {\quad \Longrightarrow \quad} \def \slechts {\quad \Longleftrightarrow \quad} $ Parseval's Theorem for uniform combs of hat functions $P(x)$ with discretization $\Delta$ reads as follows. $$ \frac{1}{\Delta/2} \int_{-\Delta/2}^{+\Delta/2} P(x)^2 \, dx = \half a_0^2 + \sum_{k=1}^\infty a_k^2 $$ Here $a_k/2 = A(k\omega)$ , $\omega = 2\pi/\Delta$ and: $$ P(x) = \sum_{L=-\infty}^{+\infty} p(x-L\Delta) \Delta = 1 + \sum_{k=1}^{\infty} a_k\,\cos(k\omega x) $$ Lemma. $$ \frac{1}{\Delta/2} \int_{-\Delta/2}^{+\Delta/2} P(x) \, dx = 2 $$ Proof. In the section Uniform Combs of Hat functions it has been shown that: $$ a_k = \frac{1}{\Delta/2} \int_{-\Delta/2}^{+\Delta/2} P(x) \cos(k\,2\pi/\Delta\,x)\, dx = 2 \times A(k\omega) \hieruit $$ $$ \frac{1}{\Delta/2} \int_{-\Delta/2}^{+\Delta/2} P(x)\, dx = a_0 = 2 \times A(0) = 2 $$ End of proof. But not the end of the error analysis. $$ \half a_0^2 = 2\, A^2(0) = 2 \quad ; \quad a_k^2 = 4\, A^2(k\omega) \hieruit $$ $$ \frac{1}{\Delta/2} \int_{-\Delta/2}^{+\Delta/2} \left[ P(x)-1\right]^2\, dx = $$ $$ \frac{1}{\Delta/2} \int_{-\Delta/2}^{+\Delta/2} P(x)^2\, dx - 2\, \frac{1}{\Delta/2} \int_{-\Delta/2}^{+\Delta/2} P(x)\, dx + \frac{1}{\Delta/2} \int_{-\Delta/2}^{+\Delta/2} 1\, dx = $$ $$ 2 + 4 \sum_{k=1}^\infty A^2(k\omega) - 2 \times 2 + 2 \hieruit $$ $$ \frac{1}{\Delta/2} \int_{-\Delta/2}^{+\Delta/2} \left[ P(x)-1\right]^2\, dx = \sum_{k=1}^\infty a_k^2 \MET a_k = 2\, A(k\omega) $$ In words: the squared difference between the comb and unity, integrated over a discretization interval (and divided by half of it), equals the sum of the squares of the Fourier coefficients (of the cosines).
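The identity can be checked numerically. The sketch below (function names are ad hoc) builds a uniform comb of Gaussians, for which the spectrum $A(\omega) = e^{-(\sigma\omega)^2/2}$ is assumed, and compares both sides of the last equation:

```python
import math

def comb(x, sigma, Delta, n_tail=50):
    """P(x) = Delta * sum_L p(x - L*Delta) for a Gaussian density p,
    truncated at |L| <= n_tail (the tails are utterly negligible here)."""
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return Delta * sum(
        c * math.exp(-((x - L * Delta) ** 2) / (2.0 * sigma ** 2))
        for L in range(-n_tail, n_tail + 1))

def parseval_sides(sigma, Delta, n_grid=2000, n_terms=50):
    """Return (LHS, RHS) of the error identity derived above."""
    omega = 2.0 * math.pi / Delta
    h = Delta / n_grid
    # LHS: (1/(Delta/2)) * integral of (P(x)-1)^2 over one period,
    # midpoint rule (spectrally accurate for smooth periodic integrands)
    lhs = sum((comb(-Delta / 2 + (i + 0.5) * h, sigma, Delta) - 1.0) ** 2
              for i in range(n_grid)) * h / (Delta / 2)
    # RHS: sum of a_k^2 with a_k = 2*A(k*omega), A Gaussian
    rhs = sum((2.0 * math.exp(-(k * omega * sigma) ** 2 / 2.0)) ** 2
              for k in range(1, n_terms + 1))
    return lhs, rhs
```

For, say, $\sigma = 0.3$ and $\Delta = 1$, the two sides agree to many decimal places.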
The left-hand side can be interpreted as (twice) a mean square relative error, which is of course a more accurate measure than just the first term of the remaining Fourier series. The former method is quite good for a uniform comb of Gaussians, but for a uniform comb of e.g. Cauchy distributions it is already a bit doubtful, as we have seen in the preceding subsection.
Example. Uniform comb of Gaussians. Let: $$ \sum_{k=1}^\infty \left[ 2 \times A(k\omega) \right]^2 < \epsilon^2 \slechts \sum_{k=1}^\infty e^{-(k\omega\sigma)^2} < (\epsilon/2)^2 $$ Because this series converges very fast, we decide again to take only the first term of it: $$ e^{-(\omega\sigma)^2} < (\epsilon/2)^2 \slechts e^{-(\omega\sigma)^2/2} < \epsilon/2 $$ Thus resulting in exactly the same condition as has been found before: $$ \sigma = \frac{\Delta}{2\pi} \alpha \MET \alpha = \sqrt{2\ln(2/\epsilon)} $$ Example. Uniform comb of Cauchy distributions. Let: $$ \sum_{k=1}^\infty \left[ 2 \times A(k\omega) \right]^2 < \epsilon^2 \slechts \sum_{k=1}^\infty \left[ e^{-2\omega\sigma} \right]^k < (\epsilon/2)^2 $$ $$ \slechts \frac{e^{-2\omega\sigma}}{1 - e^{-2\omega\sigma}} < (\epsilon/2)^2 \slechts e^{-2\omega\sigma} < \frac{(\epsilon/2)^2}{1 + (\epsilon/2)^2} $$ $$ \slechts \sigma > \frac{\Delta}{2\pi} \ln\sqrt{1 + (2/\epsilon)^2} \approx \frac{\Delta}{2\pi} \ln(2/\epsilon) $$ The latter approximation holds because the errors are supposed to be small. The result is then the same as found with the simpler method.
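As a sanity check on both examples, the sketch below (names are ad hoc; the spectra $A(\omega) = e^{-(\sigma\omega)^2/2}$ for the Gaussian and $A(\omega) = e^{-\sigma\omega}$ for the Cauchy distribution are assumed) picks $\sigma$ from the two criteria and evaluates the mean square error $\sum_k a_k^2$ directly:

```python
import math

def min_sigma_gauss(Delta, eps):
    # first-term criterion: sigma = Delta/(2*pi) * sqrt(2*ln(2/eps))
    return Delta / (2.0 * math.pi) * math.sqrt(2.0 * math.log(2.0 / eps))

def min_sigma_cauchy(Delta, eps):
    # geometric-sum criterion: sigma = Delta/(2*pi) * ln(sqrt(1+(2/eps)^2))
    return Delta / (2.0 * math.pi) * math.log(math.sqrt(1.0 + (2.0 / eps) ** 2))

def mean_square_error(A, sigma, Delta, n_terms=200):
    """Sum over k of a_k^2 with a_k = 2*A(k*omega), omega = 2*pi/Delta."""
    omega = 2.0 * math.pi / Delta
    return sum((2.0 * A(k * omega, sigma)) ** 2 for k in range(1, n_terms + 1))

A_gauss  = lambda w, s: math.exp(-(w * s) ** 2 / 2.0)  # Gaussian spectrum
A_cauchy = lambda w, s: math.exp(-w * s)               # Cauchy spectrum
```

With $\epsilon = 0.01$ and $\Delta = 1$ both choices of $\sigma$ give a mean square error of essentially $\epsilon^2$: exactly so for the Cauchy comb (the geometric sum was taken in full), and up to the utterly negligible higher-order terms for the Gaussian comb (only the first term was kept).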