The Limit Concept
Disclaimer.
The following can be found in numerous standard texts on Calculus,
such as:
- James Stewart, Calculus: Early Transcendentals, 5th edition, Thomson,
ISBN 0-534-27409-9
- Wikipedia, Limit (mathematics)
Suppose $f$ is a real-valued function of $x$ and $a$ is a real number.
The expression:
$$
\lim_{x\rightarrow a} f(x) = L
$$
means that $f(x)$ can be
made as close to $L$ as desired, by making $x$ sufficiently close to $a$,
but without actually letting $x$ be $a$. In this case, we say that
"the limit of $f(x)$, as $x$ approaches $a$, is $L$".
The exact definition is as follows. Let $f(x)$ be a function defined
on an open interval that contains $x = a$, except possibly at $a$ itself.
Then we say:
$$
\lim_{x\rightarrow a} f(x) = L
$$
if for every number $\epsilon > 0$ there is some number $\delta > 0$ such that:
$$
| f(x) - L | < \epsilon \quad \mbox{whenever}
\quad 0 < | x - a | < \delta
$$
Note that $f(x)$ need not be defined at $x = a$ itself.
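The definition can be probed numerically. The sketch below is ours (the helper name `check_eps_delta` is hypothetical, not taken from any of the cited texts); it tests the $(\delta,\epsilon)$ condition at finitely many sample points, so it can refute a candidate $\delta$ but never prove one:

```python
def check_eps_delta(f, a, L, eps, delta, samples=10000):
    """Probe |f(x) - L| < eps at sampled x with 0 < |x - a| < delta.

    Only a finite check, not a proof: the inequality is tested at
    `samples` points on each side of a, never at x = a itself.
    """
    for k in range(1, samples + 1):
        h = delta * k / (samples + 1)  # so that 0 < h < delta
        for x in (a - h, a + h):
            if not abs(f(x) - L) < eps:
                return False
    return True

# For f(x) = 2x at a = 1, L = 2: |2x - 2| = 2|x - 1|, so the choice
# delta = eps / 2 passes, while delta = eps fails near the endpoints.
print(check_eps_delta(lambda x: 2 * x, 1.0, 2.0, 1e-3, 5e-4))  # True
print(check_eps_delta(lambda x: 2 * x, 1.0, 2.0, 1e-3, 1e-3))  # False
```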
As an example, consider the following limit:
$$
\lim_{x\rightarrow 1} \frac{x^2-1}{x-1} = 2
$$
Indeed: since $a = 1$ and $0 < |x-a|$, we have $x \neq 1$, so the factor
$x-1$ may safely be cancelled:
$$
\lim_{x\rightarrow 1} \frac{x^2-1}{x-1} =
\lim_{x\rightarrow 1} \frac{(x-1)(x+1)}{x-1} =
\lim_{x\rightarrow 1} (x+1)
$$
Now take $\delta = \epsilon$: for every $\epsilon > 0$ there exists a number
$\delta$, namely $\delta = \epsilon$, such that $|(x+1)-2| = |x-1| < \epsilon$
whenever $0 < |x-1| < \delta$. Hence the limit is indeed $2$.
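The choice $\delta = \epsilon$ can be checked numerically. A small sketch (our own, using the symbols of the example) samples points in the punctured interval $0 < |x-1| < \delta$ and confirms $|f(x) - 2| < \epsilon$:

```python
def f(x):
    # The function from the example; note it is undefined at x = 1.
    return (x**2 - 1) / (x - 1)

eps = 1e-6
delta = eps  # the choice delta = epsilon from the argument above
for h in (0.9 * delta, 0.5 * delta, 0.1 * delta):  # so 0 < h < delta
    for x in (1 - h, 1 + h):
        assert abs(f(x) - 2) < eps
print("|f(x) - 2| < eps at all sampled x with 0 < |x - 1| < delta")
```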
Closely related to the definition of a limit, but not identical to it, is
the definition of continuity. A function $f$ is continuous at a number
$a$ if:
$$
\lim_{x\rightarrow a} f(x) = f(a)
$$
Example. Let $f(x)$ be defined by:
$$
f(x) = \left\{ \begin{array}{ll}
2 & \mbox{for} \quad x = 1 \\
(x^2-1)/(x-1) & \mbox{for} \quad x \neq 1
\end{array} \right.
$$
Then we have proved that $f(x)$ is continuous at $x=1$. If we define instead:
$$
f(x) = \left\{ \begin{array}{ll}
0 & \mbox{for} \quad x = 1 \\
(x^2-1)/(x-1) & \mbox{for} \quad x \neq 1
\end{array} \right.
$$
Then $f(x)$ is not continuous at $x=1$, because:
$\lim_{x\rightarrow 1} f(x) = 2 \neq 0$.
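The two piecewise definitions can be compared directly. The following sketch (function names are ours) approaches $x = 1$ from both sides: both functions tend to $2$, but only the first also takes that value at $x = 1$:

```python
def f_continuous(x):
    # Value 2 at x = 1 equals the limit, so f is continuous there.
    return 2.0 if x == 1 else (x**2 - 1) / (x - 1)

def f_discontinuous(x):
    # Value 0 at x = 1 differs from the limit 2: continuity fails.
    return 0.0 if x == 1 else (x**2 - 1) / (x - 1)

# Near x = 1 both functions are close to 2 ...
for h in (1e-4, 1e-6, 1e-8):
    assert abs(f_continuous(1 + h) - 2) < 1e-3
    assert abs(f_discontinuous(1 - h) - 2) < 1e-3
# ... but at x = 1 itself they differ.
print(f_continuous(1), f_discontinuous(1))  # 2.0 0.0
```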
Suppose, again, that $f$ is a real-valued function of $x$. The expression:
$$
\lim_{x \rightarrow \infty} f(x) = L
$$
means that $f(x)$ can be made as close to $L$ as desired, by making $x$ large
enough. A clause "without actually letting $x$ be $\infty$" would be a trivial
addendum in this case, since a real number $x$ never equals $\infty$. We say
that "the limit of $f(x)$, as $x$ approaches infinity,
is $L$". The exact definition is as follows. Let $f(x)$ be a function defined
for sufficiently large values of $x$. Then we say:
$$
\lim_{x\rightarrow \infty} f(x) = L
$$
if for every number $\epsilon > 0$ there is some number $N > 0$ such that:
$$
| f(x) - L | < \epsilon \quad \mbox{whenever} \quad x > N
$$
As an example, consider the following limit:
$$
\lim_{x\rightarrow \infty} 1/x = 0
$$
Indeed:
$$
| 1/x - 0 | < \epsilon \quad \mbox{whenever} \quad x > N
\quad \mbox{with} \quad N = 1/\epsilon
$$
Last but not least we have infinite limits. Symbolically:
$$
\lim_{x\rightarrow \infty} f(x) = \infty
$$
if for every number $M > 0$ there is some number $N > 0$ such that:
$$
f(x) > M \quad \mbox{whenever} \quad x > N
$$
But, in some circles, such infinite limits are simply said not to exist.
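The $(M,N)$ clause can be probed the same way as the earlier definitions. A sketch of ours for $f(x) = x^2$, where the choice $N = \sqrt{M}$ works:

```python
import math

def f(x):
    return x * x

# For f(x) = x^2: given any M > 0, choosing N = sqrt(M) guarantees
# f(x) > M whenever x > N.
for M in (10.0, 1e6, 1e12):
    N = math.sqrt(M)
    for x in (1.001 * N, 2 * N, 100 * N):  # samples with x > N
        assert f(x) > M
print("f(x) = x^2 exceeds every sampled bound M for x > sqrt(M)")
```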
Claimer. The following will not be found in any standard textbook on
calculus. Let's restrict attention to the first limit definition given above:
$$
\lim_{x\rightarrow a} f(x) = L
$$
if for every number $\epsilon > 0$ there is some number $\delta > 0$ such that:
$$
| f(x) - L | < \epsilon \quad \mbox{whenever}
\quad 0 < | x - a | < \delta
$$
Numbers like $\epsilon > 0$ and $\delta > 0$ are known in computational work as
errors. Even the real numbers themselves are not error-free in a digital
computer, as is obvious when one seeks to represent irrational numbers
such as $\pi$ or $\sqrt{2}$. If $x$ is the floating point number representing
$\pi$ and $\delta$ is the "machine eps" (i.e. an error) then only the following
is true:
$$
x \not \equiv \pi \quad \mbox{and} \quad 0 < | x - \pi | < \delta
$$
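This gap can be observed directly in double precision. The sketch below is ours; it uses the identity $\sin(\pi - d) = \sin(d) \approx d$ for small $d$, so evaluating $\sin$ at the float $x$ exposes the distance between $x$ and the true, irrational $\pi$:

```python
import math
import sys

x = math.pi                           # the 64-bit float nearest to pi
eps_machine = sys.float_info.epsilon  # 2**-52, about 2.22e-16

# sin(pi - d) = sin(d) ~ d for small d, so sin(x) approximates |x - pi|.
d = math.sin(x)
print(d)  # about 1.22e-16: x is not pi, yet 0 < |x - pi| < eps_machine
assert 0 < d < eps_machine
```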
Even worse. While carrying out computations with real machine numbers, errors
tend to accumulate. This is actually reflected within the $(\delta,\epsilon)$
formalism for limits. Let, for example, $|x-a| < \delta_x$ and
$|y-b| < \delta_y$, where $(a,b)$ are supposed to be "exact" and $(x,y)$ are
machine numbers. Next calculate:
$$
| (x+y) - (a+b) | = | (x-a) + (y-b) | \le |x-a| + |y-b| < \delta_x + \delta_y
$$
So the error bound on the sum $(x+y)$ is the sum $\delta_x + \delta_y$ of the
individual bounds, and hence larger than either of them alone. Similar
expressions are easily derived for the other elementary operations, and they
are encountered as well in common proofs involving limits. We may conclude
that the concept of a limit actually
represents error processing - though in an idealized manner. To put
it the other way around: the materialization of the limit concept is
error processing.
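Error accumulation is easy to demonstrate. In the sketch below (ours), each float `0.1` carries a representation error, and summing ten of them accumulates those errors, just as the bound $\delta_x + \delta_y$ predicts:

```python
# Each float 0.1 carries a representation error; summing ten of them
# accumulates those errors, as the bound delta_x + delta_y predicts.
total = 0.0
for _ in range(10):
    total += 0.1
print(total)             # 0.9999999999999999, not 1.0
print(abs(total - 1.0))  # the accumulated error, on the order of 1e-16
assert total != 1.0
```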
Now take another look at the limit definition, especially the
clause "if for every number $\epsilon > 0$ there is some number $\delta > 0$
such that". If $f(x)$ is interpreted as (the end result of) a calculation, then
it simply says that the propagated error of that calculation, though dependent
upon the initial error $\delta$, is guaranteed to stay within a certain
pre-defined bound $\epsilon$.
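This reading can be illustrated with a concrete calculation. The sketch below (ours) treats $f(x) = x^2$ near $a = 3$ as "the calculation": the standard estimate $|x^2 - 9| = |x-3|\,|x+3| < 7\delta$ for $\delta \le 1$ shows that an initial error below $\delta = \epsilon/7$ keeps the final error below $\epsilon$:

```python
def squared(x):
    # The "calculation" whose error propagation we bound.
    return x * x

a, L = 3.0, 9.0
eps = 1e-4   # pre-defined bound on the final error
# If |x - 3| < delta <= 1 then |x + 3| < 7, so |x^2 - 9| < 7 * delta.
# Choosing delta = eps / 7 therefore guarantees |squared(x) - 9| < eps.
delta = eps / 7
for h in (0.99 * delta, 0.5 * delta):
    for x in (a - h, a + h):
        assert abs(squared(x) - L) < eps
print("initial error below", delta, "keeps the final error below", eps)
```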