$
\def \J {\Delta}
\def \half {\frac{1}{2}}
\def \kwart {\frac{1}{4}}
\def \hieruit {\quad \Longrightarrow \quad}
\def \slechts {\quad \Longleftrightarrow \quad}
\def \norm {\frac{1}{\sigma \sqrt{2\pi}} \; }
\def \EN {\quad \mbox{and} \quad}
\def \OF {\quad \mbox{or} \quad}
\def \wit {\quad \mbox{;} \quad}
\newcommand{\dq}[2]{\displaystyle \frac{\partial #1}{\partial #2}}
\newcommand{\oq}[2]{\partial #1 / \partial #2}
\newcommand{\qq}[3]{\frac{\partial^2 #1}{{\partial #2}{\partial #3}}}
\def \erf {\operatorname{Erf}}
$
Fuzzyfication of Line Segments
Let the line segment $l$ between $(x_0,y_0)$ and $(x_1,y_1)$ be given by:
$$
x(t) = x_0 + (x_1 - x_0).t \EN
y(t) = y_0 + (y_1 - y_0).t
\qquad \mbox{where:} \quad 0 < t < 1
$$
The fuzzyfied line segment $L$ is defined by a convolution integral of $l$ with
the standard normal distribution, in two dimensions:
$$
L(x,y) = \iint \left(\norm\right)^2
e^{-\half\left[(\xi-x)^2 + (\eta-y)^2\right]/\sigma^2}
l(\xi,\eta) \, d\xi.d\eta
$$
Here $l(x,y)$ denotes the line segment. It is advantageous to introduce other
coordinates, which are associated with $l$ itself. The parameter $t$ is one of
these coordinates. Let the thickness of the line segment be denoted by $D$ and
the length measured along $l$ by $s$; then
$s = t . \sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2}$ and:
$$
\xi = x_0 + (x_1 - x_0).t \qquad
\eta = y_0 + (y_1 - y_0).t
$$ $$
d\xi.d\eta = D.ds = D.\sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2} \, dt
$$
Furthermore, the function $l(x,y)$ has the value $1$ on the line segment (and $0$
everywhere else). Herewith, the double integral becomes:
$$
L(x,y) = \frac{1}{2\pi\sigma^2} \; D.\sqrt{(x_1-x_0)^2 + (y_1-y_0)^2}
$$ $$
. \; \int_0^1
e^{-\half\left\{\left[x_0+(x_1-x_0).t - x\right]^2
+ \left[y_0+(y_1-y_0).t - y\right]^2\right\}/\sigma^2}
\; . \; 1 \; . \; dt
$$
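This integral can already be evaluated by brute-force numerical quadrature, which
gives a handy reference for checking the closed form derived below. A minimal
Python sketch, assuming scipy is available; the endpoints, $\sigma$ and $D$ in the
example call are arbitrary illustrative values, not taken from the text:

```python
import numpy as np
from scipy.integrate import quad

def L_quad(x, y, x0, y0, x1, y1, sigma, D):
    """Brute-force evaluation of L(x,y) by quadrature over t."""
    A = (x1 - x0)**2 + (y1 - y0)**2                 # squared length of the segment
    pref = D * np.sqrt(A) / (2.0 * np.pi * sigma**2)
    def integrand(t):
        dx = x0 + (x1 - x0)*t - x                   # xi(t) - x
        dy = y0 + (y1 - y0)*t - y                   # eta(t) - y
        return np.exp(-0.5 * (dx*dx + dy*dy) / sigma**2)
    return pref * quad(integrand, 0.0, 1.0)[0]

# example: a point near the middle of the segment (0,0)-(3,1)
print(L_quad(1.5, 0.8, 0.0, 0.0, 3.0, 1.0, sigma=0.5, D=0.1))
```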
So that is what we have to calculate analytically. Start by rewriting the exponent:
$$
\left[x_0+(x_1-x_0).t - x\right]^2 + \left[y_0+(y_1-y_0).t - y\right]^2 =
$$ $$
\left[ (x_1-x_0)^2 + (y_1-y_0)^2 \right] . t^2 - 2 .
\left[ (x - x_0)(x_1-x_0) + (y - y_0)(y_1-y_0) \right] . t
$$ $$
+ \left[ (x - x_0)^2 + (y - y_0)^2 \right] = A.t^2 - 2.B.t + C
$$
Where:
$$ \begin{array}{l}
A = (x_1-x_0)^2 + (y_1-y_0)^2 \\
B = (x - x_0)(x_1-x_0) + (y - y_0)(y_1-y_0) \\
C = (x - x_0)^2 + (y - y_0)^2 \end{array}
$$
Write as follows: $A.t^2 - 2.B.t + C =$
$$
A \left[t^2 - 2 \frac{B}{A} t + \left(\frac{B}{A}\right)^2 \right]
- A \left(\frac{B}{A}\right)^2 + C = A \left(t - \frac{B}{A} \right)^2
- \frac{B^2 - A.C}{A}
$$
Simplify with:
$$
X = x - x_0 \qquad
Y = y - y_0 \qquad
X_1 = x_1 - x_0 \qquad
Y_1 = y_1 - y_0
$$
Giving:
$$
B^2 - A.C = \left[(x - x_0)(x_1-x_0) + (y - y_0)(y_1-y_0)\right]^2
$$ $$
- \left[(x_1-x_0)^2 + (y_1-y_0)^2\right]\left[(x - x_0)^2 + (y - y_0)^2\right]
$$ $$
= (X.X_1 + Y.Y_1)^2 - (X_1^2 + Y_1^2)(X^2 + Y^2)
$$ $$
= X^2.X_1^2 + 2.X.X_1.Y.Y_1 + Y^2.Y_1^2
- X_1^2.X^2 - X_1^2.Y^2 - Y_1^2.X^2 - Y_1^2.Y^2
$$ $$
= - X_1^2.Y^2 + 2.X_1.Y.Y_1.X - Y_1^2.X^2 = - \left(X_1.Y - Y_1.X\right)^2
$$ $$
\hieruit - \frac{B^2 - A.C}{A} =
\frac{\left[(x_1-x_0)(y-y_0) - (y_1-y_0)(x-x_0)\right]^2}
{(x_1-x_0)^2 + (y_1-y_0)^2}
$$
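This is essentially a two-dimensional instance of Lagrange's identity, and it can
be verified symbolically; a quick check with sympy, using the abbreviations $X$,
$Y$, $X_1$, $Y_1$ introduced above:

```python
import sympy as sp

X, Y, X1, Y1 = sp.symbols('X Y X1 Y1')

A = X1**2 + Y1**2          # (x1-x0)^2 + (y1-y0)^2
B = X*X1 + Y*Y1            # (x-x0)(x1-x0) + (y-y0)(y1-y0)
C = X**2 + Y**2            # (x-x0)^2 + (y-y0)^2

# B^2 - A.C should reduce to -(X1.Y - Y1.X)^2
print(sp.expand(B**2 - A*C + (X1*Y - Y1*X)**2))    # prints 0
```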
Herewith the exponent becomes: $A.t^2 - 2.B.t + C =$
$$
\left[(x_1-x_0)^2 + (y_1-y_0)^2\right]
\left[t - \frac{(x-x_0)(x_1-x_0) + (y-y_0)(y_1-y_0)}
{(x_1-x_0)^2 + (y_1-y_0)^2}\right]^2
$$ $$
\; + \; \frac{\left[(x_1-x_0)(y-y_0) - (y_1-y_0)(x-x_0)\right]^2}
{(x_1-x_0)^2 + (y_1-y_0)^2}
$$
Define position vectors:
$$
\vec{r} = (x,y) \qquad \vec{r}_0 = (x_0,y_0) \qquad \vec{r}_1 = (x_1,y_1)
$$
Inner products ($\cdot$) and the absolute value of a cross product ($\times$)
may be easily recognized:
$$
(x-x_0)(x_1-x_0) + (y-y_0)(y_1-y_0) =
(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_0)
$$ $$
(x_1-x_0)^2 + (y_1-y_0)^2 =
(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)
$$ $$
| (x_1-x_0)(y-y_0) - (y_1-y_0)(x-x_0) | =
| (\vec{r}_1-\vec{r}_0) \times (\vec{r}-\vec{r}_0) |
$$
Herewith: $A.t^2 - 2.B.t + C =$
$$
(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)
\left[t - \frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_0)}
{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}\right]^2
+ \frac{\left| (\vec{r}_1-\vec{r}_0) \times (\vec{r}-\vec{r}_0)\right|^2}
{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}
$$
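The vector forms are also convenient in code; a small numpy illustration (the
points below are arbitrary example values, not taken from the text):

```python
import numpy as np

r  = np.array([1.5, 0.8])    # (x , y ), an arbitrary point
r0 = np.array([0.0, 0.0])    # (x0, y0)
r1 = np.array([3.0, 1.0])    # (x1, y1)

d1 = r1 - r0                 # r1 - r0
d  = r  - r0                 # r  - r0

A = np.dot(d1, d1)                  # (r1-r0 . r1-r0)
B = np.dot(d1, d)                   # (r1-r0 . r-r0)
cross = d1[0]*d[1] - d1[1]*d[0]     # z-component of (r1-r0) x (r-r0)

print(A, B, abs(cross))             # the quantities entering the completed square
```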
Rewrite:
$$
L(x,y) = \frac{1}{2\pi\sigma^2}
\; D.\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}
\; \int_0^1 e^{-\half\left\{A.t^2 - 2.B.t + C\right\}/\sigma^2} \; dt
$$ $$
= \frac{1}{2\pi\sigma^2}
\; D.\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}
\; . \; e^{-\half\left\{
\left| (\vec{r}_1-\vec{r}_0) \times (\vec{r}-\vec{r}_0)\right|^2
/ (\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0) \right\} / \sigma^2}
$$ $$
\; \int_0^1 e^{-\half\left\{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)
\left[t - (\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_0)
/ (\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)\right]^2 \right\}
/ \sigma^2 } \; dt
$$
Thus, the exponential function splits up into a part which is quite independent
of the running parameter $t$ and another part which is still dependent on it.
Only the latter has to be integrated further, of course.
For that purpose, introduce a new variable $u$:
$$
u = \left( \sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)} \, . \, t
- \frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_0)}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}} \right)
/ \sigma
$$ $$
\hieruit
du = \sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)} \, . \, dt
/ \sigma
\hieruit dt = \frac{\sigma}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}\,du
$$
And the integral becomes:
$$
\frac{\sigma \sqrt{2\pi}}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}
\frac{1}{\sqrt{2\pi}} \int_{u_0}^{u_1} e^{-\half u^2} du
$$
Where:
$$
u_0 = - \frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_0)}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}
/ \sigma
$$
And:
$$
u_1 = \left( \sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}
- \frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_0)}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}} \right)
/ \sigma
$$ $$
= \frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)
- (\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_0)}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}
/ \sigma
= \frac{(\vec{r}_1-\vec{r} \cdot \vec{r}_1-\vec{r}_0)}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}
/ \sigma
$$
Summarizing:
$$
u_0 = - \frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_0)}
{\sigma \sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}} \EN
u_1 = - \frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_1)}
{\sigma \sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}
$$
The integral over a normal distribution can always be expressed as the
difference of two error functions, where the error function ($\erf$) is defined
here as:
$$
\erf(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{ - \half t^2} \; dt
\hieruit \frac{1}{\sqrt{2\pi}} \int_{u_0}^{u_1} e^{ - \half u^2} \; du =
\erf(u_1) - \erf(u_0)
$$
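Note that $\erf$ as defined above is the cumulative distribution function of the
standard normal distribution, not the error function of most numerical libraries.
A hedged Python sketch of the relation with math.erf:

```python
import math

def Erf(x):
    """Erf as defined in the text: the standard normal CDF.
    It relates to the library error function by Erf(x) = (1 + erf(x/sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# the integral of the standard normal density from u0 to u1:
u0, u1 = -0.8, 1.3
print(Erf(u1) - Erf(u0))
```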
Thus, with the values involved, and using the symmetry $\erf(x) = 1 - \erf(-x)$,
the integral to be calculated turns out to be equal to:
$$
\frac{\sigma \sqrt{2\pi}}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}
\left[ \erf(u_1) - \erf(u_0) \right] =
$$ $$
\frac{\sigma \sqrt{2\pi}}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}
\left[ \erf(-u_0) - \erf(-u_1) \right] =
\frac{\sigma \sqrt{2\pi}}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}
$$ $$
\;.\;\left[ \erf\left(
\frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_0)}
{\sigma \sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}\right)
- \erf\left(
\frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_1)}
{\sigma \sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}\right)
\right]
$$
There are two factors in front of the end result, which partly cancel out:
$$
\frac{1}{2\pi\sigma^2} \;
D.\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}
\frac{\sigma \sqrt{2\pi}}
{\sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}
= \frac{D}{\sigma\sqrt{2\pi}}
$$
Therefore the final result must read:
$$
L(x,y) = \frac{D}{\sigma\sqrt{2\pi}}
\; . \; e^{-\half\left\{
\left| (\vec{r}_1-\vec{r}_0) \times (\vec{r}-\vec{r}_0)\right|^2
/ (\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0) \right\} / \sigma^2}
$$ $$
\;.\;\left[ \erf\left(
\frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_0)}
{\sigma \sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}\right)
- \erf\left(
\frac{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}-\vec{r}_1)}
{\sigma \sqrt{(\vec{r}_1-\vec{r}_0 \cdot \vec{r}_1-\vec{r}_0)}}\right)
\right]
$$
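For completeness, a sketch of this final formula in Python, reusing the Erf
helper above; the function name L_closed and the example values are purely
illustrative, and the result should agree with the brute-force L_quad sketched
near the beginning:

```python
import math

def Erf(x):
    # standard normal CDF, as defined in the text
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def L_closed(x, y, x0, y0, x1, y1, sigma, D):
    """Closed-form fuzzyfied line segment L(x,y)."""
    dx1, dy1 = x1 - x0, y1 - y0             # r1 - r0
    dx,  dy  = x  - x0, y  - y0             # r  - r0
    A = dx1*dx1 + dy1*dy1                   # (r1-r0 . r1-r0)
    B = dx1*dx  + dy1*dy                    # (r1-r0 . r-r0)
    cross = dx1*dy - dy1*dx                 # (r1-r0) x (r-r0)
    gauss = math.exp(-0.5 * cross*cross / A / sigma**2)
    sqrtA = math.sqrt(A)
    ua = B / (sigma * sqrtA)                               # argument of the first Erf
    ub = (dx1*(x - x1) + dy1*(y - y1)) / (sigma * sqrtA)   # argument of the second Erf
    return D / (sigma * math.sqrt(2.0 * math.pi)) * gauss * (Erf(ua) - Erf(ub))

# same example point and segment as in the quadrature sketch
print(L_closed(1.5, 0.8, 0.0, 0.0, 3.0, 1.0, sigma=0.5, D=0.1))
```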
Having arrived at the end of the story, let's put everything the other way
around. Instead of fuzzyfying a line segment, leave it as it is. Consider instead
the influence of a fuzzy point in the plane upon an exact line segment, that is:
integrate the bell-shaped function $\exp(-\half(\xi^2+\eta^2)/\sigma^2)$ of the
fuzzy pixel over all points of the line segment. Here $\xi$ and $\eta$ are the
components of the vector joining the pixel to a point of the line segment.
Setting up the mathematics results in:
$$
\frac{1}{\sigma\sqrt{2\pi}} \sqrt{(x_1-x_0)^2 + (y_1-y_0)^2}
$$ $$
. \; \int_0^1
e^{-\half\left\{\left[x_0+(x_1-x_0).t - x\right]^2
+ \left[y_0+(y_1-y_0).t - y\right]^2\right\}/\sigma^2}
\; dt
$$
Apart from a constant factor, this is completely equivalent to working the
other way around. So it makes hardly any difference, if at all, whether a fuzzy
line segment is sensed by an exact pixel or a fuzzy pixel is sensed by
an exact line segment. Thus it makes no difference whether the theory or
the experiment is fuzzyfied, as long as one of the two is fuzzy.
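A quick numerical check of this equivalence; the two computations below differ
exactly by the constant factor $D/(\sigma\sqrt{2\pi})$ (again with the purely
illustrative example values of the earlier sketches):

```python
import math
from scipy.integrate import quad

def segment_integral(x, y, x0, y0, x1, y1, sigma):
    """The common integral over t of the Gaussian along the segment."""
    def integrand(t):
        dx = x0 + (x1 - x0)*t - x
        dy = y0 + (y1 - y0)*t - y
        return math.exp(-0.5 * (dx*dx + dy*dy) / sigma**2)
    return quad(integrand, 0.0, 1.0)[0]

x, y, x0, y0, x1, y1, sigma, D = 1.5, 0.8, 0.0, 0.0, 3.0, 1.0, 0.5, 0.1
length = math.sqrt((x1 - x0)**2 + (y1 - y0)**2)
I = segment_integral(x, y, x0, y0, x1, y1, sigma)

fuzzy_line  = D * length / (2.0*math.pi*sigma**2) * I       # fuzzy segment, exact pixel
fuzzy_pixel = length / (sigma*math.sqrt(2.0*math.pi)) * I   # fuzzy pixel, exact segment

print(fuzzy_line / fuzzy_pixel)                  # equals ...
print(D / (sigma*math.sqrt(2.0*math.pi)))        # ... this constant
```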