
Statistical Description of Images
Example 3.3

Show that if the covariance $c_{ij}$ of two random variables is zero, the two variables are uncorrelated.
Expanding the right-hand side of the definition of the covariance we get:

$$\begin{aligned}
c_{ij} &= E\{f_i f_j - \mu_{f_i} f_j - \mu_{f_j} f_i + \mu_{f_i}\mu_{f_j}\} \\
&= E\{f_i f_j\} - \mu_{f_i} E\{f_j\} - \mu_{f_j} E\{f_i\} + \mu_{f_i}\mu_{f_j} \\
&= E\{f_i f_j\} - \mu_{f_i}\mu_{f_j} - \mu_{f_j}\mu_{f_i} + \mu_{f_i}\mu_{f_j} \\
&= E\{f_i f_j\} - \mu_{f_i}\mu_{f_j}
\end{aligned} \qquad (3.12)$$
Notice that the operation of taking the expectation value of a fixed number has no effect on it; i.e. $E\{\mu_{f_i}\} = \mu_{f_i}$. If $c_{ij} = 0$, we get:

$$E\{f_i f_j\} = \mu_{f_i}\mu_{f_j} = E\{f_i\}E\{f_j\} \qquad (3.13)$$
which shows that $f_i$ and $f_j$ are uncorrelated.
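As a quick numerical check, separate from the book's derivation, identity (3.12) can be verified on simulated die throws. This is only a sketch and assumes NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two independent "die throw" random variables, sampled many times.
fi = rng.integers(1, 7, size=100_000).astype(float)
fj = rng.integers(1, 7, size=100_000).astype(float)

# Covariance straight from the definition...
cov_def = np.mean((fi - fi.mean()) * (fj - fj.mean()))
# ...and via the expanded identity (3.12).
cov_expanded = np.mean(fi * fj) - fi.mean() * fj.mean()

assert np.isclose(cov_def, cov_expanded)  # the two expressions agree
assert abs(cov_def) < 0.05                # independent => uncorrelated
```

Note that the converse implication does not hold in general: zero covariance does not imply that the two variables are independent.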
How do we then define a random field?

If we define a random variable at every point in a 2-dimensional space, we say that we have a 2-dimensional random field. The position in the space where a random variable is defined is like a parameter of the random field:

$$f(\mathbf{r}; \omega_i) \qquad (3.14)$$
This function for fixed $\mathbf{r}$ is a random variable, but for fixed $\omega_i$ (outcome) it is a 2-dimensional function in the plane, an image, say. As $\omega_i$ scans all possible outcomes of the underlying statistical experiment, the random field represents a series of images. On the other hand, for a given outcome (fixed $\omega_i$), the random field gives the grey level values at the various positions in an image.
Example 3.4

Using an unloaded die, we conducted a series of experiments. Each experiment consisted of throwing the die four times. The outcomes $(\omega_1, \omega_2, \omega_3, \omega_4)$ of the sixteen experiments, one experiment per pixel position and listed row by row, are given below:

(2,5,3,1)  (3,1,5,6)  (1,2,1,5)  (3,2,5,4)
(1,2,1,6)  (3,5,2,4)  (3,4,6,6)  (1,1,3,2)
(3,4,4,4)  (2,6,4,2)  (1,5,3,6)  (1,2,6,4)
(6,5,2,4)  (3,2,5,6)  (1,2,4,5)  (5,1,1,6)
Image Processing: The Fundamentals
If $\mathbf{r}$ is a 2-dimensional vector taking values:

$$\{(1,1), (1,2), (1,3), (1,4), (2,1), (2,2), (2,3), (2,4), (3,1), (3,2), (3,3), (3,4), (4,1), (4,2), (4,3), (4,4)\}$$

give the series of images defined by the random field $f(\mathbf{r}; \omega_i)$.
The first image is formed by placing the first outcome of each experiment in the corresponding position, the second by using the second outcome of each experiment, and so on. The ensemble of images we obtain is:
$$\begin{pmatrix}2&3&1&3\\1&3&3&1\\3&2&1&1\\6&3&1&5\end{pmatrix}\;
\begin{pmatrix}5&1&2&2\\2&5&4&1\\4&6&5&2\\5&2&2&1\end{pmatrix}\;
\begin{pmatrix}3&5&1&5\\1&2&6&3\\4&4&3&6\\2&5&4&1\end{pmatrix}\;
\begin{pmatrix}1&6&5&4\\6&4&6&2\\4&2&6&4\\4&6&5&6\end{pmatrix} \qquad (3.15)$$
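The construction described above can be transcribed directly into code (the four images of equation (3.15); NumPy assumed):

```python
import numpy as np

# ensemble[k] is the image built from the (k+1)-th outcome of every experiment.
ensemble = np.array([
    [[2, 3, 1, 3], [1, 3, 3, 1], [3, 2, 1, 1], [6, 3, 1, 5]],  # outcomes w1
    [[5, 1, 2, 2], [2, 5, 4, 1], [4, 6, 5, 2], [5, 2, 2, 1]],  # outcomes w2
    [[3, 5, 1, 5], [1, 2, 6, 3], [4, 4, 3, 6], [2, 5, 4, 1]],  # outcomes w3
    [[1, 6, 5, 4], [6, 4, 6, 2], [4, 2, 6, 4], [4, 6, 5, 6]],  # outcomes w4
])

# Fixing the outcome index gives an image (a 4x4 grey level array)...
image = ensemble[0]
assert image.shape == (4, 4)

# ...while fixing the position r gives a random variable: its realizations
# across the ensemble are the four outcomes of the experiment at r.
print(ensemble[:, 0, 0])  # position (1,1) -> [2 5 3 1]
```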
How can we relate two random variables that appear in the same random field?

For fixed $\mathbf{r}$, a random field becomes a random variable with an expectation value which depends on $\mathbf{r}$:

$$\mu_f(\mathbf{r}) = E\{f(\mathbf{r}; \omega_i)\} \qquad (3.16)$$
Since for different values of $\mathbf{r}$ we have different random variables, $f(\mathbf{r}_1; \omega_i)$ and $f(\mathbf{r}_2; \omega_i)$, we can define their correlation, called autocorrelation (we use "auto" because the two variables come from the same random field), as:

$$R_{ff}(\mathbf{r}_1, \mathbf{r}_2) = E\{f(\mathbf{r}_1; \omega_i) f(\mathbf{r}_2; \omega_i)\} = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} z_1 z_2\, p_f(z_1, z_2; \mathbf{r}_1, \mathbf{r}_2)\, dz_1\, dz_2 \qquad (3.17)$$
The autocovariance $C_{ff}(\mathbf{r}_1, \mathbf{r}_2)$ is defined by:

$$C_{ff}(\mathbf{r}_1, \mathbf{r}_2) = E\{[f(\mathbf{r}_1; \omega_i) - \mu_f(\mathbf{r}_1)][f(\mathbf{r}_2; \omega_i) - \mu_f(\mathbf{r}_2)]\} \qquad (3.18)$$
Example 3.5

Show that:

$$C_{ff}(\mathbf{r}_1, \mathbf{r}_2) = R_{ff}(\mathbf{r}_1, \mathbf{r}_2) - \mu_f(\mathbf{r}_1)\mu_f(\mathbf{r}_2)$$

Starting from equation (3.18) and expanding the product, exactly as in Example 3.3:

$$\begin{aligned}
C_{ff}(\mathbf{r}_1, \mathbf{r}_2) &= E\{f(\mathbf{r}_1;\omega_i)f(\mathbf{r}_2;\omega_i)\} - \mu_f(\mathbf{r}_1)E\{f(\mathbf{r}_2;\omega_i)\} - \mu_f(\mathbf{r}_2)E\{f(\mathbf{r}_1;\omega_i)\} + \mu_f(\mathbf{r}_1)\mu_f(\mathbf{r}_2) \\
&= R_{ff}(\mathbf{r}_1, \mathbf{r}_2) - \mu_f(\mathbf{r}_1)\mu_f(\mathbf{r}_2)
\end{aligned}$$
How can we relate two random variables that belong to two different random fields?

If we have two random fields, i.e. two series of images generated by two different underlying random experiments, represented by $f$ and $g$, we can define their cross correlation:

$$R_{fg}(\mathbf{r}_1, \mathbf{r}_2) = E\{f(\mathbf{r}_1; \omega_i)\, g(\mathbf{r}_2; \omega_j)\}$$

and their cross covariance:

$$C_{fg}(\mathbf{r}_1, \mathbf{r}_2) = E\{[f(\mathbf{r}_1; \omega_i) - \mu_f(\mathbf{r}_1)][g(\mathbf{r}_2; \omega_j) - \mu_g(\mathbf{r}_2)]\}$$

Two random fields are called uncorrelated if for any $\mathbf{r}_1$ and $\mathbf{r}_2$:

$$C_{fg}(\mathbf{r}_1, \mathbf{r}_2) = 0$$

This is equivalent to:

$$E\{f(\mathbf{r}_1; \omega_i)\, g(\mathbf{r}_2; \omega_j)\} = E\{f(\mathbf{r}_1; \omega_i)\}\, E\{g(\mathbf{r}_2; \omega_j)\}$$
Example 3.6

Show that for two uncorrelated random fields we have:

$$E\{f(\mathbf{r}_1; \omega_i)\, g(\mathbf{r}_2; \omega_j)\} = E\{f(\mathbf{r}_1; \omega_i)\}\, E\{g(\mathbf{r}_2; \omega_j)\}$$

It follows trivially from the definition of uncorrelated random fields ($C_{fg}(\mathbf{r}_1, \mathbf{r}_2) = 0$) and the expression:

$$C_{fg}(\mathbf{r}_1, \mathbf{r}_2) = E\{f(\mathbf{r}_1; \omega_i)\, g(\mathbf{r}_2; \omega_j)\} - \mu_f(\mathbf{r}_1)\mu_g(\mathbf{r}_2) \qquad (3.24)$$

which can be proven in a similar way as Example 3.5.
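A small simulation with hypothetical data (not from the book) illustrates this equivalence: for two independently generated random fields the cross covariance is near zero, and the expectation of the product factorizes. NumPy assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
# Two series of 8x8 "images" produced by unrelated experiments:
f = rng.integers(1, 7, size=(n, 8, 8)).astype(float)
g = rng.integers(1, 7, size=(n, 8, 8)).astype(float)

r1, r2 = (2, 3), (5, 1)          # two arbitrary positions
f1 = f[:, r1[0], r1[1]]
g2 = g[:, r2[0], r2[1]]

# Ensemble estimate of the cross covariance C_fg(r1, r2):
c_fg = np.mean((f1 - f1.mean()) * (g2 - g2.mean()))
assert abs(c_fg) < 0.1           # uncorrelated fields
# Equivalent statement: E{f g} = E{f} E{g}
assert abs(np.mean(f1 * g2) - f1.mean() * g2.mean()) < 0.1
```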
Since we always have just one version of an image how do we calculate the
expectation values that appear in all previous definitions?
We make the assumption that the image we have is a
homogeneous
random field and
ergodic.
The theorem of ergodicity which we then invoke allows us to replace the
ensemble statistics with the spatial statistics of an image.
When is a random field homogeneous?

If the expectation value of a random field does not depend on $\mathbf{r}$, and if its autocorrelation function is translation invariant, then the field is called homogeneous. A translation invariant autocorrelation function depends on only one argument, the relative shifting of the positions at which we calculate the values of the random field:

$$R_{ff}(\mathbf{r}_1, \mathbf{r}_2) = R(\mathbf{r}_1 - \mathbf{r}_2) \qquad (3.25)$$
Example 3.7

Show that the autocorrelation function $R(\mathbf{r}_1, \mathbf{r}_2)$ of a homogeneous random field depends only on the difference vector $\mathbf{r}_1 - \mathbf{r}_2$.

The autocorrelation function of a homogeneous random field is translation invariant. Therefore, for any translation vector $\mathbf{r}_0$ we can write:
$$R_{ff}(\mathbf{r}_1, \mathbf{r}_2) = E\{f(\mathbf{r}_1; \omega_i) f(\mathbf{r}_2; \omega_i)\} = E\{f(\mathbf{r}_1 + \mathbf{r}_0; \omega_i) f(\mathbf{r}_2 + \mathbf{r}_0; \omega_i)\} = R_{ff}(\mathbf{r}_1 + \mathbf{r}_0, \mathbf{r}_2 + \mathbf{r}_0) \quad \forall\, \mathbf{r}_0 \qquad (3.26)$$

Choosing $\mathbf{r}_0 = -\mathbf{r}_2$ we obtain $R_{ff}(\mathbf{r}_1, \mathbf{r}_2) = R_{ff}(\mathbf{r}_1 - \mathbf{r}_2, \mathbf{0})$, which depends only on the difference vector $\mathbf{r}_1 - \mathbf{r}_2$.
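A synthetic sketch of this result (hypothetical field, not from the book): for a field whose pixels are independent, identically distributed die throws, and which is therefore homogeneous, ensemble estimates of $R_{ff}$ agree for any two position pairs sharing the same difference vector. NumPy assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
# Ensemble of a homogeneous field: i.i.d. die-throw pixels, so the
# statistics cannot depend on absolute position.
f = rng.integers(1, 7, size=(50_000, 6, 6)).astype(float)

def R(r1, r2):
    """Ensemble estimate of R_ff(r1, r2) = E{f(r1; w) f(r2; w)}."""
    return np.mean(f[:, r1[0], r1[1]] * f[:, r2[0], r2[1]])

# Two position pairs with the same difference vector r1 - r2 = (1, 2)
# give (up to sampling noise) the same autocorrelation value:
assert abs(R((1, 2), (0, 0)) - R((4, 5), (3, 3))) < 0.3
```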
How can we calculate the spatial statistics of a random field?

Given a random field, we can define its spatial average as:

$$\mu(\omega_i) = \lim_{S \to \infty} \frac{1}{S} \iint_S f(\mathbf{r}; \omega_i)\, dx\, dy \qquad (3.28)$$

where $\iint_S$ denotes integration over the whole space, $S$ is its area and $\mathbf{r} = (x, y)$. The result $\mu(\omega_i)$ is clearly a function of the outcome on which $f$ depends; i.e. $\mu(\omega_i)$ is a random variable. The spatial autocorrelation function of the random field is defined as:

$$R(\mathbf{r}_0; \omega_i) = \lim_{S \to \infty} \frac{1}{S} \iint_S f(\mathbf{r}; \omega_i) f(\mathbf{r} + \mathbf{r}_0; \omega_i)\, dx\, dy \qquad (3.29)$$
This is another random variable.
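For a digital image the integral in (3.28) becomes a sum over the pixels, and the spatial autocorrelation becomes an average of products of pixels a fixed displacement apart. A minimal sketch (this version divides by the number of available pairs; Example 3.8 later discusses the alternative of dividing by the total number of pixels; NumPy assumed):

```python
import numpy as np

def spatial_mean(img):
    """Discrete form of (3.28): the average grey value of the image."""
    return img.mean()

def spatial_autocorr(img, dy, dx):
    """Discrete form of (3.29) for displacement r0 = (dy, dx), with
    non-negative dy, dx: the average of f(r) f(r + r0) over all
    positions where both pixels exist."""
    h, w = img.shape
    return (img[:h - dy, :w - dx] * img[dy:, dx:]).mean()

img = np.array([[2, 3, 1, 3],
                [1, 3, 3, 1],
                [3, 2, 1, 1],
                [6, 3, 1, 5]], dtype=float)  # first image of Example 3.4

print(spatial_mean(img))            # -> 2.4375
print(spatial_autocorr(img, 0, 1))  # average over the 12 horizontal pairs
```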
When is a random field ergodic?
A
random field is ergodic when it is ergodic with respect to the mean and with respect
to the autocorrelation function.
When is a random field ergodic with respect to the mean?
A
random field is said to be ergodic with respect to the mean, if it is homogeneous
and its spatial average, defined by
(3.28),
is independent of the outcome on which
f
depends; i.e. it is
a
constant and is equal to the ensemble average defined by equation
(3.16):
$$\lim_{S \to \infty} \frac{1}{S} \iint_S f(\mathbf{r}; \omega_i)\, dx\, dy = \mu = \text{constant} \qquad (3.30)$$
When is a random field ergodic with respect to the autocorrelation
function?
A
random field is said to be ergodic with respect to the autocorrelation function
if it is homogeneous and its spatial autocorrelation function, defined by
(3.29),
is
independent of the outcome of the experiment on which
f
depends, and depends
only on the displacement
ro,
and it is equal to the ensemble autocorrelation function
defined by equation
(3.25):
$$E\{f(\mathbf{r}; \omega_i) f(\mathbf{r} + \mathbf{r}_0; \omega_i)\} = \lim_{S \to \infty} \frac{1}{S} \iint_S f(\mathbf{r}; \omega_i) f(\mathbf{r} + \mathbf{r}_0; \omega_i)\, dx\, dy = R(\mathbf{r}_0) \qquad (3.31)$$
Example 3.8

Assuming ergodicity, compute the autocorrelation matrix of the following image:

A $3 \times 3$ image has the form:

$$\begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix} \qquad (3.32)$$

To compute its autocorrelation function we write it as a column vector by stacking its columns one under the other:

$$\mathbf{g} = (g_{11}\; g_{21}\; g_{31}\; g_{12}\; g_{22}\; g_{32}\; g_{13}\; g_{23}\; g_{33})^T \qquad (3.33)$$
The autocorrelation matrix is given by $C = E\{\mathbf{g}\mathbf{g}^T\}$. Instead of averaging over all possible versions of the image, we average over all pairs of pixels at the same relative position in the image, since ergodicity is assumed. Thus, the autocorrelation matrix will have the following structure:

$$\begin{array}{c|ccccccccc}
 & g_{11} & g_{21} & g_{31} & g_{12} & g_{22} & g_{32} & g_{13} & g_{23} & g_{33} \\ \hline
g_{11} & A & B & C & D & E & F & G & H & I \\
g_{21} & B & A & B & J & D & E & K & G & H \\
g_{31} & C & B & A & L & J & D & M & K & G \\
g_{12} & D & J & L & A & B & C & D & E & F \\
g_{22} & E & D & J & B & A & B & J & D & E \\
g_{32} & F & E & D & C & B & A & L & J & D \\
g_{13} & G & K & M & D & J & L & A & B & C \\
g_{23} & H & G & K & E & D & J & B & A & B \\
g_{33} & I & H & G & F & E & D & C & B & A
\end{array} \qquad (3.34)$$

The top row and the left-most column of this matrix show which elements of the image are associated with which in order to produce the corresponding entry in the matrix.
$A$ is the average square element:

$$A = \frac{1}{9}\sum_{i=1}^{3}\sum_{j=1}^{3} g_{ij}^2 \qquad (3.35)$$
$B$ is the average value of the product of vertical neighbours. We have six such pairs. We must sum the product of their values and divide. The question is whether we must divide by the actual number of pairs of vertical neighbours we have, i.e. 6, or by the total number of pixels we have, i.e. 9. This issue is relevant to the calculation of all entries of matrix (3.34) apart from entry $A$. If we divide by the actual number of pairs, the correlation of the most distant neighbours (for which very few pairs are available) will be exaggerated. Thus, we choose to divide by the total number of pixels in the image, knowing that this dilutes the correlation between distant neighbours, even though it might be significant. This problem arises because of the finite size of the images. Note that formulae (3.29) and (3.28) really apply to images of infinite size. The problem is more significant in the case of this example, which deals with a very small image for which border effects are exaggerated. With this convention, $B = 12/9 \approx 1.33$.
$C$ is the average product of vertical neighbours once removed. We have three such pairs, giving $C = 6/9 \approx 0.67$. $D$ is the average product of horizontal neighbours. There are six such pairs, giving $D = 12/9 \approx 1.33$. $E$ is the average product of diagonal neighbours. There are four such pairs, giving $E = 8/9 \approx 0.89$.
The remaining entries are computed in the same way. For example:

$$F = \frac{4}{9} \approx 0.44, \qquad G = \frac{3}{9} \approx 0.33, \qquad M = \frac{1}{9} \approx 0.11$$
So, the autocorrelation matrix is:

$$C = \begin{pmatrix}
2 & 1.33 & 0.67 & 1.33 & 0.89 & 0.44 & 0.33 & 0.22 & 0.11 \\
1.33 & 2 & 1.33 & 0.89 & 1.33 & 0.89 & 0.22 & 0.33 & 0.22 \\
0.67 & 1.33 & 2 & 0.44 & 0.89 & 1.33 & 0.11 & 0.22 & 0.33 \\
1.33 & 0.89 & 0.44 & 2 & 1.33 & 0.67 & 1.33 & 0.89 & 0.44 \\
0.89 & 1.33 & 0.89 & 1.33 & 2 & 1.33 & 0.89 & 1.33 & 0.89 \\
0.44 & 0.89 & 1.33 & 0.67 & 1.33 & 2 & 0.44 & 0.89 & 1.33 \\
0.33 & 0.22 & 0.11 & 1.33 & 0.89 & 0.44 & 2 & 1.33 & 0.67 \\
0.22 & 0.33 & 0.22 & 0.89 & 1.33 & 0.89 & 1.33 & 2 & 1.33 \\
0.11 & 0.22 & 0.33 & 0.44 & 0.89 & 1.33 & 0.67 & 1.33 & 2
\end{pmatrix}$$
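The pixel values of the image used in this example are not reproduced above, so the following sketch applies the same procedure to a hypothetical $3 \times 3$ image, using the convention discussed in the example of dividing every entry by the total number of pixels. NumPy assumed:

```python
import numpy as np

def autocorr_matrix(img):
    """Spatial estimate of C = E{g g^T} for a small image: entry (a, b)
    sums the products of all pixel pairs with the same relative
    displacement as positions a and b, divided by the total number of
    pixels (the convention chosen in Example 3.8)."""
    h, w = img.shape
    n = img.size
    # Column stacking, as in equation (3.33): g11 g21 g31 g12 ...
    coords = [(i, j) for j in range(w) for i in range(h)]
    C = np.zeros((n, n))
    for a, (ia, ja) in enumerate(coords):
        for b, (ib, jb) in enumerate(coords):
            dy, dx = ib - ia, jb - ja
            total = 0.0
            for i in range(h):
                for j in range(w):
                    if 0 <= i + dy < h and 0 <= j + dx < w:
                        total += img[i, j] * img[i + dy, j + dx]
            C[a, b] = total / n
    return C

img = np.array([[1, 2, 1],
                [2, 1, 2],
                [1, 2, 1]], dtype=float)  # hypothetical image
C = autocorr_matrix(img)
assert np.allclose(C, C.T)                         # same A..M symmetry
assert np.allclose(np.diag(C), (img ** 2).mean())  # every diagonal entry is A
```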
Example 3.9

The following ensemble of eight $4 \times 4$ images is given:

[the listing of the eight images is not legible]

Is this ensemble of images ergodic with respect to the mean? Is it ergodic with respect to the autocorrelation?
It is ergodic with respect to the mean because the average of each image is 4.125, and the average at each pixel position over all eight images is also 4.125.
It is not ergodic with respect to the autocorrelation function. To prove this, let us calculate one element of the autocorrelation matrix, say element $E\{g_{23}g_{34}\}$, which is the average of the products of the pixel values at positions $(2,3)$ and $(3,4)$ over all images:

$$E\{g_{23}g_{34}\} = \frac{4{\times}1 + 4{\times}5 + 4{\times}6 + 6{\times}2 + 6{\times}4 + 2{\times}4 + 2{\times}7 + 5{\times}4}{8} = \frac{4+20+24+12+24+8+14+20}{8} = \frac{126}{8} = 15.75$$
This should be equal to the element of the autocorrelation function which expresses the spatial average of pairs of pixels that are diagonal neighbours in the top-left to bottom-right direction. Consider the last image in the ensemble. We have:

$$\frac{5{\times}4 + 3{\times}5 + 6{\times}2 + 4{\times}2 + 4{\times}3 + 5{\times}4 + 4{\times}5 + 2{\times}4 + 3{\times}4}{16} = \frac{20+15+12+8+12+20+20+8+12}{16} = \frac{127}{16} = 7.9375$$

The two numbers are not the same, and therefore the ensemble is not ergodic with respect to the autocorrelation function.
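The first of the two checks carried out in this example can be sketched as a small function (a simplified finite test with hypothetical toy data, not the book's ensemble; NumPy assumed):

```python
import numpy as np

def mean_ergodic(ensemble, tol=1e-9):
    """Finite check of ergodicity w.r.t. the mean: every image must have the
    same spatial average, and it must equal the ensemble average taken at
    each pixel position."""
    spatial = ensemble.mean(axis=(1, 2))   # one spatial mean per image
    per_pixel = ensemble.mean(axis=0)      # ensemble mean at each position
    return bool(np.allclose(spatial, spatial[0], atol=tol)
                and np.allclose(per_pixel, spatial[0], atol=tol))

# A toy ensemble of two 2x2 images that is ergodic w.r.t. the mean...
ok = np.array([[[1., 3.], [3., 1.]],
               [[3., 1.], [1., 3.]]])
# ...and one that is not: its two images have different spatial means.
bad = np.array([[[1., 1.], [1., 1.]],
                [[3., 3.], [3., 3.]]])
print(mean_ergodic(ok), mean_ergodic(bad))  # -> True False
```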
What is the implication of ergodicity?
If
an ensemble of images is ergodic, then we can calculate its mean and autocorrelation
function by simply calculating spatial averages over
any
image of the ensemble we
happen to have.
For example, suppose that we have a collection of $M$ images of similar type $\{g_1(x, y), g_2(x, y), \ldots, g_M(x, y)\}$. The mean and autocorrelation function of this collection can be calculated by taking averages over all images in the collection. On the
other hand, if we assume ergodicity, we can pick up only one of these images and
calculate the mean and the autocorrelation function from it with the help of spatial
averages. This will be correct if the natural variability of all the different images
is statistically the same as the natural variability exhibited by the contents of each
single image separately.
How can we exploit ergodicity to reduce the number of bits needed for representing an image?
Suppose that we have an ergodic image
g
which we would like to transmit over a
communication channel. We would like the various bits of the image we transmit
to be uncorrelated
so
that we do not duplicate information already transmitted; i.e.
