DIP Reference Notes
The term gray level is often used to refer to the intensity of monochrome images.
Color images are formed by a combination of individual 2-D images.
For example, in the RGB color system a color image consists of three individual component images (red, green, and blue). For this reason, many of the techniques developed for monochrome images can be extended to color images by processing the three component images individually.
An image may be continuous with respect to the x- and y- coordinates and also in
amplitude. Converting such an image to digital form requires that the coordinates, as well as
the amplitude, be digitized.
APPLICATIONS OF DIGITAL IMAGE PROCESSING
Since digital image processing has very wide applications and almost all technical fields are impacted by DIP, we will discuss only some of the major applications:
∙ Medical processing
∙ Earth resources
∙ Geographical mapping
∙ Teleconferencing
∙ Military communications
Medical applications:
∙ Processing of chest X-rays
∙ Cineangiograms
∙ Ultrasonic scanning
The IMAGE PROCESSING TOOLBOX (IPT) is a collection of functions that extend the capability of the MATLAB numeric computing environment.
Color image processing: This area has been gaining importance because of the increasing use of digital images over the internet. Color image processing deals with color models and their implementation in image processing applications.
Wavelets and Multiresolution Processing: These are the foundation for representing images in various degrees of resolution.
Compression: It deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it over a network. It has two major approaches: a) lossless compression and b) lossy compression.
Morphological processing: It deals with tools for extracting image components that are useful in the representation and description of the shape and boundary of objects. It is mainly used in automated inspection applications.
There are three types of computerized processes in the processing of images. 1) Low-level processes - these involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. These processes are characterized by the fact that both inputs and outputs are images.
2) Mid-level image processing - it involves tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, and classification of individual objects. The inputs to the process are generally images, but the outputs are attributes extracted from images.
3) High level processing – It involves “making sense” of an ensemble of recognized objects,
as in image analysis, and performing the cognitive functions normally associated with vision.
Representing Digital Images:
The result of sampling and quantization is a matrix of real numbers. Assume that an image f(x,y) is sampled so that the resulting digital image has M rows and N columns. The values of the coordinates (x,y) now become discrete quantities; thus the value of the coordinates at the origin becomes (x,y) = (0,0). The next coordinate value along the first row is (x,y) = (0,1). The notation (0,1) is used to signify the second sample along the first row; it does not mean that these are the actual values of the physical coordinates when the image was sampled.
Each element of this matrix array is called an image element, picture element, pixel, or pel. The matrix can be represented in the following form as well. The sampling process may be viewed as partitioning the xy-plane into a grid, with the coordinates of the center of each grid cell being a pair of elements from the Cartesian product Z², which is the set of all ordered pairs of elements (zi, zj) with zi and zj being integers from Z. Hence f(x,y) is a digital image if (x,y) are integers from Z² and f is a function that assigns a gray-level value to each distinct pair of coordinates.
Then, the number, b, of bits required to store a digital image is b = M × N × k. When M = N, this equation becomes b = N² × k.
When an image can have 2^k gray levels, it is referred to as a "k-bit image". An image with 256 possible gray levels is called an "8-bit image" (256 = 2^8).
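As a quick worked example of this storage formula (a minimal Python sketch; the image sizes are hypothetical), an 8-bit 1024 × 1024 image requires b = 1024 × 1024 × 8 bits:

def image_storage_bits(M, N, k):
    # b = M * N * k bits; for a square image (M == N) this is N**2 * k
    return M * N * k

bits = image_storage_bits(1024, 1024, 8)
print(bits, "bits =", bits // 8, "bytes")  # 8388608 bits = 1048576 bytes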
In order to generate a 2-D image using a single sensor, there has to be relative displacements
in both the x- and y-directions between the sensor and the area to be imaged. Figure shows an
arrangement used in high-precision scanning, where a film negative is mounted onto a drum
whose mechanical rotation provides displacement in one dimension. The single sensor is
mounted on a lead screw that provides motion in the perpendicular direction. Since
mechanical motion can be controlled with high precision, this method is an inexpensive (but
slow) way to obtain high-resolution images. Other similar mechanical arrangements use a flat
bed, with the sensor moving in two linear directions. These types of mechanical digitizers
sometimes are referred to as microdensitometers.
Image Acquisition Using Sensor Strips:
A geometry that is used much more frequently than single sensors consists of an in-line arrangement of sensors in the form of a sensor strip, as the figure shows. The strip provides imaging
elements in one direction. Motion perpendicular to the strip provides imaging in the other
direction. This is the type of arrangement used in most flat bed scanners. Sensing devices
with 4000 or more in-line sensors are possible. In-line sensors are used routinely in airborne
imaging applications, in which the imaging system is mounted on an aircraft that flies at a
constant altitude and speed over the geographical area to be imaged. One dimensional
imaging sensor strips that respond to various bands of the electromagnetic spectrum are
mounted perpendicular to the direction of flight. The imaging strip gives one line of an image
at a time, and the motion of the strip completes the other dimension of a two-dimensional
image. Lenses or other focusing schemes are used to project the area to be scanned onto the
sensors. Sensor strips mounted in a ring configuration are used in medical and industrial
imaging to obtain cross-sectional (“slice”) images of 3-D objects.
Typical examples of digital images include:
∙ digital photographs
∙ satellite images
Computer graphics, CAD drawings, and vector graphics in general are not considered in this
course even though their reproduction is a possible source of an image. In fact, one goal of
intermediate level image processing may be to reconstruct a model (e.g. vector
representation) for a given digital image.
RELATIONSHIP BETWEEN PIXELS:
We consider several important relationships between pixels in a digital image.
NEIGHBORS OF A PIXEL
• A pixel p at coordinates (x,y) has four horizontal and vertical neighbors whose
coordinates are given by:
(x+1,y), (x-1, y), (x, y+1), (x,y-1)
This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is a unit distance from (x,y), and some of the neighbors of p lie outside the digital image if (x,y) is on the border of the image. The four diagonal neighbors of p have coordinates
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
and are denoted by ND(p).
These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted
by N8 (p).
As before, some of the points in ND (p) and N8 (p) fall outside the image if (x,y) is on
the border of the image.
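The neighbor sets above translate directly into code. A minimal Python sketch (not part of the original notes; border clipping is ignored here):

def N4(x, y):
    # 4-neighbors of pixel p at (x, y)
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def ND(x, y):
    # diagonal neighbors of p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def N8(x, y):
    # 8-neighbors of p: union of N4 and ND
    return N4(x, y) | ND(x, y)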
ADJACENCY AND CONNECTIVITY
Let V be the set of gray-level values used to define adjacency; in a binary image, V = {1}. In a gray-scale image, the idea is the same, but V typically contains more elements, for example, V = {180, 181, 182, …, 200}.
If the possible intensity values are 0 to 255, V can be any subset of these 256 values that defines the adjacency of pixels we are interested in.
Three types of adjacency:
∙ 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
∙ 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
∙ m-adjacency: two pixels p and q with values from V are m-adjacent if (i) q is in N4(p), or (ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
• Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the
ambiguities that often arise when 8-adjacency is used.
• For example:
Fig:1.8(a) Arrangement of pixels; (b) pixels that are 8-adjacent (shown dashed) to the
center pixel; (c) m-adjacency.
Types of Adjacency:
• In this example, we can note that to connect between two pixels (finding a path between
two pixels):
– In 8-adjacency, you can find multiple paths between two pixels.
– In m-adjacency, you can find only one path between two pixels.
• So, m-adjacency eliminates the multiple-path connections that arise with 8-adjacency.
• Two subsets S1 and S2 are adjacent, if some pixel in S1 is adjacent to some pixel in S2.
Adjacent means, either 4-, 8- or m-adjacency.
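The three adjacency tests can be sketched in Python using the N4/ND/N8 helpers from the earlier sketch. This is illustrative only, assuming img is a 2-D numpy integer array indexed as img[x, y], V is a set of qualifying values, and p, q are interior pixels (no bounds checking):

def adjacent4(p, q, img, V):
    return img[p] in V and img[q] in V and q in N4(*p)

def adjacent8(p, q, img, V):
    return img[p] in V and img[q] in V and q in N8(*p)

def adjacentM(p, q, img, V):
    # m-adjacent: 4-adjacent, or diagonal with no shared 4-neighbor in V
    if img[p] not in V or img[q] not in V:
        return False
    if q in N4(*p):
        return True
    shared = N4(*p) & N4(*q)
    return q in ND(*p) and not any(img[r] in V for r in shared)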
A Digital Path:
• A digital path (or curve) from pixel p with coordinate (x,y) to pixel q with coordinate (s,t) is
a sequence of distinct pixels with coordinates (x0,y0), (x1,y1), …, (xn, yn) where (x0,y0) = (x,y)
and (xn, yn) = (s,t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
• n is the length of the path.
• If (x0,y0) = (xn, yn), the path is closed.
We can specify 4-, 8-, or m-paths depending on the type of adjacency specified.
• Return to the previous example:
Fig:1.8 (a) Arrangement of pixels; (b) pixels that are 8-adjacent(shown dashed) to the
center pixel; (c) m-adjacency.
In figure (b) the paths between the top-right and bottom-right pixels are 8-paths, while the path between the same two pixels in figure (c) is an m-path.
Connectivity:
• Let S represent a subset of pixels in an image, two pixels p and q are said to be
connected in S if there exists a path between them consisting entirely of pixels in S.
• The Euclidean distance between p = (x,y) and q = (s,t) is defined as De(p,q) = [(x - s)² + (y - t)²]^(1/2). Pixels having a distance less than or equal to some value r from (x,y) are the points contained in a disk of radius r centered at (x,y).
• The D4 distance (also called city-block distance) between p and q is defined as:
D4 (p,q) = | x – s | + | y – t |
Example:
The pixels with distance D4 ≤ 2 from (x,y) form the following contours of constant distance:
        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2
The pixels with D4 = 1 are the 4-neighbors of (x,y).
• The D8 distance (also called chessboard distance) between p and q is defined as:
D8 (p,q) = max(| x – s |,| y – t |)
Pixels having a D8 distance from (x,y) less than or equal to some value r form a square centered at (x,y).
Example:
The pixels with D8 distance ≤ 2 from (x,y) form the following contours of constant distance:
    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2
• Dm distance:
It is defined as the shortest m-path between the points. In the standard example, assume that pixels p, p2, and p4 have value 1, while p1 and p3 may be 0 or 1 (V = {1}).
Case 1: If p1 = 0 and p3 = 0, the length of the shortest m-path (the Dm distance) between p and p4 is 2 (p, p2, p4).
Case 2: If p1 = 1 and p3 = 0, then p and p2 are no longer m-adjacent (see the m-adjacency definition), and the length of the shortest m-path becomes 3 (p, p1, p2, p4).
Case 3: If p1 = 0 and p3 = 1, the same reasoning applies, and the shortest m-path is again of length 3 (p, p2, p3, p4).
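The D4 and D8 distances are simple to compute; a short Python sketch (not from the notes), with p = (x, y) and q = (s, t):

def D4(p, q):
    # city-block distance |x - s| + |y - t|
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def D8(p, q):
    # chessboard distance max(|x - s|, |y - t|)
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

print(D4((0, 0), (2, 1)), D8((0, 0), (2, 1)))  # 3 2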
Image enhancement approaches fall into two broad categories: spatial domain
methods and frequency domain methods. The term spatial domain refers to the image plane
itself, and approaches in this category are based on direct manipulation of pixels in an image.
Frequency domain processing techniques are based on modifying the Fourier
transform of an image. Enhancing an image provides better contrast and a more detailed image compared to the non-enhanced image. Image enhancement has many useful applications: it is used to enhance medical images, images captured in remote sensing, satellite images, etc. As indicated previously, the term spatial domain refers to the aggregate of pixels composing an image.
Spatial domain methods are procedures that operate directly on these pixels. Spatial domain
processes will be denoted by the expression.
g(x,y) = T[f(x,y)]
where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator
on f, defined over some neighborhood of (x, y). The principal approach in defining a
neighborhood about a point (x, y) is to use a square or rectangular subimage area centered at
(x, y), as Fig. 2.1 shows. The center of the subimage is moved from pixel to pixel starting,
say, at the top left corner. The operator T is applied at each location (x, y) to yield the output,
g, at that location. The process utilizes only the pixels in the area of the image spanned by the
neighborhood.
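As an illustration (a minimal sketch, not the notes' own code), the following Python function applies one concrete choice of T, a 3 × 3 neighborhood mean, at every location; borders are handled here by edge replication, one of several common choices:

import numpy as np

def apply_T(f):
    # g(x, y) = T[f(x, y)]: here T is the mean over a 3 x 3 neighborhood
    fp = np.pad(f.astype(float), 1, mode="edge")  # replicate border pixels
    g = np.zeros(f.shape)
    M, N = f.shape
    for x in range(M):
        for y in range(N):
            g[x, y] = fp[x:x + 3, y:y + 3].mean()
    return g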
∙ Image negative
∙ Log transformations
∙ Power-law transformations
∙ Piecewise-linear transformation functions
LINEAR TRANSFORMATION:
First we will look at the linear transformation. Linear transformation includes simple
identity and negative transformation. Identity transformation has been discussed in our
tutorial of image transformation, but a brief description of this transformation has been given
here.
Identity transition is shown by a straight line. In this transition, each value of the input image is directly mapped to the same value of the output image. That results in the output image being identical to the input image, and hence it is called the identity transformation. It has been shown below:
Fig. Linear transformation between input and output.
NEGATIVE TRANSFORMATION:
The second linear transformation is the negative transformation, which is the inverse of the identity transformation. In the negative transformation, each value of the input image is subtracted from L-1 and mapped onto the output image.
IMAGE NEGATIVE: The image negative with gray level values in the range [0, L-1] is obtained by the negative transformation given by s = T(r), or
s = L-1 - r
where r = gray level value at pixel (x,y) and L is the largest gray level in the image.
It results in a photographic negative. It is useful for enhancing white details embedded in dark regions of the image.
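A one-line Python sketch of the negative transformation (assuming an 8-bit image, L = 256):

import numpy as np

def negative(f, L=256):
    # s = (L - 1) - r, applied element-wise
    return (L - 1) - f

f = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(negative(f))  # [[255 191] [127   0]]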
The overall graph of these transitions has been shown below.
Fig. Some basic gray-level transformation functions used for image enhancement (negative, log, nth root, nth power).
In this case the following transition has been done.
s = (L – 1) – r
Since the input image of Einstein is an 8 bpp image, the number of levels in this image is 256. Putting L = 256 in the equation, we get
s = 255 - r
So each input value is subtracted from 255, and the resulting image has been shown above. What happens is that the lighter pixels become dark and the darker pixels become light, resulting in the image negative.
It has been shown in the graph below.
Fig. Negative transformations.
LOGARITHMIC TRANSFORMATIONS:
Logarithmic transformation further contains two types of transformations: log transformation and inverse log transformation.
LOG TRANSFORMATIONS:
The log transformations can be defined by this formula
s = c log(r + 1).
Where s and r are the pixel values of the output and input images, and c is a constant. The value 1 is added to each pixel value of the input image because if there is a pixel intensity of 0 in the image, then log(0) is undefined. So 1 is added to make the minimum value at least 1.
During log transformation, the dark pixels in an image are expanded relative to the higher pixel values, which are compressed. This results in the following image enhancement.
Another way of stating the effect of the LOG TRANSFORMATION: it enhances details in the darker regions of an image at the expense of detail in brighter regions.
s = c * log(1 + r)
∙ Here c is a constant and r ≥ 0.
∙ The shape of the curve shows that this transformation maps a narrow range of low gray-level values in the input image into a wider range in the output image.
∙ The opposite is true for high-level values of the input image.
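A Python sketch of the log transformation. The scaling constant c = (L - 1) / log(L) is an assumption chosen so the output spans the full [0, L-1] range; any positive c works:

import numpy as np

def log_transform(f, L=256):
    c = (L - 1) / np.log(L)            # scale output to [0, L-1]
    s = c * np.log1p(f.astype(float))  # s = c * log(1 + r), stable at r = 0
    return s.astype(np.uint8)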
Fig. 2.13 Plot of the equation s = c·r^γ for various values of γ (c = 1 in all cases). This type of transformation is used for enhancing images for different types of display devices. The gamma of different display devices is different. For example, the gamma of a CRT lies between 1.8 and 2.5, which means the image displayed on a CRT is dark.
Varying gamma (γ) obtains a family of possible transformation curves s = c·r^γ, where c and γ are positive constants. A plot of s versus r for various values of γ is shown in the figure.
γ > 1 compresses dark values and expands bright values.
γ < 1 (similar to the log transformation) expands dark values and compresses bright values.
When c = γ = 1, it reduces to the identity transformation.
CORRECTING GAMMA:
s = c·r^γ
s = c·r^(1/2.5)
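A Python sketch of gamma correction on a normalized image; γ = 1/2.5 compensates a display gamma of 2.5, as in the text:

import numpy as np

def gamma_correct(f, gamma, c=1.0, L=256):
    r = f.astype(float) / (L - 1)   # normalize r to [0, 1]
    s = c * np.power(r, gamma)      # s = c * r**gamma
    return (s * (L - 1)).astype(np.uint8)

# corrected = gamma_correct(img, 1 / 2.5)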
The same image but with different gamma values has been shown here.
Piecewise-Linear Transformation Functions:
A complementary approach to the methods discussed in the previous three sections is
to use piecewise linear functions. The principal advantage of piecewise linear functions over
the types of functions we have discussed thus far is that the form of piecewise functions can
be arbitrarily complex.
The principal disadvantage of piecewise functions is that their specification requires
considerably more user input.
Contrast stretching: One of the simplest piecewise linear functions is a contrast-stretching transformation. Low-contrast images can result from poor illumination, lack of dynamic range in the imaging sensor, or even the wrong setting of a lens aperture during image acquisition.
Fig. x Contrast stretching. (a) Form of transformation function. (b) A low-contrast image. (c) Result of contrast stretching. (d) Result of thresholding. (Original image courtesy of Dr. Roger Heady, Research School of Biological Sciences, Australian National University, Canberra, Australia.)
Figure x(b) shows an 8-bit image with low contrast. Fig. x(c) shows the result of contrast stretching, obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].
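A Python sketch of this full-range contrast stretch, mapping (rmin, rmax) linearly onto [0, L-1] (it assumes the image is not constant, so rmax > rmin):

import numpy as np

def stretch(f, L=256):
    r = f.astype(float)
    rmin, rmax = r.min(), r.max()
    s = (r - rmin) / (rmax - rmin) * (L - 1)  # linear stretch to [0, L-1]
    return s.astype(np.uint8)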
Fig. y (a)This transformation highlights range [A,B] of gray levels and reduces all others to a
constant level (b) This transformation highlights range [A,B] but preserves all other levels.
(c) An image . (d) Result of using the transformation in (a).
BIT-PLANE SLICING:
Instead of highlighting gray-level ranges, highlighting the contribution made to total
image appearance by specific bits might be desired. Suppose that each pixel in an image is
represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from
bit-plane 0 for the least significant bit to bit-plane 7 for the most significant bit. In terms of 8-bit bytes, plane 0 contains all the lowest-order bits of the bytes comprising the pixels in the image, and plane 7 contains all the high-order bits.
In terms of bit-plane extraction for an 8-bit image, it is not difficult to show that the (binary) image for bit-plane 7 can be obtained by processing the input image with a thresholding gray-level transformation function that (1) maps all levels between 0 and 127 to one level (for example, 0); and (2) maps all levels between 128 and 255 to another (for example, 255). The binary image for bit-plane 7 in Fig. 3.14 was obtained in just this manner. It is left
as an exercise
(Problem 3.3) to obtain the gray-level transformation functions that would yield the other bit
planes.
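Bit-plane extraction reduces to a shift and a mask; a short Python sketch for an 8-bit image:

import numpy as np

def bit_plane(f, k):
    # binary image of bit-plane k (0 = least significant, 7 = most)
    return (f >> k) & 1

# Plane 7 equals thresholding at 128, as described above:
f = np.arange(256, dtype=np.uint8)
assert np.array_equal(bit_plane(f, 7), (f >= 128).astype(np.uint8))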
Histogram Processing:
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function of the form
h(rk) = nk
where rk is the kth gray level and nk is the number of pixels in the image having the level rk. A normalized histogram is given by the equation
p(rk) = nk/n for k = 0, 1, 2, ….., L-1
p(rk) gives an estimate of the probability of occurrence of gray level rk. The sum of all components of a normalized histogram is equal to 1.
The histogram plots are simple plots of h(rk) = nk versus rk.
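A Python sketch of the histogram and its normalized form:

import numpy as np

def histogram(f, L=256):
    h = np.bincount(f.ravel(), minlength=L)  # h(rk) = nk
    p = h / f.size                           # p(rk) = nk / n, sums to 1
    return h, p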
Histogram Equalization:
Histogram equalization is a common technique for enhancing the appearance of images.
Suppose we have an image which is predominantly dark. Then its histogram would be skewed toward the lower end of the gray scale, with all the image detail compressed into the dark end of the histogram.
The transformation function is assumed to fulfill two conditions: T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1, and 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1. The transformation function should be single-valued so that the inverse transformation exists. The monotonically increasing condition preserves the increasing order from black to white in the output image. The second condition guarantees that the output gray levels will be in the same range as the input levels. The gray levels of the image may be viewed as random variables in the interval [0, 1]. The most fundamental descriptor of a random variable is its probability density function (PDF). Let pr(r) and ps(s) denote the probability density functions of the random variables r and s, respectively. A basic result from elementary probability theory states that if pr(r) and T(r) are known and T^(-1)(s) satisfies condition (a), then the probability density function ps(s) of the transformed variable is given by the formula
ps(s) = pr(r) |dr/ds|
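A Python sketch of discrete histogram equalization, using the scaled cumulative distribution sk = (L-1) · Σ_{j≤k} p(rj) as the mapping T(r):

import numpy as np

def equalize(f, L=256):
    h = np.bincount(f.ravel(), minlength=L)
    cdf = np.cumsum(h) / f.size                   # cumulative p(rk)
    T = np.round((L - 1) * cdf).astype(np.uint8)  # sk = (L-1) * CDF(rk)
    return T[f]                                   # remap every pixel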
Wraparound error in the circular convolution of the two functions can be avoided by padding them with zeros, where P and Q are the padded sizes from the basic equations (for M×N images, typically P = 2M and Q = 2N).
VISUALIZATION: IDEAL LOW PASS FILTER:
As shown in the fig. below.
Fig: ideal low pass filter 3-D view and 2-D view and line graph.
Fig: (a) Original image; (b)-(f) results of filtering using ILPFs with cutoff frequencies set at radii values 10, 30, 60, 160, and 460, as shown in fig.2.2.2(b). The power removed by these filters was 13, 6.9, 4.3, 2.2, and 0.8% of the total, respectively.
The severe blurring in this image is a clear indication that most of the sharp detail information in the picture is contained in the 13% of the power removed by the filter. As the filter radius increases, less and less power is removed, resulting in less blurring. Figs. (c) through (e) are characterized by "ringing", which becomes finer in texture as the amount of high-frequency content removed decreases.
WHY IS THERE RINGING?
The ideal low-pass filter function is a rectangular function, and the inverse Fourier transform of a rectangular function is a sinc function.
Fig. Spatial representation of ILPFs of order 1 and 20 and corresponding intensity profiles through the centers of the filters (the size in all cases is 1000×1000 and the cutoff frequency is 5); observe how ringing increases as a function of filter order.
BUTTERWORTH LOW-PASS FILTER:
The transfer function of a Butterworth low-pass filter (BLPF) of order n, with cutoff frequency at a distance D0 from the origin, is defined as
H(u,v) = 1 / [1 + (D(u,v)/D0)^(2n)]
The transfer function does not have a sharp discontinuity establishing the cutoff between passed and filtered frequencies.
The cutoff frequency D0 defines the point at which H(u,v) = 0.5.
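For comparison, here is a Python sketch of the three low-pass transfer functions on a centered P × Q frequency grid (the Gaussian low-pass filter, GLPF, is the one used in the figures that follow):

import numpy as np

def dist_grid(P, Q):
    # D(u, v): distance from the center of the P x Q frequency rectangle
    u = np.arange(P) - P // 2
    v = np.arange(Q) - Q // 2
    U, V = np.meshgrid(u, v, indexing="ij")
    return np.sqrt(U**2 + V**2)

def ilpf(P, Q, D0):
    return (dist_grid(P, Q) <= D0).astype(float)             # ideal

def blpf(P, Q, D0, n):
    return 1.0 / (1.0 + (dist_grid(P, Q) / D0) ** (2 * n))   # Butterworth

def glpf(P, Q, D0):
    D = dist_grid(P, Q)
    return np.exp(-(D**2) / (2 * D0**2))                     # Gaussian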
Fig. (a) Original image. (b)-(f) Results of filtering using BLPFs of order 2, with cutoff frequencies at the radii shown in fig.2.2.2.
Fig. (a) Original image. (b)-(f) Results of filtering using GLPFs with cutoff frequencies at the radii shown in fig.2.2.2. Compare with fig.2.2.3 and fig.2.2.6.
Fig. (a) Original image (784×732 pixels). (b) Result of filtering using a GLPF with D0 = 100. (c) Result of filtering using a GLPF with D0 = 80. Note the reduction in fine skin lines in the magnified sections in (b) and (c).
The figure shows an application of low-pass filtering for producing a smoother, softer-looking result from a sharp original. For human faces, the typical objective is to reduce the sharpness of fine skin lines and small blemishes.
IMAGE SHARPENING USING FREQUENCY DOMAIN FILTERS: An image can be
smoothed by attenuating the high-frequency components of its Fourier transform. Because
edges and other abrupt changes in intensities are associated with high-frequency components,
image sharpening can be achieved in the frequency domain by high pass filtering, which
attenuates the low-frequency components without disturbing high frequency information in
the Fourier transform.
The filter functions H(u,v) are understood to be discrete functions of size P×Q; that is, the discrete frequency variables are in the range u = 0, 1, 2, …, P-1 and v = 0, 1, 2, …, Q-1.
The meaning of sharpening is:
∙ Edges and fine detail are characterized by sharp transitions in image intensity.
∙ Such transitions contribute significantly to the high-frequency components of the Fourier transform.
Fig: Top row: perspective plot, image representation, and cross section of a typical ideal high-pass filter. Middle and bottom rows: the same sequence for typical Butterworth and Gaussian high-pass filters.
IDEAL HIGH-PASS FILTER:
A 2-D ideal high-pass filter (IHPF) is defined as
H(u,v) = 0, if D(u,v) ≤ D0
H(u,v) = 1, if D(u,v) > D0
Fig. Spatial representation of typical (a) ideal, (b) Butterworth, and (c) Gaussian frequency domain high-pass filters, and corresponding intensity profiles through their centers.
We can expect IHPFs to have the same ringing properties as ILPFs. This is demonstrated clearly in the figure, which consists of various IHPF results using the original image in Fig.(a) with D0 set to 30, 60, and 160 pixels, respectively. The ringing in Fig.(a) is so severe that it produced distorted, thickened object boundaries (e.g., look at the large letter "a"). Edges of
the top three circles do not show well because they are not as strong as the other edges in the
image (the intensity of these three objects is much closer to the background intensity, giving
discontinuities of smaller magnitude).
FILTERED RESULTS: IHPF:
Fig. Results of high-pass filtering the image in Fig.(a) using an IHPF with D0 = 30, 60, and 160.
BUTTERWORTH HIGH-PASS FILTER:
A 2-D Butterworth high-pass filter (BHPF) of order n and cutoff frequency D0 is defined as
H(u,v) = 1 / [1 + (D0/D(u,v))^(2n)]
where D(u,v) is given by Eq.(3). This expression follows directly from Eqs.(3) and (6). The middle row of Fig.2.2.11 shows an image and cross section of the BHPF function. Butterworth high-pass filters behave smoother than IHPFs. Fig.2.2.14 shows the performance of a BHPF of order 2 with D0 set to the same values as in Fig.2.2.13. The boundaries are much less distorted than in Fig.2.2.13, even for the smallest value of the cutoff frequency.
FILTERED RESULTS: BHPF:
Fig.
GAUSSIAN HIGH-PASS FILTER:
A 2-D Gaussian high-pass filter (GHPF) with cutoff frequency D0 is defined as
H(u,v) = 1 - e^(-D²(u,v)/2D0²)
where D(u,v) is given by Eq.(4). This expression follows directly from Eqs.(2) and (6). The third row in Fig.2.2.11 shows a perspective plot, image, and cross section of the GHPF function. Following the same format as for the BHPF, we show in Fig.2.2.15 comparable results using GHPFs. As expected, the results obtained are more gradual than with the previous two filters.
FILTERED RESULTS: GHPF:
Fig. Results of high-pass filtering the image in fig.(a) using a GHPF with D0 = 30, 60, and 160, corresponding to the circles in Fig.(b).
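Each high-pass transfer function is simply 1 minus its low-pass counterpart; a sketch reusing the ilpf/blpf/glpf helpers from the earlier low-pass sketch:

def ihpf(P, Q, D0):
    return 1.0 - ilpf(P, Q, D0)

def bhpf(P, Q, D0, n):
    return 1.0 - blpf(P, Q, D0, n)  # = 1 / (1 + (D0 / D)**(2n))

def ghpf(P, Q, D0):
    return 1.0 - glpf(P, Q, D0)     # = 1 - exp(-D^2 / (2 * D0^2))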
IMAGE RESTORATION:
Restoration improves an image in some predefined sense. It is an objective process.
Restoration attempts to reconstruct an image that has been degraded by using a priori
knowledge of the degradation phenomenon. These techniques are oriented toward
modeling the degradation and then applying the inverse process in order to recover the
original image. Restoration techniques are based on mathematical or probabilistic
models of image processing. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result. Image
Restoration refers to a class of methods that aim to remove or reduce the degradations
that have occurred while the digital image was being obtained. All natural images when
displayed have gone through some sort of degradation:
∙ Acquisition mode (e.g., sensor noise), or
∙ Processing mode
∙ Others
Degradation Model:
The degradation process operates with a degradation function that, together with an additive noise term, acts on an input image f(x,y) to produce a degraded image g(x,y). The noise term is represented as η(x,y). Given g(x,y), some knowledge about the degradation function H, and some knowledge about the additive noise term η(x,y), the objective of restoration is to obtain an estimate f'(x,y) of the original image. We want the estimate to be as close as possible to the original image. The more we know about H and η, the closer f'(x,y) will be to f(x,y). If H is a linear, position-invariant process, then the degraded image is given in the spatial domain by
g(x,y) = f(x,y)*h(x,y) + η(x,y)
and equivalently in the frequency domain by G(u,v) = F(u,v)H(u,v) + N(u,v).
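A Python sketch of this degradation model, applying an assumed blur kernel h via the frequency domain and adding Gaussian noise (both the kernel and the noise model are illustrative choices):

import numpy as np

def degrade(f, h, sigma):
    F = np.fft.fft2(f)
    H = np.fft.fft2(h, s=f.shape)                # zero-padded kernel transform
    eta = np.random.normal(0.0, sigma, f.shape)  # additive Gaussian noise
    return np.real(np.fft.ifft2(F * H)) + eta    # g = f*h + eta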
Where z represents the gray level, μ = the mean or average value of z, and σ = the standard deviation. Its shape is similar to the Rayleigh distribution. This equation is referred to as the gamma density; it is correct only when the denominator is the gamma function.
(iv) Exponential Noise:
The exponential distribution has an exponential shape. The PDF of exponential noise is given as
p(z) = a·e^(-az) for z ≥ 0, and p(z) = 0 for z < 0,
where a > 0. The mean and variance of this density are μ = 1/a and σ² = 1/a².
(v) Uniform Noise:
The PDF of uniform noise is given by
p(z) = 1/(b-a) for a ≤ z ≤ b, and p(z) = 0 otherwise.
The mean and variance of this density are μ = (a+b)/2 and σ² = (b-a)²/12.
(a) Arithmetic Mean filter:
This operation can be implemented using a convolution mask in which all coefficients have value 1/mn. A mean filter smoothes local variations in an image, and noise is reduced as a result of blurring. For every pixel in the image, the pixel value is replaced by the mean value of its neighboring pixels, which results in a smoothing effect in the image.
(b) Geometric Mean filter:
An image restored using a geometric mean filter is given by the expression shown, where each restored pixel is given by the product of the pixels in the subimage window, raised to the power 1/mn. A geometric mean filter achieves smoothing comparable to the arithmetic mean filter, but it tends to lose less image detail in the process.
(c) Harmonic Mean filter:
The harmonic mean filtering operation is given by the expression
The harmonic mean filter works well for salt noise but fails for pepper noise. It does
well with Gaussian noise also.
(d) Order statistics filters:
Order statistics filters are spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter. The response of the filter at any point is determined by the ranking result. The best-known filter in this category is the median filter, which replaces the value of a pixel by the median of the gray levels in its neighborhood; the original value of the pixel is included in the computation of the median. Median filters are quite popular because, for certain types of random noise, they provide excellent noise reduction with considerably less blurring than linear smoothing filters of similar size. They are effective for bipolar and unipolar impulse noise.
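A Python sketch of a 3 × 3 arithmetic mean filter and a 3 × 3 median filter, with edge-replicated borders (one common border-handling choice):

import numpy as np

def _windows(f):
    # yield each pixel position together with its 3 x 3 neighborhood
    fp = np.pad(f.astype(float), 1, mode="edge")
    M, N = f.shape
    for x in range(M):
        for y in range(N):
            yield x, y, fp[x:x + 3, y:y + 3]

def mean_filter(f):
    g = np.empty(f.shape)
    for x, y, w in _windows(f):
        g[x, y] = w.mean()        # average over the neighborhood
    return g

def median_filter(f):
    g = np.empty(f.shape)
    for x, y, w in _windows(f):
        g[x, y] = np.median(w)    # ranking-based response
    return g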
(e) Max and Min filters:
Using the 100th percentile of a ranked set of numbers is called the max filter, given by the equation shown. It is used for finding the brightest points in an image. Because pepper noise has very low values, it is reduced by the max filter through the max selection process in the subimage area. The 0th percentile filter is the min filter.
This filter is useful for finding the darkest points in an image. Also, it reduces salt noise as a result of the min operation.
(f) Midpoint filter:
The midpoint filter simply computes the midpoint between the maximum and minimum values in the area encompassed by the filter. It combines order statistics and averaging. This filter works best for randomly distributed noise like Gaussian or uniform noise.
Periodic Noise Reduction by Frequency Domain Filtering:
The following types of filters are used for this purpose.
Band Reject Filters:
A band reject filter removes a band of frequencies about the origin of the Fourier transform.
Ideal Band Reject Filter:
We know that, in the frequency domain, G(u,v) = F(u,v)H(u,v) + N(u,v), so the direct inverse filtering estimate is F̂(u,v) = G(u,v)/H(u,v). Therefore
F̂(u,v) = F(u,v) + N(u,v)/H(u,v)
From the above equation we observe that we cannot recover the undegraded image
exactly because N(u,v) is a random function whose Fourier transform is not known.
One approach to get around the zero or small-value problem is to limit the filter
frequencies to values near the origin.
We know that H(0,0) is equal to the average value of h(x,y).
By limiting the analysis to frequencies near the origin, we reduce the probability of encountering zero values.
Minimum Mean Square Error (Wiener) Filtering:
The inverse filtering approach has poor performance in the presence of noise. The Wiener filtering approach incorporates both the degradation function and the statistical characteristics of noise into the restoration process.
The objective is to find an estimate f̂ of the uncorrupted image f such that the mean square error between them is minimized.
The error measure is given by
e² = E{(f − f̂)²}
where E{·} denotes the expected value of the argument. Assuming the noise and the image are uncorrelated, the minimum of this error function is given in the frequency domain by
F̂(u,v) = [ H*(u,v) / ( |H(u,v)|² + Sη(u,v)/Sf(u,v) ) ] G(u,v)
where H*(u,v) is the complex conjugate of H(u,v), and Sη(u,v) and Sf(u,v) are the power spectra of the noise and the undegraded image, respectively.
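A Python sketch of Wiener filtering in which the ratio Sη/Sf is replaced by a constant K, a common simplification when the power spectra are unknown; H is assumed to be the degradation transfer function on the same grid as g:

import numpy as np

def wiener(g, H, K):
    # F_hat = [ H* / (|H|^2 + K) ] * G, then back to the spatial domain
    H2 = np.abs(H) ** 2
    F_hat = (np.conj(H) / (H2 + K)) * np.fft.fft2(g)
    return np.real(np.fft.ifft2(F_hat))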