This lecture is related to Pattern Classification and Recognition. It was delivered by Sahayu Agendra at Banasthali Vidyapith. It includes: Non-linear Classifiers, Single Line, Hyperplane, Class, Operations, Linearly Separable, Two-layer Perceptron.
The XOR problem:

x1   x2   XOR   Class
0    0    0     B
0    1    1     A
1    0    1     A
1    1    0     B
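Before the two-line construction, it is worth making the non-separability concrete. The short check below is my own illustration, not part of the lecture: it scans a coarse grid of candidate lines $w_1 x_1 + w_2 x_2 + b = 0$ and confirms that none of them classifies all four XOR points correctly.

```python
import itertools

# The XOR truth table from above: inputs and class labels (A=1, B=0).
points = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def separates(w1, w2, b):
    """True if the line w1*x1 + w2*x2 + b = 0 puts class A on the
    positive side and class B on the negative side."""
    return all((w1 * x1 + w2 * x2 + b > 0) == (label == 1)
               for (x1, x2), label in points)

# Scan a coarse grid of candidate lines: none of them works.
grid = [i / 4 for i in range(-8, 9)]
found = any(separates(w1, w2, b)
            for w1, w2, b in itertools.product(grid, repeat=3))
print("single separating line found:", found)  # -> False
```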
[Figure: the two separating lines, with the strip between them shaded.] Then class B is located outside the shaded area and class A inside.
This is a two-phase design.

Phase 1: Draw two lines (hyperplanes), $g_1(\mathbf{x}) = 0$ and $g_2(\mathbf{x}) = 0$. Each of them is realized by a perceptron. The outputs of the perceptrons will be

$y_i = f(g_i(\mathbf{x})) \in \{0, 1\}, \quad i = 1, 2$

depending on the position of $\mathbf{x}$.

Phase 2: Find the position of $\mathbf{x}$ w.r.t. both lines, based on the values of $y_1, y_2$.
Equivalently: the computations of the first phase perform a mapping

$\mathbf{x} \rightarrow \mathbf{y} = [y_1, y_2]^T$

x1   x2   y1 (1st phase)   y2 (1st phase)   class (2nd phase)
0    0    0 (-)            0 (-)            B (0)
0    1    1 (+)            0 (-)            A (1)
1    0    1 (+)            0 (-)            A (1)
1    1    1 (+)            1 (+)            B (0)
Computations of the first phase perform a mapping that transforms the nonlinearly separable problem to a linearly separable one.
The architecture
This is a two-layer perceptron, with one hidden and one output layer. The activation functions are unit-step functions $f(\cdot)$, and the neurons realize the following lines (hyperplanes):

$g_1(\mathbf{x}) = x_1 + x_2 - \tfrac{1}{2} = 0$

$g_2(\mathbf{x}) = x_1 + x_2 - \tfrac{3}{2} = 0$

$g(\mathbf{y}) = y_1 - y_2 - \tfrac{1}{2} = 0$
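The whole two-phase computation can be traced in a few lines of Python. This sketch hardcodes the three hyperplanes above with a unit-step activation; the helper names (`step`, `classify`) are mine, not the lecture's.

```python
def step(v):
    # Unit-step activation: 1 on the positive side of a hyperplane, else 0.
    return 1 if v > 0 else 0

def classify(x1, x2):
    # Phase 1: position of x w.r.t. the two lines g1, g2.
    y1 = step(x1 + x2 - 0.5)   # g1(x) = x1 + x2 - 1/2
    y2 = step(x1 + x2 - 1.5)   # g2(x) = x1 + x2 - 3/2
    # Phase 2: a single line in the transformed (y1, y2) space.
    return "A" if step(y1 - y2 - 0.5) else "B"   # g(y) = y1 - y2 - 1/2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", classify(*x))   # B, A, A, B: the XOR table
```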
In general, the first (hidden) layer of a two-layer perceptron performs a mapping of a vector $\mathbf{x}$ onto the vertices of the unit-side hypercube $H_p$. The mapping is achieved with $p$ neurons, each realizing a hyperplane. The output of each of these neurons is 0 or 1 depending on the relative position of $\mathbf{x}$ w.r.t. the hyperplane.
Intersections of these hyperplanes form regions in the $l$-dimensional input space. Each region corresponds to a vertex of the $H_p$ unit hypercube.
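A minimal sketch of this mapping, assuming an illustrative set of $p = 3$ hyperplanes in the plane ($l = 2$); the coefficients below are made up for demonstration, not taken from the lecture.

```python
import numpy as np

# p hyperplanes in R^l, each given as g(x) = w . x + w0.
# These particular coefficients are illustrative only.
W = np.array([[1.0, 1.0], [1.0, -1.0], [0.0, 1.0]])   # p = 3, l = 2
w0 = np.array([-0.5, 0.0, -1.0])

def to_vertex(x):
    """Map x in R^l to a vertex of the unit hypercube H_p:
    one 0/1 coordinate per hyperplane, by which side x falls on."""
    return tuple(int(s) for s in (W @ x + w0 > 0))

# Points in the same region (same signs w.r.t. all 3 lines) share a
# vertex; crossing any single line flips exactly one coordinate.
print(to_vertex(np.array([0.2, 0.2])))   # -> (0, 0, 0)
print(to_vertex(np.array([2.0, 2.0])))   # -> (1, 0, 1)
```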
The output neuron realizes a hyperplane in the transformed $\mathbf{y}$-space that separates some of the vertices from the others. Thus, the two-layer perceptron has the capability to classify vectors into classes that consist of unions of polyhedral regions. But NOT ANY union: it depends on the relative position of the corresponding vertices.
The architecture
This (a three-layer perceptron) is capable of classifying vectors into classes consisting of any union of polyhedral regions. The idea is similar to the XOR problem: it realizes more than one hyperplane in the transformed $\mathbf{y} \in \mathcal{R}^p$ space.
The Backpropagation Algorithm

This is an algorithmic procedure that computes the synaptic weights iteratively, so that an adopted cost function is minimized (optimized).
In a large number of optimizing procedures, computation of derivatives is involved. Hence, discontinuous activation functions such as the unit step

$f(x) = \begin{cases} 1, & x > 0 \\ 0, & x < 0 \end{cases}$

pose a problem. There is always an escape path!!! The logistic function

$f(x) = \dfrac{1}{1 + e^{-ax}}$

is an example. Other functions are also possible and in some cases more desirable.
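The contrast is easy to see numerically: the unit step has a zero (or undefined) derivative everywhere, while the logistic has the smooth closed-form derivative $f'(x) = a f(x)(1 - f(x))$. A small sketch:

```python
import math

def step(x):
    # Discontinuous activation: derivative is 0 everywhere except at
    # x = 0, where it does not exist -- useless for gradient descent.
    return 1.0 if x > 0 else 0.0

def logistic(x, a=1.0):
    # Smooth "escape path": f(x) = 1 / (1 + e^(-a x)).
    return 1.0 / (1.0 + math.exp(-a * x))

def logistic_deriv(x, a=1.0):
    # Closed-form derivative: f'(x) = a f(x) (1 - f(x)) > 0 everywhere.
    fx = logistic(x, a)
    return a * fx * (1.0 - fx)

for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  step={step(x):.0f}  "
          f"logistic={logistic(x):.3f}  deriv={logistic_deriv(x):.3f}")
```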
Minimizing the cost function $J$ w.r.t. the weights is a nonlinear optimization task. For the gradient descent method:

$\mathbf{w}_r(\text{new}) = \mathbf{w}_r(\text{old}) + \Delta\mathbf{w}_r, \quad \Delta\mathbf{w}_r = -\mu \frac{\partial J}{\partial \mathbf{w}_r}$
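In code, the update rule is one line per iteration. The sketch below applies it to a toy quadratic cost $J(\mathbf{w}) = \|\mathbf{w} - \mathbf{w}^*\|^2$ (my stand-in, not the network's actual cost) purely to show the mechanics of the update.

```python
import numpy as np

# Toy cost J(w) = ||w - w_star||^2 with known gradient 2 (w - w_star);
# w_star is a made-up target, standing in for the true optimum.
w_star = np.array([1.0, -2.0, 0.5])

def grad_J(w):
    return 2.0 * (w - w_star)

mu = 0.1                      # learning rate (step size)
w = np.zeros(3)               # w(old): arbitrary initialization
for _ in range(100):
    w = w - mu * grad_J(w)    # w(new) = w(old) - mu * dJ/dw

print(w)                      # converges to w_star
```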
The Procedure: compute the gradient terms backwards, starting with the weights of the last (3rd) layer and then moving towards the first, repeating until a termination criterion is met.

Two major philosophies:

Batch mode: the gradients of the last layer are computed once ALL training data have appeared to the algorithm, i.e., by summing up all error terms.

Pattern (online) mode: the gradients are computed every time a new training data pair appears. Thus gradients are based on successive individual errors.
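The two philosophies differ only in when the error terms are accumulated. A sketch on a made-up least-squares problem with a single linear neuron (the data and function names are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))          # made-up training inputs
y = X @ np.array([2.0, -1.0])         # targets from a known linear rule

def batch_epoch(w, mu=0.05):
    # Batch mode: sum the error terms over ALL pairs, then update once
    # (scaled by 1/N here so the step size is comparable to pattern mode).
    grad = sum(2 * (x @ w - t) * x for x, t in zip(X, y))
    return w - mu * grad / len(X)

def pattern_epoch(w, mu=0.05):
    # Pattern (online) mode: update after EVERY individual pair.
    for x, t in zip(X, y):
        w = w - mu * 2 * (x @ w - t) * x
    return w

wb = wp = np.zeros(2)
for _ in range(200):
    wb, wp = batch_epoch(wb), pattern_epoch(wp)
print("batch:  ", wb)    # both approach [2, -1]
print("pattern:", wp)
```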