Non Linear Classifiers - Lecture Slides, Pattern Classification and Recognition

These lecture slides on Pattern Classification and Recognition were delivered by Sahayu Agendra at Banasthali Vidyapith (2011/2012). They cover non-linear classifiers, the XOR problem, separating lines (hyperplanes), linearly separable operations, two-layer and three-layer perceptrons, and the backpropagation algorithm.


Non Linear Classifiers

The XOR problem

  x1   x2   XOR   Class
  0    0    0     B
  0    1    1     A
  1    0    1     A
  1    1    0     B

There is no single line (hyperplane) that separates class A from class B. By contrast, the AND and OR operations are linearly separable problems.
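As a quick check of this claim (a minimal sketch; the weight and threshold values below are illustrative choices, not taken from the slides), a single perceptron with a hard-threshold activation realizes AND or OR, while no single hyperplane reproduces XOR:

```python
import numpy as np

def step(v):
    """Hard-threshold activation: 1 if the argument is positive, else 0."""
    return (v > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# One hyperplane suffices for AND and for OR (illustrative weights).
w, b_and, b_or = np.array([1.0, 1.0]), -1.5, -0.5

print(step(X @ w + b_and))  # [0 0 0 1]  -> AND
print(step(X @ w + b_or))   # [0 1 1 1]  -> OR
# No single (w, b) yields XOR = [0 1 1 0]; two hyperplanes are needed.
```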

Then class B is located outside the shaded area and class A inside. This is a two-phase design.

  • Phase 1: Draw two lines (hyperplanes)

        g1(x) = 0,   g2(x) = 0

    Each of them is realized by a perceptron. The outputs of the perceptrons,

        yi = f(gi(x)) ∈ {0, 1},   i = 1, 2,

    will be 0 or 1 depending on the position of x.

  • Phase 2: Find the position of x w.r.t. both lines, based on the values of y1, y2.

Equivalently: the computations of the first phase perform a mapping

    x → y = [y1, y2]^T

  1st phase           2nd phase
  x1   x2   y1      y2       output
  0    0    0 (-)   0 (-)    B (0)
  0    1    1 (+)   0 (-)    A (1)
  1    0    1 (+)   0 (-)    A (1)
  1    1    1 (+)   1 (+)    B (0)

Computations of the first phase perform a mapping that transforms the nonlinearly separable problem to a linearly separable one.

The architecture

  • This is known as the two-layer perceptron, with one hidden and one output layer. The activation functions are step functions (the f used above, with outputs 0 or 1).

  • The neurons (nodes) of the figure realize the following lines (hyperplanes):

        g1(x) = x1 + x2 - 1/2 = 0
        g2(x) = x1 + x2 - 3/2 = 0
        g(y)  = y1 - y2 - 1/2 = 0
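The sketch below implements the two phases with exactly these hyperplanes (a minimal NumPy illustration; the network is hand-weighted rather than trained):

```python
import numpy as np

def step(v):
    """Step activation f: 1 if the argument is positive, else 0."""
    return (v > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Phase 1 (hidden layer): g1(x) = x1 + x2 - 1/2,  g2(x) = x1 + x2 - 3/2
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])
y = step(X @ W_hidden.T + b_hidden)       # maps each x onto a vertex of H_2

# Phase 2 (output layer): g(y) = y1 - y2 - 1/2
w_out, b_out = np.array([1.0, -1.0]), -0.5
out = step(y @ w_out + b_out)

print(y)    # [[0 0] [1 0] [1 0] [1 1]]  -- the mapping table above
print(out)  # [0 1 1 0]                  -- class A = 1, class B = 0
```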

The first (hidden) layer performs a mapping of a vector x onto the vertices of the unit-side hypercube H_p. The mapping is achieved with p neurons, each realizing a hyperplane. The output of each of these neurons is 0 or 1, depending on the relative position of x w.r.t. the hyperplane.

Intersections of these hyperplanes form regions in the l-dimensional space. Each region corresponds to a vertex of the unit hypercube H_p.
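A small sketch of this region-to-vertex correspondence (the p hyperplanes below are arbitrary illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
p, l = 3, 2                        # p hyperplanes in the l-dimensional space
W = rng.normal(size=(p, l))        # hyperplane normal vectors
b = rng.normal(size=p)             # hyperplane offsets

def vertex(x):
    """Map x to a vertex of the unit hypercube H_p: one 0/1 bit per hyperplane."""
    return tuple((W @ x + b > 0).astype(int))

# Points in the same region land on the same vertex; crossing one
# hyperplane flips exactly one coordinate of the vertex.
print(vertex(np.array([0.1, 0.2])), vertex(np.array([0.15, 0.2])))
```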

The output neuron realizes a hyperplane in the transformed y space that separates some of the vertices from the others. Thus, the two-layer perceptron has the capability to classify vectors into classes that consist of unions of polyhedral regions, but NOT ANY union. It depends on the relative position of the corresponding vertices.

Three-layer perceptrons

The architecture

This is capable of classifying vectors into classes consisting of ANY union of polyhedral regions. The idea is similar to the XOR problem. It realizes more than one plane in the y ∈ R^p space.

The Backpropagation Algorithm

This is an algorithmic procedure that computes the synaptic weights iteratively, so that an adopted cost function is minimized (optimized).

In a large number of optimization procedures, the computation of derivatives is involved. Hence, discontinuous activation functions pose a problem, e.g. the step function

    f(x) = 1 if x > 0,   0 if x < 0

There is always an escape path!!! The logistic function

    f(x) = 1 / (1 + exp(-a x))

is an example. Other functions are also possible and in some cases more desirable.
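A minimal sketch of the two activation functions just mentioned (a is the slope parameter from the formula above; the derivative formula for the logistic is the standard one, added here for illustration):

```python
import numpy as np

def step(x):
    """Discontinuous activation: its derivative is 0 wherever it exists."""
    return np.where(x > 0, 1.0, 0.0)

def logistic(x, a=1.0):
    """The smooth 'escape path': f(x) = 1 / (1 + exp(-a*x))."""
    return 1.0 / (1.0 + np.exp(-a * x))

def logistic_grad(x, a=1.0):
    """Derivative f'(x) = a * f(x) * (1 - f(x)), defined everywhere."""
    fx = logistic(x, a)
    return a * fx * (1.0 - fx)

xs = np.linspace(-3.0, 3.0, 7)
print(logistic(xs))
print(logistic_grad(xs))
```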

  • The task is a nonlinear optimization one. For the gradient descent method, each weight w_r is updated as

        w_r(new) = w_r(old) + Δw_r,    Δw_r = -μ ∂J/∂w_r

    where μ > 0 is the step-size (learning-rate) parameter.
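In code, one such gradient descent step could look like this (a generic sketch: grad_J stands for whatever gradients the backward pass has produced; the names are illustrative):

```python
import numpy as np

def gradient_step(weights, grad_J, mu=0.1):
    """Apply w(new) = w(old) - mu * dJ/dw to every weight array."""
    return [w - mu * g for w, g in zip(weights, grad_J)]

# Toy usage with two weight matrices and made-up gradients.
weights = [np.ones((2, 3)), np.ones((1, 3))]
grads   = [0.5 * np.ones((2, 3)), 0.2 * np.ones((1, 3))]
weights = gradient_step(weights, grads, mu=0.1)
print(weights[0])
```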

The procedure:

  • Initialize the unknown weights randomly with small values.
  • Compute the gradient terms backwards, starting with the weights of the last (3rd) layer and then moving towards the first.
  • Update the weights.
  • Repeat the procedure until a termination criterion is met.

Two major philosophies (contrasted in the sketch below):

  • Batch mode: The gradients of the last layer are computed once ALL training data have been presented to the algorithm, i.e., by summing up all error terms.
  • Pattern mode: The gradients are computed every time a new training data pair appears. Thus the gradients are based on successive individual errors.
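A minimal sketch contrasting the two philosophies (the model here is a single linear neuron with a squared-error cost, chosen purely for illustration; mu is the step size):

```python
import numpy as np

def batch_epoch(w, X, t, mu=0.1):
    """Batch mode: sum the error terms over ALL patterns, then update once."""
    grad = np.zeros_like(w)
    for x, target in zip(X, t):
        grad += (w @ x - target) * x       # accumulate squared-error gradient
    return w - mu * grad

def pattern_epoch(w, X, t, mu=0.1):
    """Pattern (online) mode: update after every individual training pair."""
    for x, target in zip(X, t):
        w = w - mu * (w @ x - target) * x  # successive individual-error updates
    return w

# Toy data: learn OR with a linear unit (last input acts as the bias term).
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
t = np.array([0., 1., 1., 1.])
w_batch = np.zeros(3)
w_patt  = np.zeros(3)
for _ in range(200):
    w_batch = batch_epoch(w_batch, X, t)
    w_patt  = pattern_epoch(w_patt, X, t)
print((X @ w_batch > 0.5).astype(int))     # [0 1 1 1]: approximates OR
print((X @ w_patt  > 0.5).astype(int))     # [0 1 1 1]: approximates OR
```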