
Linear Classifiers: Perceptron Algorithm, Cost Function, and Support Vector Machines, Slides of Pattern Classification and Recognition

These slides cover linear classifiers, focusing on the perceptron algorithm, the cost function, and support vector machines (SVMs). The perceptron algorithm trains a linear classifier when the classes are linearly separable. The cost function measures the classifier's error and is minimized to find a solution. Support vector machines (SVMs) are linear classifiers that find the hyperplane with the maximum margin between the classes.



1

LINEAR CLASSIFIERS

The Problem: Consider a two class task with classes $\omega_1$, $\omega_2$:

$$g(x) = w^T x + w_0 = w_1 x_1 + w_2 x_2 + \dots + w_l x_l + w_0 = 0$$

Assume $x_1$, $x_2$ on the decision hyperplane:

$$w^T x_1 + w_0 = w^T x_2 + w_0 = 0 \;\Rightarrow\; w^T (x_1 - x_2) = 0, \quad \forall x_1, x_2$$

2

Hence, $w$ is orthogonal ($\perp$) to the decision hyperplane. Moreover, for

$$g(x) = w^T x + w_0,$$

$$z = \frac{|g(x)|}{\sqrt{w_1^2 + w_2^2}}$$

is the distance of $x$ from the hyperplane, and

$$d = \frac{|w_0|}{\sqrt{w_1^2 + w_2^2}}$$

is the distance of the hyperplane from the origin.
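
As a small illustration of $g(x)$, $z$, and $d$, here is a minimal NumPy sketch; the weight vector and the sample point are made-up values, not taken from the slides.

```python
import numpy as np

# Hypothetical 2-D hyperplane w^T x + w0 = 0 (values chosen for illustration only)
w = np.array([1.0, 1.0])
w0 = -0.5
x = np.array([0.4, 0.05])            # an arbitrary test point

g = w @ x + w0                       # discriminant value g(x)
z = abs(g) / np.linalg.norm(w)       # distance of x from the hyperplane
d = abs(w0) / np.linalg.norm(w)      # distance of the hyperplane from the origin
print(g, z, d)
```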

4

Our goal: Compute a solution, i.e., a hyperplane $w$, so that

$$w^T x \gtrless 0, \quad x \in \omega_1,\ \omega_2.$$

  • The steps:
    • Define a cost function to be minimized.
    • Choose an algorithm to minimize the cost function.
    • The minimum corresponds to a solution.

5

The Cost Function

$$J(w) = \sum_{x \in Y} \delta_x\, w^T x$$

  • Where $Y$ is the subset of the vectors wrongly classified by $w$. When $Y = \varnothing$ (empty set) a solution is achieved and $J(w) = 0$.
  • $J(w) \ge 0$.
  • $\delta_x = -1$ if $x \in \omega_1$ and $\delta_x = +1$ if $x \in \omega_2$.
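
A minimal sketch of how $J(w)$ could be evaluated in NumPy, assuming the class coding of the slide ($\delta_x = -1$ for $\omega_1$, $+1$ for $\omega_2$) and augmented feature vectors; the function name is my own.

```python
import numpy as np

def perceptron_cost(w, X, delta):
    """J(w) = sum over misclassified x of delta_x * w^T x (always >= 0).
    X: (N, l) augmented vectors; delta: (N,) with -1 for omega_1, +1 for omega_2."""
    scores = X @ w
    wrong = delta * scores > 0       # x is misclassified when delta_x * w^T x > 0
    return np.sum(delta[wrong] * scores[wrong])
```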

7

  • $w(\text{new}) = w(\text{old}) + \Delta w$, with $\Delta w = -\rho \left.\dfrac{\partial J(w)}{\partial w}\right|_{w = w(\text{old})}$
  • $\dfrac{\partial J(w)}{\partial w} = \sum_{x \in Y} \delta_x\, x$, wherever the derivative is valid.
  • Hence the iteration $w(t+1) = w(t) - \rho_t \sum_{x \in Y} \delta_x\, x$.
  • This is the celebrated Perceptron Algorithm.

8

An example (figure in the original slide): each correction $w(t) \to w(t+1)$ rotates the hyperplane towards a misclassified vector $x$.

  • The perceptron algorithm converges in a finite number of iteration steps to a solution if the sequence $\rho_t$ is chosen so that

$$\lim_{t \to \infty} \sum_{k=0}^{t} \rho_k \to \infty, \qquad \lim_{t \to \infty} \sum_{k=0}^{t} \rho_k^2 < \infty, \qquad \text{e.g., } \rho_t = \frac{c}{t}.$$
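
Putting the update rule of the previous slide together with a $\rho_t = c/t$ schedule, a batch perceptron iteration could be sketched as below; the names and the stopping convention are mine, not from the slides.

```python
import numpy as np

def perceptron_train(X, delta, c=1.0, max_iter=1000):
    """Batch perceptron: w(t+1) = w(t) - rho_t * sum_{x in Y} delta_x * x,
    with rho_t = c / (t + 1), so sum(rho_k) diverges while sum(rho_k^2) stays finite.
    X: (N, l) augmented vectors; delta: (N,) = -1 for omega_1, +1 for omega_2."""
    w = np.zeros(X.shape[1])
    for t in range(max_iter):
        wrong = delta * (X @ w) >= 0      # misclassified set Y (boundary counted as wrong)
        if not wrong.any():
            break                         # Y is empty: a separating hyperplane was found
        rho_t = c / (t + 1)
        w = w - rho_t * (delta[wrong][:, None] * X[wrong]).sum(axis=0)
    return w
```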

10

The perceptron

  • $w_i$: the synapses or synaptic weights; $w_0$: the threshold.
  • It is a learning machine that learns from the training vectors via the perceptron algorithm.
  • The network is called perceptron or neuron.

11

Example: At some stage $t$ the perceptron algorithm results in $w_1 = 1$, $w_2 = 1$, $w_0 = -0.5$. The corresponding hyperplane is $x_1 + x_2 - 0.5 = 0$. With $\rho = 0.7$ and misclassified (augmented) vectors $[-0.2, 0.75, 1]^T$ and $[0.4, 0.05, 1]^T$, the update gives

$$w(t+1) = \begin{bmatrix} 1 \\ 1 \\ -0.5 \end{bmatrix} - 0.7\,(+1)\begin{bmatrix} -0.2 \\ 0.75 \\ 1 \end{bmatrix} - 0.7\,(-1)\begin{bmatrix} 0.4 \\ 0.05 \\ 1 \end{bmatrix} = \begin{bmatrix} 1.42 \\ 0.51 \\ -0.5 \end{bmatrix}$$
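
A quick NumPy check of this single step, assuming (as the signs suggest) that $[-0.2, 0.75]^T$ came from $\omega_2$ ($\delta = +1$) and $[0.4, 0.05]^T$ from $\omega_1$ ($\delta = -1$):

```python
import numpy as np

w_t = np.array([1.0, 1.0, -0.5])          # current weights [w1, w2, w0]
rho = 0.7
x_a = np.array([-0.2, 0.75, 1.0])         # misclassified, assumed delta = +1 (omega_2)
x_b = np.array([0.4, 0.05, 1.0])          # misclassified, assumed delta = -1 (omega_1)

w_next = w_t - rho * ((+1) * x_a + (-1) * x_b)
print(w_next)                              # -> [ 1.42  0.51 -0.5 ]
```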

13

SMALL, in the mean square error sense, means to choose $w$ so that the cost function

$$J(w) = E\!\left[(y - w^T x)^2\right]$$

becomes minimum:

$$\hat{w} = \arg\min_w J(w),$$

where $y$ is the corresponding desired response.

14

Minimizing $J(w)$ w.r.t. $w$ results in:

$$\frac{\partial J(w)}{\partial w} = 2E\!\left[x\,(y - x^T w)\right] = 0 \;\Rightarrow\; E[x x^T]\,\hat{w} = E[x y] \;\Rightarrow\; \hat{w} = R_x^{-1} E[x y],$$

where

$$R_x \equiv E[x x^T] = \begin{bmatrix} E[x_1 x_1] & E[x_1 x_2] & \dots & E[x_1 x_l] \\ \vdots & \vdots & & \vdots \\ E[x_l x_1] & E[x_l x_2] & \dots & E[x_l x_l] \end{bmatrix}$$

is the autocorrelation matrix and

$$E[x y] = \big[E[x_1 y], \dots, E[x_l y]\big]^T$$

the crosscorrelation vector.
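
In practice the expectations are replaced by sample averages. A minimal NumPy sketch on synthetic data (the generating weights are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
N, l = 1000, 3
X = rng.normal(size=(N, l))                                     # rows are realizations of x
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=N)   # desired responses

R_x = (X.T @ X) / N              # sample estimate of the autocorrelation matrix E[x x^T]
p = (X.T @ y) / N                # sample estimate of the crosscorrelation vector E[x y]
w_hat = np.linalg.solve(R_x, p)  # w_hat = R_x^{-1} E[x y]
print(w_hat)                     # close to [0.5, -1.0, 2.0]
```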

16

  • The goal is to compute $\hat{W}$:

$$\hat{W} = \arg\min_W E\!\left[\lVert y - W^T x \rVert^2\right] = \arg\min_W \sum_{i=1}^{M} E\!\left[(y_i - w_i^T x)^2\right]$$

  • The above is equivalent to a number $M$ of MSE minimization problems. That is: design each $w_i$ so that its desired output is 1 for $x \in \omega_i$ and 0 for any other class.
  • Remark: The MSE criterion belongs to a more general class of cost functions with the following important property: the value of $g_i(x)$ is an estimate, in the MSE sense, of the a-posteriori probability $P(\omega_i \mid x)$, provided that the desired responses used during training are $y_i = 1$ for $x \in \omega_i$ and 0 otherwise.
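
A sketch of the $M$ MSE designs with 1/0 desired responses, using an ordinary least-squares fit in place of the expectation (data layout and names are assumptions for the example):

```python
import numpy as np

def mse_multiclass(X, labels, M):
    """Design M linear discriminants: column i of W targets 1 for class i, 0 otherwise.
    X: (N, l) training vectors; labels: (N,) integer class indices in 0..M-1."""
    Y = np.eye(M)[labels]                          # (N, M) one-hot desired responses
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)      # M MSE problems solved jointly
    return W                                       # classify a new x by argmax of W.T @ x
```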

17

  • Mean square error regression: Let $y \in \mathbb{R}^M$, $x \in \mathbb{R}^l$ be jointly distributed random vectors with a joint pdf $p(y, x)$.
  • The goal: Given the value of $x$, estimate the value of $y$. In the pattern recognition framework, given $x$ one wants to estimate the respective label $y = \pm 1$.
  • The MSE estimate $\hat{y}$ of $y$, given $x$, is defined as:

$$\hat{y} = \arg\min_{\tilde{y}} E\!\left[\lVert y - \tilde{y} \rVert^2\right]$$

  • It turns out that:

$$\hat{y} = E[y \mid x] = \int y\, p(y \mid x)\, dy$$

The above is known as the regression of $y$ given $x$ and it is, in general, a non-linear function of $x$. If $p(y, x)$ is Gaussian the MSE regressor is linear.


19

Pseudoinverse Matrix

Define

$$X = \begin{bmatrix} x_1^T \\ x_2^T \\ \vdots \\ x_N^T \end{bmatrix} \quad (\text{an } N \times l \text{ matrix}), \qquad y = [y_1, y_2, \dots, y_N]^T \quad (\text{the corresponding desired responses}),$$

$$X^T = [x_1, x_2, \dots, x_N] \quad (\text{an } l \times N \text{ matrix}).$$

Then

$$X^T X = \sum_{i=1}^{N} x_i x_i^T, \qquad X^T y = \sum_{i=1}^{N} x_i y_i.$$

20

Thus

$$\sum_{i=1}^{N} x_i x_i^T\, \hat{w} = \sum_{i=1}^{N} x_i y_i \;\Rightarrow\; (X^T X)\,\hat{w} = X^T y \;\Rightarrow\; \hat{w} = (X^T X)^{-1} X^T y,$$

where

$$X^{+} \equiv (X^T X)^{-1} X^T$$

is the pseudoinverse of $X$.

  • Assume $N = l$ and $X$ square and invertible. Then

$$X^{+} = (X^T X)^{-1} X^T = X^{-1} (X^T)^{-1} X^T = X^{-1}.$$
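
A minimal NumPy check of the pseudoinverse solution on made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
N, l = 8, 3
X = rng.normal(size=(N, l))                   # N x l matrix with rows x_i^T
y = rng.normal(size=N)                        # desired responses

w_hat = np.linalg.inv(X.T @ X) @ X.T @ y      # (X^T X)^{-1} X^T y
w_pinv = np.linalg.pinv(X) @ y                # the same solution via the pseudoinverse X^+
print(np.allclose(w_hat, w_pinv))             # True
```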