
Resulting Optimization Problem - Introduction to Pattern Recognition - Lecture Slides

The main points are: Resulting Optimization Problem, Optimal Hyperplane, Quadratic Cost Function, Kuhn-Tucker Conditions, Slack Variables, Non-Linear Discriminant Functions, Kernel Function, Kernel Function Based Classifier, Support Vectors.




Recap

- We have been discussing the SVM method.
- We have looked at the formulation of the optimal separating hyperplane.
- The resulting optimization problem is solved more easily in the dual form.
- By using a kernel function we can implicitly map the feature vectors to a high-dimensional space and find an optimal hyperplane there.


The optimization problem for SVM

- The optimal hyperplane is a solution of the following constrained optimization problem: find $W \in \Re^m$, $b \in \Re$ to

  $$\min_{W, b} \; \frac{1}{2} W^T W \quad \text{subject to} \quad y_i (W^T X_i + b) \ge 1, \; i = 1, \ldots, n.$$

- Quadratic cost function and linear (inequality) constraints.
- Kuhn-Tucker conditions are necessary and sufficient; every local minimum is a global minimum.
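Since the cost is quadratic and the constraints are linear, the primal can be handed to any off-the-shelf QP solver. Below is a minimal sketch, stacking the variable as $z = (W, b)$; the use of cvxopt is an illustrative assumption, the slides do not prescribe a solver.

```python
import numpy as np
from cvxopt import matrix, solvers

def svm_primal_hard(X, y):
    """Hard-margin SVM primal as a QP in z = (W, b):
    minimize (1/2) W^T W  subject to  y_i (W^T X_i + b) >= 1."""
    n, m = X.shape
    P = np.zeros((m + 1, m + 1))
    P[:m, :m] = np.eye(m)              # quadratic cost on W only; b is free
    q = np.zeros(m + 1)
    # y_i (W^T X_i + b) >= 1   <=>   -y_i * [X_i, 1] @ z <= -1
    G = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    h = -np.ones(n)
    sol = solvers.qp(matrix(P), matrix(q), matrix(G), matrix(h))
    z = np.ravel(sol['x'])
    return z[:m], z[m]                 # W*, b*
```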

- The dual of this problem is:

  $$\max_{\mu \in \Re^n} \; q(\mu) = \sum_{i=1}^{n} \mu_i - \frac{1}{2} \sum_{i,j=1}^{n} \mu_i \mu_j y_i y_j X_i^T X_j$$

  subject to $\mu_i \ge 0$, $i = 1, \ldots, n$, and $\sum_{i=1}^{n} y_i \mu_i = 0$.

- Then the final solution is:

  $$W^* = \sum_i \mu_i^* y_i X_i, \qquad b^* = y_j - X_j^T W^*, \; j \text{ such that } \mu_j^* > 0.$$
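The dual is also a QP, now in $\mu \in \Re^n$. A sketch of solving it and recovering $W^*$ and $b^*$, again with cvxopt as an illustrative solver:

```python
import numpy as np
from cvxopt import matrix, solvers

def svm_dual_hard(X, y):
    """Maximize q(mu) by minimizing its negation, the form cvxopt expects:
    min (1/2) mu^T P mu - 1^T mu, with P_ij = y_i y_j X_i^T X_j."""
    n = X.shape[0]
    y = y.astype(float)
    P = matrix(np.outer(y, y) * (X @ X.T))
    q = matrix(-np.ones(n))
    G = matrix(-np.eye(n))             # -mu_i <= 0, i.e. mu_i >= 0
    h = matrix(np.zeros(n))
    A = matrix(y.reshape(1, -1))       # sum_i y_i mu_i = 0
    b = matrix(0.0)
    mu = np.ravel(solvers.qp(P, q, G, h, A, b)['x'])

    W = (mu * y) @ X                   # W* = sum_i mu_i* y_i X_i
    j = int(np.argmax(mu))             # any j with mu_j* > 0
    return W, y[j] - X[j] @ W, mu      # W*, b*, multipliers
```

The training points with $\mu_i^* > 0$ are exactly the support vectors; only they contribute to $W^*$.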


- This problem has no solution if the training data are not linearly separable.
- Hence, in general, we use slack variables.
- The optimization problem now is

  $$\min_{W, b, \xi} \; \frac{1}{2} W^T W + C \sum_{i=1}^{n} \xi_i$$

  subject to $y_i (W^T X_i + b) \ge 1 - \xi_i$ and $\xi_i \ge 0$, $i = 1, \ldots, n$.
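At the optimum, $\xi_i = \max(0, 1 - y_i(W^T X_i + b))$, so eliminating the slacks turns this into unconstrained minimization of the hinge loss plus the quadratic term, which even plain subgradient descent can attack. A rough sketch; the step size and epoch count are arbitrary illustrative choices, not part of the slides:

```python
import numpy as np

def svm_soft_subgradient(X, y, C=1.0, lr=1e-3, epochs=500):
    """min_{W,b} (1/2) W^T W + C * sum_i max(0, 1 - y_i (W^T X_i + b)),
    the slack formulation with xi_i eliminated."""
    n, m = X.shape
    W, b = np.zeros(m), 0.0
    for _ in range(epochs):
        margins = y * (X @ W + b)
        viol = margins < 1                    # points with nonzero slack
        g_W = W - C * (y[viol] @ X[viol])     # subgradient w.r.t. W
        g_b = -C * y[viol].sum()              # subgradient w.r.t. b
        W -= lr * g_W
        b -= lr * g_b
    return W, b
```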


- The dual problem now is:

  $$\max_{\mu} \; q(\mu) = \sum_{i=1}^{n} \mu_i - \frac{1}{2} \sum_{i,j=1}^{n} \mu_i \mu_j y_i y_j X_i^T X_j$$

  subject to $0 \le \mu_i \le C$, $i = 1, \ldots, n$, and $\sum_{i=1}^{n} y_i \mu_i = 0$.

- The only difference is the upper bound on $\mu_i$.
- We solve the dual and the final optimal hyperplane is

  $$W^* = \sum_i \mu_i^* y_i X_i, \qquad b^* = y_j - X_j^T W^*, \; j \text{ such that } 0 < \mu_j^* < C.$$
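In code, the change from the separable case is just an extra constraint block for $\mu_i \le C$, and $b^*$ is read off a multiplier strictly inside the box. A sketch extending the earlier dual solver; the tolerance eps is a numerical choice, not part of the slides:

```python
import numpy as np
from cvxopt import matrix, solvers

def svm_dual_soft(X, y, C=1.0, eps=1e-6):
    """Soft-margin dual: the same QP, with the box constraint 0 <= mu_i <= C."""
    n = X.shape[0]
    y = y.astype(float)
    P = matrix(np.outer(y, y) * (X @ X.T))
    q = matrix(-np.ones(n))
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))        # mu >= 0 and mu <= C
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    A = matrix(y.reshape(1, -1))
    b = matrix(0.0)
    mu = np.ravel(solvers.qp(P, q, G, h, A, b)['x'])

    W = (mu * y) @ X
    j = int(np.where((mu > eps) & (mu < C - eps))[0][0])  # 0 < mu_j* < C
    return W, y[j] - X[j] @ W, mu
```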


Non-linear discriminant functions

- The idea is that we map the feature vectors into a high-dimensional space and find a linear classifier there.
- In general, we can use a mapping $\phi : \Re^m \to \Re^{m'}$.
- In $\Re^{m'}$, the training set is $\{(Z_i, y_i), \; i = 1, \ldots, n\}$, where $Z_i = \phi(X_i)$.
- We can find the optimal hyperplane by solving the dual.
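Because the dual and the final classifier use feature vectors only through inner products, $\phi$ never has to be computed explicitly: replace $X_i^T X_j$ with $K(X_i, X_j) = \phi(X_i)^T \phi(X_j)$. A sketch with an RBF kernel as one common choice; the value of gamma is illustrative:

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """K(x, z) = exp(-gamma ||x - z||^2), computed for all pairs of rows."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def kernel_decision(x, X, y, mu, b, gamma=1.0):
    """Kernel form of the discriminant: since W* = sum_i mu_i* y_i phi(X_i),
    W*^T phi(x) + b* = sum_i mu_i* y_i K(X_i, x) + b*."""
    k = rbf_kernel(X, x[None, :], gamma).ravel()   # K(X_i, x) for all i
    return np.sign((mu * y) @ k + b)
```

In the dual solvers above, the corresponding change would be to set $P_{ij} = y_i y_j K(X_i, X_j)$ in place of $y_i y_j X_i^T X_j$.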