The main points are: Resulting Optimization Problem, Optimal Hyperplane, Quadratic Cost Function, Kuhn-Tucker Conditions, Slack Variables, Non-Linear Discriminant Functions, Kernel Function, Kernel Function Based Classifier, Support Vectors.
Typology: Slides
PR NPTEL course – p.1/
• The resulting optimization problem: find W ∈ ℜ^m, b ∈ ℜ to

    minimize    (1/2) W^T W
    subject to  y_i (W^T X_i + b) ≥ 1,  i = 1, ..., n

• Quadratic cost function and linear (inequality) constraints.
• Kuhn-Tucker conditions are necessary and sufficient. Every local minimum is a global minimum.
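The slides leave the quadratic program abstract; as a minimal sketch (not part of the original lecture), it can be handed to a general-purpose solver. Below, scipy.optimize.minimize with inequality constraints solves the primal on a tiny made-up linearly separable dataset; the data values are illustrative assumptions, not from the slides.

```python
import numpy as np
from scipy.optimize import minimize

# Tiny linearly separable 2-D dataset (hypothetical example data).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
m = X.shape[1]

# Decision variables: theta = (W, b).
def objective(theta):
    W = theta[:m]
    return 0.5 * W @ W                      # (1/2) W^T W

# One inequality constraint per sample: y_i (W^T X_i + b) - 1 >= 0.
cons = [{"type": "ineq",
         "fun": (lambda th, i=i: y[i] * (th[:m] @ X[i] + th[m]) - 1.0)}
        for i in range(len(y))]

res = minimize(objective, x0=np.zeros(m + 1), constraints=cons)
W_star, b_star = res.x[:m], res.x[m]
margins = y * (X @ W_star + b_star)         # every margin should be >= 1 at the optimum
print(W_star, b_star, margins.min())
```

Every training point then satisfies its constraint with margin at least 1 (up to solver tolerance), and the points that meet it with equality are the support vectors.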
• The dual of this problem is:

    max_{μ ∈ ℜ^n}  q(μ) = Σ_{i=1}^{n} μ_i − (1/2) Σ_{i,j=1}^{n} μ_i μ_j y_i y_j X_i^T X_j
    subject to  μ_i ≥ 0, i = 1, ..., n,  and  Σ_{i=1}^{n} y_i μ_i = 0

• Then the final solution is:

    W* = Σ_i μ_i* y_i X_i,    b* = y_j − X_j^T W*,  for any j such that μ_j* > 0
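The slides do not prescribe a solver, so as a hedged sketch the dual can also be solved numerically by minimizing −q(μ) under the bound and equality constraints, then recovering W* and b* from the multipliers. The dataset below is invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Same style of hypothetical separable data as the primal example.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n = len(y)

# H_ij = y_i y_j X_i^T X_j, so q(mu) = sum(mu) - (1/2) mu^T H mu.
H = (y[:, None] * X) @ (y[:, None] * X).T

def neg_q(mu):                               # maximize q <=> minimize -q
    return -(mu.sum() - 0.5 * mu @ H @ mu)

res = minimize(neg_q, x0=np.full(n, 0.1),
               bounds=[(0.0, None)] * n,                          # mu_i >= 0
               constraints=({"type": "eq", "fun": lambda mu: mu @ y},))  # sum y_i mu_i = 0
mu = res.x
W = ((mu * y)[:, None] * X).sum(axis=0)      # W* = sum_i mu_i* y_i X_i
j = int(np.argmax(mu))                       # any index with mu_j* > 0
b = y[j] - X[j] @ W                          # b* = y_j - X_j^T W*
print(W, b)
```

Only the points with μ_i* > 0 (the support vectors) contribute to W*, which is why the final classifier depends on a subset of the training data.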
• This problem has no solution if the training data are not linearly separable.
• Hence, in general, we use slack variables.
• The optimization problem now is

    min_{W, b, ξ}  (1/2) W^T W + C Σ_{i=1}^{n} ξ_i
    subject to  y_i (W^T X_i + b) ≥ 1 − ξ_i,  i = 1, ..., n
                ξ_i ≥ 0,  i = 1, ..., n
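As a sketch under the same solver assumption as before (scipy is not mentioned in the slides), the slack-variable primal adds n extra nonnegative variables ξ_i to the decision vector. The overlapping one-dimensional dataset below is made up so that no separating hyperplane exists.

```python
import numpy as np
from scipy.optimize import minimize

# Non-separable 1-D data (hypothetical): the -1 point at 1.5 sits among the +1 points.
X = np.array([[1.0], [2.0], [-1.0], [1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n, m = X.shape
C = 1.0                                      # penalty weight on the slacks

# theta = (W, b, xi_1, ..., xi_n)
def objective(th):
    W, xi = th[:m], th[m + 1:]
    return 0.5 * W @ W + C * xi.sum()        # (1/2) W^T W + C sum_i xi_i

# y_i (W^T X_i + b) - 1 + xi_i >= 0 for each sample.
cons = [{"type": "ineq",
         "fun": (lambda th, i=i: y[i] * (th[:m] @ X[i] + th[m]) - 1.0 + th[m + 1 + i])}
        for i in range(n)]
bounds = [(None, None)] * (m + 1) + [(0.0, None)] * n   # xi_i >= 0

res = minimize(objective, x0=np.zeros(m + 1 + n), bounds=bounds, constraints=cons)
W, b, xi = res.x[:m], res.x[m], res.x[m + 1:]
print(W, b, xi)
```

Points that would violate the margin get ξ_i > 0 instead of making the problem infeasible; C trades margin width against total violation.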
• The dual problem now is:

    max_μ  q(μ) = Σ_{i=1}^{n} μ_i − (1/2) Σ_{i,j=1}^{n} μ_i μ_j y_i y_j X_i^T X_j
    subject to  0 ≤ μ_i ≤ C, i = 1, ..., n,  and  Σ_{i=1}^{n} y_i μ_i = 0

• The only difference is the upper bound C on μ_i.
• We solve the dual, and the final optimal hyperplane is

    W* = Σ_i μ_i* y_i X_i,    b* = y_j − X_j^T W*,  for any j such that 0 < μ_j < C
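Numerically, the soft-margin dual differs from the separable case only in the box bounds 0 ≤ μ_i ≤ C. The sketch below (solver choice and data are assumptions, not from the slides) solves it on the same invented non-separable one-dimensional data, recovering b* from a multiplier strictly inside the box.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical non-separable 1-D data; C caps each multiplier.
X = np.array([[1.0], [2.0], [-1.0], [1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n, C = len(y), 1.0
H = (y[:, None] * X) @ (y[:, None] * X).T    # H_ij = y_i y_j X_i^T X_j

def neg_q(mu):
    return -(mu.sum() - 0.5 * mu @ H @ mu)

res = minimize(neg_q, x0=np.full(n, C / 2),
               bounds=[(0.0, C)] * n,                              # 0 <= mu_i <= C
               constraints=({"type": "eq", "fun": lambda mu: mu @ y},))
mu = res.x
W = ((mu * y)[:, None] * X).sum(axis=0)      # W* = sum_i mu_i* y_i X_i
# b* needs a j with 0 < mu_j* < C: such points lie exactly on the margin.
inside = np.where((mu > 1e-4) & (mu < C - 1e-4))[0]
j = int(inside[0])
b = y[j] - X[j] @ W
print(W, b, mu)
```

Multipliers at the upper bound C correspond to margin violators (ξ_i > 0), which is why b* must be read off a point strictly inside the box.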
• For non-linear discriminant functions, we map the patterns into a higher-dimensional space via φ : ℜ^m → ℜ^{m′}.
• In ℜ^{m′}, the training set is {(Z_i, y_i), i = 1, ..., n}, with Z_i = φ(X_i).
• We can find the optimal hyperplane in ℜ^{m′} by solving the dual.
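The dual only ever touches the mapped data through inner products Z_i^T Z_j = φ(X_i)^T φ(X_j), which is exactly what a kernel function supplies. As an illustrative sketch (the feature map, XOR-style data, and solver below are assumptions for the example), an explicit φ with a product feature makes XOR-like data separable in ℜ^3:

```python
import numpy as np
from scipy.optimize import minimize

# XOR-like data: not linearly separable in the original plane.
X = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n = len(y)

# Explicit feature map phi : R^2 -> R^3; the product feature x1*x2 separates the classes.
def phi(x):
    return np.array([x[0], x[1], x[0] * x[1]])

Z = np.array([phi(x) for x in X])            # Z_i = phi(X_i)
H = (y[:, None] * Z) @ (y[:, None] * Z).T    # only inner products Z_i^T Z_j are needed

def neg_q(mu):
    return -(mu.sum() - 0.5 * mu @ H @ mu)

res = minimize(neg_q, x0=np.full(n, 0.1), bounds=[(0.0, None)] * n,
               constraints=({"type": "eq", "fun": lambda mu: mu @ y},))
mu = res.x
W = ((mu * y)[:, None] * Z).sum(axis=0)      # optimal hyperplane in the mapped space
j = int(np.argmax(mu))
b = y[j] - Z[j] @ W
pred = np.sign(Z @ W + b)                    # linear in R^3, non-linear in the original R^2
print(pred)
```

Replacing each Z_i^T Z_j in H with K(X_i, X_j) = φ(X_i)^T φ(X_j) gives the kernel-function-based classifier without ever computing φ explicitly.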