
Lecture 13: Kernels, boosting

TTIC 31020: Introduction to Machine Learning

Instructor: Greg Shakhnarovich

TTI–Chicago

October 25, 2010

Review

We start with

argmax_{w, w_0}  (1/‖w‖) min_i  y_i (w^T x_i + w_0)

In the linearly separable case, we get a quadratic program:

max_α  Σ_{i=1}^N α_i − (1/2) Σ_{i,j=1}^N α_i α_j y_i y_j x_i^T x_j

subject to  Σ_{i=1}^N α_i y_i = 0,  α_i ≥ 0 for all i = 1, …, N.

Solving it for α we get the SVM classifier

ŷ = sign( ŵ_0 + Σ_{i: α_i > 0} α_i y_i x_i^T x )
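As a concrete illustration of this decision rule, here is a minimal Python sketch, assuming the dual variables α_i and the offset ŵ_0 have already been obtained from a QP solver; the function and array names are illustrative, not from the lecture.

```python
import numpy as np

def svm_predict(x, support_X, support_y, support_alpha, w0):
    """Evaluate y_hat = sign(w0 + sum_i alpha_i y_i x_i^T x)
    using only the support vectors (the points with alpha_i > 0)."""
    scores = support_alpha * support_y * (support_X @ x)  # alpha_i y_i x_i^T x for each SV
    return np.sign(w0 + scores.sum())
```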

Plan for today

Kernel trick and SVMs

Boosting

Nonlinear features

As with logistic regression, we can move to nonlinear classifiers by mapping the data into a nonlinear feature space.

φ : [x_1, x_2]^T → [x_1^2, √2 x_1 x_2, x_2^2]^T

Example of nonlinear mapping

Consider the mapping: φ : [x_1, x_2]^T → [1, √2 x_1, √2 x_2, x_1^2, x_2^2, √2 x_1 x_2]^T.

The (linear) SVM classifier in the feature space:

ŷ = sign( ŵ_0 + Σ_{i: α_i > 0} α_i y_i φ(x_i)^T φ(x) )

The dot product in the feature space:

φ(x)^T φ(z) = 1 + 2 x_1 z_1 + 2 x_2 z_2 + x_1^2 z_1^2 + x_2^2 z_2^2 + 2 x_1 x_2 z_1 z_2 = (1 + x^T z)^2

Dot products and feature space

We defined a non-linear mapping into feature space

φ : [x_1, x_2]^T → [1, √2 x_1, √2 x_2, x_1^2, x_2^2, √2 x_1 x_2]^T

and saw that φ(x)^T φ(z) = K(x, z) using the kernel

K(x, z) = (1 + x^T z)^2

I.e., we can calculate dot products in the feature space implicitly, without ever writing the feature expansion!
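As a quick sanity check of this identity, we can compare the explicit 6-dimensional dot product with the single kernel evaluation; a minimal Python sketch (the point values are arbitrary):

```python
import numpy as np

def phi(x):
    # Explicit feature map for the quadratic kernel (1 + x^T z)^2 in 2D.
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     x1 ** 2,
                     x2 ** 2,
                     np.sqrt(2) * x1 * x2])

def K(x, z):
    # The same quantity computed implicitly, without the 6-dimensional expansion.
    return (1.0 + x @ z) ** 2

x = np.array([0.3, -1.2])
z = np.array([2.0, 0.5])
print(phi(x) @ phi(z))   # explicit dot product in feature space
print(K(x, z))           # kernel evaluation; the two numbers agree
```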

Mercer’s kernels

What kind of function K is a valid kernel, i.e., one for which there exists a feature map φ such that K(x, z) = φ(x)^T φ(z)?

Theorem due to Mercer (1930s): K must be

  • continuous;
  • symmetric: K(x, z) = K(z, x);
  • positive definite: for any x_1, …, x_N, the kernel matrix

K = [ K(x_1, x_1)  K(x_1, x_2)  …  K(x_1, x_N) ]
    [      ⋮             ⋮       ⋱        ⋮      ]
    [ K(x_N, x_1)  K(x_N, x_2)  …  K(x_N, x_N) ]

must be positive definite.
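These conditions can be checked numerically on a sample of points by building the kernel matrix and inspecting its symmetry and eigenvalues; a small Python sketch (a sanity check, not a proof), reusing the quadratic kernel from above:

```python
import numpy as np

def kernel_matrix(X, kernel):
    # Gram matrix K[i, j] = kernel(x_i, x_j) for the points in X.
    N = X.shape[0]
    return np.array([[kernel(X[i], X[j]) for j in range(N)] for i in range(N)])

X = np.random.randn(5, 2)
K = kernel_matrix(X, lambda x, z: (1.0 + x @ z) ** 2)

print(np.allclose(K, K.T))             # symmetry
print(np.linalg.eigvalsh(K) >= -1e-9)  # eigenvalues nonnegative, up to round-off
```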

Some popular kernels

The linear kernel: K(x, z) = x^T z.

This leads to the original, linear SVM.

The polynomial kernel:

K(x, z; c, d) = (c + x^T z)^d.

We can write the expansion explicitly, by concatenating powers up to d and multiplying by appropriate weights.
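To see why the implicit computation matters, compare the cost of one kernel evaluation with the size of that explicit expansion; a rough Python sketch, assuming the expansion consists of all monomials of total degree at most d in D input variables:

```python
from math import comb

def poly_kernel(x, z, c=1.0, d=3):
    # Polynomial kernel: O(len(x)) work, independent of the degree d.
    return (c + sum(xi * zi for xi, zi in zip(x, z))) ** d

def explicit_dim(D, d):
    # Number of monomials of total degree <= d in D variables: the size of the
    # explicit feature expansion the kernel lets us avoid writing down.
    return comb(D + d, d)

print(explicit_dim(100, 5))                       # ~ 9.7e7 features for D=100, d=5
print(poly_kernel(range(100), range(100), d=5))   # still just one dot product
```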


Radial basis function kernel

K(x, z; σ) = exp( −‖x − z‖^2 / (2σ^2) )

The RBF kernel is a measure of similarity between two examples.

  • The feature space is infinite-dimensional!

What is the role of parameter σ? Consider σ → 0.
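A small numerical illustration of the effect of σ (a sketch; the 2σ² scaling in the exponent is one common convention and may differ from the one on the slide):

```python
import numpy as np

def rbf_kernel(x, z, sigma):
    # Gaussian / RBF kernel: similarity decays with squared distance, scaled by sigma.
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

x = np.array([0.0, 0.0])
z = np.array([1.0, 1.0])
for sigma in [10.0, 1.0, 0.1]:
    print(sigma, rbf_kernel(x, z, sigma))
# Large sigma: all points look similar (K close to 1 for any pair).
# sigma -> 0: K -> 0 for any two distinct points, so each training point becomes
# its own "island" and the classifier effectively memorizes the training set.
```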

SVM with RBF (Gaussian) kernels

Data are linearly separable in the (infinite-dimensional) feature space

We don’t need to explicitly compute dot products in that feature space – instead we simply evaluate the RBF kernel.
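As an illustration of an RBF-kernel SVM in practice, here is a minimal scikit-learn sketch (one possible toolkit; the lecture itself recommends SVMlight below, and the dataset and parameter values here are arbitrary):

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Toy data that is not linearly separable in the input space.
X, y = make_moons(n_samples=200, noise=0.1, random_state=0)

# RBF-kernel SVM: gamma plays the role of 1/(2 sigma^2).
clf = SVC(kernel="rbf", C=1.0, gamma=1.0)
clf.fit(X, y)

print(clf.n_support_)   # number of support vectors per class
print(clf.score(X, y))  # training accuracy
```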

SVM regression

The key ideas: the ε-insensitive loss and the ε-tube.

[Figures: the ε-insensitive loss L(z), zero for |z| ≤ ε and growing linearly outside; and the ε-tube of width 2ε around the regression function y(x), with slack variables ξ, ξ̃ for points lying above or below the tube.]

Two sets of slack variables:

y_i ≤ f(x_i) + ε + ξ_i,   y_i ≥ f(x_i) − ε − ξ̃_i,   ξ_i ≥ 0,  ξ̃_i ≥ 0.

Optimization:

min  C Σ_i (ξ_i + ξ̃_i) + (1/2)‖w‖^2
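The ε-insensitive loss itself is easy to write down; a minimal Python sketch (the residual values are arbitrary):

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps):
    # Zero loss inside the epsilon-tube, linear outside it.
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

residuals = np.array([-0.3, -0.05, 0.0, 0.08, 0.5])
print(eps_insensitive_loss(residuals, 0.0, eps=0.1))
# -> [0.2, 0.0, 0.0, 0.0, 0.4]: only points outside the tube pay a penalty,
#    and the penalty equals the slack (xi or xi_tilde) in the optimization above.
```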

SVM: summary

Two main ideas:

  • large margin classification,
  • the kernel trick.

Complexity of classifier depends on the number of SVs.

  • Controlled indirectly by C and kernel parameters.

One of the most successful ML techniques!

A crucial component: good QP solver.

Recommended off-the-shelf package: SVMlight http://svmlight.joachims.org

Stepwise regression for classification

Can perform stepwise selection for any classifier of the form

ŷ(x) = f( Σ_{j=1}^d w_j φ_j(x) )

For instance, logistic regression:

  • Step 1: ŷ(x) = sign( σ(w_1 x_{j_1}^{d_1}) − 1/2 )
  • Step 2: ŷ(x) = sign( σ(w_1 x_{j_1}^{d_1} + w_2 x_{j_2}^{d_2}) − 1/2 )
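A rough Python sketch of the greedy selection loop, using raw input coordinates as the candidate features and training accuracy as the selection score (a held-out set or cross-validation would normally be used; all names and data are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stepwise_select(X, y, n_steps):
    """Greedy forward selection: at each step add the single feature that,
    together with those already chosen, gives the best training accuracy."""
    chosen = []
    for _ in range(n_steps):
        best_j, best_acc = None, -np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            acc = LogisticRegression().fit(X[:, cols], y).score(X[:, cols], y)
            if acc > best_acc:
                best_j, best_acc = j, acc
        chosen.append(best_j)
    return chosen

X = np.random.randn(200, 10)
y = (X[:, 3] - 2 * X[:, 7] > 0).astype(int)  # only features 3 and 7 matter
print(stepwise_select(X, y, n_steps=2))      # typically recovers features 3 and 7
```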