Lecture 10: Regularization: probabilistic view

TTIC 31020: Introduction to Machine Learning

Instructor: Greg Shakhnarovich

TTI–Chicago

October 18, 2010

Review

Logistic model:

$$\log \frac{p(y=1 \mid x)}{p(y=0 \mid x)} = w_0 + w^T x$$

$$\Rightarrow \quad p(y=1 \mid x) = \frac{1}{1 + \exp(-w_0 - w^T x)}$$

Maximum likelihood = minimum log-loss:

$$\arg\max_{w, w_0} \; \sum_{i=1}^{N} \Big[\, y_i \log \sigma(w_0 + w^T x_i) + (1 - y_i) \log\big(1 - \sigma(w_0 + w^T x_i)\big) \Big]$$
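To make the objective concrete, here is a minimal NumPy sketch (my own illustration, not from the slides) of the log-likelihood being maximized; the names log_likelihood, X, y, w, w0 are placeholders:

    import numpy as np
    from scipy.special import expit        # numerically stable logistic sigmoid

    def log_likelihood(w, w0, X, y):
        # sum_i [ y_i log sigma(w0 + w^T x_i) + (1 - y_i) log(1 - sigma(w0 + w^T x_i)) ]
        p = expit(w0 + X @ w)              # predicted p(y = 1 | x_i) for each row of X
        eps = 1e-12                        # guard against log(0) when p saturates
        return np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    # Toy usage: four points in 2D with binary labels.
    X = np.array([[0.5, 1.0], [1.5, -0.2], [-1.0, 0.3], [-0.4, -1.2]])
    y = np.array([1, 1, 0, 0])
    print(log_likelihood(np.array([1.0, 0.5]), 0.0, X, y))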

Plan for today

Talk more about logistic regression

  • Loss
  • Regularization
  • Extensions

Start large margin classification

Surrogate loss

Recall that we really want to minimize the 0/1 loss. Instead, we are minimizing the log-loss:

$$\arg\max_{w} \sum_{i=1}^{N} \log p(y_i \mid x_i; w) \;=\; \arg\min_{w} \; -\sum_{i=1}^{N} \log p(y_i \mid x_i; w)$$

This is a surrogate loss; we work with it since it is not computationally feasible to optimize the 0/1 loss directly.

[Figure: loss $L(yf(x), 1)$ as a function of the margin $yf(x)$, comparing the 0/1 loss, the log-loss, and the squared error.]
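To make the comparison concrete, here is a small NumPy sketch (mine, not from the slides) evaluating these losses as functions of the margin m = y f(x), using the y ∈ {−1, +1} convention; writing the squared error as (1 − y f(x))² is a common convention and an assumption on my part:

    import numpy as np

    m = np.linspace(-3, 3, 601)            # margin values m = y * f(x), with y in {-1, +1}

    zero_one = (m <= 0).astype(float)      # 0/1 loss: 1 if the point is misclassified
    log_loss = np.log1p(np.exp(-m))        # log-loss: log(1 + exp(-m))
    squared  = (1.0 - m) ** 2              # squared error (1 - y f(x))^2

    # At the decision boundary m = 0: 0/1 = 1, log-loss = log 2 ~ 0.693, squared = 1.
    print(zero_one[300], log_loss[300], squared[300])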

Generalized additive models

As with regression, we can extend this framework to arbitrary features (basis functions):

$$p(y=1 \mid x) = \sigma\big(w_0 + w_1 \phi_1(x) + \dots + w_m \phi_m(x)\big).$$

Example: quadratic logistic regression in 2D

$$p(y=1 \mid x) = \sigma\big(w_0 + w_1 x_1 + w_2 x_2 + w_3 x_1^2 + w_4 x_2^2\big)$$

  • Decision boundary of this classifier:

$$w_0 + w_1 x_1 + w_2 x_2 + w_3 x_1^2 + w_4 x_2^2 = 0,$$

i.e. it's a quadratic decision boundary.
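As an illustration, the following sketch fits this quadratic model by explicit basis expansion; scikit-learn's LogisticRegression is used only for convenience (it is not part of the lecture, and note that it applies L2 regularization by default):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def quadratic_features(X):
        # Map (x1, x2) -> (x1, x2, x1^2, x2^2), the basis used in the example above
        # (no cross term x1 * x2).
        return np.hstack([X, X ** 2])

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)   # circular class boundary

    clf = LogisticRegression().fit(quadratic_features(X), y)
    print(clf.intercept_, clf.coef_)    # estimates of w0 and (w1, w2, w3, w4)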

Logistic regression: 2D example

[Figure: logistic regression on a 2D example, showing the linear (left) and quadratic (right) decision boundaries.]

Visualizing the log-likelihood surface

We will look at a 2D example, and assume $w_0 = 0$, i.e. our model will be $\hat{p}(y=1 \mid x) = \sigma(w_1 x_1 + w_2 x_2)$.

[Figure: the log-likelihood $\log p$ as a function of $w = (w_1, w_2)$, shown as a surface and as a contour plot (high/low regions).]

Mapping from boundaries to w

A line $\alpha w$ in the $(w_1, w_2)$ space corresponds to a set of parallel decision boundaries of the form $\alpha w^T x = 0$.

The sign of α determines the direction.

Overfitting with logistic regression

We can get the same decision boundary with an infinite number of settings for w.

When the data are separable by $w_0 + \alpha w^T x = 0$, what's the best choice for $\alpha$?

$$p(y=1 \mid x) = \sigma(w_0 + \alpha w^T x).$$

With $\alpha \to \infty$, we have $p(y_i \mid x_i; w_0, \alpha w) \to 1$.
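A quick numerical sketch of this blow-up (my own illustration, not from the slides): on linearly separable data, gradient ascent on the unregularized log-likelihood keeps increasing ‖w‖, and the fitted probabilities of the training labels creep toward 1.

    import numpy as np
    from scipy.special import expit   # numerically stable sigmoid

    # Separable toy data: positives on the right, negatives on the left.
    X = np.array([[2.0, 1.0], [3.0, -1.0], [-2.0, 0.5], [-3.0, -0.5]])
    y = np.array([1.0, 1.0, 0.0, 0.0])

    w = np.zeros(2)                   # ignore w0 for simplicity, as in the 2D example above
    for t in range(1, 20001):
        p = expit(X @ w)
        w += 0.1 * (X.T @ (y - p))    # gradient ascent on the log-likelihood
        if t % 5000 == 0:
            print(t, np.linalg.norm(w), p.round(4))
    # ||w|| keeps growing without bound; p(y_i | x_i) approaches exactly 0 or 1.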

Reminder: MAP for coin tosses

Similar problem in Lecture 1: given H, H, H, H do we believe the ML estimate that μ = p(X = H) = 1?

Solution: introduce a prior over μ and use Bayes' rule,

$$p(\mu \mid X) = \frac{p(X \mid \mu)\, p(\mu)}{p(X)},$$

and obtain the MAP estimate

$$\hat{\mu}_{\mathrm{MAP}} = \arg\max_{\mu} \log p(\mu \mid X) = \arg\max_{\mu} \big\{ \log p(X \mid \mu) + \log p(\mu) \big\}$$

Usually we have a prior that favors values of μ far from 0 or 1.
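For concreteness, a tiny sketch of this MAP computation; the Beta(2, 2) prior (which favors values away from 0 and 1) is my choice for illustration, and the closed form (h + a − 1)/(n + a + b − 2) for h heads in n tosses follows from maximizing log p(X|μ) + log p(μ) under a Beta(a, b) prior:

    def map_coin(heads, tosses, a=2.0, b=2.0):
        # MAP estimate of mu under a Beta(a, b) prior:
        #   argmax_mu [ log p(X | mu) + log p(mu) ] = (heads + a - 1) / (tosses + a + b - 2)
        return (heads + a - 1.0) / (tosses + a + b - 2.0)

    print(map_coin(4, 4))   # H, H, H, H: MAP estimate is 5/6 ~ 0.83, not the ML estimate of 1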

MAP estimation for logistic regression

Intuition: similar to the coin toss experiment, we may have some belief about the value of w before seeing any data.

  • E.g., we may prefer smaller values of $\|w\|$; recall our previous motivation for regularizing w!

A possible prior that captures that belief:

$$p(w) = \mathcal{N}\big(w;\, \mathbf{0},\, \sigma^2 I\big)$$

In the 2D case (again, ignoring $w_0$) this means

$$p(w_1, w_2) = \frac{1}{2\pi\sigma^2} \exp\!\left( -\frac{w_1^2 + w_2^2}{2\sigma^2} \right)$$
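Taking logs makes the link to the earlier penalty on $\|w\|$ explicit (this intermediate step is implicit in the slides):

$$\log p(w) = -\frac{1}{2\sigma^2}\|w\|^2 + \text{const},$$

so maximizing $\log p(X \mid w) + \log p(w)$ is the same as maximizing the log-likelihood minus an L2 penalty with weight $1/(2\sigma^2)$; a smaller prior variance $\sigma^2$ therefore means stronger regularization.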

Penalized likelihood surface

[Figure: contour plots over $(w_1, w_2)$ of the log-likelihood $\log p(X \mid w)$, the log-prior $\log p(w; \sigma)$, and the penalized objective $\log \tilde{p}(X, w; \sigma)$.]

This is our objective function, and we can find its peak by gradient ascent (equivalently, gradient descent on its negation) as before.

  • Need to modify the calculation of the gradient and Hessian.
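As a sketch of that modification (the function names and the plain gradient-ascent loop are mine, not the lecture's), the gradient of the penalized objective just gains a $-w/\sigma^2$ term; a Newton update would analogously subtract $\frac{1}{\sigma^2} I$ from the Hessian:

    import numpy as np
    from scipy.special import expit

    def penalized_grad(w, X, y, sigma2):
        # Gradient of  sum_i log p(y_i | x_i; w) - ||w||^2 / (2 sigma^2)  with respect to w.
        p = expit(X @ w)
        return X.T @ (y - p) - w / sigma2

    def map_fit(X, y, sigma2, lr=0.1, iters=5000):
        # Simple gradient ascent on the penalized log-likelihood (again ignoring w0).
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            w += lr * penalized_grad(w, X, y, sigma2)
        return w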

The effect of regularization: separable data

$$\log \tilde{p}(X, w; \sigma) = \sum_{i=1}^{N} \log p(y_i \mid x_i; w) - \frac{1}{2\sigma^2} \|w\|^2$$

[Figure: the penalized log-likelihood over $(w_1, w_2)$ on separable data, for $\sigma^2 = 1$, $\sigma^2 = 0.5$, and $\sigma^2 = 0.1$.]
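To see this effect numerically, a short closing sketch (my illustration, not from the slides) runs gradient ascent on the penalized objective above, on separable toy data, for the three prior variances shown in the figure; the norm of the MAP solution shrinks as σ² decreases:

    import numpy as np
    from scipy.special import expit

    X = np.array([[2.0, 1.0], [3.0, -1.0], [-2.0, 0.5], [-3.0, -0.5]])   # separable toy data
    y = np.array([1.0, 1.0, 0.0, 0.0])

    for sigma2 in (1.0, 0.5, 0.1):
        w = np.zeros(2)
        for _ in range(20000):
            p = expit(X @ w)
            w += 0.05 * (X.T @ (y - p) - w / sigma2)   # ascent on the penalized objective
        print(f"sigma^2 = {sigma2:>4}:  ||w_MAP|| = {np.linalg.norm(w):.3f}")
    # Unlike the unregularized case, the maximizer is now finite, and it shrinks with sigma^2.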