Lecture 9: Optimal classification, logistic regression
TTIC 31020: Introduction to Machine Learning
Instructor: Greg Shakhnarovich
TTI–Chicago
October 15, 2010


Review

Decision boundary set by $\hat{w}^T x + w_0 = 0$.

[Figures: examples of linear decision boundaries $\hat{w}^T x + w_0 = 0$ in the data space.]

Linear classifiers

$$\hat{y} = h(x) = \operatorname{sign}\left(w_0 + w^T x\right)$$

Classifying using a linear decision boundary effectively reduces the data dimension to 1.

Need to find $w$ (direction) and $w_0$ (location) of the boundary.

Want to minimize the expected zero/one loss for classifier $h: \mathcal{X} \to \mathcal{Y}$, which for $(x, y)$ is

$$L(h(x), y) = \begin{cases} 0 & \text{if } h(x) = y, \\ 1 & \text{if } h(x) \neq y. \end{cases}$$
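A minimal sketch in Python (the function name zero_one_loss and the example arrays are hypothetical, not from the slides) of the zero/one loss and the empirical risk, i.e. the error rate, it induces on a labelled sample:

```python
import numpy as np

def zero_one_loss(y_pred, y_true):
    """Zero/one loss: 0 where the prediction matches the label, 1 elsewhere."""
    return (y_pred != y_true).astype(float)

# Hypothetical predictions and labels for five examples.
y_pred = np.array([1, 0, 1, 1, 0])
y_true = np.array([1, 1, 1, 0, 0])

# The empirical risk under zero/one loss is just the misclassification rate.
print(zero_one_loss(y_pred, y_true).mean())  # 0.4
```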


Risk of a classifier

The risk (expected loss) of a $C$-way classifier $h(x)$:

$$R(h) = \mathbb{E}_{x,y}\left[L(h(x), y)\right] = \int_x \sum_{c=1}^{C} L(h(x), c)\, p(x, y = c)\, dx$$

$$= \int_x \left[\, \sum_{c=1}^{C} L(h(x), c)\, p(y = c \mid x) \right] p(x)\, dx$$

Clearly, it is enough to minimize the conditional risk for any $x$ (since $p(x) \geq 0$, choosing $h(x)$ to minimize the bracketed sum at each $x$ minimizes the integral):

$$R(h \mid x) = \sum_{c=1}^{C} L(h(x), c)\, p(y = c \mid x).$$


Conditional risk of a classifier

$$R(h \mid x) = \sum_{c=1}^{C} L(h(x), c)\, p(y = c \mid x)$$

$$= 0 \cdot p(y = h(x) \mid x) + 1 \cdot \sum_{c \neq h(x)} p(y = c \mid x)$$

$$= \sum_{c \neq h(x)} p(y = c \mid x) = 1 - p(y = h(x) \mid x).$$

To minimize the conditional risk given $x$, the classifier must decide

$$h(x) = \operatorname{argmax}_c\; p(y = c \mid x).$$

This is the best possible classifier in terms of generalization, i.e., the expected misclassification rate on new examples.
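A sketch of this rule in Python (the posteriors below are hypothetical, hard-coded values): under zero/one loss the Bayes-optimal decision is the argmax over class posteriors, and the conditional risk it attains is $1 - \max_c p(y = c \mid x)$.

```python
import numpy as np

# Hypothetical class posteriors p(y = c | x) for 4 inputs and C = 3 classes;
# each row sums to 1.
posteriors = np.array([
    [0.70, 0.20, 0.10],
    [0.10, 0.50, 0.40],
    [0.30, 0.30, 0.40],
    [0.25, 0.50, 0.25],
])

# Bayes-optimal decision under zero/one loss: pick the most probable class.
h = posteriors.argmax(axis=1)

# Conditional risk of that decision: 1 - p(y = h(x) | x).
conditional_risk = 1.0 - posteriors.max(axis=1)

print(h)                 # [0 1 2 1]
print(conditional_risk)  # [0.3 0.5 0.6 0.5]
```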

Log-odds ratio

Optimal rule $h(x) = \operatorname{argmax}_c p(y = c \mid x)$ is equivalent to

$$h(x) = c^* \;\Leftrightarrow\; \frac{p(y = c^* \mid x)}{p(y = c \mid x)} \geq 1 \;\;\forall c \;\Leftrightarrow\; \log \frac{p(y = c^* \mid x)}{p(y = c \mid x)} \geq 0 \;\;\forall c$$

For the binary case,

$$h(x) = 1 \;\Leftrightarrow\; \log \frac{p(y = 1 \mid x)}{p(y = 0 \mid x)} \geq 0.$$


The logistic model

We can model the (unknown) decision boundary directly:

$$\log \frac{p(y = 1 \mid x)}{p(y = 0 \mid x)} = w_0 + w^T x = 0.$$

Since $p(y = 1 \mid x) = 1 - p(y = 0 \mid x)$, we have (after exponentiating):

$$\frac{p(y = 1 \mid x)}{1 - p(y = 1 \mid x)} = \exp(w_0 + w^T x) = 1$$

$$\frac{1}{p(y = 1 \mid x)} = 1 + \exp(-w_0 - w^T x) = 2$$

$$\Rightarrow\; p(y = 1 \mid x) = \frac{1}{1 + \exp(-w_0 - w^T x)} = \frac{1}{2}.$$
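A minimal sketch of this model in Python (the helper names sigmoid and p_y1, and the weights, are hypothetical): the posterior $p(y = 1 \mid x)$ is the logistic function applied to the linear score $w_0 + w^T x$, so it equals 1/2 exactly on the boundary.

```python
import numpy as np

def sigmoid(a):
    """Logistic function: sigma(a) = 1 / (1 + exp(-a))."""
    return 1.0 / (1.0 + np.exp(-a))

def p_y1(x, w, w0):
    """Logistic model: p(y = 1 | x) = sigma(w0 + w^T x)."""
    return sigmoid(w0 + x @ w)

# Hypothetical 2D parameters; a point on the boundary w0 + w^T x = 0 gets posterior 1/2.
w, w0 = np.array([2.0, -1.0]), 0.5
x_on_boundary = np.array([0.25, 1.0])   # 0.5 + 2*0.25 - 1*1.0 = 0
print(p_y1(x_on_boundary, w, w0))       # 0.5
```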

The logistic function

$$p(y = 1 \mid x) = \frac{1}{1 + \exp(-w_0 - w^T x)}$$

The logistic function $\sigma(x) = \frac{1}{1 + e^{-x}}$: for any $x$, $0 \leq \sigma(x) \leq 1$; monotonic, with $\sigma(-\infty) = 0$ and $\sigma(+\infty) = 1$.

$\sigma(0) = 1/2$. To shift the crossing to an arbitrary $z$: $\sigma(x - z)$. To change the “slope”: $\sigma(ax)$.

[Figure: plots of $\sigma(x)$, $\sigma(x - 2)$, $\sigma(2x)$, and $\sigma(0.5x + 1)$.]
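A quick numeric check of these transformations (a sketch; sigmoid here is a hypothetical helper, as above):

```python
import numpy as np

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

print(sigmoid(0.0))                      # 0.5: the 1/2 crossing is at 0
print(sigmoid(3.0 - 3.0))                # 0.5: sigma(x - z) moves the crossing to z = 3
print(sigmoid(2.0 * 0.5), sigmoid(0.5))  # ~0.731 vs ~0.622: sigma(2x) is steeper than sigma(x)
```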

Logistic function in $\mathbb{R}^d$

What if $x \in \mathbb{R}^d$, $x = [x_1 \ldots x_d]^T$?

$\sigma(w_0 + w^T x)$ is a scalar function of the scalar variable $w_0 + w^T x$.

The direction of $w$ determines the orientation; $w_0$ determines the location; $\|w\|$ determines the slope.

Logistic regression: decision boundary

$$p(y = 1 \mid x) = \sigma(w_0 + w^T x) = 1/2 \;\Leftrightarrow\; w_0 + w^T x = 0$$

With the linear logistic model we get a linear decision boundary.

[Figure: a 2D example showing the weight vector $w$ and the linear decision boundary $w_0 + w^T x = 0$.]
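To make the equivalence concrete, a sketch in Python (with hypothetical weights and points) showing that thresholding the posterior at 1/2 gives the same labels as checking the sign of the linear score:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical parameters of a linear logistic model in R^2.
w, w0 = np.array([1.5, -2.0]), 0.25

# A few hypothetical input points, one per row.
X = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [2.0, 0.5]])
scores = w0 + X @ w                             # w0 + w^T x for each point

# The two decision rules agree: sigma(score) >= 1/2  <=>  score >= 0.
labels_from_posterior = (sigmoid(scores) >= 0.5).astype(int)
labels_from_sign = (scores >= 0).astype(int)
print(labels_from_posterior, labels_from_sign)  # identical: [1 0 1] [1 0 1]
```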