
Bayesian Wrap-Up

(probably)

5 minutes of math...

Marginal probabilities

If you have a joint PDF, f(X₁, X₂)...

... and want to know about the probability of just one RV (regardless of what happens to the others)

Marginal PDF of X₁ or X₂:

f(X₁) = ∫_{X₂} f(X₁, X₂) dX₂

f(X₂) = ∫_{X₁} f(X₁, X₂) dX₁
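As a minimal sketch (the joint table below is hypothetical, not from the slides), marginalizing a discrete joint distribution just means summing out the other variable — the discrete analogue of the integrals above:

```python
import numpy as np

# Hypothetical joint PMF over two discrete RVs X1 (rows) and X2 (columns).
# Entries sum to 1; the values are made up purely for illustration.
joint = np.array([[0.10, 0.20, 0.10],
                  [0.05, 0.30, 0.25]])

# Marginal of X1: sum out X2.
p_x1 = joint.sum(axis=1)   # -> [0.40, 0.60]

# Marginal of X2: sum out X1.
p_x2 = joint.sum(axis=0)   # -> [0.15, 0.50, 0.35]

print(p_x1, p_x2)
```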

5 minutes of math...

Conditional probabilities

Suppose you have a joint PDF, f(H, W)

Now you get to see one of the values, e.g., H = "183 cm"

What's your probability estimate of W, given this new knowledge?

f(W|H) = f(H, W) / f(H) = f(H, W) / ∫_w f(H, W) dw
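A sketch of the same idea on a discrete joint table (values are hypothetical): conditioning on an observed H means taking the row for that H and renormalizing it by the marginal f(H):

```python
import numpy as np

# Hypothetical joint PMF over H (rows) and W (columns); values are made up.
joint = np.array([[0.10, 0.15, 0.05],
                  [0.20, 0.30, 0.20]])

h_observed = 1                         # suppose we observe the second H value

# f(W | H) = f(H, W) / f(H), where f(H) = sum over w of f(H, w)
f_h = joint[h_observed].sum()          # marginal probability of the observed H
f_w_given_h = joint[h_observed] / f_h  # -> [0.2857..., 0.4285..., 0.2857...]

print(f_w_given_h, f_w_given_h.sum())  # the conditional sums to 1
```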

5 minutes of math...

From cond prob. rule, it’s 2 steps to Bayes’ rule:

(Often helps algebraically to think of “given that” operator, “|”, as a division operation)

Pr[W|H] = Pr[W, H] / Pr[H] = Pr[H|W] Pr[W] / Pr[H]
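A small numeric check of the rule, with made-up probabilities: given Pr[H|W], Pr[W], and Pr[H], Bayes' rule recovers Pr[W|H]:

```python
# Made-up numbers purely to illustrate the algebra of Bayes' rule.
p_w = 0.3                      # prior Pr[W]
p_h_given_w = 0.8              # likelihood Pr[H | W]
p_h_given_not_w = 0.2          # Pr[H | not W]

# Total probability: Pr[H] = Pr[H|W] Pr[W] + Pr[H|~W] Pr[~W]
p_h = p_h_given_w * p_w + p_h_given_not_w * (1 - p_w)

# Bayes' rule: Pr[W|H] = Pr[H|W] Pr[W] / Pr[H]
p_w_given_h = p_h_given_w * p_w / p_h
print(p_w_given_h)             # 0.24 / 0.38 ≈ 0.632
```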

Uncertainty over params

Maximum likelihood treats parameters as (unknown) constants

Job is just to pick the constants so as to maximize data likelihood

Full-blown Bayesian modeling treats params as random variables

PDF over parameter variables tells us how certain/uncertain we are about the location of that parameter

Also allows us to express prior beliefs (probabilities) about params

Example: Coin flipping

Have a “weighted” coin -- want to figure out θ=Pr[heads]

Maximum likelihood:

Flip coin a bunch of times, measure #heads; #tails

Use estimator to return a single value for θ

Bayesian (MAP):

Start w/ distribution over what θ might be

Flip coin a bunch of times, measure #heads; #tails

Update distribution, but never reduce to a single number
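A minimal sketch of both estimators for the coin, assuming a Beta prior over θ. The slides don't state which prior they use; Beta(2, 2) below is just an illustrative choice. Because the Beta prior is conjugate to the Bernoulli likelihood, the Bayesian update is simply adding the observed counts to the prior's parameters:

```python
# ML vs. MAP for theta = Pr[heads], assuming a Beta(a, b) prior.
# Beta(2, 2) is an arbitrary illustrative prior, not the one from the slides.
a, b = 2.0, 2.0

heads, tails = 1, 0            # e.g., one flip that came up heads

# Maximum likelihood: theta_ML = heads / (heads + tails)
theta_ml = heads / (heads + tails)            # 1.0 -- "certain" after one flip

# Bayesian update: the full posterior is Beta(a + heads, b + tails);
# that distribution is what gets carried forward, not a single number.
a_post, b_post = a + heads, b + tails

# MAP estimate = mode of the posterior Beta = (a_post - 1) / (a_post + b_post - 2)
theta_map = (a_post - 1) / (a_post + b_post - 2)   # 2/3 -- pulled toward the prior

print(theta_ml, theta_map)
```

The MAP value is just the mode of that posterior; the posterior Beta itself is the curve the following slides plot.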

Example: Coin flipping

[Plots of f(θ) vs. θ = Pr[heads]: MAP estimate and ML estimate after 1 heads, 0 tails. 1 flip total.]

Example: Coin flipping

[Plots of f(θ) vs. θ = Pr[heads]: MAP estimate and ML estimate after 2 heads, 3 tails. 5 flips total.]

Example: Coin flipping

[Plots of f(θ) vs. θ = Pr[heads]: MAP estimate and ML estimate after 8 heads, 12 tails. 20 flips total.]

Example: Coin flipping

[Plots of f(θ) vs. θ = Pr[heads]: MAP estimate and ML estimate after 16 heads, 34 tails. 50 flips total.]
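A sketch reproducing the progression above under an assumed Beta(2, 2) prior (again, the slides' actual prior isn't recoverable from the figures): as the flip counts grow, the MAP estimate converges toward the ML estimate and the prior's influence fades.

```python
# Head/tail counts from the four slides above.
counts = [(1, 0), (2, 3), (8, 12), (16, 34)]
a, b = 2.0, 2.0                      # assumed illustrative Beta prior

for heads, tails in counts:
    theta_ml = heads / (heads + tails)
    theta_map = (a + heads - 1) / (a + b + heads + tails - 2)
    print(f"{heads:2d}H/{tails:2d}T  ML={theta_ml:.3f}  MAP={theta_map:.3f}")
```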

How does it work?

Think of parameters as just another kind of random variable

Now your data distribution is Pr[X|Θ]

This is the generative distribution

A.k.a. observation distribution, sensor model, etc.

What we want is some model of the parameters as a function of the data

Get there with Bayes' rule:

Pr[Θ|X] = Pr[X|Θ] Pr[Θ] / Pr[X]
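As a sketch of this rule in action (not from the slides), here is a brute-force posterior over a grid of θ values for the coin, using the Bernoulli generative distribution Pr[X|Θ] and a uniform prior; Pr[X] is just whatever makes the result sum to 1:

```python
import numpy as np

thetas = np.linspace(0.01, 0.99, 99)          # grid of candidate parameter values
prior = np.ones_like(thetas) / len(thetas)    # uniform prior Pr[Theta]

heads, tails = 8, 12                          # observed data X

# Generative distribution Pr[X | Theta]: independent Bernoulli flips
likelihood = thetas**heads * (1 - thetas)**tails

# Bayes' rule: posterior proportional to likelihood * prior;
# Pr[X] is the sum that makes the posterior sum to 1.
unnormalized = likelihood * prior
evidence = unnormalized.sum()                 # Pr[X] on this grid
posterior = unnormalized / evidence

print(thetas[np.argmax(posterior)])           # posterior mode, ≈ 0.4 for 8H/12T
```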

What does that mean?

Let’s look at the parts:

Generative distribution, Pr[X|Θ]:

Describes how data is generated by the underlying process

Usually easy to write down (well, easier than the other parts, anyway)

Same old PDF/PMF we’ve been working with

Can be used to “generate” new samples of data that “look like” your training data
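A small sketch of the "generate new samples" point, using the coin's generative distribution with a hypothetical θ:

```python
import random

theta = 0.4                      # hypothetical parameter value
random.seed(0)

# Draw new data that "looks like" coin-flip training data with this theta.
samples = ["H" if random.random() < theta else "T" for _ in range(20)]
print("".join(samples))
```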

What does that mean?

The data prior, Pr[X]:

Expresses the probability of seeing data set X independent of any particular model

Huh?

What does that mean?

The data prior, Pr[X]:

Expresses the probability of seeing data set X independent of any particular model

Can get it from the joint data/parameter model:

Pr[X] = ∫_Θ Pr[X, Θ] dΘ = ∫_Θ Pr[X|Θ] Pr[Θ] dΘ

In practice, often don’t need it explicitly (why?)
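A sketch of why Pr[X] often isn't needed explicitly: it doesn't depend on Θ, so for ranking parameter values (or finding the MAP) you can work with the unnormalized product Pr[X|Θ] Pr[Θ] and only divide by Pr[X] if you need a proper distribution. The two candidate θ values and the prior below are made up for illustration:

```python
# Two candidate parameter values with a discrete prior (made-up numbers).
thetas = {0.3: 0.5, 0.7: 0.5}          # Pr[Theta]
heads, tails = 8, 12

# Unnormalized posterior weights: Pr[X|Theta] * Pr[Theta]
weights = {t: t**heads * (1 - t)**tails * p for t, p in thetas.items()}

# Pr[X] = sum over Theta of Pr[X|Theta] Pr[Theta]
evidence = sum(weights.values())

posterior = {t: w / evidence for t, w in weights.items()}
print(posterior)    # theta = 0.3 ends up far more probable than 0.7

# Dividing by Pr[X] rescales every weight by the same constant,
# which is why it is often unnecessary for comparison or MAP purposes.
```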