Tutorial on Probability and Estimation - Introduction to Machine Learning - Lecture 01 (Computer Science lecture notes)

Topics: probability; discrete vs. continuous random variables; probabilistic models; probability distributions; sequence probability; the parameter estimation problem; the maximum likelihood estimator for the Bernoulli; Bayes rule; MAP estimation; the Beta prior for the Bernoulli; conjugate priors; Beta and Gaussian distributions.



Lecture 1: Tutorial on probability and estimation

TTIC 31020: Introduction to Machine Learning

Instructor: Greg Shakhnarovich, Lecture by Dhruv Batra

TTI–Chicago

September 27, 2010

Welcome

TTIC 31020, Introduction to Machine Learning

MWF 9:30-10:20am

Instructor: Greg Shakhnarovich, greg@ttic.edu

TA: Feng Zhao, lfzhao@ttic.edu

Greg is traveling this week; all administrative details will be discussed next Monday.

Why probability?

This class is mostly about statistical methods and models in machine learning.

Probability is fundamental for dealing with the uncertainty inherent in real-world problems.

Statistics leverages the laws of probability to estimate important properties of the world from data and to make intelligent predictions about the future.

Background

Things you should have seen before:

  • Discrete vs. continuous random variables
  • pmf vs. pdf (see the sketch below)
  • Joint vs. marginal vs. conditional distributions
  • IID: Independent, Identically Distributed
  • Bayes rule and priors

This refresher will review these topics.
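To make the pmf vs. pdf distinction concrete, here is a minimal sketch (our own illustration, not from the slides; it assumes SciPy is installed): a pmf assigns an actual probability to each value of a discrete random variable, while a pdf gives a density that yields probabilities only when integrated over an interval.

    from scipy import stats

    # Discrete RV: Bernoulli with parameter mu. The pmf assigns a probability
    # to each of the two values {0, 1}; these probabilities sum to 1.
    mu = 0.5
    bern = stats.bernoulli(mu)
    print(bern.pmf(0), bern.pmf(1))            # 0.5 0.5

    # Continuous RV: standard Gaussian. The pdf is a density, not a
    # probability (densities can exceed 1); probabilities come from
    # integrating the pdf, i.e., from differences of the cdf.
    gauss = stats.norm(loc=0.0, scale=1.0)
    print(gauss.pdf(0.0))                      # ~0.3989, density at 0
    print(gauss.cdf(1.0) - gauss.cdf(-1.0))    # ~0.683 = Pr(-1 <= X <= 1)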

Problem: estimating bias in coin toss

A single coin toss produces H or T.

A sequence of n coin tosses produces a sequence of values; e.g., for n = 4:

T, H, T, H
H, H, T, T
T, T, T, H

A probabilistic model allows us to model the uncertainty inherent in the process (randomness in tossing a coin), as well as our uncertainty about the properties of the source (the fairness of the coin).
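Such toss sequences are easy to simulate; below is a minimal sketch (our own illustration, assuming NumPy is available) that draws n = 4 tosses from a coin with a hypothetical bias mu:

    import numpy as np

    rng = np.random.default_rng(0)        # seeded for reproducibility
    mu = 0.7                              # hypothetical bias: Pr(H) = mu
    n = 4
    tosses = rng.random(n) < mu           # uniform draws; True = H, False = T
    print(["H" if t else "T" for t in tosses])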

Probabilistic model

First, for convenience, convert H → 1, T → 0.

  • We have a random variable X taking values in {0, 1}

Bernoulli distribution with parameter μ:

Pr(X = 1; μ) = μ.

For simplicity we will write p(x) or p(x; μ) instead of Pr(X = x; μ).

The parameter μ ∈ [0, 1] specifies the bias of the coin.

  • The coin is fair if μ = 1/2.
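For reference, the two cases combine into the standard compact form of the Bernoulli pmf, p(x; μ) = μ^x (1 − μ)^(1−x) for x ∈ {0, 1} (standard notation, though not written out on this slide). A minimal sketch in plain Python, with an illustrative helper name of our own choosing:

    def bernoulli_pmf(x, mu):
        # p(x; mu) = mu^x * (1 - mu)^(1 - x) for x in {0, 1}
        assert x in (0, 1) and 0.0 <= mu <= 1.0
        return mu ** x * (1.0 - mu) ** (1 - x)

    print(bernoulli_pmf(1, 0.5))   # 0.5: fair coin, Pr(X = 1) = mu
    print(bernoulli_pmf(0, 0.7))   # ~0.3: Pr(X = 0) = 1 - mu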