Variance, Bernoulli and Binomials

Chris Piech, CS109

Handout #11, April 11th, 2016

Today we are going to finish up our conversation about functions that we apply to random variables. Last time we talked about expectation; today we will cover variance. Then we will introduce two common, naturally occurring random variable types.

Variance

Consider the following 3 distributions (PMFs):

All three have the same expected value, E[X] = 3, but the “spread” in the distributions is quite different. Variance is a formal quantification of “spread”.

If X is a random variable with mean μ, then the variance of X, denoted Var(X), is: Var(X) = E[(X − μ)^2]. When computing variance we often use a different form of the same equation: Var(X) = E[X^2] − E[X]^2. Intuitively, this is the weighted average squared distance of a sample from the mean.

Here are some useful identities for variance:

  • Var(aX + b) = a^2 Var(X)
  • Standard deviation is the square root of variance: SD(X) = √Var(X)
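As a quick numerical check of the Var(aX + b) identity (a minimal sketch; the fair-die population anticipates Example 1 below, and `statistics.pvariance` computes the variance of a set of equally likely outcomes):

```python
from fractions import Fraction
from statistics import pvariance

# Outcomes of a fair 6-sided die, each equally likely; Fraction keeps arithmetic exact.
die = [Fraction(k) for k in range(1, 7)]

a, b = 2, 3
shifted = [a * x + b for x in die]  # the random variable aX + b

# Var(aX + b) should equal a^2 * Var(X); b shifts the mean but not the spread.
assert pvariance(shifted) == a**2 * pvariance(die)
print(pvariance(die))      # 35/12
print(pvariance(shifted))  # 35/3
```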

Example 1

Let X = value on the roll of a 6-sided die. Recall that E[X] = 7/2. First let's calculate E[X^2]:

E[X^2] = (1^2)(1/6) + (2^2)(1/6) + (3^2)(1/6) + (4^2)(1/6) + (5^2)(1/6) + (6^2)(1/6) = 91/6

Which we can use to compute the variance:

Var(X) = E[X^2] − (E[X])^2 = 91/6 − (7/2)^2 = 35/12
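The same computation written out directly from the PMF (a minimal sketch; `Fraction` keeps every step exact):

```python
from fractions import Fraction

# PMF of a fair 6-sided die: each value 1..6 with probability 1/6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

ex = sum(k * p for k, p in pmf.items())      # E[X]   = 7/2
ex2 = sum(k**2 * p for k, p in pmf.items())  # E[X^2] = 91/6
var = ex2 - ex**2                            # Var(X) = 35/12

print(ex, ex2, var)  # 7/2 91/6 35/12
```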

Bernoulli

A Bernoulli random variable is a random indicator variable (1 = success, 0 = failure) that represents whether or not an experiment with success probability p resulted in success. Some example uses include a coin flip, a random binary digit, whether a disk drive crashed, or whether someone likes a Netflix movie.

Let X be a Bernoulli Random Variable X ∼ Ber(p).

E[X] = p

Var(X) = p(1 − p)

(The variance follows quickly: since X takes only the values 0 and 1, X^2 = X, so E[X^2] = p and Var(X) = p − p^2 = p(1 − p).)
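Both formulas can be verified directly from the two-point PMF (a sketch; the `bernoulli_stats` helper is illustrative, not from the handout):

```python
from fractions import Fraction

def bernoulli_stats(p):
    """Mean and variance of X ~ Ber(p), computed directly from the two-point PMF."""
    pmf = {0: 1 - p, 1: p}
    mean = sum(x * q for x, q in pmf.items())
    ex2 = sum(x**2 * q for x, q in pmf.items())  # equals mean, since 0^2 = 0 and 1^2 = 1
    return mean, ex2 - mean**2

mean, var = bernoulli_stats(Fraction(3, 10))
print(mean, var)  # 3/10 21/100
```

Note that var comes out to p(1 − p) = (3/10)(7/10) = 21/100, matching the closed form.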

Binomial

A Binomial random variable is a random variable that represents the number of successes in n successive independent trials of a Bernoulli experiment. Some example uses include the # of heads in n coin flips, or the # of disk drives that crashed in a 1000-computer cluster.

Let X be a Binomial Random Variable. X ∼ Bin(n, p) where p is the probability of success in a given trial.

P(X = k) = (n choose k) p^k (1 − p)^(n−k)

E[X] = np

Var(X) = np(1 − p)
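The PMF and both moment formulas can be sanity-checked with a short sketch (the `binomial_pmf` helper is illustrative; `math.comb` computes the binomial coefficient):

```python
from fractions import Fraction
from math import comb

def binomial_pmf(n, p, k):
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Exact checks for Bin(10, 3/10):
n, p = 10, Fraction(3, 10)
probs = [binomial_pmf(n, p, k) for k in range(n + 1)]

assert sum(probs) == 1                                   # PMF sums to 1
mean = sum(k * q for k, q in enumerate(probs))
var = sum(k**2 * q for k, q in enumerate(probs)) - mean**2
assert mean == n * p                                     # E[X] = np
assert var == n * p * (1 - p)                            # Var(X) = np(1 - p)
```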

Example 2

Let X = number of heads after a coin is flipped three times. X ∼ Bin(3, 0.5). What is the probability of each outcome?

P(X = 0) = (3 choose 0) p^0 (1 − p)^3 = 1/8

P(X = 1) = (3 choose 1) p^1 (1 − p)^2 = 3/8

P(X = 2) = (3 choose 2) p^2 (1 − p)^1 = 3/8

P(X = 3) = (3 choose 3) p^3 (1 − p)^0 = 1/8
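These four values can be checked exactly (a minimal sketch using `math.comb` from the standard library):

```python
from fractions import Fraction
from math import comb

# P(X = k) for X ~ Bin(3, 1/2), for k = 0, 1, 2, 3.
p = Fraction(1, 2)
probs = [comb(3, k) * p**k * (1 - p)**(3 - k) for k in range(4)]
print(probs)  # [Fraction(1, 8), Fraction(3, 8), Fraction(3, 8), Fraction(1, 8)]
```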

Example 3

When sending messages over a network there is a chance that the bits will become corrupt. A Hamming Code allows for a 4 bit code to be encoded as 7 bits, and maintains the property that if 0 or 1 bit(s) are corrupted then the message can be perfectly reconstructed. You are working on the Voyager space mission and the probability of any bit being lost in space is 0.1. How does reliability change when using a Hamming code?

Imagine we use error correcting codes. Let X ∼ Bin(7, 0.1).

P(X = 0) = (7 choose 0)(0.1)^0 (0.9)^7 ≈ 0.478

P(X = 1) = (7 choose 1)(0.1)^1 (0.9)^6 ≈ 0.372

P(X = 0) + P(X = 1) ≈ 0.850

What if we didn’t use error correcting codes? Let X ∼ Bin(4, 0.1).

P(X = 0) = (4 choose 0)(0.1)^0 (0.9)^4 ≈ 0.656

Using Hamming codes improves reliability by about 30% (0.850/0.656 ≈ 1.30).
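The full comparison can be reproduced numerically (a sketch; `binom_pmf` is an illustrative helper, not from the handout):

```python
from math import comb

def binom_pmf(n, p, k):
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# With a Hamming code: 7 bits sent, the message survives if at most 1 bit is corrupted.
with_code = binom_pmf(7, 0.1, 0) + binom_pmf(7, 0.1, 1)

# Without error correction: all 4 bits must arrive intact.
without_code = binom_pmf(4, 0.1, 0)

print(round(with_code, 3), round(without_code, 3))  # 0.85 0.656
```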

Disclaimer: This handout was made fresh just for you. Notice any mistakes? Let Chris know.