

Chris Piech, CS
Handout #, April 11th, 2016
Today we are going to finish up our conversation of functions that we apply to random variables. Last time we talked about expectation, today we will cover variance. Then we will introduce two common, naturally occurring random variable types.
Consider the following 3 distributions (PMFs)
All three have the same expected value, E[X] = 3 but the “spread” in the distributions is quite different. Variance is a formal quantification of “spread”.
If X is a random variable with mean μ, then the variance of X, denoted Var(X), is: Var(X) = E[(X − μ)^2]. When computing the variance, we often use an equivalent form of the same equation: Var(X) = E[X^2] − (E[X])^2. Intuitively, variance is the weighted average squared distance of a sample from the mean.
Here is a useful identity for variance: for constants a and b,

Var(aX + b) = a^2 Var(X)

Adding a constant b shifts the distribution but does not change its spread; scaling by a scales the variance by a^2.
Let X = the value on a roll of a 6-sided die. Recall that E[X] = 7/2. First let's calculate E[X^2]:

E[X^2] = (1^2)(1/6) + (2^2)(1/6) + (3^2)(1/6) + (4^2)(1/6) + (5^2)(1/6) + (6^2)(1/6) = 91/6

Which we can use to compute the variance:

Var(X) = E[X^2] − (E[X])^2 = 91/6 − (7/2)^2 = 35/12
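The die computation above can be checked in a few lines of Python. This is a sketch, not part of the original handout; the helper name `variance` is ours. It also verifies the scaling identity Var(aX + b) = a^2 Var(X) numerically.

```python
# Var(X) = E[X^2] - E[X]^2 for a PMF stored as {value: probability}.
def variance(pmf):
    mean = sum(x * p for x, p in pmf.items())
    mean_of_square = sum(x * x * p for x, p in pmf.items())
    return mean_of_square - mean ** 2

# A fair 6-sided die: each value 1..6 has probability 1/6.
die = {x: 1 / 6 for x in range(1, 7)}
print(variance(die))  # 35/12, about 2.9167

# Checking Var(aX + b) = a^2 Var(X) with a = 3, b = 5:
# the shift b changes nothing; the scale a multiplies variance by a^2.
scaled = {3 * x + 5: 1 / 6 for x in range(1, 7)}
print(variance(scaled))  # 9 * 35/12 = 26.25
```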
A Bernoulli random variable is a random indicator variable (1 = success, 0 = failure) that represents whether an experiment with success probability p resulted in success. Example uses include a coin flip, a random binary digit, whether a disk drive crashed, or whether someone likes a Netflix movie.
Let X be a Bernoulli Random Variable X ∼ Ber(p).
E[X] = p Var(X) = p( 1 − p)
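To connect these formulas to simulation, here is a short sketch (not from the handout; the choice of p = 0.3, the trial count, and the seed are all arbitrary) that estimates E[X] and Var(X) for X ∼ Ber(p) and compares them with the closed forms p and p(1 − p):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
p = 0.3
n_trials = 200_000

# Each sample is 1 with probability p, else 0.
samples = [1 if random.random() < p else 0 for _ in range(n_trials)]

mean = sum(samples) / n_trials
var = sum((x - mean) ** 2 for x in samples) / n_trials

print(round(mean, 2))  # should be close to p = 0.3
print(round(var, 2))   # should be close to p(1 - p) = 0.21
```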
A Binomial random variable is a random variable that represents the number of successes in n successive independent trials of a Bernoulli experiment. Example uses include the number of heads in n coin flips, or the number of disk drives crashed in a 1000-computer cluster.
Let X be a Binomial Random Variable. X ∼ Bin(n, p) where p is the probability of success in a given trial.
P(X = k) = (n choose k) p^k (1 − p)^(n−k)
E[X] = np    Var(X) = np(1 − p)
Let X = the number of heads after a coin is flipped three times. X ∼ Bin(3, 0.5). What is the probability of each outcome?

P(X = 0) = (3 choose 0) p^0 (1 − p)^3 = 1/8
P(X = 1) = (3 choose 1) p^1 (1 − p)^2 = 3/8
P(X = 2) = (3 choose 2) p^2 (1 − p)^1 = 3/8
P(X = 3) = (3 choose 3) p^3 (1 − p)^0 = 1/8
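These probabilities are easy to reproduce in code. The sketch below is our own helper (not from the handout), built on Python's math.comb for the binomial coefficient:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Three flips of a fair coin: X ~ Bin(3, 0.5)
for k in range(4):
    print(k, binomial_pmf(k, 3, 0.5))
# probabilities come out to 1/8, 3/8, 3/8, 1/8
```

Note that the four probabilities sum to 1, as any PMF must.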
When sending messages over a network there is a chance that the bits will become corrupt. A Hamming Code allows for a 4 bit code to be encoded as 7 bits, and maintains the property that if 0 or 1 bit(s) are corrupted then the message can be perfectly reconstructed. You are working on the Voyager space mission and the probability of any bit being lost in space is 0.1. How does reliability change when using a Hamming code?
Imagine we use the error-correcting code. Let X ∼ Bin(7, 0.1). The message is decoded correctly if at most one bit is corrupted:

P(X = 0) + P(X = 1) = (0.9)^7 + 7(0.1)(0.9)^6 ≈ 0.8503

What if we didn't use the error-correcting code? Let X ∼ Bin(4, 0.1). Then all four bits must arrive intact:

P(X = 0) = (0.9)^4 ≈ 0.6561

Using the Hamming code improves reliability by about 30% (0.8503 / 0.6561 ≈ 1.30).
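The arithmetic behind the 30% figure can be sketched in a few lines of Python (the helper binomial_pmf is ours, not from the handout). With the Hamming code, the message survives if at most one of the 7 transmitted bits is corrupted; without it, all 4 raw bits must arrive intact:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

p_corrupt = 0.1  # probability any single bit is lost in space

# With the (7, 4) Hamming code: decodable if 0 or 1 of 7 bits flip.
p_ok_hamming = binomial_pmf(0, 7, p_corrupt) + binomial_pmf(1, 7, p_corrupt)

# Without coding: all 4 raw bits must survive.
p_ok_raw = binomial_pmf(0, 4, p_corrupt)

print(round(p_ok_hamming, 4))             # about 0.8503
print(round(p_ok_raw, 4))                 # about 0.6561
print(round(p_ok_hamming / p_ok_raw, 2))  # about 1.30, i.e. ~30% more reliable
```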
Disclaimer: This handout was made fresh just for you. Notice any mistakes? Let Chris know.