

These notes cover linearity of expectation for discrete real-valued random variables: the definition of expectation, the linearity of expectation theorem and its proof, and applications to distributions such as the binomial and geometric distributions. They also introduce the union bound, which is useful for upper bounding the probability that at least one of several events occurs, with examples and proofs throughout.
LNMB: Randomized Algorithms
Handout: Linearity of expectation
Date: March 1, 2021
Instructor: Nikhil Bansal
We assume that the reader is already familiar with notions such as random variables, probability spaces, and events (our random variables and probability spaces will be discrete and finite). Let X be a discrete real-valued random variable. The expectation or mean of X is

E[X] = \sum_x x \cdot Pr[X = x].
Example 1. Let X be an indicator random variable, which is 1 with probability p and 0 otherwise. Then E[X] = Pr[X = 1] = p.
Example 2. Given a coin with probability of heads p, let X denote the number of independent coin tosses until the first head appears. Then Pr[X = i] = p(1 − p)^{i−1} for i = 1, 2, . . ., and

E[X] = \sum_{i \geq 1} i \cdot p(1 − p)^{i−1} = 1/p.
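As a quick sanity check (an illustrative addition, not part of the original handout), the following Python sketch simulates the coin-tossing experiment and compares the empirical mean number of tosses with 1/p. The value p = 0.3 and the number of trials are arbitrary choices.

import random

def tosses_until_first_head(p):
    # Toss a coin with heads-probability p until the first head; return the toss count.
    count = 1
    while random.random() >= p:   # random() < p models "heads"
        count += 1
    return count

p = 0.3
trials = 100_000
empirical = sum(tosses_until_first_head(p) for _ in range(trials)) / trials
print(empirical, 1 / p)   # the two numbers should be close (about 3.33)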
A very important (but deceptively simple) theorem is the following.
Theorem 1. Let X, Y be random variables. Then E[X + Y] = E[X] + E[Y].
Key point: This does not require any assumption on X and Y (such as independence). This often allows us to compute the mean of very complex random variables in a much simpler way.
Proof.

E[X + Y] = \sum_{x,y} (x + y) Pr[X = x, Y = y]
         = \sum_x x \Big( \sum_y Pr[X = x, Y = y] \Big) + \sum_y y \Big( \sum_x Pr[X = x, Y = y] \Big)
         = \sum_x x \cdot Pr[X = x] + \sum_y y \cdot Pr[Y = y]
         = E[X] + E[Y].
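To see concretely that no independence is needed (again an illustrative addition, not from the handout), the sketch below uses two strongly dependent variables: X is a fair die roll and Y = 7 − X. The empirical means still satisfy E[X + Y] = E[X] + E[Y].

import random

trials = 100_000
xs = [random.randint(1, 6) for _ in range(trials)]   # X: a fair six-sided die
ys = [7 - x for x in xs]                              # Y = 7 - X, completely determined by X

mean = lambda values: sum(values) / len(values)
print(mean(xs), mean(ys), mean([x + y for x, y in zip(xs, ys)]))
# roughly 3.5, 3.5, and exactly 7.0, so E[X + Y] = E[X] + E[Y] despite the dependence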
Next, consider n independent tosses of a coin with probability p of heads, and let X denote the number of heads. Then Pr[X = k] = \binom{n}{k} p^k (1 − p)^{n−k}. Now, one can compute E[X] by evaluating \sum_k k \cdot Pr[X = k] directly. But a much simpler way is the following. Let X_i = 1 if the i-th coin is heads and X_i = 0 otherwise. Then X = \sum_i X_i, and as E[X_i] = p for each i, by linearity of expectation

E[X] = \sum_i E[X_i] = np.
More generally, if the i-th coin comes up heads with probability p_i (the coins need not be identical, or even independent), then

E[X] = \sum_i E[X_i] = \sum_i p_i.
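As a further illustration (added here, with arbitrary probabilities), the following sketch simulates n coins with different heads-probabilities p_i, estimates E[X] empirically via the indicators X_i, and compares the result with \sum_i p_i.

import random

p = [0.1, 0.5, 0.9, 0.25]    # heads-probabilities p_i (illustrative values)
trials = 100_000

total = 0
for _ in range(trials):
    # X = sum of indicators X_i, where X_i = 1 exactly when coin i is heads
    total += sum(1 for pi in p if random.random() < pi)

print(total / trials, sum(p))   # both should be close to 1.75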
\sum_{i=0}^{n} i \cdot Pr[L = i]. But these probabilities get quite complicated and this becomes a mess.

Attempt 2: Here is a much simpler way, using linearity of expectation. Let X_i(π) = 1 if ball i is lucky in π, and 0 otherwise. Then X_i is a random variable, and for a random permutation π we have Pr_π[X_i = 1] = 1/n. This holds for all i. As X(π) = \sum_i X_i(π) for any π, by linearity of expectation

E[X] = \sum_i E[X_i] = n \cdot (1/n) = 1.
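The setup of this example is not visible in this excerpt, so the following sketch is only an illustration of the indicator method; it assumes that "ball i is lucky" means the random permutation π places ball i in position i (a fixed point), which is consistent with Pr_π[X_i = 1] = 1/n and E[X] = 1.

import random

n = 10
trials = 100_000

total_lucky = 0
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)   # a uniformly random permutation pi of {0, ..., n-1}
    # assumed definition: ball i is "lucky" if pi fixes position i
    total_lucky += sum(1 for i in range(n) if perm[i] == i)

print(total_lucky / trials)   # should be close to 1, matching E[X] = 1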
\sum_i E[X_i] = n (1/n + 1/(n−1) + · · · + 1) = n H_n ≈ n \log n.
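The setup for this calculation is also not included in the excerpt; the value nH_n is the classic coupon-collector quantity, so, purely as an illustration under that assumption, the sketch below simulates drawing uniformly among n coupon types until all have been seen and compares the average number of draws with nH_n.

import random

def draws_to_collect_all(n):
    # Draw uniformly from n coupon types until every type has appeared at least once.
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

n = 20
trials = 20_000
H_n = sum(1 / k for k in range(1, n + 1))
empirical = sum(draws_to_collect_all(n) for _ in range(trials)) / trials
print(empirical, n * H_n)   # both should be close (about 72 for n = 20)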
\sum_{i=1}^{n} Y_i(π), so E[X] = \sum_i E[Y_i]. We now calculate E[Y_i]. For any fixed element i, we claim that the probability that i lies in a cycle of length k is exactly 1/n, for each k = 1, 2, . . . , n (it is surprising that this does not depend on k). To see this, imagine tracking the permutation starting from i. Chance it is of length