
Chapter 2

Linearity of Expectation

Linearity of expectation basically says that the expected value of a sum of random variables is equal to the sum of the individual expectations. Its importance can hardly be overestimated for the area of randomized algorithms and probabilistic methods. Its main power lies in the facts that it

(i) is applicable to sums of any random variables (independent or not), and

(ii) often allows simple “local” arguments instead of “global” ones.

2.1 Basics

For some given (discrete) probability space Ω, any mapping X : Ω → Z is called a (numerical) random variable. The expected value of X is given by

$$E[X] = \sum_{\omega \in \Omega} X(\omega) \Pr[\omega] = \sum_{x \in \mathbb{Z}} x \Pr[X = x],$$

provided that $\sum_{x \in \mathbb{Z}} |x| \Pr[X = x]$ converges.
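To make the definition concrete, here is a minimal sketch (our own illustration, not part of the notes; the function name expectation is an arbitrary choice) that evaluates $E[X] = \sum_x x \Pr[X = x]$ for a finite distribution given as a dictionary:

```python
from fractions import Fraction

def expectation(pmf):
    """E[X] = sum of x * Pr[X = x] over the support of a finite pmf,
    given as a dict mapping values to probabilities."""
    return sum(x * p for x, p in pmf.items())

# Single fair die: Pr[X = x] = 1/6 for x = 1, ..., 6.
die = {x: Fraction(1, 6) for x in range(1, 7)}
print(expectation(die))  # 7/2
```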

Example 2.1. Let X1 and X2 denote two independent rolls of a fair die. What is the expected value of the sum X = X1 + X2? We use the definition, calculate, and obtain

$$E[X] = 2 \cdot \frac{1}{36} + 3 \cdot \frac{2}{36} + \cdots + 12 \cdot \frac{1}{36} = 7.$$

As stated already, linearity of expectation allows us to compute the expected value of a sum of random variables by computing the sum of the individual expectations.

Theorem 2.2. Let X1, . . . , Xn be any finite collection of discrete random variables and let $X = \sum_{i=1}^{n} X_i$. Then we have

$$E[X] = E\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E[X_i].$$

Proof. We use the definition, reorder the sum (which is permissible since it is finite), and obtain

$$E[X] = \sum_{\omega \in \Omega} X(\omega) \Pr[\omega] = \sum_{\omega \in \Omega} \bigl(X_1(\omega) + \cdots + X_n(\omega)\bigr) \Pr[\omega] = \sum_{i=1}^{n} \sum_{\omega \in \Omega} X_i(\omega) \Pr[\omega] = \sum_{i=1}^{n} E[X_i],$$

which was claimed.

It can be shown that linearity of expectation also holds for countably infinite summations in certain cases. For example, it holds that

$$E\left[\sum_{i=1}^{\infty} X_i\right] = \sum_{i=1}^{\infty} E[X_i]$$

if $\sum_{i=1}^{\infty} E[|X_i|]$ converges.
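One useful consequence (a standard fact, added here as an illustration) is the tail-sum formula: if X takes values in {0, 1, 2, . . .} with E[X] < ∞, write $X = \sum_{i=1}^{\infty} X_i$, where Xi is one if X ≥ i and zero otherwise. Then $\sum_{i=1}^{\infty} E[|X_i|] = E[X]$ converges, and countable linearity gives

$$E[X] = \sum_{i=1}^{\infty} E[X_i] = \sum_{i=1}^{\infty} \Pr[X \ge i].$$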

Example 2.3. Recalling Example 2.1, we first compute E[X1] = E[X2] = 1 · 1/6 + · · · + 6 · 1/6 = 7/2 and hence

$$E[X] = E[X_1] + E[X_2] = \frac{7}{2} + \frac{7}{2} = 7.$$
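For a quick numerical check (a sketch we add here; the sample size and seed are arbitrary), one can compare the empirical mean of the sum of two dice with the value predicted by linearity:

```python
import random

random.seed(0)
trials = 10**6

# Empirical mean of X = X1 + X2 for two independent fair dice.
total = sum(random.randint(1, 6) + random.randint(1, 6) for _ in range(trials))
print("empirical E[X] ~", total / trials)  # close to 7

# Via linearity: E[X1] + E[X2] = 7/2 + 7/2 = 7, with no independence needed.
print("E[X1] + E[X2]  =", 3.5 + 3.5)
```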

Admittedly, in such a trivial example, the power of linearity of expectation can hardly be seen. This should, however, change in the applications to come.

2.2 Applications

2.2.1 Balls Into Bins

Many problems in computer science and combinatorics can be formulated in terms of a Balls into Bins process. We give some examples here. Suppose we have m balls, labeled i = 1, . . . , m, and n bins, labeled j = 1, . . . , n. Each ball is thrown into one of the bins independently and uniformly at random.

Theorem 2.4. Let Xj denote the number of balls in bin j. Then, for j = 1, . . . , n, we have E[Xj] = m/n.

Proof. Define an indicator variable

$$X_{i,j} = \begin{cases} 1 & \text{if ball } i \text{ falls into bin } j, \\ 0 & \text{otherwise.} \end{cases}$$

Since each ball is thrown independently and uniformly at random, Pr[Xi,j = 1] = 1/n and hence E[Xi,j] = 1/n. Because $X_j = \sum_{i=1}^{m} X_{i,j}$, linearity of expectation yields

$$E[X_j] = \sum_{i=1}^{m} E[X_{i,j}] = \frac{m}{n},$$

as claimed.
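The following sketch (an illustration we add; m, n, the trial count, and the seed are arbitrary choices) throws m balls into n bins uniformly at random and compares the average load of a fixed bin with m/n:

```python
import random

random.seed(0)
m, n, trials = 50, 10, 10_000

# Throw m balls into n bins uniformly at random and record the load of bin 0.
total_load = 0
for _ in range(trials):
    total_load += sum(1 for _ in range(m) if random.randrange(n) == 0)

print("empirical E[X_j] ~", total_load / trials)  # close to m/n
print("m / n            =", m / n)                # 5.0
```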

2.2.2 Coupon Collector

In the Coupon Collector problem there are n types of coupons, and each lot bought contains one coupon whose type is chosen independently and uniformly at random. How many lots do we have to buy until we have collected at least one coupon of each type?

Theorem 2.7. Let X be the number of lots bought until at least one coupon of each type is drawn. Then we have

$$E[X] = n \cdot H_n,$$

where $H_n = \sum_{i=1}^{n} 1/i$ denotes the n-th Harmonic number.

Proof. We partition the process of drawing lots into phases. In phase Pi, for i = 1, . . . , n, we have already collected i − 1 distinct coupons, and the phase ends once we have drawn i distinct coupons. Let Xi be the number of lots bought in phase Pi.

Suppose we are in phase Pi. The probability that the next lot terminates this phase is (n − i + 1)/n: there are n − (i − 1) coupon types we have not yet collected, any of those would be the i-th distinct type to be collected (since we have exactly i − 1 at the moment), and each individual type is drawn with probability 1/n. These considerations imply that the random variable Xi has a geometric distribution with success probability (n − i + 1)/n, i.e., Xi ∼ Geo((n − i + 1)/n). Its expected value is the reciprocal, i.e., E[Xi] = n/(n − i + 1). Now we invoke linearity of expectation and obtain

$$E[X] = E\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E[X_i] = \sum_{i=1}^{n} \frac{n}{n-i+1} = n \sum_{k=1}^{n} \frac{1}{k} = n \cdot H_n,$$

as claimed.
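The claim that E[Xi] is the reciprocal of the success probability can be verified with the tail-sum formula sketched in Section 2.1 (this short derivation is our addition): for X ∼ Geo(p) we have Pr[X ≥ k] = (1 − p)^(k−1), since the first k − 1 lots must all fail, and therefore

$$E[X] = \sum_{k=1}^{\infty} \Pr[X \ge k] = \sum_{k=1}^{\infty} (1-p)^{k-1} = \frac{1}{1-(1-p)} = \frac{1}{p}.$$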

Recall that log n ≤ Hn ≤ log n + 1 (natural logarithm). Thus we basically have to buy about n log n lots in expectation.
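A quick simulation (an illustrative sketch; n, the trial count, and the seed are arbitrary choices) confirms that the empirical average matches n · Hn:

```python
import random

random.seed(0)
n, trials = 20, 20_000

def lots_until_complete(n):
    """Buy lots until all n coupon types have been drawn at least once."""
    seen, lots = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        lots += 1
    return lots

avg = sum(lots_until_complete(n) for _ in range(trials)) / trials
harmonic = sum(1 / i for i in range(1, n + 1))
print("empirical E[X] ~", avg)            # roughly 72 for n = 20
print("n * Hn         =", n * harmonic)   # about 71.95
```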

2.2.3 Quicksort

The problem of Sorting is the following: We are given a sequence x = (x1, x2, . . . , xn) of (pairwise distinct) numbers and are asked to find a permutation π of (1, 2, . . . , n) such that the sequence (xπ(1), xπ(2), . . . , xπ(n)) satisfies xπ(1) ≤ xπ(2) ≤ · · · ≤ xπ(n). The assumption that the numbers are pairwise distinct can easily be removed, but we keep it for clarity of exposition. Any algorithm for this problem is allowed to ask queries of the type “a < b?”, called a comparison. Let rt_A(x) be the number of comparisons of an algorithm A given a sequence x.

The idea of the algorithm Quicksort is to choose some element p from x, called the pivot, and divide x into two subsequences x′ and x′′. The sequence x′ contains the elements xi < p and x′′ those xi > p. Quicksort is then called recursively on x′ and x′′ until the input sequence is empty. The sequence (Quicksort(x′), p, Quicksort(x′′)) is finally returned.

The exposition of Quicksort given here is not yet a well-defined algorithm, because we have not said how to choose the pivot element p. This choice drastically affects the running time, as we will see shortly. In the sequel, let X denote the number of comparisons “<” executed by Quicksort.

Deterministic Algorithm

Suppose we always choose the first element of the input sequence, i.e., p = x1. It is well known that this variant of Quicksort has the weakness that it may require $\Omega(n^2)$ comparisons.

Observation 2.8. There is an instance with X = n(n − 1)/2.

Algorithm 2.1 Quicksort

Input. Sequence (x1, x2, . . . , xn)

Output. Sequence (xπ(1), xπ(2), . . . , xπ(n))

(1) If n = 0 return.

(2) Otherwise choose p ∈ x arbitrarily and remove p from x. Let x′ and x′′ be two empty sequences.

(3) For i = 1, . . . , n: if xi < p append xi to x′, otherwise append xi to x′′.

(4) Return (Quicksort(x′), p, Quicksort(x′′)).

Proof. Consider x = (1, 2, . . . , n). Then, in step (3), x′ remains empty while x′′ contains n − 1 elements. This step requires n − 1 comparisons. By induction, the recursive calls Quicksort(x′) and Quicksort(x′′) require 0 and (n − 1)(n − 2)/2 comparisons “<”, respectively. Thus, the whole algorithm needs X = n − 1 + (n − 1)(n − 2)/2 = n(n − 1)/2 comparisons.
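The following is a direct Python transcription of Algorithm 2.1 (a sketch; the comparison counter and the choose_pivot parameter are our additions for instrumentation). Running the deterministic first-element variant on sorted input reproduces the n(n − 1)/2 comparisons of Observation 2.8:

```python
def quicksort(x, choose_pivot):
    """Algorithm 2.1: returns (sorted sequence, number of '<' comparisons)."""
    if not x:                                  # step (1)
        return [], 0
    x = list(x)
    p = x.pop(choose_pivot(x))                 # step (2)
    left, right = [], []
    for xi in x:                               # step (3): len(x) comparisons
        (left if xi < p else right).append(xi)
    left_sorted, c1 = quicksort(left, choose_pivot)    # step (4)
    right_sorted, c2 = quicksort(right, choose_pivot)
    return left_sorted + [p] + right_sorted, len(x) + c1 + c2

# Deterministic variant: always choose the first element as the pivot.
_, comparisons = quicksort(list(range(1, 101)), lambda seq: 0)
print(comparisons)  # 100 * 99 / 2 = 4950 on already-sorted input
```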

Randomized Algorithm

Now suppose that we always choose the pivot element uniformly at random among the available elements. This obviously gives rise to a Las Vegas algorithm, because we never compute a wrong result; the random choices merely affect the (expected) running time.

Theorem 2.9. We have that $E[X] = 2(n + 1)H_n - 4n$.

Proof. Without loss of generality (by renaming the numbers in x), we assume that the original sequence x is a permutation of (1, 2, . . . , n). So, for any i < j ∈ {1, 2, . . . , n}, let the random variable Xi,j be equal to one if i and j are compared during the course of the algorithm and zero otherwise. The total number of comparisons is hence $X = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} X_{i,j}$. Thus, by linearity of expectation,

$$E[X] = E\left[\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} X_{i,j}\right] = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} E[X_{i,j}] = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \Pr[X_{i,j} = 1],$$

which shows that we have to derive the probability that i and j are compared.

First observe that each element is chosen as the pivot exactly once in the course of the algorithm. Thus the input x and the random choices of the algorithm induce a random sequence P = (P1, P2, . . . , Pn) of pivots.

Fix i and j arbitrarily. When will these elements be compared? We claim that this is the case if and only if either i or j is the first pivot from the set {i, . . . , j} in the sequence P. If i and j are compared, then one of them must be the pivot and they must lie in the same subsequence of x. Thus all previous pivots (if any) must be smaller than i or larger than j, since otherwise i and j would end up in different subsequences of x. Hence, either i or j is the first pivot from the set {i, . . . , j} appearing in P. The converse direction is easy: if one of i or j is the first pivot from the set {i, . . . , j} in P, then i and j still lie in the same subsequence of x and will hence be compared. Since the pivots are chosen uniformly at random, each of the j − i + 1 elements of {i, . . . , j} is equally likely to be the first pivot from this set, so Pr[Xi,j = 1] = 2/(j − i + 1).
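Theorem 2.9 can also be checked numerically by running the randomized variant many times; below is a minimal sketch (n, the trial count, and the seed are arbitrary choices, and the transcription of Algorithm 2.1 from the deterministic part is repeated to keep the snippet self-contained):

```python
import random

random.seed(0)

def quicksort(x, choose_pivot):
    """Algorithm 2.1: returns (sorted sequence, number of '<' comparisons)."""
    if not x:
        return [], 0
    x = list(x)
    p = x.pop(choose_pivot(x))
    left, right = [], []
    for xi in x:
        (left if xi < p else right).append(xi)
    left_sorted, c1 = quicksort(left, choose_pivot)
    right_sorted, c2 = quicksort(right, choose_pivot)
    return left_sorted + [p] + right_sorted, len(x) + c1 + c2

n, trials = 100, 2_000
uniform_pivot = lambda seq: random.randrange(len(seq))  # random pivot choice

avg = sum(quicksort(range(1, n + 1), uniform_pivot)[1]
          for _ in range(trials)) / trials
harmonic = sum(1 / i for i in range(1, n + 1))
print("empirical E[X] ~", avg)                         # roughly 648 for n = 100
print("2(n+1)Hn - 4n  =", 2 * (n + 1) * harmonic - 4 * n)
```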