






These notes cover the concept of independent events in probability theory, with examples to illustrate it. Independent events are those for which knowing the occurrence or non-occurrence of one event does not affect the probability of the other. The notes also discuss the distinction between independence and mutual exclusivity, and give examples of systems of components whose reliability is calculated under independence assumptions.
Events A and B are independent if knowing whether A occurred does not change the probability of B. Mathematically, we can say this in two equivalent ways:

P(B|A) = P(B)
P(A and B) = P(A ∩ B) = P(A) × P(B)

It is important to distinguish independence from mutual exclusivity, which would say A ∩ B is empty (A and B cannot both happen).

Example. Deal 2 cards from a deck.
A = first card is an Ace
C = second card is an Ace
P(C|A) = 3/51, but P(C) = 4/52 (last class). So A and C are dependent.
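The two-card calculation can be checked by brute-force enumeration of all ordered two-card deals; this is a quick sketch, with an arbitrary labeling of which four cards are the Aces:

```python
from fractions import Fraction
from itertools import permutations

# Label the cards 0..51; say cards 0..3 are the four Aces (arbitrary labeling).
aces = set(range(4))
pairs = list(permutations(range(52), 2))  # all ordered (first, second) deals

n_A  = sum(1 for f, s in pairs if f in aces)                # first card is an Ace
n_C  = sum(1 for f, s in pairs if s in aces)                # second card is an Ace
n_AC = sum(1 for f, s in pairs if f in aces and s in aces)  # both are Aces

P_C         = Fraction(n_C, len(pairs))   # unconditional
P_C_given_A = Fraction(n_AC, n_A)         # conditional on A

print(P_C, P_C_given_A)  # 1/13 (= 4/52) and 1/17 (= 3/51): A and C are dependent
```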
Example. Throw 2 dice.
A = first die lands 1
B = second die shows a larger number than the first die
C = both dice show the same number

P(B|A) = 5/6, while P(B) = 15/36 by counting, so A and B are dependent.
P(C|A) = 1/6 and P(C) = 6/36 = 1/6, so A and C are independent.

Note 1: here B and C are mutually exclusive.
Note 2: writing B' = "second die shows a smaller number than the first die", we have P(B') = P(B) by symmetry, and P(B ∪ B') = P(C^c) = 1 − P(C) = 5/6, giving a "non-counting" argument that P(B) = (1/2)(5/6) = 5/12.
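The counting in the dice example can be verified by enumerating the 36 equally likely outcomes (a small sketch, not part of the lecture):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # all 36 (die1, die2) rolls

A = [(d1, d2) for d1, d2 in outcomes if d1 == 1]
B = [(d1, d2) for d1, d2 in outcomes if d2 > d1]
C = [(d1, d2) for d1, d2 in outcomes if d1 == d2]

P = lambda E: Fraction(len(E), len(outcomes))                 # unconditional
P_given = lambda E, F: Fraction(len(set(E) & set(F)), len(F)) # conditional

print(P(B), P_given(B, A))  # 5/12 vs 5/6: A and B dependent
print(P(C), P_given(C, A))  # 1/6 vs 1/6: A and C independent
```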
(silly) Example. Throw 2 dice. If the sum is at least 7, I show you the dice; if not, I don't.
A = I show you the first die landing 1
B = I show you the second die landing 1

A requires the first die to be 1 and the sum to be at least 7, which forces the roll (1, 6), so P(A) = 1/36; likewise P(B) = 1/36. But A and B cannot both happen, since the roll (1, 1) has sum 2 < 7 and is never shown, so P(A ∩ B) = 0 ≠ P(A)P(B), and A and B are dependent.

Conceptual point. This illustrates a subtle point: being told by a truthful person that "A happened" is not (for probability/statistics purposes) exactly the same as "knowing A happened". [car accident example]
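The "show the dice" protocol can also be checked by enumeration; this sketch counts which of the 36 rolls get shown:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))
shown = [(d1, d2) for d1, d2 in outcomes if d1 + d2 >= 7]  # rolls I show you

A = [(d1, d2) for d1, d2 in shown if d1 == 1]  # shown, and first die is 1
B = [(d1, d2) for d1, d2 in shown if d2 == 1]  # shown, and second die is 1

P_A  = Fraction(len(A), 36)               # 1/36: only the roll (1, 6)
P_B  = Fraction(len(B), 36)               # 1/36: only the roll (6, 1)
P_AB = Fraction(len(set(A) & set(B)), 36) # 0: (1, 1) has sum 2, never shown
print(P_A, P_B, P_AB)
```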
Systems of components. Will show logic diagrams: the system works if there is some left-to-right path which passes only through working components. Assume the components work/fail independently, with P(Ci works) = pi and P(Ci fails) = 1 − pi. Note that in practice the independence assumption is usually unrealistic. Math question: calculate P(system works) in terms of the numbers pi and the network structure.
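For independent components there are two standard building blocks: a series arrangement works only if every component works (probability is the product of the pi), and a parallel arrangement works if at least one component works (one minus the product of the failure probabilities). A minimal sketch, with made-up reliabilities rather than any particular board example:

```python
from functools import reduce

def series(ps):
    # All components must work: multiply the success probabilities.
    return reduce(lambda acc, p: acc * p, ps, 1.0)

def parallel(ps):
    # At least one must work: 1 minus the probability that all fail.
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), ps, 1.0)

# Example: two parallel banks connected in series (reliabilities are made up).
p_sys = series([parallel([0.9, 0.8]), parallel([0.95, 0.7])])
print(round(p_sys, 4))  # 0.9653
```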
More complicated example: [picture on board] We could write out all 16 combinations; instead let's condition on whether or not C1 works:
P(system works) = P(system works|C1 works)P(C1 works) + P(system works|C1 fails)P(C1 fails).
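The board picture is not reproduced in these notes, so the following sketch uses a hypothetical 4-component network just to show the two methods agreeing: brute-force summation over all work/fail combinations versus conditioning on C1.

```python
from itertools import product

def system_prob(works, ps):
    """Exact P(system works): sum the probability of every work/fail
    combination (independence makes each combination a product of terms)."""
    total = 0.0
    for state in product([True, False], repeat=len(ps)):
        weight = 1.0
        for up, p in zip(state, ps):
            weight *= p if up else (1.0 - p)
        if works(state):
            total += weight
    return total

# Hypothetical network (NOT the one on the board): C1 in series with
# (C2-C3 in series, in parallel with C4).
def works(s):
    c1, c2, c3, c4 = s
    return c1 and ((c2 and c3) or c4)

ps = [0.9, 0.8, 0.8, 0.7]
full = system_prob(works, ps)

# Same answer by conditioning on C1; in this network the system
# cannot work if C1 fails, so that term contributes 0.
cond = ps[0] * system_prob(lambda rest: works((True,) + rest), ps[1:])
print(round(full, 4), round(cond, 4))
```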
Example: Deal 4 cards. What is the chance we get exactly one Spade?

event  1st  2nd  3rd  4th
F1      S    N    N    N
F2      N    S    N    N
F3      N    N    S    N
F4      N    N    N    S

[board: repeated conditioning] P(F1) = (13/52)(39/51)(38/50)(37/49).

P(exactly one Spade) = P(F1 or F2 or F3 or F4) = P(F1) + P(F2) + P(F3) + P(F4) = 4 × P(F1) ≈ 44%.
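The repeated-conditioning product and the final answer can be evaluated exactly; a quick check of the arithmetic:

```python
from fractions import Fraction

# P(F1): Spade first, then three non-Spades, by repeated conditioning.
p_F1 = Fraction(13, 52) * Fraction(39, 51) * Fraction(38, 50) * Fraction(37, 49)

# F1..F4 are mutually exclusive and have equal probability by symmetry.
p_one_spade = 4 * p_F1
print(float(p_one_spade))  # about 0.4388, i.e. roughly 44%
```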
Bayes rule: updating probabilities as new information is acquired.

(silly) Example. There are 2 coins: one is fair, P(Heads) = 1/2; one is biased, P(Heads) = 9/10. Pick one coin at random and toss it 3 times. Suppose we get 3 Heads. What then is the chance that the coin we picked is the biased coin?

Abstract set-up: a partition (B1, B2, ...) of "alternate possibilities". We know the prior probabilities P(Bi). Then we observe that some event A happens (the "new information"), for which we know P(A|Bi). We want to calculate the posterior probabilities P(Bi|A).

Bayes formula: P(Bi|A) = P(A|Bi)P(Bi) / [P(A|B1)P(B1) + P(A|B2)P(B2) + ...]
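Applying the formula to the two-coin example, with A = "3 Heads in 3 tosses" and the two coins as the partition, gives the posterior exactly; a small sketch:

```python
from fractions import Fraction

priors  = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}
p_heads = {"fair": Fraction(1, 2), "biased": Fraction(9, 10)}

# Likelihood of the data A = "3 Heads" under each coin (independent tosses).
likelihood = {coin: p ** 3 for coin, p in p_heads.items()}

# Denominator of Bayes formula: P(A) = sum_i P(A|Bi) P(Bi).
evidence = sum(likelihood[c] * priors[c] for c in priors)

posterior = {c: likelihood[c] * priors[c] / evidence for c in priors}
print(posterior["biased"])  # 729/854, about 0.854
```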
[example above on board]