




























































































Theory and Problems of Probability, Random Variables, and Random Processes
McGraw-Hill Professional, 1997
Preface
The purpose of this book is to provide an introduction to principles of
probability, random variables, and random processes and their applications.
The book is designed for students in various disciplines of engineering,
science, mathematics, and management. It may be used as a textbook and/or as
a supplement to all current comparable texts. It should also be useful to those
interested in the field for self-study. The book combines the advantages of both
the textbook and the so-called review book. It provides the textual explanations
of the textbook, and in the direct way characteristic of the review book, it gives
hundreds of completely solved problems that use essential theory and
techniques. Moreover, the solved problems are an integral part of the text. The
background required to study the book is one year of calculus, elementary
differential equations, matrix analysis, and some signal and system theory,
including Fourier transforms.
I wish to thank Dr. Gordon Silverman for his invaluable suggestions and
critical review of the manuscript. I also wish to express my appreciation to the
editorial staff of the McGraw-Hill Schaum Series for their care, cooperation,
and attention devoted to the preparation of the book. Finally, I thank my wife,
Daisy, for her patience and encouragement.
HWEI P. HSU
MONTVILLE, NEW JERSEY
A sample space S is said to be discrete if it consists of a finite number of sample points (as in Example 1.1) or countably infinite sample points (as in Example 1.2). A set is called countable if its elements can be placed in a one-to-one correspondence with the positive integers. A sample space S is said to be continuous if the sample points constitute a continuum (as in Example 1.3).
Since we have identified a sample space S as the set of all possible outcomes of a random experiment, we will review some set notation in the following.
If ζ is an element of S (or belongs to S), then we write
ζ ∈ S
If ζ is not an element of S (or does not belong to S), then we write
ζ ∉ S
A set A is called a subset of B, denoted
A ⊂ B
if every element of A is also an element of B. Any subset of the sample space S is called an event, and a sample point of S is often referred to as an elementary event. Note that the sample space S is a subset of itself, that is, S ⊂ S. Since S is the set of all possible outcomes, it is often called the certain event.
EXAMPLE 1.4 Consider the experiment of Example 1.2. Let A be the event that the number of tosses required until the first head appears is even. Let B be the event that the number of tosses required until the first head appears is odd. Let C be the event that the number of tosses required until the first head appears is less than 5. Express events A, B, and C.
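One way to express them, writing k for the number of tosses required until the first head (so that S = {1, 2, 3, ...}):
A = {2, 4, 6, ...}      B = {1, 3, 5, ...}      C = {1, 2, 3, 4}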
1. Equality:
Two sets A and B are equal, denoted A = B, if and only if A ⊂ B and B ⊂ A.
2. Complementation:
Suppose A ⊂ S. The complement of set A, denoted Ā, is the set containing all elements in S but not in A:
Ā = {ζ: ζ ∈ S and ζ ∉ A}
3. Union:
The union of sets A and B, denoted A ∪ B, is the set containing all elements in either A or B or both:
A ∪ B = {ζ: ζ ∈ A or ζ ∈ B}
4. Intersection:
The intersection of sets A and B, denoted A ∩ B, is the set containing all elements in both A and B:
A ∩ B = {ζ: ζ ∈ A and ζ ∈ B}
5. Null set:
The set containing no element is called the null set, denoted ∅. Note that the null set is a subset of every set.
6. Disjoint sets:
Two sets A and B are called disjoint or mutually exclusive if they contain no common element, that is, if A ∩ B = ∅.
The definitions of the union and intersection of two sets can be extended to any finite number of sets as follows:
A₁ ∪ A₂ ∪ ··· ∪ Aₙ = {ζ: ζ ∈ A₁ or ζ ∈ A₂ or ··· or ζ ∈ Aₙ}
A₁ ∩ A₂ ∩ ··· ∩ Aₙ = {ζ: ζ ∈ A₁ and ζ ∈ A₂ and ··· and ζ ∈ Aₙ}
Note that these definitions can be extended to an infinite number of sets.
In our definition of event, we state that every subset of S is an event, including S and the null set ∅. Then
S = the certain event
∅ = the impossible event
If A and B are events in S, then
A ∪ B = the event that either A or B or both occurred
A ∩ B = the event that both A and B occurred
Ā = the event that A did not occur
A₁ ∪ A₂ ∪ ··· ∪ Aₙ = the event that at least one of the Aᵢ occurred
A₁ ∩ A₂ ∩ ··· ∩ Aₙ = the event that all of the Aᵢ occurred
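As a concrete illustration (not part of the original text), the following Python sketch works through these event operations on a six-outcome sample space; the events A and B are arbitrary choices made for illustration.

# Minimal sketch (not from the text): event operations on a six-outcome sample space.
S = set(range(1, 7))          # sample space of a single die roll
A = {2, 4, 6}                 # event: the outcome is even
B = {1, 2, 3}                 # event: the outcome is at most 3

print("A union B        :", A | B)        # the event that A or B or both occurred
print("A intersect B    :", A & B)        # the event that both A and B occurred
print("complement of A  :", S - A)        # the event that A did not occur
print("certain event    :", A | (S - A))  # equals S
print("impossible event :", A & (S - A))  # equals the null set (printed as set())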
B. Venn Diagram:
For instance, in the three Venn diagrams shown in Fig. 1-1, the shaded areas represent, respectively, the events A ∪ B, A ∩ B, and Ā. The Venn diagram in Fig. 1-2 indicates that B ⊂ A, with the shaded area representing the event A ∩ B̄.
Distributive Laws:
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
De Morgan's Laws:
The complement of A ∪ B is Ā ∩ B̄, and the complement of A ∩ B is Ā ∪ B̄.
These relations are verified by showing that any element contained in the set on the left side of the equality sign is also contained in the set on the right side, and vice versa. One way of showing this is by means of a Venn diagram.
Similarly, De Morgan's laws can be extended as follows (Prob. 1.17): the complement of A₁ ∪ A₂ ∪ ··· ∪ Aₙ is Ā₁ ∩ Ā₂ ∩ ··· ∩ Āₙ, and the complement of A₁ ∩ A₂ ∩ ··· ∩ Aₙ is Ā₁ ∪ Ā₂ ∪ ··· ∪ Āₙ.
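The identities above are easy to spot-check numerically. The following Python sketch (an illustration, not from the text) draws random subsets A and B of a finite sample space and asserts both De Morgan's laws; the sample-space size and number of trials are arbitrary.

import random

# Sketch (not from the text): spot-check De Morgan's laws on random subsets of a
# finite sample space S; complements are taken relative to S.
random.seed(0)
S = set(range(20))
for _ in range(100):
    A = {x for x in S if random.random() < 0.5}
    B = {x for x in S if random.random() < 0.5}
    assert S - (A | B) == (S - A) & (S - B)   # complement of union = intersection of complements
    assert S - (A & B) == (S - A) | (S - B)   # complement of intersection = union of complements
print("De Morgan's laws held for all 100 sampled pairs")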
An assignment of real numbers to the events defined in a sample space S is known as the probability measure. Consider a random experiment with a sample space S, and let A be a particular event defined in S.
Suppose that the random experiment is repeated n times. If event A occurs n(A) times, then the probability of event A, denoted P(A), is defined as
P(A) = lim n(A)/n as n → ∞
where n(A)/n is called the relative frequency of event A. Note that this limit may not exist, and in addition, there are many situations in which the concept of repeatability may not be valid. It is clear that for any event A, the relative frequency of A will have the following properties:
1. 0 ≤ n(A)/n ≤ 1, where n(A)/n = 0 if A occurs in none of the n repeated trials and n(A)/n = 1 if A occurs in all of the n repeated trials.
2. If A and B are mutually exclusive events, then
n(A ∪ B) = n(A) + n(B)      and      n(A ∪ B)/n = n(A)/n + n(B)/n
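The relative-frequency idea can be illustrated by simulation. In the sketch below (illustrative only; the event A = {a fair die shows a 6} and the trial counts are arbitrary choices), n(A)/n settles near P(A) = 1/6 as n grows.

import random

# Sketch (not from the text): relative frequency n(A)/n of the event
# A = {a fair die shows a 6}; it settles near P(A) = 1/6 as n grows.
random.seed(1)
for n in (100, 10_000, 1_000_000):
    n_A = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    print(f"n = {n:>9,}   n(A)/n = {n_A / n:.4f}")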
Let S be a finite sample space and A be an event in S. Then in the axiomatic definition, the probability P(A) of the event A is a real number assigned to A which satisfies the following three axioms:
Axiom 1:   P(A) ≥ 0      (1.21)
Axiom 2:   P(S) = 1      (1.22)
Axiom 3:   P(A ∪ B) = P(A) + P(B)   if A ∩ B = ∅      (1.23)
If the sample space S is not finite, then axiom 3 must be modified as follows:
Axiom 3′: If A₁, A₂, ... is an infinite sequence of mutually exclusive events in S (Aᵢ ∩ Aⱼ = ∅ for i ≠ j), then
P(A₁ ∪ A₂ ∪ ···) = P(A₁) + P(A₂) + ···
These axioms satisfy our intuitive notion of probability measure obtained from the notion of relative frequency.
By using the above axioms, the following useful properties of probability can be obtained:
1. P(Ā) = 1 − P(A)
2. P(∅) = 0
3. P(A) ≤ P(B) if A ⊂ B
4. P(A) ≤ 1
5. P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
6. P(A₁ ∪ A₂ ∪ ··· ∪ Aₙ) = Σ P(Aᵢ) − Σ P(Aᵢ ∩ Aⱼ) + Σ P(Aᵢ ∩ Aⱼ ∩ Aₖ) − ··· + (−1)ⁿ⁺¹P(A₁ ∩ A₂ ∩ ··· ∩ Aₙ)
where the sum of the second term is over all distinct pairs of events, that of the third term is over all distinct triples of events, and so forth.
7. If A₁, A₂, ..., Aₙ are mutually exclusive events, then
P(A₁ ∪ A₂ ∪ ··· ∪ Aₙ) = P(A₁) + P(A₂) + ··· + P(Aₙ)
and a similar equality holds for any subcollection of the events.
Note that property 4 can easily be derived from axiom 2 and property 3. Since A ⊂ S, we have
P(A) ≤ P(S) = 1
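Property 5 is easy to confirm on a small equiprobable sample space. The Python sketch below (illustrative; the two-dice experiment and the events A and B are arbitrary choices) checks P(A ∪ B) = P(A) + P(B) − P(A ∩ B) exactly with rational arithmetic.

from fractions import Fraction
from itertools import product

# Sketch (not from the text): verify P(A ∪ B) = P(A) + P(B) - P(A ∩ B) on the
# equiprobable sample space of two fair-die rolls (36 outcomes).
S = list(product(range(1, 7), repeat=2))

def P(E):
    """Classical probability of an event E (a subset of S)."""
    return Fraction(len(E), len(S))

A = {s for s in S if s[0] == 6}            # first roll shows a 6
B = {s for s in S if s[0] + s[1] == 7}     # the two rolls sum to 7

assert P(A | B) == P(A) + P(B) - P(A & B)
print(P(A | B))                            # 11/36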
The conditional probability of an event A given event B, denoted P(A | B), is defined as
P(A | B) = P(A ∩ B)/P(B)      P(B) > 0      (1.39)
where P(A ∩ B) is the joint probability of A and B. Similarly,
P(B | A) = P(A ∩ B)/P(A)      P(A) > 0      (1.40)
is the conditional probability of an event B given event A. From Eqs. (1.39) and (1.40), we have
P(A ∩ B) = P(A | B)P(B) = P(B | A)P(A)      (1.41)
Equation (1.41) is often quite useful in computing the joint probability of events.
From Eq. (1.41) we can obtain the following Bayes' rule:
P(A | B) = P(B | A)P(A)/P(B)      (1.42)
Suppose the events A₁, A₂, ..., Aₙ are mutually exclusive and exhaustive; that is,
A₁ ∪ A₂ ∪ ··· ∪ Aₙ = S      and      Aᵢ ∩ Aⱼ = ∅ for i ≠ j
Let B be any event in S. Then
P(B) = P(B ∩ A₁) + P(B ∩ A₂) + ··· + P(B ∩ Aₙ) = P(B | A₁)P(A₁) + P(B | A₂)P(A₂) + ··· + P(B | Aₙ)P(Aₙ)      (1.44)
which is known as the total probability of event B (Prob. 1.47). Let A = Aᵢ in Eq. (1.42); then, using Eq. (1.44), we obtain
P(Aᵢ | B) = P(B | Aᵢ)P(Aᵢ) / [P(B | A₁)P(A₁) + P(B | A₂)P(A₂) + ··· + P(B | Aₙ)P(Aₙ)]      (1.45)
Note that the terms on the right-hand side are all conditioned on events Aᵢ, while the term on the left is conditioned on B. Equation (1.45) is sometimes referred to as Bayes' theorem.
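A small numerical illustration of Eqs. (1.44) and (1.45) follows; the two-machine scenario and its numbers are hypothetical, not from the text.

# Sketch (not from the text): Bayes' theorem for a hypothetical two-machine scenario.
# Machine A1 makes 60% of all parts with a 1% defect rate; machine A2 makes 40% with a
# 3% defect rate. B is the event that a randomly chosen part is defective.
P_A = [0.6, 0.4]             # P(A1), P(A2): mutually exclusive and exhaustive
P_B_given_A = [0.01, 0.03]   # P(B | A1), P(B | A2)

# Total probability, Eq. (1.44): P(B) = P(B | A1)P(A1) + P(B | A2)P(A2)
P_B = sum(pb * pa for pb, pa in zip(P_B_given_A, P_A))

# Bayes' theorem, Eq. (1.45): P(A1 | B) = P(B | A1)P(A1) / P(B)
P_A1_given_B = P_B_given_A[0] * P_A[0] / P_B
print(f"P(B) = {P_B:.3f},  P(A1 | B) = {P_A1_given_B:.3f}")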
Two events A and B are said to be (statistically) independent if and only if
P(A ∩ B) = P(A)P(B)
It follows immediately that if A and B are independent, then by Eqs. (1.39) and (1.40),
P(A | B) = P(A)      and      P(B | A) = P(B)
Thus, if A is independent of B, then the probability of A's occurrence is unchanged by information as to whether or not B has occurred. Three events A, B, C are said to be independent if and only if
P(A ∩ B ∩ C) = P(A)P(B)P(C)
P(A ∩ B) = P(A)P(B)
P(A ∩ C) = P(A)P(C)
P(B ∩ C) = P(B)P(C)
We may also extend the definition of independence to more than three events. The events A₁, A₂, ..., Aₙ are independent if and only if for every k (2 ≤ k ≤ n) and every choice of k distinct events Aᵢ, Aⱼ, ..., Aₘ from the collection,
P(Aᵢ ∩ Aⱼ ∩ ··· ∩ Aₘ) = P(Aᵢ)P(Aⱼ) ··· P(Aₘ)      (1.51)
Finally, we define an infinite set of events to be independent if and only if every finite subset of these events is independent.
To distinguish between the mutual exclusiveness (or disjointness) and the independence of a collection of events, we summarize as follows:
1. If A₁, A₂, ..., Aₙ are mutually exclusive events, then
P(A₁ ∪ A₂ ∪ ··· ∪ Aₙ) = P(A₁) + P(A₂) + ··· + P(Aₙ)
and a similar equality holds for any subcollection of the events.
2. If A₁, A₂, ..., Aₙ are independent events, then
P(A₁ ∩ A₂ ∩ ··· ∩ Aₙ) = P(A₁)P(A₂) ··· P(Aₙ)
and a similar equality holds for any subcollection of the events.
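The distinction between independence and mutual exclusiveness can be checked directly on a small example. In the sketch below (illustrative; the two-coin experiment and the events are arbitrary choices), A and B are independent but not disjoint, while A and C are disjoint but not independent.

from fractions import Fraction
from itertools import product

# Sketch (not from the text): two fair coin tosses (4 equally likely outcomes).
# A and B are independent but not mutually exclusive; A and C are mutually
# exclusive but not independent.
S = list(product("HT", repeat=2))

def P(E):
    return Fraction(len(E), len(S))

A = {s for s in S if s[0] == "H"}   # head on the first toss
B = {s for s in S if s[1] == "H"}   # head on the second toss
C = {s for s in S if s[0] == "T"}   # tail on the first toss

print(P(A & B) == P(A) * P(B), bool(A & B))   # True True  -> independent, not disjoint
print(not (A & C), P(A & C) == P(A) * P(C))   # True False -> disjoint, not independent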
Solved Problems
1.1. Consider the experiment of tossing a coin three times.
(a) Find the sample space S₁ if we wish to observe the exact sequences of heads and tails obtained.
(b) Find the sample space S₂ if we wish to observe the number of heads obtained.
(a) The sample space S₁ is given by
S₁ = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}
where, for example, HTH indicates a head on the first and third throws and a tail on the second throw. There are eight sample points in S₁.
(b) The sample space S₂ is given by
S₂ = {0, 1, 2, 3}
where, for example, the outcome 2 indicates that two heads were obtained in the three tosses. The sample space S₂ contains four sample points.
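For comparison (not part of the original solution), the two sample spaces can be enumerated in Python with itertools:

from itertools import product

# Sketch (not from the original solution): enumerate the two sample spaces of Prob. 1.1.
S1 = ["".join(seq) for seq in product("HT", repeat=3)]   # exact head/tail sequences
S2 = sorted({seq.count("H") for seq in S1})              # number of heads observed

print(S1, len(S1))   # ['HHH', 'HHT', ..., 'TTT'] 8
print(S2, len(S2))   # [0, 1, 2, 3] 4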
1.2. Consider an experiment of drawing two cards at random from a bag containing four cards marked with the integers 1 through 4.
(a) Find the sample space S₁ of the experiment if the first card is replaced before the second is drawn.
(a) The sample space S₁ contains 16 ordered pairs (i, j), 1 ≤ i ≤ 4, 1 ≤ j ≤ 4, where the first number indicates the first number drawn. Thus,
S₁ = {(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (2, 4), (3, 1), (3, 2), (3, 3), (3, 4), (4, 1), (4, 2), (4, 3), (4, 4)}
1.6. An automobile dealer offers vehicles with the following options:
(a) With or without automatic transmission
(b) With or without air-conditioning
(c) With one of two choices of a stereo system
(d) With one of three exterior colors
If the sample space consists of the set of all possible vehicle types, what is the number of outcomes in the sample space?
The tree diagram for the different types of vehicles is shown in Fig. 1-4. From Fig. 1-4 we see that the number of sample points in S is 2 × 2 × 2 × 3 = 24.
Fig. 1-4  Tree diagram of the vehicle options: transmission, air-conditioning, stereo, color.
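As a cross-check on the count (not part of the original solution), the 24 vehicle types can be enumerated directly; the option labels below are made up for illustration.

from itertools import product

# Sketch (not from the original solution): enumerate the vehicle types of Prob. 1.6
# and confirm the count 2 x 2 x 2 x 3 = 24. The option labels are illustrative.
transmission = ["automatic", "manual"]
air_conditioning = ["with A/C", "without A/C"]
stereo = ["stereo 1", "stereo 2"]
color = ["red", "blue", "white"]

vehicles = list(product(transmission, air_conditioning, stereo, color))
print(len(vehicles))   # 24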
There are 2⁴ = 16 possible events in S. They are ∅; {a}, {b}, {c}, {d}; {a, b}, {a, c}, {a, d}, {b, c}, {b, d}, {c, d}; {a, b, c}, {a, b, d}, {a, c, d}, {b, c, d}; S = {a, b, c, d}.
Let S = {s₁, s₂, ..., sₙ}. Let Ω be the family of all subsets of S. (Ω is sometimes referred to as the power set of S.) Let Sᵢ be the set consisting of two statements, that is,
Sᵢ = {Yes, the sᵢ is in; No, the sᵢ is not in}
Then Ω can be represented as the Cartesian product
Ω = S₁ × S₂ × ··· × Sₙ
Since each subset of S can be uniquely characterized by an element in the above Cartesian product, we obtain the number of elements in Ω by
n(Ω) = n(S₁)n(S₂) ··· n(Sₙ) = 2ⁿ
where n(Sᵢ) = number of elements in Sᵢ = 2. An alternative way of finding n(Ω) is by the following summation over the binomial coefficients:
n(Ω) = Σᵢ₌₀ⁿ n!/[i!(n − i)!]
The proof that the last sum is equal to 2ⁿ is not easy.
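Both counting routes can be verified mechanically. The sketch below (illustrative, not part of the original solution, for S = {a, b, c, d}) enumerates the power set and evaluates the binomial sum:

from itertools import combinations
from math import comb

# Sketch (not from the original solution): for S = {a, b, c, d}, enumerate the power set
# and compare its size with 2**n and with the binomial summation.
S = ["a", "b", "c", "d"]
n = len(S)
power_set = [set(c) for r in range(n + 1) for c in combinations(S, r)]

print(len(power_set), 2**n)                    # 16 16
print(sum(comb(n, i) for i in range(n + 1)))   # 16, the summation route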
1.9. Consider the experiment of Example 1.2. We define the events
A = {k: k is odd}      B = {k: 4 ≤ k ≤ 7}      C = {k: 1 ≤ k ≤ 10}
where k is the number of tosses required until the first H (head) appears. Determine the events Ā, B̄, C̄, A ∪ B, B ∪ C, A ∩ B, A ∩ C, B ∩ C, and Ā ∩ B.
Ā = {k: k is even}      B̄ = {k: k = 1, 2, 3 or k ≥ 8}      C̄ = {k: k ≥ 11}
A ∪ B = {k: k is odd or k = 4, 6}      B ∪ C = C      A ∩ B = {5, 7}
A ∩ C = {1, 3, 5, 7, 9}      B ∩ C = B      Ā ∩ B = {4, 6}
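These event relations can be spot-checked in Python on a finite truncation of the sample space (the cutoff at k = 20 is an arbitrary choice for illustration):

# Sketch (not from the original solution): check the event algebra of Prob. 1.9 on a
# finite truncation of the sample space (k = 1, ..., 20 stands in for all toss counts).
S = set(range(1, 21))
A = {k for k in S if k % 2 == 1}        # first head on an odd-numbered toss
B = {k for k in S if 4 <= k <= 7}
C = {k for k in S if 1 <= k <= 10}

print(sorted(A & B))            # [5, 7]
print(sorted(A & C))            # [1, 3, 5, 7, 9]
print(sorted((S - A) & B))      # [4, 6]
print(B | C == C, B & C == B)   # True True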
1.10. The sample space of an experiment is the real line expressed as
S = {v: −∞ < v < ∞}
(a) Consider the events
A , = { v : 0 S v < $ A, = { v : f 5 V < $
Determine the events
⋃ᵢ Aᵢ      and      ⋂ᵢ Aᵢ
(b) Consider the events
B, = { v : v 5^1 B, = { v : v < 3)