Start with $j. Place a bet for which you win with probability p ≤ 0.50 and lose with probability q = 1 − p. If you win the bet, then you gain $1; but if you lose the bet, then you lose $1. Your goal is to reach $n before dropping to $0. (You quit playing when you reach either $0 or $n.)
(a) What are the probabilities of being at each possible dollar amount after m bets?
(b) What is the average amount of money remaining after m bets?
(c) What is the long-term probability of reaching $n before dropping to $0?
Example 1. Suppose we begin with $5, win with probability p = 0.38, and win or lose $1 at a time. We wish to reach a goal of $8. We quit if we reach $8 or go broke.
(a) After 5 bets, what are the probabilities of having $0, $1, . . . , $8?
(b) What is the player’s average fortune after these 5 bets?
(c) What is the “long-term” probability of reaching $8 before going broke?
Markov Chain Solution
We let B be the 1 × (n+1) initial state matrix that gives the initial probabilities of being at each possible dollar amount $0, $1, . . . , $j, . . . , $n. Every entry is 0 except for a 1 in the $j spot, which is the (j+1)st column. In Example 1, we initially have $5 with probability 1; thus, B is given by

B = (0  0  0  0  0  1  0  0  0).
Next, we let A be the (n+1) × (n+1) matrix of transition probabilities from each dollar amount to every other dollar amount. The entries a_ij are the probabilities of having $j after a bet given that one had $i beforehand. Below is the complete matrix A for our example, where p = 0.38 and q = 0.62.
                                  next state
               $0    $1    $2    $3    $4    $5    $6    $7    $8
          $0 [  1     0     0     0     0     0     0     0     0  ]
          $1 [ 0.62   0    0.38   0     0     0     0     0     0  ]
          $2 [  0    0.62   0    0.38   0     0     0     0     0  ]
 previous $3 [  0     0    0.62   0    0.38   0     0     0     0  ]
 state    $4 [  0     0     0    0.62   0    0.38   0     0     0  ]
          $5 [  0     0     0     0    0.62   0    0.38   0     0  ]
          $6 [  0     0     0     0     0    0.62   0    0.38   0  ]
          $7 [  0     0     0     0     0     0    0.62   0    0.38 ]
          $8 [  0     0     0     0     0     0     0     0     1  ]

Only the terms inside the brackets are part of the matrix. The labels on the outside are placeholders that indicate the possible dollar amounts.
The inner terms are the probabilities of being at the next state after a bet given that
you were at the previous state beforehand. We quit when we reach $0 or $8. So if we
have $0 or $8, then we still have that same amount with probability 1 in the transition.
Thus the first and last rows have a single 1 in the respective dollar spots.
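For readers who want to experiment outside the TI programs, the following is a minimal Python/NumPy sketch of this construction; the language choice and the function name transition_matrix are ours, not part of the course materials.

import numpy as np

def transition_matrix(n, p):
    """(n+1) x (n+1) transition matrix for the $0..$n betting chain.

    Rows are the previous state, columns the next state; $0 and $n are
    absorbing, and every interior state moves down with probability
    q = 1 - p or up with probability p.
    """
    q = 1.0 - p
    A = np.zeros((n + 1, n + 1))
    A[0, 0] = 1.0          # broke: stay at $0
    A[n, n] = 1.0          # goal reached: stay at $n
    for i in range(1, n):  # interior states $1 .. $(n-1)
        A[i, i - 1] = q    # lose the bet
        A[i, i + 1] = p    # win the bet
    return A

A = transition_matrix(8, 0.38)   # the matrix shown above for Example 1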
Lastly, let C be the 1 × (n+1) matrix of all possible dollar amounts:

C = (0  1  2  3  4  5  6  7  8).
(a) To find the probabilities of having each possible dollar amount after m bets, we multiply B × A^m. (b) To find your average amount of money after m bets, multiply B × A^m × C^T. (c) To find the “long-term” probabilities of reaching the boundaries, multiply B × A^m for a “large” m such as m = 200. All of the probabilities for the dollar amounts $1 to $(n − 1) converge to 0.
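These are ordinary matrix products, so they can be checked in any matrix package. Below is a hedged Python/NumPy sketch for the Example 1 numbers; all variable names and the choice of library are ours, not part of the course materials.

import numpy as np

n, p, j = 8, 0.38, 5          # goal, win probability, starting dollars
q = 1.0 - p

# Transition matrix A: $0 and $n absorb, interior states step down (q) or up (p).
A = np.zeros((n + 1, n + 1))
A[0, 0] = A[n, n] = 1.0
for i in range(1, n):
    A[i, i - 1], A[i, i + 1] = q, p

B = np.zeros(n + 1); B[j] = 1.0     # initial state: $j with probability 1
C = np.arange(n + 1)                # possible dollar amounts 0, 1, ..., n

dist5 = B @ np.linalg.matrix_power(A, 5)       # (a) distribution after 5 bets
avg5 = dist5 @ C                               # (b) average fortune after 5 bets
longrun = B @ np.linalg.matrix_power(A, 200)   # (c) essentially all mass at $0 and $n

print(np.round(dist5, 5), round(avg5, 2), np.round(longrun, 4))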
We now complete the example with p = 0.38, q = 0.62, j = 5, and n = 8. You can use the MARKOV1 program that will be provided to work this example. View matrix [D] on the TI-83/84, or matd on the TI-89, to see the probabilities computed in Part (a) below.
(a) The probabilities of each possible dollar amount after m = 5 bets are

B × A^5 = (0.09161  0  0.28075  0  0.34415  0  0.18984  0  0.09366).
Thus, there is roughly a 9.36% chance of having reached the goal of $8 within 5 bets.
(b) The average fortune after 5 bets is given by the product B × A^5 × C^T ≈ 3.83. After 5 bets, players on average will have about $3.83.
(c) For the long-term results, we compute B × A^m with a large m, say m = 200:

B × A^200 ≈ (0.7854  0  0  0  0  0  0  0  0.2146).
So when starting with $5, there is roughly a 21.46% chance of reaching $8 before
going broke. Your average final fortune is then $8× 0.2146 ≈ $1.7168.
Note: There is a closed-form solution for the end state that can be derived using
difference equations. Under these conditions of moving up or down 1 unit at a time, the
probability of reaching n before m when starting at j is given by
P = (j − m) / (n − m)                                    if p = q

P = [1 − (q/p)^(j−m)] / [1 − (q/p)^(n−m)]                if p ≠ q
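For instance, with p = 0.38, q = 0.62, boundaries m = 0 and n = 8, and starting amount j = 5, the second case gives q/p ≈ 1.6316, so the probability is [1 − (1.6316)^5] / [1 − (1.6316)^8] ≈ (1 − 11.56) / (1 − 50.22) ≈ 0.2146, which matches the long-term Markov Chain result above.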
Let x_j be the probability of reaching the upper boundary first when starting with $j. Then we know x_0 = 0 and x_n = 1 (because if we start at 0 then we can't reach n first, and if we start at n then we are certain to reach n first).
Now consider the values of x_j for 1 ≤ j ≤ n − 1. Assuming we go up or down $1 at a time, we can condition on the outcome of the first bet: with probability q we drop to $(j − 1), and with probability p we rise to $(j + 1). So x_j can be written in terms of x_(j−1) and x_(j+1) by

x_j = q x_(j−1) + p x_(j+1).
We can rewrite this set of equations, for 1 ≤ j ≤ n − 1, as

q x_(j−1) − x_j + p x_(j+1) = 0.

The system of equations is completed with the equations x_0 = 0 and x_n = 1.
Example 3. Suppose we win with probability p = 0.38 and win or lose $1 at a time. We quit if we reach $8 or go broke. (We don't specify an initial starting amount.) What are the probabilities of reaching $8 before going broke when starting with each of $0, $1, $2, . . . , $8?
Solution. We let x_j be the probability of reaching $8 before going broke when starting with $j. Then
x_0 = 0
0.62 x_0 − x_1 + 0.38 x_2 = 0
0.62 x_1 − x_2 + 0.38 x_3 = 0
0.62 x_2 − x_3 + 0.38 x_4 = 0
⋮
x_8 = 1
Note that the matrix of coefficients is nearly identical to the matrix of transition probabilities used in the Markov Chain method. The only difference is that now there is a string of −1's down the inner portion of the main diagonal rather than 0's. We shall call this matrix of coefficients A. Let F be the 9 × 1 column of constants. Then we are solving the system AX = F.
Here X = (x_0, x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8)^T is the column of unknowns and F = (0, 0, 0, 0, 0, 0, 0, 0, 1)^T.
The matrix of coefficients A will always be invertible. Here det(A) ≈ −0.089 ≠ 0. Hence we can solve by X = A^(−1) F. (You can use the provided program MARKOV2.)
X = A^(−1) F = (x_0, x_1, . . . , x_8)^T ≈ (0, 0.0128, 0.0338, 0.0679, 0.1237, 0.2146, 0.3630, 0.6050, 1)^T.
Thus when starting with $7, there is a 60.5% chance of reaching $8 before going
broke. When starting with $5, there is a 21.46% chance of reaching $8 before going
broke (as seen with the long-term probability in the Markov Chain solution). This
technique applies whenever the payoff is 1:1. You can always scale down the numbers
as in Example 2.
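For completeness, here is a hedged Python/NumPy sketch of the same solve, a rough analogue of what MARKOV2 automates; the variable names are ours, not part of the course materials.

import numpy as np

n, p = 8, 0.38
q = 1.0 - p

# Coefficient matrix: x_0 = 0 and x_n = 1 in the first/last rows,
# q*x_(j-1) - x_j + p*x_(j+1) = 0 in the interior rows.
A = np.zeros((n + 1, n + 1))
A[0, 0] = A[n, n] = 1.0
for j in range(1, n):
    A[j, j - 1], A[j, j], A[j, j + 1] = q, -1.0, p

F = np.zeros(n + 1); F[n] = 1.0
X = np.linalg.solve(A, F)       # probabilities of reaching $n before $0
print(np.round(X, 4))           # X[5] ≈ 0.2146, X[7] ≈ 0.6050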
Average Number of Bets
Suppose we want to find the average number of bets needed to hit either the goal of $n or $0. We can again create a system of equations. Let x_j be the average number of plays when starting with $j. Then we know x_0 = 0 and x_n = 0 (because if we start at 0 or n, then it takes no time to reach 0 or n).
Now consider the values of x_j for 1 ≤ j ≤ n − 1. Again x_j can be written in terms of x_(j−1) and x_(j+1). First one bet must be made, then we start over with either $(j − 1) or $(j + 1). Thus,

x_j = 1 + q x_(j−1) + p x_(j+1).

The middle set of equations, for 1 ≤ j ≤ n − 1, becomes q x_(j−1) − x_j + p x_(j+1) = −1.
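A hedged Python/NumPy sketch of this second system is nearly identical to the previous one; only the right-hand side changes, with −1 in each interior row and 0 in the two boundary rows. Again, the names below are ours.

import numpy as np

n, p = 8, 0.38
q = 1.0 - p

# Same coefficient matrix as before: boundary rows enforce x_0 = x_n = 0,
# interior rows are q*x_(j-1) - x_j + p*x_(j+1) = -1.
A = np.zeros((n + 1, n + 1))
A[0, 0] = A[n, n] = 1.0
for j in range(1, n):
    A[j, j - 1], A[j, j], A[j, j + 1] = q, -1.0, p

F = np.zeros(n + 1)
F[1:n] = -1.0                     # boundary rows keep a right-hand side of 0
X = np.linalg.solve(A, F)         # average number of bets from each starting $j
print(np.round(X, 2))             # starting with $5 gives roughly 13.7 bets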