



TA: Yury Petrachenko, CAB 484, yuryp@ualberta.ca, http://www.ualberta.ca/~yuryp/
Review Questions, Chapters 8, 9
8.15 Suppose that $Y_1, Y_2, \dots, Y_n$ denote a random sample of size $n$ from a population with an exponential distribution whose density is given by
\[
f(y) = \begin{cases} (1/\theta)e^{-y/\theta}, & y > 0, \\ 0, & \text{elsewhere.} \end{cases}
\]
If $Y_{(1)} = \min(Y_1, Y_2, \dots, Y_n)$ denotes the smallest order statistic, show that $\hat{\theta} = nY_{(1)}$ is an unbiased estimator for $\theta$ and find $\mathrm{MSE}(\hat{\theta})$.
Solution. Let’s find the distribution function of $Y$:
\[
F(y) = \begin{cases} 1 - e^{-y/\theta}, & y > 0, \\ 0, & \text{elsewhere.} \end{cases}
\]
Now we can use the formula $F_{Y_{(1)}}(y) = 1 - \bigl[1 - F(y)\bigr]^n$, or $f_{Y_{(1)}}(y) = n\bigl[1 - F(y)\bigr]^{n-1} f(y)$, to find the density function for $Y_{(1)}$: for $y > 0$,
\[
f_{Y_{(1)}}(y) = n\bigl(e^{-y/\theta}\bigr)^{n-1} \frac{1}{\theta}\, e^{-y/\theta} = \frac{n}{\theta}\, e^{-ny/\theta}.
\]
We can recognize this density function as the density of the exponential distribution with parameter $\theta/n$, that is, $Y_{(1)} \sim \mathrm{Exp}(\theta/n)$. Knowing the distribution of $Y_{(1)}$ allows us to compute the expectation of $\hat{\theta} = nY_{(1)}$:
\[
E[\hat{\theta}] = nE[Y_{(1)}] = n \cdot \frac{\theta}{n} = \theta.
\]
So $E[\hat{\theta}] = \theta$, and $\hat{\theta}$ is an unbiased estimator of $\theta$.
To find $\mathrm{MSE}(\hat{\theta})$, use the formula $\mathrm{MSE}(\hat{\theta}) = V[\hat{\theta}] + \bigl[B(\hat{\theta})\bigr]^2$. Since the estimator is unbiased, its bias $B(\hat{\theta})$ equals zero. For the variance, remember that $Y_{(1)}$ is exponential. We have
\[
\mathrm{MSE}(\hat{\theta}) = V[\hat{\theta}] + 0 = n^2\, V[Y_{(1)}] = n^2 \cdot \frac{\theta^2}{n^2} = \theta^2.
\]
§
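The unbiasedness and MSE derived above can be sanity-checked with a short Monte Carlo sketch (not part of the original solution; the values of $\theta$, $n$, and the replication count below are arbitrary assumptions):

```python
import random

random.seed(42)
theta = 2.0      # true parameter (arbitrary choice for the simulation)
n = 5            # sample size (arbitrary)
reps = 200_000   # Monte Carlo replications

estimates = []
for _ in range(reps):
    # random.expovariate takes the rate, so 1/theta gives mean theta
    sample = [random.expovariate(1 / theta) for _ in range(n)]
    estimates.append(n * min(sample))   # theta-hat = n * Y_(1)

mean_est = sum(estimates) / reps
mse = sum((e - theta) ** 2 for e in estimates) / reps
print(mean_est)  # close to theta
print(mse)       # close to theta**2
```

With enough replications, `mean_est` settles near $\theta$ and `mse` near $\theta^2$, matching the derivation.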
9.7 Suppose that $Y_1, Y_2, \dots, Y_n$ denote a random sample of size $n$ from an exponential distribution with density function given by
\[
f(y) = \begin{cases} (1/\theta)e^{-y/\theta}, & y > 0, \\ 0, & \text{elsewhere.} \end{cases}
\]
In Exercise 8.15 we determined that $\hat{\theta}_1 = nY_{(1)}$ is an unbiased estimator of $\theta$ with $\mathrm{MSE}(\hat{\theta}_1) = \theta^2$. Consider the estimator $\hat{\theta}_2 = \bar{Y}$, and find the efficiency of $\hat{\theta}_1$ relative to $\hat{\theta}_2$.
Solution. First compute the variance of $\hat{\theta}_2$:
\[
V[\hat{\theta}_2] = V\!\left[\frac{Y_1 + \cdots + Y_n}{n}\right] = \frac{1}{n^2}\, V[Y_1 + \cdots + Y_n] = \frac{1}{n^2}\bigl(V[Y_1] + \cdots + V[Y_n]\bigr) = \frac{1}{n^2}\underbrace{(\theta^2 + \cdots + \theta^2)}_{n\ \text{times}} = \frac{n\theta^2}{n^2} = \frac{\theta^2}{n}.
\]
To find the relative efficiency, we need to find the ratio of the two variances:
\[
\mathrm{eff}(\hat{\theta}_1, \hat{\theta}_2) = \frac{V(\hat{\theta}_2)}{V(\hat{\theta}_1)} = \frac{\theta^2/n}{\theta^2} = \frac{1}{n}.
\]
We conclude that $\hat{\theta}_2$ is preferable to $\hat{\theta}_1$.
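The $1/n$ relative efficiency can likewise be checked by simulation (a sketch with arbitrary $\theta$ and $n$; not part of the original solution):

```python
import random

random.seed(0)
theta, n, reps = 2.0, 10, 100_000   # arbitrary simulation settings

est1, est2 = [], []
for _ in range(reps):
    # random.expovariate takes the rate, so 1/theta gives mean theta
    sample = [random.expovariate(1 / theta) for _ in range(n)]
    est1.append(n * min(sample))    # theta-hat-1 = n * Y_(1)
    est2.append(sum(sample) / n)    # theta-hat-2 = Y-bar

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

eff = variance(est2) / variance(est1)
print(eff)   # close to 1/n
```

The simulated ratio lands near $1/n$, confirming that $\bar{Y}$ has the much smaller variance.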
9.61 Let $Y_1, Y_2, \dots, Y_n$ denote a random sample from the probability density function
\[
f(y) = \begin{cases} (\theta + 1)y^{\theta}, & 0 < y < 1;\ \theta > -1, \\ 0, & \text{elsewhere.} \end{cases}
\]
Find an estimator for $\theta$ by the method of moments.
Solution. Let’s find the first moment of this distribution:
\[
\mu = \int_0^1 (\theta + 1)\, y^{\theta+1}\, dy = \left.\frac{(\theta + 1)\, y^{\theta+2}}{\theta + 2}\right|_0^1 = \frac{\theta + 1}{\theta + 2}.
\]
The method of moments implies
\[
\frac{\hat{\theta} + 1}{\hat{\theta} + 2} = \bar{Y} \quad\Longrightarrow\quad \hat{\theta} = \frac{2\bar{Y} - 1}{1 - \bar{Y}}.
\]
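Solving $(\hat{\theta}+1)/(\hat{\theta}+2) = \bar{Y}$ gives $\hat{\theta} = (2\bar{Y} - 1)/(1 - \bar{Y})$, which can be tried on simulated data (a sketch; the parameter values and the inverse-CDF sampling step are my additions, not part of the original solution):

```python
import random

random.seed(1)
theta, n = 3.0, 50_000   # arbitrary simulation settings

# The CDF is F(y) = y**(theta + 1) on (0, 1), so an inverse-CDF
# draw from f(y) = (theta + 1) * y**theta is U**(1 / (theta + 1))
sample = [random.random() ** (1 / (theta + 1)) for _ in range(n)]

ybar = sum(sample) / n
theta_mom = (2 * ybar - 1) / (1 - ybar)   # solves (theta+1)/(theta+2) = ybar
print(theta_mom)   # close to theta
```

For large $n$ the sample mean is close to $(\theta+1)/(\theta+2)$, so `theta_mom` lands near the true $\theta$.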
Solution. This is a somewhat different problem from the previous one because the support of the density function depends on $\theta$. Recall the indicator function $I(A)$: it is equal to one when $A$ is true, and zero when $A$ is false.
We can write the likelihood function in the following way:
\[
L(\theta) = \prod_{i=1}^n f(y_i) = \prod_{i=1}^n \frac{1}{2\theta + 1}\, I(0 \le y_i \le 2\theta + 1) = \frac{1}{(2\theta + 1)^n} \prod_{i=1}^n I(0 \le y_i \le 2\theta + 1).
\]
We can simplify this even further if we note that the product of indicators is non-zero only when all of the underlying conditions are fulfilled, that is, when all $y_i$ are less than $2\theta + 1$ and positive. Notice that this statement is equivalent to the following: $0 \le y_{(1)}$ and $y_{(n)} \le 2\theta + 1$. (We use the order statistics $y_{(1)} = \min(y_1, \dots, y_n)$ and $y_{(n)} = \max(y_1, \dots, y_n)$.) We have
\[
L(\theta) = \frac{1}{(2\theta + 1)^n}\, I(0 \le y_{(1)}) \cdot I(y_{(n)} \le 2\theta + 1).
\]
Now look at the first part of the likelihood function $L$, namely $(2\theta + 1)^{-n}$. Notice that this is a decreasing (and continuous) function of $\theta$, so to maximize $L$ we should choose the value of $\theta$ as small as possible. Notice also that if $2\theta + 1$ is smaller than $y_{(n)}$, then the value of $L(\theta)$ is zero. So the smallest admissible value of $2\theta + 1$ is $y_{(n)}$. This gives the smallest admissible value for $\theta$ and maximizes the likelihood $L(\theta)$. We conclude (provided at least one observation in the sample is positive)
\[
2\hat{\theta} + 1 = y_{(n)} \quad\therefore\quad \hat{\theta} = \frac{Y_{(n)} - 1}{2}.
\]
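Assuming the density here is the uniform one the likelihood implies, $f(y) = 1/(2\theta+1)$ on $[0,\, 2\theta+1]$ (an assumption on my part, since the problem statement itself is not reproduced above), the MLE can be checked on simulated data:

```python
import random

random.seed(7)
theta, n = 1.5, 10_000    # arbitrary simulation settings
upper = 2 * theta + 1     # upper end of the assumed support

sample = [random.uniform(0, upper) for _ in range(n)]

theta_mle = (max(sample) - 1) / 2   # solves 2*theta + 1 = y_(n)
print(theta_mle)   # close to theta (slightly below, since y_(n) < upper)
```

As with any maximum-based estimator on a bounded support, $\hat{\theta}$ approaches the true value from below as $n$ grows.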
9.80 Let $Y_1, Y_2, \dots, Y_n$ denote a random sample from the probability density function
\[
f(y) = \begin{cases} (\theta + 1)y^{\theta}, & 0 < y < 1;\ \theta > -1, \\ 0, & \text{elsewhere.} \end{cases}
\]
Find the maximum-likelihood estimator for $\theta$. Compare your answer to the method of moments estimator found in Exercise 9.61.
Solution. Define the likelihood function:
\[
L(\theta) = \prod_{i=1}^n (\theta + 1)\, y_i^{\theta} = (\theta + 1)^n \left(\prod_{i=1}^n y_i\right)^{\!\theta}.
\]
Take the logarithm:
\[
\ln L = n \ln(\theta + 1) + \theta \sum_{i=1}^n \ln y_i.
\]
Find critical points:
\[
\frac{d}{d\theta} \ln L = \frac{n}{\theta + 1} + \sum_{i=1}^n \ln y_i = 0,
\]
so
\[
\hat{\theta} = -\frac{n}{\sum_{i=1}^n \ln y_i} - 1,
\]
and finally
\[
\hat{\theta} = -\frac{n}{\sum_{i=1}^n \ln Y_i} - 1.
\]
This is quite different from the method of moments estimator found in Exercise 9.61. §
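The MLE and the method-of-moments estimator of 9.61 (the latter obtained by solving $(\hat{\theta}+1)/(\hat{\theta}+2) = \bar{Y}$) can be compared on simulated data; a sketch with arbitrary $\theta$ and $n$, not part of the original solution:

```python
import math
import random

random.seed(2)
theta, n = 3.0, 50_000   # arbitrary simulation settings

# The CDF is F(y) = y**(theta + 1) on (0, 1), so an inverse-CDF
# draw from f(y) = (theta + 1) * y**theta is U**(1 / (theta + 1))
sample = [random.random() ** (1 / (theta + 1)) for _ in range(n)]

theta_mle = -n / sum(math.log(y) for y in sample) - 1
ybar = sum(sample) / n
theta_mom = (2 * ybar - 1) / (1 - ybar)   # solves (theta+1)/(theta+2) = ybar
print(theta_mle, theta_mom)   # both close to theta
```

Despite their very different closed forms, both estimators converge to the true $\theta$ as $n$ grows.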