Random Sample - Introductory Statistics - Lab Solutions, Study notes of Mathematical Statistics

Stat 366 Lab 2 Solutions (September 21, 2006)
TA: Yury Petrachenko, CAB 484, yuryp@ualberta.ca, http://www.ualberta.ca/~yuryp/
Review Questions, Chapters 8, 9
8.15 Suppose that Y1, Y2, ..., Yn denote a random sample of size n from a population with an
exponential distribution whose density is given by

    f(y) = (1/θ) e^{-y/θ},  y > 0;    f(y) = 0 elsewhere.

If Y(1) = min(Y1, Y2, ..., Yn) denotes the smallest order statistic, show that θ̂ = n Y(1) is an
unbiased estimator for θ and find MSE(θ̂).

Solution. Let's find the distribution function of Y:

    F(y) = 1 - e^{-y/θ},  y > 0;    F(y) = 0 elsewhere.

Now we can use the formula F_{Y(1)}(y) = 1 - [1 - F(y)]^n, or f_{Y(1)}(y) = n [1 - F(y)]^{n-1} f(y),
to find the density function for Y(1): for y > 0,

    f_{Y(1)}(y) = n (e^{-y/θ})^{n-1} (1/θ) e^{-y/θ} = (n/θ) e^{-yn/θ}.

We recognize this as the density of the exponential distribution with parameter θ/n, that is,
Y(1) ~ Exp(θ/n).

Knowing the distribution of Y(1) allows us to compute the expectation of θ̂ = n Y(1):

    E[θ̂] = n E[Y(1)] = n (θ/n) = θ.

So E[θ̂] = θ, and θ̂ is an unbiased estimator of θ.

To find MSE(θ̂), use the formula MSE(θ̂) = V[θ̂] + (B(θ̂))^2. Since the estimator is unbiased,
its bias B(θ̂) equals zero. For the variance, remember that Y(1) is exponential with mean θ/n,
so V[Y(1)] = θ^2/n^2. We have

    MSE(θ̂) = V[θ̂] + 0 = n^2 V[Y(1)] = n^2 (θ^2/n^2) = θ^2.  ∎
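As a quick numerical check of this result (not part of the original lab; the value of θ, the sample size, and the seed below are arbitrary choices), we can simulate many samples and compare the empirical mean and MSE of θ̂ = n Y(1) with θ and θ^2:

```python
# Monte Carlo sketch: theta_hat = n * Y_(1) should average to theta,
# with mean squared error close to theta^2.
import random

random.seed(0)
theta, n, reps = 2.0, 5, 200_000

estimates = []
for _ in range(reps):
    # expovariate takes a rate, so rate = 1/theta gives mean-theta exponentials
    sample = [random.expovariate(1 / theta) for _ in range(n)]
    estimates.append(n * min(sample))  # theta_hat = n * Y_(1)

mean_est = sum(estimates) / reps
mse = sum((e - theta) ** 2 for e in estimates) / reps
print(f"E[theta_hat] ≈ {mean_est:.3f}  (theta = {theta})")
print(f"MSE ≈ {mse:.3f}  (theta^2 = {theta ** 2})")
```

Both printed values should land close to 2.0 and 4.0 respectively, matching the derivation above.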


9.7 Suppose that Y1, Y2, ..., Yn denote a random sample of size n from an exponential distribution
with density function given by

    f(y) = (1/θ) e^{-y/θ},  y > 0;    f(y) = 0 elsewhere.

In Exercise 8.15 we determined that θ̂1 = n Y(1) is an unbiased estimator of θ with
MSE(θ̂1) = θ^2. Consider the estimator θ̂2 = Ȳ, and find the efficiency of θ̂1 relative to θ̂2.

Solution. First compute the variance of θ̂2:

    V[θ̂2] = V[Ȳ] = V[(Y1 + ··· + Yn)/n] = (1/n^2) V[Y1 + ··· + Yn]
           = (1/n^2) (V[Y1] + ··· + V[Yn]) = (1/n^2) (θ^2 + ··· + θ^2)   (n terms)
           = (1/n^2) n θ^2 = θ^2/n.

To find the relative efficiency, we need the ratio of the two variances:

    eff(θ̂1, θ̂2) = V(θ̂2)/V(θ̂1) = (θ^2/n)/θ^2 = 1/n.

Since this ratio is less than one for n > 1, we conclude that θ̂2 is preferable to θ̂1.
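The ratio 1/n can also be seen in simulation. The sketch below is illustrative only (θ, n, and the seed are arbitrary choices, not from the original lab):

```python
# Compare empirical variances of theta1_hat = n * Y_(1) and theta2_hat = Y_bar.
import random

random.seed(1)
theta, n, reps = 3.0, 10, 100_000

t1, t2 = [], []
for _ in range(reps):
    y = [random.expovariate(1 / theta) for _ in range(n)]  # mean-theta exponentials
    t1.append(n * min(y))     # theta1_hat, variance theta^2
    t2.append(sum(y) / n)     # theta2_hat, variance theta^2 / n

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

eff = var(t2) / var(t1)       # should be near 1/n = 0.1
print(f"eff(theta1_hat, theta2_hat) ≈ {eff:.3f}  (1/n = {1 / n})")
```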

9.61 Let Y1, Y2, ..., Yn denote a random sample from the probability density function

    f(y) = (θ + 1) y^θ,  0 < y < 1, θ > -1;    f(y) = 0 elsewhere.

Find an estimator for θ by the method of moments.

Solution. Let's find the first moment of this distribution:

    μ = ∫₀¹ y (θ + 1) y^θ dy = ∫₀¹ (θ + 1) y^{θ+1} dy = [(θ + 1) y^{θ+2}/(θ + 2)]₀¹
      = (θ + 1)/(θ + 2).

The method of moments implies

    Ȳ = (θ + 1)/(θ + 2),   so   θ̂ = (2Ȳ - 1)/(1 - Ȳ).
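A short simulation can sanity-check the method-of-moments estimator θ̂ = (2Ȳ - 1)/(1 - Ȳ). The inverse-CDF sampling step and all numeric values below are my own illustrative choices, not part of the original lab:

```python
# For f(y) = (theta+1) y^theta on (0,1), the CDF is F(y) = y^(theta+1),
# so Y = U^(1/(theta+1)) with U uniform on (0,1) samples from this density.
import random

random.seed(2)
theta, n = 1.5, 500_000

ybar = sum(random.random() ** (1 / (theta + 1)) for _ in range(n)) / n
theta_hat = (2 * ybar - 1) / (1 - ybar)  # method-of-moments estimator
print(f"theta_hat ≈ {theta_hat:.3f}  (true theta = {theta})")
```

With a large n the estimate should land very close to the true θ.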

Solution. This is a somewhat different problem from the previous one because the support
of the density function depends on θ. Recall the indicator function I(A): it is equal to one
when A is true and zero when A is false.

We can write the likelihood function in the following way:

    L = ∏_{i=1}^n f(y_i) = ∏_{i=1}^n (1/(2θ + 1)) I(0 ≤ y_i ≤ 2θ + 1)
      = (2θ + 1)^{-n} ∏_{i=1}^n I(0 ≤ y_i ≤ 2θ + 1).

We can simplify this even further if we note that the product of indicators is non-zero only
when all of the underlying conditions are fulfilled, that is, when all y_i are positive and
less than 2θ + 1. Notice that this statement is equivalent to the following: 0 ≤ y(1) and
y(n) ≤ 2θ + 1. (We use the order statistics y(1) = min(y1, ..., yn) and
y(n) = max(y1, ..., yn).) We have

    L = (2θ + 1)^{-n} I(0 ≤ y(1)) · I(y(n) ≤ 2θ + 1).

Now look at the first part of the likelihood function L, (2θ + 1)^{-n}. Notice that this is a
decreasing (and continuous) function of θ. If we want to maximize L, we should choose the
value of θ as small as possible. Notice that if 2θ + 1 is smaller than y(n), then the value of
L(θ) is zero. So the minimum value of 2θ + 1 is y(n). This gives the minimum value for θ and
maximizes the likelihood L(θ). We conclude (provided at least one observation in the sample is
positive)

    Y(n) = 2θ̂ + 1,   ∴   θ̂ = (Y(n) - 1)/2.
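The likelihood above corresponds to Y being uniform on [0, 2θ + 1]; assuming that reading (and with θ, n, and the seed chosen arbitrarily for illustration), the argument can be checked numerically:

```python
# MLE for uniform on [0, 2*theta + 1]: theta_hat = (Y_(n) - 1) / 2.
# We verify that the log-likelihood really peaks at theta_hat.
import math
import random

random.seed(3)
theta, n = 2.0, 1_000

y = [random.uniform(0, 2 * theta + 1) for _ in range(n)]
theta_hat = (max(y) - 1) / 2

def log_likelihood(t):
    # log L(t) = -n * log(2t + 1) when every y_i fits in [0, 2t + 1], else -infinity
    return -n * math.log(2 * t + 1) if max(y) <= 2 * t + 1 else float("-inf")

# just below theta_hat the likelihood drops to zero; above it, it only decreases
assert log_likelihood(theta_hat - 0.01) == float("-inf")
assert log_likelihood(theta_hat) > log_likelihood(theta_hat + 0.01)
print(f"theta_hat ≈ {theta_hat:.3f}  (true theta = {theta})")
```

The log scale avoids the underflow that (2θ + 1)^{-n} would cause for large n.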

9.80 Let Y1, Y2, ..., Yn denote a random sample from the probability density function

    f(y) = (θ + 1) y^θ,  0 < y < 1, θ > -1;    f(y) = 0 elsewhere.

Find the maximum-likelihood estimator for θ. Compare your answer to the method of moments
estimator found in Exercise 9.61.

Solution. Define the likelihood function:

    L = ∏_{i=1}^n (θ + 1) y_i^θ = (θ + 1)^n (∏_{i=1}^n y_i)^θ.

Take the logarithm:

    ln L = n ln(θ + 1) + θ ∑_{i=1}^n ln y_i.

Find the critical points:

    (d/dθ) ln L = n/(θ + 1) + ∑_{i=1}^n ln y_i = 0,

so

    θ + 1 = -n / ∑_{i=1}^n ln y_i,

and finally

    θ̂ = -n / ∑_{i=1}^n ln Y_i - 1.

This is quite different from the method of moments estimator found in Exercise 9.61.  ∎
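To see the two estimators side by side, here is a small simulation sketch (the sampling scheme, θ, n, and seed are illustrative assumptions of mine, not part of the original lab):

```python
# Compare the MLE theta_hat = -n / sum(ln Y_i) - 1 with the method-of-moments
# estimator from 9.61, for data drawn from f(y) = (theta+1) y^theta on (0, 1).
import math
import random

random.seed(4)
theta, n = 1.5, 200_000

# inverse-CDF sampling: F(y) = y^(theta+1), so Y = U^(1/(theta+1))
y = [random.random() ** (1 / (theta + 1)) for _ in range(n)]

mle = -n / sum(math.log(v) for v in y) - 1
ybar = sum(y) / n
mom = (2 * ybar - 1) / (1 - ybar)  # 9.61 estimator, for comparison
print(f"MLE ≈ {mle:.3f}, MoM ≈ {mom:.3f}  (true theta = {theta})")
```

Although the two formulas look quite different, both estimates should land close to the true θ for a sample this large.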