
M. PHIL. IN STATISTICAL SCIENCE

Thursday 5 June 2003 9 to 12

PAPER 41

STATISTICAL THEORY

Attempt FOUR questions, not more than TWO of which should be from Section B. There are ten questions in total. The questions carry equal weight.

You may not start to read the questions printed on the subsequent pages until instructed to do so by the Invigilator.

Section A

1 (i) Let $Y_1, \ldots, Y_n$ be independent, identically distributed exponential random variables with common density $f(y; \lambda) = \lambda e^{-\lambda y}$, $y > 0$, and suppose that inference is required for $\theta = E(Y_1)$. Find the maximum likelihood estimator of $\theta$, and explain carefully why, with $\bar Y = n^{-1} \sum_{i=1}^n Y_i$ and $\Phi$ the distribution function of $N(0, 1)$,

$$\left( \frac{\bar Y}{1 + n^{-1/2}\,\Phi^{-1}\!\left(1 - \frac{\alpha}{2}\right)},\ \frac{\bar Y}{1 - n^{-1/2}\,\Phi^{-1}\!\left(1 - \frac{\alpha}{2}\right)} \right) \qquad (\ast)$$

is a confidence interval for $\theta$ of asymptotic coverage $1 - \alpha$.

(ii) Define the $r$th degree Hermite polynomial $H_r(x)$. Let $X_1, \ldots, X_n$ be independent, identically distributed random variables, with common mean $\mu$ and common variance $\sigma^2$, and let

$$T = \left( \sum_{i=1}^n X_i - n\mu \right) \Big/ \sqrt{n}\,\sigma .$$

An Edgeworth expansion of the distribution function of $T$ is

$$P(T \le t) = \Phi(t) - \phi(t) \left\{ \frac{\rho_3}{6\sqrt{n}}\, H_2(t) + \frac{\rho_4}{24 n}\, H_3(t) + \frac{\rho_3^2}{72 n}\, H_5(t) \right\} + O(n^{-3/2}),$$

in terms of standardised cumulants $\rho_r$. Use an appropriate Edgeworth expansion to show that the confidence interval $(\ast)$ in (i) above has coverage error of order $O(n^{-1})$.
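[Aside, not part of the examination paper: the claimed coverage behaviour of $(\ast)$ is easy to check by simulation. A minimal R sketch follows; the sample size, true mean and replication count are arbitrary choices.]

```r
# Monte Carlo estimate of the coverage of the interval (*) for theta = E(Y1).
set.seed(1)
n     <- 50                  # sample size (arbitrary)
theta <- 2                   # true mean, so lambda = 1/theta
alpha <- 0.05
z     <- qnorm(1 - alpha / 2)
reps  <- 20000               # Monte Carlo replications

covered <- replicate(reps, {
  ybar <- mean(rexp(n, rate = 1 / theta))
  ybar / (1 + z / sqrt(n)) <= theta && theta <= ybar / (1 - z / sqrt(n))
})
mean(covered)   # close to 1 - alpha; the discrepancy shrinks like 1/n
```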

2 Explain briefly the concept of a conditional likelihood.

Suppose $Y_1, \ldots, Y_n$ are independent, identically distributed from the exponential family density

$$f(y; \psi, \lambda) = \exp\{\psi \tau_1(y) + \lambda \tau_2(y) - d(\psi, \lambda) - Q(y)\}.$$

Find the cumulant generating function of $\tau_2(Y_i)$, and a saddlepoint approximation to the density of $S = n^{-1} \sum_{i=1}^n \tau_2(Y_i)$.

Show that the saddlepoint approximation leads to an approximation to a conditional log-likelihood function for $\psi$ of the form

$$l(\psi, \hat\lambda_\psi) + \tfrac{1}{2} \log |d_{\lambda\lambda}(\psi, \hat\lambda_\psi)|,$$

in terms of quantities $\hat\lambda_\psi$, $d_{\lambda\lambda}$, which you should define carefully.
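[Aside, not part of the paper: the saddlepoint density approximation $\hat f(s) = \{n/(2\pi K''(\hat t))\}^{1/2} \exp[n\{K(\hat t) - \hat t s\}]$, with $K'(\hat t) = s$, can be checked numerically in a case where the exact density is known, namely the mean of $n$ Exp(1) variables, i.e. taking $\tau_2(y) = y$ with cgf $K(t) = -\log(1-t)$. A minimal R sketch:]

```r
# Saddlepoint approximation to the density of the mean S of n iid Exp(1)
# variables, compared with the exact Gamma(shape = n, rate = n) density.
n <- 10
s <- seq(0.3, 2.5, by = 0.01)        # grid of points for the density

K    <- function(t) -log(1 - t)      # cgf of Exp(1), t < 1
K2   <- function(t) 1 / (1 - t)^2    # K''(t)
that <- 1 - 1 / s                    # saddlepoint: solves K'(t) = s

saddle <- sqrt(n / (2 * pi * K2(that))) * exp(n * (K(that) - that * s))
exact  <- dgamma(s, shape = n, rate = n)

max(abs(saddle / exact - 1))   # constant relative error here, roughly 1/(12n)
```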


Section B

7 Suppose $(r_{ij})$ are independent observations, with

$$r_{ij} \sim \mathrm{Bi}(n_{ij}, p_{ij}), \quad 1 \le i, j \le 2,$$

where $n_{11}, n_{12}, n_{21}, n_{22}$ are given totals. Consider the model

$$\omega: \ \mathrm{logit}\, p_{ij} = \mu + \alpha_i + \beta_j, \quad 1 \le i, j \le 2,$$

where $\alpha_1 = \beta_1 = 0$.

(i) Write down the log-likelihood under $\omega$, and discuss carefully how $\hat\alpha_2$, $\hat\beta_2$ and their corresponding standard errors may be derived. [Do not attempt to find analytical expressions for $\hat\alpha_2$, $\hat\beta_2$ and their se's.]

(ii) With $(r_{ij}/n_{ij})$ given as

498/796   878/…
 54/142   197/…

and fitting $\omega$ by glm( ), the deviance was found to be .00451, with

$$\hat\alpha_2 = -1.013 \ (\mathrm{se} = .0872), \qquad \hat\beta_2 = -0.3544 \ (\mathrm{se} = .0804).$$

What do you conclude from these figures?
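[Aside, not part of the paper: the fit in (ii) can be reproduced in R along the following lines. Two of the denominators are cut off in the preview, so the values 1000 and 400 below are purely hypothetical placeholders; only the shape of the glm( ) call is the point.]

```r
# Fitting omega by glm() to a 2x2 layout of binomial observations.
# NOTE: n12 = 1000 and n22 = 400 are HYPOTHETICAL; the real denominators
# are cut off in the preview of the paper.
r <- c(498, 878, 54, 197)
n <- c(796, 1000, 142, 400)
a <- factor(c(1, 1, 2, 2))   # row effect; treatment contrasts fix alpha_1 = 0
b <- factor(c(1, 2, 1, 2))   # column effect; beta_1 = 0

fit <- glm(cbind(r, n - r) ~ a + b, family = binomial)
summary(fit)      # alpha2-hat, beta2-hat and their standard errors
deviance(fit)     # on 1 df; a tiny value indicates omega fits well
```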

8 Let $Y_1, \ldots, Y_n$ be independent variables, such that $Y = X\beta + \epsilon$, where $X$ is a given $n \times p$ matrix of rank $p$, $\beta$ is an unknown vector of dimension $p$, and $\epsilon_1, \ldots, \epsilon_n$ are independent normal variables, each with mean $0$ and unknown variance $\sigma^2$.

(i) Derive an expression for $\hat\beta$, the least squares estimator of $\beta$, and derive the distribution of $\hat\beta$.

(ii) How would you estimate $\sigma^2$?

(iii) In fitting the model

$$Y_i = \mu + \alpha x_i + \beta z_i + \gamma t_i + \epsilon_i, \quad 1 \le i \le n,$$

where $(x_i)$, $(z_i)$, $(t_i)$ are given vectors, and $\epsilon_1, \ldots, \epsilon_n$ have the distribution given above, explain carefully how you would test $H_0: \beta = \gamma = 0$.

(You may quote any standard theorems needed.)
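[Aside, not part of the paper: parts (i)–(iii) correspond to the following R sketch; the simulated design vectors and coefficients are arbitrary.]

```r
# Least squares estimate beta-hat = (X'X)^{-1} X'y computed directly,
# plus the F-test of H0: beta = gamma = 0 via nested model comparison.
set.seed(2)
n <- 100
x <- rnorm(n); z <- rnorm(n); tt <- rnorm(n)   # arbitrary design vectors
y <- 1 + 0.5 * x + rnorm(n)                    # generated with beta = gamma = 0

X <- cbind(1, x, z, tt)                        # n x p design matrix, p = 4
beta.hat   <- solve(t(X) %*% X, t(X) %*% y)    # (X'X)^{-1} X'y
sigma2.hat <- sum((y - X %*% beta.hat)^2) / (n - 4)   # RSS / (n - p)

fit0 <- lm(y ~ x)              # model under H0
fit1 <- lm(y ~ x + z + tt)     # full model
anova(fit0, fit1)              # F-test of H0: beta = gamma = 0
```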


9 Write an account, with appropriate examples, of the decision theory approach to inference. Your account should include discussion of all of the following:

(i) the main elements of a decision theory problem;
(ii) the Bayes and minimax principles;
(iii) admissibility;
(iv) finite decision problems;
(v) decision theory approaches to point estimation and hypothesis testing.
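[Aside, not part of the paper: a toy illustration of (ii) and (iv), with a made-up risk matrix, showing that the Bayes and minimax rules can differ.]

```r
# Toy finite decision problem: two states theta_1, theta_2 and three
# decision rules d_1, d_2, d_3, with a made-up risk matrix R(theta_i, d_j).
R <- rbind(c(1, 3, 2),    # risks under theta_1
           c(5, 2, 4))    # risks under theta_2

apply(R, 2, max)              # worst-case risk of each rule: 5, 3, 4
which.min(apply(R, 2, max))   # minimax rule: d_2

prior <- c(0.7, 0.3)          # a prior over the two states
drop(prior %*% R)             # Bayes risks: 2.2, 2.7, 2.6
which.min(drop(prior %*% R))  # Bayes rule under this prior: d_1
```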

10 Suppose that $Y_1$ and $Y_2$ are independent Poisson random variables with means $\psi\mu$ and $\mu$ respectively. We are interested in testing the null hypothesis $H_0: \psi \le \psi_0$ against the alternative hypothesis $H_1: \psi > \psi_0$, where $\psi_0$ is given and $\mu$ is unknown.

Explain in detail why the appropriate test is a conditional test, based on the conditional distribution of $Y_1$ given $Y_1 + Y_2$, and find its form.

Let $S = Y_1 + Y_2$. Show that the significance probability for observed $(Y_1, Y_2)$ is approximately

$$1 - \Phi\!\left[ \frac{Y_1 - S\psi_0/(1 + \psi_0)}{\{S\psi_0/(1 + \psi_0)^2\}^{1/2}} \right].$$
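[Aside, not part of the paper: under the boundary of $H_0$, $Y_1 \mid S = s \sim \mathrm{Bi}(s, \psi_0/(1+\psi_0))$, so both the exact conditional significance probability and the normal approximation above are one-liners in R. The observed counts and $\psi_0$ below are made-up illustrative values.]

```r
# Conditional test of H0: psi <= psi0 given S = Y1 + Y2.  Under H0 at the
# boundary, Y1 | S = s ~ Binomial(s, p0) with p0 = psi0 / (1 + psi0).
# The observed counts and psi0 are ILLUSTRATIVE values only.
y1 <- 30; y2 <- 15; psi0 <- 1
s  <- y1 + y2
p0 <- psi0 / (1 + psi0)

# Exact conditional significance probability P(Y1 >= y1 | S = s):
pbinom(y1 - 1, size = s, prob = p0, lower.tail = FALSE)

# Normal approximation from the question:
1 - pnorm((y1 - s * p0) / sqrt(s * psi0 / (1 + psi0)^2))
```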
