



A portion of a master's-level statistical theory exam from 2004. It includes questions on kernel density estimation, empirical distribution functions, U-statistics, Edgeworth expansions, saddlepoint approximations, bandwidth selection, likelihoods, and hypothesis testing. Students preparing for a statistical theory exam may find this document useful for understanding concepts and practising problem-solving.
Thursday 3 June 2004 1.30 to 4.
Attempt FOUR questions, not more than TWO of which should be from Section B.
There are ten questions in total.
The questions carry equal weight.
Section A
1 Let $X_1, \ldots, X_n$ be independent random variables with density $f(x)$. Define what is meant by a kernel $K(x)$, and by the kernel density estimate $\hat{f}_h(x)$ of $f(x)$, with kernel $K$ and bandwidth $h > 0$.
Define the mean integrated squared error (MISE) of $\hat{f}_h$, and derive an exact expression for this quantity in terms of $f$ and the scaled kernel $K_h$, where $K_h(x) = h^{-1} K(x/h)$.
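For revision, a minimal numerical sketch of the estimator may help; the Gaussian kernel, the sample, the bandwidth, and the helper names below are illustrative choices, not part of the question.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel K(u)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kde(x, sample, h):
    """f_hat_h(x) = (1 / (n h)) * sum_i K((x - X_i) / h)."""
    x = np.asarray(x)[:, None]            # evaluation points, shape (m, 1)
    u = (x - sample[None, :]) / h         # scaled differences (x - X_i) / h
    return gaussian_kernel(u).mean(axis=1) / h

rng = np.random.default_rng(0)
sample = rng.normal(size=200)             # X_1, ..., X_n drawn from f = N(0, 1)
grid = np.linspace(-4.0, 4.0, 9)
print(kde(grid, sample, h=0.4))           # estimate of f on a coarse grid
```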
For a symmetric, second-order kernel, under regularity conditions, the minimum value of the asymptotic MISE may be expressed as
$$\inf_{h > 0} \mathrm{AMISE}(\hat{f}_h) = \tfrac{5}{4}\{\mu_2(K)^2 R(K)^4 R(f'')\}^{1/5}\, n^{-4/5},$$
where $\mu_2(K) = \int_{-\infty}^{\infty} x^2 K(x)\,dx$, and $R(g) = \int_{-\infty}^{\infty} g(x)^2\,dx$ for a square-integrable function $g : \mathbb{R} \to \mathbb{R}$. Show that $R(f'')$ may be made arbitrarily small by means of a scale transformation $a f(ax)$ of $f(x)$, but that
$$D(f) = \sigma(f)^5 R(f'')$$
is scale invariant, where
$$\sigma(f)^2 = \int_{-\infty}^{\infty} x^2 f(x)\,dx - \left( \int_{-\infty}^{\infty} x f(x)\,dx \right)^2.$$
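Both claims can be sanity-checked numerically. The sketch below assumes $f$ is the standard Gaussian (an arbitrary illustrative choice) and approximates derivatives and integrals on a grid; it shows $R(f_a'')$ scaling as $a^5$ while $D(f_a)$ stays fixed.

```python
import numpy as np

def roughness_f2(f, grid, dx):
    """R(f'') = integral of f''(x)^2 dx, with f'' by finite differences."""
    f2 = np.gradient(np.gradient(f(grid), dx), dx)
    return (f2**2).sum() * dx

def sigma(f, grid, dx):
    """Standard deviation of the density f."""
    m1 = (grid * f(grid)).sum() * dx
    m2 = (grid**2 * f(grid)).sum() * dx
    return np.sqrt(m2 - m1**2)

gauss = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
grid = np.linspace(-30.0, 30.0, 400001)
dx = grid[1] - grid[0]

for a in [0.25, 1.0, 4.0]:
    fa = lambda x, a=a: a * gauss(a * x)        # the scale transformation a f(ax)
    R = roughness_f2(fa, grid, dx)
    D = sigma(fa, grid, dx)**5 * R
    print(f"a={a}: R(f_a'')={R:.3e}  D(f_a)={D:.4f}")   # R varies as a^5; D is constant
```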
Let
$$f_0(x) = \tfrac{35}{32}(1 - x^2)^3 \mathbf{1}_{\{|x| < 1\}},$$
and let $h(x)$ be another twice continuously differentiable density satisfying $\int_{-\infty}^{\infty} x h(x)\,dx = 0$ and $\sigma(h) = \sigma(f_0)$. By considering $e(x) = h(x) - f_0(x)$ or otherwise, show that $R(h'') > R(f_0'')$.
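The inequality can be illustrated numerically for one particular competitor: the sketch below takes $h$ to be the Gaussian density with the same mean and standard deviation as $f_0$ (one admissible choice, not the general argument), again using grid-based finite differences.

```python
import numpy as np

grid = np.linspace(-5.0, 5.0, 400001)
dx = grid[1] - grid[0]

def roughness_f2(vals):
    """R(g'') for a density tabulated on the grid."""
    g2 = np.gradient(np.gradient(vals, dx), dx)
    return (g2**2).sum() * dx

f0 = (35.0 / 32.0) * np.clip(1.0 - grid**2, 0.0, None)**3   # f_0, zero outside |x| < 1
sd = np.sqrt((grid**2 * f0).sum() * dx)                     # sigma(f_0); the mean is 0 by symmetry
h = np.exp(-0.5 * (grid / sd)**2) / (sd * np.sqrt(2 * np.pi))  # Gaussian with sigma(h) = sigma(f_0)

print(roughness_f2(h), roughness_f2(f0))   # approx 51.4 > 35.0, as the inequality requires
```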
5 Give a brief description of marginal and profile likelihoods, contrasting the ways in which they are used to deal with nuisance parameters.
Let $X_1, \ldots, X_m, Y_1, \ldots, Y_n$ be independent exponential random variables, with $X_1, \ldots, X_m$ having mean $1/(\psi\lambda)$ and $Y_1, \ldots, Y_n$ having mean $1/\lambda$. Further, let $X = \sum_{i=1}^m X_i$ and $Y = \sum_{i=1}^n Y_i$. Write down the joint density of $X$ and $Y$. Consider the transformation
$$T = X/Y, \qquad U = Y.$$
By first computing the joint density of $T$ and $U$, find the marginal density of $T$ and show that the marginal log-likelihood for $\psi$ based on $T$ is
$$\ell(\psi; t) = m \log \psi - (m + n) \log(\psi t + 1).$$
Compute the maximum likelihood estimate of $\lambda$ for fixed $\psi$, and hence show that the profile log-likelihood for $\psi$ is identical to $\ell(\psi; t)$ above.
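A short simulation is a useful check on the final identity. The sketch below profiles $\lambda$ out numerically rather than in closed form, and uses the transformation $T = X/Y$, $U = Y$ from above; the values of $m$, $n$, $\psi$ and $\lambda$ are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gamma

m, n, psi_true, lam_true = 5, 7, 2.0, 1.5
rng = np.random.default_rng(1)
x = rng.exponential(1 / (psi_true * lam_true), size=m).sum()   # X = sum of the X_i
y = rng.exponential(1 / lam_true, size=n).sum()                # Y = sum of the Y_i
t = x / y                                                      # T = X / Y

def marginal(psi):
    """Marginal log-likelihood l(psi; t) based on T alone."""
    return m * np.log(psi) - (m + n) * np.log(psi * t + 1)

def profile(psi):
    """Profile log-likelihood: maximise the joint (X, Y) log-likelihood over lambda."""
    def neg_joint(lmbda):
        return -(gamma.logpdf(x, m, scale=1 / (psi * lmbda)) +
                 gamma.logpdf(y, n, scale=1 / lmbda))
    return -minimize_scalar(neg_joint, bounds=(1e-6, 100.0), method="bounded").fun

for psi in [0.5, 1.0, 2.0, 4.0]:
    print(psi, profile(psi) - marginal(psi))   # the difference is constant in psi
```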
6 Describe the Wald, score and likelihood ratio tests for hypotheses concerning a multidimensional parameter $\theta$. Explain briefly how they can be used to construct confidence regions for $\theta$ of approximate $(1 - \alpha)$-level coverage.
Let $Y_0, Y_1, \ldots, Y_n$ be a sequence of random variables such that $Y_0$ has a Poisson distribution with mean $\theta$ and, for $i \geq 1$, conditional on $Y_0, \ldots, Y_{i-1}$, the random variable $Y_i$ has a Poisson distribution with mean $\theta Y_{i-1}$. The parameter $\theta$ satisfies $0 < \theta \leq 1$. Find the log-likelihood for $\theta$, and show that the maximum likelihood estimator, $\hat{\theta} = \hat{\theta}(Y_0, Y_1, \ldots, Y_n)$, may be expressed as $\hat{\theta} = \min(\tilde{\theta}, 1)$, where $\tilde{\theta} = \tilde{\theta}(Y_0, Y_1, \ldots, Y_n)$ is a function which should be specified.
For $\theta \in (0, 1)$, compute the Fisher information $i(\theta)$, and show that
$$i(\theta) \leq \frac{1}{\theta(1 - \theta)}$$
for all $n$.
Deduce that the Wald statistic for testing $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$, where $0 < \theta_0 < 1$, does not have an asymptotic chi-squared distribution under the null hypothesis.
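The failure is visible in simulation. The sketch below assumes the unrestricted estimator takes the ratio form $\tilde{\theta} = \sum_{i=0}^{n} Y_i \big/ \big(1 + \sum_{i=0}^{n-1} Y_i\big)$, one natural candidate for the function $\tilde{\theta}$ asked for above (an assumption of this sketch). The intuition: for $\theta < 1$ the chain is a subcritical branching process and dies out, so the information stays bounded and $\hat{\theta}$ never concentrates.

```python
import numpy as np

rng = np.random.default_rng(2)

def theta_tilde(theta, n):
    """Simulate Y_0, ..., Y_n and return the assumed ratio estimator."""
    y = np.empty(n + 1)
    y[0] = rng.poisson(theta)                    # Y_0 ~ Poisson(theta)
    for i in range(1, n + 1):
        y[i] = rng.poisson(theta * y[i - 1])     # Y_i | Y_{i-1} ~ Poisson(theta Y_{i-1})
    return y.sum() / (1.0 + y[:-1].sum())

theta0, reps = 0.5, 2000
for n in [10, 100, 500]:
    est = np.array([min(theta_tilde(theta0, n), 1.0) for _ in range(reps)])
    print(n, est.std())   # the spread does not shrink with n: theta_hat is not consistent,
                          # so the Wald statistic cannot have a chi-squared(1) limit
```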
Section B
7 i) Suppose $(Y \mid U = u)$ has a Poisson distribution with mean $\mu u$, and $U$ has probability density function $f(u)$, where
$$f(u) = \theta^{\theta} u^{\theta - 1} e^{-\theta u} / \Gamma(\theta), \qquad u \geq 0.$$
Show that
a) $E(Y) = \mu$, $\mathrm{var}(Y) = \mu + \mu^2/\theta$;
b) $Y$ has frequency function
$$g(y \mid \mu) = \frac{\Gamma(\theta + y)\, \mu^y\, \theta^{\theta}}{\Gamma(\theta)\, y!\, (\mu + \theta)^{\theta + y}}, \qquad y = 0, 1, 2, \ldots.$$
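Both parts of i) are easy to check by Monte Carlo. The sketch below also uses the identification (an observation of this sketch, not stated in the question) that $g(\cdot \mid \mu)$ is a negative binomial frequency function with success probability $\theta/(\mu + \theta)$; the parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import nbinom

mu, theta = 3.0, 2.0
rng = np.random.default_rng(3)
u = rng.gamma(shape=theta, scale=1 / theta, size=500_000)   # U has density f(u); E(U) = 1
y = rng.poisson(mu * u)                                     # Y | U = u ~ Poisson(mu u)

print(y.mean(), mu)                      # a) E(Y) = mu
print(y.var(), mu + mu**2 / theta)       # a) var(Y) = mu + mu^2 / theta
p = theta / (mu + theta)                 # b) g(. | mu) as a negative binomial pmf
for k in range(4):
    print((y == k).mean(), nbinom.pmf(k, theta, p))
```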
ii) If $Y_1, \ldots, Y_n$ are independent observations, and $Y_i$ has frequency function $g(y_i \mid \mu_i)$, where $\log \mu_i = \beta x_i$, and $x_1, \ldots, x_n$ are given, describe how to estimate $\beta$ in the case where $\theta$ is a known parameter, and derive the asymptotic distribution of your estimator.
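One concrete route for ii), sketched on simulated data, is Fisher scoring in the single parameter $\beta$; the score and information in the comments are what differentiating $\log g$ with respect to $\beta$ yields, and the final line prints $\hat{\beta}$ together with the standard error $i(\hat{\beta})^{-1/2}$ suggested by the asymptotic normal distribution.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, beta_true, n = 2.0, 0.3, 2000
x = rng.uniform(-1.0, 1.0, size=n)
mu = np.exp(beta_true * x)
y = rng.poisson(mu * rng.gamma(theta, 1 / theta, size=n))   # responses with pmf g(y_i | mu_i)

beta = 0.0
for _ in range(25):                                      # Fisher scoring iterations
    m = np.exp(beta * x)                                 # current mu_i = exp(beta x_i)
    score = (theta * x * (y - m) / (m + theta)).sum()    # dl/dbeta
    info = (theta * x**2 * m / (m + theta)).sum()        # Fisher information i(beta)
    beta += score / info
print(beta, 1 / np.sqrt(info))   # beta_hat and its asymptotic standard error
```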
8 Let $Y_1, \ldots, Y_n$ be independent variables, such that
$$Y = \mu \mathbf{1} + X\beta + \varepsilon,$$
where $X$ is a given $n \times p$ matrix of rank $p$, $\beta$ is an unknown vector of dimension $p$, $\mu$ is an unknown constant, and $\mathbf{1}$ is the $n$-dimensional vector with every element 1. Assume that $X^T \mathbf{1} = 0$, and that $\varepsilon \sim N(0, \sigma^2 I)$, where $\sigma^2$ is unknown.
i) Derive an expression for $\hat{\beta}$, the least squares estimator of $\beta$, and derive its distribution.
ii) How would you test $H_0: \beta = 0$?
iii) How would you check the assumption $\varepsilon \sim N(0, \sigma^2 I)$?
(You may quote any standard theorems needed.)
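Parts i) and ii) can be rendered compactly in code. The sketch below simulates data satisfying the stated assumptions and applies the standard normal-theory $F$ test of $H_0: \beta = 0$; for iii), one would typically inspect the residuals $y - \hat{\mu}\mathbf{1} - X\hat{\beta}$, for example with a normal quantile plot.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(5)
n, p, sigma = 100, 3, 1.5
X = rng.normal(size=(n, p))
X -= X.mean(axis=0)                                  # enforce X^T 1 = 0
beta, mu = np.array([0.5, 0.0, -0.3]), 2.0
y = mu + X @ beta + rng.normal(scale=sigma, size=n)  # Y = mu 1 + X beta + eps

mu_hat = y.mean()                                    # least squares estimate of mu
beta_hat = np.linalg.solve(X.T @ X, X.T @ (y - mu_hat))   # (X^T X)^{-1} X^T y
rss = ((y - mu_hat - X @ beta_hat)**2).sum()
s2 = rss / (n - p - 1)                               # unbiased estimate of sigma^2

F = (beta_hat @ X.T @ X @ beta_hat / p) / s2         # F ~ F_{p, n-p-1} under H_0: beta = 0
print(beta_hat, F, 1 - f_dist.cdf(F, p, n - p - 1))  # estimate, F statistic, p-value
```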
9 What is meant by an improper prior in a Bayesian analysis?
Let $X_1, \ldots, X_n$ be independent identically distributed $N(\mu, \sigma^2)$, with both $\mu$ and $\sigma^2$ unknown. Suppose that $\mu$ and $\sigma^2$ are given independent prior densities. Show that in the case of the improper prior $\pi(\mu) \propto 1$ for $\mu$, the marginal posterior density of $\sigma^2$ depends only on the sample variance $s^2 = (n - 1)^{-1} \sum_{i=1}^n (x_i - \bar{x})^2$.
Show further that in the case of improper priors $\pi(\mu) \propto 1$, $\pi(\sigma^2) \propto \sigma^{-2}$, the posterior distribution of $\sigma^2$ is that of $(n - 1)s^2/V$, where $V \sim \chi^2_{n-1}$.
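A quick simulation check of the final claim: draws of $\sigma^2$ generated as $(n-1)s^2/V$ with $V \sim \chi^2_{n-1}$ should match the inverse-gamma distribution with shape $(n-1)/2$ and scale $(n-1)s^2/2$, which is the same law written directly; the data set below is arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(loc=1.0, scale=2.0, size=30)     # an arbitrary fixed data set
n, s2 = len(x), x.var(ddof=1)                   # sample variance s^2

v = rng.chisquare(n - 1, size=200_000)
sigma2_draws = (n - 1) * s2 / v                 # posterior draws via (n - 1) s^2 / V

ig = stats.invgamma(a=(n - 1) / 2, scale=(n - 1) * s2 / 2)   # the same law, written directly
for q in [0.1, 0.5, 0.9]:
    print(np.quantile(sigma2_draws, q), ig.ppf(q))           # the quantiles agree
```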