


This exam paper is for an Econometrics course.
Department of Economics University of California, Berkeley
August 2010
Instructions: Answer THREE of the following four questions (one hour each).
\[
M_1:\ g(x_1, x_2) = g_1(x_1) + g_2(x_2); \qquad M_2:\ g(x_1, x_2) = g_1(x_1)\, g_2(x_2),
\]
where both \(g_1\) and \(g_2\) are strictly positive with many continuous derivatives. The object of interest is \(g_1(x)\) (or perhaps its logarithm) in either case.
(a) For model \(M_1\), suppose you are given the function
\[
h(x) \equiv \int g(x, u)\, f(u)\, du,
\]
where \(f(u)\) is some known density function (with support strictly inside the support of \(X_2\)). Under what additional restriction(s) on the true function \(g_2(x)\) is the function \(g_1(x)\) identified? Is this just a normalization (that is, there are always \(g_1\) and \(g_2\) functions satisfying the restriction(s) for any \(g(x_1, x_2)\)), or do the restriction(s) rule out certain possible \(g(x_1, x_2)\) functions that satisfy the separability restriction?
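For intuition, the identification problem in part (a) can be illustrated numerically. The sketch below (all functional forms are hypothetical choices, not taken from the exam) shows that under additivity, \(h(x)\) differs from \(g_1(x)\) by the constant \(\int g_2(u) f(u)\, du\) at every \(x\), so \(g_1\) is recovered only once a restriction on \(g_2\) pins that constant down:

```python
import numpy as np

# Illustrative additive model (these g1, g2 are hypothetical choices,
# not from the exam): both strictly positive and smooth.
g1 = lambda x: np.exp(-x**2)
g2 = lambda x: 1.0 + 0.5 * np.sin(x)

# Known weighting density f(u) on [-1, 1] (support strictly inside
# the support of X2, here taken to be the whole real line).
u = np.linspace(-1.0, 1.0, 2001)
du = u[1] - u[0]
f = np.exp(-u**2 / 2)
f /= f.sum() * du                      # normalize so f integrates to 1

# Under additivity, h(x) = ∫ g(x, s) f(s) ds = g1(x) + ∫ g2(s) f(s) ds,
# so h differs from g1 only by the constant E_f[g2(U)].
const = (g2(u) * f).sum() * du
x_grid = np.linspace(-2.0, 2.0, 5)
h = np.array([((g1(x) + g2(u)) * f).sum() * du for x in x_grid])

# The gap h(x) - g1(x) is the same constant at every x: g1 is identified
# only once a restriction on g2 pins that constant down.
print(np.allclose(h - g1(x_grid), const))  # True
```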
(b) Repeat the exercise in part (a) for the model \(M_2\).
(c) Suppose that, for a random sample of size \(n\), you are given a kernel estimator \(\hat g(x_1, x_2)\) of \(g(x_1, x_2)\), based on a bivariate kernel \(K(u_1, u_2)\) and bandwidth sequence \(h_n\), that is asymptotically normal, converging at the (undersmoothed) rate \(\sqrt{n h_n^2}\), i.e.
\[
\sqrt{n h_n^2}\,(\hat g - g) \to_d N(0, V_0)
\]
for the usual \(V_0 > 0\). For model \(M_1\) and the restriction(s) derived for part (a), propose a consistent estimator of \(g_1(x)\), and indicate its rate of convergence under "suitable regularity conditions." You should not attempt to rigorously derive this convergence rate, but should give some justification for your claim.
(d) Again, repeat the exercise in part (c) for model \(M_2\) and the restriction(s) derived for part (b).
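One natural estimator for part (c) is the marginal-integration (partial-mean) construction \(\hat h(x) = \int \hat g(x, u) f(u)\, du\): averaging the bivariate kernel estimate over its second argument smooths out that direction, which is the heuristic reason \(\hat h\) can converge at a univariate rather than bivariate rate. A minimal simulation sketch, with an illustrative data-generating process and a plain Nadaraya-Watson estimator (all specifics here are assumptions for the sketch, not the exam's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative additive DGP: g(x1, x2) = exp(-x1^2) + (1 + 0.5 x2).
n = 2000
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = np.exp(-x1**2) + (1.0 + 0.5 * x2) + 0.1 * rng.standard_normal(n)

h = n ** (-1 / 6)  # a bandwidth of the usual bivariate order

def ghat(a, b):
    """Bivariate Nadaraya-Watson estimate of g(a, b), Gaussian product kernel."""
    w = np.exp(-0.5 * ((x1 - a) / h) ** 2) * np.exp(-0.5 * ((x2 - b) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

# Marginal integration: hhat(x) = ∫ ghat(x, u) f(u) du with f uniform on
# [-0.5, 0.5], whose support lies strictly inside that of X2.
u = np.linspace(-0.5, 0.5, 41)
hhat = lambda a: np.mean([ghat(a, b) for b in u])

# Estimates g1(0) + E_f[g2(U)] = 2 here, up to smoothing bias and noise.
est = hhat(0.0)
print(est)
```

Subtracting the constant pinned down by the part (a) restriction then yields an estimate of \(g_1(x)\) itself.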
\(\{(y_t, x_t)' : 1 \le t \le T\}\) is an observed time series generated by the cointegrated system
\[
y_t = \beta x_t + u_t,
\]
where
\[
\begin{pmatrix} u_t \\ \Delta x_t \end{pmatrix} \sim \text{i.i.d. } N(0, I_2),
\]
with initial condition \(x_0 = 0\). A researcher wants to estimate the scalar parameter \(\beta\), which is assumed to be non-zero. Let \(\hat\beta\) denote the OLS estimator of \(\beta\).
(a) Characterize the limiting distribution (after appropriate centering and rescaling) of \(\hat\beta\).
(b) Characterize the limiting distribution of the reverse regression estimator
\[
\hat\gamma = \frac{\sum_{t=1}^T y_t x_t}{\sum_{t=1}^T y_t^2}.
\]
Let \(\tilde\beta = 1/\hat\gamma\).
(c) Is \(\tilde\beta\) a consistent estimator of \(\beta\)?
(d) Characterize the limiting distribution (after appropriate centering and rescaling) of \(\tilde\beta\). Is \(\tilde\beta\) asymptotically equivalent to \(\hat\beta\)?
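A Monte Carlo sketch of this system (the independent standard-normal errors are an assumption made only for this sketch) makes the super-consistency in parts (a)-(d) visible: both the OLS and reverse-regression-based estimators have errors shrinking at rate \(1/T\) rather than \(1/\sqrt{T}\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch of the cointegrated system; the error distribution
# (independent standard normals) is an assumption for this sketch only.
beta, T, reps = 2.0, 500, 200
err_ols = np.empty(reps)
err_rev = np.empty(reps)

for r in range(reps):
    x = np.cumsum(rng.standard_normal(T))      # random walk with x_0 = 0
    y = beta * x + rng.standard_normal(T)
    beta_hat = np.sum(x * y) / np.sum(x * x)   # OLS of y_t on x_t
    gamma_hat = np.sum(y * x) / np.sum(y * y)  # reverse regression of x_t on y_t
    beta_til = 1.0 / gamma_hat
    err_ols[r] = beta_hat - beta
    err_rev[r] = beta_til - beta

# Both errors are tiny even at T = 500: T * error stays O_p(1), so the raw
# errors shrink at rate 1/T (super-consistency).
print(np.mean(np.abs(err_ols)), np.mean(np.abs(err_rev)))
```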
\[
Y = X\theta(\tau) + U, \tag{1}
\]
\[
\Pr{}_{Y|X}\!\left(Y \le X\theta(\tau) \mid X\right) = \tau \tag{2}
\]
for some \(\tau \in (0,1)\), and \(\theta(\tau) \in \Theta = \mathbb{R}\). Cast this problem as an M-estimation problem \(\min_{\theta} \sum_{i=1}^n \rho_\tau(Y_i - X_i \theta)\) using \(\rho_\tau(u) = u(\tau - 1\{u \le 0\})\), and let
\[
\hat\theta(\tau) = \arg\min_{\theta \in \Theta} \sum_{i=1}^n \rho_\tau(Y_i - X_i \theta). \tag{3}
\]
To solve the questions below, you might need to assume: \(E[X^2] < \infty\); \(F_{U|X}(u) \equiv \Pr{}_{U|X}(U \le u \mid X)\) is continuously differentiable with bounded second derivative; and \(\infty > C \ge f_{U|X}(0) \ge c > 0\) for all \(U\) and \(X\).
(a) Define
\[
Z_n(\delta) \equiv \sum_{i=1}^n \left[ \rho_\tau\!\left(U_i - X_i \delta / \sqrt{n}\right) - \rho_\tau(U_i) \right].
\]
Show that (i) \(Z_n(\delta)\) is convex, and (ii) it has minimizer \(\hat\delta_n = \sqrt{n}\,(\hat\theta(\tau) - \theta(\tau))\).
(b) Show that, for all \(\delta\),
\[
Z_{1n}(\delta) \equiv -\delta\, \frac{1}{\sqrt{n}} \sum_{i=1}^n X_i\, \psi_\tau(U_i) \Rightarrow N(0, \delta^2 D_0),
\]
with \(\psi_\tau(u) = \tau - 1\{u \le 0\}\) and \(D_0 = \tau(1-\tau)\, E[X^2]\).
(c) Show that, for all \(\delta\),
\[
Z_{2n}(\delta) \equiv \sum_{i=1}^n \int_0^{X_i \delta / \sqrt{n}} \left( 1\{U_i \le s\} - 1\{U_i \le 0\} \right) ds = 0.5\, f_{U|X}(0)\, \delta^2\, E[X^2] + o_p(1).
\]
Hint: Show that the expectation converges to \(0.5\, f_{U|X}(0)\, \delta^2\, E[X^2]\), and then show that the variance of \(Z_{2n}(\delta)\) goes to zero.
(d) Show that
\[
\sqrt{n}\,(\hat\theta(\tau) - \theta(\tau)) \Rightarrow N\!\left(0,\; \frac{\tau(1-\tau)}{f_{U|X}(0)^2\, E[X^2]}\right).
\]
Hint: (1) You may use Knight's identity:
\[
\rho_\tau(u - v) - \rho_\tau(u) = -v\, \psi_\tau(u) + \int_0^v \left( 1\{u \le s\} - 1\{u \le 0\} \right) ds.
\]
(2) You may use the following fact without showing it: If, for all \(\delta\), \(Z_n(\delta) \Rightarrow Z_0(\delta)\), then \(\arg\inf Z_n(\delta) \Rightarrow \arg\inf Z_0(\delta)\), provided \(Z_n(\delta)\) is convex.
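The estimator in (3) and the part (d) variance formula can be sketched in a few lines. The data-generating process below (\(U\) standard normal and independent of \(X\), \(\tau = 0.5\)) is an assumption for this sketch only; a grid search is used as a transparent stand-in for a proper convex solver, which the convexity shown in part (a) justifies.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of the scalar quantile-regression M-estimator in (3). The DGP
# (U standard normal, independent of X) is an assumption for this sketch.
tau, theta0, n = 0.5, 1.0, 5000
X = rng.uniform(0.5, 1.5, n)
U = rng.standard_normal(n)          # median zero, so theta(0.5) = theta0
Y = X * theta0 + U

def check_loss(th):
    """Objective in (3): sum of rho_tau(Y_i - X_i * th)."""
    u = Y - X * th
    return np.sum(u * (tau - (u <= 0)))

# The objective is convex in th, so a fine grid search suffices here.
grid = np.linspace(0.5, 1.5, 2001)
theta_hat = grid[np.argmin([check_loss(t) for t in grid])]

# Asymptotic variance from part (d): tau(1 - tau) / (f_{U|X}(0)^2 E[X^2]).
f0 = 1.0 / np.sqrt(2.0 * np.pi)     # standard normal density at zero
avar = tau * (1.0 - tau) / (f0**2 * np.mean(X**2))
print(theta_hat, np.sqrt(avar / n))  # estimate and its asymptotic std. error
```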