


Econometrics field examination, University of California, Berkeley, August 2009. The examination covers topics including maximum likelihood estimation, time series analysis, and instrumental variables estimation; the questions require applying theoretical concepts and deriving estimators and their properties.
(a) Assuming that $\varepsilon_i$ is normally distributed with zero mean and unknown variance $\sigma_0^2$, and is independent of $x_i$, derive the form of the average log-likelihood function for the unknown parameters of this problem and the form of the asymptotic distribution of the corresponding maximum likelihood estimator.

(b) Suppose that the parametric form of the error distribution is unknown. Propose a $\sqrt{n}$-consistent estimator of $\beta_0$, imposing a suitable stochastic restriction on the conditional distribution of $\varepsilon_i$ given $x_i$, and without imposing a scale normalization on $\beta_0$. If possible, give an expression for the asymptotic distribution of your estimator.

(c) Now suppose that $y_i^*$ is never observed, but only the range that it falls into is observed. More specifically, the dependent variable $y_i$ is now defined as
$$y_i \equiv t_i(y_i^*) = \begin{cases} 0 & \text{if } y_i^* \le 0, \\ 1 & \text{if } 0 < y_i^* \le L_i, \\ 2 & \text{if } L_i < y_i^* \le U_i, \\ 3 & \text{if } U_i < y_i^*. \end{cases}$$
Describe an alternative consistent estimator of $\beta_0$ under a semiparametric restriction on the conditional distribution of the errors given the regressors. Is a scale normalization on $\beta_0$ needed, or are all the components of $\beta_0$ (including the scale) identifiable under your restriction?
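As a numerical companion to part (c), the sketch below evaluates the average log-likelihood of the interval-coded outcome under normal errors. It assumes, hypothetically, the linear latent model $y_i^* = x_i'\beta + \varepsilon_i$ with $\varepsilon_i \sim N(0,1)$ (the unit variance standing in for the scale normalization the question asks about); the function name and the clipping safeguard are illustrative choices, not part of the exam.

```python
import numpy as np
from scipy.stats import norm

def interval_avg_loglik(beta, x, y, L, U):
    """Average log-likelihood for the interval-coded y of part (c), under the
    (assumed) latent model y* = x'beta + eps with eps ~ N(0, 1).
    x: (n, k) regressors; y: (n,) codes in {0, 1, 2, 3}; L, U: (n,) thresholds."""
    xb = x @ beta
    neg_inf = np.full_like(xb, -np.inf)
    pos_inf = np.full_like(xb, np.inf)
    zeros = np.zeros_like(xb)
    # Lower and upper endpoints of the interval that y_i's code corresponds to.
    lo = np.select([y == 0, y == 1, y == 2, y == 3], [neg_inf, zeros, L, U])
    hi = np.select([y == 0, y == 1, y == 2, y == 3], [zeros, L, U, pos_inf])
    # P(y_i = k | x_i) is a difference of normal CDFs at the two endpoints.
    p = norm.cdf(hi - xb) - norm.cdf(lo - xb)
    return np.mean(np.log(np.clip(p, 1e-300, None)))  # clip guards log(0)
```

In practice this would be maximized over $\beta$ with a generic optimizer (e.g., scipy.optimize.minimize on its negative), and a free scale parameter could be added to probe the identification question at the end of part (c).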
Suppose the scalar time series $\{y_t\}_{t=1}^T$ is generated by
$$y_t = \rho y_{t-1} + \varepsilon_t,$$
where $y_0 = 0$ and $\varepsilon_t \sim \text{i.i.d. } N(0,1)$, while $\rho$ is an unknown parameter. As estimators of $\rho$, consider
$$\hat\rho = \frac{\sum_{t=1}^T y_{t-1} y_t}{\sum_{t=1}^T y_{t-1}^2} \qquad \text{and} \qquad \tilde\rho = \frac{\sum_{t=1}^T y_t y_{t-1}}{\sum_{t=1}^T y_t^2}.$$
Suppose $|\rho| < 1$.
(a) Show that $\hat\rho \to_p \rho$ and $\tilde\rho \to_p \rho$.

(b) Find the limiting distributions (after appropriate centering and rescaling) of $\hat\rho$ and $\tilde\rho$. Are $\hat\rho$ and $\tilde\rho$ asymptotically equivalent?

(c) Now suppose $\rho = 1$. Show that $\hat\rho \to_p 1$ and $\tilde\rho \to_p 1$.

(d) Again suppose $\rho = 1$. Find the limiting distributions (after appropriate centering and rescaling) of $\hat\rho$ and $\tilde\rho$. Are $\hat\rho$ and $\tilde\rho$ asymptotically equivalent?
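A small Monte Carlo makes parts (a) and (c) easy to check numerically, since both ratios can be computed from a single simulated path. A minimal sketch, with the sample size, replication count, and seed chosen arbitrarily:

```python
import numpy as np

def rho_hat_and_tilde(rho, T, rng):
    """Simulate y_t = rho * y_{t-1} + eps_t with y_0 = 0 and eps_t i.i.d. N(0,1),
    then return the two estimators defined above."""
    y = np.zeros(T + 1)
    eps = rng.standard_normal(T)
    for t in range(1, T + 1):
        y[t] = rho * y[t - 1] + eps[t - 1]
    ylag, ycur = y[:-1], y[1:]
    rho_hat = (ylag @ ycur) / (ylag @ ylag)    # denominator: sum of y_{t-1}^2
    rho_tilde = (ycur @ ylag) / (ycur @ ycur)  # denominator: sum of y_t^2
    return rho_hat, rho_tilde

rng = np.random.default_rng(0)
for rho in (0.5, 1.0):  # a stationary case and the unit-root case of part (c)
    draws = np.array([rho_hat_and_tilde(rho, T=2000, rng=rng) for _ in range(500)])
    print(rho, draws.mean(axis=0))  # both estimators concentrate near rho
```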
Consider the random-coefficient regression model
$$Y_i = B_i X_i + U_i,$$
where $Y_i$, $X_i$, $B_i$, and $U_i$ are scalar random variables ($Y_i$ and $X_i$ observable, $B_i$ and $U_i$ unobservable). The regressor $X_i$ is assumed to be continuously distributed with a density function $f(x)$ that is positive and smooth (i.e., lots of bounded derivatives) on the whole real line. The "error term" $U_i$ is assumed to be independent of $X_i$ with zero mean, variance $\sigma^2$, and all higher moments finite. All moments of the "random coefficient" $B_i$ are assumed to exist; the parameter of interest is its unconditional mean $\beta_0 \equiv E[B_i]$. However, $B_i$ is not assumed to be independent of $X_i$, and its conditional expectation $\beta(x) \equiv E[B_i \mid X_i = x]$ is smooth but not assumed to be constant in $x$.
(a) What is the probability limit of the classical least squares estimator
$$\hat{\hat\beta} = \frac{\sum_{i=1}^N Y_i X_i}{\sum_{i=1}^N X_i^2}\,?$$
Under what additional conditions (if any) will $\hat{\hat\beta}$ be a (mean-squared error) consistent estimator of $\beta_0$?

(b) An instrumental variables estimator of $\beta_0$, using $Z_i = 1/X_i$ as an instrumental variable for $X_i$, is
$$\hat\beta = \frac{1}{N} \sum_{i=1}^N \frac{Y_i}{X_i}.$$
Under what additional conditions (if any) will $\hat\beta$ be a (mean-squared) consistent estimator of $\beta_0$?

(c) A "trimmed" version of the IV estimator is
$$\tilde\beta = \frac{1}{N} \sum_{i=1}^N 1(|X_i| > h) \frac{Y_i}{X_i},$$
where $h = h_N$ is a deterministic sequence of constants. Under what additional conditions (if any) will $\tilde\beta$ be a (mean-squared) consistent estimator of $\beta_0$?

(d) What is the maximal rate of convergence of the mean-squared error of $\tilde\beta$ to zero? What assumptions on the sequence $h_N$ are needed to achieve that rate?
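The three estimators are simple to compute side by side, which makes the trade-offs in parts (a)-(d) easy to see on simulated data. In the sketch below, the data-generating process ($X$ standard normal, $B = 1 + X^2/2$ so that $\beta(x)$ is non-constant, and the bandwidth $h = N^{-1/4}$) is an illustrative assumption, not taken from the exam.

```python
import numpy as np

def rc_estimators(Y, X, h):
    """The three estimators of beta_0 = E[B_i] from parts (a)-(c)."""
    b_ls = np.sum(Y * X) / np.sum(X**2)    # (a) classical least squares
    b_iv = np.mean(Y / X)                  # (b) IV with Z_i = 1/X_i
    keep = np.abs(X) > h                   # (c) drop observations with small |X_i|
    b_trim = np.sum(np.where(keep, Y / X, 0.0)) / len(Y)
    return b_ls, b_iv, b_trim

rng = np.random.default_rng(0)
N = 200_000
X = rng.standard_normal(N)
B = 1.0 + 0.5 * X**2        # beta_0 = E[B] = 1.5, while E[B X^2] / E[X^2] = 2.5,
U = rng.standard_normal(N)  # so least squares is inconsistent for beta_0 here
Y = B * X + U
print(rc_estimators(Y, X, h=N**(-0.25)))
```

Because $f(0) > 0$, the summands $U_i/X_i$ in the untrimmed average are heavy-tailed, which is what the trimming in part (c) is designed to control; the choice of $h_N$ then trades this tail problem off against the bias from discarding observations, the subject of part (d).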