Economics 310

Handout # VII The Normal Distribution and Properties of

Estimators

Miscellaneous Notes

Multivariate Normal.
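The display that originally followed this heading did not survive extraction. For reference, the standard result this heading names: if a $k \times 1$ random vector $x$ is multivariate normal with mean vector $\mu$ and positive definite covariance matrix $\Sigma$, its density is

```latex
% Density of the k-variate normal distribution N(mu, Sigma)
% (standard result, supplied here for reference).
\[
f(x) = (2\pi)^{-k/2} \, |\Sigma|^{-1/2}
       \exp\!\left[ -\tfrac{1}{2}(x - \mu)' \Sigma^{-1} (x - \mu) \right].
\]
```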

Properties of Estimators

I. Unbiased: Let $\hat{\theta}$ be an estimator of $\theta$. Then $\hat{\theta}$ is an unbiased estimator if $E(\hat{\theta}) = \theta$. (Note: this definition applies equally well whether $\theta$ is a scalar or a vector.) We define the bias of an estimator as $\text{Bias}(\hat{\theta}) = E(\hat{\theta}) - \theta$.
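To make the definition concrete, here is a minimal simulation sketch (not part of the original handout; the distribution, parameter values, and sample sizes are illustrative). It approximates $E(\hat{\theta})$ by averaging each estimator over many simulated samples, showing that the sample mean is unbiased for $\mu$ while the variance estimator that divides by $n$ is biased.

```python
import numpy as np

# Minimal simulation sketch (not from the handout): approximate E(theta-hat)
# by averaging each estimator over many simulated samples from N(5, 4).
# Distribution, parameters, and sample size are chosen only for illustration.
rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 20, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)               # sample mean: unbiased for mu
s2_n = samples.var(axis=1, ddof=0)        # divides by n: biased for sigma^2
s2_n1 = samples.var(axis=1, ddof=1)       # divides by n-1: unbiased

print("E(xbar)       ~", xbar.mean())     # close to mu = 5
print("E(s^2, 1/n)   ~", s2_n.mean())     # close to (n-1)/n * 4 = 3.8
print("E(s^2, 1/(n-1))~", s2_n1.mean())   # close to sigma^2 = 4
```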

II. Efficiency: Let $\hat{\theta}$ be an estimator of $\theta$. Then the mean squared error of $\hat{\theta}$ is

$\text{MSE}(\hat{\theta}) = E[(\hat{\theta} - \theta)^2]$ if $\theta$ is a scalar;

$\text{MSE}(\hat{\theta}) = E[(\hat{\theta} - \theta)(\hat{\theta} - \theta)']$ if $\theta$ is a vector.

Definition: Let $\hat{\theta}_1$ and $\hat{\theta}_2$ be two estimators of $\theta$. Then $\hat{\theta}_1$ is a more efficient estimator than $\hat{\theta}_2$ if $\text{MSE}(\hat{\theta}_1) < \text{MSE}(\hat{\theta}_2)$ if $\theta$ is a scalar, and if $\text{MSE}(\hat{\theta}_2) - \text{MSE}(\hat{\theta}_1)$ is non-negative definite and $\text{MSE}(\hat{\theta}_2) \neq \text{MSE}(\hat{\theta}_1)$ if $\theta$ is a vector.

Note: If $\hat{\theta}_1$ and $\hat{\theta}_2$ are two unbiased estimators of $\theta$, these definitions can be stated in terms of variances: $\text{Var}(\hat{\theta}_1) < \text{Var}(\hat{\theta}_2)$ if $\theta$ is a scalar; $\text{Var}(\hat{\theta}_2) - \text{Var}(\hat{\theta}_1)$ is non-negative definite and $\text{Var}(\hat{\theta}_2) \neq \text{Var}(\hat{\theta}_1)$ if $\theta$ is a vector. In the vector case this can be seen to imply that $\text{Var}(\hat{\theta}_{1i}) \leq \text{Var}(\hat{\theta}_{2i})$ for each element $i$, since the diagonal elements of a non-negative definite matrix are non-negative.
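A minimal sketch (illustrative choices throughout, not from the handout) comparing the MSE of two unbiased estimators of a normal mean, the sample mean and the sample median. Both are unbiased by symmetry, so the MSE comparison reduces to a variance comparison, and the sample mean should win.

```python
import numpy as np

# Minimal sketch: compare the MSE of two unbiased estimators of a normal
# mean, the sample mean and the sample median. Both are unbiased by
# symmetry, so the MSE comparison reduces to variance.
rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.0, 1.0, 50, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
mse_mean = ((samples.mean(axis=1) - mu) ** 2).mean()
mse_median = ((np.median(samples, axis=1) - mu) ** 2).mean()

print("MSE(mean)   ~", mse_mean)    # about sigma^2/n = 0.020
print("MSE(median) ~", mse_median)  # about (pi/2)*sigma^2/n ~ 0.031
```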

Definition: Cramer-Rao Lower Bound. Let $\hat{\theta}$ be any unbiased estimator of $\theta$. Then the information matrix of the sample is defined as follows:

$I(\theta) = -E\left[ \dfrac{\partial^2 \ln L}{\partial \theta \, \partial \theta'} \right]$

$[I(\theta)]^{-1}$ is called the Cramer-Rao Lower Bound (CRLB) for any unbiased estimator of $\theta$. This means that for any unbiased estimator $\hat{\theta}$, $\text{Var}(\hat{\theta}) - \text{CRLB}$ is nonnegative definite.

III. Asymptotic Properties of Estimators.

Definition: A sequence of random variables $\{x_n\}$ is said to converge in probability to a constant $c$ if $\lim_{n \to \infty} P(|x_n - c| > \varepsilon) = 0$ for every $\varepsilon > 0$. (We use the notation $\text{plim}(x_n) = c$.)

Definition: An estimator $\hat{\theta}_n$ is said to be a consistent estimator of $\theta$ if $\text{plim}(\hat{\theta}_n) = \theta$.

Theorem: Let $\text{plim}(x_n) = c$ and let $g(\cdot)$ be a continuous function. Then $\text{plim}[g(x_n)] = g(c)$.

Theorem: A sufficient condition for an estimator to be consistent is that its bias and variance each approach a limit of 0 as $n$ approaches infinity.
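A minimal sketch of this sufficient condition (population and sample sizes chosen for illustration): the sample mean of a chi-square(3) population has bias 0 and variance $6/n \to 0$, so it is consistent; both quantities visibly shrink as $n$ grows.

```python
import numpy as np

# Minimal sketch: the sample mean of a chi-square(3) population is unbiased
# (bias 0) and has variance Var(x)/n = 6/n -> 0, so by the sufficient
# condition above it is consistent.
rng = np.random.default_rng(2)
true_mean, reps = 3.0, 20_000               # E[chi2(3)] = 3, Var[chi2(3)] = 6

for n in (10, 100, 1000):
    xbar = rng.chisquare(3, size=(reps, n)).mean(axis=1)
    print(f"n={n:>4}  bias~{xbar.mean() - true_mean:+.4f}  var~{xbar.var():.5f}")
```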

Definition: Let $Z = \{z_{ij}\}$ be a matrix of random variables, each of which has a probability limit. Then $\text{plim}\, Z = \{\text{plim}(z_{ij})\}$; that is, the probability limit is taken element by element.

Theorem: Let $A$ and $B$ be two matrices such that $\text{plim}\, A$, $\text{plim}\, B$, and the product $AB$ exist. Then $\text{plim}\, AB = (\text{plim}\, A)(\text{plim}\, B)$.

Theorem: $\text{plim}\, A^{-1} = [\text{plim}\, A]^{-1}$, provided $\text{plim}\, A$ is nonsingular.
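A minimal numerical sketch of these two matrix theorems (the moment matrix and sample sizes are illustrative): the sample moment matrix $A = (1/n)X'X$ converges element by element to $E[xx']$, so its inverse converges to $[E[xx']]^{-1}$.

```python
import numpy as np

# Minimal sketch: A = (1/n) X'X converges in probability, element by
# element, to E[x x'], and by the theorems above its inverse converges
# to [E[x x']]^{-1}. Numbers are illustrative.
rng = np.random.default_rng(3)
true_moment = np.array([[1.0, 0.5],
                        [0.5, 2.0]])        # E[x x'] for a mean-zero x

for n in (100, 10_000, 1_000_000):
    X = rng.multivariate_normal(np.zeros(2), true_moment, size=n)
    A = X.T @ X / n                          # plim A = E[x x']
    print(f"n={n:>7}  inv(A) ~\n{np.linalg.inv(A).round(3)}")

print("target [E[xx']]^-1:\n", np.linalg.inv(true_moment).round(3))
```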

Convergence in distribution: Let $\{x_n\}$, $n = 1, 2, \ldots$ be a sequence of random variables. Let $\{F_n\}$, $n = 1, 2, \ldots$ be the sequence of cumulative distribution functions (CDFs) of the random variables $\{x_n\}$. This simply means that $P(x_n \leq a) = F_n(a)$, $n = 1, 2, \ldots$. The sequence of random variables $x_n$ is said to converge in distribution to a random variable $x$ with cumulative distribution function $F$ if $\lim_{n \to \infty} F_n(a) = F(a)$ at all points $a$ where $F$ is continuous. Alternatively, we can say that $x_n$ converges in distribution to a random variable $x$ if $\lim_{n \to \infty} P(x_n \leq a) = F(a)$ for every $a$ at which $F$ is continuous.

A familiar example of convergence in distribution is given by the central limit theorem, which states that for any underlying population with finite mean $\mu$ and variance $\sigma^2$ the distribution of

$z_n = \dfrac{\sqrt{n}(\bar{x}_n - \mu)}{\sigma}$

converges to a standard normal distribution.
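A minimal sketch of the central limit theorem in action (population and sizes are illustrative): standardized means of a skewed exponential population behave like $N(0,1)$ even though the population itself is far from normal.

```python
import numpy as np

# Minimal sketch of the CLT: standardized means of a skewed exponential(1)
# population (mu = sigma = 1) behave like N(0,1) even though the
# population itself is far from normal.
rng = np.random.default_rng(4)
n, reps = 200, 100_000

xbar = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (xbar - 1.0) / 1.0          # sqrt(n)(xbar - mu)/sigma

print("mean      ~", z.mean())               # close to 0
print("variance  ~", z.var())                # close to 1
print("P(z<1.96) ~", (z < 1.96).mean())      # close to 0.975
```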

If $x_n$ converges in distribution to a random variable $x$ with CDF $F(x)$, we say that $F(x)$ is the limiting distribution of $x_n$.

Asymptotic distributions: Most of the estimators in which we are interested in this course have degenerate limiting distributions, which is to say that in the limit the distribution collapses around a point. This is not a very useful property if we want to compare the asymptotic behaviors of two or more estimators. For example, we might have two estimators, both of which are consistent. Both distributions would collapse around the true parameter value, so the limiting distributions alone cannot distinguish between them. Instead we rescale the estimator, typically by $\sqrt{n}$, so that the resulting sequence converges in distribution to a non-degenerate random variable; this is the estimator's asymptotic distribution.
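A minimal sketch of this contrast (sizes are illustrative): the variance of $\bar{x}$ collapses to 0 as $n$ grows, while the rescaled quantity $\sqrt{n}(\bar{x} - \mu)$ keeps a stable, non-degenerate spread.

```python
import numpy as np

# Minimal sketch: the limiting distribution of xbar is degenerate -- its
# variance collapses to 0 -- while sqrt(n)(xbar - mu) keeps a stable,
# non-degenerate N(0, sigma^2) spread. The latter is what we compare.
rng = np.random.default_rng(5)
mu, sigma, reps = 0.0, 1.0, 20_000

for n in (10, 100, 1000):
    xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    z = np.sqrt(n) * (xbar - mu)
    print(f"n={n:>4}  Var(xbar)~{xbar.var():.5f}  "
          f"Var(sqrt(n)(xbar-mu))~{z.var():.3f}")
```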

Example: Let $x_1, x_2, \ldots, x_n$ be a random sample from a normal population with mean $\mu$ and variance $\sigma^2$. Then the log of the likelihood function of the sample is

$\ln L(\mu, \sigma^2) = -\dfrac{n}{2} \ln(2\pi) - \dfrac{n}{2} \ln \sigma^2 - \dfrac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2$
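A minimal sketch tying this to maximum likelihood estimation (the data are simulated for illustration): maximizing this log-likelihood numerically recovers the closed-form MLEs $\hat{\mu} = \bar{x}$ and $\hat{\sigma}^2 = (1/n)\sum(x_i - \bar{x})^2$.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: maximize the log-likelihood above numerically and check
# it against the closed-form MLEs, mu-hat = xbar and
# sigma2-hat = (1/n) sum (x_i - xbar)^2. Data are simulated for illustration.
rng = np.random.default_rng(6)
x = rng.normal(10.0, 3.0, size=500)
n = x.size

def neg_loglik(params):
    mu, sigma2 = params
    if sigma2 <= 0:                  # keep the search in the valid region
        return np.inf
    return (0.5 * n * np.log(2 * np.pi * sigma2)
            + ((x - mu) ** 2).sum() / (2 * sigma2))

res = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
print("numeric  MLE (mu, sigma^2):", res.x)
print("analytic MLE (mu, sigma^2):", [x.mean(), x.var(ddof=0)])
```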