Econ203C Homework #6
Anton Cheremukhin
June 1, 2006
Exercise 1 GMM and the Three Tests
The hardest part is to obtain a stable estimate for both the unrestricted and the restricted versions.
We do this in six steps.
1) We find the unrestricted $\beta^{(1)}$ using an identity weight matrix and "fminsearch".
2) Given that, we compute the optimal weight matrix $V^{(2)}$ and find the corresponding $\beta^{(2)}$.
3) We repeat step 2 and obtain $V^{(3)}$ and the corresponding $\beta^{(3)}$.
4) We compute $A$ and $m_n(\beta^{(3)})$.
5) We find the restricted $\beta^{(5)}$ using $V^{(3)}$ as the weight and "fmincon".
6) We compute $m_n(\beta^{(5)})$ and $V^{(5)}$.
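
A minimal Python sketch of these six steps (the write-up itself uses MATLAB's fminsearch and fmincon; here scipy's Nelder-Mead and SLSQP stand in for them). The function `moments(beta)` is a hypothetical stand-in for the homework's moment conditions and is assumed to return the $n \times q$ matrix with rows $\varphi_i(\beta)$; the restriction $r(\beta) = \beta_2\beta_3 + \beta_4$ is the one used in the tests below.

```python
# Sketch of the six steps above.  "moments(beta)" is a hypothetical stand-in
# for the homework's moment conditions: it must return the (n x q) matrix
# whose rows are phi_i(beta).  Nelder-Mead and SLSQP play the roles of
# MATLAB's fminsearch and fmincon.
import numpy as np
from scipy.optimize import minimize

def m_and_V(moments, beta):
    phi = moments(beta)                              # (n x q)
    return phi.mean(axis=0), phi.T @ phi / len(phi)  # m_n(beta), V(beta)

def gmm_obj(beta, moments, V_inv):
    phi = moments(beta)
    m = phi.mean(axis=0)
    return len(phi) * m @ V_inv @ m                  # n * m_n' V^{-1} m_n

def restriction(beta):                               # r(beta) = b2*b3 + b4
    return beta[1] * beta[2] + beta[3]               # (beta has 4 elements)

def six_steps(moments, beta_start):
    q = moments(beta_start).shape[1]
    # 1) unrestricted beta^(1): identity weight, fminsearch analogue
    b1 = minimize(gmm_obj, beta_start, args=(moments, np.eye(q)),
                  method="Nelder-Mead").x
    # 2) optimal weight V^(2) at beta^(1), then beta^(2)
    _, V2 = m_and_V(moments, b1)
    b2 = minimize(gmm_obj, b1, args=(moments, np.linalg.inv(V2)),
                  method="Nelder-Mead").x
    # 3) repeat step 2: V^(3) and beta^(3)
    _, V3 = m_and_V(moments, b2)
    b3 = minimize(gmm_obj, b2, args=(moments, np.linalg.inv(V3)),
                  method="Nelder-Mead").x
    # 4) m_n(beta^(3)); A, the Jacobian of m_n, is sketched with the tests below
    m3, _ = m_and_V(moments, b3)
    # 5) restricted beta^(5): same weight V^(3), fmincon analogue (SLSQP)
    b5 = minimize(gmm_obj, b3, args=(moments, np.linalg.inv(V3)),
                  method="SLSQP",
                  constraints=[{"type": "eq", "fun": restriction}]).x
    # 6) m_n(beta^(5)) and V^(5)
    m5, V5 = m_and_V(moments, b5)
    return b3, b5, m3, m5, V3, V5
```

The SLSQP equality constraint in step 5 enforces $r(\beta) = 0$, mirroring what "fmincon" does in the original procedure.
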
After that we calculate the three tests:
$$W = n\, r'\left[R\left(A\,(V^{(3)})^{-1}A'\right)^{-1}R'\right]^{-1} r,$$
$$LM = n\, m_n(\beta^{(5)})'\,(V^{(3)})^{-1}A'\left(A\,(V^{(5)})^{-1}A'\right)^{-1}A\,(V^{(3)})^{-1}\, m_n(\beta^{(5)}),$$
$$LR = n\left(m_n(\beta^{(5)})'\,(V^{(3)})^{-1}\, m_n(\beta^{(5)}) - m_n(\beta^{(3)})'\,(V^{(3)})^{-1}\, m_n(\beta^{(3)})\right),$$
where $r = \beta_2\beta_3 + \beta_4$ and $R = [0,\ \beta_3,\ \beta_2,\ 1]$.
The reason we use $V^{(3)}$ is that it is the weight matrix we used when finding the minimum, i.e. the one we regard as 'optimal'. Besides, otherwise we often get negative values for the LR test.
The whole procedure is very sensitive to $V^{(3)}$, which is why we repeated the second step: to be sure
we are getting a good estimate of the 'optimal' weight matrix.
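
Continuing the sketch above, the three statistics can be computed from the pieces returned by `six_steps()`; the Jacobian $J = \partial m_n/\partial\beta'$ at $\beta^{(3)}$ (the text's $A'$) is approximated here by central differences, and `n` is the sample size.

```python
# Sketch of the three statistics, given the pieces returned by six_steps().
# J approximates the Jacobian d m_n / d beta' at beta^(3) by central
# differences; in the text's notation J corresponds to A'.
import numpy as np

def restriction_grad(beta):                  # R = [0, beta_3, beta_2, 1]
    return np.array([0.0, beta[2], beta[1], 1.0])

def jacobian(m_fn, beta, eps=1e-6):
    """Central-difference Jacobian of m_fn (a callable beta -> m_n(beta))."""
    cols = []
    for j in range(len(beta)):
        e = np.zeros(len(beta)); e[j] = eps
        cols.append((m_fn(beta + e) - m_fn(beta - e)) / (2 * eps))
    return np.column_stack(cols)             # shape (q x k)

def three_tests(n, beta3, m3, m5, V3, V5, J):
    V3i, V5i = np.linalg.inv(V3), np.linalg.inv(V5)
    Lam = np.linalg.inv(J.T @ V3i @ J)       # (A (V^(3))^{-1} A')^{-1}
    r = beta3[1] * beta3[2] + beta3[3]       # r(beta^(3))
    R = restriction_grad(beta3)
    W = n * r ** 2 / (R @ Lam @ R)
    mid = np.linalg.inv(J.T @ V5i @ J)       # (A (V^(5))^{-1} A')^{-1}
    LM = n * m5 @ V3i @ J @ mid @ J.T @ V3i @ m5
    LR = n * (m5 @ V3i @ m5 - m3 @ V3i @ m3)
    return W, LM, LR
```
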
The histograms are almost identical: one of them is depicted on the left. The graph on the
right compares the three empirical c.d.f.’s of the tests. They are almost indistinguishable. The
probabilities of type I error at the 95% level are all around 11-12%. The means differ a little,
which can be clearly seen in the graph.
                 Wald    LM     LR
mean             1.47    1.62   1.47
median           0.67    0.67   0.66
P(type I error)  0.109   0.118  0.108

Exercise 2 NLLS as an Example of GMM

$$y_i = g(x_i, \theta_0) + u_i, \qquad E(u_i \mid x_i) = 0.$$

1) $E\big[(y_i - g(x_i,\theta))^2\big] \to \min_\theta$. Use the law of iterated expectations:
$$\text{FOC:}\quad E\left[\tfrac{\partial}{\partial\theta} g(x_i,\theta)\,(y_i - g(x_i,\theta))\right] = E\left[\tfrac{\partial}{\partial\theta} g(x_i,\theta)\, E\big[(y_i - g(x_i,\theta)) \mid x_i\big]\right] = 0.$$
Since $E(u_i \mid x_i) = 0 \Leftrightarrow E(y_i \mid x_i) = g(x_i,\theta_0)$, the FOC becomes
$$E\left[\tfrac{\partial}{\partial\theta} g(x_i,\theta)\,\big(g(x_i,\theta_0) - g(x_i,\theta)\big)\right] = 0.$$
If $g(\cdot)$ is strictly monotone in $\theta$, this expression is satisfied iff $\theta = \theta_0$.
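
A quick concrete check of this step (the specific $g$ below is illustrative only, not part of the exercise): take $g(x,\theta) = e^{\theta x}$ with scalar $\theta$. Then
$$E\!\left[\tfrac{\partial}{\partial\theta} g(x_i,\theta)\,\big(g(x_i,\theta_0) - g(x_i,\theta)\big)\right] = E\!\left[x_i e^{\theta x_i}\big(e^{\theta_0 x_i} - e^{\theta x_i}\big)\right],$$
and for every $x_i \neq 0$ the integrand has the sign of $\theta_0 - \theta$, so (provided $P(x_i \neq 0) > 0$) the expectation can vanish only at $\theta = \theta_0$.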

2) The usual conditions for existence are continuity of $g(\cdot)$ and compactness of $\Theta$. For uniqueness we need to add: $\theta \neq \theta_0 \Rightarrow g(x,\theta) \neq g(x,\theta_0)$. Strict monotonicity in $\theta$ is one possibility.

The GMM formulation is
$$\hat\theta_n = \arg\min_\theta\; m_n(\theta)'\,(V_n)^{-1}\, m_n(\theta), \qquad m_n(\theta) = \frac{1}{n}\sum_i \varphi(y_i, x_i, \theta), \qquad \varphi(y_i, x_i, \theta) = z_i\,\big(y_i - g(x_i,\theta)\big).$$
Some simple ideas for instruments $z_{ji}$ include: $1,\ x_{1i},\ \ldots,\ x_{Ki}$.

3) Some additional simple ideas for instruments $z_{ji}$ include: $x_{1i}^2,\ \ldots,\ x_{Ki}^2$. We can use higher powers if needed; the justification would be a Taylor expansion of $\tfrac{\partial}{\partial\theta} g(x_i,\theta)$ around $\theta_0$. These are all instruments since $E\big[f(x_i)\,(y_i - g(x_i,\theta))\big] = 0$ only at $\theta_0$.
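
A short sketch of these moment conditions in code, continuing the Python used earlier; the exponential `g` below is only a placeholder, since the exercise leaves $g$ generic.

```python
# phi(y_i, x_i, theta) = z_i (y_i - g(x_i, theta)) with the instruments
# suggested above: a constant, the regressors, and optionally their squares.
# The exponential g() is a placeholder, not the homework's specification.
import numpy as np

def g(X, theta):
    return np.exp(X @ theta)                 # placeholder regression function

def instruments(X, add_squares=False):
    cols = [np.ones(len(X)), X]              # z_i = (1, x_1i, ..., x_Ki)
    if add_squares:
        cols.append(X ** 2)                  # add (x_1i^2, ..., x_Ki^2)
    return np.column_stack(cols)

def phi(theta, y, X, Z):
    """(n x q) matrix with rows z_i' (y_i - g(x_i, theta))."""
    return Z * (y - g(X, theta))[:, None]

def m_n(theta, y, X, Z):
    return phi(theta, y, X, Z).mean(axis=0)  # sample moment m_n(theta)
```
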

4) An optimal GMM estimator uses an optimal weight matrix:
$$\hat V_n = \frac{1}{n}\sum_i \varphi\big(y_i, x_i, z_i, \hat\theta^0\big)\,\varphi\big(y_i, x_i, z_i, \hat\theta^0\big)'.$$
We can use any consistent $\hat\theta^0$, e.g. $\hat\theta^0 = \arg\min_\theta\; m_n(\theta)'\, m_n(\theta)$.
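
A minimal sketch of this two-step idea, assuming a callable `phi_fn(theta)` that returns the $n \times q$ matrix of $\varphi_i(\theta)$ (for instance built from the helpers above):

```python
# First step: any consistent theta^0, here from the identity-weighted
# problem min_theta m_n' m_n; second step: the optimal weight V_hat at theta^0.
import numpy as np
from scipy.optimize import minimize

def first_step(phi_fn, theta_start):
    obj = lambda th: phi_fn(th).mean(axis=0) @ phi_fn(th).mean(axis=0)
    return minimize(obj, theta_start, method="Nelder-Mead").x

def optimal_weight(phi_fn, theta0):
    P = phi_fn(theta0)
    return P.T @ P / len(P)                  # V_hat = (1/n) sum phi_i phi_i'
```
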

5) Asymptotic distribution:
$$\sqrt{n}\,\big(\hat\theta_n - \theta_0\big) \xrightarrow{d} N\big(0, \Lambda(\theta_0)\big),$$
where
$$A(\theta_0) = E_0\!\left[\frac{\partial \varphi(y,x,z,\theta_0)}{\partial \theta}\right], \qquad V(\theta_0) = E_0\big[\varphi(y,x,z,\theta_0)\,\varphi(y,x,z,\theta_0)'\big], \qquad \Lambda(\theta_0) = \big(A(\theta_0)\, V(\theta_0)^{-1} A(\theta_0)'\big)^{-1}.$$
A consistent estimator for $A(\theta_0)$ is $\hat A = \frac{1}{n}\sum_i \frac{\partial \varphi(y_i,x_i,z_i,\hat\theta^0)}{\partial \theta}$. A consistent estimator for $V(\theta_0)$ is $\hat V_n = \frac{1}{n}\sum_i z_i z_i'\,\big(y_i - g(x_i,\hat\theta^0)\big)^2$, which is non-singular and p.s.d. The proof is exactly the same as always.
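
These estimators can also be sketched in code; `phi_fn(theta)` is again a hypothetical callable returning the $n \times q$ matrix of $\varphi_i(\theta)$, and the Jacobian $J = \partial m_n/\partial\theta'$ (the text's $A'$) is taken numerically.

```python
# A_hat via a numerical Jacobian of m_n, V_hat from the phi matrix, and
# Lambda_hat = (A V_hat^{-1} A')^{-1}; diag(Lambda_hat)/n gives squared
# standard errors.
import numpy as np

def m_jacobian(phi_fn, theta, eps=1e-6):
    cols = []
    for j in range(len(theta)):
        e = np.zeros(len(theta)); e[j] = eps
        cols.append((phi_fn(theta + e).mean(axis=0)
                     - phi_fn(theta - e).mean(axis=0)) / (2 * eps))
    return np.column_stack(cols)             # J = d m_n / d theta', (q x k)

def avar_and_se(phi_fn, theta_hat):
    P = phi_fn(theta_hat)
    n = len(P)
    V = P.T @ P / n                          # V_hat
    J = m_jacobian(phi_fn, theta_hat)
    Lam = np.linalg.inv(J.T @ np.linalg.inv(V) @ J)   # Lambda_hat
    return Lam, np.sqrt(np.diag(Lam) / n)    # asymptotic variance, std. errors
```
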

6) $H_0: \theta_1\theta_2\cdots\theta_K = 1$ versus $H_1: \theta_1\theta_2\cdots\theta_K \neq 1$, so
$$r(\theta) = \theta_1\theta_2\cdots\theta_K - 1, \qquad R(\theta) = \big[\theta_2\cdots\theta_K,\ \ \theta_1\theta_3\cdots\theta_K,\ \ \ldots,\ \ \theta_1\theta_2\cdots\theta_{K-1}\big].$$
The restricted estimator and the corresponding pieces are
$$\tilde\theta_n = \arg\min_\theta\; m_n(\theta)'\,\tilde V_n^{-1}\, m_n(\theta) \ \text{ s.t. } r(\theta) = 0,$$
$$\tilde A = \frac{1}{n}\sum_i \frac{\partial \varphi(y_i,x_i,z_i,\tilde\theta^0)}{\partial\theta}, \qquad \tilde V_n = \frac{1}{n}\sum_i \varphi\big(y_i,x_i,z_i,\tilde\theta^0\big)\,\varphi\big(y_i,x_i,z_i,\tilde\theta^0\big)', \qquad \tilde\theta^0 = \arg\min_\theta\; m_n(\theta)'\, m_n(\theta) \ \text{ s.t. } r(\theta) = 0.$$
The three tests are
$$W = n\, r\big(\hat\theta\big)'\left[R\big(\hat\theta\big)\left(\hat A\,\hat V^{-1}\hat A'\right)^{-1} R\big(\hat\theta\big)'\right]^{-1} r\big(\hat\theta\big),$$
$$LM = n\, m\big(\tilde\theta\big)'\,\tilde V^{-1}\tilde A'\left(\tilde A\,\tilde V^{-1}\tilde A'\right)^{-1}\tilde A\,\tilde V^{-1}\, m\big(\tilde\theta\big),$$
$$LR = n\left(m\big(\tilde\theta\big)'\,\tilde V^{-1}\, m\big(\tilde\theta\big) - m\big(\hat\theta\big)'\,\hat V^{-1}\, m\big(\hat\theta\big)\right).$$
All are distributed as $\chi^2(1)$.
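
As an illustration, the Wald version of this test can be sketched as follows; $\hat\theta$, $\hat V$, and the Jacobian $J = \partial m_n/\partial\theta'$ (the text's $A'$) are taken as given, for instance from the estimators above.

```python
# Wald test of H0: theta_1 * ... * theta_K = 1, using r(theta) and R(theta)
# from the text; the statistic is compared with a chi^2(1) critical value.
import numpy as np
from scipy.stats import chi2

def r(theta):
    return np.prod(theta) - 1.0

def R(theta):
    """Gradient of r: entry j is the product of all thetas except theta_j."""
    return np.array([np.prod(np.delete(theta, j)) for j in range(len(theta))])

def wald_product_test(theta_hat, V_hat, J, n, level=0.05):
    Lam = np.linalg.inv(J.T @ np.linalg.inv(V_hat) @ J)  # (A V^{-1} A')^{-1}
    Rv = R(theta_hat)
    W = n * r(theta_hat) ** 2 / (Rv @ Lam @ Rv)
    return W, W > chi2.ppf(1 - level, df=1)
```

The LM and LR versions would reuse the restricted estimator $\tilde\theta$ and its $\tilde A$, $\tilde V$ exactly as in the Exercise 1 sketch.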