

Econ203C Homework #6 Anton Cheremukhin
June 1, 2006
Exercise 1 GMM and the Three Tests
The hardest part is to obtain a stable estimate of both the unrestricted and the restricted versions. We do this in six steps: first estimate β^(1) using an identity weight matrix and "fminsearch"; then update the weight matrix and find the corresponding β^(2); update it again and find the corresponding β^(3). Starting from β^(3), and using V^(3) as the weight and "fmincon", we then obtain the restricted estimate β^(5) and the corresponding V^(5).
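In code, such an iterated routine might look as follows (a Python sketch only; scipy's minimize with "Nelder-Mead" and "SLSQP" plays the role of MATLAB's "fminsearch" and "fmincon", and the moment function moments(beta, data) and the restriction are hypothetical placeholders):

import numpy as np
from scipy.optimize import minimize

def gmm_obj(beta, V, moments, data):
    # GMM criterion m_n(beta)' V^{-1} m_n(beta)
    m = moments(beta, data).mean(axis=0)
    return m @ np.linalg.solve(V, m)

def V_hat(beta, moments, data):
    # V estimate: (1/n) sum_i phi_i phi_i'
    phi = moments(beta, data)
    return phi.T @ phi / phi.shape[0]

def iterated_gmm(beta_start, moments, data, restriction=None):
    q = moments(beta_start, data).shape[1]
    V = np.eye(q)                                # step (1): identity weight, "fminsearch" analogue
    beta = np.asarray(beta_start, dtype=float)
    for _ in range(3):                           # steps (1)-(3): alternate between beta and V
        beta = minimize(gmm_obj, beta, args=(V, moments, data),
                        method="Nelder-Mead").x
        V = V_hat(beta, moments, data)           # V^(k) from the current beta^(k)
    if restriction is None:
        return beta, V                           # beta^(3), V^(3)
    cons = {"type": "eq", "fun": restriction}    # restricted step: "fmincon" analogue
    beta_r = minimize(gmm_obj, beta, args=(V, moments, data),
                      method="SLSQP", constraints=[cons]).x
    return beta, V, beta_r, V_hat(beta_r, moments, data)   # ..., beta^(5), V^(5)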
After that we calculate the three tests:
\[ W = n\, r' \left[ R \left( A^{(3)} \left(V^{(3)}\right)^{-1} A^{(3)\prime} \right)^{-1} R' \right]^{-1} r, \]
\[ LM = n\, m_n\!\left(\beta^{(5)}\right)' \left(V^{(3)}\right)^{-1} A^{(5)\prime} \left[ A^{(5)} \left(V^{(3)}\right)^{-1} A^{(5)\prime} \right]^{-1} A^{(5)} \left(V^{(3)}\right)^{-1} m_n\!\left(\beta^{(5)}\right), \]
\[ LR = n \left[ m_n\!\left(\beta^{(5)}\right)' \left(V^{(3)}\right)^{-1} m_n\!\left(\beta^{(5)}\right) - m_n\!\left(\beta^{(3)}\right)' \left(V^{(3)}\right)^{-1} m_n\!\left(\beta^{(3)}\right) \right], \]
where \( r = \beta_2 \beta_3 \), \( R = [0,\ \beta_3,\ \beta_2] \), and \( A^{(k)} \) denotes the derivative matrix \( A \) (defined in Exercise 2) evaluated at \( \beta^{(k)} \).
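For instance, the Wald statistic with this particular restriction could be computed along these lines (a Python sketch; beta3, A3 and V3 stand for β^(3), A^(3) and V^(3) from the procedure above). The LM and LR statistics follow the same pattern; a general version is sketched at the end of Exercise 2.

import numpy as np

def wald_test(n, beta3, A3, V3):
    # restriction r(beta) = beta_2 * beta_3 and its Jacobian R = [0, beta_3, beta_2]
    r = beta3[1] * beta3[2]                    # beta_2, beta_3 are the 2nd and 3rd components
    R = np.array([0.0, beta3[2], beta3[1]])
    # Lambda = (A V^{-1} A')^{-1}, asymptotic variance of the unrestricted estimator
    Lam = np.linalg.inv(A3 @ np.linalg.solve(V3, A3.T))
    # W = n r' [R Lam R']^{-1} r (a single restriction, so this is scalar)
    return n * r ** 2 / (R @ Lam @ R)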
We use V^(3) because it is the weight matrix we used when finding the minimum. The whole procedure is very sensitive to V^(3), which is why we repeated the second step: to make sure we obtain a good estimate of the 'optimal' weight matrix.
The histograms are almost identical; one of them is depicted on the left. The graph on the right compares the three empirical c.d.f.'s of the tests, which are almost indistinguishable. The probabilities of a type I error at the 95% level are all around 11-12%. The means differ a little, which can be seen clearly in the graph.
                    Wald     LM       LR
mean                1.47     1.62     1.
median              0.67     0.67     0.
P(type I error)     0.109    0.118    0.
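The entries of this table can be computed from the simulated statistics roughly as follows (a sketch; the array of Monte Carlo draws of each statistic is a placeholder):

import numpy as np
from scipy.stats import chi2

def rejection_summary(stats, level=0.95):
    # stats: simulated values of one test statistic; reference distribution chi^2(1)
    crit = chi2.ppf(level, df=1)
    return {"mean": np.mean(stats), "median": np.median(stats),
            "P(type I error)": np.mean(stats > crit)}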
Exercise 2 NLLS as an Example of GMM
The model is
\[ y_i = g(x_i, \theta_0) + u_i, \qquad E(u_i \mid x_i) = 0. \]
NLLS solves \( E\big[(y_i - g(x_i,\theta))^2\big] \to \min_\theta \). For the first-order condition, use the law of iterated expectations:
\[ E\Big[ \tfrac{\partial}{\partial\theta} g(x_i,\theta)\,\big(y_i - g(x_i,\theta)\big) \Big]
 = E\Big[ \tfrac{\partial}{\partial\theta} g(x_i,\theta)\, E\big[\big(y_i - g(x_i,\theta)\big) \mid x_i\big] \Big], \]
and since \( E(u_i \mid x_i) = 0 \Leftrightarrow E(y_i \mid x_i) = g(x_i,\theta_0) \),
\[ E\Big[ \tfrac{\partial}{\partial\theta} g(x_i,\theta)\,\big(g(x_i,\theta_0) - g(x_i,\theta)\big) \Big] = 0. \]
If g(·) is strictly monotone in θ, this expression is satisfied if and only if θ = θ₀. For identification we need to add the condition θ ≠ θ₀ ⇒ g(x, θ) ≠ g(x, θ₀); strict monotonicity in θ is one way to guarantee it.
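For example, if g(x, θ) = θ²x, then g(x, θ) = g(x, −θ) for every x, so the first-order condition above is also satisfied at −θ₀ and θ₀ is not identified. By contrast, for g(x, θ) = e^{θx} with x ≠ 0, strict monotonicity in θ gives
\[ \theta \neq \theta_0 \;\Rightarrow\; g(x,\theta) \neq g(x,\theta_0), \]
so the population first-order condition picks out θ₀ uniquely.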
The corresponding GMM estimator is
\[ \hat\theta_n = \arg\min_\theta\; m_n(\theta)'\,(V_n)^{-1}\, m_n(\theta), \qquad
 m_n(\theta) = \frac{1}{n}\sum_i \varphi(y_i, x_i, \theta), \qquad
 \varphi(y_i, x_i, \theta) = z_i\,\big(y_i - g(x_i, \theta)\big). \]
Some simple ideas for the instruments z_{ji} include 1, x_{1i}, ..., x_{Ki}; others include x_{1i}^2, ..., x_{Ki}^2. We can use higher powers if needed; the justification would be a Taylor expansion of ∂g(x_i, θ)/∂θ around θ₀. These are all valid instruments, since E[f(x_i)(y_i − g(x_i, θ))] = 0 only at θ = θ₀.
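A sketch of this moment function with the instruments above (Python; g is a hypothetical user-supplied regression function, y and X are the data arrays):

import numpy as np

def instruments(X):
    # z_i = [1, x_1i, ..., x_Ki, x_1i^2, ..., x_Ki^2]
    return np.column_stack([np.ones(X.shape[0]), X, X ** 2])

def phi(theta, y, X, g):
    # phi(y_i, x_i, theta) = z_i * (y_i - g(x_i, theta)), stacked as an n x q matrix
    return instruments(X) * (y - g(X, theta))[:, None]

def m_n(theta, y, X, g):
    # sample moment m_n(theta) = (1/n) sum_i phi(y_i, x_i, theta)
    return phi(theta, y, X, g).mean(axis=0)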
We estimate the weight matrix as
\[ \hat V_n = \frac{1}{n}\sum_i \varphi\big(y_i, x_i, z_i, \hat\theta^0\big)\,\varphi\big(y_i, x_i, z_i, \hat\theta^0\big)'. \]
We can use any consistent \(\hat\theta^0\), e.g. \(\hat\theta^0 = \arg\min_\theta m_n(\theta)'\, m_n(\theta)\).
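Building on the sketch above, the preliminary estimator and the weight matrix could be obtained as follows (scipy's "Nelder-Mead" again playing the role of "fminsearch"; m_n and phi are the moment functions sketched earlier, passed in as callables of theta only):

import numpy as np
from scipy.optimize import minimize

def first_step(theta_start, m_n, phi):
    # theta_hat^0 = argmin m_n(theta)' m_n(theta)  (identity weight)
    obj = lambda th: m_n(th) @ m_n(th)
    theta0 = minimize(obj, theta_start, method="Nelder-Mead").x
    # V_hat_n = (1/n) sum_i phi_i phi_i' evaluated at theta_hat^0
    P = phi(theta0)
    return theta0, P.T @ P / P.shape[0]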
Then
\[ \sqrt{n}\,\big(\hat\theta_n - \theta_0\big) \xrightarrow{d} N\big(0, \Lambda(\theta_0)\big), \]
where
\[ A(\theta_0) = E\Big[ \frac{\partial \varphi(y,x,z,\theta_0)'}{\partial\theta} \Big], \qquad
 V(\theta_0) = E\big[ \varphi(y,x,z,\theta_0)\,\varphi(y,x,z,\theta_0)' \big], \qquad
 \Lambda(\theta_0) = \big[ A(\theta_0)\, V(\theta_0)^{-1} A(\theta_0)' \big]^{-1}. \]
A consistent estimator for A(θ₀) is
\[ \hat A = \frac{1}{n}\sum_i \frac{\partial \varphi\big(y_i, x_i, z_i, \hat\theta^0\big)'}{\partial\theta}, \]
and a consistent estimator for V(θ₀) is
\[ \hat V_n = \frac{1}{n}\sum_i z_i z_i' \big(y_i - g(x_i, \hat\theta^0)\big)^2. \]
The proof is exactly the same as always.
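A sketch of these estimators, with Â obtained by numerical differentiation of the sample moment m_n from the earlier sketch; the asymptotic standard errors are then sqrt(diag(Λ̂)/n):

import numpy as np

def A_hat(theta0, m_n, eps=1e-6):
    # A_hat = [(1/n) sum_i d phi_i / d theta']' (p x q), via central differences of m_n
    theta0 = np.asarray(theta0, dtype=float)
    q, p = m_n(theta0).size, theta0.size
    J = np.zeros((q, p))
    for j in range(p):
        tp, tm = theta0.copy(), theta0.copy()
        tp[j] += eps
        tm[j] -= eps
        J[:, j] = (m_n(tp) - m_n(tm)) / (2 * eps)
    return J.T

def Lambda_hat(A, V):
    # Lambda_hat = (A V^{-1} A')^{-1}
    return np.linalg.inv(A @ np.linalg.solve(V, A.T))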
The restriction and its Jacobian are
\[ r(\theta) = \theta_1 \theta_2 \cdots \theta_K - 1, \qquad
 R(\theta) = \big[\, \theta_2 \theta_3 \cdots \theta_K,\;\; \theta_1 \theta_3 \cdots \theta_K,\;\; \ldots,\;\; \theta_1 \theta_2 \cdots \theta_{K-1} \,\big], \]
and the restricted estimator is
\[ \tilde\theta_n = \arg\min_\theta\; m_n(\theta)'\,\tilde V_n^{-1}\, m_n(\theta) \quad \text{s.t. } r(\theta) = 0. \]
The restricted counterparts are
\[ \tilde A = \frac{1}{n}\sum_i \frac{\partial \varphi\big(y_i, x_i, z_i, \tilde\theta^0\big)'}{\partial\theta}, \qquad
 \tilde V_n = \frac{1}{n}\sum_i \varphi\big(y_i, x_i, z_i, \tilde\theta^0\big)\,\varphi\big(y_i, x_i, z_i, \tilde\theta^0\big)', \]
with
\[ \tilde\theta^0 = \arg\min_\theta\; m_n(\theta)'\, m_n(\theta) \quad \text{s.t. } r(\theta) = 0. \]
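A sketch of the restricted step with this restriction (scipy's "SLSQP" in place of "fmincon"; m_n is the sample moment callable and V_til the weight matrix, both placeholders):

import numpy as np
from scipy.optimize import minimize

def r(theta):
    # r(theta) = theta_1 * theta_2 * ... * theta_K - 1
    return np.prod(theta) - 1.0

def R(theta):
    # Jacobian of r: k-th entry is the product of all components except theta_k
    theta = np.asarray(theta, dtype=float)
    return np.array([np.prod(np.delete(theta, k)) for k in range(theta.size)])

def restricted_gmm(theta_start, m_n, V_til):
    # theta_tilde = argmin m_n' V_til^{-1} m_n  subject to  r(theta) = 0
    obj = lambda th: m_n(th) @ np.linalg.solve(V_til, m_n(th))
    cons = {"type": "eq", "fun": r}
    return minimize(obj, theta_start, method="SLSQP", constraints=[cons]).x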
The three test statistics are
\[ W = n\, r(\hat\theta)' \left[ R(\hat\theta)\,\big(\hat A\,\hat V^{-1}\hat A'\big)^{-1} R(\hat\theta)' \right]^{-1} r(\hat\theta), \]
\[ LM = n\, m_n(\tilde\theta)'\,\tilde V^{-1}\tilde A' \left[\tilde A\,\tilde V^{-1}\tilde A'\right]^{-1}\tilde A\,\tilde V^{-1}\, m_n(\tilde\theta), \]
\[ LR = n \left[ m_n(\tilde\theta)'\,\tilde V^{-1}\, m_n(\tilde\theta) - m_n(\hat\theta)'\,\hat V^{-1}\, m_n(\hat\theta) \right]. \]
All three are asymptotically distributed as χ²(1).
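These formulas translate directly into a routine of the following form (a sketch; the hatted arguments are the unrestricted objects, the tilded ones the restricted objects, with A of dimension p×q as above):

import numpy as np

def trinity_tests(n, r_hat, R_hat, m_hat, V_hat, A_hat, m_til, V_til, A_til):
    r_hat = np.atleast_1d(r_hat)
    R_hat = np.atleast_2d(R_hat)               # k x p matrix of derivatives of r
    # Wald: n r' [R (A V^{-1} A')^{-1} R']^{-1} r, at the unrestricted estimate
    Lam = np.linalg.inv(A_hat @ np.linalg.solve(V_hat, A_hat.T))
    W = n * r_hat @ np.linalg.solve(R_hat @ Lam @ R_hat.T, r_hat)
    # LM: n m' V^{-1} A' [A V^{-1} A']^{-1} A V^{-1} m, at the restricted estimate
    B = A_til @ np.linalg.inv(V_til)
    LM = n * (m_til @ B.T) @ np.linalg.solve(B @ A_til.T, B @ m_til)
    # LR: n [ m_til' V_til^{-1} m_til - m_hat' V_hat^{-1} m_hat ]
    LR = n * (m_til @ np.linalg.solve(V_til, m_til)
              - m_hat @ np.linalg.solve(V_hat, m_hat))
    return W, LM, LR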