Principles of Econometrics Cheat Sheet (Econometrics and Mathematical Economics)

This cheat sheet collects all the main formulas for the Principles of Econometrics exam.

Typology: Cheat Sheet

2019/2020
Uploaded on 10/09/2020

freddye 🇺🇸
The Rules of Summation

\sum_{i=1}^{n} x_i = x_1 + x_2 + \cdots + x_n

\sum_{i=1}^{n} a = na

\sum_{i=1}^{n} a x_i = a \sum_{i=1}^{n} x_i

\sum_{i=1}^{n} (x_i + y_i) = \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i

\sum_{i=1}^{n} (a x_i + b y_i) = a \sum_{i=1}^{n} x_i + b \sum_{i=1}^{n} y_i

\sum_{i=1}^{n} (a + b x_i) = na + b \sum_{i=1}^{n} x_i

\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} = \frac{x_1 + x_2 + \cdots + x_n}{n}

\sum_{i=1}^{n} (x_i - \bar{x}) = 0

\sum_{i=1}^{2} \sum_{j=1}^{3} f(x_i, y_j)
  = \sum_{i=1}^{2} \left[ f(x_i, y_1) + f(x_i, y_2) + f(x_i, y_3) \right]
  = f(x_1, y_1) + f(x_1, y_2) + f(x_1, y_3) + f(x_2, y_1) + f(x_2, y_2) + f(x_2, y_3)
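These identities are easy to verify numerically; a minimal Python sketch (the data values and constants are arbitrary illustrations):

```python
import math

# Hypothetical illustration data for the summation rules.
x = [2.0, 4.0, 6.0, 8.0]
y = [1.0, 3.0, 5.0, 7.0]
a, b = 3.0, 0.5
n = len(x)

# sum(a*x_i + b*y_i) = a*sum(x_i) + b*sum(y_i)
lhs = sum(a * xi + b * yi for xi, yi in zip(x, y))
rhs = a * sum(x) + b * sum(y)
assert math.isclose(lhs, rhs)

# Deviations from the sample mean always sum to zero: sum(x_i - xbar) = 0
x_bar = sum(x) / n
assert math.isclose(sum(xi - x_bar for xi in x), 0.0, abs_tol=1e-12)
```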
Expected Values & Variances

E(X) = x_1 f(x_1) + x_2 f(x_2) + \cdots + x_n f(x_n) = \sum_{i=1}^{n} x_i f(x_i) = \sum_{x} x f(x)

E[g(X)] = \sum_{x} g(x) f(x)

E[g_1(X) + g_2(X)] = \sum_{x} [g_1(x) + g_2(x)] f(x)
  = \sum_{x} g_1(x) f(x) + \sum_{x} g_2(x) f(x)
  = E[g_1(X)] + E[g_2(X)]

E(c) = c
E(cX) = c E(X)
E(a + cX) = a + c E(X)

var(X) = \sigma^2 = E[X - E(X)]^2 = E(X^2) - [E(X)]^2

var(a + cX) = E[(a + cX) - E(a + cX)]^2 = c^2 var(X)
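A quick numerical check of these formulas on a small discrete distribution (the pmf values are assumed for illustration):

```python
# A small discrete pmf f(x); the support and probabilities are illustrations.
pmf = {1: 0.2, 2: 0.3, 3: 0.5}

E_X = sum(x * p for x, p in pmf.items())      # E(X) = sum over x of x*f(x)
E_X2 = sum(x**2 * p for x, p in pmf.items())  # E(X^2)
var_X = E_X2 - E_X**2                          # var(X) = E(X^2) - [E(X)]^2

# Check var(a + cX) = c^2 var(X) by transforming the support directly.
a, c = 1.0, 2.0
pmf_t = {a + c * x: p for x, p in pmf.items()}
E_T = sum(t * p for t, p in pmf_t.items())
var_T = sum((t - E_T) ** 2 * p for t, p in pmf_t.items())
assert abs(var_T - c**2 * var_X) < 1e-12
```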
Marginal and Conditional Distributions

f(x) = \sum_{y} f(x, y)  for each value X can take
f(y) = \sum_{x} f(x, y)  for each value Y can take

f(x|y) = P[X = x | Y = y] = \frac{f(x, y)}{f(y)}

If X and Y are independent random variables, then f(x, y) = f(x) f(y) for each and every pair of values x and y. The converse is also true.

If X and Y are independent random variables, then the conditional probability density function of X given that Y = y is

f(x|y) = \frac{f(x, y)}{f(y)} = \frac{f(x) f(y)}{f(y)} = f(x)

for each and every pair of values x and y. The converse is also true.
Expectations, Variances & Covariances

cov(X, Y) = E[(X - E[X])(Y - E[Y])] = \sum_{x} \sum_{y} [x - E(X)][y - E(Y)] f(x, y)

\rho = \frac{cov(X, Y)}{\sqrt{var(X)\, var(Y)}}

E(c_1 X + c_2 Y) = c_1 E(X) + c_2 E(Y)
E(X + Y) = E(X) + E(Y)

var(aX + bY + cZ) = a^2 var(X) + b^2 var(Y) + c^2 var(Z)
  + 2ab\, cov(X, Y) + 2ac\, cov(X, Z) + 2bc\, cov(Y, Z)

If X, Y, and Z are independent, or uncorrelated, random variables, then the covariance terms are zero and:

var(aX + bY + cZ) = a^2 var(X) + b^2 var(Y) + c^2 var(Z)
Normal Probabilities

If X \sim N(\mu, \sigma^2), then Z = \frac{X - \mu}{\sigma} \sim N(0, 1)

If X \sim N(\mu, \sigma^2) and a is a constant, then

P(X \le a) = P\left( Z \le \frac{a - \mu}{\sigma} \right)

If X \sim N(\mu, \sigma^2) and a and b are constants, then

P(a \le X \le b) = P\left( \frac{a - \mu}{\sigma} \le Z \le \frac{b - \mu}{\sigma} \right)
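These probabilities can be computed by standardizing and using the standard normal CDF, which the Python standard library exposes via the error function: \Phi(z) = \tfrac{1}{2}[1 + \mathrm{erf}(z/\sqrt{2})]. A sketch with assumed values of \mu and \sigma:

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """P(Z <= z) for Z ~ N(0, 1), via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def normal_prob(a, b, mu, sigma):
    """P(a <= X <= b) for X ~ N(mu, sigma^2), by standardizing to Z."""
    return std_normal_cdf((b - mu) / sigma) - std_normal_cdf((a - mu) / sigma)

# Illustration: with mu = 10, sigma = 1 (assumed values),
# P(mu - sigma <= X <= mu + sigma) is about 0.6827.
p = normal_prob(9.0, 11.0, 10.0, 1.0)
```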
Assumptions of the Simple Linear Regression Model

SR1 The value of y, for each value of x, is y = \beta_1 + \beta_2 x + e
SR2 The average value of the random error e is E(e) = 0, since we assume that E(y) = \beta_1 + \beta_2 x
SR3 The variance of the random error e is var(e) = \sigma^2 = var(y)
SR4 The covariance between any pair of random errors, e_i and e_j, is cov(e_i, e_j) = cov(y_i, y_j) = 0
SR5 The variable x is not random and must take at least two different values.
SR6 (optional) The values of e are normally distributed about their mean: e \sim N(0, \sigma^2)
Least Squares Estimation

If b_1 and b_2 are the least squares estimates, then

\hat{y}_i = b_1 + b_2 x_i
\hat{e}_i = y_i - \hat{y}_i = y_i - b_1 - b_2 x_i

The Normal Equations

N b_1 + \left( \sum x_i \right) b_2 = \sum y_i
\left( \sum x_i \right) b_1 + \left( \sum x_i^2 \right) b_2 = \sum x_i y_i

Least Squares Estimators

b_2 = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sum (x_i - \bar{x})^2}

b_1 = \bar{y} - b_2 \bar{x}
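The estimator formulas above translate directly into code; a minimal sketch with made-up illustration data (the normal equations imply that the residuals sum to zero and are orthogonal to x, which the assertions check):

```python
# Least squares estimates b1, b2 from the closed-form formulas.
# Data are made-up illustration values.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(x)

x_bar = sum(x) / n
y_bar = sum(y) / n

b2 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
     sum((xi - x_bar) ** 2 for xi in x)
b1 = y_bar - b2 * x_bar

# Fitted values and residuals
y_hat = [b1 + b2 * xi for xi in x]
e_hat = [yi - yhi for yi, yhi in zip(y, y_hat)]

# Consequences of the normal equations:
assert abs(sum(e_hat)) < 1e-9                                 # residuals sum to 0
assert abs(sum(xi * ei for xi, ei in zip(x, e_hat))) < 1e-9   # orthogonal to x
```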

Elasticity

\eta = \frac{\text{percentage change in } y}{\text{percentage change in } x} = \frac{\Delta y / y}{\Delta x / x} = \frac{\Delta y}{\Delta x} \cdot \frac{x}{y}

\eta = \frac{\Delta E(y) / E(y)}{\Delta x / x} = \frac{\Delta E(y)}{\Delta x} \cdot \frac{x}{E(y)} = \beta_2 \cdot \frac{x}{E(y)}

Least Squares Expressions Useful for Theory

b_2 = \beta_2 + \sum w_i e_i

w_i = \frac{x_i - \bar{x}}{\sum (x_i - \bar{x})^2}

\sum w_i = 0, \quad \sum w_i x_i = 1, \quad \sum w_i^2 = 1 \big/ \sum (x_i - \bar{x})^2
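The three properties of the weights w_i can be confirmed numerically; a short sketch with illustration data:

```python
# Weights w_i = (x_i - xbar) / sum((x_i - xbar)^2); data are illustrations.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
n = len(x)
x_bar = sum(x) / n
sxx = sum((xi - x_bar) ** 2 for xi in x)

w = [(xi - x_bar) / sxx for xi in x]

assert abs(sum(w)) < 1e-12                                       # sum w_i = 0
assert abs(sum(wi * xi for wi, xi in zip(w, x)) - 1.0) < 1e-12   # sum w_i x_i = 1
assert abs(sum(wi ** 2 for wi in w) - 1.0 / sxx) < 1e-12         # sum w_i^2 = 1/sxx
```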

Properties of the Least Squares Estimators

var(b_1) = \sigma^2 \, \frac{\sum x_i^2}{N \sum (x_i - \bar{x})^2}

var(b_2) = \frac{\sigma^2}{\sum (x_i - \bar{x})^2}

cov(b_1, b_2) = \sigma^2 \, \frac{-\bar{x}}{\sum (x_i - \bar{x})^2}

Gauss-Markov Theorem: Under the assumptions SR1–SR5 of the linear regression model, the estimators b_1 and b_2 have the smallest variance of all linear and unbiased estimators of \beta_1 and \beta_2. They are the Best Linear Unbiased Estimators (BLUE) of \beta_1 and \beta_2.

If we make the normality assumption, assumption SR6, about the error term, then the least squares estimators are normally distributed:

b_1 \sim N\!\left( \beta_1,\ \frac{\sigma^2 \sum x_i^2}{N \sum (x_i - \bar{x})^2} \right), \quad
b_2 \sim N\!\left( \beta_2,\ \frac{\sigma^2}{\sum (x_i - \bar{x})^2} \right)

Estimated Error Variance

\hat{\sigma}^2 = \frac{\sum \hat{e}_i^2}{N - 2}

Estimator Standard Errors

se(b_1) = \sqrt{\widehat{var}(b_1)}, \quad se(b_2) = \sqrt{\widehat{var}(b_2)}

t-distribution

If assumptions SR1–SR6 of the simple linear regression model hold, then

t = \frac{b_k - \beta_k}{se(b_k)} \sim t_{(N-2)}, \quad k = 1, 2

Interval Estimates

P\left[\, b_2 - t_c \, se(b_2) \le \beta_2 \le b_2 + t_c \, se(b_2) \,\right] = 1 - \alpha
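Computing the interval is simple arithmetic once b_2, se(b_2), and the critical value t_c are in hand; a sketch with assumed illustration values (t_c here is roughly t_{0.975} for N = 40, i.e. 38 degrees of freedom):

```python
# Interval estimate for beta_2. All inputs are assumed illustration values:
# b2 and se_b2 would come from a fitted model; t_c from a t-table.
b2 = 1.99
se_b2 = 0.15
t_c = 2.024   # approx. t_(0.975, 38), for a 95% interval with N = 40

lower = b2 - t_c * se_b2
upper = b2 + t_c * se_b2
# The interval [lower, upper] covers beta_2 with probability 1 - alpha.
```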

Hypothesis Testing

Components of Hypothesis Tests

  1. A null hypothesis, H_0
  2. An alternative hypothesis, H_1
  3. A test statistic
  4. A rejection region
  5. A conclusion

If the null hypothesis H_0: \beta_2 = c is true, then

t = \frac{b_2 - c}{se(b_2)} \sim t_{(N-2)}

Rejection rule for a two-tail test: If the value of the test statistic falls in the rejection region, either tail of the t-distribution, then we reject the null hypothesis and accept the alternative.

Type I error: The null hypothesis is true and we decide to reject it.
Type II error: The null hypothesis is false and we decide not to reject it.

p-value rejection rule: When the p-value of a hypothesis test is smaller than the chosen value of \alpha, then the test procedure leads to rejection of the null hypothesis.

Prediction

y_0 = \beta_1 + \beta_2 x_0 + e_0, \quad \hat{y}_0 = b_1 + b_2 x_0, \quad f = \hat{y}_0 - y_0

\widehat{var}(f) = \hat{\sigma}^2 \left[ 1 + \frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i - \bar{x})^2} \right], \quad se(f) = \sqrt{\widehat{var}(f)}

A (1 - \alpha) \times 100\% confidence interval, or prediction interval, for y_0:

\hat{y}_0 \pm t_c \, se(f)

Goodness of Fit

\sum (y_i - \bar{y})^2 = \sum (\hat{y}_i - \bar{y})^2 + \sum \hat{e}_i^2
SST = SSR + SSE

R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST} = \left( corr(y, \hat{y}) \right)^2
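The sum-of-squares decomposition and R^2 can be checked directly from a least squares fit; a sketch with illustration data (SST = SSR + SSE holds exactly because the model includes an intercept):

```python
# Goodness-of-fit decomposition for a simple OLS fit; data are illustrations.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n

b2 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
     sum((xi - x_bar) ** 2 for xi in x)
b1 = y_bar - b2 * x_bar
y_hat = [b1 + b2 * xi for xi in x]

SST = sum((yi - y_bar) ** 2 for yi in y)                    # total
SSR = sum((yh - y_bar) ** 2 for yh in y_hat)                # explained
SSE = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))       # residual

assert abs(SST - (SSR + SSE)) < 1e-9       # SST = SSR + SSE
R2 = SSR / SST
assert abs(R2 - (1 - SSE / SST)) < 1e-12   # the two forms of R^2 agree
```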

Log-Linear Model

\ln(y) = \beta_1 + \beta_2 x + e, \quad \widehat{\ln(y)} = b_1 + b_2 x

100 \times b_2 \approx \% change in y given a one-unit change in x.

\hat{y}_n = \exp(b_1 + b_2 x)
\hat{y}_c = \exp(b_1 + b_2 x) \exp(\hat{\sigma}^2 / 2)

Prediction interval:

\left[ \exp\!\left( \widehat{\ln(y)} - t_c \, se(f) \right),\ \exp\!\left( \widehat{\ln(y)} + t_c \, se(f) \right) \right]

Generalized goodness-of-fit measure:

R_g^2 = \left( corr(y, \hat{y}_n) \right)^2

Assumptions of the Multiple Regression Model

MR1 y_i = \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK} + e_i
MR2 E(y_i) = \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK}, \quad E(e_i) = 0
MR3 var(y_i) = var(e_i) = \sigma^2
MR4 cov(y_i, y_j) = cov(e_i, e_j) = 0
MR5 The values of x_{ik} are not random and are not exact linear functions of the other explanatory variables.
MR6 y_i \sim N\!\left( \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK},\ \sigma^2 \right), \quad e_i \sim N(0, \sigma^2)

Least Squares Estimates in MR Model

Least squares estimates b_1, b_2, \ldots, b_K minimize

S(b_1, b_2, \ldots, b_K) = \sum \left( y_i - b_1 - b_2 x_{i2} - \cdots - b_K x_{iK} \right)^2

Estimated Error Variance and Estimator Standard Errors

\hat{\sigma}^2 = \frac{\sum \hat{e}_i^2}{N - K}

se(b_k) = \sqrt{\widehat{var}(b_k)}
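Minimizing S(b_1, \ldots, b_K) leads to the multiple-regression normal equations (X'X)b = X'y. A minimal pure-Python sketch, with assumed illustration data (generated from y = 1 + 2 x_2 + 3 x_3, so the fit is exact) and a small Gaussian-elimination solver standing in for a linear algebra library:

```python
# Solve the K = 3 normal equations (X'X) b = X'y; data are illustrations.
# First column of ones corresponds to the intercept b1.
X = [[1.0, 1.0, 2.0],
     [1.0, 2.0, 1.0],
     [1.0, 3.0, 4.0],
     [1.0, 4.0, 3.0],
     [1.0, 5.0, 6.0],
     [1.0, 6.0, 5.0]]
y = [9.0, 8.0, 19.0, 18.0, 29.0, 28.0]   # exactly 1 + 2*x2 + 3*x3
N, K = len(X), len(X[0])

# Build X'X and X'y.
XtX = [[sum(X[i][r] * X[i][c] for i in range(N)) for c in range(K)]
       for r in range(K)]
Xty = [sum(X[i][r] * y[i] for i in range(N)) for r in range(K)]

# Gaussian elimination with partial pivoting on the augmented system.
A = [row[:] + [v] for row, v in zip(XtX, Xty)]
for col in range(K):
    piv = max(range(col, K), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    for r in range(col + 1, K):
        m = A[r][col] / A[col][col]
        for c in range(col, K + 1):
            A[r][c] -= m * A[col][c]
b = [0.0] * K
for r in range(K - 1, -1, -1):
    b[r] = (A[r][K] - sum(A[r][c] * b[c] for c in range(r + 1, K))) / A[r][r]

# Residuals and estimated error variance: sigma^2_hat = sum e^2 / (N - K).
e_hat = [y[i] - sum(X[i][c] * b[c] for c in range(K)) for i in range(N)]
sigma2_hat = sum(ei ** 2 for ei in e_hat) / (N - K)
```

In practice a library routine (e.g. a least-squares solver) replaces the hand-rolled elimination; the point here is only that the b_k come from the normal equations and \hat{\sigma}^2 divides by N - K, not N.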