

This cheat sheet collects the main formulas for the exam of Principles of Econometrics.
The Rules of Summation
$\sum_{i=1}^{n} x_i = x_1 + x_2 + \cdots + x_n$

$\sum_{i=1}^{n} a = na$

$\sum_{i=1}^{n} a x_i = a \sum_{i=1}^{n} x_i$

$\sum_{i=1}^{n} (x_i + y_i) = \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i$

$\sum_{i=1}^{n} (a x_i + b y_i) = a \sum_{i=1}^{n} x_i + b \sum_{i=1}^{n} y_i$

$\sum_{i=1}^{n} (a + b x_i) = na + b \sum_{i=1}^{n} x_i$

$\bar{x} = \dfrac{\sum_{i=1}^{n} x_i}{n} = \dfrac{x_1 + x_2 + \cdots + x_n}{n}$

$\sum_{i=1}^{n} (x_i - \bar{x}) = 0$

$\sum_{i=1}^{2} \sum_{j=1}^{3} f(x_i, y_j) = \sum_{i=1}^{2} \left[ f(x_i, y_1) + f(x_i, y_2) + f(x_i, y_3) \right]$
$\qquad = f(x_1, y_1) + f(x_1, y_2) + f(x_1, y_3) + f(x_2, y_1) + f(x_2, y_2) + f(x_2, y_3)$
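A quick numerical check of these rules, as a minimal numpy sketch (the data and constants below are made up for illustration):

```python
# Numerical check of the summation rules on illustrative data.
import numpy as np

x = np.array([2.0, 5.0, 3.0, 6.0])
a, b = 3.0, 0.5

assert np.isclose(np.sum(a * x), a * np.sum(x))                   # sum(a*x) = a*sum(x)
assert np.isclose(np.sum(a + b * x), len(x) * a + b * np.sum(x))  # sum(a + b*x) = n*a + b*sum(x)
assert np.isclose(np.sum(x - x.mean()), 0.0)                      # deviations from the mean sum to zero
```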
Expected Values & Variances
$E(X) = x_1 f(x_1) + x_2 f(x_2) + \cdots + x_n f(x_n) = \sum_{i=1}^{n} x_i f(x_i) = \sum_{x} x f(x)$

$E[g(X)] = \sum_{x} g(x) f(x)$

$E[g_1(X) + g_2(X)] = \sum_{x} [g_1(x) + g_2(x)] f(x) = \sum_{x} g_1(x) f(x) + \sum_{x} g_2(x) f(x) = E[g_1(X)] + E[g_2(X)]$

$E(c) = c$
$E(cX) = c\,E(X)$
$E(a + cX) = a + c\,E(X)$
$\mathrm{var}(X) = \sigma^2 = E[X - E(X)]^2 = E(X^2) - [E(X)]^2$
$\mathrm{var}(a + cX) = E[(a + cX) - E(a + cX)]^2 = c^2\,\mathrm{var}(X)$
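A minimal sketch of computing $E(X)$ and $\mathrm{var}(X)$ from a discrete pmf (the support and probabilities are illustrative):

```python
# Expected value and variance of a discrete random variable from its pmf.
import numpy as np

x = np.array([1.0, 2.0, 3.0])      # support of X
f = np.array([0.2, 0.5, 0.3])      # pmf values f(x); they sum to 1

EX = np.sum(x * f)                 # E(X) = sum_x x f(x)
EX2 = np.sum(x**2 * f)             # E(X^2)
varX = EX2 - EX**2                 # var(X) = E(X^2) - [E(X)]^2
```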
Marginal and Conditional Distributions
$f(x) = \sum_{y} f(x, y)$ for each value $X$ can take

$f(y) = \sum_{x} f(x, y)$ for each value $Y$ can take

$f(x \mid y) = P[X = x \mid Y = y] = \dfrac{f(x, y)}{f(y)}$

If $X$ and $Y$ are independent random variables, then $f(x, y) = f(x) f(y)$ for each and every pair of values $x$ and $y$. The converse is also true.

If $X$ and $Y$ are independent random variables, then the conditional probability density function of $X$ given $Y = y$ is
$f(x \mid y) = \dfrac{f(x, y)}{f(y)} = \dfrac{f(x) f(y)}{f(y)} = f(x)$
for each and every pair of values $x$ and $y$. The converse is also true.
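A sketch of recovering the marginals and a conditional from a joint pmf stored as a matrix (the joint probabilities are made up):

```python
# Marginal and conditional distributions from a joint pmf f(x, y);
# rows index values of X, columns index values of Y.
import numpy as np

f_xy = np.array([[0.10, 0.20, 0.10],
                 [0.20, 0.30, 0.10]])   # entries sum to 1

f_x = f_xy.sum(axis=1)                  # f(x) = sum_y f(x, y)
f_y = f_xy.sum(axis=0)                  # f(y) = sum_x f(x, y)
f_x_given_y0 = f_xy[:, 0] / f_y[0]      # f(x | y = y_0) = f(x, y_0) / f(y_0)

# Independence holds iff f(x, y) = f(x) f(y) for every pair (x, y)
independent = np.allclose(f_xy, np.outer(f_x, f_y))
```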
Expectations, Variances & Covariances
$\mathrm{cov}(X, Y) = E\left[(X - E[X])(Y - E[Y])\right] = \sum_{x} \sum_{y} [x - E(X)][y - E(Y)] f(x, y)$

$\rho = \dfrac{\mathrm{cov}(X, Y)}{\sqrt{\mathrm{var}(X)\,\mathrm{var}(Y)}}$

$E(c_1 X + c_2 Y) = c_1 E(X) + c_2 E(Y)$
$E(X + Y) = E(X) + E(Y)$
$\mathrm{var}(aX + bY + cZ) = a^2\,\mathrm{var}(X) + b^2\,\mathrm{var}(Y) + c^2\,\mathrm{var}(Z) + 2ab\,\mathrm{cov}(X, Y) + 2ac\,\mathrm{cov}(X, Z) + 2bc\,\mathrm{cov}(Y, Z)$

If $X$, $Y$, and $Z$ are independent, or uncorrelated, random variables, then the covariance terms are zero and:
$\mathrm{var}(aX + bY + cZ) = a^2\,\mathrm{var}(X) + b^2\,\mathrm{var}(Y) + c^2\,\mathrm{var}(Z)$
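A sketch computing $\mathrm{cov}(X, Y)$ and $\rho$ directly from a joint pmf (same layout as above; values again illustrative):

```python
# Covariance and correlation of X and Y from a joint pmf.
import numpy as np

x = np.array([0.0, 1.0])                # values of X (rows)
y = np.array([1.0, 2.0, 3.0])           # values of Y (columns)
f_xy = np.array([[0.10, 0.20, 0.10],
                 [0.20, 0.30, 0.10]])

f_x, f_y = f_xy.sum(axis=1), f_xy.sum(axis=0)
EX, EY = np.sum(x * f_x), np.sum(y * f_y)

# cov(X, Y) = sum_x sum_y [x - E(X)][y - E(Y)] f(x, y)
cov = np.sum(np.outer(x - EX, y - EY) * f_xy)
varX = np.sum((x - EX) ** 2 * f_x)
varY = np.sum((y - EY) ** 2 * f_y)
rho = cov / np.sqrt(varX * varY)
```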
Normal Probabilities
If $X \sim N(\mu, \sigma^2)$, then $Z = \dfrac{X - \mu}{\sigma} \sim N(0, 1)$

If $X \sim N(\mu, \sigma^2)$ and $a$ is a constant, then
$P(X \ge a) = P\!\left(Z \ge \dfrac{a - \mu}{\sigma}\right)$

If $X \sim N(\mu, \sigma^2)$ and $a$ and $b$ are constants, then
$P(a \le X \le b) = P\!\left(\dfrac{a - \mu}{\sigma} \le Z \le \dfrac{b - \mu}{\sigma}\right)$
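A sketch of these probabilities via standardization, with scipy's standard normal cdf playing the role of the $Z$ table ($\mu$, $\sigma$, $a$, $b$ are illustrative):

```python
# Normal probabilities by standardizing to Z ~ N(0, 1).
from scipy.stats import norm

mu, sigma = 10.0, 2.0
a, b = 9.0, 13.0

p_ge_a = 1 - norm.cdf((a - mu) / sigma)   # P(X >= a) = P(Z >= (a - mu)/sigma)
p_a_to_b = norm.cdf((b - mu) / sigma) - norm.cdf((a - mu) / sigma)
```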
Assumptions of the Simple Linear Regression Model
SR1 The value of $y$, for each value of $x$, is $y = \beta_1 + \beta_2 x + e$
SR2 The average value of the random error $e$ is $E(e) = 0$, since we assume that $E(y) = \beta_1 + \beta_2 x$
SR3 The variance of the random error $e$ is $\mathrm{var}(e) = \sigma^2 = \mathrm{var}(y)$
SR4 The covariance between any pair of random errors $e_i$ and $e_j$ is $\mathrm{cov}(e_i, e_j) = \mathrm{cov}(y_i, y_j) = 0$
SR5 The variable $x$ is not random and must take at least two different values.
SR6 (optional) The values of $e$ are normally distributed about their mean: $e \sim N(0, \sigma^2)$
Least Squares Estimation
If $b_1$ and $b_2$ are the least squares estimates, then
$\hat{y}_i = b_1 + b_2 x_i$
$\hat{e}_i = y_i - \hat{y}_i = y_i - b_1 - b_2 x_i$

The Normal Equations
$N b_1 + \left(\sum x_i\right) b_2 = \sum y_i$
$\left(\sum x_i\right) b_1 + \left(\sum x_i^2\right) b_2 = \sum x_i y_i$
Least Squares Estimators
$b_2 = \dfrac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sum (x_i - \bar{x})^2}$

$b_1 = \bar{y} - b_2 \bar{x}$
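A minimal sketch of the estimators on made-up data:

```python
# Least squares estimates from the deviation-from-means formulas.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1 = y.mean() - b2 * x.mean()
y_hat = b1 + b2 * x
e_hat = y - y_hat            # residuals; sum(e_hat) is ~0 by construction
```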
Elasticity
$\eta = \dfrac{\text{percentage change in } y}{\text{percentage change in } x} = \dfrac{\Delta y / y}{\Delta x / x} = \dfrac{\Delta y}{\Delta x} \cdot \dfrac{x}{y}$

$\eta = \dfrac{\Delta E(y) / E(y)}{\Delta x / x} = \dfrac{\Delta E(y)}{\Delta x} \cdot \dfrac{x}{E(y)} = \beta_2 \cdot \dfrac{x}{E(y)}$
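A small sketch of the point elasticity evaluated at a chosen $x$, assuming illustrative fitted coefficients $b_1$ and $b_2$:

```python
# Point elasticity in the linear model: eta = b2 * x / E(y) evaluated at x0,
# with E(y) estimated by the fitted value b1 + b2*x0. Values illustrative.
b1, b2 = 0.2, 1.95
x0 = 3.0
eta = b2 * x0 / (b1 + b2 * x0)
```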
Least Squares Expressions Useful for Theory
$b_2 = \beta_2 + \sum w_i e_i$

$w_i = \dfrac{x_i - \bar{x}}{\sum (x_i - \bar{x})^2}$

$\sum w_i = 0, \quad \sum w_i x_i = 1, \quad \sum w_i^2 = 1 \big/ \sum (x_i - \bar{x})^2$
Properties of the Least Squares Estimators
$\mathrm{var}(b_1) = \sigma^2 \dfrac{\sum x_i^2}{N \sum (x_i - \bar{x})^2}$

$\mathrm{var}(b_2) = \dfrac{\sigma^2}{\sum (x_i - \bar{x})^2}$

$\mathrm{cov}(b_1, b_2) = \sigma^2 \dfrac{-\bar{x}}{\sum (x_i - \bar{x})^2}$
Gauss-Markov Theorem: Under the assumptions SR1–SR5 of the linear regression model, the estimators $b_1$ and $b_2$ have the smallest variance of all linear and unbiased estimators of $\beta_1$ and $\beta_2$. They are the Best Linear Unbiased Estimators (BLUE) of $\beta_1$ and $\beta_2$.

If we make the normality assumption, assumption SR6, about the error term, then the least squares estimators are normally distributed:
$b_1 \sim N\!\left(\beta_1,\ \dfrac{\sigma^2 \sum x_i^2}{N \sum (x_i - \bar{x})^2}\right), \qquad b_2 \sim N\!\left(\beta_2,\ \dfrac{\sigma^2}{\sum (x_i - \bar{x})^2}\right)$
Estimated Error Variance
$\hat{\sigma}^2 = \dfrac{\sum \hat{e}_i^2}{N - 2}$
Estimator Standard Errors
$\mathrm{se}(b_1) = \sqrt{\widehat{\mathrm{var}}(b_1)}, \qquad \mathrm{se}(b_2) = \sqrt{\widehat{\mathrm{var}}(b_2)}$
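A sketch computing $\hat{\sigma}^2$, the estimated variances, and the standard errors on made-up data:

```python
# Estimated error variance and standard errors for the simple regression.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
N = len(x)

b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1 = y.mean() - b2 * x.mean()
e_hat = y - (b1 + b2 * x)

sigma2_hat = np.sum(e_hat ** 2) / (N - 2)
var_b2 = sigma2_hat / np.sum((x - x.mean()) ** 2)
var_b1 = sigma2_hat * np.sum(x ** 2) / (N * np.sum((x - x.mean()) ** 2))
se_b1, se_b2 = np.sqrt(var_b1), np.sqrt(var_b2)
```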
t-distribution
If assumptions SR1–SR6 of the simple linear regression model hold, then
$t = \dfrac{b_k - \beta_k}{\mathrm{se}(b_k)} \sim t_{(N-2)}, \quad k = 1, 2$
Interval Estimates
$P\left[b_2 - t_c\,\mathrm{se}(b_2) \le \beta_2 \le b_2 + t_c\,\mathrm{se}(b_2)\right] = 1 - \alpha$
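A sketch of the interval estimate, with the critical value $t_c$ taken from scipy's $t_{(N-2)}$ distribution (the estimates below are illustrative):

```python
# (1 - alpha)*100% interval estimate for beta_2: b2 +/- t_c * se(b2).
from scipy.stats import t

N, alpha = 5, 0.05
b2, se_b2 = 1.97, 0.12                    # assumed estimates
t_c = t.ppf(1 - alpha / 2, df=N - 2)      # two-tail critical value
ci = (b2 - t_c * se_b2, b2 + t_c * se_b2)
```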
Hypothesis Testing
Components of Hypothesis Tests
$t = \dfrac{b_2 - c}{\mathrm{se}(b_2)} \sim t_{(N-2)}$

Rejection rule for a two-tail test: If the value of the test statistic falls in the rejection region, either tail of the $t$-distribution, then we reject the null hypothesis and accept the alternative.

Type I error: The null hypothesis is true and we decide to reject it.

Type II error: The null hypothesis is false and we decide not to reject it.

p-value rejection rule: When the p-value of a hypothesis test is smaller than the chosen value of $\alpha$, then the test procedure leads to rejection of the null hypothesis.
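A sketch of the two-tail test with the p-value rejection rule (the estimates and null value $c$ are illustrative):

```python
# Two-tail t test of H0: beta_2 = c, with p-value from the t(N-2) distribution.
from scipy.stats import t

N = 5
b2, se_b2, c = 1.97, 0.12, 0.0            # assumed estimates and null value
t_stat = (b2 - c) / se_b2
p_value = 2 * (1 - t.cdf(abs(t_stat), df=N - 2))
reject_H0 = p_value < 0.05                # p-value rejection rule at alpha = 0.05
```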
Prediction
$y_0 = \beta_1 + \beta_2 x_0 + e_0, \qquad \hat{y}_0 = b_1 + b_2 x_0, \qquad f = \hat{y}_0 - y_0$

$\widehat{\mathrm{var}}(f) = \hat{\sigma}^2 \left[1 + \dfrac{1}{N} + \dfrac{(x_0 - \bar{x})^2}{\sum (x_i - \bar{x})^2}\right], \qquad \mathrm{se}(f) = \sqrt{\widehat{\mathrm{var}}(f)}$

A $(1 - \alpha) \times 100\%$ confidence interval, or prediction interval, for $y_0$:
$\hat{y}_0 \pm t_c\,\mathrm{se}(f)$
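A sketch of the point prediction and prediction interval on made-up data:

```python
# Prediction at x0 with a (1 - alpha)*100% prediction interval.
import numpy as np
from scipy.stats import t

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
N, x0, alpha = len(x), 6.0, 0.05

b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1 = y.mean() - b2 * x.mean()
sigma2_hat = np.sum((y - b1 - b2 * x) ** 2) / (N - 2)

y0_hat = b1 + b2 * x0
var_f = sigma2_hat * (1 + 1 / N + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
t_c = t.ppf(1 - alpha / 2, df=N - 2)
interval = (y0_hat - t_c * np.sqrt(var_f), y0_hat + t_c * np.sqrt(var_f))
```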
Goodness of Fit
$\sum (y_i - \bar{y})^2 = \sum (\hat{y}_i - \bar{y})^2 + \sum \hat{e}_i^2$
$SST = SSR + SSE$

$R^2 = \dfrac{SSR}{SST} = 1 - \dfrac{SSE}{SST} = \left(\mathrm{corr}(y, \hat{y})\right)^2$
Log-Linear Model
$\ln(y) = \beta_1 + \beta_2 x + e, \qquad \widehat{\ln(y)} = b_1 + b_2 x$

$100 \cdot b_2 \approx$ % change in $y$ given a one-unit change in $x$

$\hat{y}_n = \exp(b_1 + b_2 x), \qquad \hat{y}_c = \exp(b_1 + b_2 x)\exp(\hat{\sigma}^2 / 2)$

Prediction interval:
$\left[\exp\!\left(\widehat{\ln(y)} - t_c\,\mathrm{se}(f)\right),\ \exp\!\left(\widehat{\ln(y)} + t_c\,\mathrm{se}(f)\right)\right]$
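A sketch of the natural and corrected predictors, assuming illustrative estimates $b_1$, $b_2$, and $\hat{\sigma}^2$:

```python
# Log-linear model predictions: natural predictor y_n and corrected y_c.
import numpy as np

b1, b2, sigma2_hat = 0.5, 0.08, 0.04      # assumed estimates
x0 = 10.0

ln_y_hat = b1 + b2 * x0
y_n = np.exp(ln_y_hat)                    # natural predictor
y_c = y_n * np.exp(sigma2_hat / 2)        # corrected predictor
# Prediction interval: exponentiate the endpoints of the interval for ln(y),
# i.e. (exp(ln_y_hat - t_c*se_f), exp(ln_y_hat + t_c*se_f)).
```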
Generalized goodness-of-fit measure: $R_g^2 = \left(\mathrm{corr}(y, \hat{y}_n)\right)^2$

Assumptions of the Multiple Regression Model
MR1 $y_i = \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK} + e_i$
MR2 $E(y_i) = \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK}$, $E(e_i) = 0$
MR3 $\mathrm{var}(y_i) = \mathrm{var}(e_i) = \sigma^2$
MR4 $\mathrm{cov}(y_i, y_j) = \mathrm{cov}(e_i, e_j) = 0$
MR5 The values of $x_{ik}$ are not random and are not exact linear functions of the other explanatory variables.
MR6 $y_i \sim N\!\left(\beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK},\ \sigma^2\right)$, $e_i \sim N(0, \sigma^2)$
Least Squares Estimates in MR Model
Least squares estimates $b_1, b_2, \ldots, b_K$ minimize
$S(b_1, b_2, \ldots, b_K) = \sum \left(y_i - b_1 - b_2 x_{i2} - \cdots - b_K x_{iK}\right)^2$
Estimated Error Variance and Estimator Standard Errors
$\hat{\sigma}^2 = \dfrac{\sum \hat{e}_i^2}{N - K}, \qquad \mathrm{se}(b_k) = \sqrt{\widehat{\mathrm{var}}(b_k)}$
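A sketch of the multiple regression estimates and standard errors, using $\hat{\sigma}^2 (X'X)^{-1}$ for the estimator covariance (the data are illustrative):

```python
# Multiple regression by least squares, with sigma^2_hat and standard errors
# from the diagonal of sigma^2_hat * (X'X)^(-1).
import numpy as np

X = np.array([[1, 2.0, 1.0],              # first column of ones for the intercept
              [1, 3.0, 2.0],
              [1, 5.0, 2.5],
              [1, 7.0, 3.0],
              [1, 8.0, 4.5]])
y = np.array([3.0, 5.1, 7.8, 10.2, 12.1])
N, K = X.shape

b = np.linalg.solve(X.T @ X, X.T @ y)     # least squares estimates b_1, ..., b_K
e_hat = y - X @ b
sigma2_hat = e_hat @ e_hat / (N - K)
se_b = np.sqrt(sigma2_hat * np.diag(np.linalg.inv(X.T @ X)))
```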