Principles of Econometrics Cheat Sheet

This cheat sheet collects the main formulas for the Principles of Econometrics exam.

Typology: Cheat Sheet

2019/2020

Uploaded on 10/09/2020

freddye

The Rules of Summation

\sum_{i=1}^{n} x_i = x_1 + x_2 + \cdots + x_n
\sum_{i=1}^{n} a = na
\sum_{i=1}^{n} a x_i = a \sum_{i=1}^{n} x_i
\sum_{i=1}^{n} (x_i + y_i) = \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i
\sum_{i=1}^{n} (a x_i + b y_i) = a \sum_{i=1}^{n} x_i + b \sum_{i=1}^{n} y_i
\sum_{i=1}^{n} (a + b x_i) = na + b \sum_{i=1}^{n} x_i
\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} = \frac{x_1 + x_2 + \cdots + x_n}{n}
\sum_{i=1}^{n} (x_i - \bar{x}) = 0
\sum_{i=1}^{2} \sum_{j=1}^{3} f(x_i, y_j) = \sum_{i=1}^{2} [f(x_i, y_1) + f(x_i, y_2) + f(x_i, y_3)]
  = f(x_1, y_1) + f(x_1, y_2) + f(x_1, y_3) + f(x_2, y_1) + f(x_2, y_2) + f(x_2, y_3)

Expected Values & Variances

E(X) = x_1 f(x_1) + x_2 f(x_2) + \cdots + x_n f(x_n) = \sum_{i=1}^{n} x_i f(x_i) = \sum_x x f(x)
E[g(X)] = \sum_x g(x) f(x)
E[g_1(X) + g_2(X)] = \sum_x [g_1(x) + g_2(x)] f(x) = \sum_x g_1(x) f(x) + \sum_x g_2(x) f(x) = E[g_1(X)] + E[g_2(X)]
E(c) = c
E(cX) = c E(X)
E(a + cX) = a + c E(X)
var(X) = \sigma^2 = E[X - E(X)]^2 = E(X^2) - [E(X)]^2
var(a + cX) = E[(a + cX) - E(a + cX)]^2 = c^2 var(X)

Marginal and Conditional Distributions

f(x) = \sum_y f(x, y) for each value X can take
f(y) = \sum_x f(x, y) for each value Y can take
f(x|y) = P[X = x \mid Y = y] = \frac{f(x, y)}{f(y)}

If X and Y are independent random variables, then f(x, y) = f(x) f(y) for each and every pair of values x and y. The converse is also true.

If X and Y are independent random variables, then the conditional probability density function of X given that Y = y is
f(x|y) = \frac{f(x, y)}{f(y)} = \frac{f(x) f(y)}{f(y)} = f(x)
for each and every pair of values x and y. The converse is also true.
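The expected-value and variance rules can be checked numerically. A minimal sketch, using a made-up discrete distribution (the values and probabilities below are illustrative, not from the text):

```python
# Numerical check of E(a + cX) = a + cE(X) and var(a + cX) = c^2 var(X)
# for a small made-up discrete distribution.
x = [1, 2, 3]
p = [0.2, 0.5, 0.3]

def expect(vals, probs):
    """E[g(X)] = sum over x of g(x) f(x)."""
    return sum(v * q for v, q in zip(vals, probs))

EX = expect(x, p)                              # E(X)
EX2 = expect([v * v for v in x], p)            # E(X^2)
varX = EX2 - EX ** 2                           # var(X) = E(X^2) - [E(X)]^2

a, c = 4.0, 2.0
E_lin = expect([a + c * v for v in x], p)      # E(a + cX)
assert abs(E_lin - (a + c * EX)) < 1e-12       # linearity of expectation
var_lin = expect([(a + c * v - E_lin) ** 2 for v in x], p)
assert abs(var_lin - c * c * varX) < 1e-12     # var(a + cX) = c^2 var(X)
```

Here E(X) = 2.1 and var(X) = 0.49, and both identities hold exactly up to floating-point error.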
Expectations, Variances & Covariances

cov(X, Y) = E[(X - E[X])(Y - E[Y])] = \sum_x \sum_y [x - E(X)][y - E(Y)] f(x, y)
\rho = \frac{cov(X, Y)}{\sqrt{var(X)\, var(Y)}}
E(c_1 X + c_2 Y) = c_1 E(X) + c_2 E(Y)
E(X + Y) = E(X) + E(Y)
var(aX + bY + cZ) = a^2 var(X) + b^2 var(Y) + c^2 var(Z) + 2ab\, cov(X, Y) + 2ac\, cov(X, Z) + 2bc\, cov(Y, Z)

If X, Y, and Z are independent, or uncorrelated, random variables, then the covariance terms are zero and:
var(aX + bY + cZ) = a^2 var(X) + b^2 var(Y) + c^2 var(Z)

Normal Probabilities

If X \sim N(\mu, \sigma^2), then Z = \frac{X - \mu}{\sigma} \sim N(0, 1)
If X \sim N(\mu, \sigma^2) and a is a constant, then P(X \ge a) = P\left(Z \ge \frac{a - \mu}{\sigma}\right)
If X \sim N(\mu, \sigma^2) and a and b are constants, then P(a \le X \le b) = P\left(\frac{a - \mu}{\sigma} \le Z \le \frac{b - \mu}{\sigma}\right)

Assumptions of the Simple Linear Regression Model

SR1 The value of y, for each value of x, is y = \beta_1 + \beta_2 x + e
SR2 The average value of the random error e is E(e) = 0, since we assume that E(y) = \beta_1 + \beta_2 x
SR3 The variance of the random error e is var(e) = \sigma^2 = var(y)
SR4 The covariance between any pair of random errors e_i and e_j is cov(e_i, e_j) = cov(y_i, y_j) = 0
SR5 The variable x is not random and must take at least two different values.
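The standardization rule for normal probabilities can be sketched in a few lines. Writing the standard normal CDF via the error function (the μ = 10, σ = 2 example is made up):

```python
from math import erf, sqrt

# If X ~ N(mu, sigma^2), then P(a <= X <= b) = P((a-mu)/sigma <= Z <= (b-mu)/sigma),
# where Z is standard normal with CDF Phi(z) = (1 + erf(z / sqrt(2))) / 2.

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def normal_prob(a, b, mu, sigma):
    """P(a <= X <= b) for X ~ N(mu, sigma^2), via standardization."""
    return phi((b - mu) / sigma) - phi((a - mu) / sigma)

# Made-up example: X ~ N(10, 4); the interval mu +/- 1.96*sigma covers about 95%.
p = normal_prob(10 - 1.96 * 2, 10 + 1.96 * 2, mu=10, sigma=2)
```

The value of `p` comes out near 0.95, matching the familiar two-sided 1.96 critical value.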
SR6 (optional) The values of e are normally distributed about their mean: e \sim N(0, \sigma^2)

Least Squares Estimation

If b_1 and b_2 are the least squares estimates, then
\hat{y}_i = b_1 + b_2 x_i
\hat{e}_i = y_i - \hat{y}_i = y_i - b_1 - b_2 x_i

The Normal Equations

N b_1 + \left(\sum x_i\right) b_2 = \sum y_i
\left(\sum x_i\right) b_1 + \left(\sum x_i^2\right) b_2 = \sum x_i y_i

Least Squares Estimators

b_2 = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sum (x_i - \bar{x})^2}
b_1 = \bar{y} - b_2 \bar{x}

Elasticity

\eta = \frac{\text{percentage change in } y}{\text{percentage change in } x} = \frac{\Delta y / y}{\Delta x / x} = \frac{\Delta y}{\Delta x} \cdot \frac{x}{y}
\eta = \frac{\Delta E(y)/E(y)}{\Delta x / x} = \frac{\Delta E(y)}{\Delta x} \cdot \frac{x}{E(y)} = \beta_2 \cdot \frac{x}{E(y)}

Least Squares Expressions Useful for Theory

b_2 = \beta_2 + \sum w_i e_i, where w_i = \frac{x_i - \bar{x}}{\sum (x_i - \bar{x})^2}
\sum w_i = 0, \quad \sum w_i x_i = 1, \quad \sum w_i^2 = 1 / \sum (x_i - \bar{x})^2

Properties of the Least Squares Estimators

var(b_1) = \sigma^2 \left[ \frac{\sum x_i^2}{N \sum (x_i - \bar{x})^2} \right]
var(b_2) = \frac{\sigma^2}{\sum (x_i - \bar{x})^2}
cov(b_1, b_2) = \sigma^2 \left[ \frac{-\bar{x}}{\sum (x_i - \bar{x})^2} \right]

Gauss-Markov Theorem: Under assumptions SR1–SR5 of the linear regression model, the estimators b_1 and b_2 have the smallest variance of all linear and unbiased estimators of \beta_1 and \beta_2. They are the Best Linear Unbiased Estimators (BLUE) of \beta_1 and \beta_2.

If we make the normality assumption, assumption SR6, about the error term, then the least squares estimators are normally distributed:
b_1 \sim N\left( \beta_1, \frac{\sigma^2 \sum x_i^2}{N \sum (x_i - \bar{x})^2} \right), \quad b_2 \sim N\left( \beta_2, \frac{\sigma^2}{\sum (x_i - \bar{x})^2} \right)

Estimated Error Variance

\hat{\sigma}^2 = \frac{\sum \hat{e}_i^2}{N - 2}

Estimator Standard Errors

se(b_1) = \sqrt{\widehat{var}(b_1)}, \quad se(b_2) = \sqrt{\widehat{var}(b_2)}

t-distribution

If assumptions SR1–SR6 of the simple linear regression model hold, then
t = \frac{b_k - \beta_k}{se(b_k)} \sim t_{(N-2)}, \quad k = 1, 2

Interval Estimates

P[b_2 - t_c\, se(b_2) \le \beta_2 \le b_2 + t_c\, se(b_2)] = 1 - \alpha

Hypothesis Testing

Components of Hypothesis Tests
1. A null hypothesis, H_0
2. An alternative hypothesis, H_1
3. A test statistic
4. A rejection region
5. A conclusion

If the null hypothesis H_0: \beta_2 = c is true, then
t = \frac{b_2 - c}{se(b_2)} \sim t_{(N-2)}

Rejection rule for a two-tail test: If the value of the test statistic falls in the rejection region, either tail of the t-distribution, then we reject the null hypothesis and accept the alternative.
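The least squares formulas above translate directly into code. A minimal sketch on a made-up five-observation dataset (not from the text), computing the estimates, the estimated error variance, and the t-statistic for H_0: \beta_2 = 0:

```python
# OLS by the textbook formulas:
#   b2 = sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2),  b1 = ybar - b2*xbar,
#   sigma_hat^2 = sum(e_hat^2) / (N - 2),  se(b2) = sqrt(sigma_hat^2 / Sxx).
x = [1.0, 2.0, 3.0, 4.0, 5.0]     # made-up regressor
y = [2.1, 3.9, 6.2, 7.8, 10.1]    # made-up response
N = len(x)
xbar = sum(x) / N
ybar = sum(y) / N

Sxx = sum((xi - xbar) ** 2 for xi in x)
Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b2 = Sxy / Sxx                    # slope estimate
b1 = ybar - b2 * xbar             # intercept estimate

resid = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]
sigma2_hat = sum(e ** 2 for e in resid) / (N - 2)   # estimated error variance
se_b2 = (sigma2_hat / Sxx) ** 0.5                   # se(b2)
t_b2 = b2 / se_b2                                   # t statistic for H0: beta2 = 0
```

With these numbers b2 = 1.99 and b1 = 0.05, and the residuals sum to zero as the theory requires.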
Type I error: The null hypothesis is true and we decide to reject it.
Type II error: The null hypothesis is false and we decide not to reject it.
p-value rejection rule: When the p-value of a hypothesis test is smaller than the chosen value of \alpha, then the test procedure leads to rejection of the null hypothesis.

Prediction

y_0 = \beta_1 + \beta_2 x_0 + e_0, \quad \hat{y}_0 = b_1 + b_2 x_0, \quad f = \hat{y}_0 - y_0
\widehat{var}(f) = \hat{\sigma}^2 \left[ 1 + \frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i - \bar{x})^2} \right], \quad se(f) = \sqrt{\widehat{var}(f)}
A (1 - \alpha) \times 100% confidence interval, or prediction interval, for y_0: \hat{y}_0 \pm t_c\, se(f)

Goodness of Fit

\sum (y_i - \bar{y})^2 = \sum (\hat{y}_i - \bar{y})^2 + \sum \hat{e}_i^2
SST = SSR + SSE
R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST} = (corr(y, \hat{y}))^2

Log-Linear Model

\ln(y) = \beta_1 + \beta_2 x + e, \quad \widehat{\ln(y)} = b_1 + b_2 x
100 b_2 \approx the percentage change in y given a one-unit change in x
\hat{y}_n = \exp(b_1 + b_2 x)
\hat{y}_c = \exp(b_1 + b_2 x) \exp(\hat{\sigma}^2 / 2)
Prediction interval: \left[ \exp\left(\widehat{\ln(y)} - t_c\, se(f)\right),\; \exp\left(\widehat{\ln(y)} + t_c\, se(f)\right) \right]
Generalized goodness-of-fit measure: R_g^2 = (corr(y, \hat{y}_n))^2

Assumptions of the Multiple Regression Model

MR1 y_i = \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK} + e_i
MR2 E(y_i) = \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK}, \quad E(e_i) = 0
MR3 var(y_i) = var(e_i) = \sigma^2
MR4 cov(y_i, y_j) = cov(e_i, e_j) = 0
MR5 The values of x_{ik} are not random and are not exact linear functions of the other explanatory variables.
MR6 y_i \sim N(\beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK}, \sigma^2), \quad e_i \sim N(0, \sigma^2)

Least Squares Estimates in the MR Model

The least squares estimates b_1, b_2, \ldots, b_K minimize
S(b_1, b_2, \ldots, b_K) = \sum (y_i - b_1 - b_2 x_{i2} - \cdots - b_K x_{iK})^2

Estimated Error Variance and Estimator Standard Errors

\hat{\sigma}^2 = \frac{\sum \hat{e}_i^2}{N - K}
se(b_k) = \sqrt{\widehat{var}(b_k)}
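The sum-of-squares decomposition SST = SSR + SSE can be verified numerically for any OLS fit with an intercept. A minimal sketch, where the observed and fitted values below are made up (the fitted values are the OLS fit ŷ = 0.05 + 1.99x of this toy data):

```python
# Goodness-of-fit check: SST = SSR + SSE and R^2 = 1 - SSE/SST.
# y and y_hat are made-up observed and OLS-fitted values.
y     = [2.1, 3.9, 6.2, 7.8, 10.1]
y_hat = [2.04, 4.03, 6.02, 8.01, 10.00]
ybar = sum(y) / len(y)

SST = sum((yi - ybar) ** 2 for yi in y)                 # total sum of squares
SSR = sum((fi - ybar) ** 2 for fi in y_hat)             # explained sum of squares
SSE = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))   # residual sum of squares

R2 = 1.0 - SSE / SST
# The decomposition holds because OLS residuals (with an intercept)
# are orthogonal to the fitted values.
assert abs(SST - (SSR + SSE)) < 1e-9
```

Note that the decomposition, and hence R^2 = SSR/SST, relies on the fit including an intercept; for fits through the origin the two definitions of R^2 can disagree.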