Statistical Analysis: Hypothesis Testing and Confidence Intervals Using the t-Statistic (Introduction to Econometrics)

An explanation of hypothesis testing and confidence intervals using the t-statistic. It covers simple random sampling, the properties of the sampling distribution of the sample mean Ȳ, and the large-n distribution of the t-statistic, including the calculation of the p-value and its role in hypothesis testing.

Chapter 2: Review of Probability and Statistics

In this chapter we review some of the statistical concepts used in this course. The concepts are illustrated using the example of class size and educational output.

Empirical problem: Class size and educational output

• Policy question: What is the effect on test scores of reducing class size by one student per class?
• We must use data to find out (is there any way to answer this without data?)

Do districts with smaller classes have higher test scores?

[Figure: scatterplot of test score v. student–teacher ratio. What does this figure show?]

We need to get some numerical evidence on whether districts with low STRs have higher test scores – but how? (Note that this is different from the original policy question. Why?)

1. Compare average test scores in districts with low STRs to those with high STRs ("estimation")
2. Test the "null" hypothesis that the mean test scores in the two types of districts are the same, against the "alternative" hypothesis that they differ ("hypothesis testing")
3. Estimate an interval for the difference in the mean test scores, high v. low STR districts ("confidence interval")

Initial data analysis: Compare districts with "small" (STR < 20) and "large" (STR ≥ 20) class sizes:

1. Estimation of ∆ = difference between group means
2. Test the hypothesis that ∆ = 0
3. Construct a confidence interval for ∆

Class size | Average score (Ȳ) | Standard deviation (sY) |   n
Small      |             657.4 |                    19.4 | 238
Large      |             650.0 |                    17.9 | 182

1. Estimation: the difference in sample means is Ȳs − Ȳl = 657.4 − 650.0 = 7.4.

2. Hypothesis testing: compute the difference-of-means t-statistic:

t = (Ȳs − Ȳl) / √(s²s/ns + s²l/nl) = (657.4 − 650.0) / √(19.4²/238 + 17.9²/182) = 7.4/1.83 = 4.05

|t| > 1.96, so reject (at the 5% significance level) the null hypothesis that the two means are the same.

3. Confidence interval: a 95% confidence interval for the difference between the means is

(Ȳs − Ȳl) ± 1.96 × SE(Ȳs − Ȳl) = 7.4 ± 1.96 × 1.83 = (3.8, 11.0)

Two equivalent statements:
1. The 95% confidence interval for ∆ doesn't include 0;
2. The hypothesis that ∆ = 0 is rejected at the 5% level.

(A numerical check of these three steps appears in the sketch below.)
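As a quick check of the estimation, test, and interval above, here is a minimal sketch in Python (assuming NumPy is available; the only inputs are the group summaries from the table):

```python
import numpy as np

# Group summary statistics from the table above
y_small, s_small, n_small = 657.4, 19.4, 238   # "small" districts (STR < 20)
y_large, s_large, n_large = 650.0, 17.9, 182   # "large" districts (STR >= 20)

# 1. Estimation: difference in group means
delta_hat = y_small - y_large                                # 7.4

# 2. Hypothesis test: difference-of-means t-statistic
se = np.sqrt(s_small**2 / n_small + s_large**2 / n_large)    # approx. 1.83
t = delta_hat / se                                           # approx. 4.05

# 3. 95% confidence interval using the large-n normal critical value 1.96
lo, hi = delta_hat - 1.96 * se, delta_hat + 1.96 * se        # approx. (3.8, 11.0)

print(f"delta_hat = {delta_hat:.1f}, SE = {se:.2f}, t = {t:.2f}, CI = ({lo:.1f}, {hi:.1f})")
```

Running this reproduces the slide's numbers: t ≈ 4.05 and a 95% interval of roughly (3.8, 11.0).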
What comes next…

• The mechanics of estimation, hypothesis testing, and confidence intervals should be familiar
• These concepts extend directly to regression and its variants
• Before turning to regression, however, we will review some of the underlying theory of estimation, hypothesis testing, and confidence intervals:
  • Why do these procedures work, and why use these rather than others?
• So we will review the intellectual foundations of statistics and econometrics

Population distribution of Y

• The probabilities of the different values of Y that occur in the population, for example Pr[Y = 650] (when Y is discrete)
• or: the probabilities of sets of these values, for example Pr[640 ≤ Y ≤ 660] (when Y is continuous)

Moments of a population distribution: mean, variance, standard deviation, covariance, correlation

mean = expected value (expectation) of Y = E(Y) = µY = long-run average value of Y over repeated realizations of Y
variance = E[(Y − µY)²] = σ²Y = measure of the squared spread of the distribution
standard deviation = √variance = σY

Moments, ctd.

skewness = E[(Y − µY)³] / σ³Y = measure of asymmetry of a distribution
• skewness = 0: distribution is symmetric
• skewness > (<) 0: distribution has a long right (left) tail

kurtosis = E[(Y − µY)⁴] / σ⁴Y = measure of mass in tails = measure of probability of large values
• kurtosis = 3: normal distribution
• kurtosis > 3: heavy tails ("leptokurtotic")

Covariance and correlation

covariance = E[(X − µX)(Z − µZ)] = σXZ = measure of the linear association between X and Z

The covariance between Test Score and STR is negative, and so is the correlation.

The correlation coefficient is defined in terms of the covariance:

corr(X,Z) = cov(X,Z) / √(var(X) var(Z)) = σXZ / (σX σZ) = rXZ

• −1 ≤ corr(X,Z) ≤ 1
• corr(X,Z) = 1 means perfect positive linear association
• corr(X,Z) = −1 means perfect negative linear association
• corr(X,Z) = 0 means no linear association

The correlation coefficient measures linear association.

[Figure: four scatterplots of y against x illustrating (a) correlation = +0.9, (b) correlation = −0.8, (c) correlation = 0.0, and (d) correlation = 0.0 for a quadratic, i.e., nonlinear, relationship.]

Distribution of a sample of data drawn randomly from a population: Y1, …, Yn

We will assume simple random sampling:
• Choose an individual (district, entity) at random from the population

Randomness and data:
• Prior to selection, the value of Y is random because the individual selected is random
• Once the individual is selected and the value of Y is observed, Y is just a number – not random
• The data set is (Y1, Y2, …, Yn), where Yi = value of Y for the i-th individual (district, entity) sampled

Distribution of Y1, …, Yn under simple random sampling

• Because individuals #1 and #2 are selected at random, the value of Y1 has no information content for Y2. Thus:
  • Y1 and Y2 are independently distributed
  • Y1 and Y2 come from the same distribution; that is, Y1 and Y2 are identically distributed
• That is, under simple random sampling, Y1 and Y2 are independently and identically distributed (i.i.d.)
• More generally, under simple random sampling, {Yi}, i = 1, …, n, are i.i.d.

This framework allows rigorous statistical inferences about moments of population distributions using a sample of data from that population…

1. The probability framework for statistical inference
2. Estimation
3. Testing
4. Confidence intervals

Estimation

Ȳ is the natural estimator of the mean. But: what are the properties of Ȳ, i.e., the sampling distribution of Ȳ?

Things we want to know about the sampling distribution:
• What is the mean of Ȳ?
  • If E(Ȳ) = µY (the true value), then Ȳ is an unbiased estimator of µY
• What is the variance of Ȳ?
  • How does var(Ȳ) depend on n?
  • Does Ȳ become close to µY when n is large?
  • Law of large numbers: Ȳ is a consistent estimator of µY
• Ȳ − µY appears bell-shaped for n large… is this generally true?
  • In fact, Ȳ − µY is approximately normally distributed for n large (Central Limit Theorem)

(The simulation sketch below previews these properties.)
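Here is a minimal simulation sketch of these properties (the exponential population, the seed, and the sample sizes are illustrative choices, not from the slides; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_Y = 1.0        # population mean of an Exponential(1) distribution (skewed, non-normal)
n_reps = 10_000   # number of repeated samples used to trace out the sampling distribution

for n in (5, 50, 500):
    # Draw n_reps independent samples of size n and compute Ybar for each
    samples = rng.exponential(scale=mu_Y, size=(n_reps, n))
    ybar = samples.mean(axis=1)
    # For Exponential(scale), sigma^2 = scale^2, so var(Ybar) should be mu_Y**2 / n
    print(f"n = {n:4d}: mean of Ybar = {ybar.mean():.3f} (vs mu_Y = {mu_Y}), "
          f"var of Ybar = {ybar.var():.4f} (vs sigma^2/n = {mu_Y**2 / n:.4f})")
```

The printed means sit at µY for every n (unbiasedness), the variances track σ²Y/n, and a histogram of ybar (e.g., with matplotlib) looks increasingly bell-shaped as n grows – the results derived next.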
The mean and variance of the sampling distribution of Ȳ

General case – that is, for Yi i.i.d. from any distribution, not just Bernoulli:

mean: E(Ȳ) = E((1/n) Σi Yi) = (1/n) Σi E(Yi) = (1/n) Σi µY = µY

variance: var(Ȳ) = E[(Ȳ − E(Ȳ))²] = E[(Ȳ − µY)²]
= E[((1/n) Σi (Yi − µY))²]
= E[(1/n) Σi (Yi − µY) × (1/n) Σj (Yj − µY)]
= (1/n²) Σi Σj E[(Yi − µY)(Yj − µY)]
= (1/n²) Σi Σj cov(Yi, Yj)
= (1/n²) Σi σ²Y      (by independence, cov(Yi, Yj) = 0 for i ≠ j, leaving only the n terms with i = j)
= σ²Y / n

The Law of Large Numbers:

An estimator is consistent if the probability that it falls within an interval of the true population value tends to one as the sample size increases.

If (Y1, …, Yn) are i.i.d. and σ²Y < ∞, then Ȳ is a consistent estimator of µY, that is,

Pr[|Ȳ − µY| < ε] → 1 as n → ∞,

which can be written Ȳ →p µY ("Ȳ →p µY" means "Ȳ converges in probability to µY").

(The math: as n → ∞, var(Ȳ) = σ²Y/n → 0, which implies that Pr[|Ȳ − µY| < ε] → 1.)

The Central Limit Theorem (CLT):

If (Y1, …, Yn) are i.i.d. and 0 < σ²Y < ∞, then when n is large the distribution of Ȳ is well approximated by a normal distribution.

• Ȳ is approximately distributed N(µY, σ²Y/n) ("normal distribution with mean µY and variance σ²Y/n")
• √n(Ȳ − µY)/σY is approximately distributed N(0, 1) (standard normal)
• That is, "standardized" Ȳ = (Ȳ − E(Ȳ))/√var(Ȳ) = (Ȳ − µY)/(σY/√n) is approximately distributed as N(0, 1)
• The larger is n, the better is the approximation.

Summary: The sampling distribution of Ȳ

For Y1, …, Yn i.i.d. with 0 < σ²Y < ∞:
• The exact (finite-sample) sampling distribution of Ȳ has mean µY ("Ȳ is an unbiased estimator of µY") and variance σ²Y/n
• Other than its mean and variance, the exact distribution of Ȳ is complicated and depends on the distribution of Y (the population distribution)
• When n is large, the sampling distribution simplifies:
  • Ȳ →p µY (law of large numbers)
  • (Ȳ − E(Ȳ))/√var(Ȳ) is approximately N(0, 1) (CLT)

Calculating the p-value, ctd.

(Recall: the p-value is the probability, computed under the null hypothesis H0: E(Y) = µY,0, of drawing a value of Ȳ at least as far from µY,0 as the value Ȳact actually computed from the sample.)

• To compute the p-value, you need to know the sampling distribution of Ȳ, which is complicated if n is small.
• If n is large, you can use the normal approximation (CLT):

p-value = PrH0[|Ȳ − µY,0| > |Ȳact − µY,0|]
= PrH0[|(Ȳ − µY,0)/(σY/√n)| > |(Ȳact − µY,0)/(σY/√n)|]
= PrH0[|(Ȳ − µY,0)/σȲ| > |(Ȳact − µY,0)/σȲ|]
≅ probability under the left + right N(0,1) tails,

where σȲ = std. dev. of the distribution of Ȳ = σY/√n.

Calculating the p-value with σY known:

• For large n, the p-value is the probability that a N(0,1) random variable falls outside |(Ȳact − µY,0)/σȲ|
• In practice, σY is unknown – it must be estimated

Estimator of the variance of Y:

s²Y = (1/(n−1)) Σi (Yi − Ȳ)² = "sample variance of Y"

Fact: If (Y1, …, Yn) are i.i.d. and E(Y⁴) < ∞, then s²Y →p σ²Y.

Why does the law of large numbers apply?
• Because s²Y is a sample average; see Appendix 3.3
• Technical note: we assume E(Y⁴) < ∞ because here the average is not of Yi, but of its square; see Appendix 3.3
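Putting these pieces together, here is a minimal large-n p-value sketch (the data vector y, the null value mu_0, and the simulated example are hypothetical; SciPy's standard normal survival function supplies the N(0,1) tail probability):

```python
import numpy as np
from scipy.stats import norm

def pvalue_large_n(y: np.ndarray, mu_0: float) -> float:
    """Two-sided p-value for H0: E(Y) = mu_0 using the CLT normal approximation."""
    n = len(y)
    ybar = y.mean()
    s_y = y.std(ddof=1)             # sample std dev s_Y, dividing by n - 1
    se_ybar = s_y / np.sqrt(n)      # estimated std dev of Ybar
    t = (ybar - mu_0) / se_ybar     # t-statistic
    return 2 * norm.sf(abs(t))      # left + right N(0,1) tail probability

# Example with simulated data (the population and null value are hypothetical)
rng = np.random.default_rng(1)
y = rng.normal(loc=655, scale=19, size=200)
print(pvalue_large_n(y, mu_0=650.0))
```

Note that the function replaces the unknown σY with sY, exactly as the slides describe – which raises the question answered next.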
What happened to the t-table and the degrees of freedom?

Digression: the Student t distribution

If Yi, i = 1, …, n is i.i.d. N(µY, σ²Y), then the t-statistic has the Student t distribution with n − 1 degrees of freedom. The critical values of the Student t distribution are tabulated in the back of all statistics books. Remember the recipe?

1. Compute the t-statistic
2. Compute the degrees of freedom, which is n − 1
3. Look up the 5% critical value
4. If the t-statistic exceeds (in absolute value) this critical value, reject the null hypothesis.

At this point, you might be wondering…

Comments on this recipe and the Student t distribution

1. The theory of the t-distribution was one of the early triumphs of mathematical statistics. It is astounding, really: if Y is i.i.d. normal, then you can know the exact, finite-sample distribution of the t-statistic – it is the Student t. So you can construct confidence intervals (using the Student t critical value) that have exactly the right coverage rate, no matter what the sample size. This result was really useful in times when "computer" was a job title, data collection was expensive, and the number of observations was perhaps a dozen. It is also a conceptually beautiful result, and the math is beautiful too – which is probably why stats profs love to teach the t-distribution. But…

2. If the sample size is moderate (several dozen) or large (hundreds or more), the difference between the t-distribution and N(0,1) critical values is negligible. Here are some 5% critical values for two-sided tests:

degrees of freedom (n − 1) | 5% t-distribution critical value
                        10 | 2.23
                        20 | 2.09
                        30 | 2.04
                        60 | 2.00
                         ∞ | 1.96

4. You might not know this. Consider the t-statistic testing the hypothesis that two means (groups s, l) are equal:

t = (Ȳs − Ȳl) / SE(Ȳs − Ȳl) = (Ȳs − Ȳl) / √(s²s/ns + s²l/nl)

Even if the population distribution of Y in the two groups is normal, this statistic doesn't have a Student t distribution! There is a statistic testing this hypothesis that has an exact Student t distribution, the "pooled variance" t-statistic – see SW (Section 3.6) – however, the pooled variance t-statistic is only valid if the variances of the normal distributions are the same in the two groups. Would you expect this to be true, say, for men's v. women's wages?

The Student t distribution – summary

• The assumption that Y is distributed N(µY, σ²Y) is rarely plausible in practice (income? number of children?)
• For n > 30, the t-distribution and N(0,1) are very close (as n grows large, the tn−1 distribution converges to N(0,1))
• The t-distribution is an artifact from days when sample sizes were small and "computers" were people
• For historical reasons, statistical software typically uses the t-distribution to compute p-values – but this is irrelevant when the sample size is moderate or large
• For these reasons, in this class we will focus on the large-n approximation given by the CLT

1. The probability framework for statistical inference
2. Estimation
3. Testing
4. Confidence intervals

Confidence Intervals

A 95% confidence interval for µY is an interval that contains the true value of µY in 95% of repeated samples.

Digression: What is random here? The values of Y1, …, Yn and thus any functions of them – including the confidence interval. The confidence interval will differ from one sample to the next. The population parameter, µY, is not random; we just don't know it.
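To make "contains the true value in 95% of repeated samples" concrete, here is a minimal coverage sketch (the population values mu_Y, sigma_Y, and n are hypothetical; the interval uses the large-n formula Ȳ ± 1.96·sY/√n):

```python
import numpy as np

rng = np.random.default_rng(2)
mu_Y, sigma_Y, n = 650.0, 19.0, 100   # hypothetical population values
n_reps = 10_000                       # number of repeated samples

covered = 0
for _ in range(n_reps):
    y = rng.normal(mu_Y, sigma_Y, size=n)         # one sample -> one (random) interval
    ybar, se = y.mean(), y.std(ddof=1) / np.sqrt(n)
    lo, hi = ybar - 1.96 * se, ybar + 1.96 * se
    covered += (lo <= mu_Y <= hi)                 # does this interval contain mu_Y?

print(f"Coverage over {n_reps} repeated samples: {covered / n_reps:.3f}")  # approx. 0.95
```

The interval, not µY, is what varies from sample to sample – exactly the point of the digression above.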