Lecture 3: Confidence Intervals in Statistical Inference, Study notes of Statistics

An introduction to confidence intervals in statistical inference. It explains the concept of point estimates and the need for confidence intervals, the basics of calculating confidence intervals for the mean of a normal population, and the difference between the confidence level and the probability that the observed interval contains the true value. It also discusses the relationship between confidence level and interval width, and the concept of pivoting in deriving confidence intervals.

Typology: Study notes

2009/2010

Uploaded on 03/28/2010

koofers-user-hcm

Statistics 431: Statistical Inference
Lecture 3: Confidence intervals

Introduction

• A point estimate (e.g., the sample mean $\bar X$ estimating the population mean $\mu$) could be very precise, or not precise at all; we can't tell from the number alone.
• Instead of reporting a single estimate of $\mu$, we can report a range of plausible values based on the data: a confidence interval (CI) for $\mu$.
• Each CI has an associated confidence level, such as 90%, 95%, ...; the higher the confidence level, the more likely the CI is to contain $\mu$.
• A wide interval implies we don't have a good handle on $\mu$; a narrow interval implies $\mu$ is known precisely.
• To find the CI for a given confidence level, we need assumptions plus a probability calculation.

• Before the data are observed:
  - the CI is a random interval (in this case, centered at $\bar X$)
  - there is probability $1 - \alpha$ that the observed CI will cover $\mu$
  - note: the center is random but the width is not
• After the data are observed:
  - the CI is a fixed interval, determined by $x_1, \ldots, x_n$
  - this fixed interval either covers $\mu$ or it doesn't (no probability statement applies)

More on interpretation

• After the data are observed and a 95% CI is computed, nothing is random.
• In particular, .95 is not the probability that the observed interval contains $\mu$: the interval is now fixed, not random, and $\mu$ is an unknown constant, not a random variable.
• Meaning of .95: if I build 95% CIs from many independent samples of size $n$, then in the long run, 95% of those intervals will cover $\mu$, and 5% will not.

[Figure: repeated-sampling illustration of many confidence intervals, some covering $\mu$ and some missing it.]

Confidence vs. width

• Higher confidence (good) = wider interval (bad).
• The only way to get higher confidence and a narrower interval is to increase the sample size $n$.
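The long-run ("repeated sampling") interpretation of the 95% confidence level can be checked with a short simulation. This is a minimal sketch, not part of the original notes; the parameters $\mu = 5.0$, $\sigma = 3.5$, $n = 50$ are chosen to echo the iris example used in these notes, and only the standard library is used.

```python
import math
import random

def coverage(mu=5.0, sigma=3.5, n=50, reps=2000, z=1.96, seed=1):
    """Fraction of simulated 95% CIs  x-bar +/- z*sigma/sqrt(n)  that cover mu.

    sigma is treated as known, matching the setting of this lecture.
    """
    rng = random.Random(seed)
    half = z * sigma / math.sqrt(n)  # half-width is fixed; only the center is random
    hits = 0
    for _ in range(reps):
        xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        if xbar - half < mu < xbar + half:
            hits += 1
    return hits / reps

# In the long run, roughly 95% of the random intervals cover mu.
print(coverage())
```

Note that each simulated interval either covers $\mu$ or it doesn't; the .95 only describes the long-run frequency across intervals.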
• For confidence $100(1-\alpha)\%$ and width $w$ we need
    $n(w) = \left( \dfrac{2 z_{\alpha/2} \, \sigma}{w} \right)^{2}$.
  (Again, we don't know $\sigma$: we'll come back to this.)
• Example: Fisher's iris data had $n = 50$, $\sigma = 3.5$, giving the 95% CI
    $5.0 \pm 1.96 \cdot 3.5/\sqrt{50} = 5.0 \pm 0.97 = (4.03, 5.97)$,
  with CI width $w = 2 \cdot 0.97 = 1.94$. To achieve $w = 0.5$ on a new sample,
    $n = (2 \cdot 1.96 \cdot 3.5 / 0.5)^{2} \approx 753$.

Derivation of CI: example

• $X_1, \ldots, X_n \sim \mathrm{Exp}(\lambda)$; $p(x) = \lambda e^{-\lambda x}$, $x > 0$, $\lambda > 0$.
• Can show $h(X_{1:n}, \lambda) = 2n\lambda\bar X$ has a chi-square distribution with $2n$ degrees of freedom, $\chi^2_{2n}$. Since this is a known distribution (in particular, it doesn't depend on $\lambda$), $h(X_{1:n}, \lambda)$ is a pivot.

[Figure: chi-square density with lower and upper quantiles $a$ and $b$ marked.]

• So $P(a < 2n\lambda\bar X < b) = 1 - \alpha$ implies
    $P\!\left( \dfrac{a}{2n\bar X} < \lambda < \dfrac{b}{2n\bar X} \right) = 1 - \alpha$.
• We pivoted $h(X_{1:n}, \lambda)$ to get the $100(1-\alpha)\%$ CI for $\lambda$:
    $\left( \dfrac{a}{2n\bar X}, \ \dfrac{b}{2n\bar X} \right)$.

Large-sample CIs

• Up to now, the population distribution was $N(\mu, \sigma^2)$, and $\sigma^2$ was known.
• When $n$ is large, we can get rid of both assumptions, and our previous CI for the population mean is still approximately correct.
• Let $X_1, \ldots, X_n$ be a sample from any distribution with unknown mean $\mu$ and unknown variance $\sigma^2$ (both finite).
  - The Central Limit Theorem says $\bar X \approx N(\mu, \sigma^2/n)$
  - so $Z = (\bar X - \mu)/(\sigma/\sqrt{n}) \approx N(0, 1)$.
• We are back to our previous CI derivation.
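The sample-size formula $n(w) = (2 z_{\alpha/2}\sigma/w)^2$ is easy to turn into a small calculator. A minimal sketch (the function name `n_for_width` is my own; it rounds up, since $n$ must be an integer):

```python
import math

def n_for_width(w, sigma, z=1.96):
    """Smallest n so a CI of the form x-bar +/- z*sigma/sqrt(n) has width <= w."""
    return math.ceil((2 * z * sigma / w) ** 2)

# Iris numbers from the notes: sigma = 3.5, target width w = 0.5.
print(n_for_width(0.5, 3.5))  # -> 753, matching the slide's n ≈ 753
```

Note the quadratic cost of precision: halving the width quadruples the required sample size.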
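The pivot-based CI for the exponential rate $\lambda$ can likewise be sketched in code. This assumes SciPy is available for the chi-square quantiles ($a$ and $b$ in the derivation); the data below are simulated with a hypothetical true rate $\lambda = 2$, not taken from the notes.

```python
import numpy as np
from scipy.stats import chi2

def exp_rate_ci(x, alpha=0.05):
    """100(1-alpha)% CI for the rate of an Exponential(lambda) sample.

    Uses the pivot 2*n*lambda*x-bar ~ chi-square(2n) from the lecture:
    inverting  P(a < 2*n*lambda*x-bar < b) = 1 - alpha  gives
    (a / (2*n*x-bar), b / (2*n*x-bar)).
    """
    n = len(x)
    xbar = float(np.mean(x))
    a = chi2.ppf(alpha / 2, df=2 * n)        # lower chi-square quantile
    b = chi2.ppf(1 - alpha / 2, df=2 * n)    # upper chi-square quantile
    return a / (2 * n * xbar), b / (2 * n * xbar)

rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 2.0, size=100)  # simulated data, true lambda = 2
print(exp_rate_ci(x))
```

The key property making $2n\lambda\bar X$ a pivot is that its distribution is fully known and free of $\lambda$, so the probability statement can be solved for $\lambda$.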
Copyright © 2024 Ladybird Srl - Via Leonardo da Vinci 16, 10126, Torino, Italy - VAT 10816460017 - All rights reserved