Stat 410: Random Variables and Confidence Intervals - Review of Dr. D. Scott's Lecture (Study Notes, Statistics)

An overview of random variables, confidence intervals, and their role in statistical inference. The fundamental object is the density function; the notes also cover regression variables, properties of estimators, results for the normal distribution, and confidence intervals for parameters, and they introduce the central limit theorem and the use of pivots for constructing confidence intervals.

Typology: Study notes

Pre 2010

Uploaded on 08/17/2009

koofers-user-ye9

Stat 410: Random Variable and Confidence Interval Review
Dr. D. Scott
August 23, 2005

The fundamental object is the density function:

$$X \sim f(x) = f(x_1, x_2, \ldots, x_p),$$

which encodes "structure." Sometimes one of the variables is labeled differently:

$$(X, Y) \sim f(x, y) = f(x_1, \ldots, x_p, y).$$

Here $Y$ is the dependent or response variable, while $X_1, \ldots, X_p$ are the independent or predictor variables.

Variance of the sample mean:

$$
\sigma_{\bar{X}}^2 = E(\bar{X} - \mu_{\bar{X}})^2
= E\left[ \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu) \right]^2
= \frac{1}{n^2} \, E\left[ \sum_{i=1}^{n} (X_i - \mu)^2 + \sum_{i \neq j} (X_i - \mu)(X_j - \mu) \right]
= \frac{1}{n^2} \cdot n\,\sigma_X^2 + \frac{1}{n^2} \, n(n-1) \cdot 0 \cdot 0 \quad \text{(why?)}
= \frac{\sigma_X^2}{n}.
$$

(The cross terms vanish by independence: $E[(X_i - \mu)(X_j - \mu)] = E(X_i - \mu)\,E(X_j - \mu) = 0 \cdot 0$ for $i \neq j$.)

Normal Results

$$Z \sim N(0,1), \qquad X \sim N(\mu, \sigma^2), \qquad f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right).$$

Facts:

$$\frac{X - \mu}{\sigma} \sim N(0,1) \ \text{(exactly)}, \qquad \bar{X} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right), \qquad \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0,1),$$

$$Z^2 \sim \chi^2(1) \ \text{where } Z \sim N(0,1), \qquad \sum_{i=1}^{n} Z_i^2 \sim \chi^2(n).$$

CLT: a sum of i.i.d. random variables is approximately normal. (RVLS)

Confidence Intervals for Parameters: Pivots

Rearrange

$$\Pr\left( -1.96 < \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} < 1.96 \right) = 95\%$$

to get

$$\Pr\left( \bar{X} - 1.96\,\frac{\sigma}{\sqrt{n}} < \mu < \bar{X} + 1.96\,\frac{\sigma}{\sqrt{n}} \right) = 95\%$$

(use $\pm 2.576$ for a 99% confidence interval).

Since $E[\chi^2(p)] = p$ and $\mathrm{Var}[\chi^2(p)] = 2p$,

$$E\left[ \frac{(n-1)\,S^2}{\sigma^2} \right] = n - 1 \quad \text{or} \quad E[S^2] = \sigma^2 \ \text{(unbiased)}.$$

We now have a pivot for a confidence interval for $\sigma^2$:

$$\Pr\left( a < \frac{(n-1)\,S^2}{\sigma^2} < b \right) = 1 - \alpha \iff \Pr\left( \frac{(n-1)\,S^2}{b} < \sigma^2 < \frac{(n-1)\,S^2}{a} \right) = 1 - \alpha,$$

where $\Pr(\chi^2_{n-1} < a) = \Pr(\chi^2_{n-1} > b) = \frac{\alpha}{2}$. (Note: show R code to plot $\chi^2(6)$.)

We need a better pivot for $\mu$, since we do not really know $\sigma^2$ for the normal random variable. The obvious idea is to use $S^2$ in place of $\sigma^2$:

$$S^2 = \frac{1}{n-1} \sum (X_i - \bar{X})^2, \qquad S^2(\bar{X}) = \frac{1}{n}\, S^2 = \frac{1}{n(n-1)} \sum (X_i - \bar{X})^2,$$

which is unbiased for $\sigma_{\bar{X}}^2$.

Relationship to hypothesis testing: $H_0: \mu = \mu_0$ vs. $H_1: \mu \neq \mu_0$. Compute $\hat{T} = (\bar{X} - \mu_0)/S(\bar{X})$ and reject if $|\hat{T}| > a$. This is the same as checking whether $\mu_0$ is in the confidence interval (symmetry)! p-value $= \Pr(|T_{n-1}| > |\hat{T}|)$. (learn)

Finally, the F distribution is a pivot for comparing variances.
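The t-based interval for $\mu$ can be checked by simulation. The notes mention R; as a minimal sketch in Python instead (NumPy and SciPy assumed, not from the notes), estimating the coverage of the 95% t-interval:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(410)         # seed chosen only for reproducibility
mu, sigma, n, reps = 5.0, 2.0, 20, 5000
t_crit = stats.t.ppf(0.975, df=n - 1)    # t replaces 1.96 because sigma is estimated

covered = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, size=n)
    xbar = x.mean()
    s = x.std(ddof=1)                    # S uses the 1/(n-1) divisor, so S^2 is unbiased
    half = t_crit * s / np.sqrt(n)       # half-width from the pivot (xbar - mu)/(S/sqrt(n))
    covered += (xbar - half < mu < xbar + half)

coverage = covered / reps                # should land close to 0.95
print(round(coverage, 3))
```

With known $\sigma$ one would use 1.96 instead; the t critical value (about 2.09 for $n = 20$) widens the interval to account for estimating $\sigma$ with $S$.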
$$F = \frac{\chi^2(\nu_1)/\nu_1}{\chi^2(\nu_2)/\nu_2} \sim F(\nu_1, \nu_2) = F_{\nu_1, \nu_2}.$$

Recall:

$$T_{n-1} = \frac{Z}{\sqrt{\chi^2_{n-1}/(n-1)}}.$$

Thus

$$T_{n-1}^2 = \frac{Z^2}{\chi^2_{n-1}/(n-1)} = \frac{\chi^2(1)}{\chi^2_{n-1}/(n-1)} = F_{1, n-1}.$$
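The identity $T_{n-1}^2 = F_{1,n-1}$ can be verified numerically through quantiles: squaring the t statistic folds both tails into one, so the two-sided 5% t critical value squared equals the upper 5% F critical value. A sketch (Python with SciPy assumed, not part of the original notes):

```python
from scipy import stats

n = 10
t975 = stats.t.ppf(0.975, df=n - 1)        # two-sided 5% critical value of t_{n-1}
f95 = stats.f.ppf(0.95, dfn=1, dfd=n - 1)  # upper 5% critical value of F_{1, n-1}

# T^2_{n-1} = F_{1, n-1}: the squared t quantile matches the F quantile
print(abs(t975**2 - f95) < 1e-6)
```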