Statistical Formulas Cheat Sheet for Final Exam

This cheat sheet summarizes the key statistical formulas needed to review for the final exam.

Typology: Cheat Sheet

2019/2020
Uploaded on 10/09/2020 by anuradha

4.6

(9)

5 documents

Partial preview of the text

Formula Sheet for Final Exam

Summary Statistics
• Sample mean: $\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}$
• Sample variance: $s^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}$
• Sample standard deviation: $s = \sqrt{s^2}$
• Inter-quartile range: $\mathrm{IQR} = q_{75} - q_{25}$, where $q_x$ is the $x$th percentile.

Probability
• Complement: $P(A^c) = 1 - P(A)$
• Addition law: $P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B)$
• Conditional probability: $P(A \mid B) = \frac{P(A \text{ and } B)}{P(B)}$
• If $A$ and $B$ are mutually exclusive: $P(A \text{ and } B) = 0$
• If $A$ and $B$ are independent: $P(A \text{ and } B) = P(A)P(B)$
• Bayes' rule: $P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$
• Partition law: if $A_1, \ldots, A_n$ are mutually exclusive and $\sum_{i=1}^{n} P(A_i) = 1$, then $P(B) = \sum_{i=1}^{n} P(B \mid A_i)\,P(A_i)$

Discrete Distributions
• $E(X) = \sum_{x} x\,P(x)$
• $V(X) = \sum_{x} (x - \mu)^2 P(x) = \sum_{x} x^2 P(x) - \mu^2$
• If $X \sim \text{Bernoulli}(p)$: $p(x) = p$ if $x = 1$ and $1 - p$ if $x = 0$; $E(X) = p$, $V(X) = p(1-p)$
• If $X \sim \text{Binomial}(n, p)$: for $r = 0, 1, \ldots, n$, $P(X = r) = \binom{n}{r} p^r (1-p)^{n-r}$; $E(X) = np$, $V(X) = np(1-p)$, where $\binom{n}{r} = \frac{n!}{r!\,(n-r)!}$, $n! = n(n-1)(n-2)\cdots 1$, and $0! = 1$
• If $X \sim \text{Poisson}(\lambda)$: for $k = 0, 1, 2, \ldots$, $P(X = k) = \frac{\lambda^k}{k!}\,e^{-\lambda}$; $E(X) = \lambda$, $V(X) = \lambda$

Continuous Distributions
• The cumulative distribution function of $X$ with density $p(x)$ is $F(x_0) = P(X \le x_0) = \int_{-\infty}^{x_0} p(x)\,dx$, and $P(a < X \le b) = F(b) - F(a)$
• If $X$ is Normal with mean $\mu$ and variance $\sigma^2$, $X \sim N(\mu, \sigma^2)$, then $\frac{X - \mu}{\sigma} = Z \sim N(0, 1)$

Expectation and Variance
Let $X$ and $Y$ denote two independent random variables and let $a$ and $b$ denote two known constants.
• $E(aX + b) = aE(X) + b$
• $V(aX + b) = a^2 V(X)$
• $V(X) = E(X^2) - [E(X)]^2$
• $E(X + Y) = E(X) + E(Y)$
• $V(X + Y) = V(X) + V(Y)$

Sampling Distribution
Let $X_1, \ldots, X_n$ be an iid sample with $E(X_i) = \mu$ and $V(X_i) = \sigma^2$. Denote by $\bar{X}$ the sample mean and by $s^2$ the sample variance. Then
$\bar{X} \sim N\left(\mu, \frac{\sigma^2}{n}\right)$, $\qquad \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1)$, $\qquad \frac{\bar{X} - \mu}{s/\sqrt{n}} \sim t_{n-1}$
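The summary-statistics and sampling-distribution formulas can be checked numerically. A minimal Python sketch, using only the standard library and an invented sample (all data values and the hypothesized mean $\mu_0$ are made up for illustration):

```python
import math

# Hypothetical sample data (invented for illustration)
x = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9]
n = len(x)

# Sample mean: x_bar = (sum of x_i) / n
mean = sum(x) / n

# Sample variance: s^2 = sum of (x_i - x_bar)^2 / (n - 1)
var = sum((xi - mean) ** 2 for xi in x) / (n - 1)

# Sample standard deviation: s = sqrt(s^2)
sd = math.sqrt(var)

# Standardized sample mean against a hypothesized mu_0 = 5.0
# (x_bar - mu_0) / (s / sqrt(n)) follows a t distribution with n - 1 df
mu0 = 5.0
t_stat = (mean - mu0) / (sd / math.sqrt(n))

print(mean, var, sd, t_stat)
```

The hand-rolled loop mirrors the formulas on the sheet; in practice Python's `statistics.mean`, `statistics.variance`, and `statistics.stdev` compute the same quantities.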
Point Estimation
Let $X$ and $Y$ be estimators for $\theta$.
• $\mathrm{Bias}(X) = E(X) - \theta$
• $\mathrm{MSE}(X) = E[(X - \theta)^2] = V(X) + [\mathrm{Bias}(X)]^2$, where MSE is the mean squared error
• Efficiency of $X$ relative to $Y$: $\frac{\mathrm{MSE}(Y)}{\mathrm{MSE}(X)}$

Confidence Interval for One Mean
• If the population variance $\sigma^2$ is known: $\bar{x} \pm z_{(1-C\%)/2} \times \frac{\sigma}{\sqrt{n}}$, where $z_{(1-C\%)/2}$ is the $(1-C\%)/2$ quantile of the standard Normal distribution.
• If the population variance $\sigma^2$ is unknown: $\bar{x} \pm t_{(1-C\%)/2,\, n-1} \times \frac{s}{\sqrt{n}}$.

Confidence Interval for the Difference in Two Means
• If $\sigma_1^2$ and $\sigma_2^2$ are known: $(\bar{x}_1 - \bar{x}_2) \pm z_{(1-C\%)/2} \times \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}$.
• If $\sigma_1^2$ and $\sigma_2^2$ are unknown and equal: $(\bar{x}_1 - \bar{x}_2) \pm t_{(1-C\%)/2,\, n_1+n_2-2} \times s_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$, where the pooled standard deviation is $s_p = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1 + n_2 - 2}}$.
• If $\sigma_1^2$ and $\sigma_2^2$ are unknown and unequal: $(\bar{x}_1 - \bar{x}_2) \pm t_{(1-C\%)/2,\, \nu_{WS}} \times \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$, with Welch–Satterthwaite degrees of freedom $\nu_{WS} = \frac{(s_1^2/n_1 + s_2^2/n_2)^2}{(s_1^2/n_1)^2/(n_1-1) + (s_2^2/n_2)^2/(n_2-1)}$.

Hypothesis Testing
• Type I error: $\alpha = P(\text{reject } H_0 \mid H_0 \text{ is true})$
• Type II error: $\beta = P(\text{fail to reject } H_0 \mid H_0 \text{ is false})$
• Power $= 1 - \beta$
• Test statistic for one mean, $H_0: \mu = \mu_0$: $\frac{\bar{X} - \mu_0}{\sigma_{\bar{X}}}$
• Test statistic for the difference of two sample means, $H_0: \mu_1 - \mu_2 = d_0$: $\frac{(\bar{X}_1 - \bar{X}_2) - d_0}{\sigma_{\bar{X}_1 - \bar{X}_2}}$

Joint Distributions
Let $X$ and $Y$ denote two random variables with joint distribution $p(x, y)$.
• $E[g(X, Y)] = \sum_{x,y} g(x, y)\,p(x, y)$
• $V[g(X, Y)] = \sum_{x,y} \left(g(x, y) - E[g(X, Y)]\right)^2 p(x, y)$
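The known-variance confidence interval above can be sketched in a few lines of Python. `statistics.NormalDist().inv_cdf` supplies the Normal quantile $z_{(1-C\%)/2}$; the sample mean, $\sigma$, and $n$ below are invented for illustration:

```python
import math
from statistics import NormalDist

# Hypothetical inputs (invented for illustration)
x_bar = 4.92   # sample mean
sigma = 0.80   # known population standard deviation
n = 25         # sample size
C = 0.95       # confidence level

# z_{(1-C)/2}: upper (1-C)/2 quantile of the standard Normal.
# For C = 0.95 this is the familiar 1.96.
z = NormalDist().inv_cdf(1 - (1 - C) / 2)

# x_bar +/- z * sigma / sqrt(n)
margin = z * sigma / math.sqrt(n)
lower, upper = x_bar - margin, x_bar + margin
print(f"{C:.0%} CI: ({lower:.3f}, {upper:.3f})")
```

When $\sigma$ is unknown, the same shape applies with $s$ in place of $\sigma$ and a $t_{n-1}$ quantile in place of $z$; the standard library has no $t$ quantile, so that case would typically use `scipy.stats.t.ppf`.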