ISYE 2028 A and B, Lecture 10: Sampling Distributions and Test Statistics
Dr. Kobi Abayomi
April 2, 2009

1 Introduction: The Context for Confidence Intervals and Hypothesis Testing: Sampling Distributions for Test Statistics

Here is a (non-exhaustive) illustration of the population-sample dichotomy that is at the center of what we are studying in this introductory course.

    Population                      Sample
    ----------------------------    -----------
    Random Variable                 Statistic
    Population Mean, Expectation    Sample Mean
    Parameter                       Estimate
    µ                               x̄

We make assumptions, or define a population, to "fit" observed data. Our data are information about events we wish to speak about, or gain inference about. The natural framework is that of an experiment: the population comprises our assumptions about what might happen; the sample data comprise what we actually observe. Our beliefs about what we see (the sample distribution) are related to our general assumptions (the population distribution).

We have canonical population models in our overview of random variables. Bernoulli, Binomial, Poisson, Normal, Exponential, etc. characterize types of experiments; we use these characterizations to make statements about data.

• Bernoulli distribution: models simple events that either happen or do not, like whether a coin turns up heads.

• Binomial distribution: models sums or totals of Bernoulli events, like whether a coin turns up heads k times in n tosses.

• Poisson distribution: models Binomial-type events where the probability of any single event is very low and the number of trials is very high, like the classic count of soldiers killed by horse kicks in the nineteenth-century Prussian army.

• Exponential distribution: models continuous, positive quantities like waiting times or times to failure.

• Normal distribution: models averages of events, events with continuous outcomes, or situations where we just don't know any better (ha!).

• Chi-Square distribution: models squared deviations, sums of squared deviations, and squared normal random variables.

Moving on, we use these canonical random variables to make statements about observed data. The setup is almost always this: we compare observed data to an expected value under our assumptions. This comparison yields a test statistic. We then use our probability model (i.e., our fundamental assumption about the population for the data) to make a probabilistic statement about the population parameter. In general, a test statistic looks like this:

    Test Statistic = (observed value - expected value) / (standard error)    (1)

In general the "observed value" will be some statistic, a function of the data. The "expected value" will be some parameter, the population counterpart of the statistic.

We call statistics used in this context (to estimate population parameters) estimators. A popular notation is θ̂, read "theta-hat", for an estimator of the population parameter θ. We have already been exposed to one such estimator: µ̂ = x̄, the sample mean. We use functions of data (statistics) to estimate parameters, and our test statistics are then rescalings by the standard deviation of the estimator. We call √Var(θ̂) the standard error of the estimator.

• Third: Notice the special use of the Binomial setup to generate estimates of the Bernoulli parameter.

• Fourth: Notice our usual construction of Z, so that we can use our standard normal tables (in the back of the book, or on your computer).
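The preview cuts off the example these remarks refer to, so here is a minimal sketch, in R as in Section 5.1 below, of the construction they describe: a Binomial count used to estimate the Bernoulli parameter p, rescaled into a Z statistic of the form (1). The trial count, success count, and hypothesized p are invented for illustration, not taken from the lecture.

    ## Sketch: Binomial count -> estimate of the Bernoulli parameter -> Z.
    n <- 200                           # invented number of trials
    x <- 116                           # invented number of successes
    p.hat <- x / n                     # estimator of the Bernoulli parameter p

    p.0 <- 0.5                         # hypothesized value of p
    se  <- sqrt(p.0 * (1 - p.0) / n)   # standard error under the hypothesis
    z   <- (p.hat - p.0) / se          # (observed - expected) / standard error
    2 * pnorm(-abs(z))                 # two-sided tail from the standard normal table

The standard error here is computed under the hypothesized p.0, which is what makes the final line a standard normal tail lookup rather than an approximation with an estimated scale.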
Situations often arise where the sample mean cannot sufficiently describe, or test for, important hypothetical differences in populations. We must appeal to other distributions, and to other quantifications of difference, to test other hypotheses. A useful alternative is...

2.2 The T Distribution

In many situations we cannot assume that we know the variance of the sample mean. As well, we often do not have enough samples to apply the central limit theorem to the sampling distribution. In these situations we construct the t-statistic:

    t = (x̄ - µ) / (s/√n)    (2)

The t-distribution, T ~ t(df), is an approximation to the normal distribution. Notice that df is written as the parameter of the distribution.[3] The T distribution is centered at zero, just like the Z.[4] We let df ≡ degrees of freedom. When we talk about sample data, we loosely define "degrees of freedom" as the number of independent observations: the number of observations we have left after we subtract the number of parameters we have to estimate,

    df ≡ n - k,

where n = the number of observations and k = the number of parameters to be estimated.

[3] What, if any, are the parameters for the Z ~ N(0, 1) distribution? The parameters are µ = 0 and σ² = 1.
[4] It turns out that E(T) = 0 and Var(T) = r/(r - 2), for r > 2.

Notice that our constructed t-statistic is a deviation, which we expect to be Normal, rescaled (divided) by our estimate of the standard deviation, √(s²/n). Notice that

    T = [(x̄ - µ)/(σ/√n)] / √(S²/σ²)

is a standard normal random variable divided by the square root of a rescaled chi-square random variable. We showed at the end of Lecture 8 that the distribution of x̄ is independent of the distribution of S², and that they are Normal and (scaled) Chi-Squared, respectively.

We get the density function for the t distribution by writing T = Z/√(V/r), with Z and V independent:

    f_{Z,V}(z, v) = (1/√(2π)) e^(-z²/2) · (1/(Γ(r/2) 2^(r/2))) v^(r/2 - 1) e^(-v/2)

From what we know about transformations we get the joint distribution (letting U = V):

    f_{T,U}(t, u) = h(t√u/√r, u) |J| = ...

and we integrate over U, integrating out the Chi-Squared random variable, to get

    f_T(t) = [Γ((r+1)/2) / (√(πr) Γ(r/2))] (1 + t²/r)^(-(r+1)/2)

Notice that in practice this all reduces to

    T = (x̄ - µ) / (S/√n),

which is the way you use it.

2.2.1 Illustration and Setup

Suppose we have a process X ~ (µ, σ²) with σ² unknown, and our estimator is σ̂² = s². We want to use the sample mean x̄ = (x₁ + ... + xₙ)/n to gain inference about µ. We then need to look at the probability distribution for T.

Example: What is the probability of a sample of size 25 having a mean of at least 518 grams, with a standard deviation of 40 grams, if the population mean yield is 500 grams?

Solution: The t statistic is

    t = (518 - 500) / (40/√25) = 2.25.

Then

    P(x̄ > 518) = P(t₂₄ > 2.25) ≈ 0.02.

3 The Chi Squared Distribution and Test Statistic

Example: Say we are interested in the fairness of a die. Here is the observed distribution after 120 tosses:

    Die Face     1    2    3    4    5    6
    Obs. Count  30   17    5   23   24   21

What is the probability that the die is fair? Using only what we have done so far, we could test the hypothesis that the die is fair by doing a test of the mean: what is the probability that the mean is 3.5? We calculate the sample mean to be x̄ = 3.475. Using the variance of a fair die, σ² = 35/12 ≈ 2.92, we can compute the sampling distribution, and thus the value of the z-statistic:

    z₀ = (3.475 - 3.5) / √(2.92/120) = -0.16.

This yields a two-sided tail probability of about 0.87: a test of the mean cannot distinguish this die from a fair one, even though the face counts look quite uneven. We need a statistic that compares the entire observed distribution against the one we expect.
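The preview skips the definition of the goodness-of-fit statistic that this example builds toward, so the following is a sketch under an assumption: that the statistic takes the usual form χ² = Σ (observed - expected)²/expected, which base R's chisq.test computes against equal expected counts by default. The counts are those of the table above.

    ## The die example two ways: a test of the mean vs. goodness of fit.
    counts <- c(30, 17, 5, 23, 24, 21)        # observed face counts, n = 120
    faces  <- 1:6
    n      <- sum(counts)

    ## Test of the mean, as in the text: z is small, so it detects nothing.
    xbar <- sum(faces * counts) / n           # 3.475
    z    <- (xbar - 3.5) / sqrt((35/12) / n)  # variance of a fair die is 35/12
    2 * pnorm(-abs(z))                        # about 0.87

    ## Goodness of fit: compare the whole distribution to the uniform one.
    chisq.test(counts)                        # X-squared = 18, df = 5, p < 0.01

The contrast is the point: the die's average is close to 3.5, but its face counts are far from uniform, so the chi-squared statistic sees what the z-statistic cannot.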
Under a hypothesis of independence between the row variable X and the column variable Y,

    P(X = row i and Y = col j) = (nᵢ. / n..)(n.ⱼ / n..),

so the expected number of counts in row i and column j, under a hypothesis of independence, is

    Expectedᵢⱼ = nᵢ. n.ⱼ / n..

For our data here we calculate

    χ²₀ = (6 - 11.89)²/11.89 + ... + (75 - 82.65)²/82.65 = 14.92,

and

    P(χ²₄ > 14.92) < 0.005.

We conclude that class level and fashion are not independent.

4 F-distribution for a Ratio of Variances

If X₁, ..., Xₘ are distributed N(µ₁, σ₁²) and Y₁, ..., Yₙ are distributed N(µ₂, σ₂²), then the ratio

    F = (S₁²/σ₁²) / (S₂²/σ₂²)    (3)

has what we call an F distribution, with numerator degrees of freedom m - 1 and denominator degrees of freedom n - 1.

F is the ratio of two independent chi-squared variables, each divided by its degrees of freedom; call them U ~ χ²(m - 1) and V ~ χ²(n - 1). If U = (m - 1)S₁²/σ₁², then U ~ χ²(m - 1); if V = (n - 1)S₂²/σ₂², then V ~ χ²(n - 1). Then

    F = [U/(m - 1)] / [V/(n - 1)] = [(m - 1)S₁²/σ₁² / (m - 1)] / [(n - 1)S₂²/σ₂² / (n - 1)],

which just simplifies to (3).[7]

An important identity for the F-distribution is

    F(1-α; ν₁, ν₂) = 1 / F(α; ν₂, ν₁).    (4)

You'll notice that you may have to use this fact when looking up values in the F-table in some books.

[7] It turns out that E(F) = ν₂/(ν₂ - 2) and Var(F) = 2ν₂²(ν₁ + ν₂ - 2) / (ν₁(ν₂ - 2)²(ν₂ - 4)), where U ~ χ²(ν₁), V ~ χ²(ν₂), F = (U/ν₁)/(V/ν₂), and U is independent of V.

5 Miscellanea

5.1 Boxplots

A boxplot is an illustration of the distribution of a sample.

[Figure 1: a boxplot of data skewed to the right, with IQR = 4.]

In R:

    x <- rnorm(100)
    y <- c(x, rnorm(20, 5))
    boxplot(x)
    boxplot(x, horizontal = T)
    boxplot(y, horizontal = T)

5.2 Quantile-Quantile Plots

A quantile-quantile plot is a plot of data values on the y (ordinate) axis versus theoretical quantiles on the x (abscissa) axis. A Q-Q plot, as it is typically drawn, should look like a 45-degree line when these values are similar: the plot pairs the theoretical quantiles with the empirical ones,

    (x, y) = (F_X⁻¹(i/n), x₍ᵢ₎),    i = 1, ..., n,

where x₍ᵢ₎ is the i-th order statistic and Fₙ is the empirical cdf (the cdf induced by the data),

    Fₙ(x) = (1/n) Σᵢ₌₁ⁿ 1{xᵢ ≤ x}.

A q-th quantile, 0 < q < 1, is the value of the random variable (or data) yielded by evaluating the inverse cumulative distribution function at q. That is,

    F⁻¹(q) = q-th quantile, or equivalently F(q-th quantile) = q.
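As a closing sketch, here is the Q-Q construction above done by hand in R, next to the built-in qqnorm()/qqline(). The (i - 0.5)/n offset in place of i/n is a common plotting convention that keeps the extreme theoretical quantiles finite, not something from the notes; the last lines check the quantile identity and identity (4) numerically.

    ## Q-Q plot by hand: order statistics against theoretical quantiles.
    x <- rnorm(100)
    n <- length(x)
    plot(qnorm((1:n - 0.5) / n), sort(x),
         xlab = "theoretical quantiles", ylab = "data quantiles")
    abline(0, 1)                  # the 45-degree reference line

    qqnorm(x)                     # the built-in version of the same plot
    qqline(x)

    ## F(F^{-1}(q)) = q, numerically:
    pnorm(qnorm(0.9))             # 0.9

    ## Identity (4) for the F-distribution:
    qf(0.95, 5, 10)               # about 3.33
    1 / qf(0.05, 10, 5)           # the same value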