Statistical Inference: Confidence Intervals and Hypothesis Testing for Means - Prof. Thoma, Study notes of Data Analysis & Statistical Methods

An explanation of how to estimate the population mean when the standard deviation is unknown, using the t-distribution and confidence intervals. It covers the assumptions for inference about a mean, the one-sample t statistic and confidence interval, and examples of calculating a confidence interval by hand and using SPSS. It also discusses the use of the t-distribution in hypothesis testing and the concept of robustness.

Lecture 8, Sections 7.1 & 7.2
Inference for the Mean of a Population

Previously we made the assumption that we know the population standard deviation, σ. We then developed a confidence interval and used tests of significance to gather evidence for or against a hypothesis, all with a known σ. In normal practice, σ is unknown. In this section we must estimate σ from the data, even though we are primarily interested in the population mean, µ.

Confidence Interval for a Mean

First, the assumptions for inference about a mean:
- Our data are a simple random sample (SRS) of size n from the population.
- Observations from the population have a normal distribution with mean µ and standard deviation σ. If the population distribution is not normal, it is enough that the distribution is unimodal and symmetric and that the sample size is large (n > 15).

Both µ and σ are unknown parameters. Because we do not know σ, we make two changes in our procedure.

1. The standard error, $SE_{\bar{x}}$, is used in place of $\sigma/\sqrt{n}$.

Standard Error: When the standard deviation of a statistic is estimated from the data, the result is called the standard error of the statistic. The standard error of the sample mean $\bar{x}$ is

$SE_{\bar{x}} = \dfrac{s}{\sqrt{n}}$

where s is the sample standard deviation and n is the sample size.

2. We calculate a different test statistic and use a different distribution to calculate our P-value.

The t-distributions:
- The t-distribution is used when we do not know σ. The t-distributions have density curves similar in shape to the standard normal curve, but with more spread.
- The t-distributions have more probability in the tails and less in the center than does the standard normal. This is because substituting the estimate s for the fixed parameter σ introduces more variation into the statistic.
- As the sample size increases, the t density curve approaches the N(0, 1) curve. (Note: This is because s estimates σ more accurately as the sample size increases.)

The t Distributions: Suppose that an SRS of size n is drawn from an N(µ, σ) population. Then the one-sample t statistic

$t = \dfrac{\bar{x} - \mu}{s/\sqrt{n}}$

has the t distribution with n − 1 degrees of freedom.

The One-Sample t Confidence Interval: Suppose that an SRS of size n is drawn from a population having unknown mean µ. A level C confidence interval for µ is

$\bar{x} \pm t^* \dfrac{s}{\sqrt{n}}$

where t* is the value for the t(n − 1) density curve with area C between −t* and t*. This interval is exact when the population distribution is normal and is approximately correct for large n in other cases.

The One-Sample t Test:
1. Write the hypotheses: H0: µ = µ0 against Ha: µ > µ0, Ha: µ < µ0, or Ha: µ ≠ µ0.
2. Calculate the test statistic $t = \dfrac{\bar{x} - \mu_0}{s/\sqrt{n}}$, which has the t(n − 1) distribution when H0 is true.
3. Calculate the P-value from the t(n − 1) distribution: one tail for a one-sided Ha, both tails for a two-sided Ha.
4. State the conclusions in terms of the problem. Choose a significance level such as α = 0.05, then compare the P-value to the α level. If P-value ≤ α, then reject H0. If P-value > α, then fail to reject H0.

Examples:

1. Experiments on learning in animals sometimes measure how long it takes mice to find their way through a maze. The mean time is 18 seconds for one particular maze. A researcher thinks that a loud noise will improve (decrease) the time it takes a mouse to complete the maze. She measures how long each of 30 mice takes to complete the maze with a noise stimulus. She finds their average time is 16 seconds and their standard deviation is s = 3 seconds. Do a hypothesis test to test the researcher's assertion with α = 0.1.
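The same test can also be checked in software other than SPSS. Below is a minimal Python/SciPy sketch (not part of the original notes, which work this example by hand and in SPSS) that carries out Example 1 from the summary statistics given above; the variable names are mine.

```python
# Example 1 (mouse maze) as a one-sample t test computed from summary statistics.
# Illustrative sketch only; the notes do this calculation by hand and with SPSS.
from math import sqrt
from scipy import stats

n, xbar, s = 30, 16.0, 3.0                # sample size, sample mean, sample std. deviation
mu0 = 18.0                                # hypothesized mean time under H0 (seconds)

se = s / sqrt(n)                          # standard error of the sample mean, s / sqrt(n)
t_stat = (xbar - mu0) / se                # one-sample t statistic
p_value = stats.t.cdf(t_stat, df=n - 1)   # one-sided P-value for Ha: mu < 18

print(f"t = {t_stat:.2f}, P-value = {p_value:.4f}")
# t is about -3.65 and the P-value is about 0.0005, which is less than alpha = 0.1,
# so we reject H0 and conclude the noise stimulus decreases the mean completion time.
```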
2. (Example 7.2 in the textbook) Suppose that we know that sufficient vitamin C was added to the CSB mixture to produce a mean vitamin C content in the final product of 40 mg/100 g. It is suspected that some of the vitamin is lost or destroyed in the production process. To test this hypothesis we can conduct a one-sided test to determine whether there is sufficient evidence to conclude that the CSB mixture lost vitamin C content, at the α = 0.05 level.

By hand:

Using SPSS: Analyze > Compare Means > One-Sample T Test. Move "vitaminc" into the "Test Variable(s)" box and type 40 for the test value. To change the confidence interval, click "Options" and change the confidence interval from 95% to the desired level. I did not do this, as I will keep the 95% default. Click "Continue", then click "OK".

One-Sample Statistics
             N    Mean     Std. Deviation   Std. Error Mean
Vitamin C    8    22.50    7.191            2.542

One-Sample Test (Test Value = 40)
             t         df   Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
Vitamin C    -6.883    7    .000              -17.500           (-23.51, -11.49)

Matched Pairs Design:

A common design to compare two treatments is the matched pairs design. One type of matched pairs design matches two subjects who are similar in important aspects into pairs, and each treatment is given to one of the subjects in each pair. With only one subject, the two treatments are given in random order. Another type of matched pairs design uses before-and-after observations on the same subject.

Paired t Procedures: To compare the mean responses to the two treatments in a matched pairs design, apply the one-sample t procedures to the observed differences, d.

Example (Problem 7.31, done by hand and using SPSS): The researchers studying vitamin C in CSB in Example 7.1 were also interested in a similar commodity called wheat soy blend (WSB). Both of these commodities are mixed with other ingredients and cooked. Loss of vitamin C as a result of this process was another concern of the researchers. One preparation used in Haiti, called gruel, can be made from WSB, salt, sugar, milk, banana, and other optional items to improve the taste. Samples of gruel prepared in Haitian households were collected. The vitamin C content (in milligrams per 100 grams of blend, dry basis) was measured before and after cooking. Here are the results:

Sample    1    2    3    4    5
Before   73   79   86   88   78
After    20   27   29   36   17

Set up appropriate hypotheses and carry out a significance test for these data. (It is not possible for cooking to increase the amount of vitamin C.)
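A minimal Python sketch (not part of the original notes, which do this problem by hand and in SPSS) of the paired analysis: apply the one-sample t procedure to the before-minus-after differences.

```python
# Problem 7.31: paired t test on the before-minus-after vitamin C differences.
# Illustrative sketch only; variable names are mine.
from math import sqrt
from statistics import mean, stdev
from scipy import stats

before = [73, 79, 86, 88, 78]                # vitamin C (mg/100 g) before cooking
after = [20, 27, 29, 36, 17]                 # vitamin C (mg/100 g) after cooking
d = [b - a for b, a in zip(before, after)]   # observed differences (loss during cooking)

n = len(d)
dbar, s_d = mean(d), stdev(d)                # mean and std. deviation of the differences
t_stat = dbar / (s_d / sqrt(n))              # one-sample t statistic for H0: mean difference = 0
p_value = stats.t.sf(t_stat, df=n - 1)       # one-sided P-value for Ha: mean difference > 0

print(f"mean difference = {dbar}, t = {t_stat:.1f}, one-sided P-value = {p_value:.2g}")
# The mean loss is 55 mg/100 g with t near 31 on 4 degrees of freedom, so the
# P-value is essentially 0 and we reject H0: cooking reduces the vitamin C content.
```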
Comparing Two Means:

Two-Sample Problems: A situation in which two populations, or two treatments based on separate samples, are compared. A two-sample problem can arise:
- from a randomized comparative experiment that randomly divides the units into two groups and imposes a different treatment on each group;
- from a comparison of random samples selected separately from two different populations.

Note: Do not confuse two-sample designs with matched pairs designs!

Assumptions for Comparing Two Means:
- Two independent simple random samples, from two distinct populations, are compared. The same variable is measured on both samples. The sample observations are independent; neither sample has an influence on the other.
- Both populations are approximately normally distributed.
- The means µ1 and µ2 and the standard deviations σ1 and σ2 of both populations are unknown.

Typically we want to compare two population means by giving a confidence interval for their difference, µ1 − µ2, or by testing the hypothesis of no difference, H0: µ1 − µ2 = 0.

The Two-Sample t Confidence Interval: Suppose that an SRS of size n1 is drawn from a normal population with unknown mean µ1 and that an independent SRS of size n2 is drawn from another normal population with unknown mean µ2. The confidence interval for µ1 − µ2 given by

$(\bar{x}_1 - \bar{x}_2) \pm t^* \sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}$

has confidence level at least C no matter what the population standard deviations may be. Here, t* is the value for the t(k) density curve with area C between −t* and t*. The value of the degrees of freedom k is approximated by software, or we use the smaller of n1 − 1 and n2 − 1.

Two-Sample t Procedure:

1. Write the hypotheses in terms of the difference between means: H0: µ1 − µ2 = 0 against Ha: µ1 − µ2 > 0, Ha: µ1 − µ2 < 0, or Ha: µ1 − µ2 ≠ 0.

2. Calculate the test statistic. An SRS of size n1 is drawn from a normal population with unknown mean µ1 and an independent SRS of size n2 is drawn from another normal population with unknown mean µ2. To test the hypothesis H0: µ1 − µ2 = 0, the two-sample t statistic is

$t = \dfrac{\bar{x}_1 - \bar{x}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}$

and we use P-values or critical values for the t(k) distribution, where the degrees of freedom k are either approximated by software or are the smaller of n1 − 1 and n2 − 1. (Note: The two-sample t statistic does not have an exact t distribution. Software nevertheless uses a t distribution for two-sample inference, because the statistic is approximately t-distributed with degrees of freedom calculated by a complex formula called the Welch approximation.)

3. Calculate the P-value. Note: Unless we use software, we can only get a range for the P-value. We use the following:
- For Ha: µ1 − µ2 > 0, the P-value is P(T ≥ t).
- For Ha: µ1 − µ2 < 0, the P-value is P(T ≤ t).
- For Ha: µ1 − µ2 ≠ 0, the P-value is 2·P(T ≥ |t|).
(Note: Instead of using the degrees of freedom found by software, you can use the smaller of n1 − 1 and n2 − 1. The resulting procedure is conservative.)

4. State the conclusions in terms of the problem. Choose a significance level such as α = 0.05, then compare the P-value to the α level. If P-value ≤ α, then reject H0. If P-value > α, then fail to reject H0.

Robustness and Use of the Two-Sample t Procedures:

The two-sample t procedures are more robust than the one-sample t methods, particularly when the distributions are not symmetric. They are robust in the following circumstances:
- If the two sample sizes are equal and the two populations the samples come from have similar distributions, then the t distribution is accurate for a variety of distributions even when the sample sizes are as small as n1 = n2 = 5.
- When the two population distributions are different, larger samples are needed.
- If n1 + n2 < 15: use the two-sample t procedures if the data are close to normal. If the data are clearly non-normal or if outliers are present, do not use t.
- If n1 + n2 ≥ 15: the t procedures can be used except in the presence of outliers or strong skewness.
- If n1 + n2 ≥ 40: the t procedures can be used even for clearly skewed distributions.
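To make the two-sample t statistic and the conservative degrees-of-freedom rule concrete, here is a minimal Python sketch (not part of the original notes; the function and variable names are mine). The summary statistics plugged in at the end are the ones that appear in the SPSS output of the SSHA example that follows.

```python
# Two-sample t statistic with the conservative degrees of freedom min(n1 - 1, n2 - 1).
# Illustrative sketch only; software such as SPSS uses the Welch approximation for df.
from math import sqrt
from scipy import stats

def two_sample_t(xbar1, s1, n1, xbar2, s2, n2):
    """Return (t statistic, conservative df) for H0: mu1 - mu2 = 0 from summary statistics."""
    se = sqrt(s1**2 / n1 + s2**2 / n2)   # standard error of xbar1 - xbar2
    t_stat = (xbar1 - xbar2) / se
    df = min(n1 - 1, n2 - 1)             # conservative choice of degrees of freedom
    return t_stat, df

# Summary statistics from the SSHA example below: women (group 1) and men (group 2).
t_stat, df = two_sample_t(140.56, 26.262, 18, 121.25, 32.852, 20)
p_one_sided = stats.t.sf(t_stat, df=df)  # P-value for Ha: mu1 - mu2 > 0
print(f"t = {t_stat:.3f}, conservative df = {df}, one-sided P-value = {p_one_sided:.3f}")
# t is about 2.01, matching SPSS's "equal variances not assumed" line; the conservative
# df of 17 gives a slightly larger P-value than SPSS's Welch df of 35.537, as expected.
```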
Examples:

b. Most studies have found that the mean SSHA score for men is lower than the mean score in a comparable group of women. Test this supposition here. That is, state the hypotheses, carry out the test and obtain a P-value, and give your conclusions.

Using SPSS: Note: The data need to be entered in two columns. In the first column, put all the scores. In the second column, define the grouping variable as gender and enter "women" next to the women's scores and "men" next to the men's scores.

Analyze > Compare Means > Independent-Samples T Test. Move "score" into the "Test Variable(s)" box and "gender" into the "Grouping Variable" box. Click "Define Groups" and enter "women" for group 1 and "men" for group 2. Click "Continue", then "OK".

Group Statistics
         group    N    Mean      Std. Deviation   Std. Error Mean
score    women    18   140.56    26.262           6.190
         men      20   121.25    32.852           7.346

Independent Samples Test
                                   Levene's F   Sig.    t       df       Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   95% CI of the Difference (Lower, Upper)
score  Equal variances assumed     1.030        .317    1.986   36       .055              19.306       9.721              (-.410, 39.021)
       Equal variances not assumed                      2.010   35.537   .052              19.306       9.606              (-.185, 38.797)

We use the second line, "Equal variances not assumed," to get the t test statistic, P-value, etc.

c. Give a 95% confidence interval for the mean difference between the SSHA scores of male and female first-year students at this college.

3. Suppose we wanted to compare how students performed on Test 1 versus Test 2 in Stat 301. Below are data for a random sample of 10 students taking Stat 301, along with the printout of the results from running the matched pairs test.

Subject   Test 1   Test 2
1         60       55
2         59       60
3         90       82
4         87       85
5         99       100
6         100      98
7         92       90
8         82       76
9         75       79
10        84       82

Paired Samples Statistics
                   Mean     N    Std. Deviation   Std. Error Mean
Pair 1   Test 1    82.80    10   14.382           4.548
         Test 2    80.70    10   14.507           4.588

Paired Samples Test (Paired Differences, Test 1 − Test 2)
         Mean     Std. Deviation   Std. Error Mean   95% CI of the Difference (Lower, Upper)   t       df   Sig. (2-tailed)
Pair 1   2.100    3.573            1.130             (-.456, 4.656)                            1.859   9    .096

Answer the questions below based on the test.
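For reference, a minimal Python sketch (not part of the original notes) that reproduces the paired-samples printout above for the Test 1 versus Test 2 data.

```python
# Example 3: matched pairs t test comparing Test 1 and Test 2 scores for 10 students.
# Illustrative sketch only; it reproduces the SPSS paired-samples output shown above.
from scipy import stats

test1 = [60, 59, 90, 87, 99, 100, 92, 82, 75, 84]
test2 = [55, 60, 82, 85, 100, 98, 90, 76, 79, 82]

result = stats.ttest_rel(test1, test2)   # paired t test, two-sided by default
print(f"t = {result.statistic:.3f}, two-sided P-value = {result.pvalue:.3f}")
# Matches the printout: t = 1.859 on 9 degrees of freedom with P-value = 0.096, so at
# alpha = 0.05 we fail to reject H0 of no mean difference between the two tests.
```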