Point Estimates and Confidence Intervals for Population Parameters

Topics: Probability Distributions, Sampling Theory, Inferential Statistics, Descriptive Statistics

These notes introduce point estimates and confidence intervals for estimating population parameters such as the mean, standard deviation, and proportion. They cover point estimates, unbiased estimators, and interval estimates through worked examples, and also discuss consistent estimators and the relationship between confidence levels and confidence intervals.

What you will learn

  • What is the difference between a point estimate and an interval estimate for a population parameter?
  • What is a point estimate for a population mean?
  • How is the sample standard deviation used as a point estimate for the population standard deviation?


Partial preview of the text

LESSON 24: POINT ESTIMATES OF PARAMETERS
What is Our Best Guess?

ESTIMATING POPULATION PARAMETERS

How do we estimate population parameters such as a population mean, standard deviation (SD), variance, or proportion?

PART A: INFERENTIAL STATISTICS

Here is the overview from Lesson 1 (a flowchart in the original; its steps and labels are reproduced below):

A Statistical Experiment
1) Designing an Experiment
2) Collecting Data: from the Population [of interest] (size N elements or members: all adult Americans? all registered voters in California?) down to a Sample (size n elements or members: for a poll? a scientific study? testing products for quality control?)
3) Describing Data (Descriptive Statistics)
4) Interpreting Data, using Probability (Inferential Statistics)

We now enter the realm of inferential statistics, the science of interpreting data. What can a sample tell us about the population from which it is drawn?

PART B: POINT ESTIMATES

If a single value is used to estimate a population parameter, we call that value a point estimate for that parameter.

Let's say we draw a sample from a population.

                         Mean   SD   Variance
  Population (Size N)    µ      σ    σ²
  Sample (Size n)        x̄      s    s²

We use sample statistics to estimate population parameters.

• We will use the sample mean, x̄, as a point estimate for the population mean, µ.
• We will use the sample standard deviation, s, as a point estimate for the population standard deviation, σ.
• We will use the sample variance, s², as a point estimate for the population variance, σ².

Example 1 (Point Estimates)

A large lecture class takes a test. Five of the tests are randomly selected and graded. The grader reports that x̄ = 75 points, s = 15 points, and s² = 225 square points.

• a) What is a point estimate for the population mean of test scores for the entire class?
• b) What is a point estimate for the population standard deviation of test scores for the entire class?
• c) What is a point estimate for the population variance of test scores for the entire class?

§ Solution

• a) The point estimate for the population mean is the sample mean, x̄ = 75 points.
• b) The point estimate for the population standard deviation is the sample standard deviation, s = 15 points.
• c) The point estimate for the population variance is the sample variance, s² = 225 square points. §

PART C: UNBIASED ESTIMATORS

A sample statistic is an unbiased estimator for a population parameter when the expected value of the sample statistic is the value of the population parameter.

A key reason why we use the sample mean as a point estimate for the population mean is that the sample mean is an unbiased estimator for the population mean: E(X̄) = µ.

• What does that mean? Consider all possible samples of size five from the class … and all of their sample means (values of X̄). The average of those sample means would be the population mean, µ.
• Unbiased estimators do not have a tendency to overestimate or underestimate the population parameter.

The sample variance is an unbiased estimator for the population variance. The sample standard deviation is not an unbiased estimator for the population standard deviation, but it is still good enough to use as a point estimate.
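The bias claims in Part C can be checked with a short simulation. The following is a minimal Python/NumPy sketch, not part of the original notes: it assumes a normally distributed class with µ = 75 points and σ = 15 points (values borrowed from Example 1) and repeatedly draws samples of size n = 5. Averaged over many samples, x̄ and s² land close to µ and σ², while s tends to come out slightly below σ.

```python
# Illustrative simulation (not from the notes). The population values mu = 75 and
# sigma = 15 are assumptions echoing Example 1; samples are of size n = 5.
import numpy as np

rng = np.random.default_rng(seed=0)
mu, sigma, n, reps = 75.0, 15.0, 5, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))  # reps samples, each of size n
xbars = samples.mean(axis=1)                     # sample means (x-bar)
svars = samples.var(axis=1, ddof=1)              # sample variances s^2 (divide by n - 1)
sds = samples.std(axis=1, ddof=1)                # sample standard deviations s

print("average of x-bar:", xbars.mean())   # close to 75   -> unbiased for mu
print("average of s^2  :", svars.mean())   # close to 225  -> unbiased for sigma^2
print("average of s    :", sds.mean())     # a bit below 15 -> biased low for sigma
```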
PART D: SAMPLE PROPORTIONS

p denotes a population proportion.

• It could represent a probability, such as the success probability p from a Bin(n, p) distribution.

p̂ denotes a sample proportion. It is an unbiased estimator of p.

                         Proportion
  Population (Size N)    p
  Sample (Size n)        p̂

Calculating a Sample Proportion

A success is a property that we are interested in. If n is a sample size, and if x is the number of successes in the sample, then the sample proportion of successes is given by: p̂ = x/n.

If n is the number of trials in a binomial experiment, and if x is the number of "successful" trials, then our point estimate for p, the success probability per trial in a Bin(n, p) distribution, is also given by: p̂ = x/n. This is the sample proportion of successes among the n trials.

LESSON 25: CONFIDENCE INTERVAL ESTIMATES OF PARAMETERS

PART C: MARGIN OF ERROR

If an interval estimate is symmetric about the point estimate, then we may write the interval in terms of the point estimate and the margin of error.

Margin of Error of a Symmetric Interval Estimate

• The margin of error (denoted by E) is the distance between the point estimate and either limit of the interval.
• It is half the width of the entire interval.
• It is the maximum likely distance between the point estimate and the true value of the population parameter being estimated.

Example 3 (Margin of Error; Revisiting Example 2)

In Example 2, we estimated the population mean of exam scores in a class. Our point estimate was the sample mean x̄ = 74 points. Our interval estimate (70 points, 78 points) was symmetric about the point estimate.

There are three ways to calculate the margin of error E:

• E = (upper limit) − x̄ = 78 − 74 = 4 points
• E = x̄ − (lower limit) = 74 − 70 = 4 points
• E is half the width of the entire interval: E = [(upper limit) − (lower limit)] / 2 = (78 − 70) / 2 = 4 points

The interval estimate (70 points, 78 points) for the population mean µ can be written in terms of the sample mean x̄ and the margin of error E:

µ = x̄ ± E
µ = 74 ± 4 (in points)

We could informally say that we believe that the population mean is about 74 points, give or take 4 points. §

PART D: CONFIDENCE INTERVAL ESTIMATES

We believe that an interval estimate for a population parameter (such as the population mean µ) is likely to contain the value of that parameter. If we attach a confidence level to the interval estimate, then we have a confidence interval ("CI") estimate for the parameter.

A confidence level is a probability that is often expressed as a percent.

• It is the probability that the confidence interval contains the true value of the parameter.
• The most commonly used confidence level is 95%. This is the typically assumed confidence level in published studies and news reports.

Example 4 (Confidence Levels and Confidence Intervals)

Let µ be the population mean I.Q. score of American adults. A psychology professor wants to estimate µ. The professor analyzes a sample of American adult I.Q.s. The sample mean x̄ = 101 points, and a 95% confidence interval (CI) for µ is (98 points, 104 points). Interpret this confidence interval (CI).

§ Solution

We are 95% confident that this interval contains the population mean I.Q. score of American adults.

• Some books (such as Triola's) object to saying that "there is a 95% chance" that µ "falls in the confidence interval." See below. §
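To see what the "95%" in Example 4 is claiming, one can simulate many samples and build an interval from each. The following Python/NumPy sketch is not from the notes: it assumes I.Q.-like values (µ = 100, σ = 15), a sample size of 36, and a z-based interval x̄ ± 1.96·σ/√n with σ treated as known, purely to illustrate the idea. Roughly 95% of the simulated intervals contain µ, and roughly 5% miss it.

```python
# Illustrative coverage simulation (not from the notes). The population values,
# sample size, and the z-based interval formula are assumptions made here
# purely to illustrate what a 95% confidence level means.
import numpy as np

rng = np.random.default_rng(seed=1)
mu, sigma, n, reps = 100.0, 15.0, 36, 10_000
z = 1.96                                    # critical value for 95% confidence
E = z * sigma / np.sqrt(n)                  # margin of error (same for every sample here)

samples = rng.normal(mu, sigma, size=(reps, n))
xbars = samples.mean(axis=1)                # one x-bar (and one interval x-bar +/- E) per sample
covered = (xbars - E <= mu) & (mu <= xbars + E)

print("coverage    :", covered.mean())      # close to 0.95
print("failure rate:", 1 - covered.mean())  # close to 0.05
```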
A surprising truth about statistics: STATISTICS PREDICTS ITS OWN FAILURE RATE! The failure rate for 95% confidence intervals would be about 5%, or 0.05.

PART E: "ALPHA" NOTATION

The confidence level is denoted by 1 − α, where α is the Greek letter alpha. Here, α represents a failure probability in decimal form.

Example 5 ("Alpha" Notation for Confidence Levels)

If a confidence interval has a confidence level of 95%, or 0.95, then 1 − α = 0.95. The failure probability α = 0.05. §

PART F: CONFIDENCE LEVELS and TRADEOFFS

Based on the same sample data, we can use different confidence levels to construct different confidence intervals (CIs) for a population parameter.

Think About It: Which is usually more likely to contain a population mean µ: a wider confidence interval (CI) for µ … or a narrower one? Assume they are both symmetric about the sample mean x̄.

• If we are estimating a population mean µ, a higher confidence level will lead to a wider confidence interval for µ, which is more likely to contain µ.
• A higher confidence level will lead to higher reliability … but lower precision.

[Figure: a 95% CI for µ shown alongside a wider 99% CI for µ.]

LESSON 26: z, t, and χ² DISTRIBUTIONS
What Distributions Will Help Us Construct Confidence Intervals?

PART A: REVIEW OF STANDARD NORMAL (z) DISTRIBUTIONS

The standard normal (z) distribution is the normal distribution with mean 0 and standard deviation 1: Z ~ N(µ = 0, σ = 1).

Properties of Standard Normal (z) Distributions

• They are continuous. A normal random variable can take on any real number as a value.
• The density curve is bell-shaped.
• The total area under the density curve (and above the horizontal axis) is 1. This is true for all probability density curves.
• The mean is 0, and the density curve is symmetric about the mean.
• The standard deviation is 1. It is the distance between the "mean-line" and either inflection point ("IP"); IPs are points where the density curve changes from concave up (curving upward) to concave down (curving downward), or vice versa.
• The tails of the curve fall off rapidly (in terms of standard deviations).

[Figures: a general normal distribution and the standard normal distribution (density curves).]

PART B: t DISTRIBUTIONS

t distributions, also called Student's t distributions, are similar to the standard normal (z) distribution.

Similarities Between t and z Distributions

• They are continuous, and a random variable can take on any real number as a value.
• The density curve is bell-shaped.
• The total area under the density curve is 1.
• The mean is 0, and the density curve is symmetric about the mean.
• The tails of the curve fall off rapidly (in terms of standard deviations).

Differences Between t and z Distributions

• There are infinitely many different t distributions, but there is only one z distribution.
• For a z distribution, the standard deviation σ = 1; for a t distribution, the standard deviation σ > 1.
• The tails of a "t density curve" do not fall off as rapidly as for the "z curve."
• The two inflection points (IPs) for a t distribution aren't as meaningful as for a z distribution. (See Footnote 1.)
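The differences listed above can also be checked numerically. The following SciPy sketch is not part of the notes: the degrees-of-freedom values are arbitrary example choices used only to show that t distributions have heavier tails, standard deviations greater than 1, and 97.5th percentiles above z ≈ 1.96, with all three gaps shrinking as the degrees of freedom grow.

```python
# Illustrative comparison of t and z distributions (not from the notes).
# The degrees-of-freedom values below are arbitrary example choices.
from scipy.stats import norm, t

print("z: P(Z > 2) =", round(norm.sf(2), 4))                # about 0.0228
print("z: SD =", norm.std())                                # exactly 1
print("z: 97.5th percentile =", round(norm.ppf(0.975), 3))  # about 1.96

for df in (3, 10, 30, 100):
    print(f"t (df={df:>3}): P(T > 2) = {t.sf(2, df):.4f}, "  # heavier tail than z
          f"SD = {t.std(df):.3f}, "                          # greater than 1
          f"97.5th pct = {t.ppf(0.975, df):.3f}")            # above 1.96, approaching it as df grows
```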