Statistics 131C: Hypothesis Testing and Distributions

An introduction to hypothesis testing, including the concepts of null and alternative hypotheses, critical regions, power functions, Type I and Type II errors, level of significance, p-values, and the equivalence between hypothesis tests and confidence intervals. It also covers optimal tests for a simple null against a simple alternative, uniformly most powerful tests, the monotone likelihood ratio property, tests for normal families with known and unknown variances, two-sample t-tests, and F distributions.

Spring 2009 Statistics 131C Handout 1

1. Null and alternative hypotheses: Given a family of probability distributions P = {P_θ : θ ∈ Ω}, a pair of null and alternative hypotheses provides a partition of Ω: test H0 : θ ∈ Ω0 against H1 : θ ∈ Ω1, where Ω = Ω0 ∪ Ω1 and Ω0 ∩ Ω1 = ∅.

2. Critical region: Suppose that we observe a random sample X = (X1, ..., Xn) from the distribution P_θ, with θ ∈ Ω. The sample space S is the set of all possible outcomes of X. The critical region C of a (nonrandomized) test δ is a subset of S such that H0 is rejected if X ∈ C and accepted if X ∈ C^c.

3. Power function: Let C be the critical region of the test δ. The power function of δ is defined as π(θ | δ) = P(X ∈ C | θ), for θ ∈ Ω.

4. Type I and Type II error: A Type I error is committed when H0 is rejected even though θ ∈ Ω0; a Type II error is committed when H0 is accepted even though θ ∈ Ω1. Thus, if θ ∈ Ω0, P(Type I error) = π(θ | δ), and if θ ∈ Ω1, P(Type II error) = 1 − π(θ | δ).

5. Level of significance: The level of significance of the test δ is defined as α(δ) = sup_{θ ∈ Ω0} π(θ | δ). It is controlled at a pre-specified value α0 (0 < α0 < 1).

6. p-value: The p-value measures the strength of evidence against the null hypothesis in the observed data. It is defined as the smallest level of significance α0 at which the null hypothesis is rejected based on the observed data.

7. Equivalence between hypothesis tests and confidence intervals: For each x ∈ S, let ω(x) be the set of all θ0 ∈ Ω for which the level 1 − γ test δ_{θ0} (for 0 < γ < 1) accepts H0 : θ = θ0 when X = x is observed. Then P(θ0 ∈ ω(X) | θ = θ0) ≥ γ; that is, ω(X) is a confidence set with confidence coefficient at least γ.

8. Optimal test for a simple null against a simple alternative: Test H0 : θ = θ0 against H1 : θ = θ1.
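The definitions in items 3–6 (power function, Type I/II errors, level, p-value) can be computed for a concrete test. A minimal sketch in Python; the Binomial(20, p) model, the cutoff X ≥ 14, and the observed value are illustrative choices, not from the handout:

```python
from scipy.stats import binom

# Illustrative setup: X ~ Binomial(n=20, p); test H0: p = 0.5 against H1: p = 0.7.
# Critical region C = {X >= 14}: reject H0 when at least 14 successes are observed.
n, p0, p1, cutoff = 20, 0.5, 0.7, 14

# Power function pi(p | delta) = P(X in C | p); binom.sf(k, ...) is P(X > k).
def power(p):
    return binom.sf(cutoff - 1, n, p)

alpha = power(p0)      # P(Type I error), since here Omega_0 = {0.5}
beta = 1 - power(p1)   # P(Type II error) at the alternative p = 0.7

# p-value for an observed x: the smallest alpha_0 at which H0 is rejected,
# i.e. P(X >= x_obs | p0) for this one-sided critical region.
x_obs = 15
p_value = binom.sf(x_obs - 1, n, p0)
```

Note that enlarging the critical region lowers β at the cost of raising α, which is the trade-off item 8 formalizes.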
Suppose that the joint densities under the null and alternative hypotheses are denoted by f_j(x) = ∏_{i=1}^n f(x_i | θ_j), j = 0, 1. Let a, b > 0 be constants, and let δ* be the test procedure that accepts H0 if a f0(x) > b f1(x) and accepts H1 if a f0(x) < b f1(x). Then for every test procedure δ, a α(δ*) + b β(δ*) ≤ a α(δ) + b β(δ), where β(δ) = P(Type II error). A consequence of this result is the Neyman–Pearson lemma, which gives the form of the optimal (power-maximizing) level α0 test for these hypotheses.

9. Uniformly most powerful test: A test procedure δ* is uniformly most powerful (UMP) for H0 : θ ∈ Ω0 against H1 : θ ∈ Ω1 at the level of significance α0 if α(δ*) ≤ α0 and, for every other test procedure δ with α(δ) ≤ α0, it is true that π(θ | δ) ≤ π(θ | δ*) for every θ ∈ Ω1. In general a UMP test may not exist.

10. Monotone likelihood ratio property: Let f_n(x | θ) denote the joint p.m.f. or joint p.d.f. of the observations X1, ..., Xn. Let T = r(X) be a statistic such that for every two values θ1, θ2 ∈ Ω with θ1 < θ2, the ratio f_n(x | θ2)/f_n(x | θ1) depends on x only through r(x) and is an increasing function of r(x) over the range of possible values of r(x). Then the family f_n(x | θ) is said to have the monotone likelihood ratio (MLR) property in T = r(X). Examples of families with an MLR include N(µ, 1), N(0, σ²), Bernoulli(p), Poisson(λ), and Exponential(θ).

11. UMP test for a one-sided alternative: Suppose that f_n(x | θ) has an MLR in the statistic T = r(X), and that we want to test H0 : θ ≤ θ0 against H1 : θ > θ0. Let c and α0 be constants such that P(T ≥ c | θ = θ0) = α0. Then the test procedure that rejects H0 if T ≥ c and accepts H0 if T < c is a UMP test of these hypotheses at the level of significance α0.

12. UMP test for a normal family with known variance: Let X1, ..., Xn ~ N(µ, σ²), with σ² known. Test H0 : µ ≤ µ0 against H1 : µ > µ0.
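Item 11's recipe can be made concrete for this normal-mean family. Under µ = µ0, X̄n ~ N(µ0, σ²/n), so solving P(X̄n ≥ c | µ0) = α0 gives c = µ0 + (σ/√n) Φ⁻¹(1 − α0). A sketch in Python, where µ0, σ, n, and α0 are illustrative values:

```python
import numpy as np
from scipy.stats import norm

# Illustrative values (not from the handout)
mu0, sigma, n, alpha0 = 0.0, 2.0, 25, 0.05

# Critical value solving P(Xbar_n >= c | mu = mu0) = alpha0
c = mu0 + sigma / np.sqrt(n) * norm.ppf(1 - alpha0)

# Power function pi(mu | delta) = P(Xbar_n >= c | mu)
#                               = 1 - Phi((c - mu) / (sigma / sqrt(n)))
def power(mu):
    return norm.sf((c - mu) / (sigma / np.sqrt(n)))
```

By construction, power(µ0) equals α0, and the power is strictly increasing in µ.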
The UMP level α0 test exists for every α0 ∈ (0, 1); it rejects H0 if X̄n > c = µ0 + (σ/√n) Φ⁻¹(1 − α0). The power function is strictly increasing, with value α0 at µ = µ0. However, no UMP test exists for the hypotheses H0 : µ = µ0 against H1 : µ ≠ µ0.

13. Unbiased test: A test procedure δ for testing H0 : θ ∈ Ω0 against H1 : θ ∈ Ω1 is said to be unbiased if for every pair of parameter values θ0 and θ1 such that θ0 ∈ Ω0 and θ1 ∈ Ω1, we have π(θ0 | δ) ≤ π(θ1 | δ). In many situations where a UMP test does not exist, we typically look for an unbiased test with "good" power behavior.

14. Likelihood ratio test: To test H0 : θ ∈ Ω0 against H1 : θ ∈ Ω1, consider the likelihood ratio statistic Λ(X) = sup_{θ ∈ Ω1} f_n(X | θ) / sup_{θ ∈ Ω0} f_n(X | θ). A likelihood ratio test rejects H0 if Λ(X) > k for some k > 0.

15. Test for a normal family with unknown variance: Suppose that X1, ..., Xn are i.i.d. N(µ, σ²) with σ² unknown, and that we want to test H0 : µ ≤ µ0 against H1 : µ > µ0. In this case, the level α0 likelihood ratio test is the one-sided t-test, which rejects H0 if U > c = T⁻¹_{n−1}(1 − α0), where
U = √n (X̄n − µ0) / √( Σ_{i=1}^n (Xi − X̄n)² / (n − 1) ),
and T_{n−1} denotes the c.d.f. of the t distribution with n − 1 degrees of freedom. It is an unbiased test. The power function depends on the parameters only through the noncentrality parameter ψ = √n (µ − µ0)/σ. Indeed, π(µ, σ² | δ) = 1 − T_{n−1}(c | ψ), where T_{n−1}(· | ψ) is the c.d.f. of the noncentral t distribution with n − 1 degrees of freedom and noncentrality parameter ψ. The p-value of the test, when U = u is observed, is sup_{µ ≤ µ0, σ² > 0} P(U ≥ u | µ, σ²) = P(U ≥ u | µ0, σ²) = 1 − T_{n−1}(u). For the two-sided alternative, H0 : µ = µ0 against H1 : µ ≠ µ0, the level α0 likelihood ratio test reduces to the test that rejects H0 if |U| ≥ c = T⁻¹_{n−1}(1 − α0/2). This is also an unbiased test.
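The one-sided t-test of item 15 can be sketched in Python and cross-checked against scipy's built-in routine; the simulated sample (n = 15 draws from a shifted normal) is an illustrative assumption, not data from the handout:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.8, scale=1.0, size=15)  # illustrative simulated sample
mu0, alpha0, n = 0.0, 0.05, len(x)

# U = sqrt(n) (Xbar_n - mu0) / s, with s^2 = sum (X_i - Xbar_n)^2 / (n - 1)
xbar = x.mean()
s = np.sqrt(((x - xbar) ** 2).sum() / (n - 1))
U = np.sqrt(n) * (xbar - mu0) / s

# Reject H0 if U > c = T^{-1}_{n-1}(1 - alpha0); p-value is 1 - T_{n-1}(u)
c = stats.t.ppf(1 - alpha0, df=n - 1)
p_value = stats.t.sf(U, df=n - 1)
reject = U > c

# Cross-check against scipy's one-sided one-sample t-test
t_stat, p_scipy = stats.ttest_1samp(x, popmean=mu0, alternative="greater")
```

Rejecting when U > c is equivalent to rejecting when the p-value falls below α0, since T_{n−1} is strictly increasing.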