Hypothesis Testing and Likelihood Ratio in Statistical Inference - Prof. Jem Corcoran, Assignments of Mathematical Statistics

Solutions to Problem Set Ten, covering hypothesis testing, likelihood ratios, sufficient statistics, and the Neyman–Pearson Lemma in statistical inference. Includes examples of best tests, UMP tests, and the distribution of test statistics for various problems.

Typology: Assignments


Uploaded on 02/13/2009


APPM 4/5520 Solutions to Problem Set Ten

1. The joint pmf is
$$f(\vec{x};\lambda) = \prod_{i=1}^{n} \frac{e^{-\lambda}\lambda^{x_i}}{x_i!} = \frac{e^{-n\lambda}\lambda^{\sum x_i}}{\prod_i x_i!}.$$
So the likelihood ratio is
$$\lambda(\vec{x};0.1,0.5) = \frac{f(\vec{x};0.1)}{f(\vec{x};0.5)} = e^{-n(0.1-0.5)}\left(\frac{0.1}{0.5}\right)^{\sum x_i} = e^{0.4n}\,(0.2)^{\sum x_i}.$$
By the Neyman–Pearson Lemma, we will have a best test of size $\alpha = 0.08$ if we solve
$$0.08 = P\!\left(e^{0.4n}(0.2)^{\sum X_i} \le k;\, H_0\right) = P\!\left((0.2)^{\sum X_i} \le k_1;\, H_0\right) = P\!\left(\left(\sum X_i\right)\ln 0.2 \le \ln k_1;\, H_0\right).$$
Dividing by $\ln 0.2$, which is negative, we get
$$0.08 = P\!\left(\sum X_i \ge k_2;\, H_0\right).$$
Now, under $H_0$, the $X_i$ are iid Poisson(0.1), so $Y = \sum_{i=1}^{10} X_i$ is Poisson(1) (since $(10)(0.1) = 1$). Having already taken into account the distribution of $Y$ under $H_0$, we want to solve $0.08 = P(Y \ge k_2)$ for $k_2$, or equivalently, $0.92 = P(Y < k_2)$. Try a few values of $k_2$:
$$P(Y < 0) = 0$$
$$P(Y < 1) = P(Y = 0) = \frac{e^{-1}(1)^0}{0!} \approx 0.367879$$
$$P(Y < 2) = P(Y=0) + P(Y=1) = e^{-1} + e^{-1} \approx 0.735759$$
$$P(Y < 3) = e^{-1} + e^{-1} + \tfrac{1}{2}e^{-1} \approx 0.919699 \approx 0.92.$$
So the best test of size $\alpha = 0.08$ is to reject $H_0$ if $\sum X_i \ge 3$.

2. First, a sufficient statistic. The joint pdf is
$$f(\vec{x};\theta) = \prod_{i=1}^{n} f(x_i;\theta) = \prod_{i=1}^{n} \theta x_i^{\theta-1} I_{(0,1)}(x_i) = \theta^n \left(\prod_{i=1}^{n} x_i\right)^{\theta-1} \prod_{i=1}^{n} I_{(0,1)}(x_i) = \theta^n \left(\prod_{i=1}^{n} I_{(0,1)}(x_i)\right) \exp\!\left[(\theta-1)\ln\!\left(\prod x_i\right)\right].$$
By the one-parameter exponential family criterion, $S = \ln\!\left(\prod X_i\right) = \sum \ln X_i$ is sufficient for $\theta$ (and complete!).

Now for the UMP test. We consider first the simple versus simple hypotheses $H_0: \theta = 6$ versus $H_1: \theta = \theta_1$ for some $\theta_1 < 6$.
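Before continuing, the critical value found in problem 1 is easy to double-check numerically. A minimal sketch using only the Python standard library (the helper names `poisson_pmf` and `tail` are mine, not from the course):

```python
import math

def poisson_pmf(j, lam):
    """P(Y = j) for Y ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** j / math.factorial(j)

def tail(c, lam):
    """P(Y >= c), computed as 1 minus the cdf up to c - 1."""
    return 1.0 - sum(poisson_pmf(j, lam) for j in range(c))

# Under H0 the X_i are iid Poisson(0.1), so Y = X_1 + ... + X_10 ~ Poisson(1).
lam = 10 * 0.1
for c in range(5):
    print(f"P(Y >= {c}) = {tail(c, lam):.6f}")
# P(Y >= 3) is about 0.0803, so rejecting when the sum is at least 3
# gives (approximately) the required size alpha = 0.08.
```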
The likelihood ratio is
$$\lambda(\vec{x};6,\theta_1) = \frac{f(\vec{x};6)}{f(\vec{x};\theta_1)} = \left(\frac{6}{\theta_1}\right)^n \left(\prod x_i\right)^{(6-1)-(\theta_1-1)} = \left(\frac{6}{\theta_1}\right)^n \left(\prod x_i\right)^{6-\theta_1}.$$
We wish to solve
$$\alpha = P\!\left(\lambda(\vec{X};6,\theta_1) \le k;\, H_0\right) = P\!\left(\left(\frac{6}{\theta_1}\right)^n \left(\prod X_i\right)^{6-\theta_1} \le k;\, H_0\right) = P\!\left(\left(\prod X_i\right)^{6-\theta_1} \le \left(\frac{\theta_1}{6}\right)^n k\right)$$
$$= P\!\left(\ln\left(\prod X_i\right)^{6-\theta_1} \le \ln\!\left[\left(\frac{\theta_1}{6}\right)^n k\right]\right) = P\!\left((6-\theta_1)\ln\!\left(\prod X_i\right) \le \ln\!\left[\left(\frac{\theta_1}{6}\right)^n k\right]\right)$$
$$= P\!\left(\ln\!\left(\prod X_i\right) \le \frac{1}{6-\theta_1}\ln\!\left[\left(\frac{\theta_1}{6}\right)^n k\right]\right) = P\!\left(\ln\!\left(\prod X_i\right) \le k_2\right),$$
where dividing by $6 - \theta_1 > 0$ (recall $\theta_1 < 6$) does not flip the inequality.

[...]

Note that $G \sim \Gamma(6n/2,\, 1/\theta_0)$, so $2G/\theta_0 \sim \Gamma(6n/2,\, 1/2) \sim \chi^2(6n)$. Write
$$\alpha = P(G \ge k_2) = P(2G/\theta_0 \ge 2k_2/\theta_0) = P(W \ge 2k_2/\theta_0),$$
where $W \sim \chi^2(6n)$. This means that $2k_2/\theta_0 = \chi^2_\alpha(6n)$, which gives us $k_2 = \theta_0\,\chi^2_\alpha(6n)/2$. So the best test of size $\alpha$ for testing $H_0: \theta = \theta_0$ versus $H_a: \theta = \theta_a$ with $\theta_a > \theta_0$ is to reject $H_0$ if $\sum X_i \ge \theta_0\,\chi^2_\alpha(6n)/2$.

(b) Since the test from part (a) did not depend on the particular value of $\theta_a$ (though its form did depend on the fact that $\theta_a > \theta_0$), the above test is also UMP for $H_0: \theta = \theta_0$ versus $H_a: \theta > \theta_0$.

5. (a) The joint pdf is
$$f(\vec{x};\mu) = (2\pi)^{-n/2} e^{-\frac{1}{2}\sum_{i=1}^{n}(x_i-\mu)^2}.$$
The ratio in the Neyman–Pearson Theorem for the simple versus simple hypotheses ($H_0: \mu = \mu_0$ versus $H_a: \mu = \mu_a$ for some fixed $\mu_a > \mu_0$) is
$$\lambda = \lambda(\vec{x}) = \frac{f(\vec{x};\mu_0)}{f(\vec{x};\mu_a)} = \frac{e^{-\frac{1}{2}\sum(x_i-\mu_0)^2}}{e^{-\frac{1}{2}\sum(x_i-\mu_a)^2}} = e^{-\frac{1}{2}\left[\sum(x_i-\mu_0)^2 - \sum(x_i-\mu_a)^2\right]}.$$
According to the N–P Theorem, the best test of size $\alpha$ of $H_0: \mu = \mu_0$ versus $H_a: \mu = \mu_a$ is to reject $H_0$ if $\lambda = \lambda(\vec{x}) \le k$, where $k$ is such that $P(\lambda(\vec{X}) \le k;\, H_0) = \alpha$. Well... $\lambda(\vec{x}) \le k$ gives us
$$e^{-\frac{1}{2}\left[\sum(x_i-\mu_0)^2 - \sum(x_i-\mu_a)^2\right]} \le k$$
$$\Downarrow$$
$$-\tfrac{1}{2}\left[\sum(x_i-\mu_0)^2 - \sum(x_i-\mu_a)^2\right] \le \ln k$$
$$\Downarrow$$
$$\sum(x_i-\mu_0)^2 - \sum(x_i-\mu_a)^2 \ge -2\ln k$$
$$\Downarrow$$
$$\sum x_i^2 - 2\mu_0\sum x_i + n\mu_0^2 - \sum x_i^2 + 2\mu_a\sum x_i - n\mu_a^2 \ge -2\ln k$$
$$\Downarrow$$
$$-2\mu_0\sum x_i + 2\mu_a\sum x_i \ge -2\ln k - n\mu_0^2 + n\mu_a^2$$
$$\Downarrow$$
$$-2(\mu_0 - \mu_a)\sum x_i \ge -2\ln k - n\mu_0^2 + n\mu_a^2$$
$$\Downarrow$$
$$\sum x_i \ge \frac{-2\ln k - n\mu_0^2 + n\mu_a^2}{-2(\mu_0 - \mu_a)}$$
(Note the inequality didn't flip in the last step, because $\mu_a > \mu_0 \Rightarrow -2(\mu_0 - \mu_a) > 0$.) So, the form of the test is to reject $H_0$ if $\sum x_i$ gets "large". How large?
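Stepping back for a moment: the distributional fact used earlier (if $G \sim \Gamma$ with shape $6n/2$ and scale $\theta_0$, then $2G/\theta_0 \sim \chi^2(6n)$, which has mean $6n$ and variance $12n$) can be sanity-checked by simulation. A rough sketch using only the Python standard library; the values $n = 5$ and $\theta_0 = 2$ are illustrative, not from the problem:

```python
import random
import statistics

random.seed(1)
n, theta0 = 5, 2.0    # illustrative values, not from the problem
shape = 6 * n / 2     # Gamma shape parameter 6n/2 = 3n

# Draw G ~ Gamma(shape = 3n, scale = theta0), then form W = 2G / theta0.
draws = [2 * random.gammavariate(shape, theta0) / theta0
         for _ in range(100_000)]

# chi-square(6n) has mean 6n and variance 12n.
print(statistics.fmean(draws))     # should be near 6n = 30
print(statistics.variance(draws))  # should be near 12n = 60
```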
Well, we could find $k$ and then compute the right-hand side of that last inequality, or we could go for the value of the right-hand side directly. Call it $k_1$. Then we have
$$\alpha = P(\lambda(\vec{X}) \le k;\, H_0) = P\!\left(\sum X_i \ge k_1;\, H_0\right) = P\!\left(\bar{X} \ge k_1/n;\, H_0\right).$$
When $H_0$ is true, $X_1, \ldots, X_n \overset{iid}{\sim} N(\mu_0, 1)$ and $\bar{X} \sim N(\mu_0, 1/n)$. So
$$\alpha = P(\bar{X} \ge k_1/n;\, H_0) = P\!\left(\frac{\bar{X} - \mu_0}{1/\sqrt{n}} \ge \frac{k_1/n - \mu_0}{1/\sqrt{n}};\, H_0\right) = P\!\left(Z \ge \frac{k_1/n - \mu_0}{1/\sqrt{n}}\right).$$
So we must have
$$\frac{k_1/n - \mu_0}{1/\sqrt{n}} = z_\alpha,$$
and solving for $k_1$ gives us $k_1 = n(z_\alpha/\sqrt{n} + \mu_0)$. So, the best test of size $\alpha$ for $H_0: \mu = \mu_0$ versus $H_a: \mu = \mu_a$ with $\mu_a > \mu_0$ is to reject $H_0$ if $\sum X_i \ge n(z_\alpha/\sqrt{n} + \mu_0)$, or, equivalently, to reject $H_0$ if $\bar{X} \ge \mu_0 + z_\alpha/\sqrt{n}$.

Note: there are no $\mu_a$'s in this test. The only way the value of $\mu_a$ played into the outcome was when we were simplifying $\lambda \le k$: since $\mu_a > \mu_0$, a particular inequality along the way did not flip. Therefore, the test would be the same (reject $H_0$ if $\bar{X} \ge \mu_0 + z_\alpha/\sqrt{n}$) for any $\mu_a > \mu_0$, and our test is uniformly most powerful for testing $H_0: \mu = \mu_0$ versus $H_a: \mu > \mu_0$.

(b) Consider the simple versus simple hypotheses $H_0: \mu = \mu_0$ and $H_a: \mu = \mu_a$ for a particular $\mu_a \ne \mu_0$. Since $-2(\mu_0 - \mu_a)$ can now be either positive or negative, repeating the process in part (a) would cause an inequality to flip or not flip depending on the particular value of $\mu_a \ne \mu_0$ we chose. In other words, for some values of $\mu$ allowed by the alternative hypothesis, the best test rejects $H_0$ when $\sum X_i$ is greater than or equal to some constant; for other values, the best test rejects $H_0$ when $\sum X_i$ is less than or equal to some constant. There is no one best test for all $\mu$'s allowed by $H_a$: there is no uniformly most powerful test!

6. Ugh. When I wrote this problem, I thought I had a simple example of an exponential family distribution where a two-sided UMP test can be found. (Recall that we have at least seen an example showing that UMP tests can exist for a two-sided alternative, but it was not for a one-parameter exponential family.
See the Exam 3 review problems.) Technically, we can get a UMP test for a distribution whose parameter $\theta$ is supported on a closed interval $[a, b]$ by finding the UMP test of $H_0: \theta = a$ versus $H_1: \theta > a$ and then claiming (technically correct, but kind of "shady") that this is a test of $H_0: \theta = a$ versus $H_1: \theta \ne a$. We have several exponential families, for example the geometric with parameter $p$ ($0 \le p \le 1$), where this would work. On the course web site I have put a link to a paper which gives, in the center of its third page, necessary and sufficient conditions for a two-sided UMP test to exist. The paper is interesting, but it will require a bit of translating from notation very different from what we are used to. I say, save it for New Year's Eve or something...

As a final note, we can get a UMP test for the two-sided hypothesis of problem 5 in this problem set if we restrict ourselves to "unbiased" tests. A test is unbiased if
$$P(\text{reject } H_0 \mid H_0 \text{ false}) \ge P(\text{reject } H_0 \mid H_0 \text{ true}).$$
Using this restriction you can get a UMP test. (This is known as a "UMPU test"; the extra "U" is for "unbiased".)
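The unbiasedness condition can be made concrete with the one-sided test from problem 5. Its power function $\pi(\mu) = P(\bar{X} \ge \mu_0 + z_\alpha/\sqrt{n};\, \mu)$ equals $\alpha$ at the null and exceeds $\alpha$ for $\mu > \mu_0$, but falls below $\alpha$ for $\mu < \mu_0$, so that test is biased against the two-sided alternative. A minimal sketch using Python's standard library; the values $\mu_0 = 0$, $n = 25$, $\alpha = 0.05$ are illustrative, not from the problem:

```python
from statistics import NormalDist

mu0, n, alpha = 0.0, 25, 0.05        # illustrative values
z = NormalDist().inv_cdf(1 - alpha)  # upper-alpha point of N(0, 1)
cutoff = mu0 + z / n ** 0.5          # one-sided rejection cutoff for xbar

def power(mu):
    """P(Xbar >= cutoff) when Xbar ~ N(mu, 1/n)."""
    return 1 - NormalDist(mu, 1 / n ** 0.5).cdf(cutoff)

print(power(mu0))        # exactly alpha = 0.05 at the null
print(power(mu0 + 0.5))  # high power for mu above mu0
print(power(mu0 - 0.5))  # far below alpha for mu below mu0: biased
```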