Statistical Estimation: Method of Moments and Maximum Likelihood Estimation - Prof. Yi Cha, Assignments of Statistics

Solutions to homework problems related to statistical estimation, focusing on the method of moments and maximum likelihood estimation. Topics include moment estimation, method of moments equations, maximum likelihood estimation, and the relationship between maximum likelihood estimation and method of moments. The document also covers various distributions, such as the binomial and hypergeometric distributions.

Typology: Assignments

Uploaded on 09/02/2009

STAT610 - HWK Solution 5

2.1.1 Let N1, N2, N3 be the numbers of individuals of the three types, so that N1 + N2 + N3 = n. The corresponding probabilities are θ², 2θ(1−θ), (1−θ)².

(a) Setting θ̂² = N1/n and 2θ̂(1−θ̂) = N2/n gives

  θ̂ = θ̂² + 2θ̂(1−θ̂)/2 = N1/n + N2/(2n).

(b) θ̂/(1−θ̂) = [N1/n + N2/(2n)] / [1 − N1/n − N2/(2n)] = (2N1 + N2)/(2n − 2N1 − N2).

(c) E(X) = p3 − p1 = (1−θ)² − θ² = 1 − 2θ. Applying the method of moments,

  1 − 2θ̂ = X̄ = (N3 − N1)/n  ⇒  θ̂ = (n − N3 + N1)/(2n) = N1/n + N2/(2n) = T3,

using N1 + N2 + N3 = n in the last step.

2.1.3 If X ∼ β(α1, α2), then

  E(X) = α1/(α1+α2),  E(X²) = α1(α1+1)/[(α1+α2)(α1+α2+1)].

Let μ̂1 = (∑Xi)/n and μ̂2 = (∑Xi²)/n. The method-of-moments estimates of (α1, α2) solve

  α1/(α1+α2) = μ̂1,  α1(α1+1)/[(α1+α2)(α1+α2+1)] = μ̂2,

which gives

  α̂1 = μ̂1(μ̂1 − μ̂2)/(μ̂2 − μ̂1²),  α̂2 = (1 − μ̂1)(μ̂1 − μ̂2)/(μ̂2 − μ̂1²).

2.1.5 X1, …, Xn are i.i.d. Bernoulli(θ). Then

  V(θ0, θ) = E_θ0 ψ(X, θ) = nθ0/θ − n(1−θ0)/(1−θ).

V(θ0, θ) = 0 ⇒ θ = θ0, so θ0 is the unique solution and ψ is an estimating equation. Solving ψ(X, θ̂) = 0 gives θ̂ = S/n.

2.1.17 The best linear predictor has coefficients

  b1 = Cov(Y, Z)/Var(Z) = [E(YZ) − E(Y)E(Z)] / [E(Z²) − (EZ)²],  a1 = E(Y) − b1 E(Z).

The method-of-moments estimates of a1 and b1 are obtained by plugging in the sample moments:

  b̂1 = [(∑YiZi)/n − Ȳ Z̄] / [(∑Zi²)/n − Z̄²],  â1 = Ȳ − b̂1 Z̄.

2.2.1 The contrast used in this problem is ρ(θ) = ∑(Yi − (θ/2)ti²)². Then

  (d/dθ) ρ(θ̂) = 0  ⇒  −∑Yiti² + (θ̂/2)∑ti⁴ = 0  ⇒  θ̂ = 2∑Yiti² / ∑ti⁴.

2.2.2 Assume Pr(Zi = zi, Yi = yi) = 1/n, i = 1, …, n. Then E(Z) = Z̄, E(Y) = Ȳ, and

  Var(Z) = E(Z − Z̄)² = (1/n)∑(Zi − Z̄)²,
  Cov(Z, Y) = E[(Z − Z̄)(Y − Ȳ)] = (1/n)∑(Zi − Z̄)(Yi − Ȳ).

From Theorem 1.4.3, the best linear predictor is Y = β1 + β2 Z, where

  β2 = Cov(Y, Z)/Var(Z) = ∑(Zi − Z̄)(Yi − Ȳ) / ∑(Zi − Z̄)²,  β1 = E(Y) − β2 E(Z) = Ȳ − β2 Z̄.

2.2.16 (a) Since θ̂ is the MLE for θ, Lx(θ̂) ≥ Lx(θ*) for all θ* ∈ Θ.
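The method-of-moments formulas above are easy to verify numerically. The sketch below is not part of the original solutions; it simulates a β(2, 5) sample (the parameter values are made up for illustration) and checks that the Problem 2.1.3 estimates α̂1, α̂2 recover the true parameters:

```python
import random

# Illustrative check of the 2.1.3 method-of-moments estimates for beta(2, 5).
random.seed(0)
alpha1_true, alpha2_true = 2.0, 5.0
xs = [random.betavariate(alpha1_true, alpha2_true) for _ in range(200_000)]

n = len(xs)
mu1 = sum(xs) / n                   # first sample moment, (sum Xi)/n
mu2 = sum(x * x for x in xs) / n    # second sample moment, (sum Xi^2)/n

# Estimates derived in the solution: both share denominator mu2 - mu1^2,
# the (biased) sample variance.
alpha1_hat = mu1 * (mu1 - mu2) / (mu2 - mu1 ** 2)
alpha2_hat = (1 - mu1) * (mu1 - mu2) / (mu2 - mu1 ** 2)

print(alpha1_hat, alpha2_hat)
```

With 200,000 draws the estimates typically land within a few hundredths of (2, 5); the denominator μ̂2 − μ̂1² is strictly positive whenever the sample is not constant, so the formulas are well defined.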
The map η = h(θ) is one-to-one, so Lx(h⁻¹(η)) = p(x, η). Then for any η* = h(θ*) ∈ h(Θ),

  p(x, h(θ̂)) = Lx(θ̂) ≥ Lx(θ*) = p(x, η*).

Therefore h(θ̂) is the MLE for η.

(b) Let Θ(ω) = {θ ∈ Θ : q(θ) = ω}; then Lx(ω) = sup{Lx(θ) : θ ∈ Θ(ω)}, and by definition

  ω_MLE = arg sup_{ω∈Ω} Lx(ω) = arg sup_{ω∈Ω} sup{Lx(θ) : θ ∈ Θ(ω)}.

Let ω̂ = q(θ̂), so θ̂ ∈ Θ(ω̂). Since θ̂ is the MLE of θ,

  Lx(ω̂) = sup{Lx(θ) : θ ∈ Θ(ω̂)} = Lx(θ̂) ≥ sup{Lx(θ) : θ ∈ Θ(ω*)} = Lx(ω*)  for all ω* ∈ Ω.

Therefore ω̂ = q(θ̂) is the MLE for ω = q(θ).

2.2.22 The likelihood of the hypergeometric distribution is

  Lx(b) = C(b, x) C(N−b, n−x) / C(N, n) = c · b!(N−b)! / [(b−x)!(N−b−n+x)!],

where c is a constant that does not depend on b. Then

  Lx(b+1)/Lx(b) = [(b+1)!(N−b−1)! / ((b+1−x)!(N−b−1−n+x)!)] / [b!(N−b)! / ((b−x)!(N−b−n+x)!)]
               = (b+1)(N−b−n+x) / [(b+1−x)(N−b)]
               = 1 + [x(N+1) − n(b+1)] / [(b+1−x)(N−b)].

The ratio exceeds 1 exactly when n(b+1) < x(N+1), so Lx(b) increases in b up to that point and decreases afterward; hence the MLE is b̂ = ⌊x(N+1)/n⌋.
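The likelihood-ratio argument in 2.2.22 can be confirmed by brute force. In the sketch below (not part of the original solutions; N, n, x are made-up illustrative values), the ratio Lx(b+1)/Lx(b) exceeds 1 exactly when n(b+1) < x(N+1), which pins the maximizer at ⌊x(N+1)/n⌋; the code compares that closed form against an exhaustive search:

```python
from math import comb

# Illustrative values: population size N, sample size n, observed successes x.
N, n, x = 100, 20, 7

def likelihood(b):
    # L_x(b) = C(b, x) * C(N-b, n-x) / C(N, n); math.comb returns 0 when k > n,
    # so infeasible b simply get likelihood 0.
    return comb(b, x) * comb(N - b, n - x) / comb(N, n)

# Exhaustive maximizer over the feasible range x <= b <= N - n + x.
b_hat = max(range(x, N - n + x + 1), key=likelihood)

# Closed form implied by the likelihood ratio: the ratio exceeds 1 exactly
# when n*(b+1) < x*(N+1), so the likelihood peaks at floor(x*(N+1)/n).
b_closed = (x * (N + 1)) // n
print(b_hat, b_closed)
```

Here x(N+1)/n = 7·101/20 = 35.35, so both the search and the closed form give b̂ = 35. When x(N+1)/n is an integer, Lx(b̂−1) = Lx(b̂) and the maximizer is not unique.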