Bayesian Stats: Solved Problems on Hypothesis Testing & Poisson, Exams of Statistics

Solved problems from an applied Bayesian statistics course focusing on hypothesis testing using the Bayes factor and the Poisson distribution. Topics include prior odds, the likelihood ratio, posterior odds, the Jeffreys prior, and the effective number of parameters. Students will learn how to calculate the Bayes factor, interpret the results, and understand the implications of informative priors.

M. PHIL. IN STATISTICAL SCIENCE

Thursday, 3 June, 2010    1:30 pm to 3:30 pm

APPLIED BAYESIAN STATISTICS

Attempt no more than THREE questions. There are FOUR questions in total. The questions carry equal weight.

STATIONERY REQUIREMENTS: Cover sheet, Treasury Tag, Script paper
SPECIAL REQUIREMENTS: None

You may not start to read the questions printed on the subsequent pages until instructed to do so by the Invigilator.

1   Derren claims he has extra-sensory perception (ESP) and can guess in advance how a fair coin will land (heads or tails) with a probability θ that is different from 1/2. Let H0 be the hypothesis that he does not have ESP, and H1 be the hypothesis that he does have ESP.

(a) Define the prior odds on H0.

(b) Suppose we have data y and a probability model that gives values for p(y|H0) and p(y|H1). Define the likelihood ratio and show how to obtain the posterior odds on H0.

(c) Suppose the data comprises n flips of a coin, of which Derren got y correct. Assume that the prior distribution for θ is uniform over the interval (0, 1). What is p(y|H1)?

(d) By making a normal approximation to p(y|H0), show that the likelihood ratio (Bayes factor) is approximately
$$\sqrt{\frac{2n}{\pi}} \, \exp\!\left[-\frac{2}{n}\left(y - \frac{n}{2}\right)^{2}\right].$$

(e) Suppose that out of 10,000 flips, Derren gets 5150 right. In a classical statistical sense, is this statistically significant evidence against H0?

(f) What, very approximately, is the Bayes factor between H0 and H1? How do you explain any difference between this and the 'classical' result?

(g) Informally, what might be a more reasonable prior for θ under H1?

(h) Even if further evidence gives a Bayes factor in favour of H1, do you think the posterior odds on H0 should necessarily be less than 1?

[A Beta(a, b) distribution has density $p(\theta \mid a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,\theta^{a-1}(1-\theta)^{b-1}$, θ ∈ (0, 1).]
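For parts (d)-(f), the following minimal Python sketch (not part of the original paper) evaluates the classical test and the approximate Bayes factor under the stated assumptions: the normal approximation from part (d) and the uniform prior on θ from part (c).

import math

n, y = 10_000, 5_150   # data from part (e)

# Classical test: under H0, y ~ Binomial(n, 1/2), approximately Normal(n/2, n/4).
z = (y - n / 2) / math.sqrt(n / 4)
p_value = math.erfc(z / math.sqrt(2))   # two-sided tail probability
print(f"z = {z:.2f}, two-sided p-value = {p_value:.4f}")   # z = 3.00, p about 0.0027

# Approximate Bayes factor B01 = p(y|H0) / p(y|H1) from part (d):
# sqrt(2n/pi) * exp(-(2/n) * (y - n/2)^2).
bf_01 = math.sqrt(2 * n / math.pi) * math.exp(-(2 / n) * (y - n / 2) ** 2)
print(f"approximate Bayes factor B01 = {bf_01:.2f}")   # roughly 0.9

On these numbers the classical result is significant at the 1% level, yet the Bayes factor is close to 1, which is the contrast that part (f) asks you to explain.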
4   A sample of 106 children in Gambia were immunised against Hepatitis B at a baseline visit and then followed up at 3 additional clinic visits. Their level of immunity is measured by their 'titre'. Let $Y_{ij}$ be the log(titre) of child i at clinic visit j at time $t_{ij}$, where $t_{ij}$ is the time since immunisation. $Y_{ij}$ is assumed to be drawn from a Normal($\mu_{ij}, \sigma^2$) distribution. We assume that each child's expected log(titre) $\mu_{ij}$ changes linearly with $\log(t_{ij})$, also depends on the child's baseline log-titre $y_{0i}$, and has a different intercept for each child, so that
$$\mu_{ij} = \alpha_i + \beta \log(t_{ij}) + \gamma y_{0i}, \qquad \alpha_i \sim \text{Normal}(\delta, \tau^2).$$

This is Model 1 and is fitted using the following WinBUGS code:

for (i in 1:106) {
    for (j in 1:3) {
        y[i, j] ~ dnorm(mu[i, j], invsigma2)
        mu[i, j] <- alpha[i] + beta * log(time[i, j]) + gamma * y0[i]
    }
    alpha[i] ~ dnorm(delta, invtau2)
}
invsigma2 ~ dgamma(0.001, 0.001)
beta ~ dunif(-100, 100)
gamma ~ dunif(-100, 100)
delta ~ dunif(-100, 100)
tau ~ dunif(0, 100)
invtau2 <- 1 / (tau * tau)

(a) Explain briefly what will be the effect of assuming the αi's are drawn from a common prior distribution.

(b) How might the convergence be improved?

(c) Explain briefly the prior distributions given to the parameters, in particular why the standard Jeffreys prior is not given to the variance parameter τ².

(d) Why might it be reasonable to assume the baseline log-titre $y_{0i}$ is an observation from a distribution that is Normal($\mu_{0i}, \sigma^2$), where $\mu_{0i}$ is the true baseline titre?

(e) Consider Model 2, in which the regression model is changed to
$$\mu_{ij} = \alpha_i + \beta \log(t_{ij}) + \gamma \mu_{0i}, \qquad \mu_{0i} \sim \text{Normal}(\theta, \psi^2).$$
Why would you want to consider such a model? Draw a rough directed graph for the whole of Model 2. How would you adapt the code if you wanted to fit Model 2? [Do not worry about correct syntax.]

(f) Model 2 gave the following output:

node    mean    sd      MC error   2.5%     median   97.5%    start   sample
beta    -1.064  0.1352  0.001286   -1.329   -1.065   -0.7955  1001    10000
gamma    1.023  0.1145  0.0106      0.7915   1.014    1.231   1001    10000

In Model 3, we fix β = −1, γ = 1. Why might this be a reasonable assumption?

(g) The following table shows the DIC output based on 10000 iterations when fitting Models 2 and 3, where Dbar is the posterior mean of −2 log L:

           Dbar     pD      DIC
Model 2    1128.1   143.6   1271.7
Model 3    1128.3   141.5   1269.8

Interpret these results, in particular the pD column.

(h) Explain why Model 3 could be interpreted as implying that the fraction of titre after time t decreases as 1/t.

END OF PAPER
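As a quick check of the DIC output in Question 4(g) above, the following Python sketch (not part of the original paper) reproduces the arithmetic DIC = Dbar + pD; the nominal parameter count in the comments is an approximate reading of Model 2, not a figure stated in the paper.

# Values read directly from the DIC table in Question 4(g).
models = {
    "Model 2": {"Dbar": 1128.1, "pD": 143.6},
    "Model 3": {"Dbar": 1128.3, "pD": 141.5},
}
for name, m in models.items():
    dic = m["Dbar"] + m["pD"]   # DIC = posterior mean deviance + effective number of parameters
    print(f"{name}: DIC = {m['Dbar']} + {m['pD']} = {dic:.1f}")

# Model 2 nominally contains 106 alpha_i and 106 mu_0i terms plus beta, gamma,
# delta, theta and the variance parameters (well over 200 unknowns), yet pD is
# only about 144: the hierarchical priors shrink the child-specific effects, so
# each contributes less than one effective parameter. Fixing beta and gamma in
# Model 3 reduces pD by roughly 2, consistent with the table.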