Solutions for Final Exam - Probability and Statistics in Engineering I | IE 23000

Material Type: Exam; Professor: Schmeiser; Class: Probability And Statistics In Engineering I; Subject: IE-Industrial Engineering; University: Purdue University - Main Campus; Term: Spring 2001;

Partial preview of the text

IE 230 — Probability & Statistics in Engineering I        Name __ < KEY > __

Closed book and notes. No calculators. 120 minutes.
Cover page, five pages of exam, and tables for discrete and continuous distributions.

Score ___________________________

X̄ ≡ Σᵢ₌₁ⁿ Xᵢ / n

S_X² ≡ Σᵢ₌₁ⁿ (Xᵢ − X̄)² / (n − 1) = [Σᵢ₌₁ⁿ Xᵢ² − n X̄²] / (n − 1)

E(W) ≡ center of gravity of f_W

COV(X, Y) ≡ E[(X − E(X)) (Y − E(Y))] = E(XY) − E(X) E(Y)

V(X) ≡ COV(X, X)

CORR(X, Y) ≡ COV(X, Y) / √(V(X) V(Y))

E(aX + bY) = a E(X) + b E(Y)

V(aX + bY) = a² V(X) + b² V(Y) + 2ab COV(X, Y)

Final Exam (a), Spring 2001        Schmeiser

1. True or false. (For each, 2 points if correct, 1 point if left blank.)

(a) F  The formula for sample variance, s² ≡ Σᵢ₌₁ⁿ (xᵢ − x̄)² / (n − 1), applies only when the observations xᵢ are from a continuous distribution.

(b) F  "Maximum-likelihood estimation" determines the sample of observations that is most likely for a given assumed distribution.

(c) T  In inferential statistics, conclusions about a population arise from observing a random sample.

(d) T  If X and Y are independent, then COV(X, Y) = 0, regardless of whether the random variables X and Y are continuous.

(e) F  If X and Y are continuous, then COV(X, Y) = 0, regardless of whether the random variables X and Y are independent.

(f) F  The linear combination aX + bY is a discrete random variable regardless of whether X and Y are continuous or discrete.

(g) T  If X is an indicator random variable, then E(X) is a probability.

(h) T  All normal distributions differ only in location and scale; that is, their density functions all have the same shape.
(i) T  Although not useful in practice, a 100% confidence interval for any distribution parameter θ is the real-number line, (−∞, ∞).

2. Result: E(X + Y) = E(X) + E(Y). Assume that X and Y are continuous. Prove the result, providing a reason for each step.

----------------------------------------------------------------------
E(X + Y) = ∫₋∞^∞ ∫₋∞^∞ (x + y) f_XY(x, y) dx dy
                                            definition of expected value
         = ∫₋∞^∞ ∫₋∞^∞ x f_XY(x, y) dx dy + ∫₋∞^∞ ∫₋∞^∞ y f_XY(x, y) dx dy
                                            calculus
         = E(X) + E(Y)                      definition of expected value
----------------------------------------------------------------------

6. (Montgomery and Runger, 3–82) Customers are used to evaluate preliminary product designs. In the past, 90% of highly successful products received good reviews, 50% of moderately successful products received good reviews, and 10% of poor products received good reviews. In addition, 45% of products have been highly successful, 30% have been moderately successful, and 25% have been poor products.

(a) What is the probability that a product receives a good review?

----------------------------------------------------------------------
Let G = "product receives a good review".
Let H = "product is highly successful".
Let M = "product is moderately successful".
Let U = "product is unsuccessful".

We know that
    P(G | H) = 0.9    P(G | M) = 0.5    P(G | U) = 0.1
    P(H) = 0.45       P(M) = 0.30       P(U) = 0.25

Therefore,
    P(G) = P(G | H) P(H) + P(G | M) P(M) + P(G | U) P(U)    total probability
         = (0.9)(0.45) + (0.5)(0.30) + (0.1)(0.25)          substitute
         = 0.405 + 0.150 + 0.025                            simplify
         = 0.58 ←                                           simplify
----------------------------------------------------------------------

(b) If a new design receives a good review, what is the probability that it will be a highly successful product?
----------------------------------------------------------------------
    P(H | G) = P(G | H) P(H) / P(G)    Bayes's rule
             = (0.9)(0.45) / 0.58      substitute
             = 0.405 / 0.58            simplify
             ≈ 0.70 ←                  simplify
----------------------------------------------------------------------

(c) If a product does not receive a good review, what is the probability that it will be a highly successful product?

----------------------------------------------------------------------
    P(H | G′) = P(G′ | H) P(H) / P(G′)              Bayes's rule
              = [1 − P(G | H)] P(H) / [1 − P(G)]    complements
              = [1 − 0.9](0.45) / [1 − 0.58]        substitute
              = (0.1)(0.45) / 0.42                  simplify
              = 0.045 / 0.42                        simplify
              ≈ 0.11 ←                              simplify
----------------------------------------------------------------------

7. A multiple-choice exam has 100 questions, each with five possible answers. Each question is worth one point.

(a) Suppose that a student guesses randomly. What is the distribution of the student's score?

----------------------------------------------------------------------
Let X = "exam score (of a randomly selected student)".
Then X is binomial with n = 100 questions and p = 1/5 = 0.2. ←
----------------------------------------------------------------------

(b) Suppose that the student can eliminate one choice for each question. The student then guesses among the remaining four choices. In terms of expected value, how many points better off is the student for having eliminated the one choice?

----------------------------------------------------------------------
Let Y = "exam score (of a randomly selected student)".
Then Y is binomial with n = 100 questions and p = 1/4 = 0.25.

E(Y) − E(X) = (100)(0.25) − (100)(0.20) = 25 − 20 = 5 points ←
----------------------------------------------------------------------

(c) Suppose that you answer 85 questions correctly.
Also suppose that, to save time, only 20 randomly chosen questions are graded. What is the probability mass function of the number of graded questions that you have correct?

----------------------------------------------------------------------
Let W = "number of correct questions graded".
Then W is hypergeometric with N = 100, n = 20, and K = 85. The pmf is

    f_W(w) = C(85, w) C(15, 20 − w) / C(100, 20)    for w = 5, 6, ..., 20,

and zero elsewhere. (The lower limit is w = 20 − (100 − 85) = 5, since at most 15 of the 20 graded questions can be incorrect.)
----------------------------------------------------------------------

Discrete Distributions: Summary Table (from the Concise Notes)

General discrete
    Range: x₁, x₂, ..., xₙ
    pmf: f(x) = f_X(x) = P(X = x)
    Mean: µ = µ_X = E(X) = Σᵢ₌₁ⁿ xᵢ f(xᵢ)
    Variance: σ² = σ_X² = V(X) = Σᵢ₌₁ⁿ (xᵢ − µ)² f(xᵢ) = E(X²) − µ²

Discrete uniform
    Range: x₁, x₂, ..., xₙ
    pmf: f(x) = 1/n
    Mean: Σᵢ₌₁ⁿ xᵢ / n
    Variance: [Σᵢ₌₁ⁿ xᵢ² / n] − µ²

Equal-space uniform
    Range: x = a, a+c, ..., b
    pmf: f(x) = 1/n, where n = (b − a + c) / c
    Mean: (a + b) / 2
    Variance: c² (n² − 1) / 12

Binomial ("# successes in n Bernoulli trials")
    Range: x = 0, 1, ..., n
    pmf: f(x) = C(n, x) p^x (1 − p)^(n−x)
    Mean: np
    Variance: np(1 − p)

Geometric ("# Bernoulli trials until 1st success")
    Range: x = 1, 2, ...
    pmf: f(x) = p (1 − p)^(x−1)
    Mean: 1/p
    Variance: (1 − p) / p²

Negative binomial ("# Bernoulli trials until r-th success")
    Range: x = r, r+1, ...
    pmf: f(x) = C(x−1, r−1) p^r (1 − p)^(x−r)
    Mean: r/p
    Variance: r(1 − p) / p²

Hypergeometric ("# successes in a sample of size n, drawn without replacement from a population of size N containing K successes")
    Range: x = (n − (N − K))⁺, ..., min{K, n}
    pmf: f(x) = C(K, x) C(N−K, n−x) / C(N, n)
    Mean: np, where p = K/N
    Variance: np(1 − p)(N − n) / (N − 1)

Poisson ("# of counts in a Poisson-process interval")
    Range: x = 0, 1, ...
    pmf: f(x) = e^(−µ) µ^x / x!
    Mean: µ
    Variance: µ
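The numerical answers in Problems 6 and 7 can be cross-checked with a short script. This is a minimal sketch, not part of the original key; the variable names (`p_good`, `prior`, `pmf_w`) are ours, and all numbers are the ones stated in the problems.

```python
from math import comb, isclose

# Problem 6: law of total probability and Bayes's rule.
p_good = {"H": 0.9, "M": 0.5, "U": 0.1}     # P(good review | category)
prior  = {"H": 0.45, "M": 0.30, "U": 0.25}  # P(category)

p_g    = sum(p_good[c] * prior[c] for c in prior)      # (a): P(G) = 0.58
p_h_g  = p_good["H"] * prior["H"] / p_g                # (b): P(H | G)  ~ 0.70
p_h_gc = (1 - p_good["H"]) * prior["H"] / (1 - p_g)    # (c): P(H | G') ~ 0.11

# Problem 7(b): difference of binomial means, E(Y) - E(X) = 100(0.25 - 0.20).
gain = 100 * 0.25 - 100 * 0.20                         # 5 points

# Problem 7(c): hypergeometric pmf with N = 100, n = 20, K = 85.
def pmf_w(w, N=100, n=20, K=85):
    """P(W = w), W = number of correct answers among the n graded questions."""
    if not max(0, n - (N - K)) <= w <= min(K, n):
        return 0.0  # outside the support w = 5, ..., 20
    return comb(K, w) * comb(N - K, n - w) / comb(N, n)
```

Summing `pmf_w(w)` over w = 5, ..., 20 gives 1 (Vandermonde's identity), which confirms the stated support of the pmf.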