MATH 201 Probability & Statistics COMPLETED EXAM 2024
1. A random sample of 100 students from a university has a mean GPA of 3.2 and a standard deviation of 0.5. What is the 95% confidence interval for the mean GPA of the population?

Answer: The 95% confidence interval for the mean GPA of the population is given by the formula: mean +/- z * (standard deviation / sqrt(sample size)), where z is the critical value for a 95% confidence level, approximately 1.96. Plugging in the given values: 3.2 +/- 1.96 * (0.5 / sqrt(100)) = 3.2 +/- 0.098. Therefore, the 95% confidence interval is (3.102, 3.298).

Rationale: A confidence interval is a range of values that is likely to contain the true population parameter with a certain level of confidence. The width of the interval depends on the sample size, the standard deviation, and the confidence level: the larger the sample size, the smaller the standard deviation, and the lower the confidence level, the narrower the interval.

2. A fair coin is tossed 10 times. What is the probability of getting exactly 6 heads?

Answer: The probability of getting exactly 6 heads in 10 tosses of a fair coin is given by the binomial formula: (n choose k) * p^k * (1-p)^(n-k), where n is the number of trials, k is the number of successes, and p is the probability of success in each trial. Here n = 10, k = 6, and p = 0.5, so the probability is (10 choose 6) * 0.5^6 * (1-0.5)^(10-6) = 210 * 0.015625 * 0.0625 = 0.205078125. Therefore, the probability of getting exactly 6 heads in 10 tosses of a fair coin is about 0.205.

Rationale: The binomial distribution models the number of successes in a fixed number of independent trials with a constant probability of success in each trial. The formula gives the probability of getting exactly k successes out of n trials.

... average. The expected value of an exponential variable is inversely proportional to the rate parameter, and its variance is inversely proportional to the square of the rate parameter.

A researcher wants to test whether there is a relationship between gender and political affiliation in a sample of 100 voters. The observed frequencies are shown in the table below. Use a chi-square test to test the null hypothesis that gender and political affiliation are independent at a 0.05 significance level. Report the test statistic, the degrees of freedom, the p-value, and the conclusion.

| Gender | Democrat | Republican | Independent | Total |
|--------|----------|------------|-------------|-------|
| Male   | 18       | 32         | 10          | 60    |
| Female | 22       | 8          | 10          | 40    |
| Total  | 40       | 40         | 20          | 100   |

Answer: The expected frequency for each cell is the row total times the column total divided by the grand total. For example, the expected frequency for male Democrats is (60 x 40) / 100 = 24. The chi-square test statistic is the sum of (observed - expected)^2 / expected over all cells:

X^2 = [(18 - 24)^2 / 24] + [(32 - 24)^2 / 24] + [(10 - 12)^2 / 12] + [(22 - 16)^2 / 16] + [(8 - 16)^2 / 16] + [(10 - 8)^2 / 8]
X^2 = 1.5 + 2.67 + 0.33 + 2.25 + 4 + 0.5
X^2 = 11.25

The degrees of freedom are (number of rows - 1) x (number of columns - 1) = (2 - 1) x (3 - 1) = 2. The p-value is the probability of obtaining a chi-square value equal to or greater than the test statistic under the null hypothesis. Using a chi-square table or a calculator, the p-value is approximately 0.0036. Since the p-value is less than the significance level of 0.05, we reject the null hypothesis and conclude that there is a relationship between gender and political affiliation in the sample.
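The three calculations above can be checked programmatically. The following is a minimal Python sketch, assuming NumPy and SciPy are available (the variable names are illustrative and not part of the original exam), that reproduces the confidence interval, the binomial probability, and the chi-square test of independence.

```python
import numpy as np
from scipy import stats

# 1. 95% confidence interval for the mean GPA (n = 100, mean = 3.2, sd = 0.5)
n, mean, sd = 100, 3.2, 0.5
z = stats.norm.ppf(0.975)                          # critical value, approximately 1.96
margin = z * sd / np.sqrt(n)
print("95% CI:", (mean - margin, mean + margin))   # (3.102, 3.298)

# 2. Probability of exactly 6 heads in 10 tosses of a fair coin
print("P(X = 6):", stats.binom.pmf(6, 10, 0.5))    # about 0.205

# 3. Chi-square test of independence: gender vs. political affiliation
observed = np.array([[18, 32, 10],
                     [22,  8, 10]])
chi2, p, dof, expected = stats.chi2_contingency(observed)
print("X^2 =", chi2, "df =", dof, "p =", p)        # X^2 = 11.25, df = 2, p ~ 0.0036
```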
A coin is tossed 100 times and the number of heads and tails are recorded. The observed frequencies are shown in the table below. Use a chi-square test to test the null hypothesis that the coin is fair at a 0.01 significance level. Report the test statistic, the degrees of freedom, the p-value, and the conclusion.

| Outcome  | Heads | Tails | Total |
|----------|-------|-------|-------|
| Observed | 45    | 55    | 100   |
| Expected | 50    | 50    | 100   |

Answer: The chi-square test statistic is the sum of (observed - expected)^2 / expected for each cell:

X^2 = [(45 - 50)^2 / 50] + [(55 - 50)^2 / 50] = (25/50) + (25/50) = 1

This is a goodness-of-fit test with k = 2 categories, so the degrees of freedom are k - 1 = 1. The p-value is the probability of obtaining a chi-square value equal to or greater than the test statistic under the null hypothesis. Using a chi-square table or a calculator, the p-value is approximately 0.3173. Since the p-value is greater than the significance level of 0.01, we fail to reject the null hypothesis and conclude that there is no evidence that the coin is unfair.

A teacher wants to test whether there is a difference in the performance of students who took an online course and those who took a face-to-face course. The teacher randomly selects 20 students from each group and administers a final exam. The summary statistics are shown in the table below. Use a chi-square test to test the null hypothesis that there is no difference in the performance of students who took different types of courses at a 0.05 significance level. Report the test statistic, the degrees of freedom, the p-value, and the conclusion.

| Course Type        | Online | Face-to-face | Total |
|--------------------|--------|--------------|-------|
| Mean Score         | 75     | 80           | 77.5  |
| Standard Deviation | 10     | 12           | 11.18 |
| Sample Size        | 20     | 20           | 40    |

Answer: Because the data are summarized as sample means and standard deviations rather than counts, a chi-square test is not the appropriate procedure here; the standard test for comparing two group means is a two-sample t-test with pooled variance. The pooled variance is the weighted average of the sample variances:

s^2 = [(n1 - 1) x s1^2 + (n2 - 1) x s2^2] / (n1 + n2 - 2)
s^2 = [(20 - 1) x 10^2 + (20 - 1) x 12^2] / (20 + 20 - 2)
s^2 = (1900 + 2736) / 38
s^2 = 122

The t statistic is the difference in sample means divided by its standard error:

t = (75 - 80) / sqrt(122 x (1/20 + 1/20)) = -5 / 3.49 = -1.43

The degrees of freedom are n1 + n2 - 2 = 38. The two-tailed p-value is approximately 0.16. Since the p-value is greater than the significance level of 0.05, we fail to reject the null hypothesis and conclude that these data do not provide evidence of a difference in performance between students who took the online course and those who took the face-to-face course.
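As a check on the two tests above, here is a short Python sketch, assuming SciPy is installed; the two-sample comparison uses the pooled t-test described in the answer rather than the chi-square procedure named in the question.

```python
from scipy import stats

# Goodness-of-fit test: is the coin fair? (45 heads, 55 tails in 100 tosses)
chi2, p = stats.chisquare(f_obs=[45, 55], f_exp=[50, 50])
print("X^2 =", chi2, "p =", p)            # X^2 = 1.0, df = 2 - 1 = 1, p ~ 0.317

# Pooled two-sample t-test from summary statistics: online vs. face-to-face scores
t, p = stats.ttest_ind_from_stats(mean1=75, std1=10, nobs1=20,
                                  mean2=80, std2=12, nobs2=20,
                                  equal_var=True)
print("t =", t, "p =", p)                 # t ~ -1.43, df = 38, p ~ 0.16
```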
What is the coefficient of determination (R^2) for a simple linear regression model? How can it be interpreted? How can it be calculated from the correlation coefficient (r)?

Answer: The coefficient of determination (R^2) is the proportion of the variance in the dependent variable that is explained by the independent variable. It can be interpreted as a measure of how well the regression line fits the data. It can be calculated from the correlation coefficient (r) by squaring it: R^2 = r^2.

What is the difference between a residual and an error in a regression model? How can they be used to assess the quality of the model?

Answer: A residual is the difference between an observed value of the dependent variable and the value predicted by the fitted regression line; an error (or disturbance) is the difference between an observed value and the value given by the true, unknown population regression line. Errors are unobservable, while residuals can be computed from the data and serve as estimates of the errors. The residuals can be used to assess the quality of the model by checking their distribution, mean, variance, and correlation with the independent variable. Ideally, the residuals should be normally distributed, have a mean of zero, have a constant variance, and be uncorrelated with the independent variable.

What is multicollinearity in a multiple linear regression model? How can it affect the estimation and interpretation of the model parameters? How can it be detected and corrected?

Answer: Multicollinearity is a situation where two or more independent variables in a multiple linear regression model are highly correlated with each other. It affects the estimation and interpretation of the model parameters by inflating their standard errors, making them less precise and less significant, and it can make the model unstable and sensitive to small changes in the data. It can be detected by calculating the variance inflation factor (VIF) for each independent variable, which measures how much that variable's coefficient variance is increased by the correlation; a common rule of thumb is that a VIF above 10 indicates a serious multicollinearity problem. It can be corrected by dropping some of the correlated variables, transforming or combining them, or using regularization techniques such as ridge or lasso regression.

What is heteroscedasticity in a regression model? How can it affect the estimation and interpretation of the model parameters? How can it be detected and corrected?

Answer: Heteroscedasticity is a situation where the variance of the errors in a regression model is not constant across values of the independent variable, violating one of the assumptions of ordinary least squares (OLS) regression. The OLS coefficient estimates remain unbiased, but they are no longer efficient, and the usual standard errors are biased, which invalidates hypothesis tests and confidence intervals based on them. It can be detected by plotting the residuals against the predicted values or the independent variable, or by using formal tests such as the Breusch-Pagan or White test. It can be corrected by using weighted least squares (WLS) regression, transforming the dependent or independent variables, or using heteroscedasticity-robust standard errors.
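The regression diagnostics discussed above can be illustrated with a short Python sketch. This assumes NumPy, pandas, and statsmodels are installed, and it uses synthetic data invented purely for illustration (none of it comes from the exam); it shows R^2 = r^2 for a simple regression, VIF for detecting multicollinearity, the Breusch-Pagan test for heteroscedasticity, and robust standard errors as one remedy.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan

# Synthetic data: y depends on x1; x2 is deliberately almost identical to x1,
# which creates multicollinearity between the two predictors.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)
y = 2 + 3 * x1 + rng.normal(size=200)

# R^2 = r^2 for a simple regression of y on a single predictor
simple = sm.OLS(y, sm.add_constant(x1)).fit()
r = np.corrcoef(x1, y)[0, 1]
print("simple R^2:", simple.rsquared, "r^2:", r ** 2)

# Multiple regression with both (collinear) predictors
X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2}))
model = sm.OLS(y, X).fit()

# Variance inflation factors: values above roughly 10 suggest serious multicollinearity
for i, name in enumerate(X.columns):
    if name != "const":
        print("VIF", name, variance_inflation_factor(X.values, i))

# Breusch-Pagan test for heteroscedasticity (small p-value suggests non-constant variance)
bp_stat, bp_pvalue, _, _ = het_breuschpagan(model.resid, model.model.exog)
print("Breusch-Pagan p-value:", bp_pvalue)

# Heteroscedasticity-robust (HC3) standard errors as one possible remedy
print(model.get_robustcov_results(cov_type="HC3").summary())
```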