Research Methods: Reliability and Validity in Experimental Designs

An overview of key concepts in research methods, focusing on reliability and validity in experimental designs. Topics include randomization, matched samples, experimenter bias, descriptive and inferential statistics, reliability measures, and validity types. Learn about the importance of these concepts in producing accurate and unbiased research results.

Typology: Exams

2023/2024

Statistics (Research & Data Analysis in Psychology) Exam 1

independent variable
- has at least two levels that we either manipulate or observe (quasi-independent) to determine its effects on the dependent variable
- participants in each level are thought to either display or be exposed to the conditions of this variable in a consistent manner
- ex: caffeine vs. no caffeine; gender (quasi-independent)

dependent variable
- the outcome variable that we hypothesize to be related to, or caused by, changes in the independent variable
- dependent variables are only in experimental studies

quasi-independent variable
- the variable that has been manipulated, though there was no random assignment into groups

quasi-dependent variable
- the variable that we think was impacted by the quasi-independent variable

characteristics of an ideal experiment
1. the participants in each of your conditions (groups) are the same
2. all conditions go through the same procedure, except for what you are manipulating
3. the sample is representative of the population
4. a reliable and valid measure of the DV

randomization
- assigning participants to conditions with no visible pattern
- the best way to assign participants to an experimental condition (a short random-assignment sketch follows below)

matched sample
- match conditions (groups) based on particular characteristics (e.g. age, income, gender)
- useful if true randomization isn't possible

experimenter bias
- if the experimenter is aware of the hypothesis and knows which condition a participant is in, he or she might bias the experiment by acting differently in one condition (e.g. smiling more) than in the others
- the experimenter may not be aware of their biased actions

descriptive statistics
- information about a sample or about everyone of interest in the study
- summarizes and describes your data
- ex: mean, standard deviation, range
- ex: the average grade on the test was 85.4 out of 100

inferential statistics
- conclusions about a population based on information from a smaller sample
- uses the results of your data to make predictions or generalizations about a larger population
- ex: dancers have a higher IQ than golfers; private school graduates earn more than public school graduates

reliability
- consistency in measurement
- your measure gives the same result even if measured at different times or in different ways

inter-rater reliability
- consistency in scores between observers/measurers

available samples might not be representative of the population
- ex: the college student effect

volunteer bias
- specific individuals might be more apt to volunteer responses
- ex: the Kinsey effect

species issues
- ethical concerns might make non-human (animal) research necessary, but how generalizable are the results to the human population?
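The randomization entry above is easy to show concretely. Below is a minimal Python sketch, not part of the original notes, that randomly assigns participants to two conditions; the participant IDs and condition labels ("caffeine" / "no caffeine") are hypothetical and chosen only to mirror the example in the list.

```python
import random

# Hypothetical participant IDs (illustrative only).
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.shuffle(participants)              # no visible pattern in who ends up where
half = len(participants) // 2
assignment = {
    "caffeine": participants[:half],      # condition labels are made up for this sketch
    "no caffeine": participants[half:],
}

for condition, group in assignment.items():
    print(condition, "->", group)
```

A matched-sample design would instead pair participants on characteristics such as age or income first, then split each pair across the two conditions.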
experiment design issues
- real-world applications
- the sample's prior exposure effects (e.g. rats that ran in previous experiments)

reliability and validity
- you can have reliability without validity
- you cannot have a valid measurement without it being reliable
- if you're measuring the same variable, it's reliability
- if you're looking at the relationship between two different variables, it's validity

nominal/categorical variables
- variables that have no numerical meaning; the values are categories
- qualitative
- ex: religion (atheist = 1, christian = 2, jewish = 3)
- ex: gender, favorite ice cream

numerical/quantitative variables
- variables whose levels are represented as numbers
- rank order, interval, ratio; averages and other arithmetic transformations make sense

ordinal variables
- variables that have a natural order, but the precise distance between values is not defined
- a type of quantitative variable
- ex: grade levels, rank in school, age groups

interval variables
- variables whose values have a meaningful and consistent distance between them
- no true zero
- a type of quantitative variable
- ex: IQ scores, temperature in Fahrenheit, Likert responses

ratio variables
- interval variables where there is a true zero and where ratios of values make sense
- a type of quantitative variable
- ex: income, height, temperature in Kelvin

population
- a complete set of people, events, or scores that we're interested in

parameter
- the measurable characteristic of the population that is of interest

sample
- a subset or portion of that population

statistic
- the measurable characteristic of the sample that we're interested in
- a measure of some attribute of a sample
- samples can be one element or a large collection of elements

why not just test a population?
- size, time, money/expense, and ethical issues

what do I need to worry about with my sample?
- is it representative of the population?
- is my sample large enough? (see the sampling sketch below)

statistics
- the science of collecting, analyzing, and interpreting data

variables
- characteristics or conditions that change in value across individuals or situations
- often arbitrary (e.g. happiness scales)
- measurements of some variables are not perfectly consistent
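To make the population/parameter vs. sample/statistic distinction concrete, here is a small Python sketch that is not from the notes: the population of scores is invented for illustration, and the sketch draws random samples of different sizes and compares each sample mean (a statistic) to the population mean (a parameter).

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical population of 10,000 exam scores (illustrative only).
population = [random.gauss(75, 10) for _ in range(10_000)]
parameter = mean(population)                  # population mean = a parameter

for n in (10, 100, 1000):
    sample = random.sample(population, n)     # a subset of the population
    statistic = mean(sample)                  # sample mean = a statistic
    print(f"n={n:4d}  sample mean={statistic:.2f}  population mean={parameter:.2f}")
```

Larger samples tend to land closer to the parameter, which is the "is my sample large enough?" worry in the list above.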
correlation - relationship
comparison - difference
change - influence

constructs
- a hypothetical mechanism or attribute that a researcher is interested in exploring
- aka variables

operational definition
- the systematic process of obtaining or measuring a construct

levels
- the values that a construct can take on based on its operational definition
- conclusions about theories are limited by the accuracy of the operational definitions being used
- most psychological concepts already have operational definitions for a lot of the variables that we might want to use, so researching your topic is a good thing

observational studies
- behavioral reports or self-report studies, or a combination of the two

experimental studies
- research experiments designed to discover causal relationships between various factors

ways data can lie
- measurement error, unreliability of measurement tools, and randomness in the data

expected probability
- a measure of the actual probability of an outcome if the outcomes were random and repeated many times
- we are looking to find data that allows us to reject the null hypothesis (accept the alternative) or retain the null (fail to reject the null hypothesis)
- see the coin-flip simulation sketch below

hypothesis testing
- statistically verifying that an outcome is so unlikely that it has to be more than just chance

null hypothesis
- a statement that implies no effects, differences, or similarities on or between variables within a population of interest
- basically states that the results were obtained merely due to chance or that there is no relationship
- we are always trying to disprove the null hypothesis, but we assume it's true and then find evidence that it is false
- identified as H0
- ex: coin flip exercise, lucky days, hot streaks

statistical questions related to the null hypothesis
- differences in a variable between individuals and/or groups
- similarities between variables
- differences in outcomes based on another variable

alternative hypothesis
- a statement that implies that the null hypothesis is false (untrue)
- the opposite of the null hypothesis
- identified as Ha

type I error
- the null hypothesis is true in reality, but your data leads you to reject it
- causes: random chance, oversensitive tests, unethical behavior (intentional demand characteristics, biased scoring, or non-random assignment)

type II error
- the null hypothesis is false in reality, but your data leads you to retain it
- causes: random chance, poor/unreliable measures or design, overly stringent tests, accidental impacts (third variables, participant bias)

how to prevent type I & II errors
- use statistical tests that minimize the chance of making these errors (minimize demand characteristics, placebo effects, participant bias, and unreliability)
- verify conclusions with replication and large, diverse samples
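The coin-flip exercise mentioned under the null hypothesis lends itself to a quick simulation. The Python sketch below is not from the notes, and the number of flips, the "surprising" threshold, and the number of trials are arbitrary choices for illustration: it estimates the expected probability of getting 15 or more heads in 20 flips when the null hypothesis of a fair coin is true.

```python
import random

random.seed(42)

FLIPS = 20          # flips per simulated experiment (arbitrary choice)
THRESHOLD = 15      # "surprising" number of heads (arbitrary choice)
TRIALS = 100_000    # number of simulated experiments

extreme = 0
for _ in range(TRIALS):
    heads = sum(random.random() < 0.5 for _ in range(FLIPS))  # fair coin: null is true
    if heads >= THRESHOLD:
        extreme += 1

print(f"P(>= {THRESHOLD} heads in {FLIPS} fair flips) ~ {extreme / TRIALS:.4f}")
```

If that probability is very small, observing 15+ heads would count as evidence against the null hypothesis of a fair coin; rejecting it when the coin really is fair would be exactly a Type I error.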
***stuff to bring to exam
- calculator, 1 page of scratch paper, pen, pencil
- no scantron necessary

***test details
- 100 points worth of questions
- each question will be labeled with how many points it's worth
- questions will address logic or methodology and require calculations

***what you need to know
- focus on the material covered in class (not based on the book)

class 2
- "statistics" and "statistic"
- variables, operational definitions, levels (how they relate / how they differ)
- be able to describe the difference between observational and experimental studies
- know about all the different types of variables that exist and be able to identify them (independent vs dependent, quasi-variables, NOIR)

[…]
- good for distributions that are skewed or have extreme outliers
- less sampling variability
- doesn't represent all the scores

mean
- the sum of all the scores divided by the number of scores
- x̄ = ∑xi / n (n = number of scores in the group)
- M or x̄ for a sample, µ for the population
- lowest sampling variability
- takes all scores into account
- sensitive to outliers
- doesn't work with nominal or ordinal data

measures of variability
- range, variance, standard deviation
- measures that quantify how close to or far from the mean the data are

range
- the spread of the possible values of the variable of interest
- the difference between the highest and lowest number in a distribution

sums of squares
- the total squared deviation from the mean
- SSx = ∑(xi − x̄)² = ∑xi² − (∑xi)²/n

variance
- sx² = ∑(xi − x̄)²/n = SSx/n

standard deviation
- sx = √(sx²) = √(∑(xi − x̄)²/n) = √(SSx/n)

outliers
- extreme values; always relative to the rest of the distribution
- a data point that can drastically skew measures of central tendency
- the mean is affected by outliers
- the median and mode are resistant to outliers
- variance and standard deviation are very sensitive to outliers because they are based on the mean
- see the worked example below
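As a worked check of the central-tendency and variability formulas above, here is a short Python sketch that is not part of the notes; the quiz scores are made up for illustration. It computes the mean, median, range, sum of squares, variance, and standard deviation (dividing by n, as in the notes), then repeats the calculation with one extreme outlier added.

```python
from math import sqrt
from statistics import mean, median

def describe(scores):
    m = mean(scores)                             # mean: sum of scores / n
    ss = sum((x - m) ** 2 for x in scores)       # sum of squares: sum of squared deviations
    variance = ss / len(scores)                  # SS / n, as in the notes
    sd = sqrt(variance)                          # standard deviation
    return m, median(scores), max(scores) - min(scores), ss, variance, sd

# Hypothetical quiz scores (illustrative only).
scores = [82, 85, 86, 88, 90, 84, 87]
with_outlier = scores + [20]                     # add one extreme low score

for label, data in (("original", scores), ("with outlier", with_outlier)):
    m, med, rng, ss, var, sd = describe(data)
    print(f"{label:13s} mean={m:.1f} median={med:.1f} range={rng} "
          f"SS={ss:.1f} variance={var:.1f} SD={sd:.1f}")
```

The outlier pulls the mean and inflates the variance and standard deviation noticeably, while the median barely moves, which matches the outliers entry above.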