Advanced Psychological Statistics I: Factorial ANOVA and Repeated Measures - Prof. Michael, Study notes of Psychology

A portion of lecture notes from a university course on advanced psychological statistics, covering Factorial ANOVA 3 and Repeated Measures 1: power calculations and power equations for factorial ANOVA, with examples; higher-order designs, unbalanced designs, and the difference between fixed and random effects; and an introduction to repeated-measures ANOVA, including the basic ideas, expected mean squares, and a worked example.

Slide 1: Factorial ANOVA 3 and Repeated Measures 1

Advanced Psychological Statistics I
Psychology 502
November 13, 2007

Slide 2: Overview

- Questions?
- Finish up factorial ANOVA
  - Power
  - Higher-order designs
  - Unbalanced design stuff
  - Fixed vs. random effects
- Start repeated measures ANOVA
  - Basic ideas
  - Expected mean squares
  - Example

Slide 3: Power

- All the usual stuff applies
  - Changing $\alpha$ changes power
  - Changing effect size changes power (both the absolute size of the effects and the variance)
  - Changing N (or n) changes power
- Not much different than one-way designs
  - Recall the one-way: $\phi' = \sqrt{\sum \alpha_j^2 / (k\sigma_e^2)}$ and $\phi = \phi'\sqrt{n}$
- Power is computed separately for each factor and for the interaction

Slide 4: Power Equations

- Power for factor A: $\phi'_A = \sqrt{\sum \alpha_j^2 / (J\sigma_e^2)}$, with $\phi_A = \phi'_A \sqrt{Kn}$
- Power for factor B: $\phi'_B = \sqrt{\sum \beta_k^2 / (K\sigma_e^2)}$, with $\phi_B = \phi'_B \sqrt{Jn}$
- Power for the interaction: $\phi'_{A \times B} = \sqrt{\sum (\alpha\beta)_{jk}^2 / (JK\sigma_e^2)}$, with $\phi_{A \times B} = \phi'_{A \times B} \sqrt{n}$
- Same conventions for $\phi'$: small = 0.10, medium = 0.25, large = 0.40

Slide 9: Higher-Order Interactions

- Can be very difficult to interpret
- Simple main effects often become "simple interaction effects"
  - For example: break a three-way up into two two-ways and see if the interactions are the same in both two-ways
- Post hocs on interaction marginals
  - Must re-code to collapse across one or more factors
- Interaction contrasts can be arbitrarily complex, but they are very specific
- No standard way, no strict formula to follow
  - Even more so than for two-way designs

Slide 10: 3-way Interaction

[Figure: two side-by-side panels, C1 and C2, each plotting cell means (0 to 50) against A1, A2, and A3, with separate lines for B1, B2, and B3.]

- SPSS code for the basic ANOVA: UNIANOVA dep BY a b c.

Slide 11: Unbalanced Designs

- So far, we've assumed that n is equal in every cell
- What happens when it isn't?
  - Conceptually, things get messy
  - Pragmatically, this isn't great
- But it changes very little about how you run the ANOVA on the computer and how you interpret the printouts
- If you only remember one thing, remember this: use Type III sums of squares!

Slide 12: The Problem

- When the n are equal in the cells, all effects are independent:
  - That is, the effect of factor A does not depend on the effect of factor B, and vice versa
  - The interaction does not depend on either A or B
- When the n's are not equal, the effects become dependent!
  - The $\alpha_j$'s can depend on the $\beta_k$'s
- If you do the standard computations, this equality does NOT hold:
  $SS_{total} = SS_A + SS_B + SS_{AB} + SS_e$

Slide 13: The Solution (Conceptually)

- Compute the sum of squares for all effects together (called the "model"):
  - Compute $SS_{total}$ and $SS_e$ normally
  - Compute $SS_{model} = SS_{total} - SS_e$
- Compute the SS for each individual effect by subtraction:
  - $SS_{A \times B} = SS_{model} - SS(A) - SS(B)$
  - $SS_A = SS_{model} - SS(B) - SS(A \times B)$
  - $SS_B = SS_{model} - SS(A) - SS(A \times B)$
- Note that $SS_{model} > SS_{A \times B} + SS_A + SS_B$
- This is called "Type III" sums of squares
- We'll walk through an SPSS example of this
  - This is not exactly how the computer does it, but it's close in concept
  - You'll get into this more next semester

Slide 14: Example

- Consider a design with these n's:

          B1   B2   B3   B4   Total
  A1       5    5   20   35      65
  A2      35   24   11    5      75
  Total   40   29   31   40     140

- Look at cells A2B1 and A1B2
  - They contribute to all three ANOVA effects
  - But they're much different in size
  - To which effects should their SS go?
- You can tell stats packages to count them in certain ways

Slide 19: Weighted vs. Unweighted Means

- Horrible choice of terminology
- Consider two groups:
  - G1: 3, 4, 5
  - G2: 15, 16, 17, 18, 19
- Mean of G1 is 4; mean of G2 is 17
- What's the mean of the two groups combined?
  - The "weighted" mean is the sum of all observations divided by N: (3 + 4 + 5 + 15 + 16 + ... + 19) / 8 = 12.125
  - The "unweighted" mean is the mean of the two group means: (4 + 17) / 2 = 10.5
- You can think of Type III sums of squares as ANOVA on unweighted means
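To make the weighted/unweighted distinction concrete, here is a minimal Python sketch (not part of the original slides) of the two calculations on the G1/G2 example above:

    # Weighted vs. unweighted means for the G1/G2 example.
    g1 = [3, 4, 5]
    g2 = [15, 16, 17, 18, 19]

    # "Weighted" mean: every observation counts equally, so the
    # larger group pulls the result toward its own mean.
    weighted = sum(g1 + g2) / (len(g1) + len(g2))              # 12.125

    # "Unweighted" mean: each *group* counts equally.
    unweighted = (sum(g1) / len(g1) + sum(g2) / len(g2)) / 2   # 10.5

    print(weighted, unweighted)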
Slide 20: Weighted Means

- SPSS code: UNIANOVA dep BY a b /PRINT = DESCRIPTIVE.
- Descriptive Statistics (Dependent Variable: DEP):

  A      B       Mean       Std. Deviation     N
  1      1        78.8827      16.4275          5
  1      2       106.9286      14.4850          5
  1      3        85.6158      12.4839         20
  1      4        82.0982      19.7862         35
  1      Total    84.8432      18.1499         65
  2      1        95.3896      15.7489         35
  2      2       116.4315      16.2732         24
  2      3       104.1855      12.1939         11
  2      4        88.6980      16.0930          5
  2      Total   102.9670      18.1740         75
  Total  1        93.3262      16.5673         40
  Total  2       114.7931      16.1508         29
  Total  3        92.2051      15.1605         31
  Total  4        82.9232      19.3068         40
  Total  Total    94.5524      20.2435        140

Slide 21: Unweighted Means

- SPSS code: UNIANOVA dep BY a b /EMMEANS = TABLES(a) /EMMEANS = TABLES(b) /EMMEANS = TABLES(a*b).
- 1. A (Dependent Variable: DEP), with 95% confidence intervals:

  A    Mean      Std. Error   Lower Bound   Upper Bound
  1     88.381      2.826        82.792        93.971
  2    101.176      2.455        96.321       106.032

Slide 22: More Unweighted Means

- 2. B (Dependent Variable: DEP), with 95% confidence intervals:

  B    Mean      Std. Error   Lower Bound   Upper Bound
  1     87.136      3.906        79.410        94.862
  2    111.680      4.016       103.736       119.624
  3     94.901      3.067        88.835       100.967
  4     85.398      3.906        77.672        93.124

- 3. A * B (Dependent Variable: DEP), with 95% confidence intervals:

  A   B    Mean      Std. Error   Lower Bound   Upper Bound
  1   1     78.883      7.307        64.429        93.336
  1   2    106.929      7.307        92.475       121.382
  1   3     85.616      3.653        78.389        92.843
  1   4     82.098      2.762        76.635        87.561
  2   1     95.390      2.762        89.927       100.853
  2   2    116.432      3.335       109.834       123.029
  2   3    104.186      4.926        94.441       113.930
  2   4     88.698      7.307        74.244       103.152

Slide 23: Usage

- The general expectation is that when you report F-tests for an unbalanced ANOVA, you report the tests based on Type III sums of squares
  - This is SPSS's default behavior
  - Generally, use "unweighted" when SPSS gives you a choice on contrasts
- Using anything else is less conservative, so you'd better have a good reason
  - These situations come up only rarely
- However, most descriptives are reported using weighted means (i.e., the raw data)

Slide 24: Handling "Extra" Factors

- Let's say you have a three-way design (A, B, C)
- You have a reliable BxC interaction
- You do simple main effects, but should you do
    TEMPORARY. SELECT IF (B EQ 0). UNIANOVA dep BY c.
  or
    TEMPORARY. SELECT IF (B EQ 0). UNIANOVA dep BY a c.
- What's the difference, and why does it matter?
  - Especially when the design is unbalanced
- Don't leave out factors!

Slide 29: Something for Nothing?

- Obvious advantages:
  - Run fewer subjects
  - High statistical power
- Not quite a free ride:
  - Many research questions do not lend themselves to repeated-measures designs
  - Slightly more difficult to analyze
  - Makes more stringent and complex mathematical assumptions
  - Can make data collection more difficult
- Generally recommended if your research question allows it

Slide 30: Hypothetical Study

- Familiarity and humor: show cartoons to children and collect humor ratings
- Do children find the cartoons less funny over time?

  Subject    1st     2nd     3rd     Mean
  1           6       5       2      4.33
  2           5       5       4      4.67
  3           5       6       3      4.67
  4           6       5       4      5.00
  5           7       3       3      4.33
  6           4       2       1      2.33
  7           4       4       1      3.00
  8           5       7       2      4.67
  Mean       5.25    4.625   2.50    4.125

Slide 31: Linear Model

- $x_{ij} = \mu + \pi_i + \tau_j + \pi\tau_{ij} + e_{ij}$
- $x_{ij}$ is the individual observation
- $\mu$ is the grand mean
- $\pi_i$ is the effect of being subject i
- $\tau_j$ is the effect of being in condition j
- $\pi\tau_{ij}$ is the interaction
- $e_{ij}$ is random error (as always: normal, with mean zero)
- Distributional assumptions:
  - Subject effects are normal with a mean of zero
  - Subject-by-condition interactions are normal with a mean of zero
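As a supplement to the slides, here is a Python sketch (assuming numpy) that estimates the linear-model effects from the cartoon data above and builds the sums of squares defined on slide 33; it reproduces the ANOVA table shown on slide 39:

    import numpy as np

    # Rows = subjects, columns = 1st/2nd/3rd viewing (data from slide 30).
    x = np.array([[6, 5, 2], [5, 5, 4], [5, 6, 3], [6, 5, 4],
                  [7, 3, 3], [4, 2, 1], [4, 4, 1], [5, 7, 2]], dtype=float)
    n, K = x.shape
    grand = x.mean()                          # mu-hat = 4.125

    subj_eff = x.mean(axis=1) - grand         # pi-hat_i, one per subject
    cond_eff = x.mean(axis=0) - grand         # tau-hat_j: 1.125, 0.5, -1.625

    ss_subjects   = K * np.sum(subj_eff ** 2)                # 18.625
    ss_conditions = n * np.sum(cond_eff ** 2)                # 33.25
    ss_total      = np.sum((x - grand) ** 2)                 # 68.625
    ss_sxc        = ss_total - ss_subjects - ss_conditions   # 16.75

    # Conditions are tested against the subject-by-condition interaction.
    F = (ss_conditions / (K - 1)) / (ss_sxc / ((n - 1) * (K - 1)))  # 13.90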
Slide 32: Null Hypotheses

- Again, there are multiple null hypotheses
- No differences between subjects: $\pi_1 = \pi_2 = \ldots = \pi_n = 0$
- No differences between conditions: $\tau_1 = \tau_2 = \ldots = \tau_K = 0$
- No interactions: $\pi\tau_{11} = \pi\tau_{21} = \ldots = \pi\tau_{nK} = 0$
- We will only be testing one of these: $\tau_1 = \tau_2 = \ldots = \tau_K = 0$

Slide 33: Sums of Squares

- As usual, each effect has a sum of squares:
  - $SS_{subjects} = K \sum_i (\bar{x}_{i\cdot} - \bar{x})^2$
  - $SS_{conditions} = n \sum_j (\bar{x}_{\cdot j} - \bar{x})^2$
  - $SS_{S \times C} = \sum_i \sum_j (x_{ij} - \bar{x}_{i\cdot} - \bar{x}_{\cdot j} + \bar{x})^2$
  - $SS_{total} = \sum_i \sum_j (x_{ij} - \bar{x})^2$
- $SS_{subjects} + SS_{conditions} + SS_{S \times C} = SS_{total}$
- Each also has degrees of freedom:
  - Subjects: n - 1; Conditions: K - 1
  - Interaction: (n - 1)(K - 1); Total: nK - 1
- And thus each has a Mean Square

Slide 34: Expected Mean Squares

- Not quite as conventional as for between-subjects ANOVA:
  - $E(MS_{subjects}) = \sigma_e^2 + K\sigma_\pi^2$
  - $E(MS_{conditions}) = \sigma_e^2 + \sigma_{\pi\tau}^2 + n\theta_\tau^2$
  - $E(MS_{S \times C}) = \sigma_e^2 + \sigma_{\pi\tau}^2$
  - where $\theta_\pi^2 = \sum \pi_i^2 / (n-1)$, $\theta_\tau^2 = \sum \tau_j^2 / (K-1)$, and $\theta_{\pi\tau}^2 = \sum (\pi\tau)_{ij}^2 / ((n-1)(K-1))$
- Technically, there is no MSE!
  - That is, there is no term that estimates just $\sigma_e^2$
- So, how do we test hypotheses?
  - Note that $E(MS_{conditions})$ and $E(MS_{S \times C})$ differ only by $n\theta_\tau^2$, so $MS_{S \times C}$ can serve as the error term for the conditions effect

Slide 39: ANOVA Table

- What's the critical value for F(2, 14)?
- What should we conclude?

  Source        SS        df            MS               F
  Subjects      SS_s      n - 1         SS_s/(n-1)       -
  Condition     SS_c      K - 1         SS_c/(K-1)       MS_c/MS_SxC
  Interaction   SS_SxC    (n-1)(K-1)    SS_SxC/df_SxC    -
  Total         SS_t      nK - 1

  Source        SS        df    MS        F
  Subjects      18.625     7     2.661    -
  Condition     33.25      2    16.625    13.90
  Interaction   16.75     14     1.196    -
  Total         68.625    23

Slide 40: Sphericity

- Repeated-measures ANOVA makes somewhat more complex assumptions:
  - The variance of each condition is equal
    - This is the same as between-subjects ANOVA
  - The covariances between all pairs of variables are equal
    - Remember covariance?
- Think of it this way:
  - Create difference scores between all pairs of variables
  - The variances of all those difference scores are assumed to be equal
- Howell notes this is technically "compound symmetry," but most folks call it sphericity

Slide 41: Violating Sphericity

- What happens when the sphericity assumption is violated?
- The Type I error rate is not preserved!
  - Like a t-test with unequal variances and unequal n
  - Not merely a power issue
- The F-test for the effect of conditions is not distributed as F with K - 1 and (n - 1)(K - 1) degrees of freedom
  - However, it is still distributed as an F
  - The worst-case scenario is F with 1 and n - 1 degrees of freedom
    - Very conservative
- Can we do better?

Slide 42: Identifying Non-spherical Data

- There's a measure called epsilon ($\varepsilon$)
- There's more than one way to compute epsilon:
  - Greenhouse-Geisser
  - Huynh-Feldt
  - Howell provides the equations (pp. 454 & 455)
- When the assumptions are perfectly met, epsilon (either one) will be 1.0
- Violations of the assumptions reduce epsilon
  - The more severe the violation, the smaller epsilon is
  - The minimum bound is 1/(K - 1)
    - What is the minimum when K = 2?
    - K - 1 is also what?

Slide 43: Correcting

- How non-spherical do the data have to be to require a correction?
  - Opinions on this subject differ
  - One way of defining a violation:
    - Severe violation: G-G epsilon < 0.65
    - Mild violation: H-F epsilon < 0.85
- The correction:
  - Multiply both d.f. by epsilon
    - Will the d.f. get larger or smaller?
    - This will yield fractional d.f.
  - Compute a new critical value or p-value based on the new degrees of freedom
  - The good news: stats packages will do this for you
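As a rough illustration of the correction (a sketch, not the course's own worked example), the following Python code computes the Greenhouse-Geisser epsilon from the condition covariance matrix via Box's formula and re-evaluates the p-value at the reduced degrees of freedom; numpy/scipy and the helper name gg_epsilon are assumptions, and the F value is taken from the table on slide 39:

    import numpy as np
    from scipy import stats

    def gg_epsilon(x):
        # Box/Greenhouse-Geisser epsilon from a subjects-by-conditions matrix.
        n, K = x.shape
        S = np.cov(x, rowvar=False)           # K x K covariance of conditions
        C = np.eye(K) - np.ones((K, K)) / K   # centering matrix
        D = C @ S @ C                         # double-centered covariance
        return np.trace(D) ** 2 / ((K - 1) * np.trace(D @ D))

    x = np.array([[6, 5, 2], [5, 5, 4], [5, 6, 3], [6, 5, 4],
                  [7, 3, 3], [4, 2, 1], [4, 4, 1], [5, 7, 2]], dtype=float)
    n, K = x.shape
    eps = gg_epsilon(x)                       # 1.0 only if sphericity holds exactly
    F = 13.90                                 # from the ANOVA table above

    # Multiply both df by epsilon, then get the corrected (larger) p-value.
    df1, df2 = eps * (K - 1), eps * (n - 1) * (K - 1)
    p = stats.f.sf(F, df1, df2)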
Slide 44: Contrasts

- Can again have contrasts on the levels of the within-subjects variable(s)
- However, contrasts work somewhat differently (mathematically) in repeated-measures designs
- A contrast can actually be thought of as an entirely new variable
  - Consider: d1, d2, and d3 are the d.v.'s
  - Trends (e.g., linear, quadratic) on these are reasonable
  - However, we can compute the contrast for each subject, rather than on means of groups of subjects
  - Then test whether the mean of the new contrast variable is zero
- How do we test if the mean of a variable is zero?
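The slide's closing question points at a one-sample t-test. A sketch of the idea on the cartoon data, where the weights are the standard linear-trend coefficients for three levels and everything else is illustrative:

    import numpy as np
    from scipy import stats

    x = np.array([[6, 5, 2], [5, 5, 4], [5, 6, 3], [6, 5, 4],
                  [7, 3, 3], [4, 2, 1], [4, 4, 1], [5, 7, 2]], dtype=float)
    weights = np.array([-1.0, 0.0, 1.0])      # linear trend across viewings

    # One contrast score per subject: each child's own 3rd-minus-1st change.
    L = x @ weights

    # One-sample t-test of whether the contrast's mean is zero.
    t, p = stats.ttest_1samp(L, popmean=0.0)  # mean(L) = -2.75: ratings decline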