Unskilled and Unaware—But Why? A Reply to Krueger and Mueller (2002)

Justin Kruger, University of Illinois
David Dunning, Cornell University

Journal of Personality and Social Psychology, 2002, Vol. 82, No. 2, 189–192. DOI: 10.1037//0022-3514.82.2.189

Author Note: The writing of this reply was supported financially by University of Illinois Board of Trustees Grant 1-2-69853 to Justin Kruger and by National Institute of Mental Health Grant R01 56072 to David Dunning. Correspondence concerning this article should be addressed to Justin Kruger, Department of Psychology, 709 Psychology Building, University of Illinois, 603 East Daniel Street, Champaign, Illinois 61820, or to David Dunning, Department of Psychology, Uris Hall, Cornell University, Ithaca, New York 14853-7601. E-mail: jkruger@s.psych.uiuc.edu or dad6@cornell.edu

Abstract

J. Kruger and D. Dunning (1999) argued that the unskilled suffer a dual burden: Not only do they perform poorly, but their incompetence robs them of the metacognitive ability to realize it. J. Krueger and R. A. Mueller (2002) replicated these basic findings but interpreted them differently. They concluded that a combination of the better-than-average (BTA) effect and a regression artifact better explains why the unskilled are unaware. The authors of the present article respectfully disagree with this proposal and suggest that any interpretation of J. Krueger and R. A. Mueller's results is hampered because those authors used unreliable tests and inappropriate measures of relevant mediating variables. Additionally, a regression–BTA account cannot explain the experimental data reported in J. Kruger and D. Dunning or a reanalysis following the procedure suggested by J. Krueger and R. A. Mueller.

In 1999, we published an article (Kruger & Dunning, 1999) suggesting that the skills that enable one to perform well in a domain are often the same skills necessary to recognize good performance in that domain. As a result, when people are unskilled in a domain (as everyone is in one domain or another), they lack the metacognitive skills necessary to realize it. To test this hypothesis, we conducted a series of studies in which we compared perceived and actual skill in a variety of everyday domains. Our predictions were borne out: Across the various studies, poor performers (i.e., those in the bottom quartile of those tested) overestimated their percentile rank by an average of 50 percentile points. Along the way, we also discovered that top performers, although they estimated their raw test scores relatively accurately, slightly but reliably underestimated their comparative performance, that is, their percentile rank among their peers. Although not central to our hypothesis, we reasoned that top performers might underestimate themselves relative to others because they have an inflated view of the competence of their peers, as predicted by the well-documented false consensus effect (Ross, Greene, & House, 1977) or, as Krueger and Mueller (2002) termed it, a social-projection error.

Krueger and Mueller (2002) replicated some of our original findings, but not others. As in Kruger and Dunning (1999), they found that poor performers vastly overestimate themselves and show deficient metacognitive skills in comparison with their more skilled counterparts. Krueger and Mueller also replicated our finding that top performers underestimate their comparative ranking. They did not find, however, that metacognitive skills or social projection mediate the link between performance and miscalibration. Additionally, they found that correcting for test unreliability reduces or eliminates the apparent asymmetry in calibration between top and bottom performers. They thus concluded that a regression artifact, coupled with a general better-than-average (BTA) effect, is a more parsimonious account of our original findings than our metacognitive one is.

In the present article we outline some of our disagreements with Krueger and Mueller's (2002) interpretation of our original findings.
We suggest that the reason the authors failed to find mediational evidence is that they used unreliable tests and inappropriate measures of our proposed mediators. Additionally, we point out that the regression–BTA account is inconsistent with the experimental data we reported in our original article, as well as with the results of a reanalysis of those data using their own analytical procedure.

Does Regression Explain the Results?

The central point of Krueger and Mueller's (2002) critique is that a regression artifact, coupled with a general BTA effect, can explain the results of Kruger and Dunning (1999). As they noted, all psychometric tests involve error variance; thus, "with repeated testing, high and low test scores regress toward the group average, and the magnitude of these regression effects is proportional to the size of the error variance and the extremity of the initial score" (Krueger & Mueller, 2002, p. 184). They go on to point out that "in the Kruger and Dunning (1999) paradigm, unreliable actual percentiles mean that the poorest performers are not as deficient as they seem and that the highest performers are not as able as they seem" (p. 184).

Although we agree that test unreliability can contribute to the apparent miscalibration of top and bottom performers, it cannot fully explain this miscalibration. If it did, then controlling for test reliability, as Krueger and Mueller (2002) do in their Figure 2, should cause the asymmetry to disappear. Although this was the case for the difficult test that Krueger and Mueller used, this was inevitable given that the test was extremely unreliable (Spearman–Brown = .17). On their easy test, which had a moderate reliability of .56, low-scoring participants still overestimated themselves—by approximately 30 percentile points—even after controlling for test unreliability, just as the metacognitive account predicts. When even more reliable tests are used, the regression account is even less plausible. For instance, in Study 4 of Kruger and Dunning (1999), in which test reliability was quite high (Spearman–Brown = .93), controlling for test unreliability following the procedure outlined by Krueger and Mueller failed to change the overall picture. As Figure 1 of this article shows, even after controlling for test unreliability, low-scoring participants continued to overestimate their percentile score by nearly 40 points (and high scorers still underestimated themselves). In sum, although we agree with Krueger and Mueller that measurement error can contribute to some of the apparent miscalibration among top and bottom scorers, it does not, as Figure 1 of this article and Figure 2 of theirs clearly show, account for all of it.
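To see why high reliability leaves little room for a regression artifact, consider a minimal sketch in Python. It assumes the correction takes the classic true-score (Kelley) form, shrinking each observed percentile toward the group mean in proportion to the test's reliability; the bottom-quartile figures used (an actual percentile of 12 and a self-estimate of 60) are hypothetical values chosen for illustration, not numbers from either article.

```python
# A minimal sketch of a regression-artifact correction, assuming the
# classic true-score (Kelley) form: the best estimate of a true score
# shrinks the observed score toward the group mean in proportion to the
# test's reliability. The exact procedure in Krueger and Mueller (2002)
# may differ in detail.

def correct_for_unreliability(observed, reliability, group_mean=50.0):
    """Shrink an observed percentile toward the group mean; a perfectly
    reliable test (reliability = 1.0) leaves it unchanged."""
    return group_mean + reliability * (observed - group_mean)

# Hypothetical bottom-quartile scorer: actual percentile 12, self-estimate 60.
actual_pct = 12.0
self_estimate = 60.0

for rel in (0.17, 0.56, 0.93):  # the Spearman-Brown values cited above
    corrected = correct_for_unreliability(actual_pct, rel)
    print(f"reliability = {rel:.2f}: corrected actual = {corrected:4.1f}, "
          f"overestimate = {self_estimate - corrected:4.1f} points")

# reliability = 0.17: corrected actual = 43.5, overestimate = 16.5 points
# reliability = 0.56: corrected actual = 28.7, overestimate = 31.3 points
# reliability = 0.93: corrected actual = 14.7, overestimate = 45.3 points
#
# Only when the test is grossly unreliable (.17) does the correction absorb
# most of the apparent overestimation; at a reliability of .93 the gap
# survives nearly intact, which is the pattern described in the text above.
```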
Do Metacognition and Social Projection Mediate Miscalibration?

Krueger and Mueller (2002) did more than merely suggest an alternative interpretation of our data; they also called our interpretation into question. Specifically, although they found evidence that poor performers show lesser metacognitive skills than top performers, they failed to find that these deficiencies mediate the link between performance and miscalibration.

The fact that these authors failed to find mediational evidence is hardly surprising, however, in light of the fact that the tests they used to measure performance, as the authors themselves recognized, were either moderately unreliable or extremely so. It is difficult for a mediator to be significantly correlated with a crucial variable, such as performance, when that variable is not measured reliably.

In addition, even if the tests were reliable, we would be surprised if the authors had found evidence of mediation, because their measures of metacognitive skills did not adequately capture what that skill is. Metacognitive skill, traditionally defined, is the ability to anticipate or recognize accuracy and error (Metcalfe & Shimamura, 1994). Krueger and Mueller (2002) operationalized this variable by correlating, across items, participants' confidence in their answers and the accuracy of those answers. The higher the correlation, the better they deemed the individual's metacognitive skills. There are several problems with this measure, however. Principal among them is the fact that a high correlation between judgment and reality does not necessarily imply high accuracy, nor does a low correlation imply the opposite. To see why, consider an example inspired by Campbell and Kenny (1999) of two weather forecasters, Rob and Laura. As Table 1 shows, although Rob's predictions are perfectly correlated with the actual temperatures, Laura's are more accurate: Whereas Rob's predictions are off by an average of 48 degrees, Laura's are off by a mere 7.

How can this be? Correlational measures leave out two important components of accuracy. The first is getting the overall level of the outcome right, and this is something on which Rob is impaired. The second is ensuring that the variance of the predictions is in harmony with the variance of the outcome, depending on how strongly they are correlated (Campbell & Kenny, 1999). Correlational measures miss both these components. However, deviational measures, that is, ones that simply assess on average how much predictions differ from reality, do take these two components into account. We suspect that this fact, coupled with the problem of test unreliability, is the reason the deviational measures of metacognition we used in our studies mediated the link between performance and miscalibration, whereas the correlational measure used by Krueger and Mueller (2002) did not.[1]

Note that this point applies equally well to Krueger and Mueller's (2002) social-projection measure (how well others are doing) as it does to their metacognition measure (how well oneself is doing).

[1] Krueger and Mueller's (2002) operationalization of metacognitive accuracy is problematic on other grounds. As researchers in metacognition have discovered, different correlational measures of skill (e.g., Pearson's r, gamma) often produce very different results when applied to the exact same data (for an excellent discussion, see Schwartz & Metcalfe, 1994).
[Figure 1: Regression of estimated performance on actual performance before and after correction for unreliability (based on data from Kruger & Dunning, 1999, Study 4).]

Table 1
Comparison of the Prediction Skills of Two Hypothetical Weather Forecasters

Day          Actual temperature (°F)   Rob's forecast (°F)   Laura's forecast (°F)
Monday                 70                      20                     65
Tuesday                80                      35                     75
Wednesday              60                       5                     70
Thursday               70                      20                     75
Friday                 90                      50                     80
Pearson r                                    1.00                    .73
Average deviation from actual (°F)             48                      7
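To make the contrast concrete, the short Python sketch below recomputes both measures from the Table 1 forecasts. It illustrates the general point about correlational versus deviational accuracy metrics; it is not an analysis from either paper.

```python
# Recompute the two accuracy measures from Table 1: Pearson's r (a purely
# correlational measure) and the average absolute deviation (a deviational
# measure). Rob tracks the day-to-day temperature swings perfectly yet is
# far off in level; Laura correlates less strongly but is far more accurate.

from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def mean_abs_dev(xs, ys):
    """Average absolute difference between forecasts and outcomes."""
    return mean(abs(x - y) for x, y in zip(xs, ys))

actual = [70, 80, 60, 70, 90]   # Monday-Friday, degrees Fahrenheit
rob    = [20, 35,  5, 20, 50]
laura  = [65, 75, 70, 75, 80]

for name, forecast in (("Rob", rob), ("Laura", laura)):
    print(f"{name:5s}  r = {pearson_r(actual, forecast):.2f}   "
          f"avg deviation = {mean_abs_dev(actual, forecast):.0f} degrees")

# Rob    r = 1.00   avg deviation = 48 degrees
# Laura  r = 0.73   avg deviation = 7 degrees
```

A deviational measure thus registers Rob's large constant bias, to which a correlational measure is entirely blind.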