Practitioner Report

Some Problems with Randomized Controlled Trials and Some Viable Alternatives

Timothy A. Carey(1)* and William B. Stiles(2)

(1) Centre for Remote Health, A Joint Centre of Flinders University and Charles Darwin University, Alice Springs, Australia
(2) Department of Psychology, Miami University, Oxford, OH, USA

Randomized controlled trials (RCTs) are currently the dominant methodology for evaluating psychological treatments. They are widely regarded as the gold standard, and in the current climate, it is unlikely that any particular psychotherapy would be considered evidence-based unless it had been subjected to at least one, and usually more, RCTs. Despite the esteem in which they are held, RCTs have serious shortcomings. They are the methodology of choice for answering some questions but are not well suited for answering others. In particular, they seem poorly suited for answering questions related to why therapies work in some situations and not in others and how therapies work in general. Ironically, the questions that RCTs cannot answer are the questions that are of most interest to clinicians and of most benefit to patients. In this paper, we review some of the shortcomings of RCTs and suggest a number of other approaches. With a more nuanced understanding of the strengths and weaknesses of RCTs and a greater awareness of other research strategies, we might begin to develop a more realistic and precise understanding of which treatment options would be most effective for particular clients with different problems and in different circumstances. Copyright © 2015 John Wiley & Sons, Ltd.

Key Practitioner Message:
• Practitioners can think more critically about evidence provided by RCTs and can contribute to progress in psychotherapy by conducting research using different methodologies.
Keywords: randomized controlled trial, gold standard, treatment outcomes

In 1967, Gordon Paul suggested that 'the question towards which all outcome research should ultimately be directed is the following: What treatment, by whom, is most effective for this individual with that specific problem, and under which set of circumstances?' (Paul, 1967, p. 111). This question remains relevant today, as indicated by citations in peer-reviewed journals (e.g., Lewis, Simons, & Kim, 2012). However, in many ways, we are no closer to answering this question than we were over four decades ago.

To begin to develop a coherent and accurate understanding of which treatment is needed by any particular individual in any given situation, it may be necessary to reconsider the privileged position afforded to the randomized controlled trial (RCT). It has become standard rhetoric that RCTs are the gold standard for evaluating psychological treatments. Such is the pervasiveness of RCTs that, currently, it would be almost incomprehensible to consider a psychological treatment as evidence-based unless its efficacy had been demonstrated in one or more RCTs. Table 1 lists the number of publications per decade for two high-impact psychology journals concerned with psychological treatments. The increasing popularity and use of RCTs over the last four decades is illustrated by the growing number of peer-reviewed publications.

It is easy to understand the strong allegiance to the RCT design. The RCT has been described as 'one of the simplest, most powerful, and revolutionary tools of research' (Jadad & Enkin, 2007, p. 1). The RCT is a statistical adaptation of the experimental method, which is the closest science has come to a method for demonstrating causality (Haaga & Stiles, 2000). When they are conducted well and used to address appropriate questions, RCTs yield results that are compelling.
In situations where assumptions of linear causality can reasonably be applied, RCTs can be an excellent method for demonstrating causal effects. In situations where linear causality does not apply, however, RCTs will be a poor choice of methodology. The RCT methodology has shortcomings that are particularly relevant for understanding psychological treatments as they are implemented in routine clinical settings. In this paper, we explore and explain some conceptual and statistical shortcomings of RCTs and review some alternative methodologies which, if they are applied with the same intensity and resources that have lately been applied to RCTs, may help us use psychological treatments more strategically and systematically for the benefit of the people who access them.

*Correspondence to: Timothy Carey, Centre for Remote Health, Flinders University, PO Box 4066, 0871 Alice Springs, Australia. E-mail: tim.carey@flinders.edu.au

Clinical Psychology and Psychotherapy, Clin. Psychol. Psychother. 23, 87–95 (2016). Published online 20 January 2015 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/cpp.1942. Copyright © 2015 John Wiley & Sons, Ltd.

CONCEPTUAL AND STATISTICAL PROBLEMS WITH RANDOMIZED CONTROLLED TRIALS

Agency and Causation

Fundamentally, RCTs address causality (Bracken, 2013). They were designed to answer the question 'Does this program work?' (Christie & Fleischer, 2009). This is a question of causality: Did A cause B? Is it reasonable to attribute increases in crop yield to fertilizer F or the recovery of patients to surgical technique P? Implicit in these questions is a model of causality in which variations in A directly and unambiguously (under controlled conditions) lead to measurable changes in B.
The locus of responsibility for creating the change is placed with the experimental treatment, for example, the fertilizer or the surgical procedure, because plants or patients in these designs are conceptualized as relatively passive recipients compared with the treatments they are exposed to.

Perhaps in an effort to make research of psychological treatments as rigorous as research of medical treatments, researchers borrowed the methodology that had been used to answer important questions in medicine (Budd & Hughes, 2009). Psychological treatments, however, are unlike medical treatments in crucial ways, and important assumptions that underpin RCTs do not necessarily apply in the context of psychological treatments. We can identify four main problems with the application of RCT assumptions to psychological treatments.

First, treatment techniques are a small part of what contributes to psychological change. RCTs focus on the techniques of treatment and emphasize the specificity of treatment (Hemmings, 2000); however, as Lambert (1992) points out, treatment techniques are one of the least important components of treatment in terms of the amount of outcome variance accounted for.

Second, RCTs ascribe improvements in the clients' mental state to the treatment. From an RCT perspective, it is assumed that the treatment causes or produces the effect. This view is irreconcilable with the concept of the client as the agent of change (Bohart, 2000). Psychological treatments do not mechanistically cause individuals to get better, and clients are not passive recipients of the treatments (Bohart, Tallman, Byock, & Mackrill, 2011). Rather, the clients are the active agents. Clients use the resources offered by the treatment to create the effects they desire.

A third problem for demonstrating the causal influence of treatment involves defining what the treatment actually is.
Psychological treatments were manualized to introduce standardization; however, this did not really standardize the treatments. In a manualized treatment of 12 sessions of cognitive behavioural therapy, for example, what should be considered the 'treatment'? Is it the sequence of 12 sessions or the ordering of activities as specified in the manual? If cognitive activities are introduced after behavioural activities in the manual but a clinician uses them in the reverse order (behavioural activities after cognitive activities), does this constitute a different treatment?

Finally, treatment groups are not homogeneous. The probabilistic question of, on average, 'does this treatment work?' is very blunt for investigating the usefulness of psychological treatments. RCTs yield a quantification of how much one group differs from another group, and the probability that a difference of this magnitude or greater could have occurred by chance. While RCTs in any field show some degree of variability in results, it is almost always the case with psychological treatments that some participants in the control group improve more than some participants in the treatment group and, similarly, some participants in the treatment group deteriorate more than some participants in the control group (Blampied, 2001). If the average change in pre-post scores for the treatment group, however, is more favourable (in a statistical sense) than the average change in pre-post scores for the comparison group, then the treatment will be deemed to have 'worked'. This is a very unusual way to speak about causation.

Independent Variables, Dependent Variables and Independence

While problems of agency, causation, and treatment definition are serious enough, a more critical problem concerns the difficulty of demarcating independent variables (IVs) and dependent variables (DVs).
It is essential for the conduct of RCTs that IVs and DVs are clearly defined and independent of each other. In other contexts, such as when testing pharmacological agents, it might be relatively straightforward to disentangle the predictor variables from the response variables. This is not the case for psychological treatments.

Table 1. Number of randomized controlled trial articles published per decade in Behaviour Research and Therapy (BRAT) and the Journal of Consulting and Clinical Psychology (JCCP)

Decade       BRAT   JCCP
1980–1989       0      1
1990–1999       2      7
2000–2009      28     64
2010–          50     54

[…]

…with different therapists in different settings across different time periods, what are the results? When this treatment is evaluated in routine clinical practice and benchmarked against other published data, how does it compare? When a series of case studies is conducted, what do these studies reveal about both the underlying theory and important therapeutic mechanisms? In qualitative studies, what do patients report about their experience of the treatment? As evidence from a number of different methodologies is obtained, a clearer picture will emerge about under what conditions and for which people this particular treatment might work best. Knowledge accumulated in this way would be more clinically helpful than the evidence obtained from RCTs alone.

Elliott (2002) has described a systematic procedure for bringing convergent evidence to bear on the question of the efficacy of treatment in single cases. Conclusions are based on a wide range of indicators (quantitative and qualitative, self-report and therapist perspective) and formal statements of both pro and con arguments regarding treatment effectiveness and possible alternative explanations.
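The bluntness of the average-difference verdict discussed earlier, where some control participants improve more than some treated participants even when a trial 'works', can be made concrete with a small simulation. All figures below (group means, spread, sample size) are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical pre-to-post improvement scores (higher = more improvement).
# Means and standard deviation are invented for illustration only.
treatment = [random.gauss(8.0, 10.0) for _ in range(100)]
control = [random.gauss(3.0, 10.0) for _ in range(100)]

mean_t = sum(treatment) / len(treatment)
mean_c = sum(control) / len(control)
print(f"mean improvement: treatment {mean_t:.1f}, control {mean_c:.1f}")

# Despite the favourable average, the distributions overlap: count controls
# who improved more than the average treated participant, and treated
# participants who improved less than the average control.
controls_beating_treated_mean = sum(c > mean_t for c in control)
treated_below_control_mean = sum(t < mean_c for t in treatment)
print(controls_beating_treated_mean, "controls improved more than the treated mean")
print(treated_below_control_mean, "treated participants improved less than the control mean")
```

On data like these the treatment would be deemed to have 'worked', yet a substantial minority of individual outcomes cross over between the two arms, which is the point Blampied (2001) makes about heterogeneity.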
Eliminating Alternative Explanations

When a person moves from a state of psychological distress to one of psychological contentment, or at least less distress, there are potentially many explanations to account for the change. Of interest to clinicians and researchers is how defensible it is to conclude that participating in a particular program of treatment was primarily responsible for enabling patients to make the changes they did. Kazdin (2011) argues that it is possible to collect data in such a way that competing explanations can be eliminated and valid inferences can be drawn.

At any point in time, psychological treatment is only a small part of the ongoing activity of a person's life. Table 2 provides a summary of the ways in which internal validity can be threatened by competing explanations. It might be the case, for example, that the reduction in psychological distress could be attributed to the expected growth and development of the individual. Maturation, therefore, might be a more appropriate explanation of the change in the person's psychological state. It is also known that many emotional states remit of their own accord over a period of time. With depression, for example, 40% of people will recover within a few months whether they receive treatment or not (Healy, 2012). There could also be characteristics of any testing that is conducted that relate more to the change in score than to the resources of the treatment. Kazdin (2011), for example, suggests that an extreme score might become less extreme at a second time point through regression to the mean.

An example from routine clinical practice demonstrates the way in which questionnaire scores and clinical observations can be used to rule out competing explanations. A young adult woman attended 12 weekly sessions of the Method of Levels. She had experienced difficulties at school and had trouble maintaining employment.
For the past 8 months, she had isolated herself at home, frequently expressed suicidal thoughts, smoked, and used drugs. She stated that she only attended the first session because her parent was worried about her and wanted her to see someone. Table 3 provides unsolicited indicators offered by the client and observations recorded by the therapist that suggest an improvement in mental state during the time period in which treatment was provided.

The client completed the K10, a widely used 10-item global measure of distress (Kessler et al., 2002), at pre-treatment, after six sessions, and at post-treatment, with scores of 39, 32, and 25, respectively. A previous study reported a reliable change index of 7.58 for the K10 (Murugesan et al., 2007). Using this score as a benchmark indicates that the client's change from 39 to 25 could be regarded as reliable. The client also completed the Outcome Rating Scale (ORS), a visual analogue scale (Miller & Duncan, 2004) assessing individual, relational, social, and overall functioning, at every session. A conservative reliable change index for the ORS is 6.8 (Miller & Duncan, 2004). To achieve clinically significant improvement, a person's score must begin at or below the clinical cutoff of 25 and increase by more than 6.8. The client's scores indicated she achieved both reliable and clinically significant change. Finally, the client was accepted into university and, at both 3- and 12-month follow-ups, she was still enrolled in her course.

This clinical example illustrates the way in which clinicians can collect and organize data efficiently and systematically in order to eliminate many of the alternative explanations for change highlighted by Kazdin (2011). Given the longevity of the client's problems, the rate of improvement once treatment started, and the sustained improvement after the end of treatment, it seems reasonable to conclude that treatment altered the course of the client's troubles.
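The reliable and clinically significant change criteria applied to this case reduce to simple arithmetic. A minimal sketch, with function names of our own choosing and thresholds taken from the figures reported in the text:

```python
def k10_reliable_change(pre: float, post: float, rci: float = 7.58) -> bool:
    """K10 measures distress, so improvement is a *decrease* in score.
    Change is reliable when it exceeds the reliable change index (RCI)."""
    return (pre - post) > rci

def ors_clinically_significant(pre: float, post: float,
                               cutoff: float = 25.0, rci: float = 6.8) -> bool:
    """ORS measures functioning: the score must begin at or below the
    clinical cutoff and *increase* by more than the RCI."""
    return pre <= cutoff and (post - pre) > rci

# The client's K10 scores were 39 (pre-treatment) and 25 (post-treatment):
# a change of 14 points, well beyond the RCI of 7.58.
print(k10_reliable_change(39, 25))  # True
```

Note the direction of scoring matters: a reliable *improvement* is a drop on a distress measure such as the K10 but a rise on a functioning measure such as the ORS.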
Also, the changes occurred after the treatment commenced and not before. Improvements were recorded in a number of ways (ORS, K10, observationally, and self-report) across multiple time points. Clinicians, therefore, can use the data they collect during the course of their routine clinical practice to draw conclusions about the impact of their treatment with their clients and to weigh those conclusions against competing explanations.

Theory-Building Research

Theory-building research seeks to test, improve, and extend a particular theory (Stiles, 2009a, in press). In what
Some explanatory theories of psychotherapy also seek to be treatment theories (e.g., psychoanalytic theory, cognitive theory, and person-centred theory), and indeed, there is nothing so clinically practical as a good explanatory theory of psychopathology and psychotherapeutic process. How- ever, treatment theories may be separate from and far sim- pler than explanatory theories. For example, a treatment theory may simply state that therapists should be as genu- ine, accurately empathic, and unconditionally accepting as they can be, or that watching an object move back and forth while talking about problemswill be helpful. Treatment the- ories are judged not by their descriptive accuracy but by Table 2. Eliminating alternative explanations (adapted from Kazdin, 2011) Alternative explanation Threat to internal validity The person’s condition changed because of their developmental life stage or because they grew tired or bored of the treatment. Maturation The person’s condition could have reasonably been expected to remit of its own accord within the time period in which treatment was provided. Natural progression of the condition Situations in the person’s life changed concurrently with their participation in the treatment program. Change in circumstances The changes had started to occur prior to the commencement of treatment. Temporal sequencing The person only provided pre and post scores, and their extreme pre scores could have been expected to move closer to the mean at the post assessment. Conditions of testing Table 3. 
Client reports and therapist observations during delivery of psychological treatment Session number Client report Therapist observation 2 • Felt positive and hopeful • Socializing more 3 • Wanted to get a job, get married, and have children • Reconnected with ex-partner 4 • Going out with best friend • Registered for a 1-day writing course • Attended for the first time without her hood over her head 5 • Reduced cannabis used from every day to every second or third day • Negotiated part-time work • Attended without her parent waiting in the reception area • Not wearing the pullover with the hood 6 • Had resumed playing guitar and was going out with friends 7 • No further thoughts of suicide. Reported that she began attending to her grooming and appearance once she decided not to kill herself anymore • Waking 30–40min every day • Brought in a book she was reading as well as pieces of writing she had recently completed • Was wearing lipstick for the first time8 • Part-time work commenced 11 • Reduced smoking to one cigarette or less a day • Applied to go to university 92 T. Carey and W. B. Stiles Copyright © 2015 John Wiley & Sons, Ltd. Clin. Psychol. Psychother. 23, 87–95 (2016) whether therapists who use them produce positive results (e.g., in RCTs). Explanatory theories, in contrast, seek to un- derstand the details of the process and explain a wide vari- ety of observations. They are judged by how well they correspond with the detailed observations. Explanatory theories grow by accommodating new ob- servations as follows. Researchers make theory-relevant observations. If the new observations are consistent with theory, then confidence in the theory increases. If the ob- servations are inconsistent with the theory (and the incon- sistencies cannot be ascribed to faulty methods), then the theory may be abandoned or, more often, adjusted to ac- commodate the inconsistent observations. 
If the new observations are outside the domain of the theory, they can be ignored, but sometimes the theory can instead be expanded to encompass them. These adjustments and expansions are technically called abductions (Rennie, 2000; Stiles, 2009a, in press). They are constrained by the requirement that the adjusted or expanded theory must remain consistent with all previous theory-relevant observations. Through research and abductions, then, theories become more trustworthy and grow to encompass an ever-widening range of observations with ever-greater precision.

From this perspective, theories are never finished; there is no point at which the theory is ready for a final test. Instead, each relevant observation either confirms the theory or leads to some abductive modification. Of course, all researchers (and readers) must beware of confirmatory biases. There is always a danger that observers will see what they want to see. Actively seeking disconfirmation with the goal of improving the theory (through abductions) goes some way towards counteracting the bias favouring existing theory.

Theory-building research can be performed with any sort of research, quantitative or qualitative. Theory-building case studies (Stiles, 2009a) may be of particular interest to clinicians, insofar as qualitative case observations may include details about the therapist, patient, setting, context, and process of therapy that could begin to address Paul's (1967) question. By adjusting and expanding the theory to accommodate these details in successive cases, the theory can come to convey, in compact form, the accumulated observations of the clinicians and researchers who have contributed to it.
Task Analysis

Task analysis is an observational, inductive, and iterative strategy in which investigators use observations of individuals performing tasks to progressively improve descriptions of how the task can best be performed (Greenberg, 1984, 2007; Greenberg & Foerster, 1996; Pascual-Leone, Greenberg, & Pascual-Leone, 2014). Task analysis affords an intensive scrutiny of events in therapy.

The procedure begins with selecting a specific type of therapeutic problem, for example, a problematic reaction point, in which a client reports an uncharacteristic or puzzling personal reaction, and identifying in-session markers of that problem (Rice & Saperia, 1984). Next, expert opinions about how the problem might be solved are gathered and synthesized, yielding a rational model of the task. Then, a corpus of instances of the problem (in-therapy events that illustrate the problem) is collected from therapy recordings, and these are compared with the rational model to assess whether the model works in practice. Next, the rational model is progressively corrected and refined in light of the empirical observations; that is, a rational-empirical model is constructed by successive rational and empirical analyses. Finally, the model is verified by comparing successful and unsuccessful instances of attempts to solve the problem.

As presented in the cited references, task analysis is a pragmatic strategy that begins by drawing on clinical experience and works towards a stand-alone account of how to address a particular sort of clinical problem. However, most of the task-analytic procedures could be straightforwardly adapted to theory building by beginning with an existing theory rather than newly gathered expert opinions and working towards either confirmation of the theory or towards abductions that would reconcile the theory with observations regarding the solution of this particular problem.
Benchmarking

Another approach to assessing treatment outcomes is to benchmark results against comparable results in the published literature (Minami & Wampold, 2008; Minami, Wampold, Serlin, Kircher, & Brown, 2007). One source of benchmarks is the results reported in clinical trials or even meta-analyses of clinical trials. These benchmarks provide statistics such as effect sizes for particular outcome measures used in various studies. Clinicians interested in benchmarking the results from their clinical practice can use outcome measures to compare the effect sizes they obtain in routine practice with the effect sizes published in the literature.

The logic of benchmarking is compelling. Although RCTs can involve comparisons of bona fide treatments, it is common to compare a preferred treatment with a non-standard form of treatment such as a waiting-list control group, a treatment-as-usual group, a self-help group, or an educational group. Such comparisons tend to favour the researcher's preferred treatment. With benchmarking studies, however, researchers can compare the outcomes from their preferred treatment with the published outcomes obtained from the preferred treatments of other […]
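The benchmarking comparison described in this section reduces to a small calculation: compute a pre-post effect size from routine-practice data and set it beside a published figure. The sketch below uses one common convention (mean change divided by the pre-treatment standard deviation); all scores and the benchmark value are invented for illustration.

```python
import math

def cohens_d(pre_scores, post_scores):
    """Pre-post effect size: mean improvement divided by the standard
    deviation of the pre scores (one common benchmarking convention)."""
    n = len(pre_scores)
    # Improvement on a distress measure is a decrease in score.
    changes = [pre - post for pre, post in zip(pre_scores, post_scores)]
    mean_change = sum(changes) / n
    mean_pre = sum(pre_scores) / n
    sd_pre = math.sqrt(sum((x - mean_pre) ** 2 for x in pre_scores) / (n - 1))
    return mean_change / sd_pre

# Invented routine-practice data on a distress measure (lower = better).
pre = [30, 28, 35, 40, 26, 33]
post = [22, 25, 30, 28, 24, 27]
d = cohens_d(pre, post)
published_benchmark = 0.8  # hypothetical effect size from a published trial
print(f"routine-practice d = {d:.2f}; published benchmark = {published_benchmark}")
```

In practice the choice of standardizer (pre-treatment SD, pooled SD, or the SD of change scores) changes the numerical value of d, so a routine-practice effect size should be computed the same way as the benchmark it is compared against.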