
Social and Political Research Methods by Matteo Bassoli – Lecture Notes (Social Statistics)

Summary of "Research Methods and Statistics for Public and Nonprofit Administrators" by Nishishiba and of "Social Research: Theory, Methods and Techniques" by Corbetta.

Type: notes, academic year 2022/2023


SOCIAL AND POLITICAL RESEARCH METHODS – FIRST QUARTER

SOCIAL RESEARCH: THEORY, METHODS AND TECHNIQUES – PIERGIORGIO CORBETTA

CHAPTER 1 – PARADIGMS OF SOCIAL RESEARCH

1. KUHN AND THE PARADIGMS OF THE SCIENCES
Reflecting on the historical development of the sciences, Kuhn refuted the traditional understanding of the sciences as a cumulative and linear progression of new acquisitions. According to the traditional conception, single inventions and discoveries are added to the previous body of knowledge in the same manner as bricks are placed one on top of another in the construction of a building. According to Kuhn, however, while this is the process of science in "normal" times, there are also "revolutionary" moments, in which the continuity with the past is broken and a new construction is begun. A reorientation in the discipline occurs that consists of "a displacement of the conceptual network through which scientists view the world". This "conceptual network" is what Kuhn calls a "paradigm".
It is difficult to identify a paradigm that has been agreed upon, even for limited periods, by the community of sociologists. Nevertheless, there exists another interpretation of Kuhn's thinking, proposed in an attempt to apply his categories to sociology. This interpretation redefines the concept of the paradigm, maintaining all the elements of the original definition (a theoretical perspective that defines the relevance of social phenomena, puts forward interpretative hypotheses and orients the techniques of empirical research) except one: that the paradigm is agreed upon by the members of the scientific community. This paves the way for the presence of multiple paradigms inside a given discipline; thus, instead of being a pre-paradigmatic discipline, sociology becomes a multi-paradigmatic one. Moreover, a paradigm directs research through:
- the specification and choice of what to study;
- the formulation of hypotheses to explain the phenomenon observed;
- the identification of the most suitable empirical research techniques.

2. THREE BASIC QUESTIONS
There is broad agreement among scholars that two general frames of reference have historically oriented social research since its inception: the "empiricist" vision, also known as "positivism", and the less consolidated "humanist" vision, also called "interpretivism". These are two organic and strongly opposed visions of social reality and of how it should be understood, and they have generated two coherent and highly differentiated blocks of research techniques. In order to compare the two paradigms adequately, it is necessary to understand how they respond to the fundamental interrogatives facing social research. These can be traced back to three basic questions.
THE ONTOLOGICAL QUESTION – This is the question of "what". It regards the nature and form of social reality. It asks whether the world of social phenomena is a real and objective world endowed with an autonomous existence outside the human mind and independent of the interpretation given to it by the subject. It asks, therefore, whether social phenomena are "things in their own right" or "representations of things".
THE EPISTEMOLOGICAL QUESTION – This is the question of the relationship between the "who" and the "what" (and the outcome of this relationship).
It regards the knowability of social reality and, above all, focuses on the relationship between the observer and the reality observed. Clearly, the answer to this question depends on the answer to the previous, ontological question. If the social world exists in its own right, independently of human action, the aspiration to reach it and understand it in a detached, objective way, without fear of altering it during the course of the cognitive process, will be legitimate.
THE METHODOLOGICAL QUESTION – This is the question of "how". It regards the technical instruments of the cognitive process. Here, too, the answers depend closely on the answers to the previous questions. A vision of social reality as an external object that is not influenced by the cognitive research procedures of the scientist will accept manipulative techniques (e.g. experimentation, the control of variables, etc.) more readily than a perspective that underlines the existence of interactive processes between the scholar and the object studied.

3. POSITIVISM
The positivist paradigm is the study of social reality utilizing the conceptual framework, the techniques of observation and measurement, the instruments of mathematical analysis, and the procedures of inference of the natural sciences. The techniques of observation and measurement are based on the use of quantitative variables, even for qualitative phenomena: measurement procedures are applied to ideological orientation, mental abilities and psychological states (attitude measurement, intelligence tests, etc.). Mathematical analysis means the use of statistics, mathematical models, etc. The procedure of inference is the inductive process, whereby hypotheses regarding the unknown are formed on the basis of what is known and specific observations give rise to general laws: the use of theory to predict outcomes, and extrapolation from the sample to the whole population.
The first assertion is that social reality exists outside the individual. The second is that this social reality is objectively understandable, and the third that it can be studied by means of the same methods as the natural sciences. As Durkheim states, "Our main objective is to extend the scope of scientific rationalism to cover human behaviour [...]. What has been termed positivism is merely a consequence of this rationalism."
Positivism is fundamentally inductive, where induction means "moving from the particular to the general": the process by which generalizations or universal laws are derived from empirical observation, from the identification of regularities and recurrences in the fraction of reality that is empirically studied. Finally, with regard to the "form" of this knowledge, there is no doubt, for the positivist, that these laws of nature will eventually be identified, formulated, demonstrated and "proved"; in their most complete form, they are laws that link cause and effect.

4. NEOPOSITIVISM AND POSTPOSITIVISM
The reassuring clarity and linearity of nineteenth-century positivism gave way to a twentieth-century version that was much more complex and detailed and, in some respects, contradictory and unclear. However, some basic assumptions were maintained, such as ontological realism (the world exists independently of our awareness of it) and the pre-eminent role of empirical observation in understanding this world.
Neopositivism assigned a central role to the criticism of science and redefined the task of philosophy, which was to abandon its broad [...]
[...] - Exaltation of the "Other", differences, minorities; identification with the oppressed; assumption of "power" as an explanatory category at the basis of all social relationships and structures.

CHAPTER 2 – QUANTITATIVE AND QUALITATIVE RESEARCH

3. QUANTITATIVE AND QUALITATIVE RESEARCH: A COMPARISON

3.1. RESEARCH PLANNING
In the case of quantitative research inspired by the neopositivist paradigm, the relationship between theory and research is structured in logically sequential phases, according to a substantially deductive approach (theory precedes observation) that strives to "justify", that is to say, to support the previously formulated theory with empirical data. In qualitative research, which springs from the interpretive paradigm, there is an open, interactive relationship between theory and research. The researcher often deliberately avoids formulating theories before fieldwork begins, on the grounds that this might hinder his capacity to "comprehend" the point of view of the subject studied.
These two approaches also differ in their use of concepts. Concepts are the constituent elements of the theory and, at the same time, they allow the theory to be tested empirically through their operationalization, that is, their transformation into empirically observable variables. In the neopositivist approach, the concepts are clarified and operationalized into variables even before the research begins. In qualitative research, instead of transforming the concept into a variable at the outset, the researcher uses a guiding concept that remains to be refined during the course of the research, not only in operational terms but also in theoretical terms.
Another set of differences between quantitative and qualitative research can be seen in the personal relationship between the researcher and the object studied – more specifically, in the reactivity of the object under investigation. The neopositivist approach does not seem to be particularly concerned about this: the researcher maintains that the problem of subject reactivity does not constitute a fundamental obstacle, or at least believes that a certain margin of "controlled manipulation" is acceptable. The "naturalistic approach", by contrast, requires that the researcher refrain from any form of manipulation, stimulation, interference or disturbance, and that the object be studied in its natural setting.
These two opposing ways of conducting research can best be illustrated by the techniques of experimentation and participant observation. In carrying out an experiment, the researcher manipulates social reality extensively, even to the extent of constructing an artificial situation. Before and after exposure to the stimulus, subjects are tested (again an artificial situation); moreover, the initial subdivision of the subjects into an experimental group and a control group (on the basis of abstract, unnatural criteria) also involves an artificial operation. The situation is therefore totally unnatural and the researcher's manipulation is all-pervading. By contrast, in participant observation, the researcher's role is restricted to observing what happens in the social reality under investigation, and the researcher may sometimes even refrain from interviewing or questioning the subjects observed.
However, participant observation itself is only rarely perfectly "naturalistic", in the sense that the mere presence of an outside observer is likely to have some effect on the subjects.
A further aspect concerns the relationship between the researcher and the individual subjects studied. As already noted, one of the fundamental differences between the neopositivist paradigm and the interpretive paradigm lies in how they define their research objectives: in the former case, the objective can be summarized as "empirical validation of the hypotheses", while in the latter it is "to discover the social actor's point of view". This dual perspective gives rise to two issues, one of a psychological-cultural nature and the other of what could be called a physical-spatial nature.
The first concerns the psychological interaction between the researcher and the subject studied. In quantitative research, observation is carried out from a position that is external to the subject studied, just as the "scientific" observer adopts a neutral, detached stance. By contrast, the qualitative researcher tries to get as deep inside the subject as possible, in an attempt to see social reality "through the eyes of the subject studied". Clearly, this psychological involvement raises the question of the objectivity of qualitative research. It is a problem that also arises in quantitative research, in that what the researcher sees must pass through the filter of his own perspective, experience of life, culture and values.
The second issue, directly linked to the first, concerns the physical interaction between the researcher and the subject. Quantitative research does not envision any physical contact between the researcher and the subject. Obviously, the opposite is true in the case of qualitative research, in which contact – and even close interaction – between the researcher and the subject is a prerequisite to comprehension.
It is evident that the two approaches also differ in terms of the role of the subject studied. From the quantitative standpoint, the subject studied is regarded as being passive, and even when he cannot be regarded as such, every effort is made to reduce his interaction with the researcher to a minimum. On the qualitative side, by contrast, research is conceived of as "interaction", which naturally implies an active role on the part of the subject studied.

3.2. DATA COLLECTION
One of the principal differences between the two approaches has to do with the research design – all those operational decisions concerning where, how and when to gather the data; this means deciding what data-collection tools are to be used, where data collection is to be carried out, how many and which subjects or organizations are to be studied, etc. The quantitative design, which is drawn up on paper before data collection begins, is rigidly structured and closed, while the qualitative design is unstructured, open, shaped during the course of data collection, and able to capture the unforeseen. In qualitative research, once a few basic criteria have been defined, the researcher is free to interview whomever he wishes and to lengthen or shorten the observation as he thinks fit.
This difference in research design is linked to two further distinguishing features. The first of these is the representativeness of the subjects studied.
The quantitative researcher is concerned with the generalizability of the results, and the use of a statistically representative sample is the most evident manifestation of this concern. Statistical representativeness is of no interest to the qualitative researcher.
The second distinguishing feature concerns the standardization of the data-collection tool. In quantitative research, all subjects receive the same treatment. The data-collection tool is the same for all cases (e.g. a questionnaire) or at least strives for uniformity. The reason for this is that the information gathered will be used to create the "data matrix", a rectangular matrix of numbers in which the same information is coded for all the cases. Qualitative research does not aim for this standardization. On the contrary, the heterogeneity of information is a constituent element of this type of research, since the researcher records different information according to the cases, and at different levels of depth according to his judgement. Once again, the difference in approach stems from the difference in the cognitive objective: in the one case, it is to uncover the uniformities of the world of human beings; in the other, it is to understand its individual manifestations.
The final point to be mentioned under the heading of "data collection" concerns the nature of the data. In quantitative research, the data are reliable, precise, rigorous and unequivocal: in a word, hard. They should also be "standardized", in the sense that data recorded on different subjects must be comparable. In qualitative research, by contrast, the issue of the objectivity and standardization of data does not arise; what counts is their wealth and depth. Data produced by qualitative research are termed soft, as opposed to the hard data mentioned earlier.

3.3. DATA ANALYSIS
Quantitative research makes ample use of mathematical and statistical tools, together with a whole array of tables, graphs, statistical tests, etc., as well as the full set of technological equipment. The most fundamental difference, however, lies not so much in the technological tools of data analysis or the different presentation of results as in the logic that underlies the analysis itself. The object of a qualitative analysis is not the variable, but the entire individual. While quantitative research is variable-based, qualitative research is case-based.

CHAPTER 3 – FROM THEORY TO EMPIRICAL RESEARCH

1. THE "TYPICAL" STRUCTURE OF QUANTITATIVE RESEARCH

1.1. THE FIVE STAGES OF THE RESEARCH
The "typical" itinerary followed in social research consists of a loop, which begins with the theory, runs through the phases of data collection and analysis, and returns to the theory. The first phase is that of the theory. The second is that of the hypotheses, and the passage between the two involves a process of deduction. The hypothesis constitutes a partial articulation of the theory and, in relation to the theory, is located on a lower level of generality: the theory is "general", while the hypothesis is "specific". The third phase is that of empirical observation, or rather, data collection. This is reached through the process of operationalization – that is to say, the transformation of hypotheses into empirically observable statements. This process is very complex and can be broken down into two stages. The first concerns the operationalization of concepts; this involves transforming the concepts into variables. The second regards the choice of the tool and of the procedures for data collection. Such decisions lead to the construction of the research design – that is to say, a "fieldwork plan" in which the various phases of empirical observation are established.
Once the empirical material has been gathered, one proceeds to the fourth phase, the data analysis phase, which is preceded by the organization of the data. In general, the term information is applied to the raw empirical material that has not yet been systematized, while the term data indicates the same material once it has been organized into a form that can be analyzed. In quantitative research, the process of data organization usually involves transforming information into a rectangular matrix of numbers. The resulting data matrix forms the basis for the data analysis, which normally involves computer-aided statistical elaboration. Results are presented in the fifth phase, which is reached through a process of interpretation of the statistical analyses carried out in the previous phase. Finally, the researcher returns to the starting point of the whole procedure – that is to say, the theory. The process involved here is one of induction: the empirical results are compared with the theoretical hypotheses and, more generally, with the initial theory. In this way, the theory will either be confirmed or reformulated. It should be added that what has been described is the "ideal" pathway of quantitative research, and that this basic sequence may vary, even considerably, in actual practice.
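To make the cases-by-variables "data matrix" described above concrete, here is a minimal sketch in Python; all variable names, codings and values are invented for illustration:

```python
# Minimal sketch of a data matrix: rows are cases (subjects), columns are
# variables coded identically for every case. All values are hypothetical.

variables = ["gender", "age", "education_years", "postmat_score"]

data_matrix = [
    # gender (1=M, 2=F), age, education (years), attitude score
    [1, 34, 13, 7],
    [2, 51, 8, 4],
    [2, 27, 17, 9],
    [1, 63, 11, 3],
]

# Because every case carries the same coded information, column-wise
# (variable-based) statistics are straightforward:
ages = [row[variables.index("age")] for row in data_matrix]
print(sum(ages) / len(ages))  # mean age of the (hypothetical) sample: 43.75
```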
2. FROM THEORY TO HYPOTHESES

2.1. THEORY
A theory can be defined as a set of organically connected propositions, which are located at a higher level of abstraction and generalization than empirical reality, which are derived from empirical patterns, and from which empirical forecasts can be derived.

2.2. HYPOTHESES
A theoretical proposition must be able to be broken down into specific hypotheses. By "hypothesis" is meant a proposition that implies a relationship between two or more concepts, which is located on a lower level of abstraction and generality than the theory, and which enables the theory to be transformed into terms that can be tested empirically. The hypothesis has two distinguishing features. First, it is less abstract than the theory in conceptual terms, and less general in terms of extension. Second, it is provisional in nature: it is a statement that has yet to be proved, which is derived from the theory but awaits empirical confirmation. The validity of a theory depends on whether it can be transformed into empirically testable hypotheses.

3. FROM CONCEPTS TO VARIABLES
The term "concept" refers to the semantic content (the meaning) of linguistic signs and mental images. From this definition, it follows that "the term has a very general meaning and may include any kind of sign or semantic procedure, whatever object it refers to, whether abstract or concrete, near or far, universal or individual, etc.". Furthermore, concepts can refer to abstract mental constructions that are impossible to observe directly, such as power, happiness or social class, or else to immediately observable concrete entities, such as flower or worker. If the theory is a network of connections among abstract entities represented by concepts, then once these abstract entities become concrete, the whole theoretical network will become concrete.
It will therefore be possible to establish the same connections among the concepts made concrete – that is, transformed into empirically observable entities. If the theoretical hypothesis is that post-materialistic values are more widely held in wealthy societies, then as soon as it is possible to gauge empirically both wealth and the presence of such values in different societies, it will also be possible to test the validity of the theory empirically, simply by observing whether the two operationalized concepts are positively correlated in the data recorded.
The first step in the empirical transformation of concepts consists of applying the concepts to concrete objects. This is done by causing the concepts to become attributes or properties of the specific objects studied, which are called units of analysis. The second step is to make the concept-property operational. This involves giving it an operational definition – that is to say, establishing the rules for its transformation into empirical operations. The third step is to apply the above-mentioned rules to the concrete cases studied; this is the phase of operationalization in the narrow sense. The operational definition is drawn up on paper, while operationalization is its practical implementation: the operational definition is a "text", operationalization an "action". The property so operationalized is called a variable. The operationalized "states" of the property are called categories, each of which is assigned a different symbolic value, normally a number.
At this point, a specification needs to be made with regard to the term "operationalization", which is used here to denote the passage from property to variable. Current usage applies the term "measurement" to the process of assigning numerical values to the states of a property. However, the passage from property to variable often involves an operation that is something other than measurement: it may equally consist of ordering or counting. No single term has been agreed upon to define this operation of measuring-ordering-counting-classifying. The intrusiveness of the natural sciences – in which a unit of measure can almost always be established – has prompted the use of the term "measure" even when it is improper. Here the process is called "operationalization". This term is sometimes used in a broad sense to mean "translation from theoretical language to empirical language"; strictly speaking, however, it refers to the passage from properties to variables. On the broader pathway from theory to research, operationalization constitutes a crucial bridge from one side of the divide to the other.

4. UNITS OF ANALYSIS
In empirical research, the unit of analysis is the social object to which the properties investigated appertain. As mentioned earlier, a concept (which is by definition abstract) is transformed into empirical terms by assigning it as a property to a concrete social object (the "unit of analysis").

4.1. DIFFERENT TYPES OF UNITS OF ANALYSIS
In social research, by far the most common unit of analysis is the individual. Another frequently adopted unit of analysis is the collective. These "collectives" may be constituted by an aggregate of individuals or by a group-organization-institution; the latter is the case when the variables are recorded at the group level. This kind of unit of analysis is [...]
Measurement takes place when: (a) the property to be measured is continuous – that is to say, it can take on an infinite number of intermediate states in a given range between any two states; and (b) a pre-established unit of measurement enables us to compare the magnitude to be measured with a reference magnitude [...]. By contrast, counting takes place when: (a) the property to be recorded is discrete – that is, it can take on a finite number of indivisible states; and (b) a counting unit exists – that is to say, an elementary unit which is contained a certain finite number of times in the property of the object.
It should be noted that the characteristics of the three types of variables are cumulative, in that each level includes the properties of the levels below it. Thus, only relationships of equality and inequality can be established among the values of nominal variables; among the values of ordinal variables, relationships of order can be established in addition to those of equality and inequality; finally, among the values of interval variables, relationships regarding the distances among the values can be established in addition to the other two types of relationship.
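The cumulative relations just listed can be illustrated with a small sketch; all codings are hypothetical:

```python
# Nominal: only equality/inequality is meaningful.
religion_a, religion_b = "catholic", "protestant"
print(religion_a == religion_b)   # False: the only licit comparison

# Ordinal: order is meaningful in addition; distances are not.
education = {"primary": 1, "secondary": 2, "degree": 3}
print(education["degree"] > education["primary"])   # True: ranking is licit
# but 3 - 1 = 2 does NOT mean "a degree is two units more education"

# Interval: distances among values are meaningful too.
income_a, income_b = 1500, 3000   # euros per month (counting unit: the euro)
print(income_b - income_a)        # 1500: the distance itself is informative
```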
CHAPTER 4 – CAUSALITY AND EXPERIMENTATION

1. THE CONCEPT OF CAUSE
IF "C", THEN "E"
This affirmation is insufficient, in that it allows the relationship between C (cause) and E (effect) to hold both "sometimes" and "always", while the causal principle must assert the occurrence of E every single time that C happens.
IF "C", THEN (AND ONLY THEN) "E" ALWAYS
This formulation expresses some characteristics of the causal link (conditionality, succession, constancy, univocity), but it is still insufficient.
IF "C", THEN (AND ONLY THEN) "E" ALWAYS, PRODUCED BY "C"
The key element that this statement adds to the preceding ones is the idea of production: in order to have causation, it is not enough to ascertain that there exists a "constant conjunction" between two phenomena; it must be shown that "the effect is not merely accompanied by the cause, but is engendered by it". However, the mere fact that X and Y vary together in a predictable way, and that a change in X always precedes the change in Y, can never assure us that X has produced the change in Y. Although the existence of a causal law can never be "proved" empirically, hypothesizing a causal relationship at the theoretical level implies observable facts: the theoretical existence of a causal mechanism implies observable consequences at the empirical level. While empirical observation of such consequences cannot provide definitive evidence of the existence of a causal link, it can corroborate the causal hypothesis.

2. EMPIRICAL CORROBORATION OF THE CAUSAL RELATIONSHIP

2.1. COVARIATION BETWEEN INDEPENDENT AND DEPENDENT VARIABLE
The researcher must, in the first place, be able to observe variations in the independent variable – that is to say, in what is hypothesized to be the "cause". A "covariation" between the two variables must be observed: when one varies, the other must also vary.

2.2. CAUSAL DIRECTION
One must be able to ascertain that a variation in the independent variable is followed, and not preceded, by a variation in the dependent variable. This can be empirically established in two ways. The first is by manipulation of the independent variable: if the researcher brings about a variation in the variable X, and subsequently observes a variation in the variable Y, then there is no doubt that – if a causal link exists – its direction is from X to Y and not vice versa.
When this is impossible, the direction of the causal link can be established through the criterion of temporal succession: the variation in the independent variable X is observed to precede the variation in the dependent variable Y. In addition, some causal directions can be excluded on the grounds of logical impossibility.

2.3. CONTROL OF EXTRANEOUS VARIABLES
When the independent variable varies, it must be possible to exclude the variation of other variables that are correlated with it, as these may themselves cause the dependent variable to vary. It is therefore essential to control extraneous variables if the goal is to achieve empirical control of causal relationships. Empirical observation of the first aspect alone (i.e. covariation) is not sufficient to establish causation. Covariation and causation are very different in status: the concept of causation is theoretical, whereas covariation is empirical. Moreover, covariation alone can never be taken as empirical proof of the existence of a causal relationship. To sum up, if the theoretical statement "X causes Y" is true, then it should be observable, at the empirical level, that a variation in X – when all other possible causes of Y are kept constant – is accompanied by a variation in Y. But how can the so-called ceteris paribus ("all other things being equal") condition be achieved empirically? The answer depends on whether one adopts the logic of covariation analysis or that of experimentation.

3. COVARIATION ANALYSIS AND EXPERIMENTATION

3.1. COVARIATION ANALYSIS
A spurious relationship is a correlation between two variables, X and Y, which does not derive from a causal link between them, but from the fact that they are both influenced by a third variable Z. Variations in Z bring about simultaneous variations in X and Y (through a causal mechanism) without there being a causal link between X and Y. When this happens, the researcher has two ways of establishing whether or not the relationship between X and Y is due to the external action of Z on the two variables: (a) subset comparison, achieved by transforming the external variables into constants, and (b) mathematical estimation, through the statistical analysis and control of the effects of such external variables. The first procedure involves a sort of mechanical control of the variables that may cause interference: in order to eliminate this interference, one need only transform the variable Z into a constant. Of course, this procedure becomes somewhat complicated when many variables have to be controlled simultaneously. This problem can be overcome through what is called statistical control – that is to say, by computing the net correlation between X and Y through mathematical estimation. There is, however, another way of solving the problem, based on a different way not of analyzing data, but of producing them.
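The logic of "mathematical estimation" can be illustrated with a simulated spurious relationship: Z drives both X and Y, and the standard first-order partial correlation shows that the X-Y covariation vanishes once Z is controlled. The data-generating process and all numbers below are invented:

```python
import math
import random

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

random.seed(0)
# Z causes both X and Y; X and Y have no direct causal link.
z = [random.gauss(0, 1) for _ in range(1000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

r_xy, r_xz, r_yz = corr(x, y), corr(x, z), corr(y, z)
print(round(r_xy, 2))   # large: X and Y covary, but spuriously

# First-order partial correlation of X and Y controlling for Z:
# near zero when the X-Y link is entirely due to Z.
r_xy_z = (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))
print(round(r_xy_z, 2))
```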
3.2. EXPERIMENTATION
Two samples of people are created in order to study their behaviour; the assignment of individuals to the two groups is deliberately performed in a random manner, in order to ensure that the groups are, on average, equal in terms of the characteristics of their members. In both covariation analysis and experimentation, the researcher studies a covariation between a hypothesized causal variable, X (independent), and a hypothesized effect variable, Y (dependent). In the first case, he observes and analyses how the variations in X relate to the variations in Y in a natural setting. In the second case, he produces a variation in X in a controlled situation and measures how much Y subsequently varies. The researcher "produces" a variation in that he manipulates the independent variable from the outside – that is, he causes it to vary. In the first case, the researcher intervenes after the collection of the data, which he only analyzes. In the second case, he controls the very production of the data, which takes place in an artificial situation that he himself has set up. The basic idea underpinning the experiment, therefore, is that, given the hypothesis "X causes Y", if one produces a variation in the value of X on a certain number of subjects and keeps constant all other possible causes of variations in Y, one should observe variations in Y in those same subjects. Manipulation of the independent variable and control of third variables are, therefore, the two features of experimentation that distinguish it from covariation analysis.

4. EXPERIMENTS IN THE SOCIAL SCIENCES
An experiment can be defined as a form of experience of natural facts that occurs following deliberate human intervention to produce change; as such, it distinguishes itself from the form of experience involving the observation of facts in their natural settings.

4.1. SCIENTIFIC SOLUTION
The scientific solution is possible if one of the following assumptions can be adopted: the assumption of invariance or the assumption of equivalence. The assumption of invariance involves supposing:
- temporal stability: the value of Yc (Y before the variation of X) can be substituted by a measure of the same Yc recorded earlier;
- non-influence of the measuring procedure: the value of Yt (Y after the variation of X) is not affected by the preceding measurement of Yc on the same unit.
[...] two groups. The causal effect brought about by the variation in X is measured as the difference (Y2 – Y1). This experimental design is called "only-after" because the dependent variable Y is measured only after exposure to the experimental stimulus, rather than before and after exposure, as is the case in other experimental designs, which will be illustrated shortly. This is the simplest of the designs that can be classified as a "true experiment". Nevertheless, it contains all the essential elements: random assignment, exposure to a stimulus, and recording of the dependent variable after exposure. Randomization ensures that the groups are equivalent before exposure to the stimulus – that is to say, they display the same mean values (except for haphazard fluctuations) over the whole range of variables. Thus, after exposure to the different values of the independent variable X, the two groups will differ only in terms of the value of this variable.

6.1.2. "BEFORE-AFTER", TWO (OR MORE)-GROUP DESIGN – Unlike the previous experimental design, this design involves recording the dependent variable before, as well as after, exposure to the stimulus (hence the term "before-after"). The two measurements of the dependent variable are also called pre-test and post-test. In this experimental design, variation in the independent variable between groups is combined with variation over time within groups. What advantage does this design have over the previous one – in other words, what does the pre-test add? Indeed, pre-testing is not essential in true experiments, since randomization guarantees the initial equivalence of the two groups. Nevertheless, pre-testing does verify this equivalence. There are, however, disadvantages to pre-testing: it may influence post-test responses, especially if the two tests are administered within a fairly short space of time.
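A minimal simulation of the randomization logic just described; the subject pool, the data-generating process and the "true" effect of +2 are invented assumptions of the sketch, not figures from the text:

```python
import random

random.seed(1)
subjects = list(range(200))          # hypothetical pool of subjects
random.shuffle(subjects)             # random assignment: groups equal on average
experimental, control = subjects[:100], subjects[100:]

def post_test(exposed):
    # Invented data-generating process: an individual baseline plus a
    # true stimulus effect of +2 for exposed subjects.
    return random.gauss(10, 2) + (2 if exposed else 0)

y2 = sum(post_test(True) for _ in experimental) / len(experimental)
y1 = sum(post_test(False) for _ in control) / len(control)

# "Only-after" measurement: the difference between group means after
# exposure estimates the causal effect (close to the true +2).
print(round(y2 - y1, 2))
```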
6.1.3. SOLOMON FOUR-GROUP DESIGN – This experimental design combines the two previous designs, adding the advantages of the first (no interference by the pre-test) to those of the second (availability of the pre-test as a starting point before exposure to the stimulus). By means of simple differences among the six pre- and post-test values (Y1, ..., Y6), the effect of the stimulus can be separated from that of the interaction between pre-test and stimulus.

6.1.4. FACTORIAL DESIGN – So far, only one independent variable (or stimulus) X has been considered, and the previous examples have dealt mainly with cases in which it assumes only two values (X1 and X2), often corresponding to absence/presence. It has, however, been pointed out that what has been said holds true when X1 and X2 stand for any values of the variable X; furthermore, it can easily be extended to cases involving more than two values of X (multiple-group design). In a 2 × 2 factorial design, the subjects are randomly assigned to four groups: in the first group, the variables X and Z are both given a value of 1 (male actor, dominant behaviour); in the second group, the values are X1 and Z2 (male actor, submissive behaviour), and so on. If, however, the independent variable Z is given three values (e.g. dominant, submissive and neutral), the design becomes 2 × 3 and requires six groups (and therefore six actors). Similarly, if one wishes to add to the initial 2 × 2 design a third independent variable (e.g. age, again dichotomized into "young" and "old"), the design becomes 2 × 2 × 2 and there will be eight groups. The main advantage of the factorial design lies in the fact that it enables the researcher to study not only the isolated effect of each independent variable on the dependent variable, but also the effect of the interaction between the independent variables. Conversely, the experimental design rapidly becomes complicated as the number of independent variables increases.
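The growth in the number of groups can be checked mechanically: each cell of the factorial design is one combination of factor values. The labels below follow the actor example in the text:

```python
from itertools import product

# A 2 x 2 x 2 factorial design: one experimental group per combination
# of values of the independent variables.
sex = ["male actor", "female actor"]
behaviour = ["dominant", "submissive"]
age = ["young", "old"]

groups = list(product(sex, behaviour, age))
print(len(groups))   # 8 groups, one per cell
for g in groups:
    print(g)
```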
6.2. QUASI-EXPERIMENTS
Quasi-experiments are "experiments that have treatments, outcome measures, and experimental units (like true experiments), but do not use random assignment to create the comparisons from which treatment-caused change is inferred. Instead the comparisons depend on non-equivalent groups that differ from each other in many ways other than the presence of a treatment whose effects are being tested". The fact that the groups cannot be assumed to be equivalent before exposure to the stimulus clearly places a serious handicap on the logic of the experiment. Indeed, if the groups are not equivalent at the outset, the researcher cannot know whether the differences observed among the groups with regard to the values of the dependent variable are due to the initial non-equivalence or to the effect of the stimulus. Given the impossibility of separating the effect of the experimental stimulus from all the other effects, the researcher cannot draw causal inferences regarding the effect of the independent variable on the dependent variable. Consequently, some authors have gone so far as to deny legitimacy to the very existence of the category of quasi-experiments, claiming that it is a hybrid and confused classification. In practice, however, it is often impossible in social research to assign subjects randomly to groups, particularly when the groups are pre-constituted (e.g. school classes, work departments, etc.).

6.2.1. ONE-GROUP, PRE-TEST–POST-TEST DESIGN – By definition, a true experiment is impossible to conduct on only one group. Nevertheless, the present design is an important surrogate for the "only-after", two (or more)-group design, in which one of the two groups is exposed to the stimulus while the other is not, and the dependent variable is subsequently recorded. In the present case, too, the dependent variable is observed both without exposure to the stimulus (Y1) and after exposure (Y2); the difference is that this time the two observations are carried out on the same group. The variation in X therefore occurs over time within the same group rather than between groups. In other words, with regard to the two assumptions presented in Section 4, instead of the assumption of equivalence (between groups), the assumption of invariance (of the same group) is applied. However, the assumption of invariance presupposes stability over time and non-interference on the part of the first measurement. It must therefore be ensured that nothing – apart from the variation in X – occurs between the two observations Y1 and Y2 that might itself influence Y, thus contaminating the effect of the stimulus. At the same time, the pre-test must not influence the post-test.

6.2.2. INTERRUPTED TIME-SERIES DESIGN – Again, this is a one-group design, but it differs from the previous one. In order to avoid the risk that the difference in the value of Y before and after exposure to the stimulus may be due to an ongoing trend in Y rather than to the effect of the stimulus itself, this design compares not the mean values of Y but its trend over time before and after the stimulus. The design involves serial recording of the dependent variable Y; at some point in the series, a variation in the independent variable X is introduced with a view to ascertaining whether this produces a variation in the trend of Y. This design offers two advantages: first, any influence of the pre-test on the post-test will be slight (since this influence is present in all observations of Y except the first); and second, little interference can be expected from uncontrolled external events that take place between two successive measurements (since such events may occur during any of the intervals – between Y1 and Y2, Y2 and Y3, etc. – and not only in concomitance with the change in X). Thus, if a variation in the trend of Y is recorded between Y3 and Y4 in concomitance with the change in X, it is unlikely that this will be due to the effect of the pre-test Y3 or to the intervention of other unknown factors; it can therefore plausibly be attributed to the causal action of X. Clearly, this design is applicable only in particular cases.

6.2.3. "BEFORE-AFTER" TWO-GROUP DESIGN WITHOUT RANDOMIZATION – Measured effect: (Y4 – Y3) – (Y2 – Y1). The scheme that illustrates this quasi-experimental design is similar to that of the "before-after" two (or more)-group experimental design, with the difference that the letter R (for "randomization") and the arrows pointing to the two groups are replaced by a horizontal line between the groups; this indicates their separate origin and non-equivalence. The method involves taking two groups, pre-testing both, exposing only one of the groups to the stimulus, and then post-testing both. The presence of a control group eliminates the distortions due to events taking place between the two tests and to the influence of the pre-test on the post-test, since such effects will be present in both groups and will not therefore influence the differences between them. Naturally, the fact remains that the two groups are not equivalent, as they have not been randomized. However, the pre-test provides information on the pre-existing differences between [...]
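A worked illustration of the measured effect defined above, under the assumption (hypothetical numbers) that Y3/Y4 are the pre- and post-test means of the group exposed to the stimulus:

```python
# Y1/Y2: pre- and post-test means of the unexposed group;
# Y3/Y4: pre- and post-test means of the exposed group. All means invented.
y1, y2 = 49.0, 52.0   # changes by 3: trend, pre-test effects, external events
y3, y4 = 50.0, 58.0   # changes by 8: the same disturbances plus the stimulus

# Disturbances shared by both groups cancel out in the difference of differences.
effect = (y4 - y3) - (y2 - y1)
print(effect)   # 5.0: estimated effect attributable to the stimulus
```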
[...] itself has to be of considerable size. According to the definition, the sample subjects must be questioned by means of a standardized procedure. This means that all subjects are asked the same questions in the same manner. Standardization of the stimulus is a fundamental feature of the survey and is aimed at ensuring that the answers can be compared and analysed statistically. A survey is made up of two parts: the question and the answer. Both of these may be formulated in a standard way (the questions are asked with the same wording and the answers are chosen from pre-established options); alternatively, the answers may be freely expressed (the subject is allowed to answer freely). In a third case, not even the question is standardized: the interviewer is free to formulate the questions in the way she thinks fit.

2. STANDARDIZATION, OR THE INVARIANCE OF THE STIMULUS

2.1. OBJECTIVIST vs. CONSTRUCTIVIST APPROACH
The first dilemma concerns the contrast between the view that social reality exists outside the researcher and is fully and objectively knowable, and the view that the very act of knowing alters reality, which means that knowable reality can only be that which is generated by the interaction between the subject who studies and the subject who is studied. In their extreme formulations, these views yield, on the one hand, a position that could be defined as objectivist (social data can be recorded objectively through a process not unlike that of "observation" in the natural sciences) and, on the other, a position that could be defined as constructivist (according to which social data are not observed, collected or recorded, but "constructed" or "generated" in the interaction between the subject studied and the subject studying). In the field of the survey, this dilemma opens up the problem of the relationship between the interviewer and the interviewee, contrasting detached, impersonal recording with empathetic interaction. The objectivist approach holds that the interviewer-interviewee relationship should be completely impersonal: the interviewer's main concern should be to avoid altering the subject studied. Interaction with the subject is seen as a necessary evil, a negative factor to be kept to a minimum. Consequently, the interviewer is obliged to comply with codes of behaviour designed to achieve total neutrality and uniformity.

2.2. UNIFORMIST vs. INDIVIDUALIST APPROACH
The second dilemma contrasts what could be called the uniformist and the individualist positions. According to the uniformist view, there exist, if not exactly laws as in the physical world, empirical uniformities or regularities in social phenomena and human behaviour, which can therefore be classified and standardized to some degree.
By contrast, the individualist perspective emphasizes the notion that inter-individual differences cannot be eliminated, that human beings cannot be reduced to any form of generalization or standardization, and that social reality becomes comprehensible to the researcher only insofar as he is able to establish an empathetic relationship with single individuals. This dilemma ushers in the problem of the standardization of the data-collection technique. The questionnaire is binding not only on the interviewer, who has to ask every subject the same question in the same way, but also on the interviewee, who is forced to choose among sets of prefabricated answers.

2.3. REDUCING INVESTIGATION TO A LOWEST COMMON DENOMINATOR
The fundamental goal of the objectivist-uniformist position is all too clear. The solutions proposed to the first dilemma (depersonalize the interviewer-interviewee relationship) and to the second (standardize the questions and answers) lead to the same point: the neutrality of the recording tool (of which the interviewer is also a part) or, to use behaviourist terminology, the invariance of the stimulus. The aim, of course, is to obtain answers that can be compared, and the answers are claimed to be open to comparison on the grounds that all interviewees are asked the same questions in nearly uniform interview situations.

3. THE RELIABILITY OF VERBAL BEHAVIOUR
SOCIAL DESIRABILITY – This regards the commonly held evaluation of a certain attitude or behaviour of the individual within a given culture. Certain attributes or behaviours are disapproved of according to the collective norms of the society involved (e.g. poverty, alcoholism, drug abuse, extra-marital sex, etc.), while others (such as honesty, diligence, church attendance, etc.) meet with approval. If an attitude has a strong positive or negative connotation in a certain culture, questions concerning it may elicit a biased response: the respondent may be tempted to be less than truthful.
NON-ATTITUDE – Another difficulty is constituted by non-attitudes. In social research, subjects are often asked about complex matters, such as how far the government should interfere in the economy, or whether taxes should be raised to pay for better social services, etc. Questions are often asked in the form of "item batteries": the subject is presented with a block of statements and has to say whether he agrees or disagrees with each one. In the interview situation, the respondent may feel under pressure to answer and – facilitated by the formulation of the "closed question" – may choose one of the possible replies at random. On examining the results of a longitudinal survey conducted by re-interviewing the same subjects on different occasions, Converse noted some rather strange patterns (such as the low correlation between the replies given to the same question by the same individuals on different occasions). This prompted him to hypothesize the existence "of two sharply discontinuous classes of respondents, the stable and the random": the former are perfectly constant in their replies, while the latter are people "with no real attitudes on the matter in question, but who for some reason felt obliged to try a response".
INTENSITY – Standardized questions elicit opinions but do not record the intensity or the staunchness of those opinions.
A question that asks the respondent whether he agrees or disagrees with a certain statement will elicit a number of positive and a number of negative responses, which will be undifferentiated; the researcher is therefore unable to distinguish, within each class of response, which opinions are deeply rooted and which are superficial, perhaps even formed on the spur of the moment. Thus, the sociologist is obliged to attach the same importance to fleeting whims that may change from one day to the next as to consolidated opinions that are entrenched in the respondent's personal history.

4. QUESTION FORM AND SUBSTANCE

4.1. SOCIOGRAPHIC DATA, ATTITUDES AND BEHAVIOURS
QUESTIONS CONCERNING BASIC SOCIOGRAPHIC PROPERTIES – These refer to the simple description of the basic social characteristics of the individual: his permanent features, such as demographic features and social connotations inherited from the family or acquired in youth.
QUESTIONS CONCERNING ATTITUDES – Here, the area under investigation has to do with opinions, motivations, orientations, feelings, evaluations, judgements and values. These properties of the individual are those most typically recorded through surveys; indeed, direct questioning of the individual seems to be the only way of gaining some insight into his thoughts.
QUESTIONS CONCERNING BEHAVIOURS – In this case, what is recorded is what the respondent says he does or has done. This is the field of "actions", which, for at least two reasons, constitutes much more solid ground. First, unlike attitudes, which depend upon psychological and mental states, behaviours are unequivocal: an action either takes place or does not, and questions about actions therefore have a precise answer. Second, behaviours are empirically observable. An action can be observed by another person and may leave an objective trace (as opposed to the paramount subjectivity of attitudes): if a person goes on strike, this action is known to his workmates; if he has voted, this fact is recorded in the electoral registers, etc.

4.2. OPEN VERSUS CLOSED QUESTIONS
An open question is one in which the respondent is at liberty to formulate a reply as he wishes. A closed question is one in which the interviewee is presented with a range of alternative answers and is asked to choose the most appropriate option. In the hands of a capable interviewer, the open question always yields an unambiguous result that remains within the frame of reference laid down by the researcher; however, this way of working has a high cost and is not practicable on large numbers. The closed question offers everyone the same frame of reference. It is also an aid to memory: the alternatives proposed act as a kind of checklist for the respondent. Moreover, the closed question prompts thought and analysis: it forces the respondent to abandon vagueness and ambiguity. However, closed questions also have limitations; above all, the closed question omits all the alternative answers that the researcher has not thought of.

5. FORMULATION OF THE QUESTIONS
Some suggestions can be made with regard to the language, syntax and content of the questions:
- Simplicity of language: given that the questions are standardized, and therefore the same for everyone, language that is accessible to everyone should be used.
- Question length: as well as being formulated in simple language, questions should generally be concise; questions that are too long not only take up more time but may also distract the respondent from the crux of the issue.
- Number of response alternatives: in closed questions, the response alternatives must not be too numerous.
- Slang: many subcultures are jealous of their slang.
- Ambiguous or vague definitions: these should be avoided.
- Words with strong emotive connotations: it is advisable to avoid emotive language.
- Syntactically complex questions: the syntax of the question should be linear and clear.
- Questions with non-univocal answers: multiple questions and questions that are not sufficiently detailed should be avoided.
- Non-discriminating questions.
[...] distributes questionnaires to families and calls back a week later to collect them. This obviates the two problems mentioned earlier: gross errors are avoided through a summary check carried out by the operator when the questionnaires are collected, and self-selection is avoided by the fact that the operator ensures that all completed questionnaires are collected. The advantages, briefly listed, are:
- a very great saving on costs;
- the questionnaire can be filled in at leisure, even at different times;
- a greater guarantee of anonymity than in a face-to-face interview;
- no bias due to the presence of an interviewer;
- subjects living far away or in isolated areas can be reached.
The disadvantages are:
- a low percentage of returns;
- sample bias due to self-selection;
- the level of education of the population studied has to be medium-high;
- no control over the completion of the questionnaire;
- questionnaires must not be too complex;
- questionnaires have to be much shorter than in face-to-face interviews.

7.4. COMPUTER-ASSISTED INTERVIEWS
Another technique is that of CAPI (computer-assisted personal interviewing); this is not very different from a normal face-to-face interview, except that, instead of using a written questionnaire, the interviewer reads the questions from a portable personal computer and types in the answers directly. In this way, some of the steps between data recording and processing can be eliminated: the phases of data coding and input are no longer required. In addition to this advantage, complex questionnaires can be handled more easily, as the computer can be programmed in advance to deal with their variants. Another use of the computer in questionnaire administration is CASI (computer-assisted self-interviewing), in which the respondent himself reads the questions on the monitor and types in the answers. In terms of cost, the most obvious advantage of this technique is that the interviewer is eliminated (indeed, this is a self-administered questionnaire). There is, however, another important advantage: the possibility of conducting longitudinal surveys – that is, of repeating the survey over time on the same subjects. Successive interviews can be carried out in which the questionnaire is modified each time, thus allowing permanent monitoring of such phenomena as changes in public opinion or consumer spending patterns. The technique is not, however, free from problems, the main ones being the limits of self-administered questionnaires and the drawbacks of longitudinal surveys.

CHAPTER 8 – SAMPLING + CHAPTER 5 OF NISHISHIBA (SAMPLE SELECTION)

1. POPULATION AND SAMPLE
A distinction must be made between a haphazard sample and a probability sample.
A random choice is by no means a choice without rules; on the contrary, the procedure of random sampling has to follow very precise criteria, and chance – that is to say, true probabilistic chance – has its laws. Indeed, contrary to what common sense would seem to suggest, if there is one phenomenon that is perfectly known to science, it is chance. Sampling is the procedure through which a limited number of cases (the sample) is picked out from the set of units that make up the object of study (the population), chosen according to criteria that enable the results obtained by studying the sample to be extrapolated to the whole population. Sampling offers several advantages in terms of:
- cost of data collection;
- time required for the collection and processing of data;
- organization, in that there is no need to recruit, train and supervise the huge numbers of interviewers required for a census of the population;
- depth and accuracy, in that the lesser organizational complexity enables resources to be concentrated on quality control.
Sample selection proceeds in three stages (the first is illustrated in the sketch after this list):
- Identify an appropriate sampling frame. The specific criteria used to define the population are called inclusion criteria. The list of individuals who qualify for inclusion in the population of interest is called the sampling frame; the research sample will be selected from this list. In some instances, it may be important to explicitly identify categories of individuals who will not be included in the study population; the criteria used to exclude individuals from the study population are called exclusion criteria. Researchers pay attention to four problems: missing elements, foreign elements, duplicate entries, and clusters. Missing elements are individuals who are of interest for the research objective but do not appear in the list. Foreign elements are individuals who may be included in the sampling frame according to the inclusion criteria, but are not relevant to the interest of the research or might add spurious information. Duplicate entries are a common occurrence in certain data sets used to compose a sampling frame. Clusters are entries in a list that include multiple individuals.
- Identify an appropriate sample size. For a survey, this can usually be accomplished with three variables: the confidence level, the confidence interval, and the variation of what is being measured in the population. The less a value varies in the population, the smaller the sample size needed to estimate the population value. In research that expects to detect change following an intervention, the amount of change one expects to detect (the effect size) is an additional factor in selecting an appropriate sample size.
- Identify an appropriate sampling technique. There are two basic techniques for sampling: probability sampling and non-probability sampling. Probability sampling always includes some way of randomly selecting study participants; each unit in the population has an equal chance of being selected for the sample. In non-probability sampling, the probability of any one element being selected is not taken into account; selection is based on other criteria.
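As a concrete illustration of the first stage, the sketch below builds a sampling frame from a raw list by applying inclusion and exclusion criteria and removing duplicate entries; all records, fields and criteria are invented:

```python
# Hypothetical raw list from which a sampling frame is composed.
raw_list = [
    {"name": "Rossi", "age": 34, "resident": True},
    {"name": "Bianchi", "age": 17, "resident": True},   # excluded: a minor
    {"name": "Verdi", "age": 51, "resident": False},    # foreign element
    {"name": "Rossi", "age": 34, "resident": True},     # duplicate entry
]

def include(person):
    # Inclusion criterion: adult; exclusion criterion: non-resident.
    return person["age"] >= 18 and person["resident"]

seen, frame = set(), []
for person in filter(include, raw_list):
    key = (person["name"], person["age"])   # naive duplicate check
    if key not in seen:
        seen.add(key)
        frame.append(person)

print(len(frame))   # 1: Rossi appears once; Bianchi and Verdi are screened out
```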
3.1. ONE VARIABLE
To establish the size (n) of the sample, the following equation is used:

$n = \left(\frac{z \cdot s}{e}\right)^2 = \frac{z^2 p q}{e^2}$

Sample size is therefore directly proportional to the desired confidence level of the estimate (z) and to the variability of the phenomenon being investigated, and it is inversely proportional to the error (e) that the researcher is prepared to accept. When the variable is a proportion, the variability is expressed by the product pq, where p is the proportion of the population displaying the characteristic under study and q = 1 − p. When the researcher wants to evaluate the parameters of several variables, the method described above can be applied to each of the most important variables separately; the highest value of n found among these can then be taken as the sample size.
4. PROBABILITY SAMPLING DESIGN
SIMPLE RANDOM SAMPLING
Among probability samples, the most elementary case is that of simple random sampling. From a formal standpoint, a simple random sample is obtained when all the units in the reference population have the same probability of being included in the sample. In order to implement this sampling design, the researcher will, first of all, need a complete list of the members of the population; a number will then be assigned to each of the N units in the list, and the n numbers corresponding to the subjects for inclusion in the sample will be picked out at random.
SYSTEMATIC SAMPLING
A procedure that is statistically equivalent to simple random sampling – from the point of view of the result – is that of systematic sampling. The only difference lies in the technique of picking out the subjects. The sampling units are no longer selected by lottery (or random number tables), but by scrolling the list of subjects and systematically selecting one unit at every given interval. If the size N of the reference population is known and the size n of the sample has been established, one unit every k = N/n units of the population is selected, beginning with a number chosen at random between 1 and k. In social research, systematic sampling is often used precisely because in many cases no list of the reference population is available.
STRATIFIED SAMPLING
If the variability of the phenomenon under investigation is very high, then the sample analyzed will need to be larger, in order to maintain a certain level of accuracy in the estimate. Alternatively, if the phenomenon displays areas of greater homogeneity, it is possible to increase sample efficiency (greater accuracy for the same sample size) by adopting stratified sampling. This sampling design is organized in three phases. First of all, the reference population is subdivided into sub-populations (called "strata") that are as homogeneous as possible in terms of the phenomenon to be studied; this is done by using as a stratification criterion a variable that is correlated with the phenomenon. Second, a sample is selected from each stratum by means of a random procedure. Finally, the samples drawn from each stratum are pooled in order to produce an overall sample. By contrast, if the researcher wants to over-represent some strata and to under-represent others, a non-proportionate stratified sample can be used. Among the various types of non-proportionate stratified sampling, the one that is theoretically most efficient is optimum allocation stratified sampling.
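To make these procedures concrete, here is a minimal Python sketch (not from the textbooks): it computes the sample size for a proportion and then draws a systematic sample; the 95% confidence value z = 1.96, the 3% error margin and the 50,000-unit frame are illustrative assumptions.

```python
import math
import random

def sample_size(z: float, p: float, e: float) -> int:
    """Required n for estimating a proportion: n = z^2 * p * q / e^2, with q = 1 - p."""
    q = 1 - p
    return math.ceil(z ** 2 * p * q / e ** 2)

def systematic_sample(frame: list, n: int) -> list:
    """Scroll the list, selecting one unit every k = N/n units,
    starting from a randomly chosen position within the first interval."""
    k = len(frame) // n                      # sampling interval
    start = random.randrange(k)              # random start between 0 and k - 1
    return frame[start::k][:n]

# Worst-case variability (p = 0.5), 95% confidence (z = 1.96), 3% error:
n = sample_size(z=1.96, p=0.5, e=0.03)       # -> 1068 cases
frame = [f"unit_{i}" for i in range(50000)]  # a hypothetical sampling frame
print(n, len(systematic_sample(frame, n)))   # 1068 1068
```

Note that the product pq is largest when p = 0.5, which is why 0.5 is conventionally assumed when the variability of the phenomenon is unknown: it yields the most prudent (largest) sample size.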
CHAPTER 9 – PARTICIPANT OBSERVATION
1. OBSERVATION AND PARTICIPANT OBSERVATION
By "observation" is meant the principal technique for gathering data on non-verbal behaviour; by "participant observation" is meant, rather than simple observation, the researcher's direct involvement with the object studied. In the case of simple observation, however, a fundamental element of the technique is lacking, namely the involvement of the researcher in the social situation studied and his interaction with the subjects involved. In other words, simple observational techniques could still fall within the positivist approach. Participant observation, on the other hand, is fully located within the interpretative paradigm. It is the researcher's involvement in the situation under investigation that constitutes the distinctive element. The researcher not only observes the life of the subjects being studied, but also participates in it. This approach therefore has two underlying principles: (a) that a full social awareness can be achieved only through understanding the subjects' point of view, through a process of identification with their lives; and (b) that this identification is attainable only through complete participation in their daily lives, by interacting continuously and directly with the subjects being studied. The main features are that the researcher studies a given social group:
- Directly;
- For a relatively long period of time;
- In its natural setting;
- Establishing a relationship of personal interaction with its members;
- In order to describe their actions and understand their motivations, through a process of identification.
In this perspective the researcher must not be afraid of contaminating the data through a process of subjective and personal interpretation, in that the subjectivity of the interaction, and therefore of the interpretation, is one of the very features of the technique. Involvement and identification are not to be avoided, but actively pursued (whereas objectivity and distance, which were basic elements of the neopositivist approaches, are no longer considered values).
2. FIELDS OF APPLICATION AND RECENT DEVELOPMENTS IN PARTICIPANT OBSERVATION
Participant observation emerges as a natural investigative tool when the researcher intends to study a situation in which he has taken part himself, thus giving rise to what has been called autobiographical sociology. In sociological research it has been applied basically to two objectives: to study in depth small autonomous societies located in specific territories and possessing a culturally closed universe that contains all aspects of life (e.g. a farming community, a small provincial town, a mining village, etc.); and to study subcultures that arise within specific sectors of a complex society. These may represent aspects of the dominant culture (youth culture, the rich, lawyers, the workers of a large industrial complex, the military, a political party, soccer fans, etc.) or be in partial conflict with it (a religious sect, a revolutionary party, gamblers, ethnic minorities, etc.) or even in open conflict (terrorist groups, prison inmates, radical political movements, deviant groups in general). Studies of the first type are called community studies, while those of the second type are called subculture studies. Community studies, usually conducted on small social communities located in specific areas, oblige the researcher to live for a certain period of time in the community studied.
As the focus of participant observation shifted from fringe cultures to "normal society", research was conducted on the values, networks of social relationships and interpersonal dynamics that develop inside institutions and social organizations. This so-called organizational ethnography consists of analyzing organizations as cultures. Such studies examine the culture of an organization (the implicit knowledge shared by members of the same institution, the reference models used to interpret reality, the unwritten rules that guide an individual's action) and the way in which this culture is expressed in action and social interaction (formal and informal groups, the structure of decision-making processes, interpersonal relationships, symbols and rituals). Participant observation has also been used to study political institutions.
3. OVERT AND COVERT OBSERVATION: ACCESS AND INFORMANTS
With regard to participant observation, an important distinction must be made between overt and covert observation. Indeed, the researcher may reveal or disguise his true objectives. He may openly declare at the outset that he is a researcher, that he wishes to be part of a given social group not because he agrees with its goals, but only to study it; or he may infiltrate the group by pretending to join it and to be a member just like any other. The main advantage of covert observation stems from the fact that human beings, if they know that they are being observed, are not likely to behave naturally. There are, nonetheless, convincing arguments against covert observation. Assuming a false identity to play a role that could be likened to that of a "spy" is, in itself, reprehensible, and can be justified only if there are compelling ethical reasons for it. It is doubtful that the objective of social research has such a high moral value as to justify deceit and taking advantage of other people's good faith. In certain cases, concealing the researcher's role can even be an obstacle to the final objective of observation: understanding. Sometimes, explicit interviews and persistent questioning are simply impossible if the researcher does not reveal his role and objectives. By contrast, the overt participant observer can take advantage of his declared "incompetence" in order to ask naïve questions and elicit explanations of banal matters, thus accumulating information concerning the natives' accounts and viewpoints. Sometimes the question of revealing the role of the observer does not arise. In other cases the question is not so much one of plain deceit as of omission: simply not declaring one's role. Once the case to be studied has been chosen and the mode of observation (overt or covert) established, the first problem that the researcher has to deal with is "access". Gaining entry to the field of study is never simple for the participant observer. The most common way of solving this problem is through a cultural mediator. This tactic is based on appealing to the prestige and credibility of one of the members of the group to legitimize the observer and get him accepted by the group. In some cases, the opposite problem may arise; excessive identification of the observer with the group being studied can impair the critical assessment of observed facts. Having earned the trust of the cultural intermediaries and gained access to the study group, the observer will still need to construct privileged relationships with some of the subjects studied.
Insiders whom the observer uses to acquire information and interpretation from within the culture studied are usually called informants. A distinction can be made between "institutional informants" and "non-institutional informants". The former are people who have a formal role in an organization. Non-institutional informants are more important; they belong directly to the culture under examination and, as such, can provide their interpretation of facts and their motivation for action, crucial elements for the observer's "comprehension".
4. WHAT TO OBSERVE
Subjects of observation may be classified into the following areas: (a) physical setting; (b) social setting; (c) formal interactions; (d) informal interactions; and (e) interpretations of the social actors.
PHYSICAL SETTING – It is usually fairly important for the researcher to scrutinize the structural layout of the areas in which the behaviour to be studied takes place. This is done not only to communicate the observational experiences more clearly to the reader, but also because physical characteristics almost always reflect social characteristics.
SOCIAL SETTING – The human element will be described in the same way as the physical environment. In attempting to understand a given community, history plays an important role, especially when the study focuses on social change.
FORMAL INTERACTIONS – By formal interactions are meant those which take place among individuals within institutions and organizations in which roles are pre-established and relationships are played out within a framework of predetermined constraints.
INFORMAL INTERACTIONS – By their very nature, such interactions are difficult to study. Moreover, since their observation involves scrutinizing a multitude of different situations, it is impossible to provide rules or even general guidelines. First of all, the observer can begin with physical interactions. Participant observation often starts with ordinary, everyday behaviour, which is the most difficult to analyse in that it is made up of a whole range of mechanical actions of which the individual is hardly aware. The observer must also be able to pick up the interactions among the people he observes. It is important to learn to focus on those interactions that are of interest. At first, the researcher's field of observation will be very broad, but gradually he will become more selective.
SOCIAL ACTORS' INTERPRETATIONS – In the interpretative paradigm the individual studied is not merely a passive object of research, but becomes an active protagonist; his interpretation of reality is therefore a constituent part (and not simply an accessory) of scientific knowledge. In this perspective, his verbal communication with the observer becomes a preferential channel of communication.
5. RECORDING OBSERVATIONS
The researcher's daily notes arise out of the interaction between the observer and the situation observed; they will therefore consist of two basic components: the description of facts, events, places and persons; and the researcher's interpretation of these events, together with his reactions, impressions and reflections. The act of recording observations will be examined further by splitting it into its three constituent parts: "when" to record, "what" to record, and "how" to record.
WHEN – As soon as possible. As time passes, the vividness of the event will tend to fade and new events will tend to obscure older ones. Ideally, notes should be taken as the events occur, but this is not normally possible.
Another problem is the subjectivity of the cases studied. Participant observation usually involves one case or a few cases; such research is intense but small-scale.
NON-STANDARDIZATION OF THE PROCEDURES USED – It is difficult to describe the technique of participant observation, on account of the lack of universally applicable standardized procedures and the specific nature of each pathway of research. For the same reasons, such studies are not reproducible. Indeed, if the researcher changes, so also will the subjects and the settings observed, the modes of observation, the sequence of recordings, the data-gathering procedures, and therefore the very characteristics of the empirical material used. And without reproducibility, basic requisites of scientific research are lacking.
CHAPTER 10 – THE QUALITATIVE INTERVIEW
1. COLLECTING DATA BY ASKING QUESTIONS
The qualitative interview can be defined as a conversation that has the following characteristics. First of all, the interview is elicited by the interviewer; in this respect, it differs from chance conversation. Second, subjects are selected for interview on the basis of a systematic data-gathering plan, meaning that they are chosen according to their characteristics (e.g. their belonging to certain social categories or having been through some particular experience). Moreover, these subjects must be fairly numerous (as a general indication, at least some tens) in order to yield information that can be generalized to a larger population. In conducting qualitative interviews, the researcher pursues a cognitive objective. Finally, the qualitative interview is not an ordinary conversation; rather, it is a guided conversation in which the interviewer establishes the topic and ensures that the interview is conducted according to the cognitive aims set. The interviewer may impart guidance to varying degrees, but will substantially allow the interviewee to structure his answer, or even the entire conversation, as he thinks fit.
2. QUANTITATIVE AND QUALITATIVE INTERVIEWING
2.1. ABSENCE OF STANDARDIZATION
While the questionnaire attempts to place the subject within the researcher's pre-established schemes (the answers to a closed question), the interview is aimed at revealing the mental categories of the subject, without reference to preconceived ideas. The quantitative approach, whose instrument is the questionnaire, forces the respondent to limit his answers. In the qualitative interview, the goal is to grasp the subject's perspective.
2.2. COMPREHENSION vs. DOCUMENTATION (CONTEXT OF DISCOVERY vs. CONTEXT OF JUSTIFICATION)
Questioning can be regarded both as a means of collecting information and as a means of understanding social reality. The quantitative approach uses questioning in order to collect information on people, their behaviour and social features. By contrast, the qualitative approach does not use the interview to gather information on people, but to understand them from the inside. The difference between these two approaches – which can be seen as the difference between quantity and quality, between breadth and depth – also has implications for the number of subjects to be interviewed.
2.3. ABSENCE OF A REPRESENTATIVE SAMPLE
Another difference between the questionnaire and the interview arises out of the two points mentioned above: that of the sample. A fundamental requirement of questionnaire-based research
(i.e. surveys) is that it must be carried out on a "representative" sample – that is, a sample constructed in such a way as to reproduce the characteristics of the population on a small scale. The qualitative interview does not aspire to this objective. Even when interview subjects are picked out systematically, this procedure stems more from the need to cover the range of social situations than from the desire to reproduce the features of the population on a small scale. If the sample does not have to be representative, then there is no need to adopt a strategy of random selection of subjects for inclusion. The selection procedure is generally carried out as follows. A few variables relevant to the issue under investigation are identified and a table is drawn up containing cells generated by the intersection of the columns and rows corresponding to the values of the (nominal) variables. Very often, however, no sampling design is drawn up in advance in qualitative research.
3. TYPES OF INTERVIEWS
3.1. STRUCTURED, SEMI-STRUCTURED AND UNSTRUCTURED INTERVIEWS
STRUCTURED INTERVIEWS – These are interviews in which all respondents are asked the same questions with the same wording and in the same sequence. Interviewees are free to answer as they wish. The interview is, in effect, a questionnaire with open questions. Although answers are freely expressed, and even if the interviewer is careful to "let the interviewee speak", the mere fact that the same questions are asked in the same order introduces a considerable degree of rigidity into the interview. On the other hand, the respondent's freedom to answer as he wishes is in line with the tenets of the interpretive paradigm. The structured interview is therefore a somewhat hybrid technique, in that it offers the standardization of information required by the "context of justification", while remaining receptive to those unknown and unforeseen elements that belong to the "context of discovery". A researcher might choose to make use of structured interviews for three reasons. One of these is the extreme individuality of the situations investigated. When each situation differs from the others, the researcher will be prevented from drawing up an exhaustive range of response alternatives before conducting the interview. On other occasions, the researcher may opt for the structured interview not because she knows little about the issue under investigation, but because it involves so many aspects that an exhaustive list would have to contain an infinite number of response categories. Finally, the researcher's choice of the structured interview may be dictated by the respondents' level of education.
SEMI-STRUCTURED INTERVIEWS – When conducting a semi-structured interview, the interviewer makes reference to an "outline" of the topics to be covered during the course of the conversation. The order in which the various topics are dealt with and the wording of the questions are left to the interviewer's discretion. Within each topic, the interviewer is free to conduct the conversation as he thinks fit, to ask the questions he deems appropriate in the words he considers best, to give explanations and ask for clarification if the answer is not clear, to prompt the respondent to elucidate further if necessary, and to establish his own style of conversation. The interviewer's outline may contain varying degrees of specification and detail.
This way of conducting the interview gives both the interviewer and the respondent ample freedom, while at the same time ensuring that all the relevant themes are dealt with and all the necessary information collected.
UNSTRUCTURED INTERVIEWS – In the structured interview, the questions are predetermined both in content and in form. In the semi-structured interview, the content, but not the form, of the questions is predetermined. In the third case, that of the unstructured interview, neither the content nor the form of the questions is predetermined, and both may vary from one respondent to another. The interviewer's only task is to raise the topics that are to be dealt with during the conversation. The respondent will be allowed to develop the chosen theme as he wishes and to maintain the initiative in the conversation, while the interviewer will restrict himself to encouraging the respondent to elucidate further whenever he touches upon a topic that seems interesting.
HOW TO CHOOSE? – The choice among these three types of interviews will depend on the research objectives and the characteristics of the phenomenon studied. If the research design envisions interviewing a large number of subjects, numerous interviewers will be required, which means that a structured approach will have to be adopted. Finally, it should be added that the distinction between semi-structured and unstructured interviews is somewhat blurred, the real difference being between these two and the structured interview. When the interviewer is provided with a schematic outline of the interview, it may be difficult to say whether the relationship is semi-structured or unstructured. If, on the other hand, a series of predetermined questions is used, then it is clearly a structured interview, in which the respondent is closely guided by the interviewer.
3.2. SPECIAL CASES
NON-DIRECTIVE AND CLINICAL INTERVIEWS – In the three types of interviews described so far, interaction is in some way guided by the interviewer, who establishes the topics and the bounds of the conversation if nothing else. In the non-directive interview, however, not even the topic of conversation is pre-established; the interviewer simply follows the interviewee, who is free to lead the conversation wherever he wishes, and the very fact that the interviewee raises one topic rather than another is taken as a diagnostic element. The interviewer therefore does not know where the conversation will lead. The clinical interview is different in that it is closely guided by the interviewer (psychologist, doctor, social worker). Its aim is to examine the personal history of the individual, by means of an in-depth interview not unlike the unstructured interview illustrated earlier, in order to reconstruct the pathway that has led to a certain outcome, for example deviant behaviour such as drug use or delinquency. In both these types of interviews, the objective is therapeutic rather than cognitive. They are used not so much to gather information on a given social phenomenon as to delve into the patient's personality.
INTERVIEWS WITH KEY INFORMANTS – In the various types of interviews presented so far, the persons interviewed are themselves the object of the study. If we want to study political militants, militants are interviewed; if we want to study delinquents, delinquents are interviewed. However, it is also possible to interview individuals who are not part of the phenomenon under investigation, but who have special expertise or knowledge of that phenomenon.
On account of their privileged observational position, these subjects are called "key informants".
FOCUS GROUP – In certain cases, interaction may produce deeper discussion, thereby aiding the researcher's understanding. Moreover, group discussion may be better able to reveal the intensity of feelings, thus facilitating comparisons among different positions. A focus group is generally a small group of subjects who are interviewed together on a given theme, with the discussion guided by a moderator.
Among the limitations of the qualitative interview is the fact that its results do not permit inferences on a larger population, on account of the small number of cases studied. In addition, preparing qualitative interviews requires a lot of effort. First, respondents have to be identified and tracked down; the purpose of the interview then has to be explained and the respondent's trust has to be gained through preliminary contacts or presentation by intermediaries, and appointments have to be arranged at times when the respondent is available for interview and in places where the conversation can take place without any disturbance. The qualitative interview may also be used in quantitative research in order to investigate a particular theme in greater depth, after the quantitative data have been collected. In this case, the qualitative interview plays a supporting role for the quantitative data collection. By contrast, the scholars who subscribe to the interpretive paradigm claim that the qualitative interview is the only technique based on questioning that can lead to a genuine understanding of social reality. Rather than an act of observation, it is an instance of interaction through which the researcher gains direct access to the world of the interviewee, in much the same way as the participant observer does.
RESEARCH METHODS AND STATISTICS FOR PUBLIC AND NONPROFIT ADMINISTRATORS: A PRACTICAL GUIDE – MASAMI NISHISHIBA, MATTHEW JONES, MARIAH KRANER
CHAPTER 3 – IDENTIFYING THE FOCUS OF THE RESEARCH: RESEARCH OBJECTIVE AND RESEARCH QUESTION
RESEARCH OBJECTIVES
Research originates from a problem that needs to be solved. When faced with such a problem, a practitioner may need to conduct some kind of research to find a solution. Research is an information-gathering activity that will help you identify solutions to problems. The first step to ensure you collect information aligned with the problems you are trying to solve is to be clear about the objective of your research. A research objective is a statement of the purpose of your research—to what end you are conducting your research.
IDENTIFYING RESEARCH OBJECTIVES
Research topics are broad descriptions or areas of interest, such as alcoholism, poverty, leadership, performance management, motivation, or organizational behavior. All of these topics imply that there are problems to address. The topic needs to be articulated as a specific problem to reach a definition for the research objective.
TYPES OF RESEARCH
THEORY BUILDING APPROACHES: INDUCTIVE vs. DEDUCTIVE
The relationship of research to theory building has two basic forms: an inductive approach and a deductive approach. The inductive approach starts with specific observations. With an accumulation of observations, you begin to identify patterns. When the patterns seem to be prevalent in your observations, you can develop a hypothesis, which is like a tentative theory. If your observations keep confirming your hypothesis, then your hypothesis becomes a theory—an explanation that may help you understand some characteristic of what you are observing. This approach is sometimes referred to as a bottom-up approach, grounded approach, or exploratory approach.
A deductive approach starts from the opposite direction, with a general idea or set of principles that suggest more specific ideas on how things are. In this case, a pattern of ideas forms a hypothesis, or tentative theory, which can be tested to see if it is true, or perhaps, in what specific instances it is true. If your observations confirm the hypothesis, then your hypothesis becomes a theory, related to the original general ideas as a form of explanation that may help you understand some characteristic of what you are observing. This approach is sometimes referred to as a top-down approach, hypothesis-testing approach, or confirmatory approach.
TYPES OF DATA ANALYSIS
For a research objective to explore and describe, if you have data captured as numbers (quantitative data), you can use descriptive statistics to present your results. When the objective is to explore and describe, you can also use data captured as statements (qualitative data) and present themes. There are a number of statistical analysis techniques called inferential statistics that can be used to analyze the data.
RESEARCH QUESTIONS
After clarifying the research objective, the second step in the research process is to rephrase the objective into a question or in some cases multiple questions. You need to answer the research question to know how well you reached your objective.
FOCUSING YOUR RESEARCH QUESTIONS
The research question is a road map that indicates a basic structure for the following steps in the research process. You can also say that a research question defines the project's scope of work. It is natural to start a research process with a broad research question or a research question that has multiple questions embedded within it.
IDENTIFYING TYPES OF RESEARCH QUESTIONS
When you have a group difference research question, you need to be able to define what distinguishes the groups. Groups can be defined in many ways. A naturally occurring grouping of people defines and distinguishes groups by a combination of individual characteristics and conditions, such as gender, race, educational level, status, location, participation in a certain activity, or membership in an organization. In this kind of grouping, the researcher specifies the qualities of interest and studies the people found to fit the definition. A researcher can also create groups through an assigned grouping of people. A correlational research question hypothesizes that a characteristic of one thing is related to a characteristic of another thing. The thing can refer to individuals, conditions, objects, or events. A causal research question hypothesizes that one factor X is a cause of the effect Y.
CHAPTER 4 – RESEARCH DESIGN
IDENTIFYING RESEARCH DESIGN
RESEARCH DESIGN: A GAME PLAN
Every research project needs a game plan to determine how an answer will be produced for the research question. The game plan is called a research design. A research design will establish a plan that includes the following elements: (1) the structure of when the data are collected, (2) the time frame of the research, and (3) the number of data collection points. When choosing a research design, there are some key factors that need to be taken into consideration. Most important, the selected research design should match the purpose of the research. It should allow the researcher to collect appropriate data that provides answers to the research question. Also, the selected research design should fit the research objective.
TYPES OF RESEARCH DESIGN
Four types of research design are distinguished:
1) Collect data one time now about now. This research design is appropriate when you are interested in finding out how things are at the present moment. This type of survey approach is referred to as cross-sectional survey design.
2) Collect data now about the past. In this research design, the data could focus on one event at a single time point or multiple events across multiple time points. When collecting data about the past that stretches over a longer time period, not just one time point, a researcher may want to consider an in-depth interview or oral history to capture the information.
3) Collect data in the past about the past. A researcher might be interested in data collected in the past only one time or multiple times over a period. Unlike the previous type of research design, this research design does not depend on the recall of an informant. This type of approach is referred to as secondary data analysis.
4) Collect data now and in the future. Data are collected at multiple points going forward, so that changes over time can be tracked; longitudinal, experimental and quasi-experimental designs take this form.
OTHER VARIATIONS OF EXPERIMENTAL AND QUASI-EXPERIMENTAL DESIGN
There are other ways you can structure and design the research. For example, it is possible to have more than one treatment group. Another variation is found in the placebo design, commonly used in clinical trials for pharmaceuticals. In medical interventions, it is known that when patients believe they are receiving treatments, they may improve even when the treatment has no therapeutic benefit. This psychological effect is called a placebo effect.
ETHICAL CONSIDERATIONS IN EXPERIMENTAL AND QUASI-EXPERIMENTAL DESIGN
In designing an experimental or a quasi-experimental study, researchers need to consider the ethical implications of subjecting study participants to a treatment, or of not providing a certain group the opportunity to benefit from the experimental treatment. These are the kinds of important considerations that a researcher needs to weigh before finalizing the research design. Researchers need to consider these issues and be aware that there may be some instances where experimental or quasi-experimental approaches may not be appropriate, due to ethical implications.
THE NUREMBERG CODE
- The voluntary consent of the human subject is absolutely essential.
- The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random and unnecessary in nature.
- The experiment should be so designed and based on the results of animal experimentation and a knowledge of the natural history of the disease or other problem under study that the anticipated results will justify the performance of the experiment.
- The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.
- No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.
- The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.
- Proper preparations should be made and adequate facilities provided to protect the experimental subject against even remote possibilities of injury, disability, or death.
- The experiment should be conducted only by scientifically qualified persons. The highest degree of skill and care should be required through all stages of the experiment of those who conduct or engage in the experiment.
- During the course of the experiment the human subject should be at liberty to bring the experiment to an end if he has reached the physical or mental state where continuation of the experiment seems to him to be impossible.
- During the course of the experiment the scientist in charge must be prepared to terminate the experiment at any stage, if he has probable cause to believe, in the exercise of the good faith, superior skill and careful judgment required of him, that a continuation of the experiment is likely to result in injury, disability, or death to the experimental subject.
CHAPTER 9 – COMPARING MEANS BETWEEN TWO GROUPS
TYPES OF RESEARCH QUESTIONS T-TESTS CAN ANSWER
T-tests are the statistical tests you can use when you have a research question that requires a comparison of two means. This requires a dependent variable that is a continuous measure. There are three different types of t-tests, based on the type of groups you are comparing: a one-sample t-test, an independent samples t-test, and a paired samples t-test. The one-sample t-test is used when you have only one sample, and you are comparing its mean to some other set value. The independent samples t-test is used when you have two groups in your sample that are independent from each other, and you would like to compare their means to see if they are significantly different. The paired samples t-test is used when you want to compare the means of two groups that are closely related or matched, or when one group is measured twice.
ONE-SAMPLE T-TEST
All inferential statistical tests have assumptions that must be met to correctly use and interpret the results of the test. There are three primary assumptions of the one-sample t-test:
- The variable from which the mean is calculated (which is a dependent variable or outcome variable) must be a continuous measure, representing either an interval or ratio level of measurement.
- The observations must be independent of each other.
- The variable from which the mean is calculated must be normally distributed.
If the p-value is lower than the significance level (usually 0.05), the null hypothesis should be rejected and the research hypothesis is supported.
INDEPENDENT SAMPLES T-TEST
The independent samples t-test, also known as the two-samples t-test, evaluates whether the means of two samples are different from one another. The following assumptions apply to the independent samples t-test:
- The variable from which the mean is calculated must be a continuous measure, representing either an interval or ratio level of measurement.
- The independent variable must be dichotomous.
- The dependent variable must be normally distributed.
- Observations between the two groups must be independent of each other.
- The variances for the two populations are equal.
EQUALITY OF VARIANCE
The fifth assumption, that the variances between the two groups should be equal, is also called the assumption of homogeneity of variance. When there is a departure from this assumption, the variances are then considered heterogeneous (heterogeneity of variances). Why does this matter? Because of the way the test statistic is calculated, it can be influenced by the variance of each group and subsequently may affect the p-value, and therefore, your interpretation of the results. To see if the population variances for the two groups you are comparing in your analysis are equal, you should conduct Levene's test.
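By way of illustration, here is a minimal Python sketch that runs Levene's test and then an independent samples t-test with SciPy; the two groups of scores are invented, and falling back on Welch's unequal-variance correction when Levene's test is significant is one common convention, not the books' prescription.

```python
from scipy import stats

# Hypothetical scores for two independent groups
group_a = [72, 75, 78, 71, 74, 77, 73, 76]
group_b = [68, 70, 65, 72, 69, 66, 71, 67]

# Levene's test for homogeneity of variance
lev_stat, lev_p = stats.levene(group_a, group_b)

# Pool the variances only if Levene's test is not significant;
# otherwise equal_var=False applies Welch's correction
t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=(lev_p > 0.05))
print(f"Levene p = {lev_p:.3f}; t = {t_stat:.2f}, p = {t_p:.3f}")
```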
If the result of Levene's test is significant, you can conclude that there is a statistically significant difference in the population variances between the two groups you are comparing in the analysis, and therefore, the assumption of homogeneity of variance is violated. When the p-value for Levene's test is below 0.05, then Levene's test is significant. This means there is a significant difference in the variance between the two groups in the population, and the assumption of homogeneity of variance is not met. On the other hand, when the p-value for Levene's test is larger than 0.05, then Levene's test is not significant. This means there are no significant differences in the variance between the two groups in the population, and the assumption of homogeneity of variance is met.
PAIRED SAMPLES T-TEST
If you have a study where data were collected from the same group twice, you have a repeated measures design. In the repeated measures design with the data collected twice, the paired samples t-test is the appropriate statistical test to compare the means of the data from the first time (time 1) and the second time (time 2). Another situation where the paired samples t-test is appropriate is when you have a pair of people assessed once on the same measure. This is called a matched subjects design. The following assumptions apply to the paired samples t-test:
- The variable from which the mean is calculated must be a continuous measure, representing either an interval or ratio level of measurement.
- The independent variable is a pair of two conditions that the data represent.
- The difference score in the dependent variable between the two conditions must be normally distributed in the population.
- The difference scores in the dependent variable between the two conditions must be independent of each other.
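A corresponding sketch for the repeated measures case, again with invented before-and-after scores for the same group measured twice (scipy.stats.ttest_rel is SciPy's paired samples t-test):

```python
from scipy import stats

# Hypothetical scores for one group measured at time 1 and time 2
time1 = [14, 18, 12, 16, 15, 17, 13, 19]
time2 = [17, 20, 15, 18, 16, 21, 15, 22]

# Paired samples t-test on the time 1 vs. time 2 difference scores
t_stat, p_value = stats.ttest_rel(time1, time2)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```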
The common measure of effect size for ANOVA is eta squared (η2), which is calculated by taking the ratio of the between sum of squares over the combined between sum of squares and within sum of squares: η2= Between∑ of Squares Between∑ of Squares+Within∑ of Squares ONE-WAY ANOVA One-way ANOVA is used to compare the means of two or more independent groups. There are four primary assumptions for the one-way ANOVA that need to be met to use and interpret the results of the test correctly: - The observations of the groups that are compared must be independent of each other. - The dependent variable must be normally distributed in each group. - The dependent variable must be a continuous measure (interval or ratio), and the grouping variable, called the factor in ANOVA, must be nominal. - The variances of the dependent variable in each group must be equal (referred to as homogeneity of variance). 55 If the result indicates a statistically significant difference (p < 0.05), then the null hypothesis should be rejected NOTE ON SAMPLE SIZES FOR THE ONE-WAY ANOVA When creating a research design where a one-way ANOVA is to be used, a sufficient sample size is an important consideration. As is the case with many statistical tests, a small sample size can have an adverse effect on the statistical power of the test. In ANOVA, a small sample size increases the likelihood of violating the homogeneity of variance assumption. A sample should contain 15 cases per group be used to conduct a one-way ANOVA capable of fairly accurate p-values. When planning a study with group comparisons like this, it is also important to make the groups as equal in size as possible. REPEATED MEASURES ANOVA Repeated measures ANOVA is used to compare means for more than two related, not independent groups. It is also referred to as a within-subjects ANOVA. Assumptions for the repeated measures ANOVA are similar to that of the one-way ANOVA. For the repeated measures ANOVA, however, the independence of the grouped observations is not required. There are four primary assumptions for the repeated measures ANOVA: - The dependent variable must be normally distributed at each measurement level. - The dependent variable must be a continuous measure (interval or ratio). - The variances of the differences between all combinations of related groups (levels) are equal. This is called sphericity, and violation of the sphericity assumption will increase the risk of Type I error in a repeated measured ANOVA. - If there are separate groups in addition to the repeated measurement levels, then the variances of the dependent variable in each group must be equal (homogeneity of variance). If the repeated measures ANOVA indicates a statistically significant difference (p < 0.05), then the null hypothesis must be rejected. The repeated measures ANOVA is an omnibus test, like the one- way ANOVA, and will not distinguish which set of means differ, and a post hoc test will be required to identify which pair-wise comparison is significantly different. 56 CHAPTER 11 – BIVARIATE CORRELATION PEARSON PRODUCT MOMENT CORRELATION Correlation is examined based on how much the two variables co-vary, or how the value of one variable change when the value of another variable changes. In statistical analysis, a correlation coefficient is used as a numerical index to represent the relationship between two variables. 
Pearson product moment correlation coefficient (R) is one of the correlation coefficients used to represent the relationship between two variables that are continuous in nature. DIRECT OF THE RELATIONSHIP 57 analysis called nonparametric tests. All statistical tests covered before, except for chi-square analysis, are parametric tests, which means the statistical analysis is based on the assumption that the underlying population data are normally distributed and that the measures used for the data are continuous (interval or ratio scales). Nonparametric tests apply to categorical data and do not require a normal distribution of the data. CALCULATING CHI-SQUARE STATISTICS AND TESTING STATISTICAL SIGNIFICANCE χ2=∑ (Observed frequency−Expected frequency )2 Expected frequency χ2=∑ (ad−bc )2(a+b+c+d ) (a+b)(c+d)(b+d )(a+c ) After obtaining the chi-square statistic, you can then check to see if the result is statistically significant. As with other statistical tests, if the p-value is equal to or lower than .05, then you can reject the null hypothesis and conclude that there is a statistically significant difference in the grouping of your categorical variables. If the p-value is larger than .05, then you do not reject the null hypothesis and conclude that there are no statistically significant differences in the groupings of the two variables. NOTE ON SAMPLE SIZE FOR CHI-SQUARE ANALYSIS One important thing to remember in conducting the chi-square analysis is that it requires at least five expected frequency scores in each category cell to meet the requirements of the analysis. 60 CHAPTER 14 – QUALITATIVE DATA ANALYSIS QUALITATIVE VERSUS QUANTITATIVE DATA ANALYSIS With quantitative data, researchers can summarize results using statistics. In contrast, qualitative data capture the phenomena using words, statements, and sometimes visuals. Qualitative data provide a richer description of the phenomena of interest than can be accomplished with numbers. Narrative or graphic information is easier to comprehend as a direct representation of the phenomena. Numbers and statistics are more abstract and require additional knowledge to understand what is represented. 61 PREPARING DATA FOR QUALITATIVE ANALYSIS Quantitative data is a little easier to manage at this stage because most of the decisions about what to count were made before the data were collected. With qualitative data, the researcher may have prepared with as much care how to elicit responses from participants in the research, but once in hand, it is not immediately clear how those responses qualify as data. THEMATIC ANALYSIS OF THE QUALITATIVE DATA One of the most common approaches to the qualitative data analysis is called thematic analysis. This approach focuses on identifying themes that adequately represent the data. Themes are key patterns identified in the data that may be important features of the phenomenon in question, according to the purposes of the research question. A researcher identifies themes by going through multiple examinations of the data. The next step involves documenting the patterns you find by generating initial codes as labels for the recurring patterns. The labels attach a categorical meaning to bits of text to represent a single concept, even though the specific examples may be a little different from each other. Coding is the first step to systematically organize your qualitative data. Once you have an array of codes, you can review them to search for themes in a similar way as you did for the original text. 
ISSUES IN QUALITATIVE DATA COLLECTION AND ANALYSIS In quantitative studies that use inferential statistics, the ability to generalize the study results to the population of interest depends on probability sampling and the size of the sample. Due to the nature of the data collection in qualitative research, sample sizes tend to be smaller. Also, participants tend to be selected with nonprobability sampling approaches to target particular sources of information (or because the sampling frame is unknown). In any case, qualitative data is basically exploratory, and the richness of the data prohibits a strict quantification of every possible input. Even with probability sampling, the open-ended form of data collection would make each individual unique and no longer equally likely to respond to any one particular issue. Determining the appropriate number of participants for a qualitative study is not as exact as the quantitative description of 95% confidence for a sample from a population with a normal distribution on the item being measured. INTERVIEWER EFFECT Many qualitative data collection approaches involve in-person contacts between the researchers or interviewers and the study participants. Researchers should be mindful that this personal contact can affect the quality of the data. On the one hand, in person contact allows a researcher to probe and get more in-depth information, and in that sense, the interaction may help to obtain better data. On the other hand, the presence of the researcher can affect what and how participants share information. In face-to- face contact, participants could find it harder to be direct and critical. Qualitative researchers need to bear in mind a general tendency for people to offer socially desirable comments in an interview situation. SUBJECTIVE NATURE OF THE ANALYSIS 62