To make things easier, the following symbols will represent elements within the designs:
- X: Treatment
- O: Observation or measurement
- R: Random assignment
The three experimental designs discussed in this section are:
1) The One-Shot Case Study
There is a single group and it is studied only once. A group is introduced to a treatment or condition and then observed for changes that are attributed to the treatment.
The problems with this design are:
- A total lack of manipulation. The scientific evidence is very weak, because there is no basis for making a comparison or recording contrasts.
- There is also a tendency toward the fallacy of misplaced precision, in which the researcher engages in tedious collection of specific detail, careful observation, testing, and so on, and misinterprets this as solid research. However, a detailed data collection procedure should not be equated with a good design. In the chapter on design, measurement, and analysis, these three components are clearly distinguished from one another.
- History, maturation, selection, mortality, and interaction of selection and the experimental variable are potential threats against the internal validity of this design.
2) One-Group Pretest-Posttest Design
This design presents a pretest, followed by a treatment, and then a posttest, where the difference between O1 and O2 is explained by X:
O1 X O2
However, there are threats to the validity of the above assertion:
- History: between O1 and O2 many events may have occurred apart from X to produce the differences in outcomes. The longer the time lapse between O1 and O2, the more likely history becomes a threat.
- Maturation: between O1 and O2 students may have grown older, or their internal states may have changed, so that the differences obtained would be attributable to these changes rather than to X. For example, if the US government had done nothing about the economic depression that began in 2008 and had let the crisis run its course (as Mitt Romney suggested), the economy might still have improved ten years later on its own. In that case, it is problematic to compare the economy in 2021 with that in 2011 to determine whether a particular policy was effective; the better approach is to compare the economy in 2021 with the overall trend (e.g., 2011 to 2021). In SPSS the default pairwise comparison contrasts each measure with the final measure, which may be misleading. In SAS the default contrast scheme is Deviation, in which each measure is compared with the grand mean of all measures (the overall).
- Testing: the effect of giving the pretest itself may affect the outcomes of the second test (e.g., IQ tests taken a second time yield scores 3-5 points higher than first-time scores). In the social sciences it has long been known that the process of measuring may change that which is being measured: a reactive effect occurs when the testing process itself leads to a change in behavior rather than serving as a passive record of behavior (reactivity: we want to use non-reactive measures whenever possible).
- Instrumentation: see the examples in the discussion of threats to validity above.
- Statistical regression, or regression toward the mean: if the researcher selects a very polarized sample consisting of extremely skillful and extremely poor students, the former group might show no improvement (a ceiling effect) or even lower scores, while the latter might appear to show some improvement. Needless to say, this result is misleading. To guard against this type of misinterpretation, researchers may conduct a time-reversed (posttest-pretest) control analysis and directly examine changes in population variability in order to uncover the true treatment effects. Researchers may also exclude outliers from the analysis, or adjust the scores by winsorizing the means (pushing the outliers toward the center of the distribution).
- Others: history, maturation, testing, instrumentation, the interaction of testing and maturation, the interaction of testing and the experimental variable, and the interaction of selection and the experimental variable are also threats to validity for this design.
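The two contrast schemes mentioned under maturation can be sketched in a few lines of Python. The yearly scores are hypothetical, and the code only illustrates the arithmetic behind contrasting each measure with the final measure versus a Deviation contrast against the grand mean; it is not SPSS's or SAS's implementation.

```python
# Hypothetical repeated measures of one outcome (e.g., a yearly economic index).
measures = [2.0, 2.5, 3.0, 3.5, 4.0]

grand_mean = sum(measures) / len(measures)

# Contrast each earlier measure against the final measure
# (the potentially misleading default described above).
vs_last = [m - measures[-1] for m in measures[:-1]]

# Deviation contrast: each measure against the grand mean of all measures.
vs_grand_mean = [m - grand_mean for m in measures]

print(vs_last)
print(vs_grand_mean)
```

Note how the two schemes answer different questions: the first asks "how far is each year from the endpoint?", the second "how far is each year from the overall level?".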
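Winsorizing, mentioned under statistical regression, can be sketched as follows. The scores and the choice of clamping one value per tail are illustrative assumptions, not a fixed rule:

```python
def winsorize(scores, k):
    """Pull the k smallest values up to the (k+1)-th smallest and the
    k largest values down to the (k+1)-th largest, instead of dropping them."""
    ordered = sorted(scores)
    lo, hi = ordered[k], ordered[-k - 1]
    return [min(max(x, lo), hi) for x in scores]

# The two extreme scores (2 and 99) are pushed toward the center
# of the distribution; the middle scores are untouched.
print(winsorize([2, 55, 60, 62, 65, 70, 71, 99], k=1))
```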
3) The Static Group Comparison
This is a two-group design: one group is exposed to a treatment and its results are tested, while a control group is not exposed to the treatment but is similarly tested, in order to compare the effects of the treatment.

X O1
  O2

Threats to validity include:
- Selection: groups selected may actually be disparate prior to any treatment.
- Mortality: the differences between O1 and O2 may be due to the drop-out rate of subjects from a specific experimental group, which would cause the groups to be unequal.
- Others: Interaction of selection and maturation and interaction of selection and the experimental variable.
The next three designs discussed are the most strongly recommended designs:
1) The Pretest-Posttest Control Group Design
This design takes the following form:

R O1 X O2
R O3   O4

This design controls for all seven of the threats to validity described in detail so far. An explanation of how this design controls for these threats follows.
- History: this is controlled in that the general historical events that may have contributed to the O1 and O2 effects would also produce the O3 and O4 effects. However, this holds if and only if the experiment is run in a specific manner: the researcher must not test the treatment and control groups at different times or in vastly different settings, as these differences may influence the results; rather, the researcher must test the control and experimental groups concurrently. Intrasession history must also be taken into account. For example, if the groups are tested at the same time, then different experimenters might be involved, and the differences between the experimenters may contribute to the effects.
In this case, a possible counter-measure is the randomization of experimental conditions, such as counter-balancing in terms of experimenter, time of day, day of week, and so on.
- Maturation and testing: these are controlled in the sense that they are manifested equally in both treatment and control groups.
- Instrumentation: this is controlled where the conditions that control for intrasession history apply, especially where the same tests are used. However, when different raters, observers, or interviewers are involved, this becomes a potential problem. If there are not enough raters or observers to be randomly assigned to the different experimental conditions, the raters or observers must be blind to the purpose of the experiment.
- Regression: this is controlled in terms of mean differences regardless of the extremity of scores or characteristics, provided the treatment and control groups are randomly assigned from the same extreme pool. In that case, both groups will regress similarly, regardless of treatment.
- Selection: this is controlled by randomization.
- Mortality: this is said to be controlled in this design. However, unless the mortality rate is equal in the treatment and control groups, it is not possible to state with certainty that mortality did not contribute to the experimental results. Even when equal mortality actually occurs, there remains a possibility of complex interactions which may make the effects of drop-out rates differ between the two groups. Conditions between the two groups must remain similar: for example, if the treatment group must attend treatment sessions, then the control group must also attend sessions in which either no treatment or a "placebo" treatment occurs. However, even here there remain threats to validity: because a "placebo" treatment must be somewhat believable, its mere presence may contribute an effect similar to the treatment and may therefore end up producing similar results!
The factors described so far affect internal validity. These factors could produce changes, which may be interpreted as the result of the treatment. These are called main effects, which have been controlled in this design giving it internal validity.
However, in this design there are threats to external validity (also called interaction effects, because they involve the interaction of the treatment with some other variable, and that interaction causes the threat to validity). It is important to note here that external validity, or generalizability, always involves extrapolation into a realm not represented in one's sample.
In contrast, threats to internal validity are solvable by the logic of probability statistics, meaning that we can control for them within the experiment conducted. External validity, or generalizability, cannot be established by the same logic, because we cannot logically extrapolate to different settings (Hume's truism that induction or generalization is never fully justified logically).
External threats include:
- Interaction of testing and X: because the interaction between taking a pretest and the treatment itself may affect the results of the experimental group, it is desirable to use a design which does not include a pretest.
- Interaction of selection and X: although selection is controlled for by randomly assigning subjects to the experimental and control groups, there remains a possibility that the effects demonstrated hold true only for the population from which the groups were selected. An example is a researcher who tries to recruit schools to observe but is turned down by nine and accepted by the tenth. The characteristics of the 10th school may be vastly different from those of the other nine, and therefore not representative of an average school. In any report, therefore, the researcher should describe the population studied as well as any populations which rejected the invitation.
- Reactive arrangements: this refers to the artificiality of the experimental setting and the subject's knowledge that he is participating in an experiment. This situation is unrepresentative of the school setting or any natural setting, and can seriously impact the experiment results. To remediate this problem, experiments should be incorporated as variants of the regular curricula, tests should be integrated into the normal testing routine, and treatment should be delivered by regular staff with individual students.
Research should be conducted in schools in this manner: ideas for research should originate with teachers or other school personnel. The designs for this research should be worked out with someone expert at research methodology, and the research itself carried out by those who came up with the research idea. Results should be analyzed by the expert, and then the final interpretation delivered by an intermediary.
Tests of significance for this design: although this design may be developed and conducted appropriately, statistical tests of significance are not always used appropriately.
- Wrong statistic in common use: many researchers compute two t-tests, one for the pre-post difference in the experimental group and one for the pre-post difference in the control group. If the experimental group's t is statistically significant while the control group's is not, the treatment is said to have an effect. However, this does not take into consideration how "close" the two t-tests may really have been. A better procedure is to run a 2x2 repeated-measures ANOVA, with the pre-post difference as the within-subject factor, the group difference as the between-subject factor, and the interaction of the two factors as the test of the treatment effect.
- Use of gain scores and covariance: the most commonly used test is to compute pre-posttest gain scores for each group, and then to compute a t-test between the experimental and control groups on the gain scores. In addition, it is helpful to use randomized "blocking" or "leveling" on pretest scores, because blocking can localize the within-subject variance, also known as the error variance. It is important to point out that gain scores are subject to ceiling and floor effects: in the former the subjects start with very high pretest scores, and in the latter the subjects show very poor pretest performance. In such cases, analysis of covariance (ANCOVA) is usually preferable to a simple gain-score comparison.
- Statistics for random assignment of intact classrooms to treatments: when intact classrooms have been assigned at random to treatments (as opposed to individuals being assigned to treatments), class means are used as the basic observations, and treatment effects are tested against variations in these means. A covariance analysis would use pretest means as the covariate.
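The gain-score procedure above can be sketched in plain Python with hypothetical data. For two groups, the independent-samples t statistic on the gain scores tests the same group-by-time interaction as the 2x2 mixed ANOVA (the interaction F equals this t squared):

```python
import math

# Hypothetical pretest/posttest scores for five subjects per group.
treat_pre, treat_post = [10, 12, 11, 13, 9], [14, 15, 13, 17, 12]
ctrl_pre, ctrl_post = [11, 10, 12, 9, 13], [12, 11, 13, 10, 13]

# Gain score = posttest minus pretest, per subject.
treat_gain = [b - a for a, b in zip(treat_pre, treat_post)]
ctrl_gain = [b - a for a, b in zip(ctrl_pre, ctrl_post)]

def pooled_t(x, y):
    """Independent-samples t statistic with a pooled variance estimate."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    sp2 = (ssx + ssy) / (nx + ny - 2)  # pooled variance
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# A single test on the *difference in gains* replaces the two
# separate within-group t-tests criticized above.
t = pooled_t(treat_gain, ctrl_gain)
print(round(t, 3))
```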
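When intact classrooms rather than individuals are randomly assigned, the class mean becomes the basic observation. A minimal sketch with hypothetical classrooms:

```python
# Hypothetical posttest scores for four intact classrooms,
# two assigned to the treatment and two to the control.
classrooms = {
    "class_A_treatment": [72, 75, 70, 78],
    "class_B_treatment": [68, 71, 74, 69],
    "class_C_control":   [65, 67, 70, 66],
    "class_D_control":   [63, 69, 64, 68],
}

# Class means, not individual scores, are the unit of analysis;
# treatment effects are then tested against variation in these means.
class_means = {name: sum(s) / len(s) for name, s in classrooms.items()}
print(class_means)
```

With pretests available, the pretest class means would serve as the covariate in the covariance analysis described above.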
2) The Solomon Four-Group Design
The design is as follows:

R O1 X O2
R O3   O4
R    X O5
R      O6
In this research design, subjects are randomly assigned into four different groups: experimental with both pre-posttests, experimental with no pretest, control with pre-posttests, and control without pretests. In this configuration, both the main effects of testing and the interaction of testing and the treatment are controlled. As a result, generalizability is improved and the effect of X is replicated in four different ways.
Statistical tests for this design: a good way to test the results is to rule out the pretest as a "treatment" and analyze the posttest scores with a 2x2 analysis of variance: pretested vs. unpretested crossed with treatment vs. control. Alternatively, the pretest, which captures pre-existing differences, can be used as a covariate in ANCOVA.
3) The Posttest-Only Control Group Design
This design is as follows:

R X O1
R   O2

This design can be viewed as the last two groups of the Solomon four-group design. It controls for testing as a main effect and for its interaction with the treatment, but unlike the Solomon design it does not measure them. The measurement of these effects, however, is not necessary to the central question of whether or not X had an effect. This design is appropriate when pretests are not acceptable.
Statistical tests for this design: the simplest form would be the t-test. However, covariance analysis and blocking on subject variables (prior grades, test scores, etc.) can be used, which increases the power of the significance test in a way similar to what a pretest provides.
As illustrated above, Cook and Campbell devoted much effort to avoiding or reducing the threats to internal validity (cause and effect) and external validity (generalization). However, some widespread concepts may also introduce other types of threats to internal and external validity.
Some researchers downplay the importance of causal inference and assert the worth of understanding. This understanding includes "what," "how," and "why." However, is "why" not a "cause and effect" relationship? If the question "why does X happen?" is asked and the answer is "because Y happens," does that not imply that "Y causes X"? If X and Y are merely correlated, the answer does not address the question "why." Replacing "cause and effect" with "understanding" makes the conclusion confusing and misdirects researchers away from the issue of internal validity.
Some researchers apply a narrow approach to "explanation." In this view, an explanation is contextualized to a particular case in a particular time and place, and thus generalization is considered inappropriate. In fact, an over-specific explanation might not explain anything at all. For example, if one asks, "Why does Alex Yu behave in that way?", the answer could be "because he is Alex Yu. He is a unique human being. He has a particular family background and a specific social circle." These "particular" statements are always right, and thereby misguide researchers away from the issue of external validity.
Information from: Yu, C. H., & Ohlund, B. (2012). Threats to validity of research design. http://www.creative-wisdom.com/teaching/WBI/threat.shtml