J. R. Statist. Soc. A (2008) 171, Part 2, pp. 481–502
Misunderstandings between experimentalists and observationalists about causal inference
Kosuke Imai
Princeton University, USA
Gary King
Harvard University, Cambridge, USA
and Elizabeth A. Stuart
Johns Hopkins Bloomberg School of Public Health, Baltimore, USA
[Received January 2007. Final revision August 2007]

Summary. We attempt to clarify, and suggest how to avoid, several serious misunderstandings about and fallacies of causal inference. These issues concern some of the most fundamental advantages and disadvantages of each basic research design. Problems include improper use of hypothesis tests for covariate balance between the treated and control groups, and the consequences of using randomization, blocking before randomization and matching after assignment of treatment to achieve covariate balance. Applied researchers in a wide range of scientific disciplines seem to fall prey to one or more of these fallacies and as a result make suboptimal design or analysis choices. To clarify these points, we derive a new four-part decomposition of the key estimation errors in making causal inferences. We then show how this decomposition can help scholars from different experimental and observational research traditions to understand better each other's inferential problems and attempted solutions.

Keywords: Average treatment effects; Blocking; Covariate balance; Matching; Observational studies; Randomized experiments
Random treatment assignment, blocking before assignment, matching after data collection and random selection of observations are among the most important components of research designs for estimating causal effects. Yet the benefits of these design features seem to be regularly misunderstood by those specializing in different inferential approaches. Observationalists often have inflated expectations of what experiments can accomplish; experimentalists ignore some of the tools that observationalists have made available; and both regularly make related mistakes in understanding and evaluating covariate balance in their data. We attempt to clarify some of these issues by introducing a general framework for understanding causal inference.

As an example of some of the confusion in the literature, in numerous references across a diverse variety of academic fields, researchers have evaluated the similarity of their treated and control groups that is achieved through blocking or matching by conducting hypothesis tests, most commonly the t-test for the mean difference of each of the covariates in the two
Address for correspondence: Kosuke Imai, Department of Politics, Princeton University, Princeton, NJ 08544, USA. E-mail: KImai@Princeton.Edu
© 2008 Royal Statistical Society
groups. We demonstrate that when these tests are used as stopping rules in evaluating matching adjustments, as frequently done in practice, they will often yield misleading inferences. Relatedly, in experiments, many researchers conduct such balance tests after randomization to see whether additional adjustments need to be made, perhaps via regression methods or other parametric techniques. We show that this procedure is also fallacious, although for different reasons. These and other common fallacies appear to stem from a basic misunderstanding that some researchers have about the precise statistical advantages of their research designs, and other paradigmatic designs with which they compare their work. We attempt to ameliorate this situation here.

To illustrate our points, we use two studies comparing the 5-year survival of women with breast cancer who receive breast conservation (roughly, lumpectomy plus radiation) versus mastectomy. By the 1990s, multiple randomized studies indicated similar survival rates for the two treatments. One of these was Lichter et al. (1992), a study by the National...
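The balance-test fallacy described above can be illustrated with a short simulation. The sketch below is not from the paper; it is a hypothetical example showing why a t-test is a poor stopping rule for balance checking: the test's p-value depends on sample size, so pruning observations during matching can make the test "pass" even when the covariate imbalance itself (measured here by the sample-size-free standardized mean difference) is unchanged.

```python
# Illustrative simulation (assumed example, not the authors' code):
# a fixed mean shift of 0.2 in one covariate, evaluated at shrinking
# sample sizes to mimic the pruning of observations in matching.
import math
import random

random.seed(0)

def t_statistic(x, y):
    """Welch two-sample t statistic for the mean difference."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((xi - mx) ** 2 for xi in x) / (nx - 1)
    vy = sum((yi - my) ** 2 for yi in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

def std_mean_diff(x, y):
    """Standardized mean difference: a balance measure whose
    expectation does not depend on the sample size."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((xi - mx) ** 2 for xi in x) / (len(x) - 1)
    vy = sum((yi - my) ** 2 for yi in y) / (len(y) - 1)
    return (mx - my) / math.sqrt((vx + vy) / 2)

# Treated and control groups drawn with the SAME underlying
# imbalance: a mean shift of 0.2 on a unit-variance covariate.
treated = [random.gauss(0.2, 1) for _ in range(2000)]
control = [random.gauss(0.0, 1) for _ in range(2000)]

for n in (2000, 200, 50):  # smaller n mimics discarding observations
    t = t_statistic(treated[:n], control[:n])
    d = std_mean_diff(treated[:n], control[:n])
    passes = abs(t) < 1.96  # the (fallacious) 5% "balance test"
    print(f"n={n:5d}  |t|={abs(t):5.2f}  "
          f"t-test says balanced: {passes}  SMD={d:5.2f}")
```

As the sample shrinks, the t statistic drifts toward zero and the test eventually declares "balance", while the standardized mean difference stays near its true value of 0.2: the apparent improvement reflects lost statistical power, not improved balance.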