Mechanisms and Causal Explanation




by conditioning on .

More generally, isolation comes in strong and weak forms, both of which suffice for front-door identification. In the strong form considered up to this point, there are no back-door paths between the mechanistic variables and the outcome variable that are not blocked by the causal variable. In the weak form, such back-door paths do exist, but each of them can be blocked by conditioning on another observed variable in the graph other than the causal variable. This distinction clarifies that one must be concerned only with the dependence of the mechanistic variables on components of back-door paths that cannot be blocked by conditioning on observed variables.
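
To make the weak form concrete, the following simulation sketch uses an assumed linear model with hypothetical labels that are not the text's own notation: D for the causal variable, M for a single mechanism variable, Y for the outcome, U for an unobserved common cause of D and Y, and O for an observed variable that opens a back-door path between M and Y. The front-door product estimator fails when only D is conditioned on in the second step, but recovers the true effect once O is conditioned on as well.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Structural coefficients (all positive, as in the discussion).
b_DM, b_MY = 0.7, 0.5          # causal pathway D -> M -> Y
true_effect = b_DM * b_MY      # D affects Y only through M

U = rng.normal(size=n)         # unobserved: confounds D and Y
O = rng.normal(size=n)         # observed: opens back-door M <- O -> Y
D = 0.8 * U + rng.normal(size=n)
M = b_DM * D + 0.6 * O + rng.normal(size=n)
Y = b_MY * M + 0.9 * U + 0.4 * O + rng.normal(size=n)

def ols(y, *regressors):
    """Coefficients from an OLS regression of y on the regressors (plus intercept)."""
    X = np.column_stack([np.ones_like(y)] + list(regressors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Step 1: effect of D on M (no unblocked back-door paths from D to M).
dm_hat = ols(M, D)[0]

# Step 2: effect of M on Y.  Conditioning on D alone leaves the
# back-door path M <- O -> Y open (weak isolation only).
my_biased = ols(Y, M, D)[0]
# Conditioning on D and O blocks every back-door path from M to Y.
my_hat = ols(Y, M, D, O)[0]

print("true effect of D on Y:               ", true_effect)
print("front-door, conditioning on D only:  ", dm_hat * my_biased)  # biased
print("front-door, conditioning on D and O: ", dm_hat * my_hat)     # close to truth

With a large simulated sample, the last line prints a value close to 0.35 (= 0.7 x 0.5) under these assumed coefficients, whereas the D-only version does not.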

The implication of the necessity of assuming isolation is that a very good understanding of the back-door paths between the causal variable and the outcome variable is needed in order to justify the assumption that none of the components of unblockable back-door paths have direct effects on the mechanism. If the isolation assumption cannot be maintained, then the mechanistic variables are similarly affected by the same set of dependencies that invalidate basic back-door conditioning as a strategy to identify the causal effect.
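
The same kind of sketch, under the same assumed linear set-up and hypothetical labels, illustrates the cost of violating isolation: if the unobserved common cause U of D and Y also affects the mechanism variable M directly, both steps of the front-door calculation are contaminated, and their product no longer recovers the effect of D on Y.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000

b_DM, b_MY = 0.7, 0.5
true_effect = b_DM * b_MY      # D affects Y only through M

U = rng.normal(size=n)                       # unobserved confounder of D and Y
D = 0.8 * U + rng.normal(size=n)
M = b_DM * D + 0.6 * U + rng.normal(size=n)  # isolation violated: U -> M
Y = b_MY * M + 0.9 * U + rng.normal(size=n)

def ols(y, *regressors):
    X = np.column_stack([np.ones_like(y)] + list(regressors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Step 1 is now biased: D <- U -> M is an unblocked back-door path from D to M.
dm_hat = ols(M, D)[0]
# Step 2 is also biased: M <- U -> Y cannot be blocked by conditioning on D,
# and U is unobserved, so it cannot be conditioned on directly.
my_hat = ols(Y, M, D)[0]

print("true effect of D on Y:      ", true_effect)
print("front-door product (biased):", dm_hat * my_hat)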

Now consider the requirement that the mechanism be exhaustive. For the graph in panel (a) of Figure 8.5, suppose, again, that there is an unblocked back-door path between the causal variable and the outcome variable and also two distinct causal pathways from the causal variable to the outcome variable, each represented by its own mechanism variable. Finally, unlike for Figure 8.3, suppose that the second of these two mechanism variables is unobserved.

[ INSERT FIGURE 8.5 HERE ]

In this case, the causal pathway that runs through the unobserved mechanism variable cannot be estimated, and thus the full causal effect of the causal variable on the outcome variable cannot be estimated via front-door conditioning. If one were to assert the mistaken assumption that the observed mechanism variable constitutes the full causal mechanism (for example, by substituting a back-door path for the genuine causal pathway through the unobserved mechanism variable), then one could still obtain a causal effect estimate. But the causal effect of the causal variable on the outcome variable would then be underestimated (assuming all causal effects are positive, etc.) because the part of the causal effect generated by the unobserved pathway is attributed to a mistakenly asserted back-door path.
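
In linear path-tracing terms, and again with hypothetical labels (D for the cause, Y for the outcome, M for the observed mechanism variable, N for the unobserved one, and all structural coefficients positive), the size and direction of the bias can be written out directly:

\beta_{DM}\,\beta_{MY} \;+\; \beta_{DN}\,\beta_{NY} \;=\; \text{total effect of } D \text{ on } Y .

Front-door conditioning that treats M as the full mechanism recovers only the first product, \beta_{DM}\beta_{MY}, so the estimate falls short of the total effect by \beta_{DN}\beta_{NY}, which is strictly positive when all coefficients are positive.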

Relaxing the Assumption that the Mechanism is Exhaustive

For the example depicted in panel (a) of Figure 8.5, a mechanism-based analysis may still be useful because an isolated piece of the causal effect can still be estimated. Even though the second mechanism variable is unobserved, the causal effect of the causal variable on the observed mechanism variable is identified by their observed association because there are no unblocked back-door paths between them. And, as before, the causal effect of the observed mechanism variable on the outcome variable is identified because all back-door paths between them are blocked by the causal variable. One can thereby obtain a consistent estimate of the causal effect of the observed mechanism variable on the outcome variable by conditioning on the causal variable, which guarantees that the part of the causal effect that travels through the observed pathway can be estimated consistently. This is an important result, especially for practice, because identifying and consistently estimating a distinct causal pathway can be very useful, even if one cannot in the end offer an estimate of the full effect of the causal variable on the outcome variable.5
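
A simulation sketch of panel (a)'s logic, under an assumed linear model with hypothetical labels (D, Y, an observed mechanism variable M, an unobserved mechanism variable N, and an unobserved common cause U of D and Y), illustrates the partial estimation result: the portion of the effect that travels through M is recovered consistently even though the total effect of D on Y is not.

import numpy as np

rng = np.random.default_rng(2)
n = 200_000

b_DM, b_MY = 0.7, 0.5   # observed pathway   D -> M -> Y
b_DN, b_NY = 0.6, 0.8   # unobserved pathway D -> N -> Y
pathway_via_M = b_DM * b_MY
total_effect = pathway_via_M + b_DN * b_NY

U = rng.normal(size=n)                      # unobserved confounder of D and Y
D = 0.8 * U + rng.normal(size=n)
M = b_DM * D + rng.normal(size=n)           # observed mechanism variable
N = b_DN * D + rng.normal(size=n)           # unobserved mechanism variable
Y = b_MY * M + b_NY * N + 0.9 * U + rng.normal(size=n)

def ols(y, *regressors):
    X = np.column_stack([np.ones_like(y)] + list(regressors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Effect of D on M: identified (no unblocked back-door paths from D to M).
dm_hat = ols(M, D)[0]
# Effect of M on Y: identified by conditioning on D, which blocks
# M <- D <- U -> Y and M <- D -> N -> Y.
my_hat = ols(Y, M, D)[0]

print("true pathway effect via M:  ", pathway_via_M)
print("front-door estimate via M:  ", dm_hat * my_hat)  # consistent for the M pathway
print("true total effect of D on Y:", total_effect)     # larger: the N pathway is missed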

But, even though this partial estimation result is very useful, there is an important implicit assumption hidden by this example: the unobserved causal pathways cannot interact with the observed causal pathways. To see the complications that such an interaction could produce, consider the graph in panel (b) of Figure 8.5. For this graph, the mechanism is isolated, but the unobserved mechanism variable now has a causal effect on the observed mechanism variable. As a result, the causal pathway that runs through the unobserved mechanism variable is mixed in with the causal pathway that runs through the observed mechanism variable. In this situation, the effect of the causal variable on the observed mechanism variable is still identified, as there are still no unblocked back-door paths between them. But the effect of the observed mechanism variable on the outcome variable is not identified. Two back-door paths between the observed mechanism variable and the outcome variable are still blocked by conditioning on the causal variable: the path that runs back through the causal variable and then through the unobserved common cause of the causal variable and the outcome variable, and the path that runs back through the causal variable and then forward through the unobserved mechanism variable. But there is now a third back-door path between the observed mechanism variable and the outcome variable that cannot be blocked by conditioning on the causal variable or any other observed variable. This new path runs from the observed mechanism variable back to the unobserved mechanism variable and then forward to the outcome variable, and it remains unblocked because its only intermediate variable is unobserved (and is not a collider). Not only is it impossible to estimate the effect of the causal variable on the unobserved mechanism variable, but the unobserved mechanism variable may also transmit to both the observed mechanism variable and the outcome variable its own exogenous variation, which is completely unrelated to the causal variable and the unobserved common cause. Without recognition of this outside source of dependence, one could mistakenly infer that the causal pathway through the observed mechanism variable is much more powerful than it is (assuming all causal effects are positive, etc.).
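
A final sketch, under the same assumed linear set-up but with panel (b)'s extra arrow from N to M added, illustrates the danger: the first step of the front-door calculation is still fine, but the second step absorbs the exogenous variation that N transmits to both M and Y, so the estimated contribution of the pathway through M is inflated.

import numpy as np

rng = np.random.default_rng(3)
n = 200_000

b_DM, b_MY = 0.7, 0.5   # pathway through the observed mechanism variable M
b_DN, b_NY = 0.6, 0.8   # pathway through the unobserved mechanism variable N
b_NM = 0.5              # panel (b): N also has a causal effect on M

U = rng.normal(size=n)                          # unobserved confounder of D and Y
D = 0.8 * U + rng.normal(size=n)
N = b_DN * D + rng.normal(size=n)               # unobserved
M = b_DM * D + b_NM * N + rng.normal(size=n)    # observed
Y = b_MY * M + b_NY * N + 0.9 * U + rng.normal(size=n)

# True effect transmitted to Y through M (the paths D->M->Y and D->N->M->Y).
true_via_M = (b_DM + b_DN * b_NM) * b_MY

def ols(y, *regressors):
    X = np.column_stack([np.ones_like(y)] + list(regressors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Effect of D on M: still identified (no unblocked back-door paths from D to M).
dm_hat = ols(M, D)[0]
# Effect of M on Y: NOT identified.  Conditioning on D leaves the
# back-door path M <- N -> Y open, and N is unobserved.
my_hat = ols(Y, M, D)[0]

print("true effect transmitted through M:", true_via_M)
print("front-door-style estimate via M:  ", dm_hat * my_hat)  # too large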

As these simple diagrams show, if one wants to use a mechanism-based strategy to point-identify the full effect of a causal variable on an outcome variable, one must put forward an isolated mechanism that is also exhaustive. The exhaustiveness requirement can be relaxed if one is satisfied with identifying only part of the causal effect of interest. But, in order to secure partial estimation of this form, one must be willing to assume that the observed portion of the mechanism is completely separate from the unobserved portion of the mechanism. And, in order to assert this assumption, one typically needs to have a theoretical model of the complete and exhaustive mechanism, even though parts of it remain unobserved.

In sum, Pearl's front-door criterion represents a powerful and original set of ideas that clarifies the extent to which a mechanism can be used to identify and estimate a causal effect. And, by approaching the causal inference predicament from the inside-out, the approach helps to shape the question of whether a causal claim qualifies as a sufficiently deep explanation. Rather than estimate a causal effect by some form of conditioning (or via a naturally occurring experiment) and then wonder whether or not a mechanism can be elaborated to show how the effect comes about, Pearl instead shows that, if we can agree on the variables that constitute an isolated and exhaustive mechanism (in some configuration), then we can estimate the causal effect from its component causal pathways.

The approach cannot be taken out of context, though, lest one claim too much explanatory power for a non-isolated and/or non-exhaustive mechanism. And, at the same time, it must be acknowledged that even a front-door-identified causal effect may not be explained deeply enough by the isolated and exhaustive mechanism that identifies it, for any such mechanism may still be too shallow for a particular substantive area. To consider these issues further, we now turn to recent social scientific writing on mechanisms, wherein the connections between theory, generative mechanisms, and modes of explanation have been fruitfully engaged.

The Appeal for Generative Mechanisms

To some extent inspired by the work of Jon Elster (e.g., Elster 1989), but more generally by their reading of past successes and failures in explanatory inquiry in the social sciences, Peter Hedström and Richard Swedberg convened a group of leading scholars to discuss a reorientation of explanatory practices in the social sciences. A collection of papers was then published in 1998 as Social Mechanisms: An Analytical Approach to Social Theory. At the same time, John Goldthorpe developed his case for grounding all causal modeling in the social sciences on the elaboration of generative mechanisms. His proposal was published in 2000 as On Sociology: Numbers, Narratives, and the Integration of Research and Theory.6 Most recently, Peter Hedström has laid out in considerable detail his full program of a mechanism-based social science in his 2005 book Dissecting the Social: On the Principles of Analytical Sociology, relying to a substantial degree on Goldthorpe's accounting of the literature on causality. To orient the reader to this alternative mechanism-based perspective, we will present the slightly different perspectives of both Goldthorpe (2000, 2001) and Hedström (2005).

Goldthorpe develops his proposal by first attempting to drive a wedge between the counterfactual model, which he labels causation as consequential manipulation, and his alternative mechanistic model of causality. To develop his argument, he advances the claim that the counterfactual model is irretrievably tied to actual experimental manipulations of causal variables. Goldthorpe (2001:6) writes: "... the crux of the matter is of course the insistence of Rubin, Holland, and others that causes must be manipulable, and their unwillingness to allow causal significance to be accorded to variables that are not manipulable, at least in principle."7 (See our discussion of this issue in the final chapter, where we present the position of Woodward (2003) that this argument is incorrect.)

Goldthorpe further argues, citing the work of statisticians Cox and Wermuth (1996; also Cox 1992), that the counterfactual model too easily settles for causal claims of insufficient depth that do not account for how causes bring about their effects (a point that is clearly supported by our presentation of the natural experiment literature discussed earlier in this chapter). Goldthorpe writes:

The approach to causal analysis that is here proposed ... is presented in the form of a three phase sequence: (i) establishing the phenomena that form the explananda; (ii) hypothesizing generative processes at the level of social action; and (iii) testing the hypotheses. (Goldthorpe 2001:10)

He then proceeds to lay out the basic contours of the first two steps, arguing that the most useful mechanisms in the social sciences are those that are based on rational choice theory (because these are sufficiently microscopic and focus on the actions and beliefs of individuals). He then explicates the third step of his approach, which is his requirement that one must test hypotheses suggested by the generative mechanism that one has proposed.8 And here, he notes that generative causal accounts of the sort that he prefers can never be definitively verified, only indirectly supported through repeated validation of entailed hypotheses.

Consider now the more recent appeal for mechanisms offered by Peter Hedström (2005). Unlike Goldthorpe, Hedström does not confront the counterfactual causality literature directly. He instead develops an argument that suggests the following orienting principle: "The core idea behind the mechanism approach is that we explain not by evoking universal laws, or by identifying statistically relevant factors, but by specifying mechanisms that show how phenomena are brought about" (Hedström 2005:24). Hedström arrives at this position from two different directions. First, he signs on to the dominant position in the philosophy of science that an explanation of how a phenomenon is brought about must necessarily be a causal account of some form that does not rely on unrealistic presuppositions of the existence of invariant general (or covering) laws (see Hedström 2005:15-20). But, he also justifies the principle by criticizing regression-based causal models in social science (see Hedström 2005:20-3), writing:

Such statistical analysis is often described as a form of 'causal analysis'. If a factor appears to be systematically related to the expected value or the conditional probability of the outcome, then the factor is often referred to as a (probabilistic) 'cause' of the outcome. Although it makes little sense to quibble over words, I would like to reserve the word