We can distinguish three questions about the relation between capacities we uncover and the architecture underlying them. First, we can ask whether we are dealing with (for example) a Trivers-Willard module at the computational level. That is, is there a domain-specific, encapsulated computer underlying Trivers-Willard behavior or, for example, a set of mechanisms which evolved to do other tasks but can be pressed into service for this one? Second, how does the computational mechanism discharging a cognitive task actually work? For example, is it a set of stored rules or a connectionist network? Third, is the computational system shared across species? The alternative is that there are different responses to similar adaptive pressures on different lineages, implemented by different systems. A Trivers-Willard effect is found in many species besides humans, but the nature of the effect varies across cases. All the species can do the job, but they might do it very differently. This puts important obstacles in the way of generalizing from comparative research.
The first question, whether we have found a module when we find a psychological capacity, is the one that is most relevant to narrow evolutionary psychology in general, and there are strong suggestions in the literature that many evolutionary psychologists think the answer is "yes". I will look at the first question in some detail, first saying why I think we can attribute this affirmative answer to narrow evolutionary psychology, and then giving some reasons for skepticism about their answer. Then I will deal with the second and third questions much more briefly. Finally, I will look at developmental perspectives that hold out promise of integrating evolution and development without prejudging the issue of modularity.
1.5. Is the Capacity Realized in a Distinct Computational Module?
Tooby and Cosmides argue that evolutionary considerations imply that modules exist. They argue that general-purpose architectures “could not have evolved, survived or propagated because they are incapable of solving even routine adaptive problems” (Cosmides and Tooby, 1994, p. 58). They sometimes concede that there are general-purpose psychological systems (Cosmides and Tooby, 1997), although they seem to think that a general-purpose device would be of little use and could not have evolved to solve specific adaptive problems, because “[g]eneral-purpose mechanisms can’t solve most adaptive problems at all, and in those few cases where one could, a specialized mechanism is likely to solve it much more efficiently” (1994, p. 58). Symons is just as severe: “Each kind of problem is likely to require its own distinctive kind of solution . . . There is no such thing as a ‘general problem solver’ because there is no such thing as a general problem” (1992, p. 142). Cosmides and Tooby (1994, 1997; see also Pinker, 1997) argue that the same principle — a different system for each problem — applies also to the human body. They urge that we have no reason to expect the mind/brain to be any different.
The conclusion being drawn, then, is that evolutionary considerations compel us to believe in modularity. We could not have evolved without solving adaptive problems and we cannot solve adaptive problems without specialized domain-specific systems. The passages I have cited claim that the mind grows piecemeal by developing discrete specialized structures in response to particular adaptive problems. If this is right, we have strong reason to expect that in discovering a psychological capacity we also discover the module designed to discharge that capacity. In other words, the whole tenor of narrow evolutionary psychology suggests that one should answer yes to the question whether a given psychological capacity is realized in a distinct computational module.
These arguments notwithstanding, the connection between behavior and underlying mechanism is not transparent. From the verified prediction that a certain capacity is found in a population, we cannot simply conclude that we are dealing with a discrete module that evolved to discharge the capacity we have found. This problem should be distinguished from Shapiro's (1999) discussion of what he calls the BTM ('behavior-to-mind') inference, which is the inference to the presence of psychological mechanisms on the basis of observed behavior. Shapiro is correct to point to the ubiquity of this inference among narrow evolutionary psychologists, but he goes wrong by assuming that the question whether the BTM inference is warranted is a matter of "when some bit of behavior is likely to implicate the presence of mind" (p. 86) and by treating this as a biologicized version of the problem of other minds: under what circumstances are we justified in saying that a behaving organism has a mind at all? However, the sin committed by evolutionary psychologists taking the narrow approach cannot be that of inferring from human behavior that we have minds. Nor is their sin that of trying to figure out what our psychological endowment is based on our behavior, since all cognitive psychology does that. The unwarranted inference these psychologists are prone to is different. They assume that when we have identified a type of human behavior we are justified in positing the existence of a module that underwrites it — it is the existence of modules, not the existence of mentality, which is undersupported by the behavioral evidence.
Very extravagant modularity remains optional even if one views the mind as an evolved computational organ, since the connections between function and structure may be very indirect. This indirectness, and not the very bold claim that evolutionary logic entails a massively modular mind, seems to be the correct lesson for psychology to take from evolutionary theory. One reason for the indirectness is that natural selection must work within pre-existing constraints, modifying pre-existing structures and pressing them into new uses. Evolution has no foresight, and so these structures will have been previously designed for other uses. The problem of modifying or re-arranging a lot of existing systems is a different kind of design task from that of simply coming up with a design to solve a problem.
Faced with the problem of re-configuring the mind to deal with a new set of problems, natural selection might indeed come up with novel modules. There are clearly other possibilities, however. An existing system could be pressed into service for a new use, perhaps abandoning the old use. That would leave us with a new module, albeit not quite a purpose-built one. Maybe, though, the module could discharge the new task as well as the old one. We would then have one system with two functions. Another possibility is that a collection of pre-existing systems might meet the new challenge without incurring the costs of building a new system. Lastly, the much-despised domain-general systems might actually be good for something after all.
These considerations let us distinguish three ways in which we might answer the first of our questions, whether a given capacity is realized in a distinct computational module. If the capacity is indeed realized in a distinct module we have a module-specific computational explanation. This kind of explanation correlates a psychological capacity, or a distinguishable component thereof, with a computational module responsible for its implementation. It identifies features of the computational system with components of the behavior.
However, we should not expect even module-specific computational explanations to invariably correlate modules one-to-one with adaptive psychological capacities. Even if we can identify a capacity with an underlying module, there remains the complication that the module might implement more than one capacity. There are clear precedents from physiology for this, despite the argument of narrow-school evolutionary psychologists that the body relies on dedicated structures. Excretion and reproduction look like pretty good candidates for distinct adaptive capacities, but they share physical machinery in half our species; one structure, two functions. Of course, each capacity draws on further mechanisms with only one job (testes, bladder), but the output goes through the same structure.
A different case would be one in which a set of modules evolve to do their respective jobs, but in aggregate enable further capacities to be discharged. A theory of how the mind implements a capacity via the interaction of a collection of modules gives us a second sort of explanation, which we might call a multiple-module computational explanation. Rather than having an identifiable psychological competence realized by a module designed for that purpose, we would be faced with the simultaneous employment of several mechanisms, each with its peculiar adaptive history. The interaction among these mechanisms explains how a psychological capacity can be realized without the design of a new purpose-built computational system. We can envisage a competence using a collection of mechanisms designed for other purposes. Many behaviors may require multiple-module explanations. They are not correlated with modules built to implement them, but are the by-product of the joint workings of systems for which different, module-specific, adaptive explanations may in turn be possible. (Of course, we must also be open to the possibility that some of the components may not have adaptive explanations at all.)
The third kind of relation between function and structure appeals to domain-general mechanisms. A domain-general explanation does not involve modules at all, either singly or in concert. As we have seen, narrow evolutionary psychology seems committed to the idea that only modules can solve adaptive problems. Much strife has arisen on this point, so I will belabor it a little.
As with module-specific explanations of functions that share structure, we can envisage a physiological analogue to domain-general cognitive architecture. Again, the argument for domain-specificity from analogy with physiology appears to fail, even if we leave aside the problem of how to identify "domains" in physiology. If you don't think that there are physical organs that can carry out a wide variety of tasks then I invite you, as G. E. Moore did, to hold up your hands. Hands are distinct, identifiable bodily structures. And they have an indefinite number of functions (Hull, 1989). If it makes sense to talk about domains in physiology, then hands are domain-general bodily structures.4
Tooby and Cosmides do have a stronger argument against the idea that domain-general psychological systems are of adaptive importance. This is the idea that the domain-general systems fail solvability tests — models using general purpose algorithms, it is claimed, are unable to solve adaptive problems that humans can solve (Cosmides and Tooby, 1994; Tooby and Cosmides, 1992). There are two problems with this argument. First, when it comes to the performance of complicated cognitive tasks we aren’t exactly inundated with successful computational models of any kind, including domain-specific ones. In areas where there are some respectable models, such as face-recognition or memory, the performance of models using domain-general learning algorithms (usually connectionist ones) has often been lauded unduly, but it is unclear that they are doing noticeably worse than domain-specific systems.
Second, Cosmides and Tooby (1994) themselves draw attention to the fact that, traditionally, computational models have been applied to problems that are psychologically unrealistic — tractable in the laboratory, but unlikely to carve the mind at its joints. This complicates the solvability argument, for the poor performance of these systems as psychological models may be attributable to the problems they are working on, rather than the methods they use.
This is especially salient once we take learning into account. Narrow evolutionary psychologists sometimes write as though the superiority of domain-specific systems is, in principle, a matter of their special-purpose algorithms, but at other times they say that the crucial edge comes from having more knowledge about the problem. It makes a big difference which claim is intended. In the following passage, for example, methods and knowledge are more or less completely confounded: "The difference between domain-specific methods and domain-independent ones is akin to the difference between experts and novices: Experts can solve problems faster and more efficiently than novices because they already know a lot about the problem domain" (Cosmides & Tooby, 1997, p. 83, italics in original). Surely, though, an expert can know more about a problem domain than a novice without using different methods. Faced with the problem of explaining the resurgence of the French monarchy in the thirteenth century I would bet against myself and back a medieval historian — not because she knows some special method that I don't, but because she knows a lot more pertinent facts than I do.
If solving an adaptive problem can simply depend on becoming expert via the acquisition of knowledge, then it seems that a general-purpose system with a powerful learning algorithm could well do so, especially if it were given many years to learn the information, as might happen for humans with the acquisition of social information during childhood. Faced with an adaptive problem, natural selection might design a new module, but it might do just as well by relying on an existing domain-general system with lots of time to learn about the problem.
Ultimately, of course, it is an empirical question whether one should prefer, in a given case, module-specific explanations, multiple-module explanations or domain-general explanations. I hope to have shown, though, that narrow evolutionary psychology tends to assume the first with undue confidence, based merely on discoveries about the existence of psychological capacities. Forward-looking adaptationist reasoning discovers psychological capacities or traits, not the architecture underlying them. The move from function to structure is likely to be very complicated for any theoretically interesting partition of behavior. Despite the apparent beliefs of some thinkers, there is no warrant for inferring the presence of a module from the presence of an adaptation at the level of task-description. This also suggests narrow evolutionary psychology is less clearly distinct from human sociobiology than one might have thought, since the former’s claims to focus on cognitive architecture rather than behavior turn out to rely on questionable inferences about the logic of adaptationist thinking in psychology.
That we can't draw conclusions about underlying structure on the basis of observed behavior or overall functioning is hardly a surprising point. Indeed, it would be surprising if this were not true, for surely it is not generally the case that one can predict an organism's innards from its functional capacities. Knowing that an organism can deal with toxins in its environment, for example, doesn't tell us anything about the workings of the organs it uses to do that. We know what they can do, not how they do it.
1.6. How Does the Computational Module Work?
The answers to the second and third questions I posed are shorter. I turn now to the question of how modules work, and then look briefly at the question whether shared adaptive capacities across lineages are evidence for shared mechanisms.
Let’s assume that we have identified an adaptive capacity. Further, let’s assume that we have good reason to believe that we are dealing with a computational module. We can imagine that we are able to base this conclusion on the sorts of general psychological theories and strategies that are standardly taken to isolate computational modules. We are still not justified in making particular claims about the workings of the module. We might be able to understand perfectly well what a system does without grasping the underlying engineering. There are different ways to build a device to carry out a particular function. Knowing what a module was designed to do is a matter of understanding a distinct psychological competence, perhaps by isolating it from others via empirical evidence such as dissociations, which leave the competence intact while causing deficits in other areas. Understanding the engineering, however, is a matter of finding out exactly how a device is built to discharge the competence. This cannot just be read off evidence that the competence exists.
The significance of this for the evolutionary perspective on psychology is that an underlying module might realize a particular psychological capacity even though it has no resources designed expressly to realize that capacity. It is always possible that existing resources have been co-opted for a new use. So even if we have evidence that a discrete module exists we are still not justified in assuming that the module is an adaptation designed to discharge a particular cognitive task.
Hauser and Carey (1998) review studies showing that rhesus monkeys and cotton-top tamarins share with human infants the fundamentals of numerical representation. The original studies on infants (Wynn, 1992) convinced many people, including the editor of