Fig. 4

Now we perturb the system (i.e., make a change in inputs) corresponding to a nudge parallel to a fixed unit vector. In an artificially simple case, with a nudge of suitable magnitude and ignoring the flow within the basins of attraction, the circle is now centred on the origin. (Depending on the character of the flow in each of the four basins of attraction, the circle may of course be deformed, and it may be necessary to apply a slightly different nudge, but the details are unimportant; all that matters is that we have now moved some portion of the circle over the attractor basin boundaries.) This is illustrated in Figure 5. We can easily see that a physical nudge to the same psychological state can give rise to four different possible psychological states, according to the differences in underlying physical states.

Fig. 5

Well, then, have we dispensed with the need for chaos altogether? We can see from the example that all that is required to observe high level indeterminism on top of low level determinism is a nudge that shifts any set of arbitrarily similar states lying near one attractor to a new neighbourhood where a basin boundary will intersect the translated set. These nudges, to use the phrase Smith applies to changes in control parameters, 'can be as deterministic as you like', but that doesn't mean the subsequent evolution, described at a high level, is deterministic.^{16} But this isn't chaos; this is elementary dynamics. It's true: it appears that something like psychological state indeterminism follows immediately unless we are prepared to ignore basic neurobiology and arbitrarily restrict the attractor changes that count to those which are brought about by changes in global parameters describing ion concentrations, neuropeptide distributions, and the like.
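The geometry of Figures 4 and 5 can be sketched in a few lines of code. In this toy version (my construction, not taken from the figures themselves), the four quadrants of the plane stand in for the four basins of attraction, the circle of physically similar states starts wholly inside quadrant I, and the nudge is a deterministic shift that recentres it on the origin:

```python
import math

def quadrant(x, y):
    """Basin label 1..4 by quadrant (boundary assignment is arbitrary)."""
    if x >= 0 and y >= 0: return 1
    if x < 0 and y >= 0: return 2
    if x < 0 and y < 0: return 3
    return 4

# A circle of arbitrarily similar physical states, all in quadrant I,
# hence all realising the same psychological state.
centre, radius = (2.0, 2.0), 1.0
states = [(centre[0] + radius * math.cos(t), centre[1] + radius * math.sin(t))
          for t in (2 * math.pi * k / 100 for k in range(100))]
assert {quadrant(x, y) for x, y in states} == {1}

# One deterministic nudge, applied identically to every state, moves the
# circle so that it straddles all four basin boundaries.
nudge = (-2.0, -2.0)
nudged = [(x + nudge[0], y + nudge[1]) for x, y in states]
print(sorted({quadrant(x, y) for x, y in nudged}))  # -> [1, 2, 3, 4]
```

The point of the sketch is just that nothing stochastic and nothing chaotic is needed: one and the same deterministic nudge yields four different high level outcomes, depending only on which underlying physical state realised the original psychological state.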
The rôle here for chaotic dynamics, and for their associated fractal basin boundaries and sensitive dependence on initial conditions, is to make this psychological level indeterminism more interesting. Let's return to the argument Smith attributes to me. The gist of it is that sensitive dependence on initial conditions in the low level dynamics of a system can magnify tiny, apparently psychologically irrelevant changes in physical state at one time into physical changes which are psychologically significant later on. If the picture of strange attractor evolution, or cognitive transition, offered by Smith were complete, this conclusion about sensitive dependence on initial conditions would be false. But when we consider the importance of afferent signals in the way real neural networks function, this conclusion about sensitive dependence on initial conditions becomes true. The reason is that a tiny change in physical state, insufficient to lead to a new psychological state all by itself, can cause the system to enter a different psychological state than it otherwise would have the next time it is 'bumped' by a change in afferent signals. To illustrate, consider two arbitrarily similar states within the circle in Figure 4. Perhaps after some time the states have evolved into the upper left and the lower right of the circle. They are still the same psychological state, because they are both in quadrant I, but when we make a change in afferent signals corresponding to the nudge illustrated in Figure 5, one trajectory now lies in quadrant II and the other in quadrant IV. The presence of sensitive dependence on initial conditions is not a sufficient condition for cognitive transition indeterminism, but it makes that indeterminism more interesting by increasing the sensitivity of the system to afferent signal 'bumps'.
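The magnification step can be sketched numerically. In the sketch below (my illustrative choice of system, not one from the text), the logistic map x → 4x(1 − x) stands in for the low level dynamics, and the two trajectories begin far closer together than any afferent 'bump' could distinguish:

```python
# Sensitive dependence magnifying a psychologically irrelevant difference:
# the logistic map at r = 4 is a stand-in for the low level dynamics, and
# the starting points, threshold, and iteration cap are assumptions made
# for illustration.

def logistic(x):
    return 4.0 * x * (1.0 - x)

def diverge_time(x1, x2, threshold=0.1, max_iter=100):
    """Iterations until the two trajectories separate past the threshold."""
    for n in range(max_iter):
        if abs(x1 - x2) > threshold:
            return n
        x1, x2 = logistic(x1), logistic(x2)
    return None

# A separation of 1e-9 is far too small for any coarse-grained 'bump' to
# resolve, yet the dynamics soon amplify it to order 0.1 -- large enough
# that an identical bump could send the two states to different basins.
print(diverge_time(0.3, 0.3 + 1e-9))
```

Since the map stretches distances by at most a factor of 4 per step, the separation cannot exceed 0.1 before iteration 14; in practice it does so within a few dozen iterations, which is the window in which a shared 'bump' can tell the two states apart.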
To recap, then, we've established first that the apparent analogy between Smith's thermodynamical argument and the one he attributes to me is insufficient to show the latter invalid. Interpreting the three different arguments depends on a number of hidden premises. In the case of the original psychological argument, we've seen that making sense of it depends on how we construe the mappings from physical states to psychological states. If we adopt the suggestion that psychological states correspond to classes of trajectories near particular attractors, we gain a useful picture of how psychological states may change in response to modifications in global parameters. But when we add in the realisation that networks may move to new attractors in response to changes in afferent signals, we gain two further useful conclusions. First, something like psychological indeterminism follows immediately, without recourse to chaotic dynamics. Second, if the system is in fact chaotic at low levels, we gain a richer variety of cognitive transition indeterminism. Thus Smith was correct in objecting that appeals to chaos do not add argumentative force to the basic notion that dynamics described at the psychological state level may be indeterministic although they supervene on deterministic dynamics at a lower level. At the same time, as I argued in the original paper, an understanding of low level chaotic dynamics is helpful for understanding characteristics of high level cognitive transitions, and a representational schema for speaking about interactions between dynamics at various levels of description is useful. We have seen some very basic ways in which chaotic behaviour in neural networks may be relevant to the philosophy of mind. Next we will consider low level chaos itself and conclusions about it which are independent of how we may relate low level dynamics to higher level properties of intelligent systems.
In what follows, we shall explore a more technical aspect of chaos in neural networks. Specifically, our concern will be with what relevance, if any, the analogue nature of real biological networks has for simulating their behaviour on digital computers. This is significant for the philosophy of mind because if analogue neural networks can be simulated satisfactorily on digital computers, and if we believe human minds are instantiated by analogue neural networks, then human minds may be essentially little different from Universal Turing Machines running very sophisticated programs. If analogue neural networks cannot be simulated satisfactorily on digital computers but we still believe that human minds are instantiated by analogue neural networks, then human minds may be very different from Universal Turing Machines running very sophisticated programs.^{17} Here I question the assumption that the computability of a function governing the behaviour of a nonlinear analogue dynamical system guarantees that that dynamical system may be satisfactorily simulated by digital means. After a brief overview of recursion theory, I consider two observations about chaotic analogue systems which suggest some aspects of their behaviour may not be captured by digital simulations. I conclude by placing these observations in a broader recursion theoretic context and by commenting on their potential relevance to chaotic analogue subsystems of the human brain.^{18} Later we shall examine a recent development which apparently offers real evidence of the kinds of difficulties explored in the following discussion. Mathematicians usually assume that the computability of a function governing the behaviour of a dynamical system is both a necessary and a sufficient condition for its being possible, given appropriate resources, to simulate effectively that dynamical system with a digital computer.
In other words, the assumption is that it is impossible for the real dynamical system to behave in such a way that a digital simulation could not behave arbitrarily similarly, to within a degree of accuracy limited only by the computational resources available. This is not an unreasonable assumption. The requirements for a function's computability (sequential computability and effective uniform continuity) appear, on the face of it, to preclude any behaviour in the dynamical system a function might describe that could not simply be read off from an appropriate application of the function to a set of specified initial conditions. Even in cases where necessary inaccuracies in specifying the initial conditions of a system mean the time evolution of a real system cannot be predicted precisely, it appears that no phase trajectory which the real system might trace could be qualitatively different from the phase trajectories described by simulated evolution. In what follows, I explore whether this holds true for chaotic systems which are analogue and whose behaviour we thus might analyse in terms of values on the continuous real line. We begin with a brief overview of the fundamental definitions of recursion theory. With these definitions and the earlier definitions of chaos in hand, I explore two general observations about chaos over the real line. Finally, I place these observations in a broader recursion theoretic context and suggest ways in which they might be relevant to the analysis of chaotic areas of the human brain. At the heart of the theory of computability is the recursively enumerable set. Put simply, any set of natural numbers which can be generated algorithmically, i.e., by a Turing machine, is called recursively enumerable. The class of all such algorithms, or Turing programs, generates the class of all recursively enumerable sets. For some of these, called recursively enumerable nonrecursive sets, there is no algorithmic test for set membership.
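The one-sided character of such sets can be sketched with a toy semi-decision procedure. The example set below, the n whose 3n+1 trajectory reaches 1, is my illustrative choice (whether it in fact omits any n is the open Collatz conjecture, so it is not a proven nonrecursive set); it is used only for the shape of the procedure: membership can be confirmed by running a program, but a 'no' answer would require running forever.

```python
# Toy illustration of semi-decidability: we can generate or confirm
# members of S = {n : the 3n+1 trajectory of n reaches 1}, but the
# procedure has no way to certify non-membership -- on a non-member
# (if any exists) it would simply never halt.

def semi_decide(n, max_steps=None):
    """Return True once n's trajectory reaches 1; without max_steps, a
    non-member would loop forever rather than return False."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if max_steps is not None and steps > max_steps:
            raise RuntimeError("gave up: 'no' is never certified")
    return True

# Enumerating members is the easy direction; a general membership test
# is exactly what recursively enumerable nonrecursive sets lack.
print([n for n in range(1, 11) if semi_decide(n, max_steps=10_000)])
```

The `max_steps` cap only makes the sketch safe to run; conceptually, the asymmetry between halting on 'yes' and looping on 'no' is the whole content of recursive enumerability without recursiveness.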
It is just such a noncomputable set, one whose complement is not recursively enumerable, which is the key to understanding noncomputability for a function, number, or sequence of numbers. That such sets exist was first established by Kleene (1952; see also Rogers 1967). A real number is computable if there is a computable sequence of rationals which converges effectively to it (Pour-El and Richards 1989). More formally, a sequence of rational numbers {r_{k}} is computable if there exist three recursive (that is, algorithmic) functions over the naturals a, b, s: N → N such that ∀k, b(k) ≠ 0 and r_{k} = (−1)^{s(k)} a(k)/b(k). Such a sequence converges effectively to a real number x if there exists a recursive function e: N → N such that ∀N ∈ N: k ≥ e(N) implies |r_{k} − x| ≤ 2^{−N}. More intuitively, a real number is computable if it can be effectively approximated to an arbitrary degree of accuracy by an algorithmic method. For instance, π is a computable number because the successive digits of its decimal expansion can be generated by an algorithm specified in advance. (Note, however, that the question of whether a particular sequence of digits occurs in the expansion of π cannot, in general, be decided algorithmically; the best we could do would be to keep generating the decimal expansion and testing for the sequence, and until it did appear, we could not answer the question of whether it might yet appear.) It is significant that 'most' real numbers are in fact noncomputable (Minsky 1967). It is easy to see why: every real number is either computable or noncomputable, but the set of computable reals is only countably, or denumerably, infinite, while the set of all reals is uncountably infinite. This point plays a central rôle in the observations we shortly will make about chaotic analogue systems. Computability for a function was first formulated over three decades ago (Grzegorczyk 1955, 1957; Lacombe 1955a, 1955b), and it requires both sequential computability and effective uniform continuity.
Consider a function f defined on a closed bounded rectangle I^{q} in R^{q}, where I^{q} = {(x_{1}, ..., x_{q}) : a_{i} ≤ x_{i} ≤ b_{i}, i = 1, ..., q}, the a_{i}, b_{i} (the 'corners') are computable real numbers, and R^{q} represents q-dimensional real space. The first criterion is met when the function maps computable sequences to computable sequences: f maps every computable sequence of points x_{k} ∈ I^{q} into a computable sequence {f(x_{k})} of real numbers. The second condition is fulfilled when a certain algorithmic relationship exists between the Euclidean distance separating points in the domain of the function and the distance between corresponding points in the range. Specifically, the condition is met when there exists a recursive function d: N → N such that ∀x, y ∈ I^{q} and ∀N ∈ N: |x − y| ≤ 1/d(N) implies |f(x) − f(y)| ≤ 2^{−N}. Having outlined the technical meaning of 'computability' on which the rest of our points rely, we turn now to an analysis of computability for chaotic systems which are analogue. The present observations rely upon the assumption that the continuous real line, as opposed to, say, the constructive rationals, represents the best mathematical framework in which to analyse the behaviour of analogue systems. While both sets of numbers provide continuity, the reals are intuitively more 'complete' and offer a starting point which, on the face of it, is certainly no less plausible than the alternative. Although we will return to a closely related question later when we discuss the applicability of real number models to a physical world with apparently limited detail, an extended discussion of the merits of each set of numbers for analysing analogue systems is best left to a paper dedicated to philosophy of maths or philosophy of logic. In this paper, I shall concentrate entirely on exploring the consequences of applying the real numbers. As I mentioned previously, there is a nondenumerable infinity of noncomputable numbers on the real line and a denumerable infinity of computable numbers.
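A concrete modulus for the second condition may help. The example function and modulus below are my illustrative choices, not ones from the text: for f(x) = x² on [0, 1] we have |x² − y²| = (x + y)|x − y| ≤ 2|x − y|, so the recursive function d(N) = 2^{N+1} witnesses effective uniform continuity.

```python
# Effective uniform continuity witnessed for f(x) = x**2 on [0, 1]
# (an assumed example): d(N) = 2**(N + 1) guarantees that
# |x - y| <= 1/d(N) implies |f(x) - f(y)| <= 2**(-N).

def f(x):
    return x * x

def d(N):
    return 2 ** (N + 1)

def implication_holds(x, y, N):
    """The defining implication, checked at a single pair of points."""
    if abs(x - y) <= 1.0 / d(N):
        return abs(f(x) - f(y)) <= 2.0 ** (-N)
    return True  # antecedent false: implication is vacuously true

# Spot-check the implication on a grid of nearby pairs inside [0, 1].
checks = [implication_holds(i / 1000.0, min(i / 1000.0 + 1.0 / d(N), 1.0), N)
          for N in range(1, 12) for i in range(1001)]
print(all(checks))  # -> True
```

The essential point is that d is itself recursive: the accuracy demanded in the range translates algorithmically into a tolerance in the domain, which is what lets a digital machine evaluate f to any requested precision.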
As Minsky (1967) put it, 'most' real numbers are noncomputable. In iterative systems, there is only a countably infinite set of possible periods, but if there are analogue systems in nature which truly have fixed points of all possible periods, then such systems will have an uncountably infinite number of possible periods but only a countably infinite number of computable points in phase space at a given time or over a given Poincaré section. If this is true, that there is an uncountably infinite number of possible periods but only a countably infinite number of computable points in phase space, then it is a simple observation that there must be an uncountably infinite number of fixed points with distinct periods. Since only countably many of those period values can be computable reals, there is an uncountably infinite set of fixed points with noncomputable periods. To use Minsky's term again, 'most' phase trajectories, then, have noncomputable periods. As a follow-up, it is interesting to note that a point in phase space defined by computable coordinates could have a noncomputable period; likewise, a point defined by noncomputable coordinates could have a computable period. But if either the coordinates of a point in phase space are noncomputable or the period of a point is noncomputable, then the phase trajectory on which such a point lies is noncomputable. Thus, unless there is some strange reason according to which computable points in phase space cannot have noncomputable periods, not even the full countably infinite set of computable points in phase space will lie on computable phase trajectories (although, of course, the set of computable points in phase space which do lie on computable phase trajectories might well still be countably infinite!). The second observation about chaotic analogue systems is closely related to the first. If a system is described by a computable function, and if a phase trajectory passes through a computable point at