Emergence, causation and objectivity: elements of an alternative theoretical framework
Abstract. This chapter presents a computational view of emergence, as an alternative to the usual combinatorial view formulated in terms of parts and wholes. It shows that computational emergence can be characterized in terms of causation, and that a subclass of computationally emergent processes displays many of the connotations of the scientific use of the term. Having thus captured a concept of emergence, I turn to the question of applying the concept and testing whether some instantiations exist.
The concept of emergence aims at making sense of novelty – of properties, entities, laws, etc.1 – within the framework of naturalism, namely the refusal of dualism, of positing a region of being beside, and independent of, the natural world as unveiled by the natural sciences. The idea of emergence therefore rests on a shared intuition: if, on the one hand, a scientific mind must not admit anything supernatural, on the other hand an explanation of things such as economic trends, thoughts or affects, or the history of political ideas, cannot be worked out in terms of the motion of quarks, muons, or other elementary entities of particle physics. It is striking that the theory which holds that only the entities of fundamental physics are real, and therefore claims that predicates like “think”, “believe”, “idea”, “affection” are illusory – the theory called “eliminativist” (Churchland 1981) – seems immediately absurd to most of us. For these reasons, the word “emergence” is pervasive in the recent scientific as well as philosophical literature. Nevertheless, nothing proves that it would survive a rigorous elucidation; it might be that the intuition fades once one attempts to clarify it.
Scientists often use the term because many of them believe that even if the investigated phenomena are made of material items which obey the laws of elementary physics, such physics is not sufficient to understand them. This holds for the human and social sciences, but also for biology and even for the regions of physics other than particle physics. In the 1980s, an interesting debate – occasioned by the controversy over the building of a giant particle accelerator, the Superconducting Super Collider, in the US (a project eventually abandoned) – opposed those who think that all of physics is reducible to particle physics because, as Steven Weinberg (1984) said, the “arrow of explanation”2 points downward, to those, like the Nobel laureates Anderson (1972) or Laughlin (2005; Laughlin et al. 2000), often condensed-matter physicists, who think that a “theory of everything” elaborated within the quantum physics of particles would not be able to account for everything, because there are emergent laws at levels above particles.
For professional philosophers, emergentism first means a view defended in the 1920s by Samuel Alexander, Lloyd Morgan or C. D. Broad, British philosophers who thought that emergence reconciled naturalism with the acknowledgment of novel properties beyond elementary physics. They were proved wrong by the progress of science, to the extent that one of their paradigms was the structure of water – which was, according to them, unexplainable through atomism – and which was later explained precisely by the quantum physics of covalent bonding (McLaughlin 1992). The notion then came back through the philosophy of mind, as the above reference to eliminativism attests. The main problem here is to understand mental states as at the same time grounded upon, and irreducible to, brain states. The discussion has revolved around the concept of supervenience3, and an argument proposed by Jaegwon Kim, who sees mental properties as epiphenomenal: if one is committed to the “causal closure of physics”4, they cannot have any causal efficacy (all their causal strength comes from their physical bases) and hence have a merely epiphenomenal reality.
Interestingly, the generality of the use of the word “emergence” in the sciences contrasts with the restricted use many philosophers make of the term. Some of them, considering the concept, come to the conclusion that either there are no emergent properties at all, or that only phenomenal consciousness (i.e., “what it’s like” to have this thought or to be this person, e.g. Nagel (1974): “what is it like to be a bat”) would be emergent (Chalmers 2006). But, following the general orientation of the volume recently edited by Bedau and Humphreys (2008), I aim here at making sense of the concept of emergence as one finds it in the sciences, instead of discussing what the concept of emergence should be, and what would instantiate it, in the sole light of the mind/body problem.
As Bedau (2008) said, a concept of emergence must at the same time capture the autonomy (vis-à-vis some bases) and the dependency (vis-à-vis these bases) of what is emergent. An important distinction has to be made between two different questions regarding emergence, namely the meaning of emergence and the reality of emergence. The former is about building a coherent concept of emergence, likely to capture many of the uses of the word in the sciences. The second issue is whether there are things in the world which actually fall under this concept. This distinction is necessary, because many arguments in philosophy – first of all Kim’s – were directed against the concept of emergence, i.e. aimed at showing that either it makes no sense or it amounts to some kind of epiphenomenalism. In this perspective, what we call “emergent” is not emergent, because the very concept of emergence is misconstrued. Conversely, it is conceivable that we could devise a satisfying concept of emergence and that, in the end, nothing empirical falls under it. Construing the concept is the philosophical question; testing whether what is believed to be emergent actually falls under the concept, and finally whether there exists in the world something belonging to its extension, is another question, to be answered by the empirical sciences. Some confusion has occurred in the debates because these two questions, relevant to two kinds of investigation, have been conflated. Most of the present paper deals only with the first question, namely the meaning of emergence.
I will here present a computational concept of emergence and contrast it with another, more frequent, approach to emergence. I will show that, on the one hand, it is more satisfying and better answers some objections raised against the very concept of emergence; and that, on the other hand, it includes a causal dimension which makes it a concept apt to capture what is at stake in many appeals to “emergence” in scientific contexts. The last section of the paper will sketch some applications of this concept in the special sciences.
Emergence is often conceived of as the issue of understanding the properties of a whole which would be irreducible to the properties of its parts – what I call “combinatorial emergence”. Traffic jams (Nagel and Rasmussen 1994), fads (Tassier 2004), temperature, chromosomes at the time of meiosis: all display behavior which cannot be understood by adding up accounts of the behaviors of their parts. Hence one sees them as displaying “emergent behavior”: such emergence is often viewed as something proper to the whole and irreducible to the parts. Philosophers like Silberstein (2002), O’Connor (1994), and Bechtel and Richardson (1992) tackled the issue of emergence through this parts-vs-whole scheme. In the same vein, Phan and Dessalles (2005) see emergence as a drop in complexity, Wilson (2009) as a decrease in degrees of freedom – by contrast with the mere product of the properties of parts (where one would find additivity of degrees of complexity/degrees of freedom5).
But let’s take what is often seen as a famous example of emergence, the segregation model by economist Thomas Schelling (1969): according to their “colour”, agents in an agent-based model eventually get lumped together in homogeneous clusters, like ghettos in real life. The rules, as is well known, only encode a slight dislike for being part of the minority (something like “if I am the only green among ten reds, then move”). Besides the important lessons of this model for the social sciences (essentially about the limits of a desegregation policy based on education), the behavior “join the group” is not given in the behavioral rules of the agents, so it somehow emerges from their added interactions. But groups are not exactly composed of agents, because the groups subsist even if some agents are added and some “die” (Gilbert 2002). Therefore, given that parts are transient with respect to the whole, a simple view of emergence as irreducibility of the whole to the parts is mistaken.
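The dynamics just described can be sketched in a few lines. This is a minimal hypothetical reconstruction, not Schelling’s original specification: the grid size, agent count, 30% tolerance threshold and the rule “jump to a random empty cell” are illustrative choices.

```python
import random

def schelling(width=20, height=20, n_agents=300, threshold=0.3, steps=100, seed=0):
    """Minimal Schelling-style sketch: an agent moves when the fraction
    of like-coloured neighbours falls below `threshold`."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(width) for y in range(height)]
    occupied = dict(zip(rng.sample(cells, n_agents),
                        [rng.choice(("red", "green")) for _ in range(n_agents)]))

    def unhappy(pos):
        x, y = pos
        neigh = [occupied[(x + dx, y + dy)]
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0) and (x + dx, y + dy) in occupied]
        if not neigh:
            return False
        return sum(c == occupied[pos] for c in neigh) / len(neigh) < threshold

    for _ in range(steps):
        movers = [p for p in occupied if unhappy(p)]
        empties = [c for c in cells if c not in occupied]
        for p in movers:
            if not empties:
                break
            new = rng.choice(empties)       # relocate to a random empty cell
            empties.remove(new)
            empties.append(p)               # the vacated cell becomes empty
            occupied[new] = occupied.pop(p)
    return occupied

def mean_like_fraction(occupied):
    """Average fraction of like-coloured neighbours: a crude segregation index."""
    fracs = []
    for (x, y), colour in occupied.items():
        neigh = [occupied[(x + dx, y + dy)]
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0) and (x + dx, y + dy) in occupied]
        if neigh:
            fracs.append(sum(c == colour for c in neigh) / len(neigh))
    return sum(fracs) / len(fracs)
```

Note that “join the group” appears nowhere in the code: clusters show up only as a rising like-neighbour fraction over the run, while the agents occupying any given cluster keep changing.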
Philosopher William Wimsatt (1997) defined emergence as the “failure of aggregativity”. The main issue then is to provide criteria of aggregativity – which means tackling the issue of emergence the other way round. Wimsatt’s criteria for failure of aggregativity are a sophisticated formulation of what is happening when we say that we cannot reduce the properties of the whole to those of the parts. The criteria are: invariance under substitution of parts; qualitative similarity under addition or subtraction of parts; invariance under decomposition and re-aggregation of parts; absence of cooperative/inhibitory interactions. These are criteria of invariance; they thereby take into account the case of parts which change and alternate within a whole, as in the segregation model. However, it seems that, except mass, almost nothing is genuinely aggregative, that is, satisfies all these invariance criteria. This is clearly a problem for the combinatorial view. Emergence should surely be ascribed to fewer properties than “everything except mass”; it therefore requires an additional criterion which such an analysis does not provide. That is why we are left with the idea that emergence comes in degrees6. But on such a view the word “emergence” is quite superfluous – it would be enough to talk of degrees of aggregativity; the concept of emergence can only have a meaning with an additional criterion according to which emergence is more than a mere lack of aggregativity, and precisely this cannot be provided by the combinatorial view.
Actually, emergence is supposed to encompass several characteristics: unpredictability, novelty, irreducibility (Klee (1984), Silberstein (2002), O’Connor (1994), Crane (2001), Chalmers (2006), Seager (2005), and Humphreys (1997) largely concur on these characteristics). Many add “downward causation”, but this is more controversial. Irreducibility, understood as irreducibility of the properties of the whole to the properties of the parts, now seems quite trivial given the previous considerations, and too frequent to single out anything as “emergence”7. Concerning novelty, since the properties of a whole are almost always novel with respect to the properties of its parts – think of colour, or even volume – the real issue is: which novelty should count as emergent? We are left once again with no objective criterion. “Novel” most often means what has no name yet in our language (Epstein 1999). Hence this unavoidable characteristic leads to a widely shared conclusion: if emergence has any meaning, it is restricted to epistemological emergence, namely emergence relative to a set of theories and cognitive abilities – perhaps with the exception of the special case of qualia (Chalmers 2006, Crane 2001, Seager 2005, O’Connor 1994). Generally, most authors oppose epistemological to ontological emergence (the latter being in the real world, the former being defined by the weakness of our analytical or theorizing abilities), and most would conclude that the concept of emergence is epistemological only. A major argument for this conclusion is that, as the British emergentists’ example of water reminds us, what now seems emergent is emergent only relative to our theories, and nothing precludes that a more sophisticated theory could later explain how – to stay within the framework of combinatorial emergence – the properties of the whole result from the properties of the parts, or are simply conditional properties of the parts, now actualized.
Another argument starts from the fact that what is real must have causal powers: if emergent properties emerge upon some bases, and are not transcendent, they receive their causal powers from those of their bases; so they have no such powers of their own, and thus no proper ontological standing. Kim’s exclusion and overdetermination arguments provide the most developed form of this argument.
The rest of this chapter explores a concept of emergence other than the combinatorial one; I show that it is immune both to the triviality problem revealed by Wimsatt’s non-aggregativity criteria, and to the usual verdict that emergence is merely epistemological and emergent properties epiphenomenal.
Within the framework of computer simulations, what Bedau (1997) calls “weak emergence”8 has been defined. According to this criterion, a state in a computational process is weakly emergent if there is no shortcut to reach it except by running the simulation (the “incompressibility criterion of emergence” – see Huneman 2008b, Humphreys 2008, Bedau 2008, Hovda 2008). Among the four mentioned connotations of emergence (unpredictability, irreducibility, novelty, downward causation), this approach starts from the notion of unpredictability.
Such an approach bypasses the question of the cognitive subjectivity that attaches to the novelty problem in the former approach, because it is based on a computational property of algorithmic models. That is why it gives us a major clue toward a notion of emergence which would be, if not ontological, at least objective in the same way as the conceptual truths of mathematics are objective, independent of our cognitive capacities or epistemic choices.
Yet one could object that our criterion of incompressibility is only provisional, because we cannot claim that in a remote future, with increased computational capacities, we will still be unable to find analytical shortcuts reaching the final state faster than the simulation does. However, here is a sketch of a refutation of this objection. I develop some arguments in favor of the objectivity of computational criteria on the basis of Buss et al. (1992). The basic idea consists in building a set of logical automata whose values change according to a global rule R. Each automaton transforms the value of its cells according to an input 0 or 1. Applying the global rule R depends upon the numbers of each value (q1, q2, …) present in the set of automata at step n; the input function, which then determines the input of all automata at step n+1, is determined by R. For this reason the system is perfectly deterministic.
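The construction just described can be sketched as follows. The particular two-state transition table and the majority rule below are illustrative assumptions, not the specific rules Buss et al. analyze; what matters is the shape of the system: a tally of current values feeds a global rule R, whose output bit is broadcast to every automaton.

```python
def step(states, delta, global_rule):
    """One synchronous update of a coupled-automata system: the global
    rule maps the tally of current values to one input bit, which is
    then fed to every automaton's transition table `delta`."""
    counts = {}
    for s in states:
        counts[s] = counts.get(s, 0) + 1
    bit = global_rule(counts)              # same input for all automata
    return [delta[(s, bit)] for s in states]

# Hypothetical example: two values q1, q2; R emits 1 iff q1 is in the majority;
# on input 1 every automaton flips its value, on input 0 it keeps it.
delta = {("q1", 0): "q1", ("q1", 1): "q2",
         ("q2", 0): "q2", ("q2", 1): "q1"}
rule = lambda counts: 1 if counts.get("q1", 0) > counts.get("q2", 0) else 0

states = ["q1", "q1", "q1", "q2"]
states = step(states, delta, rule)   # q1 majority, so input 1: all flip
```

Note that nothing here is stochastic: given `delta`, `rule` and an initial configuration, every later configuration is fixed, which is exactly why a prediction-complexity result about such a system bears on determinism rather than on chance.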
For a class of rules, it can be shown that the problem of predicting the state of the automata set at an arbitrarily remote time T is PSPACE-complete (see Box 1). This result illustrates the fact that some computational devices are objectively incompressible. As the authors write: “If the prediction problem is PSPACE complete, this would mean essentially that the system is not easily predictable, and that there seems to be no better prediction method other than simulation” (Buss et al. 1992, 526). Even with infinite cognitive capacities, there would be a real difference between prediction problems which are PSPACE-complete and the others; therefore the computational definition of emergence is objective. Weak emergence, so defined as inaccessibility except via simulation, is then not trivial since, in this context, all global rules which are constant-free are computable in polynomial time, which draws a clear line between weakly emergent cases and the others.
2. Causation and computational emergence.
The present approach starts with a concept of emergence in order to show its coherence and plausibility. A further issue is then to decide whether there exist things in the real world which fall under this concept – that is, such that an accurate model of the phenomenon would display the properties of computational emergence. It is conceivable, for now, that there are none, or that we do not know whether the current models which speak for the existence of emergent properties are accurate enough. What has been shown so far is that with incompressibility one has a non-trivial, objective concept of emergence. Because I am only concerned here with the meaning of emergence and not with its actuality, I rely on the formal properties of simulations such as cellular automata or genetic algorithms. We cannot rely solely on them to find instances of the concept of emergence in the world, but here they allow us to construe and justify a proper concept of emergence.
Such a concept, starting from the idea of unpredictability, includes the notion of irreducibility. I will now show that the notion of novel order can also be included in the intuitive notion of emergence. To this end, I first show that the computational concept of emergence has a causal dimension, so that it is not a merely formal notion, whatever the degree to which the concept is instantiated in the real world. From there, I show (2.3) that the connotation of novel order is likely to be met by a subclass of systems displaying computational emergence. (This section surveys arguments presented in Huneman 2008b.) The last section (3) will give some clues regarding the issue of finding real-world instantiations of this concept of computational emergence.
2.1. Causation and simulations.
First, we must answer the objection that starting from the context of simulations to conceive of emergence compels one to leave aside what matters most, namely that one calls “emergent” real processes, which thereby involve some causation, and possibly raise the issue of downward causation – that is, emergent properties of entities (e.g. mental states, fads, standing ovations) causing backward effects in their bases (brain states, agents, individual spectators). Peter Corning states this objection very clearly, criticizing in general approaches to emergence that rely on computer science, such as Holland’s views (Emergence, 1998), based on a study of genetic algorithms: “Consider Holland’s chess analogy. Rules or laws have no causal efficacy; they do not in fact “generate anything”. They serve merely to describe regularities and consistent relationships in nature. These patterns may be very illuminating and important, but the underlying causal agencies must be separately specified (though often they are not).” (Corning 2002, 26).
Yet simulations actually can include a dimension of causation. Let’s first take cellular automata. A cellular automaton is a set of cells, each of which can be in one of several possible states, the state of each cell C at time n+1 being determined by its state at time n, through a rule which assigns a state to C at n+1 according to the states of the neighboring cells of C at n. Such a system is wholly deterministic.
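This deterministic update can be made concrete in the simplest (one-dimensional, two-state, radius-1) case. The Wolfram-style numbering of the 256 possible rules is a standard convention; rule 110 is chosen here only as an example, and the wrap-around boundary is an illustrative simplification.

```python
def ca_step(row, rule):
    """One step of an elementary (1-D, two-state, radius-1) cellular
    automaton. `rule` is a Wolfram rule number (0-255); each cell's next
    state is looked up from the 3-bit neighbourhood (left, centre, right).
    Boundaries wrap around."""
    n = len(row)
    table = [(rule >> i) & 1 for i in range(8)]   # bit i answers neighbourhood i
    return [table[(row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]]
            for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]   # a single live cell
for _ in range(3):
    row = ca_step(row, 110)
```

Every later configuration is a function of the initial one and the rule number alone: that is the sense in which the system is “wholly deterministic”.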
I argue that there are relations of causation within simulations, given by the specifications of properties at successive times in the simulation: some properties of a cellular automaton at time n+1 have as their sole cause, against the background of the rules, properties of the automaton at time n. The argument uses the counterfactual concept of causation, first elaborated by Lewis (1973) and refined since (e.g. Hall and Paul 2005). According to this concept, A causes B iff, had A not occurred, B would not have occurred. (I leave aside some subtle distinctions which aim at excluding obvious counterexamples to this rough formulation of causation.)
Since the rules of a cellular automaton are such that several neighborhoods of cell C(i, n) yield the same state for C(i, n+1), one cannot say that if there had not been the same neighborhood state C(i−j, …, i+j; n) there would not have been C(i, n+1). However, there are properties which give rise to a counterfactual dependence between their instantiations at n and at n+1, as sketched in Box 2. Therefore, characterizing a class of simulations as computationally emergent should involve specifying this class in terms of a specific causal pattern.
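The counterfactual reading can be tested directly: since a CA is deterministic, one can rerun it with a single cell altered and see which later states depend on that intervention. A sketch using the simple XOR rule (Wolfram’s rule 90), chosen purely for illustration:

```python
def xor_ca_step(row):
    """One step of rule 90: each cell becomes the XOR of its two
    neighbours (wrap-around boundaries). Fully deterministic."""
    n = len(row)
    return [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]

def run(row, steps):
    for _ in range(steps):
        row = xor_ca_step(row)
    return row

actual = [0, 0, 1, 0, 0, 0, 0, 0]
counterfactual = actual.copy()
counterfactual[2] = 0                 # intervene: remove the seed cell

a = run(actual, 3)
c = run(counterfactual, 3)
changed = [i for i in range(8) if a[i] != c[i]]   # downstream cells that depend on the seed
```

Had the seed cell not been on at step 0, several cells three steps later would have been off: exactly the counterfactual dependence between instantiations at n and at n+k invoked in the text.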
2.2. Causation and incompressibility: emergence as a break in causal explanation.
Those counterfactual correlations belong to the whole set of cellular automata. When there are emergent properties, a specific causal singularity obtains. Briefly said10, when there is emergence, the causal relationship between two successive states of the system (A_i(n) and A_i(n+1), for all i) can never be traced back to a global law of the system (for example a law for all A(n)). For a cellular automaton, one can go from a_j(n) (with j varying from i−p to i+p, p being defined by the rules of the CA) to a_i(n+1), but not in general, in a nomothetic way, from A(n) (the set of states a_j(n)) to A(n+1), still less to some remote state A(m). This could be a way to make sense, from the viewpoint of computational emergence, of what Wimsatt called failure of aggregativity, because any aggregative system is such that we can go in a somewhat continuous way from the local to the global, and write a global law describing the process.
In general, for any system, causal explanations can proceed in two fashions – either forward on the basis of the elements (forward-local), or backward from the whole (backward-global). For a thrown stone, one could either write the position of the stone at each instant along a trajectory given by the law of gravitation, or compute the position, instant by instant, from its prior position. This problem in dynamics is such that both approaches coincide, because the trajectory is integrable. The coincidence between causal explanations means that the step-by-step explanation, represented by a running cellular automaton, and the explanation by a rule, which represents the jump from the initial state to step n of the automaton and is given by the equations of motion, do coincide. When there is computational emergence, we lack such a coincidence: in this sense, the proper character of causation in simulations which represent emergent (in the computational sense) processes is indeed this fracture within causal explanation. If such a coincidence were always available in principle, then we would always have a rule to go from the local (explaining a_i(n+1) by the a_j(n), with j between i−k and i+k) to the global, and no process would count as computationally emergent.
2.3. Emergence, causal robustness and emergent order.
Nevertheless there is, among computationally emergent phenomena, a subclass of processes such that, beyond some step, regularities between sets of cells arise. Think, for example, of gliders and glider guns in Conway’s Game of Life. Here we can put it in counterfactual terms: if the set of states (defining a glider or a glider gun) had not been there (in this position), the glider (as a set of cell-states) would not be in the state in which one finds it.
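The glider case can be checked in a few lines. The sketch below implements the standard Game of Life update on a sparse set of live cells; the regularity in question is the glider’s well-known period-4 behaviour, after which the very same pattern reappears shifted one cell diagonally.

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Game of Life on an unbounded grid;
    `live` is a set of (x, y) coordinates of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbours; survival on 2 or 3
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# after 4 steps the glider reappears translated by (1, 1)
```

The rules mention only individual cells and neighbour counts; the glider, as a travelling five-cell configuration, figures nowhere in them, which is why its regular displacement counts as an emergent, coarse-grained regularity.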
These dependencies between partly global states of the simulation are not given with the initial rules, which concern only sets of individual cells. In the usual sense, these emerge in the course of the simulation, and can concern a mere transient state of it. But when they happen, they allow a much more simple explanation of the behavior of the simulation, than appealing to the rules. Why simple ? Because usually one has to specify the states of all cells in order to step by step explain the simulation, whereas here when a rule (as a transient counterfactual dependency) has emerged, it can be stated by specifying position of sets of states only. This is a coarse grained explanation (gliders flying, loop self-replicating in Langton’s loop improved by Sayama11), which of course omits details, but in the same time saves both computation time and information, and allows generalizations, like Epstein’s civil violence study (see below). Israeli and Goldenfeld (1996) have shown that most of the CA rules support, at some point, to be formulated as coarser grain rules, the sets of cells being then taken as cells, so that apparently incompressible rules can be translated, in a coarse grained description, into compressible rules. (Of course in many cases the coarse grained cell is what indeed emerges sensu the incompressibility criterion, in the simulation)
More technically, Hanson and Crutchfield (1993, 1997; Shalizi and Crutchfield 2001) developed a method for reading CAs in terms of “mechanics”: they identify patterns (named, by analogy with mechanics, lines, points, particles, etc.) whose correlations are such that they underwrite the running of the whole CA (Fig. 2). Here we clearly get a causal lexicon which enables one to make sense of the intuition that emergent properties involve another kind of causation. This novel causality is, first of all, counterfactual regularity between sets of cells (in CAs) or agents (in ABMs12).
Fig. 2. Filter (Crutchfield and Hanson 1993) intended to reveal “mechanical” entities (Greek letters in the bottom diagram) which causally explain the running of the CA.
Robust emergence – in the sense of the subclass of emergent processes satisfying the causal-robustness clause of section 2.3 – is instantiated by numerous models in the empirical sciences. Take the study of fads (Tassier 2004). Here one can see clear relations of causality between the states of fads at different moments, which define a general pattern of fad processes. Depending on the parameters, the agents in these multi-agent models either separate into clusters, each of which adopts a given fashion, or display a behavior in which cycles of fads become visible.
In the same way, the emergence of local norms (Burke et al. 2006) appears as a computationally incompressible process leading to specific patterns, possibly alternating or fixed. The whole system satisfies the computational emergence criterion because one cannot derive the result analytically. The emergence of traffic jams (Nagel and Rasmussen 1994) likewise displays patterns whose process of arising, modeled by a CA, is incompressible. Moreover, once a traffic jam has appeared, it is likely to show causal relationships between itself and either other traffic jams or some state variables of the system, such as the average speed of cars. Finally, lipid membranes (Rasmussen et al. 1995) satisfy the same criteria concerning their creation, the authors describing as a discrepancy between two languages the same difference that is described here as a gap between local rules and the lack of their immediate translation into global rules in the case of emergent processes.
With Epstein’s work on civil violence (Epstein 2002) one sees a last kind of counterfactual dependence13. In these models Epstein defines agents representing social individuals, and studies their propensity to rebel. The model implements quite intuitive rules, according to which an agent’s (violently) acting out against the State depends both upon the perceived risk and upon the frequency of already acting-out agents in her neighborhood. This is a typical multi-agent model. Two parameters are defined: the level of oppression, and the level of legitimacy of the government. By varying these parameters, and multiplying simulations, Epstein can exhibit numerous counterfactual dependencies between the values of the parameters (or their variations) and the global outcomes of the simulations (namely the frequency and the generalization, or not, of rebellious behavior). Here the dependency does not hold between two moments of a simulation, but – on the basis of a large set of simulations – between ranges of parameter values and classes of global outcomes; the latter being generalizations of the former, we get (counterfactual) causation at an upper level. One of the most striking results is that, when the legitimacy of a government drops, this can increase the probability of a violent uprising – yet the relevant variable is not the size of the drop but its speed: a small but quick drop of legitimacy entails an uprising more easily than a much larger but slower one.

A last concern could be the following. I have shown that computational emergence is objective, that it involves a specific kind of causal explanation, and that it allows one to define a subclass of emergent phenomena displaying causal relations such that one recovers the idea of novel and spontaneous order, proper to our intuitive conception of emergence.
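Epstein’s activation rule, described above, can be caricatured in a mean-field sketch. Every numerical choice below (the threshold, the risk term, the “safety in numbers” form) is an illustrative assumption and not Epstein’s calibration; the point is only to exhibit a counterfactual dependence between the legitimacy parameter and the global outcome.

```python
import random

def civil_violence_sweep(legitimacy, n_agents=400, threshold=0.1,
                         risk_aversion=0.5, rounds=50, seed=0):
    """Very simplified, mean-field caricature of an Epstein-style rule:
    an agent turns 'active' when grievance minus perceived risk exceeds
    a threshold. Grievance = hardship * (1 - legitimacy); perceived risk
    shrinks as more agents are already active. Returns the final fraction
    of active agents. All numbers here are illustrative assumptions."""
    rng = random.Random(seed)
    hardship = [rng.random() for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(rounds):
        frac_active = sum(active) / n_agents
        for i in range(n_agents):
            grievance = hardship[i] * (1 - legitimacy)
            risk = risk_aversion * (1 - frac_active)   # safety in numbers
            active[i] = grievance - risk > threshold
    return sum(active) / n_agents
```

Sweeping `legitimacy` across many runs reproduces, in miniature, the kind of upper-level counterfactual described in the text: low legitimacy settings cascade into a generalized uprising, high legitimacy settings do not.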
Yet someone could always raise the following objection: even if such a notion is formally correct, when one asks what instantiates it in the real world, phenomena such as the ones described instantiate it only on condition that we accept the models which describe them. In other words, even if this concept of emergence is ontological, it is not buffered against the possibility that nothing instantiates it in the real world, because every model which leads us to conclude that it is instantiated may one day be superseded by a model with no emergence. Many arguments have been leveled against this objection (e.g. Bedau 2008, Humphreys 1997). I have defended (Huneman 2010) the idea that robustness analysis, as usually practiced by scientists, provides a way to conclude positively regarding the genuinely emergent character of some phenomena, independently of the model.
The idea of robustness analysis is the following (Levins 1966, Weisberg 2006). Building a model implies choosing parameters and ascribing them values. The behaviors and general outcomes of the model can vary with those values, or with the choice of parameters. In Epstein’s model of civil uprising, for example, education is not a parameter; one could add it, and then check how the model behaves when the education level varies. We say that a model is robust if its qualitative behavior does not change when the values of the parameters vary, or when their number changes (which is basically the same thing, if one thinks of a neglected parameter as simply having the value 0). For example, if education changed nothing in Epstein’s results, this would speak for the robustness of the model (across the known parameters).
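The procedure can be sketched generically: vary each parameter around a baseline and check whether a chosen qualitative outcome survives. The toy model and the `education` parameter below are hypothetical stand-ins for the check just described, not an implementation of any published analysis.

```python
def robust(model, param_grid, qualitative, baseline_params):
    """Robustness-analysis sketch: the model counts as robust (for a given
    qualitative outcome) if that outcome is unchanged across every tested
    parameter setting in `param_grid`."""
    baseline = qualitative(model(**baseline_params))
    return all(qualitative(model(**{**baseline_params, k: v})) == baseline
               for k, values in param_grid.items() for v in values)

# Hypothetical toy model: uprising share depends on oppression, while the
# extra 'education' parameter is, by construction, causally idle --
# mimicking the check described in the text.
def toy_model(oppression=0.8, education=0.0):
    return {"uprising_share": 0.9 if oppression > 0.5 else 0.05,
            "education": education}

is_robust = robust(
    toy_model,
    {"education": [0.0, 0.5, 1.0]},
    qualitative=lambda out: out["uprising_share"] > 0.5,
    baseline_params={"oppression": 0.8, "education": 0.0},
)
# varying education leaves the qualitative outcome (uprising vs no uprising) intact
```

By contrast, sweeping `oppression` across the critical value would flip the qualitative outcome, so the model would not count as robust along that dimension; robustness is always relative to the set of parameters actually tested.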
Let’s assume that we have a model which satisfies the two computational emergence clauses and the robustness clause, and which has a number of parameters. Let’s further assume that, testing most of the parameters, one concludes that the model is robust. Several arguments then support the idea that such a model describes the causal structure of the phenomenon: an argument of the “inference to the best explanation” type (Lipton 1991) could be adequate to reach this conclusion. From there, such a model, describing the causal structure of reality and satisfying the criteria for emergence, makes it legitimate to say that the process at stake presents emergent properties. In other words, robustness analysis of the models is the last step which allows one to conclude, in some given cases, from the satisfaction of formal emergence criteria to the reality of emergent processes or properties.
This chapter has explored various aspects of a theory of emergence based on the idea of computational emergence. The main points are the following:
– Emphasis on a subclass of processes which are computationally emergent and moreover display counterfactual dependencies between sets of cells or agents – the world of numerical simulations provides numerous examples, from Langton’s loop to Holland’s Echo.
– Quest for arguments which would allow one to conclude from the emergent character of a modeled process to emergence as a property of the modeled reality. Robustness analysis coupled with an “inference to the best explanation” type of argument seems promising here. In any case, the main result of this approach consists in showing that one can start by defining a concept of emergence which is quite rigorous and relatively non-epistemic – in other words, which is not just another name for the boundaries of cognition. From there, the question of what instantiates such a concept, and of how we can prove that it is indeed instantiated in the world, is undoubtedly more difficult, and inquiry into it is only beginning.
Anderson, P.W. (1972), "More is Different: Broken Symmetry and the Nature of the Hierarchical Structure of Science", Science 177: 393–396
Atay, F., and J. Jost (2004), “On the Emergence of Complex Systems on the Basis of the Coordination of Complex Behaviours of Their Elements”, Complexity 10 (1): 17–22.
Bar-Yam, Y. (2004), “A Mathematical Theory of Strong Emergence Using Multiscale Variety”, Complexity 9 (6): 15–24.
Bechtel, W., and Richardson, R. (1992), “Emergent Phenomena and Complex Systems”, in Beckermann, A., Flohr, H., and Kim, J. (eds.), Emergence or Reduction?, Berlin: de Gruyter, 257–287.
Bedau, M. (1997). “Weak Emergence”. In J. Tomberlin (ed.), Philosophical Perspectives: Mind, Causation, and World, vol. 11, pp. 375-399. Oxford: Blackwell Publishers.
Bedau, M. (2008). Is weak emergence just in the mind?. Minds & Machines, 18, 443–459.
Bedau M., Humphreys P. (2008) Emergence. Contemporary Readings in Philosophy and Science, Cambridge: MIT Press.
Burke, M., G. Furnier, and K. Prasad (2006), “The Emergence of Local Norms in Networks”, Complexity 11 (5): 65–83.
Buss, S., C. Papadimitriou, and J. Tsitsiklis (1992), “On the Predictability of Coupled Automata: An Allegory about Chaos”, Complex Systems 5: 525–539.
Chalmers, D. (2006), “Strong and Weak Emergence”, in P. Clayton and P. Davies (eds.), The Re-emergence of Emergence. Oxford: Oxford University Press, 244–256.
Churchland, P. M. (1981). Eliminative Materialism and the Propositional Attitudes, Journal of Philosophy 78: 67-90
Corning P. (2002), “The re-emergence of “emergence”: a venerable concept in search of a theory”, Complexity, 7 (6): 18-30
Crane, T. (2001), “The Significance of Emergence”, in C. Gillett and B. Loewer (eds.), Physicalism and Its Discontents. Cambridge: Cambridge University Press, 207–224.
Crutchfield J. (1994) “The Calculi of emergence.” Physica D, special issue on the Proceedings of the Oji International Seminar Complex Systems — from Complex Dynamics to Artificial Reality.
Crutchfield, J., and C. Shalizi (2001), “Pattern Discovery and Computational Mechanics”, arXiv:cs/0001027v1.
Crutchfield, J., and J. Hanson (1993), “Turbulent Pattern Bases for Cellular Automata”, Physica D 69: 279–301.
Epstein, J. (2007), “Agent-Based Computational Models and Generative Social Science”, in Generative Social Science: Studies in Agent-Based Computational Modeling. Princeton, NJ: Princeton University Press, 4–46.
Epstein J. (2002) “Modeling civil violence”. PNAS, 99, 3: 7243–7250
Gilbert, N. (2002), “Varieties of Emergence”. Paper presented at the Social Agents: Ecology, Exchange, and Evolution Conference, Chicago (http://www.soc.surrey.ac.uk/staff/ngilbert/ngpub/paper148_NG.pdf)
Hall, N., and Paul, D. (2005), Causation and Counterfactuals. Cambridge: MIT Press.
Hanson, J., and J. Crutchfield (1997), “Computational Mechanics of Cellular Automata: An Example”, Physica D 103: 169–189.