Cognition, Complexity, Dynamics
Welcome to the discussion group wiki.
Mailing list: subscribe by sending an email with no subject and no body to apophenia-l-subscribe@indiana.edu
To unsubscribe, follow the same instructions but add an "un" in the email address (apophenia-l-unsubscribe@indiana.edu)
Spring 2013
April 10
Tononi, Giulio, and Gerald M. Edelman. (1998). Consciousness and complexity. Science, 282(5395), 1846-1851. PDF
March 27
Smith, L.B., & Thelen, E. (2003). Development as a dynamic system. TRENDS in Cognitive Sciences, Vol 7(8), 343-348. PDF
March 20
Núñez, R. E. (2008). Mathematics, the Ultimate Challenge to Embodiment: Truth and the Grounding of Axiomatic Systems. In P. Calvo & T. Gomila (Eds.), Handbook of Cognitive Science: An Embodied Approach, pp. 333-353. PDF
March 6
Weber, A. & Varela, FJ. (2002). Life after Kant: Natural purposes and the autopoietic foundations of biological individuality. Phenomenology and the Cognitive Sciences 1: 97–125. PDF
February 27
Polani, D. (2008). Foundations and Formalizations of Self-organization. Advanced Information and Knowledge Processing, pp 19-37. PDF
February 20
W. Ross Ashby. (1962). Principles of the self-organizing system. Principles of Self-Organization: Transactions of the University of Illinois Symposium, H. Von Foerster and G. W. Zopf, Jr. (eds.), pp. 255-278. PDF
February 13
Beggs, JM. (2008). The criticality hypothesis: how local cortical networks might optimize information processing. Phil. Trans. R. Soc. A, 366, 329–343. PDF
February 6
Pellicano, E. & Burr, D. (2012). When the world becomes ‘too real’: a Bayesian explanation of autistic perception. Trends in Cognitive Sciences, 1-7. PDF
January 30
Froese, T. & Di Paolo, EA. (2011). The enactive approach: Theoretical sketches from cell to society. Pragmatics & Cognition, Vol 19(1), 1-36. PDF
January 23
Gentner, D. & Markman, AB. (1997). Structure mapping in analogy and similarity. American Psychologist, Vol 52(1), 45-56. PDF
January 16
Clark, A. & Chalmers, DJ. (1998). The extended mind. Analysis 58: 7-19. PDF
Fall 2012
November 15
Werndl, C. (2009). Deterministic versus indeterministic descriptions: not that different after all? In: A. Hieke and H. Leitgeb (Eds.), Reduction, Abstraction, Analysis, Proceedings of the 31st International Ludwig Wittgenstein-Symposium. Ontos, pp. 63-78. PDF
November 8
Maturana, HR. & Varela, FJ. (1973). Autopoiesis: The Organization of the Living. As Reprinted in Autopoiesis and Cognition: The Realization of the Living pp: 73-93 PDF
November 1
N. Bertschinger et al. (2008). Autonomy: An information theoretic perspective. BioSystems, vol. 91, pp. 331-345. PDF
October 25
Maturana, HR. (1970). Biology of Cognition. As Reprinted in Autopoiesis and Cognition: The Realization of the Living pp: 15-58 PDF
October 18
Maturana, HR. (1970). Biology of Cognition. As Reprinted in Autopoiesis and Cognition: The Realization of the Living pp: 1-14 PDF
October 11
Grossberg, S. (1987). Competitive Learning: From Interactive Activation to Adaptive Resonance. Cognitive Science, 11, 23-63. PDF
October 4
Grossberg, S. (1969). Embedding fields: A theory of learning with physiological implications. Journal of Mathematical Psychology, 6, 209-239. PDF
September 27
Townsend, J. T., & Busemeyer, J. R. (1989). Approach-avoidance: Return to dynamic decision behavior. In Chizuko Izawa (Ed.), Current Issues in Cognitive Processes: The Tulane Flowerree Symposium on Cognition. Hillsdale, NJ: Erlbaum Associates. PDF
September 20
Conant, R.C., & Ashby, W.R. (1970) Every Good Regulator of a System Must be a Model of that System. International Journal of Systems Science, No. 2, 89-97. PDF
September 13
Moreno, A., Ruiz-Mirazo, K., & Barandiaran X. (2009) The Impact of the Paradigm of Complexity on the Foundational Frameworks of Biology and Cognitive Science. In: Handbook of the Philosophy of Science, Vol. 10: Philosophy of Complex Systems. PDF
September 6
Di Paolo, E., Noble, J., & Bullock, S. (2000) Simulation Models as Opaque Thought Experiments. Artificial Life VII: The Seventh International Conference on the Simulation and Synthesis of Living Systems. PDF
Summer 2012
May 24
Von Foerster, H. (1970). Molecular Ethology: An Immodest Proposal for Semantic Clarification. In G. Ungar (Ed.), Molecular Mechanisms in Memory and Learning (pp. 213–248). New York: Plenum Press. PDF
May 14
Clark, A. (in press). Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science. Behavioral and Brain Sciences. PDF
Spring 2012
April 19
Dupuy, J. (2009). On the Origins of Cognitive Science: The Mechanization of Mind, pp: 113-143.
April 12
Dupuy, J. (2009). On the Origins of Cognitive Science: The Mechanization of Mind, pp: 70-112.
April 5
Dupuy, J. (2009). On the Origins of Cognitive Science: The Mechanization of Mind, pp: 43-69.
March 29
Dupuy, J. (2009). On the Origins of Cognitive Science: The Mechanization of Mind, pp: 27-42.
March 22
Dupuy, J. (2009). On the Origins of Cognitive Science: The Mechanization of Mind, pp: 3-26.
March 1
Shalizi, C. R. (2009). Dynamics of Bayesian updating with dependent data and misspecified models. Electronic Journal of Statistics, 3, 1039-1074. PDF (for a summary see Shalizi's related blog post)
February 23
Hopfield, J. J., & Brody, C. D. (2001). What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration. Proceedings of the National Academy of Sciences, 98 (3), 1282-1287. PDF
February 9
Breiman, L. (2001). Statistical modeling: The two cultures. Statistical Science, 16 (3), 199-231. PDF
January 26
Pecevski, D., Buesing, L., & Maass, W. (2011). Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons. PLoS Computational Biology, 7 (12), 1-25. PDF
January 19
Reshef, D. N., et al. (2011). Detecting novel associations in large data sets. Science, 334, 1518-1524. PDF
Fall 2011
December 7
Berlyne, D. E. (1975). Behaviourism? Cognitive theory? Humanistic psychology? - To Hull with them all! Canadian Psychological Review, 16 (2), 69-80. PDF
November 30
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2 (8), 696-701. PDF
November 16
Anderson, B. (2011). There is no such thing as attention. Frontiers in Psychology, 2, 1-8. PDF
November 9
Pearl, J. (2009). Causal inference in statistics: An overview. Statistics Surveys, 3, 96-146. PDF
November 2
Kalish, C. W., Rogers, T. T., Lang, J., & Zhu, X. (2011). Can semi-supervised learning explain incorrect beliefs about categories? Cognition, 120, 106-118. PDF
October 26
Freyd, J. J. (1993). Five hunches about perceptual processes and dynamic representations. In D. Meyer and S. Kornblum (Eds.), Attention and Performance XIV: Synergies in Experimental Psychology, Artificial Intelligence, and Cognitive Neuroscience (pp. 99-119). Cambridge, MA: MIT Press. PDF
October 19
Glymour, C. (1998). What went wrong? Reflections on science by observation and The Bell Curve. Philosophy of Science, 65 (1), 1-32. PDF
October 12
Gershman, S. J., & Niv, Y. (2010). Learning latent structure: Carving nature at its joints. Current Opinion in Neurobiology, 20, 251-256. PDF
October 5
Ramscar, M., Yarlett, D., Dye, M., Denny, K., & Thorpe, K. (2010). The effects of feature-label-order and their implications for symbolic learning. Cognitive Science, 34, 909-957. PDF
September 28
Shepard, R. N. (1984). Ecological constraints on internal representation: Resonant kinematics of perceiving, imagining, thinking, and dreaming. Psychological Review, 91 (4), 417-447. PDF
September 21
Frankenhuis, W. E., & Panchanathan, K. (2011). Balancing sampling and specialization: An adaptationist model of incremental development. Proceedings of the Royal Society B. PDF
September 14
Christiansen, M. H., & Chater, N. (2008). Language as shaped by the brain. Behavioral and Brain Sciences, 31, 489-509. PDF
Spring 2011
April 22
Berlyne, D. E. (1957). Uncertainty and conflict: A point of contact between information-theory and behavior-theory concepts. Psychological Review, 64 (6), 329-339. PDF (errata)
April 8
Chatterjee, A. (2010). Disembodying cognition. Language and Cognition, 2 (1), 79-119. PDF
April 1
Tononi, G., Sporns, O., & Edelman, G. M. (1996). A complexity measure for selective matching of signals by the brain. Proceedings of the National Academy of Sciences, 93, 3422-3427. PDF
March 25
Kemp, C., & Tenenbaum, J. B. (2008). The discovery of structural form. Proceedings of the National Academy of Sciences, 105 (31), 10687-10692. PDF
March 11
Linsker, R. (1990). Perceptual neural organization: Some approaches based on network models and information theory. Annual Review of Neuroscience, 13, 257-281. PDF
March 4
Tononi, G. (2008). Consciousness as Integrated Information: A Provisional Manifesto. Biological Bulletin, 215, 216-242. PDF
February 25 @ El Norteno
Settles, B. (2009). Active Learning Literature Survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison. PDF
February 11 @ El Norteno
Sporns, O. (2011). Networks of the Brain. MIT Press: Cambridge, MA.
Chapter 9, "Networks for Cognition". ZIP-compressed PDF
February 4 @ El Norteno
Sporns, O. (2011). Networks of the Brain. MIT Press: Cambridge, MA.
Chapter 7, "Economy, Efficiency, and Evolution". PDF
January 28 @ El Norteno
Moazzezi, R., & Dayan, P. (2008). Change-based inference for invariant discrimination. Network: Computation in Neural Systems, 19 (3), 236-252. PDF
Fall 2010
December 10 @ Runcible Spoon
Gold, E. M. (1967). Language identification in the limit. Information and Control, 10, 447-474. PDF
December 3 @ Runcible Spoon
Gallistel, C. R., Fairhurst, S., & Balsam, P. (2004). The learning curve: Implications of a quantitative analysis. Proceedings of the National Academy of Sciences, 101 (36), 13124-13131. PDF
November 12 @ Runcible Spoon
McClelland, J. L., et al. (2010). Letting structure emerge: Connectionist and dynamical systems approaches to understanding cognition. Trends in Cognitive Sciences, 14, 348-356. PDF
Griffiths, T. L., et al. (2010). Probabilistic models of cognition: Exploring representations and inductive biases. Trends in Cognitive Sciences, 14, 357-364. PDF
(Responses to both)
November 5 @ Runcible Spoon
Ashby, W. R. (1952/1960 [2nd ed.]). Design for a Brain. Wiley & Sons: New York, NY. Link
Chapter 16, "Adaptation in the Multistable System". PDF
Chapter 17, "Ancillary Regulations". PDF
Chapter 18, "Amplifying Adaptation". PDF
October 29 @ Runcible Spoon
Ashby, W. R. (1952/1960 [2nd ed.]). Design for a Brain. Wiley & Sons: New York, NY. Link
Chapter 14, "Repetitive Stimuli and Habituation". PDF
Chapter 15, "Adaptation in Iterated and Serial Systems". PDF
October 22 @ Runcible Spoon
Ashby, W. R. (1952/1960 [2nd ed.]). Design for a Brain. Wiley & Sons: New York, NY. Link
Chapter 12, "Temporary Independence". PDF
Chapter 13, "The System with Local Stabilities". PDF
October 15 @ Runcible Spoon
Ashby, W. R. (1952/1960 [2nd ed.]). Design for a Brain. Wiley & Sons: New York, NY. Link
Chapter 10, "The Recurrent Situation". PDF
Chapter 11, "The Fully-joined System". PDF
October 8 @ Runcible Spoon
Ashby, W. R. (1952). Design for a Brain. Wiley & Sons: New York, NY. Link
Chapter 5, "Adaptation as Stability". PDF
Chapter 6, "Parameters". PDF
Chapter 7, "Step-Functions". PDF
Chapter 8, "The Ultrastable System". PDF
Chapter 9, "Ultrastability in the Living Organism". PDF
(Optional references, Chapter 21, "Parameters", Chapter 22, "Step-Functions", Chapter 23, "The Ultrastable System".)
September 24 @ Runcible Spoon
Ashby, W. R. (1952). Design for a Brain. Wiley & Sons: New York, NY. Link
Chapter 3, "The Animal as Machine". PDF
Chapter 4, "Stability". PDF
(Optional reference, Chapter 20, "Stability" - PDF.)
September 17 @ Runcible Spoon
Ashby, W. R. (1952). Design for a Brain. Wiley & Sons: New York, NY. Link
Chapter 1, "The Problem". PDF
Chapter 2, "Dynamic Systems". PDF (Chapter 19, "The Absolute System", may be a useful reference for this chapter - PDF)
(Also, for fun, the preface and the table of contents.)
September 10 @ Runcible Spoon
ORGANIZATIONAL MEETING
Summer 2010
We meet Fridays at 6pm at different places.
August 13 @ Runcible Spoon
Pfeifer, R., M. Lungarella, et al. (2007). "Self-organization, embodiment, and biologically inspired robotics." Science 318(5853): 1088. PDF
July 23 @ Runcible Spoon
Williams, L., & Bargh, J. (2008). Experiencing physical warmth promotes interpersonal warmth. Science, 322(5901), 606. PDF
June 18 @ Runcible Spoon
Jaynes, E. (1957). Information theory and statistical mechanics. Physical Review, 108(2), 171-190. PDF
June 4 @ Runcible Spoon
Logothetis, N. (2008). What we can do and what we cannot do with fMRI. Nature, 453(7197), 869-878. PDF
Spring 2010
May 21 @ Firat's house
Mukamel, R., Ekstrom, A., Kaplan, J., Iacoboni, M., & Fried, I. (2010). Single-neuron responses in humans during execution and observation of actions. Current Biology. PDF
May 4
Mitchell, T., Shinkareva, S., Carlson, A., Chang, K., Malave, V., Mason, R., et al. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320(5880), 1191. PDF
April 20
Barsalou, L. (2008). "Grounded Cognition." Annu. Rev. Psychol 59: 617-645. PDF
April 6
Andres, M., E. Olivier, et al. (2008). "Actions, words, and numbers." Current Directions in Psychological Science 17(5): 313. PDF
March 23
Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience 11(2): 127-138. PDF
Feb 16
West, M., & King, A. (1987). Settling nature and nurture into an ontogenetic niche. Developmental Psychobiology, 20(5), 549-562. PDF
Jan 26
Tinbergen, N. (1963). On aims and methods of ethology. Zeitschrift für Tierpsychologie, 20, 410-433. PDF
Jan 12
Hutchins, Edwin. How a cockpit remembers its speeds. Cognitive Science, 19, 1995, 265-288. PDF
Fall 2009
Oct 16
Merleau-Ponty, Maurice. The Phenomenology of Perception, 1945: pp 60-83.
Oct 2
Merleau-Ponty, Maurice. The Phenomenology of Perception, 1945: pp i-30.
… a break …
Summer 2009
July 3
(@ Art's house)
Tsuda, I. (2001). Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. BBS, 24, 793-847. PDF
June 19
(@ Dan's house)
Núñez, R., & Lakoff, G. (2005). The cognitive foundations of mathematics: The role of conceptual metaphor. Handbook of Mathematical Cognition, pp. 109-124. PDF
Sfard, A. (1994). Reification as a birth of a metaphor. For the Learning of Mathematics, 14(1), 44-55. PDF
EXTRA:
Núñez, R. E. (2008). Mathematics, the Ultimate Challenge to Embodiment: Truth and the Grounding of Axiomatic Systems. In P. Calvo & T. Gomila (Eds.), Handbook of Cognitive Science: An Embodied Approach, pp. 333-353. PDF
June 5
(@ Firat's house)
RG Millikan, "Pushmi-Pullyu Representations", Philosophical Perspectives, Vol. 9, AI, Connectionism and Philosophical Psychology (1995), pp. 185-200 PDF
RG Millikan, "Styles of Rationality", In Rationality in Animals, M. Nudds and S. Hurley eds. (Oxford: Oxford University Press), (2006) PDF
EXTRA:
RG Millikan, "A common structure for concepts of individuals, stuffs, and real kinds: More Mama, more milk, and more mouse", Behavioral and Brain Sciences (1998). PDF
Spring 2009
We meet Wednesdays @ 8:30pm, at El Norteño (used to be Mondays)
May 22
(@ Rob's house)
BF Skinner, "Are theories of learning necessary?" Psychological Review, 57(4), 1950, 193-216. PDF
Optional: Tom Mitchell's "The Discipline of Machine Learning" PDF
May 6
Roger C. Conant, W. Ross Ashby, "Every Good Regulator of a System Must be a Model of that System", International Journal of Systems Science, 1970 PDF
Arturo Rosenblueth, Norbert Wiener and Julian Bigelow, "Behavior, Purpose, and Teleology", Philosophy of Science, vol. 10, 1943, 18-24 PDF
Daniel Dennett, "Real Patterns", Journal of Philosophy, 1991, 27-51 PDF
Apr 22
Francisco J Varela, Evan Thompson, Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience, 1993, Chapters 9-11.
Apr 15
Francisco J Varela, Evan Thompson, Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience, 1993, Chapter 8.
Apr 1
Francisco J Varela, Evan Thompson, Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience, 1993, Chapters 6, 7.
SPRING BREAK
[NOTICE: now on Wednesday!]
Mar 2
Kurt Koffka, "Perception: An introduction to the Gestalt-theorie", 1922. LINK
Feb 9
Vygotsky, Lev. Mind in Society: The Development of Higher Psychological Processes, 1930: ch 6-7. CH.6 LINK CH.7 LINK
Jan 26
Vygotsky, Lev. Mind in Society: The Development of Higher Psychological Processes, 1930: ch 1-2. LINK
WINTER BREAK
Fall 2008
Dec 4
L. Goldfarb, "Representational formalisms: what they are and why we haven't had any" PDF
Currently, the only discipline that has dealt with scientific representations—albeit non-structural ones—is mathematics (as distinct from logic). I suggest that it is this discipline, only vastly expanded based on a new, structural, foundation, that will also deal with structural representations. Logic (including computability theory) is not concerned with the issues of various representations useful in natural sciences. Artificial intelligence was supposed to address these issues but has, in fact, hardly advanced them at all.
How do we, then, approach the development of representational formalisms? It appears that the only reasonable starting point is the primordial point at which all of mathematics began, i.e. we should start with the generalization of the process of construction of natural numbers, replacing the identical structureless units, out of which numbers are built, by structural ones, each signifying an atomic “transforming” event.
…
Nov 13
A. S. Klyubin, D. Polani, C. L. Nehaniv, "Representations of Space and Time in the Maximization of Information Flow in the Perception-Action Loop", Neural Computation, vol. 19, 2007, pp. 2387-2432 PDF
Sensor evolution in nature aims at improving the acquisition of information from the environment and is intimately related with selection pressure toward adaptivity and robustness. Our work in the area indicates that information theory can be applied to the perception-action loop. This letter studies the perception-action loop of agents, which is modeled as a causal Bayesian network. Finite state automata are evolved as agent controllers in a simple virtual world to maximize information flow through the perception-action loop. The information flow maximization organizes the agent's behavior as well as its information processing. To gain more insight into the results, the evolved implicit representations of space and time are analyzed in an information-theoretic manner, which paves the way toward a principled and general understanding of the mechanisms guiding the evolution of sensors in nature and provides insights into the design of mechanisms for artificial sensor evolution.
Oct 23
R. Legenstein and W. Maass, "What makes a dynamical system computationally powerful?", New Directions in Statistical Signal Processing: From Systems to Brain, 2005 PDF
We review methods for estimating the computational capability of a complex dynamical system. The main examples that we discuss are models for cortical neural microcircuits with varying degrees of biological accuracy, in the context of online computations on complex input streams. We address in particular the question to what extent earlier results about the relationship between the edge of chaos and the computational power of dynamical systems in discrete time for off-line computing also apply to this case.
Oct 9
L. Floridi, "Open Problems in the Philosophy of Information", Metaphilosophy, vol. 35, Jul. 2004 PDF
The philosophy of information (PI) is a new area of research with its own field of investigation and methodology. This article, based on the Herbert A. Simon Lecture of Computing and Philosophy I gave at Carnegie Mellon University in 2001, analyses the eighteen principal open problems in PI. Section 1 introduces the analysis by outlining Herbert Simon's approach to PI. Section 2 discusses some methodological considerations about what counts as a good philosophical problem. The discussion centers on Hilbert's famous analysis of the central problems in mathematics. The rest of the article is devoted to the eighteen problems. These are organized into five sections: problems in the analysis of the concept of information, in semantics, in the study of intelligence, in the relation between information and nature, and in the investigation of values.
-Mike's gushing review of Floridi
Sep 25
W. Ross Ashby, "Principles of the self-organizing system," in Principles of Self-Organization: Transactions of the University of Illinois Symposium, H. Von Foerster and G. W. Zopf, Jr. (eds.), 1962, pp. 255-278. PDF
Links
People
Papers
Information Theory
N. Ay and D. Polani, “Information Flows in Causal Networks”, Advances in Complex Systems, vol. 11, 2008, p. 17 PDF
We introduce a notion of causal independence based on virtual intervention, which is a fundamental concept of the theory of causal networks. Causal independence allows for defining a measure for the strength of a causal effect. We call this information flow and compare it with known information flow measures such as transfer entropy.
Y. Bar-Yam, “Multiscale Complexity/Entropy”, Advances in Complex Systems, vol. 7, 2004, pp. 47-64 PDF
We discuss the role of scale dependence of entropy/complexity and its relationship to component interdependence. The complexity as a function of scale of observation is expressed in terms of subsystem entropies for a system having a description in terms of variables that have the same a-priori scale. The sum of the complexity over all scales is the same for any system with the same number of underlying degrees of freedom (variables), even though the complexity at specific scales differs due to the organization/interdependence of these degrees of freedom. This reflects a tradeoff of complexity at different scales of observation. Calculation of this complexity for a simple frustrated system reveals that it is possible for the complexity to be negative. This is consistent with the possibility that observations of a system that include some errors may actually cause, on average, negative knowledge, i.e. incorrect expectations.
N. Bertschinger et al., “Autonomy: An information theoretic perspective”, BioSystems, vol. 91, 2008, pp. 331-345 PDF
We present a tentative proposal for a quantitative measure of autonomy. This is something that, surprisingly, seems to be missing from the literature, even though autonomy is considered to be a basic concept in many disciplines, including artificial life.
We work in an information theoretic setting for which the distinction between system and environment is the starting point. As a measure for autonomy, we propose the conditional mutual information between consecutive states of the system conditioned on the history of the environment. This works well when the system cannot influence the environment at all. When, in contrast, the system has full control over its environment, we should instead neglect the environment history and simply take the mutual information between consecutive system states as a measure of autonomy.
In the case of mutual interaction between system and environment there remains an ambiguity. If the interaction structure of the system is known, we define a “causal” autonomy measure which allows this ambiguity to be resolved.
Moreover, our analysis reveals some subtle facets of the concept of autonomy, in particular with respect to the seemingly innocent system-environment distinction we took for granted and raise the issue of the attribution of control, i.e. the responsibility for observed effects. To further explore these issues, we evaluate our autonomy measure for simple automata, an agent moving in space, gliders in the game of life, and the tessellation automaton for autopoiesis of Varela et al.
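A quick gloss in LaTeX (our notation, not copied from the paper): with $S_t$ the system state and $E_{\le t}$ the environment history, the two candidate measures described in the abstract are

\[
  A^{*} = I\left(S_{t+1};\, S_t \mid E_{\le t}\right)
  \qquad\text{vs.}\qquad
  A = I\left(S_{t+1};\, S_t\right),
\]

the first for a system that cannot influence its environment, the second for a system with full control over it.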
K. Hlavácková-Schindler et al., “Causality detection based on information-theoretic approaches in time series analysis”, Physics Reports, vol. 441, 2007, pp. 1-46 PDF
Synchronization, a basic nonlinear phenomenon, is widely observed in diverse complex systems studied in physical, biological and other natural sciences, as well as in social sciences, economy and finance. While studying such complex systems, it is important not only to detect synchronized states, but also to identify causal relationships (i.e. who drives whom) between concerned (sub) systems. The knowledge of information-theoretic measures (i.e. mutual information, conditional entropy) is essential for the analysis of information flow between two systems or between constituent subsystems of a complex system. However, the estimation of these measures from a set of finite samples is not trivial. The current extensive literatures on entropy and mutual information estimation provides a wide variety of approaches, from approximation-statistical, studying rate of convergence or consistency of an estimator for a general distribution, over learning algorithms operating on partitioned data space to heuristical approaches. The aim of this paper is to provide a detailed overview of information theoretic approaches for measuring causal influence in multivariate time series and to focus on diverse approaches to the entropy and mutual information estimation.
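Since the hard part, as the abstract notes, is estimating these quantities from finite samples, here is the crudest estimator in the family the survey covers: a histogram ("plug-in") estimate of mutual information. A minimal NumPy sketch of our own, not code from the paper; the function name and bin count are arbitrary choices:

import numpy as np

def mutual_information_hist(x, y, bins=16):
    # Plug-in MI estimate (in bits) from a 2-D histogram. Biased upward
    # for small samples -- exactly the estimation problem the survey is about.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
print(mutual_information_hist(x, x + 0.5 * rng.normal(size=5000)))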
J. Jost et al., “An information theoretic approach to system differentiation on the basis of statistical dependencies between subsystems”, Physica A: Statistical Mechanics and its Applications, vol. 378, May. 2007, pp. 1-10 PDF
We develop an analysis of complex systems in terms of statistical correlations between the dynamics of its subsystems as a formal framework within which to understand processes of system differentiation.
X.S. Liang and R. Kleeman, “Information Transfer between Dynamical System Components”, Physical Review Letters, vol. 95, 2005, p. 244101 PDF
We present a rigorous formalism of information transfer for systems with dynamics fully known. This follows from an accurate classification of the mechanisms for the entropy change of one component into a self-evolution plus a transfer from the other component. The formalism applies to both continuous flows and discrete maps. The resulting transfer measure possesses a property of asymmetry and is qualitatively consistent with the classical measures. It is further validated with the baker transformation and the Hénon map.
J.T. Lizier, M. Prokopenko, and A.Y. Zomaya, “A framework for the local information dynamics of distributed computation in complex systems”, Physica D, submitted 2008
J.T. Lizier, M. Prokopenko, and A.Y. Zomaya, “Local information transfer as a spatiotemporal filter for complex systems”, 0809.3275, Sep. 2008 PDF
We present a measure of local information transfer, derived from an existing averaged information-theoretical measure, namely transfer entropy. Local transfer entropy is used to produce profiles of the information transfer into each spatiotemporal point in a complex system. These spatiotemporal profiles are useful not only as an analytical tool, but also allow explicit investigation of different parameter settings and forms of the transfer entropy metric itself. As an example, local transfer entropy is applied to cellular automata, where it is demonstrated to be a novel method of filtering for coherent structure. More importantly, local transfer entropy provides the first quantitative evidence for the long-held conjecture that the emergent traveling coherent structures known as particles (both gliders and domain walls, which have analogues in many physical processes) are the dominant information transfer agents in cellular automata.
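For orientation: the local measure is the pointwise log-ratio whose time average is Schreiber's transfer entropy (see the Schreiber entry below). In our notation, for a target $X$ with $k$-step history $x_n^{(k)}$ and a source $Y$:

\[
  t_{Y \to X}(n+1) = \log \frac{p\left(x_{n+1} \mid x_n^{(k)}, y_n\right)}
                              {p\left(x_{n+1} \mid x_n^{(k)}\right)},
  \qquad
  T_{Y \to X} = \left\langle t_{Y \to X}(n+1) \right\rangle_n .
\]

Unlike the average, the local values can go negative (the source was misinformative at that point), which is what makes them usable as a spatiotemporal filter.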
M. Lungarella and O. Sporns, “Mapping Information Flow in Sensorimotor Networks.”, PLoS Computational Biology, vol. 2, Oct. 2006, pp. 1301-1312 PDF
Biological organisms continuously select and sample information used by their neural structures for perception and action, and for creating coherent cognitive states guiding their autonomous behavior. Information processing, however, is not solely an internal function of the nervous system. Here we show, instead, how sensorimotor interaction and body morphology can induce statistical regularities and information structure in sensory inputs and within the neural control architecture, and how the flow of information between sensors, neural units, and effectors is actively shaped by the interaction with the environment. We analyze sensory and motor data collected from real and simulated robots and reveal the presence of information structure and directed information flow induced by dynamically coupled sensorimotor activity, including effects of motor outputs on sensory inputs. We find that information structure and information flow in sensorimotor networks (a) is spatially and temporally specific; (b) can be affected by learning, and (c) can be affected by changes in body morphology. Our results suggest a fundamental link between physical embeddedness and information, highlighting the effects of embodied interactions on internal (neural) information processing, and illuminating the role of various system components on the generation of behavior.
M. Prokopenko, F. Boschetti, and A.J. Ryan, “An information-theoretic primer on complexity, self-organisation and emergence”, Advances in Complex Systems, 2007 PDF
Complex Systems Science aims to understand concepts like complexity, self-organization, emergence and adaptation, among others. The inherent fuzziness in complex systems definitions is complicated by the unclear relation among these central processes: does self-organisation emerge or does it set the preconditions for emergence? Does complexity arise by adaptation or is complexity necessary for adaptation to arise? The inevitable consequence of the current impasse is miscommunication among scientists within and across disciplines. We propose a set of concepts, together with their information-theoretic interpretations, which can be used as a dictionary of Complex Systems Science discourse. Our hope is that the suggested information-theoretic baseline may facilitate consistent communications among practitioners, and provide new insights into the field.
T. Schreiber, “Measuring Information Transfer”, Physical Review Letters, vol. 85, 2000, pp. 461-464 PDF
An information theoretic measure is derived that quantifies the statistical coherence between systems evolving in time. The standard time delayed mutual information fails to distinguish information that is actually exchanged from shared information due to common history and input signals. In our new approach, these influences are excluded by appropriate conditioning of transition probabilities. The resulting transfer entropy is able to distinguish effectively driving and responding elements and to detect asymmetry in the interaction of subsystems.
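To make the definition concrete, here is a minimal plug-in estimator for a pair of discrete time series with history length k = l = 1. This is our own NumPy sketch, not Schreiber's code, and the names are ours:

import numpy as np
from collections import Counter

def transfer_entropy(x, y, base=2):
    # T_{Y->X}: how much y[t] improves the prediction of x[t+1]
    # over what x[t] already provides (history length 1).
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    n = sum(triples.values())
    pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x_{t+1}, x_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))         # (x_t, y_t)
    single_x = Counter(x[:-1])                      # x_t
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_cond_full = c / pairs_xy[(x0, y0)]                 # p(x_{t+1} | x_t, y_t)
        p_cond_self = pairs_xx[(x1, x0)] / single_x[x0]      # p(x_{t+1} | x_t)
        te += (c / n) * np.log(p_cond_full / p_cond_self) / np.log(base)
    return te

# y drives x with a one-step lag, so T_{Y->X} should come out near 1 bit
# while T_{X->Y} stays near 0 -- the asymmetry the abstract describes.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10000)
x = np.concatenate(([0], y[:-1]))
print(transfer_entropy(x, y), transfer_entropy(y, x))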
C.R. Shalizi and C. Moore, “What Is a Macrostate? Subjective Observations and Objective Dynamics”, cond-mat/0303625, Mar. 2003 PDF
We consider the question of whether thermodynamic macrostates are objective consequences of dynamics, or subjective reflections of our ignorance of a physical system. We argue that they are both; more specifically, that the set of macrostates forms the unique maximal partition of phase space which 1) is consistent with our observations (a subjective fact about our ability to observe the system) and 2) obeys a Markov process (an objective fact about the system's dynamics). We review the ideas of computational mechanics, an information-theoretic method for finding optimal causal models of stochastic processes, and argue that macrostates coincide with the “causal states” of computational mechanics. Defining a set of macrostates thus consists of an inductive process where we start with a given set of observables, and then refine our partition of phase space until we reach a set of states which predict their own future, i.e. which are Markovian. Macrostates arrived at in this way are provably optimal statistical predictors of the future values of our observables.
C.R. Shalizi, K.L. Shalizi, and J.P. Crutchfield, “An Algorithm for Pattern Discovery in Time Series”, cs/0210025, Oct. 2002 PDF
We present a new algorithm for discovering patterns in time series and other sequential data. We exhibit a reliable procedure for building the minimal set of hidden, Markovian states that is statistically capable of producing the behavior exhibited in the data — the underlying process's causal states. Unlike conventional methods for fitting hidden Markov models (HMMs) to data, our algorithm makes no assumptions about the process's causal architecture (the number of hidden states and their transition structure), but rather infers it from the data. It starts with assumptions of minimal structure and introduces complexity only when the data demand it. Moreover, the causal states it infers have important predictive optimality properties that conventional HMM states lack. We introduce the algorithm, review the theory behind it, prove its asymptotic reliability, use large deviation theory to estimate its rate of convergence, and compare it to other algorithms which also construct HMMs from data. We also illustrate its behavior on an example process, and report selected numerical results from an implementation.
R.V. Solé and S. Valverde, “Information Theory of Complex Networks: On Evolution and Architectural Constraints”, 2004, pp. 189-207 PDF
Complex networks are characterized by highly heterogeneous distributions of links, often pervading the presence of key properties such as robustness under node removal. Several correlation measures have been defined in order to characterize the structure of these nets. Here we show that mutual information, noise and joint entropies can be properly defined on a static graph. These measures are computed for a number of real networks and analytically estimated for some simple standard models. It is shown that real networks are clustered in a well-defined domain of the entropy-noise space. By using simulated annealing optimization, it is shown that optimally heterogeneous nets actually cluster around the same narrow domain, suggesting that strong constraints actually operate on the possible universe of complex networks. The evolutionary implications are discussed.
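One concrete version of "mutual information on a static graph" along the lines the abstract describes (our sketch, not the paper's code; this line of work uses the remaining degrees, i.e. degree minus one, at the two ends of a randomly chosen edge):

import numpy as np
from collections import Counter

def edge_degree_mutual_information(edges):
    deg = Counter()                      # node degrees
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    joint = Counter()                    # remaining degrees at both edge ends
    for u, v in edges:                   # count both orientations
        joint[(deg[u] - 1, deg[v] - 1)] += 1
        joint[(deg[v] - 1, deg[u] - 1)] += 1
    total = sum(joint.values())
    p = {kk: c / total for kk, c in joint.items()}
    q = Counter()                        # marginal over one end
    for (k, _), pk in p.items():
        q[k] += pk
    return sum(pkk * np.log2(pkk / (q[k1] * q[k2]))
               for (k1, k2), pkk in p.items())

# Star graph: one end of every edge is the hub, the other a leaf, so the
# end degrees are perfectly correlated and the MI comes out as 1 bit.
print(edge_degree_mutual_information([(0, i) for i in range(1, 6)]))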
Neuroscience
J. Atick, “Could information theory provide an ecological theory of sensory processing?”, Network: Computation in Neural Systems, vol. 3, 1992, pp. 213-251 PDF
The sensory pathways of animals are well adapted to processing a special class of signals, namely stimuli from the animal's environment. An important fact about natural stimuli is that they are typically very redundant and hence the sampled representation of these signals formed by the array of sensory cells is inefficient. One could argue for some animals and pathways, as the author does in this review, that efficiency of information representation in the nervous system has several evolutionary advantages. Consequently, one might expect that much of the processing in the early levels of these sensory pathways could be dedicated towards recoding incoming signals into a more efficient form. The author explores the principle of efficiency of information representation as a design principle for sensory processing. He gives a preliminary discussion on how this principle could be applied in general to predict neural processing and then discusses concretely some neural systems where it recently has been shown to be successful. In particular, he examines the fly's LMC coding strategy and the mammalian retinal coding in the spatial, temporal and chromatic domains.
A.J. Bell and T.J. Sejnowski, “An Information-Maximization Approach to Blind Separation and Blind Deconvolution”, Neural Computation, vol. 7, Nov. 1995, pp. 1129-1159 PDF
We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). The nonlinearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalization of principal components analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to 10 speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximization provides a unifying framework for problems in "blind" signal processing.
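The learning rule is compact enough to try out. A toy NumPy sketch of the infomax rule on a 2x2 mixture of super-Gaussian sources; this is our illustration, with untuned step size and batching, not the authors' code:

import numpy as np

rng = np.random.default_rng(0)
n = 20000
S = rng.laplace(size=(2, n))             # two super-Gaussian sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])               # unknown mixing matrix
X = A @ S                                # observed mixtures

W = np.eye(2)                            # unmixing matrix to be learned
lr = 0.01
for epoch in range(20):
    for i in range(0, n, 200):
        x = X[:, i:i + 200]
        y = 1.0 / (1.0 + np.exp(-(W @ x)))   # logistic output units
        # Infomax rule: dW proportional to (W^T)^{-1} + (1 - 2y) x^T,
        # here averaged over the mini-batch.
        W += lr * (np.linalg.inv(W.T) + (1.0 - 2.0 * y) @ x.T / x.shape[1])

# If separation worked, W @ A is close to a scaled permutation matrix.
print(W @ A)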
C. Fernando and S. Sojakka, “Pattern Recognition in a Bucket”, Advances in Artificial Life, 2003, pp. 588-597 PDF
This paper demonstrates that the waves produced on the surface of water can be used as the medium for a “Liquid State Machine” that pre-processes inputs so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Interference between waves allows non-linear parallel computation upon simultaneous sensory inputs. Temporal patterns of stimulation are converted to spatial patterns of water waves upon which a linear discrimination can be made. Whereas Wolfgang Maass’ Liquid State Machine requires fine tuning of the spiking neural network parameters, water has inherent self-organising properties such as strong local interactions, time-dependent spread of activation to distant areas, inherent stability to a wide variety of inputs, and high complexity. Water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. An analogy is made between water molecules and neurons in a recurrent neural network.
W.J. Freeman, “A Neurobiological Theory of Meaning in Perception Part I: Information and Meaning in Nonconvergent and Nonlocal Brain Dynamics”, International Journal of Bifurcation & Chaos in Applied Sciences & Engineering, vol. 13, 2003, p. 2493 PDF
The aim of this tutorial is to document a novel approach to brain function, in which the key to understanding is the capacity of brains for self-organization. The property that distinguishes animals from plants is the capacity for directed movement through the environment, which requires an organ capable of organizing information about the environment and predicting the consequences of self-initiated actions. The operations of predicting, planning, acting, detecting, and learning comprise the process of intentionality by which brains construct meaning. The currency of brains is primarily meaning and only secondarily information. The information processing metaphor has dominated neurocognitive research for half a century. Brains certainly process information for input and output. They pre-process sensory stimuli before constructing meaning, and they post-process cognitive read-out to control appropriate action and express meaning. Neurobiologists have thoroughly documented sensory information processing bottom-up, and neuropsychologists have analyzed the later stages of cognition top-down, as they are expressed in behavior. However, a grasp of the intervening process of perception, in which meaning forms, requires detailed analysis and modeling of neural activity that is observed in brains during meaningful behavior of humans and other animals. Unlike computers, brains function hierarchically. Sensory and motor information is inferred from pulses of microscopic axons. Meaning is inferred from local mean fields of dendrites in mesoscopic and macroscopic populations. This tutorial is aimed to introduce engineers to an experimental basis for a theory of meaning, in terms of the nonlinear dynamics of the mass actions of large neural populations that construct meaning. The focus is on the higher frequency ranges of cortical oscillations. Part I introduces background on information, meaning and oscillatory activity (EEG). Part II details the properties of wave packets…
R. Linsker, “Self-organization in a perceptual network”, Computer, vol. 21, 1988, pp. 105-117 PDF
W. Maass, T. Natschlager, and H. Markram, “Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations”, Neural Computation, vol. 14, 2002, pp. 2531-2560 PDF
A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
D. Prokhorov, “Echo state networks: appeal and challenges”, Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN '05), vol. 3, pp. 1463-1466 PDF
The echo state network (ESN) has recently been proposed for modeling complex dynamic systems. The ESN is a sparsely connected recurrent neural network with most of its weights fixed a priori to randomly chosen values. The only trainable weights are those on links connected to the outputs. The ESN can demonstrate remarkable performance after seemingly effortless training. This brief paper discusses ESN in a broader context of applications of recurrent neural networks (RNN) and highlights challenges on the road to practical applications.
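Since the whole appeal is that only the readout is trained, a minimal echo state network fits in a few lines. This is our own NumPy sketch (reservoir size, scaling heuristic, and the toy task are arbitrary choices), illustrating the fixed-reservoir-plus-trained-readout idea shared with the liquid state machine entry above:

import numpy as np

rng = np.random.default_rng(1)
n_res = 200
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))        # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run(u):
    # Drive the reservoir with scalar input u(t); collect the states.
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)
        states.append(x.copy())
    return np.array(states)

u = np.sin(0.1 * np.arange(2000))                # toy input signal
X = run(u[:-1])                                  # reservoir states at time t
y = u[1:]                                        # target: next input value
X, y = X[100:], y[100:]                          # discard washout
# Only this linear readout is trained (ridge regression).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print(np.max(np.abs(X @ W_out - y)))             # one-step prediction error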
M. Rabinovich, R. Huerta, and G. Laurent, “Transient Dynamics for Neural Processing”, Science, vol. 321, Jul. 2008, pp. 48-50 PDF
G. Tononi, G.M. Edelman, and O. Sporns, “Complexity and coherency: integrating information in the brain”, Trends in Cognitive Sciences, vol. 2, 1998, pp. 474-484 PDF
The brains of higher mammals are extraordinary integrative devices. Signals from large numbers of functionally specialized groups of neurons distributed over many brain regions are integrated to generate a coherent, multimodal scene. Signals from the environment are integrated with ongoing, patterned neural activity that provides them with a meaningful context. We review recent advances in neurophysiology and neuroimaging that are beginning to reveal the neural mechanisms of integration. In addition, we discuss concepts and measures derived from information theory that lend a theoretical basis to the notion of complexity as integration of information and suggest new experimental tests of these concepts.
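For reference, the measures alluded to here (integration and neural complexity, as we recall them from the companion Tononi-Sporns-Edelman papers; worth checking against the originals) are, for a system $X$ of $n$ units with subsets $X_j^k$ of size $k$:

\[
  I(X) = \sum_{i=1}^{n} H(x_i) - H(X),
  \qquad
  C_N(X) = \sum_{k=1}^{n} \left[ \frac{k}{n}\, I(X) - \left\langle I\left(X_j^k\right) \right\rangle \right],
\]

so complexity is high when small subsets behave nearly independently while the system as a whole is strongly integrated.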
Philosophy
X. Barandiaran and A. Moreno, “On the nature of neural information: A critique of the received view 50 years later”, Neurocomputing, 2007 PDF
J. Barham, “A dynamical model of the meaning of information”, BioSystems, vol. 38, 1996, pp. 235-241 PDF
The main challenge for information science is to naturalize the semantic content of information. This can only be achieved in the context of a naturalized teleology (by ‘teleology’ is meant the coherence and the coordination of the physical forces which constitute the living state). Neither semiotics nor cybernetics are capable of performing this task, but non-equilibrium thermodynamics and non-linear dynamics may be. A physical theory of the meaning of information is sketched, first by identifying biofunctions with generalized non-linear oscillators and their associated phase-space attractors, and then by postulating the existence, within all such oscillators, of a component capable of coordinating low-energy interactions with the correct environmental conditions supporting the dynamical stability of the oscillator. The meaning of information is thus interpreted as the prediction of successful functional action.
P. Cisek, “Beyond the Computer Metaphor: Behaviour as interaction”, Journal of Consciousness Studies, vol. 6, 1999, pp. 125-142 PDF
J. Collier, “Causation is the Transfer of Information”, Causation, Natural Laws and Explanation, H. Sankey, ed., 1997 PDF
C. Eliasmith, “Is the brain analog or digital? The solution and its consequences for cognitive science”, Cognitive Science Quarterly, vol. 1, 2000, pp. 147-170 PDF
L. Floridi, “Semantic Conceptions of Information” PDF
Goldfarb, Lev, “Representational formalisms: what they are and why we haven't had any”, 2006 PDF
Currently, the only discipline that has dealt with scientific representations—albeit non-structural ones—is mathematics (as distinct from logic). I suggest that it is this discipline, only vastly expanded based on a new, structural, foundation, that will also deal with structural representations. Logic (including computability theory) is not concerned with the issues of various representations useful in natural sciences. Artificial intelligence was supposed to address these issues but has, in fact, hardly advanced them at all.
How do we, then, approach the development of representational formalisms? It appears that the only reasonable starting point is the primordial point at which all of mathematics began, i.e. we should start with the generalization of the process of construction of natural numbers, replacing the identical structureless units, out of which numbers are built, by structural ones, each signifying an atomic “transforming” event.
Biosemiotics
M. Barbieri, “Biosemiotics: a new understanding of life”, Naturwissenschaften, vol. 95, Jul. 2008, pp. 577-599 PDF
Biosemiotics is the idea that life is based on semiosis, i.e., on signs and codes. This idea has been strongly suggested by the discovery of the genetic code, but so far it has made little impact in the scientific world and is largely regarded as a philosophy rather than a science. The main reason for this is that modern biology assumes that signs and meanings do not exist at the molecular level, and that the genetic code was not followed by any other organic code for almost four billion years, which implies that it was an utterly isolated exception in the history of life. These ideas have effectively ruled out the existence of semiosis in the organic world, and yet there are experimental facts against all of them. If we look at the evidence of life without the preconditions of the present paradigm, we discover that semiosis is there, in every single cell, and that it has been there since the very beginning. This is what biosemiotics is really about. It is not a philosophy. It is a new scientific paradigm that is rigorously based on experimental facts. Biosemiotics claims that the genetic code (1) is a real code and (2) has been the first of a long series of organic codes that have shaped the history of life on our planet. The reality of the genetic code and the existence of other organic codes imply that life is based on two fundamental processes—copying and coding—and this in turn implies that evolution took place by two distinct mechanisms, i.e., by natural selection (based on copying) and by natural conventions (based on coding). It also implies that the copying of genes works on individual molecules, whereas the coding of proteins operates on collections of molecules, which means that different mechanisms of evolution exist at different levels of organization. This review intends to underline the scientific nature of biosemiotics, and to this purpose, it aims to prove (1) that the cell is a real semiotic system, (2) that the genetic code is a real code, (3) that evolution took place by natural selection and by natural conventions, and (4) that it was natural conventions, i.e., organic codes, that gave origin to the great novelties of macroevolution. Biological semiosis, in other words, is a scientific reality because the codes of life are experimental realities. The time has come, therefore, to acknowledge this fact of life, even if that means abandoning the present theoretical framework in favor of a more general one where biology and semiotics finally come together and become biosemiotics.
S. Brier, Cybersemiotics: Why Information is Not Enough, 2007 PDF
A growing field of inquiry, biosemiotics is a theory of cognition and communication that unites the living and the cultural world. What is missing from this theory, however, is the unification of the information and computational realms of the non-living natural and technical world. Cybersemiotics provides such a framework.
By integrating cybernetic information theory into the unique semiotic framework of C. S. Peirce, Søren Brier attempts to find a unified conceptual framework encompassing the complex area of information, cognition, and communication science. The integration is performed through Niklas Luhmann’s autopoietic systems theory of social communication. The link between cybernetics and semiotics is further an ethological and evolutionary theory of embodiment combined with Lakoff and Johnson’s ‘philosophy in the flesh.’ This demands the development of a transdisciplinary philosophy of knowledge: as much common sense as it is cultured in the humanities and the sciences. Such an epistemological and ontological framework is also developed in the book.
Cybersemiotics not only builds a bridge between science and culture, but it also provides a framework encompassing them both. The cybersemiotic framework offers a platform for a new level of global dialogue between knowledge systems, including a view of science that does not compete with religion but offers the possibility for mutual and fruitful exchange.
Søren Brier is a professor in the Department of International Culture and Communication Studies at the Centre for Language, Cognition, and Mentality, Copenhagen Business School.
P. Capdepuy, D. Polani, and C.L. Nehaniv, “Constructing the Basic Umwelt of Artificial Agents: An Information-Theoretic Approach”, Lecture Notes in Computer Science, vol. 4648, 2007, p. 375
P. Lyon, “The biogenic approach to cognition”, Cognitive Processing, vol. 7, Mar. 2006, pp. 11-29 PDF
After half a century of cognitive revolution we remain far from agreement about what cognition is and what cognition does. It was once thought that these questions could wait until the data were in. Today there is a mountain of data, but no way of making sense of it. The time for tackling the fundamental issues has arrived. The biogenic approach to cognition is introduced not as a solution but as a means of approaching the issues. The traditional, and still predominant, methodological stance in cognitive inquiry is what I call the anthropogenic approach: assume human cognition as the paradigm and work ‘down’ to a more general explanatory concept. The biogenic approach, on the other hand, starts with the facts of biology as the basis for theorizing and works ‘up’ to the human case by asking psychological questions as if they were biological questions. Biogenic explanations of cognition are currently clustered around two main frameworks for understanding biology: self-organizing complex systems and autopoiesis. The paper describes the frameworks and infers from them ten empirical principles—the biogenic ‘family traits’—that constitute constraints on biogenic theorizing. Because the anthropogenic approach to cognition is not constrained empirically to the same degree, I argue that the biogenic approach is superior for approaching a general theory of cognition as a natural phenomenon.
M. Piraveenan, D. Polani, and M. Prokopenko, “Emergence of Genetic Coding: An Information-Theoretic Model”, Advances in Artificial Life, 2007, pp. 42-52 PDF
This paper introduces a simple model for evolutionary dynamics approaching the “coding threshold”, where the capacity to symbolically represent nucleic acid sequences emerges in response to a change in environmental conditions. The model evolves a dynamical system, where a conglomerate of primitive cells is coupled with its potential encoding, subjected to specific environmental noise and inaccurate internal processing. The separation between the conglomerate and the encoding is shown to become beneficial in terms of preserving the information within the noisy environment. This selection pressure is captured information-theoretically, as an increase in mutual information shared by the conglomerate across time. The emergence of structure and useful separation inside the coupled system is accompanied by self-organization of internal processing, i.e. an increase in complexity within the evolving system.
D. Polani, O. Sporns, and M. Lungarella, “How Information and Embodiment Shape Intelligent Information Processing”, Lecture Notes in Computer Science, vol. 4850, 2007, p. 99
BioSystems, Autonomy issue, vol. 93, no. 3, pp. 151-260, September 2008. PDF
Others
JJ Gibson, "New Reasons for Realism", Synthese, 17:2, 1967, p.162. PDF
Daniel Casasanto, "Embodiment of Abstract Concepts: Good and Bad in Right- and Left-Handers", Journal of Experimental Psychology: General, 2009. PDF
The Five Graces Group, "Language is a Complex Adaptive System", SFI Working Papers, 2008. PDF
Anagol et al., "There’s Something About Ambiguity", 2008. PDF
Classics
FRIEDMAN, M. Use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Amer. Statist. Ass., 1937, 32, 675-701.
MANN, H. B., & WHITNEY, D. R. On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Statist., 1947, 18, 50-60.
TUKEY, J. W. Comparing individual means in the analysis of variance. Biometrics, 1949, 5, 99-114.
WILCOXON, F. Some rapid approximate statistical procedures. Stamford, Conn.: American Cyanamid Co., 1949.
Zipf, G. K., Human Behavior and The Principle of Least Effort: An Introduction to Human Ecology. Reading, Mass., Addison-Wesley (1949) PDF (31mb) DJVU (smaller file but you need to download a DjVu viewer)
Herbert Simon, “The Architecture of Complexity”, Proceedings of the American Philosophical Society, vol. 106, Dec. 1962, pp. 467-482 PDF
Pylyshyn, Z. W. (1978). "Computational Models and Empirical Constraints". Behavioral and Brain Sciences, 1. PDF
“The Cognitive Science Millennium Project” (100 classic papers) Link