choosing articles

I just ran into Paul, and that got me thinking about paper topics again. The following may be partly residual from Floridi's paper, but also from a variety of other papers that have been suggested:

It seems like the group is having a mild identity crisis, and the suggested paper topics aren't really following a theme. Does anyone mind if I draw a couple of dichotomies and then say which quadrant I think we should look at? (The beauty of posting this is that nobody can say "yes, I mind" before I finish :)

Let's say there are two approaches to information processing. One involves a tendency towards formal systems, and the other involves more of a dynamic processing conceptualization. A formal systems perspective assumes a signal is composed of discrete units. A dynamic systems perspective does not make this assumption. I don't take a stance on whether there is some kind of a continuum between the two.

The other dichotomy I would like to draw is on the topic of communication. There are two kinds of communication: 1) broadcast communication (the sender gets no feedback from the receiver), and 2) interactive communication (the sender's message is influenced by receiver feedback). There are continuum issues here as well. Generally speaking, though, broadcast communication derives from interactive communication. That is, broadcast information must be based on shared meanings [that have emerged through interactive communication]. Once we 'get' interactive communication, broadcast information should fall into our laps for free.

Most approaches (e.g., Lashley's paper we read) tend to treat a stream of discrete units arriving at a receiver {discrete, broadcast} as the problem to be addressed. The job is for the receiver to detect patterns in the well-formed stream and to learn those patterns. If the goal is to build systems that do, say, stock market forecasting, that's fine. If the goal is to build systems that can extract *meaning* from a worldly signal, we should consider the opposite quadrant {non-discrete, interactive}.

Okay, just to assign names to things, let's give {discrete, broadcast} the label "computer science" and {non-discrete, interactive} the label "brain science." This depicts the main dichotomy I am trying to draw. As I have ranted, the brain and the computer are completely different things and provide counterproductive analogies for each other. My question for this reading group is: are we doing computer-oriented or brain-oriented cognitive science? I prefer brain-oriented. And, regardless of which direction we go, can we please not do any papers (besides historical ones) where the author hasn't figured out this distinction?


Hi Mike,

I think you're right that there's a sort of 'mild identity crisis', though I see it as potentially a useful way to converge on a thematic consensus that we can all agree on (or disagree on) within the group. But I also feel that to do that, we have to make our positions on the questions clear. So I'm glad you've done that, because these issues have come up numerous times in discussion before, normally without resolution.

I don't really agree with your dichotomies, nor with the argument that we should expect the authors we read to understand what to me seems like an arbitrary distinction between 'brain science' and 'computer science.' I think it may have to do with the difference in the backgrounds we are coming from — motor control on one hand (where, as far as I understand, the distinctions you draw play an important role), versus complex systems (where a somewhat different set of distinctions is usually brought up). We must be careful not to assume that the concepts of one field will carry over their entire meanings when applied to other fields.

My own thoughts for the group were never specifically centered on "brain-oriented cognitive science", though I find it fascinating & a great topic to discuss. Rather, I was thinking more broadly about conceptual/formal frameworks for understanding information / control / organization / etc. in complex adaptive systems — specifically biological and cognitive systems (a huge topic, I realize). Hence, I never thought of the distinguishing characteristic of biological or cognitive systems as their being discrete or non-discrete, but rather as things like (if you allow me to rattle things off without going into the problematic details):

- Distributed & bottom-up (higher levels arise through non-linear interactions among low-level constituents, but may then exert top-down constraints)
- Self-maintaining & displaying adaptive behavior in response to perturbations
- Possessing many circularly-causal feedback loops, both internally and through environmental coupling
- Etc.

I think your distinction between broadcast/interactive communication overlaps quite a bit with these (though, for example, the way Lashley talks about the internal dynamics of the brain seemed quite 'interactive' to me, if I remember correctly). However, with regard to discreteness, I could understand an argument claiming that there is something 'ontologically' special about the computing power of analog systems or the real number line or whatever, but I don't think you are saying this — especially since your own 'non-discrete' models run within the discretized environment of a computer. And in fact, the contrast you draw is not between discrete and continuous (which I would think of as its opposite), but between discrete and "dynamic processing". But surely one can have a discrete dynamical system — even one with most of the emergent properties ascribed to dynamical systems: trajectories, attractors, basins, etc.!
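A minimal sketch of this point (the logistic map is my own illustrative choice, not something from the thread): a system whose state updates in discrete time steps can still have a fixed-point attractor with a basin of attraction.

```python
def logistic_map(r, x0, steps=1000):
    """Iterate the discrete-time logistic map x_{n+1} = r * x_n * (1 - x_n)
    and return the state after the given number of steps."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# For r = 2.5 the map has a stable fixed point at 1 - 1/r = 0.6;
# trajectories started anywhere in the basin (0, 1) converge to it,
# even though time is discrete.
for x0 in (0.1, 0.5, 0.9):
    print(round(logistic_map(2.5, x0), 6))  # each prints 0.6
```

Raising r past 3 yields periodic attractors and eventually chaos — all in a fully discrete-time system.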

What I think is happening is that when you see 'discrete' (and its attendant 'computer science'), you assume it's talking about some small set of 'symbol tokens' embedded in a formal top-down architecture, or something like this. But I don't think it has to be like this at all. One of the interesting things to me is that the birth of computer science is rooted in biological/neural metaphors. Wiener's book, for example, was titled "Cybernetics: Or Control and Communication in the Animal and the Machine", and both Turing and von Neumann proposed distributed, adaptive models of computation in addition to the serial, centralized computer architecture we are familiar with nowadays (which I agree has, relatively speaking, little or nothing to do with cognition or the brain). So, can we not save these ideas from their previous association with now-unfashionable GOFAI? Or am I arguing against something different than you are?

-Artemy


Hey Artemy,

I like your response. There are lots of important issues here. It simply occurred to me that contrast makes a powerful cognitive tool, so I thought I would try to lay out some contrasts to help focus the meta-issues. 'Computer science' versus 'brain science' were probably not the best labels I could have chosen. Perhaps 'formal systems' versus 'complex systems' would have been too loaded, though, and 'discrete cognition' versus 'worldly cognition' wouldn't really have worked either. The discrete/non-discrete distinction I wanted to make had more to do with the input to the system than with the system itself (if we can draw a line between where the input ends and the system begins).

-Mike
