Human Thought and the Design of Computers


Peter J. Denning wrote an excellent article titled "The Locality Principle" in the July 2005 issue of the Communications of the ACM. The article tells the story behind locality of reference, a fundamental principle of computing with many applications. In a letter published in the magazine's October issue, I commented:

Peter J. Denning's "The Profession of IT" column ("The Locality Principle," July 2005) invoked an anthropomorphic explanation for the prevalence of the locality principle in computational systems, observing that humans gather the most useful objects close around them to minimize the time and work required for their use, and that we've transferred these behaviors into the computational systems we design.

A more intellectually satisfying explanation might be that we are dealing with two parallel and independent evolutionary design paths. Trading some expensive high-quality space (fast memory) in order to gain time performance is a sound engineering decision. It is therefore likely that evolution first adapted the human brain by endowing it with limited but versatile short-term memory and a large long-term memory structure that exhibits behavior similar to caching.

Millennia later, we make similar design decisions when building computing systems.
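The space-for-time tradeoff the comment describes — spending a little fast memory to avoid repeating slow work — can be sketched in a few lines of Python. The Fibonacci function and cache size below are my own illustrative choices, not anything from Denning's article:

```python
from functools import lru_cache

@lru_cache(maxsize=128)           # small, fast "short-term memory"
def fib(n: int) -> int:
    """Naive recursion; the cache collapses its exponential call tree."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))                    # 832040
```

Without the cache this computes roughly 1.6 million overlapping calls; with it, each of the 31 distinct subproblems is computed exactly once — the same locality bet a hardware cache makes.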

The comment triggered an email exchange with Phillip G. Armour. It was one of the most intellectually satisfying email exchanges I've ever had, and I am reproducing it here, with his kind permission.


From: "Phillip G. Armour"
To: "Diomidis Spinellis"
Subject: Reply to Denning

Hi Diomidis,

I read your reply to Peter Denning's article in CACM, and I had a couple of comments, if I may...

I liked the analogy. I like anything that directly and practically links the brain operation with the design of systems, particularly computer systems. With physical systems we do appear to be constrained by the behavior of the material world in which we are working, but with computer systems there is much more conventional (as in "by convention") behavior. An evolutionary view of consciousness is something that people are starting to consider. Did people always think the way they think now? Or has our reasoning and consciousness evolved along with the physical attributes of humanity?

To take a purely evolutionary pressure perspective (which I don't necessarily agree with), our thinking processes have evolved because there was a benefit to them. I do have an issue with any teleological argument, since it is neither provable, nor refutable, but that's a different issue and I will go off on a 270 degree tangent (trigonometry joke) if I pursue that.

Starting from the evolutionary perspective, it would seem that thinking must relate to our ability to interface with the world. With all due respect to your countryman Socrates, there doesn't seem to be much general evolutionary advantage to, say, philosophy. The ability to introspect, to postulate and prove theorems, does not seem to have a direct relationship to our ability to put food on the table (there is a joke in the US: Question: "What's the difference between a philosophy major and a 14-inch pizza?" Answer: "A 14-inch pizza can feed a family of four"). That said, there is evidence from places like the caves at Lascaux that the hunter-gatherers of 40,000 years ago did visualize and plan the hunt, and were therefore more successful. However, the Cro-Magnon man who painted the hunt scenes was a fairly late evolutionary chap, pretty much where we are now. We don't have evidence that Australopithecus did much in the way of meditating. So the early advanced brain functions, on which the later ones were built, were probably related to improving our ability to deal with the world. This means through the senses. It's not a big step to consider the core thinking operations to be sensory metaphors that extend our understanding of the sensed world and our ability to function in it. George Lakoff (in, for instance, "Where Mathematics Comes From") makes this assertion and it rings true with me.

Therefore, our concept of location is primarily a sensory one. We can't "sense" things which are not close to us, and we can't easily deal with different things unless they are close together. So it makes sense (no pun intended) that proximity is one of the core building blocks of conscious reasoning. Building on this, we've been able to extend the concept of proximity a lot; much of mathematics consists of defining realms (like Riemann space) in which disparate things are somehow linked with things that are "like" (read "near") them. The mechanisms we use to link things to other things that are not obviously (in a sensory way) "close together" are themselves extensions of the same sensory metaphor. While we've been enormously successful at doing this, I think there is evidence that this approach is breaking down. Perhaps more correctly, we are starting to appreciate its limitations as an understanding mechanism. I think the sudden realization that systems do not necessarily behave in a way that is empirically predictable from the behavior of their components is a good example.

The trouble is, we can only think the way we can think, and if thinking per se has limitations that do not allow us to fully understand the limitations of, well, thinking, we are kind of stuck. Sounds like a nice metaphysical kind of 3OI to me.

Best,

Phil

____________________________________________
Phillip G. Armour
CORVUS INTERNATIONAL INC
Systems, Psychology and Software
Web: http://www.corvusintl.com
The basic economic resource... is and will be knowledge... Peter F. Drucker
____________________________________________


From: "Diomidis Spinellis"
To: "Phillip G. Armour"
Subject: Re: Reply to Denning

Hi Phillip,

Your email poses a really interesting question: is our brain's design a result of evolutionary engineering, or does the design mirror properties (locality in our discussion) of the world around us? Given that currently a) we don't have a detailed understanding of how the brain works, b) we don't have access to other intelligent organisms that have evolved in a world with different properties, and c) we've been unable to artificially create deep intelligent behavior through evolutionary algorithms, this is a question we can't answer directly.

One could argue that our thinking does not appear to be constrained by our sensory limitations: we investigate the atom's particles, black holes, and the big bang. However, one could also argue that the difficulties we face in obtaining a unified view of physics could well be limitations of our thinking processes, and end up in the 3OI limitation you mention. Furthermore, there is no reason to believe that the brain's low-level design (short term memory in our case) should influence its high level processes: the high level behavior of our computing systems does not appear to be directly linked to the logic gates that compose them.

You also ask whether consciousness offers an evolutionary advantage. I'm really out of my depth here, but I can offer one advantage: consciousness allows us humans to form complex social relationships, and these *do* offer an evolutionary advantage.

All the best,

Diomidis


From: "Phillip G. Armour"
To: "Diomidis Spinellis"
Subject: Re: Reply to Denning

Hi Diomidis,

See comments below...

Best, Phil

-----Original Message-----
Your email poses a really interesting question: is our brain's design a result of evolutionary engineering, or does the design mirror properties (locality in our discussion) of the world around us?

---------------------------------------------------------------------------
[Phil:] Most likely both. The forces of evolution are primarily a result of the interaction between the organism and its environment (with some random mutation thrown in). The interface is the senses. Without the senses, any organism will be "unaware" of its environment. There are chemicals around us, so we (and other organisms down to the level of a virus) have a sense of "smell" and "taste"--which is simply a chemical reaction to a chemical. We appear to be surrounded by physical entities that we can interact with, generating signals that we interpret as touch. I say "appear to" because we can reasonably infer that their physical properties are not the same as those given to us by the sense of touch. There are vibrations and energy around us that give us the senses of hearing and sight. For probably a variety of reasons, we have not developed sensitivity to electromagnetic radiation except in a quite narrow band from around 400 nm to around 700 nm wavelength. Radiation outside of these limits is there, but we can't "see" it.

Locality is a tricky one, since there is some evidence that it is not quite what we think it is (e.g., Alain Aspect's experiment, which rather refuted the EPR concept of locality). That said, it seems that only at the quantum level is it reasonable to debate locality. We always end up in an anthropomorphic loop of the "if a tree falls in the forest..." variety. I wrestle a lot with the argument "...if that's the way we perceive it, then is it worth trying to perceive it any other way?..."

----------------------------------------------------------------------------
Given that currently a) we don't have a detailed understanding of how the brain works, b) we don't have access to other intelligent organisms that have evolved in a world with different properties, and c) we've been unable to artificially create deep intelligent behaviour through evolutionary algorithms, this is a question we can't answer directly.

[Phil:] --------------------------------------------------------------------
I agree. I also think there are quite a few unresolved evolutionary issues. It is clear that we can go from one variety of a living form to another using evolutionary pressures or experiments, but it is not at all clear how to go from non-living to living. People keep trotting out Stanley Miller's experiments even though they didn't result in anything other than slightly more complicated organic molecules, and their chemical basis is rather suspect anyway. It's still a REALLY big step from a long-chain organic chemical to something that reproduces itself and manages itself within its environment.

----------------------------------------------------------------------------
One could argue that our thinking does not appear to be constrained by our sensory limitations: we investigate the atom's particles, black holes, and the big bang. However, one could also argue that the difficulties we face in obtaining a unified view of physics could well be limitations of our thinking processes, and end-up in the 3OI limitation you mention.

[Phil:] --------------------------------------------------------------------
But I'm not sure we are really investigating the atom's particles in a direct sense. We have established mental models (starting with Bohr) against which we match highly processed information that ultimately hits the senses. To interpret the output of a scanning electron microscope, the "actual" data is filtered and filtered and filtered, usually against a variety of other mental models and the devices we've constructed based upon them, until the processed output ultimately becomes visible. The event still has its basis in a sensory interface.

It is an age-old debate whether an event exists or whether we simply interpret it as existing. While this is a highly tautological and somewhat fruitless deliberation, there is an underlying important consideration, namely that the essence of what we *observe* is anthropomorphic. That being the case, it is useful to keep in mind that it is not so much a law of the universe we are observing as a law of our own creation. The history of science is dotted with people, professions, and even cultures and civilizations which genuinely believed that their codification of their observations of the universe was fundamental. Such hubris is invariably punished when the next generation of thinking comes along, even though the next generation usually falls into exactly the same trap. There are some really simple examples, of which my favorite is our system of numbers. "Real numbers" are not real; they are an artificial construct that allows humans to count. The universe does not need numbers, we need numbers. It is likely that the Romans considered their numbering system to be "real" even though it didn't contain provision for nothing, partial things, or negative things, let alone exponents and logarithms. Numbers, like all of our thinking and all of our perception of reality, are simply a human-created model. And as George Box said, "...all models are wrong, some models are useful."

So for me the issue is not correctness, it is usefulness. The wave-particle duality issue in subnuclear physics is another good example. Is an electron a wave, a particle, or a probability? Does it exist in one place or everywhere simultaneously? The ultimate answer is "yes". But we could give a good argument that it is also, and at the same time, "no". An electron is not a wave (in what medium?) or a particle (where is its boundary?) or a probability (which means, what?). It happens that each of these models is useful for certain human activities involving electrons. But that's not what an electron *is*. I (along with Edward de Bono) think that the "yes"/"no" duality is itself anthropomorphic and, while useful, is not necessarily true (but then maybe it is?).

There is an important point (that I'm long-windedly trying to get to) which is that our acknowledgement of our ignorance is one of the most important considerations, because it leaves us open to considering alternative models. While knowledge is power, it may be that ignorance is more powerful :-).

----------------------------------------------------------------------------
Furthermore, there is no reason to believe that the brain's low-level design (short term memory in our case) should influence its high level processes: the high level behavior of our computing systems does not appear to be directly linked to the logic gates that compose them.

[Phil:] -------------------------------------------------------------------
While the short-term memory acts like a cache, it is probably not contiguously ordered as such. The "best" model of thought (that is, the one that works best for me) is that it is a fractal pattern. When one thinks of a purple cow, the brain does not generate the intersection of signals from the purple neuron and the cow neuron. There is no purple neuron. The concept and application of purple and of cow (and of everything else) is not located in a particular neuron; it is located all over the brain (though granted there are certain parts of the brain that specialize in certain kinds of information). The idea of purple, of cow, and of purple cows is retained, along with everything else we are thinking of, have thought of (that has not been forgotten), or to some extent can think of, in a self-sustaining dynamic fractal pattern at every moment of every minute of every day. Should that pattern ever "stop", we would be dead.

Some harmonics of the pattern are "stronger" than others and those represent our most conscious and intentional thoughts. Some harmonics are so weak they are almost gone. When they are finally so immersed in other patterns, we will have "forgotten" that fact or idea or experience. Some patterns are so similar to others that they coalesce and the memories become "blurred". It is very common that people overlay what they actually observe with a generalized pattern based on experience, and that pattern becomes reality. This has been duplicated in countless psychology experiments.

Some patterns are "retrieval patterns". That is, they are modalities of thought, or perhaps "search patterns", whose continually self-replicating function is to process other thought patterns to see how similar they are and perhaps to perform some optimization on them. My guess is that the optimization consists of (to use a model):

(a) Combinations of patterns, where one is subsumed into another or two are simply merged. This is the "like" construct.

(b) The establishment of "meta-patterns" that allow relatively independent processing of the similarity of two patterns and of their difference (whence come ideas like the Linnaean classification schema in biology and, in OO, our inheritance ideas).

(c) "Indexes", which are probably a simple form of the meta-pattern; they just point us to the relevant patterns but don't perform much processing. This is where, well, indexes come from.

(d) Meta-meta-patterns, which guide the construction of the meta-patterns (this list is one such). Meta-meta-patterns may be governed by meta-meta-meta-patterns or, more likely, themselves. This is where we get the "stack" model from.

(e) Hybrid patterns, which are combinations of (say) combinations, meta-patterns, indexes, meta-meta-patterns and, of course, hybrid patterns.

Don't you just love recursion?

----------------------------------------------------------------------------
You also ask whether consciousness offers an evolutionary advantage. I'm really out of my depth here, but I can offer one advantage: consciousness allows us humans to form complex social relationships, and these *do* offer an evolutionary advantage.

[Phil:] --------------------------------------------------------------------
Consciousness is another tricky issue. The most useful view of consciousness for me is that it is the part of the fractal pattern that is self-aware. Consciousness is arguably the thing that separates humans from other animals. It is clear that other animals think and also form societies. Interestingly, in some animals the bulk of the thinking occurs in the individuals, but in some highly communal creatures, such as ants, the "thinking" appears to be spread across the society.

It is useful to contemplate whether human societies are "conscious". I think they are, though perhaps some societies are more conscious than others. There is no doubt that thought-like fractal patterns occur in societies and civilizations (think: fashion). They are born, evolve, and die, or more correctly are subsumed back into the overall pattern. There are patterns which are life-giving (see Christopher Alexander) and life-destroying (see Jared Diamond's "Collapse" for a few good (?) examples).

Given the propensity of the human race to generate destructive patterns, one has to wonder about the viability of this trait from a strictly evolutionary perspective.

To sum up, I would expect ALL of the design of computer systems to be analogues of thought processes. In fact, I cannot think of any way in which they would not. We think the way we think, and our thought process output, languages, systems designs, systems models, computer architecture, all the way up to the models we construct to try to understand the universe MUST be subject to the same restrictions. But hey, perhaps that's just the way I think.

Whew!



Last modified: Friday, October 28, 2005 1:06 am


Unless otherwise expressly stated, all original material on this page created by Diomidis Spinellis is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.