Turing’s Golden: How Well Turing’s Work Stands Today.
It is fifty years since the creator of our age lay down in his Spartan bed, crunched down on an apple laced with potassium cyanide, and died. It behooves us to ask how his achievements stand today. He has, after all, bequeathed us a conceptulary including “Turing, or Turing-Church, thesis,” “Turing machine,” “universal Turing machine,” “Turing test,” and “Turing structures,” plus other unnamed achievements: a proof that any formal language adequate to express arithmetic contains undecidable formulae, and contributions to computer science, artificial intelligence, mathematics, biology, and cognitive science. (Indeed, Turing’s work became so ubiquitous and impersonal that for decades the Library of Congress lower-cased Turing; its entries read “turing machine” and “universal turing machine,” as if these were farm tools or exotic recreational devices.) I have said conceptulary, rather than vocabulary, to note that Turing, after all, did not paste his own last name on things, and, more importantly, because I want to emphasize what these achievements legitimately amount to for us today, rather than adopting a purely historical stance that might allow silly, malicious, wildly political, or irrelevantly personal supposed counterexamples or psychological deconstructions. Indeed, both scientifically and philosophically, Time has winnowed, honored, and made familial Turing’s now golden offspring, while cultural mavens and dramatists, and even some scholars and scientists, have perhaps been less than fair, and certainly less than accurate, respecting the man who crunched the apple.
In his justly famous 1936 paper, “On Computable Numbers,” Turing negatively answered David Hilbert’s question as to whether a “definite, mechanical method” existed that would decide in a finite number of steps whether any given mathematical assertion, or equally any real number, was determinable or computable, as what Turing called a “satisfactory number” or, equivalently in Hilbert’s explicit formulation, a determinably true or false mathematical assertion. Turing continued his paper’s title “with an Application to the Entscheidungsproblem” because, in order to solve this problem, he had to invent what he called “Theoretical [or Logical] Computing Engines” and a particular species of them he called “Universal Theoretical [or Logical] Computing Engines” (these are what we now of course call “Turing machines” and “universal Turing machines”). Turing also put the application in the subtitle because he thought the characterization of TCEs and UTCEs was of considerable independent interest as a characterization of real numbers. Using a diagonal procedure of the sort Cantor used to distinguish, among the real numbers, the countable rationals from the uncountable irrationals, Turing distinguished computable numbers from uncomputable numbers, and showed that even a universal computable-checking machine cannot determine, or compute, its own description number on pain of contradiction. As Turing’s mathematical biographer remarks,
Turing’s proof can be recast in many ways, but the core idea depends on the self-reference involved in a machine operating on symbols, which is itself described by symbols and so can operate on its own description. Indeed, the self-referential aspect of the theory can be highlighted by a different form of the proof… However, the “diagonal” method has the advantage of bringing out the following: that a real number may be defined unequivocally, yet be uncomputable. It is a non-trivial discovery that whereas some infinite decimals (e.g. π) may be encapsulated in a finite table, other infinite decimals (in fact, almost all), cannot. (Hodges 2002: p. 4)
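The flavor of the diagonal construction can be suggested in a few lines of Python. This is a toy, of course: the “enumeration” here is finite, and the names (`machines`, `anti_diagonal`) are illustrative inventions, not anything of Turing’s. The point is only to show why the anti-diagonal number cannot appear anywhere in the list that generated it.

```python
# Toy diagonal argument: pretend `machines` enumerates programs, the
# k-th of which computes the k-th binary digit of some real number.
machines = [
    lambda k: 0,        # computes 0.000...
    lambda k: 1,        # computes 0.111...
    lambda k: k % 2,    # computes 0.0101...
]

def anti_diagonal(k):
    # Flip the k-th digit of the k-th machine's number.
    return 1 - machines[k](k)

# The anti-diagonal number differs from the k-th enumerated number at
# its k-th digit, so it appears nowhere in the enumeration; yet a
# machine computing it would have to appear there -- contradiction.
diag = [anti_diagonal(k) for k in range(len(machines))]
print(diag)   # -> [1, 0, 1]
```

In the full, infinite version of this construction the anti-diagonal real is perfectly well defined, digit by digit, yet no machine in any enumeration of machines computes it: Hodges’s “defined unequivocally, yet uncomputable.”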
Even more importantly perhaps, TMs might be a useful précising or formalization of the equivalent informal terms “definite method,” “effective method,” or (as Alonzo Church independently termed it) “effectively calculable method” (Church 1936; Church in the same paper provided “lambda definable function” as his own formalization of the informal notion).
Church, in fact, identified or defined the informal notion with his formalized notion of lambda definable, or recursive, function, latterly recognizing that his formulation also generated the same functions as Turing’s formulation. Emil Post at the time justifiably offered the following caveat to Church’s formulation,
To mask this identification under a definition … blinds us to the need of its continual verification, [for it is just a] “working hypothesis” (Post 1936: p. 105).
Turing indeed did not then identify his formalization of TCE with the informal notion of “definite method,” etc., or claim an equivalence. In Wittgenstein’s Lectures on the Philosophy of Mathematics, Cambridge 1939, Alan Turing does indeed remark that “a [real] number is like a simple kind of device that transforms inputs into outputs in a characteristic way” (Wittgenstein 1976, p. 37).
Twelve years later, however, Turing was to justifiably assert that
LCMs [Turing machines] can do anything that could be described as “rule of thumb” or “purely mechanical” (Turing 1948: p. 7).
Or as he also then, and more carefully, added the supporting formulation,
This is sufficiently well-established that it is now agreed that “calculable by means of an LCM [Turing machine]” is the correct rendering of such phrases. (Turing 1948: p. 7)
In other words, Turing’s, and Church’s, original working hypotheses amounted to the identification of the informal notion of computable with their own 1936 formalizations. But latterly, with the accumulation of ever so many alternative formalizations of computability, all with equivalent effect and converging on their own formalizations, we can increasingly take the working identification for granted as the now well-established Church-Turing Thesis. This thesis has had nothing but increasing support since 1948. While we today can say that, strictly speaking, the claim that anything that is just plain computable is TM-computable is still unproven (how indeed could it be proved, strictly speaking, given its informality?), we nonetheless now are quite confident that the formulation will endure. Indeed, since Turing’s formulation seems more clearly mechanized, and more fundamental, fruitful, and physically realized, we might well speak, as some do, of the Turing thesis.
Indeed, both Church and Godel handsomely recognized something like this and both McCulloch-Pitts 1943 and von Neumann 1945 acknowledge Turing’s 1936 paper as stimulus for their own work (for McCulloch-Pitts’ acknowledgement see von Neumann 1951/1963, V: p. 319). In a review of Turing’s paper, Church acknowledges that
As a matter of fact, there is involved here the equivalence of three different notions: computability by a Turing machine, general recursiveness in the sense of Herbrand-Godel-Kleene, and λ-definability in the sense of Kleene and the present reviewer. Of these, the first has the advantage of making the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately (Church 1937a: p. 43).
And Godel writes that
Due to A. M. Turing’s work, a precise and unquestionably adequate definition of the concept of a formal system can now be given … Turing’s work gives us an analysis of the concept of “mechanical procedure” (alias “algorithm,” or “computational procedure,” or “combinatorial procedure”) (Godel 1934/added in 1946).
Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing’s computability). It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute definition of an interesting epistemological notion, i.e., one not depending on the formalism chosen. In all other cases treated previously, such as demonstrability or definability, one has been able to define them only relative to a given language, and for each individual language it is clear that the one thus obtained is not the one looked for. … By a kind of miracle it is not necessary to distinguish orders, and the diagonal procedure does not lead outside the defined notion. This, I think, should encourage one to expect the same thing to be possible also in other cases. (Godel 1946, p. 84)
Indeed, some have gone so far as to suggest that a Turing machine is the Normal Form for reports of psychological states or capacities, whether the attribution is to a human, a computer, or a hypothetical alien intelligence.
Turing machines are minimalist, evocative, and physical – at least sort of physical, although Turing called them Theoretical Machines and he certainly had no intention of building one. They were machines to think with.
Turing machines are certainly minimalist in kinds of symbols: two distinguishable symbols plus blank. The active part of the machine is fed by an indefinitely long tape divided into frames like those on a roll of film. The active part has a “read head” which can tell whether the frame under it has “/,” “\,” or is blank. The read head is also a “write head” that can erase, write in a “/,” or do nothing. The machine can move one frame forward or backward or stay in place, and the write head takes action, according to what is read and what state the machine is in. At the beginning of each move, the machine is in one of a small number of “internal states” and may switch to another after the move; then the cycle repeats. The machine is built to instantiate instructions in its machine table of the form “if / is read and the internal state is 1, then erase, move one frame forward and go into state 2.” A Turing machine for adding would get, say, the input sequence “// //” and then automatically change it into “////” through a long series of steps, shuttle-cocking backwards and forwards, and then stop. For us this would be the computation “2+2=4.” Since Turing was thinking about a theoretical device, he did not mind that a million would be represented by a like number of “/”s.
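The mechanics just described can be made concrete in a short simulator. The sketch below is illustrative only: the transition table is my own reconstruction, not Turing’s actual machine table, but it performs the unary addition in the text, turning “// //” into “////” by bridging the gap with a stroke and then erasing the final stroke.

```python
# A minimal Turing machine simulator for the unary-addition example.
def run(tape, table, state="A", pos=0, max_steps=1000):
    tape = dict(enumerate(tape))           # sparse tape: index -> symbol
    for _ in range(max_steps):
        symbol = tape.get(pos, " ")        # unvisited frames are blank
        action = table.get((state, symbol))
        if action is None:                 # no matching instruction: halt
            break
        write, move, state = action
        tape[pos] = write
        pos += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).strip()

# Each row is one machine-table instruction:
# (state, read) -> (write, move, next state)
ADD = {
    ("A", "/"): ("/", "R", "A"),    # skip the first block of strokes
    ("A", " "): ("/", "R", "B"),    # bridge the gap with a stroke
    ("B", "/"): ("/", "R", "B"),    # skip the second block
    ("B", " "): (" ", "L", "C"),    # ran off the end; step back
    ("C", "/"): (" ", "N", "HALT"), # erase the last stroke and halt
}

print(run("// //", ADD))   # -> "////", i.e. 2 + 2 = 4
```

Each entry in `ADD` is exactly an instruction of the quoted form “if / is read and the internal state is A, then write /, move one frame forward, and stay in state A.”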
The tape could also store data and programs (“memory,” “skills,” and “plans”), represent incoming data (“sensory input”), and issue output of a like character (“motor outputs”). So Turing has also given us the framework in which to describe any sort of individual thinker, you and me included. As many scientists have said since the 1960s, we are, more or less, universal Turing machines, and so are our digital electronic computers. For the same reason, we now think of thinking as computing or data-processing. When Alan Turing showed up at Bletchley Park in September 1939 to decipher German Enigma machine military messages, we might say that he already knew what he had to do theoretically. There was in fact an incredible amount of practical engineering in front of him. In any case, Turing’s theoretical machines have cast much light in formal and cognitive science, light that emphasizes the automation of cognition as a mechanical and purely physical task. Turing’s machines shape our conception of thought and of psychology. Just as time, fertility, multiple convergence, and lack of refutation have shifted us to a thorough acceptance of the equation of the informal notion of computable with TM-computable, so we have come a long way toward understanding Turing’s Turing test as a productive criterion for practical cognitive science.
Aside from the initial mid-century reaction that it would be “a simple [but irrelevant] task” to simulate human intelligence, or that it would take but a decade to reach the goal, there has developed over the past few decades a consensus in cognitive science about what intelligence a Turing test passer would have to exhibit, under extensive and lengthy interrogation by experts, and a considered agreement that nothing on the near horizon looks likely to succeed. Professor Robinson, to whose objections Turing replies in his famous 1950 paper, may be pardoned for thinking it “would be a simple thing” to program a computer to pass the test, but no one may be so pardoned now. It is an impressive testament to the enormous (and heretofore largely unrecognized) complexity of ordinary human intelligence that passage now seems so difficult, even with computers that almost unthinkably exceed in memory size, speed, and programmability those beginning to become available in Turing’s day.
Around 1950, Turing proposed what we now call the Turing test for machine intelligence (Turing 1950, 1948), a goal to which his many contributions to the development of serial-processing programmable digital electronic computers were dedicated. Although many U.S. scientists refer to such machines as von Neumann machines, they are as much or more Turing’s achievement. On this last point he has received less than the credit due: Carpenter and Doran, while crediting von Neumann (1945) as well, insist that Turing (1945) “is quite possibly the first complete design of a stored program computer architecture,” specifically including “subroutines, the stack and micromachine architecture” as Turing’s contributions (Carpenter and Doran 1977; see also Hodges 1983). More expansively, D. C. Ince credits the 1945 paper for the hierarchy of programs and Turing’s 1948 paper for computer-based theorem proving and self-organizing machines (Ince 1992; Turing 1945, 1948).
What has largely gone unnoticed is Turing’s 1948 proposal about training and construction methods for “Unorganized Machines,” or what we today call connectionist nets, among which he discriminated a variety of parallel distributed processors with computational capacity comparable to that theoretically available in a Universal Turing Machine or practically in a serial-processing programmable digital electronic computer (Turing 1948). Also unnoticed has been his description of the “Child Machine” and the importance it might hold, along with the Unorganized Machines, in the attempt to make Turing Test passers (Turing 1948, 1950).
Turing envisioned virtually no restrictions on how the Turing Test passer is to be fashioned. It will be quite enough of an achievement to come up with anything whatsoever that does the job (a point rather more vividly evident to us by now than it was in 1950). Turing excludes only one method: to conceive, bear, grow, and properly educate a human being (amusingly, a requirement that John Searle’s Chinese Box counterexample would flunk in that Searle is human). Turing recommends that both programming serial computers and training connectionist nets should be employed. And Turing insists that since a fully disorganized machine could not be efficiently trained (just as a non-specific, all-purpose chordate brain could not effectively reach cognitive maturity), much thought and experiment might need to be devoted to the structure of the Child Machine. Its structure might need to mirror native human cognition to achieve, after training and programming, Turing Test passage.
Hence, for Turing, there is no linkage between the propriety of the Turing Test and the fate of the various theses of what has recently been labeled Good Old A. I. In particular, the passer need not exemplify the “physical symbol hypothesis” or wear on its sleeve the claims about the well-defined autonomy and causality of cognition that Fodor and Pylyshyn (1988) have championed against connectionism. This lack of linkage is important in that Turing’s Turing Test draws, as he remarked, “a fairly sharp line between the physical and intellectual capacities of a human being.”
No engineer or chemist claims to be able to produce a material that is indistinguishable from the human skin. It is possible that at some time this might be done, but even supposing this invention available we should feel there was little point in trying to make a “thinking machine” more human by dressing it up in such artificial flesh. The form in which we have set the problem reflects this fact in the condition which prevents the interrogator from seeing or touching the other competitors, or hearing their voices …The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavor that we wish to include. (Turing 1950, p. 12)
The rough fact remains evident in everyday life, in law, in science, and in myth that the Turing test is a natural.
But while Turing placed no restrictions on the construction and training of Turing Test passers, for the real interest lies in what the construction and training will have to be, Turing was adamant for just that reason that what counts could only be that a real mechanism should REALLY PASS the test. Moreover, as evinced by Turing’s viva voce example in which the machine is supposed to demonstrate a lively grasp of everyday stereotypes, literary criticism, and the sonnet, Turing expected the passer to prove indistinguishable from as broadly talented and articulate a human as could be produced, under as long an examination as wanted, conducted by knowledgeable and probing experts. Purported theoretical counterexamples to the Turing Test, such as those of John Searle and Ned Block, tend to violate the first point; while purported partial practical counterexamples, such as ELIZA and PARRY, radically loosen up the passing standards.
Block (1978) considers a counterexample with a cast of millions who are imagined to instantiate the Turing Machine that represents some normal human’s intellectual capacities. Block considers the claim that this assemblage would be the appropriate, “functionally-equivalent” Turing Machine but that we would hesitate to call it a thinker. John Searle (1980) offers a related argument in which he is to be imagined simulating an intelligent Chinese speaker by reading and sorting, for him, meaningless Chinese symbols; by any construal of his supposed counterexample, it would take Searle hours to respond to simple questions. Block’s and Searle’s counterexamples do not bite on Turing’s views. Block’s assemblage would have neither the reliability nor the speed to have even the feeblest chance of Turing Test passage; Searle would be unmasked at first pass (see Leiber 1992). What you need to refute Turing’s Turing Test is a reasonably detailed description of a physically possible machine which can indeed be supposed to pass the test in real time but which is somehow not really thinking. That is what the counterexamples do not offer. Turing clearly presumed that the natural laws of our universe obtain for any putative Turing test passer. Philosophers’ counterexamples blatantly demand worlds where these laws do not hold, where things happen at speeds and with reliability not even faintly obtainable in the materials and conditions of our universe, worlds in which miracles are continually happening. The counterexamples are miracles, while Turing test passage has to be a practical achievement.
Turing, indeed, clearly held that no machine with the literal Turing machine architecture could ever be a viable candidate for Turing test passage (Turing 1948). Further, Turing certainly speculated that even the much speedier and more reliable electronic computer architecture might not allow us to do the job. Perhaps we will also need dense networks of nodes with adjustable weights, “the best eyes and ears money can buy,” and a suitable training program.
Indeed, Turing even considers adding what we might call the “Frankenstein” option.
[One way] of building a “thinking machine” would be to take a man as a whole and try to replace all parts of him by machinery. He would include television cameras, microphones, loudspeakers, wheels, and “handling servo-mechanisms” as well as some sort of “electronic brain.” … In order that the machine should have a chance of finding things out for itself it should be allowed to roam the countryside, and the danger to the ordinary citizen would be serious. …[A]lthough this method is probably the “sure” way of producing a thinking machine it seems altogether too slow and impractical (Turing 1948: p. 13).
Perhaps Turing calls this “the ‘sure’ way” because it is the evolutionary algorithm. The environment will sanction various versions of the machine, until the experimenter, who has been shuttling back and forth from the drawing board to the environmental testing, converges upon a successful version. No wonder Turing calls it too slow. He does allow that the foresighted experimenter may proceed more expeditiously than mother nature.
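The loop Turing gestures at can be caricatured in a few lines of Python. Everything here is an illustrative assumption: the “machine” is just a vector of weights and the toy fitness function stands in for the environmental testing; only the shape of the loop (mutate, test, keep the better version) is what the text describes.

```python
# A caricature of the "sure but slow" evolutionary procedure: the
# environment scores each version of the machine, and the experimenter
# keeps whichever version the environment sanctions.
import random

random.seed(0)  # deterministic run, for illustration

def fitness(weights):
    # Toy "environment": the closer each weight is to 0.5, the better.
    return -sum((w - 0.5) ** 2 for w in weights)

candidate = [random.random() for _ in range(5)]
initial = list(candidate)

for generation in range(200):
    mutant = [w + random.gauss(0, 0.1) for w in candidate]
    if fitness(mutant) > fitness(candidate):  # environment "sanctions" it
        candidate = mutant                    # keep the improved version
```

The experimenter’s foresight enters where blind nature has none: a designer can bias the mutations, or redesign the candidate wholesale at the drawing board, rather than wait on random variation alone.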
Given the enormous number of steps a Universal Logical Computing Machine must go through in performing relatively simple arithmetical calculations, Turing argues on physical grounds that such machines cannot be constructed unless we severely limit the computations we expect them to realize. Given Brownian movement, there will be no reliability in computations with “very large numbers of steps,” nor, “assuming that a light wave must travel at least 1 cm between steps,” would we be willing to “wait more than 100 years for an answer” (Turing 1948). In reality, Turing argues, we will have to be content with Practical Computing Machines, such as our computers, that exponentially reduce computational steps, thus giving us marked increases in speed and reliability. Today we approach specific forms of the same practical physical limitations in our computing machines themselves. Minimizing signal distance to maximize speed, Cray computers require liquid nitrogen cooling, suggesting that the architectural change to parallel distributed processing might be forced on purely mechanical grounds (and parallel processors have heat problems coming up too; they in fact are normally run on programmable computers). It has been objected that the Turing Test is behaviorist: to ascribe intelligence we will want not only the right input/output relationships but also a sense that the right architecture is inside. Turing’s answer is that the most promising avenue for discovering what’s right about our own architecture is through Turing Test passage research (our armchair architectural intuitions can have at best heuristic value). Engineering and simulation first, theoretical understanding perhaps later. As Professor Chomsky has pointed out,
Engineers knew how to do all sorts of complicated and amazing things for hundreds of years. It wasn’t until the mid-nineteenth century that physics began to catch up and to provide some understanding that was actually useful to engineers. (Chomsky 1988: p. 182)
This is only a reasoned rejection, not a refutation of a priori armchair architectural demands.
Turing began “Computing Machinery and Intelligence” by insisting that if the meaning of the words “machine” and “think” are to be found by examining common usage, then the answer to the question, “Can machines think?” is a Gallup poll.
But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. (Turing 1950).
Turing does not say that we logically must accept Turing test passing machines as thinkers, but that, if
the engineering succeeds, we will and should take them to be such. Respecting Gallup, Turing does
speculate that by the end of the century we can expect, willy-nilly, that usage ascribing intelligence to computers will be commonplace, whether or not Turing test passage is obtained.
As Turing’s engaging insistence on “playing fair with the machine” suggests, Turing happily deployed moral justifications for the Turing Test. He disarmingly introduces the now familiar interrogative format as a game in which a man (A) (“My hair is shingled...”) is trying to simulate the replies of a woman (B) (“I am the woman, don’t listen to him...”). Then he suggests substituting a computer for A (so that, perversely and curiously, the computer’s assignment briefly appears to be that of simulating a man simulating a woman (Moody 1993)). Responding to the claim that computers can’t “pass” because they aren’t “beautiful, friendly, etc.” and don’t “fall in love [or] enjoy strawberries,” Turing’s devastating reply is
The ability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities, e.g., to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, and between white man and black man. (Turing 1950: p. 24)
Over half a century ago, Turing suggested that we might need to construct neural nets, trying out various “initial states” for the Child Machine, and then train these nets with a view to Turing Test passage (Turing 1948, 1950).
Crucially but briskly, Turing also dismissed what he called the “solipsist objection.” Echoing a number of philosophers and some philosophy undergraduates, Turing’s solipsist maintains that the only way to really know that you think is through introspection, through consciously experiencing your own thinking. Turing’s solipsist therefore reasons that since he can’t introspect or experience other people’s thinking, he cannot really know whether anyone else thinks. Turing asks us to imagine that we have built a successful imitation of human intelligence, a Turing test passer. Someone may raise the objection, “Yes, it is an amazing trick, a sort of cognitive wax museum construction, but there is no real thinking going on inside, no consciousness, no inner life like I have.” Turing, momentarily accepting the solipsistic criterion, suggests we should conclude that then the only one who could ever know that the machine thinks is the machine itself, and we of course would not take its word for it. By the same token, however, Turing suggests that his solipsist must also conclude that only I can really know that I think and I can’t know whether anyone else does. Everyone else, then, is in the same position. Turing jokes that we have a “polite convention” of assuming that other humans think. However, if we can’t establish that a computer thinks through its observable behavior, then surely we can’t establish that other humans think either, because all we see is their observable behavior (we can see inside their brains but we don’t of course see thoughts there, just brain tissue; whatever neurology brings us, it won’t be our old familiar little man in the head).
What endures is the native plausibility of the Test, which recalls Rene Descartes’s “respond in a suitable fashion to whatever may be said in its presence.” It provides a most suitable pre-theoretical target, without which we are unable to draw a line between modeling human intelligence and simulating the human brain, neurological and sensory systems, and the rest of the body, with all their biological, psychological, and social features. Refreshingly, Turing’s Turing test sets us an experimental and constructive engineering task. Literal Turing machine architecture maximizes simplicity by making tape length infinite and computational steps of the simplest possible sort, so it is literally wholly impractical for all but the simplest computations. Now our empirical problem becomes one of engineering, one of finding the materials and shortcuts, the many clever hacks, training regimes, and complications that allow each of us to instantiate a Turing Machine with a light-years-long tape in three pints of neurological nets.
A philosophical critic may say, We are also interested in what intelligence or mind IS. Might not engineers even actually build and train a passer WITHOUT knowing what intelligence really IS? Should we not, therefore, preserve a conceptual and philosophical concern with the nature of intelligence (or mind)?
The question what is intelligence? resembles the question what is life? Answers to this latter question (the soul, Aristotle’s vegetative and animating principles, elan vital, etc.) now seem as much irrelevant as mistaken. We know very roughly how the DNA coding-structures arose and how these organism-building instructions for protein structures have been winnowed by evolutionary accident into the vast profusion of particular and peculiar organisms that have teemed on our planet, how ingenious and baroque mechanisms have actually done the job here. While artificial life is a recent coinage, the engineering orientation reminds one of Turing. But while biologists occasionally speculate about whether silicon could play the same biochemical role as carbon in a vastly different planetary environment, or wonder how much structural variation on the double helix or the amino acid alphabet might arise on other earth-like planets, or did arise early in Earth’s history, the cosmic and portentous question what is life?, along with its Gallup Poll answers, now seems lost, and well lost, in the shuffle.
Turing prepared us to regard the equally cosmic and portentous question what is intelligence?, or what is mind?, in a similar way. As his work on morphogenesis and his equating “structure of the Child Machine” with “hereditary material” and “natural selection” with “judgement of the experimenter” suggest, Turing’s biological and cognitive investigations are closely interrelated. Just as biologists speculate as to whether silicon life forms might evolve, so biological and cognitive scientists may wonder whether such organisms could, when winnowed by a suitable environment, evolve to exhibit something comparable to our cognitive processes and powers. Biologists hold that the reproductive peculiarities of the social insects make it nearly inevitable that they evolve into species with elaborate architectural proclivities (hence beehives, ant hills, termite mounds). Perhaps biologists and cognitive scientists may speculate, similarly, that “naturally evolved” intelligence, whether carbon or silicon based, will inevitably build electronic computers. But though these are biological and cognitive questions, they are not questions to which we should expect answers too soon, and still less questions that armchair intuitions can hope to decide. Beyond answers to these sorts of biological and cognitive questions, what else could we reasonably expect to know?
Someone may well reply, But what about consciousness? THAT I understand. It’s not like elan vital, for consciousness is exposed directly to my observation. I know intimately what it is to think and no engineering simulation is going to convince me that a machine thinks. But the cases are parallel. You not only think, you live. Your living is just as exposed directly to your observation as your thinking. Still, your intimate experience of living doesn’t make you an oracular biologist: you are a specimen. The same goes for cognition.
Cognitive science can burgeon as an investigation of engineering possibilities with a merely heuristic interest in simulating cognitive aspects of successful biological models, or additionally aided by an historical account of what has happened to be thrown up biologically and cognitively on this peculiar planet. Turing saw that it might have to be both; that realization is exemplified by his test. Tens of thousands of scientists and philosophers today continue to pursue Turing’s quest through vigorous, far-ranging construction and experiment. As with Turing’s thesis, time has indeed been kind to Turing’s test as currently construed, and it is clearly central to cognitive science. It is not only a test to profitably pursue in all of its special branches and central requirements, but a test to think with as well. It stands well today.
However, as with any such central and portentous scientific development, particularly one that touches our sense of ourselves, Turing’s test, and indeed its author, have been subjected to wild misunderstandings and demonic flights of malicious fancy that far exceed Professor Robinson’s “It would be a simple task.” Although the examples of this are legion, I shall content myself with rebuking some of the characteristic mistakes that Jean Lassegue makes in his recent paper, “What Kind of Test Did Turing Have in Mind?”
[I]s the so-called “Turing test” as objective and scientific as it is claimed to be in the AI and cognitive science community? Do we have to consider it as a threshold one has to cross to be able to enter the realm of a scientific approach to the mind? My answer is “no” because the so-called Turing test [as Turing actually formulated it] says in fact more about Turing’s psychological life than about the science of mind itself. (Lassegue 2002: p. 4)
Informal logic teachers will doubtless groan at this elementary example of the genetic fallacy. If it is true that the Cretan Epimenides, who is credited with originating the liar paradox by saying “All Cretans are liars,” did not actually recognize anything paradoxical in his pronouncement, and in any case gave a loose version of it, this is hardly grounds for denying that the liar paradox has an objective meaning in logic and mathematics (St. Paul, in his letter to Titus, quotes Epimenides (“even one of their own”) as evidence that the Cretans are habitual liars; Paul even adds that Cretans always lie, thus tightening the paradox without in fact suspecting that there is anything paradoxical about what he is writing). If the “Turing test” has a clear, useful, and established sense in cognitive science, it is simply irrelevant to cognitive science whether Turing mis-formulated it originally or expressed it bizarrely through some unconscious, self-revealing, pathological cause. But he didn’t do that either.
After dismissing the question “Can a computing machine think?” as unsuitably vague, Turing proposed to operationalize the issue by substituting what he called the imitation game, the genesis of the Turing test. But he first introduced a preliminary version (game 1) in which a man and a woman compete, and then substituted the machine for the man (game 2). I will quote the same lines that Lassegue quotes, but I will also insert and italicize the material that Lassegue crucially leaves out, material he replaces with three dots, thus sanctioning his perverse misreading. Additionally, I have bold-faced the most crucial excision.
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A [the male], then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be:
My hair is shingled, and the longest strands are about nine inches long.
communicate between the two rooms. The object of the game for the third player (B) [the woman] is to help the interrogator. The best strategy for her is probably to give truthful answers ...
Lassegue then resumes quoting Turing:
We now ask the question, “What will happen when a machine takes the part of A [the male] in the game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?” These questions replace our original, “Can machines think?”
Having finished his doctored Turing quote, Lassegue now triumphantly pounces on Turing:
Let us go back to Turing’s description of his first game and to the strategies displayed ... imitation but on the contrary to an unsuccessful one, that is to say to an unreciprocal ...
Here is how Turing describes the woman’s strategy:
The best strategy for her is probably to give truthful answers. She can add such
things as “I am the woman, don’t listen to him!” to her answers, but it will avail nothing as the man can make similar remarks.
What can we infer from this description? That the woman will at once be recognized by the interrogator and that the match will end immediately. If the strategy followed by the woman is to tell the truth about herself without imitating the man’s behavior, this truth will become very quickly apparent to the interrogator.
Why should the woman speak the truth? It is obviously a very bad strategy, since it should be characterized, more than anything else, as an absence of strategy ... In Turing’s article the odds are weighed too heavily against the woman and this fact must somehow be explained. (Lassegue 2001, p. 9).
But of course, as Turing very clearly stated in the sentence Lassegue left out, the woman’s task is to help the interrogator! This is exactly parallel to her task of helping the interrogator in the second game, where a computer is substituted for the man. So, pace Lassegue, the strategy Turing recommends for her is not at all a very bad strategy, one so bizarre that Turing’s recommendation cries out for psychological explanation. To the contrary, the strategy that Turing suggests is clearly the best possible strategy and, again in parallel, exactly the right strategy for her (or any human) to follow in the second game. The natural way for the interrogator to start the full-blown test is familiar to real professional interrogators: What is your name?, Where were you born and when? ... You’ve told us that you grew up living at 526 Chartres St., New Orleans. Tell us about the buildings near your home then. ... Can you hum “When the Saints Go Marching In”? ... Didn’t you previously say that your boyfriend’s name was Xavier? ... What did you say your address was in New Orleans?
It is elementary that when you want the authorities to believe that you really are you, you don’t start making things up! This maxim also applies when the interrogator, in the second game, not only asks you personal questions but more general ones. The best strategy for the woman will be to give the most competent answer she can manage rather than giving lots of ad hoc, deliberately stupid answers, for it will be much easier for a computer to simulate dumbness than human competence. It is, for example, child’s play to program a computer to simulate humanlike mistakes and slowness in mathematical questions but extraordinarily hard to display the talents humans characteristically deploy in ordinary conversation over a range of topics (it certainly hasn’t been done). To repeat Turing, who makes the method’s roots in interrogation techniques clear, “The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavor that we wish to include.”
Even a spy doesn’t make things up if this can be avoided. There is nothing like actually being a practicing plastic surgeon (named Mallory) for successfully pretending to be a practicing plastic surgeon who goes by the name Sorge and is really secretly reporting political gossip among his patients to the CIA. Fake as little as possible, because the truth is much easier to keep track of and make convincing. So the man in Turing’s first interrogation game would do best to leave all the details of his own life unchanged except those that must be changed to maintain the impression that he is a she. In the second case, the computer has to put together out of whole cloth the complex identity structure of a person with characteristically human competencies and beliefs, along with characteristically human performance mistakes, prejudices, and emotions.
Turing takes personal narrative as basic. It is certainly quintessential in his first sketch in the 1950 paper. Personal narrative aptly fits his general formulations of the test. Turing always takes it that the machine must pass as a particular human identity, not as some reified human in general, although the machine must also exhibit multiple talents that are characteristically human, talents grounded especially in natural language and folk psychology (in his second exemplary interrogation in the 1950 paper, Turing seems to call for a refined sense of metaphor in English and a passing acquaintance with Charles Dickens’ novels). Turing’s formulation also seems apt because personal narrative is found not only in interrogations but in a large, and often vital, portion of our daily interchanges with other people (and with ourselves) and in our oral story-telling, our novels, plays, movies, operas, and TV narratives of all sorts. No one seeks to pass as a human in general. We have no training in that enterprise, while we have scads of experience in individual self-narrations, whether respecting ourselves or others.
Lassegue also tries to suggest another way in which the first test and the second are suspiciously dis-analogous. He points out that if, in the first test, the man pulls off his imitation of a woman, this doesn’t in any way establish that he is a woman, while in the second test, passage by the computer is supposed to establish that the computer is, genuinely, a thinker or intelligent. But this is silly. Turing’s whole point is to distinguish physical traits from mental ones. As Turing emphasizes, “playing fair with the machine” is like blind refereeing or putting an opaque screen between musical performers and their judges. If the man in the first test passes, then it establishes that he can think like (or can cognitively/affectively pass as) a woman. Similarly, just as the man’s passage in the first test does not of course establish that he is a woman, the computer’s passage in the second test does not of course establish that the computer is a woman (or a human being). And again, in precise analogy, what the computer’s passage of the second test would establish is that it can think like (or can cognitively/affectively pass as) a woman (or more generally, a human being).
Turing, who was no human chauvinist, also makes it amply clear that passage would be a sufficient condition for being intelligent, not at all a necessary one. It might well be, he suggests, that a computer might be produced that really ought to be considered intelligent but would not be able to pass the test. It is perhaps also important to add that while Turing speaks of “playing fair with the machine,” it would be absurd to think, as Lassegue seems to argue, that Turing identified himself with the machine or with “pure intelligence.” In fact, as his biographer makes clear, Turing felt contempt for the airs put on by upper-middle-class Englishmen who prided themselves on their capacity for “abstract thought”; thinking machines would take them down a peg (Hodges 1983).
His perverse misreading now allows Lassegue to go on, preposterously, to write,
Little by little, a kind of hierarchy in the players’ responses emerges from the text: the woman imitates herself (that is why she is a poor player), the man imitates the woman (he is therefore a better player than the woman). What about the machine which replaces the man in the second game? (Lassegue 2001: p. 10)
Lassegue’s response to his wholly contrived and mis-premised question is that the computer, when passing, really shows itself, and intelligence itself, to be male. To the contrary, and to insist on the literal reading of Turing that Lassegue officially demands (but hardly follows), what the computer has to do to pass, to prove it is a thinker, is to be cognitively indistinguishable from a human female. (I am reminded of one of my daughter Casey’s first sentences, in her twenty-sixth month, “I am a person,” which elicited a matronly “Of course you are, my dear” from the next table in a Cotswold restaurant. Baffled, and seeking to understand what this Cartesian announcement could possibly mean, I asked “Is Mommy a person?” and after some repetition eventually got a lisped “yesh.” My further inquiry, “Is Daddy a person?” eventually elicited a “No.”)
For an amusing account of a related attack on Turing that also fizzled, consult Leiber (1992: pp. 127-28). In Ian McEwan’s TV drama, The Imitation Game, protagonist Cathy Raine, hoping to do her bit despite the granite assurance of her male chauvinist society that she is a decorative, know-nothing non-entity, becomes one of the hundreds of Bletchley women who, in complete ignorance of what they are really doing, endlessly record and transcribe the radio signals that feed the fledgling electronic brains that “Turner” and other males in Hut Eight direct. By seducing Turner, Raine manages to learn what is really going on, and as “the woman who knows” she is locked up for the War’s duration (McEwan also cannot resist having Turner prove impotent in the climax of his tête-à-tête with Raine). Indeed, McEwan even has Turner speak several lines from Turing’s 1950 paper as if they were Turner’s own thoughts, and these lines are apparently intended to unmask Turner as a sexist, misanthropic machine-lover. In fact, however, the real Hut Eight contained a female mathematician who was perfectly aware of what was going on and, like Turing, a graduate of Cambridge University; Turing eventually proposed marriage to her, and the two retained a friendly relationship long after she withdrew from the engagement (Hodges 1983).
Over the past fifty years computers have grown almost immeasurably in speed and memory capacity, and all the methods Turing suggested have been zealously pursued (except the Frankenstein option), with much inevitably learned about human intelligence. But the goal of Turing test passage now seems ever more formidable in just the area – human language and the folk psychology it embodies in personal narrative – that the Turing test makes central. The magnitude of the difficulty is now perhaps most dramatically obvious in the much simpler but related task of machine translation between human languages. After a half century of concerted effort by linguists, psycholinguists, programmers, and indeed cognitive scientists of every ilk, the best that can be said of current machine translations is that by scanning the often absurd muddle for key words, and the occasional viable phrase or even intelligible sentence, you may be able to guess whether it will be worthwhile to call in a human translator. And something like the largely tacit and profoundly human knowledge that the translator employs must be available to a Turing-test-passing machine. That and ever so much more, for the machine must not only be able to manage coherent narrative but also must be able to flesh it out in every conceivable way in response to skilled, real-time interrogation. Of course, the translation machine’s success can be better with highly restricted technical writing. Failure is most complete with the words and constructions of what Ludwig Wittgenstein called the “old city,” the core words and narrative structures, and personal knowledge, common to speakers of a natural language.
In his 1948 paper, Turing lists five areas he thinks will prove most fruitful for intelligence research:
(i) Various games, e.g., chess, noughts and crosses, bridge, poker
(ii) The learning of languages
(iii) Translation of languages
(iv) Cryptography
(v) Mathematics
In a passage sometimes eliminated when the familiar 1950 paper is anthologized, Turing specifically suggests attempting to simulate “the initial state of the mind, say at birth” and then giving “the Child Machine an education” (Turing 1950, p. 31). He points out that the experimenter would not be likely to happen on an appropriate Child Machine simulation immediately but would have to try out various possibilities, and he compares this process with biological evolution through the following equations (anticipating rather similar formulations in Noam Chomsky’s radical reformulation of linguistic theory (Chomsky 1957) and in subsequent work in modular, mentalist psychology):
Structure of the Child Machine = hereditary material
Changes of the Child Machine = mutations
Judgement of the experimenter = natural selection. (Turing 1950/1963, p. 32)
Turing pointed out that the experimenter can proceed rather more quickly than natural selection, making carefully planned and rapid changes. Similarly, Turing supposes that a “blank tablet at birth” animal is wholly improbable, so the successful “Child Machine” will have plenty of native structure (Turing 1948). I hasten to add, as his use of the locution “the initial state of the mind, say at birth” suggests, that Turing, anticipating Chomsky, does not mean just the genes (i.e., DNA) by “hereditary material” but rather the child’s full cognitive apparatus insofar as this is attributable to natural growth and development as opposed to peculiarities of the local environment. Although evolutionary biologists sometimes appear to suggest that DNA does all the work (even providing a “blueprint” of the mature organism), DNA just provides templates that cell protein structures use to make more proteins and to join them in more complex configurations; these soon spin out multicellular structures that are the beginnings of organs, which as structured wholes direct differentiation into a couple hundred cell types placed into complex larger structures (the first and master developer is the nervous system, which directs the development of the fetus, starting a few days after the zygote is attached to the womb). “Say at birth” leaves Turing wiggle room, for he of course recognizes that the child at birth is not through with what Turing called the “morphogens” of development (Turing 1952). If anything, subsequent work has made it even clearer that lots of biologically directed growth, rather than general purpose learning, is needed.
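Read as an algorithm, Turing’s three identifications describe an evolutionary search loop in which the experimenter’s judgement plays the role of selection. The following sketch is purely illustrative: the parameter-vector “structure,” the target behavior, and the scoring function are my own toy assumptions, not anything in Turing’s text.

```python
import random

random.seed(0)

# Toy stand-ins for Turing's identifications:
#   structure of the Child Machine = hereditary material -> a parameter vector
#   changes of the Child Machine  = mutations            -> random perturbations
#   judgement of the experimenter = natural selection    -> a fitness score
TARGET = [3, 1, 4, 1, 5]      # hypothetical behavior the experimenter wants

def judge(child):
    """Experimenter's judgement: 0 is perfect, more negative is worse."""
    return -sum((c - t) ** 2 for c, t in zip(child, TARGET))

def mutate(child):
    """A small random change to the hereditary material."""
    new = list(child)
    new[random.randrange(len(new))] += random.choice([-1, 1])
    return new

child = [0, 0, 0, 0, 0]       # the initial state, "say at birth"
for generation in range(500): # planned change, far faster than nature
    variant = mutate(child)
    if judge(variant) >= judge(child):
        child = variant       # keep only changes the experimenter approves

print(child)
```

Because only approved mutations survive, the “Child Machine” climbs steadily toward whatever behavior the experimenter rewards, just as Turing’s analogy suggests.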
Of course Turing loved his machines, both theoretical and practical, but it was the same love he much earlier formed for the tubular hydra, for the explosion of dappling in leopards, and for the unfolding of the growing leaf pattern (phyllotaxis) on many plants whose expanding distribution came out in the Fibonacci numbers (1, 1, 2, 3, 5, 8, 13, 21, etc.) – a love of nature’s machines to which he would specifically return in the embryological work of the last few years of his life, finally returning to the puzzle about daffodils that led him to science at age ten (Hodges 1989). What continues to be refreshing about Turing’s Turing test is that it endorses an experimental and constructive engineering task, a proper change indeed from twenty-five hundred years of introspective chatter and anecdotal observation. As La Mettrie, the first modern mechanist, remarked, insisting that the task is indeed empirical,
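The sequence Turing kept meeting in phyllotaxis is easy to exhibit. A minimal sketch (the golden-angle connection is standard phyllotaxis lore rather than a claim from Turing’s papers):

```python
def fib(n):
    """Return the first n Fibonacci numbers, starting 1, 1."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print(fib(8))  # [1, 1, 2, 3, 5, 8, 13, 21]

# Successive ratios approach the golden ratio phi; the related "golden
# angle" 360 * (1 - 1/phi) ~ 137.5 degrees is the divergence angle at
# which successive leaf primordia typically appear around the stem.
phi = (1 + 5 ** 0.5) / 2
golden_angle = 360 * (1 - 1 / phi)
```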
Man is a machine so complicated that it is impossible at first to form a clear idea of it, and consequently to describe it. This is why all the investigations the greatest philosophers have made a priori, that is by wanting to take flight with the wings of the mind, have been in vain. Only a posteriori, by unraveling the soul as one pulls out the guts of the body, can one, I do not say discover with clarity what the nature of man is, but rather attain the highest degree of probability possible on the subject (L’Homme Machine 1748/1994: p. 30).
Alan Turing has done more than anyone to give a clear mathematical, theoretical, and constructive foundation to a further rhapsody of La Mettrie, whose erotic attitude toward natural and artificial machines knew no bounds:
Man is to apes and the most intelligent animals what Huygens’ planetary pendulum is to a watch of Julien le Roy. If more instruments, wheelwork, and springs are required to show the movements of the planets than to mark and repeat the hours, if Vaucanson needed more art to make his flute player than his duck, he would need even more to make a talker, which can no longer be regarded as impossible, particularly in the hands of a new Prometheus. To be a machine, to feel, think, know how to distinguish good from evil like blue from yellow, in a word, to be born with intelligence and a sure instinct for morality, and yet be only an animal, are things no more contradictory than to be an ape or a parrot and know how to find sexual pleasure.... Thought is so far from being incompatible with organized matter that it seems to me to be just another of its properties, such as electricity, the faculty of motion, impenetrability, extension, etc. (1748/1994: pp. 69-71)
There is evidence that Turing went as far in his enthusiasm for natural and artificial machines as La Mettrie, only the less effusive Turing was not inclined to publicly display his affection.
In La Mettrie’s Man a Machine, a cardinal example for La Mettrie’s mechanistic views is “Trembley’s polyp” (Chlorohydra viridissima), a tubular, tentacled, freshwater coelenterate (Trembley 1744). Diderot makes the same example central in his D’Alembert’s Dream for the same reasons. The polyp, which can be cut into scores of pieces, each growing into the mature form, demonstrates asexual reproduction and suggests that life is a natural property of matter, not passive stuff conjured into life by an active, Aristotelian form (or soul). Since Turing seems not to have been familiar with La Mettrie, it is remarkable that one of the five biological phenomena that Turing modeled is the development of the freshwater polyp, specifically the Hydra.
In “On Computable Numbers,” Turing tackles a fundamental and well known mathematical problem that is also a problem about doing mathematics, about computing as a kind of thinking. While what he came up with is highly original and widely fertile, it was immediately recognized as good mathematics. In his Turing test papers, Turing briskly sets a goal for cognitive science, elucidates a variety of ways to approach this goal, and helps design, build, and program the first generation of digital electronic computers. Prometheus-like, he sees himself in a line from Charles Babbage and Mary Shelley back to the earliest materialists. But neither “On Computable Numbers” nor “Computing Machinery and Intelligence” explicitly acknowledges a precursor figure or movement. In “On the Chemical Basis of Morphogenesis” and in his related papers and notes, Turing explicitly presents his work as flowing out of D’Arcy Wentworth Thompson’s materialist and anti-adaptationist views and Thompson’s magisterial exposition of purely physical and chemical explanation in his On Growth and Form (Thompson 1917).
Broadly speaking, the Thompson/Turing biological tradition goes back to the early Greek naturalists such as Empedocles and Democritus, and their followers such as Lucretius, who rebuked intentional, functional, and teleological biological explanations (Thompson 1917). Growth and biological form, insofar as they can be scientifically characterized, arise through physical and structural necessity and chance, leaving talk of proper function, purpose, goal, and design aside. Thompson/Turing regard teleology, evolutionary phylogeny, natural selection, and history as irrelevant distractions from fundamental biological explanation. As Turing puts his general project, his new ideas were sought to “defeat the argument from design” (Hodges 1983, p. 431). Turing is not referring to William Paley’s watchmaker argument for the existence of God, an argument long before displaced by Lamarck and Darwin. Darwin endorsed Aristotle’s biological work, writing that his “idols,” Cuvier and Linnaeus, seemed “mere school boys” compared to “old Aristotle.” Darwin cut his biological teeth in rapt fascination with Paley’s detailed teleology, whose designed-ness Darwin in no way wished to dispel from biological descriptions but sought to derive, following a line of thought established by Aristotle, through mother nature’s rather than God’s selections (in Aristotle, the god of nature intends the designs it constructs to reach the various goals it designs them for, and this intending and constructing is efficient causality in Aristotle’s original sense). Indeed, as Darwin insists in his Autobiography, he intended Origin of Species to express the Deism he endorsed when he wrote it, not the atheism he eventually adopted. Life’s initial conditions were created by God, whose design unfolded from thence into mother nature’s tree of life, all in keeping with God’s intention (Darwin 1958).
Turing, rather, endorses D’Arcy Wentworth Thompson’s view that the teleological “evolutionary explanations” endemic to Darwinian “adaptationist” biology are non-fundamental, fragile, misdirected, and at best mildly heuristic (Thompson 1917). One of Thompson’s favorite examples was “heliotropism,” an instinctive striving toward the sun attributed to the leaves of plants by adaptationist biologists. Once the simple stem growth mechanisms that incline leaves toward maximum sun exposure are known, “heliotropism” disappears from biological vocabulary. But as Thompson warns us,
Time out of mind it has been by way of the ‘final cause’, by the teleological concept of end, of purpose or of ‘design’, in one of its many forms (for its moods are many), that men have been chiefly wont to explain the phenomena of the living world; and it will be so while men have eyes to see and ears to hear withal. … We are told that teleology was ‘refounded, reformed and rehabilitated’ by Darwin’s concept of the origin of species; for, just as the older naturalists held that ‘the make of every kind of animal is different from that of every other kind; and yet there is not the least turn in the muscles, or twist in the fibres of any one, which does not render them more proper for that particular animal’s way of life than any other sut or texture of them would have been’: so, by the theory of natural selection, ‘every variety of form and color was urgently and absolutely called upon to produce its title to existence either as an active useful agent, or as a survival’ of such active usefulness in the past. … So long and so far as ‘fortuitous variation’ and ‘survival of the fittest’ remain engrained as fundamental and satisfactory hypotheses in the philosophy of biology, so long will these ‘satisfactory and specious causes’ tend to stay ‘severe and diligent enquiry … to the great arrest and prejudice of future discovery.’ (Thompson 1917).
Or as a biologist recently put it, “The primary task of the biologist is to discover the set of forms that are likely to appear [for] only then is it worth asking which of them will be selected” (Saunders in Turing 1992, xii). And Turing meant not only to steer clear of forward-looking teleology but also of backward-looking talk of efficient causality in Aristotle’s sense, which would distinguish two chemically identical molecules, or two chemically and structurally identical organisms, if one were produced “naturally” and the other in the laboratory. Nor would Turing allow that the biological description of a particular organism is crucially incomplete or indeterminate if several selective descent pathways might have led to it, which one actually occurred being possibly a mere historical accident but nonetheless supposedly part of its biological description. Similarly, to draw on Chomsky’s example, it is common sense that the H2O that comes from my tap is “water” even though it may have more tea in it from the Lipton plant leakage than the weak “tea” I brew for myself, using hot distilled water briefly flavored with a sliver of tea leaf. The intentional idiom does not lend itself well to natural science.
As Turing wrote,
Unless we adopt a vitalistic and teleological conception of living organisms, or make extensive use of the plea that there are important physical laws as yet undiscovered relating to the activities of organic molecules, we must envisage a living organism as a special kind of system to which the general laws of physics and chemistry apply. And because of the prevalence of homologies of organization, we may well suppose, as D’Arcy Thompson has done, that certain physical processes are of very general occurrence [Turing follows with a specific instance]... What is novel in the theory is the demonstration that, under suitable conditions, many diffusion reaction systems will eventually give rise to stationary waves; in fact to a patterned distribution of metabolites. (Turing and Wardlaw 1953/1992: p. 45)
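Turing’s claim – that a chemistry which is stable in a well-stirred beaker can, once diffusion is added, amplify spatial waves of a preferred wavelength – can be checked with a little linear algebra on the standard two-morphogen linearization. The numbers below are hypothetical illustrative values chosen to satisfy the Turing conditions (the inhibitor diffusing much faster than the activator); they are not constants from Turing’s 1952 paper:

```python
import math

# Reaction Jacobian at the homogeneous steady state for an
# activator u and inhibitor v (illustrative values only).
J = ((1.0, -1.0),
     (2.0, -1.5))
Du, Dv = 1.0, 20.0  # the inhibitor must diffuse much faster

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 * diag(Du, Dv)."""
    a = J[0][0] - Du * k * k
    d = J[1][1] - Dv * k * k
    b, c = J[0][1], J[1][0]
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        return (tr + math.sqrt(disc)) / 2.0
    return tr / 2.0  # complex pair: real part is tr/2

# Without diffusion (k = 0) the chemistry is stable; with diffusion,
# a band of spatial wavenumbers grows, so a pattern with a preferred
# wavelength (a "stationary wave") emerges from a uniform state.
rates = {k / 100.0: growth_rate(k / 100.0) for k in range(1, 200)}
k_star = max(rates, key=rates.get)
print(f"fastest-growing wavenumber ~ {k_star:.2f}")
```

This is the dispersion-relation calculation behind Turing’s “demonstration”: stability at wavenumber zero plus instability in a finite band is exactly what produces a patterned distribution of metabolites.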
The anti-teleological, morphological tradition that Thompson and Turing articulate, and exemplify, proximally goes back to Étienne Geoffroy Saint-Hilaire, who, in a month-long debate before the Académie des Sciences in 1830, maintained the unity of type thesis that all structured multicellular animals have the same ground plan (Bauplan) against the functionalist conditions of existence of Georges Cuvier, Darwin’s idol. Poet-naturalist Johann Wolfgang Goethe also felt party to the debate, since he maintained that plant appendages – carpels, stamens, petals, sepals, and leaves – are all metamorphoses of a kind of ur-leaf. Work by embryological and molecular geneticists in the last decade extravagantly confirms the claims of Geoffroy and Goethe. With one trifling exception – Bryozoa – all twenty-odd animal phyla appeared within a few score million years in the great Cambrian explosion of life forms, as if nature were quick to run through all the basic possibilities of the animal type in less than 5% of the time there have been animals on earth. More substantially, it appears more and more likely that all animal phyla are variations of the same structural plan and use virtually the same homeobox “master genes” and proteins to determine segmentation and segmental identity.
[W]e have accumulated more and more evidence that the same homeobox genes are used in both vertebrates and invertebrates to specify the body plan and that the mechanisms of the genetic control of development are much more universal than anticipated. (Gehring 1998: p. 53).
Parallel results appear in the study of plants. Goethe’s rhapsody has been realized. Carpels, stamens, petals, sepals, and leaves merely vary the ur-leaf spun out by homeobox structural genes and their protein employers (Coen 1999). Geoffroy’s unity of type has received an extraordinary and, for evolutionary biologists, most unexpected confirmation.
In the 1970s, Stephen Jay Gould and Richard Lewontin had gently used many of Thompson’s arguments in an attempt to reintroduce morphology and the Bauplan into English-speaking evolutionary biology, most controversially in “The Spandrels of San Marco” (Gould & Lewontin 1979). Anglo-American evolutionary biologists greeted their proposals, and their skepticism about selectionist explanations, with even greater skepticism and scorn for their supposedly wooly-headed, anti-empirical morphologizing, a willful blindness supposedly motivated by left-wing disdain for E. O. Wilson’s Sociobiology and its hereditarian and selectionist views. Gould and Lewontin could hardly have anticipated the last two decades of biological research.
More specifically, recent embryological and morphological research on mammalian brains suggests that the human brain is simply a scaled-up version of the primate brain, its specific capacities perhaps spandrels, or accidental byproducts, of general growth rather than traits specifically winnowed through the protracted chiseling of natural selection.
In 1975, E. O. Wilson mined the behavioral sciences, anthropology particularly, for common human traits, which he then co-optively proceeded to explain sociobiologically. Over the last decade, the burgeoning, modular cognitive sciences have been similarly co-opted by evolutionary psychologists, who hold that the modules are complex cognitive adaptations, sculpted gradually by natural selection over the Pleistocene hunter-gatherer era. Stephen Jay Gould and Noam Chomsky have been scorned and ridiculed for suggesting that, perhaps, some of our human native competencies may be the fortuitous byproduct of the increase in size of the hominid brain from 500 cc to 1500 cc during the last 2 to 3 million years, an increase that has been termed by far the quickest large quantitative evolutionary change in the history of mammals (Chiarelli 1996). However, just as with the general Bauplan story, recent allometric and embryological research on primate brain anatomy calls the evolutionary psychologists’ position into serious question. Barbara Finlay and Richard Darlington, and others, have marshaled evidence to show that, with the trifling exceptions of the medulla and the olfactory bulb, the human brain is just a scaled-up primate and mammalian brain, the proportions among its parts conserved, along with the relative enlargement of the isocortex, all quite predictable on embryological grounds (from the homeobox gene- and protein-induced segmentation of the neural tube and the delayed “birthday” of the isocortex-forming cells). To quote from their recent article in Behavioral and Brain Sciences,
How does evolution grow bigger brains? It has been widely assumed that the growth of individual structures and functional systems in response to niche-specific cognitive challenges is the most plausible mechanism for brain expansion in mammals. Comparison of multiple regressions on allometric data for 131 mammalian species, however, suggests that for 9 of 11 brain structures taxonomic and body size factors are less important than covariance of these major structures with each other. Which structures grow biggest is largely predicted by a conserved order of neurogenesis that can be derived from the basic axial structure of the developing brain. This conserved order of neurogenesis predicts the relative scaling not only of gross brain regions like the isocortex or mesencephalon, but also at the level of detail of individual thalamic nuclei. Special selection of particular areas for specific functions does occur, but it is a minor factor compared to the large-scale covariance of the whole brain. The idea that enlarged isocortex could be a "spandrel," a by-product of structural constraints later adapted for various behaviors, contrasts with approaches that look to selection of particular brain regions for cognitively advanced behaviors, as commonly assumed in the case of hominid brain evolution. (Finlay, Darlington, Nicastro 2001)
The mammalian olfactory bulb is abnormally large in insectivores, prosimians, and bats, and abnormally small in simians. The medulla, the "lowest" part of the brain, which connects it with the brain stem, is also an outlier. But otherwise growth is entirely coordinated, so that, for example, if we imagine a Pleistocene scenario that selects for an enlarged hippocampus, that selection will also produce coordinate growth throughout the brain, as would selection on any of the eight other brain structures. You might call this "indeterminacy of selection," because there apparently is no determinate chemical or physiological fact of the matter. If an animal has an unusual feature that ministers to one significant need, such as the giraffe's disproportionately long neck, we may conclude it is probable that longer-necked proto-giraffes were environmentally selected for their superior leaf-reaching ability. Nothing in the structure and physiology of current giraffes need determine whether the history of life on earth happened to cast this as the cause, rather than, say, the taste of alien visitors who culled all the short-necked giraffes for several thousand years for some gustatory or aesthetic reason. The first giraffe story is just much more plausible than the second, given what we know of earth's history (viz., that it is highly improbable that there have been such alien visitors). But when nine brain structures grow in lockstep, there may be indefinitely many stories to tell about which environmental features pressured what combination of such structures.
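The logic of Finlay and Darlington's regression argument can be sketched with a toy simulation: if each structure's (log) size tracks an overall brain-size factor with a conserved scaling exponent, then regressing structure size on overall size across species leaves almost no residual variance for niche-specific selection to explain. Everything below is synthetic; the structure names and exponents are invented for illustration, and only the sample size of 131 species echoes the actual study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_species = 131  # the study compared 131 mammalian species

# Latent overall brain size (log units) varies widely across species.
overall = rng.uniform(0.0, 6.0, n_species)

# Hypothetical conserved scaling exponents for three structures; the small
# noise term stands in for structure-specific (niche-driven) selection.
exponents = {"isocortex": 1.4, "cerebellum": 1.2, "hippocampus": 0.9}

for name, k in exponents.items():
    log_size = k * overall + rng.normal(0.0, 0.15, n_species)
    slope = np.polyfit(overall, log_size, 1)[0]      # fitted allometric slope
    r2 = np.corrcoef(overall, log_size)[0, 1] ** 2   # variance explained
    print(f"{name}: fitted slope {slope:.2f} (true {k}), R^2 {r2:.3f}")
```

Real allometric data are of course messier, and the full analysis also regresses on taxonomy and body size; the point is only that when structures covary this tightly with overall size, structure-specific selection has little residual variance left to explain.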
Our brain grew as a coordinated whole, and each part is separately available to foster the various cognitive competencies that have allowed us to flourish recently (and excessively). Indeed, Finlay and Darlington suggest this may explain why, although the human brain has been much the same for 200,000 years, human culture and technology changed very little for much of this period, only to explode through the last 40,000 and especially the last 10,000 years. Perhaps we were not designed, or adaptively selected, to display any of these recent cognitive, cultural, and technological competencies. Finlay and Darlington might have added that just as an environmental change may call forth or augment a particular trait or competency, so too the development of a new trait or competency may itself become a prerequisite environment for the development of some further trait or competency. We just got an enormously but proportionately enlarged brain -- the spandrel to end all spandrels -- and have taken quite a while, in various ordered stages, to work out what can be done with it exaptively.
A few years ago, Ernst Mayr proclaimed Darwin the greatest thinker of modern times for proposing four original theses as a foundation for evolutionary biology.
The first is the non-constancy of species, or the modern conception of evolution itself. The second is the notion of branching evolution, implying the common descent of all species of living things on earth from a single unique origin. Up until 1859, all evolutionary proposals, such as that of naturalist Jean-Baptiste Lamarck, instead endorsed linear evolution, a teleological march toward greater perfection that had been in vogue since Aristotle's concept of Scala Naturae, the chain of being. Darwin further noted that evolution must be gradual, with no major breaks or discontinuities. Finally, he reasoned that the mechanism of evolution was natural selection. (Mayr 2000: p. 80)
Historically, of course, Lamarck and several others anticipated Darwin on "evolution itself." Further, although he indeed adopted a linear or ladder account in Philosophie Zoologique (1809), Lamarck clearly and forcefully, and for the right reasons, converted to the branching tree in his masterwork, Histoire Naturelle des Animaux sans Vertèbres (Lamarck 1815-1822), a good ten years before the Beagle sailed. Moreover, in much of his writing Darwin himself expresses a progressivist "advancing toward perfection," or "higher forms," viewpoint. Philosophically, Lamarck was in some ways a more consistent and principled materialist than Darwin, who did think the Origin of Species supported Deism; indeed, Darwin "justified" his later atheism not by materialism but by the reflection that a morally responsible God could not have created this world, while a morally irresponsible God is apparently unthinkable (Darwin 1958).
Darwin is routinely celebrated in Anglo-American high school texts for refuting the Lamarckian "inheritance of acquired characteristics." Nonetheless, in every edition of the Origin of Species, Darwin endorses Lamarck's claim that the direct influence of the environment, and also an organism's "use-disuse" of its parts, play an important role in evolution, although Darwin generally holds their role to be secondary to natural selection. Indeed, Darwin blinked when faced late in life with physicist Lord Kelvin's startlingly new estimate that the Earth became habitable no more than twenty million years ago, so that natural selection apparently did not have the hundreds of millions of years thought necessary to evolve its present life forms. Hence, Darwin radically revised upward his estimate of the role "inheritance of acquired characteristics" played in evolution. Indeed, he invented his spurious theory of pangenesis to supply the mechanisms that would make "use-disuse" inheritance prodigally rapid in its effects. (Pangenesis proposes that individual cells all over the body experience "use-disuse" and chemically communicate their changes to the reproductive cells; Francis Galton soon conducted experiments that severely and correctly criticized Darwin's particular version of pangenesis (Darwin 1871, Galton 1871).)
Intellectual history aside, how do Darwin's branching tree and his gradualism look today? It now appears that for well over two-thirds of the evolution of life on earth, life evolved more through lateral transfer of genes and gene sets across species or even "domain" barriers than through vertical, branching descent (and this Lamarckian form of evolution of course continues today among single-celled organisms). We have a lattice of life, not a tree. As the author of "Uprooting the Tree of Life" sums up the accelerating research of the last three decades:
The most reasonable explanation for these various contrarian results is that the pattern of evolution is not as linear and treelike as Darwin imagined it. Although genes are passed vertically from generation to generation, this vertical inheritance is not the only important process that has affected the evolution of cells. Rampant operation of a different process -- lateral, or horizontal, gene transfer -- has also affected the course of that evolution profoundly. Such transfer involves the delivery of single genes, or whole suites of them, not from a parent cell to its offspring but across species barriers. (Doolittle 2000: p. 94)
While biologists now speak of the three life domains of bacteria, archaea, and eukaryota, with the last domain eventually giving rise to the modern structured multicellular kingdoms of organisms of the Cambrian explosion, biologists suspect there must have been even less distinct precursors of the domains. Carl Woese, who developed the technique for determining phylogeny through genetic difference that grounds the domain and lateral-transfer view, recently dismissed Mayr's Darwinian demand for a unique ancestral organism from which all life branched.
The ancestor cannot have been a particular organism, a single organismal lineage. It was communal, a loosely knit, diverse conglomeration of primitive cells that evolved as a unit, and it eventually developed to a stage where it broke into several distinct communities, which in their turn became the primary lines of descent [bacteria, archaea, and eukaryotes]. (Woese 1998)
In 1966, microbiologist Lynn Margulis proposed that the eukaryote cell arose not through the gradual whittling of natural selection but through instantaneous lateral transfer -- ingestion -- of whole gene sets that settled into a symbiotic relationship as organelles inside their host. In particular, she claimed that mitochondria, the power plants or respirators necessary to the eukaryote cell, stem from such a symbiotic ingestion (mitochondria still retain some DNA of their own and are now thought to stem from an alpha-proteobacterial cell that settled in for a long stay). Similarly, the chloroplast organelles vital to plant life stemmed from ingested cyanobacteria that survived, indeed prevailed, symbiotically. Given the Darwinian emphasis on gradualism and natural selection, it is perhaps no wonder that Margulis's first paper propounding endosymbiosis received fifteen rejection slips before it was finally published in 1966 (Margulis 1998: p. 29); the problem, of course, was that adaptationist reasoning held sway, and so Margulis had to be wrong. Reviving "Lamarckian" views, Margulis argues that these two cases, now well established, are absolutely clear cases of the inheritance of acquired characteristics: not the rightfully discredited pangenetic "use-disuse," but the acquisition of a heritable characteristic through the direct action of the environment, in particular through an ingested environmental bacterium (Margulis 1998). Literally, Margulis is surely right. Although her claims about the origins of mitochondria and chloroplasts are now wholly accepted, and their extension to explain the origin of other organelles proceeds apace, most Anglo-American biologists simply cannot stomach the phrase "inheritance of acquired characteristics" except as something that must be false.
Yet no one now disputes that we have here dramatic leaps in heritable organic complexity, achieved not through millennia of gradualist whittling but through single, near-instantaneous events of lateral transfer. As a recent textbook on symbiosis puts it,
The foundation of this text is that symbiosis has expanded the metabolic repertoire of eukaryotes. The greater part of the book has concerned associations that evolved in the Phanerozoic: the past 600 million years and the age of multicellular eukaryotes -- the animals, plants, and fungi. In an evolutionary sense, however, the most significant symbiosis in eukaryotes is Precambrian in origin: the acquisition of aerobic respiration through mitochondria. It is not entirely fanciful to suggest that, without this symbiosis, the eukaryotes would today be relegated to a few anaerobic environments, and the world would have been dominated by bacteria. (Douglas 1994: 131-32)
By current taxonomy one could call the domain distinction between eukaryotes and bacteria the most basic, with structured multicellular organisms eventually stemming from the eukaryote line, and further divided into plants and animals by another lateral bacterial ingestion. So the most basic distinctions among biological organisms have been wrought by single symbiotic events rather than by gradual natural selection. (Of course, once symbiosis occurs, natural selection presumably may help to adjust and improve the symbiosis. The bacterium's genes are eventually mostly transferred to the more protective eukaryote nucleus, although in the case of mitochondria some are not transferred, apparently because they would endanger the nucleus.)
Turing, of course, could not have anticipated these particular discoveries of the last half century. But he, and Thompson, correctly anticipated that adaptationist biologists would tend, as they did in the case of symbiosis and the Darwinian tree, to resist ideas or research that might displace their favored views. If you have a supposedly satisfactory explanation for a supposedly satisfactory variety of evidence, you are prone to blink both at anomalous data and at anomalous explanations.
Among biologists, Turing is famous for his groundbreaking 1952 Royal Society paper, "On the Chemical Basis of Morphogenesis." Indeed, this paper, which introduced what biologists now inevitably call "Turing structures," has received more citations than all the rest of Turing's works altogether (Saunders 1992, p. xvi). Here Turing tackles a major aspect of what he sees as the central problem of biology, viz., how the zygotic cell of conception manages to grow into the immensely larger and enormously complicated structures of the fetus, the baby, and the mature organism, creating new information and structure all along. The exemplary reaction-diffusion models that Turing proposed now have an important role in theoretical biology and have recently been observed experimentally (Castets et al. 1990). They show how patterns or structures can burst forth in a homogeneous medium -- "Turing structures," the simplest and starkest example of morphogenesis imaginable. Recent news stories give pictures of an adult cat and her clone, a kitten genetically identical to her mother. But the kitten does not have the same color pattern in its fur as the mother: this is precisely the result that Turing's work explained in 1952.
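Turing's core insight can be sketched with a little linear algebra: an activator-inhibitor pair that is stable when well mixed becomes unstable, at a band of spatial wavelengths, once the inhibitor diffuses sufficiently faster than the activator, and structure erupts from homogeneity. The sketch below uses an illustrative Jacobian and diffusion rates, not Turing's own worked example, to locate that band of pattern-forming wavenumbers.

```python
import numpy as np

# Linearization of a toy activator-inhibitor system at its homogeneous
# steady state (illustrative values, not Turing's own example).
a, b = 1.0, -2.0    # activator self-enhances, is suppressed by inhibitor
c, d = 2.0, -3.0    # inhibitor is produced by activator and decays
Du, Dv = 1.0, 20.0  # the inhibitor must diffuse much faster

# Without diffusion the well-mixed state is stable: trace < 0, det > 0.
tr, det = a + d, a * d - b * c
assert tr < 0 and det > 0

# With diffusion, a perturbation of squared wavenumber s = q^2 grows when
#   h(s) = Du*Dv*s^2 - (Dv*a + Du*d)*s + det < 0.
s = np.linspace(0.01, 2.0, 500)
h = Du * Dv * s**2 - (Dv * a + Du * d) * s + det
unstable = s[h < 0]  # the band of pattern-forming squared wavenumbers
print("instability band (q^2):", unstable.min(), "to", unstable.max())
```

A full simulation of the nonlinear equations would show the perturbations in this band saturating into stationary stripes or spots; the linear analysis above only explains why a homogeneous medium cannot stay homogeneous.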
Aristotle insisted that biology (his biology) is not a theoretical science but rather a productive science, or better an activity, like medicine, navigation, architecture, engineering, warfare, or the arts (his most detailed comparison is between dramas and organisms, between stage crafting and organism crafting, the first concerned with human productions and the second with natural productions). In both, play and organism production, whether the maker is human or nature, artificial or natural, Aristotle insists that the austere language of material causality must be largely displaced and commandingly enriched by formal, final, and efficient causality, by the intentional idiom. And, indeed, recent work with autistics suggests, if only by way of contrast, how powerful, how richly detailed, how inevitable, and how natively human, our commonplace Theory of Mind is. Surely, as Thompson says of the human entanglement with the teleological and intentional idiom, “it will be so while men have eyes to see and ears to hear withal” (Thompson 1917). How apt our Theory of Mind to human affairs, because these are threaded through, even constituted, by our shared presumptions – how diminished, though comforting, how on occasion misleading, even childish and kitschy, when applied to other animals, or still less to plants. Anthropomorphism works best at home.
Aristotle rightly emphasized that the intentional, practical idiom is most appropriate and most exact when applied to humans. Because humans are substantially artificial as well as natural, talk about humans affords us richly distinct formal, efficient, and final cause characterizations, while with animals much less detail and precision is available and the distinction between formal, efficient, and final causality tends to disappear or become indistinct. Yes indeed, as Aristotle put it, humans qua humans are certainly subject to his sort of biological characterizations, but there are rich other quas that compete on the same footing with human qua human characterizations. For example, human qua physician amply supplements or even supplants human qua human in explanatory adequacy -- and so with many of the quas to which humans are subject. Any human mix of these quas must be available to the Turing test passer, who must own the mix that makes a person a particular person, or particular thinker.
Just as, in the Turing test, we set aside physical being and material causality (the bodily differences between humans and computers) for the richer, more historical, more personal, and more variegated mentalist understandings reflected in performance in the intentional idiom, so in naturalistic biology (Turing's biology, not Aristotle's) we had best not fully trust this idiom and the particularly human understandings that it exemplifies. As William James wisely wrote,
Now why do various animals do what seem to us such strange things, in the presence of such outlandish stimuli? Why does the hen, for example, submit herself to the tedium of incubating such a fearfully uninteresting set of objects as a nestful of eggs, unless she have some sort of prophetic inkling of the result? The only answer is ad hominem. We can only interpret the instincts of brutes by what we know of instincts in ourselves. (James 1892/1920, p. 393).
The Turing biologist would see this as a warning about the seductiveness of Aristotle’s biology.
Why did Turing center his biological research on the simplest available multi-celled organisms, yet direct his cognitive research to the simulation, imitation, or construction of the most complex mind available for study? The answer is that the more complex the behavioral repertoire or powered competencies a mind deploys, the more specific, rich, and testable the characterization of that mind can be (or the characterization of that species of mind, or, leaving physical and chemical limitations aside, that species of Turing machine). Familiarly, for example, Noam Chomsky demonstrated that the human speaker-hearer mind exhibits a generative linguistic capacity beyond the compass of a finite state machine. Most simply, since a generative linguistic device can do everything a finite state linguistic device can, and more, to describe a mind as generative is to produce a more specific and testable characterization. Skinnerian behaviorists felt they could safely ignore any physical limitations on minds, whether human or animal (they did not suppose that miracles were going on inside), because the computational capacities behavioral studies would require were surely minimal and most assuredly finite state. Our human mental and behavioral data now clearly seem to require, and reward, much more complex characterizations.
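The force of Chomsky's demonstration can be glimpsed in the textbook pattern a^n b^n -- n "a"s followed by exactly n "b"s, the skeleton of center-embedded constructions such as nested "if ... then" clauses. The sketch below (an illustration, not Chomsky's own formalism) recognizes the pattern with a single unbounded counter, something no fixed finite-state device can do: by pigeonhole, a k-state machine must confuse the prefixes a^i and a^j for some i != j, and so must wrongly accept a^j b^i whenever it accepts a^i b^i.

```python
def balanced(s: str) -> bool:
    """Recognize a^n b^n: all the 'a's, then exactly as many 'b's.
    One unbounded counter suffices; no finite set of states does,
    since the device must distinguish unboundedly many counts of 'a'."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:        # an 'a' after a 'b' is malformed
                return False
            count += 1
        elif ch == 'b':
            seen_b = True
            count -= 1
            if count < 0:     # more 'b's than 'a's so far
                return False
        else:
            return False
    return count == 0

print(balanced("aaabbb"), balanced("aabbb"))  # True False
```

Human speakers track such nested dependencies (within memory limits), which is why characterizing the speaker-hearer as a generative device, rather than a finite state one, is the stronger and more testable claim.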
Turing’s work stands well today.
Florida State University
Tallahassee, Fl 32306
Block, N. (1978). Troubles with Functionalism. In C. Wade Savage (ed.), Minnesota Studies in the Philosophy of Science, Vol. 9. Minneapolis: University of Minnesota Press, 261-325.
Carpenter, B. E. and Doran, R. W. (1977). The Other Turing Machine. The Computer Journal, 20: 269-279.
Castets, V., Dulos, E., Boissonade, J., De Kepper, P. (1990). Experimental evidence of a sustained standing Turing-type nonequilibrium chemical pattern. Physical Review Letters, 64: 2953-2956.
Chiarelli, B. (1996). Some Comments on the Evolution of Hominid Intelligence. Mankind
Quarterly, 37:1, 29-37.
Chomsky, N. (1988). Language and Problems of Knowledge: The Managua Lectures. Cambridge, MA: MIT Press.
Church, A. (1936). An Unsolvable Problem of Elementary Number Theory. American Journal of Mathematics, 58: 345-363.
Church, A. (1937). Review of Turing 1936. Journal of Symbolic Logic, 2: 42-43.
Coen, E. (1999). The Art of Genes: How Organisms Make Themselves. Oxford: Oxford University Press.
Darwin, C. (1958). The Autobiography of Charles Darwin, 1809-1882, with Original Omissions
Restored. New York: Harcourt Inc.
Darwin, C. (1871). Pangenesis. Nature. 4: 502-503.
Davis, M. (1965). The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable
Problems and Computable Functions, ed. M. Davis. New York: Raven Press.
Doolittle, W. F. (2000). Uprooting the Tree of Life. Scientific American. 282(2): 90-95.
Douglas, A. E. (1994). Symbolic Interactions. Oxford University Press.
Finlay, B. L., Darlington, R. B., Nicastro, N. (2001). Developmental structure in brain evolution. Behavioral and Brain Sciences, 24.
Fodor, J. and Pylyshyn, Z. (1988). Connectionism and Cognitive Architecture: A Critical Analysis. Cognition, 28: 3-71.
Galton, F. (1871). Pangenesis. Nature, 4, 5-6.
Gehring, W. J. (1998). Master control genes in developmental evolution. New Haven and London: Yale University Press.
Gödel, K. (1934). On undecidable propositions of formal mathematical systems. Lecture notes taken by S. Kleene and J. Rosser; reprinted in Davis (1965).
Gödel, K. (1946). Remarks before the Princeton bicentennial conference on problems in mathematics. Published in Davis (1965).
Gould, S. J. and Lewontin, R. (1979). The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme. Proceedings of the Royal Society of London, Series B, 205: 581-598.
Hodges, A. (1983). Alan Turing: The enigma. New York: Simon and Schuster.
Hodges, A. (2002). The Church-Turing Thesis. Stanford Encyclopedia of Philosophy, p. 4.
Ince, D. (1992). Introduction. In The Collected Works of Alan Turing: Mechanical Intelligence. Amsterdam: North-Holland.
James, W. (1892/1920). Psychology: Briefer Course. New York: Holt.
Lassègue, J. (2001). What Kind of Test Did Turing Have in Mind? Tekhnema, 3: 3-14.
Leiber, J. (1992). The Light Bulb and the Turing-Tested Machine. Journal for the Theory
of Social Behaviour 24: 25-39.
Lengyel, I. and Epstein, I. (1991). Modeling of Turing Structures in the Chlorite-Iodide-Malonic Acid-Starch Reaction System. Science, 251: 650-652.
Margulis, L. (1998). Symbiotic Planet. New York: Basic Books.
McCulloch, W. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5: 115-133. Reprinted in McCulloch, W. S., Embodiments of Mind. Cambridge, MA: MIT Press (1965).
Mayr, E. (2000). Darwin’s influence on modern thought. Scientific American. 282: 1, 79-83.
Moody, T. (1993). Philosophy and Artificial Intelligence. Englewood Cliffs, NJ: Prentice-Hall.
Rymer, R. (1993). Genie: An Abused Child's Flight from Silence. New York: Harper Collins.
Post, E. (1936). Finite Combinatory Processes -- Formulation 1. Journal of Symbolic Logic, 1: 103-105.
Saunders, P. (1992). Introduction. In Saunders, P., ed., The Collected Works of Alan Turing: Morphogenesis. Amsterdam: North-Holland.
Searle, J. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences. 3: pp 417-458.
Thompson, D'A. W. (1917). On Growth and Form. Cambridge: Cambridge University Press.
Tomarev, S., Callaerts, P., Kos, L., Zinovieva, R., Halder, G., Gehring, H., Piatigorsky, J.
(1997). Squid Pax-6 and eye development. Proceedings of the National Academy of
Sciences, 94: pp 2421-2426.
Trembley, A. (1744). Mémoires pour servir à l'histoire d'un genre de polypes d'eau douce, à bras en forme de cornes. Leyden & Paris.
Turing, A. (1936). On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42: 230-265.
Turing, A. (1948). Intelligent Machinery. Report to the National Physical Laboratory; available in B. Meltzer and D. Michie (eds.), Machine Intelligence 5. New York: American Elsevier.
Turing, A. (1950/1963). Computing Machinery and Intelligence. Mind, 59: 433-460. I am quoting from E. A. Feigenbaum and J. Feldman (eds.), Computers and Thought. New York: McGraw-Hill.
Turing, A. (1952/1992). On the Chemical Basis of Morphogenesis. Philosophical Transactions of the Royal Society of London, Series B, 237: 37-72. I am quoting from P. T. Saunders, ed., The Collected Works of Alan Turing: Morphogenesis. Amsterdam: North-Holland.
Turing, A. (1992). The Collected Works of Alan Turing: Mechanical Intelligence. Amsterdam: North-Holland.
Turing, A. and Wardlaw, C. W. (1953/1992). A diffusion reaction theory of morphogenesis. In The Collected Works of Alan Turing: Morphogenesis. Amsterdam: North-Holland.
Von Neumann, J. (1945). First draft of a report on EDVAC, Moore School of Electrical
Engineering, University of Pennsylvania, unpublished. In Stern, N. (1981). From Eniac to Univac: An Appraisal of the Eckert-Mauchly Machines. Digital Press.
Von Neumann, J. (1963). Collected Works. New York: Pergamon Press.
Wilson, E. O. (1975). Sociobiology. Cambridge, MA: Harvard University Press.
Woese, C. (1998). The Universal Ancestor. Proceedings of the National Academy of Sciences, 95: 6854-6859.