Category Archives: science

Intelligence — Natural and Artificial

An American court has just granted two chimpanzees a writ of habeas corpus.  Legally, this constitutes them as persons.  For the first time in history, a human legal system has recognized that creatures other than ourselves can be “cognitively complex” enough to merit the status of legal personhood.

And this just in: the Defense Advanced Research Projects Agency (which, around fifty years ago, initiated the project that eventually gave us the Internet) is inviting proposals for “Building Resource Adaptive Software Systems”.  They appear to mean by this: systems intelligent enough to update themselves, and even to adapt to new hardware designs and platforms, without human intervention.


Clearly, the concept of intelligence is undergoing some powerful transformational pressures.  Remember when there were “IQ tests”?  Remember Mensa?  Does anybody still believe in IQ?

With regard to animal intelligence, we’ve come a long way.

Walk with me for a few minutes through an A.I. fantasy.  After a lifetime of waffling back and forth about the matter, I’m now convinced that some level of Artificial Intelligence will be active within the lifetime of my grandsons (who are now 19, 18, and 10 years old respectively).

What sort of behavior would we accept as “Artificial Intelligence”?  The best-known characterization of such behavior, for the past sixty-five years, has been the Turing Test.  But for riffing on the DARPA proposal, the Turing Test is too broad: it stipulates a system whose verbal behavior — say, on a chat line — is indistinguishable from that of a human.  (Think of the movie “Her”.)  We don’t need that for the DARPA proposal.

What we would need is a system that — whatever its primary purpose is — could

  • update itself
  • update any of its components, meaning not only the current ones but also…
    • replacements for current ones, written in new computer languages
    • entirely new ones, unforeseen at its creation
    • interfaces to newly installed hardware devices, and the drivers to operate them

OK, for a simple use-case, let’s imagine a system whose purpose is — oh, not military, for God’s sake —  maintaining archives.  Over the hundred years stipulated in the DARPA proposal, the technological infrastructure of the archives will likely go through several changes.  Just in the past fifty years, data memory has gone from ferrite-core to a bewildering variety of solid-state fabrications, while storage technologies have evolved from magnetic drums to tape to multi-terabyte hard drives.

A couple of propositions seem obvious:

  1. The system acts on its own initiative.  It does not wait for someone to press the “update” button.
  2. If it acts on its own initiative, it must have the capacity to “decide” when to do so.
  3. Items 1 and 2 can properly be identified, analogically, as a capacity for reflection.
  4. Such a capacity is necessarily independent of the other tasks assigned to the system.  It is executed by a subsystem to which all of the other system tasks appear as objects or processes.
  5. There will be some entirely practical boundaries to the system’s mutability.  It is not required to be able to transform itself, for example, from an archive maintainer into, say, the control system of an orbiting satellite.  Whatever its stipulated role, it maintains that role throughout its centuries-long lifetime.  It is not a shmoo.
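The propositions above can be sketched in miniature.  Here is a toy Python reflector, a sketch under my own assumptions: every name in it (`Task`, `Reflector`, `consider`) is a hypothetical illustration of propositions 1 through 5, not anything taken from the DARPA proposal itself.

```python
# A toy sketch of the propositions above: a reflective subsystem
# (proposition 4) that holds the system's other tasks as objects,
# and "decides" on its own initiative (propositions 1 and 2) whether
# to adopt a replacement component.  All names here (Task, Reflector,
# consider) are hypothetical illustrations, not anything from the
# DARPA proposal.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Task:
    """A component the reflector can inspect and swap out."""
    name: str
    version: int
    run: Callable[[], str]


class Reflector:
    """Treats every other task in the system as an object (prop. 4)."""

    def __init__(self) -> None:
        self.tasks: Dict[str, Task] = {}

    def register(self, task: Task) -> None:
        self.tasks[task.name] = task

    def consider(self, candidate: Task) -> bool:
        """The 'decision' step: adopt a component if it is new, or
        newer than the one we hold.  No human presses an update
        button; the reflector applies this rule on its own."""
        current = self.tasks.get(candidate.name)
        if current is None or candidate.version > current.version:
            self.tasks[candidate.name] = candidate
            return True
        return False


reflector = Reflector()
reflector.register(Task("archive", 1, lambda: "store on tape"))
reflector.consider(Task("archive", 2, lambda: "store on SSD"))
print(reflector.tasks["archive"].run())  # prints "store on SSD"
```

Note that the reflector never changes what the “archive” role *is*, only how it is carried out, which is proposition 5 in a single line: the role persists while its implementation migrates from tape to SSD.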

To be continued . . .

Animals Learning

An interesting little report in The Register this morning.

Old notions die hard, right?

For example, René Descartes and his argument that animals are automatons.  Only humans think, because only humans have souls.  Animals don’t have souls, therefore they don’t think; the only existence available to them is as unconscious biological mechanisms.

I despise this notion, and Descartes for promulgating it.  I know from the histories of philosophy that he wasn’t the only one to believe that — it was popular among a certain class of 17th-century intellectuals — but it’s his name that is stereotypically associated with the idea, right down to the present time.

It’s wrong.  It’s false.  It’s provably, observably, factually false.  This is not a vaguely philosophical “Matter of Opinion” issue, like that goofy notion of his regarding Minds and Bodies and the pineal gland.

Well, Descartes was not an empirical researcher.  He was a mathematician.  He was a brilliant mathematician — and a pathetically inept philosopher.  (It’s extremely instructive to compare his methods with those of somebody who really was a researcher — his older contemporary Galileo.)

Animal intelligence is, in fact, a proper matter for straightforward scientific investigation.

For the past five or six decades, people have been conducting just such studies.  Their results have given us increasing confidence that animals really possess — in misty, simpler, sometimes rudimentary form — most of the same kinds of sentience that we humans do.

These results, inconspicuous as they are, have consequences for much bigger questions, concerning . . .

  •  human language and its origins
  •  the origin and persistence of human religions
  •  the nature of animal, human, and artificial intelligence.

I’ll get to those in time.

A Quote From John Macmurray

About twenty years ago I, um, lost my faith.  A lot of what I had believed up till then simply evaporated.

It didn’t even leave a residue.

I have been, ever since, an unbeliever.  I don’t believe psychiatry is a science.

Psychology is most likely not a science either.  That could depend on how we define it; I might come back to that before this post is over.

It was a text from John Macmurray that pulled the rug out from under my convictions.  I was reading Persons in Relation, his Gifford lectures from 1954.  Sharply distinguishing the two realms of personal and scientific knowledge, he wrote — with razor-sharp sarcasm — that “…our objective or scientific knowledge of man is such knowledge of one another as we can obtain without entering into personal relation.”

Poof.  The bubble popped.

In that single moment, an entire closetful of intellectual reservations that had been accumulating over the years blew open and disgorged itself all over my mind.  (That scene in Return of the King where Aragorn, having extracted an oath from the dead, is almost buried in skulls…)

Experimental conclusions only expressible as probabilities?  Check.  Theoretical entities only dubiously related to anything observable?  Check.  Prohibitions against individual narratives as “anecdotal”?  Check.  Inability to arrive at stable theoretical foundations for even the most basic of observations?  Check.  Prohibition against knowledge by introspection?  Check.

The list goes on and on and on and on and on.

I thought about everything I had absorbed of human nature since I was born, and the sources from which I had learned it: from my own reflections on my family, friends, enemies, favorite stories and novels and comics, movies, and just plain daily news; and above all from my own adventures in exploring my own humanity.

By the definition of scientific objectivity, none of this could count towards knowledge of human nature.  And that’s just silly.

I’ve been asking myself, ever since, how we could ever, here in Western intellectual history, have arrived at such a stupid conviction.

  • What are we studying when we study psychology?
  • How deeply may one person know another?
  • What can we know about humankind in general that is different — intrinsically — from knowing individual persons?

I’ll be coming back to these topics in upcoming posts.  More than once.