Intelligence — Natural and Artificial

An American court has just granted two chimpanzees a writ of habeas corpus.  Legally, this constitutes them as persons.  For the first time in history, a human legal system has recognized that creatures other than ourselves can be “cognitively complex” enough to merit the status of legal personhood.

And this just in: the Defense Advanced Research Projects Agency (which, around fifty years ago, initiated the project that eventually yielded us the Internet) is inviting proposals for “Building Resource Adaptive Software Systems”.  By this they appear to mean systems intelligent enough to update themselves, and even adapt to new hardware designs and platforms, without human intervention.

Cool.

Clearly, the concept of intelligence is undergoing some powerful transformational pressures.  Remember when there were “I.Q. tests”?  Remember Mensa?  Does anybody still believe in IQ?

With regard to animal intelligence, we’ve come a long way.

Walk with me for a few minutes through an A.I. fantasy.  After a lifetime of waffling back and forth about the matter, I’m now convinced that some level of Artificial Intelligence will be active within the lifetime of my grandsons (who are now 19, 18, and 10 years old respectively).

What sort of behavior would we accept as “Artificial Intelligence”?  The best-known characterization of such behavior, for the past sixty-five years, has been the Turing Test.  But for riffing on the DARPA proposal, the Turing Test is too broad: it stipulates a system whose verbal behavior — say, on a chat line — is indistinguishable from that of a human.  (Think of the movie “Her”.)  We don’t need that for the DARPA proposal.

What we would need is a system that — whatever its primary purpose is — could (there’s a sketch in code after this list):

  • update itself
  • update any of its components, including not only current ones but also…
    • replacements for current ones, written in new computer languages
    • new ones, unforeseen at its creation
    • interfaces to newly installed hardware devices, and the drivers to operate them
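
To picture the first two requirements in code, here is a minimal sketch (mine, not DARPA’s; every name in it is hypothetical) of a system that keeps its components in a registry it can rewrite while running:

    import importlib

    class ComponentRegistry:
        """Holds the system's live components and lets them be swapped at runtime."""

        def __init__(self):
            self.components = {}  # component name -> live object

        def install(self, name, module_path, class_name):
            """Load (or reload) a component's code and swap it in."""
            module = importlib.import_module(module_path)
            module = importlib.reload(module)  # pick up new code on disk
            replacement = getattr(module, class_name)()
            old = self.components.get(name)
            if old is not None:
                # Hypothetical handoff: the newcomer inherits the live state
                # of the component it displaces.
                replacement.adopt_state(old.hand_over())
            self.components[name] = replacement

The interesting line is the handoff: a replacement written years after the original, possibly in a language that didn’t exist at the system’s creation, still has to inherit its predecessor’s working state.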

OK, for a simple use-case, let’s imagine a system whose purpose is — oh, not military, for God’s sake — maintaining archives.  Over the hundred years stipulated in the DARPA proposal, the technological infrastructure of the archives will likely go through several changes.  Just in the past fifty years, data memory has gone from ferrite-core to a bewildering variety of solid-state fabrications, while storage technologies have evolved from magnetic drums to tape to multi-terabyte hard drives.
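
In code, the archive’s defense against that churn is an old idea: hide each storage technology behind a fixed interface, so that a new technology costs one new backend plus a migration.  A hedged sketch, with invented names:

    from abc import ABC, abstractmethod

    class StorageBackend(ABC):
        """What the archive demands of any storage technology, 1965 or 2065."""

        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, key: str) -> bytes: ...

        @abstractmethod
        def keys(self):
            """Iterate over every stored key (needed for migration)."""

    def migrate(old: StorageBackend, new: StorageBackend) -> None:
        """Copy the whole archive from a dying technology to its successor."""
        for key in old.keys():
            new.put(key, old.get(key))

The abstraction itself is routine; the DARPA-grade ambition is that the system writes and installs the new backend without us.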

A couple of propositions seem obvious:

  1. The system acts on its own initiative.  It does not wait for someone to press the “update” button.
  2. If it acts on its own initiative, it must have the capacity to “decide” when to do so.
  3. Items 1 and 2 can properly be identified, analogically, as a capacity for reflection.
  4. Such a capacity is necessarily independent of the other tasks assigned to the system.  It is executed by a subsystem to which all of the other system tasks appear as objects or processes.  (There’s a sketch after this list.)
  5. There will be some entirely practical boundaries to the system’s mutability.  It is not required to be able to transform itself, for example, from an archive maintainer into, say, the control system of an orbiting satellite.  Whatever its stipulated role, it maintains that role throughout its centuries-long lifetime.  It is not a shmoo.
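
Here is proposition 4 sketched in the same hypothetical vein: a supervisor to which the rest of the system appears as plain data, objects it can inspect and replace, and which decides for itself when a pass of self-examination is due.

    import time

    class Supervisor:
        """The reflective subsystem: every other task appears to it as an object."""

        def __init__(self, components, find_replacement):
            self.components = components              # task name -> live task object
            self.find_replacement = find_replacement  # hypothetical probe function

        def reflect_once(self):
            """One pass of self-examination: does anything need replacing?"""
            for name, task in list(self.components.items()):
                replacement = self.find_replacement(name, task)
                if replacement is not None:
                    # Its own initiative -- nobody pressed an "update" button.
                    self.components[name] = replacement

        def run(self, interval_seconds=3600):
            while True:
                self.reflect_once()
                time.sleep(interval_seconds)

Proposition 5 would live inside find_replacement: whatever candidates it surfaces, it never offers one that abandons the archive-maintainer role.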

To be continued . . .