Wednesday, September 10, 2008

The Economist predicts an early Singularity through neuroengineering

No sooner had I put myself in the between-near-and-far Singularity group than The Economist declared for a very near Singularity in the form of a hard-AI super-mind (emphases mine; they don’t mention Kurzweil, Vinge, or any of the usual suspects) …

Tech.view | Minds of their own | Economist.com

… The progress being made in neuroengineering—devising machines that mimic the way the brain and other bodily organs function—has been literally eye-opening. In the decade since Kevin Warwick, professor of cybernetics at Reading University in Britain, had a silicon chip implanted in his arm so he could learn how to build better prostheses for the disabled, we now have cochlear implants that allow the deaf to hear, and a host of other spare mechanical parts to replace defective organs.

A bionic eye, to help people suffering from macular degeneration, is in the works, and artificial synapses are being tested as possible replacements for damaged optic nerves. An implantable electronic hippocampus—the world’s first brain prosthesis—is being developed for people who lose the ability to store long-term memories following a stroke, epilepsy or Alzheimer’s disease.

Meanwhile, a team at the University of Sheffield in Britain has built a “brainbot” controlled by a mathematical model of the brain’s basal ganglia—the part that helps us decide what to do next. Depending on how much simulated dopamine (the neurotransmitter in the brain that controls movement, behaviour, mood and learning) is dialled into the mathematical model, the brainbot responds differently.

Too much, and the machine has trouble suppressing unwanted actions, or tries to do two incompatible things at once—like patients with Huntington’s disease, Tourette’s syndrome or schizophrenia. Too little digital dopamine, and the machine has difficulty deciding how to move—like patients with Parkinson’s disease.
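An aside: the dopamine-as-a-dial behaviour is easy to caricature in code. Below is a toy sketch, nothing like the Sheffield group’s actual basal ganglia model, with every name and number invented for illustration: a dopamine-like gain scales each action’s salience before it competes against a fixed selection threshold, so too little gain selects nothing and too much fails to suppress the losers.

```python
def select_actions(saliences, dopamine, threshold=0.5):
    """Toy basal-ganglia-style action selection: a dopamine-like gain
    scales each action's salience, and anything above a fixed threshold
    is released for execution. Purely illustrative numbers throughout."""
    gated = [dopamine * s for s in saliences]
    return [i for i, g in enumerate(gated) if g > threshold]

saliences = [0.4, 0.9, 0.45]  # three competing candidate actions

for dopamine in (0.3, 1.0, 2.5):
    chosen = select_actions(saliences, dopamine)
    if not chosen:
        print(f"dopamine={dopamine}: nothing selected (Parkinsonian indecision)")
    elif len(chosen) > 1:
        print(f"dopamine={dopamine}: actions {chosen} fire together (failed suppression)")
    else:
        print(f"dopamine={dopamine}: action {chosen[0]} cleanly selected")
```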

Mr Warwick’s team at Reading has now gone a stage further. Instead of using a computer model of part of the brain as a controller, the group’s new “animat” (part animal, part material) relies solely on nerve cells from an actual brain.

Signals from a culture of rodent brain cells in a tiny dish are picked up by an array of electrodes and used to drive a robot’s wheels. The animat’s biological brain learns how and when to steer away from obstacles by interpreting sensory data fed to it by the robot’s sonar array. And it does this without outside help or an electronic computer to crunch the data.
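The wetware is the hard part here, but the control loop itself is conceptually simple: encode the sonar readings as electrical stimulation, read the culture’s firing back out, decode it into wheel speeds, repeat. A runnable caricature follows, with a simulated culture standing in for the real neurons and every interface invented for illustration:

```python
import random

class SimulatedCulture:
    """Toy stand-in for the dish of rodent neurons on a multi-electrode
    array. Stimulating one side raises firing on that side; the real
    culture's input-output mapping is learned, not hard-wired like this."""

    def __init__(self):
        self.rates = [0.0, 0.0]  # left / right firing rates

    def stimulate(self, left_stim, right_stim):
        # Firing loosely tracks stimulation, with a little noise.
        self.rates = [left_stim + random.gauss(0, 0.05),
                      right_stim + random.gauss(0, 0.05)]

    def read_firing_rates(self):
        return self.rates

def control_step(culture, sonar_left, sonar_right, base_speed=1.0):
    """One pass of the sense -> stimulate -> read -> steer loop."""
    # 1. Encode the sensors: a closer obstacle means stronger stimulation.
    culture.stimulate(1.0 / max(sonar_left, 0.1),
                      1.0 / max(sonar_right, 0.1))
    # 2. Decode the firing into wheel speeds: activity on one side slows
    #    the opposite wheel, so the robot turns away from the obstacle.
    left_rate, right_rate = culture.read_firing_rates()
    return base_speed - 0.2 * right_rate, base_speed - 0.2 * left_rate

culture = SimulatedCulture()
# Obstacle close on the left: expect the right wheel to slow, steering away.
print(control_step(culture, sonar_left=0.5, sonar_right=3.0))
```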

… Neuroengineers build tools that think for themselves, making decisions the way humans do.

… Over the past decade, a new technology known as “evolvable hardware” has emerged. Like traditional brute-force methods, evolvable machines try billions of different possibilities. But the difference is they then continually crop and refine their search algorithm—the sequence of logical steps they take to find a solution.

… The evolvable concept, pioneered by Adrian Thompson at the University of Sussex in Britain, has led to some astonishing results. Dr Thompson’s original “proof of principle” experiment—a design for a simple analogue circuit that could tell the difference between two audio tones—worked brilliantly, but to this day no one knows quite why. Left to run for some 4,000 iterations on its own, the genetic algorithm somehow found ways of exploiting physical quirks in the semiconductor material that researchers still don’t fully comprehend.
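For anyone who hasn’t met the technique: a genetic algorithm is just evaluate, select, crossover, mutate, in a loop. A minimal sketch follows; the toy fitness function is mine, whereas Thompson’s fitness was measured on a live FPGA, which is exactly how those unexplained physical quirks crept into the evolved circuit.

```python
import random

def evolve(fitness, genome_len=64, pop_size=50, generations=200,
           mutation_rate=0.02):
    """Minimal genetic algorithm: evaluate, select, crossover, mutate.
    `fitness` scores a bit-string genome (in Thompson's experiment a
    genome configured a real circuit); higher is better."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]           # keep the fitter half
        pop = parents[:]
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)  # single-point crossover
            child = [bit ^ (random.random() < mutation_rate)  # point mutations
                     for bit in a[:cut] + b[cut:]]
            pop.append(child)
    return max(pop, key=fitness)

# Toy stand-in fitness: count the 1-bits. Thompson's real fitness measured
# how well the evolved circuit separated two audio tones.
best = evolve(lambda genome: sum(genome))
print(sum(best), "of", len(best), "bits set")
```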

Similarly, John Koza at Stanford University has been using genetic algorithms to devise analogue circuits that are so smart they infringe on patents awarded to human inventors. Mr Koza’s so-called “invention machine” has even earned patents of its own—the first non-human inventor to do so.

How soon before machines become smarter than people? The way self-programming machines are evolving today suggests they will probably begin to match human intelligence in perhaps little over a decade. By 2030, they might look down on us—if we’re lucky—as endangered critters like the blue whale or polar bear and accept we are worth keeping around for our genetic diversity.

But what if visionaries like Mr Gibson are right, and we embrace the bionic future? With our plug-in bio-processors and learning modules, perhaps we’ll be able to outsmart the machines—or, at least, become indistinguishable from them.

2030?! That’s damned early. Even Kurzweil usually says 2045; I’m hoping for 2090, and Aaronson thinks 2300.

There’s nothing above that I hadn’t already known about or written of. For example, four years ago an organic rat neural network “flew” a simulated F-22. So why did I say 2090 instead of, say, 2030?

It might be wishful thinking, since I fear this sort of inevitable Singularity is the most likely explanation of the Fermi Paradox. I want to put this well beyond my lifespan.

Alas, I may not be giving enough credit to the animal option. It’s “cheating”, bypassing much of the complexity of the traditional approaches that more or less build the AI from spare parts and a plan. Cheating works.

On the bright side, if they’re right, then McCain/Palin won’t be able to do any lasting harm to humanity. Which is, of course, exactly the sort of apathy Aaronson opposes.
