Monday, September 08, 2008

Aaronson critiques Kurzweil and the 2045 Singularity

Kurzweil's desperate drive to live to 2045 and guaranteed immortality makes him easy to mock. Scott Aaronson, one of my favorite science bloggers, avoids the easy mockery and writes a respectful review of why he thinks The Singularity Is Far -- by which he means at least a century away, and probably after 2300.

The topic is not entirely academic. If you believe that artificial sentience bounded only by physics will transform the world beyond recognition by 2045, you might not bother with everyday trivia like peaceful prosperity for China, global warming management, and sustaining pluralistic democracy in America.

Like Aaronson, I share some areas of agreement with Kurzweil. From Aaronson's review of The Singularity Is Near ...

I find myself in agreement with Kurzweil on three fundamental points. Firstly, that whatever purifying or ennobling qualities suffering might have, those qualities are outweighed by suffering’s fundamental suckiness....

Secondly, there’s nothing bad about overcoming nature through technology...

Thirdly, were there machines that pressed for recognition of their rights with originality, humor, and wit, we’d have to give it to them. And if those machines quickly rendered humans obsolete, I for one would salute our new overlords. In that situation, the denialism of John Searle would cease to be just a philosophical dead-end, and would take on the character of xenophobia, resentment, and cruelty...

Yeah, Searle annoys me too.

My only objection to Aaronson's summary is that I'd dispense with the originality, humor, and wit requirements -- I don't demand those of humans, so it's hardly fair to demand them of non-humans. Also, if obsolete means "ready for recycling" I'm not so objective as to welcome that transition. Indeed, I find the prospect of recycling cause to hope that the Singularity really is beyond 2200.

Aaronson and I are both skeptical of Kurzweil's exponential projections, though ...

... Everywhere he looks, Kurzweil sees Moore’s-Law-type exponential trajectories—not just for transistor density, but for bits of information, economic output, the resolution of brain imaging, the number of cell phones and Internet hosts, the cost of DNA sequencing … you name it, he’ll plot it on a log scale. ... he knows that every exponential is just a sigmoid (or some other curve) in disguise. Nevertheless, he fully expects current technological trends to continue pretty much unabated until they hit fundamental physical limits.

I’m much less sanguine. Where Kurzweil sees a steady march of progress interrupted by occasional hiccups, I see a few fragile and improbable victories against a backdrop of malice, stupidity, and greed—the tiny amount of good humans have accomplished in constant danger of drowning in a sea of blood and tears, as happened to so many of the civilizations of antiquity....

So there's a bright side to Governor Palin et al.'s desire to reverse the Enlightenment and join bin Laden in the Middle Ages -- it delays the Singularity.

And some people think I'm too negative.

In any case, even if we don't destroy civilization by electing McCain/Palin (maybe India will save civilization even then), my personal computer technology doesn't feel like it's on an exponential growth curve. It feels about as slow as it did eight years ago (though that's partly because of the crapware corporations now install on machines). I think we've gone sigmoid well ahead of Kurzweil's timeline. In fact, the book was written five years ago -- I think we've already fallen off his roadmap.
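For what it's worth, Aaronson's exponential-versus-sigmoid point is easy to see numerically: a logistic curve tracks a pure exponential almost perfectly until it nears its ceiling, so early data alone can't distinguish them. A minimal sketch in Python (the growth rate and ceiling are made-up numbers, not fitted to any real trend):

    import math

    # Toy comparison: pure exponential vs. logistic ("sigmoid") growth.
    r = 0.5        # growth rate per time step (illustrative)
    K = 1_000_000  # carrying capacity -- the hidden ceiling (illustrative)
    x0 = 1.0       # starting value

    for t in range(0, 41, 5):
        exponential = x0 * math.exp(r * t)
        logistic = K / (1 + ((K - x0) / x0) * math.exp(-r * t))
        print(f"t={t:2d}  exp={exponential:14.1f}  logistic={logistic:14.1f}")

    # The two columns are nearly identical until exp(r*t) approaches K
    # (around t = 28 here); only then does the logistic bend away and flatten.

If personal computing really has hit the bend in its curve, this is exactly what we'd expect: years of apparently exponential data, then a quiet flattening the extrapolations miss.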

Aaronson's essay makes a quiet digression into one of my favorite topics -- the Fermi Paradox (inevitably tied these days to the Singularity). Note here that Scott is a certified deep thinker about Bayesian reasoning ...

... The fourth reason is the Doomsday Argument. Having digested the Bayesian case for a Doomsday conclusion, and the rebuttals to that case, and the rebuttals to the rebuttals, what I find left over is just a certain check on futurian optimism. ... Suppose that all over the universe, civilizations arise and continue growing exponentially until they exhaust their planets’ resources and kill themselves out. In that case, almost every conscious being brought into existence would find itself extremely close to its civilization’s death throes. If—as many believe—we’re quickly approaching the earth’s carrying capacity, then we’d have not the slightest reason to be surprised by that apparent coincidence. To be human would, in the vast majority of cases, mean to be born into a world of air travel and Burger King and imminent global catastrophe. It would be like some horrific Twilight Zone episode, with all the joys and labors, the triumphs and setbacks of developing civilizations across the universe receding into demographic insignificance next to their final, agonizing howls of pain. I wish reading the news every morning furnished me with more reasons not to be haunted by this vision of existence.

Hmm. Scott has seemed a bit depressed lately. Following American politics will do that to a person.
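Incidentally, the demographic claim buried in Aaronson's scenario is just arithmetic: if births double every generation and the civilization then collapses, most of the people who ever lived were born in the final few generations. A toy sketch (all numbers made up):

    # Toy Doomsday arithmetic: births double each generation, then the
    # civilization ends. What fraction of everyone who ever lived was
    # born in the final k generations?
    generations = 30
    cohorts = [2 ** g for g in range(generations)]  # births per generation
    total = sum(cohorts)                            # = 2**generations - 1

    for k in (1, 2, 5):
        late = sum(cohorts[-k:])
        print(f"born in last {k} generation(s): {late / total:.1%}")

    # Prints roughly 50%, 75%, and 97%: a randomly chosen observer almost
    # certainly finds itself near the end -- the "death throes" point.

No Bayesian subtlety is needed for that part; the subtlety is in whether we should reason about ourselves as random samples at all.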

I'd argue that the Doomsday Argument makes the case for a (sometime) Singularity however. Global warming won't wipe out humanity, and neither would nuclear war or even most bioweapons. We're tougher than cockroaches, which, I read recently, only thrive in the US because of our delightful garbage. It would take something really catastrophic and inescapable to do us in. Something that would likewise eliminate every sentient biological entity. Something like an inevitable Singularity ...

Of course that will wipe us out just as thoroughly in 2240 as in 2040, so I wouldn't use that argument to advance our date with destiny.

Aaronson concludes with one of the more interesting critiques of the Singularity thesis. He says that while it may well happen someday (strong AI, that is), the result probably won't be incomprehensible ...

As you may have gathered, I don’t find the Singulatarian religion so silly as not to merit a response. Not only is the “Rapture of the Nerds” compatible with all known laws of physics; if humans survive long enough it might even come to pass. The one notion I have real trouble with is that the AI-beings of the future would be no more comprehensible to us than we are to dogs (or mice, or fish, or snails). After all, we might similarly expect that there should be models of computation as far beyond Turing machines as Turing machines are beyond finite automata. But in the latter case, we know the intuition is mistaken. There is a ceiling to computational expressive power. Get up to a certain threshold, and every machine can simulate every other one, albeit some slower and others faster. Now, it’s clear that a human who thought at ten thousand times our clock rate would be a pretty impressive fellow. But if that’s what we’re talking about, then we don’t mean a point beyond which history completely transcends us, but “merely” a point beyond which we could only understand history by playing it in extreme slow motion.

So Aaronson's saying that faster isn't the same as incomprehensible. He is a world expert on the physics of computation, so it's not surprising that he reminds us of those limits. Kurzweil and Vinge know that too, though not at Scott's level of detail.
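Aaronson's universality point can be made concrete with a toy interpreter: whatever a deterministic machine computes in one of its ticks, a slower machine can replay exactly, one step at a time. A minimal sketch (the transition rule is an arbitrary stand-in, not any particular model of a mind):

    # "Faster" is not "incomprehensible": the same deterministic machine
    # run at two clock rates produces the same history; only wall-clock
    # time differs. The step rule here is a made-up mixing function.
    SPEEDUP = 10_000

    def step(state: int) -> int:
        """One transition of a hypothetical deterministic machine."""
        return (state * 6364136223846793005 + 1442695040888963407) % 2**64

    def fast_tick(state: int) -> int:
        """The fast being: SPEEDUP transitions per tick of our clock."""
        for _ in range(SPEEDUP):
            state = step(state)
        return state

    def slow_replay(state: int) -> int:
        """Us, in slow motion: the same transitions, one at a time."""
        for _ in range(SPEEDUP):
            state = step(state)  # we could pause and inspect after each step
        return state

    assert fast_tick(42) == slow_replay(42)  # identical end states

That's the whole argument in miniature: above the universality threshold, speed buys time, not transcendence.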

So what do I think? On the one hand, I don't think we're anywhere near the physical limits of computation, so I could believe that a sentient AI would be far more than Aaronson at warp speed. On the other hand, I could also believe that at a certain level of sentience all other sentience may be more or less imaginable -- and that some humans are there now.

I like Scott's essay -- probably because it fits my prejudices about "2045". It's nice to be affirmed by a true expert. I might be more worried about 2100 than he is, however. It may come down to how discouraging the RNC is ...

... while I believe the latter kind of singularity is possible, I’m not at all convinced of Kurzweil’s thesis that it’s “near” (where “near” means before 2045, or even 2300). I see a world that really did change dramatically over the last century, but where progress on many fronts (like transportation and energy) seems to have slowed down rather than sped up; a world quickly approaching its carrying capacity, exhausting its natural resources, ruining its oceans, and supercharging its climate; a world where technology is often powerless to solve the most basic problems, millions continue to die for trivial reasons, and democracy isn’t even clearly winning over despotism; a world that finally has a communications network with a decent search engine but that still hasn’t emerged from the tribalism and ignorance of the Pleistocene. And I can’t help thinking that, before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 17th-century Enlightenment to the 98% of the world that still hasn’t gotten the message.

I agree with the sentiment ... but there's a curious slip here. Aaronson is saying that a healthy Enlightenment is an important step towards a sentient AI Singularity, but he's already established that the Singularity is unlikely to be an unmitigated gift. It could be an extinction event instead.

In which case the logical thing to do is vote for Palin/McCain. Down with the Enlightenment!

Update 9/9/08: A somewhat similar response to mine from a Singularity student. There's a bit of synchronicity, though where I write "peaceful prosperity for China" they write "Chinese military dominance". A revealing distinction, I suspect. I guess I fall between Aaronson and Hanson on the Singularity spectrum, which means we all have a fair bit in common.
