Thursday, December 03, 2009

Singular fun with Fermi

I'm a geek. So I love to puzzle over the Fermi Paradox, and, like many fellow geeks, I'm tempted to connect the Great Silence to the "Singularity" (aka the rapture of the geeks). That puts me on the pessimistic side of the Singularity cult -- as in "Hello Hal, Goodbye humanity".

Now, since we're talking about the extinction of humanity within 40-300 years (I'd put it at 80-100 myself), you might think this would be a bit depressing. Well, it might be, except I've long known I'll be dead before 2070, and probably before 2050. Everyone I care about will be dead within about 110 years. These are the things we secular humanist types know, and yet we can be quite cheerful. Ok, not cheerful in my case. Less dour, maybe.

The peri-Singular death of humanity is a serious matter for humans, but it's less inevitable than our personal exits, so by comparison the Fermi Paradox/Singularity schtick is more entertaining than grim. That's why I appreciated this comment on a recent post (edits and emphases mine; follow the link for the full text) ...
Comment by Augustine 11/29/09
... I don't trust predictions that are based on extrapolations from current rates of growth. These predictions are, and will be, correct, but only for limited time frames. Extend them out too far and they become absurd. Moore's Law works fine, and will continue to work fine for a while, I'm sure, but basing predictions on ever-accelerating computing power is about as useful as imagining accelerating a given mass to the speed of light.

The greater problem, however, with the argument lies in the fact that we are at best imperfect predictors ... You cannot accurately infer a future singularity when you cannot know what will change the game before it happens, if you get my drift...
There's more to the comment, but I'll stick with these two objections. The "limits to exponential growth" argument is even stronger than stated here, since Moore's Law has in fact already failed in the strict sense: we have some more doublings to go, but each one is taking longer than the last.
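
The extrapolation point can be made concrete in a few lines of code. Here's a minimal sketch (mine, with invented growth parameters, so purely illustrative) comparing a naive exponential projection against a logistic curve that shares the same early doubling time but saturates at a hard ceiling:

```python
import math

# Illustrative assumptions only -- not real semiconductor data.
DOUBLING_TIME = 2.0  # years per doubling (assumed)
CEILING = 1e6        # arbitrary physical/economic limit (assumed)

def exponential(t):
    """Naive extrapolation: the doubling never slows."""
    return 2 ** (t / DOUBLING_TIME)

def logistic(t):
    """Same initial growth rate, but saturating at CEILING."""
    r = math.log(2) / DOUBLING_TIME
    return CEILING / (1 + (CEILING - 1) * math.exp(-r * t))

for year in (10, 20, 40, 60, 80):
    print(f"t={year:>2}y  exponential={exponential(year):12.3g}  "
          f"logistic={logistic(year):12.3g}")
```

The two curves agree for the first several doublings and then diverge wildly -- the extrapolation is "correct, but only for limited time frames", exactly as the comment says.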

So maybe we'll never have the technology to make a super-human AI. I think we'll make at least a human-class AI, if only because we've already made billions of human-level DIs (DNA-Intelligences). Even if computers only get five more doublings in, I think we'll figure out a way to cobble something together that merits legal protection, a vote, and universal healthcare. (Ok, so the AI will come sooner than universal healthcare.)

So we get our AI, and it's very smart, but it's comprehensible (Aaronson put this well). So this is certainly disruptive, but it's no singularity. On the other circuit, it does seem odd that today's average human would represent the pinnacle of cognition. Our brains are really crappy. Sure, the associative cortices are neat, but the I/O channels are pathetic. A vast torrent of data washes over our retinas and leaves as a hugely compressed, lossy stream along a clogged input channel. We can barely juggle five disparate concepts in working memory. Surely we can improve on that!
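
Some back-of-envelope arithmetic makes the I/O complaint vivid. Every figure below is a rough, order-of-magnitude assumption loosely based on the vision literature, not a precise measurement:

```python
# Back-of-envelope sketch of the retina's lossy compression.
# All numbers are order-of-magnitude assumptions.

PHOTORECEPTORS = 126e6   # ~120M rods + ~6M cones per eye (approx.)
SAMPLE_RATE_HZ = 30      # assumed effective temporal sampling
BITS_PER_SAMPLE = 2      # assumed useful bits per receptor sample

raw_bps = PHOTORECEPTORS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE
OPTIC_NERVE_BPS = 10e6   # ~10 Mbit/s per eye, one published estimate

print(f"raw input:   ~{raw_bps / 1e9:.1f} Gbit/s")
print(f"optic nerve: ~{OPTIC_NERVE_BPS / 1e6:.0f} Mbit/s")
print(f"compression: ~{raw_bps / OPTIC_NERVE_BPS:,.0f}x, and lossy")
```

On these assumptions the eye throws away well over 99% of what it samples before anything reaches the cortex -- the "clogged input channel" in a nutshell.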

So I'm afraid that Newton/Einstein/Feynman-class minds do not represent a physical pinnacle of cognition. We'll most likely get something at least ten times smarter -- something that makes things even smarter and faster, that can continuously improve and extend cognitive abilities until we start to approach the physical limits of computation. Long before that, though, the earth will have been turned into "computronium" -- and my atoms will be somewhere in orbit.

As to the second objection, that we can't imagine a singularity because we can only reason within the system we know, I think that's actually the point. We can't imagine what comes after the world of the super-human minds because, well, we don't have the words for that world. We can reason within the system we know until sometime close to when these critters come online; after that, we can't.

That doesn't mean humanity necessarily kicks off. Lots of geeks imagine we'll upload our minds into unoccupied (!) processing environments, or that the AIs will be sentimental. Not everyone is as cheerily pessimistic as I am. It's not called a "Singularity" because it's the "end"; it's called that because we can't make predictions about it. Super-AI is death to prediction.

1 comment:

Unknown said...

That's exactly right.

This is one of those Russian dolls of a topic -- out of any single discussion comes any number of other even more fascinating ones. So it's smart to narrow the discussion from the larger 'just-because-we've-heard-nothing-doesn't-mean-nobody's-talking' point to something one might just get under control. Otherwise, we'd be all over the place.

But I can't resist it.

I'm wondering if (pace Wittgenstein) we're still not getting a little tripped up by our own definitions. Just as we have no good definition of 'life' or 'communication,' we're also a little challenged on the words 'death' and 'humanity.' If, for instance, a species of Galapagos finch evolves into a new species with a beak brilliantly adapted to opening plastic-wrapped tuna sandwiches left by tourists, the earlier species becomes 'extinct,' but I'm not sure in what sense that line of finches 'died.'

I'd say we don't really have a good understanding of what 'smarter than human' means either.

So, here we go, let's take another look at the singularity argument, which, if I understand it right, is an argument by analogy. It goes something like this: Machines have become stronger, faster, tougher, more dangerous than humans, to the point that they have replaced us for most physical activities; computers are a form of machine; computers will soon outsmart us in an accelerating dash towards singularity.

Umm, great. That is indeed a scary thought. Except it's flawed, like many arguments by analogy: it's based on a false premise. Machines haven't in fact 'replaced us.' Last time I walked to the bathroom, I used my legs; next time I walk to the bathroom, I will use my legs; in fact, every time I walk to the bathroom, I will use my legs, until my legs don't work any more. At that point, and only at that point, will I use a wheelchair or prosthetic legs. Even then, I'm not one smidgen less of a human.

Machines have replaced only PART of what we do. The part we can't do. Or don't want to do. Not the part we do. And there is no evidence (unless you're a Luddite, which I think you're not) that it will ever be any different.

Now let's compare this with computers. They process, they analyze, they crunch, they evaluate, they sift, they compare...and we anthropomorphize. So, when Deep Blue defeats Kasparov at chess, we go, "Uh oh, there goes humanity, we've been beaten at our own game, time to buy that plot in Forest Lawn."

But let's take a look at what actually happened here. A narrowly defined task (e.g. playing chess) was contested by two narrowly defined players (e.g. with no stated interest other than winning a chess game) under narrowly defined conditions (e.g. the human was not allowed to unplug the machine).

Under these somewhat rigged circumstances, the 'machine' won. Good Lord, I would hope so! Because it did what computers do best (and what we do worst), which is to crunch through billions, trillions of data points in a mindless, mind-numbingly boring, brute-force search for the winning move. And it barely won. But even if next time, and every time thereafter, it won handily, it would still only ever have won a simple board game with simple, computer-friendly rules.
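
For the curious, here is what "mindless, brute-force search" looks like in its simplest form: exhaustive minimax on tic-tac-toe. This is a toy sketch of the core idea only, not Deep Blue's actual algorithm (which layered alpha-beta pruning, hand-tuned evaluation functions, and custom hardware on top of it):

```python
# Exhaustive minimax on tic-tac-toe: try every line of play to the end.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 if X wins, -1 if O wins."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player                      # make the move
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '                         # undo it
        if (best_move is None
                or (player == 'X' and score > best_score)
                or (player == 'O' and score < best_score)):
            best_score, best_move = score, m
    return best_score, best_move

score, move = minimax([' '] * 9, 'X')
print(f"perfect play from an empty board: score={score} (0 = draw), move={move}")
```

No insight, no understanding -- just visiting every reachable position. Scaling that same idea up to chess is what took the billions of data points (and the special hardware).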

Sure, we will (and do) have computers that are better than us at identifying tumors. Or figuring out the most efficient airline routes and schedules. Or forecasting more accurately what the weather will be like the day after tomorrow. But I put it to you that what we have here is a modern version of the (18th C.) Mechanical Turk. That is, a prosthetic mind.

There is NO evidence yet (except by this extrapolation, which I mistrust) that Deep Blue, or any Baby Blue descendant, will ever kick back, turn on the TV, and lazily flick open its favorite plastic-wrapped tuna sandwich. And if it did, which would shock me, I would ask you in what sense it was 'not human.'

In the meantime, life (real life, sentient or otherwise) is seething and bubbling everywhere right under our feet in the most incredible variety, mind-boggling complexity and utterly delightful, totally opportunistic chaos. And may well be doing so all across the universe. How exactly are we thinking to 'improve' on that?

I completely agree that the hypothetical singularity is in fact an event horizon. For the Snark was a Boojum, you see.