Now, since we're talking about the extinction of humanity within 40 to 300 years (I go for about 80-100 myself), you might think this would be a bit depressing. Well, it might be, except I've long known I'll be dead before 2070 and probably before 2050. Everyone I care about will be dead within about 110 years. These are the things we secular humanist types know, and yet we can be quite cheerful. Ok, not in my case. Less dour maybe.
The peri-Singular death of humanity is a serious matter for humans, but it's less inevitable than our personal exits, so by comparison the Fermi Paradox/Singularity schtick is more entertaining than grim. That's why I appreciated this comment on a recent post (edits and emphases mine, follow link for full text) ...
Comment by Augustine 11/29/09
... I don't trust predictions that are based on extrapolations from current rates of growth. These predictions are, and will be, correct, but only for limited time frames. Extend them out too far and they become absurd. Moore's Law works fine, and will continue to work fine for a while I'm sure, but basing predictions on ever accelerating computing power is about as useful as imagining accelerating a given mass to the speed of light.

The greater problem, however, with the argument lies in the fact that we are at best imperfect predictors ... You cannot accurately infer a future singularity when you cannot know what will change the game before it happens, if you get my drift...

There's more to the post, but I'll stick with these two questions. The "limits to exponential growth" argument is even stronger than stated here since, in fact, Moore's Law itself has already failed. We have some more doublings to go, but each one is taking longer than the last.
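To see why the extrapolation breaks down, here's a toy sketch (my own illustration with made-up numbers, not anything from Augustine's comment): compare a naive fixed-doubling-time projection against one where each successive doubling takes, say, 20% longer than the last.

```python
# Toy illustration: naive Moore's Law extrapolation vs. doublings that keep slowing down.
# All starting values and growth parameters here are invented for the sketch.

def fixed_doubling(start, doubling_years, horizon_years):
    """Projected capability if every doubling takes the same amount of time."""
    return start * 2 ** (horizon_years / doubling_years)

def slowing_doubling(start, first_doubling_years, stretch, horizon_years):
    """Projected capability if each successive doubling takes `stretch` times longer."""
    value, elapsed, period = start, 0.0, first_doubling_years
    while elapsed + period <= horizon_years:
        elapsed += period
        value *= 2
        period *= stretch  # each doubling takes longer than the last
    return value

if __name__ == "__main__":
    for years in (10, 20, 40):
        naive = fixed_doubling(1.0, 2.0, years)
        slowed = slowing_doubling(1.0, 2.0, 1.2, years)
        print(f"{years:>3} yrs: naive x{naive:,.0f}  vs. slowing x{slowed:,.0f}")
```

Over 40 years the naive projection overshoots the slowing one by a few thousandfold, which is the sense in which exponential extrapolations "become absurd" past a limited time frame.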
So maybe we'll never have the technology to make a super-human AI. I think we'll make at least a human-class AI, if only because we've made billions of human-level DIs (DNA-Intelligences). Even if computers only get five more doublings in, I think we'll figure out a way to cobble something together that merits legal protection, a vote, and universal healthcare. (Ok, so the AI will come sooner than universal healthcare.)
So we get our AI, and it's very smart, but it's comprehensible (Aaronson put this well). So this is certainly disruptive, but it's no singularity. On the other circuit, it does seem odd that today's average human would represent the pinnacle of cognition. Our brains are really crappy. Sure, the associative cortices are neat, but the I/O channels are pathetic. A vast torrent of data washes over our retina -- and turns into hugely compressed, lossy throughput along a clogged input channel. We can barely juggle five disparate concepts in working memory. Surely we can improve on that!
So I'm afraid that Newton/Einstein/Feynman-class minds do not represent a physical pinnacle of cognition. We'll most likely get something at least 10 times smarter. Something that makes things even smarter and faster, that can continuously improve and extend cognitive abilities until we start to approach the physical limits of computation. Before that happens, though, the earth will have been turned into "computronium" -- and my atoms will be somewhere in orbit.
As to the second objection, that we can't imagine a singularity because we can only reason within the system we know, I think that's actually the point. We can't imagine what comes after the world of the super-human minds because -- well, we don't have the words for that world. We can reason within the system we know until sometime close to when these critters come online, then we can't.
That doesn't mean humanity necessarily kicks off. Lots of geeks imagine we'll upload our minds into unoccupied (!) processing environments, or that the AIs will be sentimental. Not everyone is as cheerily pessimistic as me. It's not called a "Singularity" because it's the "end"; it's because we can't make predictions about it. Super-AI is death to prediction.