I am a strong-AI pessimist. I think by 2100 we'll be within range of sentient AIs that vastly exceed human cognitive abilities ("Skynet"). Superhuman AI has long been my favorite answer to the Fermi Paradox (see also): an inevitable product of every technological civilization, and one that ends interest in touring the galaxy.
I periodically read essays claiming superhuman AI is silly, but the justifications are typically nonsensical or theological (some soul-equivalent is required).
So I tried to come up with some valid reasons to be reassured. Here’s my list:
- We’ve hit the physical limits of our processing architecture. The Moore era is over: no more doubling every 12-18 months. Now we slowly add cores and tweak hardware. The new MacBook Air isn’t much faster than my 2015 Air. So the raw-power driver isn’t there.
- Our processing architecture is energy inefficient. Human brains vastly exceed our computing capabilities, and they run on a meager supply of glucose and oxygen. Our energy-to-output curve is all wrong.
- Autonomous vehicles are stuck. They aren’t even as good as the average human driver, and the average human driver is obviously incompetent. They can’t handle bicycles, pedestrians, weather, or map variations. They could be 20 years away; they could be 100 years away. They aren’t 5 years away. Our algorithms are limited.
- Quantum computers aren’t that exciting. They are wonderful physics platforms, but quantum supremacy may be quite narrow.
- Remember when organic neural networks were going to be fused into silicon platforms? That apparently went nowhere, since we no longer hear about it. (I checked; Thomas DeMarse is still with us, it seems.)
My list doesn’t make superhuman AI impossible, of course; it just means we might be a bit further away, closer to 300 years than 80. Long enough that my children might escape.
Just going through my bookmarks and was reminded I hadn't looked here in ages.
I think what will protect us from our AI overlords is destroying the planet as a place that can sustain our civilization sooner than we can create them. Though I remain surprised that I still don't see anyone (not being in the field, I only catch the popular press and random sources) discussing the need for agency and drives (the equivalents of hunger, sex, companionship, etc.) if we want anything resembling consciousness. Of course neural nets training on images don't have a clue what they're really doing: they're just crunching data and finding patterns, not interacting with an external world and trying to satisfy needs. Everything we're learning about nonhuman behavior and intelligence suggests consciousness is gradually emergent, and we're not trying to make AIs from which it might emerge. Which may be a good thing.