Sunday, November 30, 2025

AI fear is rational and hate comes from fear

From a Bluesky thread of mine (lightly edited):

Josh Marshall comments on widespread hostility to ai in polls despite heavy ai use:

I think the polls are correctly capturing the zeitgeist. In a social consensus there is always a mix of independent contributing causes. Some are indirect and misleading, like despising Elon Musk.

But in this case I believe there are good reasons for people to fear ai. And what we fear we hate.

There is no way to assuage this valid fear. We are past the point of uncertainty: we already know ai will be very disruptive. And human societies are already stressed and breaking down in most wealthy nations...

If our societies were healthier, we would be talking publicly and intelligently about adaptations. Instead America has Idiocracy.

I believe there are ways to adapt to 2026- or 2027-level ai+memory+learning. If scaling stalls out, that is.

I would like people with more power and wealth than me to fund that public discussion while we wait to see whether Americans can turn from idiocy. If we have neither serious public discussion nor sane government, then we just ride it out and try to pick up the pieces. But one of those two would be good.

On a very much related topic, my post-2008 rantings on mass disability and the fading hope of a middle-class life are, in several weird ways, starting to go mainstream. It took a couple of decades. Of course I'm not the only one who's been going on about this on the periphery of intellectual discourse, but I'm pretty sure I'm the only person on earth who has looked at it through the lens of "disability".

Whether we call it "economic polarization" or "mass disability", it's fundamentally a post-IBM effect of automation interacting with the (relatively fixed) distribution of human talents and with the ability to outsource knowledge work globally. That effect is greatly accelerated by even 2025 ai, let alone 2026 and 2027 ai. It is the most crucial cause of our societal collapse.
