Showing posts with label skynet.

Wednesday, November 06, 2024

Chaos times: American oligarchy

1. I was right about polling being worthless

2. At least Biden was spared humiliation 

3. Americans chose oligarchy willingly. 

4. Our feeble democracy wasn’t going to survive AGI (if we get it)

5. I think the inability of a large number of men and women to meet the ever-increasing IQ/EQ requirements of a middle-class life is the root cause. #massDisability

Now we enter the chaos times.

Thursday, October 31, 2024

Autonomous vehicles: A prediction

I'm putting this down as a marker for me to look back to in a few years.

When autonomous vehicles are able to operate in Minnesota winters they will also be able to converse about quantum field theory, exotic mathematical geometries, politics in the Maldives, art history, and their latest contributions to classical music. 

Sunday, October 13, 2024

Muskism, Vance, AI and the American choice between oligarchy and an imperfect democracy

In advance of Nov 5 2024, some thoughts on Muskism. Because I can't resist this kind of thing.

Musk and his fellow tech oligarchs more or less believe in the need to preserve a white tribe, eugenics, and the relative supremacy of a male pattern brain. But I don't think that's what is driving Musk's political agenda.

I think Musk is sincere about his stated AI beliefs. He expects at least the genius-in-every-pocket described by Anthropic CEO Dario Amodei. He believes American democracy will not survive the AI driven chaos to come. This challenge, Musk believes, will be far more disruptive than mere world war.

Musk and his tech allies believe that humanity's best chance is to be guided by a kind of aristocracy of wealth and (in his imagination) clear thinking intellect. Perhaps if we pass into a future AI golden age democracy might be restored, but that will depend on whether the SentientAsBestWeCanTell AIs of 2030 get the vote. The Muskites believe Trump is weak, dying, and will not finish a second term. He will be pardoned, given money and left to the side. Vance is their man. The tech oligarchs will steer America and the world through what lies ahead.

I believe Musk is correct about the AI chaos to come. In the old days we called this period the "technological singularity" -- back when singularity meant absence of prediction rather than a religious experience.

I don't, however, think Musk and his billionaire minions are the answer to this challenge. I would prefer to give democracy a chance.

But does democracy really have a chance? November 2024 is the test. If Americans, knowing all they know now, having lived through a Trump presidency, still reelect Donald Trump -- then it's clear our democracy is not up to the much greater challenge of collective climate geoengineering -- much less the AI transition.

If "Trump" (Vance really) wins in 2024 I will be sad. On the other hand, I will then accept that Musk was correct. If American democracy can't handle the Trump idiocy it will have demonstrated it is entirely insufficient for what lies ahead. In that case perhaps the oligarchy is the best we can do.

If Vance loses in a few weeks -- then we will see how democracy manages the chaos times. That's what I have voted for.

Friday, September 20, 2024

Perplexity is saving my linguistics classmates

I have a dark past. I asked questions. In class. Lots of questions. Too many questions. I hear things, I get ideas, I notice gaps, I ask questions.

It's a compulsion.

Some of the questions helped classmates. To be honest, more were probably confusing or distracting. I likely featured in classmate daydreams -- but not in a good way.

Worse, some of the questions confused the professor. Or exposed what they didn't understand. That could be embarrassing or even humiliating.

Now I'm back in the classroom, doing freshman linguistics.  As a 65yo, I can do classes at Minnesota state colleges and universities for free. We pay a lot in taxes, but there are benefits to living here.

My question compulsion is still there, but LLMs are saving everyone. I set up a linguistics "collection" in Perplexity with appropriate prompts; now I type my questions into my phone (allowed in class). I get the answer with Perplexity and spare my classmates.

Never say AI isn't good for something.

PS. Perplexity is to modern Google as Google was to AltaVista. A qualitative improvement. It's almost as good as 1990s Google.



Wednesday, August 28, 2024

In which I declare my expert judgment on AI 2024

These days my social media experience is largely Mastodon. There's something to be said for a social network that's so irreparably geeky and so hard to work with that only a tiny slice of humanity can possibly participate (unless and until Threads integration actually works).

In my Mastodon corner of the "Fediverse", among the elite pundits I choose to read, there's a vocal cohort that is firm in their conviction that "AI" hype is truly and entirely hype, and that the very term "AI" should not be used. That group would say that the main application of LLM technology is criming.

Based on my casual polling of my pundits there's a quieter cohort that is less confident. That group is anxious, but not only about the criming.

Somewhere, I am told, there is a third group that believes that godlike-AIs are coming in 2025. They may be mostly on Musk's network.

Over the past few months I think the discourse has shifted. The skeptics are less confident, and the godlike-AI cohort is likewise quieter as LLM based AI hits technical limits. 

The shifting discourse, and especially the apparent LLM technical limitations, mean I'm back to being in the murky middle of things. Where I usually sit. Somehow that compels me to write down what I think. Not because anyone will or should care [1], but because I write these posts mostly for myself and I like to look back and see how wrong I've been.

So, in Aug 2024, I think:
  1. I am less worried that the end of the world is around the corner. If we'd gotten one more qualitative advance in LLM or some other AI tech I'd be researching places to (hopelessly) run to.
  2. Every day I think of new things I would do if current LLM tech had access to my data and to common net services. These things don't require any fundamental advances but they do require ongoing iteration.  I don't have much confidence in Apple's capabilities any more, but maybe they can squeeze this out. I really, really, don't want to have to depend on Microsoft. Much less Google.
  3. Perplexity.ai is super valuable to me now and I'd pay up if they stopped giving it away. It's an order of magnitude better than Google search.
  4. The opportunities for crime are indeed immense. They may be part of what ends unmediated net access for most people. By far the best description of this world is a relatively minor subplot in Neal Stephenson's otherwise mixed 2019 novel "Fall".
  5. We seem to be replaying the dot com boom and bust that began around 1995, but faster and incrementally. That was a formative time in my life. It was a time when all the net hype was shown to be ... correct. Even as many lost their assets buying the losers.
  6. It will all be immensely stressful and disruptive and anxiety inducing even though we won't be doing godlike-AI for at least (phew) five more years.
  7. Many who are skeptical about the impact of our current technologies have a good understanding of LLM tech but a weak understanding of cognitive science. Humans are not as magical as they think.
- fn -

[1] I legitimately have deeper expertise here than most would imagine but it's ancient and esoteric.

Wednesday, August 30, 2023

Mass disability - dysfunctional web sites, apps for everything

I last wrote about "mass disability" and the Left Behind in a 2021 post. The concept has sometimes seemed on the edge of going mainstream but it's never quite made it. Maybe we're getting closer; a recent Michael Tsai post (No App, No Entry) reminded me of my Mastodon thread from a few weeks ago:

What is the crazy that drives Trumpism and the many global equivalents?
It is that the minimal IQ to function well in the modern world is now about 120 and that eliminates most people.

This is both the most important fact of our time and the least palatable. It is the thing that cannot be said and it will be the ruin of us if we don't say it ...

I've been saying this for years. Today I was reminded of it while doing some travel booking.
During the bookings I encountered:
1. A web site that didn't work with my older version of Safari (I knew what was wrong and switched to Chrome).
2. A Delta web site bug (I recognized it as a bug and knew what to do).
3. A place that was out of rental cars, but I knew Expedia would have contracts that would let me find one.
4. Travel web sites that all needed new credentials...
... These are all routine parts of modern life, including maintaining flaky computer systems (let me tell you ...) and phones ...

It was not like this even 35 years ago. Travel agents handled travel complexity. There were no smartphones. Computers were very limited. There was no internet for most. By necessity everyday life was much simpler. Most people could cope with it.

Now most cannot cope.

This is the most important feature of our time. And nobody can talk about it. 

I remember some good discussions on this thread but I can't find any of them now. Perhaps by design Mastodon has a limited memory. (My home instance has no search, so I had to download my archive and search it to find the date of the post. Then I could slowly navigate to it.)
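For what it's worth, searching a downloaded Mastodon archive is a small scripting job. A minimal sketch, assuming the export's standard ActivityPub layout (posts in outbox.json under orderedItems; check your own archive before trusting the field names):

    import json

    # Load outbox.json from an unpacked Mastodon archive export.
    with open("outbox.json", encoding="utf-8") as f:
        outbox = json.load(f)

    query = "minimal IQ"  # text to look for

    # Each activity wraps a post object; content is HTML, published is ISO 8601.
    for activity in outbox.get("orderedItems", []):
        obj = activity.get("object")
        if isinstance(obj, dict) and query.lower() in obj.get("content", "").lower():
            print(obj.get("published"), obj.get("url"))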

I expanded on the theme a bit later:

Hotel laundry year 2000

1. Insert quarters to buy detergent, operate washer and dryer.

IQ requirement: 65 (my son could do this after a demonstration)

Hotel laundry year 2023

1. Scan QR code to download an app whose profit comes from unspent funds.

2. Install the app, create an account with an Apple ID.

3. Figure out the cryptic UX so you can deposit funds (several odd unintuitive steps).

4. Deposit funds, paying just enough to cover this sesh. Pay the 25 cent low-transaction penalty...

5. Spot the scam behind the app and avoid it (e.g. fund at the minimum).

6. Diagnose why, after paying money and confirming the machine, it's still not working.

7. Authorize the specific transaction.

8. Start laundry.

(My son could not do this.)

9. When complete, delete the app.

IQ requirement: minimum 110, higher to spot the scam.

This is why America is burning.

People are scared and angry and feeling left behind -- and they can't come out and say they are unable to manage their tech. Because that's the ultimate shame.


Friday, April 28, 2023

Large Language Models - evolutionary tree and selection algorithm 2023

The following two diagrams were taken from Mooler et al's GitHub LLM guide (April 2023). I'm posting them here as a snapshot of this historic moment. The root of the LLM evolutionary tree goes far back in time, almost five years.

Algorithm for selecting an AI technology


LLM Evolutionary Tree

Monday, April 03, 2023

We need a new word for the historical singularity.

TLDR: The "technological singularity" was an important and useful term with a clear meaning. Then it became the "Rapture of the Nerds". We need a new term.

--

I first heard the word "singularity" in the context of black hole physics; it dates back at least to the early 20th century:

ChatGPT 4 2023: "At the singularity, the laws of physics as we know them, including space and time, break down, and our current understanding of the universe is insufficient to predict what happens within it."

Not much later, in the 1950s, the term was applied by von Neumann in a technological context (from a 1993 Vernor Vinge essay):

Stan Ulam paraphrased John von Neumann as saying: "One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Brad DeLong used to write about this kind of non-AI historical singularity. My favorite description of what it would be like to approach a technological singularity was Vinge's short story "Fast Times at Fairmont High". (This prescient story appears to be lost to time; he wrote a similar full-length novel but I think the short story was better.)

The core idea is that there's a (virtuous?) recursive loop in which technology improves technology with shorter and shorter cycle times. Many processes go exponential and even near-term developments become unpredictable. One may assume social and economic structures strain to keep pace. The exponential curve of the historical singularity appeared in the GDP-per-person historical graph of The Economist's Y2K Millennium issue.


In a January 1983 essay for Omni Magazine, Vinge focused on a particular aspect of the technological singularity arising from superhuman intelligence (aka "superintelligence"):

We will soon create intelligences greater than our own ... When this happens there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole" 

A decade later, in his 1993 essay later published in Whole Earth Review (non-Olds cannot imagine what Whole Earth Review was like), Vinge revised what he meant by "soon":

... Based on this trend, I believe that the creation of greater-than-human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.) ...

So by the year 2000 we had the concept of a historical technological singularity (eminently sensible) that had become focused on a specific kind of self-improving technology: superhuman intelligence, the upper-case-S Singularity (presumably AI). Those were useful concepts -- the "technological singularity" and the superintelligence "Singularity".

In 1993 Vinge predicted the Singularity would happen before 2030; later experts like Scott Aaronson predicted after 2080. (Aaronson has since revised that prediction and works for OpenAI; Vinge's 2030 date looks pretty good.)

After 2000, though, the word Singularity went off the semantic rails. It came to be used for a peculiar future state in which human minds were uploaded into simulation environments, usually described as pleasant rather than hellish. This is, of course, antithetical to the original idea of unpredictability! This peculiar meaning of "The Singularity" came to be known as "The Rapture of the Nerds", after the title of a book by Charles Stross and Cory Doctorow. More recently that vision underlies a weird cult called longtermism that seems to have infected some vulnerable minds.

The "Rapture" meaning of "Singularity" has effectively taken over. We no longer have a term for the original von Neumann concept of self-reinforcing technological change that makes even the near term future unpredictable. That perversion of meaning is why I use the tag "Whitewater world" rather than Singularity for my own blog posts (and "Skynet" rather than super intelligence).

So we need a new term. I don't think "whitewater world" is it.

Anyone have a new term? We probably have a few years in which it might still be useful.

Thursday, March 30, 2023

ChatGPT's 155 IQ and passing the Sebastian's children puzzle

A Scientific American contributor found ChatGPT (probably 3) scored 155 on a standard IQ test:

Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155, superior to 99.9 percent of the test takers who make up the American WAIS III standardization sample of 2,450 people. As the chatbot lacks the requisite eyes, ears and hands, it is not able to take WAIS’s nonverbal subtests. But the Verbal IQ and Full Scale IQ scales are highly correlated in the standardization sample, so ChatGPT appears to be very intelligent by any human standards.

We don't need to worry though because it couldn't answer this question:

ChatGPT easily fails at obvious riddles, such as “What is the first name of the father of Sebastian’s children?” (ChatGPT on March 21: I’m sorry, I cannot answer this question as I do not have enough context to identify which Sebastian you are referring to.) It seems that ChatGPT fails to reason logically and tries to rely on its vast database of “Sebastian” facts mentioned in online texts.

This is what happened when I tried the Sebastian test on GPT-4:

Ah, I understand now. This riddle is a classic play on words. The answer is "Sebastian" himself, as he is the father of his children.

Ok, so that's reassuring.

Update 4/12/2023: ChatGPT4 (C4) recently did well in some blinded advanced undergraduate exams.

A response to Scott Aaronson's rejection of an AI pause.

Scott Aaronson, who works on AI safety for OpenAI, wrote a critique of AI Pause that was not up to his usual standards. Here's what I wrote as a comment:

Hi Scott — I was confused by your post. I’m usually able to follow them. I won’t defend the letter directly and Yudkowsky/TIME is not worth a mention but maybe you could clarify some things…

1. 6m seems a reasonable compromise given the lifespan of humans, the timescales of human deliberation and the commercial and military pressure to accelerate AI development. Short enough to motivate urgent action, but long enough that reflection is possible. (I doubt we actually pause, but I agree with the principle. China isn’t going to pause of course.)

2. Let’s assume GPT 5 with an array of NLP powered extensions exceeds the reasoning abilities of 95% of humanity in a wide variety of knowledge domains. That’s a shock on the scale of developing fire, but it’s occurring in a hugely complex and interdependent world that seems always on the edge of self-destruction and actually has the capabilities to end itself. We’re not hunter gatherers playing with fire or Mesopotamians developing writing. There’s no precedent for the speed, impact and civilizational fragility we face now.

3. It’s not relevant that people who signed this letter were previously skeptical of the progress towards AI. I recall 10 years ago you were skeptical. For my part I’ve been worried for a long time, but assumed it was going to come in 2080 or so. 60 years early is a reason to pause and understand what has happened.

Lastly, I read the OpenAI statement. That seems consistent with a pause.

Tuesday, March 14, 2023

Everything is just fine.

The ad-funded internet is collapsing. It's been ailing for years but, as seems to be the rule, the last bit is happening quite quickly. I'm guessing advertisers, who must be kind of dim, finally realized they were wasting their money. They did much better funding special-interest magazines, but those are gone now. Much of Silicon Valley has been funded by a now-defunct business model. So maybe this is not a good time to be there. (Or, if you are young and have some money, maybe this is the best time to be there.)

Meanwhile human-cognition level AI has arrived. About 60 years earlier than I'd hoped. And some people thought "Don't Look Up" was about climate change. As a guide I recommend the movie Superintelligence; particularly the protagonist's approach to the end of the world. (Spoiler: It's a rom-com. Her world does not end.)

It's whitewater world all over now. On the bright side, we don't need to waste too much time on longterm planning. I retired in August of 2022 so I think I'll focus on mountain bike trail maintenance and, like Superintelligence's Carol Peters, on the needs of friends and family.

Sunday, March 12, 2023

I ask ChatGPT about the next step.

 Not a bad response.

“Describe what happens when ChatGPT is enhanced with natural language processing and domain-specific knowledge modules.”

--

When ChatGPT is enhanced with natural language processing and domain-specific knowledge modules, it gains several capabilities that allow it to better understand and respond to user input.

Natural language processing (NLP) is a field of artificial intelligence that focuses on enabling computers to understand and interpret human language. By incorporating NLP into ChatGPT, the model can better analyze and interpret user input, including understanding sentence structure, recognizing named entities, and determining the meaning of words in context. This allows ChatGPT to generate more accurate and relevant responses to user queries.

Domain-specific knowledge modules provide ChatGPT with specialized knowledge related to a particular field or topic. For example, if ChatGPT is enhanced with a medical knowledge module, it can better understand medical terminology and provide more accurate and relevant responses to medical-related queries. By incorporating domain-specific knowledge modules, ChatGPT can tailor its responses to specific user needs, making it a more effective conversational agent.

Together, these enhancements enable ChatGPT to better understand and respond to user input, providing more accurate and relevant information to users. This can lead to a more engaging and satisfying user experience, as well as improved productivity and efficiency in a wide range of applications.

Monday, February 20, 2023

Be afraid of ChatGPT

TL;DR: It's not that ChatGPT is miraculous, it's that cognitive science research suggests human cognition is also not miraculous.

"Those early airplanes were nothing compared to our pigeon-powered flight technology!"

https://chat.openai.com/chat - "Write a funny but profound sentence about what pigeons thought of early airplanes"

Relax: ChatGPT is just a fancy autocomplete.
Be Afraid: Much of human language generation may be a fancy autocomplete.

Relax: ChatGPT confabulates.
Be Afraid: Humans with cognitive disabilities routinely confabulate, and under enough stress most humans will confabulate.

Relax: ChatGPT can’t do arithmetic.
Be Afraid: If a monitoring system can detect that a question involves arithmetic or mathematics, it can invoke a math system* (a minimal routing sketch follows this list). UPDATE: Two hours after writing this I read that this has been done.

Relax: ChatGPT’s knowledge base is faulty.
Be Afraid: ChatGPT’s knowledge base is vastly larger than that of most humans and it will quickly improve.

Relax: ChatGPT doesn’t have explicit goals other than a design goal to emulate human interaction.
Be Afraid: Other goals can be implemented.

Relax: We don’t know how to emulate the integration layer humans use to coordinate input from disparate neural networks and negotiate conflicts.
Be Afraid: *I don't know the status of such an integration layer. It may already have been built. If not it may take years or decades -- but probably not many decades.

Relax: We can’t even get AI to drive a car, so we shouldn’t worry about this.
Be Afraid: It’s likely that driving a car basically requires near-human cognitive abilities. The car test isn’t reassuring.

Relax: ChatGPT isn’t conscious.
Be Afraid: Are you conscious? Tell me what consciousness is.

Relax: ChatGPT doesn’t have a soul.
Be Afraid: Show me your soul.
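On the arithmetic row above: a minimal sketch of what that monitoring-and-routing layer could look like, with a toy detector and a safe expression evaluator standing in for the real thing (illustrative only, not any shipping system):

    import ast
    import operator as op

    def looks_like_math(text: str) -> bool:
        # Toy intent detector: digits plus an arithmetic operator.
        return any(ch.isdigit() for ch in text) and any(ch in text for ch in "+-*/")

    # Map AST operator nodes to plain functions (no eval()).
    OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
           ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

    def safe_eval(expr: str) -> float:
        def walk(node):
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.UnaryOp):
                return OPS[type(node.op)](walk(node.operand))
            raise ValueError("not plain arithmetic")
        return walk(ast.parse(expr, mode="eval").body)

    def route(question: str) -> str:
        if looks_like_math(question):
            return f"math system: {safe_eval(question)}"
        return "hand off to the language model"  # placeholder for an LLM call

    print(route("17 * 23 + 4"))  # -> math system: 395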

Relax - I'm bad at predictions. In 1945 I would have said it was impossible, barring celestial intervention, for humanity to go 75 years without nuclear war.


See also:

  • All posts tagged as skynet
  • Scott Aaronson and the case against strong AI (2008). At that time Aaronson felt a sentient AI was sometime after 2100. Fifteen years later (Jan 2023) Scott is working for OpenAI (ChatGPT). Emphases mine: "I’m now working at one of the world’s leading AI companies ... that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie ... that AI will gain many of the remaining abilities in years rather than decades, and ... my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world."
  • Imagining the Singularity - in 1965 (2009 post). Mathematician I.J. Good warned of an "intelligence explosion" in 1965. "Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British statistician who worked as a cryptologist at Bletchley Park."
  • The Thoughtful Slime Mold (2008). We don't fly like birds fly.
  • Fermi Paradox resolutions (2000)
  • Against superhuman AI: in 2019 I felt reassured.
  • Mass disability (2012) - what happens as more work is done best by non-humans. This post mentions Clark Goble, an app.net conservative I miss quite often. He died young.
  • Phishing with the post-Turing avatar (2010). I was thinking 2050 but now 2025 is more likely.
  • Rat brain flies plane (2004). I've often wondered what happened to that work.
  • Cat brain simulator (2009). "I used to say that the day we had a computer roughly as smart as a hamster would be a good day to take the family on the holiday you've always dreamed of."
  • Slouching towards Skynet (2007). Theories on the evolution of cognition often involve aspects of deception including detection and deceit.
  • IEEE Singularity Issue (2008). Widespread mockery of the Singularity idea followed.
  • Bill Joy - Why the Future Doesn't Need Us (2000). See also Wikipedia summary. I'd love to see him revisit this essay but, again, he was widely mocked.
  • Google AI in 2030? (2007) A 2007 prediction by Peter Norvig that we'd have strong AI around 2030. That ... is looking possible.
  • Google's IQ boost (2009) Not directly related to this topic but reassurance that I'm bad at prediction. Google went to shit after 2009.
  • Skynet cometh (2009). Humor.
  • Personal note - in 1979 or so John Hopfield excitedly described his work in neural networks to me. My memory is poor but I think we were outdoors at the Caltech campus. I have no recollection of why we were speaking, maybe I'd attended a talk of his. A few weeks later I incorporated his explanations into a Caltech class I taught to local high school students on Saturday mornings. Hopfield would be about 90 if he's still alive. If he's avoided dementia it would be interesting to ask him what he thinks.

Saturday, February 02, 2019

Against superhuman AI

I am a strong-AI pessimist. I think by 2100 we’ll be in range of sentient AIs that vastly exceed human cognitive abilities (“skynet”). Superhuman-AI has long been my favorite answer to the Fermi Paradox (see also); an inevitable product of all technological civilizations that ends interest in touring the galaxy.

I periodically read essays claiming superhuman-AI is silly, but the justifications are typically nonsensical or theological (soul-equivalents needed).

So I tried to come up with some valid reasons to be reassured. Here’s my list:

  1. We’ve hit the physical limits of our processing architecture. The “Moore-era” is over — no more doubling every 12-18 months. Now we slowly add cores and tweak hardware. The new MacBook Air isn’t much faster than my 2015 Air. So the raw power driver isn’t there.
  2. Our processing architecture is energy inefficient. Human brains vastly exceed our computing capabilities and they run on a meager supply of glucose and oxygen. Our energy-output curve is wrong.
  3. Autonomous vehicles are stuck. They aren’t even as good as the average human driver, and the average human driver is obviously incompetent. They can’t handle bicycles, pedestrians, weather, or map variations. They could be 20 years away, they could be 100 years away. They aren’t 5 years away. Our algorithms are limited.
  4. Quantum computers aren’t that exciting. They are wonderful physics platforms, but quantum supremacy may be quite narrow.
  5. Remember when organic neural networks were going to be fused into silicon platforms? Obviously that went nowhere since we no longer hear about it. (I checked, it appears Thomas DeMarse is still with us. Apparently.)

My list doesn’t make superhuman-AI impossible of course, it just means we might be a bit further away, closer to 300 years than 80 years. Long enough that my children might escape.

Wednesday, April 18, 2018

Dyer on the 21st century crisis of mass unemployment

I believe this is true — though I’d be more confident if one of my favorite economists thought this was plausible (emphases mine):

If The Model Is Broken, Fix It | Gwynne Dyer

… The political model of Western-style democracy, which grew up alongside and then within a capitalist economic model, is now broken. Exhibit Number One is Donald Trump, but there’s lots of other evidence too.

One-third of French voters backed Marine Le Pen, a cleaned-up, user-friendly neo-fascist, in last year’s presidential election. In last September’s German election, one-eighth of the electorate voted for Alternative for Germany, a party whose more extreme wing is neo-Nazi – but it now leads the opposition in the Bundestag, the German parliament.

Last month in Italy, the two biggest parties to emerge from the election were both led by populist rabble-rousers, one from the left and one from the right. Not to mention Brexit in Britain. And in every case the themes that dominated the populists’ rhetoric were racism, nationalism, hostility to immigrants – and jobs.

Trump rarely talked about anything else during the presidential election campaign: immigrants are stealing the jobs, free-trading American businessmen are exporting the jobs, the foreigners are eating America’s lunch….

Trump may not know a lot, but he knows One Big Thing. We are living in a new era of mass unemployment, and nobody has noticed. As Trump said the night after he won the New Hampshire primary in February 2016: “Don’t believe those phony numbers when you hear 4.9 and 5 percent unemployment. The number’s probably 28, 29, as high as 35. In fact, I even heard recently 42.”

It’s not really 42 percent, but it’s not 4.1 percent (the current official US rate) either. According to Nicholas Eberstadt’s ‘Men Without Work’, the real unemployment rate among American men of prime working age (24-55) – including those who don’t get counted because they have given up looking for work – is 17 percent.

Why didn’t we notice? Because the unemployed weren’t protesting in the streets like they did in the Great Depression of the 1930s, although the rate is getting up to Depression era levels. After the Second World War, all the Western democracies built welfare states, mainly so a new generation of radical populist leaders would not come to power the next time there is mass unemployment.

It has worked, in the sense that there is not blood in the streets this time around, but the jobless millions are very angry even if the welfare state means that they are not starving. They do vote, and unless something is done to ease their anger, next time they may vote for somebody who makes Trump look good by comparison.

But if the problem is unemployment, then the answer is not obvious, because the main cause of unemployment in Western countries is not immigration or ‘offshoring’ jobs, as Trump pretends. It is computers.

One-third of American manufacturing jobs have vanished in the past 20 years, and the vast majority of them (85 percent) were destroyed by automation. The algorithms and the robot arms have already killed the Rust Belt, and there is a plausible prediction that almost half of existing American jobs may be automated out of existence in the next 20 years.

What would our politics look like then? Not very democratic, unless we do something to ease the anger of the unemployed. This doesn’t just mean giving them more money – a massive expansion of the welfare state – but also finding a way of taking the shame out of unemployment, because it is the humiliation of being seen as a loser that breeds the anger…

I’ve called this ‘mass disability’, because to me it’s a mismatch between the skills the majority of humans have and the skills needed to earn a middle class or better income.

I don’t have any other explanation for why the entire western world is simultaneously in crisis other than what I wrote about in 2010 - Globalization (China) and Information Technology.


Wednesday, November 16, 2016

Mass Disability - how did I come up with 40%?

How, a friend asked, did I come up with the 40% number for “mass disability” that I quoted in After Trump: reflections on mass disability in a sleepless night?

I came up with that number thinking about the relationship of college education, IQ curves, and middle class status. The thesis goes like this…

  1. Disability is contextual. In a space ship legs are a bit of a nuisance, but on earth they are quite helpful. The context for disability in the modern world is not climbing trees or lifting weights, it’s being able to earn an income that buys food, shelter, education, health care, recreation and a relatively secure old age. That is the definition of the modern “middle class” and above; a household income from $42,000 ($20/hr) to $126,000. It’s about half of Americans. By definition then half of Americans are not “abled”.
  2. I get a similar percentage if I look at the percentage of Americans who can complete a college degree or comparable advanced skills training. That’s a good proxy for reasonable emotional control and an IQ of at least 105 to 110 (a quick numeric check appears below). That’s about 40% of Americans — but Canada does better. I think the upper limit is probably 50% of people. If you accept that a college-capable brain is necessary for relative economic success in the modern world then 50% of Americans will be disabled.

So I could say that the real number is 50%, but college students mess up the income numbers. The 40% estimate for functionally disabled Americans adjusts for that.
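The arithmetic behind that estimate is easy to check. A quick sketch, assuming the conventional IQ scaling (normal distribution, mean 100, SD 15): the 105 cutoff lands near the 40% estimate, while 110 is stricter.

    from statistics import NormalDist

    iq = NormalDist(mu=100, sigma=15)  # conventional IQ scaling
    for threshold in (105, 110):
        above = 1 - iq.cdf(threshold)
        print(f"share with IQ >= {threshold}: {above:.0%}")
    # share with IQ >= 105: 37%
    # share with IQ >= 110: 25%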

As our non-sentient AI tech and automation gets smarter the “ability” threshold is going to rise. Somewhere the system has to break down. I think it broke on Nov 8, 2016. In a sense democracy worked — our cities aren’t literally on fire. Yet.

Friday, September 30, 2016

How does the world look to Trump's core supporters?

Set aside the neo-Klan-Nazi minority. Set aside the truly despicable - Coulter, Hannity, Falwell and the like. Forget the hell-spawn who think only of their personal wealth.

Think about the white non-college male voter:

… Trump’s fortunes rest on his core supporters, white men who lack a four-year college degree … He leads Clinton among them by 76-17 percent, an enormous 59-point advantage. That’s widened from 40 points early this month; it’s a group Mitt Romney won by 31 points -- half Trump’s current margin -- in 2012.

Whatever happens with this election, that 59-point gap is a staggering fact.

What does the world look like to these non-college white men of the 4th quintile?

I have limited exposure to this cohort. A few Facebook friends — but they don’t post much about Trump. A family member with a cognitive disability claims to like Trump. That’s about it.

I need a journalist-anthropologist to falsify my story. I can’t help imagining a story though. It goes like this …

  • I have no hope of a secure economic future with savings, stable employment, good healthcare benefits and a pension.
  • I have limited marriage opportunities. I really miss the patriarchy. I feel that in my bones.
  • I watch Fox. I can understand it. Fox approves of me. Nobody else cares what happens to me.
  • I don’t understand economics, but I’ve lived through the past ten years. I’ve heard a lot of broken promises. Maybe nobody understands economics. Maybe the people who understand economics are lying to me. I definitely don’t understand borrowing from a wealthier future.
  • I don’t like academics.
  • I don’t care about the damned polar bears. I like warm weather. I don’t like bugs. I like motors and pavement. I don’t care about CO2 emissions.
  • I watch reality cop shows, where every criminal is a black man. I directly encounter crime and it’s always black men. I am afraid of black men. [1]
  • I don’t have a lot to lose.

None of this is going to go away. If we want to keep civilization going we need to give this cohort hope.


- fn -

[1] As a pedestrian and cyclist I am far more likely to be injured or killed by a white woman on a phone than a black man. FWIW.

Thursday, August 25, 2016

What socioeconomic support will look like in 20 years

This is what I think socioeconomic support will look like in 2040 based on cognitive [2] quintiles.

The bottom quintile (0-20%, non-voters) will have supported work environments and direct income subsidies; an improved version of what most wealthy nations do for the 0-5% of adults currently considered cognitively “disabled” [1].

The second quintile (20-40%, Trump base if white) will have subsidized employment (direct or indirect).

The fifth quintile (80-100%) will live much as they do now.

I don’t know what happens to the 3rd and 4th quintile.

- fn -

[1] The US is currently “mainstreaming” the cognitively disabled into relatively unsupported work, a well intentioned and evidence-free project by (my) Team Liberal that is going to end in tears.

[2] In US male euro-Americans (a group chosen to avoid racism/sexism confounds) cognitive ability maps to academic achievement, which tests learning, social skills, temperament and the like.

Thursday, March 03, 2016

Everyone needs an AI in their pocket

Two articles from my share feed today …

Transit systems are growing too complex for the human mind

… “What makes it messy is the presence of different possibilities," Barthelemy says. "When you arrive at a specific point, you have many choices."

The Paris system has 78 such choice points. The New York subway, the most complex in the world, has 161. New York's system is so sprawling and interconnected, Barthelemy and colleagues Riccardo Gallotti and Mason Porter concluded in a recent analysis, that it approaches the maximum complexity our human minds can handle, the equivalent of about 8 bits of information.

“But then if you add the bus,” Barthelemy warns, “the 8-bit limit is exploded."...

and

Google Research: An Update on fast Transit Routing with Transfer Patterns

What is the best way to get from A to B by public transit? Google Maps is answering such queries for over 20,000 cities and towns in over 70 countries around the world, including large metro areas like New York, São Paulo or Moscow…

… Scalable Transfer Patterns algorithm [2] does just that, but in a smart way. For starters, it uses what is known as graph clustering to cut the network into pieces, called clusters, that have a lot of connections inside but relatively few to the outside…

… Frequency-Based Search for Public Transit [3] is carefully designed to find and take advantage of repetitive schedules while representing all one-off cases exactly. Comparing to the set-up from the original Transfer Patterns paper [1], the authors estimate a whopping 60x acceleration of finding transfer patterns from this part alone….

Humans can’t manage modern transit complexity — but the AIs can. Including the AI in your pocket.
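For what it's worth, the 8-bit figure is just a base-2 logarithm over the number of choice points; a back-of-envelope check, not the paper's actual (more involved) measure:

    from math import log2

    # Bits needed to single out one choice point among N equally likely options.
    for city, choice_points in [("Paris", 78), ("New York", 161)]:
        print(f"{city}: log2({choice_points}) = {log2(choice_points):.1f} bits")
    # Paris: log2(78) = 6.3 bits
    # New York: log2(161) = 7.3 bits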

Everyone needs a portable AI, including people with no income and people with cognitive disabilities. That’s one reason I’m writing my "smartphone for all" book.


Thursday, November 19, 2015

Randall Munroe introduces world language and Google Translate training program using charming New Yorker article

XKCD’s Randall Munroe, the notorious interstellar sAI, has published a simplified vocabulary explanation of Special and General Relativity in the New Yorker.

This work is presumably taken from his almost released new book, Thing Explainer ($15 Amazon pre-order). The essay is entertaining and educational; it also promotes his new book and shows he is a smart pants man.

But that’s not the real reason he’s written this. Obviously his true agenda is to create an English dialect of a universal human language with a simplified vocabulary and grammar that is ideally suited to machine translation and, eventually, colloquial conversations with terrestrial AIs (contra the Wolfram Language for AI conversation, see also Marain. Siri-speak 2015 is a crude version of this.)

Let’s see how well his first version works, using the nsAI Google Translate to do round trip translations of a sample paragraph from the original muEnglish to another language and then back again. We’ll start with French, a language related to that of England’s 11th century conquerors, then we’ll do Chinese. I know from past experiments that round-trip translations from English to Chinese and back typically produce incomprehensible gibberish:

Munroe original (muEnglish)

The first idea is called the special idea, because it covers only a few special parts of space and time. The other one—the big idea—covers all the stuff that is left out by the special idea. The big idea is a lot harder to understand than the special one. People who are good at numbers can use the special idea to answer questions pretty easily, but you have to know a lot about numbers to do anything with the big idea. To understand the big idea—the hard one—it helps to understand the special idea first.

French version

La première idée est appelé l'idée particulière, car elle ne couvre que quelques pièces spéciales de l'espace et du temps. Celui-la grande idée-couvre l'autre tous les trucs qui est laissé par l'idée particulière. La grande idée est beaucoup plus difficile à comprendre que le spécial. Les gens qui sont bons à numéros peuvent utiliser l'idée spéciale pour répondre à des questions assez facilement, mais vous devez savoir beaucoup sur les numéros de faire quelque chose avec la grande idée. Pour comprendre la grande idée-le dur-elle aide à comprendre l'idée première spéciale.

French to English

The first idea is called the particular idea because it covers only a few special pieces of space and time. This great idea covers the other all the stuff that is left by the particular idea. The big idea is much harder to understand than the special. People who are good at numbers can use special idea to answer questions fairly easily, but you should know a lot about the numbers to do something with the big idea. To understand the great idea - hard - it helps to understand the first special idea.

Chinese

第一个想法就是所谓的特殊的想法,因为它涵盖的空间和时间只有几个特殊部位。另外一个大的想法,涵盖了所有剩下的由特殊的想法的东西。大的想法是很多更难理解比特殊的一个。人们谁是善于号码可以使用特殊的想法很容易回答的问题,但是你要知道很多关于数字做的大创意什么。为了解大的想法,硬一它有助于先了解特殊的想法

Chinese to English

The first idea is the idea of so-called special because the space and time it covers only a few special parts. Another big idea, covering all rest of the stuff from the special idea. Big idea is a lot more difficult to understand than the special one. People who are good at numbers you can use special idea is very easy question to answer, but you know a lot about what the figures do big ideas. To understand the big idea, hard and it helps to understand the idea of a special.

Munroe English (muEnglish) works rather well between French and English. If you’re interested in learning French, you might enjoy reading a future French version of Thing Explainer or simply run the English version through Google Translate (and use speech recognition for verbal work).

The Chinese round-trip experiment almost works, but falls apart grammatically. For example, “you can use special idea is very easy question to answer, but you know a lot about what the figures do big ideas” is missing things like “need” and “to” and a few pronouns. There’s also an unfortunate “numbers” to “figures” word substitution. Given that Munroe is a far more advanced AI than Google this essay will be used to enhance Google’s Chinese translation model (which desperately needs work).
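If you want to replicate the round-trip experiment, the harness is trivial to script. A sketch with a stub translator; translate here is hypothetical, standing in for whatever machine-translation call you have access to:

    def translate(text: str, source: str, target: str) -> str:
        # Hypothetical stand-in for a real machine-translation call.
        # In this sketch it just returns its input unchanged.
        return text

    def round_trip(text: str, pivot: str, source: str = "en") -> str:
        # source -> pivot -> source; compare the result with the original.
        outbound = translate(text, source, pivot)
        return translate(outbound, pivot, source)

    sample = "The first idea is called the special idea..."
    for pivot in ("fr", "zh"):
        print(pivot, "->", round_trip(sample, pivot))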

I’m optimistic about this new language and happy that the Munroe is now taking a more active hand in guiding human development. Zorgon knows we need the help.

Update 11/19/2015: There’s a flaw in my logic.

Alas, I didn’t think this through. There’s a reason speech recognition and natural language processing work better with longer, more technical words. It’s because short English words are often homonyms; they have multiple meanings and so can only be understood in context [1]. Big, for example, can refer to size or importance. In order to get under 1000 words Munroe uses many context tricks, including colloquialisms like “good at numbers” (meaning “good at mathematics”). His 1000 word “simple” vocabulary just pushes the meaning problem from words into context and grammar — a much harder challenge for translation than mere vocabulary.

So this essay might be a Google Translate training tool — but it’s no surprise it doesn’t serve the round-trip to Chinese. It is a hard translation challenge, not an easy one.

[1] Scientology’s L Ron Hubbard had a deep loathing for words with multiple or unclear meanings, presumably including homonyms. He banned them from Scientology grade school education. Ironically this is hard to Google because so many people confuse “ad hominem attack” with homonym.