Showing posts with label skynet.

Wednesday, August 28, 2024

In which I declare my expert judgment on AI 2024

These days my social media experience is largely Mastodon. There's something to be said for a social network that's so irreparably geeky and so hard to work with that only a tiny slice of humanity can possibly participate (unless and until Threads integration actually works).

In my Mastodon corner of the "Fediverse," among the elite pundits I choose to read, there's a vocal cohort that is firm in their conviction that "AI" hype is truly and entirely hype, and that the very term "AI" should not be used. That group would say that the main application of LLM technology is criming.

Based on my casual polling of my pundits there's a quieter cohort that is less confident. That group is anxious, but not only about the criming.

Somewhere, I am told, there is a third group that believes that godlike-AIs are coming in 2025. They may be mostly on Musk's network.

Over the past few months I think the discourse has shifted. The skeptics are less confident, and the godlike-AI cohort is likewise quieter as LLM-based AI hits technical limits.

The shifting discourse, and especially the apparent LLM technical limitations, mean I'm back to being in the murky middle of things. Where I usually sit. Somehow that compels me to write down what I think. Not because anyone will or should care [1], but because I write these posts mostly for myself and I like to look back and see how wrong I've been.

So, in Aug 2024, I think:
  1. I am less worried that the end of the world is around the corner. If we'd gotten one more qualitative advance in LLM or some other AI tech I'd be researching places to (hopelessly) run to.
  2. Every day I think of new things I would do if current LLM tech had access to my data and to common net services. These things don't require any fundamental advances but they do require ongoing iteration.  I don't have much confidence in Apple's capabilities any more, but maybe they can squeeze this out. I really, really, don't want to have to depend on Microsoft. Much less Google.
  3. Perplexity.ai is super valuable to me now and I'd pay up if they stopped giving it away. It's an order of magnitude better than Google search.
  4. The opportunities for crime are indeed immense. They may be part of what ends unmediated net access for most people. By far the best description of this world is a relatively minor subplot in Neal Stephenson's otherwise mixed 2019 novel "Fall".
  5. We seem to be replaying the dot-com boom and crash of the late 1990s, but faster and incrementally. That was a formative time in my life. It was a time when all the net hype was shown to be .... correct. Even as many lost their assets buying the losers.
  6. It will all be immensely stressful and disruptive and anxiety inducing even though we won't be doing godlike-AI for at least (phew) five more years.
  7. Many who are skeptical about the impact of our current technologies have a good understanding of LLM tech but a weak understanding of cognitive science. Humans are not as magical as they think.
- fn -

[1] I legitimately have deeper expertise here than most would imagine but it's ancient and esoteric.

Wednesday, August 30, 2023

Mass disability - dysfunctional web sites, apps for everything

I last wrote about "mass disability" and the Left Behind in a 2021 post. The concept has sometimes seemed on the edge of going mainstream but it's never quite made it. Maybe we're getting closer; a recent Michael Tsai post (No App, No Entry) reminded me of my Mastodon thread from a few weeks ago:

What is the crazy that drives Trumpism and the many global equivalents?
It is that the minimal IQ to function well in the modern world is now about 120 and that eliminates most people.

This is both the most important fact of our time and the least palatable. It is the thing that cannot be said and it will be the ruin of us if we don't say it ...

I've been saying this for years. Today I was reminded of it while doing some travel booking.
During the bookings I encountered:
1. A web site that didn't work with my older version of Safari (I knew what was wrong and switched to Chrome).
2. A Delta web site bug (I recognized it as a bug and knew what to do).
3. A place that was out of rental cars, but I knew Expedia would have contracts that would let me find one.
4. Travel web sites that all needed new credentials...
... These are all routine parts of modern life including maintaining flaky computer systems (let me tell you ...) and phones ...

It was not like this even 35 years ago. Travel agents handled travel complexity. There were no smartphones. Computers were very limited. There was no internet for most. By necessity everyday life was much simpler. Most people could cope with it.

Now most cannot cope.

This is the most important feature of our time. And nobody can talk about it. 

I remember some good discussions on this thread but I can't find any of them now. Perhaps by design Mastodon has a limited memory. (My home instance has no search, so I had to download my archive and search it to find the date of the post. Then I could slowly navigate to it.)

I expanded on the theme a bit later:

Hotel laundry year 2000

1. Insert quarters to buy detergent, operate washer and dryer.

IQ requirement: 65 (my son could do this after a demonstration)

Hotel laundry year 2023

1. Scan QR code to download app whose profit comes from unspent funds.

2. Install app, create account with Apple ID.

3. Figure out cryptic UX so you can deposit funds (several odd, unintuitive steps).

4. Deposit funds, paying just enough to cover this sesh. Pay the 25 cent low-transaction penalty...

5. Spot the scam behind the app and avoid it (e.g. fund at minimum).

6. Diagnose why, after paying money and confirming the machine, it’s still not working.

7. Authorize specific transaction.

8. Start laundry.

(My son could not do this.)

9. When complete, delete app.

IQ requirement: minimum 110, higher to spot the scam.

This is why America is burning.

People are scared and angry and feeling left behind -- and they can't come out and say they are unable to manage their tech. Because that's the ultimate shame.


Friday, April 28, 2023

Large Language Models - evolutionary tree and selection algorithm 2023

The following two diagrams were taken from Mooler et al's GitHub LLM guide (April 2023). I'm posting them here as a snapshot of this historic moment. The root of the LLM evolutionary tree goes far back in time, almost five years ago.

Algorithm for selecting an AI technology


LLM Evolutionary Tree

Monday, April 03, 2023

We need a new word for the historical singularity.

TLDR: The "technological singularity" was an important and useful term with a clear meaning. Then it became the "Rapture of the Nerds". We need a new term.

--

I first heard the word "singularity" in the context of black hole physics; it dates back at least to the early 20th century:

ChatGPT 4 2023: "At the singularity, the laws of physics as we know them, including space and time, break down, and our current understanding of the universe is insufficient to predict what happens within it."

Not much later, in the 1950s, the term was applied by von Neumann in a technological context (from a 1993 Vernor Vinge essay):

Stan Ulam paraphrased John von Neumann as saying: "One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Brad DeLong used to write about this kind of non-AI historical singularity. My favorite description of what it would be like to approach a technological singularity was Vinge's short story "Fast Times at Fairmont High". (This prescient story appears to be lost to time; he wrote a similar full-length novel but I think the short story was better.)

The core idea is that there's a (virtuous?) recursive loop in which technology improves technology with shorter and shorter cycle times. Many processes go exponential and even near-term developments become unpredictable. One may assume social and economic structures strain to keep pace. The historical singularity exponential curve was part of the GDP-per-person historical graph in The Economist's Y2K Millennium issue.
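To make the recursion concrete, here is a toy numeric sketch (every parameter is an illustrative assumption, not a prediction): if each improvement cycle is a fixed fraction shorter than the last, the cycle times form a geometric series, so an unbounded number of doublings fits inside a finite span of years. That finite horizon is the "singularity".

```python
# Toy model of the recursive-improvement loop. All parameters are
# illustrative assumptions. Each generation doubles capability and
# shortens the next cycle by a constant factor, so cycle times form a
# geometric series with a finite sum: the "singularity" horizon.
r = 0.8            # each cycle is 20% shorter than the last (assumption)
interval = 10.0    # first improvement cycle, in years (assumption)
t, capability = 0.0, 1.0
for generation in range(60):
    t += interval        # wait out this improvement cycle
    capability *= 2      # technology improves technology
    interval *= r        # ...and shortens the next cycle
print(f"60 generations take {t:.1f} years (limit: {10.0/(1 - r):.0f} years)")
print(f"capability multiplier: {capability:.3g}")
```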


In a January 1983 essay for Omni Magazine, Vinge focused on a particular aspect of the technological singularity arising from superhuman intelligence (aka "superintelligence"):

We will soon create intelligences greater than our own ... When this happens there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole" 

A decade later, in his 1993 essay later published in Whole Earth Review (non-Olds cannot imagine what Whole Earth Review was like), Vinge revised what he meant by "soon":

... Based on this trend, I believe that the creation of greater-than-human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.) ...

So by the year 2000 we had the concept of a historical technological singularity (eminently sensible) that had become focused on a specific kind of self-improving technology: superhuman intelligence, the upper-case-S Singularity (presumably AI). Those were useful terms: the "technological singularity" and the superintelligence "Singularity".

In 1993 Vinge predicted the Singularity would happen before 2030; later experts like Scott Aaronson predicted after 2080. (Aaronson has since revised that prediction and works for OpenAI; Vinge's 2030 date looks pretty good.)

After 2000, though, the word Singularity went off the semantic rails. It came to be used for a peculiar future state in which human minds were uploaded into simulation environments that were usually described as pleasant rather than hellish. This is, of course, antithetical to the original idea of unpredictability! This peculiar meaning of "The Singularity" came to be known as "The Rapture of the Nerds", after the title of a book by Charles Stross and Cory Doctorow. More recently that vision underlies a weird cult called longtermism that seems to have infected some vulnerable minds.

The "Rapture" meaning of "Singularity" has effectively taken over. We no longer have a term for the original von Neumann concept of self-reinforcing technological change that makes even the near term future unpredictable. That perversion of meaning is why I use the tag "Whitewater world" rather than Singularity for my own blog posts (and "Skynet" rather than super intelligence).

So we need a new term. I don't think "whitewater world" is it.

Anyone have a new term? We probably have a few years in which it might still be useful.

Thursday, March 30, 2023

ChatGPT's 155 IQ and passing the Sebastian's children puzzle

A Scientific American contributor found ChatGPT (probably 3) scored 155 on a standard IQ test:

Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155, superior to 99.9 percent of the test takers who make up the American WAIS III standardization sample of 2,450 people. As the chatbot lacks the requisite eyes, ears and hands, it is not able to take WAIS’s nonverbal subtests. But the Verbal IQ and Full Scale IQ scales are highly correlated in the standardization sample, so ChatGPT appears to be very intelligent by any human standards.

We don't need to worry though because it couldn't answer this question:

ChatGPT easily fails at obvious riddles, such as “What is the first name of the father of Sebastian’s children?” (ChatGPT on March 21: I’m sorry, I cannot answer this question as I do not have enough context to identify which Sebastian you are referring to.) It seems that ChatGPT fails to reason logically and tries to rely on its vast database of “Sebastian” facts mentioned in online texts.

This is what happened when I tried the Sebastian test on GPT-4:

Ah, I understand now. This riddle is a classic play on words. The answer is "Sebastian" himself, as he is the father of his children.

Ok, so that's reassuring.

Update 4/12/2023: ChatGPT4 (C4) recently did well in some blinded advanced undergraduate exams.

A response to Scott Aaronson's rejection of an AI pause.

Scott Aaronson, who works on AI safety for OpenAI, wrote a critique of AI Pause that was not up to his usual standards. Here's what I wrote as a comment:

Hi Scott — I was confused by your post. I’m usually able to follow them. I won’t defend the letter directly and Yudkowsky/TIME is not worth a mention but maybe you could clarify some things…

1. 6m seems a reasonable compromise given the lifespan of humans, the timescales of human deliberation and the commercial and military pressure to accelerate AI development. Short enough to motivate urgent action, but long enough that reflection is possible. (I doubt we actually pause, but I agree with the principle. China isn’t going to pause of course.)

2. Let’s assume GPT 5 with an array of NLP powered extensions exceeds the reasoning abilities of 95% of humanity in a wide variety of knowledge domains. That’s a shock on the scale of developing fire, but it’s occurring in a hugely complex and interdependent world that seems always on the edge of self-destruction and actually has the capabilities to end itself. We’re not hunter gatherers playing with fire or Mesopotamians developing writing. There’s no precedent for the speed, impact and civilizational fragility we face now.

3. It’s not relevant that people who signed this letter were previously skeptical of the progress towards AI. I recall 10 years ago you were skeptical. For my part I’ve been worried for a long time, but assumed it was going to come in 2080 or so. 60 years early is a reason to pause and understand what has happened.

Lastly, I read the OpenAI statement. That seems consistent with a pause.

Tuesday, March 14, 2023

Everything is just fine.

The ad-funded internet is collapsing. It's been ailing for years but, as seems to be the rule, the last bit is happening quite quickly. I'm guessing advertisers, who must be kind of dim, finally realized they were wasting their money. They did much better funding special interest magazines but those are gone now. Much of Silicon Valley has been funded by a now defunct business model. So maybe this is not a good time to be there. (Or, if you are young and have some money, maybe this is the best time to be there.)

Meanwhile human-cognition level AI has arrived. About 60 years earlier than I'd hoped. And some people thought "Don't Look Up" was about climate change. As a guide I recommend the movie Superintelligence; particularly the protagonist's approach to the end of the world. (Spoiler: It's a rom-com. Her world does not end.)

It's whitewater world all over now. On the bright side, we don't need to waste too much time on longterm planning. I retired in August of 2022 so I think I'll focus on mountain bike trail maintenance and, like Superintelligence's Carol Peters, on the needs of friends and family.

Sunday, March 12, 2023

I ask ChatGPT about the next step.

 Not a bad response.

“Describe what happens when ChatGPT is enhanced with natural language processing and domain-specific knowledge modules.”

--


“When ChatGPT is enhanced with natural language processing and domain-specific knowledge modules, it gains several capabilities that allow it to better understand and respond to user input.

Natural language processing (NLP) is a field of artificial intelligence that focuses on enabling computers to understand and interpret human language. By incorporating NLP into ChatGPT, the model can better analyze and interpret user input, including understanding sentence structure, recognizing named entities, and determining the meaning of words in context. This allows ChatGPT to generate more accurate and relevant responses to user queries.

Domain-specific knowledge modules provide ChatGPT with specialized knowledge related to a particular field or topic. For example, if ChatGPT is enhanced with a medical knowledge module, it can better understand medical terminology and provide more accurate and relevant responses to medical-related queries. By incorporating domain-specific knowledge modules, ChatGPT can tailor its responses to specific user needs, making it a more effective conversational agent.

Together, these enhancements enable ChatGPT to better understand and respond to user input, providing more accurate and relevant information to users. This can lead to a more engaging and satisfying user experience, as well as improved productivity and efficiency in a wide range of applications.”

Monday, February 20, 2023

Be afraid of ChatGPT

TL;DR: It's not that ChatGPT is miraculous, it's that cognitive science research suggests human cognition is also not miraculous.

"Those early airplanes were nothing compared to our pigeon-powered flight technology!"

https://chat.openai.com/chat - "Write a funny but profound sentence about what pigeons thought of early airplanes"

Relax: ChatGPT is just a fancy autocomplete.
Be afraid: Much of human language generation may be a fancy autocomplete.

Relax: ChatGPT confabulates.
Be afraid: Humans with cognitive disabilities routinely confabulate, and under enough stress most humans will confabulate.

Relax: ChatGPT can’t do arithmetic.
Be afraid: If a monitoring system can detect that a question involves arithmetic or mathematics, it can invoke a math system*. (UPDATE: Two hours after writing this I read that this has been done.)

Relax: ChatGPT’s knowledge base is faulty.
Be afraid: ChatGPT’s knowledge base is vastly larger than that of most humans and it will quickly improve.

Relax: ChatGPT doesn’t have explicit goals other than a design goal to emulate human interaction.
Be afraid: Other goals can be implemented.

Relax: We don’t know how to emulate the integration layer humans use to coordinate input from disparate neural networks and negotiate conflicts.*

Relax: We can’t even get AI to drive a car, so we shouldn’t worry about this.
Be afraid: It’s likely that driving a car basically requires near-human cognitive abilities. The car test isn’t reassuring.

Relax: ChatGPT isn’t conscious.
Be afraid: Are you conscious? Tell me what consciousness is.

Relax: ChatGPT doesn’t have a soul.
Be afraid: Show me your soul.

Relax: I'm bad at predictions. In 1945 I would have said it was impossible, barring celestial intervention, for humanity to go 75 years without nuclear war.

* I don't know the status of such an integration layer. It may already have been built. If not it may take years or decades, but probably not many decades.
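The math-system row above is easy to sketch. Here is a minimal illustration of that kind of routing (the detection rule and the call_llm stub are my assumptions, not any vendor's design): try an exact arithmetic path first, and send everything else to the model.

```python
# Minimal sketch of the "monitoring system" row above (the routing rule
# and the call_llm stub are illustrative assumptions, not any vendor's
# design): questions that parse as pure arithmetic get an exact
# evaluator; everything else goes to the language model.
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def arithmetic_eval(expr: str) -> float:
    """Evaluate a pure arithmetic expression via its AST (no eval())."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not pure arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def call_llm(question: str) -> str:
    """Stand-in for a real model call."""
    return f"[LLM response to: {question}]"

def answer(question: str) -> str:
    try:
        return str(arithmetic_eval(question))  # exact math path
    except (ValueError, SyntaxError, KeyError):
        return call_llm(question)              # fall back to the LLM

print(answer("3 * (17 + 4)"))    # exact: 63
print(answer("Who was Vinge?"))  # routed to the model
```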


See also:

  • All posts tagged as skynet
  • Scott Aaronson and the case against strong AI (2008). At that time Aaronson felt a sentient AI was sometime after 2100. Fifteen years later (Jan 2023) Scott is working for OpenAI (ChatGPT). Emphases mine: "I’m now working at one of the world’s leading AI companies ... that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie ... that AI will gain many of the remaining abilities in years rather than decades, and .. my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world."
  • Imagining the Singularity - in 1965 (2009 post). Mathematician I.J. Good warned of an "intelligence explosion" in 1965. "Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British statistician who worked as a cryptologist at Bletchley Park."
  • The Thoughtful Slime Mold (2008). We don't fly like birds fly.
  • Fermi Paradox resolutions (2000)
  • Against superhuman AI: in 2019 I felt reassured.
  • Mass disability (2012) - what happens as more work is done best by non-humans. This post mentions Clark Goble, an app.net conservative I miss quite often. He died young.
  • Phishing with the post-Turing avatar (2010). I was thinking 2050 but now 2025 is more likely.
  • Rat brain flies plane (2004). I've often wondered what happened to that work.
  • Cat brain simulator (2009). "I used to say that the day we had a computer roughly as smart as a hamster would be a good day to take the family on the holiday you've always dreamed of."
  • Slouching towards Skynet (2007). Theories on the evolution of cognition often involve aspects of deception including detection and deceit.
  • IEEE Singularity Issue (2008). Widespread mockery of the Singularity idea followed.
  • Bill Joy - Why the Future Doesn't Need Us (2000). See also Wikipedia summary. I'd love to see him revisit this essay but, again, he was widely mocked.
  • Google AI in 2030? (2007) A 2007 prediction by Peter Norvig that we'd have strong AI around 2030. That ... is looking possible.
  • Google's IQ boost (2009) Not directly related to this topic but reassurance that I'm bad at prediction. Google went to shit after 2009.
  • Skynet cometh (2009). Humor.
  • Personal note - in 1979 or so John Hopfield excitedly described his work in neural networks to me. My memory is poor but I think we were outdoors at the Caltech campus. I have no recollection of why we were speaking, maybe I'd attended a talk of his. A few weeks later I incorporated his explanations into a Caltech class I taught to local high school students on Saturday mornings. Hopfield would be about 90 if he's still alive. If he's avoided dementia it would be interesting to ask him what he thinks.

Saturday, February 02, 2019

Against superhuman AI

I am a strong-AI pessimist. I think by 2100 we’ll be in range of sentient AIs that vastly exceed human cognitive abilities (“skynet”). Superhuman-AI has long been my favorite answer to the Fermi Paradox (see also); an inevitable product of all technological civilizations that ends interest in touring the galaxy.

I periodically read essays claiming superhuman-AI is silly, but the justifications are typically nonsensical or theological (soul-equivalents needed).

So I tried to come up with some valid reasons to be reassured. Here’s my list:

  1. We’ve hit the physical limits of our processing architecture. The “Moore-era” is over — no more doubling every 12-18 months. Now we slowly add cores and tweak hardware. The new MacBook Air isn’t much faster than my 2015 Air. So the raw power driver isn’t there.
  2. Our processing architecture is energy inefficient. Human brains vastly exceed our computing capabilities and they run on a meager supply of glucose and oxygen. Our energy-output curve is wrong.
  3. Autonomous vehicles are stuck. They aren’t even as good as the average human driver, and the average human driver is obviously incompetent. They can’t handle bicycles, pedestrians, weather, or map variations. They could be 20 years away, they could be 100 years away. They aren’t 5 years away. Our algorithms are limited.
  4. Quantum computers aren’t that exciting. They are wonderful physics platforms, but quantum supremacy may be quite narrow.
  5. Remember when organic neural networks were going to be fused into silicon platforms? Obviously that went nowhere since we no longer hear about it. (I checked, it appears Thomas DeMarse is still with us. Apparently.)

My list doesn’t make superhuman-AI impossible of course, it just means we might be a bit further away, closer to 300 years than 80 years. Long enough that my children might escape.

Wednesday, April 18, 2018

Dyer on the 21st century crisis of mass unemployment

I believe this is true — though I’d be more confident if one of my favorite economists thought this was plausible (emphases mine):

If The Model Is Broken, Fix It | Gwynne Dyer

… The political model of Western-style democracy, which grew up alongside and then within a capitalist economic model, is now broken. Exhibit Number One is Donald Trump, but there’s lots of other evidence too.

One-third of French voters backed Marine Le Pen, a cleaned-up, user-friendly neo-fascist, in last year’s presidential election. In last September’s German election, one-eighth of the electorate voted for Alternative for Germany, a party whose more extreme wing is neo-Nazi – but it now leads the opposition in the Bundestag, the German parliament.

Last month in Italy, the two biggest parties to emerge from the election were both led by populist rabble-rousers, one from the left and one from the right. Not to mention Brexit in Britain. And in every case the themes that dominated the populists’ rhetoric were racism, nationalism, hostility to immigrants – and jobs.

Trump rarely talked about anything else during the presidential election campaign: immigrants are stealing the jobs, free-trading American businessmen are exporting the jobs, the foreigners are eating America’s lunch….

Trump may not know a lot, but he knows One Big Thing. We are living in a new era of mass unemployment, and nobody has noticed. As Trump said the night after he won the New Hampshire primary in February 2016: “Don’t believe those phony numbers when you hear 4.9 and 5 percent unemployment. The number’s probably 28, 29, as high as 35. In fact, I even heard recently 42.”

It’s not really 42 percent, but it’s not 4.1 percent (the current official US rate) either. According to Nicholas Eberstadt’s ‘Men Without Work’, the real unemployment rate among American men of prime working age (24-55) – including those who don’t get counted because they have given up looking for work – is 17 percent.

Why didn’t we notice? Because the unemployed weren’t protesting in the streets like they did in the Great Depression of the 1930s, although the rate is getting up to Depression era levels. After the Second World War, all the Western democracies built welfare states, mainly so a new generation of radical populist leaders would not come to power the next time there is mass unemployment.

It has worked, in the sense that there is not blood in the streets this time around, but the jobless millions are very angry even if the welfare state means that they are not starving. They do vote, and unless something is done to ease their anger, next time they may vote for somebody who makes Trump look good by comparison.

But if the problem is unemployment, then the answer is not obvious, because the main cause of unemployment in Western countries is not immigration or ‘offshoring’ jobs, as Trump pretends. It is computers.

One-third of American manufacturing jobs have vanished in the past 20 years, and the vast majority of them (85 percent) were destroyed by automation. The algorithms and the robot arms have already killed the Rust Belt, and there is a plausible prediction that almost half of existing American jobs may be automated out of existence in the next 20 years.

What would our politics look like then? Not very democratic, unless we do something to ease the anger of the unemployed. This doesn’t just mean giving them more money – a massive expansion of the welfare state – but also finding ways of taking the shame out of unemployment, because it is the humiliation of being seen as a loser that breeds the anger…

I’ve called this ‘mass disability’, because to me it’s a mismatch between the skills the majority of humans have and the skills needed to earn a middle class or better income.

I don’t have any other explanation for why the entire western world is simultaneously in crisis other than what I wrote about in 2010 - Globalization (China) and Information Technology.


Wednesday, November 16, 2016

Mass Disability - how did I come up with 40%?

How, a friend asked, did I come up with the 40% number for “mass disability” that I quoted in After Trump: reflections on mass disability in a sleepless night?

I came up with that number thinking about the relationship of college education, IQ curves, and middle class status. The thesis goes like this…

  1. Disability is contextual. In a space ship legs are a bit of a nuisance, but on earth they are quite helpful. The context for disability in the modern world is not climbing trees or lifting weights, it’s being able to earn an income that buys food, shelter, education, health care, recreation and a relatively secure old age. That is the definition of the modern “middle class” and above; a household income from $42,000 ($20/hr) to $126,000. It’s about half of Americans. By definition then half of Americans are not “abled”.
  2. I get a similar percentage if I look at the percentage of Americans who can complete a college degree or comparable advanced skills training. That’s a good proxy for reasonable emotional control and an IQ of at least 105 to 110. That’s about 40% of Americans — but Canada does better. I think the upper limit is probably 50% of people. If you accept that a college-capable brain is necessary for relative economic success in the modern world then 50% of Americans will be disabled.

So I could say that the real number is 50%, but college students mess up the income numbers. The 40% estimate for functionally disabled Americans adjusts for that.
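A quick sanity check of that arithmetic, assuming the textbook IQ distribution (Normal, mean 100, SD 15); the 105-110 thresholds are from the reasoning above:

```python
# Back-of-envelope check of the estimate above, assuming IQ is
# distributed Normal(mean=100, sd=15). The 105-110 thresholds are the
# post's proxy for "college-capable".
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
for threshold in (105, 110):
    share = 1 - iq.cdf(threshold)
    print(f"share above IQ {threshold}: {share:.0%}")
# prints roughly 37% and 25%; the post's "about 40%" matches the
# generous (105) cutoff.
```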

As our non-sentient AI tech and automation gets smarter the “ability” threshold is going to rise. Somewhere the system has to break down. I think it broke on Nov 8, 2016. In a sense democracy worked — our cities aren’t literally on fire. Yet.

Friday, September 30, 2016

How does the world look to Trump's core supporters?

Set aside the neo-Klan-Nazi minority. Set aside the truly despicable - Coulter, Hannity, Falwell and the like. Forget the hell-spawn who think only of their personal wealth.

Think about the white non-college male voter:

… Trump’s fortunes rest on his core supporters, white men who lack a four-year college degree … He leads Clinton among them by 76-17 percent, an enormous 59-point advantage. That’s widened from 40 points early this month; it’s a group Mitt Romney won by 31 points -- half Trump’s current margin -- in 2012.

Whatever happens with this election, that 60% gap is a staggering fact.

How does the world look to these non-college white men of the 4th quintile?

I have limited exposure to this cohort. A few Facebook friends — but they don’t post much about Trump. A family member with a cognitive disability claims to like Trump. That’s about it.

I need a journalist-anthropologist to falsify my story. I can’t help imagining a story though. It goes like this …

  • I have no hope of a secure economic future with savings, stable employment, good healthcare benefits and a pension.
  • I have limited marriage opportunities. I really miss the patriarchy. I feel that in my bones.
  • I watch Fox. I can understand it. Fox approves of me. Nobody else cares what happens to me.
  • I don’t understand economics, but I’ve lived through the past ten years. I’ve heard a lot of broken promises. Maybe nobody understands economics. Maybe the people who understand economics are lying to me. I definitely don’t understand borrowing from a wealthier future.
  • I don’t like academics.
  • I don’t care about the damned polar bears. I like warm weather. I don’t like bugs. I like motors and pavement. I don’t care about CO2 emissions.
  • I watch reality cop shows, where every criminal is a black man. I directly encounter crime and it’s always black men. I am afraid of black men. [1]
  • I don’t have a lot to lose.

None of this is going to go away. If we want to keep civilization going we need to give this cohort hope.


- fn -

[1] As a pedestrian and cyclist I am far more likely to be injured or killed by a white woman on a phone than a black man. FWIW.

Thursday, August 25, 2016

What socioeconomic support will look like in 20 years

This is what I think socioeconomic support will look like in 2040 based on cognitive [2] quintiles.

The bottom quintile (0-20%, non-voters) will have supported work environments and direct income subsidies; an improved version of what most wealthy nations do for the 0-5% of adults currently considered cognitively “disabled” [1].

The second quintile (20-40%, Trump base if white) will have subsidized employment (direct or indirect).

The fifth quintile (80-100%) will live much as they do now.

I don’t know what happens to the 3rd and 4th quintile.

- fn -

[1] The US is currently “mainstreaming” the cognitively disabled into relatively unsupported work, a well intentioned and evidence-free project by (my) Team Liberal that is going to end in tears.

[2] In US males of European descent (to avoid racism/sexism effects), cognition maps to academic achievement, which tests learning, social skills, temperament and the like.

Thursday, March 03, 2016

Everyone needs an AI in their pocket

Two articles from my share feed today …

Transit systems are growing too complex for the human mind

… “What makes it messy is the presence of different possibilities," Barthelemy says. "When you arrive at a specific point, you have many choices."

The Paris system has 78 such choice points. The New York subway, the most complex in the world, has 161. New York's system is so sprawling and interconnected, Barthelemy and colleagues Riccardo Gallotti and Mason Porter concluded in a recent analysis, that it approaches the maximum complexity our human minds can handle, the equivalent of about 8 bits of information.

“But then if you add the bus,” Barthelemy warns, “the 8-bit limit is exploded."...

and

Google Research: An Update on fast Transit Routing with Transfer Patterns

What is the best way to get from A to B by public transit? Google Maps is answering such queries for over 20,000 cities and towns in over 70 countries around the world, including large metro areas like New York, São Paulo or Moscow…

… Scalable Transfer Patterns algorithm [2] does just that, but in a smart way. For starters, it uses what is known as graph clustering to cut the network into pieces, called clusters, that have a lot of connections inside but relatively few to the outside…

… Frequency-Based Search for Public Transit [3] is carefully designed to find and take advantage of repetitive schedules while representing all one-off cases exactly. Comparing to the set-up from the original Transfer Patterns paper [1], the authors estimate a whopping 60x acceleration of finding transfer patterns from this part alone….
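The clustering-and-patterns trick quoted above can be sketched in a few lines. Below is a minimal illustration of the core Transfer Patterns idea (the toy network and names are mine, not Google's code): offline, record which stations optimal trips pass through; online, a query becomes a lookup rather than a graph search.

```python
# Minimal sketch of the Transfer Patterns idea quoted above. The toy
# network and function names are illustrative assumptions, not Google's
# code. Offline: precompute the station sequence of each optimal trip.
# Online: answer queries by lookup instead of searching the graph.
import heapq
import itertools

# toy network: station -> {neighbor: ride_minutes}
GRAPH = {"A": {"B": 5}, "B": {"A": 5, "C": 7, "D": 9},
         "C": {"B": 7, "D": 3}, "D": {"B": 9, "C": 3}}

def shortest_path(src, dst):
    """Plain Dijkstra; stands in for a full timetable search."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, minutes in GRAPH[node].items():
            heapq.heappush(heap, (cost + minutes, nbr, path + [nbr]))

# offline phase: precompute the pattern for every station pair
patterns = {(s, t): shortest_path(s, t)
            for s, t in itertools.permutations(GRAPH, 2)}

# online phase: answering a query is now a dictionary lookup
print(patterns[("A", "D")])  # ['A', 'B', 'D']
```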

Humans can’t manage modern transit complexity — but the AIs can. Including the AI in your pocket.

Everyone needs a portable AI, including people with no income and people with cognitive disabilities. That's one reason I'm writing my smartphone-for-all book.


Thursday, November 19, 2015

Randall Munroe introduces world language and Google Translate training program using charming New Yorker article

XKCD’s Randall Munroe, the notorious interstellar sAI, has published a simplified vocabulary explanation of Special and General Relativity in the New Yorker.

This work is presumably taken from his almost released new book, Thing Explainer ($15 Amazon pre-order). The essay is entertaining and educational; it also promotes his new book and shows he is a smart pants man.

But that’s not the real reason he’s written this. Obviously his true agenda is to create an English dialect of a universal human language with a simplified vocabulary and grammar that is ideally suited to machine translation and, eventually, colloquial conversations with terrestrial AIs (contra the Wolfram Language for AI conversation, see also Marain. Siri-speak 2015 is a crude version of this.)

Let’s see how well his first version works, using the nsAI Google Translate to do round trip translations of a sample paragraph from the original muEnglish to another language and then back again. We’ll start with French, a language related to that of England’s 11th century conquerors, then we’ll do Chinese. I know from past experiments that round-trip translations from English to Chinese and back typically produce incomprehensible gibberish:

Munroe original (muEnglish)

The first idea is called the special idea, because it covers only a few special parts of space and time. The other one—the big idea—covers all the stuff that is left out by the special idea. The big idea is a lot harder to understand than the special one. People who are good at numbers can use the special idea to answer questions pretty easily, but you have to know a lot about numbers to do anything with the big idea. To understand the big idea—the hard one—it helps to understand the special idea first.

French version

La première idée est appelé l'idée particulière, car elle ne couvre que quelques pièces spéciales de l'espace et du temps. Celui-la grande idée-couvre l'autre tous les trucs qui est laissé par l'idée particulière. La grande idée est beaucoup plus difficile à comprendre que le spécial. Les gens qui sont bons à numéros peuvent utiliser l'idée spéciale pour répondre à des questions assez facilement, mais vous devez savoir beaucoup sur les numéros de faire quelque chose avec la grande idée. Pour comprendre la grande idée-le dur-elle aide à comprendre l'idée première spéciale.

French to English

The first idea is called the particular idea because it covers only a few special pieces of space and time. This great idea covers the other all the stuff that is left by the particular idea. The big idea is much harder to understand than the special. People who are good at numbers can use special idea to answer questions fairly easily, but you should know a lot about the numbers to do something with the big idea. To understand the great idea - hard - it helps to understand the first special idea.

Chinese

第一个想法就是所谓的特殊的想法,因为它涵盖的空间和时间只有几个特殊部位。另外一个大的想法,涵盖了所有剩下的由特殊的想法的东西。大的想法是很多更难理解比特殊的一个。人们谁是善于号码可以使用特殊的想法很容易回答的问题,但是你要知道很多关于数字做的大创意什么。为了解大的想法,硬一它有助于先了解特殊的想法

Chinese to English

The first idea is the idea of so-called special because the space and time it covers only a few special parts. Another big idea, covering all rest of the stuff from the special idea. Big idea is a lot more difficult to understand than the special one. People who are good at numbers you can use special idea is very easy question to answer, but you know a lot about what the figures do big ideas. To understand the big idea, hard and it helps to understand the idea of a special.

Munroe English (muEnglish) works rather well between French and English. If you’re interested in learning French, you might enjoy reading a future French version of Thing Explainer or simply run the English version through Google Translate (and use speech recognition for verbal work).

The Chinese round-trip experiment almost works, but falls apart grammatically. For example, “you can use special idea is very easy question to answer, but you know a lot about what the figures do big ideas” is missing things like “need” and “to” and a few pronouns. There’s also an unfortunate “numbers” to “figures” word substitution. Given that Munroe is a far more advanced AI than Google this essay will be used to enhance Google’s Chinese translation model (which desperately needs work).
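Anyone who wants to replay the round-trip experiment can script it. A minimal sketch, assuming the unofficial googletrans package (whose API has changed across releases); the results quoted above came from the Google Translate web UI, not this code:

```python
# Round-trip translation sketch using the *unofficial* googletrans
# package (pip install googletrans). An illustrative assumption, not
# the tool used for the results quoted above.
from googletrans import Translator

def round_trip(text: str, via: str) -> str:
    """English -> `via` language -> English."""
    translator = Translator()
    there = translator.translate(text, src="en", dest=via).text
    return translator.translate(there, src=via, dest="en").text

sample = ("The first idea is called the special idea, because it covers "
          "only a few special parts of space and time.")
print(round_trip(sample, "fr"))      # French round trip
print(round_trip(sample, "zh-cn"))   # Chinese round trip
```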

I’m optimistic about this new language and happy that the Munroe is now taking a more active hand in guiding human development. Zorgon knows we need the help.

Update 11/19/2015: There’s a flaw in my logic.

Alas, I didn’t think this through. There’s a reason speech recognition and natural language processing work better with longer, more technical words. It’s because short English words are often homonyms; they have multiple meanings and so can only be understood in context [1]. Big, for example, can refer to size or importance. In order to get under 1000 words Munroe uses many context tricks, including colloquialisms like “good at numbers” (meaning “good at mathematics”). His 1000 word “simple” vocabulary just pushes the meaning problem from words into context and grammar — a much harder challenge for translation than mere vocabulary.

So this essay might be a Google Translate training tool — but it’s no surprise it doesn’t serve the round-trip to Chinese. It is a hard translation challenge, not an easy one.

[1] Scientology’s L Ron Hubbard had a deep loathing for words with multiple or unclear meanings, presumably including homonyms. He banned them from Scientology grade school education. Ironically this is hard to Google because so many people confuse “ad hominem attack” with homonym.

Monday, September 14, 2015

Google Trends: Across my interests some confirmation and some big surprises.

I knew Google Trends was “a thing”, but it had fallen off my radar. Until I wondered if Craigslist was going the way of Rich Text Format. That’s when I started playing with the 10 year trend lines.

I began with Craigslist and Wikipedia...

  • Craigslist is looking post-peak
  • Wikipedia looks ill, but given how embedded it is in iOS I wonder if that’s misleading.
Then I started looking at topics of special relevance to my life or interests. First I created a set of baselines to correct for declining interest in web search. I didn't see any decline:
  • Cancer: rock steady, slight dip in 2009, slight trend since, may reflect demographics
  • Angina: downward trend, but slight. This could reflect lessening interest in search, but it may also reflect recent data on lipid lowering agents and heart disease.
  • Exercise: pretty steady
  • Uber: just to show what something hot looks like. (Another: Bernie Sanders)
Things look pretty steady over the past 10 years, so I decided I could assume a flat baseline for my favorite topics. That's when it got fascinating.
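(For anyone replaying this at home: the same curves can be pulled programmatically. A sketch assuming the unofficial pytrends wrapper; my charts came from the Google Trends web UI.)

```python
# Pulling 10-year Google Trends curves via the unofficial pytrends
# wrapper (pip install pytrends). Illustrative assumption; the charts
# discussed here came from the Trends web UI.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
for terms in (["Craigslist", "Wikipedia"],
              ["Excel", "Google Sheets"]):
    pytrends.build_payload(terms, timeframe="2005-01-01 2015-09-14")
    trend = pytrends.interest_over_time()
    print(trend.drop(columns="isPartial").resample("A").mean().round(1))
```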

Some of these findings line up with my own expectations, but there were quite a few surprises. It’s illuminating to compare Excel to Google Sheets. The Down Syndrome collapse is a marker for a dramatic social change — the world’s biggest eugenics program — that has gotten very little public comment. I didn’t think interest in AI would be in decline, and the Facebook/Twitter curves are quite surprising.

Suddenly I feel like Hari Seldon.

I’ll be back ...


Saturday, April 26, 2014

Salmon, Piketty, Corporate Persons, Eco-Econ, and why we shouldn't worry

I haven’t read Piketty’s Capital in the Twenty-First Century. I’ll skim it in the library some day, but I’m fine outsourcing that work to DeLong, Krugman and Noah.

I do have opinions of course! I’m good at having opinions.

I believe Piketty is fundamentally correct, and it’s good to see our focus shifting from income inequality to wealth inequality. I think there are many malign social and economic consequences of wealth accumulation, but the greatest threat is likely the damage to democracy. Alas, wealth concentration and corruption of government are self-reinforcing trends. It is wise to give the rich extra votes, lest they overthrow democracy entirely, but fatal to give them all the votes.

What I haven’t seen in the discussions so far is the understanding that the modern oligarch is not necessarily human. Corporations are persons too, and even the Koch Brothers are not quite as wealthy as AAPL. Corporations and similar self-sustaining entities have an emergent will of their own; Voters, Corporations and Plutocrats contend for control of avowed democracies [1]. The Rise of the Machine is a pithy phrase for our RCIIT-disrupted AI age, but the Corporate entity is a form of emergent machine too.

So when we think of wealth and income inequality, and the driving force of emergent process, we need to remember that while Russia’s oligarchs are (mostly vile) humans, ours are more mixed. That’s not necessarily a bad thing - GOOGL is a better master than David Koch. Consider, for example, the silencing of Felix Salmon:

Today is Felix's last day at Reuters. Here's the link to his mega-million word blog archive (start from the beginning, in March 2009, if you like). Because we're source-agnostic, you can also find some of his best stuff from the Reuters era at Wired, Slate, the Atlantic, News Genius, CJR, the NYT, and NY Mag. There's also Felix TV, his personal site, his Tumblr, his Medium archive, and, of course, the Twitter feed we all aspire to.

Once upon a time, a feudal Baron or Russian oligarch would have violently silenced an annoying critic like Salmon (example: Piketty - no exit). Today’s system simply found him a safe and silent home. I approve of this inhuman efficiency.

So what comes next? Salmon is right that our system of Human Plutocrats and emergent Corporate entities is more or less stable (think - stability of ancient Egypt). I think Krugman is wrong that establishment economics fully describes what’s happening [2]; we still need to develop eco-econ — which is not "ecological economics". Eco-econ is the study of how economic systems recapitulate biological systems, and how economic parasites evolve and thrive [3]. Eco-econ will give us some ideas on how our current system may evolve.

In any event, I’m not entirely pessimistic. Complex adaptive systems have confounded my past predictions. Greece and the EU should have collapsed, but the center held [4]. In any case, there are bigger disruptions coming [5]. We won’t have to worry about Human plutocrats for very long….


- fn -

[1] I like that 2011 post and the graphic I did then. I’d put “plutocrats” in the upper right these days. The debt ceiling fight of 2011 showed that Corporations and Plutocrats could be smarter than Voters, and the rise of the Tea Party shows that Corporations can be smarter than Voters and Plutocrats. Corporations, and most Plutocrats, are more progressive on sexual orientation and tribal origin than Voters. Corporations have neither gender nor pigment, and they are all tribes of one.

I could write a separate post about why I can’t simply edit the above graphic, but even I find that tech failure too depressing to contemplate.

[2] I don’t think Krugman believes this himself - but he doesn’t yet know how to model his psychohistory framework. He’s still working on the robotics angle.

[3] I just made this up today, but I dimly recall reading that the basic premises of eco-econ have turned up in the literature many times since Darwin described natural selection in biological systems. These days, of course, we apply natural selection to the evolution of the multiverse. Applications to economics are relatively modest.

[4] Perhaps because Corporations and Plutocrats outweighed Voters again — for better or for worse.

[5] Short version — we are now confident that life-compatible exoplanets are dirt common, so the combination of the Drake Equation (no, it’s not stupid) and the Fermi Paradox means that wandering/curious/communicative civilizations are short-lived. That implies we are short-lived, because we’re like that. The most likely thing to finish us off is our technological heirs.

Saturday, August 03, 2013

Sympathy for Economists

A good feature of teenagers is that they sometimes sleep in. So Emily and I can chat on a quiet Saturday morning about wearable tech (remember 1988?), and how 2013 feels a bit like 1997 or 2007 or 1923. The times when technological change seems to rev up again. To be followed, if recent history is any guide, by yet another crash.

Which brings us to Economics, and especially to economists like Brad DeLong and Paul Krugman.

I suspect that DeLong, and even Krugman, believe that the fundamental drivers of our economic instability are the simultaneous and related rise of both digital technologies and China and India (RCIIT). Both DeLong and Krugman have, at various times, written about the disruptive impact of "smart" robots (including robot/human pairings) and the related rise of 'mass disability'. Both, I suspect, share my opinion of the economic consequences of artificial sentience.

These aren't, however, topics they can discuss in the context of models and mechanisms. How do you measure technological disruption? Economists still struggle to describe the productivity impacts of typewriters. Corporations can't make an internal business case for products like Yammer. We can't measure technological disruptions, and what we can't measure we can't model. What Economists can't model they can't discuss, and so they look through a keyhole into a dimly lit room and see monsters, but can't speak of them.

But the situation for Economics is even worse than that. There is a reason Krugman rants about economists who cling to models when all their predictions fail and yet retain academic respect. A discipline without falsifiability can be scholarly, but it can't be a science. It can't progress.

Economics thus lies between the Scylla of the monsters that can't be mentioned, and the Charybdis of the non-falsifiable.

No wonder Economists are dismal.

Sunday, June 09, 2013

Cash purchases driving a new real estate bubble - too much wealth, too few investments

Cash-only real estate speculation in LA, Boston, Miami, San Francisco and so on (emphases mine) ...

... These days, the only way for would-be buyers to secure a home, it often seems, is to offer all cash and be ready to do so within hours, not days.

...first-time home buyers are competing with investors to get into single-family homes with prices approaching $1 million.

... large investors purchasing thousands of properties

... a third of all homes purchased in Los Angeles during the first quarter of this year went for all cash, compared with just 7 percent in 2007. In Miami, 65 percent of homes sold were for cash deals, compared with 16 percent six years ago.

... In Los Angeles, the median price on an all-cash home this year is about $351,000, compared with $230,000 in 2009. Over the same period, the median price over all increased to $410,000, up $85,000. In fact, last month, home prices in Southern California hit their highest level in the last five years.

... Buyers in Boston are offering $100,000 more than the asking price or placing offers on homes they have spent only minutes in.

... He also waived the inspection clause, an increasingly common practice... offers today are more likely to include escalation clauses, saying buyers will pay an additional amount over the highest bid.

... cash purchases fueled in part by international investors and retirees awash in cash after selling their homes elsewhere....

This fits reports a few months back of large numbers of purchased but unoccupied condominiums in luxury markets.

Where is all the cash coming from? The article doesn't say, but there's vast wealth in China now and few safe places to park it. Real estate is a classic Chinese investment. There's also a large amount of boomer wealth in play as my generation (noisily, because we are nothing if not loud) shuffles off the stage.

What happens next? I assume we're in for another one of our worldwide boom-bust cycles...

Gordon's Notes: Stock prices - resorting to another dumb hydraulic analogy


Why are we having these worldwide boom-bust cycles?

Ahh, if only we knew. Since I'm not an economist, and thus I have neither credibility to protect nor Krugman to fear, I'm free to speculate. I think the world's productive capacity has grown faster than the ability of our financial systems to manage it. There's too much wealth and potential wealth (in a fundamental sense, regardless of central bank actions) for our system to productively absorb. We're filling a 100 ml vial from a 10 liter bucket. Or, in Bernd Jendrissek's words: "The gain is too high for the phase shift for this feedback loop to be stable."
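Jendrissek's gain-and-phase line can be illustrated with a toy control loop (parameters mine, purely illustrative): a controller chasing a target using stale observations settles when the gain is low, but overshoots and oscillates with growing swings when the gain is too high for the delay.

```python
# Toy feedback loop with delayed observations. Parameters are
# illustrative assumptions. Low gain converges to the target; high gain
# with the same delay oscillates with growing swings -- "the gain is too
# high for the phase shift".
def simulate(gain, delay, steps=40):
    level = [0.0] * (delay + 1)           # system state; target is 1.0
    for _ in range(steps):
        error = 1.0 - level[-1 - delay]   # controller sees stale data
        level.append(level[-1] + gain * error)
    return level

for gain in (0.2, 1.2):
    tail = simulate(gain, delay=3)[-4:]
    print(f"gain={gain}: final values {[round(v, 1) for v in tail]}")
```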

If there's anything to this idea then we little people may want to remember the advice of Victor Niederhoffer, a wealthy man who has lost vast amounts of money in the post-RCIIT economy:

... Whenever disaster strikes, the very sagacious wealthy people take their canes, and they hobble down from their stately mansions on Fifth Avenue, and they buy stocks to the extent of their bank balances, and then a week or two later, the market rises, they deposit the overplus in their accounts, invest it in blue-chip real estate, and retire back to their stately mansions. That's probably the best way of making money, to be a specialist in panics. Whenever there's panic hanging in the air, that's a great time to invest...

Of course this implies one has a relatively tax efficient way of moving money in and out of cash -- and lots of cash to gamble without fear of unemployment. When downturns hit most of us need our cash as a hedge against job loss; only the 0.05% don't need to work. Even so, there may be a lesser version of the long game we can play to at least limit our crash pain. For example, perhaps a 21st century John Bogle will create a derivative that retail investors can purchase on the rise (when we have cash) that pays off on the fall (when we don't).

How long will it be before the world's financial systems catch up with our productive capacity -- especially given the rise of Africa and the unfolding of the post-AI world?

I suspect not in my lifetime [1]. It's whitewater as far as the eye can see.

Update: In surfing lingo a hard-breaking wave is called a "Cruncher". Perhaps "new Cruncher" is a better term than "new bubble".

- fn -

[1] Though if wealth were better distributed we might have the equivalent of filling that 100 ml vial from 10,000 1 ml vials. Much easier to stop before overfilling.