Showing posts with label history. Show all posts

Wednesday, August 28, 2024

In which I declare my expert judgment on AI 2024

These days my social media experience is largely Mastodon. There's something to be said for a social network that's so irreparably geeky and so hard to work with that only a tiny slice of humanity can possibly participate (unless and until Threads integration actually works).

In my Mastodon corner of the "Fediverse," among the elite pundits I choose to read, there's a vocal cohort that is firm in their conviction that "AI" hype is truly and entirely hype, and that the very term "AI" should not be used. That group would say that the main application of LLM technology is criming.

Based on my casual polling of my pundits, there's a quieter cohort that is less confident. That group is anxious, but not only about the criming.

Somewhere, I am told, there is a third group that believes that godlike-AIs are coming in 2025. They may be mostly on Musk's network.

Over the past few months I think the discourse has shifted. The skeptics are less confident, and the godlike-AI cohort is likewise quieter as LLM-based AI hits technical limits.

The shifting discourse, and especially the apparent LLM technical limitations, mean I'm back to being in the murky middle of things. Where I usually sit. Somehow that compels me to write down what I think. Not because anyone will or should care [1], but because I write these posts mostly for myself and I like to look back and see how wrong I've been.

So, in Aug 2024, I think:
  1. I am less worried that the end of the world is around the corner. If we'd gotten one more qualitative advance in LLM or some other AI tech I'd be researching places to (hopelessly) run to.
  2. Every day I think of new things I would do if current LLM tech had access to my data and to common net services. These things don't require any fundamental advances but they do require ongoing iteration.  I don't have much confidence in Apple's capabilities any more, but maybe they can squeeze this out. I really, really, don't want to have to depend on Microsoft. Much less Google.
  3. Perplexity.ai is super valuable to me now and I'd pay up if they stopped giving it away. It's an order of magnitude better than Google search.
  4. The opportunities for crime are indeed immense. They may be part of what ends unmediated net access for most people. By far the best description of this world is a relatively minor subplot in Neal Stephenson's otherwise mixed 2019 novel "Fall".
  5. We seem to be replaying the dot-com crash of 2000 but faster and incrementally. That was a formative time in my life. It was a time when all the net hype was shown to be ... correct. Even as many lost their assets buying the losers.
  6. It will all be immensely stressful and disruptive and anxiety inducing even though we won't be doing godlike-AI for at least (phew) five more years.
  7. Many who are skeptical about the impact of our current technologies have a good understanding of LLM tech but a weak understanding of cognitive science. Humans are not as magical as they think.
- fn -

[1] I legitimately have deeper expertise here than most would imagine but it's ancient and esoteric.

Friday, August 16, 2024

Fantasies of a tyrant: The Deshittification Corps

This morning on my Mastodon:

It’s possible that we have built an economic and technical environment that is too complex and treacherous for almost all humans to live in.

Perhaps the Neanderthals had similar feelings about the warmer, buggier, more diseased environment that came with the Skinnies (their name for us).

I wonder who will inherit our world.

Yes, I'm Old. But the young are distressed and anxious, so perhaps they feel it too. To me many things feel frayed, fragile, buggy, and poorly maintained. As though there wasn't enough time to do it right before the next urgent thing. 

Apple is supposedly off trying to build Apple Intelligence -- and in the meantime Photos.mac is the shittiest software I've been obliged to consistently use [1]. For a small fraction of what Apple vaporized on autonomous vehicles and the Vision Pro thing they could have slowed the growth rate of their technical debt and maybe even implemented fucking Folder search in Photos.mac [3].

Which brings me to the Deshittification Corps. If I were a Tyrant, which is more than you deserve, I would create a Deshittification Corps (DC) to fight enshittification [2]. My Deshittification Corps would be a force of about 10,000 people who evaluated the services we rely on and gave them shittification ratings, which the rated companies would be obliged to post publicly, a bit like cigarette carton notices. For example:

This service has a Shittification Rating of D. This is a really shitty service. We recommend everyone who works for this company seek new employment.

Companies that didn't improve their Shittification rating would be subject to a special tax that would start at 0.01% of gross revenue and double every day ... 
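For scale, here's a sketch of how fast that schedule bites; purely illustrative, with no numbers beyond the post's premise of 0.01% of gross revenue doubling daily:

```python
def days_until_confiscatory(start_pct=0.01, threshold_pct=100.0):
    """Count days until a daily-doubling tax exceeds the given share of gross revenue."""
    pct, days = start_pct, 0
    while pct < threshold_pct:
        pct *= 2
        days += 1
    return days

# Starting at 0.01% and doubling every day, the tax passes
# 100% of gross revenue in exactly two weeks.
print(days_until_confiscatory())  # -> 14
```

So a company would have about two weeks to improve its rating before the tax consumed all revenue; the doubling does the enforcement.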

- fn -

[1] Ok, SharePoint is shittier. But now you're triggering me. Besides, one of the best things about retirement is no more SharePoint.

[2] Yes, Doctorow was talking about software and online services, but I'm bending the meaning more broadly to encompass government and other services.

[3] It would still be shitty software, but that would be the biggest improvement since iPhoto stabilized.

Monday, February 05, 2024

On living and working with "Nazis"

My current social media vice is Mastodon, with Threads 2nd and Bluesky a distant 3rd. (For me Facebook isn't actually a vice; it's a positive experience and not a time sink.)

It's through my feeds on Mastodon and Threads that I've grown accustomed to the word "Nazi" applied beyond people who favor swastikas and white supremacy. I don't love the meaning shift, but from studies across multiple societies and eras we know that roughly 1/3 of people would, in the right context, be eager Nazi equivalents (Dorothy Thompson's 1941 essay is worth a read). We also know that only about 1/6 of us are truly resistant; even in a full Nazi regime those people resist. The rest of us just kind of go along. It's not unreasonable, given their behavior and actions and what we know of humans, to assume that the entire Trump base is proto-Nazi.

One third of humanity is a lot of people. Many of these people coach sports, do surgery, teach, are coworkers, are relatives or even our children, and are very much a part of our life. If you are reasonably social you interact with them all the time. Just like Rwanda's Hutu and Tutsi interacted before and after a true genocide. Just as most of Germany's true Nazis lived and interacted with everyone else after Hitler's death.

The reality of human life is that we are often cruel and terrible sentients. Often, but not always. Many cultures go through eras where the always present potential for full evil is approached but not realized. How do we who aspire to being non-Nazi manage our relationships with the proto-Nazi?

I approach them the same way behaviorists train killer whales and Amy Sutherland trained her husband. Reinforce the positive and extinguish the negative. Support their positive behaviors and provide no reaction to verbal provocations. Reinforce cultural norms, even frayed norms, of compassion and caring. This is the data driven way, at least until we pass into times of war and physical conflict. Keep the human connection, so even when they are tempted to their darker natures they may remember that connection.

It is tempting to attack. To threaten to "punch Nazis". This is folly. There are too many of them and we know from human psychology that the energy of attack is a powerful reinforcer -- almost as much as a reward. It seems illogical, but humans are not logical.

The proto-Nazis will always be with us. At least until the AIs end us all. We have to manage them to have a civilization.

Sunday, January 07, 2024

Quicken for DOS cannot be recreated: Why we can't have good personal finance software any more.

Almost 40 years ago we used Quicken version 2 or 3 for DOS 3.1 on a Panasonic 8086 with 640K of memory and a CPU too feeble for a modern toaster. 

Every month a 3.5" (not 5.25") diskette came in the mail with our bank and credit card transactions. We loaded that into Quicken. We entered cash transactions manually. It worked pretty well, though Quicken was plagued with database corruption bugs until the 90s. When Microsoft Money appeared one could migrate transactions and history from one to the other.

There's no modern equivalent. Today's vendors sell our data to third parties and then market products to us. Vendors have a hard lock-in. This kind of service decay is now known as "enshittification". Today in a Mastodon thread I listed what drove that enshittification*:

  1. The banks feared disintermediation and commodification so they stopped cooperating and/or raised transaction costs. 
  2. Selling services to customers and selling customer data were both seemingly painless ways to increase margins for a publicly traded company
  3. Costs and user experience both favor user data in the cloud — which aligns with selling user data and services.
  4. Customer data lock strategies became irresistible and with cloud migration they were easy to implement.
Of these the first is the big one. If customers could get their data then small vendors could produce niche subscription products. But the banks aren't going to cooperate. They know better now.

I don't know if we'll ever see good products again. Perhaps if Apple or Microsoft went into banking they'd provide an API for developers to use. Of course we'd all have to use Apple's Bank for everything but, speaking for my family, they already own us.

*With two 't's per Doctorow.

Friday, April 28, 2023

Large Language Models - evolutionary tree and selection algorithm 2023

The following two diagrams were taken from Mooler et al's GitHub LLM guide (April 2023). I'm posting them here as a snapshot of this historic moment. The root of the LLM evolutionary tree goes far back in time, almost five years ago. Click image for full res.

Algorithm for selecting an AI technology


LLM Evolutionary Tree

Monday, April 03, 2023

We need a new word for the historical singularity.

TLDR: The "technological singularity" was an important and useful term with a clear meaning. Then it became the "Rapture of the Nerds". We need a new term.

--

I first heard the word "singularity" in the context of black hole physics; it dates back at least to the early 20th century:

ChatGPT 4 2023: "At the singularity, the laws of physics as we know them, including space and time, break down, and our current understanding of the universe is insufficient to predict what happens within it."

Not much later, in the 1950s, the term was applied by von Neumann in a technological context (from a 1993 Vernor Vinge essay):

Stan Ulam paraphrased John von Neumann as saying: "One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Brad DeLong used to write about this kind of non-AI historical singularity. My favorite description of what it would be like to approach a technological singularity was Vinge's short story "Fast Times at Fairmont High". (This prescient story appears to be lost to time; he wrote a similar full-length novel but I think the short story was better.)

The core idea is that there's a (virtuous?) recursive loop in which technology improves technology with shorter and shorter cycle times. Many processes go exponential and even near-term developments become unpredictable. One may assume social and economic structures strain to keep pace. The historical singularity exponential curve was part of the GDP-per-person historical graph in The Economist's Y2K Millennium issue:


In a January 1983 essay for Omni Magazine Vinge focused on a particular aspect of the technological singularity arising from superhuman intelligence (aka "superintelligence"):

We will soon create intelligences greater than our own ... When this happens there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole" 

A decade later, in his 1993 essay later published in Whole Earth Review (non-Olds cannot imagine what Whole Earth Review was like), Vinge revised what he meant by "soon":

... Based on this trend, I believe that the creation of greater-than-human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.) ...

So by the year 2000 we had the concept of a historical technological singularity (eminently sensible) that had become focused on a specific kind of self-improving technology: the superhuman intelligence, an upper-case-S Singularity (presumably AI). Those were useful concepts, the "technological singularity" and the superintelligence "Singularity".

In 1993 Vinge predicted the Singularity would happen before 2030; later experts like Scott Aaronson predicted after 2080. (Aaronson has since revised that prediction and works for OpenAI; Vinge's 2030 date looks pretty good.)

After 2000, though, the word Singularity went off the semantic rails. It came to be used for a peculiar future state in which human minds were uploaded into simulation environments that were usually described as pleasant rather than hellish. This is, of course, antithetical to the original idea of unpredictability! This peculiar meaning of "The Singularity" came to be known as "The Rapture of the Nerds", after the title of a book by Charles Stross and Cory Doctorow. More recently that vision underlies a weird cult called longtermism that seems to have infected some vulnerable minds.

The "Rapture" meaning of "Singularity" has effectively taken over. We no longer have a term for the original von Neumann concept of self-reinforcing technological change that makes even the near-term future unpredictable. That perversion of meaning is why I use the tag "Whitewater world" rather than Singularity for my own blog posts (and "Skynet" rather than superintelligence).

So we need a new term. I don't think "whitewater world" is it.

Anyone have a new term? We probably have a few years in which it might still be useful.

Thursday, March 30, 2023

ChatGPT's 155 IQ and passing the Sebastian's children puzzle

A Scientific American contributor found ChatGPT (probably 3) scored 155 on a standard IQ test:

Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155, superior to 99.9 percent of the test takers who make up the American WAIS III standardization sample of 2,450 people. As the chatbot lacks the requisite eyes, ears and hands, it is not able to take WAIS’s nonverbal subtests. But the Verbal IQ and Full Scale IQ scales are highly correlated in the standardization sample, so ChatGPT appears to be very intelligent by any human standards.

We don't need to worry though because it couldn't answer this question:

ChatGPT easily fails at obvious riddles, such as “What is the first name of the father of Sebastian’s children?” (ChatGPT on March 21: I’m sorry, I cannot answer this question as I do not have enough context to identify which Sebastian you are referring to.) It seems that ChatGPT fails to reason logically and tries to rely on its vast database of “Sebastian” facts mentioned in online texts.

This is what happened when I tried the Sebastian test on GPT-4:

Ah, I understand now. This riddle is a classic play on words. The answer is "Sebastian" himself, as he is the father of his children.

Ok, so that's reassuring.

Update 4/12/2023: ChatGPT4 (C4) recently did well in some blinded advanced undergraduate exams.

A response to Scott Aaronson's rejection of an AI pause.

Scott Aaronson, who works on AI safety for OpenAI, wrote a critique of AI Pause that was not up to his usual standards. Here's what I wrote as a comment:

Hi Scott — I was confused by your post. I’m usually able to follow them. I won’t defend the letter directly and Yudkowsky/TIME is not worth a mention but maybe you could clarify some things…

1. 6m seems a reasonable compromise given the lifespan of humans, the timescales of human deliberation and the commercial and military pressure to accelerate AI development. Short enough to motivate urgent action, but long enough that reflection is possible. (I doubt we actually pause, but I agree with the principle. China isn’t going to pause of course.)

2. Let’s assume GPT 5 with an array of NLP-powered extensions exceeds the reasoning abilities of 95% of humanity in a wide variety of knowledge domains. That’s a shock on the scale of developing fire, but it’s occurring in a hugely complex and interdependent world that seems always on the edge of self-destruction and actually has the capabilities to end itself. We’re not hunter-gatherers playing with fire or Mesopotamians developing writing. There’s no precedent for the speed, impact and civilizational fragility we face now.

3. It’s not relevant that people who signed this letter were previously skeptical of the progress towards AI. I recall 10 years ago you were skeptical. For my part I’ve been worried for a long time, but assumed it was going to come in 2080 or so. 60 years early is a reason to pause and understand what has happened.

Lastly, I read the OpenAI statement. That seems consistent with a pause.

Tuesday, March 14, 2023

Everything is just fine.

The ad-funded internet is collapsing. It's been ailing for years but, as seems to be the rule, the last bit is happening quite quickly. I'm guessing advertisers, who must be kind of dim, finally realized they were wasting their money. They did much better funding special-interest magazines, but those are gone now. Much of Silicon Valley has been funded by a now-defunct business model. So maybe this is not a good time to be there. (Or, if you are young and have some money, maybe this is the best time to be there.)

Meanwhile human-cognition level AI has arrived. About 60 years earlier than I'd hoped. And some people thought "Don't Look Up" was about climate change. As a guide I recommend the movie Superintelligence; particularly the protagonist's approach to the end of the world. (Spoiler: It's a rom-com. Her world does not end.)

It's a whitewater world all over now. On the bright side, we don't need to waste too much time on long-term planning. I retired in August of 2022 so I think I'll focus on mountain bike trail maintenance and, like Superintelligence's Carol Peters, on the needs of friends and family.

Sunday, March 12, 2023

I ask ChatGPT about the next step.

Not a bad response.

"Describe what happens when ChatGPT is enhanced with natural language processing and domain-specific knowledge modules."

--


When ChatGPT is enhanced with natural language processing and domain-specific knowledge modules, it gains several capabilities that allow it to better understand and respond to user input.

Natural language processing (NLP) is a field of artificial intelligence that focuses on enabling computers to understand and interpret human language. By incorporating NLP into ChatGPT, the model can better analyze and interpret user input, including understanding sentence structure, recognizing named entities, and determining the meaning of words in context. This allows ChatGPT to generate more accurate and relevant responses to user queries.

Domain-specific knowledge modules provide ChatGPT with specialized knowledge related to a particular field or topic. For example, if ChatGPT is enhanced with a medical knowledge module, it can better understand medical terminology and provide more accurate and relevant responses to medical-related queries. By incorporating domain-specific knowledge modules, ChatGPT can tailor its responses to specific user needs, making it a more effective conversational agent.

Together, these enhancements enable ChatGPT to better understand and respond to user input, providing more accurate and relevant information to users. This can lead to a more engaging and satisfying user experience, as well as improved productivity and efficiency in a wide range of applications.

Monday, February 20, 2023

Be afraid of ChatGPT

TL;DR: It's not that ChatGPT is miraculous, it's that cognitive science research suggests human cognition is also not miraculous.

"Those early airplanes were nothing compared to our pigeon-powered flight technology!"

https://chat.openai.com/chat - "Write a funny but profound sentence about what pigeons thought of early airplanes"

Relax: ChatGPT is just a fancy autocomplete.
Be afraid: Much of human language generation may be a fancy autocomplete.

Relax: ChatGPT confabulates.
Be afraid: Humans with cognitive disabilities routinely confabulate, and under enough stress most humans will confabulate.

Relax: ChatGPT can’t do arithmetic.
Be afraid: If a monitoring system can detect that a question involves arithmetic or mathematics, it can invoke a math system*. (UPDATE: 2 hours after writing this I read that this has been done.)

Relax: ChatGPT’s knowledge base is faulty.
Be afraid: ChatGPT’s knowledge base is vastly larger than that of most humans, and it will quickly improve.

Relax: ChatGPT doesn’t have explicit goals other than a design goal to emulate human interaction.
Be afraid: Other goals can be implemented.

Relax: We don’t know how to emulate the integration layer humans use to coordinate input from disparate neural networks and negotiate conflicts.

*I don't know the status of such an integration layer. It may already have been built. If not it may take years or decades -- but probably not many decades.

Relax: We can’t even get AI to drive a car, so we shouldn’t worry about this.
Be afraid: It’s likely that driving a car basically requires near-human cognitive abilities. The car test isn’t reassuring.

Relax: ChatGPT isn’t conscious.
Be afraid: Are you conscious? Tell me what consciousness is.

Relax: ChatGPT doesn’t have a soul.
Be afraid: Show me your soul.

Relax - I'm bad at predictions. In 1945 I would have said it was impossible, barring celestial intervention, for humanity to go 75 years without nuclear war.
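The "invoke a math system" idea above is tool dispatch: a monitor inspects the prompt and routes arithmetic to a calculator instead of the language model. A minimal sketch, with all names hypothetical and a stub standing in for the LLM:

```python
import re

def math_tool(expr: str) -> str:
    # Toy arithmetic evaluator: only digits, whitespace, and + - * / ( ) . allowed.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
        raise ValueError("not arithmetic")
    return str(eval(expr, {"__builtins__": {}}, {}))

def chat_stub(prompt: str) -> str:
    # Stand-in for the language model.
    return "LLM answer to: " + prompt

def dispatch(prompt: str) -> str:
    # The "monitoring system": if the prompt contains an arithmetic
    # expression, hand it to the math tool; otherwise fall back to the LLM.
    m = re.search(r"[\d][\d\s+\-*/().]*[\d)]", prompt)
    if m:
        try:
            return math_tool(m.group())
        except Exception:
            pass
    return chat_stub(prompt)

print(dispatch("What is 17 * 23?"))  # -> 391
print(dispatch("Who was Vinge?"))    # falls through to the language model
```

Production systems do the detection with the model itself (function calling) rather than a regex, but the architecture is the same: detect, route, return.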


See also:

  • All posts tagged as skynet
  • Scott Aaronson and the case against strong AI (2008). At that time Aaronson felt a sentient AI was sometime after 2100. Fifteen years later (Jan 2023) Scott is working for OpenAI (ChatGPT). Emphases mine: "I’m now working at one of the world’s leading AI companies ... that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie ... that AI will gain many of the remaining abilities in years rather than decades, and ... my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world."
  • Imagining the Singularity - in 1965 (2009 post). Mathematician I.J. Good warned of an "intelligence explosion" in 1965. "Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British statistician who worked as a cryptologist at Bletchley Park."
  • The Thoughtful Slime Mold (2008). We don't fly like birds fly.
  • Fermi Paradox resolutions (2000)
  • Against superhuman AI: in 2019 I felt reassured.
  • Mass disability (2012) - what happens as more work is done best by non-humans. This post mentions Clark Goble, an app.net conservative I miss quite often. He died young.
  • Phishing with the post-Turing avatar (2010). I was thinking 2050 but now 2025 is more likely.
  • Rat brain flies plane (2004). I've often wondered what happened to that work.
  • Cat brain simulator (2009). "I used to say that the day we had a computer roughly as smart as a hamster would be a good day to take the family on the holiday you've always dreamed of."
  • Slouching towards Skynet (2007). Theories on the evolution of cognition often involve aspects of deception including detection and deceit.
  • IEEE Singularity Issue (2008). Widespread mockery of the Singularity idea followed.
  • Bill Joy - Why the Future Doesn't Need Us (2000). See also Wikipedia summary. I'd love to see him revisit this essay but, again, he was widely mocked.
  • Google AI in 2030? (2007) A 2007 prediction by Peter Norvig that we'd have strong AI around 2030. That ... is looking possible.
  • Google's IQ boost (2009) Not directly related to this topic but reassurance that I'm bad at prediction. Google went to shit after 2009.
  • Skynet cometh (2009). Humor.
  • Personal note - in 1979 or so John Hopfield excitedly described his work in neural networks to me. My memory is poor but I think we were outdoors at the Caltech campus. I have no recollection of why we were speaking, maybe I'd attended a talk of his. A few weeks later I incorporated his explanations into a Caltech class I taught to local high school students on Saturday mornings. Hopfield would be about 90 if he's still alive. If he's avoided dementia it would be interesting to ask him what he thinks.

Wednesday, November 02, 2022

What I learned about managing catalytic converter theft: OEM vs aftermarket vs universal

The catalytic converter on my much loved 2010 Kia Sedona van was stolen. (I think there are two and the rear converter was removed. I'm not a car guy.) It's an every day thing around here.

This is what I learned about this problem.

  1. Manufacturers only stock OEM converters for 10y post model date. So there's no OEM solution.
  2. The normal option to an OEM converter is a "bolt-on" aftermarket product. This is typically installed by a service garage or muffler shop. In Nov 2022 there are none available for the Kia Sedona and many other vehicles. It's a national shortage.
There are three remaining options:
  1. A "straight pipe": This is illegal but in our fallen times nobody seems to care. (Mad Max didn't even have a muffler.) Done by some muffler shops and by guys working out of their home. Oxygen sensor is a common issue, there are probably others. Not so good for the environment, but neither is replacing the vehicle.
  2. A "universal catalytic converter". Insurance companies will send customers to a muffler shop that will install (weld I think) a "universal converter". These are not a perfect match to the vehicle so they may cause performance issues and trigger a check engine light. For this reason service shops avoid them in normal times. That "we don't go there" rule can cause some bad advice in post-apocalyptic times.
  3. Sell or junk the vehicle and pay $50,000 for a new van. If sold, the buyer does one of the above.
Our garage mechanic didn't mention the "universal catalytic converter" option because "they don't do that". Sadly he hadn't updated his algorithm to deal with the national shortage. Now we have a new mechanic. (The old one did apologize for his error, but he didn't think to offer a retention incentive. I think he'll give better advice in the future.)

If you do replace a catalytic converter it may, of course, be stolen again. Consider a weld-on cage so thieves choose a neighboring vehicle instead. Or a straight pipe so they don't bother. It's handy to have a junk car when parking in unsecured lots in metro Minneapolis St Paul.

[Political aside: property crime will elect the GOP in America. Don't get fooled into thinking that only violent crime rates matter.]

Sunday, September 11, 2022

What is "manliness" in 2022?

Over the past year or two several of my favorite writers have expressed uncertainty about the American cultural standard for "manliness". Some wonder if it even exists.

Speaking from Oldness I would say that there is a clear standard of "manliness" in American culture and that it has changed relatively little over the past 80 years.

Manliness is Shane in 1949. It's the MCU's Steve Rogers (more than the comics actually). It's Aragorn in the LOTR. In the 1970s it's James Bond and Playboy and, more recently, Men's Journal. It's Kipling's (yes, that one) 1910 poem "If—" ...
If you can keep your head when all about you   
    Are losing theirs and blaming it on you,   
If you can trust yourself when all men doubt you,
    But make allowance for their doubting too ...

If you can dream—and not make dreams your master;   
    If you can think—and not make thoughts your aim;   
If you can meet with Triumph and Disaster
    And treat those two impostors just the same ...

... If neither foes nor loving friends can hurt you,
    If all men count with you, but none too much;
If you can fill the unforgiving minute
    With sixty seconds’ worth of distance run,   
Yours is the Earth and everything that’s in it,   
    And—which is more—you’ll be a Man, my son

Manliness includes enjoying toys, whether they are garden tools or drills or mountain bikes or skis or Lego models. There is continuity with Boyliness.

While Manliness has not changed much, there have been changes in who can be Manly. The role was once restricted to penis people. It's now open to all. Once you understand that you can see the continuity of the cultural model.

Saturday, November 27, 2021

Civilization, complexity and the limits of human cognition - another attempt at explaining the 21st century

The 70s were pretty weird, but I was too young to notice. (Not coincidentally, the Toffler/Farrell Future Shock book was written then.) By comparison the 80s and 90s more or less made sense. In 1992 Fukuyama wrote "The End of History" and that seemed about right for the times.

Things got weird again in the late 90s. I was in a .com startup and I remember valuations getting crazy about 1997, 3 years before the .com crash. We were still picking ourselves up from the crash when 9/11 hit. (A year later, on a purely personal note, my youngest brother vanished.) In the early 00s came Enron and other frauds almost forgotten now. Then in 2008 the real estate collapse and the Great Recession. We were barely recovering from the Great Recession when Trumpism hit. Followed by COVID (which was expected and not at all weird) and the Great Stupidity of the American Unvaccinated (which we did not expect and is perhaps weirdest of all).

Each time the world went off kilter I have tried to figure out a root cause:

At last count my list of contributing factors to the crash of '09 included ...

  1. Complexity collapse: we don't understand our emergent creation, we optimized for performance without adaptive reserve
  2. Mass disability and income skew: The modern world has disenfranchised much of humanity
  3. The Marketarian religion: The GOP in particular (now the Party of Limbaugh), but also many Democrats and libertarians, ascribed magical and benign powers to a system for finding local minima (aka The Market). The Market, like Nature, isn't bad -- but neither is it wise or kind.
  4. The occult inflation of shrinking quality: What happens when buyers can't figure out what's worth buying. Aka, the toaster crisis - yes, really.
  5. performance-based executive compensation and novel, unregulated, financial instruments: a lethal combination. See also - You get what you pay for. The tragedy of the incentive plan.
  6. Disintermediating Wall Street: Wall Street became a fragile breakpoint 
  7. The future of the publicly traded company: A part of our problem is that the publicly traded company needs to evolve
  8. The role of the deadbeats: too much debt - but we know that
  9. Firewalls and separation of powers: a culture of corruption, approved by the American electorate, facilitated dissolving regulatory firewalls
  10. Marked!: Rapid change and the Bush culture made fraud easy and appealing

I put Marked! pretty low on the list, but maybe I should bump it up a bit. The Hall of Shame (Clusterstock) lists a lot more fraud than has made the papers [1]...

By 2010 I was focusing on RCIIIT: The rise of China and India and the effects of IT.

... The Rise of China and India (RCI) has been like strapping a jet engine with a buggy throttle onto a dune buggy. We can go real fast, but we can also get airborne – without wings. Think about the disruption of German unification – and multiply that ten thousand times.

RCI would probably have caused a Great Recession even without any technological transformations.

Except we have had technological transformation – and it’s far from over. I don’t think we can understand what IT has done to our world – we’re too embedded in the change and too much of it is invisible. When the cost of transportation fell dramatically we could see the railroad tracks. When the cost of information generation and communication fell by a thousandfold it was invisible ...

In 2016 and again in 2018 I tried to explain Trumpism by contributing factors (I was too optimistic about Murdoch's health though):

  • 65% the collapse of the white non-college “working class” — as best measured by fentanyl deaths and non-college household income over the past 40 years. Driven by globalization and IT both separately and synergistically including remonopolization (megacorp). This is going to get worse.
  • 15% the way peculiarities of the American constitution empower rural states and rural regions that are most impacted by the collapse of the white working class due to demographics and out-migration of the educated. This is why the crisis is worse here than in Canada. This will continue.
  • 15% the long fall of patriarchy. This will continue for a time, but eventually it hits the ground. Another 20 years for the US?
  • 5% Rupert Murdoch. Seriously. In the US Fox and the WSJ, but also his media in Australia and the UK. When historians make their list of villains of the 21st century he’ll be on there. He’s broken and dying now, but he’s still scary enough that his name is rarely mentioned by anyone of consequence.
  • 1% Facebook, social media, Putin and the like. This will get better.

That 1% for Facebook et al. is pretty small — but the election of 2016 was on the knife’s edge. That 1% was historically important.

A few months ago I listed 3 causes for the post-COVID supply and labor shock economics of 2021:

1. Wealth became extremely concentrated. 

2. Returns on labor for 40% of Americans fell below modern standard for economic life.

3. Good investments became hard to find.

It's almost 2022 now, so we're into almost 25 years of the world not making sense any more. So now I'm digging even deeper for a root cause.

Today I'm going with Gordon's Law: the complexity of a complex adaptive system will increase until it reaches a limiting factor. Our civilization is a complex adaptive system and its complexity increased until it hit a limiting factor -- the complexity capacity of the average human. These days between 40 and 50% of Americans can't handle civilization 2021 (sometimes I call this mass disability; see also). Witness, among other things, The Great Stupidity of the FoxCovians.

It's a variant of the "Future Shock" Toffler wrote about 52 years ago. I don't have a fix; I don't think the world will get less complex. Our technologies are moving too fast. Maybe we'll just get used to not understanding the world and civilization will stumble on regardless. After all, for most of human history the world was incomprehensible -- and we did manage. Sort of. Mostly without civilization though ...

Friday, October 29, 2021

The Cybernated Generation: Time Magazine, April 2nd 1965

First check out the Time magazine covers for 1965. That was a very long time ago. Things have improved.

Now look at the April 2nd issue and particularly The Cybernated Generation. Every generation since 1965 has been declared cybernated or digitized or meta-sized.

The article is fascinating as a history of computing and our understanding of its impact -- and as a cultural artifact about a world of white men in white coats. There are no women save a secretary to "pass" at. There is no melanin. There are nerds. Some hyperbole aside there's not a lot that the author missed about the world to come...

As viewed by Sir Leon Bagrit, the thoughtful head of Britain's Elliot-Automation, the computer and automation will bring "the greatest change in the whole history of mankind."

... Boeing announced plans two weeks ago to outfit jetliners with computer-run systems that will land a plane in almost any weather without human help. A new "talking computer" at the New York Stock Exchange recently began providing instant stock quotations over a special telephone. In Chicago a drive-in computer center now processes information for customers while they wait, much as in a Laundromat. The New York Central recently scored a first among the world's railroads by installing computer-fed TV devices that will provide instant information on the location of any of the 125,000 freight cars on the road's 10,000 miles of track...

...  In 1834 an eccentric Englishman named Charles Babbage conceived the idea of a steam-driven "Analytical Engine" that in many details anticipated the basic principles of modern computers. 

... Even if no further advances were made in computer technology, some scientists maintain, the computer has provided enough work and opportunities for man for another thousand years....

... The most expensive single computer system in U.S. business is American Airlines' $30.5 million SABRE, a mechanical reservation clerk that gives instant up-to-the-minute information about every plane seat and reservation to American's 55 ticket offices. ...

... Computers now read electrocardiograms faster and more accurately than a jury of physicians. The Los Angeles police department plans to use computers to keep a collection of useful details about crimes and an electronic rogue's gallery of known criminals. And in a growing number of schools, computers have taken jobs as instructors in languages, history and mathematics...

... IBM is far and away the leader in the field, both in the U.S. and abroad...

... The computers have also spawned the so-called "software" industry, composed of computer service centers and independent firms that program machines and sell computer time (for as little as $10 an hour) to businesses that do not need a machine fulltime....

... Because computer technology is so new and computers require such sensitive handling, a new breed of specialists has grown up to tend the machines. They are young, bright, well-paid (up to $30,000) and in short supply. With brand-new titles and responsibilities, they have formed themselves into a sort of solemn priesthood of the computer, purposely separated from ordinary laymen. Lovers of problem solving, they are apt to play chess at lunch or doodle in algebra over cocktails, speak an esoteric language that some suspect is just their way of mystifying outsiders. Deeply concerned about logic and sensitive to its breakdown in everyday life, they often annoy friends by asking them to rephrase their questions more logically....

Until now computer experts could only communicate with their machines in one of 1,700 special languages, such as COBOL (Common Business Oriented Language), Fortran (Formula Translation), MAD (Michigan Algorithmic Decoder) and JOVIAL (Jules's Own Version of the International Algebraic Language). All of them are bewildering mixtures that only the initiated can decipher. Now some computers have reached the point where they can nearly understand—and reply in—plain English. The new Honeywell 20 understands a language similar enough to English so that an engineer can give it written instructions without consulting a programmer. The day is clearly coming when most computers will be able to talk back.

... Each week, the Government estimates, some 35,000 U.S. workers lose or change their jobs because of the advance of automation. There are also thousands more who, except for automation, would have been hired for such jobs. If U.S. industry were to automate its factories to the extent that is now possible—not to speak of the new possibilities opening up each year—millions of jobs would be eliminated. Obviously, American society will have to undergo some major economic and social changes if those displaced by machines are to lead productive lives.

Men such as IBM Economist Joseph Froomkin feel that automation will eventually bring about a 20-hour work week, perhaps within a century, thus creating a mass leisure class. Some of the more radical prophets foresee the time when as little as 2% of the work force will be employed, warn that the whole concept of people as producers of goods and services will become obsolete as automation advances. Even the most moderate estimates of automation's progress show that millions of people will have to adjust to leisurely, "nonfunctional" lives, a switch that will entail both an economic wrench and a severe test of the deeply ingrained ethic that work is the good and necessary calling of man...

... Many scientists hope that in time the computer will allow man to return to the Hellenic concept of leisure, in which the Greeks had time to cultivate their minds and improve their environment while slaves did all the labor. The slaves, in modern Hellenism, would be the computers...

... The computer has proved that many management decisions are routine and repetitive and can be handled nicely by a machine. Result: many of the middle management jobs of today will go to computers that can do just about everything but make a pass at a secretary...

... What it cannot do is to look upon two human faces and tell which is male and which is female, or remember what it did for Christmas five years ago." Bellman might get an argument about that from some computermen, but his point is valid...

... Most scientists now agree that too much was made in the early days of the apparent similarities between computers and the human brain. The vacuum tubes and transistors of computers were easy to compare to the brain's neurons—but the comparison has limited validity. "There is a crude similarity," says Honeywell's Bloch, "but the machine would be at about the level of an amoeba."... eventually the idea that a machine has humanlike intelligence will become part of folklore...

... In the years to come, computers will be able to converse with men, will themselves run supermarkets and laboratories, will help to find cures for man's diseases, and will automatically translate foreign languages on worldwide TV relayed by satellite. Optical scanning devices, already in operation in some companies, will eventually enable computers to gobble up all kinds of information visually. The machines will then be able to memorize and store whole libraries, in effect acquiring matchless classical and scientific educations by capturing all the knowledge to which man is heir....

... computers will eventually become as close to everyday life as the telephone—a sort of public utility of information...

... the computer is already upsetting old patterns of life, challenging accepted concepts, raising new specters to be conquered. Years from now man will look back on these days as the beginning of a dramatic extension of his power over his environment, an age in which technology began to recast human society. In the long run, the computer is not so much a challenge to man as a challenge for him: a triumph of technology to be developed, subdued and put to constantly increasing use.

Wednesday, October 27, 2021

Inline Skating San Francisco -- from 1997!

I wrote this in 1997 and found it in my old pre-blog Web 1.0 archives. I decided to republish it here because, you know, I could. Some of it may even be relevant today.

Before blogs we did this kind of thing in wysiwyg tools (FrontPage 97 for this piece) and then FTPd to a web server. 

The original page is still on the web but it is not known to Google. Astonishingly it's in the Wayback Machine.

Thursday, March 11, 2021

Sapiens by Yuval Noah Harari - a brief review

I'm a fairly average sleeper for my age, but this morning I gave up a bit before 4am. With the unexpected time I finished Yuval Harari's 2015 book "Sapiens: A Brief History of Humankind". (The "finishing" part is a bit unusual, I too often lose interest 80% of the way through many books.)

Sapiens was, I've read, quite popular with the Silicon Valley set. So I was prepared to dislike it from the start. In truth, while I can see why the Captains of Industry were fans, it's not a bad book. I'd grade it as very good to excellent.

I didn't learn much new -- I have read many of the same things as Harari. I was reminded, though, of things I'd forgotten -- and he touched on many of my favorite themes. If I'd read this as a young person I might have found it astonishing.

What are the flaws? He has a weird definition of "The Liberal" and he really dislikes whatever he means by that. He has a thing about Hosni Mubarak -- a loathsome person, but an odd choice for chief villain. He is glib, but that's a necessity in a book like this.  The glibness is somewhat offset by his habit of critiquing his assumptions at the end of each chapter. He's weakest when he strays into the sciences, particularly biology. Since he wrote this book we’ve seen Trumpism, the rise of Xi, and the slow burn of the Left Behind — events that might cause reconsiderations.

I liked the repeated reminder that non-human animals have paid a terrible price for the rise of humanity. Most books of this kind don't consider them.

In a book of this sort one constantly tries to decipher the author's agenda. What does Harari truly think? He clearly admires Buddhism; I would be surprised if he were not a practitioner of the more intellectual forms of Buddhism. He has a love and admiration for capitalism that outshines his self-critique. On the current American spectrum of political ideology he'd be a techno-optimist libertarian to the right of Obama and me (his characterization of the British Empire is more than slightly incorrect.)

Most of the time the book affirmed my own beliefs and reminded me of things I'd forgotten. Sometimes it annoyed me, but in a way that forced me to examine my priors. It's aged well -- even if some of his 2014 near-future predictions look to be still a decade or two away. I recommend it.

Sunday, January 10, 2021

Biden's essential task is to help the Left Behind

This is the last in a short series of posts that began on Nov 11, 2020, shortly after Biden won the presidency for the first time. Back then it looked like McConnell would hold the Senate.

In the decades since that post Trump became more insane than usual, a deranged mob of Trump cultists invaded the Capitol building, the Dems took the Senate, and Biden won the presidency for the third or fiftieth time depending on how one counts. Truly, I need to finish this series while we still have electricity.

In the first post I wrote about how I think about the Trump voter (and America). I divided the Trump voter up into the religious fundamentalist (sometimes Trump cultist), the despicables (ex: Hawley, Cruz, the WSJ reader), and the Left Behind. The NYT profiled a Left Behind on 11/7/2020 (emphases mine):

... Nick [a] 26-year-old hair stylist ... spent the past few months campaigning for President Trump, taking special satisfaction in offending Biden supporters...

... He did not go to college and prides himself on his independent thinking; he puts himself in the category of street smart, not book smart...

... He had never bothered to register to vote until 2016, when he first heard Mr. Trump debate on television, and saw a political figure who reminded him of himself....

... When Fox News began reporting polls showing Mr. Biden in the lead, Ms. Rocco tried tuning into One America News, the right-wing cable network, which predicted a convincing victory for the president. The Roccos got news from a variety of sources — TikTok, QAnon, the pro-Trump comedian Terrence K. Williams, the YouTube mystic Clif High. “I’m a conspiracy theorist, I guess, if you want to call it that,” Ms. Rocco said...

...  he reminded himself what kept him out there all these months: People had disrespected him. The Facebook group manager who had kicked him off. The neighbor who took his yard sign. Teachers who responded to his daughter’s enthusiasm about Mr. Trump with awkward silence.

“People who don’t like Trump, I honestly think they are very soft people,” he said. “That is why the world is becoming so sensitive today. Back in the day, you could throw a snowball at someone at school and everything was fine. Nowadays, a letter gets sent home: Your child is being mean.”

... His older brother, who he describes as the brainy one, took the other path, winning a scholarship to college. (He is the Biden voter.) But Mr. Rocco’s aspirations were staunchly blue-collar; he chose trade school and went to work at 17...

It's unlikely Nick would have done well at a four year college or succeeded as a knowledge worker. He could marry well, or find an inheritance, or show unusual entrepreneurial talent -- but more likely he's going to be the disrespected Left Behind. In today's America if you aren't a knowledge worker you're probably not going to make over $30/hr, and you're not going to make it into the middle class as an individual [1]. Even in countries with effectively free college this non-knowledge worker cohort is about half the population [2].

America can't survive as a democracy with half its citizens Left Behind. We'll have a January 6 every year -- only the mobs will be larger and have better weapons. Of all the threats we face, this is our greatest near term challenge. Even CO2 driven climate change is a lesser threat.

Biden needs to work out a solution for the Left Behind problem. The good news is that for this problem survival and politics align. Because the Left Behind are the core of Trump's base and the modern GOP, it's tricky for the GOP to block policies that would bring the Left Behind into the middle class. And, after Jan 6, 2021, there's intense awareness of the danger posed by the Left Behind (though that will be quickly forgotten).

Most importantly the Left Behind are not only white, they are also Black [3], Hispanic, Asian and any slice you like (though more men than women). Helping the Left Behind crosses political boundaries, unlike reducing college debt.

What policies can be put in place so that most non-knowledge workers can earn over $30 an hour? I've written of a few over the years, here's a partial list:

  1. A $15/hour minimum wage. Put pressure at the base.
  2. Subsidize employment directly, or provide direct cash support (family allowance and the like), or do a mixture of both [4].
  3. Learn from Germany and from Quebec's manufacturing sector.
  4. Reduce corporate scale, favor smaller companies that employ more blue collar workers [5]
  5. Get creative about small business generation - including a national small business generator.
  6. Do public works and infrastructure that generate employment. I've been told the pharaohs built pyramids so laborers had work between flooding of the Nile. (The US military is one model for non-college public works employment.)
  7. Review all our tax, accounting, finance and labor policies and revise to favor non-college employment.

This is what Biden needs to focus on.

- fn -

[1] One trade worker and one service worker couple can squeak into it as long as they don't have child care expenses and as long as they stay together.

[2] See also my pre-catastrophe 2015 post on why I was a "fan" (heavy irony) of Trump or a recent post/pre catastrophe retrospective. I've been on this topic for a long time though my thoughts have evolved quite a bit over 17 years.

[3] Deep and pervasive structural racism means there's great potential for increasing four year college participation in Black America. I don't think there's anywhere near that potential for White America.

[4] I think subsidized work has significant political, cultural, economic, psychological, and social advantages over direct cash subsidies -- but I'm not averse to experimenting.

[5] "Blue collar" is a historic synonym for Left Behind.


Saturday, December 19, 2020

My 2015 post on why Trump was a sign of a healthy democracy

I'm on the way to writing about what I think should be Biden's #2 priority (#1 is undoing Trump's executive orders). Three previous posts in this series:

  1. How I think about the Trump voter (and America)
  2. What is middle class and why can't half of American voters get there?
  3. Biden's lost agenda

This fourth post is about something I wrote in 2015. Back then I thought Trump was a bad joke. I thought that American politics was a compromise between corporations, powerful (wealthy) individuals, and the voting masses. Clearly corporations and the wealthy would prefer many GOP candidates over Trump, and the masses alone would not be enough.

Yay masses.

Now, amidst the smoldering wreckage, I'm going to quote from that old misguided post (emphases added): 

Donald Trump is a sign of a healthy democracy. Really.

... I’m a fan because Trump appears to be channeling the most important cohort in the modern world — people who are not going to complete the advanced academic track we call college. Canada has the world’s highest “college” graduation rate at 55.8%, but that number is heavily biased by programs that can resemble the senior year of American High School (in Quebec, CEGEP, like mine). If we adjust for that bias, and recognizing that nobody does better than Canada, it’s plausible, even likely, that no more than half of the population of the industrialized world is going to complete the minimum requirements for the “knowledge work” and “creative work” that dominates the modern economy.

... This [never-college] cohort, about 40% of the human race, has experienced at least 40 years of declining income and shrinking employment opportunities. We no longer employ millions of clerks to file papers, or harvest crops, or dig ditches, or fill gas tanks or even assemble cars. That work has gone, some to other countries but most to automation. Those jobs aren’t coming back.

The future for about half of all Americans, and all humans, looks grim. When Trump talks to his white audience about immigrants taking jobs and betrayal by the elite he is starting a conversation we need to have. 

It doesn’t matter that Trump is a buffoon, or that restricting immigration won’t make any difference. It matters that the conversation is starting. After all, how far do you think anyone would get telling 40% of America that there is no place for them in current order because they’re not “smart” enough?

Yeah, not very far at all.

This is how democracy deals with hard conversations. It begins with yelling and ranting and blowhards. Eventually the conversation mutates. Painful thoughts become less painful. Facts are slowly accepted. Solutions begin to emerge.

Donald Trump is good for democracy, good for America, and good for the world.
"Good for democracy" except, of course, the white non-college masses spoke clearly back in 2016. An actual "President Trump" was unthinkable, but it happened. I do not underestimate him now.

So that part of the post did not hold up so well. But I stand by the part about making the never-college 40-50% of Americans a political focus. More on that when I write the fifth post in this series.

See also

Tuesday, November 17, 2020

Biden's lost agenda

In the Fall of 2020 the American people decided to dump Trump. Americans didn't agree on much else; voters rejected significant change. Overall the result did not surprise me, especially given the summer riots and the "defund the police" movement.

Biden's agenda is mostly history. Except for the most important part. I'm going to get to that in a future post (it's not student loan forgiveness!), but first I want to reference the future America rejected. I'll use George Packer's summary in the Oct issue of The Atlantic (emphases mine):

America’s Plastic Hour Is Upon Us

... The scale of Biden’s agenda is breathtaking. At its center is a huge jobs program. A Biden administration would invest $2 trillion in infrastructure and clean energy. He proposes creating 3 million jobs in early education, child care, and elderly care—sectors usually regarded as “soft” and neglected by presidential candidates—while raising their pay and status. “This economic crisis has hit women the hardest,” Sullivan said. “These care jobs are primarily jobs filled by women—and disproportionately women of color and immigrant women—but they don’t pay a fair wage, and the opportunities to advance aren’t there. This is a big, ambitious, bold proposal—not an afterthought, but at the core.” Another $700 billion would go to stimulating demand and innovation in domestic manufacturing for a range of essential industries such as medical supplies, microelectronics, and artificial intelligence. Some $30 billion would go to minority-owned businesses as part of a larger effort to reduce the racial wealth gap.

Biden is proposing industrial policy—massive, targeted investment to restructure production for national goals—something that no president has openly embraced since the 1940s. His agenda would also give workers more power, with paid family and medical leave, paid sick days, a public option for health care, and an easier path to organizing and joining unions. It would more than double the federal minimum wage, to $15 an hour ...

Ok, that's the agenda that was. Wave good-bye to it, but don't give up entirely. There's something in there that can be saved.