
Tuesday, November 12, 2024

Mass disability measured: in 2016 40% of OECD workers could not manage basic technology tasks

It's hard to imagine how very simple much of our technology environment was in the 1980s. Much of what we interacted with was still understandable in simple mechanical terms. Early Mac OS Classic was vastly easier to understand and work with than anything we can imagine now; the closest analogy would be year two of the iPhone.

But even 8 years ago only a small slice of humanity could manage their technology environment (Jakob Nielsen, November 2016; thanks to Matt Panaro for sharing).

... The data was collected from 2011–2015 in 33 countries and was published in 2016 by the OECD... In total, 215,942 people were tested, with at least 5,000 participants in most countries... 

... research aimed to test the [job-related including technology] skills of people aged 16–65 ...

...participants were asked to perform 14 computer-based tasks. Instead of using live websites, the participants attempted the tasks on simulated software on the test facilitator’s computer. This allowed the researchers to make sure that all participants were confronted with the same level of difficulty across the years and enabled controlled translations of the user interfaces into each country’s local language ..

The tasks they chose were typical business-worker tasks, the kinds of tasks that got lots of training back in the 90s. They never tried anything as tough as the HR software my legally disabled son has to work with at his minimum-wage hourly job (we do it for him, obviously).

Not all OECD members are equally wealthy, but Nielsen reproduces a country-specific bar chart from the study. US numbers are not hugely different from the OECD averages; I've added US numbers based on squinting at the chart (the numbers are reported as cumulative skill levels, so each level assumes the skills of the ones below):

26% could not use a computer at all (US 20%)

14% could delete an email (US 15%)

29% could manage "reply all" or "Find all emails from John Smith" (US 35%)

5% could do "You want to know what percentage of the emails sent by John Smith last month were about sustainability." (US 5%)
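To make that top-level task concrete, here is a minimal sketch (my own illustration, not material from the OECD study, with hypothetical mailbox data) of what the question reduces to once the email client's interface is stripped away: filter by sender and month, count keyword matches, compute a percentage.

```python
# Hypothetical illustration of the hardest tested email task:
# what fraction of John Smith's emails last month mention sustainability?
from datetime import date

# Made-up mailbox records: (sender, date_sent, subject)
emails = [
    ("John Smith", date(2016, 5, 3), "Sustainability report draft"),
    ("John Smith", date(2016, 5, 17), "Lunch on Friday?"),
    ("Jane Doe", date(2016, 5, 20), "Sustainability metrics"),
    ("John Smith", date(2016, 4, 28), "Sustainability targets"),
]

def pct_about(emails, sender, year, month, keyword):
    # Filter to the sender and month, then count subject-line keyword matches.
    sent = [e for e in emails if e[0] == sender and e[1].year == year and e[1].month == month]
    hits = [e for e in sent if keyword in e[2].lower()]
    return 100 * len(hits) / len(sent) if sent else 0.0

print(pct_about(emails, "John Smith", 2016, 5, "sustainability"))  # -> 50.0
```

Trivial as code, but only about 1 in 20 tested adults could do the interactive equivalent inside an unfamiliar email client.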

They did not test the ability to maintain multiple malware-free home computers, iPhones, iPads and the like. I'd guess that's more like 0.1%.

Jakob Nielsen has been talking about "usability" as long as I can recall. So eight years ago he put this into the context of computer design and training. Which has been the mainstream interpretation of findings like this over the past 50 years of growing technological complexity. 

50 years is a good amount of time to wait for better software. Now we anticipate AI will monitor computer screens and guide users to complete tasks.

But maybe better software won't help. After all, 40% of users couldn't do relatively simple software tasks. Maybe the problem is human cognitive limits.

I have coincidentally used that 40% number in a post rather similar to this the last time Trump won. Around the time that OECD study was published. Forty percent is my guesstimate of the percentage of Americans who cannot hope to earn the approximately $70,000 a year (plus benefits) needed to sustain a single adult in the low-end of American middle-class life. A life with some savings, yearly vacations, secure shelter, even a child or two.

Since at least 2008 I have called this global phenomenon mass disability. Every few years I see the basic concept emerge, typically with more euphemisms, only to be quickly forgotten. It's hard to fix anything when the fix begins with a very unhappy truth.

The unhappy truth is the complexity of our technological environment has exceeded the cognitive grasp of most humans. We now have an unsustainable mismatch between "middle-class" work and the cognitive talents of a large percentage of Americans.

There are things to do, some of which Biden started. We probably needed to have started on them back in the 90s. Perhaps Musk will have ideas. It's all on the oligarchs now.

Wednesday, August 28, 2024

In which I declare my expert judgment on AI 2024

These days my social media experience is largely Mastodon. There's something to be said for a social network that's so irreparably geeky and so hard to work with that only a tiny slice of humanity can possibly participate (unless and until Threads integration actually works).

In my Mastodon corner of the "Fediverse", among the elite pundits I choose to read, there's a vocal cohort that is firm in their conviction that "AI" hype is truly and entirely hype, and that the very term "AI" should not be used. That group would say that the main application of LLM technology is criming.

Based on my casual polling of my pundits there's a quieter cohort that is less confident. That group is anxious, but not only about the criming.

Somewhere, I am told, there is a third group that believes that godlike-AIs are coming in 2025. They may be mostly on Musk's network.

Over the past few months I think the discourse has shifted. The skeptics are less confident, and the godlike-AI cohort is likewise quieter as LLM based AI hits technical limits. 

The shifting discourse, and especially the apparent LLM technical limitations, mean I'm back to being in the murky middle of things. Where I usually sit. Somehow that compels me to write down what I think. Not because anyone will or should care [1], but because I write these posts mostly for myself and I like to look back and see how wrong I've been.

So, in Aug 2024, I think:
  1. I am less worried that the end of the world is around the corner. If we'd gotten one more qualitative advance in LLM or some other AI tech I'd be researching places to (hopelessly) run to.
  2. Every day I think of new things I would do if current LLM tech had access to my data and to common net services. These things don't require any fundamental advances but they do require ongoing iteration.  I don't have much confidence in Apple's capabilities any more, but maybe they can squeeze this out. I really, really, don't want to have to depend on Microsoft. Much less Google.
  3. Perplexity.ai is super valuable to me now and I'd pay up if they stopped giving it away. It's an order of magnitude better than Google search.
  4. The opportunities for crime are indeed immense. They may be part of what ends unmediated net access for most people. By far the best description of this world is a relatively minor subplot in Neal Stephenson's otherwise mixed 2019 novel "Fall".
  5. We seem to be replaying the 1995 dot com crash but faster and incrementally. That was a formative time in my life. It was a time when all the net hype was shown to be .... correct. Even as many lost their assets buying the losers.
  6. It will all be immensely stressful and disruptive and anxiety inducing even though we won't be doing godlike-AI for at least (phew) five more years.
  7. Many who are skeptical about the impact of our current technologies have a good understanding of LLM tech but a weak understanding of cognitive science. Humans are not as magical as they think.
- fn -

[1] I legitimately have deeper expertise here than most would imagine but it's ancient and esoteric.

Monday, April 03, 2023

We need a new word for the historical singularity.

TLDR: The "technological singularity" was an important and useful term with a clear meaning. Then it became the "Rapture of the Nerds". We need a new term.

--

I first heard the word "singularity" in the context of black hole physics; it dates back at least to the early 20th century:

ChatGPT 4 2023: "At the singularity, the laws of physics as we know them, including space and time, break down, and our current understanding of the universe is insufficient to predict what happens within it."

Not much later, in the 1950s, the term was applied by von Neumann in a technological context (from a 1993 Vernor Vinge essay):

Stan Ulam paraphrased John von Neumann as saying: "One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Brad DeLong used to write about this kind of non-AI historical singularity. My favorite description of what it would be like to approach a technological singularity was Vinge's short story "Fast Times at Fairmont High". (This prescient story appears to be lost to time; he wrote a similar full-length novel, but I think the short story was better.)

The core idea is that there's a (virtuous?) recursive loop in which technology improves technology, with shorter and shorter cycle times. Many processes go exponential and even near-term developments become unpredictable. One may assume social and economic structures strain to keep pace. The historical singularity's exponential curve was part of the GDP-per-person historical graph in The Economist's Y2K Millennium issue.
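As a toy illustration of the shrinking-cycle idea (my own sketch with assumed numbers; not Vinge's math or The Economist's data): if each improvement cycle takes a fixed fraction of the time of the previous one, the total time to arbitrarily many cycles is a finite geometric sum, which is why the curve looks like it hits a wall.

```python
# Toy model: each technology generation doubles capability and arrives
# in 70% of the time the previous generation took (all numbers assumed).
cycle = 10.0   # years for the first improvement cycle
shrink = 0.7   # each cycle is 70% as long as the last
t, capability = 0.0, 1.0
for generation in range(1, 21):
    t += cycle
    capability *= 2
    cycle *= shrink

limit = 10.0 / (1 - shrink)  # the geometric series converges to a finite time
print(f"20 generations by year {t:.1f}; capability up {capability:,.0f}x; "
      f"every further generation arrives before year {limit:.1f}")
```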


In a January 1983 essay for Omni Magazine, Vinge focused on a particular aspect of the technological singularity arising from superhuman intelligence (aka "superintelligence"):

We will soon create intelligences greater than our own ... When this happens there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole" 

A decade later, in his 1993 essay later published in Whole Earth Review (non-Olds cannot imagine what Whole Earth Review was like), Vinge revised what he meant by "soon":

... Based on this trend, I believe that the creation of greater-than-human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.) ...

So by the year 2000 we had the concept of a historical technological singularity (eminently sensible), which had become focused on a specific kind of self-improving technology: superhuman intelligence, the upper-case-S Singularity (presumably AI). Those were useful concepts - the "technological singularity" and the superintelligence "Singularity".

In 1993 Vinge predicted the Singularity would happen before 2030; later experts like Scott Aaronson predicted after 2080. (Aaronson has since revised that prediction and works for OpenAI; Vinge's 2030 date looks pretty good.)

After 2000, though, the word Singularity went off the semantic rails. It came to be used for a peculiar future state in which human minds were uploaded into simulation environments that were usually described as pleasant rather than hellish. This is, of course, antithetical to the original idea of unpredictability! This peculiar meaning of "The Singularity" came to be known as "The Rapture of the Nerds", after the title of a book by Charles Stross and Cory Doctorow. More recently that vision underlies a weird cult called longtermism that seems to have infected some vulnerable minds.

The "Rapture" meaning of "Singularity" has effectively taken over. We no longer have a term for the original von Neumann concept of self-reinforcing technological change that makes even the near term future unpredictable. That perversion of meaning is why I use the tag "Whitewater world" rather than Singularity for my own blog posts (and "Skynet" rather than super intelligence).

So we need a new term. I don't think "whitewater world" is it.

Anyone have a new term? We probably have a few years in which it might still be useful.

Thursday, March 30, 2023

ChatGPT's 155 IQ and passing the Sebastian's children puzzle

A Scientific American contributor found ChatGPT (probably 3) scored 155 on a standard IQ test:

Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155, superior to 99.9 percent of the test takers who make up the American WAIS III standardization sample of 2,450 people. As the chatbot lacks the requisite eyes, ears and hands, it is not able to take WAIS’s nonverbal subtests. But the Verbal IQ and Full Scale IQ scales are highly correlated in the standardization sample, so ChatGPT appears to be very intelligent by any human standards.

We don't need to worry though because it couldn't answer this question:

ChatGPT easily fails at obvious riddles, such as “What is the first name of the father of Sebastian’s children?” (ChatGPT on March 21: I’m sorry, I cannot answer this question as I do not have enough context to identify which Sebastian you are referring to.) It seems that ChatGPT fails to reason logically and tries to rely on its vast database of “Sebastian” facts mentioned in online texts.

This is what happened when I tried the Sebastian test on GPT-4:

Ah, I understand now. This riddle is a classic play on words. The answer is "Sebastian" himself, as he is the father of his children.

Ok, so that's reassuring.

Update 4/12/2023: ChatGPT4 (C4) recently did well in some blinded advanced undergraduate exams.

A response to Scott Aaronson's rejection of an AI pause.

Scott Aaronson, who works on AI safety for OpenAI, wrote a critique of AI Pause that was not up to his usual standards. Here's what I wrote as a comment:

Hi Scott — I was confused by your post. I’m usually able to follow them. I won’t defend the letter directly and Yudkowsky/TIME is not worth a mention but maybe you could clarify some things…

1. 6m seems a reasonable compromise given the lifespan of humans, the timescales of human deliberation and the commercial and military pressure to accelerate AI development. Short enough to motivate urgent action, but long enough that reflection is possible. (I doubt we actually pause, but I agree with the principle. China isn’t going to pause of course.)

2. Let’s assume GPT 5 with an array of NLP-powered extensions exceeds the reasoning abilities of 95% of humanity in a wide variety of knowledge domains. That’s a shock on the scale of developing fire, but it’s occurring in a hugely complex and interdependent world that seems always on the edge of self-destruction and actually has the capabilities to end itself. We’re not hunter-gatherers playing with fire or Mesopotamians developing writing. There’s no precedent for the speed, impact and civilizational fragility we face now.

3. It’s not relevant that people who signed this letter were previously skeptical of the progress towards AI. I recall 10 years ago you were skeptical. For my part I’ve been worried for a long time, but assumed it was going to come in 2080 or so. 60 years early is a reason to pause and understand what has happened.

Lastly, I read the OpenAI statement. That seems consistent with a pause.

Tuesday, March 14, 2023

Everything is just fine.

The ad-funded internet is collapsing. It's been ailing for years but, as seems to be the rule, the last bit is happening quite quickly. I'm guessing advertisers, who must be kind of dim, finally realized they were wasting their money. They did much better funding special-interest magazines, but those are gone now. Much of Silicon Valley has been funded by a now-defunct business model. So maybe this is not a good time to be there. (Or, if you are young and have some money, maybe this is the best time to be there.)

Meanwhile human-cognition-level AI has arrived. About 60 years earlier than I'd hoped. And some people thought "Don't Look Up" was about climate change. As a guide I recommend the movie Superintelligence, particularly the protagonist's approach to the end of the world. (Spoiler: It's a rom-com. Her world does not end.)

It's whitewater world all over now. On the bright side, we don't need to waste too much time on longterm planning. I retired in August of 2022 so I think I'll focus on mountain bike trail maintenance and, like Superintelligence's Carol Peters, on the needs of friends and family.

Sunday, March 12, 2023

I ask ChatGPT about the next step.

 Not a bad response.

“Describe what happens when ChatGPT is enhanced with natural language processing and domain-specific knowledge modules.”

--


When ChatGPT is enhanced with natural language processing and domain-specific knowledge modules, it gains several capabilities that allow it to better understand and respond to user input.


Natural language processing (NLP) is a field of artificial intelligence that focuses on enabling computers to understand and interpret human language. By incorporating NLP into ChatGPT, the model can better analyze and interpret user input, including understanding sentence structure, recognizing named entities, and determining the meaning of words in context. This allows ChatGPT to generate more accurate and relevant responses to user queries.


Domain-specific knowledge modules provide ChatGPT with specialized knowledge related to a particular field or topic. For example, if ChatGPT is enhanced with a medical knowledge module, it can better understand medical terminology and provide more accurate and relevant responses to medical-related queries. By incorporating domain-specific knowledge modules, ChatGPT can tailor its responses to specific user needs, making it a more effective conversational agent.


Together, these enhancements enable ChatGPT to better understand and respond to user input, providing more accurate and relevant information to users. This can lead to a more engaging and satisfying user experience, as well as improved productivity and efficiency in a wide range of applications.”

Monday, February 20, 2023

Be afraid of ChatGPT

TL;DR: It's not that ChatGPT is miraculous, it's that cognitive science research suggests human cognition is also not miraculous.

"Those early airplanes were nothing compared to our pigeon-powered flight technology!"

https://chat.openai.com/chat - "Write a funny but profound sentence about what pigeons thought of early airplanes"

Relax: ChatGPT is just a fancy autocomplete.
Be afraid: Much of human language generation may be a fancy autocomplete.

Relax: ChatGPT confabulates.
Be afraid: Humans with cognitive disabilities routinely confabulate, and under enough stress most humans will confabulate.

Relax: ChatGPT can’t do arithmetic.
Be afraid: If a monitoring system can detect that a question involves arithmetic or mathematics, it can invoke a math system* (see the sketch after this list). UPDATE: 2 hours after writing this I read that this has been done.

Relax: ChatGPT’s knowledge base is faulty.
Be afraid: ChatGPT’s knowledge base is vastly larger than that of most humans and it will quickly improve.

Relax: ChatGPT doesn’t have explicit goals other than a design goal to emulate human interaction.
Be afraid: Other goals can be implemented.

Relax: We don’t know how to emulate the integration layer humans use to coordinate input from disparate neural networks and negotiate conflicts.

* I don’t know the status of such an integration layer. It may already have been built. If not it may take years or decades -- but probably not many decades.

Relax: We can’t even get AI to drive a car, so we shouldn’t worry about this.
Be afraid: It’s likely that driving a car basically requires near-human cognitive abilities. The car test isn’t reassuring.

Relax: ChatGPT isn’t conscious.
Be afraid: Are you conscious? Tell me what consciousness is.

Relax: ChatGPT doesn’t have a soul.
Be afraid: Show me your soul.
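The "invoke a math system" row is what later came to be called tool routing. Here's a minimal sketch of the pattern (my own illustration; ask_llm() is a hypothetical stand-in for the chat model, and Python's eval() stands in for a real math engine):

```python
import re

def looks_like_arithmetic(text: str) -> bool:
    # Crude detector: only digits, whitespace, and arithmetic operators.
    return bool(re.fullmatch(r"[\d\s.+\-*/()]+", text.strip()))

def ask_llm(question: str) -> str:
    # Hypothetical placeholder; a real system would call the chat model here.
    return f"[model answer to: {question}]"

def answer(question: str) -> str:
    if looks_like_arithmetic(question):
        # Route to an exact calculator instead of the language model.
        return str(eval(question))  # eval() is unsafe; stand-in for a real math engine
    return ask_llm(question)

print(answer("12 * (7 + 5)"))      # exact arithmetic: 144
print(answer("Who wrote Faust?"))  # falls through to the model
```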

Relax - I'm bad at predictions. In 1945 I would have said it was impossible, barring celestial intervention, for humanity to go 75 years without nuclear war.


See also:

  • All posts tagged as skynet
  • Scott Aaronson and the case against strong AI (2008). At that time Aaronson felt a sentient AI was sometime after 2100. Fifteen years later (Jan 2023) Scott is working for OpenAI (ChatGPT). Emphases mine: "I’m now working at one of the world’s leading AI companies ... that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie ... that AI will gain many of the remaining abilities in years rather than decades, and ... my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world."
  • Imagining the Singularity - in 1965 (2009 post). Mathematician I.J. Good warned of an "intelligence explosion" in 1965. "Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British statistician who worked as a cryptologist at Bletchley Park."
  • The Thoughtful Slime Mold (2008). We don't fly like birds fly.
  • Fermi Paradox resolutions (2000)
  • Against superhuman AI: in 2019 I felt reassured.
  • Mass disability (2012) - what happens as more work is done best by non-humans. This post mentions Clark Goble, an app.net conservative I miss quite often. He died young.
  • Phishing with the post-Turing avatar (2010). I was thinking 2050 but now 2025 is more likely.
  • Rat brain flies plane (2004). I've often wondered what happened to that work.
  • Cat brain simulator (2009). "I used to say that the day we had a computer roughly as smart as a hamster would be a good day to take the family on the holiday you've always dreamed of."
  • Slouching towards Skynet (2007). Theories on the evolution of cognition often involve aspects of deception including detection and deceit.
  • IEEE Singularity Issue (2008). Widespread mockery of the Singularity idea followed.
  • Bill Joy - Why the Future Doesn't Need Us (2000). See also Wikipedia summary. I'd love to see him revisit this essay but, again, he was widely mocked.
  • Google AI in 2030? (2007) A 2007 prediction by Peter Norvig that we'd have strong AI around 2030. That ... is looking possible.
  • Google's IQ boost (2009) Not directly related to this topic but reassurance that I'm bad at prediction. Google went to shit after 2009.
  • Skynet cometh (2009). Humor.
  • Personal note - in 1979 or so John Hopfield excitedly described his work in neural networks to me. My memory is poor but I think we were outdoors at the Caltech campus. I have no recollection of why we were speaking, maybe I'd attended a talk of his. A few weeks later I incorporated his explanations into a Caltech class I taught to local high school students on Saturday mornings. Hopfield would be about 90 if he's still alive. If he's avoided dementia it would be interesting to ask him what he thinks.

Saturday, November 27, 2021

Civilization, complexity and the limits of human cognition - another attempt at explaining the 21st century

The 70s were pretty weird, but I was too young to notice. (Not coincidentally, the Toffler/Farrell Future Shock book was written then.) By comparison the 80s and 90s more or less made sense. In 1992 Fukuyama wrote "The End of History" and that seemed about right for the times.

Things got weird again in the late 90s. I was in a .com startup and I remember valuations getting crazy about 1997, 3 years before the .com crash. We were still picking ourselves up from the crash when 9/11 hit. (A year later, on a purely personal note, my youngest brother vanished.) In the early 00s came Enron and other frauds almost forgotten now. Then in 2008 the real estate collapse and the Great Recession. We were barely recovering from the Great Recession when Trumpism hit. Followed by COVID (which was expected and not at all weird) and the Great Stupidity of the American Unvaccinated (which we did not expect and is perhaps weirdest of all).

Each time the world went off kilter I have tried to figure out a root cause:

At last count my list of contributing factors to the crash of '09 included ...

  1. Complexity collapse: we don't understand our emergent creation, we optimized for performance without adaptive reserve
  2. Mass disability and income skew: The modern world has disenfranchised much of humanity
  3. The Marketarian religion: The GOP in particular (now the Party of Limbaugh), but also many Democrats and libertarians, ascribed magical and benign powers to a system for finding local minima (aka The Market). The Market, like Nature, isn't bad -- but neither is it wise or kind.
  4. The occult inflation of shrinking quality: What happens when buyers can't figure out what's worth buying. Aka, the toaster crisis - yes, really.
  5. performance-based executive compensation and novel, unregulated, financial instruments: a lethal combination. See also - You get what you pay for. The tragedy of the incentive plan.
  6. Disintermediating Wall Street: Wall Street became a fragile breakpoint 
  7. The future of the publicly traded company: A part of our problem is that the publicly traded company needs to evolve
  8. The role of the deadbeats: too much debt - but we know that
  9. Firewalls and separation of powers: a culture of corruption, approved by the American electorate, facilitated dissolving regulatory firewalls
  10. Marked!: Rapid change and the Bush culture made fraud easy and appealing

I put Marked! pretty low on the list, but maybe I should bump it up a bit. The Hall of Shame (Clusterstock) lists a lot more fraud than has made the papers [1]...

By 2010 I was focusing on RCIIIT: The rise of China and India and the effects of IT.

... The Rise of China and India (RCI) has been like strapping a jet engine with a buggy throttle onto a dune buggy. We can go real fast, but we can also get airborne – without wings. Think about the disruption of German unification – and multiply that ten thousand times.

RCI would probably have caused a Great Recession even without any technological transformations.

Except we have had technological transformation – and it’s far from over. I don’t think we can understand what IT has done to our world – we’re too embedded in the change and too much of it is invisible. When the cost of transportation fell dramatically we could see the railroad tracks. When the cost of information generation and communication fell by a thousandfold it was invisible ...

In 2016 and again in 2018 I tried to explain Trumpism by contributing factors (I was too optimistic about Murdoch's health though):

  • 65% the collapse of the white non-college “working class” — as best measured by fentanyl deaths and non-college household income over the past 40 years. Driven by globalization and IT both separately and synergistically including remonopolization (megacorp). This is going to get worse.
  • 15% the way peculiarities of the American constitution empower rural states and rural regions that are most impacted by the collapse of the white working class due to demographics and out-migration of the educated. This is why the crisis is worse here than in Canada. This will continue.
  • 15% the long fall of patriarchy. This will continue for a time, but eventually it hits the ground. Another 20 years for the US?
  • 5% Rupert Murdoch. Seriously. In the US Fox and the WSJ, but also his media in Australia and the UK. When historians make their list of villains of the 21st century he’ll be on there. He’s broken and dying now, but he’s still scary enough that his name is rarely mentioned by anyone of consequence.
  • 1% Facebook, social media, Putin and the like. This will get better.

That 1% for Facebook et al. is pretty small — but the election of 2016 was on the knife’s edge. That 1% was historically important.

A few months ago I listed 3 causes for the post-COVID supply and labor shock economics of 2021:

1. Wealth became extremely concentrated. 

2. Returns on labor for 40% of Americans fell below the modern standard for economic life.

3. Good investments became hard to find.

It's almost 2022 now, so we're into almost 25 years of the world not making sense any more. So now I'm digging even deeper for a root cause.

Today I'm going with Gordon's Law: the complexity of a complex adaptive system will increase until it reaches a limiting factor. Our civilization is a complex adaptive system and its complexity increased until it hit a limiting factor -- the complexity capacity of the average human. These days between 40 and 50% of Americans can't handle civilization 2021 (sometimes I call this mass disability; see also). Witness, among other things, The Great Stupidity of the FoxCovians.

It's a variant of the "Future Shock" Toffler wrote about 52 years ago. I don't have a fix; I don't think the world will get less complex. Our technologies are moving too fast. Maybe we'll just get used to not understanding the world and civilization will stumble on regardless. After all, for most of human history the world was incomprehensible -- and we did manage. Sort of. Mostly without civilization though ...

Sunday, October 11, 2020

Electric vehicles will end another non-college job - the auto mechanic

 My Great-Aunt was born in the 19th century. She spent most of her life working in data processing. She, and thousands like her, did read, delete, update operations on paper cards that were passed between thousands of small rectangular desks in a large rectangular building in Montreal. None of her coworkers had a college degree -- I suspect many could not read very well. The work seems impossibly dull, but she enjoyed it and the pension it brought her.

I have one of those desks, I'm typing on it now. It fits nicely in a corner of my living room, and I'm slender enough to fit comfortably in it.

By the 1960s the first business computers wiped out her industry as definitively as the automobile eliminated millions of horses. There would never again be a large scale job that required no particular social, physical, or cognitive skills.

Since that time IT has generated vast numbers of knowledge worker jobs that pay relatively well while eliminating jobs that do not require cognitive skills.

Now electric vehicles are going to do the same thing. Compared to internal combustion engines they are much easier to maintain; their complexity is in batteries and software. Never-college auto mechanics are going to lose their jobs.

There's a lot we can do about this problem. It's not only the right thing to do, it's also essential to our survival. Even if Biden wins in 2020, if his administration doesn't act quickly there will be another Trump in 2024.

Tuesday, June 09, 2020

Viktor Frankl - on expectations and the behavior of people

Viktor Frankl formed some of his opinions of human nature by surviving several concentration camps. After his release he wrote, over a few days, a book about his experience - Man's Search for Meaning.

The book has harsh critics. I read it and I think much of it sees truth, though it is also a book of another era -- an era in which "man" more or less included women.

Today psychology, psychiatry, neurology and the sciences retain little of Frankl's life work. He could not grasp that meaning might exist in the absence of religion, or that responsibility could be assumed rather than fundamental. I believe, however, that he had a true understanding of the extremes of human nature for evil and for good.

YouTube (and the TED site) have a video of a lecture he gave later in his life. From the Frankl Institute (with yet another video copy!):

YOUTH IN SEARCH OF MEANING, 1972 [4:22]
Frankl speaking at the "Toronto Youth Corps" in 1972. See Frankl "at his best" as he vividly explains his theories, and even draws analogies to piloting an aircraft – a passion he had recently picked up.

In this lecture he talks about how one must "crab" an airplane to adjust for a crosswind (1:45).  To reach a destination you have to periodically turn into the wind. He expands the analogy to people:

If we take man as he really is we make him worse. But if we overestimate him ... if we seem to be idealist and are overestimating ... overrating man ... and looking at him up high ... we promote him to what he really can be...

... Do you know who has said this? If we take man as he is we make him worse, but if we take man as he should be we make him capable of becoming what he can be? ... This was not me. This was not my flight instructor. This was Goethe.

From this it is a small Google step to the Goethe quote (in English):

When we treat man as he is we make him worse than he is.
When we treat him as if he already was what he potentially could be we make him what he should be.

 In the strange time of June 2020 I think this is worth remembering.

Sunday, December 30, 2018

Why the crisis of 2016 will continue for decades to come

I haven’t written recently about why Crisis 2016, sometimes called Crisis-T, happened. For that matter, why Brexit. My last takes were in 2016 …

  • In defense of Donald Trump - July 2016. In which I identified the cause of the crisis, but assumed we’d dodge the bullet and HRC would tend to the crisis of the white working class.
  • Trumpism: a transition function to the world of mass disability - Aug 2016. “How does a culture transition from memes of independence and southern Christian-capitalist marketarianism to a world where government deeply biases the economy towards low-education employment?"
  • After Trump: reflections on mass disability in a sleepless night - Nov 11, 2016. "Extreme cultural transformation. Demographics. China. The AI era and mass disability. I haven’t even mentioned that pre-AI technologies wiped out traditional media and enabled the growth of Facebook-fueled mass deception alt-media … We should not be surprised that the wheels have come off the train.”
  • Crisis-T: What’s special about rural? - Nov 16, 2016: "The globalization and automation that disabled 40% of working age Americans isn’t unique to rural areas, but those areas have been ailing for a long time. They’ve been impacted by automation ever since the railroad killed the Erie canal, and the harvester eliminated most farm workers. Once we thought the Internet would provide a lifeline to rural communities, but instead it made Dakka as close as Escanaba.”

How has my thinking changed two years later? Now I’d add a couple of tweaks, especially the way quirks of America’s constitution amplified the crisis. Today’s breakdown:

  • 65% the collapse of the white non-college “working class” — as best measured by fentanyl deaths and non-college household income over the past 40 years. Driven by globalization and IT both separately and synergistically including remonopolization (megacorp). This is going to get worse.
  • 15% the way peculiarities of the American constitution empower rural states and rural regions that are most impacted by the collapse of the white working class due to demographics and out-migration of the educated. This is why the crisis is worse here than in Canada. This will continue.
  • 15% the long fall of patriarchy. This will continue for a time, but eventually it hits the ground. Another 20 years for the US?
  • 5% Rupert Murdoch. Seriously. In the US Fox and the WSJ, but also his media in Australia and the UK. When historians make their list of villains of the 21st century he’ll be on there. He’s broken and dying now, but he’s still scary enough that his name is rarely mentioned by anyone of consequence.
  • 1% Facebook, social media, Putin and the like. This will get better.

That 1% for Facebook et al. is pretty small — but the election of 2016 was on the knife’s edge. That 1% was historically important.

Rupert Murdoch will finally die, though his malignant empire will grind on for a time. Patriarchy can’t fall forever, eventually that process is done. We now understand the risks of Facebook and its like and those will be managed. So there’s hope.

But the crisis of the white non-college will continue and our constitution will continue to amplify that bloc’s political power in rural areas. Even if civilization wins in 2020 the crisis of 2016 will continue. It will test human societies for decades to come.

Wednesday, April 18, 2018

Dyer on the 21st century crisis of mass unemployment

I believe this is true — though I’d be more confident if one of my favorite economists thought this was plausible (emphases mine):

If The Model Is Broken, Fix It | Gwynne Dyer

… The political model of Western-style democracy, which grew up alongside and then within a capitalist economic model, is now broken. Exhibit Number One is Donald Trump, but there’s lots of other evidence too.

One-third of French voters backed Marine Le Pen, a cleaned-up, user-friendly neo-fascist, in last year’s presidential election. In last September’s German election, one-eighth of the electorate voted for Alternative for Germany, a party whose more extreme wing is neo-Nazi – but it now leads the opposition in the Bundestag, the German parliament.

Last month in Italy, the two biggest parties to emerge from the election were both led by populist rabble-rousers, one from the left and one from the right. Not to mention Brexit in Britain. And in every case the themes that dominated the populists’ rhetoric were racism, nationalism, hostility to immigrants – and jobs.

Trump rarely talked about anything else during the presidential election campaign: immigrants are stealing the jobs, free-trading American businessmen are exporting the jobs, the foreigners are eating America’s lunch….

Trump may not know a lot, but he knows One Big Thing. We are living in a new era of mass unemployment, and nobody has noticed. As Trump said the night after he won the New Hampshire primary in February 2016: “Don’t believe those phony numbers when you hear 4.9 and 5 percent unemployment. The number’s probably 28, 29, as high as 35. In fact, I even heard recently 42.”

It’s not really 42 percent, but it’s not 4.1 percent (the current official US rate) either. According to Nicholas Eberstadt’s ‘Men Without Work’, the real unemployment rate among American men of prime working age (24-55) – including those who don’t get counted because they have given up looking for work – is 17 percent.

Why didn’t we notice? Because the unemployed weren’t protesting in the streets like they did in the Great Depression of the 1930s, although the rate is getting up to Depression era levels. After the Second World War, all the Western democracies built welfare states, mainly so a new generation of radical populist leaders would not come to power the next time there is mass unemployment.

It has worked, in the sense that there is not blood in the streets this time around, but the jobless millions are very angry even if the welfare state means that they are not starving. They do vote, and unless something is done to ease their anger, next time they may vote for somebody who makes Trump look good by comparison.

But if the problem is unemployment, then the answer is not obvious, because the main cause of unemployment in Western countries is not immigration or ‘offshoring’ jobs, as Trump pretends. It is computers.

One-third of American manufacturing jobs have vanished in the past 20 years, and the vast majority of them (85 percent) were destroyed by automation. The algorithms and the robot arms have already killed the Rust Belt, and there is a plausible prediction that almost half of existing American jobs may be automated out of existence in the next 20 years.

What would our politics look like then? Not very democratic, unless we do something to ease the anger of the unemployed. This doesn’t just mean giving them more money – a massive expansion of the welfare state – but also finding a way of taking the shame out of unemployment, because it is the humiliation of being seen as a loser that breeds the anger…

I’ve called this ‘mass disability’, because to me it’s a mismatch between the skills the majority of humans have and the skills needed to earn a middle class or better income.

I don’t have any other explanation for why the entire western world is simultaneously in crisis other than what I wrote about in 2010 - Globalization (China) and Information Technology.

See also:

Saturday, December 30, 2017

Tech regressions: MORE, Quicken, PalmOS, iOS, Podcasts, Aperture, Music, iPad photo slide shows, and toasters.

One of the odder experiences of aging is living through technology regressions. I’ve seen a few — solutions that go away and are never replaced.

Symantec’s classicMac MORE 3.1 was a great outliner/editing tool with the best style sheet implementation I’ve seen. It died around 1991. The closest thing today would be Omni Outliner — 16 years later. There’s still no comparable Style Sheet support.

Quicken for DOS with 3.5” monthly diskette records of credit card transactions was the most reliable and useable personal accounting tool I’ve experienced — though even it had problems with database corruption. I think that was the 1980s. Today I use Quicken for Mac, a niche product with unreliable transfer of financial information, questionable data security, and limited investment tools.

PalmOS Datebk 5 was an excellent calendaring tool with good desktop sync (for a while the Mac had the best ‘personal information management’ companion). That was in the 1990s. When PalmOS died we went years without an alternative. I briefly returned to using a Franklin Planner. Somewhere around year 3 of iOS we had equivalent functionality again — and a very painful transition.

iOS and macOS have seen particularly painful combinations of progressions and regressions. OS X / macOS photo management was at its best somewhere around the end of Snow Leopard and Aperture 3.1 (memory fuzzy, not sure they overlapped). OS X photo solutions had finally reached a good state after years of iPhoto screw-ups — the professional and home products more or less interoperated. All Apple needed to do was polish Aperture’s rough edges and fix bugs. Instead they sunset Aperture and gave us Photos.app — a big functional regression. Apple did something similar with iMovie; it’s much harder to make home “movies” than it once was.

iOS was at its most reliable around version 6. So Apple blew it up. Since that time Podcasts.app has gone from great to bad to not-so-bad to abysmal. The iPad used to have a great digital picture frame capability tied to screen lock — Apple took that away. For a while there was a 3rd party app that worked with iCloud photo streams, I could remotely add images to my father’s iPad slideshow digital picture frame. There’s nothing that works as well now; as I write this I’m working through a web of bugs and incompetence (I suspect a desperate timeout stuck into iTunes/iOS sync) to sneak some photos from Aperture to an iPad.

Apple Music is following the path of Podcasts.app as Apple moves to ending the sale of music (probably 2019). At the same time iTunes is being divided into dumbed down subunits (iBooks regression). The last 2-3 revisions of iTunes have been so bad that this feels almost like a mercy killing.

We don’t have a  way to avoid these regressions. Once we could have gotten off the train, now the train stations are dangerous neighborhoods of lethal malware. We need to keep upgrading, and so much is bundled with macOS and iOS that we can’t find 3rd party alternatives. Data lock is ubiquitous now.

I think regressions are less common outside digital world. It’s true toasters aren’t what they were, but since 2006 Chinese products have become better made and more reliable. Perhaps the closest thing to tech regressions in the material world is the chaos of pharma prices.

This takes a toll. There are so many better ways to spend my life, and too few minutes to waste. I wonder what these regressions do to non-geeks; I don’t think it goes well for them.

Saturday, November 11, 2017

Taxing the externalities of the attention economy

The Economist has an excellent overview of the risks of the attention economy (11/4/17). The Gamergate connection is particularly good.

There is so much to say about all of the perverse consequences of funding the net through a tax on attention. I’m sure we don’t fully understand all of the implications; the reality may be even more grim than we know. It’s already grim enough though. So grim that the Russian assisted collapse of the US government has seized a fraction of our distracted attention.

It appears that most Americans are easily manipulated through modern meme-injectors like Facebook and Twitter. Vulnerability increases with lower education levels (among the privileged education is a rough proxy for cognition), but few are completely immune to distraction. We resemble a people who have never seen alcohol a few months after the whisky trade arrives.

If we believe the attention/ad-funded economy is the meme equivalent of fentanyl or tobacco, what do we do about it? There are lessons from managing addictive and health-destroying substances such as tobacco. It begins with taxation.

We tax cigarettes heavily. We can similarly tax net advertising. Our goal should be to increase the cost of online advertising several fold. We raise the cost until few advertisers can afford it. At that point Facebook has to turn to other revenue sources to maintain services — such as charging a yearly fee to users.
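As a back-of-envelope sketch of that mechanism (all numbers here are made up for illustration, not Facebook's actual figures): the yearly fee a service would need is roughly the ad revenue per user that the tax destroys.

```python
# Assumed, illustrative numbers only.
ad_revenue_per_user = 60.0    # yearly ad revenue per user, dollars (assumed)
advertisers_remaining = 0.25  # fraction of ad spend that survives a several-fold tax (assumed)

remaining_ad_revenue = ad_revenue_per_user * advertisers_remaining
yearly_fee_needed = ad_revenue_per_user - remaining_ad_revenue
print(f"Yearly user fee to fill the gap: ${yearly_fee_needed:.2f}")  # -> $45.00
```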

This is obviously not sufficient, but it’s a beginning.

Sunday, October 15, 2017

Understanding century 21 - IT, Globalization and urban-urban migration

In the 90s the world kind of made sense. Since then, not so much. I don’t know if teens truly are experiencing an anxiety epidemic, but any American growing up in the new millennium has reason to be anxious.

I think the root causes of our disruption are globalization (China and India) and information technology (AI, robots, advertising supported web, etc) leading to peak human/mass disability and the collapse of the GOP.

I’m now considering a third factor — namely urban-urban migration (though it may be a consequence of globalization and IT rather than a root cause). The population required to sustain a viable local economy keeps increasing; this is absolutely not what we expected when the net was young. Once a city of 10,000 was viable, then a city of 50,000, then a few hundred thousand. We seem to be heading towards a million as a baseline.

This is politically potent here because the structure of American government gives disproportionate power to low population density regions. The pain of these communities is politically consequential. This is usually described as a “rural” crisis, but these aren’t “rural” in the traditional sense. They are regions around large towns and small cities that are no longer economically viable.

I was a family medicine resident and a young physician in communities like these. Recent stories feel familiar — they remind me of my desolate drives along the Erie Canal and the IT driven end of the mill town. It’s a worldwide thing.

Humans have been migrating from rural areas to cities for centuries. It’s often been socially disruptive. It still is, particularly because of the way American government works. The dying regions have power, and as they lose their cognitive elite they are ever more desperate and easier to deceive.

See also

Saturday, February 04, 2017

Warfare goes to the elite

Once upon a time tens of thousands of New Yorkers moved paper from one file cabinet to another. Once upon a time there were jobs for strong bodies. Once upon a time you could be blue collar and middle class.

Once upon a time anyone could be a warrior …

Special Operations Troops Top Casualty List as U.S. Relies More on Elite Forces

… “We’ve moved out of the major combat operations business,” said Linda Robinson, a counterterrorism expert at the RAND Corporation. In recent years, she said, the military has effectively outsourced rank-and-file infantry duties to local forces in places like Afghanistan, Iraq and Syria, leaving only a cadre of highly skilled Americans to train troops and take out high-value targets…

Now the physical and cognitive elite dominate warfare. Automation and globalization — in this case drones and outsourcing to local infantry.

Trump didn’t come from nowhere.

Sunday, January 29, 2017

Crisis-T: what to do about the delusions and the lies.

I’ve been using “crisis-T” as a tag for our times. I used to think it was a bit melodramatic; that maybe T would somehow veer to the fantasies of Thiel and the like.

Welcome to week two of Crisis-T. A week in which I’ve started monitoring neo-soviet propaganda for clues to what Bannon, Flynn and Putin are thinking (the troika).

Emily and I are still working out how to respond to this. I hope the March for Science happens — I’d join that one way or another. We already subscribe to the NYT, The Atlantic, The New Yorker, and Talking Points — it’s essential to support journalism with hard coin. The 2018 congressional campaign has begun and we will be active (I’ll likely vote in the GOP primaries, more on that later). We’ve done our first of many ACLU and Planned Parenthood donations. Basically we more or less track what Scalzi is doing.

I write and tweet of course, but that’s more therapeutic than useful. It does mean though that I run into some of the issues that real journalists face. Like how to approach the maelstrom of lies and delusions that Bannon and Trump produce. On the one hand presidential speech is a form of action, it can’t be disregarded. On the other hand I’m beginning to worry Bannon is not a conventional idiot. He may have a real talent for strategic propaganda and effective distraction.

I wonder if we should treat the lie-stream like the weather.  Box it on the proverbial page 2 as Bannon-T lies and delusions of the day. Each lie-delusion is then listed with a contrasting statement of testable reality. Then the main pages can focus on even more important problems, like swapping the Joint Chiefs of Staff and Director of National Intelligence for Bannon on the principals committee of the National Security Council.

More as we figure this out …

Update Jan 30, 2017: Jeff Atwood has one of the best action lists I’ve seen.

Saturday, December 31, 2016

Crisis-T: blame it on the iPhone (too)

It’s a human thing. Something insane happens and we try to figure out “why now?”. We did a lot of that in the fall of 2001. Today I looked back at some of what I wrote then. It’s somewhat unhinged — most of us were a bit nuts then. Most of what I wrote is best forgotten, but I still have a soft spot for this Nov 2001 diagram …

Model 20010911

I think some of it works for Nov 2016 too, particularly the belief/fact breakdown, the relative poverty, the cultural dislocation, the response to modernity and changing roles of women, and the role of communication technology. Demographic pressure and environmental degradation aren’t factors in Crisis-T though.

More than those common factors I’ve blamed Crisis-T on automation and globalization reducing the demand for non-elite labor (aka “mass disability”). That doesn’t account for the Russian infowar and fake news factors though (“Meme belief=facts” and “communications tech” in my old diagram). Why were they so apparently influential? 

Maybe we should blame the iPhone …

Why Trolls Won in 2016 Bryan Mengus, Gizmodo

… Edgar Welch, armed with multiple weapons, entered a DC pizzeria and fired, seeking to “investigate” the pizza gate conspiracy—the debunked theory that John Podesta and Hillary Clinton are the architects of a child sex-trafficking ring covertly headquartered in the nonexistent basement of the restaurant Comet Ping Pong. Egged on by conspiracy videos hosted on YouTube, and disinformation posted broadly across internet communities and social networks, Welch made the 350-mile drive filled with righteous purpose. A brief interview with the New York Times revealed that the shooter had only recently had internet installed in his home….

…. the earliest public incarnation of the internet—USENET—was populated mostly by academia. It also had little to no moderation. Each September, new college students would get easy access to the network, leading to an uptick in low-value posts which would taper off as the newbies got a sense for the culture of USENET’s various newsgroups. 1993 is immortalized as the Eternal September when AOL began to offer USENET to a flood of brand-new internet users, and overwhelmed by those who could finally afford access, that original USENET culture never bounced back.

Similarly, when Facebook was first founded in 2004, it was only available to Harvard students … The trend has remained fairly consistent: the wealthy, urban, and highly-educated are the first to benefit from and use new technologies while the poor, rural, and less educated lag behind. That margin has shrunk drastically since 2004, as cheaper computers and broadband access became attainable for most Americans.

…  the vast majority of internet users today do not come from the elite set. According to Pew Research, 63 percent of adults in the US used the internet in 2004. By 2015 that number had skyrocketed to 84 percent. Among the study’s conclusions were that, “the most pronounced growth has come among those in lower-income households and those with lower levels of educational attainment” …

… What we’re experiencing now is a huge influx of relatively new internet users—USENET’s Eternal September on an enormous scale—wrapped in political unrest.

“White Low-Income Non-College” (WLINC) and “non-elite” are politically correct [1] ways of speaking about the 40% of white Americans who have IQ scores below 100. It’s a population that was protected from net exposure until Apple introduced the first mass market computing device in June of 2007 — and Google and Facebook made mass market computing inexpensive and irresistible.

And so it has come to pass that in 2016 a population vulnerable to manipulation and yearning for the comfort of the mass movement has been dispossessed by technological change and empowered by the Facebook ad-funded manipulation engine.

So we can blame the iPhone too.

- fn -

[1] I think, for once, the term actually applies.

Tuesday, December 20, 2016

Save America. Vote GOP.

In the real world HRC is President and the GOP is beginning a painful reform process that will lead to a far better conservative party and a healthy American democracy.

In our consensus hallucination a walking tire fire is President, the GOP is further from reform than ever, and smart Dems are reading Josh Marshall’s advice. Oh, and the wake-up button isn’t working.

While we’re waiting for wakefulness we might as well come up with a plan or two. Plan one is to address the root cause of non-college misery. That will be useful if we survive (hint: avoid war with China) to get a sane government again.

Plan two is about getting a sane government. Towards that end we need to save the GOP from its addiction to the unreal. Unreality is a dangerous drug, after decades of abuse the GOP is in desperate need of rehab …

From Tabloids to Facebook: the Reality Wars (revised from my original)

I’ve been thinking about Russia’s successful hacking of the 2016 US election. It shouldn’t be seen in isolation.

It should be understood as part of the ancient human struggle with delusion and illusion — the reality wars.

In the US the reality wars were once bipartisan; each party struggled to separate fact from fantasy. Over the past few decades the GOP stopped fighting, they embraced the unreal. From Reagan to Gingrich to the Tea Party to Trump. By the 21st century we began seeing books like “The Republican War on Science”.

Unreality spread like a virus. AM talk radio was infested. Then came Drudge and Fox. Later Breitbart and finally the Facebook fake news stream. From the Clinton “murders” to birtherism to child pizza porn slaves.

This wasn’t bipartisan. The anti-reality meme, a core historic component of fascism, became concentrated in the GOP. Russia jumped on board, but Russia is more of a plague carrier than an intelligent agent. They lost their reality-war in the 90s. All their news is unreal now. Putin, like Trump, takes the fakes.

Trump’s victory is a triumph of the unreal. Of Will, I suppose. Now it threatens us all.

The rebellion against reason, against the perception of the real, is old. It’s a core component of fascism, but it’s much older than fascism. The Enlightenment was a setback for the unreal, but it wasn’t a final defeat. Now, in our troubled 3rd millennium, anti-reason is strong. It has taken over Russia. It has taken over the GOP, and Trump’s GOP has taken over America.

Somehow we have to rescue the GOP from its addiction to the unreal. That would be hard if it had been defeated. Now it seems impossible.

But there is a way. We need to vote GOP.

Vote GOP … in the primaries that is. In my home of Minnesota the Dem contenders are all pretty reasonable. I can send some money and volunteer to support the party, but my primary/caucus vote isn’t needed. On the other hand, the Minnesota GOP has lots of reality denialists running for office. I can use my primary vote to favor relatively sane GOP contenders.

If even half of Dems vote GOP in primaries we can ally with sane conservatives to pull the GOP back from the brink. Yes, there are a few sane conservatives. They are a dying breed, but there is room to ally with them here.

Then, in the election, we vote Dem. If America is lucky the Dems win. If America is unwise the GOP wins — but it’s a saner GOP. A setback, but not a catastrophe.

Work for a sane GOP. As a good Dem, vote GOP.