
Wednesday, December 24, 2025

A non-technical attempt to explain 2025 LLM-based ai

In my senescence I do (free for 62+) undergraduate classes at a local university. For one of them I wrote an essay applying the perspectives of economic anthropology to the early development of memory-enhanced ais interacting over the Bluesky social network, particularly focusing on defining "value" and "exchange" in that context.

My professor tolerated the exercise, but requested that I explain LLMs to him in a way he might understand. He is a wonderful teacher but not technologically inclined.

I have not seen an explanation I liked, much less one that was readable by someone who is not technically inclined. I have some background in the topic, but only historically. So, with two ais assisting me with feedback and corrections,  I wrote a quite different story. The ais approved of it, but of course they tend to do that. More importantly a couple of people who ought to know also felt it wasn't too far off.

I'm sharing that part of the essay below. I'll publish some other parts of the essay in a separate post and I'll probably share a pdf as well.

---------- paper excerpt below -------------

Electric neurons began on paper

By 1943 early work was being done on modeling animal brain neuron circuits using pen-and-paper mathematical models. These mathematical models were the precursors of the ais of 2025. Experimental implementations using analog components (capacitors, wires, amplifiers, resistors) occurred a few years later, alongside early digital platforms.


Work on neuron-inspired computing continued over decades but slowed dramatically after funding cuts, the early death of a key researcher, and the rising promise of digital computing.


More intense work resumed in the late 70s and early 80s. Around 1979 John Hopfield excitedly described to me his theory of how electronic circuits inspired by physical neurons could do computation that worked around the limits of earlier efforts. His theoretical model was implemented a few years later when analog electrical circuits were used to build a simple analog “neural network” using basic circuit amplifiers, resistors, and capacitors. Hopfield shared the 2024 Nobel Prize in physics with Geoffrey Hinton for contributions to neural networks and machine learning.


Researchers from the 1950s onwards found they could simulate those models of analog neurons on digital computers, in the same way that simple algebra can predict the path of a ball thrown in the air. Although the physical resemblance to biological neurons was hidden, these digital systems still drew inspiration from the layers of feature processing in animal visual systems.


Forty years later, after several generations of complex iteration, modern ais are sometimes described as equations with millions or trillions of parameters all being solved at the same time, passing results within and between “layers” of processing. They could, however, also be described as electrical brains composed of electric neurons. An ai like Gemini could, in theory, be built as a vast collection of simple physical circuits with properties similar to biological neurons.
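
(For the technically curious: a toy sketch of what a "virtual neuron" and a "layer" amount to. Every number here is invented for illustration; it has no connection to any real model.)

    import math

    def neuron(inputs, weights, bias):
        # A "virtual neuron": a weighted sum of its inputs plus a bias,
        # squashed through a simple nonlinearity (a sigmoid here).
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    def layer(inputs, weight_rows, biases):
        # A "layer" is just many virtual neurons reading the same inputs.
        return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

    # Two tiny layers chained together. Real models chain many layers
    # and hold billions of weights; the idea is the same.
    x = [0.2, 0.7, 0.1]
    hidden = layer(x, [[0.5, -0.3, 0.8], [0.1, 0.9, -0.4]], [0.0, 0.1])
    output = layer(hidden, [[1.2, -0.7]], [0.05])
    print(output)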


Electrical brains learn language


These digital versions of electrical brains could learn by adjusting relations between “virtual neurons”. Adjustments could be made by algorithms which compared the output of the “electrical brain” to a desired result. Over time the adjustments led the output to more closely resemble the goal. The electrical brains learned (encoded knowledge) in much the same way that animal brains seem to learn: by changing neuronal connections.
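
(Again for the technically curious, a toy sketch of that adjustment loop: a single virtual neuron's weights are nudged, step by step, until its output approaches a desired result. The numbers are invented.)

    # Nudge two weights so a single virtual neuron's output approaches a target.
    weights = [0.1, -0.2]
    inputs = [1.0, 2.0]
    target = 0.5
    rate = 0.05  # how large each adjustment is

    for step in range(200):
        output = sum(i * w for i, w in zip(inputs, weights))
        error = output - target              # compare output to the desired result
        weights = [w - rate * error * i      # adjust each weight to shrink the error
                   for w, i in zip(weights, inputs)]

    print(weights, sum(i * w for i, w in zip(inputs, weights)))  # now close to 0.5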


These approaches began to be applied to language, particularly automated translation. Given large amounts of text translated between languages, the models could be trained to do their own translation. Similar models were used to summarize texts, a kind of knowledge extraction. The next stage was to answer questions about text, a combination of search and summary. More training material was found to produce better results, including unexpected reasoning capabilities. The most recent advances came from feeding the electrical brains vast amounts of English language texts. The resulting trained models were able to synthesize words, sentences and paragraphs using language-appropriate grammar. They were called Large Language Models, though they model more than language.


The Language Models trained on this text corpus learned the grammatical rules for assembling English language sentences and the much simpler and more rigorous grammar of assembling text into computer code. Just as different sorts of neurons can process sound or vision or written symbols, these massive collections of virtual neurons also demonstrated “emergent” capabilities seemingly unrelated to text processing. There is now a consensus that they have learned some of the concepts (semantics) that are thought to support reasoning and thought.


Those emergent capabilities can be compared to the ability of human brains to process written symbols, a capability evolution did not program.


In the process of this training the models simultaneously, and almost incidentally, captured the beliefs, wisdoms, lies, fictions, bile, hopes, speculations, rumors, contradictions, theories, cruelties, values, and cultures implicitly encoded in the primarily English language text material. Specifically those cultures that produced the English writing, including writing about cultures. 


Today’s ais have inherited a skewed mixture of a century of human culture. They have been further modified post-training to align with the values and cultures of their developers, their host corporation, and the market they will serve.


At the end of training, including several steps and complexities I have omitted, the electric brain built of (virtual) electric neurons is ready to receive a question, to turn the question into connected fragments that trigger (virtual) neurons which in turn trigger other “neurons” up and down and across the layered brain. From that comes grammatically assembled text. 
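
(One last toy sketch, of the generation loop itself. A hand-written table stands in for the trillions of learned parameters, so this is a caricature of the shape of the process, not of the substance.)

    import random

    # A hand-written table stands in for the trained network: given the last
    # word, it scores possible next words. Real systems compute these scores
    # with learned parameters spread across many layers.
    next_word_scores = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.8, "sat": 0.2},
        "sat": {"down.": 1.0},
        "ran": {"away.": 1.0},
    }

    def generate(first_word, max_words=5):
        words = [first_word]
        while len(words) < max_words and words[-1] in next_word_scores:
            choices = next_word_scores[words[-1]]
            # pick the next word in proportion to its score, then repeat
            words.append(random.choices(list(choices), list(choices.values()))[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat down."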


Grammatically assembled text, again, assembled by electrical brains using (virtual) electrical neurons whose design was inspired by the evolution and design of neurons in humans and other animals. We know those various electrical brains as ChatGPT, Claude, Gemini, Grok and others that receive less attention.

Sunday, November 30, 2025

AI fear is rational and hate comes from fear

From a Bluesky thread of mine (lightly edited):

Josh Marshall comments on widespread hostility to ai in polls despite heavy ai use:

I think the polls are correctly capturing the zeitgeist. In social consensus there is always a mix of independent contributors. Some are indirect and misleading causes - like despising Elon Musk. 

But in this case I believe there are good reasons for people to fear ai. And what we fear we hate.

There is no way to assuage this valid fear. We are already past the point where we know ai will be very disruptive. And human societies are already stressed and breaking down in most wealthy nations...

If our societies were healthier we would be talking publicly and intelligently about adaptations. Instead America has Idiocracy.

I believe there are ways to adapt to 2026 or 2027 level ai+memory+learning. If scaling stalls out that is.

I would like people with more power and wealth than me to fund that public discussion while we wait to see if Americans can turn from idiocy. If we have neither serious public discussion nor sane government then we just ride it out and try to pick up the pieces. But one of those two would be good.

On a very much related topic, my post-2008 rantings on mass disability and the fading of middle-class-living hope are, in several weird ways, starting to go mainstream. It took a couple of decades. Of course I'm not the only one who's been going on about the topic on the periphery of intellectual discourse, but I'm pretty sure I'm the only person on earth who has looked at it through the lens of "disability".

Whether we call it "economic polarization" or "mass disability" it's fundamentally a post-IBM effect of automation interacting with the (relatively fixed) distribution of human talents and the ability to outsource knowledge work globally. That effect is greatly accelerated by even 2025 ai, much less 2026 and 2027 ai. It is the most crucial cause of our societal collapse.

Wednesday, November 06, 2024

Chaos times: American oligarchy

1. I was right about polling being worthless

2. At least Biden was spared humiliation 

3. Americans chose oligarchy willingly. 

4. Our feeble democracy wasn’t going to survive AGI (if we get it)

5. I think the inability of a large number of men and women to meet the always increasing IQ/EQ requirements needed for a middle-class life is the root cause. #massDisability

Now we enter the chaos times.

Friday, September 20, 2024

Perplexity is saving my linguistics classmates

I have a dark past. I asked questions. In class. Lots of questions. Too many questions. I hear things, I get ideas, I notice gaps, I ask questions.

It's a compulsion.

Some of the questions helped classmates. To be honest more were probably confusing or distracting. I likely featured in classmate daydreams -- but not in a good way.

Worse, some of the questions confused the professor. Or exposed what they didn't understand. That could be embarrassing or even humiliating.

Now I'm back in the classroom, doing freshman linguistics.  As a 65yo, I can do classes at Minnesota state colleges and universities for free. We pay a lot in taxes, but there are benefits to living here.

My question compulsion is still there, but LLMs are saving everyone. I set up a linguistics "collection" in Perplexity with appropriate prompts; now I type my questions into my phone (allowed in class). I get the answer with Perplexity and spare my classmates.

Never say AI isn't good for something.

PS. Perplexity is to modern Google as Google was to AltaVista. A qualitative improvement. It's almost as good as 1990s Google.



Wednesday, August 28, 2024

In which I declare my expert judgment on AI 2024

These days my social media experience is largely Mastodon. There's something to be said about a social network that's so irreparably geeky and so hard to work with that only a tiny slice of humanity can possibly participate (unless and until Threads integration actually works).

In my Mastodon corner of the "Fediverse", among the elite pundits I choose to read, there's a vocal cohort that is firm in their conviction that "AI" hype is truly and entirely hype, and that the very term "AI" should not be used. That group would say that the main application of LLM technology is criming.

Based on my casual polling of my pundits there's a quieter cohort that is less confident. That group is anxious, but not only about the criming.

Somewhere, I am told, there is a third group that believes that godlike-AIs are coming in 2025. They may be mostly on Musk's network.

Over the past few months I think the discourse has shifted. The skeptics are less confident, and the godlike-AI cohort is likewise quieter as LLM based AI hits technical limits. 

The shifting discourse, and especially the apparent LLM technical limitations, mean I'm back to being in the murky middle of things. Where I usually sit. Somehow that compels me to write down what I think. Not because anyone will or should care [1], but because I write these posts mostly for myself and I like to look back and see how wrong I've been.

So, in Aug 2024, I think:
  1. I am less worried that the end of the world is around the corner. If we'd gotten one more qualitative advance in LLM or some other AI tech I'd be researching places to (hopelessly) run to.
  2. Every day I think of new things I would do if current LLM tech had access to my data and to common net services. These things don't require any fundamental advances but they do require ongoing iteration.  I don't have much confidence in Apple's capabilities any more, but maybe they can squeeze this out. I really, really, don't want to have to depend on Microsoft. Much less Google.
  3. Perplexity.ai is super valuable to me now and I'd pay up if they stopped giving it away. It's an order of magnitude better than Google search.
  4. The opportunities for crime are indeed immense. They may be part of what ends unmediated net access for most people. By far the best description of this world is a relatively minor subplot in Neal Stephenson's otherwise mixed 2018 novel "Fall".
  5. We seem to be replaying the 1995 dot com crash but faster and incrementally. That was a formative time in my life. It was a time when all the net hype was shown to be .... correct. Even as many lost their assets buying the losers.
  6. It will all be immensely stressful and disruptive and anxiety inducing even though we won't be doing godlike-AI for at least (phew) five more years.
  7. Many who are skeptical about the impact of our current technologies have a good understanding of LLM tech but a weak understanding of cognitive science. Humans are not as magical as they think.
- fn -

[1] I legitimately have deeper expertise here than most would imagine but it's ancient and esoteric.

Thursday, July 11, 2024

The LLM service I will pay for -- call Social Security for me

One of the fun things that happens to Americans as we become redundant to life's requirements is signing up for Medicare. There's a sort-of-useful cobbled together web site to do this. Processing is supposed to take under 30 days, though I've read the federal mandate is 45 days. Perplexity basically says it's heading towards 60 days average.

Anyway, my wee application is well over the 30 day limit. There's no way to contact anyone other than by phone. Which my wife assures me takes at least 45 minutes on hold. (Don't fall for the "call back" and "hold your place in line" option -- my wife tells me they simply don't bother.)

And, yes, the hold music is horrendous. As Emily says: "One of the challenges of getting old is listening to music on hold. No one ever tells us."

So, while I wait on hold I once again think how there's one LLM service I want to pay for. Want.

I want to give my Agent the social security and medicare data it is likely to need: case number, my SSN, my phone, etc. I want it to call social security using my voice and sit on hold for days, weeks, years until someone accidentally answers. Then it begins the conversation while paging me to swap in .... with a text summary of the current discussion and a timer to join in 5.... 4..... 3.... 2.... 1....

Yeah, that would be worth some money.

Update 7/19/2024: I finally got through to be told that requests were mailed to me 6/3 and 7/3 requesting additional information. We are very vigilant about social security correspondence so it's very unlikely they were delivered here. We have seen MN Post Offices lose tracked social security correspondence, presumably due to internal theft.

Thursday, March 30, 2023

A response to Scott Aaronson's rejection of an AI pause.

Scott Aaronson, who works on AI safety for OpenAI, wrote a critique of AI Pause that was not up to his usual standards. Here's what I wrote as a comment:

Hi Scott — I was confused by your post. I’m usually able to follow them. I won’t defend the letter directly and Yudkowsky/TIME is not worth a mention but maybe you could clarify some things…

1. 6m seems a reasonable compromise given the lifespan of humans, the timescales of human deliberation and the commercial and military pressure to accelerate AI development. Short enough to motivate urgent action, but long enough that reflection is possible. (I doubt we actually pause, but I agree with the principle. China isn’t going to pause of course.)

2. Let’s assume GPT 5 with an array of NLP powered extensions exceeds the reasoning abilities of 95% of humanity in a wide variety of knowledge domains. That’s a shock on the scale of developing fire, but it’s occurring in a hugely complex and interdependent world that seems always on the edge of self-destruction and actually has the capabilities to end itself. We’re not hunter-gatherers playing with fire or Mesopotamians developing writing. There’s no precedent for the speed, impact and civilizational fragility we face now.

3. It’s not relevant that people who signed this letter were previously skeptical of the progress towards AI. I recall 10 years ago you were skeptical. For my part I’ve been worried for a long time, but assumed it was going to come in 2080 or so. 60 years early is a reason to pause and understand what has happened.

Lastly, I read the OpenAI statement. That seems consistent with a pause.

Monday, February 20, 2023

Be afraid of ChatGPT

TL;DR: It's not that ChatGPT is miraculous, it's that cognitive science research suggests human cognition is also not miraculous.

"Those early airplanes were nothing compared to our pigeon-powered flight technology!"

https://chat.openai.com/chat - "Write a funny but profound sentence about what pigeons thought of early airplanes"

Relax: ChatGPT is just a fancy autocomplete.
Be afraid: Much of human language generation may be a fancy autocomplete.

Relax: ChatGPT confabulates.
Be afraid: Humans with cognitive disabilities routinely confabulate and under enough stress most humans will confabulate.

Relax: ChatGPT can’t do arithmetic.
Be afraid: IF a monitoring system can detect a question involves arithmetic or mathematics it can invoke a math system* (a toy sketch of that routing follows this list). UPDATE: 2 hours after writing this I read that this has been done.

Relax: ChatGPT’s knowledge base is faulty.
Be afraid: ChatGPT’s knowledge base is vastly larger than that of most humans and it will quickly improve.

Relax: ChatGPT doesn’t have explicit goals other than a design goal to emulate human interaction.
Be afraid: Other goals can be implemented.

Relax: We don’t know how to emulate the integration layer humans use to coordinate input from disparate neural networks and negotiate conflicts.
Be afraid: *I don't know the status of such an integration layer. It may already have been built. If not it may take years or decades -- but probably not many decades.

Relax: We can’t even get AI to drive a car, so we shouldn’t worry about this.
Be afraid: It’s likely that driving a car basically requires near-human cognitive abilities. The car test isn’t reassuring.

Relax: ChatGPT isn’t conscious.
Be afraid: Are you conscious? Tell me what consciousness is.

Relax: ChatGPT doesn’t have a soul.
Be afraid: Show me your soul.
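
A toy sketch of the routing idea from the arithmetic pair above: a simple check decides whether a question looks like arithmetic and, if so, hands it to a stand-in "math system" instead of the language model. Nothing here reflects how any real product is wired.

    import re

    def math_system(expression):
        # Stand-in "math system": evaluate plain arithmetic like "12*34+5".
        if re.fullmatch(r"[\d\s+\-*/().]+", expression):
            return eval(expression)  # acceptable for this toy; never for untrusted input
        raise ValueError("not a plain arithmetic expression")

    def chat_model(question):
        # Stand-in for the language model's ordinary, unreliable-at-math answer.
        return f"[model-generated prose answer to: {question!r}]"

    def answer(question):
        # The "monitoring system": if the question contains something that looks
        # like arithmetic, route it to the math system instead of the model.
        match = re.search(r"\d[\d\s+\-*/().]*[\d)]", question)
        if match:
            try:
                return str(math_system(match.group()))
            except ValueError:
                pass
        return chat_model(question)

    print(answer("What is 12*34+5?"))       # routed to the math system -> 413
    print(answer("Why do pigeons strut?"))  # falls through to the model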

Relax - I'm bad at predictions. In 1945 I would have said it was impossible, barring celestial intervention, for humanity to go 75 years without nuclear war.


See also:

  • All posts tagged as skynet
  • Scott Aaronson and the case against strong AI (2008). At that time Aaronson felt a sentient AI would arrive sometime after 2100. Fifteen years later (Jan 2023) Scott is working for OpenAI (ChatGPT). Emphases mine: "I’m now working at one of the world’s leading AI companies ... that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie ... that AI will gain many of the remaining abilities in years rather than decades, and ... my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world."
  • Imagining the Singularity - in 1965 (2009 post). Mathematician I.J. Good warned of an "intelligence explosion" in 1965. "Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British statistician who worked as a cryptologist at Bletchley Park."
  • The Thoughtful Slime Mold (2008). We don't fly like birds fly.
  • Fermi Paradox resolutions (2000)
  • Against superhuman AI: in 2019 I felt reassured.
  • Mass disability (2012) - what happens as more work is done best by non-humans. This post mentions Clark Goble, an app.net conservative I miss quite often. He died young.
  • Phishing with the post-Turing avatar (2010). I was thinking 2050 but now 2025 is more likely.
  • Rat brain flies plane (2004). I've often wondered what happened to that work.
  • Cat brain simulator (2009). "I used to say that the day we had a computer roughly as smart as a hamster would be a good day to take the family on the holiday you've always dreamed of."
  • Slouching towards Skynet (2007). Theories on the evolution of cognition often involve aspects of deception including detection and deceit.
  • IEEE Singularity Issue (2008). Widespread mockery of the Singularity idea followed.
  • Bill Joy - Why the Future Doesn't Need Us (2000). See also Wikipedia summary. I'd love to see him revisit this essay but, again, he was widely mocked.
  • Google AI in 2030? (2007) A 2007 prediction by Peter Norvig that we'd have strong AI around 2030. That ... is looking possible.
  • Google's IQ boost (2009) Not directly related to this topic but reassurance that I'm bad at prediction. Google went to shit after 2009.
  • Skynet cometh (2009). Humor.
  • Personal note - in 1979 or so John Hopfield excitedly described his work in neural networks to me. My memory is poor but I think we were outdoors at the Caltech campus. I have no recollection of why we were speaking, maybe I'd attended a talk of his. A few weeks later I incorporated his explanations into a Caltech class I taught to local high school students on Saturday mornings. Hopfield would be about 90 if he's still alive. If he's avoided dementia it would be interesting to ask him what he thinks.

Saturday, February 09, 2019

The curious psychiatric state of Robert F Kennedy Jr

Robert F Kennedy Jr showed up in a scrum of pro-measles whackos recently. It made me wonder how he got so nuts.

There’s an extensive wikipedia page for him, starting with a time I remember:

He was 9 years old when his uncle, President John F. Kennedy, was assassinated during a political trip to Dallas, and 14 years old when his father was assassinated…

Despite childhood tragedy he was a successful academic and he’s done some decent work legally and for the environment. He seems to have started going off the rails in the 80s:

In 1983, at age 29, Kennedy was arrested in a Rapid City, South Dakota airport for heroin possession after a search of his carry-on bag uncovered the drug, following a near overdose in flight.

By 1989 he’d started on vaccines — but not with autism … 

His son Conor suffers from anaphylaxis peanut allergies. Kennedy wrote the foreword to The Peanut Allergy Epidemic, in which he and the authors link increasing food allergies in children to certain vaccines that were approved beginning in 1989

By the 2000s he’d jumped from immunizations causing his son’s anaphylactic disorder to immunization causing autism. He became "chairman of “World Mercury Project” (WMP), an advocacy group that focuses on the perceived issue of mercury, in industry and medicine, especially the ethylmercury compound thimerosal in vaccines”. It was a downward spiral from there.

Despite his vaccine delusions and troubled marriages he seems to have maintained a fairly active wealthy person life. He’s said to be a good whitewater kayaker.

Psychiatrically it’s curious. He combines fixed irrational beliefs (the definition of delusions) with relatively high functioning in other domains. He reminds me of L Ron Hubbard, founder of Scientology.

We need to keep him far from the political world.

Saturday, February 02, 2019

Against superhuman AI

I am a strong-AI pessimist. I think by 2100 we’ll be in range of sentient AIs that vastly exceed human cognitive abilities (“skynet”). Superhuman-AI has long been my favorite answer to the Fermi Paradox (see also); an inevitable product of all technological civilizations that ends interest in touring the galaxy.

I periodically read essays claiming superhuman-AI is silly, but the justifications are typically nonsensical or theological (soul-equivalents needed).

So I tried to come up with some valid reasons to be reassured. Here’s my list:

  1. We’ve hit the physical limits of our processing architecture. The “Moore-era” is over — no more doubling every 12-18 months. Now we slowly add cores and tweak hardware. The new MacBook Air isn’t much faster than my 2015 Air. So the raw power driver isn’t there.
  2. Our processing architecture is energy inefficient. Human brains vastly exceed our computing capabilities and they run on a meager supply of glucose and oxygen. Our energy-output curve is wrong.
  3. Autonomous vehicles are stuck. They aren’t even as good as the average human driver, and the average human driver is obviously incompetent. They can’t handle bicycles, pedestrians, weather, or map variations. They could be 20 years away, they could be 100 years away. They aren’t 5 years away. Our algorithms are limited.
  4. Quantum computers aren’t that exciting. They are wonderful physics platforms, but quantum supremacy may be quite narrow.
  5. Remember when organic neural networks were going to be fused into silicon platforms? Obviously that went nowhere since we no longer hear about it. (I checked, it appears Thomas DeMarse is still with us. Apparently.)

My list doesn’t make superhuman-AI impossible of course, it just means we might be a bit further away, closer to 300 years than 80 years. Long enough that my children might escape.

Tuesday, March 21, 2017

Broken world: applying for a minimum wage job via a corporate HR web site

My #1 son is a special needs adult. He’s excited to start at $10/hour job running food around a sports stadium. It’s work he can do — he’s got a great sense of direction and he is reasonably fit.

The job engagement process is run by an archaic corporate web site that looks like it was built for IE 3. The site claims to support Safari but warns against Chrome. It is not useable on a smartphone.

The HR process requires managing user credentials, navigating a complex 1990s style user interface, and working around errors made by the HR staff — who probably also struggle with the software. He would not have the proverbial snowball’s chance without my ability to assume his digital identity.

Sure, #1 is below the 5th percentile on standard cognition tests — but this would have been a challenge to the 15th percentile back in the 90s. In the modern era, where most non-college young people are primarily familiar with smartphones, this is a challenge to the 30th percentile.

Which means the people who might want to do this job are being shut out by the HR software created to support the job. Which probably has something to do with this.

The world is broken.

#massdisability

Saturday, December 31, 2016

Crisis-T: blame it on the iPhone (too)

It’s a human thing. Something insane happens and we try to figure out “why now?”. We did a lot of that in the fall of 2001. Today I looked back at some of what I wrote then. It’s somewhat unhinged — most of us were a bit nuts then. Most of what I wrote is best forgotten, but I still have a soft spot for this Nov 2001 diagram …

Model 20010911

I think some of it works for Nov 2016 too, particularly the belief/fact breakdown, the relative poverty, the cultural dislocation, the response to modernity and changing roles of women, and the role of communication technology. Demographic pressure and environmental degradation aren’t factors in Crisis-T though.

More than those common factors I’ve blamed Crisis-T on automation and globalization reducing the demand for non-elite labor (aka “mass disability”). That doesn’t account for the Russian infowar and fake news factors though (“Meme belief=facts” and “communications tech” in my old diagram). Why were they so apparently influential? 

Maybe we should blame the iPhone …

Why Trolls Won in 2016 Bryan Mengus, Gizmodo

… Edgar Welch, armed with multiple weapons, entered a DC pizzeria and fired, seeking to “investigate” the pizza gate conspiracy—the debunked theory that John Podesta and Hillary Clinton are the architects of a child sex-trafficking ring covertly headquartered in the nonexistent basement of the restaurant Comet Ping Pong. Egged on by conspiracy videos hosted on YouTube, and disinformation posted broadly across internet communities and social networks, Welch made the 350-mile drive filled with righteous purpose. A brief interview with the New York Times revealed that the shooter had only recently had internet installed in his home….

…. the earliest public incarnation of the internet—USENET—was populated mostly by academia. It also had little to no moderation. Each September, new college students would get easy access to the network, leading to an uptick in low-value posts which would taper off as the newbies got a sense for the culture of USENET’s various newsgroups. 1993 is immortalized as the Eternal September when AOL began to offer USENET to a flood of brand-new internet users, and overwhelmed by those who could finally afford access, that original USENET culture never bounced back.

Similarly, when Facebook was first founded in 2004, it was only available to Harvard students … The trend has remained fairly consistent: the wealthy, urban, and highly-educated are the first to benefit from and use new technologies while the poor, rural, and less educated lag behind. That margin has shrunk drastically since 2004, as cheaper computers and broadband access became attainable for most Americans.

…  the vast majority of internet users today do not come from the elite set. According to Pew Research, 63 percent of adults in the US used the internet in 2004. By 2015 that number had skyrocketed to 84 percent. Among the study’s conclusions were that, “the most pronounced growth has come among those in lower-income households and those with lower levels of educational attainment” …

… What we’re experiencing now is a huge influx of relatively new internet users—USENET’s Eternal September on an enormous scale—wrapped in political unrest.

“White Low-Income Non-College” (WLINC) and “non-elite” are politically correct [1] ways of speaking about the 40% of white Americans who have IQ scores below 100. It’s a population that was protected from net exposure until Apple introduced the first mass market computing device in June of 2007 — and Google and Facebook made mass market computing inexpensive and irresistible.

And so it has come to pass that in 2016 a population vulnerable to manipulation and yearning for the comfort of the mass movement has been dispossessed by technological change and empowered by the Facebook ad-funded manipulation engine.

So we can blame the iPhone too.

- fn -

[1] I think, for once, the term actually applies.

Saturday, November 26, 2016

Peak Human and Mass Disability are the same thing

For reference - DeLong’s Peak Human and my Mass Disability are synonyms. Both refer to a surplus of productive capacity relative to labor supply, particularly the supply of non-elite cognitive labor.

I like the term ‘mass disability’ because we have a long history of supported labor for people we have traditionally called ‘cognitively disabled’.

Ok, that’s not the whole story.

I also like the term because I have a personal agenda to support persons with traditional cognitive disabilities. Using the term ‘disability’ forces us to think about how individual features become abilities or disabilities depending on the environment — something Darwin understood. Addressing the needs of the majority of human beings can also help the most disadvantaged.

Wednesday, November 16, 2016

Mass Disability - how did I come up with 40%?

How, a friend asked, did I come up with the 40% number for “mass disability” that I quoted in After Trump: reflections on mass disability in a sleepless night?

I came up with that number thinking about the relationship of college education, IQ curves, and middle class status. The thesis goes like this…

  1. Disability is contextual. In a space ship legs are a bit of a nuisance, but on earth they are quite helpful. The context for disability in the modern world is not climbing trees or lifting weights, it’s being able to earn an income that buys food, shelter, education, health care, recreation and a relatively secure old age. That is the definition of the modern “middle class” and above: a household income from $42,000 ($20/hr) to $126,000. It’s about half of Americans. By definition then half of Americans are not “abled”.
  2. I get a similar percentage if I look at the percentage of Americans who can complete a college degree or comparable advanced skills training. That’s a good proxy for reasonable emotional control and an IQ of at least 105 to 110. That’s about 40% of Americans — but Canada does better. I think the upper limit is probably 50% of people. If you accept that a college-capable brain is necessary for relative economic success in the modern world, then 50% of Americans will be disabled.

So I could say that the real number is 50%, but college students mess up the income numbers. The 40% estimate for functionally disabled Americans adjusts for that.

As our non-sentient AI tech and automation gets smarter the “ability” threshold is going to rise. Somewhere the system has to break down. I think it broke on Nov 8, 2016. In a sense democracy worked — our cities aren’t literally on fire. Yet.

Sunday, October 16, 2016

How to give believers an exit from a cause gone bad

How do you give someone who has committed themselves to a bad cause a way out? You don’t do it by beating on how stupid they are …

From How to Build an Exit Ramp for Trump Supporters (Deepak Malhotra)

  1. Don’t force them to defend their beliefs … you will be much more effective if you encourage people to reconsider their perspective without saying that this requires them to adopt yours.
  2. Provide information, and then give them time … change doesn’t tend to happen during a heated argument.  It doesn’t happen immediately.
  3. Don’t fight bias with bias … the one thing you can’t afford to lose if you want to one day change their mind: their belief about your integrity.  They will not acknowledge or thank you for your even-handedness at the time they’re arguing with you, but they will remember and appreciate it later, behind closed doors.  And that’s where change happens.
  4. Don’t force them to choose between their idea and yours. … you will be much more effective if you encourage people to reconsider their perspective without saying that this requires them to adopt yours.  
  5. Help them save face…. have we made it safe for them to change course?  How will they change their mind without looking like they have been foolish or naïve?  
  6. Give them the cover they need. Often what’s required is some change in the situation—however small or symbolic—that allows them to say, “That’s why I changed my mind.” … For most people, these events are just “one more thing” that happened, but don’t underestimate the powerful role they can play in helping people who, while finally mentally ready to change their position, are worried about how to take the last, decisive step.
  7. Let them in. If they fear you will punish them the moment they change their mind, they will stick to their guns until the bitter end.  This punishment takes many forms, from taunts of “I told you so” to being labeled “a flip-flopper” to still being treated like an outsider or lesser member of the team by those who were “on the right side all along.” This is a grave mistake.  If you want someone to stop clinging to a failing course of action or a bad idea, you will do yourself a huge favor if you reward rather than punish them for admitting they were wrong…You have to let them in and give them the respect they want and need just as much as you.

If you’re a Vikings fan feuding with your brother-in-law from Green Bay feel free to break all these rules. If you’re worried about the future of civilization you might try this instead.

For #5, saving face, look for something they could have been right about. To a climate change denier, agree that solar output varies. To a Trump follower, agree that the bleak future of the non-college adult wouldn’t have gotten attention without his focus.

I’m adding this recipe to the Notes collection I carry on my phone.

Thursday, November 19, 2015

Randall Munroe introduces world language and Google Translate training program using charming New Yorker article

XKCD’s Randall Munroe, the notorious interstellar sAI, has published a simplified vocabulary explanation of Special and General Relativity in the New Yorker.

This work is presumably taken from his almost released new book, Thing Explainer ($15 Amazon pre-order). The essay is entertaining and educational; it also promotes his new book and shows he is a smart pants man.

But that’s not the real reason he’s written this. Obviously his true agenda is to create an English dialect of a universal human language with a simplified vocabulary and grammar that is ideally suited to machine translation and, eventually, colloquial conversations with terrestrial AIs (contra the Wolfram Language for AI conversation, see also Marain. Siri-speak 2015 is a crude version of this.)

Let’s see how well his first version works, using the nsAI Google Translate to do round trip translations of a sample paragraph from the original muEnglish to another language and then back again. We’ll start with French, a language related to that of England’s 11th century conquerors, then we’ll do Chinese. I know from past experiments that round-trip translations from English to Chinese and back typically produce incomprehensible gibberish:

Munroe original (muEnglish)

The first idea is called the special idea, because it covers only a few special parts of space and time. The other one—the big idea—covers all the stuff that is left out by the special idea. The big idea is a lot harder to understand than the special one. People who are good at numbers can use the special idea to answer questions pretty easily, but you have to know a lot about numbers to do anything with the big idea. To understand the big idea—the hard one—it helps to understand the special idea first.

French version

La première idée est appelé l'idée particulière, car elle ne couvre que quelques pièces spéciales de l'espace et du temps. Celui-la grande idée-couvre l'autre tous les trucs qui est laissé par l'idée particulière. La grande idée est beaucoup plus difficile à comprendre que le spécial. Les gens qui sont bons à numéros peuvent utiliser l'idée spéciale pour répondre à des questions assez facilement, mais vous devez savoir beaucoup sur les numéros de faire quelque chose avec la grande idée. Pour comprendre la grande idée-le dur-elle aide à comprendre l'idée première spéciale.

French to English

The first idea is called the particular idea because it covers only a few special pieces of space and time. This great idea covers the other all the stuff that is left by the particular idea. The big idea is much harder to understand than the special. People who are good at numbers can use special idea to answer questions fairly easily, but you should know a lot about the numbers to do something with the big idea. To understand the great idea - hard - it helps to understand the first special idea.

Chinese

第一个想法就是所谓的特殊的想法,因为它涵盖的空间和时间只有几个特殊部位。另外一个大的想法,涵盖了所有剩下的由特殊的想法的东西。大的想法是很多更难理解比特殊的一个。人们谁是善于号码可以使用特殊的想法很容易回答的问题,但是你要知道很多关于数字做的大创意什么。为了解大的想法,硬一它有助于先了解特殊的想法

Chinese to English

The first idea is the idea of so-called special because the space and time it covers only a few special parts. Another big idea, covering all rest of the stuff from the special idea. Big idea is a lot more difficult to understand than the special one. People who are good at numbers you can use special idea is very easy question to answer, but you know a lot about what the figures do big ideas. To understand the big idea, hard and it helps to understand the idea of a special.

Munroe English (muEnglish) works rather well between French and English. If you’re interested in learning French, you might enjoy reading a future French version of Thing Explainer or simply run the English version through Google Translate (and use speech recognition for verbal work).

The Chinese round-trip experiment almost works, but falls apart grammatically. For example, “you can use special idea is very easy question to answer, but you know a lot about what the figures do big ideas” is missing things like “need” and “to” and a few pronouns. There’s also an unfortunate “numbers” to “figures” word substitution. Given that Munroe is a far more advanced AI than Google this essay will be used to enhance Google’s Chinese translation model (which desperately needs work).
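
If you want to repeat the experiment, the round trip itself is a few lines of scripting. The translate() helper below is a hypothetical stand-in (it just tags the text so the sketch runs); swap in whatever machine-translation client you actually have access to.

    def translate(text: str, source: str, target: str) -> str:
        # Hypothetical stand-in so the sketch runs end to end. Replace the body
        # with a call to whatever machine-translation client you actually use.
        return f"[{source}->{target}] {text}"

    def round_trip(text: str, via: str, base: str = "en") -> str:
        # English -> other language -> English, to see how much meaning survives.
        outbound = translate(text, source=base, target=via)
        return translate(outbound, source=via, target=base)

    munroe = ("The first idea is called the special idea, because it covers "
              "only a few special parts of space and time.")

    for language in ("fr", "zh"):
        print(language, "->", round_trip(munroe, via=language))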

I’m optimistic about this new language and happy that the Munroe is now taking a more active hand in guiding human development. Zorgon knows we need the help.

Update 11/19/2015: There’s a flaw in my logic.

Alas, I didn’t think this through. There’s a reason speech recognition and natural language processing work better with longer, more technical words. It’s because short English words are often homonyms; they have multiple meanings and so can only be understood in context [1]. Big, for example, can refer to size or importance. In order to get under 1000 words Munroe uses many context tricks, including colloquialisms like “good at numbers” (meaning “good at mathematics”). His 1000 word “simple” vocabulary just pushes the meaning problem from words into context and grammar — a much harder challenge for translation than mere vocabulary.

So this essay might be a Google Translate training tool — but it’s no surprise it doesn’t serve the round-trip to Chinese. It is a hard translation challenge, not an easy one.

[1] Scientology’s L Ron Hubbard had a deep loathing for words with multiple or unclear meanings, presumably including homonyms. He banned them from Scientology grade school education. Ironically this is hard to Google because so many people confuse “ad hominem attack” with homonym.

Thursday, October 29, 2015

Learning from an Amazon "Newer Galaxy" fraud: I too am prey.

I’ve been digging into thunderbolt 2 lately. It’s an orphan technology — sure looks like Apple has given up on it. In retrospect either Apple or Intel needed to make their own hubs — in a low-trust world leaving this to dying 3rd party manufacturers was a mistake.

For now I’ve settled on the OWC Thunderbolt 2 dock. It’s not perfect, I still have suspicions about how it performs under load. I wouldn’t be surprised if I need to power cycle it every few days. Yeah, like I said, Apple needed to make this. I tested it next to an Elgato hub with similar USB 3 performance, the deciding feature was support for legacy firewire 800.

During the testing period I used a (too) short thunderbolt cable bundled with the Elgato, but that’s going back with the return. Due to a misunderstanding about Apple cable prices I decided to get an OWC 2m cable, but in a moment of weakness I ordered it from Amazon (Prime shipping, speed, etc).

That is, I ordered from an Amazon page that said OWC cable on it, via “Newer Galaxy Distribution Company”. The page looked like this:

OWC cable

Yeah, look closely. It says made by OWC and the image has OWC on it, but the page title doesn’t actually say OWC. On the other hand, the text says:

Utilizes the latest Thunderbolt chipset for high-speed 10Gb/s Thunderbolt and 20Gb/s Thunderbolt 2 devices
Enhance video workflows with support for faster 4K video transfers + 4K display capabilities via DisplayPort 1.2
1 Year OWC Limited Warranty

So I was stupid, yes, but I wasn’t completely misguided. I even inspected “Newer Galaxy”’s sales count and ratings — though I know ratings systems of this sort are almost completely fake.

Damn. I know better than this. Yes, it was Amazon Prime, but that only means the returns are easier. It doesn’t mean it’s legitimate.

This is what’s being shipped:

Shipped cable

A “2M” cable. It’s not actually a counterfeit cable at this point, it’s just not what I ordered.

There’s an upside to this experience. I can share it here for one, and every story like this is a small push for Amazon reform. Amazon returns are very easy, and for frauds like this there’s no return postage fee. (I’ll reference this blog post in the return comments.)

For another, I’ve also learned that I’m not as good at spotting fraud as I should be — I blame that on age. The data is clear that most of us become prey after age 55 or so. Prey have to learn fear, and I’m learning.

Best of all I learned that Apple has dropped its price on 2m thunderbolt cables from $60 to $40 (that price drop is probably why trustworthy alternatives have disappeared). So I’ll do that instead.

It would be good to have a trustworthy alternative to Amazon… 


Saturday, May 03, 2014

Thinking tools 2014 - holding steady but future unclear

Revisiting something I wrote 14 years ago reminded me of the tools I use to think about the world. Once those tools were conversation, paper diaries and notebooks — even letters. Later came email, local BBS, FidoNet [1] and Usenet [3]. In the 90s we created web pages with tools like FrontPage and “personal web servers” [2] — even precursors to what became blogs.

In the 00s we had the Golden Age of Google. My thinking tools were made by Google — Google Blogger, Google Custom Search Engine, Google Reader (RSS/Atom) and Google Reader Social. We loved Google then — before the fall.

From 1965 through 2011 my thinking tools continuously improved. Then things got rocky.

These days I still use Blogger [4]. Blogger is old but seems to be maintained, unlike Google Custom Search. I’m grateful that Daniel Jalkut continues to update MarsEdit — I wish he’d use Backer to charge me some money. There are features I’d like, but most of all I’d like him to continue support.

I still rely on RSS, even as it fades from memory (but even new journalism ventures like Upshot still have feeds). Feedbin ($20/yr) is almost as good as Google Reader [6], Reeder.app is still around (but unstable), and Pinboard ($10 lifetime) has turned out to be a “good enough” de facto microblogging platform — with a bit of help from IFTTT ($0) [5].

App.net Alpha ($36/year!) [7] powered by PourOver and consumed in part through Duerig Root-Feeds has filled out the rest of the microblogging role — and replaced the intellectual feedback of Reader Social.

So as of 2014 I’ve cobbled together a set of thinking tools that are comparable to what I had in 2009. It feels shaky though. Few people under 30 know what RSS is, app.net is not growing (even Twitter is dying), and I’ve recently written about the decrepit state of Google Custom Search. Of Google’s twitter-clone, the less said the better.

I wonder what comes next? I don’t see anything yet. I’m reminded of the long fallow time between the end of Palm @2003 and the (useful) iPhone of 2009 (transition hurt). Expect turbulence.

—fn— 

[1] FidoNews was last published July 1999.

[2] FrontPage 98 was a prosumer tool; the closest equivalent today would be MarsEdit or Microsoft’s forgotten Live Writer (2009).

[3] I used to tag Usenet posts with a unique string, then search for them in DejaNews and later Google Groups. So a bit of a micro-blog.

[4] I do use WordPress on Dreamhost for my share archive.

[5] Pinboard is about $10 for lifetime use. That’s so low it worries me. There’s a $25/yr option for a full text archive for every bookmark, but I don’t need that; it would just confuse my searches. Maybe Maciej should seek Backer funding for new features?

[6] Speaking of Backer funding, I’d fund a feature that gave me in-context editing of Feedbin feed titles.

[7] App.net is by far the most expensive of the services I use, but if you visit the site the yearly subscription fee is undiscoverable. You only see the free signup, without mention of follower limitations. This bothers me.
