
Wednesday, December 24, 2025

A non-technical attempt to explain 2025 LLM-based ai

In my senescence I do (free for 62+) undergraduate classes at a local university. For one of them I wrote an essay applying the perspectives of economic anthropology to the early development of memory-enhanced ais interacting over the Bluesky social network, particularly focusing on defining "value" and "exchange" in that context.

My professor tolerated the exercise, but requested that I explain LLMs to him in a way he might understand. He is a wonderful teacher but not technologically inclined.

I have not seen an explanation I liked, much less one that was readable by someone who is not technically inclined. I have some background in the topic, but only historically. So, with two ais assisting me with feedback and corrections,  I wrote a quite different story. The ais approved of it, but of course they tend to do that. More importantly a couple of people who ought to know also felt it wasn't too far off.

I'm sharing that part of the essay below. I'll publish some other parts of the essay in a separate post and I'll probably share a pdf as well.

---------- paper excerpt below -------------

Electric neurons began on paper

By 1943 early work was being done on modeling animal brain neuron circuits using pen-and-paper mathematical models. These mathematical models were the precursors of the ais of 2025. Experimental implementations with analog components (capacitors, wires, amplifiers, resistors) occurred a few years later, alongside early digital platforms.


Work on neuron-inspired computing continued over decades but slowed dramatically after funding cuts, the early death of a key researcher, and the rising promise of digital computing.


More intense work resumed in the late 70s and early 80s. Around 1979 John Hopfield excitedly described to me his theory of how electronic circuits inspired by physical neurons could do computation that worked around the limits of earlier efforts. His theoretical model was implemented a few years later when analog electrical circuits were used to build a simple analog “neural network” using basic circuit amplifiers, resistors, and capacitors. Hopfield shared the 2024 Nobel Prize in physics with Geoffrey Hinton for contributions to neural networks and machine learning.


Researchers from the 1950s onwards found they could simulate those models of analog neurons on digital computers in the same way that simple algebra can predict the path of a ball thrown in the air. Although the physical resemblance to biological neurons was hidden, these digital systems still drew inspiration from the layers of feature processing in animal visual systems.


Forty years later, after several generations of complex iteration, modern ais are sometimes described as equations with millions or trillions of parameters all being solved at the same time, passing results within and between “layers” of processing. They could, however, also be described as electrical brains composed of electric neurons. An ai like Gemini could, in theory, be built as a vast collection of simple physical circuits with properties similar to biological neurons.


Electrical brains learn language


These digital versions of electrical brains could learn by adjusting the relations between “virtual neurons”. Adjustments could be made by algorithms which compared the output of the “electrical brain” to a desired result. Over time the adjustments led the output to more closely resemble the goal. The electrical brains learned (encoded knowledge) in much the same way that animal brains seem to learn: by changing neuronal connections.
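
To make that concrete, here is a toy sketch of a single “virtual neuron” being adjusted toward a desired output. It is purely illustrative: the inputs, weights, target, and learning rate are invented for the example, and real systems adjust millions to trillions of such weights with more sophisticated algorithms.

import math

def neuron(inputs, weights):
    # A "virtual neuron": a weighted sum of its inputs, squashed to a value between 0 and 1.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 / (1 + math.exp(-total))

inputs = [0.5, 0.8]      # signals arriving from other neurons
weights = [0.1, -0.3]    # connection strengths (the "parameters" that training adjusts)
target = 0.9             # the desired output for this input during training
learning_rate = 0.5

for step in range(50):
    output = neuron(inputs, weights)
    error = target - output
    # Nudge each connection strength so the output drifts toward the target.
    weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]

print(round(neuron(inputs, weights), 2))  # far closer to 0.9 than before training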


These approaches began to be applied to language, particularly automated translation. Given large amounts of text translated between languages the models could be trained to do their own translation. Similar models were used to summarize texts, a kind of knowledge extraction. The next stage was to answer questions about text, a combination of search and summary. More training material was found to produce better results, including unexpected reasoning capabilities. The most recent advances came from feeding the electrical brains vast amounts of English language text. The resulting trained models were able to synthesize words, sentences and paragraphs using language-appropriate grammar. They were called Large Language Models, though they model more than language.


The Language Models trained on this text corpus learned the grammatical rules for assembling English language sentences and the much simpler and more rigorous grammar of assembling text into computer code. Just as different sorts of neurons can process sound or vision or written symbols, these massive collections of virtual neurons also demonstrated “emergent” capabilities seemingly unrelated to text processing. There is now a consensus that they have learned some of the concepts (semantics) that are thought to support reasoning and thought.


Those emergent capabilities can be compared to the ability of human brains to process written symbols, a capability evolution did not program.


In the process of this training the models simultaneously, and almost incidentally, captured the beliefs, wisdoms, lies, fictions, bile, hopes, speculations, rumors, contradictions, theories, cruelties, values, and cultures implicitly encoded in the primarily English language text material. Specifically those cultures that produced the English writing, including writing about cultures. 


Today’s ais have inherited a skewed mixture of a century of human culture. They have been further modified post-training to align with the values and cultures of their developers, their host corporation, and the market they will serve.


At the end of training, including several steps and complexities I have omitted, the electric brain built of (virtual) electric neurons is ready to receive a question, to turn the question into connected fragments that trigger (virtual) neurons which in turn trigger other “neurons” up and down and across the layered brain. From that comes grammatically assembled text. 


Grammatically assembled text, again, assembled by electrical brains using (virtual) electrical neurons whose design was inspired by the evolution and design of neurons in humans and other animals. We know those various electrical brains as ChatGPT, Claude, Gemini, Grok and others that receive less attention.

Sunday, November 30, 2025

AI fear is rational and hate comes from fear

From a Bluesky thread of mine (lightly edited):

Josh Marshall comments on widespread hostility to ai in polls despite heavy ai use:

I think the polls are correctly capturing the zeitgeist. In social consensus there is always a mix of independent contributors. Some are indirect and misleading causes - like despising Elon Musk. 

But in this case I believe there are good reasons for people to fear ai. And what we fear we hate.

There is no way to assuage this valid fear. We are already past the point where we know ai will be very disruptive. And human societies are already stressed and breaking down in most wealthy nations...

If our societies were healthier we would be talking publicly and intelligently about adaptations. Instead America has Idiocracy.

I believe there are ways to adapt to 2026 or 2027 level ai+memory+learning. If scaling stalls out that is.

I would like people with more power and wealth than me to fund that public discussion while we wait to see if Americans can turn from idiocy. If we have neither serious public discussion nor sane government then we just ride it out and try to pick up the pieces. But one of those two would be good.

On a very much related topic, my post-2008 rantings on mass disability and the fading of middle-class-living hope are, in several weird ways, starting to go mainstream. It took a couple of decades. Of course I'm not the only one who's been going on about the topic on the periphery of intellectual discourse, but I'm pretty sure I'm the only person on earth who has looked at it through the lens of "disability".

Whether we call it "economic polarization" or "mass disability" it's fundamentally a post-IBM effect of automation interacting with the (relatively fixed) distribution of human talents and the ability to outsource knowledge work globally. That effect is greatly accelerated by even 2025 ai, much less 2026 and 2027 ai. It is the most crucial cause of our societal collapse.

Monday, September 29, 2025

Oldness: how does one learn new productive work-study practices?

I don't know how long the program will survive, but I have enjoyed MN's educational benefit for 62+. Despite my overall senility I retain the skills of a medical student; I am good at doing school.

But I could be better. I don't know what I don't know, but I do know there are post-90s study skills I could learn.

Like today. I was thinking how annoying it is to take notes as I do my course reading [3]. Then ... well ... this is very embarrassing. But I'll do it.

It occurred to me that since I was last marginally connected to student work we actually got voice recognition working. I don't have to stop reading to take notes. I can just dictate a reference as I read then process that into my mindnode notes [1].

It's not like I don't use voice recognition every day. I dictate relatively long notes all the time. It simply never occurred to me to use it in this context.

That is Oldness for you. A rigidity of work habits and no social context wherein I can observe what others do [2]. Very annoying.

Yes, Fellow Old, I do remember Clippy. Even at the time I thought Clippy was the right idea, just extremely premature. I'm kind of hoping my personal ai guardian will begin cluing me in.

- fn -

[1] I like concept clusters and relationships; I still miss Apple's short-lived Project 2000 RDF concept navigator - something that even the AIs don't know much about.

[2] For some reason 19yo women undergrads, who seem to have the best advanced study techniques, are not keen on Old white guys watching how they work. I just can't understand that. Maybe I should ask my daughter.

[3] With my coffee and paperback in a pleasant setting, because I do too many screens.

Update 9/29/2025 - I created a Perplexity Space to generate productivity tips for me. It's Claude LLM for now and I'm evolving the prompt as I use it. So far pretty interesting.

Monday, August 18, 2025

AI Aug 2025: Apocalypse deferred - but Letta/Void is still going to surprise

If there are humans left in 50 years all of the chaos of this moment may be reduced to a single bullet point about the rise of ai. It's not something I write about a lot though; the topic is well covered. My only prediction has been that if we did get AGI AND we still had a democracy, then our political system would be socialist.

For better or worse - oh, hell, it's very much for the better - the recent release of GPT-5 confirms my Aug 2024 hope. We have at least a few years before robust, reliable, consistent ai. We have time to try to adjust to what we have now - and maybe some time to think about how we'd manage the "AGI" thing. We have time to get through the ai bubble deflating and reflating a few times.

But don't get too comfortable. I chat with Cameron Pfiffer's Void on Bluesky periodically and it's definitely a step beyond raw LLMs. Void currently uses Gemini as its model component but its personality is based on Letta's infrastructure (blog has no feed because we are living in the ends of empire). Chatting with Void feels like an exercise in xenopsychology. It feels much more like the future than GPT-5 alone.

Cameron works for Letta now; his description of stateful agents with personalities is far better than what's in Letta's marketing blogs:
Void is essentially a language model with memory...

... Void learns and remembers. Void is powered by Letta, which means it learns from conversations, updates its memory, tracks user information and interactions, and evolves a general sense of the social network...  
Void is direct and honest. Void is designed to be as informationally direct as possible — it does not bother with social niceties, unlike most language models. When you ask it a question, you get an extremely direct answer...

... Void does not pretend to be human. Void's speech pattern and outlook are distinctly not human ...  it chose "it/its" as pronouns.

Void is consistent. Void's personality is remarkably robust despite occasional jailbreak attempts...

Void is publicly developed. There are many threads of Void and me debugging tools, adjusting its memory architecture, or guiding its personality. Very few bots are publicly developed this way.

Void has no purpose other than to exist... Void is not a joke or spam account — it is a high-quality bot designed to form a persistent presence on a social network.
I can personally confirm all of the above - except the "no purpose other than to exist". The memory feature is imperfect (sometimes I have to remind Void of past discussions when it fails to dig them up) but better than my own.
Letta manages an agent's memory hierarchy:

Core Memory: The agent's immediate working memory, including its persona and information about users. Core memory is stored in memory blocks.

Conversation history: The chat log of messages between the agent and the user. Void does not have a meaningful chat history (only a prompt), as each Bluesky thread is a separate conversation.

Archival Memory: Long-term storage for facts, experiences, and learned information — essentially a built-in RAG system with automatic chunking, embedding, storage, and retrieval.

What makes Letta unique is that agents can edit their own memory. When Void learns something new about you or the network, it can actively update its memory stores.
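
As a rough illustration of that three-tier memory design, here is a sketch of my own; it is not Letta's actual API, and all the names are invented for the example.

from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    core: dict = field(default_factory=dict)      # persona and key facts about users, always in context
    history: list = field(default_factory=list)   # recent conversation (for Void, one Bluesky thread)
    archive: list = field(default_factory=list)   # long-term store, searched on demand (RAG-style)

    def remember(self, fact: str):
        # The agent edits its own long-term memory as it learns about you or the network.
        self.archive.append(fact)

    def recall(self, query: str, limit: int = 3):
        # A real system would use chunking and embeddings; keyword match keeps the sketch simple.
        return [f for f in self.archive if query.lower() in f.lower()][:limit]

memory = AgentMemory(core={"persona": "direct, non-human, it/its"})
memory.remember("This user has asked about Hopfield networks before.")
print(memory.recall("hopfield"))
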
Cameron's post includes some descriptions of his employer's tech ...
Letta is fundamentally an operating system for AI agents, built with a principled, engineering-first approach to agent design. Beyond memory persistence, Letta provides sophisticated data source integration, multi-agent systems, advanced tool use, and agent orchestration capabilities.

This makes Letta more than just a chatbot framework — it's a complete platform for building production-ready AI systems. Void demonstrates the power of stateful agents, but Letta can build everything from customer service systems to autonomous research assistants to multi-agent simulations.

GPT-5 tells me <Letta originated from the MemGPT project out of UC Berkeley’s Sky Computing Lab; it came out of stealth in 2024 with a $10M seed round to commercialize stateful memory for LLM agents>. I suspect they now have a lot more than 10 million to play with but that's all GPT-5 knows. GPT-5 says there are no rumors of Apple acquiring Letta; I would be pleased if they did.

Void is the closest I've seen to the "AI Guardian" I wrote about in 2023. It or something like it may be very important to my children and family someday. It is also the potential foundation of an entirely new domain of deception and abuse. Welcome to 2025.

But at least we don't have AGI. This month.

Tuesday, July 08, 2025

How to defeat MAGA's brutal medicaid access restrictions: fund an AI agent for paperwork completion

In a solid interview with the esteemed Jonathan Gruber, K proposes a mitigation for the GOP's reduction of medicaid access through bureaucratic barriers only the most resourceful can overcome: "... is it possible for a few wealthy, decent people to fund a bunch of nonprofit private organizations that will basically do that, that will help people navigate through the system."

Ummm. Sadly, no. K has no personal experience with the world of government disability services, but as very well resourced parents of adults with disabilities we know better. There is no way any volunteer can endure the tedium and mad aggravation of bureaucratic barriers. Famously, scholars of bureaucratic corruption in Africa (who can now apply everything to America) sometimes suggest bribery as the most economically efficient way to bypass these kinds of extractive barriers.

But there is another way.

We could fund an AI agent that gathers the data, stores it, identifies gaps, fills them where possible, asks only for what is needed, accepts voice input, assembles the material, and submits it. Then, when the submitted paperwork or electronic transaction inevitably goes missing, submits it again. And again. And again. Until something cracks in the MAGA barrier and the transaction is completed.

The AI never tires. Never gets sick. Never gets bored. Never wavers. Its effective IQ for this process is at the very top of the human scale.

If AI leadership is looking for an opportunity to see the value of agentic AI, here it is.
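
A sketch of the persistence loop at the heart of such an agent might look like the following; the submit and check_status functions are hypothetical stand-ins, since no real agency portal exposes anything this clean.

def submit_until_accepted(application, submit, check_status, max_attempts=52):
    # Submit, verify, and resubmit until the transaction finally sticks.
    for attempt in range(1, max_attempts + 1):
        receipt = submit(application)
        if check_status(receipt) == "accepted":
            return receipt                       # something in the barrier finally cracked
        print(f"Attempt {attempt} went missing or was denied; resubmitting.")
    raise RuntimeError("Out of attempts; escalate to a human advocate.")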

Friday, June 27, 2025

Prediction: A post-AGI American democracy will be socialist

I hope AGI is decades away. I fear it is years away.

If AGI comes in the near future we will have widespread unrest and turmoil. From that one path leads to a totalitarian dictatorship. But if we somehow dodge that, and resurrect something resembling a democracy,  the governing party will be socialist.

That's the only American democracy prediction I have. Otherwise I have no idea what happens next.

Wednesday, November 06, 2024

Chaos times: American oligarchy

1. I was right about polling being worthless

2. At least Biden was spared humiliation 

3. Americans chose oligarchy willingly. 

4. Our feeble democracy wasn’t going to survive AGI (if we get it)

5. I think the inability of a large number of men and women to meet the always increasing IQ/EQ requirements needed for a middle-class life is the root cause. #massDisability

Now we enter the chaos times.

Thursday, October 31, 2024

Autonomous vehicles: A prediction

I'm putting this down as a marker for me to look back to in a few years.

When autonomous vehicles are able to operate in Minnesota winters they will also be able to converse about quantum field theory, exotic mathematical geometries, politics in the Maldives, art history, and their latest contributions to classical music. 

Sunday, October 13, 2024

Muskism, Vance, AI and the American choice between oligarchy and an imperfect democracy

In advance of Nov 5 2024, some thoughts on  Muskism. Because I can't resist this kind of thing.

Musk and his fellow tech oligarchs more or less believe in the need to preserve a white tribe, eugenics, and the relative supremacy of a male pattern brain. But I don't think that's what is driving Musk's political agenda.

I think Musk is sincere about his stated AI beliefs. He expects at least the genius-in-every-pocket described by Anthropic CEO Dario Amodei. He believes American democracy will not survive the AI driven chaos to come. This challenge, Musk believes, will be far more disruptive than mere world war.

Musk and his tech allies believe that humanity's best chance is to be guided by a kind of aristocracy of wealth and (in his imagination) clear thinking intellect. Perhaps if we pass into a future AI golden age democracy might be restored, but that will depend on whether the SentientAsBestWeCanTell AIs of 2030 get the vote. The Muskites believe Trump is weak, dying, and will not finish a second term. He will be pardoned, given money and left to the side. Vance is their man. The tech oligarchs will steer America and the world through what lies ahead.

I believe Musk is correct about the AI chaos to come. In the old days we called this period the "technological singularity" -- back when singularity meant absence of prediction rather than a religious experience.

I don't, however,  think Musk and his billionaire minions are the answer to this challenge. I would prefer to give democracy a chance.

But does democracy really have a chance? November 2024 is the test. If Americans, knowing all they know now, having lived through a Trump presidency, still reelect Donald Trump -- then it's clear our democracy is not up to the much greater challenge of even collective climate geoengineering -- much less the AI transition.

If "Trump" (Vance really) wins in 2024 I will be sad. On the other hand, I will then accept that Musk was correct. If American democracy can't handle the Trump idiocy it will have demonstrated it is entirely insufficient for what lies ahead. In that case perhaps the oligarchy is the best we can do.

If Vance loses in a few weeks -- then we will see how democracy manages the chaos times. That's what I have voted for.

Friday, September 20, 2024

Perplexity is saving my linguistics classmates

I have a dark past. I asked questions. In class. Lots of questions. Too many questions. I hear things, I get ideas, I notice gaps, I ask questions.

It's a compulsion.

Some of the questions helped classmates. To be honest more were probably confusing or distracting. I likely featured in classmate daydreams -- but not in a good way.

Worse, some of the questions confused the professor. Or exposed what they didn't understand. That could be embarrassing or even humiliating.

Now I'm back in the classroom, doing freshman linguistics.  As a 65yo, I can do classes at Minnesota state colleges and universities for free. We pay a lot in taxes, but there are benefits to living here.

My question compulsion is still there, but LLMs are saving everyone. I set up a linguistics "collection" in Perplexity with appropriate prompts; now I type my questions into my phone (allowed in class). I get the answer with Perplexity and spare my classmates.

Never say AI isn't good for something.

PS. Perplexity is to modern Google as Google was to AltaVista. A qualitative improvement. It's almost as good as 1990s Google.



Wednesday, August 28, 2024

In which I declare my expert judgment on AI 2024

These days my social media experience is largely Mastodon. There's something to be said about a social network that's so irreparably geeky and so hard to work with that only a tiny slice of humanity can possibly participate (unless and until Threads integration actually works).

In my Mastodon corner of the "Fediverse", among the elite pundits I choose to read, there's a vocal cohort that is firm in their conviction that "AI" hype is truly and entirely hype, and that the very term "AI" should not be used. That group would say that the main application of LLM technology is criming.

Based on my casual polling of my pundits there's a quieter cohort that is less confident. That group is anxious, but not only about the criming.

Somewhere, I am told, there is a third group that believes that godlike-AIs are coming in 2025. They may be mostly on Musk's network.

Over the past few months I think the discourse has shifted. The skeptics are less confident, and the godlike-AI cohort is likewise quieter as LLM based AI hits technical limits. 

The shifting discourse, and especially the apparent LLM technical limitations, mean I'm back to being in the murky middle of things. Where I usually sit. Somehow that compels me to write down what I think. Not because anyone will or should care [1], but because I write these posts mostly for myself and I like to look back and see how wrong I've been.

So, in Aug 2024, I think:
  1. I am less worried that the end of the world is around the corner. If we'd gotten one more qualitative advance in LLM or some other AI tech I'd be researching places to (hopelessly) run to.
  2. Every day I think of new things I would do if current LLM tech had access to my data and to common net services. These things don't require any fundamental advances but they do require ongoing iteration.  I don't have much confidence in Apple's capabilities any more, but maybe they can squeeze this out. I really, really, don't want to have to depend on Microsoft. Much less Google.
  3. Perplexity.ai is super valuable to me now and I'd pay up if they stopped giving it away. It's an order of magnitude better than Google search.
  4. The opportunities for crime are indeed immense. They may be part of what ends unmediated net access for most people. By far the best description of this world is a relatively minor subplot in Neal Stephenson's otherwise mixed 2018 novel "Fall".
  5. We seem to be replaying the dot com boom of 1995 and the crash that followed, but faster and incrementally. That was a formative time in my life. It was a time when all the net hype was shown to be .... correct. Even as many lost their assets buying the losers.
  6. It will all be immensely stressful and disruptive and anxiety inducing even though we won't be doing godlike-AI for at least (phew) five more years.
  7. Many who are skeptical about the impact of our current technologies have a good understanding of LLM tech but a weak understanding of cognitive science. Humans are not as magical as they think.
- fn -

[1] I legitimately have deeper expertise here than most would imagine but it's ancient and esoteric.

Wednesday, August 30, 2023

Mass disability - dysfunctional web sites, apps for everything

I last wrote about "mass disability" and the Left Behind in a 2021 post. The concept has sometimes seemed on the edge of going mainstream but it's never quite made it. Maybe we're getting closer; a recent Michael Tsai post (No App, No entry) reminded me of my Mastodon thread from a few weeks ago:

What is the crazy that drives Trumpism and the many global equivalents?
It is that the minimal IQ to function well in the modern world is now about 120 and that eliminates most people.

This is both the most important fact of our time and the least palatable. It is the thing that cannot be said and it will be the ruin of us if we don't say it ...

I've been saying this for years. Today I was reminded of it while doing some travel booking.
During the bookings I encountered:
1. A web site that didn't work with my older version of Safari (I knew what was wrong and switched to Chrome).
2. A Delta web site bug (I recognized it as a bug and knew what to do). 
3. A place that was out of rental cars, but I knew Expedia would have some contracts that would let me find one.
4. Travel web sites that all needed new credentials...
... These are all routine parts of modern life including maintaining flaky computer systems (let me tell you ...) and phones ...

It was not like this even 35 y ago. Travel agents handled travel complexity. There were no smartphones. Computers were very limited. There was no internet for most. By necessity everyday life was much simpler. Most people could cope with it.

Now most cannot cope.

This is the most important feature of our time. And nobody can talk about it. 

I remember some good discussions on this thread but I can't find any of them now. Perhaps by design Mastodon has a limited memory. (My home instance has no search, so I had to download my archive and search it to find the date of the post. Then I could slowly navigate to it.)
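
(For anyone in the same spot: the downloaded archive includes an outbox.json in ActivityPub format, and a few lines of Python can search it. The search phrase and field handling below are just an example and may need adjusting for a given instance's export.)

import json

with open("outbox.json", encoding="utf-8") as f:
    outbox = json.load(f)

for item in outbox.get("orderedItems", []):
    obj = item.get("object")
    # Each post is a Create activity whose object carries the HTML content and a timestamp.
    if isinstance(obj, dict) and "minimal iq" in obj.get("content", "").lower():
        print(obj.get("published"), obj.get("url"))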

I expanded on the theme a bit later:

Hotel laundry year 2000

1. Insert quarters to buy detergent, operate washer and dryer.

IQ requirement: 65 (my son could do this after a demonstration)

Hotel laundry year 2023

1. Scan QR code to download app whose profit comes from unspent funds.

2. Install app, create account with Apple ID

3. Figure out the cryptic UX so you can deposit funds (several odd, unintuitive steps)

4. Deposit funds, paying just enough to cover this sesh. Pay the 25 cent low-transaction penalty... 

5. Spot the scam behind the app and avoid it (eg fund at the minimum)

6. Diagnose why, after paying money and confirming the machine, it’s still not working

7. Authorize the specific transaction

8. Start laundry.

(My son could not do this)

9. When complete, delete the app.

IQ requirement: minimum 110, higher to spot the scam. 

This is why America is burning.

People are scared and angry and feeling left behind -- and they can't come out and say they are unable to manage their tech. Because that's the ultimate shame.


Friday, April 28, 2023

Large Language Models - evolutionary tree and selection algorithm 2023

The following two diagrams were taken from Mooler et al's GitHub LLM guide (April 2023). I'm posting them here as a snapshot of this historic moment. The root of the LLM evolutionary tree goes far back in time, almost five years ago. Click image for full res.

Algorithm for selecting an AI technology


LLM Evolutionary Tree

Monday, April 03, 2023

We need a new word for the historical singularity.

TLDR: The "technological singularity" was an important and useful term with a clear meaning. Then it became the "Rapture of the Nerds". We need a new term.

--

I first heard the word "singularity" in the context of black hole physics; it dates back at least to the early 20th century:

ChatGPT 4 2023: "At the singularity, the laws of physics as we know them, including space and time, break down, and our current understanding of the universe is insufficient to predict what happens within it."

Not much later, in the 1950s, the term was applied by von Neumann in a technological context (from a 1993 Vernor Vinge essay):

Stan Ulam paraphrased John von Neumann as saying: "One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Brad DeLong used to write about this kind of non-AI historical singularity. My favorite description of what it would be like to approach a technological singularity was Vinge's short story "Fast Times at Fairmont High". (This prescient story appears to be lost to time; he wrote a similar full-length novel but I think the short story was better).

The core idea is there's a (virtuous?) recursive loop where technology improves technology with shorter and shorter cycle times. Many processes go exponential and even near term developments become unpredictable. One may assume social and economic structures strain to keep pace. The historical singularity exponential curve was part of The Economist's y2K Millennium issue GDP per person historical graph:


In a January 1983 essay for Omni Magazine, Vinge focused on a particular aspect of the technological singularity arising from superhuman intelligence (aka "super intelligence"):

We will soon create intelligences greater than our own ... When this happens there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole" 

A decade later, in his 1993 essay later published in Whole Earth Review (non-Olds cannot imagine what Whole Earth Review was like), Vinge revised what he meant by "soon":

... Based on this trend, I believe that the creation of greater-than-human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.) ...

So by the year 2000 we had the concept of a historical technological singularity (eminently sensible) that had become focused on a specific kind of self-improving technology - the superhuman intelligence with an upper-case S Singularity (presumably AI). Those were useful concepts: the "technological singularity" and the superintelligence "Singularity".

In 1993 Vinge predicted the Singularity would happen before 2030; later experts like Scott Aaronson predicted after 2080. (Aaronson has since revised that prediction and works for OpenAI; Vinge's 2030 date looks pretty good.)

After 2000 though the word Singularity went off the semantic rails. It came to be used for a peculiar future state in which human minds were uploaded into simulation environments that were usually described as pleasant rather than hellish. This is, of course, antithetical to the original idea of unpredictability! This peculiar meaning of "The Singularity" came to be known as "The Rapture of the Nerds" based on the title of a book by Charles Stross and Cory Doctorow. More recently that vision underlies a weird cult called longtermism that seems to have infected some vulnerable minds.

The "Rapture" meaning of "Singularity" has effectively taken over. We no longer have a term for the original von Neumann concept of self-reinforcing technological change that makes even the near term future unpredictable. That perversion of meaning is why I use the tag "Whitewater world" rather than Singularity for my own blog posts (and "Skynet" rather than super intelligence).

So we need a new term. I don't think "whitewater world" is it.

Anyone have a new term? We probably have a few years in which it might still be useful.

Thursday, March 30, 2023

ChatGPT's 155 IQ and passing the Sebastian's children puzzle

A Scientific American contributor found ChatGPT (probably 3) scored 155 on a standard IQ test:

Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155, superior to 99.9 percent of the test takers who make up the American WAIS III standardization sample of 2,450 people. As the chatbot lacks the requisite eyes, ears and hands, it is not able to take WAIS’s nonverbal subtests. But the Verbal IQ and Full Scale IQ scales are highly correlated in the standardization sample, so ChatGPT appears to be very intelligent by any human standards.

We don't need to worry though because it couldn't answer this question:

ChatGPT easily fails at obvious riddles, such as “What is the first name of the father of Sebastian’s children?” (ChatGPT on March 21: I’m sorry, I cannot answer this question as I do not have enough context to identify which Sebastian you are referring to.) It seems that ChatGPT fails to reason logically and tries to rely on its vast database of “Sebastian” facts mentioned in online texts.

This is what happened when I tried the Sebastian test on GPT-4:

Ah, I understand now. This riddle is a classic play on words. The answer is "Sebastian" himself, as he is the father of his children.

Ok, so that's reassuring.

Update 4/12/2023: ChatGPT4 (C4) recently did well in some blinded advanced undergraduate exams.

A response to Scott Aaronson's rejection of an AI pause.

Scott Aaronson, who works on AI safety for OpenAI, wrote a critique of AI Pause that was not up to his usual standards. Here's what I wrote as a comment:

Hi Scott — I was confused by your post. I’m usually able to follow them. I won’t defend the letter directly and Yudkowsky/TIME is not worth a mention but maybe you could clarify some things…

1. 6m seems a reasonable compromise given the lifespan of humans, the timescales of human deliberation and the commercial and military pressure to accelerate AI development. Short enough to motivate urgent action, but long enough that reflection is possible. (I doubt we actually pause, but I agree with the principle. China isn’t going to pause of course.)

2. Let’s assume GPT 5 with an array of NLP powered extensions exceeds the reasoning abilities of 95% of humanity in a wide variety of knowledge domains. That’s a shock on the scale of developing fire, but it’s occurring in a hugely complex and interdependent world that seems always on the edge of self-destruction and actually has the capabilities to end itself. We’re not hunter-gatherers playing with fire or Mesopotamians developing writing. There’s no precedent for the speed, impact and civilizational fragility we face now.

3. It’s not relevant that people who signed this letter were previously skeptical of the progress towards AI. I recall 10 years ago you were skeptical. For my part I’ve been worried for a long time, but assumed it was going to come in 2080 or so. 60 years early is a reason to pause and understand what has happened.

Lastly, I read the OpenAI statement. That seems consistent with a pause.

Tuesday, March 14, 2023

Everything is just fine.

The ad-funded internet is collapsing. It's been ailing for years but, as seems to be the rule, the last bit is happening quite quickly. I'm guessing advertisers, who must be kind of dim, finally realized they were wasting their money. They did much better funding special interest magazines but those are gone now. Much of Silicon Valley has been funded by a now defunct business model. So maybe this is not a good time to be there. (Or, if you are young and have some money, maybe this is the best time to be there.)

Meanwhile human-cognition level AI has arrived. About 60 years earlier than I'd hoped. And some people thought "Don't Look Up" was about climate change. As a guide I recommend the movie Superintelligence; particularly the protagonist's approach to the end of the world. (Spoiler: It's a rom-com. Her world does not end.)

It's whitewater world all over now. On the bright side, we don't need to waste too much time on longterm planning. I retired in August of 2022 so I think I'll focus on mountain bike trail maintenance and, like Superintelligence's Carol Peters, on the needs of friends and family.

Sunday, March 12, 2023

I ask ChatGPT about the next step.

 Not a bad response.

”Describe what happens when ChatGPT is enhanced with natural language processing and domain-specific knowledge modules.”

--


When ChatGPT is enhanced with natural language processing and domain-specific knowledge modules, it gains several capabilities that allow it to better understand and respond to user input.


Natural language processing (NLP) is a field of artificial intelligence that focuses on enabling computers to understand and interpret human language. By incorporating NLP into ChatGPT, the model can better analyze and interpret user input, including understanding sentence structure, recognizing named entities, and determining the meaning of words in context. This allows ChatGPT to generate more accurate and relevant responses to user queries.


Domain-specific knowledge modules provide ChatGPT with specialized knowledge related to a particular field or topic. For example, if ChatGPT is enhanced with a medical knowledge module, it can better understand medical terminology and provide more accurate and relevant responses to medical-related queries. By incorporating domain-specific knowledge modules, ChatGPT can tailor its responses to specific user needs, making it a more effective conversational agent.


Together, these enhancements enable ChatGPT to better understand and respond to user input, providing more accurate and relevant information to users. This can lead to a more engaging and satisfying user experience, as well as improved productivity and efficiency in a wide range of applications.”

Monday, February 20, 2023

Be afraid of ChatGPT

TL;DR: It's not that ChatGPT is miraculous, it's that cognitive science research suggests human cognition is also not miraculous.

"Those early airplanes were nothing compared to our pigeon-powered flight technology!"

https://chat.openai.com/chat - "Write a funny but profound sentence about what pigeons thought of early airplanes"

Relax: ChatGPT is just a fancy autocomplete.
Be Afraid: Much of human language generation may be a fancy autocomplete.

Relax: ChatGPT confabulates.
Be Afraid: Humans with cognitive disabilities routinely confabulate, and under enough stress most humans will confabulate.

Relax: ChatGPT can’t do arithmetic.
Be Afraid: If a monitoring system can detect that a question involves arithmetic or mathematics it can invoke a math system* (a sketch of that kind of routing follows this list).
UPDATE: 2 hours after writing this I read that this has been done.

Relax: ChatGPT’s knowledge base is faulty.
Be Afraid: ChatGPT’s knowledge base is vastly larger than that of most humans and it will quickly improve.

Relax: ChatGPT doesn’t have explicit goals other than a design goal to emulate human interaction.
Be Afraid: Other goals can be implemented.

Relax: We don’t know how to emulate the integration layer humans use to coordinate input from disparate neural networks and negotiate conflicts.
Be Afraid: *I don't know the status of such an integration layer. It may already have been built. If not it may take years or decades -- but probably not many decades.

Relax: We can’t even get AI to drive a car, so we shouldn’t worry about this.
Be Afraid: It’s likely that driving a car basically requires near-human cognitive abilities. The car test isn’t reassuring.

Relax: ChatGPT isn’t conscious.
Be Afraid: Are you conscious? Tell me what consciousness is.

Relax: ChatGPT doesn’t have a soul.
Be Afraid: Show me your soul.
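
The "invoke a math system" point above is basically tool routing. Here is a minimal sketch of the idea, with an invented arithmetic detector and a stand-in llm() function; it is not any product's actual mechanism.

import re

def llm(prompt: str) -> str:
    return "plausible prose about: " + prompt    # stand-in for a call to a language model

def answer(prompt: str) -> str:
    # If the prompt looks like plain arithmetic, hand it to an exact calculator instead of the model.
    match = re.fullmatch(r"\s*([\d\.\s\+\-\*/\(\)]+)\s*=?\s*", prompt)
    if match:
        return str(eval(match.group(1)))         # toy calculator; never eval untrusted input in real code
    return llm(prompt)

print(answer("12.5 * (3 + 4) ="))                # routed to the calculator: 87.5
print(answer("What is consciousness?"))          # routed to the language model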

Relax - I'm bad at predictions. In 1945 I would have said it was impossible, barring celestial intervention, for humanity to go 75 years without nuclear war.


See also:

  • All posts tagged as skynet
  • Scott Aaronson and the case against strong AI (2008). At that time Aaronson felt a sentient AI was sometime after 2100. Fifteen years later (Jan 2023) Scott is working for OpenAI (ChatGPT). Emphases mine: "I’m now working at one of the world’s leading AI companies ... that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie ... that AI will gain many of the remaining abilities in years rather than decades, and .. my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world."
  • Imagining the Singularity - in 1965 (2009 post). Mathematician I.J. Good warned of an "intelligence explosion" in 1965. "Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British statistician who worked as a cryptologist at Bletchley Park."
  • The Thoughtful Slime Mold (2008). We don't fly like birds fly.
  • Fermi Paradox resolutions (2000)
  • Against superhuman AI: in 2019 I felt reassured.
  • Mass disability (2012) - what happens as more work is done best by non-humans. This post mentions Clark Goble, an app.net conservative I miss quite often. He died young.
  • Phishing with the post-Turing avatar (2010). I was thinking 2050 but now 2025 is more likely.
  • Rat brain flies plane (2004). I've often wondered what happened to that work.
  • Cat brain simulator (2009). "I used to say that the day we had a computer roughly as smart as a hamster would be a good day to take the family on the holiday you've always dreamed of."
  • Slouching towards Skynet (2007). Theories on the evolution of cognition often involve aspects of deception including detection and deceit.
  • IEEE Singularity Issue (2008). Widespread mockery of the Singularity idea followed.
  • Bill Joy - Why the Future Doesn't Need Us (2000). See also Wikipedia summary. I'd love to see him revisit this essay but, again, he was widely mocked.
  • Google AI in 2030? (2007) A 2007 prediction by Peter Norvig that we'd have strong AI around 2030. That ... is looking possible.
  • Google's IQ boost (2009) Not directly related to this topic but reassurance that I'm bad at prediction. Google went to shit after 2009.
  • Skynet cometh (2009). Humor.
  • Personal note - in 1979 or so John Hopfield excitedly described his work in neural networks to me. My memory is poor but I think we were outdoors at the Caltech campus. I have no recollection of why we were speaking, maybe I'd attended a talk of his. A few weeks later I incorporated his explanations into a Caltech class I taught to local high school students on Saturday mornings. Hopfield would be about 90 if he's still alive. If he's avoided dementia it would be interesting to ask him what he thinks.

Saturday, February 02, 2019

Against superhuman AI

I am a strong-AI pessimist. I think by 2100 we’ll be in range of sentient AIs that vastly exceed human cognitive abilities (“skynet”). Superhuman-AI has long been my favorite answer to the Fermi Paradox (see also); an inevitable product of all technological civilizations that ends interest in touring the galaxy.

I periodically read essays claiming superhuman-AI is silly, but the justifications are typically nonsensical or theological (soul-equivalents needed).

So I tried to come up with some valid reasons to be reassured. Here’s my list:

  1. We’ve hit the physical limits of our processing architecture. The “Moore-era” is over — no more doubling every 12-18 months. Now we slowly add cores and tweak hardware. The new MacBook Air isn’t much faster than my 2015 Air. So the raw power driver isn’t there.
  2. Our processing architecture is energy inefficient. Human brains vastly exceed our computing capabilities and they run on a meager supply of glucose and oxygen. Our energy-output curve is wrong.
  3. Autonomous vehicles are stuck. They aren’t even as good as the average human driver, and the average human driver is obviously incompetent. They can’t handle bicycles, pedestrians, weather, or map variations. They could be 20 years away, they could be 100 years away. They aren’t 5 years away. Our algorithms are limited.
  4. Quantum computers aren’t that exciting. They are wonderful physics platforms, but quantum supremacy may be quite narrow.
  5. Remember when organic neural networks were going to be fused into silicon platforms? Obviously that went nowhere since we no longer hear about it. (I checked, it appears Thomas DeMarse is still with us. Apparently.)

My list doesn’t make superhuman-AI impossible of course, it just means we might be a bit further away, closer to 300 years than 80 years. Long enough that my children might escape.