Sunday, September 18, 2011

Life in the post-AI world. What's next?

I missed something new and important when I wrote ...

Complexity and air fare pricing: Houston, we have a problem

... planning a plane trip has become absurdly complex. Complex like choosing a cell phone plan, getting a "free" preventive care exam, managing a flex spending account, getting a mortgage, choosing health insurance, reading mobile bills, fighting payment denials, or making safe product choices. Complex like the complexity collapse that took down the western world.

I blame it all on cheap computing. Cheap computing made complexity attacks affordable and ubiquitous...

The important bit is what's coming next, and what's already here, in the eternal competition.

AI.

No, not the "AIs" of Data, Skynet and the Turing Test [1]. Those are imaginary sentient beings. I mean Artificial Intelligence in the sense it was used in the 1970s -- software that could solve problems that challenge human intelligence. Problems like choosing a bike route.
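Route selection is exactly the kind of problem that looks like intelligence until you see the machinery. Here's a minimal sketch of that machinery -- Dijkstra's shortest-path search over a made-up bike map. The place names and "effort" weights are invented for illustration; real route planners add far more to the cost model.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: lowest-cost path from start to goal.

    graph maps each node to a list of (neighbor, cost) pairs.
    Cost could be distance, climbing, traffic -- whatever we want to minimize.
    """
    queue = [(0, start, [start])]          # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return None

# An invented bike map: intersections and the "effort" between them.
bike_map = {
    "home":   [("park", 2), ("bridge", 5)],
    "park":   [("bridge", 1), ("cafe", 4)],
    "bridge": [("cafe", 2)],
    "cafe":   [],
}

print(shortest_route(bike_map, "home", "cafe"))  # -> (5, ['home', 'park', 'bridge', 'cafe'])
```

No sentience anywhere in there, yet it answers a question we used to pull out a paper map for.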

To be clear, AIs didn't invent mobile phone pricing plans, mortgage traps or dynamic airfare pricing. These "complexity attacks" were made by humans using old school technologies like data mining, communication networks, and simple algorithms.

The AIs, however, are joining the battle. Route finding and autonomous vehicles and (yes) search are the obvious examples. More recently, services like Bing's flight price prediction and Google Flights are going up against airline dynamic pricing. The AIs are among us. They're just lying low.
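The fare predictors' internals are proprietary, so this is only a guess at the flavor of the fight: look at a fare's recent history and the calendar, then advise buy or wait. The thresholds and rules below are invented for illustration, not anyone's actual model.

```python
def advise(fare_history, days_to_departure):
    """Toy 'buy or wait' advisor for a single itinerary.

    fare_history: recent daily fare quotes, oldest first.
    Real predictors mine huge archives of historical itineraries;
    this just eyeballs the recent trend and the calendar.
    """
    recent = fare_history[-7:]              # last week of quotes
    trend = recent[-1] - recent[0]          # rising (+) or falling (-)?
    if days_to_departure <= 14:
        return "buy"                        # fares usually climb close to departure
    if trend > 0:
        return "buy"                        # price is already moving up
    return "wait"                           # flat or falling: gamble on a drop

print(advise([412, 405, 399, 401, 398, 395, 390], days_to_departure=45))  # -> wait
```

The real systems replace those hand-written rules with models trained on millions of fares, but the shape of the contest is the same: one side's pricing algorithm versus the other side's prediction algorithm.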

Increasingly, as in the esoteric world of algorithmic trading, we'll move into a world of AI vs. AI. Humans can't play there.

We are in the early days of a post-AI world of complexity beyond human ken. We should expect surprises.

What's next?

That depends on where you fall on the Vinge vs. Stross spectrum. Stross predicts we'll stop at the AI stage because there's no real economic or competitive advantage to implementing and integrating sentience components such as motivation, self-expansion, self-modeling and so on. I suspect Charlie is wrong about that.

AI is the present. Artificial Sentience (AS), alas, is the future.

[1] Recently several non-sentient software programs have been very successful at passing simple versions of the Turing Test, a test designed to measure sentience and consciousness. Human interlocutors can't distinguish Turing Test AIs from human correspondents. So either the Turing Test isn't as good as it was thought to be, or sentience isn't what we thought it was. Or both.

Update 9/20/11: I realized a very good example of what's to come is the current spambot war. Stross, Doctorow and others have half-seriously commented that the deception detection and evasion struggle between spammers and Google will birth the first artificial sentience. For now, though, it's an AI vs. AI war: a marker of what's to come across all of commercial life.
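The defensive half of that war is, at bottom, statistics. Google's filters are far more elaborate (and secret), but the classic textbook move is a naive Bayes classifier over word counts. Here's a toy sketch of that idea with an invented four-message corpus; nothing below is anyone's production system.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam). Count words per class and class totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        counts[is_spam].update(text.lower().split())
        totals[is_spam] += 1
    return counts, totals

def looks_like_spam(text, counts, totals):
    """Naive Bayes: which class makes these words less surprising?"""
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        # class prior, then per-word log likelihoods with add-one smoothing
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return scores[True] > scores[False]

corpus = [
    ("cheap pills buy now", True),
    ("win a free cruise now", True),
    ("lunch meeting moved to noon", False),
    ("draft of the report attached", False),
]
counts, totals = train(corpus)
print(looks_like_spam("buy cheap pills", counts, totals))           # True
print(looks_like_spam("meeting about the report", counts, totals))  # False
```

The spammers' side of the war is the mirror image: generate messages that slide under exactly this kind of scoring. Each side's counter-moves train the other.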

See also:

Update 9/22: Yuri Milner speaking at the "mini-Davos" recently:
.... Artificial intelligence is part of our daily lives, and its power is growing. Mr. Milner cited everyday examples like Amazon.com’s recommendation of books based on ones we have already read and Google’s constantly improving search algorithm....
I'm not a crackpot. Ok, I am one, but I'm not alone.

3 comments:

Anonymous said...

There's an old joke that the reason AI is never achieved is that 'AI' is a stand-in for whatever we can't do yet with computers. Once we can do it, it ceases to be 'AI' and is just one more programming technique or algorithm. There is something to that.

Twenty years ago I would have thought that the search problem was something that couldn't be solved without some kind of intelligence. It turns out you can do a decent job with statistical analysis, without producing anything like what we would call intelligence.

Moving from these individual heuristics to something approaching sentience seems very unlikely given the progression we've seen up to this point.

JGF said...

A variant of the old joke is that "intelligence" is whatever can't be done with statistical analysis.

I go with a pragmatic take -- if the computer does something that ten years ago we considered the domain of the human (car driving, route selection), then it's 'AI'.

Of course by that definition we've had AI for a long time. I think we have, really. Automated EKG reading was already quite good 15 years ago.

I suspect we'll approach sentience the same way -- bits and pieces that, in retrospect, will look like algorithms and statistics.

In the past I'd guesstimated that at around 2050 or so. Wild-ass guess of course. I hope it takes a lot longer.

JGF said...

I found some old posts I couldn't locate when I first wrote this, so I added some new links and references.