Showing posts with label skynet.

Monday, September 14, 2015

Google Trends: Across my interests some confirmation and some big surprises.

I knew Google Trends was “a thing”, but it had fallen off my radar. Until I wondered if Craigslist was going the way of Rich Text Format. That’s when I started playing with the 10 year trend lines.

I began with Craigslist and Wikipedia...

  • Craigslist is looking post-peak
  • Wikipedia looks ill, but given how embedded it is in iOS I wonder if that’s misleading.
Then I started looking at topics of special relevance to my life or interests. First I created a set of baselines to correct for declining interest in web search. I didn’t see any decline:
  • Cancer: rock steady, slight dip in 2009, slight trend since, may reflect demographics
  • Angina: downward trend, but slight. This could reflect lessening interest in search, but it may also reflect recent data on lipid lowering agents and heart disease.
  • Exercise: pretty steady
  • Uber: just to show what something hot looks like. (Another: Bernie Sanders)
Things look pretty steady over the past 10 years, so I decided I could assume a flat baseline for my favorite topics. That’s when it got fascinating.
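If you want to rerun this kind of baseline check yourself, the trend lines can be pulled programmatically. Here’s a minimal sketch using pytrends, an unofficial third-party Google Trends client; the terms and the 2005–2015 window are just the ones discussed above:

```python
# Pull 10-year interest-over-time curves for a few baseline terms.
# pytrends is an unofficial scraper-style client: pip install pytrends
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
baseline_terms = ["cancer", "angina", "exercise", "uber"]      # the baselines above
pytrends.build_payload(baseline_terms, timeframe="2005-01-01 2015-09-14")

trends = pytrends.interest_over_time()     # weekly search-interest index, 0-100, per term
print(trends.drop(columns="isPartial").tail())
```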

Some of these findings line up with my own expectations, but there were quite a few surprises. It’s illuminating to compare Excel to Google Sheets. The Down Syndrome collapse is a marker for a dramatic social change — the world’s biggest eugenics program — that has gotten very little public comment. I didn’t think interest in AI would be in decline, and the Facebook/Twitter curves are quite surprising.

Suddenly I feel like Hari Seldon.

I’ll be back ...

See also:

Saturday, April 26, 2014

Salmon, Piketty, Corporate Persons, Eco-Econ, and why we shouldn't worry

I haven’t read Piketty’s Capital in the Twenty-First Century. I’ll skim it in the library some day, but I’m fine outsourcing that work to DeLong, Krugman and Noah.

I do have opinions of course! I’m good at having opinions.

I believe Piketty is fundamentally correct, and it’s good to see our focus shifting from income inequality to wealth inequality. I think there are many malign social and economic consequences of wealth accumulation, but the greatest threat is likely the damage to democracy. Alas, wealth concentration and corruption of government are self-reinforcing trends. It is wise to give the rich extra votes, lest they overthrow democracy entirely, but fatal to give them all the votes.

What I haven’t seen in the discussions so far is the understanding that the modern oligarch is not necessarily human. Corporations are persons too, and even the Koch Brothers are not quite as wealthy as AAPL. Corporations and similar self-sustaining entities have an emergent will of their own; Voters, Corporations and Plutocrats contend for control of avowed democracies [1]. The Rise of the Machine is a pithy phrase for our RCIIT-disrupted AI age, but the Corporate entity is a form of emergent machine too.

So when we think of wealth and income inequality, and the driving force of emergent process, we need to remember that while Russia’s oligarchs are (mostly vile) humans, ours are more mixed. That’s not necessarily a bad thing - GOOGL is a better master than David Koch. Consider, for example, the silencing of Felix Salmon:

Today is Felix's last day at Reuters. Here's the link to his mega-million word blog archive (start from the beginning, in March 2009, if you like). Because we're source-agnostic, you can also find some of his best stuff from the Reuters era at Wired, Slate, the Atlantic, News Genius, CJR, the NYT, and NY Mag. There's also Felix TV, his personal site, his Tumblr, his Medium archive, and, of course, the Twitter feed we all aspire to.

Once upon a time, a feudal Baron or Russian oligarch would have violently silenced an annoying critic like Salmon (example: Piketty - no exit). Today’s system simply found him a safe and silent home. I approve of this inhuman efficiency.

So what comes next? Salmon is right that our system of Human Plutocrats and emergent Corporate entities is more or less stable (think - stability of ancient Egypt). I think Krugman is wrong that establishment economics fully describes what’s happening [2]; we still need to develop eco-econ — which is not “ecological economics”. Eco-econ is the study of how economic systems recapitulate biological systems, and how economic parasites evolve and thrive [3]. Eco-econ will give us some ideas on how our current system may evolve.

In any event, I’m not entirely pessimistic. Complex adaptive systems have confounded my past predictions. Greece and the EU should have collapsed, but the center held [4]. In any case, there are bigger disruptions coming [5]. We won’t have to worry about Human plutocrats for very long….

See also

and from my stuff

- fn -

[1] I like that 2011 post and the graphic I did then. I’d put “plutocrats” in the upper right these days. The debt ceiling fight of 2011 showed that Corporations and Plutocrats could be smarter than Voters, and the rise of the Tea Party shows that Corporations can be smarter than Voters and Plutocrats. Corporations, and most Plutocrats, are more progressive on sexual orientation and tribal origin than Voters. Corporations have neither gender nor pigment, and they are all tribes of one.

I could write a separate post about why I can’t simply edit the above graphic, but even I find that tech failure too depressing to contemplate.

[2] I don’t think Krugman believes this himself - but he doesn’t yet know how to model his psychohistory framework. He’s still working on the robotics angle.

[3] I just made this up today, but I dimly recall reading that the basic premises of eco-econ have turned up in the literature many times since Darwin described natural selection in biological systems. These days, of course, we apply natural selection to the evolution of the multiverse. Applications to economics are relatively modest.

[4] Perhaps because Corporations and Plutocrats outweighed Voters again — for better or for worse.

[5] Short version — we are now confident that life-compatible exoplanets are dirt common, so the combination of the Drake Equation (no, it’s not stupid) and the Fermi Paradox means that wandering/curious/communicative civilizations are short-lived. That implies we are short-lived, because we’re like that. The most likely thing to finish us off is our technological heirs.

Saturday, August 03, 2013

Sympathy for Economists

A good feature of teenagers is that they sometimes sleep in. So Emily and I can chat on a quiet Saturday morning about wearable tech (remember 1988?), and how 2013 feels a bit like 1997 or 2007 or 1923. The times when technological change seems to rev up again. To be followed, if recent  history is any guide, by yet another crash.

Which brings us to Economics, and especially to economists like Brad DeLong and Paul Krugman.

I suspect that DeLong, and even Krugman, believe that the fundamental drivers of our economic instability are the simultaneous and related rise of both digital technologies and China and India (RCIIIT). Both DeLong and Krugman have, at various times, written about the disruptive impact of "smart" robots (including robot/human pairings) and the related rise of 'mass disability'. Both, I suspect, share my opinion of the economic consequences of artificial sentience.

These aren't, however, topics they can discuss in the context of models and mechanisms. How do you measure technological disruption? Economists still struggle to describe the productivity impacts of typewriters. Corporations can't make an internal business case for products like Yammer. We can't measure technological disruptions, and what we can't measure we can't model. What Economists can't model they can't discuss, and so they look through a keyhole into a dimly lit room and see monsters, but can't speak of them.

But the situation for Economics is even worse than that. There is a reason Krugman rants about economists who cling to models when all their predictions fail and yet retain academic respect. A discipline without falsifiability can be scholarly, but it can't be a science. It can't progress.

Economics thus lies between the Scylla of the monsters that can't be mentioned, and the Charybdis of the non-falsifiable.

No wonder Economists are dismal.

Sunday, June 09, 2013

Cash purchases driving a new real estate bubble - too much wealth, too few investments

Cash-only real estate speculation in LA, Boston, Miami, San Francisco and so on (emphases mine) ...

... These days, the only way for would-be buyers to secure a home, it often seems, is to offer all cash and be ready to do so within hours, not days.

...first-time home buyers are competing with investors to get into single-family homes with prices approaching $1 million.

... large investors purchasing thousands of properties

... a third of all homes purchased in Los Angeles during the first quarter of this year went for all cash, compared with just 7 percent in 2007. In Miami, 65 percent of homes sold were for cash deals, compared with 16 percent six years ago.

... In Los Angeles, the median price on an all-cash home this year is about $351,000, compared with $230,000 in 2009. Over the same period, the median price over all increased to $410,000, up $85,000. In fact, last month, home prices in Southern California hit their highest level in the last five years.

... Buyers in Boston are offering $100,000 more than the asking price or placing offers on homes they have spent only minutes in.

... He also waived the inspection clause, an increasingly common practice... offers today are more likely to include escalation clauses, saying buyers will pay an additional amount over the highest bid.

... cash purchases fueled in part by international investors and retirees awash in cash after selling their homes elsewhere....

This fits reports a few months back of large numbers of purchased but unoccupied condominiums in luxury markets.

Where is all the cash coming from? The article doesn't say, but there's vast wealth in China now and few safe places to park it. Real estate is a classic Chinese investment. There's also a large amount of boomer wealth in play as my generation (noisily, because we are nothing if not loud) shuffles off the stage.

What happens next? I assume we're in for another one of our worldwide boom-bust cycles...

Gordon's Notes: Stock prices - resorting to another dumb hydraulic analogy


Why are we having these worldwide boom-bust cycles?

Ahh, if only we knew. Since I'm not an economist, and thus I have neither credibility to protect nor Krugman to fear, I'm free to speculate. I think the world's productive capacity has grown faster than the ability of our financial systems to manage it. There's too much wealth and potential wealth (in a fundamental sense, regardless of central bank actions) for our system to productively absorb. We're filling a 100 ml vial from a 10 liter bucket. Or, in Bernd Jendrissek's words: "The gain is too high for the phase shift for this feedback loop to be stable."
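Jendrissek's gain-and-phase framing can be made concrete with a toy delayed feedback loop. This is only an illustration of the instability he's describing, not a model of any actual market; all the numbers are made up:

```python
# A corrective force applied with a lag: if the loop gain is below 1 the swings damp out,
# if it's above 1 they grow -- a cartoon boom-bust cycle.
def simulate(gain, lag=2, steps=24):
    x = [1.0] * lag                    # small initial displacement from "fair value"
    for _ in range(steps):
        x.append(-gain * x[-lag])      # the correction arrives 'lag' periods too late
    return x

print([round(v, 2) for v in simulate(gain=0.8)[-4:]])   # damped: oscillation shrinks
print([round(v, 2) for v in simulate(gain=1.2)[-4:]])   # unstable: oscillation grows
```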

If there's anything to this idea then we little people may want to remember the advice of Victor Niederhoffer, a wealthy man who has lost vast amounts of money in the post RCIIIT economy:

... Whenever disaster strikes, the very sagacious wealthy people take their canes, and they hobble down from their stately mansions on Fifth Avenue, and they buy stocks to the extent of their bank balances, and then a week or two later, the market rises, they deposit the overplus in their accounts, invest it in blue-chip real estate, and retire back to their stately mansions. That's probably the best way of making money, to be a specialist in panics. Whenever there's panic hanging in the air, that's a great time to invest...

Of course this implies one has a relatively tax efficient way of moving money in and out of cash -- and lots of cash to gamble without fear of unemployment. When downturns hit most of us need our cash as a hedge against job loss; only the 0.05% don't need to work. Even so, there may be a lesser version of the long game we can play to at least limit our crash pain. For example, perhaps a 21st century John Bogle will create a derivative that retail investors can purchase on the rise (when we have cash) that pays off on the fall (when we don't).

How long will it be before the world's financial systems catch up with our productive capacity -- especially given the rise of Africa and the unfolding of the post-AI world?

I suspect not in my lifetime [1]. It's whitewater as far as the eye can see.

Update: In surfing lingo a hard-breaking wave is called a "Cruncher". Perhaps "new Cruncher" is a better term than "new bubble".

- fn -

[1] Though if wealth were better distributed we might have the equivalent of filling that 100 ml vial from 10,000 1 ml vials. Much easier to stop before overfilling.

Saturday, June 01, 2013

What pedestrians and cyclists can do while we wait for the end of human drivers

After 40 years of biking with cars, and almost as long driving with them, I cannot avoid the obvious.

Humans cannot drive cars safely around anything smaller than a Honda Civic.

This is not a matter of rules or training. We could make violation of the three foot passing rule a capital crime and cars would still pass too close to pedestrians and cyclists. Even without benefit of age, smartphones or alcohol human drivers will signal left and go straight, open driver side doors into oncoming bicyclists, and do rolling stops through pedestrians. Human drivers will continue to not see motorcycles, pedestrians, or bikes.

Our evolutionary history didn't prepare us for the job of driving cars. Non-armored road travelers need the Google driverless car; within a few years of its affordable introduction friends won't let friends drive. Shortly thereafter human drivers will become uninsurable. (Shortly after that humans may lose the right to vote, but that's another post :-).

Alas, fully autonomous cars are probably twenty to thirty years away -- changes on this scale take much longer than enthusiasts imagine. Happily, we don't have to wait that long. Both Volvo and Volkswagen are developing pedestrian and bicycle avoidance systems. We need to make these mandatory in cars sold after 2018. In the same time period smartphones can be broadcasting increasingly precise location information to nearby vehicles, augmenting visual detection systems.

We should accelerate the effective Dutch-inspired trend of segregating bicycles from cars. We should continue to study bicycle and pedestrian accidents in detail and apply lessons learned. We should get blinking red lights on the backs of all bicycles, and the unarmored would be wise to wear eye searing colors. Some sting operations or video monitors to enforce Minnesota's largely ignored and often unknown crosswalk laws would not be amiss.

There's a lot we can do while we wait to celebrate the end of the human driver.

See also:

mine:

Friday, December 28, 2012

The Post-AI era is also the era of mass disability

Is Stephen Hawking disabled?


Obviously this is a rhetorical question. Hawking is 70 and retired from the Lucasian chair, but he remains a tenured professor. He is a bestselling author of multiple popular books, has been married twice, and has three children.

He is clearly not disabled.

Is a physically strong male with an IQ of 65 disabled in Saint Paul, MN? Yes, of course. Forty years ago, however, there were many jobs that would pay above minimum wage for a strong back and a willingness to do tedious work. Heck, in those days men earned money literally pumping gas.

Would Stephen Hawking have been disabled in 1860? Yeah, for the short duration of his 19th century life.

Disability is relative to the technological environment. Once a missing leg meant disability, now it rules out only a small number of jobs. Once a strong back meant a job, now it means little.

Technology changes the work environment; it makes some disabled, and others able. It's an old trend; automated looms put textile artisans out of work 200 years ago.

Those artisans had a rough time, but workers with similar skill sets have done well since. Economic theory and history teaches us that disruptive technological transformation can produce transient chaos, but over time resulting economic growth will benefit almost everyone. More or less.

But history only repeats until it doesn't. Economic benefits don't have to be evenly distributed. If fewer jobs require strong backs, then people whose primary talent is the strength of their spine may earn relatively less. If supply exceeds demand, the price of labor will fall below the "zero bound" of the minimum wage. Some backs won't find work; those workers are disabled.

Most people can play in more than one game, but the competition is getting tougher and the space for human advantage is shrinking in the post-AI era. The percentage of the population who are effectively disabled has been rising along with national income and the Gini coefficient. It's not just the pioneers now; respectable economists are wondering about tipping points.

So enter The Wolverine...


Krugman acts as though he's just started thinking about the post-AI economy, but he isn't fooling anyone. We know he grew up on Asimov and the Three Laws. Now that the election is done, and he doesn't have to be a strict non-structuralist any more [1], he's started writing about what the post-AI era means for income inequality using the phrase "Capital-biased technology". He has recently promised us a "future" column on policy implications.

Future - because he's trying to break it to us gently. I, of course, have no such qualms. A year ago I wrote about the policy implications of the Post-AI era (emphases added) ...

The AI Age: Siri and Me

... Economically, of course, the productivity/consumption circuit has to close... If .1% of humans get 80% of revenue, then they'll be taxed at 90% marginal rates and the 99.9% will do subsidized labor. That's what we do for special needs adults now, and we're all special needs eventually...

Or, in other words, "From each according to his ability, to each according to his need". In the post-AI era we will need to create employment for the mass disabled.

See also:

 and from the K (NYT):

elsewhere

- fn -

[1] Clark Goble made me read a critique of my team's champion. I found hurt feelings (K has claws), but no substantive critiques. That's a shame; I've long wanted to see somebody like Mankiw (who was once readable) engage K on his denial of structural factors in 2009-2012 unemployment. I suspect K has always known of ways to argue the structural case despite the persuasive low global-demand data. I wonder if he was disappointed that nobody dared challenge him.

Sunday, October 07, 2012

Baumol's cost disease: medicine, education and post-AI disruption

William Baumol was born in 1922. In 2012, 90 years later, he's listed as first author on a new book, The Cost Disease: Why Computers Get Cheaper and Health Care Doesn't.

Damn. It's one thing to win the brain lottery, but winning the longevity lottery is really piling on. Even if all he did was read the page drafts, he's doing pretty well.

That's not the most irritating thing about Baumol though. The most irritating thing is that I keep forgetting about his fundamental insight, one that I first blogged about 8 years ago...

... The disparity between rapid productivity growth in mechanized sectors and slow productivity growth in human-service jobs produces Baumol's disease—named after the economist William J. Baumol. According to Baumol, in a technological economy falling prices for manufactured goods and automated services eventually increase the relative cost of labor-intensive services such as nursing and teaching. Baumol has predicted that the share of gross domestic product spent on health care will rise from 11.6 percent in 1990 to 35 percent in 2040, while the share spent on education will rise from 6.7 percent to 29 percent.

The shifting of relative costs need not in itself be a problem. If Americans in 2050 or 2100 pay far more (as a percentage of their spending) for health care and education than they did in 1900, they may still be better off—if they pay correspondingly less for other goods and services. The problem is that as the relative cost of services like education and health care rises, more and more Americans will find themselves in service-sector jobs that, unlike the professions, have historically been low-wage...

Today Education and Health Care are famously afflicted by Baumol's disease. Law used to be, but then full-text search decimated legal employment (and yet, legal costs have not fallen ....).

Baumol argues that even if these professions remain labor intensive, and even if health care comes therefore to claim 50% of our GDP, that we'll be able to afford it nonetheless.
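The mechanism is easy to see in a toy two-sector sketch. These numbers are purely illustrative, not Baumol's own figures: manufacturing productivity grows, service productivity doesn't, wages in both sectors track the dynamic sector, so the service share of nominal spending climbs even though everyone can still afford both.

```python
def stagnant_sector_share(years, prod_growth=0.02):
    """Nominal spending share of the stagnant sector (care, teaching), assuming
    households keep buying one unit of each output per year."""
    productivity = (1 + prod_growth) ** years   # output per hour in manufacturing
    wage = productivity                         # economy-wide wages track the dynamic sector
    service_price = wage                        # a service unit still takes one hour of labor
    goods_price = wage / productivity           # stays at 1.0: goods get relatively cheaper
    return service_price / (service_price + goods_price)

for y in (0, 25, 50):
    print(y, round(stagnant_sector_share(y), 2))   # 0.5 -> ~0.62 -> ~0.73
```

The stagnant sector's share rises, but real income rises too; that's the "we'll be able to afford it" half of his argument.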

His argument is persuasive, but is that likely to happen? College education today is experiencing widespread disruption including iTunes U, Coursera (Caltech, University of Toronto and many more), edX (MIT, Harvard, Berkeley), California open-source eTexts, Stanford Online, Khan Academy and numerous for-profit ventures. Education is deep in whitewater times.

Health care, particularly medical care, isn't changing as quickly. The fundamental tasks of sorting out what's going on with a particular patient, and how best to manage that problem in their personal context, and then how to manage the patient's psyche and health -- those haven't changed much [1] over the past century. 

We're accumulating more health care data though -- for better and for worse [3]. "Analytics" is the "hot" area in health care IT now, including running Google/Facebook style algorithms against large clinical and financial data sets [2].

That doesn't necessarily sound disruptive, unless you know that the techniques used in extracting meaning from large data sets are the same technologies that power our post-AI world. (Yeah, I used the forbidden acronym.) If you know that, then you know "Analytics" can be thought of as the current pseudonym for "Medical AI". Whether it's disruptive or not remains to be seen, but I suspect that we'll get to health care cost disruption well before health care hits 50% of a much larger future GDP.

 [1] It's interesting to read articles written in the 1970s during the early days of diagnostic lab testing. They imagined patients walking into a series of lab test queues staffed with low wage workers, then emerging with a set of diagnoses and plans. Similar plans arose during the last period of genomic enthusiasm. They will come again ... 
[2] The basic stats are generally pretty simple, if only because more complex algorithms don't scale well to terabyte data sets. The trick is that simple stats on large data sets enabled by cheap computation can produce surprisingly useful answers. This is best described in the terrific Halevy, Norvig and Pereira paper: The Unreasonable Effectiveness of Data.
[3] In 1996 I was part of a theater-style presentation called "Dark Visions: 1996-2010" that included a fanciful and intentionally dramatic timeline of dystopic data sharing. By 2005 India was the world center of clinical AI, and by 2006 elite health care providers had moved to more private paper records. Maybe we were a bit hasty :-).

See also:

Saturday, July 07, 2012

Where would I hide a military AI project?

If I were a somewhat different person, and life played out quite differently, I can imagine being a senior NSA bureaucrat.

I'd read about Google's cat-recognition engine, and the likelihood of a mouse-level AI within the decade, and I'd be thinking that the NSA needs to get there first.

Not because there's an obvious military application, but because there could be a weapon in there somewhere and because someone in China is thinking the same thing.

So I'd add a few hundred million a year to my off-budget budget; just pocket change really. Then I'd build a data center for my testing and I'd get to a mouse-level AI in six years. If I needed to I could pry the secret sauce out of Google's hands; I'm sure there are ways to do that but it's probably not necessary. Google publishes much of its AI research.

I'd build it just to see what it was like, and so I could assess the military potential.

Problem is, modern AI experiments take a lot of power and produce a lot of heat. I wonder how I'd disguise it...

Wednesday, June 27, 2012

Google's A.I. recognizes cats. Laugh while you can.

Google's brain module was trained on YouTube stills. From vast amounts of data, one image spontaneously emerged ...
Using large-scale brain simulations for machine learning and A.I. | Official Google Blog 
".. we developed a distributed computing infrastructure for training large-scale neural networks. Then, we took an artificial neural network and spread the computation across 16,000 of our CPU cores (in our data centers), and trained models with more than 1 billion connections.  
...  to our amusement, one of our artificial neurons learned to respond strongly to pictures of... cats ... it “discovered” what a cat looked like by itself from only unlabeled YouTube stills. That’s what we mean by self-taught learning... 
... Using this large-scale neural network, we also significantly improved the state of the art on a standard image classification test—in fact, we saw a 70 percent relative improvement in accuracy. We achieved that by taking advantage of the vast amounts of unlabeled data available on the web, and using it to augment a much more limited set of labeled data. This is something we’re really focused on—how to develop machine learning systems that scale well, so that we can take advantage of vast sets of unlabeled training data.... 
... working on scaling our systems to train even larger models. To give you a sense of what we mean by “larger”—while there’s no accepted way to compare artificial neural networks to biological brains, as a very rough comparison an adult human brain has around 100 trillion connections.... 
..  working with other groups within Google on applying this artificial neural network approach to other areas such as speech recognition and natural language modeling."
Hah, hah, a cat. That's so funny. Unless you're a mouse of course.

The mouse cortex has 14 million neurons and a maximum of 45K connections per neuron, so ballpark estimate, perhaps 300 billion connections (real estimates are probably known from the mouse connectome project but I couldn't find them). So in this first pass Google has less than 1% of a mouse connectome.

Assuming they can double the connection count every year, they should hit mouse scale in about nine years, or around 2021. There's a good chance you and I will still be around then.
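Back-of-envelope, with the assumptions made explicit: the neuron count and the ~300 billion ballpark come from the paragraph above; the per-neuron average and the yearly doubling are guesses, not measured values.

```python
import math

mouse_neurons = 14e6
avg_connections = 22_000                       # assume roughly half the 45K maximum
mouse_connections = mouse_neurons * avg_connections       # ~3.1e11, i.e. ~300 billion

google_connections = 1e9                       # "more than 1 billion connections" (2012)
print(f"fraction of a mouse: {google_connections / mouse_connections:.2%}")   # ~0.3%

doublings = math.log2(mouse_connections / google_connections)                 # ~8.3
print(f"~{math.ceil(doublings)} doublings; at one per year that's ~{2012 + math.ceil(doublings)}")
```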

I've long felt that once we had a "mouse-equivalent" connectome we could probably stop worrying about global warming, social security, meteor impacts, cheap bioweapons, and the Yellowstone super volcano.

Really, we're just mice writ large. That cat is looking hungry.

Incidentally, Google didn't use the politically incorrect two letter acronym in the blog post, but they put it, with periods (?), in the post title.

Tuesday, February 14, 2012

Slavery, technology, and the future of the weak

Reading 9th grade world history as an adult I read over the names of the wicked and the great. I round years to centuries, and nations to regions.

Other things catch my eye. Reading of slavery in ancient Rome and Greece, I think of India's untouchables. The theme of surplus built upon slavery runs constantly through human history, until it blends into an industrial model of market utilization of "The Weak".

Yeah, progress happens. I'd choose a minimum wage job in Norway, or even in Minnesota, over slavery.

So what's next? In a globalized post-industrial world, does the labor of the "Weak" have sufficient value to support a life of health and balance? If it does not, if within the framework of the post-AI world 20% of the population is effectively disabled, then what do we do?

Slavery was one answer to the problem of the weak. Industrial and agricultural employment was another. If we are fortunate, we will provide a third answer.

See also:

Monday, February 06, 2012

Siri struggling

This time around, I got my 4S early in the adoption cycle. So I remember when Siri mostly worked.

Since then, Siri works, at best, about half the time. She's overloaded. Even when I get through, processing seems more error prone, perhaps because accuracy has been sacrificed to manage capacity.

Since the initial results were pretty decent, I assume Siri will eventually work. We've seen this before; it has taken about two years for Facetime to become a useful solution.

For now I've learned to avoid Siri during the US evenings. During the mornings results are much better. I've also learned to break my requests into stages, allowing Siri to scope her language processing in smaller chunks. To create a reminder I start with 'remind me' ... then I wait ... then the reminder text ... then the time ... then I have to wait for the confirm.

Processing aside, there is obvious room for improvement. We need, we REALLY need, a way to tell Siri to give up and start over again. We need a way to tell Siri 'yes and confirm' so we can skip the confirmation dialog. I assume Apple omitted these commands because they don't market well - they expose the limitations of 2011 Siri. Just like Graffiti exposed the limitations of 1990s handwriting recognition. Time to give a bit so we can get better results from a useful tool.

Wednesday, February 01, 2012

Translating brain electrical activity into word sounds

Under some conditions, researchers are able to translate brain electrical signals into concepts/sounds which can be expressed using English words.

From the description I think the analysis focused on sound generation, so it was downstream from concept generation (which might express words before we were conscious of thinking them).

I have been following this research from a distance, and I knew the 'lie detectors' were getting pretty good, but this genuinely surprises me.

Science fiction writers are now frantically revising works in press. Charles Stross is probably banging his head on the wall right now.

Stunning, really. I'd been hopeful that I'd avoid the inevitable Singularity*, and that my kids would have good lives before it hits. Now I'm less optimistic.

* My favorite explanation for the Fermi Paradox.

Friday, December 02, 2011

The AI Age: Siri and Me

Memory is just a story we believe.

I remember that when I was on a city bus, and so perhaps 8 years old, a friend showed me a "library card". I was amazed, but I knew that libraries were made for me.

When I saw the web ... No, not the web. It was Gopher. I read the minutes of a town meeting in New Zealand. I knew it was made for me. Alta Vista - same thing.

Siri too. It's slow, but I'm good with adjusting my pace and dialect. We've been in the post-AI world for over a decade, but Siri is the mind with a name.

A simple mind, to be sure. Even so, Kurzweil isn't as funny as he used to be; maybe Siri's children will be here before 2100 after all.

In the meantime, we get squeezed...

Artificial intelligence: Difference Engine: Luddite legacy | The Economist

... if the Luddite Fallacy (as it has become known in development economics) were true, we would all be out of work by now—as a result of the compounding effects of productivity. While technological progress may cause workers with out-dated skills to become redundant, the past two centuries have shown that the idea that increasing productivity leads axiomatically to widespread unemployment is nonsense...

[there is]... the disturbing thought that, sluggish business cycles aside, America's current employment woes stem from a precipitous and permanent change caused by not too little technological progress, but too much. The evidence is irrefutable that computerised automation, networks and artificial intelligence (AI)—including machine-learning, language-translation, and speech- and pattern-recognition software—are beginning to render many jobs simply obsolete....

... The argument against the Luddite Fallacy rests on two assumptions: one is that machines are tools used by workers to increase their productivity; the other is that the majority of workers are capable of becoming machine operators. What happens when these assumptions cease to apply—when machines are smart enough to become workers? In other words, when capital becomes labour. At that point, the Luddite Fallacy looks rather less fallacious.

This is what Jeremy Rifkin, a social critic, was driving at in his book, “The End of Work”, published in 1995. Though not the first to do so, Mr Rifkin argued prophetically that society was entering a new phase—one in which fewer and fewer workers would be needed to produce all the goods and services consumed. “In the years ahead,” he wrote, “more sophisticated software technologies are going to bring civilisation ever closer to a near-workerless world.”

...In 2009, Martin Ford, a software entrepreneur from Silicon Valley, noted in “The Lights in the Tunnel” that new occupations created by technology—web coders, mobile-phone salesmen, wind-turbine technicians and so on—represent a tiny fraction of employment... In his analysis, Mr Ford noted how technology and innovation improve productivity exponentially, while human consumption increases in a more linear fashion.... Mr Ford has identified over 50m jobs in America—nearly 40% of all employment—which, to a greater or lesser extent, could be performed by a piece of software running on a computer...

In their recent book, “Race Against the Machine”, Erik Brynjolfsson and Andrew McAfee from the Massachusetts Institute of Technology agree with Mr Ford's analysis—namely, that the jobs lost since the Great Recession are unlikely to return. They agree, too, that the brunt of the shake-out will be borne by middle-income knowledge workers, including those in the retail, legal and information industries...

Even in the near term, the US Labor Department predicts that the 17% of US workers in "office and administrative support" will be replaced by automation.

It's not only the winners of the 1st world birth lottery that are threatened.

 China's Foxconn (Taiwan based) employs about 1 million people. Many of them will be replaced by robots.

It's disruptive, but given time we could adjust. Today's AIs aren't tweaking the permeability of free space; there are still a few things we do better than they. We also have complementary cognitive biases; a neurotypical human with an AI in the pocket will do things few unaided humans can do. Perhaps even a 2045 AI will keep human pets for their unexpected insights. Either way, it's a job.

Perhaps more interestingly, a cognitively disabled human with a personal AI may be able to take on work that is now impossible.

Economically, of course, the productivity/consumption circuit has to close. AIs don't (yet) buy info-porn. If .1% of humans get 80% of revenue, then they'll be taxed at 90% marginal rates and the 99.9% will do subsidized labor. That's what we do for special needs adults now, and we're all special needs eventually.
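The arithmetic behind that hypothetical is trivial but worth seeing; all the percentages are the post's illustrative numbers, not forecasts:

```python
# If 0.1% of people capture 80% of income and pay a 90% marginal rate on it,
# the transfer pool dwarfs what the other 99.9% earn directly.
total_income = 100.0                              # arbitrary units
top_income = 0.80 * total_income                  # the 0.1%'s share
transfer_pool = 0.90 * top_income                 # 72 units collected in tax
rest_market_income = total_income - top_income    # 20 units earned by the 99.9%

print(f"transfers = {transfer_pool:.0f} units, {transfer_pool / rest_market_income:.1f}x "
      f"the 99.9%'s own market income")           # 3.6x -> most consumption is subsidized
```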

So, given time, we can adjust. Problem is, we won't get time. We will need to adjust even as our world transforms exponentially. It could be tricky.

See also:

Sunday, October 09, 2011

Siri, the Friendly AI

The iPhone 4S video shows a young runner asking Siri to rearrange his schedule. It doesn't show him running into the path of another Siri user driving his convertible.

Siri is the iPhone AI that understands how your phone works and, in theory, understands a domain constrained form of natural language. It has a long AI legacy; it's a spinoff from SRI Artificial Intelligence Center and the DARPA CALO project.

When Siri needs to know about the world it talks with Wolfram Alpha. That's where the story becomes a Jobsian fusion of the personal and the technical, and Siri's backstory becomes a bit ... unbelievable.

Siri was launched as the unchallenged king of technology lay dying. The Wolfram part of Siri began when Jobs was in exile ...

Wolfram Blog : Steve Jobs: A Few Memories

I first met Steve Jobs in 1987, when he was quietly building his first NeXT computer, and I was quietly building the first version of Mathematica. A mutual friend had made the introduction, and Steve Jobs wasted no time in saying that he was planning to make the definitive computer for higher education, and he wanted Mathematica to be part of it...

Over the months after our first meeting, I had all sorts of interactions with Steve about Mathematica. Actually, it wasn’t yet called Mathematica then, and one of the big topics of discussion was what it should be called. At first it had been Omega (yes, like Alpha) and later PolyMath. Steve thought those were lousy names. I gave him lists of names I’d considered, and pressed him for his suggestions. For a while he wouldn’t suggest anything. But then one day he said to me: “You should call it Mathematica”...

... In June 1988 we were ready to release Mathematica. But NeXT had not yet released its computer, Steve Jobs was rarely seen in public, and speculation about what NeXT was up to had become quite intense. So when Steve Jobs agreed that he would appear at our product announcement, it was a huge thing for us.

He gave a lovely talk, discussing how he expected more and more fields to become computational, and to need the services of algorithms and of Mathematica. It was a very clean statement of a vision which has indeed worked out as he predicted....

A while later, the NeXT was duly released, and a copy of Mathematica was bundled with every computer...

... I think Mathematica may hold the distinction of having been the only major software system available at launch on every single computer that Steve Jobs created since 1988. Of course, that’s often led to highly secretive emergency Mathematica porting projects—culminating a couple of times in Theo Gray demoing the results in Steve Jobs’s keynote speeches.

... tragically, his greatest contribution to my latest life project—Wolfram|Alpha—happened just yesterday: the announcement that Wolfram|Alpha will be used in Siri on the iPhone 4S...

Siri's backstory is a good example of how you can distinguish truth from quality literature. Literature is more believable.

Siri isn't new of course. We've been in the post-AI world since Google displaced Alta Vista in the 1990s. Probably longer.

What's new is a classic Jobs move; the last Jobs move made during his lifetime. It's usually forgotten that Apple did not invent the MP3 player. They were quite late to the market they transformed. Similarly, but on a bigger and longer scale, personalized AIs have been with us for years.  AskJeeves was doing (feeble) natural language queries in the 1990s. So Siri is not the first.

She probably won't even work that well for a while. Many of Apple's keynote foci take years to truly work (iChat, Facetime, etc). Eventually though, Siri will work. She and her kin will engage in the complexity wars humans can't manage, perhaps including our options bets. Because history can't resist a story, Siri will be remembered as the first of her kind.

Even her children will see it that way.

Update 10/12/11: Wolfram did a keynote address on 9/26 in which he hinted at the Siri connection to Wolfram Alpha: "It feels like Mathematica is really coming of age. It’s in just the right place at the right time. And it’s making possible some fundamentally new and profoundly powerful things. Like Wolfram|Alpha, and CDF, and yet other things that we’ll have coming over the next year." The address gives some insight into the world of the ubiquitous AI. (No real hits on that string as of 10/12/11. That will change.)

Sunday, September 18, 2011

Life in the post-AI world. What's next?

I missed something new and important when I wrote ...

Complexity and air fare pricing: Houston, we have a problem

... planning a plane trip has become absurdly complex. Complex like choosing a cell phone plan, getting a "free" preventive care exam, managing a flex spending account, getting a mortgage, choosing health insurance, reading mobile bills, fighting payment denials, or making safe product choices. Complex like the complexity collapse that took down the western world.

I blame it all on cheap computing. Cheap computing made complexity attacks affordable and ubiquitous...

The important bit is what's coming next and now in the eternal competition.

AI.

No, not the "AIs" of Data, Skynet and the Turing Test [1]. Those are imaginary sentient beings. I mean Artificial Intelligence in the sense it was used in the 1970s -- software that could solve problems that challenge human intelligence. Problems like choosing a bike route.

To be clear, AIs didn't invent mobile phone pricing plans, mortgage traps or dynamic airfare pricing. These "complexity attacks" were made by humans using old school technologies like data mining, communication networks, and simple algorithms.

The AIs, however, are joining the battle. Route finding and autonomous vehicles and (yes) search are the obvious examples. More recently services like Bing flight price prediction and Google Flights are going up against airline dynamic pricing. The AIs are among us. They're just lying low.

Increasingly, as in the esoteric world of algorithmic trading, we'll move into a world of AI vs. AI. Humans can't play there.

We are in the early days of a post-AI world of complexity beyond human ken. We should expect surprises.

What's next?

That depends on where you fall out on the Vinge vs. Stross spectrum. Stross predicts we'll stop at the AI stage because there's no real economic or competitive advantage to implementing and integrating sentience components such as motivation, self-expansion, self-modeling and so on. I suspect Charlie is wrong about that.

AI is the present. Artificial Sentience (AS), alas, is the future.

[1] Recently several non-sentient software programs have been very successful at passing simple versions of the Turing Test, a test designed to measure sentience and consciousness. Human interlocutors can't distinguish Turing Test AIs from human correspondents. So either the Turing Test isn't as good as it was thought to be, or sentience isn't what we thought it was. Or both.

Update 9/20/11: I realized a very good example of what's to come is the current spambot war. Stross, Doctorow and others have half-seriously commented that the deception detection and evasion struggle between spammers and Google will birth the first artificial sentience. For now though it's an AI vs. AI war; a marker of what's to come across all of commercial life.

See also:

Update 9/22: Yuri Milner speaking at the "mini-Davos" recently:
.... Artificial intelligence is part of our daily lives, and its power is growing. Mr. Milner cited everyday examples like Amazon.com’s recommendation of books based on ones we have already read and Google’s constantly improving search algorithm....
I'm not a crackpot. Ok, I am one, but I'm not alone.

Saturday, September 17, 2011

Complexity and air fare pricing: Houston, we have a problem.

Early in my life air travel was almost as expensive as today. At that time, however, we had travel agents and competitive service. It was hassle free.

Later air travel was inexpensive and hassle free. The world felt smaller.

Then it became complicated -- but travel software made up for lost travel agents. We were ahead of the airlines.

Now, it's not so good. It's not just the security hassles. It's not just that the cost of a Minneapolis to Montreal trip has gone up 20% a year for the past four years (now doubled, Hawaii and Europe are cheaper).

It's also that planning a plane trip has become absurdly complex. Complex like choosing a cell phone plan, getting a "free" preventive care exam, managing a flex spending account, getting a mortgage, choosing health insurance, reading mobile bills, fighting payment denials, or making safe product choices. Complex like the complexity collapse that took down the western world.

I blame it all on cheap computing. Cheap computing made complexity attacks affordable and ubiquitous. [1]

In my most recent experience with information asymmetry I found tickets on US Airways for $490 (1 stop) on both Bing and Kayak. When I added a 2nd traveler, however, the price of both tickets increased by $100. (This was harder to spot on the US Airways site as they list deceptive prices, hiding all the "additional fees" airlines carved out to disguise price increases.)

A bit of research (time is how we pay our complexity tax) revealed this happens when the 1st ticket allegedly uses the last "cheap" seat on a flight. The next ticket costs more, and because airlines are loath to confess this they increase the price of both. That may be so, but it means there's a great incentive to have a few cheap seats that will attract hits from travel sites, but that will turn into high-priced tickets for the 2nd passenger. This doesn't even have to be planned; natural selection means this kind of emergent "happy accident" of complexity, once discovered, will be leveraged.

This has costs. Maybe high costs. We pay them either in cash lost to legal frauds, or in time. I think they have more to do with the lesser depression than most admit.

It would probably be cheaper for me to just pay my fraud tax to the airlines, but of course I'm not going quietly. I'm studying the (now obsolete) tricks of the trade [2]:

  • Shop Tuesday at 3pm ET
  • Start shopping 3.5 months before departure, buy prior to 14 days
  • Tues, Wed and Sat are cheapest days to fly

[1] In the words of James Galbraith (emphases mine): "... The financial world, as it exists, has nothing to do with the commodity world of real exchange economics with its delicate balance of interacting forces. It is the world of technology at play in the form of quasi mass produced legal instruments of uncontrolled complexity. It is the world of, in other words, of evolutionary specialization in the never ending dance of predator and prey...
[2] Seems like there's opportunity for outsourcing complexity management to a new age travel agent and their equivalent for managing the complex scams of everyday life. I fear, however, that only a few of us realize we need help.

Update: Twelve hours after posting I was able to buy both tickets for a total of $200 less than the Saturday price. Same times and planes. I learned ...

  • Email alerts are worthless. I think they're just a way to harvest email for spam (we live on Planet Chum). Instead I took advantage of a Kayak feature -- they save the last search in a short list on the main screen. I refreshed this twice daily. Between Saturday night and Sunday night I was able to get both prices at the listed price.
  • I had to keep referencing the search results Kayak provided. The US Airways site kept substituting the flight I didn't want as the "preferred option". It took me 4 runs to get it right. It's hard to explain what they were doing but to succeed I had to carefully track all the flight numbers.
  • Kayak passed my reservation to US Airways as 2 adults. The flight was 1 adult and 1 child. I suspected I needed the Kayak reference to get the price I wanted. Kayak passes its request through URL parameters (it only sort of works), so I edited the parameters to 1 adult and 1 child (a rough sketch of that kind of edit follows this list).
  • US Airways makes pointless use of Flash to animate simple result display. This is revealing.
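For the curious, the URL edit mentioned above is just ordinary query-string surgery. The parameter names here are placeholders; the real handoff URL used different, undocumented keys, so treat this as the general technique rather than Kayak's actual interface:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Hypothetical handoff URL -- "adults"/"children" stand in for whatever keys the real one used.
url = "https://www.example-ota.com/book?flight=US1234&adults=2&children=0"

parts = urlparse(url)
params = parse_qs(parts.query)
params["adults"], params["children"] = ["1"], ["1"]          # 1 adult + 1 child, not 2 adults
edited = urlunparse(parts._replace(query=urlencode(params, doseq=True)))
print(edited)
```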
See also:

Thursday, June 30, 2011

Stross whiffs on the Singularity

Charlie Stross has been heads down writing for a while, but he must have his books in the bag because his blog is aflame again.

Naturally, knowing we crave red raw meat, he started with an attack on geek theology. He beat up on the Singularity.

Go read the essay. Here's my quick digest of his three arguments:

  1. We won't build super-intelligent sentient computers because .... well ... we just won't ... because .... we're not that stupid and it wouldn't serve any obvious purpose.
  2. Uploading consciousnesses won't work because we didn't evolve to be uploaded and religious sorts will object.
  3. We aren't living in a Simulation because ... well, we might be ... but it's not falsifiable so ...

Charlie! What happened? This is your most muddled essay in years.

Not to worry too much though. Charlie followed up with three excellent posts. I think he was just rusty.

See also:

PS. Where am I on all things Skynet? I think we'll create artificial sentience and it will be the end of us. Unlike Charlie, I think there will be great economic advantages to push the limits of AI towards sentience, and we won't resist that. I'm very much hoping that is still 80 years away, but I'm afraid I might see it before I die. I think brain uploading is a hopeless dream. As for us living in a Simulation -- it does explain the Fermi Paradox ...

Wednesday, January 05, 2011

The not-so-vast readership of Gordon's notes - and why I keep posting

I get emails when a reader (infrequently) comments. The author deleted this comment, so I'll keep it anonymous ...

Say, is it not odd that you don't have a bunch of readers reading your blog? You have been writing this since 2003 and nobody comments or reads it? Is this even real?

Oh and I figured out how I reached your blog. I was looking for "nobody reads your blog" on google and a comment from your blog showed up on the 47th page.

Its sad and funny at the same time...

I wasn't able to replicate his search results, but unless we're post AI this was a bio post, not a bot post.

It's a good question [4], but there are a lot of blogs that go unread. So mine is not that unusual. What's unusual is that it's been persistently unread for 7 years. So the real question is - "why would anyone write 5,494 posts that nobody reads?" (@9,000 if you add Gordon's Tech) [1]

The short answer is that I read both of Gordon's Blogs. As I wrote back in 2007 ...

... my own very low readership blogs are written for these audiences in this order:

1. Myself. It’s how I learn and think.

2. The GoogleMind: building inferential links for search and reflection.

3. Tech blog: Future readers who find my posts useful to solve a problem they have that I've solved for myself.

4. Gordon's Notes: My grandchildren, so I can say I didn't remain silent -- and my tiny audience of regular readers, not least my wife (hey, we don't get that much time to talk!) ...

Later, when I integrated Google Custom Search, my history of posts began to inform my Google searches. My blogs extend my memory into the wider net.

So that explains why there are 9,000 "John Gordon" posts.

As to why there aren't many comments/readers, I can imagine several reasons ...

  • There's no theme. Gordon's Notes follows my interests, and they wander. At any given time there will be posts that most people find boring, repetitive, or weird.
  • I'm writing for someone like me, Brad DeLong, Charlie Stross, Emily L and others of that esoteric sort. That's an uber-niche audience.
  • I have no public persona (I write using a pseudonym)
  • I like writing, but I don't work at writing. I'd have to work a lot harder to write well enough to be truly readable.
  • I don't market the blog.
  • I update my blog at odd hours, and I'm slow to respond to comments.
  • I have an irregular posting schedule.
  • I don't write about areas where I'm really a world-class expert because I keep my blogging and my employment separate.
  • I often write about the grim side of reality (that is, most of it).

That covers the bases I think. Except ...

Except, it's not quite so simple. It turns out I do have a few readers -- I'm guessing about 100 or so [3], not counting a larger number who come via Google [2], but certainly counting Google itself. Some of my readers are bloggers with substantial readership, and sometimes they respond to what I write.

So I do have an audience after all, it's just very quiet.

See also:

-- fn --

[1] Why do I share thousands of items via Google Reader? Because that's a searchable repository of things I find interesting. Another memory extender.
[2] I don't have a stellar Google ranking, but it's not bad 
[3] About 80 via Google Reader alone, where I share these posts.  There's also Emily, who comments over breakfast. A lot of my posts come out of our discussions.
[4] It wasn't clear when I first posted this that I like the question. I think it's a good question and I think it was meant well. Sorry for not making that clear. I've added this footnote.

Update 1/6/11: Based on comment response I probably have more regular readers than I imagined.

Monday, November 08, 2010

Edging to AI: Constructive (almost) comment spam

It took me a day to realize that this comment on Gordon's Notes: Apologetics: God and the Fermi Paradox was a spam comment (Spomment):

Luke said... Interesting questions you ask - as always enjoy reading your posts. We all have our personal experiences & beliefs, but I do have to challenge you to check out an event coming up in the spring that I recently was introduced to. March 12, 2011 a simulcast called The Case for Christianity is taking place that will address the very question you have asked. Led by Lee Strobel (former Legal Editor of the Chicago Tribune) & Mark Mittelberg, all of the most avoided questions Christians don't like to answer or even discuss. Both are authors of extremely intriguing books, I encourage you to check them out as well as the simulcast in March. Definitely worth the time & worthy of the debate! Thanks again!

It's obvious in retrospect: "interesting questions you ask" is a giveaway. It doesn't address any specific aspect of my post, and it leads directly into an event promotion.

Still, it snuck under my radar -- and Google's too. It's well constructed.

Of course the construction was human, only the targeting was algorithmic. It's a bit of a milestone though -- it's almost a relevant comment.
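The giveaway pattern is easy to sketch once you've seen it: generic flattery plus a promotion, with no reference to what the post actually said. The phrase lists and threshold below are invented for illustration; they're not Google's filter or anyone else's:

```python
GENERIC_PRAISE = ("interesting questions", "enjoy reading your posts", "great post")
PROMO_HINTS = ("simulcast", "event coming up", "check them out", "worth the time")

def looks_like_spomment(comment, post_keywords):
    text = comment.lower()
    engages_post = any(k in text for k in post_keywords)   # does it mention the post's content?
    flattery = any(p in text for p in GENERIC_PRAISE)
    promotion = any(p in text for p in PROMO_HINTS)
    return flattery and promotion and not engages_post

luke = ("Interesting questions you ask - as always enjoy reading your posts. "
        "March 12, 2011 a simulcast called The Case for Christianity is taking place...")
print(looks_like_spomment(luke, {"fermi", "paradox", "apologetics"}))   # True
```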

Charles Stross and others have speculated that spambot wars will spawn hard AI. First, though, they have to become specific, relevant, and constructive. We're getting closer ...

Incidentally, shame on Strobel and Mittelberg for using this kind of sleazoid marketing.

See also:

Wednesday, October 06, 2010

Why you should vote for the Tea Party’s coven in the century of the fruitbat

Christine O’Donnell. Linda McMahon. Sharron Angle.

Names to conjure with! The Tea Party’s fruitbat coven strikes fear into the hearts of rationalists. Together with Minnesota’s Michele Bachmann and President Sarah Palin they …

Oh, excuse me. I’ve got to shut the window. Susan’s grave spinning can be kind of distracting.

Ok, where was I? Got it. Looks grim. Doomed we are. True, Minnesota survived Ventura [1], and this group can’t be as bad as Cheney/Bush, but America is in a grim place. Shouldn’t rationalists be buying gardens in the countryside?

Well, yes, we probably should. But I will make a case for why rationalists should vote fruitbat, even though I lack the convictional courage to do it myself.

Let’s consider just five of the wee challenges that face America in the next thirty years, and think about how Vulcans (my people – Team Obama) would do compared to fruitbats.

First, there’s the relative decline of America as a world power and the growth of American poverty. Obviously the fruitbats will speed this along. But relative decline is going to happen anyway. There’s nothing magical about America. Our post-WW II preeminence was largely a matter of circumstance. Since then we’ve done some things right, and, especially in the Cheney/Bush era, many, many things wrong. We Vulcans managed to avert, for now, Great Depression II, but we couldn’t finish the game. Advantage Vulcan, but only by degree.

Secondly, global climate change. Two words – Nixon. China. We tried, we failed. The fruitbats can’t do worse, and only they can talk to the denialists. Advantage fruitbat.

Thirdly, the end of participatory democracy – China and America converge. Enlightenment thinkers couldn’t anticipate the positive feedback loops that make American law and regulation ever more favorable to large corporate entities (and billionaires, though they are less predictable). We Vulcans have failed on this front. Advantage fruitbat.

Fourth – the reason-resistant bomb. Iran is only the best current example. Mutual Assured Destruction worked [2] because the enemies feared death. Russia, China and the EU are all secular states, and American leadership religion is mostly skin deep (until Bush II [4]). If true believers have control of nuclear delivery systems, and if they believe their deity will either protect them or give them paradise, then we’re in a new world of hurt. It’s hard to see how Vulcans can help here. Maybe fruitbats can talk to them. Maybe religious logicians [3] will stop worrying about a fruitbat-led declining America. Advantage fruitbat, albeit a small one.

Lastly, there’s the Big One. AI, better described as AS (artificial sentience). Skynet – the smarter than you think [5] machines. We don’t survive this one. Vulcan leadership, by sustaining American science, will move this day forward. Fruitbats, by accelerating the decline of America, may slow it down by five to ten years. That might move the end time out of my lifespan, though, alas, probably not out of my children’s lifespan. Advantage fruitbat.

If we add it all up, Vulcans only clearly win on one of the five big challenges. Yes, the fruitbats do accelerate the decline of America – but that might also slow AS work.

I can’t force myself to vote fruitbat. I’m not that rational; I’ll continue to campaign for Vulcan rule. In the near term it is clearly the better choice. If the fruitbats win, however, there is some (slightly) longer term consolation.

- footnotes

[1] Yes, Minnesota is whackier than California. We don’t get the credit we deserve.

[2] To my amazement. The long post-fusion survival of civilization is a strong argument for divine (or other) intervention.

[3] They’re not whackos. Given his stated beliefs and values Ahmadinejad is more rational (for a certain definition of rational), and thus more scary but less annoying, than the fruitbats.

[4] Carter was very religious, but in a peculiarly rational way. He’s a true anomaly.

[5] I’m impressed and disturbed that the NYT put this series together, even though it’s annoying that the last article managed to miss the historic Cyc and active Wolfram Alpha AI projects.