Showing posts with label prediction.

Tuesday, July 10, 2012

Health care: We don't want more stuff, we want more years.

Stanford's Chad Jones and Robert Hall tell us health care spending really is different ...

Why Americans want to spend more on health care (Louis Johnston, MinnPost, 7/6/12)

... Income elasticity measures how much more of a good or service a person will buy if their income goes up by 1 percent. For most goods and services this number is less than 1; that is, if income rises then people will buy more of most goods but they will increase their purchases by less than 1 percent. 

Years of life are different. If you have a medical procedure that extends your life, then the first, second, third and however many extra years you receive are all equally valuable. So if your income rises by 1 percent, you will increase your spending on medical care by at least 1 percent, and possibly more.

Jones, along with Robert E. Hall (also of Stanford) embedded this idea in an economic model and found that it does a good job predicting the path of health care expenditures from 1950 to 2000. Further, they show that if this is true, then the share of GDP we devote to health care could easily rise to 30 percent or more over the next 50 years as people choose to spend more on health care to obtain more years of life.

Thinking about the rise in medical spending this way puts health care policy in a different light. People want to live longer, better lives, and they are willing to pay for it. They don’t want more stuff, they want more life...
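
For concreteness, here's a quick sketch of the elasticity arithmetic Johnston describes. The elasticity values are purely illustrative, not numbers from the Hall-Jones paper.

```python
# Illustrative income-elasticity arithmetic (the elasticity values are made up).

def spending_growth_pct(income_growth_pct: float, income_elasticity: float) -> float:
    """Percent change in spending implied by a given income elasticity."""
    return income_elasticity * income_growth_pct

# Ordinary goods: elasticity below 1, so their share of spending shrinks as income grows.
print(spending_growth_pct(1.0, 0.6))   # 0.6 -- spending rises more slowly than income

# Health care as Hall and Jones model it: elasticity of 1 or more,
# so its share of income (and of GDP) holds steady or rises as we get richer.
print(spending_growth_pct(1.0, 1.5))   # 1.5 -- spending rises faster than income
```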

Life-extending [1] health care is an inexhaustible good. That's what simplistic happiness studies, like a pseudo-scientific [2] article claiming that $75,000 is "enough", usually miss. They implicitly assume, or indirectly measure, good health [3].

Years ago, when health care spending was a mere 12% of GDP (we're about 15% now), my partner, Dr. John H, saw no reason why it wouldn't, and shouldn't, rise to a then-unthinkable 15% or more. His point was that people like being healthy, and to the extent that health care works, they will want more of it.

Health care that is perceived to be effective is the ultimate growth industry.

That's why this is where we'll end up. We could do much worse.

[1] A shorthand for extending life that we care about, particularly life-years of loved ones. More years of dementia don't count, though significant disability has less impact than many imagine. I assume there's some amount of quality lifespan that would, depending on one's memory, have an income elasticity of less than one. Science fiction writers often put that at somewhere between 300 and 30,000 years.
[2] I read the published study; "Participants answered our questions as part of a larger online survey, in return for points that could be redeemed for prizes." Can you imagine a less representative population? Needless to say they didn't define what household income meant, yet they turned this into an NYT article.
[3] The Jimmy John's insultingly stupid parable of the Mexican banker is a particularly egregious example.

Thursday, July 05, 2012

Google's Project Glass - it's not for the young

I've changed my mind about Project Glass. I thought it was proof that Brin's vast wealth had driven him mad, and that Google was doing a high speed version of Microsoft's trajectory.

Now I realize that there is a market.

No, not the models who must, by now, be demanding triple rates to appear in Google's career-ending ads.

No, not even Google's geeks, who must be frantically looking for new employment.

No, the market is old people. Geezers. People like me; or maybe me + 5-10 years.

We don't mind that Google Glass looks stupid -- we're ugly and we know it.

We don't mind that Google Glass makes us look like Borg -- we're already good with artificial hips, knees, lenses, bones, ears and more. Nature is overrated and wears out too soon.

We don't mind wearing glasses, we need them anyway.

We don't mind having something identifying people for us, recording where we've been and what we've done, selling us things we don't need, and warning us of suspicious strangers and oncoming traffic. We are either going to die or get demented, and the way medicine is going the latter is more likely. We need a bionic brain; an ever-present AI keeping us roughly on track and advertising cut-rate colonoscopies.

Google Glass is going to be very big. It just won't be very sexy.

Wednesday, June 27, 2012

Google's A.I. recognizes cats. Laugh while you can.

Google's brain module was trained on YouTube stills. From vast amounts of data, one image spontaneously emerged ...
Using large-scale brain simulations for machine learning and A.I. | Official Google Blog 
".. we developed a distributed computing infrastructure for training large-scale neural networks. Then, we took an artificial neural network and spread the computation across 16,000 of our CPU cores (in our data centers), and trained models with more than 1 billion connections.  
...  to our amusement, one of our artificial neurons learned to respond strongly to pictures of... cats ... it “discovered” what a cat looked like by itself from only unlabeled YouTube stills. That’s what we mean by self-taught learning... 
... Using this large-scale neural network, we also significantly improved the state of the art on a standard image classification test—in fact, we saw a 70 percent relative improvement in accuracy. We achieved that by taking advantage of the vast amounts of unlabeled data available on the web, and using it to augment a much more limited set of labeled data. This is something we’re really focused on—how to develop machine learning systems that scale well, so that we can take advantage of vast sets of unlabeled training data.... 
... working on scaling our systems to train even larger models. To give you a sense of what we mean by “larger”—while there’s no accepted way to compare artificial neural networks to biological brains, as a very rough comparison an adult human brain has around 100 trillion connections.... 
..  working with other groups within Google on applying this artificial neural network approach to other areas such as speech recognition and natural language modeling."
Hah, hah, a cat. That's so funny. Unless you're a mouse of course.

The mouse cortex has about 14 million neurons and a maximum of roughly 45K connections per neuron, so, as a ballpark estimate, perhaps 300 billion connections (real estimates are probably known from the mouse connectome project, but I couldn't find them). So in this first pass Google has less than 1% of a mouse connectome.

Assuming they double the connectome every year, they should hit mouse scale in about nine years, around 2021 (at a slower Moore's-law pace of one doubling every two years, it's closer to 2028). There's a good chance you and I will still be around then.
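
A back-of-the-envelope check on those numbers; the 300-billion figure and the doubling rates are this post's assumptions, not measured values.

```python
import math

# This post's assumptions, not measured values.
mouse_connections = 300e9    # ballpark mouse cortex connectome (~half the 630e9 maximum)
google_connections = 1e9     # "more than 1 billion connections" in the 2012 network

fraction = google_connections / mouse_connections
doublings = math.log2(mouse_connections / google_connections)

print(f"fraction of a mouse connectome today: {fraction:.2%}")      # ~0.33%, i.e. <1%
print(f"doublings needed to reach mouse scale: {doublings:.1f}")    # ~8.2
print(f"years at one doubling per year: {doublings:.0f}")           # ~8-9 -> ~2021
print(f"years at one doubling per two years: {2 * doublings:.0f}")  # ~16-17 -> ~2028
```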

I've long felt that once we had a "mouse-equivalent" connectome we could probably stop worrying about global warming, social security, meteor impacts, cheap bioweapons, and the Yellowstone super volcano.

Really, we're just mice writ large. That cat is looking hungry.

Incidentally, Google didn't use the politically incorrect two letter acronym in the blog post, but they put it, with periods (?), in the post title.

Tuesday, April 17, 2012

Why we need Google Glasses (really)

There are lots of parodies of Google Glasses. I haven't bothered with them though; too easy a target and the technology seemed pointless.

Until I realized that we will all wear them one day.

No, I'm not crazy. There's a reason.

The reason is that, sadly, it's starting to look like prolonged sitting really is bad for us. It's not just lack of exercise, and it's not just obesity; there seems to be something fundamentally unhealthy about sitting itself.

So we need to walk more. We also need to be able to work while we walk, and office treadmills aren't feasible.

Enter Google Glasses with voice recognition.

Sunday, January 08, 2012

Rule 34 by Charlie Stross - my review

I read Charlie Stross's Rule 34. Here's my 5 star Amazon review (slightly modified as I thought of a few more things):

Rule 34 is brilliant work.

If Stross had written a novel placed in 2010, it would have been a top-notch crime and suspense novel. Charlie's portrayal of the criminal mind, from Silence of the Lambs psychopath (sociopath in UK speak, though that US/UK distinction is blurring) to everyday petty crook, is first rate.

Stross puts us into the minds of his villains, heroes, and fools, using a curious second-person pronoun style that has a surprising significance. I loved how so many of his villains felt they were players, while others knew they were pawns. Only the most insightful knew they were cogs in the machine.

A cog in a corporate machine that is. Whether cop or criminal or other, whether gay or straight, everyone is a component of a corporation. Not the megacorp of Gibson and Blade Runner, but the ubiquitous corporate meme that we also live in. The corporate meme has metastasized. It is invisible, it is everywhere, and it makes use of all material. Minds of all kinds, from Aspergerish to sociopath, for better and for worse, find a home in this ecosystem. The language of today's sycophantic guides to business is mainstream here.

Stross manages the suspense and twists of the thriller, and explores emerging sociology as he goes. The man has clearly done his homework on the entangled worlds of spam and netporn -- and I'm looking forward to the interviewers who ask him what that research was like. In other works Stross has written about the spamularity, and in Rule 34 he lays it out. He should give some credit to the spambots that constantly attack his personal blog.

Rule 34 stands on its own as a thriller/crime/character novel, but it doesn't take place in 2010. It takes place sometime in the 2020-2030s (at one point in the novel Stross gives us a date but I can't remember it exactly). A lot of the best science fiction features fully imagined worlds, and this world is complete. He's hit every current day extrapolation I've ever thought of, and many more besides. From the macroeconomics of middle Asia, to honey pots with honey pots, to amplified 00s style investment scams to home foundries to spamfested networked worlds to a carbon-priced economy to mass disability to cyberfraud of the vulnerable to ubiquitous surveillance to the bursting of the higher education bubble, to exploding jurisprudence creating universal crime … Phew. There's a lot more besides. I should have been making a list as I read.

Yes, Rule 34 is definitely a "hard" science fiction novel -- though it's easy to skip over the mind-bending parts if you're not a genre fan. You can't, however, completely avoid Stross's explorations of the nature of consciousness, and his take on the "Singularity" (aka rapture of the nerds). It's not giving away too much to say there's no rapture here. As to whether this is a Rainbows End pre-Singular world … well, you'll have to read the novel and make your own decision. I'm not sure I'd take Stross's opinion on where this world of his is going - at least not at face value.

Oh, and if you squint a certain way, you can see a sort-of Batman in there too. I think that was deliberate; someone needs to ask Charlie about that.

Great stuff, and a Hugo contender for sure.

If you've read my blog you know I'm fond of extrapolating to the near future. Walking down my blog's tag list I see I'm keen on the nature and evolution of the Corporation, mind and consciousness, economics, today's history, emergence, carbon taxes, fraud and "the weak", the Great Recession (Lesser Depression), alternative minds (I live with 2 non-neurotypicals), corruption, politics, governance, higher education and the education bubble, natural selection, identity, libertarianism (as a bad thing), memes, memory management, poverty (and mass disability), reputation management, schizophrenia and mental illness, security, technology, and the whitewater world. Not to mention the Singularity/Fermi Paradox (for me they're entangled -- I'm not a Happy Singularity sort of guy).

Well, Stross has, I dare say, some of the same interests. Ok, so I'm not in much doubt of that. I read the guy religiously, and I'm sure I've reprocessed everything he's written. In Rule 34 he's hit all of these bases and more. Most impressively, if you're not looking for it, you could miss almost all of it. Stross weaves it in, just as he does a slow reveal of the nature of his characters, including the nature of the character you don't know about until the end.

Update: In one of those weird synchronicity things, Stross has his 2032 and 2092 predictions out this morning. Read 'em.

Thursday, December 29, 2011

GOP 2.0: What rational climate change politics might look like

"With great power comes great responsibility." Gingrich's inner geek smiled at that one. Certainly they had the power. The Democrats had been crushed by the 2012 elections. President Romney now controlled the House, the Senate and the Supreme Court -- and the filibuster had been eliminated in early 2013.

Gingrich was philosophical about the Vice Presidency; Cheney had taught him what could be done. Romney was happy enough to hand off the big one to him.

Not health care of course. That had been a trivial problem; it took only a few months to tweak ObamaCare, throw in some vouchers and a few distractions, and launch RomneyCare. The GOP base was fine with rebranding, and the dispirited remnant of the Democrats saw little real change.

No, the big one was climate change. Romney and Gingrich had never truly doubted that human CO2 emissions were driving global climate change, but pivoting the base took a bit of work. They'd begun with ritual purges; Hansen was quickly exiled to the lecture circuit. Then came the American Commission on Truth in Science. There wasn't even much tormenting of old enemies; the size of the GOP victory had taken the fun out of that. In short order the "weak mindedness" of the Democrats was exposed and the "honest and rigorous" examination of the Romney administration was completed. It was time, Murdoch's empire declared, for strong minded Americans to face hard (but not inconvenient) facts.

The hardest challenge came from a contingent that felt global warming was a good thing, even God's plan. American drought was weakening that group, but they were a constant headache.

Now though it was time for policy, and Gingrich couldn't be happier. He'd been meeting with Bill Clinton of course; the two rogues loved the evening debates. Clinton's engagement wasn't just for fun; despite the GOP's dominance there was still room for politics. America's wealthy had been irrationally terrified of Obama, but they were also afraid of runaway warming -- and they had considerable power. Trillions of dollars were at stake in any real attack on global warming, and every corporation in America was at the door. The military was pushing for aggressive management. Lastly, Gingrich knew that power can shift. He'd seen it before.

He wrote out the options, and labeled them by their natural political base ...

  • Climate engineering: solar radiation reduction, massive sequestration projects (R)
  • CO2 pricing (by hook or crook) (R/D - political debate is how revenues are used)
  • Subsidies for public transit (D)
  • Urban planning measures (D)
  • Military strategy to manage anticipated collapse of African nations (R)
  • Military strategy to manage anticipated climate engineering conflicts with China (climate wars) (R)
  • Tariffs on Chinese imports to charge China for their CO2 emissions (R/D - but probably tied to American CO2 pricing)
  • Massive investments in solar power and conservation technologies (D)
  • Massive investments in fusion power (R)

The Climate Wars were particularly troublesome. There were simple things China could do, like pumping massive amounts of sulfuric acid into the stratosphere, that would alleviate the disaster their scientists had predicted. These measures, however, would be disastrous for the US. On the other hand, war with China was unthinkable.

Gingrich knew he had to put a price on carbon and he had to get China to avoid the most dangerous (for the US) forms of climate engineering. The rest was in play. This was what Great Men were made for ...


Saturday, December 17, 2011

Netbooks

Dell has ended their Netbook line.

That leaves Google's Chromebooks, which aren't exactly exciting.

I wasn't just a little wrong about Netbooks, I was incredibly, unbelievably, totally wrong. Again.

I mean, this is friggin' ridiculous.

What happened?

I suppose it was the pocket computer. People with iPhones and the Android equivalent are already paying for most of what a Netbook can do. It doesn't make sense to pay for an extra monthly data plan, and a Netbook without net access is kind of a bust.

That leaves Windows notebooks, which are cheap but crummy. And MacBook Airs, which are not cheap but very amazing.

There's still the grade school and perhaps junior high school marketplace, but the iPad and Android equivalents are squeezing there too.

The Netbook looks like an evolutionary dead end. Maybe we'd have taken that road, but the iPhone blew a hole in it by mid-2007. I was writing in 2009; the bloody 3G was out then!

Damnit Netbook, you made a fool of me.


Friday, December 02, 2011

The AI Age: Siri and Me

Memory is just a story we believe.

I remember being on a city bus, so perhaps 8 years old, when a friend showed me a "library card". I was amazed, but I knew that libraries were made for me.

When I saw the web ... No, not the web. It was Gopher. I read the minutes of a town meeting in New Zealand. I knew it was made for me. AltaVista - same thing.

Siri too. It's slow, but I'm good with adjusting my pace and dialect. We've been in the post-AI world for over a decade, but Siri is the mind with a name.

A simple mind, to be sure. Even so, Kurzweil isn't as funny as he used to be; maybe Siri's children will be here before 2100 after all.

In the meantime, we get squeezed...

Artificial intelligence: Difference Engine: Luddite legacy | The Economist

... if the Luddite Fallacy (as it has become known in development economics) were true, we would all be out of work by now—as a result of the compounding effects of productivity. While technological progress may cause workers with out-dated skills to become redundant, the past two centuries have shown that the idea that increasing productivity leads axiomatically to widespread unemployment is nonsense...

[there is]... the disturbing thought that, sluggish business cycles aside, America's current employment woes stem from a precipitous and permanent change caused by not too little technological progress, but too much. The evidence is irrefutable that computerised automation, networks and artificial intelligence (AI)—including machine-learning, language-translation, and speech- and pattern-recognition software—are beginning to render many jobs simply obsolete....

... The argument against the Luddite Fallacy rests on two assumptions: one is that machines are tools used by workers to increase their productivity; the other is that the majority of workers are capable of becoming machine operators. What happens when these assumptions cease to apply—when machines are smart enough to become workers? In other words, when capital becomes labour. At that point, the Luddite Fallacy looks rather less fallacious.

This is what Jeremy Rifkin, a social critic, was driving at in his book, “The End of Work”, published in 1995. Though not the first to do so, Mr Rifkin argued prophetically that society was entering a new phase—one in which fewer and fewer workers would be needed to produce all the goods and services consumed. “In the years ahead,” he wrote, “more sophisticated software technologies are going to bring civilisation ever closer to a near-workerless world.”

...In 2009, Martin Ford, a software entrepreneur from Silicon Valley, noted in “The Lights in the Tunnel” that new occupations created by technology—web coders, mobile-phone salesmen, wind-turbine technicians and so on—represent a tiny fraction of employment... In his analysis, Mr Ford noted how technology and innovation improve productivity exponentially, while human consumption increases in a more linear fashion.... Mr Ford has identified over 50m jobs in America—nearly 40% of all employment—which, to a greater or lesser extent, could be performed by a piece of software running on a computer...

In their recent book, “Race Against the Machine”, Erik Brynjolfsson and Andrew McAfee from the Massachusetts Institute of Technology agree with Mr Ford's analysis—namely, that the jobs lost since the Great Recession are unlikely to return. They agree, too, that the brunt of the shake-out will be borne by middle-income knowledge workers, including those in the retail, legal and information industries...

Even in the near term, the US Labor Department predicts that the 17% of US workers in "office and administrative support" will be replaced by automation.

It's not only the winners of the 1st world birth lottery that are threatened.

Foxconn (a Taiwan-based company with most of its workforce in China) employs about 1 million people. Many of them will be replaced by robots.

It's disruptive, but given time we could adjust. Today's AIs aren't tweaking the permeability of free space; there are still a few things we do better than they. We also have complementary cognitive biases; a neurotypical human with an AI in the pocket will do things few unaided humans can do. Perhaps even a 2045 AI will keep human pets for their unexpected insights. Either way, it's a job.

Perhaps more interestingly, a cognitively disabled human with a personal AI may be able to take on work that is now impossible.

Economically, of course, the productivity/consumption circuit has to close. AIs don't (yet) buy info-porn. If 0.1% of humans get 80% of revenue, then they'll be taxed at 90% marginal rates and the 99.9% will do subsidized labor. That's what we do for special needs adults now, and we're all special needs eventually.

So, given time, we can adjust. Problem is, we won't get time. We will need to adjust even as our world transforms exponentially. It could be tricky.


Saturday, November 26, 2011

Mass disability goes mainstream: disequilibria and RCIIT

I've been chattering for a few years about the rise of mass disability and the role of RCIIT (India, China, computers, networks) in the Lesser Depression. This has taken me a bit out of the Krugman camp, which means I'm probably wrong.

Yes, I accept Krugman's thesis that the proximal cause of depression is a collapse in demand combined with the zero-bound problem. Hard to argue with arithmetic.

I think there's more going on though: secular trends that will be with us even if we followed Krugman's wise advice. In fact, under the surface, I suspect Krugman and DeLong believe this as well. I've read Krugman for years, and he was once more worried about the impact of globalization and IT than he's now willing to admit. Sometimes he has to simplify.

For example, fraud has always been with us -- but something happened to make traditional fraud far more effective over the past thirteen years. I think that "something" was the rise of information technology and associated complexity; a technology that allowed financiers to appear to be contributing value even though their primary role was parasitic.

Similarly, the rise of China and India is, in the long run, good for the entire world. In the near future, however, it's very hard for world economies to adjust. Income shifts to a tiny fraction of Americans, many jobs are disrupted, people have to move, to change careers, etc. It takes time for new tax structures to be accepted, for new work to emerge. IT has the same disruptive effect. AI and communication networks will further limit the jobs we can take where our economic returns are equal or greater than the minimum wage.

I think these ideas are starting to get traction. Today Herbert Gans is writing in the NYT about the age of the superfluous worker. A few days ago The Economist reviewed a book about disequilibria and IT:

Economics Focus: Marathon machine | The Economist

... Erik Brynjolfsson, an economist, and Andrew McAfee, a technology expert, argue in their new e-book, “Race Against the Machine”, that too much innovation is the bane of struggling workers. Progress in information and communication technology (ICT) may be occurring too fast for labour markets to keep up. Such a revolution ought to be obvious enough to dissuade others from writing about stagnation. But Messrs Brynjolfsson and McAfee argue that because the growth is exponential, it is deceptive in its pace...

... Progress in many areas of ICT follows Moore’s law, they write, which suggests that circuit performance should double every 1-2 years. In the early years of the ICT revolution, during the flat part of the exponential curve, progress seemed interesting but limited in its applications. As doublings accumulate, however, and technology moves into the steep part of the exponential curve, great leaps become possible. Technological feats such as self-driving cars and voice-recognition and translation programmes, not long ago a distant hope, are now realities. Further progress may generate profound economic change, they say. ICT is a “general purpose technology”, like steam-power or electrification, able to affect businesses in all industries...

... There will also be growing pains. Technology allows firms to offshore back-office tasks, for instance, or replace cashiers with automated kiosks. Powerful new systems may threaten the jobs of those who felt safe from technology. Pattern-recognition software is used to do work previously accomplished by teams of lawyers. Programmes can do a passable job writing up baseball games, and may soon fill parts of newspaper sections (those not sunk by free online competition). Workers are displaced, but businesses are proving slow to find new uses for the labour made available. Those left unemployed or underemployed are struggling to retrain and catch up with the new economy’s needs.

As a result, the labour force is polarising. Many of those once employed as semi-skilled workers are now fighting for low-wage jobs. Change has been good for those at the very top. Whereas real wages have been falling or flat for most workers, they have increased for those who have advanced degrees. Owners of capital have also benefited. They have enjoyed big gains from the increased returns on investments in equipment. Technology is allowing the best performers in many fields, such as superstar entertainers, to dominate global markets, crowding out those even slightly less skilled. And technology has yet to cut costs for health care, or education. Much of the rich world’s workforce has been squeezed on two sides, by stagnant wages and rising costs.

In time the economy will adjust -- unless exponential IT transformation actually continues [1]. Alas, the AI revolution is well underway and technology cycles are still brutally short. I don't see adjustment happening within the next six years. The whitewater isn't calming.

[1] That is, of course, the Singularity premise, as previously reviewed in The Economist.

Update 12/3/2011: And how does the great stagnation play into this - Gordon's Notes: Ants, corporations and the great stagnation?

Saturday, October 22, 2011

In fifty years, what will our sins be?

In my early years white male heterosexual superiority was pretty much hardwired into my culture. I grew up in Quebec, so in my earliest pre-engagement years add the local theocracy of the Catholic church.

Mental illness, including schizophrenia, was a shameful sin. Hitting children was normal and even encouraged. There were few laws protecting domestic animals. There were almost no environmental protections. Children and adults with cognitive disorders were scorned and neglected. Physical disabilities were shameful; there were few accommodations for disability.

Our life then had a lot in common with China today.

Not all of these cultural attitudes are fully condemned, but that time is coming.

So what are the candidates for condemnation in 50 years? Gus Mueller, commenting on a WaPo article, suggests massive meat consumption and cannabis prohibition.

I am sure Gus is wrong about cannabis prohibition. Even now we don't condemn the idea of alcohol prohibition; many aboriginal communities around the world still enforce alcohol restrictions and we don't condemn them. We consider American Prohibition quixotic, but not evil.

My list is not far from the WaPo article. Here's my set:

  • Our definition and punishment of crime, particularly in the context of diminished capacity.
  • Our tolerance of poverty, both local and global.
  • Our wastefulness.
  • Our tolerance of political corruption.
  • Our failure to create a carbon tax.
  • The use of semi-sentient animals as meat. (WaPo just mentions industrial food production. I think the condemnation will be deeper.)
  • Our failure to confront the responsibilities and risks associated with the creation of artificial sentience. (Depending on how things turn out, this might be celebrated by our heirs.)

The WaPo article mentions our isolation of the elderly. I don't think so; I think that will be seen more as a tragedy than a sin. This is really about the modern mismatch between physical and cognitive lifespan.

The article is accompanied by a poll with this ranking as of 5800 votes:

  • Environment
  • Food production
  • Prison system
  • Isolation of the elderly.

Wednesday, October 12, 2011

Apple 4.0

Steve M was* a master Healer and Teacher at the Upper Peninsula Health Education Corporation (UPHEC), but his true love was HyperCard. He should have been a programmer.

One of Steve's jobs was to civilize an obnoxious (think Jobs sans glamour, sans genius) young physician. The other was to convert me to the way of the Mac.

That was 1989, in the days of Apple 2.0. Steve Jobs had been gone for four years.

It wasn't hard to convert me. In those days Microsoft was taking over the world, but their Intel products pretty much sucked. Their Mac products, Word and Excel in particular, were far better than their Windows equivalents.

The Mac had a rich range of software, like More 3.0. The Mac cost about 20-30% more than roughly similar PC hardware, but Mac hardware and software quality was excellent (no viruses then, so security was not an issue). Apple networking was a joy to configure, though the cracks were starting to show. Apple networks didn't seem to scale well.

I stayed with Macs during my Informatics fellowship - until 1997. By then, twelve years after Jobs had left Apple, they weren't obviously better than the Wintel alternatives. Apple's OS 7 had terrible trouble with TCP/IP; it was even worse on the web than Windows 95. Windows 2000** was better than MacOS classic and Dell hardware was robust.

It took twelve years for post-Jobs Apple to become as weak as the competition. We were a Windows household from about 1997 to about 2003, when I bought a G3 iBook. By then Apple was back. The Apple 3.0 recovery took about 5 years.

Now we're in the Apple 4.0 era. I suspect it began about 2010.

Apple 4.0 will behave like a publicly traded corporation (PTC), instead of the freakish anomaly it has been. It can't be Apple 3.0. On the other hand, I'm hopeful that Jobs's last invention will turn out to be a new way to run a corporation; a reinvention of Sloan's GM design. It's clear that this is what he was aiming at over the past few years. I stopped underestimating Jobs years ago. If anyone could fix the dysfunctional PTC, it would be Jobs.

Apple 4.0 won't have the glamour of 3.0. It may, however, do some things better. I believe Apple's product quality has been improving over the past two years. They're beginning to approach the quality of early Apple 2.0. Apple 4.0 may start to play better with others, even begin to support standards for information sharing instead of Jobs's preference for data lock-in and proprietary connectors.

Apple 4.0 will have less art, less elegance, less glamour -- but it might have more engineering. Less exciting, but better for me.

I'm optimistic.
--
* Still a great Healer, but our UPHEC passed on. Steve isn't teaching these days.
** Windows 2000 was better than XP and Vista and Windows 7, but that's another story. Microsoft's post 2000 fall was much more dramatic than the slow decay of Apple 2.0.

Update 10/12/11: I respond to comments on quality and connectors in a f/u post.

Sunday, October 09, 2011

Siri, the Friendly AI

The iPhone 4S video shows a young runner asking Siri to rearrange his schedule. It doesn't show him running into the path of another Siri user driving his convertible.

Siri is the iPhone AI that understands how your phone works and, in theory, understands a domain-constrained form of natural language. It has a long AI legacy; it's a spinoff from SRI's Artificial Intelligence Center and the DARPA CALO project.

When Siri needs to know about the world it talks with Wolfram Alpha. That's where the story becomes a Jobsian fusion of the personal and the technical, and Siri's backstory becomes a bit ... unbelievable.

Siri was launched as the unchallenged king of technology lay dying. The Wolfram part of Siri began when Jobs was in exile ...

Wolfram Blog : Steve Jobs: A Few Memories

I first met Steve Jobs in 1987, when he was quietly building his first NeXT computer, and I was quietly building the first version of Mathematica. A mutual friend had made the introduction, and Steve Jobs wasted no time in saying that he was planning to make the definitive computer for higher education, and he wanted Mathematica to be part of it...

Over the months after our first meeting, I had all sorts of interactions with Steve about Mathematica. Actually, it wasn’t yet called Mathematica then, and one of the big topics of discussion was what it should be called. At first it had been Omega (yes, like Alpha) and later PolyMath. Steve thought those were lousy names. I gave him lists of names I’d considered, and pressed him for his suggestions. For a while he wouldn’t suggest anything. But then one day he said to me: “You should call it Mathematica”...

... In June 1988 we were ready to release Mathematica. But NeXT had not yet released its computer, Steve Jobs was rarely seen in public, and speculation about what NeXT was up to had become quite intense. So when Steve Jobs agreed that he would appear at our product announcement, it was a huge thing for us.

He gave a lovely talk, discussing how he expected more and more fields to become computational, and to need the services of algorithms and of Mathematica. It was a very clean statement of a vision which has indeed worked out as he predicted....

A while later, the NeXT was duly released, and a copy of Mathematica was bundled with every computer...

... I think Mathematica may hold the distinction of having been the only major software system available at launch on every single computer that Steve Jobs created since 1988. Of course, that’s often led to highly secretive emergency Mathematica porting projects—culminating a couple of times in Theo Gray demoing the results in Steve Jobs’s keynote speeches.

... tragically, his greatest contribution to my latest life project—Wolfram|Alpha—happened just yesterday: the announcement that Wolfram|Alpha will be used in Siri on the iPhone 4S...

Siri's backstory is a good example of how you can distinguish truth from quality literature. Literature is more believable.

Siri isn't new of course. We've been in the post-AI world since Google displaced Alta Vista in the 1990s. Probably longer.

What's new is a classic Jobs move; the last Jobs move made during his lifetime. It's usually forgotten that Apple did not invent the MP3 player. They were quite late to the market they transformed. Similarly, but on a bigger and longer scale, personalized AIs have been with us for years. Ask Jeeves was doing (feeble) natural language queries in the 1990s. So Siri is not the first.

She probably won't even work that well for a while. Many of Apple's keynote foci take years to truly work (iChat, FaceTime, etc.). Eventually though, Siri will work. She and her kin will engage in the complexity wars humans can't manage, perhaps including our options bets. Because history can't resist a story, Siri will be remembered as the first of her kind.

Even her children will see it that way.

Update 10/12/11: Wolfram did a keynote address on 9/26 in which he hinted at the Siri connection to Wolfram Alpha: "It feels like Mathematica is really coming of age. It’s in just the right place at the right time. And it’s making possible some fundamentally new and profoundly powerful things. Like Wolfram|Alpha, and CDF, and yet other things that we’ll have coming over the next year." The address gives some insight into the world of the ubiquitous AI. (No real hits on that string as of 10/12/11. That will change.)

Friday, October 07, 2011

Investment in a whitewater world

During the last half of the 20th century retail investors earned positive returns with some mixture of stocks, bonds, real estate (personal) and cash. Mutual funds, and especially index mutual funds, made middle class investing possible.

Then came the great market bubble of the 90s, the real estate bubbles of the 00s, and the rise of IT enabled economic predation. The vast flow of growth returns was diverted from the middle class investor to corporate executives and sharper, faster, players. Corporate financial statements became less and less credible as new ways were found to obfuscate financial status. In a world of IT enabled complexity, risk assessment became extraordinarily difficult. Where growth opportunities were strong, as in China, the markets were corrupt and inaccessible.

Maybe we'll return to the relative calm of the 1980s, or even the slow growth of the 1970s. Oil prices may stabilize around $150/barrel. China's economy may make a soft landing and the Chinese nation may follow Taiwan's path to democracy. The GOP may shift away from the Tea Party base and, once in power, raise taxes substantially while implementing neo-Keynesian policies by another name. North Korea may go quietly. The EU may hold together while gradually abandoning the Euro. Cultural shifts might make personal integrity a core value. Disruptive innovations, like high performance robotics and widespread AI, may slow. The invisible hand and social adaptation may solve the mass disability problem of the wealthy nations. Immigration policy, a breakthrough in the prevention of dementia, or the return of ubasute may offset the impacts of age demographics. We may even ... we may even look intelligently at the costs of health care and education and manage both of them.

If these things happen then some economic growth will return, stock prices will reflect fundamentals, bonds will become feasible investments, and interest rates will be non-zero.

Or they won't happen.

In which case, we can look forward to more of the same. The best guide to the near future, after all, is the near past. In this whitewater world then, in which financial statements are unreliable and wealth streams are diverted, what are the investment opportunities? We cannot recover the lost returns of the past 11 years, but it would be nice to be less of a chump.

Real estate seems a reasonable option, though there we face the problem of untrustworthy investment agents. There is not yet a John Bogle of 21st century real estate investment. (This, incidentally, suggests something government could do -- engineer a trustworthy investment representative for American real estate.)

The other option is to switch from prey to predator.

During the 20th century retail investors could only make one-way bets. We basically had to bet on economic growth and prosperity. For a time we could shift a bit. If we thought near-term growth looked bad, we could shift to bonds. If we thought a crash was coming, we could shift to cash. Basically, however, we could only get good returns by investing in stocks and betting on growth. This worked under conditions of economic growth and relative integrity. Under the conditions of the past decade this made us prey.

Predators don't make one-way bets. They make bets on downturns, on upturns, on volatility, on stability, on irrationality, on continued fraud, on reform. They make bets on bets. They play the options and straddle games that brought down the world economy. It's too bad they won, but, with a bit of help from our AI friends, they did.

So I'm learning about options. It's not what I like to do, but I don't make the rules.
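
For what it's worth, here's a minimal sketch of the simplest two-way bet, a long straddle. The strike and premium numbers are made up for illustration, not a recommendation.

```python
# Toy long-straddle payoff -- the simplest "bet on volatility" mentioned above.
# Strike and premiums are illustrative numbers only.

def long_straddle_profit(price_at_expiry: float, strike: float,
                         call_premium: float, put_premium: float) -> float:
    """Profit from buying both a call and a put at the same strike."""
    call_value = max(price_at_expiry - strike, 0.0)
    put_value = max(strike - price_at_expiry, 0.0)
    return call_value + put_value - (call_premium + put_premium)

for price in (80, 95, 100, 105, 120):
    # Profitable only if the move in either direction exceeds the combined premiums.
    print(price, long_straddle_profit(price, strike=100, call_premium=5, put_premium=5))
```

The point of the sketch is that the bet pays off on a big move in either direction and loses only when nothing happens; stability, not direction, is what the predator is betting against.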

Tuesday, October 04, 2011

The 4S is fine, but it's Sprint I'm interested in

I've never seen so few posts after an Apple keynote. Clearly the iPhone 4S disappointed many; though it's noteworthy that Apple's web site cratered today. I've seen it slow, but never with a server error.

Personally, I'm fine with it. I own a 4 and, with the slender, high-quality case I use, it's been robust and excellent. The 4S may (who knows) fit many existing cases and it should work fine with existing peripherals. Since it's an iteration on an established design it's much less likely to have Apple's inevitable new-product issues. All the improvements are appreciated.

The 4S is exactly what I'd hoped for. [1]

Emily will get the 32GB model and we'll extend her AT&T contract ... unless ....

Unless Sprint does something interesting.

The Oct 7 Sprint announcement is the one I'm waiting for.

Sprint has nothing to lose. They're facing fourth down at their own 10-yard line with two minutes left in the game. Time to put the ball in the air.

The WSJ has already told us that Sprint has sold their soul to Apple, but all we're told is that they committed to selling a lot of iPhones. We don't know what orders Apple gave Sprint.

From the sound of it Apple acquired Sprint the same way Microsoft acquired Nokia. No cash down, but a promise of a future.

If Apple is now effectively running Sprint the way Apple thinks a mobile phone company should run, then things could get very interesting for the American mobile phone industry -- and quite profitable for Sprint shareholders. (Sprint's share price was on a roller coaster today. I haven't bought shares in a long time, but I may buy tomorrow.)

This is what I'm looking for on the 7th. I'm looking for Sprint to provide low cost unlimited texting/SMS support as part of their iPhone data plan. With iMessage they're not losing out much anyway; iPhone to iPhone texts are free.

I'm also looking for Sprint to offer a 5GB data cap to their iOS customers for the usual monthly data fee - instead of their "unlimited" phone data service.

Huh? What's good about that?! What's good about that is that the 5GB data allowance will include free iPhone mobile hot spot services (tethering) over Sprint's 4G network.

Lastly Sprint will offer an Apple style approach to mobile phone contracting -- simple plans, clear costs, consumer-friendly voice minute options.

Apple will use Sprint to beat Verizon and AT&T over the head. They don't want those two to get the power of a duopoly. Sprint will, in turn, become Apple's mobile phone company. Droid users will not stay with Sprint.

If I'm right, then Emily's 4S may be coming from Sprint -- because we'll be moving the entire family over. If Sprint doesn't do this, then I'll sell my shares at a loss.

[1] What I really want is a water resistant iPhone. I wasn't hoping for that. That's not Apple's style.

Update: Early signs are that Sprint whiffed. They are said to charge $30/month to tether, and also to introduce a 5GB/month data limit. That would leave me with AT&T.

Update 10/7/11: Good thing I was too tired and busy to buy any Sprint stock. They blew this opportunity. I wonder if Sprint knew their network couldn't handle the bandwidth from a 5GB capped bundled mifi/iPhone service. I fear they're goners; they certainly blew a great opportunity to differentiate. We signed up for 2 more years with AT&T.

Friday, September 23, 2011

Mass disability and the middle class

My paper magazine has another article on the Argentinification of America - Can the Middle Class Be Saved? - The Atlantic.

I'll skim it sometime, but I doubt there's much new there. We know the story.  The bourgeois heart of America is fading. In its place are the poor, the near poor, the rich and the near rich.

I have thought of this, for years, as the rise of mass disability. In the post-AI world the landscape of American employment is monotonous. There's work for people like me, not so much for some I love. Once they would have worked a simple job, but there's not much call for that these days. Simple jobs have been automated; there's only room for a small number of Walmart greeters. Moderately complex jobs have been outsourced.

Within the ecosystem of modern capitalism a significant percentage of Americans are maladapted. I'd guess about 25%; now 35% thanks to the lesser depression.

There are two ways to manage this - excluding the Swiftian solution.

One is to apply the subsidized employment strategies developed for adults with autism and low IQ. Doing this on a large scale would require substantial tax increases, particularly on the wealthy.

Another approach is to bias the economy to a more diverse landscape with a greater variety of employment opportunities -- including manufacturing. This is, depending on whether you are an optimist or realist, the approach of either modern Germany or Nehru's India. This bias compromises "comparative advantage", so we can expect this economy, all else being equal, to have a lower than maximal output. Since in our world the benefits of total productivity flow disproportionately to the wealthy (winner take all), this is equivalent to a progressive tax on an entire society.

So, either way,  the solution is a form of taxation. Either direct taxation and redistribution, or a decrease in overall growth.

I suspect that we will eventually do both.


Sunday, September 18, 2011

Life in the post-AI world. What's next?

I missed something new and important when I wrote ...

Complexity and air fare pricing: Houston, we have a problem

... planning a plane trip has become absurdly complex. Complex like choosing a cell phone plan, getting a "free" preventive care exam, managing a flex spending account, getting a mortgage, choosing health insurance, reading mobile bills, fighting payment denials, or making safe product choices. Complex like the complexity collapse that took down the western world.

I blame it all on cheap computing. Cheap computing made complexity attacks affordable and ubiquitous...

The important bit is what's coming next -- and what's already here -- in the eternal competition.

AI.

No, not the "AIs" of Data, Skynet and the Turing Test [1]. Those are imaginary sentient beings. I mean Artificial Intelligence in the sense it was used in the 1970s -- software that could solve problems that challenge human intelligence. Problems like choosing a bike route.

To be clear, AIs didn't invent mobile phone pricing plans, mortgage traps or dynamic airfare pricing. These "complexity attacks" were made by humans using old school technologies like data mining, communication networks, and simple algorithms.

The AIs, however, are joining the battle. Route finding and autonomous vehicles and (yes) search are the obvious examples. More recently services like Bing flight price prediction and Google Flights are going up against airline dynamic pricing. The AIs are among us. They're just lying low.

Increasingly, as in the esoteric world of algorithmic trading, we'll move into a world of AI vs. AI. Humans can't play there.

We are in the early days of a post-AI world of complexity beyond human ken. We should expect surprises.

What's next?

That depends on where you fall out on the Vinge vs. Stross spectrum. Stross predicts we'll stop at the AI stage because there's no real economic or competitive advantage to implementing and integrating sentience components such as motivation, self-expansion, self-modeling and so on. I suspect Charlie is wrong about that.

AI is the present. Artificial Sentience (AS), alas, is the future.

[1] Recently several non-sentient software programs have been very successful at passing simple versions of the Turing Test, a test designed to measure sentience and consciousness. Human interlocutors can't distinguish Turing Test AIs from human correspondents. So either the Turing Test isn't as good as it was thought to be, or sentience isn't what we thought it was. Or both.

Update 9/20/11: I realized a very good example of what's to come is the current spambot war. Stross, Doctorow and others have half-seriously commented that the deception detection and evasion struggle between spammers and Google will birth the first artificial sentience. For now though it's an AI vs. AI war; a marker of what's to come across all of commercial life.


Update 9/22: Yuri Milner speaking at the "mini-Davos" recently:
.... Artificial intelligence is part of our daily lives, and its power is growing. Mr. Milner cited everyday examples like Amazon.com’s recommendation of books based on ones we have already read and Google’s constantly improving search algorithm....
I'm not a crackpot. Ok, I am one, but I'm not alone.

Friday, July 08, 2011

Why is the modern GOP crazy?

The GOP wasn't always this crazy. Minnesota's Arne Carlson, for example, wasn't a bad governor. Schwarzenegger had his moments.

Ok, so the modern GOP has never been all that impressive. Still, it wasn't 97% insane until the mid-90s.

So what happened?

I don't think it's the rise of corporate America or the amazing concentration of American wealth. The former impacts both parties, and not all the ultra-wealthy are crazy. These trends make the GOP dully malign, but the craziness of the Koch brothers ought to be mitigated by better-informed greed.

That leaves voters. So why have a substantial fraction, maybe 20%, of Americans shifted to the delusional side of the sanity spectrum? It's not just 9/11 -- this started before that, though it's easy to underestimate how badly bin Laden hurt the US. It can't be just economic distress -- Gingrich and GWB rose to power in relatively good times.

What's changed for the GOP's core of north-euro Americans (aka non-Hispanic "white" or NEA)?

Well, the interacting rise of the BRIC and the ongoing IT revolution did hit the GOP-voting NEA very hard, perhaps particularly among "swing" voters. That's a factor.

Demographics is probably a bigger factor. I can't find any good references (help?) but given overall population data I am pretty sure this population is aging quickly. A good fraction of the core of the GOP is experiencing the joys of entropic brains (here I speak from personal white-north-euro-middle-age experience). More importantly, as Talking Points describes, this group is feeling the beginning of the end of its tribal power. My son's junior high graduating class wasn't merely minority NEA, it was small minority NEA.

This is going to get worse before it gets better. The GOP is going to explore new realms of crazy before it finds a new power base; either as a rebuilt GOP or a new party.

It's a whitewater world.

Update 7/8/11: Coincidentally, 538 provides some data on GOP craziness ....

Behind the Republican Resistance to Compromise - NYTimes.com

... Until fairly recently, about half of the people who voted Republican for Congress (not all of whom are registered Republicans) identified themselves as conservative, and the other half as moderate or, less commonly, liberal. But lately the ratio has been skewing: in last year’s elections, 67 percent of those who voted Republican said they were conservative, up from 58 percent two years earlier and 48 percent ten years ago.

This might seem counterintuitive. Didn’t the Republicans win a sweeping victory last year? They did, but it had mostly to do with changes in turnout. Whereas in 2008, conservatives made up 34 percent of those who cast ballots, that number shot up to 42 percent last year...

... the enthusiasm gap did not so much divide Republicans from Democrats; rather, it divided conservative Republicans from everyone else. According to the Pew data, while 64 percent of all Republicans and Republican-leaning independents identify as conservative, the figure rises to 73 percent for those who actually voted in 2010...

Saturday, July 02, 2011

NYT's 1982 article on how teletext would transform America

(with thanks to Joseph P for the cite).

There were familiar computing names in the 1980s - Apple, IBM and so on. There were also many now lost, such as Atari and Commodore PCs. There were networks and email and decades old sophisticated collaboration technologies now almost lost to memory.

Against that background the Institute for the Future tried to predict the IT landscape of 1998. They were looking 16 years ahead.

You can see how well they did. For reasons I'll explain, the italicized text marks my word substitutions. Emphases mine ...

STUDY SAYS TECHNOLOGY COULD TRANSFORM SOCIETY (June 13, 1982)

WASHINGTON, June 13— A report ... made public today speculates that by the end of this century electronic information technology will have transformed American home, business, manufacturing, school, family and political life.

The report suggests that one-way and two-way home information systems ... will penetrate deeply into daily life, with an effect on society as profound as those of the automobile and commercial television earlier in this century.

It conjured a vision, at once appealing and threatening, of a style of life defined and controlled by network terminals throughout the house.

As a consequence, the report envisioned this kind of American home by the year 1998: ''Family life is not limited to meals, weekend outings, and once-a-year vacations. Instead of being the glue that holds things together so that family members can do all those other things they're expected to do - like work, school, and community gatherings - the family is the unit that does those other things, and the home is the place where they get done. Like the term 'cottage industry,' this view might seem to reflect a previous era when family trades were passed down from generation to generation, and children apprenticed to their parents. In the 'electronic cottage,' however, one electronic 'tool kit' can support many information production trades.''...

... The report warned that the new technology would raise difficult issues of privacy and control that will have to be addressed soon to ''maximize its benefits and minimize its threats to society.''

The study ... was an attempt at the risky business of ''technology assessment,'' peering into the future of an electronic world.

The study focused on the emerging videotex industry, formed by the marriage of two older technologies, communications and computing. It estimated that 40 percent of American households will have internet service by the end of the century. By comparison, it took television 16 years to penetrate 90 percent of households from the time commercial service was begun.

The ''key driving force'' controlling the speed of computer communications penetration, the report said, is the extent to which advertisers can be persuaded to use it, reducing the cost of the service to subscribers.

''Networked systems create opportunities for individuals to exercise much greater choice over the information available to them,'' the researchers wrote. ''Individuals may be able to use network systems to create their own newspapers, design their own curricula, compile their own consumer guides.

''On the other hand, because of the complexity and sophistication of these systems, they create new dangers of manipulation or social engineering, either for political or economic gain. Similarly, at the same time that these systems will bring a greatly increased flow of information and services into the home, they will also carry a stream of information out of the home about the preferences and behavior of its occupants.''

Social Side Effects

The report stressed what it called ''transformative effects'' of the new technology, the largely unintended and unanticipated social side effects. ''Television, for example, was developed to provide entertainment for mass audiences but the extent of its social and psychological side effects on children and adults was never planned for,'' the report said. ''The mass-produced automobile has impacted on city design, allocation of recreation time, environmental policy, and the design of hospital emergency room facilities.''

Such effects, it added, were likely to become apparent in home and family life, in the consumer marketplace, in the business office and in politics.

Widespread penetration of the technology, it said, would mean, among other things, these developments:

- The home will double as a place of employment, with men and women conducting much of their work at the computer terminal. This will affect both the architecture and location of the home. It will also blur the distinction between places of residence and places of business, with uncertain effects on zoning, travel patterns and neighborhoods.

- Home-based shopping will permit consumers to control manufacturing directly, ordering exactly what they need for ''production on demand.''

- There will be a shift away from conventional workplace and school socialization. Friends, peer groups and alliances will be determined electronically, creating classes of people based on interests and skills rather than age and social class.

- A new profession of information ''brokers'' and ''managers'' will emerge, serving as ''gatekeepers,'' monitoring politicians and corporations and selectively releasing information to interested parties.

- The ''extended family'' might be recreated if the elderly can support themselves through electronic homework, making them more desirable to have around.

... The blurring of lines between home and work, the report stated, will raise difficult issues, such as working hours. The new technology, it suggested, may force the development of a new kind of business leader. ''Managing the complicated communication in networks between office and home may require very different styles than current managers exhibit,'' the report concluded.

The study also predicted a much greater diversity in the American political power structure. ''Electronic networks might mean the end of the two party system, as networks of voters band together to support a variety of slates - maybe hundreds of them,'' it said.

Now read this article on using software bots (not robots, contrary to the title) to shape and control social networks and opinions, and then two recent posts of mine on the state of blogging.

So, did the Institute for the Future get it right - or not?

I would say they did quite well, though they are more right about 2011 than about 1998. I didn't think so at first, because they used words like "videotext" and "teletext". Those terms sound silly because we still do very little with telepresence or videoconferencing -- contrary to the expectations of the last seventy years.

On careful reading, though, it was clear that what they called "teletext and videotext" was approximately "email and rich media communications". So I substituted the words "computer", "internet" and "networked systems" where appropriate. Otherwise I just bolded a few key phrases.

Rereading it now, I'd say they got quite a bit right. They weren't even that far off on home penetration. They also got quite a bit wrong. The impact on politics seems to have contributed to polarization rather than diversity. Even now few elders use computer systems to interact with their grandchildren, and none did in 1998.

So, overall, they were maybe 65% right, but about 10 years premature (on a 16-year timeline!). That's not awful for predicting the near future, but they'd have done even better following Charlie Stross's prediction rules ...

The near-future is comprised of three parts: 90% of it is just like the present, 9% is new but foreseeable developments and innovations, and 1% is utterly bizarre and unexpected.

(Oh, and we're living in 2001's near future, just like 2001 was the near future of 1991. It's a recursive function, in other words.)

However, sometimes bits of the present go away. Ask yourself when you last used a slide rule — or a pocket calculator, as opposed to the calculator app on your phone or laptop, let alone trig tables. That's a technological example. Cultural aspects die off over time, as well. And I'm currently pondering what it is that people aren't afraid of any more. Like witchcraft, or imminent thermonuclear annihilation....

The state of blogging - dead or alive?

Today one of the quality bloggers I read declared that blogging is dying. Two weeks ago Brent Simmons, an early pub/sub (RSS, Atom) adopter, tackled the "RSS is dead" meme. Today I discovered that Google Plus Circles don't have readable feeds.

Perhaps worst of all, Google Reader, one of Google's best apps, is getting no Plus love at all -- and nobody seems upset. The only reference I could find is in an Anil Dash post...

The Sparks feature, like a topic-based feed reader for keyword search results, is the least developed part of the site so far. Google Reader is so good, this can't possibly stay so bad for too long ...

That's a lot of crepe. It's not new, however; I've been reading about the death of blogging for at least five years.

Against that, I was so impressed with a recent blog post that yesterday I raved about the terrific quality of the blogs I read.

So what's going on? I think Brent Simmons has the best state-of-the-art review. I say that because, of course, he lines up pretty well with my own opinions. (Brent has a bit more credibility, I admit.)

This is what I think is happening ...

  • We all hate the word Blog. Geeks should not name things.
  • The people I read are compulsive communicators. Brad, Charlie, Felix, Paul and many less famous names. They can't stop. Krugman is the most influential columnist in the US, but he's not paid for his non-stop NYT blog. Even when he declares he'll be absolutely offline he still posts.
  • Subscription and notification are absolutely not going away. Whether it's "RSS" (which is now a label for a variety of subscription technology standards) or Facebook's internal proprietary system, there will be some form of pub/sub/notify; a minimal feed-polling sketch follows this list. There are lots of interesting sub/notification projects starting up.
  • Nobody has been able to monetize the RSS/Atom/Feed infrastructure. Partial posts that redirect to ad-laden sites rarely work. (A few have figured out how to do this, but it's tricky.)
  • Blogs have enemies with significant economic and political power. That has an opportunity cost for developers of pub/sub solutions and it removes a potential source of innovation and communication.
  • Normal humans (aka civilians) do not use dedicated feed readers. That was a bridge too far. They don't use Twitter either btw and are really struggling with email.
  • Even for geeks, standalone feed readers on the desktop were killed by Google Reader. Standalone readers do persist on intermittently disconnected devices (aka smartphones).
  • Blog comments have failed miserably. The original backlink model was killed by spam. (Bits of Google Reader Share and Buzz point the way to making this work, but Google seems to be unable to figure this out.)
  • The quality of what I read is, if anything, improving. I can't comment on overall volume, since I don't care about that. I have enough to read. It is true that some of my favorites go quiet for a while, but they often return.
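
As promised in the subscription item above, here is a minimal sketch of what pub/sub means in practice for a feed reader: poll a feed URL, remember which entries have already been seen, and surface only the new ones. It is a sketch only, assuming the third-party Python feedparser library (pip install feedparser); the example URL and the in-memory seen_ids set are illustrative placeholders, not a real reader design.

import feedparser

seen_ids = set()  # a real reader would persist this between polls

def poll(feed_url):
    """Return (title, link) pairs for entries not seen in earlier polls."""
    parsed = feedparser.parse(feed_url)
    fresh = []
    for entry in parsed.entries:
        # Atom entries carry an id; plain RSS items often only have a link
        entry_id = entry.get("id") or entry.get("link")
        if entry_id and entry_id not in seen_ids:
            seen_ids.add(entry_id)
            fresh.append((entry.get("title", "(untitled)"), entry.get("link")))
    return fresh

if __name__ == "__main__":
    for title, link in poll("https://example.com/feed.atom"):
        print(title, "-", link)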

Short version - it's a murky mixed bag. The good news is that pub/sub/notify is not going away, and that compulsive communicators will write even if they have to pay for the privilege. The bad news is that we're probably in for some turbulent transitions towards a world where someone can monetize the infostream.

Sunday, March 27, 2011

Naked Emperors: where are all the connected people?

A NYT headline says half of all American adults have Facebook accounts [4]. Twitter-like valuations are leading to tech bubble denials. Social networks, we are told, led to the Egyptian revolution [1].

Except, I don't see it here among school parents, sports team families, tech company colleagues, and upper-middle-class neighbors.

True, I live in the Midwest, but by most metrics Minneapolis is a snowier version of Seattle or Portland. If not here, then where?

I don't see feed readers in use outside our home [3]. Almost nobody subscribes to calendar feeds. Very few of my sample [5] use Twitter. Most of my friends who once used Facebook have stopped posting or even reading. Even texting isn't universal. Everyone has 1-2 email addresses and can use Google, but that's as far as it goes. Forget Foursquare.

I see more iPhones every day, but they're not used for location services, pub/sub (feeds), or even Facebook's user-friendly version of pub/sub. Around here, the iPhone's main effect on communication has been faster email responses.

There is change of course, but it lags about 5-10 years behind the media memes. Dial-up connections are mostly gone, though I still see AOL addresses [2]. Texting is becoming common. Old school email is now universal, though many (unwisely) still use office email for personal messaging.

It's frustrating for me; all of the school, sport, community organization and even corporate collaboration projects I work with would go better with pub/sub in particular. I've learned the hard way to dial back my expectations, and to focus on 1990s tech.

So is Minneapolis - St. Paul strangely stuck in the dark ages? Or is there a gulf between the media portrayal of American tech use and reality --  a gulf that will lead to a big fleecing when Facebook goes public?

My money is on the fleecing - and a faint echo of the 90s .com bubble.

[1] The same nearly-free-to-all worldwide communication network that Al Qaeda used effectively in 1999-2000 is now celebrated by us for its benefits in Egypt. Technology has no values, only value.
[2] I assume about half those are dial-up. 
[3] Google Reader is astounding. Just astounding. Nobody mentions this, everyone talks about Twitter (not useless, but weak). Weird.
[4] Not actually using FB mind you, just have accounts.
[5] Ages 8-80.

Update: An hour after I posted this I thought of one remarkable exception: LinkedIn. Unlike Facebook, LinkedIn has a non-predatory business model. They have been relatively careful not to infuriate their users. LinkedIn continues to grow, and I don't see any true attrition. It will be interesting to compare their valuation to Facebook's.