Showing posts with label meme watch. Show all posts

Saturday, December 31, 2016

Crisis-T: blame it on the iPhone (too)

It’s a human thing. Something insane happens and we try to figure out “why now?”. We did a lot of that in the fall of 2001. Today I looked back at some of what I wrote then. It’s somewhat unhinged — most of us were a bit nuts then. Most of what I wrote is best forgotten, but I still have a soft spot for this Nov 2001 diagram …

Model 20010911

I think some of it works for Nov 2016 too, particularly the belief/fact breakdown, the relative poverty, the cultural dislocation, the response to modernity and changing roles of women, and the role of communication technology. Demographic pressure and environmental degradation aren’t factors in Crisis-T though.

More than those common factors I’ve blamed Crisis-T on automation and globalization reducing the demand for non-elite labor (aka “mass disability”). That doesn’t account for the Russian infowar and fake news factors though (“Meme belief=facts” and “communications tech” in my old diagram). Why were they so apparently influential? 

Maybe we should blame the iPhone …

Why Trolls Won in 2016 - Bryan Mengus, Gizmodo

… Edgar Welch, armed with multiple weapons, entered a DC pizzeria and fired, seeking to “investigate” the pizza gate conspiracy—the debunked theory that John Podesta and Hillary Clinton are the architects of a child sex-trafficking ring covertly headquartered in the nonexistent basement of the restaurant Comet Ping Pong. Egged on by conspiracy videos hosted on YouTube, and disinformation posted broadly across internet communities and social networks, Welch made the 350-mile drive filled with righteous purpose. A brief interview with the New York Times revealed that the shooter had only recently had internet installed in his home….

…. the earliest public incarnation of the internet—USENET—was populated mostly by academia. It also had little to no moderation. Each September, new college students would get easy access to the network, leading to an uptick in low-value posts which would taper off as the newbies got a sense for the culture of USENET’s various newsgroups. 1993 is immortalized as the Eternal September when AOL began to offer USENET to a flood of brand-new internet users, and overwhelmed by those who could finally afford access, that original USENET culture never bounced back.

Similarly, when Facebook was first founded in 2004, it was only available to Harvard students … The trend has remained fairly consistent: the wealthy, urban, and highly-educated are the first to benefit from and use new technologies while the poor, rural, and less educated lag behind. That margin has shrunk drastically since 2004, as cheaper computers and broadband access became attainable for most Americans.

…  the vast majority of internet users today do not come from the elite set. According to Pew Research, 63 percent of adults in the US used the internet in 2004. By 2015 that number had skyrocketed to 84 percent. Among the study’s conclusions were that, “the most pronounced growth has come among those in lower-income households and those with lower levels of educational attainment” …

… What we’re experiencing now is a huge influx of relatively new internet users—USENET’s Eternal September on an enormous scale—wrapped in political unrest.

“White Low-Income Non-College” (WLINC) and “non-elite” are politically correct [1] ways of speaking about the 40% of white Americans who have IQ scores below 100. It’s a population that was protected from net exposure until Apple introduced the first mass market computing device in June of 2007 — and Google and Facebook made mass market computing inexpensive and irresistible.

And so it has come to pass that in 2016 a population vulnerable to manipulation and yearning for the comfort of the mass movement has been dispossessed by technological change and empowered by the Facebook ad-funded manipulation engine.

So we can blame the iPhone too.

- fn -

[1] I think, for once, the term actually applies.

Wednesday, August 05, 2015

Donald Trump is a sign of a healthy democracy. Really.

I’m a liberal of Humean descent, and I’m a fan of Donald Trump.

No, not because Trump is humiliating the GOP, though he is. Of course I enjoy seeing the GOP suffer for its (many) sins, and it would be very good for the world if the GOP loses the 2016 presidential election, but Trump won’t cause any lasting political damage. Unless he runs as a third party candidate he’ll have no real impact on the elections.

I’m a fan because Trump appears to be channeling the most important cohort in the modern world — people who are not going to complete the advanced academic track we call college. Canada has the world’s highest “college” graduation rate at 55.8%, but that number is heavily biased by programs that can resemble the senior year of American High School (in Quebec, CEGEP, like mine). If we adjust for that bias, and recognize that nobody does better than Canada, it’s plausible, even likely, that no more than half of the population of the industrialized world is going to complete the minimum requirements for the “knowledge work” and “creative work” that dominate the modern economy.

Perhaps not coincidentally, about 40-50% of Canadians have an IQ under 100. Most of this group will struggle to complete an academic program even given the strongest work ethic, personal discipline, and external support. This number is not going to change short of widespread genetic engineering...


This cohort, about 40% of the human race, has experienced at least 40 years of declining income and shrinking employment opportunities. We no longer employ millions of clerks to file papers, or harvest crops, or dig ditches, or fill gas tanks or even assemble cars. That work has gone, some to other countries but most to automation. Those jobs aren’t coming back.

The future for about half of all Americans, and all humans, looks grim. When Trump talks to his white audience about immigrants taking jobs and betrayal by the elite he is starting a conversation we need to have. 

It doesn’t matter that Trump is a buffoon, or that restricting immigration won’t make any difference. It matters that the conversation is starting. After all, how far do you think anyone would get telling 40% of America that there is no place for them in the current order because they’re not “smart” enough?

Yeah, not very far at all.

This is how democracy deals with hard conversations. It begins with yelling and ranting and blowhards. Eventually the conversation mutates. Painful thoughts become less painful. Facts are slowly accepted. Solutions begin to emerge.

Donald Trump is good for democracy, good for America, and good for the world.


Saturday, April 26, 2014

Salmon, Piketty, Corporate Persons, Eco-Econ, and why we shouldn't worry

I haven’t read Piketty’s Capital in the Twenty-First Century. I’ll skim it in the library some day, but I’m fine outsourcing that work to DeLong, Krugman and Noah.

I do have opinions of course! I’m good at having opinions.

I believe Piketty is fundamentally correct, and it’s good to see our focus shifting from income inequality to wealth inequality. I think there are many malign social and economic consequences of wealth accumulation, but the greatest threat is likely the damage to democracy. Alas, wealth concentration and corruption of government are self-reinforcing trends. It is wise to give the rich extra votes, lest they overthrow democracy entirely, but fatal to give them all the votes.

What I haven’t seen in the discussions so far is the understanding that the modern oligarch is not necessarily human. Corporations are persons too, and even the Koch Brothers are not quite as wealthy as AAPL. Corporations and similar self-sustaining entities have an emergent will of their own; Voters, Corporations and Plutocrats contend for control of avowed democracies [1]. The Rise of the Machine is a pithy phrase for our RCIIT disrupted AI age, but the Corporate entity is a form of emergent machine too.

So when we think of wealth and income inequality, and the driving force of emergent process, we need to remember that while Russia’s oligarchs are (mostly vile) humans, ours are more mixed. That’s not necessarily a bad thing - GOOGL is a better master than David Koch. Consider, for example, the silencing of Felix Salmon:

Today is Felix's last day at Reuters. Here's the link to his mega-million word blog archive (start from the beginning, in March 2009, if you like). Because we're source-agnostic, you can also find some of his best stuff from the Reuters era at Wired, Slate, the Atlantic, News Genius, CJR, the NYT, and NY Mag. There's also Felix TV, his personal site, his Tumblr, his Medium archive, and, of course, the Twitter feed we all aspire to.

Once upon a time, a feudal Baron or Russian oligarch would have violently silenced an annoying critic like Salmon (example: Piketty - no exit). Today’s system simply found him a safe and silent home. I approve of this inhuman efficiency.

So what comes next? Salmon is right that our system of Human Plutocrats and emergent Corporate entities is more or less stable (think: stability of ancient Egypt). I think Krugman is wrong that establishment economics fully describes what’s happening [2]; we still need to develop eco-econ — which is not “ecological economics”. Eco-econ is the study of how economic systems recapitulate biological systems, and how economic parasites evolve and thrive [3]. Eco-econ will give us some ideas on how our current system may evolve.

In any event, I’m not entirely pessimistic. Complex adaptive systems have confounded my past predictions. Greece and the EU should have collapsed, but the center held [4]. And there are bigger disruptions coming [5]. We won’t have to worry about Human plutocrats for very long….


- fn -

[1] I like that 2011 post and the graphic I did then. I’d put “plutocrats” in the upper right these days. The debt ceiling fight of 2011 showed that Corporations and Plutocrats could be smarter than Voters, and the rise of the Tea Party shows that Corporations can be smarter than Voters and Plutocrats. Corporations, and most Plutocrats, are more progressive on sexual orientation and tribal origin than Voters. Corporations have neither gender nor pigment, and they are all tribes of one.

I could write a separate post about why I can’t simply edit the above graphic, but even I find that tech failure too depressing to contemplate.

[2] I don’t think Krugman believes this himself - but he doesn’t yet know how to model his psychohistory framework. He’s still working on the robotics angle.

[3] I just made this up today, but I dimly recall reading that the basic premises of eco-econ have turned up in the literature many times since Darwin described natural selection in biological systems. These days, of course, we apply natural selection to the evolution of the multiverse. Applications to economics are relatively modest.

[4] Perhaps because Corporations and Plutocrats outweighed Voters again — for better or for worse.

[5] Short version — we are now confident that life-compatible exoplanets are dirt common, so the combination of the Drake Equation (no, it’s not stupid) and the Fermi Paradox means that wandering/curious/communicative civilizations are short-lived. That implies we are short-lived, because we’re like that. The most likely thing to finish us off is our technological heirs.

Monday, September 03, 2012

Do evolutionary strategies evolve?

Biologists study evolutionary "strategies", such as r and K selection.

These are the strategies deployed by the Great Programmer as she fiddles with the game states of the multiver... erkkk. Just kidding. These are, of course, human terms for the emergent phenomena of natural selection.

At a more granular level, a predator's niche might be contested on the basis of bigger teeth, stronger claws, faster moves, greater endurance, or bigger brains.

Likewise, microbes, who rule the earth, have a range of "strategies". Symbiosis, parasitism, fast reproduction, encysting and so on.

Presumably the catalog of strategies changes over time. Before there were teeth, big teeth strategies were not available.

Before there were neurons, big brain strategies didn't work.

So that leads to the obvious question, do evolutionary strategies evolve? That is, do new strategies emerge from variations of strategies such that the strategies themselves are subject to selection pressure (a sort of meta-selection I suppose)?
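One way to sharpen the question is a toy simulation in which the strategy itself is the heritable, mutable trait, so selection acts on strategies directly. This is a sketch with invented numbers, not biology; the fitness function and mutation size are pure assumptions:

```python
import random

# Toy model: each individual's "strategy" is the fraction of energy
# invested in brains vs teeth. The strategy is inherited with small
# mutations, so natural selection can act on the strategies themselves.

def fitness(brain_fraction):
    # Hypothetical environment that currently rewards brains.
    return 1.0 + 4.0 * brain_fraction

def evolve(pop, generations=200, rng=None):
    rng = rng or random.Random(42)
    for _ in range(generations):
        # Fitness-proportional selection of parents.
        weights = [fitness(s) for s in pop]
        parents = rng.choices(pop, weights=weights, k=len(pop))
        # Offspring inherit the parental strategy, slightly mutated.
        pop = [min(1.0, max(0.0, p + rng.gauss(0, 0.02))) for p in parents]
    return pop

start = [0.1] * 100          # population begins teeth-heavy
end = evolve(start)
mean_strategy = sum(end) / len(end)
print(mean_strategy)         # drifts above the starting 0.1
```

In this toy world the answer is trivially yes: the mean strategy climbs toward brains, and if the environment (the fitness function) changed, the distribution of strategies would track it. Whether real strategy spaces behave this way is exactly the open question of the post.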

Seems an obvious question, but as of Sept 2012 Google has 9 hits on that precise phrase, none by biologists.

So I guess it's an obvious question, but maybe obviously dumb. I'm surprised though that I didn't find a blog post explaining why it's dumb.

(A bit of context, this came up in a discussion with my 13yo about what species would fill our ecological niche (global multicellular apex predator). Having hit upon the strategy of investing in brains, would natural selection keep returning to the theme?)

Tuesday, July 10, 2012

Is labor lumpish in whitewater times?

Krugman is famously dismissive about claims of structural aspects to underemployment (though years ago he wasn't as sure). DeLong, I think, is less sure.

Krugman points to the uniformity of underemployment. If there were structural causes, wouldn't we see areas of relative strength? It seems a bit much to claim that multiple broad-coverage structural shocks would produce such a homogeneous picture.

Fortunately, I fly under the radar (esp. under Paul's), so I am free to wonder about labor in the post-AI era complicated by the rise of China and India and the enabling effect of IT on financial fraud. Stories like this catch my attention ...

Fix Law Schools - Vincent Rougeau, The Atlantic

... the jobs and high pay that used to greet new attorneys at large firms are gone, wiped away by innovations such as software that takes seconds to do the document discovery that once occupied junior attorneys for scores of (billable) hours while they learned their profession..

Enhanced search and discovery is only one small piece of the post-AI world, but there's a case to be made that it wiped out large portions of a profession. Brynjolfsson and McAfee expand that case in Race Against the Machine [1], though almost all of their fixes [1] increase economic output rather than addressing the core issue of mass disability. The exception, perhaps deliberately numbered 13 of 19, is easy to miss ...

13. Make it comparatively more attractive to hire a person than to buy more technology through incentives, rather than regulation. This can be done by, among other things, decreasing employer payroll taxes and providing subsidies or tax breaks for employing people who have been out of work for a long time. Taxes on congestion and pollution can more than make up for the reduced labor taxes.

Of course by "pollution ... tax" they mean "Carbon Tax" [1]. The fix here is the same fix that has been applied to provide employment for persons with cognitive disabilities such as low IQ and/or autism. In the modern world disability is a relative term that applies to a larger population.
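The incentive shift in fix 13 is easy to sketch with invented numbers. Nothing below comes from Brynjolfsson and McAfee; it just illustrates how swapping payroll taxes for a carbon tax can flip the hire-vs-automate decision:

```python
# Toy comparison of employing a worker vs buying automation, before and
# after shifting taxes from payroll to carbon. All numbers are made up.

def annual_cost_of_worker(wage, payroll_tax_rate):
    return wage * (1 + payroll_tax_rate)

def annual_cost_of_machine(capital_cost, years, energy_kwh, carbon_tax_per_kwh):
    return capital_cost / years + energy_kwh * carbon_tax_per_kwh

# Status quo: 9% payroll tax, no carbon tax -> the machine wins.
worker_now = annual_cost_of_worker(30_000, 0.09)                  # ~32,700
machine_now = annual_cost_of_machine(150_000, 5, 20_000, 0.00)    # 30,000

# After the shift: payroll tax dropped, carbon taxed -> the worker wins.
worker_later = annual_cost_of_worker(30_000, 0.00)                # 30,000
machine_later = annual_cost_of_machine(150_000, 5, 20_000, 0.10)  # ~32,000
```

The point is not the particular numbers but the sign flip: the same worker and the same machine, with only the tax wedge moving between them.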

If our whitewater times continue, we will either go there or go nowhere.

[1] They're popular at the "Singularity University" and their fixes are published in the "World Future Society". Outcasts they are. Their fan base probably explains why they can't use the "Carbon" word; WFS/SU people have a weird problem with the letter C.


Monday, May 28, 2012

Why coupons? Price concealment information and memetic archeology in the pre-web world

Emily and I were wondering what business purpose coupons serve. They make brand price comparison labor intensive and hence unaffordable for most of us; this presumably enables price discrimination (selling things at different prices to different markets), but it also lets companies hide their prices from one another.

I couldn't find a good recent overview; the best I could do was a 1984 article by MC Narasimhan (A Price Discrimination Theory of Coupons).  A modern paper would want to include comparisons to Amazon's experiments with dynamic pricing, price hiding in pharmaceutical distribution, the regulatory and strategic concealment of physician office fees all in the context of game theory, information asymmetry (Akerlof, 1970s), and behavioral economics.
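Narasimhan's core idea, that a coupon is a self-selection device splitting buyers by their cost of time, can be sketched in a few lines. The segment sizes and reservation prices below are invented for illustration:

```python
# Toy model of coupons as price discrimination. Two segments: clippers
# (price-sensitive, willing to redeem a coupon) and non-clippers (not).

def revenue(list_price, coupon=0.0):
    non_clippers = 100   # buy at any price up to 5.00
    clippers = 200       # buy only if the effective price is <= 3.00
    total = 0.0
    if list_price <= 5.00:
        total += non_clippers * list_price
    if list_price - coupon <= 3.00:
        total += clippers * (list_price - coupon)
    return total

# Best single price serves everyone at 3.00: 300 * 3.00 = 900.
# A 5.00 list price plus a 2.00 coupon beats it: 100*5 + 200*3 = 1100.
print(revenue(3.00))               # 900.0
print(revenue(5.00, coupon=2.00))  # 1100.0
```

The coupon works because only the price-sensitive segment will spend the clipping effort, so one product quietly carries two effective prices.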

I can't find anything like this. So either we have a Google fail or yet another failure of modern economic academia (or perhaps a success of journal publisher information concealment, which is probably somehow related to coupon clipping).

Update: I gave this another 30 minutes of thought and realized there's a much more interesting explanation for this meme absentia.

This is pre-web economics; work done in the 1970s and 1980s. Discussions of affinity discounts, coupon clipping, frequent flyer mile economics and the like are the province of undergraduate textbooks. I shouldn't be looking in scholar.google.com, I should be checking out DeLong and Krugman's posts, tumblrs, and tweets.

Except, of course in the 1970s Brad and Paul were high school students. Google's Usenet (1980s blog posts) archives start in 1981, and email lists aren't much older. The memetic gulf stream [1] that swept ideas from academia to geekdom didn't exist.

So this isn't really a Google or Academia fail, it's a need for new innovations in memetic archeology [2] within the pre-web world (early singularity: 1820-1995).

[1] As of 5/28/2012 Google has no results on "memetic gulf stream". Try it now.
[2] Seven real hits today. Try it now

PS. Emily points out that the OpenStax nonprofit textbook movement will expose many of these memes to the Google filter feeder. (Meme phracking?). Incidentally, I found that reference through my pinboard/wordpress microblog/memory management infrastructure now integrated into my personal google custom search.

Tuesday, May 15, 2012

Minnesota 2012: Emotional health?

I tend to think bad driving is contagious. Not contagious as in passed from parent to child, but contagious as in a short-lived virus passed from driver to driver. When conditions are right, perhaps in bad weather or after tax filing, one bad driver angers another who angers another ...

So when I'm in my car and I see two people driving badly, I give everyone extra space. The virus is short-lived, typically things are back to normal within a few hours. [1]

Lately, however, it seems as though Minnesota drivers are persistently distracted, irritable, maybe angry. I see it when I'm driving, but especially when I'm walking or bicycling. It's not a mobile device problem; if anything I see less mobile use while driving. It could be demographics; Minnesotans are getting older (certainly I am), and old drivers are not happy drivers.

It's not just drivers though. I watch faces, and pedestrians too seem unhappy and distracted. That would be normal in February, but it's odd in a mild Minnesota spring.

On the other hand, a recent Gallup poll suggests a stable US emotional health index (the difference between 78.3 and 79.8 seems small, but US presidential elections are decided by smaller margins than that):
... Gallup's U.S. Emotional Health Index score was 79.9 last month, slightly above the previous high of 79.8 recorded in March 2008 and May 2010. Americans' emotional health has generally been improving since September, when it dropped to its lowest level in more than three years (78.3)...
So no conclusions for now, but I do wonder if Americans are starting to weary of economic stress, uncertainty, and increasing inequality. I'll be tracking this meme.

[1] I used to think there were similar epidemics of murder, perhaps with non-linear or chaotic peaks, but so far that theory hasn't held up.

Tuesday, May 01, 2012

My personal salon - the feeds I read completely

RSS is history. We know that. It's been replaced by ... by .... 

Right. Whatever is going to replace Feeds (RSS/Atom) hasn't quite arrived. So, while we wait, we read. In my case, I read using Reeder.app on my iPhone, Reeder for Mac on my main machine, and Google Reader elsewhere.

Recently, I did a reorg and cleanup of my subscriptions. I deleted perhaps 30 -- some hadn't been updated since 2006. A few were quite good, but ended abruptly a few years ago. Perhaps the authors will return; maybe something happened to them. Many bloggers abandoned their blogs when they went G+; I deleted most of those. They're not very interesting any more anyway.

I was left with 363. I've long organized them by source and topic, such as "NYT" or "Science". This works pretty well, but there's a subset of blogs that are special. These blogs may not publish very often, but I read almost every article. They deserved more attention.

I've put these into a folder I call "Core", and I've shared them in a Google Reader Bundle called "Core" [1]. You can subscribe to them through the "bundle" (assuming it still works) and delete the ones you don't want. Or you can google the names on this list (I left out Gordon's Notes and Gordon's Tech because I'm just that kind of guy): 

All That's Interesting - pictures mostly
Blood & Treasure - China from a UK view
Charlie Stross - thinker and writer
Coates - intellectual.  Aka TNC, Ta-Nehisi Coates.
Coding Horror - geek and thinker
Cosmic Variance - physics
Daring Fireball - often annoying, almost always interesting. Mac
DeLong - an old favorite, though we read too much of the same stuff.
Ezra Klein - politics
Fallows - aviation, the world
Follow Me Here... - psychiatrist. A lot like me.
Gail Collins - NYT
Gwynne Dyer (NZH) - rabble rouser. Almost always right.
Hawks on Anthropology - like it says
I, Cringely - sometimes a bit eccentric. I worry about him. Almost always very interesting
Joel on Software
Leonard - economics
MN Bike Navig - local fave
Oatmeal - web comic
Paul Krugman - you know
Physics arXiv - best short science
Roger Ebert - intellectual, scholar, humanist
Salmon - business, news, journalist, economics
Shtetl-Optimized - computational physics
Talking Points - cutting edge politics
The Economist: Obituary - almost the only good part of a long dead journal
The Economist: SciTech - the other good part of a long dead journal
The Wirecutter - tech products, only the best
Top 25 - NYT Top 25
Whatever - Scalzi - science fiction
xkcd.com - unbelievably good

They mostly don't know me, but they are my salon. I'm a quiet sort of host. One or two are MN centric. I have Emily's Calendar feed in 'Core' too so I know when she adds events, but obviously there's no need to share that.

Many of my Pinboard/Twitter/Archive shares come from this set.

[1] Yes, Bundles still exist. Surprisingly. The view resembles the old Reader Share view. There is some bugginess though; the widget for viewing the feed list is broken. I wonder if Google has forgotten this exists.

Saturday, April 28, 2012

Why are Google and Facebook ads so crappy?

In our world billions of dollars are spent trying to get us to read and click ads.

Billions.

So why are the ads so crappy? Google, which knows more about me than my mother, offered me these two next to my Gmail:
Master of Science Nursing
Earn a CCNE Accredited Masters in Nursing Online - Norwich University...

Overstock iPads 2: $43.20
Today Only: Get 32GB iPads for $43. 1 Per Customer. Limited Quantities...
A probably fake diploma program (or is it Norwich in the UK?!) and a con. Either Google thinks I'm an easy mark (demented already?), or it doesn't really know me after all.

Facebook is no better.

Really, spammers do better. After all, I'm marginally more likely to pay for genital enhancement than to send money to a con man. (I can imagine, for example, developing a brain tumor that might radically change my personality.)

It's not that I'm opposed to viewing ads. I pay $20 a year so that, in part, I can read the ads in Silent Sports.

Let me repeat that one. I actually turn over cold cash to read ads. I'm not the only one. Once upon a time I would, on occasion, buy a giant monthly phonebook-sized slab of newsprint called "Computer Shopper" so I could read the friggin' ads.

I don't read or click Google or Facebook ads. Not because of an ideological objection -- but because they're worthless.

So where are all those billions going? Why doesn't Google or Facebook ask me what ads I'd like to see?

Why am I the only person who seems to notice this? Is it really just me?

There's something very strange going on here.

Sunday, April 15, 2012

Global Warming 2012 - Are the Denialists really winning?

This Telegraph article is primarily about a Hansen lecture on humanity's failure to think rationally about climate change, but I found the "Global Warming Policy Foundation" [1] funded response ironically interesting ...

Climate scientists are losing the public debate on global warming - Telegraph

... Dr Benny Peiser, director of sceptical think tank The Global Warming Policy Foundation, said governments and the public had "more urgent problems to deal with" than tackling climate change.

He said: "People have become bored by some of the rhetoric from the green movement as they have other things to worry about.

"In reality the backlash against climate change has very little to do with the sceptics. We will take credit for instilling some debate but it is mainly an economic issue. Climate change is not seen as being urgent any more."...

Over the past decade it seems the Denialist line has shifted from "it's not happening" to "it's not due to CO2 emissions" to "it's boring and not urgent".

That's a pretty radical retreat, even as public support for reducing emissions has collapsed in the face of the Lesser Depression (which is very severe now in the UK).

Contrary to the tone of the article, I call this progress. In the real world, the bad guys rarely fall on their knees and declare they were wrong. Yes, there were tobacco executives who did publicly repent, often after they or their loved ones developed lung cancer, but by then they weren't tobacco company executives any more. This denialist declaration of victory is, ironically, an admission of defeat.

Progress is very non-linear. The Lesser Depression will make action very difficult, even as it reduces carbon emissions far more than any tax ever could. Even so, I think we're moving into an era when the interesting debates begin. Debates about risks and costs, about climate engineering vs energy conservation, about who pays and who benefits and what is possible when. Those are debates about values and judgment as much as science.

[1] Funded by Michael Hintze, a hedge fund billionaire. Other funders are not known, but one assumes the usual suspects (Koch, Exxon, etc).

Wednesday, April 04, 2012

Why we tolerate Facebook, but despise Google

Google is dead to us. Dead to we who admired, even liked, the Google of Schmidt and Brin.

I read Google's blogs, but they bore me. I use Google services every hour of the waking day, but sadly. Inside Google, morale is poor. At a meeting of data geeks, where once Google ruled, no-one speaks their name. Dying Yahoo gets more love. Hearts yearn for broken old Microsoft.

Why is this, some wonder? Is Google really more evil than Exxon or Facebook? I don't love Apple, but I buy their stuff willingly. I use Facebook, even though I'd never sell it to anyone.

Is it merely Google's hypocrisy?

It's more than hypocrisy.

We trusted Google. It wasn't just marketing, Google's actions were different from other public companies. That's why we gave Google control of our email, why we used Google's search tools, why I signed up for Blogger, why, even a year ago, when I should have known better, I let Google manage my net identity.

We were wrong. We feel betrayed.

Google made me feel stupid.

That's why we despise Google.


Sunday, March 18, 2012

Whatever happened to the Fuel Cell?

In November of 2001 British Telecommunications published a white paper prepared by two futurists - Ian Pearson and Ian Neild. It's a bit hard to find online now, but there's an html version on ariska.org and the original PDF here. I came across it while running one of my custom google searches across my identity archives.

The two Ians are still in business, but I hope they're a bit more cautious these days. Their 2001 forecasts were a bit ... aggressive. The ones they got right were mostly prosaic - gene sequencing, basic demographics, and internet growth. They missed the rise of China and were oddly too optimistic about mobile internet access.

Otherwise they were way off. In particular, they were absurdly optimistic about the rise of the AI. It's interesting to look at how far off. For example:

Chat show hosted by robot  2003
Confessions to AI priest  2004
AI teachers in school  2004
Computers that write most of their own software  2005
Domestic appliances with personality and talking head interface  2007
AI students  2007
Highest earning celebrity is synthetic  2010
AI houses which react to occupants  2010
25% of TV celebrities synthetic  2010
Direct electronic pleasure production  2010
Online surgeries dominate first line medical care  2010
Orgasm by email  2010
Quiz shows screen for implant technologies  2010
Artificial senses, sensors directly stimulating nerves  2012
Some implants seen as status symbols  2012
Computer agents start being thought of as colleagues instead of tools  2013...

It's a long list. I kept it because in 2001 it was fun but preposterous. I like to think it was prepared at the local pub with a dartboard and a stack of science fiction novels; I hope British Telecom published it to confuse their enemies. (It makes my own list of failed predictions seem absurdly prescient. Maybe BT should be paying me.)

One of their big misses is, however, interesting for other reasons ...

... Home fuel cell based 7kW generator 2001...

I remember fuel cells. It wasn't only that we were supposed to have them in our homes. They were supposed to power our hydrogen cars; pop magazines ran major articles about Canada's BC-based fuel cell industry. Toshiba was, once upon a time, a year away from methanol fuel cells for laptops.

Obviously, none of that happened. Instead fuel cells are showing up in data centers -- and that's supposed to be news.

So why did the Fuel Cell future fail? Ben Wiens, who worked at that BC-based fuel cell company, has a good technical description...

A few years ago it looked like micro fuel cells would soon be powering many portable electronic products. But this has not come to pass. One issue is that batteries have become much more powerful, and electronic devices smaller. Also, it has been hard to fit the fuel cell into the same thin profile of the battery. Another issue is that there is a problem with certain fuels being transported by passengers on aircraft. There are still some technical issues to be solved. The present price of fuel cells is higher than batteries. In my opinion the reason why micro fuel cells haven't penetrated the market however has nothing to do with the above factors....

... Fuel cells produce electricity. This is not the desired form of energy for transportation. The electricity must be converted into mechanical power using an electric motor. The Otto or Diesel cycle produces the required mechanical power directly. This gives them an advantage compared to fuel cell powered automobiles.

Presently Otto and Diesel cycle engines seem to be able to comply with extremely stringent pollution regulations, are inexpensive to produce, produce reasonable fuel economy, and use readily available liquid fuels. Fuel cell vehicles have a much greater chance of being accepted however in the future when fuel prices are higher and liquid fossil fuels are in short supply. However fuel cell vehicles will then be competing with electric vehicles which will be cheaper to operate but have problems with recharging...

Wiens' article is the best I can find. Which brings up the real point of this post. Why hasn't there been more journalism on what happened to the fuel cell? Doesn't a failed revolution deserve a bit of an obituary? The rise and fall of the Fuel Cell, and the associated (extreme) hype and post-collapse silence, would make a great cautionary tale. Reading Wiens' summary, it seems as though a few wee issues in thermodynamics and hydrogen production were overlooked. Isn't it worth understanding why these things were missed? Aren't there lessons there that would serve us well now, as the rationalists among us consider our carbon-constrained energy options?

Journalists, where are you?

Sunday, February 26, 2012

Americans Elect - another try at GOP 2.0

Unsurprisingly, given the current state of the GOP presidential primary, people who'd prefer to vote GOP are advocating third party equivalents. This endorsement is from a Marketarian venture capitalist ...

A VC: Americans Elect (Fred Wilson)

Yesterday my partner Albert and I sat down with the people behind Americans Elect. For those that don't know, Americans Elect is an online third party movement. In their words, "Pick A President, Not A Party."...

Fred and his kin assert the usual 'false equivalence' claim that both parties are equally dysfunctional. Sorry, that's not true. Team Obama is a good representative of a reason-based (data + logic, including evaluation of political realities) implementation of social compact ("Branch I") values for a multicultural nation. The 2012 Dems are about as healthy as political parties get in an era where voters tolerate widespread corruption.

The problem, of course, is with the GOP. It has fallen into a political death-spiral where its survival depends on tribes that lack a common framework for interpreting reality. Some cleave to particular religious doctrines, others to secular tribal beliefs. The modern GOP is the party of unreason.

Obviously, this is bad. It's bad because the GOP has quite a good chance of taking full control of government. It's bad because a weak GOP will lead the Dems to destroy themselves - and we'll have no government at all.

We all need GOP 2.0, a reason based representation of Branch II values, a party that speaks for the powerful, the incorporated, the status quo, the authoritarian impulse and all those wary of change and disruption. Americans Elect is a part of the process of finding GOP 2.0. I wish them luck; we need this process to succeed.

Sunday, January 08, 2012

Rule 34 by Charlie Stross - my review

I read Charlie Stross's Rule 34. Here's my 5 star Amazon review (slightly modified as I thought of a few more things):

Rule 34 is brilliant work.

If Stross had written a novel placed in 2010, it would have been a top notch crime and suspense novel. Charlie's portrayal of the criminal mind, from silence-of-the-lambs psychopath (sociopath in UK speak, though that US/UK distinction is blurring) to every day petty crook, is top notch.

Stross puts us into the minds of his villains, heroes, and fools, using a curious 2nd person pronoun style that has a surprising significance. I loved how so many of his villains felt they were players, while others knew they were pawns. Only the most insightful know they're a cog in the machine.

A cog in a corporate machine that is. Whether cop or criminal or other, whether gay or straight, everyone is a component of a corporation. Not the megacorp of Gibson and Blade Runner, but the ubiquitous corporate meme that we also live in. The corporate meme has metastasized. It is invisible, it is everywhere, and it makes use of all material. Minds of all kinds, from Aspergerish to sociopath, for better and for worse, find a home in this ecosystem. The language of today's sycophantic guides to business is mainstream here.

Stross manages the suspense and twists of the thriller, and explores emerging sociology as he goes. The man has clearly done his homework on the entangled worlds of spam and netporn -- and I'm looking forward to the interviewers who ask him what that research was like. In other works Stross has written about the spamularity, and in Rule 34 he lays it out. He should give some credit to the spambots that constantly attack his personal blog.

Rule 34 stands on its own as a thriller/crime/character novel, but it doesn't take place in 2010. It takes place sometime in the 2020-2030s (at one point in the novel Stross gives us a date but I can't remember it exactly). A lot of the best science fiction features fully imagined worlds, and this world is complete. He's hit every current day extrapolation I've ever thought of, and many more besides. From the macroeconomics of middle Asia, to honey pots with honey pots, to amplified 00s style investment scams to home foundries to spamfested networked worlds to a carbon-priced economy to mass disability to cyberfraud of the vulnerable to ubiquitous surveillance to the bursting of the higher education bubble, to exploding jurisprudence creating universal crime … Phew. There's a lot more besides. I should have been making a list as I read.

Yes, Rule 34 is definitely a "hard" science fiction novel -- though it's easy to skip over the mind-bending parts if you're not a genre fan. You can't, however, completely avoid Stross's explorations of the nature of consciousness, and his take on the "Singularity" (aka rapture of the nerds). It's not giving away too much to say there's no rapture here. As to whether this is a Rainbows End pre-Singular world … well, you'll have to read the novel and make your own decision. I'm not sure I'd take Stross's opinion on where this world of his is going - at least not at face value.

Oh, and if you squint a certain way, you can see a sort-of Batman in there too. I think that was deliberate; someone needs to ask Charlie about that.

Great stuff, and a Hugo contender for sure.

If you've read my blog you know I'm fond of extrapolating to the near future. Walking down my blog's tag list I see I'm keen on the nature and evolution of the Corporation, mind and consciousness, economics, today's history, emergence, carbon taxes, fraud and "the weak", the Great Recession (Lesser Depression), alternative minds (I live with 2 non-neurotypicals), corruption, politics, governance, the higher education bubble, natural selection, identity, libertarianism (as a bad thing), memes, memory management, poverty (and mass disability), reputation management, schizophrenia and mental illness, security, technology, and the whitewater world. Not to mention the Singularity/Fermi Paradox (for me they're entangled -- I'm not a Happy Singularity sort of guy).

Well, Stross has, I dare to say, some of the same interests. Ok, so I'm not in much doubt of that. I read the guy religiously, and I'm sure I've reprocessed everything he's written. In Rule 34 he's hit all of these bases and more. Most impressively, if you're not looking for it, you could miss almost all of it. Stross weaves it in, just as he does a slow reveal of the nature of his characters, including the nature of the character you don't know about until the end.

Update: In one of those weird synchronicity things, Stross has his 2032 and 2092 predictions out this morning. Read 'em.

Sunday, January 01, 2012

Medical fads - are they cycling faster?

We've always had crazy fads in medicine.

I fell for a few when I had wet ears. Magnesium Sulfate post-MI is the one I remember best. That one even made it to textbooks before it died.

It's typical of medical fads that they infest journals, and now newspapers, but usually die before they get to textbooks. Estrogen for osteoporosis wasn't in that class -- that was a somewhat understandable research problem. Medical fads are less forgivable; they really aren't supported by evidence. They're built on easy money and bored specialists.

It feels like the fads are cycling faster. Emily and I thought the Vitamin D craze had another year or two, but it died fast.

Our local minor neurotrauma ("acute mild head injury") craze reeks of fadism. In Minnesota recommendations are being written into law, with little basis in science. As of today, PubMed has precious few studies.

Maybe it will be real. Some cults become established religions, some fads become science.

I don't think this one will make it to science, but I do think it will cause significant harm along the way. Labels are powerful.

Hope this cycles as fast as Vitamin D, but putting minor traumatic brain injury into law may stretch its lifespan. Medical fadism is a crime against the vulnerable...

Update: More on the Vitamin D fad.

A few readers asked me for more detail on the vitamin D fad.

Briefly, for a year or two, I couldn't avoid popular articles claiming that Americans suffered from an epidemic of Vitamin D deficiency causing a wide range of disorders, and that recommended daily allowances were inadequate. Then, at the end of 2010, the Institute of Medicine published a report declaring that the science wasn't there, and that overdosing was more harmful than expected ... (emphases mine)

... The committee provided an exhaustive review of studies on potential health outcomes and found that the evidence supported a role for these nutrients in bone health but not in other health conditions. Overall, the committee concludes that the majority of Americans and Canadians are receiving adequate amounts of both calcium and vitamin D. Further, there is emerging evidence that too much of these nutrients may be harmful...

In retrospect, within a few months of the IOM report, the media attention ended. The fad moved on.

There's still science to be done of course. Ever since medical school I've wondered about the relationship of latitude to multiple sclerosis, and whether there was some kind of cutaneous immunity/solar radiation component. Today there are many interesting articles on the relationship between vitamin D and MS. That's research though, the fad is over.

Friday, December 02, 2011

The AI Age: Siri and Me

Memory is just a story we believe.

I remember that when I was on a city bus, and so perhaps 8 years old, a friend showed me a "library card". I was amazed, but I knew that libraries were made for me.

When I saw the web ... No, not the web. It was Gopher. I read the minutes of a town meeting in New Zealand. I knew it was made for me. Alta Vista - same thing.

Siri too. It's slow, but I'm good with adjusting my pace and dialect. We've been in the post-AI world for over a decade, but Siri is the mind with a name.

A simple mind, to be sure. Even so, Kurzweil isn't as funny as he used to be; maybe Siri's children will be here before 2100 after all.

In the meantime, we get squeezed...

Artificial intelligence: Difference Engine: Luddite legacy | The Economist

... if the Luddite Fallacy (as it has become known in development economics) were true, we would all be out of work by now—as a result of the compounding effects of productivity. While technological progress may cause workers with out-dated skills to become redundant, the past two centuries have shown that the idea that increasing productivity leads axiomatically to widespread unemployment is nonsense...

[there is]... the disturbing thought that, sluggish business cycles aside, America's current employment woes stem from a precipitous and permanent change caused by not too little technological progress, but too much. The evidence is irrefutable that computerised automation, networks and artificial intelligence (AI)—including machine-learning, language-translation, and speech- and pattern-recognition software—are beginning to render many jobs simply obsolete....

... The argument against the Luddite Fallacy rests on two assumptions: one is that machines are tools used by workers to increase their productivity; the other is that the majority of workers are capable of becoming machine operators. What happens when these assumptions cease to apply—when machines are smart enough to become workers? In other words, when capital becomes labour. At that point, the Luddite Fallacy looks rather less fallacious.

This is what Jeremy Rifkin, a social critic, was driving at in his book, “The End of Work”, published in 1995. Though not the first to do so, Mr Rifkin argued prophetically that society was entering a new phase—one in which fewer and fewer workers would be needed to produce all the goods and services consumed. “In the years ahead,” he wrote, “more sophisticated software technologies are going to bring civilisation ever closer to a near-workerless world.”

...In 2009, Martin Ford, a software entrepreneur from Silicon Valley, noted in “The Lights in the Tunnel” that new occupations created by technology—web coders, mobile-phone salesmen, wind-turbine technicians and so on—represent a tiny fraction of employment... In his analysis, Mr Ford noted how technology and innovation improve productivity exponentially, while human consumption increases in a more linear fashion.... Mr Ford has identified over 50m jobs in America—nearly 40% of all employment—which, to a greater or lesser extent, could be performed by a piece of software running on a computer...

In their recent book, “Race Against the Machine”, Erik Brynjolfsson and Andrew McAfee from the Massachusetts Institute of Technology agree with Mr Ford's analysis—namely, that the jobs lost since the Great Recession are unlikely to return. They agree, too, that the brunt of the shake-out will be borne by middle-income knowledge workers, including those in the retail, legal and information industries...
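Ford's exponential-vs-linear point is easy to see with a toy calculation. This is my own illustration with made-up growth rates, not numbers from his book:

```python
# Toy illustration (my numbers, not Ford's): productivity compounds
# exponentially while consumption grows by a fixed increment per year.

def productivity(year, growth=0.04):
    """Output-per-worker index, compounding at `growth` per year."""
    return (1 + growth) ** year

def consumption(year, slope=0.02):
    """Demand index, growing by a fixed `slope` per year."""
    return 1 + slope * year

for year in (0, 10, 25, 50):
    print(f"year {year:2d}: productivity {productivity(year):5.2f}, "
          f"consumption {consumption(year):4.2f}")
```

At these rates the two indices start together, but within fifty years productivity is more than three times consumption. That widening gap is the mechanism Ford describes.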

Even in the near term, the US Labor Department predicts that the 17% of US workers in "office and administrative support" will be replaced by automation.

It's not only the winners of the 1st world birth lottery that are threatened.

China's Foxconn (Taiwan-based) employs about 1 million people. Many of them will be replaced by robots.

It's disruptive, but given time we could adjust. Today's AIs aren't tweaking the permeability of free space; there are still a few things we do better than they. We also have complementary cognitive biases; a neurotypical human with an AI in the pocket will do things few unaided humans can do. Perhaps even a 2045 AI will keep human pets for their unexpected insights. Either way, it's a job.

Perhaps more interestingly, a cognitively disabled human with a personal AI may be able to take on work that is now impossible.

Economically, of course, the productivity/consumption circuit has to close. AIs don't (yet) buy info-porn. If .1% of humans get 80% of revenue, then they'll be taxed at 90% marginal rates and the 99.9% will do subsidized labor. That's what we do for special needs adults now, and we're all special needs eventually.
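The closing-the-circuit arithmetic works out like this. A back-of-envelope sketch of the post's hypothetical numbers; for simplicity I apply the 90% rate to the whole top slice, so it's an average rate rather than a true marginal one:

```python
# Sketch of the hypothetical above: 0.1% of people capture 80% of
# revenue, taxed at 90%, funding subsidized labor for the 99.9%.

total_revenue = 100.0                  # arbitrary units
top_income = 0.80 * total_revenue      # captured by the 0.1%
tax_collected = 0.90 * top_income      # flat 90% on the top slice
bottom_income = (total_revenue - top_income) + tax_collected

print(f"top 0.1% keeps:        {top_income - tax_collected:5.1f}")
print(f"transfer to the 99.9%: {tax_collected:5.1f}")
print(f"99.9% ends up with:    {bottom_income:5.1f}")
```

Even under these extreme assumptions the 99.9% end up with most of the revenue, which is how the productivity/consumption circuit closes.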

So, given time, we can adjust. Problem is, we won't get time. We will need to adjust even as our world transforms exponentially. It could be tricky.

See also:

Saturday, November 26, 2011

Mass disability goes mainstream: disequilibria and RCIIT

I've been chattering for a few years about the rise of mass disability and the role of RCIIT (India, China, computers, networks) in the Lesser Depression. This has taken me a bit out of the Krugman camp, which means I'm probably wrong.

Yes, I accept Krugman's thesis that the proximal cause of depression is a collapse in demand combined with the zero-bound problem. Hard to argue with arithmetic.

I think there's more going on though: secular trends that will be with us even if we followed Krugman's wise advice. In fact, under the surface, I suspect Krugman and DeLong believe this as well. I've read Krugman for years, and he was once more worried about the impact of globalization and IT than he's now willing to admit. Sometimes he has to simplify.

For example, fraud has always been with us -- but something happened to make traditional fraud far more effective over the past thirteen years. I think that "something" was the rise of information technology and associated complexity; a technology that allowed financiers to appear to be contributing value even though their primary role was parasitic.

Similarly, the rise of China and India is, in the long run, good for the entire world. In the near future, however, it's very hard for world economies to adjust. Income shifts to a tiny fraction of Americans, many jobs are disrupted, people have to move, to change careers, etc. It takes time for new tax structures to be accepted, for new work to emerge. IT has the same disruptive effect. AI and communication networks will further limit the jobs we can take where our economic returns are equal or greater than the minimum wage.

I think these ideas are starting to get traction. Today Herbert Gans is writing in the NYT about the age of the superfluous worker. A few days ago The Economist reviewed a book about disequilibria and IT ...

Economics Focus: Marathon machine | The Economist

... Erik Brynjolfsson, an economist, and Andrew McAfee, a technology expert, argue in their new e-book, “Race Against the Machine”, that too much innovation is the bane of struggling workers. Progress in information and communication technology (ICT) may be occurring too fast for labour markets to keep up. Such a revolution ought to be obvious enough to dissuade others from writing about stagnation. But Messrs Brynjolfsson and McAfee argue that because the growth is exponential, it is deceptive in its pace...

... Progress in many areas of ICT follows Moore’s law, they write, which suggests that circuit performance should double every 1-2 years. In the early years of the ICT revolution, during the flat part of the exponential curve, progress seemed interesting but limited in its applications. As doublings accumulate, however, and technology moves into the steep part of the exponential curve, great leaps become possible. Technological feats such as self-driving cars and voice-recognition and translation programmes, not long ago a distant hope, are now realities. Further progress may generate profound economic change, they say. ICT is a “general purpose technology”, like steam-power or electrification, able to affect businesses in all industries...

... There will also be growing pains. Technology allows firms to offshore back-office tasks, for instance, or replace cashiers with automated kiosks. Powerful new systems may threaten the jobs of those who felt safe from technology. Pattern-recognition software is used to do work previously accomplished by teams of lawyers. Programmes can do a passable job writing up baseball games, and may soon fill parts of newspaper sections (those not sunk by free online competition). Workers are displaced, but businesses are proving slow to find new uses for the labour made available. Those left unemployed or underemployed are struggling to retrain and catch up with the new economy’s needs.

As a result, the labour force is polarising. Many of those once employed as semi-skilled workers are now fighting for low-wage jobs. Change has been good for those at the very top. Whereas real wages have been falling or flat for most workers, they have increased for those who have advanced degrees. Owners of capital have also benefited. They have enjoyed big gains from the increased returns on investments in equipment. Technology is allowing the best performers in many fields, such as superstar entertainers, to dominate global markets, crowding out those even slightly less skilled. And technology has yet to cut costs for health care, or education. Much of the rich world’s workforce has been squeezed on two sides, by stagnant wages and rising costs.
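The "flat then steep" shape of the doubling curve the authors describe takes only a couple of lines to see. An 18-month doubling time is the usual Moore's-law shorthand; the figure is illustrative:

```python
# Doublings accumulate: with an 18-month doubling time, the first few
# years look flat and the later years look explosive.

def performance(years, doubling_time=1.5):
    """Relative circuit performance after `years` of steady doubling."""
    return 2 ** (years / doubling_time)

for y in (3, 15, 30):
    print(f"after {y:2d} years: {performance(y):>12,.0f}x baseline")
```

A 4x gain over the first three years barely registers; the same process run for thirty years yields a million-fold gain, which is why the steep part of the curve keeps surprising us.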

In time the economy will adjust -- unless exponential IT transformation actually continues [1]. Alas, the AI revolution is well underway and technology cycles are still brutally short. I don't see adjustment happening within the next six years. The whitewater isn't calming.

[1] That is, of course, the Singularity premise, as previously reviewed in The Economist.

Update 12/3/2011: And how does the great stagnation play into this - Gordon's Notes: Ants, corporations and the great stagnation?

Wednesday, November 23, 2011

Too much history

One of the reasons I blog is to engage with a fascinating world, and to track the streams of history.

I'm finding that harder to do. It's not that I don't see the streams, or see ways to connect them -- it's that there's too much. I feel as though history just kicked up a gear.

Partly this is the loss of Google Reader's share/tracking functions. They were a key component of how I engaged with my fragments of the world's knowledge flow. Even if nothing else had changed, losing those functions and my share repository would be disorienting.

I don't think it's just the loss of Reader though. It's more that Reader's capabilities masked the rate of change. Without them, it's easier to see how the world is changing.

These are truly whitewater times.

Friday, November 18, 2011

Social media is so 2000

GigaOm has a longish cloud computing post around a Peachtree Capital Advisors investor survey (full report is by request only).

I usually don't pay much attention to consulting group reports like this, but there were a few comments that struck me as interesting....

VCs: Don’t mistake cloud computing for cloud opportunity — Cloud Computing News

... tech investors are underwhelmed by social computing: A whopping 88 percent characterized the social media segment (including collaboration) as overvalued....

... The whole big data explosion that most businesses are trying to capture depends on the wide availability of diverse data from many sources, including the so-called Bermuda Triangle of Facebook, Twitter and Google...

... 35 percent of those surveyed said they think enterprise software as a category is undervalued...

By enterprise software they presumably mean Microsoft, Oracle, SAP, etc.

I was struck by the declining interest in social media. That may be because investors figure it's a mature segment (!) and Facebook owns it. Or that consumers are (re)turning to Cable TV.

I think both may be true. When a sclerotic company like Google 2.0 jumps into a domain, you can be pretty sure it's yesterday's news. Consumer tech cycles are viciously short now; fashion designers understand this all too well.

On the other hand, I'm also impressed by how quiet Facebook, G+ and the rest feel now, and "how happy this man looks" (SplatF). By my estimate we're in year 12 of the long depression, and we have years to go. Cable TV has not been displaced, and if consumers have limited time and attention ...

Thursday, November 03, 2011

Refugees from the wreck of Google Reader ...

Forbes ...

The Google Reader Redesign is an Ugly, Lonely User Experience - Ed Kain - Forbes

... On the overall changes as well as the unhelpful response from Google to its user base I give the new Google Reader a big, fat “E” for Evil. I guess the company’s slogan really was just a slogan. What fools we were to think it might have been anything more than that...

Kain writes about "sharebros". I never heard of the term, and I was a mad sharer.

The Atlantic Wire

The Sharebros Are Building a Google Reader Replacement - Technology - The Atlantic Wire

Good article about Hivemined, despite the "sharebros".

More from the wire ...

Google Reader Backlash: A Fuss Over Nothing? - Rebecca J. Rosen - Technology - The Atlantic

... In a few ways, mostly aesthetic, Google Reader does seem better...

But for people who used Google Reader's sharing features, the upgrade is a big loss, for all intents and purposes ruining that aspect of Reader. The old sharing methods have been totally supplanted with Google+ tools, which, quality aside, are too different to satisfy the same needs. I'm going to dive into the nitty-gritty here, so consider yourself warned....

... The location of buttons, while annoying, does not ruin Google Reader's sharing utility.

... What does is having to read everything on Google+. First, it takes the experience out of Reader completely, making reading RSS feeds and reading your friends' gleanings from their RSS feeds two different activities. Second, it means that no longer can you read your friends' finds without also reading the other stuff they've posted on Google+...

... Finally, the worst part of reading shared items in Google+ is the stream. In Google Reader, you could easily come back to a post when a new comment appeared, or even put off reading certain streams until the weekend or until you left work. Now, once an item moves down the stream, the only way to get back to it is to scroll down. This will be the end of the Google Reader conversations that were the heart of Google Reader sharing...

There's a Facebook site for us shattered refugees. There I found a ranting Hitler parody that's particularly appropriate. I like the last line. Me too.