Saturday, March 23, 2013

Schneier: Security, technology, and why global warming isn't a real problem

In the Fever Days after September 2001, I wrote a bit about "the cost of havoc". The premise was that technology was consistently reducing the cost of havoc, but the cost of prevention was falling less quickly.

I still have my writing, but most of it is offline - esp. prior to 2004. As I said, those were the times of fever; back then we saw few alternatives to a surveillance society. Imagine that.

Ok, so that part did happen. On the other hand, we don't have Chinese home bioweapon labs yet. Other than ubiquitous surveillance, 2013 is more like 2004 than I'd expected.

The falling ratio of offense cost to defense cost remains, though. Today it's Schneier's turn to write about it… (emphases mine)

Schneier on Security: When Technology Overtakes Security

A core, not side, effect of technology is its ability to magnify power and multiply force -- for both attackers and defenders….

.. The problem is that it's not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They're more nimble and adaptable than defensive institutions like police forces. They're not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side -- it's easier to destroy something than it is to prevent, defend against, or recover from that destruction.

For the most part, though, society still wins. The bad guys simply can't do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?

I don't think it can.

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious...and so on. A single attacker, or small group of attackers, can cause more destruction than ever before...

.. Traditional security largely works "after the fact"… When that isn't enough, we resort to "before-the-fact" security measures. These come in two basic varieties: general surveillance of people in an effort to stop them before they do damage, and specific interdictions in an effort to stop people from using those technologies to do damage.

Lots of technologies are already restricted: entire classes of drugs, entire classes of munitions, explosive materials, biological agents. There are age restrictions on vehicles and training restrictions on complex systems like aircraft. We're already almost entirely living in a surveillance state, though we don't realize it or won't admit it to ourselves. This will only get worse as technology advances… today's Ph.D. theses are tomorrow's high-school science-fair projects.

Increasingly, broad prohibitions on technologies, constant ubiquitous surveillance, and Minority Report-like preemptive security will become the norm..

… sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder...

… If security won't work in the end, what is the solution?

Resilience -- building systems able to survive unexpected and devastating attacks -- is the best answer we have right now. We need to recognize that large-scale attacks will happen, that society can survive more than we give it credit for, and that we can design systems to survive these sorts of attacks. Calling terrorism an existential threat is ridiculous in a country where more people die each month in car crashes than died in the 9/11 terrorist attacks.

If the U.S. can survive the destruction of an entire city -- witness New Orleans after Hurricane Katrina or even New York after Sandy -- we need to start acting like it, and planning for it. Still, it's hard to see how resilience buys us anything but additional time. Technology will continue to advance, and right now we don't know how to adapt any defenses -- including resilience -- fast enough.

We need a more flexible and rationally reactive approach to these problems and new regimes of trust for our information-interconnected world. We're going to have to figure this out if we want to survive, and I'm not sure how many decades we have left.

Here's shorter Schneier, which is an awful lot like what I wrote in 2001 (and many others wrote in classified reports):

  • Stage 1: Universal surveillance, polite police state, restricted technologies. We've done this.
  • Stage 2: Resilience -- grow accustomed to losing cities. We're not (cough) quite there yet.
  • Stage 3: Resilience fails, we go to plan C. (Caves?)

Or even shorter Schneier:

  • Don't worry about global warming.

Grim stuff, but I'll try for a bit of hope. Many of the people who put together nuclear weapons assumed we'd have had a history-ending nuclear war by now. We've had several extremely close calls (not secret, but not widely known), but we're still around. I don't understand how we've made it this far, but maybe whatever got us from 1945 to 2013 will get us to 2081.

Another bright side -- we don't need to worry about sentient AIs. We're going to destroy ourselves anyway, so they probably won't do much worse.

Thursday, March 21, 2013

Fermi Paradox: The solution set is stable


Any discussion of the Fermi Paradox has to be presented with a wink and a chuckle. Even if, behind the wink, there's a haunted look in the eyes.

Today's io9 version is no exception, but it contains two of my personal faves (wink, chuckle, give me a drink) ....
11 of the Weirdest Solutions to the Fermi Paradox 
...5. The Simulation Hypothesis 
...We haven’t been visited by anyone because we’re living inside a computer simulation — and the simulation isn’t generating any extraterrestrial companions for us.
If true, this could imply one of three things. First, the bastards — I mean Gods — running the simulation have rigged it such that we’re the only civilization in the entire Galaxy (or even the Universe)...
... the simulation is being run by a posthuman civilization in search of an answer to the Fermi Paradox, or some other scientific question. Maybe, in an attempt to entertain various hypotheses (perhaps even preemptively in consideration of some proposed action), they’re running a billion different ancestor simulations to determine how many of them produce spacefaring civilizations, or even post-Singularity stage civilizations like themselves... 
... 7. All Aliens Are Homebodies 
... An advanced ETI, upon graduating to a Kardashev II scale civilization, could lose all galactic-scale ambitions. Once a Dyson sphere or Matrioshka Brain is set up, an alien civilization would have more action and adventure in its local area than it knows what to do with. Massive supercomputers would be able to simulate universes within universes, and lifetimes within lifetimes — and at speeds and variations far removed from what’s exhibited in the tired old analog world. By comparison, the rest of the galaxy would seem like a boring and desolate place. Space could very much be in the rear view mirror...
The list omits the Theist Hypothesis -- that God(s) created Man to be Alone. This is, of course, simply a variant of the Simulation Hypothesis.

I reinvented the Homebody Theory around 2000, but I later learned it goes back decades. The basic idea is that every civilization either dies or goes 'singular', and post-singular civilizations are invariably uninterested in childish pursuits like interstellar travel.

The 'Phase Transition Hypothesis' doesn't really belong on the list; it's really just a term in the Drake Equation (technological life has been rare, etc).
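For reference, here is the standard form of the Drake Equation; the 'Phase Transition' idea just argues that one or more of its factors has been small until recently:

```latex
N = R_{*} \cdot f_p \cdot n_e \cdot f_{\ell} \cdot f_i \cdot f_c \cdot L
```

where N is the number of detectable civilizations, R_* the rate of star formation, f_p the fraction of stars with planets, n_e the habitable planets per planetary system, f_ℓ the fraction of those that develop life, f_i the fraction that develop intelligence, f_c the fraction that emit detectable signals, and L the lifetime of a communicating civilization.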

The io9 post is a nice reference even if there's nothing new in the list; the set has been stable for at least seven years.


Sunday, March 17, 2013

Device size: clothing makes the choice

An App.net thread reminded me of some analysis I did in the 90s on what device sizes we should build our Cloud ASP service against. That analysis focused on pockets and purses; it came in an era when men still had shirt pockets and Jeff Hawkins carried around wooden models of the Pilot until he settled on one that fit his shirt pocket.

Since that time our devices have evolved a bit -- though not as much as many think. We had slates in the 90s too -- they were just heavy and ran Windows variants or Windows thin client OS. Our clothing may have changed more [3]. Shirt pockets are gone, suit jackets are less common, and pants pockets are larger. Pocket location may also have shifted as men have gotten fatter around the world. (I don't know what's popular in China.)

This produces some interesting size options based on clothing. Here's my own personal list of exemplar devices for each transport option with a gender assignment based on typical American practice.

Gender  Clothing                         Device
b       None [1]                         watch
m       Traditional pants front pocket   iPhone 5S
m       Expansive pants front pocket     Samsung Galaxy [2]
f       Purse                            Samsung Galaxy
f       Purse                            iPad Mini
b       Backpack / shoulder bag          iPad Mini
b       Backpack / shoulder bag          MacBook Air 11"
b       Briefcase                        iPad (full) + Logitech Kb/Case
b       Briefcase                        MacBook Air 13"

Running through the list, and disregarding whether one wants a phone or not, device options are probably best determined by one's clothing habits. The list also suggests that women should disproportionately prefer the Samsung Galaxy to the iPhone while men should split 50/50 -- so the Samsung Galaxy should outsell the iPhone 5 about 1.5:1.

If the iPad Mini provided voice services the list predicts the combination of iPhone 5 and iPad Mini(v) would equal or exceed Samsung Galaxy sales.

This analysis suggests a narrow niche for the 11" Air. I have one and I like it, but if I were buying an Air today I'd get the 13". If I'm carrying a briefcase I might as well get the 13" Air or an iPad with Logitech Kb/Case. The 13" Air vs. iPad tradeoff is an interesting one -- for many travel cases I think the iPad wins on power and bandwidth consumption -- but see the comments -- Charlie Stross and Martin Steiger disagree. I can imagine a future version of OS X and OS X hardware with iPad-like power and bandwidth use -- in which case I'd go Air.

[1] Swimsuit, running gear, nudist colony.
[2] I haven't seen mention of this, but my understanding is that Android handles variant screen geometry more easily than iOS. 
[3] Our devices must be influencing our clothing styles by now.

Saturday, March 16, 2013

Project Ducky - why I've stopped using new Cloud services.

Dilbert 04/14/1994:


Yeah, Google Reader is on many geek minds today, but it's not the only cloud death to disrupt my routine.  Today, working in Aperture, I tried searching for an AppleScript workaround for Aperture's single-window mangled Project problems. I found a couple of good references -- to Apple .Mac pages that died with MobileMe. The page owners never recreated their lost resources.

Later I wanted to upload some photos from our special hockey team. I remembered then that Google discontinued Mac/Picasa integration and the iPhoto Plug-in.

Within a year I expect Google is going to discontinue Blogger, which currently hosts this blog.

Enough.

I'm on strike. You want my business? Give me standards. Give me products I pay for that have low exit costs and that have competitors. 

Oh, yeah, Google - go away.


Thursday, March 14, 2013

Google's war on standards: RSS, ActiveSync, now CalDAV

I remember when Google seemed to be somewhat friendly to standards and to the idea of open interchange.

That was, of course, Google 1.0. Now we live with Google 2.0.

With the neglect of Blogger, the end of Google Reader, and the RSS-free launch of G+, Google has put a stake in the RSS/Atom subscription standard. (Google played a big role in the development of Atom; when most of us write "RSS" we mean "RSS/Atom".)

Recently Google limited support for ActiveSync, a de facto standard based on Microsoft Exchange technologies.

Now Clark reminds me that they've also ended CalDAV support, which I use to view my Google Calendars on my 11" Air:

Official Blog: A second spring of cleaning

... CalDAV API will become available for whitelisted developers, and will be shut down for other developers on September 16, 2013. Most developers’ use cases are handled well by Google Calendar API, which we recommend using instead. If you’re a developer and the Calendar API won’t work for you, please fill out this form to tell us about your use case and request access to whitelisted-only CalDAV API...

I'm glad I never committed to Android. I'm deeply enmeshed in the Google ecosystem, but it is time I started digging out.

Quick thoughts on the end of Google Reader

Of all the things Google has killed, I used Reader most of all.

I'm sad to see it go, but, unlike the end of Reader Social, I'm not angry. I'm just surprised Reader lasted as long as it did.

Reader feels now like something from a mythical Golden Age. Free, but with minimal and non-intrusive advertising -- particularly when used with Reeder.app and Readability. Standards based (OPML, RSS/Atom), and so open in a way that few things are in our locked-in locked-down era.

So Reader's end was expected -- and this time Google did things well. Unlike when they killed the 'sharebro' community, Google telegraphed this one. Even now we've got three months to switch -- much longer than I expected. 

It will be interesting to see what our options will be; the end of Reader will open up a new ecosystem. An ecosystem that will, I hope, include services we buy. There is at least one upside -- I'm losing one of my big Google dependencies. It's getting easier for me to switch away from Google.

Incidentally, I bet Blogger will die in 2014.

Saturday, March 09, 2013

Strange loops - five years of wondering why our corporate units couldn't cooperate.

Five years ago I tried to figure out why we couldn't share work across our corporate units.

This turned out to be one of those rabbit hole questions. The more I looked, the stranger it got. I knew there was prior work on the question -- but I didn't know the magic words Google needed. Eventually I reinvented enough economic theory to connect my simple question to Coase's 1937 (!) theorem, the 1970s work on 'the theory of the firm', Brad DeLong's 1997 writings on The Corporation as a Command Economy [1], and Akerlof's 'information asymmetry'. [2]

Among other things I realized that modern corporations are best thought of as feudal command economies whose strength comes more from their combat capacity and ability to purchase legislators and shape their ecosystems than from goods made or services delivered.

Think of the Soviet Union in 1975.

All of which is, I hope, an interesting review -- but why did I title this 'Strange loop'?

Because I used that term in a 2008 post on how Google search, and especially their (then novel) customized search results, was changing how I thought and wrote. This five year recursive dialog is itself a product of that cognitive extension function.

But that's not the only strange loop aspect.

I started this blog post because today I rediscovered DeLong's 1997 paper [1] as a scanned document. I decided to write about it, so I searched on a key phrase looking for a text version. That search, probably customized to my Gordon-identity [3], returned a post I wrote in 2008. [4]

That's just weird.

 - fn -

[1] Oddly the full text paper is no longer available from Brad's site, but a decent scan is still around.

[2] There are at least two Nobel prizes in Economics in that list, so it's nice to know I was pursuing a fertile topic, albeit decades late.

[3] John Gordon is a pseudonym; Gordon is my middle name.

[4] On the one hand it would be nice if I'd remembered I wrote it. On the other hand I've written well over 10,000 blog posts. 

See also: 

We do not understand the world in which we live

It is always this way, on the micro and the macro. I didn't understand high school until college. I didn't understand medical school until I was halfway through. I was deep into the corporation before I recognized my surroundings.

Did hunter-gatherers understand their context? 

Three links that tell us we don't understand ours (all via DeLong):

  • The Singularity in Our Past Light-Cone 11/2010. " ... An implacable drive on the part of those networks to expand, to entrain more and more of the world within their own sphere? ... the radical novelty and strangeness of these assemblages, which are not even intelligent, as we experience intelligence, yet ceaselessly calculating ..."
  • Twentieth Century Economic History - DeLong: "... What do modern people do? Increasingly, they push forward the corpus of technological and scientific knowledge. They educate each other. They doctor each other. ... They provide other services for each other to take advantage of the benefits of specialization. And they engage in complicated symbolic interactions that have the emergent effect of distributing status and power and coordinating the seven-billion person division of labor of today’s economy...
  • Algorithmic Rape Jokes in the Library of Babel | Quiet Babylon: " ... The Kindle store is awash in books confusingly similar to bestsellers... Icon’s books are created by a patented system... products that generate unique text with simple thesaurus rewriting tools called content spinners... Amazon ‘stocks’ more than 500,000 items from Solid Gold Bomb. These things only barely exist. They are print on demand designs... Talk about crapjects and strange shaper subcultures still gives the whole threat a kind of artisanal feel. The true scale of object spam will be much greater..."

In our work, our hive like human world, we seek those who know and do. Some hide themselves, some advertise. Some are specialists, some are generalists, a few are omni-talented. A very few are powerful, a few are powerless, most are in-betweeners. All are enmeshed in systems of symbiosis and parasitism, all embedded in the "novel assemblage".

This world seems strange to me.

It will seem quaint to whatever thinks in 2113.

Friday, March 08, 2013

What's so bad about a bit of torture?

"... The Guardian newspaper unveiled the results of a year-long investigation purporting to show that U.S. military advisers, with the knowledge and support of many senior officials, including former Secretary of Defense Donald Rumsfeld and disgraced Gen. David Petraeus, oversaw a vast program of torture inside Iraqi prisons..

..Col. James Steele and Col. James H Coffman, ran a high-level secret program inside Iraqi prisons to extract information from alleged insurgents and Al Qaeda terrorists...." (10 Years After the Invasion of Iraq, a World of Hurt )

I run into Republicans on occasion. There's my beloved Uncle D for one, and there are some at work and in my Facebook feed.

I run into them, but we don't discuss politics. Similarly I don't consume any GOP media; neither Murdoch's nor talk radio nor right wing blogs. So what I know of Republican thinking is filtered by the NYT, NPR, Ezra Klein, Paul Krugman and the rest of my 400 feeds.

Except for app.net. That's the one place where I get to correspond with intellectual Republicans. It was there that some of us worked through a discussion on the role of torture in modern warfare. During our conversation, I was challenged to defend my scorn for the Bush/Cheney/Rumsfeld (BCR) torture program. I was surprised -- it's been a long time since I've had to think about why the BCR program was a terrible idea.

It's good to have surprises like that, and good to use this blog to think through my position, starting with a contrary "pro-torture" perspective of my own. (I'm not trying to represent my correspondent's position, I'd likely distort it unfairly.)

My pro-torture argument has nothing to do with whether torture is effective or not. That's a red herring; for the sake of argument let us assume that a skilled torturer always breaks any resistance and hears whatever the victim believes to be true.

Instead I, playing the role of Dick Cheney, will argue that torture isn't so bad. After all, we Americans routinely kill combatants and civilians in our many wars, not to mention our domestic execution chambers. We, more than most nations, sentence vast numbers of citizens to particularly nasty prisons.

Those are nasty fates. Given the choice, many of us might opt instead for a bit of sensory deprivation, flogging, waterboarding, electric shocks, and thumbscrews.

So then why should we be particularly averse to torture? If torture is no worse than routine warfare, shouldn't we retroactively pardon the torturers we imprisoned after World War II? Should we apologize to North Korea and North Vietnam for the mean names we called them; and discard our meager loyalty to the Geneva Conventions once and for all?

These are strong arguments, but history tells me they are misguided. There's a reason that torture was slowly removed from the legal code, and that 'cruel and unusual punishment' was a part of the English Bill of Rights in 1689.

One reason is that people who inflict torment on prisoners, who are by definition helpless, are changed by their experience. Some are repelled by the work, but some are attracted to it. The historical record tells us the practice spreads quickly, from special circumstances to general circumstances. From a few isolated rooms to a vast network of American supported Iraqi torture chambers. From the battlefield to Homeland Defense, and from Homeland Defense to the Ultra-security prison, from the Ultra-security prison to the routine prison, from the prison to the streets ...

Torture, history suggests, is habit forming. If humans were machines we might be able to manage torture as readily as we manage prison sentences. We're not though. Our culture fares badly when we make torture acceptable.

Our military knew that in 2005.

We should remember that now.


Sunday, March 03, 2013

Does Edge Gel reduce the lifespan of disposable razors?

I tried searching on this, but couldn't find anything - even in shaveblog.com. So, for what it's worth, here's one article.

I've used Edge Gel for some time. During that time it seemed my disposable razor lifespan was reduced, but I didn't make the connection. Recently they increased the 'Aloe' component and my razor lifespan dropped down to a few days; I assume the shorter lifespan is related to the Aloe.

Then I found using plain soap (not too elegant :-), or cleaning the gunky gel from the razor with a toothpick, significantly increased razor lifespan.

Edge is made by Energizer Holdings, who also sell Schick and Wilkinson razors. So they don't have much incentive to increase razor lifespan. I suspect most customers don't care either way, but it will be interesting to see if I get any comments on this blog post. (I expect Edge's SEO operatives to bury it pretty deeply though :-).

For my part I'm going to go back to a brush and Williams Mug Shaving Soap. They seemed to be disappearing years ago, but I gather they're fashionable now. I expect that will save me enough to buy a coffee or two, and cut my waste output a bit.

PS. Researching this topic led me to a Wirecutter article on The Best Razors. I don't have the patience to do the Merkur thing, and I'm too cheap for the Gillette ProGlide, so I think I'll stick with the cheap Bic dual blades.

The canid domestication of homo sapiens brutalis

Eight years ago, I wondered if European Distemper killed the Native American dog and added a footnote on an old personal hypothesis ...

Humans and dogs have coexisted for a long time, it is extremely likely that we have altered each other's evolution (symbiotes and parasites always alter each other's genome). ... I thought I'd blogged on my wild speculation that it was the domestication of dogs that allowed humans to develop technology and agriculture (geeks and women can domesticate dogs and use a powerful and loyal ally to defend themselves against thuggish alphas) -- but I can't find that ...

 Happily, others have been pursuing this thought ....

We Didn’t Domesticate Dogs. They Domesticated Us Brian Hare and Vanessa Woods

...With this new ability, these protodogs were worth knowing. People who had dogs during a hunt would likely have had an advantage over those who didn't. Even today, tribes in Nicaragua depend on dogs to detect prey. Moose hunters in alpine regions bring home 56 percent more prey when they are accompanied by dogs. In the Congo, hunters believe they would starve without their dogs.

Dogs would also have served as a warning system, barking at hostile strangers from neighboring tribes. They could have defended their humans from predators.

And finally, though this is not a pleasant thought, when times were tough, dogs could have served as an emergency food supply. Thousands of years before refrigeration and with no crops to store, hunter-gatherers had no food reserves until the domestication of dogs. In tough times, dogs that were the least efficient hunters might have been sacrificed to save the group or the best hunting dogs. Once humans realized the usefulness of keeping dogs as an emergency food supply, it was not a huge jump to realize plants could be used in a similar way.

So, far from a benign human adopting a wolf puppy, it is more likely that a population of wolves adopted us. As the advantages of dog ownership became clear, we were as strongly affected by our relationship with them as they have been by their relationship with us....

The primary predators of humans, of course, are other humans. Women's need for protection against men is particularly acute. So which gender would be most interested in, and capable of, the domestication of a strong and loyal ally? What changes would a dog's presence make to a society and a species, and who would lose most when agriculture made dogs less useful?

What Evernote reminded me about my Cloud services - and my 2013 security policies

Evernote was hacked, and they mandated a global password reset.

It's not surprising Evernote was hacked. As Schneier wrote a few days ago about watering-hole attacks and precision phishing ...

Schneier on Security: Phishing Has Gotten Very Good

... Against a sufficiently skilled, funded, and motivated adversary, no network is secure. Period. Attack is much easier than defense, and the reason we've been doing so well for so long is that most attackers are content to attack the most insecure networks and leave the rest alone.

... If the attacker wants you specifically ...  relative security is irrelevant. What matters is whether or not your security is better than the attackers' skill. And so often it's not.

Schneier quotes former NSA Information Assurance Director Brian Snow: "... your cyber systems continue to function and serve you not due to the expertise of your security staff but solely due to the sufferance of your opponents".

It's likely some of Evernote's 50 million customers are of interest to major opponents, so it's not surprising their defenses were inadequate [1].

I don't make much use of Evernote, but I did a password reset anyway. Which is when I discovered ...

  • I was still using my non-robust 'evaluation period' password with Evernote. [2]
  • I was using said weak pw with test data that included photographs of the children's passports and my old PalmOS notes
  • I never purged my Evernote account when I decided not to use them (I went with Simplenote/Notational Velocity instead.)
Wow, by my standards that's quite a fail. When Cue.app failed a recent evaluation, I deleted my test data immediately. In the case of Evernote I may yet sign up with them, so after I reset my password to something robust I merely deleted my old data [3]. 
 
All of which has led me to update my now laughably quaint 2010 lessons learned and security risks summary. Here's my current list. It's far from perfect; I'd like to say I avoid all services that use 'security questions' and high-risk reset procedures, but then I'd use nothing.
  1. If data is in the Cloud, and you do not personally hold the only encryption keys, it is 2/3 public. Treat it that way.
  2. Clean up your services. If you aren't using a Cloud service delete the account or your data.
  3. Obviously, don't reuse important credentials, use a password manager (ex: 1Password [4])
  4. Use Google two factor for your most critical Google credentials, even though it has a longstanding, egregiously stupid security hole and it's still a PITA to use.
  5. Use iOS for mobile and OS X Mountain Lion for desktop.
  6. On OS X desktop do not use Oracle Java plugin or runtime, Flash or Acrobat.
  7. On OS X desktop run as a non-admin user and enter your admin password with caution.
  8. Buy OS X software through the App Store unless you have exceptional trust in the vendor.
  9. Don't use OAUTH or OpenID on sites you really care about. For one thing, a password change doesn't revoke OAUTH credentials on most sites. For another, it introduces too much complexity and too many side-effects, and it's too hard to remember which OAUTH provider goes with which OAUTH service.
  10. Do not rely on encryption solutions that auto-open on login. (ex: iOS screen trivial bypass bug). I use encrypted disk images with no keychain pw storage on OS X desktop for my most critical data and I use 1Password on my iOS devices in addition to a (currently hackable) screen lock code.
  11. If something is really, really, secret, don't put it on a computer and especially don't put it on a networked computer. (I don't personally have anything that secret.) 
  12. Whether you're on the Net or on your own machine, remember Gordon's Five Levels of Information Affection [5] and manage accordingly.
Yeah, civilians can't do this stuff. I tell normal folk to use iOS and iCloud and treat everything they have as Public data. If they want something to be secret, don't put it on a computer.
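As an illustration of item 3 -- and only a sketch, not how any particular password manager works internally -- Python's standard `secrets` module is enough to generate a unique, robust credential per service:

```python
import secrets
import string


def make_password(length=20):
    """Return a random password drawn from letters, digits, and punctuation,
    using the OS's cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))


# One fresh password per service -- generated, stored in the manager,
# and never reused anywhere else.
print(make_password())
```

The point isn't the snippet, it's the policy: every service gets its own generated credential, held by the password manager rather than your memory.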
 
 - fn -

[1] Among which antivirus software is worse than a snowball in Hell. At least the snowball will be transiently drinkable.

[2] An easy to remember and easy to break pw that I use for things I don't care about.

[3] The web UI doesn't support 'delete all notes', but if you create an empty notebook you can delete all non-empty notebooks, and associated notes, one at a time. Then empty trash. Of course the data will likely exist in Evernote backups for some time, possibly to be pillaged post-bankruptcy. Tags are not deleted.

[4] Note, however, the unanticipated consequences of strong security in cases of death, disability or disappearance

[5] aka Five tiers of data love, from Google's two factor authentication and why you need four OpenID accounts.

I: You want it? Take it.
II: I'd rather you didn't.
III: Help!! Help!!
IV: I'll fight you for it.
V: Kreegah bundolo! Kill!!


ADN.NET: To get beyond social app.net needs to make parting painless

I love app.net as it is now, but nothing is forever. App.net is a specialized taste, and it needs to grow to survive.

That growth might come from its current social network features (ex: my stream and its RSS feed), but it would be good to have other growth options. Current work includes competitors to messaging, chat rooms, Google Reader Shares, and file and photo sharing.

I'm hoping several of these efforts will catch on, but first people like me need to use them and talk about them. (Like me, but with more fans :-). Problem is, we're a wary bunch. We hate losing our content.

That suggests a first principle for ADN beyond-social. Painless exits.

That's hard to do for anything non-trivial. Of all my Cloud services, only my Simplenote/Notational Velocity data is truly free. If Simplenote expired tomorrow, all of my content would remain on my hard drive and I could move it readily to Dropbox for sharing. It wouldn't be as good as what I have now, but I could keep going.

Beyond Simplenote things get harder. The next tier of freedom is probably domain transfer, static web page hosting, and perhaps WordPress migration. After that, maybe moving Contacts and Calendars, perhaps moving email (but not archives) ... 

Yeah, Data Lock is ubiquitous.

So maybe it's time to try something different.

Why are some simple things still hard to discover online?

My Delta flight left from San Francisco International Airport (SFO) terminal 3, but Google told me a Lids store was in Terminal 1. I was looking for Lids because #1 child wanted a USC cap, and since USC is hundreds of miles south of SFO, a cap store seemed my best option.

So, I wondered, how much extra time did I need to get from Terminal 3 to Terminal 1? Was it conceivable that I'd have to exit security?

This official statement seemed pretty clear:

SFO - San Francisco International Airport - frequently asked questions

AirTrain, SFO's fully automated people mover, provides a convenient way to transfer terminals. AirTrain operates on two lines: the Red Line, which connects all terminals, terminal garages and the BART Station; and the Blue Line, which connects all terminals, terminal garages and the BART Station with the Rental Car Center. AirTrain operates 24 hours every day.

Please note that all of SFO's terminals are connected, and passengers may also transfer terminals by walking.

Yeah, pretty clear, until, looking back, you realize it doesn't say you can transfer terminals without passing through security.

In fact, SFO terminals don't connect behind security. Indeed, United and Delta both have gates in Terminal 1, and you still need to exit security to pass between them.

Sprinting between gates I thought of a friend who died doing an airport sprint. So I slowed down a bit. I made my flight; an attendant met me at the checkpoint looking for her last arrival.

Oh, and there's no Lids Store in Terminal 1. Not any more. Thanks Google.

So why was it so hard to discover that SFO doesn't currently have useful connections between domestic terminals? That's the interesting bit of this post. Had I asked anyone who actually used the terminal I'd have gotten the right answer immediately, but this kind of common knowledge wasn't known to Google.

Some important questions are still hard to discover online. It would be interesting to catalog these "edge questions" and ask what they have in common.

Which is better for work travel: An 11" MacBook Air or a (maxi) iPad with Logitech keyboard?

I have just returned from a conference where I ran R and Python code on my 11" MacBook Air. It did the job well; Mountain Lion's Full Screen and Mission Control features add real value to this small-screen, lightweight laptop.

So for this trip the Air was a great device. For most trips though, a full sized iPad with a Logitech keyboard case would be a better work option. 

Curiously, this has nothing to do with the touchscreen; it's about other hardware and iOS design decisions. The iPad's advantages include:

  • iOS is a fairly good Exchange/ActiveSync client. OS X is not.
  • Many iOS apps work in offline mode; OS X apps expect a network connection.
  • iOS multitasking is constrained. In OS X many apps may simultaneously jump on a network connection, sucking bandwidth and power alike. (Heck, backup may start!)
  • iOS is, in general, less demanding of a network connection.
  • iOS and the iPad are designed from the 'ground up' to use less power. That's why an iPad can last for hours, receive a power boost from a mere Mophie Juice Pack, and charge off a meager USB connection. Even the best laptops, like my Air, can't do this.
  • The iPad can be purchased with an LTE chip, the MacBook cannot.
  • iOS bandwidth consumption is harder to track than it should be, but it's easier than tracking OS X bandwidth use.

Travel is characterized by limited power and limited bandwidth. The Air is a lovely laptop, but compared to the iPad it's built for a world of ample power and bandwidth. Today, even excluding the touch interface, the full-sized iPad is a better traveling device for most use cases.

Apple could make the choice harder though. They could make a future version of OS X much more frugal with bandwidth, and they could provide an option to throttle multitasking. The iPad would still have a large power advantage, but this would make for a great OS X upgrade.

American Healthcare: only the little people pay list

In healthcare, only the uninsured pay list price. They actually pay the crazy amounts that show up on their healthcare bills. Other payers, like insurance companies, pay a steeply discounted amount. Sometimes 70% off.

Pretty outrageous eh?

It's not new though. That's how it worked when I was a country doc in the early 90s, and it was old and outrageous then. Now it's getting more attention, but it's not new. The weird thing is that this 'secret' has been in plain view for decades.

That's not the end of the story though. At least when I was in practice, we couldn't do a cash discount. The insurance price was based on list, and if we lowered list the insurance payments would fall. Indeed, our 'customary' charge rating would also fall, and in the bizarro world of healthcare finance what insurers were willing to pay us depended in part on our past charges.

Back then we wrote off many cash charges, but times have changed. For one thing, the Bush GOP made it much harder for regular folk to declare bankruptcy and escape healthcare debt.

So now that this story is getting traction, I wonder if Americans are ready to learn about how Evaluation and Management CPT codes (E&M Coding) destroyed primary care. Hint: "What gets measured gets done" doesn't mean "what is good gets done".

Many Americans still think we have a great healthcare system. It's probably not our only mass delusion.