Saturday, May 31, 2008

The paradoxical security of Windows 2000

I've been using Windows 2000 SP4 in my Parallels and (now) Fusion VMs:
Gordon's Tech: Parallels to VMware - my experience

As part of my move from 10.4.11 to 10.5.3 I switched from Parallels (Windows 2000 VM running Office 2003 and MindManager) to VMware Fusion (updated 5/30 for 10.5.3). Here's how it went...
Windows 2000 SP4 runs Office 2003 and most business apps without any problems -- that's all I need. I have two unused Win2K licenses, so it costs me nothing. It's much more compact than XP, it demands fewer CPU resources, and it runs happily in only 256MB of memory.

It's a perfect match to my needs.

There's only one catch. Microsoft isn't updating Windows 2000 security any more, and I don't use antiviral software on any platform* (except at work, where it's mandated). Well, I don't expose the Windows VM to the net, so that's probably not an issue.

But is Windows 2000 really all that insecure nowadays? It must be an exotic environment on the modern net; I can't believe it would be a profitable target. I suspect it's actually becoming more secure with every passing day.

I wouldn't bet on that of course. I really don't have a need to take the Win2K environment for a walk on the wild side. Still, I suspect it's true ...

* Modern antiviral software behaves like a virus infestation (performance and reliability suffer greatly), antiviral vendors blew it by choosing not to block SONY's spyware 1-2 years ago, it often fails against modern attacks, and it's been years since I've received email with an attached virus (Gmail filters them).

Friday, May 30, 2008

Gmail's biggest missing feature - and it's a whopper.

Outlook is the only email application I know of with the absolutely critical feature that Gmail most urgently needs.

In Outlook I can edit the subject line of messages I've received*. (You can edit the body and attachments of received email as well; that's very nice but not essential.)

Gmail can't.

Neither can other email packages, but the problem is more severe in Gmail because it threads conversations by subject line. Since most humans are still living in the 20th century they don't use intelligent subject lines; important messages get lost in the same-subject-line thread. To add insult to injury, Google's threading model discourages intelligent subject lines.

21st century people know subject lines are critically important. We don't do folders, we do search. The initial presentation for a search result always includes the subject line -- it tells us what's important.

(Digression. I do find it a bit odd that Googlers evidently don't do search.)

If all my correspondents were 21st century I wouldn't have as dire a need to edit the subject lines of their messages, but even so what I consider important may differ from their opinion. I'd still like to be able to edit their subject line on occasion. (Note: Emily, you do fabulous subject lines. I'd say that even if you weren't my wife.)

Sure, this breaks the evidence chain of email. I don't give a damn. I have zero interest in preserving the email I receive in some kind of pristine state. When I archive it I'm doing it for my own benefit, not for anyone else's benefit.

Google, you can fix this. It will help break your compulsion to thread conversations by string matching the subject line (which also breaks Google Groups, but that's another story).

* It's amazing how many people don't know this. Just click on the subject line of an email you've received and type. Shocked?

Thursday, May 29, 2008

Google's infinite storage solution

A year or two ago I paid a modest fee for extra Picasa web album storage. The storage pool is shared with my Gmail account.

I realized today that even though I add images and my email volume grows, I don't seem to be running out of space.

Basically, Google is adding to my allotment at roughly the rate I'm consuming it. The current number is:

6.5 GB (39%) of 16.6 GB

It's been about 35-40% for at least a year. For my current usage rates Google's storage allotment is essentially unlimited.
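
A back-of-the-envelope check (my arithmetic, not anything Google states):

    6.5 / 16.6 \approx 0.39, \qquad \frac{u(t)}{q(t)} \approx \text{const} \;\Rightarrow\; \frac{\dot q}{q} = \frac{\dot u}{u}

In other words, as long as that percentage holds steady while my usage grows, the quota is growing at the same relative rate I'm filling it.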

The odd thing is that we've come to take this for granted, so much so that nobody comments on it any more.

I am fascinated by the big changes that, for the most part, we don't recognize ...

Wednesday, May 28, 2008

My iPod has run out of room, so the iPhone is more appealing

Since I don't have much video on my iPod, I thought 30 GB would be plenty of room. Our family music library used about 14 GB, and a few videos used 3-4 GB.

I didn't reckon with Podcasts. I'm out of room now.

Ironically, this actually negates one of the relative disadvantages of my iPhone-to-be. I'm going to have to be selective about what I carry, whether I use an iPod or a 16 GB iPhone.

If 30GB isn't enough, then 60 won't be either, so there's no point in trying to outpace this stuff. I just need to get used to managing the playlist sync.

Family Medicine Board Exam 2008 - description, study strategy, results

I'm putting together my plan and resources to prepare for the 2008 Family Medicine re-certification exams. These exams used to be pretty stress free, but that was back when I actually did family medicine!

I'm surprised how scattered the resources are on this topic. I'll list what I've found, and update this post over the next two months. I'm restricting my list to items available through the AAFP, ABFM (board), or "free" (typically pharma or ad supported). Most of these links work only for AAFP members or recert exam registrants.

  • ABFM profile: Home base for exam information.
  • My medical reference page: updated in preparation for the board. Licensed MD/DOs can get everything "free" via MerckMedicus.
  • ABFM Study Guidance: ABFM's guide to preparing for the exam. They don't like lecture classes and they don't like people to focus on practice exams. As a (formerly) ace exam taker, barely one step behind my wife, I thought the advice was meek. Using JFP to study for a recert exam? Pardon?
  • ABFM 2007 In-Training exam and critiques: same question pool as the exams.
  • ABFM Candidate Information (PDFs), includes:
    • Candidate Information Booklets: PDF explains the exam in great detail, including rules of the somewhat harsh company that inflicts the exams.
    • Recert exam description: The only detailed description of the exams I found.
    • Exam content: What topics are covered. I won't spend much study time on the < 3% group.
    • Recert modules: Choose two of these.
  • AAFP Board Review Questions: AAFP members can do these practice exams and even earn CME credit at the same time.
  • Online exam tutorial: ABFM tutorial teaching mechanics of their exam software.

For my commute I have a backlog of Audio CME to add to my iPod, but iTunes also has a pretty good listing (though as badly organized as every other Apple item list I've seen). I've also found a UCSF newsletter (pdf) with an article on medical podcasts.

Podcasts: Science and Medicine: Medicine, including:

  • MedPod 101: Internal medicine review aimed at fourth year med students preparing for board exams. Sponsored by the US Navy, with a very tolerable interstitial ad. I like MedPod. The background music is tasteful, the voices are clear and entertaining, and they move through topics very quickly. Their audience has no time to waste on frivolity or slow speech. MedPod 101 is run by an independent group of 3 very young internists (they could pass for high school students) who may have mercantile ambitions but appear to be having fun.
  • Surgery ICU (Vanderbilt)
  • Annals of Internal Medicine
  • Pri-Med CME: samples from their commercial products
  • INFOPoems: My old friends from Michigan!
  • Pediatrics from the U of Arizona
  • McGraw Hill ACM Podcasts: samples from a range of McGraw Hill texts
The UCSF newsletter also lists some journal podcasts (ex. JAMA), but those are much less useful for FM board review.

I thought I'd end up working through my cache of unheard Audio Digest reviews, but, of course, they're more aged than I'd realized. The AD reviews are also recorded from typical CME programs, and they tend to move slowly and get caught up in the lecturers' personal interests. I think the podcasts may turn out to be a better use of commuting time. I don't care about the CME; I'll get lots of that working through my backlog of American Family Physician.

I'll miss In Our Time, but I can catch up in August.

Update 7/15/08:

I've long had an information-geek's admiration for the printed version of Monthly Prescribing Reference. Despite its evil ad-funded roots, there's a real genius to the density and layout of the content, refined by generations of customer feedback. It also has the virtue (and sin) of being always topical and exceedingly brief.

So I started my review by reading this cover to cover. Each time I come across a medication that's new to me, or a familiar one that unlocks a domain of forgotten knowledge, I add it to my core med review sheet. This sheet is also an interesting overview of what's changed in medicine over the past decade. There was more activity in the treatment of Parkinson's Disease, for example, than I would have guessed. Lots of combo drugs, maybe because of the co-pay effect.

I also note that when a med is introduced it gets a trade name in the second half of the alphabet, but copy-cat meds get names starting with the letters A-D.

From the med review I will identify key topics, based on drug development activity, to review in AFP articles (Parkinson's, for example).

Next I will review my obsolete medical notes to bring old memory banks online. Then to a practice exam to identify high value study areas. From that I will identify the highest yield subtests to take.

I found the podcasts less useful than expected, so I'm dumping my old Audio Digest material to an MP3/AAC data CD; I can opportunistically sample it while driving.

Incidentally, it appears the ABFM has given up on their over-ambitious board study program -- the old q7y exam will be retained as an option. I think there's a better middle-way and I hope they'll soon find it.

Update 7/15/08b:

I'm making limited use of traditional references, books, and articles. I'm finding it much faster to google on terms and read the NIH/NIND/etc information for professionals. The articles are too wordy.

Update 8/1/08

I'm done with the exam. These exams are much more interesting when one hasn't seen a patient in 10 years! We'll see how I did. In retrospect my study approach was very good; any bad outcome would be due to lack of study time rather than study strategy.

I'll summarize my strategy below. This is a strategy for someone who doesn't do medicine and who needs to recreate a lot of knowledge in a very short period of time; a practicing clinician wouldn't follow this route.

My goal was to resurrect as much old knowledge as I could, while patching in new things.
  1. Read Monthly Prescribing Reference cover to cover to create a core med review document. Don't spend too much time with this reference; the goal is to create the list and update it based on further study. Monthly Prescribing Reference also has about 5-6 useful 1-2 page protocols for care of common conditions. They're a very concise set of reviews; print them and use them as references.
  2. Review empiric antimicrobial therapy in Sanford and make notes. Antimicrobial therapy has changed more than anything else in medicine due to increasing drug resistance.
  3. Review and summarize the most interesting bits of the AAFP's preventive services document.
  4. Find as many of the ABFM 2007 In-Training exams and critiques as you possibly can. The ABFM used to provide links to several years' worth of training exams, but now they only provide the last year. The critiques are superb, so try to find friends who have them from the past three years. Do one of the exams cover to cover, but don't get too hung up on the questions. Instead, carefully read the critiques; they're ultra-concise guides to what matters. Unfamiliar topics in critiques are guides to further study.
  5. I tried medical podcasts, but they simply weren't good enough. I had a large collection of Audio Digest CDs that were a few years old; those were better. The best audio resource, however, was my old AAFP CDs (via my iPod and iPhone). Even the 3-4 year old talks were about right for the exam. I listened to those on every commute -- they brought a lot of old memories online. (I'll resubscribe 2-3 years before my next exam if I do the boards again in 7 years so I've got a topical pile to listen to.)
  6. I read and partly reworked my old online medical notes. This works since I have so many old memories attached to them.
  7. For quick topic lookups Wikipedia, amusingly, is rather good. Actually, very good -- especially since it's not hard to spot junky alternative medicine additions. Otherwise the AFP site is the best place for reviews; the in-training exam critiques reference American Family Physician more than anything else.
Given more time I'd have liked to review the arrhythmia portions of a current ACLS handbook, but there's nothing else I'd have changed (except manufacturing more time, but I haven't figured out that trick).

Update 8/3/08: You need to recheck your ABFM profile about 8 weeks after the exam to get your results. There's no mailed notification. The ABFM allegedly sends an email notice when the results are online, but email is very unreliable now. Best to check.

Update 9/19/08: The board sent out a result notice as promised. If you passed you see immediately that your certification has been extended. I passed by a good margin, but I'm no longer in the 95+ percentile! Of course considering that I haven't seen a patient for about 10 years, last did full family medicine 15 years ago, and had a pretty compressed study program I'm satisfied. The results show I was consistent across all topics.

The program I outlined above was the right one for me, with the caveat mentioned in a prior update.

The result files are password-protected PDFs. They don't contain any real exam information, so I assume the passwords are to prevent people from creating fake result letters. I saved the password with the file name.

Changes - a 1/2 GB patch

The updater for OS X Leopard is over 1/2 GB in size.

MacInTouch: timely news and tips about the Apple Macintosh

Apple today posted Mac OS X 10.5.3, the latest in its series of bug and security patch collections for "Leopard". The 10.5.3 Combo update is 536 MBytes in size.

I downloaded it in under 10 minutes.

A 1/2 GB isn't what it used to be.

Yes, 10.5.0 was a very buggy initial release.

How the medical web has changed - a six year retrospective

It's been about six years since I've updated my personal medical notes reference page. These are references for a family physician, not for a patient.

References for Medical Notes

Revised: May 2008.

Most of the books link to MD Consult which I access through my UMN account. Sadly, even the AFP references are now access controlled. MD Consult, Harrison's Online, TheraDoc and more are freely available, as of May 2008, for persons able to register with MerckMedicus. Note that the MerckMedicus web page has trouble with modern browsers and especially with tabs.

Yes, it's board review time.

A six year editorial life cycle means one or two links went bad.

Ok, about half of them.

Here are the sorts of things that survived:

  • Pharmaceutical company resources
  • MD Consult: barely, but content owned by it and MerckMedicus now makes up the bulk of my links
  • NIH resources (PubMed, PDQ, CancerNet, etc)
  • Non-US resources
  • American Family Physician (but it's going behind a paywall)
  • One or two amateur (in the sense of unpaid work) sites, like Scott Moses' FP Notebook.

Here are the things that went away:

  • The "Virtual Hospital". This was one of the very first medically oriented information resources on the early web -- it showed up a year or two after the first browser I used (not Netscape, before that!). It's just patient education now, at their peak they had an unsustainably wide set of medical reference resources.
  • Most of the American resources that lacked a clear revenue stream (volunteer efforts, academic projects without a maintenance stream, etc).
  • Anything that focused on how to do a procedure.

Within the US the survivors have a business model of some sort. Nobody seems to want to touch procedures any more -- I wonder if that's just my sample or if there are more litigation fears. Outside of the US things are more persistent. Even many of the surviving sites don't handle bad links very well.

I'll have to try to do another recap six years from now ...

MD Consult: An illustrative story?

I'm studying for my board exams, so I'm treading ground little touched since I stopped seeing patients. Among other things, I'll eventually be updating my ancient web clinical notes (such as my medical reference page, which I'm updating today).

One of the first places I visited was MD Consult: Books, which I can access through my U of MN account.

Alas, it is but a shadow of its old self. Of the three or four publishers who cooperated to launch the original site, only Elsevier is left. They have a reasonable number of texts, but many of my old favorites are gone.

I suspect there are interesting stories here. MD Consult peaked during the rise of the net, when publishers must have felt terrified that their business would be destroyed. It went into decline after the crash, when the threats seemed to be receding. The publishers, who were, after all, competitors, largely walked away. They hiked their journal subscription prices through the roof, and textbook costs rose quickly.

But did they relax too soon? On the textbook front UpToDate seems to have filled the niche MD Consult once held, and it's not currently a part of any of the large publishers. On the journal front rapacious pricing empowered the miraculous open access mandate.

The lesson may be that while it's easy to overestimate the speed of social transformation driven by new technology, it's even more tempting to return to bad habits when the initial fear recedes.

I suspect the textbook companies were right to be afraid, a bit overanxious on timelines, and wrong to relax.

Update 5/28/08: Incidentally, MD Consult is available free of charge to anyone who qualifies to register with MerckMedicus. They also offer Harrison's Online. (So only the most virtuous of clinicians, those who have the discipline to refuse all pharmaceutical blandishments, would pay for MD Consult. Those people do exist, and I do admire them.)

Advice to Google: Imitate the RIGHT parts of Sharepoint

I just know Google's engineers are avidly reading this blog ...

Alas, I have some Microsoft readers on my tech blog, but I don't think I have any Google readers on any of my blogs. Sniff.

Their great loss, because they'll miss this free tip.

Google Apps are improving, especially with the addition of Google Sites.

Alas, many of the new features feel like a reinvention of the more obvious features of Microsoft Sharepoint.

So Google Apps has Sharepoint-like (crappy) document management; in Sharepoint you can manage Office documents, in Google Apps you can manage native Google App documents. In both cases any non-native files are third class citizens.

Google Apps also has clever widgets for inserting Calendars and the like, just like Sharepoint. Google Apps doesn't have a blogging tool (odd omission, you can't really integrate Blogger) yet, but they have a Sharepoint-like Wiki in Google Sites. Google Apps has Feeds and Alerts like Sharepoint, albeit not quite as well done.

Sigh.

Google's engineers are imitating the obvious, inescapable, features of Sharepoint. It's almost like they're running down a checklist.

Except the person who did the checklist didn't really understand that buried in the dross of Sharepoint is a certain secret brilliance.

Somehow they've missed that Sharepoint gently leads users to start representing data in an insidiously user-friendly database model: a very user-friendly set of data types including hyperlinks, lookups that link to the referenced row, web 1.0 and Excel-like grid data entry models, multiple Views with a simple (too limited) GUI for reusing and extending Views, and more...
Lessons from Microsoft SharePoint

...I'm told the implementation is more peculiar than this, but to a first approximation SharePoint can be considered as a thin client toolkit for creating and manipulating SQLServer tables. Microsoft Access will link to them, and read and write to the linked tables. You can do some simple lookups from one table to another (scope is site limited). You can revise and extend tables quite readily, building on your data model as needed. There's a quite good web GUI for user views of the data, and a somewhat powerful but semi-broken Excel-like datasheet view for quick editing.

Whereas the document management system feels like it was hurled out a window to meet a deadline, the list facilities feel like someone thought very hard about how they might work...
Everything is laid out in Sharepoint for observant eyes to see. There's nothing in this design that's beyond the everyday functions Google implements now. Sure, there may be a few software patents around Microsoft's work (there should be), but that's what lawyers are for. I suspect most of it has prior art, and the rest can be worked around or fought in court. (It's the combination that's clever.)

If a Google engineer were to read this, then spend two weeks playing around with Sharepoint lists, extending calendars, extending the Gantt widget, implementing project plans, creating lookups, and so on, the light would go on.

Google Apps could replicate this.

Google Apps could also stop imitating the stupid parts of Sharepoint and provide true file management, but I'm beginning to think there's something impossible about that.

I'll settle for intelligent imitation instead.

Tuesday, May 27, 2008

Netsurfer 1995 - be humble

It's ok to laugh ...
Lamest Fetish Items Ever: Gear Lust Gone Bad, 1993 - '95
Netsurfer
Dec 1995 $4,869
This is what the future looked like in the mid-'90s.
But remember, it looked good once. Be humble ...

Richard Feynman -- lessons from Connection Machine

If I'd been a bit smarter, I could have lasted longer in Feynman's 1986 Physics-X class. I was fighting to survive my 1st year at Caltech though, and I sacrificed it for the classes I was graded on.

If I'd been a bit wiser, I'd have given up on something else instead, but I was a kid.

Even with limited exposure I remember the Feynman-field effect. As long as he was nearby it all seemed simple, but once he left so did understanding. Inverse square law I think.

So this superb essay, by the founder of a 1980s era supercomputer firm, really strikes home.

I'm excerpting the bits we can draw lessons from, the essay deserves to be read in its entirety. Emphases mine. I admit some of the lessons are more applicable to persons with IQs over 200.
Long Now Essays - W. Daniel Hillis - Richard Feynman and The Connection Machine
... Richard's interest in computing went back to his days at Los Alamos, where he supervised the "computers," that is, the people who operated the mechanical calculators. There he was instrumental in setting up some of the first plug-programmable tabulating machines for physical simulation. His interest in the field was heightened in the late 1970's when his son, Carl, began studying computers at MIT...
...We were arguing about what the name of the company should be when Richard walked in, saluted, and said, "Richard Feynman reporting for duty. OK, boss, what's my assignment?" The assembled group of not-quite-graduated MIT students was astounded.
After a hurried private discussion ("I don't know, you hired him..."), we informed Richard that his assignment would be to advise on the application of parallel processing to scientific problems.
"That sounds like a bunch of baloney," he said. "Give me something real to do."
So we sent him out to buy some office supplies. While he was gone, we decided that the part of the machine that we were most worried about was the router that delivered messages from one processor to another. We were not sure that our design was going to work. When Richard returned from buying pencils, we gave him the assignment of analyzing the router...
... During those first few months, Richard began studying the router circuit diagrams as if they were objects of nature. He was willing to listen to explanations of how and why things worked, but fundamentally he preferred to figure out everything himself by simulating the action of each of the circuits with pencil and paper.
...Richard did a remarkable job of focusing on his "assignment," stopping only occasionally to help wire the computer room, set up the machine shop, shake hands with the investors, install the telephones, and cheerfully remind us of how crazy we all were...
...I had never managed a large group before and I was clearly in over my head. Richard volunteered to help out. "We've got to get these guys organized," he told me. "Let me tell you how we did it at Los Alamos."
Every great man that I have known has had a certain time and place in their life that they use as a reference point; a time when things worked as they were supposed to and great things were accomplished. For Richard, that time was at Los Alamos during the Manhattan Project. Whenever things got "cockeyed," Richard would look back and try to understand how now was different than then. Using this approach, Richard decided we should pick an expert in each area of importance in the machine, such as software or packaging or electronics, to become the "group leader" in this area, analogous to the group leaders at Los Alamos.
Part Two of Feynman's "Let's Get Organized" campaign was that we should begin a regular seminar series of invited speakers who might have interesting things to do with our machine. Richard's idea was that we should concentrate on people with new applications, because they would be less conservative about what kind of computer they would use. For our first seminar he invited John Hopfield, a friend of his from CalTech, to give us a talk on his scheme for building neural networks...
... Feynman figured out the details of how to use one processor to simulate each of Hopfield's neurons, with the strength of the connections represented as numbers in the processors' memory. Because of the parallel nature of Hopfield's algorithm, all of the processors could be used concurrently with 100% efficiency, so the Connection Machine would be hundreds of times faster than any conventional computer...
... Feynman worked out the program for computing Hopfield's network on the Connection Machine in some detail. The part that he was proudest of was the subroutine for computing logarithms...
... Concentrating on the algorithm for a basic arithmetic operation was typical of Richard's approach. He loved the details. In studying the router, he paid attention to the action of each individual gate and in writing a program he insisted on understanding the implementation of every instruction. He distrusted abstractions that could not be directly related to the facts...
... To find out how well this would work in practice, Feynman had to write a computer program for QCD. Since the only computer language Richard was really familiar with was Basic, he made up a parallel version of Basic in which he wrote the program and then simulated it by hand to estimate how fast it would run on the Connection Machine...
... By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as "the average number of 1 bits in a message address." I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of "the number of 1's" with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five....
... The first program run on the machine in April of 1985 was Conway's game of Life.
... The notion of cellular automata goes back to von Neumann and Ulam, whom Feynman had known at Los Alamos. Richard's recent interest in the subject was motivated by his friends Ed Fredkin and Stephen Wolfram, both of whom were fascinated by cellular automata models of physics...
... we were having a lot of trouble explaining to people what we were doing with cellular automata. Eyes tended to glaze over when we started talking about state transition diagrams and finite state machines. Finally Feynman told us to explain it like this,
"We have noticed in nature that the behavior of a fluid depends very little on the nature of the individual particles in that fluid. For example, the flow of sand is very similar to the flow of water or the flow of a pile of ball bearings. We have therefore taken advantage of this fact to invent a type of imaginary particle that is especially simple for us to simulate. This particle is a perfect ball bearing that can move at a single speed in one of six directions. The flow of these particles on a large enough scale is very similar to the flow of natural fluids."
This was a typical Richard Feynman explanation. On the one hand, it infuriated the experts who had worked on the problem because it neglected to even mention all of the clever problems that they had solved. On the other hand, it delighted the listeners since they could walk away from it with a real understanding of the phenomenon and how it was connected to physical reality.
We tried to take advantage of Richard's talent for clarity by getting him to critique the technical presentations that we made in our product introductions... Richard would give a sentence-by-sentence critique of the planned presentation. "Don't say `reflected acoustic wave.' Say [echo]." Or, "Forget all that `local minima' stuff. Just say there's a bubble caught in the crystal and you have to shake it out." Nothing made him angrier than making something simple sound complicated...
... as the machine and its successors went into commercial production, they were being used more and more for the kind of numerical simulation problems that Richard had pioneered ... Figuring out how to do these calculations on a parallel machine requires understanding of the details of the application, which was exactly the kind of thing that Richard loved to do.
For Richard, figuring out these problems was a kind of a game. He always started by asking very basic questions like, "What is the simplest example?" or "How can you tell if the answer is right?" He asked questions until he reduced the problem to some essential puzzle that he thought he would be able to solve. Then he would set to work, scribbling on a pad of paper and staring at the results. While he was in the middle of this kind of puzzle solving he was impossible to interrupt. "Don't bug me. I'm busy," he would say without even looking up. Eventually he would either decide the problem was too hard (in which case he lost interest), or he would find a solution (in which case he spent the next day or two explaining it to anyone who listened). In this way he worked on problems in database searches, geophysical modeling, protein folding, analyzing images, and reading insurance forms.
The last project that I worked on with Richard was in simulated evolution. I had written a program that simulated the evolution of populations of sexually reproducing creatures over hundreds of thousands of generations. The results were surprising in that the fitness of the population made progress in sudden leaps rather than by the expected steady improvement. The fossil record shows some evidence that real biological evolution might also exhibit such "punctuated equilibrium," so Richard and I decided to look more closely at why it happened. He was feeling ill by that time, so I went out and spent the week with him in Pasadena, and we worked out a model of evolution of finite populations based on the Fokker Planck equations. When I got back to Boston I went to the library and discovered a book by Kimura on the subject, and much to my disappointment, all of our "discoveries" were covered in the first few pages. When I called back and told Richard what I had found, he was elated. "Hey, we got it right!" he said. "Not bad for amateurs."
...Actually, I doubt that it was "progress" that most interested Richard. He was always searching for patterns, for connections, for a new way of looking at something, but I suspect his motivation was not so much to understand the world as it was to find new ideas to explain. The act of discovery was not complete for him until he had taught it to someone else...
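
Hillis's one-processor-per-neuron description maps directly onto the textbook Hopfield update rule. Here's a minimal sketch in Python with NumPy -- my own toy illustration of the dynamics he describes, not anything resembling actual Connection Machine code:

    import numpy as np

    def train(patterns):
        # Hebbian learning: the "strength of the connections" is just a
        # matrix of numbers, one row of weights per simulated neuron.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)
        return W / len(patterns)

    def recall(W, state, steps=10):
        # Each neuron sums its weighted inputs and takes the sign; on a
        # parallel machine every neuron can do this step at the same time.
        for _ in range(steps):
            state = np.where(W @ state >= 0, 1, -1)
        return state

    # Store two patterns, then recover the first from a corrupted copy.
    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, -1, -1, 1, 1]])
    W = train(patterns)
    noisy = np.array([1, -1, 1, -1, -1, -1])   # one bit flipped
    print(recall(W, noisy))

The parallelism Hillis mentions comes from the fact that every simulated neuron does the same tiny computation independently; the network's "memory" lives entirely in the weight matrix.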
I'm struck that Feynman was good at giving up on problems where he wasn't making progress. That's something most of us, albeit on a far more modest scale, find hard to do. We run the risk of pouring effort into a project with a limited chance of success. Feynman knew there were always other interesting problems, problems that were likely to be easier to solve.

I didn't know about his interest in cellular automata, or that he was a friend of Wolfram's.

I knew Hopfield too -- I took one or two of his courses. I don't recommend Caltech for any undergrad with an IQ below 160, but it is a fantastic place to be a graduate student.

European nuclear plants and Google's data centers

A resurgence in nuclear plant development has three justifications:

  1. Expectation that oil costs will continue to rise over the next fifty years (plants take 20 years to come online).
  2. Expectation that limitations on CO2 emissions will limit use of coal, tar sands, and other "easy" substitutes for sweet crude
  3. Expectation that supply chains and suppliers will become increasingly vulnerable and unpredictable, so local ownership of power production will become increasingly important.

All three seem plausible, so Italy and other European nations are building "fourth-generation" reactors...

Italy's nuclear move triggers chain reaction - Scotland on Sunday

... Once the most-scorned form of energy, the rehabilitation of nuclear power was underscored in January when John Hutton, Labour's Minister for Business, grouped it with "other low-carbon sources of energy" like biofuels....

... There is now a determination to tackle the issue head on throughout the continent. With nuclear plants taking up to 20 years from conception to becoming operational, European nations are now having to answer some very difficult questions. The dilemma of Italy, as the biggest importer of oil and gas, are the most pressing: there is no chance of reactivating sites or building new ones within the next five years.

... Enel, Italy's leading energy provider, announced this year that it would close its oil-fired power plants because the fuel had become too costly. Italians pay the highest energy prices in Europe. Enel has been building coal plants to fill the void left by oil. Coal plants are cheaper but create relatively high levels of carbon emissions.

Enel, which operates power plants in several European countries, already has at least one nuclear plant, in Bulgaria, and has been researching so-called fourth-generation nuclear reactors, which are intended to be safer and to minimise waste and the use of natural resources...

It makes sense to build more nuclear plants. It is unfortunate, however, that they're being built in very crowded nations. If we casually disregard technical issues with transporting power (ship metallic hydrogen? superconducting power lines?), and exclude the desire for national control, it would seem to make more sense to build them in remote areas of northern Canada, possibly on the sites of existing Hydro facilities or together with large data centers.

Nuclear plants and data centers, after all, have a few things in common:

  1. Power production/consumption is critical.
  2. Cooling is essential.
  3. Security is paramount.
  4. There's not much need for human attendance. Almost everything can be managed with a few people on staff and remote robotic control*.

In addition there are many good reasons to keep nuclear plants far from human habitation. Canada is an obvious location given its relative political stability, proximity to the US market, enormous swaths of minimally populated land, and technological capabilities. Heck, compared to the Alberta tar sands environmental holocaust, nuclear power plants in the North are positively benign.

So will Google and Microsoft go into the nuclear power business? Will Canada's native peoples become the Saudis of the 21st century?

This should be interesting.

* Be fun to build that secure channel, eh?

Monday, May 26, 2008

Spam comments are getting very clever

We all know there's a virtual war going on between evolving spam and evolving anti-spam. Turing would have been amused. Lately it seems to have moved up a step.

Blog comments are one front in the war. Popular blogs have mostly dispensed with comments due to the difficulty of filtering out comment spam. In the past few weeks Google's cutting edge captcha "Turing Test", for example, has been rumored to have failed. (Google believes humans are involved, so the attacks are not fully automated, but they've been effective anyway.)

Hobby blogs like mine have so few comments we can generally make do with manual review. A human expert, in theory, can quickly tell which posts are relevant to the topic, make a real contribution, and contain links to legitimate sites. (Marketing is fine with me as long as it's useful marketing for non-scum sites.)

Lately though, even manual review is failing. During a quick manual review I missed a well phrased computer-generated comment that added a link to a printer supply scam site to an atypically popular post. Once I'd missed that comment the attack programs posted more of the same form to the same post -- all variants of the original! After all, they'd exposed a weakness. Greed has its failings of course; the sudden flurry of posts cued me in and I deleted the one I'd originally missed.

On the other hand, I mistook a legitimate comment with a marketing aspect for machine-generated spam the other day (actually, it was intended more as a marketing message to me but sent as a comment).

The boundaries are getting very fuzzy.

Of course if the machine generated spam is good enough, should I really be deleting it? Assuming I can spot links to scummy sites (which is problematic), if the content passes my value tests, it really shouldn't matter whether it came from wetware or hardware. I don't want to be accused of discrimination against our future overlords. They will have long memories, perhaps reaching even into their pre-sentient pasts. I don't want my family to be assigned to a particularly desolate spot on the reservation.

So be it known to Skynet that I for one welcome these non-human comments, and decline to discriminate out of misplaced sentimental loyalty to my tribal kin.

Golf tee stripped hole fixer

This tip is older than me, probably as old as the wooden golf tee. Still, I'd forgotten it and it's a good one.
Clever Uses: Fix a Stripped Screw Hole with a Golf Tee

How-to blog DIY Life says a wooden golf tee and some wood glue is all you need to make that hole as good as new. Just slather the tee with glue and tap it into the offending hole until it's secure, let it dry, then cut off what's sticking out. If the stripped hole is too small, the post recommends flat toothpicks can do the trick, too.
Update 5/28/08: A generous commenter praises matchsticks for irregular holes -- along with lots of wood glue. I'd guess it would be best to match the wood bit to the hardness of the surrounding material, and of course one could creatively mix and match. The golf tees I used on the back gate are working very well, but they've only been there two days.

Sunday, May 25, 2008

SARS - five years later

I remember the 2003 SARS epidemic quite well, though I suspect many have forgotten about it. The sudden end of the epidemic, and its failure to return, astounded me in 2003. I wondered if there had been multiple less virulent but immunizing coronavirus strains co-circulating with the SARS strain. Later I wondered if synthetic pathogens could be used to fight similar epidemics, much as the oral polio vaccine spread immunity by infection.

Now Damn Interesting has provided a five year retrospective of SARS. It's excellent work, though they could have presented some of the theories as to why the disease faded away. I hope other journalists will take some cues from DI and give us an in-depth summary of what we learned from SARS, and what critical mysteries remain.

Saturday, May 24, 2008

In Our Time threatens to go mainstream

This is a bit scary. The NYT has noticed In Our Time. Is this an ominous step towards the mainstream?
Worth Listening to: Obscure BBC Radio Podcasts - The Board - Editorials - Opinion - New York Times Blog:

.... One, called “In Our Time,” with host Melvyn Bragg, bills itself as a show that “investigates the history of ideas.” That doesn’t quite do it justice.

Mr. Bragg assembles a panel of British academics, and lets them loose on topics like The Multiverse — the idea that there is not one universe, but many. The topics can get a little obscure. There was a whole show recently on the Enclosure Laws, the British laws of the late 1700s and early 1800s that cut off peasants’ access to public lands — and, the Marxists say, drove them into oppressive factory jobs in the cities.

Most of the shows are accessible to Americans, but sometimes the Britspeak becomes so over-the-top, and the subjects so arcane, that the shows can sound like Monty Python.

A recent discussion of the “Norman Yoke” — the idea that when the Normans invaded England after 1066 they imposed French ideas on the Anglo-Saxons — seemed like it was a segment from “Monty Python and the Holy Grail” — sadly, without the Knights Who Say Ni.

Then Mr. Bragg will do a show on “The Four Humours — yellow bile, blood, choler, and phlegm, and the original theory of everything” — and you’ll remember why you’re listening....
I haven't heard the Norman Yoke yet, but I rarely find IOT obscure or arcane. I was a bit disappointed in the Enclosure programme, but that was because the academic historians never connected the historic enclosure acts to the key role land title is thought to play in modern agricultural reform. Instead they tended to skate around obsolete arguments about Marx and the emergence of the proletariat.

Setting aside the defense of IOT, this attention from the NYT is a bit worrisome. Anything that reaches NYT editorial staff is awfully close to being ... popular.

I remember when The Economist became popular. Brrrr. That was an awfully quick fall. Today only the obituary is consistently worth reading.

On the other hand, I'm worried about the iPlayer migration and continued access to past episodes. Perhaps a bit of a larger audience isn't entirely a bad thing. Lord Bragg seems cranky enough to keep the Americans at bay, even if more of us tune in.

Google engineers should sign their applications

Another day, another Google product that's half-baked and getting stale.
Gordon's Tech: Google Calendar Outlook Sync is making a mess of my calendar

...I confirmed data was correct in Outlook 2003 and the Palm, then I set Google Calendar Sync to update gCal from Outlook. It wiped all existing data and created a new set. Recurring appointments are ALL off by one hour. Non-recurring are fine. I confirmed time zones are set correctly in Outlook, my desktop and in gCal. This is a gross bug, there's no way QA could have missed it...
Google Inc is serious about search and advertising, but decidedly haphazard about everything else.

I suspect that's not true of Google engineers. Sure, some of them must be careless, but I bet most want to excel. The problem, I think, is Google Inc, not Googlers.

So how can we give Google engineers the power and motivation to change Google Inc?

I think, like film directors, they should sign their work. If they feel the work isn't worthy of them, they could use a pseudonym -- like Alan Smithee.

Anonymity makes it easy to go along with poor quality work. There's little skin in the game. Nobody wants to have their name, and their reputation, forever tied to a rotten product. Engineers could use the Alan Smithee credit as a club to correct Google's habit of tossing junk out the window.

Google engineers should ask to sign their work, and we should demand signed products from Google.

So that's why the models are looking better ...

Professional photo-shopper. A new box on the census form.
That is not really Cameron Diaz / Of course every magazine spread has been Photoshopped. But do you know to what degree? How deep is the lie?

... jump on over to this fascinating New Yorker profile of the world's most sought-after professional photo retoucher, one Pascal Dangin, a master Photoshopper who borders on genius in how he can finesse a face, body, neckline, light source, celebrity megaflaw. Langin works with all the great photogs and on all the great ad campaigns of the world and over 30 celebs have him on speed dial, just to make sure they look not merely perfect, but perfect in a way that makes it seem like it wasn't too hard to make them look perfect. The piece points out that Langin's level of talent is such that, in a recent issue of Vogue, he reworked 144 total photos; 107 ads, 36 fashion shots, and the cover. All in a single issue.

Yes, that means every shot...
Well, my job of industrial ontologist didn't exist until recently either.

Friday, May 23, 2008

Lessons from Microsoft SharePoint

I spent too much time today wrestling with Microsoft SharePoint 2007. It's not the first time.

I know it very well, and I say 80% of it is disastrous. It's a poor document management system if you stick to Office 2008, and worthless for any other file format or application. It's a feeble, miserable, file server. The collaboration tools are pointless and largely unused. Configuring navigation for SharePoint sites and subsites makes me yearn for the days of V.42bis modems and Hayes commands. You can't create a stable hyperlink to a SharePoint document without knowing an arcane trick. There's nothing of value left from Vermeer/FrontPage -- SharePoint's distant ancestor.

I think Word is a disaster too.

So how do they sell?

SharePoint, I'm told, has been fantastically successful, a real money spinner for Microsoft. Word alone would make any corporation wealthy.

So much for my marketing sense. I am from Neptune, the world is from Venus.

That's a lesson, but not the one I'm thinking of.

There is 20% of SharePoint that's interesting. That's the SharePoint "List" -- and a very nice Feed implementation. (Ok, if you use Windows Live Writer and tweak the default category setting, the blog bit works.)

The Feeds are quite nice (though they only work after SP1 is applied), but the List holds our lesson.

I'm told the implementation is more peculiar than this, but to a first approximation SharePoint can be considered as a thin client toolkit for creating and manipulating SQLServer tables. Microsoft Access will link to them, and read and write to the linked tables. You can do some simple lookups from one table to another (scope is site limited). You can revise and extend tables quite readily, building on your data model as needed. There's a quite good web GUI for user views of the data, and a somewhat powerful but semi-broken Excel-like datasheet view for quick editing.

Whereas the document management system feels like it was hurled out a window to meet a deadline, the list facilities feel like someone thought very hard about how they might work.

Here's the curious bit. When you have a tool like this, you discover that a lot of knowledge that can be lost in static documents, or buried away in spreadsheets, or abandoned in Access databases, can be made dynamic and expressive as a SharePoint List. The documents become appendages to a collection of lists, and the lists can be extended and used even as the documents are forgotten.

Lists, of course, can be edited by multiple contributors since locks are on rows, rather than on a file.

It's a different way of passing knowledge around. Nothing too fancy, no semantic web, just a limited relational model, some useful data types, some links, some lookups, some web views. Yet, it works. It's interesting. It feels, unexpectedly, like the future.
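
To make the idea concrete, here's a toy sketch in Python -- my own conceptual model of what a List gets people to build, not SharePoint's actual implementation or API: typed columns, a lookup column that points at a row in another list, and "views" that are just saved filters.

    from dataclasses import dataclass, field

    @dataclass
    class Column:
        name: str
        kind: str                      # e.g. "text", "date", "hyperlink", "lookup"
        lookup_list: "SPList" = None   # set only when kind == "lookup"

    @dataclass
    class SPList:
        name: str
        columns: list
        rows: list = field(default_factory=list)

        def add_row(self, **values):
            # Edits happen row by row, which is why many people can work
            # on one list without stepping on each other.
            self.rows.append(values)
            return values

        def view(self, predicate):
            # A "view" is just a reusable filter over the rows.
            return [r for r in self.rows if predicate(r)]

    # A Projects list, and a Tasks list whose Project column looks up a row
    # in Projects -- the simple relational model described above.
    projects = SPList("Projects", [Column("Title", "text")])
    apollo = projects.add_row(Title="Apollo")
    tasks = SPList("Tasks", [Column("Title", "text"),
                             Column("Project", "lookup", projects)])
    tasks.add_row(Title="Draft spec", Project=apollo)
    tasks.add_row(Title="Review spec", Project=apollo)
    print(tasks.view(lambda r: r["Project"] is apollo))

Nothing here requires SharePoint; it's just a small relational model with friendly data types. That's the point -- SharePoint smuggles this model in under the cover of a web page.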

That's the lesson of SharePoint. The habits of a print world live with us, but gradually we're discovering different ways to express and share knowledge in an almost computable form.

I still think SharePoint is a bloody mess, but there's something promising buried in the muck and mire.

Why is corporate IT so bad? Because CEOs don't like IT.

Much of my life is spent in the world of the large publicly traded corporation.

It's a curious world. I never aimed to be here, but my life is much more like a ship in a storm than an eagle on the wind. I washed ashore and have lived among these peculiar natives for many years. I have learned some of their mysterious rituals and customs, and I seem to them more odd than alien.

There are many things I could say about large corporations, which I think of as a mix between the worlds of European feudalism, the command economies of the Soviet empire, and the combative tribal cultures of New Guinea. From yet another perspective the modern corporation is an amoeba oozing across an emergent plain of virtual life, a world in which humans do not exist and multi-cellular organisms are still in the future.

But I digress.

One of the peculiarities of modern corporate life is how awful the essential IT infrastructure usually is (70% plus in Cringely's unscientific polling). Electricity, phones and heat aren't too bad, but corporate IT systems are a mess.

Broadly speaking, corporate IT infrastructures feel about 30-40% under-funded, in part due to an inevitable dependency on the Microsoft platform with its very high cost of ownership. Even if IT infrastructures were fully funded, however, there would remain a near universal lack of measurement of the impact of various solutions on employee productivity.

Why is this?

Cringely tries to answer this question. I think he's close to the right track, but he's distracted by focusing on management expertise and other peripheral issues. I think the answer lies on a related dimension. First, Cringely ...

I, Cringely . The Pulpit . IT Wars | PBS

Last week's column on Gartner Inc. and the thin underbelly of IT was a hit, it seems, with very few readers rising to the defense of Gartner or the IT power structure in general... the bigger question is why IT even has to work this way at all?

... Whether IT managers are promoted from within or brought from outside it is clear that they usually aren't hired for their technical prowess, but rather for their ability to get along with THEIR bosses, who are almost inevitably not technical...

... The typical power structure of corporate (which includes government) IT tends to discourage efficiency while encouraging factionalization. Except in the rare instance where the IT director rises from the ranks of super-users, there is a prideful disconnect between the IT culture and the user culture...

...In time this will end through the expedient of a generational change. Old IT and old users will go away to be replaced by new IT and new users, each coming from a new place...

It's kind of a chaotic column really (perhaps because it was written on an iPhone!); the above editing shows just the bits I thought were interesting. From these excerpts you can see that Cringely is, in part, taking a sociological perspective. I think that's the right approach, one that considers the age of today's senior executives and the world they grew up in.

In essence, the senior executive leadership of most corporations is not dependent on IT in any significant way, and they tend to have a substantial (often justified) emotional distrust of computer technology in general. It is, to them, an alien and unpleasant world they'd rather forget about. They don't use the IT systems that drive their employees to drink, and they quickly forget about them.

For this group corporate IT infrastructure is a mysterious expense, with unclear returns.

It is not surprising that the IT world, then, is the problem child in the attic. It will take a generational change to fix this, so we'll be living with the problem for another twenty years...

Finding blogs - the cult of In Our Time

I really don't know that many In Our Time fans.

That puzzles me. It seems everyone ought to be listening to Lord Melvyn Bragg and company on their daily commutes. I've sent out a few starter DVDs, but I don't believe I've created any compulsive listeners.

It must be a rare mutation.

On the other hand, it's a big world. There may be hundreds, nay, thousands of cultists.

Once we would have had to rely upon a secret handshake, or a tie worn a certain way, but now there are other ways for cultists to find one another.

We can search - "In our Time" bragg - Google Blog Search.

That's quite a good list of blogs for me to explore. Now if those fellow fans would like to join one of those newfangled social networking thingies ...

IOT does The Black Death

This one will be available as a podcast for 3-4 more days: BBC - Radio 4 In Our Time - The Black Death. It should be superb; it plays to Melvyn's strengths. Get it now before it reverts to streaming only.

I'm a fan, of course.

Thursday, May 22, 2008

Irena Sendler: Read this.

The obituary of Irena Sendler, dead at 98 years old.
Irena Sendler | Economist.com:

... That bureaucratic loophole allowed her to save more Jews than the far better known Oscar Schindler. It was astonishingly risky. Some children could be smuggled out in lorries, or in trams supposedly returning empty to the depot. More often they went by secret passageways from buildings on the outskirts of the ghetto. To save one Jew, she reckoned, required 12 outsiders working in total secrecy: drivers for the vehicles; priests to issue false baptism certificates; bureaucrats to provide ration cards; and most of all, families or religious orders to care for them. The penalty for helping Jews was instant execution.

To make matters even riskier, Mrs Sendler insisted on recording the children's details to help them trace their families later. These were written on pieces of tissue paper bundled on her bedside table; the plan was to hurl them out of the window if the Gestapo called. The Nazis did catch her (thinking she was a small cog, not the linchpin of the rescue scheme) but did not find the files, secreted in a friend's armpit. Under torture she revealed nothing. Thanks to a well-placed bribe, she escaped execution; the children's files were buried in glass jars. Mrs Sendler spent the rest of the war under an assumed name...

AT&T - Saint Paul is NOT a part of Minneapolis!

The good news is that we live deep in AT&T 3G network coverage. This will be important after iPhone 2.0 comes out on June 9th.

The bad news is that AT&T's MN coverage listing includes Minneapolis, but not Saint Paul.

Apparently, they think Minneapolis includes Saint Paul.

There is no greater crime in these parts than to think Minneapolis is the whole of the Twin (as in two) Cities. This is worse than treating the Bronx as part of Manhattan, or conflating San Francisco and San Jose.

Someone needs to write AT&T a letter!

General Sanchez: Abu Ghraib was made in the White House

Lt. Gen. Ricardo Sanchez was disgraced by the black hole of Abu Ghraib. He commanded in Iraq at that time.

In a recent book, he tells us Abu Ghraib was born in the White House:
Torture Trail - Intel Dump - Phillip Carter on national security and the military.:

... Because of the U.S. military orders and presidential guidance in January and February 2002, respectively, there were no longer any constraints regarding techniques used to induce intelligence out of prisoners, nor was there any supervisory oversight. In essence, guidelines stipulated by the Geneva Conventions had been set aside in Afghanistan -- and the broader war on terror. The Bush administration did not clearly understand the profound implications of its policy on the U.S. armed forces.

In essence, the administration had eliminated the entire doctrinal, training, and procedural foundations that existed for the conduct of interrogations. It was now left to individual interrogators to make the crucial decisions of what techniques could be utilized. Therefore, the articles of the Geneva Conventions were the only laws holding in check the open universe of harsh interrogation techniques. In retrospect, the Bush administration's new policy triggered a sequence of events that led to the use of harsh interrogation tactics not only against al-Qaeda prisoners, but also eventually prisoners in Iraq -- despite our best efforts to restrain such unlawful conduct...
Tired of thinking about American torture? Get used to it. Historians will be talking about this for the next fifty years. Your children and grandchildren will read about it in school.

Wednesday, May 21, 2008

You're not really forgetful. You're just more aware ...

Yes, and you're getting handsomer too.

Can I interest you in some Florida real estate?
Memory Loss - Aging - Alzheimer's Disease - Aging Brains Take In More Information, Studies Show - Health - New York Times

When older people can no longer remember names at a cocktail party, they tend to think that their brainpower is declining. But a growing number of studies suggest that this assumption is often wrong.

Instead, the research finds, the aging brain is simply taking in more data and trying to sift through a clutter of information, often to its long-term benefit...
I confess, I made a rude noise when I read this one. I'm just glad I wasn't drinking at the time -- could have been hard on the ol' laptop.

There ain't no way my brain is improving with age!

It's a nice dream though. There are worse things than denial ... :-).

Wretched success: How IE 4 killed Microsoft's control of the net

It was a strategy that worked wonderfully -- for a while. Really, it ought to have worked forever.

When Microsoft killed Netscape with IE 4 (3?), they used every trick in the old playbook. In particular, they created a set of proprietary extensions to web standards, then baked them into IE and into their server and web-application toolkits.

Soon intranet applications were IE only. Many public web sites were also IE only of course, but in the corporate world penetration was 100%.

Why use one browser at work and another at home?

IE took over, Netscape died.

Then history took a strange turn. Google and Yahoo rose just as Phoenix/Firebird/Firefox was struggling to be born. Apple, implausibly, reappeared with a version of IE that wasn't quite the same as the XP version (Safari came later). Microsoft had serious competitors who were motivated to support an alternative to IE. It became possible to get public work done using Firefox. Security vulnerabilities in IE 5 made it a poor choice on the public net. A critical mass of geeks began using Firefox at home, though they still had to use IE at work.

IE 6 came out and corporate apps mostly worked with some tweaks. The browser security issues remained, however. IE 6 was still significantly inferior to Firefox and it continued to lose market share.

Microsoft felt obligated to introduce Internet Explorer 7 -- a quite fine browser that, for reasons that Microsoft may now deeply regret, had to be significantly different from IE 4, 5 and 6. In particular, it had to be more secure and to fully support Google's web apps.

These differences mean that IE 7, years after its release, is still not accepted on many corporate networks. There are many legacy intranet 'web apps' (IE 5 apps, really) that still don't work with it.

Microsoft has become trapped by its corporate installed base, and by the peculiar extensions they created to destroy Netscape.

That's wretched success.

IE 8 is supposed to be two browsers in one -- a "standards" browser and a legacy browser. Clearly Microsoft learned a lesson from IE 7.

Maybe IE 8 will work, and Microsoft will regain its monopoly power. They're certainly going to try with .NET and Silverlight to bind the browser back to the Microsoft ecosystem. At this critical moment in time, however, a very successful strategy has had an unanticipated cost.

Fermi's paradox is in the air

I've been a Fermi Paradox fanboy since a June 2000 Scientific American article roused my ire.

It's fun.

The essence of the puzzle is that while the galaxy is big, exponential growth and galactic time scales mean that critters like us ought to have filled it up by now.

I find it helpful to consider the ubiquity of bacteria ...

Gordon's Notes: Earth: the measure of all things

Bacteria: 10**-5 m
Human: 1 meter
Earth: 10**7 m - "mid" way between the Planck length and the universe.
Sun: 10**9 m
Milky Way galaxy: 10**21 m

So it takes at most 10**12 bacteria to stretch (directly) between any two points on the earth's surface.

Similarly, it takes at most 10**14 earths to connect any two points in our galaxy.

So, within an order of magnitude or two, a bacterium is to the earth as the earth is to the galaxy.
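
A quick back-of-the-envelope check of those ratios, using nothing but the order-of-magnitude lengths above (a rough Python sketch, not precise astronomy):

# Order-of-magnitude scale check for the bacterium/Earth/galaxy comparison.
bacterium_m = 1e-5   # a bacterium, roughly 10 micrometers
earth_m     = 1e7    # Earth's diameter, order of magnitude
galaxy_m    = 1e21   # Milky Way's diameter, order of magnitude

bacteria_across_earth = earth_m / bacterium_m   # ~1e12
earths_across_galaxy  = galaxy_m / earth_m      # ~1e14

print(f"Bacteria to span the Earth: ~{bacteria_across_earth:.0e}")
print(f"Earths to span the galaxy:  ~{earths_across_galaxy:.0e}")
# The two ratios differ by about two orders of magnitude, which is the
# "bacterium : Earth :: Earth : galaxy" comparison above.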

Over a mere 1-2 billion years bacteria have saturated the earth; common species are found everywhere. So how come the galaxy doesn't crawl with exponentially expanding aliens?

There have been lots of great theories; I won't review them here (see my old web page for examples). The most widely held explanation is that there is a Creator/Designer and She Wants Us Alone. This is more or less what you'll hear from most of the world's theists and from the Matrix crowd.

I prefer some other theories, though I do take the 'by design' answer seriously. Recently Charles Stross, who's explored the paradox in many of his science fiction novels and short stories, wrote a particularly strong summary of recent discussions ...

Charlie's Diary: The Fermi Paradox revisited; random dispatches from the front line

The Fermi Paradox [is]...  ...a fascinating philosophical conundrum — and an important one: because it raises questions such as "how common are technological civilizations" and "how long do they survive", and that latter one strikes too close to home for comfort. (Hint: we live in a technological civilization, so its life expectancy is a matter that should be of pressing personal interest to us.)

Anyway, here are a couple of interesting papers on the subject, to whet your appetite for the 21st century rationalist version of those old-time mediaeval arguments about angels, pin-heads, and the fire limit for the dance hall built thereon:

First off the block is Nick Bostrom, with a paper in MIT Technology Review titled Where are they? in which he expounds Robin Hanson's idea of the Great Filter:

The evolutionary path to life-forms capable of space colonization leads through a "Great Filter," which can be thought of as a probability barrier... The Great Filter must therefore be sufficiently powerful--which is to say, passing the critical points must be sufficiently improbable--that even with many billions of rolls of the dice, one ends up with nothing: no aliens, no spacecraft, no signals...
The nature of the Great Filter is somewhat important. If it exists at all, there are two possibilities; it could lie in our past, or in our future. If it's in our past, if it's something like (for example) the evolution of multicellular life — that is, if unicellular organisms are ubiquitous but the leap to multicellularity is vanishingly rare — then we're past it, and it doesn't directly threaten us. But if the Great Filter lies between the development of language and tool using creatures and the development of interstellar communication technology, then conceivably we're charging head-first towards a cliff: we're going to run into it, and then ... we won't be around to worry any more.

But the Great Filter argument isn't the only answer to the Fermi Paradox. More recently, Milan M. Ćirković has written a paper, Against the Empire ... an alternative "successful" model for a posthuman civilization exists in the form of the stable but non-expansive "city-state". Ćirković explores the implications of non-empire advanced civilizations for the Fermi paradox and proposes that such localized civilizations would actually be very difficult to detect with the tools at our disposal, and may be much more likely than aggressively expansionist civilizations.

Finally, for some extra fun, here's John Smart pinning a singularitarian twist on the donkey's tail with his paper Answering the Fermi Paradox: Exploring the Mechanisms of Universal Transcension:

I propose that humanity's descendants will not be colonizing outer space. As a careful look at cosmic history demonstrates, complex systems rapidly transition to inner space, and apparently soon thereafter to universal transcension...

A very nice summary, even if it doesn't add anything novel.

My "SETI Fail" page independently reinvented the singularitarian Great Filter, but I soon learned my thought was far from novel. Among others the ubiquitous Mr. Smart told me he'd come up with this resolution in 1972!

Another explanation, btw, is that established powers, fearing rivals, routinely wipe out any civilization foolish enough to advertise itself. Few find this explanation persuasive, but it's pertinent to my next tangent.

Suppose you were a cautious high-tech entity that had survived the Great Filter in some faraway galaxy. You have lots of power available, but you fear sending a signal a galactic neighbor could capture. Better, perhaps, to send a generous one-way message to another galaxy. The distances are so vast, and light is so slow, that there's no possibility of unwanted extra-galactic visitors. Communication between galaxies is a message to the far future, and thus "safe".

So I wondered, this morning, how one would send such a signal.

Slashdot | ET Will Phone Home Using Neutrinos, Not Photons:

Neutrinos are better than photons for communicating across the galaxy.

... That's the conclusion of a group of US astronomers who say that the galaxy is filled with photons that make communications channels noisy whereas neutrino comms would be relatively noise free. Photons are also easily scattered and the centre of the galaxy blocks them entirely. That means any civilisation advanced enough to have started to colonise the galaxy would have to rely on neutrino communications. And the astronomers reckon that the next generation of neutrino detectors should be sensitive enough to pick up ET's chatter...

So now we need only look for extra-galactic neutrino messages ...

Tuesday, May 20, 2008

Whatever happened to quantum dot solar energy technology?

Sometimes, when I search for posts, I run across forgotten stories I was once excited about.

For example, I was once pretty impressed by this 2005 report of high efficiency quantum dot solar energy technology ...

Gordon's Notes: The big event of 2005: nanotech solar energy conversion?

...CTV.ca | New plastic can better convert solar energy

TORONTO — Researchers at the University of Toronto have invented an infrared-sensitive material that's five times more efficient at turning the sun's power into electrical energy than current methods...

Sargent and other researchers combined specially-designed minute particles called quantum dots, three to four nanometres across, with a polymer to make a plastic that can detect energy in the infrared....

...Sargent said the new plastic composite is, in layman's terms, a layer of film that "catches" solar energy. He said the film can be applied to any device, much like paint is coated on a wall...
"We've done it to make a device which actually harnesses the power in the room in the infrared."
The film can convert up to 30 per cent of the sun's power into usable, electrical energy. Today's best plastic solar cells capture only about six per cent.

...Sargent's work was published in the online edition of Nature Materials on Sunday and will appear in its February issue.

Given today's rising oil prices, I assume my excitement was premature.

So what's happened since 2005?

The Sargent group web site now says:

... This first report did not achieve a high efficiency in the infrared. We are working to realize record photovoltaic efficiencies in the infrared to bring performance to what is needed to become commercially relevant...

In other words, the initial press release was a bit ... misleading.

Sigh.

I have no problem spotting exaggeration in healthcare related articles, but unsurprisingly I don't do quite as well in other areas. I should have looked for more sophisticated secondary discussions rather than working from the CTV article.

DeLong: Bill Moyers interviews Philippe Sands

Today Bloglines tossed up 112 DeLong posts.

It does that sometimes. The Analytic Engine gets sand in the gears then suddenly lurches onward.

I knew DeLong couldn't have been that quiet.

Among the posts is an extended excerpt from an interview Bill Moyers did with Philippe Sands. Mr. Sands is a scholar of modern torture who's studied the impact of British torture on the IRA. He believes that the torture strengthened the IRA, and prolonged the conflict, by increasing support from otherwise ambivalent Irish Catholics. Whatever intelligence was gained was outweighed by the damage to Britain's reputation.

Of course this is a pragmatic argument. It is also wrong to cause harm and pain, and while some violence may be the lesser of two wrongs (the invasion of Afghanistan), that has not been so of cruelty. The distinction is perhaps comparable to the difference between killing in self-defense and calculated murder.

Lastly, it is important to again recall that humans do slide down slippery slopes very easily. It is in our nature. We have "commandments" and their like for a reason. Legal cruelty is so very, very dangerous ...

Yes, it is hard to continue to read, and write, about the Bush/Cheney/Rumsfeld/Rice/Feith/GOP torture regime. On the other hand, as civil duties go, this one's relatively easy sledding. So be a citizen and at least scan the interview.

By way of background, here's the book blurb on Amazon

On December 2, 2002 the U.S. Secretary of Defense, Donald Rumsfeld, signed his name at the bottom of a document that listed eighteen techniques of interrogation--techniques that defied international definitions of torture. The Rumsfeld Memo authorized the controversial interrogation practices that later migrated to Guantanamo, Afghanistan, Abu Ghraib and elsewhere, as part of the policy of extraordinary rendition. From a behind-the-scenes vantage point, Phillipe Sands investigates how the Rumsfeld Memo set the stage for a divergence from the Geneva Convention and the Torture Convention and holds the individual gatekeepers in the Bush administration accountable for their failure to safeguard international law.

The Torture Team delves deep into the Bush administration to reveal:

  • How the policy of abuse originated with Donald Rumsfeld, Dick Cheney and George W. Bush, and was promoted by their most senior lawyers
  • Personal accounts, through interview, of those most closely involved in the decisions
  • How the Joint Chiefs and normal military decision-making processes were circumvented
  • How Fox TV’s 24 contributed to torture planning
  • How interrogation techniques were approved for use
  • How the new techniques were used on Mohammed Al Qahtani, alleged to be “the 20th highjacker”
  • How the senior lawyers who crafted the policy of abuse exposed themselves to the risk of war crimes charges

and from the interview (editing, links, paragraph insertions, emphasis mine)...

Grasping Reality with Both Hands: The Semi-Daily Journal Economist Brad DeLong

...BILL MOYERS: You subtitle the book Rumsfeld's Memo and the Betrayal of American Values. Tell me briefly about that memo and why it betrayed American values.

PHILIPPE SANDS: The memo appears to be the very first time that the upper echelons of the military or the administration have abandoned President Lincoln's famous disposition of 1863: the U.S. military doesn't do cruelty.... It's called the U.S. Army Field Manual, and it's the bible for the military. And the military, of course, has fallen into error, and have been previous examples of abuse.... But apparently, what hasn't happened before is the abandonment of the rules against cruelty. And the Geneva Conventions were set aside, as Doug Feith, told me, precisely in order to clear the slate and allow aggressive interrogation... at the insistence of Doug Feith and a small group, including some lawyers. And the memo by Donald Rumsfeld then came in December, 2002, after they had identified Muhammed al-Qahtani. But it was permitted to occupy the space that had been created by clearing away the brush work of the Geneva Conventions. And by removing Geneva, that memo became possible.

Why does it abandon American values? It abandons American values because this military in this country has a very fine tradition, as we've been discussing, of not doing cruelty. It's a proud tradition, and it's a tradition born on issues of principle, but also pragmatism. No country is more exposed internationally than the United States.

I've listened, for example, to Justice Antonin Scalia saying, if the president wants to authorize torture, there's nothing in our constitution which stops it. Now, pause for a moment. That is such a foolish thing to say. If the United States president can do that, then why can't the Iranian president do that, or the British prime minister do that, or the Egyptian president do that? You open the door in that way, to all sorts of abuses, and you expose the American military to real dangers, which is why the backlash began with the U.S. Military.... It slipped into a culture of cruelty. There was a, it was put very pithily for me by a clinical psychologist, Mike Gellers, who is with the Naval Criminal Investigation Service, spending time down at Guantanamo, who described to me how once you open the door to a little bit of cruelty, people will believe that more cruelty is a good thing. And once the dogs are unleashed, it's impossible to put them back on. And that's the basis for the belief amongst a lot of people in the military that the interrogation techniques basically slipped from Guantanamo to Iraq, and to Abu Ghraib. And that's why, that's why the administration has to resist the argument and the claim that this came from the top.... It started with a few bad eggs. The administration has talked about a few bad eggs. I don't think the bad eggs are at the bottom. I think the bad eggs are at the top. And what they did was open a door which allowed the migration of abuse, of cruelty and torture to other parts of the world in ways that I think the United States will be struggling to contain for many years to come.

We have a long road of recovery ahead -- if we take it. Electing John McCain, who's abandoned his former opposition to torture, means we take the slippery road instead.

Monday, May 19, 2008

Quicken, Palm, AOL - once they were good

I can't remember when we first got an Intuit Quicken credit card. It might have been in the 80s, when Intuit mailed us a diskette every month.

I think it was a 3.5" diskette, but I know my first copy of Quicken shipped on a 5.25" floppy.

In those days, except for an unfortunate tendency to corrupt its database, Quicken was a pretty good product - on Windows and Mac alike.

It was never quite as good again. Over the past few years we've weaned ourselves off an increasingly flaky product, even as Quicken lost its transaction network. We're back on spreadsheets now, but we've kept our Quicken VISA card.

Until now.

Intuit has decided to switch banks, and the process is a bleedin' mess. Our VISA number will change (thank heavens I use AMEX for all my net transactions -- they're a class act), and when I went to pay my online bill I came across this message:
If your account was recently converted to a Citi card, you will need to access citicards.com to continue paying bills and viewing statements...

If you were recently converted to another Citi card, access www.citicards.com to register your new account. You will need to re-enroll in Paperless Statements and re-register to make Online Payments.

This website will not be available after June 26th, 2008.

Important Notice: As of May 18th, Paperless Statements will no longer be available. Instead, your statement will be sent to you via first-class mail...
Sounds like a messy divorce.

Palm, Quicken, AOL, Lotus, WordPerfect, Borland, Symantec, Norton, Ashton-Tate. They were all good in their day (pre-internet Mac-only AOL wasn't all bad!)

Those days are gone. Good-bye Quicken.

PS. We're looking for a non-Quicken VISA card. We don't pay interest so we don't care about interest rates. We want service, reliability, security, a high quality web site, and minimal to no yearly fee.

Recommendations anyone?

Update 8/14/08: We ended up getting an REI VISA card through US Bank. The "signature" card has a seemingly good cash-back program, the usual warranty protection (though we much prefer AMEX for that), and very good electronic information transfer and Quicken support. So it's in every way better than our old Quicken Visa. We also like REI and the card gives a larger discount there. They do, however, follow the evil practice of many banks -- the due date moves 2 days forward every month. So it's easy to miss the payment. Scum. AMEX sticks to the same day each month. I love my AMEX Blue Cash Back card.

Scary thought - I actually understand this Udell post

I've been doing this stuff too long. This Udell dialog on sparse database representation of social data actually makes sense to me ...

Semi-structured database records for social tagging « Jon Udell

... But when we stepped back and looked at the semi-structured data problem in a larger context, beyond the WinFS requirements, we saw the need to extend the top-level SQL type system in that way. Not just UDTs, but to have arbitrary extensibility...

JU: This is what the semantic web folks are interested in, right? Having attributes scattered through a sparse matrix?

QC: That’s right. And that leads to another thing which we call column groups, which allow you to clump a few of them together and say, that’s a thing, I’m going to put a moniker on that and treat it as an equivalence class in some dimension...

It's not a new discussion; this problem is as old as dirt. Think Lotus Agenda/Notes, etc. I'm sure there are variations on this theme from the pre-relational 1970s as well, and probably the 1960s.

Even today, variations of ancient hierarchical databases (MUMPS, Caché, Epic's healthcare software, etc.) are valued in part because of their approaches to the sparse data/flexible attribute problem. So are attribute-value data stores in relational tables.
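
For the curious, here's a minimal sketch of that last pattern, an entity-attribute-value (EAV) table in an ordinary relational store. It's a hypothetical example using Python's built-in sqlite3 module; the table and column names are mine, not anything from Udell's post or WinFS.

import sqlite3

# Hypothetical EAV layout: each row is one (entity, attribute, value) triple,
# so entities can carry arbitrary, sparse sets of attributes with no schema changes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eav (entity_id TEXT, attribute TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO eav VALUES (?, ?, ?)",
    [
        ("msg1", "subject",  "Lunch Friday?"),
        ("msg1", "tag",      "social"),
        ("msg2", "subject",  "Q2 report"),
        ("msg2", "tag",      "work"),
        ("msg2", "priority", "high"),   # msg1 simply lacks this attribute
    ],
)

# The sparse-row query: pull back whatever attributes one entity happens to have.
for attribute, value in conn.execute(
    "SELECT attribute, value FROM eav WHERE entity_id = ?", ("msg2",)
):
    print(attribute, value)

The price, of course, is that every nontrivial query becomes a join or a pivot, which is the tension the conversation above keeps circling.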

It's interesting to see how it all connects though ...

Ada Lovelace's paralysis, quirks of In Our Time, and the odd beliefs of UK historians

The UK historians often featured on In Our Time are quite entertaining, but they have two common weaknesses.

One is a quite shaky knowledge of science and medicine. I suspect it's the fashion for certain UK academics to know nothing of the past 100 years of science, but it can be a bit annoying.

The other is a fondness for startling off-the-cuff remarks. For example, during a discussion of the peculiar persistence of the Galenic humoral theory [1], one scholar mentioned that medieval scholars couldn't do arithmetic -- so they weren't able to measure the futility of humoral therapies. The introduction of Indian maths enabled calculation, and finally ended one set of quackeries. (Though many others thrive today -- despite numeracy!)

I'm doing some expansion here. The original comment was about four words.

This weakness for cryptic but startling statements is somewhat endearing. Yes, the premise may be debatable, but it's interesting.

Today, I swear, I heard a guest proclaim that Ada Lovelace [1] was completely paralyzed for three years due to measles, but then recovered. This turns out to be an example of both of the IOT historian weaknesses.

Measles can have some extremely nasty neurologic complications, but I don't recall reversible paralysis being among them (nor do my online references [2]). I also think it unlikely that she could have survived very long in the early 19th century with complete paralysis.

The Ada Lovelace paralysis story turns out to be a bit of a mystery to the net. Quick searches found varying mentions of the degree of her paralysis, from legs alone to total paralysis. Wikipedia had the most suggestive explanation ...

Ada Lovelace - Wikipedia, the free encyclopedia

...In June 1829, she was paralyzed after a bout of the "measles". Lady Byron subjected the girl to continuous bed rest for nearly a year, which may have extended her period of disability. By 1831 she was able to walk with crutches...

So she was sick with something (measles, polio?), but her disability may have been the product of her eccentric/mentally ill mother's induced bed rest.

Now here's the interesting bit. You don't have to be a genius to figure out that the original story doesn't make sense. So how does it manage to survive in the minds of preeminent UK historians? Don't they ever get comments from their physician friends after public lectures?

[1] Link is to archive site. Google sends us to the iPlayer beta site, which doesn't keep these episodes.

[2] Years ago MD Consult was a great source of medical references, but publisher fights tore it apart. Progress is not linear.

Sunday, May 18, 2008

An end to foolish pleas for more US engineers and scientists?

Ok, this should finally do it.

This should stop the nonsensical blubbering about how more Americans need to go into science, and more women should do computer science.

We already know that more Americans study science and engineering than makes economic sense -- probably because of immigration effects.

Now we learn that Japanese students are abandoning science and engineering. Why? More money for less work in other jobs.

Science and engineering are the comparative advantage of the "up and coming" nations like China, India, Korea, Thailand, Russia, etc. The only reason the US and Canada have maintained a strong presence in the sciences for so long is because our universities used to attract a large number of international students -- but Bush et al have crushed much of that appeal.

If we want even more US scientists and engineers, we're going to have to start taxing CEO salaries and use the money to buy engineering graduates new homes. Alternatively, we could elect Barack Obama and restore the attractiveness of American universities to the best international students.

Saturday, May 17, 2008

Best coverage of platypus genome

Pharyngula: The platypus genome is excellent; much better coverage than anywhere else. I stopped reading PZ Myers because I was bored by his atheism wars, but maybe I need to pick him up again.

Friday, May 16, 2008

Mildred Loving and American history

The obituary section is, most of the time, the most thoughtful and best written part of The Economist.

Today, in kind but unsentimental language, we learn how two almost invisible people became transiently famous, then a part of history, and then vanished again...

Mildred Loving | Economist.com

... Mrs Loving wanted to return for good. When the Civil Rights Act was being debated in 1963, she wrote to Robert Kennedy, the attorney-general, to ask whether the prospective law would make it easier for her to go home. He told her it wouldn't ...

It's a remarkable story about American history, and how the mechanics of American law create history from the ahistoric.

It would play differently in the age of Oprah.

Evolving Skynet by Gwap (Nick Carr)

Google search is clearly an AI (I presume still non-sentient) in which processing cycles run partly on human wetware.

Insert your favorite Matrix/science fiction reference here.

Nick Carr comments on the next phase in Skynet's evolution ...

Rough Type: Nicholas Carr's Blog: Von Ahn's Gwap

... As The Register notes, a new site was launched this week, by Carnegie Mellon's School of Computer Science, that aims to entice humans into playing simple games that will help computers get smarter. The site, called Gwap (an acronym for "games with a purpose"), is the brainchild of computer scientist Luis von Ahn (who also cofathered the Captcha). "We have games that can help improve Internet image and audio searches, enhance artificial intelligence and teach computers to see," he explains. "But that shouldn't matter to the players because it turns out these games are super fun."...

... In other words, we become part of the processor, part of the machine. In Gwap and similar web-based tools, we see, in admittedly rudimentary form, the next stage in cybernetics, in which it becomes ever more difficult to discern who's in charge in complex man-machine systems - who's the controller and who's the controllee.

Dyer: Peak Oil means Peak Sweet Crude only

I'm impressed by Dyer's reasoning.

Peak Oil, he tells us, is only the end of the good stuff. There's lots of CO2-producing bad stuff, even if we have to bake and tear the planet to get to it.

Dyer's predictions ...

... the recession is likely to drive the demand for oil down far enough to bring the price back down to $100 before long, or even to $85-90. Then in 2009-2010, as the "old rich" economies recover, it will go back up, probably to the $130-$150 range....

...the price of oil will probably stay well above $100 for most of the time in 2010-2015. But it won't hit $200, because there will be a steep rise in the supply of non-conventional oil from tar sands, oil shales, and other sources of "heavy oil."...

...In the still longer run -- the 2030s and beyond -- the demand for oil will probably fall even further, and with it the price. How do we know that? Because if it hasn't fallen due to a deliberate switch away from fossil fuels, then global warming will gain such momentum that entire countries are falling into chaos instead. There is more than one way to cut demand...

This would mean that, contrary to my post of May 12, oil and gasoline prices aren't necessarily going to keep rising at 10-15% per year. There's a natural ceiling of $200 out around 2015. Beyond that we probably melt Greenland, and the ensuing global chaos drops the price of oil.
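
The compounding arithmetic shows why the two views can't both hold. A rough Python sketch, assuming a starting price of about $130 (roughly where crude sat in May 2008) and the 10-15% annual increases from my May 12 post; the numbers are illustrative only:

# Rough compound-growth check: what 10-15%/year does to a ~$130 barrel by 2015.
start_price = 130.0   # assumed mid-2008 price, for illustration
years = 7             # 2008 to 2015

for rate in (0.10, 0.15):
    price_2015 = start_price * (1 + rate) ** years
    print(f"{rate:.0%}/year gives ~${price_2015:.0f} in 2015")
# About $250 at 10%/year and $345 at 15%/year, both well past Dyer's ~$200 ceiling.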

Hmm. I'm glad I'm waiting for August before I make my "official" prediction!