Showing posts with label xanadu.

Thursday, February 28, 2008

Brad DeLong Lecture slide: The Invention Transition

From his Berkeley Economics course: Brad DeLong's Slides: The Invention Transition

  • Population (n)--two heads are better than one
  • Education (fi)--standing on the shoulders of giants
  • Societal openness (li)--how many people can you talk to before being "shown the instruments"
  • Means of communication--language, writing, printing, etc....
  • If these four are hobbled, the pace of invention will be slow
    • Fortunately, no global technological regress (that we remember, at least)
    • Only seven known--and disputed--examples of "local" technological regress
    • Iron Dark Age, Medieval Dark Age, Medieval Greenland Vikings, Mayan Heartland, Mississippi Mound-Builders, Easter Island, Flinders Island...

DeLong's "Chains of Innovation" equation has a parameter representing "number of links to others"; the "economy-wide innovation" value tends to infinity as the number of links and "probability of successful transmission" increase.

In my lifetime I've seen the "links to others" bit grow exponentially.

Hello Singularity.

I was struck by the comment that there are only seven known examples of "local" technological regress. Technology is sticky.

Thursday, January 31, 2008

Microsoft's FeedSync: what the heck is it and why would anyone care about a trivial problem like data synchronization?

Jacob Reider, the master of the terse post, apparently likes Microsoft's FeedSync.

Of course, Jacob, you didn't bother to say why you liked it. Or even what it might be good for!

It turns out that FeedSync was originally a Ray Ozzie (Lotus Notes -> Microsoft CTO) project. I don't know what it started out as, but it now claims to be an open specification for enabling data synchronization.

Jacob is presumably interested for two reasons. One is general geekhood, the other healthcare-related. First, the geek stuff.

As a fellow geek, Jacob, like me, is constantly trying to synchronize data across platforms. Anyone who's been around the block with Outlook, Exchange, Palm, mobile phones, iPhones, Gmail, iSync, etc., etc., will have learned that this is a non-trivial problem even in the relatively trivial domain of synchronizing address books.

We geeks would like, for example, to move our images and metadata readily from Picasa to Flickr and back again. Good luck -- even if Google claims to be opposed to Data Lock, enabling synchronization between competitors is a difficult proposition -- particularly when the services define photo collections differently (include by reference or by copy?).

Heck, we'd like to move our metadata from iPhoto to Aperture -- two desktop apps Apple controls. We can't even do that. (ex: photo book annotations). Forget Aperture to Lightroom!

How hard is this problem? I have long claimed that data synchronization issues between Palm and Outlook/Exchange were one of the top three causes of the collapse of the once-promising Palm OS ecosystem. OS X geeks know that Apple has a long history of messed up synchronization even within the completely controlled OS X/.Mac environment. IBM has had several initiatives to manage this kind of issue (the last one I tracked was in the OS/2 era) -- all disasters. Anyone remember CORBA transaction standards? Same problem in a different form. The only experience I've had of synchronization working was with the original Palm devices synchronizing to the original Palm Desktop -- where everything was built to make synchronization work. Lotus Notes, of course, was into synchronization in a very big way -- that's how the different Notes repositories communicated with one another (hence Ozzie's interest). I don't know how well that really worked, but I'm told it took an army to make Notes work.

Personally, I think this problem gets fully solved about 10 milliseconds before Skynet takes over. There are too many nasty issues of semantics, of each system knowing what the other means by "place", to achieve perfect results between disparate systems. Even the imperfect results achieved by using language between mere humans require a semblance of sentience, shared language, and even shared culture.

Reason two for Jacob's interest is, of course, his health care IT background. HL-7. SNOMED terminfo models. HITSP and Continuity of Care Records. Even Google's fuzzy Personal Health Record interchange services. Microsoft's various healthcare IT initiatives. Many HCIT vendor transaction solutions. They're really all about data synchronization on a grand scale -- even if the realities tend to be fairly modest.

Jacob, btw, is fond of those loosely-coupled mashup thingies.

So what's "FeedSync"? (emphases mine)

Windows Live Dev FeedSync Intro

The creation of FeedSync was catalyzed by the observation that RSS and Atom feeds were exploding on the web, and that by harnessing their inherent simplicity we might enable the creation of a “decentralized data bus” among the world’s web sites. Just like RSS and Atom, FeedSync feeds can be synchronized to any device or platform.

Previously known as Simple Sharing Extensions, FeedSync was originally designed by Ray Ozzie in 2005 and has been developed by Microsoft with input from the Web community. The initial specification, FeedSync for Atom and RSS, describes how to synchronize data through Atom and RSS feeds.

The FeedSync specification is available under the Creative Commons Attribution-Share Alike License and the Microsoft Open Specification Promise.

... FeedSync lays the foundation for a common synchronization infrastructure between any service and any application.

... Everyone has data that they want to share: contact lists, calendar entries, blog postings, and so on. This data must be up-to-date, real-time, across any of the programs, services, or devices you choose to use and share with.

Too often today data is “locked up” in proprietary applications and services or on various devices. As an open extension to RSS and Atom, FeedSync enables you to “unlock” your data—making it easy to synchronize the data you choose to any other authorized FeedSync-enabled service, computer, or mobile device. FeedSync enables many compelling scenarios:

  • Collaboration over the web using synchronized feeds
  • Roaming data to multiple client devices
  • Publishing reference data and updates in an open format that can be synchronized easily

... FeedSync enables multi-master topologies,

... publish a subset of his calendar more broadly using a FeedSync feed. Consumers of the publish-only feed can only see a subset of the calendar, and don’t have permission to make changes. Because of the FeedSync information in the feed, though, they are reliably notified of updates to Steve’s shared calendar. And unlike current feeds, when Steve deletes an item from the calendar, the item is deleted on everyone’s calendar.

... RSS and Atom were designed as notification mechanisms, to alert clients that some new resource is available on a server. This is a great fit for simple applications like blogging.

But those feed formats are not a natural fit for representing collections of resources that change, such as a contact list, or a collection of calendar items. Atom Publishing Protocol is designed for resource collections, but it is a client-server protocol and isn’t suitable (by itself) for multi-master scenarios. FeedSync extends RSS and Atom so that FeedSync-enabled RSS and Atom feeds can be used for reliable, efficient content replication and multi-master data synchronization.

One of the great benefits of FeedSync is that it doesn’t attempt to replace technologies like RSS, Atom, or Atom Publishing Protocol. Instead, FeedSync is a simple set of extensions that enhances the RSS or Atom feeds that people are already using today...

There you go. Nerdvana indeed.

Grumph.

Ok, I won't rain too hard on this parade. I said "perfect results" weren't feasible. We can't do synchronization for anything that's not trivial -- at least not without monstrous effort. The interesting question is whether there's some kind of "good enough" compromise we can start with that, with a lot of time and evolution, might lead to some sort of emergent solution. Preferably without Skynet. Something that bears the same relationship to the original Palm synchronization that Google does to the original memex/xanadu vision...
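
For concreteness, here's a minimal sketch of the kind of machinery involved -- per-item update counters plus deletion tombstones, which is roughly the flavor FeedSync layers onto a feed. To be clear, this is a toy model of multi-master merging, not the FeedSync spec; conflict handling, history tracking, and the actual RSS/Atom extensions are all omitted.

```python
# Toy multi-master merge: each replica keeps, per item id, an update
# counter and a deletion flag (tombstone). Higher counter wins; ties
# are ignored here (a real sync spec defines a tiebreaker and keeps
# conflict history). Illustration only, not FeedSync itself.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    updates: int           # incremented on every local edit or delete
    deleted: bool = False  # tombstone survives so deletes propagate
    payload: str = ""

def merge(local: dict[str, Item], remote: dict[str, Item]) -> dict[str, Item]:
    """Merge two replicas; the item with the higher update count wins."""
    merged = dict(local)
    for item in remote.values():
        mine = merged.get(item.id)
        if mine is None or item.updates > mine.updates:
            merged[item.id] = item
    return merged

if __name__ == "__main__":
    a = {"evt1": Item("evt1", updates=2, payload="Dinner, 7pm")}
    b = {"evt1": Item("evt1", updates=3, deleted=True),   # deleted on replica b
         "evt2": Item("evt2", updates=1, payload="Dentist")}
    for item in merge(a, b).values():
        print(item)
```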

Sunday, December 30, 2007

Eckel on managing software projects

Bruce Eckel (new feed of mine, so I'm working through the archives [1]) distilled a lot of professional experience into a commencement address [2]. I've emphasized one statement that a less kind person than I would suggest be applied to certain persons with a branding iron ...

The Mythical 5%

... some companies have adopted a policy where at the end of some predetermined period each team evaluates everyone and drops the bottom 10% or 20%. In response to this policy, a smart manager who has a good team hires extra people who can be thrown overboard without damaging the team. I think I know someone to whom this happened at Novell. It's not a good policy; in fact it's abusive and eats away at company morale from within. But it's one of the things you probably didn't learn here, and yet the kind of thing you need to know, even if it seems to have nothing directly to do with programming.

Here's another example: People are going to ask you the shortest possible time it takes to accomplish a particular task. You'll do your best to guess what that is, and they'll assume you can actually do it. What you need to tell them for an estimate like this, and for all your estimates, is that there's a 0% probability that you will actually get it done in that period of time, that such a guess is only the beginning of the probability curve. Each guess needs to be accompanied by such a probability curve, so that all the probabilities combined produce a real curve indicating when the project might likely be done. You can learn more about this by reading a small book called Waltzing with Bears...

I admit, I'd not thought about the inevitable unintended consequence of the "bottom 10%" cuts. Once one person figures out the "hire human sacrifices" strategy everyone will soon learn it, just as ingenious hacks percolate in prisons. Now it's out on the web, so it's known to the metamind. Humans adapt, and a ruthless corporate culture will breed ruthless employees -- which might not work out as intended.

The rest of the essay is all good advice, most of which I've learned the hard way.
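
Eckel's probability-curve point is easy to make concrete. A minimal Monte Carlo sketch -- the lognormal shape and the numbers are my own illustrative assumptions, not Eckel's -- shows how far the sum of "shortest possible" guesses sits from a date you could actually commit to:

```python
# Minimal Monte Carlo schedule estimate. The "optimistic" figure for each
# task is treated as roughly the 10th percentile of a lognormal duration;
# the project completion date is the distribution of the *sum* of tasks.
# Distribution choice and parameters are illustrative assumptions.

from __future__ import annotations
import math
import random

def simulate(optimistic_days: list[float], trials: int = 20000) -> list[float]:
    totals = []
    for _ in range(trials):
        total = 0.0
        for opt in optimistic_days:
            # Lognormal with sigma=0.6, shifted so only ~10% of outcomes beat `opt`.
            mu = math.log(opt) + 0.6 * 1.2816  # 1.2816 ~ z-score for the 90th pct
            total += random.lognormvariate(mu, 0.6)
        totals.append(total)
    return sorted(totals)

if __name__ == "__main__":
    tasks = [3, 5, 8, 2]                      # "shortest possible" guesses, in days
    totals = simulate(tasks)
    for pct in (10, 50, 80, 95):
        print(f"P{pct}: {totals[int(len(totals) * pct / 100)]:.1f} days")
    beaten = sum(t <= sum(tasks) for t in totals) / len(totals)
    print(f"Sum of optimistic guesses: {sum(tasks)} days (met in {beaten:.0%} of runs)")
```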

[1] Tip: When you find a good blog, explore the archives deliberately. In Bloglines I mark a post as 'keep current' as a reminder that I'm still mining the knowledge. If it's a decent blog there will be far more gold in the best of the archives than in the day-to-day (average) posts of even the very best blogs. (Basic stats - sampling curves.)

[2] I remember when things like this were said once, barely heard, then vanished. Now they live on. I don't think we understand how much difference this makes.

Friday, November 30, 2007

The spectrum wars: Chapter One

Google is ready to fight the 700MHz war.

Official Google Blog: Who's going to win the spectrum auction? Consumers.

... we announced today that we are applying to participate in the auction.

We already know that regardless of which bidders ultimately win the auction, consumers will be the real winners either way. This is because the eventual winner of a key portion of this spectrum will be required to give its customers the right to download any application they want on their mobile device, and the right to use any device they want on the network...

Regardless of how the auction unfolds, we think it's important to put our money where our principles are. Consumers deserve more choices and more competition than they have in the wireless world today. And at a time when so many Americans don't have access to the Internet, this auction provides an unprecedented opportunity to bring the riches of the Net to more people.

While we've written a lot on our blogs and spoken publicly about our plans for the auction, unfortunately you're not going to hear from us about this topic for awhile, and we want to explain why.

Monday, December 3, is the deadline for prospective bidders to apply with the FCC to participate in the auction. Though the auction itself won't start until January 24, 2008, Monday also marks the starting point for the FCC's anti-collusion rules, which prevent participants in the auction from discussing their bidding strategy with each other.

These rules are designed to keep the auction process fair, by keeping bidders from cooperating in anticompetitive ways so as to drive the auction prices in artificial directions. While these rules primarily affect private communications among prospective bidders, the FCC historically has included all forms of public communications in its interpretation of these rules.

All of this means that, as much as we would like to offer a step-by-step account of what's happening in the auction, the FCC's rules prevent us from doing so until the auction ends early next year. So here's a quick primer on how things will unfold:

  • December 3: By Monday, would-be applicants must file their applications to participate in the auction...
  • Mid-December: Once all the applications have been fully reviewed, the FCC will release a public list of eligible bidders in the auction. Each bidder must then make a monetary deposit no later than December 28, depending on which licenses they plan to bid on. The more spectrum blocks an applicant is deemed eligible to bid on, the greater the amount they must deposit.
  • January 24, 2008: The auction begins, with each bidder using an electronic bidding process. Since this auction is anonymous (a rule that we think makes the auction more competitive and therefore better for consumers), the FCC will not publicly identify which parties have made which bid until after the auction is over.
  • Bidding rounds: The auction bidding occurs in stages established by the FCC, with the likely number of rounds per day increasing as bidding activity decreases. The FCC announces results at the end of each round, including the highest bid at that point, the minimum acceptable bid for the following round, and the amounts of all bids placed during the round. The FCC does not disclose bidders' names, and bidders are not allowed to disclose publicly whether they are still in the running or not.
  • Auction end: The auction will end when there are no new bids and all the spectrum blocks have been sold (many experts believe this auction could last until March 2008). If the reserve price of any spectrum block is not met, the FCC will conduct a re-auction of that block. Following the end of the auction, the FCC announces which bidders have secured licenses to which pieces of spectrum and requires winning bidders to submit the balance of the payments for the licenses....

So Chapter One will likely run from now through March 2008.

Let the glorious battle begin! May the barbarians run rampant on the ruins of the Empire.

Monday, November 05, 2007

How to Hyperlink: advice and some hypertextual history

Coding Horror has done a nice job summarizing the art of the hyperlink -- and he provides historical context:

Coding Horror: Don't Click Here: The Art of Hyperlinking
...I distinctly remember reading this 1995 Wired article on Ted Nelson and Xanadu when it was published. It had a profound impact on me. I've always remembered it, long after that initial read. I know it's novella long, but it's arguably the best single article I've ever read in Wired; I encourage you to read it in its entirety when you have time. It speaks volumes about the souls of computers-- and the software developers who love them.

Xanadu was vaporware long before the term even existed. You might think that Ted Nelson would be pleased that HTML and the world wide web have delivered much of the Xanadu dream, almost 40 years later...

I recommend the article, though every rule should be broken on occasion. Sometimes I do resort to "click here", for example.

The historical context led me to dredge up old links, and in honor of Ted Nelson, I've created a "hypertextual thread" (tag) called "xanadu".

As CH tells us, Nelson was in fact quite unhappy with how the Web developed. In 1988 (yes, that long ago) Ted Nelson delivered the keynote address to the American Medical Informatics Association (AMIA) on "Project Xanadu" -- and it was clear he wanted the web to go away.

That was the most fun and interesting keynote I've ever heard at AMIA, but half the audience thought Nelson had gone off the deep end long ago. AMIA has since been careful not to invite anyone particularly novel to speak.

Nelson wasn't the only hypertext pioneer to be unhappy with the unidirectional hyperlink. Berners-Lee, the "father" of the web, used to be very unhappy with our fragile hyperlinks. I recall he'd wanted a directory service and an indirection layer for the hyperlink; his CERN experiments simply escaped prematurely. Nowadays, of course, Google is beginning to offer suggested redirects when one enters a failed link into the search engine -- an unimaginably brute-force solution to the problem. I'm sure there are some interesting lessons in how this has evolved!

For a bit more on the topic over the past few years (I used to write about this pre-blog):

Saturday, December 02, 2006

The quiet demise of the CD

A little bit of Future Shock, or perhaps I should say Future Bite. I've used some nice archival quality Verbatim CDs for years and I wanted a refill. I couldn't find them; the only CD spindles for sale on Amazon seem to be lower quality.

I finally figured out why. The price of 'archival' DVDs has fallen below the current price of CDs, so low that packaging and shipping is probably a significant part of product cost. I ended up buying a spindle of DVDs instead.

CDs are quietly disappearing. Alas, I should have upgraded my mother's new Mac Mini to a DVD burner! Blank CDs will become increasingly unreliable and costly.

I remember reading the book written by Bill Gates's father (yes, his father) called 'The New Papyrus'. It was all about how the data CD would revolutionize the world. This was before the net became public. I was amazed by the CD back then, and I wrote a letter to a Canadian development organization on how it could dramatically change the delivery of knowledge to what was then called the 'third world'.

Good-bye CD. We barely knew you ...

Update 9/25/09: See also - UK University lectures and iTunes U.

Tuesday, September 12, 2006

Using Google co-op for health information.

Somehow I missed Google Co-op. Here it's being used to define health information resources. These collaborative bookmarking, path-sharing projects are all the rage, though until now I've not found one that worked for me.

The Google Co-op project is intriguing, of course; it's getting hard to keep up with all of their inventiveness -- is Google trying to advance The Singularity all by itself?

Needless to say, the memex had this feature. Vannevar Bush's 1945 prototype for the WWW+ involved the sharing of links, connections and paths in a collaborative development effort.

PS. Visiting my all-but-forgotten del.icio.us site I was intrigued to see the vanity feature -- an ancient link to Gordon's Tech under the old name is on a few other people's lists. I'll have to add all of my blogs and key pages there to see how many others have been found ... Clearly, I've not thought enough about these emergent collaboration sites ...

Wednesday, September 06, 2006

HyperScope: Does innovation live?

Shades of the 1990s, HyperScope is a reimplementation of classic hypertext and information representation in modern browser-side Ajax. Maybe innovation is not quite dead!

I'm particularly intrigued that the file format is OPML. It suggests that OmniOutliner could be adapted to generate these documents fairly readily ...

Saturday, May 20, 2006

The Universal Library

The NYT magazine has a surprisingly good article on the digital library ...
Scan This Book! - New York Times

....Turning inked letters into electronic dots that can be read on a screen is simply the first essential step in creating this new library. The real magic will come in the second act, as each word in each book is cross-linked, clustered, cited, extracted, indexed, analyzed, annotated, remixed, reassembled and woven deeper into the culture than ever before. In the new world of books, every bit informs another; every page reads all the other pages.
It's pretty good really, but how could they manage to omit Nelson's Project Xanadu, The Memex (Vannevar Bush, As We May Think) and Dickson's The Final Encyclopedia?

They give the impression this stuff is 21st century! It's very mid-20th.

Saturday, March 12, 2005

PLATO Notes, Microsoft Groove, and the curious history of software

This is a bit more software-centric than my usual 'Notes' postings, but it's really not about a particular technical issue; rather it's an interesting and topical anecdote about how software evolves. Once upon a time I thought software largely came from the imagination of a few people. Sometimes it does (for better or worse), but most complex software projects have a long and often unrecognized legacy.

Groove is in the news today; it's a software solution recently acquired by Microsoft. Ray Ozzie, the CEO of Groove, will become a Microsoft Chief Technical Officer. Microsoft's involvement has created the recent interest in this "new" software.

Groove seems new, but it's been in development for at least six years. It's not six years old, however, because it's an offspring of Lotus Notes, which was developed in the 1980s. But it's not twenty years old, because it's really a descendant of PLATO Notes, which was developed at the University of Illinois in the 1970s atop the 1960s (1950s?) PLATO platform. So it's thirty years old. Heck, one could argue that it's really a child of the Memex (1945), so it's about sixty years old.

This is what Kapor of Lotus/spreadsheet fame wrote about the connection of Groove to PLATO:
Mitch Kapor's Weblog: Microsoft Acquires Groove
Ray has been a colleague and friend for over 20 years. He came to Lotus in 1982 with the vision of Notes already in mind, having been inspired by the PLATO system he used as an undergraduate at the University of Illinois...
Kapor's posting led me to a Google search, and thus quickly to a history of PLATO Notes, a pre-PC system for communication and collaboration. The history is well worth reading for anyone who develops or works with complex software systems, or who is just interested in the history of ideas. There are lessons there about electronic community (10 million hours!), about open source development, about the software development process, about software evolution, about software-as-platform -- and more besides.

There are also some minor personal serendipities here. I am writing this on a blog, a modern version of the kind of collaborative community that PLATO pioneered. I live in Saint Paul, and PLATO Notes was commercialized by a Minneapolis company -- Control Data. I have worked with many Control Data veterans who no doubt have connections to the CDC PLATO team, but, in addition, I have a longstanding interest in collaborative software systems (warning: old web pages). About 8 years ago my interest led me to review several alternatives and to comment on the work of David Woolley and his web conferencing guide.

David Woolley, as a young man, created PLATO Notes in 1973; he wrote the article I mention above. David is also a leader at Minnesota e-Democracy, which I've long appreciated. I shall have to send him a note of appreciation.

Update 3/15: David Woolley corrected some errors I made in dates. Thanks David!

Tuesday, February 01, 2005

Ghosts of the Golden Age: the computer as an aid to thinking

The New York Times Sunday Book Review - Essay: Tool for Thought

Once upon a time geeks dreamed that computers could help us think. They are good at what we are bad at; we are good at what they cannot do at all. Vannevar Bush wrote about that dream in the 1940s, though he described it in terms of microfiche. (He actually knew about computers, but that knowledge was classified. I don't know for sure, but on reading his article I was left with the impression that he used "microfiche" as a code for what he could not say aloud.)

During the minicomputer era of the 1970s very innovative software was developed to aid collaboration and education. Most of it is long forgotten -- even I cannot recall the names (PLATO?). In the 1980s the dream again arose; I have a classic Whole Earth Catalog book on personal computing full of fascinating green-screen DOS applications that tried to help people think. Lotus had Agenda and then Magellan.

Then came the Dark Ages. Microsoft swept all creativity aside in its race to power, and then the wonders of the Internet led creative minds in other directions. There were applications in finance, health care and other domains that solved particular problems -- but many of the ones I know of have more or less vanished (Iliad, QMR). (Ok, so many went underground, into devices like EKG machines.)

Steven Johnson claims the dream has life left in it yet. He describes the experience of using a full-text document management tool to manage his large information repository - DEVONthink (an OS X app). Over time his large knowledge collection is beginning to have "emergent properties", to turn into something that's not quite his biological memory but is far more than a filing cabinet. Something akin to Vannevar Bush's Memex, or Dickson's "Encyclopedia" or Ted Nelson's "Project Xanadu".

I've had a similar experience with using Lookout for Outlook, and, to a lesser extent, using Yahoo Desktop Search (X1). A lifelong knowledge repository seems to compensate, in some ways, for the memory loss of an aging and overflowing brain. Tools like DEVONthink, YDS and Lookout are helping make this repository real.

Wednesday, January 19, 2005

Managing complexity: the lifelong data repository

Faughnan's Tech: Yahoo! Desktop (X1) is the new champion

In my tech notes blog I posted a review of X1. I've been using it for a while. It needs work -- it's not as polished in some ways as Lookout -- but it's pretty good. We have a lot further to go, however.

Lookout works well because Outlook content has lots of metadata and context. Email has dates, links to people, descriptive text surrounding attachments, etc. Email tends by nature to provide focal chunks of context. In contrast Google works well on the web because web pages have links that can be weighted, a robust form of metadata. Heck, web pages even have descriptive titles.

By comparison today's desktop file store is a barren desert. There's very little to go on to help search tools work. The most useful tool is probably the folder name -- pretty meager fare.

This wasn't such a big deal when we managed a few MBs of data. But what of the dataset that grows over a decade? That repository may be vast. Unfortunately, due to lack of supporting metadata, it's easier to find documents on the web than it is to find them on the desktop.

The good news is there is no lack of ideas to make things better. Heck, even as one uses today's software to search for items, one can be layering metadata atop the file system. If I do a search and open a file, then it's clearly more valuable and might earn a higher value score. The list of ways to assign value is very long; it will be fun to see how they get instantiated. Some of those ideas are 50 years old (Vannevar Bush described most of them in 1945 or so); I doubt any of them are truly new -- but the implementations will bring surprises.

PS. This is an old interest of mine.

Update 2/21/05: I've taken to appending the string [_s#], where # is 1-5, to the end of filenames to provide some crude metadata value scores. Results from full-text search programs that index file names can then be filtered by the suffix value.
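
A minimal sketch of how a search or filing script might exploit that convention (the [_s#] suffix is from the update above; the directory and threshold below are made-up examples):

```python
# Filter files by the crude [_s#] value score embedded in filenames,
# per the convention above: "report [_s4].doc" carries a score of 4.
# The root directory and min_score threshold are illustrative.

import re
from pathlib import Path

SCORE_RE = re.compile(r"\[_s([1-5])\]")

def score_of(path: Path) -> int:
    """Return the embedded value score, or 0 if the filename has none."""
    m = SCORE_RE.search(path.name)
    return int(m.group(1)) if m else 0

def high_value_files(root: str, min_score: int = 3):
    """Yield files under `root` whose embedded score is at least `min_score`."""
    for p in Path(root).expanduser().rglob("*"):
        if p.is_file() and score_of(p) >= min_score:
            yield p

if __name__ == "__main__":
    for hit in high_value_files("~/Documents"):  # hypothetical root folder
        print(score_of(hit), hit)
```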

Monday, January 03, 2005

The stagnant information architecture of the web: ten years of stasis

Reviving Advanced Hypertext (Jakob Nielsen's Alertbox)
...In 1995, I listed fifteen hypertext features that were missing from Web browsers. None of these ideas have been implemented in the ten years since, except for Firefox's search box and Internet Explorer's search sidebar.

Is there any hope that the next decade will bring more progress? I think so. For one, most of the ideas mentioned here are rich sources of user interface patents, which offer a sustainable competitive advantage. (I invented at least five potential patents while writing this article, but didn't bother filing because I'm not in the business of suing infringers; a big company could rack up the patents if so inclined.)

The last ten years were a black hole: much attention was focused on doomed attempts at making the Web more like television. Hopefully, the next decade will focus instead on empowering users and giving us the features we need to master a worldwide information space.

I don't think these are Nielsen's ideas -- Berners-Lee wanted bidirectional links & collaboration from day one. On the other hand, I have Nielsen's book and he did a good job of summarizing the state of the art back in 1995. There's been zero progress since then. (I remember Hyper-G, a Gopher derivative, for example. Jon Udell has also written about this.)

He's absolutely right that the information infrastructure of the web has stagnated, but that was inevitable once Microsoft took monopolistic control of the browser. Perhaps Firefox, Google and Amazon will lead us out of the darkness.

Tuesday, August 19, 2003

Jon Udell: The future of online community

Jon Udell: The future of online community:

From Jon's Weblog (emphasis mine, including the shameless plug ...):
I used to think I knew what online community was all about. I thought it had something to do with discussion forums, like the one here at InfoWorld I've recently tried to colonize. Having spent too many years, keystrokes, and brain cells debating the pros and cons of various discussion technologies, I'll just cut to the chase. This WebX thing is not working for me. It's not simply that the software mangles URLs, doesn't preview messages, and handles topics and threads in a way I find awkward. What's broken, for me, is the idea that an online community is a place where people gather, and a centralized repository of the discussions held in that place. In that model, I've concluded, the costs are just too high. It's expensive to join. It's expensive to participate, because interactive discussion demands a lot of attention. And it's expensive to leave, because the repository has your data, and may or may not (probably won't) preserve its linkable namespace or hand the data back to you in a reasonable form.

The weblog model reduces all these costs. It's single sign-on: just log into your own blog software. There's less pressure to participate: you can acknowledge other blogs that comment on your stuff, or not. You control the data and can, if you choose, ensure that your namespace persists.

There are tradeoffs, of course. People do miss the feeling of direct interaction. Comment trails attached to blog items are one attempt to recreate the feeling of a discussion. Trackbacks/pingbacks are another. For me, neither quite manages to restore that sense of place and belonging that is lost when you switch to blogging's more loosely-coupled mode of interaction. But I think we'll get there. And when we do, virtual community is going to be even more virtual than we think of it today.

For a couple of years, Steve Yost has been pushing the idea of ThreadML -- that is, a way of representing discussions as portable XML objects. When I went back and looked at the column where I first mentioned Steve's idea, I found it to be a quilt woven from many threads. It began with a wonderful essay posted by John Faughnan to my newsgroup -- which I'm glad I quoted in the column, because the newsgroup is now defunct. The column went on to weave in discussion at Steve's QuickTopic site, on the Yahoo Groups syndication list, on Rael Dornfest's weblog, and elsewhere.
I found the above when testing Google's indexing of my personal blog. In the midst of discovering that Google still wallops Teoma/AskJeeves and AltaVista I came across Jon's essay.

I quote it here not only because Jon speaks of my "wonderful essay". Ok, so I think Jon's a genius and it tickled me no end to have him mention me. I also think Jon has hit it on the nose.

The blogger movement has a funny name, but it feels to me more like the original visions of Vannevar Bush's Memex and Tim Berners-Lee's WWW than all of the Amazons and MSNs put together. Google, with its acquisition of Blogger and its fascinating extensions to the Google toolbar, lays just claim to being the home of the modern memex. (Apologies to the valiant efforts of Ted Nelson and Project Xanadu.)

I've failed a LOT with online project collaboration (exactly one success in 10 years or so). Jon's track record is far more extensive. Now I'm trying Blogger with a course I'm teaching and as part of a web development project at my son's primary school. I actually think it might just work. I love the speed and simplicity of how Blogger works. (Of course I also want built-in thread searching, alternative queries, effective metadata views, back links, etc. etc. But that doesn't have to make the basic UI more complex; that's all value add.)

I'm really looking forward to having Movable Type-class dynamic trackbacks and comment threads that work with Google's painless Blogger and that aren't IE-specific. I'm reasonably hopeful that will happen.