There's a fair bit of history here. Years ago we used to speak of "thin clients" (Citrix) and "thick clients" (conventional apps). Thin clients were easy to deploy but put a lot of processing burden on servers and suffered from communications latency.
Of course it might also work even better on gBrowser ...
And now we have Ajax vs. .NET competing for the variable-weight browser-client title.
And so it goes.
Update: a colleague mentions an alternative approach to reducing latency, namely unbelievable server and network performance:
I would point out that several of the techniques Google is using don't require an elaborate client side DHTML approach. They are, instead, leveraging their incredible load capacity and low latency server array. Google maps is a perfect example of relatively low tech client, plus an incredible tech server architecture. If you look carefully you can sometimes see upwards of 20 GET requests being spun off, each with a latency response that is way way ...

Update Again 2/23: Udell has much more on this topic, including more on how Google got its server latency to unprecedented levels.