So I had a quick look at Google Gears this morning. Unlike some, I do most definitely see value in supporting disconnected scenarios, not because I don’t see pervasive wired and wireless networks being the rule in the not-too-distant future – I do – but because I understand that networks are unreliable. That said, I do have some concerns about how Gears was put together.

My primary concern is that I’ve always felt that supporting offline use in existing browsers required innovation in implementation rather than in interface, whereas Gears is all about interface. What I mean by that is that I believe a better, more easily deployable and usable solution would be for Mozilla itself to tweak the implementations of its HTTP stack, cache, and XMLHttpRequest object. Instead, Gears gives us new interfaces like LocalServer, which developers are supposed to use to check for valid cached representations before hitting up XHR: something XHR could very well do itself, largely transparently (I expect – I haven’t considered all the backwards-compatibility issues).
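To make the contrast concrete, here’s a minimal sketch of the explicit bookkeeping those new interfaces ask of page authors, recalled from the beta Gears docs – the names and signatures should be read as illustrative, not authoritative:

```typescript
// Sketch of the explicit cache check Gears asks page authors to make before
// falling back to XHR. API names are recalled from the beta Gears docs and
// may not be exact; treat this as illustrative only.
declare const google: any; // the Gears plugin injects this global

const localServer = google.gears.factory.create("beta.localserver");
const store = localServer.createStore("my-app"); // a ResourceStore

const dataUrl = "/data/latest.json"; // hypothetical resource

if (!store.isCaptured(dataUrl)) {
  // Capture the representation so LocalServer can serve it while offline.
  store.capture(dataUrl, (url: string, success: boolean) => {
    console.log(success ? "captured " + url : "capture failed: " + url);
  });
}

// Only now do we hit XHR; LocalServer intercepts the request and serves the
// captured copy when the network is unavailable. This is the step that an
// offline-aware XHR implementation could arguably handle transparently.
const xhr = new XMLHttpRequest();
xhr.open("GET", dataUrl);
xhr.onload = () => console.log(xhr.responseText);
xhr.send();
```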

Now, Gears may very well have been released because Google wanted to enable these features today rather than wait for HTML 5 (and its equivalent of client-side storage) to be deployed. And from that perspective it’s great (though requiring a plugin is a bit of a pain). I just hope that the Gears folks are talking with Hixie and Mozilla about where to draw the line here.

I made a comment on Pete Lacey’s latest in the “RIA” discussion that I wanted to reiterate here;

how is that different than the bad old days when a site was developed for one particular browser?

I’m absolutely thrilled that Tim has finally grokked REST. AFAIK, he’s the first die-hard Web services type with a strong public persona to realize REST’s (and the Web’s, of course) benefits over WS/SOA/RPC. Bravo, Tim!

I’ve long thought that what was needed in this discussion was new perspectives on the relationship between REST and WS/RPC/etc… that would permit the message to reach more people. Tim’s ably doing his part along those lines with his followup posts. I would never have thought to describe things this way.

So, who’s next?

This report about Google’s brand power reminds me of a discussion I had with a guy from Adobe at ETech who was pushing Apollo. I was trying to figure out why somebody would want to use it, and this guy’s response was “One word; branding”. Of course, he trotted out the expected example of Apple and iTunes and said that iTunes was more immersive and therefore provided Apple superior branding. Ok, fair enough. But obviously, as this report shows, Google didn’t require a fat client in order to build one of the world’s strongest brands.

Adobe’s ability to execute has been impressive, of course. But I can’t help wondering whether they’d be doing even better had they simply innovated on top of the Web. I suppose that’s the easy way out, but it’s not nearly the most lucrative.

Dave Orchard on versioning;

The fundamental problem with a version # in a document is that it doesn’t provide for a given document to be valid under more than one version. What we really need is to be able to indicate a “space of versions” that a given document is valid under, whether that’s a list or regexp or whatever.

Amen. You know, just like a media type!
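To illustrate (with made-up media type names): a client can already name the whole space of versions it understands and let the server pick one, with the response’s Content-Type identifying which member of that space was chosen:

```typescript
// Hypothetical illustration: instead of baking a single version number into
// the document, the client declares the set of versions it can process via
// media types; the server negotiates and the response's Content-Type says
// which one it picked.
async function getOrder(): Promise<void> {
  const res = await fetch("https://example.org/orders/42", {
    headers: {
      Accept:
        "application/vnd.example.order-v2+xml, " +
        "application/vnd.example.order-v1+xml;q=0.5",
    },
  });
  console.log(res.headers.get("Content-Type")); // e.g. the v2 representation
}
```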

A quaint exchange on the WebAPI WG mailing list;

>> Why not always uppercase method?
>
> That would upset the HTTP gods.

Ok. Fair enough.

I see that my work there is complete. 8-)

Ian Foster writes about OGSA-DAI (Data Access and Integration);

[…] it provides uniform Web Services interfaces to diverse data resources

Neat! That’s so 1990.

How many times do we really need to reinvent the Web, on top of the Web?

All this because of a little confusion over a word. Wow.

Just a quick followup on a previous piece – Ajaxian picked up a couple of declarative Javascript stories today;

Any move of the pendulum in this direction is a-ok by me. But to be clear, I am glad it’s a pendulum … meaning that there’ll always be a place for script (the bleeding edge), but we need to consolidate common practice periodically. This also gives us the opportunity to support the functionality natively in the browser.
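As a small (and, with hindsight, later-era) illustration of that pendulum, here’s the sort of scripted pattern that eventually gets consolidated into declarative, native browser functionality – the helper name is mine, and the collapsible-section example is the one HTML later absorbed as the <details> element:

```typescript
// The scripted, bleeding-edge version of a collapsible section.
function makeCollapsible(header: HTMLElement, body: HTMLElement): void {
  body.style.display = "none";
  header.addEventListener("click", () => {
    body.style.display = body.style.display === "none" ? "" : "none";
  });
}

// Once the practice is common enough, the browser can support it natively
// and declaratively, with no script at all:
//
//   <details>
//     <summary>Shipping options</summary>
//     <p>...</p>
//   </details>
```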

How has Mobile Web 2.0 come to this;

One way that Web 2.0 companies can similarly adjust their services for mobile devices is by relying less on browser-based applications and more on small software clients that users can download onto their phones. “The browser will fade into the background,” said Wood.

The article’s not all bad though (in fairness, the main message is obvious – as Micah says, “Duh”). It also warns against “naive copying of PC services” (by which I assume he means Web sites primarily targeted at PC users – a subtle but important distinction), which is good advice. But here’s a tip for mobile folks: if you find yourself moving outside the browser, or doing so while not using Web technologies (widgets), you’re not doing Web 2.0. It might be “Mobile 2.0”, but it’s not Web 2.0, and therefore not “Mobile Web 2.0”.

And this…

He used the example of Google Maps, an application initially designed for the PC. Because the application is built on Ajax, like many other Web 2.0 services, it pushes data out to the client device in order to speed up future user requests. On a mobile phone, that process drains battery life, eats up limited memory and results in potentially very high data-access charges. Google Inc. has introduced a version of the program designed for mobile phones that eliminates some of that overhead, improving the mobile user experience.

Well, guess what: using the phone drains the battery, consumes memory, and costs money. Mapping on a phone is going to use more resources than, say, doing email, which in turn will use more than checking the current time. But so what? Mapping is resource-intensive (although you could certainly do better than Google has).

Have you ever used the fat-client Google Maps Mobile referred to above? It’s not exactly the poster child for efficient use of resources – I’ve got (well, RIM had 8-) the phone bill to prove it. I’m not saying the Web version doesn’t consume more, but I would be surprised if a little optimization couldn’t bring it in line with the midlet. Besides, I’d bet that if you asked Google why they created it, resource consumption would be way down the list, and the lack of a widely deployed AJAX stack on mobile devices would be at the top … which is rapidly changing, of course.

While the unique needs of mobility should always be acknowledged, and normally accommodated, remember that therein lies a very slippery slope … the same one that WAP happily slid down years ago by internalizing the belief that mobile was so special that it needed non-interoperable mobile equivalents of every protocol from IP on up. And while there are, as always, exceptions – apps that really are better off installed than delivered as Web apps – are you certain that yours is one, and do you realize what you’re sacrificing by going that route?

Damn. If the W3C can’t get the browser-based Web right, and is home to the core standards that make up WS-Deathstar, it makes one wonder whether they’re really the organization best suited to “Lead the Web to its full potential”.

IMO, all of the problems mentioned at those links would vanish if only the W3C were made accountable to the public rather than to its members – or at least to the public first.

A new agenda item for the upcoming Advisory Board meeting perhaps?

So, how’s the Semantic Web coming along?