I very much respect what Google’s trying to accomplish with Gears, and appreciate that they’re helping draw attention to the need to build out the client side of the Web a bit more, and that they’re doing it with open source. They’ve nailed the what perfectly.

Then again, Web services nailed their requirements perfectly too.

I think the how of what Gears is doing is, frankly, misguided. There’s minimal reuse of existing Web technologies, and it’s overly imperative when it could easily be far more declarative. Each of these issues raises the bar for the kinds of skills a Web developer needs. Not a good thing.

Interestingly, the Web services analogy doesn’t stop at the requirements. When I talk about “minimal reuse”, what I’m referring to is that the Web already has a mature and general API for accessing and manipulating local data-oriented services – the DOM – and no “gear” I’ve seen uses it for data access. So instead of a Javascript GeoLocation object that might expose method calls such as getCurrentLocation() or getLockedSatellites(), why not make the object a DOM Node and treat the data contained within as a document? Something like;

<Location>
  <Current>45.234,-120.999</Current>
  <LockedSatellites>4</LockedSatellites>
</Location>

(or better yet perhaps, microformatted HTML – though there are issues there)

Then you could use the DOM itself to get at those values, or perhaps even a CSS selector. Simple! You even get events for free.
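To make that concrete, here’s a sketch of what access could look like, assuming a hypothetical navigator.location property that exposes the document above as a DOM Node – the property name is mine, not anything Gears or any browser actually ships;

// A sketch only: navigator.location is a hypothetical property
// exposing the <Location> document above as a DOM Node.
var loc = navigator.location;

// Plain old DOM traversal to read the current coordinates.
var current = loc.getElementsByTagName('Current')[0];
var coords = current.firstChild.nodeValue.split(',');
var lat = parseFloat(coords[0]);
var lng = parseFloat(coords[1]);

// And the “events for free” part: DOM mutation events fire when
// the location data changes – no callback-registration API needed.
loc.addEventListener('DOMCharacterDataModified', function (e) {
  alert('moved to ' + e.target.data);
}, false);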

Why is it that the industry continues to overlook the value of reusing pervasively deployed generic abstractions, be they network oriented or local? It couldn’t be a case of NIH, because the DOM is unavoidable if you’re a Web dev. I guess it just didn’t occur to them, though that doesn’t explain the pushback.

I had saved my email, both sent and received, since 1996, kept safe in a series of mbox files which I’ve diligently moved from hard drive to hard drive as I’ve upgraded my PC. Of course, it wasn’t very accessible there, and it certainly wasn’t integrated with the last four years of mail kept in GMail (well, half in GMail proper, and the other half in Apps). So I was happy to read, albeit belatedly, that Google had added email migration to Apps.

I knew what had to be done, so I dug into the documentation looking for a way to have it inhale those mbox files. As well as the obvious POP/IMAP support, they also supported an Atom interface, which is great and all the rage and everything, but come on, wouldn’t the low-hanging-fruit solution here be for me to just email them the mbox files? Anyhow, without that option, it turned out that the simplest route was just to install dovecot and have it serve up each file as an IMAP folder … which took all of 5 seconds to configure. So I pointed Google at the server, and it spent the next half-day or so chugging through files.
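For the curious, the configuration really was a one-liner; something like this in dovecot.conf, with the paths being illustrative rather than exactly what I used;

# A minimal sketch of the dovecot.conf setting; paths are illustrative.
# Each mbox file under ~/oldmail shows up as its own IMAP folder.
mail_location = mbox:~/oldmail:INBOX=~/oldmail/inbox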

I’m now a happy man, as much of the last 12 years of my communication with the world is now searchable.

AppEngine is to Amazon Web Services as HTTP is to SOAP.

Needless to say, I’m a fan.

I think Google really missed the mark with its attempt at embeddable maps. I suppose something is better than nothing for the myriad folks who want this functionality, but when a simpler, less opaque solution (read; declarative), GMapEZ, has existed for ages, you have to wonder what Google was thinking. The blob of HTML you get might as well be Javascript, or heck, even a Java applet, in the sense that it’s opaque to all but the most inquisitive of developers.
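For contrast, a declarative embed in the spirit of GMapEZ looks roughly like this – I’m recalling the conventions from memory, so treat the class name and the link trick as illustrative rather than the library’s exact API;

<!-- A sketch in the GMapEZ spirit: markup, not an opaque blob.
     Class name and link convention recalled from memory. -->
<div class="GMapEZ" style="width: 400px; height: 300px;">
  <!-- Each link to a maps.google.com URL becomes a marker. -->
  <a href="http://maps.google.com/maps?ll=45.234,-120.999">Here</a>
</div>

Anyone can read that, tweak the size, or move the marker, without spelunking through script.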

This is becoming a bad trend.

So I had a quick look at Google Gears this morning. Unlike some, I do most definitely see value in supporting disconnected scenarios, not because I don’t see pervasive wired and wireless networks being the rule in the not-too-distant future – I do – but because I understand that networks are unreliable. That said, I do have some concerns about how Gears was put together.

My primary concern is that I’ve always felt that supporting offline use in existing browsers required innovation of implementation more than of interface, whereas Gears is all about interface. What I mean by that is that I believe a better, more easily deployable, and more usable solution would be for Mozilla itself to tweak the implementations of its HTTP stack, cache, and XMLHttpRequest object. Instead, Gears gives us new interfaces like LocalServer, which developers are supposed to use to check for valid cached representations before hitting up XHR: something XHR could very well do itself, largely transparently (I expect – I haven’t considered all the backwards-compatibility issues).
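To illustrate the difference, compare the two styles; the Gears calls below follow the shape of its documented LocalServer API as best I recall it, so the details are approximate;

// The imperative dance Gears asks of developers (API shape per the
// Gears docs, recalled from memory – approximate):
var localServer = google.gears.factory.create('beta.localserver', '1.0');
var store = localServer.createStore('my-app');
store.capture('/data/inbox.xml', function (url, success, captureId) {
  // only once captured is the URL available offline
});

// Versus a tweaked XMLHttpRequest doing the same job transparently:
// identical code online or offline, cache consulted under the covers.
var req = new XMLHttpRequest();
req.open('GET', '/data/inbox.xml', true);
req.onreadystatechange = function () {
  if (req.readyState == 4) { /* served from network or cache alike */ }
};
req.send(null);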

Now, Gears may very well have been deployed for its ability to enable these features today, because Google didn’t want to have to wait for HTML 5 (and its equivalent of client-side storage) to be deployed. And from that perspective it’s great (though requiring a plugin is a bit of a pain). I just hope that the Gears folks are talking with Hixie and Mozilla about where to draw the line here.

The view of my GMail spam folder;

Is this just Google being cute or an unfortunate search result? Either way, pretty funny.

How has Mobile Web 2.0 come to this;

One way that Web 2.0 companies can similarly adjust their services for mobile devices is by relying less on browser-based applications and more on small software clients that users can download onto their phones. “The browser will fade into the background,” said Wood.

The article’s not all bad though (in fairness, the main message is obvious – as Micah says, “Duh”). It also warns against “naive copying of PC services” (by which I assume he means Web sites primarily targeted at PC users – a subtle but important distinction), which is good advice, but here’s a tip for mobile folks; if you find yourself moving outside the browser, or doing so while not using Web technologies (widgets), you’re not doing Web 2.0. It might be “Mobile 2.0”, but it’s not Web 2.0, and therefore not “Mobile Web 2.0”.

And this…

He used the example of Google Maps, an application initially designed for the PC. Because the application is built on Ajax, like many other Web 2.0 services, it pushes data out to the client device in order to speed up future user requests. On a mobile phone, that process drains battery life, eats up limited memory and results in potentially very high data-access charges. Google Inc. has introduced a version of the program designed for mobile phones that eliminates some of that overhead, improving the mobile user experience.

Well, guess what; using the phone drains the battery, consumes memory, and costs money. Mapping on a phone is going to use more resources than, say, doing email, which in turn will use more than checking the current time. But so what? Mapping is resource-intensive (although you could certainly do better than Google has).

Have you ever used the fat-client Google Maps Mobile referred to above? It’s not exactly the poster child for efficient use of resources – I’ve got (well, RIM had 8-) the phone bill to prove it. I’m not saying the Web version doesn’t consume more, but I would be surprised if a little optimization couldn’t bring it in line with the midlet. Besides, I’d bet that if you asked Google why they created it, resource consumption would be way down the list, and the lack of a widely deployed AJAX stack on mobile devices would be at the top … which is rapidly changing, of course.

While the unique needs of mobility should always be acknowledged, and normally accommodated, remember that there lies a very slippery slope … the same one that WAP happily slid down years ago by internalizing the belief that mobile was so special that it needed non-interoperable mobile equivalents of every protocol from IP on up. And while there are, as always, exceptions – apps that are much better off installable than Web-based – are you certain that yours is one, and do you realize what you’re sacrificing by going that route?

When I first started using GMail 2 1/2 years ago, I used to get perhaps 1 or 2% spam. When I checked my email this morning, it was 85% spam (104 of 122).

Google hasn’t been keeping up with the spammers, so I should probably consider reinstituting my own de-spamming filter. What’s the state of the art nowadays?

Following up on my finger-wag at Google for not properly supporting mashup developers by messing up versioning, I have to now send them full props for one thing they’re doing very, very, right.

One half of Postel’s Law says “Be liberal in what you accept”, and Google has done exactly that in at least two places. First is Google Maps, where you can enter pretty much anything resembling a street address, and more often than not it’ll grok it. That’s not to suggest it couldn’t be improved, mind you – about a quarter of the time I have to refine what I enter, but still, that’s not bad. Without this capability, Maps mashups would be a lot more difficult to develop, in part because no widely adopted standard format for an address exists, leaving prose as the only option for interchange. By doing this, Google is absorbing the cost of solving the problem and relieving mashup developers of the burden. Quite the contrast to their API versioning policy! 8-O
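Here’s what that liberality buys a mashup developer, using the Maps API v2 geocoder – the calls below match the v2 docs as I remember them, so double-check the details against the current reference;

// Free-form prose in, coordinates out – no address standard needed.
// Assumes an existing GMap2 instance named map; API per the v2 docs.
var geocoder = new GClientGeocoder();
geocoder.getLatLng('1600 amphitheatre pkwy, mtn view', function (point) {
  if (point) {
    map.setCenter(point, 13);            // found it: centre the map there
    map.addOverlay(new GMarker(point));  // and drop a marker
  }
});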

Another example of this I’ve noticed is Google Calendar, which can also accept dates in prose, even relative ones like “tomorrow”. And this despite the existence of somewhat decent time and calendaring standards. So why the prose? It just simplifies integration, as the calendar integration with GMail demonstrates; it can pick out dates from an email without requiring the sender to conform to any particular standard. Actually, I don’t know if that’s GMail or the calendar doing it, but I hope it’s the calendar, so that it can be more easily reused in other calendar-integration scenarios.
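To give a feel for what “liberal in what you accept” means for dates, here’s a toy sketch – real services handle vastly more forms than this, but the principle is the same;

// A toy sketch of liberal date parsing; real services go much further.
function parseProseDate(text) {
  var now = new Date();
  if (/\btomorrow\b/i.test(text)) {
    return new Date(now.getTime() + 24 * 60 * 60 * 1000);
  }
  if (/\btoday\b/i.test(text)) {
    return now;
  }
  return null; // punt: hand off to a heavier-weight parser
}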

FWIW, I recall Peter Norvig saying something in his recent highly publicized run-in with TimBL about the value of this approach (mining existing content) over authoring new content; just can’t find the quote I’m looking for right now, but I’ll add it when I do.

I previously pointed to the announcement of the shutdown of a version of the Google Adwords API, and commented that this really isn’t the way to go about versioning your Web 2.0 APIs.

I’ve been digging into Google Maps recently, and noticed that they’re making the same mistake;

The v=2 part of the URL http://maps.google.com/maps?file=api&v=2 refers to “Version 2” of the API. When we do a significant update to the API in the future, we will change the version number and post a notice on Google Code and the Maps API discussion group.

After a new version is released, we will try to run the old and new versions concurrently for about a month. After a month, the old version will be turned off, and code that uses the old version will no longer work.
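For context, the version a mashup gets is pinned in its script include – this is the standard v2 boilerplate from Google’s docs, with the key value elided;

<!-- The standard Maps API v2 include; key elided. When Google retires
     v2, every page carrying this line simply breaks. -->
<script src="http://maps.google.com/maps?file=api&v=2&key=YOUR_KEY"
        type="text/javascript"></script>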

Obviously we’re still pretty early into the whole “mashup” thing, but not so early that we shouldn’t be thinking about best practices, IMO. And best practice #1? Don’t do what Google’s doing here, which is asking all their users – mashup developers who have committed themselves to this service – to absorb the cost of Google’s inability to develop an extensible API.

I think a good rule of thumb for service providers is to assume that you’ve got a million mashups using your service, and therefore that the cost of incompatible changes is prohibitive for the mashup developers. Any other approach is sure to drive those developers to other service providers who do a better job of evolving their APIs; bad juju if your business depends on attracting eyeballs.

So what constitutes a “better job”? I suggest it has something to do with declarative Javascript, but my argument needs some work so I’ll save that for another post.