I think Google really missed the mark with its attempt at embeddable maps. I suppose something is better than nothing for the myriad folks who want this functionality, but when a simpler, less opaque (read: declarative) solution, GMapEZ, has existed for ages, you have to wonder what Google was thinking. The blob of HTML you get might as well be Javascript, or heck, even a Java applet, in the sense that it's opaque to all but the most inquisitive of developers.

This is becoming a bad trend.

I’ve posted the position paper I authored for the W3C/OpenAjax Workshop on Mobile Ajax. It’s all about the value of declarative application development, which I’ve discussed before.

So I had a quick look at Google Gears this morning. Unlike some, I do most definitely see value in supporting disconnected scenarios, not because I don’t see pervasive wired and wireless networks being the rule in the not-too-distant future – I do – but because I understand that networks are unreliable. That said, I do have some concerns about how Gears was put together.

My primary concern is that I’ve always felt that supporting offline use in existing browsers required more innovation of implementation rather than interface, whereas Gears is all about interface. What I mean by that is that I believe that a better, more easily deployable and usable solution would be for Mozilla itself to tweak the implementations of its HTTP stack, cache, and XMLHttpRequest object. Instead, Gears gives us new interfaces like LocalServer, which developers are supposed to use to check for valid cached representations before hitting up XHR: something XHR could very well do itself, largely transparently (I expect – haven’t considered all the backwards-compatibility issues).
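Roughly, here's the shape of it (a sketch based on my quick read of the LocalServer docs this morning, so don't hold me to the details; it assumes gears_init.js is loaded):

// Explicitly create a local resource store and capture a representation
// into it, before the application ever touches XMLHttpRequest.
var localServer = google.gears.factory.create('beta.localserver', '1.0');
var store = localServer.createStore('my-app-store');

if (!store.isCaptured('data.json')) {
  store.capture('data.json', function(url, success, captureId) {
    // From here on, requests for this URL can be served locally,
    // even when the network is unavailable.
  });
}

// Contrast with what I'd rather see: XHR (plus the browser's own cache)
// doing all of the above transparently, with no new interface at all.
var req = new XMLHttpRequest();
req.open('GET', 'data.json', true);
req.onreadystatechange = function() { /* use req.responseText as usual */ };
req.send(null);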

Now, Gears could very well be something that was deployed for its ability to enable features today, because Google didn’t want to have to wait for HTML 5 (and its equivalent of client-side storage) to be deployed. And from that perspective it’s great (though requiring a plugin is a bit of a pain). I just hope that the Gears folks are talking with Hixie and Mozilla about where to draw the line here.

This report about Google’s brand power reminds me of a discussion I had with a guy from Adobe at ETech who was pushing Apollo. I was trying to figure out why somebody would want to use it, and his response was “One word: branding”. Of course, he trotted out the expected example of Apple and iTunes, saying that iTunes was more immersive and therefore gave Apple superior branding. OK, fair enough. But obviously, as this report shows, Google didn’t require a fat client in order to build one of the world’s strongest brands.

Adobe’s ability to execute has been impressive, of course. But I can’t help but wonder if they wouldn’t be doing so much better had they simply innovated on top of the Web. I suppose that’s the easy way out, but it’s not nearly the most lucrative.

Just a quick follow-up on a previous piece: Ajaxian picked up a couple of declarative Javascript stories today.

Any move of the pendulum in this direction is a-ok by me. But to be clear, I am glad it’s a pendulum … meaning that there’ll always be a place for script (the bleeding edge), but we need to consolidate common practice periodically. This also gives us the opportunity to support the functionality natively in the browser.

How has Mobile Web 2.0 come to this?

One way that Web 2.0 companies can similarly adjust their services for mobile devices is by relying less on browser-based applications and more on small software clients that users can download onto their phones. “The browser will fade into the background,” said Wood.

The article’s not all bad though (in fairness, the main message is obvious – as Micah says, “Duh”). It also warns against “naive copying of PC services” (by which I assume he means Web sites primarily targeted at PC users – a subtle but important distinction), which is good advice. But here’s a tip for mobile folks: if you find yourself moving outside the browser, or doing so without using Web technologies (widgets), you’re not doing Web 2.0. It might be “Mobile 2.0”, but it’s not Web 2.0, and therefore not “Mobile Web 2.0”.

And this…

He used the example of Google Maps, an application initially designed for the PC. Because the application is built on Ajax, like many other Web 2.0 services, it pushes data out to the client device in order to speed up future user requests. On a mobile phone, that process drains battery life, eats up limited memory and results in potentially very high data-access charges. Google Inc. has introduced a version of the program designed for mobile phones that eliminates some of that overhead, improving the mobile user experience.

Well, guess what: using the phone drains the battery, consumes memory, and costs money. Mapping on a phone is going to use more resources than, say, doing email, which in turn will use more than checking the current time. But so what? Mapping is resource-intensive (although you could certainly do better than Google has).

Have you ever used the fat-client Google Maps Mobile referred to above? It’s not exactly the poster child for efficient use of resources – I’ve got (well, RIM had 8-) the phone bill to prove it. I’m not saying the Web version doesn’t consume more, but I would be surprised if a little optimization couldn’t bring it in line with the midlet. Besides, I’d bet that if you asked Google why they created it, resource consumption would be way down the list, and the lack of a widely deployed Ajax stack on mobile devices would be at the top … which is rapidly changing, of course.

While the unique needs of mobility should always be acknowledged, and normally accommodated, remember that therein lies a very slippery slope … the same one that WAP happily slid down years ago by internalizing the belief that mobile was so special that it needed non-interoperable mobile equivalents of every protocol from IP on up. And while there are, as always, exceptions – apps that really are much better off installable than delivered as Web apps – are you certain that yours is one, and do you realize what you’re sacrificing by going that route?

I previously pointed to the announcement of the shutdown of a version of the Google AdWords API, and commented that this really isn’t the way to go about versioning your Web 2.0 APIs.

I’ve been digging into Google Maps recently, and noticed that they’re making the same mistake:

The v=2 part of the URL http://maps.google.com/maps?file=api&v=2 refers to “Version 2” of the API. When we do a significant update to the API in the future, we will change the version number and post a notice on Google Code and the Maps API discussion group.

After a new version is released, we will try to run the old and new versions concurrently for about a month. After a month, the old version will be turned off, and code that uses the old version will no longer work.

Obviously we’re still pretty early into the whole “mashup” thing, but not so early that we shouldn’t be thinking about best practices, IMO. And best practice #1? Don’t do what Google’s doing here, which is asking all their users – mashup developers who have committed themselves to this service – to absorb the cost of Google’s inability to develop an extensible API.

I think a good rule of thumb for service providers is to assume that you’ve got a million mashups using your service, and therefore that the cost of incompatible changes is prohibitive for the mashup developers. Any other approach is sure to drive those developers to service providers who do a better job of evolving their APIs; bad juju if your business depends on attracting eyeballs.

So what constitutes a “better job”? I suggest it has something to do with declarative Javascript, but my argument needs some work so I’ll save that for another post.

I was looking through the scriptaculous drag-and-drop pages tonight, and stumbled upon this markup/script snippet:

<div id="photo1"> <img ... /> </div>
<script type="text/javascript" language="javascript">
 new Draggable('photo1',{revert:true});
</script>

Isn’t that just screaming out for declarative Javascript?

So I quickly hacked together a teeny tiny script to present the Javascript API via a declarative interface. See it in action for yourself (Firefox only). The markup for the snippet above becomes, simply:

<draggable id="photo1" revert="true">
<img src="..." />
</draggable>
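
And the teeny tiny script itself only has to walk the document and translate those attributes into calls to the existing API; something along these lines (a sketch of the approach rather than the exact script I posted, assuming prototype.js and scriptaculous.js are already loaded):

<script type="text/javascript">
  // Find every <draggable> element and hand it to the imperative
  // script.aculo.us API, mapping its attributes onto options.
  window.onload = function() {
    var nodes = document.getElementsByTagName('draggable');
    for (var i = 0; i < nodes.length; i++) {
      var node = nodes[i];
      new Draggable(node, { revert: node.getAttribute('revert') == 'true' });
    }
  };
</script>

That’s the whole trick: the element names and attributes carry the intent, and one generic interpreter maps them onto script.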

I’ve just got drag working right now. Drop will be a little trickier, because I’ve got to decide whether I want to set up something like an “onDrop” handler (pro: powerful, con: powerful), or whether I can define some canned droppable types, like a shopping cart, a file uploader, etc. Perhaps both?

So quoth Dave Winer:

After all these years, I’ve concluded that if I can’t understand it, it doesn’t have much of a chance in the market.

Well, without taking potshots at Dave, I think this is a fairly poor way to judge a technology. The interaction of a technology with its users is an incredibly complex environment, with no single metric that can indicate whether that technology will succeed or fail. But what I look for in any technology is network effects.

I haven’t talked much about RDF or the Semantic Web in my blog yet, so I’ll just say a quick word about them.

The Semantic Web is the Web with an additional architectural constraint that could be called “Explicit Data Semantics”: data (representations, in the case of the Web) is constrained to be explicit about its implied semantics. This adds a desirable property to the system: partial understanding. In a nutshell, it means you get to avoid the “schema explosion” problem, where you have a bazillion different XML schemas and understanding them is an all-or-nothing proposition (i.e. where software only understands the schemas it was programmed to understand).

RDF and the Semantic Web don’t change one important thing: software will not “know” anything more than it was programmed to know. But they do allow a single piece of software to do whatever it is that it does on any data, anywhere. For example, I could write software that searched for “people”, and it could find references to “people” in many different XML documents if RDF were used. And that generates network effects up the wazoo.