I used the subject line of this blog entry as my signature 11 years ago when I saw the writing on the wall at Nortel, and wanted to announce my interest in looking elsewhere for a job. Ok, perhaps it wasn’t the optimal way to communicate that information to the world, but it was cute, damnit. Skip ahead to the current day, and I find myself the ex-co-founder of a mobile startup which wasn’t able to raise the money it needed, and a consultant who, frankly, got bored answering all the REST 101 questions that came my way. Don’t get me wrong, I love that the Web and REST are finally getting their due, and I was honoured to have folks from all over the world come to me with their questions, but unfortunately those questions were rarely a challenge. What I want – nay, what I need – is to work on products or projects for a while where I have the opportunity to be creative once again. If you’re reading this, you probably know what I’m good at and what I like. For the record, that’s the Web, mobile, open source, and projects/products for the public good. If you know of any opportunities that fall somewhere in that space, please drop me a line. Thanks.

David Peterson defends SimpleDB’s use of HTTP GET for mutation actions by appealing to the HTTP spec itself, specifically the section on pipelining, where it says;

Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2). Otherwise, a premature termination of the transport connection could lead to indeterminate results. A client wishing to send a non-idempotent request SHOULD wait to send that request until it has received the response status for the previous request.

His argument is that because POST isn’t idempotent, it can’t be pipelined, and therefore GET had to be used instead. There are two fatal flaws with this argument, however. The first is that PUT is idempotent and is also a mutator, so you can pipeline it no problem (modulo the concern about sequences of requests). The second is that if both the client and server understand that an “Action” parameter specifies the actual action to be taken (overriding the HTTP method), then when Action specifies a non-idempotent action, you still run into the same indeterminism problem the HTTP spec warns about: what matters is the effective method of the message, not just the HTTP method named on the request line.
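
To make that second flaw concrete, here’s a sketch of the kind of request at issue – the action and parameter names are invented for illustration, not taken from the SimpleDB documentation. The request line says GET, but the effective method is a non-idempotent append, so pipelining it is exactly as unsafe as pipelining a POST;

GET /?Action=AppendValue&Domain=mydomain&Item=item123&Value=42 HTTP/1.1
Host: sdb.amazonaws.com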

It’s also interesting that the example Dare uses is an Action value of “PutAttributes”, which is presumably idempotent anyway. Doh!

Nope, Amazon blew it, again. I’ve offered them my services a couple times already, but they’ve not taken me up on my offer yet. They really, really(!) should before they publish another service.

Via Stefan, a proposal from the WSO2 gang for an approach to decentralizing media types and removing the requirement for the registration process.

Been there, tried that. I used to think that was a good idea, but no longer do.

Problem one: an abundance of media types is a bad thing for pretty much the same reasons that an abundance of application interfaces is a bad thing; the more that is different, the more difficult interoperability becomes. We need fewer, more general media types, not more specific ones.

Problem two, specific to their solution to this “problem” (which is “application/data-format;uri=http://mediatypes.example.com/foo/bar”): media type parameters don’t affect the semantics of the payload, and deployed intermediaries dispatch on the type and subtype alone, so this solution requires changing the Web to incorporate parameters in that way. Consider a firewall configured to block, for example, image/svg+xml content: if SVG were also assigned its own “media type URI” and delivered as application/data-format, that firewall wouldn’t be able to block it. Oops.
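
To see why, consider a sketch of a typical content filter – hypothetical code, not any particular firewall’s implementation – which keys off the type/subtype pair and ignores parameters entirely;

// A naive content filter: dispatches on type/subtype only, as
// intermediaries on the Web generally do.
function isBlocked(contentType: string, blocklist: Set<string>): boolean {
  // Strip parameters: "image/svg+xml; charset=utf-8" -> "image/svg+xml"
  const essence = contentType.split(';')[0].trim().toLowerCase();
  return blocklist.has(essence);
}

const blocklist = new Set(['image/svg+xml']);
isBlocked('image/svg+xml; charset=utf-8', blocklist);  // true: blocked
isBlocked('application/data-format;uri=http://mediatypes.example.com/svg',
          blocklist);                                  // false: SVG slips through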

Problem three (which mnot convinced me of): having your media type reviewed by the capable volunteers on ietf-types is a good thing. Sure, you could still do that while using a decentralized token/process, but I consider having the motivation for review built in to the mechanism a feature, not a bug, especially given problem one above.

Update; here’s an older position of mine.

Mark Little responds to an interesting post by Bill Burke about compensation-based transactions. I don’t really have any direct response to the gist of that discussion, but I wanted to highlight a couple of Mark’s arguments, which I consider to be probably the top two arguments made by those who feel there’s value in both the Web and Web services (the “fence sitters”, as Mark recalls me calling them 8-).

First up, the belief that the Web has nothing to say about reliability, transactions, etc… Mark writes;

Yes, we have interoperability on the WWW (ignoring the differences in HTML syntax and browsers). But we do not have interoperability for transactions, reliable messaging, workflow etc. That’s not to say we can’t do it: as I said before, we did manage to do REST+transactions in HP but it was in a small-scale deployment involving only a couple of partners. There is no technical impediment to doing this: it’s entirely political. It can be done, I just don’t see it ever being done. Until it happens, REST/HTTP cannot compete with the kinds of heterogeneous out-of-the-box interoperability that we have demonstrated with WS-*.

I’ve talked about this a lot, most recently in my position paper to the W3C Workshop on Enterprise Services. The gist of the argument is that the Web addresses all of those needs, just in a way you might not recognize, because it has to address them within the confines of architectural constraints that Web services folks aren’t used to. Again, that’s not to say that every possible one of your needs can be met out of the box today, only that far more of them can be than you might believe.

Mark also uses the very common argument that because interoperability requires agreement on data for both the Web and Web services, there’s no significant difference between them (I hope that summarizes his point);

So just because I decide to use REST and HTTP doesn’t mean I get instant portability and interoperability. Yes, I get interoperability at the low level, but it says nothing about interoperability at the payload.

I can’t quickly find any past blog entries that touch on this point (though I know they’re there), but I find this argument the most confusing. I suspect it has to do with what I perceive to be a disconnect between Internet and intranet protocol stacks, but I can’t say for sure.

What Mark calls the “low level” isn’t the low level at all. Assuming he means HTTP, the agreement you get by using it is more (higher-level) agreement than you get by just using SOAP (or XML-RPC or IIOP or BEEP or …). That’s because you’re agreeing on the methods in addition to an envelope (not to mention many other features).
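
A side-by-side sketch of what I mean, with URLs and operation names invented for illustration. Any intermediary can see that the first request is a safe, idempotent, cacheable retrieval; in the second, the operation and its semantics are buried inside the envelope, visible only to parties with out-of-band knowledge of the interface;

GET /orders/42 HTTP/1.1
Host: example.com

POST /orderService HTTP/1.1
Host: example.com
Content-Type: text/xml; charset=utf-8

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getOrder xmlns="urn:example:orders">
      <id>42</id>
    </getOrder>
  </soap:Body>
</soap:Envelope>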

BOSH is a specification that defines how XMPP can be used over HTTP. It’s obviously written by people who know what they’re talking about, because they’ve got good requirements, and get into great detail about the design choices they’ve made. Unfortunately, BOSH makes the one big mistake that so many others make; treating HTTP as a transport protocol. To wit;

POST /webclient HTTP/1.1
Host: httpcm.jabber.org
Accept-Encoding: gzip, deflate
Content-Type: text/xml; charset=utf-8
Content-Length: 188

<body rid='1249243562'
      sid='SomeSID'
      xmlns='http://jabber.org/protocol/httpbind'>
  <message to='contact@example.com'
           xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>

  <message to='friend@example.com'
           xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>
</body>

(you might also note that all of their example requests are POSTs to /webclient – a warning sign if ever there was one)

The intent of that message is to send two messages, one to each of the recipients at example.com. If we were treating HTTP as an application protocol, that would be done like this;

POST mailto:contact@example.com HTTP/1.1
Host: httpcm.jabber.org
Accept-Encoding: gzip, deflate
Content-Type: text/xml; charset=utf-8
Content-Length: nnn

<body rid='1249243562'
      sid='SomeSID'
      xmlns='http://jabber.org/protocol/httpbind'>
  <message xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>
</body>

POST mailto:friend@example.com HTTP/1.1
Host: httpcm.jabber.org
Accept-Encoding: gzip, deflate
Content-Type: text/xml; charset=utf-8
Content-Length: mmm

<body rid='1249243562'
      sid='SomeSID'
      xmlns='http://jabber.org/protocol/httpbind'>
  <message xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>
</body>

Alternatively, if you don’t like proxies, the mailto URIs could be swapped out for an http URI specific to each mail address. But the point is that HTTP semantics should be reused by recasting XMPP onto them, rather than the current approach of grafting XMPP on top of HTTP (read: obliterating it). Don’t like two messages? Try pipelining them. Can’t pipeline? Does some other feature not map well onto HTTP in this way? Then it wasn’t meant to be.
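
For the record, the http-URI variant of the first request might look something like this (the URI structure here is invented for illustration);

POST /contacts/contact@example.com HTTP/1.1
Host: httpcm.jabber.org
Accept-Encoding: gzip, deflate
Content-Type: text/xml; charset=utf-8
Content-Length: nnn

<body rid='1249243562'
      sid='SomeSID'
      xmlns='http://jabber.org/protocol/httpbind'>
  <message xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>
</body>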

We use HTTP (and the Web) because we want to be part of the Web; to participate in the network effects, to make information freely available (like, say, my presence status), etc. We don’t do it because we need a way to get past firewalls. Good admins will avoid deploying software behind their firewall which subverts the intent of that firewall.

So I had a quick look at Google Gears this morning. Unlike some, I do most definitely see value in supporting disconnected scenarios, not because I don’t see pervasive wired and wireless networks being the rule in the not-too-distant future – I do – but because I understand that networks are unreliable. That said, I do have some concerns about how Gears was put together.

My primary concern is that I’ve always felt that supporting offline use in existing browsers required innovation in implementation rather than in interface, whereas Gears is all about interface. What I mean is that I believe a better, more easily deployable and usable solution would be for Mozilla itself to tweak the implementations of its HTTP stack, cache, and XMLHttpRequest object. Instead, Gears gives us new interfaces like LocalServer, which developers are supposed to use to check for valid cached representations before hitting up XHR: something XHR could very well do itself, largely transparently (I expect – I haven’t considered all the backwards-compatibility issues).
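
Roughly, here’s the difference, as a sketch – the Gears names below are written from memory and should be treated as illustrative rather than as checked against the API docs;

declare const google: any;  // the Gears-injected global factory (assumption)
const url = '/app/data.json';  // hypothetical resource

// What Gears asks application code to do explicitly:
const localServer = google.gears.factory.create('beta.localserver');
if (localServer.canServeLocally(url)) {
  // serve the cached representation via the LocalServer
} else {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.send();
}

// versus what a tweaked browser stack could do transparently:
const xhr2 = new XMLHttpRequest();
xhr2.open('GET', url);  // the browser consults its own cache, online or not
xhr2.send();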

Now, Gears could very well be something that was deployed for its ability to enable features today, because Google didn’t want to have to wait for HTML 5 (and its equivalent of client-side storage) to be deployed. And from that perspective it’s great (though requiring a plugin is a bit of a pain). I just hope that the Gears folks are talking with Hixie and Mozilla about where to draw the line here.

I’ve spent some time over the past couple of months helping Microsoft with RESTful issues for two (soon to be three, I hope) different groups there. One of those is the WCF team, and Omri has just reported on some of it. I’m not sure how much of my input (if any) made it into that release, or if it’s all set for the next release, but there you have it; WCF does REST.

It was quite enjoyable to sit around the table (conference room and sushi table alike!) with Don and Steve in the context of trying to answer the question “How can Microsoft best support RESTful service developers?”, and not have to dwell much on the SOA/WS-vs-REST thing. Lots of love all round. 8-)

I’ll point to the other projects as soon as I know they’ve gone public.

Update; if it wasn’t clear, this was a consulting arrangement through my company, Coactus.

Update 2; the second project has been announced. Here’s more; doesn’t that XML just scream “Yaron”? 8-)

Dave Orchard on versioning;

The fundamental problem with a version # in a document is that it doesn’t provide for a given document to be valid under more than one version. What we really need is to be able to indicate a “space of versions” that a given document is valid under, whether that’s a list or regexp or whatever.

Amen. You know, just like a media type!
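
Which is to say, a media type names a whole space of documents that can evolve compatibly over time, and content negotiation already lets a client advertise the spaces it understands – hostname and types below invented for illustration;

GET /reports/q3 HTTP/1.1
Host: example.com
Accept: application/vnd.example.report+xml, application/xml;q=0.5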