Via Stefan, a proposal from the WSO2 gang for an approach to decentralizing media types and removing the requirement for the registration process.

Been there, tried that. I used to think that was a good idea, but no longer do.

Problem one: an abundance of media types is a bad thing for pretty much the same reasons that an abundance of application interfaces is a bad thing; the more that is different, the more difficult interoperability becomes. We need fewer, more general media types, not more specific ones.

Problem two, specific to their solution for this “problem” (which is “application/data-format;uri=http://mediatypes.example.com/foo/bar”): media type parameters don’t affect the semantics of the payload. This solution requires changing the Web to incorporate parameters in this way. Consider: if an existing firewall were configured to block, for example, image/svg+xml content, and SVG were also assigned its own “media type URI” and delivered using application/data-format, that firewall wouldn’t be able to block it. Oops.
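To make the firewall problem concrete, here is a minimal sketch of a hypothetical content filter. The blocklist and function names are made up for illustration; the point is that a filter keyed on the media type proper never sees a type that has been smuggled into a parameter.

```python
# Hypothetical media-type filter: blocks on type/subtype only, as real
# filters do, since parameters don't define payload semantics.
from email.message import Message

def media_type(content_type_header):
    """Return just the type/subtype, ignoring any parameters."""
    msg = Message()
    msg["Content-Type"] = content_type_header
    return msg.get_content_type()

BLOCKED = {"image/svg+xml"}

def firewall_allows(content_type_header):
    return media_type(content_type_header) not in BLOCKED

# Plain SVG is blocked...
assert not firewall_allows("image/svg+xml; charset=utf-8")
# ...but the same payload re-labelled via a "media type URI" parameter
# slips straight through.
assert firewall_allows("application/data-format; uri=http://mediatypes.example.com/svg")
```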

Problem three (which mnot convinced me of): having your media type reviewed by the capable volunteers on ietf-types is a good thing. Sure, you could still do that while using a decentralized token/process, but I consider having motivation for review built into the mechanism a feature, not a bug, especially given problem one above.

Update; here’s an older position of mine.

I have to say, I’m with James in his response to Sam’s long bets;

To say, as Sam and Tim both do, that REST is important is like saying the fan in my laptop is “important”. There’s really nothing to discuss about it. RESTful services are fundamentally critical to the continued evolution of the Web. It just is. You just need to do things in a RESTful way. Period.

REST is just a starting point. What’s more important going forward is the framework which permits us to reason about REST extensions and other changes to the Web (or portions thereof).

BOSH is a specification that defines how XMPP can be used over HTTP. It’s obviously written by people who know what they’re talking about, because they’ve got good requirements, and get into great detail about the design choices they’ve made. Unfortunately, BOSH makes the one big mistake that so many others make; treating HTTP as a transport protocol. To wit;

POST /webclient HTTP/1.1
Host: httpcm.jabber.org
Accept-Encoding: gzip, deflate
Content-Type: text/xml; charset=utf-8
Content-Length: 188

<body rid='1249243562'
      sid='SomeSID'
      xmlns='http://jabber.org/protocol/httpbind'>
  <message to='contact@example.com'
           xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>

  <message to='friend@example.com'
           xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>
</body>

(you might also note that all of their example requests are POSTs to /webclient – a warning sign if ever there was one)

The intent of that message is to send two messages, one to each of the recipients at example.com. If we were treating HTTP as an application protocol, that would be done like this;

POST mailto:contact@example.com HTTP/1.1
Host: httpcm.jabber.org
Accept-Encoding: gzip, deflate
Content-Type: text/xml; charset=utf-8
Content-Length: nnn

<body rid='1249243562'
      sid='SomeSID'
      xmlns='http://jabber.org/protocol/httpbind'>
  <message xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>
</body>

POST mailto:friend@example.com HTTP/1.1
Host: httpcm.jabber.org
Accept-Encoding: gzip, deflate
Content-Type: text/xml; charset=utf-8
Content-Length: mmm

<body rid='1249243562'
      sid='SomeSID'
      xmlns='http://jabber.org/protocol/httpbind'>
  <message xmlns='jabber:client'>
    <body>I said "Hi!"</body>
  </message>
</body>

Alternately, if you don’t like proxies, the mailto URIs could be swapped out for an http URI specific to each mail address. But the point is that HTTP semantics be reused by recasting XMPP to them, rather than the current approach of grafting XMPP on top (read: obliterating). Don’t like two messages? Try pipelining them. Can’t pipeline? Does some other feature not map well onto HTTP in this way? Then it wasn’t meant to be.
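The recasting above can be sketched in a few lines: instead of tunnelling many <message/> stanzas through one POST to /webclient, emit one HTTP request per recipient, with the recipient identified by the request URI. The stanza shape and mailto: targets follow the example; the function name is illustrative.

```python
# Sketch: build one (request-target, body) pair per recipient, so HTTP
# methods and URIs carry the application semantics instead of the payload.
from xml.sax.saxutils import escape

def requests_for(rid, sid, messages):
    """messages maps recipient address -> message text."""
    out = []
    for recipient, text in messages.items():
        body = (
            f"<body rid='{rid}' sid='{sid}' "
            f"xmlns='http://jabber.org/protocol/httpbind'>"
            f"<message xmlns='jabber:client'>"
            f"<body>{escape(text)}</body>"
            f"</message>"
            f"</body>"
        )
        out.append((f"mailto:{recipient}", body))
    return out

pairs = requests_for("1249243562", "SomeSID",
                     {"contact@example.com": 'I said "Hi!"',
                      "friend@example.com": 'I said "Hi!"'})
assert [target for target, _ in pairs] == ["mailto:contact@example.com",
                                           "mailto:friend@example.com"]
```

Each pair would then be sent as its own POST (pipelined, if you like), leaving the recipient visible to every HTTP intermediary along the way.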

We use HTTP (and the Web) because we want to be part of the Web; participate in the network effects, make information freely available (like, say, my presence status), etc. We don’t do it because we need a way to get past firewalls. Good admins will avoid deploying software behind their firewall which subverts the intent of the firewall.

It’s nice to see Pat Helland join the REST/SOA conversation.

His first post is in a rather quizzical, loose style that I hadn’t seen before, but that’s ok, I think I get what he’s talking about. The point seems to be summed up here;

Is the purchase-order (or even the line-item) a noun or a verb? I would argue it is syntactically a noun but semantically a verb.

Hmm. I’m quite certain it’s pure noun. If it were a verb, then it would only have a single purpose – to order something – and wouldn’t be able to be archived, printed, translated, etc… which it clearly can. Obviously a message can only have one authoritative application-level verb, and if you’re using HTTP, then the request method is it.
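In code form, the point is that the purchase order is the same noun (the same bytes) under every verb; only the request method says what to do with it. The toy handler and order format below are made up for illustration.

```python
# Sketch: one document, many verbs. The HTTP method, not the payload,
# is the authoritative application-level verb.
PURCHASE_ORDER = b"<po><line-item sku='123' qty='2'/></po>"

def handle(method, resource, body=None):
    if method == "GET":        # read the noun: archive it, print it, translate it...
        return resource["po"].decode()
    if method == "POST":       # the verb "order this" lives here, in the method
        resource["po"] = body
        return "order placed"
    return "405 Method Not Allowed"

store = {"po": PURCHASE_ORDER}
assert handle("GET", store) == PURCHASE_ORDER.decode()
assert handle("POST", store, PURCHASE_ORDER) == "order placed"
```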

I’m kinda busy with a bunch of things on my plate, but felt I had to chime in on the latest calls for a RESTful description language ala WADL.

Aristotle’s response struck a chord;

[…] there isn’t much to describe; there aren’t any methods or signatures thereof to document, since access to resources is uniform and governed by the verbs defined in RFC 2616 (in the case of HTTP, anyway)

Right-o, though it might be helpful to rephrase that last bit as “since access to resources is through the *same* uniform interface”, because that’s the whole point of REST: all services expose the same interface. This is what provides the majority of its loose coupling, and is the principal differentiator from RPC.

So if you’re writing (or generating) contract/interface-level code which can’t late-bind to all resources, everywhere, you’re not doing REST (10 kudos to whoever identifies the specific constraint being violated).
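What late binding looks like in practice, as a minimal sketch: one generic client function that can be pointed at any http URI, with no per-service stubs generated anywhere. (stdlib urllib only; the example URI is illustrative, and the request is built but not sent so nothing here depends on a live server.)

```python
# Sketch: a client with no service-specific interface code at all.
# Any URI works, because every resource exposes the same interface.
import urllib.request

def build_get(uri):
    """Prepare a GET for any resource; dispatch later on the response's
    self-descriptive metadata (urlopen(req) would perform the transfer)."""
    return urllib.request.Request(uri, method="GET",
                                  headers={"Accept": "*/*"})

req = build_get("http://example.com/anything/at/all")
assert req.get_method() == "GET"
assert req.full_url == "http://example.com/anything/at/all"
```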

Cut the cord already! RPC is dead. You’re not in Kansas anymore.

So that whole “contract thang” has popped up again in the echo chamber. I’m going to pick on Steve Jones a little (more 8-), specifically something he says in his latest piece;

Where I do disagree though is whether this is a good or a bad thing to have these camps. Now I’m clearly biased as I’m on the contract side […]

Hold it! Let’s make sure we’re having the right conversation here. It’s not “pro contract” vs. “anti contract”, it’s simply “many contracts” vs “one contract”.

Resume!

I’m absolutely thrilled that Tim has finally grokked REST. AFAIK, he’s the first die-hard Web services type with a strong public persona to realize REST’s (and the Web’s, of course) benefits over WS/SOA/RPC. Bravo, Tim!

I’ve long thought that what was needed in this discussion was new perspectives on the relationship between REST and WS/RPC/etc… that would permit the message to reach more people. Tim’s ably doing his part along those lines with his followup posts. I would never have thought to describe things this way.

So, who’s next?

Don’s latest on GET is interesting (especially “That’s certainly where I’m investing”!), but I really liked this bit of Tim Ewald‘s comment;

The solution is a minimal footprint interaction, like a coarse-grained document transfer via a pin-hole, à la a BizTalk port. Exposing all your data via GET so anyone can read anything they want (modulo security concerns) and then providing controlled writes through pin-hole ports that consume documents and encapsulate the actual update process.

Bingo! Give the man a cigar.

That said, once you’ve done that for a while you will, in all likelihood (I’ve been there), find the need for the client to have expectations about server-side state changes beyond those offered by POST; PUT and DELETE are two very useful expectations.
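A minimal sketch of the expectations PUT and DELETE buy you beyond POST: both are idempotent, so a client that times out can simply retry without fear of doubling an effect. The in-memory resource map stands in for server-side state.

```python
# Sketch: PUT and DELETE give the client firm expectations about
# server-side state, and both are safely repeatable (idempotent).
def put(store, uri, representation):
    store[uri] = representation   # state at uri is replaced; repeatable

def delete(store, uri):
    store.pop(uri, None)          # uri is gone afterwards either way; repeatable

store = {}
put(store, "/status/mark", b"online")
put(store, "/status/mark", b"online")    # retry after a timeout: same state
assert store["/status/mark"] == b"online"
delete(store, "/status/mark")
delete(store, "/status/mark")            # retry: still gone, no error
assert "/status/mark" not in store
```

POST carries no such guarantee, which is exactly why it is the wrong pin-hole for operations where the client needs to know what state it left behind.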

I’ve posted some thoughts on the recent W3C Enterprise Services Workshop.

It sounds like Sanjiva’s as baffled as I am how we can each hold the positions we do about what seems like a completely trivial point. He responds to my previous piece on the Web being for humans, and writes;

The problem with GET is exactly what Mark points to as its feature: GET is reusable for anything. What does that mean? That means the recipient must be able to deal with anything that comes down the pipe!

Well, if the client receives something it doesn’t understand, it just needs to fail gracefully. Even for the case of a browser, most have a fallback for unrecognized content, permitting the user to save it to disk.

I don’t think this is any different than a Web service client invoking getStockQuote, as it would need code to handle stock quotes it didn’t understand (lest it require upgrading once new forms of quotes are developed in the future).

There’s nothing whatsoever wrong with GET – it’s a great way to get some information from one place to another. However, if you want to use it for application integration then someone needs to write down what kind of data is going to come back when I do a GET against a URI.

Sure, the publisher has to write it down somewhere. The question though is – because this relates to WSDL’s utility – when the client needs to know it.

I’ve written software to support international-scale data distribution networks, and never during the development of that software did I need to know, a priori, which URIs returned what kind of data. I just used the fact that HTTP messages are largely self-descriptive, and so looked at the Content-Type header on responses, along with some error handling code for dealing with unrecognized media types.
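The dispatch described above can be sketched in a few lines: the client decides what to do from the Content-Type of each response, rather than from a priori knowledge of each URI, and falls back gracefully (say, save to disk, as a browser would) on anything unrecognized. The handler table is illustrative.

```python
# Sketch: self-descriptive messages in action. Dispatch on the response's
# Content-Type, with a graceful fallback for unrecognized media types.
def handle_response(content_type, body):
    media_type = content_type.split(";", 1)[0].strip().lower()
    handlers = {
        "text/xml": lambda b: "parsed as XML",
        "text/html": lambda b: "rendered",
    }
    handler = handlers.get(media_type)
    if handler is None:
        return "saved to disk"        # the graceful fallback
    return handler(body)

assert handle_response("text/xml; charset=utf-8", b"<quote/>") == "parsed as XML"
assert handle_response("application/x-unknown", b"...") == "saved to disk"
```

No URI appears anywhere in that code, which is the point: nothing had to be known a priori about which URIs return what.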

Anyhow, this really gets away from the point of this discussion which was just to point out that moving from getRealtimeStockQuote to getStockQuote to GET doesn’t change anything about whether the returned information (which can be identical in all cases) needs a human in the loop or not.