It sounds like Sanjiva’s as baffled as I am about how we can each hold the positions we do on what seems like a completely trivial point. He responds to my previous piece on the Web being for humans, and writes:

The problem with GET is exactly what Mark points to as its feature: GET is reusable for anything. What does that mean? That means the recipient must be able to deal with anything that comes down the pipe!

Well, if the client receives something it doesn’t understand, it just needs to fail gracefully. Even in the case of a browser, most have a fallback for unrecognized content, permitting the user to save it to disk.
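To make that concrete, here’s a minimal Python sketch of the kind of fallback I have in mind; the media types, the fallback filename, and the URI handling are all illustrative, not taken from any particular client:

```python
import urllib.request

def fetch_or_save(uri):
    """GET a resource; if we don't recognize its media type, fall back to
    saving the raw bytes (much like a browser's save-to-disk prompt)."""
    with urllib.request.urlopen(uri) as resp:
        media_type = resp.headers.get_content_type()
        charset = resp.headers.get_content_charset() or "utf-8"
        body = resp.read()

    if media_type in ("text/plain", "text/html"):
        return body.decode(charset, errors="replace")

    # Graceful failure: keep the payload around instead of blowing up on it.
    with open("unrecognized-payload.bin", "wb") as f:
        f.write(body)
    return None
```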

I don’t think this is any different than a Web service client invoking getStockQuote, as it would need code to handle stock quotes it didn’t understand (lest it require upgrading once new forms of quotes are developed in the future).

There’s nothing whatsoever wrong with GET – it’s a great way to get some information from one place to another. However, if you want to use it for application integration, then someone needs to write down what kind of data is going to come back when I do a GET against a URI.

Sure, the publisher has to write it down somewhere. The question, though – because this relates to WSDL’s utility – is when the client needs to know it.

I’ve written software to support international-scale data distribution networks, and never during the development of that software did I need to know, a priori, which URIs returned what kind of data. I just used the fact that HTTP messages are largely self-descriptive, and so looked at the Content-Type header on responses, along with some error handling code for dealing with unrecognized media types.
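By way of illustration only (the handlers and URIs below are made up, not from that software), the core of such a client can be as simple as a dispatch table keyed by media type:

```python
import json
import urllib.request

# Handlers keyed by media type; note there's no URI-to-type mapping anywhere.
HANDLERS = {
    "application/json": lambda body: json.loads(body),
    "text/plain": lambda body: body.decode("utf-8", errors="replace"),
}

def process(uri):
    with urllib.request.urlopen(uri) as resp:
        media_type = resp.headers.get_content_type()  # the self-descriptive bit
        body = resp.read()
    handler = HANDLERS.get(media_type)
    if handler is None:
        # The error-handling path for unrecognized media types.
        raise ValueError(f"unrecognized media type {media_type} from {uri}")
    return handler(body)

# The same code runs against any URI, whatever it happens to return.
for uri in ("http://example.org/quotes/GOOG", "http://example.org/notes.txt"):
    try:
        print(process(uri))
    except ValueError as err:
        print("skipped:", err)
```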

Anyhow, this really gets away from the point of this discussion, which was just to point out that moving from getRealtimeStockQuote to getStockQuote to GET doesn’t change anything about whether the returned information (which can be identical in all cases) needs a human in the loop or not.

I wanted to expand a little on my dismissal of Sanjiva’s argument that “The Web is necessarily human centric”.

Sanjiva, in his support for – and authorship of – WSDL, presumably wants to permit developers to publish their own service-specific interfaces, such as ones supporting methods like the canonical “getStockQuote”, or even “getRealtimeStockQuote”. And I’m certain he’d claim that these are very much machine-facing interfaces, since that’s supposed to be the whole point of Web services. So far so good?

So why is a system built around GET suddenly not machine-facing? I’ve said before that the one thing that most distinguishes SOA and REST is the uniform interface of the latter; it says, in part, that the more general the operation, the more reusable the interface. In other words, using the example above, getStockQuote is more reusable than getRealtimeStockQuote. Moreover, GET is more reusable than getStockQuote.
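A toy contrast (a sketch with made-up URIs) shows what that reusability means in code: the service-specific operation is usable for exactly one thing, while plain GET is usable against any resource that has a URI:

```python
import urllib.request

# Service-specific style: one stub per operation, each good for one thing.
def get_realtime_stock_quote(symbol):
    return urllib.request.urlopen(
        f"http://example.org/realtime-quotes/{symbol}").read()

# Uniform-interface style: one operation, reusable against any resource;
# the client code doesn't change whether it's quotes, weather, or documents.
def get(uri):
    return urllib.request.urlopen(uri).read()

quote = get("http://example.org/quotes/GOOG")
weather = get("http://example.org/weather/ottawa")
```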

By that logic – that more general means more human-targeted – one can only conclude that the methods best suited to machines will be the most specific ones. So never mind getRealtimeStockQuote, we’d need getRealtimeStockQuoteForGOOGonNASDAQ.

Of course, that’s silly. So is the argument that the Web is only for humans. I hope (hah! 8-) that this finally puts that argument to rest (pun intended).

I tried to post this comment to Jorgen’s latest blog entry:

Perhaps if SOA were defined, we’d be able to tell you whether there’s a problem that had a decent SOA-based solution! 8-O

All I know is that REST is an *improvement* upon SOA for multi-agency, data-oriented systems. The big win is that REST separates interface from implementation far better than SOA does, and is therefore far more loosely coupled.

Unfortunately, I received this error in response:

Your comment was denied for questionable content.

8-)