I feel dirty. I actually agree with Dave Winer.

Tim Bray suggests that Atom might nearly be finished. I read his comments carefully, and find the benefits of the possibly-final Atom to be vague, and the premise absolutely incorrect. Unlike SGML, RSS has been widely deployed, successfully, by users of all levels of technical expertise. There are many thousands of popular RSS feeds updating every day, from technology companies like Apple, Microsoft, Yahoo, Sun, and Oracle, and big publishing companies like Reuters, The Wall Street Journal, the NY Times, Newsweek, Time, the BBC, the Guardian, etc. – exactly the kinds of enterprises that his employer serves. It’s also widely used by today’s opinion leaders, the bloggers. Where SGML was beached and floundering, RSS is thriving and growing. So to conclude that RSS needs the same help that SGML did is simply not supported by the facts.

I recently advised a client who was planning to add syndication feed production and consumption capabilities to their product, to avoid the Atom format and go with the RDF-based RSS 1.0 plus the Atom protocol. That way you get self-descriptive extensibility, backwards compatibility with a massive installed base of RSS processors, and a simple protocol that integrates cleanly into the Web.
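To make the recommendation concrete, here’s what the RDF-based RSS 1.0 shape looks like, and how little code it takes to consume it. This is a minimal sketch; the feed URL, titles, and `item_titles` helper are all invented for illustration.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 1.0 (RDF) feed. All URLs and titles below are invented.
FEED = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/">
  <channel rdf:about="http://example.com/feed.rdf">
    <title>Example Feed</title>
    <link>http://example.com/</link>
    <description>An RSS 1.0 channel</description>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://example.com/item1"/>
      </rdf:Seq>
    </items>
  </channel>
  <item rdf:about="http://example.com/item1">
    <title>First post</title>
    <link>http://example.com/item1</link>
  </item>
</rdf:RDF>"""

NS = {"rss": "http://purl.org/rss/1.0/",
      "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#"}

def item_titles(feed_xml):
    # In RSS 1.0, items sit alongside the channel as children of rdf:RDF.
    root = ET.fromstring(feed_xml)
    return [item.findtext("rss:title", namespaces=NS)
            for item in root.findall("rss:item", NS)]

print(item_titles(FEED))  # prints ['First post']
```

Note that because every channel and item carries an `rdf:about` URI, the same document drops straight into any RDF toolchain – that’s the self-descriptive extensibility part of the pitch.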

David asks;

I always appreciate it when Mark mentions me, but I’m not sure what warnings I’m heeding.

I meant those I mentioned in my heads-up to the TAG, which, FWIW, discusses the (sort-of) value of an “epr” URI scheme; a potential answer to your “Others?” question about how the EPR/URI mapping could occur.

He adds;

Akin to a legal opening argument, I intend to show that an XML-based identifier system, in addition to URIs, has potential upsides. These upsides will be defined in terms of the REST thesis properties.

That’ll be interesting, but keep in mind that IMO, most of the arguments against EPRs are not REST-based arguments, since REST doesn’t mandate URIs. In fact, one could devise a perfectly RESTful architecture that used EPRs. No, the arguments against EPRs are the practical ones; how do you expect to deploy such a system when a massively deployed and successful alternative – and one that could be reused – is available? I’m not sure Web services proponents really appreciate what a MASSIVE uphill battle that is. I can’t fathom an advantage that EPRs would have over URIs that would make it worthwhile.
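For a feel of why reuse is so easy, here’s one illustrative (and deliberately simplistic) mapping from an EPR-style identifier – an address plus reference parameters – down to a plain URI. The function name and the query-string convention are my own invention, not any standard mapping.

```python
from urllib.parse import urlencode

# Hypothetical sketch: a WS-Addressing-style EPR is an address plus extra
# reference parameters; a plain URI can carry the same information directly.
def epr_to_uri(address, ref_params):
    # Fold the reference parameters into the query string -- one illustrative
    # mapping, not a standard one. Sorted for a deterministic result.
    if not ref_params:
        return address
    sep = "&" if "?" in address else "?"
    return address + sep + urlencode(sorted(ref_params.items()))

uri = epr_to_uri("http://example.com/orders", {"customerId": "42"})
print(uri)  # http://example.com/orders?customerId=42
```

The point isn’t that this particular mapping is right; it’s that once the identifier is a URI, every deployed browser, proxy, cache, and link on the Web can use it with zero new software.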

Patrick reminds me how similar the “XML is a universal solution” position is to that of SOAP as a similarly universal solution. In both cases, the premise is that by being independent of data models (for XML) and application interfaces (for SOAP), broad applicability is achieved.

Now contrast this with the Web/Semantic-Web position, that by constructing a generic data model and generic application interface, broad applicability is achieved.

Two interesting hypotheses, for sure. If only we had some objective manner to evaluate them!

The real issue with protocol independence, I believe, is the word “protocol”; that the two camps in this debate – the Web/Internet folks, and the Web services folks – each have their own, quite different, definition of the word. For Web services proponents, “protocol” means one thing and one thing only; a spec whose job it is to move bits from point A to point B over a network.

Meanwhile, for the Web/Internet crowd, “protocol” has a much broader definition. In common use, it encompasses the “bit moving” specs, but also others which do a lot more than simply move bits (more below). Some even (properly, IMO) refer to the data formats, such as HTML, as protocols, though you don’t see that too often any more.

As if this disconnect wasn’t bad enough, another – interface constraints – compounds the problem. Specifically, Internet-based efforts (including the Web, of course) always start with an interface constraint. This is simply because they’re (usually) focused on a single task – for example, email exchange, mail folder access and synchronization, file transfer – and pay little to no attention to what it means to define interoperability between those applications, since that’s tangential to their primary objective. A consequence of this approach is that there’s little value in using a common sub-layer-7 protocol (like BEEP, IIOP, or SOAP as most people use it). This has enormous benefit, the big one being that it permits the mechanics of mapping onto the network to be optimized for the semantics; consider that without GET, HTTP wouldn’t have needed a bunch of its features, in particular caching. When semantics are detached from the “wire format” (as with BEEP et al, as mentioned previously), that format is optimized for no particular application, thereby resulting in poor performance for practically all applications.
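The GET/caching point can be sketched in a few lines: an intermediary can reuse a stored response only because it knows, from the uniform interface, that GET is safe and means the same thing for every resource. The class and function names below are invented for illustration.

```python
# Sketch of why uniform GET semantics enable caching: the intermediary never
# needs to understand the application, only the method. Names are invented.
class Cache:
    def __init__(self, origin):
        self.origin = origin   # function (method, uri) -> response
        self.store = {}

    def request(self, method, uri):
        if method != "GET":
            # An unsafe method may change the resource; drop any cached copy.
            self.store.pop(uri, None)
            return self.origin(method, uri)
        if uri not in self.store:
            self.store[uri] = self.origin(method, uri)
        return self.store[uri]

hits = {"count": 0}
def origin(method, uri):
    hits["count"] += 1
    return f"{method} {uri} -> response #{hits['count']}"

cache = Cache(origin)
print(cache.request("GET", "/page"))   # fetched from the origin
print(cache.request("GET", "/page"))   # served from cache; origin untouched
```

With a BEEP- or IIOP-style generic envelope, the intermediary can’t know whether any given exchange is safe, so this optimization is off the table for everything.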

I’ve drawn a diagram that I hope helps explain these two different views;

Hopefully you’ll at least see the fundamentally different visions of the stack in play here, and perhaps better appreciate my concern about “protocol independence”. Protocols play a much more important role in the stack on the right than they do in the one on the left!

Steve Maine reports on Don Box’s latest presentation that asks a very good question; why? He summarizes;

WS-Addressing is something that really should have been included in the original SOAP specification. However, when SOAP was written nobody was really thinking about transports other than HTTP. As a result, pure SOAP relies on the characteristics of the HTTP transport to convey addressing information. For example, a pure SOAP message does not contain any information about the address to which it was sent – that information is carried by the transport and is lost once the message is pulled off the wire.

Paraphrase; Doctor, doctor, it hurts when I lose information that I pull off the wire! Sigh.

A SOAP envelope is not a SOAP message, and pretending otherwise turns a perfectly good document wrapper into a perfectly crappy application protocol.
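To see the layering point concretely: in plain HTTP, the destination travels in the protocol – the request line and Host header – so it was never “in the message” to be lost. The envelope and helper below are schematic; the host, path, and body are invented.

```python
# Illustration: the address lives in the transfer protocol, not the payload.
# A schematic SOAP envelope -- note it carries no destination at all.
ENVELOPE = """<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body><getQuote>IBM</getQuote></soap:Body>
</soap:Envelope>"""

def http_post(host, path, body):
    # HTTP puts the target in the request line and Host header.
    return (f"POST {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Type: application/soap+xml\r\n\r\n"
            f"{body}")

request = http_post("example.com", "/quotes", ENVELOPE)
print(request.splitlines()[0])   # POST /quotes HTTP/1.1
print("/quotes" in ENVELOPE)     # False: the envelope alone has no address
```

Treating the envelope as the whole message, and then bolting the address back into it with WS-Addressing headers, is exactly the re-layering complained about above.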

Sorry Don, you were wrong then, and you’re wrong now.

All this, just because the layering got totally screwed up by the broken requirement of protocol independence? Egads. Time to check back with those first principles, methinks.

The SOAP response MEP, though not without problems, at least gets the layering right. And with this same layering, the best that could likely be done for a GET binding – even though it may have had deployment problems of its own – was described nearly three years ago. Too bad it was rejected. If only we knew then what we know now! Oh, wait, … 8-)

Ouch! 8-) “As far as the web is concerned, the WS-* work is about sprinkling XML pixie dust on a failing idea.”
(link) [del.icio.us/distobj]
s/transport/transfer darn it. 8-) A fine summary of the gnarly issues one will face doing XML over HTTP
(link) [del.icio.us/distobj]

He writes;

I don’t think people are embracing REST services because of architectural purity (the rest of the Web isn’t pure REST, so I don’t know why this would be). Rather, they embrace it because it’s easier in a lot of cases. There is no reason that SOAP couldn’t be the same, except that toolkits hide raw XML and you have to know how to get it.

To the first point, yes, certainly, they embrace it because it’s simpler and easier. Absolutely. As we’ve seen, they often screw up, but even then it’s very often preferable to SOA.

To the second point, there actually is a critical reason (in addition to the “hide the XML” problem) why SOA/WS cannot be as simple as REST; the architectural constraints which induce the bulk of the simplicity in REST (uniform interface, self-description) are eschewed by SOA/WS. Isn’t it ironic that their raison d’être – service-specific interfaces – is the reason they will fail to see widespread deployment? I think so. 8-)
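The uniform-interface point reduces to this: one generic client works against every resource, while service-specific interfaces need a new stub per service. A toy sketch, with all resource names and state strings invented:

```python
# Sketch of the uniform interface: every resource exposes the same small set
# of methods, so one generic client handles them all. Names are invented.
class Resource:
    def __init__(self, state=""):
        self.state = state
    def GET(self):
        return self.state
    def PUT(self, new_state):
        self.state = new_state

def generic_client(resource):
    # Knows nothing about orders or quotes; only GET.
    return resource.GET()

order = Resource("order #1: pending")
stock_quote = Resource("IBM: 98.5")
print(generic_client(order))        # order #1: pending
print(generic_client(stock_quote))  # IBM: 98.5

# A service-specific interface (getOrderStatus, getQuote, ...) would instead
# require a freshly generated stub for each new service the client meets.
```

That’s where REST’s simplicity comes from: the intelligence is invested once, in the shared interface, rather than re-invested per service.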

A gem from Mark, discussing the sad state of affairs with Web services architecture. Of course, he manages to do it without sounding like he’s criticising. How’s he do that? Gotta get me some pointers. 8-)

If Web services is a bag of specifications that only constrain you by accident (“it must be XML,” “it’s message-based,” “the basic unit of interaction is the ‘operation’”) then Web services has no architecture, at least in this sense of software architecture*; it’s just flinging messages around.

Pretty much, yep. Didn’t I point that out already? 8-)

But as a meta point, isn’t it nice how clear things become when using the language of software architecture to examine, well, software architecture? Why has it taken so long to get to this point? And why was it being defended so fanatically before anybody even bothered to study the architectural suitability of this newfangled architecture, especially when an existing loosely coupled, document-oriented architecture was already available? There’ll be lots of time to answer those questions in the coming years, but it’s extremely disappointing to me that we weren’t able to ask them in time to avoid learning a lesson the hard way.