An interesting post from Dave. A few comments…

I’ve been saying for a while now that I think it’s a shame that SOAP 1.2 didn’t define a general SOAP to HTTP binding that used HTTP as a transfer protocol, for the previous 2 reasons.

It does, Dave. The default binding is a transfer binding; I made sure of that. I think you’re confusing how people use it with how it’s defined. Web services proponents generally think that a SOAP envelope is a SOAP message, yet that interpretation is not licensed anywhere in the spec, and is even explicitly rejected in the HTTP binding, where the state transition table clearly shows HTTP response codes affecting SOAP message semantics. It’s also alluded to in the glossary, where the definitions of the two terms differ (you think this was accidental? Hah! 8-).

I would love it if there was a reasonable way to bridge the SOAP/WS-Addressing world and the HTTP Transfer protocol world, but I just don’t see that each side really want the features of the other side. The SOAP/WSA folks want the SOAP processing model for Asynch, and don’t care about the underlying protocol. The Web folks want their constrained verbs and URIs and don’t care about SOAP processing model.

Avert ye eyes! False dichotomy alert!! You can get the SOAP processing model, and HTTP as transfer protocol (including asynch, which HTTP handles just fine despite insistence from many that it doesn’t), simply by using SOAP in the manner prescribed in the SOAP 1.2 spec and default HTTP binding. In order to do so though, you need to give up on the idea of a new (non-URI) identifier syntax. This is really not a big deal! We are, after all, primarily talking about syntactical differences here. What EPRs are trying to do is comparable to inventing a new alphabet for the English language; perhaps there are benefits, but I think the Phoenician alphabet has a, ahem, rather large and insurmountable head start in deployment, making those benefits – if they exist at all – completely inconsequential.
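To make that concrete, here’s a minimal sketch (mine, not anything lifted from the spec or from Dave) of what SOAP 1.2 over HTTP-as-transfer looks like on the wire: an ordinary URI does the identifying, POST does the transferring, and the envelope rides in the body. The host, path, and payload are invented for illustration; only the media type and envelope namespace come from the SOAP 1.2 spec.

```python
# A minimal sketch of SOAP 1.2 used as prescribed: the default HTTP binding,
# a plain URI, and POST as the transfer verb.  Host, path, and body content
# are hypothetical.
import http.client

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
  <env:Body>
    <getQuote xmlns="http://example.org/stocks">IBM</getQuote>
  </env:Body>
</env:Envelope>"""

conn = http.client.HTTPConnection("example.org")
conn.request(
    "POST",
    "/stocks/quotes",  # an ordinary URI does the identifying; no new identifier syntax required
    body=SOAP_ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
response = conn.getresponse()
# The HTTP status code is part of the message semantics here, per the binding's
# state transition table; it isn't just plumbing under an "envelope = message" view.
print(response.status, response.read().decode("utf-8"))
conn.close()
```

Note that nothing here is tunnelling; the response status code is part of the contract, exactly as the binding’s state transition table says it should be.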

Dave then makes a really interesting statement of the “protocol independent” variety;

Here’s a test case: Would the Atom protocol switch to using WS-Addressing and then use the HTTP as Transport binding(s) and HTTP as Transfer binding? Seems to me not likely. The Atom folks that want to use HTTP as Transfer have baked the verbs into their protocol, and they won’t want to switch away from being HTTP-centric. And same as I don’t see the SOAP centric folks wanting to “pollute” their operations and bindings with HTTP-isms.

Emphasis on “baked the verbs into their protocol”. Seriously – no matter how you slice it, you’re always baking verbs into a “protocol”, because an application developer has to know what verbs they’re using. The problem as I see it, again, is one of nomenclature: Web services proponents have a very narrow, RPC-inspired definition of “protocol” (transport), and mental models built around that definition simply can’t fully absorb the implications of the broader definition used in the IETF and W3C (transfer). They simply can’t conceive of something called a “protocol” playing such an enormously significant role in a distributed system, yet this is precisely how all existing Internet-scale systems are built, and precisely why Web services proponents haven’t yet realized that the Web is what they’ve been trying to build, at least since the quest for “document oriented” services began in 2001/2002.
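For the sceptical, a small sketch of what I mean by verbs always being baked in – the names, hosts, and payloads below are invented for illustration. The “Web style” client relies on HTTP’s uniform verbs; the “protocol independent” client just moves its verb into a private operation name inside the body, tunnelled through POST.

```python
# Illustrative only: hosts, paths, and operation names are hypothetical.
import http.client

def fetch_entry_web_style(host: str, path: str) -> bytes:
    # The verb (GET) is part of the application's contract -- and that's fine,
    # because it's the same verb every other Web component (caches, proxies,
    # spiders, ...) already understands.
    conn = http.client.HTTPConnection(host)
    conn.request("GET", path, headers={"Accept": "application/atom+xml"})
    return conn.getresponse().read()

def fetch_entry_ws_style(host: str, path: str) -> bytes:
    # "Protocol independent", yet the verb hasn't gone away; it's now a private
    # operation name ("GetEntry") that only this one service understands,
    # tunnelled through POST.
    body = b'<GetEntry xmlns="http://example.org/entries">urn:example:entry:42</GetEntry>'
    conn = http.client.HTTPConnection(host)
    conn.request("POST", path, body=body,
                 headers={"Content-Type": "text/xml"})
    return conn.getresponse().read()
```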

One might also look at Dave’s statements and ask: if you’re going to be dependent on a protocol, then it might as well be the most successful one ever developed, rather than one which has struggled for deployment anywhere except behind the firewall. And somebody please remind me: why is it desirable to be independent of a transfer protocol, but dependent on SOAP, the protocol?

Phew!

My Linux box was commandeered this weekend, by parties unknown. I’m still figuring out exactly how that happened, but after struggling to make it workable again (a virus was involved), I gave up and reinstalled.

Kudos to Knoppix, a real lifesaver.

Werner likens REST to physics, in that each is a model of some reality, not the reality itself. He writes;

Whether we use the REST model, or another model to be developed that appears to match it closer or from a different perspective, “the web” and other large scale distributed systems will continue to do “their thing”, whatever model we put on it. The distributed, decentralized, bottom-up, autonomous nature of the web, exhibits complex organic interactions, that are not driven by models or laws, just as that Nature is not driven by the laws of Physics.

Well said.

I would just add, though, that there’s also a “metamodel” in play here that shapes our models: software architecture (and while Roy’s view is just one of several, the other views aren’t that different). Of course, this too is a model, and so falls under the same principle. But I suggest that so long as this metamodel remains useful, most models of the architecture of the best-behaved parts of a future Web will be REST extensions, like ARRESTED or (the bulk of) the Semantic Web.

“Who’s going to do the proofreading?”. Good question; how about Wiki-fying it?
(link) [del.icio.us/distobj]
My name’s in there somewhere!
(link) [del.icio.us/distobj]
“The next time you’re contemplating major technological improvements to your existing architecture, look around for spare parts you already know”
(link) [del.icio.us/distobj]
Tracking Internet growth; numbers, pointers, etc. (by Mr. SIP)
(link) [del.icio.us/distobj]
Congrats to everybody involved! I much prefer “Volume One” to “First Edition”, since the former implies it’s currently incomplete, which it clearly is (though still useful, of course)
(link) [del.icio.us/distobj]

I’d been fearing the publication of this article for a while. The interview went terribly, as you can probably tell; I was sick, and had about 4 hours of sleep the night before. Plus Greg was asking what I considered to be all the wrong questions, which irked me, and forced me to try to steer the conversation to where I felt it should be going … with mixed results. I also interpreted his questions as having a strong Web services bias, which also bothered me. But to his credit, the article came off as reasonably well balanced.

Savas picked up on the article, and the praise I had for MEST in it. I did want to say one thing about that, though, which I didn’t have a chance to say in the interview. While I like MEST as an architectural style, I believe that the MESTful software Savas and Jim would write would have considerable issues integrating with the Web and other Internet-based apps, largely because they also believe in protocol independence. On the other hand, since their assumed ProcessMessage semantic is practically identical, semantically, to HTTP POST (and even SMTP DATA), a considerable amount of that software may integrate well purely by accident! 8-) It’s only when they start using – and trying to be independent of – non-ProcessMessage-like semantics, such as HTTP GET, PUT, or FTP STOR, that the integration problems will arise.
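To illustrate what I mean – and this is my own sketch, not Savas’ or Jim’s code – a service whose only semantic is ProcessMessage(message) maps almost directly onto HTTP POST, which is why the accidental integration can happen; it’s the non-ProcessMessage semantics that won’t map. Host, path, and media type below are hypothetical.

```python
# A rough sketch of the observation above; not anyone's actual MEST implementation.
import http.client

def process_message(host: str, path: str, message: bytes) -> bytes:
    """Deliver a message to an endpoint for processing and return the reply.

    This is essentially what HTTP POST already means, which is why a
    ProcessMessage-only service can integrate with the Web "by accident".
    """
    conn = http.client.HTTPConnection(host)
    conn.request("POST", path, body=message,
                 headers={"Content-Type": "application/xml"})
    try:
        return conn.getresponse().read()
    finally:
        conn.close()

# The trouble starts with semantics that aren't ProcessMessage-shaped:
# GET is a safe, cacheable retrieval with no message to process, and PUT is an
# idempotent replacement of state.  Tunnelling either through a single
# process-this-message operation discards exactly those properties.
```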

In case you hadn’t noticed, I’d not been doing much travelling for the past couple of years. In fact, my drive to D.C. last month was the first time I’d left the confines of the Canadian border in about two years (in contrast to the 350K+ miles accumulated in the previous three!). This was due to some issues I was having with the Canadian government, in turn due to a case of mistaken identity; apparently there’s some guy in Canada with my name and birthday(!!) who’s not quite so law-abiding as yours truly!

I thought I was doing the smart thing in 2003 by applying for my citizenship instead of a permanent resident card. But, in retrospect, that turns out not to have been such a good decision. So I finally applied for the card in addition to my citizenship, and I received the card yesterday. Woohoo!