Sam, in apparent response to my claims that some people see constraints as bad things, writes this:

There are those who appear to advocate a null architectural style with respect to data. Who claim that content type independence is not a bug but a feature. Who imply that few constraints on data is a good thing. Of course, there are no such people, this is only a strawman argument.

Very well. I was hoping not to have to name names, but here are some quotes from some very smart people who don’t understand what an architectural constraint is, or its value (or at least didn’t when they said what I’m quoting 8-), and who are implicitly suggesting that fewer constraints are better. And FWIW, I never said that anybody was advocating using the null style, only that it was the implicit result when you make arguments such as these.

I’m moving on Wednesday, and sometime over the next day or two my DSL service will be cut off, which will take down my Web server, and with it, my weblog. I hope to be back up by the end of the week, but the odds of that going smoothly are pretty slim.

If you’re interested, I’ve kept a photo record of the renovation of our new house, plus some additional info.

I want to ask a question of the Web services community about software architecture.

My question is: do you believe it is possible to make an error when architecting a software system that would make it practically impossible for that system to be successful, no matter how much support there is for it in industry?

I ask, because if there’s one thing I’ve learned during my studies of software architecture, it’s that you don’t get something for nothing; you have to expend energy to reduce entropy, aka add constraints to induce useful properties. In several discussions I’ve had over the past three or so years, I’ve heard more than a few folks (most of whom should know better, IMO) imply that fewer constraints are a good thing.

If there are a significant number of “No” responses, then I’ll know to spend more time on software architecture education. If there are a significant number of “Yes” responses, I’ll focus on the specifics of the architecture of large scale systems (which is what I’ve been doing).

Thanks. I apologize that I don’t have comments set up to handle this. But hopefully doing it this way, responses will end up in more aggregators, and this important issue will get more exposure.

Ok, would somebody like to tell me why the heck we need WS-Addressing? Just address the darned resources directly, and be done with it. There’s simply no excuse for hiding resources in the post-Web era.

Also note the telling mistake in the first sentence, which keeps being made despite my continually pointing it out:

Web Services Addressing (WS-Addressing) defines two constructs that convey information that is typically provided by transport protocols and messaging systems in an interoperable manner. (emphasis mine)

Repeat after me: protocol independence is a bug, not a feature.
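To make the contrast concrete, here’s a minimal sketch (all names, paths, and URIs are hypothetical) of the same request addressed two ways: once with the resource’s own URI right in the HTTP request line, and once tunneled through a generic SOAP endpoint with the real target buried in a WS-Addressing-style header in the body, where a plain HTTP intermediary can’t see it.

```python
# 1. Web style: the resource has its own URI, so the target is visible in
#    the request line and any HTTP intermediary can route or cache on it.
direct_request = (
    "GET /quotes/IBM HTTP/1.1\r\n"
    "Host: example.org\r\n\r\n"
)

# 2. Endpoint style: one generic URI for everything; the real target is
#    hidden inside the SOAP body (simplified, not actual WS-Addressing XML).
tunneled_request = (
    "POST /soap-endpoint HTTP/1.1\r\n"
    "Host: example.org\r\n\r\n"
    "<Envelope><Header>"
    "<To>http://example.org/quotes/IBM</To>"
    "</Header><Body/></Envelope>"
)

def http_target(request):
    """What a plain HTTP intermediary can act on: the request line only."""
    method, path, _ = request.split("\r\n", 1)[0].split(" ")
    return path if method == "GET" else None

assert http_target(direct_request) == "/quotes/IBM"  # routable, cacheable
assert http_target(tunneled_request) is None         # opaque: must parse the body
```

The point being: once the address moves into the message body, every generic piece of Web infrastructure between client and server goes blind to it.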

Sam comments on my Infopath feedback:

I’m pleased to see such a ringing endorsement by Mark Baker of XML Web Services.

Well hang on there. That isn’t “XML Web services”. Or if this is the new benchmark of a Web services architecture (has that been decided and nobody told me? 8-), then what the heck is this, because it sure doesn’t describe the architecture of Infopath – or at least not the configuration of Infopath I was praising (and probably extrapolating upon a little too). Perhaps with a handful more constraints, the architectural style described in that document will resemble the one in Infopath. But that doesn’t in any way mean that the WSA document is a good description of Infopath, any more than the null style is a useful description of all architectural styles.

I just thought I’d see what all the fuss is about, and check out Infopath. I just ran through the online demo that’s there.

Very nice, but like Edwin, I’ve really got to wonder about the wisdom of doing this as part of Office, as it’s just such a perfect fit for the browser. Hopefully the Office team is thinking this too.

The demo didn’t show much of the “Share” part, except for some out-of-place UDDI and WSDL stuff, which should just be chucked. This is such a perfectly RESTful system that there’s zero value in doing Web services. Hopefully what the demo didn’t show was a means of publishing a form back to a Web server for activation, constructing a workflow by linking forms together using hypermedia, yada yada… all that good Web stuff.

It’s also a perfect base from which to add RDF support, but I suppose we won’t see that until long after the Semantic Web is a huge success.

Congratulations to Werner Vogels and his group on having their proposal for (amongst other things) emulating the Internet accepted for funding! Werner writes:

The philosophy behind the testbed is to use a small high performance cluster (4 nodes) with 20 Fast Ethernet interfaces and an internal Gigabit ethernet interconnect to emulate the core of the Internet (or what ever topology you want to emulate). These FE interfaces are connected to a number of switches that provide the connection to all the end nodes. We make extensive use of VLANs to emulate the AD connections, and last mile connectivity. The goal is to run eventually with a 1000 physical nodes, the current first phase proposal was for 252 nodes. Each node has 3 ethernet connections, and we use software to emulate 3 end-nodes at each physical node. The experiments run the real software that you would eventually deploy, no simulated stuff.

Which sounds very cool from a technical POV. But if you want to emulate the whole Internet, try selling off those nodes to independently owned and operated profit centers that are competitive with each other. Now that’s emulation (or is it simulation?). 8-)

So says George Colony, the CEO of Forrester.

The Web is dead and will be replaced by an executable architecture

Bunk.

Sam writes:

Mark Baker is upset because SOAP permits usages which are not, in his and many people’s opinion, well architected. Usages such as RPC. While many of Mark’s arguments resonate with me, he tends to throw the baby out with the bathwater. He might as well say that Python is not a good language for building REST systems because it can also be used for RPC.

I realize my position is far from typical, and appears inconsistent at times, but I thought I’d been pretty clear about it at least. Oh well. Ok, so let me get it all out, and describe what I believe – and what I don’t – about REST and SOAP:

I believe SOAP is a valuable and useful technology.

I believe the Web, and in particular the subset that follows the constraints of the REST architectural style, is a fundamental breakthrough in the evolution of large scale distributed systems, one that will continue to affect how most systems, both human- and machine-targeted, are built for the foreseeable future.

I believe SOAP can provide value outside the constraints of REST, but I also believe that its predominant value, by far, is when used within the constraints of REST.

I believe that using SOAP within the constraints of REST does not mean that HTTP (or Waka) has to be used.

I believe that using SOAP as a means for extending underlying application protocols is the most valuable thing it can be used for.

I believe that using SOAP as a framework from which to build new application protocols has some value, so long as those new protocols are built on transport protocols and not tunneled over other application protocols.

I believe SOAP will fail to see significant use on the Internet, because the predominant use of SOAP today is to tunnel new application protocols over existing application protocols, and to encourage an explosion in per-service application protocols, rather than the unification of application protocols that REST achieves via generalization.

P.S. I’ll believe it when I see it. 8-)

A funny addendum from Sam on the topic of concurrent updates with REST;

Obviously, Clemens hasn’t heard about the expires header. All you have to do is predict with 100% accuracy when the next POST or DELETE request is going to come in, and all caches will remain perfectly in synch.

It is just a SMOP.

Heh, exactly. Some might call this a bug; I call it a feature. There is no requirement in REST that all layers have a consistent view of the state of a resource, as to do so would be to reduce scalability. It’s basically just recognizing that perfect data consistency in the face of latency and trust boundaries is impossible. The best one can do (addendum: well, close to it) is to make explicit, at the time the representation is transferred, the expectations about the durability of the data. As I mentioned previously, architectural styles that transfer serialized objects (the whole object, not just the state) can address the issue of latency by moving the encapsulating code along with the data. But this is done at the cost of visibility, which means that it likely won’t cross trust boundaries.