I like how news of Sam and Leonard’s REST book is kicking off a new REST/SOAP thread. This time though, it seems the tables are turned and it’s the Web services proponents who are having to defend their preferred architectural style, rather than the other way around. It’s about freakin’ time! I’m kinda tuckered 8-O

Sanjiva chimes in, in response to Sam’s probing questions;

ARGH. I wish people will stop saying flatly incorrect things about a SOAP message: it does NOT contain the method name. That’s just a way of interpreting SOAP messages […] SOAP is just carrying some XML and in some cases people may want to interpret the XML as a representation of a procedure call/response. However, that’s a choice of interpretation for users and two parties of a communication may interpret it differently and still interoperate.

Oh my.

Every message has a “method”; a symbol which is defined to have a certain meaning understood by sender and recipient, so that the sender knows what it’s asking to be done, and the recipient knows what it’s being asked to do. A recipient which doesn’t understand what is being asked of it cannot process the message, for hopefully obvious reasons.

What Sanjiva’s talking about there is ambiguity as a feature of Web services; that some recipients will interpret a message to mean one thing, while others take it to mean another. Note that this is very different from what the recipients actually do in response to the message; that can and should, obviously, vary. But varying interpretations of the meaning of the message? Absurd.
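
To make that concrete, here’s a minimal sketch (the service and operation names are hypothetical) of where that symbol sits in an RPC-style SOAP message versus an HTTP request; wherever it lives, both parties have to agree on what it means before the message can be processed.

    # Minimal sketch; "getQuote" and its namespace are hypothetical.
    import xml.etree.ElementTree as ET

    SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

    # RPC-style SOAP: most toolkits treat the first child of the Body --
    # here "getQuote" -- as the name of the operation being invoked.
    soap_request = f"""
    <soap:Envelope xmlns:soap="{SOAP_NS}">
      <soap:Body>
        <getQuote xmlns="urn:example:quotes">
          <symbol>IBM</symbol>
        </getQuote>
      </soap:Body>
    </soap:Envelope>
    """

    body = ET.fromstring(soap_request).find(f"{{{SOAP_NS}}}Body")
    print("SOAP 'method':", list(body)[0].tag)   # {urn:example:quotes}getQuote

    # On the Web, the symbol sits in the request line itself, drawn from a
    # small uniform vocabulary that every recipient already understands.
    http_request = "GET /quotes/IBM HTTP/1.1\r\nHost: example.org\r\n\r\n"
    print("HTTP method:", http_request.split()[0])  # GET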

William Henry plays a game of “Who Stole the Network Effects”… but doesn’t seem to realize it 8-)

Damn, if the W3C can’t get the browser-based Web right, and is home to the core standards that make up WS-Deathstar, it makes one wonder if they’re really the organization best suited to “Lead the Web to its full potential”.

IMO, all of the problems mentioned at those links would vanish if only the W3C was made accountable to the public, rather than its members; or at least first to the public.

A new agenda item for the upcoming Advisory Board meeting perhaps?

So, how’s the Semantic Web coming along?

Mark Little explains why he’s a proud fence sitter in the REST vs. WS-* debate;

I’ve never believed in the one-size-fits-all argument; REST has simplicity/manageability to offer in certain circumstances and WS-* works better in others. As far as distributed internet-based computing is concerned, REST is probably closer to Mac OS X and that makes WS-* the Windows. For what people want to do today I think REST is at the sweet spot I mentioned earlier. But as application requirements get more complex, WS-* takes over. We shouldn’t lose sight of the fact that they can complement each other: it need not be a case of either one or the other.

Ah yes, more of the inaccurate trucks vs cars-style comparisons. In truth, SOA no more complements REST than a musket complements an M4, or an Edsel complements a BMW. REST is an improvement upon SOA in the general case, plain and simple.

Jorgen tries to convince us that Web 2.0 Needs WS-*. But he’s going to have to do a lot better than arguments like this;

And, as if to underscore why I don’t see the REST / POX / AJAX “religion” achieving too much traction among enterprises, try explaining the phrase “The Web is All About Relinquishing Control” to any corporate security manager!

Well, if Jorgen had read what Alex was saying about relinquishing control, he might not think it such an insurmountable task;

This is possible because no one owns the web, and consequently no one can control it. The web is everyone’s property, and everyone is welcome to join in. It’s the world of sharing. The need for control has been relinquished, and it is left to the participants to sort their discrepancies out. The web as a medium and as a technology is not going to offer any assistance in that regard.

In other words, relinquishing control is largely about adopting public standards in lieu of pursuing proprietary interests, in particular the public Web standards that make inter-agency document-oriented integration (relatively) simple to achieve. If you are responsible for securing an intranet, a primary consideration should be to trust messages based on publicly vetted agreements, like most Web messages, and to distrust messages whose complete semantics are not publicly vetted, like most SOAP messages.

Sam Ruby writes;

The very notion of a link has become practically inexpressible and virtually unthinkable in the vernacular of SOA.

That’s an awesome soundbite, but I don’t think that’s the (whole) problem, because SOA/WS does have links; they’re called EPRs.

But what SOA doesn’t offer is a uniform interface for the targets of those links, and a uniform interface is what gives links most of their value, since each one then contains sufficient information to initiate a subsequent action (e.g. GET).

There’s a unique symbiotic relationship between links and the uniform interface that makes the whole greater than the sum of the parts; individually they’re useful, but together they changed the world.
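
To put that more concretely, here’s a rough sketch of the contrast; the EPR is schematic WS-Addressing, and the addresses and names are hypothetical.

    # Rough sketch; addresses and service names are hypothetical.
    import xml.etree.ElementTree as ET

    WSA = "http://www.w3.org/2005/08/addressing"

    # A WS-Addressing EndpointReference is a link of sorts: it says where an
    # endpoint lives, but nothing in it says what you may ask of that endpoint;
    # for that you still need its particular contract (WSDL, policy, etc.).
    epr = f"""
    <wsa:EndpointReference xmlns:wsa="{WSA}">
      <wsa:Address>http://example.org/some-service</wsa:Address>
    </wsa:EndpointReference>
    """
    address = ET.fromstring(epr).find(f"{{{WSA}}}Address").text
    print("EPR points at:", address, "(what can be asked of it? unknown)")

    # A URI plus the uniform interface needs no further agreement: whatever the
    # resource is, GET is defined for it with the same meaning everywhere, so
    # the link alone carries enough information to take the next step.
    uri = "http://example.org/some-document"
    print("URI points at:", uri, "(GET is always a defined next step)")
    # from urllib.request import urlopen
    # urlopen(uri)   # dereferencing it needs nothing beyond HTTP itself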

An important, nay, foundational part of my mental model for how Internet scale systems work (and many other things, in fact), is that I view standards as axioms.

In linear algebra, there’s the concept of span, which is, effectively, a function that takes a set of vectors as input and yields the vector space spanned by those vectors; the set of all reachable points. Also, for any given vector space you can find a set of axioms – a minimal set of vectors which are linearly independent of each other (no one of them can be expressed in terms of the rest), but which still span the space (note: I use “axioms” here to refer to such a set of basis vectors).
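
For the record, the standard definitions being borrowed here (where this post’s “axioms” are just a basis):

    \[
      \operatorname{span}(v_1,\dots,v_n) \;=\; \Bigl\{\, \sum_{i=1}^{n} a_i v_i \;:\; a_i \in \mathbb{R} \,\Bigr\}
    \]
    A set $\{v_1,\dots,v_n\}$ is a basis for a space $V$ when the vectors are
    linearly independent (no $v_i$ is a combination of the others) and
    $\operatorname{span}(v_1,\dots,v_n) = V$: every point of $V$ is reachable,
    and nothing in the set is redundant.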

So given, say, HTTP and URIs as axioms (because they’re independent), I can picture the space reachable using those axioms, which is the set of all tasks that can be coordinated without any additional (beyond the axioms) a priori agreement; in this case, the ability to exchange data between untrusted parties over the Internet. I can also easily add other axioms to the fold and envision how the space expands, so I can understand what adding the new axiom buys me. For example, I can understand what adding RDF to the Web gives me.

More interestingly (though far more difficult – entropy sucks), I can work backwards by imagining how I want the space to look, then figure out what axiom – what new pervasively deployed standard – would give me the desired result.

As mentioned, I try to evaluate many things this way, at least where I know enough to (even roughly) identify the axioms. It’s why Web services first set off my bunk-o-meter, because treating HTTP as a transport protocol is akin to replacing my HTTP axiom with a TCP axiom, which severely shrinks the set of possible things that can be coordinated … to the empty set, in fact. Oops!
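
Here’s a sketch of the two postures, with hypothetical endpoints and operation names; the only point is what each message makes visible to the network.

    # Hedged sketch; endpoints, paths, and the operation name are hypothetical.
    from urllib.request import Request

    # "HTTP as transport": every operation is tunneled through POST to a single
    # endpoint, so nothing visible to caches, proxies, or generic clients says
    # what is going on; the real "method" is buried in the body.
    tunneled = Request(
        url="http://example.org/soap/endpoint",
        method="POST",
        data=b"<getAccountBalance><acct>1234</acct></getAccountBalance>",
        headers={"Content-Type": "text/xml"},
    )

    # "HTTP as application protocol": the same need is exposed as a resource,
    # and the visible, uniform method carries the semantics, so any intermediary
    # or client can act on the message without further a priori agreement.
    uniform = Request(url="http://example.org/accounts/1234/balance", method="GET")

    print(tunneled.get_method(), tunneled.full_url)
    print(uniform.get_method(), uniform.full_url)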

See also; mu, the Stack, Alexander, software architecture.

Sanjiva inadvertently (I assume!) tells it like it is;

I’m still convinced that WS-* is the best technology *available today* to implement SOAs that do real “enterprisey” stuff

8-)

Dave Winer points to an interesting question from Daniel Berlinger; Why wasn’t the Web built on FTP?

This, and related ones such as “Could the Web have been built on FTP?”, or even “Why did the Web win, and not Gopher?”, are excellent questions with really interesting answers that expose historical, political, and technical aspects of the World Wide Web project. I don’t pretend to have all the answers, but I’ve done my research, so I think I’m a good person to ask (of course, you could just ask Tim for the authoritative answer).

I think the answer to Daniel’s question is pretty easy; Tim chose to start with a new protocol because he was innovating and FTP was well-entrenched at that time (whether he even considered FTP at all, I don’t know). Tim’s (and later, Roy’s) innovations would have required substantial changes to FTP too (support for URIs, and the fallout from that, being the big one), so I think it was a wise choice.

So, could the Web have been built on FTP? I’d say probably not, no. Beyond the aforementioned points, other things that would affect this include;

  • FTP requires extra network round trips for each retrieval, since a separate data connection has to be negotiated per transfer
  • FTP has no redirection capabilities
  • FTP doesn’t have a POST method
  • FTP implementations don’t permit delegation to plugged-in software

There are probably more issues. Add those to the fact that FTP was RFC’d in 1985, eight or nine years before Web standardization began, and there’d have been a lot of pushback from the IESG against changing FTP into something that it wasn’t intended to be. And rightly so.
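
For what it’s worth, here’s a rough sketch of how the first two points above play out in code; the hosts and paths are hypothetical, so the network calls are shown but left commented out.

    # Rough sketch of the first two points; hosts and paths are hypothetical.
    from ftplib import FTP
    from urllib.request import urlopen

    # FTP: a control connection, plus a separately negotiated data connection
    # for each retrieval, and no redirection: if the file has moved, you get
    # an error code, not a new location to follow.
    # ftp = FTP("ftp.example.org")
    # ftp.login()                                           # extra round trips
    # with open("paper.txt", "wb") as out:
    #     ftp.retrbinary("RETR /pub/paper.txt", out.write)  # new data connection
    # ftp.quit()

    # HTTP: one connection, one request/response, and 3xx redirects (via the
    # Location header) are followed transparently, so a link can outlive the
    # location of the thing it names.
    # doc = urlopen("http://example.org/pub/paper.txt").read()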