If you’d asked me six or seven years ago – when this whole Web services thing was kicking off – how things were likely to go with them, I would have said – and indeed, have said many times since – that they would fail to see widespread use on the Internet, as their architecture is only suitable for use under a single administrator, i.e. behind a firewall. But if you’d asked me whether I expected this much trouble with basic interoperability of foundational specifications, I would have said no, I wouldn’t expect that. I mean, despite the architectural shortcomings, the job of developing interoperable specifications, while obviously difficult, wouldn’t be any more difficult because of those shortcomings… would it?

I’ve given this a fair bit of thought recently and concluded that yes, those differences are important. What I don’t know, though, is whether they’re important enough to cause the aforementioned WS interop problems. But here’s my working theory on why Web services interop is harder; you can decide for yourself how significant those differences are.

What I think this boils down to is that interoperability testing of Web-based services (not Web services), like any Web deployment, benefits from network effects that aren’t available to Web services, primarily due to the use of the uniform interface. So if we’re testing Web-based services and I write a test client, then that client can be used – as-is – to test all of those services. You simply don’t get this with Web services, at least not past the point where you get the equivalent of the “unknown operation” fault. As a result, there’s a whole lot more testing going on, which should intuitively mean better interop.
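To make that concrete, here’s a minimal sketch of the “one client tests every service” point, in Python. The URIs are hypothetical; the only assumption is that each service is reachable over plain HTTP, which is exactly what the uniform interface buys you.

```python
# A generic probe that works, as-is, against any HTTP-accessible resource.
# The URIs below are made up for illustration.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

RESOURCES = [
    "https://example.org/orders/123",       # hypothetical order resource
    "https://example.net/weather/ottawa",   # hypothetical weather resource
]

def probe(uri):
    """Exercise the uniform interface: GET the resource and report the
    status code and media type, whatever service happens to be behind it."""
    try:
        with urlopen(Request(uri, method="GET")) as resp:
            return resp.status, resp.headers.get("Content-Type")
    except HTTPError as err:
        # Even failures are uniform: 4xx/5xx mean the same thing everywhere.
        return err.code, err.headers.get("Content-Type")

for uri in RESOURCES:
    status, ctype = probe(uri)
    print(f"{uri}: {status} {ctype}")
```

The same client, untouched, is a (crude) interop test for every resource anyone publishes; a SOAP-specific test client gets you only as far as a well-formed envelope before it needs service-specific knowledge.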

Or at least that’s the theory. What do you think?

Update; based on some comments by others, I guess I should qualify this by saying that I understand there are concrete reasons why bugs exist today. But what I’m talking about above is the meta question of why these bugs continue to persist despite plenty of time passing in which they could have been resolved (modulo the vendor-interest comment by Steve & Patrick, which I don’t buy, because there’s so much activity in SOAP extensions that lock-in at the SOAP level is unnecessary and, moreover, shrinks the market by scaring potential customers away).

Tags: soap, rest, web, soa, networkeffects, testing, interoperability, webservices.

“REST folk […] appreciate schemas as much as the SOAP crowd does. We just don’t confuse the issue by putting method semantics into them.” Heh, yep.
(link) [del.icio.us/distobj]

Don Box gives us his two cents on a Microsoft-internal “REST vs. SOA(P)” debate;

The following design decisions are orthogonal, even though people often conflate two or more of them:

  1. Whether one uses SOAP or POX (plain-old-XML).
  2. Whether or not one publishes an XML schema for their formats.
  3. Whether or not one generates static language bindings from an XML schema.
  4. The degree to which one relies on HTTP-specific features. That stated, screw with GET at your peril.
  5. Whether one adopts a message-centric design approach or a resource-centric design approach.

Some of the decisions (specifically 5) are architectural and sometimes philosophical.

I don’t know about that. Numbers 1, 2, 4, and 5 are architectural, as they all impact either the components, connectors, or data of the system. Only number 3 isn’t, since it’s an implementation detail. Number 2 is debatable, I suppose, but most Web services-based uses of XML schemas I’ve seen involve removing descriptive information from the message in deference to an implicit pointer to a WSDL document (and therefore a schema).

He also added;

If you want to reach both audiences before your competition does, you’ll avoid indulging in religious debates and ship something.

Of course, religious debates should always be avoided. But architectural debates should not, and REST vs. SOA(P) is an architectural debate. Period. If anybody thinks this is a religious debate, they simply haven’t done their homework.

Also of interest, his advice was preceded by this qualifier;

In hopes I never have to address this debate again, […]

Hah, that’s a good one 8-) Resistance is futile. You can’t fight loose coupling, man. It’s infectious. Muhaha!

Tags: soap, rest, microsoft, webservices.

Interesting article on one way of doing REST when management asks for Web services. See my comment for an alternate approach.
(link) [del.icio.us/distobj]
“SOAP, Corba or something insane”. Heh. Not sure about “del.icio.us doesn’t even use REST” though – it does; POX is RESTful.
(link) [del.icio.us/distobj]

From Bobby Woolf’s latest;

The beauty of interoperability is that two systems developed completely independently can still work together. Magic? No, standards (or at least specifications, open or otherwise); see Open Standards in Everyday Life. Consider a Web services consumer that wants to invoke a particular WSDL, and a provider that implements the same WSDL; they’ll work together, even if they were implemented independently. Why? Because they agree on the same WSDL (which may have come from a third party) and a protocol (such as SOAP over HTTP) discovered in the binding.

So what about the services that expose WSDL that the client doesn’t know about? What’s the possibility of those components ever interoperating without software upgrades? Zero, of course. That situation is called a silo, and I thought one of the main objectives of this whole SOA thing was to avoid them … wasn’t it?
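As a toy illustration of why that possibility is zero, here’s a sketch (in Python) of what a WSDL-bound endpoint effectively does. The operation names and the fault shape are invented for illustration; they aren’t drawn from any particular WSDL or SOAP toolkit.

```python
# Operations the provider's WSDL describes (and the only ones it implements).
PROVIDER_OPERATIONS = {
    "GetForecast": lambda city: f"Sunny in {city}",   # made-up operation
}

def dispatch(operation, *args):
    """What a WSDL-bound endpoint effectively does: anything outside the
    agreed contract gets a fault, and the caller has no generic next step."""
    handler = PROVIDER_OPERATIONS.get(operation)
    if handler is None:
        return {"Fault": f"unknown operation: {operation}"}
    return {"Result": handler(*args)}

# A consumer generated from the *same* WSDL interoperates:
print(dispatch("GetForecast", "Ottawa"))  # {'Result': 'Sunny in Ottawa'}

# A consumer built against a *different* WSDL is a silo; nothing short of a
# software upgrade (regenerating its stubs from the provider's WSDL) fixes it:
print(dispatch("GetQuote", "IBM"))        # {'Fault': 'unknown operation: GetQuote'}
```

Contrast that with the generic probe earlier in this post: there, the “contract” is the uniform interface itself, so independently developed components always have at least that much in common.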

Sigh … how many ways do I have to keep saying the same thing?!?!

I started writing this before Jeff’s hilarious Lego piece (that must have been fun to put together 8-), but if anything, it just re-emphasized the need for me to write this, because Jeff makes the same mistake that most of the rest of the industry is making; believing that the SOAP processing model is the analogue of the Lego brick.

For an apples-to-apples comparison of SOA and Lego, one needs to realize that the analogue of the Lego brick is actually the WSDL document that describes the service. Lego works as it does because each brick exposes the same application interface, not just because each brick is made from the same plastic (which makes a better SOAP analogue, IMO).

Just check out his example of a non-recombinant system here; it’s not recombinant because it’s topped off with a piece that exposes a different “WSDL” than the other “services”.

Jeff says a lot of valuable things in his piece that resonate with me, including;

The fundamental notion is that the true uses of the functions/data will not be known until after the system is put into production.
The future of software is about the creation and utilization of building blocks. It is about letting our users play with their Lego’s.

(modulo the fact that the plural of Lego is not “Legos” 8-)

… it’s just unfortunate that they all resonate with me from a Web/REST POV, not from an SOA POV.

Microsoft Director of Architecture Strategy; “REST is a dominant model on the consumer side, and SOAP is the model on the enterprise side”.
(link) [del.icio.us/distobj]
“Scraping, mashups, and RSS mean that your site is already a service […]”. Amen, Doug.
(link) [del.icio.us/distobj]
“We had a bit of a false start here with SOAP-based Web Services”. A “bit”?! 8-)
(link) [del.icio.us/distobj]