As if on cue, the ZapThink guys release a report which shows that they’ve been paying attention;

Since the Web plays such a large role for SMBs in their use of Web Services, it makes sense that many of them use the cheapest, simplest approach available for implementing B2B Web Services interactions. Using approaches such as Representational State Transfer (REST) gives companies a simple, straightforward HTTP-based approach to Web Services-based integration that is adequate for the needs of many SMBs.

and …

Many SMBs have been leveraging Web Services to reduce the cost of older approaches to addressing their external integration needs. The simple addition of Web Services interfaces, however, typically remain as inflexible as the API approaches that came before. Only through the application of SOA can midsize firms build and leverage loosely coupled Web Services that are flexible enough to respond to ongoing change in the business environment.

I really like that second-last sentence, where they’re saying, no, SOA does not encompass all forms of service. And though they don’t explicitly state what they think SOA entails, it’s made clear that their interpretation of SOA does not include “the simple addition of Web Services interfaces”, which suggests that their definition includes some sort of interface constraint.

Some of those snippets are taken somewhat out of context; as you’d expect, there’s a bit of the “SOAP is for heavy lifting” stuff in there too. But still, from these historically foaming-at-the-mouth WS/SOA types (kidding guys! 8-), some good stuff.

Tags: soa, rest, web, webservices.

According to Rob Sayre, the REST vs SOA debate is over;

If you have Microsoft saying “well, the best approach is to make this elaborate infrastructure we’ve spent billions of dollars building out optional”, then the debate is over.

I noticed that in Don’s post, but figured he must have been trying to say something else. I mean, if the debate was over, somebody would have told me, right? 8-)

So would now be a good time to remind people that I’m looking for work?

Tags: soa, rest.

“[…] the SOA action to date has been on internal integration. That may change as organizations get their SOA houses in order and recognize that they can just as easily integrate with partners’ systems as their own.” Heh, that’s a good one. 8-)
(link) [del.icio.us/distobj]
W3C to demonstrate how to break the Web. Say it ain’t so, Yves/Hugo/Eric/Phillipe! 8-(
(link) [del.icio.us/distobj]
I just don’t see this being useful to anybody involved in software development. Perhaps if it were recast in the language of software architecture, it would make sense? Is that even possible?
(link) [del.icio.us/distobj]

Sage advice from Patrick Logan;

Simple dynamic programming languages and simple dynamic coordination languages are winning. Vendors will have to differentiate themselves on something more than wizards that mask complexity.

On the upside, when most every other vendor is hawking snake oil, differentiation from those vendors isn’t hard. On the downside, as Patrick points out, mature products like Apache are the competition.

Of course, there are always many ways forward. Fair competition is one (build a better Web server/CMS/router/whatever..), subversion another, with lots of other possibilities in between.

I’m also reminded of a prediction I made about three years ago;

By the end of 2005, IBM’s content management software division will have absorbed their enterprise software group

Ok, so my timing’s off (as usual), but the message seems even more pertinent after the recent discussions; content management is enterprise integration.

Tags: soa, rest, web, webservices.

Note to self: if writing a book, avoid using the word “naked” in the title.
(link) [del.icio.us/distobj]

Now don’t get me wrong, I do appreciate the bevy of pro – or at least neutral – REST commentary in the recent discussion. But I just can’t get excited about the “moderate” conclusions such as this from Dare Obasanjo;

If you know the target platform of the consumers of your service is going to be .NET or some other platform with rich WS-* support then you should use SOAP/WSDL/WS-*. On the other hand, if you can’t guarantee the target platform of your customers then you should build a Plain Old XML over HTTP (POX/HTTP) or REST web service.

I mean, that looks fine and dandy – as did Don’s conclusions – until you realize that the architectural properties of the resulting system aren’t a factor in the decision.

Oops! This is not progress. This is not principled design.

Tags: soap, soa, rest, webservices.

If you’d asked me six or seven years ago – when this whole Web services thing was kicking off – how things were likely to go with them, I would have said – and indeed, have said many times since – that they would fail to see widespread use on the Internet, as their architecture is only suitable for use under a single administrator, i.e. behind a firewall. But if you’d asked me whether I expected this much trouble with basic interoperability of foundational specifications, I would have said no, I wouldn’t expect that. I mean, despite the architectural shortcomings, the job of developing interoperable specifications, while obviously difficult, wouldn’t be any more difficult because of those shortcomings… would it?

I’ve given this a fair bit of thought recently and concluded that yes, those architectural differences are important. What I don’t know, though, is whether they’re important enough to cause the aforementioned WS interop problems. But here’s my working theory on why Web services interop is harder; you can decide for yourself how significant those differences are.

What I think this boils down to is that interoperability testing of Web-based services (not Web services), like any Web deployment, benefits from network effects not available to Web services, primarily due to the use of the uniform interface. So if we’re testing Web-based services and I write a test client, that client can be used – as-is – to test all services. You simply don’t get this with Web services, at least past the point where you get the equivalent of the “unknown operation” fault. As a result, there’s a whole lot more testing going on, which should intuitively mean better interop.
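To make the uniform-interface point concrete, here’s a minimal sketch (hypothetical code, in Python) of what such a generic test client looks like: because every HTTP resource answers the same methods, this one function can probe any Web-based service as-is, while a Web services test client is useless against a service whose operations it wasn’t written for.

```python
# A generic "test client" usable, unmodified, against any HTTP resource.
# Nothing here is service-specific: no WSDL, no generated stubs, no
# operation names -- just the uniform interface.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def probe(url):
    """GET any URL and report (status code, Content-Type).

    Works against any conforming HTTP service, because GET and the
    status codes mean the same thing everywhere.
    """
    try:
        with urlopen(Request(url, method="GET")) as resp:
            return resp.status, resp.headers.get("Content-Type")
    except HTTPError as e:
        # Even failures are uniform: 404, 405, 500 carry the same
        # meaning for every service, so the test client needs no
        # per-service error handling.
        return e.code, e.headers.get("Content-Type")
```

Note that even the error path is reusable here; with SOAP, once you’re past the “unknown operation” fault, every further test message has to be written against that one service’s interface.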

Or at least that’s the theory. What do you think?

Update; based on some comments by others, I guess I should qualify this by stating that I understand there are concrete reasons why bugs exist today. But what I’m talking about above is the meta question of why these bugs continue to persist, despite plenty of time in which they could have been resolved (modulo the vendor-interest comment by Steve & Patrick, which I don’t buy: there’s so much activity in SOAP extensions that lock-in at the SOAP level is unnecessary, and moreover it shrinks the market by scaring potential customers away).

Tags: soap, rest, web, soa, networkeffects, testing, interoperability, webservices.

“So for ten-orders-of-magnitude-power-law-phenomena our candidates are: cosmology and the web. Are there any others?” Mind-blowing.
(link) [del.icio.us/distobj]