Jorgen Thelin points to a John McDowall blog entry about Architecture by Intent, in which John argues that Linux kernel upgrades requiring hardware upgrades are a bug, one that needs more architectural emphasis to fix.

When I first read Jorgen’s blog, I assumed that John was talking about Web services; I have a one-track mind, I know. But I’ve tried to ask Web services proponents what the architectural constraints of Web services are, and even made a good-faith effort to write them down (as compared to REST), only to have the attempt called a blatant troll.

So somebody please tell me; what are the architectural constraints of Web services? If nobody knows, then aren’t Web services “Architecture by Accident”? But if they are known, let’s sit down and figure out what properties these constraints induce so that we can tell if they can do what people want them to do. Yes, that’s a challenge.

Dave Winer says;

SOAP and XML-RPC were started to make it easy to build applications that viewed the Internet as if it were a LAN.

which is 100% true, and at the same time, 100% the wrong thing to do. The Internet is not a LAN. On a LAN, there’s one administrative/trust domain, and on the Internet there’s, well, a whole lot more than one. Computing in those two domains – LAN and Internet – is not the same thing, and therefore requires different solutions (though arguably it could be said that the LAN is a special case of the Internet, where number of trust domains = 1, so what works for the Internet could also work on a LAN – but the converse obviously doesn’t hold).

And for the record Dave, I write software, thank you very much (and so does Paul). Not as much as I used to, but I still do, and still enjoy it (especially now that I’ve switched to Python from Java). It’s all too easy to put down a detractor rather than trying to understand what they have to say, I suppose.

Just posted a new blog at my O’Reilly weblog about SOAPAction.

Greg Reinacker responds to the earlier discussion Jon Udell and I were having about uniform interfaces. The emphasis of his response is on my generalization post to www-ws-arch, in particular the part where I generalized from getStockQuote() to getQuote(). He may very well have a point that this step is pointless, I don’t know. But it was just an example of a step; perhaps a better example could be found. The main point of the post was the first step and the last; from the unparameterized, good-for-retrieving-one-thing-only method, to the good-for-any-safe-retrieval method.

Then, as I responded to Jon, the value of doing that is that deploying new interfaces is extremely expensive. If you’ve got an interface that can already do what you need, then you’re better off reusing it.
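To make the progression concrete, here’s a rough Python sketch of the kind of steps I have in mind; the method names and the example.org URIs are mine, purely for illustration.

```python
from urllib.request import urlopen

# Step 0: a method that can retrieve exactly one thing.
def getStockQuote():
    return urlopen("http://example.org/quotes/SUNW").read()

# Step 1: parameterize it, so one method covers any stock quote.
def getQuote(symbol):
    return urlopen("http://example.org/quotes/" + symbol).read()

# Final step: a uniform, safe retrieval method good for any resource at all.
# This is just HTTP GET; every resource on the Web already supports it.
def get(uri):
    return urlopen(uri).read()
```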

Greg did have this to say in his conclusion, which I’d like to comment on;

You can retrieve anything the same way, but you can’t process it without knowing more specifically what it is.

I can’t emphasize enough how important this point is. Being able to retrieve anything is a big win. Processing it is indeed an issue, and a non-trivial one (as I just alluded to), but with Web services, you’ve got both the problem of being able to get the data and the problem of knowing how to process it. The Web solves the first problem, and that’s just with the GET method!
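A quick sketch of what I mean, with made-up URIs and a hypothetical process() dispatcher: retrieval is solved once for everything, while processing remains per-format work.

```python
from urllib.request import urlopen

def retrieve(uri):
    """Uniform retrieval: the same code fetches any resource with a URI."""
    with urlopen(uri) as resp:
        return resp.headers.get_content_type(), resp.read()

# The retrieval problem is solved once, for everything:
#   media_type, body = retrieve("http://example.org/quotes/SUNW")
#   media_type, body = retrieve("http://example.org/reports/2002/q3")

# The processing problem remains: you still need code per format you understand.
def process(media_type, body):
    if media_type == "text/html":
        ...  # parse the HTML
    elif media_type == "application/xml":
        ...  # parse XML against a schema the software was programmed to understand
    else:
        raise ValueError("don't know how to process " + media_type)
```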

Michael Radwin reports from ApacheCon 2002 about Roy Fielding‘s presentation on Waka (his planned HTTP 1.1 replacement), and Web services.

There’s a lot of good stuff in Roy’s (PPT) presentation, but Michael appears to get the point Roy was making about Web services backwards. They don’t solve the N^2 problem, they are the N^2 problem. REST’s uniform interface constraint is what drives the complexity of integration to O(N).
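A back-of-the-envelope illustration of the difference (my numbers, not Roy’s): if each of N services exposes its own interface, pairwise integration needs on the order of N^2 custom connectors, whereas with one uniform interface each service only has to implement that interface once.

```python
def connectors_specific(n):
    # One bespoke connector per pair of specific interfaces.
    return n * (n - 1) // 2

def connectors_uniform(n):
    # One implementation of the shared, uniform interface per service.
    return n

for n in (10, 100, 1000):
    print(n, connectors_specific(n), connectors_uniform(n))
# 10 -> 45 vs 10; 100 -> 4950 vs 100; 1000 -> 499500 vs 1000
```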

So quoth Dave Winer;

After all these years, I’ve concluded that if I can’t understand it, it doesn’t have much of a chance in the market.

Well, without taking potshots at Dave, I think this is a fairly poor way to judge a technology. The interaction of a technology with its users is an incredibly complex environment, with no single metric that can indicate the success or failure of that technology. But what I look for in any technology is network effects.

I haven’t talked much about RDF or the Semantic Web in my blog yet, so I’ll just say a quick word about them.

The Semantic Web is the Web with an additional architectural constraint that could be called “Explicit Data Semantics”: data (representations, in the case of the Web) is constrained to be explicit about its implied semantics. This adds the desirable property of partial understanding to the system. In a nutshell, it means you get to avoid the “schema explosion” problem, where you have a bazillion different XML schemas and understanding them is an all-or-nothing proposition (i.e. software only understands the schemas it was programmed to understand). RDF and the Semantic Web don’t change one important thing: software will not “know” anything more than it was programmed to know. But they do allow a single piece of software to do whatever it is that it does, on any data, anywhere. For example, I could write software that searched for “people”, and if RDF were used, it could find references to “people” in many different XML documents. And that generates network effects up the wazoo.
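To make the “people” example a bit more concrete, here’s a hedged sketch using the rdflib library and the FOAF vocabulary; the data and URIs below are invented, and the point is only that the same query works no matter which document the data came from.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

# Made-up data; in practice this would be parsed from documents found on the Web.
data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/people/alice> a foaf:Person ; foaf:name "Alice" .
<http://example.org/people/bob>   a foaf:Person ; foaf:name "Bob" .
"""

g = Graph()
g.parse(data=data, format="turtle")

# The same query finds "people" in any RDF data, regardless of its source,
# because the semantics (foaf:Person) are explicit in the data itself.
for person in g.subjects(RDF.type, FOAF.Person):
    print(person, g.value(person, FOAF.name))
```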

Finally, I was asking them for this over two years ago! The mod_pubsub project has been established at Sourceforge, with IFindKarma leading the way.

I’m sure I’ll be making use of it, perhaps on this very blog.

James Strachan sums it all up when he says;

[…] Whatever happens in this whole web services thing, I do think a lot of good has come from it already. It’s forced people to think a lot about distributed systems and why the web works and scales – there’s a lot of great lessons there. It’s also brought together lots of diverse communities from the web side of things, from MOM folks and distributed objects folks. If nothing else it’s made us look again at distributed object technologies like DCOM, CORBA, EJB and ask lots of questions – I think it’s also taught us what a leaky abstraction the traditional view of distributed objects is.

This is exactly my view. Once this is all resolved – and I hope for the industry’s sake that it is soon – a lot more people will appreciate the extent of the gift that TimBL gave to the world, and the principled design that Roy laid out for us.

What doesn’t kill you, makes you stronger.

And FWIW, I’ve maintained my “distobj” email address for a reason; the Web is a distributed object infrastructure, with the critical innovation that all objects implement the same interface (have I said that enough yet? 8-).
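If it helps, here’s a rough Python rendering of that claim; the Resource class and the mirror() helper are my invention, just to show what generic client code against a uniform interface looks like.

```python
from abc import ABC, abstractmethod

class Resource(ABC):
    """The one interface that every Web object implements."""

    @abstractmethod
    def get(self):            # safe retrieval of the current state
        ...

    @abstractmethod
    def put(self, state):     # replace the state
        ...

    @abstractmethod
    def post(self, data):     # submit data to be processed
        ...

    @abstractmethod
    def delete(self):         # remove the resource
        ...

def mirror(source: Resource, target: Resource):
    # Generic client code: it works on any pair of resources whatsoever,
    # because it relies only on the uniform interface.
    target.put(source.get())
```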

I’m officially unemployed now. I’m not quite sure what I’ll be doing next, but I’ve already been contacted about doing some consulting work in the enterprise Web services space. Consulting would be “ok”, but it would be nice to have the relative comfort of a “9 to 5” job for a little while after a year of doing my own thing. Plus I think I’d have a bigger impact if I were with an established company with customers to satisfy.

So if you’re interested in the services of a large scale distributed systems expert who knows why Web services will fail, and how to fix them, drop me a line.

Jon discusses the pros and cons of uniform versus specific interfaces. This really is, as Dan Connolly described, a “fascinating tension”. So Jon’s in good company. 8-)

I agree with the gist – that there are pros and cons to each approach – but I disagree with the conclusion. Towards the end of the blog, Jon writes;

It’s a great idea to push the abstraction of the core primitives above the level of SELECT/CUT/PASTE. But there’s little to be gained by pretending that a table of contents is a pivot table.

If you’ve already deployed a network interface which is capable of accessing and manipulating pivot tables, then there is an enormous amount to be gained from being able to reuse this interface to access and manipulate tables of contents, tables of figures, or even dining room tables. Deploying new interfaces on the Internet is extremely difficult, expensive, and time-consuming. The SELECT/CUT/PASTE analogy, while illustrative, doesn’t reflect the nuances of a network interface, which must work between multiple trust domains, not within just one.

WRT “ANALYZE” and “IMPROVE”, both of those “actions” can be accomplished without introducing new methods on the interface. For example, “ANALYZE” is a safe action, so it could be handled by piping your content through an intermediary via a GET invocation; the intermediary would “analyze” the content and return the results (perhaps using annotation). “IMPROVE”, as I understand it, could be implemented similarly, but using WebDAV’s COPY method, or maybe just PUT or POST; it depends how it’s used. Either way, the intermediary would do the “improving”.
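Here’s roughly the shape of that interaction in Python; the analyzer.example.org and improver.example.org intermediaries, and the doc query parameter, are hypothetical, invented only to illustrate the point.

```python
from urllib.parse import quote
from urllib.request import Request, urlopen

def analyze(document_uri):
    # "ANALYZE" is safe, so a plain GET on the intermediary does the job;
    # the document to analyze is identified by its URI.
    uri = "http://analyzer.example.org/analyze?doc=" + quote(document_uri, safe="")
    return urlopen(uri).read()

def improve(document_uri):
    # One possible shape for "IMPROVE": POST the document's URI to the improver,
    # which fetches it, improves it, and returns the result. It could just as
    # well PUT the result back, or use WebDAV's COPY; it depends how it's used.
    body = ("doc=" + quote(document_uri, safe="")).encode("ascii")
    req = Request("http://improver.example.org/improve", data=body)
    return urlopen(req).read()
```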

P.S. I like to use the “drag and drop” desktop metaphor as a comparison to REST’s uniform interface; GET as “double-click” (except for application-invocation, which isn’t technically part of drag-and-drop), POST as drag-and-drop, PUT as “file->save”, and DELETE as delete. This analogy breaks down with the drag-to-trash-can action, but it holds for the most part because drag-and-drop was designed as an interface for “all desktop objects” which is pretty similar to “all resources”.