I’m enjoying Tim Bray’s technology predictor success matrix series very much. It gave me an idea too. I hereby present the “Internet scale distributed system predictor success matrix”, aka the ISDSPSM.

The predictor I’m going to use is “Uses a constrained interface”. That is, whether or not the interaction semantics between components are constrained in any way. The technologies are a mix of systems that, at one time or another, attempted Internet scale deployment.

Winners    Score    Losers    Score
Web        10       CORBA     0
Email      10       DCOM      0
IM         10       DCE       0
IRC         6       ONC       0
Napster     7       RMI       0
DNS        10       Linda     10

I included Linda to show that constrained interfaces are not a sufficient condition for success. But they sure seem necessary, wouldn’t you say?

WS-Eventing, from BEA, MS, and Tibco.

The good news is that finally, we’ve got a Web services spec that tackles the hard problem: interfaces. Yes, WS-Eventing is an application protocol (do we have to bind SOAP to it too?).

The bad news is that it continues the track record of abusing other application protocols (if indeed its objective is to be used with them, which I can only assume is the case); the more that goes into the SOAP envelope, the less suitable it is for use with them, because it just duplicates what they already provide. Once again we see WS-Addressing as the culprit; wsa:Action and wsa:To exist to replace their counterparts in any underlying application protocol. For HTTP, for example, the counterparts of wsa:Action and wsa:To are the request method and the request URI, respectively.
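
To make the duplication concrete, here’s a hedged sketch: the host, paths, envelope contents, and even the WS-Addressing namespace below are placeholders of my own, not values from the spec, but the shape of the problem should be recognizable.

import http.client

# Placeholder envelope; the wsa namespace and element values are illustrative only.
SOAP_ENVELOPE = """<s:Envelope
    xmlns:s="http://www.w3.org/2003/05/soap-envelope"
    xmlns:wsa="http://example.org/placeholder/ws-addressing">
  <s:Header>
    <wsa:To>http://www.example.com/subscriptions</wsa:To>      <!-- duplicates the request URI -->
    <wsa:Action>http://www.example.com/Subscribe</wsa:Action>  <!-- duplicates what method + URI already say -->
  </s:Header>
  <s:Body><!-- subscription details --></s:Body>
</s:Envelope>"""

conn = http.client.HTTPConnection("www.example.com")
# The application protocol already declares "do this to that resource" ...
conn.request("POST", "/subscriptions", body=SOAP_ENVELOPE,
             headers={"Content-Type": "application/soap+xml"})
# ... and the envelope declares it all over again in WS-Addressing's own vocabulary.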

A point of frustration for me is that the semantics of subscription itself are largely uniform. What I mean is, wouldn’t it be great if all Web services supported “subscribe”? So why not use HTTP like I’ve done, which is built for uniform semantics? Using a SOAP envelope with MONITOR, and POST for the resultant notifications, would be really sweet.
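
Something like the following, say; a rough sketch only, with invented hosts and paths, and with MONITOR standing in for my proposed HTTP subscription extension method.

import http.client

# Subscribe: one uniform operation, usable against any resource.
sub = http.client.HTTPConnection("www.example.com")
sub.request("MONITOR", "/stormwarnings",
            headers={"Reply-To": "http://consumer.example.org/inbox",
                     "Content-Length": "0"})

# Notify: later, the server just POSTs the data (a SOAP envelope would do)
# to the consumer's Reply-To URI; no new vocabulary required.
note = http.client.HTTPConnection("consumer.example.org")
note.request("POST", "/inbox",
             body="<notification>storm warning details</notification>",
             headers={"Content-Type": "application/xml"})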

One pleasant surprise is the form that notifications take, as in this example. Notice the use of wsa:Action; the value is no longer a method, but is instead a type. Woot! That’s the first time I’ve seen the “action” semantic used properly in any Web services work. Presumably this is due to notification semantics being entirely geared towards simply getting some data to some other application; basically, POST. Of course, technically, I don’t believe any “action” is required in this case, as there’s no intent on behalf of the notification sender beyond simple data submission; the intent is determined and assigned by the recipient of the notification. But that’s still progress!

Another upside is the use of fine grained URIs for identifying the endpoints, e.g. “http://www.other.example.com/OnStormWarning”, rather than something like “http://www.other.example.com/RPCrouter”.

Overall, very disappointing from a protocol and pub/sub POV, but the progress on resource identification, uniform semantics (even if it’s accidental 8-), and intent declaration is quite encouraging. Perhaps the next attempt will be done as an HTTP extension with a SOAP binding to MONITOR (the existing binding to POST would suffice for notifications).

Dave Orchard wonders how XQuery might be put on the Web.

My position seems to fly in the face of at least one part of Dave’s;

But clearly XQuery inputs are not going to be sent in URIs

Why admit defeat so easily? Did the TAG not say, “Use GET … if the interaction is more like a question …”? Well, aren’t XQuery documents questions? I think it’s quite clear that they are, and therefore XQuery would benefit from being serializable into URI form. That’s not to say that all XQuery documents would benefit, but many could.
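
For instance (the endpoint and the parameter name here are purely illustrative assumptions of mine), even a non-trivial XQuery is perfectly happy being percent-encoded into a GET:

import urllib.parse

# Hypothetical service and parameter name; only the encoding is the point.
xquery = 'for $q in doc("quotes.xml")//quote where $q/@symbol = "EXMP" return $q'

uri = ("http://xml.example.com/query?q="
       + urllib.parse.quote(xquery, safe=""))
print(uri)
# The question is now a URI: GET it, cache it, bookmark it, link to it.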

I got a lot of pushback from some in the XQuery WG when I suggested this to them a few months ago, but I think the TAG finding is quite clear. I also strongly believe that doing this is the only significant way in which XQuery can be “put on the Web”.

On the upside, Dave says good things about the value of generic operations;

I first tried to re-use the Xquery functionality rather than providing specific operations in the SAML spec. My idea was that instead of SAML defining bunch of operations (getAuthorizationAssertionBySubjectAssertion, getAuthorizationAssertionListBySubjectSubset, ..), that SAML would define a Schema data model which could be queried against. A provider would offer a generic operation (evaluateQuery) which took in the query against that data model.[…]

Of course, while you’re generalizing, why not go a little further and just use “POST” (suitably extended) instead of “evaluateQuery”?

I also like what Dare had to say about this, in particular;

One thing lacking in the XML Web Services world is the simple REST-like notions of GET and POST. In the RESTful HTTP world one would simply specify a URI which one could perform an HTTP GET on and get back an XML document. One could then either use the hierarchy of the URI to select subsets of the document or perhaps use HTTP POST to send more complex queries. All this indirection with WSDL files and SOAP headers, yet functionality such as what Yahoo has done with their Yahoo! News Search RSS feeds isn’t straightforward. I agree that WSDL annotations would do the trick but then you have to deal with the fact that WSDLs themselves are not discoverable. *sigh* Yet more human intervention is needed instead of loosely coupled application building.

Heh, good one, especially the use of the “human argument” against Web services. 8-)

When it comes to predictions, I like to put it on the line and make mine measurable. As published at SearchWebServices, my predictions this year are two;

  • Web services will continue to struggle to be deployed on the Internet. I’ll restate an earlier prediction I made this year; that by the end of 2004, the number of parties offering publicly available non-RESTful Web services (as registered with XMethods.net) will have plateaued or be falling.
  • Another high profile public Web service will be developed in both REST and Web services/SOA styles, and again — as with Amazon — the REST based service will handle at least 80% of the transactions.

Jon Udell responds to Stefano Mazzocchi’s comments on an earlier column of Jon’s. Stefano wrote;

Marketing, protocol and syntax sugar aside, web services are RPC.

to which Jon responds;
I disagree. It’s true that Web services got off to a shaky start. At a conference a couple of years ago, a panel of experts solemnly declared that the “Web” in “Web services” was really a misnomer, and that Web services really had nothing to do with the Web. But since then the pendulum has been swinging back, and for good reason. Much to everyone’s surprise, including mine, the linked-web-of-documents approach works rather well. Not just one-to-one and one-to-many, but also many-to-many. Adam Bosworth’s XML 2003 keynote was, for me, the most powerful affirmation yet that Web services can and should leverage the Web’s scalable spontaneity. That’s the vision firmly planted in my mind when I talk about Web services.

I’m reminded of a picture Don Box linked to a few weeks ago. A dog dressed as a clown is still a dog. Until Web services embrace a constrained interface (I’d recommend this one), they will always be RPC.

He writes;

Let’s take an old idea, like RPC, and wrap it with some new hype and nomenclature, and then mediate it with a completely orthogonal protocol! Yeah, lets!

The right conclusion, but for many of the wrong reasons, unfortunately.

It’s Dec 31, and time to see how my past predictions have done.

I predicted that XMethods would list less than 400 services by today, and lo-and-behold, that’s the case; they list 366.

Am I psychic? Have I hacked XMethods? Nope, I just performed simple linear extrapolation. Why linear? Because Web services have extraordinarily poor network effects; their growth has been roughly linear in time, not a function of the size of the existing population as you’d expect to see in an Internet scale network (like email, instant messaging, or the Web) which benefits from Metcalfe’s law.

I also predicted that the TAG would reject the Web Services Architecture document. Of course, that document hasn’t progressed very far, and will apparently only be published as a Note, meaning it will never be reviewed by the TAG.

So here’s a challenge to Web services promoters; make a prediction about the number of Web services available on the Internet by the end of 2004 and/or 2005. If they’re so great, then surely, at some point, there’s going to be thousands of them, right? When will that be? At this rate, they won’t get to 1000 until 2010 … assuming the hype – which is the only thing keeping them even linear, IMO – lasts that long.
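
For the record, the arithmetic behind those numbers is nothing fancier than the sketch below; the growth rate and the date are my own rough assumptions, chosen only to be consistent with the ~366 figure above.

# Back-of-envelope only; the rate and the year are assumptions, not XMethods data.
observed = 366          # listings as of "today" (from above)
this_year = 2003        # assumed date of this entry
rate_per_year = 100     # assumed roughly constant annual growth, no compounding
target = 1000

years = (target - observed) / rate_per_year
print(f"{years:.1f} more years, i.e. around {this_year + years:.0f}")
# -> 6.3 more years, i.e. around 2009; the end of the decade give or take,
#    depending on the rate assumed, which is all a linear model can say.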

He writes;

Is it a transport protocol or not?

Definitely not a transport protocol. It’s a state transfer protocol.

The difference is this: the HTTP interface is vague and the Linda interface is specific. Linda has precise, simple semantics. The possible range of behaviors exhibited in Linda-based systems benefit from being layered on *top* of those precise, simple semantics.

Hmm, I’m not seeing it. How are GET, PUT, and POST any more vague than rd, write, and take? All have “precise, simple semantics”. IMO, the only really important difference between them is that the former are operations defined on a more general abstraction than the latter; the resource for REST, and the tuple space for Linda. If it helps, I drew that relationship once upon a time.

Also, regarding Linda, I don’t see how the range of behaviours is a result of layering. Perhaps this is just a nomenclature problem, but I attribute the wide range of behaviours to the generality of the Linda interface. For example, if your operation were getStockQuote(), then you’d only be able to use it for stock quotes. If it were getQuote(), that would be more general, as you could use it for insurance quotes too. If it were GET or rd, well, you could use it for most anything, couldn’t you?
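
In code, with made-up names, that ladder of generality looks something like this; nothing below comes from the Linda or HTTP specs, it’s just the same “give me a representation of a thing” idea at three levels.

# Made-up function names, purely to illustrate the generality ladder.

def getStockQuote(symbol):
    """Useful for stock quotes, and nothing else."""
    ...

def getQuote(kind, which):
    """Stock quotes, insurance quotes, movie quotes ..."""
    ...

def GET(uri):
    """Any resource at all; rd(template) is the analogous move
    over the tuple space abstraction."""
    ...

# The specific operations collapse into identifiers handed to the generic one:
quote = GET("http://quotes.example.com/stock/EXMP")
policy = GET("http://quotes.example.com/insurance/auto/123")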

One other point I want to respond to;

Vanessa Williams provides an elaboration of HTTP for a tuple space application protocol. As I understand REST this should therefore provide the application protocol of a tuple space on the architectural style of REST using the HTTP transport/application protocol mix. In this case the advantage of using REST and HTTP is supposed to be found in the hardware and software that would already be in place between the client and the server.

I wouldn’t say that’s what Vanessa did. What I would say is that she described how one could build an HTTP gateway (a Facade) to a tuple space system: identifying each space with a URI, and mapping HTTP operations to tuple space operations.
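
To be clear, what follows is my own minimal sketch of that facade idea, not Vanessa’s design; the in-memory “space” and the particular method mapping (GET as a non-destructive rd, POST as write, DELETE as take) are assumptions chosen just to show the shape of it.

# My sketch only: a trivial in-memory space behind an HTTP-ish dispatcher.
spaces = {"/spaces/demo": []}        # each tuple space identified by a URI path

def handle(method, path, body=None, match=None):
    space = spaces[path]
    if method == "GET":              # rd: return a matching tuple, leave it in place
        return next((t for t in space if match is None or match(t)), None)
    if method == "POST":             # write: add a tuple to the space
        space.append(body)
        return body
    if method == "DELETE":           # take: return a matching tuple and remove it
        t = handle("GET", path, match=match)
        if t is not None:
            space.remove(t)
        return t
    raise ValueError("unsupported method: " + method)

handle("POST", "/spaces/demo", body=("stock", "EXMP", 42))
print(handle("GET", "/spaces/demo"))      # rd   -> ('stock', 'EXMP', 42)
print(handle("DELETE", "/spaces/demo"))   # take -> ('stock', 'EXMP', 42), removed
print(handle("GET", "/spaces/demo"))      # -> None; the space is empty again

Whether take really belongs on DELETE, or on a POST of a “take” request, is exactly the sort of decision such a gateway has to make; the point is only that the uniform interface accommodates it.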

Offline, I suggested to Patrick that if we took Linda to the IETF to standardize, we’d be placed in the Applications Area, where we’d be tasked with defining “TTP”, the “Tuple Transfer Protocol”. This application protocol would include operations such as READ, WRITE, TAKE, etc. Perhaps it might even accommodate extensions so that somebody could, in the future, define the “TTP-NOTIFY” extension protocol which added the “NOTIFY” operation. That would also, as with HTTP, most definitely not be a transport protocol … though I’m sure most SOAP proponents would want to treat it like one. 8-)

This is goodness, though I’m embarrassed that it took me so long to get plugged in; I’m too many degrees of separation away from some communities that are important to my work. Time to update my weblog subscriptions.

Here’s what’s been said the past week;

Phil’s done his homework on his “See also” links there; it’s a nice collection of snippets from the past couple of years, several of them mine. I’d also recommend a presentation I gave last year to the Web Services Architecture WG titled “REST Compared”, where I present a simple example of a REST vs. Tuple space based solution to the pervasive problem of turning lights on and off.

I also like what Vanessa did there, and I think that for anybody currently hardcore into tuple spaces, following through her outline of one possible integration of REST & tuple spaces would be very informative about how the Web relates to their work.

Patrick seems stuck on how to reconcile his position that generic abstractions are a good thing with his position that systems should be built independent of the protocol. Note to Patrick; this is all well and good for transport protocols, but application protocols define the abstraction; for them, protocol independence requires that you disregard that abstraction.

What I like most about this meme is primarily that it implicitly eradicates the myth that the Web and/or REST is just for humans. Even if you don’t know – or want to know – about tuple spaces, it should hopefully pique your interest that a bunch of bright folk in the large scale distributed software composition space – where there are no humans in the loop – are looking at REST.

… to the land of blogdom.

telnet proxy.markbaker.ca 80
MONITOR http://www.pacificspirit.com/blog/index.rdf HTTP/1.1
Host: www.pacificspirit.com
Reply-To: http://www.markbaker.ca/private/
Content-Length: 0

aka “subscribed”