Answer me this; is it the objective of the Web services architecture to permit a document (e.g. a purchase order) to be submitted to any service for processing?

If so, then how is this different from having every service implement a common “process this document” (aka POST) operation? That’s what uniform means.
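
To make that concrete, here’s a minimal sketch of the client side of a uniform “process this document” operation; the host, path, and purchase order document are all hypothetical, and I’m assuming plain HTTP POST with an XML body.

    import http.client

    # One generic submission routine works against *any* service, because
    # the operation (POST), the envelope (HTTP), and the identifier (the
    # URI) are all uniform; only the document and destination vary.
    def submit(host, path, document, content_type="application/xml"):
        conn = http.client.HTTPConnection(host)
        conn.request("POST", path, body=document,
                     headers={"Content-Type": content_type})
        return conn.getresponse()

    # A hypothetical purchase order, submitted to a hypothetical service.
    po = b"<purchaseOrder><item sku='42' qty='1'/></purchaseOrder>"
    resp = submit("supplier.example.org", "/orders", po)
    print(resp.status, resp.reason)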

Savas says that iTunes is doing a proprietary version of Web services. I don’t think so. He writes;

If, however, they wanted to describe the structure of that information, how would they do it? They’d probably use XML Schema.

Yep, agreed there. I think RDF would be a better solution, but given they’re using plain old XML, XML Schema would be the way to go.

And if they wanted to describe the message exchange patterns, the contract for the interactions with the store (e.g., “when the XML document with the order information for a song comes, the response will be the XML document with the song”)?

Hold on. Why would they ever do that? Why would they want to artificially restrict the type of documents returned from an interaction? That would mean that if they wanted to insert a step between order submission and song download, they’d have to change the interface and its description? No thanks. I would prefer that the client determine what to do based on the type of the document that actually arrives. Let’s keep the perpetually-static stuff in the protocol, and let everything else happen dynamically.
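
Here’s a rough sketch of the kind of dynamic client I have in mind; the media types and the interim document handler are invented for illustration. The client submits the order, then dispatches on whatever type of document actually comes back, so the store could insert a new step without any interface change.

    import http.client

    def handle_interim_document(doc):
        # A stand-in for whatever new step the store might insert
        # between order submission and song download (approval,
        # payment confirmation, etc.).
        print("interim document received:", doc[:60])

    def submit_order(host, path, order):
        conn = http.client.HTTPConnection(host)
        conn.request("POST", path, body=order,
                     headers={"Content-Type": "application/xml"})
        return conn.getresponse()

    resp = submit_order("music.example.org", "/orders",
                        b"<order><song id='123'/></order>")
    ctype = (resp.getheader("Content-Type") or "").split(";")[0].strip()

    # Dispatch on the type of document that actually arrived, rather
    # than on a statically-described message exchange pattern.
    if ctype == "audio/mpeg":                # the song itself
        with open("song.mp3", "wb") as f:
            f.write(resp.read())
    elif ctype == "application/xml":         # some other document; inspect it
        handle_interim_document(resp.read())
    else:
        print("unexpected response type:", ctype)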

And if they wanted their service to be integrated with other applications without having to read Apple specifications on how to include the information in HTTP requests and how to read HTTP responses?

I don’t follow. One need only look at the HTTP spec to determine how that’s done. SOAP buys you nothing in that case, AFAICT. It buys you other things (better extensibility, a richer intermediary model, etc.), but none of those seem relevant to ITMS at present.

Via Mnot, Aaron’s reverse engineering ITMS.

So let me understand… ITMS is presenting an HTTP/URI-based interface intended for use by automata, and not human-controlled browsers? How is such a thing possible?! THE WEB IS FOR HUMANS (isn’t it?)! Head … about to … explode. 8-)

Steve responds to my response to his article.

One of Mark’s comments was about my discussion around the difficulties of using URIs to represent some types of endpoints such as message queues. I don’t disagree with Mark that you could devise a way to do it, but it’s just that there’s no good standardized way to do it. I mean, if worse came to worst, you could use a URI with the stringified OMG interoperable object reference format (“IOR:” followed by any number of hex digits), especially given that a single IOR can simultaneously represent any number of endpoints, regardless of the complexity of any individual endpoint being represented. But I suspect most people would not want to take that approach.

But there is a standardized way, which I recommended; the http URI. You don’t need any more standardization than that, as an http URI provides sufficient information to enable anybody to grab data from it via GET (aka, it enables the late binding of the identifier to data).
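
A minimal sketch of that late binding, with a made-up queue URI; any generic HTTP client can dereference it, with no queue-specific standardization agreed upon in advance.

    from urllib.request import urlopen

    # The identifier alone is enough; GET binds it to data at request
    # time. The queue URI below is hypothetical -- the point is that
    # nothing beyond "it's an http URI" needs to be agreed in advance.
    uri = "http://example.org/queues/orders/head"
    with urlopen(uri) as resp:
        print(resp.headers.get("Content-Type"))
        data = resp.read()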

FWIW, I agree that the URI-construction mechanism has its problems, and I try to avoid it when I can, especially where queues are created frequently. I just mentioned it as another option.

Thought for the day; how different would the Web services architecture be, if the Web didn’t exist?

Via Savas, a pointer to a paper by Jeff Schneider titled The World Wide Grid. It includes some incorrect assumptions about the Web that I’d like to address. Luckily, they’re summed up in this statement;

The focus for the web was to describe pages that could be linked together and read by people.

Bzzt. Can this be read by people? Nope.

The same mechanisms used to permit a browser to pull in HTML from any HTML-serving site around the world can also be used to enable an automaton to pull in any kind of data from anywhere around the world (namely, GET + URIs). You are using data, right? 8-)
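
For instance, here’s a sketch of an automaton doing exactly what a browser does; the feed URI and element names are invented, but the mechanism (GET + a URI) is identical.

    import xml.etree.ElementTree as ET
    from urllib.request import urlopen

    # Same mechanism a browser uses (GET + a URI), but the representation
    # happens to be XML for a program rather than HTML for a person.
    uri = "http://devices.example.org/sensors/temperature"
    with urlopen(uri) as resp:
        doc = ET.parse(resp)

    for reading in doc.iter("reading"):      # hypothetical element name
        print(reading.get("device"), reading.text)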

Even if you accept that all of these different approaches (Grid, WS, Web) are workable solutions to the over-the-Internet application-to-application integration problem, do you really want to bet against the world’s most successful distributed application?

Patrick and Stefan pick up on my “Linda did win” meme.

Obviously the Web isn’t Linda; clearly, there are important differences. But IMO, there are more important similarities. Both Stefan and Patrick claim, in effect, that the way the Web is used today doesn’t leverage those similarities. While I agree that we haven’t yet taken full advantage of it, the most common use of the Web today is browsing in a Web browser, and I see that as very similar to just rd()-ing (reading) a space, or at least far more similar to Linda than a Web services approach of invoking getStockQuote or getPurchaseOrder is.

How different from the human-centric Web is a tuple space returning HTML on a rd()?
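
To show how close the fit is, here’s a loose sketch of Linda’s primitives expressed over HTTP. The mapping (rd as GET, out as POST, in as GET-then-DELETE) is my own rough analogy, not anything standardized, and the space URI is made up.

    import http.client

    # A loose, illustrative mapping of Linda primitives onto HTTP methods.
    # Nothing here is standardized; the point is the structural rhyme.
    class HttpSpace:
        def __init__(self, host):
            self.host = host

        def _request(self, method, path, body=None):
            conn = http.client.HTTPConnection(self.host)
            conn.request(method, path, body=body)
            return conn.getresponse()

        def rd(self, path):              # read without removing: GET
            return self._request("GET", path).read()

        def out(self, path, doc):        # deposit a tuple: POST
            return self._request("POST", path, body=doc).status

        def in_(self, path):             # read and remove: GET then DELETE
            data = self._request("GET", path).read()  # (ignoring the
            self._request("DELETE", path)             # atomicity a real
            return data                               # space would need)

    space = HttpSpace("space.example.org")      # hypothetical tuple space
    page = space.rd("/tuples/weather-toronto")  # a browser does this all day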

… you’d have to negotiate much of the hard stuff that TCP accomplishes (e.g. flow control), for each different party you wanted to interact with.

When building an Internet-scale machine-to-machine stack, your objective is to imbue it with sufficient information to enable ad-hoc integration between parties that implement it. Agreeing on only a “messaging layer” while not agreeing on an interface prevents two parties from integrating in an ad-hoc manner, just as agreeing on IP alone (or IP plus part of the TCP spec) is insufficient to permit an ad-hoc TCP connection to be established.

Congratulations to Tim on his new position at Sun.

In an interview he gave about the move, he said something interesting that I’d like to comment on. When asked whether he’d looked into peer-to-peer technologies, he said;

Only trivially. That has some roots of its thinking in the old Linda [parallel programming coordination language] technology that [David] Gelernter did years and years ago, which I thought was wonderful and which I was astounded never changed the world. But I have been working so hard on search and user interface and things like that for the last couple of years that I haven’t had time to go deep on JXTA.

Linda – or something very Linda-like – did change the world; the World Wide Web.

I’d really love to get Tim’s views on Web services. He’s said a handful of things on REST/SOAP, etc., almost all of which suggest that he totally gets the value of the Web (unsurprisingly). But he’s also said some things that have me wondering whether he appreciates the extent of the mistakes being made with Web services.

BTW, I wonder what’s going to happen on the TAG now that both Tim and Norm are at Sun, given that the W3C process document doesn’t allow two members from the same company? Either way, it will be a big loss to the TAG. Bummer.

Update; Tim resigns

Jim writes, regarding a diagram by Savas;

It shows that managing virtualised resources across organisations isn’t scalable, whereas composition of services is. Why the difference in terms of scalability? In the service-oriented view, services can manage whatever backend resources they have for themselves, therefore the complexity of the application driving the services increases linearly with the number of services. In the resource-oriented view, the consuming application must deal with each resource directly and so complexity increases as the sum of a multiple (the number of resources) of each service.

Aside; I think Jim’s using the WS-RF notion of “resource”, which is cool, since it jibes so closely with the Web’s notion of one (stateful resource, stateless interaction).

I think the scalability claim above is only correct if you ignore a whole class of useful resources; containers which contain other resources. So I could lay out a resource-centric view of the network in that diagram to look exactly like the service-centric view Savas draws. For example, I might define a container called “the aggregate log ‘file’ of all devices in this building”, and this might be dynamically constructed in basically the same way that aggregate RSS feeds are constructed. And, of course, it would be given an http URI so that I could snarf data from it. Each log entry could also provide the URI of the more granular “device” that it came from, so that I, or an automaton, could visit there to find its current status.
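
Here’s a minimal sketch of that aggregate-container idea; the device URIs and entry format are invented, but the construction is just the RSS-aggregator pattern: GET each member resource, merge the entries, and keep a link back to each source.

    from urllib.request import urlopen

    # Hypothetical per-device log resources; in practice these might be
    # discovered from a registry resource rather than hard-coded.
    device_uris = [
        "http://devices.example.org/router-1/log",
        "http://devices.example.org/printer-2/log",
    ]

    def aggregate_log(uris):
        # Build the aggregate "log file" by dereferencing each member
        # resource with GET, RSS-aggregator style.
        entries = []
        for uri in uris:
            with urlopen(uri) as resp:
                for line in resp.read().decode().splitlines():
                    # Each entry keeps the URI of the granular device
                    # resource it came from, so a client (or automaton)
                    # can follow it to check the device's current status.
                    entries.append({"source": uri, "entry": line})
        return entries

    for e in aggregate_log(device_uris):
        print(e["source"], "::", e["entry"])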