Dave suggests implementation details are important. As one of those who made the “implementation detail” claim, I’ll state my definition;
An “implementation detail” is an aspect of the design or implementation of a system that has no effect on the architecture of that system, i.e. one that does not affect the relationships between components, connectors, and data.
Dave had argued for the use of EPRs instead of URIs based on what I believed to be “implementation details”, including this;
Separating the reference property from the URI may make it easier for service components to evolve. A service component may know nothing about the deployment address of the service from the reference properties. This effectively separates the concerns of identifiers into externally visible and evolvable from the internally visible and evolvable. For example, a dispatcher could evolve the format it uses for reference properties without concern of the URI related software.
to which I responded;
That separation you discuss is an implementation detail, not an aspect of the architecture. Consider Web servers that have a config file which is used to map URIs to code, freeing up the code from having to be bound to any particular URI.
Which is to say that the config file used by the Web server is not a part of the architecture of the operational system being discussed, because it is not exposed to the service consumer (not a data element), nor does it impact connector (transfer) semantics or the relationship between components. It would be a data element of the configuration subsystem architecture of course, but that’s not what we’re talking about.
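To make that concrete, here’s a minimal sketch in Python (handler and resource names are invented for illustration) of the kind of config-driven dispatch a Web server does. The URI-to-code mapping lives entirely inside the server; consumers see only URIs and HTTP semantics, so the mapping can change freely without touching the architecture.

```python
# Hypothetical URI-to-handler mapping, analogous to a Web server's config
# file. The service consumer never sees this table; it is an internal
# implementation detail that can change without affecting consumers.

def get_order(order_id):
    # Illustrative handler; a real one would hit a datastore.
    return {"order": order_id, "status": "shipped"}

def get_customer(customer_id):
    return {"customer": customer_id}

# The "config file": maps URI templates to code. Renaming handlers or
# moving code around only requires editing this mapping.
ROUTES = {
    "/orders/{id}": get_order,
    "/customers/{id}": get_customer,
}

def dispatch(path):
    """Resolve a request path against the routing config."""
    for template, handler in ROUTES.items():
        prefix = template.split("{")[0]
        if path.startswith(prefix):
            return handler(path[len(prefix):])
    raise KeyError(path)
```

A call like `dispatch("/orders/42")` routes to `get_order` purely via the config table; the handler code itself is never bound to a particular URI.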
A couple of weeks ago, Phil Wainewright had a great post on the role of standards. A few people picked up on it, but nobody picked up on the opening sentence (AFAICT);
Does anyone still believe that web services will be published and consumed indiscriminately on the open Internet?
I believe that when people set out to build Web services five or so years ago, they were trying to enable just that, yes. But I agree that it hasn’t happened yet, nor, IMO, is it ever likely to with the current approach.
If standards bodies were supposed to innovate, they’d be called innovation bodies. Standards, by definition, can’t be innovative. This is the trouble with all the work currently being put into the WS-* stack. Talented people are wasting time and resources devising capabilities that will never, ever be used. Only those specs that reflect established, proven practices will successfully become durable standards.
Which, taken with the above, says to me that if we want to enable a world of indiscriminate consumption of remote third party services, we need to do so in a manner similar to other systems which have done just that, like the Web, email, instant messaging, etc… Namely, by using a constrained interface, and embracing (application) protocol dependence.
Tim Ewald’s asking all the right questions;
If all of this makes your head spinning, it should, because there is a lack of consistency here. If I’m designing a Web service, where do semantics exist? Are they in the message body, where they started? Are they in one or both of the action URIs? If they aren’t in SOAPAction because we don’t want to count on a header, should they be in wsa:Action, which is also a header? Can portType/operation define semantics (as the default values of various action headers suggest) or not?
Sounds eerily similar to some previous comments of mine.
I’ve used (and studied) many distributed computing platforms, and I’ve never seen anything quite so thoroughly fouled up as Web services has fouled up the essence of the contract. It tried to be everything to everybody, and in the end is nothing to anybody. Constrain or die.
For those of you not at XML 2004 last week, my new client is Justsystem, a mid-sized (800-employee) consumer and enterprise software company based in Japan with a long history of successful product development. I’m working there part time, for at least the next several months, assisting them in making a new Web-based compound document technology of theirs - xfy (pronounced “ecks-fi”) - a success. At the conference we announced the immediate availability of a technology preview (downloadable at that link), and the planned productization of the technology by mid-2005.
Since I first started exploring CORBA and OpenDoc integration back in 1995 or so, the vision of compound documents as both a general user interface model and a largely universal data extensibility solution has never strayed too far from my thoughts, nor my work. So when I was approached by the company in September, and introduced to xfy, the decision to help them was a total no-brainer. They totally get it.
Yes, it’s protocol independence theme month!
A gem of an exchange between the WSD and XMLP WGs.
It turns out that WSD wanted to be able to send a SOAP request via HTTP, but get the response back on some other channel. Fortunately, HTTP supports the 202 response code which permits the server to indicate exactly that. But unfortunately, the default SOAP 1.2 HTTP binding explicitly does not support 202. It actually used to, but the protocol-independence promoters had it removed because they feared, IIRC, that too much of HTTP was exposed to the application.
That made my week. 8-)
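The 202 mechanism itself is simple to demonstrate. Here’s a minimal sketch in Python (the service path and envelope are placeholders): the server accepts a POSTed SOAP envelope and answers 202 Accepted with no body, which tells the client that the actual response will arrive on some other channel.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AsyncSoapHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the request body; the real response will be delivered
        # out-of-band (e.g. via a callback connection to the client).
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(202)  # Accepted: processing deferred
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind an ephemeral port and serve in the background.
server = HTTPServer(("localhost", 0), AsyncSoapHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://localhost:{server.server_port}/service",
    data=b"<soap:Envelope/>", method="POST")
resp = urllib.request.urlopen(req)
server.shutdown()
```

After the exchange, `resp.status` is 202 and the body is empty; the client knows its request was accepted but that no response is coming back on this connection.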
P.S. I hope to see lots of people out at XML 2004 next week. I drive down tomorrow. It’ll be my third road trip to D.C. in as many years; about 1,000 km each way, but a pleasure in my ride (though not as nice as a trip to Boston through Vermont and New Hampshire!). I’ll be announcing what I’ve been up to for the past little while too, with my latest client.
A good presentation by David Booth which elaborates on what I’ve been saying for some time now: messages should be self-descriptive, and that effectively requires putting the operation in the message itself (unless you’ll only ever have one operation).
I’ve got a couple of minor nits with the presentation, but nothing of consequence. Nice job, David.
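As a sketch of what self-description buys you (the operation names and payloads below are invented), consider a message that carries its own operation: any recipient can dispatch on the message content alone, with no out-of-band contract needed to say what the message means.

```python
# A self-descriptive message carries its own operation, so a recipient can
# process it without out-of-band agreement about what the body "means".

def handle(message):
    """Dispatch purely on information carried in the message itself."""
    operations = {
        "getStockQuote": lambda body: {"symbol": body["symbol"], "price": 10.0},
        "placeOrder": lambda body: {"accepted": True},
    }
    op = message["operation"]  # the operation travels with the message
    return operations[op](message["body"])

# The recipient needs nothing but the message to know what to do with it.
msg = {"operation": "getStockQuote", "body": {"symbol": "JSYS"}}
result = handle(msg)
```

Strip the `operation` out of the message and park it in a WSDL portType or an optional header, and the message alone no longer tells you what it is; that’s the inconsistency Tim is pointing at.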
I feel dirty. I actually agree with Dave Winer;
Tim Bray suggests that Atom might nearly be finished. I read his comments carefully, and find the benefits of the possibly-final Atom to be vague, and the premise absolutely incorrect. Unlike SGML, RSS has been widely deployed, successfully, by users of all levels of technical expertise. There are many thousands of popular RSS feeds updating every day, from technology companies like Apple, Microsoft, Yahoo, Sun and Oracle, big publishing companies like Reuters, The Wall Street Journal, NY Times, Newsweek, Time, BBC, Guardian, etc, exactly the kinds of enterprises that his employer serves. It’s also widely used by today’s opinion leaders, the bloggers. Where SGML was beached and floundering, RSS is thriving and growing. So to conclude that RSS needs the same help that SGML did, is simply not supported by facts.
I recently advised a client who was planning to add syndication feed production and consumption capabilities to their product to avoid the Atom format and go with the RDF-based RSS 1.0 and the Atom protocol. That way they get self-descriptive extensibility and backwards compatibility with a massive installed base of RSS processors, plus a simple protocol that integrates cleanly into the Web.