After much time and anguish, “application/soap+xml” is finally a done deal: RFC 3902.
<html xsl:version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Expense Report Summary</title>
  </head>
  <body>
    <p>Total Amount: <xsl:value-of select="expense-report/total"/></p>
  </body>
</html>
Update: ok, for the two of you that missed my point, that document is both an XHTML document and an XSLT 1.0 stylesheet. All the root namespace tells you is, well, the root namespace. The actual type is orthogonal to this, and in fact orthogonal to anything in the document itself. Unless we want to prevent this form of compound document from being used, it is critical that media types continue to be the key from which applications are dispatched.
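To make that concrete, here’s a minimal Python sketch of dispatching on the media type rather than the root namespace. The handler names and their behaviour are purely illustrative, my invention, not any real framework’s API; the point is only that the same bytes get processed differently depending on the Content-Type header:

```python
# Sketch: the media type from the HTTP header -- not anything sniffed
# from inside the document -- selects the application behaviour.

def handle_as_xhtml(body: bytes) -> str:
    # Hypothetical: treat the bytes as a web page to render.
    return "rendered as a web page"

def handle_as_xslt(body: bytes) -> str:
    # Hypothetical: treat the same bytes as a stylesheet to compile.
    return "compiled as a stylesheet"

HANDLERS = {
    "application/xhtml+xml": handle_as_xhtml,
    "application/xslt+xml": handle_as_xslt,
}

def dispatch(content_type: str, body: bytes) -> str:
    return HANDLERS[content_type](body)

# The same bytes, two entirely different applications:
same_bytes = b"<html xsl:version='1.0'>...</html>"  # stand-in for the document above
as_page = dispatch("application/xhtml+xml", same_bytes)
as_sheet = dispatch("application/xslt+xml", same_bytes)
```

Nothing in `same_bytes` changed between the two calls; only the media type did, which is exactly why the media type has to remain the dispatch key.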
As previously suspected, it seems WS-Transfer is missing POST because of an attempt to limit it to CRUD semantics. From the latest MS whitepaper;
A factory is a Web service that can create a resource from its XML representation. WS-Transfer introduces operations that create, update, retrieve and delete resources.
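For comparison, WS-Transfer’s four operations line up with HTTP’s uniform methods roughly as follows. This mapping is my own gloss, not anything from the spec:

```python
# My gloss, not the spec's: WS-Transfer reinvents four of HTTP's
# methods as SOAP operations, limited to CRUD semantics.
WS_TRANSFER_TO_HTTP = {
    "Get":    "GET",
    "Put":    "PUT",
    "Delete": "DELETE",
    "Create": "POST",  # but only as a factory; POST's general
                       # "process this request" semantics are absent
}
```

The gap is the one noted above: Create covers POST-as-factory, but nothing in WS-Transfer carries POST’s broader “process this” semantics.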
Kudos to Dave for pulling up his socks and discovering what Semantic Web technologies have to offer first hand – you know, all that wonderful extensible goodness I’ve been going on about after discovering it for myself.
He talks about a perceived problem here …
In order to prevent an area code, I need to add the area code with a cardinality of 0. Now I think this is a pretty big problem. The whole Semantic Web world view of open content models comes and bites us here. The rough assumption is that if a property isn’t specifically excluded, it might be related to the thing.
That’s not quite right. It’s one thing to look at the information space that is the Web/Sem-Web and see two separate but related resources (e.g. the PO and the customer), but it’s something else entirely to look at a message on the Web and conclude that there might be some data elsewhere which is intended to be communicated. That’s where self-description comes in, and it prescribes that if a message arrives with a PO and without a customer, then the customer information is not part of the message, as you require. What Bijan seems to be talking about is the former, not the latter, i.e. to avoid closed world assumptions.
He adds;
If we think of the main point of a schema language as defining the language for exchanging information, it seems that RDF/OWL is easier to use for extensibility and versioning. Which might be no surprise given the design centres. But given the inability to control the schemas in all the right facets – such as mandatory extensions – it doesn’t fully solve the problems of large scale distributed system extensibility and versioning. More work to be done….
…which is true, I think there is more work to be done (though I also think what’s done is a decent 80-90% solution, just as HTTP is despite not having mandatory extensions). More on mandatory extensions and RDF later though; I’ve given this subject a lot of thought (and code) over the past couple of years.
P.S. I think it’s really interesting that Web services proponents are discovering the virtues of the Semantic Web before they really appreciate all the Web itself has to offer. I totally didn’t see that coming! Coincidentally, check out this message (lists.w3.org is having issues right now, stay tuned…) where an OWL-S user realizes some of the problems with the Web services model.
But putting the misery of these experiences aside, I’m surprised at how little I’ve had to worry about SOAP. As it became clear to me that Web Services were becoming a menace to much of the goodness wrought by XML, I worried that I would be forced to do a lot of gritting my teeth at work while I accommodated clients’ insistence on WS. This hasn’t turned out to be the case. In several cases where WS “end points” have been suggested, I’ve been surprised at how easily my suggestions of a REST-like alternative are embraced (the fact that I could usually whip up running code in hours helped a lot).
That’s what I’m seeing too, at least once you’re in the door (though for a pretty small sample space of two clients). On the other hand, looking for cool large scale distributed systems work has become extremely painful since Web services came onto the scene. Most projects are asking for “SOAP/WSDL/UDDI experience”, which leaves me either having to lie and say “Oh yes, I’ve got lots” (which I won’t do, of course), or having to put a pleasant face on WS-Insanity and brave the inevitable lack of interest, as I did in my resume;
He believes that, for the foreseeable future, the bulk of innovation in Internet scale systems will occur via additional architectural constraints applied to the Web; for example the Semantic Web, or the Two Way Web. Unfortunately, these beliefs also indicate to him that Web services have some serious architectural flaws that make their suitability as a large scale integration solution questionable. As a result, he spends considerable amounts of time working within standards setting organizations to ensure that these specifications – including SOAP 1.2 – take maximal advantage of the Web.
Right from the spec, we have this example of an EPR;
<wsa:EndpointReference xmlns:wsa="..." xmlns:fabrikam="...">
  <wsa:Address>http://www.fabrikam123.example/acct</wsa:Address>
  <wsa:ReferenceProperties>
    <fabrikam:CustomerKey>123456789</fabrikam:CustomerKey>
  </wsa:ReferenceProperties>
  <wsa:ReferenceParameters>
    <fabrikam:ShoppingCart>ABCDEFG</fabrikam:ShoppingCart>
  </wsa:ReferenceParameters>
</wsa:EndpointReference>
Somebody please tell me why on earth that isn’t a URI? You know, something like;
http://www.fabrikam123.example/acct/123456789?ShoppingCart=ABCDEFG
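The flattening is mechanical, as this Python sketch shows. The namespace URIs are stand-ins (the spec’s example elides them with “...”), and the rule itself, reference properties become path segments, reference parameters become the query string, is mine, not WS-Addressing’s:

```python
# Sketch: collapsing the spec's EPR example into the equivalent URI.
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Stand-in namespaces; the spec's example leaves them as "...".
WSA = "http://schemas.xmlsoap.org/ws/2004/08/addressing"
FAB = "http://fabrikam123.example/ns"

EPR = f"""
<wsa:EndpointReference xmlns:wsa="{WSA}" xmlns:fabrikam="{FAB}">
  <wsa:Address>http://www.fabrikam123.example/acct</wsa:Address>
  <wsa:ReferenceProperties>
    <fabrikam:CustomerKey>123456789</fabrikam:CustomerKey>
  </wsa:ReferenceProperties>
  <wsa:ReferenceParameters>
    <fabrikam:ShoppingCart>ABCDEFG</fabrikam:ShoppingCart>
  </wsa:ReferenceParameters>
</wsa:EndpointReference>
"""

def epr_to_uri(epr_xml: str) -> str:
    root = ET.fromstring(epr_xml)
    addr = root.findtext(f"{{{WSA}}}Address")
    key = root.findtext(f".//{{{FAB}}}CustomerKey")
    cart = root.findtext(f".//{{{FAB}}}ShoppingCart")
    # Properties -> path segments; parameters -> query string.
    return f"{addr}/{key}?{urlencode({'ShoppingCart': cart})}"
```

Everything the EPR identifies fits in one URI, which any HTTP client can then use directly.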
The original impetus behind XML, at least as far as I was concerned back in 1996, was a way to exchange data between programs so that a program could become a service for another program. I saw this as a very simple idea. Send me a message of type A and I’ll agree to send back messages of types B, C, or D depending on your A. If the message is a simple query, send it as a URL with a query string. In the services world, this has become XML over HTTP much more than so called “web services” with their huge and complex panoply of SOAP specs and standards.
Wow! I actually had a weblog entry written, that questioned whether Adam’s transition away from a Web services company, and towards a Web company, might have had something to do with him “seeing the light” regarding REST vs. SOA. I decided not to post it because it was pure conjecture. But it seems my gut feel was partly right. However, he also adds;
Why? Because it is easy and quick.
Which is true, but not a very satisfying answer I’d say. Why is it easy and quick? Anyhow, I’m sure Adam will have lots of time to mull that question over in his work at Google. I look forward to seeing what cool stuff he comes up with there.
Norm continues the slagging of the XML serialization of RDF, RDF/XML.
So, since somebody’s gotta do it, I’ll put my neck on the line by saying that I really don’t have any big problems with RDF/XML.
For me, RDF/XML works because it makes the simple things simple. Consider the following document;
<Person xmlns="http://example.org/foofoo/">
  <name>Mark Smith</name>
  <age>55</age>
</Person>
IMO, if the serialization can’t support extracting the same triples as a human would intuitively expect to be there, then it’s broken, or at least not suitable for hand-authoring (which I’ve had no problems doing).
Yes, that’s simple, but I’d say it’s about 70% of what I do with RDF, and I’d expect that for many people, it’s probably at least 50% of what they do with it.
And yes, striping is a nuisance sometimes. And lists and collections are annoying. And don’t get me started on reification. But the simple stuff is simple.
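The “triples a human would intuitively expect” from that simple document can be extracted in a few lines. To be clear, this is nowhere near a conformant RDF/XML parser, just a naive sketch of the intuitive reading for documents of exactly this shape: the root element names the type, and each child is a property/value pair on one anonymous subject:

```python
# Naive sketch, NOT a real RDF/XML parser: the intuitive triples for
# a flat typed-node document like the Person example above.
import xml.etree.ElementTree as ET

DOC = """
<Person xmlns="http://example.org/foofoo/">
  <name>Mark Smith</name>
  <age>55</age>
</Person>
"""

RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def intuitive_triples(xml_text, subject="_:b0"):
    root = ET.fromstring(xml_text)
    # ElementTree reports tags in Clark notation: {namespace}localname.
    triples = [(subject, RDF_TYPE, root.tag)]
    for child in root:
        triples.append((subject, child.tag, child.text.strip()))
    return triples
```

For the Person document this yields three triples: the rdf:type assertion plus one triple per property, exactly the reading a human gives it at a glance.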
Actually, I should come clean and say that despite my claims above about not having any problems with RDF/XML, I have had some. But you know what? The problems I’ve had are always with XML. XML is just a really sucky syntax for lots of network-centric things I do, thanks to its seemed-like-a-good-idea-at-the-time deterministic failure model. This is what requires you (unless you want to institute transaction semantics between your app and parser, uh huh) not to do any application processing until you’ve received the final “/>”; because you never know, it might not arrive, and it’s only then that you realize “Oopsie!”, that wasn’t XML after all. Streaming? By definition, impossible. Latency-sensitive apps? Forget about it!
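That failure mode is easy to demonstrate with Python’s stdlib pull parser (the document content here is invented for illustration): the parser happily hands you events from a stream that, it turns out, was never a well-formed XML document at all, and you only find out at the end:

```python
# The deterministic failure model in action: events arrive before the
# parser can know whether the document will ever be well-formed.
from xml.etree.ElementTree import XMLPullParser, ParseError

parser = XMLPullParser(events=("end",))
parser.feed("<report><total>42.00</total>")  # stream cut off mid-document

seen = [elem.tag for _, elem in parser.read_events()]
# "total" has already been delivered -- tempting to act on it...

try:
    parser.close()
    well_formed = True
except ParseError:
    well_formed = False  # ...but the document never was XML
```

Until `close()` succeeds, any processing you did on those early events was done on something that might retroactively turn out not to be XML, which is precisely the transaction-semantics problem above.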
I’m overstating the case, of course. I use XML, a lot. It’s the default syntax that my last project used on the wide open Internet due to its pervasive recognition and support. We also used it as the default output format for our embedded systems, since producing it is cheap. But for embedded consumption, and between instances of our own software, we use Turtle.
Stefan found a paper with a very interesting title, that I hadn’t heard anything about; Developing Web Services Choreography Standards – The Case of REST vs. SOAP.
I’d heard Keith Swenson’s name a few days earlier in the context of ASAP and some of the recent buzz surrounding it, but I’ve known about his work for a while. I spent some time studying both SWAP (co-authored with a buddy of mine, Greg Bolcer) and IPP during their development.
Aside; IPP actually slowed my studies into the Web (along with HTTP-NG, sigh) because I, for whatever reason, started by assuming that IPP was a good use of HTTP. Only later did I realize that the exact opposite was true (as was only recently confirmed by Roy).
On the upside, IPP’s use of POST spawned an important and interesting exchange, via Internet draft, that I studied very carefully (which lead to a draft of my own); Don’t go Postal, and The Use of POST.
Anyhow, all of this to say that I was quite surprised to see REST reasonably well represented … except for perhaps this part;
Several standards for REST style workflow interaction have been proposed, namely SWAP, Wf-XML [26, 52, 58], AWSP [49] and ASAP [38]. Instead of relying on the HTTP 1.0 commands, these standards provide higher level operations that are specifically designed for the interaction with remote processes.
Hint; if you’re trying to provide operations at a higher level than the application layer (e.g. HTTP), you’re abusing HTTP, not using it. But, the operations being tunneled are themselves reasonably generic, which is good design practice for this space, and what I think the authors were trying to point out as nearly RESTful.
It’s really encouraging to see REST mentioned in the context of workflow, since I believe it provides a better base for scalable solutions than SOA does. I think this paper could be an important piece of work if the authors were to spend some time studying both architectural styles from a software architecture POV, as well as actually building some systems with each. As is, the analysis is pretty decent, but the conclusion – basically that it’s a crap shoot over which one is superior – needs some major work.