Damn, if the W3C can’t get the browser-based Web right, and is home to the core standards that make up WS-Deathstar, it makes one wonder whether they’re really the organization best suited to “Lead the Web to its full potential”.

IMO, all of the problems mentioned at those links would vanish if only the W3C were made accountable to the public rather than to its members; or at least to the public first.

A new agenda item for the upcoming Advisory Board meeting perhaps?

So, how’s the Semantic Web coming along?

Mark Little explains why he’s a proud fence-sitter in the REST vs. WS-* debate:

I’ve never believed in the one-size-fits-all argument; REST has simplicity/manageability to offer in certain circumstances and WS-* works better in others. As far as distributed internet-based computing is concerned, REST is probably closer to Mac OS X, and that makes WS-* the Windows. For what people want to do today I think REST is at the sweet spot I mentioned earlier. But as application requirements get more complex, WS-* takes over. We shouldn’t lose sight of the fact that they can complement each other: it need not be a case of either one or the other.

Ah yes, more of those inaccurate trucks-vs-cars-style comparisons. In truth, SOA no more complements REST than a musket complements an M4, or an Edsel complements a BMW. REST is an improvement upon SOA in the general case, plain and simple.

  • “That should make a good recipe for, what we think, is going to be an interesting and much needed book for a world where the Web is the application/integration platform”.
    (tags: rest web)

Jorgen tries to convince us that Web 2.0 Needs WS-*. But he’s going to have to do a lot better than arguments like this:

And, as if to underscore why I don’t see the REST / POX / AJAX “religion” achieving too much traction among enterprises, try explaining the phrase “The Web is All About Relinquishing Control” to any corporate security manager!

Well, if Jorgen had read what Alex was saying about relinquishing control, he might not think it such an insurmountable task:

This is possible because no one owns the web, and consequently no one can control it. The web is everyone’s property, and everyone is welcome to join in. It’s the world of sharing. The need for control has been relinquished, and it is left to the participants to sort their discrepancies out. The web as a medium and as a technology is not going to offer any assistance in that regard.

In other words, relinquishing control is largely about adopting public standards instead of pursuing proprietary interests, in particular the public Web standards that make inter-agency, document-oriented integration (relatively) simple to achieve. If you are responsible for securing an intranet, your first and primary consideration should be to trust messages that are based on publicly vetted agreements, like most Web messages, and, similarly, to distrust messages whose complete semantics are not publicly vetted, like most SOAP messages.

Sam Ruby writes:

The very notion of a link has become practically inexpressible and virtually unthinkable in the vernacular of SOA.

That’s an awesome soundbite, but I don’t think that’s the (whole) problem, because SOA/WS does have links; they’re called EPRs.

But what SOA doesn’t offer is a uniform interface for the targets of those links, and a uniform interface is what gives links most of their value, as each one contains sufficient information to initiate a subsequent action (e.g. GET).

There’s a unique symbiotic relationship between links and the uniform interface that makes the whole greater than the sum of the parts; individually they’re useful, but together they changed the world.
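To make the distinction concrete, here’s a minimal sketch (mine, not from any of the posts quoted above; the URI is just an example) of what the uniform interface buys you: any link harvested from any document can be dereferenced the same way, with zero service-specific knowledge.

```python
from urllib.request import urlopen

def follow(link: str) -> bytes:
    # The link alone is sufficient information to initiate the next
    # action: GET works identically on every resource on the Web.
    with urlopen(link) as resp:  # issues an HTTP GET
        return resp.read()

# Works for any URI found in any hypermedia document:
body = follow("http://example.org/")

# An EPR also names an endpoint, but acting on it requires out-of-band
# knowledge of that particular service's operations and message
# formats, so the "link" by itself doesn't tell a client what it can
# do next.
```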

Not that anyone would ever mistake me for a query language guru, but that’s really part of the problem; I’m not a query language guru, because I’m a Web guru, and to a certain extent those two roles are incompatible.

The Web doesn’t do generic query, and it’s a better large-scale distributed computing platform as a result. The cost of satisfying an arbitrary query is too large for most publishers to absorb the way they absorb the cost of operating a public Web server and answering GET requests for free.

The Web does per-resource query, which is a far more tightly constrained form of query, if you can even call it that. It makes use of hypermedia to drive an application to the results of a query without the client needing to perform an explicit query. Think of a Facade in front of an actual query processor, where the user provides the arguments for the query, but has no visibility into the actual query being performed. FWIW, this isn’t an unfamiliar way of doing things, as it’s how millions of developers use SQL when authoring Web apps; a user enters “30000” in a form field, hits submit, and then some back-end CGI invokes “select name, salary from emp_sal where salary > 30000”.
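Here’s a sketch of that facade; the table and column names come from the example above, while everything else (the in-memory database, the function name) is hypothetical. The client supplies only the argument, and the query itself never leaves the server.

```python
import sqlite3

# Stand-in for the real back-end database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp_sal (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO emp_sal VALUES (?, ?)",
                 [("Alice", 45000), ("Bob", 28000)])

def salaries_above(threshold: int):
    """The facade: the caller provides the '30000' typed into a form
    field, but has no visibility into the query being performed."""
    # Parameterized rather than string-built, unlike the quoted CGI.
    cur = conn.execute(
        "SELECT name, salary FROM emp_sal WHERE salary > ?",
        (threshold,))
    return cur.fetchall()

# On the Web this would sit behind a resource, e.g. the target of a
# form submission like GET /salaries?min=30000.
print(salaries_above(30000))  # [('Alice', 45000)]
```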

I’m confident that SPARQL will be used primarily the same way SQL is used today, and that you won’t see many public SPARQL endpoints on the Web, just as you don’t see many SQL endpoints on the Web. There’s nothing wrong with that of course, but I think it’s important to keep our expectations in check; SPARQL is likely not going to enable new kinds of applications, nor help much with multi-agency data integration, nor do much else that doesn’t involve helping us with our triples behind the scenes.