… is the title of a new blog post by yours truly on my consultancy’s weblog.

Danny notes that I chimed in with some Web architectural advice for those considering SPARQL updates.

On that page, he asks;

I’d like to hear more from Mark about what he sees as problematic about the current notion of binding. Although the spec seems unusual, the end result does seem to respect WebArch

It does respect Web architecture, but only because it’s read-only. As soon as you need to add mutation support, or indeed any other operation on the same resource, the approach breaks down and what results is not Web-friendly. This is because “operation on the same resource” doesn’t work if the operation is part of the resource name; if the operation changes, the name changes, and therefore so does the resource being identified.

This is the same problem that APIs such as Flickr and del.icio.us suffer from; Web-friendly for read-only, horribly broken for updates.
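To make that concrete, here’s a minimal sketch contrasting the two approaches. All URIs are hypothetical, loosely in the style of those APIs:

```python
import urllib.request

# RPC style: the operation is baked into the resource name. The "same"
# photo has one name for reading and another for writing, so links,
# caches, and bookmarks can't tell they refer to one resource.
read_uri  = "http://api.example.org/services?method=photos.getInfo&photo=42"
write_uri = "http://api.example.org/services?method=photos.setMeta&photo=42"

# Resource style: one name, uniform operations. The URI identifies the
# photo; the HTTP method says what to do with it.
photo_uri = "http://example.org/photos/42"

get_req    = urllib.request.Request(photo_uri)                   # read it
put_req    = urllib.request.Request(photo_uri, data=b"<photo/>",
                                    method="PUT")                # replace it
delete_req = urllib.request.Request(photo_uri, method="DELETE")  # remove it
```

In the second form, adding a new operation never changes the name, so everything already pointing at the photo keeps working.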

Making something Web-friendly means mapping your data and services into a set of inter-linked resources. Application-specific APIs work directly against that.

And FWIW, from a REST POV the constraint that’s being disregarded in these scenarios is commonly resource identification.

Not that anyone would ever mistake me for a query language guru, but that’s really part of the problem; I’m not a query language guru, because I’m a Web guru, and to a certain extent those two roles are incompatible.

The Web doesn’t do generic query, and it’s a better large scale distributed computing platform as a result. The cost of satisfying an arbitrary query is too large for most publishers to absorb, in the way they already internalize the cost of operating a public Web server and answering GET requests for free.

The Web does per-resource query, which is a far more tightly constrained form of query, if you can even call it that. It makes use of hypermedia to drive an application to the results of a query without the client needing to perform an explicit query. Think of a Facade in front of an actual query processor, where the user provides the arguments for the query, but has no visibility into the actual query being performed. FWIW, this isn’t an unfamiliar way of doing things, as it’s how millions of developers use SQL when authoring Web apps; a user enters “30000” in a form field, hits submit, and then some back-end CGI invokes “select name, salary from emp_sal where salary > 30000”.
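Here’s a minimal sketch of that facade, assuming a hypothetical /salaries resource backed by SQLite; the table and file names are made up:

```python
import sqlite3

def get_salaries(query_params):
    """Serve GET /salaries?min=30000: the client names a resource and
    supplies an argument; the SQL itself stays behind the facade."""
    minimum = int(query_params.get("min", "0"))
    conn = sqlite3.connect("hr.db")  # assumed to exist, with an emp_sal table
    # Fixed, parameterized query: the user picks the argument but has no
    # visibility into (or control over) the shape of the query.
    rows = conn.execute(
        "SELECT name, salary FROM emp_sal WHERE salary > ?", (minimum,)
    ).fetchall()
    conn.close()
    return rows
```

Binding the argument with a placeholder, rather than pasting it into the string, keeps the facade intact: the client can vary the value but never the query.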

I’m confident that SPARQL will be used primarily the same way SQL is used today, and that you won’t see many public SPARQL endpoints on the Web, just as you don’t see many SQL endpoints on the Web. There’s nothing wrong with that of course, but I think it’s important to keep our expectations in check; SPARQL is likely not going to enable new kinds of applications, nor help much with multi-agency data integration, nor do much else that doesn’t involve helping us with our triples behind the scenes.
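For illustration, here’s roughly what “helping us with our triples behind the scenes” might look like, sketched with rdflib; the data is made up:

```python
from rdflib import Graph

# A few triples an application might hold internally.
data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:name "Alice" .
<http://example.org/bob>   foaf:name "Bob" .
"""
g = Graph()
g.parse(data=data, format="turtle")

# The SPARQL runs inside the application; only its results ever get
# published, as ordinary Web resources behind ordinary URIs.
results = g.query("""
    SELECT ?name WHERE {
        ?person <http://xmlns.com/foaf/0.1/name> ?name .
    }
""")
for row in results:
    print(row.name)
```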

Jon discusses the pros and cons of uniform versus specific interfaces. This really is, as Dan Connolly described, a “fascinating tension”. So Jon’s in good company. 8-)

I agree with the gist – that there are pros and cons to each approach – but I disagree with the conclusion. Towards the end of the blog, Jon writes;

It’s a great idea to push the abstraction of the core primitives above the level of SELECT/CUT/PASTE. But there’s little to be gained by pretending that a table of contents is a pivot table.

If you’ve already deployed a network interface which is capable of accessing and manipulating pivot tables, then there is an enormous amount to be gained from being able to reuse that interface to access and manipulate tables of contents, tables of figures, or even dining room tables. Deploying new interfaces on the Internet is extremely difficult, expensive, and time consuming. The SELECT/CUT/PASTE analogy, while illustrative, doesn’t reflect the nuances of a network interface, which must work between multiple trust domains, not within just one.
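The reuse argument is easy to see in code. A sketch, with made-up URIs: one retrieval routine serves every kind of “table”, dispatching on media type rather than on a per-application API:

```python
import urllib.request

def fetch(uri):
    """GET any resource; hand back its media type and representation."""
    resp = urllib.request.urlopen(uri)
    return resp.headers.get_content_type(), resp.read()

# The same code, unchanged, for very different resources:
#   fetch("http://example.org/reports/q3/pivot-table")
#   fetch("http://example.org/books/webarch/table-of-contents")
#   fetch("http://example.org/catalog/dining-room-tables/42")
```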

WRT “ANALYZE” and “IMPROVE”, both of those “actions” can be accomplished without introducing new methods on the interface. For example, “ANALYZE” is a safe action, so it could be handled by piping your content through an intermediary via a GET invocation, where the intermediary “analyzes” the content and returns the results (perhaps using annotation). “IMPROVE”, as I understand it, could be implemented similarly, but using WebDAV’s COPY method, or maybe just PUT or POST; it depends on how it’s used. Either way though, the intermediary would do the “improving”.
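Sketched in the same vein; the analyzer and improver hosts are hypothetical:

```python
import urllib.parse
import urllib.request

doc_uri = "http://example.org/report.html"

# "ANALYZE" is safe, so it stays a GET: the analysis is itself a resource
# whose name includes the document being analyzed.
analyze_uri = ("http://analyzer.example.org/analysis?doc="
               + urllib.parse.quote(doc_uri, safe=""))
analyze_req = urllib.request.Request(analyze_uri)  # GET by default

# "IMPROVE" changes state, so POST (or PUT, or WebDAV COPY) fits; the
# intermediary does the improving.
improve_req = urllib.request.Request(
    "http://improver.example.org/improvements",
    data=b"<draft>rough prose</draft>",
    method="POST",
)
```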

P.S. I like to use the “drag and drop” desktop metaphor as a comparison to REST’s uniform interface; GET as “double-click” (except for application invocation, which isn’t technically part of drag-and-drop), POST as drag-and-drop, PUT as “file->save”, and DELETE as delete. This analogy breaks down with the drag-to-trash-can action, but it holds for the most part, because drag-and-drop was designed as an interface for “all desktop objects”, which is pretty similar to “all resources”.