Graham Glass: “Pretty much any piece of software can be exposed as a collection of services”

Mark Baker: “Pretty much any piece of software can be exposed as a graph of resources”
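The difference is easy to see in code. Here's a hypothetical sketch (all names and URIs are mine, purely for illustration): in the service view, new capability means new app-specific verbs on an interface; in the resource view, new capability means new resources behind a fixed set of verbs.

```python
# Hypothetical sketch of the two views; names and URIs are illustrative.

# Service view: each new capability is a new, app-specific method.
class StockService:
    def get_quote(self, symbol): ...
    def place_order(self, symbol, quantity): ...

# Resource view: each new capability is a new resource; the interface
# (GET, PUT, POST, DELETE) never changes.
import urllib.request

def get(uri):
    # GET means the same thing for every resource in the graph.
    with urllib.request.urlopen(uri) as resp:
        return resp.read()

quote = get("http://example.org/stocks/EXMP/quote")
```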

I guess I didn’t mention that I’m now a SearchWebServices Expert; the kind that you can ask questions of, along with smart folk such as Sean McGrath, Anne Thomas Manes, Doron Sherman, and Roman Stanek.

The last question I was asked was:

Does WebDAV violate the principles of REST?

To which I answered, “No”, though with a caveat.

The term “zero install” is commonly used to refer to the ability to deploy new applications without upgrading the client software that has to deal with them. In an important sense, the entire World Wide Web project, including the Semantic Web, can be viewed as an attempt to bring this modus operandi to distributed computing in the large, not just for browsers (and the humans using them) but also for automata. URIs, HTTP, and RDF have all been designed with this objective in mind.
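To make that concrete, here's a minimal sketch of what a “zero install” client might look like (the handler table and media types are my own assumptions). It knows URIs, HTTP, and a few media types, and nothing about any particular application, so new applications can be deployed server-side without it ever being upgraded.

```python
import urllib.request

# A deliberately generic client: it understands URIs, HTTP, and a small
# table of media types, but no application in particular. The handler
# entries here are hypothetical placeholders.
HANDLERS = {
    "text/html": lambda body: body.decode("utf-8", "replace"),
    "application/rdf+xml": lambda body: body,  # hand off to an RDF parser
}

def fetch(uri):
    with urllib.request.urlopen(uri) as resp:
        media_type = resp.headers.get_content_type()
        handler = HANDLERS.get(media_type, lambda body: body)  # unknown: opaque bytes
        return handler(resp.read())
```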

Via Dave Beckett, we see Timo Hannay explaining one of the key advantages of RDF:

I’m currently involved in a project that involves aggregating and querying a lot of RSS data. The only extension modules we can deal with in a fully generic way are the RDF-type ones designed to work with RSS 1.0. To deal with RSS 2.0 modules (which don’t use an RDF structure, at least currently) we either have to manually add routines for each one to our code (a maintainability nightmare) or skip them altogether (which means we lose data).

Would this architectural feature be useful to Web services? If so, what would it take for them to get it?
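For the curious, here's roughly what Timo is describing, sketched with rdflib (my choice of parser; any RDF parser would do, and the feed URI is hypothetical). Because RSS 1.0 is RDF, data from any extension module parses into the same triple model as the core feed, with no per-module code:

```python
from rdflib import Graph  # assumes the rdflib package

g = Graph()
g.parse("http://example.org/feed.rdf")  # hypothetical RSS 1.0 feed

# Every statement -- core RSS or any extension module we've never heard
# of -- arrives as just another (subject, predicate, object) triple.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```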

Ergh, I don’t know how I could possibly have missed this (get a blog, Steve!), but one of the distributed systems people I most respect, Steve Vinoski, wrote a great article on REST and Web services (PDF) a year ago that should, IMO, be mandatory reading for Web services folk.

I have one (rhetorical) question for him, though. Steve writes:

These verbs form a generic application interface that can be applied in practice to a broad range of distributed applications despite the fact that it was originally designed for hypermedia systems.

So is it that HTTP can be used for applications other than hypermedia, or is it that hypermedia is perhaps an extremely generic application model?
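To see what “generic application interface” means in practice, here's a hedged sketch (the URIs are hypothetical): the verbs stay fixed across wildly different applications; only the resources and their representations change.

```python
import urllib.request

def request(method, uri, body=None):
    # One small function covers the whole application interface.
    req = urllib.request.Request(uri, data=body, method=method)
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.read()

# The same verbs drive an order system, a mailbox, a wiki, ...
request("GET", "http://example.org/orders/42")               # read state
request("PUT", "http://example.org/orders/42", b"<order/>")  # replace state
request("DELETE", "http://example.org/orders/42")            # remove it
```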

As an afterthought to my last response to Doug, I thought I’d just say a quick word about “documents”, which Doug likes to talk about.

A document is state (the S in REST). Look in any file folder (the kind in the filing cabinet), document storage system, archived tape, or heck, any file system, and you will find chunks of state. What you won’t see in any of these are “methods”. If you encapsulate the state within a “method” wrapper, then what you have is no longer a document, because it carries with it intent; state does not.
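A tiny, hypothetical illustration of the distinction: the first value below is a document, pure state; the second wraps the same state in a “method”, so it now carries intent and is no longer just a document.

```python
# Pure state: a document. It says nothing about what should be done with it.
purchase_order = {
    "item": "widget",
    "quantity": 12,
}

# The same state wrapped in a "method": intent now travels with the data,
# so this is no longer a document. (The structure here is hypothetical.)
rpc_message = {
    "method": "submitPurchaseOrder",
    "params": {"item": "widget", "quantity": 12},
}
```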

If document transfer is your objective, then REST is what you need (and maybe some more constraints on top).

Doug responds, in reference to using URIs:

This implies that the document is associated with (tightly coupled to) a fixed location as specified in the URI.

Not at all. Are you aware that the machine you’re grabbing my blog from relocated up the street five months ago? On the off chance that you were, did any of your links to my blog have to change, or were you or any other linker to my site impacted in any way? No, of course not. An http URI is not location dependent if it uses a DNS name (as opposed to an IP address), because a DNS name can map to multiple IP addresses over time.
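If you want to see the indirection for yourself, here's a small sketch (the hostname is just an example): the name in the URI is bound to IP addresses only at resolution time, so the machine can move without a single link changing.

```python
import socket

def current_addresses(hostname):
    # DNS answers with whatever addresses are current *right now*.
    infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# The URI keeps naming the host; the addresses behind it are free to change.
print(current_addresses("example.org"))
```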

He also writes:

You can’t send email to me using an HTTP GET or[sic] POST to my desktop PC because it may be turned off.

That’s not true. You appear to be looking at REST from the point of view of a browser. It would help to look *past* the browser, to state transfer (aka document transfer). You could have an HTTP gateway sitting on some server someplace, to which I POST emails. When you boot up your laptop, you’d connect to the server and invoke GET to retrieve those emails. That’s perfectly RESTful.
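To show there’s no magic involved, here's a minimal sketch of such a gateway using Python’s standard http.server (the paths and port are hypothetical, and a real gateway would obviously want authentication and persistent storage):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

MAILBOXES = {}  # mailbox path -> list of stored messages

class MailboxHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # A sender POSTs a message to the mailbox resource.
        length = int(self.headers.get("Content-Length", 0))
        MAILBOXES.setdefault(self.path, []).append(self.rfile.read(length))
        self.send_response(201)  # Created
        self.end_headers()

    def do_GET(self):
        # The recipient GETs the mailbox whenever their machine is on.
        body = b"\n".join(MAILBOXES.get(self.path, []))
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), MailboxHandler).serve_forever()
```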

For those not familiar with it, the W3C’s www-archive public email archive is a great place to peruse some behind-the-scenes activities that are public but not announced. It’s also home to a lot of Tim and Dan’s discussions which are redirected from other mailing lists. Well worth following.

While checking it out yesterday, I found this gem from Roy Fielding on ambiguity in identification and the need to confront “secondary semantics” that result from that ambiguity.

In 5 or 10 years, people are going to look back on these archives (and perhaps the RESTful ones on www-ws-arch 8-), and realize that these were actually some of the most advanced and important topics of the day in large scale distributed systems research and practice … a far cry from the esoterica they must seem to Web services proponents. I’m humbled when I realize how fortunate I am to have understood this early enough (only five years late, relative to 1993 when I first learned of the Web! 8-) to be able to make a contribution to the World Wide Web project, if only through education and evangelizing (hey, somebody has to do it!). I’m confident it will be my professional legacy.