The term “zero install” is commonly used to refer to the ability to deploy new applications without upgrading the client software that has to deal with them. In an important sense, the entire World Wide Web project, including the Semantic Web, can be viewed as an attempt to bring this modus operandi to distributed computing in the large, not only to browsers (and the humans using them) but also to automata. URIs, HTTP, and RDF have all been designed with this objective in mind.

Via Dave Beckett, we see Timo Hannay explaining one of the key advantages of RDF:

I’m currently involved in a project that involves aggregating and querying a lot of RSS data. The only extension modules we can deal with in a fully generic way are the RDF-typed ones designed to work with RSS 1.0. To deal with RSS 2.0 modules (which don’t use an RDF structure, at least currently) we either have to manually add routines for each one to our code (a maintainability nightmare) or skip them altogether (which means we lose data).
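
To make the contrast concrete, here is a minimal sketch, assuming Python’s rdflib library and a hypothetical local file “feed.rdf”, of what that generic handling looks like: because RSS 1.0 feeds are RDF, an aggregator can enumerate every property of every item, extension modules included, without knowing any of the vocabularies in advance.

```python
# A minimal sketch (not Hannay's actual code) of "fully generic" handling:
# RSS 1.0 is RDF, so extension-module data is just more triples and needs
# no module-specific parsing routines.
# Assumes the rdflib library and a hypothetical local file "feed.rdf".
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

RSS = Namespace("http://purl.org/rss/1.0/")

g = Graph()
g.parse("feed.rdf", format="xml")  # RSS 1.0 documents are RDF/XML

# Walk every item and print all of its properties, including properties
# from extension modules (Dublin Core, syndication, ...) that the
# aggregator has never seen before.
for item in g.subjects(RDF.type, RSS.item):
    for predicate, value in g.predicate_objects(subject=item):
        print(item, predicate, value)
```

With RSS 2.0 modules, by contrast, each new vocabulary would require its own parsing routine before its data could be queried at all.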

Would this architectural feature be useful to Web services? If so, what would it take for them to get it?
