I’ve written a draft note on how I see REST’s hypermedia application model relating to workflow/process-execution/choreography/orchestration/etc. My conclusion: we don’t need these specs, because we should just be following URIs.
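For the curious, here’s a rough sketch (in Python, with a made-up host and a made-up link vocabulary) of what “just following URIs” looks like in practice: the client doesn’t carry a choreography description around, it just does whatever the next link in the current representation tells it to do.

    import urllib.request
    import xml.etree.ElementTree as ET

    # Minimal sketch: drive a multi-step process by following links found in each
    # representation, rather than by executing an out-of-band choreography spec.
    # The host, the document vocabulary, and the rel="next" convention are all
    # made up; the point is only that the next step comes from the server.
    def run_process(start_uri):
        uri = start_uri
        while uri is not None:
            with urllib.request.urlopen(uri) as resp:
                doc = ET.parse(resp).getroot()
            next_link = doc.find("link[@rel='next']")
            uri = next_link.get("href") if next_link is not None else None

    run_process("http://example.org/orders/42")  # hypothetical starting resource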

Congratulations to the XML Protocol WG. The end is near! 8-)

Phil Wolff critiques my résumé, and apparently likes what he sees. Thanks, Phil. How nice is that? Of course, I promptly patched it in response to the two very valid nits he had.

The job hunt is going well, actually. It started slowly, but is picking up steam. Just received an offer out of the blue from a local company, and it could be really good (Semantic Web, “Web services” (cough), collaboration, etc.), but I want to do some due diligence on them first, as I don’t know a lot about them as a going concern. Amazon has an evangelist position that sounds like it would be a blast, but I’ve got to check that the travel wouldn’t be too brutal. Been there, done that, got the miles.

Jon Udell asks three questions about the spontaneous integration observed in his LibraryLookup project. I’ll take a stab at answering them.

Why was this unexpected? Because it wasn’t planned. 8-)

In what environment would it be taken for granted? I’d say any environment in which that kind of integration was both possible and simple.

How do we create that environment? Eschew data hiding. Make all data available with a simple mechanism.
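To make that concrete, here’s roughly what LibraryLookup-style integration amounts to once the data is exposed that simply (the catalogue URL template below is made up; real catalogues each have their own): a query is just a URI built from an ISBN.

    import urllib.parse

    # Sketch of the "simple mechanism" idea: if a library catalogue answers plain
    # GETs on URIs keyed by ISBN, anyone can integrate with it from a bookmarklet
    # or a few lines of script. This URL template is hypothetical.
    CATALOGUE_TEMPLATE = "http://catalogue.example.org/search?index=isbn&term={isbn}"

    def lookup_url(isbn):
        return CATALOGUE_TEMPLATE.format(isbn=urllib.parse.quote(isbn))

    print(lookup_url("0596000278"))  # just a URI; no toolkit, no API key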

I’ve got to agree with Sam and CNet. I just can’t get too excited about Office 11 and its XML support. XML is syntax, a context-free grammar. Standardized syntaxes are wonderful, but heck, it’s just syntax. And while a published schema would describe some data semantics, it would still leave loads of wiggle room for the format to remain proprietary.

I think it is useful, as it does open up the format to XML tools like XSLT. But any use of tools with the Office XML format (such as XSLT style sheets written to manipulate it) will be locked in to the Office 11 format, the same way that software written to the Win32 API is locked in. That’s not an inherently bad thing, of course, but let’s call a spade a spade.
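Here’s a sketch of what that coupling looks like, with a made-up Office-ish vocabulary standing in for whatever Office 11 actually emits: the query only means anything against that one set of element names and namespaces.

    import xml.etree.ElementTree as ET

    # Illustration of the lock-in point: this query is written against a made-up,
    # Office-ish vocabulary. The namespace and element names are hypothetical;
    # the coupling to them, not the names themselves, is what matters.
    NS = {"w": "http://example.org/office11/wordml"}  # hypothetical namespace

    def extract_paragraph_text(path):
        doc = ET.parse(path).getroot()
        # Only works for documents using exactly this vocabulary -- that's the lock-in.
        return [p.findtext("w:t", default="", namespaces=NS)
                for p in doc.findall(".//w:p", NS)]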

Sam thinks good thoughts about how to use RESTful SOAP as a Blogger API. Nice. But there’s an important issue with SOAP and PUT. Unlike POST, PUT means “store”, and as such its processing model doesn’t provide a hook into which an additional processing model (i.e. SOAP’s) can be introduced. An intermediary or message processor seeing a PUT request with a SOAP body would be correct to treat the body as opaque data, and wouldn’t be required to follow the SOAP processing model.
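Here’s the shape of the problem, with a hypothetical host and resource: as far as HTTP is concerned this is just “store this XML at /entries/7”, and nothing about PUT obliges anyone along the way to notice, say, a mustUnderstand header in the envelope.

    # Hypothetical host and path; the envelope is SOAP 1.2. To anything that only
    # speaks HTTP, this is simply "store the enclosed XML at /entries/7" -- the
    # method provides no hook that would require running the SOAP processing
    # model (e.g. honouring mustUnderstand) over the body.
    soap_body = """<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
      <env:Header>
        <ext:needed xmlns:ext="http://example.org/ext" env:mustUnderstand="true"/>
      </env:Header>
      <env:Body><entry>Hello, weblog.</entry></env:Body>
    </env:Envelope>"""

    request = (
        "PUT /entries/7 HTTP/1.1\r\n"
        "Host: weblog.example.org\r\n"
        "Content-Type: application/soap+xml\r\n"
        f"Content-Length: {len(soap_body.encode('utf-8'))}\r\n"
        "\r\n" + soap_body
    )
    print(request)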

This is why I’ve talked about the need for a new HTTP method I’m currently calling “SET”, whose semantics would be “process and store”, and which would therefore allow an additional processing model to be inserted.
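A sketch of the intended difference, keeping in mind that “SET” is only a proposal and not a registered HTTP method; the storage and SOAP-processing functions here are placeholders.

    # "SET" is a proposal, not a registered HTTP method. The store() and
    # soap_process() arguments are placeholders for whatever the server does.
    def handle(method, body, store, soap_process):
        if method == "PUT":
            # PUT means "store": the body is opaque data as far as HTTP is concerned.
            store(body)
        elif method == "SET":
            # The proposed SET means "process and store": run the additional
            # processing model (here, SOAP's) over the body, then store the result.
            store(soap_process(body))
        else:
            raise NotImplementedError(method)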

Apparently for today he is. Let’s check back tomorrow. 8-)

Eugene Kim of the Web services Devchannel interviewed me about REST and Web services.

InternetWeek today published an article about REST as a different approach to Web services. Some good stuff, but the conclusion of the article misses the point.

Perhaps this point doesn’t get made enough (hah! 8-), but not only is REST an alternate approach to building Web services, it’s superior in almost any measurable way I can think of. Not all architectural styles are created equal, obviously, and not all are suitable for Internet-scale deployment; REST is, Web services aren’t.

Dave Winer points to an interesting question from Daniel Berlinger: Why wasn’t the Web built on FTP?

This question, and related ones such as “Could the Web have been built on FTP?” or even “Why did the Web win, and not Gopher?”, are excellent ones with really interesting answers that expose historical, political, and technical aspects of the World Wide Web project. I don’t pretend to have all the answers, but I’ve done my research, so I think I’m a good person to ask (of course, you could just ask Tim for the authoritative answer).

I think the answer to Daniel’s question is pretty easy: Tim chose to start with a new protocol because he was innovating and FTP was well entrenched at the time (whether he even considered FTP at all, I don’t know). Tim’s (and later, Roy’s) innovations would have required substantial changes to FTP too (support for URIs, and the fallout from that, being the big one), so I think it was a wise choice.

So, could the Web have been built on FTP? I’d say probably not, no. Beyond the points above, other things that would affect this include:

  • FTP uses two network round trips for each retrieval, due to first-pass negotiation (there’s a sketch of this at the end of the post)
  • FTP has no redirection capabilities
  • FTP has no equivalent of POST
  • FTP implementations don’t permit delegation to plugged-in software

There are probably more issues. Add those to the fact that FTP was RFC’d in 1985, eight or nine years before Web standardization began, and there’d have been a lot of pushback from the IESG against changing FTP into something that it wasn’t intended to be. And rightly so.
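To make the first of those points concrete, here’s roughly what a single retrieval looks like over each protocol using Python’s standard library (the hosts and paths are made up): FTP needs a login and command exchange on the control connection plus a separate data connection just for the file itself, where HTTP does the whole job with one request/response pair.

    from ftplib import FTP
    from http.client import HTTPConnection
    from io import BytesIO

    # Hosts and paths are hypothetical; the point is the shape of each exchange.

    # FTP: connect to the control channel and log in, then a *separate* data
    # connection is set up just to carry the file.
    ftp = FTP("ftp.example.org")
    ftp.login()                                   # anonymous login: more command/reply traffic
    buf = BytesIO()
    ftp.retrbinary("RETR /pub/doc.html", buf.write)
    ftp.quit()

    # HTTP: one request/response pair on one connection does the whole job.
    conn = HTTPConnection("www.example.org")
    conn.request("GET", "/doc.html")
    body = conn.getresponse().read()
    conn.close()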