Yikes, I awoke this morning to about 100 new emails, 42 of which were this beast. And those were just the ones that evaded the spam filter. It also constituted about a third of the 200 or so spams caught by my filters.

I’ve never been hit so hard.

Via Ted, Stefano’s trying to trace the history of the inversion-of-control pattern (well, I think it’s a pattern at least 8-) used in frameworks.

I used it in my own work in ’94/95 when I first got into C++, and shortly thereafter with Java in the summer of ’95. But it was perhaps late ’95 (confirmed: it was published in Sept ’95) when I first saw it described and referred to as the “Hollywood Principle”. That description was in the famed “blue book”: Bob Orfali, Dan Harkey, and Jeri Edwards’ The Essential Distributed Objects Survival Guide.

I’m sure I sound like a broken record on this subject (there’s another case too, but I can’t find it, doh), but another group has just taken a stab at reinventing substantial portions of the Web (poorly). This time it’s presented as a Web-services-meets-the-Grid solution, and ironically, I think they nail the basics of a means to unify those two architectures: a resource model. But the reinvention part is that the Web already has a perfectly good resource model.

Where their resource model breaks down is that they feel the need to associate a specific interface with it. It’s a resource; why not define an interface for a resource? Let’s see: resources only have identity, state, and behaviour, so the semantics would have to operate on those; things like “serialize your state” (GET), “change your state to this” (PUT), etc.
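That uniform interface can be sketched in a few lines; this is purely my illustration (the class names are made up, and the Counter is just a toy resource), not anyone’s actual API:

```python
from abc import ABC, abstractmethod

class Resource(ABC):
    """A hypothetical sketch of the Web's uniform resource interface."""

    @abstractmethod
    def get(self) -> bytes:
        """Serialize and return a representation of current state (GET)."""

    @abstractmethod
    def put(self, representation: bytes) -> None:
        """Replace current state with the given representation (PUT)."""

class Counter(Resource):
    """A toy resource: its state is a single integer."""

    def __init__(self) -> None:
        self.value = 0

    def get(self) -> bytes:
        return str(self.value).encode()

    def put(self, representation: bytes) -> None:
        self.value = int(representation)
```

The point is that every resource, whatever its identity or state, answers to the same small set of operations, so no per-application interface needs inventing.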

Apparently, in the whats-hot-whats-not department, Web services are not.

Norm’s playing around with media types. This one is served with application/xml.

Let’s have a look at what should happen in a perfect world if XHTML is served with that media type. RFC 3023 says;

An XML document labeled as text/xml or application/xml might contain namespace declarations, stylesheet-linking processing instructions (PIs), schema information, or other declarations that might be used to suggest how the document is to be processed. For example, a document might have the XHTML namespace and a reference to a CSS stylesheet. Such a document might be handled by applications that would use this information to dispatch the document for appropriate processing.

What that means is that one cannot assume that namespace dispatching will occur, and therefore the semantics of application/xml are ambiguous; it is reasonable that the recipient see it as XHTML/HTML, but also reasonable that they see it as “XML” (such as in the IE XML tree view).
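Namespace dispatching, where it happens at all, amounts to peeking at the root element’s namespace and routing accordingly. A rough sketch of the idea (the function and its return strings are mine, purely illustrative):

```python
import xml.etree.ElementTree as ET

XHTML_NS = "http://www.w3.org/1999/xhtml"

def dispatch(document: str) -> str:
    """Decide how to treat an application/xml document by its root namespace."""
    root = ET.fromstring(document)
    # ElementTree encodes a namespace as "{uri}localname" in the tag.
    if root.tag.startswith("{" + XHTML_NS + "}"):
        return "render as XHTML"
    return "show generic XML tree view"

doc = '<html xmlns="http://www.w3.org/1999/xhtml"><body/></html>'
```

The ambiguity is exactly that nothing in RFC 3023 obliges a recipient to run anything like this; falling straight through to the generic view is equally conformant.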

In the real world, of course, reality can trump specification; consensus (in the form of running code) may very well be that namespace dispatching is assumed, and in that case at least the ambiguity vanishes. But then we’ve lost the ability to send plain-old XML. For example, if somebody asks me for a sample XML document, I’d like to be able to send them some XHTML without it being interpreted by the other end as XHTML, just as XML. I think it would be great if application/xml could be used for this purpose, but it’s not a huge deal; text/plain would also be appropriate in many cases.

So I set up a little test. Let me know what you see and I’ll record it. It could be useful in the soon-to-come revision to 3023, enabling us to be a bit more specific than “might”.
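A test like that can be as simple as serving the same XHTML bytes under different media types and seeing what browsers do with each. A sketch with Python’s standard library (the paths are made up; only the Content-Type differs between them):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

XHTML = (b'<html xmlns="http://www.w3.org/1999/xhtml">'
         b'<body><p>hello</p></body></html>')

class MediaTypeTest(BaseHTTPRequestHandler):
    # Hypothetical paths: identical bytes, differing only in media type.
    TYPES = {
        "/as-xml": "application/xml",
        "/as-xhtml": "application/xhtml+xml",
    }

    def do_GET(self):
        media_type = self.TYPES.get(self.path)
        if media_type is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", media_type)
        self.send_header("Content-Length", str(len(XHTML)))
        self.end_headers()
        self.wfile.write(XHTML)

# To run it: HTTPServer(("", 8080), MediaTypeTest).serve_forever()
```

Whether a given browser renders /as-xml as a page or as a source tree is precisely the “might” that a revised 3023 could pin down.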

I gave a presentation tonight at the XML Users Group of Ottawa, titled REST, Self-description, and XML.

Not unexpectedly, the slides don’t capture a lot of what was presented (and nothing of what was discussed), but there’s a story in there that should be easy to follow. It also has a surprise ending that caught at least one person off guard. That was my objective.

That’s two in the past couple of weeks; the RESTwiki, and the ESW Wiki.

If they didn’t hack into the machines and change the MoinMoin database, then these have to be the lamest defacers ever.

Tim Bray writes;

When you’re explaining something to somebody and they don’t get it, that’s not their problem, it’s your problem.

Well, sorta. Let’s test it out.

Bob; Hey Jim

Jim; Hey Bob

Bob; Hey, your house is on fire!

Jim; Eh? My what’s on what?

Bob; I’m getting out of here! Best of luck with that fire.

Jim; Eh?

A nice post from Don last weekend, addressing the “roach motel” (aka “application silo”) problem, and what Longhorn’s doing to help developers who want to avoid it. Some comments;

Though I think their characterization of RPC is a bit naïve (NFS is a great counterexample of a broadly adopted RPC protocol), the argument in favor of common operations is a strong one that I’m extremely sympathetic to (watch this space).

NFS is built on an RPC infrastructure, but it’s not what you’d call RPC, because its users don’t define the interface; the protocol does. Consider that just because it’s built with RPC, you don’t see it integrated with other RPC-based services. I think there’s an important lesson there.

What the REST argument conveniently sidesteps is that had it not been for HTML (a common schema), HTTP (a common set of operations/access mechanisms) would have never registered on most people’s radar.

I don’t know about others, but I’ve never side-stepped that issue. I’m quite up front when I claim that REST alone doesn’t address the “schema explosion” problem, and that HTML is only a “unifying schema” for humans. I commonly follow that up with an explanation of why I like Semantic Web technologies, as they extend the Web to address the explosion problem for automata.

Anyhow, I’m very encouraged by the positive feedback, and will be keenly “watching that space”! Thanks, Don.

Dave Orchard wrote, and Don Box concurred, that it’s a good thing to avoid registration at the likes of IANA and IETF. I also concur, as my hopefully-soon-to-be-BCP Internet Draft with Dan Connolly describes.

Where I disagree with Dave and Don is summed up by Dave;

XML changes the landscape completely. Instead of having a small number of types that are registered through a centralized authority, authors can create arbitrary vocabularies and even application protocols through XML and Schema. In the same way a client has to be programmed for media types, a client must be programmed for xml types and wsdl operations.

IMO, XML doesn’t change the landscape in that way at all. It’s always been possible to have an explosion of data formats and protocols; 10 years ago you could have done it with ASCII and ONC or DCE. The fact of the matter is that we don’t see these things on a large scale on the Internet because most people don’t want them. Not only is it expensive to develop new ones – even with a fine framework for their development, such as SOAP & XML Schema – but you’re very typically left amortizing that expense over a very narrowly focused application, such as stock quotes or shoe ordering, or what-have-you. The Web and Semantic Web efforts are an attempt to build a supremely generic application around a single application protocol (HTTP) and a single data model (RDF). Now that’s landscape-changing.