I’ve recently been coming up to speed on the whole Zeroconf space. Boy, what a mess.

Earlier this summer, it seems, the WG decided to go with a Microsoft-led approach to multicast name resolution called LLMNR. This was in contrast to Apple’s similar and existing work on Rendezvous, which they had published in both spec and code form.

So rather than start from a solution that works, with multiple independent open source implementations available, they’re starting from scratch with something new and unproven? Brilliant!

Oh, and there’s also the issue that the applications area seems to be sitting on its duffs over the kind of transparency that LLMNR forces upon applications by hiding the fact that a name was resolved via local multicast rather than via DNS proper. Keith Moore seems to be the only well-known “apps” person raising any objections.
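
To make that concrete, here’s a rough Python sketch (the name myprinter.local is made up) of what a multicast lookup actually does; the query goes to the link-local multicast group 224.0.0.251:5353, never to a configured DNS server, which is exactly the distinction being hidden from applications:

    import socket
    import struct

    MDNS_GROUP = "224.0.0.251"   # the mDNS link-local multicast group
    MDNS_PORT = 5353

    def mdns_query(name):
        # Build a minimal one-question DNS query packet for an A record.
        header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)  # ID=0, QDCOUNT=1
        qname = b"".join(
            bytes([len(label)]) + label.encode("ascii")
            for label in name.rstrip(".").split(".")
        ) + b"\x00"
        question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
        return header + question

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    # Note: this never touches a configured DNS server; it asks whoever
    # happens to be listening on the local link.
    sock.sendto(mdns_query("myprinter.local"), (MDNS_GROUP, MDNS_PORT))
    try:
        reply, peer = sock.recvfrom(1500)
        print("got", len(reply), "bytes from", peer)
    except socket.timeout:
        print("no responder on this link answered")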

Update; Stuart Cheshire, main Apple guy on this stuff, just posted his review of the last call working draft of LLMNR last night. Read it for yourself.

Every day, I get somewhere around 20 hits for the SOAP media type registration draft, referred from an old O’Reilly weblog entry of mine on SOAPAction. It turns out that this article is the first result returned when Googling for “SOAPAction”.

I feel a bit bad about this, because I only recently realized that the behaviour I described in that blog entry isn’t per any of the specs (obviously I don’t use SOAP at all 8-). I was extrapolating its semantics based on some investigations into self-description and previous attempts at SOAP-like technologies, such as RFC 2774 and PEP (specifically, this part, i.e. the 420 response code).

If SOAPAction/action were as I described there – and IMO this would make it vastly more useful (i.e. make it useful at all 8-) – then it would have to have been specified that a receiver faults when the intent indicated by the value of the SOAPAction field is not understood. Obviously that isn’t the case today.
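
For illustration only, here’s a minimal Python/WSGI sketch of the behaviour I had in mind; the action URI is hypothetical, and the 500-with-fault response follows the SOAP 1.1 HTTP binding (PEP’s 420 and RFC 2774’s 510 played the analogous role):

    from wsgiref.simple_server import make_server

    # Hypothetical action URI; any real service would have its own.
    KNOWN_ACTIONS = {"http://example.org/actions/getQuote"}

    FAULT = (b'<env:Envelope xmlns:env='
             b'"http://schemas.xmlsoap.org/soap/envelope/"><env:Body>'
             b'<env:Fault><faultcode>env:Client</faultcode>'
             b'<faultstring>SOAPAction intent not understood</faultstring>'
             b'</env:Fault></env:Body></env:Envelope>')

    def app(environ, start_response):
        action = environ.get("HTTP_SOAPACTION", "").strip('"')
        if action not in KNOWN_ACTIONS:
            # The behaviour argued for above: fault rather than silently
            # ignore an intent the receiver doesn't understand.
            start_response("500 Internal Server Error",
                           [("Content-Type", "text/xml")])
            return [FAULT]
        start_response("200 OK", [("Content-Type", "text/xml")])
        return [b"<env:Envelope>...</env:Envelope>"]  # normal processing

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()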

Sorry for the confusion.

Mark Nottingham suggests the W3C should take it upon themselves to clean up the media type registration process. I sort of concur, in that the official registration procedure doesn’t explain in sufficient detail that the burden of driving the timeline falls entirely on the registrant. This caused lots of delay during the registration of RFC 3236.

But on the other hand, I like it when centralized registries are difficult to use. If there’s really a need for a bazillion different data formats, then a centralized registry is the wrong approach, and the difficulty of using it – multiplied by the number of people experiencing it – should provide sufficient impetus for somebody to suggest a change to a decentralized process.

Of course, I don’t believe we need a bazillion different data formats. I think we have a perfectly good 80% solution, which is why I’m not spearheading any efforts in this direction – though I think it would still be useful (just not required) to decentralize media types.

P.S. here’s an amusing data point, where Roy takes the W3C to task over its inability to properly register media types.

Mark Nottingham slaughters another sacred cow.

My rule of thumb has always been that if you can afford to use TCP, you can afford to use HTTP.

Ted Leung – whose weblog I subscribed to a couple of weeks ago and enjoy reading immensely – just commented on my blog about Adam Bosworth.

First off, I want to be clear that I wasn’t “taking Adam to task”. I was just honestly excited to see that he appeared to be closing in on understanding the Web via the seemingly identical path that I took. I think you have to have had the “Web epiphany” before you can appreciate why this excites me so much. 8-)

Ted writes;

Cross off CORBA and replace it with either REST or web services. The Web is already there. The missing piece is OpenDoc or something like it.

I don’t dispute that the browser provides a relatively weak form of compound document framework when compared to OpenDoc and CommonPoint, but my emphasis at the time was on studying the architecture of the system to see if it prevented richer frameworks from being built by extension. And I discovered that no, it didn’t, and in addition it already had some of the architectural features that I felt were required (XML namespaces (well, they came later), serialization-centric retrieval (GET), binding of state to behaviour (Content-Type), etc.). And sure enough, we’re finally beginning to see some of these systems being developed now. So I wouldn’t say that we’re missing OpenDoc; I’d just say that we’re working with a primordial-but-extensible version of it.

BTW, I also discovered that just by historical accident, an important part of what I expected to see – client-side containers – wasn’t there. Cookies really threw me for a loop for many months, and it wasn’t until I read Roy’s dissertation that I realized that he didn’t like cookies, and that the RESTful solution to the problem they addressed was also a perfectly compound-document-framework-friendly solution.

Kudos to Aaron; XML’s deterministic failure model is broken. I agree 100%.
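
If you want to see the failure model in action, here’s a tiny Python demonstration; one unescaped ampersand and the parser is required to reject the entire document:

    import xml.etree.ElementTree as ET

    # One unescaped ampersand, and a conforming parser must reject the
    # whole document, recoverable content and all.
    doc = "<post><title>Q&A</title><body>perfectly fine</body></post>"
    try:
        ET.fromstring(doc)
    except ET.ParseError as e:
        print("entire document rejected:", e)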

Another sacred cow bites the dust.

Kudos to Kendall Clark for stating XML is Not Self-Describing, as I did last month.

He writes;

Well, I’ve read too much Wittgenstein (not to mention too much Aquinas, Meister Eckhart, and Julian of Norwich) to think that a name is necessarily a self-description

I haven’t read them at all (8-), but I think I have a pretty good understanding of self-description that I developed “bottom up” during my study of Web architecture over the past few years. As Kendall brought this up again, I thought I’d write a few more words about it.

As I see it, description is always with respect to some context. For example, “The sky is blue” is not a self-descriptive statement unless you know;

  • ASCII
  • English
  • Which sky I mean
  • Which colour blue I mean

For any bag-o-bits, it seems to me that there exists a finite amount of contextual knowledge necessary to understand it. “Self-describing”, then, should mean that the bag itself contains sufficient information to identify the required contextual knowledge.

Tim Berners-Lee likes to talk a lot about this. Last year in Honolulu at WWW2002, his keynote was Specs Count, and much of it was about the value of being able to successively apply public specifications in order to understand a message. That’s contextual knowledge, and as you can see in his talk, it doesn’t begin with the HTTP message; it goes all the way down to the IP packet and Ethernet frame; even those bits must be considered (see an example of where this issue can show up in practice).

Where the Web fits in here is with its contribution of an enormously valuable piece of contextual knowledge; RFC 2396, aka URIs. With respect to the example above, I can use URIs instead of bare strings, and those URIs can provide the specifics of which blue I meant, by relating it to other colours.
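
Here’s a toy Python illustration of that difference (all the URIs are hypothetical ones under example.org); a bare string gives an agent nowhere to go for more context, while each URI identifies something that could be dereferenced to obtain it:

    # All URIs here are hypothetical, under example.org.
    KNOWN_CONTEXT = {
        "http://example.org/earth/sky": "the atmosphere seen from the ground",
        "http://example.org/vocab/hasColour": "relates a thing to a colour",
        "http://example.org/colours/blue": "a hue, related to other colours",
    }

    # "The sky is blue", with each term an identifier, not a bare word.
    statement = ("http://example.org/earth/sky",
                 "http://example.org/vocab/hasColour",
                 "http://example.org/colours/blue")

    for uri in statement:
        # On the Web proper this would be an HTTP GET on the URI; a bare
        # string like "sky" offers no such next step.
        print(uri, "->", KNOWN_CONTEXT.get(uri, "unknown; go dereference it"))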

There’s lots to be said about XML, RDF, and why SOA-based Web services can never be self-descriptive (hint; too many methods). But I’ll leave it at that for now.

Adam Bosworth sort of lays out some requirements for a “Web services browser”. It’s really funny for me to read this, because I was struggling with exactly these same questions back in 1996 or so, coming from some seriously hard-core CORBA work, but while also being a big fan of what we called the “Universal Front End”; a chunk of software deployed everywhere which could conceivably make use of every service out there. I spent a good amount of time trying to figure out how to integrate CORBA, OpenDoc, and the Web, in an attempt to yield what Adam’s asking for. A couple of years later, I figured it out; the key was the Web’s uniform interface. You didn’t need to give services service-specific interfaces; you could accomplish the same tasks by exposing the “data objects” (aka resources) of a service via a common set of data-object-centric methods (what Adam refers to as “Add, Delete, Modify”, but which might as well just be GET, POST, PUT, DELETE).
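
In code form, the contrast looks roughly like this; a Python sketch (resource names and state invented for the example) of a service exposed through a handful of uniform methods rather than through service-specific operations:

    # Toy in-memory "resource space"; URIs and state are made up.
    resources = {"/orders/1": {"status": "open"}}

    def handle(method, uri, body=None):
        # Every resource is manipulated through the same four methods;
        # no service-specific getQuote()/placeOrder() interface needed.
        if method == "GET":
            return resources.get(uri)               # retrieve state
        if method == "PUT":
            resources[uri] = body                   # create or replace
            return body
        if method == "DELETE":
            return resources.pop(uri, None)         # remove
        if method == "POST":
            new = dict(resources.get(uri, {}))
            new.update(body or {})                  # process a submission
            resources[uri] = new
            return new
        raise ValueError("not part of the uniform interface")

    print(handle("GET", "/orders/1"))                       # {'status': 'open'}
    print(handle("POST", "/orders/1", {"status": "paid"}))  # {'status': 'paid'}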

I can even clearly recall holding some of the same misconceptions he had. For example;

Remember, in this dream, a web services browser is primarily interacting with information, not user interface.

Suggesting that today’s Web is about user interfaces. It isn’t; it’s precisely about “interacting with information”, where each information source is provided a URI, and the information is returned on a GET. It pains me to think back to when I didn’t even understand that simple point, because it’s so darned obvious now. But I’m comforted by the fact that gurus like Adam don’t get it yet. 8-)

Adam earlier mentioned that he was going to be talking about REST. I’m very eager to hear what he has to say about that, given how RESTful his description of a “Web services browser” is. I think he’s almost there.

And BTW, with respect to mobility, I do believe that an additional constraint on top of REST could be useful. I actually wrote up a design at Sun back in 1999 about doing Servlets as Applets, permitting application code to be run in the browser. But Applet integration with the browser was just awful at the time (I think it’s gotten worse since 8-), making this basically infeasible. I should have investigated JavaScript, but didn’t; mod-pubsub does some of the things with JavaScript that I couldn’t do with Java, in particular intercepting the submission of POST data.

Graham Glass; “Pretty much any piece of software can be exposed as a collection of services”

Mark Baker; “Pretty much any piece of software can be exposed as a graph of resources”

I guess I didn’t mention that I’m now a SearchWebServices Expert; the kind that you can ask questions of, along with smart folk such as Sean McGrath, Anne Thomas Manes, Doron Sherman, and Roman Stanek.

The last question I was asked was;

Does WebDAV violate the principles of REST?

To which I answered, “No”, though with a caveat.