Mark Nottingham created RSS History to help distinguish the various ways in which an RSS feed can be interpreted. A very useful feature, for sure, but I think I have a better idea about how to do it.

If you think of a blog in terms of one agent receiving a compound document containing representations of individual items, the “h:overwrite” semantic should be reproducible simply by telling the receiving agent what resource (item) that data is a representation of; the only possible way the receiving agent can interpret this is that the more recent representation “overwrites” the older one. In other words, we just need to give the item a URI (preferably without a fragment identifier, though I suppose it could work with them – I’ll have to think more about that).

Similarly, the “h:add” semantic would be the result of not including the URI, giving the receiving agent no choice but to interpret that data as additional data, rather than replacement data.

“h:none” should be a special case of the overwrite semantic, so that if a single RSS document contains two representations of the same resource, the “most recent” one overwrites the older one. Of course, this requires that the channel possess some sort of ordering semantics. I’m no RSS whiz, but the only example of this I know of is RSS 1.0, where it talks about using rdf:Seq (sequence) rather than rdf:Bag to contain items. So you’d need to use an rdf:Seq if you wanted a feed like this. Makes sense.

As a proposal, I guess this boils down to: use rdf:about/GUID on items when you want replacement (perhaps in the future, i.e. if you don’t use the URI now, you can’t replace later); don’t when you don’t (or, even better, use a different URI); and use rdf:Seq when order matters within a channel, even for intra-channel replacement. Oh, and use RSS 1.0. 8-) This has the downside that existing aggregators will likely change behaviour based on the same input, but I don’t think it’s too surprising a change. I think the gain in functionality, simplicity, and visibility is worth it, but I’m not an author of an aggregator so I might be missing something.
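To make the three semantics concrete, here’s a minimal sketch of how a receiving agent might apply them. Everything here is my own invention for illustration (the store shape, the function name, the (uri, content) pairs) – no aggregator actually works this way; it just shows the interpretation I’m proposing:

```python
# Sketch of the proposed semantics: a hypothetical aggregator's item store.
# Items are (uri, content) pairs, arriving in rdf:Seq document order;
# uri is None when the feed gives the item no URI.

def apply_feed(store, items):
    """Apply a feed's items, in document order, to the store."""
    for uri, content in items:
        if uri is None:
            # "h:add": no URI, so the agent has no choice but to treat
            # this as additional data, never replacement data.
            store["anonymous"].append(content)
        else:
            # "h:overwrite" (and "h:none" within a single document):
            # the more recent representation of the same resource
            # overwrites the older one.
            store["by_uri"][uri] = content

store = {"by_uri": {}, "anonymous": []}
apply_feed(store, [
    ("http://example.org/item/1", "first draft"),
    (None, "a one-off note"),
    ("http://example.org/item/1", "corrected draft"),  # overwrites the first
])
```

Note that the intra-document overwrite in the last item only works because the list is ordered – which is exactly why the channel would need rdf:Seq rather than rdf:Bag.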

Doug Kaye finds an article about the struggles that Web services are having in getting uptake, at least as reported by CIOs.

This shouldn’t come as a surprise to any reader of my blog, given some of my past predictions, but frankly I wasn’t expecting it to be as grim as this suggests; hype can exist for years, and be a very powerful force. I wasn’t really expecting articles such as this one until next year.

Dan Brickley finds a wrapper for wget which adds ETag support, making for a much more network-friendly Blagg/Blosxom aggregator combo.
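The network-friendliness comes from HTTP’s conditional GET: remember the ETag from the last fetch, send it back as If-None-Match, and an unchanged feed costs only a 304 with no body. A sketch of that header logic (the cache shape and function names are mine, not the wrapper’s; no network involved):

```python
# Sketch of ETag-based conditional GET, as a wget wrapper might do it.
# cache maps URL -> the ETag seen on the last successful fetch.

def request_headers(cache, url):
    """Headers for polling url; sends If-None-Match when we have an ETag."""
    headers = {}
    if url in cache:
        headers["If-None-Match"] = cache[url]
    return headers

def handle_response(cache, url, status, headers):
    """Return True if the resource changed and should be re-processed."""
    if status == 304:
        return False              # unchanged; the server sent no body
    etag = headers.get("ETag")
    if etag:
        cache[url] = etag         # remember it for the next poll
    return True

cache = {}
changed = handle_response(cache, "http://example.org/rss", 200, {"ETag": '"abc"'})
next_poll = request_headers(cache, "http://example.org/rss")
```

Every subsequent poll of an unchanged feed then transfers headers only, which is what makes a frequently-polling aggregator a good network citizen.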

Argh, I forgot to s/eth0/ppp0/ my firewall ruleset after the switch to PPPoE. That explains a few things (such as why I’m the only one accessing my Web server 8-). Sigh.

One house move, two ISPs, and three line card swaps later, an IP packet finally graces the new Chez Baker.

I first got 2.2M/1.1M ADSL ethernet service back in April of 1998. Five years later, the best I can do is 1.3M/160K PPPoE, albeit for about half the price.

Don Park writes

OASIS is now looking at a lion’s share of key specs that will dominate the Internet and intranets in the near future. Compared to them, W3C is looking pretty devastated [with disinterest and hopeless dreams] at this point.

I’ve heard that from a number of people over the past couple of years, but I just don’t see it that way. Yes, certainly the W3C has been taking lots of flak from lots of folks who think that Web services are some wonderful new thing, both inside (from W3C members) and out (press). But there’s much more at play here than that.

The fundamental difference between OASIS and the W3C is that the W3C exists to maintain and enhance an existing software system, while OASIS does not. OASIS’s approach resembles little more than a random land grab, attempting to stake out territory without any consideration for its inherent value. Take Don’s list of specs, for example: SAML, XACML, Liberty, BPEL4WS. There is effectively no architectural consistency at all between those specs, so the chances of them ever working well together as part of a single system (without considerable effort) are pretty darned low. And that’s without even considering that I don’t think they will see much widespread deployment individually (though SAML and XACML aren’t too bad).

Thinking back over the recent history of influential software standards organizations such as OMG, IETF, W3C, WAPforum/OMA, etc., the only other one I can think of that didn’t have a legacy system or architecture to protect is the Opengroup (though the OSF had DCE). The others all had some means of ensuring architectural consistency. The IETF has the IESG and Areas for constraining work. The OMG created the Architecture Board. The WAPforum had an Architectural Consistency group. And the W3C has Activities, Staff, the Director, and more recently, the TAG.

So if OASIS wants to go the way of the Opengroup, they’re certainly on track.