Sam writes:

A distributed system across trust boundaries is where I believe we are collectively heading.

Hey, wait a sec’, we’re already there; the Web and email cross trust boundaries. We even know why that worked too, so we have no excuse when mistakes are made.

Obviously, Sean just doesn’t get it. You’ve got to be protocol independent, so that you can invent a new application protocol on top (i.e. DocSOAP) upon which you are then allowed to become dependent. Silly wabbit.

Show me a WSDL document, and I’ll show you a poorly designed application protocol.

Sean asks for pub/sub on the Web. I wonder if he’s familiar with KnowNow, and in particular their open source project, mod_pubsub? They’ve got some nifty demo apps you can play with too.

We implemented something similar at Idokorro too, but for mobile clients which had our own browser installed (with its own embedded web server).

Mark Nottingham created RSS History to help distinguish the various ways in which an RSS feed can be interpreted. A very useful feature, for sure, but I think I have a better idea about how to do it.

If you think of a blog in terms of one agent receiving a compound document containing representations of individual items, the “h:overwrite” semantic should be reproducible simply by telling the receiving agent which resource (item) the data is a representation of; the only way the receiving agent can interpret this is that the more recent representation “overwrites” the older one. In other words, we just need to give the item a URI (preferably without a fragment identifier, though I suppose it could work with one; I’ll have to think more about that).

Similarly, the “h:add” semantic would be the result of not including the URI, giving the receiving agent no choice but to interpret that data as additional data, rather than replacement data.

“h:none” should be a special case of the overwrite semantic, so that if a single RSS document contains two representations of the same resource, the “most recent” one overwrites the older one. Of course, this requires that the channel possess some sort of ordering semantics. I’m no RSS whiz, but the only example of this I know of is RSS 1.0, which talks about using rdf:Seq (sequence) rather than rdf:Bag to contain items. So you’d need to use an rdf:Seq if you wanted a feed like this. Makes sense.

As a proposal, I guess this boils down to: use rdf:about/GUID on items when you want replacement (perhaps in the future, i.e. if you don’t use the URI now, you can’t replace later), don’t when you don’t (or, even better, use a different URI), and use rdf:Seq when order matters within a channel, even for intra-channel replacement. Oh, and use RSS 1.0. 8-) This has the downside that existing aggregators will likely change behaviour based on the same input, but I don’t think it’s too surprising a change. I think the gain in functionality, simplicity, and visibility is worth it, but I’m not an author of an aggregator, so I might be missing something.
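To make that concrete, here’s a minimal sketch (my own, purely illustrative, not anybody’s shipping aggregator) of how an aggregator might apply these semantics once a feed has been parsed: items carrying an rdf:about URI replace any earlier representation of that resource, URI-less items are simply added, and document order stands in for the rdf:Seq ordering. The function and data names are made up for the example.

```python
# Hypothetical aggregator-side sketch of the proposed semantics:
# items that carry an rdf:about URI replace any earlier representation
# of that resource; items without one are simply added. Document order
# stands in for the rdf:Seq ordering of the channel.

def apply_feed(store, items):
    """Merge a parsed feed into `store`, a list of (uri, item) pairs.

    `items` is assumed to be in channel (rdf:Seq) order; each item is a
    dict with an optional 'about' key holding the resource URI.
    """
    index = {uri: i for i, (uri, _) in enumerate(store) if uri}
    for item in items:
        uri = item.get("about")
        if uri and uri in index:
            store[index[uri]] = (uri, item)   # "overwrite": newer representation wins
        else:
            if uri:
                index[uri] = len(store)
            store.append((uri, item))         # "add": no URI, or first time seen

# Example: the second representation of .../42 overwrites the first,
# while the URI-less item is always treated as new data.
feed = [
    {"about": "http://example.org/items/42", "title": "Draft"},
    {"about": None, "title": "An aside"},
    {"about": "http://example.org/items/42", "title": "Final"},
]
entries = []
apply_feed(entries, feed)
```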

Doug Kaye finds an article about the struggles that Web services are having in getting uptake, at least as reported by CIOs.

This shouldn’t come as a surprise to any reader of my blog, given some of my past predictions, but frankly I wasn’t expecting it to be as grim as this suggests; hype can exist for years and be a very powerful force. I wasn’t really expecting articles like this one until next year.

Dan Brickley finds a wrapper for wget called wget.pl, which adds ETag support, making for a much more network-friendly Blagg/Blosxom aggregator combo.
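For anyone wondering what ETag support buys an aggregator, here’s a rough sketch of the conditional-GET idea in Python. It isn’t wget.pl, just an illustration; the cache dict and function name are invented for the example.

```python
# Rough sketch of conditional GET with ETags (not wget.pl itself):
# remember the ETag from the last fetch, send If-None-Match next time,
# and an unchanged feed costs only a 304 response instead of a download.
import urllib.request
import urllib.error

etag_cache = {}  # url -> last ETag seen

def fetch_if_changed(url):
    req = urllib.request.Request(url)
    if url in etag_cache:
        req.add_header("If-None-Match", etag_cache[url])
    try:
        with urllib.request.urlopen(req) as resp:
            etag = resp.headers.get("ETag")
            if etag:
                etag_cache[url] = etag
            return resp.read()        # feed changed (or server has no ETag support)
    except urllib.error.HTTPError as e:
        if e.code == 304:
            return None               # unchanged; nothing re-downloaded
        raise
```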

Argh, I forgot to s/eth0/ppp0/ my firewall ruleset after the switch to PPPoE. That explains a few things (such as why I’m the only one accessing my Web server 8-). Sigh.

One house move, two ISPs, and three line card swaps later, an IP packet finally graces the new Chez Baker.

I first got 2.2M/1.1M ADSL ethernet service back in April of 1998. Five years later, the best I can do is 1.3M/160K PPPoE, albeit for about half the price.

Don Park writes:

OASIS is now looking at a lionshare of key specs that will dominate the Internet and intranets in the near future. Compared to them, W3C is looking pretty devastated [with disinterest and hopeless dreams] at this point.

I’ve heard that from a number of people over the past couple of years, but I just don’t see it that way. Yes, certainly the W3C has been taking lots of flak from lots of folks who think that Web services are some wonderful new thing, both inside (from W3C members) and out (press). But there’s much more at play here than that.

The fundamental difference between OASIS and the W3C is that the W3C exists to maintain and enhance an existing software system, while OASIS does not. OASIS’s approach resembles little more than a random land grab, an attempt to stake out territory without any consideration for its inherent value. Take Don’s list of specs, for example: SAML, XACML, Liberty, BPEL4WS. There is effectively no architectural consistency at all between those specs, so the chances of them ever working well together as part of a single system (without considerable effort) are pretty darned low. And that’s without even considering that I don’t think they will see much widespread deployment individually (though SAML and XACML aren’t too bad).

Thinking back over the recent history of influential software standards organizations such as the OMG, IETF, W3C, WAPforum/OMA, etc., the only other one I can think of that didn’t have a legacy system or architecture to protect is the Opengroup (though the OSF had DCE). The others all had some means of ensuring architectural consistency. The IETF has the IESG and Areas for constraining work. The OMG created the Architecture Board. The WAPforum had an Architectural Consistency group. And the W3C has Activities, Staff, the Director, and more recently, the TAG.

So if OASIS wants to go the way of the Opengroup, they’re certainly on track.

Mark talks about how he implemented his Dive Into Mark Mobile Edition, and in doing so discusses XHTML Basic, which I co-edited. He’s mostly correct, but there are some points I’d like to respond to.

The “link” element has an extremely low conformance profile; all it means to support it is that you don’t fault when you discover it. Supporting “link” doesn’t mean you have to support CSS.

As for the list of elements that XHTML Basic left out: “b”, “i”, “center”, and “font” aren’t there because XHTML 1.0, on which XHTML Basic builds, removed them in the “presentation belongs in stylesheets” blitz of 1999. Nested tables were indeed removed based on extensive feedback and wide industry support for doing so, due to the memory consumed in processing them. Though I don’t know for sure, I’m quite confident that AvantGo does not support arbitrarily complex nested tables, which suggests that some form of subset would need to be defined should their solution ever be opened up anyhow.

It is not true that XHTML Basic has to use the application/xhtml+xml media type. In many cases it is appropriate to use “text/html”, though the W3C apparently disagrees with me there; their “XHTML Media Types” note says that it “SHOULD NOT” be used. Whatever. I doubt any text/html processor would have trouble with XHTML Basic; just don’t expect it to be treated as XML or XHTML.
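To illustrate the point, here’s a toy sketch of my own (not anything from Mark’s setup): the same XHTML Basic bytes can be labelled either way, and only the Content-Type header differs. The page content, port, and negotiation rule below are all made up for the example.

```python
# Toy illustration: serve one XHTML Basic page as either text/html or
# application/xhtml+xml, depending on what the client says it accepts.
from wsgiref.simple_server import make_server

PAGE = b"""<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML Basic 1.0//EN"
    "http://www.w3.org/TR/xhtml-basic/xhtml-basic10.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Hello</title></head>
  <body><p>Same bytes, whichever media type you pick.</p></body>
</html>"""

def app(environ, start_response):
    # Crude negotiation: only claim XHTML when the client explicitly asks for it.
    accept = environ.get("HTTP_ACCEPT", "")
    ctype = ("application/xhtml+xml" if "application/xhtml+xml" in accept
             else "text/html")
    start_response("200 OK", [("Content-Type", ctype + "; charset=utf-8")])
    return [PAGE]

if __name__ == "__main__":
    make_server("localhost", 8080, app).serve_forever()
```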

For North Americans, Mark’s conclusion (“As I said, XHTML Basic has no basis in reality. Ignore it.”) probably isn’t too far from the truth. In much of Asia and parts of Europe, though, it’s important, and its importance will likely continue to spread.

Not that I really care that much. I contributed to its development because of Sun’s objective that WAP should use commodity protocols rather than wireless-specific ones, and we achieved that. Though WAP 2.0 extended XHTML Basic, I’m confident that in time those extensions will be ignored and HTML/XHTML will remain in some form, likely richer than XHTML Basic. I look forward to seeing that language documented after the fact; XHTML Basic 3.2, anyone? 8-)