Some musings of mine on OpenDoc and W3C CDF
(link) [del.icio.us/distobj]

It’s about time.

Web services were under attack (principled, of course) at today’s TAG call. Better late than never, I suppose…

Roy: The situation I run into is that if they don't solve the problem,
we shouldn't recommend a technology. ... WSaddressing may not be useful.

[…]

<Roy> what I said was that the WSA folks are roughly the same as the WSDL
folks and the WS* folks in general, and we have regularly described problems
with WS that need to be resolved in order to fit in with the Web, and they have
regularly refused to do so in a meaningful way. At some point, we have to say
that this technology should not be recommended to W3C members.

(emphasis mine)

[…]

<Roy> I don't find any technology that doesn't use the Web to be a useful product of the W3C.

[…]

<noah> Though, to be fair, the work required to process such a header would be a
structural change to most deployed SOAP software.
<DanC> so... the folks who made up that software dug that hole. they can dig
themselves out, no?

It’s a real shame. This would all just go away if only Web services advocates realized that the Web provides what they need for distributed, document-oriented computing. You wonder why Dan, Tim, and Roy (and maybe Henry – I don’t know him very well) are pushing as they are? It’s because they understand that the Web is necessary, and that after you slash away all that makes the Web the Web, what’s left isn’t anything of any particular value to anyone, protestations to the contrary notwithstanding.

I’m not holding my breath that anything other than a toothless compromise will result from this exchange, but still, it’s nice to see the pushback; misery loves company, as they say 8-)

I loved this bit from Schwartz’s latest;

Or finally, as I did last week at a keynote, ask the audience which they’d rather give up – their browser, or all the rest of their desktop apps. (Unanimously, they’d all give up the latter without a blink.)

No surprise there, except perhaps to Microsoft (though I’ll have more to say on that soon, after the W3C makes a certain much-delayed announcement).

Tim Bray picks up on it and observes;

From right now in 2005, I see three families of desktop apps that are here for the long haul: First the browser itself, including variations like news readers and music finders, whether P2P or centralized. Second, realtime human-to-human communication, spanning the spectrum from text to voice to video. Third, content creation: PhotoShop, Excel, DreamWeaver, and whatever we’ll need for what we’re creating tomorrow.

That’s a reasonable list of apps, but I don’t know how “long haul” Tim might be thinking there. “Excel”, specifically, seems like something that’s ripe for an AJAX equivalent – something Jot’s been working on (amongst others). As for the human-to-human stuff: while it has most certainly been handled primarily outside the browser to date, I see no reason why much of it couldn’t be handled in-browser, as the recent flurry of AJAXian IM solutions suggests.

Plus, lest one think that even those desktop apps that are around for the long haul aren’t affected by the Web, consider that most of them – Photoshop, etc. – are, at the very least, destined for use as browser plugins.

8-)
About time!
Cool stuff from DanC

Massive kudos to the WS-Addressing WG (in particular Dave Orchard) for agreeing to drop reference properties from the WS-Addressing specification!! This addresses the most major(!) concern I had with the specification, and leaves EPRs as a means for bundling a URI with some state; cookies meet XML, as it were.

This decision means that a by-the-book EPR will contain only a single resource-identifying data element: a URI. In other words, the WG is adopting the REST constraint of a single resource-identifying data element. More concretely, it means that Web services will actually be encouraging the use of URIs for identifying things, rather than the old practice of using them as dispatch points behind which countless resources were hidden. This is HUGE, because in my experience, once you’ve adopted URIs, the use of http URIs and therefore HTTP (buh bye, protocol independence) naturally follows due to the massive network effects of the Web. The use of “hypermedia as the engine of application state” is the next obvious constraint for adoption after that.
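To make the contrast concrete, here’s a sketch of the before and after. Element names follow the WS-Addressing drafts; the namespace URI, endpoint URIs, and the CustomerId property are invented for illustration:

```xml
<!-- Old style: an opaque dispatch-point address, with the actual
     resource hidden behind it in "reference properties" -->
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <wsa:Address>http://example.org/soap-endpoint</wsa:Address>
  <wsa:ReferenceProperties>
    <ex:CustomerId xmlns:ex="http://example.org/ns">c42</ex:CustomerId>
  </wsa:ReferenceProperties>
</wsa:EndpointReference>

<!-- By-the-book, post-decision: one URI identifies the resource -->
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <wsa:Address>http://example.org/customers/c42</wsa:Address>
</wsa:EndpointReference>
```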

It’s possible that with this decision, Web services might have just stepped inside the Web’s Schwarzschild Radius. Stay tuned.

After a two-plus year hiatus, I’m returning to active duty at the W3C. Justsystem, my principal client, has just joined as a full member. I’m now a member of the Compound Document Formats WG, which shouldn’t come as any surprise given the problem space tackled by xfy, the technology we previewed in November.

An interesting post from Dave. A few comments…

I’ve been saying for a while now that I think it’s a shame that SOAP 1.2 didn’t define a general SOAP to HTTP binding that used HTTP as a transfer protocol, for the previous 2 reasons.

It does, Dave. The default binding is a transfer binding; I made sure of that. I think you’re confusing how people use it with how it’s defined. Web services proponents generally think that a SOAP envelope is a SOAP message, yet that interpretation isn’t licensed anywhere in the spec, and is even explicitly rejected in the HTTP binding, where the state transition table clearly shows HTTP response codes affecting SOAP message semantics. It’s also alluded to in the glossary, where the definitions of the two terms differ (you think this was accidental? Hah! 8-).

I would love it if there was a reasonable way to bridge the SOAP/WS-Addressing world and the HTTP Transfer protocol world, but I just don’t see that each side really want the features of the other side. The SOAP/WSA folks want the SOAP processing model for Asynch, and don’t care about the underlying protocol. The Web folks want their constrained verbs and URIs and don’t care about SOAP processing model.

Avert ye eyes! False dichotomy alert!! You can get the SOAP processing model and HTTP as a transfer protocol (including asynch, which HTTP handles just fine, despite insistence from many that it doesn’t) simply by using SOAP in the manner prescribed in the SOAP 1.2 spec and default HTTP binding. In order to do so, though, you need to give up on the idea of a new (non-URI) identifier syntax. This is really not a big deal! We are, after all, primarily talking about syntactic differences here. What EPRs are trying to do is comparable to inventing a new alphabet for the English language; perhaps there are benefits, but I think the Phoenician alphabet has a, ahem, rather large and insurmountable head start in deployment, making those benefits – if they exist at all – completely inconsequential.
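Since the difference really is syntactic, the translation is nearly mechanical. Here’s a minimal sketch of the idea (the endpoint and parameter names are hypothetical, and this is my illustration of the argument, not any official mapping): fold EPR-style state into the URI itself, which is roughly what cookies-meet-XML would collapse into.

```python
from urllib.parse import urlencode, urlsplit

def epr_to_uri(address, reference_parameters=None):
    """Fold EPR-style state into the URI itself.

    Illustrates that an (address + hidden state) pair can be carried
    just as well as a single http URI; the difference is syntax.
    """
    if not reference_parameters:
        return address
    # Append as query parameters, respecting any existing query string.
    separator = "&" if urlsplit(address).query else "?"
    return address + separator + urlencode(reference_parameters)

# An EPR-style pair: a dispatch-point URI plus "reference parameter" state...
uri = epr_to_uri("http://example.org/orders", {"customer": "c42", "order": "1001"})
print(uri)  # http://example.org/orders?customer=c42&order=1001
```

One resource, one identifier – and anything that speaks HTTP can now link to it, cache it, and bookmark it.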

Dave then makes a really interesting statement of the “protocol independent” variety;

Here’s a test case: Would the Atom protocol switch to using WS-Addressing and then use the HTTP as Transport binding(s) and HTTP as Transfer binding? Seems to me not likely. The Atom folks that want to use HTTP as Transfer have baked the verbs into their protocol, and they won’t want to switch away from being HTTP-centric. And same as I don’t see the SOAP centric folks wanting to “pollute” their operations and bindings with HTTP-isms.

Emphasis on “baked the verbs into their protocol”. Seriously – no matter how you slice it, you’re always baking verbs into a “protocol”, because an application developer has to know which verbs they’re using. The problem as I see it, again, is one of nomenclature: Web services proponents have a very narrow, RPC-inspired definition of “protocol” (transport), and mental models built around this definition simply can’t fully absorb the implications of the broader definition used in the IETF and W3C (transfer). They simply can’t conceive of something called a “protocol” playing such an enormously significant role in a distributed system, yet this is precisely how all existing Internet-scale systems are built, and precisely why Web services proponents haven’t yet realized that the Web is what they’ve been trying to build, at least since the quest for “document oriented” services began in 2001/2002.

One might also look at Dave’s statements and ask: if they’re going to be dependent on a protocol, it might as well be the most successful one ever developed, rather than one that has struggled for deployment anywhere except behind the firewall. And somebody please remind me: why is it desirable to be independent of a transfer protocol, but dependent on SOAP the protocol?

From the minutes of the last TAG face-to-face;

TBL was positive about looking at RDF/SemanticWeb and said that doing Web Services would be appropriate, but expressed concern about the defacto architecture that that’s coming, e.g. from corporate sources. NM said that those working on Web Services would benefit from the right kind of guidance on how to better leverage the Web Architecture and build scaleable systems

and, as a plan for action;

web services
    get seriously involved in WS addressing. Ask WS choreography to present.

I’m not holding my breath, but it’s better than what’s going on right now.