Thanks Roy. That’s really good to see. Roy’s far more articulate than I am on these matters, and I’d say that he picked the right guy to come out of the woodwork for, too. My favourite bit;

I don’t buy the argument that programmers benefit from a Web Services toolkit. Such things do not build applications — at most they automate the production of security holes.

8-)

I found this in my aggregator, but it’s not on his site any longer;

Adam Bosworth admits he doesn’t get REST. I like that. It takes courage. The REST advocates promote by intimidation. A clear sign they don’t want you looking too closely. Now Bosworth is going to do exactly that. Bravo.

Apparently I intimidate Dave so much that he’s unable to dig deeply into the issues himself! I rule. Golly.

I am jazzed about Adam’s latest weblog entry though. I’ve always expected that he’d be one of the first big Web services proponents to really get the Web, since he’s such a bright guy. I’ll respond when I get a minute.

I’ve just been getting into Zope, and was reading The Zope Book when I stumbled upon this;

The technology that would become Zope was founded on the realization that the Web is fundamentally object-oriented. A URL to a Web resource is really just a path to an object in a set of containers, and the HTTP protocol provides a way to send messages to that object and receive its response.

I would have phrased the last part of the last sentence differently – perhaps “provides a way to request the state of the object, and to process the serialized state of objects” – but yah, close enough.

Identity; check.

State; check.

Behaviour; check.

Encapsulation; check.

Data hiding; nope, but I personally never considered data hiding axiomatic of OO-ness.
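
Just for my own amusement, here’s a toy sketch (mine, not Zope’s) of the idea in that quote: a URL path is resolved segment-by-segment through nested containers, and the object found at the end has its state handed back to the client, as a GET would. The containers are played by plain maps, and the path is hypothetical.

```java
import java.util.Map;

public class TraversalSketch {

    // Resolve a URL path segment-by-segment through nested containers.
    // Containers here are just Maps; in Zope they'd be folders and objects.
    static Object traverse(Map<?, ?> root, String path) {
        Object current = root;
        for (String segment : path.split("/")) {
            if (segment.isEmpty()) {
                continue;                      // skip the leading "/"
            }
            if (!(current instanceof Map)) {
                return null;                   // ran out of containers: a 404
            }
            current = ((Map<?, ?>) current).get(segment);
            if (current == null) {
                return null;                   // no such object: a 404
            }
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> root =
            Map.of("reports", Map.of("2004", Map.of("summary", "quarterly numbers")));

        // GET /reports/2004/summary -> the object's state, serialized for the client
        System.out.println(traverse(root, "/reports/2004/summary"));
    }
}
```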

Sean points, indirectly via Jorgen to an effort out of Microsoft Research (CRL) called Project Samoa, which provides tools for verifying the security of Web services.

Cool stuff, but you realize that you’d have to apply this to every single application interface out there? Ouch.

Yet another example of a benefit of using a single application interface.

In a post to service-orientated-architecture.

For your (re)viewing pleasure.

At a one day miniconf (ISEN) at UCI a few years back, Adam Rifkin introduced me to a charming expression; “Ripping me a new asshole”. IIRC, he used it to describe how every time he talks to Jim Waldo, he learns something new that shatters some long-held belief.

Today, Roy Fielding ripped me a new one on the topic of the https URI scheme, which I had always considered an unnecessary hack. In one fell swoop, that misconception was vanquished (along with several others by Roy over the past several years). Not that there isn’t a little bit more to the issue, as Tim points out, but certainly the notion of it being a hack is wrong-o.

It does make one wonder why such an illuminating experience is given such a graphic label. Well, for a moment perhaps.

Over here. I’ve commented on it.

Jeff asked me to have a look at his attempt to quantify the concept of “coupling” from a REST perspective, which I’m happy to do.

I like what I read there quite a bit, in particular the emphasis on an aspect of large scale systems that I rarely see mentioned; the role of intermediary introduction at runtime. Jeff writes;

In addition, a fundamental belief that I hold is that ‘intermediaries’, if introduce-able at runtime, have the potential of bringing a coupling index to an unprecedented low level. This concept is conveyed via ‘Intermediary Decoupling’; where-by, other coupling concerns are mitigated by the use of one or more ‘intermediaries’.

A huge +1 from me on that. Roy Fielding describes a set of properties called “Modifiability”, which I believe encompasses Jeff’s intermediary concept, though I’m unclear exactly which sub-property it would refer to; I still have trouble recognizing some of those properties as distinct from the others.
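
To make that runtime-introduction point concrete for myself, here’s a minimal sketch (mine, not Jeff’s) of why a uniform HTTP interface permits it: an intermediary can be interposed as a pure deployment decision, with no change to the interface either party programs against. The host names below are hypothetical.

```java
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class IntermediarySketch {
    public static void main(String[] args) throws Exception {
        URL resource = new URL("http://example.org/reports/2004");   // hypothetical resource

        // Today: the client talks straight to the origin server.
        HttpURLConnection direct = (HttpURLConnection) resource.openConnection();
        System.out.println("direct: " + direct.getResponseCode());

        // Tomorrow: a caching/auditing intermediary is interposed purely as a
        // deployment decision -- neither the client's interface to the resource
        // nor the server changes, only this connection-time configuration.
        Proxy intermediary = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("proxy.example", 3128));        // hypothetical proxy
        HttpURLConnection viaIntermediary =
                (HttpURLConnection) resource.openConnection(intermediary);
        System.out.println("via intermediary: " + viaIntermediary.getResponseCode());
    }
}
```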

I also like the mention of “Standards Coupling”, as it gives additional weight to reusing existing standardized solutions (HTTP, URIs), and de-emphasizes “reinventing the wheel” approaches (ahem 8-).

In general, I think most of the value in trying to build a metric is in going through the exercise. The “user weight” factor is much more dynamic than it appears, too, or else needs to accommodate the requirements of the context into which a system will be deployed. For example, a system being deployed behind the firewall of an enterprise doesn’t need the same degree of loose coupling that a system being deployed outside a firewall does. That’s just one type of “context” of course, though a pretty darned important one.

James Strachan responds to my SDO comments suggesting I missed the point. I don’t think I did. Perhaps James missed the point of my comments. 8-)

He writes;

To help set Mark straight; Servlets are a Java API for implementing HTTP operations. (Incidentally Servlets don’t have a client side – for that you need to either use the URL class or the commons-httpclient library). So Servlets take care of the server-side part of HTTP protocol for you. To use REST speak, Servlets implement the server side of the transfer part of REST.

HttpServlets do expose a client-side API; they just do it in a language-neutral way, via the HTTP protocol. I could have also compared SDO to java.net or HttpClient, I suppose, but my argument would have been the same.
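
(An aside: a minimal sketch of what I mean by that client-side API, using java.net against a hypothetical servlet-backed URI. commons-httpclient, or a non-Java client for that matter, would do just as well, which is exactly the point.)

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ServletClientSketch {
    public static void main(String[] args) throws Exception {
        // The resource behind this URI might be a servlet, a CGI script, or
        // anything else; the client's "API" is the same either way.
        URL resource = new URL("http://example.org/orders/1001");
        HttpURLConnection conn = (HttpURLConnection) resource.openConnection();
        conn.setRequestMethod("GET");                // the uniform interface

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);            // the serialized state
            }
        }
    }
}
```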

SDO on the other hand is a generic Java API for any arbitrary typesafe blob of state. It says nothing of how you fetch the state, where it comes from, what transfer/marshalling mechanism etc. The state could come from anywhere; from parsing any XML document, calling some RESTful service, a web service, an RMI call, an EJB / JDO / JDBC / JNDI / JMX query etc.

Yes, true, just as a RESTful service could front for an EJB, JDO, JDBC, JNDI, JMX query etc.

When I first reviewed it, I looked for a “smoking gun”; that thing that would stand out as evidence that the authors hadn’t studied the Web in sufficient detail. I didn’t see any at the time, but after having a second look, I’d say it’s the use of XPath; SDO should have used URIs (or perhaps both URIs and XPath) as the means of relating objects. Perhaps that might help you see where I’m coming from with my objections, James.
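
A quick, hedged sketch of the distinction I mean; the path expression and URI below are both hypothetical, and this is my illustration, not anything out of the SDO spec.

```java
import java.net.URI;

public class LinkingSketch {
    public static void main(String[] args) {
        // An XPath-style reference is only meaningful relative to one
        // particular data graph held by one particular party.
        String xpathRef = "purchaseOrder/items/item[2]";

        // A URI reference is meaningful to anybody, dereferenceable with GET,
        // and free to point at state managed by an entirely different system.
        URI uriRef = URI.create("http://example.org/orders/1001/items/2");

        System.out.println("graph-relative: " + xpathRef);
        System.out.println("Web-global:     " + uriRef);
    }
}
```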

Thanks for the feedback/pushback.

Update: James responds again. I think his last paragraph is all I really need to comment on;

Incidentally, the interesting thing about SDO is that the navigation mechanism is pluggable. So Mark, an SDO implementation could use URIs (or XPointer, or XQuery etc) to navigate through the data graph.

Yes, understood, but that misses the point of the Web. URIs aren’t just another identification mechanism; they are the most general identification mechanism, enabling other mechanisms to be defined in their terms (e.g. FTP file names engulfed via the ftp URI scheme, email addresses engulfed via mailto:, etc.).
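
A tiny sketch of that point, using java.net.URI with hypothetical identifiers: one identifier type engulfs all three schemes, which is what lets the Web absorb other systems rather than sit beside them.

```java
import java.net.URI;
import java.util.List;

public class UriGenerality {
    public static void main(String[] args) {
        // One identifier type covers them all; the identifiers themselves are hypothetical.
        List<URI> ids = List.of(
            URI.create("http://example.org/reports/2004"),        // a Web resource
            URI.create("ftp://ftp.example.org/pub/readme.txt"),   // an FTP file name, engulfed
            URI.create("mailto:someone@example.org"));            // an email address, engulfed

        for (URI id : ids) {
            System.out.println(id.getScheme() + " : " + id);
        }
    }
}
```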

Sure, the Web can be treated as just another system, but as soon as you do that, any attempt to build a general model that includes it ends up being an attempt to reinvent it, because that’s what the Web is; the system of all systems.