A funny addendum from Sam on the topic of concurrent updates with REST:
Obviously, Clemens hasn’t heard about the Expires header. All you have to do is predict with 100% accuracy when the next POST or DELETE request is going to come in, and all caches will remain perfectly in synch.
It is just a SMOP.
Heh, exactly. Some might call this a bug; I call it a feature. There is no requirement in REST that all layers have a consistent view of the state of a resource, as to do so would be to reduce scalability. It’s basically just recognizing that perfect data consistency in the face of latency and trust boundaries is impossible. The best one can do (addendum: well, close to it) is to make explicit, at the time the representation is transferred, the expectations about the durability of the data. As I mentioned previously, architectural styles that transfer serialized objects (the whole object, not just the state) can address the issue of latency by moving the encapsulating code along with the data. But this is done at the cost of visibility, which means it likely won’t cross trust boundaries.
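To make “explicit expectations about durability” concrete: in HTTP, that’s the job of the Expires and Cache-Control headers on the transferred representation. A minimal sketch in Python (stdlib only; the host and path are hypothetical):

```python
import http.client

# Fetch a representation and read off the durability expectations
# the origin server attached to it.
conn = http.client.HTTPConnection("example.org")
conn.request("GET", "/stock-quote/IBM")
resp = conn.getresponse()

# The server can't predict the next POST or DELETE, but it can say
# how long this representation may reasonably be treated as fresh.
print(resp.getheader("Expires"))        # absolute expiry date, if any
print(resp.getheader("Cache-Control"))  # e.g. "max-age=60", if any

resp.read()
conn.close()
```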
Clemens
has some doubts and questions
about REST, which I’m more than happy to respond to.
What I don’t get together is the basic idea that “everything is a uniquely identifiable resource” with the bare fact that the seemingly limitless scalability of the largest websites is indeed an illusion, created by folks like those at Akamai, who take replicated content around the globe and bring it as close as possible to the clients.
I’ve heard this objection several times. What I always point out is that REST is explicitly layered, and that the Web is scalable precisely because Akamai and other solutions like it can exist.
Taken to its last consequence, with every data facet being uniquely identifiable through a URI (which it then should be, and is), this model can’t properly deal with any modification concurrency.
Not true. There are many known means of dealing with concurrent access to data, and the Web has a good one: see the lost update problem (though other solutions may be used too). The one described in that document is also a very well designed one, as other solutions have problems with “hand of god” centralization issues (e.g. round identifiers).
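The gist of that solution is optimistic concurrency using entity tags: GET the resource and remember its ETag, make the PUT conditional with If-Match, and handle a 412 by merging rather than overwriting. A rough sketch in Python (host, path, and payload are all hypothetical):

```python
import http.client

conn = http.client.HTTPConnection("example.org")

# GET the current representation and note its entity tag
# (assumes the server supplies an ETag).
conn.request("GET", "/doc/report")
resp = conn.getresponse()
etag = resp.getheader("ETag")
body = resp.read()

# PUT the edited version, conditional on nobody else having
# changed the resource in the meantime.
edited = body.replace(b"draft", b"final")
conn.request("PUT", "/doc/report", body=edited,
             headers={"If-Match": etag})
resp = conn.getresponse()
resp.read()

if resp.status == 412:
    # Precondition Failed: someone else updated the resource first,
    # so re-fetch, merge, and retry instead of blindly overwriting.
    print("lost update avoided; re-fetch and retry")

conn.close()
```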
While this solution only addresses concurrent access to an individual resource, WebDAV offers some useful extensions for dealing with collections of resources and concurrent access to them.
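For illustration, WebDAV’s LOCK method (RFC 2518) can take out a write lock over an entire collection by using Depth: infinity. A rough sketch, again in Python against a hypothetical server; a real client would also pass the returned lock token back in an If header on subsequent writes, and eventually UNLOCK:

```python
import http.client

# Standard RFC 2518 lock request body: an exclusive write lock.
LOCKINFO = """<?xml version="1.0" encoding="utf-8"?>
<D:lockinfo xmlns:D="DAV:">
  <D:lockscope><D:exclusive/></D:lockscope>
  <D:locktype><D:write/></D:locktype>
</D:lockinfo>"""

conn = http.client.HTTPConnection("example.org")
conn.request("LOCK", "/project/", body=LOCKINFO,
             headers={"Depth": "infinity",      # lock the whole tree
                      "Timeout": "Second-600",  # ask for a 10 minute lock
                      "Content-Type": "application/xml"})
resp = conn.getresponse()

# On success the server hands back a token identifying the lock.
print(resp.status, resp.getheader("Lock-Token"))
resp.read()
conn.close()
```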
So, what are the limits of data granularity to which REST applies? How do you define boundaries of data sets in a way that concurrency could work in some way […]
REST’s level of data granularity is the resource, but a resource
can be a collection of other resources, as WebDAV recognizes.
and how do you define the relationship between the data and the URI that identifies it?
By use; the data returned by GET over time determines what the URI identifies. Though in the case of a resource that’s created by PUT, the one doing the PUTting already knows, since they’re directly determining what GET returns.
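In other words (a trivial sketch, hypothetical URL), the PUTting client supplies both the URI and the state, so it knows exactly what a subsequent GET will return:

```python
import http.client

conn = http.client.HTTPConnection("example.org")

# Create (or replace) the resource at a URI the client chose.
conn.request("PUT", "/notes/todo", body=b"buy milk",
             headers={"Content-Type": "text/plain"})
conn.getresponse().read()  # expect 201 Created, or 204 on an update

# GET now returns exactly the state the client just supplied.
conn.request("GET", "/notes/todo")
print(conn.getresponse().read())  # b"buy milk"
conn.close()
```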
So to sum up, I too believe that REST has places where it’s useful and places where it’s not, and that there will be other useful systems deployed on the Internet which are not built around REST or REST extensions. But I don’t believe “Web services” will be amongst them, because the current approach fails to recognize that being built around a coordination language is essential to any Internet-scale system.
A few people have already commented on Don’s comments about there being
enough specs already.
FWIW, and not too surprisingly I expect, I saw those comments as a direct jab
at BEA, who earlier this week released
three more WS-* specs.
It was good to finally see BEA go out on its own; lord knows that’s been long overdue. But those three specs were very disappointing as a first attempt. I mean, they’re ok work (though MessageData seems to be a pretty weak attempt at addressing Roy’s issue about the different types of HTTP headers all being munged together), but they’re over-specified (note to authors: leave some room in the margins for the standardization process 8-), and they don’t stand alone very well. They need to be bundled together under some kind of catch-all “SOAP Extension Suite” or something. Or perhaps, in WS-I, separating them out as three is the best way to get them into some new-fangled profile; who knows. Ah, politics, gotta love ’em.
A flurry of
“It’s not REST vs. SOAP” comments in response to Bob McMillan’s
latest REST article.
I helped Bob with that story, and I know I’m always careful to frame the
argument as “REST vs. Web services” rather than “REST vs. SOAP”, so I apologize
for not communicating that point to Bob well enough. Heck, I spent all that time
on the XML Protocol WG for a
reason, you know: to make sure SOAP became the best darned
PEP replacement it could be.
But I suppose there’s an inevitability to this confusion, since the word
“SOAP” brings along with it an implied use, which is contrary to REST. Unfortunate,
but whad’ya gonna do?
If you blinked last week, you might have missed an ordinary
article on
double-digit growth in IBM’s content management software division. The gist of the article appears to be that content management is hot in large part due to record-keeping systems in the post-Enron world. That probably has something to do with it, but I think it’s much more than that. I think it has something to do with the more general trend of a wider variety of data being dealt with under the content management umbrella. Certainly from a Web-centric point of view (which I’ve been known to promote 8-), everything is content.
So here’s a bold prediction: by the end of 2005, IBM’s content management software division will have absorbed its enterprise software group.