So I was thinking last night about how far – or not – we’d come in
the whole “Web vs. Web services” debates. In one respect we’ve come a
long way; you hardly ever hear the argument that “the Web requires humans!”
any more. Many (but still not all) people remain agnostic on that point:
perhaps the Web is usable for this, perhaps not, but it’s moot anyhow
because the “world isn’t going that way”. Still, that’s pretty good progress,
as it shifts the discussion into the more concrete and less subjective
realm of software architecture, allowing us to use reasonably well
understood means of evaluating and comparing architectures for suitability
to a particular problem domain.
But on the other hand, the Web still doesn’t get the respect it
deserves from a lot of folk as a serious distributed computing
platform. I’ve just been reviewing some papers for
Middleware 2004,
and some of them talk about a variety of distributed computing platforms,
yet all fail to mention the Web as a peer.
There have been a lot of low points, obviously, over the past four or
five years, but a few highlights too. Some of the latter include;
Now, with
Tim Bray
joining the ranks
of the WS-Disenfranchised (albeit for slightly different reasons than
mine), the future’s looking even brighter. Onward!
State transfer on top of SOAP on top of state transfer (HTTP). Sigh.
(link) [Mark Baker’s Bookmarks]
Eric sums up the XML Europe 2004 conference, and his
summary includes a couple of points that REST and Web services folks might
find interesting.
While talking about Amazon, he writes;
These services are available either as SOAP or REST (that is, XML over HTTP). Much simpler, REST web services can be tested using a web browser. They account for 80% of the actual requests. In my view this is confirmation of the continuity between the Web and web services. Before anything else, web services are, as their name indicates, services accessible on the Web. They belong to the Web, and that’s what makes them so interesting.
Yup. A little bit of wishful thinking perhaps, but that’s fine.
Then he mentions a
talk
by the occasionally-reclusive (8-)
Paul Prescod;
As expected from a defender of the REST architectural style, Prescod’s presentation started with a moving speech in favor of REST: “the document is what matters”; “we need resource oriented architecture rather than SOA”; “XML is the solution to the problem, not the problem”; “the emphasis should be on resources” and “there should be a seamless web of information resources”.
“the document is what matters”, I like that. But I expect that SOA
proponents would be dumbfounded by that statement, since they’ve been saying
pretty much the same thing, at least since “document style” SOAP came into
common use. But the difference between document-SOA and REST is that the
former uses documents and APIs, while the latter just uses
documents. In other words, document style SOA is like a hybrid style of RPC
and document orientation, whereas REST is purely document oriented; simple
document exchange (aka state transfer) between distributed and autonomous
parties.
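The distinction can be sketched in code. This is an illustrative toy, not anyone’s actual service; the operation and document names are invented. But it shows the difference: in document-style SOA the operation name is still part of the contract, so every new interaction means a new interface, whereas in pure document exchange the uniform transfer operation is the whole contract.

```python
# Hypothetical sketch contrasting document-style SOA with pure document
# exchange. All names here are illustrative assumptions.

# Document-style SOA: the document rides inside an operation-specific API.
# The verb ("submitPurchaseOrder") is part of the contract.
def soa_endpoint(operation, document):
    if operation == "submitPurchaseOrder":
        return "<ack>order received</ack>"
    raise ValueError("unknown operation: " + operation)

# Pure document exchange (state transfer): one uniform operation; the
# contract is carried entirely by the documents themselves.
def rest_endpoint(document):
    # The server decides what to do from the document it receives,
    # not from which API method was invoked.
    return "<ack>order received</ack>"
```

Adding a new kind of interaction to the first style means extending the API; adding one to the second means only defining a new document type.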
I’ve wondered before where exactly
Tim Bray stands on Web services.
Now I know.
I agree with him more than I disagree, but there’s something I very
strongly disagree with him about. Where I do agree with him is in the
need for simplicity; things have gotten totally out of control, and we
need to fall back to what we know works.
Where I disagree with him, is where he says that certain technologies –
XML, URIs, HTTP, SOAP, and
WSDL – work today, because he’s seen them work. I don’t personally think seeing
one, two, or even a handful of working examples is sufficient. I want to
see rabid success be a requirement for admittance to that list.
So, IMO, the list should be XML, URIs, and HTTP.
I would really like to understand Tim’s support for WSDL.
To me, if you think WSDL is a good idea (at least in its current
form, vis-à-vis WSDL 1.1 and WSDL 2.0), then you necessarily believe
that state transfer, and, well, the Web itself, is somehow an
insufficient basis for large scale distributed computing. I know Tim’s
an absolute Web fanatic, which is why I’m at a loss to explain it.
Ok, fess up, who
pissed in Chris’ coffee
this morning?
I think the operative word from my blog that Chris missed was “need”; that,
IMO, we need a WS-* RSS feed because new specs are appearing at a crazy rate.
You can’t compare that with the W3C’s TR page and corresponding RSS feed
because it represents deltas while the Wiki represents a sigma. If the W3C
published a list of recommendations via
RSS, that would make for a more fair comparison. So how many Recs have they
published? Let’s see, in almost 10 years, they’ve got about 80 (90ish if you
include the IETF specs), while there’s 40ish Web services specs listed on the
Wiki, the bulk of which have been produced in the past two or three years. Not
exactly apples-to-apples, but not too far from it.
He concludes;
Please don’t misunderstand my intent, I like HTTP. Unlike Mark, neither do I think it is the last protocol we’ll ever need (it is not), nor do I spend every waking moment trying to tear it down or to poke fun at things that it simply doesn’t handle effectively. That would be pointless.
Please don’t misunderstand my intent, I like SOAP. I just don’t
like how it’s being used. It would be best used for document exchange, not RPC (Web
services circa 1999-2002), or RPC dressed to look like document exchange (present
day Web services). I also don’t “poke fun” at Web services very often,
but I do take pride in being able to point out their many architectural
flaws in a variety of different ways, which I do frequently. And I don’t think HTTP
is “the last protocol we’ll ever need”, though I do believe that if it suddenly became
impossible to create any more, that it wouldn’t be such a big deal, at least for
those of us building document exchange based Internet scale integration solutions. As
for what things HTTP “simply doesn’t handle effectively”, I believe you grossly
overestimate the number of items in that list, though clearly it’s non-empty.
So do me a favour and drop the strawmen, ok? You’ve been pulling that
crap for years.
“In any case, should we admit that the WSDL and SOAP contingent switched horses mid-stream?” – heh. But can they pull it off again?
(link) [Mark Baker’s Bookmarks]
Mark and
Bill list some of
their favourite protocol/distributed-systems papers. Here are some – not all
of them “papers” – of my favs. I’m sure I’m forgetting some.
FWIW, I’m not too keen on Marshall Rose’s RFC 3117.
It’s wonderful up to and including section four, but how those sections are used to justify BEEP
blows my mind. I think he
lost sight of the forest for the trees; that application protocol frameworks (like BEEP, and
how SOAP is most commonly used) are a dime a dozen, and that until you’ve defined an
application protocol, you’re just spinning your wheels. In other words, BEEP addresses
most of the hard problems except the hardest one; coordination.
Savas says
that iTunes is doing a proprietary version of Web services. I don’t think so. He writes;
If, however, they wanted to describe the structure of that information, how would they do it? They’d probably use XML Schema.
Yep, agreed there. I think RDF would be a better solution, but given they’re
using plain old XML, XML Schema would be the way to go.
And if they wanted to describe the message exchange patterns, the contract for the interactions with the store (e.g., “when the XML document with the order information for a song comes, the response will be the XML document with the song”)?
Hold on. Why would they ever do that? Why would they want to artificially
restrict the type of documents returned from an interaction? That would mean
that if they wanted to insert a step between order submission and song download,
they’d have to change the interface and its description. No thanks. I would
prefer that the client determine what to do based on the type of the document that
actually arrives. Let’s keep the perpetually-static stuff in the protocol, and
let everything else happen dynamically.
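That dynamic alternative can be sketched as a client dispatching on the media type of the document that actually arrives. The media types and handlers below are assumptions for illustration, not Apple’s actual types:

```python
# Illustrative sketch: the client decides what to do from the type of the
# document it receives, rather than from a static interface description.
# Media types and handler behaviour are invented assumptions.

def handle_song(body):
    return "saved song ({} bytes)".format(len(body))

def handle_interstitial(body):
    return "extra step required: " + body

HANDLERS = {
    "audio/mpeg": handle_song,        # the song itself arrived
    "application/xml": handle_interstitial,  # an intermediate document arrived
}

def on_response(content_type, body):
    # Dispatch on what actually arrived; an unexpected type is handled
    # gracefully, and no interface description ever needs to change.
    handler = HANDLERS.get(content_type)
    if handler is None:
        return "unhandled type: " + content_type
    return handler(body)
```

Inserting a new step between order and download then only requires defining a new document type and teaching clients that want it a new handler; nothing else changes.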
And if they wanted their service to be integrated with other applications without having to read Apple specifications on how to include the information in HTTP requests and how to read HTTP responses?
I don’t follow. One need only look at the HTTP spec to determine how that’s done.
SOAP buys you nothing in that case, AFAICT. It buys you other things (better
extensibility, richer intermediary model, etc..), but none of those seem relevant
to ITMS at present.
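To illustrate the point about the HTTP spec sufficing: everything an integrator needs to form a request is defined by HTTP itself. The URL and payload below are invented for illustration; they are not Apple’s actual ITMS endpoints.

```python
from urllib.request import Request

# Hypothetical example of integrating with a store over plain HTTP.
# The resource URL and document are assumptions, not real endpoints.
req = Request(
    "https://store.example.com/songs",          # hypothetical resource
    data=b"<order><song>12345</song></order>",  # the document being sent
    headers={"Content-Type": "application/xml"},
)

# The method is implied by the presence of a body; no out-of-band
# description (WSDL or otherwise) is needed to know what this means.
method = req.get_method()                  # "POST"
content_type = req.get_header("Content-type")
```

Any HTTP client library in any language can construct the same request from the same information, which is the sense in which “one need only look at the HTTP spec”.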
Congratulations to Tim
on his new position at Sun.
In an interview
he gave about the move, he said something interesting that I’d like to comment on.
When asked whether he’d looked into peer-to-peer technologies, he said;
Only trivially. That has some roots of its thinking in the old Linda [parallel programming coordination language] technology that [David] Gelernter did years and years ago, which I thought was wonderful and which I was astounded never changed the world. But I have been working so hard on search and user interface and things like that for the last couple of years that I haven’t had time to go deep on JXTA.
Linda – or something very Linda-like – did change the world;
the World Wide Web.
I’d really love to get Tim’s views on Web services. He’s said a
handful of things on
REST/SOAP,
etc.. , almost all of which suggest that he totally gets the value of the Web
(unsurprisingly). But he’s also said
some things
which have me wondering whether he appreciates the extent of the mistakes
being made with Web services.
BTW, I wonder what’s going to happen on the TAG now that both Tim and
Norm are at Sun, given that the
W3C process document
doesn’t allow
two members from the same company? Either way, it will be a big loss to the TAG. Bummer.
Update; Tim resigns
I stumbled upon an “old”
paper by
Dan Larner
yesterday that I first read
when it was published back in ’98, but had forgotten all about. I find it
poignant today not because I agree with its conclusions (I don’t), but because
it so well describes the tension between specific and generic interfaces,
albeit without actually acknowledging the tension 8-O
I liked this image in particular;
At the top you see the generic objects/interfaces, while at the bottom are
the specific interfaces; Printer, Scanner, Copier (this is Xerox, after all).
But why do those services require specific interfaces? Check out the methods
on Printer; Print, CancelJob, Status. Why is that needed? Why not just
call GET on the printer to retrieve its status, POST to the printer to
print a document, and DELETE on a job resource (which is subordinate to the
printer) to cancel a job? Simple.
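As a sketch of that claim (resource paths, status codes, and job ids are my own invention here, not anything from the paper), the whole printer-specific interface collapses into a generic dispatcher over the uniform verbs:

```python
# Illustrative sketch: a printer exposed through the uniform interface
# (GET/POST/DELETE) instead of Print/CancelJob/Status. Paths and ids
# are invented assumptions.

class Printer:
    def __init__(self):
        self.jobs = {}      # job id -> submitted document
        self.next_id = 1

    def handle(self, method, path, body=None):
        """Dispatch generic HTTP-style verbs onto printer resources."""
        if method == "GET" and path == "/printer":
            # Status: just retrieve the printer resource.
            return 200, "status: {} job(s) queued".format(len(self.jobs))
        if method == "POST" and path == "/printer":
            # Print: submit a document, creating a subordinate job resource.
            job_id = self.next_id
            self.next_id += 1
            self.jobs[job_id] = body
            return 201, "/printer/jobs/{}".format(job_id)
        if method == "DELETE" and path.startswith("/printer/jobs/"):
            # CancelJob: delete the job resource.
            job_id = int(path.rsplit("/", 1)[1])
            if self.jobs.pop(job_id, None) is None:
                return 404, "no such job"
            return 204, ""
        return 405, "method not allowed"
```

Note that nothing printer-specific survives in the interface; it’s all in the resources and the documents, which is exactly the point.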
Many of the folks behind HTTP-NG were
from PARC where
ILU, a CORBA ORB with
some funky extensions, provided the impetus for their W3C contributions.
Like Web services proponents, their backgrounds were with systems which
didn’t constrain interfaces, and so it was pretty much an implicit
requirement that HTTP-NG would need to support specific interfaces by
basically being a messaging layer ala SOAP. It’s too bad they didn’t take
the time to study what was possible with the HTTP interface specifically, or
even constrained interfaces in general. I think that’s a big part of the
reason why HTTP-NG flopped.