A really bizarre
article on IEEE Spectrum.
I say bizarre because it totally nails the problem (linking databases), yet totally flubs the solution;
If you want to open up your databases and link them together, look no further
than the Semantic Web.
Update: yes, I know it’s a pro-Semantic-Web article; I’m just pointing out
the misconceptions about Web services. They are not about publishing data. That’s
what the Web is for. The article even claims that HTTP was designed for HTML,
which is totally false.
Via Savas (subscribed!), I’m following
a lot of the discussion regarding the recent Grid/Web-services “unification” specs,
WSRF. Good stuff! These folks are even closer
to having their Web gestalt moment I believe, as they’re now talking about
“stateful resources” (now where have I heard of those before?) in the context of integrating two different systems,
the Grid and Web services. Rock on! I think I’ll sign up to a mailing list or two.
While following those links, I ran across a proposal by Savas and his group
called WS-GAF, a sort of accidental
precursor to parts of WSRF. It mentioned REST;
There have been proposals for naming and uniformly providing access to resources, like the REpresentational State Transfer (REST) model. However, since REST depends on HTTP it is protocol specific and hence unsuitable for heterogeneous systems like the Grid.
First, I just want to say that I think it’s wonderful that REST was even
mentioned in the context of the Grid. Most folks think it’s still just for
humans and browsers and HTML. Very refreshing. Now for the bad news (come on,
you knew it was coming … 8-). REST does not depend on HTTP at all. It is
an architectural style, and therefore prescribes no specific
implementation; it just describes the constraints that one uses when designing
a RESTful system. You could run out and build one that didn’t use
any existing technology, if you wanted to.
Savas might be interested in a response of mine to the
“Web services are not distributed objects” paper.
WS-GAF still uses URIs, which is great. Here’s an example of one;
This is wonderful, in a sense. It’s the first time I’ve seen that we’ve narrowed
the whole SOA/REST/Web issue down to an issue of Web architecture
(well, you have to squint a little to see why it’s really that issue – it could
have been called the “uriRange” issue, i.e. what can a URI identify?).
So my claim is that WS-GAF would be far better off to identify its resources
using http URIs, for the same
reason that “info” scheme users would.
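To make the difference concrete, here’s a little Python sketch. The identifiers are hypothetical (not taken from the WS-GAF spec); the point is what each scheme buys you.

```python
from urllib.parse import urlparse

# Hypothetical identifiers naming the same Grid resource two ways:
http_uri = "http://example.org/grid/jobs/42"
info_uri = "info:example/grid/jobs/42"

# The http URI carries an authority component and a standard
# resolution mechanism (DNS plus HTTP GET), so any off-the-shelf
# agent can fetch a representation of the resource.
assert urlparse(http_uri).scheme == "http"
assert urlparse(http_uri).netloc == "example.org"

# The info URI is a name only: there's no agreed way to dereference
# it, so every consumer needs out-of-band knowledge to use it.
assert urlparse(info_uri).scheme == "info"
assert urlparse(info_uri).netloc == ""
```

Same resource, but only one of the two names comes with a deployed, standardized way to get at it.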
Lots of distributed object, SOA, Web, Web services talk going on recently …
Dare Obasanjo on Web/SOA;
What Don and the folks on the Indigo team are trying to do is apply the lessons learned from the Web solving problems traditionally tackled by distributed object systems.
I know they’re trying to do that, but what they (and nearly
everybody else) have missed, is that the Web is already a distributed
object system; it has its own way of addressing most of the
problems that previous attempts at distributed object infrastructures
attempted to solve. For the things it doesn’t address, Web extensions
like the Semantic Web and ARREST cover them … and then some.
James Robertson on HTTP, documents and coupling;
Here’s my point though. It’s magic thinking to say that you have looser coupling simply because you use Http transport and XML documents. It’s a fantasy. Why do I say that? Well, let’s posit a blog server that accepts XmlRpc formatted posts. There you go – http transport, xml documents.
HTTP isn’t a transport protocol. It’s not intended to send RPC messages; it’s intended
to send real documents like images and resumes and letters and purchase orders and …
anything that is serialized state. If you use it that way, then there is magic
there, because it gets data into the hands of somebody else’s application code,
rather than into the hands of
some infrastructure code.
It’s actually no different than CORBA – except that maybe it’s slower. Either way, I have a server listening on a port, expecting data in a given form, and able to perform a constrained set of actions if I send it the right requests – and ready to send back errors if I don’t.
CORBA only tells you that objects have interfaces. HTTP tells
you what that interface is.
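Here’s a toy Python sketch of that difference. The class and method names are all mine (nothing here is CORBA’s or HTTP’s actual API); it just shows why a fixed, uniform interface matters.

```python
# RPC style: each service defines its own interface, so every client
# must be built against this particular contract.
class BlogService:
    def __init__(self):
        self._posts = {}

    def submit_post(self, post_id, body):    # application-specific method
        self._posts[post_id] = body

    def fetch_post(self, post_id):           # application-specific method
        return self._posts[post_id]

# Web style: one interface, fixed in advance by the protocol, shared
# by every resource.
class Resource:
    def __init__(self, state=None):
        self._state = state

    def get(self):                           # GET: serialize your state
        return self._state

    def put(self, state):                    # PUT: replace your state with this
        self._state = state

# Generic intermediaries (caches, proxies, mirrors...) can be written
# once against the uniform interface; they can't be written once
# against BlogService and friends.
def mirror(source, destination):
    destination.put(source.get())
```

With CORBA you’d need `BlogService`’s IDL in hand before writing a line of client code; with the uniform interface, `mirror` works on resources that haven’t been invented yet.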
Chris Ferris writes,
regarding BEEP and HTTP;
The BEEP protocol offers much richer message exchange patterns than does HTTP, enabling the likes of publish/subscribe, one request/N responses, etc. without having to resort to hacks.
I’m not a fan of BEEP, primarily because I see little value in standardizing
at that layer (OSI layers 5 and 6) without standardizing up higher with an
application protocol, because layer 7
is where interop happens on the Internet.
But I have to take issue with the “hacks” jab. He and I went back and forth on
at least one HTTP extension in the early days of the
Web Services Architecture WG. Is that a hack? Is mod-pubsub
a hack? (well, parts sure are, but the bulk of it? 8-)
I suppose if you have the luxury of working in a greenfield environment with little
in the way of architectural constraints (e.g. BEEP), then you can pretty much do what you
want, and perhaps you’ll end up with something quite elegant. But doing the same thing
with an existing architecture is much harder because you have more constraints
you have to work within. That doesn’t make them hacks.
Via service-orientated-architecture, a good
article, with a great point;
Having a JavaSpaces foundation makes it possible to tackle the modernization of an enterprise application portfolio in a distributed fashion. It becomes possible to modernize front ends without petrifying back ends by agreeing on an abstract middle layer to which all the actual IT assets can connect without concern for what’s on the other side. It lets each center of responsibility make its own decisions about the relative urgency of various goals so that those who have budget accountability also get to decide what will be done when – an essential part of any rational goal-setting process.
Very well said, especially the “abstract middle layer” bit. Of course, there’s nothing
special about JavaSpaces in this regard; any interface constrained around an
abstraction, be it a space or an email inbox, will buy
you these same benefits.
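To show how little machinery that “abstract middle layer” needs, here’s a toy space in the spirit of JavaSpaces; the names are mine, not Sun’s API. Producers and consumers share only this tiny generic interface, never each other’s interfaces.

```python
import threading

class Space:
    """A minimal tuple-space-like rendezvous point."""

    def __init__(self):
        self._entries = []
        self._cv = threading.Condition()

    def write(self, entry):
        # any party can drop an entry in, knowing nothing about who
        # will eventually consume it
        with self._cv:
            self._entries.append(entry)
            self._cv.notify_all()

    def take(self, match):
        # remove and return the first entry for which match(entry) is
        # true, blocking until one arrives
        with self._cv:
            while True:
                for entry in self._entries:
                    if match(entry):
                        self._entries.remove(entry)
                        return entry
                self._cv.wait()
```

A front end writes `{"type": "order", ...}` entries; a back end takes them by pattern. Neither side imports the other’s code, which is exactly the decoupling the article is praising.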
So if similar architectural styles to JavaSpaces are the evolution of Web
services, and Web services are the evolution of the Web, then why does JavaSpaces
look so much like the Web?
Yikes, I awoke this morning to about 100 new emails, 42 of which were
And those were just the ones that evaded the spam filter. It also constituted
about a third of the 200 or so spams caught by my filters.
I’ve never been hit so hard.
I’ve been trying to trace the history of the inversion-of-control pattern (well, I think
it’s a pattern at least 8-) used in frameworks.
I used it in my own work in ’94/95 when I first got into C++, and shortly
thereafter with Java in the summer of ’95. But it was perhaps late ’95 (confirmed;
it was published in Sept ’95), when I first saw it described and referred to as
the “Hollywood Principle”. That description was in the famed “blue book”;
Bob Orfali, Dan Harkey, and Jeri Edwards’
Distributed Objects Survival Guide.
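For anyone who hasn’t run into the Hollywood Principle (“don’t call us, we’ll call you”), here’s a toy framework in Python; all the names are illustrative, not from any real library.

```python
class Framework:
    """A minimal sketch of inversion of control."""

    def __init__(self):
        self._handlers = []

    def register(self, handler):
        # application code hands the framework a callback ...
        self._handlers.append(handler)

    def run(self, events):
        # ... and the framework, not the application, owns the main
        # loop and decides when that callback runs. Control is
        # inverted: the library calls you.
        return [handler(event) for event in events
                for handler in self._handlers]
```

Your code never drives the event loop; it registers a handler and waits for Hollywood to call.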
I’m sure I sound like a broken record
(there’s another case too, but I can’t find it, doh) on this subject, but another group has just
taken a stab
at reinventing substantial portions of the Web (poorly). This time, it’s
presented as a Web-services-meets-the-Grid solution, and ironically,
I think they nail the basics of a means to unify those two architectures:
a resource model. But the reinvention part is that the Web
already has a perfectly good one.
Where their resource model breaks down is that they feel
the need to associate a specific interface with it. It’s a resource;
why not define an interface for a resource? Let’s see:
resources only have identity, state, and behaviour, so the semantics
would have to operate on those; things like “serialize your state” (GET),
“change your state to this” (PUT), etc..
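A sketch of how small that bare-bones interface really is; this is my own toy dispatcher, not WSRF’s or anyone’s actual API, but the method semantics are the generic ones just described.

```python
# identity (a URI) maps to state (a serialized representation);
# the uniform interface is the only behaviour on offer
resources = {}

def handle(method, uri, body=None):
    if method == "GET":                  # "serialize your state"
        if uri in resources:
            return 200, resources[uri]
        return 404, None
    if method == "PUT":                  # "change your state to this"
        resources[uri] = body
        return 200, None
    if method == "DELETE":               # end the resource's existence
        resources.pop(uri, None)
        return 200, None
    return 405, None                     # not part of the uniform interface
```

Note that nothing in `handle` knows, or needs to know, what kind of thing any given URI identifies; that’s the whole point of a uniform interface over resources.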
Apparently, in the whats-hot-whats-not department,
Web services are not.
Norm’s playing around with media types; his weblog is served with application/xml.
Let’s have a look at what should happen in a perfect world if
XHTML is served with that media type.
RFC 3023 says;
An XML document labeled as text/xml or application/xml might contain
namespace declarations, stylesheet-linking processing instructions
(PIs), schema information, or other declarations that might be used
to suggest how the document is to be processed. For example, a
document might have the XHTML namespace and a reference to a CSS
stylesheet. Such a document might be handled by applications that
would use this information to dispatch the document for appropriate processing.
What that means is that one cannot assume that namespace
dispatching will occur, and therefore the semantics of application/xml are
ambiguous; it is reasonable that the recipient see it as XHTML/HTML, but also
reasonable that they see it as “XML” (such as in the IE XML tree view).
In the real world of course, reality can trump specification; consensus
(in the form of running code) may very well be that namespace dispatching
is assumed, and in that case at least the ambiguity vanishes. But then
we’ve lost the ability to send plain-old XML. For example if somebody asks
me for a sample XML document, I’d like to be able to send them some XHTML
without it being interpreted by the other end as XHTML, just XML. I think
it would be great if application/xml could be used for this purpose, but
it’s not a huge deal; text/plain would also be appropriate in many cases.
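The ambiguity is easy to see in code. Here’s one plausible recipient policy, sketched in Python; this is my sketch of what a recipient *might* do, not anything RFC 3023 mandates.

```python
import xml.etree.ElementTree as ET

XHTML_NS = "http://www.w3.org/1999/xhtml"

def dispatch(media_type, payload):
    """Decide how to present a received entity body."""
    if media_type == "application/xhtml+xml":
        return "render as XHTML"          # unambiguous by definition
    if media_type == "application/xml":
        root = ET.fromstring(payload)
        if root.tag == "{%s}html" % XHTML_NS:
            # namespace dispatching: this recipient peeks at the root
            # namespace, but RFC 3023 doesn't require it -- another
            # perfectly conformant recipient could show a generic XML
            # tree view here instead
            return "render as XHTML"
        return "generic XML view"
    return "unknown"
```

Two conformant recipients can disagree on the `application/xml` branch, which is exactly the ambiguity above; and if everyone converges on namespace peeking, there’s no longer any way to hand someone XHTML *as* plain XML.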
So I set up a little test.
Let me know what you see and I’ll record it. It could be useful in the
soon-to-come revision to 3023, enabling us to be a bit more specific