I wanted to respond to some of the detail in Werner’s article, in
addition to
ranting about how document exchange was state transfer.
So here goes …
The first statement that really gave me pause was this:
The goal of
digitizing your business is to enable machine-to-machine communication at the same scale and using the same style of protocols as human interface centered World Wide Web.
I don’t believe that’s the case, and it certainly hasn’t been accomplished, IMO.
I think that if you asked anybody involved since the early days,
they’d say the goal is just the first part: to enable machine-to-machine
communication over the Web (or perhaps the Internet). “using the same style of
protocols” has never been a requirement or goal of this community that I’ve
seen.
Consider what can be done with a Web services
identifier versus a Web identifier. Both are URIs, but because Web
architecture uses late binding, I know what methods I can invoke when I
see a URI (the URI scheme, specifically) and what they mean (because
there’s a path back to RFC 2616 from the URI scheme). With an identifier
for a Web service, I don’t have sufficient information to know what the
interface is and what it means, because Web services are early/statically
bound (creating centralization dependencies, à la UDDI).
I don’t consider changing the architecture from late/dynamic binding
to early/static binding to be “using the same style of protocols”.
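To make that contrast concrete, here’s a little sketch (mine, not Werner’s, and the table of methods is deliberately simplified): with late binding, the URI scheme alone tells a client which operations it can invoke and what they mean.

    from urllib.parse import urlparse

    # Methods defined by the protocol each scheme names (RFC 2616, in
    # http's case). Deliberately simplified.
    UNIFORM_INTERFACE = {
        "http": ["GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS"],
        "ftp": ["RETR"],  # roughly an abstract "GET"
    }

    def known_operations(uri):
        # Late binding: the scheme alone tells us what we can invoke and
        # what it means; no service-specific description is needed.
        return UNIFORM_INTERFACE.get(urlparse(uri).scheme)

    print(known_operations("http://example.org/orders/123"))
    # For a Web services endpoint URI, the same lookup yields nothing
    # useful; you'd have to find and compile its WSDL first.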
I suppose I also take issue with the implicit definition of “distributed
objects” as part of Misconception #1, when it says:
An important aspect at the core of distributed object technology is the notion of the object life cycle: objects are instantiated by a factory upon request, a number of operations are performed on the object instance, and sometime later the instance will be released or garbage collected.
I’ll present my definition first: distributed objects are identifiable things encapsulating
state and behaviour, which present an interface upon which operations can be invoked remotely.
Obviously
distributed objects, like all software, do have a lifecycle. But it’s primarily
an implementation detail, and not exposed through the object’s interface. Some systems
chose to tie the identifier to some particular in-memory instantiation of an object
(rather than to an abstraction that an instantiated object could, in effect, proxy
for), which created a real mess, but I don’t consider that key to the
definition.
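Here’s roughly what I mean, as a hypothetical sketch (the Order, resolve, and load_state names are invented for illustration):

    class Order:
        def __init__(self, order_id, state):
            self.order_id = order_id
            self.state = state

    def load_state(order_id):
        # Stand-in for some durable store; the abstraction outlives any
        # particular in-memory instance.
        return {"123": "shipped"}.get(order_id, "unknown")

    def resolve(order_id):
        # The identifier names the abstraction ("order 123"); an instance
        # is created on demand to proxy for it, then discarded. Contrast
        # binding the identifier to one in-memory instance, whose
        # instantiation and garbage collection then leak into the
        # interface.
        return Order(order_id, load_state(order_id))

    print(resolve("123").state)  # "shipped", whichever instance served it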
Misconception #2 also seems prima facie incorrect to me, at least by my
definition of “RPC”: an architectural style where the developer of each component
is provided the latitude to define the interface for that component. More
concretely, I believe the statement “there are no predefined semantics associated
with the content of the XML document sent to the service” to be incorrect because,
as I mentioned in my last post, if there is a method name in the document, then
that is an explicit request for “predefined semantics”.
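By way of illustration (the message below is invented, not taken from the article): any XML-aware component can parse an RPC-styled document, but the method name it carries means nothing without the service’s out-of-band interface contract.

    import xml.etree.ElementTree as ET

    # Invented example message; the root element is, in effect, a method
    # name.
    message = "<getStockQuote><symbol>IBM</symbol></getStockQuote>"
    root = ET.fromstring(message)

    # A generic intermediary can parse this, but "getStockQuote" only
    # means something to clients built against this one service's
    # interface: predefined semantics, requested right there in the
    # document.
    print(root.tag)  # getStockQuote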
I agree with the titles of Misconceptions #3 and #4: Web services don’t need HTTP or
Web servers. But I disagree with the explanation provided. That Web services are
transport agnostic is fine, but that does not at all imply that they should
be
application protocol agnostic, although most people use them this way.
The core of my disagreement is with this statement in the last paragraph of #4:
The REST principles are relevant for the HTTP binding, and for the web server parsing of resource names, but are useless in the context of TCP or message queue bindings where the HTTP verbs do not apply.
That is incorrect. REST is protocol independent, and has applicability
when used with other protocols, be they application or transport. REST is
an architectural style, and following its constraints guides the style of
use of all technologies you might choose to work with. For example, if you
were using FTP RESTfully, then you would do things like identify files with
URIs (rather than server/directory/file-name tuples), and interact with them
via a single “retrieve” semantic (an abstract “GET”), rather than the
chatty and stateful user/pass/binary/cwd/retr exchange (not coincidentally,
this is how browsers deal with FTP). In essence (though
drastically oversimplifying), what REST says is “exchange documents in
self-descriptive messages, and interpret them as representations of the
state of the resource identified by the associated URI”. That philosophy can
be applied to pretty much any protocol you might want to name, especially
other transfer protocols (as the Web constitutes an “uber architecture”, of
sorts, for data transfer).
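As a concrete sketch of the FTP point (the host below is a placeholder, and this is my illustration, not anything from the article), Python’s standard library already lets you treat an ftp URI exactly this way:

    from urllib.request import urlopen

    # Placeholder URI; ftp.example.org is not a real server.
    uri = "ftp://ftp.example.org/pub/readme.txt"

    # One uniform "retrieve" semantic: dereference the URI and read a
    # representation. The chatty USER/PASS/CWD/RETR exchange still
    # happens, but behind the abstraction, exactly as a browser hides it.
    with urlopen(uri) as response:
        representation = response.read()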
Those are most of the big issues as I see them. Anyhow, that was an enjoyable
and informative read. Thanks!