More good insight from Savas on REST.

He writes;

The human factor is involved. If a resource (e.g., a web page) has moved, applications don’t break. It’s just that there is nothing to see. We are frustrated that it’s not there. If an application depends on that resource being there, that application breaks.

Yep. But how is that any different from a service you depend on not being there? At least HTTP response codes are well-defined, and handle a lot of common cases, including redirection, retiring, retry-later, etc. I don’t see how this is human-centric at all; it’s just dealing with inevitable errors in distribution across trust boundaries.

I’m not sure what he means by using HTML for “interfaces”, but he then later speaks my language again when he describes HTML as a format for describing resource state;

If a resource’s representation is described in HTML, all is fine. Everyone knows how to read HTML. How about an arbitrary XML document though? Did we have a way of specifying to the recipient of the resource’s representation about the structure of the document? Perhaps they wouldn’t have requested it if they knew about it.

XML is fine and dandy, and I use it whenever I can, but it’s just a syntax. As such, it does nothing to alleviate the issue that understanding an XML document is an all-or-nothing proposition. That’s why when I use XML, I almost always use RDF. It enables a machine to extract triples from an arbitrary RDF/XML document, and triples are much finer grained pieces of information than a whole document. It allows me to process the triples I understand, and ignore the ones I don’t, which is another way of saying that it provides a self-descriptive extensibility model. See this example.
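To make the “process what you understand, ignore what you don’t” idea concrete, here’s a minimal sketch using only the standard library (no rdflib), run against a made-up RDF/XML document; the `ex:` vocabulary and the book URI are hypothetical:

```python
# Pull triples out of a simple RDF/XML document and process only the
# predicates we understand, ignoring the rest.
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

doc = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:ex="http://example.org/vocab#">
  <rdf:Description rdf:about="http://example.org/book/123412341234">
    <dc:title>Some Book</dc:title>
    <ex:shelfColour>blue</ex:shelfColour>
  </rdf:Description>
</rdf:RDF>"""

def triples(rdf_xml):
    root = ET.fromstring(rdf_xml)
    for desc in root.findall(RDF + "Description"):
        subject = desc.get(RDF + "about")
        for prop in desc:
            # ElementTree tags look like "{namespace}local"; together with
            # the subject and the element text, that's one triple.
            yield (subject, prop.tag, prop.text)

KNOWN = {"{http://purl.org/dc/elements/1.1/}title"}

for s, p, o in triples(doc):
    if p in KNOWN:        # process the triples we understand...
        print(f"title of {s}: {o}")
    # ...and silently skip the ones we don't (ex:shelfColour here).
```

A consumer that has never heard of `ex:shelfColour` still gets the title out, instead of rejecting the whole document.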

If we are going to glue applications/organisations together when building large scale applications, we need to make sure that contracts for the interactions are in place. We need to define message formats. That’s what WSDL is all about.

Agreed, but that’s also an important part of HTTP. It just defines message formats in a more self-descriptive way (i.e. that doesn’t require a separate description document to understand what the message means).

Also, we talk about exchanging messages between applications and/or organisations. Do we care how these are transferred? Do we care about the underlying infrastructure? I say that we don’t; at least, not at the application level.

I’m not sure we’ll get past this nomenclature problem, but in my world, documents are transferred while messages are transported. I do agree that how message transport occurs doesn’t matter, but I don’t agree that how document transfer occurs doesn’t matter. As an example, consider a document transferred with HTTP PUT, versus that same document transferred with HTTP POST. The two messages mean entirely different things (more below).
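A toy sketch of that difference, with a hypothetical in-memory “server”: PUT means “store this document at this URI” and is idempotent, while POST means “hand this document to the resource for it to process”, which generally isn’t:

```python
# Idempotent PUT vs non-idempotent POST, simulated with a dict-backed store.
store = {}

def put(uri, doc):
    store[uri] = doc    # replace whatever is there; repeating changes nothing

def post(uri, doc):
    store.setdefault(uri, []).append(doc)   # the resource decides; here it appends

put("/report", "v1")
put("/report", "v1")    # same request twice -> same resulting state
assert store["/report"] == "v1"

post("/inbox", "v1")
post("/inbox", "v1")    # same request twice -> different resulting state
assert store["/inbox"] == ["v1", "v1"]
```

Same document both times; the transfer semantics, not the payload, determine what the interaction means.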

If there is a suggestion that constrained interfaces are necessary for loose-coupling and internet-scale computing, then here’s a suggestion… What if the only assumption we made was that we had only one operation available, called "SEND"? Here are some examples:


Ah, this one again. 8-)

You can’t compare TCP/IP “SEND” with HTTP POST or SMTP DATA. TCP/IP is a transport protocol and therefore defines no operations. You can put operations in the TCP/IP envelope yourself (e.g. by sending “buyBook isbn:123412341234”), or you can make them implicit in the port number by registering your “Book Buying” protocol with IANA, only ever using that one operation (“buyBook”), and sending just “isbn:123412341234”. On the other hand, HTTP, SMTP, and FTP all do define their own very generic operations.
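The contrast can be sketched in a few lines; the “buyBook” protocol is of course hypothetical. A raw TCP payload carries whatever operation the application invents, in a convention only that application knows, while an HTTP request names a generic operation (the method) in a standard place that any intermediary can read:

```python
# Application-defined operation buried in a TCP payload...
tcp_payload = b"buyBook isbn:123412341234"
op, arg = tcp_payload.decode().split(" ", 1)   # only this app knows this convention
assert op == "buyBook"

# ...vs a generic operation named in the HTTP request line.
http_request = b"POST /books HTTP/1.1\r\nHost: example.org\r\n\r\nisbn:123412341234"
request_line = http_request.split(b"\r\n", 1)[0]
method = request_line.split(b" ", 1)[0].decode()   # any HTTP party knows this
assert method == "POST"
```

That’s the whole point of the constrained interface: the operation is visible and well-defined without a side agreement between the two parties.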

