More on human targeting

It sounds like Sanjiva’s as baffled as I am about how we can each hold the positions we do on what seems like a completely trivial point. He responds to my previous piece on the Web being for humans, and writes:

The problem with GET is exactly what Mark points to as its feature: GET is reusable for anything. What does that mean? That means the recipient must be able to deal with anything that comes down the pipe!

Well, if the client receives something it doesn’t understand, it just needs to fail gracefully. Even for the case of a browser, most have a fallback for unrecognized content, permitting the user to save it to disk.

I don’t think this is any different from a Web service client invoking getStockQuote, as it would need code to handle stock quotes it didn’t understand (lest it require upgrading once new forms of quotes are developed in the future).

There’s nothing whatsoever wrong with GET – it’s a great way to get some information from one place to another. However, if you want to use it for application integration then someone needs to write down what kind of data is going to come back when I do a GET against a URI.

Sure, the publisher has to write it down somewhere. The question, though – and this relates to WSDL’s utility – is when the client needs to know it.

I’ve written software to support international-scale data distribution networks, and never during the development of that software did I need to know, a priori, which URIs returned what kind of data. I just used the fact that HTTP messages are largely self-descriptive, and so looked at the Content-Type header on responses, along with some error handling code for dealing with unrecognized media types.
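A minimal sketch of that approach, in Python: handlers are keyed by media type taken from the Content-Type header, and anything unrecognized falls through to a generic fallback (the browser’s save-to-disk behaviour, say). The handler names and media types here are hypothetical illustrations, not taken from the software described above.

```python
def parse_media_type(content_type):
    """Strip parameters (e.g. '; charset=utf-8') and normalize case."""
    return content_type.split(";")[0].strip().lower()

def handle_html(body):
    return "rendered HTML"

def handle_xml(body):
    return "parsed XML"

def save_to_disk(body):
    # Graceful fallback for media types this client doesn't understand.
    return "saved unrecognized content"

# Known media types; everything else hits the fallback.
HANDLERS = {
    "text/html": handle_html,
    "application/xml": handle_xml,
}

def dispatch(content_type, body):
    """Route a response body by its Content-Type, never failing hard."""
    return HANDLERS.get(parse_media_type(content_type), save_to_disk)(body)
```

The point is that the dispatch table can be extended after deployment without touching any per-URI knowledge: the client never needs to know in advance which URIs return which types.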

Anyhow, this really gets away from the point of this discussion, which was just that moving from getRealtimeStockQuote to getStockQuote to GET changes nothing about whether the returned information (which can be identical in all cases) needs a human in the loop.


  1. Stelios Sfakianakis

On the self-descriptiveness of HTTP messages: looking only at the Content-Type is not enough; otherwise Google would just say that GData queries return text/xml (or application/xml) instead of specifying all the details, e.g. as in http://code.google.com/apis/gdata/elements.html .

    So a description of the message format, either in human-readable form (as in GData) or in a machine-friendly form (as in WSDL), is required before the client can do something really useful with your application. Of course, in the case of REST you can have some basic interaction supported from the beginning even without knowing the details (format, “semantics”, …) of the returned messages; e.g. you can use your browser to inspect the messages, or have a generic HTTP client (e.g. Joe’s httplib2) download and store the representations locally for you…

  2. Mark Baker

    Actually, practically all services that use the two */xml media types do so incorrectly, because they assume the recipient will interpret the root namespace as the real indicator of the vocabulary, which isn’t licensed by RFC 3023 (I wrote the section that warns about this interpretation). Those services should be using more specific media types, like, say (the fictitious) application/stockquote+xml.
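To illustrate the distinction the comment draws, a client can key its behaviour on a specific media type from the Content-Type alone, without sniffing the XML root namespace; a generic */xml type tells it only that the payload is XML. This sketch reuses the fictitious application/stockquote+xml type from the comment above.

```python
def classify(content_type):
    """Classify a response by media type alone, no namespace sniffing."""
    media_type = content_type.split(";")[0].strip().lower()
    if media_type == "application/stockquote+xml":
        # Specific type: the vocabulary is explicit in the header.
        return "stock quote"
    if media_type in ("text/xml", "application/xml"):
        # Generic type: the vocabulary is unknown from the header alone.
        return "generic XML"
    return "unknown"
```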

  3. Bill Burke

    You talk a bit here about the client setting up handlers to process responses, but what about having the client request the response type with the Accept header? In your experience, is this header generally usable or not?

    Thanks….

  4. Mark Baker

    I’ve used Accept in the wild, but keep in mind that servers are free to ignore it (and virtually all of them do), so you still need to handle unknown content types.
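A sketch of the practice described in that reply: send Accept as a preference, but treat the response’s Content-Type as authoritative, since the server may have ignored the request header entirely. The preferred type and result strings here are illustrative assumptions.

```python
PREFERRED = "application/xml"  # what this client would like to receive

def request_headers():
    """Express a preference; the server is free to ignore it."""
    return {"Accept": PREFERRED}

def process_response(content_type, body):
    """Trust Content-Type, not the Accept header we sent."""
    media_type = content_type.split(";")[0].strip().lower()
    if media_type == PREFERRED:
        return "parsed as XML"
    # The server ignored our Accept header; fall back rather than fail.
    return "stored unrecognized %s response" % media_type
```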
