Mark Little responds to an interesting post by Bill Burke about compensation-based transactions. I don’t really have any direct response to the gist of that discussion, but I wanted to highlight a couple of Mark’s arguments, which I consider probably the top two offered by those who feel there’s value in both the Web and Web services (the “fence sitters”, as Mark recalls me calling them 8-).

First up, the belief that the Web has nothing to say about reliability, transactions, etc… Mark writes;

Yes, we have interoperability on the WWW (ignoring the differences in HTML syntax and browsers). But we do not have interoperability for transactions, reliable messaging, workflow etc. That’s not to say we can’t do it: as I said before, we did manage to do REST+transactions in HP but it was in a small-scale deployment involving only a couple of partners. There is no technical impediment to doing this: it’s entirely political. It can be done, I just don’t see it ever being done. Until it happens, REST/HTTP cannot compete with the kinds of heterogeneous out-of-the-box interoperability that we have demonstrated with WS-*.

I’ve talked about this a lot, most recently in my position paper to the W3C Workshop on Enterprise Services. The gist of the argument is that the Web addresses all of those needs, just in a way you might not recognize, because it has to address them within the confines of architectural constraints that Web services folks aren’t used to. Again, that’s not to say that every possible one of your needs can be met out of the box today, only that far more of them can than you might believe.

Mark also uses the very common argument that because interoperability requires agreement on data for both the Web and Web services, there’s no significant difference between them (I hope that summarizes his point);

So just because I decide to use REST and HTTP doesn’t mean I get instant portability and interoperability. Yes, I get interoperability at the low level, but it says nothing about interoperability at the payload.

I can’t quickly find any past blog entries that touch on this point (though I know they’re there), but I find this argument the most confusing. I suspect it has to do with what I perceive to be a disconnect between Internet and intranet protocol stacks, but I can’t say for sure.

What Mark calls the “low level” isn’t the low level at all. Assuming he means HTTP, the agreement you get by using it is more (and higher-level) agreement than you would get by just using SOAP (or XML-RPC or IIOP or BEEP or …). That’s because you’re agreeing on the methods in addition to an envelope (not to mention many other features).
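
To make this concrete, here’s a minimal sketch in Python of what two parties have already agreed on when they use plain HTTP, versus when they tunnel a SOAP envelope over it. The host, the paths and the “getOrder” operation are all invented for illustration;

    import http.client

    # Plain HTTP: the method (GET), the headers and the status codes are all
    # defined by HTTP itself; only the representation format remains open.
    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/orders/42", headers={"Accept": "application/xml"})
    resp = conn.getresponse()
    print(resp.status, resp.getheader("Content-Type"))

    # SOAP over HTTP: HTTP is reduced to a transport. The operation name lives
    # inside the envelope, so the two parties must also agree on that interface
    # out of band (e.g. via a WSDL) before they can talk at all.
    soap_body = ('<soap:Envelope'
                 ' xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
                 '<soap:Body><getOrder><id>42</id></getOrder></soap:Body>'
                 '</soap:Envelope>')
    conn2 = http.client.HTTPConnection("example.org")
    conn2.request("POST", "/orderService", body=soap_body,
                  headers={"Content-Type": "application/soap+xml"})
    print(conn2.getresponse().status)

An intermediary (a cache, a shared proxy) can tell what the first request is doing just by looking at the method and headers; with the second, everything interesting is hidden inside the POSTed body.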


Comments

  1. Mark Little

    From my position on the fence I don’t see how you’ve addressed my concerns at all. As I said: just because we can agree on a way of transferring messages/documents does not mean we have agreed on the way in which information within those documents is represented. That’s pretty darn important if you want to have interoperable interactions across an arbitrary number of participants. One-to-one is trivial. One-to-many (heterogeneous) requires a lot of standards work, which simply hasn’t happened except in WS-*.

  2. Mark Little

    BTW, it may not have been clear, but in my blog entry I was assuming only two levels: a low level for how to get a message/document/payload to a recipient, and a high level for defining what is in the message/document/payload. The example I gave was envelope/letter (for low-level real world equivalent) and writing/language/structure (for high-level real world equivalent).

  3. I’m a huge REST noob, but what I like about REST and REST over HTTP in particular is that the client framework, protocol, and server framework are all separately defined. They are not tightly coupled with one another. If you’re using a client side REST framework, your server does not need to use the same framework, or even a framework at all.

    As for the “low level” argument, I think Mark’s real point is about contextual data like TX, security, etc… The real point of my blog was to figure out a way to redefine the problem to avoid the need for this type of contextual data at all. I just really can’t get away from security though, and can’t figure out a way to redefine the problem there… any links/pointers would be appreciated.

  4. I believe I did address your data-oriented concern, but perhaps you didn’t recognize that I was doing so 8-)

    Ok, let’s break it down.

    In order for two independently developed and deployed applications to interoperate, they need to agree on standardized versions of the following things;

    1. transport – how bits are moved from one box to another over a network
    2. message envelope – the more interesting outer packaging for the message
    3. interface – the methods and modifiers (headers) that go in the envelope
    4. data – the data payload that goes in the envelope

    Ok so far?
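
    To ground those four layers, here’s a minimal sketch in Python that spells them out by hand for an ordinary Web request. The host and path are invented, and in practice you’d let an HTTP library do this rather than writing to the socket yourself;

        import socket

        # 1. transport: a TCP connection moves the bits from one box to another.
        s = socket.create_connection(("example.org", 80))

        # 2. envelope and 3. interface: the HTTP message is the packaging, and
        # the standard method (GET) plus standard headers are the interface.
        s.sendall(b"GET /some/report HTTP/1.1\r\n"
                  b"Host: example.org\r\n"
                  b"Accept: application/octet-stream\r\n"
                  b"Connection: close\r\n\r\n")

        # 4. data: whatever comes back after the response headers is in whatever
        # format the publisher chose; nothing above this line standardizes that.
        response = s.recv(65536)
        s.close()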

    When somebody sets up a Web server on the Internet to publish some data, which of those items are they using a standardized solution for? Let’s assume the data is in a proprietary format (which rules out #4 of course).

    Now what if somebody else sets up a SOAP server on the Internet to make that same information available? What’s your answer now?

    I’m trying to make sure we’re doing an apples-to-apples comparison, which IMO we haven’t been doing since Web services began. I agree that the data problem is hard, but I believe it’s the same problem whether you’re using the Web or Web services, so we can ignore it when comparing the two.

  5. Mark B, you’ll always win on the #1, #3, and #4 arguments. Envelope definition is where REST falls over. A distributed transaction requires a protocol and payload definition. I don’t believe you can always redefine away a DTX requirement. MarkL is saying that if you have more than one service you are coordinating, cross-cutting concerns like DTX need to be standardized, or your coordinator has a huge integration mess. Sure, that might mean big bucks for JBoss ESB, but it sucks for the user.

  6. Mark Little

    I think we are comparing apples-to-apples. If you check what I’ve said on the subject (and I’ve said it several times): this is not a technological problem, it’s a political problem. I’ve done transactions on REST before, back in 1999/2000. You and I first met as a direct result of that back then ;-)

    My point, and one of the reasons I’m sat on this crowded fence, is that the agreement on the data format/payload is critical to doing anything with REST that compares with the levels of interoperability we’ve seen so far with WS-* between an arbitrary number of heterogeneous implementations.

    So maybe to simplify this discussion: I don’t disagree with you that this can be done (and never have). I disagree with you that it’s easy to do, because it isn’t, due to political factors. Try it: I have been involved with standards (not just Web Services) since 1990 and can say with some confidence that it’s darn hard to get people to agree to the levels we have managed with WS-*. I don’t see that happening around REST.

  7. I hear you, I really do, but I’m trying to have a conversation about the “agreement on the data format/payload”.

    I agree with you 100% that it’s critical to doing anything with REST. But I claim that it’s just as critical to doing anything with SOA/WS. Do you disagree? If not, then perhaps you can answer my question in my comment above.

  8. Mark Little

    “But I claim that it’s just as critical to doing anything with SOA/WS. Do you disagree?”

    I agree and thought I’d been saying so all along. That was my point though: this is all done in Web Services already. We’ve got that agreement.

    Are we arguing at cross purposes? Or are we violently agreeing ;-)?

  9. “REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability.” — Fielding, 2000

    REST without standard content types is not REST, just a step along the road to REST. The political process of defining these content types remains a vitally important hard problem, and no single standards body appears set up to solve the world’s problems on this. On the other hand, there are thousands of *ML standards and mini-organisations in use around the world. A lack of central oversight certainly results in uneven quality, and that is something to be concerned about. Document-oriented SOA has the same problem in defining content types, but theoretically gains for one (Doc-SOA) are also gains for the other (REST).

    In order to communicate, you have to agree. That’s fundamental. REST is a better model than the null architectural style to achieve that agreement, and it is good to see many SOA advocates quietly absorbing REST principles one at a time.

    Benjamin

  10. “this is all done in Web Services already. We’ve got that agreement”

    Hang on a sec. The agreement I was talking about was agreement on the data. Outside of HTML – which I’m happy to ignore for this discussion – I don’t think either REST or SOA has any answer to the data problem, except, as Benjamin notes, in the context of “thousands of *ML” organizations.

    The agreement that Web services do provide, using my 4-point model above, is #1 and #2. Agreed?

  11. Geez, will this never end!

    It’s absolutely true you can do everything using HTTP that you can do with other approaches, and vice versa.

    What ends up creating these (in my view) unproductive discussions are the background assumptions.

    RESTafarians assume everything is HTTP, or can be, if everyone would just agree.

    Existing enterprise IT environments are not based on HTTP and do not follow the constraints of REST.

    This is the real issue, and it won’t go away simply by suggesting that everyone wake up and use HTTP for everything, since it’s possible.

    Web services, like them or not, were designed in large part for compatibility with existing systems.

    One of the more interesting aspects of Web services is their “multi-protocol” capability, but this is where the disconnect with the HTTP world also happens.

    If you assume “everything is or can be HTTP” you get one set of arguments. If you assume “everything will be multi-protocol for the foreseeable future” you get another set of arguments, and as they say, never the twain shall meet (whatever that means ;-)

  12. Eric, I’ve never argued that everything should be based on HTTP. Also, when I argue that REST is superior for enterprise environments – and I’m sure I’ve said this to you several times – I am not arguing that existing systems be rewritten, only that they be wrapped with a RESTful Facade (not necessarily with HTTP even!).
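
    For what it’s worth, here’s a minimal sketch of that facade idea, using HTTP only for concreteness (as I said, HTTP isn’t required). The legacy_lookup function, the /customers path and the port are all invented stand-ins;

        from http.server import BaseHTTPRequestHandler, HTTPServer

        def legacy_lookup(customer_id):
            # Stand-in for the existing system: in real life this might be a
            # mainframe call, a stored procedure, a message-queue request/reply.
            return ('{"id": "%s", "status": "active"}' % customer_id).encode()

        class Facade(BaseHTTPRequestHandler):
            def do_GET(self):
                # Expose the legacy lookup as a resource: GET /customers/{id}
                if self.path.startswith("/customers/"):
                    body = legacy_lookup(self.path.rsplit("/", 1)[-1])
                    self.send_response(200)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(body)
                else:
                    self.send_error(404)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), Facade).serve_forever()

    The existing system isn’t rewritten; it just gets a uniform interface put in front of it, and the same wrapping could be done over a non-HTTP protocol that respects the REST constraints.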
