Via Steve, a link to an article on Where HTTP Fails SOAP.

My comments…

Simple Web browsers have been the de facto HTTP client to date, and they are in essence single-threaded clients as far as the server is concerned, making only one request at a time over a given connection.

Hmm, that might have been so in the very early days, but browsers have used multiple outgoing connections for some time. I just checked Firefox, and its property for this – network.http.max-connections-per-server – has a default value of “8”.
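
To make that concrete, here's a minimal sketch of a client doing what browsers do — opening several connections to the same host in parallel, one per request. The URL and the worker count of 8 (mirroring the Firefox default above) are just placeholders for illustration:

    # Sketch: fetch the same host several times in parallel, one
    # connection per request, much as a browser does.
    # The URL is a placeholder; 8 workers mirrors the Firefox default.
    import concurrent.futures
    import urllib.request

    URLS = ["http://example.com/"] * 8

    def fetch(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            return url, resp.status, len(resp.read())

    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        for url, status, size in pool.map(fetch, URLS):
            print(url, status, size, "bytes")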

Also, if you’ve ever written one, a browser is far from “simple”. In fact, the most complex code I’ve ever worked on was a browser.

To back up its claim that SOAP/HTTP can’t scale in some circumstances, the article describes a scenario;

For example, assume we have an online banking system with support for up to 4000 concurrent users. The Web tier comprises a cluster of application server instances behind a hardware HTTP load balancer. In order to fulfill the online banking business function, there are three Unix-hosted services and a mainframe-hosted service utilized by the application server. In a world where SOAP/HTTP is the only protocol, the application server will have to support an incoming connection from the browser, and one additional connection out to each of the four back-office services for each concurrent user. This is because HTTP demands that you wait for a response before you send your next request over that same connection. It has no concept of a request identifier, which is a core requirement to enable connection sharing.

First, what’s so hard about 8000 (or even 10000) connections? Second, HTTP doesn’t demand that you “wait for a response before you send your next request”; all it requires is that the server not send a response to a request before the responses to previous requests have been sent. A smart Web server could still process requests in parallel in some situations. And third, who’s to say this has to be synchronous, or that some AJAX couldn’t spice up the UI to save on long-lived connections?
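
For the curious, here's a minimal sketch of what HTTP/1.1 pipelining looks like on the wire — two requests written back-to-back on one connection before any response is read; the server then returns the responses in request order. The host and paths are placeholders, and real-world server support for pipelining varies:

    # Minimal HTTP/1.1 pipelining sketch: send two GET requests on one
    # connection before reading anything back. The server must return
    # the responses in the same order the requests were sent.
    # (example.com and the paths are placeholders.)
    import socket

    HOST = "example.com"

    requests = (
        "GET /account/summary HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "\r\n"
        "GET /account/history HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"   # close after the last response
        "\r\n"
    )

    sock = socket.create_connection((HOST, 80), timeout=10)
    sock.sendall(requests.encode("ascii"))

    # Responses arrive in request order; a real client would parse
    # Content-Length / chunked framing to split them apart.
    data = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk
    sock.close()
    print(data.decode("iso-8859-1"))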

That’s not to say that I disagree with their conclusion that “a standards-based protocol that allows for request interleaving is needed”. Indeed, I do agree with them. It’s just tricky to deploy, as the community discovered when the ability to remove this constraint from HTTP was specified. As it turns out, extensions which require the reimplementation of both client and server connectors don’t get much love – go figure!

What’s needed is a replacement protocol. And any which provided the value of HTTP, but without this ordering restriction, would be fine with me.

Unfortunately, they misread the HTTP spec when summing up;

This problem has been solved in the past with connection concentrators, but because we cannot interleave HTTP POST requests, HTTP-based communication cannot be concentrated. Clearly, HTTP is not capable of scaling in such an environment.

Nope. What the spec says regarding pipelining and POST is;

   Clients SHOULD NOT pipeline requests using non-idempotent methods or
   non-idempotent sequences of methods (see section 9.1.2). Otherwise, a
   premature termination of the transport connection could lead to
   indeterminate results. A client wishing to send a non-idempotent
   request SHOULD wait to send that request until it has received the
   response status for the previous request.

That’s “SHOULD NOT”, not “MUST NOT”, and the reason is to avoid the partial failure problems that would result from the pipelining of non-idempotent/unsafe requests. It’s not a matter of interop.

And FWIW, this isn’t “interleaving”, which generally refers to the use of logical streams within a single physical stream. Pipelining introduces no logical streams.
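
To make the distinction concrete, here's a toy sketch (not any real protocol) of what interleaving requires: each message carries a request identifier, so responses can come back in any order over a single connection and still be matched up — exactly what HTTP/1.1 pipelining, with its strict response ordering and lack of identifiers, doesn't give you. The frame layout and ids are invented for illustration:

    # Toy illustration of interleaving (NOT a real protocol): each frame
    # carries a request id, so responses may arrive in any order over one
    # connection. Contrast with HTTP/1.1 pipelining, where responses must
    # come back in request order and carry no identifier at all.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        request_id: int   # the identifier HTTP lacks
        payload: bytes

    # Requests go out as 1, 2, 3 on one connection...
    pending = {1: "balance", 2: "history", 3: "transfer"}

    # ...but the responses can come back in whatever order the server likes.
    incoming = [Frame(2, b"history-result"),
                Frame(3, b"transfer-result"),
                Frame(1, b"balance-result")]

    for frame in incoming:
        print(f"request {frame.request_id} ({pending.pop(frame.request_id)}):",
              frame.payload.decode())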

It’s good to see them reference Waka too, though it’s in a very odd context where they discuss protocols such as IIOP, JMS, and MQSeries. Note to authors: those are not HTTP replacements, because they are not application protocols. If somebody offered you a carrot for your Ferrari, would you take them up on it? Didn’t think so.

It’s clear the authors did their homework, which is wonderful. Unfortunately though, they fell right into that “protocol independence” trap, along with much of the rest of the industry.
