Well, that didn’t take long. The day after I posted it, I got an email from David Nüscheler, CTO at Day Software, asking if I’d be interested in working with them. Given that my friend and accidental mentor Roy works there, and that the company is obviously very clueful about the Web and open source, it seemed like a terrific fit. In my discussions with them since then, that impression has been confirmed over and over again. Honestly, I don’t know why I didn’t approach them.

I’ll be joining their solutions team, which means I’ll be helping with the design, development, and deployment of custom solutions built upon Day’s product line. The position isn’t unlike the consulting work I’d been doing for much of the past six years, only more hands-on (yeah!), and of course, Day-specific.

It also means a lot more travel, so watch for me in your neck of the woods on Dopplr.

It was about 2.5 years ago now that I joined Research In Motion – maker of the BlackBerry – for what turned out to be the shortest stint of my career. I was brought in as their “Web 2.0” guy, though as part of the standards organization rather than R&D (which should have been a warning sign). My job, initially, was to write a white paper which described what RIM needed to do to embrace the Web. What’s the standards organization doing defining an R&D roadmap, you might ask? Good question. I wondered the same thing. But that’s not what this post is about.

What it is about is that earlier this week, at the BBDC, RIM made what is, AFAIK, its first announcement on the topic of the Web: Web Signals:

BlackBerry Web Signals leverages RIM’s unique push technology to allow online content providers to automatically notify BlackBerry smartphone users when relevant content has been published and to allow streamlined, one-click access to the online information.

So I dug into the technical overview, and spotted this near the beginning:

To push content to users, content providers must first register their web signals with Research In Motion.

Bzzt!

As they don’t seem to realize, the Web is agreement: a large, complex distributed system made possible by parties who agreed to use its constituent protocols. Publishers agreed because it gave them a low-cost path to distribute information directly to the users who had also made those same agreements (by using an agent which implemented the protocols). Imagine now, if you will, what would have happened to the Web had publishers needed to register with, say, AOL to reach AOL users, or with Comcast to reach Comcast users. What a huge burden! It could be worse; the burden could have fallen on the users instead. But why impose one at all? Remember PQA? My point exactly.

Always, always, always try to do what you need using existing agreement.

In this case – of notification of content changes – RIM had a couple of obvious options. Most simply, they could have used email, though of course the user experience is suboptimal, not to mention the privacy concerns of handing out the user’s email address to every publisher. Alternatively, there’s RSS/Atom, something publishers are already pretty comfortable with. It might even sound a little familiar, seeing as I described the architecture necessary to support it in that white paper I wrote for them.
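If it helps to see it concretely, here’s a minimal sketch of the Atom option: a client polls a publisher’s feed and flags entries it hasn’t seen before. The feed URL is hypothetical, and this is just an illustration of the existing agreement, not anything RIM has specified.

```python
# Minimal sketch: poll a publisher's Atom feed and detect new entries,
# using only existing agreements (HTTP GET + Atom). The feed URL is
# hypothetical; nothing here is specific to RIM or Web Signals.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
FEED_URL = "http://publisher.example.org/news.atom"  # hypothetical

def new_entries(seen_ids):
    """Return feed entries whose atom:id we haven't seen before."""
    with urllib.request.urlopen(FEED_URL) as resp:
        tree = ET.parse(resp)
    fresh = []
    for entry in tree.getroot().findall(ATOM + "entry"):
        entry_id = entry.findtext(ATOM + "id")
        if entry_id and entry_id not in seen_ids:
            seen_ids.add(entry_id)
            fresh.append({
                "id": entry_id,
                "title": entry.findtext(ATOM + "title"),
                "updated": entry.findtext(ATOM + "updated"),
            })
    return fresh

seen = set()
for entry in new_entries(seen):
    print("notify user:", entry["title"])
```

No registration with anybody required: the publisher publishes a feed, the device (or RIM’s infrastructure) polls it, and the push to the handset happens behind that.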

If you’ve read ahead in that tech overview, you’ll also notice that they predefine their URI structure, and don’t even mention which HTTP method to use on those URIs to send a notification, which probably means that GET does the deed. Yuck.

Come on RIM, get your act together. Competition is heating up, and those guys in Cupertino (mostly) have their act together when it comes to the Web.

It’s good to see Roy take on the pseudo/not-at-all “REST APIs” out there.

As I mentioned in a comment there, I’m no stranger to this kind of interface specification, as I’d guess that about 80% of the “APIs” I reviewed as a consultant suffered from at least one of the problems Roy listed. Fortunately, I found it wasn’t very difficult to get people to see the error of their ways. All I had to do was re-emphasize that REST requires that interfaces be uniform – the same – and therefore that pre-specifying anything specific about a resource, such as URI structure, response codes, media types, resource relationships, etc., is antithetical to that requirement.
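To make the contrast concrete, here’s a rough sketch of the difference between a client that bakes in a pre-specified URI structure and one that discovers URIs from link relations in the representations it receives. The entry point, URIs, and relation names are all invented for illustration.

```python
# Sketch of the difference, with hypothetical URIs and relation names.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Antithetical to the uniform interface: the client bakes in the
# server's URI structure, so the server can never change it.
order_uri = "http://api.example.org/customers/42/orders/7"

# Hypermedia-friendly: start at a published entry point and follow
# links advertised in the representation itself.
def follow(uri, rel):
    """GET a representation and return the href of the link with the given rel."""
    with urllib.request.urlopen(uri) as resp:
        root = ET.parse(resp).getroot()
    for link in root.iter(ATOM + "link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

entry_point = "http://api.example.org/"     # the only URI the client knows up front
orders_uri = follow(entry_point, "orders")  # everything else is discovered at runtime
```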

David Peterson defends SimpleDB‘s use of HTTP GET for mutation actions by appealing to the HTTP spec itself, specifically the section on pipelining, where it says:

Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2). Otherwise, a premature termination of the transport connection could lead to indeterminate results. A client wishing to send a non-idempotent request SHOULD wait to send that request until it has received the response status for the previous request.

His argument is that because POST isn’t idempotent, it couldn’t be used for pipelining, and therefore that GET could be used instead. There are two fatal flaws with this argument, however. The first is that PUT is idempotent and is also a mutator, so you can pipeline it no problem (modulo the concern about sequences of requests). The second is that if both the client and server understand that an “Action” parameter specifies the actual action to be taken (overriding the HTTP method), then if Action specifies a non-idempotent operation, you’re still going to run into the same indeterminism problem that the HTTP spec warns against: what matters is the effective method of the message, not just the HTTP method.
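A small, hypothetical illustration of that last point: once the real operation travels in an “Action” parameter, the method on the request line stops telling you anything useful about idempotence. The endpoint and the “AppendToLog” action below are made up; they aren’t real SimpleDB operations.

```python
# Hypothetical illustration: tunnelling the real operation through a
# query parameter means the HTTP method no longer tells you whether a
# request is safe to pipeline or retry.
from urllib.parse import urlencode

BASE = "http://sdb.example.com/"  # stand-in endpoint, not the real SimpleDB host

def simpledb_style_request(action, **params):
    """Build a GET request line whose *effective* method is carried in 'Action'."""
    query = urlencode({"Action": action, **params})
    return "GET " + BASE + "?" + query + " HTTP/1.1"

# Nominally GET, so a client or intermediary may happily pipeline or retry it...
req = simpledb_style_request("AppendToLog", Item="order-7", Data="shipped")

# ...but if "AppendToLog" (a made-up, non-idempotent action) runs twice
# because a connection died mid-pipeline, the log now has two entries.
# The indeterminacy the HTTP spec warns about follows the effective
# method, not the method on the request line.
print(req)
```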

It’s also interesting that the example Dare uses is of an Action value “PutAttributes”, which is presumably idempotent, doh!

Nope, Amazon blew it, again. I’ve offered them my services a couple times already, but they’ve not taken me up on my offer yet. They really, really(!) should before they publish another service.

Sam Ruby writes:
A much more interesting question to me is whether PATCH will operate at the content level or the transfer level. Or, to put it another way, will patch operate at the infoset level, or will it be able to be directly applied to HTML as she is written?

PATCH means whatever the spec says it means. Anything else is a function of either the diff media type in use, or the particular implementation of the server that processes the message.
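In other words, something like the following sketch: the method says “apply this diff to this resource”, and everything interesting is carried by the diff media type. The host, path, and media type here are all hypothetical.

```python
# Sketch: the PATCH method itself only says "apply this diff"; what the
# diff means is up to the media type in the Content-Type header. The
# host, path, and media type below are all hypothetical.
import http.client

diff_body = b"@@ -1,3 +1,3 @@\n-<h1>Old title</h1>\n+<h1>New title</h1>\n"

conn = http.client.HTTPConnection("www.example.org")
conn.request(
    "PATCH",
    "/articles/42",
    body=diff_body,
    headers={"Content-Type": "text/x-example-diff"},  # made-up diff media type
)
resp = conn.getresponse()
print(resp.status, resp.reason)
```

Whether that diff targets an infoset or the serialized HTML is the diff format’s business, not PATCH’s.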


My latest article is up on InfoQ. I hope it helps demystify the “Hypermedia as the engine of application state” constraint just a little bit.

I’ve been thinking about writing this for some time now. In fact, it’s probably way overdue. But there’s no better time than the present, as they say.

I’ve had enough. I’m not participating in any more “REST vs. SOAP” discussions. When I started on this mission to educate those who didn’t understand how the Web could help them, I figured it would be pretty straightforward; I’d explain it, they’d understand, and then we’d all skip away hand-in-hand whistling show tunes. Of course, it didn’t quite work out that way. Instead, I ended up spending on the order of $100K of my own money on travel, as well as the opportunity cost of many hundreds of otherwise billable hours, for what is working out to be essentially nothing in return. If that weren’t enough, my health has suffered the past year or so, in ways I won’t get into here, but that I’m confident are in part attributable to the despair I’ve felt over this extended period of frustration.

The war really has been won, I realize that now. And I’m happy to pat myself on the back for a job well done, despite what it’s cost me. Would I do it over again? No bloody way. I should have just “pulled a Roy” and continued to work with and improve the Web, restricting my Web services standards work to trying to minimize the harm Web services were doing to the Web (which I was doing, but I went way beyond that). I think Max Planck got it right when he said:

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

Oh, and one last thing. I told you so. There, that felt good.

I’m with Stefan. Steve Vinoski’s always been somebody you want to listen very carefully to regarding distributed computing and enterprise integration, but even more so now that he’s left the employ of a company with skin in that game. That message isn’t entirely new; it’s just more obviously the message 8-)

Mark Little responds to an interesting post by Bill Burke about compensation-based transactions. I don’t really have any direct response to the gist of that discussion, but wanted to highlight a couple of Mark’s arguments, which I consider to be probably the top two made by those who feel there’s value in both the Web and Web services (the “fence sitters”, as Mark recalls me calling them 8-).

First up, the belief that the Web has nothing to say about reliability, transactions, etc. Mark writes:

Yes, we have interoperability on the WWW (ignoring the differences in HTML syntax and browsers). But we do not have interoperability for transactions, reliable messaging, workflow etc. That’s not to say we can’t do it: as I said before, we did manage to do REST+transactions in HP but it was in a small-scale deployment involving only a couple of partners. There is no technical impediment to doing this: it’s entirely political. It can be done, I just don’t see it ever being done. Until it happens, REST/HTTP cannot compete with the kinds of heterogeneous out-of-the-box interoperability that we have demonstrated with WS-*.

I’ve talked about this a lot, most recently in my position paper to the W3C Workshop on Enterprise Services. The gist of the argument is that the Web addresses all of those needs, just in a way which you might not recognize, because it has to address them within the confines of architectural constraints that Web services folks aren’t used to. Again, that’s not to say that every possible one of your needs can be met out of the box today, only that far more of them can than you might believe.
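One small example of what I mean, sketched under my own assumptions rather than taken from the workshop paper: a “reliable delivery” need met with nothing more than HTTP’s existing agreement that PUT is idempotent, so a client that never saw a response can simply resend the same request. The URI and payload are invented.

```python
# Sketch: "reliable" submission without a reliability protocol, by
# leaning on agreement HTTP already gives us: PUT is idempotent, so a
# client that never saw a response can safely resend the same request.
# The resource URI and payload are hypothetical.
import time
import urllib.request
import urllib.error

def reliable_put(uri, body, content_type, attempts=5):
    for attempt in range(attempts):
        req = urllib.request.Request(
            uri, data=body, method="PUT",
            headers={"Content-Type": content_type},
        )
        try:
            with urllib.request.urlopen(req) as resp:
                if 200 <= resp.status < 300:
                    return resp.status  # success; a duplicate PUT would have been harmless
        except urllib.error.HTTPError:
            raise  # a definite response from the server: don't retry blindly
        except urllib.error.URLError:
            pass   # no response at all (timeout, dropped connection): safe to resend
        time.sleep(2 ** attempt)  # back off before trying again
    raise RuntimeError("gave up after %d attempts" % attempts)

reliable_put(
    "http://example.org/orders/2008-0117",
    b'{"item": "widget", "quantity": 3}',
    "application/json",
)
```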

Mark also uses the very common argument that because interoperability requires agreement on data for both Web and Web services, there’s no significant difference between them (I hope that summarizes his point):

So just because I decide to use REST and HTTP doesn’t mean I get instant portability and interoperability. Yes, I get interoperability at the low level, but it says nothing about interoperability at the payload.

I can’t quickly find any past blog entries that touch on this point (though I know they’re there), but I find this argument the most confusing. I suspect it has to do with what I perceive to be a disconnect between Internet and intranet protocol stacks, but I can’t say for sure.

What Mark calls the “low level” isn’t the low level at all. Assuming he means HTTP, the agreement you get by using it is more (higher-level) agreement than you’d get if you were just using SOAP (or XML-RPC or IIOP or BEEP or …). That’s because you’re agreeing on the methods in addition to an envelope (not to mention many other features).
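A rough side-by-side, with invented resource and operation names, of what that extra agreement looks like on the wire:

```python
# Rough illustration of the "level of agreement" point; the resource and
# operation names are invented. With HTTP, any generic client or
# intermediary knows what the message does, because the method is part
# of the shared, uniform agreement:
http_message = """\
PUT /customers/42/address HTTP/1.1
Host: example.org
Content-Type: application/xml

<address><street>101 Main St</street></address>
"""

# With bare SOAP, the parties agree only on the envelope; the actual
# operation ("UpdateCustomerAddress") is private, application-level
# vocabulary that nothing generic can act on:
soap_message = """\
POST /customerService HTTP/1.1
Host: example.org
Content-Type: text/xml

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <UpdateCustomerAddress>
      <customerId>42</customerId>
      <street>101 Main St</street>
    </UpdateCustomerAddress>
  </soap:Body>
</soap:Envelope>
"""
```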