AppEngine is to Amazon Web Services as HTTP is to SOAP.

Needless to say, I’m a fan.

Once again I’m happy to be a part of the program committee for the DOA conference. I’ve found the quality of papers there to always be quite high … yes, even some of the Web services ones often have something to contribute (I can forgive one wrong assumption 8-).

The CFP for DOA 2008 has just been posted. Please consider submitting.

Opera’s surely been feeling the heat from WebKit given that it’s basically taking over the world, including mobile. So here’s a thought: why don’t they abandon their own rendering engine, Presto, and adopt WebKit? Then instead of being “that other browser”, which developers are loath to bother testing for, they’d be the best (from what I’ve heard of Safari) WebKit-based browser out there. Seems a no-brainer to me.

Dear LazyWeb;

I’m still undecided about which pill I’ll swallow when I finally upgrade my ancient-but-reliable laptop. But whatever my choice, I realize I’m going to have a large problem, the same problem that afflicts so many other laptop-toting, standards-wrangling, conference-schmoozing geeks like myself: I’ll need a whole new batch of stickers! What I think would be nice is a service that let me design my own cover digitally, then print it out on some kind of thin, sticky, clear laminate in the size I needed, perhaps like an Invisible Shield (which has been great on my N95, BTW).

Hey, it looks like Web3S was replaced by Atom/APP. Awesome. I think “Why not Atom?” was one of my first questions of Yaron when he described Web3S to me last year. I’m confident this is for the best. In addition to Atom/APP being existing standards (with an accompanying abundance of existing tooling), Microsoft will also gain the evolutionary advantages of the hypermedia-as-the-engine-of-application-state constraint, which Web3S opted to replace with a schema-driven application model.

Kudos to everybody involved in that decision.

David Peterson defends SimpleDB‘s use of HTTP GET for mutation actions by appealing to the HTTP spec itself, specifically the section on pipelining, where it says;

Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2). Otherwise, a premature termination of the transport connection could lead to indeterminate results. A client wishing to send a non-idempotent request SHOULD wait to send that request until it has received the response status for the previous request.

His argument is that because POST isn’t idempotent, it can’t be pipelined, and therefore GET should be used instead. There are two fatal flaws with this argument, however. The first is that PUT is idempotent, and is also a mutator, so you can pipeline it no problem (modulo the concern about sequences of requests). The second is that if both the client and server understand that an “Action” parameter specifies the actual action to be taken (overriding the HTTP method), then if Action specifies a non-idempotent action, you’re still going to run into the same indeterminism problem that the HTTP spec warns against: what matters is the effective method of the message, not just the HTTP method on the wire.
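To make that second flaw concrete, here’s a sketch in Python of the kind of request this design invites. The endpoint, parameter names, and the “AppendAttributeValue” action are all hypothetical stand-ins for a non-idempotent operation tunnelled through GET; the point is that nothing on the wire tells a pipelining client or an intermediary that this GET is unsafe to replay.

```python
# Sketch only: the endpoint, parameter names, and the non-idempotent
# "AppendAttributeValue" action are all hypothetical.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "Action": "AppendAttributeValue",  # the *effective* method: not idempotent
    "DomainName": "mydomain",
    "ItemName": "item123",
    "Attribute.Value": "new-tag",
})

# On the wire this is a plain GET, which clients may freely pipeline
# and intermediaries may retry or cache -- behaviour that's only safe
# because GET is supposed to be idempotent and side-effect free.
req = urllib.request.Request("https://sdb.example.com/?" + params)
# urllib.request.urlopen(req)  # replaying this would append a second value
```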

It’s also interesting that the example Dare uses is of an Action value “PutAttributes”, which is presumably idempotent, doh!

Nope, Amazon blew it, again. I’ve offered them my services a couple times already, but they’ve not taken me up on my offer yet. They really, really(!) should before they publish another service.

Sam Ruby writes;
A much more interesting question to me is whether PATCH will operate at the content level or the transfer level. Or, to put it another way, will patch operate at the infoset level, or will it be able to be directly applied to HTML as she is written?

PATCH means whatever the spec says it means. Anything else is a function of either the diff media type in use, or the particular implementation of the server that processes the message.
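As a quick sketch of that distinction (the endpoint and the diff media type here are stand-ins, not anything registered): the request line says only “PATCH”; whether the body gets applied to the bytes of the HTML or to some infoset built from it is entirely a question of what media type the body claims to be.

```python
# Sketch only: example.com and the "text/x-unified-diff" media type
# are hypothetical; PATCH itself just means "apply this set of changes".
import http.client

# A byte-level diff against the HTML as she is written.
diff_body = b"@@ -1 +1 @@\n-<h1>Old</h1>\n+<h1>New</h1>\n"

conn = http.client.HTTPSConnection("example.com")
conn.request(
    "PATCH",
    "/page.html",
    body=diff_body,
    # Swap this for an infoset-level diff format and the very same
    # PATCH method means something entirely different.
    headers={"Content-Type": "text/x-unified-diff"},
)
print(conn.getresponse().status)
```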


Via Stefan, a proposal from the WSO2 gang for an approach to decentralizing media types and removing the requirement for the registration process.

Been there, tried that. I used to think that was a good idea, but no longer do.

Problem one: an abundance of media types is a bad thing for pretty much the same reasons that an abundance of application interfaces is a bad thing; the more that is different, the more difficult interoperability becomes. We need fewer, more general media types, not more specific ones.

Problem two, specific to their solution for this “problem” (which is “application/data-format;uri=http://mediatypes.example.com/foo/bar”): media type parameters don’t affect the semantics of the payload. This solution requires changing the Web to incorporate parameters in that way. Consider: if an existing firewall were configured to block, for example, image/svg+xml content, and SVG were also assigned its own “media type URI” and delivered as application/data-format, that firewall wouldn’t be able to block it. Oops.
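Here’s a minimal sketch of that firewall scenario in Python (the blocklist and the uri value are illustrative). A filter written against today’s rules matches on the media type proper and ignores parameters, exactly as it’s entitled to, so the proposal hides the payload’s real identity from it:

```python
# Sketch: a content filter that follows today's rules, where the
# media type proper identifies the payload and parameters don't.
BLOCKED = {"image/svg+xml"}

def allowed(content_type: str) -> bool:
    # "type/subtype; key=value" -> "type/subtype"
    media_type = content_type.split(";")[0].strip().lower()
    return media_type not in BLOCKED

print(allowed("image/svg+xml"))  # False: blocked, as intended
# The same SVG payload under the proposal sails right through, because
# its real identity now lives in a parameter the filter never consults.
print(allowed("application/data-format;uri=http://mediatypes.example.com/svg"))  # True
```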

Problem three (which mnot convinced me of): having your media type reviewed by the capable volunteers on ietf-types is a good thing. Sure, you could still do that while using a decentralized token/process, but I consider having the motivation for review built into the mechanism a feature, not a bug, especially given problem one above.

Update; here’s an older position of mine.

If you’re a Firefox user (or any browser user, for that matter), run, don’t walk, to download the latest Firefox 3 Beta. Damn, this thing is lean and mean. I’ve got my usual 40-50 tabs open right now and it’s consuming about a third to a quarter of the memory FF2 did on WinXP. Plus tab and new-window creation is instantaneous, even after many hours of use. There are some subtle chrome improvements too, including little things like smooth-scrolling tabs that prevent me from getting lost when I’ve got more than about 10 tabs per window; very useful for Wiki-despamming or reading developer documentation.

Go!