According to Steve Jones, REST is a career limiting move;

But as with any technical discussion there is another thing that should drive you when you look at your decisions. This is whether you want to get fired for a decision when something goes wrong, and whether you want your skills to be relevant in 3 years' time.

That would be funny if it weren’t so sad; completely accurate, yet aimed entirely in the wrong direction 8-(

Danny notes that I chimed in with some Web architectural advice for those considering SPARQL updates.

On that page, he asks;

I’d like to hear more from Mark about what he sees as problematic about the current notion of binding. Although the spec seems unusual, the end result does seem to respect WebArch

It does respect Web architecture, but only because it’s read-only. As soon as you need to add mutation support, or indeed any other operation on the same resource, the process fails and what results is not Web-friendly. This is because “operation on the same resource” doesn’t work if the operation is part of the resource name; if the operation changes, the name changes, and therefore so does the resource being identified.

This is the same problem that APIs such as Flickr and del.icio.us suffer from; Web-friendly for read-only, horribly broken for updates.
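To make that concrete, here’s a rough sketch of the difference, using a hypothetical bookmarking service (the host, paths, and parameter names are all invented for illustration, not any real API):

```typescript
// Hypothetical bookmarking service; URIs and field names are assumed.

// Operation-in-the-name style (the Flickr/del.icio.us pattern): the URI changes
// whenever the operation does, so the bookmark itself has no stable name that
// links, caches, or other HTTP machinery can make use of.
async function rpcStyle(): Promise<void> {
  await fetch("https://api.example.com/?method=bookmarks.get&id=42");
  await fetch("https://api.example.com/?method=bookmarks.update&id=42&title=New");
}

// Resource-identification style: one URI names the bookmark, and the uniform
// HTTP methods carry the operation. Reads and writes target the same resource.
async function webStyle(): Promise<void> {
  const uri = "https://api.example.com/bookmarks/42";
  await fetch(uri); // GET by default: retrieve a representation
  await fetch(uri, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title: "New" }),
  }); // PUT: update the same resource, identified by the same URI
}
```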

Making something Web-friendly means mapping your data and services into a set of inter-linked resources. Application-specific APIs work directly against that.

And FWIW, from a REST POV the constraint that’s being disregarded in these scenarios is commonly resource identification.

I like how news of Sam and Leonard’s REST book is kicking off a new REST/SOAP thread. This time though, it seems the tables are turned and it’s the Web services proponents who are having to defend their preferred architectural style, rather than the other way around. It’s about freakin’ time! I’m kinda tuckered 8-O

Sanjiva chimes in, in response to Sam’s probing questions;

ARGH. I wish people will stop saying flatly incorrect things about a SOAP message: it does NOT contain the method name. That’s just a way of interpreting SOAP messages […] SOAP is just carrying some XML and in some cases people may want to interpret the XML as a representation of a procedure call/response. However, that’s a choice of interpretation for users and two parties of a communication may interpret it differently and still interoperate.

Oh my.

Every message has a “method”; a symbol defined to have a certain meaning, understood by both sender and recipient, so that the sender knows what it’s asking to be done and the recipient knows what it’s being asked to do. A recipient which doesn’t understand what is being asked of it cannot process the message, for hopefully obvious reasons.

What Sanjiva’s talking about there is ambiguity as a feature of Web services; that some recipients will interpret a message to mean one thing, while others another. Note that this is very different from what the recipients actually do in response to the message; that can and should, obviously, vary. But varying interpretations of the meaning of the message? Absurd.
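To see where that “method” symbol lives in practice, compare an HTTP request with an RPC-style SOAP message (both are invented examples; the operation and element names are assumptions, not anyone’s actual interface):

```typescript
// Illustrative only; the operation names below are made up.

// In HTTP, the method is the standardized request method itself.
const httpRequest = {
  method: "PUT", // the shared, pre-agreed symbol
  target: "/accounts/7",
  body: '{ "balance": 100 }',
};

// In RPC-style SOAP, the convention is that the Body's child element names the
// operation, here <transferFunds>. Sender and recipient must agree on what that
// symbol means; if one reads it as "move money" and the other as "report a
// balance", they haven't interoperated, they've miscommunicated.
const soapRequest = `
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <transferFunds xmlns="urn:example:bank">
      <from>7</from><to>9</to><amount>100</amount>
    </transferFunds>
  </soap:Body>
</soap:Envelope>`;
```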

Jorgen tries to convince us that Web 2.0 Needs WS-*. But he’s going to have to do a lot better than arguments like this;

And, as if to underscore why I don’t see the REST / POX / AJAX “religion” achieving too much traction among enterprises, try explaining the phrase “The Web is All About Relinquishing Control” to any corporate security manager!

Well, if Jorgen had read what Alex was saying about relinquishing control, he might not think it such an insurmountable task;

This is possible because no one owns the web, and consequently no one can control it. The web is everyone’s property, and everyone is welcome to join in. It’s the world of sharing. The need for control has been relinquished, and it is left to the participants to sort their discrepancies out. The web as a medium and as a technology is not going to offer any assistance in that regard.

In other words, relinquishing control is largely about adopting public standards in lieu of pursuing proprietary interests, in particular the public Web standards that make inter-agency document-oriented integration (relatively) simple to achieve. If you are responsible for securing an Intranet, it should be your first and primary consideration to trust messages which are based on publicly vetted agreements, like most Web messages, and similarly, to distrust those messages whose complete semantics are not publicly vetted, like most SOAP messages.

I previously pointed to the announcement of the shutdown of a version of the Google Adwords API, and commented that this really isn’t the way to go about versioning your Web 2.0 APIs.

I’ve been digging into Google Maps recently, and noticed that they’re making the same mistake;

The v=2 part of the URL http://maps.google.com/maps?file=api&v=2 refers to “Version 2” of the API. When we do a significant update to the API in the future, we will change the version number and post a notice on Google Code and the Maps API discussion group.

After a new version is released, we will try to run the old and new versions concurrently for about a month. After a month, the old version will be turned off, and code that uses the old version will no longer work.

Obviously we’re still pretty early into the whole “mashup” thing, but not so early that we shouldn’t be thinking about best practices, IMO. And best practice #1? Don’t do what Google’s doing here, which is asking all their users, the mashup developers who have committed themselves to this service, to absorb the cost of their inability to develop an extensible API.
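For a sense of where that cost lands, here’s roughly what the dependency looks like from the mashup side (a hypothetical page; the key parameter and loader function are assumptions for illustration):

```typescript
// Hypothetical mashup bootstrap code; details assumed for illustration.
function loadMapsApi(): void {
  const script = document.createElement("script");
  // The version is baked into the URI that every deployed copy of this page carries.
  script.src = "http://maps.google.com/maps?file=api&v=2&key=YOUR_KEY";
  document.head.appendChild(script);
}

loadMapsApi();

// When v=2 is switched off a month after its successor ships, every page like
// this one breaks until its developer edits and redeploys it. Multiply by the
// number of mashups, and that's the cost the provider has pushed onto its users.
```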

I think a good rule of thumb for service providers is to assume that you’ve got a million mashups using your service, and therefore that the cost of incompatible changes is prohibitive for the mashup developers. Any other approach is sure to drive those developers to other service providers who do a better job evolving their APIs; bad juju if your business depends on attracting eyeballs.

So what constitutes a “better job”? I suggest it has something to do with declarative Javascript, but my argument needs some work so I’ll save that for another post.

An important, nay, foundational part of my mental model for how Internet scale systems work (and many other things, in fact), is that I view standards as axioms.

In linear algebra, there’s the concept of span, which is, effectively, a function that takes a set of vectors as input and yields the vector space spanned by those vectors; the set of all reachable points. Also, for any given vector space you can find a set of axioms – a minimal set of vectors which are linearly independent of one another (none can be expressed as a combination of the others), but which still span the space (note: by “axioms” here I mean a set of basis vectors).
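In symbols (standard definitions, stated here only to pin the analogy down):

```latex
% The span of a set of vectors is everything reachable by linear combination:
\[
  \operatorname{span}(v_1,\dots,v_n)
  = \Bigl\{\, a_1 v_1 + \dots + a_n v_n \;\Bigm|\; a_1,\dots,a_n \in \mathbb{R} \,\Bigr\}
\]
% A basis (the "axioms") is a minimal such set: it spans the space, and no v_i
% lies in the span of the others. Add an independent vector and the reachable
% space grows; drop one and it shrinks.
```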

So given, say, HTTP and URIs as axioms (because they’re independent), I can picture the space reachable using those axioms, which is the set of all tasks that can be coordinated without any additional (beyond the axioms) a priori agreement; in this case, the ability to exchange data between untrusted parties over the Internet. I can also easily add other axioms to the fold and envision how the space expands, so I can understand what adding the new axiom buys me. For example, I can understand what adding RDF to the Web gives me.

More interestingly (though far more difficult – entropy sucks), I can work backwards by imagining how I want the space to look, then figure out what axiom – what new pervasively deployed standard – would give me the desired result.

As mentioned, I try to evaluate many things this way, at least where I know enough to (even roughly) identify the axioms. It’s why Web services first set off my bunk-o-meter: treating HTTP as a transport protocol is akin to replacing my HTTP axiom with a TCP axiom, which severely shrinks the set of possible things that can be coordinated … to the empty set, in fact. Oops!

See also; mu, the Stack, Alexander, software architecture.

Just posted a new entry at my O’Reilly weblog about SOAPAction.