Tap, tap. Is this thing on? So if you’ve read this blog at all in the past, you know that this topic pops up every so often. That would be because it’s difficult. The context for this instance of the discussion is a Twitter thread started by Erik, in which I argue that what Erik called “by value” semantics violates REST’s stateless constraint (and therefore self-description, as stateless is a sub-constraint). This is because the constraint is defined as;
[…] each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server
And this leads us to look at a sample message – a JSON-LD document with an @context declaration – and how we determine what that message means. Using the example from the front of the JSON-LD site;
{
  "@context": "http://json-ld.org/contexts/person.jsonld",
  "@id": "http://dbpedia.org/resource/John_Lennon",
  "name": "John Lennon",
  "born": "1940-10-09",
  "spouse": "http://dbpedia.org/resource/Cynthia_Lennon"
}
… sending this to someone is intended to communicate a set of RDF triples, including this one, where the “name” string is supposed to expand to the full FOAF name property URI;
<http://dbpedia.org/resource/John_Lennon> <http://xmlns.com/foaf/0.1/name> "John Lennon" .
So to even “understand” the request, we need to resolve the @context URI to retrieve additional information. Therefore: stateful, and also not self-descriptive. Another way to look at this is from an archivist’s POV. If I store that JSON-LD document away for 10 years, restore it, and try to understand what it meant, can I? Obviously in this case you’d need that resolved document to have not changed in ways which change the meaning of our JSON-LD document; for example, by not re-binding “name” to rdfs:label.
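To make the resolution step concrete, here’s a sketch using the jsonld.js processor (the library choice is mine; any conformant JSON-LD processor behaves the same way). Note that expand() has to dereference the remote @context before the short “name” key means anything at all;

// Sketch only; assumes the jsonld.js library (npm: jsonld).
const jsonld = require('jsonld');

const doc = {
  "@context": "http://json-ld.org/contexts/person.jsonld",
  "@id": "http://dbpedia.org/resource/John_Lennon",
  "name": "John Lennon",
  "born": "1940-10-09",
  "spouse": "http://dbpedia.org/resource/Cynthia_Lennon"
};

// expand() fetches the @context over the network – stored context
// beyond the message itself – and only then can it map "name" to
// http://xmlns.com/foaf/0.1/name.
jsonld.expand(doc).then(function (expanded) {
  console.log(JSON.stringify(expanded, null, 2));
});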

Alas, I’m once again looking for work, if anybody’s got any cool Web-ish – or would-be Web-ish – projects or products that they’re working on. Contract, full time, part time, I don’t care, as long as it’s interesting, challenging, and I can learn something.

Yes, the WSTF. I don’t have any commentary to offer, I just had to post something so I could use that subject line 8-).

It was about 2.5 years ago now, that I joined Research in Motion – makers of the Blackberry – for what turned out to be the shortest stint of my career. I was brought in as their “Web 2.0” guy, though as part of the standards organization rather than R&D (which should have been a warning sign). My job, initially, was to write a white paper which described what RIM needed to do to embrace the Web. What’s the standards organization doing defining an R&D roadmap, you might ask? Good question. I wondered the same thing. But that’s not what this post is about.

What it is about is that earlier this week, at the BBDC, RIM announced what is, AFAIK, its first offering on the topic of the Web; Web Signals;

BlackBerry Web Signals leverages RIM’s unique push technology to allow online content providers to automatically notify BlackBerry smartphone users when relevant content has been published and to allow streamlined, one-click access to the online information.

So I dug into the technical overview, and spotted this near the beginning;

To push content to users, content providers must first register their web signals with Research In Motion.

Bzzt!

As they don’t seem to realize, the Web is agreement; a large, complex distributed system made possible by parties who agreed to use its constituent protocols. Publishers agreed because it gave them a low-cost path to distribute information directly to the users who had also made those same agreements (by using an agent which implemented the protocols). Imagine now, if you will, what would have happened to the Web had publishers needed to register with, say, AOL to reach AOL users, or with Comcast to reach Comcast users. What a huge burden! It could be worse – the burden could be on the users – but why bother with one at all? Remember PQA? My point exactly.

Always, always, always try to do what you need using existing agreement.

In this case – notification of content changes – RIM had a couple of obvious options. Most simply, they could have used email, though of course the user experience is suboptimal, not to mention the privacy concerns of handing out the user’s email address to every publisher. Alternately, there’s RSS/Atom, something publishers are already pretty comfortable with. It might even sound a little familiar, seeing as I described the architecture necessary to support it in that white paper I wrote for them.
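To make that concrete, here’s roughly what the existing-agreement route looks like; a perfectly ordinary Atom feed (all URIs hypothetical) of the kind publishers already produce, and which RIM’s servers could simply poll on their users’ behalf;

<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Publisher Updates</title>
  <id>http://publisher.example.org/feed</id>
  <updated>2008-05-16T12:00:00Z</updated>
  <entry>
    <id>http://publisher.example.org/articles/42</id>
    <title>New content relevant to you</title>
    <link href="http://publisher.example.org/articles/42"/>
    <updated>2008-05-16T12:00:00Z</updated>
  </entry>
</feed>

No registration required; the only agreement used is one publishers have already made.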

If you’ve read ahead in that tech overview, you’ll also notice that they predefine their URI structure, and don’t even mention which HTTP method to use on those URIs to send a notification, which probably means that GET does the deed. Yuck.
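For contrast, a sketch of what a notification might look like if it respected HTTP method semantics; sending a notification changes state at the receiver, so it belongs in a POST, not a GET (the URI, host, and payload below are all invented for illustration);

POST /signals HTTP/1.1
Host: push.example.blackberry.com
Content-Type: application/atom+xml

<entry xmlns="http://www.w3.org/2005/Atom">
  <id>http://publisher.example.org/articles/42</id>
  <title>New content relevant to this subscriber</title>
  <link href="http://publisher.example.org/articles/42"/>
  <updated>2008-05-16T12:00:00Z</updated>
</entry>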

Come on RIM, get your act together. Competition is heating up, and those guys in Cupertino (mostly) have their act together when it comes to the Web.

It’s good to see Roy take on the pseudo/not-at-all “REST APIs” out there.

As I mentioned in a comment there, I’m no stranger to this kind of interface specification, as I’d guess that about 80% of the “APIs” I reviewed as a consultant suffered from at least one of the problems Roy listed. Fortunately, I found it wasn’t very difficult to get people to see the error of their ways. All I had to do was re-emphasize that REST requires that interfaces be uniform – the same – and therefore that pre-specifying anything specific about a resource, such as URI structure, response codes, media types, resource relationships, etc., was antithetical to that requirement.
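To illustrate what I mean, here’s a hypothetical sketch (every URI, field name, and link relation below is invented) of the difference between a client with URI structure baked in, and one that discovers everything from the representations it receives;

// Sketch only; all names hypothetical.
// The pre-specified style: URI structure is agreed out-of-band,
// so any server-side restructuring breaks deployed clients.
async function prespecifiedClient(id) {
  return fetch('http://example.org/api/orders/' + id).then(r => r.json());
}

// The uniform-interface style: only the entry point is known a
// priori; everything else is discovered by following links found
// in the representations themselves.
async function hypertextClient() {
  const entry = await fetch('http://example.org/api/').then(r => r.json());
  const orders = await fetch(entry.links.orders).then(r => r.json());
  return fetch(orders.items[0].href).then(r => r.json());
}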

I very much respect what Google’s trying to accomplish with Gears, and appreciate that they’re helping draw attention to the need to build out the client side of the Web a bit more, and that they’re doing it with open source. They’ve nailed the what perfectly.

Then again, Web services nailed their requirements perfectly too.

I think the how of what Gears is doing is, frankly, misguided. There’s minimal reuse of existing Web technologies, and it’s overly imperative when it could easily be far more declarative. Each of these issues raises the bar for the kinds of skills a Web developer needs. Not a good thing.

Interestingly, the Web services analogy doesn’t stop at the requirements. When I talk about “minimal reuse”, what I’m referring to is that the Web already has a mature and general API for accessing and manipulating local data-oriented services – the DOM – and no “gear” I’ve seen yet uses it for data access. So instead of a Javascript GeoLocation object that might expose method calls such as getCurrentLocation() or getLockedSatellites(), why not make the object a DOM Node and treat the data contained within as a document? Something like;

<Location>
  <Current>45.234,-120.999</Current>
  <LockedSatellites>4</LockedSatellites>
</Location>

(or better yet perhaps, microformatted HTML – though there are issues there)

Then you could use the DOM itself to get at those values, or perhaps even a CSS selector. Simple! You even get events for free.
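For instance, here’s a sketch of what client code against such a gear might look like (the document and all names are the hypothetical ones above, and I’m using standard DOM calls rather than anything Gears-specific);

// Sketch only; parses the hypothetical Location document and reads
// its values with nothing but standard DOM calls.
var doc = new DOMParser().parseFromString(
  '<Location>' +
    '<Current>45.234,-120.999</Current>' +
    '<LockedSatellites>4</LockedSatellites>' +
  '</Location>', 'application/xml');

// Plain DOM/CSS-selector access; no gear-specific method calls.
var current = doc.querySelector('Current').textContent;
var satellites = doc.querySelector('LockedSatellites').textContent;

// And change notification falls out of the DOM too; observing
// mutations replaces a polling loop over getCurrentLocation().
new MutationObserver(function () {
  console.log('moved to ' + doc.querySelector('Current').textContent);
}).observe(doc.documentElement,
  { subtree: true, childList: true, characterData: true });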

Why is it that the industry continues to overlook the value of reusing pervasively deployed generic abstractions, be they network-oriented or local? It couldn’t be a case of NIH, because the DOM is unavoidable if you’re a Web dev. I guess it just didn’t occur to them, though that doesn’t explain the pushback.

Well, for me at least, because it was 10 years ago today that Roy sent me the email that rocked my world. I remember barely sleeping for the next few nights as I was struggling to figure out solutions to various problems using HTTP and URIs. This was prior to Roy’s dissertation too, so I had little in the way of guidance. In fact, I think it took me a couple of years to get it all straight in my head.

Thanks, Roy. I owe you one.

I used the subject line of this blog entry as my signature 11 years ago when I saw the writing on the wall at Nortel, and wanted to announce my interest in looking elsewhere for a job. Ok, perhaps it wasn’t the optimal way to communicate that information to the world, but it was cute, damnit. Skip ahead to current day, and I find myself ex-co-founder of a mobile startup which wasn’t able to raise the money it needed, and a consultant who, frankly, got bored answering all the REST 101 questions that came my way. Don’t get me wrong, I love that the Web and REST are finally getting their due, and I was honoured to have folks from all over the world come to me with their questions, but unfortunately those questions were rarely a challenge. What I want – nay, what I need – is to work on products or projects for a while where I have the opportunity to be creative once again. If you’re reading this, you probably know what I’m good at and what I like. For the record, that’s the Web, mobile, open source, and projects/products for the public good. If you know of any opportunities that fall somewhere in that space, please drop me a line. Thanks.

That’s funny; folks are talking about their favourite cover songs, and here I was about to blog about one of my favourite albums of all time – coincidentally a cover album – For the Masses, a tribute album to Depeche Mode. Four of my all-time favourite covers are on this album;

Here’s the rest of my favourites, in no particular order;

(wow, complete performances at YouTube or last.fm for all of them, that’s pretty cool)

I had saved my email, both sent and received, since 1996, kept safe in a series of mbox files which I’ve diligently moved from hard drive to hard drive as I’ve upgraded my PC. Of course, it wasn’t very accessible there, and it certainly wasn’t integrated with the last four years of mail kept in Gmail (well, half in Gmail proper, and the other half in Apps). So I was happy to read, albeit belatedly, that Google had added email migration to Apps.

I knew what had to be done, so I dug into the documentation looking for a way to have it inhale those mbox files. As well as the obvious POP/IMAP support, they also supported an Atom interface, which is great and all the rage and everything, but come on, is the low-hanging-fruit solution here not for me to just email the mbox files? Anyhow, without that option, it turned out that the simplest route was just to install Dovecot and have it serve up each file as an IMAP folder … which took all of 5 seconds to configure. So I pointed Google at the server, and it spent the next half-day or so chugging through files.
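For anyone wanting to repeat the trick, the Dovecot configuration really does amount to a couple of lines; point mail_location at the directory holding the mbox files and each file shows up as its own IMAP folder (the paths here are hypothetical);

# Minimal sketch of the relevant dovecot.conf settings; paths hypothetical.
protocols = imap
# Every mbox file under ~/old-mail becomes an IMAP folder; the INBOX
# part just names which file plays the inbox role.
mail_location = mbox:~/old-mail:INBOX=~/old-mail/inbox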

I’m now a happy man, as the last 12 years of much of my communication with the world is now searchable.