Transforming a business platform


As you may have seen in our previous blog post – Evolving The Key: insights from user research – after a year in design and development we recently helped The Key Support relaunch The Key for School Leaders and School Governors. This post looks at the technology selections for the refresh of The Key’s content management platform and why certain elements were chosen.

In the nine years since launching, The Key has grown to support almost half of schools in England, with an amazing 75,000 registered school leaders and 17,000 registered governors having access to 5,000 original resources every month. It is now one of the most trusted sources of information in the education sector.

Selecting a platform

The sustained growth of The Key in both size and breadth meant there was a real need to update the underlying platform. The new content management system (CMS) needed to be efficient at managing user subscriptions, making the right content available to the right users (The Key has 7 different classes of user), as well as being ready for any future expansion plans.

The platform for the past 9 years has been Plone, an open source enterprise content management system first released 15 years ago. In 2007 – when we built the first version of The Key – Plone was the ideal choice, but as the business requirements have expanded and been refined over the years we felt it was a good time to revisit that selection when we were presented with the opportunity to completely refresh both sites.

As The Key has grown in size, so has the variety of content they display on the site. As the breadth and types of this content have developed, The Key have struggled with the restrictions created by the traditional template-driven nature of Plone. This prompted us to consider more flexible CMS options.

The solution? A shift from Plone to Wagtail.


We were already pretty impressed with Wagtail, having used it on a couple of smaller projects. Like Plone it’s an open source CMS written in Python, but Wagtail is built on Django, our preferred web framework, giving us all the advantages that Django brings. We wanted to make sure that the new platform would stand the test of time as well as the previous Plone solution had, so we ran a careful evaluation process between a group of Django-based solutions – including Wagtail, Mezzanine, Django CMS and a bespoke pure Django approach – to see which would best meet The Key’s requirements. We’re pleased to say that Wagtail came out the clear winner!

There are a few reasons we were particularly impressed with Wagtail for an application of this size and scale…

  • It is highly extensible, meaning that we could push the user model very hard and accommodate the intricacies of The Key’s user base
  • There’s an extremely high quality ‘out of the box’ admin system, meaning that we could hit the goal of improving the editor experience without huge amounts of bespoke development
  • Wagtail supports the notion of page centric content management (through its StreamFields) which allowed us to build much richer pages than a traditional template driven CMS
  • There are powerful versioning tools built into the framework which would give The Key the level of control they need when managing changes to sensitive content

These features of Wagtail aligned beautifully with The Key’s requirements, allowing us to focus on delivering the features that they really needed.

Wagtail is a new and exciting open source platform which is constantly growing with new features and contributions. We’re really looking forward to being involved and contributing some elements of our own.

Making the move…

One of the first tasks to complete as part of the move was to export the data out of Plone and into Wagtail. This involved the careful migration of over 30,000 pages across two websites, complete with full page history, allowing us to preserve all of The Key’s valuable content and metadata.

The goals of this project were manifold for The Key:

  • Improve the member experience, making it easier to manage a school’s membership
  • Improve members’ ability to self-serve, improving their experience and reducing the workload of the team as the business grows
  • Improve the quality and measurability of online marketing activities
  • Improve the quality and robustness of reporting tools.

Making the move from Plone to Wagtail held so many benefits for The Key that we couldn’t write about them all, so we have summarised our favourites:

  • Improved user acquisition journey
  • Improved signposting of the huge variety of content on the site
  • It’s a long-term solution: Wagtail can expand and grow alongside The Key
  • Flexible modular home page

Another important task was to ensure that any user behaviour tracking was successfully migrated over to Wagtail. The Key harness their large database of users to track and record vital information which is then translated into leading insights, ensuring The Key remain at the forefront of trends and industry changes.

Through our longstanding relationship with The Key we understand how valuable this data is, so we used a custom API to integrate a data warehousing service. This service intelligently stores the data, allowing The Key to collate it and build their own queries and analyses of user behaviour, so they can constantly refine and improve their content to better serve their members.

Monitoring performance

To ensure the stability of the complex infrastructure that supports a project of this scale we installed New Relic – a real-time software analytics program. New Relic provides deep performance analysis for every part of The Key’s platform, enabling us to make faster decisions, monitor interactions, quickly pinpoint errors and achieve better business results for The Key.

What we’ve found working with Wagtail is that it’s so flexible, customisable, scalable and user friendly. It’s working wonders for some of our other clients too. If you’re interested to know what moving to Wagtail could do for the performance of your site then get in touch, we won’t try and sell you something you don’t want or need!

Stay tuned

The next blog installment: how has The Key benefited from this update, a month after deployment?

In our next blog post about The Key we’ll be revisiting the site a month after deployment to find out how their staff members got on with the CMS change and what impact it has had on the business.

If you found this article interesting and are looking for an agency to help you with an upcoming project, please do contact us and find out how we can help you. Alternatively you can read about some more of our work and see how we have helped other companies achieve their goals.

When a feature is invoked more often accidentally than on purpose, it should be considered a bug

Back in 2014 I tweeted the statement in the title above. I’ve been meaning to revisit it for a while now. The link in that tweet referred to the following misfeature afflicting Mac users with external displays:

When the mouse cursor touches the bottom of an external display, MacOS assumes you want the Dock to be there, and moves it there from the primary display, often covering up what you were trying to click on.

This has happened to me almost every day for several years now – never intentionally. I have looked into it thoroughly enough to know that it cannot be turned off without sacrificing all other multi-monitor features.

Our devices are full of such annoyances, born from designers’ attempts to be helpful. Sometimes they are just irritating, like the above. Sometimes they can be downright destructive.

Naked keyboard shortcuts

Keyboard shortcuts without modifier keys can be fantastic productivity enhancers, as any advanced user of Photoshop or Vim knows. But they are potentially incredibly dangerous, especially in applications with frequent text entry. Photoshop uses naked keyboard shortcuts (to coin a phrase) primarily to select tools or view modes. This may sometimes cause confusion for a novice (like when they accidentally enter Quick Mask mode with ‘Q’), but is rarely destructive.

Screenshot of Postbox Message menu

Postbox, my email client, on the other hand, inexplicably uses naked keyboard shortcuts like ‘A’ to Archive and ‘V’ to move messages to other folders. What were they thinking? Email does involve frequent text entry. If you are rapidly touch-typing an email and the message composition window accidentally loses focus (e.g. due to the trackpad), there is no telling the damage you can do. You may discover (as I have, more than once) that messages you know you received have disappeared – sometimes only days later, without knowing what happened.

Any application that offers naked keyboard shortcuts should avoid binding them to actions, as the application may come to the foreground unintentionally. It’s safest to use them to select modes only.

Apple Pay

Here’s another example. Ever since Apple Pay came to iOS, I see this almost every day:

iPhone showing Apple Pay interface

How often do I actually use Apple Pay? About once a month, currently. Every time this screen appears unintentionally, I lose a few seconds – often missing the critical moment if my intention was to take a photo. (It is invoked by double-pressing the hopelessly overloaded Home button. A too-long press invokes Siri, which I also invoke unintentionally about 1 time in 3.)


After Apple announced yet more gestures in iOS 10 at WWDC last week, @Pinboard quipped

On touchscreen devices, gestures are a powerful and often indispensable part of the UI toolkit. But they are invisible, and easy to invoke accidentally. As I wrote in my recent criticism of the direction Apple’s design is taking, whilst some gestures truly become second nature,

[…] mostly they are an over-hyped disaster area making our devices seem not fully under our control, constantly invoked by accident and causing us to make mistakes, with no reliable way of discerning what gestures are available or what they’ll do.

Since I use an iPhone where the top-left corner is no longer reachable one-handed, I rely on the right-swipe gesture to go back more and more often. Unfortunately, this frequently has unintended effects, whether it’s because the gesture didn’t start at the edge, or wasn’t perfectly horizontal, or because the app or website developer intended something different with the gesture. And with every new version of OS X on a Macbook, the trackpad and magic mouse usually have more surprises in store for the unwary. I’m sure voice interfaces will yield plenty more examples over time.

Accidental activation – a necessary evil?

In my Apple design critique, I lamented the fact that it is very difficult to make gestures discoverable, as they are inherently invisible – contributing to that sense of “simplicity” which is a near-religion nowadays. You can introduce gestures during on-boarding, and hope users remember them, but more likely they will be quickly forgotten and we all know no-one will use a “help” page.

So you could argue that accidental invocation is the price we may have to pay for discoverability. Even though I rarely use Apple Pay, I am confident I know how to do so – a consequence of its annoying habit of popping up all the time.

With gestures, accidental activation may be critical to their discoverability, and if well implemented need not be irritating. For example, many gestures have a “peek” stage during which it is possible to reverse the action. Unfortunately, today’s touchscreen devices no longer have an obvious, reliable Undo function, one of the many failings highlighted by Tognazzini and Norman.

What are designers to do?

So if you design a helpful, time-saving feature that risks being invoked by accident,

  • consider the value (to the user of the feature)
  • consider the risk (of the user invoking it unintentionally)
  • consider the cost (to the user of doing so)

Is the value of the feature or shortcut worth the cost of it sometimes annoying users? How big is the cost to users? Wasting a second or two? Or possible data loss? Is the user even aware what happened? Is it possible to Undo? What is the risk, or likelihood, of this happening? What proportion of users does it affect, and how frequently? What proportion of usages are unintentional?

Failing any one of these may be enough to consider the feature a bug. Or you could fail two but the positive may still outweigh the negative. It depends on the severity:

Venn diagram with 3 circles intersecting: Low value, High risk, and High user cost

Can a feature be activated unintentionally? Is it worth the risk? The Venn diagram of inadvisability.

So designers should firstly try to ensure unintentional feature activation is as unlikely as possible – preferably impossible. But if it happens, the user should be aware of what happened, and the cost to the user should be low or at least easy to recover from. User testing will be your best hope of spotting problems early. Beta testing and dogfooding, run over a longer time period, are great at finding those problems that may have low frequency but high cost. Application developers may also be able to collect stats on feature usage, and determine automatically if a feature is often invoked but then not used, or immediately undone, which may highlight problems.
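That stats-collection idea can be sketched simply. The event names, log format and five-second undo window below are all hypothetical assumptions for illustration, not taken from any particular analytics tool:

```python
from datetime import datetime, timedelta

def accidental_rate(events, feature, undo, window=timedelta(seconds=5)):
    """Fraction of `feature` invocations that were undone within `window`.

    `events` is a chronological list of (timestamp, event_name) pairs,
    as might come out of an application's analytics log.
    """
    invocations = 0
    accidental = 0
    pending = None  # timestamp of the latest invocation awaiting classification
    for ts, name in events:
        if name == feature:
            invocations += 1
            pending = ts
        elif name == undo and pending is not None and ts - pending <= window:
            accidental += 1
            pending = None
    return accidental / invocations if invocations else 0.0
```

A rate above 0.5 would mean the feature is being undone – and so presumably invoked by accident – more often than it is used on purpose.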

Or stick with a simple rule of thumb: if a feature is activated by accident more often than on purpose, it’s not a feature but a bug. Feel free to share more examples!

Beating Cancer To The Punch: Isotoma Project Officer Goes Three Rounds For Charity

Fighting back against cancer

Our Project Officer Daniel Merriman took part in his first charity Ultra White Collar Boxing (UWCB) match last weekend to raise money for Cancer Research UK. In the interview below, I ask Daniel what drove him to get into the ring…

1. So, what made you decide to take part in a charity boxing match?
“It was a spur of the moment thing, really! I did some boxing at university over a decade ago, but never had the chance to fight in front of a big crowd. Then earlier this year I saw an article in the local paper about these charity boxing events, and I speculatively sent them a message asking what was involved. Three days later I was in the gym with forty other novice boxers.”

2. How long did you train for, and did you discover any difficult areas in training?
“Training was two sessions a week for eight weeks at Chokdee Academy, under the watchful eye of former world thai boxing champion Rich Cadden. It brought back a lot of memories of when I trained at university, but also emphasised just how out of condition I’d become. I’d been doing some running as training for an upcoming 10km race, but the type of fitness involved is on a totally different level. There were plenty of evenings when I came home with bruises and aching joints, but it was also immensely satisfying to see (and feel) the improvement as the weeks passed. In fact, the most frustrating aspect for me was having to miss some sessions because I wasn’t able to get a lift to the gym!”

3. Tell us about the fight night…
“Fight night for me actually started around 1pm with a medical, checking my blood pressure, shining a torch in my eyes and signing the all-important medical consent form. There were then talks from the organiser and the referee, explaining the format for the evening and the rules for the event. Unfortunately it turned out that my bout was slot 21 out of 22, so I had a long time to wait! I spent the next few hours out in the audience with my guests at our VIP table, watching the other bouts and trying to conserve my energy.

Eventually I went backstage to warm up and mentally prepare myself. I don’t remember feeling nervous especially, but by that point in the evening I was just desperate to get in the ring. When the announcer called my name and my music started playing, though, the adrenaline started pumping and I knew it was time to (attempt to…) put into practice everything I’d learned in the previous eight weeks.

The fight itself seemed to fly by. I had sparred against my opponent in training before so I knew he was tough, and so he proved. The first round was relatively even, but he got me with a good shot in the second round, and by the end of the third round I was gasping for air. It’s hard to overestimate just how much the adrenaline of being in the ring saps your stamina, but I had some great supporters cheering me on and they helped me dig deep and get through to the final bell.”

4. Were there any unexpected benefits to the experience?
“Well I’ve dropped two sizes in jeans, so that’s something! I guess the other thing is that I wasn’t sure how I’d feel fighting in front of such a large crowd of people – there were close to 1,000 guests in attendance, and I didn’t know how that would affect me. I’m confident enough with public speaking, having done talks in front of a hundred or so students in my past life as a lecturer, but this was a different world altogether. Once I was in the ring, though, I was able to focus entirely on the man in front of me.”

5. Have you got any plans to continue the sport?
“I’ll probably do another bout towards the end of the year, but for now I’ve got a 10km race to prepare for. A team of us from work have signed up to do the Leeds 10k next month, which will be my first race at that distance, so I need to get the miles in. A few of my friends have signed up for the next boxing event though, so I’m sure I’ll be sparring a few rounds with them over the next couple of months.”

6. Most importantly, how much money have you raised for Cancer Research UK?
“Thanks to the generosity of my supporters, and particularly to Isotoma who sponsored me and matched the amount raised by fight night, I’ve raised £890 so far for Cancer Research UK.”


7. What advice would you give anyone who is interested in taking up boxing? 
“One of the great benefits of taking part in this event was the top class training I received at Chokdee Academy. If you have any interest in competing, definitely take your time in finding a good gym. Check if they have fighters there who currently compete at an amateur or professional level, talk to the coaches and see what the atmosphere is like at the club. Ask yourself: do I feel comfortable here? Anyone can run a boxercise class, but it takes knowledge and experience to teach a skill like boxing.

Also, don’t be afraid of being “too unfit”! There were plenty of people who fought at the event who, at the beginning of the eight week training cycle, were struggling to do a single push up. The training you’ll receive in a boxing gym, along with advice and guidance regarding diet, will get you fitter than you ever thought possible.”

Dan undertook 8 weeks of training with the professionals at Chokdee Academy to prepare for the fight. If you’d like more information about taking up the sport then please do get in touch with the team at Chokdee, or your local training centre.

Sites for The Key for School Leaders and School Governors shown on various devices

Evolving The Key: insights from user research

Last week the freshly redesigned The Key for School Leaders and School Governors went live, after almost a year in design and development. We’ve been working with The Key since their founding in 2007, and this is the third major update Isotoma has carried out.

In the nine years since launching, The Key has grown to support almost half of schools in England, with an amazing 75,000 registered school leaders and 17,000 registered governors having access to 5,000 original resources every month. It is now one of the most trusted sources of information in the education sector.

It’s no small task to migrate that many regular, engaged users from a system they’ve grown used to across to a new design, information architecture and user experience. This post explains our process and the results.

The Key website shown on various devices

Learning from users

As an organisation answering questions from its members daily, The Key had a pretty good idea of some new features that they knew users would appreciate. For example, better information about their school’s membership: which of your colleagues are using it, when the membership renewal is due, etc. They also keep a keen eye on their web analytics, knowing what terms users are searching for, the preponderance of searching vs browsing, etc.

But other questions could only be answered through user research. How effective was the site navigation? Are certain features unused due to lack of awareness, or lack of interest? As we had done with the initial design of the site, I went on-site to observe school staff actually using The Key.

The left-hand navigation column was one such bone of contention. Article pages have always shown articles from the same topic in the column to the left. It was easy to argue that they were relevant “related content”, but did anyone ever use them? We found that, in fact, hardly anyone did. (A 2014 design change to make them look more unobtrusive until moused over simply made them more ignorable.) Users were more likely to click Back to their search results or previous topic page, or use the “See also” links. Out went the left-hand navigation – the savings went into a calmer, more spacious layout and a font size increase.

Comparison of 2015 and 2016 article page designs

The “See also” links, however, appeared at the top of the right-hand column, so were rarely on-screen by the time users had finished an article. So we made sure they reappeared when the user reached the bottom of an article.

Homepages – what are they good for?

Another thing the user tests told us loud and clear was that the current homepage was not working. Nobody had anything negative to say about it, but most people couldn’t tell us what was on it, nor did it feature in their navigation journeys.

Screenshot of The Key member homepage

However, we knew that most members were avid consumers of the weekly update emails, bringing news headlines and other topical articles to their attention. So the new homepage is designed much more like a magazine cover, promoting news and topical content in a much more eye-catching and easily scanned style. The new flexible grid-based layout allows the homepage design to be changed with ease.

Screenshot showing 'mini-homepage' below an article

We know that most users’ journeys will continue to be task-driven, but we wanted to increase the likelihood of serendipitous browsing – all members said they enjoy browsing content unrelated to their initial search on those occasions when they have time to spare. We have also added what we refer to as a “mini homepage” below each article, with a magazine-style display of new and topical content. This does much the same job as the homepage, without requiring a visit to the homepage itself.

Getting users to their destination faster

Most people used Search to find what they’re looking for, but a significant number also browsed, using the site’s carefully curated Topic hierarchy. In both cases, we saw opportunities to speed up the process.

The Key screenshot showing suggested search results

Switching to a new, high performance search engine, Elasticsearch, let us present top search results for a query dynamically – in many cases, this should avoid the need to browse a page of search results entirely.

The Key screenshot showing dropdown navigation menu

We also introduced a large “doormat” style dropdown navigation on the main topic bar. This lets users skip two or three pages of topics and sub-topics entirely. It also makes it much easier to scan topics to decide on the most relevant area, without leaving the current page.

What else did we change?

There are too many other changes to list in this post. The customer acquisition journey – marketing and signup pages – was entirely redesigned. Some unused tools were removed and some under-used tools made more prominent. New article types, such as article bundles, were introduced, and a dedicated News section was added.

Template changes

Under the hood, the front-end templates were given a full overhaul for the first time since 2007 – they had held up remarkably well, even allowing a full “CSS Zen Garden” style redesign in 2014. But in other respects they held us back. We now have a fluid, responsive 12-column grid system and a whole library of responsive, multi-purpose modules. There is a module compendium and style guide to explain their usage and serve as a reference design.

Screenshot of The Key for School Governors homepage

This was also our first site to use an SVG-based icon system. We went with the Grunticon system, which provides a PNG fallback for browsers that don’t yet support SVG (IE8 and below). Grunticon applies SVGs as background images, however, so they cannot be recoloured using CSS. Since the site has two distinct visual themes – for School Leaders and School Governors – each Grunticon icon was also sprited to allow a colour theme to be applied with only a CSS change.

A continuing journey

Our work is by no means done. The Key have many exciting developments planned in 2016, and we can’t wait to work on them… And expect another post from our development team on the process of migrating such a large site from one CMS (Plone) to another (Wagtail) in the not too distant future.

About us

Isotoma is a bespoke software development company based in York, Manchester and London specialising in web apps, mobile apps and product design. If you’d like to know more you can review our work or get in touch.

In-car interaction design

I recently went to a fascinating IxDA (interaction design) meetup about in-car interaction design. Here’s a quick summary:

1. Driver distraction and multitasking

Duncan Brumby teaches and researches in-car UX at UCL. He described various ways car makers try to provide more controls to drivers whilst trying to avoid driver distraction (and falling foul of regulations).

I think most of us are sometimes confused by car user interfaces (UI), and with the advent of the “connected car”, are likely to be more confused than ever.

Ever wondered what those lights on your dash mean? Confusing car UI by Dave

Modern in-car UIs take different approaches. Most cars use dashboard UIs with or without touchscreens. Apple’s CarPlay takes this approach. Then there are systems like BMW’s iDrive, which pairs a dashboard display with a rotary controller located next to the seat, meant to be used without looking. This avoids the inaccuracy of touchscreens caused by the vehicle’s speed or bumpy roads. (So iDrive makes more sense on the autobahn, whereas touchscreen UIs make more sense when you’re mostly stuck in traffic.)

Brumby mentioned that the Tesla’s giant touchscreens are not popular with drivers, as their glare is unpleasant when it’s dark, and app interfaces often change as a result of software updates.

The other major problem is that even interfaces you don’t have to glance at (e.g. audio interfaces, so fashionable at the moment) still cause cognitive distraction – research has confirmed what many of us instinctively know: you are less attentive when you’re on a phone call, even when using hands-free. (See UX for connected cars by Plan Strategic.) And of course audio interfaces (Siri and the like) are never 100% accurate the way they are in advertisements. Imagine having to deal with misheard words in the message you were trying to send, whilst driving.

Reduction in reaction times – RAC research 2008. From UX for connected cars by Plan Strategic:

  • 54% using a hand-held phone
  • 46% using a hands-free phone
  • 18% after drinking the legal limit of alcohol

(Why, you may ask, is a hands-free phone conversation more distracting than a conversation with passengers in the car? People inside the car can see what the driver is seeing and doing. People instinctively modulate their conversation to what’s happening on the road, and drivers rely on that. A person on the other end of the phone can’t see what the driver is seeing, and doesn’t do that, unwittingly causing greater stress for the driver.)

2. ustwo: Are we there yet?

The talk by Harsha Vardhan and Tim Smith of ustwo (the versatile studio that also made Monument Valley, and who hosted the event) was more interesting, even though I started off quite skeptical. They’ve published Are We There Yet? (PDF), their vision and manifesto for the connected car, which got quite a bit of attention. (It got them invited to Apple to speak to Jony Ive.) It’s available free online.

But what I found most interesting was their prototype dashboard UI – the “in-car cluster” – to demonstrate some of the ideas they talk about in the book. It’s summarised in this short video:

This blog post pretty much covers exactly what the talk did, in detail – do have a read. The prototype is also available online. (It’s built using Framer.JS, a prototyping app I’ve been meaning to try out for a while.)

As I said, I started off skeptical, but I found the rationale really quite convincing. I like how they distilled their thinking down to the essence – not leading with some sort of “futuristic aesthetic”. They’ve approached it as “what do drivers need to see” – and that this could be entirely different based on whether they’re parked, driving or reversing.

Is Apple giving design a bad name?

Legendary user experience pioneers and ex-Apple employees Don Norman and Bruce ‘Tog’ Tognazzini recently aimed a broadside at Apple in an article titled “How Apple Is Giving Design A Bad Name”, linkbait calibrated to get the design community in a froth.

The article has some weaknesses (over-long, repetitive, short on illustrations and with some unconvincing anecdata), but on the whole I think they are right. Apple’s design is getting worse, users are suffering from it, and they are setting bad examples that are being emulated by other designers. I would urge you to read the article, but here is my take on it.


Sorting querysets with NULLs in Django

One thing which I’ve found surprisingly hard to do in Django over the years is sort a list of items on a database column when that column might have NULLs in it. The problem is that the default Postgres behaviour is to give NULL a higher sort value than everything else, so when sorting in descending order, all the NULLs appear at the top. This is particularly strange if, say, you want a list of items sorted by most recently updated, and the ones at the top are the ones that have never had an update.

If we were writing the SQL directly, we could just add NULLS LAST to the ORDER BY clause, but that would be a really rubbish reason to drop down to raw SQL mode in Django.
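Incidentally, the same two-stage trick works anywhere, not just in SQL. A plain-Python sketch of the ordering we’re after (descending, with the NULLs out of the way at the bottom – the values here are made up):

```python
updates = [3, None, 1, None, 2]  # hypothetical values; None means "never updated"

# Sort on an "is null" flag first (False sorts before True), then on the
# value itself, negated to get descending order.
ordered = sorted(updates, key=lambda x: (x is None, -x if x is not None else 0))
print(ordered)  # [3, 2, 1, None, None]
```

Django querysets can’t take a Python key function, though – the ordering has to happen in the SQL itself.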

Fortunately, Django 1.8 has introduced a new feature: Func() expressions. These expressions let you run SQL-level functions like LOWER(), SUM() etc. and annotate your queryset with a new column containing the result. I didn’t want to run a database function, but what I discovered was that it is really easy to subclass and make your own Func() expression, giving you access to a template for generating SQL! The base class looks something like:

class Func(Expression):
    function = None
    template = '%(function)s(%(expressions)s)'

    # Other stuff

Normally you are supposed to override the function attribute, which then gets fed into the template and wrapped around the existing SQL statement. However, it is equally possible to override the template attribute itself and get rid of the wrapping function altogether! This led me to create my own “function” which just returns a boolean to say whether the current SQL statement (completely generated by the ORM and untouched by human hands) evaluates to NULL:

class IsNull(Func):
    template = '%(expressions)s IS NULL'

Welcome to Hacksville!

From here it’s simply a case of annotating your existing queryset with this field, and then adding it to the .order_by() statement:

    queryset.annotate(last_update_isnull=IsNull('last_update')) \
            .order_by('last_update_isnull', '-last_update')

First we sort on last_update_isnull in ascending order (it will be either true or false, so all the “yes, it is NULL” items will go to the bottom of the list). Then we use the last_update field, which is what we really want to sort on, as the secondary sort field, safe in the knowledge that all the NULLs are already out of the way.
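The same two-key trick is easy to see outside the ORM; here it is in plain javascript (my own illustration, not part of the Django solution):

```javascript
// Sort most-recent-first with nulls last: the primary sort key is
// "is this value null?", the secondary key is the value itself
var lastUpdates = [null, '2015-03-01', null, '2015-06-24'];

lastUpdates.sort(function(a, b) {
  var aNull = (a === null), bNull = (b === null);
  if (aNull !== bNull) return aNull ? 1 : -1;  // nulls sink to the bottom
  if (a === b) return 0;
  return a > b ? -1 : 1;                       // then descending by date
});

console.log(lastUpdates);  // → ['2015-06-24', '2015-03-01', null, null]
```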

So there you have it: my moderately hacky solution that is quite small and crucially still plays nicely with the ORM 🙂


A Quick Introduction to Backbone

Who Uses Backbone?


Airbnb, NewsBlur, Disqus, Hulu, Basecamp, Stripe, IRCCloud, Trello, …


It is not a framework

Backbone is an MVP (Model View Presenter) javascript library that, unlike Django, is extremely light in its use of conventions. Frameworks are commonly seen as fully-working applications that run your code, as opposed to libraries, where you import their code and run it yourself. Backbone falls solidly into the latter category, and it’s only through the use of the Router class that it starts to take some control back. Also included are View, Model, Collection (of Models) and Events, all of which can be used as completely standalone components and often are used this way alongside other frameworks. This means that if you use Backbone you will have much more flexibility for creating something unusual and being master of your project’s destiny, but on the other hand you’ll be faced with writing a lot of the glue code yourself, as well as forming many of the conventions.


Backbone is built upon jQuery and underscore. While these two libraries have some overlap, they mostly perform separate functions; jQuery is a DOM manipulation tool that handles the abstractions of various browser incompatibilities (and even in the evergreen browser age offers a lot of benefits there), and underscore is primarily a functional programming tool, offering cross-browser support for map, reduce, and the like. Most data manipulation you do can be significantly streamlined using underscore and, in the process, you’ll likely produce more readable code. If you’re on a project that isn’t transpiling from ES6 with many of the functional tools built in, I enthusiastically recommend using underscore.

Underscore has one other superb feature: templates.

// Using raw text
var titleTemplate = _.template('<h1>Welcome, <%- fullName %></h1>');
// or if you have a <script type="text/template">
var titleTemplate = _.template($('#titleTemplate').html());
// or if you're using requirejs
var titleTemplate = require('tpl!templates/title');

var renderedHtml = titleTemplate({fullName: 'Martin Sheen'});

Regardless of how you feel about the syntax (which can be changed to mustache-style), having lightweight templates available that support escaping is a huge win and, on its own, enough reason to use underscore.


Probably the most reusable class in Backbone’s toolbox is Events. This can be mixed in to any existing class as follows:

// Define the constructor
var MyClass = function(){};

// Add some functionality to the constructor's prototype
_.extend(MyClass.prototype, {
  someMethod: function() {
    var somethingUseful = doSomeThing();
    // trigger an event named 'someEventName'
    this.trigger('someEventName', somethingUseful);
  }
});

// Mix-in the events functionality
_.extend(MyClass.prototype, Backbone.Events);

And suddenly, your class has grown an event bus.

var thing = new MyClass();
thing.on('someEventName', function(somethingUseful) {
  alert('IT IS DONE: ' + somethingUseful);
});

Things we don’t know about yet can now listen to this class for events and run callbacks where required.

By default, the Model, Collection, Router, and View classes all have the Events functionality mixed in. This means that in a view (in initialize or render) you can do:

this.listenTo(this.model, 'change', this.render);

There’s a list of all events triggered by these components in the docs. When the listener also has Events mixed in, it can use .listenTo which, unlike .on, sets the value of this in the callback to the object that is listening rather than the object that fired the event.


Backbone.View is probably the most useful, reusable class in the Backbone toolbox. It is almost all convention and does nothing beyond what most people would come up with after pulling together something of their own.

Fundamentally, every view binds to a DOM element and listens to events from that DOM element and any of its descendants. This means that functionality relating to your application’s UI can be associated with a particular part of the DOM.

Views have a nice declarative format, as follows (extend is a short, optional shim for simplifying inheritance):

var ProfileView = Backbone.View.extend({

  // The first element matched by the selector becomes view.el
  // view.$el is a shorthand for $(view.el)
  // view.$('.selector') is shorthand for $(view.el).find('.selector')
  el: '.profile',

  // Pure convention, not required
  template: profileTemplate,

  // When events on the left occur, the methods on the right are called
  events: {
    'click .edit': 'editSection',
    'click .profile': 'zoomProfile'
  },

  // Custom initialize, doesn't need to call super
  initialize: function(options) {
    this.user = options.user;
  },

  // Your custom methods
  showInputForSection: function(section) {
    // ...
  },

  editSection: function(ev) {
    // because 'this' is bound to the view, the element jQuery would
    // normally bind to 'this' is available as ev.currentTarget
    var section = $(ev.currentTarget).attr('data-section');
    this.showInputForSection(section);
  },

  zoomProfile: function() {
    // ...
  },

  // Every view has a render method that should return the view
  render: function() {
    var rendered = this.template({user: this.user});
    this.$el.html(rendered);
    return this;
  }
});

Finally, to use this view:

// You can also pass model, collection, el, id, className, tagName,
// attributes and events to override the declarative defaults
var view = new ProfileView({ user: someUserObject });
view.render();  // Stuff appears in .profile !

// Once you've finished with the view
view.remove();  // removes the element and calls stopListening() (but not undelegateEvents)

The next step is to nest views. You could have a view that renders a list but for each of the list items, it instantiates a new view to render the list item and listen to events for that item.

render: function() {
  // Build a basic template "<ul></ul>"
  this.$el.html('<ul></ul>');

  _.each(this.collection.models, function(model) {
    // Instantiate a new "li" element as a view for each model
    var itemView = new ModelView({ model: model, tagName: 'li' });
    // Render the view
    itemView.render();
    // The jquery-wrapped li element is now available at itemView.$el
    this.$('ul')  // find the ul tag in the parent view
      .append(itemView.$el);  // append to it this li tag
  }, this);  // pass the view as context so 'this' works inside the loop

  return this;
}
How you choose to break up your page is completely up to you and your application.

Backbone.Model and Backbone.Collection

Backbone’s Model and Collection classes are designed for very standard REST endpoints. It can be painful to coerce them into supporting anything else, though it is achievable.

Assuming you have an HTTP endpoint /items, which can:

  • have a list of items GET’d from it
  • have a single item GET’d from /items/ID

And if you’re going to be persisting any changes back to the database:

  • have new items POSTed to it
  • have existing items, at /items/ID PUT, PATCHed, and DELETEd

Then you’re all set to use all the functionality in Backbone.Model and Backbone.Collection.

var Item = Backbone.Model.extend({
  someMethod: function() {
    // perform some calculation on the data
  }
});

var Items = Backbone.Collection.extend({
  model: Item
});
Nothing is required to define a Model subclass, although you can specify the id attribute name and various other configurables, as well as configuring how the data is pre-processed when it is fetched. Collection subclasses must have a model attribute specifying which Model class is used to instantiate new models from the fetched data.

The huge win with Model and Collection is in their shared sync functionality – persisting their state to the server and letting you know what has changed. It’s also nice being able to attach methods to the model/collection for performing calculations on the data.

Let’s instantiate a collection and fetch it.

var items = new Items();
// Fetch returns a jQuery promise
items.fetch().done(function() {
  items.models  // a list of Item objects
  items.get(4)  // get an Item object by its ID
});

Easy. Now let’s create a new model instance and persist it to the database:

var newItem = new Item({some: 'data'});
  .done(function() {
    messages.success('Item saved');
  })
  .fail(function() {
    messages.danger('Item could not be saved');
  });

Or to fetch models independently:

var item = new Item({id: 5});
item.fetch()
  .done(function() {
    // item.attributes are now populated
  });

// or use Events!
this.listenTo(item, 'change', this.render);
item.fetch();

And to get/set attributes on a model:

var attr = item.get('attributeName');
item.set('attributeName', newValue);  // triggers 'change', and 'change:attributeName' events

Backbone.Router and Backbone.History

You’ll have realised already that all the above would work perfectly fine in a typical, SEO-friendly, no-node-required, django-views-ftw multi-page post-back application. But what about when you want a single page application? For that, Backbone offers Router and History.

Router classes map urls to callbacks that typically instantiate View and/or Model classes.

History does little more than detect changes to the page URL (whether a change to the #!page/fragment/ or via HTML5 pushState) and, upon receiving that event, orchestrate your application’s router classes accordingly. It is Backbone.History that takes Backbone from the library category to the framework category.

History and Router are typically used together, so here’s some example usage:

var ApplicationRouter = Backbone.Router.extend({

  routes: {
    'profile': 'profile',
    'items': 'items',
    'item/:id': 'item'
  },

  profile: function() {
    var profile = new ProfileView();
    profile.render();
  },

  items: function() {
    var items = new Items();
    var view = new ItemsView({items: items});
    items.fetch().done(function() {
      view.render();
    });
  },

  item: function(id) {
    var item = new Item({id: id});
    var view = new ItemView({item: item});
    item.fetch().done(function() {
      view.render();
    });
  }
});

Note that above, I’m running the fetch from the router. You could instead have your view render a loading screen and fetch the collection/model internally. Or, in the initialize of the view, it could do this.listenTo(item, 'sync', this.render), in which case your routers need only instantiate the model, instantiate the view and pass the model, then fetch the model. Backbone leaves it all to you!

Finally, let’s use Backbone.History to bring the whole thing to life:

// Routers register themselves
new ApplicationRouter();
if (someCustomSwitch) {
  new CustomSiteRouter();
}

// History is already instantiated at Backbone.history
Backbone.history.start({pushState: true});

Now the most common way to use a router is to listen for a click event on a link, intercept the event with preventDefault and, rather than letting the browser load the new page, run Backbone.history.navigate('/url/from/link', {trigger: true});. This will run the method associated with the passed route, then update the url using pushState. This is the key: each router method should be idempotent, building as much of the page as required from nothing. Sometimes this method will be called with a different page already built, sometimes not. Calling history.navigate will also create a new entry in the browser’s history (though you can avoid this by passing {trigger: true, replace: true}).

If a user clicks back/forward in the browser, the url will change and Backbone.history will again look up the new url and execute the method associated with that url. If none can be found the event is propagated to the browser and the browser performs a typical page change. In this case, you should be sure in your router method to call .remove and .undelegateEvents on any instantiated views that you no longer need or else callbacks for these could still fire. YES, this is incredibly involved.

Finally, you’ll sometimes be in the position where you’ve updated one small part of a page (some sub-view, perhaps) but you want this change to be reflected in the URL. You don’t necessarily want to trigger a router method, because all the work has been done, but you do have a router method that could reconstruct the new state of the page were a user to load that page from a full refresh. In this case, you can call Backbone.history.navigate('/new/path'); and it’ll add a new history entry without triggering the related method.


Backbone is unopinionated. It provides a genuinely useful amount of glue code and immediately puts you in a much better position than if you were just using jQuery; even so, loads more glue code must be written for every application, so it could do much, much more. On one hand this means you gain a tremendous amount of flexibility, which is extremely useful given the esoteric nature of the applications we build, and you get the power to make your applications incredibly fast, since you aren’t running lots of the general-purpose glue code from much more involved frameworks. On the other hand, it also gives you the power to inadvertently hang yourself.

If you’re looking for something easy to pick up and significantly more powerful than jQuery, but less of a painful risk than the horrible messy world of proper javascript frameworks (see also: the hundreds of angular “regret” blog posts), Backbone is a great place to start.

About us: Isotoma is a bespoke software development company based in York and London specialising in web apps, mobile apps and product design. If you’d like to know more you can review our work or get in touch.


Observations on the nature of time. And javascript.

In the course of working on one of our latest projects, I picked up an innocuous looking ticket that said: “Date pickers reset to empty on form submission”. “Easy”, I thought. It’s just the values being lost somewhere in form validation. And then I saw the ‘in Firefox and IE’ description. Shouldn’t be too hard, it’ll be a formatting issue or something, maybe even a placeholder, right?

Yeah, no.

Initial Investigations

Everything was fine in Chrome, but not in Firefox. I confirmed the fault also existed in IE (and then promptly ignored IE for now).

The responsible element looked like this:
<input class="form-control datepicker" data-date-format="{{ js_datepicker_format }}" type="date" name="departure_date" id="departure_date" value="{{ form.departure_date.value|default:'' }}">

This looks pretty innocent. It’s a date input, how wrong can that be?

Sit comfortably, there’s a rabbit hole coming up.

On Date Inputs

Date type inputs are a relatively new thing, they’re in the HTML5 Spec. Support for it is pretty mixed. This jumps out as being the cause of it working in Chrome, but nothing else. Onwards investigations (and flapping at colleagues) led to the fact that we use bootstrap-datepicker to provide a JS/CSS based implementation for the browsers that have no native support.

We have an isolated cause for the problem. It is obviously something to do with bootstrap-datepicker, clearly. Right?

On Wire Formats and Localisation

See that data-date-format="{{ js_datepicker_format }}" attribute of the input element. That’s setting the date format for bootstrap-datepicker; the HTML5 date element doesn’t have a similar attribute. I’m going to cite this stackoverflow answer rather than the appropriate sections of the documentation. The HTML5 element has the concept of a wire format and a presentation format. The wire format is YYYY-MM-DD (ISO 8601); the presentation format is whatever the user has the locale set to in their browser.

You have no control over this, it will do that and you can do nothing about it.

bootstrap-datepicker, meanwhile, has the data-date-format attribute, which controls everything about the date that it displays and outputs. There’s only one option for this: the wire and presentation formats are not separated.

This leads to an issue. If you set the date in YYYY-MM-DD format for the HTML5 element value, then Chrome will work. If you set it to anything else, then Chrome will not work and bootstrap-datepicker might, depending on whether the format matches what is expected.

There’s another issue. bootstrap-datepicker doesn’t do anything with the element value when you start it. So if you set the value to YYYY-MM-DD format (for Chrome), then a Firefox user will see 2015-06-24, until they select something, at which point it will change to whatever you specified in data-date-format. But a Chrome user will see it in their local format (24/06/2015 for me, GB format currently).

It’s all broken, Jim.

A sidetrack into Javascript date formats.

The usual answer for anything to do with dates in JS is ‘use moment.js’. But why? It’s a fairly large module, this is a small problem, surely we can just avoid it?

Give me a date:

>>> var d = new Date();

Lets make a date string!

>>> d.getYear() + d.getMonth() + d.getDay() + ""
"123"

Wat. (Yeah, I know that’s not how you do string formatting and therefore it’s my fault.)

>>> d.getDay()
3

It’s currently 2015-06-24. Why 3?

Oh, that’s day of the week. Clearly.

>>> d.getDate()
24

The method that gets you the day of the month is called getDate(). It doesn’t, you know, RETURN A DATE.

>>> var d = new Date('10-06-2015')
>>> d
Tue Oct 06 2015 00:00:00 GMT+0100 (BST)

Oh. Default date format is US format (MM-DD-YYYY). Right. Wat.

>>> var d = new Date('31-06-2015')
>>> d
Invalid Date

That’s… reasonable, given the above. Except that’s a magic object that says Invalid Date. But at least I can compare against it.

>>> var d = new Date('31/06/2015')
>>> d
Invalid Date

Oh great, same behaviour if I give it UK date formats (/ rather than -). That’s okay.

>>> var d = new Date('31/06/2015')
>>> d
"Date 2017-07-05T23:00:00.000Z"


What’s going on?

The difference here is that I’ve used Firefox, the previous examples are in Chrome. I tried to give an explanation of what that’s done, but I actually have no idea. I know it’s 31 months from something, as it’s parsed the 31 months and added it to something. But I can’t work out what, and I’ve spent too long on this already. Help. Stop.

So. Why you should use moment.js. Because otherwise the old great ones will be summoned and you will go mad.
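That said, if all you need is to emit the YYYY-MM-DD wire format and you really want to avoid the dependency, a helper is at least short. This is my own sketch, not code from the project:

```javascript
// Format a Date as the ISO wire format (YYYY-MM-DD) without moment.js,
// using the local-time getters so the calendar date isn't shifted by UTC
function toWireFormat(d) {
  function pad(n) { return (n < 10 ? '0' : '') + n; }
  return d.getFullYear() + '-' + pad(d.getMonth() + 1) + '-' + pad(d.getDate());
}

toWireFormat(new Date(2015, 5, 24));  // → "2015-06-24" (months are 0-based, of course)
```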


ISO Date Format is not supported in Internet Explorer 8 standards mode and Quirks mode.


The Actual Problem

Now I knew all of this, I could see the problem.

  1. The HTML5 widget expects YYYY-MM-DD
  2. The JS widget will set whatever you ask it to
  3. We were outputting GB formats into the form after submission
  4. This would then be an incorrect format for the HTML 5 widget
  5. The native widget would not change an existing date until a new one is selected, so changing the output format to YYYY-MM-DD meant that it changed when a user selected something.

A Solution In Two Parts

The solution is to standardise the behaviour and formats across both options. Since I have no control over the HTML5 widget, looks like it’s time to take a dive into bootstrap-datepicker and make that do the same thing.

Deep breath, and here we go…

Part 1

First job is to standardise the output date format in all the places. This means that the template needs to see a datetime object, not a preformatted date.

Once this is done, we can feed the object into the date template tag with the format filter, which takes PHP date format strings. Okay, that’s helpful in 2015. Really.

Having figured that out, I changed the date parsing configuration (Django’s DATE_INPUT_FORMATS setting) to make sure it has the right ISO format in it.

That made the HTML5 element work consistently. Great.

Then, to the javascript widget.

bootstrap-datepicker does not do anything with the initial value of the element. To make it behave the same as the HTML5 widget, you need to:

1. Get the locale of the user

2. Get the date format for that locale

3. Set that as the format of the datepicker

4. Read the value

5. Convert the value into the right format

6. Call the setValue event of the datepicker with that value

This should be relatively straightforward, with a couple of complications.

  1. moment.js uses a different date format to bootstrap-datepicker
  2. There is no easy way to get a date format string, so a hardcoded list is the best solution.
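The getLocaleDateString helper used in the snippets below isn’t shown in the original, so here is a guess at the hardcoded-list approach described; the locales and formats are illustrative only. It returns moment.js-style tokens (uppercase) or bootstrap-datepicker-style tokens (lowercase):

```javascript
// Hypothetical hardcoded lookup of date formats by locale (moment.js tokens)
var LOCALE_DATE_FORMATS = {
  'en-GB': 'DD/MM/YYYY',
  'en-US': 'MM/DD/YYYY',
  'de': 'DD.MM.YYYY'
};

// true  => moment.js format (uppercase tokens)
// false => bootstrap-datepicker format (lowercase tokens)
function getLocaleDateString(forMoment) {
  var locale = (typeof navigator !== 'undefined' && navigator.language) || 'en-GB';
  var format = LOCALE_DATE_FORMATS[locale] || 'DD/MM/YYYY';
  return forMoment ? format : format.toLowerCase();
}
```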

// taken from bootstrap-datepicker.js
function parseFormat(format) {
    var separator = format.match(/[.\/\-\s].*?/),
        parts = format.split(/\W+/);
    if (!separator || !parts || parts.length === 0){
        throw new Error("Invalid date format.");
    }
    return {separator: separator, parts: parts};
}

var momentUserDateFormat = getLocaleDateString(true);
var datepickerUserDateFormat = getLocaleDateString(false);

$datepicker.each(function() {
    var $this = $(this);
    var presetData = $this.val();
    $'datepicker').format = parseFormat(datepickerUserDateFormat);
    if (presetData) {
        $this.datepicker('setValue', moment(presetData).format(momentUserDateFormat));
    }
});

A bit of copy and paste code from the bootstrap-datepicker library, some jquery and moment.js and the problem is solved.

Part 2

Now we have the dates displaying in the right format on page load, we need to ensure they’re sent in the right format after the user has submitted the form. Should just be the reverse operation.

function rewriteDateFormat(event) {
    var $this = $(;
    if ($this.val()) {
        var momentUserDateFormat = getLocaleDateString(true);
        $this.val(moment($this.val(), [momentUserDateFormat, 'YYYY-MM-DD']).format('YYYY-MM-DD'));
    }
}

$datepicker.each(function() {
    var $this = $(this);
    // set the form handler for rewriting the format on submit
    var $form = $this.closest('form');
    $form.on('submit', {input: this}, rewriteDateFormat);
});

And we’re done.


Some final points that I’ve learnt.

  1. Always work with datetime objects until the last possible point; only format them when you have to.
  2. Default to ISO format unless otherwise instructed.
  3. Use parsing libraries.



The Key – back-fitting responsive design (with exciting graphs!)

As an industry we talk a lot about the importance of responsive design. There are a lot of oft-repeated facts about the huge rise in mobile usage, alongside tales of woe about the 70,000 different Android screen resolutions. Customers often say the word ‘responsive’ to us with a terrified, hunted expression. There’s a general impression that it’s a) incredibly vital but b) incredibly hard.

As to the former, it’s certainly becoming hard to justify sites not being responsive from the very beginning. 18 months ago, we’d often find ourselves reluctantly filing ‘responsive design’ along with all the other things that get shunted into ‘phase 2’ early in the project. Nowadays, not so much: Mailchimp reported recently that 60% of mail they send is opened on a mobile phone.

For the latter, there’s this blog post. We hope it demonstrates that retro-fitting responsive design can be simple to achieve and deliver measurable results immediately.

And, because there are graphs and graphs are super boring, we’ve had our Gwilym illustrate them with farm animals and mountaineers. Shut up; they’re great.

What were the principles behind the design?

We’re not really fans of change for change’s sake, and generally, when redesigning a site, we try to stick to the principle of not changing something unless it’s solving a problem, or a clear improvement.

In this redesign project we were working under certain constraints. We weren’t going to change how the sites worked or their information architecture. We were even planning on leaving the underlying HTML alone as much as possible. We ‘just’ had to bring the customer’s three websites clearly into the same family and provide a consistent experience for mobile.

In many ways, this was a dream project. How often does anyone get to revisit old work and fix the problems that have niggled at you since the project was completed? The fact that these changes would immediately benefit the thousands of school leaders and governors who use the sites every day was just the icing on the cake.

And, to heighten the stakes a little more, one of the sites in the redesign was The Key – a site that we built 7 years ago and which has been continually developed since it first went live. Its criticality to the customer cannot be overstated and the build was based on web standards that are almost as old as it’s possible to be on the internet.

What did we actually do?

The changes we made were actually very conservative.

Firstly, text sizes were increased across the board. In the 7 years since the site was first designed, monitor sizes and screen resolutions have increased, making text appear smaller as a result. You probably needed to lean in closer to the screen than was comfortable. We wanted the site to be easy to read from a natural viewing distance.

We retained the site’s ability to adapt automatically to whatever size screen you are using, without anything being cut off. But this now includes any device, from a palm-sized smartphone, to a notebook-sized tablet, up to desktop monitors. (And if your screen is gigantic, we prevent lines from getting too long.) The reading experience should be equally comfortable on any device.

On article pages, the article text used to compete for attention with the menu along the left. While seeing the other articles in the section is often useful, we wanted them to recede to the background when you’re not looking at them.

We wanted to retain the colourfulness that was a hallmark of the previous design. This is not only to be pleasing to the eye – colours are really helpful in guiding the eye around the page, making the different sections more distinct, and helping the most important elements stand out.

Finally, we removed some clutter. These sites have been in production for many years and any CMS used in anger over that kind of period will generate some extraneous bits and bobs. Our principle here was that if you don’t notice anything missing once we’ve removed it, then we’ve removed the right things.

What was the result?

The striking thing about the changes we made was not just the extent of the effect, but also the speed with which it was demonstrable. The following metrics were all taken in the first 4 weeks of the changes being live in production in August 2014.

The most significant change is the improvement in mobile usage on The Key for School Leaders. Page views went up fast, and have stayed there.



Secondly, the bounce rate for mobile dropped significantly in the three months following the changes:


Most interestingly for us, this sudden bump in mobile numbers didn’t come from a new group of users that The Key had never heard from before. The proportion of mobile users didn’t increase significantly in the month after the site was relaunched. The bump came almost exclusively from registered users who could suddenly use the site the way they wanted to.



A note about hardness

What we did here wasn’t actually hard or complicated: it amounted to a few weeks’ work for Francois. I’ve probably spent longer on this blog post, to be honest. And so our take-away point is this: agencies you work with should be delivering this by default for new work, proposing simple steps you can take to add it for legacy work, or explaining why they can’t or won’t.
