In-car interaction design

I recently went to a fascinating IxDA (interaction design) meetup about in-car interaction design. Here’s a quick summary:

1. Driver distraction and multitasking

Duncan Brumby teaches and researches in-car UX at UCL. He described various ways car makers try to provide more controls to drivers whilst trying to avoid driver distraction (and falling foul of regulations).

I think most of us are sometimes confused by car user interfaces (UI), and with the advent of the “connected car”, are likely to be more confused than ever.


Ever wondered what those lights on your dash mean? Confusing car UI by Dave https://www.facebook.com/davewittybanter/

Modern in-car UIs take different approaches. Most cars use dashboard UIs with or without touchscreens; Apple’s CarPlay takes this approach. Then there are systems like BMW’s iDrive, which pairs a dashboard display with a rotary controller next to the seat, meant to be used without looking. This avoids the inaccuracy of touchscreens on bumpy roads or at speed. (So iDrive makes more sense on the autobahn, whereas touchscreen UIs make more sense when you’re mostly stuck in traffic.)

Brumby mentioned that Tesla’s giant touchscreens are not popular with drivers, as their glare is unpleasant when it’s dark, and app interfaces often change as a result of software updates.

The other major problem is that even interfaces you don’t have to glance at (e.g. audio interfaces, so fashionable at the moment) still cause cognitive distraction – research has confirmed what many of us instinctively know: that you are less attentive when you’re on a phone call, even when using hands-free. (See UX for connected cars by Plan Strategic.) And of course audio interfaces (Siri and the like) are never 100% accurate the way they are in advertisements. Imagine having to correct misheard words in the message you were trying to send, whilst driving.

Reduction in reaction times:

  • 54% using a hand-held phone
  • 46% using a hands-free phone
  • 18% after drinking the legal limit of alcohol

Reduction in reaction times – RAC research 2008. From UX for connected cars by Plan Strategic http://www.slideshare.net/planstrategic/ux-for-connected-cars-58076640

(Why, you may ask, is a hands-free phone conversation more distracting than a conversation with passengers in the car? People inside the car can see what the driver is seeing and doing. People instinctively modulate their conversation to what’s happening on the road, and drivers rely on that. A person on the other end of the phone can’t see what the driver is seeing, and doesn’t do that, unwittingly causing greater stress for the driver.)

2. ustwo: Are we there yet?

The talk by Harsha Vardhan and Tim Smith of ustwo (the versatile studio that also made Monument Valley, and which hosted the event) was more interesting, even though I started off quite skeptical. They’ve published Are We There Yet? (PDF), their vision/manifesto for the connected car, which got quite a bit of attention. (It got them invited to Apple to speak to Jony Ive.) It’s available free online.

But what I found most interesting was their prototype dashboard UI – the “in-car cluster” – to demonstrate some of the ideas they talk about in the book. It’s summarised in this short video:

This blog post pretty much covers exactly what the talk did, in detail – do have a read. The prototype is also available online. (It’s built using Framer.JS, a prototyping app I’ve been meaning to try out for a while.)

As I said, I started off skeptical, but I found the rationale really quite convincing. I like how they distilled their thinking down to the essence – not leading with some sort of “futuristic aesthetic”. They’ve approached it as “what do drivers need to see” – and recognised that this could be entirely different depending on whether they’re parked, driving or reversing.

Is Apple giving design a bad name?

Legendary user experience pioneers and ex-Apple employees Don Norman and Bruce ‘Tog’ Tognazzini recently aimed a broadside at Apple in an article titled “How Apple Is Giving Design A Bad Name”, linkbait calibrated to get the design community in a froth.

The article has some weaknesses (over-long, repetitive, short on illustrations and with some unconvincing anecdata), but on the whole I think they are right. Apple’s design is getting worse, users are suffering from it, and they are setting bad examples that are being emulated by other designers. I would urge you to read the article, but here is my take on it.


Sorting querysets with NULLs in Django

One thing which I’ve found surprisingly hard to do in Django over the years is sort a list of items on a database column when that column might have NULLs in it. The problem is that the default Postgres behaviour is to give NULL a higher sort value than everything else, so when sorting in descending order, all the NULLs appear at the top. This is particularly strange if, say, you want a list of items sorted by most recently updated, and the ones at the top are the ones that have never had an update.

If we were writing the SQL directly, we could just add NULLS LAST to the ORDER BY clause, but that would be a really rubbish reason to drop down to raw SQL mode in Django.
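For reference, the SQL we want the ORM to generate would end with something like this (table and column names here are purely illustrative):

SELECT *
FROM myapp_mymodel
ORDER BY last_update DESC NULLS LAST;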

Fortunately, Django 1.8 has introduced a new feature: Func() expressions. These expressions let you run SQL-level functions like LOWER(), SUM() etc. and annotate your queryset with a new column containing the result. I didn’t want to run a database function, but what I discovered was that it is really easy to subclass and make your own Func() expression, giving you access to a template for generating SQL! The base class looks something like:

class Func(Expression):
    function = None
    template = '%(function)s(%(expressions)s)'

    # Other stuff

Normally you are supposed to override the function attribute, which then gets fed into the template and wrapped around the existing SQL statement. However, it is equally possible to override the template attribute itself and get rid of the wrapping function altogether! This led me to create my own “function” which just returns a boolean to say whether the current SQL statement (completely generated by the ORM and untouched by human hands) evaluates to NULL:

class IsNull(Func):
    template = '%(expressions)s IS NULL'

Welcome to Hacksville!

From here it’s simply a case of annotating your existing queryset with this field, and then adding it to the .order_by() statement:

from django.db.models import Max

MyModel.objects\
    .annotate(last_update=Max('update__created_date'))\
    .annotate(last_update_isnull=IsNull('last_update'))\
    .order_by('last_update_isnull', '-last_update')

First we sort on last_update_isnull in ascending order (it will be either true or false, so all the “yes, it is NULL” items will go to the bottom of the list). Then we use the last_update field, which is what we really want to sort on, as the secondary sort field, safe in the knowledge that all the NULLs are already out of the way.

So there you have it: my moderately hacky solution that is quite small and crucially still plays nicely with the ORM 🙂


A Quick Introduction to Backbone

Who Uses Backbone?

everyone

airbnb, newsblur, disqus, hulu, basecamp, stripe, irccloud, trello, …

Why?

http://backbonejs.org/#FAQ-why-backbone

It is not a framework

Backbone is an MVP (Model View Presenter) javascript library that, unlike Django, is extremely light in its use of conventions. Frameworks are commonly seen as fully-working applications that run your code, as opposed to libraries, where you import their code and run it yourself. Backbone falls solidly into the latter category and it’s only through the use of the Router class that it starts to take some control back. Also included are View, Model and Collection (of Models), and Events, all of which can be used as completely standalone components and often are used this way alongside other frameworks. This means that if you use backbone, you will have much more flexibility for creating something unusual and being master of your project’s destiny, but on the other hand you’ll be faced with writing a lot of the glue code yourself, as well as forming many of the conventions.

Dependencies

Backbone is built upon jQuery and underscore. While these two libraries have some overlap, they mostly perform separate functions; jQuery is a DOM manipulation tool that handles the abstractions of various browser incompatibilities (and even in the evergreen browser age offers a lot of benefits there), and underscore is primarily a functional programming tool, offering cross-browser support for map, reduce, and the like. Most data manipulation you do can be significantly streamlined using underscore and, in the process, you’ll likely produce more readable code. If you’re on a project that isn’t transpiling from ES6 with many of the functional tools built in, I enthusiastically recommend using underscore.
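For instance (a trivial, made-up example), a chain of underscore calls often replaces a hand-rolled loop:

// Names of active users, oldest first
var users = [
  {name: 'Alice', age: 34, active: true},
  {name: 'Bob', age: 27, active: false},
  {name: 'Carol', age: 41, active: true}
];

var names = _.chain(users)
  .filter(function(user) { return user.active; })
  .sortBy(function(user) { return -user.age; })
  .pluck('name')
  .value();
// => ["Carol", "Alice"]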

Underscore has one other superb feature: templates.

// Using raw text
var titleTemplate = _.template('<h1>Welcome, <%- fullName %></h1>');
// or if you have a <script type="text/template">
var titleTemplate = _.template($('#titleTemplate').html());
// or if you're using requirejs
var titleTemplate = require('tpl!templates/title');

var renderedHtml = titleTemplate({fullName: 'Martin Sheen'});
$('#main').html(renderedHtml);

Regardless of how you feel about the syntax (which can be changed to mustache-style), having lightweight templates available that support escaping is a huge win and, on its own, enough reason to use underscore.
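Switching to mustache-style delimiters is a one-off change to underscore’s templateSettings (a sketch; note it only changes the unescaped interpolation delimiter):

// Use {{ ... }} instead of <%= ... %> for interpolation
_.templateSettings = {
  interpolate: /\{\{(.+?)\}\}/g
};

var titleTemplate = _.template('<h1>Welcome, {{ fullName }}</h1>');
titleTemplate({fullName: 'Martin Sheen'});  // "<h1>Welcome, Martin Sheen</h1>"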

Backbone.Events

Probably the most reusable class in Backbone’s toolbox is Events. This can be mixed in to any existing class as follows:

// Define the constructor
var MyClass = function(){};

// Add some functionality to the constructor's prototype
_.extend(MyClass.prototype, {
  someMethod: function() {
    var somethingUseful = doSomeThing();
    // trigger an event named 'someEventName'
    this.trigger('someEventName', somethingUseful);
  }
});

// Mix-in the events functionality
_.extend(MyClass.prototype, Backbone.Events);

And suddenly, your class has grown an event bus.

var thing = new MyClass();
thing.on('someEventName', function(somethingUseful) {
  alert('IT IS DONE' + somethingUseful);
});
thing.someMethod();

Things we don’t know about yet can now listen to this class for events and run callbacks where required.

By default, the Model, Collection, Router, and View classes all have the Events functionality mixed in. This means that in a view (in initialize or render) you can do:

this.listenTo(this.model, 'change', this.render);

There’s a list of all events triggered by these components in the docs. When the listener also has Events mixed in, it can use .listenTo which, unlike .on, sets the value of this in the callback to the object that is listening rather than the object that fired the event.
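A quick illustration of the difference, inside a view:

// With .on, 'this' in the callback is the object that fired the event
this.model.on('change', function() {
  console.log(this);  // the model
});

// With .listenTo, 'this' is the listener, and the view can later remove
// every handler it registered in one go with this.stopListening()
this.listenTo(this.model, 'change', function() {
  console.log(this);  // the view
});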

Backbone.View

Backbone.View is probably the most useful, reusable class in the Backbone toolbox. It is almost all convention and does nothing beyond what most people would come up with after pulling together something of their own.

Fundamentally, every view binds to a DOM element and listens to events from that DOM element and any of its descendants. This means that functionality relating to your application’s UI can be associated with a particular part of the DOM.

Views have a nice declarative format, as follows (extend is a short, optional shim for simplifying inheritance):

var ProfileView = Backbone.View.extend({

  // The first element matched by the selector becomes view.el
  // view.$el is a shorthand for $(view.el)
  // view.$('.selector') is shorthand for $(view.el).find('.selector');
  el: '.profile',

  // Pure convention, not required
  template: profileTemplate,

  // When events on left occur, methods on right are called
  events: {
    'click .edit': 'editSection',
    'click .profile': 'zoomProfile'
  },

  // Custom initialize, doesn't need to call super
  initialize: function(options) {
    this.user = options.user;
  },

  // Your custom methods
  showInputForSection: function(section) {
    ...
  },

  editSection: function(ev) {
    // because this is bound to the view, jQuery's this is made
    // available as 'currentTarget'
    var section = $(ev.currentTarget).attr('data-section');
    this.showInputForSection(section);
  },

  zoomProfile: function(ev) {
    $(ev.currentTarget).toggleClass('zoomed');
  },

  // Every view has a render method that should return the view
  render: function() {
    var rendered = this.template({user: this.user});
    this.$el.html(rendered);
    return this;
  }

});

Finally, to use this view:

// You can also pass model, collection, el, id, className, tagName, attributes and events to override the declarative defaults
var view = new ProfileView({ user: someUserObject });
view.render();  // Stuff appears in .profile !

// Once you've finished with the view
view.undelegateEvents();
view.remove();
// NB. Why doesn't remove call undelegateEvents? NFI m8. Hate that it doesn't.

The next step is to nest views. You could have a view that renders a list and, for each of the list items, instantiates a new view to render that item and listen to its events.

render: function() {
  // Build a basic template "<ul></ul>"
  this.$el.html(this.template());

  _.each(this.collection.models, function(model) {
    // Instantiate a new "li" element as a view for each model
    var itemView = new ModelView({ model: model, tagName: 'li' });
    // Render the view
    itemView.render();
    // The jquery-wrapped li element is now available at itemView.$el
    this.$('ul')  // find the ul tag in the parent view
      .append(itemView.$el);  // append to it this li tag
  }, this);  // pass the view as the context so 'this' works in the callback

  return this;
}

How you choose to break up your page is completely up to you and your application.

Backbone.Model and Backbone.Collection

Backbone’s Model and Collection classes are designed for very standard REST endpoints. It can be painful to coerce them into supporting anything else, though it is achievable.

Assuming you have an HTTP endpoint /items, which can:

  • have a list of items GET’d from it
  • have a single item GET’d from /items/ID

And if you’re going to be persisting any changes back to the database:

  • have new items POSTed to it
  • have existing items, at /items/ID PUT, PATCHed, and DELETEd

Then you’re all set to use all the functionality in Backbone.Model and Backbone.Collection.

var Item = Backbone.Model.extend({
  someMethod: function() {
    // perform some calculation on the data
  }
});

var Items = Backbone.Collection.extend({
  model: Item,
  url: '/items'  // the endpoint described above
});

Nothing is required to define a Model subclass, although you can specify the id attribute name and various other configurables, as well as how the data is pre-processed when it is fetched. Collection subclasses must have a model attribute specifying which Model class is used to instantiate new models from the fetched data.
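For example (the endpoint and field names here are hypothetical), you might rename the id attribute, set the URLs, and pre-process the fetched payload:

var Item = Backbone.Model.extend({
  idAttribute: 'uuid',   // our imaginary API uses "uuid" rather than "id"
  urlRoot: '/items'      // lets a standalone model build /items/<uuid>
});

var Items = Backbone.Collection.extend({
  model: Item,
  url: '/items',
  // unwrap a payload that looks like {"results": [...]}
  parse: function(response) {
    return response.results;
  }
});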

The huge win with Model and Collection is in their shared sync functionality – persisting their state to the server and letting you know what has changed. It’s also nice being able to attach methods to the model/collection for performing calculations on the data.

Let’s instantiate a collection and fetch it.

var items = new Items();
// Fetch returns a jQuery promise
items.fetch().done(function() {
  items.models // a list of Item objects
  items.get(4) // get an Item object by its ID
});

Easy. Now let’s create a new model instance and persist it to the database:

var newItem = new Item({some: 'data'});
items.add(newItem);
newItem
  .save()
  .done(function() {
    messages.success('Item saved');
  })
  .fail(function() {
    messages.danger('Item could not be saved lol');
  });

Or to fetch models independently:

var item = new Item({id: 5});
item
  .fetch()
  .done(function() {
    someView.render();
  });

// or use Events!
this.listenTo(item, 'change', this.render);
item.fetch();

And to get/set attributes on a model:

var attr = item.get('attributeName');
item.set('attributeName', newValue);  // triggers 'change', and 'change:attributeName' events

Backbone.Router and Backbone.History

You’ll have realised already that all the above would work perfectly fine in a typical, SEO-friendly, no-node-required, django views ftw multi-page post-back application. But what about when you want a single page application? For that, backbone offers Router and History.

Router classes map urls to callbacks that typically instantiate View and/or Model classes.

History does little more than detect changes to the page URL (whether a change to the #!page/fragment/ or via HTML5 pushState) and, when one occurs, dispatch to your application’s router classes based on the new URL. It is Backbone.History that takes Backbone from the library category into the framework category.

History and Router are typically used together, so here’s some example usage:

var ApplicationRouter = Backbone.Router.extend({

  routes: {
    'profile': 'profile',
    'items': 'items',
    'item/:id': 'item',
  },

  profile: function() {
    var profile = new ProfileView();
    profile.render();
  },

  items: function() {
    var items = new Items();
    var view = new ItemsView({items: items});
    items.fetch().done(function() {
      view.render();
    });
  },

  item: function(id) {
    var item = new Item({id: id});
    var view = new ItemView({item: item});
    item.fetch().done(function() {
      view.render();
    });
  }

});

Note that above, I’m running the fetch from the router. You could instead have your view render a loading screen and fetch the collection/model internally. Or, in the initialize of the view, it could do this.listenTo(item, 'sync', this.render), in which case your routers need only instantiate the model, instantiate the view and pass the model, then fetch the model. Backbone leaves it all to you!
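That last variant might look something like this (a sketch: it assumes an itemTemplate underscore template and a suitable el exist):

var ItemView = Backbone.View.extend({
  el: '.main',
  template: itemTemplate,

  initialize: function(options) {
    this.item = options.item;
    // re-render whenever the model syncs with the server
    this.listenTo(this.item, 'sync', this.render);
  },

  render: function() {
    this.$el.html(this.template({item: this.item}));
    return this;
  }
});

// (inside the router's extend block) the route method then only wires
// things up and kicks off the fetch
item: function(id) {
  var item = new Item({id: id});
  new ItemView({item: item});
  item.fetch();
}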

Finally, let’s use Backbone.History to bring the whole thing to life:

$(function(){
  // Routers register themselves
  new ApplicationRouter();
  if (someCustomSwitch) {
    new CustomSiteRouter();
  }

  // History is already instantiated at Backbone.history
  Backbone.history.start({pushState: true});
});

Now the most common way to use a router is to listen for a click event on a link, intercept the event and, rather than letting the browser load the new page, call preventDefault and instead run Backbone.history.navigate('/url/from/link', {trigger: true}). This will run the method associated with the passed route, then update the URL using pushState. This is the key: each router method should be idempotent, building as much of the page as required from nothing. Sometimes this method will be called with a different page already built, sometimes not. Calling history.navigate will also create a new entry in the browser’s history (though you can avoid this by passing {trigger: true, replace: true}).
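In practice, intercepting those clicks usually means a delegated handler along these lines (a sketch; the data-internal selector is made up):

$(document).on('click', 'a[data-internal]', function(ev) {
  // stop the browser doing a full page load
  ev.preventDefault();
  // run the matching router method, then update the URL via pushState
  Backbone.history.navigate($(this).attr('href'), {trigger: true});
});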

If a user clicks back/forward in the browser, the url will change and Backbone.history will again look up the new url and execute the method associated with that url. If none can be found the event is propagated to the browser and the browser performs a typical page change. In this case, you should be sure in your router method to call .remove and .undelegateEvents on any instantiated views that you no longer need or else callbacks for these could still fire. YES, this is incredibly involved.

Finally, you’ll sometimes be in the position where you’ve updated one small part of a page – some sub-view perhaps – but you want this change to be reflected in the URL. You don’t necessarily want to trigger a router method, because all the work has been done, but you do have a router method that could reconstruct the new state of the page were a user to load that page from a full refresh. In this case, you can call Backbone.history.navigate('/new/path'); and it’ll add a new history entry without triggering the related method.

Conclusion

Backbone is unopinionated. It provides a very useful amount of glue code that immediately puts you in a much better position than if you were just using jQuery; that said, plenty more glue code still has to be written for every application, so it could do much, much more. On one hand this means you gain a tremendous amount of flexibility, which is extremely useful given the esoteric nature of the applications we build, and the power to make your applications incredibly fast, since you aren’t running lots of the general-purpose glue code from much more involved frameworks. On the other hand, it also gives you the power to inadvertently hang yourself.

If you’re looking for something easy to pick up and significantly more powerful than jQuery, but less of a painful risk than the horrible messy world of proper javascript frameworks (see also: the hundreds of angular “regret” blog posts), Backbone is a great place to start.



Observations on the nature of time. And javascript.

In the course of working on one of our latest projects, I picked up an innocuous-looking ticket that said: “Date pickers reset to empty on form submission”. “Easy”, I thought. It’s just the values being lost somewhere in form validation. And then I saw the ‘in Firefox and IE’ description. Shouldn’t be too hard, it’ll be a formatting issue or something, maybe even a placeholder, right?

Yeah, no.

Initial Investigations

Everything was fine in Chrome, but not in Firefox. I confirmed the fault also existed in IE (and then promptly ignored IE for now).

The responsible element looked like this:
<input class="form-control datepicker" data-date-format="{{ js_datepicker_format }}" type="date" name="departure_date" id="departure_date" value="{{ form.departure_date.value|default:'' }}">

This looks pretty innocent. It’s a date input, how wrong can that be?

Sit comfortably, there’s a rabbit hole coming up.

On Date Inputs

Date-type inputs are a relatively new thing; they’re in the HTML5 spec. Support for them is pretty mixed, which jumps out as the reason this works in Chrome but nothing else. Onward investigation (and flapping at colleagues) led to the fact that we use bootstrap-datepicker to provide a JS/CSS based implementation for the browsers that have no native support.

We have an isolated cause for the problem. It is obviously something to do with bootstrap-datepicker, clearly. Right?

On Wire Formats and Localisation

See that data-date-format="{{ js_datepicker_format }}" attribute of the input element? That’s setting the date format for bootstrap-datepicker. The HTML5 date element doesn’t have anything similar. I’m going to cite this stackoverflow answer rather than the appropriate sections of the documentation. The HTML5 element has the concept of a wire format and a presentation format. The wire format is YYYY-MM-DD (ISO 8601); the presentation format is whatever locale the user has set in their browser.

You have no control over this, it will do that and you can do nothing about it.

bootstrap-datepicker, meanwhile, has the data-date-format attribute, which controls everything about the date that it displays and outputs. There’s only one option for this; the wire and presentation formats are not separated.

This leads to an issue. If you set the date in YYYY-MM-DD format for the HTML5 element value, then Chrome will work. If you set it to anything else, Chrome will not work, and bootstrap-datepicker might, depending on whether the format matches what it expects.

There’s another issue. bootstrap-datepicker doesn’t do anything with the element value when you start it. So if you set the value to YYYY-MM-DD format (for Chrome), then a Firefox user will see 2015-06-24, until they select something, at which point it will change to whatever you specified in data-date-format. But a Chrome user will see it in their local format (24/06/2015 for me, GB format currently).

It’s all broken, Jim.

A sidetrack into Javascript date formats.

The usual answer for anything to do with dates in JS is ‘use moment.js’. But why? It’s a fairly large module, this is a small problem, surely we can just avoid it?

Give me a date:

>>> var d = new Date();
undefined

Let’s make a date string!

>>> d.getYear() + d.getMonth() + d.getDay() + ""
"123"

Wat. (Yeah, I know that’s not how you do string formatting and therefore it’s my fault.)

>>> d.getDay()
3

It’s currently 2015-06-24. Why 3?

Oh, that’s day of the week. Clearly.

>>> d.getDate()
24

The method that gets you the day of the month is called getDate(). It doesn’t, you know, RETURN A DATE.

>>> var d = new Date('10-06-2015')
undefined
>>> d
Tue Oct 06 2015 00:00:00 GMT+0100 (BST)

Oh. Default date format is US format (MM-DD-YYYY). Right. Wat.

>>> var d = new Date('31-06-2015')
undefined
>>> d
Invalid Date

That’s… reasonable, given the above. Except that’s a magic object that says Invalid Date. But at least I can compare against it.

>>> var d = new Date('31/06/2015')
undefined
>>> d
Invalid Date

Oh great, same behaviour if I give it UK date formats (/ rather than -). That’s okay.

>>> var d = new Date('31/06/2015')
undefined
>>> d
"Date 2017-07-05T23:00:00.000Z"

Wat.

What’s going on?

The difference here is that I’ve used Firefox, the previous examples are in Chrome. I tried to give an explanation of what that’s done, but I actually have no idea. I know it’s 31 months from something, as it’s parsed the 31 months and added it to something. But I can’t work out what, and I’ve spent too long on this already. Help. Stop.

So. Why you should use moment.js. Because otherwise the old great ones will be summoned and you will go mad.

Also:

ISO Date Format is not supported in Internet Explorer 8 standards mode and Quirks mode.

Yep.

The Actual Problem

Now I knew all of this, I could see the problem.

  1. The HTML5 widget expects YYYY-MM-DD
  2. The JS widget will set whatever you ask it to
  3. We were outputting GB formats into the form after submission
  4. This would then be an incorrect format for the HTML 5 widget
  5. The native widget would not change an existing date until a new one is selected, so changing the output format to YYYY-MM-DD meant that it changed when a user selected something.

A Solution In Two Parts

The solution is to standardise the behaviour and formats across both options. Since I have no control over the HTML5 widget, it looks like it’s time to take a dive into bootstrap-datepicker and make that do the same thing.

Deep breath, and here we go…

Part 1

First job is to standardise the output date format in all the places. This means that the template needs to see a datetime object, not a preformatted date.

Once this is done, we can feed the object into the date template filter with a format string – which uses PHP date format characters. Okay, that’s helpful in 2015. Really.

Having figured that out, I changed the date parsing (Django’s DATE_INPUT_FORMATS setting) to make sure it includes the right ISO format.

That made the HTML5 element work consistently. Great.

Then, to the javascript widget.

bootstrap-datepicker does not do anything with the initial value of the element. To make it behave the same as the HTML5 widget, you need to:

  1. Get the locale of the user
  2. Get the date format for that locale
  3. Set that as the format of the datepicker
  4. Read the value
  5. Convert the value into the right format
  6. Call the setValue event of the datepicker with that value

This should be relatively straightforward, with a couple of complications.

  1. moment.js uses a different date format syntax to bootstrap-datepicker
  2. There is no easy way to get a date format string for the user’s locale, so a hardcoded list is the best solution (see the sketch below)
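The code that follows calls a getLocaleDateString() helper, which isn’t part of bootstrap-datepicker or moment.js – it’s the hardcoded lookup mentioned above. A minimal sketch of what it might look like (the locale list is obviously truncated, and the format strings are illustrative):

// Map the browser locale to a date format string, either in moment.js
// style (DD/MM/YYYY) or bootstrap-datepicker style (dd/mm/yyyy)
function getLocaleDateString(forMoment) {
    var formats = {
        'en-GB': {moment: 'DD/MM/YYYY', datepicker: 'dd/mm/yyyy'},
        'en-US': {moment: 'MM/DD/YYYY', datepicker: 'mm/dd/yyyy'},
        'de':    {moment: 'DD.MM.YYYY', datepicker: 'dd.mm.yyyy'}
        // ... and so on, for every locale you need to support
    };
    var locale = navigator.language || 'en-GB';
    var entry = formats[locale] || formats['en-GB'];
    return forMoment ? entry.moment : entry.datepicker;
}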

// taken from bootstrap-datepicker.js
function parseFormat(format) {
    var separator = format.match(/[.\/\-\s].*?/),
        parts = format.split(/\W+/);
    if (!separator || !parts || parts.length === 0){
        throw new Error("Invalid date format.");
    }
    return {separator: separator, parts: parts};
}

var momentUserDateFormat = getLocaleDateString(true);
var datepickerUserDateFormat = getLocaleDateString(false);

$datepicker.each(function() {
    var $this = $(this);
    var presetData = $this.val();
    $this.data('datepicker').format = parseFormat(datepickerUserDateFormat);
    if (presetData) {
        $this.datepicker('setValue', moment(presetData).format(momentUserDateFormat));
    }
});

A bit of copy and paste code from the bootstrap-datepicker library, some jquery and moment.js and the problem is solved.

Part 2

Now we have the dates displaying in the right format on page load, we need to ensure they’re sent in the right format after the user has submitted the form. Should just be the reverse operation.

 function rewriteDateFormat(event) {
    var $this = $(event.data.input);
    if ($this.val()) {
        var momentUserDateFormat = getLocaleDateString(true);
        $this.val(moment($this.val(), [momentUserDateFormat, 'YYYY-MM-DD']).format('YYYY-MM-DD'));
    }
}

$datepicker.each(function() {
    var $this = $(this);
     // set the form handler for rewriting the format on submit
    var $form = $this.closest('form');
    $form.on('submit', {input: this}, rewriteDateFormat);
});

And we’re done.

Takeaways

Some final points that I’ve learnt.

  1. Always work in datetime objects until the last possible point. You don’t have to format them.
  2. Default to ISO format unless otherwise instructed
  3. Use parsing libraries

 


The Key – back-fitting responsive design (with exciting graphs!)

As an industry we talk a lot about the importance of responsive design. There are a lot of oft-repeated facts about the huge rise in mobile usage, alongside tales of woe about the 70,000 different Android screen resolutions. Customers often say the word ‘responsive’ to us with a terrified, hunted expression. There’s a general impression that it’s a) incredibly vital but b) incredibly hard.

As to the former, it’s certainly becoming hard to justify sites not being responsive from the very beginning. 18 months ago, we’d often find ourselves reluctantly filing ‘responsive design’ along with all the other things that get shunted into ‘phase 2’ early in the project. Nowadays, not so much: Mailchimp reported recently that 60% of mail they send is opened on a mobile phone.

For the latter, there’s this blog post. We hope it demonstrates that retro-fitting responsive design can be simple to achieve and deliver measurable results immediately.

And, because there are graphs and graphs are super boring, we’ve had our Gwilym illustrate them with farm animals and mountaineers. Shut up; they’re great.

What were the principles behind the design?

We’re not really fans of change for change’s sake, and generally, when redesigning a site, we try to stick to the principle of not changing something unless it’s solving a problem, or a clear improvement.

In this redesign project we were working under certain constraints. We weren’t going to change how the sites worked or their information architecture. We were even planning on leaving the underlying HTML alone as much as possible. We ‘just’ had to bring the customer’s three websites clearly into the same family and provide a consistent experience for mobile.

In many ways, this was a dream project. How often does anyone get to revisit old work and fix the problems that have niggled at you since the project was completed? The fact that these changes would immediately benefit the thousands of school leaders and governors who use the sites every day was just the icing on the cake.

And, to heighten the stakes a little more, one of the sites in the redesign was The Key – a site that we built 7 years ago and which has been continually developed since it first went live. Its criticality to the customer cannot be overstated and the build was based on web standards that are almost as old as it’s possible to be on the internet.

What did we actually do?

The changes we made were actually very conservative.

Firstly, text sizes were increased across the board. In the 7 years since the site was first designed, monitor sizes and screen resolutions have increased, making text appear smaller as a result. You probably needed to lean in closer to the screen than was comfortable. We wanted the site to be easy to read from a natural viewing distance.

We retained the site’s ability to adapt automatically to whatever size screen you are using, without anything being cut off. But this now includes any device, from a palm-sized smartphone, to a notebook-sized tablet, up to desktop monitors. (And if your screen is gigantic, we prevent lines from getting too long.) The reading experience should be equally comfortable on any device.

On article pages, the article text used to compete for attention with the menu along the left. While seeing the other articles in the section is often useful, we wanted them to recede to the background when you’re not looking at them.

We wanted to retain the colourfulness that was a hallmark of the previous design. This is not only to be pleasing to the eye – colours are really helpful in guiding the eye around the page, making the different sections more distinct, and helping the most important elements stand out.

Finally, we removed some clutter. These sites have been in production for many years and any CMS used in anger over that kind of period will generate some extraneous bits and bobs. Our principle here was that if you don’t notice anything missing once we’ve removed it, then we’ve removed the right things.

What was the result?

The striking thing about the changes we made was not just the extent of the effect, but also the speed with which it was demonstrable. The following metrics were all taken in the first 4 weeks of the changes being live in production in August 2014.

The most significant change is the improvement in mobile usage on The Key for School Leaders. Page views went up – fast (and have stayed there.)

[Chart: total page views]

 

Secondly, the bounce rate for mobile dropped significantly in the three months following the additions:

[Chart: mobile bounce rate]

Most interestingly for us, this sudden jump in mobile numbers didn’t come from a new group of users that The Key had never heard from before. The proportion of mobile users didn’t increase significantly in the month after the site was relaunched; the bump came almost exclusively from registered users who could now use the site the way they wanted to.

[Chart: proportion of mobile users]

 

A note about hardness

What we did here wasn’t actually hard or complicated – it amounted to a few weeks’ work for Francois. I’ve probably spent longer on this blog post, to be honest. And so our take-away point is this: agencies you work with should be delivering this by default for new work, proposing simple steps you can take to add it to legacy work, or explaining why they can’t or won’t.



SSL broken in gevent on Python 2.7.9 – a debugging tale of woe

I thought it would be worth writing this up quickly as a blog post, just so it’s documented, though I’m guessing the bug is common knowledge by now. The process of finding out the issue was (eventually) enlightening for me though, especially how far the initial problem was from the bug.

I was having problems last week deploying changes to one of our projects hosted on Heroku. I’d done a full run-down of dependencies trying to bring in security and bug fixes, making everything Python 2.7 and Ubuntu Trusty compatible (or, for Heroku, cedar-14 stack compatible). Everything worked fine locally, even using foreman (which is the Heroku tool that runs your code as if it was deployed on Heroku—in this case running through gunicorn with gevent). However, on deploying to a clean app and database on Heroku, the Persona Single-Sign-On authentication wasn’t working. The project’s settings are slightly involved, but the fact the admin site was working and I was getting a login page at all indicated that things were probably okay on the Django side of things. Persona itself worked fine locally, as well as on stage and production deployments on Heroku. I suspected DNS issues, but this turned out not to be the case—site domains and urls were being resolved correctly.

The only difference I could see between my new app and the stage deployment (that was working) was the database version (I’d deliberately matched the PostgreSQL version on my new app to that used in production, while staging was a point release ahead, for some reason) and the build stack that Heroku was using (part of what I was doing was testing the cedar-14 stack, which is based on trusty and supports Python 2.7.9). The customer was keen on having a test instance, so I decided to deploy on staging instead, but with an updated stack. I provisioned a new database to match production (and with production data), and used the cedar-14 stack, and everything worked fine (well apart from the bugs in my pre-Christmas development changes, but that was why I needed a test deployment for them to look at).

So there was something iffy in the test Heroku deployment I had. Off I set trying to debug a live Heroku deployment. After this experience, I must say I am in the market for decent logging/debugging tools for Heroku, so any suggestions are welcome. I found Papertrail to be almost useless in this scenario, so I was reduced to using Django logging and running “heroku logs --app <app_name>” to see the output. So basically, print statement debugging, urgh. This was thwarted temporarily by a Heroku outage, during which they put apps into maintenance mode, preventing build updates. Eventually I worked out that while the Persona authentication was working OK, the authentication on the Django side was failing (using an old version of django-browserid). However, nothing was blowing up or throwing errors; the authentication was just silently failing.

Being unable to directly inject debug logging into a 3rd party library on Heroku, I decided to call the deeper API directly from my own code, and immediately I got a 500 error, and a stack trace – the definition of some internal SSL calls had changed in Python 2.7.9, and gevent was relying on them: reported here http://bugs.python.org/issue22438. So technically not a Python bug, but the app blew up querying SSL URLs when running in Gunicorn + gevent. This was the cause of my authentication issue, as even though this project doesn’t currently run over SSL, Persona verification does. This was not initially causing 500 errors, as although it raised a TypeError in an authentication backend, Django was then just falling back to the default authentication, failing and returning no valid user.


So, bug found, and my current work-around is to stick to a Python 2.7.8 runtime (technically unsupported by Heroku), until either gevent or Python is updated. But why did I not see the issue on staging or locally using foreman? The Heroku stack on staging turned out to be using Python 2.7.4, and locally my virtualenv was running Python 2.7.3.

A few lessons to take away, I suppose:

  • Make sure you include the Python runtime in your list of dependencies to check and versions to match on Development vs. Staging vs. Production.
  • Sometimes underlying components silently riding over errors can mask the true source of odd behaviour (I really should have twigged about the authentication backends, but also custom backends should deal with TypeErrors from internal calls properly).
  • Logging is super-useful, I should probably use it more, and in a smarter way.
  • I would love a decent debugging tool for Heroku.
  • Don’t rely on underscore methods in Python internal libraries (stares daggers at the gevent developers).

If you weren’t aware of this bug, watch out for it!

Update: According to comments on the gevent issue on github, this might be an issue on Amazon even with Python 2.7.8 as they have backported the SSL code from 2.7.9 to their 2.7 runtimes on AWS. This emphasises the need to be aware of what Python runtimes are being deployed on cloud services, where debugging may be more difficult.



Hacking together a standing desk

We’ve had a few people in the office transition to standing desks recently and the process by which everyone achieved their goal has been quite interesting. Or at least interesting to me.

Doug was the first to try it and ended up, as a lot of people do, taking the old ‘big pile of books’ approach. It’s cheap, quick and, so long as you’re using a solid and non-tottering pile of books, probably pretty safe. Luckily the Isotoma office has a pretty extensive library of books – which in Doug’s case have mostly been memorised.

(I wouldn’t let him tidy his desk before I took this photo. He wanted me to tell you.)

A different approach that David introduced (as far as I know) and which I’ve adopted is the music stand approach. It’s a wee bit more expensive and depends on you having the kind of job that doesn’t really involve paper or double monitors – but I love it. The bonus of this approach is you get to feel a little bit like a member of a nineties synth-rock band. Always a bonus.

But the canonical Isotoma standing desk was “Ikea-hacked” together by Tom. He went to town on it to such an extent that we made him do an intranet post about it:

I’ve had a couple of enquiries about the parts for my standing desk, so rather than send sporadic emails, I thought I’d just put it here.


I built my standing desk based on this article, which is great, and has assembly instructions and everything (although it’s pretty trivial to put together). The guy bought all the bits from IKEA for $22. But obviously we don’t live in America, and also I needed to double up on some of the parts due to my dual monitor setup, so here is the full parts list, with links to the UK IKEA website:

Also, you will need to find 4 screws to screw the brackets into the table legs because annoyingly, the brackets have holes drilled, but no screws to go in said holes.

Grand total: £29 (including £10 postage)

Oh and don’t forget, you’ll probably want some kind of chair to give your legs a rest every now and then (yes it’s hilarious that Tom still has a chair etc, but it does come in handy, even if it’s just for 2×30 minute spells during the day). I got a bar stool with a back from Barnitts 🙂

This concludes my blog post about standing desks. Please add photos of your sweet, customised desks below in the comments.

Just kidding.


 


Why mobile last?

In his post Mobile Last? Jonathan Stark recounts his experiences at a recent hackathon, where teams were given 48 hours to build an innovative web application. While he ensured as usual that the site he built looked and worked great on any device, he was dismayed that “only one of the top ten winning entries was even a little bit mobile friendly.” He concludes that “the vast majority of web professionals have not truly embraced mobile.”

Jonathan points out, correctly, that mobile devices are already the default connected device for most people, and that poor mobile experiences are driving users towards native apps. After all, he says, working mobile-first is not that hard, and asks:

Are you working mobile-first? If not, why not? If you are working mobile-first, how do you like it? What were the biggest challenges you faced in making the transition from desktop? What platforms are you targeting on mobile? Web? Native iOS/Android? Something else?

I thought I’d write a quick response.

I’d guess the top reason why front-end web developers are not working mobile-first is that they are usually not the visual designers for the sites they’re building, and the designs they receive from the visual designers will usually be desktop designs. Visual designers:

  • are likely to be more set in their ways of designing for the desktop, and unaware of the “mobile first” philosophy
  • are working in tools that are not responsive (such as Photoshop), and creating mobile mockups as well means a doubling of effort

Clients are also slow to adapt. Unless they are conceiving of their project as primarily a mobile site/app, they’ll expect to be sold the design on the strength of a desktop mockup. This is what most visual designers are used to delivering.

Finally, the front-end developer and everyone else on the team will be working on desktop systems, where it’s far easier to view the work in progress in desktop browsers. It takes extra effort to view the site in a mobile device or emulation thereof.

I’m usually the front-end developer in this equation. I am given a clear visual spec for the desktop, and it’s left up to me to make it responsive. It’s usually not that difficult, but it means starting by implementing the visual spec I have, i.e. the desktop design, and adapting it via media queries afterwards, resulting in a “mobile last” process.

Mobile last is somewhat inefficient, as it means redefining many rules. (It’s “subtractive”, where mobile first is “additive”.) But in my experience it’s not that bad. In my 2 most recent projects the responsive.less include file (containing all the styles dependent on media queries) accounted for only 179 of 1492 lines of CSS (11%) and 641 of 5047 (12%), and adds at most 3 or 4 extra days. It’s also a conceptually simple way of working. The “layering” of a mobile first stylesheet can make the stylesheet a lot harder to interpret and maintain.
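To make the additive/subtractive distinction concrete, here’s a trivial Less sketch (class names invented):

// Mobile last ("subtractive"): desktop rules are the default,
// then overridden for small screens
.sidebar {
  float: left;
  width: 25%;
}
@media (max-width: 767px) {
  .sidebar {
    float: none;
    width: auto;
  }
}

// Mobile first ("additive"): the base rules are the mobile layout,
// and larger screens layer extra rules on top
@media (min-width: 768px) {
  .sidebar {
    float: left;
    width: 25%;
  }
}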

I’m currently implementing quite a complex visual design, provided to me, as usual, as desktop mockups. I have been attempting to follow a mobile first approach, but have found it extremely difficult, given that I don’t know at the outset what a polished look will be at mobile sizes (since I don’t have mockups for it). I’ve ended up following a mobile-first approach in some areas of the CSS, and mobile-last in others. Rather than a single responsive.less include right at the end, every one of the 20-odd Less files is riddled with media queries. The whole thing is, I have to admit, a lot more complex and confusing.

If I were starting a new project where I am also the visual designer, or a project where the visual design is relatively simple, then I’d probably work mobile-first. (Photoshop plays quite a small role in such cases, as my ideal process is to do much of the design in the browser.) But that, alas, wouldn’t be a typical project.

I hope this provides some insights into why so many developers are currently not yet working mobile first. What are your experiences? Let me know in the comments below or @fjordaan

On December 11 Jonathan will be redesigning his personal site in front of a live audience – you can sign up for the webcast here. I’m sure it will be a very interesting demonstration of mobile-first development.

My experiences are with websites and web apps, rather than native iOS or Android. I test on iPhone and Android smartphones, 7″ tablets and 10″ tablets (both iOS and Android). I also use the “Responsive Design View” on Firefox and Chrome’s mobile device emulation.



Links

There is a new version of gunicorn, 19.0, which has a couple of significant changes, including some interesting workers (gthread and gaiohttp) and actually responding to signals properly, which will make it work with Heroku.

The HTTP RFC, 2616, is now officially obsolete. It has been replaced by a bunch of RFCs from 7230 to 7235, covering different parts of the specification. The new RFCs look loads better, and it’s worth having a look through them to get familiar with them.

Some kind person has produced a recommended set of SSL directives for common webservers, which provide an A+ on the SSL Labs test, while still supporting older IEs. We’ve struggled to find a decent config for SSL that provides broad browser support, whilst also having the best levels of encryption, so this is very useful.

A few people are still struggling with Git.  There are lots of git tutorials around the Internet, but this one from Git Tower looks like it might be the best for the complete beginner. You know it’s for noobs, of course, because they make a client for the Mac 🙂

I haven’t seen a lot of noise about this, but the EU has outlawed pre-ticked checkboxes.  We have always recommended that these are not used, since they are evil UX, but now there’s an argument that might persuade everyone.

Here is a really nice post about splitting user stories. I think we are pretty good at this anyhow, but this is a nice way of describing the approach.

@monkchips gave a talk at IBM Impact about the effect of Mobile First. I think we’re on the right page with most of these things, but it’s interesting to see mobile called-out as one of the key drivers for these changes.

I’d not come across the REST Cookbook before, but here is a decent summary of how to treat PUT vs POST when designing RESTful APIs.

Fastly have produced a spectacularly detailed article about how to get tracking cookies working with Varnish.  This is very relevant to consumer facing projects.

This post from ThoughtWorks, The Software Testing Cupcake, is absolutely spot on, and I think it accurately describes an important aspect of testing.

As an example of how to make unit tests less fragile, this is a decent description of how to isolate tests, which is a key technique. The examples are Ruby, but the principle is valid everywhere.

Still on unit testing, Facebook have open sourced a JavaScript unit testing framework called Jest. It looks really very good.

A nice implementation of “sudo mode” for Django. This ensures the user has recently entered their password, and is suitable for protecting particularly valuable assets in a web application like profile views or stored card payments.

If you are using Redis directly from Python, rather than through Django’s cache wrappers, then HOT Redis looks useful. This provides atomic operations for compound Python types stored within Redis.