Category Archives: User experience

Integrating UX with agile development

Incorporating user centred design practices within Agile product development can be a major challenge. Most of us in the user experience field are more familiar with the waterfall “big design up front” methodology. Project managers and developers are also likely to be more comfortable with a discrete UX design phase that is completed before development commences. But this approach tends to be inefficient, slower and more expensive. How does the role of the UX designer change within Agile product development, with its focus on transparency and rapid iteration?

While at Isotoma we’ve always followed our own flavour of Agile product development, UX is still mostly front-loaded in a “discovery” phase, as at most agencies. Our recent vision mixer project for BBC Research & Development, however, required a more integrated approach. The project had a very tight timeframe, requiring overlapping UX and development, with weekly show & tells.

From a UX perspective, it was a positive experience and I’m happy with the result. This post lists some of the techniques and approaches that I think helped integrate UX with Agile. Of course, every project and organisation is different, so there is definitely no one-size-fits-all approach, but hopefully there is something here you can use in your work.

Sign pointing to Usability Lab

Rapid user research on an Agile project

Our timeline to build an in-browser vision mixer for BBC R&D (previously, previously) is extremely tight – just 2 months. UX and development run concurrently in Agile fashion (a subject for a future blog post), but design was largely done within the first month.

Too frequently for projects on such timescales there is pressure to omit user testing in the interest of expediency. One could say it’s just a prototype, and leave it until the first trials to see how it performs, and hopefully get a chance to work the learnings into a version 2. Or, since we have weekly show & tell sessions with project stakeholders, one could argue complacently that as long as they’re happy with what they’re seeing, the design’s on track.

Why test?

But the stakeholders represent our application’s target users only slightly better than ourselves, which is not very well – they won’t be the ones using it. Furthermore, this project aims to broaden the range of potential operators – from what used to be the domain of highly experienced technicians, to something that could be used by a relative novice within hours. So I wanted to feel confident that even people who aren’t familiar with the project would be able to use it – both experts and novices. I’m not experienced in this field at all, so I was making lots of guesses and assumptions, and I didn’t want to go too far before finding out whether they were wrong.

One of the best things about working at the BBC is the ingrained culture of user centred design, so there was no surprise at the assumption that I’d be testing paper prototypes by the 2nd week. Our hosts were very helpful in finding participants within days – and with hundreds of BBC staff working at MediaCity there is no danger of using people with too much knowledge of the project, or of re-using participants. Last but not least, BBC R&D has a fully equipped usability lab – complete with two-way mirror and recording equipment. Overkill for my purposes – I would’ve managed with an ordinary office – but having the separate viewing room helped ensure that I got the entire team observing the sessions without crowding my subject. I’m a great believer in getting everyone on the project team to see other people interact with and talk about the application.

Paper prototypes

Annotated paper prototyping test script

Paper prototypes are A3 printouts of the wireframes, each representing a state of the application. After giving a brief description of what the application is used for, I show the page representing the application’s initial state, and change the pages in response to user actions as if it were the screen. (Users point to what they would click.) At first, I ask task-based questions: “add a camera and an audio source”; “create a copy of Camera 2 that’s a close-up”; etc. As we linger on a screen, I’ll probe more about their understanding of the interface: “How would you change the keyboard shortcut for Camera 1?”; “What do you think Undo/Redo would do on this screen?”; “What would happen if you click that?”; and so on. It doesn’t matter that the wireframes are incomplete – when users try to go to parts of the application that haven’t been designed yet, I ask them to describe what they expect to see and be able to do there.

In all, I did paper prototype testing with 6 people in week 2, and with a further 3 people in week 3. (With qualitative testing even very few participants tend to find the major issues.) In keeping with the agile nature of the project, there was no expectation of me producing a report of findings that everyone would read, although I do type up my notes in a shared document to help fix them in my memory. Rather, my learnings go straight into the design – I’m usually champing at the bit to make the changes that seem so obvious after seeing a person struggle, feeling really happy to have caught them so early on. Fortunately, user testing showed that the broad screen layout worked well – the main changes were to button labels, icon designs, and generally improved affordances.

Interactive prototypes

By week 4 my role had transitioned into front-end development, in which I’m responsible for creating static HTML mockups with the final design and CSS, which the developers use as reference markup for the React components. While this isn’t mainstream practice in our industry, I find it has numerous advantages, especially for an Agile project, as it enables me to leave the static, inexact medium of wireframes behind and refine the design and interaction directly within the browser. (I add some dynamic interactivity using jQuery, but this is throwaway code for demo purposes only.)
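To give a flavour of that throwaway interactivity, here’s a minimal sketch in the same spirit – the selectors and behaviours are hypothetical, purely for illustration, not the project’s actual code:

    // Throwaway demo-only jQuery of the kind described above.
    // Selectors (.source-tile, .settings-toggle) are hypothetical.
    $(function () {
      // Simulate selecting a video source in the static mockup
      $('.source-tile').on('click', function () {
        $('.source-tile').removeClass('is-selected');
        $(this).addClass('is-selected');
      });

      // Fake a settings panel opening so the interaction can be felt
      $('.settings-toggle').on('click', function () {
        $('#settings-panel').toggleClass('is-open');
      });
    });

None of this survives into the production React components – it exists only so the mockups behave enough like an application to be evaluated.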

The other advantage of HTML mockups is that they afford us an opportunity to do interactive user testing using a web browser, well before the production application is stable enough to test. Paper prototyping is fine up to a point, but it has plenty of limitations – for example, you can’t scroll, there are no mouseover events, you can’t resize the screen, etc.

So by week 5 I was able to test nearly all parts of the application, in the browser, with 11 users. (This included two groups of 4, which worked better than I expected – one person manning the mouse and keyboard, but everyone in the group thinking out loud.) It was really good to see the difference that interactivity made, such as hover states, and seeing people actually trying to click or drag things, rather than just saying what they’d do, gave me an added level of confidence in my findings. Again, immediately afterwards, I made several changes that I’m confident improve the application – removing a redundant button that never got clicked, adding labels to some icons, strengthening a primary action by adding an icon, among others. Not to mention fixing numerous technical bugs that came up during testing. (I use Github comments to ensure developers are aware of any HTML changes to components at this stage.)

Never stop testing

Hopefully we’ll have time for another round of testing with the production application. This should give a more faithful representation of the vision mixing workflow, since in the mockups the application is always in the same state, using dummy content. With every test we can feel more confident – and our stakeholders can feel more confident – that what we’re building will meet its goals, and make users productive rather than frustrated. And on a personal level, I’m just relieved that we won’t be launching with any of the embarrassing gotchas that cropped up and got fixed during testing.

Read part 4 of this project with BBC R&D, where we talk about compositing and mixing video in the browser.

Design Museum interior

London’s new Design Museum

I have very little nostalgia for the old Design Museum building. Its location near Tower Bridge was always a real effort to get to, and while an attractive modernist icon, it always felt small, very much one of London’s “minor” museums – not befitting London’s reputation as a global design powerhouse. On 21 November it reopened at a new location in Kensington, and I visited on the opening weekend.

Part I: The new Design Museum and the exhibitions

Part II: A digital Design Museum?

Stuttering towards accessibility

“Hello, I’m Andy and I have a stammer.”

While this is true, thankfully very few people notice it nowadays. Like many older stammerers I’ve developed complex and often convoluted strategies to avoid triggering it. But still, if you were to put me on stage and ask me to say that sentence we’d be there all week.

Over the last ten years or so as I’ve aged and gained more control over my stammer I’ve not given it much thought, barring politely turning down the occasional invitation to speak in public. Recently though, I’ve been forced to reassess both it and my coping strategies in the light of the rapid increase in voice interfaces for everything from phones to cars. And that’s made accessibility a very personal issue.

Like many stammerers I struggle with the start of my own name, and sounds similar to it. In the world of articulatory phonetics the sounds that trip me up are called “open vowels”. That is, sounds that are generated at the back of the throat with little or no involvement from the lips or tongue. In English that’s words starting with vowels or the letter H. So the first seven words of the sentence “Hello, I’m Andy and I have a stammer” are pretty much guaranteed to stop me in my tracks (unless I’m drunk or singing – coping strategies!).

We recently got an Amazon Echo for the office and wired it up to a bunch of things, including Spotify. Colleagues tell me it’s amazing, but because the only way I can wake it up is by saying “Alexa!” it’s absolutely useless to me.

And it gets worse. Even if a stammerer is usually able to overcome their problem sounds other factors will increase their likelihood of stammering in a particular situation.

One is over-rehearsal, where the brain has time to analyse the sentence, spot the potentially difficult words and start to worry about them, exacerbating the problem. This can be caused by reading aloud – even bedtime stories for the kids (don’t get me started on Harry and Hermione or Hiccup Horrendous Haddock the Third) – but anything where the words are predetermined can be a problem; be that a sales presentation, giving your name as everyone shakes hands as they walk into a meeting, performing lines from a play, making the vows at your wedding, literally anything where you have time to think about what you’re going to say and can’t change the words.

Speech interfaces currently fall firmly into the realm of over-rehearsal. You’re forced to plan carefully what you’re going to say, and then say it. “Alexa! Play The Stutter Rap by Morris Minor and the Majors” (yeah, that was a childhood high point, let me tell you) is a highly structured sentence and despite Alexa’s smarts it’s the only way you’re going to get that track played. So it’s not only a problematic sound, but it’s over-rehearsed… Doubly bad.

The other common trigger for stammering is often loosely defined as social anxiety, but is anywhere where the stammerer is drawing attention to themselves, either from being the focus of an activity (on stage, say) or from disturbing the normal flow of activity around them (for example, by trying to attract someone’s attention across a crowded room).

If I want to talk to the Echo in our office I know that saying “Alexa!” is going to disturb my colleagues’ flow and cause them to involuntarily prick up their ears, which brings it right into the category of social anxiety… As well as already being a trigger sound and over-rehearsed… Triply bad.

However good my coping strategies might normally be I can’t use any of them when speaking to Alexa, and speaking to Alexa is exactly when I would normally be employing them all. Even when I’m in the office on my own it’s useless to me, because the trigger sound and over-rehearsal are enough to stop me.

And the Echo isn’t alone. There’s “Hey, Siri!”, “Hey, Cortana!”, “OK Google!”, and “Hi TV!”. All of them, in fact. Right now all of the major domestic voice controls use wake words that start with an open vowel. Gee. Thanks everyone.

Google recently announced that 20% of mobile searches use voice rather than text. More than half of iOS users use Siri regularly. Amazon and Microsoft are doubling down on Echo and Cortana, respectively. Tesla are leading the way in automotive, but all the major manufacturers offer some form of voice control for at least some of their models. It makes absolute sense for them to do so – speech is such a natural interface, right? And it’s futuristic – it’s the stuff of Star Trek. Earl Grey, Hot! and all that. But just as screen readers have constantly struggled to keep up with web technologies we’re seeing developers doomed to repeat those same mistakes with voice interfaces, as they leap ahead without consideration for those that can’t use them.

To give some numbers and put this in context there are approximately twice as many stammerers in the UK (1% of the population) as there are registered visually impaired or blind (0.5% of the population). That’s a whole chunk of people. And while colleagues would say that me not being able to choose music for the stereo is a benefit not a drawback, it makes light of the fact that a technology we generally think of as assistive is not a panacea for all.

Currently Siri, Cortana, Samsung TVs and Alexa can only be addressed with sentences that start with an open vowel (Siri, Cortana and Samsung can’t be changed; Alexa can, but only to one of “Alexa”, “Echo” and “Amazon”). Google on Android can thankfully be changed to any phrase the user likes, even if the process is a little convoluted. What’s interesting for me, though, is that the Amazon Echo offers no alternative interface at all. It is voice control only, and has to be woken with an open vowel. It is the worst offender.

For me this has been an object lesson in checking my privilege. Yes, I’m short sighted, but contact lenses give me 20/20 vision. I had a bad back for a while, but I was still mobile. This is the first piece of technology that I’ve actually been unable to use. And it’s not a nice experience. As technologists we know that accessibility is important – not just for the impaired but for everyone – yet we rarely feel it. I’m sure feeling it now.

Voice control is still in its infancy. New features and configurations are being introduced all the time. Parsing will get smarter so that wake words can be changed and commands can be more loosely structured. All of these things will improve accessibility for those of us with speech impediments, who are non-verbal, have a throat infection, or are heavily accented.

But we’re not there yet, and right now I’ve got to ask Amazon… Please, let me change Alexa’s name.

I was thinking Jeff?

Refreshing a site into uselessness

Myvue.com was never what I’d call wonderfully designed, but it did its job. It did it so well, in fact, that it’s one of the reasonably few sites I’ve bookmarked on my phone, and one of the even fewer bookmarks that I actually use on a regular basis.

Specifically, I bookmarked the URL of my local cinema. Here’s how it looked until a month or so ago:

Screenshot of Myvue website from 2015

Pretty simple, right? It shows me a vertical list of movies showing today, and the times they’re showing at. It defaults to today, but at the top of the list are tabs for the next 5 days. It’s not exactly mobile-optimised, but it’s perfectly usable on my iPhone.

The site also does plenty of other things, all of which are pretty much useless. I’m not here to watch a trailer. I don’t buy tickets online as it takes a minute to buy at the cinema and it’s never sold out or full. Where do “user ratings” even come from and why do I care? Why would anyone go to this site to find films to watch by genre? Why would I register on a site like this? What’s the point of literally any of the rest of the site’s navigation? Anyway, that’s by the by. It does its central job well, showing me every movie that’s showing on that day, at what times, on one page.

So the other day I used my bookmark again and noticed immediately they’ve redesigned. It looks new and expensive. It adapts to my mobile device. And it’s now utterly useless, particularly on mobile. Since 99% of the time I use this site it’s on my iPhone, that’s what I’ll use for the rest of this review.

This is what you now see at the same URL:

Screenshot of new Myvue site on iPhone

The entire first screen is taken up by a film poster, which turns out to be a slideshow. Carousels are annoying enough, but this one makes it extremely difficult to know what page I’m on, because judging by what I see on the screen, I’m on a page for Sausage Party.

Just pause to consider how pointless this slideshow is (whilst adding who knows how much to the download time). It’s a sequence of movies showing at this cinema. Which is… exactly what the list below it is. Except this is a slideshow, and that’s a vertical list. Someone must have insisted on a slideshow.

Scrolling past this annoyance you get to the vertical list of movies showing that day. The posters are now so large only 2 fit on screen at once (even on desktop!), yet they’ve removed the short description, leaving only the title and… what’s this? “Get times & tickets”? Why don’t you just show the times like you used to?  So now I have to navigate to get the times for every movie I’m interested in?

[Update 14 Sep 2016: MyVue have added showtimes back in on the listing page! I wish they would also show the film’s running time as they used to, though.]

So I click “Get times & tickets” and… WHAT?! Another page for the movie I just clicked on, with an enormous backdrop image but no useful information on it, and another big “GET TIMES & TICKETS” button! So I click that, and a panel slides laboriously in from the right, displays a “working” spinner for all of 7 seconds, before finally showing me the times. Wow, it really worked hard to show me some text-based information. There’s no caching, by the way. Next time I request showtimes it’ll take another 7 seconds.

Screenshot of new Myvue site on iPhone, product page

Now I want to see what times other movies are showing, so I go Back. Back to the useless screen with the backdrop (let’s call it the Product screen). So I go Back again. Whoops, here comes the sliding panel with showtimes again. Clicking Back a third time is the charm. (Although it’s hard at first to tell I’m back on the listing, because an unrelated movie – the slideshow at the top – is filling the screen.)

The above buggy behaviour is actually the best-case scenario. If you clicked the X in the corner of the sliding showtimes panel instead of Back, you’d find yourself back at the Product screen with no escape. Clicking Back again would restore the showtimes panel, and so on, trapping you in an endless loop.

The bottom line is I’m removing this bookmark from my phone, as it is now useless. A Google search for “what’s showing at vue fulham” gives me the information I want.

What went wrong here?

Screenshot of new Myvue site product page, on desktop

The product screen on desktop includes showtimes for today, which requires an extra click on mobile.

Firstly, despite the mobile-optimised layout, it’s obvious that the site was designed and built with a desktop or widescreen display in mind. It seems the designers wanted something that looks like today’s media centre interfaces, like Plex or Apple TV. The enormous posters, backdrops and spacious page layouts are typical of a “lean-back” design. Also, the desktop version includes stuff that’s missing on mobile – the Product screen even has screening times for today, saving one click. But ask yourself: is this site anywhere in the same category as these media centre apps? Where are people likely to be when checking what’s showing at their cinema that evening? How quickly do they want this information? Mobile should’ve been considered of at least equal importance.

Media centre interfaces also necessarily involve deep levels of navigation, a handicap born of lack of space on the screen and a remote-control interface. On browsers it’s easier to scroll and click on targets, and if you can avoid deeper levels of navigation, you do so.

Screenshot of new Myvue Quick Book interface on iPhone

But secondly, it’s clear that the designers had a very different idea of the primary user journey from me. You can see this in the super-prominent “Quick Book” widget. On a desktop, you can at least see what the widget does, but on mobile it’s entirely mysterious what “Quick Book” will do. When invoked, it’s clear that the designers consider the website’s primary purpose to be buying tickets online, and that users don’t care so much about where or when it’s showing, as long as it’s the one movie they want. (The widget does not default location to the current cinema selected, and does not default date to today.)

Admittedly I don’t know how typical I am of Vue cinemagoers, but I don’t buy tickets online, I’m 99% certain to go to my nearest Vue rather than somewhere else, and there may be more than one movie I’m interested in seeing. My decision ultimately depends on what’s the most convenient time within the next 5 days. With the “Quick Book” widget, I’d have to use 3 dropdowns (which should be the UI of last resort) – 7 clicks – before even being able to see which times it’s showing for that day, which may well rule it out.

The damage

I used to be able to see what’s showing today at my local cinema, and when, with a single tap on my phone. Two taps if I wanted to check another day. Now, to check the times for a movie requires 3 taps, with loading time between each. Checking the times for another movie adds another 5 taps. Checking a different day… you get the picture. This redesign has rendered the site unusable, for me, and I would guess a large proportion of its previous users.


When a feature is invoked more often accidentally than on purpose, it should be considered a bug

Back in 2014 I tweeted this:

I’ve been meaning to revisit that statement for a while now. The link above refers to the following misfeature afflicting Mac users with external displays:

When the mouse cursor touches the bottom of an external display, MacOS assumes you want the Dock to be there, and moves it there from the primary display, often covering up what you were trying to click on.

This has happened to me almost every day for several years now – never intentionally. I have looked into it thoroughly enough to know that it cannot be turned off without sacrificing all other multi-monitor features.

Our devices are full of such annoyances, born from designers’ attempts to be helpful. Sometimes they are just irritating, like the above. Sometimes they can be downright destructive.

Naked keyboard shortcuts

Keyboard shortcuts without modifier keys can be fantastic productivity enhancers, as any advanced user of Photoshop or Vim knows. But they are potentially incredibly dangerous, especially in applications with frequent text entry. Photoshop uses naked keyboard shortcuts (to coin a phrase) primarily to select tools or view modes. This may sometimes cause confusion for a novice (like when they accidentally enter Quick Mask mode with ‘Q’), but is rarely destructive.

Screenshot of Postbox Message menu

Postbox, my email client, on the other hand, inexplicably uses naked keyboard shortcuts like ‘A’ to Archive and ‘V’ to move messages to other folders. What were they thinking? Email does involve frequent text entry. If you are rapidly touch typing an email, and the message composition window accidentally loses focus (e.g. due to the trackpad), there is no telling the damage that you can do. You may discover (as I have, more than once) that messages you know you received have disappeared – sometimes only days later, without knowing what happened.

Any application that uses naked keyboard shortcuts should avoid using them for actions, as the application may come to the foreground unintentionally. It’s safest to use them to select modes only.
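As a rough sketch of what that defensive approach can look like in a web application – the mode function here is hypothetical; the focus check is the point:

    // Honour naked (modifier-free) shortcuts only when the user isn't typing.
    function enterQuickMaskMode() { /* hypothetical mode change - reversible */ }

    document.addEventListener('keydown', function (event) {
      var target = event.target;
      // Never treat plain keystrokes as shortcuts while a text field has focus
      if (target.isContentEditable ||
          /^(INPUT|TEXTAREA|SELECT)$/.test(target.tagName)) {
        return;
      }
      // Leave modified keystrokes to conventional shortcut handling
      if (event.metaKey || event.ctrlKey || event.altKey) {
        return;
      }
      if (event.key.toLowerCase() === 'q') {
        enterQuickMaskMode(); // selects a mode; nothing destructive here
      }
    });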

Apple Pay

Here’s another example. Ever since Apple Pay came to iOS, I see this almost every day:

iPhone showing Apple Pay interface

How often do I actually use Apple Pay? About once a month, currently. Every time this screen appears unintentionally, I lose a few seconds – often missing the critical moment if my intention was to take a photo. (It is invoked by double-pressing the hopelessly overloaded Home button. A too-long press invokes Siri, which I also do unintentionally about 1 in 3 times.)

Gestures

After Apple announced yet more gestures in iOS 10 at WWDC last week, @Pinboard quipped:

On touchscreen devices, gestures are a powerful and often indispensable part of the UI toolkit. But they are invisible, and easy to invoke accidentally. As I wrote in my recent criticism of the direction Apple’s design is taking, whilst some gestures truly become second nature,

[…] mostly they are an over-hyped disaster area making our devices seem not fully under our control, constantly invoked by accident and causing us to make mistakes, with no reliable way of discerning what gestures are available or what they’ll do.

Since I use an iPhone where the top-left corner is no longer reachable one-handed, I rely on the right-swipe gesture to go back more and more often. Unfortunately, this frequently has unintended effects, whether it’s because the gesture didn’t start at the edge, or wasn’t perfectly horizontal, or because the app or website developer intended something different with the gesture. And with every new version of OS X on a MacBook, the trackpad and Magic Mouse usually have more surprises in store for the unwary. I’m sure voice interfaces will yield plenty more examples over time.

Accidental activation – a necessary evil?

In my Apple design critique, I lamented the fact that it is very difficult to make gestures discoverable, as they are inherently invisible – contributing to that sense of “simplicity” which is a near-religion nowadays. You can introduce gestures during on-boarding, and hope users remember them, but more likely they will be quickly forgotten and we all know no-one will use a “help” page.

So you could argue that accidental invocation is the price we may have to pay for discoverability. Even though I rarely use Apple Pay, I am confident I know how to do so – a consequence of its annoying habit of popping up all the time.

With gestures, accidental activation may be critical to their discoverability, and if well implemented need not be irritating. For example, many gestures have a “peek” stage during which it is possible to reverse the action. Unfortunately, today’s touchscreen devices no longer have an obvious, reliable Undo function, one of the many failings highlighted by Tognazzini and Norman.

What are designers to do?

So if you design a helpful, time-saving feature that risks being invoked by accident,

  • consider the value of the feature (to the user)
  • consider the risk (of the user invoking it unintentionally)
  • consider the cost (to the user of doing so)

Is the value of the feature or shortcut worth the cost of it sometimes annoying users? How big is the cost to users? Wasting a second or two? Or possible data loss? Is the user even aware what happened? Is it possible to Undo? What is the risk, or likelihood, of this happening? What proportion of users does it affect, and how frequently? What proportion of usages are unintentional?

Failing any one of these may be enough to consider the feature a bug. Or you could fail two but the positive may still outweigh the negative. It depends on the severity:

Venn diagram with 3 circles intersecting: Low value, High risk, and High user cost

Can a feature be activated unintentionally? Is it worth the risk? The Venn diagram of inadvisability.

So designers should firstly try to ensure unintentional feature activation is as unlikely as possible – preferably impossible. But if it happens, the user should be aware of what happened, and the cost to the user should be low or at least easy to recover from. User testing will be your best hope of spotting problems early. Beta testing and dogfooding, run over a longer time period, are great at finding those problems that may have low frequency but high cost. Application developers may also be able to collect stats on feature usage, and determine automatically if a feature is often invoked but then not used, or immediately undone, which may highlight problems.
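A minimal sketch of that last idea – every name and threshold here is invented for illustration:

    // Flag a feature as accident-prone when it is usually invoked and then
    // immediately dismissed or undone. Thresholds are illustrative only.
    var stats = { invoked: 0, abandoned: 0 };

    function recordInvocation() { stats.invoked += 1; }

    // Call this when the user dismisses or undoes within a few seconds
    function recordAbandonment() { stats.abandoned += 1; }

    function probablyABug() {
      // The rule of thumb below: more accidental than deliberate use
      return stats.invoked >= 20 && stats.abandoned / stats.invoked > 0.5;
    }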

Or stick with a simple rule of thumb: if a feature is activated by accident more often than on purpose, it’s not a feature but a bug. Feel free to share more examples!

Sites for The Key for School Leaders and School Governors shown on various devices

Evolving The Key: insights from user research

Last week the freshly redesigned The Key for School Leaders and School Governors went live, after almost a year in design and development. We’ve been working with The Key since their founding in 2007, and this is the third major update Isotoma has carried out.

In the nine years since launching, The Key has grown to support almost half of schools in England, with an amazing 75,000 registered school leaders and 17,000 registered governors having access to 5,000 original resources every month. It is now one of the most trusted sources of information in the education sector.

It’s no small task to migrate that many regular, engaged users from a system they’ve grown used to across to a new design, information architecture and user experience. This post explains our process and the results.

The Key website shown on various devices

Learning from users

As an organisation answering questions from its members daily, The Key had a pretty good idea of some new features that they knew users would appreciate. For example, better information about their school’s membership: which of your colleagues are using it, when the membership renewal is due, etc. They also keep a keen eye on their web analytics, knowing what terms users are searching for, the preponderance of searching vs browsing, etc.

But other questions could only be answered through user research. How effective was the site navigation? Are certain features unused due to lack of awareness, or lack of interest? As we had done with the initial design of the site, I went on-site to observe school staff actually using The Key.

The left-hand navigation column was one such bone of contention. Article pages have always shown articles in the same topic in the column to the left. It was easy to argue that they were relevant “related content”, but did anyone ever use them? We found that, in fact, hardly anyone did. (A 2014 design change to make them look more unobtrusive until moused over simply made them more ignorable.) Users were more likely to click Back to their search results or previous topic page, or use the “See also” links. Out went the left-hand navigation – the savings went into a calmer, more spacious layout and a font size increase.

Comparison of 2015 and 2016 article page designs

The “See also” links, however, appeared at the top of the right-hand column, so were rarely on-screen by the time users had finished an article. So we made sure they reappeared when the user had reached the bottom of an article.
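Here’s a minimal sketch of how such a reveal can be done – the selectors are hypothetical, not The Key’s actual markup:

    // Show a repeated "See also" block once the reader scrolls past the
    // end of the article body. Selectors are hypothetical.
    $(window).on('scroll', function () {
      var $article = $('.article-body');
      var articleEnd = $article.offset().top + $article.outerHeight();
      var viewportBottom = $(window).scrollTop() + $(window).height();
      $('.see-also-footer').toggleClass('is-visible', viewportBottom >= articleEnd);
    });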

Homepages – what are they good for?

Another thing the user tests told us loud and clear was that the current homepage was not working. Nobody had anything negative to say about it, but most people couldn’t tell us what was on it, nor did it feature in their navigation journeys.

Screenshot of The Key member homepage

However, we knew that most members were avid consumers of the weekly update emails, bringing news headlines and other topical articles to their attention. So the new homepage is designed much more like a magazine cover, promoting news and topical content in a much more eye-catching and easily scanned style. The new flexible grid-based layout allows the homepage design to be changed with ease.

Screenshot showing ‘mini-homepage’ below an article

We know that most users’ journeys will continue to be task-driven, but we wanted to increase the likelihood of serendipitous browsing – all members said they do enjoy browsing content not related to their initial search, on those occasions when they have time to spare. We have also added what we refer to as a “mini homepage” below each article, with a magazine-style display of new and topical content. This does much the same job as the homepage, without requiring a visit to the homepage itself.

Getting users to their destination faster

Most people used Search to find what they were looking for, but a significant number also browsed, using the site’s carefully curated Topic hierarchy. In both cases, we saw opportunities to speed up the process.

The Key screenshot showing suggested search results

Switching to a new, high-performance search engine, Elasticsearch, let us present top search results for a query dynamically – in many cases, this should avoid the need to browse a page of search results entirely.
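For illustration, here’s a sketch of how query-as-you-type suggestions like these are typically wired up in the browser. The /search/suggest endpoint and the response fields are hypothetical stand-ins; Elasticsearch does the actual matching on the server:

    // Query-as-you-type suggestions, debounced to avoid a request per keystroke.
    var timer;
    $('#search').on('input', function () {
      var query = this.value;
      clearTimeout(timer);
      if (query.length < 3) { return; }  // too short to be useful
      timer = setTimeout(function () {
        $.getJSON('/search/suggest', { q: query }, function (results) {
          var $list = $('#suggestions').empty();
          $.each(results, function (i, r) {
            $('<li>').append($('<a>').attr('href', r.url).text(r.title))
                     .appendTo($list);
          });
        });
      }, 250);
    });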

The Key screenshot showing dropdown navigation menu

We also introduced a large “doormat” style dropdown navigation on the main topic bar. This lets users skip two or three pages of topics and sub-topics entirely. It also makes it much easier to scan topics to decide on the most relevant area, without leaving the current page.

What else did we change?

There are too many other changes to list in this post. The customer acquisition journey – marketing and signup pages – was entirely redesigned. Some unused tools were removed and some under-used tools made more prominent. New article types were introduced, such as article bundles, and a dedicated News section was added.

Template changes

Under the hood the front-end templates were given a full overhaul for the first time since 2007 – they have held up remarkably well, even allowing a full “CSS Zen Garden” style redesign in 2014. But in other respects they held us back. We now have a fluid, responsive 12-column grid system and a whole library of responsive, multi-purpose modules. There is a module compendium and style guide to explain their usage and serve as a reference design.

Screenshot of The Key for School Governors homepage

This was also our first site to use an SVG-based icon system. We went with the Grunticon system, which provides a PNG fallback for browsers that don’t yet support SVG (IE8 and below). Grunticon applies SVGs as background images, however, so they cannot be recoloured using CSS. Since the site has two distinct visual themes – for School Leaders and School Governors – each Grunticon icon was also sprited to allow a colour theme to be applied with only a CSS change.
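A sketch of how that spriting trick works in CSS – selectors and file names are hypothetical. Because Grunticon sets the icon as a background image, the theme rule shifts to a differently coloured region of the same sprite rather than recolouring the SVG:

    /* Two colour variants of each icon live side by side in one sprite. */
    .icon-search {
      background-image: url("icons/search.sprite.svg");
      background-size: 200% 100%;   /* sprite is two icons wide */
      background-position: 0 0;     /* School Leaders theme (default) */
    }
    .theme-governors .icon-search {
      background-position: 100% 0;  /* School Governors theme */
    }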

A continuing journey

Our work is by no means done. The Key have many exciting developments planned in 2016, and we can’t wait to work on them… And expect another post from our development team on the process of migrating such a large site from one CMS (Plone) to another (Wagtail) in the not too distant future.


In-car interaction design

I recently went to a fascinating IxDA (interaction design) meetup about in-car interaction design. Here’s a quick summary:

1. Driver distraction and multitasking

Duncan Brumby teaches and researches in-car UX at UCL. He described various ways car makers provide more controls to drivers whilst trying to avoid driver distraction (and falling foul of regulations).

I think most of us are sometimes confused by car user interfaces (UI), and with the advent of the “connected car”, are likely to be more confused than ever.


Ever wondered what those lights on your dash mean? Confusing car UI by Dave https://www.facebook.com/davewittybanter/

Modern in-car UIs take different approaches. Most cars use dashboard UIs with or without touchscreens. Apple’s CarPlay takes this approach. Then there are systems like BMW’s iDrive which has a dashboard display but a rotary controller located next to the seat, meant to be used without looking. This avoids the inaccuracy of touchscreens due to the vehicle’s speed or bumpy roads. (So iDrive makes more sense on the autobahn, whereas touchscreen UIs make more sense when you’re mostly stuck in traffic.)

Brumby mentioned that Tesla’s giant touchscreens are not popular with drivers, as their glare is unpleasant when it’s dark, and app interfaces often change as a result of software updates.

The other major problem is that even interfaces you don’t have to glance at (e.g. audio interfaces, so fashionable at the moment) still cause cognitive distraction – research has confirmed what many of us instinctively know, that you are less attentive when you’re on a phone call, even when using hands-free. (See UX for connected cars by Plan Strategic.) And of course audio interfaces (Siri and the like) are never 100% accurate the way they are in advertisements. Imagine having to correct its misheard mistakes in the message you were trying to send, whilst driving.

Reduction in reaction times:

  • 54% using a hand-held phone
  • 46% using a hands-free phone
  • 18% after drinking the legal limit of alcohol

Reduction in reaction times – RAC research 2008. From UX for connected cars by Plan Strategic http://www.slideshare.net/planstrategic/ux-for-connected-cars-58076640

(Why, you may ask, is a hands-free phone conversation more distracting than a conversation with passengers in the car? People inside the car can see what the driver is seeing and doing. People instinctively modulate their conversation to what’s happening on the road, and drivers rely on that. A person on the other end of the phone can’t see what the driver is seeing, and doesn’t do that, unwittingly causing greater stress for the driver.)

2. ustwo: Are we there yet?

The talk by Harsha Vardhan and Tim Smith of ustwo (the versatile studio that also made Monument Valley, and which hosted the event) was more interesting, even though I started off quite skeptical. They’ve published Are We There Yet? (PDF), their vision and manifesto for the connected car, which got quite a bit of attention. (It got them invited to Apple to speak to Jony Ive.) It’s available free online.

But what I found most interesting was their prototype dashboard UI – the “in-car cluster” – to demonstrate some of the ideas they talk about in the book. It’s summarised in this short video:

This blog post pretty much covers exactly what the talk did, in detail – do have a read. The prototype is also available online. (It’s built using Framer.JS, a prototyping app I’ve been meaning to try out for a while.)

As I said, I started off skeptical, but I found the rationale really quite convincing. I like how they distilled their thinking down to the essence – not leading with some sort of “futuristic aesthetic”. They’ve approached it as “what do drivers need to see?” – recognising that this could be entirely different depending on whether they’re parked, driving or reversing.

Is Apple giving design a bad name?

Legendary user experience pioneers and ex-Apple employees Don Norman and Bruce ‘Tog’ Tognazzini recently aimed a broadside at Apple in an article titled “How Apple Is Giving Design A Bad Name”, linkbait calibrated to get the design community in a froth.

The article has some weaknesses (over-long, repetitive, short on illustrations and with some unconvincing anecdata), but on the whole I think they are right. Apple’s design is getting worse, users are suffering from it, and they are setting bad examples that are being emulated by other designers. I would urge you to read the article, but here is my take on it.

The Key – back-fitting responsive design (with exciting graphs!)

As an industry we talk a lot about the importance of responsive design. There are a lot of oft-repeated facts about the huge rise in mobile usage, alongside tales of woe about the 70,000 different Android screen resolutions. Customers often say the word ‘responsive’ to us with a terrified, hunted expression. There’s a general impression that it’s a) incredibly vital but b) incredibly hard.

As to the former, it’s certainly becoming hard to justify sites not being responsive from the very beginning. 18 months ago, we’d often find ourselves reluctantly filing ‘responsive design’ along with all the other things that get shunted into ‘phase 2’ early in the project. Nowadays, not so much: Mailchimp reported recently that 60% of mail they send is opened on a mobile phone.

For the latter, there’s this blog post. We hope it demonstrates that retro-fitting responsive design can be simple to achieve and deliver measurable results immediately.

And, because there are graphs and graphs are super boring, we’ve had our Gwilym illustrate them with farm animals and mountaineers. Shut up; they’re great.

What were the principles behind the design?

We’re not really fans of change for change’s sake, and generally, when redesigning a site, we try to stick to the principle of not changing something unless it’s solving a problem, or a clear improvement.

In this redesign project we were working under certain constraints. We weren’t going to change how the sites worked or their information architecture. We were even planning on leaving the underlying HTML alone as much as possible. We ‘just’ had to bring the customer’s three websites clearly into the same family and provide a consistent experience for mobile.

In many ways, this was a dream project. How often does anyone get to revisit old work and fix the problems that have niggled at you since the project was completed? The fact that these changes would immediately benefit the thousands of school leaders and governors who use the sites every day was just the icing on the cake.

And, to heighten the stakes a little more, one of the sites in the redesign was The Key – a site that we built 7 years ago and which has been continually developed since it first went live. Its criticality to the customer cannot be overstated and the build was based on web standards that are almost as old as it’s possible to be on the internet.

What did we actually do?

The changes we made were actually very conservative.

Firstly, text sizes were increased across the board. In the 7 years since the site was first designed, monitor sizes and screen resolutions have increased, making text appear smaller as a result. You probably needed to lean in closer to the screen than was comfortable. We wanted the site to be easy to read from a natural viewing distance.

We retained the site’s ability to adapt automatically to whatever size screen you are using, without anything being cut off. But this now includes any device, from a palm-sized smartphone, to a notebook-sized tablet, up to desktop monitors. (And if your screen is gigantic, we prevent lines from getting too long.) The reading experience should be equally comfortable on any device.
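In CSS terms the recipe for this is simple. Here’s a minimal sketch under assumed class names – illustrative values, not the production stylesheet:

    /* A fluid page that fills any screen, but caps line length on very
       large monitors. Values are illustrative. */
    .page {
      width: 100%;
      max-width: 60em;    /* prevent uncomfortably long lines */
      margin: 0 auto;
      padding: 0 1em;
    }
    img {
      max-width: 100%;    /* nothing gets cut off on narrow screens */
      height: auto;
    }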

On article pages, the article text used to compete for attention with the menu along the left. While seeing the other articles in the section is often useful, we wanted them to recede to the background when you’re not looking at them.

We wanted to retain the colourfulness that was a hallmark of the previous design. This is not only to be pleasing to the eye – colours are really helpful in guiding the eye around the page, making the different sections more distinct, and helping the most important elements stand out.

Finally, we removed some clutter. These sites have been in production for many years and any CMS used in anger over that kind of period will generate some extraneous bits and bobs. Our principle here was that if you don’t notice anything missing once we’ve removed it, then we’ve removed the right things.

What was the result?

The striking thing about the changes we made was not just the extent of the effect, but also the speed with which it was demonstrable. The following metrics were all taken in the first 4 weeks of the changes being live in production in August 2014.

The most significant change is the improvement in mobile usage on The Key for School Leaders. Page views went up – fast – and have stayed there.

[Graph: total page views]

Secondly, the bounce rate for mobile dropped significantly in the three months following the additions:

[Graph: mobile bounce rate]

Most interestingly for us, this sudden bounce in mobile numbers wasn’t from a new group of users that The Key had never heard from before. The proportion of mobile users didn’t increase significantly in the month after the site was relaunched. The bump came almost exclusively from registered users who could now use the site the way they wanted to.

[Graph: proportion of mobile users]

A note about hardness

What we did here wasn’t actually hard or complicated – it amounted to a few weeks’ work for Francois. I’ve probably spent longer on this blog post, to be honest. And so our take-away point is this: agencies you work with should be delivering this by default for new work, and should be proposing simple steps you can take to add it for legacy work – or explaining why they can’t or won’t.
