Internet Security Threats – When DDoS Attacks

On Friday evening an unknown entity launched one of the largest Distributed Denial of Service (DDoS) attacks yet recorded, against Dyn, a DNS provider. Dyn provide service for some of the Internet’s most popular sites, and they duly suffered problems. Twitter, GitHub and others were unavailable for hours, particularly in the US.

DDoS attacks happen a lot, and are generally uninteresting. What is interesting about this one is:

  1. the devices used to mount the attack
  2. the similarity with the “Krebs attack” last month
  3. the motive
  4. the potential identity of the attacker

Together these signal that we are entering a new phase in the development of the Internet, one with some worrying ramifications.

The devices

Unlike most other kinds of “cyber” attack, DDoS attacks are brute force – they rely on sending more traffic than the recipient can handle. Moving packets around the Internet costs money, so this is ultimately an economic contest – whoever spends more money wins. The way you do this cost-effectively, of course, is to steal the resources you use to mount the attack. A network of compromised devices like this is called a “botnet”.

Most computers these days are relatively well protected – basic techniques like default-on firewalls and automated patching have hugely improved their security. There is a new class of device, though, generally called the Internet of Things (IoT), which has none of these protections.

IoT devices demonstrate a “perfect storm” of security problems:

  1. Everything on them is written in the low-level ‘C’ programming language. ‘C’ is fast and small (important for these little computers) but it requires a lot of skill to write securely – skill that is not always available.
  2. Even if the vendors fix a security problem, how does the fix get onto the deployed devices in the wild? These devices rarely have the capability to patch themselves, so the vendors need to ship updates to householders and provide a mechanism for upgrades – and the customer support this entails.
  3. Nobody wants to patch these devices themselves anyway. Who wants to go round their house manually patching their fridge, toaster and smoke alarm?
  4. Because of their minimal user interfaces (making them difficult to operate if something goes wrong), they often have default-on [awful] debug software running. Telnet to a high port and you can get straight in to administer them (the sketch after this list shows how easy such devices are to find).
  5. They rarely have any kind of built-in security software.
  6. They have crap default passwords that nobody ever changes.
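Point 4 deserves a quick demonstration. Here is a minimal sketch – standard-library Python only, with the 192.168.1.x subnet as an assumption you should replace with your own, and intended strictly for auditing a network you own – that sweeps a home subnet for devices answering on the default telnet port. Attackers do essentially this across the whole Internet, then try a short list of factory-default passwords:

```python
import socket

def telnet_banner(host, port=23, timeout=1.0):
    """Return the telnet login banner if the port is open, else None."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                return s.recv(128).decode(errors="replace")
            except socket.timeout:
                return ""  # port open, but the device sent no banner in time
    except OSError:
        return None

if __name__ == "__main__":
    # 192.168.1.0/24 is an assumption -- substitute your own subnet.
    for i in range(1, 255):
        host = f"192.168.1.{i}"
        banner = telnet_banner(host)
        if banner is not None:
            print(f"{host} answers on telnet: {banner.strip()[:60]!r}")
```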

To see how shockingly bad these things are, follow Matthew Garrett on Twitter. He takes IoT devices to pieces to see how easy they are to compromise. Mostly he can get into them within a few minutes. Remarkably, one of the most secure IoT devices he’s found so far was a Barbie doll.

That most of these devices are far worse than a Barbie doll should give everyone pause for thought. Then imagine the dozens of them so many of us have scattered around our houses. Multiply that by the millions of people with connected devices and it should be clear this is a serious problem.

Matthew has written on this himself, and he’s identified this as an economic problem of incentives. There is nobody who has an incentive to make these devices secure, or to fix them afterwards. I think that is fair, as far as it goes, but I would note that ten years ago we had exactly the same problem with millions of unprotected Windows computers on the Internet that, it seemed, nobody cared about.

The Krebs attack

A few weeks ago, someone launched a remarkably similar attack on the security researcher Brian Krebs. Again the attackers are unknown, and again they launched the attack using a global network of IoT devices.

Given the similarities in the attack on Krebs and the attack on Dyn, it is probable that both of these attacks were undertaken by the same party. This doesn’t, by itself, tell us very much.

It is common for botnets to be owned by criminal organisations that hire them out by the hour. They often have online payment gateways, telephone customer support and operate basically like normal businesses.

So, if this botnet is available for hire then the parties who hired it might be different. However, there is one other similarity which makes this a lot spookier – the lack of an obvious commercial motive.

The motive

Most DDoS attacks are either (a) political or (b) extortion. In both cases the identity of the attackers is generally known, in some sense. For political DDoS attacks (“hacktivism”) the targets have often recently been in the news, and are generally quite aware of why they’re being attacked.

Extortion using DDoS attacks is extremely common – anyone who makes money on the Internet will have received threats and been attacked, and many will have paid out to prevent or stop a DDoS. Banks, online gaming, DNS providers, VPN providers and ecommerce sites are all common targets – many of them so common that they have experienced operations teams in place who know how to handle these things.

To my knowledge no threats were made to Dyn or Krebs before the attacks, and nobody tried to extort money from them to stop the attacks.

What they have in common is their state-of-the-art protection. Brian Krebs was hosted by Akamai, a very well-respected content delivery company who have huge resources – and for whom protecting against DDoS is a line of business. Dyn host the DNS for some of the world’s largest Internet firms, and similarly are able to deploy huge resources to combat DDoS.

This looks an awful lot like someone testing out their botnet on some very well protected targets, before using it in earnest.

The identity of the attacker

There are therefore two likely possibilities for the attacker: either (a) a criminal organisation looking to hire out their new botnet, or (b) a state actor.

If it is a criminal organisation then right now they have the best botnet in the world, and nobody is able to combat it effectively. Whoever owns it can hire it out to the highest bidder, who can threaten to take entire countries – or entire financial institutions – off the Internet.

A state actor is potentially as disturbing. Given the targets were in the US it is unlikely to be a western government that controls this botnet – but it could be one of dozens from North Korea to Israel, China, Russia, India, Pakistan or others.

As with many weapons, a botnet is most effective when used as a threat – and we may never know if it is used as a threat, or who the victims might be.

What should you do?

For individuals, DDoS attacks aren’t the only risk from a compromised device. Anyone who can compromise one of these devices can get into your home network, which should give everyone pause – think about the private information you casually keep on your home computers.

So take some care over the IoT devices you buy, and buy from reputable vendors who are likely to be taking care over their products. Unfortunately the devices most likely to be secure are also likely to be the most expensive.

One of the greatest things about the IoT is how cheap these devices are, and the capability they can provide at this low price. Many classes of device don’t necessarily even have reliable vendors working in that space. Being expensive and well made is no long-term protection – devices routinely go out of support after a few years and become liabilities.

Anything beyond this is going to require concerted effort on a number of fronts. Home router vendors need to build in capabilities for detecting compromised devices and disconnecting them. ISPs need to take more responsibility for the traffic coming from their networks. Until being compromised causes devices to malfunction for their owner there will be no incentives to improve them.

It is likely that the ultimate fix for this will be Moore’s Law – the safety net our entire industry has relied on for decades. Many of the reasons for IoT vulnerabilities are to do with their small amounts of memory and low computing power. When these devices can run more capable software they can also have the management interfaces and automated patching we’ve become used to on home computers.

 

The economics of innovation

One of the services we provide is innovation support. We help companies of all sizes when they need help with the concrete parts of developing new digital products or services for their business, or making significant changes to their existing products.

A few weeks ago the Royal Swedish Academy of Sciences awarded the Nobel Prize for Economics to Oliver Hart and Bengt Holmström for their work in contract theory. This prompted me to look at some of Holmström’s previous work (for my sins, I find economics fascinating), and I came across his 1989 paper Agency Costs and Innovation. It is so relevant to some of my recent experiences that I wanted to share it.

Imagine you have a firm or a business unit and you have decided that you need to innovate.

This is a pretty common situation – you know strategically that your existing product is starting to lose traction. Maybe you can see commoditisation approaching in your sector. Or perhaps, as is often the case, you can see the Internet juggernaut bearing down on your traditional business and you know you need to change things up to survive.

What do you do about it? If you’ve been in this situation the following will probably resonate:

[Quoted excerpt from Holmström’s “Agency Costs and Innovation”]

This describes the principal-agent problem, a classic in economics: how can a principal (who wants something) incentivise an agent to do what they want? The agent and the “contracting” being discussed here could be any kind of contracting, including full-time staff.

A good example of the principal-agent problem is how you pay a surgeon. You want to reward their work, but you can’t observe everything they do. The outcome of surgery depends on team effort, not just one individual. Surgeons also have things they need to do besides surgery – developing standards, mentoring junior staff and so forth. Finally, the activity itself is inherently high-risk, so surgeons will make mistakes no matter how competent they are. Under performance pay their salary would constantly be at risk, so you would need to pay huge bonuses to persuade them to undertake the work at all.

In fact commonly firms will try and innovate using their existing teams, who are delivering the existing product. These teams understand their market. They know the capabilities and constraints of existing systems. They have domain expertise and would seem to be the ideal place to go.

However, these teams have a whole range of tasks available to them (just as with our surgeon above), and choices in how they allocate their time. This is the “multitasking effect”, and it is particularly problematic for innovative tasks.
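To make the multitasking effect concrete, here is a toy model – my own construction for illustration, not taken from the paper – in which an employee splits a fixed effort budget between routine work and R&D. Both produce output with diminishing returns, but only routine output is measurable, so only it attracts incentive pay:

```python
import math

# Toy sketch of the multitasking effect (illustrative, not Holmström's model).
# Effort is a fixed budget split between routine work and R&D; both outputs
# have diminishing returns, but only the measurable routine output is paid.
def pay(e_routine, w_routine=1.0, w_rnd=0.0):
    e_rnd = 1.0 - e_routine
    return w_routine * math.sqrt(e_routine) + w_rnd * math.sqrt(e_rnd)

for w_rnd in (0.0, 0.5, 1.0):
    best = max((i / 100 for i in range(101)), key=lambda e: pay(e, w_rnd=w_rnd))
    print(f"R&D reward weight {w_rnd:.1f} -> {best:.0%} of effort on routine work")
```

With no reward attached to R&D output the rational allocation is 100% routine work – which is exactly the behaviour described next.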

My personal experience of this is that, when people have choices between R&D type work and “normal work”, they will choose to do the normal work (all the while complaining that their work isn’t interesting enough, of course):

[Quoted excerpt from the paper]

This leads large firms to have separate R&D divisions – this allows R&D investment decisions to take place between options that have some homogeneity of risk, which means incentives are more balanced.

However, large firms have a problem with bureaucratisation. This is a particular problem when you wish to innovate:

[Quoted excerpt from the paper]

Together this leads to a problem we’ve come across a number of times, where large firms have strong market incentives to spend on innovation – but find their own internal incentive systems make this extremely challenging.

If you are experiencing these sorts of problems please do give us a call and see how we can help.

I am indebted to Kevin Bryan’s excellent A Fine Theorem blog for introducing me to Holmström’s work.

 

A new Isotoma Whitepaper: Chatbots

Over the last six months we’ve had a lot of interest from customers in the emerging area of chatbots, particularly ones using Facebook Messenger as a platform.

While bots have been around, in some form or other, for a very long time, the Facebook Messenger platform has catapulted them into prominence. Access to one billion of the world’s consumers is a tempting prospect for many businesses.

In our new whitepaper we review the ecosystem that is emerging around chatbots and provide a guide to some of the factors you should consider if you are thinking about building and deploying them.


The contents include:

  • The history of chat interfaces
  • What conversational interfaces can do, and why
  • Natural Language Processing
  • Features provided by chatbot platforms
  • An in-depth review of eight of the top chatbot platforms
  • Recommendations for next steps, and a look to the future

Please, download the whitepaper, and let us know what you think.

 

Stuttering towards accessibility

“Hello, I’m Andy and I have a stammer.”

While this is true, thankfully very few people notice it nowadays. Like many older stammerers I’ve developed complex and often convoluted strategies to avoid triggering it. But still, if you were to put me on stage and ask me to say that sentence we’d be there all week.

Over the last ten years or so as I’ve aged and gained more control over my stammer I’ve not given it much thought, barring politely turning down the occasional invitation to speak in public. Recently though, I’ve been forced to reassess both it and my coping strategies in the light of the rapid increase in voice interfaces for everything from phones to cars. And that’s made accessibility a very personal issue.

Like many stammerers I struggle with the start of my own name, and sounds similar to it. In the world of articulatory phonetics the sounds that trip me up are called “open vowels”. That is, sounds that are generated at the back of the throat with little or no involvement from the lips or tongue. In English that’s words starting with vowels or the letter H. So the first seven words of the sentence “Hello, I’m Andy and I have a stammer” are pretty much guaranteed to stop me in my tracks (unless I’m drunk or singing – coping strategies!).

We recently got an Amazon Echo for the office and wired it up to a bunch of things, including Spotify. Colleagues tell me it’s amazing, but because the only way I can wake it up is by saying “Alexa!” it’s absolutely useless to me.

And it gets worse. Even if a stammerer is usually able to overcome their problem sounds, other factors will increase their likelihood of stammering in a particular situation.

One is over-rehearsal, where the brain has time to analyse the sentence, spot the potentially difficult words and start to worry about them, exacerbating the problem. This can be caused by reading aloud – even bedtime stories for the kids (don’t get me started on Harry and Hermione or Hiccup Horrendous Haddock the Third) – but anything where the words are predetermined can be a problem: a sales presentation, giving your name as everyone shakes hands on the way into a meeting, performing lines from a play, making the vows at your wedding – literally anything where you have time to think about what you’re going to say and can’t change the words.

Speech interfaces currently fall firmly into the realm of over-rehearsal. You’re forced to plan carefully what you’re going to say, and then say it. “Alexa! Play The Stutter Rap by Morris Minor and the Majors” (yeah, that was a childhood high point, let me tell you) is a highly structured sentence and despite Alexa’s smarts it’s the only way you’re going to get that track played. So it’s not only a problematic sound, but it’s over-rehearsed… Doubly bad.

The other common trigger for stammering is often loosely defined as social anxiety, but it covers any situation where the stammerer draws attention to themselves, either by being the focus of an activity (on stage, say) or by disturbing the normal flow of activity around them (for example, by trying to attract someone’s attention across a crowded room).

If I want to talk to the Echo in our office I know that saying “Alexa!” is going to disturb my colleagues’ flow and cause them to involuntarily prick up their ears, which brings it right into the category of social anxiety… As well as already being a trigger sound and over-rehearsed… Triply bad.

However good my coping strategies might normally be, I can’t use any of them when speaking to Alexa – and speaking to Alexa is exactly when I would normally be employing them all. Even when I’m in the office on my own it’s useless to me, because the trigger sound and over-rehearsal are enough to stop me.

And the Echo isn’t alone. There’s “Hey, Siri!”, “Hey, Cortana!”, “OK Google!”, and “Hi TV!”. All of them, in fact. Right now all of the major domestic voice controls use wake words that start with an open vowel. Gee. Thanks everyone.

Google recently announced that 20% of mobile searches use voice rather than text. More than half of iOS users use Siri regularly. Amazon and Microsoft are doubling down on Echo and Cortana, respectively. Tesla are leading the way in automotive, but all the major manufacturers offer some form of voice control for at least some of their models. It makes absolute sense for them to do so – speech is such a natural interface, right? And it’s futuristic – it’s the stuff of Star Trek. “Earl Grey, hot!” and all that. But just as screen readers have constantly struggled to keep up with web technologies, we’re seeing developers doomed to repeat those same mistakes with voice interfaces, as they leap ahead without consideration for those who can’t use them.

To give some numbers and put this in context, there are approximately twice as many stammerers in the UK (1% of the population) as there are people registered visually impaired or blind (0.5% of the population). That’s a whole chunk of people. And while colleagues would say that my not being able to choose music for the stereo is a benefit rather than a drawback, that makes light of the fact that a technology we generally think of as assistive is not a panacea for all.

Currently Siri, Cortana, Samsung TVs and Alexa can only be addressed with sentences that start with an open vowel (Siri, Cortana and Samsung can’t be changed; Alexa can, but only to one of “Alexa”, “Echo” and “Amazon”). Google on Android can thankfully be changed to any phrase the user likes, even if the process is a little convoluted. The most interesting case for me, though, is the Amazon Echo, which offers no alternative interface at all. It is voice control only, and it has to be woken with an open vowel. It is the worst offender.

For me this has been an object lesson in checking my privilege. Yes, I’m short sighted, but contact lenses give me 20/20 vision. I had a bad back for a while, but I was still mobile. This is the first piece of technology that I’ve actually been unable to use. And it’s not a nice experience. As technologists we know that accessibility is important – not just for the impaired but for everyone – yet we rarely feel it. I’m sure feeling it now.

Voice control is still in its infancy. New features and configurations are being introduced all the time. Parsing will get smarter so that wake words can be changed and commands can be more loosely structured. All of these things will improve accessibility for those of us with speech impediments, who are non-verbal, have a throat infection, or are heavily accented.

But we’re not there yet, and right now I’ve got to ask Amazon… Please, let me change Alexa’s name.

I was thinking Jeff?

Our plants need watering, part I

Here at Isotoma Towers we’ve recently started filling our otherwise spartan office with plants. Plants are lovely but they do require maintenance, and in particular they need timely watering.

Plants.

Since we’re all about automation here, we decided to use this as a test case for building some Internet of Things (IoT) devices. One of my colleagues pointed out this great moisture sensor from Catnip (pictured).

This forms the basis of our design.

[Image: Catnip I2C soil moisture sensor]

There are lots and lots of choices for how to build something like this, and this blog post is going to talk about design decisions. See below the fold for more.
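As a taster of the sort of code involved, here is a minimal sketch of reading the sensor from a Raspberry Pi over I2C using the smbus2 library. The default address (0x20) and the register numbers below are as I understand the sensor’s published register map, but treat them as assumptions to verify against your own firmware version:

```python
from smbus2 import SMBus  # pip install smbus2

SENSOR_ADDR = 0x20      # factory-default I2C address for the Catnip sensor
REG_CAPACITANCE = 0x00  # two-byte soil capacitance (higher = wetter)
REG_TEMPERATURE = 0x05  # two-byte temperature, in tenths of a degree C

def read_word(bus, reg):
    # The sensor returns big-endian words; SMBus reads them little-endian,
    # so swap the bytes.
    raw = bus.read_word_data(SENSOR_ADDR, reg)
    return ((raw & 0xFF) << 8) | (raw >> 8)

with SMBus(1) as bus:  # I2C bus 1 on a Raspberry Pi
    moisture = read_word(bus, REG_CAPACITANCE)
    temp_c = read_word(bus, REG_TEMPERATURE) / 10.0
    print(f"capacitance={moisture} temperature={temp_c:.1f}C")
```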

 

Continue reading

4 Times That The Misery Of Creative Agencies Made Me Happy

Clickbait titles are fun, but bear with me, good people. I’m trying to make a point.

This report was wafted under my nose the other day. It makes for depressing, but not terribly surprising, reading. The first paragraph pretty much nails it:

[Quoted excerpt from the Smith & Beta report]

Anyone who’s spoken to me in a professional capacity in the last 3 months will probably recognise that Smith & Beta’s report is quantitative confirmation of what I’ve been going on about for ages. Each of the points below makes me sad – but also, because I am a shallow, vapid person, I still get to feel happy that I’m right.

1) Good quality creative requires good quality technical implementation

Agencies lead with creative vision and lean on technical skills (internal & external) to deliver this vision. No one ever won a pitch by saying that the creative will be a strong C+ but it’s going to be implemented really well. Sadly, the opposite is almost always true. The industry is generally OK with taking an amazing creative idea and delivering it late, over-budget and on top of a pile of bodies of fallen colleagues.

2) This technical resource – where it exists within an agency – is often siloed and over-committed

Because of the way the creative industry works, creative resource is always going to be an expense the agency is happy to invest in. Investing in technical resource, however, is a more expensive, slower, trickier business.

Similarly, investing in older, more skilled resource is always going to be a harder sell when there are countless thousands of young and exploitable juniors clamouring for your attention.

An agency trying to walk the line between capability and capacity in order to really call themselves “Full Service” will end up with a safe but middle-of-the-road offer. Conversely, an agency who shoots for the moon and invests in a highly specialised and/or highly senior team may find that they’ve painted themselves into a very expensive corner.

3) It’s hard to hire your way out of this problem

I mean, duh, obviously. It’s hard to hire your way out of any problem. Recruitment, training and increasing retention are sloooooow processes. And the problems that this report outlines are problems of the now.

(Side-note: In my role here at Isotoma, I often end up talking to agencies about projects that we can collaborate on. I’m usually talking about projects that might be coming up in, say, 6 months, but people actually want help RIGHT NOW.)

4) These problems, when considered together, reduce the satisfaction of the customer and shorten the lifetime of the account

As abusive as the client/agency model can be, there’s a satisfyingly stark bottom line to it: “Do good work; get more work.” Note that this is distinct from “Pitch good creative; get more work.”

As I said above, no one ever won a pitch by outlining a competent implementation plan, but once the project is over and the smoke settles, the customer doesn’t just remember the pitch.

(If you’re really unlucky, the people who were in the pitch don’t even work for the customer anymore…)

The knife edge that a marcomms agency has to walk is being able to deliver creative vision *and* technical competence in a way that doesn’t fundamentally alter what the company is. Go too far in one direction and you’re unable to deliver anything profitably, go too far in the other and you’ve magically become a company that you don’t want to be.

So this is one of the reasons that Isotoma do what we do. We’re already a technical agency. We’re already geared up to help you estimate, deliver and, crucially, support a creative campaign. We’re good partners. And the better we get at ploughing this particular furrow, the better we’re able to help and complement agencies who’ve chosen to plough another.

And that makes me happy.

(See? I was being cynically provocative to attract clicks. And the pug at the top? The cherry on the cake, my friend. Truly I am a monster.)

 

Refreshing a site into uselessness

Myvue.com was never what I’d call wonderfully designed, but it did its job. It did it so well, in fact, that it’s one of the relatively few sites I’ve bookmarked on my phone, and one of the even fewer bookmarks that I actually use on a regular basis.

Specifically, I bookmarked the URL of my local cinema. Here’s how it looked until a month or so ago:

[Screenshot of the Myvue website from 2015]

Pretty simple, right? It shows me a vertical list of movies showing today, and the times they’re showing at. It defaults to today, but at the top of the list are tabs for the next 5 days. It’s not exactly mobile-optimised, but it’s perfectly usable on my iPhone.

The site also does plenty of other things, all of which are pretty much useless. I’m not here to watch a trailer. I don’t buy tickets online as it takes a minute to buy at the cinema and it’s never sold out or full. Where do “user ratings” even come from and why do I care? Why would anyone go to this site to find films to watch by genre? Why would I register on a site like this? What’s the point of literally any of the rest of the site’s navigation? Anyway, that’s by the by. It does its central job well, showing me every movie that’s showing on that day, at what times, on one page.

So the other day I used my bookmark again and noticed immediately they’ve redesigned. It looks new and expensive. It adapts to my mobile device. And it’s now utterly useless, particularly on mobile. Since 99% of the time I use this site it’s on my iPhone, that’s what I’ll use for the rest of this review.

This is what you now see at the same URL:

[Screenshot of the new Myvue site on an iPhone]

The entire first screen is taken up by a film poster, which turns out to be a slideshow. Carousels are annoying enough, but this one makes it extremely difficult to know what page I’m on, because judging by what I see on the screen, I’m on a page for Sausage Party.

Just pause to consider how pointless this slideshow is (whilst adding who knows how much to the download time). It’s a sequence of movies showing at this cinema. Which is… exactly what the list below it is. Except this is a slideshow, and that’s a vertical list. Someone must have insisted on a slideshow.

Scrolling past this annoyance you get to the vertical list of movies showing that day. The posters are now so large that only 2 fit on screen at once (even on desktop!), yet they’ve removed the short description, leaving only the title and… what’s this? “Get times & tickets”? Why don’t you just show the times like you used to? So now I have to navigate to get the times for every movie I’m interested in?

[Update 14 Sep 2016: MyVue have added showtimes back in on the listing page! I wish they would also show the film’s running time as they used to, though.]

So I click “Get times & tickets” and… WHAT?! Another page for the movie I just clicked on, with an enormous backdrop image but no useful information on it, and another big “GET TIMES & TICKETS” button! So I click that, and a panel slides laboriously in from the right, displays a “working” spinner for all of 7 seconds, before finally showing me the times. Wow, it really worked hard to show me some text-based information. There’s no caching, by the way. Next time I request showtimes it’ll take another 7 seconds.

[Screenshot of the new Myvue site on an iPhone: product page]

Now I want to see what times other movies are showing, so I go Back. Back to the useless screen with the backdrop (let’s call it the Product screen). So I go Back again. Whoops, here comes the sliding panel with showtimes again. Clicking Back a third time is the charm. (Although it’s hard at first to tell I’m back on the listing, because an unrelated movie – the slideshow at the top – is filling the screen.)

The above buggy behaviour is actually the best-case scenario. If you clicked the X in the corner of the sliding showtimes panel instead of Back, you’d find yourself back at the Product screen with no escape. Clicking Back again would restore the showtimes panel, and so on, trapping you in an endless loop.

The bottom line is I’m removing this bookmark from my phone, as it is now useless. A Google search for “what’s showing at vue fulham” gives me the information I want.

What went wrong here?

[Screenshot of the new Myvue site product page, on desktop]

The product screen on desktop includes showtimes for today, which requires an extra click on mobile.

Firstly, despite the mobile-optimised layout, it’s obvious that the site was designed and built with a desktop or widescreen display in mind. It looks like the designers wanted something resembling today’s media centre interfaces, like Plex or Apple TV. The enormous posters, backdrops and spacious page layouts are typical of a “lean-back” design. Also, the desktop version includes stuff that’s missing on mobile – the Product screen even has screening times for today, saving one click. But ask yourself: is this site anywhere in the same category as these media centre apps? Where are people likely to be when checking what’s showing at their cinema that evening? How quickly do they want this information? Mobile should have been treated as at least equally important.

Media centre interfaces also necessarily involve deep levels of navigation, a handicap born of lack of space on the screen and a remote-control interface. On browsers it’s easier to scroll and click on targets, and if you can avoid deeper levels of navigation, you do so.

[Screenshot of the new Myvue Quick Book interface on an iPhone]

But secondly, it’s clear that the designers had a very different idea of the primary user journey from me. You can see this clearly in the super-prominent “Quick Book” widget. On a desktop you can at least see what the widget does, but on mobile it’s entirely mysterious what “Quick Book” will do. When invoked, though, it’s clear that the designers consider the website’s primary purpose to be buying tickets online, and that users don’t care so much about where or when a film is showing, as long as it’s the one movie they want. (The widget does not default the location to the currently selected cinema, and does not default the date to today.)

Admittedly I don’t know how typical I am of Vue cinemagoers, but I don’t buy tickets online, I’m 99% certain to go to my nearest Vue rather than somewhere else, and there may be more than one movie I’m interested in seeing. My decision ultimately depends on what’s the most convenient time within the next 5 days. With the “Quick Book” widget, I’d have to use 3 dropdowns (which should be the UI of last resort) – 7 clicks – before even being able to see which times it’s showing for that day, which may well rule it out.

The damage

I used to be able to see what’s showing today at my local cinema, and when, with a single tap on my phone. Two taps if I wanted to check another day. Now, to check the times for a movie requires 3 taps, with loading time between each. Checking the times for another movie adds another 5 taps. Checking a different day… you get the picture. This redesign has rendered the site unusable, for me, and I would guess a large proportion of its previous users.

 

Going all-in on Flexbox

On a recent project we finally got to use flexbox extensively for page layout. In the interests of increasing general knowledge about flexbox (including mine), I’ll explain a number of layouts that lean heavily on it, organised in 3 sections:

  1. Main page layout (for a single-page JavaScript application)
  2. Fluid product grid and accordion-like summary box
  3. Product “cards” and “slabs”

Continue reading

Transforming a business platform

The-Key-Transform-CMS

As you may have seen in our previous blog post – Evolving The Key: insights from user research – after a year in design and development we recently helped The Key Support relaunch The Key for School Leaders and School Governors. This post looks at the technology selections for the refresh of The Key’s content management platform and why certain elements were chosen.

In the nine years since launching, The Key has grown to support almost half of the schools in England, with an amazing 75,000 registered school leaders and 17,000 registered governors having access to 5,000 original resources every month. It is now one of the most trusted sources of information in the education sector.

Selecting a platform

The sustained growth of The Key in both size and breadth meant there was a real need to update the underlying platform. The new content management system (CMS) needed to be efficient at managing user subscriptions, making the right content available to the right users (The Key has 7 different classes of user), as well as being ready for any future expansion plans.

The platform for the past 9 years has been Plone, an open source enterprise content management system first released 15 years ago. In 2007 – when we built the first version of The Key – Plone was the ideal choice, but as the business requirements have expanded and been refined over the years we felt it was a good time to revisit that selection when we were presented with the opportunity to completely refresh both sites.

As The Key has grown in size, so has the variety of content they are displaying on the site. As the breadth and types of this content have developed, The Key have struggled with the restrictions created by the traditional template-driven nature of Plone. This prompted us to consider more flexible CMS options.

The solution? A shift from Plone to Wagtail.


We were already pretty impressed with Wagtail, having used it on a couple of smaller projects. Like Plone it’s an open source CMS written in Python, but Wagtail is built on Django, our preferred web framework, giving us all the advantages that Django brings. We wanted to make sure that the new platform would stand the test of time as well as the previous Plone solution had, so we ran a careful evaluation process between a group of Django-based solutions – including Wagtail, Mezzanine, Django CMS and a bespoke pure-Django approach – to see which would best meet The Key’s requirements. We’re pleased to say that Wagtail came out the clear winner!

There are a few reasons we were particularly impressed with Wagtail for an application of this size and scale…

  • It is highly extensible, meaning that we could push the user model very hard and accommodate the intricacies of The Key’s user base
  • There’s an extremely high quality ‘out of the box’ admin system, meaning that we could hit the goal of improving the editor experience without huge amounts of bespoke development
  • Wagtail supports the notion of page-centric content management (through its StreamFields), which allowed us to build much richer pages than a traditional template-driven CMS – see the sketch after this list
  • There are powerful versioning tools built into the framework, which give The Key the level of control they need when managing changes to sensitive content
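For a flavour of what a StreamField looks like in practice, here is a minimal sketch of a page model. The field and block names are illustrative rather than The Key’s actual models, and the module paths are those of the Wagtail 1.x series current at the time of writing:

```python
from wagtail.wagtailcore.models import Page
from wagtail.wagtailcore.fields import StreamField
from wagtail.wagtailcore import blocks
from wagtail.wagtailimages.blocks import ImageChooserBlock
from wagtail.wagtailadmin.edit_handlers import StreamFieldPanel

class ArticlePage(Page):
    # Editors assemble the body from these blocks in any order and quantity,
    # rather than filling fixed slots in a template.
    body = StreamField([
        ('heading', blocks.CharBlock(classname='full title')),
        ('paragraph', blocks.RichTextBlock()),
        ('image', ImageChooserBlock()),
    ])

    content_panels = Page.content_panels + [
        StreamFieldPanel('body'),
    ]
```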

These features of Wagtail aligned beautifully with The Key’s requirements, allowing us to focus on delivering the features that they really needed.

Wagtail is a new and exciting open source platform which is constantly growing with new features and contributions. We were really looking forward to being involved and contributing some elements of our own.

Making the move…

One of the first tasks to complete as part of the move was to export the data out of Plone and into Wagtail. This involved the careful migration of over 30,000 pages across two websites, complete with full page history, allowing us to preserve all of The Key’s valuable content and metadata.
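Our real migration tooling is more involved than this, but the heart of the approach can be sketched as follows. The names are illustrative (ArticlePage is the hypothetical model from the sketch above), and `exported` is assumed to be a single document dumped out of Plone as a dict with its versions ordered oldest-first:

```python
from myapp.models import ArticlePage  # the hypothetical model sketched earlier

def import_document(parent, exported):
    """Create a Wagtail page under `parent` from an exported Plone document,
    replaying each historical version as a Wagtail revision."""
    page = ArticlePage(title=exported['versions'][0]['title'],
                       slug=exported['slug'])
    parent.add_child(instance=page)        # treebeard API used by Wagtail
    for version in exported['versions']:   # oldest first
        page.title = version['title']
        page.save_revision()               # one Wagtail revision per version
    page.get_latest_revision().publish()
```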

The goals of this project were manifold for The Key:

  • Improve the member experience, making it easier to manage a school’s membership
  • Improve members’ ability to self-serve, improving their experience and reducing the workload of the team as the business grows
  • Improve the quality and measurability of online marketing activities
  • Improve the quality and robustness of reporting tools

Making the move from Plone to Wagtail held so many benefits for The Key that we couldn’t write about them all, but we have summarised our favourites:

  • Improved user acquisition journey
  • Improved signposting of the huge variety of content on the site
  • It’s a long-term solution: Wagtail can expand and grow alongside The Key
  • Flexible modular home page

Another important task was to ensure that any user behaviour tracking was successfully migrated over to Wagtail. The Key harness their large database of users to track and record vital information which is then translated into leading insights, ensuring The Key remain at the forefront of trends and industry changes.

Through our longstanding relationship with The Key we understand how valuable this data is, so we built a custom integration with a data warehousing service called Keen.io. This service intelligently stores the data, allowing The Key to collate, store and build their own queries and analysis of user behaviour, so they can constantly refine and improve their content to better serve their members.
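As a rough sketch of what recording one of these events looks like with Keen’s Python client – the event collection and properties here are illustrative inventions, not The Key’s actual schema:

```python
import keen  # pip install keen

# Credentials would normally come from Django settings, not literals.
keen.project_id = 'YOUR_PROJECT_ID'
keen.write_key = 'YOUR_WRITE_KEY'

# Record a single user-behaviour event. Keen stores the JSON as-is, so
# queries and funnels can later be built over any of these properties.
keen.add_event('article_views', {
    'user_id': 1234,
    'membership_type': 'school_leader',
    'article_slug': 'managing-staff-absence',
})
```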

Monitoring performance

To ensure the stability of the complex infrastructure that supports a project of this scale we installed New Relic – a real-time software analytics program. New Relic provides deep performance analysis for every part of The Key’s platform, enabling us to make faster decisions, monitor interactions, quickly pinpoint errors and achieve better business results for The Key.

What we’ve found working with Wagtail is that it’s so flexible, customisable, scalable and user-friendly. It’s working wonders for some of our other clients too. If you’re interested to know what moving to Wagtail could do for the performance of your site then get in touch – we won’t try and sell you something you don’t want or need!

Stay tuned

The next blog installment: how has The Key benefited from this update, a month after deployment?

In our next blog post about The Key we’ll be revisiting the site a month after deployment to find out how their staff members got on with the CMS change and what impact it has had on the business.

If you found this article interesting and are looking for an agency to help you with an upcoming project, please do contact us and find out how we can help you. Alternatively you can read about some more of our work and see how we have helped other companies achieve their goals.

When a feature is invoked more often accidentally than on purpose, it should be considered a bug

Back in 2014 I tweeted this:

I’ve been meaning to revisit that statement for a while now. The link above refers to the following misfeature afflicting Mac users with external displays:

When the mouse cursor touches the bottom of an external display, MacOS assumes you want the Dock to be there, and moves it there from the primary display, often covering up what you were trying to click on.

This has happened to me almost every day for several years now – never intentionally. I have looked into it thoroughly enough to know that it cannot be turned off without sacrificing all other multi-monitor features.

Our devices are full of such annoyances, born from designers’ attempts to be helpful. Sometimes they are just irritating, like the above. Sometimes they can be downright destructive.

Naked keyboard shortcuts

Keyboard shortcuts without modifier keys can be fantastic productivity enhancers, as any advanced user of Photoshop or Vim knows. But they are potentially incredibly dangerous, especially in applications with frequent text entry. Photoshop uses naked keyboard shortcuts (to coin a phrase) primarily to select tools or view modes. This may sometimes cause confusion for a novice (like when they accidentally enter Quick Mask mode with ‘Q’), but is rarely destructive.

[Screenshot of the Postbox Message menu]

Postbox, my email client, on the other hand, inexplicably uses naked keyboard shortcuts like ‘A’ to Archive and ‘V’ to move messages to other folders. What were they thinking? Email does involve frequent text entry. If you are rapidly touch-typing an email and the message composition window accidentally loses focus (e.g. due to the trackpad), there is no telling the damage you can do. You may discover (as I have, more than once) that messages you know you received have disappeared – sometimes only days later, without knowing what happened.

Any application that uses naked keyboard shortcuts should avoid using them for actions, as the application may come to foreground unintentionally. It’s safest to use them to select modes only.

Apple Pay

Here’s another example. Ever since Apple Pay came to iOS, I’ve seen this almost every day:

[Screenshot of an iPhone showing the Apple Pay interface]

How often do I actually use Apple Pay? About once a month, currently. Every time this screen appears unintentionally, I lose a few seconds – often missing the critical moment if my intention was to take a photo. (It is invoked by double-pressing the hopelessly overloaded Home button. A too-long press invokes Siri, which I also do unintentionally about 1 in 3 times.)

Gestures

After Apple announced yet more gestures in iOS 10 at WWDC last week, @Pinboard quipped

On touchscreen devices, gestures are a powerful and often indispensable part of the UI toolkit. But they are invisible, and easy to invoke accidentally. As I wrote in my recent criticism of the direction Apple’s design is taking, whilst some gestures truly become second nature,

[…] mostly they are an over-hyped disaster area making our devices seem not fully under our control, constantly invoked by accident and causing us to make mistakes, with no reliable way of discerning what gestures are available or what they’ll do.

Since I use an iPhone where the top-left corner is no longer reachable one-handed, I rely on the right-swipe gesture to go back more and more often. Unfortunately, this frequently has unintended effects, whether because the gesture didn’t start at the edge, or wasn’t perfectly horizontal, or because the app or website developer intended something different with the gesture. And with every new version of OS X on a MacBook, the trackpad and Magic Mouse usually have more surprises in store for the unwary. I’m sure voice interfaces will yield plenty more examples over time.

Accidental activation – a necessary evil?

In my Apple design critique, I lamented the fact that it is very difficult to make gestures discoverable, as they are inherently invisible – contributing to that sense of “simplicity” which is a near-religion nowadays. You can introduce gestures during on-boarding, and hope users remember them, but more likely they will be quickly forgotten and we all know no-one will use a “help” page.

So you could argue that accidental invocation is the price we may have to pay for discoverability. Even though I rarely use Apple Pay, I am confident I know how to do so – a consequence of its annoying habit of popping up all the time.

With gestures, accidental activation may be critical to their discoverability, and if well implemented need not be irritating. For example, many gestures have a “peek” stage during which it is possible to reverse the action. Unfortunately, today’s touchscreen devices no longer have an obvious, reliable Undo function, one of the many failings highlighted by Tognazzini and Norman.

What are designers to do?

So if you design a helpful, time-saving feature that risks being invoked by accident,

  • consider the value of the feature (to the user)
  • consider the risk of the user invoking it unintentionally
  • consider the cost (to the user) of that happening

Is the value of the feature or shortcut worth the cost of it sometimes annoying users? How big is the cost to users? Wasting a second or two? Or possible data loss? Is the user even aware what happened? Is it possible to Undo? What is the risk, or likelihood, of this happening? What proportion of users does it affect, and how frequently? What proportion of usages are unintentional?

Failing any one of these may be enough to consider the feature a bug. Or you could fail two but the positive may still outweigh the negative. It depends on the severity:

[Venn diagram with 3 intersecting circles: Low value, High risk, and High user cost]

Can a feature be activated unintentionally? Is it worth the risk? The Venn diagram of inadvisability.

So designers should firstly try to ensure unintentional feature activation is as unlikely as possible – preferably impossible. But if it happens, the user should be aware of what happened, and the cost to the user should be low or at least easy to recover from. User testing will be your best hope of spotting problems early. Beta testing and dogfooding, run over a longer time period, are great at finding those problems that may have low frequency but high cost. Application developers may also be able to collect stats on feature usage, and determine automatically if a feature is often invoked but then not used, or immediately undone, which may highlight problems.
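Here is a sketch of that last heuristic. The log format (two sorted lists of timestamps per feature) and the five-second window are invented for illustration:

```python
def accidental_ratio(invocations, uses, window=5.0):
    """Fraction of feature invocations not followed by actual use within
    `window` seconds. Both arguments are sorted lists of timestamps."""
    intentional = 0
    j = 0
    for t in invocations:
        while j < len(uses) and uses[j] < t:
            j += 1  # skip uses that predate this invocation
        if j < len(uses) and uses[j] - t <= window:
            intentional += 1
    return 1.0 - intentional / len(invocations)

# e.g. Apple Pay invoked four times, actually used once:
print(accidental_ratio([0.0, 10.0, 20.0, 30.0], [11.0]))  # 0.75
```

A ratio above 0.5 fails the rule of thumb that follows.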

Or stick with a simple rule of thumb: if a feature is activated by accident more often than on purpose, it’s not a feature but a bug. Feel free to share more examples!