A blog post about estimating.

First of all, a provocative but sweeping statement about the subject to kick us off: If your agency won’t talk to you about how they estimate projects then they’re either liars or fools.

You’ll have heard of Zeno’s Paradox. The one where a journey can theoretically never be completed because in order to travel the full distance you must first go halfway. And then once you’re halfway, you must then go half the remaining distance and so on.

The paradox is that in order to do something as simple as walking across a room, one must complete an infinitely regressing set of tasks. And yet, without wishing to boast, I’ve crossed the room twice already today and I managed it just fine.

Software estimation is a bit like that. If you analyse it closely you’ll see the tasks you have to complete multiply infinitely until even the simplest thing looks impossible and the budget is smashed to smithereens. And yet, as a company, we’ve got a track record of delivering on time and to budget that goes back years.

The various methods that we use are described in the episode of our podcast that this post supports (why not go and check it out?) and we won’t go into detail here, suffice it to say that the process is always time-consuming and rarely problem-free.

So it’s hard. And prone to error. And time-consuming even to do badly. So why do it?

The obvious answer – so you know how much to charge – is not actually all that applicable. More and more of the work we do on agile projects is charged on a time and materials basis. Additionally, there are a hundred good reasons why an agency might want to charge a price that wasn’t just literally [amount of time estimated] multiplied by [hourly rate].

No, the real reason that we put so much effort into estimation is that estimation is a great disinfectant. Everyone who works in this industry has a story about a project that went from perfectly fine to completely awful in a matter of seconds. Estimation helps us expose and resolve the factors that cause this horror: hidden complexity, differences of assumption, Just Plain Goofs, and so on.

It’s important to note, though, that even a carefully produced estimate can still be wrong, so the other key tools an agency needs are mature processes and procedures. You need to be able to communicate effectively how the estimate failed, assess what the impact of the failure will be on the broader project and, vitally, put all this information in a place where it can’t be forgotten or ignored.

This last step effectively gives the organisation an institutional memory that lasts longer than ten working days, and it’s the vital step that ensures that, by the end of the project, stakeholders can remember there was a problem, see that it was resolved, and understand how it affected timelines overall. Mistakes are always going to be made; the key is to ensure you’re always making exciting new ones rather than repeating the old ones.

All of the above is discussed to some extent in our Estimating podcast. Andy Theyers, Richard Newton and I spend around half an hour discussing the subject and, honestly, it’s quite interesting. I urge you to check it out.

Video: Serverless – The Real Sharing Economy

Serverless is a new application design paradigm, typified by services like AWS Lambda, Azure Functions and IBM OpenWhisk. It is particularly well suited to mobile software and single-page application frameworks such as React.
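
To make that concrete, here’s a minimal sketch of what a serverless function looks like, assuming AWS Lambda’s Node.js handler convention (the event shape and greeting are purely illustrative):

exports.handler = (event, context, callback) => {
  // Invoked on demand by the platform; there is no server to provision or manage
  const name = (event.queryStringParameters || {}).name || 'world';
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}` })
  });
};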

In this video, Doug Winter talks at Digital North in Manchester about what Serverless is, where it comes from, why you would want to use it, how the economics function and how you can get started.

You can see more Isotoma videos on Vimeo.

RxJS: An object lesson in terrible good software

We recently used RxJS on a large, complex asynchronous project integrated with a big third-party distributed system. We now know more about it than, frankly, anyone would ever want to.

While we loved the approach, we hated the software itself. The reasons for this are a great lesson in how not to do software.

Our biggest issue by far with RxJS is that there are two actively developed, apparently stable versions living at two different URLs. RxJS 4 is the top result on google for RxJS, lives at https://github.com/Reactive-Extensions/RxJS, and only briefly mentions that there is an unstable version 5 at a different address. RxJS 5 lives at https://github.com/ReactiveX/rxjs, has a completely different API from version 4 and far worse (“WIP”) documentation, doesn’t state its level of stability, and is written in typescript, so users will need to learn some typescript before they can understand the codebase.

Which version should new adopters use? I have absolutely no idea. Either way, when you google for advice and documentation, you can be fairly certain that the results you get will be for a version you’re not using.

RxJS goes to great lengths to swallow your errors. We’re pretty united here in thinking that it definitely should not. If an observable fires its “error” callback, it’s reasonable that the emitted error should be picked up by the nearest catch operator. Sadly, though, RxJS also wraps every function that you pass to it in a try/catch block, and any exception raised by those functions will also be shunted to the nearest catch operator. Promises do this too, and many have complained bitterly about it already.
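
To illustrate, here’s a minimal sketch (in the dot-chained RxJS 5 style): an exception thrown inside a function passed to map never surfaces as a normal exception, but is caught and delivered to the subscriber’s error callback instead.

Observable.of(1, 2, 3)
  .map(x => {
    if (x === 2) throw new Error('oops'); // a plain programming error...
    return x * 2;
  })
  .subscribe({
    next: val => console.log(`Next: ${val}`),
    error: err => console.error(`Caught: ${err.message}`) // ...ends up here
  });
// Next: 2
// Caught: oops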

What this means in practice is that finding the source of an error is extremely difficult. RxJS tries to capture the original stack trace and make it available in the catch block, but it often fails, resulting in a failed observable and an “undefined” error. When my code breaks I’d like it to break where it broke, not in a completely different place. If I expect an error to occur, I can catch it as I would anywhere else in the codebase and emit an observable error of the form my catch block expects; that way my catch blocks don’t all have to accommodate both expected failure modes and any arbitrary exception. Days and days of development were lost to bisecting a long pile of dot-chained functions in order to isolate the one that raised the (usually stupidly trivial) error.

At the very least, it’d be nice to have the choice to use an unsafe observable instead. For this reason alone we are unlikely to use RxJS again.

We picked RxJS 5 as it’s been around for a long time now and seems to be maintained by Netflix, which is reassuring.

The documentation could be far better. It is incomplete, with some methods not documented at all, and what does exist is presented as a mystery-meat web application that can’t be searched like normal technical documentation. Code examples rarely use real-world use-cases, so it’s tough to see the utility of many of the Observable methods. Most of the gotchas that caught out all of our developers weren’t alluded to at any point in any of the documentation (in the end, a youtube talk by the lead developer saved the day, containing the first decent explanation of the error-handling mechanism that I’d seen or read). Worst of all, the only documentation that deals with solving actual problems with RxJS (higher-order observables) is in the form of videos on the paywalled egghead.io. I can’t imagine a more effective way to put off new adopters than requiring them to pay $200 just to appreciate how the library is commonly used (though, to be clear, I am a fan of egghead).

Summed up best by this thread, RxJS refuses to accept its heritage and admit that it’s a functional library. Within the javascript community there exists a huge functional programming subcommunity that has managed to put together a widely-adopted specification (fantasy-land) for writing functional javascript libraries that can interoperate with one another. RxJS chooses not to work to this specification, and a number of design decisions, such as introspecting certain contained values and swallowing them, drastically reduce the ease with which RxJS can be used in functional javascript codebases.

RxJS makes the same mistake lodash made a number of years ago, regularly opting for variadic arguments to its methods rather than taking arrays (the worst example is merge). Lodash eventually learned its lesson; I hope RxJS does too.
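
To illustrate the point (a sketch using RxJS 5’s static merge; the stream names are invented): variadic arguments are fine when the set of streams is fixed at write time, but merging a collection that is only known at runtime means spreading the array yourself.

// Fine when the set of streams is known in advance:
Observable.merge(clicks$, keys$, touches$).subscribe(...);

// With an array of observables built at runtime, you have to spread it:
Observable.merge(...arrayOfStreams).subscribe(...);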

Taming the Async Beast with FRP and RxJS

The Problem

We’ve recently been working on an in-browser vision mixer for the BBC (previous blog posts here, here, here, and here). Live vision mixing involves keeping track of a large number of interdependent data streams. Our application receives timing data for video tapes and live video streams via webrtc data channels and websocket connections, and we send video and audio authoring decisions over other websockets to the live rendering backend.

Many of the data streams we’re handling are interdependent: we don’t want to send an authoring decision telling the renderer to cut to a video tape until that tape is loaded and ready to play, so we need to wait for the tape to be ready first; and if the authoring websocket has closed, we’ll need to reconnect to it and then retry sending that authoring decision.

Orchestrating interdependent asynchronous data streams is a fundamentally complex problem.

Promises are one popular solution for composing asynchronous operations and safely transforming the results, but they have a number of limitations. The primary issue is that they cannot be cancelled, so we would need to handle teardown separately somehow. We could use the excellent fluture or Task Future libraries instead, both of which support cancellation (and are lazy, chainable and fantasy-land compliant), but futures and promises handle one single future value (or error), not a stream of many values. The team working on this project are fans of futures (less so of promises) and were aiming to write the majority of the codebase in a functional style using folktale and ramda (and react-redux), so we wanted a functional, composable way to handle ongoing streams of data that could sit comfortably within the rest of the codebase.

A Solution

After some debate, we decided to use FRP (functional reactive programming) powered by the observable pattern. Having used RxJS (with redux-observable) for smaller projects in the past, we were confident that it could be an elegant solution to our problem. You can find out more about RxJS here and here but, in short, it’s a library that allows subscribers to listen to and transform the output of a data stream as per the observer pattern, and allows the observable (the thing subscribed to) to “complete” its stream when it runs out of data (or whatever), similar to an iterator from the iterator pattern. Observables also allow their subscribers to terminate them at any point, and typically observables will encapsulate teardown logic related to their data source – a websocket, long-poll, webrtc data channel, or similar.
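
As a minimal sketch of that encapsulation, using RxJS 5’s Observable.create (the websocket URL is invented for illustration):

const messages$ = Observable.create(observer => {
  const ws = new WebSocket('wss://example.com/authoring');
  ws.onmessage = msg => observer.next(msg.data);
  ws.onerror = err => observer.error(err);
  ws.onclose = () => observer.complete();
  // Teardown lives with the data source: this runs when the subscriber
  // unsubscribes (or when the stream errors or completes)
  return () => ws.close();
});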

RxJS implements the observer pattern in a functional way that allows developers to compose together observables, just as they’d compose functions or types. RxJS has its roots in functional reactive programming and leverages the power of monadic composition to chain together streams while also ensuring that teardown logic is preserved and handled as you’d expect.

Why FRP and Observables?

The elegance and power of observables is much more easily demonstrated than explained in a wordy paragraph. I’ll run through the basics and let your imagination think through the potential of it all.

A simple RxJS observable looks like this:

Observable.of(1, 2, 3)

It can be subscribed to as follows:

Observable.of(1, 2, 3).subscribe({
  next: val => console.log(`Next: ${val}`),
  error: err => console.error(err),
  complete: () => console.log('Completed!')
});

Which would emit the following to the console:

Next: 1
Next: 2
Next: 3
Completed!

We can also transform the data just as we’d transform values in an array:

Observable.of(1, 2, 3).map(x => x * 2).filter(x => x !== 4).subscribe(...)
2
6
Completed!

Observables can also be asynchronous:

Observable.interval(1000).subscribe(...)
0 [a second passes]
1 [a second passes]
2 [a second passes]
...

Observables can represent event streams:

Observable.fromEvent(window, 'mousemove').subscribe(...)
[Event Object]
[Event Object]
[Event Object]

Which can also be transformed:

Observable.fromEvent(window, 'mousemove')
  .map(ev => [ev.clientX, ev.clientY])
  .subscribe(...)
[211, 120]
[214, 128]
[218, 139]
...

We can cancel a subscription, which will clean up the event listener:

const subscription = Observable.fromEvent(window, 'mousemove')
  .map(ev => [ev.clientX, ev.clientY])
  .subscribe(...)

subscription.unsubscribe();

Or we can avoid manual unsubscription altogether by completing the stream in a dot-chained, functional way:

Observable.of(1, 2, 3)
  .take(2)  // After receiving two values, complete the observable early
  .subscribe(...)
1
2
Completed!

Observable.fromEvent(window, 'mousemove')
  .map(ev => [ev.clientX, ev.clientY])
   // Stop emitting when the user clicks
  .takeUntil(Observable.fromEvent(window, 'click'))
  .subscribe(...)

Note that those last examples left no variables lying around. They are entirely self-contained bits of functionality that clean up after themselves.

Many common asynchronous stream use-cases are catered for natively, in such a way that the “operators” (the observable methods e.g. “throttle”, “map”, “delay”, “filter”) take care of all of the awkward state required to track emitted values over time.

Observable.fromEvent(window, 'mousemove')
  .map(...)
  .throttleTime(1000) // only allow one event through per second
  .subscribe(...);

… and that’s barely scratching the surface.
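
To connect this back to the problem we started with, here’s a hypothetical sketch (the stream name and the sendAuthoringDecision function are invented, the latter assumed to return an observable of the acknowledgement) of waiting for a tape to become ready before sending an authoring decision, retrying if the send fails:

tapeStatus$
  .filter(status => status === 'ready') // wait until the tape can play
  .take(1)                              // we only care about the first 'ready'
  .mergeMap(() => sendAuthoringDecision({ type: 'cut', source: 'tape1' }))
  .retry(3)                             // on failure, resubscribe and try again
  .subscribe(...);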

The Benefits

Many of the benefits of RxJS are the benefits of functional programming: the avoidance of state, and the readability and testability of short, pure functions. By encapsulating the side-effects associated with your application in a generic, composable way, developers can maximise the reusability of the asynchronous logic in their codebase.

By seeing the application as a series of data transformations between the external application interfaces, we can describe those transformations by composing short, pure functions and lazily applying data to them as it is emitted in real-time.

Messy, temporary, imperative variables are replaced by functional closures that give observables access to previously emitted values in a localised way, limiting the amount of application logic and state a developer must hold in their head at any given time.

Did It Work?

Sort of. We spent a lot of our time in a state of low-level fury at RxJS – so much so that we’ve written up a long list of complaints in another post.

There are some good bits though:

FRP and the observable pattern are both transformative approaches to writing complex asynchronous javascript code, producing fewer bugs and drastically improving the reusability of our codebase.

RxJS operators can encapsulate extremely complex asynchronous operations and elegantly describe dependencies in a terse, declarative way that leaves no state lying around.

In multiple standups throughout the project we enthusiastically raved about how these operators had turned a fundamentally complex part of our implementation into a two-line solution. Sure, those two lines usually took a long time to craft and get right, but once working, it’s difficult to write many bugs in just two lines of code (compared to the hundreds of lines of imperative code we’d otherwise have had to write if we’d rolled our own).

That said, RxJS is a functional approach to writing code, so developers new to the paradigm should expect to incur a penalty as they move from an imperative, object-oriented approach to system design to a functional, data-flow-driven one. There is also a very steep learning curve to climb before feeling the benefits of RxJS, as developers familiarise themselves with the toolbox and its idiosyncrasies.

Would We Use It Again?

Despite the truly epic list of shortcomings, I would still recommend an FRP approach to complex async javascript projects. In future we’ll be trying out most.js to see if it solves the myriad problems we found with RxJS. If it doesn’t, I’d consider implementing an improved Observable that keeps its hands off my errors.

It’s also worth mentioning that we used RxJS with react-redux to handle all redux side-effects. We used redux-observable to achieve this and it was terrific. We’ll undoubtedly be using redux-observable again.
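
For the curious: an “epic” in redux-observable is just a function from a stream of actions to a stream of actions. A minimal sketch (with invented action types) looks like this:

// Every PING action dispatched to the store produces a PONG a second later
const pingEpic = action$ =>
  action$.ofType('PING')
    .delay(1000)
    .mapTo({ type: 'PONG' });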


The Europas 2017

June 13th was The Europas, a conference and awards ceremony for the European start up scene. Having watched the event from afar for a few years, we decided to take the plunge and sponsor this year. We’ve been deeply involved in the start up community right from our inception: from attending the first Future of Web Apps back in 2005, through helping some of our start up customers achieve successful funding rounds and eventual sale, to setting up one or two of our own (like Forkd). All in all it felt like the right kind of event for us to get involved in.

It was my first visit to the Olympic Park. My first thoughts were of how vast it is. I got off the train at Stratford International on a beautiful morning and decided to walk to Here East, which I could see in the distance…

Yeah. Perhaps I should have taken the shuttle bus that was on offer. Or taken the tube to Hackney Wick as recommended. Still, it was good to explore the park, even if I did arrive a little later than planned.

Sadly I wasn’t alone in arriving a little late. Because The Europas caters to a pan-European audience and the main event was in the evening, many attendees had chosen to travel on the day, meaning that the morning sessions were a little under-attended. This was a real shame, because the stand-out talk of the day, for those that saw it, was Azeem Azhar’s “Will ubiquitous AI lead to artisanal cheese for all?” The title might have been a mouthful (ahem) but the talk was fascinating and wonderfully delivered.

Following on from Azeem on the main stage was an equally positive session with Bess Mayhew of More United; her take on UK politics and how we might best affect it (and how she already is) was genuinely uplifting.

This was the first talk that touched on a theme that would run throughout the rest of the day: #fakenews. Clearly anyone involved in politics is going to be worrying about the fake news phenomenon, and while Bess touched on the subject during her session the next panel was all about it. I’m going to say more on the topic in another post, so I’ll leave this one there, except to say that we – the tech community – currently seem bereft of ideas as to how to address it.

While Azeem’s session was the highlight of the talks, the event had two non-talk stand-outs. Straight after the excellent lunch (a brief aside: the way lunch was delivered was very unusual and extremely efficient, a real plus for a conference) came Richard Browning, the Rocket Man.

By now the venue had pretty much filled up, so a huge crowd – earplugs in – watched him circle the courtyard outside the venue. It’s hard to describe to someone who hasn’t seen it up close just how impressive it is, with the heat and noise of the jets almost knocking you over. Quite how he actually manages to fly the thing I don’t know.

Doug’s breakout session with Roberta Lucca came straight after the Rocket Man’s flight, and we were obviously worried that no one would turn up given the excitement outside, but we had a good audience for an intimate and lively chat (and disagreement) about how best to get the most out of your development team, and when and whether to build your own team or outsource. More on that topic to come in both blog post and podcast form…

For us the afternoon ended with Gabrielle Aplin, who gave a great talk about how artists are the new start ups (reflecting what we used to say a decade ago, that start ups are the new artists; what goes around comes around, of course) before giving a performance to a slightly bemused crowd.

For me the highlights of the day were Azeem’s talk on AI, the rocket man, and a great breakout panel on privacy, but there were very few dud moments in a packed day.

I left via the canal and Hackney Wick. Far more picturesque, and a much shorter walk!

We can’t thank Mike, Petra and Dianne enough for setting the thing up and running it so smoothly, and for giving us the opportunity to sponsor. We’ll see you there again next year.


Video: Iomart on selecting Isotoma as their digital agency

HAPPI (Highly Available Provisioning and Procurement of Infrastructure) is a really exciting project that we’ve been working on for some time now. It empowers users to design and deploy their perfect hardware infrastructure, all at the click of a button.

A few weeks ago we were delighted to host Neil Christie, Managing Director of Iomart, who spoke to us about the idea for HAPPI and why working with Isotoma has been such a valuable partnership for their business.

You can watch HAPPI: Highly Available Provisioning and Procurement of Infrastructure on Vimeo.

Neil Christie: “We went to 3 or 4 digital agencies when we initially had the idea of HAPPI. It became very clear from speaking to those that we needed to find someone that really understood the vision and the goals that we were aiming for.”

“I think it was when you were finishing my sentences and asking questions that we hadn’t even considered that we really knew we’d found someone that understood exactly what we were going to achieve.”

Iomart have big plans for HAPPI, and we’re delighted to be helping them along the way. If you have an idea and need to turn it into reality, we’d love to hear from you.


Compositing and mixing video in the browser

This blog post is the 4th part of our ongoing series working with the BBC Research & Development team. If you’re new to this project, you should start at the beginning!

Like all vision mixers, SOMA (Single Operator Mixing Application) has a “preview” and “transmission” monitor. Preview is used to see how different inputs will appear when composed together – in our case, a video input, a “lower third” graphic such as a caption which fades in and out, and finally a “DOG” such as a channel or event identifier shown in the top corner throughout a broadcast.

When switching between video feeds SOMA offers a fast cut between inputs or a slower mix between the two. As and when edit decisions are made, the resulting output is shown in the transmission monitor.

The problem with software

However, one difference with SOMA is that all the composition and mixing is simulated. SOMA is used to build a set of edit decisions which can be replayed later by a broadcast-quality renderer. The transmission monitor is not just a view of the output after the effects have been applied, because the actual rendering of the edit decisions hasn’t happened yet; the app needs to provide an accurate simulation of what each edit decision will look like.

The task of building this required breaking down how output is composed – during a mix both the old and new input sources are visible, so six inputs are required.

VideoContext to the rescue

Enter VideoContext, a video scheduling and compositing library created by BBC R&D. This allowed us to represent each monitor as a graph of nodes, with video nodes playing each input into transition nodes allowing mix and opacity to be varied over time, and a compositing node to bring everything together, all implemented using WebGL to offload video processing to the GPU.
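
As an illustration of the graph style (a minimal sketch based on our reading of the VideoContext API; the sources and timings are invented):

const ctx = new VideoContext(document.getElementById('preview-canvas'));

// Two video inputs feeding a crossfade transition node
const tape = ctx.video('tape.mp4');
const live = ctx.video('live.mp4');
const crossfade = ctx.transition(VideoContext.DEFINITIONS.CROSSFADE);

tape.connect(crossfade);
live.connect(crossfade);
crossfade.connect(ctx.destination);

tape.start(0);
tape.stop(4);
live.start(2);
live.stop(8);

// Animate the 'mix' property from 0 to 1 between t=2s and t=4s
crossfade.transition(2, 4, 0.0, 1.0, 'mix');

ctx.play();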

The flexible nature of this library allowed us to plug in our own WebGL scripts to cut the lower third and DOG graphics out using chroma-keying (where a particular colour is declared to be transparent – normally green), and with a small patch to allow VideoContext to use streaming video we were off and going.

Devils in the details

The fiddly details of how edits work were as fiddly as expected: tracking the mix between two video inputs versus the opacity of two overlays appeared to be similar problems but required different solutions. The nature of the VideoContext graph meant we also had to keep track of which node was current rather than always connecting the current input to the same node. We put a lot of unit tests around this to ensure it works as it should now and in future.

By comparison, the seemingly tricky problem of what to do if a new edit decision is made while a mix is still in progress turned out to be just a case of swapping out the new input, to avoid the old input reappearing unexpectedly.

QA testing revealed a subtler problem: when switching to a new input, the video takes a few tens of milliseconds to start. Cutting immediately causes a distracting flicker as a couple of blank frames are rendered; waiting until the video is ready adds a slight delay, but this is significantly less distracting.

Later in the project a new requirement emerged to re-frame videos within the application, and the decision to use VideoContext paid off: we could simply add an effect node into the graph to crop and scale the video input before mixing.

And finally

VideoContext made the mixing and compositing operations a lot easier than they would have been otherwise. Towards the end we even added an image source (for paused VTs) using the new experimental Chrome feature captureStream, and that worked really well.

After making it all work, the obvious point of concern is performance, and overall it works pretty well. We needed half a dozen or so VideoContexts running at once, and this was effective on a powerful machine; many more and the computer really starts struggling.

Even a few years ago attempting this in the browser would have been madness, so it’s great to see so much progress in something so challenging, opening up a whole new range of software that can work in the browser!

Read part 5 of this project with BBC R&D where Developer Alex Holmes talks about Taming async with FRP and RxJS.

“What CMS?” is the wrong question

When I meet new customers, I often relate a story about CMS work.

It goes like this:

A decade or so ago, CMS projects used to make up the majority of Isotoma’s output.
These days the number is much lower. There are a few reasons for this, but the main one is that, back in the day, CMS work was hard. The projects were complex, expensive and fragile.
As an illustration of how much things have changed, the other day I built a website for my sister’s business using Squarespace. It took longer for me to upload photos to the gallery than it did to plan, build, populate and deploy the entire site.

This peppy anecdote disguises the fact that there is still complexity in CMS work; projects that involve content management can stymie organisations and set their digital roadmap back years – but almost all of this complexity resides outside of CMS platform choice.
This post looks into why that is.

“What CMS should I use?” is a boring question.

People still tend to start with the question “What platform are we going to use?” because, historically, it’s been a really important one. Back in the day, progress was slow and the costs of being wrong were astronomically high.

These days though, compared to some of the other decisions you need to make, platform choice is a relative doddle.

Why do I say this? Because the CMS market is commoditised, modern and highly competitive.

  • There are free/open source solutions that are as good as (or better than) anything you pay through the nose for
  • There’s a huge number of agencies who will compete with each other for your business, which keeps prices consistently low
  • The majority of features you could ever want from a content management system are now standardised and distributed across the marketplace. Anyone telling you otherwise is selling you something
  • As features increase, costs shrink. As costs shrink, the traditional organisational worries about sunk costs and ending up with the wrong product become less important

Indeed from a given list of modern, open source content management systems, you’d have to be going some to get a bad fit for 99% of organisations.

So having made a broad and eye-catching statement like that, let me now run through a brief list of actual interesting questions to ask about your CMS project.

1. What is my content strategy?

Back in the day, customers would ask agencies like ours “Will the section headings be editable by admins?” and we’d then go to quite a lot of trouble to make sure that the section headings were, in fact, editable by admins.

What agencies should have been doing instead was investigating why the section headings needed to be editable in the first place.

If you nail the content strategy – if you develop content and a structure that speaks to the requirements of your actual customers – then the need to make structural changes to your site should be, while not removed entirely, at least put on a slower and more predictable timetable.

2. Who is driving this project?

This is somewhat related to content strategy but deserves its own section. One of the reasons that a content strategy is needed is that the fundamental question “Who is the audience for this site?” can have multiple, arguably correct answers depending on who inside the organisation you ask.

And sometimes the reason that you have multiple answers to the question is because two (or more!) departments or individuals within the customer’s organisation have competing opinions about the fundamentals of the project: what it’s for, what the outcomes and priorities should be… etc.

Resolving these tensions can be difficult, even for members of the customer’s own team. For employees of an outside agency the difficulty increases exponentially, but worst of all is when an attempt is made to solve a problem like this with the top-down application of a technology choice. Drupal has many skills, but senior management negotiation is not one of them.

3. Who are the audiences for this new CMS?

Is this a site made to make the company look attractive to customers? Or is this a site that is designed to make an internal task easier for the admins of the site? Or both? None of these answers are wrong but failing to ask the question can result in tens of thousands of pounds being spent on something of little demonstrable value to the customer.

So in summary then…

Asking the above questions is a far better use of your time in the run up to starting a project than asking questions about Wagtail vs WordPress.

You can take this one step further and use the same approach when selecting an agency: when you first engage with them, do they talk about the platform and technology choice, or about how they’re going to help you increase your reach or implement a content strategy? Or how they’ll help effect change within your organisation?

We’re on the Government G-Cloud 9 Marketplace

Good news, everyone! Isotoma are pleased to announce that our services are now available to public sector bodies for procurement via the G-Cloud 9 portal.

This means that you can find Isotoma’s services on the Digital Marketplace including cloud hosting, software and support. The Digital Marketplace is the new online platform that all public sector organisations can use to find and buy UK government approved cloud-based services.

We already deliver our services to organisations around the world. With this new accreditation, Isotoma is ready to deliver our best-in-industry services to even more public sector bodies.

Here’s an outline of the Isotoma services available on the G-Cloud 9 Digital Marketplace. Don’t hesitate to get in touch if we can provide more info.


Rapid user research on an Agile project

Our timeline to build an in-browser vision mixer for BBC R&D (previously, previously) is extremely tight – just two months. UX and development run concurrently in Agile fashion (a subject for a future blog post), but design was largely done within the first month.

Too frequently for projects on such timescales there is pressure to omit user testing in the interest of expediency. One could say it’s just a prototype, and leave it until the first trials to see how it performs, and hopefully get a chance to work the learnings into a version 2. Or, since we have weekly show & tell sessions with project stakeholders, one could argue complacently that as long as they’re happy with what they’re seeing, the design’s on track.

Why test?

But the stakeholders represent our application’s target users only slightly better than we do ourselves – which is not very well, since they won’t be the ones using it. Furthermore, this project aims to broaden the range of potential operators, from what used to be the domain of highly experienced technicians to something that could be used by a relative novice within hours. So I wanted to feel confident that even people who aren’t familiar with the project would be able to use it – both experts and novices. I’m not experienced in this field at all, so I was making lots of guesses and assumptions, and I didn’t want to go too far before finding out they were wrong.

One of the best things about working at the BBC is the ingrained culture of user-centred design, so there was no surprise at the assumption that I’d be testing paper prototypes by the 2nd week. Our hosts were very helpful in finding participants within days – and with hundreds of BBC staff working at MediaCity there is no danger of using people with too much knowledge of the project, or of re-using participants. Last but not least, BBC R&D has a fully equipped usability lab, complete with two-way mirror and recording equipment. Overkill for my purposes – I would’ve managed with an ordinary office – but having the separate viewing room helped ensure that the entire team could observe the sessions without crowding my subject. I’m a great believer in getting everyone on the project team seeing other people interact with and talk about the application.

Paper prototypes

Paper prototypes are A3 printouts of the wireframes, each representing a state of the application. After giving a brief description of what the application is used for, I show the page representing the application’s initial state, and change the pages in response to user actions as if it were the screen. (Users point to what they would click.) At first, I ask task-based questions: “add a camera and an audio source”; “create a copy of Camera 2 that’s a close-up”; etc. As we linger on a screen, I’ll probe more about their understanding of the interface: “How would you change the keyboard shortcut for Camera 1?”; “What do you think Undo/Redo would do on this screen?”; “What would happen if you click that?”; and so on. It doesn’t matter that the wireframes are incomplete – when users try to go to parts of the application that haven’t been designed yet, I ask them to describe what they expect to see and be able to do there.

In all, I did paper prototype testing with 6 people in week 2, and with a further 3 people in week 3. (With qualitative testing, even very few participants tend to find the major issues.) In keeping with the agile nature of the project, there was no expectation of me producing a report of findings that everyone would read, although I do type up my notes in a shared document to help fix them in my memory. Rather, my learnings go straight into the design – I’m usually champing at the bit to make the changes that seem so obvious after seeing a person struggle, feeling really happy to have caught them so early on. Fortunately, user testing showed that the broad screen layout worked well – the main changes were to button labels, icon designs, and generally improved affordances.

Interactive prototypes

By week 4 my role had transitioned into front-end development, in which I’m responsible for creating static HTML mockups with the final design and CSS, which the developers use as reference markup for the React components. While this isn’t mainstream practice in our industry, I find it has numerous advantages, especially for an Agile project, as it enables me to leave the static, inexact medium of wireframes behind and refine the design and interaction directly within the browser. (I add some dynamic interactivity using jQuery, but this is throwaway code for demo purposes only.)

The other advantage of HTML mockups is that they afford us an opportunity to do interactive user testing using a web browser, well before the production application is stable enough to test. Paper prototyping is fine up to a point, but it has plenty of limitations – for example, you can’t scroll, there are no mouseover events, you can’t resize the screen, etc.

So by week 5 I was able to test nearly all parts of the application, in the browser, with 11 users. (This included two groups of 4, which worked better than I expected – one person manning the mouse and keyboard, but everyone in the group thinking out loud.) It was really good to see the difference that interactivity made, such as hover states, and seeing people actually trying to click or drag things rather than just saying what they’d do gave me an added level of confidence in my findings. Again, immediately afterwards I made several changes that I’m confident improve the application – removing a redundant button that never got clicked, adding labels to some icons, and strengthening a primary action by adding an icon, among others. Not to mention fixing numerous technical bugs that came up during testing. (I use Github comments to ensure developers are aware of any HTML changes to components at this stage.)

Never stop testing

Hopefully we’ll have time for another round of testing with the production application. This should give a more faithful representation of the vision mixing workflow, since in the mockups the application is always in the same state, using dummy content. With every test we can feel more confident – and our stakeholders can feel more confident – that what we’re building will meet its goals, and make users productive rather than frustrated. And on a personal level, I’m just relieved that we won’t be launching with any of the embarrassing gotchas that cropped up and got fixed during testing.

Read part 4 of this project working with BBC R&D where we talk about compositing and mixing video in the browser.