The importance of asking the right questions

The management of a project is one of those situations where, when it’s done right, it barely looks like it’s happening at all. Requirements are gathered, work is done, outcomes are achieved, all with a cheery smile on our faces and a song in our hearts. Effective project management is built on a foundation of thorough planning, open communication, and disciplined adherence to mutually agreed processes.


Life is imperfect, projects are imperfect, and people are imperfect. Uncertainty is ever-present, change is inevitable, and the Rumsfeldian triangle of “known-knowns, known-unknowns, and unknown-unknowns” threatens us at every turn.

Luckily, I arrived into project management already well versed in the flawed nature of existence. This has meant my project management journey has been less about overcoming existential crises and far more about how, despite the relentless yoke of disappointment, we can still ensure projects are completed on time, on budget, and with minimal loss of life.

In my experience, the strongest weapon we have against projects going awry is honesty combined with a healthy suppression of ego. Project management is usually seen as a problem of logistics and organisation, and I don’t doubt that this is a large part of it. However, my view is that managing the creation of complex digital products is, more than anything else, a problem of personal psychology.

What do I mean by this?

Our clients are experts in their own domains, whether it’s healthcare research or education funding or something else entirely. The first step in my being able to help them is to honestly explore what I don’t know about their domain, and work with them to fill in knowledge gaps that might otherwise lead to incorrect assumptions. In other words, I start the process by embracing my own ignorance and communicating that with our clients.

This approach is counter to a lot of day-to-day practices. If someone asks me a question, I generally think about what I do know, rather than what I don’t know. I am, after all, the product of an education system that typically awards points for regurgitating memorised facts over challenging received assumptions. I feel uncomfortable when I don’t know the answer to a question, because I have learned to associate this feeling with failure. When it comes to the sheer range of domains our clients cover, however, it is inevitable that I will bump up against the limits of my existing knowledge base on a regular basis.

It can feel risky to expose a lack of knowledge. Naturally I want a client to have confidence in me, and displaying a lack of domain-specific knowledge can feel counter to that goal. The biggest psychological hurdle to get over, then, is the acceptance that not knowing the answers at the beginning is a normal state to be in; embracing that it signifies opportunity rather than failure, and that the sooner we accept what we don’t know, the sooner we will be in a position to help our client achieve their goals. This is part of the reason we generally recommend a Discovery phase at the beginning of a project. It is during this period that we attack the Rumsfeldian triangle head on, embrace the things that we do not know, and build the foundation for the success of the project.

Encountering something new and unknown can be scary and intimidating.

This is ok.

In fact, this is more than ok – this is exactly what I love about managing projects at a company like Isotoma.

As a company we are experienced across a range of domains, and the knowledge does not all sit within one individual. We have a collective memory and level of expertise that allows us to meet the challenges that we face, an institutional memory of past problems and proven solutions. There will inevitably be times where we just don’t know enough about a domain to know the path forward instinctively, but by being honest about our limits and sharing a commitment to overcoming them, we grow as individuals and as a team.

It is tempting to think that we deliver great products because we always know the right thing to do, but I don’t think this is the case. In my view, we are good at what we do not because we always know the answers, but because we ask the right questions of our clients and of each other.

Photo by Josh Calabrese on Unsplash

Data as a public asset

I recently had the pleasure of attending a public consultation on the latest iteration of the Supplier Standard. For the uninitiated, the Supplier Standard is a set of goals published by GDS that it hopes both suppliers and public sector customers can sign up to when agreeing on projects.

You can read the current iteration of the standard on GOV.UK (it’s blessedly short), but the 6 headlines are:

  1. User needs first
  2. Data is a public asset
  3. Services built on open standards and reusable components
  4. Simple, clear, fast transactions
  5. Ongoing engagement
  6. Transparent contracting

Ideologically I am massively behind a lot of it. These goals go a long way to breaking the traditional software industry mindset of closed source software backed up with hefty licence fees and annual support and maintenance agreements. Projects meeting these standards will genuinely move public sector IT purchasing to a more open – and hugely more cost effective – model.

The conversations within the event were rightly confidential and I won’t report what anyone said, but I would like to make some public comments on point 2 – Data is a public asset.

The standard says:

Government service data is a public asset. It should be open and easily accessible to the public and third-party organisations. To help us keep improving services, suppliers should support the government’s need for free and open access to anonymised user and service data, including data in software that’s been specially built for government.

That first sentence is fantastic. Government service data is a public asset. What a statement of intent. OS Maps. Public asset. Postcode databases. Public asset. Bus and train timetables. Public asset. Meteorological data. Public asset. Air quality. Health outcomes. House prices. Labour force study. Baby names. Public assets one and all.

But can we talk about the rest, please?

It should be open and easily accessible to the public and third-party organisations.

What do we mean by open and easily accessible? The idea is a great one, with rich APIs and spreadsheet downloads of key data, but if we’re not careful all we’ll end up with is a bunch of poorly planned, hurriedly implemented and unmaintained APIs.

Open data is a living breathing thing. Summary downloads need to be curated by people who understand data and how it might be used. APIs need to be well planned, well implemented and well documented, and the documentation has to be updated in line with any changes to the software. Anything less than that fails to meet any sensible definition of open or easily accessible.

And if nothing else a poorly planned or implemented API is likely to be a security risk. Which leads me to my next point:

[…] free and open access to anonymised user and service data […]

Woah there for a second!

We all know how hard genuine anonymisation is. And we all know how often well intentioned service owners and developers leak information they genuinely believed was anonymised, only to have it pulled apart and have personal information exposed.

This goal, like the others, is genuine, laudable and well intentioned. As suppliers to publicly funded bodies we should absolutely be signed up to all of them. But, as GDS standards spread out to the wider public sector, let’s make sure that everyone understands the concept of proportionality. The £20k to £40k budget put aside for a vital application to support foster carers*, for example, is best spent on features that users need, not on APIs and anonymisation.

Proportional. Proportionality. I said them a lot throughout the consultation meeting. I hope they stick.

*I use this as an example only; Isotoma didn’t bid for that particular project, it’s just a great example of a vital application with a small budget generating exactly the kind of data that would fall under this requirement

[Photo by Dennis Kummer on Unsplash]

Spell check not working in LibreOffice?

Is the spell check in your copy of LibreOffice not working?

When I installed Ubuntu 17.10 and set my locale to English (UK) during the install LibreOffice correctly noted the locale, but didn’t pick up the English (UK) dictionaries, meaning that spell checking wasn’t working.

Luckily it’s an easy fix:

  • Download the latest dictionaries extension from the LibreOffice site (the UK English ones are available there)
  • Then in LibreOffice hit up Tools -> Extension Manager and click the ‘Add’ button
  • In the resulting file dialog box find the .oxt file that you just downloaded and double click it
  • Restart LibreOffice

Voilà! (if you type that in LibreOffice Writer it should now have a red squiggly line underneath!)

[Photo by Romain Vignes on Unsplash]

FP: a quiet revolution

Functional Programming (FP) is taking over the programming world, which is kind of weird since it has taken over the programming world at least once before. If you aren’t a developer then you may never even have heard of it. This post aims to explain what it is and why you might care about it even if you never program a computer – and how you might go about adopting it in your organisation.

Not too long ago, every graduate computer scientist would have spent some time doing FP, perhaps in a language called LISP. FP was considered a crucial grounding in CompSci and some FP texts gained a cult following. The legendary “wizard book” Structure and Interpretation of Computer Programs was the MIT Comp-101 textbook.

Famously a third of students dropped out in their first semester because they found this book too difficult.

I think this was as much down to how MIT taught the course as anything, but nevertheless functional programming (and the confusingly-brackety LISP) started getting a reputation for being too difficult for mere mortals.

Along with the reputation for impossibility, universities started getting a lot of pressure to turn out graduates with “useful skills”. This has always seemed a bit of a waste of universities’ time to me – they are very specifically not supposed to be useful in that sense. I’d much rather graduates got the most out of their limited time at university learning the things that only universities can provide, rather than programming which, bluntly, we can do a lot more effectively than academics.

Anyway, I digress.

The rise of Object Orientation

So it came to pass that universities decided to stop teaching academic languages and start teaching Java. Ten years ago I’d guess well over half of all university programming courses taught Java. Java is not a functional language and until recently had no functional features. It was unremittingly, unapologetically Object Oriented (OO). Contrary to Sun’s bombastic marketing when they released Java (and claimed it was a revolution in programming), Java as a language was about as mainstream and boring as it could be. The virtual machine (the JVM) was much more interesting, and I’ll come back to that later.

(OO is not in itself opposed to FP, and vice versa. Many languages – as we’ll see – are able to support both paradigms. However OO, particularly the way it was taught with Java, encourages a way of thinking about data flowing through a system, and this leads to data being copied and duplicated… which leads to all sorts of problems managing state. FP meanwhile tends to think in terms of transformation of data, and relies on the programming language to deal with the menial tasks of deciding when to copy data whilst doing so. When computers were slow this could cause significant bottlenecks, but computers these days are huge and fast and you can get more of them easily, so it doesn’t matter nearly as much – until it suddenly does of course. Anyway, I digress again.)
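To make that contrast concrete, here is a minimal Python sketch (the function and variable names are mine, purely illustrative): the imperative version mutates a shared list in place, while the functional version returns new data and leaves its input untouched.

```python
# Imperative style: mutate shared state in place.
def apply_discount_in_place(prices, discount):
    for i in range(len(prices)):
        prices[i] = prices[i] * (1 - discount)  # every holder of the list sees this change

# Functional style: transform the data, returning a new value.
def apply_discount(prices, discount):
    return [p * (1 - discount) for p in prices]  # the input list is left untouched

basket = [10.0, 20.0]
discounted = apply_discount(basket, 0.5)
# basket is still [10.0, 20.0]; discounted is [5.0, 10.0]
```

In a small script the difference is cosmetic; across a large system, the second style means far less shared mutable state to reason about.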

In the workplace meanwhile FP had never really taken off. The vast majority of software is written using imperative languages like ‘C’ or Object Oriented languages like… well, pretty much any language you’ve heard of. Perl, Python, Java, C#, C++ – Object Orientation had taken over the world. FP’s steep learning curve, reputation for impossibility, academic flavour and at times performance constraints made it seem something only a lunatic would select.

And so did some proclaim, Fukuyama-like, the “end of history”: Object Orientation was the one true way to build software. That is certainly how it seemed until a few years ago.

Then something interesting started happening, a change that has had far-reaching effects on many programming languages: existing OO languages started gaining FP features. Python was an early adopter here, but in time a lot of OO languages picked up a smattering of FP features.

This has provided an easy way for existing programmers to be exposed to how FP thinks about problem solving – and the way one approaches a large problem in FP can be dramatically different to traditional OO approaches.
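Python is a convenient illustration of that smattering of features: first-class functions, `lambda`, `map`, `filter` and `functools.reduce` all sit alongside its OO core, so the same problem can be solved in either style in the same file. A small sketch:

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5]

# Imperative: accumulate state step by step.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional: the same computation expressed as a pipeline of transformations.
total_fp = reduce(lambda acc, sq: acc + sq,
                  map(lambda n: n * n,
                      filter(lambda n: n % 2 == 0, numbers)),
                  0)

assert total == total_fp == 20  # 2*2 + 4*4
```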

Object Oriented software has been so dominant that its benefits and drawbacks are rarely discussed – in fact the idea that it might have drawbacks would have been thought madness by many until recently.

OO does have real benefits. It provides a process-driven approach for analysis, where your problem domain is analysed first for the data that exists in the business or whatever, and then behaviours are hooked onto these data. A large system is decomposed by responsibilities towards data.

There are some other things where OO helps too, although they maybe don’t sound so great. Mediocre can be good enough – and when you’ve got hundreds of programmers on a mammoth government project you need to be able to accommodate the mediocre. The reliance on process and good-enough code means your developers become more replaceable. Need one thousand identical carbon units? Let’s go!

Of course you don’t get that for free. The resulting code often has problems, and sometimes severe ones. Non-localised errors are a major problem, with causes and effects separated by billions of lines of code and sometimes weeks of execution. State becomes a constant problem, with huge amounts of state being passed around inside transactions. Concurrency issues are common as well, with unnecessary locking or race conditions being rife.

The outcome is also often very difficult to debug, with a single thread of execution sometimes involving hundreds of cooperating objects, each of which contributes only one or two lines of code.

The impact of this is difficult to quantify, but I don’t think it is unfair to put some of the epic failures of large-scale IT down to the choices of these tools and languages.


Strangely one of the places where FP is now being widely practised is in front-end applications, specifically Single-Page Applications (SPAs) written in frameworks like React.

The most recent JavaScript standards (officially called, confusingly, ECMAScript) have added oodles of functional syntax and behaviour, to the extent that it is possible to write it almost entirely functionally. Furthermore, these new JavaScript standards can be transpiled into previous versions of JavaScript, meaning they will run pretty much anywhere.

Since pretty much every device in the world has a JavaScript virtual machine installed, this means we now have the world’s largest ever installed base of functional computers – and more and more developers are using it.

The FP frameworks that are emerging in Javascript to support functional development are bringing some of the more recent research and design from universities directly into practice in a way that hasn’t really happened previously.


The other major movement has been the development of functional languages that run on the Java Virtual Machine (the JVM). Because these languages can call Java functions, they come with a ready-built standard library that is well known and well documented. There are a bunch of these, with Clojure and Scala being particularly prominent.

These have allowed enterprise teams with a large existing commitment to Java to start developing in FP without throwing away their existing code. I suspect it has also allowed them to retain some senior staff who would otherwise have left through boredom.

Ironically Java itself has added loads of functional features over the last few years, in particular lambda functions and closures.

How to adopt FP

We’ve adopted FP for some projects with some real success and there is a lot of enthusiasm for it here (and admittedly the odd bit of resistance too). We’ve learned a few things about how to go about adopting it.

First, you need to do more design work. Particularly with developers who are new to the approach, spending more time in design is of great benefit – but I would argue this is generally the case in our industry. An abiding problem is the resistance to design and the need to just write some code. Even in the most agile processes design is critical and should not be sidelined. Accommodating this design work in your process is crucial. This doesn’t mean big fat documents, but it does mean providing the space to think and for teams to discuss design before implementation, perhaps with spikes for prototypes.

Second, get up to speed with supporting libraries that work in a functional manner, and avoid those that are brutally OO. Just using Ramda encourages developers to work in a more functional manner and develop composable interfaces.
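The equivalent habit can be sketched in any language with first-class functions. Here is a hypothetical Python `compose` helper in the spirit of Ramda’s `compose` (the helper and the example functions are my own, shown only to illustrate what composable interfaces buy you):

```python
from functools import reduce

def compose(*funcs):
    """Right-to-left function composition, in the style of Ramda's compose."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)

def hyphenate(s):
    return s.replace(" ", "-")

# Small pure functions snap together into a pipeline, applied right to left.
slugify = compose(hyphenate, str.lower, str.strip)
slugify("  Hello World  ")  # "hello-world"
```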

Third, there is still a problem with impenetrable jargon, and it can be a turn off. Avoid talking about monads, even if you think you need one 😉

Finally, you really do not need to be smarter to work with FP. There is a learning curve and it is really quite steep in places, but once you’ve climbed it the kinds of solutions you develop feel just as natural as the OO ones did previously.





Needs, Empathy, and Ghosts in the Machine: Reflections on Dot York 2017

Last Thursday I spent the day helping out at the hugely anticipated Dot York 2017 conference. It was an early start and a (very!) late finish, but I wouldn’t have missed it for anything.

The success of a conference lives or dies by the quality of the speakers, and this year the bar was raised yet again, with the day ably compèred by Scott Hartop. Each talk provided enough food for thought to fill this blog a hundred times over, but I’ll restrict myself to discussing a few of my personal highlights from each session.

Adam Warburton changes our perceptions about competitors

The opening session of the day concerned User Experience and Needs. Adam Warburton, Head of Product at Co-op Digital, gave an illuminating demonstration of how seemingly unrelated products can end up as competitors when viewed through Maslow’s Hierarchy of Needs. Who would have thought, for instance, that online supermarket shopping and Uber are actually competitors within this framework, and how does this challenge the way we think about our own products? Adam went on to discuss how, by framing your business and the needs that you service in this way, you can force entire industries to transform for the better. The Co-op is not the most dominant supermarket chain in the UK, but Adam argues that their business goals have actually been met – by championing Fair Trade products and ethical business methods, they found that consumers valued these aspects of their business and so forced competitors to adopt their practices. For them, that was how they measured success.

Ian Worley speaks of getting stakeholder buy-in in a difficult environment

The second session, Business Before Lunch, saw four insightful talks from experts, innovators and entrepreneurs looking at the decisions we make and how we make the right choices for our own businesses. Ian Worley kicked us off with a talk about his time as Head of User Experience and Design at Morgan Stanley. Ian spoke with eloquence about achieving stakeholder buy-in by a) being brave about your expertise, and b) finding the right arguments for the right people. In the conservative world of banking, efficiency gains and improved bottom line were persuasive where aesthetic values and improved user experience were not. As Ian described his experiences, I thought about the broader question of value alignment: what do your clients value, what do you value, and what do you do if you can’t find common ground? At Isotoma I am fortunate to work with a broad range of clients, some offering exciting technical challenges, others that provide opportunities to do real social good. Very few of us in this industry can fully separate our work identities from our personal ones, so the importance of doing work with clients who share at least some of your values cannot be overstated.

Hannah Nicklin’s talk highlighted for me how destroying capitalism isn’t just a slogan…

Following quite the best conference lunch I’ve ever had (with many thanks to Smokin Blues!), we heard four presentations on Building Better Teams, with Hannah Nicklin providing a dramatic reading of her ethnographic experiences amongst games development collectives. Hannah’s talk highlighted for me how destroying capitalism isn’t just a slogan, but a praxis – the intersection of place and behaviour where we challenge orthodoxies. We probably can’t overthrow systems of exploitation overnight, but we can problematise convention and test alternatives. As a business, Isotoma works hard on cultivating an environment that works for its employees, and not simply operating as an entity for converting labour into ‘stuff’. What works for you may not work for me, and that’s ok, but the crucial thing is to challenge the received assumptions of what your business is for, and the value that it brings to the world.

Natalie Kane talks about how easily human bias can creep into development of advanced software

We rounded out the day’s events with a panel on Being Human, with an emphasis on empathy, self-care, and our responsibility towards others. Natalie Kane, the Curator of Digital Design at the Victoria and Albert Museum, delivered an intriguing talk concerning so-called ‘ghosts in the machine’, and how easy it is for the advance of technology to be embraced, unchallenged, as an unimpeachable good. Our ethical obligations do not begin and end with our good intentions, she announced, but require our constant and active engagement. Natalie argued that such ‘ghosts’ serve as a reminder that technology is not neutral, and we have a responsibility to keep a critical stance towards technology and how we use it. To paraphrase Jurassic Park, just because we can doesn’t mean we should.

I cannot wait to see who we book next year!

Dot York – Yorkshire’s digital conference returns

Dot York returned last Thursday 9th November with a new lease of life, a new venue, and a new sponsor. (Us!) The day’s events saw 16 compelling presentations, lunch by Smokin Blues and an evening event at Brew York.

If there was an overriding theme to the talks, it was probably empathy. We all make better, more successful digital products if we make the effort to learn about our users – or, seen from another perspective, if we recognise the diversity of our users, our stakeholders and our colleagues. Measurement was another theme: one of the ways in which we learn from our users and about the impact we’re having. Here’s my summary of the day’s talks:


Being a tutor at the Open University

At Isotoma, our recruitment policy is forward-thinking and slightly unconventional. We prioritise how you think rather than where, or even if, you studied formally. This is not to say that we don’t have our fair share of academics, with a former Human Computer Interaction researcher in the team among many more similarly impressive backgrounds!

This nonconformist approach spans the whole of Isotoma; and some of our clients may have noticed that, as a rule, I “don’t do Mondays”. So where am I? What am I doing? Being one of those academic types… I work as an Associate Lecturer at the Open University.

What is the Open University?

Like perhaps many people of a certain age I mainly associated the Open University with old educational TV programmes. So I was surprised to discover that the OU is the largest university in the UK with around 175,000 students, which is 40% of all the part-time students in the country!

The O in OU manifests itself as flexibility. It provides materials and resources for home study, allowing three quarters of students to combine study with full- or part-time work. Admissions are also open, with a third of students having one A-Level or lower on entry.

Studying in that context can be exceptionally challenging. So for each module, students are assigned a tutor to guide, support and motivate them: to illustrate, explain and bring the material to life. This is where I come in.


Oddly enough, given my developer position at Isotoma, I teach second and third year computing modules! I initially tutored web technologies, then diversified into object-oriented software development; algorithms, data structures and computability; and data management, analysis and visualisation.

The role of a tutor has three major components. To me, the most important is acting as the first point of contact for my tutor group, providing support and guidance throughout the module. For OU students, study is only one of many things going on in their lives – in fact, a student once apologised to me for an incomplete assignment, because they had to drive their wife to hospital to give birth! As a tutor, it is crucial to understand this, as such a unique learning environment requires adapting your teaching approach to students’ varied lives.

Marking and giving feedback is a core part of the role, with busier weeks producing plenty of varied and interesting assignments. For every piece of coursework, I write a ‘feedforward’ for each individual highlighting the strengths shown, but also outlining suggestions and targets for improvement. Personal feedback on assignments is an excellent learning opportunity for students and can really improve their final result. I also encourage students to get in touch to discuss my comments, as not only can this lead to some enlightening debates, but helps them to be in control of their own learning.

The final component is tutorials. I conduct most of mine through web-conferencing, working in a team to facilitate a programme of around 40 hours per module. These web-tutorials are extremely useful as the group can interact, chat and ask questions from wherever they are, and we can explore complex concepts visually on a whiteboard or through desktop sharing.

Tutoring: impact on development?

There is a great synergy between the two roles: as developers we try to keep on top of our game, and a regular stream of student questions that may be about Python, JavaScript, PHP, SQL, Java or who knows what certainly keeps you on your toes! This can be good preparation for some of the more …interesting… projects that Isotoma takes on from time to time.

Having a group of students all trying to find the best way to implement some practical activities is also like having a group of researchers working for you. So once when a student used the :target pseudo selector to implement a CSS lightbox without JavaScript I quite excitedly shared this technique in our development chat channel! Though (of course) our UX team were already well aware of it… but it was news to me!

To explain concepts you really need to understand them, and sometimes you realise over time what you thought you knew has become a bit shallow. Preparing a tutorial on recursion and search algorithms was a great warmup for solving how HAPPI implements drag and drop of multiple servers between racks – where not everything you drop may be able to move to the new slot, or the slot may be occupied by another server you are dragging away.
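For illustration, the kind of exercise such a tutorial covers: a recursive binary search over a sorted list, where each call narrows the problem until a trivial base case is reached.

```python
def binary_search(items, target, lo=0, hi=None):
    """Recursively search a sorted list; returns an index, or None if absent."""
    if hi is None:
        hi = len(items)
    if lo >= hi:
        return None  # base case: empty search range
    mid = (lo + hi) // 2
    if items[mid] == target:
        return mid
    if items[mid] < target:
        return binary_search(items, target, mid + 1, hi)  # recurse into right half
    return binary_search(items, target, lo, mid)  # recurse into left half

binary_search([1, 3, 5, 7, 9], 7)  # 3
```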

There isn’t an exact correlation between what I tutor and what I develop. Some topics push you beyond your comfort zone, so the implications of the Church-Turing thesis or the pros and cons of data mining are not things that crop up much in daily work, but things I’ve learnt in tutoring on data visualisation have proved to be pretty handy.

And of course some projects, such as the Sofia curriculum mapper for Imperial College of Medicine, are educational so domain knowledge of university processes is of direct relevance in understanding client requirements.

Development: impact on tutoring?

One of the reasons the OU employs part-time tutors is for the experience they bring from their work. In that respect, I can provide examples and context from what we do at Isotoma. This serves to bridge the gap between (what can sometimes be) quite dry theory and the decisions/compromises that are part and parcel of solution development in the real world.

So if a student questions the need to understand computational complexity for writing web applications, we can discuss what happens when part of your app is O(n²) when it could be O(n) or O(log n). Or the difference between a site that works well when one developer uses it and one that works well with thousands of users – but also discuss the evils of premature optimisation!
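To make that classroom point concrete (the example is mine, not from a real project): the classic accidental O(n²) is testing membership against a list inside a loop, which switching to a set reduces to roughly O(n) overall.

```python
def common_items_quadratic(xs, ys):
    # 'x in ys' scans the list each time: O(len(xs) * len(ys)) overall.
    return [x for x in xs if x in ys]

def common_items_linear(xs, ys):
    # Build the set once; each membership test is then O(1) on average.
    ys_set = set(ys)
    return [x for x in xs if x in ys_set]

common_items_linear([1, 2, 3], [2, 3, 4])  # [2, 3]
```

Both functions return the same answer; only the scaling behaviour differs, which is invisible with one developer clicking around and painful with thousands of users.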

Being part of a wider team at Isotoma also allows me to talk about approaches to issues like project management, security, testing and quality assurance. Recently I’ve also started feeding some of Francois’ tweets into discussions on usability and accessibility, which is fun.

Web development is a fast-moving field so while principles evolve gradually, tools, frameworks, languages and practices come and go. Working in that field allows me to share how my work has changed over time and what remains true.

If you work in the IT industry and are looking for a different challenge then I would highly recommend becoming an OU computing tutor. Tutoring one group doesn’t need a day a week every week, and it’s great to know that you’re sharing that expertise with those for whom full-time study isn’t an option, and developing a new generation of colleagues.

Containerisation: tips for using Kubernetes with AWS

Containers have been a key part of developer toolkits for many years now, but they are increasingly being used in production too. Driving this adoption, in part, is the maturity of production-grade tooling and systems.

The leading container management product is Docker, but on its own Docker does not provide enough to deploy into production, which has led to a new product category: container orchestration.

The leading product in this space is Kubernetes, developed initially by Google and then released as open source software in 2015. Kubernetes differs from some of the competing container orchestration products in its design philosophy which is committed to open source (with components like iptables, nginx and etcd as core moving parts) and by being entirely API first in its design.

Our experience is that Kubernetes is ridiculously easy to deploy and manage and has many benefits over straight virtualisation for deploying mixed workloads, particularly in a public cloud environment.

Our services

We are working towards becoming a Kubernetes Certified Service Provider and are actively delivering Kubernetes solutions for customers, primarily on AWS. If you are interested in our consulting or implementation services please just drop us a line.

Why containers?

The primary benefits are cost and management effort. Cost because expensive compute resource can be efficiently shared between multiple workloads. Management because the container paradigm packages up an application with its dependencies in a way that facilitates flexible release and deployment.

A container cluster of two or three computers can host dozens of containers, all delivering different workloads. The Kubernetes software can scale containers within this cluster, and can scale the overall cluster up and down depending on the needs of the workloads. This allows the cluster to be downsized to a minimum size when load is low. It also means containers that require very low levels of resources can remain in service without needing to take up a whole virtual machine.
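As a rough sketch of what this scaling looks like in practice (the deployment name and thresholds here are purely illustrative, not from any real cluster), a HorizontalPodAutoscaler grows and shrinks a workload’s replica count with load, while a cluster autoscaler resizes the node group underneath:

```yaml
# Illustrative only: scale a hypothetical "web" deployment between
# 1 and 10 replicas, targeting 80% average CPU utilisation.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

When load drops, replicas are scaled away and the cluster autoscaler can then retire underused nodes, which is where the cost saving comes from.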

Management time benefits enormously because of the packaging of applications with their dependencies. It allows you to share compute resource even when the workloads have conflicting dependencies – a very common problem. It allows upgrades to progress across your estate in a partial manner, rather than requiring big bang upgrades of possibly risky underlying software.

Finally it also allows you to safely upgrade the underlying operating system of your cluster without downtime. Workloads are automatically migrated around the cluster as nodes are taken out of service and new, upgraded, nodes are brought in.  We’ve done this a bunch of times now and it is honestly kind of magic.

There are other benefits to do with ease of access, granular access control and federation, and I might deal with those in later posts.


Here are a few tips if you are considering getting started with Kubernetes.


Domains

Buy a new top level domain for every cluster. This makes your URLs so much nicer, and it really isn’t that expensive! 🙂

AWS accounts

We consider best practice to be a MASTER account, where your user accounts sit, and then one sub account for your production environment, with further sub accounts for pre-production environments. Note that you can run staging sites in your production cluster – this pattern should become much more common, since you are not staging the cluster, but staging the sites.

A staging cluster is only needed to test cluster-wide upgrades and changes.


A single ELB

When all your sites are in a single cluster, behind a single AWS ELB (yes, you can do this), things such as Web Application Firewall automation and IP-restricted ELBs become more cost-effective. They only need to be applied once to provide benefit across your estate.
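As an illustration of how one ELB can front many sites (the hostnames and service names below are placeholders), a single ingress resource, served by one ingress controller, routes requests by hostname:

```yaml
# Sketch: one ingress controller (hence one ELB) serving two
# hypothetical sites. Hostnames and service names are illustrative.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sites
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: www
          servicePort: 80
  - host: staging.example.com
    http:
      paths:
      - backend:
          serviceName: staging
          servicePort: 80
```

Adding a new site is then a matter of adding a rule, with no new load balancer to provision or protect.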

Role-Based Access Control

This is a relatively new feature of Kubernetes, but it is solid and well-designed. I’d recommend turning this on from day one, so the capabilities are available to you later.
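A minimal sketch of what RBAC buys you, with purely illustrative names: a Role granting limited rights within one namespace, bound to a group of users.

```yaml
# Sketch: grant the hypothetical "deployers" group rights over
# deployments in the "staging" namespace only.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: staging
  name: deployer
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: staging
  name: deployers-binding
subjects:
- kind: Group
  name: deployers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```

Even if you don’t need this granularity on day one, having RBAC enabled means you can introduce rules like these without disruptive cluster changes later.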

Flannel and Calico, or Weave

Similarly I’d recommend enabling an overlay network from day one. These are easily deployed into an AWS Kubernetes cluster using the kops tool, and they provide advanced network capabilities if you ever need them in the future.
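For example, a sketch of a kops invocation that enables Calico at cluster creation time (the cluster name, state store bucket and zone are placeholders):

```
# Illustrative only: names and zones are hypothetical.
kops create cluster \
  --name=cluster.example.com \
  --state=s3://example-kops-state \
  --zones=eu-west-1a \
  --networking=calico
```

Choosing the overlay up front is much easier than migrating the cluster network later.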


Namespaces

Use namespaces to subdivide your estate into logical partitions. production and staging are an obvious distinction, but you may well have user groups where namespaces make a sensible boundary for applying access control.
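Namespaces are just small manifests; a sketch, with illustrative names:

```yaml
# Sketch: logical partitions for an estate. Names are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

Workloads are then addressed with `kubectl --namespace=staging ...`, and RBAC rules can be scoped to each namespace independently.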


CloudFormation

Currently, integrating Kubernetes configuration with CloudFormation configuration means writing some custom tooling. Bite the bullet and dedicate some time to doing this well. I’m expecting Kubernetes to become a first-class citizen within AWS at some point, but until then you are going to need to own your devops.

Resource records

Create Route53 ALIAS records for all your exposed endpoints (which could be just the single ELB for your ingress controller), and use these in your CloudFront distributions. This makes upgrades a lot easier!
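As a sketch of what such a record looks like in CloudFormation (the logical names and domain are placeholders, and the ELB resource is assumed to be defined elsewhere in the same template):

```yaml
# Sketch: an ALIAS record pointing a hostname at the classic ELB
# in front of the ingress controller. Names are illustrative.
IngressAlias:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.
    Name: ingress.example.com.
    Type: A
    AliasTarget:
      DNSName: !GetAtt IngressELB.DNSName
      HostedZoneId: !GetAtt IngressELB.CanonicalHostedZoneNameID
```

Because ALIAS records resolve to whatever the ELB currently is, you can replace the load balancer during an upgrade without touching DNS elsewhere.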


Live video mixing with the BBC: Lessons learned

In this post I am going to reflect on some of the more interesting aspects of this project and the lessons they might provide for other projects.

This post is one of a series talking about our work on the SOMA video mixing application for the BBC. The previous posts in the series are:

  1. Building a live television video mixing application for the browser
  2. The challenges of mixing live video streams over IP networks
  3. Rapid user research on an Agile project
  4. Compositing and mixing video in the browser
  5. Taming the Async Beast with FRP and RxJS
  6. RxJS: An object lesson in terrible good software
  7. Video: The future of TV broadcasting
  8. Integrating UX with agile development

In my view there are three broad areas where this project has some interesting lessons.

Novel domains

First is the novel domain.

This isn’t unfamiliar – we often work in novel domains that we have little to no knowledge of. It is the nature of a technical agency, in fact – while we have some domains that we’ve worked in for many years, such as healthcare and education, there are always novel businesses with entirely new subjects to wrap our heads around. (To give you some idea, a few recent examples include store-and-forward television broadcasting, horse racing odds, medical curricula, epilepsy diagnosis, clustering automation and datacentre hardware provisioning.)

Over the years this has been the thing that I have most enjoyed out of every aspect of our work. Plunging into an entirely new subject with a short amount of time to understand it and make a useful contribution is exhilarating.

Although it might sound a bit insane to throw a team who know nothing about a domain at a problem, what we’re very good at is designing and building products. As long as our customers can provide the domain expertise, we can bring the product build. It is easier for us to learn the problem domain than it is for a domain expert to learn how to build great products.

The greatest challenge with a new domain is the assumptions. We all have these in our work – the things we think are so well understood that we don’t even mention them. These are a terrible trap for software developers, because we can spend weeks building completely the wrong thing with no idea that we’re doing so.

We were very lucky in this respect to be working with a technical organisation within the BBC: Research & Development. They were aware of this risk and did a very good job of arranging our briefing, which included a visit to a vision mixing gallery. This is the kind of exercise that delivers a huge amount in tacit understanding, and allows us to ask the really stupid questions in the right setting.

I think of the core problem as a “Rumsfeld“. Although he got a lot of criticism for these comments I think they’re bizarrely insightful. There really are unknown unknowns, and what the hell do you do about them? You can often sense that they exist, but how do you turn them into known unknowns?

For many of these issues the challenge is not the answer, which is obvious once it has been found, but facilitating the conversation to produce the answer. It can be a long and frustrating process, but critical to success.

I’d encourage everyone to get the software team into the existing environment of the target stakeholder groups, to understand at a fundamental level what they need.

The Iron Triangle

The timescale for this project was extraordinarily difficult – nine weeks from a standing start. In addition, much of the scope was quite fixed – we were largely building core functionality that, if missing, would have rendered the application useless. On top of that, we wanted to achieve the level of finish for the UX that we generally deliver.

This was extremely ambitious, and in retrospect we bit off more than we could reasonably chew.

Time is the greatest enemy of software projects because of the challenges in estimation. For reasons covered in a different blog post, estimation for software projects is somewhere between an ineffable art reserved only for the angels, and completely impossible.

When estimates are impossible, time becomes an even greater challenge. One of the truisms of our industry is the “Iron Triangle” of time, scope and quality. Like a good Chinese buffet, you can only choose two. If you want a fixed time and scope, it is quality that will suffer.

Building good software takes thought and planning. Also, the first version of a component is rarely the best – it takes time to assemble, then consider it, and then perhaps shape it into something near its final form.

Quality is, itself, an aggregate property. Haste lowers the standards for each part and so, by a process of multiplication, lowers the overall quality of a product far more. The only way to achieve a very high quality for the finished product is for every single part to be of similarly high quality. This is generally our goal.

However. Whisper it. It is possible to “manage” quality, if you understand your process and know the goal. Different kinds of testing can provide different levels of certainty of code quality. Manual testing, when done exhaustively, can substitute in some cases for perfection in code.

We therefore managed our quality, and I think actually did well here.

Asynchronous integration components had to be of absolute perfection, because any bugs would result in a general lack of stability that would be impossible to trace. The only way to build these is carefully, with a design, and the only way to test them is exhaustively, with unit and integration tests.

On the other hand, there were a lot of aspects of the UI where it was crucial that they performed and looked excellent, but the code could be rougher around the edges, and could just be hacked out. This was my area of the application, and my goal was to deliver features as fast as possible with just acceptable quality. Some of the code was quite embarrassing, but we got the project over the line on time, with the scope intact, and it all worked. This was sufficient for those areas.

Experimental technologies

I often talk about our approach using the concept of an innovation curve, and our position on it (I think I stole the idea from Ian Jindal – thanks Ian!).

Imagine a curve where the X axis is “how innovative your technologies are” and the Y axis is “pain”.

In practical terms this can be translated into “how likely I am to find the answer to my problems on Stack Overflow“.

At the very left, everything has been seen and done before, so there is no challenge from novelty – but you are almost certainly not making the most of available technologies.

At the far right, you are hand crafting your software from individual photons and you have to conduct high-energy physics experiments to debug your code. You are able to mould the entire universe to your whim – but it takes forever and costs a fortune.

There is no correct place to sit on this curve – where you sit is a strategic (and emotional) decision that depends on the forces at play in your particular situation.

Isotoma endeavours to be somewhere on the shoulder of the curve. The software we build generally needs to last 5+ years, so we can’t pick flash-in-the-pan technologies that will be gone in 18 months. But similarly, our choices need to be relatively recent, so they don’t become obsolete. This is sometimes called “leading edge”. Almost bleeding edge, but not so close you get cut. With careful choice of tools it is possible to maintain a position like this successfully.

This BBC project was off to the right of this curve, far closer to the bleeding edge than we’d normally choose, and we definitely suffered.

Some of the technologies we had to use had some serious issues:

  1. To use IPStudio, a properly cutting edge product developed internally by BBC R&D, we routinely had to read the C++ source code of the product to find answers to integration questions.
  2. We needed dozens of coordinated asynchronous streams running, for which we used RxJS. This was interesting enough to justify two posts on this blog on its own.
  3. WebRTC, which was the required delivery mechanism for the video, is absolutely not ready for this use case. The specification is unclear, browser implementation is incomplete and it is fundamentally unsuited at this time to synchronised video delivery.
  4. The video compositing technologies in browsers actually work quite well, but they were entirely new to us and it took considerable time to gain sufficient expertise to do a good job. Also, browser implementations still have surprisingly sharp edges (only 16 WebGL contexts are allowed! Why 16? I dunno.)

Any one of these issues could have sunk our project, so I am very proud that we shipped good software despite facing all four.

Lessons learned? Task allocation is the key to this one I think.

One person, Alex, devoted his time to the IPStudio and WebRTC work for pretty much the entire project, and Ricey concentrated on video mixing.

Rather than try and skill up several people, concentrate the learning in a single brain. Although this is generally a terrible idea (because then you have a hard dependency on a single individual for a particular part of the codebase), in this case it was the only way through, and it worked.

Also, don’t believe any documentation, or in fact anything written in any human languages. When working on the bleeding edge you must “Use The Source, Luke”. Go to the source code and get your head around it. Everything else lies.


I am proud, justifiably I think, that we delivered this project successfully. It was used at the Edinburgh festival and actual real live television was mixed using our product, given all the constraints above.

The lessons?

  1. Spend the time and effort to make sure your entire team understand the tacit requirements of the problem domain and the stakeholders.
  2. Have an approach to managing appropriate quality that delivers the scope and timescale, if these are heavily constrained.
  3. Understand your position on the innovation curve and choose a strategic approach to managing this.

The banner image at the top of the article, taken by Chris Northwood, shows SOMA in use during the 2017 Edinburgh Festival.

Integrating UX with agile development

Incorporating user centred design practices within Agile product development can be a major challenge. Most of us in the user experience field are more familiar with the waterfall “big design up front” methodology. Project managers and developers are also likely to be more comfortable with a discrete UX design phase that is completed before development commences. But this approach tends to be inefficient, slower and more expensive. How does the role of the UX designer change within Agile product development, with its focus on transparency and rapid iteration?

While at Isotoma we’ve always followed our own flavour of Agile product development, UX is still mostly front-loaded in a “discovery” phase, as at most agencies. Our recent vision mixer project for BBC Research & Development, however, required a more integrated approach. The project had a very tight timeframe, requiring overlapping UX and development, with weekly show & tells.

From a UX perspective, it was a positive experience and I’m happy with the result. This post lists some of the techniques and approaches that I think helped integrate UX with Agile. Of course, every project and organisation is different, so there is definitely no one-size-fits-all approach, but hopefully there is something here you can use in your work.