
Reflections on Product Ownership

In my day job, I do not call myself a Product Owner.

My title is Project Manager, and I think this is an accurate representation of what I do. I ‘manage projects’, in the sense that I do my best to ensure that projects are organised, run smoothly, and achieve outcomes that leave my clients feeling satisfied.

In a previous life, I worked as a Product Owner. I worked for a large organisation that used their own version of Agile methodology across the board to deliver both new products and ongoing Business-As-Usual work. I was not a Project Manager there: I was a Product Owner. At least, that was my title. Whether I was actually a Product Owner, in the strict Scrum sense, I’m not so sure.

As the name suggests, a Product Owner is someone who owns a product, which is to say the “Vision”, as well as the Strategic goals and Tactical development of a product. This is part of the definition given to me by Roman Pichler, one of the foremost experts on Product Ownership and host of a two-day workshop that left me with both an enormous sense of enthusiasm for the potential of the role and the nagging feeling that, despite what my memory and CV tell me, I haven’t actually been a Product Owner at all.

But first, let me discuss some elements of the course itself. I have to admit to a certain degree of scepticism towards such things, if only for the sheer number of offerings out there. With no rankings of the courses available, or accountability for many of the large organisations that deliver them, how do we ensure that the course we take is actually useful? Experience of such workshops, not just in business but in my previous life as an academic, suggests that there are two main factors that determine their usefulness.

First, does the instructor know what they are talking about? Do they have experience of the role in the ‘real world’, under fire, where life, politics, and Murphy’s Law conspire to ruin perfectly curated Agile processes? Do they rely on a textbook for delivering the course content, or do they have the expertise to share their knowledge in a way that adapts itself to their audience’s needs? Can they respond to their battle-hardened, sometimes cynical participants in a manner that acknowledges their own competencies while getting them to buy into the content as both realistic and valuable?

Second, who are the participants? Are they willing to get involved, to share their own experiences openly and honestly? How broad is their range of experiences? Are they all new to the role, in which case their inexperience may limit useful conversation, or are they all experienced to the point of weariness, in which case it may be difficult to persuade them of the value of active participation? Are there even enough attendees to have a meaningful discussion?

All these thoughts ran through my head while, bleary-eyed and half asleep, I sat on the early train to London. In the end, however, I need not have worried. Roman has worked in management and product roles for over 17 years, and has more than enough anecdotes and examples of real-world application of the principles he espouses (both successes and failures) to demonstrate his bona fides. He cuts an imposing figure, part Steve Jobs, part “Gangs of New York” Daniel Day-Lewis, and delivers his material with confidence and mastery. I was particularly impressed with his method of starting with blank slides, then building them up with drawings and annotations as the exercises and discussions progressed. It seemed an excellent way to react to the queries of the attendees while maintaining a strong sense of narrative and interaction throughout the sessions, something that a set of pre-made slides or a handout struggles to achieve. The approach reminded me greatly of similar techniques used in some YouTube videos (such as these), and I found it an excellent way of keeping an audience engaged*.

As for my fellow participants, an initial exercise revealed that we had a good mix of current, former, and new Product People in our midst. There were about 30 attendees in total, and remarkably some had even travelled from as far away as Germany and Slovakia to take part (and there was me thinking Yorkshire was far enough). Some, like me, came from small agencies, but many were based in start-ups or large enterprises in industries ranging from fintech to engineering to travel and beyond. Most importantly, they were all happy and willing to share their experiences and ask insightful questions of our instructor and of each other.

Naturally, this being a course about Scrum roles, we began by building a backlog of questions we had about product ownership and goals we had for the two days. For myself, I really had two questions that I hoped the course would address. Firstly, and most applicable to my current situation, how do you leverage the value of product ownership in an agency that doesn’t have distinct Product Owner and Scrum Master roles? My second question was more personal, and takes me back to my thoughts at the beginning of this blog: that is, once a business reaches a certain size, or the scope of a product reaches a certain complexity, is it possible for a Product Owner to operate effectively in the Scrum sense of the role, rather than transitioning into “something else”?

Regarding Product Owners in agencies such as ours, Roman proposed two models, where either the client takes that role or the Project Manager (or someone similar in the agency) acts as a kind of ‘proxy’ Product Owner, switching hats as necessary for the duration of a project. While examining these possibilities, I thought about how we do things at Isotoma. With some clients, having someone on the customer side act as a Product Owner makes a tremendous amount of practical sense. They are a subject expert with a strong vision for the product that we’re developing, they are able to obtain buy-in from their business to support that kind of close working relationship, and they have visibility of their internal business roadmap, which allows them to prioritise work that returns the most immediate value. With other clients, however, that kind of relationship is a lot more challenging. Their schedule doesn’t allow them to devote the time necessary to a single product, their organisation doesn’t allow for them to ‘own’ the product themselves, or sometimes a lack of domain expertise means they rely a lot more on our input when considering what direction a product could take.

In practical terms, then, I realised that we actually practice both of these approaches. It’s not that one is intrinsically ‘better’ than the other, but rather each approach is more or less suitable depending on who our client is and what it is we’re building. Ideally this is something we can identify early on – much like my previous blog about asking questions (link), we use the Discovery phase of a project to determine the kind of role that our client is going to play in the delivery of the product. Are they able (and willing) to play this role effectively, or do we need to be more involved on a strategic level? This can change as the project develops, too, with some clients being heavily involved in the early stages and less involved once the product has started to take shape. Sometimes, when we’re acting more as direct partners, they will leave the early decision making to us and take more ownership of the product once its direction becomes more certain.

The importance for me, then, lies not in where the Product Owner role ‘sits’ during a project, but that everyone knows how decisions are made and buys into that process.

This brings me to my second question.

One of the main challenges that face Product Owners, in my view, lies in ensuring that they are understood to be Product Owners, rather than simply Project Managers working in an Agile environment. This is to say, their job is to own the vision of the product as well as its strategic and tactical direction. What their job should not be, in a true Scrum environment, is managing the development team on a day-to-day basis, or existing purely to triage requirements from other parts of the business without the authority to push back. Based on discussions with my fellow Product People over the two days, however, I think it’s fair to say that these experiences are fairly typical. Not one of the people I spoke to said “Oh, I wish I had less say in the direction that my product is taking”, or “I wish I had to spend more time dealing with issues arising during sprints”. Rather there was a real sense that Product Owners in large organisations are encumbered with an administrative burden that is anathema to the value that the role can bring. They become mini Project Managers, or funnels for business requirements from other stakeholders.

There is a standard model of how Scrum roles scale in large organisations. During one of the sessions Roman drew a distinction between ‘big’ Product Owners, who own the larger Vision and Strategy of the product, and ‘small’ Product Owners, who, typically through organisational design, have little influence beyond the Tactical level. This can work well, if everyone involved understands this role and works together as a kind of ‘Product Ownership Team’, but the broader challenge as the product and organisation grows is the dilution of a single point of ownership and the subsequent weakening of both vision and consistency. If the product is simply too big by this stage, the question then becomes “Do we break down the ownership of this product into discrete ‘features’ or ‘components’, and if so what is a sane and sensible way of doing that while maintaining a coherent overall vision?”

It’s a hard problem to solve and I think, for my own part, it’s a problem that is addressed less by finding the ‘perfect’ configuration of roles and responsibilities in your organisational structure and more by individuals deciding for themselves whether they are truly satisfied with the role that they are being asked to play. As Roman discussed in the course, when faced with a difficulty like this, you can either change what you’re doing or change how you feel about what you’re doing. I like this idea a lot, and would go further and rephrase it:

If you cannot take ownership of the product, take ownership of your relationship with the product.

And so I returned to York after two days, an officially certified Product Owner to go alongside the Scrum Master certification I achieved in 2016, and pondered the emphasis we put on methodology and roles and process management. Certainly we need all those things to maintain some semblance of order, to strive for value in the work that we do, to make sure that the things that need to get done are done by the right person in the right place at the right time. But still I wonder how many square pegs are out there, forcing themselves into round holes not because that is where they’re happiest, but because they’ve resigned themselves to never finding the square hole that would show them at their best.

* Note to self: steal this approach

Dan attended a two-day course run by Roman Pichler. Visit www.romanpichler.com to view course dates and learn more about Product Ownership.

Continuous Delivery with CircleCI, ECR and Kubernetes

As well as being a great drop-in hosting system for a lot of bare-metal and “legacy” cloud workloads, Kubernetes provides some spectacular developer tools and access to automation.  It is often very easy to do things that on other platforms would be difficult or impossible.

Continuous Delivery is an example of something that, in practice, can prove difficult to orchestrate. Even with high automation, release processes often have enough moving parts, or sufficient latency, that operating them frequently is prohibitively expensive, difficult or error-prone.

Releases in Kubernetes, however, are generally so rapid and so well orchestrated that this is not a problem.

This week I put together a CD pipeline using CircleCI, which is a good example of just how simple this can be.

There are three phases to a software update based on docker images: build, push and update.

Build and push

CircleCI makes orchestrating this in a Continuous Integration system really easy. We’re storing our images in AWS Elastic Container Registry (ECR), which adds a little bit of complexity, but even then it’s pretty easy. Here’s the relevant part of the CircleCI configuration:

jobs:
  deploy:
    docker:
    - image: circleci/python:3.6.1
    working_directory: ~/repo
    steps:
    - checkout
    - setup_remote_docker:
        docker_layer_caching: true
    - run:
        name: Push to ECR
        command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install awscli
            TAG=0.1.$CIRCLE_BUILD_NUM
            docker build -t local:$TAG .
            eval `aws ecr get-login | sed -e's/-e none//'`
            docker tag local:$TAG $AWS_ECR_REGISTRY:$TAG
            docker push $AWS_ECR_REGISTRY:$TAG

This has to jump through a couple of hoops. First, install the awscli, needed to log in to ECR:

python3 -m venv venv
. venv/bin/activate
pip install awscli

This is why we’re basing the build on a python image, so we have pip.

Build the local copy of the actual deployment image:

docker build -t local:$TAG .

Then do the login with some nasty shell hackery (the sed strips out the deprecated -e none email flag, which newer Docker clients reject), tag the remote image and push it:

eval `aws ecr get-login | sed -e's/-e none//'`
docker tag local:$TAG $AWS_ECR_REGISTRY:$TAG
docker push $AWS_ECR_REGISTRY:$TAG

At this point, we’ve got the image in the remote registry. Authentication with AWS is handled by putting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables with appropriate credentials in the CircleCI project environment.

Update

Actually deploying into the cluster uses a tool we wrote a while ago, k8ecr. This uses the AWS SDK, the Kubernetes client-go packages and the docker client to coordinate various common operations on ECR repositories and Kubernetes. In particular it can issue image updates to Kubernetes deployment resources.

It has a mode where you can tell it to update every relevant deployment in a namespace:

> k8ecr deploy stage -

Running this command will cause it to compare (using semver) all the tags in all the ECR repositories in your AWS account with all the containers in all the deployments in the specified namespace (in this case stage), and issue rolling updates for all the containers for which there is a new version. Kubernetes then does whatever is necessary to get the new code running.

So if you have previously pushed a new version of an image, and there is a deployment using an earlier version of that image, then it will get updated.
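For comparison, here is roughly what one of those updates would look like done by hand (the deployment and container names here are hypothetical):

# Hypothetical manual equivalent of what k8ecr automates for one deployment
kubectl --namespace stage set image deployment/myapp app=$AWS_ECR_REGISTRY:0.1.42

k8ecr just performs the equivalent for every matching container in every deployment in the namespace, whenever a newer semver tag appears.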

The only missing part of the orchestration is running this regularly, and we can do that, naturally, with another Kubernetes deployment. Here’s a Dockerfile:

FROM alpine
ENV AWS_REGION eu-west-2
RUN apk add --no-cache ca-certificates
ADD k8ecr /
CMD while true; do /k8ecr deploy $NAMESPACE -; sleep 60; done

and deployment resource:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: autodeploy-stage
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: autodeploy-stage
    spec:
      containers:
      - name: app
        image: isotoma/k8ecr-autodeploy
        env:
        - name: NAMESPACE
          value: stage

Every 60 seconds this will perform the checks and trigger any appropriate deployments. Voilà, an auto-updating stage namespace. Now developers can do whatever is necessary in CI and magically have their stage environment updated. CircleCI provides workflow filters so that, for example, only tags get deployed, and we can include the tag versions in the image versions.
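For illustration, a workflow stanza along these lines (the v-prefixed tag pattern is just an example) would run the deploy job only for version tags:

workflows:
  version: 2
  build-deploy:
    jobs:
      - deploy:
          filters:
            tags:
              only: /^v.*/      # run only for tags like v1.2.3
            branches:
              ignore: /.*/      # never run for plain branch pushes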

If you want to use this, then an image built with that Dockerfile is available on docker hub.

Promotion to production then just requires running k8ecr with the appropriate arguments and we’re done.

Isotoma named a top software development company by Clutch

Recently, Clutch – the leading B2B ratings and reviews platform – released a report listing the top development companies in the United Kingdom. We are proud to announce that based on in-depth client feedback, Isotoma has been recognised for being one of the top software developers in the UK.

Clutch uses a unique rating methodology that compares agencies and software solutions based on the services they offer, their previous work, and analyst-conducted client reference interviews. Client reviews form the backbone of Clutch’s research, providing business buyers and sellers with the insight they need to facilitate productive business partnerships.

Their research is displayed using a leaders matrix that considers a company’s ability to deliver, its clients and experience, and its market presence. Isotoma is listed as a top 5 software development company in the UK, and was also featured as a top web development company in the UK.

Since joining Clutch in December, four of our clients have taken the time to review us on the platform. Our clients came to Isotoma with high-level requirements for their projects, and we ensured that they got the best return on their investment. You can check out more case studies and a sample of the work that we are capable of on our work pages.

The feedback received thus far has been incredible, with our clients commending Isotoma for our proactive project management and technical expertise. Below are some excerpts from our Clutch reviews:

“Their ability to adapt, pivot, and reconfigure their time to match our changing goals is brilliant. They represent the standard we would expect from our best suppliers in terms of delivery, communication, and collaboration. We’re currently working with about 4–5 different development agencies and they’re by far the most transparent in the reporting, planning, and delivery of projects.” – Director of Advanced Technology, NBC Universal

“The early work that they did with the revision of the platform extended our business life by 5 years…” – Managing Director, Pebble

“They’re very self-sufficient, and don’t require a lot of handholding. They really understood our problem, and came up with their own ideas to fix it. They are more than just coders who will meet the requirements, they find solutions and give us exactly what we need.” – Commercial Director, Iomart

“They’re much more creative. They want to understand the problem and then they try to solve the problem or identify solutions to the problem before they even start thinking about the technical side of it. I think that is incredibly valuable.” – Head of School Secretariat, University

We would like to thank our amazing clients for taking the time to review us on Clutch. We are thrilled to hear such positive experiences about our development work and look forward to maintaining our standing as a leading bespoke software development company throughout 2018.

You can read the full reviews on our Clutch profile. If you are interested in learning more about our highly rated services, please get in touch.

The importance of asking the right questions

The management of a project is one of those situations where, when it’s done right, it barely looks like it’s happening at all. Requirements are gathered, work is done, outcomes are achieved, all with a cheery smile on our faces and a song in our hearts. Effective project management is built on a foundation of thorough planning, open communication, and disciplined adherence to mutually agreed processes.

However.

Life is imperfect, projects are imperfect, and people are imperfect. Uncertainty is ever-present, change is inevitable, and the Rumsfeldian triangle of “known-knowns, known-unknowns, and unknown-unknowns” threatens us at every turn.

Luckily, I arrived into project management already well versed in the flawed nature of existence. This has meant my project management journey has been less about overcoming existential crises and far more about how, despite the relentless yoke of disappointment, we can still ensure projects are completed on time, on budget, and with minimal loss of life.

In my experience, the strongest weapon we have against projects going awry is honesty combined with a healthy suppression of ego. Project management is usually seen as a problem of logistics and organisation, and I don’t doubt that this is a large part of it. However, my view is that managing the creation of complex digital products is, more than anything else, a problem of personal psychology.

What do I mean by this?

Our clients are experts in their own domains, whether it’s healthcare research or education funding or something else entirely. The first step in my being able to help them is to honestly explore what I don’t know about their domain, and work with them to fill in knowledge gaps that might otherwise lead to incorrect assumptions. In other words, I start the process by embracing my own ignorance and communicating that with our clients.

This approach runs counter to a lot of day-to-day practice. If someone asks me a question, I generally think about what I do know, rather than what I don’t know. I am, after all, the product of an education system that typically awards points for regurgitating memorised facts over challenging received assumptions. I feel uncomfortable when I don’t know the answer to a question, because I have learned to associate this feeling with failure. When it comes to the sheer range of domains our clients cover, however, it is inevitable that I will bump up against the limits of my existing knowledge base on a regular basis.

It can feel risky to expose a lack of knowledge. Naturally I want a client to have confidence in me, and displaying a lack of domain-specific knowledge can feel counter to that goal. The biggest psychological hurdle to get over, then, is the acceptance that not knowing the answers at the beginning is a normal state to be in; embracing that it signifies opportunity rather than failure, and that the sooner we accept what we don’t know, the sooner we will be in a position to help our client achieve their goals. This is part of the reason we generally recommend a Discovery phase at the beginning of a project. It is during this period that we attack the Rumsfeldian triangle head on, embrace the things that we do not know, and build the foundation for the success of the project.

Encountering something new and unknown can be scary and intimidating.

This is ok.

In fact, this is more than ok – this is exactly what I love about managing projects at a company like Isotoma.

As a company we are experienced across a range of domains, and the knowledge does not all sit within one individual. We have a collective memory and level of expertise that allows us to meet the challenges that we face, an institutional memory of past problems and proven solutions. There will inevitably be times where we just don’t know enough about a domain to know the path forward instinctively, but by being honest about our limits and sharing a commitment to overcoming them, we grow as individuals and as a team.

It is tempting to think that we deliver great products because we always know the right thing to do, but I don’t think this is the case. In my view, we are good at what we do not because we always know the answers, but because we ask the right questions of our clients and of each other.

Photo by Josh Calabrese on Unsplash

Data as a public asset

I recently had the pleasure of attending a public consultation on the latest iteration of the Supplier Standard. For the uninitiated, the Supplier Standard is a set of goals published by GDS that it hopes both suppliers and public sector customers can sign up to when agreeing on projects.

You can read the current iteration of the standard on GOV.UK (it’s blessedly short), but the 6 headlines are:

  1. User needs first
  2. Data is a public asset
  3. Services built on open standards and reusable components
  4. Simple, clear, fast transactions
  5. Ongoing engagement
  6. Transparent contracting

Ideologically I am massively behind a lot of it. These goals go a long way to breaking the traditional software industry mindset of closed source software backed up with hefty licence fees and annual support and maintenance agreements. Projects meeting these standards will genuinely move public sector IT purchasing to a more open – and hugely more cost effective – model.

The conversations within the event were rightly confidential and I won’t report what anyone said, but I would like to make some public comments on point 2 – Data is a public asset.

The standard says:

Government service data is a public asset. It should be open and easily accessible to the public and third-party organisations. To help us keep improving services, suppliers should support the government’s need for free and open access to anonymised user and service data, including data in software that’s been specially built for government.

That first sentence is fantastic. Government service data is a public asset. What a statement of intent. OS Maps. Public asset. Postcode databases. Public asset. Bus and train timetables. Public asset. Meteorological data. Public asset. Air quality. Health outcomes. House prices. Labour force study. Baby names. Public assets one and all.

But can we talk about the rest, please?

It should be open and easily accessible to the public and third-party organisations.

What do we mean by open and easily accessible? The idea is a great one, with rich APIs and spreadsheet downloads of key data, but if we’re not careful all we’ll end up with is a bunch of poorly planned, hurriedly implemented and unmaintained APIs.

Open data is a living breathing thing. Summary downloads need to be curated by people who understand data and how it might be used. APIs need to be well planned, well implemented and well documented, and the documentation has to be updated in line with any changes to the software. Anything less than that fails to meet any sensible definition of open or easily accessible.

And if nothing else a poorly planned or implemented API is likely to be a security risk. Which leads me to my next point:

[…] free and open access to anonymised user and service data […]

Woah there for a second!

We all know how hard genuine anonymisation is. And we all know how often well-intentioned service owners and developers leak information they genuinely believed was anonymised, only to have it pulled apart and personal information exposed.

This goal, like the others, is genuine, laudable and well-intentioned. As suppliers to publicly funded bodies we should absolutely be signed up to all of them. But, as GDS standards spread out to the wider public sector, let’s make sure that everyone understands the concept of proportionality. The £20k to £40k budget put aside for a vital application to support foster carers*, for example, is best spent on features that users need, not on APIs and anonymisation.

Proportional. Proportionality. I said them a lot throughout the consultation meeting. I hope they stick.

*I use this as an example only; Isotoma didn’t bid for that particular project, it’s just a great example of a vital application with a small budget generating exactly the kind of data that would fall under this requirement

[Photo by Dennis Kummer on Unsplash]

Spell check not working in LibreOffice?

Is the spell check in your copy of LibreOffice not working?

When I installed Ubuntu 17.10 and set my locale to English (UK) during the install, LibreOffice correctly noted the locale but didn’t pick up the English (UK) dictionaries, meaning that spell checking wasn’t working.

Luckily it’s an easy fix:

  • Download the latest dictionaries extension from the LibreOffice site (the UK English ones are here: https://extensions.libreoffice.org/extensions/english-dictionaries/)
  • Then in LibreOffice hit up Tools -> Extension Manager and click the ‘Add’ button
  • In the resulting file dialog box find the .oxt file that you just downloaded and double click it
  • Restart LibreOffice

Voilà! (if you type that in LibreOffice Writer it should now have a red squiggly line underneath!)

[Photo by Romain Vignes on Unsplash]

FP: a quiet revolution

Functional Programming (FP) is taking over the programming world, which is kind of weird since it has taken over the programming world at least once before. If you aren’t a developer then you may never even have heard of it. This post aims to explain what it is and why you might care about it even if you never program a computer – and how you might go about adopting it in your organisation.

Not too long ago, every graduate computer scientist would have spent some time doing FP, perhaps in a language called LISP. FP was considered a crucial grounding in CompSci and some FP texts gained a cult following. The legendary “wizard book” Structure and Interpretation of Computer Programs was the MIT Comp-101 textbook.

Famously a third of students dropped out in their first semester because they found this book too difficult.

I think this was as much down to how MIT taught the course as anything, but nevertheless functional programming (and the confusingly brackety LISP) started getting a reputation for being too difficult for mere mortals.

Along with the reputation for impossibility, universities started getting a lot of pressure to turn out graduates with “useful skills”. This has always seemed a bit of a waste of a university’s time to me – universities are very specifically not supposed to be useful in that sense. I’d much rather graduates got the most out of their limited time at university learning the things that only universities can provide, rather than programming which, bluntly, we can do a lot more effectively than academics.

Anyway, I digress.

The rise of Object Orientation

So it came to pass that universities decided to stop teaching academic languages and start teaching Java. Ten years ago I’d guess well over half of all university programming courses taught Java. Java is not a functional language and until recently had no functional features. It was unremittingly, unapologetically Object Oriented (OO).  Contrary to Sun’s bombastic marketing when they released Java (and claimed it was a revolution in programming) Java as a language was about as mainstream and boring as it could be. The virtual machine (the JVM) was much more interesting, and I’ll come back to that later.

(OO is not in itself opposed to FP, and vice versa. Many languages – as we’ll see – are able to support both paradigms. However OO, particularly the way it was taught with Java, encourages a way of thinking about data flowing through a system, and this leads to data being copied and duplicated… which leads to all sorts of problems managing state. FP meanwhile tends to think in terms of transformation of data, and relies on the programming language to deal with the menial tasks of deciding when to copy data whilst doing so. When computers were slow this could cause significant bottlenecks, but computers these days are huge and fast and you can get more of them easily, so it doesn’t matter nearly as much – until it suddenly does of course. Anyway, I digress again.)
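To make that distinction concrete, here is a trivial, made-up illustration in modern Javascript:

// OO-ish: mutate shared state in place; every holder of the
// reference sees the change, whether they expected it or not.
const user = { name: 'Ada', visits: 1 };
user.visits += 1;

// FP-ish: treat data as immutable and produce a transformed copy;
// the original value is untouched.
const updated = { ...user, visits: user.visits + 1 };

The language runtime, not the programmer, worries about when any copying actually happens.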

In the workplace, meanwhile, FP had never really taken off. The vast majority of software is written using imperative languages like ‘C’ or Object Oriented languages like… well, pretty much any language you’ve heard of. Perl, Python, Java, C#, C++ – Object Orientation had taken over the world. FP’s steep learning curve, reputation for impossibility, academic flavour and at times performance constraints made it seem something only a lunatic would select.

And so did some proclaim, Fukuyama-like, the “end of history”: Object Orientation was the one true way to build software. That is certainly how it seemed until a few years ago.

Then something interesting started happening, a change that has had far-reaching effects on many programming languages: existing OO languages started gaining FP features. Python was an early adopter here, but before long a lot of OO languages had gained at least a smattering of FP features.

This has provided an easy way for existing programmers to be exposed to how FP thinks about problem solving – and the way one approaches a large problem in FP can be dramatically different to traditional OO approaches.

Object Oriented software has been so dominant that its benefits and drawbacks are rarely discussed – in fact the idea that it might have drawbacks would have been thought madness by many until recently.

OO does have real benefits. It provides a process-driven approach for analysis, where your problem domain is analysed first for the data that exists in the business or whatever, and then behaviours are hooked onto these data. A large system is decomposed by responsibilities towards data.

There are some other things where OO helps too, although they maybe don’t sound so great. Mediocre can be good enough – and when you’ve got hundreds of programmers on a mammoth government project you need to be able to accommodate the mediocre. The reliance on process and good-enough code means your developers become more replaceable. Need one thousand identical carbon units? Let’s go!

Of course you don’t get that for free. The resulting code often has problems, and sometimes severe ones. Non-localised errors are a major problem, with causes and effects separated by billions of lines of code and sometimes weeks of execution. State becomes a constant problem, with huge amounts of state being passed around inside transactions. Concurrency issues are common as well, with unnecessary locking or race conditions being rife.

The outcome is also often very difficult to debug, with a single thread of execution sometimes involving hundreds of cooperating objects, each of which contributes only one or two lines of code.

The impact of this is difficult to quantify, but I don’t think it is unfair to attribute some of the epic failures of large-scale IT to the choice of these tools and languages.

Javascript

Strangely one of the places where FP is now being widely practised is in front-end applications, specifically Single-Page Applications (SPAs) written in frameworks like React.

The most recent Javascript standards (officially called, confusingly, ECMAScript) have added oodles of functional syntax and behaviour, to the extent that it is possible to write Javascript almost entirely functionally. Furthermore, these new Javascript standards can be transpiled into previous versions of Javascript, meaning they will run pretty much anywhere.
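As a flavour of the style (a made-up example), a data transformation can be written as a pipeline of pure functions, with nothing mutated along the way:

const orders = [
  { customer: 'acme', total: 120, paid: true },
  { customer: 'big-co', total: 80, paid: false },
  { customer: 'acme', total: 45, paid: true },
];

// filter, map and reduce each return a new value rather than
// modifying orders, so the input data is never changed.
const paidTotal = orders
  .filter(order => order.paid)                 // keep paid orders
  .map(order => order.total)                   // project out the totals
  .reduce((sum, total) => sum + total, 0);     // fold into one number

console.log(paidTotal); // 165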

Since pretty much every device in the world has a Javascript virtual machine installed, this means we now have the world’s largest ever installed base of functional computers – and more and more developers are using it.

The FP frameworks that are emerging in Javascript to support functional development are bringing some of the more recent research and design from universities directly into practice in a way that hasn’t really happened previously.

The JVM

The other major movement has been the development of functional languages that run on the Java Virtual Machine (the JVM). Because these languages can call Java functions, they come with a ready-built standard library that is well known and well documented. There are a bunch of these, with Clojure and Scala being particularly prominent.

These have allowed enterprise teams with a large existing commitment to Java to start developing in FP without throwing away their existing code. I suspect it has also allowed them to retain some senior staff who would otherwise have left through boredom.

Ironically Java itself has added loads of functional features over the last few years, in particular lambda functions and closures.

How to adopt FP

We’ve adopted FP for some projects with some real success and there is a lot of enthusiasm for it here (and admittedly the odd bit of resistance too). We’ve learned a few things about how to go about adopting it.

First, you need to do more design work. Particularly with developers who are new to the approach, spending more time in design is of great benefit – but I would argue this is generally the case in our industry. An abiding problem is the resistance to design and the need to just write some code. Even in the most agile processes design is critical and should not be sidelined. Accommodating this design work in your process is crucial. This doesn’t mean big fat documents, but it does mean providing the space to think and for teams to discuss design before implementation, perhaps with spikes for prototypes.

Second, get up to speed with supporting libraries that work in a functional manner, and avoid those that are brutally OO. Just using ramda encourages developers to work in a more functional manner and develop composable interfaces.
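For instance, here is a minimal sketch of the ramda style (my own example, using its pipe, filter, map and prop helpers):

const R = require('ramda');

// Compose small pure functions into one reusable transformation.
const activeUserNames = R.pipe(
  R.filter(user => user.active),   // keep only active users
  R.map(R.prop('name'))            // project out their names
);

activeUserNames([
  { name: 'Ada', active: true },
  { name: 'Brian', active: false },
  { name: 'Grace', active: true },
]);
// => ['Ada', 'Grace']

Because each step is just a function, the pipeline itself is a value that can be passed around, reused and tested in isolation.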

Third, there is still a problem with impenetrable jargon, and it can be a turn-off. Avoid talking about monads, even if you think you need one 😉

Finally, you really do not need to be smarter to work with FP. There is a learning curve and it is really quite steep in places, but once you’ve climbed it the kinds of solutions you develop feel just as natural as the OO ones did previously.


Needs, Empathy, and Ghosts in the Machine: Reflections on Dot York 2017

Last Thursday I spent the day helping out at the hugely anticipated Dot York 2017 conference. It was an early start and a (very!) late finish, but I wouldn’t have missed it for anything.

The success of a conference lives or dies by the quality of its speakers, and this year the bar was raised yet again, with the day ably compèred by Scott Hartop. Each talk provided enough food for thought to fill this blog a hundred times over, but I’ll restrict myself to discussing a few of my personal highlights from each session.

Adam Warburton changes our perceptions about competitors

The opening session of the day concerned User Experience and Needs. Adam Warburton, Head of Product at Co-op Digital, gave an illuminating demonstration of how seemingly unrelated products can end up as competitors when viewed through Maslow’s Hierarchy of Needs. Who would have thought, for instance, that online supermarket shopping and Uber are actually competitors within this framework, and how does this challenge the way we think about our own products? Adam went on to discuss how, by framing your business and the needs that you service in this way, you can force entire industries to transform for the better. The Co-op is not the most dominant supermarket chain in the UK, but Adam argues that their business goals have actually been met – by championing Fair Trade products and ethical business methods, they found that consumers valued these aspects of their business and so forced competitors to adopt their practices. For them, that was how they measured success.

Ian Worley speaks of getting stakeholder buy-in in a difficult environment

The second session, Business Before Lunch, saw four insightful talks from experts, innovators and entrepreneurs looking at the decisions we make and how we make the right choices for our own businesses. Ian Worley kicked us off with a talk about his time as Head of User Experience and Design at Morgan Stanley. Ian spoke with eloquence about achieving stakeholder buy-in by a) being brave about your expertise, and b) finding the right arguments for the right people. In the conservative world of banking, efficiency gains and improved bottom line were persuasive where aesthetic values and improved user experience were not. As Ian described his experiences, I thought about the broader question of value alignment: what do your clients value, what do you value, and what do you do if you can’t find common ground? At Isotoma I am fortunate to work with a broad range of clients, some offering exciting technical challenges, others that provide opportunities to do real social good. Very few of us in this industry can fully separate our work identities from our personal ones, so the importance of doing work with clients who share at least some of your values cannot be overstated.

Hannah Nicklin’s talk highlighted for me how destroying capitalism isn’t just a slogan…

Following quite the best conference lunch I’ve ever had (with many thanks to Smokin Blues!), we heard four presentations on Building Better Teams, with Hannah Nicklin providing a dramatic reading of her ethnographic experiences amongst games development collectives. Hannah’s talk highlighted for me how destroying capitalism isn’t just a slogan, but a praxis – the intersection of place and behaviour where we challenge orthodoxies. We probably can’t overthrow systems of exploitation overnight, but we can problematise convention and test alternatives. As a business, Isotoma works hard on cultivating an environment that works for its employees, and not simply operating as an entity for converting labour into ‘stuff’. What works for you may not work for me, and that’s ok, but the crucial thing is to challenge the received assumptions of what your business is for, and the value that it brings to the world.

Natalie Kane talks about how easily human bias can creep into development of advanced software

We rounded out the day’s events with a panel on Being Human, with an emphasis on empathy, self-care, and our responsibility towards others. Natalie Kane, the Curator of Digital Design at the Victoria and Albert Museum, delivered an intriguing talk concerning so-called ‘ghosts in the machine’, and how easy it is for the advance of technology to be embraced, unchallenged, as an unimpeachable good. Our ethical obligations do not begin and end with our good intentions, she announced, but require our constant and active engagement. Natalie argued that such ‘ghosts’ serve as a reminder that technology is not neutral, and we have a responsibility to keep a critical stance towards technology and how we use it. To paraphrase Jurassic Park, just because we can doesn’t mean we should.

I cannot wait to see who we book next year!

Dot York – Yorkshire’s digital conference returns

Dot York returned last Thursday 9th November with a new lease of life, a new venue, and a new sponsor. (Us!) The day’s events saw 16 compelling presentations, lunch by Smokin Blues and an evening event at Brew York.

If there was an overriding theme to the talks, it was probably empathy: we all make better, more successful digital products if we make the effort to learn about our users. Or, seen from another perspective, it was about recognising the diversity of our users, our stakeholders and our colleagues. Measurement was another theme, being one of the ways we learn about our users and the impact we’re having.


Being a tutor at the Open University

At Isotoma, our recruitment policy is forward-thinking and slightly unconventional. We prioritise how you think rather than where, or even if, you studied formally. That is not to say we don’t have our fair share of academics: there’s a former Human-Computer Interaction researcher on the team, among many other similarly impressive backgrounds!

This nonconformist approach spans the whole of Isotoma, and some of our clients may have noticed that, as a rule, I “don’t do Mondays”. So where am I? What am I doing? Being one of those academic types… I work as an Associate Lecturer at the Open University.

What is the Open University?

Like perhaps many people of a certain age, I mainly associated the Open University with old educational TV programmes. So I was surprised to discover that the OU is the largest university in the UK, with around 175,000 students – 40% of all the part-time students in the country!

The O in OU manifests itself as flexibility. It provides materials and resources for home study, allowing three quarters of students to combine study with full- or part-time work. Admissions are also open, with a third of students having one A-Level or lower on entry.

Studying in that context can be exceptionally challenging. So for each module, students are assigned a tutor to guide, support and motivate them: to illustrate, explain and bring the material to life. This is where I come in.

Tutoring

Oddly enough, given my developer position at Isotoma, I teach second- and third-year computing modules! I initially tutored web technologies, then diversified into object-oriented software development; algorithms, data structures and computability; and data management, analysis and visualisation.

The role of a tutor has three major components. To me, the most important is acting as the first point of contact for my tutor group, providing support and guidance throughout the module. For OU students, study is only one of many things going on in their lives – in fact, a student once apologised to me for an incomplete assignment, because they had to drive their wife to hospital to give birth! As a tutor, it is crucial to understand this, as such a unique learning environment requires adapting your teaching approach to students’ varied lives.

Marking and giving feedback is a core part of the role, with busier weeks producing plenty of varied and interesting assignments. For every piece of coursework, I write a ‘feedforward’ for each individual, highlighting the strengths shown but also outlining suggestions and targets for improvement. Personal feedback on assignments is an excellent learning opportunity for students and can really improve their final result. I also encourage students to get in touch to discuss my comments, as not only can this lead to some enlightening debates, it also helps them take control of their own learning.

The final component is tutorials. I conduct most of mine through web-conferencing, working in a team to facilitate a programme of around 40 hours per module. These web-tutorials are extremely useful as the group can interact, chat and ask questions from wherever they are, and we can explore complex concepts visually on a whiteboard or through desktop sharing.

Tutoring: impact on development?

There is a great synergy between the two roles: as developers we try to keep on top of our game, and fielding a regular range of student questions that may be about Python, JavaScript, PHP, SQL, Java or who knows what certainly keeps you on your toes! This can be good preparation for some of the more …interesting… projects that Isotoma takes on from time to time.

Having a group of students all trying to find the best way to implement some practical activities is also like having a group of researchers working for you. So when a student once used the :target pseudo-selector to implement a CSS lightbox without JavaScript, I quite excitedly shared this technique in our development chat channel! Though (of course) our UX team were already well aware of it – it was news to me!

To explain concepts you really need to understand them, and sometimes you realise over time what you thought you knew has become a bit shallow. Preparing a tutorial on recursion and search algorithms was a great warmup for solving how HAPPI implements drag and drop of multiple servers between racks – where not everything you drop may be able to move to the new slot, or the slot may be occupied by another server you are dragging away.

There isn’t an exact correlation between what I tutor and what I develop. Some topics push you beyond your comfort zone, so the implications of the Church-Turing thesis or the pros and cons of data mining are not things that crop up much in daily work, but things I’ve learnt in tutoring on data visualisation have proved to be pretty handy.

And of course some projects, such as the Sofia curriculum mapper for Imperial College of Medicine, are educational so domain knowledge of university processes is of direct relevance in understanding client requirements.

Development: impact on tutoring?

One of the reasons the OU employs part-time tutors is for the experience they bring from their work. In that respect, I can provide examples and context from what we do at Isotoma. This serves to bridge the gap between (what can sometimes be) quite dry theory and the decisions/compromises that are part and parcel of solution development in the real world.

So if a student questions the need to understand computational complexity for writing web applications, we can discuss what happens when part of your app is O(n²) when it could be O(n) or O(log n) – see the toy example below. Or the difference between a site that works well when one developer uses it and one that works well with thousands of users – but also discuss the evils of premature optimisation!
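As a toy illustration (made up for this post), here is how an accidental O(n²) creeps into everyday JavaScript, and the O(n) alternative:

const ids = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3];

// Accidentally O(n²): includes() rescans the array on every check,
// so n checks cost O(n) work each.
const uniqueSlow = [];
for (const id of ids) {
  if (!uniqueSlow.includes(id)) uniqueSlow.push(id);
}

// O(n): a Set offers constant-time membership tests.
const uniqueFast = [...new Set(ids)];

// Both produce [3, 1, 4, 5, 9, 2, 6] – but only one of them
// survives contact with a production-sized dataset.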

Being part of a wider team at Isotoma also allows me to talk about approaches to issues like project management, security, testing and quality assurance. Recently I’ve also started feeding some of Francois’ tweets into discussions on usability and accessibility, which is fun.

Web development is a fast-moving field so while principles evolve gradually, tools, frameworks, languages and practices come and go. Working in that field allows me to share how my work has changed over time and what remains true.

If you work in the IT industry and are looking for a different challenge then I would highly recommend becoming an OU computing tutor. Tutoring one group doesn’t need a day a week every week, and it’s great to know that you’re sharing your expertise with those for whom full-time study isn’t an option – and helping to develop a new generation of colleagues.