Category Archives: Pontification

The importance of asking the right questions

The management of a project is one of those situations where, when it’s done right, it barely looks like it’s happening at all. Requirements are gathered, work is done, outcomes are achieved, all with a cheery smile on our faces and a song in our hearts. Effective project management is built on a foundation of thorough planning, open communication, and disciplined adherence to mutually agreed processes.

However.

Life is imperfect, projects are imperfect, and people are imperfect. Uncertainty is ever-present, change is inevitable, and the Rumsfeldian triangle of “known-knowns, known-unknowns, and unknown-unknowns” threatens us at every turn.

Luckily, I arrived into project management already well versed in the flawed nature of existence. This has meant my project management journey has been less about overcoming existential crises and far more about how, despite the relentless yoke of disappointment, we can still ensure projects are completed on time, on budget, and with minimal loss of life.

In my experience, the strongest weapon we have against projects going awry is honesty combined with a healthy suppression of ego. Project management is usually seen as a problem of logistics and organisation, and I don’t doubt that this is a large part of it. However, my view is that managing the creation of complex digital products is, more than anything else, a problem of personal psychology.

What do I mean by this?

Our clients are experts in their own domains, whether it’s healthcare research or education funding or something else entirely. The first step in my being able to help them is to honestly explore what I don’t know about their domain, and work with them to fill in knowledge gaps that might otherwise lead to incorrect assumptions. In other words, I start the process by embracing my own ignorance and communicating that with our clients.

This approach runs counter to a lot of day-to-day practice. If someone asks me a question, I generally think about what I do know, rather than what I don’t know. I am, after all, the product of an education system that typically awards points for regurgitating memorised facts over challenging received assumptions. I feel uncomfortable when I don’t know the answer to a question, because I have learned to associate this feeling with failure. When it comes to the sheer range of domains our clients cover, however, it is inevitable that I will bump up against the limits of my existing knowledge on a regular basis.

It can feel risky to expose a lack of knowledge. Naturally I want a client to have confidence in me, and displaying a lack of domain-specific knowledge can feel counter to that goal. The biggest psychological hurdle to get over, then, is the acceptance that not knowing the answers at the beginning is a normal state to be in; embracing that it signifies opportunity rather than failure, and that the sooner we accept what we don’t know, the sooner we will be in a position to help our client achieve their goals. This is part of the reason we generally recommend a Discovery phase at the beginning of a project. It is during this period that we attack the Rumsfeldian triangle head on, embrace the things that we do not know, and build the foundation for the success of the project.

Encountering something new and unknown can be scary and intimidating.

This is ok.

In fact, this is more than ok – this is exactly what I love about managing projects at a company like Isotoma.

As a company we are experienced across a range of domains, and the knowledge does not all sit within one individual. We have a collective memory and level of expertise that allows us to meet the challenges that we face, an institutional memory of past problems and proven solutions. There will inevitably be times where we just don’t know enough about a domain to know the path forward instinctively, but by being honest about our limits and sharing a commitment to overcoming them, we grow as individuals and as a team.

It is tempting to think that we deliver great products because we always know the right thing to do, but I don’t think this is the case. In my view, we are good at what we do not because we always know the answers, but because we ask the right questions of our clients and of each other.

[Photo by Josh Calabrese on Unsplash]

Data as a public asset

I recently had the pleasure of attending a public consultation on the latest iteration of the Supplier Standard. For the uninitiated, the Supplier Standard is a set of goals published by GDS that it hopes both suppliers and public sector customers can sign up to when agreeing on projects.

You can read the current iteration of the standard on GOV.UK (it’s blessedly short), but the 6 headlines are:

  1. User needs first
  2. Data is a public asset
  3. Services built on open standards and reusable components
  4. Simple, clear, fast transactions
  5. Ongoing engagement
  6. Transparent contracting

Ideologically I am massively behind a lot of it. These goals go a long way to breaking the traditional software industry mindset of closed source software backed up with hefty licence fees and annual support and maintenance agreements. Projects meeting these standards will genuinely move public sector IT purchasing to a more open – and hugely more cost effective – model.

The conversations within the event were rightly confidential and I won’t report what anyone said, but I would like to make some public comments on point 2 – Data is a public asset.

The standard says:

Government service data is a public asset. It should be open and easily accessible to the public and third-party organisations. To help us keep improving services, suppliers should support the government’s need for free and open access to anonymised user and service data, including data in software that’s been specially built for government.

That first sentence is fantastic. Government service data is a public asset. What a statement of intent. OS Maps. Public asset. Postcode databases. Public asset. Bus and train timetables. Public asset. Meteorological data. Public asset. Air quality. Health outcomes. House prices. Labour force study. Baby names. Public assets one and all.

But can we talk about the rest, please?

It should be open and easily accessible to the public and third-party organisations.

What do we mean by open and easily accessible? The idea is a great one, with rich APIs and spreadsheet downloads of key data, but if we’re not careful all we’ll end up with is a bunch of poorly planned, hurriedly implemented and unmaintained APIs.

Open data is a living, breathing thing. Summary downloads need to be curated by people who understand the data and how it might be used. APIs need to be well planned, well implemented and well documented, and the documentation has to be updated in line with any changes to the software. Anything less than that fails to meet any sensible definition of open or easily accessible.

And if nothing else a poorly planned or implemented API is likely to be a security risk. Which leads me to my next point:

[…] free and open access to anonymised user and service data […]

Woah there for a second!

We all know how hard genuine anonymisation is. And we all know how often well intentioned service owners and developers leak information they genuinely believed was anonymised, only to have it pulled apart and have personal information exposed.

This goal, like the others, is genuine, laudable and well intentioned. As suppliers to publicly funded bodies we should absolutely be signed up to all of them. But, as GDS standards spread out to the wider public sector, let’s make sure that everyone understands the concept of proportionality. The £20k to £40k budget put aside for a vital application to support foster carers*, for example, is best spent on features that users need, not on APIs and anonymisation.

Proportional. Proportionality. I said them a lot throughout the consultation meeting. I hope they stick.

*I use this as an example only; Isotoma didn’t bid for that particular project. It’s just a great example of a vital application with a small budget generating exactly the kind of data that would fall under this requirement.

[Photo by Dennis Kummer on Unsplash]

FP: a quiet revolution

Functional Programming (FP) is taking over the programming world, which is kind of weird since it has taken over the programming world at least once before. If you aren’t a developer then you may never even have heard of it. This post aims to explain what it is and why you might care about it even if you never program a computer – and how you might go about adopting it in your organisation.

Not too long ago, every graduate computer scientist would have spent some time doing FP, perhaps in a language called LISP. FP was considered a crucial grounding in CompSci and some FP texts gained a cult following. The legendary “wizard book” Structure and Interpretation of Computer Programs was the MIT Comp-101 textbook.

Famously a third of students dropped out in their first semester because they found this book too difficult.

I suspect this was as much down to how MIT taught the course as to the book itself, but nevertheless functional programming (and the confusingly-brackety LISP) started getting a reputation for being too difficult for mere mortals.

Along with the reputation for impossibility, universities started getting a lot of pressure to turn out graduates with “useful skills”. This has always seemed a bit of a waste of universities’ time to me – they are very specifically not supposed to be useful in that sense. I’d much rather graduates got the most out of their limited time at university learning the things that only universities can provide, rather than programming which, bluntly, we can do a lot more effectively than academics.

Anyway, I digress.

The rise of Object Orientation

So it came to pass that universities decided to stop teaching academic languages and start teaching Java. Ten years ago, I’d guess, well over half of all university programming courses taught Java. Java is not a functional language and until recently had no functional features. It was unremittingly, unapologetically Object Oriented (OO). Contrary to Sun’s bombastic marketing when they released Java (and claimed it was a revolution in programming), Java as a language was about as mainstream and boring as it could be. The virtual machine (the JVM) was much more interesting, and I’ll come back to that later.

(OO is not in itself opposed to FP, and vice versa. Many languages – as we’ll see – are able to support both paradigms. However OO, particularly the way it was taught with Java, encourages a way of thinking about data flowing through a system, and this leads to data being copied and duplicated… which leads to all sorts of problems managing state. FP meanwhile tends to think in terms of transformation of data, and relies on the programming language to deal with the menial tasks of deciding when to copy data whilst doing so. When computers were slow this could cause significant bottlenecks, but computers these days are huge and fast and you can get more of them easily, so it doesn’t matter nearly as much – until it suddenly does of course. Anyway, I digress again.)
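To make that contrast concrete, here is a minimal JavaScript sketch; the order data is invented purely for illustration:

    // OO-style code often mutates shared state in place: every holder of
    // a reference sees the change, which makes bugs hard to localise.
    function applyDiscountInPlace(order) {
      order.total = order.total * 0.9;
      return order;
    }

    // The FP style transforms instead: the input is left untouched and a
    // new value is returned, so there is no hidden shared state.
    const applyDiscount = (order) => ({ ...order, total: order.total * 0.9 });

    const original = { id: 1, total: 100 };
    const discounted = applyDiscount(original);
    console.log(original.total);   // 100 – unchanged
    console.log(discounted.total); // 90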

In the workplace, meanwhile, FP had never really taken off. The vast majority of software is written using imperative languages like ‘C’ or Object Oriented languages like… well, pretty much any language you’ve heard of. Perl, Python, Java, C#, C++ – Object Orientation had taken over the world. FP’s steep learning curve, reputation for impossibility, academic flavour and at times performance constraints made it seem something only a lunatic would select.

And so did some proclaim, Fukuyama-like, the “end of history”: Object Orientation was the one true way to build software. That is certainly how it seemed until a few years ago.

Then something interesting started happening, a change that has had far-reaching effects on many programming languages: existing OO languages started gaining FP features. Python was an early adopter here, but plenty of other OO languages soon picked up at least a smattering of FP features.

This has provided an easy way for existing programmers to be exposed to how FP thinks about problem solving – and the way one approaches a large problem in FP can be dramatically different to traditional OO approaches.

Object Oriented software has been so dominant that its benefits and drawbacks are rarely discussed – in fact the idea that it might have drawbacks would have been thought madness by many until recently.

OO does have real benefits. It provides a process-driven approach for analysis, where your problem domain is analysed first for the data that exists in the business or whatever, and then behaviours are hooked onto these data. A large system is decomposed by responsibilities towards data.

There are some other things where OO helps too, although they maybe don’t sound so great. Mediocre can be good enough – and when you’ve got hundreds of programmers on a mammoth government project you need to be able to accommodate the mediocre. The reliance on process and good-enough code means your developers become more replaceable. Need one thousand identical carbon units? Let’s go!

Of course you don’t get that for free. The resulting code often has problems, and sometimes severe ones. Non-localised errors are a major problem, with cause and effect separated by billions of lines of code and sometimes weeks of execution. State becomes a constant problem, with huge amounts of state being passed around inside transactions. Concurrency issues are common as well, with unnecessary locking or race conditions being rife.

The outcome is also often very difficult to debug, with a single thread of execution sometimes involving hundreds of cooperating objects, each of which contributes only one or two lines of code.

The impact of this is difficult to quantify, but I don’t think it is unfair to put some of the epic failures of large-scale IT down to the choices of these tools and languages.

Javascript

Strangely one of the places where FP is now being widely practised is in front-end applications, specifically Single-Page Applications (SPAs) written in frameworks like React.

The most recent Javascript standards (officially called, confusingly, ECMAScript) have added oodles of functional syntax and behaviour, to the extent that it is possible to write Javascript almost entirely functionally. Furthermore, code written to these new standards can be transpiled into previous versions of Javascript, meaning it will run pretty much anywhere.
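By way of a small sketch (the shopping-basket data is invented), here is what that loop-free, mutation-free style looks like:

    // Arrow functions, const and parameter destructuring arrived with
    // ES2015; combined with the long-standing map/filter/reduce array
    // methods they make a loop-free, mutation-free style feel natural.
    const basket = [
      { name: 'apples', price: 1.2, qty: 3 },
      { name: 'bread',  price: 0.9, qty: 1 },
      { name: 'cheese', price: 3.5, qty: 2 },
    ];

    const total = basket
      .filter(({ qty }) => qty > 0)            // declarative filtering
      .map(({ price, qty }) => price * qty)    // transform, don't mutate
      .reduce((sum, line) => sum + line, 0);   // fold down to one value

    console.log(total.toFixed(2)); // "11.50"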

Since pretty much every device in the world has a Javascript virtual machine installed, this means we now have the world’s largest ever installed base of functional computers – and more and more developers are using it.

The FP frameworks that are emerging in Javascript to support functional development are bringing some of the more recent research and design from universities directly into practice in a way that hasn’t really happened previously.

The JVM

The other major movement has been the development of functional languages that run on the Java Virtual Machine (the JVM). Because these languages can call Java functions, they come with a ready-built standard library that is well known and well documented. There’s a bunch of these, with Clojure and Scala being particularly prominent.

These have allowed enterprise teams with a large existing commitment to Java to start developing in FP without throwing away their existing code. I suspect it has also allowed them to retain some senior staff who would otherwise have left through boredom.

Ironically Java itself has added loads of functional features over the last few years, in particular lambda functions and closures.

How to adopt FP

We’ve adopted FP for some projects with some real success and there is a lot of enthusiasm for it here (and admittedly the odd bit of resistance too). We’ve learned a few things about how to go about adopting it.

First, you need to do more design work. Particularly with developers who are new to the approach, spending more time in design is of great benefit – but I would argue this is generally the case in our industry. An abiding problem is the resistance to design and the urge to just write some code. Even in the most agile processes design is critical and should not be sidelined. Accommodating this design work in your process is crucial. This doesn’t mean big fat documents, but it does mean providing the space to think and for teams to discuss design before implementation, perhaps with spikes for prototypes.

Second, get up to speed with supporting libraries that work in a functional manner, and avoid those that are brutally OO. Just using ramda encourages developers to work in a more functional manner and develop composable interfaces.
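As a small, invented sketch of the composable style this encourages:

    const R = require('ramda');

    // Ramda functions are curried, so small pieces compose into a
    // pipeline with no intermediate variables and no mutation.
    const activeUserNames = R.pipe(
      R.filter((user) => user.active),  // keep only active users
      R.map(R.prop('name')),            // project each record to its name
      R.sortBy(R.toLower)               // sort case-insensitively
    );

    const users = [
      { name: 'Carol', active: true },
      { name: 'bob',   active: false },
      { name: 'alice', active: true },
    ];

    console.log(activeUserNames(users)); // [ 'alice', 'Carol' ]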

Third, there is still a problem with impenetrable jargon, and it can be a turn off. Avoid talking about monads, even if you think you need one 😉

Finally, you really do not need to be smarter to work with FP. There is a learning curve and it is really quite steep in places, but once you’ve climbed it the kinds of solutions you develop feel just as natural as the OO ones did previously.

Being a tutor at the Open University

At Isotoma, our recruitment policy is forward-thinking and slightly unconventional. We prioritise how you think rather than where, or even if, you studied formally. This is not to say that we don’t have our fair share of academics, with a former Human-Computer Interaction researcher on the team, among many others with similarly impressive backgrounds!

This nonconformist approach spans the whole of Isotoma, and some of our clients may have noticed that, as a rule, I “don’t do Mondays”. So where am I? What am I doing? Being one of those academic types… I work as an Associate Lecturer at the Open University.

What is the Open University?

Like perhaps many people of a certain age I mainly associated the Open University with old educational TV programmes. So I was surprised to discover that the OU is the largest university in the UK with around 175,000 students, which is 40% of all the part-time students in the country!

The O in OU manifests itself as flexibility. It provides materials and resources for home study, allowing three quarters of students to combine study with full- or part-time work. Admissions are also open, with a third of students having one A-Level or lower on entry.

Studying in that context can be exceptionally challenging. So for each module, students are assigned a tutor to guide, support and motivate them: to illustrate, explain and bring the material to life. This is where I come in.

Tutoring

Oddly enough, given my developer position at Isotoma, I teach second and third year computing modules! I initially tutored web technologies, then diversified into object-oriented software development; algorithms, data structures and computability; and data management, analysis and visualisation.

The role of a tutor has three major components. To me, the most important is acting as the first point of contact for my tutor group, providing support and guidance throughout the module. For OU students, study is only one of many things going on in their lives – in fact, a student once apologised to me for an incomplete assignment, because they had to drive their wife to hospital to give birth! As a tutor, it is crucial to understand this, as such a unique learning environment requires adapting your teaching approach to students’ varied lives.

Marking and giving feedback is a core part of the role, with busier weeks producing plenty of varied and interesting assignments. For every piece of coursework, I write a ‘feedforward’ for each individual highlighting the strengths shown, but also outlining suggestions and targets for improvement. Personal feedback on assignments is an excellent learning opportunity for students and can really improve their final result. I also encourage students to get in touch to discuss my comments, as not only can this lead to some enlightening debates, but it also helps them take control of their own learning.

The final component is tutorials. I conduct most of mine through web-conferencing, working in a team to facilitate a programme of around 40 hours per module. These web-tutorials are extremely useful as the group can interact, chat and ask questions from wherever they are, and we can explore complex concepts visually on a whiteboard or through desktop sharing.

Tutoring: impact on development?

There is a great synergy between the two roles: as developers we try to keep on top of our game, and getting a regular range of student questions – which may be about Python, JavaScript, PHP, SQL, Java or who knows what – certainly keeps you on your toes! This can be good preparation for some of the more …interesting… projects that Isotoma takes on from time to time.

Having a group of students all trying to find the best way to implement some practical activities is also like having a group of researchers working for you. So once when a student used the :target pseudo selector to implement a CSS lightbox without JavaScript I quite excitedly shared this technique in our development chat channel! Though (of course) our UX team were already well aware of it… but it was news to me!

To explain concepts you really need to understand them, and sometimes you realise over time what you thought you knew has become a bit shallow. Preparing a tutorial on recursion and search algorithms was a great warmup for solving how HAPPI implements drag and drop of multiple servers between racks – where not everything you drop may be able to move to the new slot, or the slot may be occupied by another server you are dragging away.

There isn’t an exact correlation between what I tutor and what I develop. Some topics push you beyond your comfort zone: the implications of the Church-Turing thesis or the pros and cons of data mining are not things that crop up much in daily work, but things I’ve learnt tutoring data visualisation have proved to be pretty handy.

And of course some projects, such as the Sofia curriculum mapper for Imperial College of Medicine, are educational so domain knowledge of university processes is of direct relevance in understanding client requirements.

Development: impact on tutoring?

One of the reasons the OU employs part-time tutors is for the experience they bring from their work. In that respect, I can provide examples and context from what we do at Isotoma. This serves to bridge the gap between (what can sometimes be) quite dry theory and the decisions/compromises that are part and parcel of solution development in the real world.

So if a student questions the need to understand computational complexity for writing web applications, we can discuss what happens when part of your app is O(n²) when it could be O(n) or O(log n). Or the difference between a site that works well when one developer uses it and one that works well with thousands of users – but also discuss the evils of premature optimisation!
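To give a flavour of the kind of discussion (a hypothetical sketch rather than a real student exercise), compare two ways of spotting duplicate email addresses:

    // O(n²): compare every pair. Fine for ten users, painful for
    // a hundred thousand.
    function hasDuplicatesQuadratic(emails) {
      for (let i = 0; i < emails.length; i++) {
        for (let j = i + 1; j < emails.length; j++) {
          if (emails[i] === emails[j]) return true;
        }
      }
      return false;
    }

    // O(n): a single pass, using a Set to remember what we've seen.
    function hasDuplicatesLinear(emails) {
      const seen = new Set();
      for (const email of emails) {
        if (seen.has(email)) return true;
        seen.add(email);
      }
      return false;
    }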

Being part of a wider team at Isotoma also allows me to talk about approaches to issues like project management, security, testing and quality assurance. Recently I’ve also started feeding some of Francois’ tweets into discussions on usability and accessibility, which is fun.

Web development is a fast-moving field so while principles evolve gradually, tools, frameworks, languages and practices come and go. Working in that field allows me to share how my work has changed over time and what remains true.

If you work in the IT industry and are looking for a different challenge then I would highly recommend becoming an OU computing tutor. Tutoring one group doesn’t need a day a week every week, and it’s great to know that you’re sharing that expertise with those for whom full-time study isn’t an option, and developing a new generation of colleagues.

Timesheets: some observations on observation

Just as a throwaway in my post on understanding your team’s progress I said something like “everyone hates timesheets”. And it’s true, they do. They’re onerous, they’re boring, and they’re usually seen as invasive, “big brother”-esque make-work. But, as I also said in that post, good quality time recording is vital to understanding what’s going on within your teams.

Feeling the need

We first started looking at timesheet systems nine or ten years ago when it was becoming abundantly clear that we weren’t making the progress we were expecting on certain projects, but we didn’t know why.

The teams were skilled in the tools they were using, they were diligent, they’d done similar work before, but they just weren’t hitting the velocities that we had come to expect. On top of that, the teams themselves thought they were making good progress. And every which way we approached the problem we were missing the information needed to get to the bottom of the mismatch between expectation and reality.

At that point in the company’s life timesheets were anathema to us; we felt very strongly they indicated a lack of trust, and in a company built entirely on the principles behind the Agile Manifesto… Well… You can see our problem.

Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.

But however we cut it we really needed to understand what people were actually doing with their day. We trusted that if people thought they were making good progress then they were, but we definitely knew that we weren’t making the same kind of progress that we had been a year ago on the same types of project. And back then we were often on fixed price projects and billing by the day, so when projects started to overrun our financial performance started to dip and the quality of our code went the same way (for all the reasons I outlined in that previous post).

So we hit on Harvest (at the time one of the poster children of the burgeoning Rails SaaS community) and asked everyone to fill in their sheets for a couple of months so we could generate some data.

We had an all hands meeting, we explained exactly why we were doing it, and we asked, cajoled and bullied people into using it so that at least we had something to work on and perhaps uncover the problems we were hitting.

And of course we found it quickly enough, because accurate timesheets filled in honestly expose exactly what’s going on. By our nature we are both helpful and curious – that’s how we ended up doing what we’re doing. But helpful and curious is easily distracted; a colleague asking for help, an old customer with a quick question, a project manager from another project with an urgent request, the account management team asking “can you just…” And all of this added up. In the worst cases some people were only spending four hours a day on the project they were allocated to; the rest of their time spent helping colleagues and old customers… However, how you cope with these things is probably the subject of another post.

My point here is that once we had that data we realised how valuable it was and knew that we couldn’t go without it again. Our key takeaway was that timesheets are a key part of a company’s introspection and without good data you don’t know the problem you’re actually trying to solve. And so we had to make timesheets part of our everyday processes.

Loving the alien

Like I said: people hate timesheets. They’re invasive. They’re time consuming. They feel like you’re being watched, judged. They imply no trust. They’re alien to an agile environment. And the data they produce is a key part of someone else’s reporting, too. So how do you make sure they’re filled in accurately and honestly? And not just in month one, when you first introduce them, but in month fifty-seven, when your business relies on them and you may not be watching quite so closely.

We’ve found the following works for us:

  • Make it crystal clear what they’re for, and what they’re not
  • Make it explicit that timesheets are for tracking the performance of estimates and ensuring that progress can be reported accurately
  • It’s not about how much you do, but how much got done
  • Tie them together with things like iDoneThis, so that people can give context to their timesheets in an informal unstructured manner
  • Make sure that everyone who uses the data throughout the management chain is incentivised to treat it honestly – this means your project managers mustn’t feel the need to manipulate it or, worse, manipulate how it’s entered (we’ve seen this more than once in other organisations)

And Dan, one of our project managers, sends round a gentle chivvying email each evening (filled with the day’s fun facts, of course) to make sure that people actually fill them in.

[Photo by Sabri Tuzcu on Unsplash]

External agencies vs. in-house teams

As you’ll already know, because you’re windswept and interesting, we record a semi-regular podcast where we look into an aspect of life in a technical agency that we think will interest the outside world. We’ve just finished recording the latest episode, about internal versus external teams, and honestly I think it’s one of the most interesting chats we’ve had.

Joining us on the podcast are Andy Rogers from Rokker and Dan Graetzer from Carousel Group. Both Andy and Dan have tons of experience both commissioning work from internal teams and navigating the selection of external agencies. They were able to speak with clarity about the challenges that each task can bring.

One of the interesting things for me was getting a glimpse ‘over the fence’ into some of the thought processes and pressures that lead people to keep work internal – something that I’ve really only been able to guess at in the past.

Here’s a quick summary of things we speak about.

Agencies developing symbiotic/parasitic relationships with larger clients

This tendency of larger agencies to act almost as though they are internal teams is becoming more and more common. There are upsides and downsides to this, obviously, in that while bodies like Deloitte et al can mobilise 200-strong dev teams, they also make it more and more likely that their customers will have to keep going back to them in future. (We discuss this subject mostly in terms of how Isotoma are not a larger agency!)

Good agencies are expensive but not as expensive as bad recruitment

Hiring an agency for a given software project is likely to cost around the same as the annual salary of a developer and/or a development team. Given this, it can seem galling for potential customers that they’re spending the right amount of money in the wrong place. We discuss how a good agency can help mitigate the opportunity cost and assume all the tricky recruitment risk in the relationship. (Aren’t we nice?)

Continuous delivery shouldn’t necessarily mean continuous agency billing

One of the goals of any software project should be to build and develop the skills to support it in-house. If you’ve had a key piece of software in production for 18 months and you’re still relying on a third party to administer, fix or deploy it then you might have a problem.

Asking an agency to do something is the easy bit

Commissioning work with third party agencies is one step in a multi-step journey. This journey needs to include understanding how you’re defining your requirements, how you plan to receive the work when it’s done, and how you’re going to give the project good governance while it’s in flight.

Also there is a good deal of talk about werewolves

We’re not mega sure why.

Hopefully you’ll find it as interesting as we did. You can listen to the podcast and subscribe!

A blog post about estimating

First of all, a provocative but sweeping statement about the subject to kick us off: If your agency won’t talk to you about how they estimate projects then they’re either liars or fools.

You’ll have heard of Zeno’s Paradox. The one where a journey can theoretically never be completed because in order to travel the full distance you must first go halfway. And then once you’re halfway, you must then go half the remaining distance and so on.

The paradox is that in order to do something as simple as walking across a room, one must complete an infinitely regressing set of tasks. And yet, without wishing to boast, I’ve crossed the room twice already today and I managed it just fine.
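(For the mathematically inclined: the paradox dissolves because the infinitely many legs form a geometric series that sums to a finite distance,

\[ \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} = 1 \]

so an infinite number of ever-shorter steps still adds up to exactly one room’s width.)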

Software estimation is a bit like that. If you analyse it closely you’ll see the tasks you have to complete multiply infinitely until even the simplest thing looks impossible and the budget is smashed to smithereens. And yet, as a company, we’ve got a track record of delivering on time and to budget that goes back years.

The various methods that we use are described in the episode of our podcast that this post supports (why not go and check it out?) and we won’t go into detail here; suffice it to say that the process is always time-consuming and rarely problem-free.

So it’s hard. And prone to error. And time consuming to even do badly. So why do it?

The obvious answer – so you know how much to charge – is not actually all that applicable. More and more of the work we do on agile projects is charged on a time and materials basis. Additionally, there are a hundred good reasons why an agency might want to charge a price that wasn’t just literally [amount of time estimated] multiplied by [hourly rate].

No, the real reason that we put so much effort into estimation is that estimation is a great disinfectant. Everyone who works in this industry has a story about a project that went from perfectly fine to completely awful in a matter of seconds. Estimation helps us expose and resolve the factors that cause this horror: hidden complexity, differences of assumption, Just Plain Goofs etc.

It’s important to note though that even a carefully produced estimate can still be wrong and so the other key tools an agency needs are mature processes and procedures. You need to be able to effectively communicate how the estimate failed, assess what the impact of the failure will be to the broader project and, vitally, put all this information in a place where it can’t be forgotten or ignored.

This last step is effectively giving the organisation an institutional memory that lasts longer than 10 working days and it’s the vital step that ensures that by the end of the project the stakeholders can remember that there was a problem, see that it was resolved and how it affected timelines overall. Mistakes are always going to be made but the key thing is to ensure you’re always making exciting new ones rather than repeating the old ones.

All of the above is discussed to some extent in our Estimating podcast. Myself, Andy Theyers and Richard Newton spend around half an hour discussing the subject and, honestly, it’s quite interesting. I urge you to check it out.

One Pound in Three

Can we talk about this:

Big opportunities for small firms: government set to spend £1 in every £3 with small businesses

When its predecessor (of £1 in 4) was announced in 2010 many of us were sceptical, so it was fantastic news in 2014 when the National Audit Office announced that this target had not only been met, but exceeded. I don’t think anyone doubts that the new £1 in 3 target will be achieved by 2020; a real measure of confidence in the commitment to these plans.

It’s fair to say that it’s genuinely been a great move forward. It’s taken some time – as you might expect – both for this to trickle all the way down to the smaller end of the SME sector and for departments and other bodies to get their procurement processes aligned, but in the last couple of years we’ve seen many positive and concrete changes to the way the public sector procures services.

We’ve been involved in quite a few of these SME tendering processes in the last year or so and have seen a full range of tenders from the very good through to the very bad. What’s clear is that things are continuing to improve as buyers and their procurement departments learn to navigate the new types of relationships that the public sector has with these smaller suppliers.

So a lot’s changed, but what could still improve?

Procurement workshops and briefing days

Soon after the 2010 announcement and in the midst of a fashion for “hackathons” and the open web these were all the rage; you could hardly go a week without one body or another running an event of this type.

You know the ones. Every Government department and even most local councils run them; non-government public bodies like the BBC, Channel 4 and JISC love them too. The intention is absolutely sound – you want to get us excited about working with you, outline the projects that we might be working on, help shape our proposals, and ultimately make sure we understand that you’re worth the effort of us pitching to.

There’s no doubt that these are great events to attend. But. They’re often marketed as “great opportunities” and there’s frequently a sense that we must attend to ensure we don’t miss out. But time out of the office costs money, as does getting halfway across the country because the “North” briefing is in London (I kid you not, that’s happened to me more than once). On top of that, the audience and content of the talks at these events can be scarily similar regardless of location or presenting organisation. There’s nothing more disheartening than arriving at another one of these events to a feeling that only the venue and speakers have changed.

It’s obviously vitally important that you get these messages across, but please try and make sure that the events themselves don’t feel compulsory. SMEs are time poor (particularly the good ones); if it’s clear that I’m not going to miss out if I don’t attend and that all the information I need will be online then I may well choose not to come. It doesn’t mean I’m not engaged, just that new channels like this are things I often need to explore outside the usual working day.

There’s often a sense that if you make it really explicit at the workshop what you’re after, you’ll cut down on the number of inappropriate responses to your tenders. Sadly the opposite is often true – once someone has spent a lot of time and money attending one of the briefing days they will pitch for absolutely everything, because they now feel invested, and they’ve met you. Sunk cost thinking affects us all.

Luckily the number of these apparently mandatory briefing days is reducing, with some organisations doing away with them entirely, replacing them with live web conferences, pre-recorded video presentations and detailed (and high quality) documentation. I’d love to see them done away with entirely, though.

Keeping contracts aligned

It’s a fair assumption that during the briefing days every single speaker will have made at least one reference to Agile. And it’s likely that Agile was the main topic of at least one talk. Because Agile is good. You get that. We get that. Agile makes absolute sense for many of the kinds of projects that the public sector is currently undertaking. Digital Transformation is certainly not easy, it’s definitely not cheap and it’s absolutely not going to be helped by a waterfall, BDUF approach.

But if you’re honestly committed to Agile please please ensure that your contracts reflect that. We’ve recently had to pull out of two tenders where we’d got down to the last round because the contract simply couldn’t accommodate a genuine Agile delivery. We know Agile contracts are hard, but if you’ve spent the entire procurement process actively encouraging people to pitch you an Agile approach you need to present an Agile contract at the end of it. Companies as old and grizzled as Isotoma may feel forced – and be willing – to back away, but for many agencies it’s a trap they unwittingly fall into which ultimately does nothing for either party.

It’s also worth remembering that it’s unlikely any SME you deal with has internal legal advice, so contract reviews are an expensive luxury. If you present a mandatory contract at the start of the tender process most of us will glance over it before ploughing ahead. We certainly aren’t going to pay for a full scale review because we know it’ll cost a fortune and the lawyer is only going to tell us it’s too risky and we shouldn’t pitch anyway. One contract we were presented with by a government department was described by our lawyer as a “witch’s curse”. We still pitched. Didn’t win it. Probably for the best.

Timelines

They say it’s the hope that kills you.

Small businesses are, by definition, small. The kind of procurements I’m talking about here are for services, not products, which means that people – our people, our limited number of people – are going to be required for the delivery. If the timeline on the procurement says “award contract on 17th February 2017, go live by end June 2017” we’re going to start trying to plan for what winning might look like. This might well involve subtly changing the shape of other projects that we’ve got in flight. If we’re really confident it might even mean turning away other work.

When we get to the 17th February and there’s no news from you, what are we supposed to do? Do we hold back the people we’d pencilled in for this work and live with the fact that they’re unbilled? And then when 24th February comes and there’s another round of clarification questions, but you commit to giving us an answer by the following week, what do we do then? And so on. And so on.

The larger the business you’re dealing with the easier they find absorbing these kind of changes to timelines, but that’s one of the reasons they’re more expensive. SMEs are small, they’re nimble, but they also rely on keeping their utilisation high and their pipeline flowing. Unrealistic procurement timelines combined with fixed delivery dates can make pitching for large tenders very uncomfortable indeed.

To summarise

As I said at the start, things have taken huge leaps forward over the past couple of years. The commitment to pay 80% of all undisputed invoices within 5 days is a great example of how the public sector is starting to really understand the needs of SMEs, as are the removal of the PQQ process for smaller contracts, the commitment to dividing contracts into lots, and the explicit support for consortia and subcontracting.

In 2016 we’ve been to sadly uninformative developer days for an organisation that has offered wonderfully equitable Agile contracts and extremely clear and accurate timelines. We’ve pitched for work that was beautifully explained online with no developer day, but that presented a bear trap of a contract, and we’ve pitched for work that was perfect except for the wildly optimistic timelines and that finally awarded the contract 3 months after the date in the tender.

Things are definitely getting better, but a few more little tweaks could make them perfect.

Here’s to £1 in 3, and the continuing good work that everyone is doing across the sector.

The economics of innovation

One of the services we provide is innovation support. We help companies of all sizes when they need help with the concrete parts of developing new digital products or services for their business, or making significant changes to their existing products.

A few weeks ago the Royal Swedish Academy of Sciences awarded the Nobel Prize for Economics to Oliver Hart and Bengt Holmström for their work in contract theory. This prompted me to look at some of Holmström’s previous work (for my sins, I find economics fascinating), and I came across his 1989 paper Agency Costs and Innovation. This is so relevant to some of my recent experiences that I wanted to share it.

Imagine you have a firm or a business unit and you have decided that you need to innovate.

This is a pretty common situation – you know strategically that your existing product is starting to lose traction. Maybe you can see commoditisation approaching in your sector. Or perhaps, as is often the case, you can see the Internet juggernaut bearing down on your traditional business and you know you need to change things up to survive.

What do you do about it?  If you’ve been in this situation the following will probably resonate:

[Image: agency2]

This is the principal-agent problem, a classic in economics: how a principal (who wants something) can incentivise an agent to do what they want. The agent and the “contracting” being discussed here could be any kind of contracting, including full-time staff.

A good example of the principal-agent problem is how you pay a surgeon. You want to reward their work, but you can’t observe everything they do. The outcome of surgery depends on team effort, not just an individual. Surgeons have other things they need to do besides surgery – developing standards, mentoring junior staff and so forth. Finally, the activity itself is inherently high risk, so surgeons will make mistakes no matter how competent they are. Tie their pay to outcomes and their salary is constantly at risk, which means you need to pay huge bonuses to encourage them to undertake the work at all.
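For the formally minded, the textbook simplification of this – a sketch of the standard linear-contract model, not the paper’s own formulation – pays the agent

\[ w = s + b\,x, \qquad x = e + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^{2}) \]

that is, a salary s plus a bonus rate b on a noisy observed outcome x of their effort e. A risk-averse agent (with risk aversion r) demands roughly \( \tfrac{1}{2} r b^{2} \sigma^{2} \) in compensation for bearing that risk, so the noisier the performance measure (the larger \( \sigma^{2} \)), the weaker the incentives you can afford to offer. Surgery – and innovation – produce about as noisy a signal as you can get.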

In fact, firms commonly try to innovate using their existing teams, who are already delivering the existing product. These teams understand their market. They know the capabilities and constraints of the existing systems. They have domain expertise and would seem to be the ideal place to go.

However, these teams have a whole range of tasks available to them (just as with our surgeon above), and choices in how they allocate their time. This is the “multitasking effect”, and it is particularly problematic for innovative tasks.

My personal experience of this is that, when people have choices between R&D type work and “normal work”, they will choose to do the normal work (all the while complaining that their work isn’t interesting enough, of course):

[Image: variance]

This leads large firms to have separate R&D divisions – this allows R&D investment decisions to take place between options that have some homogeneity of risk, which means incentives are more balanced.

However, large firms have a problem with bureaucratisation. This is a particular problem when you wish to innovate:

[Image: monitoring]

Together this leads to a problem we’ve come across a number of times, where large firms have strong market incentives to spend on innovation – but find their own internal incentive systems make this extremely challenging.

If you are experiencing these sorts of problems please do give us a call and see how we can help.

I am indebted to Kevin Bryan’s excellent A Fine Theorem blog for introducing me to Holmström’s work.


4 Times That The Misery Of Creative Agencies Made Me Happy

Clickbait titles are fun, but bear with me, good people. I’m trying to make a point.

This report was wafted under my nose the other day. It makes for depressing, but not terribly surprising, reading. The first paragraph pretty much nails it.

Anyone who’s spoken to me in a professional capacity for the last 3 months will probably recognise that Smith & Beta’s report is quantitative confirmation of what I’ve been going on about for ages. Each one of the points below makes me sad – but also, because I am a shallow, vapid person, I still get to feel happy that I’m right.

1) Good quality creative requires good quality technical implementation

Agencies lead with creative vision and lean on technical skills (internal & external) to deliver this vision. No one ever won a pitch by saying that the creative will be a strong C+ but it’s going to be implemented really well. Sadly, the opposite is almost always true. The industry is generally OK with taking an amazing creative idea and delivering it late, over-budget and on top of a pile of bodies of fallen colleagues.

2) This technical resource – where it exists within an agency – is often siloed and over-committed

Because of the way the creative industry works, creative resource is always going to be an expense the agency is happy to invest in. Investing in technical resource, however, is a more expensive, slower, trickier business.

Similarly, investing in older, more skilled resource is always going to be a harder sell when there are countless thousands of young and exploitable juniors clamouring for your attention.

An agency trying to walk the line between capability and capacity in order to really call themselves “Full Service” will end up with a safe but middle-of-the-road offer. Conversely, an agency that shoots for the moon and invests in a highly specialised and/or highly senior team may find that they’ve painted themselves into a very expensive corner.

3) It’s hard to hire your way out of this problem

I mean, duh, obviously. It’s hard to hire your way out of any problem. Recruitment, training and increasing retention are sloooooow processes. And the problems that this report outlines are problems of the now.

(Side-note: In my role here at Isotoma, I often end up talking to agencies about projects that we can collaborate on. I’m usually talking about projects that might be coming up in, say, 6 months, but people actually want help RIGHT NOW.)

4) These problems, when considered together, reduce the satisfaction of the customer and shorten the lifetime of the account

As abusive as the client/agency model can be, there’s a satisfyingly stark bottom line to it: “Do good work; get more work.” Note that this is distinct from “Pitch good creative; get more work.”

As I said above, no one ever won a pitch for outlining a competent implementation plan, but once the project is over and the smoke settles, the customer doesn’t just remember the pitch.

(If you’re really unlucky, the people who were in the pitch don’t even work for the customer anymore…)

The knife edge that a marcomms agency has to walk is being able to deliver creative vision *and* technical competence in a way that doesn’t fundamentally alter what the company is. Go too far in one direction and you’re unable to deliver anything profitably, go too far in the other and you’ve magically become a company that you don’t want to be.

So this is one of the reasons that Isotoma do what we do. We’re already a technical agency. We’re already geared up to help you estimate, deliver and, crucially, support a creative campaign. We’re good partners. And the better we get at ploughing this particular furrow, the better we’re able to help and complement agencies who’ve chosen to plough another.

And that makes me happy.

(See? I was being cynically provocative to attract clicks. And the pug at the top? The cherry on the cake, my friend. Truly I am a monster.)