Timesheets: some observations on observation

As a throwaway line in my post on understanding your team’s progress I said something like “everyone hates timesheets”. And it’s true, they do. They’re onerous, boring, and usually seen as invasive, “big brother”-esque make-work. But, as I also said in that post, good quality time recording is vital to understanding what’s going on within your teams.

Feeling the need

We first started looking at timesheet systems nine or ten years ago when it was becoming abundantly clear that we weren’t making the progress we were expecting on certain projects, but we didn’t know why.

The teams were skilled in the tools they were using, they were diligent, they’d done similar work before, but they just weren’t hitting the velocities that we had come to expect. On top of that, the teams themselves thought they were making good progress. And every which way we approached the problem we were missing the information needed to get to the bottom of the mismatch between expectation and reality.

At that point in the company’s life timesheets were anathema to us; we felt very strongly they indicated a lack of trust, and in a company built entirely on the principles behind the Agile Manifesto… Well… You can see our problem.

Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.

But however we cut it, we really needed to understand what people were actually doing with their day. We trusted that if people thought they were making good progress then they were, but we definitely knew that we weren’t making the same kind of progress as we had been a year earlier on the same types of project. And back then we were often on fixed-price projects billing by the day, so when projects started to overrun, our financial performance started to dip and the quality of our code went the same way (for all the reasons I outlined in that previous post).

So we hit on Harvest (at the time one of the poster children of the burgeoning Rails SaaS community) and asked everyone to fill in their sheets for a couple of months so we could generate some data.

We had an all hands meeting, we explained exactly why we were doing it, and we asked, cajoled and bullied people into using it so that at least we had something to work on and perhaps uncover the problems we were hitting.

And of course we found it quickly enough, because accurate timesheets filled in honestly expose exactly what’s going on. By our nature we are both helpful and curious – that’s how we ended up doing what we’re doing. But helpful and curious is easily distracted: a colleague asking for help, an old customer with a quick question, a project manager from another project with an urgent request, the account management team asking “can you just…” And all of this added up. In the worst cases some people were only spending four hours a day on the project they were allocated to, with the rest of their time spent helping colleagues and old customers… However, how you cope with these things is probably the subject of another post.

My point here is that once we had that data we realised how valuable it was and knew that we couldn’t go without it again. Our takeaway was that timesheets are a vital part of a company’s introspection, and that without good data you don’t know which problem you’re actually trying to solve. And so we had to make timesheets part of our everyday processes.

Loving the alien

Like I said: people hate timesheets. They’re invasive. They’re time consuming. They feel like you’re being watched, judged. They imply no trust. They’re alien to an agile environment. And the data they produce is a key part of someone else’s reporting, too. So how do you make sure they’re filled in accurately and honestly? And not just in month one, when you first introduce them, but in month fifty-seven, when your business relies on them and you may not be watching quite so closely?

We’ve found the following works for us:

  • Make it crystal clear what they’re for, and what they’re not
  • Make it explicit that timesheets are for tracking the performance of estimates and ensuring that progress can be reported accurately
  • It’s not about how much you do, but how much got done
  • Tie them together with things like iDoneThis, so that people can give context to their timesheets in an informal unstructured manner
  • Make sure that everyone who uses the data throughout the management chain is incentivised to treat it honestly – this means your project managers mustn’t feel the need to manipulate it or, worse, manipulate how it’s entered (we’ve seen this more than once in other organisations)

And Dan, one of our project managers, sends round a gentle chivvying email each evening (filled with the day’s fun facts, of course) to make sure that people actually fill them in.

External Agencies vs. In-house Teams

As you’ll already know, because you’re windswept and interesting, we record a semi-regular podcast where we look into an aspect of life in a technical agency that we think will interest the outside world. We’ve just finished recording the latest episode, about internal versus external teams, and honestly I think it’s one of the most interesting chats we’ve had.

Joining us on the podcast are Andy Rogers from Rokker and Dan Graetzer from Carousel Group. Andy and Dan both have tons of experience commissioning work from internal teams and navigating the selection of external agencies, and they were able to speak with clarity about the challenges that each can bring.

One of the interesting things for me was getting a glimpse ‘over the fence’ into some of the thought processes and pressures that lead people to keep work internal – something that I’ve really only been able to guess at in the past.

Here’s a quick summary of things we speak about.

Agencies developing symbiotic/parasitic relationships with larger clients

This tendency of larger agencies to act almost as though they are internal teams is becoming more and more common. There are upsides and downsides to this, obviously: while bodies like Deloitte et al can mobilise 200-strong dev teams, they also make it more and more likely that their customers will have to keep going back to them in future. (We discuss this subject mostly in terms of how Isotoma are not a larger agency!)

Good agencies are expensive but not as expensive as bad recruitment

Hiring an agency for a given software project is likely to cost around the same as the annual salary of a developer or a development team. Given this, it can seem galling for potential customers to be spending the right amount of money in the wrong place. We discuss how a good agency can mitigate that opportunity cost and assume all the tricky recruitment risk in the relationship. (Aren’t we nice?)

Continuous delivery shouldn’t necessarily mean continuous agency billing

One of the goals of any software project should be to build and develop the skills to support it in-house. If you’ve had a key piece of software in production for 18 months and you’re still relying on a third party to administer, fix or deploy it then you might have a problem.

Asking an agency to do something is the easy bit

Commissioning work from third-party agencies is one step in a multi-step journey. That journey needs to include understanding how you’re defining your requirements, how you plan to receive the work when it’s done, and how you’re going to give the project good governance while it’s in flight.

Also there is a good deal of talk about werewolves

We’re not mega sure why.

Hopefully you’ll find it as interesting as we did. You can listen to the podcast and subscribe!

DotYork Event Announcement

Isotoma working in partnership with DotYork

We’re excited to announce that DotYork has joined forces with Isotoma, the York-based software development agency. Isotoma have been long-term supporters of the event, attending and sponsoring every conference so far and even speaking at DotYork 2016, so when they offered their help it felt like a natural fit.

We’ve always thought DotYork was a great way to highlight our beautiful city and York’s rapidly growing digital community, and during conversations with Rick we realised that we had loads of ideas for future events and could offer DotYork the support it needed. We’re really excited about DotYork 2017 and are looking forward to future events in 2018 and beyond.
Andy Theyers, director and founder, Isotoma

Working with Isotoma on DotYork 2017 has already sparked some new ideas, and with a bigger team we’re able to look at expanding the event, including workshops and an evening event this year, with who knows what to come in 2018.
Rick Chadwick, DotYork

Reposted from the original DotYork blog

The future of TV broadcasting

Earlier this year we started working on an exciting project with BBC Research & Development on their long-term programme developing IP Studio, the next-generation IP-network-based broadcast television platform. The BBC are working with manufacturers to develop new industry-wide standards, which will hopefully be adopted worldwide.

This new technology dramatically changes the equipment needed for live TV broadcasting. Instead of large vans stocked with pricey equipment and a team of people, shows can be recorded, edited and streamed live by a single person with a 4K camera, a laptop and internet access.

The freedom of being able to stream and edit live TV within the browser will open up endless possibilities for the entertainment and media sector.

You can see more Isotoma videos on Vimeo.

A blog post about estimating.

First of all, a provocative but sweeping statement about the subject to kick us off: If your agency won’t talk to you about how they estimate projects then they’re either liars or fools.

You’ll have heard of Zeno’s Paradox. The one where a journey can theoretically never be completed because in order to travel the full distance you must first go halfway. And then once you’re halfway, you must then go half the remaining distance and so on.

The paradox is that in order to do something as simple as walking across a room, one must complete an infinitely regressing set of tasks. And yet, without wishing to boast, I’ve crossed the room twice already today and I managed it just fine.
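
For the record, the arithmetic is on my side too: the infinitely many sub-journeys form a geometric series that sums to a finite distance.

\[
\sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^{n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1
\]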

Software estimation is a bit like that. If you analyse it closely you’ll see the tasks you have to complete multiply infinitely until even the simplest thing looks impossible and the budget is smashed to smithereens. And yet, as a company, we’ve got a track record of delivering on time and to budget that goes back years.

The various methods that we use are described in the episode of our podcast that this post supports (why not go and check it out?), and we won’t go into detail here; suffice it to say that the process is always time-consuming and rarely problem-free.

So it’s hard. And prone to error. And time consuming to even do badly. So why do it?

The obvious answer – so you know how much to charge – is not actually all that applicable. More and more of the work we do on agile projects is charged on a time and materials basis. Additionally, there are a hundred good reasons why an agency might want to charge a price that wasn’t just literally [amount of time estimated] multiplied by [hourly rate].

No, the real reason that we put so much effort into estimation is that estimation is a great disinfectant. Everyone who works in this industry has a story about a project that went from perfectly fine to completely awful in a matter of seconds. Estimation helps us expose and resolve the factors that cause this horror: hidden complexity, differences of assumption, Just Plain Goofs etc.

It’s important to note though that even a carefully produced estimate can still be wrong and so the other key tools an agency needs are mature processes and procedures. You need to be able to effectively communicate how the estimate failed, assess what the impact of the failure will be to the broader project and, vitally, put all this information in a place where it can’t be forgotten or ignored.

This last step effectively gives the organisation an institutional memory that lasts longer than ten working days, and it’s the vital step that ensures that by the end of the project the stakeholders can remember that there was a problem, see that it was resolved, and understand how it affected timelines overall. Mistakes are always going to be made, but the key thing is to ensure you’re always making exciting new ones rather than repeating the old ones.

All of the above is discussed to some extent in our Estimating podcast. Andy Theyers, Richard Newton and I spend around half an hour discussing the subject and, honestly, it’s quite interesting. I urge you to check it out.

Video: Serverless – The Real Sharing Economy

Serverless is a new application design paradigm, typified by services like AWS Lambda, Azure Functions and IBM OpenWhisk. It is particularly well suited to mobile software and Single-Page Application frameworks such as React.
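
If you’ve not seen one before, a Serverless “application” can be as small as a single exported function. Here’s a minimal sketch using AWS Lambda’s Node.js handler signature (the response shape is just illustrative):

exports.handler = (event, context, callback) => {
  // The platform runs this function on demand and bills only for
  // the time it spends executing – there's no server to manage.
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello from a function' }),
  });
};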

In this video, Doug Winter talks at Digital North in Manchester about what Serverless is, where it comes from, why you would want to use it, how the economics function and how you can get started.

You can see more Isotoma videos on Vimeo.

RxJS: An object lesson in terrible good software

We recently used RxJS on a large, complex asynchronous project integrated with a big third-party distributed system. We now know more about it than, frankly, anyone would ever want to.

While we loved the approach, we hated the software itself. The reasons for this are a great lesson in how not to do software.

Our biggest issue by far with RxJS is that there are two actively developed, apparently stable versions living at two different URLs. RxJS 4 is the top Google result for RxJS and lives at https://github.com/Reactive-Extensions/RxJS; it briefly mentions that there is an unstable version 5 at a different address. RxJS 5 lives at https://github.com/ReactiveX/rxjs, has a completely different API from version 4, has completely different and far worse (“WIP”) documentation, doesn’t allude to its level of stability, and is written in TypeScript, so users will need to learn some TypeScript before they can understand the codebase.

Which version should new adopters use? I have absolutely no idea. Either way, when you Google for advice and documentation, you can be fairly certain that the results you get will be for a version you’re not using.
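
To make the split concrete, here’s roughly how the two worlds look side by side (a sketch; package names as published on npm at the time of writing):

// RxJS 4 – npm package "rx"
const Rx = require('rx');
Rx.Observable.just(42);   // v4 spells it "just"

// RxJS 5 – npm package "rxjs"
const { Observable } = require('rxjs/Rx');
Observable.of(42);        // v5 spells it "of"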

RxJS goes to great lengths to swallow your errors. We’re pretty united here in thinking that it definitely should not. If an observable fires its “error” callback, it’s reasonable that the emitted error should be picked up by the nearest catch operator. Sadly, though, RxJS also wraps all of the functions that you pass to it in a try/catch block, and any exception raised by those functions will also be shunted to the nearest catch operator. Promises do this too, and many have complained bitterly about it already.

What this means in practice is that finding the source of an error is extremely difficult. RxJS tries to capture the original stack trace and make it available in the catch block, but it often fails, resulting in a failed observable and an “undefined” error. When my code breaks I’d like it to break where it broke, not in a completely different place. If I expect an error to occur, I can catch it as I would anywhere else in the codebase and emit an observable error of the form I expect, so that my catch blocks don’t all have to accommodate both expected failure modes and arbitrary exceptions. Days and days of development were lost to bisecting a long pile of dot-chained functions in order to isolate the one that raised the (usually stupidly trivial) error.
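
Here’s a minimal sketch of the behaviour in RxJS 5. The exception thrown inside map never surfaces where it was thrown; it’s caught and delivered to the subscriber’s error callback instead:

const { Observable } = require('rxjs/Rx');

Observable.of(1, 2, 3)
  .map(x => {
    if (x === 2) throw new Error('boom');  // a plain programming error...
    return x * 2;
  })
  .subscribe({
    next: val => console.log(`Next: ${val}`),
    // ...is shunted here, far from where it was thrown
    error: err => console.error(`Error: ${err.message}`),
  });

// Next: 2
// Error: boom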

At the very least, it’d be nice to have the choice to use an unsafe observable instead. For this reason alone we are unlikely to use RxJS again.

We picked RxJS 5, as it’s been around for a long time now and seems to be actively maintained by Netflix, which is reassuring.

The documentation could be way better. It is incomplete, with some methods not documented at all, and what does exist is presented as a mystery-meat web application that can’t be searched like normal technical documentation. Code examples rarely use real-world use-cases, so it’s tough to see the utility of many of the Observable methods. Most of the gotchas that caught out all of our developers weren’t alluded to at any point in any of the documentation (in the end, a YouTube talk by the lead developer saved the day, containing the first decent explanation of the error-handling mechanism that I’d seen or read). Worst of all, the only documentation that deals with solving actual problems with RxJS (higher-order observables) is in the form of videos on the paywalled egghead.io. I can’t imagine a more effective way to put off new adopters than requiring them to pay $200 just to appreciate how the library is commonly used (though, to be clear, I am a fan of egghead).
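
For the uninitiated, “higher-order observables” are just observables that emit other observables, flattened back down by operators like switchMap. A sketch of the canonical type-ahead search (searchBox and render are hypothetical):

Observable.fromEvent(searchBox, 'keyup')
  .map(ev => ev.target.value)
  .debounceTime(250)                                         // wait for typing to pause
  .switchMap(text => Observable.ajax(`/search?q=${text}`))   // a new search cancels the stale one
  .subscribe(res => render(res.response));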

Summed up best by this thread, RxJS refuses to accept its heritage and admit that it’s a functional library. Within the JavaScript community there exists a huge functional programming subcommunity that has managed to put together a widely adopted specification (Fantasy Land) for writing functional JavaScript libraries that can interoperate with one another. RxJS chooses not to work to this specification, and a number of design decisions – such as introspecting certain contained values and swallowing them – drastically reduce the ease with which RxJS can be used in functional JavaScript codebases.

RxJS makes the same mistake lodash did a number of years ago, regularly opting for variadic arguments to its methods rather than taking arrays (the worst example is merge). Lodash did eventually learn its lesson; I hope RxJS does too.
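
To illustrate, merge takes its sources as separate arguments, so code that builds up an array of streams is left spreading it by hand (stream names hypothetical):

Observable.merge(clicks$, touches$, keys$).subscribe(...);  // variadic
Observable.merge(...arrayOfStreams).subscribe(...);         // arrays need spreading yourself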

Taming the Async Beast with FRP and RxJS

The Problem

We’ve recently been working on an in-browser vision mixer for the BBC (previous blog posts here, here, here, and here). Live vision mixing involves keeping track of a large number of interdependent data streams. Our application receives timing data for video tapes and live video streams via WebRTC data channels and websocket connections, and we’re sending video and audio authoring decisions over other websockets to the live rendering backend.

Many of the data streams we’re handling are interdependent. We don’t want to send an authoring decision telling the renderer to cut to a video tape until that tape is loaded and ready to play, so we have to wait for the ready signal before sending; and if the authoring websocket has closed in the meantime, we’ll need to reconnect to it and then retry sending that authoring decision.

Orchestrating interdependent asynchronous data streams is a fundamentally complex problem.

Promises are one popular solution for composing asynchronous operations and safely transforming the results; however, they have a number of limitations. The primary issue is that they cannot be cancelled, so we need to handle teardown separately somehow. We could use the excellent fluture or Task libraries instead, both of which support cancellation (and are lazy, chainable and fantasy-land compliant), but futures and promises handle one single future value (or error), not a stream of many values. The team working on this project are fans of futures (less so of promises) and were aiming to write the majority of the codebase in a functional style using folktale and ramda (and react-redux), so we wanted a functional, composable way to handle ongoing streams of data that could sit comfortably within the rest of the codebase.
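
By way of illustration, here’s roughly what cancellation looks like in fluture – from memory, so treat the exact API as an assumption and check its docs:

const Future = require('fluture');

// The computation returns its own teardown logic...
const task = Future((reject, resolve) => {
  const t = setTimeout(resolve, 1000, 'done');
  return () => clearTimeout(t);
});

// ...and forking hands back a cancel function that runs it.
const cancel = task.fork(console.error, console.log);
cancel();  // tear the computation down before it resolves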

A Solution

After some debate, we decided to use FRP (functional reactive programming) powered by the observable pattern. Having used RxJS (with redux-observable) for smaller projects in the past, we were confident that it could be an elegant solution to our problem. You can find out more about RxJS here and here but, in short, it’s a library that allows subscribers to listen to and transform the output of a data stream as per the observer pattern, and allows the observable (the thing subscribed to) to “complete” its stream when it runs out of data (or whatever), similar to an iterator from the iterator pattern. Observables also allow their subscribers to terminate them at any point, and typically observables will encapsulate teardown logic related to their data source – a websocket, long-poll, webrtc data channel, or similar.

RxJS implements the observer pattern in a functional way that allows developers to compose together observables, just as they’d compose functions or types. RxJS has its roots in functional reactive programming and leverages the power of monadic composition to chain together streams while also ensuring that teardown logic is preserved and handled as you’d expect.

Why FRP and Observables?

The elegance and power of observables is much more easily demonstrated than explained in a wordy paragraph. I’ll run through the basics and let your imagination think through the potential of it all.

A simple RxJS observable looks like this:

// Assumes RxJS 5: import { Observable } from 'rxjs/Rx';
Observable.of(1, 2, 3)

It can be subscribed to as follows:

Observable.of(1, 2, 3).subscribe({
  next: val => console.log(`Next: ${val}`),
  error: err => console.error(err),
  complete: () => console.log('Completed!')
});

Which would emit the following to the console:

Next: 1
Next: 2
Next: 3
Completed!

We can also transform the data just as we’d transform values in an array:

Observable.of(1, 2, 3).map(x => x * 2).filter(x => x !== 4).subscribe(...)
2
6
Completed!

Observables can also be asynchronous:

Observable.interval(1000).subscribe(...)
0 [a second passes]
1 [a second passes]
2 [a second passes]
...

Observables can represent event streams:

Observable.fromEvent(window, 'mousemove').subscribe(...)
[Event Object]
[Event Object]
[Event Object]

Which can also be transformed:

Observable.fromEvent(window, 'mousemove')
  .map(ev => [ev.clientX, ev.clientY])
  .subscribe(...)
[211, 120]
[214, 128]
[218, 139]
...

We can cancel the subscription, which will clean up the event listener:

const subscription = Observable.fromEvent(window, 'mousemove')
  .map(ev => [ev.clientX, ev.clientY])
  .subscribe(...)

subscription.unsubscribe();

Or we can end the stream in a dot-chained, functional way:

Observable.of(1, 2, 3)
  .take(2)  // After receiving two values, complete the observable early
  .subscribe(...)
1
2
Completed!

Observable.fromEvent(window, 'mousemove')
  .map(ev => [ev.clientX, ev.clientY])
   // Stop emitting when the user clicks
  .takeUntil(Observable.fromEvent(window, 'click'))
  .subscribe(...)

Note that those last examples left no variables lying around. They are entirely self-contained bits of functionality that clean up after themselves.

Many common asynchronous stream use-cases are catered for natively, in such a way that the “operators” (the observable methods e.g. “throttle”, “map”, “delay”, “filter”) take care of all of the awkward state required to track emitted values over time.

Observable.fromEvent(window, 'mousemove')
  .map(...)
  .throttle(1000) // only allow one event through per second
  .subscribe(...);

… and that’s barely scratching the surface.
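
To tie this back to the vision mixer for a moment, the tape/authoring dependency described earlier collapses into a handful of operators. This is a sketch with hypothetical stream names, not our production code:

tapeReady$                                    // stream of tape-state updates
  .filter(state => state.ready)               // wait until the tape can actually play
  .take(1)                                    // we only need the first ready signal
  .mergeMap(() => sendAuthoringDecision$())   // then send the cut to the renderer
  .retryWhen(errors => errors.delay(1000))    // if the websocket dropped, wait and retry
  .subscribe(...);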

The Benefits

Many of the benefits of RxJS are the benefits of functional programming: the avoidance of state, and the readability and testability of short, pure functions. By encapsulating the side-effects associated with your application in a generic, composable way, developers can maximise the reusability of the asynchronous logic in their codebase.

By seeing the application as a series of data transformations between the external application interfaces, we can describe those transformations by composing short, pure functions and lazily applying data to them as it is emitted in real-time.

Messy, temporary, imperative variables are replaced by closures that give observables access to previously emitted values in a localised way, limiting the amount of application logic and state a developer must hold in their head at any given time.

Did It Work?

Sort of. We spent a lot of our time in a state of low-level fury at RxJS, so much so that we’ve written up a long list of complaints in another post.

There are some good bits though:

FRP and the observable pattern are both transformative approaches to writing complex asynchronous JavaScript code, producing fewer bugs and drastically improving the reusability of our codebase.

RxJS operators can encapsulate extremely complex asynchronous operations and elegantly describe dependencies in a terse, declarative way that leaves no state lying around.

In multiple standups throughout the project we’ve enthusiastically raved about how these operators have turned a fundamentally complex part of our implementation into a two line solution. Sure those two lines usually took a long time to craft and get right, but once working, it’s difficult to write many bugs in just two lines of code (when compared to the hundreds of lines of imperative code we’d otherwise need to write if we rolled our own).

That said, RxJS is a functional approach to writing code, so developers new to the paradigm should expect to incur a penalty as they move from an imperative, object-oriented approach to system design to a functional, data-flow-driven one. There is also a very steep learning curve to climb before the benefits of RxJS are felt, as developers familiarise themselves with the toolbox and its idiosyncrasies.

Would We Use It Again?

Despite the truly epic list of shortcomings, I would still recommend an FRP approach to complex async JavaScript projects. In future we’ll be trying out most.js to see if it solves the myriad problems we found with RxJS. If it doesn’t, I’d consider implementing an improved Observable that keeps its hands off my errors.

It’s also worth mentioning that we used RxJS with react-redux to handle all redux side-effects. We used redux-observable to achieve this and it was terrific. We’ll undoubtedly be using redux-observable again.
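
For the curious, the unit of composition in redux-observable is the “epic”: a function from a stream of actions to a stream of actions. A sketch, with hypothetical action types:

const fetchUserEpic = action$ =>
  action$.ofType('FETCH_USER')                          // listen for a dispatched action
    .mergeMap(action =>
      Observable.ajax(`/users/${action.id}`)            // perform the side-effect
        .map(res => ({ type: 'FETCH_USER_SUCCESS', user: res.response }))
        .catch(err => Observable.of({ type: 'FETCH_USER_FAILURE', err })));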

The Europas 2017

June 13th was The Europas, a conference and awards ceremony for the European start-up scene. Having watched the event from afar for a few years, we decided to take the plunge and sponsor this year. We’ve been deeply involved in the start-up community right from our inception: from attending the first Future of Web Apps back in 2005, through helping some of our start-up customers achieve successful funding rounds and eventual sale, to setting up one or two of our own (like Forkd). All in all it felt like the right kind of event for us to get involved in.

It was my first visit to the Olympic Park, and my immediate impression was of how vast it is. I got off the train at Stratford International on a beautiful morning and decided to walk to Here East, which I could see in the distance…

Yeah. Perhaps I should have taken the shuttle bus that was on offer. Or taken the tube to Hackney Wick as recommended. Still, it was good to explore the park, even if I did arrive a little later than planned.

Sadly I wasn’t alone in arriving a little late. Because The Europas caters to a pan-European audience and the main event was in the evening, many attendees had chosen to travel on the day, meaning that the morning sessions were a little under-attended. This was a real shame, because the stand-out talk of the day for those who saw it was Azeem Azhar’s “Will ubiquitous AI lead to artisanal cheese for all?” The title might have been a mouthful (ahem) but the talk was fascinating and wonderfully delivered.

Following on from Azeem on the main stage was an equally positive session with Bess Mayhew of More United; her take on UK politics and how we might best affect it (and how she already is) was genuinely uplifting.

This was the first talk that touched on a theme that would run throughout the rest of the day: #fakenews. Clearly anyone involved in politics is going to be worrying about the fake news phenomenon, and while Bess touched on the subject during her session the next panel was all about it. I’m going to say more on the topic in another post, so I’ll leave this one there, except to say that we – the tech community – currently seem bereft of ideas as to how to address it.

While Azeem’s session was the highlight of the talks, the event had two non-talk stand-outs. Straight after the excellent lunch (a brief aside: the way lunch was delivered was very unusual and extremely efficient, a real plus for a conference) came Richard Browning, the Rocket Man.

By now the venue had pretty much filled up, so a huge crowd watched him – with earplugs in – circle the courtyard outside the venue. It’s hard to describe how impressive it is to someone who hasn’t seen it up close, with the heat and noise of the jets almost knocking you over. Quite how he actually manages to fly the thing I don’t know.

Doug’s breakout session with Roberta Lucca was straight after the Rocket Man’s flight, and we were obviously worried that no one would turn up given the excitement of what was going on outside, but we had a good audience for an intimate and lively chat (and disagreement) about how best to get the most out of your development team, and when and whether to build your own team or outsource. More on that topic to come in both blog post and podcast form…

For us the afternoon ended with Gabrielle Aplin, who gave a great talk about how artists are the new start-ups (reflecting what we used to say a decade ago, that start-ups are the new artists; what goes around comes around, of course) before giving a performance to a slightly bemused crowd.

For me the highlights of the day were Azeem’s talk on AI, the Rocket Man, and a great breakout panel on privacy, but there were very few dud moments in a packed day.

I left via the canal and Hackney Wick. Far more picturesque, and a much shorter walk!

We can’t thank Mike, Petra and Dianne enough for setting the thing up and running it so smoothly, and for giving us the opportunity to sponsor. We’ll see you there again next year.

Video: Iomart on selecting Isotoma as their digital agency

HAPPI (Highly Available Provisioning and Procurement of Infrastructure) is a really exciting project that we’ve been working on for some time now. It empowers users to design and deploy their perfect hardware infrastructure, all at the click of a button.

A few weeks ago we were delighted to host Neil Christie, Managing Director of Iomart, who spoke to us about the idea for HAPPI and why working with Isotoma has been such a valuable partnership for their business.

Watch HAPPI (Highly Available Provisioning and Procurement of Infrastructure) on Vimeo.

Neil Christie: “We went to 3 or 4 digital agencies when we initially had the idea of HAPPI. It became very clear from speaking to those that we needed to find someone that really understood the vision and the goals that we were aiming for.”

“I think it was when you were finishing my sentences and asking questions that we hadn’t even considered that we really knew we’d found someone that understood exactly what we were going to achieve.”

Iomart have big plans for HAPPI, and we’re delighted to be helping them along the way. If you have an idea and need to turn it into reality, we’d love to hear from you.