Needs, Empathy, and Ghosts in the Machine: Reflections on Dot York 2017

Last Thursday I spent the day helping out at the hugely anticipated Dot York 2017 conference. It was an early start and a (very!) late finish, but I wouldn’t have missed it for anything.

A conference lives or dies by the quality of its speakers, and this year the bar was raised yet again, with Scott Hartop ably compèring the day. Each talk provided enough food for thought to fill this blog a hundred times over, but I’ll restrict myself to a few personal highlights from each session.

Adam Warburton changes our perceptions of competitors

The opening session of the day concerned User Experience and Needs. Adam Warburton, Head of Product at Co-op Digital, gave an illuminating demonstration of how seemingly unrelated products can end up as competitors when viewed through Maslow’s Hierarchy of Needs. Who would have thought, for instance, that online supermarket shopping and Uber are actually competitors within this framework? And how does that challenge the way we think about our own products? Adam went on to discuss how, by framing your business and the needs that you service in this way, you can force entire industries to transform for the better. The Co-op is not the most dominant supermarket chain in the UK, but Adam argues that their business goals have actually been met – by championing Fair Trade products and ethical business methods, they found that consumers valued these aspects of their business and so forced competitors to adopt their practices. For them, that was how they measured success.

Ian Worley speaks of getting stakeholder buy-in in a difficult environment

The second session, Business Before Lunch, saw four insightful talks from experts, innovators and entrepreneurs looking at the decisions we make and how we make the right choices for our own businesses. Ian Worley kicked us off with a talk about his time as Head of User Experience and Design at Morgan Stanley. Ian spoke with eloquence about achieving stakeholder buy-in by a) being brave about your expertise, and b) finding the right arguments for the right people. In the conservative world of banking, efficiency gains and improved bottom line were persuasive where aesthetic values and improved user experience were not. As Ian described his experiences, I thought about the broader question of value alignment: what do your clients value, what do you value, and what do you do if you can’t find common ground? At Isotoma I am fortunate to work with a broad range of clients, some offering exciting technical challenges, others that provide opportunities to do real social good. Very few of us in this industry can fully separate our work identities from our personal ones, so the importance of doing work with clients who share at least some of your values cannot be overstated.

Hannah Nicklin’s talk highlighted for me how destroying capitalism isn’t just a slogan

Following quite the best conference lunch I’ve ever had (with many thanks to Smokin Blues!), we heard four presentations on Building Better Teams, with Hannah Nicklin providing a dramatic reading of her ethnographic experiences amongst games development collectives. Hannah’s talk highlighted for me how destroying capitalism isn’t just a slogan, but a praxis – the intersection of place and behaviour where we challenge orthodoxies. We probably can’t overthrow systems of exploitation overnight, but we can problematise convention and test alternatives. As a business, Isotoma works hard on cultivating an environment that works for its employees, and not simply operating as an entity for converting labour into ‘stuff’. What works for you may not work for me, and that’s ok, but the crucial thing is to challenge the received assumptions of what your business is for, and the value that it brings to the world.

Natalie Kane talks about how easily human bias can creep into development of advanced software

We rounded out the day’s events with a panel on Being Human, with an emphasis on empathy, self-care, and our responsibility towards others. Natalie Kane, the Curator of Digital Design at the Victoria and Albert Museum, delivered an intriguing talk concerning so-called ‘ghosts in the machine’, and how easy it is for the advance of technology to be embraced, unchallenged, as an unimpeachable good. Our ethical obligations do not begin and end with our good intentions, she observed, but require our constant and active engagement. Natalie argued that such ‘ghosts’ serve as a reminder that technology is not neutral, and that we have a responsibility to maintain a critical stance towards technology and how we use it. To paraphrase Jurassic Park: just because we can doesn’t mean we should.

I cannot wait to see who we book next year!

Dot York – Yorkshire’s digital conference returns

Dot York returned last Thursday 9th November with a new lease of life, a new venue and a new sponsor (us!). The day’s events saw 16 compelling presentations, lunch by Smokin Blues and an evening event at Brew York.

If there was an overriding theme to the talks, it was probably empathy: we all make better, more successful digital products if we make the effort to learn about our users – or, seen from another perspective, if we recognise the diversity of our users, our stakeholders and our colleagues. Measurement was another theme: it’s one of the ways we learn about our users and the impact we’re having.


Being a tutor at the Open University

At Isotoma, our recruitment policy is forward-thinking and slightly unconventional. We prioritise how you think rather than where, or even if, you studied formally. This is not to say that we don’t have our fair share of academics: there is a former Human–Computer Interaction researcher on the team, among many others with similarly impressive backgrounds!

This nonconformist approach spans the whole of Isotoma, and some of our clients may have noticed that, as a rule, I “don’t do Mondays”. So where am I? What am I doing? Being one of those academic types… I work as an Associate Lecturer at the Open University.

What is the Open University?

Like many people of a certain age, I mainly associated the Open University with old educational TV programmes. So I was surprised to discover that the OU is the largest university in the UK, with around 175,000 students – 40% of all the part-time students in the country!

The O in OU manifests itself as flexibility. It provides materials and resources for home study, allowing three-quarters of students to combine study with full- or part-time work. Admissions are also open, with a third of students having one A-Level or lower on entry.

Studying in that context can be exceptionally challenging. So for each module, students are assigned a tutor to guide, support and motivate them: to illustrate, explain and bring the material to life. This is where I come in.

Tutoring

Oddly enough, given my developer position at Isotoma, I teach second and third year computing modules! I initially tutored web technologies, then diversified into object-oriented software development; algorithms, data structures and computability; and data management, analysis and visualisation.

The role of a tutor has three major components. To me, the most important is acting as the first point of contact for my tutor group, providing support and guidance throughout the module. For OU students, study is only one of many things going on in their lives – in fact, a student once apologised to me for an incomplete assignment, because they had to drive their wife to hospital to give birth! As a tutor, it is crucial to understand this, as such a unique learning environment requires adapting your teaching approach to students’ varied lives.

Marking and giving feedback is a core part of the role, with busier weeks producing plenty of varied and interesting assignments. For every piece of coursework, I write a ‘feedforward’ for each individual highlighting the strengths shown, but also outlining suggestions and targets for improvement. Personal feedback on assignments is an excellent learning opportunity for students and can really improve their final result. I also encourage students to get in touch to discuss my comments, as not only can this lead to some enlightening debates, but helps them to be in control of their own learning.

The final component is tutorials. I conduct most of mine through web-conferencing, working in a team to facilitate a programme of around 40 hours per module. These web-tutorials are extremely useful as the group can interact, chat and ask questions from wherever they are, and we can explore complex concepts visually on a whiteboard or through desktop sharing.

Tutoring: impact on development?

There is a great synergy between the two roles: as developers we try to keep on top of our game, and a regular stream of student questions – about Python, JavaScript, PHP, SQL, Java or who knows what – certainly keeps you on your toes! This can be good preparation for some of the more …interesting… projects that Isotoma takes on from time to time.

Having a group of students all trying to find the best way to implement some practical activities is also like having a group of researchers working for you. So when a student used the :target pseudo-class to implement a CSS lightbox without JavaScript, I quite excitedly shared the technique in our development chat channel! Of course, our UX team were already well aware of it… but it was news to me!

To explain concepts you really need to understand them, and sometimes you realise that what you thought you knew has become a bit shallow over time. Preparing a tutorial on recursion and search algorithms was a great warmup for working out how HAPPI implements drag and drop of multiple servers between racks – where not everything you drop may be able to move to the new slot, or the slot may be occupied by another server you are dragging away.
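To give a flavour of the problem, here’s a minimal sketch of the slot-validation step (hypothetical Python, not HAPPI’s actual code): a target slot counts as free if it is empty, or if its occupant is another server in the same drag selection and is therefore vacating it.

```python
def valid_moves(dragged, targets, occupancy):
    """Work out which of several dragged servers may land on their targets.

    dragged   -- set of server ids currently being dragged
    targets   -- dict of server id -> slot it was dropped on
    occupancy -- dict of slot -> server id currently occupying it

    A slot is free if it is empty, or if its occupant is part of the same
    drag (and is therefore vacating it).
    """
    valid = {}
    for server, slot in targets.items():
        occupant = occupancy.get(slot)
        if occupant is None or occupant in dragged:
            valid[server] = slot
    return valid


# s1 and s2 swap slots; s3 is blocked because s4 isn't moving.
occupancy = {"r1-1": "s1", "r1-2": "s2", "r2-1": "s4"}
print(valid_moves({"s1", "s2", "s3"},
                  {"s1": "r1-2", "s2": "r1-1", "s3": "r2-1"},
                  occupancy))
# -> {'s1': 'r1-2', 's2': 'r1-1'}
```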

There isn’t an exact correlation between what I tutor and what I develop, and some topics push you beyond your comfort zone. The implications of the Church–Turing thesis or the pros and cons of data mining don’t crop up much in daily work, but things I’ve learnt tutoring data visualisation have proved pretty handy.

And of course some projects, such as the Sofia curriculum mapper for Imperial College of Medicine, are educational, so domain knowledge of university processes is directly relevant to understanding client requirements.

Development: impact on tutoring?

One of the reasons the OU employs part-time tutors is for the experience they bring from their work. In that respect, I can provide examples and context from what we do at Isotoma. This serves to bridge the gap between (what can sometimes be) quite dry theory and the decisions/compromises that are part and parcel of solution development in the real world.

So if a student questions the need to understand computational complexity for writing web applications, we can discuss what happens when part of your app is O(n²) when it could be O(n) or O(log n). Or the difference between a site that works well when one developer uses it and one that works well with thousands of users – but also discuss the evils of premature optimisation!
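As a toy illustration of the kind of discussion this prompts (the data here is hypothetical): testing membership of a list inside a loop is O(n·m), while building a set first makes the same job roughly O(n + m).

```python
def overlap_quadratic(subscribers, customers):
    # `in` on a list scans the whole list each time:
    # O(n) per lookup, O(n * m) overall.
    return [c for c in customers if c in subscribers]

def overlap_linear(subscribers, customers):
    # `in` on a set is O(1) on average, so the whole pass is O(n + m).
    subscribed = set(subscribers)
    return [c for c in customers if c in subscribed]
```

With a few hundred users the difference is invisible; with a few million, the first version is the one that pages you at night.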

Being part of a wider team at Isotoma also allows me to talk about approaches to issues like project management, security, testing and quality assurance. Recently I’ve also started feeding some of Francois’ tweets into discussions on usability and accessibility, which is fun.

Web development is a fast-moving field so while principles evolve gradually, tools, frameworks, languages and practices come and go. Working in that field allows me to share how my work has changed over time and what remains true.

If you work in the IT industry and are looking for a different challenge then I would highly recommend becoming an OU computing tutor. Tutoring one group doesn’t demand a full day every week, and it’s great to know that you’re sharing your expertise with those for whom full-time study isn’t an option – and developing a new generation of colleagues.

Containerisation: tips for using Kubernetes with AWS

Containers have been a key part of developer toolkits for many years, but they are now becoming common in production too. Driving this adoption, in part, is the maturity of production-grade tooling and systems.

The leading container management product is Docker, but on its own Docker does not provide enough to deploy into production, which has led to a new product category: container orchestration.

The leading product in this space is Kubernetes, developed initially by Google and then released as open source software in 2015. Kubernetes differs from some of the competing container orchestration products in its design philosophy: it is committed to open source (with components like iptables, nginx and etcd as core moving parts) and it is entirely API-first.

Our experience is that Kubernetes is ridiculously easy to deploy and manage and has many benefits over straight virtualisation for deploying mixed workloads, particularly in a public cloud environment.

Our services

We are working towards becoming a Kubernetes Certified Service Provider and are actively delivering Kubernetes solutions for customers, primarily on AWS. If you are interested in our consulting or implementation services please just drop us a line.

Why containers?

The primary benefits are cost and management effort. Cost because expensive compute resource can be efficiently shared between multiple workloads. Management because the container paradigm packages up an application with its dependencies in a way that facilitates flexible release and deployment.

A container cluster of two or three computers can host dozens of containers, all delivering different workloads. Kubernetes can scale containers within this cluster, and can scale the overall cluster up and down depending on the needs of the workloads. This allows the cluster to be downsized to a minimum when load is low. It also means containers that require very low levels of resources can remain in service without taking up a whole virtual machine.
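As a sketch of the container-level half of this, using the official Kubernetes Python client (the deployment name, namespace and numbers are all hypothetical; cluster-level scaling is handled separately, for instance by the AWS auto-scaling group that kops manages):

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at your cluster
autoscaling = client.AutoscalingV1Api()

# Scale a hypothetical "web" deployment between 1 and 10 replicas,
# targeting 70% average CPU utilisation.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web", namespace="production"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=1,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="production", body=hpa)
```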

Management time benefits enormously because of the packaging of applications with their dependencies. It allows you to share compute resource even when the workloads have conflicting dependencies – a very common problem. It allows upgrades to progress across your estate in a partial manner, rather than requiring big bang upgrades of possibly risky underlying software.

Finally, it allows you to safely upgrade the underlying operating system of your cluster without downtime. Workloads are automatically migrated around the cluster as nodes are taken out of service and new, upgraded nodes are brought in. We’ve done this a bunch of times now and it is honestly kind of magic.

There are other benefits to do with ease of access, granular access control and federation, and I might deal with those in later posts.

Tips

Here are a few tips if you are considering getting started with Kubernetes.

Domains

Buy a new domain name for every cluster. This makes your URLs so much nicer, and it really isn’t that expensive! 🙂

AWS accounts

We consider best practice to be a master account, where your user accounts sit, and then one sub-account for your production environment, with further sub-accounts for pre-production environments. Note that you can run staging sites in your production cluster – this pattern should become much more common, since you are not staging the cluster, but staging the sites.

A staging cluster is only needed to test cluster-wide upgrades and changes.

Security

When all your sites are in a single cluster and behind a single AWS ELB (yes, you can do this), it makes things such as Web Application Firewall automation and IP-restricted ELBs more cost-effective. These things only need to be applied once to provide benefit across your estate.

Role-Based Access Control

This is a relatively new feature of Kubernetes, but it is solid and well-designed. I’d recommend turning this on from day one, so the capabilities are available to you later.
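To give a feel for what RBAC lets you express, here’s a minimal sketch using the official Kubernetes Python client (all the names are hypothetical): a Role granting read-only access to pods in one namespace, which a RoleBinding (not shown) would then grant to particular users or groups.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Read-only access to pods in the "staging" namespace.
pod_reader = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="staging"),
    rules=[client.V1PolicyRule(
        api_groups=[""],   # "" is the core API group
        resources=["pods"],
        verbs=["get", "list", "watch"],
    )],
)
rbac.create_namespaced_role(namespace="staging", body=pod_reader)
```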

Flannel and Calico, or Weave

Similarly, I’d recommend enabling an overlay network from day one. These are easily deployed into an AWS Kubernetes cluster using the kops tool, and they provide advanced network capabilities should you ever need them in the future.

Namespaces

Use namespaces to subdivide your estate into logical partitions. production and staging are an obvious distinction, but you may well have user groups where namespaces make a sensible boundary for applying access control.
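For example (again a sketch with the Python client; the names are hypothetical):

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# One namespace per logical partition; RBAC rules and resource quotas
# can then be applied at the namespace boundary.
for name in ("production", "staging"):
    core.create_namespace(
        body=client.V1Namespace(metadata=client.V1ObjectMeta(name=name)))
```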

Tooling

Currently, integrating Kubernetes configuration with CloudFormation means writing some custom tooling. Bite the bullet and dedicate some time to making a good job of it. I’m expecting Kubernetes to become a first-class citizen within AWS at some point, but until then you are going to need to own your devops.
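The glue typically looks something like this sketch (the stack name and output keys are hypothetical, using boto3 and the Kubernetes Python client): read outputs from a CloudFormation stack and surface them to applications in the cluster as a ConfigMap.

```python
import boto3
from kubernetes import client, config

# Read the outputs of a (hypothetical) CloudFormation stack...
cfn = boto3.client("cloudformation")
stack = cfn.describe_stacks(StackName="production-db")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}

# ...and publish them into the cluster as a ConfigMap for apps to consume.
config.load_kube_config()
core = client.CoreV1Api()
core.create_namespaced_config_map(
    namespace="production",
    body=client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="aws-resources",
                                     namespace="production"),
        data={"DB_HOST": outputs["DatabaseEndpoint"]},
    ),
)
```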

Resource records

Create Route53 ALIAS records for all your exposed endpoints (which could be just the single ELB in front of your ingress controller), and use these in your CloudFront distributions. This makes upgrades a lot easier!
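With boto3, an ALIAS record for the ingress ELB looks roughly like this (every identifier here is hypothetical; note that the AliasTarget’s HostedZoneId is the ELB’s own region-specific zone id, not your zone’s):

```python
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z_MY_ZONE_ID",  # the id of your own hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.my-cluster-domain.com.",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z_ELB_ZONE_ID",  # the ELB's zone id
                "DNSName": "my-ingress-123456.eu-west-1.elb.amazonaws.com.",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```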


Live video mixing with the BBC: Lessons learned

In this post I am going to reflect on some of the more interesting aspects of this project and the lessons they might provide for other projects.

This post is one of a series talking about our work on the SOMA video mixing application for the BBC. The previous posts in the series are:

  1. Building a live television video mixing application for the browser
  2. The challenges of mixing live video streams over IP networks
  3. Rapid user research on an Agile project
  4. Compositing and mixing video in the browser
  5. Taming the Async Beast with FRP and RxJS
  6. RxJS: An object lesson in terrible good software
  7. Video: The future of TV broadcasting
  8. Integrating UX with agile development

In my view there are three broad areas where this project has some interesting lessons.

Novel domains

First is the novel domain.

This isn’t unfamiliar – we often work in novel domains that we know little to nothing about. In fact, that is the nature of a technical agency: while there are some domains we’ve worked in for many years, such as healthcare and education, there are always novel businesses with entirely new subjects to wrap our heads around. (To give you some idea, a few recent examples include store-and-forward television broadcasting, horse racing odds, medical curricula, epilepsy diagnosis, clustering automation and datacentre hardware provisioning.)

Over the years this has been the aspect of our work that I have enjoyed most. Plunging into an entirely new subject with a short amount of time to understand it and make a useful contribution is exhilarating.

Although it might sound a bit insane to throw a team who know nothing about a domain at a problem, what we’re very good at is designing and building products. As long as our customers can provide the domain expertise, we can bring the product build. It is easier for us to learn the problem domain than it is for a domain expert to learn how to build great products.

The greatest challenge with a new domain is the assumptions. We all have these in our work – the things we think are so well understood that we don’t even mention them. These are a terrible trap for software developers, because we can spend weeks building completely the wrong thing with no idea that we’re doing so.

We were very lucky in this respect to be working with a technical organisation within the BBC: Research & Development. They were aware of this risk and did a very good job of arranging our briefing, which included a visit to a vision mixing gallery. This is the kind of exercise that delivers a huge amount in tacit understanding, and allows us to ask the really stupid questions in the right setting.

I think of the core problem as a “Rumsfeld”. Although Donald Rumsfeld got a lot of criticism for his “unknown unknowns” comments, I think they’re bizarrely insightful. There really are unknown unknowns, and what the hell do you do about them? You can often sense that they exist, but how do you turn them into known unknowns?

For many of these issues the challenge is not the answer, which is obvious once it has been found, but facilitating the conversation to produce the answer. It can be a long and frustrating process, but critical to success.

I’d encourage everyone to get the software team into the existing environment of the target stakeholder groups, to try and understand at a fundamental level what they need.

The Iron Triangle

The timescale for this project was extraordinarily difficult – nine weeks from a standing start. In addition, much of the scope was quite fixed – we were largely building core functionality that, if missing, would have rendered the application useless. On top of that, we wanted to achieve the level of UX finish that we generally deliver.

This was extremely ambitious, and in retrospect we bit off more than we could reasonably chew.

Time is the greatest enemy of software projects because of the challenges in estimation. For reasons covered in a different blog post, estimation for software projects is somewhere between an ineffable art reserved only for the angels, and completely impossible.

When estimates are impossible, time becomes an even greater challenge. One of the truisms of our industry is the “Iron Triangle” of time, scope and quality. Like a good Chinese buffet, you can only choose two. If you want a fixed time and scope, it is quality that will suffer.

Building good software takes thought and planning. And the first version of a component is rarely the best – it takes time to assemble it, consider it, and then perhaps shape it into something near its final form.

Quality is, itself, an aggregate property. Haste lowers the standard of each part and so, by a process of multiplication, lowers the overall quality of the product far more. (If each of ten parts is only 95% right, the whole is 0.95¹⁰ ≈ 60% right.) The only way to achieve a very high quality for the finished product is for every single part to be of similarly high quality. This is generally our goal.

However. Whisper it. It is possible to “manage” quality, if you understand your process and know the goal. Different kinds of testing can provide different levels of certainty of code quality. Manual testing, when done exhaustively, can substitute in some cases for perfection in code.

We therefore managed our quality, and I think actually did well here.

Asynchronous integration components had to be absolutely perfect, because any bugs would result in a general lack of stability that would be impossible to trace. The only way to build these is carefully, with a design; and the only way to test them is exhaustively, with unit and integration tests.

On the other hand, there were a lot of aspects of the UI where it was crucial that they performed and looked excellent, but the code could be rougher around the edges, and could just be hacked out. This was my area of the application, and my goal was to deliver features as fast as possible at just-acceptable quality. Some of the code was quite embarrassing, but we got the project over the line on time, with the scope intact, and it all worked. This was sufficient for those areas.

Experimental technologies

I often talk about our approach using the concept of an innovation curve, and our position on it (I think I stole the idea from Ian Jindal – thanks Ian!).

Imagine a curve where the X axis is “how innovative your technologies are” and the Y axis is “pain”.

In practical terms, the pain axis translates into “how unlikely I am to find the answer to my problems on Stack Overflow”.

At the very left, everything has been seen and done before, so there is no challenge from novelty – but you are almost certainly not making the most of available technologies.

At the far right, you are hand crafting your software from individual photons and you have to conduct high-energy physics experiments to debug your code. You are able to mould the entire universe to your whim – but it takes forever and costs a fortune.

There is no correct place to sit on this curve – where you sit is a strategic (and emotional) decision that depends on the forces at play in your particular situation.

Isotoma endeavours to be somewhere on the shoulder of the curve. The software we build generally needs to last 5+ years, so we can’t pick flash-in-the-pan technologies that will be gone in 18 months; but similarly we need to pick relatively recent technologies so they don’t become obsolete. This is sometimes called “leading edge”: almost bleeding edge, but not so close that you get cut. With careful choice of tools it is possible to maintain a position like this successfully.

This BBC project was off to the right of this curve, far closer to the bleeding edge than we’d normally choose, and we definitely suffered.

Some of the technologies we had to use had some serious issues:

  1. To use IPStudio, a properly cutting edge product developed internally by BBC R&D, we routinely had to read the C++ source code of the product to find answers to integration questions.
  2. We needed dozens of coordinated asynchronous streams running, for which we used RxJS. This was interesting enough to justify two posts on this blog on its own.
  3. WebRTC, which was the required delivery mechanism for the video, is absolutely not ready for this use case. The specification is unclear, browser implementation is incomplete and it is fundamentally unsuited at this time to synchronised video delivery.
  4. The video compositing technology in browsers actually works quite well, but it was entirely new to us and it took considerable time to gain sufficient expertise to do a good job. Also, browser implementations still have surprisingly sharp edges (only 16 WebGL contexts are allowed! Why 16? I dunno.)

Any one of these issues could have sunk our project, so I am very proud that we shipped good software in spite of all four.

Lessons learned? Task allocation is the key to this one, I think.

One person, Alex, devoted his time to the IPStudio and WebRTC work for pretty much the entire project, and Ricey concentrated on video mixing.

Rather than try and skill up several people, concentrate the learning in a single brain. Although this is generally a terrible idea (because then you have a hard dependency on a single individual for a particular part of the codebase), in this case it was the only way through, and it worked.

Also, don’t believe any documentation, or in fact anything written in any human languages. When working on the bleeding edge you must “Use The Source, Luke”. Go to the source code and get your head around it. Everything else lies.

Summary

I am proud, justifiably I think, that we delivered this project successfully given all the constraints above. It was used at the Edinburgh Festival, and actual real live television was mixed using our product.

The lessons?

  1. Spend the time and effort to make sure your entire team understand the tacit requirements of the problem domain and the stakeholders.
  2. Have an approach to managing appropriate quality that delivers the scope and timescale, if these are heavily constrained.
  3. Understand your position on the innovation curve and choose a strategic approach to managing this.

The banner image at the top of the article, taken by Chris Northwood, shows SOMA in use during the 2017 Edinburgh Festival.

Integrating UX with agile development

Incorporating user centred design practices within Agile product development can be a major challenge. Most of us in the user experience field are more familiar with the waterfall “big design up front” methodology. Project managers and developers are also likely to be more comfortable with a discrete UX design phase that is completed before development commences. But this approach tends to be inefficient, slower and more expensive. How does the role of the UX designer change within Agile product development, with its focus on transparency and rapid iteration?

While at Isotoma we’ve always followed our own flavour of Agile product development, UX is still mostly front-loaded in a “discovery” phase, as at most agencies. Our recent vision mixer project for BBC Research & Development, however, required a more integrated approach. The project had a very tight timeframe, requiring overlapping UX and development, with weekly show & tells.

From a UX perspective, it was a positive experience and I’m happy with the result. This post lists some of the techniques and approaches that I think helped integrate UX with Agile. Of course, every project and organisation is different, so there is definitely no one-size-fits-all approach, but hopefully there is something here you can use in your work.

Timesheets: some observations on observation

As a throwaway line in my post on understanding your team’s progress, I said something like “everyone hates timesheets”. And it’s true, they do. They’re onerous, boring, and usually seen as invasive, “big brother”-esque make-work. But, as I also said in that post, good-quality time recording is vital to understanding what’s going on within your teams.

Feeling the need

We first started looking at timesheet systems nine or ten years ago when it was becoming abundantly clear that we weren’t making the progress we were expecting on certain projects, but we didn’t know why.

The teams were skilled in the tools they were using, they were diligent, they’d done similar work before, but they just weren’t hitting the velocities that we had come to expect. On top of that, the teams themselves thought they were making good progress. And every which way we approached the problem we were missing the information needed to get to the bottom of the mismatch between expectation and reality.

At that point in the company’s life timesheets were anathema to us; we felt very strongly they indicated a lack of trust, and in a company built entirely on the principles behind the Agile Manifesto… Well… You can see our problem.

Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.

But however we cut it we really needed to understand what people were actually doing with their day. We trusted that if people thought they were making good progress then they were, but we definitely knew that we weren’t making the same kind of progress that we had been a year ago on the same types of project. And back then we were often on fixed price projects and billing by the day, so when projects started to overrun our financial performance started to dip and the quality of our code went the same way (for all the reasons I outlined in that previous post).

So we hit on Harvest (at the time one of the poster children of the burgeoning Rails SaaS community) and asked everyone to fill in their sheets for a couple of months so we could generate some data.

We had an all hands meeting, we explained exactly why we were doing it, and we asked, cajoled and bullied people into using it so that at least we had something to work on and perhaps uncover the problems we were hitting.

And of course we found it quickly enough, because accurate timesheets, filled in honestly, expose exactly what’s going on. By our nature we are both helpful and curious – that’s how we ended up doing what we’re doing. But helpful and curious is easily distracted: a colleague asking for help, an old customer with a quick question, a project manager from another project with an urgent request, the account management team asking “can you just…” And all of this added up. In the worst cases some people were only spending four hours a day on the project they were allocated to, the rest of their time spent helping colleagues and old customers… However, how you cope with these things is probably the subject of another post.

My point here is that once we had that data we realised how valuable it was, and knew that we couldn’t go without it again. Our key takeaway was that timesheets are a key part of a company’s introspection; without good data you don’t know which problem you’re actually trying to solve. And so we had to make timesheets part of our everyday processes.

Loving the alien

Like I said: people hate timesheets. They’re invasive. They’re time-consuming. They feel like you’re being watched, judged. They imply no trust. They’re alien to an agile environment. And the data they produce is a key part of someone else’s reporting, too. So how do you make sure they’re filled in accurately and honestly? And not just in month one, when you first introduce them, but in month fifty-seven, when your business relies on them and you may not be watching quite so closely.

We’ve found the following works for us:

  • Make it crystal clear what they’re for, and what they’re not
  • Make it explicit that timesheets are for tracking the performance of estimates and ensuring that progress can be reported accurately
  • It’s not about how much you do, but how much got done
  • Tie them together with things like iDoneThis, so that people can give context to their timesheets in an informal, unstructured manner
  • Make sure that everyone who uses the data throughout the management chain is incentivised to treat it honestly – this means your project managers mustn’t feel the need to manipulate it or, worse, manipulate how it’s entered (we’ve seen this more than once in other organisations)

And Dan, one of our project managers, sends round a gentle chivvying email each evening (filled with the day’s fun facts, of course) to make sure that people actually fill them in.

[Photo by Sabri Tuzcu on Unsplash]

External agencies vs. in-house teams

As you’ll already know, because you’re windswept and interesting, we record a semi-regular podcast where we look into an aspect of life in a technical agency that we think will interest the outside world. We’ve just finished recording the latest episode, about internal versus external teams, and honestly I think it’s one of the most interesting chats we’ve had.

Joining us on the podcast are Andy Rogers from Rokker and Dan Graetzer from Carousel Group. Andy and Dan both have tons of experience commissioning work from internal teams and navigating the selection of external agencies, and they were able to speak with clarity about the challenges each can bring.

One of the interesting things for me was getting a glimpse ‘over the fence’ into some of the thought processes and pressures that lead people to keep work internal – something that I’ve really only been able to guess at in the past.

Here’s a quick summary of things we speak about.

Agencies developing symbiotic/parasitic relationships with larger clients

This tendency of larger agencies to act almost as though they are internal teams is becoming more and more common. There are upsides and downsides to this, obviously, in that while bodies like Deloitte et al can mobilise 200-strong dev teams, they also make it more and more likely that their customers will have to keep going back to them in future. (We discuss this subject mostly in terms of how Isotoma are not a larger agency!)

Good agencies are expensive but not as expensive as bad recruitment

Hiring an agency for a given software project is likely to cost around the same as the annual salary of a developer and/or development team. Given this, it can seem galling for potential customers to be spending the right amount of money in the wrong place. We discuss how a good agency can both mitigate the opportunity cost and assume all the tricky recruitment risk in the relationship. (Aren’t we nice?)

Continuous delivery shouldn’t necessarily mean continuous agency billing

One of the goals of any software project should be to build and develop the skills to support it in-house. If you’ve had a key piece of software in production for 18 months and you’re still relying on a third party to administer, fix or deploy it, then you might have a problem.

Asking an agency to do something is the easy bit

Commissioning work with third party agencies is one step in a multi-step journey. That journey needs to include understanding how you’re defining your requirements, how you plan to receive the work when it’s done, and how you’re going to give the project good governance while it’s in flight.

Also there is a good deal of talk about werewolves

We’re not mega sure why.

Hopefully you’ll find it as interesting as we did. You can listen to the podcast and subscribe!

DotYork Event Announcement

Isotoma working in partnership with DotYork

We’re excited to announce that DotYork has joined forces with Isotoma, the York-based software development agency. Isotoma have been long-term supporters of the event, attending and sponsoring every conference so far and even speaking at DotYork 2016, so when they offered their help it felt like a natural fit.

We’ve always thought DotYork was a great way to highlight our beautiful city and York’s rapidly growing digital community, and during conversations with Rick we realised that we had loads of ideas for future events and could offer Dot York the support it needed. We’re really excited about DotYork 2017 and are looking forward to future events in 2018 and beyond.
Andy Theyers, director and founder, Isotoma

Working with Isotoma on DotYork 2017 has already sparked some new ideas and with a bigger team we’re able to look at expanding the event, including workshops and an evening event this year, with who knows what to come in 2018
Rick Chadwick, DotYork

Reposted from the original DotYork blog

 

The future of TV broadcasting

Earlier this year we started working on an exciting project with BBC Research & Development on their long-term programme developing IP Studio – the next-generation, IP-network-based broadcast television platform. Working with manufacturers, the BBC are developing new industry-wide standards which will hopefully be adopted worldwide.

This new technology dramatically changes the equipment needed for live TV broadcasting. Instead of large vans stocked with pricey equipment and a team of people, shows can be recorded, edited and streamed live by a single person with a 4K camera, a laptop and internet access.

The freedom of being able to stream and edit live TV within the browser will open up endless possibilities for the entertainment and media sector.

You can see more Isotoma videos on Vimeo.