I have very little nostalgia for the old Design Museum building. Its location near Tower Bridge was always a real effort to get to, and while an attractive modernist icon, it always felt small, very much one of London’s “minor” museums – not befitting London’s reputation as a global design powerhouse. On 21 November it reopened at a new location in Kensington, and I visited on the opening weekend.
A year or so ago we signed up for a bunch of free Cycle Alert tags. They’re RFID tags you attach to your bike that ping a sensor in the cab of suitably equipped vehicles, warning the driver that a cyclist is nearby. We even did some truly adorkable PR to go with it. If we ignore the subtle whiff of victim-blaming it’s a nice idea; after all, who doesn’t want to be safer on their bike?
Twelve months on though, and it’s all fallen a bit flat.
Why? Because not enough cyclists have the tags on their bikes and not enough vehicles have the sensors installed. Like so many businesses, the Cycle Alert model is predicated on building both sides of a market simultaneously, and therein lie some serious problems.
Your business is a different beast
Around half the start-ups that reach out to us for help have a business model that relies on taking a percentage from transactions occurring on their platform. The idea might be a new twist on recruitment services, or fashion retail, or services for motor tuning enthusiasts; who knows? But they share that same problem. Without vacancies on the site why would I upload my CV? Without CVs on the site why would I post my advert or pay a fee to search? Without beautiful shoes on the site why would I visit? Without shoppers why would I upload my beautiful shoes?
This isn’t to say that applications that rely on making a market are bad ideas, but they need treating quite differently. If your business is a straight product build you can pretty safely build an MVP (for your personal definition of M, V and P, obviously) and start marketing the hell out of it, but a marketplace of any sort needs more careful planning; it’s a marathon, not a sprint, and founders – and their funders – need to plan their resources accordingly.
What is your focus and what should be your focus?
There are a few things we see over and over again with this type of business idea. First up, founders regularly focus on trying to attract both classes of users at the same time. After all, you need both to start making money, don’t you? This can have a bunch of effects, but the two that are absolutely certain are that user numbers won’t grow as fast as you hoped and that you find yourself spread too thin.
At this stage more often than not our advice is to focus on one class of users and build secondary features that draw them to the platform before the marketplace is up and running. In the recruitment example get candidates on board with a great CV builder. Or get the recruiters on board by offering great APIs to publish to all the other job boards. One way or another you’ve got to get a critical mass of one side to attract the other and get the transactions flowing.
It can feel completely arse about face – and expensive – to be building features that aren’t core to your main offering, but unlike a product build you need a critical mass before you can start generating revenue. Like I said, it’s a marathon, not a sprint, and you need to look after your resources accordingly.
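To see why the marathon framing matters, here's a toy model of a two-sided marketplace (illustrative constants only, not a forecast): each side's size next period depends on how attractive the other side looks now, and below a critical mass the whole thing withers.

```python
# A toy model (not a forecast) of why two-sided marketplaces need a
# critical mass before network effects take over. All numbers are
# illustrative assumptions.

def step(a, b, cap=1000.0, k=300.0):
    """One period of growth: each side is drawn in by the other side's size.

    The squared term makes small marketplaces less than proportionally
    attractive, which creates an unstable tipping point (critical mass).
    """
    a_next = cap * b**2 / (b**2 + k**2)
    b_next = cap * a**2 / (a**2 + k**2)
    return a_next, b_next

def simulate(seed, periods=50):
    """Start both sides at `seed` users and run the model forward."""
    a = b = float(seed)
    for _ in range(periods):
        a, b = step(a, b)
    return a

small = simulate(50)   # below the tipping point: withers towards zero
large = simulate(200)  # above it: snowballs towards a stable equilibrium
```

With these particular constants the tipping point sits at 100 users a side: seed the marketplace below that and it decays to nothing; above it, network effects carry it to a stable equilibrium of around 900. The constants are made up, but the shape of the dynamic is exactly why those secondary features that pull one side on board early matter so much.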
Are you doing the last things first?
Secondly, founders can’t help but want to build the core offering straight away. If the business model is selling premium listings for shoes then – obviously, right? – we need to build all the controls and widgets for uploading, managing and billing those listings… Don’t we? Right?
Well. Not necessarily. See my point above. You need to phase your delivery, and when looked at through the lens of generating critical mass those widgets probably aren’t necessary yet. I know that you’ve got a limited budget and once the product is “finished” it’s those widgets that will be the core of you making money, but if you haven’t got anyone to use them yet perhaps the budget is better spent getting users on the site some other way?
A corollary to this is that, insensitive as I’m about to sound, it’s important to make sure your site is attractive when it’s empty. If your logged-out homepage is an infinite scrolling mosaic of tiles, each made up of images uploaded by your eager users, it’s going to look awful bare on day one. Getting the first dancer onto the dancefloor is your most important job; leave worrying about the layout of the bar until after you’ve got people dancing.
Don’t underestimate. It’s never as simple as you think
Thirdly, and lastly for this post, is the biggie. Everyone we’ve ever met has underestimated what it will take to get to critical mass. They underestimate the time, the money, and the sheer volume of changes they’ll need to make along the way.
I’ve said “marathon not a sprint” a couple of times already, so I won’t labour the point… But. Well, you know… Just make sure you’ve got access to more than you’re currently planning to spend.
Your goal here is critical mass, so alongside user acquisition your focus has to be on user retention and reducing churn.
Make sure you’re swimming in analytics, analytics, analytics. These will tell you what your users are doing and give you real insights into what’s driving uptake (and drop off). And be responsive to your users’ behaviour; be willing to change the offering mid flight.
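As a minimal example of the kind of number your analytics should surface, here’s the standard churn calculation (Python, with illustrative figures):

```python
def churn_rate(start, end, new):
    """Fraction of the users you started the period with who left during it.

    start: users at the start of the period
    end:   users at the end of the period
    new:   users acquired during the period
    """
    lost = start + new - end
    return lost / start

# Example: 1,000 users at the start of the month, 300 signed up,
# but only 1,150 remain -- so 150 of the original users churned.
rate = churn_rate(1000, 1150, 300)  # 0.15, i.e. 15% monthly churn
```

Headline sign-up numbers can look healthy while churn like this quietly eats your critical mass from the other end, which is why retention deserves as much attention as acquisition.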
Finally, make sure you’ve got the marketing budget to keep plugging away. You’re going to need it.
We’ve got a ton of experience helping web businesses get from first idea to funding and sale, and we work in every which way, from short term advisory roles through to providing an entire team. If any of what I’ve said above rings true give us a bell; find out how we can help your business take off.
We like Postlight. They’re really good at talking about the reasons they do things and exposing those conversations to the world. In a recent post, Gina Trapani (Director of Engineering at Postlight) gives some really useful advice on when and when not to use WordPress.
I thought it’d be useful to delve into this topic a little and expose some of the conversations we’ve had at Isotoma over the years. We’ve done a lot of what you might call ‘complex content management system’ projects in the past and, as a matter of policy, one of the first things we do when we start talking to potential customers about this kind of thing is ask “Why aren’t we doing this in WordPress?”
This is one of the most valuable questions an organisation can ask themselves when they start heading down the road to a new website. Trapani’s excellent article identifies three key reasons why WordPress represents excellent value:
- You can deliver a high number of features for a very low financial outlay
- It’s commoditised and therefore supportable by a large number of agencies who will compete on price for your business
- It’s super easy to use and thus, easy to hire for
For us though, there’s a more fundamental reason to ask the question ‘Why not WordPress?’
The more customised a software project is, the more complex it becomes. The more complexity there is, the more risk and expense the customer is exposed to. Minimising exposure to risk and reducing expense are always desirable project outcomes for everyone involved in the project – though that’s rarely an explicit requirement.
So all of a sudden, just understanding these issues and asking “Why aren’t we using WordPress?” becomes a really valuable exercise for an organisation.
Good reasons not to choose WordPress
Through asking that question, you’ll discover that there are many valid reasons not to use WordPress. I thought it might be illuminating to unpack some of the most frequent ones we come across, looking back at projects we’ve delivered recently where we or our customers chose not to use WordPress.
1. When the edge case is the project
If you overlay your CMS requirements onto the features WordPress offers, you’ll almost always find a Venn diagram with a huge area of overlap – and a few bits that jut out at the sides.
The bits that jut out at the side? That’s where the expense lives. Delivering these requirements – making WordPress do something it doesn’t already do, or stop doing something it does – can get expensive fast. In our experience, extending any CMS to make it behave more like another product is a sign that you’re using the wrong tool for the job.
In fact, a simpler way of thinking about it is to redraw that Venn diagram as just two regions: the features WordPress already gives you, and the expensive bits that jut out.
If you can cut those expensive requirements then fantastic. We’d always urge you to do so.
But ask this question while you do:
What’s the cost if I need to come back to these requirements in 18 months and deliver them in the chosen platform?
- Is it hard?
- What kind of hard is it?
Is it the kind of hard where someone can tell you what the next steps are? Or the kind of hard where people just suck their teeth and stare off into the middle distance?
The difference between those two states can run into many thousands of pounds, so it’s definitely worth having the conversation before you get stuck in.
If you can’t get rid of the edge cases; if, in fact, the edge cases *are* your project, then we’d usually agree that WordPress is not the way forward.
2. Because you need to build a business with content
We’ve worked with one particular customer since 2008 when they were gearing up to become a company whose primary purpose was delivering high quality content to an incredibly valuable group of subscribers. WordPress would have delivered almost all of their requirements back then but we urged them to go in a different direction. One of the reasons we did this was to ensure that they weren’t building a reliance on someone else’s platform into a critical area of their business.
WordPress and Automattic will always be helpful and committed partners and service providers. However, they are not your business, and they have their own business plans which you have neither access to nor influence over. For our customer, this was not an acceptable situation and mitigating that risk was worth the extra initial outlay.
3. Because vanity, vanity; all is vanity
There is nothing wrong with being a special snowflake. Differentiation is hard and can often be the silver bullet that gets you success where others fail. We understand if there are some intangibles that drive your choice of CMS and broadly support your right to be an agent in your own destiny. You don’t want to use WordPress because WordPress is WordPress?
Congratulations and welcome to Maverick Island. We own a hotel here. Try the veal.
Seriously though, organisational decision making is often irrational and that’s just the way it is. When this kind of thing happens though, it’s important to be able to tell that it’s happening. You should aim to be as clear as possible about which requirements are real requirements and which are actually just Things We Want Because We Want Them. Confusing one with the other is a sure-fire way to increase the cost of your project – both financial and psychic.
If you want to know more about migrating CMSs and the different platforms available, just contact us or send an email to email@example.com. As you can probably tell, this is the kind of thing we like talking about.
On Friday evening an unknown entity launched one of the largest Distributed Denial of Service (DDoS) attacks yet recorded, against Dyn, a DNS provider. Dyn provide service for some of the Internet’s most popular services, and they duly suffered problems. Twitter, Github and others were unavailable for hours, particularly in the US.
DDoS attacks happen a lot, and are generally uninteresting. What is interesting about this one is:
- the devices used to mount the attack
- the similarity with the “Krebs attack” last month
- the motive
- the potential identity of the attacker
Together these signal that we are entering a new phase in development of the Internet, one with some worrying ramifications.
Unlike most other kinds of “cyber” attack, DDoS attacks are brute force – they rely on sending more traffic than the recipient can handle. Moving packets around the Internet costs money so this is ultimately an economic contest – whoever spends more money wins. The way you do this cost-effectively, of course, is to steal the resources you use to mount the attack. A network of compromised devices like this is called a “botnet”.
Most computers these days are relatively well-protected – basic techniques like default-on firewalls and automated patching have hugely improved their security. There is a new class of device, though, generally called the Internet of Things (IoT), which has none of these protections.
IoT devices demonstrate a “perfect storm” of security problems:
- Everything on them is written in the low-level ‘C’ programming language. ‘C’ is fast and small (important for these little computers) but it requires a lot of skill to write securely – skill that is not always available
- Even if the vendors fix a security problem, how does the fix get onto the deployed devices in the wild? These devices rarely have the capability to patch themselves, so the vendors need to ship updates to householders, and provide a mechanism for upgrades – and the customer support this entails
- Nobody wants to patch these devices themselves anyway. Who wants to go round their house manually patching their fridge, toaster and smoke alarm?
- Because of their minimal user interfaces (making them difficult to operate if something goes wrong), they often have default-on [awful] debug software running. Telnet to a high port and you can get straight in to administer them
- They rarely have any kind of built in security software
- They have crap default passwords, that nobody ever changes
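As a rough illustration of how exposed these debug services are, here’s a sketch (Python; the subnet is an assumption – substitute your own) for spot-checking your own LAN for devices answering on the telnet port:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_for_telnet(subnet="192.168.1", port=23):
    """Sweep a /24 for devices answering on the telnet port.

    The subnet default is an assumption for illustration; use your own.
    """
    return [f"{subnet}.{n}" for n in range(1, 255)
            if port_open(f"{subnet}.{n}", port, timeout=0.2)]
```

Anything on your network that answers on port 23 deserves a closer look – and attackers run exactly this kind of sweep at Internet scale, with a list of default passwords to hand.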
To see how shockingly bad these things are, follow Matthew Garrett on Twitter. He takes IoT devices to pieces to see how easy they are to compromise. Mostly he can get into them within a few minutes. Remarkably, one of the most secure IoT devices he’s found so far was a Barbie doll.
That most of these devices are far worse than a Barbie doll should give everyone pause for thought. Then imagine the dozens of them so many of us have scattered around our house. Multiply that by the millions of people with connected devices and it should be clear this is a serious problem.
Matthew has written on this himself, and he’s identified this as an economic problem of incentives. There is nobody who has an incentive to make these devices secure, or to fix them afterwards. I think that is fair, as far as it goes, but I would note that ten years ago we had exactly the same problem with millions of unprotected Windows computers on the Internet that, it seemed, nobody cared about.
The Krebs attack
A few weeks ago, someone launched a remarkably similar attack on the security researcher Brian Krebs. Again the attackers are unknown, and again they launched the attack using a global network of IoT devices.
Given the similarities in the attack on Krebs and the attack on Dyn, it is probable that both of these attacks were undertaken by the same party. This doesn’t, by itself, tell us very much.
It is common for botnets to be owned by criminal organisations that hire them out by the hour. They often have online payment gateways, telephone customer support and operate basically like normal businesses.
So, if this botnet is available for hire then the parties who hired it might be different. However, there is one other similarity which makes this a lot spookier – the lack of an obvious commercial motive.
Mostly DDoS attacks are either (a) political or (b) extortion. In both cases the identity of the attackers is generally known, in some sense. For political DDoS attacks (“hacktivism”) the targets have often recently been in the news, and are generally quite aware of why they’re attacked.
Extortion using DDoS attacks is extremely common – anyone who makes money on the Internet will have received threats, and have been attacked and many will have paid out to prevent or stop a DDoS. Banks, online gaming, DNS providers, VPN providers and ecommerce sites are all common targets – many of them so common that they have experienced operations teams in place who know how to handle these things.
To my knowledge no threats were made to Dyn or Krebs before the attacks and nobody tried to get money out of them to stop them.
What they have in common is their state-of-the-art protection. Brian Krebs was hosted by Akamai, a very well-respected content delivery company who have huge resources – and for whom protecting against DDoS is a line of business. Dyn host the DNS for some of the world’s largest Internet firms, and similarly are able to deploy huge resources to combat DDoS.
This looks an awful lot like someone testing out their botnet on some very well protected targets, before using it in earnest.
The identity of the attacker
There are therefore two likely possibilities for the attacker: either (a) a criminal organisation looking to hire out their new botnet, or (b) a state actor.
If it is a criminal organisation then right now they have the best botnet in the world. Nobody is able to combat this effectively. Anyone who owns this can hire it out to the highest bidder, who can threaten to take entire countries off the Internet – or entire financial institutions.
A state actor is potentially as disturbing. Given the targets were in the US it is unlikely to be a western government that controls this botnet – but it could be one of dozens from North Korea to Israel, China, Russia, India, Pakistan or others.
As with many weapons, a botnet is most effective if used as a threat, and we may never know if it is used as a threat – or who the victims might be.
What should you do?
For an individual, DDoS attacks aren’t the only risk from a compromised device. Anyone who can compromise one of these devices can get into your home network, which should give everyone pause – think about the private information you casually keep on your home computers.
So, take some care in the IoT devices you buy, and buy from reputable vendors who are likely to be taking care over their products. Unfortunately the devices most likely to be secure are also likely to be the most expensive.
One of the greatest things about the IoT is how cheap these devices are, and the capability they can provide at this low price. Many classes of device don’t necessarily even have reliable vendors working in that space. Being expensive and well made is no long-term protection – devices routinely go out of support after a few years and become liabilities.
Anything beyond this is going to require concerted effort on a number of fronts. Home router vendors need to build in capabilities for detecting compromised devices and disconnecting them. ISPs need to take more responsibility for the traffic coming from their networks. Until being compromised causes devices to malfunction for their owner there will be no incentives to improve them.
It is likely that the ultimate fix for this will be Moore’s Law – the safety net our entire industry has relied on for decades. Many of the reasons for IoT vulnerabilities are to do with their small amounts of memory and low computing power. When these devices can run more capable software they can also have the management interfaces and automated patching we’ve become used to on home computers.
One of the services we provide is innovation support. We help companies of all sizes when they need help with the concrete parts of developing new digital products or services for their business, or making significant changes to their existing products.
A few weeks ago the Royal Swedish Academy of Sciences awarded the Nobel Prize for Economics to Oliver Hart and Bengt Holmström for their work in contract theory. This prompted me to look at some of Holmström’s previous work (for my sins, I find economics fascinating), and I came across his 1989 paper Agency Costs and Innovation. This is so relevant to some of my recent experiences I wanted to share it.
Imagine you have a firm or a business unit and you have decided that you need to innovate.
This is a pretty common situation – you know strategically that your existing product is starting to lose traction. Maybe you can see commoditisation approaching in your sector. Or perhaps, as is often the case, you can see the Internet juggernaut bearing down on your traditional business and you know you need to change things up to survive.
What do you do about it? If you’ve been in this situation, Holmström’s analysis will probably resonate. His paper describes the principal-agent problem, a classic in economics: how a principal (who wants something done) can incentivise an agent to do it. The agent and the “contracting” being discussed here could be any kind of contracting, including full-time staff.
A good example of the principal-agent problem is how you pay a surgeon. You want to reward their work, but you can’t observe everything they do. The outcome of surgery depends on team effort, not just one individual. They have things to do besides surgery – developing standards, mentoring junior staff and so forth. Finally, the activity itself is inherently high risk, which means surgeons will make mistakes no matter how competent they are. A performance-related salary would constantly be at risk, which means you need to pay huge bonuses to encourage them to undertake the work at all.
In fact, firms commonly try to innovate using their existing teams – the people delivering the existing product. These teams understand their market. They know the capabilities and constraints of existing systems. They have domain expertise, and would seem to be the ideal place to go.
However, these teams have a whole range of tasks available to them (just as with our surgeon above), and choices in how they allocate their time. This is the “multitasking effect”. This is particularly problematic for innovative tasks.
My personal experience of this is that, when people have choices between R&D type work and “normal work”, they will choose to do the normal work (all the while complaining that their work isn’t interesting enough, of course).
This is why large firms have separate R&D divisions: it allows R&D investment decisions to take place between options that have some homogeneity of risk, which means incentives are more balanced.
However, large firms have a problem with bureaucratisation, and this is a particular problem when you wish to innovate.
Together this leads to a problem we’ve come across a number of times, where large firms have strong market incentives to spend on innovation – but find their own internal incentive systems make this extremely challenging.
If you are experiencing these sorts of problems please do give us a call and see how we can help.
I am indebted to Kevin Bryan’s excellent A Fine Theorem blog for introducing me to Holmström’s work.
Over the last six months we’ve had a lot of interest from customers in the emerging area of chatbots, particularly ones using Facebook Messenger as a platform.
While bots have been around in some form or other for a very long time, the Facebook Messenger platform has catapulted them into prominence. Access to a billion of the world’s consumers is a tempting prospect for many businesses.
In our new whitepaper we review the ecosystem that is emerging around chatbots and provide a guide to some of the factors you should consider if you are thinking about building and deploying one.
The contents include:
- The history of chat interfaces
- What conversational interfaces can do, and why
- Natural Language Processing
- Features provided by chatbot platforms
- An in-depth review of eight of the top chatbot platforms
- Recommendations for next steps, and a look to the future
Please download the whitepaper, and let us know what you think.
“Hello, I’m Andy and I have a stammer.”
While this is true, thankfully very few people notice it nowadays. Like many older stammerers I’ve developed complex and often convoluted strategies to avoid triggering it. But still, if you were to put me on stage and ask me to say that sentence we’d be there all week.
Over the last ten years or so as I’ve aged and gained more control over my stammer I’ve not given it much thought, barring politely turning down the occasional invitation to speak in public. Recently though, I’ve been forced to reassess both it and my coping strategies in the light of the rapid increase in voice interfaces for everything from phones to cars. And that’s made accessibility a very personal issue.
Like many stammerers I struggle with the start of my own name, and sounds similar to it. In the world of articulatory phonetics the sounds that trip me up are called “open vowels”. That is, sounds that are generated at the back of the throat with little or no involvement from the lips or tongue. In English that’s words starting with vowels or the letter H. So the first seven words of the sentence “Hello, I’m Andy and I have a stammer” are pretty much guaranteed to stop me in my tracks (unless I’m drunk or singing – coping strategies!).
We recently got an Amazon Echo for the office and wired it up to a bunch of things, including Spotify. Colleagues tell me it’s amazing, but because the only way I can wake it up is by saying “Alexa!” it’s absolutely useless to me.
And it gets worse. Even if a stammerer is usually able to overcome their problem sounds other factors will increase their likelihood of stammering in a particular situation.
One is over-rehearsal, where the brain has time to analyse the sentence, spot the potentially difficult words and start to worry about them, exacerbating the problem. This can be caused by reading aloud – even bedtime stories for the kids (don’t get me started on Harry and Hermione or Hiccup Horrendous Haddock the Third) – but anything where the words are predetermined can be a problem; be that a sales presentation, giving your name as everyone shakes hands as they walk into a meeting, performing lines from a play, making the vows at your wedding, literally anything where you have time to think about what you’re going to say and can’t change the words.
Speech interfaces currently fall firmly into the realm of over-rehearsal. You’re forced to plan carefully what you’re going to say, and then say it. “Alexa! Play The Stutter Rap by Morris Minor and the Majors” (yeah, that was a childhood high point, let me tell you) is a highly structured sentence and despite Alexa’s smarts it’s the only way you’re going to get that track played. So it’s not only a problematic sound, but it’s over-rehearsed… Doubly bad.
The other common trigger for stammering is often loosely defined as social anxiety, but is anywhere where the stammerer is drawing attention to themselves, either from being the focus of an activity (on stage, say) or from disturbing the normal flow of activity around them (for example, by trying to attract someone’s attention across a crowded room).
If I want to talk to the Echo in our office I know that saying “Alexa!” is going to disturb my colleagues’ flow and cause them to involuntarily prick up their ears, which brings it right into the category of social anxiety… As well as already being a trigger sound and over-rehearsed… Triply bad.
However good my coping strategies might normally be I can’t use any of them when speaking to Alexa, and speaking to Alexa is exactly when I would normally be employing them all. Even when I’m in the office on my own it’s useless to me, because trigger sound and over-rehearsal is enough to stop me.
And the Echo isn’t alone. There’s “Hey, Siri!”, “Hey, Cortana!”, “OK Google!”, and “Hi TV!”. All of them, in fact. Right now all of the major domestic voice controls use wake words that start with an open vowel. Gee. Thanks everyone.
Google recently announced that 20% of mobile searches use voice rather than text. More than half of iOS users use Siri regularly. Amazon and Microsoft are doubling down on Echo and Cortana, respectively. Tesla are leading the way in automotive, but all the major manufacturers offer some form of voice control for at least some of their models. It makes absolute sense for them to do so – speech is such a natural interface, right? And it’s futuristic – it’s the stuff of Star Trek. Earl Grey, Hot! and all that. But just as screen readers have constantly struggled to keep up with web technologies we’re seeing developers doomed to repeat those same mistakes with voice interfaces, as they leap ahead without consideration for those that can’t use them.
To give some numbers and put this in context there are approximately twice as many stammerers in the UK (1% of the population) as there are registered visually impaired or blind (0.5% of the population). That’s a whole chunk of people. And while colleagues would say that me not being able to choose music for the stereo is a benefit not a drawback, it makes light of the fact that a technology we generally think of as assistive is not a panacea for all.
Currently Siri, Cortana, Samsung TVs and Alexa can only be addressed with sentences that start with an open vowel (Siri, Cortana and Samsung can’t be changed; Alexa can, but only to one of “Alexa”, “Echo” and “Amazon”). Google on Android can thankfully be changed to any phrase the user likes, even if the process is a little convoluted. Most interesting to me, though, is that the Amazon Echo offers no alternative interface at all. It is voice control only, and has to be woken with an open vowel. It is the worst offender.
For me this has been an object lesson in checking my privilege. Yes, I’m short sighted, but contact lenses give me 20/20 vision. I had a bad back for a while, but I was still mobile. This is the first piece of technology that I’ve actually been unable to use. And it’s not a nice experience. As technologists we know that accessibility is important – not just for the impaired but for everyone – yet we rarely feel it. I’m sure feeling it now.
Voice control is still in its infancy. New features and configurations are being introduced all the time. Parsing will get smarter so that wake words can be changed and commands can be more loosely structured. All of these things will improve accessibility for those of us with speech impediments, who are non-verbal, have a throat infection, or are heavily accented.
But we’re not there yet, and right now I’ve got to ask Amazon… Please, let me change Alexa’s name.
I was thinking Jeff?
Here at Isotoma Towers we’ve recently started filling our otherwise spartan office with plants. Plants are lovely but they do require maintenance, and in particular they need timely watering.
Since we’re all about automation here, we decided to use this as a test case for building some Internet of Things (IoT) devices. One of my colleagues pointed out this great moisture sensor from Catnip.
There are lots and lots of choices for how to build something like this, and this blog post is going to talk about design decisions. See below the fold for more.
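As a taste of those decisions, here’s a minimal sketch of the watering logic such a device might run. The Catnip sensor reports a raw capacitance value, but the usable range varies with the sensor and the soil, so the thresholds below are assumptions you’d calibrate yourself (lower reading = drier soil):

```python
def should_water(reading, pump_on, dry=350, wet=450):
    """Decide the pump state from a raw moisture reading, with hysteresis.

    reading: raw capacitance from the sensor (assumed: lower = drier)
    pump_on: whether the pump is currently running
    dry/wet: illustrative calibration thresholds, not datasheet values
    """
    if reading < dry:
        return True    # definitely dry: start (or keep) watering
    if reading > wet:
        return False   # definitely wet: stop
    return pump_on     # in the dead band: keep the current state
```

Using two thresholds rather than one is a deliberate design choice: a single cut-off would have the pump flapping on and off every time the reading wobbled around it.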
Clickbait titles are fun, but bear with me, good people. I’m trying to make a point.
This report was wafted under my nose the other day. It makes for depressing, but not terribly surprising, reading. The first paragraph pretty much nails it.
Anyone who’s spoken to me in a professional capacity in the last three months will probably recognise that Smith & Beta’s report is quantitative confirmation of what I’ve been going on about for ages. Each of the findings below makes me sad – but also, because I am a shallow, vapid person, I still get to feel happy that I’m right.
1) Good quality creative requires good quality technical implementation
Agencies lead with creative vision and lean on technical skills (internal & external) to deliver this vision. No one ever won a pitch by saying that the creative will be a strong C+ but it’s going to be implemented really well. Sadly, the opposite is almost always true. The industry is generally OK with taking an amazing creative idea and delivering it late, over-budget and on top of a pile of bodies of fallen colleagues.
2) This technical resource – where it exists within an agency – is often siloed and over-committed
Because of the way the creative industry works, creative resource is always going to be an expense the agency is happy to invest in. Investing in technical resource, however, is a more expensive, slower, trickier business.
Similarly, investing in older, more skilled resource is always going to be a harder sell when there are countless thousands of young and exploitable juniors clamouring for your attention.
An agency trying to walk the line between capability and capacity in order to really call themselves “Full Service” will end up with a safe but middle-of-the-road offer. Conversely, an agency who shoots for the moon and invests in a highly specialised and/or highly senior team may find that they’ve painted themselves into a very expensive corner.
3) It’s hard to hire your way out of this problem
I mean, duh, obviously. It’s hard to hire your way out of any problem. Recruitment, training and increasing retention are sloooooow processes. And the problems that this report outlines are problems of the now.
(Side-note: In my role here at Isotoma, I often end up talking to agencies about projects that we can collaborate on. I’m usually talking about projects that might be coming up in, say, 6 months, but people actually want help RIGHT NOW.)
4) These problems, when considered together, reduce the satisfaction of the customer and shorten the lifetime of the account
As abusive as the client/agency model can be, there’s a satisfyingly stark bottom line to it: “Do good work; get more work.” Note that this is distinct from “Pitch good creative; get more work.”
As I said above, no one ever won a pitch for outlining a competent implementation plan, but once the project is over and the smoke settles, the customer doesn’t just remember the pitch.
(If you’re really unlucky, the people who were in the pitch don’t even work for the customer anymore…)
The knife edge that a marcomms agency has to walk is being able to deliver creative vision *and* technical competence in a way that doesn’t fundamentally alter what the company is. Go too far in one direction and you’re unable to deliver anything profitably, go too far in the other and you’ve magically become a company that you don’t want to be.
So this is one of the reasons that Isotoma do what we do. We’re already a technical agency. We’re already geared up to help you estimate, deliver and, crucially, support a creative campaign. We’re good partners. And the better we get at ploughing this particular furrow, the better we’re able to help and complement agencies who’ve chosen to plough another.
And that makes me happy.
(See? I was being cynically provocative to attract clicks. And the pug at the top? The cherry on the cake, my friend. Truly I am a monster.)
Myvue.com was never what I’d call wonderfully designed, but it did its job. It did it so well, in fact, that it’s one of the reasonably few sites I’ve bookmarked on my phone, and one of the even fewer bookmarks that I actually use on a regular basis.
Specifically, I bookmarked the URL of my local cinema. Here’s how it looked until a month or so ago:
Pretty simple, right? It shows me a vertical list of movies showing today, and the times they’re showing at. It defaults to today, but at the top of the list are tabs for the next 5 days. It’s not exactly mobile-optimised, but it’s perfectly usable on my iPhone.
The site also does plenty of other things, all of which are pretty much useless. I’m not here to watch a trailer. I don’t buy tickets online as it takes a minute to buy at the cinema and it’s never sold out or full. Where do “user ratings” even come from and why do I care? Why would anyone go to this site to find films to watch by genre? Why would I register on a site like this? What’s the point of literally any of the rest of the site’s navigation? Anyway, that’s by the by. It does its central job well, showing me every movie that’s showing on that day, at what times, on one page.
So the other day I used my bookmark again and noticed immediately they’ve redesigned. It looks new and expensive. It adapts to my mobile device. And it’s now utterly useless, particularly on mobile. Since 99% of the time I use this site it’s on my iPhone, that’s what I’ll use for the rest of this review.
This is what you now see at the same URL:
The entire first screen is taken up by a film poster, which turns out to be a slideshow. Carousels are annoying enough, but this one makes it extremely difficult to know what page I’m on, because judging by what I see on the screen, I’m on a page for Sausage Party.
Just pause to consider how pointless this slideshow is (whilst adding who knows how much to the download time). It’s a sequence of movies showing at this cinema. Which is… exactly what the list below it is. Except this is a slideshow, and that’s a vertical list. Someone must have insisted on a slideshow.
Scrolling past this annoyance you get to the vertical list of movies showing that day. The posters are now so large only 2 fit on screen at once (even on desktop!), yet they’ve removed the short description, leaving only the title and… what’s this? “Get times & tickets”? Why don’t you just show the times like you used to? So now I have to navigate to get the times for every movie I’m interested in?
[Update 14 Sep 2016: MyVue have added showtimes back in on the listing page! I wish they would also show the film’s running time as they used to, though.]
So I click “Get times & tickets” and… WHAT?! Another page for the movie I just clicked on, with an enormous backdrop image but no useful information on it, and another big “GET TIMES & TICKETS” button! So I click that, and a panel slides laboriously in from the right, displays a “working” spinner for all of 7 seconds, before finally showing me the times. Wow, it really worked hard to show me some text-based information. There’s no caching, by the way. Next time I request showtimes it’ll take another 7 seconds.
Now I want to see what times other movies are showing, so I go Back. Back to the useless screen with the backdrop (let’s call it the Product screen). So I go Back again. Whoops, here comes the sliding panel with showtimes again. Clicking Back a third time is the charm. (Although it’s hard at first to tell I’m back on the listing, because an unrelated movie – the slideshow at the top – is filling the screen.)
The above buggy behaviour is actually the best-case scenario. If you clicked the X in the corner of the sliding showtimes panel instead of Back, you’d find yourself back at the Product screen with no escape. Clicking Back again would restore the showtimes panel, and so on, trapping you in an endless loop.
The bottom line is I’m removing this bookmark from my phone, as it is now useless. A Google search for “what’s showing at vue fulham” gives me the information I want.
What went wrong here?
Firstly, despite the mobile-optimised layout, it’s obvious that the site was designed and built with a desktop or widescreen display in mind. It looks like the designers wanted something that looks like today’s media centre interfaces, like Plex or Apple TV. The enormous posters, backdrops and spacious page layouts are typical of a “lean-back” design. Also, the desktop version includes stuff that’s missing on mobile – the Product screen even has screening times for today, saving one click. But ask yourself: is this site anywhere in the same category as these media centre apps? Where are people likely to be when checking what’s showing at their cinema that evening? How quickly do they want this information? Mobile should have been treated as at least equally important.
Media centre interfaces also necessarily involve deep levels of navigation, a handicap born of lack of space on the screen and a remote-control interface. On browsers it’s easier to scroll and click on targets, and if you can avoid deeper levels of navigation, you do so.
But secondly, it’s clear that the designers had a very different idea of the primary user journey from me. You can see this clearly in the super-prominent “Quick Book” widget. On a desktop, you can at least see what the widget does, but on the mobile it’s entirely mysterious what “Quick Book” will do. But when invoked, it’s clear that the designers consider the website’s primary purpose to be buying tickets online, and that users don’t care so much about where or when it’s showing, as long as it’s the one movie they want. (The widget does not default location to the current cinema selected, and does not default date to today.)
Admittedly I don’t know how typical I am of Vue cinemagoers, but I don’t buy tickets online, I’m 99% certain to go to my nearest Vue rather than somewhere else, and there may be more than one movie I’m interested in seeing. My decision ultimately depends on what’s the most convenient time within the next 5 days. With the “Quick Book” widget, I’d have to use 3 dropdowns (which should be the UI of last resort) – 7 clicks – before even being able to see which times it’s showing for that day, which may well rule it out.
I used to be able to see what’s showing today at my local cinema, and when, with a single tap on my phone. Two taps if I wanted to check another day. Now, to check the times for a movie requires 3 taps, with loading time between each. Checking the times for another movie adds another 5 taps. Checking a different day… you get the picture. This redesign has rendered the site unusable, for me, and I would guess a large proportion of its previous users.