Our Plants Need Watering Part II

This is the second post in a series doing a deep dive into Internet of Things implementation.  If you didn’t read the first post, Our Plants Need Watering Part I, then you should read that first.

This post talks about one of the most important decisions you’ll make in an IoT project: which microcontroller to use. There are lots of factors, and some of them are quite fractal, but I think I can make some concrete recommendations based on what I’ve learned so far, which might help you with your next IoT project.

This post gets really technical, I’m afraid; there’s no way of comparing microcontrollers without getting into the weeds.

There are thousands of different microcontrollers on the market, and they are all different. How you choose the one you want depends on a whole range of factors; there is no one-size-fits-all answer.

Inside a microcontroller

A microcontroller is a single chip that provides all the parts you require to connect software and hardware together. You can think of it as a tiny, complete, computer with CPU, RAM, storage and IO. That is where the resemblance ends though, because each of these parts is quite different from the computers you might be used to.

 

CPU

The Central Processing Unit (CPU) takes software instructions and executes them. This is the bit that controls the rest of the microcontroller, and runs your software.

Microcontroller CPUs come in all shapes and sizes, and this governs the performance and capabilities of the complete package. Mostly, though, the impact of your CPU choice is smaller than you might think: toolchains and libraries shield you from most of the differences between CPU platforms.

Really it is price and performance that matter most, unless you need very specific capabilities. If you want to do floating-point calculations, or high-speed video or image processing, then you’re going to select a platform with those capabilities.

Flash Memory

The kind of computers we are used to dealing with have hard disks to hold permanent storage. Microcontrollers generally do not have access to hard disks. Instead they have what is called “flash” memory. This is permanent – it persists even if power is disconnected. The name “flash” comes from the way the memory is erased “like a camera flash”. It’s colloquially known as just “flash”.

You need enough flash to store your code. The amount of flash available varies tremendously. For example the Atmel ATtiny25 has a whole 2KB of flash whereas the Atmel ATSAM4SD32 has 2MB.

Determining how big your code will be is an important consideration, and often depends on the libraries you need to use. Some quotidian things we take for granted in the macro world, like C’s venerable printf function, are too big in their normal form to fit onto many microcontrollers.
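To give a flavour of the trade-off, here is a minimal sketch of the sort of thing you write instead of pulling in printf: a tiny helper that prints an unsigned integer one character at a time. The uart_putc function is a placeholder for whatever "send one byte" routine your platform provides.

```cpp
#include <stdint.h>

/* Placeholder: on a real part this would push one byte out of your UART
   (or bit-banged serial port - see the I/O section below). */
extern void uart_putc(char c);

/* Print an unsigned integer in decimal without dragging in printf:
   a few dozen bytes of code rather than the kilobytes a full printf
   implementation can cost. */
void print_uint(uint32_t value)
{
    char buf[10];              /* enough digits for 2^32 - 1 */
    uint8_t i = 0;

    do {
        buf[i++] = '0' + (value % 10);
        value /= 10;
    } while (value != 0);

    while (i > 0) {
        uart_putc(buf[--i]);   /* digits were generated in reverse */
    }
}
```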

Static RAM (SRAM)

Flash is not appropriate for storing data that changes. This means your working data needs somewhere else to go, and that is generally SRAM. You will need enough SRAM to hold all your changeable data.

The amount of SRAM available varies widely. The ATtiny25 has a whole 128 bytes (far less than the first computer I ever programmed, the ZX81, and that was 35 years ago!). At the other end of the scale the ATSAM4SD32 has 160K, and can support separate RAM chips if you need them.
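To make that concrete, here is a hedged illustration (the buffer sizes are invented for the example) of how quickly 128 bytes disappears once you declare a few static buffers.

```cpp
#include <stdint.h>

/* On something like an ATtiny25, with 128 bytes of SRAM, these few
   declarations already consume most of the working memory, before the
   stack has used a single byte. */
static uint8_t  sample_buffer[64];   /* 64 bytes: half the SRAM gone */
static uint16_t totals[16];          /* 32 bytes                     */
static uint8_t  state_flags;         /*  1 byte                      */

/* 97 of 128 bytes used; the remaining ~31 bytes must hold the call
   stack, local variables and any interrupt context. */
```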

I/O Pins

Microcontrollers need to talk to the outside world, and they do this via their I/O pins. You are going to need to count the pins you need, which will depend on the devices you plan to connect your microcontroller to.

Simple things like buttons, switches, LEDs and so forth can be driven through individual I/O pins directly in software, and this is a common use case. Rarely do you build anything that doesn’t use a switch, a button or an LED.
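As a sketch of what driving pins individually looks like, here is the kind of AVR-style C you might write to light an LED while a button is held down. The choice of pins is arbitrary, and other vendors’ parts have equivalent registers.

```cpp
#include <avr/io.h>

int main(void)
{
    DDRB |= (1 << PB0);      /* PB0 as output: drives the LED      */
    DDRB &= ~(1 << PB1);     /* PB1 as input: reads the button     */
    PORTB |= (1 << PB1);     /* enable the internal pull-up on PB1 */

    for (;;) {
        if (PINB & (1 << PB1)) {
            PORTB &= ~(1 << PB0);   /* pin high: button released, LED off */
        } else {
            PORTB |= (1 << PB0);    /* pin low: button pressed, LED on    */
        }
    }
}
```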

If you are going to talk digital protocols, however, you might well want hardware support for them. This means considering things like I²C, RS-232 or SPI.

A good example of this is plain old serial. Serial is a super-simple protocol that dates back to the dark ages of computing. One bit at a time is sent over a single pin, and these are assembled together into characters. Serial support needs a bit of buffering, some timing control and possibly some flow control, but that’s it.

The ATtiny range of microcontrollers has no hardware support for serial, so if you even want to print text to your computer’s serial port you will need to do that in software on the microcontroller. This is slow, unreliable and takes up valuable flash. It does work, though, at slow speeds; timing gets unreliable pretty quickly when you do things in software.
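To see why, here is a rough sketch of a bit-banged transmit routine. The set_tx_pin and delay_us helpers are hypothetical; at 9600 baud each bit lasts roughly 104 microseconds, and if an interrupt delays you mid-character the receiver sees garbage.

```cpp
#include <stdint.h>

/* Hypothetical helpers: drive the TX pin and busy-wait for N microseconds. */
extern void set_tx_pin(uint8_t level);
extern void delay_us(uint16_t microseconds);

#define BIT_TIME_US 104   /* roughly one bit period at 9600 baud */

/* Send one byte as 8N1 serial, least significant bit first. */
void soft_uart_send(uint8_t byte)
{
    set_tx_pin(0);                     /* start bit */
    delay_us(BIT_TIME_US);

    for (uint8_t i = 0; i < 8; i++) {
        set_tx_pin(byte & 0x01);       /* data bits, LSB first */
        delay_us(BIT_TIME_US);
        byte >>= 1;
    }

    set_tx_pin(1);                     /* stop bit, line idles high */
    delay_us(BIT_TIME_US);
}
```

Every one of those delays is time the CPU cannot spend doing anything else, which is why hardware support matters.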

At the other end you have things like the SAM3X8E, based on the ARM Cortex-M3, which has a UART and three USARTs: hardware support for high-speed (well, 115200 baud) connections to several devices simultaneously and reliably.
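On an Arduino Due, for example (which uses the SAM3X8E), those ports are exposed as Serial and Serial1 to Serial3, so talking to two devices at once is a short sketch rather than a pile of hand-tuned timing code. A minimal pass-through looks something like this:

```cpp
// Arduino-style sketch for a board with multiple hardware serial ports,
// such as the Due: Serial is the UART, Serial1-3 are the USARTs.
void setup() {
  Serial.begin(115200);    // link back to the PC
  Serial1.begin(115200);   // a second device, e.g. a radio module
}

void loop() {
  // Relay anything the device sends up to the PC, and vice versa.
  if (Serial1.available()) {
    Serial.write(Serial1.read());
  }
  if (Serial.available()) {
    Serial1.write(Serial.read());
  }
}
```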

Packaging

There are loads of different packaging formats for integrated circuits. Just check out the list on Wikipedia. Note that when you are developing your product you are likely to use a “development board”, where the microcontroller is already mounted on something that makes it easy to work with.

Here is a dev board for the STM32 ARM microprocessor:

(screwdriver shown for scale).

You can see the actual microprocessor here on the board:

Everything else on the board is to make it easier to work with that CPU – for example adding protection (so you don’t accidentally fry it), making the pins easier to connect, adding debug headers and also a USB interface with a programmer unit, so it is easy to program the chip from a PC.

For small-scale production use, “through hole” packages like DIP can be worked with easily on a breadboard, or soldered by hand. For example, here is a complete microcontroller, the LPC1114FN28:

Others, like “chip carriers”, can fit into mounts that are relatively easy to use, and finally there are “flat packages”, which you would struggle to solder by hand:

Development support

It is all very well choosing a microcontroller that will work in production – but you need to get your software written first. This means you want a “dev board” that comes with the microcontroller helpfully wired up so you can use it easily.

There are dev boards available for every major platform, and mostly they are really quite cheap.

Here are some examples I’ve collected over the last few years:

The board at the bottom there is an Arduino Due, which I’ve found really useful.  The white box connected to it is an ATMEL debug device, which gives you complete IDE control of the code running on the CPU, including features like breakpoints, watchpoints, stepping and so forth.

Personally I think you should find a dev board that fits your needs first, then choose a production microcontroller that is sufficiently similar. A workable development environment is absolutely your number one goal!

Frameworks, toolchains and libraries

This is another important consideration – you want it to be as easy as possible to write your code, whilst getting full access to the capabilities of the equipment you’ve chosen.

Arduino

Arduino deserves a special mention here, as a spectacularly accessible way into programming microcontrollers. There is a huge range of Arduino and Arduino-compatible devices, starting at only a few pounds and going right up to some pretty high-powered equipment.

Most Arduino boards have a standard layout allowing “shields” to be easily attached to them, giving easy standardised access to additional equipment.

The great advantage of Arduino is that you can get started very easily. The disadvantage is that you aren’t using equipment you could go into production with directly. It is very much a hobbyist solution (although I would love to hear of production devices using Arduino code).
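To show just how little ceremony is involved, this is a complete Arduino program: the classic blinking LED.

```cpp
// A complete Arduino sketch: blink the on-board LED once a second.
void setup() {
  pinMode(LED_BUILTIN, OUTPUT);   // LED_BUILTIN is pin 13 on most boards
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);
  delay(500);                     // milliseconds
  digitalWrite(LED_BUILTIN, LOW);
  delay(500);
}
```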

Other platforms

Other vendors have their own IDEs and toolchains – many of which are quite expensive.  Of the ones I have tried Atmel Studio is the best by far.  First it is free – which is pretty important.  Second it uses the gcc toolchain, which makes debugging a lot easier for the general programmer.  Finally the IDE itself is really quite good.

Next time I’ll walk through building some simple projects on a couple of platforms and talk about using the Wifi module in earnest.

 

One Pound in Three

Can we talk about this:

Big opportunities for small firms: government set to spend £1 in every £3 with small businesses

When its predecessor (of £1 in 4) was announced in 2010 many of us were sceptical, so it was fantastic news in 2014 when the National Audit Office announced that this target had not only been met, but exceeded. I don’t think anyone doubts that the new £1 in 3 target will be achieved by 2020; a real measure of confidence in the commitment to these plans.

It’s fair to say that it’s genuinely been a great move forward. It’s taken some time – as you might expect – both for this to trickle all the way down to the smaller end of the SME sector and for departments and other bodies to get their procurement processes aligned, but in the last couple of years we’ve seen many positive and concrete changes to the way the public sector procures services.

We’ve been involved in quite a few of these SME tendering processes in the last year or so and have seen a full range of tenders from the very good through to the very bad. What’s clear is that things are continuing to improve as buyers and their procurement departments learn to navigate the new types of relationships that the public sector has with these smaller suppliers.
So a lot’s changed, but what could still improve?

Procurement workshops and briefing days

Soon after the 2010 announcement and in the midst of a fashion for “hackathons” and the open web these were all the rage; you could hardly go a week without one body or another running an event of this type.

You know the ones. Every Government department and even most local councils run them; non-government public bodies like the BBC, Channel 4 and JISC love them too. The intention is absolutely sound – you want to get us excited about working with you, outline the projects that we might be working on, help shape our proposals, and ultimately make sure we understand that you’re worth the effort of us pitching to.

There’s no doubt that these are great events to attend. But. They’re often marketed as “great opportunities” and there’s frequently a sense that we must attend to ensure that we don’t miss out. But time out of the office costs money, as does getting half way across the country because the “North” briefing is in London (I kid you not, that’s happened to me more than once). On top of that the audience and content of the talks at these events can be scarily similar regardless of location or presenting organisation. There’s nothing more disheartening than arriving at another one of these events to a feeling that only the venue and speakers have changed.

It’s obviously vitally important that you get these messages across, but please try and make sure that the events themselves don’t feel compulsory. SMEs are time poor (particularly the good ones); if it’s clear that I’m not going to miss out if I don’t attend and that all the information I need will be online then I may well choose not to come. It doesn’t mean I’m not engaged, just that new channels like this are things I often need to explore outside the usual working day.
There’s often a sense that if you make it really explicit at the workshop what you’re after, you’ll cut down on the number of inappropriate responses to your tenders. Sadly the opposite is often true: once someone has spent a lot of time and money attending one of the briefing days they will pitch for absolutely everything, because they now feel invested, and they’ve met you. Sunk-cost thinking affects us all.

Luckily the number of these apparently mandatory briefing days is reducing, with some organisations doing away with them entirely, replacing them with live web conferences, pre-recorded video presentations and detailed (and high quality) documentation. I’d love to see them done away with entirely, though.

Keeping contracts aligned

It’s a fair assumption that during the briefing days every single speaker will have made at least one reference to Agile. And it’s likely that Agile was the main topic of at least one talk. Because Agile is good. You get that. We get that. Agile makes absolute sense for many of the kinds of projects that the public sector is currently undertaking. Digital Transformation is certainly not easy, it’s definitely not cheap and it’s absolutely not going to be helped by a waterfall, BDUF approach.

But if you’re honestly committed to Agile please please ensure that your contracts reflect that. We’ve recently had to pull out of two tenders where we’d got down to the last round because the contract simply couldn’t accommodate a genuine Agile delivery. We know Agile contracts are hard, but if you’ve spent the entire procurement process actively encouraging people to pitch you an Agile approach you need to present an Agile contract at the end of it. Companies as old and grizzled as Isotoma may feel forced – and be willing – to back away, but for many agencies it’s a trap they unwittingly fall into which ultimately does nothing for either party.

It’s also worth remembering that it’s unlikely any SME you deal with has internal legal advice, so contract reviews are an expensive luxury. If you present a mandatory contract at the start of the tender process most of us will glance over it before ploughing ahead. We certainly aren’t going to pay for a full scale review because we know it’ll cost a fortune and the lawyer is only going to tell us it’s too risky and we shouldn’t pitch anyway. One contract we were presented with by a government department was described by our lawyer as a “witch’s curse”. We still pitched. Didn’t win it. Probably for the best.

Timelines

They say it’s the hope that kills you.

Small businesses are, by definition, small. The kind of procurements I’m talking about here are for services, not products, which means that people – our people, our limited number of people – are going to be required for the delivery. If the timeline on the procurement says “award contract on 17th February 2017, go live by end June 2017” we’re going to start trying to plan for what winning might look like. This might well involve subtly changing the shape of other projects that we’ve got in flight. If we’re really confident it might even mean turning away other work.

When we get to the 17th February and there’s no news from you, what are we supposed to do? Do we hold back the people we’d pencilled in for this work and live with the fact that they’re unbilled? And then when the 24th February comes and there’s another round of clarification questions, but you commit to giving us an answer by the following week, what do we do then? And so on. And so on.

The larger the business you’re dealing with, the easier it finds absorbing these kinds of changes to timelines, but that’s one of the reasons larger businesses are more expensive. SMEs are small, they’re nimble, but they also rely on keeping their utilisation high and their pipeline flowing. Unrealistic procurement timelines combined with fixed delivery dates can make pitching for large tenders very uncomfortable indeed.

To summarise

As I said at the start, things have made huge leaps forward over the past couple of years. The commitment to pay 80% of all undisputed invoices within 5 days is a great example of how the public sector is starting to really understand the needs of SMEs, as are removing the PQQ process for smaller contracts, dividing contracts into lots, and explicitly supporting consortia and subcontracting.

In 2016 we’ve been to sadly uninformative developer days for an organisation that has offered wonderfully equitable Agile contracts and extremely clear and accurate timelines. We’ve pitched for work that was beautifully explained online with no developer day, but that presented a bear trap of a contract, and we’ve pitched for work that was perfect except for the wildly optimistic timelines and that finally awarded the contract 3 months after the date in the tender.

Things are definitely getting better, but a few more little tweaks could make them perfect.
Here’s to £1 in 3, and the continuing good work that everyone is doing across the sector.

Design Museum interior

London’s new Design Museum

I have very little nostalgia for the old Design Museum building. Its location near Tower Bridge was always a real effort to get to, and while an attractive modernist icon, it always felt small, very much one of London’s “minor” museums – not befitting London’s reputation as a global design powerhouse. On 21 November it reopened at a new location in Kensington, and I visited on the opening weekend.

Part I: The new Design Museum and the exhibitions

Part II: A digital Design Museum?

Start ups: launching a marketplace

A year or so ago we signed up for a bunch of free Cycle Alert tags. They’re RFID tags you attach to your bike that ping a sensor in the cab of suitably equipped vehicles, warning the driver that a cyclist is nearby. We even did some truly adorkable PR to go with it. If we ignore the subtle whiff of victim-blaming it’s a nice idea; after all, who doesn’t want to be safer on their bike?

Twelve months on though, and it’s all fallen a bit flat.

Why? Because not enough cyclists have the tags on their bikes and not enough vehicles have the sensors installed. Like so many businesses the Cycle Alert model is predicated on making both ends of a market simultaneously, and therein lie some serious problems.

Your business is a different beast

Around half the start-ups that reach out to us for help have a business model that relies on taking a percentage from transactions occurring on their platform. The idea might be a new twist on recruitment services, or fashion retail, or services for motor tuning enthusiasts; who knows? But they share that same problem. Without vacancies on the site why would I upload my CV? Without CVs on the site why would I post my advert or pay a fee to search? Without beautiful shoes on the site why would I visit? Without shoppers why would I upload my beautiful shoes?

This isn’t to say that applications that rely on making a market are bad ideas, but they need treating quite differently. If your business is a straight product build you can pretty safely build an MVP (for your personal definition of M, V and P, obviously) and start marketing the hell out of it, but a marketplace of any sort needs more careful planning; it’s a marathon, not a sprint, and founders – and their funders – need to plan their resources accordingly.

What is your focus and what should be your focus?

There are a few things we see over and over again with this type of business idea. First up, founders regularly focus on trying to attract both classes of users at the same time. After all, you need both to start making money, don’t you? This can have a bunch of effects, but the two that are absolutely certain are that user numbers won’t grow as fast as you hoped and that you find yourself spread too thin.

At this stage more often than not our advice is to focus on one class of users and build secondary features that draw them to the platform before the marketplace is up and running. In the recruitment example get candidates on board with a great CV builder. Or get the recruiters on board by offering great APIs to publish to all the other job boards. One way or another you’ve got to get a critical mass of one side to attract the other and get the transactions flowing.

It can feel completely arse about face – and expensive – to be building features that aren’t core to your main offering, but unlike a product build you need a critical mass before you can start generating revenue. Like I said, it’s a marathon, not a sprint, and you need to look after your resources accordingly.

Are you doing the last things first?

Secondly, founders can’t help but want to build the core offering straight away. If the business model is selling premium listings for shoes then – obviously, right? – we need to build all the controls and widgets for uploading, managing and billing those listings… Haven’t we? Right?

Well. Not necessarily. See my point above. You need to phase your delivery, and when looked at through the lens of generating critical mass those widgets probably aren’t necessary yet. I know that you’ve got a limited budget and once the product is “finished” it’s those widgets that will be the core of you making money, but if you haven’t got anyone to use them yet perhaps the budget is better spent getting users on the site some other way?

A corollary to this is that, insensitive as I’m about to sound, it’s important to make sure your site is attractive when it’s empty. If your logged-out homepage is an infinite scrolling mosaic of tiles, each made up of images uploaded by your eager users, it’s going to look awful bare on day one. Getting the first dancer onto the dancefloor is your most important job; leave worrying about the layout of the bar until after you’ve got people dancing.

Don’t underestimate. It’s never as simple as you think

Thirdly, and lastly for this post, is the biggy. Everyone we’ve ever met has underestimated what it will take to get to critical mass. They underestimate the time, the money, and the sheer volume of changes they’ll need to make along the way.

I’ve said “marathon not a sprint” a couple of times already, so I won’t labour the point… But. Well, you know… Just make sure you’ve got access to more than you’re currently planning to spend.

Your goal here is critical mass, so alongside user acquisition your focus has to be on user retention and reducing churn.

Make sure you’re swimming in analytics, analytics, analytics. These will tell you what your users are doing and give you real insights into what’s driving uptake (and drop off). And be responsive to your users’ behaviour; be willing to change the offering mid flight.

Finally, make sure you’ve got the marketing budget to keep plugging away. You’re going to need it.

We’ve got a ton of experience helping web businesses get from first idea to funding and sale, and we work in every which way, from short term advisory roles through to providing an entire team. If any of what I’ve said above rings true give us a bell; find out how we can help your business take off.

When to WordPress; When not to WordPress.

We like Postlight. They’re really good at talking about the reasons they do things and exposing these conversations to the world. In a recent post, Gina Trapani (Director of Engineering at Postlight) gives some really useful advice on when and when not to use WordPress.

I thought it’d be useful to delve into this topic a little and expose some of the conversations we’ve had at Isotoma over the years. We’ve done a lot of what you might call ‘complex content management system’ projects in the past and, as a matter of policy, one of the first things we do when we start talking to potential customers about this kind of thing is ask “Why aren’t we doing this in WordPress?”

This is one of the most valuable questions an organisation can ask themselves when they start heading down the road to a new website. Trapani’s excellent article basically identifies 3 key reasons why WordPress represents excellent value:

  1. You can deliver a high number of features for a very low financial outlay
  2. It’s commoditised and therefore supportable by a large number of agencies who will compete on price for your business
  3. It’s super easy to use and thus, easy to hire for

Complexity issues

For us though, there’s a more implicit reason to ask the question ‘Why not WordPress?’
The more customised a software project is, the more complex it becomes. The more complexity there is, the more risk and expense the customer is exposed to. Minimising exposure to risk and reducing expense are always desirable project outcomes for everyone involved in the project – though that’s rarely an explicit requirement.

So all of a sudden, just understanding these issues and asking the question “Why aren’t we using WordPress?” becomes a really valuable question for an organisation to ask.

Good reasons not to choose WordPress

Through asking that question, you’ll discover that there are many valid reasons not to use WordPress. I thought it might be illuminating to unpack some of the most frequent ones we come across. So I thought back to projects we’ve delivered recently and teased out some reasons why we or our customers chose not to use WordPress.

1. When the edge case is the project

If you overlay your CMS requirements onto the features WordPress offers, you’ll almost always find a Venn Diagram that looks a lot like this:
[Venn diagram: your requirements overlaid on the features WordPress offers]

The bits that jut out at the side? That’s where the expense lives. Delivering these requirements – making WordPress do something it doesn’t already do, or stop doing something it does – can get expensive fast. In our experience, extending any CMS to make it behave more like another product is a sign that you’re using the wrong tool for the job.

In fact, a simpler way of thinking about it is to redo the Venn diagram:

[The Venn diagram, redrawn]

If you can cut those expensive requirements then fantastic. We’d always urge you to do so.
But ask this question while you do:

What’s the cost if I need to come back to these requirements in 18 months and deliver them in the chosen platform?

  • Is it hard?
  • What kind of hard is it?

Is it the kind of hard where someone can tell you what the next steps are? Or the kind of hard where people just suck their teeth and stare off into the middle distance?

The difference between those two states can run into the thousands and thousands of pounds so it’s definitely worth having the conversation before you get stuck in.

If you can’t get rid of the edge cases; if, in fact, the edge cases *are* your project, then we’d usually agree that WordPress is not the way forward.

2. Because you need to build a business with content

We’ve worked with one particular customer since 2008 when they were gearing up to become a company whose primary purpose was delivering high quality content to an incredibly valuable group of subscribers. WordPress would have delivered almost all of their requirements back then but we urged them to go in a different direction. One of the reasons we did this was to ensure that they weren’t building a reliance on someone else’s platform into a critical area of their business.

WordPress and Automattic will always be helpful and committed partners and service providers. However, they are not your business, and they have their own business plans, which you have neither access to nor influence over. For our customer this was not an acceptable situation, and mitigating that risk was worth the extra initial outlay.

3. Because vanity, vanity; all is vanity

There is nothing wrong with being a special snowflake. Differentiation is hard and can often be the silver bullet that gets you success where others fail. We understand if there are some intangibles that drive your choice of CMS and broadly support your right to be an agent in your own destiny. You don’t want to use WordPress because WordPress is WordPress?
Congratulations and welcome to Maverick Island. We own a hotel here. Try the veal.

Seriously though, organisational decision making is often irrational and that’s just the way it is. When this kind of thing happens though, it’s important to be able to tell that it’s happening. You should aim to be as clear as possible about which requirements are real requirements and which are actually just Things We Want Because We Want Them. Confusing one with the other is a sure-fire way to increase the cost of your project – both financial and psychic.

If you want to know more about migrating CMSes and the different platforms available, just contact us or send an email to hello@isotoma.com. As you can probably tell, this is the kind of thing we like talking about.

Internet Security Threats – When DDoS Attacks

On Friday evening an unknown entity launched one of the largest Distributed Denial of Service (DDoS) attacks yet recorded, against Dyn, a DNS provider. Dyn provides DNS for some of the Internet’s most popular services, and those services duly suffered problems. Twitter, GitHub and others were unavailable for hours, particularly in the US.

DDoS attacks happen a lot, and are generally uninteresting. What is interesting about this one is:

  1. the devices used to mount the attack
  2. the similarity with the “Krebs attack” last month
  3. the motive
  4. the potential identity of the attacker

Together these signal that we are entering a new phase in development of the Internet, one with some worrying ramifications.

The devices

Unlike most other kinds of “cyber” attack, DDoS attacks are brute force – they rely on sending more traffic than the recipient can handle. Moving packets around the Internet costs money, so this is ultimately an economic contest – whoever spends more money wins. The way you do this cost-effectively, of course, is to steal the resources you use to mount the attack. A network of compromised devices like this is called a “botnet”.

Most computers these days are relatively well-protected – basic techniques like default-on firewalls and automated patching have hugely improved their security. There is a new class of device though, generally called the Internet of Things (IoT) which have none of these protections.

IoT devices demonstrate a “perfect storm” of security problems:

  1. Everything on them is written in the low-level ‘C’ programming language. ‘C’ is fast and small (important for these little computers) but it requires a lot of skill to write securely. Skill that is not always available
  2. Even if the vendors fix a security problem, how does the fix get onto the deployed devices in the wild? These devices rarely have the capability to patch themselves, so the vendors need to ship updates to householders, and provide a mechanism for upgrades – and the customer support this entails
  3. Nobody wants to patch these devices themselves anyway. Who wants to go round their house manually patching their fridge, toaster and smoke alarm?
  4. Because of their minimal user interfaces (making them difficult to operate if something goes wrong), they often have default-on [awful] debug software running. Telnet to a high port and you can get straight in to administer them
  5. They rarely have any kind of built in security software
  6. They have crap default passwords, that nobody ever changes

To see how shockingly bad these things are, follow Matthew Garrett on Twitter. He takes IoT devices to pieces to see how easy they are to compromise. Mostly he can get into them within a few minutes. Remarkably, one of the most secure IoT devices he’s found so far was a Barbie doll.

That most of these devices are far worse than a Barbie doll should give everyone pause for thought. Then imagine the dozens of them so many of us have scattered around our house.  Multiply that by the millions of people with connected devices and it should be clear this is a serious problem.

Matthew has written on this himself, and he’s identified this as an economic problem of incentives. There is nobody who has an incentive to make these devices secure, or to fix them afterwards. I think that is fair, as far as it goes, but I would note that ten years ago we had exactly the same problem with millions of unprotected Windows computers on the Internet that, it seemed, nobody cared about.

The Krebs attack

A few weeks ago, someone launched a remarkably similar attack on the security researcher Brian Krebs. Again the attackers are unknown, and they launched the attack using a global network of IoT devices.

Given the similarities in the attack on Krebs and the attack on Dyn, it is probable that both of these attacks were undertaken by the same party. This doesn’t, by itself, tell us very much.

It is common for botnets to be owned by criminal organisations that hire them out by the hour. They often have online payment gateways, telephone customer support and operate basically like normal businesses.

So, if this botnet is available for hire then the parties who hired it might be different. However, there is one other similarity which makes this a lot spookier – the lack of an obvious commercial motive.

The motive

Mostly DDoS attacks are either (a) political or (b) extortion. In both cases the identity of the attackers is generally known, in some sense. For political DDoS attacks (“hacktivism”) the targets have often recently been in the news, and are generally quite aware of why they’re attacked.

Extortion using DDoS attacks is extremely common – anyone who makes money on the Internet will have received threats and been attacked, and many will have paid out to prevent or stop a DDoS. Banks, online gaming, DNS providers, VPN providers and ecommerce sites are all common targets – many of them so common that they have experienced operations teams in place who know how to handle these things.

To my knowledge no threats were made to Dyn or Krebs before the attacks and nobody tried to get money out of them to stop them.

What they have in common is their state-of-the-art protection. Brian Krebs was hosted by Akamai, a very well-respected content delivery company who have huge resources – and for whom protecting against DDoS is a line of business. Dyn host the DNS for some of the world’s largest Internet firms, and similarly are able to deploy huge resources to combat DDoS.

This looks an awful lot like someone testing out their botnet on some very well protected targets, before using it in earnest.

The identity of the attacker

It looks likely therefore that there are two possibilities for the attacker. Either it is (a) a criminal organisation looking to hire out their new botnet or (b) a state actor.

If it is a criminal organisation then right now they have the best botnet in the world. Nobody is able to combat this effectively.  Anyone who owns this can hire it out to the highest bidder, who can threaten to take entire countries off the Internet – or entire financial institutions.

A state actor is potentially as disturbing. Given the targets were in the US it is unlikely to be a western government that controls this botnet – but it could be one of dozens from North Korea to Israel, China, Russia, India, Pakistan or others.

As with many weapons a botnet is most effective if used as a threat, and we may never know if it is used as a threat – or who the victims might be.

What should you do?

As an individual, DDoS attacks aren’t the only risk from a compromised device. Anyone who can compromise one of these devices can get into your home network, which should give everyone pause – think about the private information you casually keep on your home computers.

So, take some care in the IoT devices you buy, and buy from reputable vendors who are likely to be taking care over their products. Unfortunately the devices most likely to be secure are also likely to be the most expensive.

One of the greatest things about the IoT is how cheap these devices are, and the capability they can provide at this low price. Many classes of device don’t necessarily even have reliable vendors working in that space. Being expensive and well made is no long-term protection – devices routinely go out of support after a few years and become liabilities.

Anything beyond this is going to require concerted effort on a number of fronts. Home router vendors need to build in capabilities for detecting compromised devices and disconnecting them. ISPs need to take more responsibility for the traffic coming from their networks. Until being compromised causes devices to malfunction for their owner there will be no incentives to improve them.

It is likely that the ultimate fix for this will be Moore’s Law – the safety net our entire industry has relied on for decades. Many of the reasons for IoT vulnerabilities are to do with their small amounts of memory and low computing power. When these devices can run more capable software they can also have the management interfaces and automated patching we’ve become used to on home computers.

 

The economics of innovation

One of the services we provide is innovation support. We help companies of all sizes when they need help with the concrete parts of developing new digital products or services for their business, or making significant changes to their existing products.

A few weeks ago the Royal Swedish Academy of Sciences awarded the Nobel Prize for Economics to Oliver Hart and Bengt Holmström for their work in contract theory. This prompted me to look at some of Holmström’s previous work (for my sins, I find economics fascinating), and I came across his 1989 paper Agency Costs and Innovation. This is so relevant to some of my recent experiences that I wanted to share it.

Imagine you have a firm or a business unit and you have decided that you need to innovate.

This is a pretty common situation – you know strategically that your existing product is starting to lose traction. Maybe you can see commoditisation approaching in your sector. Or perhaps, as is often the case, you can see the Internet juggernaut bearing down on your traditional business and you know you need to change things up to survive.

What do you do about it?  If you’ve been in this situation the following will probably resonate:

[Excerpt from the paper]

This is the principal-agent problem, a classic in economics: how can a principal (who wants something) incentivise an agent to do it? The agent and “contracting” being discussed here could be any kind of contracting, including full-time staff.

A good example of the principal-agent problem is how you pay a surgeon. You want to reward their work, but you can’t observe everything they do. The outcome of surgery depends on team effort, not just an individual. Surgeons also have plenty to do besides surgery: developing standards, mentoring junior staff and so forth. Finally, the activity itself is inherently very high risk, so surgeons will make mistakes no matter how competent they are. Their salary would therefore be at risk, which means you need to pay huge bonuses to encourage them to undertake the work at all.

In fact, firms commonly try to innovate using their existing teams, who are delivering the existing product. These teams understand their market. They know the capabilities and constraints of existing systems. They have domain expertise and would seem to be the ideal place to go.

However, these teams have a whole range of tasks available to them (just as with our surgeon above), and choices in how they allocate their time. This is the “multitasking effect”. This is particularly problematic for innovative tasks.

My personal experience of this is that, when people have choices between R&D type work and “normal work”, they will choose to do the normal work (all the while complaining that their work isn’t interesting enough, of course):

[Excerpt from the paper]

This leads large firms to have separate R&D divisions – this allows R&D investment decisions to take place between options that have some homogeneity of risk, which means incentives are more balanced.

However, large firms have a problem with bureaucratisation. This is a particular problem when you wish to innovate:

[Excerpt from the paper]

Together this leads to a problem we’ve come across a number of times, where large firms have strong market incentives to spend on innovation – but find their own internal incentive systems make this extremely challenging.

If you are experiencing these sorts of problems please do give us a call and see how we can help.

I am indebted to Kevin Bryan’s excellent A Fine Theorem blog for introducing me to Holmström’s work.

 

A new Isotoma Whitepaper: Chatbots

Over the last six months we’ve had a lot of interest from customers in the emerging area of chatbots, particularly ones using Facebook Messenger as a platform.

While bots have been around, in some form or other, for a very long time, the Facebook Messenger platform has catapulted them into prominence. Access to one billion of the world’s consumers is a tempting prospect for many businesses.

In our new whitepaper we’ve reviewed the ecosystem that is emerging around chatbots and provide a guide to some of the factors you should consider if you are thinking about building and deploying one.


The contents include:

  • The history of chat interfaces
  • What conversational interfaces can do, and why
  • Natural Language Processing
  • Features provided by chatbot platforms
  • An in-depth review of eight of the top chatbot platforms
  • Recommendations for next steps, and a look to the future

Please, download the whitepaper, and let us know what you think.

 

Stuttering towards accessibility

“Hello, I’m Andy and I have a stammer.”

While this is true, thankfully very few people notice it nowadays. Like many older stammerers I’ve developed complex and often convoluted strategies to avoid triggering it. But still, if you were to put me on stage and ask me to say that sentence we’d be there all week.

Over the last ten years or so as I’ve aged and gained more control over my stammer I’ve not given it much thought, barring politely turning down the occasional invitation to speak in public. Recently though, I’ve been forced to reassess both it and my coping strategies in the light of the rapid increase in voice interfaces for everything from phones to cars. And that’s made accessibility a very personal issue.

Like many stammerers I struggle with the start of my own name, and sounds similar to it. In the world of articulatory phonetics the sounds that trip me up are called “open vowels”. That is, sounds that are generated at the back of the throat with little or no involvement from the lips or tongue. In English that’s words starting with vowels or the letter H. So the first seven words of the sentence “Hello, I’m Andy and I have a stammer” are pretty much guaranteed to stop me in my tracks (unless I’m drunk or singing – coping strategies!).

We recently got an Amazon Echo for the office and wired it up to a bunch of things, including Spotify. Colleagues tell me it’s amazing, but because the only way I can wake it up is by saying “Alexa!” it’s absolutely useless to me.

And it gets worse. Even if a stammerer is usually able to overcome their problem sounds other factors will increase their likelihood of stammering in a particular situation.

One is over-rehearsal, where the brain has time to analyse the sentence, spot the potentially difficult words and start to worry about them, exacerbating the problem. This can be caused by reading aloud – even bedtime stories for the kids (don’t get me started on Harry and Hermione or Hiccup Horrendous Haddock the Third) – but anything where the words are predetermined can be a problem; be that a sales presentation, giving your name as everyone shakes hands as they walk into a meeting, performing lines from a play, making the vows at your wedding, literally anything where you have time to think about what you’re going to say and can’t change the words.

Speech interfaces currently fall firmly into the realm of over-rehearsal. You’re forced to plan carefully what you’re going to say, and then say it. “Alexa! Play The Stutter Rap by Morris Minor and the Majors” (yeah, that was a childhood high point, let me tell you) is a highly structured sentence and despite Alexa’s smarts it’s the only way you’re going to get that track played. So it’s not only a problematic sound, but it’s over-rehearsed… Doubly bad.

The other common trigger for stammering is often loosely defined as social anxiety, but is anywhere where the stammerer is drawing attention to themselves, either from being the focus of an activity (on stage, say) or from disturbing the normal flow of activity around them (for example, by trying to attract someone’s attention across a crowded room).

If I want to talk to the Echo in our office I know that saying “Alexa!” is going to disturb my colleagues’ flow and cause them to involuntarily prick up their ears, which brings it right into the category of social anxiety… As well as already being a trigger sound and over-rehearsed… Triply bad.

However good my coping strategies might normally be I can’t use any of them when speaking to Alexa, and speaking to Alexa is exactly when I would normally be employing them all. Even when I’m in the office on my own it’s useless to me, because trigger sound and over-rehearsal is enough to stop me.

And the Echo isn’t alone. There’s “Hey, Siri!”, “Hey, Cortana!”, “OK Google!”, and “Hi TV!”. All of them, in fact. Right now all of the major domestic voice controls use wake words that start with an open vowel. Gee. Thanks everyone.

Google recently announced that 20% of mobile searches use voice rather than text. More than half of iOS users use Siri regularly. Amazon and Microsoft are doubling down on Echo and Cortana, respectively. Tesla are leading the way in automotive, but all the major manufacturers offer some form of voice control for at least some of their models. It makes absolute sense for them to do so – speech is such a natural interface, right? And it’s futuristic – it’s the stuff of Star Trek. Earl Grey, Hot! and all that. But just as screen readers have constantly struggled to keep up with web technologies we’re seeing developers doomed to repeat those same mistakes with voice interfaces, as they leap ahead without consideration for those that can’t use them.

To give some numbers and put this in context there are approximately twice as many stammerers in the UK (1% of the population) as there are registered visually impaired or blind (0.5% of the population). That’s a whole chunk of people. And while colleagues would say that me not being able to choose music for the stereo is a benefit not a drawback, it makes light of the fact that a technology we generally think of as assistive is not a panacea for all.

Currently Siri, Cortana, Samsung TVs and Alexa can only be addressed with sentences that start with an open vowel (Siri, Cortana and Samsung can’t be changed, Alexa can, but only to one of “Alexa”, “Echo” and “Amazon”). Google on Android can thankfully be changed to any phrase the user likes, even if the process is a little convoluted. Interestingly for me, though, is that the Amazon Echo offers no alternative interface at all. It is voice control only, and has to be woken with an open vowel. It is the worst offender.

For me this has been an object lesson in checking my privilege. Yes, I’m short sighted, but contact lenses give me 20/20 vision. I had a bad back for a while, but I was still mobile. This is the first piece of technology that I’ve actually been unable to use. And it’s not a nice experience. As technologists we know that accessibility is important – not just for the impaired but for everyone – yet we rarely feel it. I’m sure feeling it now.

Voice control is still in its infancy. New features and configurations are being introduced all the time. Parsing will get smarter so that wake words can be changed and commands can be more loosely structured. All of these things will improve accessibility for those of us with speech impediments, who are non-verbal, have a throat infection, or are heavily accented.

But we’re not there yet, and right now I’ve got to ask Amazon… Please, let me change Alexa’s name.

I was thinking Jeff?

Our plants need watering, part I

Here at Isotoma Towers we’ve recently started filling our otherwise spartan office with plants. Plants are lovely but they do require maintenance, and in particular they need timely watering.

Plants.

Since we’re all about automation here, we decided to use this as a test case for building some Internet of Things (IoT) devices.  One of my colleagues pointed out this great moisture sensor from Catnip (right).

This forms the basis of our design.

Catnip I2C soil moisture sensor

There are lots and lots of choices for how to build something like this, and this blog post is going to talk about design decisions.  See below the fold for more.
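As a small taste of what’s involved, here is a hedged Arduino-style sketch of the kind of I2C read the sensor needs. The address (0x20) and the capacitance register (0x00) are from memory of the Catnip documentation, so treat them as assumptions and check the datasheet before relying on them.

```cpp
#include <Wire.h>

const uint8_t SENSOR_ADDRESS = 0x20;   // assumed default I2C address
const uint8_t REG_CAPACITANCE = 0x00;  // assumed "read capacitance" register

// Read the raw soil moisture (capacitance) value from the sensor.
uint16_t readMoisture() {
  Wire.beginTransmission(SENSOR_ADDRESS);
  Wire.write(REG_CAPACITANCE);
  Wire.endTransmission();

  Wire.requestFrom(SENSOR_ADDRESS, (uint8_t)2);  // two-byte, big-endian result
  uint16_t value = (uint16_t)Wire.read() << 8;
  value |= Wire.read();
  return value;
}

void setup() {
  Wire.begin();
  Serial.begin(9600);
}

void loop() {
  Serial.println(readMoisture());   // higher reading should mean wetter soil
  delay(1000);
}
```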

 
