Category Archives: Futurology

[Screenshot: the SOMA vision mixer]

Compositing and mixing video in the browser

This blog post is the 4th part of our ongoing series working with the BBC Research & Development team. If you’re new to this project, you should start at the beginning!

Like all vision mixers, SOMA (Single Operator Mixing Application) has a “preview” and “transmission” monitor. Preview is used to see how different inputs will appear when composed together – in our case, a video input, a “lower third” graphic such as a caption which fades in and out, and finally a “DOG” such as a channel or event identifier shown in the top corner throughout a broadcast.

When switching between video feeds SOMA offers a fast cut between inputs or a slower mix between the two. As and when edit decisions are made, the resulting output is shown in the transmission monitor.

The problem with software

One difference with SOMA, however, is that all the composition and mixing is simulated. SOMA is used to build a set of edit decisions which can be replayed later by a broadcast-quality renderer. The transmission monitor is not a view of the output with the effects already applied, because the actual rendering of the edit decisions hasn’t happened yet; the app needs to provide an accurate simulation of what each edit decision will look like.

The task of building this required breaking down how output is composed – during a mix both the old and new input sources are visible, so six inputs are required.

VideoContext to the rescue

Enter VideoContext, a video scheduling and compositing library created by BBC R&D. This allowed us to represent each monitor as a graph of nodes, with video nodes playing each input into transition nodes allowing mix and opacity to be varied over time, and a compositing node to bring everything together, all implemented using WebGL to offload video processing to the GPU.
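
As a concrete illustration, here is roughly what one monitor’s graph looks like. This is a minimal sketch based on VideoContext’s documented API (the node names and the crossfade transition follow its README; the file names and timings are made up):

```typescript
// Sketch: one monitor as a VideoContext node graph (illustrative, not our production code)
import VideoContext from "videocontext";

const canvas = document.querySelector("canvas#preview") as HTMLCanvasElement;
const ctx = new VideoContext(canvas);

// Two video inputs: the source we are cutting away from and the one we are mixing to.
const current = ctx.video("camera-1.mp4");
const next = ctx.video("camera-2.mp4");
current.start(0);
next.start(0);

// A crossfade transition node lets the mix be varied over time.
const crossfade = ctx.transition(VideoContext.DEFINITIONS.CROSSFADE);
current.connect(crossfade);
next.connect(crossfade);

// Animate the "mix" property from 0 to 1 between t=2s and t=4s.
crossfade.transition(2, 4, 0.0, 1.0, "mix");

crossfade.connect(ctx.destination);
ctx.play();
```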

The flexible nature of the library allowed us to plug in our own WebGL shaders to cut the lower third and DOG graphics out using chroma-keying (where a particular colour – normally green – is declared to be transparent), and with a small patch to allow VideoContext to use streaming video we were up and running.
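
Continuing the sketch above, a custom chroma-key effect roughly follows the library’s custom-effect definition format: a pair of WebGL shaders plus uniform properties. The field names mirror the README’s examples, but the keying shader itself is our own illustration rather than the exact code we shipped:

```typescript
// Rough sketch of a chroma-key effect plugged into the graph from the previous example
const chromaKeyDefinition = {
  title: "Chroma key",
  description: "Makes pixels close to the key colour transparent",
  vertexShader: `
    attribute vec2 a_position;
    attribute vec2 a_texCoord;
    varying vec2 v_texCoord;
    void main() {
      gl_Position = vec4(vec2(2.0, 2.0) * a_position - vec2(1.0, 1.0), 0.0, 1.0);
      v_texCoord = a_texCoord;
    }`,
  fragmentShader: `
    precision mediump float;
    uniform sampler2D u_image;
    uniform vec3 u_keyColour;
    uniform float u_threshold;
    varying vec2 v_texCoord;
    void main() {
      vec4 colour = texture2D(u_image, v_texCoord);
      // Anything close enough to the key colour (green by default) becomes transparent.
      float alpha = distance(colour.rgb, u_keyColour) < u_threshold ? 0.0 : 1.0;
      gl_FragColor = vec4(colour.rgb, colour.a * alpha);
    }`,
  properties: {
    u_keyColour: { type: "uniform", value: [0.0, 1.0, 0.0] },
    u_threshold: { type: "uniform", value: 0.4 },
  },
  inputs: ["u_image"],
};

// The lower-third source is routed through the keyer before compositing.
const lowerThird = ctx.video("lower-third.mp4");
const keyer = ctx.effect(chromaKeyDefinition);
lowerThird.connect(keyer);
keyer.connect(ctx.destination);
```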

Devils in the details

The details of how edits work were as fiddly as expected: tracking the mix between two video inputs and tracking the opacity of two overlays looked like similar problems but required different solutions. The nature of the VideoContext graph also meant we had to keep track of which node was current, rather than always connecting the current input to the same node. We put a lot of unit tests around this to ensure it works as it should, now and in future.

By comparison, a seemingly tricky problem – what to do if a new edit decision is made while a mix is still in progress – turned out to be just a case of swapping out the new input, so that the old input can’t reappear unexpectedly.

QA testing revealed a subtler problem: when switching to a new input, the video takes a few tens of milliseconds to start. Cutting immediately causes a distracting flicker as a couple of blank frames are rendered; waiting until the video is ready adds a slight delay, but it is significantly less distracting.
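
A minimal sketch of that wait, using standard media-element events (the cut() callback is a stand-in for whatever actually performs the switch):

```typescript
// Delay a cut until the incoming <video> element has a decodable frame,
// to avoid rendering blank frames at the moment of the switch.
function cutWhenReady(video: HTMLVideoElement, cut: () => void): void {
  if (video.readyState >= HTMLMediaElement.HAVE_CURRENT_DATA) {
    cut(); // a frame is already decoded, cut immediately
  } else {
    // otherwise wait for the first playable frame; this adds a few tens of ms of delay
    video.addEventListener("canplay", () => cut(), { once: true });
  }
}
```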

Later in the project a new requirement emerged: re-framing videos within the application. The decision to use VideoContext paid off here, as we could simply add an effect node into the graph to crop and scale the video input before mixing.

And finally

VideoContext made the mixing and compositing operations a lot easier than they would have been otherwise. Towards the end we even added an image source (for paused VTs) using the new experimental Chrome feature captureStream, and that worked really well.
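
For the curious, the idea is simply to paint the still onto a canvas and let captureStream() turn it into a video track, so it can be treated like any other input. A hedged sketch (the helper name is ours):

```typescript
// Sketch: turning a still image into a MediaStream via the (then experimental)
// canvas.captureStream() API, so it behaves like any other video source.
function imageToStream(img: HTMLImageElement, fps = 25): MediaStream {
  const canvas = document.createElement("canvas");
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx2d = canvas.getContext("2d")!;
  ctx2d.drawImage(img, 0, 0);
  // captureStream repeatedly samples the canvas as a video track
  return canvas.captureStream(fps);
}
```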

Having made it all work, the obvious remaining concern is performance, and overall it holds up pretty well. We needed half a dozen or so VideoContexts running at once, and that was fine on a powerful machine; many more and the computer really starts to struggle.

Even a few years ago attempting this in the browser would have been madness, so it’s great to see so much progress in something so challenging, opening up a whole new range of software that can work in the browser!

Read part 5 of this project with BBC R&D where Developer Alex Holmes talks about Taming async with FRP and RxJS.

The challenges of mixing live video streams over IP networks

Welcome to our second post on the work we’re doing with BBC Research & Development. If you’ve not read the first post, you should go read that first 😉

Introducing IP Studio


The first part of the infrastructure we’re working with here is something called IP Studio. In essence this is a platform for discovering, connecting and transforming video streams in a generic way, using IP networking – the standard on which pretty much all Internet, office and home networks are based.

Up until now video cameras have used very simple standards such as SDI to move video around. Even though SDI is digital, it’s just point-to-point – you connect the camera to something using a cable, and there it is. The reason for the remarkable success of IP networks, however, is their ability to connect things together over a generic set of components, routing between connecting devices. Your web browser can get messages to and from this blog over the Internet using a range of intervening machines, which is actually pretty clever.

Doing this with video is obviously in some senses well-understood – we’ve all watched videos online. There are some unique challenges with doing this for live television though!

Why live video is different

First, you can’t have any buffering: this is live. It’s unacceptable for everyone watching TV to see a buffering message because the production systems aren’t quick enough.

Second is quality. These are 4K streams, not typical internet video resolution. A 4K stream has (roughly) 4,000 horizontal pixels compared to the (roughly) 2,000 of a 1080p stream (weirdly, 1080p, 720p and so on are named for their vertical pixel counts instead). This means it needs about four times as much bandwidth – which even in 2017 is quite a lot. Specialist networking kit and a lot of processing power are required.

Third is the unique requirements of production – we’re not just transmitting a finished, pre-prepared video, but all the components from which to make one: multiple cameras, multiple audio feeds, still images, pre-recorded video. Everything you need to create the finished live product. This means that to deliver a final product you might need ten times as much source material – which is well beyond the capabilities of any existing systems.

IP Studio addresses this with a cluster of powerful servers sitting on a very high speed network. It allows engineers to connect together “nodes” to form processing “pipelines” that deliver video suitable for editing. This means capturing the video from existing cameras (using SDI) and transforming them into a format which will allow them to be mixed together later.

It’s about time

That sounds relatively straightforward, except for one thing: time. When you work with live signals on traditional analogue or point-to-point digital systems, then live means, well, live. There can be transmission delays in the equipment but they tend to be small and stable. A system based on relatively standard hardware and operating systems (IP Studio uses Linux, naturally) is going to have all sorts of variable delays in it, which need to be accommodated.

IP Studio is therefore based on “flows” comprising “grains”. Each grain has a quantum of payload (for example a video frame) and timing information. The timing information allows multiple flows to be combined into a final output where everything happens appropriately in synchronisation. This might sound easy but is fiendishly difficult – some flows will arrive later than others, so the system needs to hold some back until everything is running to time.
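
To make the idea concrete, here is a deliberately simplified sketch of a grain, and of holding flows back until they can be emitted in step. The field names are illustrative only, not IP Studio’s actual data model:

```typescript
// Hypothetical sketch of "flows of grains": each grain carries a payload plus an
// origin timestamp, and a buffer holds grains back until every flow has caught up.
interface Grain<T> {
  flowId: string;           // which flow (camera, audio feed, ...) this grain belongs to
  originTimestamp: number;  // capture time, e.g. nanoseconds since epoch
  payload: T;               // e.g. one encoded video frame or a block of audio samples
}

class FlowSynchroniser<T> {
  private buffers = new Map<string, Grain<T>[]>();

  push(grain: Grain<T>): void {
    const queue = this.buffers.get(grain.flowId) ?? [];
    queue.push(grain);
    queue.sort((a, b) => a.originTimestamp - b.originTimestamp);
    this.buffers.set(grain.flowId, queue);
  }

  // Emit one grain per flow once every flow has data at or before `time`;
  // a single late flow holds everything back, which is why latency must be bounded.
  popSynchronised(time: number): Grain<T>[] | null {
    const heads: Grain<T>[] = [];
    for (const queue of this.buffers.values()) {
      if (queue.length === 0 || queue[0].originTimestamp > time) return null;
      heads.push(queue[0]);
    }
    this.buffers.forEach((queue) => queue.shift());
    return heads;
  }
}
```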

To add to the complexity, we need two versions of the stream, one at 4k and one at a lower resolution.

Don’t forget the browser

Within the video mixer we’re building, we need the operator to be able to see their mixing decisions (cutting, fading etc.) happening in front of them in real time. We also need to control the final transmitted live output. There’s no way a browser in 2017 is going to show half-a-dozen 4k streams at once (and it would be a waste to do so). This means we are showing lower resolution 480p streams in the browser, while sending the edit decisions up to the output rendering systems which will process the 4k streams, before finally reducing them to 1080p for broadcast.
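
The split works because only a small description of each decision has to travel to the renderer, not the video itself. Something along these lines (the message shape here is entirely hypothetical):

```typescript
// Purely illustrative: the browser previews 480p proxies, and sends only a compact
// edit decision to the back-end renderer working on the full 4K flows.
interface EditDecision {
  at: number;                // when the decision takes effect, in stream time
  kind: "cut" | "mix";
  fromSource: string;        // id of the outgoing input
  toSource: string;          // id of the incoming input
  durationSeconds?: number;  // only meaningful for a mix
}

function sendDecision(socket: WebSocket, decision: EditDecision): void {
  socket.send(JSON.stringify(decision));
}
```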

So we’ve got half-a-dozen 4k streams, and 480p equivalents, still images, pre-recorded video and audio, all being moved around in near-real-time on a cluster of commodity equipment from which we’ll be delivering live television!

Read part 3 of this project with BBC R&D where we delve into rapid user research on an Agile project.

[Photo: MediaCity UK offices]

Building a live television video mixing application for the browser

This is the first in a series of posts about some work we are doing with BBC Research & Development.

The BBC now has, in the lab, the capability to deliver live television using high-end commodity equipment direct to broadcast, over standard IP networks. What we’ve been tasked with is building one of the key front-end applications – the video mixer. This will enable someone to mix an entire live television programme, at high quality, from within a standard web-browser on a normal laptop.

In this series of posts we’ll be talking in great depth about the design decisions, implementation technologies and opportunities presented by these platforms.

What is video mixing?

Video editing used to be a specialist skill requiring very expensive, specialist equipment. Like most things, this has changed thanks to commodity, high-powered computers, and now anyone can edit video using modestly priced equipment and software such as the industry-standard Adobe Premiere. This has fed the development of services such as YouTube, where 300 hours of video are uploaded every minute.

“Video Mixing” is the activity of getting the various different videos and stills in your source material and mixing them together to produce a single, linear output. It can involve showing single sources, cutting and fading between them, compositing them together, showing still images and graphics and running effects over them. Sound can be similarly manipulated. Anyone who has used Premiere, even to edit their family videos, will have some idea of the options involved.

Live television is a very different problem

First, you need to produce high-fidelity output in real time. If you’ve ever used something like Premiere you’ll know that when you finally render your output it can take quite a long time – it can easily spend an hour rendering 20 minutes of output. That would be no good if you were broadcasting live! This means the technology used is very different – you can’t just use commodity hardware; you need specialist equipment that can work with these streams in real time.

Second, the scope for screwing up is immensely higher. Any mistakes in a live broadcast are immediately apparent, and potentially tricky to correct. It is a high-stress environment, even for experienced operators.

Finally, the range of things you might choose to do is much more limited, because you can spend little time setting it up. This means live television tends to use a far smaller ‘palette’ of mixing operations.

Even so, a live broadcast might require half a dozen people for even a modest production. You need someone to set up the cameras and control them, a sound engineer to get the sound right, someone to mix the audio, a vision mixer, a VT operator (to run any pre-recorded videos you insert – perhaps the titles and credits) and someone to set up the still image overlays (for example, names and logos).

If that sounds bad, imagine a live broadcast away from the studio – the Outside Broadcast. All the people and equipment need to be on site, hence the legendary “OB Van”:

[Photo: an outside broadcast van]

Inside one of those vans is the equipment and people needed to run a live broadcast for TV. They’d normally transmit the final output directly to air by satellite – which is why you generally see a van with a massive dish on it nearby. This equipment runs into millions and millions of pounds and can’t be deployed on a whim. When you only have a few channels of course you don’t need many vans…

The Internet Steamroller

The Internet is changing all of this. Services like YouTube Live and Facebook Live mean that anyone with a phone and decent coverage can run their own outside broadcast. Where once you needed a TV network and millions of pounds of equipment now anyone can do it. Quality is poor and there are few options for mixing, but it is an amazingly powerful tool for citizen journalism and live reporting.

Also, the constraints of “channels” are going. Where once there was no point owning more OB Vans than you have channels, now you could run dozens of live feeds simultaneously over the Internet. As the phone becomes the first screen and the TV in the corner turns into just another display many of the constraints that we have taken for granted look more and more anachronistic.

These new technologies provide an opportunity, but also some significant challenges. The major one is standards – there is a large ecosystem of manufacturers and suppliers whose equipment needs to interoperate. The standards used, such as SDI (Serial Digital Interface) have been around for decades and are widely supported. Moving to an Internet-based standard needs cooperation across the industry.

BBC R&D has been actively working towards this with their IP Studio project, and the standards they are developing with industry for Networked Media.

Read part 2 of this project with BBC R&D where I’ll describe some of the technologies involved, and how we’re approaching the project.

Solving the real Alt-Tab problem

In his latest blog post, Aza Raskin – interface design guru, creative lead on Firefox, and son of one of my heroes, Jef Raskin – tackles one of my oldest bugbears, Alt-Tab. Aza is a clever guy, but I was disappointed that his post addressed an issue I don’t perceive as important, while failing to address what I see as the very real problems with Alt-Tab. But let me start with some background.

(I use both Windows and Mac every day, and in this post tend to use Alt-Tab and Cmd-Tab interchangeably.)

(EDIT: I originally gave the window-switching shortcut as Cmd-\ since I use an external keyboard, but I probably confused Mac users who know it as Cmd-~ (tilde). Updated.)

History (something Windows got right)

Windows has had Alt-Tab since Windows 1.0, although it has only had its familiar visual form since Windows 3.1 (1992). To this day, I know many people who never use the shortcut, but I personally cannot imagine using a multi-tasking operating system without it.

I found its initial absence on Apple Macs unacceptable. In Mac-based studios during the 90s, I always relied on a third-party extension, Task Switcher, to provide the missing functionality.

When Apple finally introduced native Cmd-Tab (in OS 8.5, I think), they at first got it wrong. It cycled alphabetically through running apps, rather than switching between apps on a most-recently-used (MRU) basis like Windows[1], so I had to continue using the extension. They changed it to MRU order in a later OS update. (Thank goodness Microsoft didn’t think to patent it.)

Unfortunately, Macs still suffer from another difference which I’ll come to after the following interlude.

 

Interlude: Why is most-recently-used (MRU) order better than cycling (and Exposé)?

The first reason is obvious: If Alt-Tab simply cycled through open windows in a fixed sequence (say, alphabetically), it would just require far too much tabbing, on average, to reach the item you wanted.

But Raskin alludes to the more powerful reason: spatial memory. If the shortcut always switches to the most-recently-used (MRU) item, this quickly teaches you to make the switch without thinking or looking at the screen. Spatial memory is awesome because it’s a background faculty, not foreground. Like reaching for your mouse, it does not interrupt your concentration, or where you’re looking on the screen.

This “toggle” behaviour, using a single shortcut key to switch back and forth between two windows only, is worth mentioning as an important feature in itself. It allows the shortcut to be used to compare the contents of two windows, not simply to switch from one to another. (The Undo/Redo shortcut in Photoshop – Cmd-Z for both – also brilliantly uses this principle.)

So, MRU order allows you to switch between two tasks pretty much subconsciously. Personally, I have learned to switch between up to 3 apps without relying on the visual aid (i.e. using spatial memory alone). For switching between more than that I need to look at the interface, but MRU ordering still reduces the number of times I need to Tab.
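
If it helps to see the behaviour written down, here is a toy model of an MRU switcher (nothing to do with any real OS implementation). One Tab always lands on the previous window and two Tabs on the one before that, which is exactly what lets the habit form:

```typescript
// Toy model of MRU window switching: the focused window moves to the front,
// so Alt-Tab (one press) always toggles to the previous window.
class MruSwitcher<W> {
  constructor(private order: W[] = []) {}

  focus(win: W): void {
    // Move the focused window to the front of the list.
    this.order = [win, ...this.order.filter((w) => w !== win)];
  }

  // altTab(1) toggles between the two most recent windows; altTab(2) reaches
  // the last-but-one. Cycling alphabetically gives no such guarantee.
  altTab(presses = 1): W | undefined {
    const target = this.order[Math.min(presses, this.order.length - 1)];
    if (target !== undefined) this.focus(target);
    return target;
  }
}
```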

The trouble with Macs

Cmd-Tab on the Mac works less well than on Windows, due to the Mac’s application-centric model, as opposed to the document-centric model of Windows. In Windows, for example, you can Alt-Tab between two emails, or two browser windows; on the Mac you can’t. To switch between application windows on the Mac you have to use a different shortcut, Cmd-~, which again uses cycling rather than MRU order. On top of that Cmd-Tab on the Mac has the annoying habit of bringing all an application’s windows to the foreground, often covering up the window you were trying to compare against.

I believe window-switching is closer to the mental model of what this shortcut accomplishes. The clue is in the original feature name: “Task Switcher”. It switches between “things I am doing” – I do not care about which applications they happen to be in. So to see it as an “application switcher” is to miss the point.

It’s also my theory that the deficiencies of task-switching on the Mac spurred the development of Exposé, which I consider a sticking-plaster solution. Exposé is nice, but it does not utilise spatial memory – it forces you to look at the interface.

I admit this is debatable: some people (to my amazement) find application switching on the Mac more natural than window-switching on Windows.

The real problem

As more and more applications adopt a tabbed workspace, Alt-Tab is becoming less useful regardless of which operating system you use. And this is especially serious with browsers, because more and more of our daily tasks happen in browsers nowadays. You’re no longer just “browsing”. Just as often you’re composing documents, managing your calendar, filing bug reports, etc. I often find myself automatically attempting to Alt-Tab between two things I’m doing, but failing because they happen to be in two separate Firefox tabs. And then I have to use the mouse.

In Windows I can improve the situation slightly by opening more browser windows. That way I can use a single window, say, for Google Calendar, one for GMail, one for the blog post I’m writing, a multi-tab one with lots of things I’m reading, etc. On OS X I can’t, since application windows can only be cycled through using Cmd-(Shift)-~.

The keyboard shortcut for switching tabs is usually Ctrl-Tab (on both Mac and Windows), but again, this cycles rather than using MRU. Interestingly, a few applications have opted to use MRU with Ctrl-Tab, e.g. oXygen (my favourite HTML editor). I appreciate it enormously when using those applications. Weirdly, Firefox occasionally seems to do this (use MRU rather than cycling), but it is unreliable and I cannot get it to do so now.

Aza Raskin’s proposal

I’m disappointed that Raskin – evidently a lifelong Mac user – implicitly accepts “application-switching” as the point of the shortcut. As I have tried to explain above, this is fundamentally less useful than task-switching.

In his article he attempts to come up with an improvement to the shortcomings of MRU. (He doesn’t even mention cycling so I assume he is not in favour of it.) MRU’s shortcoming, Raskin says, is that it is only useful for toggling between two things (he says apps, I’d say windows), and frustrates your tendency to form spatial memory habits for more than that. Personally I don’t experience the problem he describes with juggling 3 apps. It’s hard-wired in my spatial memory that Cmd-Tab switches to the last thing, and Cmd-Tab-Tab switches to the last-but-one. For more than this I need to shift my attention to the switcher interface, and spatial memory is no longer of help. But this is infrequent enough not to matter.

He then proposes a “habit-respecting MRU” (HRMRU) to solve this problem I don’t perceive. He ponders using heuristics or even a Markov model to detect users’ habits. Personally I see this failing for the very reasons he himself described – it would just result in a seemingly capricious interface.

But the bigger problem I have with Raskin’s article is that he doesn’t address the real erosion in the usefulness of this shortcut: the loss of MRU due to tabs, and the co-existence of both MRU-ordered switching and cycling. (And the greater problem on Mac OS of having three switching modes – apps, windows and tabs – two of which don’t use MRU.)

My proposal

Tabbed interfaces are not going away. They’re a necessary way of managing the ever-increasing number of windows we have to juggle. If I had to Alt-Tab between all of them, they could number over 100. So I think two switching modes are inevitable.

So I propose that only two shortcuts are necessary: Alt-Tab / Cmd-Tab for window-switching, and Ctrl-Tab for tab-switching inside a window. (Or document-switching in applications that don’t use tabs.) Both should work exactly the same way: MRU order, with the addition of Shift to reverse the order. There is no reason for application-switching to exist.

This would be a minor change on Windows, but a fundamental one on the Mac. Perhaps, as OS X insists on having 3 switching modes with 3 different shortcut keys, they could at least redefine the second one – Cmd-~ – to be window-switching across all apps (i.e. like Windows) rather than within the current app only. And all should use MRU.

Some people may find MRU order in a tabbed interface confusing, and crave a keyboard shortcut that cycles instead, but then a different application-specific shortcut could always be provided. E.g. Firefox already has Cmd-Alt-Left/Right arrow (Mac) or Ctrl-PgUp/PgDn (Windows). These are more appropriate shortcuts for cycling as their names imply directionality.

  1. Windows actually uses Z-order, but in practice this generally works like MRU. The behaviour was slightly changed in Vista, but the six most recent windows still follow MRU order.

About us: Isotoma is a bespoke software development company based in York and London specialising in web apps, mobile apps and product design. If you’d like to know more you can review our work or get in touch.

How Google will kill Internet Explorer and save the web

Update: This open letter from the EFF to Google makes some of the same points, particularly how Google is probably the one company able to establish an open video standard for the web.

Update 2 (19/5/2010): Steps 1 and 2 of my prediction appear to have happened. Google is open-sourcing VP8 in the hope of making it (in the form of WebM) the standard for internet video. They will transcode all YouTube video to WebM. And Firefox and IE9 (in a half-assed way) have already committed to supporting it.

Update 3 (8/6/2010): Some of the best writing on this topic can be found on Diary Of An x264 Developer by Jason Garrett-Glaser (aka Dark Shikari), especially this and this. In short: he also anticipates the YouTube gambit (“Blitzkrieg”), and would welcome a truly open, patent-free video format for the web to oust Flash, but points out many existing and potential problems with WebM/VP8 that may be its undoing, and does not see H.264 going away. Wait and see, basically.

Update 4 (30/6/2010): YouTube speaks: “While HTML5’s video support enables us to bring most of the content and features of YouTube to computers and other devices that don’t support Flash Player, it does not yet meet all of our needs. Today, Adobe Flash provides the best platform for YouTube’s video distribution requirements, which is why our primary video player is built with it.” So, not soon, anyway.

I’d like to make a prediction. I’m probably not the first to make it, and I may be utterly wrong, but just in case I prove to be right, I’d like to have it on record.[1]

I believe Google is planning to kill off Internet Explorer, within the next two years, and I think they can succeed. By “kill off” I mean turn it from the majority browser into a niche browser (<20% for all versions combined.) I believe the strategy relies on Chrome Frame, YouTube, and HTML5 video using the VP8 format.

The game plan

Step 1. It is rumoured Google will soon open-source the VP8 video compression format from On2 Technologies, whom they bought earlier this year. They’ll do so in the hope that it becomes the default video format on the web, over Theora (open but technically inferior) and H.264 (superior but patent-encumbered). If they do, Mozilla, WebKit and Opera browsers, with their fierce competition and fast update cycles, will likely hedge their bets and quickly add support for VP8 in addition to the formats they already support.

Step 2. Google will transcode all videos on YouTube to VP8 format, and serve this as the default to capable browsers. Converting such a vast amount of video is a monumental task, but Google has the resources to do it.

Step 3. Once the release versions of all the major non-IE browsers are capable of displaying VP8 HTML5 video without a hitch[2], Google will make its final move. Notices will appear on YouTube that they will soon turn off support for Flash, and serve all video as VP8 only. If you use Firefox, Safari or Chrome, you won’t notice a difference. But if you’re using Internet Explorer, not to worry: all you need to do is install a simple plugin: Chrome Frame.
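
The feature detection a site would need for this is straightforward. A sketch using the standard canPlayType() API (the fallback branches are placeholders):

```typescript
// Sketch: serve HTML5/WebM video to browsers that can decode VP8, fall back otherwise.
function supportsVp8(): boolean {
  const probe = document.createElement("video");
  // canPlayType returns "", "maybe" or "probably"
  return probe.canPlayType('video/webm; codecs="vp8, vorbis"') !== "";
}

if (supportsVp8()) {
  // attach a <video> element pointing at the WebM encoding
} else {
  // embed the Flash player, or invite IE users to install Chrome Frame
}
```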

Chrome Frame effectively turns Internet Explorer into Chrome. It still looks like you’re running IE, but the rendering engine has been replaced by Google’s. (Only on request, though: web authors have to explicitly ask for Chrome Frame to be used if available. The rest of the time IE remains unchanged.)

YouTube is Special

YouTube is unique on the web in that pretty much everyone uses it: it is the third most visited site after Google and Yahoo. I would wager that, within a month, some 80% of web users will have visited YouTube, and the vast majority of Internet Explorer users will have installed the plugin they need to continue getting their funny cat fix. Where else would they go? Sure, there are other video sites out there, but none truly compete with YouTube, in terms of volume of content, or audience size.

At the same time, very few people would be able to lambast Google for breaking something that harms their business or access to vital information. Very few people need YouTube, and very few of those will be unable to install the plugin or switch to a different browser.

In a matter of months, the vast majority of IE users will either have switched to a different browser, or installed Chrome Frame, effectively turning it (on demand) into Chrome. IE’s market share (if you look at the actual rendering engine) will collapse from 55% today[3] to under 20% (and hopefully much lower).

This will reveal Google’s acquisitions of YouTube, On2, and their development of their own Chrome browser, merely as components in a masterpiece of long-game strategy. Without every one of these components, each monumental and expensive in themselves, the strategy couldn’t succeed. Nobody but Google could’ve done it.

The result

And what a future this will win for the web. Look at this demonstration of the capabilities of HTML5 and CSS3, and imagine a world in which every new website can use every part of it. This could be seen as a massive upgrade for the internet. Imagine not needing to support outdated versions of IE anymore. Only if you had to support a significant customer base locked in by IT policy, a rapidly-dwindling segment, would you still need to support IE.

How will this affect other players? It will be a mortal blow against Adobe, with Flash rapidly losing its hold over internet video over the ensuing months. This would suit Apple fine, who are already doing their best to keep Flash off Apple hardware. (They’re currently putting their weight behind H.264, but that’s simply the best option at the moment.) The Flash plugin will likely remain ubiquitous for a while still, but will be increasingly marginalised, and find its place usurped by JavaScript, Canvas and SVG as support for these open technologies become near-universal.

Microsoft can only respond by getting IE users to upgrade to the latest versions as quickly as possible, and add support for VP8 video. This will suit Google and the web just fine, since IE9 promises to be on par with the competition in its support for modern web technologies. But they will no longer be able to act with the hubris of majority market share, and will be forced into a position of playing catch-up to faster-evolving browsers.

For Google, of course, this is essential for its vision of the browser as operating system[4].

  1. I deliberately did not do a web search to check for other articles making the same prediction, as I wanted to think it through for myself.
  2. Here’s a possible weak point in my argument: Unlike H.264, VP8 currently does not benefit from hardware acceleration (especially important on mobile platforms.) If this proves to be a major factor, the timeframe may need to be longer to allow for the natural cycle of hardware upgrades. (Fortunately this is more rapid for mobile devices.)
  3. http://marketshare.hitslink.com/report.aspx?qprid=3
  4. The front-end of the Internet operating system, that is.

Betting against Mr Moore

Yesterday I ordered a new server from our ISP, as I do pretty regularly. I happened to take a look at the “bandwidth” dropdown on their order form, and you can now get a Gigabit of unmetered internet bandwidth to your server for £3000/month.

This got me thinking. What on earth would I do with all that bandwidth? Well, serving video is the obvious application. Let’s try live TV, something exciting, maybe sport. If you don’t do anything clever at all, just ship data to every individual person who wants to watch TV, how much would it cost? Say you need 10Mbit/s per person for the video stream.

£3000/month is £4.30/hr for a Gigabit. For this you can serve 100 people their TV. So, 4.3p/hr/person for a video stream. Reckon you could make 4.3p per person per hour from advertising on a live sports channel? I reckon so. So why not do it right now? Two reasons. People don’t have the Internet on their TV, and bandwidth isn’t enough of a commodity. If I wanted to broadcast the FA Cup Final to 500M people, I’d need around 5 petabits per second of bandwidth, just for a couple of hours. The market just isn’t there for me to buy it like that. Yet. But it will be – it’s the direction the industry has been moving in, at vast speed, and there’s no reason to think it’ll stop.
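
For anyone who wants to check the arithmetic, here is the back of the envelope (my rounding comes out slightly under the £4.30/4.3p figures above, but the ballpark is the same):

```typescript
// Back-of-envelope check of the figures above.
const monthlyCostGbp = 3000;        // 1 Gbit/s unmetered, per month
const hoursPerMonth = 30 * 24;      // ~720 hours
const perViewerMbps = 10;           // assumed stream bitrate

const viewersPerGigabit = 1000 / perViewerMbps;             // 100 concurrent viewers
const costPerHour = monthlyCostGbp / hoursPerMonth;         // ~£4.17/hr
const costPerViewerHour = costPerHour / viewersPerGigabit;  // ~4.2p per viewer-hour

// And the Cup Final: 500 million viewers at 10 Mbit/s each
const cupFinalBitsPerSecond = 500e6 * perViewerMbps * 1e6;  // 5e15 bit/s ≈ 5 Pbit/s
```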

Give it a few years. Imagine a day, not that far off, when everyone’s TV is on the Internet. I, personally, could bid to broadcast the Cup Final on the basis that I need to make, say, 25p per person in advertising. Obviously I don’t own any cameras. Or Gary Lineker. But both of those are available for hire, at the right price.

I’m not proposing to go into the sports broadcasting business myself, but the effects of bandwidth commoditisation are going to be felt very soon in the TV business, and they are woefully unprepared for it. Commoditisation lowers barriers to entry and lowers the cost of innovation. Right now anyone betting on their TV channel keeping audience just because they’ve already got them is betting against Moore’s Law. And a bet against Moore’s Law is a very, very bad bet.