Friday, March 14, 2014

The Trouble With ... Spectrum


To paraphrase Mark Twain, "Buy spectrum.  They're not making it anymore."

And if Ofcom's figures are right, the spectrum that we use today is worth £50bn a year (as of 2011) to the UK economy.  The government say that they want to double that contribution by 2025 - it is already up 25% in the five years from 2006 to 2011.  It's unclear why the data is only as of 2011 - one suspects that, if it was up 25% in five years, it may already be up another 12.5% or so since then, making doubling by 2025 at least a little easier.
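As a quick sanity check on those numbers (my own back-of-envelope arithmetic, not Ofcom's), the growth rates look roughly like this:

```python
# Back-of-envelope check on the quoted spectrum-value figures.
# Assumption (mine): "doubling by 2025" means doubling the 2011 figure of £50bn.

value_2011 = 50.0                          # £bn a year, per Ofcom (as of 2011)
annual_growth = 1.25 ** (1 / 5) - 1        # up 25% over 2006-2011
print(f"Implied annual growth 2006-2011: {annual_growth:.1%}")    # ~4.6%

# If that rate simply continued, 2014 would already be higher:
print(f"Estimated 2014 value: £{value_2011 * (1 + annual_growth) ** 3:.0f}bn")  # ~£57bn

# Rate needed to reach £100bn by 2025 (14 years from 2011):
needed = (100.0 / value_2011) ** (1 / 14) - 1
print(f"Growth needed to double by 2025: {needed:.1%} a year")    # ~5.1%
```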

If you've ever seen how spectrum is carved up in the UK, you know it's very complicated.  Here's a picture that shows just how complicated:


Every so often, the government auctions, through Ofcom, a slice of spectrum.  In April 2000, the sale of the 3G spectrum realised some £22bn.  There was much delight - until stock markets around the world fell dramatically not long afterwards, something which was at least partly to blame for delaying the actual rollout of 3G services (indeed, Vodafone narrowly avoided a fine for failing to reach 90% coverage on time - with an extension granted to the end of 2013).

That 90% is a measure of population coverage, not geographical coverage - which explains why you will often fail to get a signal in the middle of a park, just outside a small town or anywhere with a beautiful view where you want to take a picture and send it to someone there and then, like if you were here:


Of course, there are doubtless plenty of people wandering down Baker Street right now who also can't get or even maintain a 3G signal.

The 4G auctions took place a little over a year ago and resulted in revenues that were some 90% lower than for 3G - partly a reflection of the times, partly because of somewhat decreased competition and partly because of smarter bidding on the part of at least some of the operators.  The 4G build-out is underway now, though there are, in effect, only two networks being built - O2 and Vodafone are sharing their network, as are Three and EE (the latter had a significant head start on 4G because they lobbied, successfully, to reuse spare 1800 MHz spectrum, originally allocated for 2G, for 4G).

For me, though, coverage shouldn't be a competitive metric.  Coverage should be national - not 90% national, proper national.  Using coverage as a metric, coupled with charging for the spectrum, and then splitting the job of building the network across two (or more) players means that they will go where the money is - which means major towns first (in fact, it means London first and for longest), then major travel corridors and commuter towns, and rural areas never.  The same is true, of course, for broadband - though our broadband rollout is mostly physical cable rather than over the air, the same investment/return challenge remains.

And that always seems to leave government to fill in the gaps - whether with the recent "not spots" project for mobile, which will result in a few dozen masts being set up (for multiple-operator use) to cover some (not all) of the gaps in coverage, or with a series of rural broadband projects (that only BT is winning) - neither of which is progressing very fast and neither of which will close all of the gaps.

With the upcoming replacement of Airwave (where truly national - 100% geographic - coverage is required), the rollout of smart meters (where the ideal way for a meter to send its reading home is via SMS or over a home broadband network) and the need to plug gaps in both mobile and broadband coverage, surely there is a need for an approach that we might call "national infrastructure"?

So, focusing on mobile and, particularly, where it converges with broadband (on the basis that one can substitute for the other and that the presence of one could drive the other): could one or more bodies be set up whose job is to create truly national coverage, selling the capacity they create to the content and service providers who want it?  That should ensure coverage, create economies of scale and still allow competition (even more so than today, given that in many areas there is only one mobile provider to choose from).  Network Rail for our telecoms infrastructure.

To put it another way: is it relevant or effective to have competitive build-outs of such nationally vital capabilities as broadband and 4G (and later 5G, 6G and so on) mobile?

If the Airwave replacement were to base its solution on 4G (moving away from TETRA) - and I have no idea whether it will or won't, but given that the Emergency Services will have an increasing need for data in the future, it seems likely - then we would have another player doing a national rollout, funded by government (either directly or through recovery of costs).

There are probably 50,000 mobile masts in the country today.  With the consolidation of networks, that will come down to a smaller number, maybe 30,000.  If you add in Airwave, which operates at a lower frequency and so gets better range from each mast but has to cover far more area, that number will increase a little (Airwave was formerly owned by O2, so my guess is that much of its gear is co-located with O2's mobile masts).  Perhaps £100k to build each of those in the first place and perhaps £10k every time one needs a major upgrade (change of frequency, change of antenna, boost in backhaul and so on) ... pretty soon you're talking real money.  And that's on top of the cost of spectrum, and it excludes routine maintenance and refresh costs.
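To put some very rough numbers on that - all of the figures below are the guesses from the paragraph above, not anything sourced:

```python
# Very rough mast economics, using the guessed figures above:
# ~30,000 masts after consolidation, ~£100k to build each, ~£10k per major upgrade.

masts = 30_000
build_cost_each = 100_000          # £ per mast, initial build (guess)
upgrade_cost_each = 10_000         # £ per mast, per major upgrade (guess)

initial_build = masts * build_cost_each
per_upgrade_cycle = masts * upgrade_cost_each

print(f"Initial build: £{initial_build / 1e9:.1f}bn")                 # ~£3.0bn
print(f"Each major upgrade cycle: £{per_upgrade_cycle / 1e6:.0f}m")   # ~£300m

# A handful of upgrade cycles (2G to 3G to 4G, new antennas, more backhaul)
# gets close to £4bn - before spectrum fees, maintenance or refresh.
print(f"Build plus three upgrade cycles: £{(initial_build + 3 * per_upgrade_cycle) / 1e9:.1f}bn")
```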

So having already rolled out 2G, 3G and the beginning of the 4G network and likely to replace (or at least combine) Airwave with a 4G-based solution ... and with many areas of the country still struggling to get a wireless signal, let alone a fast broadband solution, I think it's time to look at this again.

Whilst I was writing this, Ofcom put out a press release noting:
Ofcom’s European Broadband Scorecard, published today, shows that the UK leads the EU’s five biggest economies on most measures of coverage, take-up, usage and choice for both mobile and fixed broadband, and performs well on price.
That suggests that the current approach has done rather well - better than a somewhat curious selection of five big countries - but it doesn't mean (a) that those are the right countries to compare ourselves with, (b) that we shouldn't go for an absolute measure of 100% coverage or (c) that the current approach will keep us ahead of the game.

It seems to me that, rather than spend billions on HS2, which aims to move people from A to B a bit quicker than they travel today, we could instead spend a smaller amount on securing the infrastructure of the 21st and 22nd centuries rather than that of the 19th.

Monday, March 03, 2014

The Trouble With ... Transition

In my post two weeks ago (Taking G-Cloud Further Forward), I made this point:
I struggle with the suggestion to make contracts three years instead of two.  This is a smokescreen, it's not what is making buyers nervous really, it's just that they haven't tried transition.  So let's try some - let's fire up e-mail in the cloud for a major department and move it 6 months from now.  

Until it's practiced, no one will know how easy (or incredibly difficult) it is.  The key is not to copy and paste virtual machines, but to move the gigabytes of data that goes with it.  This will prove where PSN is really working (I suspect that there are more problems than anyone has yet admitted to), demonstrate how new capabilities have been designed (and prove whether the pointy things have been set up properly as we used to say - that is, does the design rely on fixed IP address ranges or DNS routing that is hardcoded or whatever).  

This won't work for legacy - that should be moved once and once only to the Crown Hosting Service or some other capability (though recognise that lots of new systems will still need to talk to services there).  There's a lot riding on CHS happening - it will be an interesting year for that programme.
Eoin Jennings of Easynet responded, via Twitter, with the view that buyers see a significant procurement overhead if they need to run a procurement every two years (or perhaps more frequently, given that there is an option within G-Cloud to terminate for convenience and move to a new provider).  Eoin is seemingly already trying to convince customers - and struggling.


Georgina O'Toole (of Richard Holway's Tech Market View) shared her view that 2 years could be too short, though for a different reason:

An example might be where a Government organisation opts for a ‘private cloud’ solution requiring tailoring to their specifications. In these cases, a supplier would struggle to recoup the level of investment required in order to make a profit on the deal.  The intention is to reduce the need for private cloud delivery over time, as the cloud market “innovates and matures” but in the meantime, the 24-month rule may still deter G-Cloud use.
Both views make sense, and I understand them entirely, in the "current" world of government IT where systems are complex, bespoke and have been maintained under existing contracts for a decade or more. 

But G-Cloud isn't meant for such systems.  It's meant for systems designed under modern rules, where portability is part of the objective from the get-go.  There shouldn't be private, departmentally focused clouds being set up - the public sector is perfectly big enough to have its own private cloud capability, supplied by a mixture of vendors who can all reassure government that it is not sharing servers or storage with whoever it is afraid of sharing them with.  And if suppliers build something and need long contracts to get their return on investment, then they either aren't building the right things, aren't pricing them right or aren't managing them right - though I do see that there is plenty of risk in building anything public sector cloud focused until there is more take-up, and I applaud the suppliers who have already taken the punt (let's hope that the new protective marking scheme helps there).

Plainly government IT isn't going to turn on a sixpence, with new systems transporting in from far-off galaxies right away, but it is very much the direction of travel, as evidenced by the various projects set up using a more agile design approach right across government - in HMRC, RPA, Student Loans, DVLA and so on.

What really needs to happen is some thinking through of how it will work and some practice:

- How easy is it to move systems, even those designed for cloud, where IP ranges are fixed and owned by data centre providers?

- How will network stacks (firewalls, routers, load balancers, intrusion detection tools etc) be moved on a like for like basis?

- If a system comes with terabytes or petabytes of data, how will they be transferred so that there is no loss of service (or data)?  (There's a rough sizing sketch after this list.)

- In a world where there is no capex, how does government get its head around not looking at everything as though it owned it?

- If a system is supported by minimal staff (as in 0.05 heads per server or whatever), TUPE doesn't apply (though it may well apply for the first transfer) - how do government (and the supplier base) get to grips with that?

- How can the commercial team sharpen their processes so that what is still taking them many (many) weeks (despite the reality of a far quicker process with G-Cloud) can be done in a shorter period?
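On the data question in particular, a quick and purely hypothetical sizing exercise shows why this needs practice rather than assumption - the raw transfer window alone can run to weeks or months at realistic inter-data-centre bandwidth:

```python
# How long does it take just to copy a large dataset between data centres?
# Illustrative numbers only - substitute your own sizes and link speeds.

def transfer_days(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days to move data_tb terabytes over a link_gbps link at the given utilisation."""
    bits = data_tb * 1e12 * 8                        # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)  # usable bits per second
    return seconds / 86_400

for size_tb in (10, 100, 1_000):                     # 1,000 TB = 1 PB
    for link_gbps in (1, 10):
        days = transfer_days(size_tb, link_gbps)
        print(f"{size_tb:>6} TB over {link_gbps:>2} Gbps: ~{days:.1f} days")
```

Even before worrying about keeping source and target in sync while users carry on working, a petabyte over a single 1 Gbps link is months, not days - exactly the kind of thing that only shows up when transition is actually practised.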

Designing this capability in from the start needs to have already started and, for those still grappling with it, there should be speedy reviews of what has already been tried elsewhere in government (I hesitate to say "lessons must be learned", on the basis that those four words may be the most over-used and under-practised in the public sector).

With the cost of hardware roughly halving every 18-24 months, and therefore the cost of cloud hosting falling on a similar basis (perhaps even faster, given increasing rates of automation), government can benefit from continuously falling costs (in at least part of its stack) - and by designing its new systems to avoid lock-in from scratch, government should, in theory, never be dependent on a small number of suppliers and high embedded costs.
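As a minimal sketch of what that halving implies over a typical contract term (assuming, purely for illustration, that unit costs halve every 18 months):

```python
# If unit hosting costs halve every 18 months, what does a long, fixed-price
# deal leave on the table?  Illustrative only.

def relative_cost(months: float, halving_months: float = 18.0) -> float:
    """Unit cost after `months`, relative to today (today = 1.0)."""
    return 0.5 ** (months / halving_months)

for years in (1, 2, 3, 5):
    print(f"After {years} year(s): {relative_cost(years * 12):.0%} of today's unit cost")

# Roughly 63% after one year, 40% after two, 25% after three and 10% after five -
# which is why the freedom to re-procure (and to move) matters more than the day-one price.
```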

Now, that all makes sense for hosting ... how government does the same for application support and for service integration and management, and how it gets onto a path where it actually redesigns its applications (across the board) so that they could be moved whenever new pricing makes sense, is the real problem to grapple with.