Friday, March 14, 2014

The Trouble With ... Spectrum


To paraphrase Mark Twain, "Buy spectrum.  They're not making it anymore."

And if Ofcom's figures are right, the spectrum that we use today is worth £50bn a year (as of 2011) to the UK economy.  The government say that they want to double that contribution by 2025 - it is already up 25% in the 5 years from 2006 to 2011.  It's unclear why the data is as of 2011 - one suspects that if it was up 25% in 5 years, it may already be up another 12.5% since then making doubling by 2025 at least a little easier.

If you've ever seen how spectrum is carved up in the UK, you know it's very complicated.  Here's a picture that shows just how complicated:


Every so often, the government auctions, through Ofcom, a slice of spectrum.  In April 2000, the sale of the 3G spectrum realised some £22bn.  There was much delight - until stock markets around the world fell dramatically not long afterwards, something which was at least partly to blame for delaying the actual rollout of 3G services (indeed, Vodafone narrowly avoided a fine for failing to reach 90% coverage on time - with an extension granted to the end of 2013).

That 90% is a measure of population coverage, not geographical coverage - which explains why you will often fail to get a signal in the middle of a park, just outside a small town or, often, anywhere with a beautiful view where you want to take a picture and send it to someone there and then, like if you were here:


Of course, there are doubtless plenty of people wandering down Baker Street right now who also can't get or even maintain a 3G signal.

The 4G auctions took place a little over a year ago and resulted in revenues that were some 90% lower than for 3G - partly a reflection of the times, partly because of somewhat decreased competition and partly because of smarter bidding on the part of at least some of the operators.  The 4G build out is underway now though there are, in effect, only two networks being built - O2 and Vodafone are sharing their network, as are Three and EE (the latter had a significant headstart on 4G because they lobbied, successfully, to reuse spare 1800 MHz spectrum, originally licensed for 2G, for 4G).

For me, though, coverage shouldn't be a competitive metric.  Coverage should be national - not 90% national, proper national.  Using coverage as a metric, coupled with charging for the spectrum, and then splitting up the job of building the network across two (or more) players means that they will go where the money is - which means major towns first (in fact, it means London first and for longest), then major travel corridors and commuter towns and the rural areas never.  The same is true, of course, for broadband - though our broadband rollout is mostly physical cable rather than over the air, the same investment/return challenge remains.

And that always seems to leave government to fill in the gaps - whether with the recent "not spots" project for mobile that will result in a few dozen masts being set up (for multiple operator use) to cover some (not all) of the gaps in coverage or a series of rural broadband projects (that only BT is winning) - neither of which is progressing very fast and certainly not covering the gaps.

With the upcoming replacement of Airwave (where truly national - 100% geographic - coverage is required), the rollout of smart meters (where the ideal way for a meter to send its reading home is via SMS or over a home broadband network) and the need to plug gaps in both mobile and broadband coverage, surely there is a need for an approach that we might call "national infrastructure"?

So focusing on mobile and, particularly, where it converges with broadband (on the basis that one can substitute for the other and that the presence of one could drive the other), can one or more bodies be set up whose job is to create truly national coverage, selling the capacity they create to content and service providers who want it?  That should ensure coverage, create economies of scale and still allow competition (even more so than today given that in many areas, there is only one mobile provider to choose from).  Network Rail for our telecoms infrastructure.

To put it another way: is it relevant or effective to have competitive build-outs of such nationally vital capabilities as broadband and 4G (and later 5G, 6G etc) mobile?

If the Airwave replacement were to base its solution on 4G (moving away from Tetra) - and I have no idea if they will or they won't, but given the Emergency Services will have an increasing need for data in the future, it seems likely - then we would have another player doing a national rollout, funded by government (either directly or through recovery of costs).

There are probably 50,000 mobile masts in the country today.  With the consolidation of networks, that will get to a smaller number, maybe 30,000.  If you add in Airwave, which operates at a lower frequency and so gets better propagation from each mast but has to cover far more area, that number will increase a little (Airwave was formerly owned by O2 so my guess is that much of their gear is co-located with O2 mobile masts).   Perhaps £100k to build each of those in the first place and perhaps £10k every time they need a major upgrade (change of frequency / change of antenna / boost in backhaul and so on) ... pretty soon you're talking real money.  And that's on top of the cost of spectrum and excludes routine maintenance and refresh costs.
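
To put some rough numbers on that, here is the back-of-the-envelope version - every figure below is a guess from the paragraph above, not a real costing:

```python
# Back-of-the-envelope mast economics, using the guesses above
# (illustrative figures only, not actuals).

masts = 30_000            # assumed post-consolidation mast count
build_cost = 100_000      # assumed cost to build each mast (£)
upgrade_cost = 10_000     # assumed cost per major upgrade per mast (£)
major_upgrades = 3        # e.g. change of frequency, antenna, backhaul boost

build_total = masts * build_cost
upgrade_total = masts * upgrade_cost * major_upgrades

print(f"Initial build:  £{build_total:,}")      # £3,000,000,000
print(f"Major upgrades: £{upgrade_total:,}")    # £900,000,000
print(f"Total, excluding spectrum, maintenance and refresh: £{build_total + upgrade_total:,}")
```

On those guesses the build alone is around £3bn, before spectrum, routine maintenance and refresh are counted.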

So having already rolled out 2G, 3G and the beginning of the 4G network and likely to replace (or at least combine) Airwave with a 4G-based solution ... and with many areas of the country still struggling to get a wireless signal, let alone a fast broadband solution, I think it's time to look at this again.

Whilst I was writing this, Ofcom put out a press release noting:
Ofcom’s European Broadband Scorecard, published today, shows that the UK leads the EU’s five biggest economies on most measures of coverage, take-up, usage and choice for both mobile and fixed broadband, and performs well on price.
That suggests that the current approach has done rather well - better than a curious selection of five big countries - but it doesn't mean (a) that those are the right countries to compare ourselves with, (b) that we shouldn't go for an absolute measure of 100% coverage or (c) that the current approach will keep us ahead of the game.

It seems to me that rather than spend billions on HS2 which aims to transport more people from A to B a bit quicker than they move today, we could, instead, spend a smaller amount on securing the infrastructure for the 21st and 22nd centuries rather than that of the 19th.

Monday, March 03, 2014

The Trouble With ... Transition

In my post two weeks ago (Taking G-Cloud Further Forward), I made this point:
I struggle with the suggestion to make contracts three years instead of two.  This is a smokescreen, it's not what is making buyers nervous really, it's just that they haven't tried transition.  So let's try some - let's fire up e-mail in the cloud for a major department and move it 6 months from now.  

Until it's practiced, no one will know how easy (or incredibly difficult) it is.  The key is not to copy and paste virtual machines, but to move the gigabytes of data that goes with it.  This will prove where PSN is really working (I suspect that there are more problems than anyone has yet admitted to), demonstrate how new capabilities have been designed (and prove whether the pointy things have been set up properly as we used to say - that is, does the design rely on fixed IP address ranges or DNS routing that is hardcoded or whatever).  

This won't work for legacy - that should be moved once and once only to the Crown Hosting Service or some other capability (though recognise that lots of new systems will still need to talk to services there).  There's a lot riding on CHS happening - it will be an interesting year for that programme.
Eoin Jennings of Easynet responded, via Twitter, with a view that buyers thought that there was significant procurement overhead if there was a need to run a procurement every 2 years (or perhaps more frequently given there is an option within G-Cloud to terminate for convenience and move to a new provider). Eoin is seemingly already trying to convince customers - and struggling.


Georgina O'Toole (of Richard Holway's Tech Market View) shared her view that 2 years could be too short, though for a different reason:

An example might be where a Government organisation opts for a ‘private cloud’ solution requiring tailoring to their specifications. In these cases, a supplier would struggle to recoup the level of investment required in order to make a profit on the deal.  The intention is to reduce the need for private cloud delivery over time, as the cloud market “innovates and matures” but in the meantime, the 24-month rule may still deter G-Cloud use.
Both views make sense, and I understand them entirely, in the "current" world of government IT where systems are complex, bespoke and have been maintained under existing contracts for a decade or more. 

But G-Cloud isn't meant for such systems.  It's meant for systems designed under modern rules where portability is part of the objective from the get go.   There shouldn't be private, departmentally focused, clouds being set up - the public sector is perfectly big enough to have its own private cloud capability, supplied by a mixture of vendors who can all reassure government that they are not sharing their servers or storage with whoever they are afraid of sharing them with.  And if suppliers build something and need long contracts to get their return on investment, then they either aren't building the right things, aren't pricing it right or aren't managing it right - though I see that there is plenty of risk in building anything public sector cloud focused until there is more take up, and I applaud the suppliers who have already taken the punt (let's hope that the new protective marking scheme helps there).

Plainly government IT isn't going to turn on a sixpence with new systems transporting in from far off galaxies right away, but it is very much the direction of travel as evidenced by the various projects set up using a more agile design approach right across government - in HMRC, RPA, Student Loans, DVLA and so on.

What really needs to happen is some thinking through of how it will work and some practice:

- How easy is it to move systems, even those designed for cloud, where IP ranges are fixed and owned by data centre providers?

- How will network stacks (firewalls, routers, load balancers, intrusion detection tools etc) be moved on a like for like basis?

- If a system comes with terabytes or petabytes of data, how will they be transferred so that there is no loss of service (or data)? (See the rough transfer-time sketch after this list.)

- In a world where there is no capex, how does government get its head around not looking at everything as though it owned it?

- If a system is supported by minimal staff (as in 0.05 heads per server or whatever), TUPE doesn't apply (though it may well apply for the first transfer) - how does government (and the supplier base) understand that?

- How can the commercial team sharpen their processes so that what is still taking them many (many) weeks (despite the reality of a far quicker process with G-Cloud) can be done in a shorter period?
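
To give a sense of why the data question above is the hard one, here is a rough transfer-time sketch.  The link speeds, efficiency factor and data volumes are illustrative assumptions, not figures from any department:

```python
# Rough transfer-time estimates for moving a system's data between data
# centres.  Volumes, link speeds and the 70% efficiency factor are all
# illustrative assumptions.

def transfer_days(data_terabytes: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days to copy data_terabytes over a link_gbps link at the given efficiency."""
    bits = data_terabytes * 8 * 10**12               # TB -> bits (decimal units)
    seconds = bits / (link_gbps * 10**9 * efficiency)
    return seconds / 86_400

for volume_tb in (1, 50, 1_000):                     # 1 TB, 50 TB, 1 PB
    for link_gbps in (0.1, 1, 10):                   # 100 Mbps, 1 Gbps, 10 Gbps
        days = transfer_days(volume_tb, link_gbps)
        print(f"{volume_tb:>5} TB over {link_gbps:>4} Gbps: {days:8.1f} days")
```

Even on a clean 10 Gbps link, a petabyte takes the best part of two weeks to copy - which is why "copy and paste the virtual machines" is not the same thing as moving a live service.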

Designing this capability in from the start needs to have already started and, for those still grappling with it, there should be speedy reviews of what has already been tried elsewhere in government (I hesitate to say that "lessons must be learned" on the basis that those 4 words may be the most over-used and under-practiced words in the public sector).

With the cost of hardware roughly halving every 18-24 months and therefore the cost of cloud hosting falling on a similar basis (perhaps even faster given increasing rates of automation), government can benefit from continuously falling costs (in at least part of its stack) - and by designing its new systems so that they avoid lock-in from scratch, government should, in theory, never be dependent on a small number of suppliers and high embedded costs.
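
As a quick illustration of that compounding effect (the halving period is the assumption in the paragraph above, nothing more):

```python
# Illustrative compounding of hosting unit costs if prices halve every
# 18-24 months (the halving period is an assumption, not a forecast).

def relative_cost(months: int, halving_period_months: int) -> float:
    return 0.5 ** (months / halving_period_months)

for halving in (18, 24):
    trajectory = [round(relative_cost(m, halving), 2) for m in (0, 12, 24, 36, 48)]
    print(f"halving every {halving} months: {trajectory}")
# halving every 18 months: [1.0, 0.63, 0.4, 0.25, 0.16]
# halving every 24 months: [1.0, 0.71, 0.5, 0.35, 0.25]
```

On those assumptions, a system that avoids lock-in could be paying a quarter or less of its launch hosting price four years in - but only if it can actually move.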

Now, that all makes sense for hosting ... how government does the same for application support, service integration and management and how it gets on the path where it actually redesigns its applications (across the board) to work in such a way that they could be moved whenever new pricing makes sense is the real problem to grapple with.

Monday, February 17, 2014

Taking G-Cloud Further Forward

A recent blog post from the G-Cloud team talks about how they plan to take the framework forward. I don't think it goes quite far enough, so here are my thoughts on taking it even further forward.

Starting with that G-Cloud post:

It's noted that "research carried out by the 6 Degree Group suggests that nearly 90 percent of local authorities have not heard of G-Cloud".  This statement is made in the context of the potential buyer count being 30,000 strong.  Some, like David Moss, have confused this and concluded that 27,000 buyers don't know about G-Cloud.  I don't read it that way - but it's hard to say what it does mean.  A hunt for the "6 Degree Group", presumably twice as good as the 3 Degrees, finds one obvious candidate (actually the 6 Degrees Group), but they make no mention of any research on their blog or their news page (and I can't find them in the list of suppliers who have won business via G-Cloud).  Still, 90% of local authorities not knowing about G-Cloud is, if the question was asked properly and to the right people (and therein lies the problem with such research), not good.  It might mean that 450 or 900 or 1,350 buyers (depending on whether there are 1, 2 or 3 potential buyers of cloud services in each local authority) don't know about the framework.  How we get to 30,000 potential buyers I don't know - but if there is such a number, perhaps it's a good place to look at potential efficiencies in purchasing.

[Update: I've been provided with the 30,000 - find them here: http://gps.cabinetoffice.gov.uk/sites/default/files/attachments/2013-04-15%20Customer%20URN%20List.xlsx. It includes every army regiment (SASaaS?), every school and thousands of local organisations.  So a theoretical buyer list but not a practical buyer list. I think it better to focus on the likely buyers. G-Cloud is a business - GPS gets 1% on every deal.  That needs to be spent on promoting to those most likely to use it]

[Second update: I've been passed a further insight into the research: http://www.itproportal.com/2013/12/20/g-cloud-uptake-low-among-uk-councils-and-local-authorities/?utm_term=&utm_medium=twitter&utm_campaign=testitppcampaign&utm_source=rss&utm_content=  - the summary from this is that 87% of councils are not currently buying through G-Cloud and 76% did not know what the G-Cloud [framework] could be used for]

Later, we read "But one of the most effective ways of spreading the word about G-Cloud is not by us talking about it, but for others to hear from their peers who have successfully used G-Cloud. There are many positive stories to tell, and we will be publishing some of the experiences of buyers across the public sector in the coming months" - True, of course.  Except if people haven't heard of G-Cloud they won't be looking on the G-Cloud blog for stories about how great the framework is.  Perhaps another route to further efficiencies is to look at the vast number of frameworks that exist today (particularly in local government and the NHS) and start killing them off so that purchases are concentrated in the few that really have the potential to drive cost savings allied with better service delivery.

And then "We are working with various trade bodies and organisations to continue to ensure we attract the best and most innovative suppliers from across the UK."  G-Cloud's problem today isn't, as far as we can tell, a lack of innovative suppliers - it's a lack of purchasing through it.  In other words, a lack of demand.  True, novel services may attract buyers but most government entities are still in the "toe in the water" stage of cloud, experimenting with a little IaaS, some PaaS and, based on the G-Cloud numbers, quite a lot of SaaS (some £15m in the latest figures, or about 16% of total spend versus only 4% for IaaS and 1% for Paas).

On the services themselves, we are told that "We are carrying out a systematic review of all services and have, so far, deleted around 100 that do not qualify."  I can only applaud that.  Though I suspect the real number to delete may be in the 1000s, not the 100s.  It's a difficult balance - the idea of G-Cloud is to attract more and more suppliers with more and more services, but buyers only want sensible, viable services that exist and are proven to work.  It's not like iTunes where it only takes one person to download an app and rate it 1* because it doesn't work/keeps crashing/doesn't synchronise and so suggest to other potential buyers that they steer clear - the vast number of G-Cloud services have had no takers at all and even those that have lack any feedback on how it went (I know that this was one of the top goals of the original team but that they were hampered by "the rules").

There's danger ahead too: "Security accreditation is required for all services that will hold information assessed at Business Impact Level profiles 11x/22x, 33x and above. But of course, with the new security protection markings that are being introduced on 1 April, that will change. We will be publishing clear guidance on how this will affect accreditation of G-Cloud suppliers and services soon."  It's mid-February and the new guidelines are just 7 weeks away.  That doesn't give suppliers long to plan for, or make, any changes that are needed (the good news here being that government will likely take even longer to plan for, and make, such changes at their end).  This is, as CESG people have said to me, a generational change - it's going to take a while, but that doesn't mean that we should let it drift.

Worryingly: "we’re excited to be looking at how a new and improved CloudStore, can act as a single space for public sector buyers to find what they need on all digital frameworks."  I don't know that a new store is needed; I believe that we're already on the third reworking, would a fourth help?  As far as I can tell, the current store is based on Magento which, from all accounts and reviews online, is a very powerful tool that, in the right hands, can do pretty much whatever you want from a buying and selling standpoint.  I believe a large part of the problem is in the data in the store - searching for relatively straightforward keywords often returns a surprising answer - try it yourself, type in some popular supplier names or some services that you might want to buy.   Adding in more frameworks (especially where they can overlap as PSN and G-Cloud do in several areas) will more than likely confuse the story - I know that Amazon manages it effortlessly across a zillion products but it seems unlikely that government can implement it any time soon (wait - they could just use Amazon). I would rather see the time, and money, spent getting a set of products that were accurately described and that could be found using a series of canned searches based on what buyers were interested in.

So, let's ramp up the PR and education (for buyers), upgrade the assurance process that ensures that suppliers are presenting products that are truly relevant, massively clean up the data in the existing store, get rid of duplicate and no longer competitive buying routes (so that government can aggregate for best value), make sure that buyers know more about what services are real and what they can do, don't rebuild the damn cloud store again ...

... What else?

Well, the Skyscape+14 letter is not a terrible place to start, though I don't agree with everything suggested.  G-Cloud could and should:

- Provide a mechanism for services to work together.  In the single prime contract era, which is coming to an end, this didn't matter - one of the oligopoly would be tasked to buy something for its departmental customer and would make sure all of the bits fitted together and that it was supported in the existing contract (or an adjunct).  In a multiple supplier world where the customer will, more often than not, act as the integrator, both customer and supplier are going to need ways to make this all work together.   The knee bone may be connected to the thigh bone, but that doesn't mean that your email service in the cloud is going to connect via your PSN network to your active directory so that you can do everything on your iPad.

- Publish what customers across government are looking at both in advance and as it occurs, not as data but as information.  Show what proof of concept work is underway (as this will give a sense of what production services might be wanted), highlight what components are going to be in demand when big contracts come to an end, illustrate what customers are exploring in their detailed strategies (not the vague ones that are published online).  SMEs building for the public sector will not be able to build speculatively - so either the government customer has to buy exactly what the private sector customer is buying (which means that there can be no special requirements, no security rules that are different from what is already there and no assurance regime that is above and beyond what a major retailer or utility might want), or there needs to be a clear pipeline of what is wanted.  Whilst Chris Chant used to say that M&S didn't need to ask people walking down the street how many shirts they would buy if they were to open a store in the area, government isn't yet buying shirts as a service - they are buying services that are designed and secured to government rules (with the coming of Official, that may all be about to change - but we don't know yet because, see above, the guidance isn't available).

- Look at real cases of what customers want to do - let's say that a customer wants to put a very high performing Oracle RAC instance in the cloud - and ensure that there is a way for that to be bought.  It will likely require changes to business models and to terms and conditions, but despite the valiant efforts of GDS there is not yet a switch away from such heavyweight software as Oracle databases.  The challenge (one of many) that government has, in this case, is that it has massive amounts of legacy capability that is not portable, is not horizontally scalable and that cannot be easily moved - Crown Hosting may be a solution to this, if it can be made to work in a reasonable timeframe and if the cost of migration can be minimised.

- I struggle with the suggestion to make contracts three years instead of two.  This is a smokescreen, it's not what is making buyers nervous really, it's just that they haven't tried transition.  So let's try some - let's fire up e-mail in the cloud for a major department and move it 6 months from now.  Until it's practiced, no one will know how easy (or incredibly difficult) it is.  The key is not to copy and paste virtual machines, but to move the gigabytes of data that goes with it.  This will prove where PSN is really working (I suspect that there are more problems than anyone has yet admitted to), demonstrate how new capabilities have been designed (and prove whether the pointy things have been set up properly as we used to say - that is, does the design rely on fixed IP address ranges or DNS routing that is hardcoded or whatever).  This won't work for legacy - that should be moved once and once only to the Crown Hosting Service or some other capability (though recognise that lots of new systems will still need to talk to services there).  There's a lot riding on CHS happening - it will be an interesting year for that programme.

The ICT contracts for a dozen major departments/government entities are up in the next couple of years - contract values in the tens of billions (old money) will be re-procured.   Cloud services, via G-Cloud, will form an essential pillar of that re-procurement process, because they are the most likely way to extract the cost savings that are needed.  In some cases cloud will be bought because the purchasing decision will be left too late to do it any other way than via a framework (unless the "compelling reason" for extension clause kicks in) but in most cases because the G-Cloud framework absolutely provides the best route to an educated, passionate supplier community who want to disrupt how ICT is done in Government today.  We owe them an opportunity to make that happen.  The G-Cloud team needs more resources to make it so - they are, in my view, the poor relation of other initiatives in GDS today.  That, too, needs to change.

Monday, February 10, 2014

G-Cloud By The Numbers (December 2013 data)

Last week, the Cabinet Office published the latest figures for G-Cloud spending.  The data runs through December 2013.  Here are the highlights - call it an infographic in text:

- After 21 months of use, total spend via the framework is £92.657m

- Spend in all of 2013 was £85m

- Spend in the last quarter of 2013 was £35m.  That would suggest a run rate for 2014 of some £140m, assuming no growth (or decline) - see the quick arithmetic check after this list

- Lot 3's total spending of £15m (over 21 months) is only marginally higher than the total spend in November 2013 (of £14m)

- The split between the lots is Lot 1: 4%, Lot 2: 1%, Lot 3: 16%, Lot 4: 78%

- Taking the spend over 2013 only, those splits don't change much: 5%, 1%, 15%, 79%

- Over the last quarter of 2013, the splits change a little: 7%, 1%, 10%, 82%
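
For anyone who wants to check the arithmetic, the run rate and the Lot 3 share fall straight out of the published totals (figures in £m, as above):

```python
# Reproducing the headline arithmetic above from the published figures.

total_21_months = 92.657   # £m, total spend via the framework to December 2013
q4_2013 = 35.0             # £m, spend in the last quarter of 2013
lot3_total = 15.0          # £m, Lot 3 (SaaS) spend over the 21 months

# Run rate for 2014 if the last quarter simply repeats, with no growth or decline
print(f"Implied 2014 run rate: £{q4_2013 * 4:.0f}m")                  # £140m

# Lot 3's share of everything spent so far
print(f"Lot 3 share of total: {lot3_total / total_21_months:.0%}")    # 16%
```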

Conclusion:  The vast bulk of the spending is still via Lot 4 - people and people as a service.  That may soon change given that there are other frameworks available for taking on people, including the Digital Services Framework and Consultancy One.   Lot 4 spending was down to £9m in December (from £12m in November) - that's more likely because it was a short month than any sign of a trend.

Conclusion: Either Infrastructure as a Service is ridiculously cheap (not yet exceeding £1m per month) or there is little appetite yet for serious tin by the hour usage.  In all likelihood, with most departments tied into existing outsource contracts, only a few are breaking out and trialling Lot 1.  With contracts expiring over the next couple of years and the Crown Hosting Service perhaps appearing at the turn of the year, we may see real changes here.

- IBM is no longer the largest supplier by value of business done; the new winner is BJSS Ltd (a supplier of agile consultancy) who clock in at £7.2m.  IBM are second at £6m.

- The Home Office is still the highest spending customer, at £13.9m.  MoJ are second at £9.2m.

- The top 10 customers account for 51% of the spend on the framework.  The top 20 make up 66%.  There are nearly 350 individual customers listed.

- The lowest spending customer is the "Wales Office", with £375.  A dozen customers have spent less than £1,000.  Oddly, last time I checked this data, Downing Street was the lowest spend at £150 - that entry is no longer in the sheet as far as I can tell. Perhaps they got a refund.

Conclusion:  Adoption of the framework is still spotty.  A few customers have massively taken to it, many have dipped their toe in the water.  A few suppliers have likely seen their business utterly transformed with this new access to public sector customers.

Monday, January 27, 2014

Government Draws The Line

On Friday, the Cabinet Office announced (or re-announced according to Margaret Hodge) that:
  • no IT contract will be allowed over £100 million in value – unless there is an exceptional reason to do so, smaller contracts mean competition from the widest possible range of suppliers
  • companies with a contract for service provision will not be allowed to provide system integration in the same part of government
  • there will be no automatic contract extensions; the government won’t extend existing contracts unless there is a compelling case
  • new hosting contracts will not last for more than 2 years

I was intrigued by the lower case. Almost like I wrote the press release.
These are the new "red lines" then - I don't think these are re-announcements, they are firming up previous guidance.  When the coalition came to power, there was a presumption against projects over £100m in value; now there appears to be a hard limit (albeit with the caveat around exception reasons, ditto with extensions where there is a "compelling" case).

On the £100m limit:

There may be a perverse consequence here.  Contracts will be split up and/or made shorter to fit within the limit; or contracts may be undervalued with the rest coming in change controls.  Transitions may occur more regularly, increasing costs over the long term.  Integration of the various suppliers may also cost more.  For 20 years, government has bought its IT in huge, single prime (and occasionally double prime) silos.  That is going to be a hard, but necessary, habit to break.

£100m is, of course, still a lot of money.   Suppliers bidding for £100m contracts are likely the same as those bidding for £500m contracts; they are most likely not the same as those bidding for £1m or £5m contracts.

To understand what the new contract landscape looks like will require a slightly different approach to transparency - instead of individual spends or contracts being reported on, it would give a better view if the aggregate set of contracts to achieve a given outcome were reported.  So if HMRC are building a new Import/Export system (for instance), we should be able to visit a site and see the total set of contracts that are connected with that service (including the amounts, durations and suppliers).

On the "service providers" will not be allowed to carry out "system integration" point:
I'm not sure that I follow this but I take it to mean that competition will be forced into the process so that, as in my point above about disaggregated contracts, suppliers will be prevented from winning multiple lots (particularly where hardware and software are provided by the same company).  That, in theory, has the most consequence for companies like Fujitsu and HP who typically provide their own servers, desktops or laptops when taking on other responsibilities in an outsource deal.
And no more extensions:

Assuming that there isn't a compelling reason for extension, the contract term is the contract term.  If that rule is going to be rigorously applied to all existing contracts, there are some departments in trouble already who have run out of time for a reprocurement or who will be unable to attract any meaningful competition into such a procurement.  Transparency, again, can help here - which contracts are coming up to their expiry point (let's look ahead 24 months to start with) and what is happening to each of them (along with what actually happened when push came to shove).  That would also help suppliers, particularly small ones, understand the pipeline.
On limiting hosting contracts to 2 years:
That's consistent with the G-Cloud contract term (notwithstanding that some suppliers wrote to GDS last week asking for the term to be extended to 3 years).  But it's also unproven - it's one thing to "copy and paste" a dozen virtual machines from one data centre to another, it's another thing to shift a petabyte of data or a set of load-balanced, firewalled, well-routed network connections.  Government is going to have to practice this - so far, moves of hosting providers have taken a year or more and cost millions (without delivering any tangible business benefit especially given the necessary freezes either side of the move).  It also means trouble for some of the legacy systems that are fragile and hard to move.  The Crown Hosting Service could, at least, limit moves of those kinds of systems to a single transition to their facilities - that would be a big help.

Friday, January 24, 2014

Government Gateway - Teenage Angst

Tomorrow, January 25th, the Government Gateway will be 13.  I’m still, to be honest, slightly surprised (though pleased) that the Gateway continues to be around - after all, in Internet time, things come and go in far shorter periods than that.  In the time that we have had the Gateway, we rebuilt UKonline.gov.uk with three different suppliers, launched direct.gov.uk and replatformed it some years later, then closed that down and replaced it with gov.uk which has absorbed the vast bulk of central government’s websites and has probably had 1,000 or more iterations since launch.  And yet the Gateway endures.


In 13 years, the Gateway has, astonishingly, had precisely two user interface designs.  In the first, I personally picked the images that we used on each screen (as well as the colour schemes, the text layout and goodness knows what else) and one of the team made ‘phone calls to the rights holders (most of whom, if I recall correctly, were ordinary people who had taken nice pictures) to obtain permission for us to use their images.  If you look at the picture above, you will see three departments that no longer exist (IR and C&E formed HMRC, MAFF became Defra) and five brands (including UKonline) that also don't exist.


Of course we carried out formal user testing for everything we did (with a specialist company, in a purpose built room with one-way glass, observers, cameras and all that kind of thing), often through multiple iterations.  The second UI change was carried out on my watch too.    I left that role - not that of Chief UI Designer - some 9 years ago.

My own, probably biased (but based on regular usage of it as a small business owner), sense is that the Gateway largely stopped evolving in about 2006.  Up until that point it had gone through rapid, iterative change - the first build was completed in just 90 days, with full scrutiny from a Programme Board consisting of three Permanent Secretaries, two CIOs and several other senior figures in government.  Ian McCartney, the Minister of the Cabinet Office (the Francis Maude of his day) told me as he signed off the funding for it that failure would be a “resignation issue.” I confirmed that he could have my head if we didn’t pull it off.  He replied “Not yours, mine!” in that slightly impenetrable Scottish accent of his.  We had a team, led by architects and experts from Microsoft, of over 40 SMEs (radical, I know).  Many of us worked ridiculous hours to pull off the first release - which we had picked for Burns Night, the 25th of January 2001.

On the night of the 24th, many of us pulled another all nighter to get it done and I came back to London from the data centre, having switched the Gateway on at around 5am - the core set of configuration data was hand carried from the pre-production machine to the production machine on a 3 1/2” floppy disc.  I don't think we could do that now, even if we could find such a disc (and a drive that supported it).  

The Programme Board met to review what we had done and, to my surprise, the security accreditation lead (what would be called a Pan-Government Accreditor now) said that he wanted to carry out some final tests before he okayed it being switched on.  I lifted my head from the table where I may have momentarily closed my eyes and said “Ummm, I turned it on at 5.”  Security, as it so often did (then and now), won - we took the Gateway off the ‘net, carried out the further tests and turned it back on a few hours later.

Over the following months we added online services from existing departments, added new departments (and even some Local Authorities), added capability (payments, secure messaging) and kept going.  We published what we were doing every month in an effort to be as transparent as possible.  We worked with other suppliers to support their efforts to integrate to the Gateway, developing (with Sun and Software AG, at their own risk and expense) a competitive product that handled the messaging integration (and worked with another supplier on an open source solution which we didn’t pull off).

We published our monthly reports online - though I think that they are now lost, perhaps following multiple migrations of the Cabinet Office website.  Here is a page from February 2004 (the full deck is linked to here) that shows what we had got done and what our plans were:

The Gateway has long since been seen as end of life - indeed, I’ve been told several times that it has now been “deprecated” (which apparently means that the service should be avoided as it has been or is about to be superseded).  Yet it’s still here.

What is happening then?

Two years ago, in November 2011, I wrote a post about the Cabinet Office’s new approach to Identity. Perhaps the key paragraph in that post was "With the Cabinet Office getting behind the [Identity Programme] - and, by the sounds of it, resourcing it for the first time in its current incarnation - there is great potential, provided things move fast.  One of the first deliverables, then, should be the timetable for the completion of the standards, the required design and, very importantly, the proposed commercial model.”

There was talk then of HMRC putting up their business case for using the new services in April 2012.  The then development lead of Universal Credit waxed on about how he would definitely be using Identity Services when UC went live in April 2013.  Oh, the good old days.

DWP went to market for their Identity Framework in March 2012 as I noted in a post nearly a year ago. Framework contracts were awarded in November 2012.  

Nearly five Gateway development cycles later, we are yet to see the outcome of those - and there has been little in the way of update, as I said a year ago.

Things may, though, be about to change.

GDS, in a blog post earlier this month, say "In the first few months of 2014 we’ll be starting the IDA service in private beta with our identity providers, to allow users to access new HMRC and DVLA services."

Nine Gateway development cycles later, we might be about to see what the new service(s) will look like.   I am very intrigued.

Some thoughts for GDS as they hopefully enter their first year with live services:

Third Party Providers 

With the first iteration of the Gateway, we provided the capability for a 3rd party to authenticate someone and then issue them a digital certificate.  That certificate could be presented to the Gateway and then linked with your identity within government.  Certificates, at the time, were priced at £50 (by the 3rd party, not by government) because of the level of manual checking of documents that was required (they were initially available for companies only).  As long ago as 2002, I laid out my thoughts on digital certificates.

There were many technical challenges with certificates, as well as commercial ones around cost.  But one of the bigger challenges was that we still had to do the authentication work to tie the owner of the digital certificate to their government identity - it was a two step process.

With the new approach from the Cabinet Office - a significantly extended version of the early work with multiple players (up to 8 though not initially, and there is doubtless room for more later) but the same hub concept (the Gateway is just as much a hub as an authentication engine) - the same two step process will be needed.  I will prove who I am to Experian, the Post Office, Paypal or whoever, and then government will take that information and match that identity to one inside government - and they might have to do that several times for each of my interactions with, say, HMRC, DWP, DVLA and others.  There is still, as far as I know, no ring of trust where because HMRC trusts that identity, DWP will too.  Dirty data across government with confusion over National Insurance numbers, latest addresses, initials and so on all make that hard, all this time later.
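
A minimal sketch of that two-step flow, as I understand it, is below.  The names, fields and matching logic are purely illustrative - this is not the real hub or its API - and the naive matcher deliberately ignores the dirty-data problems that make the second step hard:

```python
# Illustrative sketch of the two-step process: an external provider asserts
# who you are, then each department separately matches that assertion to its
# own records.  Everything here is made up for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AssertedIdentity:
    provider: str        # e.g. "Experian", "Post Office", "PayPal"
    name: str
    date_of_birth: str
    address: str

def match_to_department_record(identity: AssertedIdentity, dept_records: list) -> Optional[dict]:
    """Step two: tie the asserted identity to a department's own record.
    Old addresses and mistyped NI numbers are exactly what make this hard
    in practice; this naive matcher ignores all of that."""
    for record in dept_records:
        if record["name"] == identity.name and record["dob"] == identity.date_of_birth:
            return record
    return None

# Step one happens with the provider; step two is repeated per department
# because there is no ring of trust between them.
asserted = AssertedIdentity("Experian", "Jo Bloggs", "1970-01-01", "1 High Street")
hmrc_records = [{"name": "Jo Bloggs", "dob": "1970-01-01", "ni": "QQ123456C"}]
print(match_to_department_record(asserted, hmrc_records))
# ...and DWP, DVLA and the rest would each run their own match against their own data.
```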

As Dawn Primarolo, then a minister overseeing the Inland Revenue, said to me, very astutely I thought, when I first presented the Gateway to her in 2001 - "But people will realise that we don't actually know very much about them.  We don't have their current address and we may have their National Insurance number stored incorrectly".  She was right of course.

Managing Live Service

The new approach does, though, increase the interactions and the necessary orchestration - the providers, the hub and the departments all need to come together.  That should work fine for initial volumes but as the stress on the system increases, it will get interesting.  Many are the sleepless nights our team had as we worked with the then Inland Revenue ahead of the peak period in January.

End to end service management with multiple providers and consumers, inside and outside of government, is very challenging.  Departments disaggregating their services as contracts expire are about to find that out; so will GDS.  There are many lessons to learn and, sadly, most of them are learned in the frantic action that follows a problem.

The Transaction Engine - The Forgotten Gateway

The Gateway doesn’t, though, just do the authentication of transactions. That is, you certainly use it when you sign in to fill in your tax return or your VAT return, but you also use it (probably unwittingly) when that return is sent to government.  All the more so if you are a company who uses 3rd party software to file your returns - as pretty much every company probably does now.  That bit of the Gateway is called the “Transaction Engine” and it handles millions of data submissions a year, probably tens of millions.

To replace the Gateway, the existing Authentication Engine (which we called R&E) within it must be decoupled from the Transaction Engine so that there can be authentication of submitted data via the new Identity Providers too, and then the Transaction Engine needs to be replaced.  That, too, is a complicated process - dozens of 3rd party applications know how to talk to the Gateway and will need to know how to talk to whatever replaces it (which, of course, may look nothing like the Transaction Engine and might, indeed, be individual services for each department or who knows what - though I have some thoughts on that).

Delegation of Rights

Beyond that, the very tricky problem of delegation needs to be tackled.  The Gateway supports it in a relatively rudimentary way - a small business can nominate its accountant to handle PAYE and VAT, for instance.  A larger business can establish a hierarchy where Joe does PAYE and Helen does VAT and Joe and Helen can do Corporation Tax.   But to handle something like Lasting Power of Attorney, there need to be more complex links between, say, me, my Mother and two lawyers.  Without this delegation capability - which is needed for so many transactions - the Digital by Default agenda could easily stall, handling only the simplest capabilities.
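
To make the shape of the problem concrete, here is a rough sketch of the kind of delegation relationships described above - purely illustrative, not how the Gateway (or any replacement) actually models them:

```python
# Rough sketch of delegation relationships - illustrative only.

delegations = [
    # a small business nominates its accountant for PAYE and VAT
    {"principal": "Smith & Co Ltd", "delegate": "Jones Accountancy", "services": {"PAYE", "VAT"}},
    # a larger business builds a hierarchy of named staff
    {"principal": "BigCorp plc", "delegate": "Joe",   "services": {"PAYE", "Corporation Tax"}},
    {"principal": "BigCorp plc", "delegate": "Helen", "services": {"VAT", "Corporation Tax"}},
    # Lasting Power of Attorney needs multi-party links with conditions attached
    {"principal": "My Mother", "delegate": "Me", "services": {"LPA"},
     "requires_consent_of": ["Lawyer A", "Lawyer B"]},
]

def can_act(delegate: str, principal: str, service: str) -> bool:
    """Can the delegate transact the given service on behalf of the principal?"""
    return any(
        d["delegate"] == delegate and d["principal"] == principal and service in d["services"]
        for d in delegations
    )

print(can_act("Joe", "BigCorp plc", "VAT"))      # False - only Helen holds VAT
print(can_act("Helen", "BigCorp plc", "VAT"))    # True
```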

Fraud Detection and Prevention

Tied in with the two step authentication process I mention above is the need to deal with the inevitable fraud risk. Whilst Tax Credits was, as I said, briefly the most popular online service, it was withdrawn when substantial fraud was detected (actually, the Tax Credits service went online without any requirement for authentication - something that we fervently disagreed with but that was only supposed to be a temporary step.  Perhaps in another post I will take on the topic of Joint and Several Liability, though I am hugely reluctant to go back there).  

In the USA, there is massive and persistent Tax Return fraud - Business Week recently put the figure at $4 billion in 2011 and forecast that it would rise to $21 billion by 2017.  That looks to be the result of simple identity fraud, just as Tax Credits experienced.  Most tax returns in the USA are filed online, many using packages such as TurboTax.   Tax rebates are far more prevalent in the USA than they are in the UK, but once the identification process includes benefits, change of address and so on, it will become a natural target.  Paul Clarke raised this issue, and some others, in an excellent recent post.

The two step process will need to guard against any repeat of the US experience in the UK - and posting liabilities to the authentication providers would doubtless quickly lead to them disengaging from the business (and may not even be possible given the government carries out the second step which ties the person presented to a government identity record, or to a set of them).  

We included a postal loop from day one with the Gateway, aimed at providing some additional security (which could, of course, be compromised if someone intercepted the post); removing that in the new process, as a recent GDS blog post suggests will happen (Digital by Default after all), requires some additional thinking.

User Led

Given that "User Led" is the GDS mantra, I have little fear that users won't be at the heart of what they do next, but it is a tricky problem this time.  For the first time, users will be confronted with non-government providers of identity (our Gateway integration with 3rd parties still resulted in a second step directly with government).  How will they know who to choose?  What happens if they don't like who they chose and want to move to someone else? How will they know that the service that they are using is legitimate - there will be many opportunities for phishing attacks and spoof websites? How will they know that the service they are using is secure - it is one thing to give government your data, another, perhaps, to give that data to a credit agency?   Will these services be able to accumulate data about your interactions with Government?  How will third party services be audited to ensure that they are keeping data secure?

Moving On From Gateway

There are more than 10 million accounts, I believe, on the Gateway today.  Transitioning to new providers will require a careful, user benefit led, approach so that everyone understands why the new service is better (for everyone) than the old one.   After all, for 13 years, people have been happily filing their tax returns and companies have been sending in PAYE and VAT without being aware of any problems.  It would help, I'm sure, if the existing customers didn't even realise things had changed - until they came to add new services that are only available with the coming solutions and were required to provide more information before they could access them; I think most would see that as a fair exchange.

Here's To The Future then

Our dream, way back on Burns Night in 2001, was that we would be able to break up the Gateway into pieces and create a federated identity architecture where there would be lots of players, all bringing different business models and capabilities.  We wanted to be free of some of the restrictions that we had to work with - complex usernames and even more complicated passwords - to work with an online model, to bring in third party identification services, to join up services so that a single interaction with a user would result in multiple interactions with government departments and, as our team strap line said back then, we wanted to “deliver the technology to transform government”.

Thirteen years on there have been some hits and some misses with that dream - inevitably we set our sights as high as we could and fell short.  I fully expect the Gateway to be around for another four or five years as it will take time for anyone to trust the new capabilities, for 3rd parties to migrate their software and for key areas like delegation to be developed.  It’s a shame that we have gone through a period of some 8 years when little has been done to improve how citizens identify themselves to government; there was so much that could have been done.

I’m looking forward to seeing what new capabilities are unveiled sometime in the next few months - perhaps I will be invited to be a user in the “private beta” so that I can see it a bit quicker.  Perhaps, though, I shouldn’t hold my breath.

Monday, January 20, 2014

Am I Being Official? Or Just Too Sensitive? Changes in Protective Marking.

From April 2nd - no fools these folks - government’s approach to security classifications will change.  For what seems like decades, the cognoscenti have bandied around acronyms like IL2 and IL3, with real insiders going as far as to talk about IL2-2-4 and IL3-3-4. There are at least seven levels of classification (IL0 through IL6 and some might argue that there are even eight levels, with “nuclear” trumping all else; there could be more if you accept that each of the three numbers in something like IL2-2-4 could, in theory, be changed separately). No more.  We venture into the next financial year with a streamlined, simplified structure of only three classifications. THREE!  

Or do we?

The aim was to make things easier - strip away the bureaucracy and process that had grown up around protective marking, stop people over-classifying data and so making it harder to share (both inside and outside of government) and introduce a set of controls that, as well as the technical security controls, actually ask something of the user - that is, ask them to take care of the data entrusted to them.

In the new approach, some 96% of data falls into a new category, called “OFFICIAL” - I’m not shouting, they are. A further 2% would be labelled as “SECRET” and the remainder “TOP SECRET”.  Those familiar with the old approach will quickly see that OFFICIAL seems to encompass everything from IL0 to IL4 - from open Internet to Confidential (I’m not going to keep shouting, promise), though CESG and the Government Security Secretariat have naturally resisted mapping old to new.

That really is a quite stunning change.  Or it could be.

Such a radical change isn’t easy to pull off - the fact that there has been at least two years of work behind the scenes to get it this far suggests that.  Inevitably, there have been some fudges along the way.  Official isn’t really a single broad classification.  It also includes “Official Sensitive” which is data that only those who “need to know” should be able to access.   There are no additional technical controls placed on that data - that is, you don’t have to put it behind yet another firewall - there are only procedural controls (which might range - I'm guessing - from checking distribution lists to filters on outgoing email perhaps).

There is, though, another classification in Official which doesn’t yet, to my knowledge, have a name.   Some data that used to be Confidential will probably fall into this section.  So perhaps we can call it Official Confidential? Ok, just kidding.

So what was going to be a streamlining to three simple tiers, where almost everyone you’ve ever met in government would spend most of their working lives creating and reading only Official data, is now looking like five tiers.  Still an improvement, but not quite as sweeping as hoped for.

The more interesting challenges are probably yet to come - and will be seen in the wild only after April.  They include:

- Can Central Government now buy an off-the-shelf device (phone, laptop, tablet etc) and turn on all of the “security widgets” that are in the baseline operating system and meet the requirements of Official?

- Can Central Government adopt a cloud service more easily? The Cloud Security Principles would suggest not.

- If you need to be cleared to “SC” to access a departmental e-mail system which operated at Restricted (IL3) in the past and if “SC” allows you occasional access to Secret information, what is the new clearance level?

- If emails that were marked Restricted could never be forwarded outside of the government’s own network (the GSI), what odds would you place on very large amounts of data being classified as “Official Sensitive” and a procedural restriction being applied that prevents that data traversing the Internet?

- If, as anecdotal evidence suggests, an IL3 solution costs roughly 25% more than an IL2 solution, will IT costs automatically fall or will inertia mean costs stay the same as solutions continue to be specified exactly as before?

- Will the use of networks within government quickly fall to lowest common denominator - the Internet with some add-ons - on the basis that there needs to be some security but not as much as had been required before?

- If the entry to an accreditation process was a comprehensive and well thought through “RMADS” (Risk Management and Accreditation Document Set), largely the domain of experts who handed their secrets down through mysterious writings and hidden symbols, what will the equivalent entry point be under the new, simpler scheme?

It seems most likely that the changes to protective marking will result in little change over the next year, or even two years.  Changes to existing contracts will take too long to process for too little return. New contracts will be framed in the new terms but the biggest contracts, with the potential for the largest effects, are still some way from expiry.  And the Cloud Security Principles will need much rework to encourage departments to take advantage of what is already routine for corporations. 

If the market is going to rise to the challenge of meeting demand - if we are to see commodity products made available at low cost that still meet government requirements - then the requirements need to be spelled out.  The new markings launch in just over two months.  What is the market supposed to provide come 2nd April?

None of this is aimed at taking away what has been achieved with the thinking and the policy work to date - it’s aimed at calling out just how hard it is going to be to change an approach that is as much part of daily life in HM Government as waking up, getting dressed and coming to work. 

Friday, January 17, 2014

Adequately Appropriate? Acceptably Appropriate? Thoughts on Cloud Security Principles

It was with some trepidation that, over the Christmas break, I clicked on links to the newly published Government Cloud Security Principles.  Trepidation because my contact with such principles goes back a long way and, in government, principles tend to hide more than they reveal. 

Some three years ago whilst looking at G-Cloud in its early days, I proposed that, as part of the procurement process, we publish a detailed set of guidelines that explained not only what was meant by IL0, IL2 and IL3 (I skipped IL1 on the basis that in over a decade, I have never heard anyone use it) but also what would be required if, as a vendor, you were trying to achieve any of those accreditation levels.  My thinking was if government was truly going to encourage new players to get involved, few would commit to building out infrastructure if there wasn’t specific guidance on what they would need to do.

I produced a short document - some 4 pages - which I thought would act as a starter.  I’ve published it on Scribd so that you can see how far I got (which wasn’t all that far, I admit - I'd say it's a beta at best).   Some weeks later, after chasing to see if it could be developed further, in partnership with some new suppliers so that we could test what they needed to know, I was told that such a document would not be viable as, and I quote, “it would encourage a tick box attitude to security compliance”.  Something in me tells me that would be no bad thing - definitely better than a no box attitude, no?

So here we are, in early 2014, and someone else - perhaps some brave person in Cabinet Office - has had another go.  Is this just a tick box exercise too?  Or would I find seriously useful principles that would help both client and supplier - the users - achieve what they both need?

Sadly, the answer is that these principles do not help.  Perhaps in a desire to ensure that there was definitely no encouragement of a tick box approach, they say as little as possible using words that are unqualified and without any context or examples that would help.  It strikes me as unlikely that any security experts in departments will find a need to refer to them and that any supplier seeking some clues as to the fastest route to an accredited service will linger on them no more than a moment.

For instance:

- The word “adequate” or “adequately” is used four times.  As in "The confidentiality and integrity of data should be adequately protected whilst in transit”.  Can’t disagree with that. Though, of course, I don't know what it means in delivery terms.

- “Appropriate” crops up three times.  As in "All external or less trusted interfaces of the service should be identified and have appropriate protections to defend against attacks through them”.  Excellent advice, everything should always be appropriately protected.  No more and no less.  But how exactly?

- Or how about this: "The service should be developed in a secure fashion and should evolve to mitigate new threats as they emerge”.  No one would want you to develop an insecure service but what exactly is meant by this?

- Or this one: “The service provider should ensure that its supply chain satisfactorily supports all of the security principles that the service claims to deliver”; so now the service provider needs to decide what is meant by the principles and ensure that anyone it uses also complies with their very vagueness.
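To make the point concrete, here's a minimal sketch - entirely my own, not anything drawn from the principles document - of what "adequately protected whilst in transit" might translate to in delivery terms: a named protocol version and verified certificates rather than an adjective.  The URL is, of course, made up.

```python
# A minimal sketch (my illustration, not official guidance) of "adequately
# protected whilst in transit": require TLS 1.2 or better and verify the
# server's certificate before any data is sent or received.
import ssl
import urllib.request

def fetch_over_tls(url: str) -> bytes:
    """Fetch a URL, refusing anything weaker than TLS 1.2."""
    context = ssl.create_default_context()            # verifies certificates by default
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1
    with urllib.request.urlopen(url, context=context) as response:
        return response.read()

# e.g. fetch_over_tls("https://api.example-supplier.example/status")  # hypothetical URL
```

Even that level of specificity - a protocol version, a statement about certificates - would give a new supplier something to build to and an accreditor something to test against.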

Of course, there’s a rider at the front of the document, which says:

This document describes principles which should be considered when evaluating the security features of cloud services. Some cloud services will provide all of the security principles, while others only a subset. It is for the consumer of the service to decide which of the security principles are important to them in the context of how they expect to use the service. 

So not only do we have to decide what is adequate and appropriate, we have to decide which of the principles we need to adopt adequately and appropriately so that we have adequate and appropriate security for our service, lest it be seen as inadequate and/or inappropriate perhaps.  How appropriate.

This is hardly academic.  If you want commodity services, then you need to provide commodity standards and guidelines.  Leaving vast areas open to interpretation only compounds the challenges for new suppliers (and entrenches the position of those who already supply) and means that customers are unable to evaluate like for like without detailed (and likely continuing) reviews.

To give an example, I recently sat with great people from three different government departments to look at the use of mobile devices.  One was using WiFi freely throughout their building (connected to ADSL lines) to allow staff with department-issued iPads and Windows tablets to access the Internet.  Another had decided that WiFi was inherently untrustworthy and so insisted that staff use the 3G or 4G network, even issuing staff with Windows tablets a dongle that they needed to carry around (and pair via Bluetooth - which is, I assume, for them more secure than WiFi) to access the Internet.

If three departments can’t agree on how to configure an iPad so that they can read their email (this wasn't about using applications beyond Office apps), what hope is there for a supplier offering such a service?  Where is the commodity aspect that is necessary to allow costs to be driven down? And how would a new supplier, with a product ready to launch, know how it would be judged by the security experts so that it could be sold to the public sector?

Principles such as these encourage - perhaps even direct - departments to come to their own conclusions about what they need and how they want it configured, just as they have done for the last three decades and more. 

With today’s protective markings - IL0, IL2 and IL3 etc - that is one thing.  With tomorrow’s “OFFICIAL”, there is a real need for absolute clarity on what a supplier needs to do and that can only come from the customer being clear about what they will and won’t accept - it cannot be that one department’s OFFICIAL is another department’s UNACCEPTABLE. 

Fingers crossed that this pre-Alpha document is allowed to iterate and evolve into something that is useful.

Sunday, September 08, 2013

The Trouble With ... The Green Book

The Green Book is, for some, akin to a Bible.  It's 118 pages of guidance on how to work through the costs of a project (not to mention the 130 further pages of guidance available on how to move from SBC to OBC to FBC - you get the idea).

Across government, departmental approval bodies revere the Green Book with its detail on NPV, risk assessment, Monte Carlo models and benefits realisation.  The Green Book is for all projects covering whatever kind of policy outcomes are relevant - providing benefits to the right people, improving water quality, connecting communities and, of course, IT.

Despite all of the comprehensive guidance contained within it, the outcome of many projects suggests that risks aren't properly evaluated, that costs are not fully calculated and that the outcomes expected don't always occur.  Recent experience with the ever-inflating HS2 numbers demonstrates that only too clearly.  That said, trying to forecast costs many years out has never been easy (and the error bars increase with every year) - and in case you think government doesn't think long term, the Green Book contains a table that shows how you would discount the cost of cash out to 500 years.



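As an aside, here's a rough sketch of how that long-range discounting works - my own illustration rather than anything lifted from the Green Book, and the rate bands are the declining long-term schedule as I recall it, so check the published table before relying on them.

```python
# A rough sketch of long-term discounting: compound a declining annual rate
# out to the year in question to turn a future cost into today's money.
# The bands below are the Green Book's declining schedule as I recall it -
# treat them as illustrative and check the published table.
RATE_BANDS = [   # (last year of band, annual discount rate)
    (30, 0.035),
    (75, 0.030),
    (125, 0.025),
    (200, 0.020),
    (300, 0.015),
    (500, 0.010),
]

def discount_factor(year: int) -> float:
    """Compound the applicable band rate for every year out to `year`."""
    factor = 1.0
    for y in range(1, year + 1):
        rate = next(r for last, r in RATE_BANDS if y <= last)
        factor /= (1.0 + rate)
    return factor

# A £100m cost incurred 50 years from now, expressed in today's money:
print(round(100_000_000 * discount_factor(50)))
```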
Some paragraphs in the Green Book show how far we have to change to make the move from traditional project delivery to an approach that is faster, lighter and more agile:
5.61 There is a demonstrated, systematic, tendency for project appraisers to be overly optimistic. This is a worldwide phenomenon that affects both the private and public sectors. Many project parameters are affected by optimism – appraisers tend to overstate benefits, and understate timings and costs, both capital and operational.
Optimistic? Really?

Or how about this:
6.23 Implementation plans should be sufficiently complete to enable decisions to be taken on whether or not to proceed. So that evaluations can be completed satisfactorily later on, it is important that during implementation, performance is tracked and measured, and data captured for later analysis
Perhaps the get-out here is "sufficiently complete" - one man's sufficient is another person's hopelessly inadequate.  But entire business cases are routinely laid before approval bodies right across government that claim to have looked ahead 10 years and figured out what will happen, year by year, at a sufficiently detailed level to forecast costs and benefits - albeit with inevitable optimism.  Only recently - perhaps with the Olympics and, now, HS2 - has contingency been a visible and public part of budgets.  It will be interesting to see how the spending of it is reported and tracked.
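To show why a single ten-year point estimate plus "inevitable optimism" is such a fragile thing - and why visible contingency matters - here's a toy Monte Carlo sketch of the sort the Green Book itself gestures at.  Every number in it is illustrative.

```python
# A toy Monte Carlo sketch (illustrative numbers only): simulate each year's
# cost with some volatility and an optimism-bias uplift, then compare the
# spread of outcomes to the single central estimate.
import random

def simulate_total_cost(base_annual_cost=10.0, years=10,
                        volatility=0.15, optimism_uplift=0.2,
                        runs=10_000):
    """Return the mean and 90th percentile of simulated total cost (£m)."""
    totals = []
    for _ in range(runs):
        total = 0.0
        for _ in range(years):
            # The estimate was optimistic (apply an uplift) and each year's
            # outturn varies around it (apply some noise).
            total += base_annual_cost * (1 + optimism_uplift) * random.gauss(1.0, volatility)
        totals.append(total)
    totals.sort()
    return sum(totals) / runs, totals[int(0.9 * runs)]

mean, p90 = simulate_total_cost()
print(f"central estimate: £100m; simulated mean: £{mean:.0f}m; P90: £{p90:.0f}m")
```

The gap between the central estimate and the 90th percentile is, in effect, the contingency - which is exactly the number that has so rarely been visible.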




And then this:
6.33 Benefits realisation management is the identification of potential benefits, their planning, modelling and tracking, the assignment of responsibilities and authorities and their actual realisation. In many cases, benefits realisation management should be carried out as a duty separate from day to day project management.
Generally the people delivering the programme are not the ones who have to make it work on the ground and so achieve the cost savings that the approval body has been promised.  As it says above, "realising the benefits" is a separate duty from day to day project management.  That is, then, part of the problem - delivery being isolated from the business means decisions can be taken for the good of the programme that are not for the good of the business.

None of the above is intended to be critical of the Green Book - it was very much a product of its time and, where construction of vast engineering feats such as dams and, indeed, railways that run from South to North (and back again) is being planned, it perhaps still makes enormous sense.

With a desire that IT projects be agile, flexible, iterative and responsive to ever-evolving user need, the guidance looks increasingly anachronistic.  If you're not sure what functionality you're going to deliver, or when, because you might replace A with B or drop C altogether, how do you calculate either costs or benefits with any reasonable confidence more than a few months out?  The best you might be able to do is calculate the likely cost of the team - but what if it needs to grow suddenly to deliver an identified need?
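For what it's worth, here's the sort of back-of-envelope team costing I mean - headcount, day rates and sprint length are all made up, and the point is that this is roughly the limit of what can be forecast with confidence.

```python
# A back-of-envelope sketch of agile team burn rate (all figures illustrative).
TEAM = {                      # role: (headcount, day rate in £)
    "developer":        (4, 650),
    "designer":         (1, 600),
    "product owner":    (1, 700),
    "delivery manager": (1, 650),
}

SPRINT_DAYS = 10  # a two-week sprint

def sprint_cost(team=TEAM, days=SPRINT_DAYS) -> int:
    """Total cost of one sprint for the whole team."""
    return sum(count * rate * days for count, rate in team.values())

print(f"per sprint:  £{sprint_cost():,}")
print(f"per quarter: £{sprint_cost() * 6:,}")  # roughly six sprints a quarter
```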

The Green Book is doubtless being refreshed even now by some combination of GDS, Cabinet Office and HMT but - and it's a big but - convincing finance directors across government that 200-page business cases, with all of the options mapped out and separately costed, are a thing of the past will be challenging.  And interesting to watch.
"Minister, there are three options … the first two pretty much lead to nuclear war, the third will be explosive and there will be terrible consequences, but I think we will survive … I recommend the third option … do you concur?"




Wednesday, July 31, 2013

The Trouble With ... Nokia

A little over two years ago I wrote a prescription for Nokia. I said they should do many things, some of which were very wide of the mark versus what's actually happened and some of which were reasonably close.

One thing I thought was really important was that Nokia:
Develop a brilliant and obvious naming convention. Years ago when I started using Nokia I understood the convention ... There was the 6210 which was replaced by the 6310; not long after came the 7210 which I understood to be better still. It all went wrong with the 8xxx series - the 8250 was small and sexy, the 8800 was shaped like a banana. Now, I couldn't tell you how it works with C, E and N series.
On hearing that Nokia had chosen "Lumia" to lead their branding, I was impressed.  Lovely word.  But how confusing does the range look now:


Not yet two years old, the Lumia range already numbers 12 models (albeit with two coming soon and, apparently, only one marked as "best seller").  Looking at the pictures above, taken from the Nokia website, I struggle to figure out what's what (not aided, I think, by the overwhelming similarity of the screen images on each).

The 625 ... "lets you see more of everything" ...

... the 925 has eye catching design and Smart Camera ...

... the 920 has a Carl Zeiss lens and PureView technology (but perhaps not a Smart Camera?) ...
... the 520 has a 1GHz processor (I've always wanted one of those, it's right after "ability to make calls" on my list of user needs) ...

... the 820 has colourful and wireless charging shells (is that what makes it a best seller?) ...

... and the 620 packs a punch (which presumably means it has none of the above, or perhaps all of it?)

Confused?  Yes, thought so.

Is a 6xx better than a 5xx?  Is a 925 better than a 920 but not as good as a 1020?

I'm told that there are other variants, such as the 928 which is exclusive to a US carrier and even a "Model T" which is exclusive to a Chinese carrier.

It begins to look like Nokia need to publish a simple key, as Which? does for Samsung TVs:
'D' is an LCD TV
'E' is a plasma TV
'EH' and 'ES' are LED TV models
The final four numbers signify the series - a '4000' model is from Series 4, a '5000' model from Series 5 and so on. The higher the number, the more premium the model is - ie it has more gadgets and better features.
Series 4: Entry-level range and all HD Ready (720p)
Series 5: Just above entry level. All are Full HD (1080p) and offer additional features - some are 3D and smart TVs
Series 6: Mid-range. All are Full HD (1080p) and offer additional features. All come with 3D and smart TV capability, plus Freeview HD and Freesat HD tuners
Series 7: Premium models with a dual-core processor. In addition to the key features of the series below, there’s also a built-in camera and voice and motion control
Series 8: Flagship premium model. In addition to the features of the series below, this range features Samsung’s premium image processing and a touch-sensitive remote
Nokia also need, in my view, to introduce new brands to make it easier to choose between phones - they can stick to derivatives of Lumia if that's where their heart is (how about Lumila for cheaper phones? Lumaxa for the high end, flagship phone?) though I don't like those much.  Maybe Photia for the phones that focus (ha) on the camera. Or maybe they go for Lumia P for the multi-megapixel phone? 

Or perhaps Lumia Z for the Zombie phone ... because I think they might get there soon if they don't do something to make it easier for the customer to choose.

Monday, June 17, 2013

Cloud First (Second and Third)

Watching, and playing a very small part in, G-Cloud - the UK government framework for purchasing cloud products and services - over the last 2 1/2 years has been a fascinating experience.  It's grown from something that no one understood and, once they did, something no one thought would work, into the first, and probably only, framework in government with greater representation from small companies than large - and one that refreshes faster than any other procurement vehicle.

What G-Cloud doesn't yet have is significant amounts of money flowing through it - the total at the end of May was some £22m.  With its transition from "programme" to business as usual, under the aegis of GDS, it should now get access to the resource it has been starved of since birth - the absence of which would, had it not been for the tireless passion and commitment of its small team, have resulted in it being killed off long before now.  GDS should also bring it the political cover it needs to find a role as part of the agenda for real change, but there are challenges to overcome.

In 1999, Jack Welch told every division in GE, the company he was then CEO of, that e-business would be every division's "priority one, two, three and four."

In 2013, UK government went a little further and mandated that, for central government, cloud would be first in every IT purchasing decision.  Local government and the wider public sector would be strongly encouraged to follow suit.

It's a laudable, if unclear, goal.

The previous incarnation of this goal held that "50% of new spend" would be in the public cloud - perhaps a little sharper than a "cloud first" goal: if, as I've written before, we could be clear about what new spend was and track it, then hitting 50% would be a binary, testable achievement.  Testing whether we are "cloud first" will be as nebulous as knowing whether the UK is a good place to live.

Moving G-Cloud from a rounding error (generously, let's say 0.05% of total IT spend; it's probably a tenth of that) to something more fundamental - something that reflects the energy a small team has put in over the last 3 years or so - requires many challenges to be overcome.  Two of them are:

1) Real Disaggregation

Public sector buyers historically procure their IT in large chunks.  It's simpler that way - one big supplier, one throat to choke, one-stop shop and so on.  Even new applications are often bought with development, hosting and maintenance from one supplier - leading to a vast spread of IT assets across different suppliers (not many suppliers, just different ones).  Some departments - HMRC and DWP perhaps - buy their new applications (tax credits, universal credit) from their existing suppliers to stop that proliferation.

Even in today's in-vogue tower model, with the SIAM at the top (albeit not as prime), there is little disaggregation.  The MoJ, shortly to announce the winner of its SIAM tender, will move all of its hosted assets from several suppliers to one (perhaps - there is little to no business benefit in moving hardware between data centres, so common sense may prevail before that happens).  MoJ had, indeed, planned to move all of its desktop assets from several suppliers to one but recently withdrew that procurement (at the BAFO stage) and returned to the drawing board - the new plan is not yet clear.  In consolidating, it will hopefully save money, though some of that will likely be added back once the friction of multiple suppliers interacting across the towers is included.  The job of the SIAM will be to manage that friction and deliver real change, whilst working across the silos of delivery - desktop, hosting, apps, security, network and so on.

But disaggregating across the functional lines of IT brings nothing new for the business.  Costs may go down - suppliers, under competitive pressure for the first time in years, will polish their rocks repeatedly, trying to make them look shinier than those of the others in the race.  Yet the year, or even two years, after the procurement could easily be a period of stasis as staff are transferred from supplier to supplier (or customer to supplier, and even supplier to customer) and new plans are drawn up.  During that time, the unknown and the unexpected will emerge, and changes will be drawn up that bring the cost back to where it was.

In a zero-based corporate cloud model, you would also have your IT assets spread across multiple providers - and you wouldn't care.  Your email would be with Google, your collaboration with Huddle or Podio, your desktops might be owned by the staff, your network would be the Internet, your finance and HR app would be Workday, your website would be WordPress, your reporting would be with Tableau and so on.

In contrast, the public sector cloud model isn't yet clear.  Does the typical CIO, CTO, Chief Digital Officer want relationships with twenty or thirty suppliers?  Does she want to integrate all of those or have someone else do it?  Does she want to reconcile the bills from all of them or have someone else do it?

But if "cloud first" is to become a reality - and if G-Cloud spending is going to be 50% of new IT spend (assuming that the test of "cloud first" is whether it forces spend in a new direction) - then that requires services to be bought in units, as services.  That is, disaggregation at a much lower level than the simple tower.

Such disaggregation requires client organisations that look very different from those in place today, where the onus is on man-to-man marking and "assurance" rather than on delivery.  Too many departments are IT-led in their systems thinking; GDS' relentless focus on the user is a much-needed shift in that traditional approach, albeit one that will be constantly challenged in the legacy world.

As Lord Deighton said in an interview earlier this month, the "public sector is slightly long on policy skills [and] ... slightly short on delivery skills."  I agree, except I think the word "slightly" is redundant.

2) Real Re-Integration

As services disaggregate and are sourced from multiple providers - probably spread around the UK, and perhaps the world - the need to bring them all together looms large.  We do this at home without thinking: we move our data between Twitter, Facebook, e-mail and Instagram all the time.  But public sector instances of such self-integration are rare - connecting applications costs serious money: single sign-on, XML standards, secure connections, constant checking against service levels and so on.
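As a rough illustration of why connecting applications costs serious money, here's a minimal sketch of the glue that just one pairwise integration needs - a token for single sign-on, a mapping between two schemas, and the error handling in between.  The endpoints and field names are entirely made up, and every new pair of services repeats the exercise.

```python
# A minimal sketch of one pairwise integration (hypothetical endpoints and
# field names): authenticate, read from one service, map its schema onto
# the other's, write, and handle failure at every step.
import requests

CASE_API = "https://cases.example.dept/api/v1"        # hypothetical
FINANCE_API = "https://finance.example.dept/api/v1"   # hypothetical

def copy_invoice(case_id: str, sso_token: str) -> None:
    headers = {"Authorization": f"Bearer {sso_token}"}  # single sign-on token

    # Read from one service...
    resp = requests.get(f"{CASE_API}/cases/{case_id}", headers=headers, timeout=10)
    resp.raise_for_status()
    case = resp.json()

    # ...map its schema onto the other's (and hope both sides agree on meaning)...
    invoice = {
        "reference": case["caseRef"],
        "amount_pence": int(case["costGBP"] * 100),
    }

    # ...and write it to the second service.
    resp = requests.post(f"{FINANCE_API}/invoices", json=invoice,
                         headers=headers, timeout=10)
    resp.raise_for_status()
```

Multiply that by twenty or thirty services, add the service-level checking and the version churn, and the integration bill writes itself.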

Indeed, a typical applications set for even a small department might look something like this:



Integrating applications like this is challenging, expensive and fraught with risk every time the need to make a change comes up.  Some of these applications are left behind on versions of operating systems or application packages that are no longer supported or that cannot be changed (the skills having long since left the building and, in some cases, this Earth).

New thinking, new designs, new capabilities and significant doses of courage will be required both to bring about the change needed to disaggregate at a service level and to ensure that each step is taken with the knowledge of how a persistent whole will be presented to the user.

The change in security classifications (from BILs to tiers) will be instrumental in this new approach - but it, too, will require courage to deliver.  Fear of change, and of high costs from incumbents, will drive many departments to wait until the next procurement cycle before starting down the path.  They, too, will then enter their period of stasis, delaying business benefits until early in the next decade.

To be continued ...