Monday, August 11, 2014

Hosting Crowns

Late in 2013 there was a flurry of interest in a project called the "Crown Hosting Service" - covered, for instance, by Bryan Glick at Computer Weekly. The aim, according to the article, was to save some £500m within a few years by reducing the cost of looking after servers.  The ITT for this "explicit legacy procurement" (as Liam Maxwell accurately labelled it) was issued in July 2014.

Apparently some £1.6bn is spent by government on hosting government's IT estate.  That figure is about half of the roughly £3bn it costs to run government's central civil estate (buildings); and that £3bn is, in turn, only about 15% of the cost of running the total estate.  The total cost of running the estate is, then, something like £20bn (with an overall estate value of c£370bn).
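As a back-of-envelope check on that arithmetic (the figures are the rough approximations above, not precise numbers), the sums look like this:

```python
# Rough estimate of estate running costs, using the approximate figures above.
hosting_spend = 1.6e9                          # ~£1.6bn spent hosting the IT estate
civil_estate_cost = hosting_spend * 2          # hosting is roughly half the central civil estate cost
total_estate_cost = civil_estate_cost / 0.15   # the civil estate is ~15% of the total running cost

print(f"Central civil estate: ~£{civil_estate_cost / 1e9:.1f}bn")      # ~£3.2bn
print(f"Total estate running cost: ~£{total_estate_cost / 1e9:.0f}bn")  # ~£21bn, i.e. "something like £20bn"
```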

It's interesting, then, to see increasing instances of departments sharing buildings - the picture below shows two agencies that you might not associate together.  The Intellectual Property Office and the Insolvency Service share a building - though I'm hoping it's not because they share a customer base and offer a one stop shop.  The IPO and the IS are both part of BIS (which is just around the corner) so perhaps this is a like for like share.


But over the next couple of years, and maybe in the next couple of months, we are certainly going to see more sharing - DCLG will soon vacate its Victoria location and begin sharing with another central government department.  Definitely not like for like.

Such office sharing brings plenty of challenges. At the simpler end are things such as standard entry passes and clearance levels.  At a more complicated level is the IT infrastructure - at present usually unique to each department.  A DCLG desktop will not easily plug straight into the network of another department - even a wireless network would need to be told about new devices, and those devices would need to be told where to point.

With increasing commoditisation of services, and increasing sharing, it's easy to see - from a purely IT point of view - government buildings that function, for large numbers of HQ and, perhaps especially, field staff, as drop-in centres where desks are available for whoever is passing, provided that they have the right badge.  Those who want to work from home can continue to do so, but will also be able to go to a "local office" where they will have higher bandwidth, better facilities and the opportunity to interact with those in other departments and who run other services.

In this image, the vertical silos of government departments will be broken up simply because people no longer need to go to "their" department to do their day job, but they can go wherever makes most sense.  Maybe, just maybe, the one stop shop will become a reality because staff can go where the customers are, rather than where their offices are.

G-Cloud By The Numbers (To End June 2014)

With Dan's Tableau version of the G-Cloud spend data, interested folks need never download the csv file provided by Cabinet Office again.  Cabinet Office should subcontract all of their open data publication work to him.



The headlines for G-Cloud spend to the end of June 2014 are:

- No change in the split between lots.  80% of spend continues to be in Lot 4, Specialist Cloud Services

- 50% of the spend is with 10 customers, 80% is with 38 customers

- Spend in June was the lowest since February 2014.  I suspect that is still the after-effect of the year-end budget clearout boost (and perhaps of some effort to move spend out of Lot 4 and onto other frameworks)

- 24 suppliers have 50% of the spend, 72 have 80%.  Spend that is relatively concentrated among customers is being spread across a wider group of suppliers.  That can only be a good thing

- 5 suppliers have invoiced less than £1,000. 34 less than £10,000

- 10 customers have spent less than £1,000. 122 less than £10,000.  How that squares with the bullet immediately above, I'm not sure

- 524 customers (up from 489 last month) have now used the framework, commissioning 342 suppliers.  80% of the spend is from central government (unsurprising, perhaps, given the top 3 customers - HO, MoJ, CO - account for 31% of the spend)

- 36 customers have spent more than £1m.  56 suppliers have billed more than £1m (up from 51).  This time next year, Rodney, we'll be millionaires.

- Top spending customers stay the same but there's a change in the top 3 suppliers (BJSS, Methods stay the same and Equal Experts squeaks in above IBM to claim the 3rd spot)

One point I will venture, though it is not terribly well researched, is that once a customer starts spending money with G-Cloud, they are more likely to continue than not.  And once a supplier starts seeing revenue, they are more likely to continue to see it than not.  So effort on the first sale is likely to be rewarded with continued business.
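For anyone who does still want to poke at the raw data, here is a minimal sketch of the sort of analysis behind those headlines - the file name and column names (Lot, Total Charges, CustomerName, SupplierName) are assumptions on my part, and the published CSV changes format from time to time, so adjust to whatever the current download actually uses:

```python
import pandas as pd

# Assumed file and column names - adjust to match the CSV actually published by Cabinet Office.
df = pd.read_csv("gcloud-spend.csv")
df["spend"] = pd.to_numeric(df["Total Charges"], errors="coerce").fillna(0)

# Split of spend between the four lots (this is where Lot 4's ~80% share shows up)
lot_split = 100 * df.groupby("Lot")["spend"].sum() / df["spend"].sum()
print(lot_split.round(1))

def concentration(group_col, thresholds=(0.5, 0.8)):
    """How many customers (or suppliers) account for 50% and 80% of all spend."""
    totals = df.groupby(group_col)["spend"].sum().sort_values(ascending=False)
    share = totals.cumsum() / totals.sum()
    return {t: int((share < t).sum()) + 1 for t in thresholds}

print("Customers:", concentration("CustomerName"))   # e.g. {0.5: 10, 0.8: 38}
print("Suppliers:", concentration("SupplierName"))   # e.g. {0.5: 24, 0.8: 72}
```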

Friday, June 27, 2014

Close Encounters Of The Fourth Kind

Much to my surprise, O2 sent me a text on Wednesday.  The text wasn't the surprising bit - they often send me texts offering me something that I don't want.  The surprise was that this time they didn't offer, they told me that I was just going to get it.  It was 4G.  And I wanted it.


4G. For nothing.  No options. No discussion. No questions allowed.  No SIM change needed.  No conversation about impact on battery life.  Just: turn off your phone in the morning and turn it back on, and if you're in a 4G area, it's all yours.

The next morning I was in Camden Town - no sign of 4G there.  Clearly Camden is a bit too rural to have coverage just yet.

But later, in Whitehall, it worked just fine.  And fine means a consistent 15Mb/s download (versus the previous day's 3G download speed of 2Mb/s).

During 2012/13 I set up a JV owned by the four mobile operators, called at800, that had the task of managing any negative impact from interference with TV signals that might occur because the bottom of the 4G range aligns with the top of the TV range (and, until some recent work by Digital UK, overlapped).


at800 - you might have seen the ads or had a card, or maybe even a filter through your letterbox - has been a great success (last I checked, they'd been in touch with probably 45% of UK households).  That's partly because the interference that the TV technologists worried might affect up to two million households has turned out to be far less of a problem, but, for the most part, it's because we put together a great team, worked closely with the mobile operators and the broadcasters, ran pilots, tested everything we could and smoothed the way for the 4G roll out.  In truth, we were ready long before the operators were.  They were all fun/challenging/annoying/exciting to work with, but I liked O2's approach most.


After a couple of days testing 4G, I have this to say:

- Coverage in buildings where 3G coverage was previously poor to non-existent has much improved (I can even make calls from the dead centre of buildings where previously I stared only at "No Service")

- Download speeds are certainly faster (roughly equivalent to what I get from Satellite broadband, but without the round trip lag)

- Battery life seems unchanged (I wonder if battery usage is higher during download but because download is so much faster, there's less overall drain)

That said, the nearest mast to my home is still some 200 miles away.  Keep rolling it out O2.

I have no idea how widespread this offer is but, if you get the same text, say "yes". Not that you'll have any choice.  But so far, it's all upside.

Monday, June 23, 2014

The Trouble With Transition - DECC and BIS Go First

In a head-scratching story at the end of last week, DECC and BIS made the front page of the Financial Times (registered users/subscribers can access the story).  Given the front page status, you might imagine that the Smart Meter rollout had gone catastrophically wrong, or that we had mistakenly paid billions in grants to scientists who weren't getting the peer reviews that we wanted, or that we'd suddenly discovered a flaw in our model for climate change or perhaps that the Technology Strategy Board had made an investment that would forever banish viruses and malware.


The BBC followed the story too.

But, no.  Instead we have two departments having problems with their email.  Several Whitehall wags asked me weeks ago (because, yes, this story has been known about for a month or more) whether anyone would either notice, or care, that there was no email coming to or from these departments.   It is, perhaps, a good question.
Business Secretary Mr Cable and Energy and Climate Change Secretary Mr Davey were reported in the Financial Times to be angry about slow and intermittent emails and network problems at their departments since they started migrating to new systems in May.
The real question, though, is what actually is the story here?

- It appeared to be a barely-veiled attack on the current policy of giving more business to SMEs (insider says "in effect they are not necessarily the best fit for this sort of task" ... "an idealistic Tory policy to shake up Whitehall")

- Or was it about splitting up contracts and about taking more responsibility for IT delivery within departments (Mr Cable seemingly fears the combination of cost-cutting and small firms could backfire)?

-  Was the story leaked by Fujitsu who are perhaps sore at losing their £19m per annum, 15 year (yes, 15. 15!) contract?

- Was it really triggered by Ed Davey and Vince Cable complaining to the PM that their email was running slow ("Prime Minister, we need to stop everything - don't make a single decision on IT until we have resolved the problems with our email")?

- Is it even vaguely possible that it is some party political spat where the Liberal Democrats, languishing in the polls, have decided that a key area of differentiation is in how they would manage IT contracts in the future?  And that they would go back to big suppliers and single prime contracts?

- Was it the technology people in the department themselves who wish that they could go back to the glory days of managing IT with only one supplier when SLAs were always met and customers radiated delight at the services they were given?

#unacceptable as Chris Chant would have said.

Richard Holway added his view:
In our view, the pendulum has swung too far. The Cabinet Office refers to legacy ICT contracts as expensive, inflexible and outdated; but moving away from this style of contract does not necessarily mean moving away from the large SIs.
And it appears that it is beginning to dawn on some in UK Government that you can’t do big IT without the big SIs. A mixed economy approach – involving large and small suppliers - is what’s needed.
By pendulum, he means that equilibrium sat with less than a dozen suppliers taking more than 75% of the government's £16bn annual spend on IT.  And that this government, by pushing for SMEs to receive at least 25% of total spend, has somehow swung us all out of kilter, causing or potentially causing chaos.  Of course, 25% of spend is just that - a quarter - it doesn't mean (based on the procurements carried out so far by the MoJ, the Met Police, DCLG and other departments) that SIs are not welcome.

Transitions, especially in IT, are always challenging - see my last blog on the topic (and many before).  DECC and BIS are pretty much the first to make the change from the old model (one or two very large prime contracts) to the new model (several - maybe ten - suppliers, with the bulk of the integration responsibility resting with the customer, even when, as in this case, another supplier is nominally given integration responsibility).  Others will be following soon - including departments with 20-30x more users than DECC and BIS.

Upcoming procurements will be fiercely competed, by big and small suppliers alike.  What is different this time is that there won't be:

-  15 year deals that leave departments sitting with Windows XP, Office 2002, IE 6 and dozens of enterprise applications and hardware that is beyond support.

or

- 15 year deals that leave departments paying for laptops and desktops that are three generations behind, that can't access wireless networks, that can't be used from different government sites and that take 40 minutes to boot.

or

- 15 year deals that mean that only now, 7 years after iPhone and 4 years after iPad, are departments starting to take advantage of truly mobile devices and services

With shorter contracts, more competition, access to a wider range of services (through frameworks like G-Cloud), only good things can happen.   Costs will fall, the rate of change will increase and users in departments will increasingly see the kind of IT that they have at home (and maybe they'll even get to use some of the same kind of tools, devices and services).

To the specific problem at BIS and DECC then.  I know little about what the actual problem is or was, so this is just speculation:

- We know that, one day, the old email/network/whatever service was switched off and a new one, provided by several new suppliers, was turned on.  We don't know how many suppliers - my guess is a couple, at least one of which is an internal trading fund of government. But most likely not 5 or 10 suppliers.

- We also know that transitions are rarely carried out as big bang moves.  It's not a sensible way to do it - and goodness knows government has learned the perils of big bang enough times over the last 15 years (coincidentally the duration of the Fujitsu contract).

- But what triggered the transition?  Of course a new contract had been signed, but why transition at the time they did?  Had the old contract expired?  Was there a drive to reduce costs, something that could only be triggered by the transition?   

- Who carried the responsibility for testing?  What was tested?  Was it properly tested?  Who said "that's it, we've done enough testing, let's go"?  There is, usually, only one entity that can say that - and that's the government department.  All the more so in this time of increased accountability falling to the customer.

- When someone said "let's go", was there an understanding that things would be bumpy?  Was there a risk register entry, flashing at least amber and maybe red, that said "testing has been insufficient"?

In this golden age of transparency, it would be good if DECC and BIS declared - at least to their peer departments - what had gone wrong so that the lessons can be learned.  But my feeling is that the lessons will be all too clear:

- Accountability lies with the customer.  Make decisions knowing that the comeback will be to you.

- Transition will be bumpy.  Practice it, do dry runs, migrate small numbers of users before migrating many.

- Prepare your users for problems, over-communicate about what is happening.  Step up your support processes around the transition period(s).

- Bring all of your supply chain together and step through how key processes and scenarios will work including when it all goes wrong.

- Have backout processes that you have tested and know the criteria you will use to put them into action

Transitions don't come along very often.  The last one DECC and BIS did seems to have been 15 years ago (recognising that DECC was within Defra and even MAFF back then).  They take practice.  Even moving from big firm A to big firm B.  Even moving from Exchange version x to Exchange version y.

What this story isn't, in any way, is a signal that there is something wrong with the current policy of disaggregating contracts, of bringing in new players (small and large) and of reducing the cost of IT.

The challenge ahead is definitely high on the ambition scale - many large scale IT contracts were signed at roughly the same time, a decade or more ago, and are expiring over the next 8 months.  Government departments will find that they are, as one, procuring, transitioning and going live with multiple new providers.  They will be competing for talent in a market where, with the economy growing, there is already plenty of competition.  Suppliers will be evaluating which contracts to bid for and where they, too, can find the people they need - and will be looking for much the same talent as the government departments are.  There are interesting times ahead.

There will be more stories about transition, and how hard it is, from here on in.  What angle the reporting takes in the future will be quite fascinating.

Friday, June 13, 2014

More On G-Cloud Numbers (May 2014 data)

The latest data show increasing spend via G-Cloud - this month tantalisingly close to the arbitrary-but-important round number of £200m, at £191.6m.  The news after that is not terribly interesting:

Cloud spending may, it turns out, be seasonal.  Spend last month dropped to £12m, the lowest seen since October 2013 - all the more noticeable after the budget-clearout blockbuster that was March spending.  Start of the new financial year and everyone is, it seems, planning rather than doing.

Lot 4 continues to dominate with 79% of the spend (Lots 1 to 3 are 6%, 1% and 13% respectively).

Much of the rest - top spending customers, top earning suppliers etc - stays the same.

But there are some anomalies.  Last month I reported that the lowest spending customer had spent only £63.50.  This month that figure has moved higher, to £85.90.  Thirteen customers have, though, still spent less than £1,000.

We do, though, have nearly 500 customers (489), which is, in my view, more important than the growth in spend - it shows either (a) that more people are looking at what the cloud can do for them, which would be good all round, or (b) that more people have found that G-Cloud as a framework, GCaaS, can help them, which is still good because it's transparent and we can see whether they spend more in the coming months.

51 suppliers have seen revenues of more than £1m. Some of those are brand name, paid up, members of the Oligopoly.  Others look new to the public sector and certainly new to having access to quite so many customers.

There are some other anomalies too - I assume the result of data capture errors.  One supplier has a revenue line showing £1,599,849.80 which is listed as "blank" - there are 9 other such lines, though the other numbers are far, far smaller.  It would be nice to know where to allocate that money.  It may be that it is correctly allocated by Lot (so shows up in the graph below, where there are no "blank" entries) but not correctly tagged with a product description.  It would still be nice to know.
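A quick way to surface those unallocated lines (again, the file and column names are assumptions about the published spreadsheet, not its actual layout):

```python
import pandas as pd

df = pd.read_csv("gcloud-spend.csv")   # assumed file name
df["spend"] = pd.to_numeric(df["Total Charges"], errors="coerce").fillna(0)

# Rows where the product/service description is missing or literally reads "blank"
desc = df["Product Description"].fillna("").str.strip().str.lower()
unallocated = df[desc.isin(["", "blank"])]

print(unallocated[["SupplierName", "Lot", "spend"]].sort_values("spend", ascending=False))
```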

A couple of other points to wonder about:

- The Crown Commercial Service are a bigger user of Skyscape than any other purchaser (£1.4m - double HMRC's spend, nearly triple Cabinet Office's, nearly 5 times the Home Office's).  Is that all hosting of the G-Cloud store and other framework services?

- There are only 125 instances of the word "Cloud" in the line items of what has been purchased (which run to over 2,000 separate lines)

To repeat the last paragraph in my last entry on this topic, for the avoidance of doubt:

Still, there is no other framework in government that gives access to such a wide variety of suppliers (many new to the public sector) and no framework that publishes its information at such a transparent level.  For those two reasons alone, G-Cloud still deserves applause - and, as it grows month on month, I hope that it will only get louder. 




Monday, June 09, 2014

Digital Government 2002 - Doing Something Magical

Now here's a blast from the past!  Here's a "talking head" video recorded, I think, in early 2002 all about e-government (I am, of course, the talking head).  Some months later, much to my surprise, the video popped up at a conference I was attending - I remember looking up to see my head on a dozen 6' tall screens around the auditorium.

It's easily dated by me talking about increasing use of PDAs (you'll even see me using one) and the rollout of 3G, not to mention the ukonline.gov.uk logo flashing up in the opening frames and e-government, as opposed to Digital By Default.

But the underpinning points of making the move from government to online government, e-government or a Digital by Default approach are much the same now as then:

"The citizen gets the services they need, when they need them, where they need then, how they need them ... without having to worry about ... the barriers and burdens of dealing with government"

[embedded video]

"You've changed government so fundamentally ... people are spending less time interacting and are getting real benefit"

Lessons learned: get a haircut before being taped, learn your  lines, even when in America don't wear a t-shirt under your shirt (my excuse is that it was winter).

Thursday, June 05, 2014

G-Cloud By The Numbers (April 2014 data, released mid-May 2014)

I haven't looked at the G-Cloud spend data for a few months (the last review was in December) - something changed with the data format earlier in the year and it screwed up all my nicely laid out spreadsheets; I've only just got round to reworking them.

- After 25 months of use, total spend via the framework is £175.5m

- Spend in all of 2013 was £85m.  Spend in the first 4 months of 2014 is £81m, about 46% of the total spend so far

- The run rate for 2014, if that spend rate continues, is perhaps more than £240m.  I suspect we could see much higher than that given the expiry of many central government IT contracts in 2015 and 2016 (and so an increase in experimentation, preparation for transition and even actual transition ahead of expiry)

- The split between the lots in December 2013 was Lot 1: 4%, Lot 2: 1%, Lot 3: 16%, Lot 4: 78%

- As of now, the split is similar: 6%, 1%, 14%, 79%
 
- The 2014 year to date split is little different: 8%, 1%, 12%, 80%





Conclusion:  The vast bulk of the spending is still via Lot 4 - people and people as a service.  I'd expected that to start changing now, with the Digital Services Framework fully live.  That said, Lot 4's spend per month has changed little since November 2013, except for a peak of £23m in March (roughly double the average spend over the last 6 months) which you can easily see in the graph above

Conclusion: Infrastructure as a Service (from Lot 1) is gradually increasing - it's gone from c£800k/month to c£1.5m a month in the last 6 months.  Again, there was a peak in March, of £2m. 

Conclusion: It's an old cliche but plainly there was a bit of a budget clear out in March with departments rushing to spend money.  March 2014 spend was £30m - roughly double any other month either side.

- In December 2013, BJSS was the largest supplier, followed by IBM.  Today, BJSS are still number 1, but Methods have moved to number 2, with IBM at 3.

- The Home Office is still the highest spending customer, at £24.7m (nearly double their spend as of December).  MoJ are second at £16.7m with Cabinet Office third at £12.5m

- The top 10 customers account for 50% of the spend on the framework.  The top 20 make up 67%. That's exactly how it was in December.  More than 100 new customers have been added since December, though, with over 470 customers now listed.

- Some 310 suppliers have won business.   The top 10 have 32% of the market, the top 20 have 47% (that's a better spread than the customer equivalent metrics)

- Last time, the lowest spending customer was the "Wales Office", with £375.   We are at a new low now, with "Circle Anglia Limited" spending £63.50 (I wonder if the cost of processing that order was far greater?).

-  Thirteen customers have spent less than £1,000.  Thirty one have spent more than £1m

Conclusion:  Much the same as in December - Adoption of the framework is still spotty, but it is definitely improving.  A greater spread of customers, spending higher amounts of money - though mostly concentrated in Lot 4.  A few more suppliers have likely seen their business utterly transformed with this new access to public sector customers.

Overall Conclusion: G-Cloud needs, still, to break away from its reliance on Lot 4 sales.  Scanning through the sales by line item, there are far too many descriptions that say simply "project manager", "tester", "IT project manager" etc.  There are even line items (not in Lot 4) that say "expenses - 4gb memory stick" - a whole new meaning to the phrase "cloud storage" perhaps.

Still, there is no other framework in government that gives access to such a wide variety of suppliers (many new to the public sector) and no framework that publishes its information at such a transparent level.  For those two reasons alone, G-Cloud still deserves applause - and, as it grows month on month, I hope that it will only get louder.

Tuesday, May 13, 2014

Officially Uncertain

It turns out that the new security classifications, introduced at the start of April 2014, have collapsed into a single new tier - Officially Uncertain.  I worried that this might happen earlier in the year.

Last week, for instance, it was clearly explained to me that "OFFICIAL is not a protective marking, it does not convey any associated controls on how the information is to be handled."

What that means, of course, is that because there is no agreed baseline of controls for protecting information that is marked OFFICIAL (a marking which, we are told, isn't a protective marking), each department or government entity is able to decide, alone, what it should do to protect that information.  Adios commodity cloud.

In a different meeting with different people, it was explained to me, just as clearly, that no one was going to go back and revisit their historical data to check what label should be applied to it (on an individual, file by file basis).  The only conclusion, therefore, was that all historical data should be marked OFFICIAL SENSITIVE (notwithstanding that, if OFFICIAL isn't a protective marking, then neither is this one, and that the guidance suggests that use of "sensitive" is by exception only - this is one big exception).  And given it's all a bit sensitive, that historical data should be treated as if it were IL3 and kept in a secure facility in the UK.  Adieu commodity cloud.



All is not yet lost I hope.  Folks I speak to in CESG - sane, rational people that they are - recognise that this is a "generational change" and it will take some time before the implications are understood.  The trouble is that whilst time is on the side of government, it's not on the side of the smaller/newer players who want to provide services for government and for whom UNCERTAINTY is anathema.

In these early days, some guidance (not rules) would help people navigate through this uncertainty and support the development of products that met the needs of the bulk of government entities (be they local, central, arms length or otherwise).  The existing loose words - I can't stretch to guidance for these - known as the "Cloud Security Principles" get to the precipice of new controls, look over and leap sharply backwards, all a tremble.

Indeed, the summary of the approach recommended by those who best understand security is:

1. Think about the assets you have and what you're trying to do with them

2. Think about the attackers who'll be trying to interfere with those assets as you deliver your business function

3. Implement some mitigations (physical, procedural, personnel, technical) to address those identified risks

4. Get assurance as required in those mitigations

5. Thinking about the updated solution design, go back to step 1 to see if you've introduced any new risks.

6. Repeat until you've hit a level of confidence you are happy with

My guess is that 6, alone, could lead to an awful lot of iterations that culminate in "guards with machine guns, patrolling with dogs around a perimeter protected by an electric fence".  Of course, the number of guards, the type of guns, the eagerness of the dogs, the height of the fence and the shock provided by the fence will vary from entity to entity.
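Purely to illustrate why step 6 worries me, here is the loop rendered as a toy sketch - none of this is a real CESG method, and the controls and numbers are invented:

```python
# Toy illustration of the iterative risk process above - invented controls and numbers.
CONTROLS = ["encryption", "access control", "monitoring", "personnel vetting",
            "physical perimeter", "guards", "dogs", "electric fence"]

def residual_risk(num_controls: int) -> float:
    """Pretend each extra mitigation removes a fixed slice of the remaining risk."""
    return 0.7 ** num_controls

def assess(risk_appetite: float) -> list:
    applied = []
    for control in CONTROLS:            # steps 1-5: identify assets and attackers, mitigate, assure, redesign
        if residual_risk(len(applied)) <= risk_appetite:
            break                       # step 6: confident enough, stop iterating
        applied.append(control)
    return applied

print(assess(risk_appetite=0.30))   # a modest appetite stops after a handful of controls
print(assess(risk_appetite=0.05))   # a nervous owner iterates all the way to the electric fence
```

The point being that the loop terminates wherever each entity's appetite says it does - which is exactly why the guards, dogs and fences will vary from entity to entity.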



There is sunshine through some of the clouds though ... some departments are rolling out PCs using native BitLocker rather than add-on encryption, others are trialling Windows 8.1 on tablets, whilst managed iPads have been around for some months.



But a move of central government departments to public cloud services (remember - 50% of new spend to be in the public cloud by 2015) looks to be a long way from here.  I don't think I can even soften it and say that a significant move to even a private, public sector only, cloud is that close.


Friday, March 14, 2014

The Trouble With ... Spectrum


To paraphrase Mark Twain, "Buy spectrum.  They're not making it anymore."

And if Ofcom's figures are right, the spectrum that we use today is worth £50bn a year (as of 2011) to the UK economy.  The government say that they want to double that contribution by 2025 - it is already up 25% in the 5 years from 2006 to 2011.  It's unclear why the data is as of 2011 - one suspects that if it was up 25% in 5 years, it may already be up another 12.5% since then making doubling by 2025 at least a little easier.

If you've ever seen how spectrum is carved up in the UK, you know it's very complicated.  Here's a picture that shows just how complicated:


Every so often, the government auctions, through Ofcom, a slice of spectrum.  In April 2000, the sale of the 3G spectrum realised some £22bn.  There was much delight - until stock markets around the world fell dramatically not long afterwards, something which was at least partly to blame for delaying the actual rollout of 3G services (indeed, Vodafone narrowly avoided a fine for failing to reach 90% coverage on time - with an extension granted to the end of 2013).

That 90% is a measure of population coverage, not geographical coverage - which explains why you will often fail to get a signal in the middle of a park, just outside a small town or, often, anywhere with a beautiful view where you want to take a picture and send it to someone there and then, like if you were here:


Of course, there are doubtless plenty of people wandering down Baker Street right now who also can't get or even maintain a 3G signal.

The 4G auctions took place a little over 18 months ago and resulted in revenues that were some 90% lower than for 3G - partly a reflection of the times, partly because of somewhat decreased competition and partly because of smarter bidding on the part of at least some of the operators.  The 4G build out is underway now, though there are, in effect, only two networks being built - O2 and Vodafone are sharing their network, as are Three and EE (the latter had a significant head start on 4G because they lobbied, successfully, to reassign spare 1800 MHz spectrum, originally licensed for 2G, for use as 4G).

For me, though, coverage shouldn't be a competitive metric.  Coverage should be national - not 90% national, proper national.  Using coverage as a metric, coupled with charging for the spectrum, and then splitting up the job of building the network across two (or more) players means that they will go where the money is - which means major towns first (in fact, it means London first and for longest), then major travel corridors and commuter towns and the rural areas never.  The same is true, of course, for broadband - though our broadband rollout is mostly physical cable rather than over the air, the same investment/return challenge remains.

And that always seems to leave government to fill in the gaps - whether with the recent "not spots" project for mobile that will result in a few dozen masts being set up (for multiple operator use) to cover some (not all) of the gaps in coverage or a series of rural broadband projects (that only BT is winning) - neither of which is progressing very fast and certainly not covering the gaps.

With the upcoming replacement of Airwave (where truly national - 100% geographic - coverage is required), the rollout of smart meters (where the ideal way for a meter to send its reading home is via SMS or over a home broadband network) and the need to plug gaps in both mobile and broadband coverage, surely there is a need for an approach that we might call "national infrastructure"?

So, focusing on mobile and, particularly, where it converges with broadband (on the basis that one can substitute for the other and that the presence of one could drive the other), could one or more bodies be set up whose job is to create truly national coverage, selling the capacity that they create to content and service providers who want it?  That should ensure coverage, create economies of scale and still allow competition (even more so than today, given that in many areas there is only one mobile provider to choose from).  Network Rail for our telecomms infrastructure.

To put it another way: is it relevant or effective to have competitive build-outs of such nationally vital capabilities as broadband and 4G (and later 5G, 6G etc) mobile?

If the Airwave replacement were to base its solution on 4G (moving away from TETRA) - and I have no idea if they will or they won't, but given the Emergency Services will have an increasing need for data in the future, it seems likely - then we would have another player doing a national rollout, funded by government (either directly or through recovery of costs).

There are probably 50,000 mobile masts in the country today.  With the consolidation of networks, that will get to a smaller number, maybe 30,000.  If you add in Airwave, which operates at a lower frequency and so will get better performance but has to cover more area, that number will increase a little (Airwave was formerly owned by O2 so my guess is that much of their gear is co-located with O2 mobile masts).   Perhaps £100k to build those in the first place and perhaps £10k every time they need a major upgrade (change of frequency / change of antenna / boost in backhaul and so on) ... pretty soon you're talking real money.  And that's on top of the cost of spectrum and excludes routine maintenance and refresh costs.
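To put orders of magnitude on that (the per-mast figures are the guesses above; the number of upgrade cycles is my own assumption):

```python
# Rough orders of magnitude for the mast estimates above - illustrative, not sourced.
masts = 30_000          # consolidated estimate of mast sites
build_cost = 100_000    # ~£100k to build each in the first place
upgrade_cost = 10_000   # ~£10k per mast for each major upgrade
upgrades = 3            # assume three major upgrade cycles per mast

total = masts * (build_cost + upgrades * upgrade_cost)
print(f"~£{total / 1e9:.1f}bn")   # ~£3.9bn, before spectrum, routine maintenance and refresh
```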

So having already rolled out 2G, 3G and the beginning of the 4G network and likely to replace (or at least combine) Airwave with a 4G-based solution ... and with many areas of the country still struggling to get a wireless signal, let alone a fast broadband solution, I think it's time to look at this again.

Whilst I was writing this, Ofcom put out a press release noting:
Ofcom’s European Broadband Scorecard, published today, shows that the UK leads the EU’s five biggest economies on most measures of coverage, take-up, usage and choice for both mobile and fixed broadband, and performs well on price.
That suggests that the current approach has done rather well - better than a curious selection of five big countries - but it doesn't mean (a) that we should only compare ourselves with those countries, (b) that we should not go for an absolute measure of 100% coverage or (c) that the current approach will keep us ahead of the game.

It seems to me that rather than spend billions on HS2 which aims to transport more people from A to B a bit quicker than they move today, we could, instead, spend a smaller amount on securing the infrastructure for the 21st and 22nd centuries rather than that of the 19th.

Monday, March 03, 2014

The Trouble With ... Transition

In my post two weeks ago (Taking G-Cloud Further Forward), I made this point:
I struggle with the suggestion to make contracts three years instead of two.  This is a smokescreen, it's not what is making buyers nervous really, it's just that they haven't tried transition.  So let's try some - let's fire up e-mail in the cloud for a major department and move it 6 months from now.  

Until it's practiced, no one will know how easy (or incredibly difficult) it is.  The key is not to copy and paste virtual machines, but to move the gigabytes of data that goes with it.  This will prove where PSN is really working (I suspect that there are more problems than anyone has yet admitted to), demonstrate how new capabilities have been designed (and prove whether the pointy things have been set up properly as we used to say - that is, does the design rely on fixed IP address ranges or DNS routing that is hardcoded or whatever).  

This won't work for legacy - that should be moved once and once only to the Crown Hosting Service or some other capability (though recognise that lots of new systems will still need to talk to services there).  There's a lot riding on CHS happening - it will be an interesting year for that programme.
Eoin Jennings of Easynet responded, via Twitter, with a view that buyers thought that there was significant procurement overhead if there was a need to run a procurement every 2 years (or perhaps more frequently given there is an option within G-Cloud to terminate for convenience and move to a new provider). Eoin is seemingly already trying to convince customers - and struggling.


Georgina O'Toole (of Richard Holway's Tech Market View) shared her view that 2 years could be too short, though for a different reason:

An example might be where a Government organisation opts for a ‘private cloud’ solution requiring tailoring to their specifications. In these cases, a supplier would struggle to recoup the level of investment required in order to make a profit on the deal.  The intention is to reduce the need for private cloud delivery over time, as the cloud market “innovates and matures” but in the meantime, the 24-month rule may still deter G-Cloud use.
Both views make sense, and I understand them entirely, in the "current" world of government IT where systems are complex, bespoke and have been maintained under existing contracts for a decade or more. 

But G-Cloud isn't meant for such systems.  It's meant for systems designed under modern rules where portability is part of the objective from the get go.   There shouldn't be private, departmentally focused, clouds being set up - the public sector is perfectly big enough to have its own private cloud capability, supplied by a mixture of vendors who can all reassure government that they are not sharing their servers or storage with whoever they are afraid of sharing them with.  And if suppliers build something and need long contracts to get their return on investment, then they either aren't building the right things, aren't pricing it right or aren't managing it right - though I see that there is plenty of risk in building anything public sector cloud focused until there is more take up, and I applaud the suppliers who have already taken the punt (let's hope that the new protective marking scheme helps there).

Plainly government IT isn't going to turn on a sixpence, with new systems transporting in from far off galaxies right away, but it is very much the direction of travel, as evidenced by the various projects set up using a more agile design approach right across government - in HMRC, RPA, Student Loans, DVLA and so on.

What really needs to happen is some thinking through of how it will work and some practice:

- How easy is it to move systems, even those designed for the cloud, where IP ranges are fixed and owned by data centre providers (see the sketch after this list)?

- How will network stacks (firewalls, routers, load balancers, intrusion detection tools etc) be moved on a like for like basis?

- If a system comes with terabytes or petabytes of data, how will that be transferred so that there is no loss of service (or data)?

- In a world where there is no capex, how does government get its head around not looking at everything as though it owned it?

- If a system is supported by minimal staff (as in 0.05 heads per server or whatever), TUPE doesn't apply (though it may well apply for the first transfer) - how does government (and the supplier base) understand that?

- How can the commercial team sharpen their processes so that what is still taking them many (many) weeks (despite the reality of a far quicker process with G-Cloud) can be done in a shorter period?
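To make the first of those questions concrete, here is a minimal sketch of the difference between a design that will move and one that won't - the names and addresses are invented for illustration:

```python
# Illustrative only - invented names and addresses.

# Hard to move: the database address is baked in, and it's an IP range
# owned by the current data centre provider.
hardcoded_config = {"database": "10.32.14.7:5432"}

# Easier to move: the application only knows a DNS name that the department
# controls, so re-pointing the record is part of the transition plan rather
# than a code change.
portable_config = {"database": "casework-db.department.example:5432"}
```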

Designing this capability in from the start needs to have already started and, for those still grappling with it, there should be speedy reviews of what has already been tried elsewhere in government (I hesitate to say that "lessons must be learned" on the basis that those four words may be the most over-used and under-practiced words in the public sector).

With the cost of hardware roughly halving every 18-24 months and therefore the cost of cloud hosting falling on a similar basis (perhaps even faster given increasing rates of automation), government can benefit from continuously falling costs (in at least part of its stack) - and by designing its new systems so that they avoid lockin from scratch, government should, in theory, never be dependent on a small number of suppliers and high embedded costs.

Now, that all makes sense for hosting ... how government does the same for application support, service integration and management and how it gets on the path where it actually redesigns its applications (across the board) to work in such a way that they could be moved whenever new pricing makes sense is the real problem to grapple with.

Monday, February 17, 2014

Taking G-Cloud Further Forward

A recent blog post from the G-Cloud team talks about how they plan to take the framework forward. I don't think it goes quite far enough, so here are my thoughts on taking it even further forward.

Starting with that G-Cloud post:

It's noted that "research carried out by the 6 Degree Group suggests that nearly 90 percent of local authorities have not heard of G-Cloud".  This statement is made in the context of the potential buyer count being 30,000 strong.  Some, like David Moss, have confused this and concluded that 27,000 buyers don't know about G-Cloud.  I don't read it that way - but it's hard to say what it does mean.  A hunt for the "6 Degree Group", presumably twice as good as the 3 Degrees, finds one obvious candidate (actually the 6 Degrees Group), but they make no mention of any research on their blog or their news page (and I can't find them in the list of suppliers who have won business via G-Cloud).  Still, 90% of local authorities not knowing about G-Cloud is, if the question was asked properly and to the right people (and therein lies the problem with such research), not good.  It might mean that 450 or 900 or 1,350 buyers (depending on whether there are 1, 2 or 3 potential buyers of cloud services in each local authority) don't know about the framework.  How we get to 30,000 potential buyers I don't know - but if there is such a number, perhaps it's a good place to look at potential efficiencies in purchasing.

[Update: I've been provided with the 30,000 - find them here: http://gps.cabinetoffice.gov.uk/sites/default/files/attachments/2013-04-15%20Customer%20URN%20List.xlsx. It includes every army regiment (SASaaS?), every school and thousands of local organisations.  So a theoretical buyer list but not a practical buyer list. I think it better to focus on the likely buyers. G-Cloud is a business - GPS gets 1% on every deal.  That needs to be spent on promoting to those most likely to use it]

[Second update: I've been passed a further insight into the research: http://www.itproportal.com/2013/12/20/g-cloud-uptake-low-among-uk-councils-and-local-authorities/?utm_term=&utm_medium=twitter&utm_campaign=testitppcampaign&utm_source=rss&utm_content=  - the summary from this is that 87% of councils are not currently buying through G-Cloud and 76% did not know what the G-Cloud [framework] could be used for]

Later, we read "But one of the most effective ways of spreading the word about G-Cloud is not by us talking about it, but for others to hear from their peers who have successfully used G-Cloud. There are many positive stories to tell, and we will be publishing some of the experiences of buyers across the public sector in the coming months" - True, of course.  Except if people haven't heard of G-Cloud they won't be looking on the G-Cloud blog for stories about how great the framework is.  Perhaps another route to further efficiencies is to look at the vast number of frameworks that exist today (particularly in local government and the NHS) and start killing them off so that purchases are concentrated in the few that really have the potential to drive cost saves allied with better service delivery.

And then "We are working with various trade bodies and organisations to continue to ensure we attract the best and most innovative suppliers from across the UK."  G-Cloud's problem today isn't, as far as we can tell, a lack of innovative suppliers - it's a lack of purchasing through it.  In other words, a lack of demand.  True, novel services may attract buyers but most government entities are still in the "toe in the water" stage of cloud, experimenting with a little IaaS, some PaaS and, based on the G-Cloud numbers, quite a lot of SaaS (some £15m in the latest figures, or about 16% of total spend versus only 4% for IaaS and 1% for Paas).

On the services themselves, we are told that "We are carrying out a systematic review of all services and have, so far, deleted around 100 that do not qualify."  I can only applaud that.  Though I suspect the real number to delete may be in the 1000s, not the 100s.  It's a difficult balance - the idea of G-Cloud is to attract more and more suppliers with more and more services, but buyers only want sensible, viable services that exist and are proven to work.  It's not like iTunes, where it only takes one person to download an app and rate it 1* because it doesn't work/keeps crashing/doesn't synchronise, and so suggest to other potential buyers that they steer clear - the vast majority of G-Cloud services have had no takers at all, and even those that have lack any feedback on how it went (I know that this was one of the top goals of the original team but that they were hampered by "the rules").

There's danger ahead too: "Security accreditation is required for all services that will hold information assessed at Business Impact Level profiles 11x/22x, 33x and above. But of course, with the new security protection markings that are being introduced on 1 April, that will change. We will be publishing clear guidance on how this will affect accreditation of G-Cloud suppliers and services soon."  It's mid-February and the new guidelines are just 7 weeks away.  That doesn't give suppliers long to plan for, or make, any changes that are needed (the good news here being that government will likely take even longer to plan for, and make, such changes at their end).  This is, as CESG people have said to me, a generational change - it's going to take a while, but that doesn't mean that we should let it take any longer than it needs to.

Worryingly: "we’re excited to be looking at how a new and improved CloudStore, can act as a single space for public sector buyers to find what they need on all digital frameworks."  I don't know that a new store is needed; I believe that we're already on the third reworking - would a fourth help?  As far as I can tell, the current store is based on Magento which, from all accounts and reviews online, is a very powerful tool that, in the right hands, can do pretty much whatever you want from a buying and selling standpoint.  I believe a large part of the problem is in the data in the store - searching for relatively straightforward keywords often returns a surprising answer - try it yourself: type in some popular supplier names or some services that you might want to buy.  Adding in more frameworks (especially where they can overlap, as PSN and G-Cloud do in several areas) will more than likely confuse the story - I know that Amazon manages it effortlessly across a zillion products, but it seems unlikely that government can implement it any time soon (wait - they could just use Amazon). I would rather see the time, and money, spent getting a set of products that were accurately described and that could be found using a series of canned searches based on what buyers were interested in.

So, let's ramp up the PR and education (for buyers), upgrade the assurance process that ensures that suppliers are presenting products that are truly relevant, massively clean up the data in the existing store, get rid of duplicate and no longer competitive buying routes (so that government can aggregate for best value), make sure that buyers know more about what services are real and what they can do, don't rebuild the damn cloud store again ...

... What else?

Well, the Skyscape+14 letter is not a terrible place to start, though I don't agree with everything suggested.  G-Cloud could and should:

- Provide a mechanism for services to work together.  In the single prime contract era, which is coming to an end, this didn't matter - one of the oligopoly would be tasked to buy something for its departmental customer and would make sure all of the bits fitted together and that it was supported in the existing contract (or an adjunct).  In a multiple supplier world where the customer will, more often than not, act as the integrator, both customer and supplier are going to need ways to make this all work together.  The knee bone may be connected to the thigh bone, but that doesn't mean that your email service in the cloud is going to connect via your PSN network to your active directory so that you can do everything on your iPad.

- Publish what customers across government are looking at both in advance and as it occurs, not as data but as information.  Show what proof of concept work is underway (as this will give a sense of what production services might be wanted), highlight what components are going to be in demand when big contracts come to an end, illustrate what customers are exploring in their detailed strategies (not the vague ones that are published online).  SMEs building for the public sector will not be able to build speculatively - so either the government customer has to buy exactly what the private sector customer is buying (which means that there can be no special requirements, no security rules that are different from what is already there and no assurance regime that is above and beyond what a major retailer or utility might want), or there needs to be a clear pipeline of what is wanted.  Whilst Chris Chant used to say that M&S didn't need to ask people walking down the street how many shirts they would buy if they were to open a store in the area, government isn't yet buying shirts as a service - they are buying services that are designed and secured to government rules (with the coming of Official, that may all be about to change - but we don't know yet because, see above, the guidance isn't available).

- Look at real cases of what customers want to do - let's say that a customer wants to put a very high performing Oracle RAC instance in the cloud - and ensure that there is a way for that to be bought.  It will likely require changes to business models and to terms and conditions, but despite the valiant efforts of GDS there is not yet a switch away from such heavyweight software as Oracle databases.  The challenge (one of many) that government has, in this case, is that it has massive amounts of legacy capability that is not portable, is not horizontally scalable and that cannot be easily moved - Crown Hosting may be a solution to this, if it can be made to work in a reasonable timeframe and if the cost of migration can be minimised.

- I struggle with the suggestion to make contracts three years instead of two.  This is a smokescreen, it's not what is making buyers nervous really, it's just that they haven't tried transition.  So let's try some - let's fire up e-mail in the cloud for a major department and move it 6 months from now.  Until it's practiced, no one will know how easy (or incredibly difficult) it is.  The key is not to copy and paste virtual machines, but to move the gigabytes of data that goes with it.  This will prove where PSN is really working (I suspect that there are more problems than anyone has yet admitted to), demonstrate how new capabilities have been designed (and prove whether the pointy things have been set up properly as we used to say - that is, does the design rely on fixed IP address ranges or DNS routing that is hardcoded or whatever).  This won't work for legacy - that should be moved once and once only to the Crown Hosting Service or some other capability (though recognise that lots of new systems will still need to talk to services there).  There's a lot riding on CHS happening - it will be an interesting year for that programme.

The ICT contracts for a dozen major departments/government entities are up in the next couple of years - contract values in the tens of billions (old money) will be re-procured.   Cloud services, via G-Cloud, will form an essential pillar of that re-procurement process, because they are the most likely way to extract the cost savings that are needed.  In some cases cloud will be bought because the purchasing decision will be left too late to do it any other way than via a framework (unless the "compelling reason" for extension clause kicks in) but in most cases because the G-Cloud framework absolutely provides the best route to an educated, passionate supplier community who want to disrupt how ICT is done in Government today.  We owe them an opportunity to make that happen.  The G-Cloud team needs more resources to make it so - they are, in my view, the poor relation of other initiatives in GDS today.  That, too, needs to change.

Monday, February 10, 2014

G-Cloud By The Numbers (December 2013 data)

Last week, the Cabinet Office published the latest figures for G-Cloud spending.  The data runs through December 2013.  Here are the highlights - call it an infographic in text:

- After 21 months of use, total spend via the framework is £92.657m

- Spend in all of 2013 was £85m

- Spend in the last quarter of 2013 was £35m.  That would suggest a run rate for 2014 of some £140m, assuming no growth (or decline)

- Lot 3's total spending of £15m (over 21 months) is only marginally higher than the total spend in November 2013 (of £14m)

- The split between the lots is Lot 1: 4%, Lot 2: 1%, Lot 3: 16%, Lot 4: 78%

- Taking the spend over 2013 only, those splits don't change much: 5%, 1%, 15%, 79%

- Over the last quarter of 2013, the splits change a little: 7%, 1%, 10%, 82%

Conclusion:  The vast bulk of the spending is still via Lot 4 - people and people as a service.  That may soon change given that there are other frameworks available for taking on people, including the Digital Services Framework and Consultancy One.   Lot 4 spending was down to £9m in December (from £12m in November) - that's more likely because it was a short month than any sign of a trend.

Conclusion: Either Infrastructure as a Service is ridiculously cheap (not yet exceeding £1m per month) or there is little appetite yet for serious tin by the hour usage.  In all likelihood, with most departments tied into existing outsource contracts, only a few are breaking out and trialling Lot 1.  With contracts expiring over the next couple of years and the Crown Hosting Service perhaps appearing at the turn of the year, we may see real changes here.

- IBM is no longer the largest supplier by value of business done, the new winner is BJSS Ltd (a supplier of agile consultancy) who clock in at £7.2m.  IBM are second at £6m.

- The Home Office is still the highest spending customer, at £13.9m.  MoJ are second at £9.2m.

- The top 10 customers account for 51% of the spend on the framework.  The top 20 make up 66%.  There are nearly 350 individual customers listed.

- The lowest spending customer is the "Wales Office", with £375.  A dozen customers have spent less than £1,000.  Oddly, last time I checked this data, Downing Street was the lowest spend at £150 - that entry is no longer in the sheet as far as I can tell. Perhaps they got a refund.

Conclusion:  Adoption of the framework is still spotty.  A few customers have massively taken to it, many have dipped their toe in the water.  A few suppliers have likely seen their business utterly transformed with this new access to public sector customers.






Monday, January 27, 2014

Government Draws The Line

On Friday, the Cabinet Office announced (or re-announced according to Patricia Hodge) that:
  • no IT contract will be allowed over £100 million in value – unless there is an exceptional reason to do so, smaller contracts mean competition from the widest possible range of suppliers
  • companies with a contract for service provision will not be allowed to provide system integration in the same part of government
  • there will be no automatic contract extensions; the government won’t extend existing contracts unless there is a compelling case
  • new hosting contracts will not last for more than 2 years

I was intrigued by the lower case. Almost like I wrote the press release.
These are the new "red lines" then - I don't think these are re-announcements, rather a firming up of previous guidance.  When the coalition came to power, there was a presumption against projects over £100m in value; now there appears to be a hard limit (albeit with the caveat around exceptional reasons, ditto with extensions where there is a "compelling" case).

On the £100m limit:

There may be a perverse consequence here.  Contracts will be split up and/or made shorter to fit within the limit; or contracts may be undervalued with the rest coming in change controls.  Transitions may occur more regularly, increasing costs over the long term.  Integration of the various suppliers may also cost more.  For 20 years, government has bought its IT in huge, single prime (and occasionally double prime) silos.  That is going to be a hard, but necessary, habit to break.

£100m is, of course, still a lot of money.   Suppliers bidding for £100m contracts are likely the same as those bidding for £500m contracts; they are most likely not the same as those bidding for £1m or £5m contracts.

Understanding what the new contract landscape looks like will require a slightly different approach to transparency - instead of reporting individual spends or contracts, it would give a better view to report the aggregate set of contracts that achieve a given outcome.  So if HMRC are building a new Import/Export system (for instance), we should be able to visit a site and see the total set of contracts connected with that service (including the amounts, durations and suppliers).
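
As a purely illustrative sketch of what that might look like as data - every field, supplier and figure below is hypothetical, not anything the Cabinet Office actually publishes - the shape could be as simple as this:

    # Hypothetical only: one possible shape for "all contracts behind one
    # outcome" reporting.  None of these names come from a real publication.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Contract:
        supplier: str
        value_gbp: float
        start: str          # ISO dates, e.g. "2014-04-01"
        end: str
        description: str

    @dataclass
    class Outcome:
        department: str
        outcome: str
        contracts: List[Contract] = field(default_factory=list)

        def total_value(self) -> float:
            return sum(c.value_gbp for c in self.contracts)

    # A made-up example: an HMRC import/export service assembled from several awards.
    imports = Outcome("HMRC", "New Import/Export system", [
        Contract("Supplier A", 4.5e6, "2014-04-01", "2016-03-31", "Hosting"),
        Contract("Supplier B", 9.0e6, "2014-04-01", "2017-03-31", "Development"),
    ])
    print(f"Total committed so far: £{imports.total_value():,.0f}")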

On the "service providers" will not be allowed to carry out "system integration" point:
I'm not sure that I follow this but I take it to mean that competition will be forced into the process so that, in my point above about disaggregated contracts, suppliers will be prevented from winning multiple lots (particularly where hardware and software is provided by a company).  That, in theory, has the most consequence for companies like Fujitsu and HP who typically provide their own servers, desktops or laptops when taking on other responsibilities in an outsource deal.
 And no more extensions:

Assuming that there isn't a compelling reason for extension, the contract term is the contract term.  If that rule is going to be rigorously applied to all existing contracts, there are some departments in trouble already who have run out of time for a reprocurement or who will be unable to attract any meaningful competition into such a procurement.  Transparency, again, can help here - which contracts are coming up to their expiry point (let's look ahead 24 months to start with) and what is happening to each of them (along with what actually happened when push came to shove).  That would also help suppliers, particularly small ones, understand the pipeline.

On limiting hosting contracts to 2 years:

That's consistent with the G-Cloud contract term (notwithstanding that some suppliers wrote to GDS last week asking for the term to be extended to 3 years).  But it's also unproven - it's one thing to "copy and paste" a dozen virtual machines from one data centre to another; it's quite another to shift a petabyte of data or a set of load-balanced, firewalled, well-routed network connections.  Government is going to have to practise this - so far, moves of hosting providers have taken a year or more and cost millions (without delivering any tangible business benefit, especially given the necessary freezes either side of the move).  It also means trouble for some of the legacy systems that are fragile and hard to move.  The Crown Hosting Service could, at least, limit moves of those kinds of systems to a single transition to its facilities - that would be a big help.

Friday, January 24, 2014

Government Gateway - Teenage Angst

Tomorrow, January 25th, the Government Gateway will be 13.  I’m still, to be honest, slightly surprised (though pleased) that the Gateway continues to be around - after all, in Internet time, things come and go in far shorter periods than that.  In the time that we have had the Gateway, we rebuilt UKonline.gov.uk with three different suppliers, launched direct.gov.uk and replatformed it some years later, then closed that down and replaced it with gov.uk which has absorbed the vast bulk of central government’s websites and has probably had 1,000 or more iterations since launch.  And yet the Gateway endures.


In 13 years, the Gateway has, astonishingly, had precisely two user interface designs.  In the first, I personally picked the images that we used on each screen (as well as the colour schemes, the text layout and goodness knows what else) and one of the team made ‘phone calls to the rights holders (most of whom, if I recall correctly, were ordinary people who had taken nice pictures) to obtain permission for us to use their images.  If you look at the picture above, you will see three departments that no longer exist (IR and C&E formed HMRC, MAFF became Defra) and five brands (including UKonline) that also don't exist.


Of course we carried out formal user testing for everything we did (with a specialist company, in a purpose built room with one-way glass, observers, cameras and all that kind of thing), often through multiple iterations.  The second UI change was carried out on my watch too.    I left that role - not that of Chief UI Designer - some 9 years ago.

My own, probably biased (but based on regular usage of it as a small business owner), sense is that the Gateway largely stopped evolving in about 2006.  Up until that point it had gone through rapid, iterative change - the first build was completed in just 90 days, with full scrutiny from a Programme Board consisting of three Permanent Secretaries, two CIOs and several other senior figures in government.  Ian McCartney, the Minister of the Cabinet Office (the Francis Maude of his day) told me as he signed off the funding for it that failure would be a “resignation issue.” I confirmed that he could have my head if we didn’t pull it off.  He replied “Not yours, mine!” in that slightly impenetrable Scottish accent of his.  We had a team, led by architects and experts from Microsoft, of over 40 SMEs (radical, I know).  Many of us worked ridiculous hours to pull off the first release - which we had picked for Burns Night, the 25th of January 2001.

On the night of the 24th, many of us pulled another all nighter to get it done and I came back to London from the data centre, having switched the Gateway on at around 5am - the core set of configuration data was hand carried from the pre-production machine to the production machine on a 3 1/2” floppy disc.  I don't think we could do that now, even if we could find such a disc (and a drive that supported it).  

The Programme Board met to review what we had done and, to my surprise, the security accreditation lead (what would be called a Pan-Government Accreditor now) said that he wanted to carry out some final tests before he okayed it being switched on.  I lifted my head from the table where I may have momentarily closed my eyes and said “Ummm, I turned it on at 5.”  Security, as it so often did (then and now), won - we took the Gateway off the ‘net, carried out the further tests and turned it back on a few hours later.

Over the following months we added online services from existing departments, added new departments (and even some Local Authorities), added capability (payments, secure messaging) and kept going.  We published what we were doing every month in an effort to be as transparent as possible.  We worked with other suppliers to support their efforts to integrate to the Gateway, developing (with Sun and Software AG, at their own risk and expense) a competitive product that handled the messaging integration (and worked with another supplier on an open source solution which we didn’t pull off).

We published our monthly reports online - though I think they are now lost, following what were perhaps multiple migrations of the Cabinet Office website.  Here is a page from February 2004 (the full deck is linked to here) that shows what we had got done and what our plans were:








The Gateway has long since been seen as end of life - indeed, I’ve been told several times that it has now been “deprecated” (which apparently means that the service should be avoided as it has been or is about to be superseded).  Yet it’s still here.

What is happening then?

Two years ago, in November 2011, I wrote a post about the Cabinet Office’s new approach to Identity. Perhaps the key paragraph in that post was "With the Cabinet Office getting behind the [Identity Programme] - and, by the sounds of it, resourcing it for the first time in its current incarnation - there is great potential, provided things move fast.  One of the first deliverables, then, should be the timetable for the completion of the standards, the required design and, very importantly, the proposed commercial model.”

There was talk then of HMRC putting up their business case for using the new services in April 2012.  The then development lead of Universal Credit waxed on about how he would definitely be using Identity Services when UC went live in April 2013.  Oh, the good old days.

DWP went to market for their Identity Framework in March 2012 as I noted in a post nearly a year ago. Framework contracts were awarded in November 2012.  

Nearly five Gateway development cycles later, we are yet to see the outcome of those - and there has been little in the way of update, as I said a year ago.

Things may, though, be about to change.

GDS, in a blog post earlier this month, say "In the first few months of 2014 we’ll be starting the IDA service in private beta with our identity providers, to allow users to access new HMRC and DVLA services."

Nine Gateway development cycles later, we might be about to see what the new service(s) will look like.   I am very intrigued.

Some thoughts for GDS as they hopefully enter their first year with live services:

Third Party Providers 

With the first iteration of the Gateway, we provided the capability for a 3rd party to authenticate someone and then issue them a digital certificate.  That certificate could be presented to the Gateway and then linked with your identity within government.  Certificates, at the time, were priced at £50 (by the 3rd party, not by government) because of the level of manual checking of documents that was required (they were initially available for companies only).  As long ago as 2002, I laid out my thoughts on digital certificates.

There were many technical challenges with certificates, as well as commercial ones around cost.  But one of the bigger challenges was that we still had to do the authentication work to tie the owner of the digital certificate to their government identity - it was a two step process.

With the new approach from the Cabinet Office - a significantly extended version of that early work, with multiple providers (up to eight, though not all at first, and doubtless room for more later) but the same hub concept (the Gateway is just as much a hub as an authentication engine) - the same two step process will be needed.  I will prove who I am to Experian, the Post Office, PayPal or whoever, and then government will take that information and match that identity to one inside government - and it might have to do that several times, for each of my interactions with, say, HMRC, DWP, DVLA and others.  There is still, as far as I know, no ring of trust whereby, because HMRC trusts that identity, DWP will too.  Dirty data across government - confusion over National Insurance numbers, latest addresses, initials and so on - still makes that hard, all this time later.
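
To make the two steps concrete, here is a deliberately simplified sketch in Python.  The provider name is one of those mentioned above, but every function and the matching logic are my own illustration of the problem, not how the hub or any department actually works:

    # Step 1: an external provider asserts who I am.  Step 2: government tries
    # to match that assertion to a record it already holds - and may have to
    # repeat the match for each department.  Entirely illustrative.

    def provider_assertion(provider):
        # In reality this would be a signed assertion from the identity provider.
        return {"provider": provider, "name": "J Smith", "dob": "1970-01-01"}

    def match_to_department_record(assertion, department_records):
        # Crude matching on name and date of birth - exactly the step that dirty
        # data (old addresses, mistyped NI numbers, initials) makes hard.
        for record in department_records:
            if record["name"] == assertion["name"] and record["dob"] == assertion["dob"]:
                return record
        return None

    hmrc_records = [{"name": "J Smith", "dob": "1970-01-01", "ni": "QQ123456C"}]
    dwp_records = [{"name": "John Smith", "dob": "1970-01-01", "ni": "QQ123456C"}]

    assertion = provider_assertion("Experian")
    print(match_to_department_record(assertion, hmrc_records))   # a match
    print(match_to_department_record(assertion, dwp_records))    # None - no ring of trust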

As Dawn Primarolo, then a minister overseeing the Inland Revenue, said to me, very astutely I thought, when I first presented the Gateway to her in 2001 - "But people will realise that we don't actually know very much about them.  We don't have their current address and we may have their National Insurance number stored incorrectly".  She was right of course.

Managing Live Service

The new approach does, though, increase the interactions and the necessary orchestration - the providers, the hub and the departments all need to come together.  That should work fine for initial volumes but as the stress on the system increases, it will get interesting.  Many are the sleepless nights our team had as we worked with the then Inland Revenue ahead of the peak period in January.

End to end service management with multiple providers and consumers, inside and outside of government, is very challenging.  Departments disaggregating their services as contracts expire are about to find that out; GDS will find out too.  There are many lessons to learn and, sadly, most of them are learned in the frantic action that follows a problem.

The Transaction Engine - The Forgotten Gateway

The Gateway doesn’t, though, just do the authentication of transactions. That is, you certainly use it when you sign in to fill in your tax return or your VAT return, but you also use it (probably unwittingly) when that return is sent to government.  All the more so if you are a company who uses 3rd party software to file your returns - as pretty much every company probably does now.  That bit of the Gateway is called the “Transaction Engine” and it handles millions of data submissions a year, probably tens of millions.

To replace the Gateway, the existing Authentication Engine (which we called R&E) within it must be decoupled from the Transaction Engine so that there can be authentication of submitted data via the new Identity Providers too, and then the Transaction Engine needs to be replaced.  That, too, is a complicated process - dozens of 3rd party applications know how to talk to the Gateway and will need to know how to talk to whatever replaces it (which, of course, may look nothing like the Transaction Engine and might, indeed, be individual services for each department or who knows what - though I have some thoughts on that).
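
One way to picture that decoupling - and this is a sketch of the idea only, with class and method names of my own invention rather than anything from the Gateway's actual design - is a transaction handler that takes its identity check as a pluggable dependency, so the check can be swapped without touching the submission routing:

    # Illustrative only: separating "who sent this?" from "route and acknowledge
    # this submission" so the identity check can be swapped out underneath the
    # Transaction Engine.  Names are mine, not the Gateway's.

    class IdentityCheck:
        def verify(self, credentials):
            raise NotImplementedError

    class LegacyGatewayAuth(IdentityCheck):
        def verify(self, credentials):
            return credentials.get("user_id") == "known-gateway-user"

    class NewIdentityProviderAuth(IdentityCheck):
        def verify(self, credentials):
            return credentials.get("assertion") == "signed-by-identity-provider"

    class TransactionEngine:
        def __init__(self, identity_check):
            self.identity_check = identity_check

        def submit(self, credentials, payload):
            if not self.identity_check.verify(credentials):
                return "rejected"
            # ...route the payload to the right department's back end here...
            return "accepted"

    # The submission handling stays the same while the identity check changes:
    engine = TransactionEngine(NewIdentityProviderAuth())
    print(engine.submit({"assertion": "signed-by-identity-provider"}, "<VAT100/>"))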

Delegation of Rights

Beyond that, the very tricky problem of delegation needs to be tackled.  The Gateway supports it in a relatively rudimentary way - a small business can nominate its accountant to handle PAYE and VAT, for instance.  A larger business can establish a hierarchy where Joe does PAYE, Helen does VAT and both Joe and Helen can do Corporation Tax.   But to handle something like Lasting Power of Attorney, there need to be more complex links between, say, me, my Mother and two lawyers.  Without this delegation capability - which is needed for so many transactions - the Digital by Default agenda could easily stall, handling only the simplest transactions.
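
Here is a minimal sketch of the kind of delegation described above.  The structure and names are mine, purely to illustrate the shape of the problem, not how the Gateway actually stores it:

    # Joe can file PAYE, Helen can file VAT, both can file Corporation Tax, and
    # the accountant acts for PAYE and VAT.  Structure is illustrative only.
    delegations = {
        "Acme Ltd": {
            "PAYE":            {"Joe", "The accountant"},
            "VAT":             {"Helen", "The accountant"},
            "Corporation Tax": {"Joe", "Helen"},
        }
    }

    def can_act(person, business, service):
        return person in delegations.get(business, {}).get(service, set())

    print(can_act("Joe", "Acme Ltd", "VAT"))               # False
    print(can_act("Helen", "Acme Ltd", "Corporation Tax")) # True
    # Lasting Power of Attorney needs richer links still - e.g. me, my mother
    # and two lawyers, each with different rights over the same affairs.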

Fraud Detection and Prevention

Tied in with the two step authentication process I mention above is the need to deal with the inevitable fraud risk. Whilst Tax Credits was, as I said, briefly the most popular online service, it was withdrawn when substantial fraud was detected (actually, the Tax Credits service went online without any requirement for authentication - something that we fervently disagreed with but that was only supposed to be a temporary step.  Perhaps in another post I will take on the topic of Joint and Several Liability, though I am hugely reluctant to go back there).  

In the USA, there is massive and persistent Tax Return fraud - Business Week recently put the figure at $4 billion in 2011 and forecast that it would rise to $21 billion by 2017.  That looks to be the result of simple identity fraud, just as Tax Credits experienced.  Most tax returns in the USA are filed online, many using packages such as TurboTax.   Tax rebates are far more prevalent in the USA than they are in the UK, but once the identification process includes benefits, change of address and so on, it will become a natural target.  Paul Clarke raised this issue, and some others, in an excellent recent post.

The two step process will need to guard against any repeat of the US experience in the UK - and passing the liability to the authentication providers would doubtless quickly lead to them disengaging from the business (and may not even be possible, given that government carries out the second step, which ties the person presented to a government identity record, or to a set of them).  

We included a postal loop from day one with the Gateway, aimed at providing some additional security (which could, of course, be compromised if someone intercepted the post); removing that step, as a recent GDS blog post suggests will happen in the new process (Digital by Default, after all), requires some additional thinking.

User Led

Given that "User Led" is the GDS mantra, I have little fear that users won't be at the heart of what they do next, but it is a tricky problem this time.  For the first time, users will be confronted with non-government providers of identity (our Gateway integration with 3rd parties still resulted in a second step directly with government).  How will they know who to choose?  What happens if they don't like who they chose and want to move to someone else? How will they know that the service that they are using is legitimate - there will be many opportunities for phishing attacks and spoof websites? How will they know that the service they are using is secure - it is one thing to give government your data, another, perhaps, to give that data to a credit agency?   Will these services be able to accumulate data about your interactions with Government?  How will third party services be audited to ensure that they are keeping data secure?

Moving On From Gateway

There are more than 10 million accounts, I believe, on the Gateway today.  Transitioning to new providers will require a careful, user-benefit-led approach, so that everyone understands why the new service is better (for everyone) than the old one.   After all, for 13 years, people have been happily filing their tax returns and companies have been sending in PAYE and VAT without being aware of any problems.  It would help, I'm sure, if existing customers didn't even realise things had changed - until they came to add new services that are only available with the coming solutions and were required to provide more information before they could access them; I think most would see that as a fair exchange.

Here's To The Future then

Our dream, way back on Burns Night in 2001, was that we would be able to break up the Gateway into pieces and create a federated identity architecture where there would be lots of players, all bringing different business models and capabilities.  We wanted to be free of some of the restrictions that we had to work within - complex usernames and even more complicated passwords - to work with a fully online model, to bring in third party identification services, to join up services so that a single interaction with a user would result in multiple interactions with government departments and, as our team strap line said back then, to “deliver the technology to transform government”.

Thirteen years on there have been some hits and some misses with that dream - inevitably we set our sights as high as we could and fell short.  I fully expect the Gateway to be around for another four or five years as it will take time for anyone to trust the new capabilities, for 3rd parties to migrate their software and for key areas like delegation to be developed.  It’s a shame that we have gone through a period of some 8 years when little has been done to improve how citizens identify themselves to government; there was so much that could have been done.

I’m looking forward to seeing what new capabilities are unveiled sometime in the next few months - perhaps I will be invited to be a user in the “private beta” so that I can see it a bit quicker.  Perhaps, though, I shouldn’t hold my breath.