Monday, January 27, 2014

Government Draws The Line

On Friday, the Cabinet Office announced (or re-announced, according to Margaret Hodge) that:
  • no IT contract will be allowed over £100 million in value – unless there is an exceptional reason to do so, smaller contracts mean competition from the widest possible range of suppliers
  • companies with a contract for service provision will not be allowed to provide system integration in the same part of government
  • there will be no automatic contract extensions; the government won’t extend existing contracts unless there is a compelling case
  • new hosting contracts will not last for more than 2 years

I was intrigued by the lower case. Almost like I wrote the press release.
These are the new "red lines" then - I don't think these are re-announcements, they are a firming up of previous guidance.  When the coalition came to power, there was a presumption against projects over £100m in value; now there appears to be a hard limit (albeit with the caveat around exceptional reasons, ditto with extensions where there is a "compelling" case).

On the £100m limit:

There may be a perverse consequence here.  Contracts will be split up and/or made shorter to fit within the limit; or contracts may be undervalued with the rest coming in change controls.  Transitions may occur more regularly, increasing costs over the long term.  Integration of the various suppliers may also cost more.  For 20 years, government has bought its IT in huge, single prime (and occasionally double prime) silos.  That is going to be a hard, but necessary, habit to break.

£100m is, of course, still a lot of money.   Suppliers bidding for £100m contracts are likely the same as those bidding for £500m contracts; they are most likely not the same as those bidding for £1m or £5m contracts.

Understanding what the new contract landscape looks like will require a slightly different approach to transparency - instead of reporting individual spends or contracts, it would give a better view if the aggregate set of contracts needed to achieve a given outcome were reported.  So if HMRC are building a new Import/Export system (for instance), we should be able to visit a site and see the total set of contracts that are connected with that service (including the amounts, durations and suppliers).
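
To make that concrete, here is a purely illustrative sketch of what an outcome-level transparency record might look like - every field name, supplier and figure below is invented for the example, not drawn from any real contract data:

```python
# Hypothetical outcome-level record: one service outcome, with every contract
# that contributes to it listed alongside supplier, amount and duration.
# All values are invented for illustration.
import_export_service = {
    "outcome": "HMRC Import/Export system",
    "contracts": [
        {"supplier": "Supplier A", "scope": "hosting",     "value_gbp": 8_000_000,  "term_months": 24},
        {"supplier": "Supplier B", "scope": "development", "value_gbp": 15_000_000, "term_months": 18},
        {"supplier": "Supplier C", "scope": "integration", "value_gbp": 5_000_000,  "term_months": 24},
    ],
}

# The aggregate view - total committed spend for the outcome - falls out trivially.
total = sum(c["value_gbp"] for c in import_export_service["contracts"])
print(f"Total committed for this outcome: £{total:,}")
```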

On the "service providers" will not be allowed to carry out "system integration" point:
I'm not sure that I follow this but I take it to mean that competition will be forced into the process so that, as in my point above about disaggregated contracts, suppliers will be prevented from winning multiple lots (particularly where hardware and software are provided by the same company).  That, in theory, has the most consequence for companies like Fujitsu and HP, who typically provide their own servers, desktops or laptops when taking on other responsibilities in an outsource deal.

And no more extensions:

Assuming that there isn't a compelling reason for extension, the contract term is the contract term.  If that rule is going to be rigorously applied to all existing contracts, there are some departments in trouble already who have run out of time for a reprocurement or who will be unable to attract any meaningful competition into such a procurement.  Transparency, again, can help here - which contracts are coming up to their expiry point (let's look ahead 24 months to start with) and what is happening to each of them (along with what actually happened when push came to shove).  That would also help suppliers, particularly small ones, understand the pipeline.
On limiting hosting contracts to 2 years:
That's consistent with the G-Cloud contract term (notwithstanding that some suppliers wrote to GDS last week asking for the term to be extended to 3 years).  But it's also unproven - it's one thing to "copy and paste" a dozen virtual machines from one data centre to another; it's quite another to shift a petabyte of data or a set of load-balanced, firewalled, well-routed network connections.  Government is going to have to practise this - so far, moves of hosting providers have taken a year or more and cost millions (without delivering any tangible business benefit, especially given the necessary freezes either side of the move).  It also means trouble for some of the legacy systems that are fragile and hard to move.  The Crown Hosting Service could, at least, limit moves of those kinds of systems to a single transition to their facilities - that would be a big help.
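
To put the data-shifting point in some context, a rough back-of-the-envelope calculation - assuming, purely for illustration, a dedicated 1 Gbps link running at 80% sustained utilisation, which is itself optimistic for many departments:

```python
# Rough estimate of how long it takes to copy a petabyte between data centres.
# The link speed and utilisation are illustrative assumptions, not figures
# from any real department or migration.
petabyte_bits = 1e15 * 8        # 1 PB expressed in bits
effective_bps = 1e9 * 0.8       # 1 Gbps link at 80% sustained throughput

seconds = petabyte_bits / effective_bps
print(f"{seconds / 86400:.0f} days")  # roughly 116 days of continuous transfer
```

Even before the freezes on either side of the move are added, the raw copy alone is a four-month job.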

Friday, January 24, 2014

Government Gateway - Teenage Angst

Tomorrow, January 25th, the Government Gateway will be 13.  I’m still, to be honest, slightly surprised (though pleased) that the Gateway continues to be around - after all, in Internet time, things come and go in far shorter periods than that.  In the time that we have had the Gateway, we rebuilt UKonline.gov.uk with three different suppliers, launched direct.gov.uk and replatformed it some years later, then closed that down and replaced it with gov.uk which has absorbed the vast bulk of central government’s websites and has probably had 1,000 or more iterations since launch.  And yet the Gateway endures.


In 13 years, the Gateway has, astonishingly, had precisely two user interface designs.  In the first, I personally picked the images that we used on each screen (as well as the colour schemes, the text layout and goodness knows what else) and one of the team made ‘phone calls to the rights holders (most of whom, if I recall correctly, were ordinary people who had taken nice pictures) to obtain permission for us to use their images.  If you look at the picture above, you will see three departments that no longer exist (IR and C&E formed HMRC, MAFF became Defra) and five brands (including UKonline) that also don't exist.


Of course we carried out formal user testing for everything we did (with a specialist company, in a purpose built room with one-way glass, observers, cameras and all that kind of thing), often through multiple iterations.  The second UI change was carried out on my watch too.    I left that role - not that of Chief UI Designer - some 9 years ago.

My own, probably biased (but based on regular usage of it as a small business owner), sense is that the Gateway largely stopped evolving in about 2006.  Up until that point it had gone through rapid, iterative change - the first build was completed in just 90 days, with full scrutiny from a Programme Board consisting of three Permanent Secretaries, two CIOs and several other senior figures in government.  Ian McCartney, the Minister of the Cabinet Office (the Francis Maude of his day) told me as he signed off the funding for it that failure would be a “resignation issue.” I confirmed that he could have my head if we didn’t pull it off.  He replied “Not yours, mine!” in that slightly impenetrable Scottish accent of his.  We had a team, led by architects and experts from Microsoft, of over 40 SMEs (radical, I know).  Many of us worked ridiculous hours to pull off the first release - which we had picked for Burns Night, the 25th of January 2001.

On the night of the 24th, many of us pulled another all-nighter to get it done and I came back to London from the data centre, having switched the Gateway on at around 5am - the core set of configuration data was hand-carried from the pre-production machine to the production machine on a 3 1/2” floppy disc.  I don't think we could do that now, even if we could find such a disc (and a drive that supported it).

The Programme Board met to review what we had done and, to my surprise, the security accreditation lead (what would be called a Pan-Government Accreditor now) said that he wanted to carry out some final tests before he okayed it being switched on.  I lifted my head from the table where I may have momentarily closed my eyes and said “Ummm, I turned it on at 5.”  Security, as it so often did (then and now), won - we took the Gateway off the ‘net, carried out the further tests and turned it back on a few hours later.

Over the following months we added online services from existing departments, added new departments (and even some Local Authorities), added capability (payments, secure messaging) and kept going.  We published what we were doing every month in an effort to be as transparent as possible.  We worked with other suppliers to support their efforts to integrate to the Gateway, developing (with Sun and Software AG, at their own risk and expense) a competitive product that handled the messaging integration (and worked with another supplier on an open source solution which we didn’t pull off).

We published our monthly reports online - though I think they are now lost, following what were perhaps multiple migrations of the Cabinet Office website.  Here is a page from February 2004 (the full deck is linked to here) that shows what we had got done and what our plans were:

The Gateway has long since been seen as end of life - indeed, I’ve been told several times that it has now been “deprecated” (which apparently means that the service should be avoided as it has been or is about to be superseded).  Yet it’s still here.

What is happening then?

Two years ago, in November 2011, I wrote a post about the Cabinet Office’s new approach to Identity. Perhaps the key paragraph in that post was "With the Cabinet Office getting behind the [Identity Programme] - and, by the sounds of it, resourcing it for the first time in its current incarnation - there is great potential, provided things move fast.  One of the first deliverables, then, should be the timetable for the completion of the standards, the required design and, very importantly, the proposed commercial model.”

There was talk then of HMRC putting up their business case for using the new services in April 2012.  The then development lead of Universal Credit waxed on about how he would definitely be using Identity Services when UC went live in April 2013.  Oh, the good old days.

DWP went to market for their Identity Framework in March 2012 as I noted in a post nearly a year ago. Framework contracts were awarded in November 2012.  

Nearly five Gateway development cycles later, we are yet to see the outcome of those - and there has been little in the way of update, as I said a year ago.

Things may, though, be about to change.

GDS, in a blog post earlier this month, say "In the first few months of 2014 we’ll be starting the IDA service in private beta with our identity providers, to allow users to access new HMRC and DVLA services."

Nine gateway development cycles later, we might be about to see what the new service(s) will look like.   I am very intrigued.

Some thoughts for GDS as they hopefully enter their first year with live services:

Third Party Providers 

With the first iteration of the Gateway, we provided the capability for a 3rd party to authenticate someone and then issue them a digital certificate.  That certificate could be presented to the Gateway and then linked with your identity within government.  Certificates, at the time, were priced at £50 (by the 3rd party, not by government) because of the level of manual checking of documents that was required (they were initially available for companies only).  As long ago as 2002, I laid out my thoughts on digital certificates.

There were many technical challenges with certificates, as well as commercial ones around cost.  But one of the bigger challenges was that we still had to do the authentication work to tie the owner of the digital certificate to their government identity - it was a two step process.

With the new approach from the Cabinet Office - a significantly extended version of that early work, with multiple providers (up to 8, though not all initially, and there is doubtless room for more later) but the same hub concept (the Gateway is just as much a hub as an authentication engine) - the same two step process will be needed.  I will prove who I am to Experian, the Post Office, PayPal or whoever, and then government will take that information and match that identity to one inside government - and it might have to do that several times for each of my interactions with, say, HMRC, DWP, DVLA and others.  There is still, as far as I know, no ring of trust whereby, because HMRC trusts that identity, DWP will too.  Dirty data across government - with confusion over National Insurance numbers, latest addresses, initials and so on - makes that hard, all this time later.
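
A minimal sketch of that second step - matching an identity asserted by an external provider to a department's own record.  Everything here (the data, the fields, the matching rule) is hypothetical; the point is simply how quickly dirty data defeats naive matching:

```python
# Hypothetical illustration of the departmental matching step.
asserted = {"name": "J Smith", "dob": "1970-01-25", "nino": "QQ123456C"}

department_records = [
    {"name": "J Smith",   "dob": "1970-01-25", "nino": "QQ123456C",     "address": "old address"},
    {"name": "Joe Smith", "dob": "1970-01-25", "nino": "QQ 12 34 56 C", "address": "new address"},
]

def match(asserted, records):
    # Naive matching on NI number and date of birth. Formatting differences,
    # initials versus full names and stale addresses mean the same person can
    # appear more than once - exactly the ambiguity described above.
    hits = [r for r in records
            if r["nino"].replace(" ", "") == asserted["nino"].replace(" ", "")
            and r["dob"] == asserted["dob"]]
    return hits[0] if len(hits) == 1 else None  # ambiguity means manual resolution

print(match(asserted, department_records))  # None: two candidate records, no safe match
```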

As Dawn Primarolo, then a minister overseeing the Inland Revenue, said to me, very astutely I thought, when I first presented the Gateway to her in 2001 - "But people will realise that we don't actually know very much about them.  We don't have their current address and we may have their National Insurance number stored incorrectly".  She was right of course.

Managing Live Service

The new approach does, though, increase the interactions and the necessary orchestration - the providers, the hub and the departments all need to come together.  That should work fine for initial volumes but as the stress on the system increases, it will get interesting.  Many are the sleepless nights our team had as we worked with the then Inland Revenue ahead of the peak period in January.

End to end service management with multiple providers and consumers, inside and outside of government, is very challenging.  Departments disaggregating their services as contracts expire are about to find that out; GDS will find out too.  There are many lessons to learn and, sadly, most of them are learned in the frantic action that follows a problem.

The Transaction Engine - The Forgotten Gateway

The Gateway doesn’t, though, just handle authentication.  That is, you certainly use it when you sign in to fill in your tax return or your VAT return, but you also use it (probably unwittingly) when that return is sent to government.  All the more so if you are a company that uses 3rd party software to file your returns - as pretty much every company probably does now.  That bit of the Gateway is called the “Transaction Engine” and it handles millions of data submissions a year, probably tens of millions.

To replace the Gateway, the existing Authentication Engine (which we called R&E) within it must be decoupled from the Transaction Engine so that there can be authentication of submitted data via the new Identity Providers too, and then the Transaction Engine needs to be replaced.  That, too, is a complicated process - dozens of 3rd party applications know how to talk to the Gateway and will need to know how to talk to whatever replaces it (which, of course, may look nothing like the Transaction Engine and might, indeed, be individual services for each department or who knows what - though I have some thoughts on that).
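
A sketch of what that decoupling might look like - hypothetical interfaces only, not the Gateway's actual design nor the design of whatever replaces it:

```python
# Illustrative only: authentication is separated from submission handling,
# so either half can be replaced independently.
class IdentityProvider:
    """Stands in for any of the new external identity providers."""
    def authenticate(self, credentials):
        return {"subject": "business-1234", "assurance": "verified"}

class TransactionEngine:
    """Stands in for whatever replaces the Transaction Engine: it accepts a
    submission only when accompanied by a valid identity assertion."""
    def submit(self, identity, document, destination):
        if identity.get("assurance") != "verified":
            raise PermissionError("submission rejected: identity not verified")
        return {"routed_to": destination, "receipt": "ack-0001"}

identity = IdentityProvider().authenticate({"user": "...", "password": "..."})
print(TransactionEngine().submit(identity, document="<VAT100/>", destination="HMRC"))
```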

Delegation of Rights

Beyond that, the very tricky problem of delegation needs to be tackled.  The Gateway supports it in a relatively rudimentary way - a small business can nominate its accountant to handle PAYE and VAT, for instance.  A larger business can establish a hierarchy where Joe does PAYE and Helen does VAT and Joe and Helen can do Corporation Tax.   But to handle something like Lasting Power of Attorney, there need to be more complex links between, say, me, my Mother and two lawyers.  Without this delegation capability - which is needed for so many transactions - the Digital by Default agenda could easily stall, handling only the simplest capabilities.
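
A sketch of why delegation gets hard quickly - the data model below is invented for illustration and deliberately flat, which is roughly the level of sophistication the simple cases need and the Lasting Power of Attorney case outgrows:

```python
# Hypothetical, flat delegation table: (agent, principal) -> services they may file.
delegations = {
    ("Joe",   "Acme Ltd"): {"PAYE", "Corporation Tax"},
    ("Helen", "Acme Ltd"): {"VAT", "Corporation Tax"},
    ("Smith & Co Accountants", "Corner Shop Ltd"): {"PAYE", "VAT"},
}

def can_act_for(agent, principal, service):
    # A flat lookup handles "Joe does PAYE, Helen does VAT, both do Corporation
    # Tax", but cannot express chains like me -> my mother -> two lawyers.
    return service in delegations.get((agent, principal), set())

print(can_act_for("Joe", "Acme Ltd", "PAYE"))    # True
print(can_act_for("Helen", "Acme Ltd", "PAYE"))  # False
```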

Fraud Detection and Prevention

Tied in with the two step authentication process I mention above is the need to deal with the inevitable fraud risk. Whilst Tax Credits was, as I said, briefly the most popular online service, it was withdrawn when substantial fraud was detected (actually, the Tax Credits service went online without any requirement for authentication - something that we fervently disagreed with but that was only supposed to be a temporary step.  Perhaps in another post I will take on the topic of Joint and Several Liability, though I am hugely reluctant to go back there).  

In the USA, there is massive and persistent Tax Return fraud - Business Week recently put the figure at $4 billion in 2011 and forecast that it would rise to $21 billion by 2017.  That looks to be the result of simple identity fraud, just as Tax Credits experienced.  Most tax returns in the USA are filed online, many using packages such as TurboTax.   Tax rebates are far more prevalent in the USA than they are in the UK, but once the identification process includes benefits, change of address and so on, it will become a natural target.  Paul Clarke raised this issue, and some others, in an excellent recent post.

The two step process will need to guard against any repeat of the US experience in the UK - and passing those liabilities to the authentication providers would doubtless quickly lead to them disengaging from the business (and may not even be possible, given that government carries out the second step, which ties the person presented to a government identity record, or to a set of them).

We included a postal loop from day one with the Gateway, aimed at providing some additional security (which could, of course, be compromised if someone intercepted the post).  Removing that step in the new process, as a recent GDS blog post claims will happen (Digital by Default, after all), requires some additional thinking.

User Led

Given that "User Led" is the GDS mantra, I have little fear that users won't be at the heart of what they do next, but it is a tricky problem this time.  For the first time, users will be confronted with non-government providers of identity (our Gateway integration with 3rd parties still resulted in a second step directly with government).  How will they know who to choose?  What happens if they don't like who they chose and want to move to someone else?  How will they know that the service they are using is legitimate, given the many opportunities for phishing attacks and spoof websites?  How will they know that the service is secure - it is one thing to give government your data, another, perhaps, to give that data to a credit agency?  Will these services be able to accumulate data about your interactions with government?  How will third party services be audited to ensure that they are keeping data secure?

Moving On From Gateway

There are more than 10 million accounts, I believe, on the Gateway today.  Transitioning to new providers will require a careful, user-benefit-led approach so that everyone understands why the new service is better (for everyone) than the old one.   After all, for 13 years, people have been happily filing their tax returns and companies have been sending in PAYE and VAT without being aware of any problems.  It would help, I'm sure, if the existing customers didn't even realise things had changed - until they came to add new services that are only available with the coming solutions and were required to provide more information before they could access them; I think most would see that as a fair exchange.

Here's To The Future then

Our dream, way back on Burns Night in 2001, was that we would be able to break up the Gateway into pieces and create a federated identity architecture where there would be lots of players, all bringing different business models and capabilities.  We wanted to be free of some of the restrictions that we had to work with - complex usernames and even more complicated passwords - to work with an online model, to bring in third party identification services, to join up services so that a single interaction with a user would result in multiple interactions with government departments and, as our team strap line said back then, to “deliver the technology to transform government”.

Thirteen years on there have been some hits and some misses with that dream - inevitably we set our sights as high as we could and fell short.  I fully expect the Gateway to be around for another four or five years as it will take time for anyone to trust the new capabilities, for 3rd parties to migrate their software and for key areas like delegation to be developed.  It’s a shame that we have gone through a period of some 8 years when little has been done to improve how citizens identify themselves to government; there was so much that could have been done.

I’m looking forward to seeing what new capabilities are unveiled sometime in the next few months - perhaps I will be invited to be a user in the “private beta” so that I can see it a bit quicker.  Perhaps, though, I shouldn’t hold my breath.

Monday, January 20, 2014

Am I Being Official? Or Just Too Sensitive? Changes in Protective Marking.

From April 2nd - no fools these folks - government’s approach to security classifications will change.  For what seems like decades, the cognoscenti have bandied around acronyms like IL2 and IL3, with real insiders going as far as to talk about IL2-2-4 and IL3-3-4 (the three digits being the separate impact levels for confidentiality, integrity and availability).  There are at least seven levels of classification (IL0 through IL6), and some might argue that there are even eight, with “nuclear” trumping all else; there could be more still if you accept that each of the three numbers in something like IL2-2-4 could, in theory, be changed separately.  No more.  We venture into the next financial year with a streamlined, simplified structure of only three classifications. THREE!

Or do we?

The aim was to make things easier - strip away the bureaucracy and process that had grown up around protective marking, stop people over-classifying data and so making it harder to share (both inside and outside of government), and introduce a set of controls that, as well as the technical security controls, actually ask something of the user - that is, ask them to take care of the data entrusted to them.

In the new approach, some 96% of data falls into a new category, called “OFFICIAL” - I’m not shouting, they are. A further 2% would be labelled as “SECRET” and the remainder “TOP SECRET”.  Those familiar with the old approach will quickly see that OFFICIAL seems to encompass everything from IL0 to IL4 - from open Internet to Confidential (I’m not going to keep shouting, promise), though CESG and the Government Security Secretariat have naturally resisted mapping old to new.

That really is a quite stunning change.  Or it could be.

Such a radical change isn’t easy to pull off - the fact that at least two years of work have gone on behind the scenes to get it this far suggests as much.  Inevitably, there have been some fudges along the way.  Official isn’t really a single broad classification.  It also includes “Official Sensitive”, which is data that only those who “need to know” should be able to access.   There are no additional technical controls placed on that data - that is, you don’t have to put it behind yet another firewall - only procedural controls (which might range, I'm guessing, from checking distribution lists to filters on outgoing email).

There is, though, another classification in Official which doesn’t yet, to my knowledge, have a name.   Some data that used to be Confidential will probably fall into this section.  So perhaps we can call it Official Confidential? Ok, just kidding.

So what was going to be a streamlining to three simple tiers, where almost everyone you’ve ever met in government would spend most of their working lives creating and reading only Official data, is now looking like five tiers.  Still an improvement, but not quite as sweeping as hoped for.

The more interesting challenges are probably yet to come - and will be seen in the wild only after April.  They include:

- Can Central Government now buy an off-the-shelf device (phone, laptop, tablet etc) and turn on all of the “security widgets” that are in the baseline operating system and meet the requirements of Official?

- Can Central Government adopt a cloud service more easily? The Cloud Security Principles would suggest not.

- If you need to be cleared to “SC” to access a departmental e-mail system which operated at Restricted (IL3) in the past and if “SC” allows you occasional access to Secret information, what is the new clearance level?

- If emails that were marked Restricted could never be forwarded outside of the government’s own network (the GSI), what odds would you place on very large amounts of data being classified as “Official Sensitive” and a procedural restriction being applied that prevents that data traversing the Internet?

- If, as anecdotal evidence suggests, an IL3 solution costs roughly 25% more than an IL2 solution, will IT costs automatically fall or will inertia mean costs stay the same as solutions continue to be specified exactly as before?

- Will the use of networks within government quickly fall to the lowest common denominator - the Internet with some add-ons - on the basis that there needs to be some security but not as much as had been required before?

- If the entry point to an accreditation process was a comprehensive and well thought through “RMADS” (Risk Management and Accreditation Document Set), largely the domain of experts who handed their secrets down through mysterious writings and hidden symbols, what takes its place under the new scheme - and who will be able to produce it?

It seems most likely that the changes to protective marking will result in little change over the next year, or even two years.  Changes to existing contracts will take too long to process for too little return. New contracts will be framed in the new terms but the biggest contracts, with the potential for the largest effects, are still some way from expiry.  And the Cloud Security Principles will need much rework to encourage departments to take advantage of what is already routine for corporations. 

If the market is going to rise to the challenge of meeting demand - if we are to see commodity products made available at low cost that still meet government requirements - then the requirements need to be spelled out.  The new markings launch in just over two months.  What is the market supposed to provide come 2nd April?

None of this is aimed at taking away what has been achieved with the thinking and the policy work to date - it’s aimed at calling out just how hard it is going to be to change an approach that is as much part of daily life in HM Government as waking up, getting dressed and coming to work. 

Friday, January 17, 2014

Adequately Appropriate? Acceptably Appropriate? Thoughts on Cloud Security Principles

It was with some trepidation that, over the Christmas break, I clicked on links to the newly published Government Cloud Security Principles.  Trepidation because my contact with such principles goes back a long way and, in government, principles tend to hide more than they reveal. 

Some three years ago whilst looking at G-Cloud in its early days, I proposed that, as part of the procurement process, we publish a detailed set of guidelines that explained not only what was meant by IL0, IL2 and IL3 (I skipped IL1 on the basis that in over a decade, I have never heard anyone use it) but also what would be required if, as a vendor, you were trying to achieve any of those accreditation levels.  My thinking was if government was truly going to encourage new players to get involved, few would commit to building out infrastructure if there wasn’t specific guidance on what they would need to do.

I produced a short document - some 4 pages - which I thought would act as a starter.  I’ve published it on Scribd so that you can see how far I got (which wasn’t all that far, I admit - I'd say it's a beta at best).   Some weeks later, after chasing to see if it could be developed further, in partnership with some new suppliers so that we could test what they needed to know, I was told that such a document would not be viable as, and I quote, “it would encourage a tick box attitude to security compliance”.  Something in me tells me that would be no bad thing - definitely better than a no box attitude, no?

So here we are, in early 2014, and someone else - perhaps some brave person in Cabinet Office - has had another go.  Is this just a tick box exercise too?  Or would I find seriously useful principles that would help both client and supplier - the users - achieve what they both need?

Sadly, the answer is that these principles do not help.  Perhaps in a desire to ensure that there was definitely no encouragement of a tick box approach, they say as little as possible using words that are unqualified and without any context or examples that would help.  It strikes me as unlikely that any security experts in departments will find a need to refer to them and that any supplier seeking some clues as to the fastest route to an accredited service will linger on them no more than a moment.

For instance:

- The word “adequate” or “adequately” is used four times.  As in "The confidentiality and integrity of data should be adequately protected whilst in transit”.  Can’t disagree with that.  Though, of course, I don't know what it means in delivery terms.

- “Appropriate” crops up three times.  As in "All external or less trusted interfaces of the service should be identified and have appropriate protections to defend against attacks through them”.  Excellent advice, everything should always be appropriately protected.  No more and no less.  But how exactly?

- Or how about this: "The service should be developed in a secure fashion and should evolve to mitigate new threats as they emerge”.  No one would want you to develop an insecure service but what exactly is meant by this?

- Or this one: "The service provider should ensure that its supply chain satisfactorily supports all of the security principles that the service claims to deliver”; so now the service provider needs to decide what is meant by the principles and ensure that anyone it uses also complies with their very vagueness.

Of course, there’s a rider at the front of the document, which says:

This document describes principles which should be considered when evaluating the security features of cloud services. Some cloud services will provide all of the security principles, while others only a subset. It is for the consumer of the service to decide which of the security principles are important to them in the context of how they expect to use the service. 

So not only do we have to decide what is adequate and appropriate, we have to decide which of the principles we need to adopt adequately and appropriately so that we have adequate and appropriate security for our service, lest it be seen as inadequate and/or inappropriate perhaps.  How appropriate.

This is hardly academic.  If you want commodity services, then you need to provide commodity standards and guidelines.  Leaving vast areas open to interpretation only furthers the challenges for new suppliers (and entrenches the capabilities of those who already supply) and means that customers are unable to evaluate like for like without detailed (and likely continuing) reviews.

To give an example, I recently sat with great people from three different government departments to look at the use of mobile devices.   One was using WiFi freely throughout their building (connected to ADSL lines) to allow staff with department-issued iPads and Windows tablets to access the Internet.  Another had decided that WiFi was inherently untrustworthy and so insisted that staff use the 3G or 4G network, even issuing staff with Windows tablets a dongle that they needed to carry around (and pair via Bluetooth - which is, I assume, for them more secure than WiFi) to access the Internet.

If three departments can’t agree on how to configure an iPad so that they can read their email (this wasn't about using applications beyond Office apps), what hope is there for a supplier offering such a service?  Where is the commodity aspect that is necessary to allow costs to be driven down? And how would a new supplier, with a product ready to launch, know how it would be judged by the security experts so that it could be sold to the public sector?

Principles such as these encourage - perhaps even direct - departments to come to their own conclusions about what they need and how they want it configured, just as they have done for the last three decades and more. 

With today’s protective markings - IL0, IL2 and IL3 etc - that is one thing.  With tomorrow’s “OFFICIAL”, there is a real need for absolute clarity on what a supplier needs to do and that can only come from the customer being clear about what they will and won’t accept - it cannot be that one department’s OFFICIAL is another department’s UNACCEPTABLE. 

Fingers crossed that this pre-Alpha document is allowed to iterate and evolve into something that is useful.