Monday, July 03, 2017

GDS Isn't Working (Part 2 - The Content Mystery)

In Martha Lane Fox's 2010 report - the report that, in effect, led to the creation of GDS and set out its mission - there was a series of recommendations. These seem like a reasonable place to start in assessing GDS' delivery track record. The recommendations were:

1. Directgov should be the default platform for information and transactional services, enabling all government transactions to be carried out via digital channels by 2015 ... must focus on creating high-quality user-friendly transactions ... scaling back on non-core activities.

2. Realign all Government Delivery under a single web domain name ... accelerate the move to shared web services.

3. Learn from what has been proven to work well elsewhere on the web ... focus on user-driven and transparent ... implement a kill or cure policy to reduce poorly performing content.

4. Mandate the creation of APIs to allow 3rd parties to present content and transactions.  Shift from "public services all in one place" to government services "wherever you are"

5. Establish digital SWAT teams ... work on flagship channel shift transactions

Not surprisingly, I agreed entirely with this list at the time - nearly a decade earlier I'd produced the picture below to represent the e-Delivery team's (eDt) e-government vision; eDt was part of the Office of the e-Envoy when the late Andrew Pinder was in charge. I think it captures Martha's recommendations in a page:


In slide format, the picture evolved to this:



Now, nearly 7 years after Martha's report, we have a new flagship website. Whilst Martha's report was strong on making use of the direct.gov brand name, given the investment in it over the previous 6 or so years, a decision was made to use a different brand - you'll see that we had suggested that very name as a possibility in the 2001 picture above; it's in the very top left.

Here are 3 pictures showing the journey we have made over the last 13 years:

1) Direct.gov's home page in May 2004


2) The same site, in January 2007

3) Gov.uk in June 2017



Thirteen years of user needs, iteration and at least three different content management tools - and, branding and size of search bar aside, do you notice any major difference? Nope, me neither.

Interestingly, both encouragingly (because admitting the problem is the first step to solving it) and depressingly (because it's not as if there haven't been plenty of opportunities before), GDS - after I'd written this post but before I'd published it - have noticed the problem too and seem, at last, to be taking recommendation 3, "kill or cure", to heart.

When we envisaged the second iteration of UKonline.gov.uk (the first was delivered under a contract let to BT and run by CITU, before the OeE really existed, though it did go live on OeE's watch), we saw it as a way to join up important content across government, creating a veneer that would buy time for the real engineering join-up to take place behind the scenes - something that would result in joined up transactions and a truly citizen-centred approach to government.

Life events - important interactions with government - were synthesized from across all departments and brought together, by skilled content writers, in a way that meant the user didn't need to traverse multiple government websites - the aim was to give them everything in one place.  We (OeE as a whole) continued that approach through successive iterations of UKonline and on into its successor, direct.gov.uk (which started life as the Online Government Store, or OGS - a shopfront where all of government content could be accessed).

Thirteen years after the launch of direct.gov.uk, it looks like there have been successive iterations of that approach along with a wholesale migration of much of government's content to a single platform.  But there hasn't been any of the real heavy lifting done to join up the content and the transactions.  This is shown by the very existence of all of those other government websites, albeit as sub-domains on the same platform as gov.uk.  That wasn't the vision that we were after and, based on the recent GDS blog post, it seems not to be the one that GDS are after either.   But we had around 7 years before GDS and we've had nearly 7 years since.  So clearly something isn't working.

My guess is that the lessons that we learned from 2002-2010 have been learned again from 2010-2017. Sure, some new lessons will have been learned - many of them, I suspect, technology and methodology related - but they will be largely the same. Despite everything, it all looks the same; poke behind the front page and all that's revealed is more design change - bigger fonts, simpler questions and cleaner lines.

A little learning is a dangerous thing;
drink deep, or taste not the Pierian spring:
there shallow draughts intoxicate the brain,
and drinking largely sobers us again.

The creation of gov.uk has been a massive job - huge amounts of content have been moved, sorted, consolidated, re-written and, doubtless, re-written again. It all feels like marginal change though - more of the same. Heavy lifting, yes, but incremental change, with some big parts still to move, such as much of HMRC, and still no real home or consistency for business-related content.

The real mystery, though, is where are the transactions? The new ones, I mean, not the ones that were online a decade ago.
Looking back at Martha's recommendations:

(1) Single platform and transactions - at least partly done, but transactions have advanced little in a decade.

(2) Single domain - looks initially to be a success (and I do not underestimate the huge effort it has taken, and continues to take), but there isn't much else in the way of shared web services (I'll be coming on to Verify and other common platform technologies soon).

(3) User driven and transparent / Kill or cure - I'm going to score this as a strong effort, but not nearly enough of an advance on what was done before. We have a huge amount of content piled on a single technology platform. Disentangling it and ensuring that there's only one place to find the most relevant content on any given topic is not well advanced. If you're a business, things are even more confusing. And if you're a sole trader, for instance, who hops between individual and business content, you're likely more confused than ever.

(4) APIs - beyond what was done years ago, I don't see much new.  I would love to be pointed at examples where I'm wrong here as I think this is a crucial part of the future mission and it would be good to see successes that I've missed.

(5) Flagship transactions - I'm not seeing them. The tax disc is a great example of a transaction that was started years ago and has been successively iterated, and I don't want to undersell the monumental shift that getting rid of the physical disc represented, but it's an outlier. Where are the others, the ones that weren't there before 2010?

The critical recommendations in Martha's report - the ones about flagship channel shift transactions, creating APIs (other than in HMRC, where most of the work was completed in 2000-2004) and "government services wherever you are" - are still adrift.

Martha's goal of "enabling all government transactions to be carried out via digital channels by 2015" seems as far away as it was when, in 2001, the then Prime Minister, Tony Blair, exhorted us to put joined up, citizen-focused services online by the end of 2005.

The real mystery is why we are tinkering with content instead of confronting the really hard stuff, the transactions. As I said in 2012:
GDS' most public delivery is "just another website” - those who know (and care) about these things think that it might be one of the sexiest and best websites ever developed, certainly in the government world.  But it isn't Facebook, it isn't iTunes, it isn't Pirate Bay.  It's a government website; perhaps “the” government website. Once you've packaged a lot of content, made wonderful navigation, transformed search, you end up with the place where government spends the real money - transactions (and I don't just mean in IT terms).
 
And now we have a Transformation Strategy that promises it will all be done by 2020. I'm not seeing it. Not if we follow the current approach. That sounds snarky and perhaps it is, but this is really the fundamental point of the centre's digital efforts - joining up what hasn't been joined up before. Content, as has been well proven for the last 15 years, is the easy bit.

Transactions are definitely the difficult bit, and they're difficult in two ways - (1) the creation of an end to end service that goes all the way from citizen possibly through intermediary (everything from PAYE provider to accountant to Citizen's Advice Bureau to me doing my mother's tax return) and (2) the rethinking of traditional policy in a way that supports government's desired outcome, meets user needs and is also deliverable.  From 2001, we started putting transactions online and, for the most part, we put online what was offline.  At the time, a good start, but not one that fits with current thinking and capabilities.

Saturday, June 24, 2017

The Emperor And His Clothes Revisited - GDS Isn't Working (Part 1)


In October 2012, I questioned whether the Emperor had any clothes on; somewhere in that piece I said:
The question is really how to turn what GDS do into the way everyone else does it.  In parallel with GDS’ agile implementations, departments are out procuring their next "generation" of IT services - and when you consider that most are still running desktop operating systems released in 2000 and that many are working with big suppliers wrapped up in old contracts supporting applications that often saw the light of day in the 80s or, at best, the 90s, “generation” takes on a new meaning.  To those people, agile, iterative, user experience focused services are things they see when they go home and check Facebook, use Twitter or Dropbox or have their files automagically backed up into the cloud.  Splitting procurements into towers, bringing in new kinds of integrators, promising not to reward "bad" suppliers and landing new frameworks by the dozen is also different of course, but not enough to bridge the gap between legacy and no legacy.
and then
The question is whether the GDS model is the one that achieves scale transformation right across government, or whether it is another iteration in a series of waves of change that, in the end, only create local change, rather than truly structural change.
My sense, now, is that it's the latter - another iteration, but one that hasn't created as much change as the inputs would suggest and that, today, is creating far less change than it did early on in its life when charismatic leadership, a brilliant team, an almost messianic zeal and bulletproof political support were in place.

GDS has done some brilliant and world-leading stuff but has also failed to deliver on its mission.   Simply, GDS isn't working.  We need to think again about what it's going to take to deliver the vision; something that has been largely consistent for much of the last two decades but still seems far away.  This is tricky: we don't want to lose the good stuff and we clearly want to get the huge pile of missing stuff done.  The current approach is a dead end so we need to do something different; with the appointment of a new minister, now could be the time for the change.

Every few years for at least the last two decades, HM Government has revised its approach to the co-ordination of all things IT. Throughout that time there's always been a central function to set standards (GovTalk, for example), engage with industry to get the most done at the best price, co-ordinate services, do some direct delivery (ukonline.gov.uk, direct.gov.uk, gov.uk etc) and teach government what to do and how to do it - the art of the possible.

It started with the Central Computer and Telecommunications Agency (CCTA), followed by the Central Information Technology Unit (CITU), then the Office of the e-Envoy (OeE), the e-Government Unit (eGU), the Office of the Government CIO (OCIO) and, most recently, the Government Digital Service (GDS). Some of these - CCTA and CITU for instance - overlapped but had slightly different roles.

After each revision, there was a change of leader, a change of approach and a change of focus - some were for the better, some not so much. Nearly 7 years after the Martha Lane Fox report that brought GDS into being, it’s time for another one of those revisions.

We should, of course, celebrate the successes of GDS, because there have been some big results, learn the lessons (of GDS and all of its predecessors) and shutter the failures. So let’s first laud the successes. GDS have, in my view, been responsible for four big changes at the heart of government.

1) User focus and an agile approach. GDS has shown government that there is another way to deliver projects (not just web projects, but all projects), through focusing on user needs, building initial capability and then iterating to bring on successive functionality. Whilst this wasn’t new and still isn’t yet fully adopted, there isn’t anyone in government who doesn’t know about, and have a view on, the topic; and every department and agency across the board is at least experimenting with the approach and many have taken it completely to heart. The two dozen exemplars showed departments that the new approach was possible and how they might go about it, infecting a new generation of civil servants, and some of the old guard, with an incredible enthusiasm. Assess user needs, build some and ship, assess results, build a bit more and ship again (repeat until false) is understood as a viable approach by far more of government than it was even 5 years ago, let alone 15.

2) Website consolidation. What was just an idea on some slides nearly 15 years ago, as seen in the picture below, is now close to reality. The vast bulk of government information sits on gov.uk, a site that didn't exist in 2010. Gov.uk receives some 12-14 million visitors in a typical week. We've gone from a couple of thousand websites to a handful (not quite to one, but near enough to make little difference). Bringing together the content and giving the citizen the impression that government is all joined up is a necessary precursor to achieving lift off with transactions.


3) Spend Controls. Before the Coalition Government came in, departments spent money however they wanted to, despite the best efforts of various bodies to impose at least some controls. There’s now a governance process in place that reviews projects at various stages and, whilst the saves are likely not as big as has been claimed, the additional review focuses departmental minds and encourages them to look at all options.  Controlling and, more specifically, directing spend is the mainstay of changing how government does IT and will support further re-use of platforms and technologies.

4) Openness, transparency and championing issues. Government blogs were few and far between before 2010; official ones didn’t really exist. GDS staff (and, as a result, departmental people too) blog incessantly, to talk about what they are doing, to share best practice, to lay down gauntlets (e.g. championing the issue of necessary diversity on panels through the “GDS Parity Pledge”) and to help recruit new people from inside and outside of government to the cause.  Working in the open is a great way to show the outside, as well as the inside, world that things really have changed.

Each of those is a significant achievement - and each has been sustained, to at least some degree, throughout the time GDS has been active which deserves additional celebration. Having an idea is the easy bit, it’s getting it done that’s the hard bit - the latter is where most people turn around and give up. Each of these achievements does, however, come with a succession of buts which I will explore in later posts.


In the world of agile, failure is inevitable.  The point, though, is to fail fast and at a lower cost, correct the errors and get it right the next time.  Getting the next phase of the online agenda right requires some significant rethinking, an analysis of the failures and the setting of a new direction.

This is not to say that what GDS has done to date isn't good - the successes outlined above should rightly be lauded.  It is, though, to say that it was not and is not enough to create the necessary change.  Transformation is an overused word and one that is rarely delivered on, least of all in an agile, iterative world; but a step change in the way citizens interact with government is still possible.

So, to create that necessary level of change, we need to put in place a different approach, one that ratchets up the pace of delivery with departments, one that integrates tightly with the outside world and one that doesn't repeat the past but that embraces the future.
 
I plan to publish a succession of posts looking at this with the aims of constructively challenging what's been done so far and providing a framework for setting things up successfully for that next phase.

Thursday, February 09, 2017

5 Years After 10 Years After - The Emperor's New Clothes

With today's launch of the Government Transformation Strategy (not to be confused with this Government Transformation Strategy from 2006, or this one from 2005), my timing for taking a look at where we've been and what's left looks reasonably good.

In October 2012, I took a look at GDS, just as their first delivery, gov.uk, was about to go live.  I called it "The Emperor's New Clothes." My aim was to compare and contrast with earlier efforts specifically from the e-Delivery team which ran from 2001 through 2005/6.  The piece generated a lot of feedback at the time including whole Twitter conversations as well as lots of questions to me offline.  I noted that, during my time running eDt, I was never sure whether it was me who had no clothes on or whether I was the little boy.

Given my running theme that history might just be repeating, I've pulled out the main points from The Emperor's New Clothes here - and then, in future pieces, will catch up with where we are today and where we might go:
Change needs new ideas, new people and new ways to execute. This kind of change is very hard to get rolling and many times harder than that to sustain.   I watch, then, with fascination wondering if this is change that will stick and, especially, if it is change that will pervade across government.  Or whether its half-life is actually quite short - that when the difficult stuff comes along (as well as the routine, mind-numbing stuff), things will stall.  Perhaps the departments will rebel, or the sponsors will move on, or delivery will be undermined by some cockups, or the team will tire of bureaucracy once they move into the transaction domain.
The question is really how to turn what GDS do into the way everyone else does it.  In parallel with GDS’ agile implementations, departments are out procuring their next "generation" of IT services - and when you consider that most are still running desktop operating systems released in 2000 and that many are working with big suppliers wrapped up in old contracts supporting applications that often saw the light of day in the 80s or, at best, the 90s, “generation” takes on a new meaning.  To those people, agile, iterative, user experience focused services are things they see when they go home and check Facebook, use Twitter or Dropbox or have their files automagically backed up into the cloud.  Splitting procurements into towers, bringing in new kinds of integrators, promising not to reward "bad" suppliers and landing new frameworks by the dozen is also different of course, but not enough to bridge the gap between legacy and no legacy.
One of the strengths of the approach that GDS is adopting is that the roadmap is weeks or maybe months long.  That means that as new things come along they can be embraced and adopted - think what would have happened if a contract for a new site had been let three months before the iPhone came out? Or a month before the iPad came out? 
It is, though, also a significant weakness.  Departments plan their spending at least a year out and often further; they let contracts that run for longer than that.  If there is – as GDS are suggesting – to be a consolidation of central government websites by April 2013 and then all websites (including those belonging to Arm’s Length Bodies) by April 2014 then there needs to be a very clear plan for how that will be achieved so that everyone can line up the resource.  Likewise, if transactions are to be put online in new, re-engineered ways (from policy through to user interaction), that too will take extensive planning.
During the time of the e-Envoy we had four Ministers and, if you add in eGU, nine.  I suspect that my experience of the Cabinet Office is more common than the current experience where there has been stability for the last 2 ½ years.  GDS will need a plan B if Mr Maude does move on to something new.  There will also need to be a 2015 plan B if power changes hands.  Of course, if your roadmap goes out only weeks or months, then no one is looking at 2015.  That’s a mistake.
GDS have succeeded in being wildly transparent about their technology choices and thinking.  They are not, though, transparent about their finances.  That should change.  The close association with politicians seems to mean that GDS must champion everything that they do as a cost save – witness recent stories on identity procurement costs, website costs comparing direct.gov.uk and gov.uk and so on. Let’s see the numbers.
Given the in-house staffing model that GDS is operating, changes show up only as opportunity cost. That makes comparing options and, particularly, benefits difficult. In a beta world, you make more changes than you do in a production world – once you're in production, you're more likely to make incremental changes than major ones (because, as Marc Andreessen said long ago, interfaces freeze early – people get used to them and are confused by too big a change).
Soon GDS will tell departments that their top transactions need to be re-engineered from policy through to service provision with a clear focus on the user.  At that point we move away from the technologists who are attracted to shiny new things and we hit the policy makers who are operating in a different world – they worry about local and EU legislation, about balancing the needs of vastly differing communities of stakeholders and, of course, they like to write long and complicated documents to explain their position having evaluated the range of possible options.
Tackling transactions is both fundamentally necessary and incredibly hard, though most of that isn't about the shiny front end – it's about the policy, the process and the integration with existing back end systems (which absorb some 65% of the £12-16bn spent per year on IT in government). There is a sense of "Abandon Hope All Ye Who Enter Here."
The question is whether the GDS model is the one that achieves scale transformation right across government, or whether it is another iteration in a series of waves of change that, in the end, only create local change, rather than truly structural change.
It seems unlikely that GDS can scale to take on even a reasonable chunk of government service delivery.  It also seems unlikely that enough people in departments can be trained in the new approaches to the point where they can shoulder enough of the burden so as to allow GDS to only steer the ship. If we add in the commercial controls, the supply chain and the complexity of policy (and the lack of join up of those policies), the challenges look insurmountable.

None of that is an argument for not trying.  Direct.gov.uk is old and tired and needed a massive refresh; transactions are where the real potential can be unlocked and they need to be tackled in a new way.
Much of this has been tried before, painful lessons have been learned and it would be more than a shame if the latest effort didn’t achieve its aims too.  The trick, then, is to pick the battles to fight and create the change in the right areas with the aim of infecting others.  Taking on too much at once will likely lead to failure.



Friday, February 03, 2017

10 Years After 10 Years After


Strictly speaking, this is a little more than 10 years after the 10 year mark.  In late 2005,  Public Sector Forums asked me to do a review of the first 10 years of e-government; in May 2006, I published that same review on this blog.  It's now time, I think, to look at what has happened in the 10 years (or more) since that piece, reviewing, particularly, digital government as opposed to e-government.

Here's a quick recap of the original "10 years of e-government" piece, pulling out the key points from each of the posts that made up the full piece:

Part 1 - Let's get it all online

At the Labour Party conference in 1997, the Prime Minister had announced his plans for 'simple government' with a short paragraph in his first conference speech since taking charge of the country: 
“We will publish a White Paper in the new year for what we call Simple Government, to cut the bureaucracy of Government and improve its service. We are setting a target that within five years, one quarter of dealings with Government can be done by a member of the public electronically through their television, telephone or computer.”
Some time later he went further:
"I am determined that Government should play its part, so I am bringing forward our target for getting all Government services online, from 2008 to 2005"

It’s easy to pick holes in a strategy (or perhaps the absence of one) that's resulted in more than 4,000 individual websites, dozens of inconsistent and incompatible services and a level of take-up that, for the most popular services, is perhaps 25% at best.

After all, in a world where most people have 10-12 sites they visit regularly, it’s unlikely even one of those would be a government site – most interactions with government are, at best, annual and so there's little incentive to store a list of government sites you might visit. As the count of government websites rose inexorably – from 1,600 in mid-2002 to 2,500 a year later and nearly 4,000 by mid-2005 – citizen interest in all but a few moved in the opposite direction.

Over 80% of the cost of any given website was spent on technology – content management tools, web server software, servers themselves – as technology buyers and their business unit partners became easy pickings for salesmen with 2 car families to support. Too often, design meant flashy graphics, complicated pages, too much information on a page and confusing navigation. Accessibility meant, simply, the site wasn’t.

In short, services were supply-led by the government, not demand-led by the consumer. But where was the demand? Was the demand even there? Should it be up to the citizen to scream for the services they want and, if they did, would they - as Henry Ford claimed before producing the Model T - just want 'faster horses', or more of the same they’d always had performed a little quicker?

We have government for government, not government for the citizen. With so many services available, you’d perhaps think that usage should be higher. Early on, the argument was often made (I believe I made it too) that it wasn’t worth going online just to do one service – the overhead was too high – and that we needed to have a full range of services on offer - ones that could be used weekly and monthly as well as annually. That way, people would get used to dealing online with government and we’d have a shot at passing the 'neighbour test' (i.e. no service will get truly high usage until people are willing to tell their neighbour that they used, say, 'that new tax credits service online' and got their money in 4 days flat, encouraging their friends to do likewise).

A new plan

 • Rationalise massively the number of government websites. In a 2002 April Fool email sent widely around government, I announced the e-Envoy’s department had seized control of government’s domain name registry and routed all website URLs to UKonline.gov.uk and was in the process of moving all content to that same site. Many people reading the mail a few days later applauded the initiative. Something similar is needed. The only reason to have a website is if someone else isn’t already doing it. Even if someone isn’t, there’s rarely a need for a new site and a new brand for every new idea.

• Engage forcefully with the private sector. The banks, building societies, pension and insurance companies need to tie their services into those offered by government. Want a pension forecast? Why go to government – what you really want to know is how much will you need to live on when you’re 65 (67?) and how you'll put that much money away in time. Government can’t and won’t tell you that. Similarly, authentication services need to be provided that can be used across both public and private sectors – speeding the registration process in either direction. With Tesco more trusted than government, why shouldn't it work this way? The Government Gateway, with over 7 million registered users, has much to offer the private sector – and they, in turn, could accelerate the usage of hardware tokens for authentication (to rid us of the problems of phishing) and so on.

• Open up every service. The folks at mySociety, Public Whip and theyworkforyou.com have shown what can be done by a small, dedicated (in the sense of passionate) team. No-one should ever need to visit the absurdly difficult to use Hansard site when it’s much easier through the services these folks have created. Incentives for small third parties to offer services should be created.

• Build services based on what people need to do. We know every year there are some 38 million tax discs issued for cars and that nearly everyone shows up at a post office with a tax disc, insurance form and MOT. For years, people in government have been talking about insurance companies issuing discs – but it still hasn’t happened. Bring together disparate services that have the same basic data requirements – tax credits and child benefit, housing benefit and council tax benefit etc.

• Increase the use of intermediaries. For the 45% of people who aren’t using the Internet and aren’t likely to any time soon, web-enabled services are so much hocus pocus. There needs to be a drive to take services to where people use them. Andrew Pinder, the former e-Envoy, used to talk about kiosks in pubs. He may have been speaking half in jest, but he probably wasn’t wrong. If that’s where people in a small village in Shropshire are to be found (and with Post Offices diminishing, it's probably the only place to get access to the locals), that’s where the services need to be available. Government needs to be in the wholesale market if it's to be efficient – there are far smarter, more fleet of foot retail providers that can deliver the individual transactions.

• Clean up the data. One of the reasons why government is probably afraid to join up services is that they know the data held on any given citizen is wildly out of date or just plain wrong. Joining up services would expose this. When I first took the business plan for the Government Gateway to a minister outside the Cabinet Office, this problem was quickly identified and seen as a huge impediment to progress.
More to come.

Monday, January 18, 2016

The Billion Pound G-Cloud

Sometime in the next few weeks, spend through the G-Cloud framework will cross £1 billion.  Yep, a cool billion.  A billion here and a billion there and pretty soon you're talking real money.

Does that mean G-Cloud has been successful?  Has it achieved what it was set up for? Has it broken the mould?  I guess we could say this is a story in four lots.

Well, that depends:

1) The Trend

Let's start with this chart showing the monthly spend since inception.



It shows 400-fold growth since day one, but spend looks pretty flat over the last year or so, despite that peak 3 months ago. Given that this framework had a standing start, for both customers and suppliers, it looks pretty good. It took time for potential customers (and suppliers) to get their heads round it. Some still haven't. And perhaps that's why things seem to have stalled?

Total spend to date is a little over £903m.  At roughly £40m a month (based on the November figures), £1bn should be reached before the end of February, maybe sooner. And then the bollard budget might swing into action and we'll see a year end boost (contrary to the principles of pay as you go cloud services though that would be).
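For what it's worth, the arithmetic behind that estimate is simple enough to sketch - a minimal illustration in Python, assuming the roughly £40m/month run rate holds (the figures are the ones quoted above, not fresh data):

```python
# Rough projection of when cumulative G-Cloud spend crosses £1bn,
# using the figures quoted above and an assumed constant run rate.
spend_to_date_m = 903   # £m, total spend to date ("a little over £903m")
run_rate_m = 40         # £m per month, roughly the November figure
target_m = 1000         # £m, the billion

months_needed = (target_m - spend_to_date_m) / run_rate_m
print(f"Months to £1bn at the current run rate: {months_needed:.1f}")
# ~2.4 months from the November data, i.e. before the end of February -
# sooner if a year end "bollard budget" boost shows up.
```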

Government no longer publishes total IT spend figures but, in the past, it's been estimated at somewhere between £10bn and £16bn per year. G-Cloud's annual spend, then, is a tiny part of that overall spend. G-Cloud fans have, though, suggested that £1 spent on G-Cloud is equivalent to £10 or even £50 spent the old way - that may be the case for hosting costs, but it certainly isn't the case for Lot 4 costs (though I am quite sure there has been some reduction in rates simply from the real innovation that G-Cloud brought: transparency on prices).

2) The Overall Composition

Up until 18 months ago, I used to publish regular analysis showing where G-Cloud spend was going. The headline observation then was that some 80% was being spent in Lot 4 - Specialist Cloud Services or, perhaps more accurately, Specialist Consultancy Services. To date, of our £903m, some £715m, or 79%, has been spent through Lot 4 (the red bars on the chart above). That's a lot of cloud consultancy.

 
(post updated 19th Jan 2016 with the above graph to show more clearly the percentage that is spent on Lot 4).

With all that spent on cloud consultancy, surely we would see an increase in spend in the other lots?  Lot 4 was created to give customers a vehicle to buy expertise that would explain to them how to migrate from their stale, high capital, high cost legacy services to sleek, shiny, pay as you go cloud services.

Well, maybe. Spend on IaaS (the blue bars), or Lot 1, is hovering around £4m-£5m a month, though has increased substantially from the early days. Let's call it £60m/year at the current run rate (we're at £47m now) - if it hits that number it will be double the spend last year, good growth for sure, and that IaaS spend has helped create some new businesses from scratch. But they probably aren't coining it just yet.

Perhaps the Crown Hosting Service has, ummm, stolen the crown and taken all of the easy business. Government apparently spends £1.6bn per year on hosting, with £700m of that on facilities and infrastructure, and the CHS was predicted to save some £530m of that once it was running (that looks to be a save through the end of 2017/18 rather than an annual save). But CHS is not designed for cloud hosting, it's designed for legacy systems - call it the Marie Celeste, or the Ship of the Doomed. You send your legacy apps there and never have to move them again - though, ideally, you migrate them to cloud at some point. We had a similar idea to CHS back in 2002, called True North; it ended badly.

A more positive way to look at this is that Government's hosting costs would have increased if G-Cloud wasn't there - so the £47m spent this year would actually have been £470m or £2.5bn if the money had been spent the old way. There is no way of knowing of course - it could be that much of this money is being spent on servers that are idling because people spin them up but don't spin them down, or it could be that more projects are underway at the same time than was previously possible because the cost of hosting is so much lower.

But really, G-Cloud is all about Lot 4.  A persistent and consistent 80% of the monthly spend is going on people, not on servers, software or platforms.  PaaS may well be People As A Service as far as Lot 4 is concerned.

3) Lot 4 Specifically

Let's narrow Lot 4 down to this year only, so that we are not looking at old data.  We have £356m of spend to look at, 80% of which is made by central government.  There's a roughly 50/50 split between small and large companies - though I suspect one or two previously small companies have now become very much larger since G-Cloud arrived (though on these revenues, they have not yet become "large").

If we knew which projects that spend had been committed to, we would soon know what kind of cloud work government was doing, right?

Sadly, £160m is recorded against "Project Null". Let's hope it's successful - there's a lot of cash riding on it not becoming void too.

Here are the Top 10 Lot 4 spenders (for this calendar year to date only):

 
 And the Top 10 suppliers:


Cloud companies?  Well, possibly.  Or perhaps, more likely, companies with available (and, obviously, agile) resource for development projects that might, or might not, be deployed to the cloud.  It's also possible that all of these companies are breaking down the legacy systems into components that can be deployed into the cloud starting as soon as this new financial year; we will soon see if that's the case.

To help understand what is most likely, here's another way of looking at the same data.  This plots the length of an engagement (along the X-axis) against the total spend (Y-axis) and shows a dot with the customer and supplier name.



A cloud-related contract under G-Cloud might be expected to be short and sharp - a few months, perhaps, to understand the need, develop the strategy and then ready it for implementation.  With G-Cloud contracts lasting a maximum of two years, you might expect to see no relationship last longer than twenty four months.

But there are some big contracts here that appear to have been running for far longer than twenty four months.  And, whilst it's very clear that G-Cloud has enabled far greater access to SME capability than any previous framework, there are some old familiar names here.
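For anyone who wants to reproduce that view from the published data, here's a minimal sketch of the analysis - the column names (customer, supplier, month, spend_gbp) are placeholders, not the actual headers of the Cabinet Office CSV:

```python
# Sketch: engagement length vs total spend for each customer-supplier pair.
# Assumes a CSV with columns: customer, supplier, month, spend_gbp (names illustrative).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("gcloud_spend.csv", parse_dates=["month"])

pairs = df.groupby(["customer", "supplier"]).agg(
    months=("month", "nunique"),   # number of months billed, i.e. engagement length
    total=("spend_gbp", "sum"),    # total spend over the relationship
).reset_index()

plt.scatter(pairs["months"], pairs["total"])
plt.axvline(24, linestyle="--")    # G-Cloud call-offs last a maximum of 24 months
plt.xlabel("Engagement length (months)")
plt.ylabel("Total spend (£)")
plt.show()
```

Any dots to the right of that dashed line are the relationships that appear to have outlived a single two year call-off.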

4) Conclusions

G-Cloud without Lot 4 would look far less impressive, even if the spend it is replacing was 10x higher.  It's clear that we need:

- Transparency. What is the Lot 4 spend going to?

- Telegraphing of need.  What will government entities come to market for over the next 6-12 months?

- Targets. The old target was that 50% of new IT spend would be on cloud. Little has been said about that in a long time. Little has, in fact, been said about plans. What are the new targets?

Most of those points are not new - I've said them before, for instance in a previous post about G-Cloud as a Hobby and also here about how to take G-Cloud Further Forward.

In short, Lot 4 needs to be looked at hard - and government needs to get serious about the opportunity that this framework (which broke new ground at inception but has been allowed to fester somewhat) presents for restructuring how IT is delivered.

Acknowledgements

I'm indebted, as ever, to Dan Harrison for taking the raw G-Cloud data and producing these far simpler to follow graphs and tables.  I maintain that GDS should long ago have hired him to do their data analysis.  I'm all for open data, but without presentation, the consequences of the data go unremarked.


Monday, February 16, 2015

Performance Dashboard July 2003 - The Steep Hill of Adoption

With gov.uk's Verify appearing on the Performance Dashboard for the first time, I was taken all the way back to the early 2000s when we published our own dashboards for the Government Gateway, Direct.gov.uk and our other services.  Here's one from July 2003 - there must have been earlier ones but I don't have them to hand:



This is the graph that particularly resonated:

With the equivalent from back then being:

After 4 years of effort on the Identity programme (now called Verify), the figures make pretty dismal reading - low usage, low ability to authenticate first time, low number of services using it. But, you know what, the data is right there for everyone to see, and it's plain that no one is going to give up on this, so gradually the issues will be sorted, people will authenticate more easily and more services will be added. It's a very steep hill to climb though.

We started the Gateway with just the Inland Revenue, HM Customs and MAFF (all department names that have long since fallen away) - and adding more was a long and painful process. So I feel for the Verify team - I wouldn't have approached things the way they have, but it's for each iteration to pick its path. There were, though, plenty of lessons from back then that could have made things easier.

There is, though, a big hill to climb for Verify. It will be interesting to watch.

Monday, January 05, 2015

Mind The Gaps - Nothing New Under The Sun


As we start 2015, a year when several big contracts are approaching their end dates and replacement solutions will need to be in place, here's a presentation I gave a couple of times last year looking at the challenges of breaking up traditional, single prime IT contracts into potentially lots of smaller, shorter contracts:

Monday, August 11, 2014

Hosting Crowns

Late in 2013 there was a flurry of interest in a project called the "Crown Hosting Service" - covered, for instance, by Bryan Glick at Computer Weekly. The aim, according to the article, was to save some £500m within a few years by reducing the cost of looking after servers.  The ITT for this "explicit legacy procurement" (as Liam Maxwell accurately labelled it) was issued in July 2014.

Apparently some £1.6bn is spent by government on hosting government's IT estate. That figure is about half the roughly £3bn it costs to run government's central civil estate (buildings); and that £3bn is, in turn, only 15% of the cost of running the total estate. The total cost of running the estate is, then, something like £20bn (with an overall estate value of c£370bn).
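Made explicit, the back-of-envelope arithmetic runs like this (a sketch using only the figures above):

```python
# Back-of-envelope check of the estate figures quoted above.
it_hosting_bn = 1.6                       # £bn/year spent hosting the IT estate
civil_estate_bn = it_hosting_bn * 2       # hosting is about half the civil estate cost (~£3bn)
total_estate_bn = civil_estate_bn / 0.15  # the civil estate is ~15% of the total
print(round(total_estate_bn))             # ~21, i.e. "something like £20bn"
```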

It's interesting, then, to see increasing instances of departments sharing buildings - the picture below shows two agencies that you might not associate together. The Intellectual Property Office and the Insolvency Service share a building - though I'm hoping it's not because they share a customer base and offer a one stop shop. The IPO and the IS are both part of BIS (which is just around the corner) so perhaps this is a like for like share.


But over the next couple of years, and maybe in the next couple of months, we are certainly going to see more sharing - DCLG will soon vacate its Victoria location and begin sharing with another central government department.  Definitely not like for like.

Such office sharing brings plenty of challenges. At the simpler end are things such as standard entry passes and clearance levels. At a more complicated level is the IT infrastructure - at present something that is usually entirely unique to each department. A DCLG desktop will not easily plug straight into the network of another department - even a wireless network would need to be told about new devices and where they need to point.

With increasing commoditisation of services, and increasing sharing, it's easily possible to see - from a purely IT point of view - government buildings that function, for large numbers of HQ and, perhaps especially, field staff, as drop in centres where desks are available for whoever is passing provided that they have the right badge.  Those who want to work from home can continue to do so, but will also be able to go to a "local office" where they will have higher bandwidth, better facilities and the opportunity to interact with those in other departments and who run other services.  

In this vision, the vertical silos of government departments will be broken up simply because people no longer need to go to "their" department to do their day job; they can go wherever makes most sense. Maybe, just maybe, the one stop shop will become a reality because staff can go where the customers are, rather than where their offices are.

G-Cloud By The Numbers (To End June 2014)

With Dan's Tableau version of the G-Cloud spend data, interested folks need never download the csv file provided by Cabinet Office ever again.  Cabinet Office should subcontract all of their open data publication work to him.



The headlines for G-Cloud spend to the end of June 2014 are:

- No news on the split between lots.  80% of spend continues to be in Lot 4, Specialist Cloud Services

- 50% of the spend is with 10 customers, 80% is with 38 customers

- Spend in June was the lowest since February 2014.  I suspect that is still an artefact of a boost because of year end budget clearouts (and perhaps some effort to move spend out of Lot 4 onto other frameworks)

- 24 suppliers have 50% of the spend, 72 have 80%. A relatively concentrated customer spend is being spread across a wider group of suppliers. That can only be a good thing

- 5 suppliers have invoiced less than £1,000. 34 less than £10,000

- 10 customers have spent less than £1,000. 122 less than £10,000. How that squares with the bullet immediately above, I'm not sure

- 524 customers (up from 489 last month) have now used the framework, commissioning 342 suppliers.  80% of the spend is from central government (unsurprising, perhaps, given the top 3 customers - HO, MoJ, CO - account for 31% of the spend)

- 36 customers have spent more than £1m.  56 suppliers have billed more than £1m (up from 51).  This time next year, Rodney, we'll be millionaires.

- Top spending customers stay the same but there's a change in the top 3 suppliers (BJSS, Methods stay the same and Equal Experts squeaks in above IBM to claim the 3rd spot)

One point I will venture, though it's not terribly well researched, is that once a customer starts spending money with G-Cloud, they are more likely to continue than not. And once a supplier starts seeing revenue, they are more likely to continue to see it than not. So effort on the first sale is likely to be rewarded with continued business.
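Those concentration figures fall out of a short script against the published spend data. A sketch of the calculation, assuming a CSV with customer, supplier and spend_gbp columns (illustrative names, not the file's actual headers):

```python
# Sketch: smallest number of customers/suppliers accounting for 50% and 80% of spend.
import pandas as pd

df = pd.read_csv("gcloud_spend.csv")

def count_for_share(data, column, share):
    """Smallest number of entities in `column` covering `share` of total spend."""
    totals = data.groupby(column)["spend_gbp"].sum().sort_values(ascending=False)
    cumulative = totals.cumsum() / totals.sum()
    # Entities strictly below the threshold, plus the one that crosses it.
    return int((cumulative < share).sum()) + 1

for share in (0.5, 0.8):
    print(f"{share:.0%}: {count_for_share(df, 'customer', share)} customers, "
          f"{count_for_share(df, 'supplier', share)} suppliers")
```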

Friday, June 27, 2014

Close Encounters Of The Fourth Kind

Much to my surprise, O2 sent me a text on Wednesday.  The text wasn't the surprising bit - they often send me texts offering me something that I don't want.  The surprise was that this time they didn't offer, they told me that I was just going to get it.  It was 4G.  And I wanted it.


4G. For nothing.  No options. No discussion. No questions allowed.  No SIM change needed.  No conversation about impact on battery life.  Just: turn off your phone in the morning and turn it back on, and if you're in a 4G area, it's all yours.

The next morning I was in Camden Town - no sign of 4G there.  Clearly Camden is a bit rural to have coverage just yet.

But later, in Whitehall, it worked just fine. And fine means a consistent 15Mb/s download (versus the previous day's 3G download speed of 2Mb/s).

During 2012/13 I set up a JV owned by the four mobile operators, called at800, that had the task of managing any negative impact from interference with TV signals that might occur because the bottom of the 4G range aligns with the top of the TV range (and, until some recent work by Digital UK, overlapped).


at800 - you might have seen the ads or had a card, or maybe even a filter through your letterbox - has been a great success (last I checked, they'd been in touch with probably 45% of UK households).  That's in part because the problem that all the TV technologists worried might affect up to two million households has actually been far less of a problem but, for the most part, it's because we put together a great team, worked closely with the mobile operators and the broadcasters, ran pilots, tested everything we could and smoothed the way for the 4G roll out.  In truth, we were ready long before the operators were.  They were all fun/challenging/annoying/exciting to work with, but I liked O2's approach most.


After a couple of days testing 4G, I have this to say:

- Coverage in buildings where 3G coverage was previously poor to non-existent has much improved (I can even make calls from the dead centre of buildings where previously I stared only at "No Service")

- Download speeds are certainly faster (roughly equivalent to what I get from Satellite broadband, but without the round trip lag)

- Battery life seems unchanged (I wonder if battery usage is higher during download but because download is so much faster, there's less overall drain)

That said, the nearest mast to my home is still some 200 miles away.  Keep rolling it out O2.

I have no idea how widespread this offer is but, if you get the same text, say "yes". Not that you'll have any choice.  But so far, it's all upside.

Monday, June 23, 2014

The Trouble With Transition - DECC and BIS Go First

In a head-scratching story at the end of last week, DECC and BIS made the front page of the Financial Times (registered users/subscribers can access the story).  Given the front page status, you might imagine that the Smart Meter rollout had gone catastrophically wrong, or that we had mistakenly paid billions in grants to scientists who weren't getting the peer reviews that we wanted, or that we'd suddenly discovered a flaw in our model for climate change or perhaps that the Technology Strategy Board had made an investment that would forever banish viruses and malware.


The BBC followed the story too.

But, no.  Instead we have two departments having problems with their email.  Several Whitehall wags asked me weeks ago (because, yes, this story has been known about for a month or more) whether anyone would either notice, or care, that there was no email coming to or from these departments.   It is, perhaps, a good question.
Business Secretary Mr Cable and Energy and Climate Change Secretary Mr Davey were reported in the Financial Times to be angry about slow and intermittent emails and network problems at their departments since they started migrating to new systems in May.
The real question, though, is what actually is the story here?

- It appeared to be a barely-veiled attack on the current policy of giving more business to SMEs (insider says "in effect they are not necessarily the best fit for this sort of task" ... "an idealistic Tory policy to shake up Whitehall")

- Or was it about splitting up contracts and of taking more responsibility for IT delivery within departments (Mr Cable seemingly fears the combination of cost-cutting and small firms could backfire)?

-  Was the story leaked by Fujitsu who are perhaps sore at losing their £19m per annum, 15 year (yes, 15. 15!) contract?

- Was it really triggered by Ed Davey and Vince Cable complaining to the PM that their email was running slow ("Prime Minister, we need to stop everything - don't make a single decision on IT until we have resolved the problems with our email")?

- Is it even vaguely possible that it is some party political spat where the Liberal Democrats, languishing in the polls, have decided that a key area of differentiation is in how they would manage IT contracts in the future?  And that they would go back to big suppliers and single prime contracts?

- Was it the technology people in the department themselves who wish that they could go back to the glory days of managing IT with only one supplier when SLAs were always met and customers radiated delight at the services they were given?

#unacceptable as Chris Chant would have said.

Richard Holway added his view:
In our view, the pendulum has swung too far. The Cabinet Office refers to legacy ICT contracts as expensive, inflexible and outdated; but moving away from this style of contract does not necessarily mean moving away from the large SIs.
And it appears that it is beginning to dawn on some in UK Government that you can’t do big IT without the big SIs. A mixed economy approach – involving large and small suppliers - is what’s needed.
By pendulum, he means that equilibrium sat with less than a dozen suppliers taking more than 75% of the government's £16bn annual spend on IT.  And that this government, by pushing for SMEs to receive at least 25% of total spend, has somehow swung us all out of kilter, causing or potentially causing chaos.  Of course, 25% of spend is just that - a quarter - it doesn't mean (based on the procurements carried out so far by the MoJ, the Met Police, DCLG and other departments) that SIs are not welcome.

Transitions, especially, in IT are always challenging - see my last blog on the topic (and many before).  DECC and BIS are pretty much first with a change from the old model (one or two very large prime contracts) to the new model (several - maybe ten - suppliers with the bulk of the integration responsibility resting with the customer, even when, as in this case, another supplier is nominally given integration responsibility).  Others will be following soon - including departments with 20-30x more users than DECC and BIS.

Upcoming procurements will be fiercely competed, by big and small suppliers alike.  What is different this time is that there won't be:

-  15 year deals that leave departments sitting with Windows XP, Office 2002, IE 6 and dozens of enterprise applications and hardware that is beyond support.

or

- 15 year deals that leave departments paying for laptops and desktops that are three generations behind, that can't access wireless networks, that can't be used from different government sites and that take 40 minutes to boot.

or

- 15 year deals that mean that only now, 7 years after iPhone and 4 years after iPad, are departments starting to take advantage of truly mobile devices and services

With shorter contracts, more competition, access to a wider range of services (through frameworks like G-Cloud), only good things can happen.   Costs will fall, the rate of change will increase and users in departments will increasingly see the kind of IT that they have at home (and maybe they'll even get to use some of the same kind of tools, devices and services).

To the specific problem at BIS and DECC then.  I know little about what the actual problem is or was, so this is just speculation:

- We know that, one day, the old email/network/whatever service was switched off and a new one, provided by several new suppliers, was turned on.  We don't know how many suppliers - my guess is a couple, at least one of which is an internal trading fund of government. But most likely not 5 or 10 suppliers.

- We also know that transitions are rarely carried out as big bang moves.  It's not a sensible way to do it - and goodness knows government has learned the perils of big bang enough times over the last 15 years (coincidentally the duration of the Fujitsu contract).

- But what triggered the transition?  Of course a new contract had been signed, but why transition at the time they did?  Had the old contract expired?  Was there a drive to reduce costs, something that could only be triggered by the transition?   

- Who carried the responsibility for testing?  What was tested?  Was it properly tested?  Who said "that's it, we've done enough testing, let's go"?  There is, usually, only one entity that can say that - and that's the government department.  All the more so in this time of increased accountability falling to the customer.

- When someone said "let's go", was there an understanding that things would be bumpy?  Was there a risk register entry, flashing at least amber and maybe red, that said "testing has been insufficient"?

In this golden age of transparency, it would be good if DECC and BIS declared - at least to their peer departments - what had gone wrong so that the lessons can be learned.  But my feeling is that the lessons will be all too clear:

- Accountability lies with the customer.  Make decisions knowing that the comeback will be to you.

- Transition will be bumpy.  Practice it, do dry runs, migrate small numbers of users before migrating many.

- Prepare your users for problems, over-communicate about what is happening.  Step up your support processes around the transition period(s).

- Bring all of your supply chain together and step through how key processes and scenarios will work including when it all goes wrong.

- Have backout processes that you have tested and know the criteria you will use to put them into action

Transitions don't come along very often.  The last one DECC and BIS did seems to have been 15 years ago (recognising that DECC was within Defra and even MAFF back then).  They take practice.  Even moving from big firm A to big firm B.  Even moving from Exchange version x to Exchange version y.

What this story isn't, in any way, is a signal that there is something wrong with the current policy of disaggregating contracts, of bringing in new players (small and large) and of reducing the cost of IT.

The challenge ahead is definitely high on the ambition scale - many large scale IT contracts were signed at roughly the same time, a decade or more ago, and are expiring over the next 8 months.  Government departments will find that they are, as one, procuring, transitioning and going live with multiple new providers.  They will be competing for talent in a market where, with the economy growing, there is already plenty of competition.  Suppliers will be evaluating which contracts to bid for and where they, too, can find the people they need - and will be looking for much the same talent as the government departments are.  There are interesting times ahead.

There will be more stories about transition, and how hard it is, from here on in.  What angle the reporting takes in the future will be quite fascinating.

Friday, June 13, 2014

More On G-Cloud Numbers (May 2014 data)

The latest data show increasing spend via G-Cloud - this month, at £191.6m, tantalisingly close to £200m, an arbitrary but symbolically important round number.  The news after that is not terribly interesting:

Cloud spending may, it turns out, be seasonal.  Spend last month dropped to £12m, the lowest seen since October 2013 - all the more noticeable after the bollard budget blockbuster that was March spending.  Start of the new financial year and everyone is, it seems, planning rather than doing.

Lot 4 continues to dominate with 79% of the spend (Lots 1 to 3 are 6%, 1% and 13% respectively).

Much of the rest - top spending customers, top earnings suppliers etc - stays the same.

But there are some anomalies.  Last month I reported that the lowest spending customer had spent only £63.50.  This month they've moved higher, to £85.90.  Thirteen customers have, though, still spent less than £1,000.

We do, though, have nearly 500 customers (489), which is, in my view, more important than the growth in spend - it shows either (a) that more people are looking at what the cloud can do for them, which would be good all round, or (b) that more people have found that G-Cloud as a framework - GCaaS - can help them, which is still good because it's transparent and we can see whether they spend more in the coming months.

51 suppliers have seen revenues of more than £1m. Some of those are brand name, paid up, members of the Oligopoly.  Others look new to the public sector and certainly new to having access to quite so many customers.

There are some other anomalies too - I assume the result of data capture errors.  One supplier has a revenue line showing £1,599,849.80 which is listed as "blank" - there are 9 other such lines, though the other amounts are far, far smaller.  It would be nice to know where to allocate that money.  It may be that it is correctly allocated by Lot (so shows up in the graph below, where there are no "blank" entries) but not correctly tagged with a product description.  It would still be nice to know.

A couple of other points to wonder about:

- The Crown Commercial Service is a bigger user of Skyscape than any other purchaser (£1.4m - double HMRC's spend, nearly triple the Cabinet Office's, nearly 5 times the Home Office's).  Is that all hosting of the G-Cloud store and other framework services?

- There are only 125 instances of the word "Cloud" in the line items describing what has been purchased (which run to over 2,000 separate lines) - a sketch of how one might check that is below
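For the curious, here is roughly how that counting can be done - a Python sketch that assumes the published spend data has been saved as a CSV with columns named "Product Description" and "Spend"; the real file's column names may well differ.

    import pandas as pd

    df = pd.read_csv("gcloud_spend_may2014.csv")  # hypothetical filename

    # How many line items mention "Cloud" at all?
    mentions = df["Product Description"].str.contains("cloud", case=False, na=False)
    print(f"{mentions.sum()} of {len(df)} line items mention 'cloud'")

    # Line items with no usable description - where does that money belong?
    blank = (df["Product Description"].isna()
             | df["Product Description"].str.strip().str.lower().isin(["", "blank"]))
    print(f"{blank.sum()} 'blank' lines, worth £{df.loc[blank, 'Spend'].sum():,.2f}")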

To repeat the last paragraph in my last entry on this topic, for the avoidance of doubt:

Still, there is no other framework in government that gives access to such a wide variety of suppliers (many new to the public sector) and no framework that publishes its information at such a transparent level.  For those two reasons alone, G-Cloud still deserves applause - and, as it grows month on month, I hope that it will only get louder.

[Graph: G-Cloud spend by Lot - image omitted]

Monday, June 09, 2014

Digital Government 2002 - Doing Something Magical

Now here's a blast from the past!  Here's a "talking head" video recorded, I think, in early 2002 all about e-government (I am, of course, the talking head).  Some months later, much to my surprise, the video popped up at a conference I was attending - I remember looking up to see my head on a dozen 6' tall screens around the auditorium.

It's easily dated by my talking about the increasing use of PDAs (you'll even see me using one) and the rollout of 3G, not to mention the ukonline.gov.uk logo flashing up in the opening frames and the talk of e-government, as opposed to Digital by Default.

But the underpinning points about making the move from government to online government, e-government or a Digital by Default approach are much the same now as then:

"The citizen gets the services they need, when they need them, where they need then, how they need them ... without having to worry about ... the barriers and burdens of dealing with government"

[Video: embedded in the original post]

"You've changed government so fundamentally ... people are spending less time interacting and are getting real benefit"

Lessons learned: get a haircut before being taped, learn your lines and, even when in America, don't wear a t-shirt under your shirt (my excuse is that it was winter).

Thursday, June 05, 2014

G-Cloud By The Numbers (April 2014 data, released mid-May 2014)

I haven't looked at the G-Cloud spend data for a few months (the last review was in December) - something changed with the data format earlier in the year and it screwed up all my nicely laid out spreadsheets; I've only just got round to reworking them.
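For anyone wanting to do the same, the rework is little more than a pivot.  A minimal Python sketch - assuming the data has been saved as a CSV with columns "Date", "Lot" and "Spend", which is my assumption rather than the file's actual layout - looks something like this:

    import pandas as pd

    df = pd.read_csv("gcloud_spend_apr2014.csv", parse_dates=["Date"])

    # Monthly totals by Lot - the basis for the percentage splits below.
    monthly = (df.groupby([df["Date"].dt.to_period("M"), "Lot"])["Spend"]
                 .sum().unstack(fill_value=0))
    print(monthly.tail(6))

    # Share of total spend by Lot, and a naive run rate for 2014.
    print((monthly.sum() / monthly.values.sum() * 100).round(1))
    ytd_2014 = monthly.loc["2014-01":"2014-04"].values.sum()
    print(f"2014 run rate: £{ytd_2014 / 4 * 12 / 1e6:.0f}m")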

- After 25 months of use, total spend via the framework is £175.5m

- Spend in all of 2013 was £85m.  Spend in the first 4 months of 2014 is £81m, about 46% of the total spend so far

- The run rate for 2014, if that spend rate continues, is perhaps more than £240m.  I suspect we could see much higher than that given the expiry of many central government IT contracts in 2015 and 2016 (and so an increase in experimentation, preparation for transition and even actual transition ahead of expiry)

- The split between the lots in December 2013 was Lot 1: 4%, Lot 2: 1%, Lot 3: 16%, Lot 4: 78%

- As of now, the split is similar: 6%, 1%, 14%, 79%
 
- The 2014 year to date split is little different: 8%, 1%, 12%, 80%

[Graph: monthly G-Cloud spend by Lot - image omitted]

Conclusion:  The vast bulk of the spending is still via Lot 4 - people, and people as a service.  I'd expected that to start changing by now, with the Digital Services Framework fully live.  That said, Lot 4's spend per month has changed little since November 2013, except for a peak of £23m in March (roughly double the average spend over the last 6 months), which you can easily see in the graph above.

Conclusion: Infrastructure as a Service (from Lot 1) is gradually increasing - it's gone from c£800k/month to c£1.5m a month in the last 6 months.  Again, there was a peak in March, of £2m. 

Conclusion: It's an old cliché but plainly there was a bit of a budget clear-out in March, with departments rushing to spend money.  March 2014 spend was £30m - roughly double any other month either side.

- In December 2013, BJSS was the largest supplier, followed by IBM.  Today, BJSS are still number 1, but Methods have moved to number 2, with IBM at 3.

- The Home Office is still the highest spending customer, at £24.7m (nearly double their spend as of December).  MoJ are second at £16.7m with Cabinet Office third at £12.5m

- The top 10 customers account for 50% of the spend on the framework.  The top 20 make up 67%. That's exactly how it was in December.  More than 100 new customers have been added since December, though, with over 470 customers now listed.

- Some 310 suppliers have won business.  The top 10 have 32% of the market, the top 20 have 47% - a better spread than the customer equivalent metrics (a sketch of the calculation follows below)

- Last time, the lowest spending customer was the "Wales Office", with £375.  We are at a new low now, with "Circle Anglia Limited" spending £63.50 (I wonder whether the cost of processing that order was far greater than the order itself).

-  Thirteen customers have spent less than £1,000.  Thirty one have spent more than £1m
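Those concentration figures are simple cumulative shares.  A quick sketch of the calculation - the column names ("Customer", "Supplier", "Spend") are again my assumption, not the file's:

    import pandas as pd

    df = pd.read_csv("gcloud_spend_apr2014.csv")

    def top_share(column: str, n: int) -> float:
        """Percentage of all spend accounted for by the top n entries."""
        totals = df.groupby(column)["Spend"].sum().sort_values(ascending=False)
        return totals.head(n).sum() / totals.sum() * 100

    for who in ("Customer", "Supplier"):
        print(f"Top 10 {who}s: {top_share(who, 10):.0f}% | "
              f"Top 20 {who}s: {top_share(who, 20):.0f}%")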

Conclusion:  Much the same as in December - adoption of the framework is still spotty, but it is definitely improving.  A greater spread of customers, spending larger amounts of money - though mostly concentrated in Lot 4.  A few more suppliers have likely seen their business utterly transformed by this new access to public sector customers.

Overall Conclusion: G-Cloud needs, still, to break away from its reliance on Lot 4 sales.  Scanning through the sales by line item, there are far too many descriptions that say simply "project manager", "tester", "IT project manager" etc.  There are even line items (not in Lot 4) that say "expenses - 4gb memory stick" - a whole new meaning to the phrase "cloud storage" perhaps.

Still, there is no other framework in government that gives access to such a wide variety of suppliers (many new to the public sector) and no framework that publishes its information at such a transparent level.  For those two reasons alone, G-Cloud still deserves applause - and, as it grows month on month, I hope that it will only get louder.

Tuesday, May 13, 2014

Officially Uncertain

It turns out that the new security classifications, introduced at the start of April 2014, have collapsed into a single new tier - Officially Uncertain.  I worried, earlier in the year, that this might happen.

Last week, for instance, it was clearly explained to me that "OFFICIAL is not a protective marking, it does not convey any associated controls on how the information is to be handled."

What that means, of course, is that because there is no agreed baseline of controls for protecting information marked OFFICIAL (which is, after all, not actually a protective marking), each department or government entity is able to decide, alone, what it should do to protect that information.  Adios, commodity cloud.

- In a different meeting with different people, it was explained to me, just as clearly, that no one was going to go back and revisit their historical data to check what label should be applied to it (on an individual, file by file basis).  The only conclusion, therefore, was that all historical data should be marked OFFICIAL SENSITIVE (notwithstanding that, if OFFICIAL isn't a protective marking, then neither is this one - and that the guidance suggests "sensitive" is to be used by exception only; this is one big exception).  And given it's all a bit sensitive, that historical data should be treated as if it were IL3 and kept in a secure facility in the UK.  Adieu, commodity cloud.

All is not yet lost I hope.  Folks I speak to in CESG - sane, rational people that they are - recognise that this is a "generational change" and it will take some time before the implications are understood.  The trouble is that whilst time is on the side of government, it's not on the side of the smaller/newer players who want to provide services for government and for whom UNCERTAINTY is anathema.

In these early days, some guidance (not rules) would help people navigate through this uncertainty and support the development of products that meet the needs of the bulk of government entities (be they local, central, arm's length or otherwise).  The existing loose words - I can't stretch to calling them guidance - known as the "Cloud Security Principles" get to the precipice of new controls, look over and leap sharply backwards, all a-tremble.

Indeed, the summary of the approach recommended by those who best understand security is:

1. Think about the assets you have and what you're trying to do with them

2. Think about the attackers who'll be trying to interfere with those assets as you deliver your business function

3. Implement some mitigations (physical, procedural, personnel, technical) to address those identified risks

4. Get assurance as required in those mitigations

5. Thinking about the updated solution design, go back to step 1 to see if you've introduced any new risks.

6. Repeat until you've hit a level of confidence you are happy with

My guess is that step 6, alone, could lead to an awful lot of iterations that culminate in "guards with machine guns, patrolling with dogs, around a perimeter protected by an electric fence".  Of course, the number of guards, the type of guns, the eagerness of the dogs, the height of the fence and the shock it delivers will vary from entity to entity.
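To make the iteration point concrete, here is a toy rendering of that loop - entirely illustrative, with made-up risks and a made-up confidence score - showing how step 5 keeps feeding step 1:

    risks = ["data theft", "service outage", "insider misuse"]
    confidence, target = 0.0, 0.9
    iteration = 0

    while confidence < target:          # step 6: repeat until happy
        iteration += 1
        for risk in list(risks):        # steps 1 and 2: assets and attackers
            print(f"Pass {iteration}: mitigating {risk}")  # step 3: mitigate
            risks.remove(risk)
        if iteration == 1:              # step 5: the new design adds new risks
            risks.append("misconfigured new control")
        confidence += 0.5               # step 4: assurance builds confidence

    print(f"Stopped after {iteration} passes - or at the electric fence")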



There is sunshine through some of the clouds though ... some departments are rolling out PCs using native BitLocker rather than add-on encryption, others are trialling Windows 8.1 on tablets, whilst managed iPads have been around for some months.

But a move of central government departments to public cloud services (remember - 50% of new spend to be in the public cloud by 2015) looks to be a long way from here.  I don't think I can even soften that and say that a significant move to even a private, public-sector-only cloud is close.


Friday, March 14, 2014

The Trouble With ... Spectrum


To paraphrase Mark Twain, "Buy spectrum.  They're not making it anymore."

And if Ofcom's figures are right, the spectrum that we use today is worth £50bn a year (as of 2011) to the UK economy.  The government say that they want to double that contribution by 2025 - it was already up 25% in the 5 years from 2006 to 2011.  It's unclear why the data is as of 2011 - one suspects that if it was up 25% in 5 years, it may already be up another 12.5% or so since then, making doubling by 2025 at least a little easier.
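The back of my envelope on those numbers, for what it's worth (all figures come from the paragraph above; annual compounding is my assumption):

    value_2011 = 50.0                    # £bn a year, per Ofcom
    annual_growth = 1.25 ** (1 / 5) - 1  # 25% over 2006-2011 ~ 4.6%/year
    needed = 2 ** (1 / 14) - 1           # doubling 2011-2025 ~ 5.1%/year

    print(f"Historical growth: {annual_growth:.1%} a year")
    print(f"Needed to double by 2025: {needed:.1%} a year")
    print(f"Implied 2014 value: £{value_2011 * (1 + annual_growth) ** 3:.0f}bn")

On that arithmetic the historical rate is only a shade below what doubling requires - which rather supports the point that a more recent baseline would make the target easier still.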

If you've ever seen how spectrum is carved up in the UK, you know it's very complicated.  Here's a picture that shows just how complicated:

[Image: UK spectrum allocation chart - omitted]

Every so often, the government auctions, through Ofcom, a slice of spectrum.  In April 2000, the sale of the 3G spectrum realised some £22bn.  There was much delight - until stock markets around the world fell dramatically not long afterwards, something which was at least partly to blame for delaying the actual rollout of 3G services (indeed, Vodafone narrowly avoided a fine for failing to reach 90% coverage on time - with an extension granted to the end of 2013).

That 90% is a measure of population coverage, not geographical coverage - which explains why you will often fail to get a signal in the middle of a park, just outside a small town or, often, anywhere with a beautiful view where you want to take a picture and send it to someone there and then, like if you were here:

[Photo: a scenic spot, predictably without signal - image omitted]

Of course, there are doubtless plenty of people wandering down Baker Street right now who also can't get or even maintain a 3G signal.

The 4G auctions took place a little over 18 months ago and resulted in revenues some 90% lower than for 3G - partly a reflection of the times, partly because of somewhat decreased competition and partly because of smarter bidding on the part of at least some of the operators.  The 4G build-out is underway now, though there are, in effect, only two networks being built - O2 and Vodafone are sharing their network, as are Three and EE (the latter had a significant headstart on 4G because they lobbied, successfully, to reassign spare 1800 MHz spectrum, originally licensed for 2G, for use as 4G).

For me, though, coverage shouldn't be a competitive metric.  Coverage should be national - not 90% national, proper national.  Using coverage as a metric, coupled with charging for the spectrum, and then splitting up the job of building the network across two (or more) players means that they will go where the money is - which means major towns first (in fact, it means London first and for longest), then major travel corridors and commuter towns and the rural areas never.  The same is true, of course, for broadband - though our broadband rollout is mostly physical cable rather than over the air, the same investment/return challenge remains.

And that always seems to leave government to fill in the gaps - whether with the recent "not spots" project for mobile that will result in a few dozen masts being set up (for multiple operator use) to cover some (not all) of the gaps in coverage or a series of rural broadband projects (that only BT is winning) - neither of which is progressing very fast and certainly not covering the gaps.

With the upcoming replacement of Airwave (where truly national - 100% geographic - coverage is required), the rollout of smart meters (where the ideal way for a meter to send its reading home is via SMS or over a home broadband network) and the need to plug gaps in both mobile and broadband coverage, surely there is a need for an approach that we might call "national infrastructure"?

So, focusing on mobile and, particularly, on where it converges with broadband (on the basis that one can substitute for the other, and that the presence of one could drive the other), could one or more bodies be set up whose job is to create truly national coverage, selling the capacity they create to the content and service providers who want it?  That should ensure coverage, create economies of scale and still allow competition (even more so than today, given that in many areas there is only one mobile provider to choose from).  A Network Rail for our telecoms infrastructure.

To put it another way: is it relevant or effective to have competitive build-outs of such nationally vital capabilities as broadband and 4G (and, later, 5G, 6G and so on) mobile?

If the Airwave replacement were to base its solution on 4G (moving away from TETRA) - and I have no idea whether it will or won't, but given that the Emergency Services will have an increasing need for data in the future, it seems likely - then we would have another player doing a national rollout, funded by government (either directly or through recovery of costs).

There are probably 50,000 mobile masts in the country today.  With the consolidation of networks, that will get to a smaller number, maybe 30,000.  If you add in Airwave, which operates at a lower frequency and so will get better performance but has to cover more area, that number will increase a little (Airwave was formerly owned by O2 so my guess is that much of their gear is co-located with O2 mobile masts).   Perhaps £100k to build those in the first place and perhaps £10k every time they need a major upgrade (change of frequency / change of antenna / boost in backhaul and so on) ... pretty soon you're talking real money.  And that's on top of the cost of spectrum and excludes routine maintenance and refresh costs.
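Here is that envelope worked through - every figure below is the guess stated above, not a sourced number:

    masts_after_consolidation = 30_000
    build_cost = 100_000    # £ per mast to build, guessed
    upgrade_cost = 10_000   # £ per mast per major upgrade, guessed

    build_total = masts_after_consolidation * build_cost
    upgrade_total = masts_after_consolidation * upgrade_cost

    print(f"Build: £{build_total / 1e9:.1f}bn")                # £3.0bn
    print(f"Each major upgrade: £{upgrade_total / 1e6:.0f}m")  # £300m

£3bn to build and £300m per upgrade cycle - before spectrum, maintenance and refresh - is real money indeed.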

So having already rolled out 2G, 3G and the beginning of the 4G network and likely to replace (or at least combine) Airwave with a 4G-based solution ... and with many areas of the country still struggling to get a wireless signal, let alone a fast broadband solution, I think it's time to look at this again.

Whilst I was writing this, Ofcom put out a press release noting:
Ofcom’s European Broadband Scorecard, published today, shows that the UK leads the EU’s five biggest economies on most measures of coverage, take-up, usage and choice for both mobile and fixed broadband, and performs well on price.
That suggests that the current approach has done rather well - better than a curious selection of five big countries - but it doesn't mean (a) that we should compare ourselves only with those countries, (b) that we shouldn't go for an absolute measure of 100% coverage or (c) that the current approach will keep us ahead of the game.

It seems to me that rather than spend billions on HS2 which aims to transport more people from A to B a bit quicker than they move today, we could, instead, spend a smaller amount on securing the infrastructure for the 21st and 22nd centuries rather than that of the 19th.