Thursday, February 09, 2017

5 Years After 10 Years After - The Emperor's New Clothes

With today's launch of the Government Transformation Strategy (not to be confused with this Government Transformation Strategy from 2006, or this one from 2005), my timing for taking a look at where we've been and what's left looks reasonably good.

In October 2012, I took a look at GDS, just as their first delivery, gov.uk, was about to go live.  I called it "The Emperor's New Clothes." My aim was to compare and contrast with earlier efforts specifically from the e-Delivery team which ran from 2001 through 2005/6.  The piece generated a lot of feedback at the time including whole Twitter conversations as well as lots of questions to me offline.  I noted that, during my time running eDt, I was never sure whether it was me who had no clothes on or whether I was the little boy.

Given my running theme that history might just be repeating, I've pulled out the main points from The Emperor's New Clothes here - and then, in future pieces, will catch up with where we are today and where we might go:
Change needs new ideas, new people and new ways to execute. This kind of change is very hard to get rolling and many times harder than that to sustain.   I watch, then, with fascination wondering if this is change that will stick and, especially, if it is change that will pervade across government.  Or whether its half-life is actually quite short - that when the difficult stuff comes along (as well as the routine, mind-numbing stuff), things will stall.  Perhaps the departments will rebel, or the sponsors will move on, or delivery will be undermined by some cockups, or the team will tire of bureaucracy once they move into the transaction domain.
The question is really how to turn what GDS do into the way everyone else does it.  In parallel with GDS’ agile implementations, departments are out procuring their next "generation" of IT services - and when you consider that most are still running desktop operating systems released in 2000 and that many are working with big suppliers wrapped up in old contracts supporting applications that often saw the light of day in the 80s or, at best, the 90s, “generation” takes on a new meaning.  To those people, agile, iterative, user experience focused services are things they see when they go home and check Facebook, use Twitter or Dropbox or have their files automagically backed up into the cloud.  Splitting procurements into towers, bringing in new kinds of integrators, promising not to reward "bad" suppliers and landing new frameworks by the dozen is also different of course, but not enough to bridge the gap between legacy and no legacy.
One of the strengths of the approach that GDS is adopting is that the roadmap is weeks or maybe months long.  That means that as new things come along they can be embraced and adopted - think what would have happened if a contract for a new site had been let three months before the iPhone came out, or a month before the iPad.
It is, though, also a significant weakness.  Departments plan their spending at least a year out and often further; they let contracts that run for longer than that.  If there is – as GDS are suggesting – to be a consolidation of central government websites by April 2013 and then all websites (including those belonging to Arm’s Length Bodies) by April 2014 then there needs to be a very clear plan for how that will be achieved so that everyone can line up the resource.  Likewise, if transactions are to be put online in new, re-engineered ways (from policy through to user interaction), that too will take extensive planning.
During the time of the e-Envoy we had four Ministers and, if you add in eGU, nine.  I suspect that my experience of the Cabinet Office is more common than the current experience where there has been stability for the last 2 ½ years.  GDS will need a plan B if Mr Maude does move on to something new.  There will also need to be a 2015 plan B if power changes hands.  Of course, if your roadmap goes out only weeks or months, then no one is looking at 2015.  That’s a mistake.
GDS have succeeded in being wildly transparent about their technology choices and thinking.  They are not, though, transparent about their finances.  That should change.  The close association with politicians seems to mean that GDS must champion everything that they do as a cost save – witness recent stories on identity procurement costs, website costs comparing direct.gov.uk and gov.uk and so on. Let’s see the numbers.
Given the in-house staffing model that GDS is operating, changes are really represented only by opportunity cost.  That makes comparing options and, particularly, benefits difficult.  In a beta world, you make more changes than you do in a production world - once you're in production, you're more likely to make incremental changes than major ones (because, as Marc Andreessen said long ago, interfaces freeze early - people get used to them and are confused by too big a change).
Soon GDS will tell departments that their top transactions need to be re-engineered from policy through to service provision with a clear focus on the user.  At that point we move away from the technologists who are attracted to shiny new things and we hit the policy makers who are operating in a different world – they worry about local and EU legislation, about balancing the needs of vastly differing communities of stakeholders and, of course, they like to write long and complicated documents to explain their position having evaluated the range of possible options.
Tackling transactions is both fundamentally necessary and incredibly hard, though most of that isn’t about the shiny front end – it’s about the policy, the process and the integration with existing back end systems (which absorb some 65% of the £12-16bn spent per year on IT in government).  There is a sense of “Abandon Hope All Ye Who Enter Here.”
The question is whether the GDS model is the one that achieves scale transformation right across government, or whether it is another iteration in a series of waves of change that, in the end, only create local change, rather than truly structural change.
It seems unlikely that GDS can scale to take on even a reasonable chunk of government service delivery.  It also seems unlikely that enough people in departments can be trained in the new approaches to the point where they can shoulder enough of the burden so as to allow GDS to only steer the ship. If we add in the commercial controls, the supply chain and the complexity of policy (and the lack of join up of those policies), the challenges look insurmountable.

None of that is an argument for not trying.  Direct.gov.uk is old and tired and needed a massive refresh; transactions are where the real potential can be unlocked and they need to be tackled in a new way.
Much of this has been tried before, painful lessons have been learned and it would be more than a shame if the latest effort didn’t achieve its aims too.  The trick, then, is to pick the battles to fight and create the change in the right areas with the aim of infecting others.  Taking on too much at once will likely lead to failure.



Friday, February 03, 2017

10 Years After 10 Years After


Strictly speaking, this is a little more than 10 years after the 10 year mark.  In late 2005,  Public Sector Forums asked me to do a review of the first 10 years of e-government; in May 2006, I published that same review on this blog.  It's now time, I think, to look at what has happened in the 10 years (or more) since that piece, reviewing, particularly, digital government as opposed to e-government.

Here's a quick recap of the original "10 years of e-government" piece, pulling out the key points from each of the posts that made up the full piece:

Part 1 - Let's get it all online

At the Labour Party conference in 1997, the Prime Minister had announced his plans for 'simple government' with a short paragraph in his first conference speech since taking charge of the country: 
“We will publish a White Paper in the new year for what we call Simple Government, to cut the bureaucracy of Government and improve its service. We are setting a target that within five years, one quarter of dealings with Government can be done by a member of the public electronically through their television, telephone or computer.”
Some time later he went further:
"I am determined that Government should play its part, so I am bringing forward our target for getting all Government services online, from 2008 to 2005"

It’s easy to pick holes in a strategy (or perhaps the absence of one) that's resulted in more than 4,000 individual websites, dozens of inconsistent and incompatible services and a level of take-up that, for the most popular services, is perhaps 25% at best.

After all, in a world where most people have 10-12 sites they visit regularly, it’s unlikely even one of those would be a government site – most interactions with government are, at best, annual and so there's little incentive to store a list of government sites you might visit. As the count of government websites rose inexorably – from 1,600 in mid-2002 to 2,500 a year later and nearly 4,000 by mid-2005 – citizen interest in all but a few moved in the opposite direction.

Over 80% of the cost of any given website was spent on technology – content management tools, web server software, servers themselves – as technology buyers and their business unit partners became easy pickings for salesmen with 2 car families to support. Too often, design meant flashy graphics, complicated pages, too much information on a page and confusing navigation. 
Accessibility meant, simply, the site wasn’t.
In short, services were supply-led by the government, not demand-led by the consumer. But where was the demand? Was the demand even there? Should it be up to the citizen to scream for the services they want and, if they did, would they - as Henry Ford claimed before producing the Model T - just want 'faster horses', or more of the same they’d always had performed a little quicker? 
We have government for government, not government for the citizen. With so many services available, you’d perhaps think that usage should be higher. Early on, the argument was often made (I believe I made it too) that it wasn’t worth going online just to do one service – the overhead was too high – and that we needed to have a full range of services on offer - ones that could be used weekly and monthly as well as annually. That way, people would get used to dealing online with government and we’d have a shot at passing the 'neighbour test' (i.e. no service will get truly high usage until people are willing to tell their neighbour that they used, say, 'that new tax credits service online' and got their money in 4 days flat, encouraging their friends to do likewise).

A new plan

 • Rationalise massively the number of government websites. In a 2002 April Fool email sent widely around government, I announced the e-Envoy’s department had seized control of government’s domain name registry and routed all website URLs to UKonline.gov.uk and was in the process of moving all content to that same site. Many people reading the mail a few days later applauded the initiative. Something similar is needed. The only reason to have a website is if someone else isn’t already doing it. Even if someone isn’t, there’s rarely a need for a new site and a new brand for every new idea.

• Engage forcefully with the private sector. The banks, building societies, pension and insurance companies need to tie their services into those offered by government. Want a pension forecast? Why go to government – what you really want to know is how much will you need to live on when you’re 65 (67?) and how you'll put that much money away in time. Government can’t and won’t tell you that. Similarly, authentication services need to be provided that can be used across both public and private sectors – speeding the registration process in either direction. With Tesco more trusted than government, why shouldn't it work this way? The Government Gateway, with over 7 million registered users, has much to offer the private sector – and they, in turn, could accelerate the usage of hardware tokens for authentication (to rid us of the problems of phishing) and so on.

• Open up every service. The folks at mySociety, Public Whip and TheyWorkForYou.com have shown what can be done by a small, dedicated (in the sense of passionate) team. No-one should ever need to visit the absurdly difficult-to-use Hansard site when it’s much easier through the services these folks have created. Incentives for small third parties to offer services should be created.

• Build services based on what people need to do. We know every year there are some 38 million tax discs issued for cars and that nearly everyone shows up at a post office with a tax disc, insurance form and MOT. For years, people in government have been talking about insurance companies issuing discs – but it still hasn’t happened. Bring together disparate services that have the same basic data requirements – tax credits and child benefit, housing benefit and council tax benefit etc.

• Increase the use of intermediaries. For the 45% of people who aren’t using the Internet and aren’t likely to any time soon, web-enabled services are so much hocus pocus. There needs to be a drive to take services to where people use them. Andrew Pinder, the former e-Envoy, used to talk about kiosks in pubs. He may have been speaking half in jest, but he probably wasn’t wrong. If that’s where people in a small village in Shropshire are to be found (and with Post Offices diminishing, it's probably the only place to get access to the locals), that’s where the services need to be available. Government needs to be in the wholesale market if it's to be efficient – there are far smarter, more fleet of foot retail providers that can deliver the individual transactions.

• Clean up the data. One of the reasons why government is probably afraid to join up services is that they know the data held on any given citizen is wildly out of date or just plain wrong. Joining up services would expose this. When I first took the business plan for the Government Gateway to a minister outside the Cabinet Office, this problem was quickly identified and seen as a huge impediment to progress.
More to come.

Monday, January 18, 2016

The Billion Pound G-Cloud

Sometime in the next few weeks, spend through the G-Cloud framework will cross £1 billion.  Yep, a cool billion.  A billion here and a billion there and pretty soon you're talking real money.

Does that mean G-Cloud has been successful?  Has it achieved what it was set up for? Has it broken the mould?  I guess we could say this is a story in four lots.

Well, that depends:

1) The Trend

Let's start with this chart showing the monthly spend since inception.



It shows 400-fold growth since day one, but spend looks pretty flat over the last year or so, despite that peak 3 months ago. Given that this framework had a standing start, for both customers and suppliers, it looks pretty good.  It took time for potential customers (and suppliers) to get their heads round it.  Some still haven't. And perhaps that's why things seem to have stalled?

Total spend to date is a little over £903m.  At roughly £40m a month (based on the November figures), £1bn should be reached before the end of February, maybe sooner. And then the bollard budget might swing into action and we'll see a year end boost (contrary to the principles of pay as you go cloud services though that would be).
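That projection is just back-of-the-envelope arithmetic, sketched below using the figures above (the £40m monthly rate is an assumption based on the November figures, and real spend will wobble month to month):

```python
# Rough projection of when cumulative G-Cloud spend crosses £1bn,
# using the figures quoted in the post.
spend_to_date_m = 903   # £m spent through the framework so far
monthly_rate_m = 40     # £m/month, roughly the November run rate
target_m = 1000         # £1bn

months_to_target = (target_m - spend_to_date_m) / monthly_rate_m
print(f"~{months_to_target:.1f} months to £1bn")  # ~2.4 months
```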

Government no longer publishes total IT spend figures but, in the past, it's been estimated to be somewhere between £10bn and £16bn per year.  G-Cloud's annual spend, then, is a tiny part of that overall spend.  G-Cloud fans have, though, suggested that £1 spent on G-Cloud is equivalent to £10 or even £50 spent the old way - that may be the case for hosting costs, it certainly isn't the case for Lot 4 costs (though I am quite sure there has been some reduction in rates simply from the real innovation that G-Cloud brought - transparency on prices).

2) The Overall Composition

Up until 18 months ago, I used to publish regular analysis showing where G-Cloud spend was going.  The headline observation then was that some 80% was being spent in Lot 4 - Specialist Cloud Services or, perhaps more accurately, specialist consultancy services.  To date, of our £903m, some £715m, or 79%, has been spent through Lot 4 (the red bars on the chart above).  That's a lot of cloud consultancy.

 
(post updated 19th Jan 2016 with the above graph to show more clearly the percentage that is spent on Lot 4).

With all that spent on cloud consultancy, surely we would see an increase in spend in the other lots?  Lot 4 was created to give customers a vehicle to buy expertise that would explain to them how to migrate from their stale, high capital, high cost legacy services to sleek, shiny, pay as you go cloud services.

Well, maybe.  Spend on IaaS (the blue bars), or Lot 1, is hovering around £4m-£5m a month, though has increased substantially from the early days.  Let's call it £60m/year at the current run rate (we're at £47m now) - if it hits that number it will be double the spend last year, good growth for sure, and that IaaS spend has helped create some new businesses from scratch.  But they probably aren't coining it just yet.

Perhaps the Crown Hosting Service has, ummm, stolen the crown and taken all of the easy business.  Government apparently spends £1.6bn per year on hosting, with £700m of that on facilities and infrastructure, and the CHS was predicted to save some £530m of that once it was running (that looks to be a saving through the end of 2017/18 rather than an annual saving).  But CHS is not designed for cloud hosting; it's designed for legacy systems - call it the Marie Celeste, or the Ship of the Doomed.  You send your legacy apps there and never have to move them again - though, ideally, you migrate them to cloud at some point. We had a similar idea to CHS back in 2002, called True North; it ended badly.

A more positive way to look at this is that Government's hosting costs would have increased if G-Cloud wasn't there - so the £47m spent this year would actually have been £470m or £2.5bn if the money had been spent the old way.  There is no way of knowing, of course - it could be that much of this money is being spent on servers that are idling because people spin them up but don't spin them down, or it could be that more projects are underway at the same time than was previously possible because the cost of hosting is so much lower.

But really, G-Cloud is all about Lot 4.  A persistent and consistent 80% of the monthly spend is going on people, not on servers, software or platforms.  PaaS may well be People As A Service as far as Lot 4 is concerned.

3) Lot 4 Specifically

Let's narrow Lot 4 down to this year only, so that we are not looking at old data.  We have £356m of spend to look at, 80% of which is made by central government.  There's a roughly 50/50 split between small and large companies - though I suspect one or two previously small companies have become very much larger since G-Cloud arrived (although on these revenues, they have not yet become "large").
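This kind of breakdown can be pulled straight from the published spend data.  Here's a minimal sketch in pure Python, with made-up rows and illustrative column names (the official CSV schema differs, so treat the field names as placeholders):

```python
import csv
import io

# Hypothetical rows in the rough shape of the published G-Cloud spend CSV.
# Column names and figures are invented for illustration.
raw = """Lot,CustomerSector,SupplierType,SpendGBP
Lot 4,Central Government,SME,120000
Lot 4,Local Government,Large,30000
Lot 1,Central Government,Large,50000
Lot 4,Central Government,Large,100000
"""

rows = list(csv.DictReader(io.StringIO(raw)))
total = sum(float(r["SpendGBP"]) for r in rows)
lot4_rows = [r for r in rows if r["Lot"] == "Lot 4"]
lot4_spend = sum(float(r["SpendGBP"]) for r in lot4_rows)
central = sum(float(r["SpendGBP"]) for r in lot4_rows
              if r["CustomerSector"] == "Central Government")
sme = sum(float(r["SpendGBP"]) for r in lot4_rows
          if r["SupplierType"] == "SME")

print(f"Lot 4 share of all spend:          {lot4_spend / total:.0%}")
print(f"Central government share of Lot 4: {central / lot4_spend:.0%}")
print(f"SME share of Lot 4:                {sme / lot4_spend:.0%}")
```

The same grouping, run over the real monthly CSV, is all the analysis above amounts to.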

If we knew which projects that spend had been committed to, we would soon know what kind of cloud work government was doing, right?

Sadly, £160m is recorded as against "Project Null".  Let's hope it's successful, there's a lot of cash riding on it not becoming void too.

Here are the Top 10 Lot 4 spenders (for this calendar year to date only):

 
 And the Top 10 suppliers:


Cloud companies?  Well, possibly.  Or perhaps, more likely, companies with available (and, obviously, agile) resource for development projects that might, or might not, be deployed to the cloud.  It's also possible that all of these companies are breaking down the legacy systems into components that can be deployed into the cloud starting as soon as this new financial year; we will soon see if that's the case.

To help understand what is most likely, here's another way of looking at the same data.  This plots the length of an engagement (along the X-axis) against the total spend (Y-axis) and shows a dot with the customer and supplier name.



A cloud-related contract under G-Cloud might be expected to be short and sharp - a few months, perhaps, to understand the need, develop the strategy and then ready it for implementation.  With G-Cloud contracts lasting a maximum of two years, you might expect to see no relationship last longer than twenty four months.
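That expectation can be checked mechanically: aggregate the monthly spend records per customer/supplier pair and flag anything spanning more than twenty-four months.  A sketch in pure Python, with invented invoice records (names and figures are made up):

```python
from collections import defaultdict
from datetime import date

# Illustrative records: (customer, supplier, invoice month, £ spend).
# A real analysis would aggregate the published monthly spend data.
invoices = [
    ("Dept A", "Supplier X", date(2013, 1, 1), 200_000),
    ("Dept A", "Supplier X", date(2015, 10, 1), 150_000),
    ("Dept B", "Supplier Y", date(2015, 3, 1), 50_000),
    ("Dept B", "Supplier Y", date(2015, 9, 1), 40_000),
]

# Group invoices by customer/supplier pair.
engagements = defaultdict(list)
for customer, supplier, month, spend in invoices:
    engagements[(customer, supplier)].append((month, spend))

for (customer, supplier), records in engagements.items():
    months = [m for m, _ in records]
    # Engagement length in whole months, first invoice to last.
    length = ((max(months).year - min(months).year) * 12
              + max(months).month - min(months).month)
    total = sum(s for _, s in records)
    flag = "  <-- longer than a 24-month call-off" if length > 24 else ""
    print(f"{customer} / {supplier}: {length} months, £{total:,}{flag}")
```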

But there are some big contracts here that appear to have been running for far longer than twenty four months.  And, whilst it's very clear that G-Cloud has enabled far greater access to SME capability than any previous framework, there are some old familiar names here.

4) Conclusions

G-Cloud without Lot 4 would look far less impressive, even if the spend it is replacing was 10x higher.  It's clear that we need:

- Transparency. What is the Lot 4 spend going to?

- Telegraphing of need.  What will government entities come to market for over the next 6-12 months?

-  Targets.  The old target was that 50% of new IT spend would be on cloud.  Little has been said about that in a long time.  Little has, in fact, been said about plans.  What are the new targets?

Most of those points are not new - I've said them before, for instance in a previous post about G-Cloud as a Hobby and also here about how to take G-Cloud Further Forward.

In short, Lot 4 needs to be looked at hard - and government needs to get serious about the opportunity that this framework (which broke new ground at inception but has been allowed to fester somewhat) presents for restructuring how IT is delivered.

Acknowledgements

I'm indebted, as ever, to Dan Harrison for taking the raw G-Cloud data and producing these far simpler to follow graphs and tables.  I maintain that GDS should long ago have hired him to do their data analysis.  I'm all for open data, but without presentation, the consequences of the data go unremarked.


Monday, February 16, 2015

Performance Dashboard July 2003 - The Steep Hill of Adoption

With gov.uk's Verify appearing on the Performance Dashboard for the first time, I was taken all the way back to the early 2000s when we published our own dashboards for the Government Gateway, Direct.gov.uk and our other services.  Here's one from July 2003 - there must have been earlier ones but I don't have them to hand:



This is the graph that particularly resonated:

With the equivalent from back then being:

After 4 years of effort on the Identity programme (now called Verify), the figures make pretty dismal reading - low usage, low first-time authentication success, few services connected.  But, you know what, the data is right there for everyone to see, and it's plain that no one is going to give up on this, so gradually the issues will be sorted, people will authenticate more easily and more services will be added.  It's a very steep hill to climb though.

We started the Gateway with just the Inland Revenue, HM Customs and MAFF (all department names that have long since fallen away) - and adding more was a long and painful process.  So I feel for the Verify team - I wouldn't have approached things the way they have, but it's for each iteration to pick its path.  There were, though, plenty of lessons to learn that would have made things easier.

There is, though, a big hill for Verify to climb.  It will be interesting to watch.

Monday, January 05, 2015

Mind The Gaps - Nothing New Under The Sun


As we start 2015, a year when several big contracts are approaching their end dates and replacement solutions will need to be in place, here's a presentation I gave a couple of times last year looking at the challenges of breaking up traditional, single prime IT contracts into potentially lots of smaller, shorter contracts:

Monday, August 11, 2014

Hosting Crowns

Late in 2013 there was a flurry of interest in a project called the "Crown Hosting Service" - covered, for instance, by Bryan Glick at Computer Weekly. The aim, according to the article, was to save some £500m within a few years by reducing the cost of looking after servers.  The ITT for this "explicit legacy procurement" (as Liam Maxwell accurately labelled it) was issued in July 2014.

Apparently some £1.6bn is spent by government on hosting its IT estate.  That figure is about half the £3bn it costs to run government's central civil estate (buildings); and that £3bn is itself only 15% of the cost of running the total estate.  The total cost of running the estate is, then, something like £20bn (with an overall estate value of c£370bn).

It's interesting, then, to see increasing instances of departments sharing buildings - the picture below shows two agencies that you might not associate together.  The Intellectual Property Office and the Insolvency Service share a building - though I'm hoping it's not because they share a customer base and offer a one stop shop.  The IPO and the IS are both part of BIS (which is just around the corner) so perhaps this is a like-for-like share.


But over the next couple of years, and maybe in the next couple of months, we are certainly going to see more sharing - DCLG will soon vacate its Victoria location and begin sharing with another central government department.  Definitely not like for like.

Such office sharing brings plenty of challenges.  At the simpler end are things such as standard entry passes and clearance levels.  At a more complicated level is the IT infrastructure - at present usually entirely unique to each department.  A DCLG desktop will not easily plug straight into the network of another department - even a wireless network would need to be told about new devices and where they need to point.

With increasing commoditisation of services, and increasing sharing, it's easily possible to see - from a purely IT point of view - government buildings that function, for large numbers of HQ and, perhaps especially, field staff, as drop in centres where desks are available for whoever is passing provided that they have the right badge.  Those who want to work from home can continue to do so, but will also be able to go to a "local office" where they will have higher bandwidth, better facilities and the opportunity to interact with those in other departments and who run other services.  

In this image, the vertical silos of government departments will be broken up simply because people no longer need to go to "their" department to do their day job, but they can go wherever makes most sense.  Maybe, just maybe, the one stop shop will become a reality because staff can go where the customers are, rather than where their offices are.

G-Cloud By The Numbers (To End June 2014)

With Dan's Tableau version of the G-Cloud spend data, interested folks need never download the csv file provided by Cabinet Office ever again.  Cabinet Office should subcontract all of their open data publication work to him.



The headlines for G-Cloud spend to the end of June 2014 are:

- No news on the split between lots.  80% of spend continues to be in Lot 4, Specialist Cloud Services

- 50% of the spend is with 10 customers, 80% is with 38 customers

- Spend in June was the lowest since February 2014.  I suspect that is still an artefact of a boost because of year end budget clearouts (and perhaps some effort to move spend out of Lot 4 onto other frameworks)

- 24 suppliers have 50% of the spend, 72 have 80%.  A relative concentration of customer spend is being spread across a wider group of suppliers.  That can only be a good thing

- 5 suppliers have invoiced less than £1,000. 34 less than £10,000

- 10 customers have spent less than £1,000. 122 less than £10,000.  How that squares with the bullet immediately above, I'm not sure

- 524 customers (up from 489 last month) have now used the framework, commissioning 342 suppliers.  80% of the spend is from central government (unsurprising, perhaps, given the top 3 customers - HO, MoJ, CO - account for 31% of the spend)

- 36 customers have spent more than £1m.  56 suppliers have billed more than £1m (up from 51).  This time next year, Rodney, we'll be millionaires.

- Top spending customers stay the same but there's a change in the top 3 suppliers (BJSS, Methods stay the same and Equal Experts squeaks in above IBM to claim the 3rd spot)

One point I will venture, though not terribly well researched, is that once a customer starts spending money with G-Cloud, they are more likely to continue than not.  And once a supplier starts seeing revenue, they are more likely to continue to see it than not.  So effort on the first sale is likely to be rewarded with continued business.

Friday, June 27, 2014

Close Encounters Of The Fourth Kind

Much to my surprise, O2 sent me a text on Wednesday.  The text wasn't the surprising bit - they often send me texts offering me something that I don't want.  The surprise was that this time they didn't offer, they told me that I was just going to get it.  It was 4G.  And I wanted it.


4G. For nothing.  No options. No discussion. No questions allowed.  No SIM change needed.  No conversation about impact on battery life.  Just: turn off your phone in the morning and turn it back on, and if you're in a 4G area, it's all yours.

The next morning I was in Camden Town - no sign of 4G there.  Clearly Camden is a bit rural to have coverage just yet.

But later, in Whitehall, it worked just fine.  And fine means a consistent 15Mbps download (versus the previous day's 3G download speed of 2Mbps).

During 2012/13 I set up a JV owned by the four mobile operators, called at800, that had the task of managing any negative impact from interference with TV signals that might occur because the bottom of the 4G range aligns with the top of the TV range (and, until some recent work by Digital UK, overlapped).


at800 - you might have seen the ads or had a card, or maybe even a filter through your letterbox - has been a great success (last I checked, they'd been in touch with probably 45% of UK households).  That's in part because the problem that all the TV technologists worried might affect up to two million households has actually been far less of a problem but, for the most part, it's because we put together a great team, worked closely with the mobile operators and the broadcasters, ran pilots, tested everything we could and smoothed the way for the 4G roll out.  In truth, we were ready long before the operators were.  They were all fun/challenging/annoying/exciting to work with, but I liked O2's approach most.


After a couple of days testing 4G, I have this to say:

- Coverage in buildings where 3G coverage was previously poor to non-existent has much improved (I can even make calls from the dead centre of buildings where previously I stared only at "No Service")

- Download speeds are certainly faster (roughly equivalent to what I get from Satellite broadband, but without the round trip lag)

- Battery life seems unchanged (I wonder if battery usage is higher during download but because download is so much faster, there's less overall drain)

That said, the nearest mast to my home is still some 200 miles away.  Keep rolling it out O2.

I have no idea how widespread this offer is but, if you get the same text, say "yes". Not that you'll have any choice.  But so far, it's all upside.

Monday, June 23, 2014

The Trouble With Transition - DECC and BIS Go First

In a head-scratching story at the end of last week, DECC and BIS made the front page of the Financial Times (registered users/subscribers can access the story).  Given the front page status, you might imagine that the Smart Meter rollout had gone catastrophically wrong, or that we had mistakenly paid billions in grants to scientists who weren't getting the peer reviews that we wanted, or that we'd suddenly discovered a flaw in our model for climate change or perhaps that the Technology Strategy Board had made an investment that would forever banish viruses and malware.


The BBC followed the story too.

But, no.  Instead we have two departments having problems with their email.  Several Whitehall wags asked me weeks ago (because, yes, this story has been known about for a month or more) whether anyone would either notice, or care, that there was no email coming to or from these departments.   It is, perhaps, a good question.
Business Secretary Mr Cable and Energy and Climate Change Secretary Mr Davey were reported in the Financial Times to be angry about slow and intermittent emails and network problems at their departments since they started migrating to new systems in May.
The real question, though, is what actually is the story here?

- It appeared to be a barely-veiled attack on the current policy of giving more business to SMEs (insider says "in effect they are not necessarily the best fit for this sort of task" ... "an idealistic Tory policy to shake up Whitehall")

- Or was it about splitting up contracts and of taking more responsibility for IT delivery within departments (Mr Cable seemingly fears the combination of cost-cutting and small firms could backfire)?

-  Was the story leaked by Fujitsu who are perhaps sore at losing their £19m per annum, 15 year (yes, 15. 15!) contract?

- Was it really triggered by Ed Davey and Vince Cable complaining to the PM that their email was running slow ("Prime Minister, we need to stop everything - don't make a single decision on IT until we have resolved the problems with our email")?

- Is it even vaguely possible that it is some party political spat where the Liberal Democrats, languishing in the polls, have decided that a key area of differentiation is in how they would manage IT contracts in the future?  And that they would go back to big suppliers and single prime contracts?

- Was it the technology people in the department themselves who wish that they could go back to the glory days of managing IT with only one supplier when SLAs were always met and customers radiated delight at the services they were given?

#unacceptable as Chris Chant would have said.

Richard Holway added his view:
In our view, the pendulum has swung too far. The Cabinet Office refers to legacy ICT contracts as expensive, inflexible and outdated; but moving away from this style of contract does not necessarily mean moving away from the large SIs.
And it appears that it is beginning to dawn on some in UK Government that you can’t do big IT without the big SIs. A mixed economy approach – involving large and small suppliers - is what’s needed.
By pendulum, he means that equilibrium sat with less than a dozen suppliers taking more than 75% of the government's £16bn annual spend on IT.  And that this government, by pushing for SMEs to receive at least 25% of total spend, has somehow swung us all out of kilter, causing or potentially causing chaos.  Of course, 25% of spend is just that - a quarter - it doesn't mean (based on the procurements carried out so far by the MoJ, the Met Police, DCLG and other departments) that SIs are not welcome.

Transitions, especially, in IT are always challenging - see my last blog on the topic (and many before).  DECC and BIS are pretty much first with a change from the old model (one or two very large prime contracts) to the new model (several - maybe ten - suppliers with the bulk of the integration responsibility resting with the customer, even when, as in this case, another supplier is nominally given integration responsibility).  Others will be following soon - including departments with 20-30x more users than DECC and BIS.

Upcoming procurements will be fiercely competed, by big and small suppliers alike.  What is different this time is that there won't be:

-  15 year deals that leave departments sitting with Windows XP, Office 2002, IE 6 and dozens of enterprise applications and hardware that is beyond support.

or

- 15 year deals that leave departments paying for laptops and desktops that are three generations behind, that can't access wireless networks, that can't be used from different government sites and that take 40 minutes to boot.

or

- 15 year deals that mean that only now, 7 years after iPhone and 4 years after iPad, are departments starting to take advantage of truly mobile devices and services.

With shorter contracts, more competition, access to a wider range of services (through frameworks like G-Cloud), only good things can happen.   Costs will fall, the rate of change will increase and users in departments will increasingly see the kind of IT that they have at home (and maybe they'll even get to use some of the same kind of tools, devices and services).

To the specific problem at BIS and DECC then.  I know little about what the actual problem is or was, so this is just speculation:

- We know that, one day, the old email/network/whatever service was switched off and a new one, provided by several new suppliers, was turned on.  We don't know how many suppliers - my guess is a couple, at least one of which is an internal trading fund of government. But most likely not 5 or 10 suppliers.

- We also know that transitions are rarely carried out as big bang moves.  It's not a sensible way to do it - and goodness knows government has learned the perils of big bang enough times over the last 15 years (coincidentally the duration of the Fujitsu contract).

- But what triggered the transition?  Of course a new contract had been signed, but why transition at the time they did?  Had the old contract expired?  Was there a drive to reduce costs, something that could only be triggered by the transition?   

- Who carried the responsibility for testing?  What was tested?  Was it properly tested?  Who said "that's it, we've done enough testing, let's go"?  There is, usually, only one entity that can say that - and that's the government department.  All the more so in this time of increased accountability falling to the customer.

- When someone said "let's go", was there an understanding that things would be bumpy?  Was there a risk register entry, flashing at least amber and maybe red, that said "testing has been insufficient"?

In this golden age of transparency, it would be good if DECC and BIS declared - at least to their peer departments - what had gone wrong so that the lessons can be learned.  But my feeling is that the lessons will be all too clear:

- Accountability lies with the customer.  Make decisions knowing that the comeback will be to you.

- Transition will be bumpy.  Practice it, do dry runs, migrate small numbers of users before migrating many.

- Prepare your users for problems, over-communicate about what is happening.  Step up your support processes around the transition period(s).

- Bring all of your supply chain together and step through how key processes and scenarios will work including when it all goes wrong.

- Have backout processes that you have tested, and know the criteria you will use to put them into action.

Transitions don't come along very often.  The last one DECC and BIS did seems to have been 15 years ago (recognising that DECC was within Defra and even MAFF back then).  They take practice.  Even moving from big firm A to big firm B.  Even moving from Exchange version x to Exchange version y.

What this story isn't, in any way, is a signal that there is something wrong with the current policy of disaggregating contracts, of bringing in new players (small and large) and of reducing the cost of IT.

The challenge ahead is definitely high on the ambition scale - many large scale IT contracts were signed at roughly the same time, a decade or more ago, and are expiring over the next 8 months.  Government departments will find that they are, as one, procuring, transitioning and going live with multiple new providers.  They will be competing for talent in a market where, with the economy growing, there is already plenty of competition.  Suppliers will be evaluating which contracts to bid for and where they, too, can find the people they need - and will be looking for much the same talent as the government departments are.  There are interesting times ahead.

There will be more stories about transition, and how hard it is, from here on in.  What angle the reporting takes in the future will be quite fascinating.

Friday, June 13, 2014

More On G-Cloud Numbers (May 2014 data)

The latest data show increasing spend via G-Cloud - this month tantalisingly close to the arbitrary-but-important round number of £200m, at £191.6m.  The news after that is not terribly interesting:

Cloud spending may, it turns out, be seasonal.  Spend last month dropped to £12m, the lowest seen since October 2013 - all the more noticeable after the bollard budget blockbuster that was March spending.  Start of the new financial year and everyone is, it seems, planning rather than doing.

Lot 4 continues to dominate with 79% of the spend (Lots 1 to 3 are 6%, 1% and 13% respectively).

Much of the rest - top spending customers, top earnings suppliers etc - stays the same.

But there are some anomalies.  Last month I reported that the lowest spending customer had spent only £63.50.  This month they've moved higher, to £85.90.  Thirteen customers have, though, still spent less than £1,000.

We do, though, have nearly 500 customers (489) which is, in my view, more important than the growth in spend - it shows either (a) that more people are looking at what the cloud can do for them, which would be good all round, or (b) that more people have found that G-Cloud as a framework, GCaaS, can help them, which is still good because it's transparent and we can see whether they spend more in the coming months.

51 suppliers have seen revenues of more than £1m. Some of those are brand name, paid up, members of the Oligopoly.  Others look new to the public sector and certainly new to having access to quite so many customers.

There are some other anomalies too - I assume the result of data capture errors.  One supplier has a revenue line showing £1,599,849.80 which is listed as "blank" - there are 9 other such lines, though the other numbers are far, far smaller.  It would be nice to know where to allocate that money.  It may be that it is correctly allocated by Lot (so shows up in the graph below where there are no "blank" entries) but not correctly tagged with a product description.  It would still be nice to know.
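Scanning the published spend data for these anomalies is simple enough to script.  Here's a minimal sketch - the column names and the sample rows are invented for illustration, not the real return format - that totals up line items with blank descriptions and flags customers whose overall spend falls below a threshold:

```python
from collections import defaultdict

def scan_anomalies(rows, min_spend=1000.0):
    """Total up spend on blank-description line items and find
    customers whose cumulative spend is below min_spend."""
    blank_total = 0.0
    by_customer = defaultdict(float)
    for row in rows:
        amount = float(row["spend"])
        by_customer[row["customer"]] += amount
        if not row["description"].strip():
            blank_total += amount
    small = {c: s for c, s in by_customer.items() if s < min_spend}
    return blank_total, small

# Invented sample rows - not the actual G-Cloud returns
rows = [
    {"customer": "Dept A", "description": "IaaS hosting", "spend": "250000"},
    {"customer": "Dept A", "description": "", "spend": "1599849.80"},
    {"customer": "Circle Anglia Limited", "description": "Lot 4 day rate", "spend": "63.50"},
]
blank, small = scan_anomalies(rows)
```

In practice you'd feed this from the published CSV (via `csv.DictReader`) rather than a hand-built list, but the checks are the same.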

A couple of other points to wonder about:

- The Crown Commercial Service are a bigger user of Skyscape than any other purchaser (£1.4m - double HMRC spend, nearly triple Cabinet Office spend, nearly 5 times Home Office).  Is that all hosting of the G-Cloud store and other framework services?

- There are only 125 instances of the word "Cloud" in the line items of what has been purchased (which run to over 2,000 separate lines)

To repeat the last paragraph in my last entry on this topic, for the avoidance of doubt:

Still, there is no other framework in government that gives access to such a wide variety of suppliers (many new to the public sector) and no framework that publishes its information at such a transparent level.  For those two reasons alone, G-Cloud still deserves applause - and, as it grows month on month, I hope that it will only get louder. 




Monday, June 09, 2014

Digital Government 2002 - Doing Something Magical

Now here's a blast from the past!  Here's a "talking head" video recorded, I think, in early 2002 all about e-government (I am, of course, the talking head).  Some months later, much to my surprise, the video popped up at a conference I was attending - I remember looking up to see my head on a dozen 6' tall screens around the auditorium.

It's easily dated by me talking about increasing use of PDAs (you'll even see me using one) and the rollout of 3G, not to mention the ukonline.gov.uk logo flashing up in the opening frames and e-government, as opposed to Digital By Default.

But the underpinning points of making the move from government to online government, e-government or a Digital by Default approach are much the same now as then:

"The citizen gets the services they need, when they need them, where they need then, how they need them ... without having to worry about ... the barriers and burdens of dealing with government"

video

"You've changed government so fundamentally ... people are spending less time interacting and are getting real benefit"

Lessons learned: get a haircut before being taped, learn your lines and, even when in America, don't wear a t-shirt under your shirt (my excuse is that it was winter).

Thursday, June 05, 2014

G-Cloud By The Numbers (April 2014 data, released mid-May 2014)

I haven't looked at the G-Cloud spend data for a few months (the last review was in December) - something changed with the data format earlier in the year and it screwed up all my nicely laid out spreadsheets; I've only just got round to reworking them.

- After 25 months of use, total spend via the framework is £175.5m

- Spend in all of 2013 was £85m.  Spend in the first 4 months of 2014 is £81m, about 46% of the total spend so far

- The run rate for 2014, if that spend rate continues, is perhaps more than £240m.  I suspect we could see much higher than that given the expiry of many central government IT contracts in 2015 and 2016 (and so an increase in experimentation, preparation for transition and even actual transition ahead of expiry)

- The split between the lots in December 2013 was Lot 1: 4%, Lot 2: 1%, Lot 3: 16%, Lot 4: 78%

- As of now, the split is similar: 6%, 1%, 14%, 79%
 
- The 2014 year to date split is little different: 8%, 1%, 12%, 80%
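The run-rate arithmetic behind the figures above is straightforward - a quick sketch using the numbers quoted in this post:

```python
total_to_date = 175.5   # £m, total framework spend after 25 months
spend_2013 = 85.0       # £m, all of calendar 2013
spend_2014_ytd = 81.0   # £m, January to April 2014

# 2014 so far as a share of all spend to date (~46%)
share_of_total = spend_2014_ytd / total_to_date

# Annualise four months of spend to get the 2014 run rate (~£243m)
run_rate_2014 = spend_2014_ytd / 4 * 12
```

That simple annualisation is where "perhaps more than £240m" comes from; any uptick from contract expiries in 2015/16 would push the real figure higher still.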





Conclusion:  The vast bulk of the spending is still via Lot 4 - people and people as a service.  I'd expected that to start changing now, with the Digital Services Framework fully live.  That said, Lot 4's spend per month has changed little since November 2013, except for a peak of £23m in March (roughly double the average spend over the last 6 months) which you can easily see in the graph above.

Conclusion: Infrastructure as a Service (from Lot 1) is gradually increasing - it's gone from c£800k/month to c£1.5m a month in the last 6 months.  Again, there was a peak in March, of £2m. 

Conclusion: It's an old cliche but plainly there was a bit of a budget clear out in March with departments rushing to spend money.  March 2014 spend was £30m - roughly double any other month either side.

- In December 2013, BJSS was the largest supplier, followed by IBM.  Today, BJSS are still number 1, but Methods have moved to number 2, with IBM at 3.

- The Home Office is still the highest spending customer, at £24.7m (nearly double their spend as of December).  MoJ are second at £16.7m with Cabinet Office third at £12.5m

- The top 10 customers account for 50% of the spend on the framework.  The top 20 make up 67%. That's exactly how it was in December.  More than 100 new customers have been added since December, though, with over 470 customers now listed.

- Some 310 suppliers have won business.   The top 10 have 32% of the market, the top 20 have 47% (that's a better spread than the customer equivalent metrics)

- Last time, the lowest spending customer was the "Wales Office", with £375.   We are at a new low now, with "Circle Anglia Limited" spending £63.50 (I wonder if the cost of processing that order was far greater?).

-  Thirteen customers have spent less than £1,000.  Thirty one have spent more than £1m
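The concentration figures above (top 10 with 50% of customer spend, top 10 suppliers with 32%) are just cumulative shares of a sorted list.  A minimal sketch, with invented spend figures standing in for the published data:

```python
def top_n_share(spends, n):
    """Fraction of total spend taken by the n highest-spending entries."""
    ordered = sorted(spends, reverse=True)
    return sum(ordered[:n]) / sum(ordered)

# Invented customer spend figures in £m - the first three echo the
# Home Office / MoJ / Cabinet Office numbers, the rest are made up
customer_spend = [24.7, 16.7, 12.5, 8.0, 6.0, 5.0, 4.0, 3.5, 3.0, 2.6] + [0.5] * 100
share = top_n_share(customer_spend, 10)
```

Running the same function over the supplier column gives the supplier-side concentration, which is how you can see at a glance that the supplier spread is healthier than the customer one.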

Conclusion:  Much the same as in December - Adoption of the framework is still spotty, but it is definitely improving.  A greater spread of customers, spending higher amounts of money - though mostly concentrated in Lot 4.  A few more suppliers have likely seen their business utterly transformed with this new access to public sector customers.

Overall Conclusion: G-Cloud needs, still, to break away from its reliance on Lot 4 sales.  Scanning through the sales by line item, there are far too many descriptions that say simply "project manager", "tester", "IT project manager" etc.  There are even line items (not in Lot 4) that say "expenses - 4gb memory stick" - a whole new meaning to the phrase "cloud storage" perhaps.

Still, there is no other framework in government that gives access to such a wide variety of suppliers (many new to the public sector) and no framework that publishes its information at such a transparent level.  For those two reasons alone, G-Cloud still deserves applause - and, as it grows month on month, I hope that it will only get louder.

Tuesday, May 13, 2014

Officially Uncertain

It turns out that the new security classifications, introduced at the start of April 2014, have collapsed into a single new tier - Officially Uncertain.  I worried that this might happen earlier in the year.

Last week, for instance, it was clearly explained to me that "OFFICIAL is not a protective marking, it does not convey any associated controls on how the information is to be handled."

What that means, of course, is that with no agreed baseline of controls for protecting information marked OFFICIAL, each department or government entity is able to decide, alone, what it should do to protect that information.  Adios commodity cloud.

In a different meeting with different people, it was explained to me, just as clearly, that no one was going to go back and revisit their historical data to check what label should be applied to it (on an individual, file by file basis).  The only conclusion, therefore, was that all historical data should be marked OFFICIAL SENSITIVE (notwithstanding that, if OFFICIAL isn't a protective marking, then neither is this one, and that the guidance suggests use of "sensitive" should be by exception only - this is one big exception).  And given it's all a bit sensitive, that historical data should be treated as if it were IL3 and kept in a secure facility in the UK.  Adieu commodity cloud.



All is not yet lost I hope.  Folks I speak to in CESG - sane, rational people that they are - recognise that this is a "generational change" and it will take some time before the implications are understood.  The trouble is that whilst time is on the side of government, it's not on the side of the smaller/newer players who want to provide services for government and for whom UNCERTAINTY is anathema.

In these early days, some guidance (not rules) would help people navigate through this uncertainty and support the development of products that met the needs of the bulk of government entities (be they local, central, arms length or otherwise).  The existing loose words - I can't stretch to guidance for these - known as the "Cloud Security Principles" get to the precipice of new controls, look over and leap sharply backwards, all a tremble.

Indeed, the summary of the approach recommended by those who best understand security is:

1. Think about the assets you have and what you're trying to do with them

2. Think about the attackers who'll be trying to interfere with those assets as you deliver your business function

3. Implement some mitigations (physical, procedural, personnel, technical) to address those identified risks

4. Get assurance as required in those mitigations

5. Thinking about the updated solution design, go back to step 1 to see if you've introduced any new risks.

6. Repeat until you've hit a level of confidence you are happy with

My guess is that 6, alone, could lead to an awful lot of iterations that culminate in "guards with machine guns, patrolling with dogs around a perimeter protected by an electric fence".  Of course, the number of guards, the type of guns, the eagerness of the dogs, the height of the fence and the shock provided by the fence will vary from entity to entity.
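The six steps above are, in effect, a loop that runs until residual risk falls within someone's tolerance.  Here's a minimal sketch of that iteration - the function names, the risk model and the round limit are all my own invention, not anything CESG publishes:

```python
def assess(assets, identify_attackers, mitigate, assure, residual_risk,
           tolerance, max_rounds=10):
    """Iterate the six-step risk loop: identify threats, add a
    mitigation, assure it, re-measure risk; stop when risk is
    within tolerance or we give up."""
    design = {"assets": assets, "mitigations": []}
    for round_no in range(1, max_rounds + 1):
        threats = identify_attackers(design)             # step 2
        design["mitigations"].append(mitigate(threats))  # step 3
        assure(design)                                   # step 4
        risk = residual_risk(design)                     # steps 1/5: re-examine
        if risk <= tolerance:                            # step 6: confident enough?
            return design, round_no
    raise RuntimeError("confidence never reached - send for the dogs and fences")

# Toy example: each mitigation knocks 2 points off a starting risk of 10
design, rounds = assess(
    assets=["email"],
    identify_attackers=lambda d: ["phishing"],
    mitigate=lambda threats: "filtering",
    assure=lambda d: None,
    residual_risk=lambda d: 10 - 2 * len(d["mitigations"]),
    tolerance=4,
)
```

The point of the sketch is the exit condition: with no shared definition of `tolerance`, every entity will run a different number of laps and stop at a different fence height.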



There is sunshine through some of the clouds though ... some departments are rolling out PCs using native BitLocker rather than add-on encryption, others are trialling Windows 8.1 on tablets, whilst managed iPads have been around for some months.



But a move of central government departments to public cloud services (remember - 50% of new spend to be in the public cloud by 2015) looks to be a long way from here.  I don't think I can even soften it and say that a significant move to even a private, public sector only, cloud is that close.


Friday, March 14, 2014

The Trouble With ... Spectrum


To paraphrase Mark Twain, "Buy spectrum.  They're not making it anymore."

And if Ofcom's figures are right, the spectrum that we use today is worth £50bn a year (as of 2011) to the UK economy.  The government say that they want to double that contribution by 2025 - it is already up 25% in the 5 years from 2006 to 2011.  It's unclear why the data is as of 2011 - one suspects that if it was up 25% in 5 years, it may already be up another 12.5% since then making doubling by 2025 at least a little easier.

If you've ever seen how spectrum is carved up in the UK, you know it's very complicated.  Here's a picture that shows just how complicated:


Every so often, the government auctions, through Ofcom, a slice of spectrum.  In April 2000, the sale of the 3G spectrum realised some £22bn.  There was much delight - until stock markets around the world fell dramatically not long afterwards, something which was at least partly to blame for delaying the actual rollout of 3G services (indeed, Vodafone narrowly avoided a fine for failing to reach 90% coverage on time - with an extension granted to the end of 2013).

That 90% is a measure of population coverage, not geographical coverage - which explains why you will often fail to get a signal in the middle of a park, just outside a small town or, often, anywhere with a beautiful view where you want to take a picture and send it to someone there and then, like if you were here:


Of course, there are doubtless plenty of people wandering down Baker Street right now who also can't get or even maintain a 3G signal.

The 4G auctions took place a little over 18 months ago and resulted in revenues that were some 90% lower than for 3G - partly a reflection of the times, partly because of somewhat decreased competition and partly because of smarter bidding on the part of at least some of the operators.  The 4G build out is underway now though there are, in effect, only two networks being built - O2 and Vodafone are sharing their network, as are Three and EE (the latter had a significant headstart on 4G because they lobbied, successfully, to reassign their spare 1800MHz spectrum for use as 4G).

For me, though, coverage shouldn't be a competitive metric.  Coverage should be national - not 90% national, proper national.  Using coverage as a metric, coupled with charging for the spectrum, and then splitting up the job of building the network across two (or more) players means that they will go where the money is - which means major towns first (in fact, it means London first and for longest), then major travel corridors and commuter towns and the rural areas never.  The same is true, of course, for broadband - though our broadband rollout is mostly physical cable rather than over the air, the same investment/return challenge remains.

And that always seems to leave government to fill in the gaps - whether with the recent "not spots" project for mobile that will result in a few dozen masts being set up (for multiple operator use) to cover some (not all) of the gaps in coverage or a series of rural broadband projects (that only BT is winning) - neither of which is progressing very fast and certainly not covering the gaps.

With the upcoming replacement of Airwave (where truly national - 100% geographic - coverage is required), the rollout of smart meters (where the ideal way for a meter to send its reading home is via SMS or over a home broadband network) and the need to plug gaps in both mobile and broadband coverage, surely there is a need for an approach that we might call "national infrastructure"?

So focusing on mobile and, particularly, where it converges with broadband (on the basis that one can substitute for the other and that the presence of one could drive the other), could one or more bodies be set up whose job is to create truly national coverage, selling the capacity they create to content and service providers who want it?  That should ensure coverage, create economies of scale and still allow competition (even more so than today given that in many areas, there is only one mobile provider to choose from).  Network Rail for our telecomms infrastructure.

To put it another way: is it relevant or effective to have competitive build-outs of such nationally vital capabilities as broadband and 4G (and later 5G, 6G etc) mobile?

If the Airwave replacement were to base its solution on 4G (moving away from Tetra) - and I have no idea if they will or they won't, but given the Emergency Services will have an increasing need for data in the future, it seems likely - then we would have another player doing a national rollout, funded by government (either funded directly or funded through recovery of costs).

There are probably 50,000 mobile masts in the country today.  With the consolidation of networks, that will get to a smaller number, maybe 30,000.  If you add in Airwave, which operates at a lower frequency and so will get better performance but has to cover more area, that number will increase a little (Airwave was formerly owned by O2 so my guess is that much of their gear is co-located with O2 mobile masts).   Perhaps £100k to build those in the first place and perhaps £10k every time they need a major upgrade (change of frequency / change of antenna / boost in backhaul and so on) ... pretty soon you're talking real money.  And that's on top of the cost of spectrum and excludes routine maintenance and refresh costs.
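Putting the rough figures above together shows why this is "real money".  A back-of-the-envelope sketch, using only the numbers quoted in the paragraph (the three-upgrade assumption is mine, for illustration):

```python
masts = 30_000            # post-consolidation estimate from the post
build_cost = 100_000      # £ per mast, initial build
upgrade_cost = 10_000     # £ per mast, per major upgrade
upgrades = 3              # assumed number of major upgrades over a mast's life

initial = masts * build_cost                          # £3.0bn to build
lifetime_upgrades = masts * upgrade_cost * upgrades   # £0.9bn in upgrades
total = initial + lifetime_upgrades                   # £3.9bn, before spectrum
```

And that still excludes the spectrum fees, routine maintenance and refresh costs the post mentions - which is the argument for asking whether duplicating this build competitively makes sense.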

So having already rolled out 2G, 3G and the beginning of the 4G network and likely to replace (or at least combine) Airwave with a 4G-based solution ... and with many areas of the country still struggling to get a wireless signal, let alone a fast broadband solution, I think it's time to look at this again.

Whilst I was writing this, Ofcom put out a press release noting:
Ofcom’s European Broadband Scorecard, published today, shows that the UK leads the EU’s five biggest economies on most measures of coverage, take-up, usage and choice for both mobile and fixed broadband, and performs well on price.
That suggests that the current approach has done rather well - better than a curious selection of five big countries - but it doesn't mean (a) that we should compare ourselves only with those countries, (b) that we shouldn't go for an absolute measure of 100% coverage or (c) that the current approach will keep us ahead of the game.

It seems to me that rather than spend billions on HS2 which aims to transport more people from A to B a bit quicker than they move today, we could, instead, spend a smaller amount on securing the infrastructure for the 21st and 22nd centuries rather than that of the 19th.

Monday, March 03, 2014

The Trouble With ... Transition

In my post two weeks ago (Taking G-Cloud Further Forward), I made this point:
I struggle with the suggestion to make contracts three years instead of two.  This is a smokescreen, it's not what is making buyers nervous really, it's just that they haven't tried transition.  So let's try some - let's fire up e-mail in the cloud for a major department and move it 6 months from now.  

Until it's practiced, no one will know how easy (or incredibly difficult) it is.  The key is not to copy and paste virtual machines, but to move the gigabytes of data that goes with it.  This will prove where PSN is really working (I suspect that there are more problems than anyone has yet admitted to), demonstrate how new capabilities have been designed (and prove whether the pointy things have been set up properly as we used to say - that is, does the design rely on fixed IP address ranges or DNS routing that is hardcoded or whatever).  

This won't work for legacy - that should be moved once and once only to the Crown Hosting Service or some other capability (though recognise that lots of new systems will still need to talk to services there).  There's a lot riding on CHS happening - it will be an interesting year for that programme.
Eoin Jennings of Easynet responded, via Twitter, with a view that buyers thought that there was significant procurement overhead if there was a need to run a procurement every 2 years (or perhaps more frequently given there is an option within G-Cloud to terminate for convenience and move to a new provider). Eoin is seemingly already trying to convince customers - and struggling.


Georgina O'Toole (of Richard Holway's Tech Market View) shared her view that 2 years could be too short, though for a different reason:

An example might be where a Government organisation opts for a ‘private cloud’ solution requiring tailoring to their specifications. In these cases, a supplier would struggle to recoup the level of investment required in order to make a profit on the deal.  The intention is to reduce the need for private cloud delivery over time, as the cloud market “innovates and matures” but in the meantime, the 24-month rule may still deter G-Cloud use.
Both views make sense, and I understand them entirely, in the "current" world of government IT where systems are complex, bespoke and have been maintained under existing contracts for a decade or more. 

But G-Cloud isn't meant for such systems.  It's meant for systems designed under modern rules where portability is part of the objective from the get go.   There shouldn't be private, departmentally focused, clouds being set up - the public sector is perfectly big enough to have its own private cloud capability, supplied by a mixture of vendors who can all reassure government that they are not sharing their servers or storage with whoever they are afraid of sharing them with.  And if suppliers build something and need long contracts to get their return on investment, then they either aren't building the right things, aren't pricing it right or aren't managing it right - though I see that there is plenty of risk in building anything public sector cloud focused until there is more take up, and I applaud the suppliers who have already taken the punt (let's hope that the new protective marking scheme helps there).

Plainly government IT isn't going to turn on a sixpence with new systems transporting in from far off galaxies right away, but it is very much the direction of travel, as evidenced by the various projects set up using a more agile design approach right across government - in HMRC, RPA, Student Loans, DVLA and so on.

What really needs to happen is some thinking through of how it will work and some practice:

- How easy is it to move systems, even those designed for cloud, where IP ranges are fixed and owned by data centre providers?

- How will network stacks (firewalls, routers, load balancers, intrusion detection tools etc) be moved on a like for like basis?

- If a system comes with terabytes or petabytes of data, how will they be transferred so that there is no loss of service (or data)?

- In a world where there is no capex, how does government get its head around not looking at everything as though it owned it?

- If a system is supported by minimal staff (as in 0.05 heads per server or whatever), TUPE doesn't apply (though it may well apply for the first transfer) - how does government (and the supplier base) understand that?

- How can the commercial team sharpen their processes so that what is still taking them many (many) weeks (despite the reality of a far quicker process with G-Cloud) can be done in a shorter period?

Designing this capability in from the start needs to have already started and, for those still grappling with it, there should be speedy reviews of what has already been tried elsewhere in government (I hesitate to say that the "lessons must be learned", on the basis that those 4 words may be the most over-used and under-practiced words in the public sector).

With the cost of hardware roughly halving every 18-24 months, and the cost of cloud hosting therefore falling on a similar basis (perhaps even faster, given increasing rates of automation), government can benefit from continuously falling costs in at least part of its stack - and by designing its new systems to avoid lock-in from scratch, government should, in theory, never be dependent on a small number of suppliers and high embedded costs.
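
As a rough illustration of that compounding effect - assuming the 18-24 month halving actually holds, which is an assumption rather than a guarantee - a unit of hosting can be projected forward:

```python
def projected_cost(initial, years, halving_period_years=1.5):
    """Unit cost after `years`, if cost halves every `halving_period_years`."""
    return initial * 0.5 ** (years / halving_period_years)

# A unit of hosting costing 100 (in any currency) today, under an
# 18-month halving assumption:
for y in (0, 3, 6):
    print(y, round(projected_cost(100, y), 2))  # 0 -> 100.0, 3 -> 25.0, 6 -> 6.25
```

Over a typical six-year contract, a price locked in at day one leaves roughly 90% of the potential hosting saving on the table - which is the argument for designing portability in rather than signing long deals.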

Now, that all makes sense for hosting.  How government does the same for application support and for service integration and management, and how it gets on the path to redesigning its applications (across the board) so that they can be moved whenever new pricing makes sense, is the real problem to grapple with.

Monday, February 17, 2014

Taking G-Cloud Further Forward

A recent blog post from the G-Cloud team talks about how they plan to take the framework forward. I don't think it goes quite far enough, so here are my thoughts on taking it even further forward.

Starting with that G-Cloud post:

It's noted that "research carried out by the 6 Degree Group suggests that nearly 90 percent of local authorities have not heard of G-Cloud".  This statement is made in the context of the potential buyer count being 30,000 strong.  Some, like David Moss, have confused this and concluded that 27,000 buyers don't know about G-Cloud.  I don't read it that way - but it's hard to say what it does mean.  A hunt for the "6 Degree Group", presumably twice as good as the 3 Degrees, finds one obvious candidate (actually the 6 Degrees Group), but they make no mention of any research on their blog or their news page (and I can't find them in the list of suppliers who have won business via G-Cloud).  Still, if the question was asked properly and of the right people (and therein lies the problem with such research), 90% of local authorities not knowing about G-Cloud is not good.  It might mean that 450 or 900 or 1,350 buyers (depending on whether there are 1, 2 or 3 potential buyers of cloud services in each local authority) don't know about the framework.  How we get to 30,000 potential buyers I don't know - but if there is such a number, perhaps it's a good place to look for potential efficiencies in purchasing.

[Update: I've been provided with the 30,000 - find them here: http://gps.cabinetoffice.gov.uk/sites/default/files/attachments/2013-04-15%20Customer%20URN%20List.xlsx. It includes every army regiment (SASaaS?), every school and thousands of local organisations.  So a theoretical buyer list but not a practical buyer list. I think it better to focus on the likely buyers. G-Cloud is a business - GPS gets 1% on every deal.  That needs to be spent on promoting to those most likely to use it]

[Second update: I've been passed a further insight into the research: http://www.itproportal.com/2013/12/20/g-cloud-uptake-low-among-uk-councils-and-local-authorities/?utm_term=&utm_medium=twitter&utm_campaign=testitppcampaign&utm_source=rss&utm_content=  - the summary from this is that 87% of councils are not currently buying through G-Cloud and 76% did not know what the G-Cloud [framework] could be used for]

Later, we read "But one of the most effective ways of spreading the word about G-Cloud is not by us talking about it, but for others to hear from their peers who have successfully used G-Cloud. There are many positive stories to tell, and we will be publishing some of the experiences of buyers across the public sector in the coming months" - True, of course.  Except that if people haven't heard of G-Cloud, they won't be looking on the G-Cloud blog for stories about how great the framework is.  Perhaps another route to further efficiencies is to look at the vast number of frameworks that exist today (particularly in local government and the NHS) and start killing them off, so that purchases are concentrated in the few that really have the potential to drive cost savings allied with better service delivery.

And then "We are working with various trade bodies and organisations to continue to ensure we attract the best and most innovative suppliers from across the UK."  G-Cloud's problem today isn't, as far as we can tell, a lack of innovative suppliers - it's a lack of purchasing through it.  In other words, a lack of demand.  True, novel services may attract buyers, but most government entities are still in the "toe in the water" stage of cloud, experimenting with a little IaaS, some PaaS and, based on the G-Cloud numbers, quite a lot of SaaS (some £15m in the latest figures, or about 16% of total spend, versus only 4% for IaaS and 1% for PaaS).

On the services themselves, we are told that "We are carrying out a systematic review of all services and have, so far, deleted around 100 that do not qualify."  I can only applaud that.  Though I suspect the real number to delete may be in the 1000s, not the 100s.  It's a difficult balance - the idea of G-Cloud is to attract more and more suppliers with more and more services, but buyers only want sensible, viable services that exist and are proven to work.  It's not like iTunes, where it only takes one person to download an app and rate it 1* because it doesn't work/keeps crashing/doesn't synchronise, and so warn other potential buyers to steer clear - the vast majority of G-Cloud services have had no takers at all, and even those that have lack any feedback on how it went (I know that this was one of the top goals of the original team, but that they were hampered by "the rules").

There's danger ahead too: "Security accreditation is required for all services that will hold information assessed at Business Impact Level profiles 11x/22x, 33x and above. But of course, with the new security protection markings that are being introduced on 1 April, that will change. We will be publishing clear guidance on how this will affect accreditation of G-Cloud suppliers and services soon."  It's mid-February and the new guidelines are just 7 weeks away.  That doesn't give suppliers long to plan for, or make, any changes that are needed (the good news here being that government will likely take even longer to plan for, and make, such changes at its end).  This is, as CESG people have said to me, a generational change - it's going to take a while, but that doesn't mean we should let it drift.

Worryingly: "we’re excited to be looking at how a new and improved CloudStore, can act as a single space for public sector buyers to find what they need on all digital frameworks."  I don't know that a new store is needed; we're already on the third reworking - would a fourth help?  As far as I can tell, the current store is based on Magento which, by all accounts and reviews online, is a very powerful tool that, in the right hands, can do pretty much whatever you want from a buying and selling standpoint.  I believe a large part of the problem is the data in the store - searching for relatively straightforward keywords often returns a surprising answer - try it yourself: type in some popular supplier names or some services that you might want to buy.   Adding in more frameworks (especially where they overlap, as PSN and G-Cloud do in several areas) will more than likely confuse the story - I know that Amazon manages it effortlessly across a zillion products, but it seems unlikely that government can implement the same any time soon (wait - they could just use Amazon). I would rather see the time, and money, spent getting a set of products that are accurately described and that can be found using a series of canned searches based on what buyers are interested in.
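
To illustrate what canned searches might look like - the categories, keywords and catalogue records below are invented for the example, not taken from CloudStore - the idea is a curated map from what buyers actually ask for to filters known to return clean results:

```python
# Hypothetical canned searches: a curated map from common buyer intents
# to the lot and keyword filters that return relevant, vetted results.
CANNED_SEARCHES = {
    "email in the cloud": {"lot": "SaaS", "keywords": ["email", "hosted exchange"]},
    "virtual machines":   {"lot": "IaaS", "keywords": ["compute", "vm"]},
    "web hosting":        {"lot": "PaaS", "keywords": ["web", "hosting"]},
}

def run_canned_search(intent, catalogue):
    """Filter a catalogue of service records using a canned search."""
    spec = CANNED_SEARCHES[intent]
    return [s for s in catalogue
            if s["lot"] == spec["lot"]
            and any(k in s["description"].lower() for k in spec["keywords"])]

catalogue = [
    {"name": "DeptMail",   "lot": "SaaS", "description": "Hosted email service"},
    {"name": "BigCompute", "lot": "IaaS", "description": "On-demand VM compute"},
]
print([s["name"] for s in run_canned_search("email in the cloud", catalogue)])
# -> ['DeptMail']
```

The point isn't the code - it's that a few dozen curated entry points, maintained by people who know what buyers want, would do more for findability than another rebuild of the store's free-text search over noisy listings.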

So, let's ramp up the PR and education (for buyers), upgrade the assurance process so that suppliers are presenting products that are truly relevant, massively clean up the data in the existing store, get rid of duplicate and no-longer-competitive buying routes (so that government can aggregate for best value), make sure that buyers know more about which services are real and what they can do, and not rebuild the damn cloud store again ...

... What else?

Well, the Skyscape+14 letter is not a terrible place to start, though I don't agree with everything suggested.  G-Cloud could and should:

- Provide a mechanism for services to work together.  In the single prime contract era, which is coming to an end, this didn't matter - one of the oligopoly would be tasked to buy something for its departmental customer and would make sure all of the bits fitted together and were supported in the existing contract (or an adjunct).  In a multiple-supplier world where the customer will, more often than not, act as the integrator, both customer and supplier are going to need ways to make this all work together.   The knee bone may be connected to the thigh bone, but that doesn't mean that your email service in the cloud is going to connect via your PSN network to your Active Directory so that you can do everything on your iPad.

- Publish what customers across government are looking at, both in advance and as it occurs - not as data but as information.  Show what proof of concept work is underway (as this will give a sense of what production services might be wanted), highlight what components are going to be in demand when big contracts come to an end, and illustrate what customers are exploring in their detailed strategies (not the vague ones that are published online).  SMEs building for the public sector will not be able to build speculatively - so either the government customer has to buy exactly what the private sector customer is buying (which means that there can be no special requirements, no security rules that are different from what is already there and no assurance regime that is above and beyond what a major retailer or utility might want), or there needs to be a clear pipeline of what is wanted.  Whilst Chris Chant used to say that M&S didn't need to ask people walking down the street how many shirts they would buy if they were to open a store in the area, government isn't yet buying shirts as a service - it is buying services that are designed and secured to government rules (with the coming of Official, that may all be about to change - but we don't know yet because, see above, the guidance isn't available).

- Look at real cases of what customers want to do - let's say that a customer wants to put a very high performing Oracle RAC instance in the cloud - and ensure that there is a way for that to be bought.  It will likely require changes to business models and to terms and conditions, but despite the valiant efforts of GDS there is not yet a switch away from such heavyweight software as Oracle databases.  The challenge (one of many) that government has, in this case, is that it has massive amounts of legacy capability that is not portable, is not horizontally scalable and that cannot be easily moved - Crown Hosting may be a solution to this, if it can be made to work in a reasonable timeframe and if the cost of migration can be minimised.

- I struggle with the suggestion to make contracts three years instead of two.  This is a smokescreen - it isn't really what's making buyers nervous; it's just that they haven't tried transition.  So let's try some - let's fire up email in the cloud for a major department and move it six months from now.  Until it's practised, no one will know how easy (or incredibly difficult) it is.  The key is not to copy and paste virtual machines, but to move the gigabytes of data that go with them.  This will prove where PSN is really working (I suspect that there are more problems than anyone has yet admitted to) and demonstrate how new capabilities have been designed (and prove whether the pointy things have been set up properly, as we used to say - that is, does the design rely on fixed IP address ranges or hardcoded DNS routing or whatever).  This won't work for legacy - that should be moved once and once only, to the Crown Hosting Service or some other capability (though recognise that lots of new systems will still need to talk to services there).  There's a lot riding on CHS happening - it will be an interesting year for that programme.
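
The "move the data" point is where the arithmetic bites.  A back-of-envelope estimate - the 70% link-efficiency figure below is my assumption, and real transfers hit further overheads - shows why copying virtual machines is the easy part:

```python
def transfer_time_hours(data_tb, link_gbps, efficiency=0.7):
    """Rough wall-clock time to move a dataset over a network link."""
    bits = data_tb * 8e12                      # decimal terabytes -> bits
    usable_bps = link_gbps * 1e9 * efficiency  # effective throughput
    return bits / usable_bps / 3600

# Moving 50 TB of departmental mailbox data over a 1 Gbps link:
print(round(transfer_time_hours(50, 1), 1), "hours")  # roughly a week, non-stop
```

Close to a week of continuous transfer for a single service's mailboxes - which is exactly why transition has to be rehearsed while the service is live, not assumed to be a copy-and-paste exercise.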

The ICT contracts for a dozen major departments/government entities are up in the next couple of years - contract values in the tens of billions (old money) will be re-procured.   Cloud services, via G-Cloud, will form an essential pillar of that re-procurement process, because they are the most likely way to extract the cost savings that are needed.  In some cases cloud will be bought because the purchasing decision will be left too late to do it any other way than via a framework (unless the "compelling reason" for extension clause kicks in), but in most cases it will be because the G-Cloud framework absolutely provides the best route to an educated, passionate supplier community who want to disrupt how ICT is done in government today.  We owe them an opportunity to make that happen.  The G-Cloud team needs more resources to make it so - they are, in my view, the poor relation of other initiatives in GDS today.  That, too, needs to change.