Tuesday, August 29, 2006

Starting the Value For Money debate

Ages ago I was asked about Value For Money and specifically whether I'd measured how we'd done in e-government and how I'd explain it to "Jim Bailey". I'm going to take this in a slightly roundabout way - partly because I'm feeling my own way through the question and partly because I don't have the time right now to condense the points into a single, short thread. I hope when I get to the end, the right route through it will be obvious to me and I can post a summary.

Here's an extract from a February 2002 paper on the issues government was facing nearly 2 years into the e-government agenda. I was trying to lay down the argument for greater co-operation between departments rather than the historical silo approach. Bear in mind that, at this point, government had perhaps 800 websites in total, maybe even fewer:

• Advanced technology. Requirements have developed from the initial need for “just a web site” to more complicated “portals”. New technologies will be required, including enterprise content management systems, multi-platform delivery parsers, discussion forums, e-mail exchange software and so on. Costs and risks rise per instance of deployment. Our track record for implementing any project, let alone one that involves new technology, is poor.

• Systems management. As we aggregate content and capability into single systems and as usage of e-government increases, each of these systems becomes a component of the Critical National Infrastructure. CNI requires rigid management, the ability to absorb peak volumes (at the end of the tax year, for instance), well-rehearsed business continuity plans coupled with disaster recovery locations and so on.

• Security and accreditation. Accreditation of a single system presently runs at £100,000, whether it is connected to the GSI or to the Internet. Despite departmental experience, these costs are likely to remain: as back ends are connected to the Internet and personal data is held on portals, the risk of security intrusion increases.

• Identification and authentication. Making sure that you know who the person dealing with you is, and that they have a right to see the data requested, remains one of the largest challenges to delivery. Implementing digital signature signing capability can cost up to £250,000 per department (if done alone), along with additional costs of £40,000 for each new certificate provider (presently there are three but this is expected to rise to ten or more). There can be no cross-government authentication unless there is a single virtual record that different departments can trust.

• Partnerships with intermediaries. There is a step cost for intermediaries to communicate electronically with government. Vendors communicating with the Gateway estimate their costs at £100,000. For many, this will be the first time that such a change has been made. Proliferating many ways of communicating with government will increase these costs, reducing the number of intermediaries and therefore the number of people using the services offered.

• Back end integration. Perhaps the hardest technical challenge – preparing back end (or “legacy”) systems for dealing with electronic transactions. Only a few departments are addressing this at a strategic level. Few vendors have taken what they know from one department and offered it to other departments at minimal cost (hence we have hundreds of systems that do much the same thing) – the first recent opportunity to do that was presented by the Government Gateway standardised interfaces. It is essential to engage all of the vendors in an overall programme of upgrades that will position us for the long term.

The choice was simple - let a thousand flowers bloom, or concentrate on a few key programmes and drive participation around those.
With the Shared Service agenda now in its first full Spring, it looks a little like too many flowers are blooming - and that the lessons from e-government, whilst learned, are not being lived.

Offering a shared service is easy - here, come and share this, it works just the way you need it to. You take the CD, install it on your server at your data centre and everything is fine. Alternatively, you just copy your data across the network from your data centre to their data centre and off you go. The real deal is, inevitably, more complicated.

First, you need a few willing participants. Two is not enough - two folks can strike bilateral deals based on strong relationships, can wiggle through a few tight gaps and can, eventually, get something together. The Gateway started with three - and that brought in complexities: big departments, small departments, citizen focused, business focused, digital certificates, userid/passwords, outsourced providers, inhouse providers, technically literate customers, technically far from literate customers and so on.

Then we need someone in charge. That's going to be hard. Who should it be? The biggest? The smallest? The one with the most to gain, or the most to lose? The one with the strongest person at the head? The one with the biggest team? The one without an IT partner, or the one with? The OGC Gateway reviewers will focus in on governance and will want to see a Senior Responsible Owner in charge with the right stakeholders engaged.

And, to make it serious, everyone has to put some money into the pot. Enough to hurt if it goes belly up but not enough to cripple them. Gambling only money you can afford to lose (as the spread betting ads say) doesn't give the necessary incentive to make it work. It will be pretty weird to see government paying government money - someone with safe hands will have to guard the pot and make sure it's spent properly. To add to the spice, each of the participants should have a separate IT vendor.
And this is where it will get fun. Contracts being what they are, it will be uncertain how one department offers services to another. The "indivisibility of the crown" means that one department can't sue another in the event of failure - and so a kind of musketeer principle has to prevail (all for one and one for all). That takes time; it certainly isn't natural. On top of that, the service will need, eventually at least, to be productised so that it can be offered to more people. That means documentation, service management standards, professional live service, disaster recovery and so on.

Lastly, we need folks who are going to get together regularly and figure out how to get this done: who are going to sort the real requirements from the fictitious ones, who are going to separate the good from the bad and who are going to manage the various vendors and the challenges that will loom in the coming months. We want people who are going to give up large chunks of their lives to see a difficult project realised in full technicolour. We want people who can learn to trust each other and who can deal with the politics both in the room and in the rooms back at base.

So three departments is the minimum; four or more is better, but increases the risk of nothing getting done because no agreement can be reached. Money is needed, along with a person in charge and a way to deliver services to other departments. So how about we count how many shared services look like this?

1 comment:

  1. Anonymous, 10:39 am

    I know it's off topic, sort of, but I'd rather pay 1000 quid for something that I liked than 500 for something I didn't.

    Bearing in mind the amount of money spent on IT projects, why doesn't the government have a prototyping department that gets all the basic functionality working? [It's important that the prototyping team isn't big, isn't abroad, and particularly that it isn't run by Capita, or Detica, Accenture, ATOS-KPMG... etc.]

    I'm sure this would be far more effective in ensuring that cost overruns are minimised: if the prototype team can't get the basics working, then it stands to reason the project is a big risk (and it would be obvious that it was). A small working model is far more effective at showing everyone what it's going to do than a design document.