Smart meters and privacy

Belatedly, I’ve spotted a good post on the Big Brother Watch blog, here, on the subject of smart metering of utilities such as electricity, gas and water. I tried to leave a comment, but for some reason it got rejected… so here you go:

An awful lot of this debate needs to hinge on transparency. If smart metering is ‘something “they” do to “us” for “their” reasons and benefit’, it will run into considerable opposition, fail to generate the buy-in of household energy consumers, and therefore ultimately fail to reduce energy consumption/carbon footprint etc.

That principle has to guide the energy companies, as they consider design factors such as:

– what is the full range of purposes for which energy consumption data is collected, processed and shared with other organisations?

– what’s the balance of interests between the householder, the energy supplier and third parties?

– exactly what data items are collected by the meters?

– how much of that data is transmitted to the energy supplier?

– how much of it is visible to the householder?

– what degree of control does the householder have over what data is sent and what is kept solely for the householder’s use/convenience?

I really worry when I see the Director of Energy UK, on behalf of the UK Energy Industry, quoted as saying, essentially, “consumers’ security is paramount, and all information will be handled in strict accordance with the Data Protection Act”.

Frankly, if those are the success metrics, the privacy outlook is grim.

1 – Security is not the same as privacy, and a system can be designed to provide great security but trample all over users’ privacy. Privacy needs to be an explicit design goal in its own right from the outset.

2 – Data Protection law applies to the subset of data currently classed as “personally identifiable”… and there is still plenty of argument over what that means. As others have pointed out, you don’t need to personally identify someone in order to burgle their house when energy consumption data indicates they are not at home. DP law is an interesting starting point, but is not sufficient to guarantee a privacy-respecting implementation which protects householders from the range of possible threats.

I am also increasingly wary of promises such as that offered by Mark Daeche of First Utility, who says that information should be “secure and anonymous”. The work of Vitaly Shmatikov and Arvind Narayanan, in particular, has made it increasingly clear that anonymisation of consumer data is extremely hard to guarantee. Their papers should be required reading for anyone involved with supposedly “anonymised” datasets – required, but probably not reassuring. (See Arvind’s excellent blog here, aptly named “33 Bits of Entropy”, for well-informed and well-reasoned thoughts on data and privacy.)

The question of “entropy” in personal data is going to be a key one as we speed ever faster into the world of grids, sensors and smart devices. As I mentioned in a tweet earlier today, one perverse consequence is that the more users pare their electricity consumption down to the bare essentials, the more identifiable the resulting usage pattern becomes.
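To make the arithmetic behind that point concrete, here is a minimal, purely illustrative Python sketch of the “33 bits” idea. The attributes and the ‘one in N’ figures are invented for the example; they are not drawn from any real dataset or meter specification.

```python
import math

# Roughly 6.8 billion people in 2010, so singling out one individual
# takes about log2(6.8e9) ~= 32.7 bits of information -- the "33 bits".
world_population = 6_800_000_000
print(f"bits needed to identify one person: {math.log2(world_population):.1f}")

# Each fact an observer learns shrinks the "anonymity set" of people who
# still match. The 'one in N' figures below are invented for illustration.
facts = [
    ("lives in one particular postcode district", 300_000),
    ("is on an overnight off-peak electricity tariff", 10),
    ("house is reliably empty on weekday afternoons", 3),
    ("usage drops to a bare-essentials baseline at weekends", 100),
]

anonymity_set = float(world_population)
for fact, one_in_n in facts:
    anonymity_set /= one_in_n
    print(f"{fact}: ~{anonymity_set:,.0f} people still match "
          f"({math.log2(anonymity_set):.1f} bits of anonymity left)")
```

Four fairly mundane facts, and the notional anonymity set has collapsed from billions to a handful – which is exactly why a distinctive, pared-down consumption pattern is anything but anonymous.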

Guardian Tech interview with Eric Schmidt

Some of my readers are probably old enough to remember the occasion in 1984 when President Ronald Reagan stepped up to a microphone for a sound check and uttered the memorable words:

“My fellow Americans, I’m pleased to tell you today that I’ve signed legislation that will outlaw Russia forever. We begin bombing in five minutes.”

This week’s Tech Weekly audiocast on the Guardian site (here) includes a brief interview with Eric Schmidt (it’s in the first 10 minutes, followed by analysis/discussion from the Tech Weekly team).

In the excerpt, Eric Schmidt explains to Jemima Kiss how Google happened to capture some network traffic as its rather inaccurately named camera cars “sniffed” wireless SSIDs as well as Street View image data.

Unfortunately – at least on the basis of this part of the interview – I am still not convinced that Mr Schmidt really has as firm a grasp on the privacy issue as I would have hoped for from Google’s CEO. Here’s why:

1 – the problem of cross-border jurisdiction. One of the examples Schmidt cites, of ‘how much data we all happen to disclose’, is that of mobile phone location data. He describes it as a ‘legal requirement’ that your ISP should be able to locate your mobile phone (in case it is needed for emergency services, for instance). As I understand it, that is a legal requirement in the US, but not in the UK, for example. I don’t claim to know which jurisdictions do and don’t require it, but that’s beside the point – the point being that the legal status of your mobile phone location data varies by jurisdiction.

When the CEO of a company with Google’s global reach and colossal processing capacity uses examples which suggest he thinks the regulatory regime is homogeneous worldwide, that does not instil confidence. Not all countries have the same cultural, legal or regulatory approach to privacy as the US, and it is dangerous to proceed on the assumption that they do.

2 – the issue of privacy and harm. At one point, Schmidt essentially argues that we need to keep the wi-fi data snarfing in perspective, and bear in mind that, as no harm has arisen out of it, it’s not really a privacy breach. Again, for someone in Schmidt’s position, I think that is a very dangerous argument to espouse. For instance, there is (as yet) no indication that harm has arisen from the UK HMRC “2 CDs” data breach… so is that entirely privacy neutral? Of course not; it would be absurd to conclude that absence of provable harm means that no action need be taken as a result of the HMRC data breach.

Harm is one factor in assessing actual or potential data breaches, but it is absolutely not a sufficient metric for gauging privacy risk.

And finally, there’s the question of Google’s reaction to the wi-fi incident. What will they do as a result? Well, according to Schmidt’s comments, it’s predominantly a matter of “education” and addressing the fact that “people don’t like it”.

Those are part of the picture, for sure – but again, they are not enough. There are laws in this area – and if those are not given due consideration, whether or not people like your behaviour is somewhat secondary.

The point of my opening reference to Reagan is that often it’s not just a question of what is said, but by whom and in what context. It may well be that Schmidt’s heart is in the right place and has “Don’t be evil” tattooed on it – but I come back to the point that, because of the post he occupies, his pronouncements on these topics have a very particular weight and resonance. On that basis, I think we are entitled to hear less about ‘educating us about why we should like it’, and more about building respect for our privacy into Google’s business model.

O’Reilly gets contrarian on Facebook privacy

[This is a slightly extended re-post of a comment I left on Tim O’Reilly’s blog, here.]

First, I should make it clear that I congratulate Tim O’Reilly on managing a contrarian post which is thought-provoking without being inflammatory… a balance which is hard to strike on this topic.

I have to say, though, that over-all I disagree with Tim’s argument. We are (or should be) way past the stage, with social networks, of just innovating “because it’s possible”; a more evolved attitude is to ask not just whether we can do something (because after all, these days online, pretty much anything is possible) but rather whether we should do it. Innovation does not trump ethics.

By comparison, think what the reaction would be if, instead of personal data, Facebook’s raw material was genetically modified bacteria. Would we really be happy for them to be playing around with a 400 million-person petri dish, all in the name of “innovation”?

No – if Mr Zuckerberg wants to push his agenda of “radical transparency”, he should be doing it with the knowing, informed and explicit consent of whatever subset of users want to take part, not with the vast majority of users who don’t have the information or the tools with which to make rational decisions about the privacy of their information online – and who, in many cases, signed up under a radically different set of terms to those which have since replaced them.

As a more general principle, what this suggests to me is no different from what has been happening for decades in other fields of innovation (and I think of things like medicine, nuclear physics, gene technology and so on): we are quite accustomed to having discrete and very different governance regimes for the “research and development” phase and the “mass consumer roll-out” phase. Facebook’s model (and the one Tim O’Reilly seems keen to endorse) is that it’s OK to conflate the two, treat the “mass consumer roll-out” population as your personal horde of lab rats, and innovate in ways which put them at risk.

My former colleague Michelle Dennedy, then CPO at Sun Microsystems, used to advise organisations to treat personal data like toxic waste. From that perspective, I don’t think what Mr Zuckerberg is doing with 400m people’s personal data is at all healthy. Not for the individuals concerned, and ultimately not for the rest of us either.

Missing the point on Facebook and privacy

B.L. Ochman writes here about Facebook’s privacy issues, arguing that actually, the fault lies with all of us for consistently oversharing on the internet, rather than with Facebook. While admitting that Facebook has made a PR mess of the way it has introduced and communicated some of its changes, the article says, in part:

“People have made a lot of terrible decisions about what they put online for as long as the internet has existed. It’s about time everyone realized that you shouldn’t put anything online that you wouldn’t want an employer, the government or your mother to see. Facebook never made those decisions for anyone!”

Well, as far as it goes, that’s true – but it paints a picture which is partial in two crucial respects: purpose and informed consent.

“Social networking” sites (and I have ranted often enough about why I object to that phrase) create a false reality which users are only too happy to collude in: the illusion that when you go online, just you and your buddies are interacting. In fact, of course, not only are you not alone, there is a third party in the room whose commercial interests lie explicitly in re-selling the byproduct of your social interactions.

Awareness of that fact is growing, thanks in part to the current publicity Facebook’s privacy-eroding policies are generating – but not, it must be said, through any informative disclosure by Facebook to users about their role in its business model.

Ochman makes the case that we have all been making poor privacy/disclosure decisions for years. Well, if that’s so evident, it should be correspondingly obvious how to design systems to help users make better privacy decisions. One cannot credibly argue that Facebook is an example of that.

[Thanks to @N_Hickman for pointing me to the B.L. Ochman article]

XML Summer School 2010, Oxford

Do you sometimes find yourself watching the commercial break during a TV programme and thinking “sanitary pads? volumizing mascara? chocolate flavoured diet bars? Strange – I thought I quite liked this programme, but I am obviously not supposed to, judging by its target demographic…”?

Well, with any luck, I am not about to induce the same feeling…

The XML Summer School is an event I was introduced to by two former colleagues at Sun Microsystems – Eve Maler and Lauren Wood – both of whom have been formative influences on it since its creation in 2000. The principle is very simple: you assemble a ‘faculty’ of experts, give them topics to lecture on, add congenial surroundings and extra-curricular activities, and stir in a generous mixture of participants. The result, in my experience, is quite exceptional…

There is deep technical expertise in abundance, in domains ranging from cognitive science, software and user interface design through to the nitty-gritty of RESTful applications and XML Schemas. There are also plenty of opportunities to engage with the faculty members, whether facilitated by a pint or a punt, so this is a very long way from the run-of-the-mill classroom course. As what I can only assume is a bit of (relatively) non-technical light relief, I will be talking about future directions in digital identity and privacy, as part of the Web Services and Identity course.

The whole thing is ably masterminded by John Chelsom, an experienced software developer and entrepreneur, who asked if I would extend this invitation. I am delighted to do so:

XML Summer School
5th – 10th September 2010
St Edmund Hall, Oxford.

The XML Summer School is a unique event for everyone using, designing or implementing solutions using XML and related technologies. Our speakers are some of the world’s most renowned XML practitioners and teachers, who enrich the learning experience with their enthusiasm and expert knowledge, and are always on hand to make sure that delegates receive the very best XML training available.

As always, the XML Summer School is packed with high quality technical XML training for every level of expertise, from the Hands-on Introduction through to special classes devoted to XSLT, XQuery, Semantic Technologies, Web Services and Identity. The Summer School is also a rare opportunity to experience what life is like as a student in one of the world’s oldest university cities.

To find out more and to register your place on the XML Summer School please visit www.xmlsummerschool.com

=============================
Dr John Chelsom
Partner, Eleven Informatics LLP

So there we have it; I’ll be there along with John and the rest of the faculty, and I hope you can join us at “Teddy Hall” in September!

Privacy and SSIDs – in more than 140 characters

[I don’t normally do this, but I’d like to point to Steve Wilson’s comment on this post, because I think his analysis is exemplary. To quote Oscar Wilde: “I wish I’d said that” ;^)]

I really value the immediacy and ‘connectedness’ of Twitter, but now and again I get into a Twitter discussion which really suffers from having to be conducted in 140-character bursts. I was in one earlier today with @dakami and @roessler which arose from news coverage of Google’s admission that they had been ‘inadvertently’ collecting wireless network data in the course of capturing Street View images.

To be fair to Dan Kaminsky, I did rather open things up by describing him as “disingenuous” – in that what he was reported as saying (here, on the BBC site) boiled down to “well, if you broadcast wireless data, you can hardly be surprised if someone picks it up”. Dan pointed out via Twitter that actually that quotation had been somewhat selective, and that the article did not accurately portray the real thrust of his comment, which was more along these lines:

“Given that WIGLE and Skyhook have both been mapping wireless networks since the turn of the millennium, it’s a bit daft to treat Google as an egregious offender in this area”.

(I hope I’ve done Dan’s position justice here – I’m extrapolating from a couple of tweets…)

OK – so here’s my position in the kind of detail which Twitter really doesn’t lend itself to.

First: fair enough – as Thomas Roessler asked (also via Twitter) – is there a real privacy issue in logging the SSIDs of wireless networks? Arguably not – particularly as one has the option not to broadcast an SSID in the first place. However, I struggle to see the utility of logging domestic SSIDs, or indeed commercial ones, if they are not the SSIDs of networks intended for public access. Who stands to benefit from that data? And if it is anyone other than the owner of the network, what’s the deal with that?

Second: similarly, Dan rightly points out that an SSID is something which is broadcast… so it’s perhaps a little churlish to gripe when someone notices it. On the other hand, even in my not-very-densely populated neighbourhood, there are half a dozen ‘visible’ networks. For the sake of the people I do wish to be able to connect to my domestic wifi network, it is more convenient to have a broadcast SSID which distinguishes it from the others. Under some circumstances, it might also help them avoid getting suckered into connecting to a rogue access point.

Third: there’s the matter of intent. I set up a domestic wireless network for a very clear, well-circumscribed purpose: it is there so that members of my household can share access to the cable connection. That’s all. I didn’t install it, or name it, so that it could be plotted on a map. It is there for a specific purpose which is limited to my immediate family. As such, particularly if interfered with, it engages Article 1 of Protocol 1 to the European Convention on Human Rights (ECHR) – the entitlement to peaceful enjoyment of one’s possessions. By default, that is the legal position.

Now, I fully accept that, as far as the SSID alone is concerned, and given that it is a broadcast value and that broadcasting it is a matter of choice, it is arguable that no harm arises from collecting and publicising it. I still question the utility of doing so for domestic networks.

However, the point is that SSID logging is not what Google were obliged to own up to: what the German authorities uncovered was that they had also been capturing data packets from domestic networks, not just identifiers. In that case, we’re not dealing with just the ECHR – we’re talking about unauthorised access to computer systems, which (in the UK) is an offence under the Computer Misuse Act.
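To be concrete about that distinction, here is a minimal Python sketch using the scapy library – the interface name is a placeholder of mine, and it assumes a wireless card already in monitor mode – showing the difference between passively logging the SSIDs that access points broadcast in beacon frames and capturing the data frames that carry a household’s actual traffic:

```python
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

def handle(pkt):
    if not pkt.haslayer(Dot11):
        return
    if pkt.haslayer(Dot11Beacon):
        # Beacon frames: the access point announcing its SSID to anyone
        # listening -- the broadcast value that WIGLE/Skyhook-style
        # mapping relies on.
        ssid = pkt[Dot11Elt].info.decode(errors="replace")
        print(f"beacon from {pkt.addr2}: SSID {ssid!r}")
    elif pkt[Dot11].type == 2:
        # Data frames (802.11 type 2): the traffic actually flowing over
        # somebody's network. Recording these is a different act entirely --
        # payload data, not a broadcast identifier.
        print(f"data frame: {len(pkt)} bytes from {pkt.addr2} to {pkt.addr1}")

# "wlan0mon" is a placeholder; this needs a card in monitor mode and the
# right permissions -- and, as argued above, capturing other people's data
# frames may well be unlawful.
sniff(iface="wlan0mon", prn=handle, store=False)
```

The first branch is what the SSID-mapping debate is about; the second is what the German audit turned up. The two should not be conflated.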

You might retort that it’s the network owner’s fault anyway for being stupid enough to leave their network unsecured. I have two issues with that response.

1 – The fact that I have left a window open does not make it any less of an offence to climb into my house and steal my stuff. It might affect my insurance status, but it doesn’t mean theft is not a crime.

2 – There is no contract of any kind in place between companies like Google, WIGLE, Skyhook etc. and the householder whose data is being recorded through these initiatives. As the German instance makes clear, there are regional and national differences of view as to what the ‘rules’ are in the absence of such a contract. For my part, I simply note the practical difficulty faced by the householder in making his/her preference known. For example, if I wished to make it clear that I do not consent to unauthorised access to my domestic wireless network, there is no mechanism for “posting” an explicit notice to that effect, in the way that I might post a ‘please keep out’ sign at the boundary of my property.

As long as the householder has no viable technical means of making his/her preferences known, I would argue that the default should be a presumption of privacy, not a presumption that such data is free to be broadcast to the world.

Some of you may have seen danah boyd’s presentation at SXSW this year. I wasn’t there myself, but have since been lucky enough to see a video of it (authorised, I hasten to add…). One of the many points danah made with admirable clarity was this: taking data which is in the public domain and making it more public (for instance, by broadcasting it widely, or making it globally accessible where it was not before) is not privacy neutral. Actually she put it more strongly and said that it is a violation of privacy.

What I think we need to learn – from companies like Google, WIGLE, Skyhook and others – is that privacy is seldom a binary concept. It does make sense, as danah has done, to describe some data as ‘public’ and other data as ‘more public’. It does make sense to talk of graduated consent to disclosure, rather than bald ‘consent’ or ‘refusal’. And it makes sense to think in terms of conditional disclosure, not just free-for-all or nothing.

Privacy, when it comes down to it, is not a technological construct: it is a personal, social and cultural construct, and a nuanced one at that. Inescapably, as more of our lives are technically-mediated, we face the challenge of mapping that shaded, complex social view of privacy onto a rather crude, binary set of tools. Companies like Google have shown themselves to be fantastic innovators in so many ways; it’s time they turned that ingenuity to the privacy question.

UK jobless total continues to rise…

As I reported yesterday, the new Coalition government spelled disaster for the employment prospects of Elizabeth Henderson, the human face (sic) of the UK ID Cards Scheme.

Regrettably, the jobless total seems set to soar still further, since if there’s no ID Card scheme, there’s not much need for an Identity Commissioner either. Bad news for the relatively recently appointed Sir Joseph Pilling.

Rumour has it that his office is already politely cancelling his upcoming engagements.

Spare a thought for Elizabeth Henderson

On the day Cameron and Clegg shook hands, and the national unemployment figure rose to over 2.5 million, spare a thought for Elizabeth Henderson* – one of the first identifiable people to lose her job under the Coalition.

It’s only Day One but already the Conservative/LibDem coalition agreement has confirmed both parties’ long-stated intention to scrap the ID Cards Scheme. I have to admit, some of my previous scepticism was misplaced: the Coalition will scrap the Scheme as a whole (including the National Identity Register), not just the plastic cards.

The measure comes under the general rubric of Civil Liberties, and is accompanied by statements in a number of other policy areas:

“- A Freedom or Great Repeal Bill.

– The scrapping of the ID card scheme, the National Identity Register, the next generation of biometric passports and the Contact Point Database.

– Outlawing the finger-printing of children at school without parental permission.

– The extension of the scope of the Freedom of Information Act to provide greater transparency.

– Adopting the protections of the Scottish model for the DNA database.

– The protection of historic freedoms through the defence of trial by jury.

– The restoration of rights to non-violent protest.

– The review of libel laws to protect freedom of speech.

– Safeguards against the misuse of anti-terrorism legislation.

– Further regulation of CCTV.

– Ending of storage of internet and email records without good reason.

– A new mechanism to prevent the proliferation of unnecessary new criminal offences.”

Indeed, that list actually goes further than my recently-posted “celery-free manifesto”, though I did score a reasonable number of hits. I think, on balance, I am now cautiously optimistic, as opposed to ecstatically jubilant. After all, one doesn’t celebrate the passing of a migraine by cracking open the nearest bottle of champagne… and as migraines go, this one has been persistent.

It is now almost a decade since then-Home Secretary David Blunkett set out his plans for a national “Entitlement Card” – compulsory, and in all but name, a national ID card. When challenged in Parliament by opposition MPs, he dismissed the argument as “descending into a contest with intellectual pygmies”.

Sorry, Mr Blunkett – one of the harsh realities of the information society is that, once comments like that get onto the record, they are easy to find, easy to re-publish, and impossible to delete. That, incidentally, is a critical design factor of modern identity systems which your party’s identity policies determinedly ignored for the ensuing decade.

A lot has changed in that decade, in the realms of digital identity and privacy. We have a decade’s experience – admittedly, as ever, mostly of things which turned out not to work, or not to work as well as hoped – and a decade of further technological advances. Anyone setting out to design a national identity scheme now would probably look rather foolish if they came up with a 2001-vintage design for it.

However, what I really hope we have learned is that those technological advances must, if they are to succeed and be adopted, fit into an ecosystem of related elements: elements such as appropriate policy, governance and regulatory control; the right legal framework; practical, scalable deployments backed up by adequate training and resources… and a culture in which adoption by users is a rational and attractive step. As the recent history of Privacy Enhancing Technologies [PETs white paper] illustrates, if that ecosystem is absent or hostile, the best of technologies will fail to thrive.

With that in mind, I hope that the headline use of words like “freedom”, “responsibility”, “fairness” and “civil liberties” hints at something broader than just the scrapping of a number of centralising, data-intensive government programmes. Now is an opportunity to go back to the foundations of policy and re-examine the relationship between government and citizen, and the role of digital identity in that relationship. One of the persistent failings of the ID Cards scheme was that the government decided what it intended to do, and then cast around for policy objectives which might be put forward to justify that course of action.

If there is clarity of purpose, the appropriate policy goals flow naturally.

As a consequence, another failing of the ID Cards scheme was that the relationship between its stated goals and the proportionality of the scheme became ever more tenuous. If the stated principle of “fairness” leads to proportionality in measures such as public sector identity, DNA retention, data retention and data-sharing, that is to be welcomed.

A lack of proportionality in compulsory measures severely undermines the culture of adoption.

Finally, and fatally, the ID Cards scheme failed to take any account of the fact that identity in online transactions is not a universal good. There are clear and valid instances when identity is neither necessary nor beneficial, and where pseudonymous or anonymous interaction is the most desirable option. The ID Cards scheme looked backwards at traditional notions of “identity as credentials”, and in doing so failed to design for what is likely to be the majority of online activity: digital identity as a multi-faceted representation of (equally multi-faceted) digital life. Other countries have looked forwards; Costa Rica, for instance, amended its constitution so as to establish citizens’ rights to a digital identity – taking a perspective which put the citizen’s interests at the centre of legislation, policy and technological innovation.

Some forms of state-issued digital identity may have a role to play in such a system, for sure. However, a policy based on the assumption that no other forms of identity are relevant (including pseudonymity and anonymity) is doomed to be blind-sided into irrelevance by the relentless evolution of our online lives.

This holistic view requires that identity be considered in close conjunction with its siblings – privacy and self-determination. So my third hope is that the word “freedom” in the Coalition agreement will include those classic tropes of privacy:

  • freedom to be let alone
  • freedom to preserve intimacy
  • freedom to retain anonymity
  • freedom to exercise discretion

Those, it will be readily apparent, are social, political and cultural goals, not technological ones. In the information society, though, many of them will be technically-mediated, much of the time – so the policymakers cannot afford to make under- or ill-informed decisions. It is, if I may be cruel, no time for policy to be made by people who think that the “IP” in “IP Address” stands for “Intellectual Property”.

Fortunately, the last decade has also grown a community of committed, engaged, privacy-aware and technically-literate specialists who can translate between the policymakers and the technologists. Let’s hope the policymakers are willing to listen.

*[In case you’re wondering about the title of the blog post: Elizabeth Henderson is the name on the ‘sample’ UK ID card. She is no relation of Erika Mustermann].

The Celery-free Manifesto

I’ve been thinking about this whole “party manifesto” thing, and it seems to me that it’s just so… well, Web 1.0. Basically what the parties have done is take exactly what they would have printed on paper, and put it on a website instead (in other words, the lowest ‘maturity level’ for online services).

There are some minor tweaks, of course. The Conservatives win the prize for the most bloated online manifesto, crammed with images which add massively to the file size, but not the policy content. Labour seemed reluctant to let you get to the document itself, luring you instead towards some little video clips, like those supermarkets who tempt the short-of-attention with copies of “Oy!!” magazine in the checkout queue. At least the Lib Dems formatted their online manifesto so that it displayed in landscape format on a PC display.

The manifestos obviously aren’t really an attempt to set out a distinguishing political philosophy… if they were, it would be easier to get a psephologist’s cigarette-paper between the main parties’ poll ratings. Rather, they are a grab-bag of policy aspirations, crammed in, in the hope that enough of them will hit the mark to attract your vote. That, too, is a symptom of the “Web 1.0” approach. Where parties could have been using Web 2.0 and social media to define policies which reflect the priorities of the electorate, they have instead seen it only in terms of getting their own messages in front of more people, through more channels.

For all that the ground-breaking leadership debates raised public awareness, as an exercise in communication they were entirely, unapologetically and wastefully one-way.

I went out for dinner yesterday evening, and happened to order something with a salad attached. It was a pretty good salad, as salads go; fresh, smartly dressed, and an alluring blend of the innovative and the reassuring. Unfortunately it was studded with crescents of celery, which I can’t stand.

And that’s essentially how I feel about most manifestos. For all the bits I like (or at least don’t mind) there’s always too much celery.

To get my unhesitating vote, a manifesto would need to:

– Seriously question whether a UK nuclear deterrent contributes to any realistic security goal, and, bluntly, whether we can afford it. The nuclear threat from rogue states and terrorists is not effectively countered by a Cold War nuclear defence;

– Break the government’s insane dependence on tax revenue from destructive behaviours like consumption of alcohol, tobacco and petrol;

– Be prepared to ring-fence the healthcare budget for the care of an ageing population… even if that means increasing individuals’ responsibility for the healthcare consequences of their own lifestyle in earlier years;

– Recognise that we need not just a unified transport policy, but a transport policy which is unified with a national energy strategy (and that includes, of course, things like food miles);

– Scrap the plans to build two new aircraft carriers: we can’t afford the ships needed to keep them safe if they were deployed; consider reducing the scale of Britain’s commitment to buy Eurofighters, opening up more options for helicopter or Harrier procurement;

– Re-instate tax relief on pension investment income;

– Scrap ID Cards and the Identity Register; legislate for the statutory recognition of pseudonymous and anonymous online personas;

– Challenge the rationale and the risk assessment for the Contact Point database;

– Destroy the DNA profiles and samples of all those on the National DNA Database who have not been convicted of an offence;

– Ban the installation of CCTV systems in public places, unless and until a costed, sustainable and accountable legislative framework can be put in place to manage their deployment;

– Give a clear commitment that the UK will neither commit nor connive at torture;

– Repeal the Digital Economy Act and invest the time and effort in legislation which actually addresses the future of Britain’s Digital Economy;

– Omit celery.

You’ll notice that, although none of those policy statements is explicitly economic, most of them have a clear economic dimension. Economic policy is not an end in itself; it’s a tool. On a good day, Gordon Brown knows that and acts accordingly. He has not had many good days in the last 13 years.

My wish-list is a very partial one, I know, and there are any number of policy areas it doesn’t even touch on. Some of the suggestions would, if the current government is to be believed, put us all at risk in one way or another. However, I am convinced that if you treat citizens with trust and dignity, they repay that faith. If you legislate on the basis that all citizens are venal chancers who are only looking for their next opportunity to break the law and get away with it, you breed a culture founded on mistrust and indignity.