MPs successfully challenge UK government in High Court

Two MPs, David Davis (Conservative) and Tom Watson (Labour), with human rights advocacy group Liberty, have won a challenge in the High Court against the UK’s data retention law (DRIPA). The ruling finds against the UK law on several grounds, including incompatibility with the right to privacy, and a lack of clarity in the rules applying to access requests. Since the current rules allow a very wide range of public authorities to request communications data, and to do so without independent approval, there is a clear implication that the UK’s approach to data retention fails the proportionality test.

It’s worth stressing just how exceptional it is for members of parliament to challenge primary legislation in court, let alone to win. After all, the first place for MPs to challenge primary legislation is in the parliament to which they have been elected. But DRIPA’s passage through parliament was not normal. It was rushed through with a single day’s debate – essentially ensuring that it could not receive proper parliamentary scrutiny.
The government tried to explain away this haste by claiming that the bill was “emergency legislation”, brought in because the prior legislation had, in turn, been ruled non-compliant with EU law. That ruling was itself delivered well in advance of the presentation of DRIPA to the House: there was no excuse for the UK government to have failed to foresee that requirement.
According to the Security Minister, John Hayes, opposition to the government’s data retention plans comes from a “paranoid liberal bourgeoisie”, intent on hamstringing law enforcement even if it means criminals and terrorists run amok. Of course, the accusation of paranoia might carry more weight if every “paranoid” prediction about the government’s use and abuse of surveillance powers hadn’t been accurately made by the late Caspar Bowden and amply borne out by the Snowden disclosures.
Mr Hayes says the government ‘disagrees absolutely’ with this judgment and fully intends to appeal against it. In other words, despite having had DRIPA and its predecessor struck down, the government’s view is that no change of course is needed, and they can keep the same data retention powers if they go on saying that’s what they want.
There’s an old quip, isn’t there, which defines insanity as repeating the same actions while expecting a different outcome. Far be it from me to accuse the current government of insanity, but I have to wonder how rational it is to expect an appeal to succeed.

Caspar Bowden

Caspar died today. Three simple words that encompass so much.

Twitter has, understandably and gratifyingly, lit up with messages of condolence, admiration and regret – and so it should. The privacy advocacy community has lost one of its most capable members, and feels that loss keenly. But we are all affected by the issues on which Caspar fought: the regulation and availability of cryptography, the interception of communications, the governance of surveillance, the equal application of human rights. There isn’t a person on the planet whose interests are unaffected by these issues, and we are worse off for losing Caspar’s efforts on our behalf.

Let me give three little snapshots from my memories of Caspar.

The first is from a meeting of the Enterprise Privacy Group, in about 2007. Two or three of us had just arrived and were clustered near the coffee table, talking about ethics and data protection. Caspar arrived, and was pouring himself a cup of coffee a few feet away from our group. He carried it over to us and, as he arrived, said

“Well, I favour a Rawlsian model – because unless you can point to a basis in justice, none of the other approaches can work anyway.”

It wasn’t said as a put-down or out of intellectual snobbery; he had simply grasped the essentials of our conversation within the time it took to pour a cup of coffee, and was contributing his conclusion. Of course, the rest of us had to sneak off later and Google “Rawls on Justice” to refresh our memories <cough>, but the truth of it is, in the work I’ve done on ethical data-handling in the last couple of years, I have come back, time and again, to the principles Caspar set out in that conversation some 8 years ago.

There was mischief in Caspar too, and my second snapshot is from Berlin, where Caspar and I were among the invited participants in an Article 29 Working Group meeting. As ever, a group beer was proposed for the evening, and we all piled into taxis to get to an otherwise unremarkable Bierhof somewhere outside the city. I found it hard to believe that we couldn’t have found somewhere just as good in town, and grumbled something to Caspar along those lines. He, of course, knew exactly where we were, and said “Yes – but there’s a certain irony, isn’t there, in drinking beer with a group of European Data Protection Supervisors, right next to the Wannsee”.

The third is from QCon 2014, where Caspar gave a talk on “Mistrusting Trust”. The talk was excellent, and you can replay it here… but my abiding memory wasn’t that; it was that about 4 minutes in (as you’ll see if you watch the video), Caspar’s laptop reset, and threatened to melt down. It was overheating partly because it was a bit old, but partly because Caspar was running Qubes on it, so as to be able to boot a virtual OS in order to show his slides. Now, lots of privacy advocates love the idea of only using virtual machines, and killing them in between sessions of browsing, email and whatever… but an awful lot of us can’t be bothered to go to the inconvenience, when it comes down to it. Caspar could – even if it almost reduced his laptop to a pool of smoking black plastic in front of an audience.

To say that people sacrifice privacy for convenience is such a cliché most of us don’t even reflect on it any more. But Caspar’s example should remind us that too many of us sacrifice more than just privacy for the sake of a little convenience. Caspar had always done his homework. How many of us hold forth on the laws governing data protection, interception, surveillance and so on without actually having read them all the way through? I know I’m guilty on that score – but Caspar had read them – and not just for one country, but for the UK, and the US, and the EU, and France, and so on. And how many of us read them and then forget the detail, or never really think through the implications, let alone publish our analysis and fight for how we think things should be? Caspar did all of that.

And of course it wasn’t just the law. As his comment on Rawls indicates, Caspar was scarily well read in all kinds of areas, and he had a level of recall which many policymakers had occasion to find embarrassing.

Yes, he could be abrupt, and yes, he often ‘bent’ convention by asking direct and probing questions in ways that risked alienating the policymakers he sought to influence. But I never saw him do so rudely, inappropriately, or in a way that demonstrated anything less than total integrity. That took strong moral principles, intellectual rigour, and courage.

Caspar – thank you for your dedication; we’re worse off without you, and we’ll miss you. And I wish I’d said that to you before you died.

Speaking of trust…

I received a letter recently from (apparently) NHS England. In it, they require me to confirm my contact details to them, or face being removed from the patient list of my local GP (whose patient I have been for the last 30 years… in fact, his surgery has changed address more frequently than I have in that period).

In the letter, NHS England note that GP practices need accurate contact details in order to be able to contact me to arrange appointments and vaccinations, report test results, arrange the details of long-term care for chronic illness, or enable a hospital to write to me. Those needs are all real – but every one of them can be satisfied by my confirming my address details to the practice itself. There is nothing in their letter which convinces me to give my personal data to a third party – except their threat to have me de-listed if I fail to do so. “If you do not respond within four weeks from the date of this letter then we will assume that you have moved away”, they say. Which is odd, since I had a GP appointment a couple of months ago.

When I mentioned this on Twitter, the responses were interesting. Some people simply assumed that the letter was a phishing attempt, and advised me to ignore it… which is an interesting trust problem for the NHS to consider. Others suggested that there is a second purpose for NHS England to collect the data, which is to do with the way in which they fund GP practices. If that were the case, you might expect it to appear among their stated purposes for collecting the personal data, in the letter on the basis of which you are expected to confirm your details. It does not – though if you visit the NHS England Shared Business Services website and go to their FAQ, you will find it mentioned. Again, if you go to their website, you will find some explanation of the role of Shared Business Services in relation to GP surgeries… but “Shared Business Services” appears nowhere on the letter, which mentions only “NHS England”. To all appearances, this is a third party organisation asking for my personal data.

Taking a hard-hearted financial view, you might say that it’s in their financial interest (and therefore, indirectly, in mine) to ensure that they aren’t paying GPs for patients who don’t exist. But there are some flaws in that argument:

  • I’ve seen my GP within the last 3 months. It’s therefore unlikely that I have moved out of the area. There must be other patients whose records indicate that they are more likely to have gone away; if NHS England are going to incur the cost of writing to anyone, shouldn’t they prioritise apparently “dormant” patients over recent, “active” ones?
  • In the short term, NHS England might indeed save some money by finding a pretext (i.e. my non-response) to remove me from the paid-for headcount at my GP’s surgery. However, since I am a genuine, current patient, if I were de-listed I would have to apply to be re-listed, which would incur extra, unnecessary administrative cost.

But, economics aside, isn’t there a law against collecting data for one purpose, and using it for something else? I rather thought that was what the Data Protection Act was for… but then again, the NHS’ recent history with regard to data protection is tarnished, to say the least. For instance, the HSCIC recently admitted that it simply ignored patients’ requests to opt out of the care.data scheme, which they had designed as “opted-in by default” in the first place. What’s more, it emerges that care.data cannot distinguish between patients wishing to opt out of having their data shared with third parties, and patients not wishing to opt out of services such as referrals or e-prescribing.

So let’s recap the various failure modes illustrated by these two instances:

  • Designing a system as “opted-in by default”, despite the fact that it involves data-sharing with commercial and public-sector third parties, and that it processes sensitive personal information;
  • Failing to act on opt-out requests;
  • Offering different types of opt-out (good), but then failing to process them accurately (bad);
  • Dubious prioritisation of administrative effort;
  • Lack of transparency about purpose of collection;
  • Threat of de-listing for non-compliance.

Just take a look at those, and ask if they would be acceptable in clinical/surgical practice, as opposed to health service administration.

It seems to me that we have failures, here, of design, technical implementation, governance, transparency and consent – and all of those erode patients’ trust. Public sector bodies, world wide, are desperate to reap the benefits of digital infrastructure, and rightly so: it’s in our interest as citizens that our public sector services should be efficient, cost-effective and technically up to date. But data controllers must show that they are safe and worthy custodians of citizens’ personal data, and that requires a far more mature approach to transparency, informed consent, and genuine opt-out. Stampeding headlong in pursuit of the potential financial benefits of innovation, while ignoring the concomitant responsibilities, is irresponsible and immature.

The NHS is in its 60s; if we are to trust it to exploit technology effectively, in our interest and on our behalf, it needs to grow up.

What is trust?

This question either comes up explicitly in discussions of privacy and identity, or lurks unspoken beneath the surface. We talk a lot about online trust, and only occasionally does someone stop and ask if we’ve defined just what it is that we’re talking about.

Some people reply, at that point, that trying to define trust is a rat-hole/red herring/exercise in yak-shaving, but I am less pessimistic. Here’s a simple definition of trust which I think is flexible enough to serve as a basis for productive discussion, while still covering the essentials. Of course, if you disagree, I’d welcome comments!

“Trust is the belief that someone else will act in your interests, even if they have the opportunity and the motivation to do otherwise.”

It’s a belief, and like any belief, it may be well- or ill-founded. You may be mistaken or misinformed, or the other party may be deceiving you.

As I say, it’s not necessarily a perfect definition, but it has worked for me so far.

Any thoughts?

Can a machine care about privacy?

[I originally wrote this post for the Internet Society blog, here. However, I thought it worth re-publishing here, since the two blogs probably have different (if overlapping) readerships. I have made a couple of minor tweaks  – indicated by [*] – and expanded the concluding couple of paragraphs, because the loyal readers of this blog deserve a bit of added value ;^) ]

I recently attended the Digital Enlightenment Forum 2015 in Kilkenny; not your average tech conference, and not the average discussion topics, either – but topics of continued and growing relevance.

For me, the value was in having a conference that offers the time – even if only for a day – to step back and look at the bigger picture within which all our urgent-seeming daily task-work takes place.

One theme in particular stood out, and it’s also a major strand of the [*]Internet Society’s Trust and Identity work plan over the coming couple of years. Several sessions, including two breakout groups, addressed the theme of digital ethics. The discussion was wide-ranging, sometimes detailed and often abstract, but fascinating and, ultimately, entirely practical. There will be a full paper on the theme in due course, but here’s a hint of some of the ground we covered.

[Warning: may contain philosophy…]

I hope that warning hasn’t immediately discouraged you. “Don’t panic!”, as Douglas Adams would have said. I have found a really simple model for discussing complex topics like this when you have a very diverse group of people round the table; almost all the discussion tends to fall into one of four categories:

  • Philosophy/principles
  • Strategy/society
  • Implementation/practicalities
  • Technology

Once you know that, it’s much easier to avoid getting mired in the intricacies of any one of the four categories, and that keeps the discussion relevant to everyone.

So, philosophy:

Taking our cue from one of the morning’s presentations, we went right back to fundamentals: what have thinkers said about ethics in the pre-digital past? There’s the “continental” philosophical approach of people like Foucault and Barthes, who cast ethics as a series of narratives and structural relationships; then there’s the more “traditional” analytic approach, looking at systems of ethics based on consequences, rules and justice. What they have in common is a recognition that ethics is contextual, and a function of the society in which it evolves.

In our case, we’re in the midst of a post-industrial, technically-oriented society. It’s sometimes hard to imagine that things could be any other way… but what happens if you subtract technology from the ethical equation? You’re left with information (rather than data), decisions, relationships, and semantics. Technology may change a lot of things, but it doesn’t remove those fundamentals, and it doesn’t alter the contextual nature of ethics, so we can be reassured that we have some solid principles to build on.

What’s happening at the social level?

Here, the main point I picked up was about “agency”. In our technologically-oriented society, almost every action we are able to take (our “agency”) is mediated – either through technology, such as computers, phones etc., or through third parties, such as banks, the retail supply chain, telcos, internet service providers, identity providers and so on. Ethically, the fact that what we do is mediated often moves us further from the consequences of our decisions and actions. This can leave us feeling that we’re not really responsible for what might happen. As one participant put it:

“Technically mediated phenomena are outstripping ‘human-centric’ ideas of privacy and ethical outcomes.”

In the context of our discussion at the time, that was a perfectly normal and rational conclusion to draw. When you stop and think about it, it could be quite a scary one, too.

But so what… why should I care?

Practicalities:

Well, we should care because all those third parties through whom we act are making decisions, every day, which directly affect us. Sometimes they do so with our knowledge and consent, but on the Internet, that is far from the norm, as I suspect we all acknowledge. Here are some examples of the kinds of decision which are made on your behalf all the time:

  • “This privacy policy and these cookies are fine for you; there’s no need to ask you explicitly if you’re OK with them.”
  • “We’ll opt you in to our data-sharing policy by default. If you don’t like it, you can always tell us later.”
  • “Your personal data is safe with us, because we anonymise it. You don’t need to worry.”
  • “Collecting this data does compromise your privacy here and now, yes… but we expect there to be a collective benefit in the long run, so that’s OK.”
  • “We’re “personalising” our prices for you, based on the really expensive laptop you’re using. But don’t worry – we have your best interests at heart.”

These are all real, practical consequences of our technically-mediated society, and they affect your privacy every day.

Technology:

So what’s the technical dimension? Again, what struck me was “agency”. The number and diversity of ethical agents we encounter is growing fast, and… not all of them are human. A lot of decisions these days are made by algorithms (remember those stock market volatilities caused by too many automated trading systems all reacting to each other?), and any algorithm that makes decisions is not ethically neutral. “Ah,” I hear you say, “but they only do what they’re programmed to do. Someone, somewhere is responsible… not the algorithm”.

OK – let’s look at that for a moment. First, some algorithms are adaptive; there are already network security products, for instance, that learn, over time, what constitutes “normal” behaviour, and adjust their own behaviour accordingly. Then there’s machine learning in its broader sense. Researchers into artificial intelligence already report that the algorithms they create frequently go on to evolve in unexpected ways, to exceed human capabilities [*] and, crucially, to adopt strategies for learning and decision-making that are entirely different from those adopted by humans.
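
To make that concrete, here is a toy sketch (in Python, and purely illustrative: no real security product is this simple, and the thresholds and update rule are my own assumptions) of the kind of adaptive behaviour described above. The detector learns a baseline of “normal” from whatever it observes, and flags anything that strays too far from it.

```python
# A toy "adaptive" anomaly detector: it learns a baseline of normal behaviour
# from the values it sees, and flags observations that stray too far from it.
# Purely illustrative; thresholds, warm-up and update rule are all assumptions.

class AdaptiveBaseline:
    def __init__(self, alpha=0.1, threshold=3.0, warmup=5):
        self.alpha = alpha          # how quickly "normal" drifts towards new data
        self.threshold = threshold  # how many standard deviations count as abnormal
        self.warmup = warmup        # observations to absorb before judging anything
        self.seen = 0
        self.mean = 0.0
        self.var = 1.0

    def observe(self, value):
        """Fold the value into the baseline; return True if it looked anomalous."""
        self.seen += 1
        if self.seen == 1:
            self.mean = value
            return False
        deviation = value - self.mean
        anomalous = (self.seen > self.warmup and
                     abs(deviation) > self.threshold * self.var ** 0.5)
        # Exponentially weighted update: the detector's idea of "normal"
        # is simply whatever it has recently been shown.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return anomalous


detector = AdaptiveBaseline()
for requests_per_minute in [50, 52, 48, 51, 49, 500]:
    print(requests_per_minute, detector.observe(requests_per_minute))
```

Notice that every parameter in that sketch (how fast it forgets, how far is “too far”) is a judgement baked into the code – which is exactly why such decisions are not ethically neutral.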

And last: machines are increasingly capable of autonomy – self-driving cars are a good example. They will react to changing conditions, and deal with circumstances they have never encountered before, without human intervention. The first time a driverless vehicle runs someone over, we’ll see where the ethical buck stops.

Conclusions:

This has been a lightning gallop through several hours of discussion. What did we conclude?

  • First, that modern life raises just as many ethical issues as it ever did.
  • Second, that if we’re not careful, all the ethical calculations get made on our behalf – and not always in our best interest.
  • Third, that if we’re to retain our agency, we need to understand that that’s what we’re trying to do, and why.
  • Fourth, that there are indeed some constants here, despite the pace of change around us. Ethics is a social, contextual thing, and it has to do with meaning, relationships and decisions. Those are very human things.

And last, that we have entered the age where a growing number of ethical agents are non-human, and we have to understand how that affects us and our societies. Is there a fundamental ethical principle based on ‘global’ human values? Might that principle be to do with consciousness, or autonomy, for example? And if so, what’s the ethical status of machines that are increasingly autonomous and might even, at some point, be described as conscious?

[*] Machines are already ethical agents. Our lives are constantly affected by automated decision-making, in increasingly intimate ways. Algorithmic decision-making underpins many things that we take for granted, and modern life would certainly be very different if it were suddenly removed overnight. But how much of this automation is directed towards the protection of privacy? In my experience, not much. I can think of examples: traffic management systems that obfuscate part of the number-plate data they collect (though ANPR systems intentionally do exactly the opposite). I’ve seen a prototype of a CCTV system that obscures individuals’ faces – but again, that’s definitely the exception to the rule. And think how hard we continue to have to fight to have airport body-scanners that display a diagram instead of an actual body image, and that discard the images as soon as they have been evaluated.
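
For what it’s worth, the “obfuscate the plate” pattern mentioned above can be startlingly simple. Here is a hypothetical sketch (the key handling, rotation period and truncation are my assumptions, not a description of any real deployment) of how a traffic-flow system could match re-sightings of a vehicle without ever storing the plate itself:

```python
# Hypothetical privacy-by-design sketch: pseudonymise number plates so that
# today's sightings can be matched to each other for traffic-flow analysis,
# while the plate itself is never stored and tokens stop matching once the
# key is rotated. Illustrative only.

import hmac
import hashlib
import secrets

DAILY_KEY = secrets.token_bytes(32)   # assumed to be rotated and discarded daily

def plate_token(plate: str) -> str:
    """Keyed, truncated digest of a normalised plate."""
    normalised = plate.replace(" ", "").upper()
    digest = hmac.new(DAILY_KEY, normalised.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# Two sightings of the same vehicle produce the same token today...
print(plate_token("AB12 CDE") == plate_token("ab12cde"))   # True
# ...but the stored token cannot be reversed to recover the plate,
# and tomorrow's key yields entirely different tokens.
print(plate_token("AB12 CDE"))
```

ANPR, by design, does the opposite: it keeps the plate precisely so that it can be linked back to a person.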

To my mind, privacy protection is still treated as a bolt-on, after the fact, and only if people complain long and loud. The more we delegate decision-making to machines, the less acceptable that is as a design philosophy. Privacy by design is, I would argue, never more important than when the ethical agent is non-human.

We aren’t necessarily at the point where a machine can care about privacy… but we should certainly keep asking the question.

 

Do you “own” personal data?

I’ve been meaning to re-write this as a blog post for ages, and a Twitter exchange with the excellent @mediamocracy has finally nudged me into doing so.

Incidentally, I say “re-write” because, despite rumours to the contrary, the Internet is not as indelible as people might have you believe. I used to have a blog at http://blogs.sun.com/racingsnake, but all you will get there now is a 404 from Oracle. I don’t know what line of reasoning they followed to delete some Sun blogs while leaving many others up, but there you go.

“Ah, but what about the Wayback Machine?”, I hear you say… What indeed? The thing about Wayback is that it only captures the page pointed to by a URL at the time of its crawl. Sun’s blogs, like many, worked as a push-down stack; so any posts that got pushed off the bottom of the page between one Wayback crawl and the next were not captured. In short, Wayback will replay some of my posts for anyone wanting to dig into them, but not the one about data “ownership”.
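
If you want to check for yourself, the Wayback Machine exposes a public “availability” lookup; here is a minimal sketch using it (the endpoint and response shape are as I understand them, so treat the details as assumptions). Bear in mind that a snapshot of the blog’s front page tells you nothing about posts that had already been pushed off the bottom of that page between crawls.

```python
# Minimal sketch: ask the Wayback Machine whether it holds a snapshot of a URL.
# Endpoint and JSON shape are assumptions based on the public availability API.

import json
from typing import Optional
from urllib.parse import urlencode
from urllib.request import urlopen

def closest_snapshot(url: str, timestamp: str = "2009") -> Optional[str]:
    """Return the URL of the closest archived snapshot, or None if there isn't one."""
    query = urlencode({"url": url, "timestamp": timestamp})
    with urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

print(closest_snapshot("http://blogs.sun.com/racingsnake"))
```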

That being the case, I’ll base this post on a related comment I made later, on an IETF thread about privacy and geo-location.

In brief, my underlying argument is this:

You’ve probably all seen privacy threads where an aggrieved data subject says “All I want is to be given back *my* data”… The implicit assumption is that, in some way, I ‘own’ my [sic] personal data. Unfortunately, not far down the line that leads to all kinds of unwanted consequences, and therefore we’re better off not starting out with a model based on concepts of ‘ownership’ if at all possible.

For instance, as Bob Blakley pithily put it, “You can’t control the stories other people tell about you”. There’s lots of personal data about you over which you have no control, let alone ‘ownership’, because it’s generated by other people. The only time you get control over it is, for instance, if the information is libellous. Even then, you don’t get ‘ownership’ of the data, but you get the opportunity to exercise certain rights pertaining to it. [The Google “de-indexing” ruling of 2014 is a classic example of this principle.]

Similarly, a model based on a concept of ‘ownership’ doesn’t work well for informational resources that can be ‘stolen’ from you, yet still leave you in possession of the data. Think of copyrighted digital media… you own the CD of Beethoven’s 5th, but there are rights to do with the original work (or the performance) that you don’t enjoy.

Legally – at least in the UK and US, and I believe elsewhere too – there are distinctions between the treatment of “personal property” (or personalty) and “real property” (or realty). My own belief is that we’re better off treating personal data as if it were realty rather than as if it were personalty. This is especially true of the legal remedies when something is stolen from you: what has to happen in the case of realty offers a better model than the legal remedies for theft of personalty.

I know this is a rather terse and dense statement of the issue – there are doubtless points here that could be unpacked in far greater detail – but suffice to say, I think an approach based on assumptions of ‘rights’ over data has fewer problems than one based on assumptions of ‘ownership’. Think of personal data such as location/tracking/behavioural data: it makes little sense to claim that I ‘own’ the data collected about my path through a shopping mall, but it makes a lot of sense to claim that I have certain rights relating to it.

[Update]: since I initially wrote this, I have actually tended to take a tougher line still. In my view, not only do I have rights relating to data about me; I also, I believe, have rights relating to data that affect me. I sometimes express this as “PII should be re-defined from ‘Personally Identifiable Information’ to ‘Privacy-Impacting Information'”. This might reflect more accurately the reality of today’s personal data ecosystem, in which you are affected not only by personally identifiable data, but also:

  • by inferences drawn from that data
  • by personally identifiable data about other people thought to be similar to you
  • by aggregations of metadata
  • by aggregated data about the behaviour of others.

In short, trying to protect your own privacy and self-determination by focussing solely on data over which you think you have “ownership” is likely to prove ineffective, and will fail to address a significant proportion of the real privacy risk.

A victory in the Investigatory Powers Tribunal. Or is it?

Yesterday’s big privacy headline was all about the mass data breach at the US insurance firm Anthem… today’s is about the Investigatory Powers Tribunal (IPT) ruling against GCHQ. For the first time since it was established in 2000, the IPT has ruled against an intelligence agency; GCHQ’s interception regime under the PRISM and UPSTREAM programs, it says, violated articles 8 (privacy) and 10 (freedom of expression) of the European Convention on Human Rights, and as a consequence, was unlawful from 2007 to 2014.

So far, so good: a succinct, clear and definitive ruling. And hats off, by the way, to Privacy International, Amnesty International, Liberty, the ACLU, Bytes4All and others who stood up for citizens’ rights under the ECHR, getting this ruling despite the government’s ingrained unwillingness to release any data about governance of the intelligence services’ activities. So unwilling are they that, even in the context of the Tribunal, they refuse to admit the existence of the TEMPORA program so clearly described in documents disclosed by Edward Snowden. As the Tribunal president puts it in the judgment:

“The alleged conduct itself is not admitted by the Respondents. It falls to be considered as a result of allegations made by Mr Edward Snowden, a former contractor for the National Security Agency (“NSA”) of the United States, by whom a very substantial quantity of documentation has been leaked and much put into the public domain.”

They only admit the existence of PRISM because – in the words of the senior civil servant concerned – “it has been expressly avowed by the executive branch of the US government”.

A huge victory, then? Regrettably not. GCHQ’s mass surveillance program continues exactly as before, and this ruling will not affect its operation in any way. There are two reasons why.

First, the IPT’s ruling of unlawfulness only applies to GCHQ’s actions up to December 2014. From then on, they are satisfied that the intercept regime is lawful.

Second, what was the basis for ruling the program unlawful from 2007 to 2014? It was that the ECHR requires any such interference with articles 8 and 10 to be conducted “in accordance with the law”; that, in turn, means it must not only have a basis in law, but that that legal basis must be sufficiently accessible and foreseeable to anyone potentially affected. In other words, this is not about whether the interception itself was acceptable: it’s about whether citizens were reasonably informed about the kinds of action to which the law is liable to give rise. The human rights basis for this is clear: justice and the rule of law cannot be served if citizens are governed by laws (or interpretations of those laws) that they cannot see.

So, what changed in December 2014?

Well, in the course of the Tribunal hearing, some evidence from the intelligence agencies was heard in the presence of the claimants, and some was heard in closed session, without them. This is part of the procedure for dealing with the special nature of intelligence-related hearings, where some of the relevant evidence is too sensitive to be discussed in open session. The claimants can be represented, in closed session, by a Special Advocate – but that was not the case in this hearing. The closed session is also attended by a Counsel to the Tribunal, whose role is to help the Tribunal, rather than represent the claimants. The Counsel is able to report back, in good faith, to the open session, such details as the intelligence services agree can safely be disclosed – this is to help reassure the claimants that the Tribunal is making its decision on a sound basis.

In this instance, the intelligence services explained, in closed session, some of the oversight mechanisms they apply in order to ensure that their interception activities are not indiscriminate or arbitrary. They agreed that some of that explanation could be taken back into the open session (and, as such, put into the public domain). It appears on page 26 of the Tribunal’s December 2014 judgment, in paragraph 47. It sets out the conditions under which the intelligence services can request interception data from another country (such as the US) and the internal rules and safeguards that apply to the data received.

According to the judgment, the fact that these details are now in the public domain is enough to meet the ECHR’s requirement for foreseeability – a requirement which the European Court of Human Rights has expressed like this:

“in a system applicable to citizens generally … the law has to be sufficiently clear in its terms to give them an adequate indication as to the circumstances in which and the conditions on which the public authorities are empowered to resort to this kind of secret and potentially dangerous interference with private life” (Leander v Sweden, 1987)

In other words, it is the Tribunal’s judgment itself which, by including these details, renders GCHQ’s interception regime ‘foreseeable’ enough to comply with the ECHR. Although the Tribunal declined to admit it at the time, the logical corollary of this is that the interception regime was therefore not compliant in the absence of this explanation… and that is the substance of today’s judgment. Phew.

So, since nothing in the interception regime will change as a result, has the whole thing just been a huge waste of time?

No.

It is absolutely, vitally important that we should have legal means to challenge the policies and actions of our government and its agencies, whether those legal means are domestic (such as the IPT) or international (such as the ECHR). It’s to the credit of the claimants in this case that they were determined to exercise those legal means, and that they did so successfully.

It’s also important that the assertions of civil servants (such as Charles Farr), ministers (such as the Home Secretary) and those responsible for overseeing the intelligence services (such as Sir Malcolm Rifkind) be demonstrably open to challenge. All three of them repeatedly assured us that the surveillance regime was lawful, and all three of them have been shown to be wrong.

Where law enforcement and intelligence activities have to be carried out in secret, we rely entirely on the good faith and trustworthiness of those responsible for the governance regime. When they fall short – as they have done – we must have the means to find out, and call them to account.

We also have to learn lessons from this about the consequences of unlawful surveillance. The intelligence services now have seven years’ worth of surveillance data collected unlawfully. What are they going to do with it? That personal data is seven years’ worth of toothpaste that can’t be put back into the tube. At a time when the government keeps pressing, again and again, for increased powers of interception and surveillance, we should remind them, again and again, that mistakes here are indelible.