One hop, two hop…

(… now, what was the question?)

OK, I can deny it no longer. I love Eric Schmidt. But only because he is good blog-fodder. I want to tie together two news stories and a blog post today: the blog post is Wendy Grossman’s concise, accurate response to Eric’s “loopy” statements on encryption. Of the two news stories, one is about said “loopy” statements, and the other is a recent Guardian article about the NSA’s interception programmes.

First, let me say that I agree with the points Wendy makes; I just have another one to add to the reasons why encryption is not, despite Eric’s reported claims, the answer to government surveillance.

Here’s the first article – one of many that quotes Mr Schmidt as saying “The solution to government surveillance is to encrypt everything”. (Actually, some stories quote him as saying “The solution to government surveillance is to encrypt everyone”, which would, if anything, be even loopier. I have no idea how you encrypt a person, but it sounds like something very bad from the first Tron movie. And trying to decrypt someone with the wrong key would be incredibly messy).

The second article is this one, a Guardian piece which explains how the NSA and GCHQ did a deal allowing the former to exploit incidentally-collected metadata, rather than having to discard it. The original policy, apparently, was that data collected as a ‘side effect’ was to be discarded. Under the new deal, if an interception happened to capture traffic or metadata from an otherwise unsuspected UK citizen, the NSA could use that as the basis for a social graph analysis of that person’s contacts. However, the NSA also has a so-called “three hops” rule, under which it is authorised to examine the communications of the original person’s friends (one), their friends-of-friends (two) and their friends-of-friends-of-friends (three).

Social graphs are, in themselves, extremely revealing – of family and social relationships, social status, socio-economic grouping, and so on; this article by John Naughton gives a good introduction. There’s also research by Prof. Sandy Pentland (MIT), using mobile phone metadata to predict social behaviour, which shows just how revealing this kind of analysis can be. The Guardian’s own research suggests that, even using just the Facebook social graph of the average user, a three-hop link would net up to 5 million other users.
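To make the arithmetic of hop expansion concrete, here is a minimal sketch – the contact graph and names are entirely hypothetical – of a breadth-first “three hops” walk over a contact graph, of the kind such an analysis implies:

```python
from collections import deque

def within_hops(graph, start, max_hops):
    """Return everyone reachable from `start` in at most `max_hops` hops."""
    seen = {start}
    frontier = deque([(start, 0)])
    reached = set()
    while frontier:
        person, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't expand beyond the hop limit
        for contact in graph.get(person, ()):
            if contact not in seen:
                seen.add(contact)
                reached.add(contact)
                frontier.append((contact, hops + 1))
    return reached

# Toy contact graph (hypothetical people).
graph = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice", "erin"],
    "dave": ["bob", "frank"],
    "erin": ["carol"],
    "frank": ["dave"],
}

print(sorted(within_hops(graph, "alice", 3)))
# → ['bob', 'carol', 'dave', 'erin', 'frank']
```

The point of the sketch is the growth rate: if each person has roughly f contacts, three hops reach on the order of f + f² + f³ people, which is how a single seed individual can net millions of others.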

All this, remember, based on the metadata of a single individual whom there was no reason to investigate in the first place.

Encrypting the traffic does nothing to protect against this.

Many of you probably remember Eric Schmidt saying, on Dec 3rd 2009, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place”. I used to think that he was expressing a personal opinion. However, we now know that 11 months earlier, on Jan 14th 2009, Google became one of the data feeds for the NSA’s PRISM interception system.

I now wonder whether, far from offering us his own view, Eric was just repeating his most recent advice from the NSA.

Good for Caspar

If there’s one person who, in the wake of the whole Snowden episode, would be fully justified in hiring an Airbus A380 and using it to skywrite “I TOLD YOU SO” across north-western Europe, it’s Caspar Bowden. But that’s not his style. Instead, he put two years of independent research work to use, drafting the briefing note on PRISM for the European Parliament enquiry, and got on with asking factual questions about the gap between intelligence policy and public accountability.

So congratulations to Caspar Bowden for getting put through to Deputy Prime Minister Nick Clegg’s radio phone-in slot, with a piercing question about unilateral US espionage against the UK (video available online here). Not entirely surprisingly, Caspar didn’t get a straight answer – particularly not the one British citizens wanted to hear, which was “It’s unacceptable, and we’re going to tell them all intercept-sharing deals are off until they start respecting extra-territorial human rights”. Nor the other answer “It’s unacceptable, but for £100m, we’re just going to roll over and have our tummy tickled”, which probably would have been closer to the realpolitik of it.

If there was a germ of hope to be taken from Mr Clegg’s answer, it was this: mass surveillance by the state must not only be legal, it must also be legitimate. Its legitimacy stands or falls on the government’s ability to prove that the surveillance is accountable and proportionate. As ever, Mr Clegg says the right things in terms of principle; the doubt is as to whether he can translate the principle into policy or practice.

Mass surveillance and human rights

While the IETF was meeting in Vancouver this month to discuss (among other things) the technical implications of mass surveillance, the Council of Europe held a ministerial meeting in Belgrade to consider the human rights implications of the same capabilities. One of the things I like best about my job is that it puts me at the intersection of these two communities. As far as I’m aware, I am one of a very small number of people who regularly attends both IETF and CoE sessions (the former as an identity and security geek, the latter as a privacy geek and an observer to the working party on data protection). From that perspective, I thought it might be useful for me to comment on a recent article in the Guardian, which describes the UK delegation’s objection to part of a statement issued by the ministerial meeting as a whole. That said, I should also make clear the following disclaimer: although the views I express below are informed by my professional experience, they are strictly my personal views as a UK citizen, voter and taxpayer, and should not be taken to represent the views or policies of my employer.

To quote from the article, the disputed passage of the statement reads:

“We invited the Council of Europe to … examine closely, in the light of the requirements of the European Convention on Human Rights, the question of gathering vast amounts of electronic communications data on individuals by security agencies, the deliberate building of flaws and ‘backdoors’ in the security system of the internet or otherwise deliberately weakening encryption systems.”

Given the scope and nature of recent revelations, it’s not really plausible to deny that these activities have been going on, nor that the UK has been engaged in some or all of them. That seems to me to lead to the following conclusions: either the UK Government feels there are simply no human rights implications to mass surveillance, or it acknowledges that there are, but declines to have those implications considered by the Council of Europe (of which it is a member, and whose Convention 108 it has ratified).

The first of those would be worrying, but I doubt if it is the case. The second is far more likely, but I think it should still worry us, and here’s why.

First, let’s concede that it’s not really tenable to argue that states should enjoy no exemptions from data protection laws as a whole: there are legitimate functions of the state that require access to data which might otherwise remain confidential. For instance, it would be absurd if, in the process of investigating a crime, the state did not have a legal basis for trying to determine whether or not someone is lying about whether they did it. Conversely, it would be absurd and dangerous if those exemptions were unqualified. The validity (and in many respects also the reliability) of information obtained under these exemptions actually depends on the fact that strict conditions apply to the way in which it is obtained and the ways in which it may be used.

Both Convention 108 and the EU’s Data Protection Directive 1995 contain exemptions for law enforcement and national security access to otherwise protected data. However, in both cases those exemptions are qualified. They specify that the actions taken must be “necessary” to achieve one of the stated goals, and – in the case of Convention 108 – further add the stipulation that they constitute “a necessary measure in a democratic society”.

Article 6 of the Data Protection Directive 1995 also sets out the over-arching proportionality principle, that data processed should be “adequate, relevant and not excessive”.

Article 15 of the Directive on Privacy and Electronic Communications 2002 is also explicit about the qualification of exemptions to data protection, specifying that they must constitute a “necessary, appropriate and proportionate measure within a democratic society”.

Now, the technical community at IETF last week expressed a near-unanimous view that it should take action to revisit the technical means by which the Internet could be protected against mass surveillance – describing interception on that scale as indistinguishable from an attack. Technical protection apart, though, it seems to me that what the NSA/GCHQ disclosures reveal is primarily a failure of governance: elected representatives seem either to have been kept uninformed of what the intelligence services were doing, or to have done their bit to ensure that it was not subject to effective independent accountability. Witness, for instance, the unedifying sight of a US FISC judge issuing a written ruling that “NSA exceeded the scope of authorized acquisition continuously”… before agreeing to expand its authority to collect metadata by “11-24 time”.

At the same time, we have a UK cabinet minister and member of the national security council (NSC) saying that he was given no information at all about GCHQ’s TEMPORA programme or the NSA’s PRISM. The same minister notes that, for much of the previous administration’s time in power, the Home Office was pressing for new intercept laws when, in fact, it was already doing what it complained it could not. The British public have also been offered the carefully-choreographed spectacle of the heads of the intelligence services in a televised Q&A session with the committee of MPs nominally responsible for their accountability. As you can see from the video footage, the session was hardly what the BBC, with touching optimism, referred to as a “grilling”. What’s not made clear in the video, by the way, is that all the questions were pre-arranged.

Under those circumstances, I could not blame British citizens for wondering how we are to achieve adequate supervision of communications interception on a mass scale. The “gentleman’s agreement” by which the spies agree not to do anything too beastly, provided MPs are civil enough not to ask impertinent questions, clearly doesn’t cut it. And, in a sense, it’s laughable that it ever did: after all, what use is a gentleman’s agreement with someone whose profession is deceit?

The bottom line is this: the UK clearly signed up to the provisions of European regulations and Convention 108, well aware that they contained national security exemptions, but also well aware that those exemptions are qualified by requirements for democratic necessity and proportionality. It has equally clearly failed to ensure that those requirements are met. It should, therefore, either be honest and indicate that it no longer intends to be bound by any of the statutory instruments in question, or at least be open and endorse an independent examination of whether mass surveillance can be compatible with the human rights laws to which it has signed up.

Is privacy an anomaly?

I am really in two minds about Vint Cerf’s reported comments regarding privacy. There are aspects I agree with, and others I would hotly dispute.

It may well be true that in the small town where Vint grew up, one had little or no expectation of privacy. Ingrid Melve said the same thing about her home village in Norway. If you behaved badly as a child, you could expect everyone to know about it. But, Ingrid went on to say, if you mended your ways, people would forgive and even, possibly, forget. That’s people, though: computers find forgiving impossible and forgetting extremely hard. Yes, technology has changed the way in which we interact, because we’re no longer dealing with face-to-face human interactions alone: we’re dealing with interactions that are remote and technically mediated. However, I don’t actually think the “small town” example tells us that there was no expectation of privacy: I think it tells us that people had different assumptions about what constituted the public sphere and what constituted the private – not that there was simply no such thing as a private sphere.

But the second point is that, technically mediated or not, privacy is a relationship. Privacy consists in retaining the choice over what you disclose to whom, in what context. If you want to describe or analyse specific privacy use-cases, it only makes sense if you can specify what the context is, and which relationship is in question. If you want to talk about privacy in the abstract, without those specifics, the only way to do it is as a social construct: privacy is a function of co-existence. The scope of our expectation of privacy may vary with time, but it has been around as long as humans have lived with each other. It is not an anomaly.

Nor are today’s technical developments the only ones to have challenged our view of privacy; Warren and Brandeis’s seminal paper on privacy was spurred by the advent of portable cameras, and the prospect that pictures taken in public could be quickly and easily published in the popular press. We’re still wrestling with that today; we rely on regulation of the press and paparazzi, not on technical solutions, to protect against this privacy risk, and frequently we fail.

Third – and this is where I find Vint’s reported remarks most problematic – it is not good enough to say that we, collectively, have forfeited our privacy voluntarily through the disclosures we make via networked social sites and other online services. It’s not good enough, because that implies that our disclosures are intentional and made on the basis of informed consent. In most cases, that is not an accurate analysis.

  • I have said for some time that the disclosures we make on networked social sites are made on the basis of a fundamentally flawed risk assessment. Any service that fits the description “if you’re not paying, you’re the product” has an overwhelming interest in fooling you into thinking that you are only disclosing information to your circle of friends. This is key to their ability to monetise your personal data, your social graph, and the metadata generated by your online activity.
  • The analysis also does not account for passively-disclosed data, or data the individual is unaware of disclosing. In both those cases, if we have forfeited our privacy, it is because of the intentional actions of others – and those actions can be subjected to regulation. It is all very well for Vint to say that we need to modify our behaviour if we want to protect our privacy, but that has only a partial effect on actively-disclosed data, and very little on passive disclosures.

I do agree with Vint that we need to modify our behaviour if we want to protect our privacy better, but that can only make a significant difference where our disclosures are either active, or passive but consensual. And last, asking whether privacy is an anomaly is just not the right question. A need for privacy is a built-in function of human society; the question to resolve is whether it is a fundamental right, and if so, what to do about that.

How might a responsible journalist behave?

Let’s do a little thought experiment, starting with a few assumptions.

  • I’m an investigative journalist, and I have come into possession of some sensitive information.
  • Publication of either the information itself, or news articles based on it, would have a substantial public interest.
  • I need to transfer the information from A to B without, myself, travelling from A to B.

What are my options?

  1. Given that the information is in digital form, I could just attach it to an email and send it to B.
  2. I could encrypt it first, and then email it.
  3. I could encrypt it and have it hand carried by a trusted courier, and not give the courier the decryption keys.

If I use strong, standardised encryption (for instance, of the kind approved by a recognised national standards body and implemented in mainstream commercial products), I am entitled to a reasonable belief that the information is protected against brute force decryption. I could also use, say, asymmetric and/or out-of-band key exchange to ensure that B can decrypt the information once the courier has delivered it.
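As a sketch of option 3’s key property – the courier carries the ciphertext while the key travels out of band – the toy cipher below illustrates the separation. To keep it self-contained it uses a SHA-256 counter-mode keystream, which is emphatically *not* the strong, standardised encryption the scenario calls for; in practice you would use a vetted implementation of an approved algorithm such as AES.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256 counter-mode keystream.
    For illustration only -- NOT standardised, reviewed cryptography."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")
        ).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Journalist (A): generate a fresh random key and encrypt the material.
key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
document = b"sensitive material with a substantial public interest"
ciphertext = keystream_xor(key, nonce, document)

# The courier carries only (nonce, ciphertext); without the key,
# which is exchanged out of band, the courier can recover nothing.
assert ciphertext != document

# Recipient (B): applying the same keystream inverts the XOR exactly.
assert keystream_xor(key, nonce, ciphertext) == document
```

The design point is simply that the two channels – physical carriage of the ciphertext, and out-of-band (or asymmetric) delivery of the key – must both be compromised before the information is exposed.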

This is all hypothetical, but it seems to me that option 3 and the subsequent paragraph would be good evidence of a “responsible” approach on the part of the journalist.

Equally hypothetically: if it subsequently emerges that the apparently strong, standardised cryptography has been intentionally weakened in the course of the standardisation process, that does not imply that I was irresponsible in my ‘reasonable belief’ in the strength of the encryption. There may be irresponsibility there, but it is not mine.

It’s apple pie… but it’s NIST standard apple pie

Look, I feel a fair bit of sympathy for NIST at the moment, what with all the fuss about Snowden, the embarrassment over Dual Elliptic Curve DRBG, the doubts about SHA-3, and so forth. I fully appreciate that the majority (and, still, quite possibly all) of its standardisation work is done in good faith, by individuals committed to the greater good. And I understand the unprecedented pressure NIST must feel to do something visible, to rebuild trust and confidence in its work on digital security and related domains – such as privacy.

So, when I saw that NIST had published a preliminary cybersecurity framework, I was interested to take a look.  I was even more interested when I saw that it included a whole appendix setting out a methodology to protect privacy and civil liberties. Unfortunately, I have to say that my hopes were somewhat disappointed.

The framework is, I hasten to affirm, all worthy stuff. There’s nothing in there that a CISO would not want to see, and the practical measures are well set within a context of risk and compliance. All good… as far as it goes.

Here’s the problem I have with the framework – particularly the section on privacy and civil liberties: it does not, fundamentally, move us beyond the OECD privacy principles first published in 1980. The OECD principles have even had their 30th-anniversary review and revision… so in that sense, the NIST framework is lagging even further behind the current state of the art in privacy regulation.

Here are some specific concerns, in a little more detail.

1 – The NIST framework focuses on the protection of personally identifiable information (PII). I’ve written several times about the problems with trying to regulate on the basis of a definition of PII; here are a couple of illustrative posts. I won’t re-hash the arguments here, but will simply say this: defined lists of PII are useful for data protection, but not enough for effective privacy regulation. Especially when the stated goal is to take civil liberties and privacy into account, no framework is complete unless it takes privacy outcomes into account, and not just the protection of a contextually-variable set of data items.

The concept of harm, as a metric, is referred to on line 692 of the framework, but is not reflected in the methodology. And even “harm” is generally recognised by the privacy community as problematic as a privacy metric – so it is disappointing that the NIST framework does no more than mention it in passing, as if it offered a ready solution.

As it stands, the NIST framework would have little or no impact on privacy violations arising from, say, behavioural profiling. There’s a two-line mention of Big Data-related privacy concerns in Section C-5, but that’s not a strong starting point for a 2013 guidance document aiming to protect privacy and civil liberties.

2 – In today’s hyper-connected, digital society, the right to correction/deletion of incorrect data is not enough to protect privacy. The framework mentions data retention, but does not do enough to establish the data subject’s entitlement to control over data about them. Individuals must be entitled to ensure that data retention periods are appropriately set and effectively enforced.

3 – The framework is almost entirely silent on the issue of law enforcement access to personal data. Apart from noting critical national infrastructure as a design factor, the framework does not address issues of intelligence or national security access to data. No-one is pretending that that’s an easy topic to address – but neither can anyone credibly pretend that it doesn’t exist. Again, in a document explicitly aiming to balance cybersecurity with personal privacy and civil liberties, this is a serious flaw.

Those are some of my misgivings about the framework, but you might wonder what I would suggest as constructive input, rather than just moaning about its shortcomings. Fair point.

As an initial answer, let’s look at one of the framework document’s remarks about privacy standardisation: “There are few identifiable standards or best practices to mitigate the impact of cybersecurity activities on individuals’ privacy and civil liberties”. That may well be true. But if you set out to balance cybersecurity impacts and civil liberties, you can’t really leave it at that and move on. Unfortunately, the obvious next step is one that NIST probably feels unable to take – and that would be to acknowledge an over-arching right to privacy, along the lines of the European model. If you have no standards for best practice, but you have an over-arching principle to refer to, there’s a way forward – and it need not be a slippery slope to perdition: that right to privacy can – like, say, the right to free speech – be a qualified one, and subject to being balanced against other individual and social rights.

Second, as I hinted above, any notion of PII needs to be supplemented with the idea of privacy outcomes. If the processing of data results in a violation of the individual’s privacy, that should be the key regulatory factor, regardless of whether the data in question appears on someone’s list of what constitutes PII. In the era of social graphs, passive disclosure and behavioural profiling, nothing less than that can offer adequate privacy safeguards.

Third, and related to the previous point, regulation needs to work out what to do about inference data. Behavioural profiling is useful to the organisations that do it, because it enables them to extrapolate from the things they know to the things they can infer. But laws based on a notion of PII are, by and large, blind to the privacy impact of inference data. As one researcher put it: “you don’t have to be in the statistics to be affected by them”.

Fourth and last, and this may seem rather curmudgeonly: I just don’t think the framework, as it stands, actually steps up to the job of explaining how to balance cybersecurity practices with the protection of privacy and civil liberties. If I’m honest, the “privacy and civil liberties” phrase looks a lot like a piece of reactive, post-Snowden fairy-dust, sprinkled optimistically on an existing cybersecurity framework.

And as I noted at the beginning, part of me really can’t blame NIST for giving that a shot. I just don’t think it serves the interests of the citizen, and it does not do enough to rebuild trust and confidence in NIST’s contribution to the domain.