It’s apple pie… but it’s NIST standard apple pie

Look, I feel a fair bit of sympathy for NIST at the moment, what with all the fuss about Snowden, the embarrassment over Dual Elliptic Curve DRBG, the doubts about SHA-3, and so forth. I fully appreciate that the majority (and, still, quite possibly all) of its standardisation work is done in good faith, by individuals committed to the greater good. And I understand the unprecedented pressure NIST must feel to do something visible, to rebuild trust and confidence in its work on digital security and related domains – such as privacy.

So, when I saw that NIST had published a preliminary cybersecurity framework, I was interested to take a look. I was even more interested when I saw that it included a whole appendix setting out a methodology to protect privacy and civil liberties. Unfortunately, I have to say that my hopes were somewhat disappointed.

The framework is, I hasten to affirm, all worthy stuff. There’s nothing in there that a CISO would not want to see, and the practical measures are well set within a context of risk and compliance. All good… as far as it goes.

Here’s the problem I have with the framework – particularly the section on privacy and civil liberties: it does not, fundamentally, move us beyond the OECD privacy principles first published in 1980. The OECD principles have even had their 30th-anniversary review and revision… so in that sense, the NIST framework is lagging even further behind the current state of the art in privacy regulation.

Here are some specific concerns, in a little more detail.

1 – The NIST framework focuses on the protection of personally identifiable information (PII). I’ve written several times about the problems with trying to regulate on the basis of a definition of PII; here are a couple of illustrative posts. I won’t rehash the arguments here, but will simply say this: defined lists of PII are useful for data protection, but not enough for effective privacy regulation. Especially when the stated goal is to take civil liberties and privacy into account, no framework is complete unless it also considers privacy outcomes, and not just the protection of a contextually variable set of data items.

The concept of harm, as a metric, is referred to on line 692 of the framework, but is not reflected in the methodology. Even “harm” is widely recognised by the privacy community as a problematic privacy metric – so it is disappointing that the NIST framework does no more than mention it in passing, as if it offered a ready solution.

As it stands, the NIST framework would have little or no impact on privacy violations arising from, say, behavioural profiling. There’s a two-line mention of Big Data-related privacy concerns in Section C-5, but that’s not a strong starting point for a 2013 guidance document aiming to protect privacy and civil liberties.

2 – In today’s hyper-connected, digital society, the right to correction/deletion of incorrect data is not enough to protect privacy. The framework mentions data retention, but does not do enough to establish the data subject’s entitlement to control over data about them. Individuals must be entitled to ensure that data retention periods are appropriately set and effectively enforced.

3 – The framework is almost entirely silent on the issue of law enforcement access to personal data. Apart from noting critical national infrastructure as a design factor, the framework does not address issues of intelligence or national security access to data. No-one is pretending that that’s an easy topic to address – but neither can anyone credibly pretend that it doesn’t exist. Again, in a document explicitly aiming to balance cybersecurity with personal privacy and civil liberties, this is a serious flaw.

Those are some of my misgivings about the framework, but you might wonder what I would suggest as constructive input, rather than just moaning about its shortcomings. Fair point.

As an initial answer, let’s look at one of the framework document’s remarks about privacy standardisation: “There are few identifiable standards or best practices to mitigate the impact of cybersecurity activities on individuals’ privacy and civil liberties”. That may well be true. But if you set out to balance cybersecurity impacts and civil liberties, you can’t really leave it at that and move on. Unfortunately, the obvious next step is one that NIST probably feels unable to take – and that would be to acknowledge an over-arching right to privacy, along the lines of the European model. If you have no standards for best practice, but you have an over-arching principle to refer to, there’s a way forward – and it need not be a slippery slope to perdition: that right to privacy can – like, say, the right to free speech – be a qualified one, subject to being balanced against other individual and social rights.

Second, as I hinted above, any notion of PII needs to be supplemented with the idea of privacy outcomes. If the processing of data results in a violation of the individual’s privacy, that should be the key regulatory factor, regardless of whether the data in question appears on someone’s list of what constitutes PII. In the era of social graphs, passive disclosure and behavioural profiling, nothing less than that can offer adequate privacy safeguards.

Third, and related to the previous point, regulation needs to work out what to do about inference data. Behavioural profiling is useful to the organisations that do it, because it enables them to extrapolate from the things they know to the things they can infer. But laws based on a notion of PII are, by and large, blind to the privacy impact of inference data. As one researcher put it: “you don’t have to be in the statistics to be affected by them”.

Fourth and last, and this may seem rather curmudgeonly: I just don’t think the framework, as it stands, actually steps up to the job of explaining how to balance cybersecurity practices with the protection of privacy and civil liberties. If I’m honest, the “privacy and civil liberties” phrase looks a lot like a piece of reactive, post-Snowden fairy-dust, sprinkled optimistically on an existing cybersecurity framework.

And as I noted at the beginning, part of me really can’t blame NIST for giving that a shot. I just don’t think it serves the interests of the citizen, and it does not do enough to rebuild trust and confidence in NIST’s contribution to the domain.