I very much doubt Andy Smith, of the UK Cabinet Office, thought that his remarks yesterday about disclosing fake personal details would generate quite the flurry of comment they did. But that’s the problem with the whole topic of online privacy and anonymity: it is fraught with unexpected and sometimes perverse consequences.
I won’t re-hash the background here; you can get a far better write-up from Alec Muffett’s excellent piece of re-contextualisation and analysis, and from Joanna Geary’s article in the Guardian.
What I will do, though, is quote a participant at last week’s OECD Working Party on Information Security and Privacy:
“It’s not a matter of designing for privacy *or* security: the proper goal is to optimise for both.”
That’s not to say, by any means, that it’s an easy problem. Far from it. Let me give a couple of examples of how challenging it can be to do this well:
1 – Perverse consequences
If you oblige people to prove that they are, for instance, under 18 in order to use a ‘safe online chat environment’ for young people, one foreseeable consequence is that you create a market for the malicious use of valid credentials. Now put yourself in the position of a young person with valid “under 18” credentials; are you really any safer now that a bully or child abuser has a strong incentive to bribe, threaten or cajole you into lending them your credentials?
2 – Is it OK to lie about your age?
Helen Goodman MP may, like Andy Smith, be wondering if she could perhaps have expressed herself better yesterday. Her line of reasoning appears to have been as follows:
“If you let people lie about their personal details online, you make it possible for them to lead potential victims towards abuse. Therefore it should be made illegal to give false personal details online.”
Unfortunately, whether or not the premise is true, the proposed solution doesn’t work. Suppose you make it illegal to give false personal details online (and suppose, for the sake of argument, that such a law could be enforced). You then expose individuals to risks of identity theft, fraud, reputational damage and, in some cases, physical harm, unless you can also create a legal and regulatory environment in which that data can never be subsequently abused, whether accidentally or on purpose. Since Helen Goodman must know that this, in turn, is impossible, her line of reasoning looks increasingly foolish.
And let’s not forget the basics: a ‘date of birth’ disclosure, to a third party who has no means of verifying it, is an essentially unreliable self-asserted attribute. As a security mechanism, it is only made weaker if the information in question is known to multiple third parties.
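The weakness of a self-asserted attribute can be sketched in a few lines of Python (the names and values below are purely hypothetical, for illustration): a date-of-birth check proves only that the caller *knows* the value, not that the value is true or that the caller is its rightful owner, and every additional third party holding the value is another place an attacker can learn it from.

```python
# A minimal sketch of 'authentication' by self-asserted attribute.
# All names and values here are hypothetical.

STORED_DOB = "1977-04-01"  # held, unverified, by the service


def dob_check(claimed_dob: str) -> bool:
    """'Authenticates' by comparing a self-asserted date of birth.

    This proves knowledge of the value, nothing more: the service
    never verified the stored value in the first place.
    """
    return claimed_dob == STORED_DOB


# The legitimate user passes the check...
assert dob_check("1977-04-01")

# ...but so does anyone who has learned the value from any other
# service that also holds it. The 'secret' gets weaker with every
# additional third party that knows it.
attacker_knowledge = "1977-04-01"  # e.g. obtained from a breached third party
assert dob_check(attacker_knowledge)
```

The sketch also makes the earlier point concrete: since the stored value was never verified, a user who lied about it originally is indistinguishable from one who told the truth.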
Unfortunately, the bottom line is that optimising for privacy and security is tough, and tough problems are never solved by soundbite. In this instance, it needs (at least) realism about policy objectives, clarity about what ‘authentication’ really is, and an understanding of what the internet means for personal privacy. Then, perhaps, we can start to close the reality gap between policy aspirations and social reality.