Paying for Privacy…

There’s a good article on CNET News, to coincide with Data Privacy Day – thoughtful and thought-provoking.

“It’s been 10 years: Why won’t people pay for privacy?”, by Declan McCullagh

I’ve left some feedback on it, but as I’m about 40th in the comment stack, I don’t imagine it will attract much attention there – so here it is:

“Congratulations on a well-timed article with a lot of thought behind it.

I’ve been working on digital ID and privacy for the last 7-8 years, and I suspect that, if you’re looking at the commercial aspects, there are two reasons why “privacy protection” has largely failed to offer a compelling value proposition. One is comparatively old, the other is a little newer.

The older reason is that “point” privacy protection products can usually do little or nothing about the elephant in the room… the vested and mostly-invisible commercial interests behind online advertising are so huge, so entrenched and so opaque to the user that it is all but impossible to change the balance of power between the ‘data subject’ and the ‘data gatherer’. As an example, look at the difficulty some very bright people have had with turning VRM from concept into reality. (VRM, or “Vendor Relationship Management” was coined as a flip-side to “Customer Relationship Management” – CRM – … the idea being that my interests would be better served if I took control of my data and used it as the leverage to change vendors’ behaviour). The idea, the principles and the technology might all be fine, but those factors are not enough to convince/persuade/force vendors to do things your way instead of theirs.

The second, and newer, reason has to do with the increasing ability of data-miners to build an extremely accurate model of you (and your behaviour and preferences) without needing to know exactly who you are.

And here’s the worrying point, in the light of that second reason. Most of us think we have a reasonable handle on what our privacy is, and what we might do to protect it. The problem is that most of us are still thinking in terms of the risks arising from reason number 1. Very few of us have any notion of what the risks are which arise from reason number 2, let alone how to mitigate them.”

It was a rather hastily sketched-out response, and probably raises more questions than answers – but I wanted to make it promptly, partly because I hope it will tie in nicely with some of the comments I’ll be blogging in due course about the CPDP conference I’ve just got back from. More later…

Privacy, personas and consent

Followers of this blog will instantly recognise the three things in the title… I’ve gone on about them often enough. Some may be wondering if (or indeed how) I can have more to say. Well, there’s always another twist, another perspective, so here goes.

Usually, I’m promoting “personas” as a useful privacy tool: that is, if you can segregate and selectively reveal different aspects of your online identity, you can probably manage your privacy better. The people who know you as a “soccer mom”, for instance, don’t necessarily know that you’re also an Army reservist… and why should they – unless you tell them?
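To make the "selective reveal" idea concrete, here's a minimal sketch of personas as subsets of identity attributes. All the names and attributes are hypothetical examples of mine, not any real scheme or API:

```python
# Illustrative only: a full identity, and personas that each consent
# to reveal just a chosen subset of its attributes.

full_identity = {
    "name": "Alex",
    "soccer_club": "Riverside Juniors (parent volunteer)",
    "military": "Army reservist",
    "email_personal": "alex@example.org",
}

# Each persona names the attributes it is willing to disclose.
personas = {
    "soccer_mom": {"name", "soccer_club"},
    "reservist": {"name", "military"},
}

def disclose(persona: str) -> dict:
    """Return only the attributes the chosen persona reveals."""
    allowed = personas[persona]
    return {k: v for k, v in full_identity.items() if k in allowed}

# The soccer club sees nothing about the reserves, and vice versa.
print(disclose("soccer_mom"))
print(disclose("reservist"))
```

The point of the sketch is simply that segregation is a property of what you disclose, not of who you are: the "soccer mom" view carries no hint that the "reservist" view exists.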

It’s that ‘consent’ aspect which I’d like to take a fresh look at today. After all, I usually use an example like the one above: where the person concerned wishes, for whatever reason, to separate one aspect of their life from another. However, let’s look at the flip side: hypothetically, let’s assume you have a university academic who is known to his students as a specialist in IT law, a lecturer and in some cases perhaps a PhD supervisor. In other words, he has a pretty clearly-bounded, if multi-faceted ‘persona’ as a ‘professor’. Let’s call him Professor Dent for the heck of it; I. Dent, of course, because the Prof. has an academic and practical interest in questions of online identity and privacy.

Naturally, Prof. Dent tends to see a correspondingly bounded ‘persona’ of his students, delineated by their areas of study, their essays, their behaviour in lectures and so on.

Now, these days it would by no means be considered unusual if those students were also socially active online, in the course of which they might exchange news, photos, gossip etc and even comments about their studies (hey, it could happen…). Prof. Dent, being an up-to-date sort of academic, has accounts on MyBook, FaceSpace and so on, and some of his students even reckon they can “friend” him without losing too much credibility. In fact, he turns out to be (rather endearingly) a huge Dr Who fan… though of course in his case that really means Jon Pertwee, not these rock-star Johnny-come-latelies.

However, the University authorities find out about Prof. Dent’s online presence and are not at all happy. Not because of any of the content, I hasten to add… there’s nothing in the least bit prurient or reprehensible to be found there. No – they just don’t think it’s appropriate for the Prof to be generating this discourse, accessible to people who are ‘supposed’ to see him in his professorial persona only.

So, here we have rather the converse of the original problem. Rather than keep specific personas strictly segregated, Prof. Dent has decided that his professorial persona might actually benefit from being made slightly more multi-dimensional. Those of you who originally followed this blog’s precursor might see reflections of Sun’s enlightened ‘blog anything’ corporate policy in that. But Prof. Dent is essentially being told, not only that he must segregate his various personas, but that he must not continue with his online presence.

In other words, regardless of the privacy and persona aspects, his consent is being over-ridden.

Of course, all this is purely academic (ha ha): after all, what University would be daft enough to think that such a policy was (a) a good idea or (b) practical? What’s more, I think I would have reservations about trying to impose it on a bunch of academics – especially IT-literate lawyers. Thank goodness it’s all hypothetical.

The "German TV, body-scanner" meme

There’s been lots of traffic about a programme from German TV station ZDF, generally under a headline like “German TV programme shows naked scanners don’t work”. When something like that gets onto Twitter, BoingBoing and Bruce Schneier’s blog, it’s clearly reaching a heck of an audience.

It’s unfortunate, then, that even Schneier (and I realise this is close to heresy) is jumping from a set of premisses to a conclusion which they really do not support… certainly not in the terms which most readers will assume. Put in its simplest form, the logic readers might be tempted to follow on reading Schneier’s post is this: “this type of full-body scanner failed in certain respects in this demonstration, therefore all full-body scanners are useless in any implementation”. The shame of it is, of course, that Bruce has a well-deserved reputation for debunking exactly that kind of bogosity.

The title of his post is: “German TV on the Failure of Full-Body Scanners”. Now, I appreciate an eye-catching lede as much as the next blogger… but that one is just bound to create some very misleading impressions (and it’s not helped, frankly, by Schneier’s implication that you will pick up the relevant gist of the video even if you don’t understand German).

Sure, we’ve all seen the media footage of those millimeter-wave “naked scanners”, and most of us have seen some of the resulting privacy-related fallout. But if you thought that the ZDF programme would give you the evidence for why those scanners are useless, you’re going to be disappointed. OK, so I’d better put my iconoclastic money where my blasphemous mouth is. Here are some of the factors which Bruce might usefully have pointed out…

– the scanner used for the programme is not an X-ray device; in fact, it’s an entirely passive device which generates an image based on the subject’s radiated body-heat. Things placed between the body and the detector, and which obstruct the radiation of body-heat, will show up because of the difference in temperature between them and the body itself. In some respects it can reveal more than an X-ray-based scanner (for instance, it was obvious if the subject was wearing a tie or not, because that forms a partial heat-shield in front of the chest); in other respects, it shows less than an X-ray scanner (for instance, it did not clearly show objects which were in the subject’s jacket pockets, because those were not between the body and the scanner). Neither did it reveal Wolfgang Bosbach’s pacemaker… because that is subcutaneous.

– As you will have gathered from the previous point, some of the items which weren’t detected would have been found under airport security conditions, even by this scanner… either because the subject would have had to take off his jacket, or because he would have been scanned from the side as well as front and back.

I’m not saying that makes this scanner good, by the way – I’m just clearing up some of the things which anyone just reading the blog/Twitter traffic might not be aware of.

There are a couple of other interesting points which come across if you listen to the programme.

– Mr Bosbach, one of the participants, is Chair of the Bundestag’s Home Affairs Committee. I make no comment about his broader policy position, but just note that he explicitly states three criteria which full-body scanners will have to satisfy before he will consider deploying them in Germany:

  1. They must deliver a quantifiable benefit in terms of increased security;
  2. They must do so in a way which adequately respects passengers’ privacy;
  3. They must do so without risk of causing harm.

Those seem like a pretty rational set of pre-conditions (though they also still leave plenty of wiggle-room about what the benefits might be, how much respect for privacy, and so on).

– Bosbach also notes that even the high-resolution millimeter-wave scanners can be implemented in ways which are more privacy-respecting than the kinds of image we have all seen on the news. For instance, if a scan detects nothing suspect, it can simply respond to the operator with a green “OK” symbol. If something untoward is found, its location can be indicated on a simple stick-figure representing the passenger. No graphical representation of the passenger is needed for either of those steps.
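That "green OK or stick-figure marker" design can be sketched in a few lines. This is my own hypothetical illustration of the display logic Bosbach describes, not any vendor's actual implementation; the data structures are invented for the example:

```python
# Hedged sketch: the operator's display never receives a body image,
# only a status flag and, at most, marker locations for a generic
# stick figure. Detection format is assumed for illustration.

def operator_view(detections: list) -> dict:
    """Map raw scan detections to what the operator actually sees."""
    if not detections:
        # Nothing suspect: just a green "OK" symbol, no image at all.
        return {"status": "OK"}
    # Something found: show only *where*, plotted on a stick figure.
    return {"status": "ALERT",
            "markers": [d["location"] for d in detections]}

print(operator_view([]))
print(operator_view([{"location": "left ankle"}]))
```

Note that this says nothing about what the scanner itself captures or stores behind that display – which is exactly the caveat in the next paragraph.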

Again, I’m not saying that makes such a system desirable – nor am I suggesting that that makes it impossible for the scanner (as opposed to the operator’s display screen) to capture, store, transmit and otherwise process the images it generates.

So, the points about side-scanning and detection of items in jacket pockets illustrate that a failure in this demonstration does not necessarily imply that this scanner (let alone all types of full-body scanner) must be ineffective in all implementations; the points about pre-conditions for implementation, and how data is presented to the operator, illustrate that not all deployments need be equally privacy-intrusive.

Don’t get me wrong: I’m all for railing against the idiocies of so-called airport “security” measures which maximise passenger inconvenience for no (and in some cases negative) security benefit. For example, I’ve blogged in the past about what is probably the most dangerous place in any airport: that massive queue for the security scanners, where you have a higher density of people than almost anywhere else in the system, and guess what… at that point, no-one has been scanned, and neither has their hand luggage.

I’ve also commented on the arbitrary nonsense which passes for risk mitigation – such as the time when a (lethal and subversive) tennis ball was confiscated from my carry-on luggage on the grounds that it would harden under low air pressure and could be hurled at someone. This, when the Harrods shop in the same terminal would happily sell me a box of golf balls (and a stylish Argyle sock to pop them into), or some nice [censored, I’m not giving you any more free advice on how to arm yourself pre-flight…].

The best technology will fail if it is poorly implemented and badly deployed; and the best deployments can fail to achieve good outcomes if they reflect a fundamentally flawed policy. I seriously doubt that all deployments of full-body scanners (whichever technology they use) will be either effective or privacy-respecting. But I don’t think the argument against them is best conducted by leaving a swathe of relevant factors out of the analysis.

"Data ownership", "social networking" and other nonsense

Joe Andrieu, who co-chairs the Information Sharing Work Group at the Kantara Initiative, has written a superb piece here, neatly summing up why talking about personal data in terms of “ownership” just doesn’t work.

I’m certainly not going to try and re-hash Joe’s analysis; it stands perfectly well in its own right. All I will offer is this: language and thought are closely intertwined… somewhere on the spectrum between Chomsky and the Neuro-Linguistic Programmers, there’s a happy medium where the thoughts we have are intimately influenced by the ways in which we are able to articulate them… and vice versa.

There are times when a simple phrase takes over as a convenient shorthand for a complex set of concepts – and once that happens, it seems it’s all too easy to ignore the underlying complexity and collude in the belief that what we’re talking about is as simple as the phrase we’re using to talk about it. I notice, in passing, that Noam Chomsky refuses to refer to “the war on drugs”, insisting instead on calling it “the war on certain drugs”. That’s pretty much what I’m getting at here, and I’d argue that the phrase “data ownership” is a prime example.

We know what “data” is, right? It’s a simple enough word. And we all know what “ownership” means… so “data ownership” must mean something correspondingly simple. Except that (as Joe’s piece lucidly explains) if you start by asking questions framed in terms of “data ownership”, it leads you down a path which is neither long nor fruitful.

So, here are two ‘thought experiments’ I’d like to recommend, when you are faced with a couple of these deceptively simple shorthand phrases. Whenever you encounter a question like “who owns my personal data?”, try re-framing it in different terms.

“What rights do I have over data about me?”
“What rights do others have concerning data about me?”
“What duties do I and others have concerning that data?”

You should find that these questions, which acknowledge that you can have rights and duties quite aside from any notions of “ownership”, generate a far more practical and productive conversation.

Here’s the second “shorthand phrase”, and it relates to another one of my current bugbears: “social networking”. You’ve probably heard my micro-rants about this before, online or elsewhere, but in essence… I think the phrase “social networking” is actually encouraging us to blind ourselves to the fact that “networked interaction” and “social interaction” work by entirely different sets of rules – and that if you engage in networked interaction while assuming that you’re playing by the rules of (face to face) social interaction, you’re deluding yourself and probably putting your privacy and self-determination at risk.

Consider this: have you ever heard two children having an animated conversation in another room, and walked in there only to have their previous conversation come to an abrupt end and be replaced with something entirely innocuous? Quite.

The things they were happy to talk about one-to-one are not the same as the things they are happy to talk about with a parent in the room. Surprise, surprise. We know that. We all know that. We’ve known that ever since that day a teacher walked into the room just too late for us to abort the punch-line of a dirty joke. Human beings are social animals, and learning these things is part of being human.

So why do we blithely ignore the fact that there’s a third party involved in all our supposedly “friend-to-friend” interactions in “social networks”? And why are we surprised that their interests do not necessarily coincide with ours? We want to share gossip and photos with our friends, and the third party wants to monetize that relationship.

OK – so please, when you encounter the phrase “social networking”, try replacing it with “networked interaction masquerading as social interaction”. My hope is that that will encourage you to bear in mind that, despite all appearances to the contrary, you are engaged in something which does not follow the normal rules of face-to-face personal interaction. That should be healthier for your privacy and, over time, who knows – it might even encourage networked interaction sites to be a little more up-front about the hidden side of what they are offering.

Slight change of role

I thought I would post an update to let everyone know about a gentle shift in direction…

As readers of this blog are probably aware, for the last 10 months I’ve been the Liberty Alliance’s Director for Privacy and Public Policy. One of my duties was to define an equivalent post for the Kantara Initiative; I also helped recruit and establish Kantara’s Privacy and Public Policy Work Group (P3WG), which I have chaired since its foundation.

I’m happy to say that, since it was chartered in June 2009, P3WG has rapidly attracted a growing and influential core of members – currently up to 52. The Work Group has a weekly conference call, and held face-to-face sessions at the Kantara plenaries in Las Vegas in September, with strong representation from our Japanese and New Zealand colleagues as well as the US public sector.

The Group’s work has also been evolving well: we produced ad hoc responses to the US General Services Administration (GSA) ICAM team on their Identity Assurance plans, and the Public Voice civil society organisation on their Madrid Declaration on Privacy. Perhaps more important: as well as the ad hoc work, we have been assembling and prioritising our candidate work items for the coming months, so that we have a clear and robust structure for meeting the Group’s strategic objectives.

As the Kantara Initiative moves forward, I have been invited to take up the equivalent Director of Privacy and Public Policy post I mentioned earlier, and I accepted without hesitation. I decided that that was a good time to step down as Chair of the P3WG, so that we can maintain a clear separation of duties between those two roles and avoid any perception of conflict of interest.

Accordingly, P3WG balloted for a new Chair, and I am delighted to say that the post will be filled by Dr Abbie Barbir, whose professional background and experience in OASIS and ITU-T make him an outstanding person for the job. Abbie will enjoy the excellent assistance of Jeff Stollman and Darrell Shull, both of whom have said they will carry on as officers of the Group. I’m handing over the reins of P3WG to a strong and extremely competent team, which is very gratifying.

Of course, as DPP for Kantara, I will continue to have a very direct interest in P3WG’s work, and look forward to taking that forward with the team. The work programme is strong, relevant and innovative, and exciting times lie ahead.

Gartner acquisition of Burton Group

I was emailing a couple of people involved in this, and unsurprisingly they said that the phrase “living in interesting times” was getting a real work-out in their office today.

As it happens, over the last 14 months I have worked with some of the identity management specialists from both firms; late in 2008 I was invited to run a Privacy Summit on the eve of the Gartner IAM symposium in Orlando (as far as I know, the first time a third party had been asked to do something like that). Then, last summer, I had the opportunity to speak at Burton Catalyst in San Diego – which obviously also involved some detailed discussions with the identity team there.

Having attended both events, and had a level of contact with some of the analysts in each case, here’s my forecast for the merger of the two: handled right, it could deliver some really compelling synergy. Again, this is based solely on my partial contact with the two firms, but my superficial and subjective assessment is that the Gartner IDM team are strong on the “implementation and governance” stages of the IDM lifecycle, while the Burton team – while by no means ignoring that phase – really shine in the conceptual and system design phase.

Of course there’s overlap, and of course there are differences of perspective and differences of opinion… for instance, there are some interesting internal meetings ahead as Bob Blakley and Andrea DiMaio get together to thrash out the corporate line on online privacy ;^)

But that, of course, is also where some of the best conceptual work comes from. If the merger of the two teams leads to better insights into how to progress from concept, through implementation to governance, we all win…

Here’s hoping that’s how it plays out.