Insalubrious premisses

From time to time, someone points out that the security of most ID/password-based website authentication actually depends on the (quite unrelated) security of the user’s primary email account… in the sense that that’s where most password reset confirmation messages get sent.

In fact, it’s worth assessing just how many sites you could be locked out of if, say, you could no longer access the email account(s) you specified when you registered with them.

I see the folks at Facebook have thought that problem through, though: one of the options you have on Facebook is to recover access to your account if (i) your Facebook password has been lost/compromised; (ii) the email account you registered with has also been hacked.

Under that unhappy combination of circumstances, you will be relieved to know that all is not lost… you can ask for your password reset confirmation to be sent to a completely new email address. To do this, you will need to know:

  • the email address you originally registered;
  • your full name on the Facebook account;
  • your date of birth;
  • the URL of your hacked profile.

It doesn’t take much research to conclude that those four pieces of corroborative data are freely published by quite a lot of users, either elsewhere or on their Facebook profile itself, and that this procedure is therefore also open to anyone sufficiently motivated to hijack your Facebook account.
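To make the weakness concrete, here’s a minimal sketch of that recovery logic, written by me in Python with entirely hypothetical names; it is emphatically not Facebook’s code. The point it illustrates is that every “secret” the check relies on is mere knowledge, and knowledge which many users publish:

```python
# A minimal sketch (hypothetical field names throughout; this is NOT
# Facebook's actual logic) of a recovery check gated on four facts.

PUBLIC_FACTS = ("email", "full_name", "date_of_birth", "profile_url")

def recovery_challenge_ok(account, answers):
    """Grant a password reset to a brand-new email address if the
    requester can repeat four facts about the account holder."""
    return all(answers.get(f) == account.get(f) for f in PUBLIC_FACTS)

# The flaw: every factor here is knowledge, and often *published*
# knowledge, so passing the check proves research, not identity.
account = {"email": "victim@example.com", "full_name": "A. Victim",
           "date_of_birth": "1980-01-01", "profile_url": "fb.com/a.victim"}
print(recovery_challenge_ok(account, dict(account)))  # True: scraped data suffices
```

A knowledge-based factor is only as strong as the secrecy of the knowledge behind it; a possession-based factor (say, a code sent to a registered device) would not fail in this particular way.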

Perhaps this is the design you end up with, if you start from the premise that “privacy is no longer the social norm”.

Identity, Privacy and the Post-bureaucratic Age

I was at a fascinating (if chilly) conference yesterday to hear a gratifyingly diverse bunch of panellists express their views on the so-called “Post-Bureaucratic Age” (PBA… Twitter hash-tag #pbage in case that’s how you prefer to get fed). A strong “draw” for the morning session was the appearance of David Cameron to set out how a Post-Bureaucratic strategy could help square the circle of improving public services while wrestling with colossal budget constraints.

I’m not going to try and define what the PBA is, or re-visit yesterday’s speeches – there’s plenty of coverage of that online (here’s as good a starting-point as any, and William Heath’s comments here set out the identity and privacy landscape with commendable eloquence).

What I will try and do, though, is look at this through the eyes of a privacy/policy technologist well-used to promises that the latest technology will fix all our ills. Here are some of the pitfalls I think this policy strategy will need to negotiate if it is to deliver the kind of benefits being cited in its support.

First, I should make it clear that I agree with many of the concepts. The PBAge is to be founded on the idea that public data should be more visible; processes more transparent; decision-making more accountable. Arguing against that is a bit like making the case against motherhood and apple pie.

However, whatever the potential benefits of a PBA approach, I think it’s absolutely critical that we understand the basis on which we’re opting for it, if we do. For instance, I suspect that a large part of the purely emotional appeal of PBA is in reaction to the undeniably adverse perceptions of “Bureaucratic Age” programmes such as the National Identity Scheme, ContactPoint, Connecting for Health, the Independent Safeguarding Authority, the Digital Economy Bill and so on. In short, we need to be very clear-headed about whether we’re signing up to PBA “on the rebound”. That’s not a healthy basis for a long-term relationship…

Second, there’s the risk of believing our own hype about this sparkly new technology. Too often, this takes the form of a stampede to novelty, with thoughts of security/privacy only after it’s too late. As regular readers can imagine, I do not subscribe to Mr Zuckerberg’s recent contention that “privacy is no longer the social norm”. It will be a strange day indeed when the last word on “privacy as a social norm” comes from a 25-year-old white male American billionaire college drop-out.

So, by all means consider ways of making household energy consumption figures more visible… but think equally carefully about whether there might be some issues of personal privacy involved in printing your household energy numbers on your neighbour’s bill (as was suggested yesterday).

If the concepts are poorly implemented, they encourage ‘gaming’ of the system. After all, public sector performance/league tables were also proposed with the best apparent intentions of ‘transparency’, ‘choice’ and ‘accountability’ in mind, yet yesterday Prof McGurk’s plea to manage outcomes, not processes, could not have been more heartfelt.

If the implementations are badly deployed, they lead to perverse consequences. For instance, current guidance on the implementation of ISA “safeguarding” measures appears to require employers to compile – and disclose – dossiers which intrude substantially into the private lives and personal history of their employees, regardless of the provenance or reliability of that information, or of its relevance either to the employee’s job or to the stated purpose of protecting vulnerable third parties.

Third, governance. All the use-cases proposed yesterday were conspicuously cuddly. Energy-saving; better public services; reducing public debt; planning and funding the development of your own neighbourhood. Bless. Here are some things which weren’t mentioned: counter-terrorism; anti-money-laundering; immigration and border controls; taxation; reduction of benefit fraud… My point is that, as well as empowering us to do the things we would like to do, the PBAge will also have to include those things which we don’t like, but which the state occasionally has to do to us.

In terms of identity, privacy and personal data, the way I usually describe this is as follows:

Think of “the government” as having a number of sources of information about you, by virtue of the various relationships it has with you as a citizen.

The PBAge suggests that you, as a citizen, can benefit from better services if you or someone else is able to join together various now-to-be-published sources of data. Presumably, though, it will still be important that this only happens as and when you consent to it (whether to the data-joining itself, or to your making use of the resulting service).

On the other side of the ‘cloud’ are the less cuddly things government is expected to achieve: protecting public funds and services against fraud, delivering effective law enforcement, preserving national security, and so on. Many of these things, let’s face it, are often based on the non-consensual joining-together of information about you… and some of that information comes from exactly the same sources as are to be used for the cuddlier purposes of the PBAge.

In other words, what is required is the ability to manage two different and conflicting views over what may often be the same data. That implies a level of data management expertise which I don’t see in many places in the commercial sector, let alone in the public sector.
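To sketch why that’s hard: the same record store has to support both a consent-gated view and a statutory, non-consensual view, and keeping the two honestly separated is a data-management discipline in its own right. The toy Python below is purely illustrative; the purpose labels and consent model are my invention, not any real government schema:

```python
# Illustrative only: one shared record store serving two conflicting
# views. Purpose labels and the consent model are invented.

RECORDS = [
    {"citizen": "alice", "source": "energy_usage",
     "consented_purposes": {"better_services"}},
    {"citizen": "alice", "source": "tax_return",
     "consented_purposes": set()},
]

def view(purpose, statutory=False):
    """Consent-gated view for the 'cuddly' services; the statutory
    view (fraud, law enforcement) joins the same data regardless."""
    return [r for r in RECORDS
            if statutory or purpose in r["consented_purposes"]]

print(view("better_services"))                   # only what alice agreed to
print(view("fraud_detection", statutory=True))   # everything, consent or not
```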

I do not think we yet understand what the appropriate governance regime for data (including personal information) in the PBAge should be… bearing in mind that there’s no reason to suppose the current governance regime is fit for its current purpose, let alone applicable to the emerging requirements of the future.

With that in mind, whither PII?

Current notions of online privacy are, for the most part, based on legislation which seeks to list a finite number of pieces of data, which are to be considered “personal”, and which are therefore to benefit from special protection. Nowhere in those lists will you find pieces of data like “mobile network presence”, “time-stamp”, “network cell location”, “search history”, “browser profile”, “browser history”. And yet there is ample evidence to suggest that these are more than enough to compromise the individual’s privacy, anonymity and indeed self-determination. (I also hear, incidentally, that from the pattern of energy usage reported by a smart meter, the electricity company can tell the make, model and age of your washing machine… How long before the manufacturers start offering money for that data so that you can be spammed with “it’s time to replace your washer” messages…?).
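A toy example makes the point. Suppose the statutory list protects name and address but says nothing about cell locations or timestamps; intersecting just one timestamped observation with a behavioural profile can already single a person out of a crowd. The population below is invented for illustration:

```python
# Invented population: none of these fields appear on statutory
# "personal data" lists, yet together they identify.

population = {
    "user_a": {"cells": {"cell_17", "cell_23"}, "active_hours": {8, 9, 22}},
    "user_b": {"cells": {"cell_17", "cell_41"}, "active_hours": {8, 13}},
    "user_c": {"cells": {"cell_23", "cell_41"}, "active_hours": {23}},
}

def consistent_with(seen_cell, seen_hour):
    """Who matches a single timestamped cell-tower observation?"""
    return [u for u, d in population.items()
            if seen_cell in d["cells"] and seen_hour in d["active_hours"]]

# One observation pair shrinks three candidates to one, with no name,
# address or other listed "personal" datum involved:
print(consistent_with("cell_23", 22))  # ['user_a']
```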

Among the questions I think PBAge proponents are going to have to address with some urgency are these: in an age where “all data is personally identifiable”, how is personal privacy to be managed? How are factors such as informational self-determination to be incorporated in the resulting policies? And, echoing Prof McGurk again… how are we to manage data, but measure outcomes?

Google Buzz: what’s the appropriate reaction?

I must admit, I’m so undecided over this one that two hands are no longer sufficient…

On the one hand, I am actually grateful to Google for the fact that their botched implementation and deployment of Buzz has brought some of the flaws of “social networking” to the fore. There’s intelligent, rational and perceptive comment from John Pescatore at Gartner, for instance.

The term “social networking” perpetuates a deception in which users, myself included, have been too happy to collude… the idea that “social networking” operates by the same rules as face-to-face personal interaction, and that there is no ‘third party in the room’ when you interact with your online buddies. The sooner that particular emperor is revealed to be naked, the better.

On the other hand, it frustrates and even angers me that Google have not, among the various changes they have made since the launch, done the single, simple thing which would most clearly illustrate that they acknowledge there’s a problem: make Buzz an explicit opt-in service. I see from this article in the Guardian that that is what EPIC have asked the FTC to rule – and rightly so.

On the third hand… I still can’t decide whether Google’s approach here is just naive, or breathtakingly disingenuous. I mean, imagine you read the following in a news article:

“We only released the crocodiles into the primary school building a week ago. We’ve already made a few changes based on user feedback, and we have more improvements in the works. We look forward to hearing more suggestions and will continue to improve the primary-school crocodile experience, with user transparency and control top of mind.”

Is there nothing in that which might ring a few alarm bells, in terms of risk, duty of care, and considering (in advance…) whether you’re about to do something with irreversible consequences?

[Here’s the actual Google statement, for clarity’s sake: “Buzz was launched only a week ago. We’ve already made a few changes based on user feedback, and we have more improvements in the works. We look forward to hearing more suggestions and will continue to improve the Buzz experience with user transparency and control top of mind.”]

One of the commenters on John Pescatore’s blog expressed the view that Google had forgotten “the cardinal rule of social media: Assume anything your members do is private until they tell you otherwise”.

With respect to that commenter, I don’t think that is the cardinal rule at all. I think the game operates more like this:

  • Rule One: Maintain the illusion that the user is interacting only with their chosen parties; as long as you don’t spook them, users will be happy to connive at this pretence.
  • Rule Two: Under no circumstances force the user to acknowledge that there’s a third party in the room… whether that’s you, as the “social network” provider, or the others with whom you exchange data about the users.
  • Rule Three: Keep calling it “social networking”, to reinforce the impression that it operates by the same rules as face-to-face interaction between friends. (It doesn’t, but see Rule One).

Privacy by design, privacy by default

Well, the online reactions to Google’s Buzz innovation (“organising the world’s address books and making them visible”?) continue. Among others –

Sharon Machlis gives some specific examples of why it’s a bad idea to treat email as if it were the same as other ‘networked interaction’ tools;

Harriet Jacobs gives a crucial perspective on the dangers of promiscuous data-sharing (warning: contains lively language and references to sexual violence);

Tom Krazit provides an update on some of Google’s first tweaks in reaction to the negative feedback.

So it’s not that Google is doing nothing. However, I still haven’t seen the most important change – namely, from “default opt-in by presumption” to a proper, explicit opt-in based on informed consent.

I blogged a couple of months ago in response to Eric Schmidt’s ill-chosen words on privacy, and noted that they came particularly badly from the CEO of a company with such global reach and power, and the ability to have such a fundamental effect on individuals’ privacy.

I would say the same thing about the Buzz implementation. It is extremely unhealthy for the online eco-system as a whole (users, service providers, developers and so on) if a stakeholder such as Google demonstrates a willingness to ignore privacy fundamentals such as informed consent, explicit opt-in, and ‘opt-out by default’.

Please, Google: you’re never slow to trumpet the philanthropic potential of your innovations; live up to that PR by setting a better example: privacy by design, privacy by default.

WARNING! LARK’S VOMIT

If, like me, you assumed that saying “No thanks” to the invitation to try Google Buzz meant that you had opted out, you should definitely read this guidance from Robert Westervelt.

For non-Python fans, the “lark’s vomit” reference originates in Monty Python’s “Crunchy Frog” sketch about the highly dubious “Wizzo Chocolate Assortment”:

Inspector Praline: I’m not interested in your sales! I have to protect the general public! Now what about this one: […] Number five Ram’s Bladder Cup. What sort of confection is this?
Mr Milton: We use choicest juicy chunks of fresh Cornish ram’s bladder, emptied, steamed, flavoured with sesame seeds, whipped into a fondue and garnished with lark’s vomit.
Praline: Lark’s vomit?
Milton: Correct.
Praline: Well it don’t say nothing about that here.
Milton: Er, yes it does… on the bottom of the box, after monosodium glutamate.
Praline: Well, I hardly think this is good enough. I think it’d be more appropriate if the box bore a great red label: “Warning! Lark’s vomit!”

No such thing as bad publicity…?

Well, Google just doesn’t seem to be able to stay out of the spotlight at the moment. I’m not going to try and comment on how much of the adverse attention is merited, as opposed to feeding-frenzy – but having just heard Google’s Alma Whitten present at the Trust in the Information Society conference this morning, some of the news items did have a little more resonance than usual.

First, a quick note to the Marketing Dept at Google: Alma has certainly taken the corporate messaging on board. The phrase “organizing the world’s information and making it accessible and useful” occurred more than once, believe me.

Hence the first hum of resonance, when I read that a number of bloggers on Google-hosted services have had their blogs summarily removed, archives deleted etc., for alleged violation of terms of service relating to music copyright. According to the article, some of the blogs were in fact only publishing music with the consent (and in some cases outright collaboration) of the artists and/or publishers.

Far be it from me to play devil’s advocate, but one reply might be that Google is only doing its best to winnow out offenders in the interests of copyright holders. The trouble is, that sits ill with Monday’s Radio Five Live programme about the Google Books project; in that broadcast, several authors made the argument that their work had appeared without their consent on the Google Books project (copyright page and all), and expressed their fundamental objection to the notion that they should be required to opt out explicitly if they wished their (prior and existing) assertions of copyright to be honoured.

So, is this just Google being “damned if they do and damned if they don’t”? Or does this illustrate that if you want to either re-publish copyright works or prosecute copyright violations on the scale to which Google aspires, a blanket approach will always fail?

And so to resonance number three: those authors who objected to the implicit opt-in are unlikely, on that basis, to be signing up for Google’s new Buzz service. Molly Wood gives a damning analysis, here, of several of the ways in which Buzz, aggressively and by default, assumes a comprehensive opt-in on the part of the user. From what she says, in a number of instances that assumption goes well beyond the reasonable[*].

As I say, I’m not going to pass judgement on whether this is just the media laying into their favourite whipping-boy of the moment, but I think it’s legitimate to ask how these anecdotes contrast with some of the stated aims, goals and indeed values which Google professes.

As I heard repeatedly this morning, Google wants to innovate and satisfy user requirements. On the face of it, who could argue with that? But innovation is not sufficient justification for compromising users’ privacy. And in that respect, there is an absolutely critical difference between satisfying the requirements which users apparently express via whatever privacy-related options they may be offered, and satisfying the requirements which lead to the privacy outcomes which users would choose if they were in a position to do so.

In a piercing analysis, Mireille Hildebrandt of the Vrije Universiteit Brussel pointed out that a user’s behaviour (and the data it implicitly discloses) often reveals a far more accurate picture of their real attitudes than their answers to questions about what they want. Why’s that relevant here? Well, because if you give users the option of not explicitly disclosing personal information (for instance, name, address and so on), but you collect (and even anonymise) behavioural data without giving the user the ability to opt out of that, you have created an illusion of privacy-respecting choice while in fact providing no such thing.
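In code, that illusion of choice can be as blunt as this (a deliberately unflattering sketch of my own; all the names are hypothetical):

```python
# Hypothetical sketch of the "illusion of choice" pattern: the user's
# privacy setting governs explicit data only, never behavioural capture.

class Service:
    def __init__(self, share_profile):
        self.share_profile = share_profile   # the choice the user is shown
        self.behaviour_log = []              # the collection they can't refuse

    def handle(self, user, action):
        record = {"action": action}
        if self.share_profile:
            record["user"] = user            # explicit data: genuinely opt-in
        # Behavioural capture happens regardless of the privacy setting:
        self.behaviour_log.append(record)

svc = Service(share_profile=False)
svc.handle("alice", "searched: clinics near cell_23")
print(svc.behaviour_log)  # nominally 'anonymous', yet the pattern discloses plenty
```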

In other words, users’ behaviour may be at odds both with their stated preferences and their best interests. It might sound as though I’m calling for data custodians in general to adopt a very paternalistic attitude towards user privacy – but I’m not. What I am calling for is a much more mature approach to the responsibilities which data custodians take on, when they gather data which users don’t even know they are disclosing, and which reveal things about the user which they may not even be aware of.

I’ve posted and ranted elsewhere about the inadequacy of the term ‘ownership’ to describe our relationship to data about ourselves, and that is as true here as anywhere. I’m not calling for data custodians to take (or relinquish) ‘ownership’ of such data, and I’m not calling for ‘ownership’ of it to be assigned to the data subject. But I do think there needs to be a lot more transparency in the following areas (see the sketch after this list):

– what data is collected about users, either explicitly or (more important) implicitly;
– what categorisations and inferences are made on the basis of that data;
– what actions are taken, which affect one user, on the basis of inferences from data about other users;
– what rights a user is assumed to have concerning data about them;
– what responsibilities a data custodian is assumed to have regarding that data and those rights.
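As a straw man for discussion, those five areas could even be expressed as a machine-readable disclosure record which the custodian publishes alongside its privacy policy. The structure below is my own invention, not an existing standard:

```python
# Hypothetical "transparency record" covering the five areas above.
# No existing standard or regulation is implied.

from dataclasses import dataclass

@dataclass
class TransparencyRecord:
    collected_explicitly: list   # data the user knowingly supplied
    collected_implicitly: list   # behavioural data gathered in passing
    inferences: list             # categorisations derived from that data
    cross_user_actions: list     # actions affecting this user, based on others' data
    user_rights: list            # what the user may inspect, correct, delete
    custodian_duties: list       # obligations the custodian accepts in return

record = TransparencyRecord(
    collected_explicitly=["name", "email"],
    collected_implicitly=["search history", "network cell location"],
    inferences=["likely homeowner", "weekday commuter"],
    cross_user_actions=["shown offers targeted at 'similar households'"],
    user_rights=["inspect", "correct", "object"],
    custodian_duties=["publish this record", "honour objections"],
)
print(record.collected_implicitly)
```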

Admittedly, this is a long way from anything you would find in the current generation of Privacy Statements and Privacy Policies (let alone privacy laws) – but that doesn’t mean we shouldn’t be seeking to improve as we eye the next generation of privacy measures.

As the European Commission, the OECD and other bodies review the current set of Data Protection Principles, these are among the questions they should be seeking to address. Frankly, the initial reaction suggests that Google Buzz has set off in entirely the other direction.

The mission statement of “organising the world’s data…” is a goal which sets Google up to have a lot of stakeholders – and individually or in aggregate, those stakeholders have rights and expectations which deserve to be satisfied. “The world’s data” is not a privacy-neutral concept, and “organising it” is even less privacy-neutral.

The danger of favouring commercial objectives over the other stakeholder rights is that it creates the impression of selling out, rather than shouldering the responsibility of satisfying the non-commercial stakeholders to the appropriate degree. Rudyard Kipling described “Power without responsibility” as “the prerogative of the harlot”. That’s not an alluring brush with which to be tarred…

*[A further brief update: the introduction of Buzz, in its simplest form, consists of a ‘splash screen’ as you log into your gmail account. This offers you the choice of “Trying Buzz” or “No thanks, just take me to my inbox”. Simple enough – except that if you choose the latter, Gmail will turn Buzz on anyway. I’m at a loss to understand why that is the appropriate unilateral action for Google to take, especially after clearly giving the impression that you have already opted out.

It cannot be good practice that we have to rely on third-party sources to instruct users on how to disable the Buzz service. Nor, surely, can it be good practice to turn the service on, by default, before the user even gets a chance to turn it off.]
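For what it’s worth, the anti-pattern being complained about reduces to something this blunt; the code is purely illustrative pseudo-logic of mine, not Google’s:

```python
# Hypothetical sketch: a dialog that records the user's refusal and
# enables the feature anyway.

def show_buzz_splash(choice, account):
    """The splash screen offers a choice... and then disregards it."""
    account["declined_buzz"] = (choice == "no_thanks")  # the answer is recorded...
    account["buzz_enabled"] = True                      # ...and overridden anyway

account = {}
show_buzz_splash("no_thanks", account)
print(account)  # {'declined_buzz': True, 'buzz_enabled': True}
```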